https://en.wikipedia.org/wiki/Goursat%27s%20lemma
Goursat's lemma, named after the French mathematician Édouard Goursat, is an algebraic theorem about subgroups of the direct product of two groups. It can be stated more generally in a Goursat variety (and consequently it also holds in any Maltsev variety), from which one recovers a more general version of Zassenhaus' butterfly lemma. In this form, Goursat's theorem also implies the snake lemma. Groups Goursat's lemma for groups can be stated as follows. Let G, G′ be groups, and let H be a subgroup of G × G′ such that the two projections p1 : H → G and p2 : H → G′ are surjective (i.e., H is a subdirect product of G and G′). Let N be the kernel of p2 and N′ the kernel of p1. One can identify N as a normal subgroup of G, and N′ as a normal subgroup of G′. Then the image of H in G/N × G′/N′ is the graph of an isomorphism G/N ≅ G′/N′. One then obtains a bijection between: subgroups of G × G′ which project onto both factors, and triples (N, N′, f) with N normal in G, N′ normal in G′ and f an isomorphism of G/N onto G′/N′. An immediate consequence of this is that the subdirect product of two groups can be described as a fiber product, and vice versa. Notice that if H is any subgroup of G × G′ (the projections p1 and p2 need not be surjective), then the projections from H onto p1(H) and p2(H) are surjective, so one can apply Goursat's lemma to H as a subgroup of p1(H) × p2(H). To motivate the proof, consider the slice S = {g} × G′ in G × G′, for an arbitrary g ∈ G. By the surjectivity of the projection map to G, this slice has a non-trivial intersection with H. Essentially, this intersection represents exactly one particular coset of N′. Indeed, if (g, a) and (g, b) both lie in H, then H being a group, we get that (g, a)⁻¹(g, b) = (e, a⁻¹b) ∈ H, and hence a⁻¹b ∈ N′. It follows that a and b lie in the same coset of N′. Thus the intersection of H with every "horizontal" slice isomorphic to G′ is exactly one particular coset of N′ in G′. By an identical argument, the intersection of H with every "vertical" slice isomorphic to G is exactly one particular coset of N in G. All the cosets of N and N′ are present in the group H, and by the above argument there is an exact 1:1 correspondence between them. The proof below further shows that the map gN ↦ g′N′ is an isomorphism.
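In symbols, with p1, p2 the two projections and N, N′ the two kernels as above, the conclusion of the lemma can be displayed as follows (a transcription of the statement in standard notation, not a new result):

```latex
% H \le G \times G' with both projections surjective,
% N = \ker p_2 \trianglelefteq G, \qquad N' = \ker p_1 \trianglelefteq G'.
% The image of H in the product of quotients is the graph of an isomorphism:
\{\, (gN,\, g'N') : (g, g') \in H \,\}
  \;=\; \operatorname{graph}(f),
\qquad
f \colon G/N \xrightarrow{\;\sim\;} G'/N'.
```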
Proof Before proceeding with the proof, N and N′ are shown to be normal in G × {e′} and {e} × G′, respectively. It is in this sense that N and N′ can be identified as normal in G and G′, respectively. Since p2 is a homomorphism, its kernel N is normal in H. Moreover, given g ∈ G, there exists (g, g′) ∈ H, since p1 is surjective. Therefore, N is normal in G × {e′}, viz: (g, g′)(n, e′)(g, g′)⁻¹ = (gng⁻¹, e′) ∈ N. It follows that N is normal in G since G × {e′} ≅ G. The proof that N′ is normal in G′ proceeds in a similar manner. Given the identification of G with G × {e′}, we can write G and gN instead of G × {e′} and (g, e′)N, g ∈ G. Similarly, we can write G′ and g′N′, g′ ∈ G′. On to the proof. Consider the map H → G/N × G′/N′ defined by (g, g′) ↦ (gN, g′N′). The image of H under this map is {(gN, g′N′) : (g, g′) ∈ H}. Since the projection of H onto the first coordinate is surjective, this relation is the graph of a well-defined function G/N → G′/N′ provided that g1N = g2N implies g′1N′ = g′2N′ for all (g1, g′1), (g2, g′2) ∈ H, essentially an application of the vertical line test. Since g1N = g2N (more properly, g1(N × {e′}) = g2(N × {e′})), we have g2⁻¹g1 ∈ N. Thus (g2⁻¹g1, e′) ∈ H, whence (g2, g′2)⁻¹(g1, g′1)(g2⁻¹g1, e′)⁻¹ = (e, g′2⁻¹g′1) ∈ H, that is, g′2⁻¹g′1 ∈ N′, so g′1N′ = g′2N′. Furthermore, for every (g1, g′1), (g2, g′2) ∈ H we have (g1g2, g′1g′2) ∈ H, so the function sends g1g2N to g′1g′2N′. It follows that this function is a group homomorphism. By symmetry,
https://en.wikipedia.org/wiki/Reuben%20Hersh
Reuben Hersh (December 9, 1927 – January 3, 2020) was an American mathematician and academic, best known for his writings on the nature, practice, and social impact of mathematics. Although he was generally known as Reuben Hersh, late in life he sometimes used the name Reuben Laznovsky in recognition of his father's ancestral family name. His work challenges and complements mainstream philosophy of mathematics. Education After receiving a B.A. in English literature from Harvard University in 1946, Hersh spent a decade writing for Scientific American and working as a machinist. After losing his right thumb while working with a band saw, he decided to study mathematics at the Courant Institute of Mathematical Sciences. In 1962, he was awarded a Ph.D. in mathematics from New York University; his advisor was P. D. Lax. He was affiliated with the University of New Mexico from 1964, where he was professor emeritus. Academic career Hersh wrote a number of technical articles on partial differential equations, probability, random evolutions, and linear operator equations. He was the co-author of four articles in Scientific American and 12 articles in the Mathematical Intelligencer. Hersh was best known as the co-author, with Philip J. Davis, of The Mathematical Experience (1981), which won a National Book Award in Science. Hersh and Martin Davis won the 1984 Chauvenet Prize for their Scientific American article on Hilbert's tenth problem. Hersh advocated what he called a "humanist" philosophy of mathematics, opposed both to Platonism (so-called "realism") and to its rivals nominalism/fictionalism/formalism. He held that mathematics is real, and that its reality is social-cultural-historical, located in the shared thoughts of those who learn it, teach it, and create it. His article "The Kingdom of Math is Within You" (a chapter in his Experiencing Mathematics, 2014) explains how mathematicians' proofs compel agreement, even when they are inadequate as formal logic.
He sympathized with the perspectives on mathematics of Imre Lakatos, and of George Lakoff and Rafael Núñez (Where Mathematics Comes From, Basic Books). Books 1981, Hersh and Philip J. Davis. The Mathematical Experience. (Mariner Books, 1999). 1986, Hersh and Philip J. Davis. Descartes' Dream: The World According to Mathematics. (Dover, 2005). 1997. What Is Mathematics, Really? Oxford Univ. Press. 2006, edited by Hersh. 18 Unconventional Essays on the Nature of Mathematics. Springer Verlag. 2009, Hersh and Vera John-Steiner. Loving and Hating Mathematics. Princeton University Press. 1975, Greenwood, P.; Hersh, R. "Stochastic differentials and quasi-standard random variables", Probabilistic Methods in Differential Equations (Proc. Conf., Univ. Victoria, Victoria, B.C., 1974), pp. 35–62, Lecture Notes in Math., Vol. 451, Springer, Berlin. 2014, Reuben Hersh. Experiencing Mathematics: What do we do, when we do mathematics? American Mathematical Society. 2015, Reuben Hersh. Peter Lax: Mathematici
https://en.wikipedia.org/wiki/Constraint%20algebra
In theoretical physics, a constraint algebra is a linear space of all constraints and all of their polynomial functions or functionals, whose action on the physical vectors of the Hilbert space should be equal to zero. For example, in electromagnetism, Gauss's law is an equation of motion that does not include any time derivatives. This is why it is counted as a constraint, not a dynamical equation of motion. In quantum electrodynamics, one first constructs a Hilbert space in which Gauss's law does not hold automatically. The true Hilbert space of physical states is constructed as a subspace of the original Hilbert space, consisting of those vectors that satisfy the Gauss-law constraint. In more general theories, the constraint algebra may be a noncommutative algebra. See also First class constraints References Quantum mechanics Quantum field theory String theory
https://en.wikipedia.org/wiki/Fevzi%20Davletov
Fevzi Davletov (born 20 September 1972) is a retired Uzbekistani international football defender. Career statistics International Scores and results list Uzbekistan's goal tally first; the score column indicates the score after each Davletov goal. References External links Bio at playerhistory.com Profile at KLISF 1972 births Living people Footballers from Tashkent Soviet men's footballers Uzbekistani men's footballers Uzbekistani expatriate men's footballers Uzbekistan men's international footballers 1996 AFC Asian Cup players 2000 AFC Asian Cup players FC Rubin Kazan players FC Tobol players Navbahor Namangan players FC Qizilqum Zarafshon players FK Andijon players Expatriate men's footballers in Kazakhstan Uzbekistani expatriate sportspeople in Kazakhstan Expatriate men's footballers in Russia Uzbekistani expatriate sportspeople in Russia FC Irtysh Pavlodar players FC Zhetysu players FC Dustlik players Men's association football defenders Asian Games gold medalists for Uzbekistan Asian Games medalists in football Footballers at the 1994 Asian Games Medalists at the 1994 Asian Games FC Megasport players
https://en.wikipedia.org/wiki/Vertical%20tangent
In mathematics, particularly calculus, a vertical tangent is a tangent line that is vertical. Because a vertical line has infinite slope, a function whose graph has a vertical tangent is not differentiable at the point of tangency. Limit definition A function ƒ has a vertical tangent at x = a if the difference quotient used to define the derivative has infinite limit: The first case corresponds to an upward-sloping vertical tangent, and the second case to a downward-sloping vertical tangent. The graph of ƒ has a vertical tangent at x = a if the derivative of ƒ at a is either positive or negative infinity. For a continuous function, it is often possible to detect a vertical tangent by taking the limit of the derivative. If then ƒ must have an upward-sloping vertical tangent at x = a. Similarly, if then ƒ must have a downward-sloping vertical tangent at x = a. In these situations, the vertical tangent to ƒ appears as a vertical asymptote on the graph of the derivative. Vertical cusps Closely related to vertical tangents are vertical cusps. This occurs when the one-sided derivatives are both infinite, but one is positive and the other is negative. For example, if then the graph of ƒ will have a vertical cusp that slopes up on the left side and down on the right side. As with vertical tangents, vertical cusps can sometimes be detected for a continuous function by examining the limit of the derivative. For example, if then the graph of ƒ will have a vertical cusp at x = a that slopes down on the left side and up on the right side. Example The function has a vertical tangent at x = 0, since it is continuous and Similarly, the function has a vertical cusp at x = 0, since it is continuous, and References Vertical Tangents and Cusps. Retrieved May 12, 2006. Mathematical analysis
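As a numerical illustration (a sketch of our own, not from the article), the difference quotient of the real cube-root function at x = 0 grows without bound as h → 0, with the same positive sign from both sides, signalling an upward-sloping vertical tangent rather than a cusp:

```python
import math

def cbrt(x):
    # real cube root, defined for negative x as well
    return math.copysign(abs(x) ** (1 / 3), x)

def difference_quotient(f, a, h):
    # (f(a + h) - f(a)) / h, the quotient whose limit defines f'(a)
    return (f(a + h) - f(a)) / h

# At a = 0 the quotient equals |h|**(-2/3): it blows up as h shrinks,
# and has the same (positive) sign for h > 0 and h < 0.
quotients = [difference_quotient(cbrt, 0.0, h) for h in (1e-2, 1e-4, 1e-6)]
print(quotients)  # increasing without bound
```

For the cusp x**(2/3), the analogous quotients blow up with opposite signs on the two sides of 0.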
https://en.wikipedia.org/wiki/Specht%20module
In mathematics, a Specht module is one of the representations of symmetric groups studied by Wilhelm Specht. They are indexed by partitions, and in characteristic 0 the Specht modules of partitions of n form a complete set of irreducible representations of the symmetric group on n points. Definition Fix a partition λ of n and a commutative ring k. The partition determines a Young diagram with n boxes. A Young tableau of shape λ is a way of labelling the boxes of this Young diagram by the distinct numbers 1, ..., n. A tabloid is an equivalence class of Young tableaux, where two labellings are equivalent if one is obtained from the other by permuting the entries of each row. For each Young tableau T of shape λ, let {T} be the corresponding tabloid. The symmetric group on n points acts on the set of Young tableaux of shape λ. Consequently, it acts on tabloids, and on the free k-module V with the tabloids as basis. Given a Young tableau T of shape λ, let ET = Σσ∈QT sgn(σ){σT}, where QT is the subgroup of permutations preserving (as sets) all columns of T and sgn(σ) is the sign of the permutation σ. The Specht module of the partition λ is the module generated by the elements ET as T runs through all tableaux of shape λ. The Specht module has a basis of elements ET for T a standard Young tableau. A gentle introduction to the construction of the Specht module may be found in Section 1 of "Specht Polytopes and Specht Matroids". Structure The dimension of the Specht module is the number of standard Young tableaux of shape λ. It is given by the hook length formula. Over fields of characteristic 0 the Specht modules are irreducible, and form a complete set of irreducible representations of the symmetric group. A partition is called p-regular (for a prime number p) if it does not have p parts of the same (positive) size. Over fields of characteristic p > 0 the Specht modules can be reducible. For p-regular partitions they have a unique irreducible quotient, and these irreducible quotients form a complete set of irreducible representations.
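The hook length formula mentioned above is straightforward to evaluate; the following sketch (our own, with hypothetical function names) computes the characteristic-0 dimension of the Specht module for a given partition:

```python
from math import factorial

def hook_lengths(la):
    """Hook lengths of the Young diagram of the partition `la`
    (a weakly decreasing list of positive integers)."""
    # conjugate partition: conj[j] = number of rows of length > j
    conj = [sum(1 for row in la if row > j) for j in range(la[0])]
    return [[(la[i] - j) + (conj[j] - i) - 1 for j in range(la[i])]
            for i in range(len(la))]

def specht_dim(la):
    """Number of standard Young tableaux of shape `la`, i.e. the dimension
    of the Specht module, via the hook length formula n! / prod(hooks)."""
    n = sum(la)
    prod = 1
    for row in hook_lengths(la):
        for h in row:
            prod *= h
    return factorial(n) // prod

print(specht_dim([2, 1]))  # 2: the standard representation of S_3
```

For example, the three partitions of 3 give dimensions 1, 2, 1, and the sum of their squares is 6 = 3!, as the completeness statement requires.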
See also Garnir relations, a more detailed description of the structure of Specht modules. References Representation theory of finite groups
https://en.wikipedia.org/wiki/Hereditary%20ring
In mathematics, especially in the area of abstract algebra known as module theory, a ring R is called hereditary if all submodules of projective modules over R are again projective. If this is required only for finitely generated submodules, the ring is called semihereditary. For a noncommutative ring R, the terms left hereditary and left semihereditary and their right-hand versions are used to distinguish the property on a single side of the ring. To be left (semi-)hereditary, all (finitely generated) submodules of projective left R-modules must be projective, and similarly, to be right (semi-)hereditary all (finitely generated) submodules of projective right R-modules must be projective. It is possible for a ring to be left (semi-)hereditary but not right (semi-)hereditary, and vice versa. Equivalent definitions The ring R is left (semi-)hereditary if and only if all (finitely generated) left ideals of R are projective modules. The ring R is left hereditary if and only if all left modules have projective resolutions of length at most 1. This is equivalent to saying that the left global dimension is at most 1; hence the usual derived functors Ext and Tor are trivial in degrees greater than 1. Examples Semisimple rings are left and right hereditary via the equivalent definitions: all left and right ideals are summands of R, and hence are projective. By a similar token, in a von Neumann regular ring every finitely generated left and right ideal is a direct summand of R, and so von Neumann regular rings are left and right semihereditary. For any nonzero element x in a domain R, the map r ↦ xr gives an isomorphism of right modules R ≅ xR. Hence in any domain, a principal right ideal is free, hence projective. This reflects the fact that domains are right Rickart rings. It follows that if R is a right Bézout domain, so that finitely generated right ideals are principal, then R has all finitely generated right ideals projective, and hence R is right semihereditary.
Finally if R is assumed to be a principal right ideal domain, then all right ideals are projective, and R is right hereditary. A commutative hereditary integral domain is called a Dedekind domain. A commutative semi-hereditary integral domain is called a Prüfer domain. An important example of a (left) hereditary ring is the path algebra of a quiver. This is a consequence of the existence of the standard resolution (which is of length 1) for modules over a path algebra. The triangular matrix ring is right hereditary and left semi-hereditary but not left hereditary. If S is a von Neumann regular ring with an ideal I that is not a direct summand, then the triangular matrix ring is left semi-hereditary but not right semi-hereditary. Properties For a left hereditary ring R, every submodule of a free left R-module is isomorphic to a direct sum of left ideals of R and hence is projective. References Ring theory
https://en.wikipedia.org/wiki/Affiliated%20operator
In mathematics, affiliated operators were introduced by Murray and von Neumann in the theory of von Neumann algebras as a technique for using unbounded operators to study modules generated by a single vector. Later Atiyah and Singer showed that index theorems for elliptic operators on closed manifolds with infinite fundamental group could naturally be phrased in terms of unbounded operators affiliated with the von Neumann algebra of the group. Algebraic properties of affiliated operators have proved important in L2 cohomology, an area between analysis and geometry that evolved from the study of such index theorems. Definition Let M be a von Neumann algebra acting on a Hilbert space H. A closed and densely defined operator A is said to be affiliated with M if A commutes with every unitary operator U in the commutant of M. Equivalent conditions are that: each unitary U in the commutant of M should leave invariant the graph of A, defined by G(A) = {(x, Ax) : x ∈ D(A)}; the projection onto G(A) should lie in M2(M); each unitary U in the commutant of M should carry D(A), the domain of A, onto itself and satisfy UAU* = A there; each unitary U in the commutant of M should commute with both operators in the polar decomposition of A. The last condition follows by uniqueness of the polar decomposition. If A has a polar decomposition A = V|A|, it says that the partial isometry V should lie in M and that the positive self-adjoint operator |A| should be affiliated with M. However, by the spectral theorem, a positive self-adjoint operator commutes with a unitary operator if and only if each of its spectral projections does. This gives another equivalent condition: each spectral projection of |A| and the partial isometry in the polar decomposition of A lies in M. Measurable operators In general the operators affiliated with a von Neumann algebra M need not be well-behaved under either addition or composition.
However, in the presence of a faithful semi-finite normal trace τ and the standard Gelfand–Naimark–Segal action of M on H = L2(M, τ), Edward Nelson proved that the measurable affiliated operators do form a *-algebra with nice properties: these are the operators A such that τ(I − E([0, N])) < ∞ for N sufficiently large, where E is the spectral measure of |A|. This algebra of unbounded operators is complete for a natural topology, generalising the notion of convergence in measure. It contains all the non-commutative Lp spaces defined by the trace and was introduced to facilitate their study. This theory can be applied when the von Neumann algebra M is type I or type II. When M = B(H) acting on the Hilbert space L2(H) of Hilbert–Schmidt operators, it gives the well-known theory of non-commutative Lp spaces Lp(H) due to Schatten and von Neumann. When M is in addition a finite von Neumann algebra, for example a type II1 factor, then every affiliated operator is automatically measurable, so the affiliated operators form a *-algebra, as originally observed in the first paper of Murray and von Neumann. In this case M is a von Neumann regular ring: for on the closure of
https://en.wikipedia.org/wiki/Hereditarily%20countable%20set
In set theory, a set is called hereditarily countable if it is a countable set of hereditarily countable sets. Results The inductive definition above is well-founded and can be expressed in the language of first-order set theory. Equivalent properties A set is hereditarily countable if and only if it is countable, and every element of its transitive closure is countable. If the axiom of countable choice holds, then a set is hereditarily countable if and only if its transitive closure is countable. The collection of all h. c. sets The class of all hereditarily countable sets can be proven to be a set from the axioms of Zermelo–Fraenkel set theory (ZF); this set is designated . In particular, its existence does not require any form of the axiom of choice. Constructive Zermelo–Fraenkel set theory (CZF) does not prove the class to be a set. Model theory This class is a model of Kripke–Platek set theory with the axiom of infinity (KPI), if the axiom of countable choice is assumed in the metatheory. If , then . Generalizations More generally, a set is hereditarily of cardinality less than κ if it is of cardinality less than κ, and all its elements are hereditarily of cardinality less than κ; the class of all such sets can also be proven to be a set from the axioms of ZF, and is designated . If the axiom of choice holds and the cardinal κ is regular, then a set is hereditarily of cardinality less than κ if and only if its transitive closure is of cardinality less than κ. See also Hereditarily finite set Constructible universe External links "On Hereditarily Countable Sets" by Thomas Jech Set theory Large cardinals
https://en.wikipedia.org/wiki/Arthur%20Mkrtchyan
Arthur Mkrtchyan (, born on 9 September 1973) is an Armenian football coach and a former defender. He was capped 25 times for the Armenia national team. National team statistics External links 1973 births Living people Footballers from Yerevan Soviet men's footballers Armenian men's footballers Armenia men's international footballers Armenian expatriate men's footballers FC Pyunik players FC Torpedo Moscow players FC Torpedo-2 players PFC Krylia Sovetov Samara players FC Mika players FC Darida Minsk Raion players Expatriate men's footballers in Russia Armenian Premier League players Russian Premier League players Expatriate men's footballers in Belarus Armenian expatriate sportspeople in Russia Armenian expatriate sportspeople in Belarus Men's association football defenders Soviet Armenians Armenian football managers Armenian expatriate sportspeople in Kazakhstan
https://en.wikipedia.org/wiki/Bases%20on%20balls%20per%20nine%20innings%20pitched
In baseball statistics, bases on balls per nine innings pitched (BB/9IP or BB/9) or walks per nine innings (denoted by W/9) is the average number of bases on balls (or walks) given up by a pitcher per nine innings pitched. It is determined by multiplying the number of bases on balls allowed by nine and dividing by the number of innings pitched. It is a measure of a pitcher's control. Leaders All but one of the top 25 single-season leaders in BB/9IP through 2018 pitched in the period 1876–84. George Zettlein was the all-time single-season leader (0.2308 in 1876), followed by Cherokee Fisher (0.2355 in 1876) and George Bradley (0.2755 in 1880). The best single-season modern-day performance was by Carlos Silva (0.4301 in 2005). The all-time career leaders in BB/9IP through 2022 were Candy Cummings (0.4731), Tommy Bond (0.4787), and Al Spalding (0.5114), all of whom played in the 1870s and 1880s. The active career leaders in BB/9IP through 2022 were Corey Kluber (1.9683), Michael Pineda (1.9719), and Hyun Jin Ryu (1.9914). References Pitching statistics
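The computation described above is a one-liner; a sketch (the function name is ours):

```python
def walks_per_nine(walks, innings_pitched):
    """Bases on balls per nine innings pitched: BB * 9 / IP."""
    return 9 * walks / innings_pitched

# e.g. a pitcher who walks 50 batters over 200 innings
print(round(walks_per_nine(50, 200), 2))  # 2.25
```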
https://en.wikipedia.org/wiki/Nerve%20%28category%20theory%29
In category theory, a discipline within mathematics, the nerve N(C) of a small category C is a simplicial set constructed from the objects and morphisms of C. The geometric realization of this simplicial set is a topological space, called the classifying space of the category C. These closely related objects can provide information about some familiar and useful categories using algebraic topology, most often homotopy theory. Motivation The nerve of a category is often used to construct topological versions of moduli spaces. If X is an object of C, its moduli space should somehow encode all objects isomorphic to X and keep track of the various isomorphisms between all of these objects in that category. This can become rather complicated, especially if the objects have many non-identity automorphisms. The nerve provides a combinatorial way of organizing this data. Since simplicial sets have a good homotopy theory, one can ask questions about the meaning of the various homotopy groups πn(N(C)). One hopes that the answers to such questions provide interesting information about the original category C, or about related categories. The notion of nerve is a direct generalization of the classical notion of classifying space of a discrete group; see below for details. Construction Let C be a small category. There is a 0-simplex of N(C) for each object of C. There is a 1-simplex for each morphism f : x → y in C. Now suppose that f: x → y and g : y →  z are morphisms in C. Then we also have their composition gf : x → z. The diagram suggests our course of action: add a 2-simplex for this commutative triangle. Every 2-simplex of N(C) comes from a pair of composable morphisms in this way. The addition of these 2-simplices does not erase or otherwise disregard morphisms obtained by composition, it merely remembers that this is how they arise. In general, N(C)k consists of the k-tuples of composable morphisms of C. 
To complete the definition of N(C) as a simplicial set, we must also specify the face and degeneracy maps. These are also provided to us by the structure of C as a category. The face maps are given by composition of morphisms at the ith object (or removing the ith object from the sequence, when i is 0 or k). This means that di sends the k-tuple to the (k − 1)-tuple That is, the map di composes the morphisms Ai−1 → Ai and Ai → Ai+1 into the morphism Ai−1 → Ai+1, yielding a (k − 1)-tuple for every k-tuple. Similarly, the degeneracy maps are given by inserting an identity morphism at the object Ai. Simplicial sets may also be regarded as functors Δop → Set, where Δ is the category of totally ordered finite sets and order-preserving morphisms. Every partially ordered set P yields a (small) category i(P) with objects the elements of P and with a unique morphism from p to q whenever p ≤ q in P. We thus obtain a functor i from the category Δ to the category of small categories. We can now describe the nerve of the category C as the functor Δo
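As a toy model (our own sketch, taking C to be a small category of Python functions under composition), the face maps on a k-tuple of composable morphisms can be written out directly:

```python
def face(i, morphisms):
    """i-th face map d_i on a k-simplex of the nerve, represented as a
    list [f1, ..., fk] of composable functions (f1 applied first).
    d_0 drops the first morphism, d_k drops the last, and otherwise
    d_i composes the two morphisms meeting at the i-th object."""
    k = len(morphisms)
    if i == 0:
        return morphisms[1:]
    if i == k:
        return morphisms[:-1]
    f, g = morphisms[i - 1], morphisms[i]
    composed = lambda x, f=f, g=g: g(f(x))  # the composite g . f
    return morphisms[:i - 1] + [composed] + morphisms[i + 1:]

# a 2-simplex of the nerve: f then g
f = lambda x: x + 1
g = lambda x: 2 * x
d1 = face(1, [f, g])   # the composite gf, now a 1-simplex
print(d1[0](3))        # 2 * (3 + 1) = 8
```

The degeneracy maps would similarly insert the identity function at position i.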
https://en.wikipedia.org/wiki/List%20of%20communities%20in%20Manitoba%20by%20population
Manitoba has 81 communities, excluding rural municipalities, that have a population of 1,000 or greater according to the 2021 Census of Canada conducted by Statistics Canada. These communities include cities, towns, villages, reserves inhabited by First Nations, a local government district that is urban in nature, designated places, and population centres. A population centre, according to Statistics Canada, is an area with a population of at least 1,000 and a density of 400 or more people per square kilometre. List See also List of census agglomerations in Manitoba List of communities in Manitoba List of municipalities in Manitoba List of population centres in Manitoba Manitoba Geography References Communities
https://en.wikipedia.org/wiki/Scandinavian%20Design%20%28store%29
Scandinavian Design, Inc. was a furniture retailer located in New York City. It was founded in 1955 by Hans Lindblom and his wife Celia, who sold the work of their friend, Swedish designer Bruno Mathsson, under the name Bruno Mathsson Furniture. Over the years more and more designs were added, and the store became Scandinavian Design, Inc., representing many designers and manufacturers from Denmark, Finland and Sweden. Designers at times showcased by Scandinavian Design included Alvar Aalto, Arne Jacobsen, Poul Kjaerholm, Borge Mogensen and Hans J. Wegner. The showroom was first located on East 53rd Street in Manhattan. It later moved to East 59th Street, where it remained until 1998, then moved to 347 Fifth Avenue. The company closed its showroom in 2014. References Furniture retailers of the United States Retail companies established in 1955 1955 establishments in New York City Retail companies disestablished in 2014 2014 disestablishments in New York (state)
https://en.wikipedia.org/wiki/Fej%C3%A9r%27s%20theorem
In mathematics, Fejér's theorem, named after Hungarian mathematician Lipót Fejér, states the following: Explanation of Fejér's Theorem Explicitly, we can write the Fourier series of f as where the nth partial sum of the Fourier series of f may be written as where the Fourier coefficients are Then, we can define with Fn being the nth order Fejér kernel. Then, Fejér's theorem asserts that with uniform convergence. With the convergence written out explicitly, the above statement becomes Proof of Fejér's Theorem We first prove the following lemma: Proof: Recall the definition of , the Dirichlet kernel: We substitute the integral form of the Fourier coefficients into the formula for above. Using a change of variables we get This completes the proof of Lemma 1. We next prove the following lemma: Proof: Recall the definition of the Fejér kernel. As in the case of Lemma 1, we substitute the integral form of the Fourier coefficients into the formula for . This completes the proof of Lemma 2. We next prove the third lemma: This completes the proof of Lemma 3. We are now ready to prove Fejér's theorem. First, let us recall the statement we are trying to prove. We want to find an expression for . We begin by invoking Lemma 2: By Lemma 3a we know that Applying the triangle inequality yields and by Lemma 3b, we get We now split the integral into two parts, integrating over the two regions and . The motivation for doing so is that we want to prove that . We can do this by proving that each integral above, integral 1 and integral 2, goes to zero. This is precisely what we'll do in the next step. We first note that the function f is continuous on [−π, π]. We invoke the theorem that every periodic function on [−π, π] that is continuous is also bounded and uniformly continuous. This means that for every ε > 0 there exists δ > 0 such that |f(x − t) − f(x)| < ε whenever |t| < δ. Hence we can rewrite integral 1 as follows. Because and By Lemma 3a we then get for all n This gives the desired bound for integral 1, which we can exploit in the final step.
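For reference, the main objects in the argument above can be recorded in standard notation (our transcription of the usual definitions, not the article's own displays):

```latex
% Partial sums and Cesàro means of the Fourier series of f:
s_n(f; x) = \sum_{k=-n}^{n} \hat{f}(k)\, e^{ikx},
\qquad
\sigma_n(f; x) = \frac{1}{n+1} \sum_{k=0}^{n} s_k(f; x)
             = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x - t)\, F_n(t)\, dt,
% where the Fejér kernel is the Cesàro mean of the Dirichlet kernels D_k:
F_n(t) = \frac{1}{n+1} \sum_{k=0}^{n} D_k(t)
       = \frac{1}{n+1} \left( \frac{\sin\frac{(n+1)t}{2}}{\sin\frac{t}{2}} \right)^{2}.
% Fejér's theorem: for f continuous and 2\pi-periodic,
\sigma_n(f) \to f \quad \text{uniformly on } [-\pi, \pi].
```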
For integral 2, we note that since f is bounded, we can write this bound as We are now ready to prove that . We begin by writing Thus, By Lemma 3c we know that the integral goes to 0 as n goes to infinity, and since ε is arbitrary, the limit is 0. Hence , which completes the proof. Modifications and Generalisations of Fejér's Theorem In fact, Fejér's theorem can be modified to hold for pointwise convergence. However, the theorem fails in general if we replace the sequence of Cesàro means with the sequence of partial sums: there exist continuous functions whose Fourier series fails to converge at some point. However, the set of points at which the Fourier series of a square-integrable function diverges has to be of measure zero. This fact, called Lusin's conjecture or Carleson's theorem, was proven in 1966 by L. Carleson. We can, however, prove a related corollary, which goes as follows: A more general form of the theorem applies to functions which are not necessarily continuous. Suppose that f is in L1(-π
https://en.wikipedia.org/wiki/Joseph%20Lovering
Joseph Lovering (25 December 1813 – 18 January 1892) was an American scientist and educator. Biography Lovering graduated from Harvard in 1833. In 1838, he was named Hollis Professor of Mathematics and Natural Philosophy at Harvard. He held this chair until 1888, when he was appointed professor emeritus after 50 years' service. He was acting regent of the university (1853–1854) and succeeded Felton as regent. He was director of the Jefferson Physical Laboratory from 1884 to 1888, and was associated with the Harvard College Observatory, especially in the joint observations of the United States and the London Royal Society on terrestrial magnetism. From 1869 to 1873 he served as corresponding secretary, from 1873 to 1880 as vice president, and from 1880 to 1881 as president of the American Association for the Advancement of Science. He contributed to numerous scientific publications, prepared a volume on The Aurora Borealis (1873), and edited a new edition of Professor John Farrar's Electricity and Magnetism (1842). In 1837, several Yale professors (Denison Olmsted, Alexander Twining, and Elias Loomis), along with Edward Herrick, had published papers supporting the existence of an annual meteor storm in August (which peaks around the 9th/10th of the month). Lovering was a strong opponent of this idea. He believed that meteor showers were related to the weather rather than "the Earth in its revolution had encroached upon a nest of meteors". He also did not believe that meteor showers recurred at the same dates annually. Instead, he said, "meteoritic appearances are much more common every night than has been imagined" and "no season of the year is especially provided : that about the same average number can be seen every fair night... an equal and uniform distribution of meteors throughout the year". He was elected as a member of the American Philosophical Society in 1881.
References External links National Academy of Sciences Biographical Memoir 1813 births 1892 deaths American astronomers 19th-century American mathematicians American science writers Harvard University alumni Harvard University faculty Writers from Boston Hollis Chair of Mathematics and Natural Philosophy
https://en.wikipedia.org/wiki/Niccol%C3%B2%20Cacciatore
Niccolò Cacciatore (; 26 January 1770 – 28 January 1841) was an Italian astronomer. Cacciatore was born at Casteltermini, in Sicily. While studying mathematics and physics in Palermo, he became acquainted with Giuseppe Piazzi, head of the Palermo Astronomical Observatory, and became a graduate student assistant at the observatory in 1798. Two years later, in 1800, the year before Piazzi discovered Ceres, Cacciatore was formally put on staff. Cacciatore helped Piazzi compile the second edition of the Palermo Star Catalogue (1814). He did the bulk of the work, in fact heading the project starting in 1807. He also published works on the comets of 1807 and 1819. Cacciatore succeeded Piazzi as director of the Palermo Observatory in 1817. As such, his most notable observation was the discovery of globular cluster NGC 6541 on 19 March 1826. The observatory was attacked, and he was imprisoned, during the Sicilian Revolution of 1820, but he survived to restore the facility and lead it for two more decades. In addition to astronomy, he was an expert on meteorology, and wrote a number of books on the subject. Further, after the political troubles of 1820, he served as a member of the legislature of the Kingdom of the Two Sicilies. Cacciatore was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1837. He married Emmanuela Martini in 1812, with whom he had five children. His son, Gaetano, succeeded him as director of the observatory. Sualocin and Rotanev Alpha and Beta Delphini are a pair of visually unremarkable 4th magnitude stars. When the Palermo Catalogue was published in 1814, the unfamiliar names Sualocin and Rotanev were attached to them. Eventually the Reverend Thomas William Webb, a British astronomer, puzzled out the explanation. Cacciatore's name, Nicholas Hunter in English translation, would be Latinized to Nicolaus Venator. Reversing the letters of this construction produces the two star names. 
They have endured, the result of Cacciatore's little practical joke of naming the two stars after himself. How Webb arrived at this explanation 45 years after the publication of the catalogue is still a mystery. In 2016, the two names were approved as official by the International Astronomical Union (IAU). Works See also James Dunlop Thomas William Webb References Further reading Cacciatore at NGC/IC observers; includes picture For NGC 6541 see Olbers AN #104, "Ein neuer Nebelfleck" AN #113, and Biela AN #120 1770 births 1841 deaths People from the Province of Agrigento 19th-century Italian astronomers Fellows of the American Academy of Arts and Sciences Scientists from Sicily
https://en.wikipedia.org/wiki/Strikeouts%20per%20nine%20innings%20pitched
In baseball statistics, strikeouts per nine innings pitched (K/9, SO/9, or SO/9IP) is the mean number of strikeouts (or Ks) by a pitcher per nine innings pitched. It is determined by multiplying the number of strikeouts by nine, and dividing by the number of innings pitched. To qualify for the all-time list, a pitcher must have pitched at least 1,000 innings, which generally limits the list to starters. A separate list is maintained for relievers with 300 innings pitched or 200 appearances. Leaders The all-time leader in this statistic through 2022 is Chris Sale (11.06). The only other pitchers who have averaged over 10 strikeouts per nine innings are Robbie Ray (11.03), Jacob deGrom (10.96), Yu Darvish (10.70), Max Scherzer (10.69), Randy Johnson (10.61), Stephen Strasburg (10.55), Gerrit Cole (10.45), Kerry Wood (10.32), Pedro Martinez (10.04) and Aaron Nola (10.02). The top three in 2022 were Carlos Rodon (11.98), Shohei Ohtani (11.87), and Gerrit Cole (11.53). Among qualifying relievers, Aroldis Chapman (14.88) was the all-time leader in strikeouts per nine innings through 2020, followed by Craig Kimbrel (14.66), Kenley Jansen (13.25), Rob Dibble (12.17), David Robertson (11.93), and Billy Wagner (11.92). In 2022 Kyle Harrison led the minor leagues with 14.8 strikeouts per 9 innings, the highest rate for a pitcher in the minors (minimum 100 innings) in a season dating back to 1960. Analysis One effect of K/9 is that it may reward or "inflate" the numbers of pitchers with high batting averages on balls in play (BABIP). Two pitchers may have the same K/9 rate despite striking out a different percentage of batters, since one pitcher will pitch to more batters to obtain the same cumulative number of strikeouts. For example, a pitcher who strikes out one batter in an inning, but also gives up a walk or a hit, strikes out a lower percentage of batters than a pitcher who strikes out one batter in an inning without allowing a baserunner, but both have the same K/9. References Pitching statistics
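The formula above is simple arithmetic; a minimal sketch (the function name and the sample stat line are mine, and innings pitched are assumed to be in decimal form rather than baseball's .1/.2 thirds notation, which would need converting first):

```python
def k_per_9(strikeouts: int, innings_pitched: float) -> float:
    """Strikeouts per nine innings: multiply strikeouts by nine,
    then divide by innings pitched."""
    return 9 * strikeouts / innings_pitched

# A hypothetical pitcher with 218 strikeouts over 177.0 innings:
print(round(k_per_9(218, 177.0), 2))  # 11.08
```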
https://en.wikipedia.org/wiki/Discrete%20Poisson%20equation
In mathematics, the discrete Poisson equation is the finite difference analog of the Poisson equation. In it, the discrete Laplace operator takes the place of the Laplace operator. The discrete Poisson equation is frequently used in numerical analysis as a stand-in for the continuous Poisson equation, although it is also studied in its own right as a topic in discrete mathematics. On a two-dimensional rectangular grid Using the finite difference numerical method to discretize the 2-dimensional Poisson equation ∇²u = g (assuming a uniform spatial discretization, Δx = Δy) on an m × n grid gives the following formula: (u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − 4u_{i,j}) / Δx² = g_{i,j}, where 2 ≤ i ≤ m − 1 and 2 ≤ j ≤ n − 1. The preferred arrangement of the solution vector is to use natural ordering which, prior to removing boundary elements, would look like: u = [u_{11}, u_{21}, ..., u_{m1}, u_{12}, u_{22}, ..., u_{m2}, ..., u_{mn}]^T. This will result in an mn × mn linear system: Au = b, where A is block tridiagonal: the diagonal blocks are copies of D, the off-diagonal blocks are copies of I, where I is the m × m identity matrix, and D, also m × m, is the tridiagonal matrix with −4 on its diagonal and 1 on its first super- and subdiagonals. For each equation, the columns of D correspond to a block of m components in u: [u_{1j}, ..., u_{mj}]^T, while the columns of the I blocks to the left and right of each D correspond to other blocks of components within u: [u_{1,j−1}, ..., u_{m,j−1}]^T and [u_{1,j+1}, ..., u_{m,j+1}]^T respectively. From the above, it can be inferred that there are n block columns of m components in u. It is important to note that prescribed values of u (usually lying on the boundary) would have their corresponding elements removed from u and b. For the common case that all the nodes on the boundary are set, we have 2 ≤ i ≤ m − 1 and 2 ≤ j ≤ n − 1, and the system would have the dimensions (m − 2)(n − 2) × (m − 2)(n − 2), where u and b would have dimensions (m − 2)(n − 2) × 1. Example For a 3×3 interior grid (m = 5 and n = 5) with all the boundary nodes prescribed, the system couples each interior u_{ij} to its four neighbors through the stencil above. As can be seen, the boundary u's are brought to the right-hand side of the equation, so b collects Δx²g_{ij} minus the known boundary values adjacent to each interior node. The entire system is 9 × 9, while u and b are 9 × 1. Methods of solution Because A is block tridiagonal and sparse, many methods of solution have been developed to optimally solve this linear system for u. 
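Before turning to solution methods, the block-tridiagonal matrix described above can be sketched with NumPy Kronecker products. This sketch negates the system so the matrix has +4 on the diagonal and is positive definite; the grid size and helper name are also assumptions of the sketch:

```python
import numpy as np

def poisson2d_matrix(n: int) -> np.ndarray:
    """Negated 5-point discrete Laplacian on an n-by-n grid of interior
    nodes (Dirichlet boundary absorbed into the right-hand side).
    A = I (x) D - S (x) I, with D = tridiag(-1, 4, -1) and S the 0/1
    tridiagonal pattern that couples adjacent blocks."""
    I = np.eye(n)
    D = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiag(-1, 4, -1)
    S = np.eye(n, k=1) + np.eye(n, k=-1)                  # adjacent-block pattern
    return np.kron(I, D) - np.kron(S, I)

A = poisson2d_matrix(3)                     # the 9x9 system of the 3x3 example
assert A.shape == (9, 9)
assert np.allclose(A, A.T)                  # symmetric
assert np.all(np.linalg.eigvalsh(A) > 0)    # positive definite
```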
Among the methods are a generalized Thomas algorithm with a resulting computational complexity of O(n²), cyclic reduction, successive overrelaxation with a complexity of O(n^1.5), and fast Fourier transforms, which is O(n log n), with n the number of unknowns. An optimal O(n) solution can also be computed using multigrid methods. Applications In computational fluid dynamics, for the solution of an incompressible flow problem, the incompressibility condition acts as a constraint for the pressure. There is no explicit form available for pressure in this case due to the strong coupling of the velocity and pressure fields. In this condition, by taking the divergence of all terms in the momentum equation, one obtains the pressure Poisson equation. For an incompressible flow this constraint is given by: ∂v_x/∂x + ∂v_y/∂y + ∂v_z/∂z = 0, where v_x is the velocity in the x direction, v_y is the velocity in y and v_z is the velocity in the z direction. Taking the divergence of the momentum equation and using the incompressibility constraint, the pressure Poisson equation is formed, given by: ∇²p = f(ν, V), where ν is the kinematic viscosity of the fluid and V is the velocity vector. The discrete Poisson's equation arises in the theory of Mar
https://en.wikipedia.org/wiki/Admissible%20set
In set theory, a discipline within mathematics, an admissible set is a transitive set A such that ⟨A, ∈⟩ is a model of Kripke–Platek set theory (Barwise 1975). The smallest example of an admissible set is the set of hereditarily finite sets. Another example is the set of hereditarily countable sets. See also Admissible ordinal References Barwise, Jon (1975). Admissible Sets and Structures: An Approach to Definability Theory, Perspectives in Mathematical Logic, Volume 7, Springer-Verlag. Electronic version on Project Euclid. Set theory
https://en.wikipedia.org/wiki/Code%20%28set%20theory%29
In set theory, a code for a hereditarily countable set x is a set E ⊆ ω×ω such that there is an isomorphism between (ω,E) and (X,∈) where X is the transitive closure of {x}. If X is finite (with cardinality n), then use n×n instead of ω×ω and (n,E) instead of (ω,E). According to the axiom of extensionality, the identity of a set is determined by its elements. And since those elements are also sets, their identities are determined by their elements, etc. So if one knows the element relation restricted to X, then one knows what x is. (We use the transitive closure of {x} rather than of x itself to avoid confusing the elements of x with elements of its elements or whatever.) A code includes that information identifying x and also information about the particular injection from X into ω which was used to create E. The extra information about the injection is non-essential, so there are many codes for the same set which are equally useful. So codes are a way of mapping the hereditarily countable sets into the powerset of ω×ω. Using a pairing function on ω (such as (n,k) goes to (n² + 2·n·k + k² + n + 3·k)/2), we can map the powerset of ω×ω into the powerset of ω. And we can map the powerset of ω into the Cantor set, a subset of the real numbers. So statements about hereditarily countable sets can be converted into statements about the reals. Therefore, codes are useful in constructing mice. See also L(R) References William J. Mitchell, "The Complexity of the Core Model", Journal of Symbolic Logic, Vol. 63, No. 4, December 1998, page 1393. Set theory Inner model theory
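The pairing function quoted above is a quadratic form; a small sketch (function name mine) checks that it agrees with the familiar triangular-number formulation (n+k)(n+k+1)/2 + k and is injective on a sample of pairs:

```python
def pair(n: int, k: int) -> int:
    """The article's pairing function: (n,k) -> (n^2 + 2nk + k^2 + n + 3k)/2."""
    return (n * n + 2 * n * k + k * k + n + 3 * k) // 2

seen = {}
for n in range(50):
    for k in range(50):
        # Agrees with the triangular-number form of the Cantor pairing...
        assert pair(n, k) == (n + k) * (n + k + 1) // 2 + k
        # ...and never collides on this sample, as a pairing function must not.
        assert pair(n, k) not in seen
        seen[pair(n, k)] = (n, k)
```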
https://en.wikipedia.org/wiki/Nagata%27s%20conjecture%20on%20curves
In mathematics, the Nagata conjecture on curves, named after Masayoshi Nagata, governs the minimal degree required for a plane algebraic curve to pass through a collection of very general points with prescribed multiplicities. History Nagata arrived at the conjecture via work on the 14th problem of Hilbert, which asks whether the invariant ring of a linear group action on the polynomial ring over some field is finitely generated. Nagata published the conjecture in a 1959 paper in the American Journal of Mathematics, in which he presented a counterexample to Hilbert's 14th problem. Statement Nagata Conjecture. Suppose p_1, ..., p_r are very general points in P² and that m_1, ..., m_r are given positive integers. Then for r > 9 any curve C in P² that passes through each of the points p_i with multiplicity m_i must satisfy deg C > (m_1 + ... + m_r)/√r. The condition r > 9 is necessary: the cases r > 9 and r ≤ 9 are distinguished by whether or not the anti-canonical bundle on the blowup of P² at a collection of r points is nef. In the case where r ≤ 9, the cone theorem essentially gives a complete description of the cone of curves of the blow-up of the plane. Current status The only case when this is known to hold is when r is a perfect square, which was proved by Nagata. Despite much interest, the other cases remain open. A more modern formulation of this conjecture is often given in terms of Seshadri constants and has been generalised to other surfaces under the name of the Nagata–Biran conjecture. References Algebraic curves Conjectures
https://en.wikipedia.org/wiki/Neville%27s%20algorithm
In mathematics, Neville's algorithm is an algorithm used for polynomial interpolation that was derived by the mathematician Eric Harold Neville in 1934. Given n + 1 points, there is a unique polynomial of degree ≤ n which goes through the given points. Neville's algorithm evaluates this polynomial. Neville's algorithm is based on the Newton form of the interpolating polynomial and the recursion relation for the divided differences. It is similar to Aitken's algorithm (named after Alexander Aitken), which is nowadays not used. The algorithm Given a set of n+1 data points (xi, yi) where no two xi are the same, the interpolating polynomial is the polynomial p of degree at most n with the property p(xi) = yi for all i = 0,…,n. This polynomial exists and it is unique. Neville's algorithm evaluates the polynomial at some point x. Let pi,j denote the polynomial of degree j − i which goes through the points (xk, yk) for k = i, i + 1, …, j. The pi,j satisfy the recurrence relation pi,i(x) = yi for 0 ≤ i ≤ n, and pi,j(x) = ((xj − x) pi,j−1(x) + (x − xi) pi+1,j(x)) / (xj − xi) for 0 ≤ i < j ≤ n. This recurrence can calculate p0,n(x), which is the value being sought. This is Neville's algorithm. For instance, for n = 4, one can use the recurrence to fill the triangular tableau below from the left to the right:
p0,0(x) = y0
p1,1(x) = y1    p0,1(x)
p2,2(x) = y2    p1,2(x)    p0,2(x)
p3,3(x) = y3    p2,3(x)    p1,3(x)    p0,3(x)
p4,4(x) = y4    p3,4(x)    p2,4(x)    p1,4(x)    p0,4(x)
This process yields p0,4(x), the value of the polynomial going through the n + 1 data points (xi, yi) at the point x. This algorithm needs O(n²) floating point operations to interpolate a single point, and O(n³) floating point operations to interpolate a polynomial of degree n. 
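A compact implementation of the recurrence, overwriting one array column by column as in the tableau, might look like this (the function name and test points are mine):

```python
def neville(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs[i], ys[i]) at x.
    After the j-th sweep, p[i] holds p_{i, i+j}(x)."""
    p = list(ys)
    n = len(xs) - 1
    for j in range(1, n + 1):
        for i in range(n - j + 1):
            p[i] = ((xs[i + j] - x) * p[i]
                    + (x - xs[i]) * p[i + 1]) / (xs[i + j] - xs[i])
    return p[0]

# The unique cubic through four samples of f(x) = x^2 is x^2 itself:
print(neville([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0], 1.5))  # 2.25
```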
The derivative of the polynomial can be obtained in the same manner, i.e.: p′i,i(x) = 0 for 0 ≤ i ≤ n, and p′i,j(x) = ((xj − x) p′i,j−1(x) − pi,j−1(x) + (x − xi) p′i+1,j(x) + pi+1,j(x)) / (xj − xi) for 0 ≤ i < j ≤ n. Application to numerical differentiation Lyness and Moler showed in 1966 that using undetermined coefficients for the polynomials in Neville's algorithm, one can compute the Maclaurin expansion of the final interpolating polynomial, which yields numerical approximations for the derivatives of the function at the origin. While "this process requires more arithmetic operations than is required in finite difference methods", "the choice of points for function evaluation is not restricted in any way". They also show that their method can be applied directly to the solution of linear systems of the Vandermonde type. References J. N. Lyness and C. B. Moler, Van Der Monde Systems and Numerical Differentiation, Numerische Mathematik 8 (1966) 458–464 (doi: 10.1007/BF02166671) Neville, E. H.: Iterative interpolation. J. Indian Math. Soc. 20, 87–120 (1934) External links Polynomials Interpolation
https://en.wikipedia.org/wiki/Birch%27s%20theorem
In mathematics, Birch's theorem, named for Bryan John Birch, is a statement about the representability of zero by odd degree forms. Statement of Birch's theorem Let K be an algebraic number field, k, l and n be natural numbers, r1, ..., rk be odd natural numbers, and f1, ..., fk be homogeneous polynomials with coefficients in K of degrees r1, ..., rk respectively in n variables. Then there exists a number ψ(r1, ..., rk, l, K) such that if n ≥ ψ(r1, ..., rk, l, K), then there exists an l-dimensional vector subspace V of Kn on which f1, ..., fk all vanish identically. Remarks The proof of the theorem is by induction over the maximal degree of the forms f1, ..., fk. Essential to the proof is a special case, which can be proved by an application of the Hardy–Littlewood circle method, of the theorem which states that if n is sufficiently large and r is odd, then the equation c1x1^r + c2x2^r + ... + cnxn^r = 0, with integer coefficients c1, ..., cn, has a solution in integers x1, ..., xn, not all of which are 0. The restriction to odd r is necessary, since even degree forms, such as positive definite quadratic forms, may take the value 0 only at the origin. References Diophantine equations Analytic number theory Theorems in number theory
https://en.wikipedia.org/wiki/Jean-Louis%20Koszul
Jean-Louis Koszul (January 3, 1921 – January 12, 2018) was a French mathematician, best known for studying geometry and discovering the Koszul complex. He was a second generation member of Bourbaki. Biography Koszul was educated in Strasbourg before studying at the Faculty of Science of the University of Strasbourg and the Faculty of Science of the University of Paris. His Ph.D. thesis, titled Homologie et cohomologie des algèbres de Lie, was written in 1950 under the direction of Henri Cartan. He lectured at many universities and in 1963 was appointed professor in the Faculty of Science at the University of Grenoble. He was a member of the French Academy of Sciences. Koszul was the cousin of the French composer Henri Dutilleux, and the grandson of the composer Julien Koszul. Koszul married Denise Reyss-Brion on July 17, 1948. They had three children: Michel, Bertrand, and Anne. He died on January 12, 2018, nine days after his 97th birthday. See also Koszul algebra Koszul complex Koszul duality Koszul cohomology Koszul connection Koszul-Tate resolution Lie algebra cohomology References External links 1921 births 2018 deaths 20th-century French mathematicians École Normale Supérieure alumni French people of Polish descent Members of the French Academy of Sciences Nicolas Bourbaki Scientists from Strasbourg Academic staff of Grenoble Alpes University
https://en.wikipedia.org/wiki/Dunce%20hat%20%28topology%29
In topology, the dunce hat is a compact topological space formed by taking a solid triangle and gluing all three sides together, with the orientation of one side reversed. Simply gluing two sides oriented in the opposite direction would yield a cone much like the dunce cap, but the gluing of the third side results in identifying the base of the cap with a line joining the base to the point. Name The name is due to E. C. Zeeman, who observed that any contractible 2-complex (such as the dunce hat) after taking the Cartesian product with the closed unit interval seemed to be collapsible. This observation became known as the Zeeman conjecture and was shown by Zeeman to imply the Poincaré conjecture. Properties The dunce hat is contractible, but not collapsible. Contractibility can be easily seen by noting that the dunce hat embeds in the 3-ball and the 3-ball deformation retracts onto the dunce hat. Alternatively, note that the dunce hat is the CW-complex obtained by gluing the boundary of a 2-cell onto the circle. The gluing map is homotopic to the identity map on the circle and so the complex is homotopy equivalent to the disc. By contrast, it is not collapsible because it does not have a free face. See also House with two rooms List of topologies References Topological spaces Algebraic topology
https://en.wikipedia.org/wiki/Clipmap
Clipmapping is a method of clipping a mipmap to a subset of data pertinent to the geometry being displayed. This is useful for loading as little data as possible when memory is limited, such as on a graphics processing unit. The technique is used for level-of-detail (LOD) management in NVIDIA's implementation of voxel cone tracing. The high-resolution levels of the mipmapped scene representation are clipped to a region near the camera while lower resolution levels are clipped further away. References External links SGI paper from 1998 SGI paper from 1996 Description from SGI's developer library Clipping (computer graphics)
https://en.wikipedia.org/wiki/Tamara%20Davis
Tamara Maree Davis is an Australian astrophysicist. She is a professor in the School of Mathematics and Physics at the University of Queensland, where she has been employed since 2008. The Australian Academy of Science awarded her their Nancy Millis Medal in 2015, and she was awarded an Australian Laureate Fellowship in 2018. She received the Astronomical Society of Australia's Louise Webster Prize in 2009, and their Robert Ellery Lectureship in 2021. She became a Member of the Order of Australia in 2020. As an athlete, Davis has competed for Australia at an international level in Ultimate Frisbee. Education Davis completed her Ph.D. in astrophysics at the University of New South Wales in 2004. She also has a BSc in physics and a BA in philosophy. References External links Tamara Davis on Twitter Interview with Martine Harte about Ruby Payne-Scott for Engaging Women. Living people Australian astrophysicists University of New South Wales alumni Academic staff of the University of Queensland Year of birth missing (living people) Members of the Order of Australia Australian women academics 21st-century Australian women scientists
https://en.wikipedia.org/wiki/Brahmagupta%27s%20identity
In algebra, Brahmagupta's identity says that, for given n, the product of two numbers of the form a² + nb² is itself a number of that form. In other words, the set of such numbers is closed under multiplication. Specifically: (a² + nb²)(c² + nd²) = (ac − nbd)² + n(ad + bc)² (1) and (a² + nb²)(c² + nd²) = (ac + nbd)² + n(ad − bc)² (2). Both (1) and (2) can be verified by expanding each side of the equation. Also, (2) can be obtained from (1), or (1) from (2), by changing b to −b. This identity holds in both the ring of integers and the ring of rational numbers, and more generally in any commutative ring. History The identity is a generalization of the so-called Fibonacci identity (where n=1) which is actually found in Diophantus' Arithmetica (III, 19). That identity was rediscovered by Brahmagupta (598–668), an Indian mathematician and astronomer, who generalized it and used it in his study of what is now called Pell's equation. His Brahmasphutasiddhanta was translated from Sanskrit into Arabic by Mohammad al-Fazari, and was subsequently translated into Latin in 1126. The identity later appeared in Fibonacci's Book of Squares in 1225. Application to Pell's equation In its original context, Brahmagupta applied his discovery to the solution of what was later called Pell's equation, namely x² − Ny² = 1. Using the identity in the form (x1² − Ny1²)(x2² − Ny2²) = (x1x2 + Ny1y2)² − N(x1y2 + x2y1)², he was able to "compose" triples (x1, y1, k1) and (x2, y2, k2) that were solutions of x² − Ny² = k, to generate the new triple (x1x2 + Ny1y2, x1y2 + x2y1, k1k2). Not only did this give a way to generate infinitely many solutions to x² − Ny² = 1 starting with one solution, but also, by dividing such a composition by k1k2, integer or "nearly integer" solutions could often be obtained. The general method for solving the Pell equation given by Bhaskara II in 1150, namely the chakravala (cyclic) method, was also based on this identity. 
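The composition of triples can be sketched directly (the function name and the N = 2 example are mine):

```python
def compose(t1, t2, N):
    """Brahmagupta composition: two triples (x, y, k) with x^2 - N*y^2 = k
    combine into a new such triple whose k is the product k1*k2."""
    (x1, y1, k1), (x2, y2, k2) = t1, t2
    return (x1 * x2 + N * y1 * y2, x1 * y2 + x2 * y1, k1 * k2)

# For N = 2, (3, 2) solves x^2 - 2y^2 = 1; composing it with itself
# gives the next solution (17, 12), since 17^2 - 2*12^2 = 289 - 288 = 1.
x, y, k = compose((3, 2, 1), (3, 2, 1), 2)
assert (x, y, k) == (17, 12, 1) and x * x - 2 * y * y == k
```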
See also Brahmagupta matrix Brahmagupta–Fibonacci identity Brahmagupta's interpolation formula Gauss composition law Indian mathematics List of Indian mathematicians References External links Brahmagupta's identity at PlanetMath Brahmagupta Identity on MathWorld A Collection of Algebraic Identities Algebra Elementary algebra Mathematical identities Brahmagupta
https://en.wikipedia.org/wiki/Math%20circle
A math circle is a learning space where participants engage in the depths and intricacies of mathematical thinking, propagate the culture of doing mathematics, and create knowledge. To reach these goals, participants partake in problem-solving, mathematical modeling, the practice of art, and philosophical discourse. Some circles involve competition, while others do not. Characteristics Math circles can have a variety of styles. Some are very informal, with the learning proceeding through games, stories, or hands-on activities. Others are more traditional enrichment classes but without formal examinations. Some have a strong emphasis on preparing for Olympiad competitions; some avoid competition as much as possible. Models can use any combination of these techniques, depending on the audience, the mathematician, and the environment of the circle. Athletes have sports teams through which to deepen their involvement with sports; math circles can play a similar role for kids who like to think. Two features all math circles have in common are (1) that they are composed of students who want to be there (they either like math, or want to like math), and (2) that they give students a social context in which to enjoy mathematics. History Mathematical enrichment activities in the United States have been around since sometime before 1977, in the form of residential summer programs, math contests, and local school-based programs. The concept of a math circle, on the other hand, with its emphasis on convening professional mathematicians and secondary school students regularly to solve problems, appeared in the U.S. in 1994 with Robert and Ellen Kaplan at Harvard University. This form of mathematical outreach made its way to the U.S. most directly from the former Soviet Union and present-day Russia and Bulgaria. Math circles first appeared in the Soviet Union during the 1930s; they have existed in Bulgaria since sometime before 1907. The tradition arrived in the U.S. 
with émigrés who had received their inspiration from math circles as teenagers. Many of them successfully climbed the academic ladder to secure positions within universities, and a few pioneers among them decided to initiate math circles within their communities to preserve the tradition which had been so pivotal in their own formation as mathematicians. These days, math circles frequently partner with other mathematical education organizations, such as CYFEMAT: The International Network of Math Circles and Festivals, the Julia Robinson Mathematics Festival, and the Mandelbrot Competition. Content choices Decisions about content are difficult for newly forming math circles and clubs, or for parents seeking groups for their children. Project-based clubs may spend a few meetings building origami, developing a math trail in their town, or programming a math-like computer game together. Math-rich projects may be artistic, exploratory, applied to sciences, executable (software-based), business-oriented, or d
https://en.wikipedia.org/wiki/Error%20exponent
In information theory, the error exponent of a channel code or source code over the block length of the code is the rate at which the error probability decays exponentially with the block length of the code. Formally, it is defined as the limiting ratio of the negative logarithm of the error probability to the block length of the code for large block lengths. For example, if the probability of error P_error of a decoder drops as e^(−nα), where n is the block length, the error exponent is α. In this example, −ln(P_error)/n approaches α for large n. Many of the information-theoretic theorems are of asymptotic nature, for example, the channel coding theorem states that for any rate less than the channel capacity, the probability of the error of the channel code can be made to go to zero as the block length goes to infinity. In practical situations, there are limitations to the delay of the communication and the block length must be finite. Therefore, it is important to study how the probability of error drops as the block length goes to infinity. Error exponent in channel coding For time-invariant DMC's The channel coding theorem states that for any ε > 0 and for any rate less than the channel capacity, there is an encoding and decoding scheme that can be used to ensure that the probability of block error is less than ε > 0 for sufficiently long message block X. Also, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity. Assuming a channel coding setup as follows: the channel can transmit any of M = 2^(nR) messages, by transmitting the corresponding codeword (which is of length n). Each component in the codebook is drawn i.i.d. according to some probability distribution with probability mass function Q. At the decoding end, maximum likelihood decoding is done. Let X_i be the ith random codeword in the codebook, where i goes from 1 to M. Suppose the first message is selected, so codeword X_1 is transmitted. 
Given that is received, the probability that the codeword is incorrectly detected as is: The function has upper bound for Thus, Since there are a total of M messages, and the entries in the codebook are i.i.d., the probability that is confused with any other message is times the above expression. Using the union bound, the probability of confusing with any message is bounded by: for any . Averaging over all combinations of : Choosing and combining the two sums over in the above formula: Using the independence nature of the elements of the codeword, and the discrete memoryless nature of the channel: Using the fact that each element of codeword is identically distributed and thus stationary: Replacing M by 2^(nR) and defining the probability of error becomes Q and ρ should be chosen so that the bound is tightest. Thus, the error exponent can be defined as Error exponent in source coding For time invariant discrete memoryless sources The source coding theorem states that for any and any discr
https://en.wikipedia.org/wiki/Preconditioner
In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form that is more suitable for numerical solving methods. Preconditioning is typically related to reducing a condition number of the problem. The preconditioned problem is then usually solved by an iterative method. Preconditioning for linear systems In linear algebra and numerical analysis, a preconditioner P of a matrix A is a matrix such that P⁻¹A has a smaller condition number than A. It is also common to call T = P⁻¹ the preconditioner, rather than P, since P itself is rarely explicitly available. In modern preconditioning, the application of T = P⁻¹, i.e., multiplication of a column vector, or a block of column vectors, by T = P⁻¹, is commonly performed in a matrix-free fashion, i.e., where neither P, nor P⁻¹ (and often not even A) are explicitly available in a matrix form. Preconditioners are useful in iterative methods to solve a linear system Ax = b for x since the rate of convergence for most iterative linear solvers increases because the condition number of a matrix decreases as a result of preconditioning. Preconditioned iterative solvers typically outperform direct solvers, e.g., Gaussian elimination, for large, especially for sparse, matrices. Iterative solvers can be used as matrix-free methods, i.e. become the only choice if the coefficient matrix A is not stored explicitly, but is accessed by evaluating matrix-vector products. Description Instead of solving the original linear system Ax = b for x, one may consider the right preconditioned system AP⁻¹Px = b and solve AP⁻¹y = b for y and Px = y for x. Alternatively, one may solve the left preconditioned system P⁻¹(Ax − b) = 0 for x. Both systems give the same solution as the original system as long as the preconditioner matrix P is nonsingular. The left preconditioning is more traditional. 
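As a minimal illustration of left preconditioning (the ill-scaled matrix and the Jacobi choice P = diag(A) are assumptions of this sketch, not a recommendation for any particular problem):

```python
import numpy as np

# An arbitrary ill-scaled example matrix and right-hand side.
A = np.diag([1.0, 1e2, 1e4]) + 0.1 * np.ones((3, 3))
b = np.array([1.0, 2.0, 3.0])

# Jacobi (diagonal) preconditioner: P = diag(A), applied as P^-1.
P_inv = np.diag(1.0 / np.diag(A))
M = P_inv @ A                    # left preconditioned matrix P^-1 A

# Conditioning improves: cond(P^-1 A) is far smaller than cond(A).
print(np.linalg.cond(A) > np.linalg.cond(M))  # True

# The preconditioned system has the same solution as the original:
x = np.linalg.solve(M, P_inv @ b)
assert np.allclose(A @ x, b)
```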
The two-sided preconditioned system may be beneficial, e.g., to preserve the matrix symmetry: if the original matrix A is real symmetric and real preconditioners L and R satisfy R = Lᵀ, then the preconditioned matrix LALᵀ is also symmetric. The two-sided preconditioning is common for diagonal scaling, where the preconditioners L and R are diagonal and scaling is applied both to columns and rows of the original matrix A, e.g., in order to decrease the dynamic range of entries of the matrix. The goal of preconditioning is reducing the condition number, e.g., of the left or right preconditioned system matrix P⁻¹A or AP⁻¹. Small condition numbers benefit fast convergence of iterative solvers and improve stability of the solution with respect to perturbations in the system matrix and the right-hand side, e.g., allowing for more aggressive quantization of the matrix entries using lower computer precision. The preconditioned matrix P⁻¹A or AP⁻¹ is rarely explicitly formed. Only the action of applying the preconditioner solve operation P⁻¹ to a given vector may need to be computed. Typically there is a trade-off in the choice of P. Since the operator P⁻¹ must be applied at each step of t
https://en.wikipedia.org/wiki/ADHM%20construction
In mathematical physics and gauge theory, the ADHM construction or monad construction is the construction of all instantons using methods of linear algebra by Michael Atiyah, Vladimir Drinfeld, Nigel Hitchin, and Yuri I. Manin in their paper "Construction of Instantons". ADHM data The ADHM construction uses the following data: complex vector spaces V and W of dimension k and N, k × k complex matrices B1, B2, a k × N complex matrix I and a N × k complex matrix J, a real moment map μ_r = [B1, B1†] + [B2, B2†] + II† − J†J, a complex moment map μ_c = [B1, B2] + IJ. Then the ADHM construction claims that, given certain regularity conditions: given B1, B2, I, J such that μ_r = μ_c = 0, an anti-self-dual instanton in an SU(N) gauge theory with instanton number k can be constructed; all anti-self-dual instantons can be obtained in this way and are in one-to-one correspondence with solutions up to a U(k) rotation which acts on each B in the adjoint representation and on I and J via the fundamental and antifundamental representations; the metric on the moduli space of instantons is that inherited from the flat metric on B, I and J. Generalizations Noncommutative instantons In a noncommutative gauge theory, the ADHM construction is identical but the real moment map is set equal to the self-dual projection of the noncommutativity matrix of the spacetime times the identity matrix. In this case instantons exist even when the gauge group is U(1). The noncommutative instantons were discovered by Nikita Nekrasov and Albert Schwarz in 1998. Vortices Setting B2 and J to zero, one obtains the classical moduli space of nonabelian vortices in a supersymmetric gauge theory with an equal number of colors and flavors, as was demonstrated in Vortices, instantons and branes. The generalization to greater numbers of flavors appeared in Solitons in the Higgs phase: The Moduli matrix approach. In both cases the Fayet–Iliopoulos term, which determines a squark condensate, plays the role of the noncommutativity parameter in the real moment map. 
The construction formula

Let x be the 4-dimensional Euclidean spacetime coordinates, written in quaternionic notation. Consider the 2k × (N + 2k) matrix Δ(x). Then the conditions μr = μc = 0 (the vanishing of the real and complex moment maps) are equivalent to the factorization condition

Δ(x) Δ(x)† = f(x)⁻¹ ⊗ 1₂,

where f(x) is a k × k Hermitian matrix. Then a Hermitian projection operator P can be constructed as

P = Δ†(f ⊗ 1₂)Δ.

The nullspace of Δ(x) is of dimension N for generic x. The basis vectors for this null-space can be assembled into an (N + 2k) × N matrix U(x) with the orthonormalization condition U†U = 1. A regularity condition on the rank of Δ guarantees the completeness condition

UU† + Δ†(f ⊗ 1₂)Δ = 1.

The anti-self-dual connection is then constructed from U by the formula

Aμ = U† ∂μ U.

See also

Monad (homological algebra)
Twistor theory

References

Hitchin, N. (1983), "On the Construction of Monopoles", Commun. Math. Phys. 89, 145–190.

Gauge theories Differential geometry Quantum chromodynamics
https://en.wikipedia.org/wiki/Choi%27s%20theorem%20on%20completely%20positive%20maps
In mathematics, Choi's theorem on completely positive maps is a result that classifies completely positive maps between finite-dimensional (matrix) C*-algebras. An infinite-dimensional algebraic generalization of Choi's theorem is known as Belavkin's "Radon–Nikodym" theorem for completely positive maps.

Statement

Choi's theorem. Let Φ : Cn×n → Cm×m be a linear map. The following are equivalent:

(i) Φ is n-positive (i.e. (In ⊗ Φ)(A) is positive whenever A is positive).
(ii) The matrix with operator entries CΦ = (In ⊗ Φ)(Σij Eij ⊗ Eij) = Σij Eij ⊗ Φ(Eij) is positive, where Eij ∈ Cn×n is the matrix with 1 in the (i, j)-th entry and 0s elsewhere. (The matrix CΦ is sometimes called the Choi matrix of Φ.)
(iii) Φ is completely positive.

Proof

(i) implies (ii)

We observe that if E = Σij Eij ⊗ Eij, then E = E* and E² = nE, so E = n⁻¹EE*, which is positive. Therefore CΦ = (In ⊗ Φ)(E) is positive by the n-positivity of Φ.

(iii) implies (i)

This holds trivially.

(ii) implies (iii)

This mainly involves chasing the different ways of looking at Cnm×nm: Let the eigenvector decomposition of CΦ be

CΦ = Σi λi vi vi*,

where the vectors vi lie in Cnm. By assumption, each eigenvalue λi is non-negative, so we can absorb the eigenvalues into the eigenvectors and redefine vi so that

CΦ = Σi vi vi*.

The vector space Cnm can be viewed as the direct sum of n copies of Cm, compatibly with the above identification and the standard basis of Cn. If Pk ∈ Cm×nm is the projection onto the k-th copy of Cm, then Pk* ∈ Cnm×m is the inclusion of Cm as the k-th summand of the direct sum, and Σk Pk* Pk = Inm.

Now if the operators Vi ∈ Cm×n are defined on the k-th standard basis vector ek of Cn by

Vi ek = Pk vi,

then

Φ(Ekl) = Pk CΦ Pl* = Σi Pk vi (Pl vi)* = Σi Vi Ekl Vi*.

Extending by linearity gives us

Φ(A) = Σi Vi A Vi*

for any A ∈ Cn×n. Any map of this form is manifestly completely positive: the map A ↦ Vi A Vi* is completely positive, and the sum (across i) of completely positive operators is again completely positive. Thus Φ is completely positive, the desired result.

The above is essentially Choi's original proof. Alternative proofs have also been known.

Consequences

Kraus operators

In the context of quantum information theory, the operators {Vi} are called the Kraus operators (after Karl Kraus) of Φ.
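The proof of (ii) ⇒ (iii) is effectively an algorithm: eigendecompose the Choi matrix and reshape the scaled eigenvectors into Kraus operators. A sketch in Python/numpy; the completely dephasing qubit channel used as the example map is my own choice, not from the text.

```python
import numpy as np

n = m = 2
def phi(A):
    # example completely positive map: the dephasing channel Phi(A) = diag(A)
    return np.diag(np.diag(A)).astype(complex)

# Choi matrix  C = sum_{ij} E_ij (x) Phi(E_ij)
C = np.zeros((n * m, n * m), dtype=complex)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = 1
        C += np.kron(E, phi(E))

# eigendecomposition -> Kraus operators; column k of V_i is P_k v_i
w, v = np.linalg.eigh(C)
kraus = []
for lam, vec in zip(w, v.T):
    if lam > 1e-12:
        # vec in C^{nm} viewed as n blocks of size m; V e_k = P_k v
        kraus.append(np.sqrt(lam) * vec.reshape(n, m).T)

# check Phi(A) = sum_i V_i A V_i*  on a random input
A = np.random.randn(n, n) + 1j * np.random.randn(n, n)
recon = sum(V @ A @ V.conj().T for V in kraus)
print(np.allclose(recon, phi(A)))
```

For the dephasing channel the Choi matrix is E00 ⊗ E00 + E11 ⊗ E11, so this recovers exactly the two Kraus operators E00 and E11.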
Notice, given a completely positive Φ, its Kraus operators need not be unique. For example, any "square root" factorization of the Choi matrix CΦ = B*B gives a set of Kraus operators. Let

CΦ = B*B = Σi bi bi*,

where the bi*'s are the row vectors of B; then the corresponding Kraus operators can be obtained by exactly the same argument from the proof.

When the Kraus operators are obtained from the eigenvector decomposition of the Choi matrix, because the eigenvectors form an orthogonal set, the corresponding Kraus operators are also orthogonal in the Hilbert–Schmidt inner product. This is not true in general for Kraus operators obtained from square root factorizations. (Positive semidefinite matrices do not generally have a unique square-root factorization.)

If two sets of Kraus operators {Ai}1nm and {Bi}1nm represent the same completely positive map Φ, then there exists a unitary operator matrix (Uij) such that Ai = Σj Uij Bj. This can be viewed as a special case of the result relating two minimal Stinespring representations.
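A sketch illustrating this non-uniqueness: the eigendecomposition and the Cholesky factorization of the same Choi matrix yield different Kraus sets that represent the same map. The example channel (qubit depolarizing) and the helper names are my own; depolarizing is chosen because its Choi matrix is positive definite, as Cholesky requires.

```python
import numpy as np

n = m = 2
p = 0.5
def phi(A):
    # qubit depolarizing channel (example CP map, assumed here)
    return (1 - p) * A + p * np.trace(A) * np.eye(2) / 2

C = np.zeros((n * m, n * m), dtype=complex)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = 1
        C += np.kron(E, phi(E))

def kraus_from_columns(cols):
    # each vector in C^{nm} becomes an m x n Kraus operator (block reshape)
    return [c.reshape(n, m).T for c in cols]

# factorization 1: eigendecomposition, C = sum_i (sqrt(l_i) v_i)(sqrt(l_i) v_i)*
w, v = np.linalg.eigh(C)
K_eig = kraus_from_columns([np.sqrt(lam) * vec for lam, vec in zip(w, v.T)])

# factorization 2: Cholesky C = L L*, so the columns of L also work
L = np.linalg.cholesky(C)
K_chol = kraus_from_columns(list(L.T))

A = np.random.randn(n, n) + 1j * np.random.randn(n, n)
r1 = sum(V @ A @ V.conj().T for V in K_eig)
r2 = sum(V @ A @ V.conj().T for V in K_chol)
print(np.allclose(r1, phi(A)), np.allclose(r2, phi(A)))
```

Both reconstructions agree with Φ even though the two Kraus sets differ; only the eigendecomposition set is Hilbert–Schmidt orthogonal.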
https://en.wikipedia.org/wiki/Fine%20topology%20%28potential%20theory%29
In mathematics, in the field of potential theory, the fine topology is a natural topology for setting the study of subharmonic functions. In the earliest studies of subharmonic functions, namely those for which Δu ≥ 0, where Δ is the Laplacian, only smooth functions were considered. In that case it was natural to consider only the Euclidean topology, but with the advent of upper semi-continuous subharmonic functions introduced by F. Riesz, the fine topology became the more natural tool in many situations.

Definition

The fine topology on the Euclidean space Rn is defined to be the coarsest topology making all subharmonic functions (equivalently all superharmonic functions) continuous. Concepts in the fine topology are normally prefixed with the word 'fine' to distinguish them from the corresponding concepts in the usual topology, as for example 'fine neighbourhood' or 'fine continuous'.

Observations

The fine topology was introduced in 1940 by Henri Cartan to aid in the study of thin sets and was initially considered to be somewhat pathological due to the absence of a number of properties, such as local compactness, which are so frequently useful in analysis. Subsequent work has shown that the lack of such properties is to a certain extent compensated for by the presence of other slightly less strong properties, such as the quasi-Lindelöf property.

In one dimension, that is, on the real line, the fine topology coincides with the usual topology, since in that case the subharmonic functions are precisely the convex functions, which are already continuous in the usual (Euclidean) topology. Thus, the fine topology is of most interest in Rn where n ≥ 2. The fine topology in this case is strictly finer than the usual topology, since there are discontinuous subharmonic functions.

Cartan observed in correspondence with Marcel Brelot that it is equally possible to develop the theory of the fine topology by using the concept of 'thinness'.
In this development, a set E is thin at a point ζ if there exists a subharmonic function u defined on a neighbourhood of ζ such that

lim sup u(x) < u(ζ) as x → ζ with x ∈ E ∖ {ζ}.

Then, a set U is a fine neighbourhood of ζ if and only if the complement of U is thin at ζ.

Properties of the fine topology

The fine topology is in some ways much less tractable than the usual topology in Euclidean space, as is evidenced by the following (taking n ≥ 2):

A set E in Rn is fine compact if and only if E is finite.
The fine topology on Rn is not locally compact (although it is Hausdorff).
The fine topology on Rn is not first-countable, second-countable or metrisable.

The fine topology does at least have a few 'nicer' properties:

The fine topology has the Baire property.
The fine topology in Rn is locally connected.

The fine topology does not possess the Lindelöf property, but it does have the slightly weaker quasi-Lindelöf property: an arbitrary union of fine open subsets of Rn differs by a polar set from some countable subunion.

References

Subharmonic functions
https://en.wikipedia.org/wiki/Hermann%20Schlichting
Hermann Schlichting (22 September 1907 – 15 June 1982) was a German fluid dynamics engineer.

Life and work

Hermann Schlichting studied mathematics, physics and applied mechanics from 1926 to 1930 at the universities of Jena, Vienna and Göttingen. In 1930 he completed his PhD in Göttingen, titled Über das ebene Windschattenproblem, and in the same year passed the state examination as a teacher of higher mathematics and physics. His meeting with Ludwig Prandtl had a long-lasting effect on him. He worked from 1931 to 1935 at the Kaiser Wilhelm Institute for Flow Research in Göttingen. His main research area was fluid flows with viscous effects. Simultaneously he also started working on airfoil aerodynamics. In 1935 Schlichting went to Dornier in Friedrichshafen. There he did the planning for the new wind tunnel and, after a short construction time, took charge of it. With it he gained useful experience in the field of aerodynamics. At the age of 30, in 1937, he joined the Technische Universität Braunschweig, where in 1938 he became a professor. After joining in October 1937, Schlichting worked on setting up the Aerodynamic Institute at the Braunschweig-Waggum airport. Some features of a boundary layer transitioning from a laminar to a turbulent state have been named after him, the Tollmien–Schlichting waves. Schlichting became an emeritus professor at TU Braunschweig on 30 September 1975.

Achievements

1953 Medal "50th Anniversary of Powered Flight" from the National Aeronautical Association, Washington, D.C.
1968 Dr.-Ing. E.h. at the Technical University of Munich
1969 Ludwig-Prandtl-Ring from the Deutsche Gesellschaft für Luft- und Raumfahrt (DGLR)
1972 Bundesverdienstkreuz
1976 Honorary member of the Deutsche Forschungs- und Versuchsanstalt für Luft- und Raumfahrt e.V. (DFVLR)
1980 "Von-Kármán-Medaille" from the Advisory Group for Aerospace Research and Development (AGARD), Paris

Books

Hermann Schlichting, Erich Truckenbrodt: Aerodynamik des Flugzeugs. Springer, Berlin 1967
Hermann Schlichting, Klaus Gersten: Boundary-Layer Theory, 8th ed. Springer-Verlag, 2004
Hermann Schlichting, Klaus Gersten, Egon Krause, Herbert Oertel jun.: Grenzschicht-Theorie. Springer, Berlin 2006

External links

Fluid dynamicists 1907 births 1982 deaths Engineers from Lower Saxony Ludwig-Prandtl-Ring recipients Commanders Crosses of the Order of Merit of the Federal Republic of Germany Academic staff of the Technical University of Braunschweig People from Stade (district)
https://en.wikipedia.org/wiki/Butson-type%20Hadamard%20matrix
In mathematics, a complex Hadamard matrix H of size N with all its columns (rows) mutually orthogonal belongs to the Butson type H(q, N) if all its elements are powers of a q-th root of unity.

Existence

If p is prime, then H(p, N) can exist only for N = mp with integer m, and it is conjectured that they exist for all such cases. For p = 2, the corresponding conjecture is existence for all multiples of 4. In general, the problem of finding all pairs (q, N) such that Butson-type matrices H(q, N) exist remains open.

Examples

H(2, N) contains real Hadamard matrices of size N.
H(4, N) contains Hadamard matrices composed of ±1 and ±i; such matrices were called by Turyn complex Hadamard matrices.
In the limit q → ∞ one can approximate all complex Hadamard matrices.
Fourier matrices of size N belong to the Butson type, H(N, N).

References

A. T. Butson, Generalized Hadamard matrices, Proc. Am. Math. Soc. 13, 894–898 (1962).
A. T. Butson, Relations among generalized Hadamard matrices, relative difference sets, and maximal length linear recurring sequences, Can. J. Math. 15, 42–48 (1963).
R. J. Turyn, Complex Hadamard matrices, pp. 435–437 in Combinatorial Structures and their Applications, Gordon and Breach, London (1970).

External links

Complex Hadamard Matrices of Butson type - a catalogue, by Wojciech Bruzda, Wojciech Tadej and Karol Życzkowski, retrieved October 24, 2006

Matrices
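A quick numerical check that the size-N Fourier matrix is of Butson type H(N, N): every entry is an N-th root of unity and the columns are mutually orthogonal (numpy sketch; the helper name is mine).

```python
import numpy as np

def fourier_matrix(N):
    # F_jk = exp(2*pi*i*j*k / N), the N x N Fourier matrix
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N)

N = 6
F = fourier_matrix(N)
# every entry raised to the N-th power is 1, i.e. entries are N-th roots of unity
print(np.allclose(F**N, 1))
# columns are mutually orthogonal: F* F = N I
print(np.allclose(F.conj().T @ F, N * np.eye(N)))
```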
https://en.wikipedia.org/wiki/Computable%20ordinal
In mathematics, specifically computability and set theory, an ordinal α is said to be computable or recursive if there is a computable well-ordering of a computable subset of the natural numbers having the order type α.

It is easy to check that ω is computable. The successor of a computable ordinal is computable, and the set of all computable ordinals is closed downwards.

The supremum of all computable ordinals is called the Church–Kleene ordinal, the first nonrecursive ordinal, and denoted by ω1CK. The Church–Kleene ordinal is a limit ordinal. An ordinal is computable if and only if it is smaller than ω1CK. Since there are only countably many computable relations, there are also only countably many computable ordinals. Thus, ω1CK is countable.

The computable ordinals are exactly the ordinals that have an ordinal notation in Kleene's O.

See also

Arithmetical hierarchy
Large countable ordinal
Ordinal analysis
Ordinal notation

References

Hartley Rogers Jr. The Theory of Recursive Functions and Effective Computability, 1967. Reprinted 1987, MIT Press, (paperback).
Gerald Sacks. Higher Recursion Theory. Perspectives in Mathematical Logic, Springer-Verlag, 1990.

Set theory Computability theory Ordinal numbers
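For a concrete example, here is a computable well-ordering of the natural numbers of order type ω + ω, witnessing that this ordinal is computable (a standard textbook-style example, sketched in Python: all even numbers, in their usual order, precede all odd numbers).

```python
# A computable well-ordering of the natural numbers with order type
# omega + omega: the evens come first (in usual order), then the odds.
def less(a: int, b: int) -> bool:
    if a % 2 != b % 2:
        return a % 2 == 0   # every even number precedes every odd number
    return a < b            # within each parity class, the usual order

# In this ordering 4 < 6, 100 < 1, 1 < 3, but not 3 < 0:
print(less(4, 6), less(100, 1), less(1, 3), less(3, 0))
```

The comparison function is clearly computable, and every nonempty set has a least element in this ordering, so it is a computable well-ordering of type ω + ω.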
https://en.wikipedia.org/wiki/Willingness%20to%20communicate
Willingness to communicate (WTC) was originally conceptualised for first language acquisition, and seeks to demonstrate the probability that a speaker will choose to participate in a conversation of their own volition (McCroskey & Baer 1985, cited in MacIntyre et al., 1998). Traditionally, it was seen as a fixed personality trait that did not change according to context. However, McCroskey and associates suggested that it is in fact a situational variable that will change according to a number of factors (how well the speaker knows the interlocutor(s), the number of interlocutors, formality, topic, etc.).

Difference between L1 and second language WTC

MacIntyre, Clément, Dörnyei & Noels (1998) noted that WTC in the first language (L1) does not necessarily transfer to the second language. "It is highly unlikely that WTC in the second language (L2) is a simple manifestation of WTC in the L1" (p. 546).

According to MacIntyre, a key difference between WTC in the L1 and the L2 is that in the L2, WTC is "a readiness to enter into discourse at a particular time with a specific person or persons, using a L2" (1998, p. 547, italics added). That is, the speaker indicates they have an intention to speak, for example by raising their hand in a class, even if they don't physically produce language at that time, because the conditions have been met for them to believe they have the ability to communicate. Therefore, "the ultimate goal of the learning process should be to engender in language education students the willingness to communicate" (MacIntyre, Clément, Dörnyei & Noels, 1998).

Pyramid model

A pyramid model has been established that describes the possible influences on a student's willingness to communicate in a second language. "The pyramid shape shows the immediacy of some factors and the relatively distal influence of others." (p.
546) At the top of the pyramid is the point of communication, and moving down the pyramid, the influencing factors become less transient, situation specific and more long term, stable factors that can be applied to almost any situation. As described by MacIntyre et al. 1998, the model has six layers and “is based on a host of learner variables that have been well established as influences on L2 learning and communication” (p. 558): communication behaviour (I) behavioural intention (II) situated antecedents (III) motivational propensities (IV) affective-cognitive context (V) social and individual context (VI) Layers I-III represent transient, situation specific factors that will influence WTC dependent on the specific person, topic, context and time. Layers IV-VI represent more stable, long-term traits of the speaker that will apply to almost all situations, irrespective of other factors. Within each layer, there are a number of constructs which further explain the situational and enduring influences on WTC: use (layer I) willingness to communicate (II) desire to communicate with a specific person (III) state of com
https://en.wikipedia.org/wiki/Census%20in%20Canada
Statistics Canada conducts a national census of population and census of agriculture every five years and releases the data with a two-year lag. The Census of Population provides demographic and statistical data that is used to plan public services such as health care, education, and transportation; determine federal transfer payments; and determine the number of Members of Parliament for each province and territory. The Census of Population is the primary source of sociodemographic data for specific population groups, such as lone-parent families, Indigenous peoples, immigrants, seniors and language groups. Data from the census is also used to assess the economic state of the country, including the economic conditions of immigrants over time, and labour market activity of communities and specific populations. Census data are also leveraged to develop socioeconomic status indicators in support of analysis of various impacts on education achievement and outcomes. At a sub-national level, two provinces (Alberta and Saskatchewan) and two territories (Nunavut and Yukon) have legislation that allows local governments to conduct their own municipal censuses. The Census of Population gathers important data on a variety of topics, including: Indigenous peoples Education, training and learning Ethnic diversity and immigration Families, households and housing Income, pensions, spending and wealth Labour Languages Population and demography There have been questions about religion in Canada in the national census since 1871. In 1951, when the frequency of conducting the national census changed from being collected every 10 years to every 5 years, questions about religion were still asked only every 10 years. Questions on religion were included in the last census, which occurred in 2021, but it will not be included in the 2026 census as questions on religion are included in census years that end in “1”. 
The census typically undercounts the population by ~2–4% because people are not at home, have trouble understanding the census, or census enumerators are unable to find the people. History The first census in what is now Canada took place in New France in 1666, under the direction of Intendant Jean Talon. The census noted the age, sex, marital status and occupation of 3,215 inhabitants. French-controlled Acadia also took their own census from 1671 to 1755. It is notable that section 8 of the Constitution Act, 1867 mandates that a national census must be done every 10 years, on years ending in 1 (1871, 1881, 1891, etc.). However, the section has been interpreted to mean that a census cannot be conducted beyond that 10-year period, but this does not indicate that a census cannot be conducted more regularly—such as every 5 years, as is now required of Statistics Canada by the Statistics Act. The first national census of Canada was taken in 1871, as required by section 8 of the then British North America Act, 1867 (now the Constitution Act, 1867
https://en.wikipedia.org/wiki/Leray%20spectral%20sequence
In mathematics, the Leray spectral sequence was a pioneering example in homological algebra, introduced in 1946 by Jean Leray. It is usually seen nowadays as a special case of the Grothendieck spectral sequence.

Definition

Let f : X → Y be a continuous map of topological spaces, which in particular gives a functor f∗ from sheaves of abelian groups on X to sheaves of abelian groups on Y. Composing this with the functor Γ(Y, −) of taking sections on Y is the same as taking sections on X, by the definition of the direct image functor f∗:

Γ(Y, f∗F) = Γ(X, F).

Thus the derived functors of Γ(Y, −) ∘ f∗ compute the sheaf cohomology for X:

Ri(Γ(Y, −) ∘ f∗)(F) = Hi(X, F).

But because f∗ sends injective objects in Sh(X) to Γ(Y, −)-acyclic objects in Sh(Y), there is a spectral sequence whose second page is

E2pq = Hp(Y, Rq f∗ F)

and which converges to

Hp+q(X, F).

This is called the Leray spectral sequence.

Generalizing to other sheaves and complexes of sheaves

Note this result can be generalized by instead considering sheaves of modules over a locally constant sheaf of rings A, for a fixed commutative ring. Then the sheaves will be sheaves of A-modules, where for an open set U ⊂ X, such a sheaf F(U) is an A(U)-module. In addition, instead of sheaves, we could consider complexes of sheaves bounded below, in the derived category of Sh(X). Then, one replaces sheaf cohomology with sheaf hypercohomology.

Construction

The existence of the Leray spectral sequence is a direct application of the Grothendieck spectral sequence. This states that given additive functors F : A → B and G : B → C between Abelian categories having enough injectives, with G a left-exact functor and F sending injective objects to G-acyclic objects, there is an isomorphism of derived functors

R⁺(G ∘ F) ≅ R⁺G ∘ R⁺F

for the derived categories. In the example above, we have the composition of derived functors

RΓ(Y, −) ∘ Rf∗ = RΓ(X, −).

Classical definition

Let f : X → Y be a continuous map of smooth manifolds.
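A standard consequence worth recording: the low-degree terms of the Leray spectral sequence assemble into the five-term exact sequence (stated here for a sheaf F on X; this holds for any such first-quadrant spectral sequence).

```latex
0 \to H^1(Y, f_*\mathcal{F}) \to H^1(X, \mathcal{F})
  \to H^0(Y, R^1 f_*\mathcal{F}) \to H^2(Y, f_*\mathcal{F})
  \to H^2(X, \mathcal{F}).
```

In particular, H¹(X, F) is an extension of a subspace of the global sections of R¹f∗F by H¹(Y, f∗F).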
If U = {Ui} is an open cover of Y, form the Čech complex of a sheaf F with respect to the cover f⁻¹(U) of X. The boundary maps and the maps of sheaves on Y together give a boundary map on the double complex. This double complex is also a single complex, graded by total degree, with respect to which the total differential is a boundary map. If each finite intersection of the Ui is diffeomorphic to Rn, one can show that the cohomology of this complex is the de Rham cohomology of X. Moreover, any double complex has a spectral sequence E whose E∞ terms are the graded pieces of Hn(X) (so that the sum of these is Hn(X)), and with

E2pq = Hp(Y, Hq),

where Hq is the presheaf on Y sending U ↦ Hq(f⁻¹(U), F). In this context, this is called the Leray spectral sequence. The modern definition subsumes this, because the higher direct image functor Rq f∗ F is the sheafification of the presheaf U ↦ Hq(f⁻¹(U), F).

Examples

Let X and Z be smooth manifolds, with X simply connected, so π1(X) = 0. We calculate the Leray spectral sequence of the projection f : X × Z → X. If the cover U of X is good (finite intersections are diffeomorphic to Rn), then the presheaf Hq is locally constant with value Hq(Z). Since X is simply connected, any locally constant presheaf is constant, so this is the constant presheaf Hq(Z). So the second page of the Leray spectral sequence is

E2pq = Hp(X, Hq(Z)).

As the cover f⁻¹(U) of X × Z is also good, the same identification applies there. So

E2pq = Hp(X) ⊗ Hq(Z).

Here is the first place we use that f is a projection and not just a fibre bundle: every elemen
https://en.wikipedia.org/wiki/MSU%20Faculty%20of%20Mechanics%20and%20Mathematics
The MSU Faculty of Mechanics and Mathematics () is a faculty of Moscow State University.

History

Although lectures in mathematics had been delivered since Moscow State University was founded in 1755, the mathematical and physical department was founded only in 1804. The Mathematics and Mechanics Department was founded on 1 May 1933 and comprised mathematics, mechanics and astronomy departments (the latter passed to the Physics Department in 1956). In 1953 the department moved to a new building on the Sparrow Hills, and the current division into mathematics and mechanics branches was established. In 1970, the Department of Computational Mathematics and Cybernetics split off from the department, owing to the growth of research in computer science. A 2014 article entitled "Math as a tool of anti-semitism" in The Mathematics Enthusiast discussed antisemitism in Moscow State University's Department of Mathematics during the 1970s and 1980s.

Current state

Today the Department comprises 26 chairs (17 in the mathematical and 9 in the mechanics branch) and 14 research laboratories. Around 350 professors, assistant professors and researchers work at the department. Around 2000 students and 450 postgraduates study at the department. The education lasts 5 years (6 years from 2011).

Notable alumni

Notable faculty (past and present)

Algebra – O. U. Schmidt, A. G. Kurosh, Yu. I. Manin
Number theory – B. N. Delaunay, A. I. Khinchin, L. G. Shnirelman, A. O. Gelfond
Topology – P. S. Alexandrov, A. N. Tychonoff, L. S. Pontryagin, Lev Tumarkin
Real analysis – D. E. Menshov, A. I. Khinchin, N. K. Bari, A. N. Kolmogorov, S. B. Stechkin
Complex analysis – I. I. Privalov, M. A. Lavrentiev, A. O. Gelfond, M. V. Keldysh
Ordinary differential equations – V. V. Stepanov, V. V. Nemitski, V. I. Arnold, N. N. Nekhoroshev
Partial differential equations – I. G. Petrovsky, S. L. Sobolev, E. M. Landis
Mathematical logic and theory of algorithms – A. A. Markov (Jr.), A. N. Kolmogorov, V. A. Melnikov, V. A. Uspensky, A. L. Semenov
Calculus of variations – L. A. Lusternik
Functional analysis – A. N. Kolmogorov, I. M. Gelfand
Probability theory – A. I. Khinchin, A. N. Kolmogorov, Ya. G. Sinai, A. N. Shiryaev
Differential geometry – V. F. Kagan, A. T. Fomenko, N. V. Efimov
Discrete mathematics – O. B. Lupanov
Theoretical Mechanics and Mechatronics – D. E. Okhotsimsky, V. V. Rumyantsev
Aero- and hydrodynamics – L. I. Sedov
Wave theory – A. I. Nekrasov

References

External links

Official website of the department (in Russian)

Moscow State University Education in Moscow Mathematics departments
https://en.wikipedia.org/wiki/Jean%20Bartik
Jean Bartik (née Betty Jean Jennings; December 27, 1924 – March 23, 2011) was one of the original six programmers for the ENIAC computer. Bartik studied mathematics in school, then began work at the University of Pennsylvania, first manually calculating ballistics trajectories and then using ENIAC to do so. The other five ENIAC programmers were Betty Holberton, Ruth Teitelbaum, Kathleen Antonelli, Marlyn Meltzer, and Frances Spence. Bartik and her colleagues developed and codified many of the fundamentals of programming while working on the ENIAC, since it was the first computer of its kind. After her work on ENIAC, Bartik went on to work on BINAC and UNIVAC, and spent time at a variety of technical companies as a writer, manager, engineer and programmer. She spent her later years as a real estate agent and died in 2011 from congestive heart failure complications. The content-management framework Drupal's default theme, Bartik, is named in her honor.

Early life and education

Born Betty Jean Jennings in Gentry County, Missouri in 1924, she was the sixth of seven children. Her father, William Smith Jennings (1893–1971), was from Alanthus Grove, where he was a schoolteacher as well as a farmer. Her mother, Lula May Spainhower (1887–1988), was from Alanthus. Jennings had three older brothers, William (January 10, 1915), Robert (March 15, 1918) and Raymond (January 23, 1922); two older sisters, Emma (August 11, 1916) and Lulu (August 22, 1919); and one younger sister, Mable (December 15, 1928). In her childhood, she would ride on horseback to visit her grandmother, who bought the young girl a newspaper to read every day and became a role model for the rest of her life. She began her education at a local one-room school, and gained local attention for her softball skill. In order to attend high school, she lived with her older sister in the neighboring town, where the school was located, and then began to drive every day despite being only 14.
She graduated from Stanberry High School in 1941, aged 16. She was given the title of salutatorian on her graduation. She attended Northwest Missouri State Teachers College, now known as Northwest Missouri State University, majoring in mathematics with a minor in English, and graduated in 1945. Jennings was awarded the only mathematics degree in her class. Although she had originally intended to study journalism, she decided to change to mathematics because she had a bad relationship with her adviser. Later in her life, she earned a master's degree in English at the University of Pennsylvania in 1967 and was awarded an honorary doctorate degree from Northwest Missouri State University in 2002.

Career

In 1945, the United States Army was recruiting mathematicians from universities to aid in the war effort; despite a warning by her adviser that she would be "a cog in a wheel" with the Army, and encouragement to become a mathematics teacher instead, Bartik decided to become a human computer. Bartik's calculus professor enco
https://en.wikipedia.org/wiki/PROP%20%28category%20theory%29
In category theory, a branch of mathematics, a PROP is a symmetric strict monoidal category whose objects are the natural numbers n, identified with the corresponding finite sets, and whose tensor product is given on objects by addition on numbers. Because of "symmetric", for each n, the symmetric group on n letters is given as a subgroup of the automorphism group of n. The name PROP is an abbreviation of "PROduct and Permutation category".

The notion was introduced by Adams and MacLane; the topological version of it was later given by Boardman and Vogt. Following them, J. P. May then introduced the notion of "operad", a particular kind of PROP. There are the following inclusions of full subcategories, where the first category is the category of (symmetric) operads.

Examples and variants

An important elementary class of PROPs are the sets of all matrices (regardless of number of rows and columns) over some fixed ring R. More concretely, these matrices are the morphisms of the PROP; the objects can be taken as either the sets Rn (sets of vectors) or just the plain natural numbers (since objects do not have to be sets with some structure). In this example:

Composition of morphisms is ordinary matrix multiplication.
The identity morphism of an object n (or Rn) is the identity matrix with side n.
The product acts on objects like addition (m ⊗ n = m + n) and on morphisms like an operation of constructing block diagonal matrices: A ⊗ B = diag(A, B).
The compatibility of composition and product thus boils down to (A ⊗ B)(C ⊗ D) = AC ⊗ BD.
As an edge case, matrices with no rows (0 × n matrices) or no columns (n × 0 matrices) are allowed, and with respect to multiplication count as being zero matrices. The identity of ⊗ is the 0 × 0 matrix.
The permutations in the PROP are the permutation matrices. Thus the left action of a permutation on a matrix (morphism of this PROP) is to permute the rows, whereas the right action is to permute the columns.
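A minimal numerical check of the interchange law in this matrix PROP, with the monoidal product realized as a block diagonal sum (numpy sketch; the helper name `oplus` is mine):

```python
import numpy as np

def oplus(A, B):
    # monoidal product of the matrix PROP: block diagonal sum diag(A, B)
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]))
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

rng = np.random.default_rng(0)
A = rng.random((2, 3)); B = rng.random((1, 2))   # morphisms 3 -> 2 and 2 -> 1
C = rng.random((3, 2)); D = rng.random((2, 4))   # morphisms 2 -> 3 and 4 -> 2

# interchange law: (A (+) B)(C (+) D) = AC (+) BD
lhs = oplus(A, B) @ oplus(C, D)
rhs = oplus(A @ C, B @ D)
print(np.allclose(lhs, rhs))
```

This is exactly the compatibility of composition (matrix multiplication) with the product (block diagonal sum) described above.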
There are also PROPs of matrices where the product is the Kronecker product, but in that class of PROPs the matrices must all be of the form qm × qn (sides are all powers of some common base q); these are the coordinate counterparts of appropriate symmetric monoidal categories of vector spaces under tensor product.

Further examples of PROPs:

the discrete category of natural numbers,
the category FinSet of natural numbers and functions between them,
the category Bij of natural numbers and bijections,
the category Inj of natural numbers and injections.

If the requirement "symmetric" is dropped, one gets the notion of a PRO category. If "symmetric" is replaced by braided, one gets the notion of a PROB category.

The category BijBraid of natural numbers, equipped with the braid group Bn as the automorphisms of each n (and no other morphisms), is a PROB but not a PROP. The augmented simplex category of natural numbers and order-preserving functions is an example of a PRO that is not even a PROB.

Algebras of a PRO

An algebra of a PRO in a monoidal category is a strict monoidal functor
https://en.wikipedia.org/wiki/Margulis%20lemma
In differential geometry, the Margulis lemma (named after Grigory Margulis) is a result about discrete subgroups of isometries of a non-positively curved Riemannian manifold (e.g. the hyperbolic n-space). Roughly, it states that within a fixed radius, usually called the Margulis constant, the structure of the orbits of such a group cannot be too complicated. More precisely, within this radius around a point all points in its orbit are in fact in the orbit of a nilpotent subgroup (in fact one of a bounded finite number of such).

The Margulis lemma for manifolds of non-positive curvature

Formal statement

The Margulis lemma can be formulated as follows. Let X be a simply-connected manifold of non-positive bounded sectional curvature. There exist constants C, ε > 0 with the following property. For any discrete subgroup Γ of the group of isometries of X and any x ∈ X, if F is the set

F = {γ ∈ Γ : d(x, γx) < ε},

then the subgroup generated by F contains a nilpotent subgroup of index less than C. Here d is the distance induced by the Riemannian metric.

An immediately equivalent statement can be given as follows: for any subset F of the isometry group, if it satisfies that:

there exists an x ∈ X such that d(x, γx) < ε for all γ ∈ F;
the group ⟨F⟩ generated by F is discrete;

then ⟨F⟩ contains a nilpotent subgroup of index at most C.

Margulis constants

The optimal constant ε in the statement can be made to depend only on the dimension and the lower bound on the curvature; usually it is normalised so that the curvature is between −1 and 0. It is usually called the Margulis constant of the dimension.

One can also consider Margulis constants for specific spaces. For example, there has been an important effort to determine the Margulis constant of the hyperbolic spaces (of constant curvature −1). For example, the optimal constant for the hyperbolic plane has been determined exactly; in general the Margulis constant for the hyperbolic n-space is known to satisfy explicit upper and lower bounds depending on n.
Zassenhaus neighbourhoods

A particularly studied family of examples of negatively curved manifolds is given by the symmetric spaces associated to semisimple Lie groups. In this case the Margulis lemma can be given the following, more algebraic formulation, which dates back to Hans Zassenhaus.

If G is a semisimple Lie group, there exists a neighbourhood Ω of the identity in G and a constant C such that any discrete subgroup Γ which is generated by Γ ∩ Ω contains a nilpotent subgroup of index at most C.

Such a neighbourhood Ω is called a Zassenhaus neighbourhood in G. If G is compact, this theorem amounts to Jordan's theorem on finite linear groups.

Thick-thin decomposition

Let M be a Riemannian manifold and ε > 0. The thin part of M is the subset of points x where the injectivity radius of M at x is less than ε, usually denoted M<ε, and the thick part its complement, usually denoted M≥ε. There is a tautological decomposition into a disjoint union M = M<ε ∪ M≥ε.

When M is of negative curvature and ε is smaller than the Margulis constant for the universal cover of M, the structure of the components of the thin part is very simple. Let us restr
https://en.wikipedia.org/wiki/1701%20%28number%29
1701 is the natural number preceding 1702 and following 1700. In mathematics 1701 is an odd number and a Stirling number of the second kind. The number 1701 also has unusual properties as it: belongs to a set of numbers such that contains exactly seven different digits. is a decagonal and a 13-gonal number. is divisible by the square of the sum of its digits. belongs to a set of numbers with only palindromic prime factors whose sum is palindromic. is a First Beale cipher. belongs to a set of numbers whose digits of prime factors are either 3 or 7. its reversal digit sequence (1071) is divisible by 7. is a Harshad number. In Star Trek In the Star Trek science fiction franchise, NCC-1701 is the designation for several starships named USS Enterprise. Several of these vessels are focal points in the fictional universe created by Gene Roddenberry. References Integers
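Several of the listed properties are easy to check numerically; the sketch below verifies the ones that do not depend on the elided formulas (the helper functions are ours, chosen for illustration):

```python
# Quick numeric checks of several of the 1701 properties listed above.

def stirling2(n, k):
    """Stirling numbers of the second kind via the standard recurrence."""
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

n = 1701
digit_sum = sum(int(d) for d in str(n))          # 1 + 7 + 0 + 1 = 9

assert n % 2 == 1                                 # odd
assert stirling2(8, 4) == n                       # Stirling number S(8, 4)
assert any(4 * k * k - 3 * k == n for k in range(1, 100))      # decagonal
assert any(k * (11 * k - 9) // 2 == n for k in range(1, 100))  # 13-gonal
assert n % digit_sum == 0                         # Harshad number
assert n % (digit_sum ** 2) == 0                  # divisible by 9**2 = 81
assert int(str(n)[::-1]) % 7 == 0                 # reversal 1071 divisible by 7
print("all checked properties hold")
```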
https://en.wikipedia.org/wiki/Joseph%20L.%20Fleiss
Joseph L. Fleiss (November 13, 1937 – June 12, 2003) was an American professor of biostatistics at the Columbia University Mailman School of Public Health, where he also served as head of the Division of Biostatistics from 1975 to 1992. He is known for his work in mental health statistics, particularly assessing the reliability of diagnostic classifications, and the measures, models, and control of errors in categorization. Early life and education Fleiss was born in Brooklyn, New York. He attended Columbia College of Columbia University and was awarded a bachelor's degree cum laude in 1959. In 1960 he attended a program in biostatistics at the University of Minnesota, then returned to Columbia University, where he earned an M.S. in biostatistics in 1961 from the School of Public Health (now called the Mailman School of Public Health), and a Ph.D. in statistics in 1967 from the Department of Mathematical Statistics in the Columbia Graduate School of Arts and Sciences. Career While still a college student, Fleiss began his career at the Biometrics Research Unit of the New York State Psychiatric Institute, first as a statistical clerk and later as a research scientist and biostatistician. He was affiliated with the Psychiatric Institute until 1986. In 1975, Columbia University recruited Fleiss to be a professor and head of the Division of Biostatistics at the School of Public Health. He remained in that capacity until 1992. Under his leadership, the Division increased in size and stature. Fleiss transformed the Division from a small program consisting chiefly of New Yorkers into a department with international prestige. He instituted a Ph.D. program in 1977. He recruited top faculty from major institutions around the world. The Division trained students, performed independent research, and supported clinical research associated with Columbia University's health sciences divisions. 
Field of expertise One of Fleiss's chief concerns was mental health statistics, particularly assessing the reliability of diagnostic classifications, and the measures, models, and control of errors in categorization. He was among the first to notice the equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability in categorical data (see Fleiss' kappa). In an influential 1974 paper co-authored with Robert Spitzer, Fleiss demonstrated that the second edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-II) was an unreliable diagnostic tool. They found that different practitioners using the DSM-II were rarely in agreement when diagnosing patients with similar problems. In reviewing previous studies of 18 major diagnostic categories, Fleiss and Spitzer concluded that "there are no diagnostic categories for which reliability is uniformly high. Reliability appears to be only satisfactory for three categories: mental deficiency, organic brain syndrome (but not its subtypes), and
https://en.wikipedia.org/wiki/Walter%20Tollmien
Walter Tollmien (13 October 1900, in Berlin – 25 November 1968, in Göttingen) was a German fluid dynamicist. Life Walter Tollmien studied mathematics and physics in Göttingen from the winter semester of 1920–1921 under Ludwig Prandtl, and from 1924 onwards worked under Prandtl at the Kaiser Wilhelm Institute. After research stays in the United States in 1930 and 1933, he became a professor at the Technische Hochschule Dresden in 1937. In 1957 he took over the post of Director at the Max Planck Institute for Fluid Mechanics Research. Achievements Through his pioneering work as a researcher and teacher, Walter Tollmien brought fluid mechanics into the limelight as an interdisciplinary science of great importance. The Tollmien–Schlichting waves that arise in the transition from laminar to turbulent flow are named in part after him. Work Tollmien, Walter (1929): Über die Entstehung der Turbulenz. 1. Mitteilung, Nachr. Ges. Wiss. Göttingen, Math. Phys. Klasse 1929: 21ff Tollmien, Walter (1931): Grenzschichttheorie, in: Handbuch der Experimentalphysik IV,1, Leipzig, S. 239–287. External links Fluid dynamicists 1900 births 1968 deaths Max Planck Institute directors Academic staff of TU Dresden
https://en.wikipedia.org/wiki/1938%2024%20Hours%20of%20Le%20Mans
The 1938 24 Hours of Le Mans was the 15th Grand Prix of Endurance, and took place on 18 and 19 June 1938. Official results Did not finish Statistics Fastest Lap – #19 Raymond Sommer – 5:13.8 Distance – 3180.94 km Average Speed – 132.539 km/h Trophy winners 13th Rudge-Whitworth Biennial Cup – #28 Adler Index of Performance – #51 Amédée Gordini 24 Hours of Le Mans races Le Mans 1938 in French motorsport
https://en.wikipedia.org/wiki/Statistical%20distance
In statistics, probability theory, and information theory, a statistical distance quantifies the distance between two statistical objects, which can be two random variables, or two probability distributions or samples; the distance can also be between an individual sample point and a population or a wider sample of points. A distance between populations can be interpreted as measuring the distance between two probability distributions, and hence such measures are essentially measures of distances between probability measures. Where statistical distance measures relate to the differences between random variables, these variables may have statistical dependence, and hence these distances are not directly related to measures of distances between probability measures. Again, a measure of distance between random variables may relate to the extent of dependence between them, rather than to their individual values. Many statistical distance measures are not metrics, and some are not symmetric. Some types of distance measures, which generalize squared distance, are referred to as (statistical) divergences. Terminology Many terms are used to refer to various notions of distance; these are often confusingly similar, and may be used inconsistently between authors and over time, either loosely or with precise technical meaning. In addition to "distance", similar terms include deviance, deviation, discrepancy, discrimination, and divergence, as well as others such as contrast function and metric. Terms from information theory include cross entropy, relative entropy, discrimination information, and information gain. Distances as metrics Metrics A metric on a set X is a function (called the distance function or simply distance) d : X × X → R+ (where R+ is the set of non-negative real numbers). For all x, y, z in X, this function is required to satisfy the following conditions: d(x, y) ≥ 0     (non-negativity) d(x, y) = 0   if and only if   x = y     (identity of indiscernibles. 
Note that conditions 1 and 2 together produce positive definiteness) d(x, y) = d(y, x)     (symmetry) d(x, z) ≤ d(x, y) + d(y, z)     (subadditivity / triangle inequality). Generalized metrics Many statistical distances are not metrics, because they lack one or more properties of proper metrics. For example, pseudometrics violate property (2), identity of indiscernibles; quasimetrics violate property (3), symmetry; and semimetrics violate property (4), the triangle inequality. Statistical distances that satisfy (1) and (2) are referred to as divergences. Statistically close The variation distance of two distributions and over a finite domain , (often referred to as statistical difference or statistical distance in cryptography) is defined as . We say that two probability ensembles and are statistically close if is a negligible function in . Examples Metrics Total variation distance (sometimes just called "the" statistical distance) Hellinger distance Lévy–Prokhorov metric Wassers
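As a concrete instance of a statistical distance that is a genuine metric, the total variation distance over a finite domain can be sketched as follows (the `tv_distance` helper and the example distributions are illustrative, not from the article):

```python
# Total variation ("statistical") distance between two distributions on a
# finite domain, with a spot-check of the metric axioms on small examples.

def tv_distance(p, q):
    """0.5 * sum_x |p(x) - q(x)| for distributions given as dicts."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

fair   = {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}
loaded = {1: 1/10, 2: 1/10, 3: 1/10, 4: 1/10, 5: 1/10, 6: 1/2}
coin   = {1: 1/2, 6: 1/2}

assert tv_distance(fair, fair) == 0.0                          # identity
assert tv_distance(fair, loaded) == tv_distance(loaded, fair)  # symmetry
# triangle inequality on this triple (up to float rounding):
assert tv_distance(fair, coin) <= (tv_distance(fair, loaded)
                                   + tv_distance(loaded, coin) + 1e-12)
print(tv_distance(fair, loaded))   # 1/3 for this pair
```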
https://en.wikipedia.org/wiki/Variance%20reduction
In mathematics, more specifically in the theory of Monte Carlo methods, variance reduction is a procedure used to increase the precision of the estimates obtained for a given simulation or computational effort. Every output random variable from the simulation is associated with a variance which limits the precision of the simulation results. In order to make a simulation statistically efficient, i.e., to obtain a greater precision and smaller confidence intervals for the output random variable of interest, variance reduction techniques can be used. The main variance reduction methods are common random numbers, antithetic variates, control variates, importance sampling, stratified sampling, moment matching, conditional Monte Carlo, and quasi-random variables (in the quasi-Monte Carlo method). For simulation with black-box models, subset simulation and line sampling can also be used. Under these headings are a variety of specialized techniques; for example, particle transport simulations make extensive use of "weight windows" and "splitting/Russian roulette" techniques, which are a form of importance sampling. Crude Monte Carlo simulation Suppose one wants to compute with the random variable defined on the probability space . Monte Carlo does this by sampling i.i.d. copies of and then estimating via the sample-mean estimator Under further mild conditions such as , a central limit theorem will apply such that for large , the distribution of converges to a normal distribution with mean and standard error . Because the standard deviation converges towards at the rate only, one needs to increase the number of simulations () by a factor of four to halve the standard deviation of ; variance reduction methods are therefore often useful for obtaining more precise estimates for without needing very large numbers of simulations. 
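A minimal sketch of the crude estimator and its O(1/√N) standard error, with an assumed integrand f(u) = eᵘ (true mean e − 1), chosen only for illustration:

```python
# Crude Monte Carlo estimate of mu = E[f(U)] with U ~ Uniform(0,1),
# illustrating the O(1/sqrt(N)) standard error discussed above.
import math
import random

def crude_mc(f, n, rng):
    xs = [f(rng.random()) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean, math.sqrt(var / n)           # estimate, standard error

rng = random.Random(0)
f = lambda u: math.exp(u)                     # true mean: e - 1 ~ 1.71828

est1, se1 = crude_mc(f, 1_000, rng)
est2, se2 = crude_mc(f, 4_000, rng)           # 4x the samples ...
print(est1, se1)
print(est2, se2)                              # ... roughly halves the SE
```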
Common Random Numbers (CRN) The common random numbers variance reduction technique is a popular and useful variance reduction technique which applies when we are comparing two or more alternative configurations (of a system) instead of investigating a single configuration. CRN has also been called correlated sampling, matched streams or matched pairs. CRN requires synchronization of the random number streams, which ensures that in addition to using the same random numbers to simulate all configurations, a specific random number used for a specific purpose in one configuration is used for exactly the same purpose in all other configurations. For example, in queueing theory, if we are comparing two different configurations of tellers in a bank, we would want the (random) time of arrival of the N-th customer to be generated using the same draw from a random number stream for both configurations. Underlying principle of the CRN technique Suppose and are the observations from the first and second configurations on the j-th independent replication. We want to estimate If we perform n replications of each configuration and
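The coupling idea behind CRN can be sketched as follows, using a toy exponential service-time model (the model, rates, and function names are hypothetical, chosen only to make the variance reduction visible):

```python
# Common random numbers: estimate E[f(U)] - E[g(U)] for two system
# "configurations". Reusing the same uniform draw for both configurations
# couples the estimators positively and shrinks Var(f(U) - g(U)).
import math
import random
import statistics

def waiting_time(service_rate, u):
    # toy model (hypothetical): exponential service time at the given rate
    return -math.log(1.0 - u) / service_rate

def diff_crn(n, rng):
    # one stream, with each draw reused for both configurations
    return [waiting_time(1.0, (u := rng.random())) - waiting_time(1.2, u)
            for _ in range(n)]

def diff_indep(n, rng):
    # independent draws for the two configurations
    return [waiting_time(1.0, rng.random()) - waiting_time(1.2, rng.random())
            for _ in range(n)]

rng = random.Random(42)
v_crn = statistics.pvariance(diff_crn(10_000, rng))
v_ind = statistics.pvariance(diff_indep(10_000, rng))
print(v_crn)   # small: the difference is deterministic given u
print(v_ind)   # much larger
```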
https://en.wikipedia.org/wiki/Cayley%27s%20nodal%20cubic%20surface
In algebraic geometry, the Cayley surface, named after Arthur Cayley, is a cubic nodal surface in 3-dimensional projective space with four conical points. It can be given by the equation when the four singular points are those with three vanishing coordinates. Changing variables gives several other simple equations defining the Cayley surface. As a del Pezzo surface of degree 3, the Cayley surface is given by the linear system of cubics in the projective plane passing through the 6 vertices of the complete quadrilateral. This contracts the 4 sides of the complete quadrilateral to the 4 nodes of the Cayley surface, while blowing up its 6 vertices to the lines through two of them. The surface is a section through the Segre cubic. The surface contains nine lines, 11 tritangents and no double-sixes. A number of affine forms of the surface have been presented. Hunt uses by transforming coordinates to and dehomogenizing by setting . A more symmetrical form is References External links Cayley’s Nodal Cubic Surface, John Baez, Visual Insight, 15 August, 2016 Cayley Surface on MathCurve. Algebraic surfaces Complex surfaces
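The defining equation was lost in extraction; in the coordinates where the four nodes are the coordinate points, the standard form is F = wxy + xyz + yzw + zwx = 0. The sketch below (hand-coded gradient, no symbolic library) checks that F and all four partial derivatives vanish at the coordinate points, i.e. that they are singular:

```python
# The Cayley nodal cubic in its standard form F = wxy + xyz + yzw + zwx = 0
# (assumed here, since the equation was elided above). The gradient vanishes
# at the four coordinate points, confirming they are the conical nodes.

def F(x, y, z, w):
    return w*x*y + x*y*z + y*z*w + z*w*x

def grad(x, y, z, w):
    return (w*y + y*z + z*w,    # dF/dx
            w*x + x*z + z*w,    # dF/dy
            x*y + y*w + w*x,    # dF/dz
            x*y + y*z + z*x)    # dF/dw

nodes = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
for p in nodes:
    assert F(*p) == 0 and grad(*p) == (0, 0, 0, 0)
print("all four coordinate points are singular points of F = 0")
```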
https://en.wikipedia.org/wiki/Butterfly%20curve%20%28algebraic%29
In mathematics, the algebraic butterfly curve is a plane algebraic curve of degree six, given by the equation The butterfly curve has a single singularity with delta invariant three, which means it is a curve of genus seven. The only plane curves of genus seven are singular, since seven is not a triangular number, and the minimum degree for such a curve is six. The butterfly curve has branching number and multiplicity two, and hence the singularity link has two components, pictured at right. The area of the algebraic butterfly curve is given by (with gamma function ) and its arc length s by See also Butterfly curve (transcendental) References External links -- Sequence for the area of algebraic butterfly -- Sequence for the arc length of algebraic butterfly curve Sextic curves
https://en.wikipedia.org/wiki/Butterfly%20curve
Butterfly curve may refer to: Butterfly curve (algebraic), a curve defined by a trinomial Butterfly curve (transcendental), a curve based on sine functions Mathematics disambiguation pages
https://en.wikipedia.org/wiki/Bitangents%20of%20a%20quartic
In the theory of algebraic plane curves, a general quartic plane curve has 28 bitangent lines, lines that are tangent to the curve in two places. These lines exist in the complex projective plane, but it is possible to define quartic curves for which all 28 of these lines have real numbers as their coordinates and therefore belong to the Euclidean plane. An explicit quartic with twenty-eight real bitangents was first given by . As Plücker showed, the number of real bitangents of any quartic must be 28, 16, or a number less than 9. Another quartic with 28 real bitangents can be formed by the locus of centers of ellipses with fixed axis lengths, tangent to two non-parallel lines. gave a different construction of a quartic with twenty-eight bitangents, formed by projecting a cubic surface; twenty-seven of the bitangents to Shioda's curve are real while the twenty-eighth is the line at infinity in the projective plane. Example The Trott curve, another curve with 28 real bitangents, is the set of points (x,y) satisfying the degree four polynomial equation These points form a nonsingular quartic curve that has genus three and that has twenty-eight real bitangents. Like the examples of Plücker and of Blum and Guinand, the Trott curve has four separated ovals, the maximum number for a curve of degree four, and hence is an M-curve. The four ovals can be grouped into six different pairs of ovals; for each pair of ovals there are four bitangents touching both ovals in the pair, two that separate the two ovals, and two that do not. Additionally, each oval bounds a nonconvex region of the plane and has one bitangent spanning the nonconvex portion of its boundary. Connections to other structures The dual curve to a quartic curve has 28 real ordinary double points, dual to the 28 bitangents of the primal curve. 
The 28 bitangents of a quartic may also be placed in correspondence with symbols of the form where are all zero or one and where There are 64 choices for , but only 28 of these choices produce an odd sum. One may also interpret as the homogeneous coordinates of a point of the Fano plane and as the coordinates of a line in the same finite projective plane; the condition that the sum is odd is equivalent to requiring that the point and the line do not touch each other, and there are 28 different pairs of a point and a line that do not touch. The points and lines of the Fano plane that are disjoint from a non-incident point-line pair form a triangle, and the bitangents of a quartic have been considered as being in correspondence with the 28 triangles of the Fano plane. The Levi graph of the Fano plane is the Heawood graph, in which the triangles of the Fano plane are represented by 6-cycles. The 28 6-cycles of the Heawood graph in turn correspond to the 28 vertices of the Coxeter graph. The 28 bitangents of a quartic also correspond to pairs of the 56 lines on a degree-2 del Pezzo surface, and to the 28 odd theta characteristics. The 27 lines
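The two counts described above, odd-sum symbols and non-incident point-line pairs of the Fano plane, can be verified directly (the brute-force enumeration is ours):

```python
# Counting the 28 bitangent symbols: sextuples (a,b,c,d,e,f) in {0,1}^6 with
# ad + be + cf odd -- equivalently, non-incident (point, line) pairs in the
# Fano plane.
from itertools import product

symbols = [s for s in product((0, 1), repeat=6)
           if (s[0]*s[3] + s[1]*s[4] + s[2]*s[5]) % 2 == 1]
print(len(symbols))   # 28 of the 64 sextuples have odd sum

# Cross-check via the Fano plane over GF(2): points are nonzero vectors,
# lines are nonzero functionals; a point lies on a line iff their dot
# product is even. Each line contains 3 of the 7 points, so it misses 4,
# giving 7 * 4 = 28 non-incident pairs.
points = [p for p in product((0, 1), repeat=3) if any(p)]
lines = points
non_incident = [(p, l) for p in points for l in lines
                if sum(pi * li for pi, li in zip(p, l)) % 2 == 1]
assert len(symbols) == len(non_incident) == 28
```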
https://en.wikipedia.org/wiki/Polar%20set%20%28potential%20theory%29
In mathematics, in the area of classical potential theory, polar sets are the "negligible sets", similar to the way in which sets of measure zero are the negligible sets in measure theory. Definition A set in (where ) is a polar set if there is a non-constant superharmonic function on such that Note that there are other (equivalent) ways in which polar sets may be defined, such as by replacing "subharmonic" by "superharmonic", and by in the definition above. Properties The most important properties of polar sets are: A singleton set in is polar. A countable set in is polar. The union of a countable collection of polar sets is polar. A polar set has Lebesgue measure zero in Nearly everywhere A property holds nearly everywhere in a set S if it holds on S−E where E is a Borel polar set. If P holds nearly everywhere then it holds almost everywhere. See also Pluripolar set References External links Subharmonic functions
https://en.wikipedia.org/wiki/Harnack%27s%20curve%20theorem
In real algebraic geometry, Harnack's curve theorem, named after Axel Harnack, gives the possible numbers of connected components that an algebraic curve can have, in terms of the degree of the curve. For any algebraic curve of degree in the real projective plane, the number of components is bounded by The maximum number is one more than the maximum genus of a curve of degree , attained when the curve is nonsingular. Moreover, any number of components in this range of possible values can be attained. A curve which attains the maximum number of real components is called an M-curve (from "maximum") – for example, an elliptic curve with two components, such as , or the Trott curve, a quartic with four components. This theorem formed the background to Hilbert's sixteenth problem. In a more recent development, a Harnack curve has been shown to be a curve whose amoeba has area equal to that of the Newton polygon of the polynomial , which is called the characteristic curve of dimer models; every Harnack curve is the spectral curve of some dimer model. References Dmitrii Andreevich Gudkov, The topology of real projective algebraic varieties, Uspekhi Mat. Nauk 29 (1974), 3–79 (Russian), English transl., Russian Math. Surveys 29:4 (1974), 1–79 Carl Gustav Axel Harnack, Ueber die Vieltheiligkeit der ebenen algebraischen Curven, Math. Ann. 10 (1876), 189–199 George Wilson, Hilbert's sixteenth problem, Topology 17 (1978), 53–74 Real algebraic geometry Theorems in algebraic geometry
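The bound itself was lost in extraction; in its standard form it is (d − 1)(d − 2)/2 + 1, one more than the genus of a nonsingular curve of degree d. Assuming that form, it can be tabulated directly:

```python
# Harnack's bound (standard form, assumed since the formula was elided above):
# a real algebraic curve of degree d has at most (d-1)(d-2)/2 + 1 components.

def harnack_bound(d):
    return (d - 1) * (d - 2) // 2 + 1

assert harnack_bound(2) == 1   # a conic has a single oval
assert harnack_bound(3) == 2   # M-cubic: e.g. the two-component elliptic
                               # curve mentioned above
assert harnack_bound(4) == 4   # M-quartic: e.g. the Trott curve
print([harnack_bound(d) for d in range(1, 8)])   # [1, 1, 2, 4, 7, 11, 16]
```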
https://en.wikipedia.org/wiki/Lemniscate
In algebraic geometry, a lemniscate is any of several figure-eight or -shaped curves. The word comes from the Latin meaning "decorated with ribbons", from the Greek meaning "ribbon", or which alternatively may refer to the wool from which the ribbons were made. Curves that have been called a lemniscate include three quartic plane curves: the hippopede or lemniscate of Booth, the lemniscate of Bernoulli, and the lemniscate of Gerono. The study of lemniscates (and in particular the hippopede) dates to ancient Greek mathematics, but the term "lemniscate" for curves of this type comes from the work of Jacob Bernoulli in the late 17th century. History and examples Lemniscate of Booth The consideration of curves with a figure-eight shape can be traced back to Proclus, a Greek Neoplatonist philosopher and mathematician who lived in the 5th century AD. Proclus considered the cross-sections of a torus by a plane parallel to the axis of the torus. As he observed, for most such sections the cross section consists of either one or two ovals; however, when the plane is tangent to the inner surface of the torus, the cross-section takes on a figure-eight shape, which Proclus called a horse fetter (a device for holding two feet of a horse together), or "hippopede" in Greek. The name "lemniscate of Booth" for this curve dates to its study by the 19th-century mathematician James Booth. The lemniscate may be defined as an algebraic curve, the zero set of the quartic polynomial when the parameter d is negative (or zero for the special case where the lemniscate becomes a pair of externally tangent circles). For positive values of d one instead obtains the oval of Booth. Lemniscate of Bernoulli In 1680, Cassini studied a family of curves, now called the Cassini oval, defined as follows: the locus of all points, the product of whose distances from two fixed points, the curves' foci, is a constant. 
Under very particular circumstances (when the half-distance between the points is equal to the square root of the constant) this gives rise to a lemniscate. In 1694, Johann Bernoulli studied the lemniscate case of the Cassini oval, now known as the lemniscate of Bernoulli (shown above), in connection with a problem of "isochrones" that had been posed earlier by Leibniz. Like the hippopede, it is an algebraic curve, the zero set of the polynomial . Bernoulli's brother Jacob Bernoulli also studied the same curve in the same year, and gave it its name, the lemniscate. It may also be defined geometrically as the locus of points whose product of distances from two foci equals the square of half the interfocal distance. It is a special case of the hippopede (lemniscate of Booth), with , and may be formed as a cross-section of a torus whose inner hole and circular cross-sections have the same diameter as each other. The lemniscatic elliptic functions are analogues of trigonometric functions for the lemniscate of Bernoulli, and the lemniscate constants arise in evaluating
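The focal property stated above can be checked numerically. The implicit equation was lost in extraction; a standard form is (x² + y²)² = 2c²(x² − y²), with foci (±c, 0), for which the product of focal distances equals c², the square of half the interfocal distance:

```python
# Numeric check of the focal property of the lemniscate of Bernoulli in the
# assumed standard form (x^2 + y^2)^2 = 2 c^2 (x^2 - y^2), polar form
# r^2 = 2 c^2 cos(2 theta), foci at (+-c, 0).
import math

c = 1.0
for t in range(-78, 79):                      # |theta| < pi/4, where cos(2t) > 0
    theta = t / 100
    r = math.sqrt(2 * c * c * math.cos(2 * theta))
    x, y = r * math.cos(theta), r * math.sin(theta)
    d1 = math.hypot(x - c, y)
    d2 = math.hypot(x + c, y)
    assert abs(d1 * d2 - c * c) < 1e-9        # product of focal distances = c^2
print("product of focal distances equals c^2 on the whole right lobe")
```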
https://en.wikipedia.org/wiki/Lemniscate%20of%20Gerono
In algebraic geometry, the lemniscate of Gerono, or lemniscate of Huygens, or figure-eight curve, is a plane algebraic curve of degree four and genus zero and is a lemniscate curve shaped like an symbol, or figure eight. It has equation It was studied by Camille-Christophe Gerono. Parameterization Because the curve is of genus zero, it can be parametrized by rational functions; one means of doing that is Another representation is which reveals that this lemniscate is a special case of a Lissajous figure. Dual curve The dual curve (see Plücker formula), pictured below, has therefore a somewhat different character. Its equation is References External links Algebraic curves Christiaan Huygens
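The implicit equation and parametrization were elided above; a standard form is x⁴ = x² − y², with trigonometric parametrization x = sin t, y = sin t cos t. Assuming that form, a quick numeric verification:

```python
# Verify that x = sin t, y = sin t cos t satisfies x^4 = x^2 - y^2
# (a standard form of the Gerono lemniscate, assumed here):
# x^4 - x^2 + y^2 = sin^2 t (sin^2 t - 1 + cos^2 t) = 0.
import math

for k in range(628):                 # t from 0 to ~2*pi
    t = k / 100
    x, y = math.sin(t), math.sin(t) * math.cos(t)
    assert abs(x**4 - (x**2 - y**2)) < 1e-12
print("parametrization satisfies x^4 = x^2 - y^2")
```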
https://en.wikipedia.org/wiki/Trends%20in%20International%20Mathematics%20and%20Science%20Study
The IEA's Trends in International Mathematics and Science Study (TIMSS) is a series of international assessments of the mathematics and science knowledge of students around the world. The participating students come from a diverse set of educational systems (countries or regional jurisdictions of countries) in terms of economic development, geographical location, and population size. In each of the participating educational systems, a minimum of 4,000 to 5,000 students is evaluated. Contextual data about the conditions in which participating students learn mathematics and science are collected from the students and their teachers, their principals, and their parents via questionnaires. TIMSS is one of the studies established by IEA aimed at allowing educational systems worldwide to compare students' educational achievement and learn from the experiences of others in designing effective education policy. This assessment was first conducted in 1995, and has been administered every four years thereafter. Therefore, some of the participating educational systems have trend data across assessments from 1995 to 2019. TIMSS assesses 4th and 8th grade students, while TIMSS Advanced assesses students in the final year of secondary school in advanced mathematics and physics. 
Definition of Terms "Eighth grade" in the United States is approximately 13–14 years of age and equivalent to: Year 9 (Y9) in England and Wales 2nd Year (S2) in Scotland 2nd Year in the Republic of Ireland 1st Year in South Africa Form 2 in Hong Kong 4ème in France Year 9 in New Zealand Form 2 in Malaysia "Fourth grade" in the United States is approximately equivalent to 9–10 years of age and equivalent to: Year 5 (Y5) in England and Wales Primary 6 (P6) in Scotland Group 6 in the Netherlands CM1 in France Fourth Class in the Republic of Ireland Standard 3 or Year 5 in New Zealand History A precursor to TIMSS was the First International Mathematics Study (FIMS) performed in 1964 in 11 countries for students aged 13 and in the final year of secondary education (FS) under the auspices of the International Association for the Evaluation of Educational Achievement (IEA). This was followed in 1970-71 by the First International Science Study (FISS) for students aged 10, 14, and FS. Fourteen countries tested 10-year-olds; 16 countries tested the older two groups. These were replicated between 1980 and 1984. These early studies were revised and combined by the IEA to create TIMSS, which was first administered in 1995. It was the largest international student assessment study of its time and evaluated students in five grades. In the second cycle (1999) only eighth-grade students were tested. In the next cycles (2003, 2007, 2011, and 2015) both 4th and 8th grade students were assessed. The 2011 cycle was performed in the same year as the IEA's Progress in International Reading Literacy Study (PIRLS), offering a comprehensive assessment of mathematics, science and reading f
https://en.wikipedia.org/wiki/Ruby%20%28hardware%20description%20language%29
Ruby is a hardware description language designed by in 1986 intended to facilitate the notation and development of integrated circuits via relational algebra and functional programming. It should not be confused with RHDL, a hardware description language based on the 1995 Ruby programming language. References External links Hardware description languages
https://en.wikipedia.org/wiki/Craps%20principle
In probability theory, the craps principle is a theorem about event probabilities under repeated iid trials. Let and denote two mutually exclusive events which might occur on a given trial. Then the probability that occurs before equals the conditional probability that occurs given that or occur on the next trial, which is The events and need not be collectively exhaustive (if they are, the result is trivial). Proof Let be the event that occurs before . Let be the event that neither nor occurs on a given trial. Since , and are mutually exclusive and collectively exhaustive for the first trial, we have and . Since the trials are i.i.d., we have . Using and solving the displayed equation for gives the formula . Application If the trials are repetitions of a game between two players, and the events are then the craps principle gives the respective conditional probabilities of each player winning a certain repetition, given that someone wins (i.e., given that a draw does not occur). In fact, the result is only affected by the relative marginal probabilities of winning and ; in particular, the probability of a draw is irrelevant. Stopping If the game is played repeatedly until someone wins, then the conditional probability above is the probability that the player wins the game. This is illustrated below for the original game of craps, using an alternative proof. Craps example If the game being played is craps, then this principle can greatly simplify the computation of the probability of winning in a certain scenario. Specifically, if the first roll is a 4, 5, 6, 8, 9, or 10, then the dice are repeatedly re-rolled until one of two events occurs: Since and are mutually exclusive, the craps principle applies. 
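The proof above can be written out in standard notation (the event names A, B and the "neither" event C are conventional choices, not from the original text):

```latex
\begin{align*}
  \Pr[A \text{ before } B]
    &= \Pr[A] + \Pr[C]\,\Pr[A \text{ before } B]
       && (C = \text{neither } A \text{ nor } B;\ \text{trials i.i.d.})\\
  \Pr[A \text{ before } B]
    &= \frac{\Pr[A]}{1 - \Pr[C]}
     = \frac{\Pr[A]}{\Pr[A] + \Pr[B]}
     = \Pr[A \mid A \cup B].
\end{align*}
```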
For example, if the original roll was a 4, then the probability of winning is This avoids having to sum the infinite series corresponding to all the possible outcomes: Mathematically, we can express the probability of rolling ties followed by rolling the point: The summation becomes an infinite geometric series: which agrees with the earlier result. References Notes Theorems in statistics Probability theorems Statistical principles
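The point-4 example can be computed both ways, by the craps principle and by summing the geometric series over possible numbers of "tie" rolls, using exact rational arithmetic:

```python
# The point-4 craps example: P(roll a 4 before a 7) two ways.
from fractions import Fraction

p4 = Fraction(3, 36)    # ways to roll 4 with two dice: (1,3), (2,2), (3,1)
p7 = Fraction(6, 36)    # ways to roll 7
p_tie = 1 - p4 - p7     # any other total: re-roll

# craps principle: P(4 before 7) = p4 / (p4 + p7)
principle = p4 / (p4 + p7)

# direct summation: sum over k >= 0 of p_tie^k * p4 = p4 / (1 - p_tie)
series = p4 / (1 - p_tie)

assert principle == series == Fraction(1, 3)
print(principle)   # 1/3
```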
https://en.wikipedia.org/wiki/Slutsky%27s%20theorem
In probability theory, Slutsky’s theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables. The theorem was named after Eugen Slutsky. Slutsky's theorem is also attributed to Harald Cramér. Statement Let be sequences of scalar/vector/matrix random elements. If converges in distribution to a random element and converges in probability to a constant , then   provided that c is invertible, where denotes convergence in distribution. Notes: The requirement that Yn converges to a constant is important — if it were to converge to a non-degenerate random variable, the theorem would no longer be valid. For example, let and . The sum for all values of n. Moreover, , but does not converge in distribution to , where , , and and are independent. The theorem remains valid if we replace all convergences in distribution with convergences in probability. Proof This theorem follows from the fact that if Xn converges in distribution to X and Yn converges in probability to a constant c, then the joint vector (Xn, Yn) converges in distribution to (X, c) (see here). Next we apply the continuous mapping theorem, recognizing the functions g(x, y) = x + y, g(x, y) = xy, and g(x, y) = x·y⁻¹ as continuous (for the last function to be continuous, y has to be invertible). See also Convergence of random variables References Further reading Asymptotic theory (statistics) Probability theorems Theorems in statistics
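A numeric illustration under assumed choices (Uniform(0,1) samples; Xₙ the CLT-normalized mean, which tends in distribution to N(0,1); Yₙ the raw mean, which tends in probability to the constant 1/2), so by Slutsky the product XₙYₙ is approximately N(0, 1/4):

```python
# Simulation sketch of Slutsky's theorem for the product X_n * Y_n.
import math
import random
import statistics

rng = random.Random(1)
n, reps = 1_000, 2_000
mu, sigma = 0.5, math.sqrt(1 / 12)            # mean, sd of Uniform(0,1)

products = []
for _ in range(reps):
    xs = [rng.random() for _ in range(n)]
    m = statistics.fmean(xs)
    x_n = math.sqrt(n) * (m - mu) / sigma     # approx N(0, 1) by the CLT
    y_n = m                                   # -> 1/2 in probability
    products.append(x_n * y_n)

print(statistics.fmean(products))             # approx 0
print(statistics.pvariance(products))         # approx (1/2)^2 * 1 = 1/4
```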
https://en.wikipedia.org/wiki/Distortion%20synthesis
Distortion synthesis is a group of sound synthesis techniques which modify existing sounds to produce more complex sounds (or timbres), usually by using non-linear circuits or mathematics. While some synthesis methods achieve sonic complexity by using many oscillators, distortion methods create a frequency spectrum which has many more components than oscillators. Some distortion techniques are: FM synthesis, waveshaping synthesis, and discrete summation formulas. FM synthesis Frequency modulation synthesis distorts the carrier frequency of an oscillator by modulating it with another signal. The distortion can be controlled by means of a modulation index. The method known as phase distortion synthesis is similar to FM. Waveshaping synthesis Waveshaping synthesis changes an original waveform by responding to its amplitude in a non-linear fashion. It can generate a bandwidth-limited spectrum, and can be continuously controlled with an index. The clipping caused by overdriving an audio amplifier is a simple example of this method, changing a sine wave into a square-like wave. (Note that direct digital implementations suffer from aliasing of the clipped signal's infinite number of harmonics, however.) Discrete summation formulas DSF synthesis refers to algorithmic synthesis methods which use mathematical formulas to sum, or add together, many numbers to achieve a desired wave shape. This powerful method allows, for example, synthesizing a 3-formant voice in a manner similar to FM voice synthesis. DSF allows the synthesis of harmonic and inharmonic, band-limited or unlimited spectra, and can be controlled by an index. As Roads points out, by reducing digital synthesis of complex spectra to a few parameters, DSF can be much more economical. Notable users Jean-Claude Risset was one notable pioneer in the adoption of distortion methods. References External links Sound synthesis types
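A minimal waveshaping sketch: driving a sine through the non-linear shaper tanh(k·x), where k plays the role of the distortion index. Because this shaper is odd-symmetric, only odd harmonics appear, and their energy grows with k (the naive DFT helper is ours, kept dependency-free):

```python
# Waveshaping a sine with tanh(k * x) and measuring the harmonic content.
import cmath
import math

def dft_mag(signal, harmonic):
    """Magnitude of one DFT bin, normalized by the signal length."""
    n = len(signal)
    return abs(sum(signal[i] * cmath.exp(-2j * math.pi * harmonic * i / n)
                   for i in range(n))) / n

N = 1024
results = {}
for k in (0.5, 2.0, 8.0):                     # increasing distortion index
    shaped = [math.tanh(k * math.sin(2 * math.pi * i / N)) for i in range(N)]
    results[k] = [dft_mag(shaped, h) for h in range(6)]
    print(f"k={k}: fund={results[k][1]:.3f}, 3rd={results[k][3]:.3f}, "
          f"5th={results[k][5]:.3f}, 2nd={results[k][2]:.2e}")
```

For small k the output is nearly the input sine; for large k it approaches a square-like wave, exactly the overdrive behavior described above.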
https://en.wikipedia.org/wiki/Random%20element
In probability theory, a random element is a generalization of the concept of random variable to more complicated spaces than the simple real line. The concept was introduced by Maurice Fréchet, who commented that the “development of probability theory and expansion of area of its applications have led to necessity to pass from schemes where (random) outcomes of experiments can be described by number or a finite set of numbers, to schemes where outcomes of experiments represent, for example, vectors, functions, processes, fields, series, transformations, and also sets or collections of sets.” The modern-day usage of “random element” frequently assumes the space of values is a topological vector space, often a Banach or Hilbert space with a specified natural sigma algebra of subsets. Definition Let (Ω, ℱ, P) be a probability space, and (E, ℰ) a measurable space. A random element with values in E is a function X: Ω → E which is (ℱ, ℰ)-measurable. That is, a function X such that for any B ∈ ℰ, the preimage of B lies in ℱ. Sometimes random elements with values in E are called E-valued random variables. Note if (E, ℰ) = (ℝ, ℬ(ℝ)), where ℝ are the real numbers and ℬ(ℝ) is its Borel σ-algebra, then the definition of random element is the classical definition of random variable. The definition of a random element X with values in a Banach space B is typically understood to utilize the smallest σ-algebra on B for which every bounded linear functional is measurable. An equivalent definition, in this case, to the above, is that a map X: Ω → B, from a probability space, is a random element if f(X) is a random variable for every bounded linear functional f, or, equivalently, that X is weakly measurable. Examples of random elements Random variable A random variable is the simplest type of random element. It is a measurable function from the set of possible outcomes Ω to ℝ. As a real-valued function, X often describes some numerical quantity of a given event. E.g. the number of heads after a certain number of coin flips; the heights of different people. 
When the image (or range) of is finite or countably infinite, the random variable is called a discrete random variable and its distribution can be described by a probability mass function which assigns a probability to each value in the image of . If the image is uncountably infinite then is called a continuous random variable. In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous, for example a mixture distribution. Such random variables cannot be described by a probability density or a probability mass function. Random vector A random vector is a column vector (or its transpose, which is a row vector) whose components are scalar-valued random variables on the same probability space , whe
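To make the definitions concrete, here is a toy example (invented for illustration, not from the article): a random vector on the finite probability space of two fair coin flips, taking values in R^2, together with its pushed-forward distribution:

```python
from collections import Counter

# Sample space: two fair coin flips, all four outcomes equiprobable.
omega = [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]

def X(outcome):
    """A random vector: (number of heads, indicator that the flips agree)."""
    heads = sum(1 for c in outcome if c == 'H')
    agree = 1 if outcome[0] == outcome[1] else 0
    return (heads, agree)

# Distribution of X: push the uniform measure on omega forward through X.
counts = Counter(X(w) for w in omega)
probs = {value: count / len(omega) for value, count in counts.items()}
print(probs)
```

Every component of X is itself a real-valued random variable on the same probability space, which is exactly the defining property of a random vector.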
https://en.wikipedia.org/wiki/Tristan%20Needham
Tristan Needham is a British mathematician and professor of mathematics at the University of San Francisco. Education, career and publications Tristan is the son of social anthropologist Rodney Needham of Oxford, England. He attended the Dragon School. Later Needham attended the University of Oxford and studied physics at Merton College, and then transferred to the Mathematical Institute where he studied under Roger Penrose. He obtained his D.Phil. in 1987 and in 1989 took up his post at University of San Francisco. In 1993 he published A Visual Explanation of Jensen's inequality. The following year he published The Geometry of Harmonic Functions, which won the Carl B. Allendoerfer Award for 1995. Needham wrote the book Visual Complex Analysis, which has received positive reviews. Though it is described as a "radical first course in complex analysis aimed at undergraduates", writing in Mathematical Reviews D.H. Armitage said that "the book will be appreciated most by those who already know some complex analysis." In fact Douglas Hofstadter wrote "Needham's work of art with its hundreds and hundreds of beautiful figures á la Latta, brings complex analysis alive in an unprecedented manner". Hofstadter had studied complex analysis at Stanford with Gordon Latta, and he recalled "Latta's amazingly precise and elegant blackboard diagrams". In 2001 a German language version, translated by Norbert Herrmann and Ina Paschen, was published by R. Oldenbourg Verlag, Munich. In 2021, Needham published Visual Differential Geometry and Forms: A Mathematical Drama in Five Acts (Princeton University Press). (The original title was Visual Differential Geometry.) Much of this material was already developed in the writing of Visual Complex Analysis. See also Amplitwist Bibliography Needham, Tristan. Visual Complex Analysis. The Clarendon Press, Oxford University Press, New York, 1997 . Needham, Tristan. Visual Differential Geometry and Forms: A Mathematical Drama in Five Acts. 
Princeton University Press, Princeton, 2021 . Notes External links Author website for the book Visual Complex Analysis Princeton University Press website for the book Visual Differential Geometry and Forms: A Mathematical Drama in Five Acts Author website (including Errata) for the book Visual Differential Geometry and Forms: A Mathematical Drama in Five Acts 20th-century American mathematicians 21st-century American mathematicians Living people Year of birth missing (living people) Alumni of Merton College, Oxford People educated at The Dragon School University of San Francisco faculty American textbook writers
https://en.wikipedia.org/wiki/R%20group
R group may refer to: In chemistry: Pendant group or side group Side chain Substituent In mathematics: Tempered representation
https://en.wikipedia.org/wiki/Polynomial%20lemniscate
In mathematics, a polynomial lemniscate or polynomial level curve is a plane algebraic curve of degree 2n, constructed from a polynomial p with complex coefficients of degree n. For any such polynomial p and positive real number c, we may define a set of complex numbers by |p(z)| = c. This set of numbers may be equated to points in the real Cartesian plane, leading to an algebraic curve ƒ(x, y) = c^2 of degree 2n, which results from expanding out |p(z)|^2 = c^2 in terms of z = x + iy. When p is a polynomial of degree 1 then the resulting curve is simply a circle whose center is the zero of p. When p is a polynomial of degree 2 then the curve is a Cassini oval. Erdős lemniscate A conjecture of Erdős which has attracted considerable interest concerns the maximum length of a polynomial lemniscate ƒ(x, y) = 1 of degree 2n when p is monic, which Erdős conjectured was attained when p(z) = z^n − 1. This is still not proved but Fryntov and Nazarov proved that p gives a local maximum. In the case when n = 2, the Erdős lemniscate is the Lemniscate of Bernoulli and it has been proven that this is indeed the maximal length in degree four. The Erdős lemniscate has three ordinary n-fold points, one of which is at the origin, and a genus of (n − 1)(n − 2)/2. By inverting the Erdős lemniscate in the unit circle, one obtains a nonsingular curve of degree n. Generic polynomial lemniscate In general, a polynomial lemniscate will not touch at the origin, and will have only two ordinary n-fold singularities, and hence a genus of (n − 1)^2. As a real curve, it can have a number of disconnected components. Hence, it will not look like a lemniscate, making the name something of a misnomer. An interesting example of such polynomial lemniscates is the family of Mandelbrot curves. If we set p_0 = z, and p_n = p_{n−1}^2 + z, then the corresponding polynomial lemniscates M_n defined by |p_n(z)| = 2 converge to the boundary of the Mandelbrot set. The Mandelbrot curves are of degree 2^(n+1). 
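The Mandelbrot-curve recursion is easy to test numerically. The sketch below (not from the article) evaluates p_n and checks on which side of the lemniscate |p_n(z)| = 2 a point lies; points of the Mandelbrot set stay inside every curve M_n, while points outside it eventually escape:

```python
def p(n, z):
    """p_0 = z, p_k = p_{k-1}^2 + z: the Mandelbrot iteration as a polynomial in z."""
    w = z
    for _ in range(n):
        w = w * w + z
    return w

def inside(n, z, c=2.0):
    """True if z lies inside the degree-2^(n+1) lemniscate |p_n(z)| = c."""
    return abs(p(n, z)) < c

# z = -1 is in the Mandelbrot set (period-2 orbit), so it stays inside each M_n;
# z = 1 escapes to infinity, so it lies outside the curves.
print([inside(n, -1 + 0j) for n in (1, 5, 10)])
print([inside(n, 1 + 0j) for n in (1, 5, 10)])
```

Plotting the level set {z : |p_n(z)| = 2} on a grid for increasing n visibly traces out successively better approximations to the Mandelbrot set boundary.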
Notes References Alexandre Eremenko and Walter Hayman, On the length of lemniscates, Michigan Math. J., (1999), 46, no. 2, 409–415 O. S. Kusnetzova and V. G. Tkachev, Length functions of lemniscates, Manuscripta Math., (2003), 112, 519–538 Plane curves Algebraic curves
https://en.wikipedia.org/wiki/The%20Probability%20Broach
The Probability Broach is a 1979 science fiction novel by American writer L. Neil Smith. It is set in an alternate history, the so-called "Gallatin Universe", where a libertarian society has formed on the North American continent, styled the North American Confederacy (NAC). This history was created when the Declaration of Independence has the word unanimous added to the preamble, to read that governments "derive their just power from the unanimous consent of the governed". Plot summary Edward William "Win" Bear is an Ute Indian who works for the Denver Police Department in a version of the United States in an alternate history of 1987 that is controlled by an anti-capitalist, ecofascist government complete with a new police force created in 1984 called the Federal Security Police (FSP, or "SecPol" as it is more commonly known) reminiscent of the Gestapo. Henry M. Jackson is president, citizens' freedoms are very limited, and many laws and regulations have been passed. Examples include hoarding precious metals, such as silver and gold, is illegal and due to strict gun control policies, only the police and citizens with federal permits are allowed to carry guns. Bear is called to investigate the unusual murder of physicist Vaughn Meiss; he eventually finds himself projected into the North American Confederacy by means of the "Probability Broach", an inter-dimensional conduit originally developed as a means for interstellar travel in the North American Confederacy by a bottlenose dolphin physicist, named Ooloorie Eckickeck P'Wheet, and her human compatriot, Dr. Dora Jayne Thorens. Win encounters his NAC counterpart, Edward William "Ed" Bear, and Ed's neighbors, most notably the "healer" Clarissa Olson and Lucy Kropotkin, who is later revealed to be 135 years old. Lucy's life becomes the vantage point by which Win is acclimated to life in the NAC and Laporte, the NAC equivalent to Denver. 
Win and Ed unravel the mystery of the Meiss murder and learn that he was killed to hide an effort by SecPol to conquer the NAC with the help of Hamiltonian forces on the NAC side, led by John Jay Madison, a.k.a. the infamous Prussian expatriate and 1918 war hero Manfred von Richthofen, known here as the Red Knight of Prussia. Win, Ed, Lucy and Clarissa lead the effort to notify the nascent NAC government of the threat. En route to the meeting of the Continental Congress, Ed and Clarissa are kidnapped, leaving Win and Lucy to reveal the plot. After fighting (and winning) a duel with a SecPol agent, Win and Lucy rescue their friends and track Madison and the Hamiltonians to a small town outside Laporte. Win sets off an explosion that eliminates all of the Hamiltonians. Win elects to remain in the NAC and marries Clarissa. Ed marries Lucy, who at the time of the story is awaiting a delayed "regeneration" because of an accident involving massive radiation exposure, and they then set out for the Asteroid belt to build a new life for themselves on the NAC frontier
https://en.wikipedia.org/wiki/North%20American%20Confederacy
The North American Confederacy is an alternate history series of novels created by L. Neil Smith. The series begins with The Probability Broach and there are eight sequels. The stories take place in a fictional country of the same name. Novels By publication The Probability Broach (1979) The Venus Belt (1980) Their Majesties' Bucketeers (1981) The Nagasaki Vector (1983) Tom Paine Maru (1984) The Gallatin Divergence (1985) Brightsuit MacBear (1988) Taflak Lysandra (1989) The American Zone (2001) By chronology The Probability Broach (1979) The Nagasaki Vector (1983) The American Zone (2001) The Venus Belt (1980) The Gallatin Divergence (1985) Tom Paine Maru (1984) Brightsuit MacBear (1988) Taflak Lysandra (1989) Their Majesties' Bucketeers (1981) takes place in the same universe, although none of the characters from the series appears in it. History The ostensible point of divergence leading to the North American Confederacy (NAC) is the addition of a single word in the preamble to the United States Declaration of Independence, wherein it states that governments "derive their just power from the unanimous consent of the governed." Inspired by this wording, Albert Gallatin intercedes in the Whiskey Rebellion in 1794 to the benefit of the farmers rather than the fledgling United States government as he does in real life. This results in the rebellion becoming a Second American Revolution, which ultimately leads to the overthrow of the government and the execution by firing squad of George Washington for treason. The United States Constitution is declared null and void, and Gallatin is proclaimed the second president. In 1795, a new caretaker government is established, and a revised version of the Articles of Confederation is ratified in 1797, but with a much greater emphasis on individual and economic freedom. After the war, Alexander Hamilton flees to Prussia and lives there until he is killed in a duel in 1804. 
Over the ensuing century, the remnants of central government dissipate. The government can no longer create money, and only individual people can, it being backed by gold, silver, wheat, corn, iron, and even whiskey. In 1803, Gallatin and James Monroe arrange the Louisiana Purchase from the French Empire, borrowing money from private sources against the value of the land. Thomas Jefferson successfully leads an abolitionist movement that brings a peaceful end to slavery in 1820. Jefferson is also responsible for developing new systems of weights and measures (metric inches and pounds, among others) in 1800. He also devises a new calendar system to honor the birth of liberty as the old year 1776 becomes Year Zero, Anno Liberati's (Latin for Year of Liberation). When Jefferson first proposes the new calendar system in 1796, he originally marks it as Gallatin's ascension to the presidency. However, Gallatin protests that the real revolution was in 1776 and that the Federalist period should be regarded as an aberratio
https://en.wikipedia.org/wiki/Science%20and%20technology%20in%20the%20Ottoman%20Empire
During its 600-year existence, the Ottoman Empire made significant advances in science and technology, in a wide range of fields including mathematics, astronomy and medicine. The Islamic Golden Age was traditionally believed to have ended in the thirteenth century, but has been extended to the fifteenth and sixteenth centuries by some, who have included continuing scientific activity in the Ottoman Empire in the west and in Persia and Mughal India in the east. Education Advancement of madrasah The madrasah education institution, which first originated during the Seljuk period, reached its highest point during the Ottoman reign. Education of Ottoman Women in Medicine Harems were places within a Sultan's palace where his wives, daughters, and female slaves were expected to stay. However, accounts of teaching young girls and boys here have been recorded. Most education of women in the Ottoman Empire was focused on teaching the women to be good house wives and social etiquette. Although the formal education of women was not popular, female physicians and surgeons were still accounted for. Female physicians were given an informal education instead of a formal one. However, the first properly trained female Turkish physician was Safiye Ali. Ali studied medicine in Germany and opened her own practice in Istanbul in 1922, 1 year before the fall of the Ottoman Empire. Technical education Istanbul Technical University has a history that began in 1773. It was founded by Sultan Mustafa III as the Imperial Naval Engineers' School (original name: Mühendishane-i Bahr-i Humayun), and it was originally dedicated to the training of ship builders and cartographers. In 1795 the scope of the school was broadened to train technical military staff to modernize the Ottoman army to match the European standards. In 1845 the engineering department of the school was further developed with the addition of a program devoted to the training of architects. 
The scope and name of the school were extended and changed again in 1883 and in 1909 the school became a public engineering school which was aimed at training civil engineers who could create new infrastructure to develop the empire. Astronomy Astronomy was a very important discipline in the Ottoman Empire. Ali Quşhji, one of the most important astronomers of the state, managed to make the first map of the Moon and wrote the first book describing the shapes of the Moon. At the same time, a new system was developed for Mercury. Mustafa ibn Muwaqqit and Muhammad Al-Qunawi developed the first astronomical calculations measuring minutes and seconds. Taqi al-Din later built the Constantinople Observatory of Taqi ad-Din in 1577, where he carried out astronomical observations until 1580. He produced a Zij (named Unbored Pearl) and astronomical catalogues that were more accurate than those of his contemporaries, Tycho Brahe and Nicolaus Copernicus. Taqi al-Din was also the first astronomer to employ a decimal point notatio
https://en.wikipedia.org/wiki/Cap%20product
In algebraic topology the cap product is a method of adjoining a chain of degree p with a cochain of degree q, such that q ≤ p, to form a composite chain of degree p − q. It was introduced by Eduard Čech in 1936, and independently by Hassler Whitney in 1938. Definition Let X be a topological space and R a coefficient ring. The cap product is a bilinear map on singular homology and cohomology defined by contracting a singular chain with a singular cochain by the formula: Here, the notation indicates the restriction of the simplicial map to its face spanned by the vectors of the base, see Simplex. Interpretation In analogy with the interpretation of the cup product in terms of the Künneth formula, we can explain the existence of the cap product in the following way. Using CW approximation we may assume that is a CW-complex and (and ) is the complex of its cellular chains (or cochains, respectively). Consider then the composition where we are taking tensor products of chain complexes, is the diagonal map which induces the map on the chain complex, and is the evaluation map (always 0 except for ). This composition then passes to the quotient to define the cap product , and looking carefully at the above composition shows that it indeed takes the form of maps , which is always zero for . Fundamental Class For any point in , we have the long-exact sequence in homology (with coefficients in ) of the pair (M, M - {x}) (See Relative homology) An element of is called the fundamental class for if is a generator of . A fundamental class of exists if is closed and R-orientable. In fact, if is a closed, connected and -orientable manifold, the map is an isomorphism for all in and hence, we can choose any generator of as the fundamental class. Relation with Poincaré duality For a closed -orientable n-manifold with fundamental class in (which we can choose to be any generator of ), the cap product map is an isomorphism for all . This result is famously called Poincaré duality. 
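The contraction formula elided in the definition above can be written, in one standard convention (as in Hatcher's Algebraic Topology; signs and vertex orderings vary by author), as:

```latex
\frown\colon C_p(X;R)\times C^q(X;R)\longrightarrow C_{p-q}(X;R),
\qquad
\sigma \frown \psi \;=\; \psi\bigl(\sigma|_{[v_0,\ldots,v_q]}\bigr)\,\sigma|_{[v_q,\ldots,v_p]},
```

where σ is a singular p-simplex, ψ is a q-cochain with q ≤ p, and σ is restricted to the front q-face and back (p − q)-face of the simplex. Passing to classes gives the bilinear map H_p(X; R) × H^q(X; R) → H_{p−q}(X; R).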
The slant product If in the above discussion one replaces by , the construction can be (partially) replicated starting from the mappings and to get, respectively, slant products : and In case X = Y, the first one is related to the cap product by the diagonal map: . These ‘products’ are in some ways more like division than multiplication, which is reflected in their notation. Equations The boundary of a cap product is given by : Given a map f the induced maps satisfy : The cap and cup product are related by : where , and An interesting consequence of the last equation is that it makes into a right module. See also Cup product Poincaré duality Singular homology Homology theory References Hatcher, A., Algebraic Topology, Cambridge University Press (2002) . Detailed discussion of homology theories for simplicial complexes and manifolds, singular homology, etc. Section 2.7 provides a category-theoretic presentation of the theorem a
https://en.wikipedia.org/wiki/Sol%20Garfunkel
Solomon "Sol" Garfunkel (born 1943 in Brooklyn, New York) is an American mathematician who has dedicated his career to mathematics education. Since 1980, he has served as the executive director of the award-winning non-profit organization "Consortium for Mathematics and Its Applications", working with teachers, students, and business people to create learning environments where mathematics is used to investigate and model real issues in our world. Garfunkel is best known for hosting the 1987 PBS series "For All Practical Purposes: An Introduction to Contemporary Mathematics", followed by the 1991 series "Algebra: In Simplest Terms", both often used in classrooms. Early life At the age of 24, Garfunkel received his PhD in Mathematical Logic from the University of Wisconsin–Madison. While in attendance he worked with Howard Jerome Keisler, Michael D. Morley, and Stephen Kleene. Garfunkel then worked at Cornell University and the University of Connecticut at Storrs. Garfunkel continued his work advocating for the improvement of mathematics in public school systems. He coauthored the article "How to Fix Our Math Education" with David Mumford, emeritus professor of mathematics at Brown University. Since its publication, this article has been credited with successfully bringing new awareness to the topic. The article has become a topic for a vast number of blogs, and has been translated into several languages. Garfunkel has served as project director for several National Science Foundation curriculum projects, and in 2009 was awarded the Glenn Gilbert National Leadership Award from the National Council of Supervisors of Mathematics. Most recently, Garfunkel co-founded the International Mathematical Modeling Challenge. 
References External links https://web.archive.org/web/20060902004846/http://itech.fgcu.edu/faculty/fmaa/1998program/plenary.html http://www.comap.com/product/?idx=746 For All Practical Purposes hosted by Sol http://www.learner.org/resources/series66.html Algebra Video Course hosted by Sol 1943 births 20th-century American mathematicians 21st-century American mathematicians American logicians University of Wisconsin–Madison alumni Living people
https://en.wikipedia.org/wiki/Dedekind%20psi%20function
In number theory, the Dedekind psi function is the multiplicative function on the positive integers defined by where the product is taken over all primes dividing (By convention, , which is the empty product, has value 1.) The function was introduced by Richard Dedekind in connection with modular functions. The value of for the first few integers is: 1, 3, 4, 6, 6, 12, 8, 12, 12, 18, 12, 24, ... . The function is greater than for all greater than 1, and is even for all greater than 2. If is a square-free number then , where is the divisor function. The function can also be defined by setting for powers of any prime , and then extending the definition to all integers by multiplicativity. This also leads to a proof of the generating function in terms of the Riemann zeta function, which is This is also a consequence of the fact that we can write as a Dirichlet convolution of . There is an additive definition of the psi function as well. Quoting from Dickson, R. Dedekind proved that, if is decomposed in every way into a product and if is the g.c.d. of then where ranges over all divisors of and over the prime divisors of and is the totient function. Higher orders The generalization to higher orders via ratios of Jordan's totient is with Dirichlet series . It is also the Dirichlet convolution of a power and the square of the Möbius function, . If is the characteristic function of the squares, another Dirichlet convolution leads to the generalized σ-function, . References External links See also (page 25, equation (1)) Section 3.13.2 is ψ2, is ψ3, and is ψ4 Multiplicative functions
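The elided product formula is ψ(n) = n ∏_{p|n} (1 + 1/p), the product running over the distinct primes p dividing n. A short sketch (illustrative, exact integer arithmetic via trial division) reproduces the values listed above:

```python
def psi(n):
    """Dedekind psi: n times prod over distinct primes p | n of (1 + 1/p)."""
    result = n
    d, m = 2, n
    while d * d <= m:
        if m % d == 0:
            result = result // d * (d + 1)  # multiply by (1 + 1/d) exactly
            while m % d == 0:               # strip all copies of this prime
                m //= d
        d += 1
    if m > 1:                               # leftover prime factor
        result = result // m * (m + 1)
    return result

print([psi(n) for n in range(1, 13)])
# [1, 3, 4, 6, 6, 12, 8, 12, 12, 18, 12, 24] as listed above
```

For square-free n the output agrees with the divisor function, e.g. psi(10) = 18 = 1 + 2 + 5 + 10, consistent with the identity stated in the text.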
https://en.wikipedia.org/wiki/Dedekind%20function
In number theory, Dedekind function can refer to any of three functions, all introduced by Richard Dedekind: the Dedekind eta function, the Dedekind psi function, and the Dedekind zeta function.
https://en.wikipedia.org/wiki/Cianorte
Cianorte is a municipality in the state of Paraná in Brazil, with an estimated population of 83,816, according to the Brazilian Institute of Geography and Statistics in 2020. History The city was planned as a "garden city" and founded by the Company for the Improvement of the North of Paraná (Companhia Melhoramentos Norte do Paraná), a British company for which it was named. In the beginning of the 20th century the region was dominated by a subtropical forest and totally wild, except for the Road of Peabiru, used by the Portuguese to connect the Guaira region, further west, to the coast. The road existed from the 17th century, but the first reported contact with the natives of the region, the Xetas, was in the 1930s. The Xetas, a group of three or four hundred, had their own language, and were early Iron Age in culture. The group vanished after they were contacted by the British in controversial and unexplained circumstances. In the 1940s the English company drew the city plan and split the region into very small farms. At this time, the city was redivided and part of the city and the areas around were sold to immigrants, mainly Italian-Brazilians and Japanese-Brazilians of second or third generation from São Paulo. Those immigrants were primarily poor ordinary workers in the huge coffee farms of São Paulo, and perceived the inexpensive land in Cianorte as their big opportunity in life. They built houses and schools, temples and businesses. The city become a municipality, which, under Brazilian laws, allows the area to extend its political structure. The Municipality of Cianorte was created through the State Law no. 2.412 of July 26, 1955. Cianorte then had around 11,000 inhabitants, mostly in the countryside. The economy was based on coffee. A disastrous frost in the winter of 1975, in which temperatures dropped below zero for the first time in recorded weather, destroyed the coffee plantations. 
Coffee trees take around five years to start producing, and so the economy went through a terrible crisis. Population fell and businesses closed. The disaster transformed the city. People opened clothing factories and shops in their garages and back yards. By the time agriculture began to recover, some of the mini-factories had grown to medium-sized companies, and the work force was already devoted to those. During the next decades some of those garage enterprises turned into huge factories that today sell clothes to the entire country, and export a significant portion to several countries. Shop owners from several states of Brazil visit Cianorte in the beginning of every season to purchase clothing, so hotels and restaurants are opened specially for them. Local agriculture is now significantly diversified — coffee is only 5% of the farmland now — and other farmers plant soy, sugar cane and corn. Beef and chicken are also produced in a fairly large scale. With the factories and the agriculture doing so well, in the turn of the century the city attr
https://en.wikipedia.org/wiki/Regular%20part
In mathematics, the regular part of a Laurent series consists of the series of terms with non-negative powers. That is, if f(z) = ∑_{n=−∞}^{∞} a_n (z − c)^n, then the regular part of this Laurent series is ∑_{n=0}^{∞} a_n (z − c)^n. In contrast, the series of terms with negative powers, ∑_{n=−∞}^{−1} a_n (z − c)^n, is the principal part. References Complex analysis
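A minimal illustration (a hypothetical representation, not from the article): store a truncated Laurent series as an exponent-to-coefficient map and split it into its two parts. Around z = 0, 1/(z(1 − z)) = z^(−1) + 1 + z + z^2 + …, so the principal part is the single term z^(−1):

```python
# Truncated Laurent expansion of 1/(z*(1 - z)) about z = 0: all
# coefficients from z^-1 through z^4 equal 1.
laurent = {n: 1 for n in range(-1, 5)}

principal = {e: c for e, c in laurent.items() if e < 0}   # negative powers
regular = {e: c for e, c in laurent.items() if e >= 0}    # non-negative powers

print(sorted(principal))  # [-1]
print(sorted(regular))    # [0, 1, 2, 3, 4]
```

The split shows why the pole at 0 is simple: the principal part has exactly one term.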
https://en.wikipedia.org/wiki/Variable%20geometry
Variable geometry may refer to: Variable-geometry turbocharger Variable geometry turbomachine Variable geometry Europe, a proposed strategy for European integration Variable Geometry Self-Propelled Battle Droid Variable-sweep wing Variable geometry (wing configuration), ways to alter the shape of an aircraft's wings in flight in order to alter their aerodynamic properties Anglo-French Variable Geometry (AFVG) aircraft project
https://en.wikipedia.org/wiki/Carl%20Friedrich%20Gauss%20Prize
The Carl Friedrich Gauss Prize for Applications of Mathematics is a mathematics award, granted jointly by the International Mathematical Union and the German Mathematical Society for "outstanding mathematical contributions that have found significant applications outside of mathematics". The award receives its name from the German mathematician Carl Friedrich Gauss. With its premiere in 2006, it is to be awarded every fourth year, at the International Congress of Mathematicians. The previous laureate was presented with a medal and a cash purse of EUR10,000 funded by the International Congress of Mathematicians 1998 budget surplus. The official announcement of the prize took place on 30 April 2002, the 225th anniversary of the birth of Gauss. The prize was developed specifically to give recognition to mathematicians; while mathematicians influence the world outside of their field, their studies are often not recognized. The prize aims to honour those who have made contributions and effects in the fields of business, technology, or even day-to-day life. Laureates See also Fields Medal Chern Medal List of mathematics awards References Awards established in 2006 Awards of the International Mathematical Union Prize
https://en.wikipedia.org/wiki/Inverse%20Gaussian%20distribution
In probability theory, the inverse Gaussian distribution (also known as the Wald distribution) is a two-parameter family of continuous probability distributions with support on (0,∞). Its probability density function is given by for x > 0, where is the mean and is the shape parameter. The inverse Gaussian distribution has several properties analogous to a Gaussian distribution. The name can be misleading: it is an "inverse" only in that, while the Gaussian describes a Brownian motion's level at a fixed time, the inverse Gaussian describes the distribution of the time a Brownian motion with positive drift takes to reach a fixed positive level. Its cumulant generating function (logarithm of the characteristic function) is the inverse of the cumulant generating function of a Gaussian random variable. To indicate that a random variable X is inverse Gaussian-distributed with mean μ and shape parameter λ we write . Properties Single parameter form The probability density function (pdf) of the inverse Gaussian distribution has a single parameter form given by In this form, the mean and variance of the distribution are equal, Also, the cumulative distribution function (cdf) of the single parameter inverse Gaussian distribution is related to the standard normal distribution by where , and the is the cdf of standard normal distribution. The variables and are related to each other by the identity In the single parameter form, the MGF simplifies to An inverse Gaussian distribution in double parameter form can be transformed into a single parameter form by appropriate scaling where The standard form of inverse Gaussian distribution is Summation If Xi has an distribution for i = 1, 2, ..., n and all Xi are independent, then Note that is constant for all i. This is a necessary condition for the summation. Otherwise S would not be Inverse Gaussian distributed. 
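The elided density is f(x; μ, λ) = sqrt(λ / (2πx^3)) · exp(−λ(x − μ)^2 / (2μ^2 x)) for x > 0. A quick sketch (crude midpoint-rule integration, illustrative parameter values) checks that it integrates to 1 and has mean μ:

```python
import math

def invgauss_pdf(x, mu, lam):
    """Inverse Gaussian density: sqrt(lam/(2*pi*x^3)) * exp(-lam*(x-mu)^2/(2*mu^2*x))."""
    return math.sqrt(lam / (2 * math.pi * x ** 3)) * math.exp(
        -lam * (x - mu) ** 2 / (2 * mu ** 2 * x))

mu, lam = 1.5, 2.0
h = 1e-3
total = 0.0
mean = 0.0
# Midpoint rule on (0, 40]; the tail beyond 40 is negligible for these parameters.
for k in range(int(40 / h)):
    x = h * (k + 0.5)
    w = invgauss_pdf(x, mu, lam) * h
    total += w
    mean += x * w

print(round(total, 3), round(mean, 3))  # approx 1.0 and 1.5 (= mu)
```

The same routine can be used to check the variance μ^3/λ, another standard property of the distribution.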
Scaling For any t > 0 it holds that Exponential family The inverse Gaussian distribution is a two-parameter exponential family with natural parameters −λ/(2μ2) and −λ/2, and natural statistics X and 1/X. For fixed, it is also a single-parameter natural exponential family distribution where the base distribution has density Indeed, with , is a density over the reals. Evaluating the integral, we get Substituting makes the above expression equal to . Relationship with Brownian motion Let the stochastic process Xt be given by where Wt is a standard Brownian motion. That is, Xt is a Brownian motion with drift . Then the first passage time for a fixed level by Xt is distributed according to an inverse-Gaussian: i.e (cf. Schrödinger equation 19, Smoluchowski, equation 8, and Folks, equation 1). Suppose that we have a Brownian motion with drift defined by: And suppose that we wish to find the probability density function for the time when the process first hits some barrier - known as the first passage time. The Fokker-Planck equation describing the evoluti
https://en.wikipedia.org/wiki/Classical%20modular%20curve
In number theory, the classical modular curve is an irreducible plane algebraic curve given by an equation Φₙ(x, y) = 0, such that (x, y) = (j(nτ), j(τ)) is a point on the curve. Here j denotes the j-invariant. The curve is sometimes called X₀(n), though often that notation is used for the abstract algebraic curve for which there exist various models. A related object is the classical modular polynomial, a polynomial in one variable defined as Φₙ(x, x). The classical modular curves are part of the larger theory of modular curves. In particular, each such curve has another expression as a compactified quotient of the complex upper half-plane. Geometry of the modular curve The classical modular curve, which we will call X₀(n), is of degree greater than or equal to 2n when n > 1, with equality if and only if n is a prime. The polynomial Φₙ has integer coefficients, and hence is defined over every field. However, the coefficients are sufficiently large that computational work with the curve can be difficult. As a polynomial in x with coefficients in ℤ[y], it has degree ψ(n), where ψ is the Dedekind psi function. Since Φₙ(x, y) = Φₙ(y, x), X₀(n) is symmetrical around the line y = x, and has singular points at the repeated roots of the classical modular polynomial, where it crosses itself in the complex plane. These are not the only singularities, and in particular when , there are two singularities at infinity, where and , which have only one branch and hence have a knot invariant which is a true knot, and not just a link. Parametrization of the modular curve For , or , X₀(n) has genus zero, and hence can be parametrized by rational functions. The simplest nontrivial example is X₀(2), where: is (up to the constant term) the McKay–Thompson series for the class 2B of the Monster, and is the Dedekind eta function, then parametrizes X₀(2) in terms of rational functions of . It is not necessary to actually compute to use this parametrization; it can be taken as an arbitrary parameter.
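For n = 2 the classical modular polynomial is small enough to write out in full. The transcription of the coefficients below follows the values commonly tabulated for Φ₂ and should be treated as an assumption; two classical CM values, j(i) = 1728 and j(2i) = 66³ = 287496, then give points (j(τ), j(2τ)) on the curve, and the symmetry about the line x = y is directly visible.

```python
def phi2(x, y):
    """Classical modular polynomial of level 2 (integer coefficients,
    as commonly tabulated; treat this transcription as an assumption)."""
    return (x**3 + y**3 - x**2 * y**2
            + 1488 * (x**2 * y + x * y**2)
            - 162000 * (x**2 + y**2)
            + 40773375 * x * y
            + 8748000000 * (x + y)
            - 157464000000000)

# j(i) = 1728 and j(2i) = 287496: the pair (j(tau), j(2tau)) with tau = i
# lies on the curve, and x = 1728 is also a root of phi2(x, x), since the
# curve with j = 1728 carries an endomorphism of degree 2.
on_curve = phi2(1728, 287496)
```

All arithmetic here is exact integer arithmetic, so the membership checks are exact rather than numerical.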
Mappings A curve , over is called a modular curve if for some there exists a surjective morphism , given by a rational map with integer coefficients. The famous modularity theorem tells us that all elliptic curves over are modular. Mappings also arise in connection with since points on it correspond to some -isogenous pairs of elliptic curves. An isogeny between two elliptic curves is a non-trivial morphism of varieties (defined by a rational map) between the curves which also respects the group laws, and hence which sends the point at infinity (serving as the identity of the group law) to the point at infinity. Such a map is always surjective and has a finite kernel, the order of which is the degree of the isogeny. Points on correspond to pairs of elliptic curves admitting an isogeny of degree with cyclic kernel. When has genus one, it will itself be isomorphic to an elliptic curve, which will have the same -invariant. For instance, has -invariant , and is isomorphic to the curve . If we substitute this value of for in , we obtain two rational
https://en.wikipedia.org/wiki/Frey%20curve
In mathematics, a Frey curve or Frey–Hellegouarch curve is the elliptic curve associated with a (hypothetical) solution of Fermat's equation a^ℓ + b^ℓ = c^ℓ. The curve is named after Gerhard Frey and (sometimes) Yves Hellegouarch. History Hellegouarch came up with the idea of associating solutions of Fermat's equation with a completely different mathematical object: an elliptic curve. If ℓ is an odd prime and a, b, and c are positive integers such that a^ℓ + b^ℓ = c^ℓ, then a corresponding Frey curve is an algebraic curve given by the equation y^2 = x(x − a^ℓ)(x + b^ℓ) or, equivalently (after translating x), y^2 = x(x − b^ℓ)(x − c^ℓ). This is a nonsingular algebraic curve of genus one defined over Q, and its projective completion is an elliptic curve over Q. Frey called attention to the unusual properties of the same curve as Hellegouarch, which became called a Frey curve. This provided a bridge between Fermat and Taniyama by showing that a counterexample to Fermat's Last Theorem would create such a curve that would not be modular. The conjecture attracted considerable interest when Frey suggested that the Taniyama–Shimura–Weil conjecture implies Fermat's Last Theorem. However, his argument was not complete. In 1985, Jean-Pierre Serre proposed that a Frey curve could not be modular and provided a partial proof of this. This showed that a proof of the semistable case of the Taniyama–Shimura conjecture would imply Fermat's Last Theorem. Serre did not provide a complete proof and what was missing became known as the epsilon conjecture or ε-conjecture. In the summer of 1986, Ribet (1990) proved the epsilon conjecture, thereby proving that the Taniyama–Shimura–Weil conjecture implies Fermat's Last Theorem. References Number theory
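One of the "unusual properties" Frey exploited is visible in a short symbolic computation: the discriminant of the model y² = x(x − a^ℓ)(x + b^ℓ) is 16(a^ℓ b^ℓ (a^ℓ + b^ℓ))², which under the Fermat relation a^ℓ + b^ℓ = c^ℓ becomes 16(abc)^{2ℓ}, essentially a perfect 2ℓ-th power. The sketch below (symbol names ours) checks this with SymPy.

```python
import sympy as sp

x, A, B = sp.symbols('x A B', positive=True)
a, b, c, l = sp.symbols('a b c l', positive=True)

# Frey curve y^2 = x (x - A)(x + B) with A = a^l, B = b^l:
# the cubic has roots 0, A, -B, so its discriminant is (A B (A + B))^2.
disc = sp.discriminant(x * (x - A) * (x + B), x)

# Discriminant of this (non-minimal) Weierstrass model:
delta = 16 * (A * B * (A + B))**2

# Substitute A = a^l, B = b^l, then apply the Fermat relation a^l + b^l = c^l.
delta_fermat = delta.subs({A: a**l, B: b**l}).subs(a**l + b**l, c**l)
```

The minimal discriminant of the actual Frey curve differs from this by a bounded power of 2, which does not affect the "too many repeated prime factors" phenomenon.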
https://en.wikipedia.org/wiki/List%20of%20urban%20areas%20in%20the%20Republic%20of%20Ireland%20for%20the%202011%20census
The following table gives all the urban areas in Ireland listed in the Central Statistics Office (CSO) report of the 2011 census. This includes cities, boroughs, and towns with local government councils, and other places identified by the CSO with at least 50 occupied dwellings. Census towns are required to have a local area plan if they have a population over 5,000, and are permitted to have one with a population over 1,500. Explanation of table Table Notes References Central Statistics Office, 2012 Census 2011 Population Classified by Area (Formerly Volume One) See also List of cities, boroughs and towns in the Republic of Ireland details of municipal towns with councils, distinguishing administrative, electoral, and suburban populations Census towns
https://en.wikipedia.org/wiki/Fourier%E2%80%93Bessel%20series
In mathematics, Fourier–Bessel series is a particular kind of generalized Fourier series (an infinite series expansion on a finite interval) based on Bessel functions. Fourier–Bessel series are used in the solution to partial differential equations, particularly in cylindrical coordinate systems. Definition The Fourier–Bessel series of a function f(x) with a domain of [0, b] satisfying f(b) = 0 is the representation of that function as a linear combination of many orthogonal versions of the same Bessel function of the first kind Jα, where the argument to each version n is differently scaled, according to f(x) = Σ_{n=1}^∞ c_n Jα(u_{α,n} x / b), where u_{α,n} is a root, numbered n, associated with the Bessel function Jα and c_n are the assigned coefficients. Interpretation The Fourier–Bessel series may be thought of as a Fourier expansion in the ρ coordinate of cylindrical coordinates. Just as the Fourier series is defined for a finite interval and has a counterpart, the continuous Fourier transform over an infinite interval, so the Fourier–Bessel series has a counterpart over an infinite interval, namely the Hankel transform. Calculating the coefficients As stated above, differently scaled Bessel functions are orthogonal with respect to the inner product ⟨f, g⟩ = ∫₀ᵇ x f(x) g(x) dx, according to ∫₀ᵇ x Jα(u_{α,n} x / b) Jα(u_{α,m} x / b) dx = (b² / 2) [J_{α+1}(u_{α,n})]² δ_{mn} (where δ_{mn} is the Kronecker delta). The coefficients can be obtained from projecting the function f(x) onto the respective Bessel functions: c_n = (2 / (b² [J_{α±1}(u_{α,n})]²)) ∫₀ᵇ x f(x) Jα(u_{α,n} x / b) dx, where the plus or minus sign is equally valid, since J_{α−1} and J_{α+1} have equal magnitude at each zero of Jα. For the inverse transform, one makes use of the following representation of the Dirac delta function One-to-one relation between order index (n) and continuous frequency () Fourier–Bessel series coefficients are unique for a given signal, and there is a one-to-one mapping between continuous frequency () and order index, which can be expressed as follows: Since, . So the above equation can be rewritten as follows: where is the length of the signal and is the sampling frequency of the signal.
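The projection formula can be exercised numerically. The sketch below (names ours) expands f(x) = 1 − x² on [0, 1] in the α = 0 series using SciPy's Bessel-zero and quadrature routines, and reconstructs the function from a 20-term partial sum.

```python
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import quad

b = 1.0                       # the interval is [0, b] with f(b) = 0
f = lambda x: 1 - x**2        # test function vanishing at b
zeros = jn_zeros(0, 20)       # first 20 positive roots u_{0,n} of J0

def coeff(u):
    # c_n = 2 / (b^2 J1(u)^2) * integral_0^b x f(x) J0(u x / b) dx
    integral, _ = quad(lambda x: x * f(x) * j0(u * x / b), 0, b)
    return 2 * integral / (b**2 * j1(u) ** 2)

c = [coeff(u) for u in zeros]
approx = lambda x: sum(cn * j0(u * x / b) for cn, u in zip(c, zeros))
```

For this particular f the coefficients have the classical closed form c_n = 8 / (u_{0,n}³ J1(u_{0,n})), which the quadrature reproduces.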
2-D- Fourier-Bessel series expansion For an image of size M×N, the synthesis equations for order-0 2D-Fourier–Bessel series expansion is as follows: Where is 2D-Fourier–Bessel series expansion coefficients whose mathematical expressions are as follows: where, Fourier-Bessel series expansion based entropies For a signal of length , Fourier-Bessel based spectral entropy such as Shannon spectral entropy (), log energy entropy (), and Wiener entropy () are defined as follows: where is the normalized energy distribution which is mathematically defined as follows: is energy spectrum which is mathematically defined as follows: Fourier Bessel Series Expansion based Empirical Wavelet Transform The Empirical wavelet transform (EWT) is a multi-scale signal processing approach for the decomposition of multi-component signal into intrinsic mode functions (IMFs). The EWT is based on the design of empirical wavelet based filter bank based on the segregation of Fourier spectrum of the multi-component signals. The segregation of Fourier spectrum of multi-component signal is
https://en.wikipedia.org/wiki/Poisson%20random%20measure
Let be some measure space with -finite measure . The Poisson random measure with intensity measure is a family of random variables defined on some probability space such that i) is a Poisson random variable with rate . ii) If sets don't intersect then the corresponding random variables from i) are mutually independent. iii) is a measure on Existence If then satisfies the conditions i)–iii). Otherwise, in the case of finite measure , given , a Poisson random variable with rate , and , mutually independent random variables with distribution , define where is a degenerate measure located in . Then will be a Poisson random measure. In the case is not finite the measure can be obtained from the measures constructed above on parts of where is finite. Applications This kind of random measure is often used when describing jumps of stochastic processes, in particular in Lévy–Itō decomposition of the Lévy processes. Generalizations The Poisson random measure generalizes to the Poisson-type random measures, where members of the PT family are invariant under restriction to a subspace. References Statistical randomness Poisson point processes
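The finite-measure construction above translates directly into the standard simulation recipe: draw the total count from a Poisson distribution with rate equal to the total mass, then scatter that many i.i.d. points. A minimal sketch (names ours) for a homogeneous intensity on the unit square:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_prm(intensity, low=0.0, high=1.0):
    """One realization of a Poisson random measure with constant intensity
    on the square [low, high]^2: draw N(E) ~ Poisson(mu(E)), then scatter
    that many i.i.d. uniform points, mirroring the construction above."""
    area = (high - low) ** 2
    n = rng.poisson(intensity * area)
    return rng.uniform(low, high, size=(n, 2))

# For A = the left half of the unit square and intensity 50, property i)
# says N(A) is Poisson with mean (and variance) mu(A) = 25.
counts = np.array([int(np.sum(sample_prm(50.0)[:, 0] < 0.5))
                   for _ in range(2000)])
```

The equality of the empirical mean and variance of the counts is the Poisson signature of property i).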
https://en.wikipedia.org/wiki/Information%20algebra
The term "information algebra" refers to mathematical techniques of information processing. Classical information theory goes back to Claude Shannon. It is a theory of information transmission, looking at communication and storage. However, it has not been considered so far that information comes from different sources and that it is therefore usually combined. It has furthermore been neglected in classical information theory that one wants to extract those parts out of a piece of information that are relevant to specific questions. A mathematical phrasing of these operations leads to an algebra of information, describing basic modes of information processing. Such an algebra involves several formalisms of computer science, which seem to be different on the surface: relational databases, multiple systems of formal logic or numerical problems of linear algebra. It allows the development of generic procedures of information processing and thus a unification of basic methods of computer science, in particular of distributed information processing. Information relates to precise questions, comes from different sources, must be aggregated, and can be focused on questions of interest. Starting from these considerations, information algebras are two-sorted algebras , where is a semigroup, representing combination or aggregation of information, is a lattice of domains (related to questions) whose partial order reflects the granularity of the domain or the question, and a mixed operation representing focusing or extraction of information. Information and its operations More precisely, in the two-sorted algebra , the following operations are defined Additionally, in the usual lattice operations (meet and join) are defined. Axioms and definition The axioms of the two-sorted algebra , in addition to the axioms of the lattice : A two-sorted algebra satisfying these axioms is called an Information Algebra. 
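A minimal executable model of the two operations is the relational one: a piece of information is a pair (domain, set of tuples), combination is natural join, and focusing is projection. The encoding below (frozensets of (variable, value) pairs; entirely our own illustrative representation) lets one check the commutativity of combination, the transitivity of focusing, and the combination axiom on a toy example.

```python
def combine(r, s):
    """Combination of two labeled valuations: natural join."""
    (dr, tr), (ds, ts) = r, s
    out = set()
    for t1 in tr:
        for t2 in ts:
            m1, m2 = dict(t1), dict(t2)
            if all(m1[v] == m2[v] for v in dr & ds):   # agree on shared vars
                out.add(frozenset({**m1, **m2}.items()))
    return (dr | ds, frozenset(out))

def focus(r, d):
    """Focusing of a valuation onto the domain d: projection."""
    dom, tuples = r
    d = frozenset(d) & dom
    return (d, frozenset(frozenset((v, dict(t)[v]) for v in d) for t in tuples))

# Two pieces of information about the variables A, B, C:
R = (frozenset('AB'), frozenset({frozenset({('A', 1), ('B', 1)}),
                                 frozenset({('A', 2), ('B', 1)})}))
S = (frozenset('BC'), frozenset({frozenset({('B', 1), ('C', 3)})}))
J = combine(R, S)   # joint information on {A, B, C}
```

Projecting the join back onto the domain of R gives the same result as joining R with S focused to the shared variable, which is the combination axiom in this model.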
Order of information A partial order of information can be introduced by defining if . This means that is less informative than if it adds no new information to . The semigroup is a semilattice relative to this order, i.e. . Relative to any domain (question) a partial order can be introduced by defining if . It represents the order of information content of and relative to the domain (question) . Labeled information algebra The pairs , where and such that form a labeled Information Algebra. More precisely, in the two-sorted algebra , the following operations are defined Models of information algebras Here follows an incomplete list of instances of information algebras: Relational algebra: The reduct of a relational algebra with natural join as combination and the usual projection is a labeled information algebra, see Example. Constraint systems: Constraints form an information algebra . Semiring valued algebras: C-Semirings induce information algebras. Logic: Many logic systems induce information algebras . Reducts of cylin
https://en.wikipedia.org/wiki/Bills%20of%20mortality
Bills of mortality were the weekly mortality statistics in London, designed to monitor burials from 1592 to 1595 and then continuously from 1603. The responsibility to produce the statistics was chartered in 1611 to the Worshipful Company of Parish Clerks. The bills covered an area that started to expand as London grew from the City of London, before reaching its maximum extent in 1636. New parishes were then only added where ancient parishes within the area were divided. Factors such as the use of suburban cemeteries outside the area, the exemption of extra-parochial places within the area, the wider growth of the metropolis, and that they recorded burials rather than deaths, rendered their data incomplete. Production of the bills went into decline from 1819 as parishes ceased to provide returns, with the last surviving weekly bill dating from 1858. They were superseded by the weekly returns of the Registrar General from 1840, taking in further parishes until 1847. This area became the district of the Metropolitan Board of Works in 1855, the County of London in 1889 and Inner London in 1965. History Bills were produced intermittently in the several parishes of the City of London during outbreaks of plague. The first Bill is believed to date from November 1532. The first regular weekly collection and publishing of the number of burials in the parishes of London began on 21 December 1592 and continued until 18 December 1595. The practice was abandoned and then revived on 21 December 1603 when there was another outbreak of plague. In 1611 the duty to produce the Bills was imposed on the members of the Worshipful Company of Parish Clerks by a charter granted by James I. Annual returns were made on 21 December (the feast of St Thomas), to coincide with the city calendar. New charters were granted by Charles I in 1636 and 1639. The Bills covered 129 parishes at the granting of the 1639 charter. 
By 1570 the Bills included baptisms; in 1629 the cause of death was given, and in the early 18th century the age at death. In 1632, the Clerks were asked to identify five different infectious diseases caused by human-to-human transmission: TB, Small Pox, Measles, French Pox, and Plague. In 1819 the bills ceased to be published under the authority of the Corporation of London, and instead came directly from the Worshipful Company of Parish Clerks. The clerk of St George Hanover Square ceased to provide returns from 1823. From then until 1858 the practice of producing bills of mortality was in decline, as parishes ceased to provide returns to the Worshipful Company of Parish Clerks. The last surviving bill of mortality is believed to be from 28 September 1858. Problems with the bills The area was fixed in 1636, with only St Mary le Strand added in 1726, a parish already within the outer boundary of the bills. The area quickly became much smaller than the growing metropolis. The bills recorded burials in Church of England churchyards and not deaths. The bills did not include
https://en.wikipedia.org/wiki/Diagonally%20dominant%20matrix
In mathematics, a square matrix is said to be diagonally dominant if, for every row of the matrix, the magnitude of the diagonal entry in a row is larger than or equal to the sum of the magnitudes of all the other (non-diagonal) entries in that row. More precisely, the matrix A is diagonally dominant if where aij denotes the entry in the ith row and jth column. This definition uses a weak inequality, and is therefore sometimes called weak diagonal dominance. If a strict inequality (>) is used, this is called strict diagonal dominance. The unqualified term diagonal dominance can mean both strict and weak diagonal dominance, depending on the context. Variations The definition in the first paragraph sums entries across each row. It is therefore sometimes called row diagonal dominance. If one changes the definition to sum down each column, this is called column diagonal dominance. Any strictly diagonally dominant matrix is trivially a weakly chained diagonally dominant matrix. Weakly chained diagonally dominant matrices are nonsingular and include the family of irreducibly diagonally dominant matrices. These are irreducible matrices that are weakly diagonally dominant, but strictly diagonally dominant in at least one row. Examples The matrix is diagonally dominant because   since     since     since   . The matrix is not diagonally dominant because   since     since     since   . That is, the first and third rows fail to satisfy the diagonal dominance condition. The matrix is strictly diagonally dominant because   since     since     since   . Applications and properties The following results can be proved trivially from Gershgorin's circle theorem. Gershgorin's circle theorem itself has a very short proof. A strictly diagonally dominant matrix (or an irreducibly diagonally dominant matrix) is non-singular. A Hermitian diagonally dominant matrix with real non-negative diagonal entries is positive semidefinite. 
This follows from the eigenvalues being real, and Gershgorin's circle theorem. If the symmetry requirement is eliminated, such a matrix is not necessarily positive semidefinite. For example, consider However, the real parts of its eigenvalues remain non-negative by Gershgorin's circle theorem. Similarly, a Hermitian strictly diagonally dominant matrix with real positive diagonal entries is positive definite. No (partial) pivoting is necessary for a strictly column diagonally dominant matrix when performing Gaussian elimination (LU factorization). The Jacobi and Gauss–Seidel methods for solving a linear system converge if the matrix is strictly (or irreducibly) diagonally dominant. Many matrices that arise in finite element methods are diagonally dominant. A slight variation on the idea of diagonal dominance is used to prove that the pairing on diagrams without loops in the Temperley–Lieb algebra is nondegenerate. For a matrix with polynomial entries, one sensible definition of diagonal dominance
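A dominance check is a one-line comparison per row. The sketch below (names ours) tests the weak/strict and row/column variants on three small matrices chosen to illustrate the weakly dominant, failing, and strictly dominant cases.

```python
import numpy as np

def diagonally_dominant(A, strict=False, axis="row"):
    """Check (strict) row- or column-wise diagonal dominance of a square matrix."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    # Sum of the magnitudes of the off-diagonal entries in each row/column:
    off = np.abs(A).sum(axis=1 if axis == "row" else 0) - diag
    return bool(np.all(diag > off)) if strict else bool(np.all(diag >= off))

A = [[3, -2, 1], [1, -3, 2], [-1, 2, 4]]   # weakly (row) diagonally dominant
B = [[-2, 2, 1], [1, 3, 2], [1, -2, 0]]    # fails in the first and third rows
C = [[-4, 2, 1], [1, 6, 2], [1, -2, 5]]    # strictly diagonally dominant
```

Matrix A is dominant but not strictly so, since two of its rows hold only with equality.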
https://en.wikipedia.org/wiki/Calabi%20conjecture
In the mathematical field of differential geometry, the Calabi conjecture was a conjecture about the existence of certain kinds of Riemannian metrics on certain complex manifolds, made by Eugenio Calabi. It was proved by Shing-Tung Yau, who received the Fields Medal and Oswald Veblen Prize in part for his proof. His work, principally an analysis of an elliptic partial differential equation known as the complex Monge–Ampère equation, was an influential early result in the field of geometric analysis. More precisely, Calabi's conjecture asserts the resolution of the prescribed Ricci curvature problem within the setting of Kähler metrics on closed complex manifolds. According to Chern–Weil theory, the Ricci form of any such metric is a closed differential 2-form which represents the first Chern class. Calabi conjectured that for any such differential form , there is exactly one Kähler metric in each Kähler class whose Ricci form is . (Some compact complex manifolds admit no Kähler classes, in which case the conjecture is vacuous.) In the special case that the first Chern class vanishes, this implies that each Kähler class contains exactly one Ricci-flat metric. These are often called Calabi–Yau manifolds. However, the term is often used in slightly different ways by various authors; for example, some uses may refer to the complex manifold while others might refer to a complex manifold together with a particular Ricci-flat Kähler metric. This special case can equivalently be regarded as the complete existence and uniqueness theory for Kähler–Einstein metrics of zero scalar curvature on compact complex manifolds. The case of nonzero scalar curvature does not follow as a special case of Calabi's conjecture, since the 'right-hand side' of the Kähler–Einstein problem depends on the 'unknown' metric, thereby placing the Kähler–Einstein problem outside the domain of prescribing Ricci curvature.
However, Yau's analysis of the complex Monge–Ampère equation in resolving the Calabi conjecture was sufficiently general so as to also resolve the existence of Kähler–Einstein metrics of negative scalar curvature. The third and final case of positive scalar curvature was resolved in the 2010s, in part by making use of the Calabi conjecture. Outline of the proof of the Calabi conjecture Calabi transformed the Calabi conjecture into a non-linear partial differential equation of complex Monge–Ampère type, and showed that this equation has at most one solution, thus establishing the uniqueness of the required Kähler metric. Yau proved the Calabi conjecture by constructing a solution of this equation using the continuity method. This involves first solving an easier equation, and then showing that a solution to the easy equation can be continuously deformed to a solution of the hard equation. The hardest part of Yau's solution is proving certain a priori estimates for the derivatives of solutions. Transformation of the Calabi conjecture to a differential equation Suppose that is a comple
https://en.wikipedia.org/wiki/Pseudo-Zernike%20polynomials
In mathematics, pseudo-Zernike polynomials are well known and widely used in the analysis of optical systems. They are also widely used in image analysis as shape descriptors. Definition They are an orthogonal set of complex-valued polynomials defined as where and orthogonality on the unit disk is given as where the star means complex conjugation, and , , are the standard transformations between polar and Cartesian coordinates. The radial polynomials are defined as with integer coefficients Examples Examples are: Moments The pseudo-Zernike Moments (PZM) of order and repetition are defined as where , and takes on positive and negative integer values subject to . The image function can be reconstructed by expansion of the pseudo-Zernike coefficients on the unit disk as Pseudo-Zernike moments are derived from conventional Zernike moments and shown to be more robust and less sensitive to image noise than the Zernike moments. See also Zernike polynomials Image moment References Orthogonal polynomials
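The radial polynomials are easy to tabulate. The coefficient formula coded below is the expression usually quoted for pseudo-Zernike radial polynomials and should be read as an assumption from the literature; the quadrature then checks the radial orthogonality relation ∫₀¹ R_nm R_n'm ρ dρ = δ_nn' / (2(n + 1)) that follows from orthogonality on the unit disk.

```python
from math import factorial
from scipy.integrate import quad

def pz_radial(n, m, rho):
    """Pseudo-Zernike radial polynomial R_nm(rho), 0 <= |m| <= n, via the
    standard coefficient formula (assumed from the literature):
    R_nm(rho) = sum_{s=0}^{n-|m|} (-1)^s (2n+1-s)! /
                [s! (n+|m|+1-s)! (n-|m|-s)!] * rho^(n-s)."""
    m = abs(m)
    return sum((-1) ** s * factorial(2 * n + 1 - s)
               / (factorial(s) * factorial(n + m + 1 - s) * factorial(n - m - s))
               * rho ** (n - s)
               for s in range(n - m + 1))

# Radial orthogonality on [0, 1] with weight rho, e.g. for m = 0:
ip, _ = quad(lambda r: pz_radial(3, 0, r) * pz_radial(1, 0, r) * r, 0, 1)
nrm, _ = quad(lambda r: pz_radial(3, 0, r) ** 2 * r, 0, 1)
```

For instance R_00 = 1 and R_10(ρ) = 3ρ − 2, and the squared norm of R_30 with weight ρ comes out to 1/(2·4) = 1/8.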
https://en.wikipedia.org/wiki/Mark%20Steiner
Mark Steiner (May 6, 1942 – April 6, 2020) was an American-born Israeli professor of philosophy. He taught philosophy of mathematics and physics at the Hebrew University of Jerusalem. Steiner died after contracting COVID-19 during the COVID-19 pandemic. Biography Mark Steiner was born in the Bronx, New York. He graduated from Columbia University in 1965 and studied at the University of Oxford as a Fulbright Fellow. He then received his Ph.D. in philosophy from Princeton University in 1972 after completing a doctoral dissertation titled "On mathematical knowledge." Steiner taught at Columbia from 1970 to 1977. Steiner died on April 6, 2020, in Shaare Zedek Medical Center, after contracting the COVID-19 virus during the COVID-19 pandemic in Israel. Academic career Steiner is best known for his book The Applicability of Mathematics as a Philosophical Problem, in which he attempted to explain the historical utility of mathematics in physics. The book may be considered an extended meditation on the issues raised by Eugene Wigner's article "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". Steiner is also the author of the book Mathematical Knowledge. Steiner also translated Reuven Agushewitz's philosophical work Emune un Apikorses from Yiddish. References External links Faculty address page 1942 births 2020 deaths Writers from the Bronx American emigrants to Israel Israeli people of American-Jewish descent Jewish American academics Jewish philosophers Philosophers of mathematics Deaths from the COVID-19 pandemic in Israel Academic staff of the Hebrew University of Jerusalem Princeton University alumni 20th-century Israeli philosophers 21st-century Israeli philosophers Columbia College (New York) alumni Columbia University faculty
https://en.wikipedia.org/wiki/List%20of%20airports%20in%20Serbia
This is the list of airports in Serbia, grouped by type and sorted by location. Airports statistics Airports with number of passengers served per year: List of airports in Serbia Airport names shown in bold indicate the airport has scheduled service with commercial airlines: See also Airports of Serbia Transport in Serbia AirSerbia References References: Map of airports in Serbia with asphalt - concrete runways AERODROMI u PDF formatu Serbia Airports Airports Serbia
https://en.wikipedia.org/wiki/List%20of%20airports%20in%20Montenegro
This is a list of airports in Montenegro, grouped by type and sorted by location. Passenger statistics Airports with number of passengers served. Airports Airports shown in bold have scheduled service on commercial airlines. See also Transport in Montenegro List of airports by ICAO code: L#LY – Serbia and Montenegro Wikipedia:WikiProject Aviation/Airline destination lists: Europe#Montenegro References AERODROMI u PDF formatu – includes IATA codes – IATA and ICAO codes – IATA, ICAO and DAFIF codes Montenegro Airports Airports Montenegro
https://en.wikipedia.org/wiki/Cauchy%27s%20functional%20equation
Cauchy's functional equation is the functional equation f(x + y) = f(x) + f(y). A function that solves this equation is called an additive function. Over the rational numbers, it can be shown using elementary algebra that there is a single family of solutions, namely f(q) = cq for any rational constant c. Over the real numbers, the family of linear maps f(x) = cx, now with c an arbitrary real constant, is likewise a family of solutions; however, there can exist other solutions not of this form that are extremely complicated. Nevertheless, any of a number of regularity conditions, some of them quite weak, will preclude the existence of these pathological solutions. For example, an additive function f is linear if: f is continuous (Cauchy, 1821). In fact, it suffices for f to be continuous at one point (Darboux, 1875). f is monotonic on any interval. f is bounded on any interval. f is Lebesgue measurable. On the other hand, if no further conditions are imposed on f, then (assuming the axiom of choice) there are infinitely many other functions that satisfy the equation. This was proved in 1905 by Georg Hamel using Hamel bases. Such functions are sometimes called Hamel functions. The fifth problem on Hilbert's list is a generalisation of this equation. Functions where there exists a real number such that are known as Cauchy–Hamel functions and are used in Dehn–Hadwiger invariants, which are used in the extension of Hilbert's third problem from 3D to higher dimensions. This equation is sometimes referred to as Cauchy's additive functional equation to distinguish it from Cauchy's exponential functional equation f(x + y) = f(x)f(y), Cauchy's logarithmic functional equation f(xy) = f(x) + f(y), and Cauchy's multiplicative functional equation f(xy) = f(x)f(y). Solutions over the rational numbers A simple argument, involving only elementary algebra, demonstrates that the set of additive maps f : V → W, where V, W are vector spaces over an extension field of ℚ, is identical to the set of ℚ-linear maps from V to W. Theorem: Let f : ℚ → ℝ be an additive function. Then f is ℚ-linear.
Proof: We want to prove that any solution f of Cauchy's functional equation, f(x + y) = f(x) + f(y), satisfies f(qx) = qf(x) for any q ∈ ℚ and x ∈ ℝ. Let x ∈ ℝ be arbitrary. First note f(0) = f(0 + 0) = f(0) + f(0), hence f(0) = 0, and therewith 0 = f(0) = f(x + (−x)) = f(x) + f(−x), from which follows f(−x) = −f(x). Via induction, f(nx) = nf(x) is proved for any n ∈ ℕ. For any negative integer n we know −n ∈ ℕ, therefore f(nx) = f(−(−n)x) = −f((−n)x) = −(−n)f(x) = nf(x). Thus far we have proved f(nx) = nf(x) for any n ∈ ℤ. Let m ∈ ℕ, then f(x) = f(m(x/m)) = mf(x/m), and hence f(x/m) = (1/m)f(x). Finally, any q ∈ ℚ has a representation q = n/m with n ∈ ℤ and m ∈ ℕ, so, putting things together, f(qx) = f(n(x/m)) = nf(x/m) = (n/m)f(x) = qf(x), q.e.d. Properties of nonlinear solutions over the real numbers We prove below that any other solutions must be highly pathological functions. In particular, it is shown that any other solution must have the property that its graph is dense in ℝ², that is, that any disk in the plane (however small) contains a point from the graph. From this it is easy to prove the various conditions given in the introductory paragraph. Existence of nonlinear solutions over the real numbers The linearity proof given above also applies to any set αℚ, where αℚ is a scaled copy of the rationals. This shows that only linear solutions are permitted w
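The Hamel-basis construction can be made concrete on a small ℚ-subspace of ℝ: on ℚ + ℚ√2 the set {1, √2} is a basis over ℚ, and prescribing f arbitrarily on the basis yields an additive but non-linear function. A sketch with exact rational arithmetic (the pair encoding is ours):

```python
from fractions import Fraction

# Store x = a + b*sqrt(2) exactly as the pair (a, b) of rationals;
# on this 2-dimensional Q-subspace, {1, sqrt(2)} plays the role of
# a (tiny) Hamel basis.
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def f(x):
    """Additive but non-linear: f(a + b*sqrt(2)) = a + b.

    f is defined by sending both basis elements to 1. It respects
    addition coordinatewise, yet no real constant c gives f(t) = c*t:
    f(1) = 1 forces c = 1, while f(sqrt(2)) = 1, not sqrt(2).
    """
    return x[0] + x[1]

x = (Fraction(1, 3), Fraction(2))     # 1/3 + 2*sqrt(2)
y = (Fraction(5), Fraction(-1, 2))    # 5 - sqrt(2)/2
```

Extending this to all of ℝ requires a full Hamel basis and hence the axiom of choice, which is why no explicit pathological solution can be written down.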
https://en.wikipedia.org/wiki/Jacob%20Klein%20%28philosopher%29
Jacob Klein (March 3, 1899 – July 16, 1978) was a Russian-American philosopher and interpreter of Plato, who worked extensively on the nature and historical origin of modern symbolic mathematics. Biography Klein was born in Libava, Russian Empire. He studied at Berlin and Marburg, where he received his Ph.D. in 1922. A student of Nicolai Hartmann, Martin Heidegger, and Edmund Husserl, he later taught at St. John's College in Annapolis, Maryland from 1938 until his death. He served as dean from 1949 to 1958. Klein was affectionately known as Jasha (pronounced "Yasha"). He was one of the world's preeminent interpreters of Plato and the Platonic tradition. As one of many Jewish scholars who were no longer safe in Europe, he fled the Nazis. He was a friend of fellow émigré and German-American philosopher Leo Strauss. Of Klein's first book Greek Mathematical Thought and the Origin of Algebra, Strauss said: The work is much more than a historical study. But even if we take it as a purely historical work, there is not, in my opinion, a contemporary work in the history of philosophy or science or in "the history of ideas" generally speaking which in intrinsic worth comes within hailing distance of it.Russian born French philosopher Alexandre Kojève counted Klein as one of the two people (along with Strauss) from whom he could learn anything. The central thesis of his work Greek Mathematical Thought and the Origin of Algebra is that the modern concept of mathematics is based on the symbolic interpretation of the Greek concept of number (arithmos). Klein died in 1978 in Annapolis, Maryland. Works A Commentary on Plato's Meno (University of North Carolina Press, 1965) Greek Mathematical Thought and the Origin of Algebra (MIT Press, 1968), translated from German by Eva Brann, originally published in 1934–36. Plato's Trilogy: Theaetetus, the Sophist, and the Statesman (University of Chicago Press, 1977) Jacob Klein: Lectures and Essays ed. 
by Robert Williamson and Elliott Zuckerman (St. John's College Press, 1985) Notes References 1899 births 1978 deaths People from Liepāja People from Courland Governorate Latvian Jews 20th-century American philosophers Jewish philosophers Immigrants to the United States American historians of mathematics St. John's College (Annapolis/Santa Fe) faculty German male writers 20th-century German philosophers Philosophers of mathematics Emigrants from the Russian Empire Immigrants to Germany
https://en.wikipedia.org/wiki/Higher%20category%20theory
In mathematics, higher category theory is the part of category theory at a higher order, which means that some equalities are replaced by explicit arrows in order to be able to explicitly study the structure behind those equalities. Higher category theory is often applied in algebraic topology (especially in homotopy theory), where one studies algebraic invariants of spaces, such as their fundamental weak ∞-groupoid. In higher category theory, the concept of higher categorical structures, such as (∞-categories), allows for a more robust treatment of homotopy theory, enabling one to capture finer homotopical distinctions, such as differentiating two topological spaces that have the same fundamental group, but differ in their higher homotopy groups. This approach is particularly valuable when dealing with spaces with intricate topological features, such as the Eilenberg-MacLane space. Strict higher categories An ordinary category has objects and morphisms, which are called 1-morphisms in the context of higher category theory. A 2-category generalizes this by also including 2-morphisms between the 1-morphisms. Continuing this up to n-morphisms between (n − 1)-morphisms gives an n-category. Just as the category known as Cat, which is the category of small categories and functors is actually a 2-category with natural transformations as its 2-morphisms, the category n-Cat of (small) n-categories is actually an (n + 1)-category. An n-category is defined by induction on n by: A 0-category is a set, An (n + 1)-category is a category enriched over the category n-Cat. So a 1-category is just a (locally small) category. The monoidal structure of Set is the one given by the cartesian product as tensor and a singleton as unit. In fact any category with finite products can be given a monoidal structure. The recursive construction of n-Cat works fine because if a category has finite products, the category of -enriched categories has finite products too. 
While this concept is too strict for some purposes (in, for example, homotopy theory, where "weak" structures arise in the form of higher categories), strict cubical higher homotopy groupoids have also arisen as giving a new foundation for algebraic topology on the border between homology and homotopy theory; see the article Nonabelian algebraic topology, referenced in the book below.

Weak higher categories
In weak n-categories, the associativity and identity conditions are no longer strict (that is, they are not given by equalities), but rather are satisfied up to an isomorphism of the next level. An example in topology is the composition of paths, where the identity and associativity conditions hold only up to reparameterization, and hence up to homotopy, which is the 2-isomorphism for this 2-category. These n-isomorphisms must behave well between hom-sets, and expressing this is the difficulty in the definition of weak n-categories. Weak 2-categories, also called bicategories, were the first to be defined explicitly. A particularity of these is that a bicategory with one object is exactly a monoidal category.
https://en.wikipedia.org/wiki/CASM
CASM may refer to:

Education
- Centre for Aboriginal Studies in Music, an educational unit of the University of Adelaide, South Australia
- Certificate of Advanced Study in Mathematics, a former qualification gained from Cambridge University

Galleries and museums
- Canada Aviation and Space Museum, the national aviation history museum in Ottawa, Ontario, Canada
- Canadian Air and Space Conservancy, formerly Canadian Air and Space Museum, a former aviation museum in Toronto, Ontario, Canada
- Centre d'Art Santa Mònica, an art gallery in Barcelona, Spain

Other uses
- Centre for the Analysis of Social Media, a research unit of the UK think tank Demos
- Chinese Academy of Surveying and Mapping, an organization affiliated with the defunct State Bureau of Surveying and Mapping
- Collaborative group on Artisanal and Small-Scale Mining, an association for artisanal mining
- Cost per Available Seat Mile, a measure of unit cost in the airline industry
https://en.wikipedia.org/wiki/Binary%20entropy%20function
In information theory, the binary entropy function, denoted H(p) or Hb(p), is defined as the entropy of a Bernoulli process with probability p of one of two values. It is a special case of H(X), the entropy function. Mathematically, the Bernoulli trial is modelled as a random variable X that can take on only two values: 0 and 1, which are mutually exclusive and exhaustive. If Pr(X = 1) = p, then Pr(X = 0) = 1 − p and the entropy of X (in shannons) is given by
Hb(p) = −p log2 p − (1 − p) log2(1 − p),
where 0 log2 0 is taken to be 0. The logarithms in this formula are usually taken (as shown in the graph) to the base 2. See binary logarithm. When p = 1/2, the binary entropy function attains its maximum value. This is the case of an unbiased coin flip. Hb(p) is distinguished from the entropy function H(X) in that the former takes a single real number as a parameter whereas the latter takes a distribution or random variable as a parameter. Sometimes the binary entropy function is also written as H2(p). However, it is different from and should not be confused with the Rényi entropy, which is denoted as H2(X).

Explanation
In terms of information theory, entropy is considered to be a measure of the uncertainty in a message. To put it intuitively, suppose p = 0. At this probability, the event is certain never to occur, and so there is no uncertainty at all, leading to an entropy of 0. If p = 1, the result is again certain, so the entropy is 0 here as well. When p = 1/2, the uncertainty is at a maximum; if one were to place a fair bet on the outcome in this case, there is no advantage to be gained with prior knowledge of the probabilities. In this case, the entropy is maximum at a value of 1 bit. Intermediate values fall between these cases; for instance, if p = 3/4, there is still a measure of uncertainty on the outcome, but one can still predict the outcome correctly more often than not, so the uncertainty measure, or entropy, is less than 1 full bit.

Derivative
The derivative of the binary entropy function may be expressed as the negative of the logit function:
d/dp Hb(p) = −logit2(p) = −log2(p / (1 − p)),
where logit2 denotes the base-2 logit.
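The definition and the derivative identity can be checked with a minimal Python sketch (function names here are illustrative, not from any particular library):

```python
import math

def binary_entropy(p):
    """H_b(p) = -p*log2(p) - (1 - p)*log2(1 - p), with 0*log2(0) taken as 0."""
    if p in (0, 1):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def binary_entropy_derivative(p):
    """dH_b/dp = -logit_2(p) = -log2(p / (1 - p)), for 0 < p < 1."""
    return -math.log2(p / (1 - p))

# Maximum of 1 bit at the unbiased coin flip p = 1/2:
assert binary_entropy(0.5) == 1.0
```

The derivative vanishes at p = 1/2 (the maximum), is positive for p < 1/2, and negative for p > 1/2, matching the shape of the graph.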
Taylor series
The Taylor series of the binary entropy function in a neighborhood of 1/2 is
Hb(p) = 1 − (1/(2 ln 2)) Σn=1..∞ (1 − 2p)^(2n) / (n(2n − 1))
for 0 ≤ p ≤ 1.

Bounds
The following bounds hold for 0 < p < 1:
4p(1 − p) ≤ Hb(p) ≤ (4p(1 − p))^(1/ln 4),
where ln denotes the natural logarithm.

See also
- Metric entropy
- Information theory
- Information entropy
- Quantities of information

Further reading
MacKay, David J. C. Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press, 2003.
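The Taylor series about 1/2 can be verified numerically against the closed form; a sketch (the series partial sum and the sample points are illustrative):

```python
import math

def binary_entropy(p):
    """H_b(p) in bits, with 0*log2(0) taken as 0."""
    if p in (0, 1):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def taylor_binary_entropy(p, terms=20):
    """Partial sum of H_b(p) = 1 - (1/(2 ln 2)) * sum_{n>=1} (1-2p)^(2n) / (n(2n-1))."""
    s = sum((1 - 2 * p) ** (2 * n) / (n * (2 * n - 1))
            for n in range(1, terms + 1))
    return 1 - s / (2 * math.log(2))
```

Since (1 − 2p)^2 < 1 away from the endpoints, the partial sums converge rapidly near p = 1/2; for example taylor_binary_entropy(0.6) agrees with binary_entropy(0.6) to well below 1e-9.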
https://en.wikipedia.org/wiki/Tournament%20of%20the%20Towns
The Tournament of the Towns (International Mathematics Tournament of the Towns, Турнир Городов, Международный Математический Турнир Городов) is an international mathematical competition for school students originating in Russia. The contest was created by mathematician Nikolay Konstantinov and has participants from over 100 cities in many different countries.

Organization
There are two rounds in this contest: Fall (October) and Spring (February–March) of the same academic year. Both have an O-Level (Basic) paper and an A-Level (Advanced) paper separated by 1–2 weeks. The O-Level contains around 5 questions and the A-Level contains around 7 questions. The duration of the exams is 5 hours for both Levels. The A-Level problems are more difficult than O-Level but have a greater maximum score. Participating students are divided into two divisions: Junior (usually grades 7–10) and Senior (the last two school grades, usually grades 11–12). To account for age differences within each division, students in different grades have different loadings (coefficients). A contestant's final score is his/her highest score from the four exams. It is not necessary, albeit recommended, to write all four exams. Different towns are given handicaps to account for differences in population. A town's score is the average of the scores of its N best students, where its population is N hundred thousand. It is also worth noting that the minimum value of N is 5.

Philosophy
The Tournament of the Towns differs from many other similar competitions in its philosophy, relying much more upon ingenuity than drill. First, the problems are difficult (especially at A-Level in the Senior division, where they are comparable with those at the International Mathematical Olympiad but more ingenious and less technical). Second, it allows the participants to choose problems they like, as for each paper the participant's score is the sum of his/her 3 best answers.
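The town-scoring rule just described can be sketched as a small hypothetical helper (the actual tallying is of course done by the organizers; the handling of towns fielding fewer than N students is an assumption of this sketch):

```python
def town_score(student_scores, population):
    """Average of the town's N best scores, where N is the population in
    hundreds of thousands, with a minimum value of N = 5 (as described above)."""
    n = max(5, population // 100_000)
    best = sorted(student_scores, reverse=True)[:n]
    # If the town fields fewer than n students, the average is still taken
    # over n, which penalizes small turnouts (an assumption of this sketch).
    return sum(best) / n
```

For example, a town of 300,000 has N = max(5, 3) = 5, so its score is the average of its 5 best students' scores.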
The problems are mostly combinatorial, with the occasional geometry, number theory or algebra problem. They have a different flavour to problems seen in other mathematics competitions, and are usually quite challenging. Some of the problems have become classics, in particular two from the Autumn 1984 paper.

History
The first competition, held in the 1979–1980 academic year, was called the Olympiad of Three Towns. They were Moscow, Kiev and Riga. The reputation of the competition grew, and the following year it was called the Tournament of the Towns. The Tournament of the Towns was almost closed down in its early years of development, but in 1984 it gained recognition when it became a sub-committee of the USSR Academy of Sciences.

Awards
Diplomas are awarded by the Central Committee to students who have achieved high scores (after their papers have been rechecked by the Central Jury). Different certificates are also awarded by Local Committees.

Summer Conferences
Students performing outstandingly (higher than Diploma) receive an invitation to the Summer Conference.
https://en.wikipedia.org/wiki/Generalizations%20of%20Pauli%20matrices
In mathematics and physics, in particular quantum information, the term generalized Pauli matrices refers to families of matrices which generalize the (linear algebraic) properties of the Pauli matrices. Here, a few classes of such matrices are summarized.

Multi-qubit Pauli matrices (Hermitian)
This method of generalizing the Pauli matrices refers to a generalization from a single 2-level system (qubit) to multiple such systems. In particular, the generalized Pauli matrices for a group of N qubits form the set of matrices generated by all possible products of Pauli matrices on any of the qubits. The vector space of a single qubit is ℂ² and the vector space of N qubits is (ℂ²)^⊗N ≅ ℂ^(2^N). We use the tensor product notation σ_i^(j) to refer to the operator on (ℂ²)^⊗N that acts as the Pauli matrix σ_i on the j-th qubit and as the identity on all other qubits. We also write σ_0 for the 2×2 identity, so that σ_0^(j) is the identity for any j. Then the multi-qubit Pauli matrices are all matrices of the form σ_{a_1} ⊗ σ_{a_2} ⊗ … ⊗ σ_{a_N}, i.e., indexed by a vector (a_1, …, a_N) of integers between 0 and 3. Thus there are 4^N such generalized Pauli matrices if we include the identity and 4^N − 1 if we do not.

Higher spin matrices (Hermitian)
The traditional Pauli matrices are the matrix representation of the Lie algebra generators J_x, J_y, and J_z in the 2-dimensional irreducible representation of SU(2), corresponding to a spin-1/2 particle. These generate the Lie group SU(2). For a general particle of spin s, one instead utilizes the (2s + 1)-dimensional irreducible representation.

Generalized Gell-Mann matrices (Hermitian)
This method of generalizing the Pauli matrices refers to a generalization from 2-level systems (Pauli matrices acting on qubits) to 3-level systems (Gell-Mann matrices acting on qutrits) and generic d-level systems (generalized Gell-Mann matrices acting on qudits).

Construction
Let E_{jk} be the d×d matrix with 1 in the (j, k)-th entry and 0 elsewhere. Consider the space of d×d complex matrices, ℂ^{d×d}, for a fixed d. Define the following matrices:
f_{k,j}^d = E_{kj} + E_{jk}, for k < j;
f_{k,j}^d = −i(E_{kj} − E_{jk}), for k > j;
h_1^d = I_d, the identity matrix;
h_k^d = h_k^{d−1} ⊕ 0, for 1 < k < d;
h_d^d = √(2/(d(d − 1))) (h_1^{d−1} ⊕ (1 − d)).
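A pure-Python sketch of this construction (matrices as nested lists of complex numbers, 0-based indices; the diagonal Cartan family is written in its equivalent closed form rather than recursively, which is a choice of presentation, not of substance):

```python
import math

def gell_mann(d):
    """Generalized Gell-Mann matrices in dimension d (all d**2 - 1 of them)."""
    mats = []
    # Symmetric and antisymmetric off-diagonal families:
    # E_jk + E_kj and -i(E_jk - E_kj) for j < k.
    for j in range(d):
        for k in range(j + 1, d):
            S = [[0j] * d for _ in range(d)]
            S[j][k] = S[k][j] = 1 + 0j
            mats.append(S)
            A = [[0j] * d for _ in range(d)]
            A[j][k] = -1j
            A[k][j] = 1j
            mats.append(A)
    # Diagonal (Cartan) family, in closed form:
    # sqrt(2/(k(k+1))) * (E_00 + ... + E_{k-1,k-1} - k * E_kk).
    for k in range(1, d):
        c = math.sqrt(2.0 / (k * (k + 1)))
        D = [[0j] * d for _ in range(d)]
        for l in range(k):
            D[l][l] = c + 0j
        D[k][k] = -k * c + 0j
        mats.append(D)
    return mats

# d = 2 recovers the Pauli matrices sigma_x, sigma_y, sigma_z in that order.
```

For d = 3 this yields the eight Gell-Mann matrices; each output matrix is Hermitian, traceless, and of Hilbert–Schmidt norm squared 2, and distinct matrices are Hilbert–Schmidt orthogonal.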
The collection of matrices defined above without the identity matrix are called the generalized Gell-Mann matrices, in dimension d. The symbol ⊕ (utilized in the Cartan subalgebra above) means matrix direct sum. The generalized Gell-Mann matrices are Hermitian and traceless by construction, just like the Pauli matrices. One can also check that they are orthogonal in the Hilbert–Schmidt inner product on ℂ^{d×d}. By dimension count, one sees that, together with the identity, they span the vector space of all d×d complex matrices, 𝔤𝔩(d, ℂ). They then provide a Lie-algebra-generator basis acting on the fundamental representation of 𝔰𝔲(d). In dimensions d = 2 and 3, the above construction recovers the Pauli and Gell-Mann matrices, respectively.

Sylvester's generalized Pauli matrices (non-Hermitian)
A particularly notable generalization of the Pauli matrices was constructed by James Joseph Sylvester in 1882. These are known as "Weyl–Heisenberg matrices" as well as "generalized Pauli matrices". Framing
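The text is cut off here, but Sylvester's construction is standardly given by the "clock and shift" pair; a minimal sketch, assuming the common convention ZX = ωXZ with ω = e^(2πi/d):

```python
import cmath

def sylvester_pair(d):
    """Shift ('X') and clock ('Z') matrices in dimension d: X cyclically
    permutes the basis vectors and Z multiplies them by powers of a
    primitive d-th root of unity. Unitary but not Hermitian for d > 2."""
    omega = cmath.exp(2j * cmath.pi / d)
    X = [[1 + 0j if i == (j + 1) % d else 0j for j in range(d)] for i in range(d)]
    Z = [[omega ** i if i == j else 0j for j in range(d)] for i in range(d)]
    return X, Z

def matmul(A, B):
    """Product of two square matrices given as nested lists."""
    d = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]
```

These satisfy X^d = Z^d = I and the Weyl commutation relation ZX = ωXZ; for d = 2 the pair reduces to σ_x and σ_z with ω = −1, recovering their anticommutation.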