Weird number

In number theory, a weird number is a natural number that is abundant but not semiperfect.[1][2] In other words, the sum of the proper divisors (divisors including 1 but not the number itself) of the number is greater than the number, but no subset of those divisors sums to the number itself.

Examples

The smallest weird number is 70. Its proper divisors are 1, 2, 5, 7, 10, 14, and 35; these sum to 74, but no subset of them sums to 70. The number 12, for example, is abundant but not weird, because the proper divisors of 12 are 1, 2, 3, 4, and 6, which sum to 16; but 2 + 4 + 6 = 12. The first few weird numbers are 70, 836, 4030, 5830, 7192, 7912, 9272, 10430, 10570, 10792, 10990, 11410, 11690, 12110, 12530, 12670, 13370, 13510, 13790, 13930, 14770, ... (sequence A006037 in the OEIS).

Properties

Infinitely many weird numbers exist.[3] For example, 70p is weird for all primes p ≥ 149. In fact, the set of weird numbers has positive asymptotic density.[4] It is not known whether any odd weird numbers exist; if so, they must be greater than $10^{21}$.[5] Sidney Kravitz has shown that for k a positive integer, Q a prime exceeding $2^{k}$, and $R={\frac {2^{k}Q-(Q+1)}{(Q+1)-2^{k}}}$ also prime and greater than $2^{k}$, then $n=2^{k-1}QR$ is a weird number.[6] With this formula, he found the large weird number $n=2^{56}\cdot (2^{61}-1)\cdot 153722867280912929\ \approx \ 2\cdot 10^{52}.$

Primitive weird numbers

A property of weird numbers is that if n is weird and p is a prime greater than the sum of divisors σ(n), then pn is also weird.[4] This leads to the definition of primitive weird numbers: weird numbers that are not a multiple of other weird numbers (sequence A002975 in the OEIS). Among the 1765 weird numbers less than one million, there are 24 primitive weird numbers.
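The defining condition (abundant but not semiperfect) can be checked directly by brute force. The following is a minimal Python sketch (function names are mine, not from the literature); it uses a subset-sum dynamic program for the semiperfect test and confirms 70 and 836 as the only weird numbers below 1000.

```python
def proper_divisors(n):
    """Divisors of n including 1 but excluding n itself."""
    return [d for d in range(1, n) if n % d == 0]

def is_semiperfect(n):
    """True if some subset of the proper divisors sums to n (subset-sum DP)."""
    reachable = {0}
    for d in proper_divisors(n):
        reachable |= {s + d for s in reachable if s + d <= n}
    return n in reachable

def is_weird(n):
    """Weird = abundant (proper divisor sum exceeds n) but not semiperfect."""
    return sum(proper_divisors(n)) > n and not is_semiperfect(n)

print([n for n in range(1, 1000) if is_weird(n)])  # [70, 836]
```

The subset-sum step caps partial sums at n, so the check runs quickly even though subset sum is NP-hard in general.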
The construction of Kravitz yields primitive weird numbers, since all weird numbers of the form $2^{k}pq$ are primitive, but the existence of infinitely many k and Q which yield a prime R is not guaranteed. It is conjectured that there exist infinitely many primitive weird numbers, and Melfi has shown that the infiniteness of primitive weird numbers is a consequence of Cramér's conjecture.[7] Primitive weird numbers with as many as 16 prime factors and 14712 digits have been found.[8] See also • Untouchable number References 1. Benkoski, Stan (August–September 1972). "E2308 (in Problems and Solutions)". The American Mathematical Monthly. 79 (7): 774. doi:10.2307/2316276. JSTOR 2316276. 2. Richard K. Guy (2004). Unsolved Problems in Number Theory. Springer-Verlag. ISBN 0-387-20860-7. OCLC 54611248. Section B2. 3. Sándor, József; Mitrinović, Dragoslav S.; Crstici, Borislav, eds. (2006). Handbook of number theory I. Dordrecht: Springer-Verlag. pp. 113–114. ISBN 1-4020-4215-9. Zbl 1151.11300. 4. Benkoski, Stan; Erdős, Paul (April 1974). "On Weird and Pseudoperfect Numbers". Mathematics of Computation. 28 (126): 617–623. doi:10.2307/2005938. JSTOR 2005938. MR 0347726. Zbl 0279.10005. 5. Sloane, N. J. A. (ed.). "Sequence A006037 (Weird numbers: abundant (A005101) but not pseudoperfect (A005835))". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. -- comments concerning odd weird numbers 6. Kravitz, Sidney (1976). "A search for large weird numbers". Journal of Recreational Mathematics. Baywood Publishing. 9 (2): 82–85. Zbl 0365.10003. 7. Melfi, Giuseppe (2015). "On the conditional infiniteness of primitive weird numbers". Journal of Number Theory. Elsevier. 147: 508–514. doi:10.1016/j.jnt.2014.07.024. 8. Amato, Gianluca; Hasler, Maximilian; Melfi, Giuseppe; Parton, Maurizio (2019). "Primitive abundant and weird numbers with many prime factors". Journal of Number Theory. Elsevier. 201: 436–459. arXiv:1802.07178. doi:10.1016/j.jnt.2019.02.027. S2CID 119136924. 
External links • Weisstein, Eric W. "Weird number". MathWorld.
Directed acyclic graph In mathematics, particularly graph theory, and computer science, a directed acyclic graph (DAG) is a directed graph with no directed cycles. That is, it consists of vertices and edges (also called arcs), with each edge directed from one vertex to another, such that following those directions will never form a closed loop. A directed graph is a DAG if and only if it can be topologically ordered, by arranging the vertices as a linear ordering that is consistent with all edge directions. DAGs have numerous scientific and computational applications, ranging from biology (evolution, family trees, epidemiology) to information science (citation networks) to computation (scheduling). Directed acyclic graphs are sometimes instead called acyclic directed graphs[1] or acyclic digraphs.[2] Definitions A graph is formed by vertices and by edges connecting pairs of vertices, where the vertices can be any kind of object that is connected in pairs by edges. In the case of a directed graph, each edge has an orientation, from one vertex to another vertex. A path in a directed graph is a sequence of edges having the property that the ending vertex of each edge in the sequence is the same as the starting vertex of the next edge in the sequence; a path forms a cycle if the starting vertex of its first edge equals the ending vertex of its last edge. A directed acyclic graph is a directed graph that has no cycles.[1][2][3] A vertex v of a directed graph is said to be reachable from another vertex u when there exists a path that starts at u and ends at v. As a special case, every vertex is considered to be reachable from itself (by a path with zero edges). 
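The definitions above (directed edges, paths, and reachability) can be illustrated with a small adjacency-list sketch in Python; the dictionary representation and function name are my own choices, not part of any standard API.

```python
def reachable_from(graph, u):
    """All vertices reachable from u, including u itself (the zero-edge path),
    found by an iterative depth-first search over an adjacency-list dict."""
    seen, stack = set(), [u]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph.get(v, []))
    return seen

# A small DAG with edges u -> v, u -> w, and v -> w.
dag = {"u": ["v", "w"], "v": ["w"], "w": []}
print(reachable_from(dag, "v"))  # {'v', 'w'}
```

Note that the vertex itself is always in the result, matching the convention that every vertex is reachable from itself by a path with zero edges.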
If a vertex can reach itself via a nontrivial path (a path with one or more edges), then that path is a cycle, so another way to define directed acyclic graphs is that they are the graphs in which no vertex can reach itself via a nontrivial path.[4]

Mathematical properties

Reachability relation, transitive closure, and transitive reduction

The reachability relation of a DAG can be formalized as a partial order ≤ on the vertices of the DAG. In this partial order, two vertices u and v are ordered as u ≤ v exactly when there exists a directed path from u to v in the DAG; that is, when u can reach v (or v is reachable from u).[5] However, different DAGs may give rise to the same reachability relation and the same partial order.[6] For example, a DAG with two edges u → v and v → w has the same reachability relation as the DAG with three edges u → v, v → w, and u → w. Both of these DAGs produce the same partial order, in which the vertices are ordered as u ≤ v ≤ w. The transitive closure of a DAG is the graph with the most edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the reachability relation ≤ of the DAG, and may therefore be thought of as a direct translation of the reachability relation ≤ into graph-theoretic terms. The same method of translating partial orders into DAGs works more generally: for every finite partially ordered set (S, ≤), the graph that has a vertex for every element of S and an edge for every pair of elements in ≤ is automatically a transitively closed DAG, and has (S, ≤) as its reachability relation. In this way, every finite partially ordered set can be represented as a DAG. The transitive reduction of a DAG is the graph with the fewest edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the covering relation of the reachability relation ≤ of the DAG.
It is a subgraph of the DAG, formed by discarding the edges u → v for which the DAG also contains a longer directed path from u to v. Like the transitive closure, the transitive reduction is uniquely defined for DAGs. In contrast, for a directed graph that is not acyclic, there can be more than one minimal subgraph with the same reachability relation.[7] Transitive reductions are useful in visualizing the partial orders they represent, because they have fewer edges than other graphs representing the same orders and therefore lead to simpler graph drawings. A Hasse diagram of a partial order is a drawing of the transitive reduction in which the orientation of every edge is shown by placing the starting vertex of the edge in a lower position than its ending vertex.[8]

Topological ordering

A topological ordering of a directed graph is an ordering of its vertices into a sequence, such that for every edge the start vertex of the edge occurs earlier in the sequence than the ending vertex of the edge. A graph that has a topological ordering cannot have any cycles, because the edge into the earliest vertex of a cycle would have to be oriented the wrong way. Therefore, every graph with a topological ordering is acyclic. Conversely, every directed acyclic graph has at least one topological ordering.
The existence of a topological ordering can therefore be used as an equivalent definition of directed acyclic graphs: they are exactly the graphs that have topological orderings.[2] In general, this ordering is not unique; a DAG has a unique topological ordering if and only if it has a directed path containing all the vertices, in which case the ordering is the same as the order in which the vertices appear in the path.[9] The family of topological orderings of a DAG is the same as the family of linear extensions of the reachability relation for the DAG,[10] so any two graphs representing the same partial order have the same set of topological orders.

Combinatorial enumeration

The graph enumeration problem of counting directed acyclic graphs was studied by Robinson (1973).[11] The number of DAGs on n labeled vertices, for n = 0, 1, 2, 3, … (without restrictions on the order in which these numbers appear in a topological ordering of the DAG) is 1, 1, 3, 25, 543, 29281, 3781503, … (sequence A003024 in the OEIS). These numbers may be computed by the recurrence relation $a_{n}=\sum _{k=1}^{n}(-1)^{k-1}{n \choose k}2^{k(n-k)}a_{n-k}.$[11] Eric W. Weisstein conjectured,[12] and McKay et al. (2004) proved, that the same numbers count the (0,1) matrices for which all eigenvalues are positive real numbers. The proof is bijective: a matrix A is an adjacency matrix of a DAG if and only if A + I is a (0,1) matrix with all eigenvalues positive, where I denotes the identity matrix. Because a DAG cannot have self-loops, its adjacency matrix must have a zero diagonal, so adding I preserves the property that all matrix coefficients are 0 or 1.[13]

Related families of graphs

A multitree (also called a strongly unambiguous graph or a mangrove) is a DAG in which there is at most one directed path between any two vertices. Equivalently, it is a DAG in which the subgraph reachable from any vertex induces an undirected tree.[14] A polytree (also called a directed tree) is a multitree formed by orienting the edges of an undirected tree.[15] An arborescence is a polytree formed by orienting the edges of an undirected tree away from a particular vertex, called the root of the arborescence.

Computational problems

Topological sorting and recognition

Topological sorting is the algorithmic problem of finding a topological ordering of a given DAG. It can be solved in linear time.[16] Kahn's algorithm for topological sorting builds the vertex ordering directly. It maintains a list of vertices that have no incoming edges from other vertices that have not already been included in the partially constructed topological ordering; initially this list consists of the vertices with no incoming edges at all. Then, it repeatedly adds one vertex from this list to the end of the partially constructed topological ordering, and checks whether its neighbors should be added to the list.
The algorithm terminates when all vertices have been processed in this way.[17] Alternatively, a topological ordering may be constructed by reversing a postorder numbering of a depth-first search graph traversal.[16] It is also possible to check whether a given directed graph is a DAG in linear time, either by attempting to find a topological ordering and then testing for each edge whether the resulting ordering is valid[18] or alternatively, for some topological sorting algorithms, by verifying that the algorithm successfully orders all the vertices without meeting an error condition.[17] Construction from cyclic graphs Any undirected graph may be made into a DAG by choosing a total order for its vertices and directing every edge from the earlier endpoint in the order to the later endpoint. The resulting orientation of the edges is called an acyclic orientation. Different total orders may lead to the same acyclic orientation, so an n-vertex graph can have fewer than n! acyclic orientations. The number of acyclic orientations is equal to |χ(−1)|, where χ is the chromatic polynomial of the given graph.[19] Any directed graph may be made into a DAG by removing a feedback vertex set or a feedback arc set, a set of vertices or edges (respectively) that touches all cycles. However, the smallest such set is NP-hard to find.[20] An arbitrary directed graph may also be transformed into a DAG, called its condensation, by contracting each of its strongly connected components into a single supervertex.[21] When the graph is already acyclic, its smallest feedback vertex sets and feedback arc sets are empty, and its condensation is the graph itself. 
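Kahn's algorithm and the acyclicity check described above can be sketched in a few lines of Python (the adjacency-list representation and names are my own):

```python
from collections import deque

def kahn_topological_sort(graph):
    """Kahn's algorithm: repeatedly remove a vertex with no remaining
    incoming edges. Raises ValueError if the graph contains a cycle."""
    indegree = {v: 0 for v in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] += 1
    queue = deque(v for v, d in indegree.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for t in graph[v]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    if len(order) != len(graph):
        # Some vertices were never freed of incoming edges: a cycle exists.
        raise ValueError("graph contains a cycle")
    return order

dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(kahn_topological_sort(dag))  # ['a', 'b', 'c', 'd']
```

The final length check is exactly the error condition mentioned above: if the algorithm cannot order every vertex, the input was not acyclic, so the same routine doubles as a linear-time DAG recognizer.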
Transitive closure and transitive reduction

The transitive closure of a given DAG, with n vertices and m edges, may be constructed in time O(mn) by using either breadth-first search or depth-first search to test reachability from each vertex.[22] Alternatively, it can be solved in time O(n^ω) where ω < 2.373 is the exponent for matrix multiplication algorithms; this is a theoretical improvement over the O(mn) bound for dense graphs.[23] In all of these transitive closure algorithms, it is possible to distinguish pairs of vertices that are reachable by at least one path of length two or more from pairs that can only be connected by a length-one path. The transitive reduction consists of the edges that form length-one paths that are the only paths connecting their endpoints. Therefore, the transitive reduction can be constructed in the same asymptotic time bounds as the transitive closure.[24]

Closure problem

The closure problem takes as input a vertex-weighted directed acyclic graph and seeks the minimum (or maximum) weight of a closure – a set of vertices C, such that no edges leave C. The problem may be formulated for directed graphs without the assumption of acyclicity, but with no greater generality, because in this case it is equivalent to the same problem on the condensation of the graph. It may be solved in polynomial time using a reduction to the maximum flow problem.[25]

Path algorithms

Some algorithms become simpler when used on DAGs instead of general graphs, based on the principle of topological ordering.
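Processing vertices in topological order gives linear-time shortest and longest paths in a DAG by relaxing each edge exactly once. A minimal Python sketch (the graph, weights, and function name are hypothetical examples of mine):

```python
def dag_path_lengths(graph, weights, topo_order, longest=False):
    """Shortest (or longest) path length from the first vertex of topo_order
    to every reachable vertex, relaxing edges in topological order."""
    better = max if longest else min
    INF = float("inf")
    dist = {v: (-INF if longest else INF) for v in graph}
    dist[topo_order[0]] = 0
    for u in topo_order:
        if abs(dist[u]) == INF:
            continue  # u is not reachable from the source
        for v in graph[u]:
            dist[v] = better(dist[v], dist[u] + weights[(u, v)])
    return dist

# Hypothetical DAG: s -> a -> t (weight 1 each) and a direct edge s -> t (weight 5).
graph = {"s": ["a", "t"], "a": ["t"], "t": []}
weights = {("s", "a"): 1, ("a", "t"): 1, ("s", "t"): 5}
topo = ["s", "a", "t"]
print(dag_path_lengths(graph, weights, topo)["t"])                # 2 (shortest)
print(dag_path_lengths(graph, weights, topo, longest=True)["t"])  # 5 (longest)
```

Swapping min for max is all that distinguishes the two problems here, which is why longest paths are easy on DAGs while being NP-hard on general graphs.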
For example, it is possible to find shortest paths and longest paths from a given starting vertex in DAGs in linear time by processing the vertices in a topological order, and calculating the path length for each vertex to be the minimum or maximum length obtained via any of its incoming edges.[26] In contrast, for arbitrary graphs the shortest path may require slower algorithms such as Dijkstra's algorithm or the Bellman–Ford algorithm,[27] and longest paths in arbitrary graphs are NP-hard to find.[28] Applications Scheduling Directed acyclic graph representations of partial orderings have many applications in scheduling for systems of tasks with ordering constraints.[29] An important class of problems of this type concern collections of objects that need to be updated, such as the cells of a spreadsheet after one of the cells has been changed, or the object files of a piece of computer software after its source code has been changed. In this context, a dependency graph is a graph that has a vertex for each object to be updated, and an edge connecting two objects whenever one of them needs to be updated earlier than the other. A cycle in this graph is called a circular dependency, and is generally not allowed, because there would be no way to consistently schedule the tasks involved in the cycle. Dependency graphs without circular dependencies form DAGs.[30] For instance, when one cell of a spreadsheet changes, it is necessary to recalculate the values of other cells that depend directly or indirectly on the changed cell. For this problem, the tasks to be scheduled are the recalculations of the values of individual cells of the spreadsheet. Dependencies arise when an expression in one cell uses a value from another cell. In such a case, the value that is used must be recalculated earlier than the expression that uses it. 
Topologically ordering the dependency graph, and using this topological order to schedule the cell updates, allows the whole spreadsheet to be updated with only a single evaluation per cell.[31] Similar problems of task ordering arise in makefiles for program compilation[31] and instruction scheduling for low-level computer program optimization.[32] A somewhat different DAG-based formulation of scheduling constraints is used by the program evaluation and review technique (PERT), a method for management of large human projects that was one of the first applications of DAGs. In this method, the vertices of a DAG represent milestones of a project rather than specific tasks to be performed. Instead, a task or activity is represented by an edge of a DAG, connecting two milestones that mark the beginning and completion of the task. Each such edge is labeled with an estimate for the amount of time that it will take a team of workers to perform the task. The longest path in this DAG represents the critical path of the project, the one that controls the total time for the project. Individual milestones can be scheduled according to the lengths of the longest paths ending at their vertices.[33] Data processing networks A directed acyclic graph may be used to represent a network of processing elements. In this representation, data enters a processing element through its incoming edges and leaves the element through its outgoing edges. For instance, in electronic circuit design, static combinational logic blocks can be represented as an acyclic system of logic gates that computes a function of an input, where the input and output of the function are represented as individual bits. 
In general, the output of these blocks cannot be used as the input unless it is captured by a register or state element which maintains its acyclic properties.[34] Electronic circuit schematics either on paper or in a database are a form of directed acyclic graphs using instances or components to form a directed reference to a lower level component. Electronic circuits themselves are not necessarily acyclic or directed. Dataflow programming languages describe systems of operations on data streams, and the connections between the outputs of some operations and the inputs of others. These languages can be convenient for describing repetitive data processing tasks, in which the same acyclically-connected collection of operations is applied to many data items. They can be executed as a parallel algorithm in which each operation is performed by a parallel process as soon as another set of inputs becomes available to it.[35] In compilers, straight line code (that is, sequences of statements without loops or conditional branches) may be represented by a DAG describing the inputs and outputs of each of the arithmetic operations performed within the code. This representation allows the compiler to perform common subexpression elimination efficiently.[36] At a higher level of code organization, the acyclic dependencies principle states that the dependencies between modules or components of a large software system should form a directed acyclic graph.[37] Feedforward neural networks are another example. Causal structures Main article: Bayesian network Graphs in which vertices represent events occurring at a definite time, and where the edges always point from the early time vertex to a late time vertex of the edge, are necessarily directed and acyclic. The lack of a cycle follows because the time associated with a vertex always increases as you follow any path in the graph so you can never return to a vertex on a path. 
This reflects our natural intuition that causality means events can only affect the future, never the past, and thus we have no causal loops. An example of this type of directed acyclic graph is found in the causal set approach to quantum gravity, though in this case the graphs considered are transitively complete. In the version history example below, each version of the software is associated with a unique time, typically the time the version was saved, committed or released. In the citation graph examples below, the documents are published at one time and can only refer to older documents. Sometimes events are not associated with a specific physical time. Provided that pairs of events have a purely causal relationship, that is, that the edges represent causal relations between the events, we will have a directed acyclic graph.[38] For instance, a Bayesian network represents a system of probabilistic events as vertices in a directed acyclic graph, in which the likelihood of an event may be calculated from the likelihoods of its predecessors in the DAG.[39] In this context, the moral graph of a DAG is the undirected graph created by adding an (undirected) edge between all parents of the same vertex (sometimes called marrying), and then replacing all directed edges by undirected edges.[40] Another type of graph with a similar causal structure is an influence diagram, the vertices of which represent either decisions to be made or unknown information, and the edges of which represent causal influences from one vertex to another.[41] In epidemiology, for instance, these diagrams are often used to estimate the expected value of different choices for intervention.[42][43] The converse is also true: in any application represented by a directed acyclic graph there is a causal structure, either an explicit order or time in the example, or an order that can be derived from the graph structure.
This follows because all directed acyclic graphs have a topological ordering, i.e. there is at least one way to put the vertices in an order such that all edges point in the same direction along that order. Genealogy and version history Family trees may be seen as directed acyclic graphs, with a vertex for each family member and an edge for each parent-child relationship.[44] Despite the name, these graphs are not necessarily trees because of the possibility of marriages between relatives (so a child has a common ancestor on both the mother's and father's side) causing pedigree collapse.[45] The graphs of matrilineal descent (mother-daughter relationships) and patrilineal descent (father-son relationships) are trees within this graph. Because no one can become their own ancestor, family trees are acyclic.[46] The version history of a distributed revision control system, such as Git, generally has the structure of a directed acyclic graph, in which there is a vertex for each revision and an edge connecting pairs of revisions that were directly derived from each other. These are not trees in general due to merges.[47] In many randomized algorithms in computational geometry, the algorithm maintains a history DAG representing the version history of a geometric structure over the course of a sequence of changes to the structure. For instance in a randomized incremental algorithm for Delaunay triangulation, the triangulation changes by replacing one triangle by three smaller triangles when each point is added, and by "flip" operations that replace pairs of triangles by a different pair of triangles. The history DAG for this algorithm has a vertex for each triangle constructed as part of the algorithm, and edges from each triangle to the two or three other triangles that replace it. 
This structure allows point location queries to be answered efficiently: to find the location of a query point q in the Delaunay triangulation, follow a path in the history DAG, at each step moving to the replacement triangle that contains q. The final triangle reached in this path must be the Delaunay triangle that contains q.[48]

Citation graphs

In a citation graph the vertices are documents with a single publication date. The edges represent the citations from the bibliography of one document to other necessarily earlier documents. The classic example comes from the citations between academic papers, as pointed out in the 1965 article "Networks of Scientific Papers"[49] by Derek J. de Solla Price, who went on to produce the first model of a citation network, the Price model.[50] In this case the citation count of a paper is just the in-degree of the corresponding vertex of the citation network. This is an important measure in citation analysis. Court judgements provide another example, as judges support their conclusions in one case by recalling earlier decisions made in previous cases. A final example is provided by patents, which must refer to earlier prior art: earlier patents which are relevant to the current patent claim. By taking the special properties of directed acyclic graphs into account, one can analyse citation networks with techniques not available when analysing the general graphs considered in many studies using network analysis. For instance, transitive reduction gives new insights into the citation distributions found in different applications, highlighting clear differences in the mechanisms creating citation networks in different contexts.[51] Another technique is main path analysis, which traces the citation links and suggests the most significant citation chains in a given citation graph.
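The transitive reduction used in citation analysis keeps a citation edge only when it is not implied by a chain of intermediate citations. A brute-force sketch for small citation DAGs (the paper names and function names are hypothetical; this assumes the input is acyclic):

```python
def reachable(graph, u):
    """Vertices reachable from u by paths with at least one edge."""
    seen, stack = set(), list(graph[u])
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph[v])
    return seen

def transitive_reduction(graph):
    """Keep edge u -> v only if v is not also reachable from u
    through some other successor (i.e. by a path of length two or more)."""
    reduced = {}
    for u, succs in graph.items():
        indirect = set()
        for w in succs:
            indirect |= reachable(graph, w)
        reduced[u] = [v for v in succs if v not in indirect]
    return reduced

# Paper C cites both B and A; B cites A. The direct citation C -> A is
# implied by the chain C -> B -> A, so the reduction drops it.
citations = {"C": ["B", "A"], "B": ["A"], "A": []}
print(transitive_reduction(citations))  # {'C': ['B'], 'B': ['A'], 'A': []}
```

Production analyses of large citation networks would use the asymptotically faster closure-based algorithms described earlier; this quadratic-ish sketch only illustrates which edges survive.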
The Price model is too simple to be a realistic model of a citation network but it is simple enough to allow for analytic solutions for some of its properties. Many of these can be found by using results derived from the undirected version of the Price model, the Barabási–Albert model. However, since Price's model gives a directed acyclic graph, it is a useful model when looking for analytic calculations of properties unique to directed acyclic graphs. For instance, the length of the longest path, from the n-th node added to the network to the first node in the network, scales as[52] $\ln(n)$. Data compression Directed acyclic graphs may also be used as a compact representation of a collection of sequences. In this type of application, one finds a DAG in which the paths form the given sequences. When many of the sequences share the same subsequences, these shared subsequences can be represented by a shared part of the DAG, allowing the representation to use less space than it would take to list out all of the sequences separately. For example, the directed acyclic word graph is a data structure in computer science formed by a directed acyclic graph with a single source and with edges labeled by letters or symbols; the paths from the source to the sinks in this graph represent a set of strings, such as English words.[53] Any set of sequences can be represented as paths in a tree, by forming a tree vertex for every prefix of a sequence and making the parent of one of these vertices represent the sequence with one fewer element; the tree formed in this way for a set of strings is called a trie. A directed acyclic word graph saves space over a trie by allowing paths to diverge and rejoin, so that a set of words with the same possible suffixes can be represented by a single tree vertex.[54] The same idea of using a DAG to represent a family of paths occurs in the binary decision diagram,[55][56] a DAG-based data structure for representing binary functions. 
In a binary decision diagram, each non-sink vertex is labeled by the name of a binary variable, and each sink and each edge is labeled by a 0 or 1. The function value for any truth assignment to the variables is the value at the sink found by following a path, starting from the single source vertex, that at each non-sink vertex follows the outgoing edge labeled with the value of that vertex's variable. Just as directed acyclic word graphs can be viewed as a compressed form of tries, binary decision diagrams can be viewed as compressed forms of decision trees that save space by allowing paths to rejoin when they agree on the results of all remaining decisions.[57] References 1. Thulasiraman, K.; Swamy, M. N. S. (1992), "5.7 Acyclic Directed Graphs", Graphs: Theory and Algorithms, John Wiley and Son, p. 118, ISBN 978-0-471-51356-8. 2. Bang-Jensen, Jørgen (2008), "2.1 Acyclic Digraphs", Digraphs: Theory, Algorithms and Applications, Springer Monographs in Mathematics (2nd ed.), Springer-Verlag, pp. 32–34, ISBN 978-1-84800-997-4. 3. Christofides, Nicos (1975), Graph theory: an algorithmic approach, Academic Press, pp. 170–174. 4. Mitrani, I. (1982), Simulation Techniques for Discrete Event Systems, Cambridge Computer Science Texts, vol. 14, Cambridge University Press, p. 27, ISBN 9780521282826. 5. Kozen, Dexter (1992), The Design and Analysis of Algorithms, Monographs in Computer Science, Springer, p. 9, ISBN 978-0-387-97687-7. 6. Banerjee, Utpal (1993), "Exercise 2(c)", Loop Transformations for Restructuring Compilers: The Foundations, Springer, p. 19, Bibcode:1993ltfr.book.....B, ISBN 978-0-7923-9318-4. 7. Bang-Jensen, Jørgen; Gutin, Gregory Z. (2008), "2.3 Transitive Digraphs, Transitive Closures and Reductions", Digraphs: Theory, Algorithms and Applications, Springer Monographs in Mathematics, Springer, pp. 36–39, ISBN 978-1-84800-998-1. 8. Jungnickel, Dieter (2012), Graphs, Networks and Algorithms, Algorithms and Computation in Mathematics, vol. 5, Springer, pp. 
92–93, ISBN 978-3-642-32278-5. 9. Sedgewick, Robert; Wayne, Kevin (2011), "4,2,25 Unique topological ordering", Algorithms (4th ed.), Addison-Wesley, pp. 598–599, ISBN 978-0-13-276256-4. 10. Bender, Edward A.; Williamson, S. Gill (2005), "Example 26 (Linear extensions – topological sorts)", A Short Course in Discrete Mathematics, Dover Books on Computer Science, Courier Dover Publications, p. 142, ISBN 978-0-486-43946-4. 11. Robinson, R. W. (1973), "Counting labeled acyclic digraphs", in Harary, F. (ed.), New Directions in the Theory of Graphs, Academic Press, pp. 239–273. See also Harary, Frank; Palmer, Edgar M. (1973), Graphical Enumeration, Academic Press, p. 19, ISBN 978-0-12-324245-7. 12. Weisstein, Eric W., "Weisstein's Conjecture", MathWorld 13. McKay, B. D.; Royle, G. F.; Wanless, I. M.; Oggier, F. E.; Sloane, N. J. A.; Wilf, H. (2004), "Acyclic digraphs and eigenvalues of (0,1)-matrices", Journal of Integer Sequences, 7: 33, arXiv:math/0310423, Bibcode:2004JIntS...7...33M, Article 04.3.3. 14. Furnas, George W.; Zacks, Jeff (1994), "Multitrees: enriching and reusing hierarchical structure", Proc. SIGCHI conference on Human Factors in Computing Systems (CHI '94), pp. 330–336, doi:10.1145/191666.191778, ISBN 978-0897916509, S2CID 18710118. 15. Rebane, George; Pearl, Judea (1987), "The recovery of causal poly-trees from statistical data", in Proc. 3rd Annual Conference on Uncertainty in Artificial Intelligence (UAI 1987), Seattle, WA, USA, July 1987 (PDF), pp. 222–228. 16. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990], Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, ISBN 0-262-03293-7 Section 22.4, Topological sort, pp. 549–552. 17. Jungnickel (2012), pp. 50–51. 18. For depth-first search based topological sorting algorithm, this validity check can be interleaved with the topological sorting algorithm itself; see e.g. Skiena, Steven S. (2009), The Algorithm Design Manual, Springer, pp. 
179–181, ISBN 978-1-84800-070-4. 19. Stanley, Richard P. (1973), "Acyclic orientations of graphs" (PDF), Discrete Mathematics, 5 (2): 171–178, doi:10.1016/0012-365X(73)90108-8. 20. Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, ISBN 0-7167-1045-5, Problems GT7 and GT8, pp. 191–192. 21. Harary, Frank; Norman, Robert Z.; Cartwright, Dorwin (1965), Structural Models: An Introduction to the Theory of Directed Graphs, John Wiley & Sons, p. 63. 22. Skiena (2009), p. 495. 23. Skiena (2009), p. 496. 24. Bang-Jensen & Gutin (2008), p. 38. 25. Picard, Jean-Claude (1976), "Maximal closure of a graph and applications to combinatorial problems", Management Science, 22 (11): 1268–1272, doi:10.1287/mnsc.22.11.1268, MR 0403596. 26. Cormen et al. 2001, Section 24.2, Single-source shortest paths in directed acyclic graphs, pp. 592–595. 27. Cormen et al. 2001, Sections 24.1, The Bellman–Ford algorithm, pp. 588–592, and 24.3, Dijkstra's algorithm, pp. 595–601. 28. Cormen et al. 2001, p. 966. 29. Skiena (2009), p. 469. 30. Al-Mutawa, H. A.; Dietrich, J.; Marsland, S.; McCartin, C. (2014), "On the shape of circular dependencies in Java programs", 23rd Australian Software Engineering Conference, IEEE, pp. 48–57, doi:10.1109/ASWEC.2014.15, ISBN 978-1-4799-3149-1, S2CID 17570052. 31. Gross, Jonathan L.; Yellen, Jay; Zhang, Ping (2013), Handbook of Graph Theory (2nd ed.), CRC Press, p. 1181, ISBN 978-1-4398-8018-0. 32. Srikant, Y. N.; Shankar, Priti (2007), The Compiler Design Handbook: Optimizations and Machine Code Generation (2nd ed.), CRC Press, pp. 19–39, ISBN 978-1-4200-4383-9. 33. Wang, John X. (2002), What Every Engineer Should Know About Decision Making Under Uncertainty, CRC Press, p. 160, ISBN 978-0-8247-4373-4. 34. Sapatnekar, Sachin (2004), Timing, Springer, p. 133, ISBN 978-1-4020-7671-8. 35. Dennis, Jack B. 
(1974), "First version of a data flow procedure language", Programming Symposium, Lecture Notes in Computer Science, vol. 19, pp. 362–376, doi:10.1007/3-540-06859-7_145, ISBN 978-3-540-06859-4. 36. Touati, Sid; de Dinechin, Benoit (2014), Advanced Backend Optimization, John Wiley & Sons, p. 123, ISBN 978-1-118-64894-0. 37. Garland, Jeff; Anthony, Richard (2003), Large-Scale Software Architecture: A Practical Guide using UML, John Wiley & Sons, p. 215, ISBN 9780470856383. 38. Gopnik, Alison; Schulz, Laura (2007), Causal Learning, Oxford University Press, p. 4, ISBN 978-0-19-803928-0. 39. Shmulevich, Ilya; Dougherty, Edward R. (2010), Probabilistic Boolean Networks: The Modeling and Control of Gene Regulatory Networks, Society for Industrial and Applied Mathematics, p. 58, ISBN 978-0-89871-692-4. 40. Cowell, Robert G.; Dawid, A. Philip; Lauritzen, Steffen L.; Spiegelhalter, David J. (1999), "3.2.1 Moralization", Probabilistic Networks and Expert Systems, Springer, pp. 31–33, ISBN 978-0-387-98767-5. 41. Dorf, Richard C. (1998), The Technology Management Handbook, CRC Press, p. 9-7, ISBN 978-0-8493-8577-3. 42. Boslaugh, Sarah (2008), Encyclopedia of Epidemiology, Volume 1, SAGE, p. 255, ISBN 978-1-4129-2816-8. 43. Pearl, Judea (1995), "Causal diagrams for empirical research", Biometrika, 82 (4): 669–709, doi:10.1093/biomet/82.4.669. 44. Kirkpatrick, Bonnie B. (April 2011), "Haplotypes versus genotypes on pedigrees", Algorithms for Molecular Biology, 6 (10): 10, doi:10.1186/1748-7188-6-10, PMC 3102622, PMID 21504603. 45. McGuffin, M. J.; Balakrishnan, R. (2005), "Interactive visualization of genealogical graphs" (PDF), IEEE Symposium on Information Visualization (INFOVIS 2005), pp. 16–23, doi:10.1109/INFVIS.2005.1532124, ISBN 978-0-7803-9464-3, S2CID 15449409. 46. 
Bender, Michael A.; Pemmasani, Giridhar; Skiena, Steven; Sumazin, Pavel (2001), "Finding least common ancestors in directed acyclic graphs", Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '01), Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, pp. 845–854, ISBN 978-0-89871-490-6. 47. Bartlang, Udo (2010), Architecture and Methods for Flexible Content Management in Peer-to-Peer Systems, Springer, p. 59, Bibcode:2010aamf.book.....B, ISBN 978-3-8348-9645-2. 48. Pach, János; Sharir, Micha, Combinatorial Geometry and Its Algorithmic Applications: The Alcalá Lectures, Mathematical surveys and monographs, vol. 152, American Mathematical Society, pp. 93–94, ISBN 978-0-8218-7533-9. 49. Price, Derek J. de Solla (July 30, 1965), "Networks of Scientific Papers" (PDF), Science, 149 (3683): 510–515, Bibcode:1965Sci...149..510D, doi:10.1126/science.149.3683.510, PMID 14325149. 50. Price, Derek J. de Solla (1976), "A general theory of bibliometric and other cumulative advantage processes", Journal of the American Society for Information Science, 27 (5): 292–306, doi:10.1002/asi.4630270505, S2CID 8536863. 51. Clough, James R.; Gollings, Jamie; Loach, Tamar V.; Evans, Tim S. (2015), "Transitive reduction of citation networks", Journal of Complex Networks, 3 (2): 189–203, arXiv:1310.8224, doi:10.1093/comnet/cnu039, S2CID 10228152. 52. Evans, T.S.; Calmon, L.; Vasiliauskaite, V. (2020), "The Longest Path in the Price Model", Scientific Reports, 10 (1): 10503, arXiv:1903.03667, Bibcode:2020NatSR..1010503E, doi:10.1038/s41598-020-67421-8, PMC 7324613, PMID 32601403 53. Crochemore, Maxime; Vérin, Renaud (1997), "Direct construction of compact directed acyclic word graphs", Combinatorial Pattern Matching, Lecture Notes in Computer Science, vol. 1264, Springer, pp. 116–129, CiteSeerX 10.1.1.53.6273, doi:10.1007/3-540-63220-4_55, ISBN 978-3-540-63220-7, S2CID 17045308. 54. Lothaire, M. 
(2005), Applied Combinatorics on Words, Encyclopedia of Mathematics and its Applications, vol. 105, Cambridge University Press, p. 18, ISBN 9780521848022. 55. Lee, C. Y. (1959), "Representation of switching circuits by binary-decision programs", Bell System Technical Journal, 38 (4): 985–999, doi:10.1002/j.1538-7305.1959.tb01585.x. 56. Akers, Sheldon B. (1978), "Binary decision diagrams", IEEE Transactions on Computers, C-27 (6): 509–516, doi:10.1109/TC.1978.1675141, S2CID 21028055. 57. Friedman, S. J.; Supowit, K. J. (1987), "Finding the optimal variable ordering for binary decision diagrams", Proc. 24th ACM/IEEE Design Automation Conference (DAC '87), New York, NY, USA: ACM, pp. 348–356, doi:10.1145/37888.37941, ISBN 978-0-8186-0781-3, S2CID 14796451. External links Wikimedia Commons has media related to directed acyclic graphs. • Weisstein, Eric W., "Acyclic Digraph", MathWorld • DAGitty – an online tool for creating DAGs
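The evaluation rule for binary decision diagrams described above (start at the source, at each decision node follow the outgoing edge labeled with that variable's value, and return the label of the sink reached) can be sketched as follows. The tuple-based node layout is an illustrative assumption, not a standard BDD library format:

```python
# Hypothetical node layout: a decision node is (var_name, low_child, high_child);
# a sink is the integer 0 or 1.
def evaluate(node, assignment):
    """Follow the path from the source to a sink, taking at each decision
    node the edge labeled by the value of that node's variable."""
    while not isinstance(node, int):
        var, low, high = node
        node = high if assignment[var] else low
    return node

# BDD for (x and y) or z; the z-node is shared, so paths rejoin once the
# remaining decisions agree -- the compression the text describes.
z = ("z", 0, 1)
bdd = ("x", z, ("y", z, 1))

assert evaluate(bdd, {"x": 1, "y": 1, "z": 0}) == 1
assert evaluate(bdd, {"x": 0, "y": 0, "z": 0}) == 0
```

The shared `z` node is what distinguishes this diagram from the corresponding decision tree, which would store two separate copies of the z-decision.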
Wikipedia
Weitzenböck's inequality In mathematics, Weitzenböck's inequality, named after Roland Weitzenböck, states that for a triangle of side lengths $a$, $b$, $c$, and area $\Delta $, the following inequality holds: $a^{2}+b^{2}+c^{2}\geq 4{\sqrt {3}}\,\Delta .$ Not to be confused with Weitzenböck identity. Equality occurs if and only if the triangle is equilateral. Pedoe's inequality is a generalization of Weitzenböck's inequality. The Hadwiger–Finsler inequality is a strengthened version of Weitzenböck's inequality. Geometric interpretation and proof Rewriting the inequality above allows for a more concrete geometric interpretation, which in turn provides an immediate proof.[1] ${\frac {\sqrt {3}}{4}}a^{2}+{\frac {\sqrt {3}}{4}}b^{2}+{\frac {\sqrt {3}}{4}}c^{2}\geq 3\,\Delta .$ Now the summands on the left side are the areas of equilateral triangles erected over the sides of the original triangle, and hence the inequality states that the sum of the areas of these equilateral triangles is always greater than or equal to three times the area of the original triangle. $\Delta _{a}+\Delta _{b}+\Delta _{c}\geq 3\,\Delta .$ This can now be shown by replicating the area of the original triangle three times within the equilateral triangles. To achieve this, the Fermat point is used to partition the triangle into three obtuse subtriangles with a $120^{\circ }$ angle, and each of those subtriangles is replicated three times within the equilateral triangle next to it. This only works if every angle of the triangle is smaller than $120^{\circ }$, since otherwise the Fermat point is not located in the interior of the triangle and becomes a vertex instead. However, if one angle is greater than or equal to $120^{\circ }$, it is possible to replicate the whole triangle three times within the largest equilateral triangle, so the sum of the areas of all equilateral triangles is still at least three times the area of the original triangle.
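The inequality and its equality case can be checked numerically (a sanity check, not a proof); a minimal Python sketch using Heron's formula for the area:

```python
import math
import random

def area(a, b, c):
    # Heron's formula; the max() guards against tiny negative values
    # from floating-point rounding in near-degenerate triangles.
    s = (a + b + c) / 2
    return math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))

# a^2 + b^2 + c^2 >= 4*sqrt(3)*area holds for every triangle.
random.seed(0)
for _ in range(1000):
    a, b = random.uniform(1, 10), random.uniform(1, 10)
    c = random.uniform(abs(a - b) + 1e-9, a + b - 1e-9)  # triangle inequality
    assert a*a + b*b + c*c >= 4 * math.sqrt(3) * area(a, b, c) - 1e-9

# Equality exactly for equilateral triangles: 3 = 4*sqrt(3) * (sqrt(3)/4).
assert math.isclose(3 * 1**2, 4 * math.sqrt(3) * area(1, 1, 1))
```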
Further proofs The proof of this inequality was set as a question in the International Mathematical Olympiad of 1961. Even so, the result is not too difficult to derive using Heron's formula for the area of a triangle: ${\begin{aligned}\Delta &{}={\frac {1}{4}}{\sqrt {(a+b+c)(a+b-c)(b+c-a)(c+a-b)}}\\[4pt]&{}={\frac {1}{4}}{\sqrt {2(a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2})-(a^{4}+b^{4}+c^{4})}}.\end{aligned}}$ First method It can be shown that the area of the inner Napoleon's triangle, which must be nonnegative, is[2] ${\frac {\sqrt {3}}{24}}(a^{2}+b^{2}+c^{2}-4{\sqrt {3}}\Delta ),$ so the expression in parentheses must be greater than or equal to 0. Second method This method assumes no knowledge of inequalities except that all squares are nonnegative. ${\begin{aligned}{}&(a^{2}-b^{2})^{2}+(b^{2}-c^{2})^{2}+(c^{2}-a^{2})^{2}\geq 0\\[5pt]{}\iff &2(a^{4}+b^{4}+c^{4})-2(a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2})\geq 0\\[5pt]{}\iff &{\frac {4(a^{4}+b^{4}+c^{4})}{3}}\geq {\frac {4(a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2})}{3}}\\[5pt]{}\iff &{\frac {(a^{4}+b^{4}+c^{4})+2(a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2})}{3}}\geq 2(a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2})-(a^{4}+b^{4}+c^{4})\\[5pt]{}\iff &{\frac {(a^{2}+b^{2}+c^{2})^{2}}{3}}\geq (4\Delta )^{2},\end{aligned}}$ and the result follows immediately by taking the positive square root of both sides. From the first inequality we can also see that equality occurs only when $a=b=c$ and the triangle is equilateral. Third method This proof assumes knowledge of the AM–GM inequality. 
${\begin{aligned}&&(a-b)^{2}+(b-c)^{2}+(c-a)^{2}&\geq &&0\\\Rightarrow &&2a^{2}+2b^{2}+2c^{2}&\geq &&2ab+2bc+2ac\\\iff &&3(a^{2}+b^{2}+c^{2})&\geq &&(a+b+c)^{2}\\\iff &&a^{2}+b^{2}+c^{2}&\geq &&{\sqrt {3(a+b+c)\left({\frac {a+b+c}{3}}\right)^{3}}}\\\Rightarrow &&a^{2}+b^{2}+c^{2}&\geq &&{\sqrt {3(a+b+c)(-a+b+c)(a-b+c)(a+b-c)}}\\\iff &&a^{2}+b^{2}+c^{2}&\geq &&4{\sqrt {3}}\Delta .\end{aligned}}$ As we have used the arithmetic–geometric mean inequality, equality only occurs when $a=b=c$ and the triangle is equilateral. Fourth method Write $x=\cot A$ and $c=\cot A+\cot B>0$. Since $A+B+C=\pi $, the cotangent identity $\cot C={\frac {1-\cot A\cot B}{\cot A+\cot B}}$ gives the sum $S=\cot A+\cot B+\cot C=c+{\frac {1-x(c-x)}{c}}$ and $cS=c^{2}-xc+x^{2}+1=\left(x-{\frac {c}{2}}\right)^{2}+\left({\frac {c{\sqrt {3}}}{2}}-1\right)^{2}+c{\sqrt {3}}\geq c{\sqrt {3}},$ i.e. $S\geq {\sqrt {3}}$. But $\cot A={\frac {b^{2}+c^{2}-a^{2}}{4\Delta }}$, so $S={\frac {a^{2}+b^{2}+c^{2}}{4\Delta }}$, and combining the two bounds yields $a^{2}+b^{2}+c^{2}\geq 4{\sqrt {3}}\,\Delta $. See also • List of triangle inequalities • Isoperimetric inequality • Hadwiger–Finsler inequality Notes 1. Claudi Alsina, Roger B. Nelsen: Geometric Proofs of the Weitzenböck and Hadwiger–Finsler Inequalities. Mathematics Magazine, Vol. 81, No. 3 (Jun., 2008), pp. 216–219 (JSTOR) 2. Coxeter, H.S.M., and Greitzer, Samuel L. Geometry Revisited, page 64. References & further reading • Claudi Alsina, Roger B. Nelsen: When Less is More: Visualizing Basic Inequalities. MAA, 2009, ISBN 9780883853429, pp. 84–86 • Claudi Alsina, Roger B. Nelsen: Geometric Proofs of the Weitzenböck and Hadwiger–Finsler Inequalities. Mathematics Magazine, Vol. 81, No. 3 (Jun., 2008), pp. 216–219 (JSTOR) • D. M. Batinetu-Giurgiu, Nicusor Minculete, Neculai Stanciu: Some geometric inequalities of Ionescu–Weitzenböck type. International Journal of Geometry, Vol. 2 (2013), No. 1, April • D. M. Batinetu-Giurgiu, Neculai Stanciu: The inequality Ionescu–Weitzenböck. MateInfo.ro, April 2013, (online copy) • Daniel Pedoe: On Some Geometrical Inequalities. The Mathematical Gazette, Vol. 26, No. 272 (Dec., 1942), pp.
202-208 (JSTOR) • Roland Weitzenböck: Über eine Ungleichung in der Dreiecksgeometrie. Mathematische Zeitschrift, Volume 5, 1919, pp. 137-146 (online copy at Göttinger Digitalisierungszentrum) • Dragutin Svrtan, Darko Veljan: Non-Euclidean Versions of Some Classical Triangle Inequalities. Forum Geometricorum, Volume 12, 2012, pp. 197–209 (online copy) • Mihaly Bencze, Nicusor Minculete, Ovidiu T. Pop: New inequalities for the triangle. Octogon Mathematical Magazine, Vol. 17, No.1, April 2009, pp. 70-89 (online copy) External links • Weisstein, Eric W. "Weitzenböck's Inequality". MathWorld. • "Weitzenböck's Inequality," an interactive demonstration by Jay Warendorff - Wolfram Demonstrations Project.
Weitzenböck identity In mathematics, in particular in differential geometry, mathematical physics, and representation theory a Weitzenböck identity, named after Roland Weitzenböck, expresses a relationship between two second-order elliptic operators on a manifold with the same principal symbol. Usually Weitzenböck formulae are implemented for G-invariant self-adjoint operators between vector bundles associated to some principal G-bundle, although the precise conditions under which such a formula exists are difficult to formulate. This article focuses on three examples of Weitzenböck identities: from Riemannian geometry, spin geometry, and complex analysis. Not to be confused with Weitzenböck's inequality. Riemannian geometry In Riemannian geometry there are two notions of the Laplacian on differential forms over an oriented compact Riemannian manifold M. The first definition uses the divergence operator δ defined as the formal adjoint of the de Rham operator d: $\int _{M}\langle \alpha ,\delta \beta \rangle :=\int _{M}\langle d\alpha ,\beta \rangle $ where α is any p-form and β is any (p + 1)-form, and $\langle \cdot ,\cdot \rangle $ is the metric induced on the bundle of (p + 1)-forms. The usual form Laplacian is then given by $\Delta =d\delta +\delta d.$ On the other hand, the Levi-Civita connection supplies a differential operator $\nabla :\Omega ^{p}M\rightarrow \Omega ^{1}M\otimes \Omega ^{p}M,$ where ΩpM is the bundle of p-forms. The Bochner Laplacian is given by $\Delta '=\nabla ^{*}\nabla $ where $\nabla ^{*}$ is the adjoint of $\nabla $. This is also known as the connection or rough Laplacian. The Weitzenböck formula then asserts that $\Delta '-\Delta =A$ where A is a linear operator of order zero involving only the curvature.
The precise form of A is given, up to an overall sign depending on curvature conventions, by $A={\frac {1}{2}}\langle R(\theta ,\theta )\#,\#\rangle +\operatorname {Ric} (\theta ,\#),$ where • R is the Riemann curvature tensor, • Ric is the Ricci tensor, • $\theta :T^{*}M\otimes \Omega ^{p}M\rightarrow \Omega ^{p+1}M$ is the map that takes the wedge product of a 1-form and p-form and gives a (p+1)-form, • $\#:\Omega ^{p+1}M\rightarrow T^{*}M\otimes \Omega ^{p}M$ is the universal derivation inverse to θ on 1-forms. Spin geometry If M is an oriented spin manifold with Dirac operator ð, then one may form the spin Laplacian Δ = ð2 on the spin bundle. On the other hand, the Levi-Civita connection extends to the spin bundle to yield a differential operator $\nabla :SM\rightarrow T^{*}M\otimes SM.$ As in the case of Riemannian manifolds, let $\Delta '=\nabla ^{*}\nabla $. This is another self-adjoint operator and, moreover, has the same leading symbol as the spin Laplacian. The Weitzenböck formula yields: $\Delta '-\Delta =-{\frac {1}{4}}Sc$ where Sc is the scalar curvature. This result is also known as the Lichnerowicz formula. Complex differential geometry If M is a compact Kähler manifold, there is a Weitzenböck formula relating the ${\bar {\partial }}$-Laplacian (see Dolbeault complex) and the Euclidean Laplacian on (p,q)-forms. Specifically, let $\Delta ={\bar {\partial }}^{*}{\bar {\partial }}+{\bar {\partial }}{\bar {\partial }}^{*},$ and $\Delta '=-\sum _{k}\nabla _{k}\nabla _{\bar {k}}$ in a unitary frame at each point. According to the Weitzenböck formula, if $\alpha \in \Omega ^{(p,q)}M$, then $\Delta ^{\prime }\alpha -\Delta \alpha =A(\alpha )$ where $A$ is an operator of order zero involving the curvature. 
Specifically, if $\alpha =\alpha _{i_{1}i_{2}\dots i_{p}{\bar {j}}_{1}{\bar {j}}_{2}\dots {\bar {j}}_{q}}$ in a unitary frame, then $A(\alpha )=-\sum _{k,j_{s}}\operatorname {Ric} _{{\bar {j}}_{s}}^{\bar {k}}\alpha _{i_{1}i_{2}\dots i_{p}{\bar {j}}_{1}{\bar {j}}_{2}\dots {\bar {k}}\dots {\bar {j}}_{q}}$ with ${\bar {k}}$ in the s-th place. Other Weitzenböck identities • In conformal geometry there is a Weitzenböck formula relating a particular pair of differential operators defined on the tractor bundle. See Branson, T. and Gover, A.R., "Conformally Invariant Operators, Differential Forms, Cohomology and a Generalisation of Q-Curvature", Communications in Partial Differential Equations, 30 (2005) 1611–1669. See also • Bochner identity • Bochner–Kodaira–Nakano identity • Laplacian operators in differential geometry References • Griffiths, Philip; Harris, Joe (1978), Principles of algebraic geometry, Wiley-Interscience (published 1994), ISBN 978-0-471-05059-9
Welch's t-test In statistics, Welch's t-test, or unequal variances t-test, is a two-sample location test which is used to test the (null) hypothesis that two populations have equal means. It is named for its creator, Bernard Lewis Welch, and is an adaptation of Student's t-test[1] that is more reliable when the two samples have unequal variances and possibly unequal sample sizes.[2][3] These tests are often referred to as "unpaired" or "independent samples" t-tests, as they are typically applied when the statistical units underlying the two samples being compared are non-overlapping. Given that Welch's t-test has been less popular than Student's t-test[2] and may be less familiar to readers, a more informative name is "Welch's unequal variances t-test" — or "unequal variances t-test" for brevity.[3] Assumptions Student's t-test assumes that the sample means being compared for two populations are normally distributed, and that the populations have equal variances. Welch's t-test is designed for unequal population variances, but the assumption of normality is maintained.[1] Welch's t-test is an approximate solution to the Behrens–Fisher problem. Calculations Welch's t-test defines the statistic t by the following formula: $t={\frac {\Delta {\overline {X}}}{s_{\Delta {\bar {X}}}}}={\frac {{\overline {X}}_{1}-{\overline {X}}_{2}}{\sqrt {{s_{{\bar {X}}_{1}}^{2}}+{s_{{\bar {X}}_{2}}^{2}}}}}\,$ $s_{{\bar {X}}_{i}}={s_{i} \over {\sqrt {N_{i}}}}\,$ where ${\overline {X}}_{i}$ and $s_{{\bar {X}}_{i}}$ are the $i^{\text{th}}$ sample mean and its standard error, with $s_{i}$ denoting the corrected sample standard deviation, and sample size $N_{i}$. Unlike in Student's t-test, the denominator is not based on a pooled variance estimate.
The degrees of freedom $\nu $ associated with this variance estimate is approximated using the Welch–Satterthwaite equation:[4] $\nu \quad \approx \quad {\frac {\left(\;{\frac {s_{1}^{2}}{N_{1}}}\;+\;{\frac {s_{2}^{2}}{N_{2}}}\;\right)^{2}}{\quad {\frac {s_{1}^{4}}{N_{1}^{2}\nu _{1}}}\;+\;{\frac {s_{2}^{4}}{N_{2}^{2}\nu _{2}}}\quad }}.$ This expression can be simplified when $N_{1}=N_{2}$: $\nu \approx {\frac {s_{\Delta {\bar {X}}}^{4}}{\nu _{1}^{-1}s_{{\bar {X}}_{1}}^{4}+\nu _{2}^{-1}s_{{\bar {X}}_{2}}^{4}}}.$ Here, $\nu _{i}=N_{i}-1$ is the degrees of freedom associated with the i-th variance estimate. The statistic approximately follows a t-distribution, because the variance estimate in the denominator approximately follows a scaled chi-square distribution. The approximation improves when both $N_{1}$ and $N_{2}$ are larger than 5.[5][6] Statistical test Once t and $\nu $ have been computed, these statistics can be used with the t-distribution to test one of two possible null hypotheses: • that the two population means are equal, in which case a two-tailed test is applied; or • that one of the population means is greater than or equal to the other, in which case a one-tailed test is applied. The approximate degrees of freedom are real numbers $\left(\nu \in \mathbb {R} ^{+}\right)$ and are used as such in statistics-oriented software, whereas they are rounded down to the nearest integer in spreadsheets. Advantages and limitations Welch's t-test is more robust than Student's t-test and maintains type I error rates close to nominal for unequal variances and for unequal sample sizes under normality. Furthermore, the power of Welch's t-test comes close to that of Student's t-test, even when the population variances are equal and sample sizes are balanced.[2] Welch's t-test can be generalized to more than two samples,[7] in which form it is more robust than one-way analysis of variance (ANOVA).
It is not recommended to pre-test for equal variances and then choose between Student's t-test and Welch's t-test.[8] Rather, Welch's t-test can be applied directly, without any substantial disadvantage relative to Student's t-test, as noted above. Welch's t-test remains robust for skewed distributions and large sample sizes.[9] Reliability decreases for skewed distributions and smaller samples, where one could possibly perform Welch's t-test on ranked data.[10] Examples The following three examples compare Welch's t-test and Student's t-test. Samples are from random normal distributions using the R programming language. For all three examples, the population means were $\mu _{1}=20$ and $\mu _{2}=22$. The first example is for unequal but similar variances ($\sigma _{1}^{2}=7.9$, $\sigma _{2}^{2}=3.8$) and equal sample sizes ($N_{1}=N_{2}=15$). Let A1 and A2 denote two random samples: $A_{1}=\{27.5,21.0,19.0,23.6,17.0,17.9,16.9,20.1,21.9,22.6,23.1,19.6,19.0,21.7,21.4\}$ $A_{2}=\{27.1,22.0,20.8,23.4,23.4,23.5,25.8,22.0,24.8,20.2,21.9,22.1,22.9,20.5,24.4\}$ The second example is for unequal variances ($\sigma _{1}^{2}=9.0$, $\sigma _{2}^{2}=0.9$) and unequal sample sizes ($N_{1}=10$, $N_{2}=20$). The smaller sample has the larger variance: ${\begin{aligned}A_{1}&=\{17.2,20.9,22.6,18.1,21.7,21.4,23.5,24.2,14.7,21.8\}\\A_{2}&=\{21.5,22.8,21.0,23.0,21.6,23.6,22.5,20.7,23.4,21.8,20.7,21.7,21.5,22.5,23.6,21.5,22.5,23.5,21.5,21.8\}\end{aligned}}$ The third example is for unequal variances ($\sigma _{1}^{2}=1.4$, $\sigma _{2}^{2}=17.1$) and unequal sample sizes ($N_{1}=10$, $N_{2}=20$). The larger sample has the larger variance: ${\begin{aligned}A_{1}&=\{19.8,20.4,19.6,17.8,18.5,18.9,18.3,18.9,19.5,22.0\}\\A_{2}&=\{28.2,26.6,20.1,23.3,25.2,22.1,17.7,27.6,20.6,13.7,23.2,17.5,20.6,18.0,23.9,21.6,24.3,20.4,24.0,13.2\}\end{aligned}}$ Reference p-values were obtained by simulating the distributions of the t statistics under the null hypothesis of equal population means ($\mu _{1}-\mu _{2}=0$).
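For the first example, the Welch statistic and the Welch–Satterthwaite degrees of freedom can be computed directly from the raw samples; a minimal Python sketch (the function name `welch_t` is illustrative, not a standard API):

```python
import math

def welch_t(sample1, sample2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)  # corrected sample variances
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2        # squared standard error of the mean difference
    t = (m1 - m2) / math.sqrt(se2)
    nu = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, nu

A1 = [27.5, 21.0, 19.0, 23.6, 17.0, 17.9, 16.9, 20.1, 21.9, 22.6,
      23.1, 19.6, 19.0, 21.7, 21.4]
A2 = [27.1, 22.0, 20.8, 23.4, 23.4, 23.5, 25.8, 22.0, 24.8, 20.2,
      21.9, 22.1, 22.9, 20.5, 24.4]
t, nu = welch_t(A1, A2)
```

This matches the Welch column of Example 1 below (t rounds to −2.46, ν to about 25 before the table's rounding); `scipy.stats.ttest_ind(A1, A2, equal_var=False)` computes the same statistic together with a p-value.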
Results are summarised in the table below, with two-tailed p-values:

Example | Sample A1: N1, X̄1, s1² | Sample A2: N2, X̄2, s2² | Student's t-test: t, ν, P, P_sim | Welch's t-test: t, ν, P, P_sim
1 | 15, 20.8, 7.9 | 15, 23.0, 3.8 | −2.46, 28, 0.021, 0.021 | −2.46, 24.9, 0.021, 0.017
2 | 10, 20.6, 9.0 | 20, 22.1, 0.9 | −2.10, 28, 0.045, 0.150 | −1.57, 9.9, 0.149, 0.144
3 | 10, 19.4, 1.4 | 20, 21.6, 17.1 | −1.64, 28, 0.110, 0.036 | −2.22, 24.5, 0.036, 0.042

Welch's t-test and Student's t-test gave identical results when the two samples have similar variances and sample sizes (Example 1). But note that even if you sample data from populations with identical variances, the sample variances will differ, as will the results of the two t-tests. So with actual data, the two tests will almost always give somewhat different results. For unequal variances, Student's t-test gave a low p-value when the smaller sample had a larger variance (Example 2) and a high p-value when the larger sample had a larger variance (Example 3). For unequal variances, Welch's t-test gave p-values close to the simulated p-values.

Software implementations

Language/Program | Function | Documentation
LibreOffice | TTEST(Data1; Data2; Mode; Type) | [11]
MATLAB | ttest2(data1, data2, 'Vartype', 'unequal') | [12]
Microsoft Excel pre 2010 (Student's T Test) | TTEST(array1, array2, tails, type) | [13]
Microsoft Excel 2010 and later (Student's T Test) | T.TEST(array1, array2, tails, type) | [14]
Minitab | Accessed through menu | [15]
SAS (Software) | Default output from proc ttest (labeled "Satterthwaite") |
Python (through 3rd-party library SciPy) | scipy.stats.ttest_ind(a, b, equal_var=False) | [16]
R | t.test(data1, data2) | [17]
Haskell | Statistics.Test.StudentT.welchTTest SamplesDiffer data1 data2 | [18]
JMP | Oneway( Y( YColumn), X( XColumn), Unequal Variances( 1 ) ); | [19]
Julia | UnequalVarianceTTest(data1, data2) | [20]
Stata | ttest varname1 == varname2, welch | [21]
Google Sheets | TTEST(range1, range2, tails, type) | [22]
GraphPad Prism | It is a choice on the t test dialog. |
IBM SPSS Statistics | An option in the menu | [23][24]
GNU Octave | welch_test(x, y) | [25]

See also • Student's t-test • Z-test • Factorial experiment • One-way analysis of variance • Hotelling's two-sample T-squared statistic, a multivariate extension of Welch's t-test References 1. Welch, B. L. (1947). "The generalization of "Student's" problem when several different population variances are involved". Biometrika. 34 (1–2): 28–35. doi:10.1093/biomet/34.1-2.28. MR 0019277. PMID 20287819. 2. Ruxton, G. D. (2006). "The unequal variance t-test is an underused alternative to Student's t-test and the Mann–Whitney U test". Behavioral Ecology. 17 (4): 688–690. doi:10.1093/beheco/ark016. 3. Derrick, B; Toher, D; White, P (2016). "Why Welch's test is Type I error robust" (PDF). The Quantitative Methods for Psychology. 12 (1): 30–38. doi:10.20982/tqmp.12.1.p030. 4. 7.3.1. Do two processes have the same mean?, Engineering Statistics Handbook, NIST. (Online source accessed 2021-07-30.) 5. Allwood, Michael (2008). "The Satterthwaite Formula for Degrees of Freedom in the Two-Sample t-Test" (PDF). p. 6. 6. Yates; Moore; Starnes (2008). The Practice of Statistics (3rd ed.). New York: W.H. Freeman and Company. p. 792. ISBN 9780716773092. 7. Welch, B. L. (1951). "On the Comparison of Several Mean Values: An Alternative Approach". Biometrika. 38 (3/4): 330–336. doi:10.2307/2332579. JSTOR 2332579. 8. Zimmerman, D. W. (2004). "A note on preliminary tests of equality of variances". British Journal of Mathematical and Statistical Psychology. 57 (Pt 1): 173–181. doi:10.1348/000711004849222. PMID 15171807. 9. Fagerland, M. W. (2012). "t-tests, non-parametric tests, and large studies—a paradox of statistical practice?". BMC Medical Research Methodology. 12: 78. doi:10.1186/1471-2288-12-78. PMC 3445820. PMID 22697476. 10. Fagerland, M. W.; Sandvik, L. (2009). "Performance of five two-sample location tests for skewed distributions with unequal variances". Contemporary Clinical Trials. 30 (5): 490–496.
doi:10.1016/j.cct.2009.06.007. PMID 19577012. 11. "Statistical Functions Part Five - LibreOffice Help". 12. "Two-sample t-test - MATLAB ttest2 - MathWorks United Kingdom". 13. "TTEST - Excel - Microsoft Office". office.microsoft.com. Archived from the original on 2010-06-13. 14. "T.TEST function". 15. Overview for 2-Sample t - Minitab: — official documentation for Minitab version 18. Accessed 2020-09-19. 16. "Scipy.stats.ttest_ind — SciPy v1.7.1 Manual". 17. "R: Student's t-Test". 18. "Statistics.Test.StudentT". 19. "Index of /Support/Help". 20. "Welcome to Read the Docs — HypothesisTests.jl latest documentation". 21. "Stata 17 help for ttest". 22. "T.TEST - Docs Editors Help". 23. Jeremy Miles: Unequal variances t-test or U Mann-Whitney test?, Accessed 2014-04-11 24. One-Sample Test — Official documentation for SPSS Statistics version 24. Accessed 2019-01-22. 25. "Function Reference: Welch_test".
Berlekamp–Welch algorithm The Berlekamp–Welch algorithm, also known as the Welch–Berlekamp algorithm, is named for Elwyn R. Berlekamp and Lloyd R. Welch. It is a decoder algorithm that efficiently corrects errors in Reed–Solomon codes for an RS(n, k) code based on the Reed–Solomon original view, where a message $m_{1},\cdots ,m_{k}$ is used as coefficients of a polynomial $F(a_{i})$ or used with Lagrange interpolation to generate the polynomial $F(a_{i})$ of degree < k for inputs $a_{1},\cdots ,a_{k}$, and then $F(a_{i})$ is applied to $a_{k+1},\cdots ,a_{n}$ to create an encoded codeword $c_{1},\cdots ,c_{n}$. The goal of the decoder is to recover the original encoding polynomial $F(a_{i})$, using the known inputs $a_{1},\cdots ,a_{n}$ and received codeword $b_{1},\cdots ,b_{n}$ with possible errors. It also computes an error-locator polynomial $E(a_{i})$, where $E(a_{i})=0$ at the positions corresponding to errors in the received codeword. The key equations Defining e = number of errors, the key set of n equations is $b_{i}E(a_{i})=E(a_{i})F(a_{i})$ where E(ai) = 0 for the e cases when bi ≠ F(ai), and E(ai) ≠ 0 for the n − e non-error cases where bi = F(ai). These equations can't be solved directly, but by defining Q() as the product of E() and F(): $Q(a_{i})=E(a_{i})F(a_{i})$ and adding the constraint that the most significant coefficient of E(ai), ee = 1, the result will lead to a set of equations that can be solved with linear algebra. $b_{i}E(a_{i})=Q(a_{i})$ $b_{i}E(a_{i})-Q(a_{i})=0$ $b_{i}(e_{0}+e_{1}a_{i}+e_{2}a_{i}^{2}+\cdots +e_{e}a_{i}^{e})-(q_{0}+q_{1}a_{i}+q_{2}a_{i}^{2}+\cdots +q_{q}a_{i}^{q})=0$ where q = n − e − 1. Since ee is constrained to be 1, the equations become: $b_{i}(e_{0}+e_{1}a_{i}+e_{2}a_{i}^{2}+\cdots +e_{e-1}a_{i}^{e-1})-(q_{0}+q_{1}a_{i}+q_{2}a_{i}^{2}+\cdots +q_{q}a_{i}^{q})=-b_{i}a_{i}^{e}$ resulting in a set of equations which can be solved using linear algebra, with time complexity O(n³).
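The linear system above can be set up and solved with ordinary modular linear algebra; a minimal Python sketch over GF(7), assuming the maximum error count e = ⌊(n − k)/2⌋ and using the received word from the RS(7,3) example worked below (helper names like `solve_mod` are illustrative, not a standard library API):

```python
def solve_mod(A, y, p):
    # Gauss-Jordan elimination over GF(p), p prime; returns x with A x = y, or None
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None                      # singular system: retry with smaller e
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)     # modular inverse via Fermat
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][c] - f * M[col][c]) % p for c in range(n + 1)]
    return [M[r][n] for r in range(n)]

def poly_divmod(num, den, p):
    # polynomial division mod p; coefficient lists, lowest degree first
    num, quot = num[:], [0] * (len(num) - len(den) + 1)
    inv = pow(den[-1], p - 2, p)
    for i in reversed(range(len(quot))):
        quot[i] = num[i + len(den) - 1] * inv % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - quot[i] * d) % p
    return quot, num[:len(den) - 1]

def poly_eval(coeffs, x, p):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

p, n, k = 7, 7, 3
a = list(range(n))                 # evaluation points 0..6
b = [1, 5, 3, 6, 3, 2, 2]          # received word (errors at positions 2 and 5)
e = (n - k) // 2                   # assume the maximum number of errors
q = n - e - 1
# row i: coefficients b_i*a_i^j (j < e) and -a_i^j (j <= q); RHS: -b_i*a_i^e
A = [[b[i] * pow(a[i], j, p) % p for j in range(e)] +
     [-pow(a[i], j, p) % p for j in range(q + 1)] for i in range(n)]
y = [-b[i] * pow(a[i], e, p) % p for i in range(n)]
sol = solve_mod(A, y, p)
E = sol[:e] + [1]                  # monic error-locator polynomial E(x)
Q = sol[e:]                        # Q(x) = E(x) F(x)
F, rem = poly_divmod(Q, E, p)
assert all(r == 0 for r in rem)    # nonzero remainder => uncorrectable error
decoded = [poly_eval(F, x, p) for x in a]
```

Here `F` comes out as 3x² + 2x + 1 and `decoded` reproduces the corrected codeword of the example; a production decoder would also implement the outer loop that decreases e whenever the system is singular.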
The algorithm begins by assuming the maximum number of errors e = ⌊(n − k)/2⌋. If the equations cannot be solved (due to redundancy), e is reduced by 1 and the process repeated until the equations can be solved or e is reduced to 0, indicating no errors. If Q()/E() has remainder = 0, then F() = Q()/E() and the code word values F(ai) are calculated for the locations where E(ai) = 0 to recover the original code word. If the remainder ≠ 0, then an uncorrectable error has been detected. Example Consider RS(7,3) (n = 7, k = 3) defined in GF(7) with α = 3 and input values ai = i − 1: {0,1,2,3,4,5,6}. The message to be systematically encoded is {1,6,3}. Using Lagrange interpolation, F(ai) = 3x² + 2x + 1, and applying F(ai) for a4 = 3 to a7 = 6 results in the code word {1,6,3,6,1,2,2}. Assume errors occur at c2 and c5, resulting in the received code word {1,5,3,6,3,2,2}. Start off with e = 2 and solve the linear equations:
${\begin{bmatrix}1&0&0&0&0&0&0\\0&1&0&0&0&0&0\\0&0&1&0&0&0&0\\0&0&0&1&0&0&0\\0&0&0&0&1&0&0\\0&0&0&0&0&1&0\\0&0&0&0&0&0&1\\\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q0\\q1\\q2\\q3\\q4\\\end{bmatrix}}={\begin{bmatrix}4\\2\\4\\3\\3\\1\\3\\\end{bmatrix}}$ Starting from the bottom of the right matrix, and the constraint e2 = 1: $Q(a_{i})=3x^{4}+1x^{3}+3x^{2}+3x+4$ $E(a_{i})=1x^{2}+2x+4$ $F(a_{i})=Q(a_{i})/E(a_{i})=3x^{2}+2x+1$ with remainder = 0. E(ai) = 0 at a2 = 1 and a5 = 4. Calculate F(a2 = 1) = 6 and F(a5 = 4) = 1 to produce the corrected code word {1,6,3,6,1,2,2}. See also • Reed–Solomon error correction External links • MIT Lecture Notes on Essential Coding Theory – Dr. Madhu Sudan • University at Buffalo Lecture Notes on Coding Theory – Dr. Atri Rudra • Algebraic Codes on Lines, Planes and Curves, An Engineering Approach – Richard E. Blahut • Welch Berlekamp Decoding of Reed–Solomon Codes – L. R. Welch • US 4,633,470, Welch, Lloyd R. & Berlekamp, Elwyn R., "Error Correction for Algebraic Block Codes", published September 27, 1983, issued December 30, 1986 – The patent by Lloyd R. Welch and Elwyn R. Berlekamp
Wikipedia
Welfare maximization The welfare maximization problem is an optimization problem studied in economics and computer science. Its goal is to partition a set of items among agents with different utility functions, such that the welfare – defined as the sum of the agents' utilities – is as high as possible. In other words, the goal is to find an item allocation satisfying the utilitarian rule.[1] An equivalent problem in the context of combinatorial auctions is called the winner determination problem. In this context, each agent submits a list of bids on sets of items, and the goal is to determine what bid or bids should win, such that the sum of the winning bids is maximized. Definitions There is a set M of m items, and a set N of n agents. Each agent i in N has a utility function $u_{i}:2^{M}\to \mathbb {R} $. The function assigns a real value to every possible subset of items. It is usually assumed that the utility functions are monotone set functions, that is, $Z_{1}\supseteq Z_{2}$ implies $u_{i}(Z_{1})\geq u_{i}(Z_{2})$. It is also assumed that $u_{i}(\emptyset )=0$. Together with monotonicity, this implies that all utilities are non-negative. An allocation is an ordered partition of the items into n disjoint subsets, one subset per agent, denoted $\mathbf {X} =(X_{1},\ldots ,X_{n})$, such that $M=X_{1}\sqcup \cdots \sqcup X_{n}$. The welfare of an allocation is the sum of agents' utilities: $W(\mathbf {X} ):=\sum _{i\in N}u_{i}(X_{i})$. The welfare maximization problem is: find an allocation X that maximizes W(X). The welfare maximization problem has many variants, depending on the type of allowed utility functions, the way by which the algorithm can access the utility functions, and whether there are additional constraints on the allowed allocations. 
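As a minimal sketch of the definition (the function and instance are ours, for illustration only), welfare maximization can be solved by brute force on tiny instances by trying all n^m allocations:

```python
from itertools import product

def max_welfare(m, utilities):
    """Return (welfare, allocation) maximizing the sum of utilities; O(n^m)."""
    n = len(utilities)
    best = (None, None)
    for assignment in product(range(n), repeat=m):   # item j -> agent assignment[j]
        bundles = [frozenset(j for j in range(m) if assignment[j] == i)
                   for i in range(n)]
        w = sum(u(X) for u, X in zip(utilities, bundles))
        if best[0] is None or w > best[0]:
            best = (w, bundles)
    return best

# Agent 0 is additive; agent 1 is "unit demand": any nonempty bundle is worth 3.
u0 = lambda S: sum({0: 5, 1: 1, 2: 1}[j] for j in S)
u1 = lambda S: 3 if S else 0
print(max_welfare(3, [u0, u1])[0])    # 9: agent 1 gets one cheap item
```

The exponential enumeration is only meant to make the objective W(X) concrete; the sections below discuss when the problem is tractable.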
Additive agents An additive agent has a utility function that is an additive set function: for every additive agent i and item j, there is a value $v_{i,j}$, such that $u_{i}(Z)=\sum _{j\in Z}v_{i,j}$ for every set Z of items. When all agents are additive, welfare maximization can be done by a simple polynomial-time algorithm: give each item j to an agent for whom $v_{i,j}$ is maximum (breaking ties arbitrarily). The problem becomes more challenging when there are additional constraints on the allocation. Fairness constraints One may want to maximize the welfare among all allocations that are fair, for example, envy-free up to one item (EF1), proportional up to one item (PROP1), or equitable up to one item (EQ1). This problem is strongly NP-hard when n is variable. For any fixed n ≥ 2, the problem is weakly NP-hard,[2][3] and has a pseudo-polynomial time algorithm based on dynamic programming.[2] For n = 2, the problem has a fully polynomial-time approximation scheme.[4] There are algorithms for solving this problem in polynomial time when there are few agent types, few item types, or small value levels.[5] The problem can also be solved in polynomial time when the agents' additive utilities are binary (the value of every item is either 0 or 1), as well as for a more general class of utilities called generalized binary.[6] Matroid constraints Another constraint on the allocation is that the bundles must be independent sets of a matroid. For example, every bundle must contain at most k items, where k is a fixed integer (this corresponds to a uniform matroid). Or, the items may be partitioned into categories, and each bundle must contain at most kc items from each category c (this corresponds to a partition matroid). In general, there may be a different matroid for each agent, and the allocation must give each agent i a subset Xi that is an independent set of their own matroid. 
Welfare maximization with additive utilities under heterogeneous matroid constraints can be done in polynomial time, by reduction to the weighted matroid intersection problem.[7] Gross-substitute agents Gross-substitute utilities are more general than additive utilities. Welfare maximization with gross-substitute agents can be done in polynomial time. This is because, with gross-substitute agents, a Walrasian equilibrium always exists, and it maximizes the sum of utilities.[8] A Walrasian equilibrium can be found in polynomial time. Submodular agents A submodular agent has a utility function that is a submodular set function. This means that the agent's utility has decreasing marginals. Submodular utilities are more general than gross-substitute utilities. Hardness Welfare maximization with submodular agents is NP-hard.[9] It cannot be approximated to a factor better than (1-1/e)≈0.632 unless P=NP.[10] Moreover, a better-than-(1-1/e) approximation would require an exponential number of queries to a value oracle, regardless of whether P=NP.[11] Greedy algorithm The maximum welfare can be approximated by the following polynomial-time greedy algorithm: • Initialize X1 = X2 = ... = Xn = empty. • For every item j (in an arbitrary order): • Compute, for each agent i, the marginal utility of j, defined as: ui(Xi+j) - ui(Xi). • Give item j to an agent with the largest marginal utility. Fisher, Nemhauser and Wolsey[12] and Lehmann, Lehmann and Nisan[9] prove that the greedy algorithm finds a 1/2-factor approximation. Better approximation algorithms can be classified by the way they access the agents' valuations. Algorithms using a value oracle A value oracle is an oracle that, given a set of items, returns the agent's value for this set. In this model: • Dobzinski and Schapira[13] present a polytime $n/(2n-1)$-approximation algorithm, and a (1-1/e)≈0.632-approximation algorithm for the special case in which the agents' utilities are set-coverage functions. 
• Vondrak[14]: Sec.5 and Calinescu, Chekuri, Pal and Vondrak[15] present a randomized polytime algorithm that finds a (1-1/e)-approximation with high probability. Their algorithm uses a continuous greedy approach: it extends a fractional bundle (a bundle that contains a fraction pj of each item j) in a greedy direction (similar to gradient descent). Their algorithm needs to compute the value of fractional bundles, defined as the expected value of the bundle attained when each item j is selected independently with probability pj. In general, computing the value of a fractional bundle might require $2^{m}$ calls to a value oracle; however, it can be computed approximately with high probability by random sampling. This leads to a randomized algorithm that attains a (1-1/e)-approximation with high probability. In cases when fractional bundles can be evaluated efficiently (e.g. when utility functions are set-coverage functions), the algorithm can be made deterministic.[15]: Sec.5 They mention as an open problem whether there is a deterministic polytime (1-1/e)-approximation algorithm for general submodular functions. The welfare maximization problem (with n different submodular functions) can be reduced to the problem of maximizing a single submodular set function subject to a matroid constraint:[9][14][15] given an instance with m items and n agents, construct an instance with m·n (agent, item) pairs, where each pair represents the assignment of an item to an agent. Construct a single function that assigns, to each set of pairs, the total welfare of the corresponding allocation. It can be shown that, if all utilities are submodular, then this welfare function is also submodular. This function should be maximized subject to a partition matroid constraint, ensuring that each item is allocated to at most one agent. 
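The 1/2-approximation greedy algorithm described above can be sketched for set-coverage utilities, a standard submodular family (the instance and names are ours, for illustration only):

```python
def greedy_welfare(items, coverage):
    """Greedy item-by-item assignment by largest marginal utility.

    coverage[i][j] is the set of elements that item j covers for agent i;
    agent i's utility for a bundle is the size of the union it covers,
    which is a submodular set function.
    """
    n = len(coverage)
    bundles = [set() for _ in range(n)]
    covered = [set() for _ in range(n)]              # union covered per agent
    for j in items:                                  # arbitrary item order
        # marginal utility of j for agent i = number of newly covered elements
        gains = [len(coverage[i][j] - covered[i]) for i in range(n)]
        winner = max(range(n), key=lambda i: gains[i])
        bundles[winner].add(j)
        covered[winner] |= coverage[winner][j]
    return bundles, sum(len(c) for c in covered)

# Two agents, three items.
coverage = [
    {0: {"a", "b"}, 1: {"b"}, 2: {"c"}},             # agent 0
    {0: {"x"}, 1: {"x", "y"}, 2: {"y"}},             # agent 1
]
bundles, welfare = greedy_welfare([0, 1, 2], coverage)
print(bundles, welfare)    # [{0, 2}, {1}] 5
```

On this instance the greedy allocation happens to be optimal; in the worst case the guarantee is only a factor 1/2, as stated above.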
Algorithms using a demand oracle Another way to access the agents' utilities is using a demand oracle (an oracle that, given a price vector, returns the agent's most desired bundle). In this model: • Dobzinski and Schapira[13] present a polytime (1-1/e)-approximation algorithm. • Feige and Vondrak[16] improve this to (1-1/e+ε) for some small positive ε (this does not contradict the above hardness result, since the hardness result uses only a value oracle; in the hardness examples, the demand oracle itself would require exponentially many queries). Subadditive agents When agents' utilities are subadditive set functions (more general than submodular), a ${\frac {1}{m^{1/2-\epsilon }}}$ approximation would require an exponential number of value queries.[11] Feige[17] presents a way of rounding any fractional solution of an LP relaxation of this problem to a feasible solution with welfare at least 1/2 the value of the fractional solution. This gives a 1/2-approximation for general subadditive agents, and a (1-1/e)-approximation for the special case of fractionally-subadditive valuations. Superadditive agents When agents' utilities are superadditive set functions (more general than supermodular), a ${\frac {(\log m)^{1+\epsilon }}{m}}$ approximation would require a super-polynomial number of value queries.[11] Single-minded agents A single-minded agent wants only a specific set of items. For every single-minded agent i, there is a demanded set Di and a value Vi > 0, such that $u_{i}(Z)={\begin{cases}V_{i}&Z\supseteq D_{i}\\0&{\text{otherwise}}\end{cases}}$. That is, the agent receives a fixed positive utility if and only if their bundle contains their demanded set. Welfare maximization with single-minded agents is NP-hard even when $V_{i}=1$ for all i. In this case, the problem is equivalent to set packing, which is known to be NP-hard. 
Moreover, it cannot be approximated within any constant factor (in contrast to the case of submodular agents).[18] The best known algorithm approximates it within a factor of $O({\sqrt {m}})$.[19] General agents When agents can have arbitrary monotone utility functions (including complementary items), welfare maximization is hard to approximate within a factor of $O(n^{1/2-\epsilon })$ for any $\epsilon >0$.[20] However, there are algorithms based on state space search that work very well in practice.[21] References 1. Vondrak, Jan (2008-05-17). "Optimal approximation for the submodular welfare problem in the value oracle model". Proceedings of the fortieth annual ACM symposium on Theory of computing. STOC '08. New York, NY, USA: Association for Computing Machinery. pp. 67–74. doi:10.1145/1374376.1374389. ISBN 978-1-60558-047-0. S2CID 170510. 2. Aziz, Haris; Huang, Xin; Mattei, Nicholas; Segal-Halevi, Erel (2022-10-13). "Computing welfare-Maximizing fair allocations of indivisible goods". European Journal of Operational Research. 307 (2): 773–784. arXiv:2012.03979. doi:10.1016/j.ejor.2022.10.013. ISSN 0377-2217. S2CID 235266307. 3. Sun, Ankang; Chen, Bo; Doan, Xuan Vinh (2022-12-02). "Equitability and welfare maximization for allocating indivisible items". Autonomous Agents and Multi-Agent Systems. 37 (1): 8. doi:10.1007/s10458-022-09587-1. ISSN 1573-7454. S2CID 254152607. 4. Bu, Xiaolin; Li, Zihao; Liu, Shengxin; Song, Jiaxin; Tao, Biaoshuai (2022-05-27). "On the Complexity of Maximizing Social Welfare within Fair Allocations of Indivisible Goods". arXiv:2205.14296 [cs.GT]. 5. Nguyen, Trung Thanh; Rothe, Jörg (2023-01-01). "Fair and efficient allocation with few agent types, few item types, or small value levels". Artificial Intelligence. 314: 103820. doi:10.1016/j.artint.2022.103820. ISSN 0004-3702. S2CID 253430435. 6. Camacho, Franklin; Fonseca-Delgado, Rigoberto; Pino Pérez, Ramón; Tapia, Guido (2022-11-07). 
"Generalized binary utility functions and fair allocations". Mathematical Social Sciences. 121: 50–60. doi:10.1016/j.mathsocsci.2022.10.003. ISSN 0165-4896. S2CID 253411165. 7. Dror, Amitay; Feldman, Michal; Segal-Halevi, Erel (2022-04-24). "On Fair Division under Heterogeneous Matroid Constraints". arXiv:2010.07280 [cs.GT]. 8. Kelso, A. S.; Crawford, V. P. (1982). "Job Matching, Coalition Formation, and Gross Substitutes". Econometrica. 50 (6): 1483. doi:10.2307/1913392. JSTOR 1913392. 9. Lehmann, Benny; Lehmann, Daniel; Nisan, Noam (2001-10-14). "Combinatorial auctions with decreasing marginal utilities". Proceedings of the 3rd ACM conference on Electronic Commerce. EC '01. New York, NY, USA: Association for Computing Machinery. pp. 18–28. arXiv:cs/0202015. doi:10.1145/501158.501161. ISBN 978-1-58113-387-5. S2CID 2241237. 10. Khot, Subhash; Lipton, Richard J.; Markakis, Evangelos; Mehta, Aranyak (2008-09-01). "Inapproximability Results for Combinatorial Auctions with Submodular Utility Functions". Algorithmica. 52 (1): 3–18. doi:10.1007/s00453-007-9105-7. ISSN 1432-0541. S2CID 7600128. 11. Mirrokni, Vahab; Schapira, Michael; Vondrak, Jan (2008-07-08). "Tight information-theoretic lower bounds for welfare maximization in combinatorial auctions". Proceedings of the 9th ACM conference on Electronic commerce. EC '08. New York, NY, USA: Association for Computing Machinery. pp. 70–77. doi:10.1145/1386790.1386805. ISBN 978-1-60558-169-9. S2CID 556774. 12. Fisher, M. L.; Nemhauser, G. L.; Wolsey, L. A. (1978), Balinski, M. L.; Hoffman, A. J. (eds.), "An analysis of approximations for maximizing submodular set functions—II", Polyhedral Combinatorics: Dedicated to the memory of D.R. Fulkerson, Berlin, Heidelberg: Springer, pp. 73–87, doi:10.1007/bfb0121195, ISBN 978-3-642-00790-3, retrieved 2023-02-26 13. Dobzinski, Shahar; Schapira, Michael (2006-01-22). "An improved approximation algorithm for combinatorial auctions with submodular bidders". 
Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithm. SODA '06. USA: Society for Industrial and Applied Mathematics: 1064–1073. doi:10.1145/1109557.1109675. ISBN 978-0-89871-605-4. S2CID 13108913. 14. Vondrak, Jan (2008-05-17). "Optimal approximation for the submodular welfare problem in the value oracle model". Proceedings of the fortieth annual ACM symposium on Theory of computing. STOC '08. New York, NY, USA: Association for Computing Machinery. pp. 67–74. doi:10.1145/1374376.1374389. ISBN 978-1-60558-047-0. S2CID 170510. 15. Calinescu, Gruia; Chekuri, Chandra; Pál, Martin; Vondrák, Jan (2011-01-01). "Maximizing a Monotone Submodular Function Subject to a Matroid Constraint". SIAM Journal on Computing. 40 (6): 1740–1766. doi:10.1137/080733991. ISSN 0097-5397. 16. Feige, Uriel; Vondrák, Jan (2010-12-09). "The Submodular Welfare Problem with Demand Queries". Theory of Computing. 6: 247–290. doi:10.4086/toc.2010.v006a011. 17. Feige, Uriel (2006-05-21). "On maximizing welfare when utility functions are subadditive". Proceedings of the thirty-eighth annual ACM symposium on Theory of Computing. STOC '06. New York, NY, USA: Association for Computing Machinery. pp. 41–50. doi:10.1145/1132516.1132523. ISBN 978-1-59593-134-4. S2CID 11504912. 18. Hazan, Elad; Safra, Shmuel; Schwartz, Oded (2006). "On the complexity of approximating k-set packing". Computational Complexity. 15 (1): 20–39. CiteSeerX 10.1.1.352.5754. doi:10.1007/s00037-006-0205-6. MR 2226068. S2CID 1858087.. See in particular p. 21: "Maximum clique (and therefore also maximum independent set and maximum set packing) cannot be approximated to within $O(n^{1-\epsilon })$ unless NP ⊂ ZPP." 19. Halldórsson, Magnus M.; Kratochvíl, Jan; Telle, Jan Arne (1998). Independent sets with domination constraints. 25th International Colloquium on Automata, Languages and Programming. Lecture Notes in Computer Science. Vol. 1443. Springer-Verlag. pp. 176–185. 20. 
Lehmann, Daniel; Oćallaghan, Liadan Ita; Shoham, Yoav (2002-09-01). "Truth revelation in approximately efficient combinatorial auctions". Journal of the ACM. 49 (5): 577–602. doi:10.1145/585265.585266. ISSN 0004-5411. S2CID 52829303. 21. Sandholm, Tuomas; Suri, Subhash (2000-07-30). "Improved Algorithms for Optimal Winner Determination in Combinatorial Auctions and Generalizations". Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence. AAAI Press: 90–97. ISBN 978-0-262-51112-4.
Well-colored graph In graph theory, a subfield of mathematics, a well-colored graph is an undirected graph for which greedy coloring uses the same number of colors regardless of the order in which colors are chosen for its vertices. That is, for these graphs, the chromatic number (minimum number of colors) and Grundy number (maximum number of greedily-chosen colors) are equal.[1] Examples The well-colored graphs include the complete graphs and odd-length cycle graphs (the graphs that form the exceptional cases to Brooks' theorem) as well as the complete bipartite graphs and complete multipartite graphs. The simplest example of a graph that is not well-colored is a four-vertex path. Coloring the vertices in path order uses two colors, the optimum for this graph. However, coloring the ends of the path first (using the same color for each end) causes the greedy coloring algorithm to use three colors for this graph. Because there exists a non-optimal vertex ordering, the path is not well-colored.[2][3] Complexity A graph is well-colored if and only if it does not have two vertex orderings for which the greedy coloring algorithm produces different numbers of colors. Therefore, recognizing non-well-colored graphs can be performed within the complexity class NP. On the other hand, a graph $G$ has Grundy number $k$ or more if and only if the graph obtained from $G$ by adding a $(k-1)$-vertex clique is well-colored. Therefore, by a reduction from the Grundy number problem, it is NP-complete to test whether these two orderings exist. It follows that it is co-NP-complete to test whether a given graph is well-colored.[1] Related properties A graph is hereditarily well-colored if every induced subgraph is well-colored. The hereditarily well-colored graphs are exactly the cographs, the graphs that do not have a four-vertex path as an induced subgraph.[4] References 1. 
Zaker, Manouchehr (2006), "Results on the Grundy chromatic number of graphs", Discrete Mathematics, 306 (23): 3166–3173, doi:10.1016/j.disc.2005.06.044, MR 2273147 2. Hansen, Pierre; Kuplinsky, Julio (1991), "The smallest hard-to-color graph", Discrete Mathematics, 96 (3): 199–212, doi:10.1016/0012-365X(91)90313-Q, MR 1139447 3. Kosowski, Adrian; Manuszewski, Krzysztof (2004), "Classical coloring of graphs", Graph Colorings, Contemporary Mathematics, vol. 352, Providence, Rhode Island: American Mathematical Society, pp. 1–19, doi:10.1090/conm/352/06369, MR 2076987 4. Christen, Claude A.; Selkow, Stanley M. (1979), "Some perfect coloring properties of graphs", Journal of Combinatorial Theory, Series B, 27 (1): 49–59, doi:10.1016/0095-8956(79)90067-4, MR 0539075
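The four-vertex path example above can be reproduced with a short greedy-coloring sketch (code ours, not from the cited papers):

```python
def greedy_color(adj, order):
    """Color vertices in the given order, each with the least unused color."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color

p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # four-vertex path 0-1-2-3
in_path_order = greedy_color(p4, [0, 1, 2, 3])
ends_first = greedy_color(p4, [0, 3, 1, 2])   # both ends get color 0 first
print(max(in_path_order.values()) + 1)        # 2 colors (optimal)
print(max(ends_first.values()) + 1)           # 3 colors: P4 is not well-colored
```

The two orders give different color counts, which is exactly the certificate showing that the path is not well-colored.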
Condition number In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given $f(x)=y,$ one is solving for x, and thus the condition number of the (local) inverse must be used. In linear regression the condition number of the moment matrix can be used as a diagnostic for multicollinearity.[1][2] The condition number is an application of the derivative, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix. More generally, condition numbers can be defined for non-linear functions in several variables. A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. In non-mathematical terms, an ill-conditioned problem is one where, for a small change in the inputs (the independent variables) there is a large change in the answer or dependent variable. This means that the correct solution/answer to the equation becomes hard to find. The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve the problem, that is, to calculate the solution. Some algorithms have a property called backward stability; in general, a backward stable algorithm can be expected to accurately solve well-conditioned problems. 
Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms. As a rule of thumb, if the condition number $\kappa (A)=10^{k}$, then you may lose up to $k$ digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods.[3] However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm. It generally just bounds it with an estimate (whose computed value depends on the choice of the norm to measure the inaccuracy). General definition in the context of error analysis Given a problem $f$ and an algorithm ${\tilde {f}}$ with an input $x$ and output ${\tilde {f}}(x),$ the error is $\delta f(x):=f(x)-{\tilde {f}}(x),$ the absolute error is $\|\delta f(x)\|=\left\|f(x)-{\tilde {f}}(x)\right\|$ and the relative error is $\|\delta f(x)\|/\|f(x)\|=\left\|f(x)-{\tilde {f}}(x)\right\|/\|f(x)\|.$ In this context, the absolute condition number of a problem $f$ is $\lim _{\varepsilon \rightarrow 0^{+}}\,\sup _{\|\delta x\|\,\leq \,\varepsilon }{\frac {\|\delta f(x)\|}{\|\delta x\|}}$ and the relative condition number is $\lim _{\varepsilon \rightarrow 0^{+}}\,\sup _{\|\delta x\|\,\leq \,\varepsilon }{\frac {\|\delta f(x)\|/\|f(x)\|}{\|\delta x\|/\|x\|}}.$ Matrices For example, the condition number associated with the linear equation Ax = b gives a bound on how inaccurate the solution x will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not the algorithm or floating-point accuracy of the computer used to solve the corresponding system. In particular, one should think of the condition number as being (very roughly) the rate at which the solution x will change with respect to a change in b. Thus, if the condition number is large, even a small error in b may cause a large error in x. 
On the other hand, if the condition number is small, then the error in x will not be much bigger than the error in b. The condition number is defined more precisely to be the maximum ratio of the relative error in x to the relative error in b. Let e be the error in b. Assuming that A is a nonsingular matrix, the error in the solution A−1b is A−1e. The ratio of the relative error in the solution to the relative error in b is ${\frac {\left\|A^{-1}e\right\|}{\left\|A^{-1}b\right\|}}/{\frac {\|e\|}{\|b\|}}={\frac {\left\|A^{-1}e\right\|}{\|e\|}}{\frac {\|b\|}{\left\|A^{-1}b\right\|}}.$ The maximum value (for nonzero b and e) is then seen to be the product of the two operator norms as follows: ${\begin{aligned}\max _{e,b\neq 0}\left\{{\frac {\left\|A^{-1}e\right\|}{\|e\|}}{\frac {\|b\|}{\left\|A^{-1}b\right\|}}\right\}&=\max _{e\neq 0}\left\{{\frac {\left\|A^{-1}e\right\|}{\|e\|}}\right\}\,\max _{b\neq 0}\left\{{\frac {\|b\|}{\left\|A^{-1}b\right\|}}\right\}\\&=\max _{e\neq 0}\left\{{\frac {\left\|A^{-1}e\right\|}{\|e\|}}\right\}\,\max _{x\neq 0}\left\{{\frac {\|Ax\|}{\|x\|}}\right\}\\&=\left\|A^{-1}\right\|\,\|A\|.\end{aligned}}$ The same definition is used for any consistent norm, i.e. one that satisfies $\kappa (A)=\left\|A^{-1}\right\|\,\left\|A\right\|\geq \left\|A^{-1}A\right\|=1.$ When the condition number is exactly one (which can only happen if A is a scalar multiple of a linear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data. However, it does not mean that the algorithm will converge rapidly to this solution, just that it will not diverge arbitrarily because of inaccuracy on the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors. 
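This amplification bound can be observed numerically. The following NumPy sketch (the matrix is ours, chosen to be nearly singular) compares the relative error in x with κ(A) times the relative error in b:

```python
import numpy as np

# kappa(A) = ||A|| * ||A^{-1}|| in the 2-norm; for this nearly singular
# matrix it is roughly 2.5e4.
A = np.array([[1.0, 2.0], [2.0, 4.001]])
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

b = np.array([3.0, 6.001])
e = np.array([0.0, 1e-6])                 # small perturbation of b
x = np.linalg.solve(A, b)
x_pert = np.linalg.solve(A, b + e)

rel_in = np.linalg.norm(e) / np.linalg.norm(b)
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
print(rel_out / rel_in)                   # large amplification, but <= kappa
```

The observed ratio rel_out/rel_in is on the order of 10^4 here, below the bound κ(A) as the derivation above requires.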
The condition number may also be infinite, but this implies that the problem is ill-posed (does not possess a unique, well-defined solution for each choice of data; that is, the matrix is not invertible), and no algorithm can be expected to reliably find a solution. The definition of the condition number depends on the choice of norm, as can be illustrated by two examples. If $\|\cdot \|$ is the matrix norm induced by the (vector) Euclidean norm (sometimes known as the L2 norm and typically denoted as $\|\cdot \|_{2}$), then $\kappa (A)={\frac {\sigma _{\text{max}}(A)}{\sigma _{\text{min}}(A)}},$ where $\sigma _{\text{max}}(A)$ and $\sigma _{\text{min}}(A)$ are maximal and minimal singular values of $A$ respectively. Hence: • If $A$ is normal, then $\kappa (A)={\frac {\left|\lambda _{\text{max}}(A)\right|}{\left|\lambda _{\text{min}}(A)\right|}},$ where $\lambda _{\text{max}}(A)$ and $\lambda _{\text{min}}(A)$ are maximal and minimal (by moduli) eigenvalues of $A$ respectively. • If $A$ is unitary, then $\kappa (A)=1.$ The condition number with respect to L2 arises so often in numerical linear algebra that it is given a name, the condition number of a matrix. If $\|\cdot \|$ is the matrix norm induced by the $L^{\infty }$ (vector) norm and $A$ is lower triangular non-singular (i.e. $a_{ii}\neq 0$ for all $i$), then $\kappa (A)\geq {\frac {\max _{i}{\big (}|a_{ii}|{\big )}}{\min _{i}{\big (}|a_{ii}|{\big )}}}$ recalling that the eigenvalues of any triangular matrix are simply the diagonal entries. The condition number computed with this norm is generally larger than the condition number computed relative to the Euclidean norm, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves a non-linear algebra, for example when approximating irrational and transcendental functions or numbers with numerical methods). 
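Both formulas can be checked directly (sketch; the example matrix is ours):

```python
import numpy as np

# 2-norm condition number as sigma_max / sigma_min, and the lower bound
# max|a_ii| / min|a_ii| on the infinity-norm condition number of a
# triangular matrix.
A = np.array([[4.0, 0.0], [3.0, 0.5]])    # lower triangular, nonsingular
s = np.linalg.svd(A, compute_uv=False)
kappa_2 = s[0] / s[-1]                    # equals np.linalg.cond(A, 2)

kappa_inf = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)
diag_bound = max(abs(np.diag(A))) / min(abs(np.diag(A)))   # = 8 here
print(kappa_2, kappa_inf, diag_bound)     # kappa_inf >= diag_bound
```

As the text notes, the diagonal-ratio bound is cheap to evaluate, while κ computed from the Euclidean norm requires the singular values.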
If the condition number is not significantly larger than one, the matrix is well-conditioned, which means that its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to be ill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or solution of a linear system of equations is prone to large numerical errors. A matrix that is not invertible is often said to have a condition number equal to infinity. Alternatively, it can be defined as $\kappa (A)=\|A\|\|A^{\dagger }\|$, where $A^{\dagger }$ is the Moore-Penrose pseudoinverse. For square matrices, this unfortunately makes the condition number discontinuous, but it is a useful definition for rectangular matrices, which are never invertible but are still used to define systems of equations. Nonlinear Condition numbers can also be defined for nonlinear functions, and can be computed using calculus. The condition number varies with the point; in some cases one can use the maximum (or supremum) condition number over the domain of the function or domain of the question as an overall condition number, while in other cases the condition number at a particular point is of more interest. One variable The condition number of a differentiable function $f$ in one variable as a function is $\left|xf'/f\right|$. Evaluated at a point $x$, this is $\left|{\frac {xf'(x)}{f(x)}}\right|.$ Most elegantly, this can be understood as (the absolute value of) the ratio of the logarithmic derivative of $f$, which is $(\log f)'=f'/f$, and the logarithmic derivative of $x$, which is $(\log x)'=x'/x=1/x$, yielding a ratio of $xf'/f$. This is because the logarithmic derivative is the infinitesimal rate of relative change in a function: it is the derivative $f'$ scaled by the value of $f$. 
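The formula |x f′(x)/f(x)| can be checked numerically against the observed ratio of relative changes (sketch; helper name is ours):

```python
import math

def cond(f, fprime, x):
    """Relative condition number |x * f'(x) / f(x)| of f at x."""
    return abs(x * fprime(x) / f(x))

x, dx = 2.0, 1e-8
# For the exponential function, f'/f = 1, so the condition number is |x|.
predicted = cond(math.exp, math.exp, x)            # 2.0
observed = (abs(math.exp(x + dx) - math.exp(x)) / math.exp(x)) / (dx / x)
print(predicted, round(observed, 6))               # 2.0 2.0
```

The finite-difference ratio agrees with the analytic condition number up to terms of order dx, as the derivation above predicts.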
Note that if a function has a zero at a point, its condition number at the point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change. More directly, given a small change $\Delta x$ in $x$, the relative change in $x$ is $[(x+\Delta x)-x]/x=(\Delta x)/x$, while the relative change in $f(x)$ is $[f(x+\Delta x)-f(x)]/f(x)$. Taking the ratio yields ${\frac {[f(x+\Delta x)-f(x)]/f(x)}{(\Delta x)/x}}={\frac {x}{f(x)}}{\frac {f(x+\Delta x)-f(x)}{(x+\Delta x)-x}}={\frac {x}{f(x)}}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}.$ The last term is the difference quotient (the slope of the secant line), and taking the limit yields the derivative. Condition numbers of common elementary functions are particularly important in computing significant figures and can be computed immediately from the derivative; see significance arithmetic of transcendental functions. A few important ones (name, symbol, condition number) are given below:
• Addition / subtraction, $x+a$: $\left|{\frac {x}{x+a}}\right|$
• Scalar multiplication, $ax$: $1$
• Division, $1/x$: $1$
• Polynomial, $x^{n}$: $|n|$
• Exponential function, $e^{x}$: $|x|$
• Natural logarithm function, $\ln(x)$: $\left|{\frac {1}{\ln(x)}}\right|$
• Sine function, $\sin(x)$: $|x\cot(x)|$
• Cosine function, $\cos(x)$: $|x\tan(x)|$
• Tangent function, $\tan(x)$: $|x(\tan(x)+\cot(x))|$
• Inverse sine function, $\arcsin(x)$: ${\frac {x}{{\sqrt {1-x^{2}}}\arcsin(x)}}$
• Inverse cosine function, $\arccos(x)$: ${\frac {|x|}{{\sqrt {1-x^{2}}}\arccos(x)}}$
• Inverse tangent function, $\arctan(x)$: ${\frac {x}{(1+x^{2})\arctan(x)}}$
Several variables Condition numbers can be defined for any function $f$ mapping its data from some domain (e.g. an $m$-tuple of real numbers $x$) into some codomain (e.g. an $n$-tuple of real numbers $f(x)$), where both the domain and codomain are Banach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. 
This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example, polynomial root finding or computing eigenvalues. The condition number of $f$ at a point $x$ (specifically, its relative condition number[4]) is then defined to be the maximum ratio of the fractional change in $f(x)$ to any fractional change in $x$, in the limit where the change $\delta x$ in $x$ becomes infinitesimally small:[4] $\lim _{\varepsilon \to 0^{+}}\sup _{\|\delta x\|\leq \varepsilon }\left[\left.{\frac {\left\|f(x+\delta x)-f(x)\right\|}{\|f(x)\|}}\right/{\frac {\|\delta x\|}{\|x\|}}\right],$ where $\|\cdot \|$ is a norm on the domain/codomain of $f$. If $f$ is differentiable, this is equivalent to:[4] ${\frac {\|J(x)\|}{\|f(x)\|/\|x\|}},$ where $J(x)$ denotes the Jacobian matrix of partial derivatives of $f$ at $x$, and $\|J(x)\|$ is the induced norm on the matrix. See also • Numerical methods for linear least squares • Hilbert matrix • Ill-posed problem • Singular value • Wilson matrix References 1. Belsley, David A.; Kuh, Edwin; Welsch, Roy E. (1980). "The Condition Number". Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley & Sons. pp. 100–104. ISBN 0-471-05856-4. 2. Pesaran, M. Hashem (2015). "The Multicollinearity Problem". Time Series and Panel Data Econometrics. New York: Oxford University Press. pp. 67–72 [p. 70]. ISBN 978-0-19-875998-0. 3. Cheney; Kincaid (2008). Numerical Mathematics and Computing. p. 321. ISBN 978-0-495-11475-8. 4. Trefethen, L. N.; Bau, D. (1997). Numerical Linear Algebra. SIAM. ISBN 978-0-89871-361-9. Further reading • Demmel, James (1990). "Nearest Defective Matrices and the Geometry of Ill-conditioning". In Cox, M. G.; Hammarling, S. (eds.). Reliable Numerical Computation. Oxford: Clarendon Press. pp. 35–55. ISBN 0-19-853564-3. 
External links • Condition Number of a Matrix at Holistic Numerical Methods Institute • MATLAB library function to determine condition number • Condition number – Encyclopedia of Mathematics • Who Invented the Matrix Condition Number? by Nick Higham
Well-covered graph In graph theory, a well-covered graph is an undirected graph in which every minimal vertex cover has the same size as every other minimal vertex cover. Equivalently, these are the graphs in which all maximal independent sets have equal size. Well-covered graphs were defined and first studied by Michael D. Plummer in 1970. The well-covered graphs include all complete graphs, balanced complete bipartite graphs, and the rook's graphs whose vertices represent squares of a chessboard and edges represent moves of a chess rook. Known characterizations of the well-covered cubic graphs, well-covered claw-free graphs, and well-covered graphs of high girth allow these graphs to be recognized in polynomial time, but testing whether other kinds of graph are well-covered is a coNP-complete problem. Definitions A vertex cover in a graph is a set of vertices that touches every edge in the graph. A vertex cover is minimal, or irredundant, if removing any vertex from it would destroy the covering property. It is minimum if there is no other vertex cover with fewer vertices. A well-covered graph is one in which every minimal cover is also minimum. In the original paper defining well-covered graphs, Plummer writes that this is "obviously equivalent" to the property that every two minimal covers have the same number of vertices as each other.[1] An independent set in a graph is a set of vertices no two of which are adjacent to each other. If C is a vertex cover in a graph G, the complement of C must be an independent set, and vice versa. C is a minimal vertex cover if and only if its complement I is a maximal independent set, and C is a minimum vertex cover if and only if its complement is a maximum independent set. 
Therefore, a well-covered graph is, equivalently, a graph in which every maximal independent set has the same size, or a graph in which every maximal independent set is maximum.[2] In the original paper defining well-covered graphs, these definitions were restricted to connected graphs,[3] although they are meaningful for disconnected graphs as well. Some later authors have replaced the connectivity requirement with the weaker requirement that a well-covered graph must not have any isolated vertices.[4] For both connected well-covered graphs and well-covered graphs without isolated vertices, there can be no essential vertices, vertices which belong to every minimum vertex cover.[3] Additionally, every well-covered graph is a critical graph for vertex covering in the sense that, for every vertex v, deleting v from the graph produces a graph with a smaller minimum vertex cover.[3] The independence complex of a graph G is the simplicial complex having a simplex for each independent set in G. A simplicial complex is said to be pure if all its maximal simplices have the same cardinality, so a well-covered graph is equivalently a graph with a pure independence complex.[5]

Examples

[Figure: a non-attacking placement of eight rooks on a chessboard. If fewer than eight rooks are placed in a non-attacking way on a chessboard, they can always be extended to eight rooks that remain non-attacking.]

A cycle graph of length four or five is well-covered: in each case, every maximal independent set has size two. A cycle of length seven, and a path of length three, are also well-covered.[6] Every complete graph is well-covered: every maximal independent set consists of a single vertex. Similarly, every cluster graph (a disjoint union of complete graphs) is well-covered.[7] A complete bipartite graph is well covered if the two sides of its bipartition have equal numbers of vertices, for these are its only two maximal independent sets.
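For small graphs, the definition can be tested by brute force. A sketch (function names ours; exponential-time enumeration, for illustration only):

```python
from itertools import combinations

def maximal_independent_set_sizes(n, edges):
    """Brute-force the sizes of all maximal independent sets of a graph
    on vertices 0..n-1 (exponential time; small examples only)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def independent(s):
        return all(v not in adj[u] for u, v in combinations(s, 2))

    sizes = set()
    for k in range(1, n + 1):
        for tup in combinations(range(n), k):
            s = set(tup)
            # maximal: independent, and no outside vertex can be added
            if independent(s) and all(w in s or not independent(s | {w})
                                      for w in range(n)):
                sizes.add(len(s))
    return sizes

def is_well_covered(n, edges):
    """Well-covered iff all maximal independent sets agree in size."""
    return len(maximal_independent_set_sizes(n, edges)) == 1

# the 5-cycle is well-covered; the 6-cycle is not ({0,3} vs {0,2,4})
assert is_well_covered(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
assert not is_well_covered(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)])
# the balanced complete bipartite graph K_{3,3} is well-covered
assert is_well_covered(6, [(a, b) for a in range(3) for b in range(3, 6)])
```

The coNP-completeness result discussed later explains why no fundamentally better algorithm than this kind of enumeration is expected for arbitrary graphs.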
The complement graph of a triangle-free graph with no isolated vertices is well-covered: it has no independent sets larger than two, and every single vertex can be extended to a two-vertex independent set.[6] A rook's graph is well covered: if one places any set of rooks on a chessboard so that no two rooks are attacking each other, it is always possible to continue placing more non-attacking rooks until there is one on every row and column of the chessboard.[8] The graph whose vertices are the diagonals of a simple polygon and whose edges connect pairs of diagonals that cross each other is well-covered, because its maximal independent sets are triangulations of the polygon and all triangulations have the same number of edges.[9] If G is any n-vertex graph, then the rooted product of G with a one-edge graph (that is, the graph H formed by adding n new vertices to G, each of degree one and each adjacent to a distinct vertex in G) is well-covered. For, a maximal independent set in H must consist of an arbitrary independent set in G together with the degree-one neighbors of the complementary set, and must therefore have size n.[10] More generally, given any graph G together with a clique cover (a partition p of the vertices of G into cliques), the graph Gp formed by adding another vertex to each clique is well-covered; the rooted product is the special case of this construction when p consists of n one-vertex cliques.[11] Thus, every graph is an induced subgraph of a well-covered graph. Bipartiteness, very well covered graphs, and girth Favaron (1982) defines a very well covered graph to be a well-covered graph (possibly disconnected, but with no isolated vertices) in which each maximal independent set (and therefore also each minimal vertex cover) contains exactly half of the vertices. This definition includes the rooted products of a graph G and a one-edge graph. 
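The rooted product (pendant-vertex) construction described above can be sketched directly; `add_pendants` is our name for it, and a compact brute-force enumeration confirms that the path on three vertices, which is not well-covered, becomes well-covered once each vertex gets a private degree-one neighbor:

```python
from itertools import combinations

def add_pendants(n, edges):
    """Rooted product with a one-edge graph: give each vertex v of a
    graph on 0..n-1 a new private neighbor n+v of degree one."""
    return 2 * n, list(edges) + [(v, n + v) for v in range(n)]

def maximal_ind_set_sizes(n, edges):
    # compact brute-force enumeration (small n only)
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    ind = lambda s: all(v not in adj[u] for u, v in combinations(sorted(s), 2))
    return {len(s) for k in range(1, n + 1)
            for s in map(set, combinations(range(n), k))
            if ind(s) and all(w in s or not ind(s | {w}) for w in range(n))}

# the path 0-1-2 is not well-covered: maximal sets {1} and {0, 2}
assert maximal_ind_set_sizes(3, [(0, 1), (1, 2)]) == {1, 2}
# after attaching pendants, every maximal independent set has size n = 3
assert maximal_ind_set_sizes(*add_pendants(3, [(0, 1), (1, 2)])) == {3}
```

This matches the argument in the text: a maximal independent set must contain, for each vertex of G, either that vertex or its pendant neighbor, so it always has exactly n elements.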
It also includes, for instance, the bipartite well-covered graphs studied by Ravindra (1977) and Berge (1981): in a bipartite graph without isolated vertices, both sides of any bipartition form maximal independent sets (and minimal vertex covers), so if the graph is well-covered both sides must have equally many vertices. In a well-covered graph with n vertices, without isolated vertices, the size of a maximum independent set is at most n/2, so very well covered graphs are the well covered graphs in which the maximum independent set size is as large as possible.[12] A bipartite graph G is well-covered if and only if it has a perfect matching M with the property that, for every edge uv in M, the induced subgraph of the neighbors of u and v forms a complete bipartite graph.[13] The characterization in terms of matchings can be extended from bipartite graphs to very well covered graphs: a graph G is very well covered if and only if it has a perfect matching M with the following two properties: 1. No edge of M belongs to a triangle in G, and 2. If an edge of M is the central edge of a three-edge path in G, then the two endpoints of the path must be adjacent.
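The two matching conditions above can be checked edge by edge. A sketch, assuming the graph is given as a dict of adjacency sets (the function name is ours):

```python
def matching_is_very_well_covering(adj, M):
    """Check the two conditions for a perfect matching M (a list of
    edges) of a graph given as a dict of adjacency sets: (1) no matched
    edge lies in a triangle, and (2) whenever a matched edge is the
    middle of a three-edge path, the path's endpoints are adjacent."""
    for u, v in M:
        if adj[u] & adj[v]:        # a common neighbor closes a triangle
            return False
        for a in adj[u] - {v}:     # path a-u-v-b with central edge uv
            for b in adj[v] - {u}:
                if a != b and b not in adj[a]:
                    return False
    return True

# The 4-cycle with its perfect matching passes (C4 is very well covered) ...
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert matching_is_very_well_covering(c4, [(0, 1), (2, 3)])
# ... while K4 fails condition 1 (well-covered, but not very well covered)
k4 = {v: set(range(4)) - {v} for v in range(4)}
assert not matching_is_very_well_covering(k4, [(0, 1), (2, 3)])
```

Combined with any polynomial-time perfect-matching algorithm, a check of this kind is the core of the polynomial-time recognition of very well covered graphs mentioned in the Complexity section.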
Moreover, if G is very well covered, then every perfect matching in G satisfies these properties.[14] Trees are a special case of bipartite graphs, and testing whether a tree is well-covered can be handled as a much simpler special case of the characterization of well-covered bipartite graphs: if G is a tree with more than two vertices, it is well-covered if and only if each non-leaf node of the tree is adjacent to exactly one leaf.[13] The same characterization applies to graphs that are locally tree-like, in the sense that low-diameter neighborhoods of every vertex are acyclic: if a graph has girth eight or more (so that, for every vertex v, the subgraph of vertices within distance three of v is acyclic) then it is well-covered if and only if every vertex of degree greater than one has exactly one neighbor of degree one.[15] A closely related but more complex characterization applies to well-covered graphs of girth five or more.[16] Regularity and planarity The cubic (3-regular) well-covered graphs have been classified: they consist of seven 3-connected examples, together with three infinite families of cubic graphs with lesser connectivity.[17] The seven 3-connected cubic well-covered graphs are the complete graph K4, the graphs of the triangular prism and the pentagonal prism, the Dürer graph, the utility graph K3,3, an eight-vertex graph obtained from the utility graph by a Y-Δ transform, and the 14-vertex generalized Petersen graph G(7,2).[18] Of these graphs, the first four are planar graphs. They are the only four cubic polyhedral graphs (graphs of simple convex polyhedra) that are well-covered.[19] Four of the graphs (the two prisms, the Dürer graph, and G(7,2)) are generalized Petersen graphs. The 1- and 2-connected cubic well-covered graphs are all formed by replacing the nodes of a path or cycle by three fragments of graphs which Plummer (1993) labels A, B, and C. 
Fragments A or B may be used to replace the nodes of a cycle or the interior nodes of a path, while fragment C is used to replace the two end nodes of a path. Fragment A contains a bridge, so the result of performing this replacement process on a path and using fragment A to replace some of the path nodes (and the other two fragments for the remaining nodes) is a 1-vertex-connected cubic well-covered graph. All 1-vertex-connected cubic well-covered graphs have this form, and all such graphs are planar.[17] There are two types of 2-vertex-connected cubic well-covered graphs. One of these two families is formed by replacing the nodes of a cycle by fragments A and B, with at least two of the fragments being of type A; a graph of this type is planar if and only if it does not contain any fragments of type B. The other family is formed by replacing the nodes of a path by fragments of type B and C; all such graphs are planar.[17] Complementing the characterization of well-covered simple polyhedra in three dimensions, researchers have also considered the well-covered simplicial polyhedra, or equivalently the well-covered maximal planar graphs. Every maximal planar graph with five or more vertices has vertex connectivity 3, 4, or 5.[20] There are no well-covered 5-connected maximal planar graphs, and there are only four 4-connected well-covered maximal planar graphs: the graphs of the regular octahedron, the pentagonal dipyramid, the snub disphenoid, and an irregular polyhedron (a nonconvex deltahedron) with 12 vertices, 30 edges, and 20 triangular faces. However, there are infinitely many 3-connected well-covered maximal planar graphs.[21] For instance, if a 3t-vertex maximal planar graph has t disjoint triangle faces, then these faces will form a clique cover. 
Therefore, the clique cover construction, which for these graphs consists of subdividing each of these t triangles into three new triangles meeting at a central vertex, produces a well-covered maximal planar graph.[22] Complexity Testing whether a graph contains two maximal independent sets of different sizes is NP-complete; that is, complementarily, testing whether a graph is well-covered is coNP-complete.[23] Although it is easy to find maximum independent sets in graphs that are known to be well-covered, it is also NP-hard for an algorithm to produce as output, on all graphs, either a maximum independent set or a guarantee that the input is not well-covered.[24] In contrast, it is possible to test whether a given graph G is very well covered in polynomial time. To do so, find the subgraph H of G consisting of the edges that satisfy the two properties of a matched edge in a very well covered graph, and then use a matching algorithm to test whether H has a perfect matching.[14] Some problems that are NP-complete for arbitrary graphs, such as the problem of finding a Hamiltonian cycle, may also be solved in polynomial time for very well covered graphs.[25] A graph is said to be equimatchable if every maximal matching is maximum; that is, it is equimatchable if its line graph is well-covered.[26] More strongly it is called randomly matchable if every maximal matching is a perfect matching. The only connected randomly matchable graphs are the complete graphs and the balanced complete bipartite graphs.[27] It is possible to test whether a line graph, or more generally a claw-free graph, is well-covered in polynomial time.[28] The characterizations of well-covered graphs with girth five or more, and of well-covered graphs that are 3-regular, also lead to efficient polynomial time algorithms to recognize these graphs.[29] Notes 1. Plummer (1970). 2. Plummer (1993). 3. Plummer (1970). 4. Favaron (1982). 5. 
For examples of papers using this terminology, see Dochtermann & Engström (2009) and Cook & Nagel (2010). 6. Sankaranarayana (1994), Section 2.4, "Examples", p. 7. 7. Holroyd & Talbot (2005). 8. The rook's graphs are, equivalently, the line graphs of complete bipartite graphs, so the well-covered property of rook's graphs is equivalent to the fact that complete bipartite graphs are equimatchable, for which see Sumner (1979) and Lesk, Plummer & Pulleyblank (1984). 9. Greenberg (1993). 10. This class of examples was studied by Fink et al. (1985); they are also (together with the four-edge cycle, which is also well-covered) exactly the graphs whose domination number is n/2. Its well-covering property is also stated in different terminology (having a pure independence complex) as Theorem 4.4 of Dochtermann & Engström (2009). 11. For the clique cover construction, see Cook & Nagel (2010), Lemma 3.2. This source states its results in terms of the purity of the independence complex, and uses the term "fully-whiskered" for the special case of the rooted product. 12. Berge (1981). 13. Ravindra (1977); Plummer (1993). 14. Staples (1975); Favaron (1982); Plummer (1993). 15. Finbow & Hartnell (1983); Plummer (1993), Theorem 4.1. 16. Finbow & Hartnell (1983); Plummer (1993), Theorem 4.2. 17. Campbell (1987); Campbell & Plummer (1988); Plummer (1993). 18. Campbell (1987); Finbow, Hartnell & Nowakowski (1988); Campbell, Ellingham & Royle (1993); Plummer (1993). 19. Campbell & Plummer (1988). 20. The complete graphs on 1, 2, 3, and 4 vertices are all maximal planar and well-covered; their vertex connectivity is either unbounded or at most three, depending on details of the definition of vertex connectivity that are irrelevant for larger maximal planar graphs. 21. Finbow, Hartnell, and Nowakowski et al. (2003, 2009, 2010). 22. The graphs constructed in this way are called the $K_{4}$-family by Finbow et al. 
(2016); additional examples can be constructed by an operation they call an O-join for combining smaller graphs. 23. Sankaranarayana & Stewart (1992); Chvátal & Slater (1993); Caro, Sebő & Tarsi (1996). 24. Raghavan & Spinrad (2003). 25. Sankaranarayana & Stewart (1992). 26. Lesk, Plummer & Pulleyblank (1984). 27. Sumner (1979). 28. Lesk, Plummer & Pulleyblank (1984); Tankus & Tarsi (1996); Tankus & Tarsi (1997). 29. Campbell, Ellingham & Royle (1993); Plummer (1993). References • Berge, Claude (1981), "Some common properties for regularizable graphs, edge-critical graphs and B-graphs", Graph theory and algorithms (Proc. Sympos., Res. Inst. Electr. Comm., Tohoku Univ., Sendai, 1980), Lecture Notes in Computer Science, vol. 108, Berlin: Springer, pp. 108–123, doi:10.1007/3-540-10704-5_10, ISBN 978-3-540-10704-0, MR 0622929. • Campbell, S. R. (1987), Some results on planar well-covered graphs, Ph.D. thesis, Vanderbilt University, Department of Mathematics. As cited by Plummer (1993). • Campbell, S. R.; Ellingham, M. N.; Royle, Gordon F. (1993), "A characterisation of well-covered cubic graphs", Journal of Combinatorial Mathematics and Combinatorial Computing, 13: 193–212, MR 1220613. • Campbell, Stephen R.; Plummer, Michael D. (1988), "On well-covered 3-polytopes", Ars Combinatoria, 25 (A): 215–242, MR 0942505. • Caro, Yair; Sebő, András; Tarsi, Michael (1996), "Recognizing greedy structures", Journal of Algorithms, 20 (1): 137–156, doi:10.1006/jagm.1996.0006, MR 1368720. • Chvátal, Václav; Slater, Peter J. (1993), "A note on well-covered graphs", Quo vadis, graph theory?, Annals of Discrete Mathematics, vol. 55, Amsterdam: North-Holland, pp. 179–181, MR 1217991. • Cook, David, II; Nagel, Uwe (2010), "Cohen-Macaulay graphs and face vectors of flag complexes", SIAM Journal on Discrete Mathematics, 26: 89–101, arXiv:1003.4447, Bibcode:2010arXiv1003.4447C, doi:10.1137/100818170. 
• Dochtermann, Anton; Engström, Alexander (2009), "Algebraic properties of edge ideals via combinatorial topology", Electronic Journal of Combinatorics, 16 (2): Research Paper 2, doi:10.37236/68, MR 2515765. • Favaron, O. (1982), "Very well covered graphs", Discrete Mathematics, 42 (2–3): 177–187, doi:10.1016/0012-365X(82)90215-1, MR 0677051. • Finbow, A. S.; Hartnell, B. L. (1983), "A game related to covering by stars", Ars Combinatoria, 16 (A): 189–198, MR 0737090. • Finbow, A.; Hartnell, B.; Nowakowski, R. (1988), "Well-dominated graphs: a collection of well-covered ones", Ars Combinatoria, 25 (A): 5–10, MR 0942485. • Finbow, A.; Hartnell, B.; Nowakowski, R. J. (1993), "A characterization of well covered graphs of girth 5 or greater", Journal of Combinatorial Theory, Series B, 57 (1): 44–68, doi:10.1006/jctb.1993.1005, MR 1198396. • Finbow, A.; Hartnell, B.; Nowakowski, R.; Plummer, Michael D. (2003), "On well-covered triangulations. I", Discrete Applied Mathematics, 132 (1–3): 97–108, doi:10.1016/S0166-218X(03)00393-7, MR 2024267. • Finbow, Arthur S.; Hartnell, Bert L.; Nowakowski, Richard J.; Plummer, Michael D. (2009), "On well-covered triangulations. II", Discrete Applied Mathematics, 157 (13): 2799–2817, doi:10.1016/j.dam.2009.03.014, MR 2537505. • Finbow, Arthur S.; Hartnell, Bert L.; Nowakowski, Richard J.; Plummer, Michael D. (2010), "On well-covered triangulations. III", Discrete Applied Mathematics, 158 (8): 894–912, doi:10.1016/j.dam.2009.08.002, MR 2602814. • Finbow, Arthur S.; Hartnell, Bert L.; Nowakowski, Richard J.; Plummer, Michael D. (2016), "Well-covered triangulations: Part IV", Discrete Applied Mathematics, 215: 71–94, doi:10.1016/j.dam.2016.06.030, MR 3548980. • Fink, J. F.; Jacobson, M. S.; Kinch, L. F.; Roberts, J. (1985), "On graphs having domination number half their order", Period. Math. Hungar., 16 (4): 287–293, doi:10.1007/BF01848079, MR 0833264. 
• Greenberg, Peter (1993), "Piecewise SL2Z geometry", Transactions of the American Mathematical Society, 335 (2): 705–720, doi:10.2307/2154401, JSTOR 2154401, MR 1140914. • Holroyd, Fred; Talbot, John (2005), "Graphs with the Erdős-Ko-Rado property", Discrete Mathematics, 293 (1–3): 165–176, arXiv:math/0307073, doi:10.1016/j.disc.2004.08.028, MR 2136060. • Lesk, M.; Plummer, M. D.; Pulleyblank, W. R. (1984), "Equi-matchable graphs", in Bollobás, Béla (ed.), Graph Theory and Combinatorics: Proceedings of the Cambridge Combinatorial Conference, in Honour of Paul Erdös, London: Academic Press, pp. 239–254, MR 0777180. • Plummer, Michael D. (1970), "Some covering concepts in graphs", Journal of Combinatorial Theory, 8: 91–98, doi:10.1016/S0021-9800(70)80011-4, MR 0289347. • Plummer, Michael D. (1993), "Well-covered graphs: a survey", Quaestiones Mathematicae, 16 (3): 253–287, doi:10.1080/16073606.1993.9631737, MR 1254158, archived from the original on May 27, 2012. • Raghavan, Vijay; Spinrad, Jeremy (2003), "Robust algorithms for restricted domains", Journal of Algorithms, 48 (1): 160–172, doi:10.1016/S0196-6774(03)00048-8, MR 2006100. • Ravindra, G. (1977), "Well-covered graphs", Journal of Combinatorics, Information and System Sciences, 2 (1): 20–21, MR 0469831. • Sankaranarayana, Ramesh S. (1994), Well-covered graphs: some new sub-classes and complexity results (Doctoral dissertation), University of Alberta • Sankaranarayana, Ramesh S.; Stewart, Lorna K. (1992), "Complexity results for well-covered graphs", Networks, 22 (3): 247–262, CiteSeerX 10.1.1.47.9278, doi:10.1002/net.3230220304, MR 1161178. • Staples, J. (1975), On some subclasses of well-covered graphs, Ph.D. thesis, Vanderbilt University, Department of Mathematics. As cited by Plummer (1993). • Sumner, David P. (1979), "Randomly matchable graphs", Journal of Graph Theory, 3 (2): 183–186, doi:10.1002/jgt.3190030209, MR 0530304. 
• Tankus, David; Tarsi, Michael (1996), "Well-covered claw-free graphs", Journal of Combinatorial Theory, Series B, 66 (2): 293–302, doi:10.1006/jctb.1996.0022, MR 1376052. • Tankus, David; Tarsi, Michael (1997), "The structure of well-covered graphs and the complexity of their recognition problems", Journal of Combinatorial Theory, Series B, 69 (2): 230–233, doi:10.1006/jctb.1996.1742, MR 1438624.
Well-founded relation

In mathematics, a binary relation R is called well-founded (or wellfounded or foundational[1]) on a class X if every non-empty subset S ⊆ X has a minimal element with respect to R, that is, an element m ∈ S such that s R m holds for no s ∈ S (for instance, no s ∈ S is "smaller" than m). In other words, a relation is well founded if $(\forall S\subseteq X)\;[S\neq \varnothing \implies (\exists m\in S)(\forall s\in S)\lnot (s\mathrel {R} m)].$ "Noetherian induction" redirects here. For the use in topology, see Noetherian topological space.

[Table: transitive binary relations (equivalence relations, preorders, partial orders, total preorders, total orders, prewellorderings, well-quasi-orderings, well-orderings, lattices, semilattices, and strict partial, weak, and total orders), classified by whether they are symmetric, antisymmetric, connected, well-founded, have joins or meets, and are reflexive, irreflexive, or asymmetric.]

Some authors include an extra condition that R is set-like, i.e., that the elements less than any given element form a set. Equivalently, assuming the axiom of dependent choice, a relation is well-founded when it contains no infinite descending chains; that is, when there is no infinite sequence x0, x1, x2, ... of elements of X such that xn+1 R xn for every natural number n.[2][3] In order theory, a partial order is called well-founded if the corresponding strict order is a well-founded relation. If the order is a total order then it is called a well-order. In set theory, a set x is called a well-founded set if the set membership relation is well-founded on the transitive closure of x. The axiom of regularity, which is one of the axioms of Zermelo–Fraenkel set theory, asserts that all sets are well-founded. A relation R is converse well-founded, upwards well-founded or Noetherian on X, if the converse relation R−1 is well-founded on X. In this case R is also said to satisfy the ascending chain condition. In the context of rewriting systems, a Noetherian relation is also called terminating.

Induction and recursion

An important reason that well-founded relations are interesting is because a version of transfinite induction can be used on them: if (X, R) is a well-founded relation, P(x) is some property of elements of X, and we want to show that P(x) holds for all elements x of X, it suffices to show that: If x is an element of X and P(y) is true for all y such that y R x, then P(x) must also be true.
That is, $(\forall x\in X)\;[(\forall y\in X)\;[y\mathrel {R} x\implies P(y)]\implies P(x)]\quad {\text{implies}}\quad (\forall x\in X)\,P(x).$ Well-founded induction is sometimes called Noetherian induction,[4] after Emmy Noether. On par with induction, well-founded relations also support construction of objects by transfinite recursion. Let (X, R) be a set-like well-founded relation and F a function that assigns an object F(x, g) to each pair of an element x ∈ X and a function g on the initial segment {y: y R x} of X. Then there is a unique function G such that for every x ∈ X, $G(x)=F\left(x,G\vert _{\left\{y:\,y\mathrel {R} x\right\}}\right).$ That is, if we want to construct a function G on X, we may define G(x) using the values of G(y) for y R x. As an example, consider the well-founded relation (N, S), where N is the set of all natural numbers, and S is the graph of the successor function x ↦ x+1. Then induction on S is the usual mathematical induction, and recursion on S gives primitive recursion. If we consider the order relation (N, <), we obtain complete induction, and course-of-values recursion. The statement that (N, <) is well-founded is also known as the well-ordering principle. There are other interesting special cases of well-founded induction. When the well-founded relation is the usual ordering on the class of all ordinal numbers, the technique is called transfinite induction. When the well-founded set is a set of recursively-defined data structures, the technique is called structural induction. When the well-founded relation is set membership on the universal class, the technique is known as ∈-induction. See those articles for more details. Examples Well-founded relations that are not totally ordered include: • The positive integers {1, 2, 3, ...}, with the order defined by a < b if and only if a divides b and a ≠ b. 
• The set of all finite strings over a fixed alphabet, with the order defined by s < t if and only if s is a proper substring of t.
• The set N × N of pairs of natural numbers, ordered by (n1, n2) < (m1, m2) if and only if n1 < m1 and n2 < m2.
• Every class whose elements are sets, with the relation ∈ ("is an element of"). This is the axiom of regularity.
• The nodes of any finite directed acyclic graph, with the relation R defined such that a R b if and only if there is an edge from a to b.

Examples of relations that are not well-founded include:
• The negative integers {−1, −2, −3, ...}, with the usual order, since any unbounded subset has no least element.
• The set of strings over a finite alphabet with more than one element, under the usual (lexicographic) order, since the sequence "B" > "AB" > "AAB" > "AAAB" > ... is an infinite descending chain. This relation fails to be well-founded even though the entire set has a minimum element, namely the empty string.
• The set of non-negative rational numbers (or reals) under the standard ordering, since, for example, the subset of positive rationals (or reals) lacks a minimum.

Other properties

If (X, <) is a well-founded relation and x is an element of X, then the descending chains starting at x are all finite, but this does not mean that their lengths are necessarily bounded. Consider the following example: Let X be the union of the positive integers with a new element ω that is bigger than any integer. Then X is a well-founded set, but there are descending chains starting at ω of arbitrarily great (finite) length; the chain ω, n − 1, n − 2, ..., 2, 1 has length n for any n. The Mostowski collapse lemma implies that set membership is universal among the extensional well-founded relations: for any set-like well-founded relation R on a class X that is extensional, there exists a class C such that (X, R) is isomorphic to (C, ∈).
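The recursion scheme from the induction-and-recursion section above can be sketched in code. Here `wf_recursion` and its memoization are our illustration, and termination relies on the relation being well-founded and set-like, with finitely many predecessors per element:

```python
def wf_recursion(preds, F):
    """Well-founded recursion: build G satisfying G(x) = F(x, g), where
    g maps each predecessor y R x (listed by preds) to the
    already-computed value G(y).  Terminates because the relation is
    assumed well-founded and each element has finitely many predecessors."""
    cache = {}
    def G(x):
        if x not in cache:
            cache[x] = F(x, {y: G(y) for y in preds(x)})
        return cache[x]
    return G

# Recursion on the successor relation S (y R x iff x = y + 1) is
# primitive recursion; e.g. the factorial from n! = n * (n-1)!.
fact = wf_recursion(lambda n: [n - 1] if n > 0 else [],
                    lambda n, g: n * g[n - 1] if n > 0 else 1)
assert fact(5) == 120
```

Swapping in a different well-founded `preds` (e.g. the proper divisors of n) yields course-of-values-style recursions in the same framework; for deep chains, Python's recursion limit would need raising.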
Reflexivity A relation R is said to be reflexive if a R a holds for every a in the domain of the relation. Every reflexive relation on a nonempty domain has infinite descending chains, because any constant sequence is a descending chain. For example, in the natural numbers with their usual order ≤, we have 1 ≥ 1 ≥ 1 ≥ .... To avoid these trivial descending sequences, when working with a partial order ≤, it is common to apply the definition of well foundedness (perhaps implicitly) to the alternate relation < defined such that a < b if and only if a ≤ b and a ≠ b. More generally, when working with a preorder ≤, it is common to use the relation < defined such that a < b if and only if a ≤ b and b ≰ a. In the context of the natural numbers, this means that the relation <, which is well-founded, is used instead of the relation ≤, which is not. In some texts, the definition of a well-founded relation is changed from the definition above to include these conventions. References 1. See Definition 6.21 in Zaring W.M., G. Takeuti (1971). Introduction to axiomatic set theory (2nd, rev. ed.). New York: Springer-Verlag. ISBN 0387900241. 2. "Infinite Sequence Property of Strictly Well-Founded Relation". ProofWiki. Retrieved 10 May 2021. 3. Fraisse, R. (15 December 2000). Theory of Relations, Volume 145 - 1st Edition (1st ed.). Elsevier. p. 46. ISBN 9780444505422. Retrieved 20 February 2019. 4. Bourbaki, N. (1972) Elements of mathematics. Commutative algebra, Addison-Wesley. • Just, Winfried and Weese, Martin (1998) Discovering Modern Set Theory. I, American Mathematical Society ISBN 0-8218-0266-6. 
• Karel Hrbáček & Thomas Jech (1999) Introduction to Set Theory, 3rd edition, "Well-founded relations", pages 251–5, Marcel Dekker ISBN 0-8247-7915-0
Tarski–Grothendieck • Von Neumann–Bernays–Gödel • Ackermann • Constructive Formal systems (list), Language & Syntax • Alphabet • Arity • Automata • Axiom schema • Expression • Ground • Extension • by definition • Conservative • Relation • Formation rule • Grammar • Formula • Atomic • Closed • Ground • Open • Free/bound variable • Language • Metalanguage • Logical connective • ¬ • ∨ • ∧ • → • ↔ • = • Predicate • Functional • Variable • Propositional variable • Proof • Quantifier • ∃ • ! • ∀ • rank • Sentence • Atomic • Spectrum • Signature • String • Substitution • Symbol • Function • Logical/Constant • Non-logical • Variable • Term • Theory • list Example axiomatic systems  (list) • of arithmetic: • Peano • second-order • elementary function • primitive recursive • Robinson • Skolem • of the real numbers • Tarski's axiomatization • of Boolean algebras • canonical • minimal axioms • of geometry: • Euclidean: • Elements • Hilbert's • Tarski's • non-Euclidean • Principia Mathematica Proof theory • Formal proof • Natural deduction • Logical consequence • Rule of inference • Sequent calculus • Theorem • Systems • Axiomatic • Deductive • Hilbert • list • Complete theory • Independence (from ZFC) • Proof of impossibility • Ordinal analysis • Reverse mathematics • Self-verifying theories Model theory • Interpretation • Function • of models • Model • Equivalence • Finite • Saturated • Spectrum • Submodel • Non-standard model • of arithmetic • Diagram • Elementary • Categorical theory • Model complete theory • Satisfiability • Semantics of logic • Strength • Theories of truth • Semantic • Tarski's • Kripke's • T-schema • Transfer principle • Truth predicate • Truth value • Type • Ultraproduct • Validity Computability theory • Church encoding • Church–Turing thesis • Computably enumerable • Computable function • Computable set • Decision problem • Decidable • Undecidable • P • NP • P versus NP problem • Kolmogorov complexity • Lambda calculus • Primitive recursive function • 
Recursion • Recursive set • Turing machine • Type theory Related • Abstract logic • Category theory • Concrete/Abstract Category • Category of sets • History of logic • History of mathematical logic • timeline • Logicism • Mathematical object • Philosophy of mathematics • Supertask  Mathematics portal Order theory • Topics • Glossary • Category Key concepts • Binary relation • Boolean algebra • Cyclic order • Lattice • Partial order • Preorder • Total order • Weak ordering Results • Boolean prime ideal theorem • Cantor–Bernstein theorem • Cantor's isomorphism theorem • Dilworth's theorem • Dushnik–Miller theorem • Hausdorff maximal principle • Knaster–Tarski theorem • Kruskal's tree theorem • Laver's theorem • Mirsky's theorem • Szpilrajn extension theorem • Zorn's lemma Properties & Types (list) • Antisymmetric • Asymmetric • Boolean algebra • topics • Completeness • Connected • Covering • Dense • Directed • (Partial) Equivalence • Foundational • Heyting algebra • Homogeneous • Idempotent • Lattice • Bounded • Complemented • Complete • Distributive • Join and meet • Reflexive • Partial order • Chain-complete • Graded • Eulerian • Strict • Prefix order • Preorder • Total • Semilattice • Semiorder • Symmetric • Total • Tolerance • Transitive • Well-founded • Well-quasi-ordering (Better) • (Pre) Well-order Constructions • Composition • Converse/Transpose • Lexicographic order • Linear extension • Product order • Reflexive closure • Series-parallel partial order • Star product • Symmetric closure • Transitive closure Topology & Orders • Alexandrov topology & Specialization preorder • Ordered topological vector space • Normal cone • Order topology • Order topology • Topological vector lattice • Banach • Fréchet • Locally convex • Normed Related • Antichain • Cofinal • Cofinality • Comparability • Graph • Duality • Filter • Hasse diagram • Ideal • Net • Subnet • Order morphism • Embedding • Isomorphism • Order type • Ordered field • Ordered vector space • Partially 
ordered • Positive cone • Riesz space • Upper set • Young's lattice
Wikipedia
Well-order

In mathematics, a well-order (or well-ordering or well-order relation) on a set S is a total order on S with the property that every non-empty subset of S has a least element in this ordering. The set S together with the well-order relation is then called a well-ordered set. In some academic articles and textbooks these terms are instead written as wellorder, wellordered, and wellordering or well order, well ordered, and well ordering.

Transitive binary relations

| | Symmetric | Antisymmetric | Connected (total, semiconnex) | Well-founded | Has joins | Has meets | Reflexive | Irreflexive (anti-reflexive) | Asymmetric |
|---|---|---|---|---|---|---|---|---|---|
| Equivalence relation | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Preorder (quasiorder) | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Partial order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Total preorder | ✗ | ✗ | Y | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Total order | ✗ | Y | Y | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Prewellordering | ✗ | ✗ | Y | Y | ✗ | ✗ | Y | ✗ | ✗ |
| Well-quasi-ordering | ✗ | ✗ | ✗ | Y | ✗ | ✗ | Y | ✗ | ✗ |
| Well-ordering | ✗ | Y | Y | Y | ✗ | ✗ | Y | ✗ | ✗ |
| Lattice | ✗ | Y | ✗ | ✗ | Y | Y | Y | ✗ | ✗ |
| Join-semilattice | ✗ | Y | ✗ | ✗ | Y | ✗ | Y | ✗ | ✗ |
| Meet-semilattice | ✗ | Y | ✗ | ✗ | ✗ | Y | Y | ✗ | ✗ |
| Strict partial order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | Y |
| Strict weak order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | Y |
| Strict total order | ✗ | Y | Y | ✗ | ✗ | ✗ | ✗ | Y | Y |

Definitions, for all $a,b$ and $S\neq \varnothing$:
• Symmetric: $aRb\Rightarrow bRa$
• Antisymmetric: $aRb{\text{ and }}bRa\Rightarrow a=b$
• Connected: $a\neq b\Rightarrow aRb{\text{ or }}bRa$
• Well-founded: $\min S$ exists
• Has joins: $a\vee b$ exists
• Has meets: $a\wedge b$ exists
• Reflexive: $aRa$
• Irreflexive: ${\text{not }}aRa$
• Asymmetric: $aRb\Rightarrow {\text{not }}bRa$

Y indicates that the column's property is always true of the row's term (at the very left), while ✗ indicates that the property is not guaranteed in general (it might, or might not, hold).
For example, that every equivalence relation is symmetric, but not necessarily antisymmetric, is indicated by Y in the "Symmetric" column and ✗ in the "Antisymmetric" column, respectively. All definitions tacitly require the homogeneous relation $R$ to be transitive: for all $a,b,c,$ if $aRb$ and $bRc$ then $aRc.$ A term's definition may require additional properties that are not listed in this table. Every non-empty well-ordered set has a least element. Every element s of a well-ordered set, except a possible greatest element, has a unique successor (next element), namely the least element of the subset of all elements greater than s. There may be elements besides the least element which have no predecessor (see § Natural numbers below for an example). A well-ordered set S contains, for every subset T with an upper bound, a least upper bound, namely the least element of the subset of all upper bounds of T in S. If ≤ is a non-strict well ordering, then < is a strict well ordering. A relation is a strict well ordering if and only if it is a well-founded strict total order. The distinction between strict and non-strict well orders is often ignored since they are easily interconvertible. Every well-ordered set is uniquely order isomorphic to a unique ordinal number, called the order type of the well-ordered set. The well-ordering theorem, which is equivalent to the axiom of choice, states that every set can be well ordered. If a set is well ordered (or even if it merely admits a well-founded relation), the proof technique of transfinite induction can be used to prove that a given statement is true for all elements of the set. The observation that the natural numbers are well ordered by the usual less-than relation is commonly called the well-ordering principle (for natural numbers).

Ordinal numbers

Main article: Ordinal number

Every well-ordered set is uniquely order isomorphic to a unique ordinal number, called the order type of the well-ordered set.
The position of each element within the ordered set is also given by an ordinal number. In the case of a finite set, the basic operation of counting, to find the ordinal number of a particular object, or to find the object with a particular ordinal number, corresponds to assigning ordinal numbers one by one to the objects. The size (number of elements, cardinal number) of a finite set is equal to the order type. Counting in the everyday sense typically starts from one, so it assigns to each object the size of the initial segment with that object as last element. Note that these numbers are one more than the formal ordinal numbers according to the isomorphic order, because these are equal to the number of earlier objects (which corresponds to counting from zero). Thus for finite n, the expression "n-th element" of a well-ordered set requires context to know whether this counts from zero or one. In a notation "β-th element" where β can also be an infinite ordinal, it will typically count from zero. For an infinite set the order type determines the cardinality, but not conversely: well-ordered sets of a particular cardinality can have many different order types (see § Natural numbers, below, for an example). For a countably infinite set, the set of possible order types is uncountable. Examples and counterexamples Natural numbers The standard ordering ≤ of the natural numbers is a well ordering and has the additional property that every non-zero natural number has a unique predecessor. Another well ordering of the natural numbers is given by defining that all even numbers are less than all odd numbers, and the usual ordering applies within the evens and the odds: ${\begin{matrix}0&2&4&6&8&\dots &1&3&5&7&9&\dots \end{matrix}}$ This is a well-ordered set of order type ω + ω. Every element has a successor (there is no largest element). Two elements lack a predecessor: 0 and 1. 
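The evens-before-odds ordering above can be realized concretely as a sort key, since Python compares tuples lexicographically. A minimal sketch; `evens_first_key` is an illustrative name, not a standard function:

```python
def evens_first_key(n):
    """Sort key for the well-ordering 0, 2, 4, ..., 1, 3, 5, ...:
    all even numbers precede all odd numbers, with the usual order
    within each class (order type ω + ω)."""
    return (n % 2, n)

nums = sorted(range(10), key=evens_first_key)
# nums == [0, 2, 4, 6, 8, 1, 3, 5, 7, 9]

# 0 and 1 are the two elements with no predecessor: each one begins
# one of the two copies of ω.
```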
Integers Unlike the standard ordering ≤ of the natural numbers, the standard ordering ≤ of the integers is not a well ordering, since, for example, the set of negative integers does not contain a least element. The following binary relation R is an example of well ordering of the integers: x R y if and only if one of the following conditions holds: 1. x = 0 2. x is positive, and y is negative 3. x and y are both positive, and x ≤ y 4. x and y are both negative, and |x| ≤ |y| This relation R can be visualized as follows: ${\begin{matrix}0&1&2&3&4&\dots &-1&-2&-3&\dots \end{matrix}}$ R is isomorphic to the ordinal number ω + ω. Another relation for well ordering the integers is the following definition: $x\leq _{z}y$ if and only if $|x|<|y|\qquad {\text{or}}\qquad |x|=|y|{\text{ and }}x\leq y.$ This well order can be visualized as follows: ${\begin{matrix}0&-1&1&-2&2&-3&3&-4&4&\dots \end{matrix}}$ This has the order type ω. Reals The standard ordering ≤ of any real interval is not a well ordering, since, for example, the open interval $(0,1)\subseteq [0,1]$ does not contain a least element. From the ZFC axioms of set theory (including the axiom of choice) one can show that there is a well order of the reals. Also Wacław Sierpiński proved that ZF + GCH (the generalized continuum hypothesis) imply the axiom of choice and hence a well order of the reals. Nonetheless, it is possible to show that the ZFC+GCH axioms alone are not sufficient to prove the existence of a definable (by a formula) well order of the reals.[1] However it is consistent with ZFC that a definable well ordering of the reals exists—for example, it is consistent with ZFC that V=L, and it follows from ZFC+V=L that a particular formula well orders the reals, or indeed any set. An uncountable subset of the real numbers with the standard ordering ≤ cannot be a well order: Suppose X is a subset of $\mathbb {R} $ well ordered by ≤. 
For each x in X, let s(x) be the successor of x in ≤ ordering on X (unless x is the last element of X). Let $A=\{(x,s(x))\,|\,x\in X\}$ whose elements are nonempty and disjoint intervals. Each such interval contains at least one rational number, so there is an injective function from A to $\mathbb {Q} .$ There is an injection from X to A (except possibly for a last element of X which could be mapped to zero later). And it is well known that there is an injection from $\mathbb {Q} $ to the natural numbers (which could be chosen to avoid hitting zero). Thus there is an injection from X to the natural numbers which means that X is countable. On the other hand, a countably infinite subset of the reals may or may not be a well order with the standard ≤. For example, • The natural numbers are a well order under the standard ordering ≤. • The set $\{1/n\,|\,n=1,2,3,\dots \}$ has no least element and is therefore not a well order under standard ordering ≤. Examples of well orders: • The set of numbers $\{-2^{-n}\,|\,0\leq n<\omega \}$ has order type ω. • The set of numbers $\{-2^{-n}-2^{-m-n}\,|\,0\leq m,n<\omega \}$ has order type ω2. The previous set is the set of limit points within the set. Within the set of real numbers, either with the ordinary topology or the order topology, 0 is also a limit point of the set. It is also a limit point of the set of limit points. • The set of numbers $\{-2^{-n}\,|\,0\leq n<\omega \}\cup \{1\}$ has order type ω + 1. With the order topology of this set, 1 is a limit point of the set. With the ordinary topology (or equivalently, the order topology) of the real numbers it is not. Equivalent formulations If a set is totally ordered, then the following are equivalent to each other: 1. The set is well ordered. That is, every nonempty subset has a least element. 2. Transfinite induction works for the entire ordered set. 3. 
Every strictly decreasing sequence of elements of the set must terminate after only finitely many steps (assuming the axiom of dependent choice). 4. Every subordering is isomorphic to an initial segment. Order topology Every well-ordered set can be made into a topological space by endowing it with the order topology. With respect to this topology there can be two kinds of elements: • isolated points — these are the minimum and the elements with a predecessor. • limit points — this type does not occur in finite sets, and may or may not occur in an infinite set; the infinite sets without limit point are the sets of order type ω, for example the natural numbers $\mathbb {N} .$ For subsets we can distinguish: • Subsets with a maximum (that is, subsets which are bounded by themselves); this can be an isolated point or a limit point of the whole set; in the latter case it may or may not be also a limit point of the subset. • Subsets which are unbounded by themselves but bounded in the whole set; they have no maximum, but a supremum outside the subset; if the subset is non-empty this supremum is a limit point of the subset and hence also of the whole set; if the subset is empty this supremum is the minimum of the whole set. • Subsets which are unbounded in the whole set. A subset is cofinal in the whole set if and only if it is unbounded in the whole set or it has a maximum which is also maximum of the whole set. A well-ordered set as topological space is a first-countable space if and only if it has order type less than or equal to ω1 (omega-one), that is, if and only if the set is countable or has the smallest uncountable order type. See also • Tree (set theory), generalization • Ordinal number • Well-founded set • Well partial order • Prewellordering • Directed set References 1. Feferman, S. (1964). "Some Applications of the Notions of Forcing and Generic Sets". Fundamenta Mathematicae. 56 (3): 325–345. doi:10.4064/fm-56-3-325-345. • Folland, Gerald B. (1999). 
Real Analysis: Modern Techniques and Their Applications. Pure and applied mathematics (2nd ed.). Wiley. pp. 4–6, 9. ISBN 978-0-471-31716-6.
Well-ordering principle In mathematics, the well-ordering principle states that every non-empty set of positive integers contains a least element.[1] In other words, the set of positive integers is well-ordered by its "natural" or "magnitude" order in which $x$ precedes $y$ if and only if $y$ is either $x$ or the sum of $x$ and some positive integer (other orderings include the ordering $2,4,6,...$; and $1,3,5,...$). Not to be confused with Well-ordering theorem. The phrase "well-ordering principle" is sometimes taken to be synonymous with the "well-ordering theorem". On other occasions it is understood to be the proposition that the set of integers $\{\ldots ,-2,-1,0,1,2,3,\ldots \}$ contains a well-ordered subset, called the natural numbers, in which every nonempty subset contains a least element. Properties Depending on the framework in which the natural numbers are introduced, this (second-order) property of the set of natural numbers is either an axiom or a provable theorem. For example: • In Peano arithmetic, second-order arithmetic and related systems, and indeed in most (not necessarily formal) mathematical treatments of the well-ordering principle, the principle is derived from the principle of mathematical induction, which is itself taken as basic. • Considering the natural numbers as a subset of the real numbers, and assuming that we know already that the real numbers are complete (again, either as an axiom or a theorem about the real number system), i.e., every bounded (from below) set has an infimum, then also every set $A$ of natural numbers has an infimum, say $a^{*}$. We can now find an integer $n^{*}$ such that $a^{*}$ lies in the half-open interval $(n^{*}-1,n^{*}]$, and can then show that we must have $a^{*}=n^{*}$, and $n^{*}$ in $A$. • In axiomatic set theory, the natural numbers are defined as the smallest inductive set (i.e., set containing 0 and closed under the successor operation). 
One can (even without invoking the regularity axiom) show that the set of all natural numbers $n$ such that "$\{0,\ldots ,n\}$ is well-ordered" is inductive, and must therefore contain all natural numbers; from this property one can conclude that the set of all natural numbers is also well-ordered. In the second sense, this phrase is used when that proposition is relied on for the purpose of justifying proofs that take the following form: to prove that every natural number belongs to a specified set $S$, assume the contrary, which implies that the set of counterexamples is non-empty and thus contains a smallest counterexample. Then show that for any counterexample there is a still smaller counterexample, producing a contradiction. This mode of argument is the contrapositive of proof by complete induction. It is known light-heartedly as the "minimal criminal" method and is similar in its nature to Fermat's method of "infinite descent". Garrett Birkhoff and Saunders Mac Lane wrote in A Survey of Modern Algebra that this property, like the least upper bound axiom for real numbers, is non-algebraic; i.e., it cannot be deduced from the algebraic properties of the integers (which form an ordered integral domain). Example Applications The well-ordering principle can be used in the following proofs. Prime Factorization Theorem: Every integer greater than one can be factored as a product of primes. This theorem constitutes part of the Prime Factorization Theorem. Proof (by well-ordering principle). Let $C$ be the set of all integers greater than one that cannot be factored as a product of primes. We show that $C$ is empty. Assume for the sake of contradiction that $C$ is not empty. Then, by the well-ordering principle, there is a least element $n\in C$; $n$ cannot be prime since a prime number itself is considered a length-one product of primes. By the definition of non-prime numbers, $n$ has factors $a,b$, where $a,b$ are integers greater than one and less than $n$. 
Since $a,b<n$, they are not in $C$ as $n$ is the smallest element of $C$. So, $a,b$ can be factored as products of primes, where $a=p_{1}p_{2}...p_{k}$ and $b=q_{1}q_{2}...q_{l}$, meaning that $n=p_{1}p_{2}...p_{k}\cdot q_{1}q_{2}...q_{l}$, a factor of primes. This contradicts the assumption that $n\in C$, so the assumption that $C$ is nonempty must be false.[2] Integer summation Theorem: $1+2+3+...+n={\frac {n(n+1)}{2}}$ for all nonnegative integers $n$. Proof. Suppose for the sake of contradiction that the above theorem is false. Then, there exists a non-empty set of non-negative integers $C=\{n\in \mathbb {N} \mid 1+2+3+...+n\neq {\frac {n(n+1)}{2}}\}$. By the well-ordering principle, $C$ has a minimum element $c$ such that when $n=c$, the equation is false, but true for all non-negative integers less than $c$. The equation is true for $n=0$, so $c>0$; $c-1$ is a non-negative integer less than $c$, so the equation holds for $c-1$ as it is not in $C$. Therefore, ${\begin{aligned}1+2+3+...+(c-1)&={\frac {(c-1)c}{2}}\\1+2+3+...+(c-1)+c&={\frac {(c-1)c}{2}}+c\\&={\frac {c^{2}-c}{2}}+{\frac {2c}{2}}\\&={\frac {c^{2}+c}{2}}\\&={\frac {c(c+1)}{2}}\end{aligned}}$ which shows that the equation holds for $c$, a contradiction. So, the equation must hold for all non-negative integers.[2] References 1. Apostol, Tom (1976). Introduction to Analytic Number Theory. New York: Springer-Verlag. pp. 13. ISBN 0-387-90163-9. 2. Lehman, Eric; Meyer, Albert R; Leighton, F Tom. Mathematics for Computer Science (PDF). Retrieved 2 May 2023.
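Both arguments above have a computational shadow: searching 1, 2, 3, ... in order either finds the minimal counterexample or confirms the claim up to a bound, and factoring by the least nontrivial divisor mirrors the minimal-criminal step. A sketch under these assumptions; the helper names are illustrative:

```python
def least_counterexample(predicate, bound):
    """Return the least n in 1..bound violating `predicate`, or None.
    Well-ordering principle: if the set of counterexamples is
    non-empty, it has a least element, which this scan finds."""
    for n in range(1, bound + 1):
        if not predicate(n):
            return n
    return None

# Integer summation: 1 + 2 + ... + n == n(n+1)/2 has no counterexample.
summation_ok = lambda n: sum(range(1, n + 1)) == n * (n + 1) // 2
assert least_counterexample(summation_ok, 1000) is None

def factor_into_primes(n):
    """Factor n > 1 by splitting off its least nontrivial divisor,
    which is necessarily prime -- mirroring the proof: a composite
    minimal counterexample would split into smaller, factorable numbers."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return [d] + factor_into_primes(n // d)
        d += 1
    return [n]  # n itself is prime

assert factor_into_primes(84) == [2, 2, 3, 7]
```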
Well-pointed category In category theory, a category with a terminal object $1$ is well-pointed if for every pair of arrows $f,g:A\to B$ such that $f\neq g$, there is an arrow $p:1\to A$ such that $f\circ p\neq g\circ p$. (The arrows $p$ are called the global elements or points of the category; a well-pointed category is thus one that has "enough points" to distinguish non-equal arrows.) See also • Pointed category References • Pitts, Andrew M. (2013). Nominal Sets: Names and Symmetry in Computer Science. Cambridge Tracts in Theoretical Computer Science. Vol. 57. Cambridge University Press. p. 16. ISBN 1107017785.
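In the category of sets, the terminal object is a one-element set, and global elements 1 → A correspond exactly to elements of A; well-pointedness then says that two unequal functions must disagree at some point. A small sketch of this "enough points" property (names are illustrative):

```python
def separating_point(f, g, domain):
    """Return a point of `domain` at which f and g differ, or None.
    In the category of sets, f != g as functions iff such a point
    exists -- this is precisely well-pointedness, with global
    elements 1 -> A playing the role of elements of A."""
    for a in domain:
        if f(a) != g(a):
            return a
    return None

f = lambda x: x * x
g = lambda x: x + 2
p = separating_point(f, g, range(5))  # f(0) = 0 but g(0) = 2
```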
Elliptic operator In the theory of partial differential equations, elliptic operators are differential operators that generalize the Laplace operator. They are defined by the condition that the coefficients of the highest-order derivatives be positive, which implies the key property that the principal symbol is invertible, or equivalently that there are no real characteristic directions. Elliptic operators are typical of potential theory, and they appear frequently in electrostatics and continuum mechanics. Elliptic regularity implies that their solutions tend to be smooth functions (if the coefficients in the operator are smooth). Steady-state solutions to hyperbolic and parabolic equations generally solve elliptic equations. Definitions Let $L$ be a linear differential operator of order m on a domain $\Omega $ in Rn given by $Lu=\sum _{|\alpha |\leq m}a_{\alpha }(x)\partial ^{\alpha }u$ where $\alpha =(\alpha _{1},\dots ,\alpha _{n})$ denotes a multi-index, and $\partial ^{\alpha }u=\partial _{1}^{\alpha _{1}}\cdots \partial _{n}^{\alpha _{n}}u$ denotes the partial derivative of order $\alpha _{i}$ in $x_{i}$. Then $L$ is called elliptic if for every x in $\Omega $ and every non-zero $\xi $ in Rn, $\sum _{|\alpha |=m}a_{\alpha }(x)\xi ^{\alpha }\neq 0,$ where $\xi ^{\alpha }=\xi _{1}^{\alpha _{1}}\cdots \xi _{n}^{\alpha _{n}}$. In many applications, this condition is not strong enough, and instead a uniform ellipticity condition may be imposed for operators of order m = 2k: $(-1)^{k}\sum _{|\alpha |=2k}a_{\alpha }(x)\xi ^{\alpha }>C|\xi |^{2},$ where C is a positive constant. Note that ellipticity only depends on the highest-order terms.[1] A nonlinear operator $L(u)=F\left(x,u,\left(\partial ^{\alpha }u\right)_{|\alpha |\leq m}\right)$ is elliptic if its linearization is; i.e. the first-order Taylor expansion with respect to u and its derivatives about any point is an elliptic operator. 
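For a second-order operator in two variables the condition above can be made concrete: the principal symbol is the quadratic form $a_{11}\xi_{1}^{2}+2a_{12}\xi_{1}\xi_{2}+a_{22}\xi_{2}^{2}$, and its minimum over unit vectors is the smaller eigenvalue of the coefficient matrix. A sketch, assuming constant coefficients at a fixed point; the function name is illustrative:

```python
import math

def uniform_ellipticity_constant_2d(a11, a12, a22):
    """For L = -(a11 d_x^2 + 2 a12 d_x d_y + a22 d_y^2) in the plane,
    the principal symbol is  a11 xi1^2 + 2 a12 xi1 xi2 + a22 xi2^2.
    Its minimum over unit vectors xi is the smaller eigenvalue of the
    symmetric matrix [[a11, a12], [a12, a22]]; the operator is
    uniformly elliptic at this point iff that value is positive."""
    mean = (a11 + a22) / 2
    radius = math.hypot((a11 - a22) / 2, a12)
    return mean - radius

# The negative Laplacian (a11 = a22 = 1, a12 = 0) is uniformly
# elliptic with constant C = 1:
assert uniform_ellipticity_constant_2d(1, 0, 1) == 1.0

# The degenerate operator -d_x^2 alone (a22 = 0) is not elliptic:
# its symbol vanishes at xi = (0, 1), so the constant is 0.
assert uniform_ellipticity_constant_2d(1, 0, 0) == 0.0
```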
Example 1 The negative of the Laplacian in Rd given by $-\Delta u=-\sum _{i=1}^{d}\partial _{i}^{2}u$ is a uniformly elliptic operator. The Laplace operator occurs frequently in electrostatics. If ρ is the charge density within some region Ω, the potential Φ must satisfy the equation $-\Delta \Phi =4\pi \rho .$ Example 2 Given a matrix-valued function A(x) which is symmetric and positive definite for every x, having components aij, the operator $Lu=-\partial _{i}\left(a^{ij}(x)\partial _{j}u\right)+b^{j}(x)\partial _{j}u+cu$ is elliptic. This is the most general form of a second-order divergence form linear elliptic differential operator. The Laplace operator is obtained by taking A = I. These operators also occur in electrostatics in polarized media. Example 3 For p a non-negative number, the p-Laplacian is a nonlinear elliptic operator defined by $L(u)=-\sum _{i=1}^{d}\partial _{i}\left(|\nabla u|^{p-2}\partial _{i}u\right).$ A similar nonlinear operator occurs in glacier mechanics. The Cauchy stress tensor of ice, according to Glen's flow law, is given by $\tau _{ij}=B\left(\sum _{k,l=1}^{3}\left(\partial _{l}u_{k}\right)^{2}\right)^{-{\frac {1}{3}}}\cdot {\frac {1}{2}}\left(\partial _{j}u_{i}+\partial _{i}u_{j}\right)$ for some constant B. The velocity of an ice sheet in steady state will then solve the nonlinear elliptic system $\sum _{j=1}^{3}\partial _{j}\tau _{ij}+\rho g_{i}-\partial _{i}p=Q,$ where ρ is the ice density, g is the gravitational acceleration vector, p is the pressure and Q is a forcing term. Elliptic regularity theorem Let L be an elliptic operator of order 2k with coefficients having 2k continuous derivatives. The Dirichlet problem for L is to find a function u, given a function f and some appropriate boundary values, such that Lu = f and such that u has the appropriate boundary values and normal derivatives. 
The existence theory for elliptic operators, using Gårding's inequality and the Lax–Milgram lemma, only guarantees that a weak solution u exists in the Sobolev space Hk. This situation is ultimately unsatisfactory, as the weak solution u might not have enough derivatives for the expression Lu to be well-defined in the classical sense. The elliptic regularity theorem guarantees that, provided f is square-integrable, u will in fact have 2k square-integrable weak derivatives. In particular, if f is infinitely-often differentiable, then so is u. Any differential operator exhibiting this property is called a hypoelliptic operator; thus, every elliptic operator is hypoelliptic. The property also means that every fundamental solution of an elliptic operator is infinitely differentiable in any neighborhood not containing 0. As an application, suppose a function $f$ satisfies the Cauchy–Riemann equations. Since the Cauchy-Riemann equations form an elliptic operator, it follows that $f$ is smooth. General definition Let $D$ be a (possibly nonlinear) differential operator between vector bundles of any rank. Take its principal symbol $\sigma _{\xi }(D)$ with respect to a one-form $\xi $. (Basically, what we are doing is replacing the highest order covariant derivatives $\nabla $ by vector fields $\xi $.) We say $D$ is weakly elliptic if $\sigma _{\xi }(D)$ is a linear isomorphism for every non-zero $\xi $. We say $D$ is (uniformly) strongly elliptic if for some constant $c>0$, $\left([\sigma _{\xi }(D)](v),v\right)\geq c\|v\|^{2}$ for all $\|\xi \|=1$ and all $v$. It is important to note that the definition of ellipticity in the previous part of the article is strong ellipticity. Here $(\cdot ,\cdot )$ is an inner product. Notice that the $\xi $ are covector fields or one-forms, but the $v$ are elements of the vector bundle upon which $D$ acts. The quintessential example of a (strongly) elliptic operator is the Laplacian (or its negative, depending upon convention). 
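As a concrete instance of the Laplacian as the model elliptic operator, the following sketch solves the one-dimensional Dirichlet problem −u″ = f, u(0) = u(1) = 0 by second-order central differences, using the standard Thomas algorithm for the resulting tridiagonal system. The function name is illustrative, not from any library:

```python
import math

def solve_poisson_1d(f, n):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 on n interior
    grid points: the discrete system has rows (-1, 2, -1) u = h^2 f,
    solved here by the Thomas (tridiagonal) algorithm."""
    h = 1.0 / (n + 1)
    b = [h * h * f((i + 1) * h) for i in range(n)]
    c = [0.0] * n  # modified superdiagonal
    d = [0.0] * n  # modified right-hand side
    c[0], d[0] = -1.0 / 2.0, b[0] / 2.0
    for i in range(1, n):           # forward elimination
        m = 2.0 + c[i - 1]          # 2 - (-1) * c[i-1], subdiagonal = -1
        c[i] = -1.0 / m
        d[i] = (b[i] + d[i - 1]) / m
    u = [0.0] * n
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        u[i] = d[i] - c[i] * u[i + 1]
    return u

# With f = pi^2 sin(pi x) the exact solution is u = sin(pi x);
# the discrete solution matches it to O(h^2).
u = solve_poisson_1d(lambda x: math.pi ** 2 * math.sin(math.pi * x), 99)
err = max(abs(u[i] - math.sin(math.pi * (i + 1) / 100)) for i in range(99))
assert err < 1e-3
```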
It is not hard to see that $D$ needs to be of even order for strong ellipticity to even be an option. Otherwise, just consider plugging in both $\xi $ and its negative. On the other hand, a weakly elliptic first-order operator, such as the Dirac operator can square to become a strongly elliptic operator, such as the Laplacian. The composition of weakly elliptic operators is weakly elliptic. Weak ellipticity is nevertheless strong enough for the Fredholm alternative, Schauder estimates, and the Atiyah–Singer index theorem. On the other hand, we need strong ellipticity for the maximum principle, and to guarantee that the eigenvalues are discrete, and their only limit point is infinity. See also • Elliptic partial differential equation • Hyperbolic partial differential equation • Parabolic partial differential equation • Hopf maximum principle • Elliptic complex • Ultrahyperbolic wave equation • Semi-elliptic operator • Weyl's lemma Notes 1. Note that this is sometimes called strict ellipticity, with uniform ellipticity being used to mean that an upper bound exists on the symbol of the operator as well. It is important to check the definitions the author is using, as conventions may differ. See, e.g., Evans, Chapter 6, for a use of the first definition, and Gilbarg and Trudinger, Chapter 3, for a use of the second. References • Evans, L. C. (2010) [1998], Partial differential equations, Graduate Studies in Mathematics, vol. 19 (2nd ed.), Providence, RI: American Mathematical Society, ISBN 978-0-8218-4974-3, MR 2597943 Review: Rauch, J. (2000). "Partial differential equations, by L. C. Evans" (PDF). Journal of the American Mathematical Society. 37 (3): 363–367. doi:10.1090/s0273-0979-00-00868-5. • Gilbarg, D.; Trudinger, N. S. (1983) [1977], Elliptic partial differential equations of second order, Grundlehren der Mathematischen Wissenschaften, vol. 224 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-13025-3, MR 0737190 • Shubin, M. A. 
(2001) [1994], "Elliptic operator", Encyclopedia of Mathematics, EMS Press

External links • Linear Elliptic Equations at EqWorld: The World of Mathematical Equations. • Nonlinear Elliptic Equations at EqWorld: The World of Mathematical Equations.
Well-ordering theorem In mathematics, the well-ordering theorem, also known as Zermelo's theorem, states that every set can be well-ordered. A set X is well-ordered by a strict total order if every non-empty subset of X has a least element under the ordering. The well-ordering theorem together with Zorn's lemma are the most important mathematical statements that are equivalent to the axiom of choice (often called AC, see also Axiom of choice § Equivalents).[1][2] Ernst Zermelo introduced the axiom of choice as an "unobjectionable logical principle" to prove the well-ordering theorem.[3] One can conclude from the well-ordering theorem that every set is susceptible to transfinite induction, which is considered by mathematicians to be a powerful technique.[3] One famous consequence of the theorem is the Banach–Tarski paradox. Not to be confused with Well-ordering principle. History Georg Cantor considered the well-ordering theorem to be a "fundamental principle of thought".[4] However, it is considered difficult or even impossible to visualize a well-ordering of $\mathbb {R} $; such a visualization would have to incorporate the axiom of choice.[5] In 1904, Gyula Kőnig claimed to have proven that such a well-ordering cannot exist. A few weeks later, Felix Hausdorff found a mistake in the proof.[6] It turned out, though, that in first-order logic the well-ordering theorem is equivalent to the axiom of choice, in the sense that the Zermelo–Fraenkel axioms with the axiom of choice included are sufficient to prove the well-ordering theorem, and conversely, the Zermelo–Fraenkel axioms without the axiom of choice but with the well-ordering theorem included are sufficient to prove the axiom of choice. (The same applies to Zorn's lemma.) 
In second-order logic, however, the well-ordering theorem is strictly stronger than the axiom of choice: from the well-ordering theorem one may deduce the axiom of choice, but from the axiom of choice one cannot deduce the well-ordering theorem.[7] There is a well-known joke about the three statements and their relative amenability to intuition: The axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?[8] Proof from axiom of choice The well-ordering theorem follows from the axiom of choice as follows.[9] Let the set we are trying to well-order be $A$, and let $f$ be a choice function for the family of non-empty subsets of $A$. For every ordinal $\alpha $, define an element $a_{\alpha }$ that is in $A$ by setting $a_{\alpha }\ =\ f(A\smallsetminus \{a_{\xi }\mid \xi <\alpha \})$ if this complement $A\smallsetminus \{a_{\xi }\mid \xi <\alpha \}$ is nonempty, and leave $a_{\alpha }$ undefined if it is empty. That is, $a_{\alpha }$ is chosen from the set of elements of $A$ that have not yet been assigned a place in the ordering (or is undefined if the entirety of $A$ has been successfully enumerated). Then $\langle a_{\alpha }\mid a_{\alpha }{\text{ is defined}}\rangle $ is a well-ordering of $A$, as desired. Proof of axiom of choice The axiom of choice can be proven from the well-ordering theorem as follows. To make a choice function for a collection of non-empty sets $E$, take the union of the sets in $E$ and call it $X$. There exists a well-ordering of $X$; let $R$ be such an ordering. The function that to each set $S$ of $E$ associates the smallest element of $S$, as ordered by (the restriction to $S$ of) $R$, is a choice function for the collection $E$.
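This construction can be illustrated on a finite family: a single well-ordering of the union yields, at one stroke, a choice function for every set in the family. The sketch below is only a finite illustration (Python cannot well-order an arbitrary set, and the helper name `choice_function` is ours, not standard):

```python
def choice_function(E):
    """Build a choice function for a family E of non-empty sets.

    We fix one enumeration of the union X; its positions stand in for the
    well-ordering R of the proof.  The single arbitrary choice is R itself.
    """
    X = sorted(set().union(*E), key=repr)          # one fixed ordering of X
    rank = {x: i for i, x in enumerate(X)}         # position of x under R
    return lambda S: min(S, key=rank.__getitem__)  # the R-least element of S

E = [{3, 1, 4}, {1, 5}, {9, 2, 6}]
f = choice_function(E)
print([f(S) for S in E])  # [1, 1, 2]: the R-least element of each set
```

Note that only one arbitrary choice is made, the enumeration of the union; every individual selection after that is determined.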
An essential point of this proof is that it involves only a single arbitrary choice, that of $R$; applying the well-ordering theorem to each member $S$ of $E$ separately would not work, since the theorem only asserts the existence of a well-ordering, and choosing a well-ordering for each $S$ would require just as many choices as simply choosing an element from each $S$. In particular, if $E$ contains uncountably many sets, making all uncountably many choices is not allowed under the axioms of Zermelo–Fraenkel set theory without the axiom of choice. Notes 1. Kuczma, Marek (2009). An introduction to the theory of functional equations and inequalities. Berlin: Springer. p. 14. ISBN 978-3-7643-8748-8. 2. Hazewinkel, Michiel (2001). Encyclopaedia of Mathematics: Supplement. Berlin: Springer. p. 458. ISBN 1-4020-0198-3. 3. Thierry, Vialar (1945). Handbook of Mathematics. Norderstedt: Springer. p. 23. ISBN 978-2-95-519901-5. 4. Georg Cantor (1883), "Ueber unendliche, lineare Punktmannichfaltigkeiten", Mathematische Annalen 21, pp. 545–591. 5. Sheppard, Barnaby (2014). The Logic of Infinity. Cambridge University Press. p. 174. ISBN 978-1-1070-5831-6. 6. Plotkin, J. M. (2005), "Introduction to "The Concept of Power in Set Theory"", Hausdorff on Ordered Sets, History of Mathematics, vol. 25, American Mathematical Society, pp. 23–30, ISBN 9780821890516 7. Shapiro, Stewart (1991). Foundations Without Foundationalism: A Case for Second-Order Logic. New York: Oxford University Press. ISBN 0-19-853391-8. 8. Krantz, Steven G. (2002), "The Axiom of Choice", in Krantz, Steven G. (ed.), Handbook of Logic and Proof Techniques for Computer Science, Birkhäuser Boston, pp. 121–126, doi:10.1007/978-1-4612-0115-1_9, ISBN 9781461201151 9. Jech, Thomas (2002). Set Theory (Third Millennium Edition). Springer. p. 48. ISBN 978-3-540-44085-7. External links • Mizar system proof: http://mizar.org/version/current/html/wellord2.html
Well-quasi-ordering In mathematics, specifically order theory, a well-quasi-ordering or wqo on a set $X$ is a quasi-ordering of $X$ for which every infinite sequence of elements $x_{0},x_{1},x_{2},\ldots $ from $X$ contains an increasing pair $x_{i}\leq x_{j}$ with $i<j.$

Transitive binary relations (Y: the column's property always holds for the row's term; ✗: the property is not guaranteed in general — it might, or might not, hold):

                        Sym  Antisym  Conn  Well-fnd  Joins  Meets  Refl  Irrefl  Asym
Equivalence relation     Y      ✗      ✗       ✗        ✗      ✗     Y      ✗      ✗
Preorder (quasiorder)    ✗      ✗      ✗       ✗        ✗      ✗     Y      ✗      ✗
Partial order            ✗      Y      ✗       ✗        ✗      ✗     Y      ✗      ✗
Total preorder           ✗      ✗      Y       ✗        ✗      ✗     Y      ✗      ✗
Total order              ✗      Y      Y       ✗        ✗      ✗     Y      ✗      ✗
Prewellordering          ✗      ✗      Y       Y        ✗      ✗     Y      ✗      ✗
Well-quasi-ordering      ✗      ✗      ✗       Y        ✗      ✗     Y      ✗      ✗
Well-ordering            ✗      Y      Y       Y        ✗      ✗     Y      ✗      ✗
Lattice                  ✗      Y      ✗       ✗        Y      Y     Y      ✗      ✗
Join-semilattice         ✗      Y      ✗       ✗        Y      ✗     Y      ✗      ✗
Meet-semilattice         ✗      Y      ✗       ✗        ✗      Y     Y      ✗      ✗
Strict partial order     ✗      Y      ✗       ✗        ✗      ✗     ✗      Y      Y
Strict weak order        ✗      Y      ✗       ✗        ✗      ✗     ✗      Y      Y
Strict total order       ✗      Y      Y       ✗        ✗      ✗     ✗      Y      Y

Definitions, for all $a,b$ and $S\neq \varnothing $: symmetric means $aRb\Rightarrow bRa$; antisymmetric means $aRb$ and $bRa$ imply $a=b$; connected means $a\neq b$ implies $aRb$ or $bRa$; well-founded means $\min S$ exists; "has joins" and "has meets" mean $a\vee b$ and $a\wedge b$ exist, respectively; reflexive means $aRa$; irreflexive (anti-reflexive) means not $aRa$; and asymmetric means $aRb\Rightarrow {\text{not }}bRa$. For example, that every equivalence relation is symmetric, but not necessarily antisymmetric, is indicated by Y in the "Sym" column and ✗ in the "Antisym" column, respectively.
All definitions tacitly require the homogeneous relation $R$ to be transitive: for all $a,b,c,$ if $aRb$ and $bRc$ then $aRc.$ A term's definition may require additional properties that are not listed in this table. Motivation Well-founded induction can be used on any set with a well-founded relation, so one is interested in when a quasi-order is well-founded. (Here, by abuse of terminology, a quasiorder $\leq $ is said to be well-founded if the corresponding strict order $x\leq y\land y\nleq x$ is a well-founded relation.) However, the class of well-founded quasiorders is not closed under certain operations: when a quasi-order is used to obtain a new quasi-order on a set of structures derived from our original set, the new quasiorder may fail to be well-founded. By placing stronger restrictions on the original well-founded quasiordering one can hope to ensure that the derived quasiorderings are still well-founded. An example of this is the power set operation. Given a quasiordering $\leq $ on a set $X$ one can define a quasiorder $\leq ^{+}$ on $X$'s power set $P(X)$ by setting $A\leq ^{+}B$ if and only if for each element of $A$ one can find some element of $B$ that is larger than it with respect to $\leq $. One can show that this quasiordering on $P(X)$ need not be well-founded, but if one takes the original quasi-ordering to be a well-quasi-ordering, then it is. Formal definition A well-quasi-ordering on a set $X$ is a quasi-ordering (i.e., a reflexive, transitive binary relation) such that any infinite sequence of elements $x_{0},x_{1},x_{2},\ldots $ from $X$ contains an increasing pair $x_{i}\leq x_{j}$ with $i<j$. The set $X$ is then said to be well-quasi-ordered, or wqo for short. A well partial order, or wpo, is a wqo that is a proper ordering relation, i.e., it is antisymmetric.
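On finite sets, the power-set quasi-order $\leq ^{+}$ from the Motivation section can be stated directly. The sketch below (the helper names are ours) uses divisibility on the natural numbers as the base quasi-order:

```python
# A <=+ B iff every element of A is below some element of B under leq.
def leq_plus(A, B, leq):
    return all(any(leq(a, b) for b in B) for a in A)

def divides(a, b):  # the base quasi-order: a | b
    return b % a == 0

print(leq_plus({2, 3}, {6}, divides))  # True: 2 | 6 and 3 | 6
print(leq_plus({4}, {6}, divides))     # False: 4 does not divide 6
```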
Among other ways of defining wqo's, one is to say that they are quasi-orderings that contain no infinite strictly decreasing sequences (of the form $x_{0}>x_{1}>x_{2}>\cdots $) and no infinite sequences of pairwise incomparable elements. Hence a quasi-order (X, ≤) is wqo if and only if (X, <) is well-founded and has no infinite antichains. Examples • $(\mathbb {N} ,\leq )$, the set of natural numbers with standard ordering, is a well partial order (in fact, a well-order). However, $(\mathbb {Z} ,\leq )$, the set of positive and negative integers, is not a well-quasi-order, because it is not well-founded (see Pic.1). • $(\mathbb {N} ,|)$, the set of natural numbers ordered by divisibility, is not a well-quasi-order: the prime numbers are an infinite antichain (see Pic.2). • $(\mathbb {N} ^{k},\leq )$, the set of vectors of $k$ natural numbers (where $k$ is finite) with component-wise ordering, is a well partial order (Dickson's lemma; see Pic.3). More generally, if $(X,\leq )$ is a well-quasi-order, then $(X^{k},\leq ^{k})$ is also a well-quasi-order for all $k$. • Let $X$ be an arbitrary finite set with at least two elements. The set $X^{*}$ of words over $X$ ordered lexicographically (as in a dictionary) is not a well-quasi-order because it contains the infinite decreasing sequence $b,ab,aab,aaab,\ldots $. Similarly, $X^{*}$ ordered by the prefix relation is not a well-quasi-order, because the previous sequence is an infinite antichain of this partial order. However, $X^{*}$ ordered by the subsequence relation is a well partial order.[1] (If $X$ has only one element, these three partial orders are identical.) • More generally, $(X^{*},\leq )$, the set of finite $X$-sequences ordered by embedding, is a well-quasi-order if and only if $(X,\leq )$ is a well-quasi-order (Higman's lemma). Recall that one embeds a sequence $u$ into a sequence $v$ by finding a subsequence of $v$ that has the same length as $u$ and that dominates it term by term.
When $(X,=)$ is an unordered set, $u\leq v$ if and only if $u$ is a subsequence of $v$. • $(X^{\omega },\leq )$, the set of infinite sequences over a well-quasi-order $(X,\leq )$, ordered by embedding, is not a well-quasi-order in general. That is, Higman's lemma does not carry over to infinite sequences. Better-quasi-orderings have been introduced to generalize Higman's lemma to sequences of arbitrary lengths. • Embedding between finite trees with nodes labeled by elements of a wqo $(X,\leq )$ is a wqo (Kruskal's tree theorem). • Embedding between infinite trees with nodes labeled by elements of a wqo $(X,\leq )$ is a wqo (Nash-Williams' theorem). • Embedding between countable scattered linear order types is a well-quasi-order (Laver's theorem). • Embedding between countable Boolean algebras is a well-quasi-order. This follows from Laver's theorem and a theorem of Ketonen. • Finite graphs ordered by a notion of embedding called "graph minor" form a well-quasi-order (Robertson–Seymour theorem). • Graphs of finite tree-depth ordered by the induced subgraph relation form a well-quasi-order,[2] as do the cographs ordered by induced subgraphs.[3] Wqo's versus well partial orders In practice, the wqo's one manipulates are quite often not orderings (see examples above), and the theory is technically smoother if we do not require antisymmetry, so it is built with wqo's as the basic notion. On the other hand, according to Milner 1985, "no real gain in generality is obtained by considering quasi-orders rather than partial orders... it is simply more convenient to do so." Observe that a wpo is a wqo, and that a wqo gives rise to a wpo between equivalence classes induced by the kernel of the wqo. For example, if we order $\mathbb {Z} $ by divisibility, we end up with $n\equiv m$ if and only if $n=\pm m$, so that $(\mathbb {Z} ,|)\approx (\mathbb {N} ,|)$.
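The subsequence embedding from Higman's lemma is easy to check on finite words; over an unordered finite alphabet, $u\leq v$ just means that $u$ is a subsequence of $v$. A small illustration (the helper names are ours):

```python
# Subsequence embedding on words (Higman's lemma, unordered alphabet case):
# u <= v iff the letters of u occur in v in order.  Over a finite alphabet
# this is a wqo, so every infinite list of words contains a pair
# words[i] <= words[j] with i < j.

def embeds(u, v):
    """True iff u is a subsequence of v."""
    it = iter(v)                        # "ch in it" consumes the iterator,
    return all(ch in it for ch in u)    # so matches are found left to right

def increasing_pair(words):
    """First pair (i, j) with i < j and words[i] embedding into words[j]."""
    for j in range(len(words)):
        for i in range(j):
            if embeds(words[i], words[j]):
                return (i, j)
    return None

# The lexicographically decreasing sequence from the text, b > ab > aab > ...,
# is nonetheless increasing for the embedding order:
print(increasing_pair(["b", "ab", "aab", "aaab"]))  # (0, 1): "b" embeds in "ab"
```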
Infinite increasing subsequences If $(X,\leq )$ is wqo then every infinite sequence $x_{0},x_{1},x_{2},\ldots ,$ contains an infinite increasing subsequence $x_{n_{0}}\leq x_{n_{1}}\leq x_{n_{2}}\leq \cdots $ (with $n_{0}<n_{1}<n_{2}<\cdots $). Such a subsequence is sometimes called perfect. This can be proved by a Ramsey argument: given some sequence $(x_{i})_{i}$, consider the set $I$ of indexes $i$ such that $x_{i}$ has no larger or equal $x_{j}$ to its right, i.e., with $i<j$. If $I$ is infinite, then the $I$-extracted subsequence contradicts the assumption that $X$ is wqo. So $I$ is finite, and any $x_{n}$ with $n$ larger than any index in $I$ can be used as the starting point of an infinite increasing subsequence. The existence of such infinite increasing subsequences is sometimes taken as a definition for well-quasi-ordering, leading to an equivalent notion. Properties of wqos • Given a quasiordering $(X,\leq )$ the quasiordering $(P(X),\leq ^{+})$ defined by $A\leq ^{+}B\iff \forall a\in A,\exists b\in B,a\leq b$ is well-founded if and only if $(X,\leq )$ is a wqo.[4] • A quasiordering is a wqo if and only if the corresponding partial order (obtained by quotienting by $x\sim y\iff x\leq y\land y\leq x$) has no infinite descending sequences or antichains. (This can be proved using a Ramsey argument as above.) • Given a well-quasi-ordering $(X,\leq )$, any sequence of upward-closed subsets $S_{0}\subseteq S_{1}\subseteq \cdots \subseteq X$ eventually stabilises (meaning there exists $n\in \mathbb {N} $ such that $S_{n}=S_{n+1}=\cdots $; a subset $S\subseteq X$ is called upward-closed if $\forall x,y\in X,x\leq y\wedge x\in S\Rightarrow y\in S$): assuming the contrary $\forall i\in \mathbb {N} ,\exists j\in \mathbb {N} ,j>i,\exists x\in S_{j}\setminus S_{i}$, a contradiction is reached by extracting an infinite non-ascending subsequence. 
• Given a well-quasi-ordering $(X,\leq )$, any subset $S$ of $X$ has a finite number of minimal elements with respect to $\leq $, for otherwise the minimal elements of $S$ would constitute an infinite antichain. See also • Better-quasi-ordering • Prewellordering • Well-order Notes ^ Here x < y means: $x\leq y$ and $y\nleq x.$ References 1. Gasarch, W. (1998), "A survey of recursive combinatorics", Handbook of Recursive Mathematics, Vol. 2, Stud. Logic Found. Math., vol. 139, Amsterdam: North-Holland, pp. 1041–1176, doi:10.1016/S0049-237X(98)80049-9, MR 1673598. See in particular page 1160. 2. Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2012), "Lemma 6.13", Sparsity: Graphs, Structures, and Algorithms, Algorithms and Combinatorics, vol. 28, Heidelberg: Springer, p. 137, doi:10.1007/978-3-642-27875-4, ISBN 978-3-642-27874-7, MR 2920058. 3. Damaschke, Peter (1990), "Induced subgraphs and well-quasi-ordering", Journal of Graph Theory, 14 (4): 427–435, doi:10.1002/jgt.3190140406, MR 1067237. 4. Forster, Thomas (2003). "Better-quasi-orderings and coinduction". Theoretical Computer Science. 309 (1–3): 111–123. doi:10.1016/S0304-3975(03)00131-2. • Dickson, L. E. (1913). "Finiteness of the odd perfect and primitive abundant numbers with r distinct prime factors". American Journal of Mathematics. 35 (4): 413–422. doi:10.2307/2370405. JSTOR 2370405. • Higman, G. (1952). "Ordering by divisibility in abstract algebras". Proceedings of the London Mathematical Society. 2: 326–336. doi:10.1112/plms/s3-2.1.326. • Kruskal, J. B. (1972). "The theory of well-quasi-ordering: A frequently discovered concept". Journal of Combinatorial Theory. Series A. 13 (3): 297–305. doi:10.1016/0097-3165(72)90063-5. • Ketonen, Jussi (1978). "The structure of countable Boolean algebras". Annals of Mathematics. 108 (1): 41–89. doi:10.2307/1970929. JSTOR 1970929.
• Milner, E. C. (1985). "Basic WQO- and BQO-theory". In Rival, I. (ed.). Graphs and Order. The Role of Graphs in the Theory of Ordered Sets and Its Applications. D. Reidel Publishing Co. pp. 487–502. ISBN 90-277-1943-8. • Gallier, Jean H. (1991). "What's so special about Kruskal's theorem and the ordinal Γo? A survey of some results in proof theory". Annals of Pure and Applied Logic. 53 (3): 199–260. doi:10.1016/0168-0072(91)90022-E.
Wendel's theorem In geometric probability theory, Wendel's theorem, named after James G. Wendel, gives the probability that N points distributed uniformly at random on an $(n-1)$-dimensional hypersphere all lie on the same "half" of the hypersphere. In other words, one seeks the probability that there is some half-space with the origin on its boundary that contains all N points. Wendel's theorem says that the probability is[1] $p_{n,N}=2^{-N+1}\sum _{k=0}^{n-1}{\binom {N-1}{k}}.$ The statement is equivalent to $p_{n,N}$ being the probability that the origin is not contained in the convex hull of the N points, and it holds for any probability distribution on $\mathbb {R} ^{n}$ that is symmetric around the origin. In particular, this includes all distributions that are rotationally invariant around the origin. This is essentially a probabilistic restatement of Schläfli's theorem that $N$ hyperplanes through the origin in general position in $\mathbb {R} ^{n}$ divide it into $2\sum _{k=0}^{n-1}{\binom {N-1}{k}}$ regions.[2] References 1. Wendel, James G. (1962), "A Problem in Geometric Probability", Math. Scand., 11: 109–111 2. Cover, Thomas M.; Efron, Bradley (February 1967). "Geometrical Probability and Random Points on a Hypersphere". The Annals of Mathematical Statistics. 38 (1): 213–220. doi:10.1214/aoms/1177699073. ISSN 0003-4851.
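Wendel's formula lends itself to a quick numerical sanity check. The sketch below (function names are ours, not standard) computes $p_{n,N}$ and, for the circle case $n=2$, compares it against a Monte Carlo estimate; on the circle, "all points in a common closed half-plane through the origin" is equivalent to some angular gap between consecutive points being at least π:

```python
import math
import random
from math import comb

def wendel(n, N):
    # p_{n,N} = 2^(1-N) * sum_{k=0}^{n-1} C(N-1, k)
    return 2 ** (1 - N) * sum(comb(N - 1, k) for k in range(n))

def same_half_plane(angles):
    """n = 2 case: points on the unit circle, given by their angles, all lie
    in a closed half-plane through the origin iff some circular gap between
    consecutive points is at least pi."""
    a = sorted(angles)
    gaps = [b - c for c, b in zip(a, a[1:])] + [2 * math.pi - (a[-1] - a[0])]
    return max(gaps) >= math.pi

N, trials = 4, 100_000
hits = sum(same_half_plane([random.uniform(0, 2 * math.pi) for _ in range(N)])
           for _ in range(trials))
print(f"estimate {hits / trials:.3f}, exact {wendel(2, N)}")
```

For $n=2$, $N=4$ the exact value is $({\binom{3}{0}}+{\binom{3}{1}})/2^{3}=1/2$, and the Monte Carlo estimate should agree to about two decimal places.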
Wendelin Werner Wendelin Werner (born 23 September 1968) is a German-born French mathematician working on random processes such as self-avoiding random walks, Brownian motion, Schramm–Loewner evolution, and related theories in probability theory and mathematical physics. In 2006, at the 25th International Congress of Mathematicians in Madrid, Spain, he received the Fields Medal "for his contributions to the development of stochastic Loewner evolution, the geometry of two-dimensional Brownian motion, and conformal field theory". He is currently Rouse Ball Professor of Mathematics at the University of Cambridge. Born: 23 September 1968, Cologne, West Germany (now Germany). Nationality: French. Alma mater: École normale supérieure; Université Pierre-et-Marie-Curie. Awards: Heinz Gumin Prize (2016), Fields Medal (2006), Pólya Prize (2006), Loève Prize (2005), Grand Prix Jacques Herbrand (2003), Fermat Prize (2001), EMS Prize (2000), Prix Paul Doistau–Émile Blutet (1999), Davidson Prize (1998). Fields: Mathematics. Institutions: CNRS; Université Paris-Sud; ETH Zurich; University of Cambridge. Thesis: Quelques propriétés du mouvement brownien plan (1993). Doctoral advisor: Jean-François Le Gall. Doctoral students: Vincent Beffara, Julien Dubédat, Christophe Garban, Yilin Wang. Biography Werner was born on 23 September 1968 in Cologne, West Germany. His parents moved to France when he was nine months old and he became a French citizen in 1977.[1] After a classe préparatoire at the Lycée Hoche in Versailles, he studied at the École normale supérieure from 1987 to 1991. His 1993 doctorate was written at the Université Pierre-et-Marie-Curie and supervised by Jean-François Le Gall. Werner was a researcher at the CNRS (National Center of Scientific Research, Centre national de la recherche scientifique) from 1991 to 1997, during which time he also held a two-year Leibniz Fellowship at the University of Cambridge.
He was Professor at the University of Paris-Sud from 1997 to 2013 and also taught at the École Normale Supérieure from 2005 to 2013.[2][3] He was then Professor at the ETH Zürich from 2013 to 2023. Awards and honors Werner has received several awards besides the Fields Medal, including the Rollo Davidson Prize in 1998, the Prix Paul Doistau–Émile Blutet in 1999, the Fermat Prize in 2001, the Grand Prix Jacques Herbrand of the French Academy of Sciences in 2003, the Loève Prize in 2005, the 2006 SIAM George Pólya Prize with his collaborators Gregory Lawler and Oded Schramm, and the Heinz Gumin Prize in 2016. He became a member of the French Academy of Sciences in 2008. He is also a member of other academies of sciences, including the Academy of Sciences Leopoldina and the Berlin-Brandenburg Academy of Sciences, and is an honorary fellow of Gonville and Caius College.[2][3][4] He was elected a Foreign Member of the Royal Society in 2020.[5] Miscellaneous He had a part in the 1982 French film La Passante du Sans-Souci.[1] He has an Erdős–Bacon number of six. References 1. "Der Mann, der den Zufall beherrscht" [The man who masters randomness] (in German). Der Bund. Retrieved 1 August 2018. 2. "Wendelin Werner, 2006 Fields Medal Winner - CNRS press release". Centre national de la recherche scientifique. Retrieved 1 August 2018. 3. "Curriculum Vitae of Wendelin Werner" (PDF). International Mathematical Union. Retrieved 1 August 2018. 4. "The Rollo Davidson Trust". University of Cambridge. 5. "Wendelin Werner". Royal Society. Retrieved 19 September 2020.
External links • O'Connor, John J.; Robertson, Edmund F., "Wendelin Werner", MacTutor History of Mathematics Archive, University of St Andrews • Wendelin Werner at the Mathematics Genealogy Project • Page at ETH • La Passante du Sans-Souci on imdb.org • Wendelin Werner at IMDb
Wendy Myrvold Wendy Joanne Myrvold is a Canadian mathematician and computer scientist known for her work on graph algorithms, planarity testing, and algorithms in enumerative combinatorics. She is a professor emeritus of computer science at the University of Victoria.[1] Myrvold completed her Ph.D. in 1988 at the University of Waterloo. Her dissertation, The Ally and Adversary Reconstruction Problems, was supervised by Charles Colbourn.[2] References 1. "Emeritus, adjunct, cross-listed, and sessionals", Faculty & Staff, University of Victoria Computer Science 2. Wendy Myrvold at the Mathematics Genealogy Project External links • Home page • Wendy Myrvold publications indexed by Google Scholar
Wente torus In differential geometry, a Wente torus is an immersed torus in $\mathbb {R} ^{3}$ of constant mean curvature, discovered by Henry C. Wente (1986). It is a counterexample to the conjecture of Heinz Hopf that every compact constant-mean-curvature surface in $\mathbb {R} ^{3}$ is a sphere (the conjecture does hold if the surface is embedded). Similar examples are known for every positive genus. References • Wente, Henry C. (1986), "Counterexample to a conjecture of H. Hopf", Pacific Journal of Mathematics, 121: 193–243, doi:10.2140/pjm.1986.121.193, MR 0815044 • The Wente torus, University of Toledo Mathematics Department, retrieved 2013-09-01. External links • Visualization of the Wente torus
Wenxian Shen Wenxian Shen is a Chinese-American mathematician known for her work in topological dynamics, almost-periodicity, and waves and other spatial patterns in dynamical systems. She is a professor of mathematics at Auburn University.[1] Education Shen graduated from Zhejiang Normal University in 1982, and earned a master's degree at Peking University in 1987.[1] She completed a Ph.D. in mathematics at the Georgia Institute of Technology in 1992, with the dissertation Stability and Bifurcation of Traveling Wave Solutions supervised by Shui-Nee Chow.[2] Books Shen is the coauthor of two monographs, Almost Automorphic and Almost Periodic Dynamics in Skew-Product Semiflows (with Yingfei Yi, American Mathematical Society, 1998),[3] and Spectral Theory for Random and Nonautonomous Parabolic Equations and Applications (with Janusz Mierczyński, CRC Press, 2008).[4] References 1. Wenxian Shen, Auburn University, retrieved 2020-12-27 2. Wenxian Shen at the Mathematics Genealogy Project 3. Johnson, Russell A. (1999), "Featured review of Almost Automorphic and Almost Periodic Dynamics in Skew-Product Semiflows", MathSciNet, MR 1445493; Andres, J., "Review of Almost Automorphic and Almost Periodic Dynamics in Skew-Product Semiflows", zbMATH, Zbl 0913.58051 4. Twardowska, Krystyna (2010), "Review of Spectral Theory for Random and Nonautonomous Parabolic Equations and Applications", MathSciNet, MR 2464792 External links • Wenxian Shen publications indexed by Google Scholar
Werner Müller (mathematician) Werner Müller (born 7 September 1949) is a German mathematician. His research focuses on global analysis and automorphic forms. Biography Werner Müller grew up in the German Democratic Republic (East Germany). He studied mathematics at the Humboldt University of Berlin in East Berlin. In 1977 he completed his PhD under the supervision of Herbert Kurke. In his thesis, Analytische Torsion Riemannscher Mannigfaltigkeiten, he solved, at the same time as but independently of Jeff Cheeger, the Ray–Singer conjecture on the equality between analytic torsion and Reidemeister torsion. Thereafter he moved to the Karl-Weierstraß-Institut für Mathematik of the Academy of Sciences of the GDR. After German reunification he spent some time at the Max Planck Institute for Mathematics in Bonn. Since 1994 he has been a professor at the Mathematics Institute of the University of Bonn,[1] where he succeeded Friedrich Hirzebruch in his chair. He has supervised 12 doctoral students, including Maryna Viazovska. Together with Jeff Cheeger, he was awarded the Max-Planck-Forschungspreis in 1991.[2] The Cheeger–Müller theorem on the analytic torsion of Riemannian manifolds is named after them.[3][4] Important Papers • Müller, Werner (1978). "Analytic torsion and $R$-torsion of Riemannian manifolds". Advances in Mathematics. 28 (3): 233–305. doi:10.1016/0001-8708(78)90116-0. • Müller, Werner (1989). "The trace class conjecture in the theory of automorphic forms". Annals of Mathematics. Second Series. 130 (3): 473–529. doi:10.2307/1971453. JSTOR 1971453. References 1. Global Analysis Group, Mathematics Institute, University of Bonn. Accessed January 22, 2010 2. Max-Planck Research Prize laureates for 1991, Max Planck Society. Accessed January 22, 2010 3. Michael Farber, Wolfgang Lück, and Shmuel Weinberger (Editors), Tel Aviv Topology Conference: Rothenberg Festschrift. American Mathematical Society, 1999, Contemporary Mathematics series, vol.
231; ISBN 0-8218-1362-5; p. 77 4. Maxim Braverman, New Proof of the Cheeger–Müller Theorem, Annals of Global Analysis and Geometry, vol. 23 (2003), no. 1, pp. 77–92 External links • Conference in honor of his 60th birthday at the Hausdorff Center for Mathematics in Bonn • Conference in honor of his 60th birthday at the Hebrew University of Jerusalem • Personal homepage, Bonn University • Werner Müller at the Mathematics Genealogy Project
Werner Römisch Werner Römisch (born 28 December 1947) is a German mathematician, professor emeritus at the Humboldt University of Berlin, best known for his pioneering work in the field of stochastic programming. Born: 28 December 1947, Zwickau, Germany. Education: Humboldt University of Berlin. Known for: stochastic programming; optimization in the energy industry. Awards: Khachiyan Prize 2018. Fields: numerical analysis; optimization; stochastic programming. Institutions: Humboldt University of Berlin. Website: www.mathematik.hu-berlin.de/~romisch/ Education and early life Römisch was born in Zwickau, Germany in 1947. He earned his diploma degree in mathematics (1971) and doctoral degree in mathematics (1976) at the Humboldt University of Berlin (HUB). In 1984 he earned his Habilitation degree, after which he was appointed Privatdozent at the HUB. In 1993 he became full professor of applied mathematics at HUB. He is married to Ute Römisch, lives in Berlin and has two children. Career and research Römisch is known as a pioneer in the field of stochastic programming, to which he has made several significant contributions. His work on the analysis of discrete approximations,[1][2] stability,[3][4][5][6][7] power systems,[8][9] risk quantification and management,[10] scenario reduction[11][12][13][14] and efficient Monte Carlo sampling[15] comprises notable contributions to the field. He has authored three books and more than 130 research papers. He was co-editor of the Journal of Stochastic Programming E-Print Series (1999–2018), and Associate Editor of Optimization Letters (OPTL) (2006–2013), Energy Systems (2009–2020), Computational Management Science (2012–2020), and the SIAM Journal on Optimization (2013– ). He is co-author of the scenario-reduction algorithm SCENRED,[16] which is used in several optimization frameworks in the energy industry.
Awards and honours In 2018, Römisch received the Khachiyan Prize for lifetime achievements in the field of optimization, awarded by the INFORMS Optimization Society.[17] References 1. Römisch, Werner (1981). "On discrete approximations in stochastic programming" (PDF). Proceedings 13. Jahrestagung "Mathematische Optimierung". 39: 166–175. 2. Römisch, Werner (1985). "An approximation method in stochastic optimization and control". Mathematical Control Theory, Banach Center Publications. 14: 477–490. doi:10.4064/-14-1-477-490. 3. Rachev, Svetlozar T; Römisch, Werner (2002). "Quantitative Stability in Stochastic Programming: The Method of Probability Metrics". Mathematics of Operations Research. 27 (4): 792–818. doi:10.1287/moor.27.4.792.304. 4. Römisch, Werner; Wets, RJ-B (2007). "Stability of ε-approximate solutions to convex stochastic programs". SIAM Journal on Optimization. 18 (3): 961–979. doi:10.1137/060657716. 5. Römisch, W.; Schultz, R. (1991). "Stability analysis for stochastic programs". Annals of Operations Research. 30: 241–266. doi:10.1007/BF02204819. S2CID 18988851. 6. Römisch, W.; Schultz, R. (1993). "Stability of solutions for stochastic programs with complete recourse". Mathematics of Operations Research. 18 (3): 590–609. doi:10.1287/moor.18.3.590. 7. Henrion, R.; Römisch, W. (1999). "Metric regularity and quantitative stability in stochastic programs with probabilistic constraints". Mathematical Programming. 84: 55–88. doi:10.1007/s10107980016a. S2CID 2304352. 8. Dentcheva, D.; Römisch, W. (1998). "Optimal power generation under uncertainty via stochastic programming". Stochastic Programming Methods and Technical Applications. Lecture Notes in Economics and Mathematical Systems. 458: 22–56. doi:10.1007/978-3-642-45767-8_2. ISBN 978-3-540-63924-4. 9. Eichhorn, A.; Römisch, W. (2006). "Mean-risk optimization models for electricity portfolio management". 2006 International Conference on Probabilistic Methods Applied to Power Systems. pp. 1–7.
doi:10.1109/PMAPS.2006.360230. ISBN 978-91-7178-585-5. S2CID 2326985. 10. Pflug, G. Ch.; Römisch, W. (2007). Modeling, Measuring and Managing Risk. World Scientific. doi:10.1142/6478. ISBN 978-981-270-740-6. 11. Dupačová, J.; Gröwe-Kuska, N.; Römisch, W. (2003). "Scenario reduction in stochastic programming". Math. Program. 95 (Ser. A 95): 493–511. doi:10.1007/s10107-002-0331-0. S2CID 22626063. 12. Heitsch, Holger; Römisch, Werner (2003). "Scenario reduction algorithms in stochastic programming". Computational Optimization and Applications. 24 (2): 187–206. doi:10.1023/A:1021805924152. S2CID 16956981. 13. Römisch, Werner (2010). "Scenario generation". Wiley Encyclopedia of Operations Research and Management Science. 14. Heitsch, H.; Römisch, W. (2010). "Stability and scenario trees for multistage stochastic programs". Stochastic Programming, the State of the Art, in Honor of G.B. Dantzig. 6 (2): 139–164. doi:10.1007/s10287-008-0087-y. S2CID 3230220. 15. Leövey, H.; Römisch, W. (2015). "Quasi-Monte Carlo methods for linear two-stage stochastic programming problems". Mathematical Programming. 151: 315–345. doi:10.1007/s10107-015-0898-x. S2CID 14254876. 16. "Scenred". 17. "Khachiyan Prize". Optimization Society. 5 December 2022. Retrieved 26 July 2023.
Werner Weber (mathematician) Werner Weber (3 January 1906 in Oberstein, near Hamburg, Germany – 2 February 1975) was a German mathematician.[1] He was one of the Noether boys, the doctoral students of Emmy Noether. Considered scientifically gifted but a modest mathematician, he was also an extreme Nazi, who would later take part in driving Jewish mathematicians out of the University of Göttingen.[2] He later worked in a group of five mathematicians, recruited by Wilhelm Fenner, which included Ernst Witt, Georg Aumann, Alexander Aigner, Oswald Teichmüller and Johann Friedrich Schultze, and was led by Wolfgang Franz; the group formed the backbone of the new mathematical research department in the late 1930s, which would eventually be called Section IVc of the Cipher Department of the High Command of the Wehrmacht (abbr. OKW/Chi).[3][4]

Life

Weber was born in 1906 in Oberstein (near Hamburg, Germany), the son of a merchant. He obtained his Abitur in 1924. He studied mathematics in Hamburg and at the University of Göttingen, and in 1928 he passed the Staatsexamen (state examination for the teaching profession, Lehramt) in mathematics, physics and biology. Weber took his doctoral examination (Dr. phil.) in Göttingen under Emmy Noether (who was described by Pavel Alexandrov, Albert Einstein, Jean Dieudonné, Hermann Weyl, and Norbert Wiener as the most important woman in the history of mathematics), with a dissertation titled Ideal-theoretic interpretation of the representability of arbitrary natural numbers by quadratic forms (German: Idealtheoretische Deutung der Darstellbarkeit beliebiger natürlicher Zahlen durch quadratische Formen).[5] Noether had not been authorized to supervise dissertations on her own.[6] In Göttingen, his postdoctoral scholarship was co-sponsored in 1931 by Edmund Landau, whose assistant he had been since 1928 and whom he represented in 1933 after Landau's leave of absence.
Landau and Noether had judged his dissertation to be excellent, but Weber was only a mediocre mathematician, and his usefulness to Landau consisted chiefly of his abilities in accurate proofreading, to which Landau devoted much attention (according to an anecdote then current, Landau was able to distinguish between an italic period and a roman one).[7] In 1933, Oswald Teichmüller convinced Weber to convert to Nazism.[8] He was involved in the publication of Deutsche Mathematik and published a book on the Pell equation.[9] From 1946, Weber worked as a publishing director in Hamburg and from 1951 as a teacher at the private school "Institut Dr. Brechtefeld" in Hamburg. He left a detailed manuscript (written down before 1940) about his dispute with Hasse,[10] which serves as an important source for the events of that time in Göttingen.

Nazi biography

Weber was a member of the SA, but only joined the Nazi Party on 1 May 1933, when he was given party number 3,118,177.[11][12] In November 1933, he signed the Vow of allegiance of the Professors of the German Universities and High-Schools to Adolf Hitler and the National Socialistic State.

Removal of Jewish mathematicians

Weber was involved in the removal of the Jewish mathematician Edmund Landau from the mathematics faculty at the University of Göttingen on 2 November 1933. Richard Courant had already been forced out of Göttingen in May 1933. As the leader of a group of pro-Nazi students, Weber, together with the Nazi mathematician Oswald Teichmüller and the SS, organized a boycott of Edmund Landau's lectures. In a letter to Abraham Flexner, Richard Courant wrote: [There] were some seventy students, partly in SS uniforms, but inside [the lecture theatre] not a soul.
Every student who wanted to enter was prevented from doing so by Weber.[13] Landau received a delegation of students, who informed him that "Aryan students want Aryan mathematics...and requested that he refrain from giving [any more] lectures."[14] The spokesman for the students was a very young, scientifically gifted man, but completely muddled and notorious: Oswald Teichmüller. Landau left the university soon after.

Mathematical Institute

On 13 February 1934 the university Dekan (dean) asked Weber, who was acting director of the mathematical institute at Göttingen, for recommendations on who should replace Hermann Weyl as the new operational director. Several days later Weber recommended the algebraist Helmut Hasse, then working at the University of Marburg, as the best mathematician, but preferred the Nazi Udo Wegner.[6] In 1940, Weber would write: On the morning of 25 April 1933, I sank into gloomy brooding over how to save German mathematics. According to Weber: The tradition of Felix Klein that had been destroyed by the Jews, could only be awakened to a new life by one man: Wegner[15] The decision was made by the Nazi Theodor Vahlen, who appointed Helmut Hasse in April 1934. The Nazis were unsure whether Hasse was fully committed to National Socialist policies, and tried to appoint a firm Nazi supporter to a second chair at Göttingen. Udo Wegner was a strong candidate, but the probability theorist and ardent Nazi Erhard Tornier eventually gained the second chair.[16] Later, Weber and other convinced National Socialists met in Göttingen with Helmut Hasse, the designated new head of the Göttingen mathematical institute, who also sympathized with the Nazis, to question him about his ancestry; Weber and his associates did not consider him reliable for (Nazi) party politics because he had a Jewish grandmother.
Although Hasse was acceptable to the Nazis, he was not acceptable to Weber, who refused to hand over the keys to the institute.[17] Proceedings were started against Weber, who sent a document of more than 400 pages to Dr Mentzel of the Reich Ministry for Science, Education and Adult Education. War work interrupted the proceedings. In 1945, Weber was dismissed because of his Nazi involvement.

War work

During World War II, he worked with Oswald Teichmüller in the Cipher Department of the High Command of the Wehrmacht, in Section IVc under Wolfgang Franz, whose task was the scientific analysis of enemy ciphers, the development of code-breaking methods, and work on re-enciphering systems not solved by practical decoding. The agency was managed by Erich Hüttenhain. Weber successfully deciphered a cipher of the Japanese diplomatic service. He also worked on cryptanalytic theory.

References 1. Werner Weber at the Mathematics Genealogy Project 2. Schappacher, Norbert (1998). "The Nazi era: the Berlin way of politicizing mathematics". In Begehr, Heinrich; Koch, Helmut; Kramer, Jürg; Schappacher, Norbert; Thiele, Ernst-Jochen (eds.). Mathematics in Berlin. Birkhäuser, Basel: Springer. pp. 127–136. ISBN 978-3-0348-8787-8. 3. "Army Security Agency: DF-187 The Career of Wilhelm Fenner with Special Regard to his activity in the field of Cryptography and Cryptanalysis (PDF)". Google Drive. 1 December 1949. p. 7. Retrieved 30 March 2016. 4. TICOM reports DF-187 A-G and DF-176, 'European Axis Signal Intelligence in World War II' vol 2 5. Math. Annalen 102 (1930) S. 740–767 6. Segal 2014, p. 128 7. Segal 2003, p. 128 8. Segal 2003, p. 447, note 85. According to Peter Scherk, Weber was brought to national socialism by Teichmüller 9. Die Pellsche Gleichung (= Beihefte Deutsche Mathematik 1). Hirzel, Leipzig 1939. 10. Bundesarchiv Berlin R 4901/10.091 11. Segal 2014, p. 447 12. Segal 2014, p. 129 13. Noether, Emmy; Brewer, James W.; Smith, Martha K. (1981). Emmy Noether: a tribute to her life and work. M.
Dekker. p. 29. ISBN 978-0-8247-1550-2. 14. Krantz, Steven G. (2005). Mathematical Apocrypha Redux: More Stories and Anecdotes of Mathematicians and the Mathematical. Mathematical Association of America. p. 223. ISBN 978-0-88385-554-6. 15. Menzler-Trott 2007, p. 46 16. O'Connor, J. J.; Robertson, E. F. (10 April 2016). "Udo Hugo Helmuth Wegner". MacTutor Archive – School of Mathematics and Statistics, University of St Andrews, Scotland. JOC/EFR. Retrieved 31 March 2017. 17. Menzler-Trott 2007, p. 48 Sources • Menzler-Trott, Eckart (1 January 2007). Logic's Lost Genius: The Life of Gerhard Gentzen. American Mathematical Soc. ISBN 978-0-8218-9129-2. • Segal, Sanford L. (2003). Mathematicians Under the Nazis. Princeton University Press. ISBN 0-691-00451-X. • Segal, Sanford L. (23 November 2014). Mathematicians under the Nazis. Princeton University Press. ISBN 978-1-4008-6538-3.
Werner state A Werner state[1] is a bipartite quantum state on two $d$-dimensional subsystems (a $d^{2}\times d^{2}$ density matrix) that is invariant under all unitary operators of the form $U\otimes U$. That is, it is a bipartite quantum state $\rho _{AB}$ that satisfies $\rho _{AB}=(U\otimes U)\rho _{AB}(U^{\dagger }\otimes U^{\dagger })$ for all unitary operators U acting on d-dimensional Hilbert space. These states were first described by Reinhard F. Werner in 1989. General definition Every Werner state $W_{AB}^{(p,d)}$ is a mixture of projectors onto the symmetric and antisymmetric subspaces, with the relative weight $p\in [0,1]$ being the main parameter that defines the state, in addition to the dimension $d\geq 2$: $W_{AB}^{(p,d)}=p{\frac {2}{d(d+1)}}P_{AB}^{\text{sym}}+(1-p){\frac {2}{d(d-1)}}P_{AB}^{\text{as}},$ where $P_{AB}^{\text{sym}}={\frac {1}{2}}(I_{AB}+F_{AB}),$ $P_{AB}^{\text{as}}={\frac {1}{2}}(I_{AB}-F_{AB}),$ are the projectors and $F_{AB}=\sum _{ij}|i\rangle \langle j|_{A}\otimes |j\rangle \langle i|_{B}$ is the permutation or flip operator that exchanges the two subsystems A and B. Werner states are separable for p ≥ 1⁄2 and entangled for p < 1⁄2. All entangled Werner states violate the PPT separability criterion, but for d ≥ 3 no Werner state violates the weaker reduction criterion. Werner states can be parametrized in different ways.
One way of writing them is $\rho _{AB}={\frac {1}{d^{2}-d\alpha }}(I_{AB}-\alpha F_{AB}),$ where the new parameter α varies between −1 and 1 and relates to p as $\alpha =((1-2p)d+1)/(1-2p+d).$ Two-qubit example Two-qubit Werner states, corresponding to $d=2$ above, can be written explicitly in matrix form as $W_{AB}^{(p,2)}={\frac {p}{6}}{\begin{pmatrix}2&0&0&0\\0&1&1&0\\0&1&1&0\\0&0&0&2\end{pmatrix}}+{\frac {(1-p)}{2}}{\begin{pmatrix}0&0&0&0\\0&1&-1&0\\0&-1&1&0\\0&0&0&0\end{pmatrix}}={\begin{pmatrix}{\frac {p}{3}}&0&0&0\\0&{\frac {3-2p}{6}}&{\frac {-3+4p}{6}}&0\\0&{\frac {-3+4p}{6}}&{\frac {3-2p}{6}}&0\\0&0&0&{\frac {p}{3}}\end{pmatrix}}.$ Equivalently, these can be written as a convex combination of the totally mixed state with (the projection onto) a Bell state: $W_{AB}^{(\lambda ,2)}=\lambda |\Psi ^{-}\rangle \!\langle \Psi ^{-}|+{\frac {1-\lambda }{4}}I_{AB},\qquad |\Psi ^{-}\rangle \equiv {\frac {1}{\sqrt {2}}}(|01\rangle -|10\rangle ),$ where $\lambda \in [-1/3,1]$ (or, confining oneself to positive values, $\lambda \in [0,1]$) is related to $p$ by $\lambda =(3-4p)/3$. Then, two-qubit Werner states are separable for $\lambda \leq 1/3$ and entangled for $\lambda >1/3$. 
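The entanglement threshold quoted above can be checked directly: the partial transpose of $W_{AB}^{(\lambda ,2)}$ has minimum eigenvalue $(1-3\lambda )/4$, which is negative exactly when $\lambda >1/3$. A small NumPy sketch (the helper-function names are ours):

```python
import numpy as np

def werner_2qubit(lam):
    """lam * |Psi-><Psi-| + (1 - lam)/4 * I on two qubits."""
    psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
    return lam * np.outer(psi_minus, psi_minus) + (1 - lam) / 4 * np.eye(4)

def partial_transpose_B(rho):
    """Transpose the second qubit of a 4x4 two-qubit density matrix."""
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

for lam in (0.2, 1 / 3, 0.9):
    m = np.linalg.eigvalsh(partial_transpose_B(werner_2qubit(lam))).min()
    print(lam, m)  # equals (1 - 3*lam)/4: negative only past lambda = 1/3
```

Since all entangled Werner states are detected by the PPT criterion, this single eigenvalue check decides separability for the two-qubit family.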
Werner-Holevo channels A Werner-Holevo quantum channel ${\mathcal {W}}_{A\rightarrow B}^{\left(p,d\right)}$ with parameters $p\in \left[0,1\right]$ and integer $d\geq 2$ is defined as [2] [3] [4] ${\mathcal {W}}_{A\rightarrow B}^{\left(p,d\right)}=p{\mathcal {W}}_{A\rightarrow B}^{\text{sym}}+\left(1-p\right){\mathcal {W}}_{A\rightarrow B}^{\text{as}},$ where the quantum channels ${\mathcal {W}}_{A\rightarrow B}^{\text{sym}}$ and ${\mathcal {W}}_{A\rightarrow B}^{\text{as}}$ are defined as ${\mathcal {W}}_{A\rightarrow B}^{\text{sym}}(X_{A})={\frac {1}{d+1}}\left[\operatorname {Tr} [X_{A}]I_{B}+\operatorname {id} _{A\rightarrow B}(T_{A}(X_{A}))\right],$ ${\mathcal {W}}_{A\rightarrow B}^{\text{as}}(X_{A})={\frac {1}{d-1}}\left[\operatorname {Tr} [X_{A}]I_{B}-\operatorname {id} _{A\rightarrow B}(T_{A}(X_{A}))\right],$ and $T_{A}$ denotes the partial transpose map on system A. Note that the Choi state of the Werner-Holevo channel ${\mathcal {W}}_{A\rightarrow B}^{p,d}$ is a Werner state: ${\mathcal {W}}_{A\rightarrow B}^{\left(p,d\right)}(\Phi _{RA})=p{\frac {2}{d\left(d+1\right)}}P_{RB}^{\text{sym}}+\left(1-p\right){\frac {2}{d\left(d-1\right)}}P_{RB}^{\text{as}},$ where $\Phi _{RA}={\frac {1}{d}}\sum _{i,j}|i\rangle \langle j|_{R}\otimes |i\rangle \langle j|_{A}$. Multipartite Werner states Werner states can be generalized to the multipartite case.[5] An N-party Werner state is a state that is invariant under $U\otimes U\otimes ...\otimes U$ for any unitary U on a single subsystem. The Werner state is no longer described by a single parameter, but by N! − 1 parameters, and is a linear combination of the N! different permutations on N systems. References 1. Reinhard F. Werner (1989). "Quantum states with Einstein-Podolsky-Rosen correlations admitting a hidden-variable model". Physical Review A. 40 (8): 4277–4281. Bibcode:1989PhRvA..40.4277W. doi:10.1103/PhysRevA.40.4277. PMID 9902666. 2. Reinhard F. Werner and Alexander S. Holevo (2002). 
"Counterexample to an additivity conjecture for output purity of quantum channels". Journal of Mathematical Physics. 43 (9): 4353–4357. arXiv:quant-ph/0203003. Bibcode:2002JMP....43.4353W. doi:10.1063/1.1498491. S2CID 42832247. 3. Fannes, Mark; Haegeman, B.; Mosonyi, Milan; Vanpeteghem, D. (2004). "Additivity of minimal entropy output for a class of covariant channels". unpublished. arXiv:quant-ph/0410195. Bibcode:2004quant.ph.10195F. 4. Debbie Leung and William Matthews (2015). "On the power of PPT-preserving and non-signalling codes". IEEE Transactions on Information Theory. 61 (8): 4486–4499. arXiv:1406.7142. doi:10.1109/TIT.2015.2439953. S2CID 14083225. 5. Eggeling, Tilo; Werner, Reinhard (2001). "Separability properties of tripartite states with UxUxU-symmetry". Physical Review A. 63: 042111. arXiv:quant-ph/0010096. doi:10.1103/PhysRevA.63.042111. S2CID 119350302.
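The statement above that the Choi state of the Werner-Holevo channel is a Werner state can be verified numerically for small dimension (a sketch; the variable names and the values of p and d are our own choices):

```python
import numpy as np

d, p = 3, 0.7                        # dimension and mixing parameter
I = np.eye(d)
F = np.zeros((d * d, d * d))         # flip operator F|i,j> = |j,i>
for i in range(d):
    for j in range(d):
        F[i * d + j, j * d + i] = 1.0
P_sym, P_as = (np.eye(d * d) + F) / 2, (np.eye(d * d) - F) / 2

def channel(X):
    """Werner-Holevo channel with parameters (p, d)."""
    sym = (np.trace(X) * I + X.T) / (d + 1)
    asym = (np.trace(X) * I - X.T) / (d - 1)
    return p * sym + (1 - p) * asym

# Choi state: apply the channel to one half of the maximally entangled state
choi = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        Eij = np.zeros((d, d))
        Eij[i, j] = 1.0
        choi += np.kron(Eij, channel(Eij)) / d

werner = p * 2 / (d * (d + 1)) * P_sym + (1 - p) * 2 / (d * (d - 1)) * P_as
print(np.abs(choi - werner).max())  # essentially zero: the Choi state is Werner
```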
West Coast Number Theory West Coast Number Theory (WCNT), a meeting that has also been known variously as the Western Number Theory Conference and the Asilomar Number Theory meeting, is an annual gathering of number theorists first organized by D. H. and Emma Lehmer at the Asilomar Conference Grounds in 1969.[1] In his tribute to D. H. Lehmer, John Brillhart stated that "There is little doubt that one of [Dick and Emma's] most enduring contributions to the world of mathematicians is their founding of the West Coast Number Theory Meeting [an annual event] in 1969".[2] To date, the conference remains an active meeting of young and experienced number theorists alike.

West Coast Number Theory (conference)
Status: Active
Genre: Mathematics conference
Frequency: Annual
Country: U.S.
Years active: 1969–present
Inaugurated: 1969
Founders: Derrick H. Lehmer and Emma Lehmer
Previous event: 2021
Next event: Winter 2022
Website: westcoastnumbertheory.org

History

West Coast Number Theory has been held at a variety of locations throughout western North America. Meetings in odd years are typically held in Pacific Grove, California. Until 2013, this was always at the Asilomar Conference Grounds, though the meetings from 2014–2017 moved to the Lighthouse Lodge, just up the road.
• 1969 Asilomar • 1970 Tucson • 1971 Asilomar • 1972 Claremont • 1973 Los Angeles • 1974 Los Angeles • 1975 Asilomar • 1976 San Diego • 1977 Los Angeles • 1978 Santa Barbara • 1979 Asilomar • 1980 Tucson • 1981 Santa Barbara • 1982 San Diego • 1983 Asilomar • 1984 Asilomar • 1985 Asilomar • 1986 Tucson • 1987 Asilomar • 1988 Las Vegas • 1989 Asilomar • 1990 Asilomar • 1991 Asilomar • 1992 Corvallis • 1993 Asilomar • 1994 San Diego • 1995 Asilomar • 1996 Las Vegas • 1997 Asilomar • 1998 San Francisco • 1999 Asilomar • 2000 San Diego • 2001 Asilomar • 2002 San Francisco • 2003 Asilomar • 2004 Las Vegas • 2005 Asilomar • 2006 Ensenada • 2007 Asilomar • 2008 Fort Collins • 2009 Asilomar • 2010 Orem • 2011 Asilomar • 2012 Asilomar • 2013 Asilomar • 2014 Pacific Grove • 2015 Pacific Grove • 2016 Pacific Grove • 2017 Pacific Grove • 2018 Chico • 2019 Asilomar (50th Anniversary Conference) • 2020 Canceled • 2021 Virtual • 2022 Asilomar Related • Asilomar Conference Grounds • Pacific Grove, California References 1. The Lehmers at Berkeley 2. J. Brillhart in Acta Arith. 62 (1992), 207–213 External links • West Coast Number Theory page
Devanagari numerals The Devanagari numerals are the symbols used to write numbers in the Devanagari script, the predominant script for northern Indian languages. They are used to write decimal numbers, instead of the Western Arabic numerals.

Table

Modern Devanagari | Western Arabic | Sanskrit (wordstem) | Hindi | Marathi | Odia
(the last four columns give the words for the cardinal number)
० | 0 | śūnya (शून्य) | शून्य (śūny) | शून्य (śūnya) | शून्य (śūnya)
१ | 1 | eka (एकः) | एक (ek) | एक (ek) | एक (ek)
२ | 2 | dvi (द्वि) | दो (do) | दोन (don) | दुइ (dui)
३ | 3 | tri (त्रिणि) | तीन (tīn) | तीन (tīn) | तिन (tīn)
४ | 4 | catur (चत्वारी) | चार (cār) | चार (cār) | चारि (cāri)
५ | 5 | pañca (पञ्च) | पाँच (pāñc) | पाच (pāch) | पाँच (pānch)
६ | 6 | ṣaṭ (षट्) | छह (chah) | सहा (sahā) | छअ (chaā)
७ | 7 | sapta (सप्त) | सात (sāt) | सात (sāt) | सात (sāt)
८ | 8 | aṣṭa (अष्ट) | आठ (āṭh) | आठ (āṭh) | आठ (āṭha)
९ | 9 | nava (नव) | नौ (nau) | नऊ (naū) | नअ (nā)

The word śūnya for zero was calqued
into Arabic as صفر ṣifr, meaning 'nothing', which became the term "zero" in many European languages via Medieval Latin zephirum.[1]

Variants

The shapes of the Devanagari digits may vary depending on geographical area or epoch. Some of the variants are also seen in older Sanskrit literature.[2][3] Attested variant forms include:
• १ (1): common and Nepali forms
• ५ (5): "Bombay" and "Calcutta" variants
• ८ (8): "Bombay" and "Calcutta" variants
• ९ (9): common and Nepali variant forms

See also

• Indian numbering system

References

Notes
1. "zero - Origin and meaning of zero by Online Etymology Dictionary". www.etymonline.com.
2. Devanagari for TEX version 2.17, page 22
3. "Alternate digits in Devanagari". Scriptsource.org. Retrieved 13 September 2017.

Sources
• Sanskrit Siddham (Bonji) Numbers Archived 2009-02-07 at the Wayback Machine
• Devanagari Numbers in Nepali language
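In Unicode, the Devanagari digits ०–९ occupy the contiguous code points U+0966–U+096F in the same order as 0–9, so converting from Western Arabic digits is a fixed offset. A minimal sketch:

```python
def to_devanagari(n: int) -> str:
    """Render a non-negative integer with Devanagari digits."""
    # U+0966 is DEVANAGARI DIGIT ZERO; the ten digits are consecutive
    return "".join(chr(0x0966 + int(d)) for d in str(n))

print(to_devanagari(2024))  # २०२४
```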
Equidistribution theorem In mathematics, the equidistribution theorem is the statement that the sequence a, 2a, 3a, ... mod 1 is uniformly distributed on the circle $\mathbb {R} /\mathbb {Z} $, when a is an irrational number. It is a special case of the ergodic theorem where one takes the normalized angle measure $\mu ={\frac {d\theta }{2\pi }}$. History While this theorem was proved in 1909 and 1910 separately by Hermann Weyl, Wacław Sierpiński and Piers Bohl, variants of this theorem continue to be studied to this day. In 1916, Weyl proved that the sequence $a,2^{2}a,3^{2}a,\ldots $ mod 1 is uniformly distributed on the unit interval. In 1937, Ivan Vinogradov proved that the sequence $p_{n}a$ mod 1 is uniformly distributed, where $p_{n}$ is the nth prime. Vinogradov's proof was a byproduct of the odd Goldbach conjecture, that every sufficiently large odd number is the sum of three primes. George Birkhoff, in 1931, and Aleksandr Khinchin, in 1933, proved that the generalization x + na, for almost all x, is equidistributed on any Lebesgue measurable subset of the unit interval. The corresponding generalizations for the Weyl and Vinogradov results were proven by Jean Bourgain in 1988. Specifically, Khinchin showed that the identity $\lim _{n\to \infty }{\frac {1}{n}}\sum _{k=1}^{n}f((x+ka){\bmod {1}})=\int _{0}^{1}f(y)\,dy$ holds for almost all x and any Lebesgue integrable function ƒ. In modern formulations, it is asked under what conditions the identity $\lim _{n\to \infty }{\frac {1}{n}}\sum _{k=1}^{n}f((x+b_{k}a){\bmod {1}})=\int _{0}^{1}f(y)\,dy$ might hold, given some general sequence $b_{k}$. One noteworthy result is that the sequence $2^{k}a$ mod 1 is uniformly distributed for almost all, but not all, irrational a. Similarly, for the sequence $b_{k}=2^{k}$, for every irrational a, and almost all x, there exists a function ƒ for which the sum diverges.
In this sense, this sequence is considered to be a universally bad averaging sequence, as opposed to $b_{k}=k$, which is termed a universally good averaging sequence, because it does not have the latter shortcoming. A powerful general result is Weyl's criterion, which shows that equidistribution is equivalent to having a non-trivial estimate for the exponential sums formed with the sequence as exponents. For the case of multiples of a, Weyl's criterion reduces the problem to summing finite geometric series. See also • Diophantine approximation • Low-discrepancy sequence • Dirichlet's approximation theorem • Three-gap theorem References Historical references • P. Bohl, (1909) Über ein in der Theorie der säkularen Störungen vorkommendes Problem, J. reine angew. Math. 135, pp. 189–283. • Weyl, H. (1910). "Über die Gibbs'sche Erscheinung und verwandte Konvergenzphänomene". Rendiconti del Circolo Matematico di Palermo. 330: 377–407. doi:10.1007/bf03014883. S2CID 122545523. • W. Sierpinski, (1910) Sur la valeur asymptotique d'une certaine somme, Bull Intl. Acad. Polonaise des Sci. et des Lettres (Cracovie) series A, pp. 9–11. • Weyl, H. (1916). "Ueber die Gleichverteilung von Zahlen mod. Eins". Math. Ann. 77 (3): 313–352. doi:10.1007/BF01475864. S2CID 123470919. • Birkhoff, G. D. (1931). "Proof of the ergodic theorem". Proc. Natl. Acad. Sci. U.S.A. 17 (12): 656–660. Bibcode:1931PNAS...17..656B. doi:10.1073/pnas.17.12.656. PMC 1076138. PMID 16577406. • Khinchin, A. Ya. (1933). "Zur Birkhoff's Lösung des Ergodensproblems". Math. Ann. 107: 485–488. doi:10.1007/BF01448905. S2CID 122289068. Modern references • Joseph M. Rosenblatt and Máté Weirdl, Pointwise ergodic theorems via harmonic analysis, (1993) appearing in Ergodic Theory and its Connections with Harmonic Analysis, Proceedings of the 1993 Alexandria Conference, (1995) Karl E. Petersen and Ibrahim A. Salama, eds., Cambridge University Press, Cambridge, ISBN 0-521-45999-0.
(An extensive survey of the ergodic properties of generalizations of the equidistribution theorem of shift maps on the unit interval. Focuses on methods developed by Bourgain.) • Elias M. Stein and Rami Shakarchi, Fourier Analysis. An Introduction, (2003) Princeton University Press, pp 105–113 (Proof of the Weyl's theorem based on Fourier Analysis)
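The theorem is easy to observe numerically: for irrational a, the fraction of the points ka mod 1 landing in a fixed subinterval approaches the interval's length. A quick sketch (the choices a = √2, the interval [0.2, 0.5), and the sample size are ours):

```python
from math import sqrt

a = sqrt(2)                     # any irrational number works
N = 100_000
points = [(k * a) % 1.0 for k in range(1, N + 1)]
frac = sum(0.2 <= x < 0.5 for x in points) / N
print(frac)  # approaches the interval length 0.3 as N grows
```

For a badly approximable number such as √2 the discrepancy decays like (log N)/N, so even modest N gives close agreement.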
Weyl's inequality (number theory) In number theory, Weyl's inequality, named for Hermann Weyl, states that if M, N, a and q are integers, with a and q coprime, q > 0, and f is a real polynomial of degree k whose leading coefficient c satisfies $|c-a/q|\leq tq^{-2},$ for some t greater than or equal to 1, then for any positive real number $\scriptstyle \varepsilon $ one has $\sum _{x=M}^{M+N}\exp(2\pi if(x))=O\left(N^{1+\varepsilon }\left({t \over q}+{1 \over N}+{t \over N^{k-1}}+{q \over N^{k}}\right)^{2^{1-k}}\right){\text{ as }}N\to \infty .$ This inequality will only be useful when $q<N^{k},$ for otherwise estimating the modulus of the exponential sum by means of the triangle inequality as $\scriptstyle \leq \,N$ provides a better bound. References • Vinogradov, Ivan Matveevich (1954). The method of trigonometrical sums in the theory of numbers. Translated, revised and annotated by K. F. Roth and Anne Davenport, New York: Interscience Publishers Inc. X, 180 p. • Allakov, I. A. (2002). "On One Estimate by Weyl and Vinogradov". Siberian Mathematical Journal. 43 (1): 1–4. doi:10.1023/A:1013873301435. S2CID 117556877.
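The cancellation that the inequality quantifies is visible numerically: for a quadratic polynomial with irrational leading coefficient, such as f(x) = √2·x², the exponential sum over x ≤ N has modulus on the order of √N rather than N. A sketch (the polynomial and the cutoff are our choices, not part of the statement):

```python
from cmath import exp, pi
from math import sqrt

N = 10_000
c = sqrt(2)                         # irrational leading coefficient
S = sum(exp(2j * pi * c * x * x) for x in range(N + 1))
print(abs(S))  # far below the trivial bound N + 1
```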
Weyl's tube formula Weyl's tube formula gives the volume of an object defined as the set of all points within a small distance of a manifold. Let $\Sigma $ be an oriented, closed, two-dimensional surface, and let $N_{\varepsilon }(\Sigma )$ denote the set of all points within a distance $\varepsilon $ of the surface $\Sigma $. Then, for $\varepsilon $ sufficiently small, the volume of $N_{\varepsilon }(\Sigma )$ is $V=2A(\Sigma )\varepsilon +{\frac {4\pi }{3}}\chi (\Sigma )\varepsilon ^{3},$ where $A(\Sigma )$ is the area of the surface and $\chi (\Sigma )$ is its Euler characteristic. This expression can be generalized to the case where $\Sigma $ is a $q$-dimensional submanifold of $n$-dimensional Euclidean space $\mathbb {R} ^{n}$. References • Weyl, Hermann (1939). "On the volume of tubes". American Journal of Mathematics. 61: 461–472. JSTOR 2371513. • Gray, Alfred (2004). "An introduction to Weyl's Tube Formula". Tubes. Progress in Mathematics, volume 221. Springer Science+Business Media. doi:10.1007/978-3-0348-7966-8_1. ISBN 978-3-0348-9639-9. • Willerton, Simon (2010-03-12). "Intrinsic Volumes and Weyl's Tube Formula". The n-Category Café. Retrieved 2018-03-10.
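The formula can be checked exactly on the round sphere in R³, where the ε-neighbourhood is a spherical shell; a short sketch:

```python
import math

# Weyl's tube formula for a closed surface Sigma in R^3:
# V = 2*A(Sigma)*eps + (4*pi/3)*chi(Sigma)*eps**3
def tube_volume(area, chi, eps):
    return 2 * area * eps + (4 * math.pi / 3) * chi * eps ** 3

# Sphere of radius r: A = 4*pi*r^2, chi = 2.
r, eps = 2.0, 0.1
predicted = tube_volume(4 * math.pi * r ** 2, 2, eps)

# Exact volume of the shell r - eps <= |x| <= r + eps:
# (4*pi/3)*((r+eps)^3 - (r-eps)^3) = 8*pi*r^2*eps + (8*pi/3)*eps^3,
# which matches the tube formula term by term.
exact = (4 * math.pi / 3) * ((r + eps) ** 3 - (r - eps) ** 3)
```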
Weyl character formula In mathematics, the Weyl character formula in representation theory describes the characters of irreducible representations of compact Lie groups in terms of their highest weights.[1] It was proved by Hermann Weyl (1925, 1926a, 1926b). There is a closely related formula for the character of an irreducible representation of a semisimple Lie algebra.[2] In Weyl's approach to the representation theory of connected compact Lie groups, the proof of the character formula is a key step in proving that every dominant integral element actually arises as the highest weight of some irreducible representation.[3] Important consequences of the character formula are the Weyl dimension formula and the Kostant multiplicity formula. By definition, the character $\chi $ of a representation $\pi $ of G is the trace of $\pi (g)$, as a function of a group element $g\in G$. The irreducible representations in this case are all finite-dimensional (this is part of the Peter–Weyl theorem); so the notion of trace is the usual one from linear algebra. Knowledge of the character $\chi $ of $\pi $ gives a lot of information about $\pi $ itself. Weyl's formula is a closed formula for the character $\chi $, in terms of other objects constructed from G and its Lie algebra. Statement of Weyl character formula The character formula can be expressed in terms of representations of complex semisimple Lie algebras or in terms of the (essentially equivalent) representation theory of compact Lie groups. Complex semisimple Lie algebras Let $\pi $ be an irreducible, finite-dimensional representation of a complex semisimple Lie algebra ${\mathfrak {g}}$. Suppose ${\mathfrak {h}}$ is a Cartan subalgebra of ${\mathfrak {g}}$. The character of $\pi $ is then the function $\operatorname {ch} _{\pi }:{\mathfrak {h}}\rightarrow \mathbb {C} $ defined by $\operatorname {ch} _{\pi }(H)=\operatorname {tr} (e^{\pi (H)}).$ The value of the character at $H=0$ is the dimension of $\pi $. 
By elementary considerations, the character may be computed as $\operatorname {ch} _{\pi }(H)=\sum _{\mu }m_{\mu }e^{\mu (H)}$, where the sum ranges over all the weights $\mu $ of $\pi $ and where $m_{\mu }$ is the multiplicity of $\mu $. (The preceding expression is sometimes taken as the definition of the character.) The character formula states[4] that $\operatorname {ch} _{\pi }(H)$ may also be computed as $\operatorname {ch} _{\pi }(H)={\frac {\sum _{w\in W}\varepsilon (w)e^{w(\lambda +\rho )(H)}}{\prod _{\alpha \in \Delta ^{+}}(e^{\alpha (H)/2}-e^{-\alpha (H)/2})}}$ where • $W$ is the Weyl group; • $\Delta ^{+}$ is the set of the positive roots of the root system $\Delta $; • $\rho $ is the half-sum of the positive roots, often called the Weyl vector; • $\lambda $ is the highest weight of the irreducible representation $V$; • $\varepsilon (w)$ is the determinant of the action of $w$ on the Cartan subalgebra ${\mathfrak {h}}\subset {\mathfrak {g}}$. This is equal to $(-1)^{\ell (w)}$, where $\ell (w)$ is the length of the Weyl group element, defined to be the minimal number of reflections with respect to simple roots such that $w$ equals the product of those reflections. Discussion Using the Weyl denominator formula (described below), the character formula may be rewritten as $\operatorname {ch} _{\pi }(H)={\frac {\sum _{w\in W}\varepsilon (w)e^{w(\lambda +\rho )(H)}}{\sum _{w\in W}\varepsilon (w)e^{w(\rho )(H)}}}$, or, equivalently, $\operatorname {ch} _{\pi }(H){\sum _{w\in W}\varepsilon (w)e^{w(\rho )(H)}}=\sum _{w\in W}\varepsilon (w)e^{w(\lambda +\rho )(H)}.$ The character is itself a large sum of exponentials. In this last expression, we then multiply the character by an alternating sum of exponentials—which seemingly will result in an even larger sum of exponentials. The surprising part of the character formula is that when we compute this product, only a small number of terms actually remain. 
Many more terms than this occur at least once in the product of the character and the Weyl denominator, but most of these terms cancel out to zero.[5] The only terms that survive are the terms that occur only once, namely $e^{(\lambda +\rho )(H)}$ (which is obtained by taking the highest weight from $\operatorname {ch} _{\pi }$ and the highest weight from the Weyl denominator) and things in the Weyl-group orbit of $e^{(\lambda +\rho )(H)}$. Compact Lie groups Let $K$ be a compact, connected Lie group and let $T$ be a maximal torus in $K$. Let $\Pi $ be an irreducible representation of $K$. Then we define the character of $\Pi $ to be the function $\mathrm {X} (x)=\operatorname {trace} (\Pi (x)),\quad x\in K.$ The character is easily seen to be a class function on $K$ and the Peter–Weyl theorem asserts that the characters form an orthonormal basis for the space of square-integrable class functions on $K$.[6] Since $\mathrm {X} $ is a class function, it is determined by its restriction to $T$. Now, for $H$ in the Lie algebra ${\mathfrak {t}}$ of $T$, we have $\operatorname {trace} (\Pi (e^{H}))=\operatorname {trace} (e^{\pi (H)})$, where $\pi $ is the associated representation of the Lie algebra ${\mathfrak {k}}$ of $K$. Thus, the function $H\mapsto \operatorname {trace} (\Pi (e^{H}))$ is simply the character of the associated representation $\pi $ of ${\mathfrak {k}}$, as described in the previous subsection. 
The restriction of the character of $\Pi $ to $T$ is then given by the same formula as in the Lie algebra case: $\mathrm {X} (e^{H})={\frac {\sum _{w\in W}\varepsilon (w)e^{w(\lambda +\rho )(H)}}{\sum _{w\in W}\varepsilon (w)e^{w(\rho )(H)}}}.$ Weyl's proof of the character formula in the compact group setting is completely different from the algebraic proof of the character formula in the setting of semisimple Lie algebras.[7] In the compact group setting, it is common to use "real roots" and "real weights", which differ by a factor of $i$ from the roots and weights used here. Thus, the formula in the compact group setting has factors of $i$ in the exponent throughout. The SU(2) case In the case of the group SU(2), consider the irreducible representation of dimension $m+1$. If we take $T$ to be the diagonal subgroup of SU(2), the character formula in this case reads[8] $\mathrm {X} \left({\begin{pmatrix}e^{i\theta }&0\\0&e^{-i\theta }\end{pmatrix}}\right)={\frac {e^{i(m+1)\theta }-e^{-i(m+1)\theta }}{e^{i\theta }-e^{-i\theta }}}={\frac {\sin((m+1)\theta )}{\sin \theta }}.$ (Both numerator and denominator in the character formula have two terms.) It is instructive to verify this formula directly in this case, so that we can observe the cancellation phenomenon implicit in the Weyl character formula. Since the representations are known very explicitly, the character of the representation can be written down as $\mathrm {X} \left({\begin{pmatrix}e^{i\theta }&0\\0&e^{-i\theta }\end{pmatrix}}\right)=e^{im\theta }+e^{i(m-2)\theta }+\cdots +e^{-im\theta }.$ The Weyl denominator, meanwhile, is simply the function $e^{i\theta }-e^{-i\theta }$. 
Multiplying the character by the Weyl denominator gives $\mathrm {X} \left({\begin{pmatrix}e^{i\theta }&0\\0&e^{-i\theta }\end{pmatrix}}\right)(e^{i\theta }-e^{-i\theta })=\left(e^{i(m+1)\theta }+e^{i(m-1)\theta }+\cdots +e^{-i(m-1)\theta }\right)-\left(e^{i(m-1)\theta }+\cdots +e^{-i(m-1)\theta }+e^{-i(m+1)\theta }\right).$ We can now easily verify that most of the terms cancel between the two terms on the right-hand side above, leaving us with only $\mathrm {X} \left({\begin{pmatrix}e^{i\theta }&0\\0&e^{-i\theta }\end{pmatrix}}\right)(e^{i\theta }-e^{-i\theta })=e^{i(m+1)\theta }-e^{-i(m+1)\theta }$ so that $\mathrm {X} \left({\begin{pmatrix}e^{i\theta }&0\\0&e^{-i\theta }\end{pmatrix}}\right)={\frac {e^{i(m+1)\theta }-e^{-i(m+1)\theta }}{e^{i\theta }-e^{-i\theta }}}={\frac {\sin((m+1)\theta )}{\sin \theta }}.$ The character in this case is a geometric series with $R=e^{2i\theta }$, and the preceding argument is a small variant of the standard derivation of the formula for the sum of a finite geometric series. Weyl denominator formula In the special case of the trivial 1-dimensional representation the character is 1, so the Weyl character formula becomes the Weyl denominator formula:[9] ${\sum _{w\in W}\varepsilon (w)e^{w(\rho )(H)}=\prod _{\alpha \in \Delta ^{+}}(e^{\alpha (H)/2}-e^{-\alpha (H)/2})}.$ For special unitary groups, this is equivalent to the expression $\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\,X_{1}^{\sigma (1)-1}\cdots X_{n}^{\sigma (n)-1}=\prod _{1\leq i<j\leq n}(X_{j}-X_{i})$ for the Vandermonde determinant.[10] Weyl dimension formula By evaluating the character at $H=0$, Weyl's character formula gives the Weyl dimension formula $\dim(V_{\lambda })={\prod _{\alpha \in \Delta ^{+}}(\lambda +\rho ,\alpha ) \over \prod _{\alpha \in \Delta ^{+}}(\rho ,\alpha )}$ for the dimension of a finite-dimensional representation $V_{\lambda }$ with highest weight $\lambda $. 
(As usual, ρ is half the sum of the positive roots and the products run over positive roots α.) The specialization is not completely trivial, because both the numerator and denominator of the Weyl character formula vanish to high order at the identity element, so it is necessary to take a limit of the trace of an element tending to the identity, using a version of L'Hospital's rule.[11] In the SU(2) case described above, for example, we can recover the dimension $m+1$ of the representation by using L'Hospital's rule to evaluate the limit as $\theta $ tends to zero of $\sin((m+1)\theta )/\sin \theta $. We may consider as an example the complex semisimple Lie algebra sl(3,C), or equivalently the compact group SU(3). In that case, the representations are labeled by a pair $(m_{1},m_{2})$ of non-negative integers. In this case, there are three positive roots and it is not hard to verify that the dimension formula takes the explicit form[12] $\dim(V_{m_{1},m_{2}})={\frac {1}{2}}(m_{1}+1)(m_{2}+1)(m_{1}+m_{2}+2)$ The case $m_{1}=1,\,m_{2}=0$ is the standard representation and indeed the dimension formula gives the value 3 in this case. Kostant multiplicity formula Main article: Kostant partition function The Weyl character formula gives the character of each representation as a quotient, where the numerator and denominator are each a finite linear combination of exponentials. While this formula in principle determines the character, it is not especially obvious how one can compute this quotient explicitly as a finite sum of exponentials. 
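The sl(3,C) dimension formula above follows from evaluating the Weyl product directly. A minimal sketch in Python, assuming the normalization (α,α) = 2 for the simple roots, so that the fundamental weights satisfy (ω_i, α_j) = δ_ij and the pairings are simple dot products of coefficient vectors:

```python
# Weyl dimension formula for sl(3,C).  Weights are stored by their
# coefficients in the fundamental-weight basis; positive roots by their
# coefficients over the simple roots alpha1, alpha2.
POS_ROOTS = [(1, 0), (0, 1), (1, 1)]  # alpha1, alpha2, alpha1 + alpha2

def pair(weight, root):
    # (sum c_i * omega_i, sum n_j * alpha_j) = sum c_j * n_j,
    # since (omega_i, alpha_j) = delta_ij in this normalization
    return weight[0] * root[0] + weight[1] * root[1]

def weyl_dim(m1, m2):
    lam_rho = (m1 + 1, m2 + 1)  # lambda + rho, with rho = omega1 + omega2
    rho = (1, 1)
    num = den = 1
    for alpha in POS_ROOTS:
        num *= pair(lam_rho, alpha)
        den *= pair(rho, alpha)
    return num // den
```

Evaluating `weyl_dim(1, 0)` recovers the standard representation of dimension 3, and the product reproduces the closed form (m₁+1)(m₂+1)(m₁+m₂+2)/2 quoted above.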
Already in the SU(2) case described above, it is not immediately obvious how to go from the Weyl character formula, which gives the character as $\sin((m+1)\theta )/\sin \theta $, back to the formula for the character as a sum of exponentials: $e^{im\theta }+e^{i(m-2)\theta }+\cdots +e^{-im\theta }.$ In this case, it is perhaps not terribly difficult to recognize the expression $\sin((m+1)\theta )/\sin \theta $ as the sum of a finite geometric series, but in general we need a more systematic procedure. In general, the division process can be accomplished by computing a formal reciprocal of the Weyl denominator and then multiplying the numerator in the Weyl character formula by this formal reciprocal.[13] The result gives the character as a finite sum of exponentials. The coefficients of this expansion are the dimensions of the weight spaces, that is, the multiplicities of the weights. We thus obtain from the Weyl character formula a formula for the multiplicities of the weights, known as the Kostant multiplicity formula. An alternative formula, which is more computationally tractable in some cases, is given in the next section. Freudenthal's formula Hans Freudenthal's formula is a recursive formula for the weight multiplicities that gives the same answer as the Kostant multiplicity formula, but is sometimes easier to use for calculations as there can be far fewer terms to sum. The formula is based on use of the Casimir element and its derivation is independent of the character formula. It states[14] $(\|\Lambda +\rho \|^{2}-\|\lambda +\rho \|^{2})m_{\Lambda }(\lambda )=2\sum _{\alpha \in \Delta ^{+}}\sum _{j\geq 1}(\lambda +j\alpha ,\alpha )m_{\Lambda }(\lambda +j\alpha )$ where • Λ is a highest weight, • λ is some other weight, • mΛ(λ) is the multiplicity of the weight λ in the irreducible representation VΛ • ρ is the Weyl vector • The first sum is over all positive roots α. 
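Returning to the SU(2) example: the passage between the Weyl quotient and the weight-space sum of exponentials can be checked numerically. A small sketch comparing the two expressions for the character:

```python
import cmath
import math

def char_quotient(m, theta):
    """SU(2) character from the Weyl character formula."""
    return math.sin((m + 1) * theta) / math.sin(theta)

def char_sum(m, theta):
    """The same character expanded as a sum of exponentials over the
    weights m, m-2, ..., -m."""
    return sum(cmath.exp(1j * (m - 2 * k) * theta) for k in range(m + 1))

# The exponential sum is real (the weights are symmetric about 0) and
# agrees with the quotient at every theta that is not a multiple of pi.
```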
Weyl–Kac character formula The Weyl character formula also holds for integrable highest-weight representations of Kac–Moody algebras, when it is known as the Weyl–Kac character formula. Similarly there is a denominator identity for Kac–Moody algebras, which in the case of the affine Lie algebras is equivalent to the Macdonald identities. In the simplest case of the affine Lie algebra of type A1 this is the Jacobi triple product identity $\prod _{m=1}^{\infty }\left(1-x^{2m}\right)\left(1-x^{2m-1}y\right)\left(1-x^{2m-1}y^{-1}\right)=\sum _{n=-\infty }^{\infty }(-1)^{n}x^{n^{2}}y^{n}.$ The character formula can also be extended to integrable highest weight representations of generalized Kac–Moody algebras, when the character is given by ${\sum _{w\in W}(-1)^{\ell (w)}w(e^{\lambda +\rho }S) \over e^{\rho }\prod _{\alpha \in \Delta ^{+}}(1-e^{-\alpha })}.$ Here S is a correction term given in terms of the imaginary simple roots by $S=\sum _{I}(-1)^{|I|}e^{\Sigma I}\,$ where the sum runs over all finite subsets I of the imaginary simple roots which are pairwise orthogonal and orthogonal to the highest weight λ, and |I| is the cardinality of I and ΣI is the sum of the elements of I. The denominator formula for the monster Lie algebra is the product formula $j(p)-j(q)=\left({1 \over p}-{1 \over q}\right)\prod _{n,m=1}^{\infty }(1-p^{n}q^{m})^{c_{nm}}$ for the elliptic modular function j. 
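The Jacobi triple product identity above can be checked numerically for |x| < 1 by truncating both sides; the truncation level M = 50 and the sample point (x, y) below are arbitrary choices.

```python
def triple_product(x, y, M=50):
    """Truncated left-hand side of the Jacobi triple product identity."""
    p = 1.0
    for m in range(1, M + 1):
        p *= (1 - x ** (2 * m)) * (1 - x ** (2 * m - 1) * y) * (1 - x ** (2 * m - 1) / y)
    return p

def theta_sum(x, y, M=50):
    """Truncated right-hand side: sum of (-1)^n * x^(n^2) * y^n."""
    return sum((-1) ** n * x ** (n * n) * y ** n for n in range(-M, M + 1))

# For |x| < 1 both truncations converge extremely fast (the tail terms
# involve x^(n^2)), so the two sides agree to machine precision.
lhs = triple_product(0.3, 1.7)
rhs = theta_sum(0.3, 1.7)
```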
Peterson gave a recursion formula for the multiplicities mult(β) of the roots β of a symmetrizable (generalized) Kac–Moody algebra, which is equivalent to the Weyl–Kac denominator formula, but easier to use for calculations: $(\beta ,\beta -2\rho )c_{\beta }=\sum _{\gamma +\delta =\beta }(\gamma ,\delta )c_{\gamma }c_{\delta }\,$ where the sum is over positive roots γ, δ, and $c_{\beta }=\sum _{n\geq 1}{\operatorname {mult} (\beta /n) \over n}.$ Harish-Chandra character formula Harish-Chandra showed that Weyl's character formula admits a generalization to representations of a real, reductive group. Suppose $\pi $ is an irreducible, admissible representation of a real, reductive group G with infinitesimal character $\lambda $. Let $\Theta _{\pi }$ be the Harish-Chandra character of $\pi $; it is given by integration against an analytic function on the regular set. If H is a Cartan subgroup of G and H' is the set of regular elements in H, then $\Theta _{\pi }|_{H'}={\sum _{w\in W/W_{\lambda }}a_{w}e^{w\lambda } \over e^{\rho }\prod _{\alpha \in \Delta ^{+}}(1-e^{-\alpha })}.$ Here • W is the complex Weyl group of $H_{\mathbb {C} }$ with respect to $G_{\mathbb {C} }$ • $W_{\lambda }$ is the stabilizer of $\lambda $ in W and the rest of the notation is as above. The coefficients $a_{w}$ are still not well understood. Results on these coefficients may be found in papers of Herb, Adams, Schmid, and Schmid-Vilonen among others. See also • Character theory • Algebraic character • Demazure character formula • Weyl integration formula • Kirillov character formula References 1. Hall 2015 Section 12.4. 2. Hall 2015 Section 10.4. 3. Hall 2015 Section 12.5. 4. Hall 2015 Theorem 10.14 5. Hall 2015 Section 10.4. 6. Hall 2015 Section 12.3 7. See Hall 2015 Section 10.8 in the Lie algebra setting and Section 12.4 in the compact group setting 8. Hall 2015 Example 12.23 9. Hall 2015 Lemma 10.28. 10. Hall 2015 Exercise 9 in Chapter 10. 11. Hall 2015 Section 10.5. 12. 
Hall 2015 Example 10.23 13. Hall 2015 Section 10.6 14. Humphreys 1972 Section 22.3 • Fulton, William and Harris, Joe (1991). Representation theory: a first course. New York: Springer-Verlag. ISBN 0387974954. OCLC 22861245. • Hall, Brian C. (2015), Lie groups, Lie algebras, and representations: An elementary introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666 • Humphreys, James E. (1972), Introduction to Lie Algebras and Representation Theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90053-7. • Infinite dimensional Lie algebras, V. G. Kac, ISBN 0-521-37215-1 • Duncan J. Melville (2001) [1994], "Weyl–Kac character formula", Encyclopedia of Mathematics, EMS Press • Weyl, Hermann (1925), "Theorie der Darstellung kontinuierlicher halb-einfacher Gruppen durch lineare Transformationen. I", Mathematische Zeitschrift, Springer Berlin / Heidelberg, 23: 271–309, doi:10.1007/BF01506234, ISSN 0025-5874, S2CID 123145812 • Weyl, Hermann (1926a), "Theorie der Darstellung kontinuierlicher halb-einfacher Gruppen durch lineare Transformationen. II", Mathematische Zeitschrift, Springer Berlin / Heidelberg, 24: 328–376, doi:10.1007/BF01216788, ISSN 0025-5874, S2CID 186229448 • Weyl, Hermann (1926b), "Theorie der Darstellung kontinuierlicher halb-einfacher Gruppen durch lineare Transformationen. III", Mathematische Zeitschrift, Springer Berlin / Heidelberg, 24: 377–395, doi:10.1007/BF01216789, ISSN 0025-5874, S2CID 186232780
Spinor In geometry and physics, spinors /spɪnər/ are elements of a complex number-based vector space that can be associated with Euclidean space.[lower-alpha 2] A spinor transforms linearly when the Euclidean space is subjected to a slight (infinitesimal) rotation,[lower-alpha 3] but unlike geometric vectors and tensors, a spinor transforms to its negative when the space rotates through 360° (see picture). It takes a rotation of 720° for a spinor to go back to its original state. This property characterizes spinors: spinors can be viewed as the "square roots" of vectors (although this is inaccurate and may be misleading; they are better viewed as "square roots" of sections of vector bundles – in the case of the exterior algebra bundle of the cotangent bundle, they thus become "square roots" of differential forms). It is also possible to associate a substantially similar notion of spinor to Minkowski space, in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913.[1][lower-alpha 4] In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles.[lower-alpha 5] Spinors are characterized by the specific way in which they behave under rotations. They change in different ways depending not just on the overall final rotation, but the details of how that rotation was achieved (by a continuous path in the rotation group). There are two topologically distinguishable classes (homotopy classes) of paths through rotations that result in the same overall rotation, as illustrated by the belt trick puzzle. These two inequivalent classes yield spinor transformations of opposite sign. 
The spin group is the group of all rotations keeping track of the class.[lower-alpha 6] It doubly covers the rotation group, since each rotation can be obtained in two inequivalent ways as the endpoint of a path. The space of spinors by definition is equipped with a (complex) linear representation of the spin group, meaning that elements of the spin group act as linear transformations on the space of spinors, in a way that genuinely depends on the homotopy class.[lower-alpha 7] In mathematical terms, spinors are described by a double-valued projective representation of the rotation group SO(3). Although spinors can be defined purely as elements of a representation space of the spin group (or its Lie algebra of infinitesimal rotations), they are typically defined as elements of a vector space that carries a linear representation of the Clifford algebra. The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product in a basis-independent way. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way, and in applications the Clifford algebra is often the easiest to work with.[lower-alpha 8] A Clifford space operates on a spinor space, and the elements of a spinor space are spinors.[3] After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anti-commutation relations. The spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are a set of gamma matrices,[lower-alpha 9] and the two-component complex column vectors on which these matrices act are spinors. However, the particular matrix representation of the Clifford algebra, hence what precisely constitutes a "column vector" (or spinor), involves the choice of basis and gamma matrices in an essential way. 
As a representation of the spin group, this realization of spinors as (complex[lower-alpha 10]) column vectors will either be irreducible if the dimension is odd, or it will decompose into a pair of so-called "half-spin" or Weyl representations if the dimension is even.[lower-alpha 11] Introduction A gradual rotation can be visualized as a ribbon in space.[lower-alpha 12] Two gradual rotations with different classes, one through 360° and one through 720° are illustrated here in the belt trick puzzle. A solution of the puzzle is a continuous manipulation of the belt, fixing the endpoints, that untwists it. This is impossible with the 360° rotation, but possible with the 720° rotation. A solution, shown in the second animation, gives an explicit homotopy in the rotation group between the 720° rotation and the 0° identity rotation. What characterizes spinors and distinguishes them from geometric vectors and other tensors is subtle. Consider applying a rotation to the coordinates of a system. No object in the system itself has moved, only the coordinates have, so there will always be a compensating change in those coordinate values when applied to any object of the system. Geometrical vectors, for example, have components that will undergo the same rotation as the coordinates. More broadly, any tensor associated with the system (for instance, the stress of some medium) also has coordinate descriptions that adjust to compensate for changes to the coordinate system itself. Spinors do not appear at this level of the description of a physical system, when one is concerned only with the properties of a single isolated rotation of the coordinates. Rather, spinors appear when we imagine that instead of a single rotation, the coordinate system is gradually (continuously) rotated between some initial and final configuration. 
For any of the familiar and intuitive ("tensorial") quantities associated with the system, the transformation law does not depend on the precise details of how the coordinates arrived at their final configuration. Spinors, on the other hand, are constructed in such a way that makes them sensitive to how the gradual rotation of the coordinates arrived there: They exhibit path-dependence. It turns out that, for any final configuration of the coordinates, there are actually two ("topologically") inequivalent gradual (continuous) rotations of the coordinate system that result in this same configuration. This ambiguity is called the homotopy class of the gradual rotation. The belt trick puzzle (shown) demonstrates two different rotations, one through an angle of 2π and the other through an angle of 4π, having the same final configurations but different classes. Spinors actually exhibit a sign-reversal that genuinely depends on this homotopy class. This distinguishes them from vectors and other tensors, none of which can feel the class. Spinors can be exhibited as concrete objects using a choice of Cartesian coordinates. In three Euclidean dimensions, for instance, spinors can be constructed by making a choice of Pauli spin matrices corresponding to (angular momenta about) the three coordinate axes. These are 2×2 matrices with complex entries, and the two-component complex column vectors on which these matrices act by matrix multiplication are the spinors. In this case, the spin group is isomorphic to the group of 2×2 unitary matrices with determinant one, which naturally sits inside the matrix algebra. This group acts by conjugation on the real vector space spanned by the Pauli matrices themselves,[lower-alpha 13] realizing it as a group of rotations among them,[lower-alpha 14] but it also acts on the column vectors (that is, the spinors). 
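The sign reversal can be made completely concrete. A minimal numerical sketch, using the closed form exp(−iθσ_z/2) = diag(e^{−iθ/2}, e^{iθ/2}) for the spin-1/2 rotation about the z-axis (pure Python, no external libraries):

```python
import cmath
import math

def spin_half_rotation(theta):
    """Spinor rotation about the z-axis, exp(-i*theta*sigma_z/2), in closed form."""
    return [[cmath.exp(-1j * theta / 2), 0], [0, cmath.exp(1j * theta / 2)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

sigma_x = [[0, 1], [1, 0]]

# A 360-degree rotation: the induced action on vectors (conjugation on the
# Pauli matrices) is the identity, but the spinor rotation itself is -I.
U = spin_half_rotation(2 * math.pi)
vector_action = matmul(matmul(U, sigma_x), dagger(U))
# Only after 720 degrees does the spinor rotation return to +I.
```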
More generally, a Clifford algebra can be constructed from any vector space V equipped with a (nondegenerate) quadratic form, such as Euclidean space with its standard dot product or Minkowski space with its standard Lorentz metric. The space of spinors is the space of column vectors with $2^{\lfloor \dim V/2\rfloor }$ components. The orthogonal Lie algebra (i.e., the infinitesimal "rotations") and the spin group associated to the quadratic form are both (canonically) contained in the Clifford algebra, so every Clifford algebra representation also defines a representation of the Lie algebra and the spin group.[lower-alpha 15] Depending on the dimension and metric signature, this realization of spinors as column vectors may be irreducible or it may decompose into a pair of so-called "half-spin" or Weyl representations.[lower-alpha 16] When the vector space V is four-dimensional, the algebra is described by the gamma matrices. Mathematical definition The space of spinors is formally defined as the fundamental representation of the Clifford algebra. (This may or may not decompose into irreducible representations.) The space of spinors may also be defined as a spin representation of the orthogonal Lie algebra. These spin representations are also characterized as the finite-dimensional projective representations of the special orthogonal group that do not factor through linear representations. Equivalently, a spinor is an element of a finite-dimensional group representation of the spin group on which the center acts non-trivially. Overview There are essentially two frameworks for viewing the notion of a spinor: the representation theoretic point of view and the geometric point of view. Representation theoretic point of view From a representation theoretic point of view, one knows beforehand that there are some representations of the Lie algebra of the orthogonal group that cannot be formed by the usual tensor constructions. 
These missing representations are then labeled the spin representations, and their constituents spinors. From this view, a spinor must belong to a representation of the double cover of the rotation group SO(n,$\mathbb {R} $), or more generally of a double cover of the generalized special orthogonal group SO+(p, q, $\mathbb {R} $) on spaces with a metric signature of (p, q). These double covers are Lie groups, called the spin groups Spin(n) or Spin(p, q). All the properties of spinors, and their applications and derived objects, are manifested first in the spin group. Representations of the double covers of these groups yield double-valued projective representations of the groups themselves. (This means that the action of a particular rotation on vectors in the quantum Hilbert space is only defined up to a sign.) In summary, given a representation specified by the data $(V,{\text{Spin}}(p,q),\rho )$ where $V$ is a vector space over $K=\mathbb {R} $ or $\mathbb {C} $ and $\rho $ is a homomorphism $\rho :{\text{Spin}}(p,q)\rightarrow {\text{GL}}(V)$, a spinor is an element of the vector space $V$. Geometric point of view From a geometrical point of view, one can explicitly construct the spinors and then examine how they behave under the action of the relevant Lie groups. This latter approach has the advantage of providing a concrete and elementary description of what a spinor is. However, such a description becomes unwieldy when complicated properties of the spinors, such as Fierz identities, are needed. Clifford algebras Further information: Clifford algebra The language of Clifford algebras[4] (sometimes called geometric algebras) provides a complete picture of the spin representations of all the spin groups, and the various relationships between those representations, via the classification of Clifford algebras. It largely removes the need for ad hoc constructions. 
In detail, let V be a finite-dimensional complex vector space with nondegenerate symmetric bilinear form g. The Clifford algebra Cℓ(V, g) is the algebra generated by V along with the anticommutation relation xy + yx = 2g(x, y). It is an abstract version of the algebra generated by the gamma or Pauli matrices. If V = $\mathbb {C} ^{n}$, with the standard form $g(x,y)=x^{\mathrm {T} }y=x_{1}y_{1}+\cdots +x_{n}y_{n}$ we denote the Clifford algebra by Cℓn($\mathbb {C} $). Since by the choice of an orthonormal basis every complex vector space with non-degenerate form is isomorphic to this standard example, this notation is abused more generally if dim$\mathbb {C} $(V) = n. If n = 2k is even, Cℓn($\mathbb {C} $) is isomorphic as an algebra (in a non-unique way) to the algebra Mat($2^{k}$, $\mathbb {C} $) of $2^{k}\times 2^{k}$ complex matrices (by the Artin–Wedderburn theorem and the easy-to-prove fact that the Clifford algebra is central simple). If n = 2k + 1 is odd, Cℓ2k+1($\mathbb {C} $) is isomorphic to the algebra Mat($2^{k}$, $\mathbb {C} $) ⊕ Mat($2^{k}$, $\mathbb {C} $) of two copies of the $2^{k}\times 2^{k}$ complex matrices. Therefore, in either case Cℓ(V, g) has a unique (up to isomorphism) irreducible representation (also called simple Clifford module), commonly denoted by Δ, of dimension $2^{\lfloor n/2\rfloor }$. Since the Lie algebra so(V, g) is embedded as a Lie subalgebra in Cℓ(V, g) equipped with the Clifford algebra commutator as Lie bracket, the space Δ is also a Lie algebra representation of so(V, g) called a spin representation. If n is odd, this Lie algebra representation is irreducible. If n is even, it splits further into two irreducible representations Δ = Δ+ ⊕ Δ− called the Weyl or half-spin representations. Irreducible representations over the reals in the case when V is a real vector space are much more intricate, and the reader is referred to the Clifford algebra article for more details. 
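The even case n = 2k = 4 can be made concrete with a standard tensor-product model of the gamma matrices; the particular arrangement of Pauli factors below is just one of many equivalent choices of basis. The sketch verifies the canonical anticommutation relations that define the Clifford algebra.

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def kron(a, b):
    """Kronecker product of two matrices given as nested lists."""
    return [[a[i][j] * b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

# One concrete model of Cl_4(C) = Mat(4, C): gamma matrices as tensor
# products of Pauli matrices; the spinor space is C^4 = C^(2^k) with k = 2.
gammas = [kron(s1, I2), kron(s2, I2), kron(s3, s1), kron(s3, s2)]

def anticommutator(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] + ba[i][j] for j in range(4)] for i in range(4)]

# check {gamma_i, gamma_j} = 2 * delta_ij * I, i.e. xy + yx = 2 g(x, y)
# on the orthonormal generators
```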
Spin groups Spinors form a vector space, usually over the complex numbers, equipped with a linear group representation of the spin group that does not factor through a representation of the group of rotations (see diagram). The spin group is the group of rotations keeping track of the homotopy class. Spinors are needed to encode basic information about the topology of the group of rotations because that group is not simply connected, but the simply connected spin group is its double cover. So for every rotation there are two elements of the spin group that represent it. Geometric vectors and other tensors cannot feel the difference between these two elements, but they produce opposite signs when they affect any spinor under the representation. Thinking of the elements of the spin group as homotopy classes of one-parameter families of rotations, each rotation is represented by two distinct homotopy classes of paths to the identity. If a one-parameter family of rotations is visualized as a ribbon in space, with the arc length parameter of that ribbon being the parameter (its tangent, normal, binormal frame actually gives the rotation), then these two distinct homotopy classes are visualized in the two states of the belt trick puzzle (above). The space of spinors is an auxiliary vector space that can be constructed explicitly in coordinates, but ultimately only exists up to isomorphism in that there is no "natural" construction of them that does not rely on arbitrary choices such as coordinate systems. A notion of spinors can be associated, as such an auxiliary mathematical object, with any vector space equipped with a quadratic form such as Euclidean space with its standard dot product, or Minkowski space with its Lorentz metric. In the latter case, the "rotations" include the Lorentz boosts, but otherwise the theory is substantially similar. 
Spinor fields in physics The constructions given above, in terms of Clifford algebra or representation theory, can be thought of as defining spinors as geometric objects in zero-dimensional space-time. To obtain the spinors of physics, such as the Dirac spinor, one extends the construction to obtain a spin structure on 4-dimensional space-time (Minkowski space). Effectively, one starts with the tangent manifold of space-time, each point of which is a 4-dimensional vector space with SO(3,1) symmetry, and then builds the spin group at each point. The neighborhoods of points are endowed with concepts of smoothness and differentiability: the standard construction is one of a fiber bundle, the fibers of which are affine spaces transforming under the spin group. After constructing the fiber bundle, one may then consider differential equations, such as the Dirac equation or the Weyl equation, on the fiber bundle. These equations (Dirac or Weyl) have solutions that are plane waves, having symmetries characteristic of the fibers, i.e. having the symmetries of spinors, as obtained from the (zero-dimensional) Clifford algebra/spin representation theory described above. Such plane-wave solutions (or other solutions) of the differential equations can then properly be called fermions; fermions have the algebraic qualities of spinors. By general convention, the terms "fermion" and "spinor" are often used interchangeably in physics, as synonyms of one another. It appears that all fundamental particles in nature that are spin-1/2 are described by the Dirac equation, with the possible exception of the neutrino. There does not seem to be any a priori reason why this would be the case. A perfectly valid choice for spinors would be the non-complexified version of Cℓ2,2($\mathbb {R} $), the Majorana spinor.[5] There also does not seem to be any particular prohibition against Weyl spinors appearing in nature as fundamental particles.
The Dirac, Weyl, and Majorana spinors are interrelated, and their relation can be elucidated on the basis of real geometric algebra.[6] Dirac and Weyl spinors are complex representations while Majorana spinors are real representations. Weyl spinors are insufficient to describe massive particles, such as electrons, since the Weyl plane-wave solutions necessarily travel at the speed of light; for massive particles, the Dirac equation is needed. The initial construction of the Standard Model of particle physics starts with both the electron and the neutrino as massless Weyl spinors; the Higgs mechanism gives electrons a mass; the classical neutrino remained massless, and was thus an example of a Weyl spinor.[lower-alpha 17] However, because of observed neutrino oscillation, it is now believed that they are not Weyl spinors, but perhaps instead Majorana spinors.[7] It is not known whether Weyl spinor fundamental particles exist in nature. The situation for condensed matter physics is different: one can construct two and three-dimensional "spacetimes" in a large variety of different physical materials, ranging from semiconductors to far more exotic materials. In 2015, an international team led by Princeton University scientists announced that they had found a quasiparticle that behaves as a Weyl fermion.[8] Spinors in representation theory Main article: Spin representation One major mathematical application of the construction of spinors is to make possible the explicit construction of linear representations of the Lie algebras of the special orthogonal groups, and consequently spinor representations of the groups themselves. At a more profound level, spinors have been found to be at the heart of approaches to the Atiyah–Singer index theorem, and to provide constructions in particular for discrete series representations of semisimple groups. 
The spin representations of the special orthogonal Lie algebras are distinguished from the tensor representations given by Weyl's construction by the weights. Whereas the weights of the tensor representations are integer linear combinations of the roots of the Lie algebra, those of the spin representations are half-integer linear combinations thereof. Explicit details can be found in the spin representation article. Attempts at intuitive understanding The spinor can be described, in simple terms, as "vectors of a space the transformations of which are related in a particular way to rotations in physical space".[9] Stated differently: Spinors ... provide a linear representation of the group of rotations in a space with any number $n$ of dimensions, each spinor having $2^{\nu }$ components where $n=2\nu +1$ or $2\nu $.[2] Several ways of illustrating everyday analogies have been formulated in terms of the plate trick, tangloids and other examples of orientation entanglement. Nonetheless, the concept is generally considered notoriously difficult to understand, as illustrated by Michael Atiyah's statement that is recounted by Dirac's biographer Graham Farmelo: No one fully understands spinors. Their algebra is formally understood but their general significance is mysterious. 
In some sense they describe the "square root" of geometry and, just as understanding the square root of −1 took centuries, the same might be true of spinors.[10] History The most general mathematical form of spinors was discovered by Élie Cartan in 1913.[11] The word "spinor" was coined by Paul Ehrenfest in his work on quantum physics.[12] Spinors were first applied to mathematical physics by Wolfgang Pauli in 1927, when he introduced his spin matrices.[13] The following year, Paul Dirac discovered the fully relativistic theory of electron spin by showing the connection between spinors and the Lorentz group.[14] By the 1930s, Dirac, Piet Hein and others at the Niels Bohr Institute (then known as the Institute for Theoretical Physics of the University of Copenhagen) created toys such as Tangloids to teach and model the calculus of spinors. Spinor spaces were represented as left ideals of a matrix algebra in 1930, by G. Juvet[15] and by Fritz Sauter.[16][17] More specifically, instead of representing spinors as complex-valued 2D column vectors as Pauli had done, they represented them as complex-valued 2 × 2 matrices in which only the elements of the left column are non-zero. In this manner the spinor space became a minimal left ideal in Mat(2, $\mathbb {C} $).[lower-alpha 18][19] In 1947 Marcel Riesz constructed spinor spaces as elements of a minimal left ideal of Clifford algebras. In 1966/1967, David Hestenes[20][21] replaced spinor spaces by the even subalgebra Cℓ01,3($\mathbb {R} $) of the spacetime algebra Cℓ1,3($\mathbb {R} $).[17][19] Since the 1980s, the theoretical physics group at Birkbeck College around David Bohm and Basil Hiley has been developing algebraic approaches to quantum theory that build on Sauter and Riesz's identification of spinors with minimal left ideals. Examples Some simple examples of spinors in low dimensions arise from considering the even-graded subalgebras of the Clifford algebra Cℓp, q($\mathbb {R} $).
This is an algebra built up from an orthonormal basis of n = p + q mutually orthogonal vectors under addition and multiplication, p of which have norm +1 and q of which have norm −1, with the product rule for the basis vectors $e_{i}e_{j}={\begin{cases}+1&i=j,\,i\in (1,\ldots ,p)\\-1&i=j,\,i\in (p+1,\ldots ,n)\\-e_{j}e_{i}&i\neq j.\end{cases}}$ Two dimensions The Clifford algebra Cℓ2,0($\mathbb {R} $) is built up from a basis of one unit scalar, 1, two orthogonal unit vectors, σ1 and σ2, and one unit pseudoscalar i = σ1σ2. From the definitions above, it is evident that (σ1)2 = (σ2)2 = 1, and (σ1σ2)(σ1σ2) = −σ1σ1σ2σ2 = −1. The even subalgebra Cℓ02,0($\mathbb {R} $), spanned by even-graded basis elements of Cℓ2,0($\mathbb {R} $), determines the space of spinors via its representations. It is made up of real linear combinations of 1 and σ1σ2. As a real algebra, Cℓ02,0($\mathbb {R} $) is isomorphic to the field of complex numbers $\mathbb {C} $. As a result, it admits a conjugation operation (analogous to complex conjugation), sometimes called the reverse of a Clifford element, defined by $(a+b\sigma _{1}\sigma _{2})^{*}=a+b\sigma _{2}\sigma _{1},$ which, by the Clifford relations, can be written $(a+b\sigma _{1}\sigma _{2})^{*}=a+b\sigma _{2}\sigma _{1}=a-b\sigma _{1}\sigma _{2}.$ The action of an even Clifford element γ ∈ Cℓ02,0($\mathbb {R} $) on vectors, regarded as 1-graded elements of Cℓ2,0($\mathbb {R} $), is determined by mapping a general vector u = a1σ1 + a2σ2 to the vector $\gamma (u)=\gamma u\gamma ^{*},$ where $\gamma ^{*}$ is the conjugate of $\gamma $, and the product is Clifford multiplication. In this situation, a spinor[lower-alpha 19] is an ordinary complex number.
The action of $\gamma $ on a spinor $\phi $ is given by ordinary complex multiplication: $\gamma (\phi )=\gamma \phi .$ An important feature of this definition is the distinction between ordinary vectors and spinors, manifested in how the even-graded elements act on each of them in different ways. In general, a quick check of the Clifford relations reveals that even-graded elements conjugate-commute with ordinary vectors: $\gamma (u)=\gamma u\gamma ^{*}=\gamma ^{2}u.$ On the other hand, in comparison with its action on spinors $\gamma (\phi )=\gamma \phi $, the action of $\gamma $ on ordinary vectors appears as the square of its action on spinors. Consider, for example, the implication this has for plane rotations. Rotating a vector through an angle of θ corresponds to γ2 = exp(θ σ1σ2), so that the corresponding action on spinors is via γ = ± exp(θ σ1σ2/2). In general, because of logarithmic branching, it is impossible to choose a sign in a consistent way. Thus the representation of plane rotations on spinors is two-valued. In applications of spinors in two dimensions, it is common to exploit the fact that the algebra of even-graded elements (that is just the ring of complex numbers) is identical to the space of spinors. So, by abuse of language, the two are often conflated. One may then talk about "the action of a spinor on a vector". In a general setting, such statements are meaningless. But in dimensions 2 and 3 (as applied, for example, to computer graphics) they make sense. 
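The half-angle relation can be checked numerically in a 2 × 2 real-matrix representation of Cℓ2,0($\mathbb {R} $) (a sketch of my own, taking σ1 and σ2 to be anticommuting symmetric matrices squaring to +1): the even element γ = (1 − σ1σ2)/√2 rotates vectors by 90° via u ↦ γuγ*, but spinors by only 45° via φ ↦ γφ.

```python
import numpy as np

# A faithful 2x2 real-matrix representation of Cl_{2,0}(R):
s1 = np.array([[0., 1.], [1., 0.]])   # sigma_1, squares to +1
s2 = np.array([[1., 0.], [0., -1.]])  # sigma_2, squares to +1, anticommutes with s1
I = np.eye(2)
B = s1 @ s2                            # unit pseudoscalar sigma_1 sigma_2

assert np.allclose(B @ B, -I)          # (sigma_1 sigma_2)^2 = -1

a1, a2 = 0.3, 0.8
u = a1 * s1 + a2 * s2                  # a generic vector

# gamma = (1 - s1 s2)/sqrt(2) and its reverse gamma* = (1 + s1 s2)/sqrt(2)
g = (I - B) / np.sqrt(2)
g_rev = (I + B) / np.sqrt(2)

# Vector rotation by 90 degrees: gamma u gamma* = a1 s2 - a2 s1
assert np.allclose(g @ u @ g_rev, a1 * s2 - a2 * s1)

# Spinor rotation by only 45 degrees: gamma phi, with phi = a1 + a2 s1 s2
phi = a1 * I + a2 * B
expected = ((a1 + a2) * I + (a2 - a1) * B) / np.sqrt(2)
assert np.allclose(g @ phi, expected)
```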
Examples • The even-graded element $\gamma ={\tfrac {1}{\sqrt {2}}}(1-\sigma _{1}\sigma _{2})$ corresponds to a vector rotation of 90° from σ1 around towards σ2, which can be checked by confirming that ${\tfrac {1}{2}}(1-\sigma _{1}\sigma _{2})\{a_{1}\sigma _{1}+a_{2}\sigma _{2}\}(1-\sigma _{2}\sigma _{1})=a_{1}\sigma _{2}-a_{2}\sigma _{1}$ It corresponds to a spinor rotation of only 45°, however: ${\tfrac {1}{\sqrt {2}}}(1-\sigma _{1}\sigma _{2})\{a_{1}+a_{2}\sigma _{1}\sigma _{2}\}={\frac {a_{1}+a_{2}}{\sqrt {2}}}+{\frac {-a_{1}+a_{2}}{\sqrt {2}}}\sigma _{1}\sigma _{2}$ • Similarly the even-graded element γ = −σ1σ2 corresponds to a vector rotation of 180°: $(-\sigma _{1}\sigma _{2})\{a_{1}\sigma _{1}+a_{2}\sigma _{2}\}(-\sigma _{2}\sigma _{1})=-a_{1}\sigma _{1}-a_{2}\sigma _{2}$ but a spinor rotation of only 90°: $(-\sigma _{1}\sigma _{2})\{a_{1}+a_{2}\sigma _{1}\sigma _{2}\}=a_{2}-a_{1}\sigma _{1}\sigma _{2}$ • Continuing on further, the even-graded element γ = −1 corresponds to a vector rotation of 360°: $(-1)\{a_{1}\sigma _{1}+a_{2}\sigma _{2}\}\,(-1)=a_{1}\sigma _{1}+a_{2}\sigma _{2}$ but a spinor rotation of 180°. Three dimensions Main articles: Spinors in three dimensions and Quaternions and spatial rotation The Clifford algebra Cℓ3,0($\mathbb {R} $) is built up from a basis of one unit scalar, 1, three orthogonal unit vectors, σ1, σ2 and σ3, the three unit bivectors σ1σ2, σ2σ3, σ3σ1 and the pseudoscalar i = σ1σ2σ3. It is straightforward to show that (σ1)2 = (σ2)2 = (σ3)2 = 1, and (σ1σ2)2 = (σ2σ3)2 = (σ3σ1)2 = (σ1σ2σ3)2 = −1. 
The sub-algebra of even-graded elements is made up of scalar dilations, $u'=\rho ^{\left({\frac {1}{2}}\right)}u\rho ^{\left({\frac {1}{2}}\right)}=\rho u,$ and vector rotations $u'=\gamma u\gamma ^{*},$ where $\left.{\begin{aligned}\gamma &=\cos \left({\frac {\theta }{2}}\right)-\{a_{1}\sigma _{2}\sigma _{3}+a_{2}\sigma _{3}\sigma _{1}+a_{3}\sigma _{1}\sigma _{2}\}\sin \left({\frac {\theta }{2}}\right)\\&=\cos \left({\frac {\theta }{2}}\right)-i\{a_{1}\sigma _{1}+a_{2}\sigma _{2}+a_{3}\sigma _{3}\}\sin \left({\frac {\theta }{2}}\right)\\&=\cos \left({\frac {\theta }{2}}\right)-iv\sin \left({\frac {\theta }{2}}\right)\end{aligned}}\right\}$ (1) corresponds to a vector rotation through an angle θ about an axis defined by a unit vector v = a1σ1 + a2σ2 + a3σ3. As a special case, it is easy to see that, if v = σ3, this reproduces the σ1σ2 rotation considered in the previous section; and that such rotation leaves the coefficients of vectors in the σ3 direction invariant, since $\left[\cos \left({\frac {\theta }{2}}\right)-i\sigma _{3}\sin \left({\frac {\theta }{2}}\right)\right]\sigma _{3}\left[\cos \left({\frac {\theta }{2}}\right)+i\sigma _{3}\sin \left({\frac {\theta }{2}}\right)\right]=\left[\cos ^{2}\left({\frac {\theta }{2}}\right)+\sin ^{2}\left({\frac {\theta }{2}}\right)\right]\sigma _{3}=\sigma _{3}.$ The bivectors σ2σ3, σ3σ1 and σ1σ2 are in fact Hamilton's quaternions i, j, and k, discovered in 1843: ${\begin{aligned}\mathbf {i} &=-\sigma _{2}\sigma _{3}=-i\sigma _{1}\\\mathbf {j} &=-\sigma _{3}\sigma _{1}=-i\sigma _{2}\\\mathbf {k} &=-\sigma _{1}\sigma _{2}=-i\sigma _{3}\end{aligned}}$ With the identification of the even-graded elements with the algebra $\mathbb {H} $ of quaternions, as in the case of two dimensions the only representation of the algebra of even-graded elements is on itself.[lower-alpha 20] Thus the (real[lower-alpha 21]) spinors in three-dimensions are quaternions, and the action of an even-graded element on a spinor is given by ordinary 
quaternionic multiplication. Note that in the expression (1) for a vector rotation through an angle θ, the angle appearing in γ is halved. Thus the spinor rotation γ(ψ) = γψ (ordinary quaternionic multiplication) will rotate the spinor ψ through an angle one-half the measure of the angle of the corresponding vector rotation. Once again, the problem of lifting a vector rotation to a spinor rotation is two-valued: the expression (1) with (180° + θ/2) in place of θ/2 will produce the same vector rotation, but the negative of the spinor rotation. The spinor/quaternion representation of rotations in 3D is becoming increasingly prevalent in computer graphics and other applications, because of the notable brevity of the corresponding spin matrix, and the simplicity with which they can be multiplied together to calculate the combined effect of successive rotations about different axes. Explicit constructions A space of spinors can be constructed explicitly with concrete and abstract constructions. The equivalence of these constructions is a consequence of the uniqueness of the spinor representation of the complex Clifford algebra. For a complete example in dimension 3, see spinors in three dimensions. Component spinors Given a vector space V and a quadratic form g an explicit matrix representation of the Clifford algebra Cℓ(V, g) can be defined as follows. Choose an orthonormal basis e1 ... en for V, i.e. g(eμ, eν) = ημν where ημμ = ±1 and ημν = 0 for μ ≠ ν. Let k = ⌊n/2⌋. Fix a set of 2k × 2k matrices γ1 ... γn such that γμγν + γνγμ = 2ημν1 (i.e. fix a convention for the gamma matrices). Then the assignment eμ → γμ extends uniquely to an algebra homomorphism Cℓ(V, g) → Mat(2k, $\mathbb {C} $) by sending the monomial eμ1 ⋅⋅⋅ eμk in the Clifford algebra to the product γμ1 ⋅⋅⋅ γμk of matrices and extending linearly. The space $\Delta =\mathbb {C} ^{2^{k}}$ on which the gamma matrices act is now a space of spinors. One needs to construct such matrices explicitly, however.
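For instance (a sketch of my own, in the spirit of the Weyl–Brauer matrices rather than a quotation of them), gamma matrices for n = 4, k = 2 with Euclidean signature ημν = δμν can be assembled from Kronecker products of Pauli matrices and checked against the defining relation:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Four 4x4 gamma matrices for n = 4 (so k = 2), Euclidean signature:
gammas = [
    np.kron(sx, I2),
    np.kron(sy, I2),
    np.kron(sz, sx),
    np.kron(sz, sy),
]

# Defining relation: gamma_mu gamma_nu + gamma_nu gamma_mu = 2 eta_{mu nu} 1
I4 = np.eye(4, dtype=complex)
for mu, gm in enumerate(gammas):
    for nu, gn in enumerate(gammas):
        assert np.allclose(gm @ gn + gn @ gm, 2 * (mu == nu) * I4)
```

The spinor space here is Δ = $\mathbb {C} ^{4}$ = $\mathbb {C} ^{2^{k}}$, and a different choice of Kronecker factors would give an equivalent representation in a different basis.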
In dimension 3, defining the gamma matrices to be the Pauli sigma matrices gives rise to the familiar two-component spinors used in non-relativistic quantum mechanics. Likewise, using the 4 × 4 Dirac gamma matrices gives rise to the 4-component Dirac spinors used in (3+1)-dimensional relativistic quantum field theory. In general, in order to define gamma matrices of the required kind, one can use the Weyl–Brauer matrices. In this construction the representation of the Clifford algebra Cℓ(V, g), the Lie algebra so(V, g), and the spin group Spin(V, g) all depend on the choice of the orthonormal basis and the choice of the gamma matrices. This can cause confusion over conventions, but invariants like traces are independent of choices. In particular, all physically observable quantities must be independent of such choices. In this construction a spinor can be represented as a vector of 2k complex numbers and is denoted with spinor indices (usually α, β, γ). In the physics literature, such indices are often used to denote spinors even when an abstract spinor construction is used. Abstract spinors There are at least two different, but essentially equivalent, ways to define spinors abstractly. One approach seeks to identify the minimal ideals for the left action of Cℓ(V, g) on itself. These are subspaces of the Clifford algebra of the form Cℓ(V, g)ω, admitting the evident action of Cℓ(V, g) by left-multiplication: c : xω → cxω. There are two variations on this theme: one can either find a primitive element ω that is a nilpotent element of the Clifford algebra, or one that is an idempotent. The construction via nilpotent elements is more fundamental in the sense that an idempotent may then be produced from it.[22] In this way, the spinor representations are identified with certain subspaces of the Clifford algebra itself.
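As a minimal worked example of the nilpotent variant (my own, not from the article): in Cℓ2($\mathbb {C} $) with orthonormal basis e1, e2 (so e1² = e2² = 1 and e1e2 = −e2e1), take

```latex
\omega = \tfrac{1}{2}(e_1 + i e_2), \qquad
\omega^2 = \tfrac{1}{4}\bigl(e_1^2 + i(e_1 e_2 + e_2 e_1) - e_2^2\bigr)
         = \tfrac{1}{4}(1 + 0 - 1) = 0,
\qquad\text{and}\qquad
\mathrm{C}\ell_2(\mathbb{C})\,\omega = \operatorname{span}\{\omega,\; e_1\omega\},
\quad\text{since}\quad e_2\omega = i\,e_1\omega,\quad (e_1 e_2)\,\omega = i\,\omega .
```

The left ideal is therefore two-dimensional, matching dim Δ = 2^[n/2] = 2 for n = 2.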
The second approach is to construct a vector space using a distinguished subspace of V, and then specify the action of the Clifford algebra externally to that vector space. In either approach, the fundamental notion is that of an isotropic subspace W. Each construction depends on an initial freedom in choosing this subspace. In physical terms, this corresponds to the fact that there is no measurement protocol that can specify a basis of the spin space, even if a preferred basis of V is given. As above, we let (V, g) be an n-dimensional complex vector space equipped with a nondegenerate bilinear form. If V is a real vector space, then we replace V by its complexification $V\otimes _{\mathbb {R} }\mathbb {C} $ and let g denote the induced bilinear form on $V\otimes _{\mathbb {R} }\mathbb {C} $. Let W be a maximal isotropic subspace, i.e. a maximal subspace of V such that g|W = 0. If n = 2k is even, then let W′ be an isotropic subspace complementary to W. If n = 2k + 1 is odd, let W′ be a maximal isotropic subspace with W ∩ W′ = 0, and let U be the orthogonal complement of W ⊕ W′. In both the even- and odd-dimensional cases W and W′ have dimension k. In the odd-dimensional case, U is one-dimensional, spanned by a unit vector u. Minimal ideals Since W′ is isotropic, multiplication of elements of W′ inside Cℓ(V, g) is skew. Hence vectors in W′ anti-commute, and Cℓ(W′, g|W′) = Cℓ(W′, 0) is just the exterior algebra Λ∗W′. Consequently, the k-fold product of W′ with itself, W′k, is one-dimensional. Let ω be a generator of W′k. In terms of a basis w′1, ..., w′k of W′, one possibility is to set $\omega =w'_{1}w'_{2}\cdots w'_{k}.$ Note that ω2 = 0 (i.e., ω is nilpotent of order 2), and moreover, w′ω = 0 for all w′ ∈ W′. The following facts can be proven easily: 1. If n = 2k, then the left ideal Δ = Cℓ(V, g)ω is a minimal left ideal. Furthermore, this splits into the two spin spaces Δ+ = Cℓevenω and Δ− = Cℓoddω on restriction to the action of the even Clifford algebra.
2. If n = 2k + 1, then the action of the unit vector u on the left ideal Cℓ(V, g)ω decomposes the space into a pair of isomorphic irreducible eigenspaces (both denoted by Δ), corresponding to the respective eigenvalues +1 and −1. In detail, suppose for instance that n is even. Suppose that I is a non-zero left ideal contained in Cℓ(V, g)ω. We shall show that I must be equal to Cℓ(V, g)ω by proving that it contains a nonzero scalar multiple of ω. Fix a basis wi of W and a complementary basis wi′ of W′ so that wiwj′ +wj′wi = δij, and (wi)2 = 0, (wi′)2 = 0. Note that any element of I must have the form αω, by virtue of our assumption that I ⊂ Cℓ(V, g) ω. Let αω ∈ I be any such element. Using the chosen basis, we may write $\alpha =\sum _{i_{1}<i_{2}<\cdots <i_{p}}a_{i_{1}\dots i_{p}}w_{i_{1}}\cdots w_{i_{p}}+\sum _{j}B_{j}w'_{j}$ where the ai1...ip are scalars, and the Bj are auxiliary elements of the Clifford algebra. Observe now that the product $\alpha \omega =\sum _{i_{1}<i_{2}<\cdots <i_{p}}a_{i_{1}\dots i_{p}}w_{i_{1}}\cdots w_{i_{p}}\omega .$ Pick any nonzero monomial a in the expansion of α with maximal homogeneous degree in the elements wi: $a=a_{i_{1}\dots i_{\text{max}}}w_{i_{1}}\dots w_{i_{\text{max}}}$ (no summation implied), then $w'_{i_{\text{max}}}\cdots w'_{i_{1}}\alpha \omega =a_{i_{1}\dots i_{\text{max}}}\omega $ is a nonzero scalar multiple of ω, as required. Note that for n even, this computation also shows that $\Delta =\mathrm {C} \ell (W)\omega =\left(\Lambda ^{*}W\right)\omega $ as a vector space. In the last equality we again used that W is isotropic. In physics terms, this shows that Δ is built up like a Fock space by creating spinors using anti-commuting creation operators in W acting on a vacuum ω. Exterior algebra construction The computations with the minimal ideal construction suggest that a spinor representation can also be defined directly using the exterior algebra Λ∗ W = ⊕j Λj W of the isotropic subspace W. 
Let Δ = Λ∗ W denote the exterior algebra of W considered as vector space only. This will be the spin representation, and its elements will be referred to as spinors.[23][24] The action of the Clifford algebra on Δ is defined first by giving the action of an element of V on Δ, and then showing that this action respects the Clifford relation and so extends to a homomorphism of the full Clifford algebra into the endomorphism ring End(Δ) by the universal property of Clifford algebras. The details differ slightly according to whether the dimension of V is even or odd. When dim(V) is even, V = W ⊕ W′ where W′ is the chosen isotropic complement. Hence any v ∈ V decomposes uniquely as v = w + w′ with w ∈ W and w′ ∈ W′. The action of v on a spinor is given by $c(v)w_{1}\wedge \cdots \wedge w_{n}=\left(\epsilon (w)+i\left(w'\right)\right)\left(w_{1}\wedge \cdots \wedge w_{n}\right)$ where i(w′) is interior product with w′ using the nondegenerate quadratic form to identify V with V∗, and ε(w) denotes the exterior product. This action is sometimes called the Clifford product. It may be verified that $c(u)\,c(v)+c(v)\,c(u)=2\,g(u,v)\,,$ and so c respects the Clifford relations and extends to a homomorphism from the Clifford algebra to End(Δ). The spin representation Δ further decomposes into a pair of irreducible complex representations of the Spin group[25] (the half-spin representations, or Weyl spinors) via $\Delta _{+}=\Lambda ^{\text{even}}W,\,\Delta _{-}=\Lambda ^{\text{odd}}W.$ When dim(V) is odd, V = W ⊕ U ⊕ W′, where U is spanned by a unit vector u orthogonal to W. The Clifford action c is defined as before on W ⊕ W′, while the Clifford action of (multiples of) u is defined by $c(u)\alpha ={\begin{cases}\alpha &{\hbox{if }}\alpha \in \Lambda ^{\text{even}}W\\-\alpha &{\hbox{if }}\alpha \in \Lambda ^{\text{odd}}W\end{cases}}$ As before, one verifies that c respects the Clifford relations, and so induces a homomorphism. 
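This construction can be exercised directly in code (a sketch of my own, using the normalization g(wi, w′j) = ½δij so that 2g(wi, w′j) = δij, as in the minimal-ideal computation above): represent Λ∗W on basis subsets of {1, …, k}, let c(wi) act by exterior multiplication and c(w′j) by interior product, and check the Clifford relations.

```python
from itertools import combinations
import numpy as np

k = 3
# Basis of the exterior algebra of W: subsets of {0, ..., k-1}.
basis = [frozenset(s) for r in range(k + 1) for s in combinations(range(k), r)]
idx = {S: n for n, S in enumerate(basis)}
dim = len(basis)  # 2**k

def sgn(i, S):
    # (-1)^(number of basis factors in S preceding w_i), for antiderivation signs.
    return (-1) ** sum(1 for j in S if j < i)

def ext(i):
    # Exterior multiplication eps(w_i): S -> w_i ^ S.
    M = np.zeros((dim, dim))
    for S in basis:
        if i not in S:
            M[idx[S | {i}], idx[S]] = sgn(i, S)
    return M

def intr(i):
    # Interior product i(w'_i): contraction with the dual basis vector.
    M = np.zeros((dim, dim))
    for S in basis:
        if i in S:
            M[idx[S - {i}], idx[S]] = sgn(i, S)
    return M

c_w = [ext(i) for i in range(k)]    # c(w_i)
c_wp = [intr(i) for i in range(k)]  # c(w'_i)

anti = lambda A, B: A @ B + B @ A
I = np.eye(dim)
for i in range(k):
    for j in range(k):
        assert np.allclose(anti(c_w[i], c_w[j]), 0)            # W isotropic
        assert np.allclose(anti(c_wp[i], c_wp[j]), 0)          # W' isotropic
        assert np.allclose(anti(c_w[i], c_wp[j]), (i == j) * I)  # 2g(w_i, w'_j) = delta_ij
```

The same check passes for any k; only the 2^k-dimensional representation space grows.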
Hermitian vector spaces and spinors If the vector space V has extra structure that provides a decomposition of its complexification into two maximal isotropic subspaces, then the definition of spinors (by either method) becomes natural. The main example is the case that the real vector space V is a hermitian vector space (V, h), i.e., V is equipped with a complex structure J that is an orthogonal transformation with respect to the inner product g on V. Then $V\otimes _{\mathbb {R} }\mathbb {C} $ splits into the ±i eigenspaces of J. These eigenspaces are isotropic for the complexification of g and can be identified with the complex vector space (V, J) and its complex conjugate (V, −J). Therefore, for a hermitian vector space (V, h) the vector space $\Lambda _{\mathbb {C} }^{\cdot }{\bar {V}}$ (as well as its complex conjugate $\Lambda _{\mathbb {C} }^{\cdot }V$) is a spinor space for the underlying real Euclidean vector space. With the Clifford action as above but with contraction using the hermitian form, this construction gives a spinor space at every point of an almost Hermitian manifold and is the reason why every almost complex manifold (in particular every symplectic manifold) has a Spinc structure. Likewise, every complex vector bundle on a manifold carries a Spinc structure.[26] Clebsch–Gordan decomposition A number of Clebsch–Gordan decompositions are possible on the tensor product of one spin representation with another.[27] These decompositions express the tensor product in terms of the alternating representations of the orthogonal group. For the real or complex case, the alternating representations are • Γr = ΛrV, the representation of the orthogonal group on skew tensors of rank r. In addition, for the real orthogonal groups, there are three characters (one-dimensional representations) • σ+ : O(p, q) → {−1, +1} given by σ+(R) = −1, if R reverses the spatial orientation of V, +1, if R preserves the spatial orientation of V. (The spatial character.)
• σ− : O(p, q) → {−1, +1} given by σ−(R) = −1, if R reverses the temporal orientation of V, +1, if R preserves the temporal orientation of V. (The temporal character.) • σ = σ+σ− . (The orientation character.) The Clebsch–Gordan decomposition allows one to define, among other things: • An action of spinors on vectors. • A Hermitian metric on the complex representations of the real spin groups. • A Dirac operator on each spin representation. Even dimensions If n = 2k is even, then the tensor product of Δ with the contragredient representation decomposes as $\Delta \otimes \Delta ^{*}\cong \bigoplus _{p=0}^{n}\Gamma _{p}\cong \bigoplus _{p=0}^{k-1}\left(\Gamma _{p}\oplus \sigma \Gamma _{p}\right)\oplus \Gamma _{k}$ which can be seen explicitly by considering (in the Explicit construction) the action of the Clifford algebra on decomposable elements αω ⊗ βω′. The rightmost formulation follows from the transformation properties of the Hodge star operator. Note that on restriction to the even Clifford algebra, the paired summands Γp ⊕ σΓp are isomorphic, but under the full Clifford algebra they are not. There is a natural identification of Δ with its contragredient representation via the conjugation in the Clifford algebra: $(\alpha \omega )^{*}=\omega \left(\alpha ^{*}\right).$ So Δ ⊗ Δ also decomposes in the above manner. Furthermore, under the even Clifford algebra, the half-spin representations decompose ${\begin{aligned}\Delta _{+}\otimes \Delta _{+}^{*}\cong \Delta _{-}\otimes \Delta _{-}^{*}&\cong \bigoplus _{p=0}^{k}\Gamma _{2p}\\\Delta _{+}\otimes \Delta _{-}^{*}\cong \Delta _{-}\otimes \Delta _{+}^{*}&\cong \bigoplus _{p=0}^{k-1}\Gamma _{2p+1}\end{aligned}}$ For the complex representations of the real Clifford algebras, the associated reality structure on the complex Clifford algebra descends to the space of spinors (via the explicit construction in terms of minimal ideals, for instance). 
In this way, we obtain the complex conjugate Δ of the representation Δ, and the following isomorphism is seen to hold: ${\bar {\Delta }}\cong \sigma _{-}\Delta ^{*}$ In particular, note that the representation Δ of the orthochronous spin group is a unitary representation. In general, there are Clebsch–Gordan decompositions $\Delta \otimes {\bar {\Delta }}\cong \bigoplus _{p=0}^{k}\left(\sigma _{-}\Gamma _{p}\oplus \sigma _{+}\Gamma _{p}\right).$ In metric signature (p, q), the following isomorphisms hold for the conjugate half-spin representations • If q is even, then ${\bar {\Delta }}_{+}\cong \sigma _{-}\otimes \Delta _{+}^{*}$ and ${\bar {\Delta }}_{-}\cong \sigma _{-}\otimes \Delta _{-}^{*}.$ • If q is odd, then ${\bar {\Delta }}_{+}\cong \sigma _{-}\otimes \Delta _{-}^{*}$ and ${\bar {\Delta }}_{-}\cong \sigma _{-}\otimes \Delta _{+}^{*}.$ Using these isomorphisms, one can deduce analogous decompositions for the tensor products of the half-spin representations Δ± ⊗ Δ±. Odd dimensions If n = 2k + 1 is odd, then $\Delta \otimes \Delta ^{*}\cong \bigoplus _{p=0}^{k}\Gamma _{2p}.$ In the real case, once again the isomorphism holds ${\bar {\Delta }}\cong \sigma _{-}\Delta ^{*}.$ Hence there is a Clebsch–Gordan decomposition (again using the Hodge star to dualize) given by $\Delta \otimes {\bar {\Delta }}\cong \sigma _{-}\Gamma _{0}\oplus \sigma _{+}\Gamma _{1}\oplus \dots \oplus \sigma _{\pm }\Gamma _{k}$ Consequences There are many far-reaching consequences of the Clebsch–Gordan decompositions of the spinor spaces. The most fundamental of these pertain to Dirac's theory of the electron, among whose basic requirements are • A manner of regarding the product of two spinors ϕψ as a scalar. In physical terms, a spinor should determine a probability amplitude for the quantum state. • A manner of regarding the product ψϕ as a vector. This is an essential feature of Dirac's theory, which ties the spinor formalism to the geometry of physical space. 
• A manner of regarding a spinor as acting upon a vector, by an expression such as ψvψ. In physical terms, this represents an electric current of Maxwell's electromagnetic theory, or more generally a probability current. Summary in low dimensions • In 1 dimension (a trivial example), the single spinor representation is formally Majorana, a real 1-dimensional representation that does not transform. • In 2 Euclidean dimensions, the left-handed and the right-handed Weyl spinor are 1-component complex representations, i.e. complex numbers that get multiplied by e±iφ/2 under a rotation by angle φ. • In 3 Euclidean dimensions, the single spinor representation is 2-dimensional and quaternionic. The existence of spinors in 3 dimensions follows from the isomorphism of the groups SU(2) ≅ Spin(3) that allows us to define the action of Spin(3) on a complex 2-component column (a spinor); the generators of SU(2) can be written as Pauli matrices. • In 4 Euclidean dimensions, the corresponding isomorphism is Spin(4) ≅ SU(2) × SU(2). There are two inequivalent quaternionic 2-component Weyl spinors and each of them transforms under one of the SU(2) factors only. • In 5 Euclidean dimensions, the relevant isomorphism is Spin(5) ≅ USp(4) ≅ Sp(2) that implies that the single spinor representation is 4-dimensional and quaternionic. • In 6 Euclidean dimensions, the isomorphism Spin(6) ≅ SU(4) guarantees that there are two 4-dimensional complex Weyl representations that are complex conjugates of one another. • In 7 Euclidean dimensions, the single spinor representation is 8-dimensional and real; no isomorphisms to a Lie algebra from another series (A or C) exist from this dimension on. • In 8 Euclidean dimensions, there are two Weyl–Majorana real 8-dimensional representations that are related to the 8-dimensional real vector representation by a special property of Spin(8) called triality. 
• In d + 8 dimensions, the number of distinct irreducible spinor representations and their reality (whether they are real, pseudoreal, or complex) mimics the structure in d dimensions, but their dimensions are 16 times larger; this allows one to understand all remaining cases. See Bott periodicity. • In spacetimes with p spatial and q time-like directions, the dimensions viewed as dimensions over the complex numbers coincide with the case of the (p + q)-dimensional Euclidean space, but the reality projections mimic the structure in |p − q| Euclidean dimensions. For example, in 3 + 1 dimensions there are two non-equivalent Weyl complex (like in 2 dimensions) 2-component (like in 4 dimensions) spinors, which follows from the isomorphism SL(2, $\mathbb {C} $) ≅ Spin(3,1).

Metric signature | Weyl, complex (left-handed) | Weyl, complex (right-handed) | Conjugacy | Dirac, complex | Majorana–Weyl, real (left-handed) | Majorana–Weyl, real (right-handed) | Majorana, real
(2,0) | 1 | 1 | Mutual | 2 | – | – | 2
(1,1) | 1 | 1 | Self | 2 | 1 | 1 | 2
(3,0) | – | – | – | 2 | – | – | –
(2,1) | – | – | – | 2 | – | – | 2
(4,0) | 2 | 2 | Self | 4 | – | – | –
(3,1) | 2 | 2 | Mutual | 4 | – | – | 4
(5,0) | – | – | – | 4 | – | – | –
(4,1) | – | – | – | 4 | – | – | –
(6,0) | 4 | 4 | Mutual | 8 | – | – | 8
(5,1) | 4 | 4 | Self | 8 | – | – | –
(7,0) | – | – | – | 8 | – | – | 8
(6,1) | – | – | – | 8 | – | – | –
(8,0) | 8 | 8 | Self | 16 | 8 | 8 | 16
(7,1) | 8 | 8 | Mutual | 16 | – | – | 16
(9,0) | – | – | – | 16 | – | – | 16
(8,1) | – | – | – | 16 | – | – | 16

See also • Anyon • Dirac equation in the algebra of physical space • Eigenspinor • Einstein–Cartan theory • Projective representation • Pure spinor • Spin-1/2 • Spinor bundle • Supercharge • Twistor theory Notes 1. Spinors in three dimensions are points of a line bundle over a conic in the projective plane. In this picture, which is associated to spinors of a three-dimensional pseudo-Euclidean space of signature (1,2), the conic is an ordinary real conic (here the circle), the line bundle is the Möbius bundle, and the spin group is SL2($\mathbb {R} $). In Euclidean signature, the projective plane, conic and line bundle are over the complex instead, and this picture is just a real slice. 2. Spinors can always be defined over the complex numbers. However, in some signatures there exist real spinors.
Details can be found in spin representation.
3. A formal definition of spinors at this level is that the space of spinors is a linear representation of the Lie algebra of infinitesimal rotations of a certain kind.
4. "Spinors were first used under that name, by physicists, in the field of Quantum Mechanics. In their most general form, spinors were discovered in 1913 by the author of this work, in his investigations on the linear representations of simple groups*; they provide a linear representation of the group of rotations in a space with any number $n$ of dimensions, each spinor having $2^{\nu }$ components where $n=2\nu +1$ or $2\nu $."[2] The star (*) refers to Cartan (1913).
5. More precisely, it is the fermions of spin-1/2 that are described by spinors, which is true both in the relativistic and non-relativistic theory. The wavefunction of the non-relativistic electron has values in 2-component spinors transforming under 3-dimensional infinitesimal rotations. The relativistic Dirac equation for the electron is an equation for 4-component spinors transforming under infinitesimal Lorentz transformations, for which a substantially similar theory of spinors exists.
6. Formally, the spin group is the group of relative homotopy classes with fixed endpoints in the rotation group.
7. More formally, the space of spinors can be defined as an (irreducible) representation of the spin group that does not factor through a representation of the rotation group (in general, the connected component of the identity of the orthogonal group).
8. Geometric algebra is a name for the Clifford algebra in an applied setting.
9. The Pauli matrices correspond to angular momentum operators about the three coordinate axes. This makes them slightly atypical gamma matrices because in addition to their anticommutation relation they also satisfy commutation relations.
10. The metric signature is relevant as well if we are concerned with real spinors. See spin representation.
11.
Whether the representation decomposes depends on whether they are regarded as representations of the spin group (or its Lie algebra), in which case it decomposes in even but not odd dimensions, or the Clifford algebra, when it is the other way around. Other structures than this decomposition can also exist; precise criteria are covered at spin representation and Clifford algebra.
12. The TNB frame of the ribbon defines a rotation continuously for each value of the arc length parameter.
13. This is the set of 2×2 complex traceless hermitian matrices.
14. Except for a kernel of $\{\pm 1\}$ corresponding to the two different elements of the spin group that go to the same rotation.
15. So the ambiguity in identifying the spinors themselves persists from the point of view of the group theory, and still depends on choices.
16. The Clifford algebra can be given an even/odd grading from the parity of the degree in the gammas, and the spin group and its Lie algebra both lie in the even part. Whether here by "representation" we mean representations of the spin group or the Clifford algebra will affect the determination of their reducibility. Other structures than this splitting can also exist; precise criteria are covered at spin representation and Clifford algebra.
17. More precisely, the electron starts out as two massless Weyl spinors, left and right-handed. Upon symmetry breaking, both gain a mass, and are coupled to form a Dirac spinor.
18. The matrices of dimension N × N in which only the elements of the left column are non-zero form a left ideal in the N × N matrix algebra Mat(N, $\mathbb {C} $) – multiplying such a matrix M from the left with any N × N matrix A gives the result AM that is again an N × N matrix in which only the elements of the left column are non-zero. Moreover, it can be shown that it is a minimal left ideal.[18]
19. These are the right-handed Weyl spinors in two dimensions. For the left-handed Weyl spinors, the representation is via γ(ϕ) = γϕ.
The Majorana spinors are the common underlying real representation for the Weyl representations.
20. Since, for a skew field, the kernel of the representation must be trivial. So inequivalent representations can only arise via an automorphism of the skew-field. In this case, there are a pair of equivalent representations: γ(ϕ) = γϕ, and its quaternionic conjugate γ(ϕ) = ϕγ.
21. The complex spinors are obtained as the representations of the tensor product $\mathbb {H} \otimes _{\mathbb {R} }\mathbb {C} $ = Mat2($\mathbb {C} $). These are considered in more detail in spinors in three dimensions.

References

1. Cartan 1913.
2. Quote from Elie Cartan: The Theory of Spinors, Hermann, Paris, 1966, first sentence of the Introduction section at the beginning of the book, before page numbers start.
3. Rukhsan-Ul-Haq (December 2016). "Geometry of Spin: Clifford Algebraic Approach". Resonance. 21 (12): 1105–1117. doi:10.1007/s12045-016-0422-5. S2CID 126053475.
4. Named after William Kingdon Clifford.
5. Named after Ettore Majorana.
6. Francis, Matthew R.; Kosowsky, Arthur (2005) [20 March 2004]. "The construction of spinors in geometric algebra". Annals of Physics. 317 (2): 383–409. arXiv:math-ph/0403040. Bibcode:2005AnPhy.317..383F. doi:10.1016/j.aop.2004.11.008. S2CID 119632876.
7. Wilczek, Frank (2009). "Majorana returns". Nature Physics. Macmillan Publishers. 5 (9): 614–618. Bibcode:2009NatPh...5..614W. doi:10.1038/nphys1380. ISSN 1745-2473.
8. Xu, Yang-Su; et al. (2015). "Discovery of a Weyl Fermion semimetal and topological Fermi arcs". Science Magazine. AAAS. 349 (6248): 613–617. arXiv:1502.03807. Bibcode:2015Sci...349..613X. doi:10.1126/science.aaa9297. ISSN 0036-8075. PMID 26184916. S2CID 206636457.
9. Jean Hladik: Spinors in Physics, translated by J. M. Cole, Springer 1999, ISBN 978-0-387-98647-0, p. 3.
10. Farmelo, Graham (2009). The Strangest Man: The hidden life of Paul Dirac, quantum genius. Faber & Faber. p. 430. ISBN 978-0-571-22286-5.
11. Cartan 1913.
12.
Tomonaga 1998, p. 129.
13. Pauli 1927.
14. Dirac 1928.
15. Juvet, G. (1930). "Opérateurs de Dirac et équations de Maxwell". Commentarii Mathematici Helvetici (in French). 2: 225–235. doi:10.1007/BF01214461. S2CID 121226923.
16. Sauter, F. (1930). "Lösung der Diracschen Gleichungen ohne Spezialisierung der Diracschen Operatoren". Zeitschrift für Physik. 63 (11–12): 803–814. Bibcode:1930ZPhy...63..803S. doi:10.1007/BF01339277. S2CID 122940202.
17. Pertti Lounesto: Crumeyrolle's bivectors and spinors, pp. 137–166, In: Rafał Abłamowicz, Pertti Lounesto (eds.): Clifford algebras and spinor structures: A Special Volume Dedicated to the Memory of Albert Crumeyrolle (1919–1992), ISBN 0-7923-3366-7, 1995, p. 151.
18. See also: Pertti Lounesto: Clifford algebras and spinors, London Mathematical Society Lecture Notes Series 286, Cambridge University Press, Second Edition 2001, ISBN 978-0-521-00551-7, p. 52.
19. Pertti Lounesto: Clifford algebras and spinors, London Mathematical Society Lecture Notes Series 286, Cambridge University Press, Second Edition 2001, ISBN 978-0-521-00551-7, p. 148 f. and p. 327 f.
20. D. Hestenes: Space–Time Algebra, Gordon and Breach, New York, 1966, 1987, 1992.
21. Hestenes, D. (1967). "Real spinor fields". J. Math. Phys. 8 (4): 798–808. Bibcode:1967JMP.....8..798H. doi:10.1063/1.1705279. S2CID 13371668.
22. This construction is due to Cartan (1913). The treatment here is based on Chevalley (1954).
23. One source for this subsection is Fulton & Harris (1991).
24. Jürgen Jost, Riemannian Geometry and Geometric Analysis (2002), Springer-Verlag Universitext, ISBN 3-540-42627-2. See chapter 1.
25. Via the even-graded Clifford algebra.
26. Lawson & Michelsohn 1989, Appendix D.
27. Brauer & Weyl 1935.

Further reading

• Brauer, Richard; Weyl, Hermann (1935). "Spinors in n dimensions". American Journal of Mathematics. The Johns Hopkins University Press. 57 (2): 425–449. doi:10.2307/2371218. JSTOR 2371218.
• Cartan, Élie (1913). "Les groupes projectifs qui ne laissent invariante aucune multiplicité plane" (PDF). Bull. Soc. Math. Fr. 41: 53–96. doi:10.24033/bsmf.916.
• Cartan, Élie (1981) [1966]. The Theory of Spinors (reprint ed.). Paris, FR: Hermann (1966); Dover Publications (1981). ISBN 978-0-486-64070-9.
• Chevalley, Claude (1996) [1954]. The Algebraic Theory of Spinors and Clifford Algebras (reprint ed.). Columbia University Press (1954); Springer (1996). ISBN 978-3-540-57063-9.
• Dirac, Paul M. (1928). "The quantum theory of the electron". Proceedings of the Royal Society of London A. 117 (778): 610–624. Bibcode:1928RSPSA.117..610D. doi:10.1098/rspa.1928.0023. JSTOR 94981.
• Fulton, William; Harris, Joe (1991). Representation Theory: A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 0-387-97495-4. MR 1153249.
• Gilkey, Peter B. (1984). Invariance Theory: The heat equation, and the Atiyah–Singer index theorem. Publish or Perish. ISBN 0-914098-20-9.
• Harvey, F. Reese (1990). Spinors and Calibrations. Academic Press. ISBN 978-0-12-329650-4.
• Hitchin, Nigel J. (1974). "Harmonic spinors". Advances in Mathematics. 14: 1–55. doi:10.1016/0001-8708(74)90021-8. MR 0358873.
• Lawson, H. Blaine; Michelsohn, Marie-Louise (1989). Spin Geometry. Princeton University Press. ISBN 0-691-08542-0.
• Pauli, Wolfgang (1927). "Zur Quantenmechanik des magnetischen Elektrons". Zeitschrift für Physik. 43 (9–10): 601–632. Bibcode:1927ZPhy...43..601P. doi:10.1007/BF01397326. S2CID 128228729.
• Penrose, Roger; Rindler, W. (1988). Spinor and twistor methods in space-time geometry. Spinors and Space-Time. Vol. 2. Cambridge University Press. ISBN 0-521-34786-6.
• Tomonaga, Sin-Itiro (1998). "Lecture 7: The quantity which is neither vector nor tensor". The Story of Spin. University of Chicago Press. p. 129. ISBN 0-226-80794-0.
Tensors Glossary of tensor theory Scope Mathematics • Coordinate system • Differential geometry • Dyadic algebra • Euclidean geometry • Exterior calculus • Multilinear algebra • Tensor algebra • Tensor calculus • Physics • Engineering • Computer vision • Continuum mechanics • Electromagnetism • General relativity • Transport phenomena Notation • Abstract index notation • Einstein notation • Index notation • Multi-index notation • Penrose graphical notation • Ricci calculus • Tetrad (index notation) • Van der Waerden notation • Voigt notation Tensor definitions • Tensor (intrinsic definition) • Tensor field • Tensor density • Tensors in curvilinear coordinates • Mixed tensor • Antisymmetric tensor • Symmetric tensor • Tensor operator • Tensor bundle • Two-point tensor Operations • Covariant derivative • Exterior covariant derivative • Exterior derivative • Exterior product • Hodge star operator • Lie derivative • Raising and lowering indices • Symmetrization • Tensor contraction • Tensor product • Transpose (2nd-order tensors) Related abstractions • Affine connection • Basis • Cartan formalism (physics) • Connection form • Covariance and contravariance of vectors • Differential form • Dimension • Exterior form • Fiber bundle • Geodesic • Levi-Civita connection • Linear map • Manifold • Matrix • Multivector • Pseudotensor • Spinor • Vector • Vector space Notable tensors Mathematics • Kronecker delta • Levi-Civita symbol • Metric tensor • Nonmetricity tensor • Ricci curvature • Riemann curvature tensor • Torsion tensor • Weyl tensor Physics • Moment of inertia • Angular momentum tensor • Spin tensor • Cauchy stress tensor • stress–energy tensor • Einstein tensor • EM tensor • Gluon field strength tensor • Metric tensor (GR) Mathematicians • Élie Cartan • Augustin-Louis Cauchy • Elwin Bruno Christoffel • Albert Einstein • Leonhard Euler • Carl Friedrich Gauss • Hermann Grassmann • Tullio Levi-Civita • Gregorio Ricci-Curbastro • Bernhard Riemann • Jan Arnoldus Schouten 
• Woldemar Voigt • Hermann Weyl
Wikipedia
Schouten tensor In Riemannian geometry the Schouten tensor is a second-order tensor introduced by Jan Arnoldus Schouten defined for n ≥ 3 by: $P={\frac {1}{n-2}}\left(\mathrm {Ric} -{\frac {R}{2(n-1)}}g\right)\,\Leftrightarrow \mathrm {Ric} =(n-2)P+Jg\,,$ where Ric is the Ricci tensor (defined by contracting the first and third indices of the Riemann tensor), R is the scalar curvature, g is the Riemannian metric, $J={\frac {1}{2(n-1)}}R$ is the trace of P and n is the dimension of the manifold. The Weyl tensor equals the Riemann curvature tensor minus the Kulkarni–Nomizu product of the Schouten tensor with the metric. In an index notation $R_{ijkl}=W_{ijkl}+g_{ik}P_{jl}-g_{jk}P_{il}-g_{il}P_{jk}+g_{jl}P_{ik}\,.$ The Schouten tensor often appears in conformal geometry because of its relatively simple conformal transformation law $g_{ij}\mapsto \Omega ^{2}g_{ij}\Rightarrow P_{ij}\mapsto P_{ij}-\nabla _{i}\Upsilon _{j}+\Upsilon _{i}\Upsilon _{j}-{\frac {1}{2}}\Upsilon _{k}\Upsilon ^{k}g_{ij}\,,$ where $\Upsilon _{i}:=\Omega ^{-1}\partial _{i}\Omega \,.$ Further reading • Arthur L. Besse, Einstein Manifolds. Springer-Verlag, 2007. See Ch.1 §J "Conformal Changes of Riemannian Metrics." • Spyros Alexakis, The Decomposition of Global Conformal Invariants. Princeton University Press, 2012. Ch.2, noting in a footnote that the Schouten tensor is a "trace-adjusted Ricci tensor" and may be considered as "essentially the Ricci tensor." • Wolfgang Kuhnel and Hans-Bert Rademacher, "Conformal diffeomorphisms preserving the Ricci tensor", Proc. Amer. Math. Soc. 123 (1995), no. 9, 2841–2848. Online eprint (pdf). • T. Bailey, M.G. Eastwood and A.R. Gover, "Thomas's Structure Bundle for Conformal, Projective and Related Structures", Rocky Mountain Journal of Mathematics, vol. 24, Number 4, 1191-1217. See also • Weyl–Schouten theorem • Cotton tensor
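As a quick numerical sanity check of the formulas above (a sketch assuming NumPy; the helper name `schouten` is ours, not a standard API): on a space of constant sectional curvature K, in an orthonormal frame, Ric = (n − 1)K g and R = n(n − 1)K, so the Schouten tensor should reduce to (K/2)g, and the equivalent form Ric = (n − 2)P + Jg should hold.

```python
import numpy as np

def schouten(ric, R, g, n):
    """Schouten tensor P = (Ric - R/(2(n-1)) g) / (n-2), for n >= 3."""
    return (ric - R / (2 * (n - 1)) * g) / (n - 2)

# Constant-curvature data in an orthonormal frame (g = identity):
# Ric = (n-1) K g and R = n (n-1) K.
n, K = 4, 2.0
g = np.eye(n)
ric = (n - 1) * K * g
R = n * (n - 1) * K

P = schouten(ric, R, g, n)
J = R / (2 * (n - 1))  # should be the trace of P

assert np.allclose(P, (K / 2) * g)            # P reduces to (K/2) g
assert np.allclose(ric, (n - 2) * P + J * g)  # the equivalent form above
assert np.isclose(np.trace(P), J)             # J really is the trace of P
```

The same helper applies to any Ricci/scalar-curvature data in a frame where the metric components are known; the constant-curvature case is chosen only because the expected answer is available in closed form.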
Weyl–von Neumann theorem In mathematics, the Weyl–von Neumann theorem is a result in operator theory due to Hermann Weyl and John von Neumann. It states that, after the addition of a compact operator (Weyl (1909)) or Hilbert–Schmidt operator (von Neumann (1935)) of arbitrarily small norm, a bounded self-adjoint operator or unitary operator on a Hilbert space is conjugate by a unitary operator to a diagonal operator. The results are subsumed in later generalizations for bounded normal operators due to David Berg (1971, compact perturbation) and Dan-Virgil Voiculescu (1979, Hilbert–Schmidt perturbation). The theorem and its generalizations were one of the starting points of operator K-homology, developed first by Lawrence G. Brown, Ronald Douglas and Peter Fillmore and, in greater generality, by Gennadi Kasparov. In 1958 Kuroda showed that the Weyl–von Neumann theorem is also true if the Hilbert–Schmidt class is replaced by any Schatten class Sp with p ≠ 1. For S1, the trace-class operators, the situation is quite different. The Kato–Rosenblum theorem, proved in 1957 using scattering theory, states that if two bounded self-adjoint operators differ by a trace-class operator, then their absolutely continuous parts are unitarily equivalent. In particular if a self-adjoint operator has absolutely continuous spectrum, no perturbation of it by a trace-class operator can be unitarily equivalent to a diagonal operator. References • Conway, John B. (2000), A Course in Operator Theory, Graduate Studies in Mathematics, American Mathematical Society, ISBN 0821820656 • Davidson, Kenneth R. (1996), C*-Algebras by Example, Fields Institute Monographs, vol. 6, American Mathematical Society, ISBN 0821805991 • Higson, Nigel; Roe, John (2000), Analytic K-Homology, Oxford University Press, ISBN 0198511760 • Katō, Tosio (1995), Perturbation Theory for Linear Operators, Grundlehren der mathematischen Wissenschaften, vol. 
132 (2nd ed.), Springer, ISBN 354058661X • Martin, Mircea; Putinar, Mihai (1989), Lectures on hyponormal operators, Operator theory, advances and applications, vol. 39, Birkhäuser Verlag, ISBN 0817623299 • Reed, Michael; Simon, Barry (1979), Methods of modern mathematical physics, III: Scattering theory, Academic Press, ISBN 0125850034 • Simon, Barry (2010), Trace Ideals and Their Applications, Mathematical Surveys and Monographs (2nd ed.), American Mathematical Society, ISBN 978-0821849880 • von Neumann, John (1935), Charakterisierung des Spektrums eines Integraloperators, Actualités Sci. Indust., vol. 229, Hermann • Weyl, Hermann (1909), "Über beschränkte quadratische Formen, deren Differenz vollstetig ist", Rend. Circolo Mat. Palermo, 27: 373–392, doi:10.1007/bf03019655, S2CID 122374162 Functional analysis (topics – glossary) Spaces • Banach • Besov • Fréchet • Hilbert • Hölder • Nuclear • Orlicz • Schwartz • Sobolev • Topological vector Properties • Barrelled • Complete • Dual (Algebraic/Topological) • Locally convex • Reflexive • Separable Theorems • Hahn–Banach • Riesz representation • Closed graph • Uniform boundedness principle • Kakutani fixed-point • Krein–Milman • Min–max • Gelfand–Naimark • Banach–Alaoglu Operators • Adjoint • Bounded • Compact • Hilbert–Schmidt • Normal • Nuclear • Trace class • Transpose • Unbounded • Unitary Algebras • Banach algebra • C*-algebra • Spectrum of a C*-algebra • Operator algebra • Group algebra of a locally compact group • Von Neumann algebra Open problems • Invariant subspace problem • Mahler's conjecture Applications • Hardy space • Spectral theory of ordinary differential equations • Heat kernel • Index theorem • Calculus of variations • Functional calculus • Integral operator • Jones polynomial • Topological quantum field theory • Noncommutative geometry • Riemann hypothesis • Distribution (or Generalized functions) Advanced topics • Approximation property • Balanced set • Choquet theory • Weak topology •
Banach–Mazur distance • Tomita–Takesaki theory •  Mathematics portal • Category • Commons
Weyl algebra In abstract algebra, the Weyl algebra is the ring of differential operators with polynomial coefficients (in one variable), namely expressions of the form $f_{m}(X)\partial _{X}^{m}+f_{m-1}(X)\partial _{X}^{m-1}+\cdots +f_{1}(X)\partial _{X}+f_{0}(X).$ More precisely, let F be the underlying field, and let F[X] be the ring of polynomials in one variable, X, with coefficients in F. Then each fi lies in F[X]. ∂X is the derivative with respect to X. The algebra is generated by X and ∂X. The Weyl algebra is an example of a simple ring that is not a matrix ring over a division ring. It is also a noncommutative example of a domain, and an example of an Ore extension. The Weyl algebra is isomorphic to the quotient of the free algebra on two generators, X and Y, by the ideal generated by the element $YX-XY=1~.$ The Weyl algebra is the first in an infinite family of algebras, also known as Weyl algebras. The n-th Weyl algebra, An, is the ring of differential operators with polynomial coefficients in n variables. It is generated by Xi and ∂Xi, i = 1, ..., n. Weyl algebras are named after Hermann Weyl, who introduced them to study the Heisenberg uncertainty principle in quantum mechanics. It is a quotient of the universal enveloping algebra of the Heisenberg algebra, the Lie algebra of the Heisenberg group, by setting the central element of the Heisenberg algebra (namely [X,Y]) equal to the unit of the universal enveloping algebra (called 1 above). The Weyl algebra is also referred to as the symplectic Clifford algebra.[1][2][3] Weyl algebras represent the same structure for symplectic bilinear forms that Clifford algebras represent for non-degenerate symmetric bilinear forms.[1] Generators and relations One may give an abstract construction of the algebras An in terms of generators and relations. Start with an abstract vector space V (of dimension 2n) equipped with a symplectic form ω. 
Define the Weyl algebra W(V) to be $W(V):=T(V)/(\!(v\otimes u-u\otimes v-\omega (v,u),{\text{ for }}v,u\in V)\!),$ where T(V) is the tensor algebra on V, and the notation $(\!()\!)$ means "the ideal generated by". In other words, W(V) is the algebra generated by V subject only to the relation vu − uv = ω(v, u). Then, W(V) is isomorphic to An via the choice of a Darboux basis for ω. Quantization The algebra W(V) is a quantization of the symmetric algebra Sym(V). If V is over a field of characteristic zero, then W(V) is naturally isomorphic to the underlying vector space of the symmetric algebra Sym(V) equipped with a deformed product – called the Groenewold–Moyal product (considering the symmetric algebra to be polynomial functions on V∗, where the variables span the vector space V, and replacing iħ in the Moyal product formula with 1). The isomorphism is given by the symmetrization map from Sym(V) to W(V) $a_{1}\cdots a_{n}\mapsto {\frac {1}{n!}}\sum _{\sigma \in S_{n}}a_{\sigma (1)}\otimes \cdots \otimes a_{\sigma (n)}~.$ If one prefers to have the iħ and work over the complex numbers, one could have instead defined the Weyl algebra above as generated by Xi and iħ∂Xi (as per quantum mechanics usage). Thus, the Weyl algebra is a quantization of the symmetric algebra, which is essentially the same as the Moyal quantization (if for the latter one restricts to polynomial functions), but the former is in terms of generators and relations (considered to be differential operators) and the latter is in terms of a deformed multiplication. In the case of exterior algebras, the analogous quantization to the Weyl one is the Clifford algebra, which is also referred to as the orthogonal Clifford algebra.[2][4] Properties of the Weyl algebra Further information: Stone–von Neumann theorem In the case that the ground field F has characteristic zero, the nth Weyl algebra is a simple Noetherian domain. 
It has global dimension n, in contrast to the ring it deforms, Sym(V), which has global dimension 2n. It has no finite-dimensional representations. Although this follows from simplicity, it can be more directly shown by taking the trace of σ(X) and σ(Y) for some finite-dimensional representation σ (where [X,Y] = 1). $\mathrm {tr} ([\sigma (X),\sigma (Y)])=\mathrm {tr} (1)~.$ Since the trace of a commutator is zero, and the trace of the identity is the dimension of the representation, the representation must be zero-dimensional. In fact, there are stronger statements than the absence of finite-dimensional representations. To any finitely generated $A_{n}$-module M, there is a corresponding subvariety Char(M) of V × V∗ called the 'characteristic variety' whose size roughly corresponds to the size of M (a finite-dimensional module would have zero-dimensional characteristic variety). Then Bernstein's inequality states that for M non-zero, $\dim(\operatorname {Char} (M))\geq n~.$ An even stronger statement is Gabber's theorem, which states that Char(M) is a co-isotropic subvariety of V × V∗ for the natural symplectic form.

Positive characteristic

The situation is considerably different in the case of a Weyl algebra over a field of characteristic p > 0. In this case, for any element D of the Weyl algebra, the element $D^{p}$ is central, and so the Weyl algebra has a very large center. In fact, it is a finitely generated module over its center; even more so, it is an Azumaya algebra over its center. As a consequence, there are many finite-dimensional representations, which are all built out of simple representations of dimension p.

Constant center

The center of the Weyl algebra is the field of constants. For any element $h=f_{m}(X)\partial _{X}^{m}+f_{m-1}(X)\partial _{X}^{m-1}+\cdots +f_{1}(X)\partial _{X}+f_{0}(X)$ in the center, $h\partial _{X}=\partial _{X}h$ implies $f_{i}'=0$ for all $i$ and $hX=Xh$ implies $f_{i}=0$ for $i>0$. Thus $h=f_{0}$ is a constant.
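Both the defining relation [∂X, X] = 1 and the trace obstruction above can be seen in one small experiment (a sketch assuming NumPy): truncate the actions of X and ∂X to polynomials of degree < N. The relation then holds except in the top-degree corner, and the corner defect is exactly what forces the trace of the commutator to vanish, illustrating why no genuine finite-dimensional representation can exist in characteristic zero.

```python
import numpy as np

N = 6  # truncate to polynomials of degree < N, with basis 1, x, ..., x^(N-1)

# Multiplication by X (the top degree is truncated away):
X = np.zeros((N, N))
for k in range(N - 1):
    X[k + 1, k] = 1.0

# Differentiation d/dX:
D = np.zeros((N, N))
for k in range(1, N):
    D[k - 1, k] = k

C = D @ X - X @ D  # the commutator [d/dX, X]

# On degrees below the cutoff the defining relation [D, X] = 1 holds exactly...
assert np.allclose(C[:N - 1, :N - 1], np.eye(N - 1))
# ...but truncation forces a corner defect C[N-1, N-1] = 1 - N, exactly what
# is needed so that tr([D, X]) = 0. A true finite-dimensional representation
# with [D, X] = 1 would require tr(1) = N to vanish, which is impossible
# over a field of characteristic zero.
assert np.isclose(C[N - 1, N - 1], 1 - N)
assert np.isclose(np.trace(C), 0.0)
```

Incidentally, over characteristic p the corner defect 1 − N vanishes mod p when N = p, which is consistent with the p-dimensional representations described in the "Positive characteristic" discussion.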
Generalizations For more details about this quantization in the case n = 1 (and an extension using the Fourier transform to a class of integrable functions larger than the polynomial functions), see Wigner–Weyl transform. Weyl algebras and Clifford algebras admit a further structure of a *-algebra, and can be unified as even and odd terms of a superalgebra, as discussed in CCR and CAR algebras. Affine varieties Weyl algebras also generalize to the case of algebraic varieties. Consider a polynomial ring $R={\frac {\mathbb {C} [x_{1},\ldots ,x_{n}]}{I}}.$ Then a differential operator is defined as a composition of $\mathbb {C} $-linear derivations of $R$. This can be described explicitly as the quotient ring ${\text{Diff}}(R)={\frac {\{D\in A_{n}\colon D(I)\subseteq I\}}{I\cdot A_{n}}}.$ See also • Jacobian conjecture • Dixmier conjecture References • de Traubenberg, M. Rausch; Slupinski, M. J.; Tanasa, A. (2006). "Finite-dimensional Lie subalgebras of the Weyl algebra". J. Lie Theory. 16: 427–454. arXiv:math/0504224. (Classifies subalgebras of the one-dimensional Weyl algebra over the complex numbers; shows relationship to SL(2,C)) • Tsit Yuen Lam (2001). A first course in noncommutative rings. Graduate Texts in Mathematics. Vol. 131 (2nd ed.). Springer. p. 6. ISBN 978-0-387-95325-0. • Coutinho, S.C. (1997). "The many avatars of a simple algebra". American Mathematical Monthly. 104 (7): 593–604. doi:10.1080/00029890.1997.11990687. • Traves, Will (2010). "Differential Operations on Grassmann Varieties". In Campbell, H.; Helminck, A.; Kraft, H.; Wehlau, D. (eds.). Symmetry and Spaces. Progress in Mathematics. Vol. 278. Birkhäuser. pp. 197–207. doi:10.1007/978-0-8176-4875-6_10. ISBN 978-0-8176-4875-6. 1. Helmstetter, Jacques; Micali, Artibano (2008). "Introduction: Weyl algebras". Quadratic Mappings and Clifford Algebras. Birkhäuser. p. xii. ISBN 978-3-7643-8605-4. 2. Abłamowicz, Rafał (2004). "Foreword".
Clifford algebras: applications to mathematics, physics, and engineering. Progress in Mathematical Physics. Birkhäuser. pp. xvi. ISBN 0-8176-3525-4. 3. Oziewicz, Z.; Sitarczyk, Cz. (1989). "Parallel treatment of Riemannian and symplectic Clifford algebras". In Micali, A.; Boudet, R.; Helmstetter, J. (eds.). Clifford algebras and their applications in mathematical physics. Kluwer. pp. 83–96 see p.92. ISBN 0-7923-1623-1. 4. Oziewicz & Sitarczyk 1989, p. 83
Weyl distance function In combinatorial geometry, the Weyl distance function is a function that behaves in some ways like the distance function of a metric space, but instead of taking values in the positive real numbers, it takes values in a group of reflections, called the Weyl group (named for Hermann Weyl). This distance function is defined on the collection of chambers in a mathematical structure known as a building, and its value on a pair of chambers is a minimal sequence of reflections (in the Weyl group) needed to go from one chamber to the other. A sequence of adjacent chambers in a building is known as a gallery, so the Weyl distance function is a way of encoding the information of a minimal gallery between two chambers. In particular, the number of reflections to go from one chamber to another coincides with the length of the minimal gallery between the two chambers, and so gives a natural metric (the gallery metric) on the building. According to Abramenko & Brown (2008), the Weyl distance function is something like a geometric vector: it encodes both the magnitude (distance) between two chambers of a building, as well as the direction between them. Definitions We record here definitions from Abramenko & Brown (2008). Let Σ(W,S) be the Coxeter complex associated to a group W generated by a set of reflections S. The vertices of Σ(W,S) are the elements of W, and the chambers of the complex are the cosets of S in W. The vertices of each chamber can be colored in a one-to-one manner by the elements of S so that no adjacent vertices of the complex receive the same color. This coloring, although essentially canonical, is not quite unique. The coloring of a given chamber is not uniquely determined by its realization as a coset of S. But once the coloring of a single chamber has been fixed, the rest of the Coxeter complex is uniquely colorable. Fix such a coloring of the complex.
A gallery is a sequence of adjacent chambers $C_{0},C_{1},\dots ,C_{n}.$ Because these chambers are adjacent, any consecutive pair $C_{i-1},C_{i}$ of chambers share all but one vertex. Denote the color of this vertex by $s_{i}$. The Weyl distance function between $C_{0}$ and $C_{n}$ is defined by $\delta (C_{0},C_{n})=s_{1}s_{2}\cdots s_{n}.$ It can be shown that this does not depend on the choice of gallery connecting $C_{0}$ and $C_{n}$. Now, a building is a simplicial complex that is organized into apartments, each of which is a Coxeter complex (satisfying some coherence axioms). Buildings are colorable, since the Coxeter complexes that make them up are colorable. A coloring of a building is associated with a uniform choice of Weyl group for the Coxeter complexes that make it up, allowing it to be regarded as a collection of words on the set of colors with relations. Now, if $C_{0},\dots ,C_{n}$ is a gallery in a building, then define the Weyl distance between $C_{0}$ and $C_{n}$ by $\delta (C_{0},C_{n})=s_{1}s_{2}\cdots s_{n}$ where the $s_{i}$ are as above. As in the case of Coxeter complexes, this does not depend on the choice of gallery connecting the chambers $C_{0}$ and $C_{n}$. The gallery distance $d(C_{0},C_{n})$ is defined as the minimal word length needed to express $\delta (C_{0},C_{n})$ in the Weyl group. Symbolically, $d(C_{0},C_{n})=\ell (\delta (C_{0},C_{n}))$. Properties The Weyl distance function satisfies several properties that parallel those of distance functions in metric spaces: • $\delta (C,D)=1$ if and only if $C=D$ (the group element 1 corresponds to the empty word on S). This corresponds to the property $d(C,D)=0$ if and only if $C=D$ of the gallery metric (Abramenko & Brown 2008, p. 199): • $\delta (C,D)=\delta (D,C)^{-1}$ (inversion corresponds to reversal of words in the alphabet S). This corresponds to symmetry $d(C,D)=d(D,C)$ of the gallery metric. 
• If $\delta (C',C)=s\in S$ and $\delta (C,D)=w$, then $\delta (C',D)$ is either w or sw. Moreover, if $\ell (sw)=\ell (w)+1$, then $\delta (C',D)=sw$. This corresponds to the triangle inequality. Abstract characterization of buildings In addition to the properties listed above, the Weyl distance function satisfies the following property: • If $\delta (C,D)=w$, then for any $s\in S$ there is a chamber $C'$, such that $\delta (C',C)=s$ and $\delta (C',D)=sw$. In fact, this property together with the two listed in the "Properties" section furnishes an abstract "metrical" characterization of buildings, as follows. Suppose that (W,S) is a Coxeter system consisting of a Weyl group W generated by reflections belonging to the subset S. A building of type (W,S) is a pair consisting of a set C of chambers and a function: $\delta :C\times C\to W$ such that the three properties listed above are satisfied. Then C carries the canonical structure of a building, in which δ is the Weyl distance function. References • Abramenko, P.; Brown, K. (2008), Buildings: Theory and applications, Springer External links • Mike Davis, Cohomology of Coxeter groups and buildings, MSRI 2007.
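A minimal illustration in type A2 (our own sketch, not taken from Abramenko & Brown): under the common identification of chambers of the Coxeter complex with elements of W, the Weyl distance is δ(C, D) = C⁻¹D and the gallery distance is the word length ℓ(δ(C, D)). Here W = S₃ is generated by the two adjacent transpositions, and ℓ is computed by breadth-first search over the generators.

```python
from itertools import permutations
from collections import deque

# Coxeter system of type A2: W = S_3 acting on {0, 1, 2},
# generated by the adjacent transpositions s1 = (0 1), s2 = (1 2).
def compose(p, q):  # (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S = [(1, 0, 2), (0, 2, 1)]  # the generating reflections
e = (0, 1, 2)

def length(w):
    """Word length l(w): fewest generators whose product is w (BFS from e)."""
    seen, frontier, dist = {e}, deque([(e, 0)]), {}
    while frontier:
        g, d = frontier.popleft()
        dist[g] = d
        for s in S:
            h = compose(g, s)
            if h not in seen:
                seen.add(h)
                frontier.append((h, d + 1))
    return dist[w]

# Identifying chambers with group elements, delta(C, D) = C^(-1) D
# and the gallery distance is d(C, D) = l(delta(C, D)).
def delta(C, D):
    return compose(inverse(C), D)

W = list(permutations(range(3)))
for C in W:
    for D in W:
        assert (delta(C, D) == e) == (C == D)              # delta = 1 iff C = D
        assert delta(C, D) == inverse(delta(D, C))         # reversal property
        assert length(delta(C, D)) == length(delta(D, C))  # symmetric metric
```

The three assertions in the loop are exactly the first two properties listed above, together with the symmetry of the induced gallery metric.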
Weyl group In mathematics, in particular the theory of Lie algebras, the Weyl group (named after Hermann Weyl) of a root system Φ is a subgroup of the isometry group of that root system. Specifically, it is the subgroup which is generated by reflections through the hyperplanes orthogonal to the roots, and as such is a finite reflection group. In fact it turns out that most finite reflection groups are Weyl groups.[1] Abstractly, Weyl groups are finite Coxeter groups, and are important examples of these. Lie groups and Lie algebras Classical groups • General linear GL(n) • Special linear SL(n) • Orthogonal O(n) • Special orthogonal SO(n) • Unitary U(n) • Special unitary SU(n) • Symplectic Sp(n) Simple Lie groups Classical • An • Bn • Cn • Dn Exceptional • G2 • F4 • E6 • E7 • E8 Other Lie groups • Circle • Lorentz • Poincaré • Conformal group • Diffeomorphism • Loop • Euclidean Lie algebras • Lie group–Lie algebra correspondence • Exponential map • Adjoint representation • Killing form • Index • Simple Lie algebra • Loop algebra • Affine Lie algebra Semisimple Lie algebra • Dynkin diagrams • Cartan subalgebra • Root system • Weyl group • Real form • Complexification • Split Lie algebra • Compact Lie algebra Representation theory • Lie group representation • Lie algebra representation • Representation theory of semisimple Lie algebras • Representations of classical Lie groups • Theorem of the highest weight • Borel–Weil–Bott theorem Lie groups in physics • Particle physics and representation theory • Lorentz group representations • Poincaré group representations • Galilean group representations Scientists • Sophus Lie • Henri Poincaré • Wilhelm Killing • Élie Cartan • Hermann Weyl • Claude Chevalley • Harish-Chandra • Armand Borel • Glossary • Table of Lie groups The Weyl group of a semisimple Lie group, a semisimple Lie algebra, a semisimple linear algebraic group, etc. is the Weyl group of the root system of that group or algebra. 
Definition and examples Let $\Phi $ be a root system in a Euclidean space $V$. For each root $\alpha \in \Phi $, let $s_{\alpha }$ denote the reflection about the hyperplane perpendicular to $\alpha $, which is given explicitly as $s_{\alpha }(v)=v-2{\frac {(v,\alpha )}{(\alpha ,\alpha )}}\alpha $, where $(\cdot ,\cdot )$ is the inner product on $V$. The Weyl group $W$ of $\Phi $ is the subgroup of the orthogonal group $O(V)$ generated by all the $s_{\alpha }$'s. By the definition of a root system, each $s_{\alpha }$ preserves $\Phi $, from which it follows that $W$ is a finite group. In the case of the $A_{2}$ root system, for example, the hyperplanes perpendicular to the roots are just lines, and the Weyl group is the symmetry group of an equilateral triangle, as indicated in the figure. As a group, $W$ is isomorphic to the permutation group on three elements, which we may think of as the vertices of the triangle. Note that in this case, $W$ is not the full symmetry group of the root system; a 60-degree rotation preserves $\Phi $ but is not an element of $W$. We may consider also the $A_{n}$ root system. In this case, $V$ is the space of all vectors in $\mathbb {R} ^{n+1}$ whose entries sum to zero. The roots consist of the vectors of the form $e_{i}-e_{j},\,i\neq j$, where $e_{i}$ is the $i$th standard basis element for $\mathbb {R} ^{n+1}$. The reflection associated to such a root is the transformation of $V$ obtained by interchanging the $i$th and $j$th entries of each vector. The Weyl group for $A_{n}$ is then the permutation group on $n+1$ elements. Weyl chambers See also: Coxeter group § Affine Coxeter groups If $\Phi \subset V$ is a root system, we may consider the hyperplane perpendicular to each root $\alpha $. Recall that $s_{\alpha }$ denotes the reflection about the hyperplane and that the Weyl group is the group of transformations of $V$ generated by all the $s_{\alpha }$'s. 
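The reflection formula and the rank-two example above are easy to experiment with numerically. The following sketch (illustrative only; the function names are ours) generates the Weyl group of $A_{2}$ by closing the simple reflections under composition, confirming that it has $3!=6$ elements:

```python
# Simple roots of A_2, inside the plane x + y + z = 0 of R^3.
alpha1 = (1.0, -1.0, 0.0)
alpha2 = (0.0, 1.0, -1.0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reflect(alpha, v):
    # s_alpha(v) = v - 2 (v, alpha) / (alpha, alpha) * alpha
    c = 2 * dot(v, alpha) / dot(alpha, alpha)
    return tuple(vi - c * ai for vi, ai in zip(v, alpha))

def weyl_group_size(simple_roots):
    # A group element is recorded by its images of the simple roots;
    # rounding keeps floating-point noise from splitting elements.
    def key(images):
        return tuple(round(x, 6) for v in images for x in v)
    identity = tuple(simple_roots)
    elements = {key(identity)}
    frontier = [identity]
    while frontier:
        images = frontier.pop()
        for alpha in simple_roots:
            new = tuple(reflect(alpha, v) for v in images)
            if key(new) not in elements:
                elements.add(key(new))
                frontier.append(new)
    return len(elements)

print(weyl_group_size([alpha1, alpha2]))  # 6, the symmetries of the triangle
```

Replacing the two simple roots of $A_{2}$ by the three simple roots of $A_{3}$ in $\mathbb {R} ^{4}$ yields $4!=24$ elements, in agreement with the identification of the $A_{n}$ Weyl group with the permutation group on $n+1$ elements.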
The complement of the set of hyperplanes is disconnected, and each connected component is called a Weyl chamber. If we have fixed a particular set Δ of simple roots, we may define the fundamental Weyl chamber associated to Δ as the set of points $v\in V$ such that $(\alpha ,v)>0$ for all $\alpha \in \Delta $. Since the reflections $s_{\alpha },\,\alpha \in \Phi $ preserve $\Phi $, they also preserve the set of hyperplanes perpendicular to the roots. Thus, each Weyl group element permutes the Weyl chambers. The figure illustrates the case of the A2 root system. The "hyperplanes" (in this case, one dimensional) orthogonal to the roots are indicated by dashed lines. The six 60-degree sectors are the Weyl chambers and the shaded region is the fundamental Weyl chamber associated to the indicated base. A basic general theorem about Weyl chambers is this:[2] Theorem: The Weyl group acts freely and transitively on the Weyl chambers. Thus, the order of the Weyl group is equal to the number of Weyl chambers. A related result is this one:[3] Theorem: Fix a Weyl chamber $C$. Then for all $v\in V$, the Weyl-orbit of $v$ contains exactly one point in the closure ${\bar {C}}$ of $C$. Coxeter group structure Generating set A key result about the Weyl group is this:[4] Theorem: If $\Delta $ is a base for $\Phi $, then the Weyl group is generated by the reflections $s_{\alpha }$ with $\alpha $ in $\Delta $. That is to say, the group generated by the reflections $s_{\alpha },\,\alpha \in \Delta ,$ is the same as the group generated by the reflections $s_{\alpha },\,\alpha \in \Phi $. Relations Meanwhile, if $\alpha $ and $\beta $ are in $\Delta $, then the Dynkin diagram for $\Phi $ relative to the base $\Delta $ tells us something about how the pair $\{s_{\alpha },s_{\beta }\}$ behaves. Specifically, suppose $v$ and $v'$ are the corresponding vertices in the Dynkin diagram.
Then we have the following results: • If there is no bond between $v$ and $v'$, then $s_{\alpha }$ and $s_{\beta }$ commute. Since $s_{\alpha }$ and $s_{\beta }$ each have order two, this is equivalent to saying that $(s_{\alpha }s_{\beta })^{2}=1$. • If there is one bond between $v$ and $v'$, then $(s_{\alpha }s_{\beta })^{3}=1$. • If there are two bonds between $v$ and $v'$, then $(s_{\alpha }s_{\beta })^{4}=1$. • If there are three bonds between $v$ and $v'$, then $(s_{\alpha }s_{\beta })^{6}=1$. The preceding claim is not hard to verify, if we simply remember what the Dynkin diagram tells us about the angle between each pair of roots. If, for example, there is no bond between the two vertices, then $\alpha $ and $\beta $ are orthogonal, from which it follows easily that the corresponding reflections commute. More generally, the number of bonds determines the angle $\theta $ between the roots. The product of the two reflections is then a rotation by angle $2\theta $ in the plane spanned by $\alpha $ and $\beta $, as the reader may verify, from which the above claim follows easily. As a Coxeter group Weyl groups are examples of finite reflection groups, as they are generated by reflections; the abstract groups (not considered as subgroups of a linear group) are accordingly finite Coxeter groups, which allows them to be classified by their Coxeter–Dynkin diagram. Being a Coxeter group means that a Weyl group has a special kind of presentation in which each generator $x_{i}$ is of order two, and the relations other than $x_{i}^{2}=1$ are of the form $(x_{i}x_{j})^{m_{ij}}=1$. The generators are the reflections given by simple roots, and $m_{ij}$ is 2, 3, 4, or 6 depending on whether roots $i$ and $j$ make an angle of 90, 120, 135, or 150 degrees, i.e., whether in the Dynkin diagram they are unconnected, connected by a simple edge, connected by a double edge, or connected by a triple edge.
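These relations can be verified numerically. The sketch below (illustrative; it assumes standard coordinates for the rank-two root systems) computes the order of $s_{\alpha }s_{\beta }$ by iterating the composite reflection on a basis:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reflect(alpha, v):
    # s_alpha(v) = v - 2 (v, alpha) / (alpha, alpha) * alpha
    c = 2 * dot(v, alpha) / dot(alpha, alpha)
    return tuple(vi - c * ai for vi, ai in zip(v, alpha))

def order_of_product(alpha, beta, max_m=12):
    """Smallest m with (s_alpha s_beta)^m = identity, found by acting on a basis."""
    n = len(alpha)
    basis = [tuple(float(i == j) for j in range(n)) for i in range(n)]
    images = basis
    for m in range(1, max_m + 1):
        images = [reflect(alpha, reflect(beta, v)) for v in images]
        if all(abs(x - e) < 1e-9 for v, b in zip(images, basis)
               for x, e in zip(v, b)):
            return m
    return None

print(order_of_product((1.0, -1.0, 0.0, 0.0), (0.0, 0.0, 1.0, -1.0)))  # no bond: 2
print(order_of_product((1.0, -1.0, 0.0), (0.0, 1.0, -1.0)))            # one bond (A2): 3
print(order_of_product((1.0, -1.0), (0.0, 1.0)))                       # two bonds (B2): 4
print(order_of_product((1.0, 0.0), (-1.5, math.sqrt(3) / 2)))          # three bonds (G2): 6
```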
We have already noted these relations in the bullet points above, but to say that $W$ is a Coxeter group, we are saying that those are the only relations in $W$. Weyl groups have a Bruhat order and length function in terms of this presentation: the length of a Weyl group element is the length of the shortest word representing that element in terms of these standard generators. There is a unique longest element of a Coxeter group, which is opposite to the identity in the Bruhat order. Weyl groups in algebraic, group-theoretic, and geometric settings Above, the Weyl group was defined as a subgroup of the isometry group of a root system. There are also various definitions of Weyl groups specific to various group-theoretic and geometric contexts (Lie algebra, Lie group, symmetric space, etc.). For each of these ways of defining Weyl groups, it is a (usually nontrivial) theorem that it is a Weyl group in the sense of the definition at the top of this article, namely the Weyl group of some root system associated with the object. A concrete realization of such a Weyl group usually depends on a choice – e.g. of Cartan subalgebra for a Lie algebra, of maximal torus for a Lie group.[5] The Weyl group of a connected compact Lie group Let $K$ be a connected compact Lie group and let $T$ be a maximal torus in $K$. We then introduce the normalizer of $T$ in $K$, denoted $N(T)$ and defined as $N(T)=\{x\in K|xtx^{-1}\in T,\,{\text{for all }}t\in T\}$. We also define the centralizer of $T$ in $K$, denoted $Z(T)$ and defined as $Z(T)=\{x\in K|xtx^{-1}=t\,{\text{for all }}t\in T\}$. The Weyl group $W$ of $K$ (relative to the given maximal torus $T$) is then defined initially as $W=N(T)/T$. Eventually, one proves that $Z(T)=T$,[6] at which point one has an alternative description of the Weyl group as $W=N(T)/Z(T)$. Now, one can define a root system $\Phi $ associated to the pair $(K,T)$; the roots are the nonzero weights of the adjoint action of $T$ on the Lie algebra of $K$. 
For each $\alpha \in \Phi $, one can construct an element $x_{\alpha }$ of $N(T)$ whose action on $T$ has the form of a reflection.[7] With a bit more effort, one can show that these reflections generate all of $N(T)/Z(T)$.[6] Thus, in the end, the Weyl group, whether defined as $N(T)/T$ or as $N(T)/Z(T)$, is isomorphic to the Weyl group of the root system $\Phi $. In other settings For a complex semisimple Lie algebra, the Weyl group is simply defined as the reflection group generated by reflections in the roots – the specific realization of the root system depending on a choice of Cartan subalgebra. For a Lie group G satisfying certain conditions,[note 1] given a torus T < G (which need not be maximal), the Weyl group with respect to that torus is defined as the quotient of the normalizer of the torus N = N(T) = $N_{G}(T)$ by the centralizer of the torus Z = Z(T) = $Z_{G}(T)$, $W(T,G):=N(T)/Z(T).\ $ The group W is finite – Z is of finite index in N. If T = $T_{0}$ is a maximal torus (so it equals its own centralizer: $Z(T_{0})=T_{0}$) then the resulting quotient N/Z = N/T is called the Weyl group of G, and denoted W(G). Note that the specific quotient set depends on a choice of maximal torus, but the resulting groups are all isomorphic (by an inner automorphism of G), since maximal tori are conjugate. If G is compact and connected, and T is a maximal torus, then the Weyl group of G is isomorphic to the Weyl group of its Lie algebra, as discussed above. For example, for the general linear group GL, a maximal torus is the subgroup D of invertible diagonal matrices, whose normalizer is the generalized permutation matrices (matrices in the form of permutation matrices, but with any non-zero numbers in place of the '1's), and whose Weyl group is the symmetric group. In this case the quotient map N → N/T splits (via the permutation matrices), so the normalizer N is a semidirect product of the torus and the Weyl group, and the Weyl group can be expressed as a subgroup of G.
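The action of this Weyl group on the torus can be seen concretely: conjugating a diagonal matrix by a generalized permutation matrix permutes its diagonal entries, regardless of which nonzero entries replace the 1's. A small illustrative check (matrices written as nested lists; nothing here is library-specific):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A generalized permutation matrix for a 3-cycle, with arbitrary nonzero
# entries in place of the 1's, together with its inverse.
P = [[0, 2, 0], [0, 0, -5], [3, 0, 0]]
Pinv = [[0, 0, 1 / 3], [1 / 2, 0, 0], [0, -1 / 5, 0]]
D = [[7, 0, 0], [0, 11, 0], [0, 0, 13]]

C = matmul(matmul(P, D), Pinv)
print(C)  # approximately diag(11, 13, 7): the diagonal entries are permuted
```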
In general this is not always the case – the quotient does not always split, the normalizer N is not always the semidirect product of W and Z, and the Weyl group cannot always be realized as a subgroup of G.[5] Bruhat decomposition Further information: Bruhat decomposition If B is a Borel subgroup of G, i.e., a maximal connected solvable subgroup and a maximal torus T = T0 is chosen to lie in B, then we obtain the Bruhat decomposition $G=\bigcup _{w\in W}BwB$ which gives rise to the decomposition of the flag variety G/B into Schubert cells (see Grassmannian). The structure of the Hasse diagram of the group is related geometrically to the cohomology of the manifold (rather, of the real and complex forms of the group), which is constrained by Poincaré duality. Thus algebraic properties of the Weyl group correspond to general topological properties of manifolds. For instance, Poincaré duality gives a pairing between cells in dimension k and in dimension n - k (where n is the dimension of a manifold): the bottom (0) dimensional cell corresponds to the identity element of the Weyl group, and the dual top-dimensional cell corresponds to the longest element of a Coxeter group. Analogy with algebraic groups Main article: q-analog See also: Field with one element There are a number of analogies between algebraic groups and Weyl groups – for instance, the number of elements of the symmetric group is n!, and the number of elements of the general linear group over a finite field is related to the q-factorial $[n]_{q}!$; thus the symmetric group behaves as though it were a linear group over "the field with one element". This is formalized by the field with one element, which considers Weyl groups to be simple algebraic groups over the field with one element. 
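For type $A_{n-1}$, where the Weyl group is the symmetric group on $n$ letters, the length function and longest element discussed above can be computed directly: the Coxeter length with respect to the adjacent transpositions equals the inversion number, and the unique longest element is the order-reversing permutation. A short illustrative sketch:

```python
from collections import deque

def coxeter_lengths(n):
    """Word length of each element of S_n in the adjacent transpositions,
    computed by breadth-first search from the identity."""
    start = tuple(range(n))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        for i in range(n - 1):
            q = list(p)
            q[i], q[i + 1] = q[i + 1], q[i]
            q = tuple(q)
            if q not in dist:
                dist[q] = dist[p] + 1
                queue.append(q)
    return dist

def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

d = coxeter_lengths(4)
assert all(d[p] == inversions(p) for p in d)  # length = inversion number
print(max(d.values()))                        # 6 = 4*3/2, length of the longest element
```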
Cohomology For a non-abelian connected compact Lie group G, the first group cohomology of the Weyl group W with coefficients in the maximal torus T used to define it,[note 2] is related to the outer automorphism group of the normalizer $N=N_{G}(T),$ as:[8] $\operatorname {Out} (N)\cong H^{1}(W;T)\rtimes \operatorname {Out} (G).$ The outer automorphisms of the group Out(G) are essentially the diagram automorphisms of the Dynkin diagram, while the group cohomology is computed in Hämmerli, Matthey & Suter 2004 and is a finite elementary abelian 2-group ($(\mathbf {Z} /2)^{k}$); for simple Lie groups it has order 1, 2, or 4. The 0th and 2nd group cohomology are also closely related to the normalizer.[8] See also • Affine Weyl group • Semisimple Lie algebra#Cartan subalgebras and root systems • Maximal torus • Root system of a semi-simple Lie algebra • Hasse diagram Footnotes Notes 1. Different conditions are sufficient – most simply if G is connected and either compact, or an affine algebraic group. The definition is simpler for a semisimple (or more generally reductive) Lie group over an algebraically closed field, but a relative Weyl group can be defined for a split Lie group. 2. W acts on T – that is how it is defined – and the group $H^{1}(W;T)$ means "with respect to this action". Citations 1. Humphreys 1992, p. 6. 2. Hall 2015 Propositions 8.23 and 8.27 3. Hall 2015 Proposition 8.29 4. Hall 2015 Propositions 8.24 5. Popov & Fedenko 2001 harvnb error: no target: CITEREFPopovFedenko2001 (help) 6. Hall 2015 Theorem 11.36 7. Hall 2015 Propositions 11.35 8. Hämmerli, Matthey & Suter 2004 References • Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3-319-13466-6 • Knapp, Anthony W. (2002), Lie Groups: Beyond an Introduction, Progress in Mathematics, vol. 140 (2nd ed.), Birkhaeuser, ISBN 978-0-8176-4259-4 • Popov, V.L.; Fedenko, A.S. 
(2001) [1994], "Weyl group", Encyclopedia of Mathematics, EMS Press • Hämmerli, J.-F.; Matthey, M.; Suter, U. (2004), "Automorphisms of Normalizers of Maximal Tori and First Cohomology of Weyl Groups" (PDF), Journal of Lie Theory, Heldermann Verlag, 14: 583–617, Zbl 1092.22004 Further reading • Bourbaki, Nicolas (2002), Lie Groups and Lie Algebras: Chapters 4-6, Elements of Mathematics, Springer, ISBN 978-3-540-42650-9, Zbl 0983.17001 • Björner, Anders; Brenti, Francesco (2005), Combinatorics of Coxeter Groups, Graduate Texts in Mathematics, vol. 231, Springer, ISBN 978-3-540-27596-1, Zbl 1110.05001 • Coxeter, H. S. M. (1934), "Discrete groups generated by reflections", Ann. of Math., 35 (3): 588–621, CiteSeerX 10.1.1.128.471, doi:10.2307/1968753, JSTOR 1968753 • Coxeter, H. S. M. (1935), "The complete enumeration of finite groups of the form $r_{i}^{2}=(r_{i}r_{j})^{k_{ij}}=1$", J. London Math. Soc., 1, 10 (1): 21–25, doi:10.1112/jlms/s1-10.37.21 • Davis, Michael W. (2007), The Geometry and Topology of Coxeter Groups (PDF), ISBN 978-0-691-13138-2, Zbl 1142.20020 • Grove, Larry C.; Benson, Clark T. (1985), Finite Reflection Groups, Graduate texts in mathematics, vol. 99, Springer, ISBN 978-0-387-96082-1 • Hiller, Howard (1982), Geometry of Coxeter groups, Research Notes in Mathematics, vol. 54, Pitman, ISBN 978-0-273-08517-1, Zbl 0483.57002 • Howlett, Robert B. (1988), "On the Schur Multipliers of Coxeter Groups", J. London Math. Soc., 2, 38 (2): 263–276, doi:10.1112/jlms/s2-38.2.263, Zbl 0627.20019 • Humphreys, James E. (1992) [1990], Reflection Groups and Coxeter Groups, Cambridge Studies in Advanced Mathematics, vol. 29, Cambridge University Press, ISBN 978-0-521-43613-7, Zbl 0725.20028 • Ihara, S.; Yokonuma, Takeo (1965), "On the second cohomology groups (Schur-multipliers) of finite reflection groups" (PDF), J. Fac. Sci. Univ. Tokyo, Sect. 
1, 11: 155–171, Zbl 0136.28802 • Kane, Richard (2001), Reflection Groups and Invariant Theory, CMS Books in Mathematics, Springer, ISBN 978-0-387-98979-2, Zbl 0986.20038 • Vinberg, E. B. (1984), "Absence of crystallographic groups of reflections in Lobachevski spaces of large dimension", Trudy Moskov. Mat. Obshch., 47 • Yokonuma, Takeo (1965), "On the second cohomology groups (Schur-multipliers) of infinite discrete reflection groups", J. Fac. Sci. Univ. Tokyo, Sect. 1, 11: 173–186, hdl:2261/6049, Zbl 0136.28803 External links • "Coxeter group", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Weisstein, Eric W. "Coxeter group". MathWorld. • Jenn software for visualizing the Cayley graphs of finite Coxeter groups on up to four generators
Weyl's inequality In linear algebra, Weyl's inequality is a theorem about the changes to eigenvalues of an Hermitian matrix that is perturbed. It can be used to estimate the eigenvalues of a perturbed Hermitian matrix. This article is about Weyl's inequality in linear algebra. For Weyl's inequality in number theory, see Weyl's inequality (number theory). Weyl's inequality about perturbation Let $N$ and $R$ be n×n Hermitian matrices and let $M=N+R$, with their respective eigenvalues $\mu _{i},\,\nu _{i},\,\rho _{i}$ ordered as follows: $M:\quad \mu _{1}\geq \cdots \geq \mu _{n},$ $N:\quad \nu _{1}\geq \cdots \geq \nu _{n},$ $R:\quad \rho _{1}\geq \cdots \geq \rho _{n}.$ Then the following inequalities hold: $\nu _{i}+\rho _{n}\leq \mu _{i}\leq \nu _{i}+\rho _{1},\quad i=1,\dots ,n,$ and, more generally, $\nu _{j}+\rho _{k}\leq \mu _{i}\leq \nu _{r}+\rho _{s},\quad j+k-n\geq i\geq r+s-1.$ In particular, if $R$ is positive definite then plugging $\rho _{n}>0$ into the above inequalities leads to $\mu _{i}>\nu _{i}\quad \forall i=1,\dots ,n.$ Note that these eigenvalues can be ordered, because they are real (as eigenvalues of Hermitian matrices). Weyl's inequality between eigenvalues and singular values Let $A\in \mathbb {C} ^{n\times n}$ have singular values $\sigma _{1}(A)\geq \cdots \geq \sigma _{n}(A)\geq 0$ and eigenvalues ordered so that $|\lambda _{1}(A)|\geq \cdots \geq |\lambda _{n}(A)|$. Then $|\lambda _{1}(A)\cdots \lambda _{k}(A)|\leq \sigma _{1}(A)\cdots \sigma _{k}(A)$ for $k=1,\ldots ,n$, with equality for $k=n$.[1] Applications Estimating perturbations of the spectrum Assume that $R$ is small in the sense that its spectral norm satisfies $\|R\|_{2}\leq \epsilon $ for some small $\epsilon >0$. Then it follows that all the eigenvalues of $R$ are bounded in absolute value by $\epsilon $.
Applying Weyl's inequality, it follows that the spectra of the Hermitian matrices M and N are close in the sense that[2] $|\mu _{i}-\nu _{i}|\leq \epsilon \qquad \forall i=1,\ldots ,n.$ Note, however, that this eigenvalue perturbation bound is generally false for non-Hermitian matrices (or more accurately, for non-normal matrices). For a counterexample, let $t>0$ be arbitrarily small, and consider $M={\begin{bmatrix}0&0\\1/t^{2}&0\end{bmatrix}},\qquad N=M+R={\begin{bmatrix}0&1\\1/t^{2}&0\end{bmatrix}},\qquad R={\begin{bmatrix}0&1\\0&0\end{bmatrix}}.$ whose eigenvalues $\mu _{1}=\mu _{2}=0$ and $\nu _{1}=+1/t,\nu _{2}=-1/t$ do not satisfy $|\mu _{i}-\nu _{i}|\leq \|R\|_{2}=1$. Weyl's inequality for singular values Let $M$ be a $p\times n$ matrix with $1\leq p\leq n$. Its singular values $\sigma _{k}(M)$ are the $p$ positive eigenvalues of the $(p+n)\times (p+n)$ Hermitian augmented matrix ${\begin{bmatrix}0&M\\M^{*}&0\end{bmatrix}}.$ Therefore, Weyl's eigenvalue perturbation inequality for Hermitian matrices extends naturally to perturbation of singular values.[3] This result gives the bound for the perturbation in the singular values of a matrix $M$ due to an additive perturbation $\Delta $: $|\sigma _{k}(M+\Delta )-\sigma _{k}(M)|\leq \sigma _{1}(\Delta )$ where we note that the largest singular value $\sigma _{1}(\Delta )$ coincides with the spectral norm $\|\Delta \|_{2}$. Notes 1. Roger A. Horn, and Charles R. Johnson Topics in Matrix Analysis. Cambridge, 1st Edition, 1991. p.171 2. Weyl, Hermann. "Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung)." Mathematische Annalen 71, no. 4 (1912): 441-479. 3. Tao, Terence (2010-01-13). "254A, Notes 3a: Eigenvalues and sums of Hermitian matrices". Terence Tao's blog. Retrieved 25 May 2015. References • Matrix Theory, Joel N. 
Franklin, (Dover Publications, 1993) ISBN 0-486-41179-6 • "Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen", H. Weyl, Math. Ann., 71 (1912), 441–479
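Both the Hermitian inequality and its failure for non-normal matrices are easy to check numerically in the 2×2 case, where eigenvalues are available in closed form from the characteristic polynomial (an illustrative sketch; the helper names are ours):

```python
import math

def eigs2(A):
    """Eigenvalues, in descending order, of a real 2x2 matrix with real
    spectrum, via the characteristic polynomial."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr / 4 - det)
    return (tr / 2 + disc, tr / 2 - disc)

# Hermitian case: nu_i + rho_2 <= mu_i <= nu_i + rho_1.
N = [[4.0, 1.0], [1.0, 0.0]]
R = [[1.0, -2.0], [-2.0, 3.0]]
M = [[N[i][j] + R[i][j] for j in range(2)] for i in range(2)]
mu, nu, rho = eigs2(M), eigs2(N), eigs2(R)
for i in range(2):
    assert nu[i] + rho[1] - 1e-12 <= mu[i] <= nu[i] + rho[0] + 1e-12

# Non-normal counterexample from the text: ||R||_2 = 1, yet the spectra
# of M and N = M + R are 1/t apart.
t = 1e-3
M2 = [[0.0, 0.0], [1 / t ** 2, 0.0]]
N2 = [[0.0, 1.0], [1 / t ** 2, 0.0]]
print(eigs2(M2), eigs2(N2))  # (0, 0) versus approximately (+1000, -1000)
```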
Weyl integration formula In mathematics, the Weyl integration formula, introduced by Hermann Weyl, is an integration formula for a compact connected Lie group G in terms of a maximal torus T. Precisely, it says[1] there exists a real-valued continuous function u on T such that for every class function f on G: $\int _{G}f(g)\,dg=\int _{T}f(t)u(t)\,dt.$ Moreover, $u$ is explicitly given as: $u=|\delta |^{2}/\#W$ where $W=N_{G}(T)/T$ is the Weyl group determined by T and $\delta (t)=\prod _{\alpha >0}\left(e^{\alpha (t)/2}-e^{-\alpha (t)/2}\right),$ the product running over the positive roots of G relative to T. More generally, if $f$ is only a continuous function, then $\int _{G}f(g)\,dg=\int _{T}\left(\int _{G}f(gtg^{-1})\,dg\right)u(t)\,dt.$ The formula can be used to derive the Weyl character formula. (The theory of Verma modules, on the other hand, gives a purely algebraic derivation of the Weyl character formula.) Derivation Consider the map $q:G/T\times T\to G,\,(gT,t)\mapsto gtg^{-1}$. The Weyl group W acts on T by conjugation and on $G/T$ from the left by: for $nT\in W$, $nT(gT)=gn^{-1}T.$ Let $G/T\times _{W}T$ be the quotient space by this W-action. Then, since the W-action on $G/T$ is free, the quotient map $p:G/T\times T\to G/T\times _{W}T$ is a smooth covering with fiber W when it is restricted to regular points. Now, $q$ is $p$ followed by $G/T\times _{W}T\to G$ and the latter is a homeomorphism on regular points and so has degree one. Hence, the degree of $q$ is $\#W$ and, by the change of variable formula, we get: $\#W\int _{G}f\,dg=\int _{G/T\times T}q^{*}(f\,dg).$ Here, $q^{*}(f\,dg)|_{(gT,t)}=f(t)q^{*}(dg)|_{(gT,t)}$ since $f$ is a class function. We next compute $q^{*}(dg)|_{(gT,t)}$. We identify a tangent space to $G/T\times T$ as ${\mathfrak {g}}/{\mathfrak {t}}\oplus {\mathfrak {t}}$ where ${\mathfrak {g}},{\mathfrak {t}}$ are the Lie algebras of $G,T$. 
For each $v\in T$, $q(gv,t)=gvtv^{-1}g^{-1}$ and thus, on ${\mathfrak {g}}/{\mathfrak {t}}$, we have: $d(gT\mapsto q(gT,t))({\dot {v}})=gtg^{-1}(gt^{-1}{\dot {v}}tg^{-1}-g{\dot {v}}g^{-1})=(\operatorname {Ad} (g)\circ (\operatorname {Ad} (t^{-1})-I))({\dot {v}}).$ Similarly we see, on ${\mathfrak {t}}$, $d(t\mapsto q(gT,t))=\operatorname {Ad} (g)$. Now, we can view G as a connected subgroup of an orthogonal group (as it is compact connected) and thus $\det(\operatorname {Ad} (g))=1$. Hence, $q^{*}(dg)=\det(\operatorname {Ad} _{{\mathfrak {g}}/{\mathfrak {t}}}(t^{-1})-I_{{\mathfrak {g}}/{\mathfrak {t}}})\,dg.$ To compute the determinant, we recall that ${\mathfrak {g}}_{\mathbb {C} }={\mathfrak {t}}_{\mathbb {C} }\oplus \oplus _{\alpha }{\mathfrak {g}}_{\alpha }$ where ${\mathfrak {g}}_{\alpha }=\{x\in {\mathfrak {g}}_{\mathbb {C} }\mid \operatorname {Ad} (t)x=e^{\alpha (t)}x,t\in T\}$ and each ${\mathfrak {g}}_{\alpha }$ has dimension one. Hence, considering the eigenvalues of $\operatorname {Ad} _{{\mathfrak {g}}/{\mathfrak {t}}}(t^{-1})$, we get: $\det(\operatorname {Ad} _{{\mathfrak {g}}/{\mathfrak {t}}}(t^{-1})-I_{{\mathfrak {g}}/{\mathfrak {t}}})=\prod _{\alpha >0}(e^{-\alpha (t)}-1)(e^{\alpha (t)}-1)=\delta (t){\overline {\delta (t)}},$ as each root $\alpha $ has pure imaginary value. Weyl character formula Main article: Weyl character formula The Weyl character formula is a consequence of the Weyl integral formula as follows. We first note that $W$ can be identified with a subgroup of $\operatorname {GL} ({\mathfrak {t}}_{\mathbb {C} }^{*})$; in particular, it acts on the set of roots, linear functionals on ${\mathfrak {t}}_{\mathbb {C} }$. Let $A_{\mu }=\sum _{w\in W}(-1)^{l(w)}e^{w(\mu )}$ where $l(w)$ is the length of w. Let $\Lambda $ be the weight lattice of G relative to T. The Weyl character formula then says that: for each irreducible character $\chi $ of $G$, there exists a $\mu \in \Lambda $ such that $\chi |T\cdot \delta =A_{\mu }$. 
To see this, we first note 1. $\|\chi \|^{2}=\int _{G}|\chi |^{2}dg=1.$ 2. $\chi |T\cdot \delta \in \mathbb {Z} [\Lambda ].$ The property (1) is precisely (a part of) the orthogonality relations on irreducible characters. References 1. Adams 1969, Theorem 6.1. • Adams, J. F. (1969), Lectures on Lie Groups, University of Chicago Press • Theodor Bröcker and Tammo tom Dieck, Representations of compact Lie groups, Graduate Texts in Mathematics 98, Springer-Verlag, Berlin, 1995.
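For $G=\mathrm {SU} (2)$ the integration formula can be checked numerically. Conjugacy classes are represented by $\operatorname {diag} (e^{it},e^{-it})$, the single positive root gives $\delta (t)=e^{it}-e^{-it}$, so $u=|\delta |^{2}/\#W=2\sin ^{2}t$, and the irreducible characters are $\chi _{m}(t)=\sin((m+1)t)/\sin t$. The sketch below (illustrative; the function names are ours) recovers the total mass and the orthogonality relations:

```python
import math

def haar_integral(f, n=20000):
    """Integral over SU(2) of a class function f, via the Weyl integration
    formula: (1/2pi) * integral of f(t) * 2 sin^2(t) dt over [0, 2pi)."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n   # midpoints avoid sin(t) = 0
        total += f(t) * 2 * math.sin(t) ** 2
    return total / n

def chi(m, t):
    # Character of the (m+1)-dimensional irreducible representation.
    return math.sin((m + 1) * t) / math.sin(t)

print(haar_integral(lambda t: 1.0))                    # total mass: 1
print(haar_integral(lambda t: chi(1, t) ** 2))         # ||chi_1||^2: 1
print(haar_integral(lambda t: chi(1, t) * chi(2, t)))  # orthogonality: 0
```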
Weyl law In mathematics, especially spectral theory, Weyl's law describes the asymptotic behavior of eigenvalues of the Laplace–Beltrami operator. This description was discovered in 1911 (in the $d=2,3$ case) by Hermann Weyl for eigenvalues of the Laplace–Beltrami operator acting on functions that vanish at the boundary of a bounded domain $\Omega \subset \mathbb {R} ^{d}$. In particular, he proved that the number, $N(\lambda )$, of Dirichlet eigenvalues (counting their multiplicities) less than or equal to $\lambda $ satisfies $\lim _{\lambda \rightarrow \infty }{\frac {N(\lambda )}{\lambda ^{d/2}}}=(2\pi )^{-d}\omega _{d}\mathrm {vol} (\Omega )$ where $\omega _{d}$ is the volume of the unit ball in $\mathbb {R} ^{d}$.[1] In 1912 he provided a new proof based on variational methods.[2][3] Generalizations The Weyl law has been extended to more general domains and operators. For the Schrödinger operator $H=-h^{2}\Delta +V(x)$ it was extended to $N(E,h)\sim (2\pi h)^{-d}\int _{\{|\xi |^{2}+V(x)<E\}}dxd\xi $ as $E$ tends to $+\infty $ or to the bottom of the essential spectrum and/or $h\to +0$. Here $N(E,h)$ is the number of eigenvalues of $H$ below $E$ unless there is essential spectrum below $E$, in which case $N(E,h)=+\infty $. In the development of spectral asymptotics, the crucial role was played by variational methods and microlocal analysis. Counter-examples The extended Weyl law fails in certain situations. In particular, the extended Weyl law "claims" that there is no essential spectrum if and only if the right-hand expression is finite for all $E$. If one considers domains with cusps (i.e. "shrinking exits to infinity") then the (extended) Weyl law claims that there is no essential spectrum if and only if the volume is finite. However for the Dirichlet Laplacian there is no essential spectrum even if the volume is infinite, as long as the cusps shrink at infinity (so the finiteness of the volume is not necessary).
On the other hand, for the Neumann Laplacian there is an essential spectrum unless the cusps shrink at infinity faster than the negative exponent (so the finiteness of the volume is not sufficient). Weyl conjecture Weyl conjectured that $N(\lambda )=(2\pi )^{-d}\lambda ^{d/2}\omega _{d}\mathrm {vol} (\Omega )\mp {\frac {1}{4}}(2\pi )^{1-d}\omega _{d-1}\lambda ^{(d-1)/2}\mathrm {area} (\partial \Omega )+o(\lambda ^{(d-1)/2})$ where the remainder term is negative for Dirichlet boundary conditions and positive for Neumann. The remainder estimate was improved upon by many mathematicians. In 1922, Richard Courant proved a bound of $O(\lambda ^{(d-1)/2}\log \lambda )$. In 1952, Boris Levitan proved the tighter bound of $O(\lambda ^{(d-1)/2})$ for compact closed manifolds. Robert Seeley extended this to include certain Euclidean domains in 1978.[4] In 1975, Hans Duistermaat and Victor Guillemin proved the bound of $o(\lambda ^{(d-1)/2})$ when the set of periodic bicharacteristics has measure 0.[5] This was finally generalized by Victor Ivrii in 1980.[6] This generalization assumes that the set of periodic trajectories of a billiard in $\Omega $ has measure 0, which Ivrii conjectured is fulfilled for all bounded Euclidean domains with smooth boundaries. Since then, similar results have been obtained for wider classes of operators. References 1. Weyl, Hermann (1911). "Über die asymptotische Verteilung der Eigenwerte". Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen: 110–117. 2. Weyl, Hermann (1912). "Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen". Mathematische Annalen. 71: 441–479. doi:10.1007/BF01456804. S2CID 120278241. 3. For a proof in English, see Strauss, Walter A. (2008). Partial Differential Equations. John Wiley & Sons. See chapter 11. 4. Seeley, Robert (1978). "A sharp asymptotic estimate for the eigenvalues of the Laplacian in a domain of $\mathbf {R} ^{3}$". Advances in Mathematics. 102 (3): 244–264.
doi:10.1016/0001-8708(78)90013-0. 5. Duistermaat, Hans; Guillemin, Victor (1975). "The spectrum of positive elliptic operators and periodic bicharacteristics". Inventiones Mathematicae. 29 (1): 37–79. 6. Ivrii, Victor (1980). "Second term of the spectral asymptotic expansion for the Laplace–Beltrami operator on manifolds with boundary". Functional Analysis and Its Applications. 14 (2): 98–106.
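Weyl's original statement is easy to test numerically. For the unit square with Dirichlet conditions, the eigenvalues are $\pi ^{2}(m^{2}+n^{2})$ with $m,n\geq 1$, and the law predicts $N(\lambda )\sim \lambda /(4\pi )$ (here $d=2$, $\omega _{2}=\pi $, $\mathrm {vol} (\Omega )=1$). An illustrative sketch:

```python
import math

def dirichlet_count(lam):
    """N(lambda) for the Dirichlet Laplacian on the unit square:
    count pairs m, n >= 1 with pi^2 (m^2 + n^2) <= lambda."""
    r2 = lam / math.pi ** 2
    count, m = 0, 1
    while m * m < r2:
        count += int(math.sqrt(r2 - m * m))
        m += 1
    return count

lam = 2.0e5
ratio = dirichlet_count(lam) / (lam / (4 * math.pi))
print(ratio)  # close to 1, and slightly below, reflecting the boundary term
```

The observed deficit is consistent with Weyl's conjectured second term, which for Dirichlet conditions lowers the count by roughly $\mathrm {area} (\partial \Omega ){\sqrt {\lambda }}/(4\pi )$.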
Weyl's lemma (Laplace equation) In mathematics, Weyl's lemma, named after Hermann Weyl, states that every weak solution of Laplace's equation is a smooth solution. This contrasts with the wave equation, for example, which has weak solutions that are not smooth solutions. Weyl's lemma is a special case of elliptic or hypoelliptic regularity. Statement of the lemma Let $\Omega $ be an open subset of $n$-dimensional Euclidean space $\mathbb {R} ^{n}$, and let $\Delta $ denote the usual Laplace operator. Weyl's lemma[1] states that if a locally integrable function $u\in L_{\mathrm {loc} }^{1}(\Omega )$ is a weak solution of Laplace's equation, in the sense that $\int _{\Omega }u(x)\,\Delta \varphi (x)\,dx=0$ for every smooth test function $\varphi \in C_{c}^{\infty }(\Omega )$ with compact support, then (up to redefinition on a set of measure zero) $u\in C^{\infty }(\Omega )$ is smooth and satisfies $\Delta u=0$ pointwise in $\Omega $. This result implies the interior regularity of harmonic functions in $\Omega $, but it does not say anything about their regularity on the boundary $\partial \Omega $. Idea of the proof To prove Weyl's lemma, one convolves the function $u$ with an appropriate mollifier $\varphi _{\varepsilon }$ and shows that the mollification $u_{\varepsilon }=\varphi _{\varepsilon }\ast u$ satisfies Laplace's equation, which implies that $u_{\varepsilon }$ has the mean value property. Taking the limit as $\varepsilon \to 0$ and using the properties of mollifiers, one finds that $u$ also has the mean value property, which implies that it is a smooth solution of Laplace's equation.[2] Alternative proofs use the smoothness of the fundamental solution of the Laplacian or suitable a priori elliptic estimates. 
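The mean value property used in this argument states that a harmonic function equals its average over any circle about a point, and it is easy to observe numerically (an illustrative sketch):

```python
import math

def circle_average(u, cx, cy, r, n=10000):
    """Average of u over the circle of radius r centered at (cx, cy)."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        total += u(cx + r * math.cos(t), cy + r * math.sin(t))
    return total / n

harmonic = lambda x, y: x * x - y * y   # satisfies Laplace's equation
non_harmonic = lambda x, y: x * x       # Laplacian equals 2, not 0

print(circle_average(harmonic, 0.3, -0.7, 0.5), harmonic(0.3, -0.7))
print(circle_average(non_harmonic, 0.3, -0.7, 0.5), non_harmonic(0.3, -0.7))
```

The harmonic function reproduces its center value, while the non-harmonic one exceeds it by $r^{2}/2$ (the circle average of $x^{2}$ about $(c_{x},c_{y})$ is $c_{x}^{2}+r^{2}/2$).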
Generalization to distributions More generally, the same result holds for every distributional solution of Laplace's equation: If $T\in D'(\Omega )$ satisfies $\langle T,\Delta \varphi \rangle =0$ for every $\varphi \in C_{c}^{\infty }(\Omega )$, then $T=T_{u}$ is a regular distribution associated with a smooth solution $u\in C^{\infty }(\Omega )$ of Laplace's equation.[3] Connection with hypoellipticity Weyl's lemma follows from more general results concerning the regularity properties of elliptic or hypoelliptic operators.[4] A linear partial differential operator $P$ with smooth coefficients is hypoelliptic if the singular support of $Pu$ is equal to the singular support of $u$ for every distribution $u$. The Laplace operator is hypoelliptic, so if $\Delta u=0$, then the singular support of $u$ is empty since the singular support of $0$ is empty, meaning that $u\in C^{\infty }(\Omega )$. In fact, since the Laplacian is elliptic, a stronger result is true, and solutions of $\Delta u=0$ are real-analytic. Notes 1. Hermann Weyl, The method of orthogonal projections in potential theory, Duke Math. J., 7, 411–444 (1940). See Lemma 2, p. 415 2. Bernard Dacorogna, Introduction to the Calculus of Variations, 2nd ed., Imperial College Press (2009), p. 148. 3. Lars Gårding, Some Points of Analysis and their History, AMS (1997), p. 66. 4. Lars Hörmander, The Analysis of Linear Partial Differential Operators I, 2nd ed., Springer-Verlag (1990), p.110 References • Gilbarg, David; Neil S. Trudinger (1988). Elliptic Partial Differential Equations of Second Order. Springer. ISBN 3-540-41160-7. • Stein, Elias (2005). Real Analysis: Measure Theory, Integration, and Hilbert Spaces. Princeton University Press. ISBN 0-691-11386-6.
Wikipedia
Weyl metrics In general relativity, the Weyl metrics (named after the German-American mathematician Hermann Weyl)[1] are a class of static and axisymmetric solutions to Einstein's field equation. Three members of the renowned Kerr–Newman family of solutions, namely the Schwarzschild, nonextremal Reissner–Nordström and extremal Reissner–Nordström metrics, can be identified as Weyl-type metrics. Standard Weyl metrics The Weyl class of solutions has the generic form[2][3] $ds^{2}=-e^{2\psi (\rho ,z)}dt^{2}+e^{2\gamma (\rho ,z)-2\psi (\rho ,z)}(d\rho ^{2}+dz^{2})+e^{-2\psi (\rho ,z)}\rho ^{2}d\phi ^{2}\,,$ (1) where $\psi (\rho ,z)$ and $\gamma (\rho ,z)$ are two metric potentials dependent on Weyl's canonical coordinates $\{\rho \,,z\}$. The coordinate system $\{t,\rho ,z,\phi \}$ serves best for symmetries of Weyl's spacetime (with two Killing vector fields being $\xi ^{t}=\partial _{t}$ and $\xi ^{\phi }=\partial _{\phi }$) and often acts like cylindrical coordinates,[2] but is incomplete for describing a black hole, as $\{\rho \,,z\}$ cover only the horizon and its exterior. Hence, to determine a static axisymmetric solution corresponding to a specific stress–energy tensor $T_{ab}$, one need only substitute the Weyl metric Eq(1) into Einstein's equation (with c=G=1): $R_{ab}-{\frac {1}{2}}Rg_{ab}=8\pi T_{ab}\,,$ (2) and work out the two functions $\psi (\rho ,z)$ and $\gamma (\rho ,z)$. Reduced field equations for electrovac Weyl solutions One of the best investigated and most useful Weyl solutions is the electrovac case, where $T_{ab}$ arises from a (Weyl-type) electromagnetic field (in the absence of matter and current flows). 
Given the electromagnetic four-potential $A_{a}$, the anti-symmetric electromagnetic field $F_{ab}$ and the trace-free stress–energy tensor $T_{ab}$ $(T=g^{ab}T_{ab}=0)$ are determined respectively by $F_{ab}=A_{b\,;\,a}-A_{a\,;\,b}=A_{b\,,\,a}-A_{a\,,\,b}$ (3) $T_{ab}={\frac {1}{4\pi }}\,\left(\,F_{ac}F_{b}^{\;c}-{\frac {1}{4}}g_{ab}F_{cd}F^{cd}\right)\,,$ (4) which respect the source-free covariant Maxwell equations: ${\big (}F^{ab}{\big )}_{;\,b}=0\,,\quad F_{[ab\,;\,c]}=0\,.$ (5.a) Eq(5.a) can be simplified to: $\left({\sqrt {-g}}\,F^{ab}\right)_{,\,b}=0\,,\quad F_{[ab\,,\,c]}=0$ (5.b) in the calculations as $\Gamma _{bc}^{a}=\Gamma _{cb}^{a}$. Also, since $R=-8\pi T=0$ for electrovacuum, Eq(2) reduces to $R_{ab}=8\pi T_{ab}\,.$ (6) Now, suppose the Weyl-type axisymmetric electrostatic potential is $A_{a}=\Phi (\rho ,z)[dt]_{a}$ (the component $\Phi $ is actually the electromagnetic scalar potential); together with the Weyl metric Eq(1), Eqs(3)(4)(5)(6) imply that $\nabla ^{2}\psi =\,(\nabla \psi )^{2}+\gamma _{,\,\rho \rho }+\gamma _{,\,zz}$ (7.a) $\nabla ^{2}\psi =\,e^{-2\psi }(\nabla \Phi )^{2}$ (7.b) ${\frac {1}{\rho }}\,\gamma _{,\,\rho }=\,\psi _{,\,\rho }^{2}-\psi _{,\,z}^{2}-e^{-2\psi }{\big (}\Phi _{,\,\rho }^{2}-\Phi _{,\,z}^{2}{\big )}$ (7.c) ${\frac {1}{\rho }}\,\gamma _{,\,z}=\,2\psi _{,\,\rho }\psi _{,\,z}-2e^{-2\psi }\Phi _{,\,\rho }\Phi _{,\,z}$ (7.d) $\nabla ^{2}\Phi =\,2\nabla \psi \nabla \Phi \,,$ (7.e) where $R=0$ yields Eq(7.a), $R_{tt}=8\pi T_{tt}$ or $R_{\varphi \varphi }=8\pi T_{\varphi \varphi }$ yields Eq(7.b), $R_{\rho \rho }=8\pi T_{\rho \rho }$ or $R_{zz}=8\pi T_{zz}$ yields Eq(7.c), $R_{\rho z}=8\pi T_{\rho z}$ yields Eq(7.d), and Eq(5.b) yields Eq(7.e). Here $\nabla ^{2}=\partial _{\rho \rho }+{\frac {1}{\rho }}\,\partial _{\rho }+\partial _{zz}$ and $\nabla =\partial _{\rho }\,{\hat {e}}_{\rho }+\partial _{z}\,{\hat {e}}_{z}$ are respectively the Laplace and gradient operators. 
Moreover, if we suppose $\psi =\psi (\Phi )$ in the sense of matter-geometry interplay and assume asymptotic flatness, we will find that Eqs(7.a-7.e) imply the characteristic relation $e^{2\psi }=\,\Phi ^{2}-2C\Phi +1\,.$ (7.f) Specifically in the simplest vacuum case with $\Phi =0$ and $T_{ab}=0$, Eqs(7.a-7.e) reduce to[4] $\gamma _{,\,\rho \rho }+\gamma _{,\,zz}=-(\nabla \psi )^{2}$ (8.a) $\nabla ^{2}\psi =0$ (8.b) $\gamma _{,\,\rho }=\rho \,{\Big (}\psi _{,\,\rho }^{2}-\psi _{,\,z}^{2}{\Big )}$ (8.c) $\gamma _{,\,z}=2\,\rho \,\psi _{,\,\rho }\psi _{,\,z}\,.$ (8.d) One can first obtain $\psi (\rho ,z)$ by solving Eq(8.b), and then integrate Eq(8.c) and Eq(8.d) for $\gamma (\rho ,z)$. Practically, Eq(8.a) arising from $R=0$ just works as a consistency relation or integrability condition. Unlike the nonlinear Poisson's equation Eq(7.b), Eq(8.b) is the linear Laplace equation; that is to say, a superposition of given vacuum solutions to Eq(8.b) is still a solution. This fact has wide application, for example to analytically distort a Schwarzschild black hole. We employed the axisymmetric Laplace and gradient operators to write Eqs(7.a-7.e) and Eqs(8.a-8.d) in a compact way, which is very useful in the derivation of the characteristic relation Eq(7.f). 
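The linearity of Eq(8.b) can be illustrated symbolically. The following SymPy sketch checks that a point-mass (monopole) potential solves the axisymmetric Laplace equation, and that a superposition of two such potentials is again a solution; the specific potentials (of Curzon-Chazy type) are illustrative choices not taken from the text:

```python
import sympy as sp

rho, z = sp.symbols('rho z', positive=True)

def laplacian_axisym(f):
    # Left-hand side of Eq(8.b): f_rr + f_r/rho + f_zz
    return sp.diff(f, rho, 2) + sp.diff(f, rho) / rho + sp.diff(f, z, 2)

# A monopole potential (Curzon-Chazy type; an illustrative choice)
psi1 = -1 / sp.sqrt(rho**2 + z**2)
# A displaced, rescaled copy; the sum solves Eq(8.b) again by linearity
psi2 = -2 / sp.sqrt(rho**2 + (z - 3)**2)

print(laplacian_axisym(psi1).equals(0))         # True
print(laplacian_axisym(psi1 + psi2).equals(0))  # True
```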
In the literature, Eqs(7.a-7.e) and Eqs(8.a-8.d) are often written in the following forms as well: $\psi _{,\,\rho \rho }+{\frac {1}{\rho }}\psi _{,\,\rho }+\psi _{,\,zz}=\,(\psi _{,\,\rho })^{2}+(\psi _{,\,z})^{2}+\gamma _{,\,\rho \rho }+\gamma _{,\,zz}$ (A.1.a) $\psi _{,\,\rho \rho }+{\frac {1}{\rho }}\psi _{,\,\rho }+\psi _{,\,zz}=e^{-2\psi }{\big (}\Phi _{,\,\rho }^{2}+\Phi _{,\,z}^{2}{\big )}$ (A.1.b) ${\frac {1}{\rho }}\,\gamma _{,\,\rho }=\,\psi _{,\,\rho }^{2}-\psi _{,\,z}^{2}-e^{-2\psi }{\big (}\Phi _{,\,\rho }^{2}-\Phi _{,\,z}^{2}{\big )}$ (A.1.c) ${\frac {1}{\rho }}\,\gamma _{,\,z}=\,2\psi _{,\,\rho }\psi _{,\,z}-2e^{-2\psi }\Phi _{,\,\rho }\Phi _{,\,z}$ (A.1.d) $\Phi _{,\,\rho \rho }+{\frac {1}{\rho }}\Phi _{,\,\rho }+\Phi _{,\,zz}=\,2\psi _{,\,\rho }\Phi _{,\,\rho }+2\psi _{,\,z}\Phi _{,\,z}$ (A.1.e) and $(\psi _{,\,\rho })^{2}+(\psi _{,\,z})^{2}=-\gamma _{,\,\rho \rho }-\gamma _{,\,zz}$ (A.2.a) $\psi _{,\,\rho \rho }+{\frac {1}{\rho }}\psi _{,\,\rho }+\psi _{,\,zz}=0$ (A.2.b) $\gamma _{,\,\rho }=\rho \,{\Big (}\psi _{,\,\rho }^{2}-\psi _{,\,z}^{2}{\Big )}$ (A.2.c) $\gamma _{,\,z}=2\,\rho \,\psi _{,\,\rho }\psi _{,\,z}\,.$ (A.2.d) Considering the interplay between spacetime geometry and energy-matter distributions, it is natural to assume that in Eqs(7.a-7.e) the metric function $\psi (\rho ,z)$ is related to the electrostatic scalar potential $\Phi (\rho ,z)$ via a function $\psi =\psi (\Phi )$ (which means geometry depends on energy), and it follows that $\psi _{,\,i}=\psi _{,\,\Phi }\cdot \Phi _{,\,i}\quad ,\quad \nabla \psi =\psi _{,\,\Phi }\cdot \nabla \Phi \quad ,\quad \nabla ^{2}\psi =\psi _{,\,\Phi }\cdot \nabla ^{2}\Phi +\psi _{,\,\Phi \Phi }\cdot (\nabla \Phi )^{2},$ (B.1) Eq(B.1) immediately turns Eq(7.b) and Eq(7.e) respectively into $\psi _{,\,\Phi }\cdot \nabla ^{2}\Phi \,=\,{\big (}e^{-2\psi }-\psi _{,\,\Phi \Phi }{\big )}\cdot (\nabla \Phi )^{2},$ (B.2) $\nabla ^{2}\Phi \,=\,2\psi _{,\,\Phi }\cdot (\nabla \Phi )^{2},$ (B.3) which give 
rise to $\psi _{,\,\Phi \Phi }+2\,{\big (}\psi _{,\,\Phi }{\big )}^{2}-e^{-2\psi }=0.$ (B.4) Now replace the variable $\psi $ by $\zeta :=e^{2\psi }$, and Eq(B.4) is simplified to $\zeta _{,\,\Phi \Phi }-2=0.$ (B.5) Direct quadrature of Eq(B.5) yields $\zeta =e^{2\psi }=\Phi ^{2}+{\tilde {C}}\Phi +B$, with $\{B,{\tilde {C}}\}$ being integration constants. To recover asymptotic flatness at spatial infinity, we need $\lim _{\rho ,z\to \infty }\Phi =0$ and $\lim _{\rho ,z\to \infty }e^{2\psi }=1$, so there should be $B=1$. Also, rewrite the constant ${\tilde {C}}$ as $-2C$ for mathematical convenience in subsequent calculations, and one finally obtains the characteristic relation implied by Eqs(7.a-7.e) that $e^{2\psi }=\Phi ^{2}-2C\Phi +1\,.$ (7.f) This relation is important for linearizing Eqs(7.a-7.f) and superposing electrovac Weyl solutions. Newtonian analogue of metric potential ψ(ρ,z) In Weyl's metric Eq(1), $ e^{\pm 2\psi }=\sum _{n=0}^{\infty }{\frac {(\pm 2\psi )^{n}}{n!}}$; thus in the weak-field limit $\psi \to 0$, one has $g_{tt}=-(1+2\psi )-{\mathcal {O}}(\psi ^{2})\,,\quad g_{\phi \phi }=1-2\psi +{\mathcal {O}}(\psi ^{2})\,,$ (9) and therefore $ds^{2}\approx -{\Big (}1+2\psi (\rho ,z){\Big )}\,dt^{2}+{\Big (}1-2\psi (\rho ,z){\Big )}\left[e^{2\gamma }(d\rho ^{2}+dz^{2})+\rho ^{2}d\phi ^{2}\right]\,.$ (10) This is closely analogous to the well-known approximate metric for static and weak gravitational fields generated by low-mass celestial bodies like the Sun and Earth,[5] $ds^{2}=-{\Big (}1+2\Phi _{N}(\rho ,z){\Big )}\,dt^{2}+{\Big (}1-2\Phi _{N}(\rho ,z){\Big )}\,\left[d\rho ^{2}+dz^{2}+\rho ^{2}d\phi ^{2}\right]\,.$ (11) where $\Phi _{N}(\rho ,z)$ is the usual Newtonian potential satisfying Poisson's equation $\nabla _{L}^{2}\Phi _{N}=4\pi \varrho _{N}$, just like Eq(7.b) or Eq(8.b) for the Weyl metric potential $\psi (\rho ,z)$. 
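The quadrature just carried out can be verified symbolically: with $\zeta =\Phi ^{2}-2C\Phi +1$, a short SymPy sketch confirms Eq(B.5), Eq(B.4), and the boundary value $B=1$:

```python
import sympy as sp

Phi, C = sp.symbols('Phi C', real=True)

zeta = Phi**2 - 2*C*Phi + 1   # the quadrature result: zeta = e^{2 psi}
psi = sp.log(zeta) / 2

# Eq(B.5): zeta_{,Phi Phi} - 2 = 0
print(sp.diff(zeta, Phi, 2) - 2)  # 0

# Eq(B.4): psi_{,Phi Phi} + 2 (psi_{,Phi})^2 - e^{-2 psi} = 0
residual = sp.diff(psi, Phi, 2) + 2*sp.diff(psi, Phi)**2 - sp.exp(-2*psi)
print(sp.simplify(residual))      # 0

# B = 1 restores asymptotic flatness: e^{2 psi} -> 1 as Phi -> 0
print(zeta.subs(Phi, 0))          # 1
```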
The similarities between $\psi (\rho ,z)$ and $\Phi _{N}(\rho ,z)$ inspire the search for the Newtonian analogue of $\psi (\rho ,z)$ when studying the Weyl class of solutions; that is, to reproduce $\psi (\rho ,z)$ nonrelativistically by a certain type of Newtonian source. The Newtonian analogue of $\psi (\rho ,z)$ proves quite helpful in specifying particular Weyl-type solutions and extending existing Weyl-type solutions.[2] Schwarzschild solution The Weyl potentials generating Schwarzschild's metric as solutions to the vacuum equations Eq(8) are given by[2][3][4] $\psi _{SS}={\frac {1}{2}}\ln {\frac {L-M}{L+M}}\,,\quad \gamma _{SS}={\frac {1}{2}}\ln {\frac {L^{2}-M^{2}}{l_{+}l_{-}}}\,,$ (12) where $L={\frac {1}{2}}{\big (}l_{+}+l_{-}{\big )}\,,\quad l_{+}={\sqrt {\rho ^{2}+(z+M)^{2}}}\,,\quad l_{-}={\sqrt {\rho ^{2}+(z-M)^{2}}}\,.$ (13) From the perspective of the Newtonian analogue, $\psi _{SS}$ equals the gravitational potential produced by a rod of mass $M$ and length $2M$ placed symmetrically on the $z$-axis; that is, by a line mass of uniform density $\sigma =1/2$ embedded in the interval $z\in [-M,M]$. 
(Note: Based on this analogue, important extensions of the Schwarzschild metric have been developed, as discussed in ref.[2]) Given $\psi _{SS}$ and $\gamma _{SS}$, Weyl's metric Eq(1) becomes $ds^{2}=-{\frac {L-M}{L+M}}dt^{2}+{\frac {(L+M)^{2}}{l_{+}l_{-}}}(d\rho ^{2}+dz^{2})+{\frac {L+M}{L-M}}\,\rho ^{2}d\phi ^{2}\,,$ (14) and after substituting the following mutually consistent relations ${\begin{aligned}&L+M=r\,,\quad l_{+}-l_{-}=2M\cos \theta \,,\quad z=(r-M)\cos \theta \,,\\&\rho ={\sqrt {r^{2}-2Mr}}\,\sin \theta \,,\quad l_{+}l_{-}=(r-M)^{2}-M^{2}\cos ^{2}\theta \,,\end{aligned}}$ (15) one can obtain the common form of the Schwarzschild metric in the usual $\{t,r,\theta ,\phi \}$ coordinates, $ds^{2}=-\left(1-{\frac {2M}{r}}\right)\,dt^{2}+\left(1-{\frac {2M}{r}}\right)^{-1}dr^{2}+r^{2}d\theta ^{2}+r^{2}\sin ^{2}\theta \,d\phi ^{2}\,.$ (16) The metric Eq(14) cannot be directly transformed into Eq(16) by performing the standard cylindrical-spherical transformation $(t,\rho ,z,\phi )=(t,r\sin \theta ,r\cos \theta ,\phi )$, because $\{t,r,\theta ,\phi \}$ is complete while $(t,\rho ,z,\phi )$ is incomplete. This is why $\{t,\rho ,z,\phi \}$ in Eq(1) are called Weyl's canonical coordinates rather than cylindrical coordinates, although the two have much in common; for example, the Laplacian $\nabla ^{2}:=\partial _{\rho \rho }+{\frac {1}{\rho }}\partial _{\rho }+\partial _{zz}$ in Eq(7) is exactly the two-dimensional geometric Laplacian in cylindrical coordinates. 
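As a concrete check of the Schwarzschild potentials Eq(12), the following SymPy sketch evaluates the residuals of Eq(8.b) and Eq(8.c) at an arbitrary sample point (with $M=1$; a numerical spot check, not a full symbolic proof):

```python
import sympy as sp

rho, z = sp.symbols('rho z', positive=True)
M = sp.Integer(1)  # mass parameter, set to 1 for this check

lp = sp.sqrt(rho**2 + (z + M)**2)
lm = sp.sqrt(rho**2 + (z - M)**2)
L = (lp + lm) / 2

psi = sp.log((L - M) / (L + M)) / 2            # psi_SS of Eq(12)
gamma = sp.log((L**2 - M**2) / (lp * lm)) / 2  # gamma_SS of Eq(12)

# Residual of Eq(8.b): psi_rr + psi_r/rho + psi_zz
lap = sp.diff(psi, rho, 2) + sp.diff(psi, rho) / rho + sp.diff(psi, z, 2)
# Residual of Eq(8.c): gamma_rho - rho (psi_rho^2 - psi_z^2)
res_c = sp.diff(gamma, rho) - rho * (sp.diff(psi, rho)**2 - sp.diff(psi, z)**2)

pt = {rho: sp.Rational(3), z: sp.Rational(2)}  # arbitrary sample point
print(lap.subs(pt).evalf(30))    # ~0
print(res_c.subs(pt).evalf(30))  # ~0
```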
Nonextremal Reissner–Nordström solution The Weyl potentials generating the nonextremal Reissner–Nordström solution ($M>|Q|$) as solutions to Eqs(7) are given by[2][3][4] $\psi _{RN}={\frac {1}{2}}\ln {\frac {L^{2}-\left(M^{2}-Q^{2}\right)}{\left(L+M\right)^{2}}}\,,\quad \gamma _{RN}={\frac {1}{2}}\ln {\frac {L^{2}-\left(M^{2}-Q^{2}\right)}{l_{+}l_{-}}}\,,$ (17) where $L={\frac {1}{2}}{\big (}l_{+}+l_{-}{\big )}\,,\quad l_{+}={\sqrt {\rho ^{2}+\left(z+{\sqrt {M^{2}-Q^{2}}}\right)^{2}}}\,,\quad l_{-}={\sqrt {\rho ^{2}+\left(z-{\sqrt {M^{2}-Q^{2}}}\right)^{2}}}\,.$ (18) Thus, given $\psi _{RN}$ and $\gamma _{RN}$, Weyl's metric becomes $ds^{2}=-{\frac {L^{2}-\left(M^{2}-Q^{2}\right)}{\left(L+M\right)^{2}}}dt^{2}+{\frac {\left(L+M\right)^{2}}{l_{+}l_{-}}}(d\rho ^{2}+dz^{2})+{\frac {(L+M)^{2}}{L^{2}-(M^{2}-Q^{2})}}\rho ^{2}d\phi ^{2}\,,$ (19) and employing the following transformations ${\begin{aligned}&L+M=r\,,\quad l_{+}-l_{-}=2{\sqrt {M^{2}-Q^{2}}}\,\cos \theta \,,\quad z=(r-M)\cos \theta \,,\\&\rho ={\sqrt {r^{2}-2Mr+Q^{2}}}\,\sin \theta \,,\quad l_{+}l_{-}=(r-M)^{2}-(M^{2}-Q^{2})\cos ^{2}\theta \,,\end{aligned}}$ (20) one can obtain the common form of the non-extremal Reissner–Nordström metric in the usual $\{t,r,\theta ,\phi \}$ coordinates, $ds^{2}=-\left(1-{\frac {2M}{r}}+{\frac {Q^{2}}{r^{2}}}\right)dt^{2}+\left(1-{\frac {2M}{r}}+{\frac {Q^{2}}{r^{2}}}\right)^{-1}dr^{2}+r^{2}d\theta ^{2}+r^{2}\sin ^{2}\theta \,d\phi ^{2}\,.$ (21) Extremal Reissner–Nordström solution The potentials generating the extremal Reissner–Nordström solution ($M=|Q|$) as solutions to Eqs(7) are given by[4] (Note: We treat the extremal solution separately because it is much more than the degenerate state of the nonextremal counterpart.) 
$\psi _{ERN}={\frac {1}{2}}\ln {\frac {L^{2}}{(L+M)^{2}}}\,,\quad \gamma _{ERN}=0\,,\quad {\text{with}}\quad L={\sqrt {\rho ^{2}+z^{2}}}\,.$ (22) Thus, the extremal Reissner–Nordström metric reads $ds^{2}=-{\frac {L^{2}}{(L+M)^{2}}}dt^{2}+{\frac {(L+M)^{2}}{L^{2}}}(d\rho ^{2}+dz^{2}+\rho ^{2}d\phi ^{2})\,,$ (23) and by substituting $L+M=r\,,\quad z=L\cos \theta \,,\quad \rho =L\sin \theta \,,$ (24) we obtain the extremal Reissner–Nordström metric in the usual $\{t,r,\theta ,\phi \}$ coordinates, $ds^{2}=-\left(1-{\frac {M}{r}}\right)^{2}dt^{2}+\left(1-{\frac {M}{r}}\right)^{-2}dr^{2}+r^{2}d\theta ^{2}+r^{2}\sin ^{2}\theta \,d\phi ^{2}\,.$ (25) Mathematically, the extremal Reissner–Nordström metric can be obtained by taking the limit $Q\to M$ of the corresponding nonextremal expressions, applying L'Hôpital's rule where necessary. Remarks: Weyl's metrics Eq(1) with the vanishing potential $\gamma (\rho ,z)$ (like the extremal Reissner–Nordström metric) constitute a special subclass which has only one metric potential $\psi (\rho ,z)$ to be identified. Extending this subclass by dropping the restriction of axisymmetry, one obtains another useful class of solutions (still using Weyl's coordinates), namely the conformastatic metrics,[6][7] $ds^{2}\,=-e^{2\lambda (\rho ,z,\phi )}dt^{2}+e^{-2\lambda (\rho ,z,\phi )}{\Big (}d\rho ^{2}+dz^{2}+\rho ^{2}d\phi ^{2}{\Big )}\,,$ (26) where we use $\lambda $ in Eq(26) as the single metric function in place of $\psi $ in Eq(1) to emphasize that they differ by axial symmetry ($\phi $-dependence). 
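The extremal solution also illustrates the characteristic relation Eq(7.f). Assuming the standard form $\Phi =M/(L+M)$ of the extremal Reissner–Nordström electrostatic potential (this form is not listed in the text and is taken here as an assumption), the potential of Eq(22) satisfies Eq(7.f) with $C=1$:

```python
import sympy as sp

L, M = sp.symbols('L M', positive=True)

e2psi = L**2 / (L + M)**2   # e^{2 psi_ERN} from Eq(22)
Phi = M / (L + M)           # extremal RN electrostatic potential
                            # (standard form; assumed, not given in the text)

rhs = Phi**2 - 2*Phi + 1    # Eq(7.f) with C = 1
print(sp.simplify(e2psi - rhs))  # 0
```

Indeed $\Phi ^{2}-2\Phi +1=(1-\Phi )^{2}=L^{2}/(L+M)^{2}$, so the relation holds identically.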
Weyl vacuum solutions in spherical coordinates Weyl's metric can also be expressed in spherical coordinates as $ds^{2}\,=-e^{2\psi (r,\theta )}dt^{2}+e^{2\gamma (r,\theta )-2\psi (r,\theta )}(dr^{2}+r^{2}d\theta ^{2})+e^{-2\psi (r,\theta )}\rho ^{2}d\phi ^{2}\,,$ (27) which equals Eq(1) via the coordinate transformation $(t,\rho ,z,\phi )\mapsto (t,r\sin \theta ,r\cos \theta ,\phi )$. (Note: As shown by Eqs(15)(21)(24), this transformation is not always applicable.) In the vacuum case, Eq(8.b) for $\psi (r,\theta )$ becomes $r^{2}\psi _{,\,rr}+2r\,\psi _{,\,r}+\psi _{,\,\theta \theta }+\cot \theta \cdot \psi _{,\,\theta }\,=\,0\,.$ (28) The asymptotically flat solutions to Eq(28) are[2] $\psi (r,\theta )\,=-\sum _{n=0}^{\infty }a_{n}{\frac {P_{n}(\cos \theta )}{r^{n+1}}}\,,$ (29) where $P_{n}(\cos \theta )$ are the Legendre polynomials and $a_{n}$ are multipole coefficients. The other metric potential $\gamma (r,\theta )$ is given by[2] $\gamma (r,\theta )\,=-\sum _{l=0}^{\infty }\sum _{m=0}^{\infty }a_{l}a_{m}{\frac {(l+1)(m+1)}{l+m+2}}{\frac {P_{l}P_{m}-P_{l+1}P_{m+1}}{r^{l+m+2}}}\,.$ (30) See also • Schwarzschild metric • Reissner–Nordström metric • Distorted Schwarzschild metric References 1. Weyl, H., "Zur Gravitationstheorie," Ann. der Physik 54 (1917), 117–145. 2. Jeremy Bransom Griffiths, Jiri Podolsky. Exact Space-Times in Einstein's General Relativity. Cambridge: Cambridge University Press, 2009. Chapter 10. 3. Hans Stephani, Dietrich Kramer, Malcolm MacCallum, Cornelius Hoenselaers, Eduard Herlt. Exact Solutions of Einstein's Field Equations. Cambridge: Cambridge University Press, 2003. Chapter 20. 4. R Gautreau, R B Hoffman, A Armenti. Static multiparticle systems in general relativity. IL NUOVO CIMENTO B, 1972, 7(1): 71-98. 5. James B Hartle. Gravity: An Introduction To Einstein's General Relativity. San Francisco: Addison Wesley, 2003. Eq(6.20) transformed into Lorentzian cylindrical coordinates 6. 
Guillermo A Gonzalez, Antonio C Gutierrez-Pineres, Paolo A Ospina. Finite axisymmetric charged dust disks in conformastatic spacetimes. Physical Review D, 2008, 78(6): 064058. arXiv:0806.4285v1 7. Antonio C Gutierrez-Pineres, Guillermo A Gonzalez, Hernando Quevedo. Conformastatic disk-haloes in Einstein-Maxwell gravity. Physical Review D, 2013, 87(4): 044010.
Weyl module In algebra, a Weyl module is a representation of a reductive algebraic group, introduced by Carter and Lusztig (1974, 1974b) and named after Hermann Weyl. In characteristic 0 these representations are irreducible, but in positive characteristic they can be reducible, and their decomposition into irreducible components can be hard to determine. See also • Borel–Weil–Bott theorem • Garnir relations Further reading • Carter, Roger W.; Lusztig, George (1974), "On the modular representations of the general linear and symmetric groups", Mathematische Zeitschrift, 136 (3): 193–242, doi:10.1007/BF01214125, ISSN 0025-5874, MR 0354887, S2CID 186230432 • Carter, Roger W.; Lusztig, G. (1974b), "On the modular representations of the general linear and symmetric groups", Proceedings of the Second International Conference on the Theory of Groups (Australian Nat. Univ., Canberra, 1973), Lecture Notes in Mathematics, vol. 372, Berlin, New York: Springer-Verlag, pp. 218–220, doi:10.1007/BFb0065172, ISBN 978-3-540-06845-7, MR 0369503 • Dipper, R. (2001) [1994], "Weyl_module", Encyclopedia of Mathematics, EMS Press
Weyl sequence In mathematics, a Weyl sequence is a sequence from the equidistribution theorem proven by Hermann Weyl:[1] The sequence of all multiples of an irrational α, 0, α, 2α, 3α, 4α, ... is equidistributed modulo 1.[2] In other words, the sequence of the fractional parts of each term will be uniformly distributed in the interval [0, 1). In computing In computing, an integer version of this sequence is often used to generate a discrete uniform distribution rather than a continuous one. Instead of using an irrational number, which cannot be calculated on a digital computer, the ratio of two integers is used in its place. An integer k is chosen, relatively prime to an integer modulus m. In the common case that m is a power of 2, this amounts to requiring that k is odd. The sequence of all multiples of such an integer k, 0, k, 2k, 3k, 4k, … is equidistributed modulo m. That is, the sequence of the remainders of each term when divided by m will be uniformly distributed in the interval [0, m). The term appears to originate with George Marsaglia’s paper "Xorshift RNGs".[3] The following C code generates what Marsaglia calls a "Weyl sequence": d += 362437; In this case, the odd integer is 362437, and the results are computed modulo m = 232 because d is a 32-bit quantity. The results are equidistributed modulo 232. See also • List of things named after Hermann Weyl References 1. Weyl, H. (September 1916). "Über die Gleichverteilung von Zahlen mod. Eins" [On the uniform distribution of numbers modulo one]. Mathematische Annalen (in German). 77 (3): 313–352. doi:10.1007/BF01475864. S2CID 123470919. 2. Kuipers, L.; Niederreiter, H. (2006) [1974]. Uniform Distribution of Sequences. Dover Publications. ISBN 0-486-45019-8. 3. Marsaglia, George (July 2003). "Xorshift RNGs". Journal of Statistical Software. 8 (14). doi:10.18637/jss.v008.i14.
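The integer Weyl sequence described in this article can be sketched as a short generator (the function name and the small modulus used for demonstration are illustrative choices):

```python
def weyl_sequence(k, m):
    # Yield 0, k, 2k, 3k, ... reduced modulo m. Python integers do not
    # wrap, so the reduction is written out explicitly; with m = 2**32
    # this matches the 32-bit wrap-around of Marsaglia's d += 362437.
    d = 0
    while True:
        yield d
        d = (d + k) % m

# With gcd(k, m) = 1 the sequence visits every residue before repeating;
# a small modulus makes this easy to see:
gen = weyl_sequence(k=3, m=8)
first_period = [next(gen) for _ in range(8)]
print(first_period)  # [0, 3, 6, 1, 4, 7, 2, 5], every residue mod 8
```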
Exponential sum In mathematics, an exponential sum may be a finite Fourier series (i.e. a trigonometric polynomial), or other finite sum formed using the exponential function, usually expressed by means of the function $e(x)=\exp(2\pi ix).\,$ Therefore, a typical exponential sum may take the form $\sum _{n}e(x_{n}),$ summed over a finite sequence of real numbers $x_{n}$. Formulation If we allow some real coefficients $a_{n}$, giving the form $\sum _{n}a_{n}e(x_{n}),$ it is the same as allowing exponents that are complex numbers. Both forms are certainly useful in applications. A large part of twentieth-century analytic number theory was devoted to finding good estimates for these sums, a trend started by basic work of Hermann Weyl in diophantine approximation. Estimates The main thrust of the subject is that a sum $S=\sum _{n}e(x_{n})$ is trivially estimated by the number N of terms. That is, the absolute value $|S|\leq N\,$ by the triangle inequality, since each summand has absolute value 1. In applications one would like to do better. That involves proving that some cancellation takes place, or in other words that this sum of complex numbers on the unit circle does not consist of numbers all with the same argument. The best that is reasonable to hope for is an estimate of the form $|S|=O({\sqrt {N}})\,$ which signifies, up to the implied constant in the big O notation, that the sum resembles a random walk in two dimensions. Such an estimate can be considered ideal; it is unattainable in many of the major problems, and estimates $|S|=o(N)\,$ have to be used, where the o(N) function represents only a small saving on the trivial estimate. A typical 'small saving' may be a factor of log(N), for example. Even such a minor-seeming result in the right direction has to be referred all the way back to the structure of the initial sequence $x_{n}$, to show a degree of randomness. The techniques involved are ingenious and subtle. 
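The contrast between the trivial bound and genuine cancellation can be seen in a small numerical experiment; the sequence $x_{n}=n{\sqrt {2}}$ used below is an illustrative choice for which the sum is in fact uniformly bounded (it is a geometric series):

```python
import cmath
import math

def exponential_sum(xs):
    # S = sum_n e(x_n) with e(x) = exp(2 pi i x)
    return sum(cmath.exp(2j * math.pi * x) for x in xs)

N = 1000
S = exponential_sum(n * math.sqrt(2) for n in range(N))

print(abs(S))        # O(1) for this sequence, far below the trivial bound N
print(math.sqrt(N))  # the "random walk" scale, about 31.6
```

Generic sequences will not show cancellation this strong; the point is only that $|S|$ can fall far below both $N$ and ${\sqrt {N}}$ when the arguments are well spread around the unit circle.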
A variant of 'Weyl differencing' involves the generating exponential sum $G(\tau )=\sum _{n}e^{iaf(x)+ia\tau n}.$ This sum was studied by Weyl himself, who developed a method to express the original sum as the value $G(0)$, where $G$ can be defined via a linear differential equation, similar to the Dyson equation, obtained via summation by parts. History If the summand is of the form $S(x)=e^{iaf(x)}$ where ƒ is a smooth function, one can use the Euler–Maclaurin formula to convert the series into an integral, plus some corrections involving derivatives of S(x); then for large values of a one can use the method of stationary phase to calculate the integral and give an approximate evaluation of the sum. Major advances in the subject were Van der Corput's method (c. 1920), related to the principle of stationary phase, and the later Vinogradov method (c. 1930). The large sieve method (c. 1960), the work of many researchers, is a relatively transparent general principle; but no one method has general application. Types of exponential sum Many types of sums are used in formulating particular problems; applications usually require a reduction to some known type, often by ingenious manipulations. Partial summation can be used to remove coefficients $a_{n}$, in many cases. A basic distinction is between a complete exponential sum, which is typically a sum over all residue classes modulo some integer N (or more general finite ring), and an incomplete exponential sum where the range of summation is restricted by some inequality. Examples of complete exponential sums are Gauss sums and Kloosterman sums; these are in some sense finite field or finite ring analogues of the gamma function and some sort of Bessel function, respectively, and have many 'structural' properties. An example of an incomplete sum is the partial sum of the quadratic Gauss sum (indeed, the case investigated by Gauss). 
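The complete quadratic Gauss sum just mentioned can be computed directly; its classical evaluation (given later in this article) is ${\sqrt {p}}$ for $p\equiv 1{\pmod {4}}$ and $i{\sqrt {p}}$ for $p\equiv 3{\pmod {4}}$:

```python
import cmath
import math

def quadratic_gauss_sum(p):
    # Sum over n = 0 .. p-1 of exp(2 pi i n^2 / p), for an odd prime p
    return sum(cmath.exp(2j * math.pi * n * n / p) for n in range(p))

print(quadratic_gauss_sum(5))  # ~ sqrt(5), since 5 is 1 mod 4
print(quadratic_gauss_sum(7))  # ~ i*sqrt(7), since 7 is 3 mod 4
```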
Here there are good estimates for sums over shorter ranges than the whole set of residue classes, because, in geometric terms, the partial sums approximate a Cornu spiral; this implies massive cancellation. Auxiliary types of sums occur in the theory, for example character sums, going back to Harold Davenport's thesis. The Weil conjectures had major applications to complete sums with domain restricted by polynomial conditions (i.e., along an algebraic variety over a finite field). Weyl sums One of the most general types of exponential sum is the Weyl sum, with exponents 2πif(n) where f is a fairly general real-valued smooth function. These are the sums involved in the distribution of the values ƒ(n) modulo 1, according to Weyl's equidistribution criterion. A basic advance was Weyl's inequality for such sums, for polynomial f. There is a general theory of exponent pairs, which formulates estimates. An important case is where f is logarithmic, in connection with the Riemann zeta function. See also equidistribution theorem.[1] Example: the quadratic Gauss sum Let p be an odd prime and let $\xi =e^{2\pi i/p}$. Then the quadratic Gauss sum is given by $\sum _{n=0}^{p-1}\xi ^{n^{2}}={\begin{cases}{\sqrt {p}},&p\equiv 1{\pmod {4}}\\i{\sqrt {p}},&p\equiv 3{\pmod {4}}\end{cases}}$ where the square roots are taken to be positive. This is the ideal degree of cancellation one could hope for without any a priori knowledge of the structure of the sum, since it matches the scaling of a random walk. Statistical model The sum of exponentials is a useful model in pharmacokinetics (chemical kinetics in general) for describing the concentration of a substance over time. The exponential terms correspond to first-order reactions, which in pharmacology corresponds to the number of modelled diffusion compartments.[2][3] See also • Hua's lemma References 1. Montgomery (1994) p.39 2. Hughes, JH; Upton, RN; Reuter, SE; Phelps, MA; Foster, DJR (November 2019). 
"Optimising time samples for determining area under the curve of pharmacokinetic data using non-compartmental analysis". The Journal of Pharmacy and Pharmacology. 71 (11): 1635–1644. doi:10.1111/jphp.13154. PMID 31412422. 3. Hull, CJ (July 1979). "Pharmacokinetics and pharmacodynamics". British Journal of Anaesthesia. 51 (7): 579–94. doi:10.1093/bja/51.7.579. PMID 550900. • Montgomery, Hugh L. (1994). Ten lectures on the interface between analytic number theory and harmonic analysis. Regional Conference Series in Mathematics. Vol. 84. Providence, RI: American Mathematical Society. ISBN 0-8218-0737-4. Zbl 0814.11001. • Sándor, József; Mitrinović, Dragoslav S.; Crstici, Borislav, eds. (2006). Handbook of number theory I. Dordrecht: Springer-Verlag. ISBN 1-4020-4215-9. Zbl 1151.11300. Further reading • Korobov, N.M. (1992). Exponential sums and their applications. Mathematics and Its Applications. Soviet Series. Vol. 80. Translated from the Russian by Yu. N. Shakhov. Dordrecht: Kluwer Academic Publishers. ISBN 0-7923-1647-9. Zbl 0754.11022. External links • A brief introduction to Weyl sums on Mathworld
Unitarian trick In mathematics, the unitarian trick is a device in the representation theory of Lie groups, introduced by Adolf Hurwitz (1897) for the special linear group and by Hermann Weyl for general semisimple groups. It applies to show that the representation theory of some group G is in a qualitative way controlled by that of some other compact group K. An important example is that in which G is the complex general linear group, and K the unitary group acting on vectors of the same size. From the fact that the representations of K are completely reducible, the same is concluded for those of G, at least in finite dimensions. The relationship between G and K that drives this connection is traditionally expressed by saying that the Lie algebra of K is a real form of that of G. In the theory of algebraic groups, the relationship can also be expressed by saying that K is a dense subset of G, for the Zariski topology. The trick works for reductive Lie groups, an important case being semisimple Lie groups. Weyl's theorem The complete reducibility of finite-dimensional linear representations of compact groups, or connected semisimple Lie groups and complex semisimple Lie algebras, sometimes goes under the name of Weyl's theorem.[1] A related result, that the universal cover of a compact semisimple Lie group is also compact, also goes by the same name.[2] History Adolf Hurwitz had shown how integration over a compact Lie group could be used to construct invariants, in the cases of unitary groups and compact orthogonal groups. Issai Schur in 1924 showed that this technique can be applied to show complete reducibility of representations for such groups via the construction of an invariant inner product. Weyl extended Schur's method to complex semisimple Lie algebras by showing they had a compact real form.[3] Notes 1. "Completely-reducible set", Encyclopedia of Mathematics, EMS Press, 2001 [1994] 2. "Lie group, compact", Encyclopedia of Mathematics, EMS Press, 2001 [1994] 3. 
Nicolas Bourbaki, Lie groups and Lie algebras (1989), p. 426. References • V. S. Varadarajan, An introduction to harmonic analysis on semisimple Lie groups (1999), p. 49. • Wulf Rossmann, Lie groups: an introduction through linear groups (2006), p. 225. • Roe Goodman, Nolan R. Wallach, Symmetry, Representations, and Invariants (2009), p. 171. • Hurwitz, A. (1897), "Über die Erzeugung der Invarienten durch Integration", Nachrichten Ges. Wiss. Göttingen: 71–90
Weyl–Brauer matrices In mathematics, particularly in the theory of spinors, the Weyl–Brauer matrices are an explicit realization of a Clifford algebra as a matrix algebra of $2^{\lfloor n/2\rfloor }\times 2^{\lfloor n/2\rfloor }$ matrices. They generalize the Pauli matrices to n dimensions, and are a specific construction of higher-dimensional gamma matrices. They are named for Richard Brauer and Hermann Weyl,[1] and were one of the earliest systematic constructions of spinors from a representation theoretic standpoint. The matrices are formed by taking tensor products of the Pauli matrices, and the space of spinors in n dimensions may then be realized as the column vectors of size $2^{\lfloor n/2\rfloor }$ on which the Weyl–Brauer matrices act. Construction Suppose that $V=\mathbb {R} ^{n}$ is a Euclidean space of dimension n. There is a sharp contrast in the construction of the Weyl–Brauer matrices depending on whether the dimension n is even or odd. Let n = 2k (or 2k+1) and suppose that the Euclidean quadratic form on V is given by $q_{1}^{2}+\dots +q_{k}^{2}+p_{1}^{2}+\dots +p_{k}^{2}~~(+p_{n}^{2})~,$ where (pi, qi) are the standard coordinates on $\mathbb {R} ^{n}$. Define matrices 1, 1', P, and Q by ${\begin{matrix}{\mathbf {1} }=\sigma _{0}=\left({\begin{matrix}1&0\\0&1\end{matrix}}\right),&{\mathbf {1} }'=\sigma _{3}=\left({\begin{matrix}1&0\\0&-1\end{matrix}}\right),\\P=\sigma _{1}=\left({\begin{matrix}0&1\\1&0\end{matrix}}\right),&Q=-\sigma _{2}=\left({\begin{matrix}0&i\\-i&0\end{matrix}}\right)\end{matrix}}$. In even or in odd dimensionality, this quantization procedure amounts to replacing the ordinary p, q coordinates with non-commutative coordinates constructed from P, Q in a suitable fashion. 
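The defining properties of these 2×2 building blocks (squares equal to the identity, pairwise anticommutation of P, Q, and 1') can be checked directly in a NumPy sketch:

```python
import numpy as np

# The 2x2 building blocks defined above
one  = np.eye(2)
onep = np.array([[1.0, 0.0], [0.0, -1.0]])  # 1' = sigma_3
P    = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_1
Q    = np.array([[0, 1j], [-1j, 0]])        # -sigma_2

print(np.allclose(P @ P, one), np.allclose(Q @ Q, one))  # True True
print(np.allclose(P @ Q, -Q @ P))                        # True: P, Q anticommute
print(np.allclose(P @ onep, -onep @ P),
      np.allclose(Q @ onep, -onep @ Q))                  # True True
```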
Even case In the case when n = 2k is even, let $P_{i}={\mathbf {1} }'\otimes \dots \otimes {\mathbf {1} }'\otimes P\otimes {\mathbf {1} }\otimes \dots \otimes {\mathbf {1} }$ $Q_{i}={\mathbf {1} }'\otimes \dots \otimes {\mathbf {1} }'\otimes Q\otimes {\mathbf {1} }\otimes \dots \otimes {\mathbf {1} }$ for i = 1,2,...,k (where the P or Q is considered to occupy the i-th position). The operation $\otimes $ is the tensor product of matrices. It is no longer important to distinguish between the Ps and Qs, so we shall simply refer to them all with the symbol P, and regard the index on Pi as ranging from i = 1 to i = 2k. For instance, the following properties hold: $P_{i}^{2}=1,i=1,2,...,2k$, and $P_{i}P_{j}=-P_{j}P_{i}$ for all unequal pairs i and j. (Clifford relations.) Thus the algebra generated by the Pi is the Clifford algebra of euclidean n-space. Let A denote the algebra generated by these matrices. By counting dimensions, A is a complete $2^{k}\times 2^{k}$ matrix algebra over the complex numbers. As a matrix algebra, therefore, it acts on $2^{k}$-dimensional column vectors (with complex entries). These column vectors are the spinors. We now turn to the action of the orthogonal group on the spinors. Consider the application of an orthogonal transformation to the coordinates, which in turn acts upon the Pi via $P_{i}\mapsto R(P)_{i}=\sum _{j}R_{ij}P_{j}$. That is, $R\in SO(n)$. Since the Pi generate A, the action of this transformation extends to all of A and produces an automorphism of A. From elementary linear algebra, any such automorphism must be given by a change of basis. Hence there is a matrix S, depending on R, such that $R(P)_{i}=S(R)P_{i}S(R)^{-1}$ (1). In particular, S(R) will act on column vectors (spinors). By decomposing rotations into products of reflections, one can write down a formula for S(R) in much the same way as in the case of three dimensions. There is more than one matrix S(R) which produces the action in (1). 
The ambiguity defines S(R) up to a nonvanishing scalar factor c. Since S(R) and cS(R) define the same transformation (1), the action of the orthogonal group on spinors is not single-valued, but instead descends to an action on the projective space associated to the space of spinors. This multiple-valued action can be sharpened by normalizing the constant c in such a way that (det S(R))^2 = 1. In order to do this, however, it is necessary to discuss how the space of spinors (column vectors) may be identified with its dual (row vectors). In order to identify spinors with their duals, let C be the matrix defined by $C=P\otimes Q\otimes P\otimes \dots \otimes Q.$ Then conjugation by C converts a Pi matrix to its transpose: tPi = C Pi C−1. Under the action of a rotation, ${\hbox{ }}^{t}P_{i}\rightarrow \,^{t}S(R)^{-1}\,^{t}P_{i}\,^{t}S(R)=(CS(R)C^{-1})\,^{t}P_{i}(CS(R)C^{-1})^{-1}$ whence C S(R) C−1 = α tS(R)−1 for some scalar α. The scalar factor α can be made to equal one by rescaling S(R). Under these circumstances, (det S(R))^2 = 1, as required. In physics, the matrix C is conventionally interpreted as charge conjugation. Weyl spinors Let U be the element of the algebra A defined by $U={\mathbf {1} }'\otimes \dots \otimes {\mathbf {1} }'$, (k factors). Then U is preserved under rotations, so in particular its eigenspace decomposition (which necessarily corresponds to the eigenvalues +1 and -1, occurring in equal numbers) is also stabilized by rotations. As a consequence, each spinor admits a decomposition into eigenvectors under U: ξ = ξ+ + ξ− into a right-handed Weyl spinor ξ+ and a left-handed Weyl spinor ξ−. Because rotations preserve the eigenspaces of U, the rotations themselves act diagonally as matrices S(R)+, S(R)− via (S(R)ξ)+ = S+(R) ξ+, and (S(R)ξ)− = S−(R) ξ−. This decomposition is not, however, stable under improper rotations (e.g., reflections in a hyperplane). A reflection in a hyperplane has the effect of interchanging the two eigenspaces.
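For the smallest even case k = 2 (n = 4), the Clifford relations, the transposition property of C, and the eigenspace structure of U can all be verified numerically. A sketch (NumPy; the variable names are ad hoc, not from the source):

```python
import numpy as np

I1 = np.eye(2)
I1p = np.diag([1.0, -1.0])                      # 1' = sigma_3
P = np.array([[0, 1], [1, 0]], dtype=complex)   # P = sigma_1
Q = np.array([[0, 1j], [-1j, 0]])               # Q = -sigma_2

# Generators P_1, Q_1, P_2, Q_2 for n = 2k = 4: a P or Q in the i-th slot,
# preceded by 1' factors and followed by 1 factors.
gens = [np.kron(P, I1), np.kron(Q, I1), np.kron(I1p, P), np.kron(I1p, Q)]

# Clifford relations: P_i^2 = 1 and P_i P_j = -P_j P_i for i != j
for i, a in enumerate(gens):
    for j, b in enumerate(gens):
        assert np.allclose(a @ b + b @ a, (2 if i == j else 0) * np.eye(4))

# Charge conjugation C = P (tensor) Q, the k = 2 case of C = P ⊗ Q ⊗ ... ⊗ Q:
# conjugation by C transposes each generator.
C = np.kron(P, Q)
Cinv = np.linalg.inv(C)
for g in gens:
    assert np.allclose(C @ g @ Cinv, g.T)

# U = 1' (tensor) 1' squares to 1, is traceless (eigenvalues +1 and -1 in
# equal numbers), and anticommutes with every generator, so even products
# of generators -- in particular the rotations S(R) -- preserve its eigenspaces.
U = np.kron(I1p, I1p)
assert np.allclose(U @ U, np.eye(4)) and np.isclose(np.trace(U), 0)
for g in gens:
    assert np.allclose(U @ g, -(g @ U))
```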
Thus there are two irreducible spin representations in even dimensions given by the left-handed and right-handed Weyl spinors, each of which has dimension 2^(k−1). However, there is only one irreducible pin representation (see below) owing to the non-invariance of the above eigenspace decomposition under improper rotations, and that has dimension 2^k. Odd case In the quantization for an odd number 2k+1 of dimensions, the matrices Pi may be introduced as above for i = 1,2,...,2k, and the following matrix may be adjoined to the system: $P_{n}={\mathbf {1} }'\otimes \dots \otimes {\mathbf {1} }'$, (k factors), so that the Clifford relations still hold. This adjunction has no effect on the algebra A of matrices generated by the Pi, since in either case A is still a complete matrix algebra of the same dimension. Thus A, which is a complete 2^k × 2^k matrix algebra, is not the Clifford algebra, which is an algebra of dimension 2 × 2^k × 2^k. Rather A is the quotient of the Clifford algebra by a certain ideal. Nevertheless, one can show that if R is a proper rotation (an orthogonal transformation of determinant one), then the rotation among the coordinates $R(P)_{i}=\sum _{j}R_{ij}P_{j}$ is again an automorphism of A, and so induces a change of basis $R(P)_{i}=S(R)P_{i}S(R)^{-1}$ exactly as in the even-dimensional case. The projective representation S(R) may again be normalized so that (det S(R))^2 = 1. It may further be extended to general orthogonal transformations by setting S(R) = -S(-R) in case det R = -1 (i.e., if R is a reversal). In the case of odd dimensions it is not possible to split a spinor into a pair of Weyl spinors, and spinors form an irreducible representation of the spin group. As in the even case, it is possible to identify spinors with their duals, but with one caveat. The identification of the space of spinors with its dual space is invariant under proper rotations, and so the two spaces are spinorially equivalent.
However, if improper rotations are also taken into consideration, then the spin space and its dual are not isomorphic. Thus, while there is only one spin representation in odd dimensions, there is a pair of inequivalent pin representations. This fact is not evident from Weyl's quantization approach, however, and is more easily seen by considering the representations of the full Clifford algebra. See also • Higher-dimensional gamma matrices • Clifford algebra Notes 1. Brauer, Richard; Weyl, Hermann (1935). "Spinors in n dimensions". Am. J. Math. 57: 425–449. doi:10.2307/2371218. JFM 61.1025.06. JSTOR 2371218. Zbl 0011.24401.
Wikipedia
Weyl–Schouten theorem In the mathematical field of differential geometry, the existence of isothermal coordinates for a (pseudo-)Riemannian metric is often of interest. In the case of a metric on a two-dimensional space, the existence of isothermal coordinates is unconditional. For higher-dimensional spaces, the Weyl–Schouten theorem (named after Hermann Weyl and Jan Arnoldus Schouten) characterizes the existence of isothermal coordinates by certain equations to be satisfied by the Riemann curvature tensor of the metric. Existence of isothermal coordinates is also called conformal flatness, although some authors refer to it instead as local conformal flatness; for those authors, conformal flatness refers to a more restrictive condition. Theorem In terms of the Riemann curvature tensor, the Ricci tensor, and the scalar curvature, the Weyl tensor of a pseudo-Riemannian metric g of dimension n is given by[1] $W_{ijkl}=R_{ijkl}-{\frac {R_{ik}g_{jl}-R_{il}g_{jk}+R_{jl}g_{ik}-R_{jk}g_{il}}{n-2}}+{\frac {R}{(n-1)(n-2)}}(g_{jl}g_{ik}-g_{jk}g_{il}).$ The Schouten tensor is defined via the Ricci and scalar curvatures by[1] $S_{ij}={\frac {2}{n-2}}R_{ij}-{\frac {Rg_{ij}}{(n-2)(n-1)}}.$ As can be calculated by the Bianchi identities, these satisfy the relation that[2] $\nabla ^{j}W_{ijkl}={\frac {n-3}{2}}(\nabla _{k}S_{il}-\nabla _{l}S_{ik}).$ The Weyl–Schouten theorem says that for any pseudo-Riemannian manifold of dimension n:[3] • If n ≥ 4 then the manifold is conformally flat if and only if its Weyl tensor is zero. • If n = 3 then the manifold is conformally flat if and only if its Schouten tensor is a Codazzi tensor. As known prior to the work of Weyl and Schouten, in the case n = 2, every manifold is conformally flat. In all cases, the theorem and its proof are entirely local, so the topology of the manifold is irrelevant. There are varying conventions for the meaning of conformal flatness; the meaning as taken here is sometimes instead called local conformal flatness.
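As a consistency check on these formulas, a metric of constant sectional curvature K has Riemann tensor R_ijkl = K(g_ik g_jl − g_il g_jk); such a metric is conformally flat, so its Weyl tensor should vanish, and in the convention above its Schouten tensor works out to S_ij = K g_ij. A numerical sketch (NumPy, n = 4, with an arbitrarily chosen positive-definite g):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 4, 1.7

# An arbitrary positive-definite metric g and its inverse
A = rng.standard_normal((n, n))
g = A @ A.T + n * np.eye(n)
ginv = np.linalg.inv(g)

# Constant-curvature Riemann tensor R_ijkl = K (g_ik g_jl - g_il g_jk)
Riem = K * (np.einsum('ik,jl->ijkl', g, g) - np.einsum('il,jk->ijkl', g, g))
Ric = np.einsum('jl,ijkl->ik', ginv, Riem)      # Ricci tensor
R = np.einsum('ik,ik->', ginv, Ric)             # scalar curvature

# Weyl tensor from the formula in the text
W = (Riem
     - (np.einsum('ik,jl->ijkl', Ric, g) - np.einsum('il,jk->ijkl', Ric, g)
        + np.einsum('jl,ik->ijkl', Ric, g) - np.einsum('jk,il->ijkl', Ric, g)) / (n - 2)
     + R / ((n - 1) * (n - 2))
       * (np.einsum('jl,ik->ijkl', g, g) - np.einsum('jk,il->ijkl', g, g)))
assert np.allclose(W, 0)                        # conformally flat => Weyl tensor vanishes

# Schouten tensor reduces to K g for constant curvature
S = 2 / (n - 2) * Ric - R * g / ((n - 2) * (n - 1))
assert np.allclose(S, K * g)
```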
Sketch of proof The "only if" direction is a direct computation based on how the Weyl and Schouten tensors are modified by a conformal change of metric. The "if" direction requires more work. Consider the following equation for a 1-form ω: $\nabla _{i}\omega _{j}={\frac {1}{2}}\omega _{i}\omega _{j}-{\frac {1}{4}}g^{pq}\omega _{p}\omega _{q}g_{ij}-S_{ij}$ Let Fω,g denote the tensor on the right-hand side. The Frobenius theorem[4] states that the above equation is locally solvable if and only if $\partial _{k}\Gamma _{ij}^{p}\omega _{p}+\Gamma _{ij}^{p}F_{kp}^{\omega ,g}+{\frac {1}{2}}F_{ki}^{\omega ,g}\omega _{j}+{\frac {1}{2}}\omega _{i}F_{kj}^{\omega ,g}-{\frac {1}{4}}\partial _{k}g^{pq}\omega _{p}\omega _{q}g_{ij}-{\frac {1}{2}}g^{pq}\omega _{p}F_{kq}^{\omega ,g}g_{ij}-{\frac {1}{4}}g^{pq}\omega _{p}\omega _{q}\partial _{k}g_{ij}-\partial _{k}S_{ij}$ is symmetric in i and k for any 1-form ω. A direct cancellation of terms[5] shows that this is the case if and only if ${W_{kij}}^{p}\omega _{p}=\nabla _{k}S_{ij}-\nabla _{i}S_{jk}$ for any 1-form ω. If n = 3 then the left-hand side is zero since the Weyl tensor of any three-dimensional metric is zero; the right-hand side is zero whenever the Schouten tensor is a Codazzi tensor. If n ≥ 4 then the left-hand side is zero whenever the Weyl tensor is zero; the right-hand side is also then zero due to the identity given above which relates the Weyl tensor to the Schouten tensor. As such, under the given curvature and dimension conditions, there always exists a locally defined 1-form ω solving the given equation. From the symmetry of the right-hand side, it follows that ω must be a closed form. The Poincaré lemma then implies that there is a real-valued function u with ω = du. Due to the formula for the Ricci curvature under a conformal change of metric, the (locally defined) pseudo-Riemannian metric e^u g is Ricci-flat.
If n = 3 then every Ricci-flat metric is flat, and if n ≥ 4 then every Ricci-flat and Weyl-flat metric is flat.[3] See also • Yamabe problem References Notes. 1. Aubin 1998, Definition 4.23. 2. Aubin 1998, p. 118; Eisenhart 1926, p. 91. 3. Aubin 1998, Theorem 4.24; Eisenhart 1926, Section 28. 4. For the direct version being used, see Abraham, Marsden & Ratiu 1988, Example 6.4.25D; Lee 2013, Proposition 19.29; Warner 1983, Remarks 1.61. 5. This uses the identity $W_{kijp}=R_{kijp}-{\frac {1}{2}}g_{ip}S_{jk}+{\frac {1}{2}}g_{ij}S_{kp}+{\frac {1}{2}}g_{kp}S_{ij}-{\frac {1}{2}}g_{jk}S_{ip}.$ Sources. • Abraham, R.; Marsden, J. E.; Ratiu, T. (1988). Manifolds, tensor analysis, and applications. Applied Mathematical Sciences. Vol. 75 (Second edition of 1983 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4612-1029-0. ISBN 0-387-96790-7. MR 0960687. Zbl 0875.58002. • Aubin, Thierry (1998). Some nonlinear problems in Riemannian geometry. Springer Monographs in Mathematics. Berlin: Springer-Verlag. doi:10.1007/978-3-662-13006-3. ISBN 3-540-60752-8. MR 1636569. Zbl 0896.53003. • Eisenhart, Luther Pfahler (1926). Riemannian geometry. Reprinted in 1997. Princeton: Princeton University Press. doi:10.1515/9781400884216. ISBN 0-691-02353-0. JFM 52.0721.01. • Lee, John M. (2013). Introduction to smooth manifolds. Graduate Texts in Mathematics. Vol. 218 (Second edition of 2003 original ed.). New York: Springer. doi:10.1007/978-1-4419-9982-5. ISBN 978-1-4419-9981-8. MR 2954043. Zbl 1258.53002. • Warner, Frank W. (1983). Foundations of differentiable manifolds and Lie groups. Graduate Texts in Mathematics. Vol. 94 (Corrected reprint of the 1971 original ed.). New York–Berlin: Springer-Verlag. doi:10.1007/978-1-4757-1799-0. ISBN 0-387-90894-3. MR 0722297. Zbl 0516.58001.
What Is Mathematics? What Is Mathematics? is a mathematics book written by Richard Courant and Herbert Robbins, published in England by Oxford University Press. It is an introduction to mathematics, intended both for the mathematics student and for the general public. What Is Mathematics? Cover of 1996 second edition Author: Richard Courant and Herbert Robbins Language: English Subject: Mathematics Publisher: Oxford University Press Publication date: 1941 ISBN: 0-19-502517-2 OCLC: 16608993 First published in 1941, it discusses number theory, geometry, topology and calculus. A second edition was published in 1996 with an additional chapter on recent progress in mathematics, written by Ian Stewart. Authorship The book was based on Courant's course material. Although Robbins assisted in writing a large part of the book, he had to fight for authorship. Nevertheless, Courant alone held the copyright for the book. This resulted in Robbins receiving a smaller share of the royalties.[1][2] Title Michael Katehakis recalls Robbins' interest in literature, and in Tolstoy in particular, and is convinced that the title of the book is most likely due to Robbins, who was inspired by the title of the essay What Is Art? by Leo Tolstoy. Robbins did the same with the book Great Expectations: The Theory of Optimal Stopping, which he co-authored with Yuan-Shih Chow and David Siegmund, and whose title unmistakably echoes the novel Great Expectations by Charles Dickens. According to Constance Reid,[2] Courant finalized the title after a conversation with Thomas Mann. Translations • The first Russian translation Что такое математика? was published in 1947; five more translations have appeared since then, the most recent in 2010. • The first Italian translation, Che cos'è la matematica?, was published in 1950. A translation of the second edition was issued in 2000. • The first German translation Was ist Mathematik? by Iris Runge was published in 1962.
• A Spanish translation of the second edition, ¿Qué Son Las Matemáticas?, was published in 2002. • The first Bulgarian translation, Що е математика?, was published in 1967. A second translation appeared in 1985. • The first Romanian translation, Ce este matematica?, was published in 1969. • The first Polish translation, Co to jest matematyka, was published in 1959. A second translation appeared in 1967. A translation of the second edition was published in 1998. • The first Hungarian translation, Mi a matematika?, was published in 1966. • The first Serbian translation, Šta je matematika?, was published in 1973. • The first Japanese translation, 数学とは何か, was published in 1966. A translation of the second edition was published in 2001. • A Korean translation of the second edition, 수학이란 무엇인가, was published in 2000. • A Portuguese translation of the second edition, O que é matemática?, was published in 2000. Reviews • What is Mathematics? An Elementary Approach to Ideas and Methods, book review by Brian E. Blank, Notices of the American Mathematical Society 48, #11 (December 2001), pp. 1325–1330 • What is Mathematics?, book review by Leonard Gillman, The American Mathematical Monthly 105, #5 (May 1998), pp. 485–488. Editions • Richard Courant and Herbert Robbins (1941). What is Mathematics?: An Elementary Approach to Ideas and Methods. London: Oxford University Press. ISBN 0-19-502517-2. Reprinted several times with a few corrections of minor errors and misprints as a "Second Edition" in 1943, as a "Third Edition" in 1945, as a "Fourth Edition" in 1947, as "Ninth Printing" in 1958 and as "Tenth Printing" in 1960, and in 1978.[3][4] • (1996) 2nd edition, with additional material by Ian Stewart. New York: Oxford University Press. ISBN 0-19-510519-2. • Courant, Richard; Robbins, Herbert (2015). Qu'est-ce que les mathématiques ? Une introduction élémentaire aux idées et aux méthodes. Cassini. ISBN 9782842252045.
French translation of the second English edition by Marie Anglade and Karine Py. • Courant, Richard; Robbins, Herbert; Stewart, Ian (2002). ¿Qué Son Las Matemáticas? Conceptos y métodos fundamentales (in Spanish). México, D. F.: Fondo de Cultura Económica. ISBN 968-16-6717-4. Spanish translation of the second English edition. • Courant, Richard; Robbins, Herbert (1950). Che cos'è la matematica? Introduzione elementare ai suoi concetti e metodi (in Italian). Turin: Einaudi. (first Italian translation, from the 1945 English edition) • Courant, Richard; Robbins, Herbert (1971). Che cos'è la matematica? Introduzione elementare ai suoi concetti e metodi (in Italian). Turin: Boringhieri. (based on the previous Einaudi edition) • Courant, Richard; Robbins, Herbert (1984). Toán học là gì (in Vietnamese). Hanoi: Khoa học Kỹ thuật. (Vietnamese translation by Hàn Liên Hải from the Russian edition) • Courant, Richard; Robbins, Herbert; Stewart, Ian (2000). Che cos'è la matematica? Introduzione elementare ai suoi concetti e metodi (in Italian). Turin: Bollati Boringhieri. ISBN 88-339-1200-0. (Italian translation of the second English edition) References 1. Page, Warren; Robbins, Herbert (1984), "An Interview with Herbert Robbins", The College Mathematics Journal, The Mathematical Association of America, 15 (1): 5, doi:10.2307/3027425, JSTOR 3027425 2. Reid, Constance, Courant in Göttingen and New York. The story of an improbable mathematician. Springer-Verlag, New York-Heidelberg, 1976. ii+314 pp. 3. Courant, Richard and Robbins, Herbert Ellis, What is Mathematics?, Oxford University Press, London-New York-Toronto, Tenth Printing, 1960. xix+521 pp. 4. Courant, Richard and Robbins, Herbert Ellis, What is Mathematics?, Oxford University Press, London-New York-Toronto, 1978. • Herbert Robbins, Great Expectations: The Theory of Optimal Stopping, with Y. S. Chow and David Siegmund. Boston: Houghton Mifflin, 1971.
What We Cannot Know What We Cannot Know: Explorations at the Edge of Knowledge is a 2016 popular science book by the British mathematician Marcus du Sautoy. He poses questions from science and mathematics and attempts to identify whether their answers are known, currently unknown, or impossible ever to know. What We Cannot Know Front cover Author: Marcus du Sautoy Subject: Epistemology Publisher: HarperCollins Publication date: 19 May 2016 Pages: 320 ISBN: 9780007576661 Background The author, British mathematician Marcus du Sautoy, succeeded Richard Dawkins as Simonyi Professor for the Public Understanding of Science. His contributions to science communication include television documentaries and a co-hosting role on Dara Ó Briain: School of Hard Sums.[1] Du Sautoy said that the book took three years to write. He was inspired to explore unknowns in science by considering provable unknowns in mathematics: for instance, Gödel's first incompleteness theorem states that in any (sufficiently sophisticated) logical system, there are true statements about positive whole numbers that cannot be proven true. By analogy, du Sautoy says that there are unknown questions around consciousness because every person is limited to their own consciousness (like a formal system is limited to its axioms).[2] Another mathematical analogy du Sautoy made is that Euclid's theorem—that there are infinitely many prime numbers—is a finite proof of a fact about infinity. Du Sautoy imagines that, in physics, some proof of infinitude of the universe could be similarly possible.[3] The book was published on 19 May 2016.[4] Synopsis Du Sautoy identifies seven "edges" of human knowledge, through consideration of physical objects. For instance, he questions whether it is possible to know what side a die will land on prior to rolling, using probability and chaos theory in his analysis. He explores philosophical and scientific concepts of time and consciousness.
Other topics include evolutionary biology and particle physics. As well as unknown questions, he illustrates known facts from quantum physics and astronomy. Du Sautoy connects the unknowns of human knowledge to God, recalling that a radio interviewer defined God to him as "something which transcends human understanding". He ultimately rejects belief in a deity himself. He illustrates topics with examples from his own life, such as his practice of the trumpet and cello. Reception In Undark Magazine, science communicator John Durant praised the book as "honestly self-deprecating" and noted that du Sautoy manages to be "amiable and entertaining" without exaggerating scientific fact.[1] Similarly, Nicola Davis of The Guardian praised the way du Sautoy "exposes with humility his own confusions, apprehensions and concerns", but criticised the ending, in which he "somewhat limply concludes" that what humans cannot know may remain unknown.[5] In contrast, a writer for The Economist found the conclusion to be "optimistic" and saw the book as "fascinating".[6] Barbara Kiser reviewed the book for Nature as a "finely synthesized study" in which du Sautoy takes readers on a "dazzling journey".[7] Rob Kingston recommended it as a science book of 2016 for The Times.[8] The University of Limerick's Centre for Teaching and Learning listed it as one of seven books that they encouraged students to read in 2020.[9] References 1. Durant, John (22 July 2016). "Book Review: What We Cannot Know". Undark Magazine. Retrieved 26 November 2022. 2. du Sautoy, Marcus (2 March 2017). "What We Cannot Know". Retrieved 26 November 2022. 3. du Sautoy, Marcus (19 May 2016). "If I ruled the world: Marcus du Sautoy". Prospect. Retrieved 26 November 2022. 4. "What We Cannot Know". Waterstones. Retrieved 26 November 2022. 5. Davis, Nicola (15 May 2016). "What We Cannot Know by Marcus du Sautoy – review". The Guardian. Retrieved 26 November 2022. 6. "Circle in a Circle; The Boundaries of Science". The Economist. Vol.
419, no. 8994. 2016. pp. 84–87. 7. Kiser, Barbara (2016). "What We Cannot Know (Review)". Nature. 533 (7603): 319. doi:10.1038/533319a. S2CID 4449643. 8. Kingston, Rob (27 November 2016). "Books of the year: science". The Times. Retrieved 26 November 2022. 9. "Seven must-read books for freshers". Irish Independent. 12 September 2020.
Wheel theory A wheel is a type of algebra (in the sense of universal algebra) where division is always defined. In particular, division by zero is meaningful. The real numbers can be extended to a wheel, as can any commutative ring. The term wheel is inspired by the topological picture $\odot $ of the real projective line together with an extra point ⊥ (bottom element) such that $\bot =0/0$.[1] A wheel can be regarded as the equivalent of a commutative ring (and semiring) where addition and multiplication are not a group but respectively a commutative monoid and a commutative monoid with involution.[1] Definition A wheel is an algebraic structure $(W,0,1,+,\cdot ,/)$, in which • $W$ is a set, • $0$ and $1$ are elements of that set, • $+$ and $\cdot $ are binary operations, • $/$ is a unary operation, and satisfying the following properties: • $+$ and $\cdot $ are each commutative and associative, and have $0$ and $1$ as their respective identities. • $//x=x$ ($/$ is an involution) • $/(xy)=/x/y$ ($/$ is multiplicative) • $(x+y)z+0z=xz+yz$ • $(x+yz)/y=x/y+z+0y$ • $0\cdot 0=0$ • $(x+0y)z=xz+0y$ • $/(x+0y)=/x+0y$ • $0/0+x=0/0$ Algebra of wheels Wheels replace the usual division as a binary operation with multiplication, with a unary operation applied to one argument $/x$ similar (but not identical) to the multiplicative inverse $x^{-1}$, such that $a/b$ becomes shorthand for $a\cdot /b=/b\cdot a$, but neither $a\cdot b^{-1}$ nor $b^{-1}\cdot a$ in general, and modifies the rules of algebra such that • $0x\neq 0$ in the general case • $x/x\neq 1$ in the general case, as $/x$ is not the same as the multiplicative inverse of $x$. Other identities that may be derived are • $0x+0y=0xy$ • $x/x=1+0x/x$ • $x-x=0x^{2}$ where the negation $-x$ is defined by $-x=ax$ and $x-y=x+(-y)$ if there is an element $a$ such that $1+a=0$ (thus in the general case $x-x\neq 0$).
However, for values of $x$ satisfying $0x=0$ and $0/x=0$, we get the usual • $x/x=1$ • $x-x=0$ If negation can be defined as above then the subset $\{x\mid 0x=0\}$ is a commutative ring, and every commutative ring is such a subset of a wheel. If $x$ is an invertible element of the commutative ring then $x^{-1}=/x$. Thus, whenever $x^{-1}$ makes sense, it is equal to $/x$, but the latter is always defined, even when $x=0$. Examples Wheel of fractions Let $A$ be a commutative ring, and let $S$ be a multiplicative submonoid of $A$. Define the congruence relation $\sim _{S}$ on $A\times A$ via $(x_{1},x_{2})\sim _{S}(y_{1},y_{2})$ means that there exist $s_{x},s_{y}\in S$ such that $(s_{x}x_{1},s_{x}x_{2})=(s_{y}y_{1},s_{y}y_{2})$. Define the wheel of fractions of $A$ with respect to $S$ as the quotient $A\times A~/{\sim _{S}}$ (and denoting the equivalence class containing $(x_{1},x_{2})$ as $[x_{1},x_{2}]$) with the operations $0=[0_{A},1_{A}]$           (additive identity) $1=[1_{A},1_{A}]$           (multiplicative identity) $/[x_{1},x_{2}]=[x_{2},x_{1}]$           (reciprocal operation) $[x_{1},x_{2}]+[y_{1},y_{2}]=[x_{1}y_{2}+x_{2}y_{1},x_{2}y_{2}]$           (addition operation) $[x_{1},x_{2}]\cdot [y_{1},y_{2}]=[x_{1}y_{1},x_{2}y_{2}]$           (multiplication operation) Projective line and Riemann sphere The special case of the above starting with a field produces a projective line extended to a wheel by adjoining a bottom element noted ⊥, where $0/0=\bot $. The projective line is itself an extension of the original field by an element $\infty $, where $z/0=\infty $ for any element $z\neq 0$ in the field. However, $0/0$ is still undefined on the projective line, but is defined in its extension to a wheel. Starting with the real numbers, the corresponding projective "line" is geometrically a circle, and then the extra point $0/0$ gives the shape that is the source of the term "wheel".
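The wheel-of-fractions construction above is easy to experiment with concretely. The sketch below (illustrative code, not from the source; it takes A = ℤ and S the nonzero integers, so each class reduces to a normalized pair in lowest terms) implements the three operations and checks a few of the axioms and derived identities, including 0/0 + x = 0/0 and ∞ − ∞ = ⊥:

```python
from math import gcd

def norm(a, b):
    """Canonical representative of the class [a, b] (A = Z, S = Z \\ {0})."""
    if a == 0 and b == 0:
        return (0, 0)                        # bottom element 0/0
    g = gcd(a, b)
    a, b = a // g, b // g
    if b < 0 or (b == 0 and a < 0):          # fix the sign of the representative
        a, b = -a, -b
    return (a, b)

def w_inv(x):                                # /[x1, x2] = [x2, x1]
    return norm(x[1], x[0])

def w_add(x, y):                             # [x1,x2] + [y1,y2] = [x1 y2 + x2 y1, x2 y2]
    return norm(x[0] * y[1] + x[1] * y[0], x[1] * y[1])

def w_mul(x, y):                             # [x1,x2] . [y1,y2] = [x1 y1, x2 y2]
    return norm(x[0] * y[0], x[1] * y[1])

ZERO, ONE, INF, BOT = (0, 1), (1, 1), (1, 0), (0, 0)

assert w_inv(ZERO) == INF and w_inv(INF) == ZERO and w_inv(BOT) == BOT
assert w_mul(ZERO, ZERO) == ZERO             # axiom 0 . 0 = 0
assert w_add(BOT, (3, 4)) == BOT             # axiom 0/0 + x = 0/0
assert w_mul(INF, w_inv(INF)) == BOT         # x/x = 1 fails at x = infinity
neg_inf = w_mul((-1, 1), INF)
assert w_add(INF, neg_inf) == BOT            # infinity - infinity = 0/0
assert w_mul((2, 3), w_inv((2, 3))) == ONE   # but x/x = 1 when 0x = 0 and 0/x = 0
```

Note how /x agrees with the multiplicative inverse on the invertible elements and only departs from it at 0, ∞, and ⊥, exactly as the text describes.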
Or starting with the complex numbers instead, the corresponding projective "line" is a sphere (the Riemann sphere), and then the extra point gives a 3-dimensional version of a wheel. See also • NaN Citations 1. Carlström 2004. References • Setzer, Anton (1997), Wheels (PDF) (a draft) • Carlström, Jesper (2004), "Wheels – On Division by Zero", Mathematical Structures in Computer Science, Cambridge University Press, 14 (1): 143–184, doi:10.1017/S0960129503004110, S2CID 11706592. • Bergstra, Jan A.; Tucker, John V. (1 April 2007). "The rational numbers as an abstract data type". Journal of the ACM. 54 (2): 7. doi:10.1145/1219092.1219095. S2CID 207162259. • Bergstra, Jan A.; Ponse, Alban (2015). "Division by Zero in Common Meadows". Software, Services, and Systems: Essays Dedicated to Martin Wirsing on the Occasion of His Retirement from the Chair of Programming and Software Engineering. Lecture Notes in Computer Science. Springer International Publishing. 8950: 46–61. arXiv:1406.6878. doi:10.1007/978-3-319-15545-6_6. ISBN 978-3-319-15544-9. S2CID 34509835.
Wheel factorization Wheel factorization is a method for generating a sequence of natural numbers by repeated additions, as determined by a number of the first few primes, so that the generated numbers are coprime with these primes, by construction. Description For a chosen number n (usually no larger than 4 or 5), the first n primes determine the specific way to generate a sequence of natural numbers which are all known in advance to be coprime with these primes, i.e. are all known to not be multiples of any of these primes. This method can thus be used for an improvement of the trial division method for integer factorization, as none of the generated numbers need be tested in trial divisions by those small primes. The trial division method consists of dividing the number to be factorized by the integers in increasing order (2, 3, 4, 5, ...) successively. A common improvement consists of testing only by primes, i.e. by 2, 3, 5, 7, 11, ... . With the wheel factorization, one starts from a small list of numbers, called the basis — generally the first few prime numbers; then one generates the list, called the wheel, of the integers that are coprime with all the numbers in the basis. Then, for the numbers generated by "rolling the wheel", one needs to only consider the primes not in the basis as their possible factors. It is as if these generated numbers have already been tested, and found to not be divisible by any of the primes in the basis. It is an optimization because all these operations become redundant, and are spared from being performed at all. When used in finding primes, or sieving in general, this method reduces the amount of candidate numbers to be considered as possible primes. With the basis {2, 3}, the reduction is to 1/3 < 34% of all the numbers. This means that fully 2/3 of all the candidate numbers are skipped over automatically. 
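The candidate-density figures quoted in this section amount to counting the residues modulo the basis product that are coprime to it (Euler's totient of the product). A short Python check:

```python
from math import gcd, prod

def wheel(basis):
    """Residues in [1, m] coprime to every base prime, where m = product of basis."""
    m = prod(basis)
    return m, [r for r in range(1, m + 1) if gcd(r, m) == 1]

m, spokes = wheel([2, 3])
assert (m, spokes) == (6, [1, 5])            # 2/6 = 1/3 of candidates survive

m, spokes = wheel([2, 3, 5])
assert m == 30 and len(spokes) == 8          # 8/30 < 27%
assert spokes == [1, 7, 11, 13, 17, 19, 23, 29]

m, spokes = wheel([2, 3, 5, 7])
assert m == 210 and len(spokes) == 48        # 48/210 < 23%
```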
Larger bases reduce this proportion even further; for example, with basis {2, 3, 5} to 8/30 < 27%; and with basis {2, 3, 5, 7} to 48/210 < 23%. The bigger the wheel the larger the computational resources involved and the smaller the additional improvements, though, so it is a case of quickly diminishing returns. Introduction Natural numbers from 1 and up are enumerated by repeated addition of 1: 1, 2, 3, 4, 5, ... Considered by spans of two numbers each, they are enumerated by repeated additions of 2: 1, 2  ;  3, 4  ;  5, 6, ... Every second number thus generated will be even. Thus the odd numbers are generated by repeated additions of 2: 1  ;  3  ;  5  ;  7 ... Considered by spans of three numbers each, they are enumerated by repeated additions of 2 * 3 = 6: 1, 3, 5  ;  7, 9, 11  ;  ... The middle number in each of these triplets will be a multiple of 3, because numbers of the form 3 + 6k are all odd multiples of 3. Thus all the numbers coprime with the first two primes i.e. 2 and 3, i.e. 2 * 3 = 6–coprime numbers, will be generated by repeated additions of 6, starting from {1, 5}: 1, 5  ;  7, 11  ;  13, 17  ;  ... The same sequence can be generated by repeated additions of 2 * 3 * 5 = 30, turning each five consecutive spans, of two numbers each, into one joined span of ten numbers: 1, 5, 7, 11, 13, 17, 19, 23, 25, 29  ;  31, 35, 37, ... Out of each ten of these 6–coprime numbers, two are multiples of 5, thus the remaining eight will be 30–coprime: 1, 7, 11, 13, 17, 19, 23, 29  ;  31, 37, 41, 43, 47, 49, ... This is naturally generalized. The above showcases the first three wheels: • {1} (containing one i.e. (2-1) number) with the "circumference" of 2 for generating the sequence of 2–coprimes i.e. odds by repeated addition of 2; • {1,5} (containing two i.e. (2-1)*(3-1) numbers) with the "circumference" of 2 * 3 = 6, for generating the sequence of 6–coprime numbers by repeated additions of 6; • {1, 7, 11, 13, 17, 19, 23, 29} (containing eight i.e.
(2-1)*(3-1)*(5-1) numbers) with the "circumference" of 2*3*5 = 30, for generating the sequence of 30–coprime numbers by repeated additions of 30; etc. Another representation of these wheels is by turning a wheel's numbers, as seen above, into a circular list of the differences between the consecutive numbers, and then generating the sequence starting from 1 by repeatedly adding these increments one after another to the last generated number, indefinitely. This is the closest it comes to the "rolling the wheel" metaphor. For instance, this turns {1, 7, 11, 13, 17, 19, 23, 29, 31} into {6, 4, 2, 4, 2, 4, 6, 2}, and then the sequence is generated as • n=1; n+6=7; n+4=11; n+2=13; n+4=17; n+2=19; n+4=23; n+6=29; n+2=31; n+6=37; n+4=41; n+2=43; etc. A typical example With a given basis of the first few prime numbers {2, 3, 5}, the "first turn" of the wheel consists of: 7, 11, 13, 17, 19, 23, 29, 31. The second turn is obtained by adding 30, the product of the basis, to the numbers in the first turn. The third turn is obtained by adding 30 to the second turn, and so on. For implementing the method, one may remark that the increments between two consecutive elements of the wheel, that is inc = [4, 2, 4, 2, 4, 6, 2, 6], remain the same after each turn. The suggested implementation that follows uses an auxiliary function div(n, k), which tests whether n is evenly divisible by k, and returns true in this case, false otherwise. In this implementation, the number to be factorized is n, and the program returns the smallest divisor of n – returning n itself if it is prime.

if div(n, 2) = true then return 2
if div(n, 3) = true then return 3
if div(n, 5) = true then return 5
k := 7; i := 0
while k * k ≤ n do
    if div(n, k) = true then return k
    k := k + inc[i]
    if i < 7 then i := i + 1 else i := 0
return n

For getting the complete factorization of an integer, the computation may be continued without restarting the wheel at the beginning.
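The smallest-divisor routine just given translates almost line for line into Python (a sketch; the auxiliary div(n, k) becomes the test n % k == 0):

```python
INC = [4, 2, 4, 2, 4, 6, 2, 6]   # gaps of the 2-3-5 wheel, starting from 7

def smallest_divisor(n):
    """Smallest divisor > 1 of n >= 2; returns n itself when n is prime."""
    for p in (2, 3, 5):          # the base primes are tested explicitly
        if n % p == 0:
            return p
    k, i = 7, 0                  # roll the wheel: only 30-coprime candidates
    while k * k <= n:
        if n % k == 0:
            return k
        k += INC[i]
        i = i + 1 if i < 7 else 0
    return n

assert smallest_divisor(91) == 7      # 91 = 7 * 13
assert smallest_divisor(121) == 11
assert smallest_divisor(97) == 97     # prime
```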
This leads to the following program for a complete factorization, where the function "add" adds its first argument at the end of the second argument, which must be a list.

factors := [ ]
while div(n, 2) = true do
    factors := add(2, factors)
    n := n / 2
while div(n, 3) = true do
    factors := add(3, factors)
    n := n / 3
while div(n, 5) = true do
    factors := add(5, factors)
    n := n / 5
k := 7; i := 0
while k * k ≤ n do
    if div(n, k) = true then
        factors := add(k, factors)
        n := n / k
    else
        k := k + inc[i]
        if i < 7 then i := i + 1 else i := 0
if n > 1 then factors := add(n, factors)
return factors

Another presentation Wheel factorization is used for generating lists of mostly prime numbers from a simple mathematical formula and a much smaller list of the first prime numbers. These lists may then be used in trial division or sieves. Because not all the numbers in these lists are prime, doing so introduces inefficient redundant operations. However, the generators themselves require very little memory compared to keeping a pure list of prime numbers. The small list of initial prime numbers constitutes the complete parameters for the algorithm to generate the remainder of the list. These generators are referred to as wheels. While each wheel may generate an infinite list of numbers, past a certain point the numbers cease to be mostly prime. The method may further be applied recursively as a prime number wheel sieve to generate more accurate wheels. Much definitive work on wheel factorization, sieves using wheel factorization, and the wheel sieve was done by Paul Pritchard[1][2][3][4] in formulating a series of different algorithms. To visualize the use of a factorization wheel, one may start by writing the natural numbers around circles as shown in the adjacent diagram. The number of spokes is chosen such that prime numbers will have a tendency to accumulate in a minority of the spokes. Sample graphical procedure 1. Find the first few prime numbers to form the basis of the factorization wheel.
They may already be known, or may be determined from previous applications of smaller factorization wheels, or found quickly using the Sieve of Eratosthenes.
2. Multiply the base primes together to give the result n, which is the circumference of the factorization wheel.
3. Write the numbers 1 to n in a circle. This will be the innermost circle, representing one rotation of the wheel.
4. From the numbers 1 to n in the innermost circle, strike off all multiples of the base primes from step 1. This composite-number elimination can be accomplished either by use of a sieve such as the Sieve of Eratosthenes or as the result of applications of smaller factorization wheels.
5. Taking x to be the number of circles written so far, continue to write xn + 1 to xn + n in concentric circles around the innermost circle, such that xn + 1 is in the same position as (x − 1)n + 1.
6. Repeat step 5 until the largest circle spans the largest number to be tested for primality.
7. Strike off the number 1.
8. Strike off the spokes of the primes found in step 1 in all outer circles, without striking off the primes themselves in the innermost circle (circle 1).
9. Strike off the spokes of all multiples struck from circle 1 in step 4, in the same way as the spokes of the base primes were struck in step 8.
10. The remaining numbers in the wheel are mostly primes (collectively, they are the numbers relatively prime to the base). Use other methods, such as the Sieve of Eratosthenes or further application of larger factorization wheels, to remove the remaining non-primes.
Example
1. Find the first 2 prime numbers: 2 and 3.
2. n = 2 × 3 = 6
3. 1 2 3 4 5 6
4. Strike off the multiples of 2 (namely 4 and 6); 6, the only multiple of 3 in range, is thereby already struck. Remaining:
1 2 3 5
5. x = 1. xn + 1 = 1 · 6 + 1 = 7. (x + 1)n = (1 + 1) · 6 = 12. Write 7 to 12 with 7 aligned with 1.
1 2 3 4 5 6
7 8 9 10 11 12
6.
x = 2. xn + 1 = 2 · 6 + 1 = 13. (x + 1)n = (2 + 1) · 6 = 18. Write 13 to 18. Repeat for the next few lines:
1 2 3 4 5 6
7 8 9 10 11 12
13 14 15 16 17 18
19 20 21 22 23 24
25 26 27 28 29 30
7 and 8. Strike off 1, and strike off the spokes of 2 and 3 in the outer circles (8, 14, 20, 26 and 9, 15, 21, 27), leaving
2 3 5 7 10 11 12 13 16 17 18 19 22 23 24 25 28 29 30
9. Strike off the spokes of 4 and 6, the multiples struck from the inner circle in step 4 (10, 16, 22, 28 and 12, 18, 24, 30), leaving
2 3 5 7 11 13 17 19 23 25 29
10. The resulting list contains one non-prime, 25 = 5². Use other methods such as a sieve to eliminate it, arriving at
2 3 5 7 11 13 17 19 23 29
Note that by using exactly 5 (the next prime) wheel cycles and eliminating the multiple(s) of that prime (and only that prime) from the resulting list, we have obtained the base wheel, as per step 4, for a factorization wheel with base primes 2, 3, and 5; this is one wheel in advance of the previous 2/3 factorization wheel. One could then follow the steps up to step 10 using the next prime, 7, for the number of cycles, eliminating only the multiples of 7 from the resulting list in step 10 (which leaves some "relative" primes in this case and in all successive cases, i.e. some numbers that are not true primes), to get the next further advanced wheel, recursively repeating the steps as necessary to get successively larger wheels.
Analysis and computer implementation
Formally, the method makes use of the following insights: First, the set of base primes, unioned with the (infinite) set of numbers coprime to them, is a superset of the primes. Second, the infinite set of coprimes can be enumerated easily from the coprimes to the base set that lie between 2 and the product of the base set. (Note that 1 requires special handling.) As seen in the example above, the result of repeated applications of the above recursive procedure (steps 4 to 10) can be a wheel list which spans any desired sieving range (to which it can be truncated); the resulting list then contains, besides primes, only numbers whose prime factors are all larger than the base primes.
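The recursive procedure of steps 4 to 10 can be sketched in a few lines of Python (a simplified illustration of my own, not the article's code): each extension unrolls p copies of the current wheel and strikes the multiples of the new base prime p.

```python
def next_wheel(residues, circumference, p):
    """Extend a wheel by the prime p: unroll p copies, drop multiples of p."""
    unrolled = [r + k * circumference for k in range(p) for r in residues]
    return [r for r in unrolled if r % p != 0], circumference * p

residues, n = [1], 1               # trivial wheel: every number is a candidate
for p in (2, 3, 5):
    residues, n = next_wheel(residues, n, p)

print(sorted(residues))            # [1, 7, 11, 13, 17, 19, 23, 29], the 2,3,5 wheel
print(1 - len(residues) / n)       # fraction of candidates eliminated, 1 - phi(30)/30
```

For the 2,3,5 wheel this eliminates 22 of every 30 numbers, about 73.3%, matching the efficiency 1 − φ(n)/n discussed below (1 is kept in the residue list and, as noted above, requires special handling).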
Note that once a wheel spans the desired upper limit of the sieving range, one can stop generating further wheels and use the information in that wheel to cull the remaining composite numbers from the last wheel list, using a Sieve of Eratosthenes-type technique that follows the gap pattern inherent to the wheel to avoid redundant culls; some optimizations may be made based on the fact (proven in the next section) that there will be no repeated culling of any composite number: each remaining composite will be culled exactly once. Alternatively, one can continue to generate truncated wheel lists using primes up to the square root of the desired sieve range, in which case all remaining number representations in the wheel will be prime; however, although this method never culls a composite number more than once, it loses much time outside the normally considered culling operations in processing the successive wheel sweeps, and so takes much longer overall. The elimination of composite numbers by a factorization wheel is based on the following: given a number k > n, we know that k is not prime if k mod n and n are not relatively prime. From that, the fraction of numbers that the wheel sieve eliminates can be determined (although not all need be physically struck off; many can be culled automatically in the operations of copying lesser wheels to greater wheels) as 1 − φ(n)/n, which is also the efficiency of the sieve. It is known that
$\liminf {\frac {\varphi (n)}{n}}\log \log n=e^{-\gamma }\approx 0.56145948,$
where γ is Euler's constant.[5] Thus φ(n)/n goes to zero slowly as n increases to infinity, and it can be seen that this efficiency rises very slowly towards 100% for infinitely large n. From the properties of φ, it can easily be seen that the most efficient sieve smaller than x is the one where $n=p_{1}p_{2}\cdots p_{i}<x$ and $np_{i+1}\geq x$ (i.e.
wheel generation can stop when the last wheel passes or has a sufficient circumference to include the highest number in the sieving range). To be of maximum use on a computer, we want the numbers that are smaller than n and relatively prime to it as a set. Using a few observations, this set can easily be generated:
1. Start with $S_{1}=\{1\}$, which is the set for $n=1$, with 2 as the first prime. This initial set means that all numbers starting at two are included as "relative" primes, as the circumference of the wheel is 1.
2. The following sets are $S_{2}=\{1\}$, meaning the wheel starts at 3 and covers all odd numbers, with the multiples of 2 eliminated (circumference of 2); $S_{6}=\{1,5\}$, with the multiples of 2 and 3 eliminated (circumference of 6), as for the initial base wheel in the example above; and so on.
3. Let $S_{n}+k$ be the set obtained by adding k to each element of $S_{n}$.
4. Then $S_{np_{i+1}}=F_{p_{i+1}}[S_{n}\cup (S_{n}+n)\cup (S_{n}+2n)\cup \cdots \cup (S_{n}+n(p_{i+1}-1))]$, where $F_{x}$ represents the operation of removing all multiples of x.
5. 1 and $p_{i+1}$ will be the two smallest elements of $S_{n}$ when $n>2$, removing the need to compute prime numbers separately, although the algorithm does need to keep a record of all eliminated base primes, which are no longer included in the succeeding sets.
6. All sets where the circumference n > 2 are symmetrical around $n/2$, reducing storage requirements. The following algorithm does not use this fact, but it is based on the fact that the gaps between successive numbers in each set are symmetrical around the halfway point.
See also
• Sieve of Sundaram
• Sieve of Atkin
• Sieve theory
References
1. Pritchard, Paul, "Linear prime-number sieves: a family tree", Sci. Comput. Programming 9:1 (1987), pp. 17–35.
2. Paul Pritchard, A sublinear additive sieve for finding prime numbers, Communications of the ACM 24 (1981), 18–23. MR600730
3. Paul Pritchard, Explaining the wheel sieve, Acta Informatica 17 (1982), 477–485. MR685983
4.
Paul Pritchard, Fast compact prime number sieves (among others), Journal of Algorithms 4 (1983), 332–344. MR729229
5. Hardy & Wright 1979, thm. 328.
External links
• Wheel Factorization
• Improved incremental prime number sieves by Paul Pritchard
Wikipedia
Wheel graph
In the mathematical discipline of graph theory, a wheel graph is a graph formed by connecting a single universal vertex to all vertices of a cycle. A wheel graph with n vertices can also be defined as the 1-skeleton of an (n − 1)-gonal pyramid. Some authors[1] write Wn to denote a wheel graph with n vertices (n ≥ 4); other authors[2] instead use Wn to denote a wheel graph with n + 1 vertices (n ≥ 3), which is formed by connecting a single vertex to all vertices of a cycle of length n. The rest of this article uses the former notation.
Wheel graph (several examples shown in the adjacent figure)
• Vertices: n
• Edges: 2(n − 1)
• Diameter: 2 if n > 4; 1 if n = 4
• Girth: 3
• Chromatic number: 4 if n is even; 3 if n is odd
• Spectrum: $\{2\cos(2k\pi /(n-1)):k=1,\ldots ,n-2\}\cup \{1\pm {\sqrt {n}}\}$
• Properties: Hamiltonian, self-dual, planar
• Notation: Wn
Set-builder construction
Given a vertex set of {1, 2, 3, …, v}, the edge set of the wheel graph can be represented in set-builder notation by {{1, 2}, {1, 3}, …, {1, v}, {2, 3}, {3, 4}, …, {v − 1, v}, {v, 2}}.[3]
Properties
Wheel graphs are planar graphs, and have a unique planar embedding. More specifically, every wheel graph is a Halin graph. They are self-dual: the planar dual of any wheel graph is an isomorphic graph. Every maximal planar graph, other than K4 = W4, contains as a subgraph either W5 or W6. There is always a Hamiltonian cycle in the wheel graph, and there are $n^{2}-3n+3$ cycles in Wn (sequence A002061 in the OEIS). For odd values of n, Wn is a perfect graph with chromatic number 3: the vertices of the cycle can be given two colors, and the center vertex a third color. For even n, Wn has chromatic number 4 and (when n ≥ 6) is not perfect.
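The set-builder construction can be checked with a small brute-force sketch (the function names are my own): vertex 1 is the hub and vertices 2 to n form the rim cycle. The same sketch verifies the edge count 2(n − 1) and, for small cases, the chromatic polynomial given below.

```python
from itertools import product

def wheel_edges(n):
    """Edge set of W_n: hub-to-rim spokes plus the rim cycle."""
    spokes = [(1, i) for i in range(2, n + 1)]
    rim = [(i, i + 1) for i in range(2, n)] + [(n, 2)]
    return spokes + rim

def count_colorings(n, edges, x):
    """Number of proper colorings of the n vertices with x colors."""
    return sum(
        1
        for c in product(range(x), repeat=n)
        if all(c[a - 1] != c[b - 1] for a, b in edges)
    )

n = 6
edges = wheel_edges(n)
print(len(edges))  # 2(n - 1) = 10

# Compare with the chromatic polynomial P(x) = x((x - 2)^(n - 1) - (-1)^n (x - 2)).
x = 4
print(count_colorings(n, edges, x) == x * ((x - 2) ** (n - 1) - (-1) ** n * (x - 2)))
```

With 3 colors, count_colorings for W6 is 0 (even n needs 4 colors), while W7 admits proper 3-colorings, matching the parity rule for the chromatic number.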
W7 is the only wheel graph that is a unit distance graph in the Euclidean plane.[4] The chromatic polynomial of the wheel graph Wn is
$P_{W_{n}}(x)=x((x-2)^{n-1}-(-1)^{n}(x-2)).$
In matroid theory, two particularly important special classes of matroids are the wheel matroids and the whirl matroids, both derived from wheel graphs. The k-wheel matroid is the graphic matroid of a wheel Wk+1, while the k-whirl matroid is derived from the k-wheel by considering the outer cycle of the wheel, as well as all of its spanning trees, to be independent. The wheel W6 supplied a counterexample to a conjecture of Paul Erdős on Ramsey theory: he had conjectured that the complete graph has the smallest Ramsey number among all graphs with the same chromatic number, but Faudree and McKay (1993) showed that W6 has Ramsey number 17, while the complete graph with the same chromatic number, K4, has Ramsey number 18.[5] That is, for every 17-vertex graph G, either G or its complement contains W6 as a subgraph, while neither the 17-vertex Paley graph nor its complement contains a copy of K4.
References
1. Weisstein, Eric W. "Wheel Graph". MathWorld.
2. Rosen, Kenneth H. (2011). Discrete Mathematics and Its Applications (7th ed.). McGraw-Hill. p. 655. ISBN 978-0073383095.
3. Trudeau, Richard J. (1993). Introduction to Graph Theory (Corrected, enlarged republication ed.). New York: Dover Pub. p. 56. ISBN 978-0-486-67870-2. Retrieved 8 August 2012.
4. Buckley, Fred; Harary, Frank (1988), "On the euclidean dimension of a wheel", Graphs and Combinatorics, 4 (1): 23–30, doi:10.1007/BF01864150, S2CID 44596093.
5. Faudree, Ralph J.; McKay, Brendan D. (1993), "A conjecture of Erdős and the Ramsey number r(W6)", J. Combinatorial Math. and Combinatorial Comput., 13: 23–31.
Wheels, Life and Other Mathematical Amusements
Wheels, Life and Other Mathematical Amusements is a book by Martin Gardner published in 1983. The Basic Library List Committee of the Mathematical Association of America has recommended its inclusion in undergraduate mathematics libraries.[1]
Contents
Wheels, Life and Other Mathematical Amusements is a book of 22 mathematical games columns that were revised and extended after being previously published in Scientific American.[2] It is Gardner's 10th collection of columns, and includes material on Conway's Game of Life, supertasks, intransitive dice, braided polyhedra, combinatorial game theory, the Collatz conjecture, mathematical card tricks, and Diophantine equations such as Fermat's Last Theorem.[3]
Reception
Dave Langford reviewed Wheels, Life and Other Mathematical Amusements for White Dwarf #55, and stated that "Here too are revisions of the three famous pieces on Conway's solitaire game Life, which has absorbed several National Debts' worth of computer time since 1970. Fascinating."[2] The book was positively reviewed in several other mathematics and science journals.[4]
References
1. "Wheels, Life, and Other Mathematical Amusements". Mathematical Association of America. Retrieved 2020-07-09.
2. Langford, Dave (July 1984). "Critical Mass". White Dwarf. Games Workshop (55): 20.
3. Heuer, G. A. "Review of Wheels, Life and Other Mathematical Amusements". zbMATH. Zbl 0537.00002.
4. Additional reviews:
• Roberts, Sharon M. (May 1984). The Mathematics Teacher. 77 (5): 397. JSTOR 27964108.
• Golomb, Solomon W. (July–August 1984). American Scientist. 72 (4): 408. JSTOR 27852818.
• Carter, D. C. (March 1985). The Mathematical Gazette. 69 (447): 75. doi:10.2307/3616487. JSTOR 3616487.
• Klarner, David A. (April 1986). The American Mathematical Monthly.
93 (4): 321–323. doi:10.2307/2323703. JSTOR 2323703.
When Topology Meets Chemistry
When Topology Meets Chemistry: A Topological Look At Molecular Chirality is a book in chemical graph theory on the graph-theoretic analysis of chirality in molecular structures. It was written by Erica Flapan, based on a series of lectures she gave in 1996 at the Institut Henri Poincaré,[1] and was published in 2000 by the Cambridge University Press and Mathematical Association of America as the first volume in their shared Outlooks book series.[2]
When Topology Meets Chemistry: A Topological Look at Molecular Chirality
• Author: Erica Flapan
• Series: Outlooks
• Subject: Chemical graph theory and chirality
• Publisher: Cambridge University Press; Mathematical Association of America
• Publication date: 2000
Topics
A chiral molecule is a molecular structure that is different from its mirror image. This property, while seemingly abstract, can have big consequences in biochemistry, where the shape of molecules is essential to their chemical function,[3] and where a chiral molecule can have very different biological activities from its mirror-image molecule.[4] When Topology Meets Chemistry concerns the mathematical analysis of molecular chirality. The book has seven chapters, beginning with an introductory overview and ending with a chapter on the chirality of DNA molecules.[2] Other topics covered through the book include the rigid geometric chirality of tree-like molecular structures such as tartaric acid, and the stronger topological chirality of molecules that cannot be deformed into their mirror image without breaking and re-forming some of their molecular bonds. It discusses results of Flapan and Jonathan Simon on molecules with the molecular structure of Möbius ladders, according to which every embedding of a Möbius ladder with an odd number of rungs is chiral, while Möbius ladders with an even number of rungs have achiral embeddings.
It uses the symmetries of graphs, in a result that the symmetries of certain graphs can always be extended to topological symmetries of three-dimensional space, from which it follows that non-planar graphs with no self-inverse symmetry are always chiral. It discusses graphs for which every embedding is topologically knotted or linked. And it includes material on the use of knot invariants to detect topological chirality.[1][2][4][5] Audience and reception The book is self-contained, and requires only an undergraduate level of mathematics.[3][5] It includes many exercises,[2] making it suitable for use as a textbook at both the advanced undergraduate and introductory graduate levels.[1] Reviewer Buks van Rensburg describes the book's presentation as "efficient and intuitive", and recommends the book to "every mathematician or chemist interested in the notions of chirality and symmetry".[6] References 1. Keesling, J. E. (2002), "Review of When Topology Meets Chemistry", Mathematical Reviews, MR 1781912 2. Lord, Nick (November 2001), "Review of When Topology Meets Chemistry", The Mathematical Gazette, 85 (504): 550–552, doi:10.2307/3621805, JSTOR 3621805 3. Ashbacher, Charles (2005–2006), "Review of When Topology Meets Chemistry", Journal of Recreational Mathematics, 34 (1), ProQuest 89066158 4. Langton, Stacy G. (January 2001), "Review of When Topology Meets Chemistry", MAA Reviews, Mathematical Association of America 5. Whittington, Stuart (September 2001), "Review of When Topology Meets Chemistry", SIAM Review, 43 (3): 577–579, JSTOR 3649818 6. van Rensburg, Buks (May–June 2001), "Untangling molecular knots (review of When Topology Meets Chemistry)", American Scientist, 89 (3): 279–280, JSTOR 27857483
Where Mathematics Comes From
Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being (hereinafter WMCF) is a book by George Lakoff, a cognitive linguist, and Rafael E. Núñez, a psychologist. Published in 2000, WMCF seeks to found a cognitive science of mathematics, a theory of embodied mathematics based on conceptual metaphor.
Where Mathematics Comes From
• Authors: George Lakoff, Rafael E. Núñez
• Subject: Numerical cognition
• Published: 2000
• Pages: 492
• ISBN: 978-0-465-03771-1
• OCLC: 44045671
WMCF definition of mathematics
Mathematics makes up that part of the human conceptual system that is special in the following way: It is precise, consistent, stable across time and human communities, symbolizable, calculable, generalizable, universally available, consistent within each of its subject matters, and effective as a general tool for description, explanation, and prediction in a vast number of everyday activities, [ranging from] sports, to building, business, technology, and science. - WMCF, pp. 50, 377
Nikolay Lobachevsky said "There is no branch of mathematics, however abstract, which may not some day be applied to phenomena of the real world." A common type of conceptual blending process would seem to apply to the entire mathematical procession.
Human cognition and mathematics
Lakoff and Núñez's avowed purpose is to begin laying the foundations for a truly scientific understanding of mathematics, one grounded in processes common to all human cognition. They find that four distinct but related processes metaphorically structure basic arithmetic: object collection, object construction, using a measuring stick, and moving along a path. WMCF builds on earlier books by Lakoff (1987) and Lakoff and Johnson (1980, 1999), which analyze such concepts of metaphor and image schemata from second-generation cognitive science. Some of the concepts in these earlier books, such as the interesting technical ideas in Lakoff (1987), are absent from WMCF.
Lakoff and Núñez hold that mathematics results from the human cognitive apparatus and must therefore be understood in cognitive terms. WMCF advocates (and includes some examples of) a cognitive idea analysis of mathematics which analyzes mathematical ideas in terms of the human experiences, metaphors, generalizations, and other cognitive mechanisms giving rise to them. A standard mathematical education does not develop such idea analysis techniques because it does not pursue considerations of A) what structures of the mind allow it to do mathematics or B) the philosophy of mathematics. Lakoff and Núñez start by reviewing the psychological literature, concluding that human beings appear to have an innate ability, called subitizing, to count, add, and subtract up to about 4 or 5. They document this conclusion by reviewing the literature, published in recent decades, describing experiments with infant subjects. For example, infants quickly become excited or curious when presented with "impossible" situations, such as having three toys appear when only two were initially present. The authors argue that mathematics goes far beyond this very elementary level due to a large number of metaphorical constructions. For example, the Pythagorean position that all is number, and the associated crisis of confidence that came about with the discovery of the irrationality of the square root of two, arises solely from a metaphorical relation between the length of the diagonal of a square, and the possible numbers of objects. Much of WMCF deals with the important concepts of infinity and of limit processes, seeking to explain how finite humans living in a finite world could ultimately conceive of the actual infinite. Thus much of WMCF is, in effect, a study of the epistemological foundations of the calculus. Lakoff and Núñez conclude that while the potential infinite is not metaphorical, the actual infinite is. 
Moreover, they deem all manifestations of actual infinity to be instances of what they call the "Basic Metaphor of Infinity", as represented by the ever-increasing sequence 1, 2, 3, ... WMCF emphatically rejects the Platonistic philosophy of mathematics. They emphasize that all we know and can ever know is human mathematics, the mathematics arising from the human intellect. The question of whether there is a "transcendent" mathematics independent of human thought is a meaningless question, like asking if colors are transcendent of human thought—colors are only varying wavelengths of light, it is our interpretation of physical stimuli that make them colors. WMCF (p. 81) likewise criticizes the emphasis mathematicians place on the concept of closure. Lakoff and Núñez argue that the expectation of closure is an artifact of the human mind's ability to relate fundamentally different concepts via metaphor. WMCF concerns itself mainly with proposing and establishing an alternative view of mathematics, one grounding the field in the realities of human biology and experience. It is not a work of technical mathematics or philosophy. Lakoff and Núñez are not the first to argue that conventional approaches to the philosophy of mathematics are flawed. For example, they do not seem all that familiar with the content of Davis and Hersh (1981), even though the book warmly acknowledges Hersh's support. Lakoff and Núñez cite Saunders Mac Lane (the inventor, with Samuel Eilenberg, of category theory) in support of their position. Mathematics, Form and Function (1986), an overview of mathematics intended for philosophers, proposes that mathematical concepts are ultimately grounded in ordinary human activities, mostly interactions with the physical world.[1] Educators have taken some interest in what WMCF suggests about how mathematics is learned, and why students find some elementary concepts more difficult than others. 
However, even from an educational perspective, WMCF is still problematic. From the point of view of conceptual metaphor theory, metaphors reside in a different realm, the abstract, from that of the "real world", the concrete. In other words, despite their claim that mathematics is human, established mathematical knowledge — which is what we learn in school — is assumed to be and treated as abstract, completely detached from its physical origin. This account cannot explain the way learners could access such knowledge.[2] WMCF is also criticized for its monist approach. First, it ignores the fact that the sensori-motor experience upon which our linguistic structure — thus, mathematics — is assumed to be based may vary across cultures and situations.[3] Second, the mathematics WMCF is concerned with is "almost entirely... standard utterances in textbooks and curricula",[3] the most well-established body of knowledge; it neglects the dynamic and diverse nature of the history of mathematics. WMCF's logo-centric approach is another target for critics. While it is predominantly interested in the association between language and mathematics, it does not account for how non-linguistic factors contribute to the emergence of mathematical ideas (e.g. see Radford, 2009;[4] Rotman, 2008[5]).
Examples of mathematical metaphors
Conceptual metaphors described in WMCF, in addition to the Basic Metaphor of Infinity, include:
• Arithmetic is motion along a path, object collection/construction;
• Change is motion;
• Sets are containers, objects;
• Continuity is gapless;
• Mathematical systems have an "essence," namely their axiomatic algebraic structure;
• Functions are sets of ordered pairs, curves in the Cartesian plane;
• Geometric figures are objects in space;
• Logical independence is geometric orthogonality;
• Numbers are sets, object collections, physical segments, points on a line;
• Recurrence is circular.
Mathematical reasoning requires variables ranging over some universe of discourse, so that we can reason about generalities rather than merely about particulars. WMCF argues that reasoning with such variables implicitly relies on what it terms the Fundamental Metonymy of Algebra. Example of metaphorical ambiguity WMCF (p. 151) includes the following example of what the authors term "metaphorical ambiguity." Take the set $A=\{\{\emptyset \},\{\emptyset ,\{\emptyset \}\}\}.$ Then recall two bits of standard terminology from elementary set theory: 1. The recursive construction of the ordinal natural numbers, whereby 0 is $\emptyset $, and $n+1$ is $n\cup \{n\}.$ 2. The ordered pair (a,b), defined as $\{\{a\},\{a,b\}\}.$ By (1), A is the set {1,2}. But (1) and (2) together say that A is also the ordered pair (0,1). Both statements cannot be correct; the ordered pair (0,1) and the unordered pair {1,2} are fully distinct concepts. Lakoff and Johnson (1999) term this situation "metaphorically ambiguous." This simple example calls into question any Platonistic foundations for mathematics. While (1) and (2) above are admittedly canonical, especially within the consensus set theory known as the Zermelo–Fraenkel axiomatization, WMCF does not let on that they are but one of several definitions that have been proposed since the dawning of set theory. For example, Frege, Principia Mathematica, and New Foundations (a body of axiomatic set theory begun by Quine in 1937) define cardinals and ordinals as equivalence classes under the relations of equinumerosity and similarity, so that this conundrum does not arise. In Quinian set theory, A is simply an instance of the number 2. For technical reasons, defining the ordered pair as in (2) above is awkward in Quinian set theory. Two solutions have been proposed: • A variant set-theoretic definition of the ordered pair more complicated than the usual one; • Taking ordered pairs as primitive. 
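The "metaphorical ambiguity" above can be verified mechanically. Here is a small illustration of my own (not from the book), modeling hereditarily finite sets as Python frozensets, with the von Neumann ordinals and Kuratowski ordered pairs:

```python
EMPTY = frozenset()

def succ(n):
    """The von Neumann successor: n + 1 = n ∪ {n}."""
    return n | {n}

ZERO = EMPTY
ONE = succ(ZERO)                  # {∅}
TWO = succ(ONE)                   # {∅, {∅}}

def pair(a, b):
    """The Kuratowski ordered pair: (a, b) = {{a}, {a, b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

A = frozenset({ONE, TWO})         # the unordered pair {1, 2}
print(A == pair(ZERO, ONE))       # True: {1, 2} and (0, 1) are the same set
```

Under these two standard encodings, the set A = {1, 2} and the ordered pair (0, 1) are literally the same object, which is exactly the conflict the book points to.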
The Romance of Mathematics
The "Romance of Mathematics" is WMCF's light-hearted term for a perennial philosophical viewpoint about mathematics which the authors describe and then dismiss as an intellectual myth:
• Mathematics is transcendent, namely it exists independently of human beings, and structures our actual physical universe and any possible universe. Mathematics is the language of nature, and is the primary conceptual structure we would have in common with extraterrestrial aliens, if any such there be.
• Mathematical proof is the gateway to a realm of transcendent truth.
• Reasoning is logic, and logic is essentially mathematical. Hence mathematics structures all possible reasoning.
• Because mathematics exists independently of human beings, and reasoning is essentially mathematical, reason itself is disembodied. Therefore, artificial intelligence is possible, at least in principle.
It is very much an open question whether WMCF will eventually prove to be the start of a new school in the philosophy of mathematics. Hence the main value of WMCF so far may be a critical one: its critique of Platonism and romanticism in mathematics.
Critical response
Many working mathematicians resist the approach and conclusions of Lakoff and Núñez. Reviews of WMCF by mathematicians in professional journals, while often respectful of its focus on conceptual strategies and metaphors as paths for understanding mathematics, have taken exception to some of WMCF's philosophical arguments on the grounds that mathematical statements have lasting 'objective' meanings.[6] For example, Fermat's Last Theorem means exactly what it meant when Fermat initially proposed it in 1664. Other reviewers have pointed out that multiple conceptual strategies can be employed in connection with the same mathematically defined term, often by the same person (a point that is compatible with the view that we routinely understand the 'same' concept with different metaphors).
The metaphor and the conceptual strategy are not the same as the formal definition which mathematicians employ. However, WMCF points out that formal definitions are built using words and symbols that have meaning only in terms of human experience. Critiques of WMCF include the humorous: It's difficult for me to conceive of a metaphor for a real number raised to a complex power, but if there is one, I'd sure like to see it. — Joseph Auslander[7] and the physically informed: But their analysis leaves at least a couple of questions insufficiently answered. For one thing, the authors ignore the fact that brains not only observe nature, but also are part of nature. Perhaps the math that brains invent takes the form it does because math had a hand in forming the brains in the first place (through the operation of natural laws in constraining the evolution of life). Furthermore, it's one thing to fit equations to aspects of reality that are already known. It's something else for that math to tell of phenomena never previously suspected. When Paul Dirac's equations describing electrons produced more than one solution, he surmised that nature must possess other particles, now known as antimatter. But scientists did not discover such particles until after Dirac's math told him they must exist. If math is a human invention, nature seems to know what was going to be invented.[7] Lakoff made his reputation by linking linguistics to cognitive science and the analysis of metaphor. Núñez, educated in Switzerland, is a product of Jean Piaget's school of cognitive psychology as a basis for logic and mathematics. Núñez has thought much about the foundations of real analysis, the real and complex numbers, and the Basic Metaphor of Infinity. These topics, however, worthy though they be, form part of the superstructure of mathematics. 
Indeed, the authors do pay a fair bit of attention early on to logic, Boolean algebra and the Zermelo–Fraenkel axioms, even lingering a bit over group theory. But neither author is well-trained in logic, the philosophy of set theory, the axiomatic method, metamathematics, and model theory. Nor does WMCF say enough about the derivation of number systems (the Peano axioms go unmentioned), abstract algebra, equivalence and order relations, mereology, topology, and geometry. Lakoff and Núñez tend to dismiss the negative opinions mathematicians have expressed about WMCF, because their critics do not appreciate the insights of cognitive science. Lakoff and Núñez maintain that their argument can only be understood using the discoveries of recent decades about the way human brains process language and meaning. They argue that any arguments or criticisms that are not grounded in this understanding cannot address the content of the book.[8] It has been pointed out that it is not at all clear that WMCF establishes that the claim "intelligent alien life would have mathematical ability" is a myth. To do this, it would be required to show that intelligence and mathematical ability are separable, and this has not been done. On Earth, intelligence and mathematical ability seem to go hand in hand in all life-forms, as pointed out by Keith Devlin among others.[9] The authors of WMCF have not explained how this situation would (or even could) be different anywhere else. Lakoff and Núñez also appear not to appreciate the extent to which intuitionists and constructivists have anticipated their attack on the Romance of (Platonic) Mathematics. Brouwer, the founder of the intuitionist/constructivist point of view, in his dissertation On the Foundation of Mathematics, argued that mathematics was a mental construction, a free creation of the mind and totally independent of logic and language. 
He goes on to upbraid the formalists for building verbal structures that are studied without intuitive interpretation. Symbolic language should not be confused with mathematics; it reflects, but does not contain, mathematical reality.[10] Summing up WMCF (pp. 378–79) concludes with some key points, a number of which follow. Mathematics arises from our bodies and brains, our everyday experiences, and the concerns of human societies and cultures. It is: • The result of normal adult cognitive capacities, in particular the capacity for conceptual metaphor, and as such is a human universal. The ability to construct conceptual metaphors is neurologically based, and enables humans to reason about one domain using the language and concepts of another domain. Conceptual metaphor is both what enabled mathematics to grow out of everyday activities, and what enables mathematics to grow by a continual process of analogy and abstraction; • Symbolic, thereby enormously facilitating precise calculation; • Not transcendent, but the result of human evolution and culture, to which it owes its effectiveness. During experience of the world a connection to mathematical ideas is going on within the human mind; • A system of human concepts making extraordinary use of the ordinary tools of human cognition; • An open-ended creation of human beings, who remain responsible for maintaining and extending it; • One of the greatest products of the collective human imagination, and a magnificent example of the beauty, richness, complexity, diversity, and importance of human ideas. The cognitive approach to formal systems, as described and implemented in WMCF, need not be confined to mathematics, but should also prove fruitful when applied to formal logic, and to formal philosophy such as Edward Zalta's theory of abstract objects. Lakoff and Johnson (1999) fruitfully employ the cognitive approach to rethink a good deal of the philosophy of mind, epistemology, metaphysics, and the history of ideas. 
See also • Abstract object • Cognitive science • Cognitive science of mathematics • Conceptual metaphor • Embodied philosophy • Foundations of mathematics • From Action to Mathematics per Mac Lane • Metaphor • Philosophy of mathematics • The Unreasonable Effectiveness of Mathematics in the Natural Sciences Footnotes 1. See especially the table in Mac Lane (1986), p. 35. 2. de Freitas, Elizabeth; Sinclair, Natalie (2014). Mathematics and the body : Material entanglements in the classroom. NY, USA: Cambridge University Press. 3. Schiralli, Martin; Sinclair, Natalie (2003). "A constructive response to 'Where mathematics comes from'". Educational Studies in Mathematics. 52: 79–91. doi:10.1023/A:1023673520853. S2CID 12546421. 4. Radford, Luis (2009). "Why do gestures matter? Sensuous cognition and the palpability of mathematical meanings". Educational Studies in Mathematics. 70 (2): 111–126. doi:10.1007/s10649-008-9127-3. S2CID 73624789. 5. Rotman, Brian (2008). Becoming beside ourselves : the alphabet, ghosts, and distributed human being. Durham: Duke University Press. 6. "Where Mathematics Comes From". University of Fribourg. Archived from the original on July 16, 2006. 7. What is the Nature of Mathematics?, Michael Sutcliffe, referenced February 1, 2011 8. See http://www.unifr.ch/perso/nunezr/warning.html Archived June 13, 2002, at the Wayback Machine 9. Devlin, Keith (2005), The Math Instinct / Why You're a Mathematical Genius (Along with Lobsters, Birds, Cats and Dogs), Thunder's Mouth Press, ISBN 1-56025-839-X 10. Burton, David M. (2011), The History of Mathematics / An Introduction (7th ed.), McGraw-Hill, p. 712, ISBN 978-0-07-338315-6 References • Davis, Philip J., and Reuben Hersh, 1999 (1981). The Mathematical Experience. Mariner Books. First published by Houghton Mifflin. • George Lakoff, 1987. Women, Fire and Dangerous Things. Univ. of Chicago Press. • ------ and Mark Johnson, 1999. Philosophy in the Flesh. Basic Books. 
• ------ and Rafael Núñez, 2000, Where Mathematics Comes From. Basic Books. ISBN 0-465-03770-4 • John Randolph Lucas, 2000. The Conceptual Roots of Mathematics. Routledge. • Saunders Mac Lane, 1986. Mathematics: Form and Function. Springer Verlag. External links • WMCF web site. • Reviews of WMCF. • Joseph Auslander in American Scientist; • Bonnie Gold, MAA Reviews 2001 • Lakoff's response to Gold's MAA review.
Wikipedia
Whewell equation The Whewell equation of a plane curve is an equation that relates the tangential angle (φ) with arclength (s), where the tangential angle is the angle between the tangent to the curve and the x-axis, and the arc length is the distance along the curve from a fixed point. These quantities do not depend on the coordinate system used except for the choice of the direction of the x-axis, so this is an intrinsic equation of the curve, or, less precisely, the intrinsic equation. If a curve is obtained from another by translation then their Whewell equations will be the same. When the relation is a function, so that tangential angle is given as a function of arclength, certain properties become easy to manipulate. In particular, the derivative of the tangential angle with respect to arclength is equal to the curvature. Thus, taking the derivative of the Whewell equation yields a Cesàro equation for the same curve. The concept is named after William Whewell, who introduced it in 1849, in a paper in the Cambridge Philosophical Transactions. In his conception, the angle used is the deviation from the direction of the curve at some fixed starting point, and this convention is sometimes used by other authors as well. This is equivalent to the definition given here by the addition of a constant to the angle or by rotating the curve. 
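Since the tangent direction at arclength s is (cos φ(s), sin φ(s)), a curve can be recovered from its Whewell equation by numerical integration. A minimal sketch in Python (the helper name curve_from_whewell and the step count are our choices, not standard notation), using the circle s = aφ:

```python
import numpy as np

def curve_from_whewell(phi_of_s, s_max, n=20_000):
    """Recover (x, y) from a Whewell equation phi(s) by integrating
    dx/ds = cos(phi(s)), dy/ds = sin(phi(s)) with the trapezoidal rule."""
    s = np.linspace(0.0, s_max, n)
    phi = phi_of_s(s)
    ds = np.diff(s)
    x = np.concatenate(([0.0], np.cumsum(ds * (np.cos(phi[:-1]) + np.cos(phi[1:])) / 2)))
    y = np.concatenate(([0.0], np.cumsum(ds * (np.sin(phi[:-1]) + np.sin(phi[1:])) / 2)))
    return x, y

# Circle: Whewell equation s = a*phi, i.e. phi(s) = s/a, curvature dphi/ds = 1/a.
a = 2.0
x, y = curve_from_whewell(lambda s: s / a, s_max=2 * np.pi * a)
# Starting at the origin with a horizontal tangent, the result is the circle
# of radius a centred at (0, a).
radii = np.hypot(x, y - a)
print(radii.min(), radii.max())   # both ≈ 2.0
```

Differentiating the same data recovers the constant curvature 1/a, the Cesàro equation of the circle.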
Properties If the curve is given parametrically in terms of the arc length s, then φ is determined by ${\frac {d{\vec {r}}}{ds}}={\begin{pmatrix}{dx}/{ds}\\{dy}/{ds}\end{pmatrix}}={\begin{pmatrix}\cos \varphi \\\sin \varphi \end{pmatrix}}\quad {\text{since}}\quad \left|{\frac {d{\vec {r}}}{ds}}\right|=1,$ which implies ${\frac {dy}{dx}}=\tan \varphi .$ Parametric equations for the curve can be obtained by integrating: ${\begin{aligned}x&=\int \cos \varphi \,ds\\y&=\int \sin \varphi \,ds\end{aligned}}$ Since the curvature is defined by $\kappa ={\frac {d\varphi }{ds}},$ the Cesàro equation is easily obtained by differentiating the Whewell equation. Examples • Line: $\varphi =c$ • Circle: $s=a\varphi $ • Logarithmic spiral: $s={\frac {ae^{\varphi \tan \alpha }}{\sin \alpha }}$ • Catenary: $s=a\tan \varphi $ • Tautochrone: $s=a\sin \varphi $ References • Whewell, W. Of the Intrinsic Equation of a Curve, and its Application. Cambridge Philosophical Transactions, Vol. VIII, pp. 659–671, 1849. Google Books • Todhunter, Isaac. William Whewell, D.D., An Account of His Writings, with Selections from His Literary and Scientific Correspondence. Vol. I. Macmillan and Co., 1876, London. Section 56: p. 317. • J. Dennis Lawrence (1972). A catalog of special plane curves. Dover Publications. pp. 1–5. ISBN 0-486-60288-5. • Yates, R. C.: A Handbook on Curves and Their Properties, J. W. Edwards (1952), "Intrinsic Equations" pp. 124–125 External links • Weisstein, Eric W. "Whewell Equation". MathWorld.
Order-5 truncated pentagonal hexecontahedron The order-5 truncated pentagonal hexecontahedron is a convex polyhedron with 72 faces (60 hexagons and 12 pentagons), 210 edges, and 140 vertices. Its dual is the pentakis snub dodecahedron. Order-5 truncated pentagonal hexecontahedron Conway t5gD or wD Goldberg {5+,3}2,1 Fullerene C140 Faces 72: 60 hexagons, 12 pentagons Edges 210 Vertices 140 Symmetry group Icosahedral (I) Dual polyhedron Pentakis snub dodecahedron Properties convex, chiral It is the Goldberg polyhedron {5+,3}2,1 in the icosahedral family, with chiral symmetry. Starting from any pentagon, the nearest pentagons are reached by stepping two hexagons away and then turning to take one more step, as the Goldberg indices (2,1) indicate. It is a fullerene C140.[1] Construction It is explicitly called a pentatruncated pentagonal hexecontahedron, since only the valence-5 vertices of the pentagonal hexecontahedron are truncated.[2] Its topology can be constructed in Conway polyhedron notation as t5gD, or more simply as wD, a whirled dodecahedron: the whirl operation reduces the original pentagonal faces and adds 5 distorted hexagons around each, in clockwise or counter-clockwise forms. A flat construction can be drawn before the geometry is adjusted into a more spherical form. The snub can create a (5,3) geodesic polyhedron by k5k6. Related polyhedra The whirled dodecahedron creates more polyhedra by basic Conway polyhedron notation. The zip whirled dodecahedron makes a chamfered truncated icosahedron, and Goldberg (4,1). Whirl applied twice produces Goldberg (5,3), and applied twice with reverse orientations produces Goldberg (7,0).
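These face, edge, and vertex counts follow from the triangulation number T = m^2 + mn + n^2 of a Goldberg polyhedron GP(m, n), which has 20T vertices, 30T edges, and 10T + 2 faces (12 pentagons and 10T - 10 hexagons). A quick check (the helper name goldberg_counts is ours):

```python
# Combinatorics of an icosahedral Goldberg polyhedron GP(m, n), used here to
# verify the figures quoted for GP(2, 1).
def goldberg_counts(m, n):
    t = m * m + m * n + n * n       # triangulation number T
    vertices = 20 * t               # trivalent vertices (atoms of fullerene C_20T)
    edges = 30 * t
    faces = 10 * t + 2              # 12 pentagons and 10T - 10 hexagons
    return vertices, edges, faces

v, e, f = goldberg_counts(2, 1)     # T = 7
print(v, e, f)                      # 140 210 72
assert v - e + f == 2               # Euler's formula V - E + F = 2
```

For GP(2, 1) the 140 vertices match the C140 fullerene cage.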
Whirled dodecahedron polyhedra Applying basic Conway operators to the whirled dodecahedron seed wD = G(2,1) gives: • ambo: awD • truncate: twD • zip: zwD = G(4,1) • expand: ewD • bevel: bwD • snub: swD • chamfer: cwD = G(4,2) • whirl: wwD = G(5,3) • whirl-reverse: wrwD = G(7,0) • dual: dwD • join: jwD • needle: nwD • kis: kwD • ortho: owD • medial: mwD • gyro: gwD • dual chamfer: dcwD • dual whirl: dwwD • dual whirl-reverse: dwrwD See also • Truncated pentagonal icositetrahedron t4gC References 1. Heinl, Sebastian (2015). "Giant Spherical Cluster with I-C140 Fullerene Topology". Angewandte Chemie International Edition. 54 (45): 13431–13435. doi:10.1002/anie.201505516. PMC 4691335. PMID 26411255. 2. Shaping Space: Exploring Polyhedra in Nature, Art, and the Geometrical Imagination, 2013, Chapter 9 Goldberg polyhedra • Goldberg, Michael (1937). "A class of multi-symmetric polyhedra". Tohoku Mathematical Journal. 43: 104–108. • Hart, George (2012). "Goldberg Polyhedra". In Senechal, Marjorie (ed.). Shaping Space (2nd ed.). Springer. pp. 125–138. doi:10.1007/978-0-387-92714-5_9. ISBN 978-0-387-92713-8. • Hart, George (June 18, 2013). "Mathematical Impressions: Goldberg Polyhedra". Simons Science News. • Fourth class of convex equilateral polyhedron with polyhedral symmetry related to fullerenes and viruses, Stan Schein and James Maurice Gaye, PNAS, Early Edition doi: 10.1073/pnas.1310939111 External links • VRML polyhedral generator Try "t5gI" (Conway polyhedron notation)
Stephen Whisson Stephen Whisson (1710[1] – 3 November 1783) was a tutor at Trinity College, Cambridge, United Kingdom, and coached 72 students in the 1744–1754 period. Stephen Whisson Born 1710, St Neots, Huntingdonshire, England Died 3 November 1783, Cambridge, England Alma mater Trinity College, Cambridge Scientific career Fields Mathematician Institutions Trinity College, Cambridge Academic advisors Walter Taylor Notable students Thomas Postlethwaite Biography Whisson was from St Neots, Huntingdonshire, and was the son of a publican. He came from Wakefield School, Yorkshire. On 29 November 1734, he was admitted as a sizar at Trinity College, Cambridge, matriculating in 1735 and becoming a scholar in 1738.[2] Timeline • 1738/9 BA • 1742 MA • 1761 BD • 1741 Fellow of Trinity • 1744 Taxor • 1751–83 Cambridge University librarian • 1752–80 Senior bursar • 1757–58 Senior proctor[3] • 1739 ordained deacon • 1741 priest • 1746 – c. 1766 Vicar of Babraham, Cambridgeshire. • 1753–71 Rector of Shimpling, Norfolk. • 1771–83 Rector of Orwell, Cambridgeshire. • 1783 Buried in Trinity Chapel. Notes 1. Toshiharu Taura, Yukari Nagai (eds), Design Creativity 2010, Springer, 2011, p. 53. 2. "Whisson, Stephen (WHS734S)". A Cambridge Alumni Database. University of Cambridge. 3. "Records relating to the administrative and academic officers of the University". Cambridge University Archives. 1757. Retrieved 22 March 2009. External links • Stephen Whisson at the Mathematics Genealogy Project Authority control: Academics • Mathematics Genealogy Project
White noise analysis In probability theory, a branch of mathematics, white noise analysis, otherwise known as Hida calculus, is a framework for infinite-dimensional and stochastic calculus, based on the Gaussian white noise probability space, to be compared with Malliavin calculus based on the Wiener process.[1] It was initiated by Takeyuki Hida in his 1975 Carleton Mathematical Lecture Notes.[2] The term white noise was first used for signals with a flat spectrum. White noise measure The white noise probability measure $\mu $ on the space $S'(\mathbb {R} )$ of tempered distributions has the characteristic function[3] $C(f)=\int _{S'(\mathbb {R} )}\exp \left(i\left\langle \omega ,f\right\rangle \right)\,d\mu (\omega )=\exp \left(-{\frac {1}{2}}\int _{\mathbb {R} }f^{2}(t)\,dt\right),\quad f\in S(\mathbb {R} ).$ Brownian motion in white noise analysis A version of Wiener's Brownian motion $B(t)$ is obtained by the dual pairing $B(t)=\langle \omega ,1\!\!1_{[0,t)}\rangle ,$ where $1\!\!1_{[0,t)}$ is the indicator function of the interval $[0,t)$. Informally $B(t)=\int _{0}^{t}\omega (s)\,ds$ and in a generalized sense $\omega (t)={\frac {dB(t)}{dt}}.$ Hilbert space Fundamental to white noise analysis is the Hilbert space $(L^{2}):=L^{2}\left(S'(\mathbb {R} ),\mu \right),$ generalizing the Hilbert spaces $L^{2}(\mathbb {R} ^{n},e^{-{\frac {1}{2}}|x|^{2}}d^{n}x)$ to infinite dimension.
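On a finite grid the characteristic function above can be checked by Monte Carlo, modelling the pairing ⟨ω, f⟩ as a sum of independent Gaussian increments; the grid size, sample size, and test function below are arbitrary choices for the sketch:

```python
import numpy as np

# Monte Carlo check of the white noise characteristic function on a grid:
# discretizing <omega, f> as sum_i f(t_i) dB_i with dB_i ~ N(0, dt) should give
# E[exp(i <omega, f>)] ≈ exp(-1/2 ∫ f(t)² dt).
rng = np.random.default_rng(0)
n_paths, n_steps, T = 50_000, 200, 1.0
dt = T / n_steps
t = (np.arange(n_steps) + 0.5) * dt

f = np.sin(2 * np.pi * t)                              # a test function on [0, 1]
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))  # white-noise increments

pairing = dB @ f                                       # samples of <omega, f>
empirical = np.mean(np.exp(1j * pairing))
exact = np.exp(-0.5 * np.sum(f ** 2) * dt)             # ∫ sin²(2πt) dt = 1/2 on [0, 1]
print(abs(empirical - exact))                          # small Monte Carlo error
```

The cumulative sums of the same increments give a discrete version of the Brownian motion B(t) = ⟨ω, 1_[0,t)⟩.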
Wick polynomials An orthonormal basis in this Hilbert space, generalizing that of Hermite polynomials, is given by the so-called "Wick", or "normal ordered" polynomials $\left\langle {:\omega ^{n}:},f_{n}\right\rangle $ with ${:\omega ^{n}:}\in S'(\mathbb {R} ^{n})$ and $f_{n}\in S(\mathbb {R} ^{n})$ with normalization $\int _{S'(\mathbb {R} )}\left\langle :\omega ^{n}:,f_{n}\right\rangle ^{2}\,d\mu (\omega )=n!\int f_{n}^{2}(x_{1},\ldots ,x_{n})\,d^{n}x,$ entailing the Itô–Segal–Wiener isomorphism of the white noise Hilbert space $(L^{2})$ with Fock space: $L^{2}\left(S'(\mathbb {R} ),\mu \right)\simeq \bigoplus \limits _{n=0}^{\infty }\operatorname {Sym} L^{2}(\mathbb {R} ^{n},n!\,d^{n}x).$ The "chaos expansion" $\varphi (\omega )=\sum _{n}\left\langle :\omega ^{n}:,f_{n}\right\rangle $ in terms of Wick polynomials corresponds to the expansion in terms of multiple Wiener integrals. Brownian martingales $M_{t}(\omega )$ are characterized by kernel functions $f_{n}$ depending on $t$ only by a "cut off": $f_{n}(x_{1},\ldots ,x_{n};t)={\begin{cases}f_{n}(x_{1},\ldots ,x_{n})&{\text{if }}x_{1},\ldots ,x_{n}\leq t,\\0&{\text{otherwise}}.\end{cases}}$ Gelfand triples Suitable restrictions of the kernel functions $\varphi _{n}$ to be smooth and rapidly decreasing in $x$ and $n$ give rise to spaces of white noise test functions $\varphi $, and, by duality, to spaces of generalized functions $\Psi $ of white noise, with $\left\langle \!\left\langle \Psi ,\varphi \right\rangle \!\right\rangle :=\sum _{n}n!\left\langle \psi _{n},\varphi _{n}\right\rangle $ generalizing the scalar product in $(L^{2})$.
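In a single Gaussian variable, the Wick polynomials reduce to the probabilists' Hermite polynomials He_n, whose squared norm under N(0,1) is n!, mirroring the factor n! in the normalization above. A one-dimensional numerical sketch (an analogy only, not part of the infinite-dimensional theory) using Gauss–Hermite quadrature:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss quadrature for the weight e^{-x^2/2}; dividing by sqrt(2*pi)
# normalizes it to the standard Gaussian measure N(0, 1).
nodes, weights = hermegauss(60)
weights = weights / math.sqrt(2 * math.pi)

def gaussian_inner(m, n):
    """∫ He_m(x) He_n(x) dN(0,1), computed by quadrature."""
    vm = hermeval(nodes, [0.0] * m + [1.0])   # He_m at the quadrature nodes
    vn = hermeval(nodes, [0.0] * n + [1.0])
    return float(np.sum(weights * vm * vn))

print(gaussian_inner(3, 3))   # ≈ 6.0  (= 3!)
print(gaussian_inner(2, 3))   # ≈ 0    (orthogonality)
```

The quadrature is exact for these polynomial integrands, so the n! norms and the orthogonality come out to machine precision.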
Examples are the Hida triple, with $\varphi \in (S)\subset (L^{2})\subset (S)^{\ast }\ni \Psi $ or the more general Kondratiev triples.[4] T- and S-transform Using the white noise test functions $\varphi _{f}(\omega ):=\exp \left(i\left\langle \omega ,f\right\rangle \right)\in (S),\quad f\in S(\mathbb {R} )$ one introduces the "T-transform" of white noise distributions $\Psi $ by setting $T\Psi (f):=\left\langle \!\left\langle \Psi ,\varphi _{f}\right\rangle \!\right\rangle .$ Likewise, using $\phi _{f}(\omega ):=\exp \left(-{\frac {1}{2}}\int f^{2}(t)\,dt\right)\exp \left(-\left\langle \omega ,f\right\rangle \right)\in (S)$ one defines the "S-transform" of white noise distributions $\Psi $ by $S\Psi (f):=\left\langle \!\left\langle \Psi ,\phi _{f}\right\rangle \!\right\rangle ,\quad f\in S(\mathbb {R} ).$ It is worth noting that for generalized functions $\Psi $ with kernels $\psi _{n}$ as above, the S-transform is just $S\Psi (f)=\sum n!\left\langle \psi _{n},f^{\otimes n}\right\rangle .$ Depending on the choice of Gelfand triple, the white noise test functions and distributions are characterized by corresponding growth and analyticity properties of their S- or T-transforms.[3][4] Characterization theorem The function $G(f)$ is the T-transform of a (unique) Hida distribution $\Psi $ iff for all $f_{1},f_{2}\in S(\mathbb {R} ),$ the function $z\mapsto G(zf_{1}+f_{2})$ is analytic in the whole complex plane and of second order exponential growth, i.e.
$\left\vert G(zf)\right\vert \leq ae^{b|z|^{2}K(f,f)},$ where $K$ is some continuous quadratic form on $S'(\mathbb {R} )\times S'(\mathbb {R} )$.[3][5][6] The same is true for S-transforms, and similar characterization theorems hold for the more general Kondratiev distributions.[4] Calculus For test functions $\varphi \in (S)$, partial, directional derivatives exist: $\partial _{\eta }\varphi (\omega ):=\lim _{\varepsilon \rightarrow 0}{\frac {\varphi (\omega +\varepsilon \eta )-\varphi (\omega )}{\varepsilon }}$ where $\omega $ may be varied by any generalized function $\eta $. In particular, for the Dirac distribution $\eta =\delta _{t}$ one defines the "Hida derivative", denoting $\partial _{t}\varphi (\omega ):=\lim _{\varepsilon \rightarrow 0}{\frac {\varphi (\omega +\varepsilon \delta _{t})-\varphi (\omega )}{\varepsilon }}.$ Gaussian integration by parts yields the dual operator on distribution space $\partial _{t}^{\ast }=-\partial _{t}+\omega (t).$ An infinite-dimensional gradient $\nabla :(S)\rightarrow L^{2}(\mathbb {R} ,dt)\otimes (S)$ is given by $\nabla F(t,\omega )=\partial _{t}F(\omega ).$ The Laplacian $\triangle $ ("Laplace–Beltrami operator") with $-\triangle =\int dt\;\partial _{t}^{\ast }\partial _{t}\geq 0$ plays an important role in infinite-dimensional analysis and is the image of the Fock space number operator. Stochastic integrals A stochastic integral, the "Hitsuda–Skorohod integral", can be defined for suitable families $\Psi (t)$ of white noise distributions as a Pettis integral $\int \partial _{t}^{\ast }\Psi (t)\,dt\in (S)^{\ast },$ generalizing the Itô integral beyond adapted integrands.
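The duality behind the dual operator above is the infinite-dimensional form of Gaussian integration by parts, which can already be checked in one variable: with μ = N(0,1), the adjoint of d/dx in L²(μ) is -d/dx + x. A numerical sketch (the test functions are our choice):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# One-dimensional analogue of the duality  ∂*_t = -∂_t + ω(t):
# with the standard Gaussian measure on R,
#   ∫ f'(x) g(x) dN(0,1) = ∫ f(x) (-g'(x) + x g(x)) dN(0,1).
nodes, weights = hermegauss(80)
weights = weights / np.sqrt(2 * np.pi)   # normalize e^{-x^2/2} dx to N(0, 1)

f, df = np.sin, np.cos
g = lambda x: x ** 3
dg = lambda x: 3 * x ** 2

lhs = np.sum(weights * df(nodes) * g(nodes))
rhs = np.sum(weights * f(nodes) * (-dg(nodes) + nodes * g(nodes)))
print(lhs, rhs)   # the two sides agree
```

The identity holds for any sufficiently smooth, slowly growing pair f, g; the quadrature merely evaluates both sides of the same integration-by-parts formula.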
Applications In general terms, there are two features of white noise analysis which have been prominent in applications.[7][8][9][10][11] First, white noise is a generalized stochastic process with independent values at each time.[12] Hence it plays the role of a generalized system of independent coordinates, in the sense that in various contexts it has been fruitful to express more general processes occurring e.g. in engineering or mathematical finance, in terms of white noise.[13][9][10] Second, the characterization theorem given above allows various heuristic expressions to be identified as generalized functions of white noise. This is particularly effective in attributing a well-defined mathematical meaning to so-called "functional integrals". Feynman integrals in particular have been given rigorous meaning for large classes of quantum dynamical models. Noncommutative extensions of the theory have grown under the name of quantum white noise, and finally, the rotational invariance of the white noise characteristic function provides a framework for representations of infinite-dimensional rotation groups. References 1. Huang, Zhi-yuan; Yan, Jia-An (2000). Introduction to Infinite-Dimensional Stochastic Analysis. Dordrecht: Springer Netherlands. ISBN 9789401141086. OCLC 851373497. 2. Hida, Takeyuki (1976). "Analysis of Brownian functionals". Stochastic Systems: Modeling, Identification and Optimization, I. Mathematical Programming Studies. Vol. 5. Springer, Berlin, Heidelberg. pp. 53–59. doi:10.1007/bfb0120763. ISBN 978-3-642-00783-5. 3. Hida, Takeyuki; Kuo, Hui-Hsiung; Potthoff, Jürgen; Streit, Ludwig (1993). White Noise: An Infinite Dimensional Calculus. doi:10.1007/978-94-017-3680-0. ISBN 978-90-481-4260-6. 4. Kondrat'ev, Yu.G.; Streit, L. (1993). "Spaces of White Noise distributions: constructions, descriptions, applications. I". Reports on Mathematical Physics. 33 (3): 341–366. Bibcode:1993RpMP...33..341K. doi:10.1016/0034-4877(93)90003-w. 5.
Kuo, H.-H.; Potthoff, J.; Streit, L. (1991). "A characterization of white noise test functionals". Nagoya Mathematical Journal. 121: 185–194. doi:10.1017/S0027763000003469. ISSN 0027-7630. 6. Kondratiev, Yu.G.; Leukert, P.; Potthoff, J.; Streit, L.; Westerkamp, W. (1996). "Generalized Functionals in Gaussian Spaces: The Characterization Theorem Revisited". Journal of Functional Analysis. 141 (2): 301–318. arXiv:math/0303054. doi:10.1006/jfan.1996.0130. S2CID 58889052. 7. Accardi, Luigi; Chen, Louis Hsiao Yun; Ohya, Masanori; Hida, Takeyuki; Si, Si (June 2017). White Noise Analysis and Quantum Information. Singapore. ISBN 9789813225459. OCLC 1007244903. 8. Bernido, Christopher C.; Carpio-Bernido, M. Victoria (2015). Methods and Applications of White Noise Analysis in Interdisciplinary Sciences. Hackensack, New Jersey. ISBN 9789814569118. OCLC 884440293. 9. Holden, Helge; et al. (2010). Stochastic Partial Differential Equations: A Modeling, White Noise Functional Approach (2nd ed.). New York: Springer. ISBN 978-0-387-89488-1. OCLC 663094108. 10. Hida, Takeyuki; Streit, Ludwig (2017). Let Us Use White Noise. New Jersey. ISBN 9789813220935. OCLC 971020065. 11. Hida, Takeyuki (2005). Stochastic Analysis: Classical and Quantum. doi:10.1142/5962. ISBN 978-981-256-526-6. 12. Gelfand, Izrail Moiseevich; Vilenkin, Naum Yakovlevich (1964). Generalized Functions. Volume 4, Applications of Harmonic Analysis. Translated by Feinstein, Amiel. New York: Academic Press. ISBN 978-0-12-279504-6. OCLC 490085153. 13.
Biagini, Francesca; Øksendal, Bernt; Sulem, Agnès; Wallner, Naomi (2004-01-08). "An introduction to white–noise theory and Malliavin calculus for fractional Brownian motion". Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 460 (2041): 347–372. Bibcode:2004RSPSA.460..347B. doi:10.1098/rspa.2003.1246. hdl:10852/10633. ISSN 1364-5021. S2CID 120225816.
Nuclear space In mathematics, nuclear spaces are topological vector spaces that can be viewed as a generalization of finite dimensional Euclidean spaces and share many of their desirable properties. Nuclear spaces are however quite different from Hilbert spaces, another generalization of finite dimensional Euclidean spaces. They were introduced by Alexander Grothendieck. The topology on nuclear spaces can be defined by a family of seminorms whose unit balls decrease rapidly in size. Vector spaces whose elements are "smooth" in some sense tend to be nuclear spaces; a typical example of a nuclear space is the set of smooth functions on a compact manifold. All finite-dimensional vector spaces are nuclear. There are no Banach spaces that are nuclear, except for the finite-dimensional ones. In practice a sort of converse to this is often true: if a "naturally occurring" topological vector space is not a Banach space, then there is a good chance that it is nuclear. Original motivation: The Schwartz kernel theorem See also: Distribution (mathematics) § Topology on the space of distributions, and Schwartz kernel theorem Much of the theory of nuclear spaces was developed by Alexander Grothendieck while investigating the Schwartz kernel theorem and published in (Grothendieck 1955). We now describe this motivation. 
For any open subsets $\Omega _{1}\subseteq \mathbb {R} ^{m}$ and $\Omega _{2}\subseteq \mathbb {R} ^{n},$ the canonical map ${\mathcal {D}}^{\prime }\left(\Omega _{1}\times \Omega _{2}\right)\to L_{b}\left(C_{c}^{\infty }\left(\Omega _{2}\right);{\mathcal {D}}^{\prime }\left(\Omega _{1}\right)\right)$ is an isomorphism of TVSs (where $L_{b}\left(C_{c}^{\infty }\left(\Omega _{2}\right);{\mathcal {D}}^{\prime }\left(\Omega _{1}\right)\right)$ has the topology of uniform convergence on bounded subsets) and furthermore, both of these spaces are canonically TVS-isomorphic to ${\mathcal {D}}^{\prime }\left(\Omega _{1}\right){\widehat {\otimes }}{\mathcal {D}}^{\prime }\left(\Omega _{2}\right)$ (where since ${\mathcal {D}}^{\prime }\left(\Omega _{1}\right)$ is nuclear, this tensor product is simultaneously the injective tensor product and projective tensor product).[1] In short, the Schwartz kernel theorem states that: ${\mathcal {D}}^{\prime }\left(\Omega _{1}\times \Omega _{2}\right)\cong {\mathcal {D}}^{\prime }\left(\Omega _{1}\right){\widehat {\otimes }}{\mathcal {D}}^{\prime }\left(\Omega _{2}\right)\cong L_{b}\left(C_{c}^{\infty }\left(\Omega _{2}\right);{\mathcal {D}}^{\prime }\left(\Omega _{1}\right)\right)$ where all of these TVS-isomorphisms are canonical. This result is false if one replaces the space $C_{c}^{\infty }$ with $L^{2}$ (which is a reflexive space that is even isomorphic to its own strong dual space) and replaces ${\mathcal {D}}^{\prime }$ with the dual of this $L^{2}$ space.[2] Why does such a nice result hold for the space of distributions and test functions but not for the Hilbert space $L^{2}$ (which is generally considered one of the "nicest" TVSs)? This question led Grothendieck to discover nuclear spaces, nuclear maps, and the injective tensor product. Motivations from geometry Another set of motivating examples comes directly from geometry and smooth manifold theory ([3], appendix 2).
Given smooth manifolds $M,N$ and a locally convex Hausdorff topological vector space $F,$ there are the following isomorphisms of nuclear spaces: • $C^{\infty }(M)\otimes C^{\infty }(N)\cong C^{\infty }(M\times N)$ • $C^{\infty }(M)\otimes F\cong \{f:M\to F:f{\text{ is smooth }}\}$ Using standard tensor products for $C^{\infty }(\mathbb {R} )$ as a vector space, the function $\sin(x+y):\mathbb {R} ^{2}\to \mathbb {R} $ cannot be expressed as a product $f\otimes g$ for $f,g\in C^{\infty }(\mathbb {R} ).$ This gives an example demonstrating there is a strict inclusion of sets $C^{\infty }(\mathbb {R} )\otimes C^{\infty }(\mathbb {R} )\subset C^{\infty }(\mathbb {R} ^{2}).$ Definition This section lists some of the more common definitions of a nuclear space. The definitions below are all equivalent. Note that some authors use a more restrictive definition of a nuclear space, by adding the condition that the space should also be a Fréchet space. (This means that the space is complete and the topology is given by a countable family of seminorms.) The following definition was used by Grothendieck to define nuclear spaces.[4] Definition 0: Let $X$ be a locally convex topological vector space. Then $X$ is nuclear if for any locally convex space $Y,$ the canonical vector space embedding $X\otimes _{\pi }Y\to {\mathcal {B}}_{\epsilon }\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime }\right)$ is an embedding of TVSs whose image is dense in the codomain (where the domain $X\otimes _{\pi }Y$ is the projective tensor product and the codomain is the space of all separately continuous bilinear forms on $X_{\sigma }^{\prime }\times Y_{\sigma }^{\prime }$ endowed with the topology of uniform convergence on equicontinuous subsets). We start by recalling some background. A locally convex topological vector space $X$ has a topology that is defined by some family of seminorms.
For any seminorm, the unit ball is a closed convex symmetric neighborhood of the origin, and conversely any closed convex symmetric neighborhood of 0 is the unit ball of some seminorm. (For complex vector spaces, the condition "symmetric" should be replaced by "balanced".) If $p$ is a seminorm on $X,$ then $X_{p}$ denotes the Banach space given by completing the auxiliary normed space using the seminorm $p.$ There is a natural map $X\to X_{p}$ (not necessarily injective). If $q$ is another seminorm, larger than $p$ (pointwise as a function on $X$), then there is a natural map from $X_{q}$ to $X_{p}$ such that the first map factors as $X\to X_{q}\to X_{p}.$ These maps are always continuous. The space $X$ is nuclear when a stronger condition holds, namely that these maps are nuclear operators. The condition of being a nuclear operator is subtle, and more details are available in the corresponding article. Definition 1: A nuclear space is a locally convex topological vector space such that for any seminorm $p$ we can find a larger seminorm $q$ so that the natural map $X_{q}\to X_{p}$ is nuclear. Informally, this means that whenever we are given the unit ball of some seminorm, we can find a "much smaller" unit ball of another seminorm inside it, or that any neighborhood of 0 contains a "much smaller" neighborhood. It is not necessary to check this condition for all seminorms $p$; it is sufficient to check it for a set of seminorms that generate the topology, in other words, a set of seminorms that are a subbase for the topology. Instead of using arbitrary Banach spaces and nuclear operators, we can give a definition in terms of Hilbert spaces and trace class operators, which are easier to understand. (On Hilbert spaces nuclear operators are often called trace class operators.) 
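For a concrete picture of these "rapidly shrinking" unit balls, consider the space s of rapidly decreasing sequences, topologized by the Hilbertian seminorms p_k(x)² = Σ n^{2k}|x_n|². The canonical map between the associated completions is diagonal, and one can check numerically when it is Hilbert–Schmidt. A sketch (the example and the truncation length are our choices):

```python
import math

# The space s of rapidly decreasing sequences carries the seminorms
# p_k(x)^2 = sum_n n^(2k) |x_n|^2.  For q > p the canonical map X_q -> X_p is
# diagonal with singular values n^(p-q): it is Hilbert-Schmidt iff
# sum_n n^(2(p-q)) converges (q >= p+1 for integer indices), and trace class
# iff sum_n n^(p-q) converges (q >= p+2).
def hs_norm_sq(p, q, n_terms=100_000):
    """Truncated squared Hilbert-Schmidt norm of the canonical map X_q -> X_p."""
    return sum(n ** (2 * (p - q)) for n in range(1, n_terms + 1))

print(hs_norm_sq(0, 1), math.pi ** 2 / 6)  # converges to pi^2/6: Hilbert-Schmidt
print(hs_norm_sq(0, 0))                    # 100000: the identity map is not
```

Passing from q = p to q = p + 1 is exactly the "much smaller unit ball" of Definition 1, made quantitative.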
We will say that a seminorm $p$ is a Hilbert seminorm if $X_{p}$ is a Hilbert space, or equivalently if $p$ comes from a sesquilinear positive semidefinite form on $X.$ Definition 2: A nuclear space is a topological vector space with a topology defined by a family of Hilbert seminorms, such that for any Hilbert seminorm $p$ we can find a larger Hilbert seminorm $q$ so that the natural map from $X_{q}$ to $X_{p}$ is trace class. Some authors prefer to use Hilbert–Schmidt operators rather than trace class operators. This makes little difference, because any trace class operator is Hilbert–Schmidt, and the product of two Hilbert–Schmidt operators is of trace class. Definition 3: A nuclear space is a topological vector space with a topology defined by a family of Hilbert seminorms, such that for any Hilbert seminorm $p$ we can find a larger Hilbert seminorm $q$ so that the natural map from $X_{q}$ to $X_{p}$ is Hilbert–Schmidt. If we are willing to use the concept of a nuclear operator from an arbitrary locally convex topological vector space to a Banach space, we can give shorter definitions as follows: Definition 4: A nuclear space is a locally convex topological vector space such that for any seminorm $p$ the natural map from $X\to X_{p}$ is nuclear. Definition 5: A nuclear space is a locally convex topological vector space such that any continuous linear map to a Banach space is nuclear. Grothendieck used a definition similar to the following one: Definition 6: A nuclear space is a locally convex topological vector space $A$ such that for any locally convex topological vector space $B$ the natural map from the projective to the injective tensor product of $A$ and $B$ is an isomorphism. In fact it is sufficient to check this just for Banach spaces $B,$ or even just for the single Banach space $\ell ^{1}$ of absolutely convergent series. Characterizations Let $X$ be a Hausdorff locally convex space. Then the following are equivalent: 1. $X$ is nuclear; 2. 
for any locally convex space $Y,$ the canonical vector space embedding $X\otimes _{\pi }Y\to {\mathcal {B}}_{\epsilon }\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime }\right)$ is an embedding of TVSs whose image is dense in the codomain; 3. for any Banach space $Y,$ the canonical vector space embedding $X{\widehat {\otimes }}_{\pi }Y\to X{\widehat {\otimes }}_{\epsilon }Y$ is a surjective isomorphism of TVSs;[5] 4. for any locally convex Hausdorff space $Y,$ the canonical vector space embedding $X{\widehat {\otimes }}_{\pi }Y\to X{\widehat {\otimes }}_{\epsilon }Y$ is a surjective isomorphism of TVSs;[5] 5. the canonical embedding of $\ell ^{1}[\mathbb {N} ,X]$ in $\ell ^{1}(\mathbb {N} ,X)$ is a surjective isomorphism of TVSs;[6] 6. the canonical map of $\ell ^{1}{\widehat {\otimes }}_{\pi }X\to \ell ^{1}{\widehat {\otimes }}_{\epsilon }X$ is a surjective TVS-isomorphism.[6] 7. for any seminorm $p$ we can find a larger seminorm $q$ so that the natural map $X_{q}\to X_{p}$ is nuclear; 8. for any seminorm $p$ we can find a larger seminorm $q$ so that the canonical injection $X_{p}^{\prime }\to X_{q}^{\prime }$ is nuclear;[5] 9. the topology of $X$ is defined by a family of Hilbert seminorms, such that for any Hilbert seminorm $p$ we can find a larger Hilbert seminorm $q$ so that the natural map $X_{q}\to X_{p}$ is trace class; 10. $X$ has a topology defined by a family of Hilbert seminorms, such that for any Hilbert seminorm $p$ we can find a larger Hilbert seminorm $q$ so that the natural map $X_{q}\to X_{p}$ is Hilbert–Schmidt; 11. for any seminorm $p$ the natural map from $X\to X_{p}$ is nuclear. 12. any continuous linear map to a Banach space is nuclear; 13. every continuous seminorm on $X$ is prenuclear;[7] 14. every equicontinuous subset of $X^{\prime }$ is prenuclear;[7] 15. every linear map from a Banach space into $X^{\prime }$ that transforms the unit ball into an equicontinuous set, is nuclear;[5] 16. 
the completion of $X$ is a nuclear space; If $X$ is a Fréchet space then the following are equivalent: 1. $X$ is nuclear; 2. every summable sequence in $X$ is absolutely summable;[6] 3. the strong dual of $X$ is nuclear; Sufficient conditions • A locally convex Hausdorff space is nuclear if and only if its completion is nuclear. • Every subspace of a nuclear space is nuclear.[8] • Every Hausdorff quotient space of a nuclear space is nuclear.[8] • The inductive limit of a countable sequence of nuclear spaces is nuclear.[8] • The locally convex direct sum of a countable sequence of nuclear spaces is nuclear.[8] • The strong dual of a nuclear Fréchet space is nuclear.[9] • In general, the strong dual of a nuclear space may fail to be nuclear.[9] • A Fréchet space whose strong dual is nuclear is itself nuclear.[9] • The projective limit of a family of nuclear spaces is nuclear.[8] • The product of a family of nuclear spaces is nuclear.[8] • The projective tensor product of two nuclear spaces, as well as its completion, is nuclear.[10] Suppose that $X$ and $N$ are locally convex spaces, where $N$ is nuclear.
• If $N$ is nuclear then the vector space of continuous linear maps $L_{\sigma }(X,N)$ endowed with the topology of simple convergence is a nuclear space.[9] • If $X$ is a semi-reflexive space whose strong dual is nuclear and if $N$ is nuclear then the vector space of continuous linear maps $L_{b}(X,N)$ (endowed with the topology of uniform convergence on bounded subsets of $X$) is a nuclear space.[11] Examples If $d$ is a set of any cardinality, then $\mathbb {R} ^{d}$ and $\mathbb {C} ^{d}$ (with the product topology) are both nuclear spaces.[12] A relatively simple infinite-dimensional example of a nuclear space is the space of all rapidly decreasing sequences $c=\left(c_{1},c_{2},\ldots \right).$ ("Rapidly decreasing" means that $c_{n}p(n)$ is bounded for any polynomial $p$). For each real number $s,$ it is possible to define a norm $\|\,\cdot \,\|_{s}$ by $\|c\|_{s}=\sup _{n}\left|c_{n}\right|n^{s}.$ If the completion in this norm is $C_{s},$ then there is a natural map $C_{s}\to C_{t}$ whenever $s\geq t,$ and this is nuclear whenever $s>t+1,$ essentially because the series $\sum n^{t-s}$ is then absolutely convergent. In particular, for each norm $\|\,\cdot \,\|_{t}$ it is possible to find another norm, namely $\|\,\cdot \,\|_{t+2},$ such that the map $C_{t+2}\to C_{t}$ is nuclear. So the space is nuclear. • The space of smooth functions on any compact manifold is nuclear. • The Schwartz space of smooth functions on $\mathbb {R} ^{n}$ for which the derivatives of all orders are rapidly decreasing is a nuclear space. • The space of entire holomorphic functions on the complex plane is nuclear. • The space of distributions ${\mathcal {D}}^{\prime },$ the strong dual of ${\mathcal {D}},$ is nuclear.[11] Properties Nuclear spaces are in many ways similar to finite-dimensional spaces and have many of their good properties. • Every finite-dimensional Hausdorff space is nuclear. • A Fréchet space is nuclear if and only if its strong dual is nuclear.
• Every bounded subset of a nuclear space is precompact (recall that a set is precompact if its closure in the completion of the space is compact).[13] This is analogous to the Heine-Borel theorem. In contrast, no infinite dimensional normed space has this property (although the finite dimensional spaces do). • If $X$ is a quasi-complete (i.e. all closed and bounded subsets are complete) nuclear space then $X$ has the Heine-Borel property.[14] • A nuclear quasi-complete barrelled space is a Montel space. • Every closed equicontinuous subset of the dual of a nuclear space is a compact metrizable set (for the strong dual topology). • Every nuclear space is a subspace of a product of Hilbert spaces. • Every nuclear space admits a basis of seminorms consisting of Hilbert norms. • Every nuclear space is a Schwartz space. • Every nuclear space possesses the approximation property.[15] • Any subspace and any quotient space by a closed subspace of a nuclear space is nuclear. • If $A$ is nuclear and $B$ is any locally convex topological vector space, then the natural map from the projective tensor product of A and $B$ to the injective tensor product is an isomorphism. Roughly speaking this means that there is only one sensible way to define the tensor product. This property characterizes nuclear spaces $A.$ • In the theory of measures on topological vector spaces, a basic theorem states that any continuous cylinder set measure on the dual of a nuclear Fréchet space automatically extends to a Radon measure. This is useful because it is often easy to construct cylinder set measures on topological vector spaces, but these are not good enough for most applications unless they are Radon measures (for example, they are not even countably additive in general). The kernel theorem Much of the theory of nuclear spaces was developed by Alexander Grothendieck while investigating the Schwartz kernel theorem and published in (Grothendieck 1955). 
We have the following generalization of the theorem. Schwartz kernel theorem:[9] Suppose that $X$ is nuclear, $Y$ is locally convex, and $v$ is a continuous bilinear form on $X\times Y.$ Then $v$ originates from a space of the form $X_{A^{\prime }}^{\prime }{\widehat {\otimes }}_{\epsilon }Y_{B^{\prime }}^{\prime }$ where $A^{\prime }$ and $B^{\prime }$ are suitable equicontinuous subsets of $X^{\prime }$ and $Y^{\prime }.$ Equivalently, $v$ is of the form $v(x,y)=\sum _{i=1}^{\infty }\lambda _{i}\left\langle x,x_{i}^{\prime }\right\rangle \left\langle y,y_{i}^{\prime }\right\rangle \quad {\text{ for all }}(x,y)\in X\times Y$ where $\left(\lambda _{i}\right)\in \ell ^{1}$ and each of $\left\{x_{1}^{\prime },x_{2}^{\prime },\ldots \right\}$ and $\left\{y_{1}^{\prime },y_{2}^{\prime },\ldots \right\}$ are equicontinuous. Furthermore, these sequences can be taken to be null sequences (that is, convergent to 0) in $X_{A^{\prime }}^{\prime }$ and $Y_{B^{\prime }}^{\prime },$ respectively. Bochner–Minlos theorem A continuous functional $C$ on a nuclear space $A$ is called a characteristic functional if $C(0)=1$ and, for every $n\geq 1,$ all complex numbers $z_{1},\ldots ,z_{n},$ and all elements $x_{1},\ldots ,x_{n}\in A,$ $\sum _{j=1}^{n}\sum _{k=1}^{n}z_{j}{\bar {z}}_{k}C(x_{j}-x_{k})\geq 0.$ Given a characteristic functional on a nuclear space $A,$ the Bochner–Minlos theorem (after Salomon Bochner and Robert Adol'fovich Minlos) guarantees the existence and uniqueness of a corresponding probability measure $\mu $ on the dual space $A^{\prime },$ given by $C(y)=\int _{A^{\prime }}e^{i\langle x,y\rangle }\,d\mu (x).$ This extends the inverse Fourier transform to nuclear spaces. In particular, if $A$ is the nuclear space $A=\bigcap _{k=0}^{\infty }H_{k},$ where $H_{k}$ are Hilbert spaces, the Bochner–Minlos theorem guarantees the existence of a probability measure with the characteristic function $e^{-{\frac {1}{2}}\|y\|_{H_{0}}^{2}},$ that is, the existence of the Gaussian measure on the dual space.
Such a measure is called the white noise measure. When $A$ is the Schwartz space, the corresponding random element is a random distribution. Strongly nuclear spaces A strongly nuclear space is a locally convex topological vector space such that for any seminorm $p$ there exists a larger seminorm $q$ so that the natural map $X_{q}\to X_{p}$ is strongly nuclear. See also • Auxiliary normed space • Fredholm kernel – type of a kernel on a Banach space • Injective tensor product • Locally convex topological vector space – A vector space with a topology defined by convex open sets • Nuclear operator • Projective tensor product – tensor product defined on two topological vector spaces • Rigged Hilbert space – Construction linking the study of "bound" and continuous eigenvalues in functional analysis • Trace class – Compact operator for which a finite trace can be defined • Topological vector space – Vector space with a notion of nearness References 1. Trèves 2006, p. 531. 2. Trèves 2006, pp. 509–510. 3. Costello, Kevin (2011). Renormalization and effective field theory. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-5288-0. OCLC 692084741. 4. Schaefer & Wolff 1999, p. 170. 5. Trèves 2006, p. 511. 6. Schaefer & Wolff 1999, p. 184. 7. Schaefer & Wolff 1999, p. 178. 8. Schaefer & Wolff 1999, p. 103. 9. Schaefer & Wolff 1999, p. 172. 10. Schaefer & Wolff 1999, p. 105. 11. Schaefer & Wolff 1999, p. 173. 12. Schaefer & Wolff 1999, p. 100. 13. Schaefer & Wolff 1999, p. 101. 14. Trèves 2006, p. 520. 15. Schaefer & Wolff 1999, p. 110. Bibliography • Becnel, Jeremy (2021). Tools for Infinite Dimensional Analysis. CRC Press. ISBN 978-0-367-54366-2. OCLC 1195816154. • Grothendieck, Alexandre (1955). "Produits tensoriels topologiques et espaces nucléaires". Memoirs of the American Mathematical Society. 16. • Diestel, Joe (2008).
The metric theory of tensor products : Grothendieck's résumé revisited. Providence, R.I: American Mathematical Society. ISBN 978-0-8218-4440-3. OCLC 185095773. • Dubinsky, Ed (1979). The structure of nuclear Fréchet spaces. Berlin New York: Springer-Verlag. ISBN 3-540-09504-7. OCLC 5126156. • Grothendieck, Alexandre (1966). Produits tensoriels topologiques et espaces nucléaires (in French). Providence: American Mathematical Society. ISBN 0-8218-1216-5. OCLC 1315788. • Husain, Taqdir (1978). Barrelledness in topological and ordered vector spaces. Berlin New York: Springer-Verlag. ISBN 3-540-09096-7. OCLC 4493665. • Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370. • Nlend, H (1977). Bornologies and functional analysis : introductory course on the theory of duality topology-bornology and its use in functional analysis. Amsterdam New York New York: North-Holland Pub. Co. Sole distributors for the U.S.A. and Canada, Elsevier-North Holland. ISBN 0-7204-0712-5. OCLC 2798822. • Nlend, H (1981). Nuclear and conuclear spaces : introductory courses on nuclear and conuclear spaces in the light of the duality. Amsterdam New York New York, N.Y: North-Holland Pub. Co. Sole distributors for the U.S.A. and Canada, Elsevier North-Holland. ISBN 0-444-86207-2. OCLC 7553061. • Gel'fand, I. M.; Vilenkin, N. Ya. (1964). Generalized Functions – vol. 4: Applications of harmonic analysis. New York: Academic Press. OCLC 310816279. • Takeyuki Hida and Si Si, Lectures on white noise functionals, World Scientific Publishing, 2008. ISBN 978-981-256-052-0 • T. R. Johansen, The Bochner-Minlos Theorem for nuclear spaces and an abstract white noise space, 2003. • G.L. Litvinov (2001) [1994], "Nuclear space", Encyclopedia of Mathematics, EMS Press • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces.
Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. • Pietsch, Albrecht (1972) [1965]. Nuclear locally convex spaces. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 66. Berlin, New York: Springer-Verlag. ISBN 978-0-387-05644-9. MR 0350360. • Robertson, A.P.; W.J. Robertson (1964). Topological vector spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge University Press. p. 141. • Robertson, A. P. (1973). Topological vector spaces. Cambridge England: University Press. ISBN 0-521-29882-2. OCLC 589250. • Ryan, Raymond (2002). Introduction to tensor products of Banach spaces. London New York: Springer. ISBN 1-85233-437-1. OCLC 48092184. • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. • Wong (1979). Schwartz spaces, nuclear spaces, and tensor products. Berlin New York: Springer-Verlag. ISBN 3-540-09513-6. OCLC 5126158.
White surface In algebraic geometry, a White surface is one of the rational surfaces in Pn studied by White (1923), generalizing cubic surfaces and Bordiga surfaces, which are the cases n = 3 or 4. A White surface in Pn is given by the embedding of P2 blown up in n(n + 1)/2 points by the linear system of degree n curves through these points. References • White, F. P. (1923), "On certain nets of plane curves", Proceedings of the Cambridge Philosophical Society, 22: 1–10, doi:10.1017/S0305004100000037
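As a quick consistency check on the construction (an illustration, not part of the article): plane curves of degree n form a linear system of projective dimension n(n + 3)/2, and the n(n + 1)/2 blown-up points impose that many linear conditions, leaving a system of dimension exactly n, which is what embeds the blown-up plane in Pn.

```python
# Dimension count for the White surface (illustrative check): degree-n plane
# curves form a projective space of dimension n(n+3)/2; each of the
# n(n+1)/2 blown-up points imposes one linear condition, leaving a
# linear system of dimension n, i.e. a map to P^n.
for n in range(1, 10):
    curves = n * (n + 3) // 2   # dimension of the system of degree-n curves
    points = n * (n + 1) // 2   # number of imposed base points
    assert curves - points == n
print("degree-n curves through n(n+1)/2 points give a system of dimension n")
```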
White test In statistics, the White test is a statistical test that establishes whether the variance of the errors in a regression model is constant; that is, it tests for homoskedasticity. This test, and an estimator for heteroskedasticity-consistent standard errors, were proposed by Halbert White in 1980.[1] These methods have become widely used, making this paper one of the most cited articles in economics.[2] In cases where the White test statistic is statistically significant, heteroskedasticity may not necessarily be the cause; instead the problem could be a specification error. In other words, the White test can be a test of heteroskedasticity or specification error or both. If no cross product terms are introduced in the White test procedure, then this is a test of pure heteroskedasticity. If cross products are introduced in the model, then it is a test of both heteroskedasticity and specification bias. Testing constant variance To test for constant variance one undertakes an auxiliary regression analysis: this regresses the squared residuals from the original regression model onto a set of regressors that contain the original regressors along with their squares and cross-products.[3] One then inspects the R2. The Lagrange multiplier (LM) test statistic is the product of the R2 value and sample size: ${\text{LM}}=nR^{2}.$ This statistic asymptotically follows a chi-squared distribution, with degrees of freedom equal to P − 1, where P is the number of estimated parameters (in the auxiliary regression). The logic of the test is as follows. First, the squared residuals from the original model serve as a proxy for the variance of the error term at each observation. (The error term is assumed to have a mean of zero, and the variance of a zero-mean random variable is just the expectation of its square.) The independent variables in the auxiliary regression account for the possibility that the error variance depends on the values of the original regressors in some way (linear or quadratic).
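The auxiliary regression just described can be sketched from scratch. The following is an illustrative pure-Python implementation on simulated data (squares only, no cross products, so it corresponds to the pure-heteroskedasticity variant); all helper names are our own, and in practice one would use a library routine such as statsmodels' het_white:

```python
# Illustrative from-scratch White test on simulated data.
import random

def solve(A, b):
    """Solve the linear system A beta = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    """OLS via the normal equations; returns residuals and R^2."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    resid = [yi - sum(b * xi for b, xi in zip(beta, row)) for row, yi in zip(X, y)]
    ybar = sum(y) / len(y)
    r2 = 1.0 - sum(e * e for e in resid) / sum((yi - ybar) ** 2 for yi in y)
    return resid, r2

random.seed(1)
n = 500
x = [random.uniform(0, 10) for _ in range(n)]
# heteroskedastic errors: the standard deviation grows with x
y = [2.0 + 0.5 * xi + random.gauss(0, 0.2 + 0.3 * xi) for xi in x]

resid, _ = ols([[1.0, xi] for xi in x], y)             # original regression
e2 = [e * e for e in resid]                            # squared residuals
_, r2_aux = ols([[1.0, xi, xi * xi] for xi in x], e2)  # auxiliary regression
lm = n * r2_aux   # compare with chi-squared, P - 1 = 2 degrees of freedom
print(f"LM = {lm:.1f} (5% critical value of chi-squared(2) is about 5.99)")
```

With heteroskedasticity this strong, the LM statistic comfortably exceeds the 5% critical value and the null of homoskedasticity is rejected.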
If the error term in the original model is in fact homoskedastic (has a constant variance) then the coefficients in the auxiliary regression (besides the constant) should be statistically indistinguishable from zero and the R2 should be "small". Conversely, a "large" R2 (scaled by the sample size so that it follows the chi-squared distribution) counts against the hypothesis of homoskedasticity. An alternative to the White test is the Breusch–Pagan test, which is designed to detect only linear forms of heteroskedasticity. Under certain conditions and a modification of one of the tests, they can be found to be algebraically equivalent.[4] If homoskedasticity is rejected one can use heteroskedasticity-consistent standard errors. Software implementations • In R, White's test can be implemented using the white function of the skedastic package.[5] • In Python, White's test can be implemented using the het_white function in statsmodels.stats.diagnostic.[6] • In Stata, the test can be run with the estat imtest, white command.[7] See also • Heteroskedasticity • Breusch–Pagan test References 1. White, H. (1980). "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity". Econometrica. 48 (4): 817–838. CiteSeerX 10.1.1.11.7646. doi:10.2307/1912934. JSTOR 1912934. MR 0575027. 2. Kim, E.H.; Morse, A.; Zingales, L. (2006). "What Has Mattered to Economics since 1970" (PDF). Journal of Economic Perspectives. 20 (4): 189–202. doi:10.1257/jep.20.4.189. 3. Verbeek, Marno (2008). A Guide to Modern Econometrics (Third ed.). Wiley. pp. 99–100. ISBN 978-0-470-51769-7. 4. Waldman, Donald M. (1983). "A note on algebraic equivalence of White's test and a variation of the Godfrey/Breusch-Pagan test for heteroscedasticity". Economics Letters. 13 (2–3): 197–200. doi:10.1016/0165-1765(83)90085-X. 5. "skedastic: Heteroskedasticity Diagnostics for Linear Regression Models". CRAN. 6. "statsmodels v0.12.1".
7. Stata. "regress postestimation — Postestimation tools for regress" (PDF). Further reading • Gujarati, Damodar N.; Porter, Dawn C. (2009). Basic Econometrics (Fifth ed.). New York: McGraw-Hill Irwin. pp. 386–88. ISBN 978-0-07-337577-9. • Kmenta, Jan (1986). Elements of Econometrics (Second ed.). New York: Macmillan. pp. 292–298. ISBN 978-0-02-365070-3. • Wooldridge, Jeffrey M. (2013). Introductory Econometrics: A Modern Approach (Fifth ed.). South-Western. pp. 269–70. ISBN 978-1-111-53439-4.
Whitehead's algorithm Whitehead's algorithm is a mathematical algorithm in group theory for solving the automorphic equivalence problem in the finite rank free group Fn. The algorithm is based on a classic 1936 paper of J. H. C. Whitehead.[1] It is still unknown (except for the case n = 2) if Whitehead's algorithm has polynomial time complexity. Statement of the problem Let $F_{n}=F(x_{1},\dots ,x_{n})$ be a free group of rank $n\geq 2$ with a free basis $X=\{x_{1},\dots ,x_{n}\}$. The automorphism problem, or the automorphic equivalence problem for $F_{n}$ asks, given two freely reduced words $w,w'\in F_{n}$ whether there exists an automorphism $\varphi \in \operatorname {Aut} (F_{n})$ such that $\varphi (w)=w'$. Thus the automorphism problem asks, for $w,w'\in F_{n}$ whether $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})w'$. For $w,w'\in F_{n}$ one has $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})w'$ if and only if $\operatorname {Out} (F_{n})[w]=\operatorname {Out} (F_{n})[w']$, where $[w],[w']$ are conjugacy classes in $F_{n}$ of $w,w'$ accordingly. Therefore, the automorphism problem for $F_{n}$ is often formulated in terms of $\operatorname {Out} (F_{n})$-equivalence of conjugacy classes of elements of $F_{n}$. For an element $w\in F_{n}$, $|w|_{X}$ denotes the freely reduced length of $w$ with respect to $X$, and $\|w\|_{X}$ denotes the cyclically reduced length of $w$ with respect to $X$. For the automorphism problem, the length of an input $w$ is measured as $|w|_{X}$ or as $\|w\|_{X}$, depending on whether one views $w$ as an element of $F_{n}$ or as defining the corresponding conjugacy class $[w]$ in $F_{n}$. History The automorphism problem for $F_{n}$ was algorithmically solved by J. H. C. Whitehead in a classic 1936 paper,[1] and his solution came to be known as Whitehead's algorithm. Whitehead used a topological approach in his paper. 
Namely, consider the 3-manifold $M_{n}=\#_{i=1}^{n}\mathbb {S} ^{2}\times \mathbb {S} ^{1}$, the connected sum of $n$ copies of $\mathbb {S} ^{2}\times \mathbb {S} ^{1}$. Then $\pi _{1}(M_{n})\cong F_{n}$, and, moreover, up to a quotient by a finite normal subgroup isomorphic to $\mathbb {Z} _{2}^{n}$, the mapping class group of $M_{n}$ is equal to $\operatorname {Out} (F_{n})$; see.[2] Different free bases of $F_{n}$ can be represented by isotopy classes of "sphere systems" in $M_{n}$, and the cyclically reduced form of an element $w\in F_{n}$, as well as the Whitehead graph of $[w]$, can be "read off" from how a loop in general position representing $[w]$ intersects the spheres in the system. Whitehead moves can be represented by certain kinds of topological "swapping" moves modifying the sphere system.[3][4][5] Subsequently, Rapaport,[6] and later, based on her work, Higgins and Lyndon,[7] gave a purely combinatorial and algebraic re-interpretation of Whitehead's work and of Whitehead's algorithm. The exposition of Whitehead's algorithm in the book of Lyndon and Schupp[8] is based on this combinatorial approach. Culler and Vogtmann,[9] in their 1986 paper that introduced the Outer space, gave a hybrid approach to Whitehead's algorithm, presented in combinatorial terms but closely following Whitehead's original ideas. Whitehead's algorithm Our exposition regarding Whitehead's algorithm mostly follows Ch.I.4 in the book of Lyndon and Schupp,[8] as well as.[10] Overview The automorphism group $\operatorname {Aut} (F_{n})$ has a particularly useful finite generating set ${\mathcal {W}}$ of Whitehead automorphisms or Whitehead moves. Given $w,w'\in F_{n},$ the first part of Whitehead's algorithm consists of iteratively applying Whitehead moves to $w,w'$ to take each of them to an "automorphically minimal" form, where the cyclically reduced length strictly decreases at each step.
Once we find these automorphically minimal forms $u,u'$ of $w,w'$, we check if $\|u\|_{X}=\|u'\|_{X}$. If $\|u\|_{X}\neq \|u'\|_{X}$ then $w,w'$ are not automorphically equivalent in $F_{n}$. If $\|u\|_{X}=\|u'\|_{X}$, we check if there exists a finite chain of Whitehead moves taking $u$ to $u'$ so that the cyclically reduced length remains constant throughout this chain. The elements $w,w'$ are automorphically equivalent in $F_{n}$ if and only if such a chain exists. Whitehead's algorithm also solves the search automorphism problem for $F_{n}$. Namely, given $w,w'\in F_{n}$, if Whitehead's algorithm concludes that $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})w'$, the algorithm also outputs an automorphism $\varphi \in \operatorname {Aut} (F_{n})$ such that $\varphi (w)=w'$. Such an element $\varphi \in \operatorname {Aut} (F_{n})$ is produced as the composition of a chain of Whitehead moves arising from the above procedure and taking $w$ to $w'$. Whitehead automorphisms A Whitehead automorphism, or Whitehead move, of $F_{n}$ is an automorphism $\tau \in \operatorname {Aut} (F_{n})$ of $F_{n}$ of one of the following two types: (i) There is a permutation $\sigma \in S_{n}$ of $\{1,2,\dots ,n\}$ such that for $i=1,\dots ,n$ $\tau (x_{i})=x_{\sigma (i)}^{\pm 1}$ Such $\tau $ is called a Whitehead automorphism of the first kind. (ii) There is an element $a\in X^{\pm 1}$, called the multiplier, such that for every $x\in X^{\pm 1}$ $\tau (x)\in \{x,xa,a^{-1}x,a^{-1}xa\}.$ Such $\tau $ is called a Whitehead automorphism of the second kind. Since $\tau $ is an automorphism of $F_{n}$, it follows that $\tau (a)=a$ in this case. Often, for a Whitehead automorphism $\tau \in \operatorname {Aut} (F_{n})$, the corresponding outer automorphism in $\operatorname {Out} (F_{n})$ is also called a Whitehead automorphism or a Whitehead move. Examples Let $F_{4}=F(x_{1},x_{2},x_{3},x_{4})$.
Let $\tau :F_{4}\to F_{4}$ be a homomorphism such that $\tau (x_{1})=x_{2}x_{1},\quad \tau (x_{2})=x_{2},\quad \tau (x_{3})=x_{2}x_{3}x_{2}^{-1},\quad \tau (x_{4})=x_{4}$ Then $\tau $ is actually an automorphism of $F_{4}$, and, moreover, $\tau $ is a Whitehead automorphism of the second kind, with the multiplier $a=x_{2}^{-1}$. Let $\tau ':F_{4}\to F_{4}$ be a homomorphism such that $\tau '(x_{1})=x_{1},\quad \tau '(x_{2})=x_{1}^{-1}x_{2}x_{1},\quad \tau '(x_{3})=x_{1}^{-1}x_{3}x_{1},\quad \tau '(x_{4})=x_{1}^{-1}x_{4}x_{1}$ Then $\tau '$ is actually an inner automorphism of $F_{4}$ given by conjugation by $x_{1}$, and, moreover, $\tau '$is a Whitehead automorphism of the second kind, with the multiplier $a=x_{1}$. Automorphically minimal and Whitehead minimal elements For $w\in F_{n}$, the conjugacy class $[w]$ is called automorphically minimal if for every $\varphi \in \operatorname {Aut} (F_{n})$ we have $\|w\|_{X}\leq \|\varphi (w)\|_{X}$. Also, a conjugacy class $[w]$ is called Whitehead minimal if for every Whitehead move $\tau \in \operatorname {Aut} (F_{n})$ we have $\|w\|_{X}\leq \|\tau (w)\|_{X}$. Thus, by definition, if $[w]$ is automorphically minimal then it is also Whitehead minimal. It turns out that the converse is also true. Whitehead's "Peak Reduction Lemma" The following statement is referred to as Whitehead's "Peak Reduction Lemma", see Proposition 4.20 in [8] and Proposition 1.2 in:[10] Let $w\in F_{n}$. Then the following hold: (1) If $[w]$ is not automorphically minimal, then there exists a Whitehead automorphism $\tau \in \operatorname {Aut} (F_{n})$ such that $\|\tau (w)\|_{X}<\|w\|_{X}$. (2) Suppose that $[w]$ is automorphically minimal, and that another conjugacy class $[w']$ is also automorphically minimal. 
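The example automorphism τ above can be applied mechanically. A minimal sketch (our own encoding, not from the sources): words in F4 are tuples of nonzero integers, with i standing for x_i and −i for x_i^{-1}; applying τ letterwise and freely reducing, and then applying its inverse, recovers the original word.

```python
# Words in F_4 as tuples of nonzero ints (i for x_i, -i for x_i^{-1}).

def reduce_word(w):
    """Freely reduce a word by cancelling adjacent inverse pairs."""
    out = []
    for g in w:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def apply_map(images, w):
    """Apply a map given on basis letters (images[i] = image of x_i)."""
    result = []
    for g in w:
        img = images[g] if g > 0 else tuple(-h for h in reversed(images[-g]))
        result.extend(img)
    return reduce_word(result)

# tau(x1) = x2 x1, tau(x2) = x2, tau(x3) = x2 x3 x2^{-1}, tau(x4) = x4
tau = {1: (2, 1), 2: (2,), 3: (2, 3, -2), 4: (4,)}
# its inverse: x1 -> x2^{-1} x1, x3 -> x2^{-1} x3 x2
tau_inv = {1: (-2, 1), 2: (2,), 3: (-2, 3, 2), 4: (4,)}

w = (1, 3, -1, 4, 2)          # the word x1 x3 x1^{-1} x4 x2
image = apply_map(tau, w)
back = apply_map(tau_inv, image)
print(image)                  # freely reduced image of w under tau
assert back == w              # tau_inv after tau is the identity on w
```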
Then $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})w'$ if and only if $\|w\|_{X}=\|w'\|_{X}$ and there exists a finite sequence of Whitehead moves $\tau _{1},\dots ,\tau _{k}\in \operatorname {Aut} (F_{n})$ such that $\tau _{k}\cdots \tau _{1}(w)=w'$ and $\|\tau _{i}\cdots \tau _{1}(w)\|_{X}=\|w\|_{X}{\text{ for }}i=1,\dots ,k.$ Part (1) of the Peak Reduction Lemma implies that a conjugacy class $[w]$ is Whitehead minimal if and only if it is automorphically minimal. The automorphism graph The automorphism graph ${\mathcal {A}}$ of $F_{n}$ is a graph with the vertex set being the set of conjugacy classes $[u]$ of elements $u\in F_{n}$. Two distinct vertices $[u],[v]$ are adjacent in ${\mathcal {A}}$ if $\|u\|_{X}=\|v\|_{X}$ and there exists a Whitehead automorphism $\tau $ such that $[\tau (u)]=[v]$. For a vertex $[u]$ of ${\mathcal {A}}$, the connected component of $[u]$ in ${\mathcal {A}}$ is denoted ${\mathcal {A}}[u]$. Whitehead graph For $1\neq w\in F_{n}$ with cyclically reduced form $u$, the Whitehead graph $\Gamma _{[w]}$ is a labelled graph with the vertex set $X^{\pm 1}$, where for $x,y\in X^{\pm 1},x\neq y$ there is an edge joining $x$ and $y$ with the label or "weight" $n(\{x,y\};[w])$ which is equal to the number of distinct occurrences of subwords $x^{-1}y,y^{-1}x$ read cyclically in $u$. (In some versions of the Whitehead graph one only includes the edges with $n(\{x,y\};[w])>0$.) If $\tau \in \operatorname {Aut} (F_{n})$ is a Whitehead automorphism, then the length change $\|\tau (w)\|_{X}-\|w\|_{X}$ can be expressed as a linear combination, with integer coefficients determined by $\tau $, of the weights $n(\{x,y\};[w])$ in the Whitehead graph $\Gamma _{[w]}$. See Proposition 4.16 in Ch. I of.[8] This fact plays a key role in the proof of Whitehead's peak reduction result. 
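The weights n({x, y}; [w]) of the Whitehead graph can be read off by scanning the cyclic two-letter subwords of a cyclically reduced word. A small illustrative sketch (again with words as tuples of nonzero integers; the helper name is ours): each cyclic subword ab is an occurrence of x^{-1}y with x = a^{-1} and y = b, so it contributes 1 to the edge {a^{-1}, b}.

```python
# Whitehead graph weights n({x,y};[w]) of a cyclically reduced word,
# with letters encoded as nonzero ints (i for x_i, -i for x_i^{-1}).
from collections import Counter

def whitehead_weights(w):
    """Map each unordered pair of letters {x, y} to n({x,y};[w])."""
    weights = Counter()
    n = len(w)
    for i in range(n):
        a, b = w[i], w[(i + 1) % n]   # cyclic subword "a b"
        pair = frozenset((-a, b))     # occurrence of x^{-1} y, x = a^{-1}
        if len(pair) == 2:            # guard against non-reduced input
            weights[pair] += 1
    return weights

w = (1, 2, -1, -2)                    # the commutator x1 x2 x1^{-1} x2^{-1}
wts = whitehead_weights(w)
for pair, k in sorted(wts.items(), key=lambda p: sorted(map(abs, p[0]))):
    print(sorted(pair), k)            # each of the four edges has weight 1
```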
Whitehead's minimization algorithm Whitehead's minimization algorithm, given a freely reduced word $w\in F_{n}$, finds an automorphically minimal $[v]$ such that $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})v.$ This algorithm proceeds as follows. Given $w\in F_{n}$, put $w_{1}=w$. If $w_{i}$ is already constructed, check if there exists a Whitehead automorphism $\tau \in \operatorname {Aut} (F_{n})$ such that $\|\tau (w_{i})\|_{X}<\|w_{i}\|_{X}$. (This condition can be checked since the set of Whitehead automorphisms of $F_{n}$ is finite.) If such $\tau $ exists, put $w_{i+1}=\tau (w_{i})$ and go to the next step. If no such $\tau $ exists, declare that $[w_{i}]$ is automorphically minimal, with $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})w_{i}$, and terminate the algorithm. Part (1) of the Peak Reduction Lemma implies that Whitehead's minimization algorithm terminates with some $w_{m}$, where $m\leq \|w\|_{X}$, and that then $[w_{m}]$ is indeed automorphically minimal and satisfies $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})w_{m}$. Whitehead's algorithm for the automorphic equivalence problem Whitehead's algorithm for the automorphic equivalence problem, given $w,w'\in F_{n}$, decides whether or not $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})w'$. The algorithm proceeds as follows. Given $w,w'\in F_{n}$, first apply the Whitehead minimization algorithm to each of $w,w'$ to find automorphically minimal $[v],[v']$ such that $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})v$ and $\operatorname {Aut} (F_{n})w'=\operatorname {Aut} (F_{n})v'$. If $\|v\|_{X}\neq \|v'\|_{X}$, declare that $\operatorname {Aut} (F_{n})w\neq \operatorname {Aut} (F_{n})w'$ and terminate the algorithm. Suppose now that $\|v\|_{X}=\|v'\|_{X}=t\geq 0$. 
Then check if there exists a finite sequence of Whitehead moves $\tau _{1},\dots ,\tau _{k}\in \operatorname {Aut} (F_{n})$ such that $\tau _{k}\dots \tau _{1}(v)=v'$ and $\|\tau _{i}\dots \tau _{1}(v)\|_{X}=\|v\|_{X}=t{\text{ for }}i=1,\dots ,k.$ This condition can be checked since the number of cyclically reduced words of length $t$ in $F_{n}$ is finite. More specifically, using the breadth-first approach, one constructs the connected components ${\mathcal {A}}[v],{\mathcal {A}}[v']$ of the automorphism graph and checks if ${\mathcal {A}}[v]\cap {\mathcal {A}}[v']=\varnothing $. If such a sequence exists, declare that $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})w'$, and terminate the algorithm. If no such sequence exists, declare that $\operatorname {Aut} (F_{n})w\neq \operatorname {Aut} (F_{n})w'$ and terminate the algorithm. The Peak Reduction Lemma implies that Whitehead's algorithm correctly solves the automorphic equivalence problem in $F_{n}$. Moreover, if $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})w'$, the algorithm actually produces (as a composition of Whitehead moves) an automorphism $\varphi \in \operatorname {Aut} (F_{n})$ such that $\varphi (w)=w'$. 
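Both phases of Whitehead's algorithm can be sketched concretely for rank $n=2$. The code below is a toy illustration, not an implementation from the references: the integer encoding ($1=x$, $-1=x^{-1}$, $2=y$, $-2=y^{-1}$), the function names, and the restriction to rank 2 are my own choices. `minimize` greedily applies length-reducing Whitehead moves (the minimization algorithm), and `equivalent` compares minimal lengths and then runs a breadth-first search through length-preserving moves, mimicking the construction of the component ${\mathcal {A}}[v]$.

```python
from collections import deque
from itertools import permutations, product

def free_reduce(w):
    out = []
    for a in w:
        if out and out[-1] == -a:
            out.pop()
        else:
            out.append(a)
    return tuple(out)

def cyclic_reduce(w):
    w = list(free_reduce(w))
    while len(w) >= 2 and w[0] == -w[-1]:
        w = w[1:-1]
    return tuple(w)

def canonical(w):
    # Canonical representative of the conjugacy class [w]:
    # lexicographically least cyclic rotation of the cyclic reduction.
    w = cyclic_reduce(w)
    if not w:
        return w
    return min(tuple(w[i:] + w[:i]) for i in range(len(w)))

def apply_auto(images, w):
    # images maps each generator 1, 2 to a word (its image).
    out = []
    for a in w:
        img = images[abs(a)]
        out.extend(img if a > 0 else tuple(-c for c in reversed(img)))
    return free_reduce(out)

def whitehead_autos_rank2():
    autos = []
    # First kind: permute the generators and/or invert them.
    for p in permutations((1, 2)):
        for s1, s2 in product((1, -1), repeat=2):
            autos.append({1: (s1 * p[0],), 2: (s2 * p[1],)})
    # Second kind: fix a multiplier a in {x, x^-1, y, y^-1}; the other
    # generator g maps to one of g, g a, a^-1 g, a^-1 g a.
    for a in (1, -1, 2, -2):
        g = 2 if abs(a) == 1 else 1
        for img in ((g,), (g, a), (-a, g), (-a, g, a)):
            autos.append({abs(a): (abs(a),), g: img})
    return autos

AUTOS = whitehead_autos_rank2()

def minimize(w):
    # Greedy length reduction; by peak reduction, part (1), the
    # result is automorphically minimal.
    w = cyclic_reduce(w)
    improved = True
    while improved:
        improved = False
        for t in AUTOS:
            v = cyclic_reduce(apply_auto(t, w))
            if len(v) < len(w):
                w, improved = v, True
                break
    return w

def equivalent(w, wp):
    # Minimize both words, compare lengths, then BFS through
    # length-preserving Whitehead moves (the component A[v]).
    v, vp = canonical(minimize(w)), canonical(minimize(wp))
    if len(v) != len(vp):
        return False
    seen, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        if u == vp:
            return True
        for t in AUTOS:
            z = canonical(apply_auto(t, u))
            if len(z) == len(u) and z not in seen:
                seen.add(z)
                queue.append(z)
    return False
```

For example, $xy$ is carried to a single generator by the move $x\mapsto x,\ y\mapsto x^{-1}y$, while the commutator $xyx^{-1}y^{-1}$ is already automorphically minimal at cyclic length 4, and the swap $x\leftrightarrow y$ exhibits its equivalence with $yxy^{-1}x^{-1}$.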
Computational complexity of Whitehead's algorithm • If the rank $n\geq 2$ of $F_{n}$ is fixed, then, given $w\in F_{n}$, the Whitehead minimization algorithm always terminates in quadratic time $O(|w|_{X}^{2})$ and produces an automorphically minimal cyclically reduced word $u\in F_{n}$ such that $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})u$.[10] Moreover, even if $n$ is not considered fixed, (an adaptation of) the Whitehead minimization algorithm on an input $w\in F_{n}$ terminates in time $O(|w|_{X}^{2}n^{3})$.[11] • If the rank $n\geq 3$ of $F_{n}$ is fixed, then for an automorphically minimal $u\in F_{n}$ constructing the graph ${\mathcal {A}}[u]$ takes $O\left(\|u\|_{X}\cdot \#V{\mathcal {A}}[u]\right)$ time and thus requires a priori exponential time in $|u|_{X}$. For that reason Whitehead's algorithm for deciding, given $w,w'\in F_{n}$, whether or not $\operatorname {Aut} (F_{n})w=\operatorname {Aut} (F_{n})w'$, runs in at most exponential time in $\max\{|w|_{X},|w'|_{X}\}$. • For $n=2$, Khan proved that for an automorphically minimal $u\in F_{2}$, the graph ${\mathcal {A}}[u]$ has at most $O\left(\|u\|_{X}\right)$ vertices and hence constructing the graph ${\mathcal {A}}[u]$ can be done in quadratic time in $|u|_{X}$.[12] Consequently, Whitehead's algorithm for the automorphic equivalence problem in $F_{2}$, given $w,w'\in F_{2}$, runs in quadratic time in $\max\{|w|_{X},|w'|_{X}\}$. 
Applications, generalizations and related results • Whitehead's algorithm can be adapted to solve, for any fixed $m\geq 1$, the automorphic equivalence problem for m-tuples of elements of $F_{n}$ and for m-tuples of conjugacy classes in $F_{n}$; see Ch.I.4 of [8] and [13] • McCool used Whitehead's algorithm and the peak reduction to prove that for any $w\in F_{n}$ the stabilizer $\operatorname {Stab} _{\operatorname {Out} (F_{n})}([w])$ is finitely presentable, and obtained similar results for $\operatorname {Out} (F_{n})$-stabilizers of m-tuples of conjugacy classes in $F_{n}$.[14] McCool also used the peak reduction method to construct a finite presentation of the group $\operatorname {Aut} (F_{n})$ with the set of Whitehead automorphisms as the generating set.[15] He then used this presentation to recover a finite presentation for $\operatorname {Aut} (F_{n})$, originally due to Nielsen, with Nielsen automorphisms as generators.[16] • Gersten obtained a variation of Whitehead's algorithm, for deciding, given two finite subsets $S,S'\subseteq F_{n}$, whether the subgroups $H=\langle S\rangle ,H'=\langle S'\rangle \leq F_{n}$ are automorphically equivalent, that is, whether there exists $\varphi \in \operatorname {Aut} (F_{n})$ such that $\varphi (H)=H'$.[17] • Whitehead's algorithm and peak reduction play a key role in the proof by Culler and Vogtmann that the Culler–Vogtmann Outer space is contractible.[9] • Collins and Zieschang obtained analogs of Whitehead's peak reduction and of Whitehead's algorithm for automorphic equivalence in free products of groups.[18][19] • Gilbert used a version of a peak reduction lemma to construct a presentation for the automorphism group $\operatorname {Aut} (G)$ of a free product $G=\ast _{i=1}^{m}G_{i}$.[20] • Levitt and Vogtmann produced a Whitehead-type algorithm for solving the automorphic equivalence problem (for elements, m-tuples of elements and m-tuples of conjugacy classes) in a group $G=\pi _{1}(S)$ where $S$ is a 
closed hyperbolic surface.[21] • If an element $w\in F_{n}=F(X)$ is chosen uniformly at random from the sphere of radius $m\geq 1$ in $F(X)$, then, with probability tending to 1 exponentially fast as $m\to \infty $, the conjugacy class $[w]$ is already automorphically minimal and, moreover, $\#V{\mathcal {A}}[w]=O\left(\|w\|_{X}\right)=O(m)$. Consequently, if $w,w'\in F_{n}$ are two such "generic" elements, Whitehead's algorithm decides whether $w,w'$ are automorphically equivalent in linear time in $\max\{|w|_{X},|w'|_{X}\}$.[10] • Results similar to the above hold for the genericity of automorphic minimality for "randomly chosen" finitely generated subgroups of $F_{n}$.[22] • Lee proved that if $u\in F_{n}=F(X)$ is a cyclically reduced word such that $[u]$ is automorphically minimal, and if whenever $x_{i},x_{j},i<j$ both occur in $u$ or $u^{-1}$ then the total number of occurrences of $x_{i}^{\pm 1}$ in $u$ is smaller than the number of occurrences of $x_{j}^{\pm 1}$, then $\#V{\mathcal {A}}[u]$ is bounded above by a polynomial of degree $2n-3$ in $|u|_{X}$.[23] Consequently, if $w,w'\in F_{n},n\geq 3$ are such that $w$ is automorphically equivalent to some $u$ with the above property, then Whitehead's algorithm decides whether $w,w'$ are automorphically equivalent in time $O\left(\max\{|w|_{X}^{2n-3},|w'|_{X}^{2}\}\right)$. 
• The Garside algorithm for solving the conjugacy problem in braid groups has a similar general structure to Whitehead's algorithm, with "cycling moves" playing the role of Whitehead moves.[24] • Clifford and Goldstein used Whitehead-algorithm based techniques to produce an algorithm that, given a finite subset $Z\subseteq F_{n}$, decides whether or not the subgroup $H=\langle Z\rangle \leq F_{n}$ contains a primitive element of $F_{n},$ that is, an element of a free generating set of $F_{n}.$[25] • Day obtained analogs of Whitehead's algorithm and of Whitehead's peak reduction for automorphic equivalence of elements of right-angled Artin groups.[26] References 1. J. H. C. Whitehead, On equivalent sets of elements in a free group, Ann. of Math. (2) 37:4 (1936), 782–800. MR1503309 2. Suhas Pandit, A note on automorphisms of the sphere complex. Proc. Indian Acad. Sci. Math. Sci. 124:2 (2014), 255–265; MR3218895 3. Allen Hatcher, Homological stability for automorphism groups of free groups, Commentarii Mathematici Helvetici 70:1 (1995) 39–62; MR1314940 4. Karen Vogtmann, Automorphisms of free groups and outer space. Proceedings of the Conference on Geometric and Combinatorial Group Theory, Part I (Haifa, 2000). Geometriae Dedicata 94 (2002), 1–31; MR1950871 5. Andrew Clifford, and Richard Z. Goldstein, Sets of primitive elements in a free group. Journal of Algebra 357 (2012), 271–278; MR2905255 6. Elvira Rapaport, On free groups and their automorphisms. Acta Mathematica 99 (1958), 139–163; MR0131452 7. P. J. Higgins, and R. C. Lyndon, Equivalence of elements under automorphisms of a free group. Journal of the London Mathematical Society (2) 8 (1974), 254–258; MR0340420 8. Roger Lyndon and Paul Schupp, Combinatorial group theory. Reprint of the 1977 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. ISBN 3-540-41158-5; MR1812024 9. Marc Culler; Karen Vogtmann (1986). "Moduli of graphs and automorphisms of free groups" (PDF). Inventiones Mathematicae. 
84 (1): 91–119. doi:10.1007/BF01388734. MR 0830040. S2CID 122869546. 10. Ilya Kapovich, Paul Schupp, and Vladimir Shpilrain, Generic properties of Whitehead's algorithm and isomorphism rigidity of random one-relator groups. Pacific Journal of Mathematics 223:1 (2006), 113–140 11. Abdó Roig, Enric Ventura, and Pascal Weil, On the complexity of the Whitehead minimization problem. International Journal of Algebra and Computation 17:8 (2007), 1611–1634; MR2378055 12. Bilal Khan, The structure of automorphic conjugacy in the free group of rank two. Computational and experimental group theory, 115–196, Contemp. Math., 349, American Mathematical Society, Providence, RI, 2004 13. Sava Krstić, Martin Lustig, and Karen Vogtmann, An equivariant Whitehead algorithm and conjugacy for roots of Dehn twist automorphisms. Proceedings of the Edinburgh Mathematical Society (2) 44:1 (2001), 117–141 14. James McCool, Some finitely presented subgroups of the automorphism group of a free group. Journal of Algebra 35:1-3 (1975), 205–213; MR0396764 15. James McCool, A presentation for the automorphism group of a free group of finite rank. Journal of the London Mathematical Society (2) 8 (1974), 259–266; MR0340421 16. James McCool, On Nielsen's presentation of the automorphism group of a free group. Journal of the London Mathematical Society (2) 10 (1975), 265–270 17. Stephen Gersten, On Whitehead's algorithm, Bulletin of the American Mathematical Society 10:2 (1984), 281–284; MR0733696 18. Donald J. Collins, and Heiner Zieschang, Rescuing the Whitehead method for free products. I. Peak reduction. Mathematische Zeitschrift 185:4 (1984), 487–504 MR0733769 19. Donald J. Collins, and Heiner Zieschang, Rescuing the Whitehead method for free products. II. The algorithm. Mathematische Zeitschrift 186:3 (1984), 335–361; MR0744825 20. Nick D. Gilbert, Presentations of the automorphism group of a free product. Proceedings of the London Mathematical Society (3) 54 (1987), no. 1, 115–140. 21. 
Gilbert Levitt and Karen Vogtmann, A Whitehead algorithm for surface groups, Topology 39:6 (2000), 1239–1251 22. Frédérique Bassino, Cyril Nicaud, and Pascal Weil, On the genericity of Whitehead minimality. Journal of Group Theory 19:1 (2016), 137–159 MR3441131 23. Donghi Lee, A tighter bound for the number of words of minimum length in an automorphic orbit. Journal of Algebra 305:2 (2006), 1093–1101; MR2266870 24. Joan Birman, Ki Hyoung Ko, and Sang Jin Lee, A new approach to the word and conjugacy problems in the braid groups, Advances in Mathematics 139:2 (1998), 322–353; Zbl 0937.20016 MR1654165 25. Andrew Clifford, and Richard Z. Goldstein, Subgroups of free groups and primitive elements. Journal of Group Theory 13:4 (2010), 601–611; MR2661660 26. Matthew Day, Full-featured peak reduction in right-angled Artin groups. Algebraic and Geometric Topology 14:3 (2014), 1677–1743 MR3212581 Further reading • Heiner Zieschang, On the Nielsen and Whitehead methods in combinatorial group theory and topology. Groups—Korea '94 (Pusan), 317–337, Proceedings of the 3rd International Conference on the Theory of Groups held at Pusan National University, Pusan, August 18–25, 1994. Edited by A. C. Kim and D. L. Johnson. de Gruyter, Berlin, 1995; ISBN 3-11-014793-9 MR1476976 • Karen Vogtmann's lecture notes on Whitehead's algorithm using Whitehead's 3-manifold model
Wikipedia
Whitehead's lemma (Lie algebra) In homological algebra, Whitehead's lemmas (named after J. H. C. Whitehead) represent a series of statements regarding representation theory of finite-dimensional, semisimple Lie algebras in characteristic zero. Historically, they are regarded as leading to the discovery of Lie algebra cohomology.[1] One usually makes the distinction between Whitehead's first and second lemma for the corresponding statements about first and second order cohomology, respectively, but there are similar statements pertaining to Lie algebra cohomology in arbitrary orders which are also attributed to Whitehead. The first Whitehead lemma is an important step toward the proof of Weyl's theorem on complete reducibility. Statements Without mentioning cohomology groups, one can state Whitehead's first lemma as follows: Let ${\mathfrak {g}}$ be a finite-dimensional, semisimple Lie algebra over a field of characteristic zero, V a finite-dimensional module over it, and $f\colon {\mathfrak {g}}\to V$ a linear map such that $f([x,y])=xf(y)-yf(x)$. Then there exists a vector $v\in V$ such that $f(x)=xv$ for all $x\in {\mathfrak {g}}$. In terms of Lie algebra cohomology, this is, by definition, equivalent to the fact that $H^{1}({\mathfrak {g}},V)=0$ for every such representation. The proof uses a Casimir element (see the proof below).[2] Similarly, Whitehead's second lemma states that under the conditions of the first lemma, also $H^{2}({\mathfrak {g}},V)=0$. Another related statement, which is also attributed to Whitehead, describes Lie algebra cohomology in arbitrary order: Given the same conditions as in the previous two statements, but further let $V$ be irreducible under the ${\mathfrak {g}}$-action and let ${\mathfrak {g}}$ act nontrivially, so ${\mathfrak {g}}\cdot V\neq 0$. 
Then $H^{q}({\mathfrak {g}},V)=0$ for all $q\geq 0$.[3] Proof[4] As above, let ${\mathfrak {g}}$ be a finite-dimensional semisimple Lie algebra over a field of characteristic zero and $\pi :{\mathfrak {g}}\to {\mathfrak {gl}}(V)$ a finite-dimensional representation (which is semisimple but the proof does not use that fact). Let ${\mathfrak {g}}=\operatorname {ker} (\pi )\oplus {\mathfrak {g}}_{1}$ where ${\mathfrak {g}}_{1}$ is an ideal of ${\mathfrak {g}}$. Then, since ${\mathfrak {g}}_{1}$ is semisimple, the trace form $(x,y)\mapsto \operatorname {tr} (\pi (x)\pi (y))$, relative to $\pi $, is nondegenerate on ${\mathfrak {g}}_{1}$. Let $e_{i}$ be a basis of ${\mathfrak {g}}_{1}$ and $e^{i}$ the dual basis with respect to this trace form. Then define the Casimir element $c$ by $c=\sum _{i}e_{i}e^{i},$ which is an element of the universal enveloping algebra of ${\mathfrak {g}}_{1}$. Via $\pi $, it acts on V as a linear endomorphism (namely, $\pi (c)=\sum _{i}\pi (e_{i})\circ \pi (e^{i}):V\to V$.) The key property is that it commutes with $\pi ({\mathfrak {g}})$ in the sense $\pi (x)\pi (c)=\pi (c)\pi (x)$ for each element $x\in {\mathfrak {g}}$. Also, $\operatorname {tr} (\pi (c))=\sum \operatorname {tr} (\pi (e_{i})\pi (e^{i}))=\dim {\mathfrak {g}}_{1}.$ Now, by Fitting's lemma, we have the vector space decomposition $V=V_{0}\oplus V_{1}$ such that $\pi (c):V_{i}\to V_{i}$ is a (well-defined) nilpotent endomorphism for $i=0$ and is an automorphism for $i=1$. Since $\pi (c)$ commutes with $\pi ({\mathfrak {g}})$, each $V_{i}$ is a ${\mathfrak {g}}$-submodule. Hence, it is enough to prove the lemma separately for $V=V_{0}$ and $V=V_{1}$. First, suppose $\pi (c)$ is a nilpotent endomorphism. Then, by the earlier observation, $\dim({\mathfrak {g}}/\operatorname {ker} (\pi ))=\operatorname {tr} (\pi (c))=0$; that is, $\pi $ is a trivial representation. 
Since ${\mathfrak {g}}=[{\mathfrak {g}},{\mathfrak {g}}]$, the condition on $f$ implies that $f(x)=0$ for each $x\in {\mathfrak {g}}$; i.e., the zero vector $v=0$ satisfies the requirement. Second, suppose $\pi (c)$ is an automorphism. For notational simplicity, we will drop $\pi $ and write $xv=\pi (x)v$. Also let $(\cdot ,\cdot )$ denote the trace form used earlier. Let $w=\sum e_{i}f(e^{i})$, which is a vector in $V$. Then $xw=\sum _{i}e_{i}xf(e^{i})+\sum _{i}[x,e_{i}]f(e^{i}).$ Now, $[x,e_{i}]=\sum _{j}([x,e_{i}],e^{j})e_{j}=-\sum _{j}([x,e^{j}],e_{i})e_{j}$ and, since $[x,e^{j}]=\sum _{i}([x,e^{j}],e_{i})e^{i}$, the second term of the expansion of $xw$ is $-\sum _{j}e_{j}f([x,e^{j}])=-\sum _{i}e_{i}(xf(e^{i})-e^{i}f(x)).$ Thus, $xw=\sum _{i}e_{i}e^{i}f(x)=cf(x).$ Since $c$ is invertible and $c^{-1}$ commutes with $x$, the vector $v=c^{-1}w$ has the required property. $\square $ Notes 1. Jacobson 1979, p. 93 2. Jacobson 1979, p. 77, p. 95 3. Jacobson 1979, p. 96 4. Jacobson 1979, Ch. III, § 7, Lemma 3. References • Jacobson, Nathan (1979). Lie algebras (Republication of the 1962 original ed.). Dover Publications. ISBN 978-0-486-13679-0. OCLC 867771145.
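The Casimir construction in the proof above can be checked numerically in a small case. The sketch below (an illustrative check using NumPy, not part of the cited sources) takes ${\mathfrak {g}}={\mathfrak {sl}}_{2}$ with its standard two-dimensional representation, builds the dual basis from the inverse Gram matrix of the trace form, and verifies that $\pi (c)$ commutes with $\pi ({\mathfrak {g}})$ and has trace $\dim {\mathfrak {g}}_{1}=3$; here $V$ is irreducible, so $\pi (c)$ is even a scalar.

```python
import numpy as np

# Standard basis e, f, h of sl_2 acting on V = C^2.
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, f, h]

# Trace form B(x, y) = tr(pi(x) pi(y)); pi is faithful, so g_1 = sl_2.
B = np.array([[np.trace(x @ y) for y in basis] for x in basis])

# Dual basis e^i with tr(pi(e_i) pi(e^j)) = delta_ij, via the
# inverse Gram matrix.
Binv = np.linalg.inv(B)
dual = [sum(Binv[j, i] * basis[j] for j in range(3)) for i in range(3)]

# Casimir operator pi(c) = sum_i pi(e_i) pi(e^i).
C = sum(x @ y for x, y in zip(basis, dual))
```

For this representation the computation gives $\pi (c)={\tfrac {3}{2}}\operatorname {id} _{V}$, whose trace is indeed $3=\dim {\mathfrak {sl}}_{2}$.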
Whitehead Prize The Whitehead Prize is awarded yearly by the London Mathematical Society to multiple mathematicians working in the United Kingdom who are at an early stage of their career. The prize is named in memory of homotopy theory pioneer J. H. C. Whitehead. More specifically, people being considered for the award must be resident in the United Kingdom on 1 January of the award year or must have been educated in the United Kingdom. Also, the candidates must have less than 15 years of work at the postdoctorate level and must not have received any other prizes from the Society. Since the inception of the prize, no more than two could be awarded per year, but in 1999 this was increased to four "to allow for the award of prizes across the whole of mathematics, including applied mathematics, mathematical physics, and mathematical aspects of computer science". The Senior Whitehead Prize has similar residence requirements and rules concerning prior prizes, but is intended to recognize more experienced mathematicians. List of Whitehead Prize winners • 1979 Peter Cameron, Peter Johnstone • 1980 H. G. Dales, Toby Stafford[1] • 1981 Nigel Hitchin, Derek F. Holt • 1982 John M. Ball, Martin J. Taylor • 1983 Jeff Paris, Andrew Ranicki • 1984 Simon Donaldson, Samuel James Patterson • 1985 Dan Segal, Philip J. Rippon • 1986 Terence Lyons, David A. Rand[2] • 1987 Caroline Series, Aidan H. Schofield • 1988 S. M. Rees, P. J. Webb, Andrew Wiles • 1989 D. E. Evans, Frances Kirwan, R. S. Ward • 1990 Martin T. Barlow, Richard Taylor, Antony Wassermann • 1991 Nicholas Manton, A. J. Scholl • 1992 K. M. Ball, Richard Borcherds • 1993 D. J. Benson, Peter B. Kronheimer, D. G. Vassiliev • 1994 P. H. Kropholler, R. S. MacKay • 1995 Timothy Gowers, Jeremy Rickard • 1996 John Roe, Y. Safarov • 1997 Brian Bowditch, A. Grigor'yan, Dominic Joyce • 1998 S. J. Chapman, Igor Rivin, Jan Nekovář • 1999 Martin Bridson, G. Friesecke, Nicholas Higham, Imre Leader • 2000 M. A. J. 
Chaplain, Gwyneth Stallard, Andrew M. Stuart, Burt Totaro • 2001 M. McQuillan, A. N. Skorobogatov, V. Smyshlyaev, J. R. King • 2002 Kevin Buzzard, Alessio Corti, Marianna Csörnyei, C. Teleman • 2003 N. Dorey, T. Hall, Marc Lackenby, M. Nazarov • 2004 M. Ainsworth, Vladimir Markovic, Richard Thomas,[3] Ulrike Tillmann • 2005 Ben Green, Bernd Kirchheim, Neil Strickland, Peter Topping • 2006 Raphaël Rouquier, Jonathan Sherratt, Paul Sutcliffe, Agata Smoktunowicz • 2007 Nikolay Nikolov, Oliver Riordan, Ivan Smith, Catharina Stroppel • 2008 Timothy Browning, Tamás Hausel, Martin Hairer, Nina Snaith • 2009 Mihalis Dafermos, Cornelia Druțu, Bethany Rose Marsh, Markus Owen • 2010 Harald Helfgott, Jens Marklof, Lasse Rempe-Gillen, Françoise Tisseur • 2011 Jonathan Bennett, Alexander Gorodnik, Barbara Niethammer, Alexander Pushnitski • 2012 Toby Gee, Eugen Vărvărucă, Sarah Waters, Andreas Winter • 2013 Luis Fernando Alday, André Neves, Tom Sanders, Corinna Ulcigrai • 2014 Clément Mouhot, Ruth Baker, Tom Coates, Daniela Kühn and Deryk Osthus[4] • 2015 Peter Keevash, James Maynard, Christoph Ortner, Mason Porter, Dominic Vella, David Loeffler and Sarah Zerbes • 2016 A. Bayer, G. Holzegel, Jason P. 
Miller, Carola-Bibiane Schönlieb[5] • 2017 Julia Gog, András Máthé, Ashley Montanaro, Oscar Randal-Williams, Jack Thorne, Michael Wemyss[6] • 2018 Caucher Birkar, Ana Caraiani, Heather Harrington, Valerio Lucarini, Filip Rindler, Péter Varjú[7] • 2019 Alexandr Buryak, David Conlon, Toby Cubitt, Anders Hansen, William Parnell, Nick Sheridan[8] • 2020 Maria Bruna, Ben Davison, Adam Harper, Holly Krieger, Andrea Mondino, Henry Wilton[9] • 2021 Jonathan Evans, Patrick Farrell, Agelos Georgakopoulos, Michael Magee, Aretha Teckentrup, Stuart White[10] • 2022 Jessica Fintzen, Ian Griffiths, Dawid Kielak, Chunyi Li, Tadahiro Oh, Euan Spence[11] See also • Fröhlich Prize • Senior Whitehead Prize • Shephard Prize • Berwick Prize • Naylor Prize and Lectureship • Pólya Prize (LMS) • De Morgan Medal • List of mathematics awards References 1. University of Manchester website accessed 28 December 2008 2. Biography on EPSRC website accessed 27 December 2008 Archived 21 November 2008 at the Wayback Machine 3. Imperial College web site 4. LMS Website http://www.lms.ac.uk/prizes/lms-prizes-2014 accessed 6 December 2014 5. LMS website https://www.lms.ac.uk/prizes/list-lms-prize-winners#Whead accessed July 2016 6. LMS website https://www.lms.ac.uk/prizes/2017-nominations-lms-prizes 7. LMS website https://www.lms.ac.uk/news-entry/29062018-1745/2018-lms-prize-winners 8. Prize Winners 2019 9. "LMS Prize Winners 2020 | London Mathematical Society". www.lms.ac.uk. Retrieved 26 June 2020. 10. "2021 LMS Prize Winners | London Mathematical Society". www.lms.ac.uk. Retrieved 19 July 2021. 11. 2022 LMS Winners External links • Prize rules • List of LMS prize winners This article incorporates material from Whitehead Prize on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. This article incorporates material from list of mathematicians awarded the Whitehead Prize on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. 
Whitehead conjecture The Whitehead conjecture (also known as the Whitehead asphericity conjecture) is a claim in algebraic topology. It was formulated by J. H. C. Whitehead in 1941. It states that every connected subcomplex of a two-dimensional aspherical CW complex is aspherical. Not to be confused with Whitehead theorem or Whitehead problem. A group presentation $G=(S\mid R)$ is called aspherical if the two-dimensional CW complex $K(S\mid R)$ associated with this presentation is aspherical or, equivalently, if $\pi _{2}(K(S\mid R))=0$. The Whitehead conjecture is equivalent to the conjecture that every sub-presentation of an aspherical presentation is aspherical. In 1997, Mladen Bestvina and Noel Brady constructed a group G so that either G is a counterexample to the Eilenberg–Ganea conjecture, or there must be a counterexample to the Whitehead conjecture; in other words, it is not possible for both conjectures to be true. References • Whitehead, J. H. C. (1941). "On adding relations to homotopy groups". Annals of Mathematics. 2nd Ser. 42 (2): 409–428. doi:10.2307/1968907. JSTOR 1968907. MR 0004123. • Bestvina, Mladen; Brady, Noel (1997). "Morse theory and finiteness properties of groups". Inventiones Mathematicae. 129 (3): 445–470. Bibcode:1997InMat.129..445B. doi:10.1007/s002220050168. MR 1465330. S2CID 120422255.
Whitehead's lemma For a lemma on Lie algebras, see Whitehead's lemma (Lie algebras). Whitehead's lemma is a technical result in abstract algebra used in algebraic K-theory. It states that a matrix of the form ${\begin{bmatrix}u&0\\0&u^{-1}\end{bmatrix}}$ is equivalent to the identity matrix by elementary transformations (that is, transvections): ${\begin{bmatrix}u&0\\0&u^{-1}\end{bmatrix}}=e_{21}(u^{-1})e_{12}(1-u)e_{21}(-1)e_{12}(1-u^{-1}).$ Here, $e_{ij}(s)$ indicates the matrix whose diagonal entries are $1$, whose $(i,j)$ entry is $s$, and whose remaining entries are $0$. The name "Whitehead's lemma" also refers to the closely related result that the derived group of the stable general linear group is the group generated by elementary matrices.[1][2] In symbols, $\operatorname {E} (A)=[\operatorname {GL} (A),\operatorname {GL} (A)]$. This holds for the stable group (the direct limit of matrices of finite size) over any ring, but not in general for the unstable groups, even over a field. For instance for $\operatorname {GL} (2,\mathbb {Z} /2\mathbb {Z} )$ one has: $\operatorname {Alt} (3)\cong [\operatorname {GL} _{2}(\mathbb {Z} /2\mathbb {Z} ),\operatorname {GL} _{2}(\mathbb {Z} /2\mathbb {Z} )]<\operatorname {E} _{2}(\mathbb {Z} /2\mathbb {Z} )=\operatorname {SL} _{2}(\mathbb {Z} /2\mathbb {Z} )=\operatorname {GL} _{2}(\mathbb {Z} /2\mathbb {Z} )\cong \operatorname {Sym} (3),$ where Alt(3) and Sym(3) denote the alternating and symmetric group, respectively, on 3 letters. See also • Special linear group#Relations to other subgroups of GL(n,A) References 1. Milnor, John Willard (1971). Introduction to algebraic K-theory. Annals of Mathematics Studies. Vol. 72. Princeton, NJ: Princeton University Press. Section 3.1. MR 0349811. Zbl 0237.18005. 2. Snaith, V. P. (1994). Explicit Brauer Induction: With Applications to Algebra and Number Theory. Cambridge Studies in Advanced Mathematics. Vol. 40. Cambridge University Press. p. 164. ISBN 0-521-46015-8. Zbl 0991.20005.
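The displayed factorization can be verified by direct multiplication. The snippet below is an illustrative check of the identity for a sample unit (the helper names and the choice $u=3/2$ are mine, not from the cited sources), using exact rational arithmetic so that no floating-point tolerance is needed.

```python
from fractions import Fraction

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def e(i, j, s):
    """Elementary 2x2 matrix e_ij(s): the identity with entry (i, j)
    set to s (zero-indexed, i != j)."""
    M = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    M[i][j] = Fraction(s)
    return M

u = Fraction(3, 2)  # any unit works; 3/2 is a sample choice

# e_21(u^-1) e_12(1-u) e_21(-1) e_12(1-u^-1), zero-indexed below.
prod = mat_mul(mat_mul(e(1, 0, 1 / u), e(0, 1, 1 - u)),
               mat_mul(e(1, 0, -1), e(0, 1, 1 - 1 / u)))
assert prod == [[u, 0], [0, 1 / u]]
```

Expanding the product symbolically shows the identity holds for every invertible $u$, which is exactly the statement of the lemma.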
Whitehead link In knot theory, the Whitehead link, named for J. H. C. Whitehead, is one of the most basic links. It can be drawn as an alternating link with five crossings, from the overlay of a circle and a figure-eight shaped loop. Summary of invariants: braid length 5 • braid no. 3 • crossing no. 5 • hyperbolic volume 3.663862377 • linking no. 0 • unknotting no. 1 • Conway notation [212] • A–B notation $5_{1}^{2}$ • Thistlethwaite L5a1 • last / next L4a1 / L6a1 • other: alternating. (Figures: an alternating link diagram, and an alternative diagram symmetric under a 3d rotation around a vertical line in the plane of the drawing.[1]) A common drawing of this link is formed by overlaying a figure-eight shaped loop with another circular loop surrounding the crossing of the figure-eight. The above-below relation between these two unknots is then set as an alternating link, with the consecutive crossings on each loop alternating between under and over. This drawing has five crossings, one of which is the self-crossing of the figure-eight curve, which does not count towards the linking number. Because the remaining crossings have equal numbers of under and over crossings on each loop, its linking number is 0. It is not isotopic to the unlink, but it is link homotopic to the unlink. Although this construction of the link treats its two loops differently from each other, the two loops are topologically symmetric: it is possible to deform the same link into a drawing of the same type in which the loop that was drawn as a figure eight is circular and vice versa.[2] Alternatively, there exist realizations of this link in three dimensions in which the two loops can be taken to each other by a geometric symmetry of the realization.[1] In braid theory notation, the link is written $\sigma _{1}^{2}\sigma _{2}^{2}\sigma _{1}^{-1}\sigma _{2}^{-2}.\,$ Its Jones polynomial is $V(t)=t^{-{3 \over 2}}\left(-1+t-2t^{2}+t^{3}-2t^{4}+t^{5}\right).$ This polynomial and $V(1/t)$ are the two factors of the Jones polynomial of the L10a140 link. 
Notably, $V(1/t)$ is the Jones polynomial for the mirror image of a link having Jones polynomial $V(t)$. Volume The hyperbolic volume of the complement of the Whitehead link is 4 times Catalan's constant, approximately 3.66. The Whitehead link complement is one of two two-cusped hyperbolic manifolds with the minimum possible volume, the other being the complement of the pretzel link with parameters (−2, 3, 8).[3] Dehn filling on one component of the Whitehead link can produce the sibling manifold of the complement of the figure-eight knot, and Dehn filling on both components can produce the Weeks manifold, respectively one of the minimum-volume hyperbolic manifolds with one cusp and the minimum-volume hyperbolic manifold with no cusps. History The Whitehead link is named for J. H. C. Whitehead, who spent much of the 1930s looking for a proof of the Poincaré conjecture. In 1934, he used the link as part of his construction of the now-named Whitehead manifold, which refuted his previous purported proof of the conjecture.[4] See also Wikimedia Commons has media related to Whitehead links. • Solomon's knot • Weeks manifold • Whitehead double References 1. Skopenkov, A. (2020), "Fig. 22: Isotopy of the Whitehead link", A user's guide to basic knot and link theory, p. 17, arXiv:2001.01472v1 2. Cundy, H. Martyn; Rollett, A.P. (1961), Mathematical models (2nd ed.), Oxford: Clarendon Press, p. 59, MR 0124167 3. Agol, Ian (2010), "The minimal volume orientable hyperbolic 2-cusped 3-manifolds", Proceedings of the American Mathematical Society, 138 (10): 3723–3732, arXiv:0804.0043, doi:10.1090/S0002-9939-10-10364-5, MR 2661571 4. Gordon, C. McA. (1999), "3-dimensional topology up to 1960" (PDF), in James, I. M. (ed.), History of Topology, Amsterdam: North-Holland, pp. 449–489, doi:10.1016/B978-044482375-5/50016-X, MR 1674921; see p. 480 External links • "L5a1 knot-theoretic link", The Knot Atlas. 
• Weisstein, Eric W., "Whitehead link", MathWorld
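The hyperbolic volume quoted above for the Whitehead link complement is 4 times Catalan's constant; the snippet below (my own quick numerical check, not from the sources) reproduces the decimal value 3.663862377 directly from the defining alternating series for Catalan's constant.

```python
# Catalan's constant G = sum_{k>=0} (-1)^k / (2k+1)^2; the hyperbolic
# volume of the Whitehead link complement is 4G.
N = 2_000_000  # alternating series: truncation error < 1/(2N+1)^2
G = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(N))
volume = 4 * G
print(f"{volume:.9f}")  # ≈ 3.663862377
```

The truncation error of an alternating series is bounded by its first omitted term, so this many terms is far more than enough for nine decimal places.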
Wikipedia
Whitehead manifold In mathematics, the Whitehead manifold is an open 3-manifold that is contractible, but not homeomorphic to $\mathbb {R} ^{3}.$ J. H. C. Whitehead (1935) discovered this puzzling object while he was trying to prove the Poincaré conjecture, correcting an error in an earlier paper Whitehead (1934, theorem 3) where he incorrectly claimed that no such manifold exists. A contractible manifold is one that can continuously be shrunk to a point inside the manifold itself. For example, an open ball is a contractible manifold. All manifolds homeomorphic to the ball are contractible, too. One can ask whether all contractible manifolds are homeomorphic to a ball. For dimensions 1 and 2, the answer is classical and it is "yes". In dimension 2, it follows, for example, from the Riemann mapping theorem. Dimension 3 presents the first counterexample: the Whitehead manifold.[1] Construction Take a copy of $S^{3},$ the three-dimensional sphere. Now find a compact unknotted solid torus $T_{1}$ inside the sphere. (A solid torus is an ordinary three-dimensional doughnut, that is, a filled-in torus, which is topologically a circle times a disk.) The closed complement of the solid torus inside $S^{3}$ is another solid torus. Now take a second solid torus $T_{2}$ inside $T_{1}$ so that $T_{2}$ and a tubular neighborhood of the meridian curve of $T_{1}$ is a thickened Whitehead link. Note that $T_{2}$ is null-homotopic in the complement of the meridian of $T_{1}.$ This can be seen by considering $S^{3}$ as $\mathbb {R} ^{3}\cup \{\infty \}$ and the meridian curve as the z-axis together with $\infty .$ The torus $T_{2}$ has zero winding number around the z-axis. Thus the necessary null-homotopy follows. 
Since the Whitehead link is symmetric, that is, a homeomorphism of the 3-sphere switches components, it is also true that the meridian of $T_{1}$ is null-homotopic in the complement of $T_{2}.$ Now embed $T_{3}$ inside $T_{2}$ in the same way as $T_{2}$ lies inside $T_{1},$ and so on, ad infinitum. Define W, the Whitehead continuum, to be $W=T_{\infty },$ or more precisely the intersection of all the $T_{k}$ for $k=1,2,3,\dots .$ The Whitehead manifold is defined as $X=S^{3}\setminus W,$ which is a non-compact manifold without boundary. It follows from our previous observation, the Hurewicz theorem, and Whitehead's theorem on homotopy equivalence, that X is contractible. In fact, a closer analysis involving a result of Morton Brown shows that $X\times \mathbb {R} \cong \mathbb {R} ^{4}.$ However, X is not homeomorphic to $\mathbb {R} ^{3}.$ The reason is that it is not simply connected at infinity. The one-point compactification of X is the space $S^{3}/W$ (with W crunched to a point). It is not a manifold. However, $\left(\mathbb {R} ^{3}/W\right)\times \mathbb {R} $ is homeomorphic to $\mathbb {R} ^{4}.$ David Gabai showed that X is the union of two copies of $\mathbb {R} ^{3}$ whose intersection is also homeomorphic to $\mathbb {R} ^{3}.$[1] Related spaces More examples of open, contractible 3-manifolds may be constructed by proceeding in similar fashion and picking different embeddings of $T_{i+1}$ in $T_{i}$ in the iterative process. Each embedding should be an unknotted solid torus in the 3-sphere. The essential properties are that the meridian of $T_{i}$ should be null-homotopic in the complement of $T_{i+1},$ and in addition the longitude of $T_{i+1}$ should not be null-homotopic in $T_{i}\setminus T_{i+1}.$ Another variation is to pick several subtori at each stage instead of just one. The cones over some of these continua appear as the complements of Casson handles in a 4-ball. 
The dogbone space is not a manifold but its product with $\mathbb {R} ^{1}$ is homeomorphic to $\mathbb {R} ^{4}.$ See also • List of topologies • Tame manifold References 1. Gabai, David (2011). "The Whitehead manifold is a union of two Euclidean spaces". Journal of Topology. 4 (3): 529–534. doi:10.1112/jtopol/jtr010. Further reading • Kirby, Robion (1989). The topology of 4-manifolds. Lecture Notes in Mathematics, no. 1374, Springer-Verlag. ISBN 978-0-387-51148-1. • Rolfsen, Dale (2003), "Section 3.I.8.", Knots and links, AMS Chelsea Publishing, p. 82, ISBN 978-0821834367 • Whitehead, J. H. C. (1934), "Certain theorems about three-dimensional manifolds (I)", Quarterly Journal of Mathematics, 5 (1): 308–320, Bibcode:1934QJMat...5..308W, doi:10.1093/qmath/os-5.1.308 • Whitehead, J. H. C. (1935), "A certain open manifold whose group is unity", Quarterly Journal of Mathematics, 6 (1): 268–279, Bibcode:1935QJMat...6..268W, doi:10.1093/qmath/os-6.1.268
Whitehead's point-free geometry In mathematics, point-free geometry is a geometry whose primitive ontological notion is region rather than point. Two axiomatic systems are set out below, one grounded in mereology, the other in mereotopology and known as connection theory. Point-free geometry was first formulated in Whitehead (1919, 1920), not as a theory of geometry or of spacetime, but of "events" and of an "extension relation" between events. Whitehead's purposes were as much philosophical as scientific and mathematical.[lower-alpha 1] Formalizations Whitehead did not set out his theories in a manner that would satisfy present-day canons of formality. The two formal first-order theories described in this entry were devised by others in order to clarify and refine Whitehead's theories. The domain of discourse for both theories consists of "regions." All unquantified variables in this entry should be taken as tacitly universally quantified; hence all axioms should be taken as universal closures. No axiom requires more than three quantified variables; hence a translation of first-order theories into relation algebra is possible. Each set of axioms has but four existential quantifiers. Inclusion-based point-free geometry (mereology) The fundamental primitive binary relation is inclusion, denoted by the infix operator "≤", which corresponds to the binary Parthood relation that is a standard feature in mereological theories. The intuitive meaning of x ≤ y is "x is part of y." Assuming that equality, denoted by the infix operator "=", is part of the background logic, the binary relation Proper Part, denoted by the infix operator "<", is defined as: $x<y\leftrightarrow (x\leq y\land x\not =y).$ The axioms are:[lower-alpha 2] • Inclusion partially orders the domain. G1. $x\leq x.$ (reflexive) G2. $(x\leq z\land z\leq y)\rightarrow x\leq y.$ (transitive) WP4. G3. 
$(x\leq y\land y\leq x)\rightarrow x=y.$ (antisymmetric) • Given any two regions, there exists a region that includes both of them. WP6. G4. $\exists z[x\leq z\land y\leq z].$ • Proper Part densely orders the domain. WP5. G5. $x<y\rightarrow \exists z[x<z<y].$ • Neither atomic regions nor a universal region exist. Hence the domain has neither an upper nor a lower bound. WP2. G6. $\exists y\exists z[y<x\land x<z].$ • Proper Parts Principle. If all the proper parts of x are proper parts of y, then x is included in y. WP3. G7. $\forall z[z<x\rightarrow z<y]\rightarrow x\leq y.$ A model of G1–G7 is an inclusion space. Definition (Gerla and Miranda 2008: Def. 4.1). Given some inclusion space S, an abstractive class is a class G of regions that is totally ordered by inclusion, such that no region is included in all of the members of G. Intuitively, an abstractive class defines a geometrical entity whose dimensionality is less than that of the inclusion space. For example, if the inclusion space is the Euclidean plane, then the corresponding abstractive classes are points and lines. Inclusion-based point-free geometry (henceforth "point-free geometry") is essentially an axiomatization of Simons's (1987: 83) system W. In turn, W formalizes a theory in Whitehead (1919) whose axioms are not made explicit. Point-free geometry is W with this defect repaired. Simons (1987) did not repair this defect, instead proposing in a footnote that the reader do so as an exercise. The primitive relation of W is Proper Part, a strict partial order. The theory[1] of Whitehead (1919) has a single primitive binary relation K defined as xKy ↔ y < x. Hence K is the converse of Proper Part. Simons's WP1 asserts that Proper Part is irreflexive and so corresponds to G1. G3 establishes that inclusion, unlike Proper Part, is antisymmetric. 
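A toy model can make these axioms concrete: take regions to be open intervals with rational endpoints, with "≤" as containment. The sketch below (the interval encoding and helper names are choices for this illustration, not part of the formal theory) spot-checks G1, G2, and the density axiom G5 on sample regions:

```python
from fractions import Fraction as F

# Toy inclusion space: regions are open intervals (a, b) with rational
# endpoints; x <= y means the interval x is contained in the interval y.
# This only spot-checks G1, G2 and G5 on samples; it is not a proof.

def leq(x, y):
    """x <= y : containment of intervals."""
    return y[0] <= x[0] and x[1] <= y[1]

def proper(x, y):
    """Proper Part: x < y."""
    return leq(x, y) and x != y

def between(x, y):
    """Given x < y, a witness z with x < z < y (density, G5)."""
    return ((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)

x, y = (F(0), F(1)), (F(-1), F(2))
z = between(x, y)
assert leq(x, x)                                # G1: reflexive
assert leq(x, z) and leq(z, y) and leq(x, y)    # G2: transitive chain
assert proper(x, z) and proper(z, y)            # G5: density witness
```

Because the endpoints are rationals, the midpoint construction always yields a strictly intermediate interval, which is why this particular model is dense.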
Point-free geometry is closely related to a dense linear order D, whose axioms are G1-3, G5, and the totality axiom $x\leq y\lor y\leq x.$[2] Hence inclusion-based point-free geometry would be a proper extension of D (namely D ∪ {G4, G6, G7}), were it not that the D relation "≤" is a total order. Connection theory (mereotopology) A different approach was proposed in Whitehead (1929), one inspired by De Laguna (1922). Whitehead took as primitive the topological notion of "contact" between two regions, resulting in a primitive "connection relation" between events. Connection theory C is a first-order theory that distills the first 12 of the 31 assumptions in chapter 2 of part 4 of Process and Reality into 6 axioms, C1-C6. C is a proper fragment of the theories proposed in Clarke (1981), who noted their mereological character. Theories that, like C, feature both inclusion and topological primitives, are called mereotopologies. C has one primitive relation, binary "connection," denoted by the prefixed predicate letter C. That x is included in y can now be defined as x ≤ y ↔ ∀z[Czx→Czy]. Unlike the case with inclusion spaces, connection theory enables defining "non-tangential" inclusion,[lower-alpha 3] a total order that enables the construction of abstractive classes. Gerla and Miranda (2008) argue that only thus can mereotopology unambiguously define a point. The axioms C1-C6 below are, but for numbering, those of Def. 3.1 in Gerla and Miranda (2008): • C is reflexive. C.1. C1. $\ Cxx.$ • C is symmetric. C.2. C2. $Cxy\rightarrow Cyx.$ • C is extensional. C.11. C3. $\forall z[Czx\leftrightarrow Czy]\rightarrow x=y.$ • All regions have proper parts, so that C is an atomless theory. P.9. C4. $\exists y[y<x].$ • Given any two regions, there is a region connected to both of them. C5. $\exists z[Czx\land Czy].$ • All regions have at least two unconnected parts. C.14. C6. $\exists y\exists z[(y\leq x)\land (z\leq x)\land \neg Cyz].$ A model of C is a connection space. 
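As a parallel toy illustration (an expository assumption, not a model drawn from the literature), one can read C as "closed intervals intersect" and spot-check the reflexivity, symmetry, and common-neighbour axioms:

```python
# Toy connection relation: regions are closed intervals [a, b] on the
# line, and Cxy holds when the intervals meet.  We spot-check
# reflexivity (C1), symmetry (C2) and the existence of a region
# connected to any two given regions (C5); illustrative only.

def C(x, y):
    """Closed intervals x and y intersect."""
    return x[0] <= y[1] and y[0] <= x[1]

regions = [(0, 1), (1, 2), (5, 7), (-3, 0)]

for x in regions:
    assert C(x, x)                            # C1: reflexive
    for y in regions:
        assert C(x, y) == C(y, x)             # C2: symmetric
        hull = (min(x[0], y[0]), max(x[1], y[1]))
        assert C(hull, x) and C(hull, y)      # C5: common neighbour
```

The hull construction supplies the existential witness required by C5; the remaining axioms (extensionality, atomlessness) would need a richer domain than this finite sample.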
Following the verbal description of each axiom is the identifier of the corresponding axiom in Casati and Varzi (1999). Their system SMT (strong mereotopology) consists of C1-C3, and is essentially due to Clarke (1981).[lower-alpha 4] Any mereotopology can be made atomless by invoking C4, without risking paradox or triviality. Hence C extends the atomless variant of SMT by means of the axioms C5 and C6, suggested by chapter 2 of part 4 of Process and Reality. For an advanced and detailed discussion of systems related to C, see Roeper (1997). Biacino and Gerla (1991) showed that every model of Clarke's theory is a Boolean algebra, and models of such algebras cannot distinguish connection from overlap. It is doubtful whether either fact is faithful to Whitehead's intent. See also • Mereology • Mereotopology • Pointless topology Notes 1. See Kneebone (1963), chpt. 13.5, for a gentle introduction to Whitehead's theory. Also see Lucas (2000), chpt. 10. 2. The axioms G1 to G7 are, but for numbering, those of Def. 2.1 in Gerla and Miranda (2008) (see also Gerla (1995)). The identifiers of the form WPn, included in the verbal description of each axiom, refer to the corresponding axiom in Simons (1987: 83). 3. Presumably this is Casati and Varzi's (1999) "Internal Part" predicate, IPxy ↔ (x≤y)∧(Czx→∃v[v≤z ∧ v≤y]). This definition combines their (4.8) and (3.1). 4. Grzegorczyk (1960) proposed a similar theory, whose motivation was primarily topological. References 1. Kneebone (1963), p. 346. 2. Stoll, R. R., 1963. Set Theory and Logic. Dover reprint, 1979. P. 423. Bibliography • Biacino L., and Gerla G., 1991, "Connection Structures," Notre Dame Journal of Formal Logic 32: 242-47. • Casati, R., and Varzi, A. C., 1999. Parts and places: the structures of spatial representation. MIT Press. • Clarke, Bowman, 1981, "A calculus of individuals based on 'connection'," Notre Dame Journal of Formal Logic 22: 204-18. 
• ------, 1985, "Individuals and Points," Notre Dame Journal of Formal Logic 26: 61-75. • De Laguna, T., 1922, "Point, line and surface as sets of solids," The Journal of Philosophy 19: 449-61. • Gerla, G., 1995, "Pointless Geometries" in Buekenhout, F., Kantor, W. eds., Handbook of incidence geometry: buildings and foundations. North-Holland: 1015-31. • --------, and Miranda A., 2008, "Inclusion and Connection in Whitehead's Point-free Geometry," in Michel Weber and Will Desmond, (eds.), Handbook of Whiteheadian Process Thought, Frankfurt / Lancaster, ontos verlag, Process Thought X1 & X2. • Gruszczynski R., and Pietruszczak A., 2008, "Full development of Tarski's geometry of solids," Bulletin of Symbolic Logic 14:481-540. The paper contains presentation of point-free system of geometry originating from Whitehead's ideas and based on Lesniewski's mereology. It also briefly discusses the relation between point-free and point-based systems of geometry. Basic properties of mereological structures are given as well. • Grzegorczyk, A., 1960, "Axiomatizability of geometry without points," Synthese 12: 228-235. • Kneebone, G., 1963. Mathematical Logic and the Foundation of Mathematics. Dover reprint, 2001. • Lucas, J. R., 2000. Conceptual Roots of Mathematics. Routledge. Chpt. 10, on "prototopology," discusses Whitehead's systems and is strongly influenced by the unpublished writings of David Bostock. • Roeper, P., 1997, "Region-Based Topology," Journal of Philosophical Logic 26: 251-309. • Simons, P., 1987. Parts: A Study in Ontology. Oxford Univ. Press. • Whitehead, A.N., 1916, "La Theorie Relationiste de l'Espace," Revue de Metaphysique et de Morale 23: 423-454. Translated as Hurley, P.J., 1979, "The relational theory of space," Philosophy Research Archives 5: 712-741. • --------, 1919. An Enquiry Concerning the Principles of Natural Knowledge. Cambridge Univ. Press. 2nd ed., 1925. • --------, 1920. The Concept of Nature. Cambridge Univ. Press. 
2004 paperback, Prometheus Books. Being the 1919 Tarner Lectures delivered at Trinity College. • --------, 1979 (1929). Process and Reality. Free Press.
Whitehead problem In group theory, a branch of abstract algebra, the Whitehead problem is the following question: Is every abelian group A with Ext1(A, Z) = 0 a free abelian group? Not to be confused with Whitehead theorem or Whitehead conjecture. Saharon Shelah proved that Whitehead's problem is independent of ZFC, the standard axioms of set theory.[1] Refinement Assume that A is an abelian group such that every short exact sequence $0\rightarrow \mathbb {Z} \rightarrow B\rightarrow A\rightarrow 0$ with B abelian must split. The Whitehead problem then asks: must A be free? This splitting requirement is equivalent to the condition Ext1(A, Z) = 0. Abelian groups A satisfying this condition are sometimes called Whitehead groups, so Whitehead's problem asks: is every Whitehead group free? If this condition is strengthened by requiring that the exact sequence $0\rightarrow C\rightarrow B\rightarrow A\rightarrow 0$ must split for any abelian group C, then it is well known that this is equivalent to A being free. Caution: The converse of Whitehead's problem, namely that every free abelian group is Whitehead, is a well-known group-theoretical fact. Some authors reserve the term Whitehead group for a non-free group A satisfying Ext1(A, Z) = 0. Whitehead's problem then asks: do Whitehead groups exist? Shelah's proof Saharon Shelah showed that the problem is independent of ZFC.[1] More precisely, he showed that: • If every set is constructible, then every Whitehead group is free; • If Martin's axiom and the negation of the continuum hypothesis both hold, then there is a non-free Whitehead group. Since the consistency of ZFC implies the consistency of both of the following: • The axiom of constructibility (which asserts that all sets are constructible); • Martin's axiom plus the negation of the continuum hypothesis, Whitehead's problem cannot be resolved in ZFC. Discussion J. H. C. 
Whitehead, motivated by the second Cousin problem, first posed the problem in the 1950s. Stein answered the question in the affirmative for countable groups.[2] Progress for larger groups was slow, and the problem was considered an important one in algebra for some years. Shelah's result was completely unexpected. While the existence of undecidable statements had been known since Gödel's incompleteness theorem of 1931, previous examples of undecidable statements (such as the continuum hypothesis) had all been in pure set theory. The Whitehead problem was the first purely algebraic problem to be proved undecidable. Shelah later showed that the Whitehead problem remains undecidable even if one assumes the continuum hypothesis.[3][4] The Whitehead conjecture is true if all sets are constructible. That this and other statements about uncountable abelian groups are provably independent of ZFC shows that the theory of such groups is very sensitive to the assumed underlying set theory. See also • Free abelian group • Whitehead torsion • List of statements undecidable in ZFC • Statements true in L References 1. Shelah, S. (1974). "Infinite Abelian groups, Whitehead problem and some constructions". Israel Journal of Mathematics. 18 (3): 243–256. doi:10.1007/BF02757281. MR 0357114. S2CID 123351674. 2. Stein, Karl (1951). "Analytische Funktionen mehrerer komplexer Veränderlichen zu vorgegebenen Periodizitätsmoduln und das zweite Cousinsche Problem". Mathematische Annalen. 123: 201–222. doi:10.1007/BF02054949. MR 0043219. S2CID 122647212. 3. Shelah, S. (1977). "Whitehead groups may not be free, even assuming CH. I". Israel Journal of Mathematics. 28 (3): 193-203. doi:10.1007/BF02759809. hdl:10338.dmlcz/102427. MR 0469757. S2CID 123029484. 4. Shelah, S. (1980). "Whitehead groups may not be free, even assuming CH. II". Israel Journal of Mathematics. 35 (4): 257–285. doi:10.1007/BF02760652. MR 0594332. S2CID 122336538. Further reading • Eklof, Paul C. (December 1976). 
"Whitehead's Problem is Undecidable". The American Mathematical Monthly. 83 (10): 775–788. doi:10.2307/2318684. JSTOR 2318684. An expository account of Shelah's proof. • Eklof, P.C. (2001) [1994], "Whitehead problem", Encyclopedia of Mathematics, EMS Press
Whitehead product In mathematics, the Whitehead product is a graded quasi-Lie algebra structure on the homotopy groups of a space. It was defined by J. H. C. Whitehead in (Whitehead 1941). The relevant MSC code is: 55Q15, Whitehead products and generalizations. Definition Given elements $f\in \pi _{k}(X),g\in \pi _{l}(X)$, the Whitehead bracket $[f,g]\in \pi _{k+l-1}(X)$ is defined as follows: The product $S^{k}\times S^{l}$ can be obtained by attaching a $(k+l)$-cell to the wedge sum $S^{k}\vee S^{l}$; the attaching map is a map $S^{k+l-1}{\stackrel {\phi }{\ \longrightarrow \ }}S^{k}\vee S^{l}.$ Represent $f$ and $g$ by maps $f\colon S^{k}\to X$ and $g\colon S^{l}\to X,$ then compose their wedge with the attaching map, as $S^{k+l-1}{\stackrel {\phi }{\ \longrightarrow \ }}S^{k}\vee S^{l}{\stackrel {f\vee g}{\ \longrightarrow \ }}X.$ The homotopy class of the resulting map does not depend on the choices of representatives, and thus one obtains a well-defined element of $\pi _{k+l-1}(X).$ Grading Note that there is a shift of 1 in the grading (compared to the indexing of homotopy groups), so $\pi _{k}(X)$ has degree $(k-1)$; equivalently, $L_{k}=\pi _{k+1}(X)$ (setting L to be the graded quasi-Lie algebra). Thus $L_{0}=\pi _{1}(X)$ acts on each graded component. Properties The Whitehead product satisfies the following properties: • Bilinearity. $[f,g+h]=[f,g]+[f,h],[f+g,h]=[f,h]+[g,h]$ • Graded Symmetry. $[f,g]=(-1)^{pq}[g,f],f\in \pi _{p}X,g\in \pi _{q}X,p,q\geq 2$ • Graded Jacobi identity. $(-1)^{pr}[[f,g],h]+(-1)^{pq}[[g,h],f]+(-1)^{rq}[[h,f],g]=0,f\in \pi _{p}X,g\in \pi _{q}X,h\in \pi _{r}X{\text{ with }}p,q,r\geq 2$ Sometimes the homotopy groups of a space, together with the Whitehead product operation are called a graded quasi-Lie algebra; this is proven in Uehara & Massey (1957) via the Massey triple product. 
Relation to the action of $\pi _{1}$ If $f\in \pi _{1}(X)$, then the Whitehead bracket is related to the usual action of $\pi _{1}$ on $\pi _{k}$ by $[f,g]=g^{f}-g,$ where $g^{f}$ denotes the conjugation of $g$ by $f$. For $k=1$, this reduces to $[f,g]=fgf^{-1}g^{-1},$ which is the usual commutator in $\pi _{1}(X)$. This can also be seen by observing that the $2$-cell of the torus $S^{1}\times S^{1}$ is attached along the commutator in the $1$-skeleton $S^{1}\vee S^{1}$. Whitehead products on H-spaces For a path connected H-space, all the Whitehead products on $\pi _{*}(X)$ vanish. By the previous subsection, this is a generalization of both the facts that the fundamental groups of H-spaces are abelian, and that H-spaces are simple. Suspension All Whitehead products of classes $\alpha \in \pi _{i}(X)$, $\beta \in \pi _{j}(X)$ lie in the kernel of the suspension homomorphism $\Sigma \colon \pi _{i+j-1}(X)\to \pi _{i+j}(\Sigma X)$ Examples • $[\mathrm {id} _{S^{2}},\mathrm {id} _{S^{2}}]=2\cdot \eta \in \pi _{3}(S^{2})$, where $\eta \colon S^{3}\to S^{2}$ is the Hopf map. This can be shown by observing that the Hopf invariant defines an isomorphism $\pi _{3}(S^{2})\cong \mathbb {Z} $ and explicitly calculating the cohomology ring of the cofibre of a map representing $[\mathrm {id} _{S^{2}},\mathrm {id} _{S^{2}}]$. Using the Pontryagin–Thom construction there is a direct geometric argument, using the fact that the preimage of a regular point is a copy of the Hopf link. Applications to ∞-groupoids Recall that an ∞-groupoid $\Pi (X)$ is an $\infty $-category generalization of groupoids which is conjectured to encode the data of the homotopy type of $X$ in an algebraic formalism. The objects are the points in the space $X$, morphisms are homotopy classes of paths between points, and higher morphisms are higher homotopies of those points. The existence of the Whitehead product is the main reason why defining a notion of ∞-groupoids is such a demanding task. 
It was shown that any strict ∞-groupoid[1] has only trivial Whitehead products, hence strict groupoids can never model the homotopy types of spheres, such as $S^{3}$.[2] See also • Generalised Whitehead product • Massey product • Toda bracket References 1. Brown, Ronald; Higgins, Philip J. (1981). "The equivalence of ∞-groupoids and crossed complexes". Cahiers de Topologie et Géométrie Différentielle Catégoriques. 22 (4): 371–386. 2. Simpson, Carlos (1998-10-09). "Homotopy types of strict 3-groupoids". arXiv:math/9810059. • Whitehead, J. H. C. (April 1941), "On adding relations to homotopy groups", Annals of Mathematics, 2, 42 (2): 409–428, doi:10.2307/1968907, JSTOR 1968907 • Uehara, Hiroshi; Massey, William S. (1957), "The Jacobi identity for Whitehead products", Algebraic geometry and topology. A symposium in honor of S. Lefschetz, Princeton, N. J.: Princeton University Press, pp. 361–377, MR 0091473 • Whitehead, George W. (July 1946), "On products in homotopy groups", Annals of Mathematics, 2, 47 (3): 460–475, doi:10.2307/1969085, JSTOR 1969085 • Whitehead, George W. (1978). "X.7 The Whitehead Product". Elements of homotopy theory. Springer-Verlag. pp. 472–487. ISBN 978-0387903361.
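For $k=l=1$, the Whitehead bracket described above is the ordinary group commutator in $\pi _{1}(X)$. A small illustration with permutations standing in for group elements (the specific group and helper names are choices for this sketch, not part of the theory):

```python
# In degree 1 the Whitehead bracket reduces to the commutator
# [f, g] = f g f^{-1} g^{-1} in the fundamental group.  We illustrate
# with permutations of {0, 1, 2}, written as tuples p with p[i] = p(i).

def compose(p, q):
    """(p o q)(i) = p(q(i)) for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    """Inverse permutation."""
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def bracket(f, g):
    """The commutator f g f^-1 g^-1."""
    return compose(compose(f, g), compose(inverse(f), inverse(g)))

e = (0, 1, 2)            # identity
f = (1, 0, 2)            # transposition (0 1)
g = (0, 2, 1)            # transposition (1 2)

assert bracket(f, f) == e    # every element commutes with itself
assert bracket(f, g) != e    # f and g do not commute
```

As the article notes, the bracket vanishes exactly when the two classes commute, which is why all Whitehead products vanish on an H-space, whose fundamental group is abelian.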
Whitehead theorem In homotopy theory (a branch of mathematics), the Whitehead theorem states that if a continuous mapping f between CW complexes X and Y induces isomorphisms on all homotopy groups, then f is a homotopy equivalence. This result was proved by J. H. C. Whitehead in two landmark papers from 1949, and provides a justification for working with the concept of a CW complex that he introduced there. It is a model result of algebraic topology, in which the behavior of certain algebraic invariants (in this case, homotopy groups) determines a topological property of a mapping. Not to be confused with Whitehead problem or Whitehead conjecture. Statement In more detail, let X and Y be topological spaces. Given a continuous mapping $f\colon X\to Y$ and a point x in X, consider for any n ≥ 1 the induced homomorphism $f_{*}\colon \pi _{n}(X,x)\to \pi _{n}(Y,f(x)),$ where πn(X,x) denotes the n-th homotopy group of X with base point x. (For n = 0, π0(X) just means the set of path components of X.) A map f is a weak homotopy equivalence if the function $f_{*}\colon \pi _{0}(X)\to \pi _{0}(Y)$ is bijective, and the homomorphisms f* are bijective for all x in X and all n ≥ 1. (For X and Y path-connected, the first condition is automatic, and it suffices to state the second condition for a single point x in X.) The Whitehead theorem states that a weak homotopy equivalence from one CW complex to another is a homotopy equivalence. (That is, the map f: X → Y has a homotopy inverse g: Y → X, which is not at all clear from the assumptions.) This implies the same conclusion for spaces X and Y that are homotopy equivalent to CW complexes. Combining this with the Hurewicz theorem yields a useful corollary: a continuous map $f\colon X\to Y$ between simply connected CW complexes that induces an isomorphism on all integral homology groups is a homotopy equivalence. 
Spaces with isomorphic homotopy groups may not be homotopy equivalent A word of caution: it is not enough to assume πn(X) is isomorphic to πn(Y) for each n in order to conclude that X and Y are homotopy equivalent. One really needs a map f : X → Y inducing an isomorphism on homotopy groups. For instance, take X= S2 × RP3 and Y= RP2 × S3. Then X and Y have the same fundamental group, namely the cyclic group Z/2, and the same universal cover, namely S2 × S3; thus, they have isomorphic homotopy groups. On the other hand their homology groups are different (as can be seen from the Künneth formula); thus, X and Y are not homotopy equivalent. The Whitehead theorem does not hold for general topological spaces or even for all subspaces of Rn. For example, the Warsaw circle, a compact subset of the plane, has all homotopy groups zero, but the map from the Warsaw circle to a single point is not a homotopy equivalence. The study of possible generalizations of Whitehead's theorem to more general spaces is part of the subject of shape theory. Generalization to model categories In any model category, a weak equivalence between cofibrant-fibrant objects is a homotopy equivalence. References • J. H. C. Whitehead, Combinatorial homotopy. I., Bull. Amer. Math. Soc., 55 (1949), 213–245 • J. H. C. Whitehead, Combinatorial homotopy. II., Bull. Amer. Math. Soc., 55 (1949), 453–496 • A. Hatcher, Algebraic topology, Cambridge University Press, Cambridge, 2002. xii+544 pp. ISBN 0-521-79160-X and ISBN 0-521-79540-0 (see Theorem 4.5)
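The homology comparison in the example above (distinguishing $S^{2}\times \mathbb{RP}^{3}$ from $\mathbb{RP}^{2}\times S^{3}$) can be carried out mechanically with the Künneth formula, including its Tor terms. The sketch below encodes finitely generated abelian groups as lists of cyclic orders (0 for $\mathbb{Z}$, $k$ for $\mathbb{Z}/k$); the encoding and function names are choices for this illustration:

```python
from math import gcd

# Kunneth over Z:
#   H_n(X x Y) = (+)_{i+j=n} H_i(X) (x) H_j(Y)
#               (+) (+)_{i+j=n-1} Tor(H_i(X), H_j(Y)).
# A group is a dict: degree -> list of cyclic orders (0 = Z, k = Z/k).

def tensor(a, b):
    # Z (x) A = A;  Z/a (x) Z/b = Z/gcd(a, b)
    if a == 0:
        return b
    if b == 0:
        return a
    return gcd(a, b)

def tor(a, b):
    # Tor(Z, A) = 0;  Tor(Z/a, Z/b) = Z/gcd(a, b)
    if a == 0 or b == 0:
        return None
    return gcd(a, b)

def kunneth(HX, HY, n):
    out = []
    for i in range(n + 1):                      # tensor terms
        for a in HX.get(i, []):
            for b in HY.get(n - i, []):
                out.append(tensor(a, b))
    for i in range(n):                          # Tor terms
        for a in HX.get(i, []):
            for b in HY.get(n - 1 - i, []):
                t = tor(a, b)
                if t is not None:
                    out.append(t)
    return sorted(g for g in out if g != 1)     # drop trivial factors

S2, S3 = {0: [0], 2: [0]}, {0: [0], 3: [0]}
RP2, RP3 = {0: [0], 1: [2]}, {0: [0], 1: [2], 3: [0]}

assert kunneth(S2, RP3, 1) == kunneth(RP2, S3, 1) == [2]  # H_1 = Z/2 both
assert kunneth(S2, RP3, 2) == [0]   # H_2(S^2 x RP^3) = Z
assert kunneth(RP2, S3, 2) == []    # H_2(RP^2 x S^3) = 0
```

The two spaces agree in degree 1 but differ in degree 2, confirming that they are not homotopy equivalent even though their homotopy groups are isomorphic.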
Whitehead torsion In geometric topology, a field within mathematics, the obstruction to a homotopy equivalence $f\colon X\to Y$ of finite CW-complexes being a simple homotopy equivalence is its Whitehead torsion $\tau (f)$, which is an element in the Whitehead group $\operatorname {Wh} (\pi _{1}(Y))$. These concepts are named after the mathematician J. H. C. Whitehead. The Whitehead torsion is important in applying surgery theory to non-simply connected manifolds of dimension > 4: for simply-connected manifolds, the Whitehead group vanishes, and thus homotopy equivalences and simple homotopy equivalences are the same. The applications are to differentiable manifolds, PL manifolds and topological manifolds. The proofs were first obtained in the early 1960s by Stephen Smale, for differentiable manifolds. The development of handlebody theory allowed much the same proofs in the differentiable and PL categories. The proofs are much harder in the topological category, requiring the theory of Robion Kirby and Laurent C. Siebenmann. The restriction to manifolds of dimension greater than four is due to the application of the Whitney trick for removing double points. In generalizing the h-cobordism theorem, which is a statement about simply connected manifolds, to non-simply connected manifolds, one must distinguish simple homotopy equivalences and non-simple homotopy equivalences. While an h-cobordism W between simply-connected closed connected manifolds M and N of dimension n > 4 is isomorphic to a cylinder (the corresponding homotopy equivalence can be taken to be a diffeomorphism, PL-isomorphism, or homeomorphism, respectively), the s-cobordism theorem states that if the manifolds are not simply-connected, an h-cobordism is a cylinder if and only if the Whitehead torsion of the inclusion $M\hookrightarrow W$ vanishes. 
Whitehead group The Whitehead group of a connected CW-complex or a manifold M is equal to the Whitehead group $\operatorname {Wh} (\pi _{1}(M))$ of the fundamental group $\pi _{1}(M)$ of M. If G is a group, the Whitehead group $\operatorname {Wh} (G)$ is defined to be the cokernel of the map $G\times \{\pm 1\}\to K_{1}(\mathbb {Z} [G])$ which sends (g, ±1) to the invertible (1,1)-matrix (±g). Here $\mathbb {Z} [G]$ is the group ring of G. Recall that the K-group K1(A) of a ring A is defined as the quotient of GL(A) by the subgroup generated by elementary matrices. The group GL(A) is the direct limit of the finite-dimensional groups GL(n, A) → GL(n+1, A); concretely, the group of invertible infinite matrices which differ from the identity matrix in only a finite number of coefficients. An elementary matrix here is a transvection: one such that all main diagonal elements are 1 and there is at most one non-zero element not on the diagonal. The subgroup generated by elementary matrices is exactly the derived subgroup, in other words the smallest normal subgroup such that the quotient by it is abelian. In other words, the Whitehead group $\operatorname {Wh} (G)$ of a group G is the quotient of $\operatorname {GL} (\mathbb {Z} [G])$ by the subgroup generated by elementary matrices, elements of G and $\pm 1$. Notice that this is the same as the quotient of the reduced K-group ${\tilde {K}}_{1}(\mathbb {Z} [G])$ by G. Examples • The Whitehead group of the trivial group is trivial. Since the group ring of the trivial group is $\mathbb {Z} ,$ we have to show that any matrix can be written as a product of elementary matrices times a diagonal matrix; this follows easily from the fact that $\mathbb {Z} $ is a Euclidean domain. • The Whitehead group of a free abelian group is trivial, a 1964 result of Hyman Bass, Alex Heller and Richard Swan. 
This is quite hard to prove, but is important as it is used in the proof that an s-cobordism of dimension at least 6 whose ends are tori is a product. It is also the key algebraic result used in the surgery theory classification of piecewise linear manifolds of dimension at least 5 which are homotopy equivalent to a torus; this is the essential ingredient of the 1969 Kirby–Siebenmann structure theory of topological manifolds of dimension at least 5. • The Whitehead group of a braid group (or any subgroup of a braid group) is trivial. This was proved by F. Thomas Farrell and Sayed K. Roushon. • The Whitehead groups of the cyclic groups of orders 2, 3, 4, and 6 are trivial. • The Whitehead group of the cyclic group of order 5 is $\mathbb {Z} $. This was proved in 1940 by Graham Higman. An example of a non-trivial unit in the group ring arises from the identity $(1-t-t^{4})(1-t^{2}-t^{3})=1,$ where t is a generator of the cyclic group of order 5. This example is closely related to the existence of units of infinite order (in particular, the golden ratio) in the ring of integers of the cyclotomic field generated by fifth roots of unity. • The Whitehead group of any finite group G is finitely generated, of rank equal to the number of irreducible real representations of G minus the number of irreducible rational representations. This was proved in 1965 by Bass. • If G is a finite cyclic group then $K_{1}(\mathbb {Z} [G])$ is isomorphic to the units of the group ring $\mathbb {Z} [G]$ under the determinant map, so Wh(G) is just the group of units of $\mathbb {Z} [G]$ modulo the group of "trivial units" generated by elements of G and −1. • It is a well-known conjecture that the Whitehead group of any torsion-free group should vanish. The Whitehead torsion We first define the Whitehead torsion $\tau (h_{*})\in {\tilde {K}}_{1}(R)$ for a chain homotopy equivalence $h_{*}:D_{*}\to E_{*}$ of finite based free R-chain complexes.
We can assign to the homotopy equivalence its mapping cone C* := cone*(h*) which is a contractible finite based free R-chain complex. Let $\gamma _{*}:C_{*}\to C_{*+1}$ be any chain contraction of the mapping cone, i.e., $c_{n+1}\circ \gamma _{n}+\gamma _{n-1}\circ c_{n}=\operatorname {id} _{C_{n}}$ for all n. We obtain an isomorphism $(c_{*}+\gamma _{*})_{\mathrm {odd} }:C_{\mathrm {odd} }\to C_{\mathrm {even} }$ with $C_{\mathrm {odd} }:=\bigoplus _{n{\text{ odd}}}C_{n},\qquad C_{\mathrm {even} }:=\bigoplus _{n{\text{ even}}}C_{n}.$ We define $\tau (h_{*}):=[A]\in {\tilde {K}}_{1}(R)$, where A is the matrix of $(c_{*}+\gamma _{*})_{\rm {odd}}$ with respect to the given bases. For a homotopy equivalence $f:X\to Y$ of connected finite CW-complexes we define the Whitehead torsion $\tau (f)\in \operatorname {Wh} (\pi _{1}(Y))$ as follows. Let ${\tilde {f}}:{\tilde {X}}\to {\tilde {Y}}$ be the lift of $f:X\to Y$ to the universal covering. It induces $\mathbb {Z} [\pi _{1}(Y)]$-chain homotopy equivalences $C_{*}({\tilde {f}}):C_{*}({\tilde {X}})\to C_{*}({\tilde {Y}})$. Now we can apply the definition of the Whitehead torsion for a chain homotopy equivalence and obtain an element in ${\tilde {K}}_{1}(\mathbb {Z} [\pi _{1}(Y)])$ which we map to Wh(π1(Y)). This is the Whitehead torsion τ(ƒ) ∈ Wh(π1(Y)). Properties Homotopy invariance: Let $f,g\colon X\to Y$ be homotopy equivalences of finite connected CW-complexes. If f and g are homotopic, then $\tau (f)=\tau (g)$. Topological invariance: If $f\colon X\to Y$ is a homeomorphism of finite connected CW-complexes, then $\tau (f)=0$. Composition formula: Let $f\colon X\to Y$, $g\colon Y\to Z$ be homotopy equivalences of finite connected CW-complexes. Then $\tau (g\circ f)=g_{*}\tau (f)+\tau (g)$. 
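As a computational aside, Higman's unit identity $(1-t-t^{4})(1-t^{2}-t^{3})=1$ quoted in the examples above can be checked directly in $\mathbb {Z} [C_{5}]$ by multiplying coefficient lists and reducing exponents modulo 5. This is a minimal illustrative sketch; the helper name `mult_c5` is ours:

```python
# Elements of the group ring Z[C5] are stored as length-5 coefficient
# lists [c0, ..., c4] representing c0 + c1*t + ... + c4*t^4, with t^5 = 1.

def mult_c5(a, b):
    """Multiply two elements of Z[C5], reducing exponents mod 5."""
    c = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % 5] += ai * bj
    return c

u = [1, -1, 0, 0, -1]  # 1 - t - t^4
v = [1, 0, -1, -1, 0]  # 1 - t^2 - t^3
print(mult_c5(u, v))   # [1, 0, 0, 0, 0]: the product is 1, so u is a unit
```

Since neither factor has the form $\pm t^{k}$, this exhibits a non-trivial unit of $\mathbb {Z} [C_{5}]$, consistent with the non-triviality of Wh of the cyclic group of order 5.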
Geometric interpretation The s-cobordism theorem states for a closed connected oriented manifold M of dimension n > 4 that an h-cobordism W between M and another manifold N is trivial over M if and only if the Whitehead torsion of the inclusion $M\hookrightarrow W$ vanishes. Moreover, for any element in the Whitehead group there exists an h-cobordism W over M whose Whitehead torsion is the considered element. The proofs use handle decompositions. There exists a homotopy theoretic analogue of the s-cobordism theorem. Given a CW-complex A, consider the set of all pairs of CW-complexes (X, A) such that the inclusion of A into X is a homotopy equivalence. Two pairs (X1, A) and (X2, A) are said to be equivalent, if there is a simple homotopy equivalence between X1 and X2 relative to A. The set of such equivalence classes forms a group where the addition is given by taking the union of X1 and X2 with common subspace A. This group is naturally isomorphic to the Whitehead group Wh(A) of the CW-complex A. The proof of this fact is similar to the proof of the s-cobordism theorem. See also • Algebraic K-theory • Reidemeister torsion • s-Cobordism theorem • Wall's finiteness obstruction References • Bass, Hyman; Heller, Alex; Swan, Richard (1964), "The Whitehead group of a polynomial extension", Publications Mathématiques de l'IHÉS, 22: 61–79, MR 0174605 • Cohen, M. A course in simple homotopy theory Graduate Text in Mathematics 10, Springer, 1973 • Higman, Graham (1940), "The units of group-rings", Proceedings of the London Mathematical Society, 2, 46: 231–248, doi:10.1112/plms/s2-46.1.231, MR 0002137 • Kirby, Robion; Siebenmann, Laurent (1977), Foundational essays on topological manifolds, smoothings, and triangulations, Annals of Mathematics Studies, vol.
88, Princeton University Press Princeton, N.J.; University of Tokyo Press, Tokyo • Milnor, John (1966), "Whitehead torsion", Bulletin of the American Mathematical Society, 72: 358–426, doi:10.1090/S0002-9904-1966-11484-2, MR 0196736 • Smale, Stephen (1962), "On the structure of manifolds", American Journal of Mathematics, 84: 387–399, doi:10.2307/2372978, MR 0153022 • Whitehead, J. H. C. (1950), "Simple homotopy types", American Journal of Mathematics, 72: 1–57, doi:10.2307/2372133, MR 0035437 External links • A description of Whitehead torsion is in section two.
Wikipedia
Postnikov system In homotopy theory, a branch of algebraic topology, a Postnikov system (or Postnikov tower) is a way of decomposing a topological space's homotopy groups using an inverse system of topological spaces whose homotopy type at degree $k$ agrees with the truncated homotopy type of the original space $X$. Postnikov systems were introduced by, and are named after, Mikhail Postnikov. Definition A Postnikov system of a path-connected space $X$ is an inverse system of spaces $\cdots \to X_{n}\xrightarrow {p_{n}} X_{n-1}\xrightarrow {p_{n-1}} \cdots \xrightarrow {p_{3}} X_{2}\xrightarrow {p_{2}} X_{1}\xrightarrow {p_{1}} *$ with a sequence of maps $\phi _{n}\colon X\to X_{n}$ compatible with the inverse system such that 1. The map $\phi _{n}\colon X\to X_{n}$ induces an isomorphism $\pi _{i}(X)\to \pi _{i}(X_{n})$ for every $i\leq n$. 2. $\pi _{i}(X_{n})=0$ for $i>n$.[1]: 410  3. Each map $p_{n}\colon X_{n}\to X_{n-1}$ is a fibration, and so the fiber $F_{n}$ is an Eilenberg–MacLane space, $K(\pi _{n}(X),n)$. The first two conditions imply that $X_{1}$ is also a $K(\pi _{1}(X),1)$-space. More generally, if $X$ is $(n-1)$-connected, then $X_{n}$ is a $K(\pi _{n}(X),n)$-space and all $X_{i}$ for $i<n$ are contractible. Note the third condition is only included optionally by some authors. Existence Postnikov systems exist on connected CW complexes,[1]: 354  and there is a weak homotopy-equivalence between $X$ and its inverse limit, so $X\simeq \varprojlim {}X_{n}$, showing that $X$ is a CW approximation of its inverse limit. They can be constructed on a CW complex by iteratively killing off homotopy groups. If we have a map $f\colon S^{n}\to X$ representing a homotopy class $[f]\in \pi _{n}(X)$, we can take the pushout along the boundary map $S^{n}\to e_{n+1}$, killing off the homotopy class. For $X_{m}$ this process can be repeated for all $n>m$, giving a space which has vanishing homotopy groups $\pi _{n}(X_{m})$. 
Using the fact that $X_{n-1}$ can be constructed from $X_{n}$ by killing off all homotopy maps $S^{n}\to X_{n}$, we obtain a map $X_{n}\to X_{n-1}$. Main property One of the main properties of the Postnikov tower, which makes it such a powerful tool for computing cohomology, is the fact that the spaces $X_{n}$ are homotopy equivalent to a CW complex ${\mathfrak {X}}_{n}$ which differs from $X$ only by cells of dimension $\geq n+2$. Homotopy classification of fibrations The sequence of fibrations $p_{n}:X_{n}\to X_{n-1}$[2] has homotopically defined invariants: the homotopy classes of the maps $p_{n}$ give a well-defined homotopy type $[X]\in \operatorname {Ob} (hTop)$. The homotopy class of $p_{n}$ comes from looking at the homotopy class of the classifying map for the fiber $K(\pi _{n}(X),n)$. The associated classifying map is $X_{n-1}\to B(K(\pi _{n}(X),n))\simeq K(\pi _{n}(X),n+1)$, hence the homotopy class $[p_{n}]$ is classified by a homotopy class $[p_{n}]\in [X_{n-1},K(\pi _{n}(X),n+1)]\cong H^{n+1}(X_{n-1},\pi _{n}(X))$ called the n-th Postnikov invariant of $X$, since homotopy classes of maps to Eilenberg–MacLane spaces give cohomology with coefficients in the associated abelian group. Fiber sequence for spaces with two nontrivial homotopy groups One of the special cases of the homotopy classification is the homotopy class of spaces $X$ such that there exists a fibration $K(A,n)\to X\to K(G,1)$ giving a homotopy type with two non-trivial homotopy groups, $\pi _{1}(X)=G$, and $\pi _{n}(X)=A$. Then, from the previous discussion, the fibration map $BG\to K(A,n+1)$ gives a cohomology class in $H^{n+1}(BG,A)$, which can also be interpreted as a group cohomology class. This space $X$ can be considered a higher local system. Examples of Postnikov towers Postnikov tower of a K(G,n) One of the conceptually simplest cases of a Postnikov tower is that of the Eilenberg–MacLane space $K(G,n)$.
This gives a tower with ${\begin{matrix}X_{i}\simeq *&{\text{for }}i<n\\X_{i}\simeq K(G,n)&{\text{for }}i\geq n\end{matrix}}$ Postnikov tower of S2 The Postnikov tower for the sphere $S^{2}$ is a special case whose first few terms can be understood explicitly. We know the first few homotopy groups from the simple connectedness of $S^{2}$, degree theory of spheres, and the Hopf fibration, which gives $\pi _{k}(S^{2})\simeq \pi _{k}(S^{3})$ for $k\geq 3$; hence ${\begin{matrix}\pi _{1}(S^{2})=&0\\\pi _{2}(S^{2})=&\mathbb {Z} \\\pi _{3}(S^{2})=&\mathbb {Z} \\\pi _{4}(S^{2})=&\mathbb {Z} /2.\end{matrix}}$ Then, $X_{2}=S_{2}^{2}=K(\mathbb {Z} ,2)$, and $X_{3}$ comes from a pullback sequence ${\begin{matrix}X_{3}&\to &*\\\downarrow &&\downarrow \\X_{2}&\to &K(\mathbb {Z} ,4),\end{matrix}}$ which is classified by an element $[p_{3}]\in [K(\mathbb {Z} ,2),K(\mathbb {Z} ,4)]\cong H^{4}(\mathbb {CP} ^{\infty })=\mathbb {Z} $. If this were trivial it would imply $X_{3}\simeq K(\mathbb {Z} ,2)\times K(\mathbb {Z} ,3)$. But this is not the case! In fact, this is responsible for why strict infinity groupoids don't model homotopy types.[3] Computing this invariant requires more work, but it can be explicitly found.[4] It is the quadratic form $x\mapsto x^{2}$ on $\mathbb {Z} $ coming from the Hopf fibration $S^{3}\to S^{2}$. Note that each element in $H^{4}(\mathbb {CP} ^{\infty })$ gives a different homotopy 3-type. Homotopy groups of spheres One application of the Postnikov tower is the computation of homotopy groups of spheres.[5] For an $n$-dimensional sphere $S^{n}$ we can use the Hurewicz theorem to show that each $S_{i}^{n}$ is contractible for $i<n$, since the theorem implies that the lower homotopy groups are trivial. Recall that there is a spectral sequence for any Serre fibration, such as the fibration $K(\pi _{n+1}(X),n+1)\simeq F_{n+1}\to S_{n+1}^{n}\to S_{n}^{n}\simeq K(\mathbb {Z} ,n)$.
We can then form a homological spectral sequence with $E^{2}$-terms $E_{p,q}^{2}=H_{p}\left(K(\mathbb {Z} ,n),H_{q}\left(K\left(\pi _{n+1}\left(S^{n}\right),n+1\right)\right)\right)$. And the first non-trivial map to $\pi _{n+1}\left(S^{n}\right)$, $d_{0,n+1}^{n+1}\colon H_{n+2}(K(\mathbb {Z} ,n))\to H_{0}\left(K(\mathbb {Z} ,n),H_{n+1}\left(K\left(\pi _{n+1}\left(S^{n}\right),n+1\right)\right)\right)$, equivalently written as $d_{0,n+1}^{n+1}\colon H_{n+2}(K(\mathbb {Z} ,n))\to \pi _{n+1}\left(S^{n}\right)$. If it's easy to compute $H_{n+1}\left(S_{n+1}^{n}\right)$ and $H_{n+2}\left(S_{n+2}^{n}\right)$, then we can get information about what this map looks like. In particular, if it's an isomorphism, we obtain a computation of $\pi _{n+1}\left(S^{n}\right)$. For the case $n=3$, this can be computed explicitly using the path fibration for $K(\mathbb {Z} ,3)$, the main property of the Postnikov tower for ${\mathfrak {X}}_{4}\simeq S^{3}\cup \{{\text{cells of dimension}}\geq 6\}$ (giving $H_{4}(X_{4})=H_{5}(X_{4})=0$), and the universal coefficient theorem, giving $\pi _{4}\left(S^{3}\right)=\mathbb {Z} /2$. Moreover, because of the Freudenthal suspension theorem this actually gives the stable homotopy group $\pi _{1}^{\mathbb {S} }$ since $\pi _{n+k}\left(S^{n}\right)$ is stable for $n\geq k+2$. Note that similar techniques can be applied using the Whitehead tower (below) for computing $\pi _{4}\left(S^{3}\right)$ and $\pi _{5}\left(S^{3}\right)$, giving the first two non-trivial stable homotopy groups of spheres. Postnikov towers of spectra In addition to the classical Postnikov tower, there is a notion of Postnikov towers in stable homotopy theory constructed on spectra.[6]: 85–86 
Definition For a spectrum $E$ a Postnikov tower of $E$ is a diagram in the homotopy category of spectra, ${\text{Ho}}({\textbf {Spectra}})$, given by $\cdots \to E_{(2)}\xrightarrow {p_{2}} E_{(1)}\xrightarrow {p_{1}} E_{(0)}$, with maps $\tau _{n}\colon E\to E_{(n)}$ commuting with the $p_{n}$ maps. Then, this tower is a Postnikov tower if the following two conditions are satisfied: 1. $\pi _{i}^{\mathbb {S} }\left(E_{(n)}\right)=0$ for $i>n$, 2. $\left(\tau _{n}\right)_{*}\colon \pi _{i}^{\mathbb {S} }(E)\to \pi _{i}^{\mathbb {S} }\left(E_{(n)}\right)$ is an isomorphism for $i\leq n$, where $\pi _{i}^{\mathbb {S} }$ are stable homotopy groups of a spectrum. It turns out every spectrum has a Postnikov tower and this tower can be constructed using a similar kind of inductive procedure as the one given above. Whitehead tower Given a CW complex $X$, there is a dual construction to the Postnikov tower called the Whitehead tower. Instead of killing off all higher homotopy groups, the Whitehead tower iteratively kills off lower homotopy groups. This is given by a tower of CW complexes, $\cdots \to X_{3}\to X_{2}\to X_{1}\to X$, where 1. The lower homotopy groups are zero, so $\pi _{i}(X_{n})=0$ for $i\leq n$. 2. The induced map $\pi _{i}\colon \pi _{i}(X_{n})\to \pi _{i}(X)$ is an isomorphism for $i>n$. 3. The maps $X_{n}\to X_{n-1}$ are fibrations with fiber $K(\pi _{n}(X),n-1)$. Implications Notice $X_{1}\to X$ is the universal cover of $X$ since it is a covering space that is simply connected. Furthermore, each $X_{n}\to X$ is the universal $n$-connected cover of $X$. Construction The spaces $X_{n}$ in the Whitehead tower are constructed inductively. If we construct a $K\left(\pi _{n+1}(X),n+1\right)$ by killing off the higher homotopy groups in $X_{n}$,[7] we get an embedding $X_{n}\to K(\pi _{n+1}(X),n+1)$.
If we let $X_{n+1}=\left\{f\colon I\to K\left(\pi _{n+1}(X),n+1\right):f(0)=p{\text{ and }}f(1)\in X_{n}\right\}$ for some fixed basepoint $p$, then the induced map $X_{n+1}\to X_{n}$ is a fiber bundle with fiber homeomorphic to $\Omega K\left(\pi _{n+1}(X),n+1\right)\simeq K\left(\pi _{n+1}(X),n\right)$, and so we have a Serre fibration $K\left(\pi _{n+1}(X),n\right)\to X_{n+1}\to X_{n}$. Using the long exact sequence in homotopy theory, we have $\pi _{i}(X_{n+1})=\pi _{i}\left(X_{n}\right)$ for $i\geq n+2$, $\pi _{i}(X_{n+1})=\pi _{i}(X_{n})=0$ for $i<n$, and finally, there is an exact sequence $0\to \pi _{n+1}\left(X_{n+1}\right)\to \pi _{n+1}\left(X_{n}\right)\mathrel {\overset {\partial }{\rightarrow }} \pi _{n}K\left(\pi _{n+1}(X),n\right)\to \pi _{n}\left(X_{n+1}\right)\to 0$, where if the middle morphism is an isomorphism, the other two groups are zero. This can be checked by looking at the inclusion $X_{n}\to K(\pi _{n+1}(X),n+1)$ and noting that the Eilenberg–MacLane space has a cellular decomposition $X_{n}\cup \{{\text{cells of dimension}}\geq n+3\}$; thus, $\pi _{n+1}\left(X_{n}\right)\cong \pi _{n+1}\left(K\left(\pi _{n+1}(X),n+1\right)\right)\cong \pi _{n}\left(K\left(\pi _{n+1}(X),n\right)\right)$, giving the desired result. As a homotopy fiber Another way to view the components in the Whitehead tower is as a homotopy fiber. If we take ${\text{Hofiber}}(\phi _{n}:X\to X_{n})$ from the Postnikov tower, we get a space $X^{n}$ which has $\pi _{k}(X^{n})={\begin{cases}\pi _{k}(X)&k>n\\0&k\leq n\end{cases}}$ Whitehead tower of spectra The dual notion of the Whitehead tower can be defined in a similar manner using homotopy fibers in the category of spectra. If we let $E\langle n\rangle =\operatorname {Hofiber} \left(\tau _{n}:E\to E_{(n)}\right)$ then this can be organized in a tower giving connected covers of a spectrum.
This is a widely used construction[8][9][10] in bordism theory because the coverings of the unoriented cobordism spectrum $M{\text{O}}$ give other bordism theories[10] ${\begin{aligned}M{\text{String}}&=M{\text{O}}\langle 8\rangle \\M{\text{Spin}}&=M{\text{O}}\langle 4\rangle \\M{\text{SO}}&=M{\text{O}}\langle 2\rangle \end{aligned}}$ such as string bordism. Whitehead tower and string theory In Spin geometry the $\operatorname {Spin} (n)$ group is constructed as the universal cover of the special orthogonal group $\operatorname {SO} (n)$, so $\mathbb {Z} /2\to \operatorname {Spin} (n)\to \operatorname {SO} (n)$ is a fibration, giving the first term in the Whitehead tower. There are physically relevant interpretations for the higher parts in this tower, which can be read as $\cdots \to \operatorname {Fivebrane} (n)\to \operatorname {String} (n)\to \operatorname {Spin} (n)\to \operatorname {SO} (n)$ where $\operatorname {String} (n)$ is the $3$-connected cover of $\operatorname {SO} (n)$ called the string group, and $\operatorname {Fivebrane} (n)$ is the $7$-connected cover called the fivebrane group.[11][12] See also • Adams spectral sequence • Eilenberg–MacLane space • CW complex • Obstruction theory • Stable homotopy theory • Homotopy groups of spheres • Higher group References 1. Hatcher, Allen. Algebraic Topology (PDF). 2. Kahn, Donald W. (1963-03-01). "Induced maps for Postnikov systems" (PDF). Transactions of the American Mathematical Society. 107 (3): 432–450. doi:10.1090/s0002-9947-1963-0150777-x. ISSN 0002-9947. 3. Simpson, Carlos (1998-10-09). "Homotopy types of strict 3-groupoids". arXiv:math/9810059. 4. Eilenberg, Samuel; MacLane, Saunders (1954). "On the Groups $H(\Pi ,n)$, III: Operations and Obstructions". Annals of Mathematics. 60 (3): 513–557. doi:10.2307/1969849. ISSN 0003-486X. JSTOR 1969849. 5. Laurențiu-George, Maxim. "Spectral sequences and homotopy groups of spheres" (PDF). Archived (PDF) from the original on 19 May 2017. 6. 
On Thom Spectra, Orientability, and Cobordism. Springer Monographs in Mathematics. Berlin, Heidelberg: Springer. 1998. doi:10.1007/978-3-540-77751-9. ISBN 978-3-540-62043-3. 7. Maxim, Laurențiu. "Lecture Notes on Homotopy Theory and Applications" (PDF). p. 66. Archived (PDF) from the original on 16 February 2020. 8. Hill, Michael A. (2009). "The string bordism of BE8 and BE8 × BE8 through dimension 14". Illinois Journal of Mathematics. 53 (1): 183–196. doi:10.1215/ijm/1264170845. ISSN 0019-2082. 9. Bunke, Ulrich; Naumann, Niko (2014-12-01). "Secondary invariants for string bordism and topological modular forms". Bulletin des Sciences Mathématiques. 138 (8): 912–970. doi:10.1016/j.bulsci.2014.05.002. ISSN 0007-4497. 10. Szymik, Markus (2019). "String bordism and chromatic characteristics". In Daniel G. Davis; Hans-Werner Henn; J. F. Jardine; Mark W. Johnson; Charles Rezk (eds.). Homotopy Theory: Tools and Applications. Contemporary Mathematics. Vol. 729. pp. 239–254. arXiv:1312.4658. doi:10.1090/conm/729/14698. ISBN 9781470442446. S2CID 56461325. 11. "Mathematical physics – Physical application of Postnikov tower, String(n) and Fivebrane(n)". Physics Stack Exchange. Retrieved 2020-02-16. 12. "at.algebraic topology – What do Whitehead towers have to do with physics?". MathOverflow. Retrieved 2020-02-16. • Postnikov, Mikhail M. (1951). "Determination of the homology groups of a space by means of the homotopy invariants". Doklady Akademii Nauk SSSR. 76: 359–362. • Lecture Notes on Homotopy Theory and Applications • Determination of the Second Homology and Cohomology Groups of a Space by Means of Homotopy Invariants - gives accessible examples of postnikov invariants • Hatcher, Allen (2002). Algebraic topology. Cambridge University Press. ISBN 978-0-521-79540-1. • "Handwritten notes" (PDF). Archived from the original (PDF) on 2020-02-13.
Whitening transformation A whitening transformation or sphering transformation is a linear transformation that transforms a vector of random variables with a known covariance matrix into a set of new variables whose covariance is the identity matrix, meaning that they are uncorrelated and each have variance 1.[1] The transformation is called "whitening" because it changes the input vector into a white noise vector. Several other transformations are closely related to whitening: 1. the decorrelation transform removes only the correlations but leaves variances intact, 2. the standardization transform sets variances to 1 but leaves correlations intact, 3. a coloring transformation transforms a vector of white random variables into a random vector with a specified covariance matrix.[2] Definition Suppose $X$ is a random (column) vector with non-singular covariance matrix $\Sigma $ and mean $0$. Then the transformation $Y=WX$ with a whitening matrix $W$ satisfying the condition $W^{\mathrm {T} }W=\Sigma ^{-1}$ yields the whitened random vector $Y$ with unit diagonal covariance. There are infinitely many possible whitening matrices $W$ that all satisfy the above condition. Commonly used choices are $W=\Sigma ^{-1/2}$ (Mahalanobis or ZCA whitening), $W=L^{T}$ where $L$ is the Cholesky decomposition of $\Sigma ^{-1}$ (Cholesky whitening),[3] or the eigen-system of $\Sigma $ (PCA whitening).[4] Optimal whitening transforms can be singled out by investigating the cross-covariance and cross-correlation of $X$ and $Y$.[3] For example, the unique optimal whitening transformation achieving maximal component-wise correlation between original $X$ and whitened $Y$ is produced by the whitening matrix $W=P^{-1/2}V^{-1/2}$ where $P$ is the correlation matrix and $V$ the variance matrix. Whitening a data matrix Whitening a data matrix follows the same transformation as for random variables. An empirical whitening transform is obtained by estimating the covariance (e.g. 
by maximum likelihood) and subsequently constructing a corresponding estimated whitening matrix (e.g. by Cholesky decomposition). High-dimensional whitening This modality is a generalization of the pre-whitening procedure extended to more general spaces where $X$ is usually assumed to be a random function or other random objects in a Hilbert space $H$. One of the main issues of extending whitening to infinite dimensions is that the covariance operator has an unbounded inverse in $H$. Nevertheless, if one assumes that the Picard condition holds for $X$ in the range space of the covariance operator, whitening becomes possible.[5] A whitening operator can then be defined from the factorization of the Moore–Penrose inverse of the covariance operator, which acts effectively on Karhunen–Loève type expansions of $X$. The advantage of these whitening transformations is that they can be optimized according to the underlying topological properties of the data (smoothness, continuity and contiguity), thus producing more robust whitening representations. High-dimensional features of the data can be exploited through kernel regressors or basis function systems.[6] R implementation An implementation of several whitening procedures in R, including ZCA-whitening and PCA whitening but also CCA whitening, is available in the "whitening" R package [7] published on CRAN. The R package "pfica"[8] allows the computation of high-dimensional whitening representations using basis function systems (B-splines, Fourier basis, etc.). See also • Decorrelation • Principal component analysis • Weighted least squares • Canonical correlation • Mahalanobis distance (is Euclidean after W. transformation). References 1. Koivunen, A.C.; Kostinski, A.B. (1999). "The Feasibility of Data Whitening to Improve Performance of Weather Radar". Journal of Applied Meteorology. 38 (6): 741–749. Bibcode:1999JApMe..38..741K. doi:10.1175/1520-0450(1999)038<0741:TFODWT>2.0.CO;2. ISSN 1520-0450. 2. Hossain, Miliha. 
"Whitening and Coloring Transforms for Multivariate Gaussian Random Variables". Project Rhea. Retrieved 21 March 2016. 3. Kessy, A.; Lewin, A.; Strimmer, K. (2018). "Optimal whitening and decorrelation". The American Statistician. 72 (4): 309–314. arXiv:1512.00809. doi:10.1080/00031305.2016.1277159. S2CID 55075085. 4. Friedman, J. (1987). "Exploratory Projection Pursuit". Journal of the American Statistical Association. 82 (397): 249–266. doi:10.1080/01621459.1987.10478427. ISSN 0162-1459. JSTOR 2289161. OSTI 1447861. 5. Vidal, M.; Aguilera, A.M. (2022). "Novel whitening approaches in functional settings". STAT. 12 (1): e516. doi:10.1002/sta4.516. 6. Ramsay, J.O.; Silverman, B.W. (2005). Functional Data Analysis. Springer New York, NY. ISBN 978-0-387-40080-8. 7. "whitening R package". Retrieved 2018-11-25. 8. "pfica R package". Retrieved 2023-02-11. External links • http://courses.media.mit.edu/2010fall/mas622j/whiten.pdf • The ZCA whitening transformation. Appendix A of Learning Multiple Layers of Features from Tiny Images by A. Krizhevsky.
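To make the defining condition $W^{\mathrm {T} }W=\Sigma ^{-1}$ concrete, the following dependency-free Python sketch performs Cholesky whitening of a 2×2 covariance matrix ($W=L^{T}$ with $LL^{T}=\Sigma ^{-1}$) and checks that the whitened covariance $W\Sigma W^{\mathrm {T} }$ is the identity. This is illustrative only; in practice one would use a linear-algebra library (e.g. NumPy, or the R packages above), and the helper names are ours:

```python
# Minimal pure-Python sketch of Cholesky whitening for a 2x2 covariance.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def cholesky(A):
    """Lower-triangular L with L L^T = A, for A symmetric positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

Sigma = [[4.0, 2.0], [2.0, 3.0]]
det = Sigma[0][0] * Sigma[1][1] - Sigma[0][1] * Sigma[1][0]
Sigma_inv = [[ Sigma[1][1] / det, -Sigma[0][1] / det],
             [-Sigma[1][0] / det,  Sigma[0][0] / det]]

W = transpose(cholesky(Sigma_inv))          # Cholesky whitening matrix, W^T W = Sigma^{-1}
C = matmul(matmul(W, Sigma), transpose(W))  # covariance of Y = W X
# C equals the 2x2 identity up to floating-point error
```

Replacing the Cholesky factor by the inverse symmetric square root $\Sigma ^{-1/2}$ would give ZCA whitening instead; both satisfy the same defining condition.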
Whitney embedding theorem In mathematics, particularly in differential topology, there are two Whitney embedding theorems, named after Hassler Whitney: • The strong Whitney embedding theorem states that any smooth real m-dimensional manifold (required also to be Hausdorff and second-countable) can be smoothly embedded in the real 2m-space, $\mathbb {R} ^{2m},$ if m > 0. This is the best linear bound on the smallest-dimensional Euclidean space that all m-dimensional manifolds embed in, as the real projective spaces of dimension m cannot be embedded into real (2m − 1)-space if m is a power of two (as can be seen from a characteristic class argument, also due to Whitney). • The weak Whitney embedding theorem states that any continuous function from an n-dimensional manifold to an m-dimensional manifold may be approximated by a smooth embedding provided m > 2n. Whitney similarly proved that such a map could be approximated by an immersion provided m > 2n − 1. This last result is sometimes called the Whitney immersion theorem. A little about the proof The general outline of the proof is to start with an immersion $f:M\to \mathbb {R} ^{2m}$ with transverse self-intersections. These are known to exist from Whitney's earlier work on the weak immersion theorem. Transversality of the double points follows from a general-position argument. The idea is to then somehow remove all the self-intersections. If M has boundary, one can remove the self-intersections simply by isotoping M into itself (the isotopy being in the domain of f), to a submanifold of M that does not contain the double-points. Thus, we are quickly led to the case where M has no boundary. Sometimes it is impossible to remove the double-points via an isotopy—consider for example the figure-8 immersion of the circle in the plane. In this case, one needs to introduce a local double point. 
Once one has two opposite double points, one constructs a closed loop connecting the two, giving a closed path in $\mathbb {R} ^{2m}.$ Since $\mathbb {R} ^{2m}$ is simply connected, one can assume this path bounds a disc, and provided 2m > 4 one can further assume (by the weak Whitney embedding theorem) that the disc is embedded in $\mathbb {R} ^{2m}$ such that it intersects the image of M only in its boundary. Whitney then uses the disc to create a 1-parameter family of immersions, in effect pushing M across the disc, removing the two double points in the process. In the case of the figure-8 immersion with its introduced double-point, the push across move is quite simple. This process of eliminating opposite sign double-points by pushing the manifold along a disc is called the Whitney Trick. To introduce a local double point, Whitney created immersions $\alpha _{m}:\mathbb {R} ^{m}\to \mathbb {R} ^{2m}$ which are approximately linear outside of the unit ball, but containing a single double point. For m = 1 such an immersion is given by ${\begin{cases}\alpha :\mathbb {R} ^{1}\to \mathbb {R} ^{2}\\\alpha (t)=\left({\frac {1}{1+t^{2}}},\ t-{\frac {2t}{1+t^{2}}}\right)\end{cases}}$ Notice that if α is considered as a map to $\mathbb {R} ^{3}$ like so: $\alpha (t)=\left({\frac {1}{1+t^{2}}},\ t-{\frac {2t}{1+t^{2}}},0\right)$ then the double point can be resolved to an embedding: $\beta (t,a)=\left({\frac {1}{(1+t^{2})(1+a^{2})}},\ t-{\frac {2t}{(1+t^{2})(1+a^{2})}},\ {\frac {ta}{(1+t^{2})(1+a^{2})}}\right).$ Notice β(t, 0) = α(t), and for a ≠ 0, as a function of t, β(t, a) is an embedding.
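The formulas above are easy to check numerically. The following short Python sketch (function names ours) confirms that α has a single double point at t = ±1 and that β separates it as soon as a ≠ 0:

```python
# Whitney's figure-8 immersion alpha: R -> R^2 and its resolution
# beta: R^2 -> R^3, exactly as in the formulas above.

def alpha(t):
    return (1 / (1 + t**2), t - 2*t / (1 + t**2))

def beta(t, a):
    u = (1 + t**2) * (1 + a**2)
    return (1 / u, t - 2*t / u, t*a / u)

# The double point: alpha(1) and alpha(-1) coincide at (0.5, 0.0).
assert alpha(1.0) == alpha(-1.0)

# beta(t, 0) recovers alpha(t) with third coordinate 0 ...
assert beta(2.0, 0.0)[:2] == alpha(2.0) and beta(2.0, 0.0)[2] == 0.0

# ... while for a != 0 the images of t = 1 and t = -1 are distinct:
# the double point has been resolved.
assert beta(1.0, 0.5) != beta(-1.0, 0.5)
```

The third coordinate of β, namely ta/u, is what separates the two branches: it takes opposite signs at t = 1 and t = −1 whenever a ≠ 0.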
For higher dimensions m, there are αm that can be similarly resolved in $\mathbb {R} ^{2m+1}.$ For an embedding into $\mathbb {R} ^{5},$ for example, define $\alpha _{2}(t_{1},t_{2})=\left(\beta (t_{1},t_{2}),\ t_{2}\right)=\left({\frac {1}{(1+t_{1}^{2})(1+t_{2}^{2})}},\ t_{1}-{\frac {2t_{1}}{(1+t_{1}^{2})(1+t_{2}^{2})}},\ {\frac {t_{1}t_{2}}{(1+t_{1}^{2})(1+t_{2}^{2})}},\ t_{2}\right).$ This process ultimately leads one to the definition: $\alpha _{m}(t_{1},t_{2},\cdots ,t_{m})=\left({\frac {1}{u}},t_{1}-{\frac {2t_{1}}{u}},{\frac {t_{1}t_{2}}{u}},t_{2},{\frac {t_{1}t_{3}}{u}},t_{3},\cdots ,{\frac {t_{1}t_{m}}{u}},t_{m}\right),$ where $u=(1+t_{1}^{2})(1+t_{2}^{2})\cdots (1+t_{m}^{2}).$ The key property of αm is that it is an embedding except for the double-point αm(1, 0, ... , 0) = αm(−1, 0, ... , 0). Moreover, for |(t1, ... , tm)| large, it is approximately the linear embedding (0, t1, 0, t2, ... , 0, tm). Eventual consequences of the Whitney trick The Whitney trick was used by Stephen Smale to prove the h-cobordism theorem, from which follows the Poincaré conjecture in dimensions m ≥ 5, and the classification of smooth structures on discs (also in dimensions 5 and up). This provides the foundation for surgery theory, which classifies manifolds in dimension 5 and above. Given two oriented submanifolds of complementary dimensions in a simply connected manifold of dimension ≥ 5, one can apply an isotopy to one of the submanifolds so that all the points of intersection have the same sign.
History See also: History of manifolds and varieties The occasion of the proof by Hassler Whitney of the embedding theorem for smooth manifolds is said (rather surprisingly) to have been the first complete exposition of the manifold concept precisely because it brought together and unified the differing concepts of manifolds at the time: no longer was there any confusion as to whether abstract manifolds, intrinsically defined via charts, were any more or less general than manifolds extrinsically defined as submanifolds of Euclidean space. See also the history of manifolds and varieties for context. Sharper results Although every n-manifold embeds in $\mathbb {R} ^{2n},$ one can frequently do better. Let e(n) denote the smallest integer so that all compact connected n-manifolds embed in $\mathbb {R} ^{e(n)}.$ Whitney's strong embedding theorem states that e(n) ≤ 2n. For n = 1, 2 we have e(n) = 2n, as the circle and the Klein bottle show. More generally, for n = 2k we have e(n) = 2n, as the 2k-dimensional real projective space shows. Whitney's result can be improved to e(n) ≤ 2n − 1 unless n is a power of 2. This is a result of André Haefliger and Morris Hirsch (for n > 4) and C. T. C. Wall (for n = 3); these authors used important preliminary results and particular cases proved by Hirsch, William S. Massey, Sergey Novikov and Vladimir Rokhlin.[1] At present the function e is not known in closed form for all integers (compare to the Whitney immersion theorem, where the analogous number is known). Restrictions on manifolds One can strengthen the results by putting additional restrictions on the manifold. For example, the n-sphere always embeds in $\mathbb {R} ^{n+1}$ – which is the best possible (closed n-manifolds cannot embed in $\mathbb {R} ^{n}$).
Any compact orientable surface and any compact surface with non-empty boundary embeds in $\mathbb {R} ^{3},$ though any closed non-orientable surface needs $\mathbb {R} ^{4}.$ If N is a compact orientable n-dimensional manifold, then N embeds in $\mathbb {R} ^{2n-1}$ (for n not a power of 2 the orientability condition is superfluous). For n a power of 2 this is a result of André Haefliger and Morris Hirsch (for n > 4), and Fuquan Fang (for n = 4); these authors used important preliminary results proved by Jacques Boéchat and Haefliger, Simon Donaldson, Hirsch and William S. Massey.[1] Haefliger proved that if N is a compact n-dimensional k-connected manifold, then N embeds in $\mathbb {R} ^{2n-k}$ provided 2k + 3 ≤ n.[1] Isotopy versions A relatively 'easy' result is to prove that any two embeddings of a 1-manifold into $\mathbb {R} ^{4}$ are isotopic (see Knot theory#Higher dimensions). This is proved using general position, which also allows one to show that any two embeddings of an n-manifold into $\mathbb {R} ^{2n+2}$ are isotopic. This result is an isotopy version of the weak Whitney embedding theorem. Wu proved that for n ≥ 2, any two embeddings of an n-manifold into $\mathbb {R} ^{2n+1}$ are isotopic. This result is an isotopy version of the strong Whitney embedding theorem. As an isotopy version of his embedding result, Haefliger proved that if N is a compact n-dimensional k-connected manifold, then any two embeddings of N into $\mathbb {R} ^{2n-k+1}$ are isotopic provided 2k + 2 ≤ n. The dimension restriction 2k + 2 ≤ n is sharp: Haefliger went on to give examples of non-trivially embedded 3-spheres in $\mathbb {R} ^{6}$ (and, more generally, (2d − 1)-spheres in $\mathbb {R} ^{3d}$). See further generalizations. See also • Representation theorem • Whitney immersion theorem • Nash embedding theorem • Takens's theorem • Nonlinear dimensionality reduction Notes 1.
See section 2 of Skopenkov (2008) References • Whitney, Hassler (1992), Eells, James; Toledo, Domingo (eds.), Collected Papers, Boston: Birkhäuser, ISBN 0-8176-3560-2 • Milnor, John (1965), Lectures on the h-cobordism theorem, Princeton University Press • Adachi, Masahisa (1993), Embeddings and Immersions, translated by Hudson, Kiki, American Mathematical Society, ISBN 0-8218-4612-4 • Skopenkov, Arkadiy (2008), "Embedding and knotting of manifolds in Euclidean spaces", in Nicholas Young; Yemon Choi (eds.), Surveys in Contemporary Mathematics, London Math. Soc. Lect. Notes., vol. 347, Cambridge: Cambridge University Press, pp. 248–342, arXiv:math/0604045, Bibcode:2006math......4045S, MR 2388495 External links • Classification of embeddings
Wikipedia
Whitney's planarity criterion In mathematics, Whitney's planarity criterion is a matroid-theoretic characterization of planar graphs, named after Hassler Whitney.[1] It states that a graph G is planar if and only if its graphic matroid is also cographic (that is, it is the dual matroid of another graphic matroid). In purely graph-theoretic terms, this criterion can be stated as follows: There must be another (dual) graph G'=(V',E') and a bijective correspondence between the edges E' and the edges E of the original graph G, such that a subset T of E forms a spanning tree of G if and only if the edges corresponding to the complementary subset E-T form a spanning tree of G'. Algebraic duals An equivalent form of Whitney's criterion is that a graph G is planar if and only if it has a dual graph whose graphic matroid is dual to the graphic matroid of G. A graph whose graphic matroid is dual to the graphic matroid of G is known as an algebraic dual of G. Thus, Whitney's planarity criterion can be expressed succinctly as: a graph is planar if and only if it has an algebraic dual. Topological duals If a graph is embedded into a topological surface such as the plane, in such a way that every face of the embedding is a topological disk, then the dual graph of the embedding is defined as the graph (or in some cases multigraph) H that has a vertex for every face of the embedding, and an edge for every adjacency between a pair of faces. 
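The spanning-tree correspondence in Whitney's criterion can be verified directly on the smallest interesting pair: the triangle C3 and its plane dual, the three-edge dipole. A minimal sketch (the union-find helper `is_spanning_tree` is our own and works on multigraph edge subsets given as endpoint pairs):

```python
from itertools import combinations

def is_spanning_tree(vertices, subset):
    """Union-find check: does this multigraph edge subset form a spanning tree?"""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in subset:
        ru, rw = find(u), find(w)
        if ru == rw:               # this edge would close a cycle
            return False
        parent[ru] = rw
    return len({find(v) for v in vertices}) == 1   # acyclic and connected

# The triangle C3 and its plane dual, the 3-edge dipole; the i-th edge of C3
# corresponds to the i-th parallel edge of the dipole.
c3_vertices, c3_edges = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
dipole_vertices, dipole_edges = ["in", "out"], [("in", "out")] * 3

# Whitney's criterion: T spans C3 iff the complementary dual edges span the dipole.
for r in range(4):
    for picked in combinations(range(3), r):
        t_primal = [c3_edges[i] for i in picked]
        t_dual = [dipole_edges[i] for i in range(3) if i not in picked]
        assert (is_spanning_tree(c3_vertices, t_primal)
                == is_spanning_tree(dipole_vertices, t_dual))
```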
According to Whitney's criterion, the following conditions are equivalent: • The surface on which the embedding exists is topologically equivalent to the plane, sphere, or punctured plane • H is an algebraic dual of G • Every simple cycle in G corresponds to a minimal cut in H, and vice versa • Every simple cycle in H corresponds to a minimal cut in G, and vice versa • Every spanning tree in G corresponds to the complement of a spanning tree in H, and vice versa.[2] It is possible to define dual graphs of graphs embedded on nonplanar surfaces such as the torus, but these duals do not generally have the correspondence between cuts, cycles, and spanning trees required by Whitney's criterion. References 1. Whitney, Hassler (1932), "Non-separable and planar graphs", Transactions of the American Mathematical Society, 34 (2): 339–362, doi:10.1090/S0002-9947-1932-1501641-2. 2. Tutte, W. T. (1965), "Lectures on matroids", Journal of Research of the National Bureau of Standards, 69B: 1–47, doi:10.6028/jres.069b.001, MR 0179781. See in particular section 2.5, "Bon-matroid of a graph", pp. 5–6, section 5.6, "Graphic and co-graphic matroids", pp. 19–20, and section 9, "Graphic matroids", pp. 38–47.
Whitney umbrella In geometry, the Whitney umbrella (or Whitney's umbrella, named after American mathematician Hassler Whitney, and sometimes called a Cayley umbrella) is a specific self-intersecting ruled surface placed in three dimensions. It is the union of all straight lines that pass through points of a fixed parabola and are perpendicular to a fixed straight line which is parallel to the axis of the parabola and lies on its perpendicular bisecting plane. Formulas Whitney's umbrella can be given by the parametric equations in Cartesian coordinates $\left\{{\begin{aligned}x(u,v)&=uv,\\y(u,v)&=u,\\z(u,v)&=v^{2},\end{aligned}}\right.$ where the parameters u and v range over the real numbers. It is also given by the implicit equation $x^{2}-y^{2}z=0.$ This formula also includes the negative z axis (which is called the handle of the umbrella). Properties Whitney's umbrella is a ruled surface and a right conoid. It is important in the field of singularity theory, as a simple local model of a pinch point singularity. The pinch point and the fold singularity are the only stable local singularities of maps from R2 to R3. In string theory, a Whitney brane is a D7-brane wrapping a variety whose singularities are locally modeled by the Whitney umbrella. Whitney branes appear naturally when taking Sen's weak coupling limit of F-theory. See also • Cross-cap • Right conoid • Ruled surface References • "Whitney's Umbrella". The Topological Zoo. The Geometry Center. Retrieved 2006-03-08. (Images and movies of the Whitney umbrella.)
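A quick numerical check (function names are ours) confirms that the parametrization satisfies the implicit equation, and that the handle lies on the zero set without being in the image of the parametrization:

```python
def whitney_umbrella(u, v):
    """Parametric point (x, y, z) = (uv, u, v^2) on the Whitney umbrella."""
    return (u * v, u, v * v)

def implicit(x, y, z):
    """Left-hand side of the implicit equation x^2 - y^2 z = 0."""
    return x * x - y * y * z

# Every parametric point satisfies the implicit equation:
for u in (-2.0, -0.5, 0.0, 1.0, 3.0):
    for v in (-1.5, 0.0, 0.7, 2.0):
        assert abs(implicit(*whitney_umbrella(u, v))) < 1e-12

# The "handle" (0, 0, z) with z < 0 also satisfies the equation,
# but is not in the image of the parametrization, since z = v^2 >= 0:
assert implicit(0.0, 0.0, -1.0) == 0.0
```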
Regular homotopy In the mathematical field of topology, a regular homotopy refers to a special kind of homotopy between immersions of one manifold in another. The homotopy must be a 1-parameter family of immersions. Similar to homotopy classes, one defines two immersions to be in the same regular homotopy class if there exists a regular homotopy between them. Regular homotopy for immersions is similar to isotopy of embeddings: they are both restricted types of homotopies. Stated another way, two continuous functions $f,g:M\to N$ are homotopic if they represent points in the same path-component of the mapping space $C(M,N)$, given the compact-open topology. The space of immersions is the subspace of $C(M,N)$ consisting of immersions, denoted by $\operatorname {Imm} (M,N)$. Two immersions $f,g:M\to N$ are regularly homotopic if they represent points in the same path-component of $\operatorname {Imm} (M,N)$. Examples Any two knots in 3-space are equivalent by regular homotopy, though not by isotopy. The Whitney–Graustein theorem classifies the regular homotopy classes of immersions of the circle into the plane; two immersions are regularly homotopic if and only if they have the same turning number – equivalently, total curvature; equivalently, if and only if their Gauss maps have the same degree/winding number. Stephen Smale classified the regular homotopy classes of a k-sphere immersed in $\mathbb {R} ^{n}$ – they are classified by homotopy groups of Stiefel manifolds, a generalization of the Gauss map, here with the k partial derivatives required not to vanish. More precisely, the set $I(n,k)$ of regular homotopy classes of immersions of the sphere $S^{k}$ in $\mathbb {R} ^{n}$ is in one-to-one correspondence with the elements of the group $\pi _{k}\left(V_{k}\left(\mathbb {R} ^{n}\right)\right)$. In case $k=n-1$ we have $V_{n-1}\left(\mathbb {R} ^{n}\right)\cong SO(n)$.
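The turning number appearing in the Whitney–Graustein theorem can be approximated numerically as the winding of the velocity vector; a rough sketch (the discretization choices are ours), comparing an immersed circle with a figure-eight curve:

```python
import numpy as np

def turning_number(x, y, t):
    """Approximate the winding of the velocity (x'(t), y'(t)) around the origin."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    theta = np.unwrap(np.arctan2(dy, dx))
    return round((theta[-1] - theta[0]) / (2 * np.pi))

t = np.linspace(0.0, 2.0 * np.pi, 4001)
circle = turning_number(np.cos(t), np.sin(t), t)     # velocity winds once: 1
eight = turning_number(np.sin(2 * t), np.sin(t), t)  # figure eight: 0
assert circle == 1 and eight == 0
```

By the Whitney–Graustein theorem, the two curves are therefore not regularly homotopic.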
Since $SO(1)$ is path connected, the case of $S^{0}$ is immediate; and $\pi _{2}(SO(3))\cong \pi _{2}\left(\mathbb {R} P^{3}\right)\cong \pi _{2}\left(S^{3}\right)\cong 0$. For $S^{6}$, consider the exact sequence $\pi _{6}(SO(6))\to \pi _{6}(SO(7))\to \pi _{6}\left(S^{6}\right)\to \pi _{5}(SO(6))\to \pi _{5}(SO(7))$: by the Bott periodicity theorem $\pi _{6}(SO(6))\cong \pi _{6}(\operatorname {Spin} (6))\cong \pi _{6}(SU(4))\cong \pi _{6}(U(4))\cong 0$, and since $\pi _{5}(SO(6))\cong \mathbb {Z} $ and $\pi _{5}(SO(7))\cong 0$, we have $\pi _{6}(SO(7))\cong 0$. Therefore all immersions of the spheres $S^{0},\ S^{2}$ and $S^{6}$ in Euclidean spaces of one more dimension are regularly homotopic. In particular, the sphere $S^{n}$ embedded in $\mathbb {R} ^{n+1}$ admits an eversion if $n=0,2,6$. A corollary of Smale's work is that there is only one regular homotopy class of a 2-sphere immersed in $\mathbb {R} ^{3}$. In particular, this means that sphere eversions exist, i.e. one can turn the 2-sphere "inside-out". Both of these examples consist of reducing regular homotopy to homotopy; this has subsequently been substantially generalized in the homotopy principle (or h-principle) approach. Non-degenerate homotopy For locally convex, closed space curves, one can also define non-degenerate homotopy. Here, the 1-parameter family of immersions must be non-degenerate (i.e. the curvature may never vanish). There are 2 distinct non-degenerate homotopy classes.[1] Further restrictions of non-vanishing torsion lead to 4 distinct equivalence classes.[2] References 1. Feldman, E. A. (1968). "Deformations of closed space curves". Journal of Differential Geometry. 2 (1): 67–75. doi:10.4310/jdg/1214501138. 2. Little, John A. (1971). "Third order nondegenerate homotopies of space curves". Journal of Differential Geometry. 5 (3): 503–515. doi:10.4310/jdg/1214430012. • Whitney, Hassler (1937). "On regular closed curves in the plane". Compositio Mathematica. 4: 276–284. • Smale, Stephen (February 1959). "A classification of immersions of the two-sphere" (PDF). 
Transactions of the American Mathematical Society. 90 (2): 281–290. doi:10.2307/1993205. JSTOR 1993205. • Smale, Stephen (March 1959). "The classification of immersions of spheres in Euclidean spaces" (PDF). Annals of Mathematics. 69 (2): 327–344. doi:10.2307/1970186. JSTOR 1970186.
Clique complex Clique complexes, independence complexes, flag complexes, Whitney complexes and conformal hypergraphs are closely related mathematical objects in graph theory and geometric topology that each describe the cliques (complete subgraphs) of an undirected graph. Clique complex The clique complex X(G) of an undirected graph G is an abstract simplicial complex (that is, a family of finite sets closed under the operation of taking subsets), formed by the sets of vertices in the cliques of G. Any subset of a clique is itself a clique, so this family of sets meets the requirement of an abstract simplicial complex that every subset of a set in the family should also be in the family. The clique complex can also be viewed as a topological space in which each clique of k vertices is represented by a simplex of dimension k – 1. The 1-skeleton of X(G) (also known as the underlying graph of the complex) is an undirected graph with a vertex for every 1-element set in the family and an edge for every 2-element set in the family; it is isomorphic to G.[1] Negative example Every clique complex is an abstract simplicial complex, but the converse is not true. For example, consider the abstract simplicial complex over {1,2,3,4} with maximal sets {1,2,3}, {2,3,4}, {4,1}. If it were the clique complex X(G) of some graph G, then G would have to have the edges {1,2}, {1,3}, {2,3}, {2,4}, {3,4}, {4,1}, so X(G) would also have to contain the clique {1,2,3,4}, which is not in the given complex. Independence complex The independence complex I(G) of an undirected graph G is an abstract simplicial complex formed by the sets of vertices in the independent sets of G. The clique complex of G is equivalent to the independence complex of the complement graph of G. Flag complex A flag complex is an abstract simplicial complex with an additional property called "2-determined": for every subset S of vertices, if every pair of vertices in S is in the complex, then S itself is in the complex too.
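The negative example above can be checked mechanically; the sketch below (helper names are ours) builds the downward closure of {1,2,3}, {2,3,4}, {4,1} and shows it differs from the clique complex of its own 1-skeleton:

```python
from itertools import combinations

def clique_complex(vertices, edges):
    """All cliques of the graph, as frozensets: the clique complex X(G)."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    cx = set()
    for r in range(1, len(vertices) + 1):
        for s in combinations(vertices, r):
            if all((u, v) in adj for u, v in combinations(s, 2)):
                cx.add(frozenset(s))
    return cx

# The complex with maximal faces {1,2,3}, {2,3,4}, {4,1}, closed downward:
faces = [{1, 2, 3}, {2, 3, 4}, {4, 1}]
K = {frozenset(t) for f in faces for r in range(1, len(f) + 1)
     for t in combinations(sorted(f), r)}

# Its 1-skeleton is the complete graph on {1,2,3,4} ...
edges = [tuple(e) for e in K if len(e) == 2]
X = clique_complex([1, 2, 3, 4], edges)
# ... whose clique complex contains {1,2,3,4}, which K lacks:
assert frozenset({1, 2, 3, 4}) in X and frozenset({1, 2, 3, 4}) not in K
```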
Every clique complex is a flag complex: if every pair of vertices in S is a clique of size 2, then there is an edge between them, so S is a clique. Every flag complex is a clique complex: given a flag complex, define a graph G on the set of all vertices, where two vertices u,v are adjacent in G iff {u,v} is in the complex (this graph is called the 1-skeleton of the complex). By definition of a flag complex, every set of vertices that are pairwise-connected, is in the complex. Therefore, the flag complex is equal to the clique complex on G. Thus, flag complexes and clique complexes are essentially the same thing. However, in many cases it is convenient to define a flag complex directly from some data other than a graph, rather than indirectly as the clique complex of a graph derived from that data.[2] Mikhail Gromov defined the no-Δ condition to be the condition of being a flag complex. Whitney complex Clique complexes are also known as Whitney complexes, after Hassler Whitney. A Whitney triangulation or clean triangulation of a two-dimensional manifold is an embedding of a graph G onto the manifold in such a way that every face is a triangle and every triangle is a face. If a graph G has a Whitney triangulation, it must form a cell complex that is isomorphic to the Whitney complex of G. In this case, the complex (viewed as a topological space) is homeomorphic to the underlying manifold. A graph G has a 2-manifold clique complex, and can be embedded as a Whitney triangulation, if and only if G is locally cyclic; this means that, for every vertex v in the graph, the induced subgraph formed by the neighbors of v forms a single cycle.[3] Conformal hypergraph The primal graph G(H) of a hypergraph is the graph on the same vertex set that has as its edges the pairs of vertices appearing together in the same hyperedge. 
A hypergraph is said to be conformal if every maximal clique of its primal graph is a hyperedge, or equivalently, if every clique of its primal graph is contained in some hyperedge.[4] If the hypergraph is required to be downward-closed (so it contains all hyperedges that are contained in some hyperedge) then the hypergraph is conformal precisely when it is a flag complex. This relates the language of hypergraphs to the language of simplicial complexes. Examples and applications The barycentric subdivision of any cell complex C is a flag complex having one vertex per cell of C. A collection of vertices of the barycentric subdivision form a simplex if and only if the corresponding collection of cells of C form a flag (a chain in the inclusion ordering of the cells).[2] In particular, the barycentric subdivision of a cell complex on a 2-manifold gives rise to a Whitney triangulation of the manifold. The order complex of a partially ordered set consists of the chains (totally ordered subsets) of the partial order. If every pair of some subset is itself ordered, then the whole subset is a chain, so the order complex satisfies the no-Δ condition. It may be interpreted as the clique complex of the comparability graph of the partial order.[2] The matching complex of a graph consists of the sets of edges no two of which share an endpoint; again, this family of sets satisfies the no-Δ condition. It may be viewed as the clique complex of the complement graph of the line graph of the given graph. When the matching complex is referred to without any particular graph as context, it means the matching complex of a complete graph. The matching complex of a complete bipartite graph Km,n is known as a chessboard complex. It is the clique complex of the complement graph of a rook's graph,[5] and each of its simplices represents a placement of rooks on an m × n chess board such that no two of the rooks attack each other. When m = n ± 1, the chessboard complex forms a pseudo-manifold.
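Conformality, as defined above, can be tested by brute force on small hypergraphs; a sketch (helper names are ours), reusing the earlier negative example:

```python
from itertools import combinations

def is_conformal(vertices, hyperedges):
    """Every clique of the primal graph is contained in some hyperedge."""
    adj = {frozenset(p) for h in hyperedges for p in combinations(sorted(h), 2)}
    for r in range(1, len(vertices) + 1):
        for s in combinations(vertices, r):
            # s is a clique of the primal graph if all its pairs are edges
            if all(frozenset(p) in adj for p in combinations(s, 2)):
                if not any(set(s) <= set(h) for h in hyperedges):
                    return False
    return True

# {1,2,3}, {2,3,4}, {4,1} has primal graph K4, but no hyperedge covers {1,2,3,4}:
assert not is_conformal([1, 2, 3, 4], [{1, 2, 3}, {2, 3, 4}, {4, 1}])
# Adding the full vertex set as a hyperedge makes it conformal:
assert is_conformal([1, 2, 3, 4], [{1, 2, 3, 4}, {1, 2, 3}, {2, 3, 4}, {4, 1}])
```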
The Vietoris–Rips complex of a set of points in a metric space is a special case of a clique complex, formed from the unit disk graph of the points; however, every clique complex X(G) may be interpreted as the Vietoris–Rips complex of the shortest path metric on the underlying graph G. Hodkinson & Otto (2003) describe an application of conformal hypergraphs in the logics of relational structures. In that context, the Gaifman graph of a relational structure is the same as the underlying graph of the hypergraph representing the structure, and a structure is guarded if it corresponds to a conformal hypergraph. Gromov showed that a cubical complex (that is, a family of hypercubes intersecting face-to-face) forms a CAT(0) space if and only if the complex is simply connected and the link of every vertex forms a flag complex. A cubical complex meeting these conditions is sometimes called a cubing or a space with walls.[1][6] Homology groups Meshulam[7] proves the following theorem on the homology of the clique complex. Given integers $l\geq 1,t\geq 0$, suppose a graph G satisfies a property called $P(l,t)$, which means that: • Every set of $l$ vertices in G has a common neighbor; • There exists a set A of vertices, that contains a common neighbor to every set of $l$ vertices, and in addition, the induced graph G[A] does not contain, as an induced subgraph, a copy of the 1-skeleton of the t-dimensional octahedral sphere. Then the j-th reduced homology of the clique complex X(G) is trivial for any j between 0 and $\max(l-t,\lfloor {l}/{2}\rfloor )-1$. See also • Simplex graph, a kind of graph having one node for every clique of the underlying graph • Partition matroid, a kind of matroid whose matroid intersections may form clique complexes Notes 1. Bandelt & Chepoi (2008). 2. Davis (2002). 3. Hartsfeld & Ringel (1991); Larrión, Neumann-Lara & Pizaña (2002); Malnič & Mohar (1992). 4. Berge (1989); Hodkinson & Otto (2003). 5. Dong & Wachs (2002). 6. Chatterji & Niblo (2005). 
7. Meshulam, Roy (2001-01-01). "The Clique Complex and Hypergraph Matching". Combinatorica. 21 (1): 89–94. doi:10.1007/s004930170006. ISSN 1439-6912. S2CID 207006642. References • Bandelt, H.-J.; Chepoi, V. (2008), "Metric graph theory and geometry: a survey", in Goodman, J. E.; Pach, J.; Pollack, R. (eds.), Surveys on Discrete and Computational Geometry: Twenty Years Later (PDF), Contemporary Mathematics, vol. 453, Providence, RI: AMS, pp. 49–86. • Berge, C. (1989), Hypergraphs: Combinatorics of Finite Sets, North-Holland, ISBN 0-444-87489-5. • Chatterji, I.; Niblo, G. (2005), "From wall spaces to CAT(0) cube complexes", International Journal of Algebra and Computation, 15 (5–6): 875–885, arXiv:math.GT/0309036, doi:10.1142/S0218196705002669, S2CID 2786607. • Davis, M. W. (2002), "Nonpositive curvature and reflection groups", in Daverman, R. J.; Sher, R. B. (eds.), Handbook of Geometric Topology, Elsevier, pp. 373–422. • Dong, X.; Wachs, M. L. (2002), "Combinatorial Laplacian of the matching complex", Electronic Journal of Combinatorics, 9: R17, doi:10.37236/1634. • Hartsfeld, N.; Ringel, Gerhard (1991), "Clean triangulations", Combinatorica, 11 (2): 145–155, doi:10.1007/BF01206358, S2CID 28144260. • Hodkinson, I.; Otto, M. (2003), "Finite conformal hypergraph covers and Gaifman cliques in finite structures", The Bulletin of Symbolic Logic, 9 (3): 387–405, CiteSeerX 10.1.1.107.5000, doi:10.2178/bsl/1058448678. • Larrión, F.; Neumann-Lara, V.; Pizaña, M. A. (2002), "Whitney triangulations, local girth and iterated clique graphs", Discrete Mathematics, 258 (1–3): 123–135, doi:10.1016/S0012-365X(02)00266-2. • Malnič, A.; Mohar, B. (1992), "Generating locally cyclic triangulations of surfaces", Journal of Combinatorial Theory, Series B, 56 (2): 147–164, doi:10.1016/0095-8956(92)90015-P.
Whitney covering lemma In mathematical analysis, the Whitney covering lemma, or Whitney decomposition, asserts the existence of a certain type of partition of an open set in a Euclidean space. Originally it was employed in the proof of Hassler Whitney's extension theorem. The lemma was subsequently applied to prove generalizations of the Calderón–Zygmund decomposition. Roughly speaking, the lemma states that it is possible to decompose an open set by cubes each of whose diameters is proportional, within certain bounds, to its distance from the boundary of the open set. More precisely: Whitney Covering Lemma (Grafakos 2008, Appendix J) Let $\Omega $ be an open non-empty proper subset of $\mathbb {R} ^{n}$. Then there exists a family of closed cubes $\{Q_{j}\}_{j}$ such that • $\cup _{j}Q_{j}=\Omega $ and the $Q_{j}$'s have disjoint interiors. • ${\sqrt {n}}\ell (Q_{j})\leq \mathrm {dist} (Q_{j},\Omega ^{c})\leq 4{\sqrt {n}}\ell (Q_{j}).$ • If the boundaries of two cubes $Q_{j}$ and $Q_{k}$ touch then ${\frac {1}{4}}\leq {\frac {\ell (Q_{j})}{\ell (Q_{k})}}\leq 4.$ • For a given $Q_{j}$ there exist at most $12^{n}$ cubes $Q_{k}$ that touch it. Here $\ell (Q)$ denotes the side length of the cube $Q$. References • Grafakos, Loukas (2008). Classical Fourier Analysis. Springer. ISBN 978-0-387-09431-1. • DiBenedetto, Emmanuele (2002), Real analysis, Birkhäuser, ISBN 0-8176-4231-5. • Stein, Elias (1970), Singular Integrals and Differentiability Properties of Functions, Princeton University Press. • Whitney, Hassler (1934), "Analytic extensions of functions defined in closed sets", Transactions of the American Mathematical Society, American Mathematical Society, 36 (1): 63–89, doi:10.2307/1989708, JSTOR 1989708.
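In one dimension (so √n = 1) the lemma can be illustrated with dyadic intervals. The following sketch (our own construction, truncated at a finite generation K) decomposes Ω = (0, 1) and checks the distance and neighbour conditions:

```python
# A Whitney decomposition of Omega = (0,1) in R^1 (n = 1, so sqrt(n) = 1):
# dyadic intervals [2^-(k+1), 2^-k] near 0 and their mirror images near 1.
K = 20
intervals = []
for k in range(1, K + 1):
    intervals.append((2.0 ** -(k + 1), 2.0 ** -k))
    intervals.append((1.0 - 2.0 ** -k, 1.0 - 2.0 ** -(k + 1)))

def dist_to_complement(a, b):
    """Distance from the interval [a, b] to the complement of (0, 1)."""
    return min(a, 1.0 - b)

for a, b in intervals:
    ell = b - a
    assert ell <= dist_to_complement(a, b) <= 4 * ell   # Whitney's condition

# Touching intervals have comparable lengths (within a factor of 4):
for a, b in intervals:
    for c, d in intervals:
        if b == c or a == d:
            assert 0.25 <= (b - a) / (d - c) <= 4.0
```

The intervals have pairwise disjoint interiors and, as K grows, exhaust (0, 1), mirroring the role of the cubes Qj in the lemma.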
Whitney disk In mathematics, given two submanifolds A and B of a manifold X intersecting in two points p and q, a Whitney disc is a mapping from the two-dimensional disc D, with two marked points, to X, such that the two marked points go to p and q, one boundary arc of D goes to A and the other to B.[1] Their existence and embeddedness is crucial in proving the cobordism theorem, where it is used to cancel the intersection points; and its failure in low dimensions corresponds to not being able to embed a Whitney disc. Casson handles are an important technical tool for constructing the embedded Whitney disc relevant to many results on topological four-manifolds. Pseudoholomorphic Whitney discs are counted by the differential in Lagrangian intersection Floer homology. References 1. Scorpan, Alexandru (2005), The Wild World of 4-manifolds, American Mathematical Society, p. 560, ISBN 9780821837498.
Whitney inequality In mathematics, the Whitney inequality gives an upper bound for the error of best approximation of a function by polynomials in terms of the moduli of smoothness. It was first proved by Hassler Whitney in 1957,[1] and is an important tool in the field of approximation theory for obtaining upper estimates on the errors of best approximation. Statement of the theorem Denote the value of the best uniform approximation of a function $f\in C([a,b])$ by algebraic polynomials $P_{n}$ of degree $\leq n$ by $E_{n}(f)_{[a,b]}:=\inf _{P_{n}}{\|f-P_{n}\|_{C([a,b])}}$ The moduli of smoothness of order $k$ of a function $f\in C([a,b])$ are defined as: $\omega _{k}(t):=\omega _{k}(t;f;[a,b]):=\sup _{h\in [0,t]}\|\Delta _{h}^{k}(f;\cdot )\|_{C([a,b-kh])}\quad {\text{ for }}\quad t\in [0,(b-a)/k],$ $\omega _{k}(t):=\omega _{k}((b-a)/k)\quad {\text{ for}}\quad t>(b-a)/k,$ where $\Delta _{h}^{k}$ is the finite difference of order $k$. Theorem: [2] [Whitney, 1957] If $f\in C([a,b])$, then $E_{k-1}(f)_{[a,b]}\leq W_{k}\omega _{k}\left({\frac {b-a}{k}};f;[a,b]\right)$ where $W_{k}$ is a constant depending only on $k$. The Whitney constant $W(k)$ is the smallest value of $W_{k}$ for which the above inequality holds. The theorem is particularly useful when applied on intervals of small length, leading to good estimates on the error of spline approximation. Proof The original proof given by Whitney follows an analytic argument which utilizes the properties of moduli of smoothness. However, it can also be proved in a much shorter way using Peetre's K-functionals.[3] Let: $x_{0}:=a,\quad h:={\frac {b-a}{k}},\quad x_{j}:=x_{0}+jh,\quad F(x)=\int _{a}^{x}f(u)\,du,$ $G(x):=F(x)-L(x;F;x_{0},\ldots ,x_{k}),\quad g(x):=G'(x),$ $\omega _{k}(t):=\omega _{k}(t;f;[a,b])\equiv \omega _{k}(t;g;[a,b])$ where $L(x;F;x_{0},\ldots ,x_{k})$ is the Lagrange polynomial for $F$ at the nodes $\{x_{0},\ldots ,x_{k}\}$.
Now fix some $x\in [a,b]$ and choose $\delta $ for which $(x+k\delta )\in [a,b]$. Then: $\int _{0}^{1}\Delta _{t\delta }^{k}(g;x)\,dt=(-1)^{k}g(x)+\sum _{j=1}^{k}(-1)^{k-j}{\binom {k}{j}}\int _{0}^{1}g(x+jt\delta )\,dt$ $=(-1)^{k}g(x)+\sum _{j=1}^{k}{(-1)^{k-j}{\binom {k}{j}}{\frac {1}{j\delta }}(G(x+j\delta )-G(x))}.$ Therefore: $|g(x)|\leq \int _{0}^{1}|\Delta _{t\delta }^{k}(g;x)|\,dt+{\frac {2}{|\delta |}}\|G\|_{C([a,b])}\sum _{j=1}^{k}{\binom {k}{j}}{\frac {1}{j}}\leq \omega _{k}(|\delta |)+{\frac {1}{|\delta |}}2^{k+1}\|G\|_{C([a,b])}$ And since $\|G\|_{C([a,b])}\leq h\omega _{k}(h)$ (a property of moduli of smoothness), $E_{k-1}(f)_{[a,b]}\leq \|g\|_{C([a,b])}\leq \omega _{k}(|\delta |)+{\frac {1}{|\delta |}}h2^{k+1}\omega _{k}(h).$ Since $\delta $ can always be chosen in such a way that $h\geq |\delta |\geq h/2$, this completes the proof. Whitney constants and Sendov's conjecture It is important to have sharp estimates of the Whitney constants. It is easily shown that $W(1)=1/2$, and Burkill (1952) first proved that $W(2)\leq 1$, conjecturing that $W(k)\leq 1$ for all $k$. Whitney was also able to prove that [2] $W(2)={\frac {1}{2}},\quad {\frac {8}{15}}\leq W(3)\leq 0.7,\quad W(4)\leq 3.3,\quad W(5)\leq 10.4$ and $W(k)\geq {\frac {1}{2}},\quad k\in \mathbb {N} $ In 1964, Brudnyi was able to obtain the estimate $W(k)=O(k^{2k})$, and in 1982, Sendov proved that $W(k)\leq (k+1)k^{k}$. Then, in 1985, Ivanov and Takev proved that $W(k)=O(k\ln k)$, and Binev proved that $W(k)=O(k)$. Sendov conjectured that $W(k)\leq 1$ for all $k$, and in 1985 was able to prove that the Whitney constants are bounded above by an absolute constant, that is, $W(k)\leq 6$ for all $k$. Kryakin, Gilewicz, and Shevchuk (2002)[4] were able to show that $W(k)\leq 2$ for $k\leq 82000$, and that $W(k)\leq 2+{\frac {1}{e^{2}}}$ for all $k$. References 1. Whitney, Hassler (1957). "On Functions with Bounded nth Differences". J. Math. Pures Appl. 36 (IX): 67–95. 2.
Dzyadyk, Vladislav K.; Shevchuk, Igor A. (2008). "3.6". Theory of Uniform Approximation of Functions by Polynomials (1st ed.). Berlin, Germany: Walter de Gruyter. pp. 231–233. ISBN 978-3-11-020147-5. 3. Devore, R. A. K.; Lorentz, G. G. (4 November 1993). "6, Theorem 4.2". Constructive Approximation, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] (1st ed.). Berlin, Germany: Springer-Verlag. ISBN 978-3540506270. 4. Gilewicz, J.; Kryakin, Yu. V.; Shevchuk, I. A. (2002). "Boundedness by 3 of the Whitney Interpolation Constant". Journal of Approximation Theory. 119 (2): 271–290. doi:10.1006/jath.2002.3732.
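The inequality can be sanity-checked numerically for small k. In the sketch below (the grid resolution and the use of a discrete least-squares fit as an upper bound on the best uniform approximation are our choices) we take f(x) = |x − 1/2| on [0, 1] and test the cases k = 1 (with W(1) = 1/2) and k = 2 (with W(2) = 1/2):

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 2001)
f = np.abs(xs - 0.5)                      # f(x) = |x - 1/2| on [a, b] = [0, 1]

# k = 1: E_0(f) = (max f - min f)/2, and omega_1(b - a) = max f - min f.
E0 = (f.max() - f.min()) / 2.0
omega1 = f.max() - f.min()
assert E0 <= 0.5 * omega1 + 1e-12         # W(1) = 1/2, attained with equality here

# k = 2: estimate omega_2(1/2) = sup |f(x + 2h) - 2 f(x + h) + f(x)| on the grid.
M = len(f)
omega2 = 0.0
for j in range(1, (M - 1) // 2 + 1):      # h = j / (M - 1) <= 1/2
    d2 = f[2 * j:] - 2.0 * f[j:M - j] + f[:M - 2 * j]
    omega2 = max(omega2, np.abs(d2).max())

# Any polynomial gives an upper bound on the best approximation error E_1(f);
# here a discrete least-squares line (nearly the constant 1/4 by symmetry).
p = np.polyval(np.polyfit(xs, f, 1), xs)
E1_upper = np.abs(f - p).max()
assert E1_upper <= 0.5 * omega2 + 1e-9    # consistent with W(2) = 1/2
```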
Dual graph In the mathematical discipline of graph theory, the dual graph of a planar graph G is a graph that has a vertex for each face of G. The dual graph has an edge for each pair of faces in G that are separated from each other by an edge, and a self-loop when the same face appears on both sides of an edge. Thus, each edge e of G has a corresponding dual edge, whose endpoints are the dual vertices corresponding to the faces on either side of e. The definition of the dual depends on the choice of embedding of the graph G, so it is a property of plane graphs (graphs that are already embedded in the plane) rather than planar graphs (graphs that may be embedded but for which the embedding is not yet known). For planar graphs generally, there may be multiple dual graphs, depending on the choice of planar embedding of the graph. Historically, the first form of graph duality to be recognized was the association of the Platonic solids into pairs of dual polyhedra. Graph duality is a topological generalization of the geometric concepts of dual polyhedra and dual tessellations, and is in turn generalized combinatorially by the concept of a dual matroid. Variations of planar graph duality include a version of duality for directed graphs, and duality for graphs embedded onto non-planar two-dimensional surfaces. These notions of dual graphs should not be confused with a different notion, the edge-to-vertex dual or line graph of a graph. The term dual is used because the property of being a dual graph is symmetric, meaning that if H is a dual of a connected graph G, then G is a dual of H. When discussing the dual of a graph G, the graph G itself may be referred to as the "primal graph". Many other graph properties and structures may be translated into other natural properties and structures of the dual. 
For instance, cycles are dual to cuts, spanning trees are dual to the complements of spanning trees, and simple graphs (without parallel edges or self-loops) are dual to 3-edge-connected graphs. Graph duality can help explain the structure of mazes and of drainage basins. Dual graphs have also been applied in computer vision, computational geometry, mesh generation, and the design of integrated circuits. Examples Cycles and dipoles The unique planar embedding of a cycle graph divides the plane into only two regions, the inside and outside of the cycle, by the Jordan curve theorem. However, in an n-cycle, these two regions are separated from each other by n different edges. Therefore, the dual graph of the n-cycle is a multigraph with two vertices (dual to the regions), connected to each other by n dual edges. Such a graph is called a multiple edge, linkage, or sometimes a dipole graph. Conversely, the dual to an n-edge dipole graph is an n-cycle.[1] Dual polyhedra Main article: Dual polyhedron According to Steinitz's theorem, every polyhedral graph (the graph formed by the vertices and edges of a three-dimensional convex polyhedron) must be planar and 3-vertex-connected, and every 3-vertex-connected planar graph comes from a convex polyhedron in this way. Every three-dimensional convex polyhedron has a dual polyhedron; the dual polyhedron has a vertex for every face of the original polyhedron, with two dual vertices adjacent whenever the corresponding two faces share an edge. Whenever two polyhedra are dual, their graphs are also dual. For instance the Platonic solids come in dual pairs, with the octahedron dual to the cube, the dodecahedron dual to the icosahedron, and the tetrahedron dual to itself.[2] Polyhedron duality can also be extended to duality of higher dimensional polytopes,[3] but this extension of geometric duality does not have clear connections to graph-theoretic duality.
Self-dual graphs A plane graph is said to be self-dual if it is isomorphic to its dual graph. The wheel graphs provide an infinite family of self-dual graphs coming from self-dual polyhedra (the pyramids).[4][5] However, there also exist self-dual graphs that are not polyhedral, such as the one shown. Servatius & Christopher (1992) describe two operations, adhesion and explosion, that can be used to construct a self-dual graph containing a given planar graph; for instance, the self-dual graph shown can be constructed as the adhesion of a tetrahedron with its dual.[5] It follows from Euler's formula that every self-dual graph with n vertices has exactly 2n − 2 edges.[6] Every simple self-dual planar graph contains at least four vertices of degree three, and every self-dual embedding has at least four triangular faces.[7] Properties Many natural and important concepts in graph theory correspond to other equally natural but different concepts in the dual graph. Because the dual of the dual of a connected plane graph is isomorphic to the primal graph,[8] each of these pairings is bidirectional: if concept X in a planar graph corresponds to concept Y in the dual graph, then concept Y in a planar graph corresponds to concept X in the dual. Simple graphs versus multigraphs The dual of a simple graph need not be simple: it may have self-loops (an edge with both endpoints at the same vertex) or multiple edges connecting the same two vertices, as was already evident in the example of dipole multigraphs being dual to cycle graphs. As a special case of the cut-cycle duality discussed below, the bridges of a planar graph G are in one-to-one correspondence with the self-loops of the dual graph.[9] For the same reason, a pair of parallel edges in a dual multigraph (that is, a length-2 cycle) corresponds to a 2-edge cutset in the primal graph (a pair of edges whose deletion disconnects the graph). 
Therefore, a planar graph is simple if and only if its dual has no 1- or 2-edge cutsets; that is, if it is 3-edge-connected. The simple planar graphs whose duals are simple are exactly the 3-edge-connected simple planar graphs.[10] This class of graphs includes, but is not the same as, the class of 3-vertex-connected simple planar graphs. For instance, the figure showing a self-dual graph is 3-edge-connected (and therefore its dual is simple) but is not 3-vertex-connected. Uniqueness Because the dual graph depends on a particular embedding, the dual graph of a planar graph is not unique, in the sense that the same planar graph can have non-isomorphic dual graphs.[11] In the picture, the blue graphs are isomorphic but their dual red graphs are not. The upper red dual has a vertex with degree 6 (corresponding to the outer face of the blue graph) while in the lower red graph all degrees are less than 6. Hassler Whitney showed that if the graph is 3-connected then the embedding, and thus the dual graph, is unique.[12] By Steinitz's theorem, these graphs are exactly the polyhedral graphs, the graphs of convex polyhedra. A planar graph is 3-vertex-connected if and only if its dual graph is 3-vertex-connected. More generally, a planar graph has a unique embedding, and therefore also a unique dual, if and only if it is a subdivision of a 3-vertex-connected planar graph (a graph formed from a 3-vertex-connected planar graph by replacing some of its edges by paths). For some planar graphs that are not 3-vertex-connected, such as the complete bipartite graph K2,4, the embedding is not unique, but all embeddings are isomorphic. When this happens, correspondingly, all dual graphs are isomorphic. Because different embeddings may lead to different dual graphs, testing whether one graph is a dual of another (without already knowing their embeddings) is a nontrivial algorithmic problem. 
For biconnected graphs, it can be solved in polynomial time by using the SPQR trees of the graphs to construct a canonical form for the equivalence relation of having a shared mutual dual. For instance, the two red graphs in the illustration are equivalent according to this relation. However, for planar graphs that are not biconnected, this relation is not an equivalence relation and the problem of testing mutual duality is NP-complete.[13] Cuts and cycles A cutset in an arbitrary connected graph is a subset of edges defined from a partition of the vertices into two subsets, by including an edge in the subset when it has one endpoint on each side of the partition. Removing the edges of a cutset necessarily splits the graph into at least two connected components. A minimal cutset (also called a bond) is a cutset no proper subset of which is itself a cutset. A minimal cutset of a connected graph necessarily separates the graph into exactly two components, and consists of the set of edges that have one endpoint in each component.[14] A simple cycle is a connected subgraph in which each vertex of the cycle is incident to exactly two edges of the cycle.[15] In a connected planar graph G, every simple cycle of G corresponds to a minimal cutset in the dual of G, and vice versa.[16] This can be seen as a form of the Jordan curve theorem: each simple cycle separates the faces of G into the faces in the interior of the cycle and the faces of the exterior of the cycle, and the duals of the cycle edges are exactly the edges that cross from the interior to the exterior.[17] The girth of any planar graph (the size of its smallest cycle) equals the edge connectivity of its dual graph (the size of its smallest cutset).[18] This duality extends from individual cutsets and cycles to vector spaces defined from them. 
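The girth–edge-connectivity duality can be checked by brute force on a small dual pair: the cube graph has girth 4, and its planar dual, the octahedron, has edge connectivity 4. The sketch below uses only the standard library; the exhaustive edge-connectivity search is exponential and intended only for graphs this small.

```python
from collections import deque
from itertools import combinations

def adjacency(n, edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def girth(n, edges):
    """Length of the shortest cycle: delete each edge (s, t) in turn and
    find the shortest remaining s-t path by breadth-first search."""
    best = float("inf")
    for i, (s, t) in enumerate(edges):
        adj = adjacency(n, edges[:i] + edges[i + 1:])
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        if t in dist:
            best = min(best, dist[t] + 1)
    return best

def is_connected(n, edges):
    adj = adjacency(n, edges)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def edge_connectivity(n, edges):
    """Size of the smallest disconnecting edge set, by brute force."""
    for k in range(1, len(edges) + 1):
        for cut in combinations(range(len(edges)), k):
            if not is_connected(n, [e for i, e in enumerate(edges)
                                    if i not in cut]):
                return k

# The cube graph (vertices = 3-bit strings, edges flip one bit) and
# the octahedron (all pairs of 0..5 except the three antipodal pairs).
cube = [(u, u ^ (1 << b)) for u in range(8) for b in range(3)
        if u < u ^ (1 << b)]
octahedron = [(u, v) for u in range(6) for v in range(u + 1, 6)
              if not (u % 2 == 0 and v == u + 1)]
assert girth(8, cube) == edge_connectivity(6, octahedron) == 4
```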
The cycle space of a graph is defined as the family of all subgraphs that have even degree at each vertex; it can be viewed as a vector space over the two-element finite field, with the symmetric difference of two sets of edges acting as the vector addition operation in the vector space. Similarly, the cut space of a graph is defined as the family of all cutsets, with vector addition defined in the same way. Then the cycle space of any planar graph and the cut space of its dual graph are isomorphic as vector spaces.[19] Thus, the rank of a planar graph (the dimension of its cut space) equals the cyclomatic number of its dual (the dimension of its cycle space) and vice versa.[11] A cycle basis of a graph is a set of simple cycles that form a basis of the cycle space (every even-degree subgraph can be formed in exactly one way as a symmetric difference of some of these cycles). For edge-weighted planar graphs (with sufficiently general weights that no two cycles have the same weight) the minimum-weight cycle basis of the graph is dual to the Gomory–Hu tree of the dual graph, a collection of nested cuts that together include a minimum cut separating each pair of vertices in the graph. Each cycle in the minimum weight cycle basis has a set of edges that are dual to the edges of one of the cuts in the Gomory–Hu tree. When cycle weights may be tied, the minimum-weight cycle basis may not be unique, but in this case it is still true that the Gomory–Hu tree of the dual graph corresponds to one of the minimum weight cycle bases of the graph.[19] In directed planar graphs, simple directed cycles are dual to directed cuts (partitions of the vertices into two subsets such that all edges go in one direction, from one subset to the other). Strongly oriented planar graphs (graphs whose underlying undirected graph is connected, and in which every edge belongs to a cycle) are dual to directed acyclic graphs in which no edge belongs to a cycle. 
To put this another way, the strong orientations of a connected planar graph (assignments of directions to the edges of the graph that result in a strongly connected graph) are dual to acyclic orientations (assignments of directions that produce a directed acyclic graph).[20] In the same way, dijoins (sets of edges that include an edge from each directed cut) are dual to feedback arc sets (sets of edges that include an edge from each cycle).[21] Spanning trees A spanning tree may be defined as a set of edges that, together with all of the vertices of the graph, forms a connected and acyclic subgraph. But, by cut-cycle duality, if a set S of edges in a planar graph G is acyclic (has no cycles), then the set of edges dual to S has no cuts, from which it follows that the complementary set of dual edges (the duals of the edges that are not in S) forms a connected subgraph. Symmetrically, if S is connected, then the edges dual to the complement of S form an acyclic subgraph. Therefore, when S has both properties – it is connected and acyclic – the same is true for the complementary set in the dual graph. That is, each spanning tree of G is complementary to a spanning tree of the dual graph, and vice versa. Thus, the edges of any planar graph and its dual can together be partitioned (in multiple different ways) into two spanning trees, one in the primal and one in the dual, that together extend to all the vertices and faces of the graph but never cross each other. 
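This complementarity of spanning trees can be checked on a concrete embedding. The sketch below uses illustrative representations (faces as sets of edges, a small union–find routine), not a standard API: it builds a spanning tree of a plane embedding of the cube and verifies that the duals of the five remaining edges form a spanning tree of the six faces, so that E = (V − 1) + (F − 1).

```python
def spanning_tree(nodes, edges):
    """Greedy union-find spanning tree; edges are (end, end, label)."""
    parent = {x: x for x in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for u, v, label in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append(label)
    return tree

# Plane embedding of the cube: vertices are 3-bit labels, edges flip
# one bit, and each face fixes one bit (one face serves as the outer face).
edges = [(u, u ^ (1 << b)) for u in range(8) for b in range(3)
         if u < u ^ (1 << b)]
faces = [frozenset((u, v) for (u, v) in edges
                   if (u >> b) & 1 == (v >> b) & 1 == s)
         for b in range(3) for s in (0, 1)]

# Primal spanning tree: 8 - 1 = 7 edges.
tree = set(spanning_tree(range(8), [(u, v, (u, v)) for u, v in edges]))

# Each remaining edge dualizes to an edge between the two faces beside
# it; these duals should span all 6 faces with 6 - 1 = 5 edges.
dual_edges = []
for e in (e for e in edges if e not in tree):
    f, g = [i for i, face in enumerate(faces) if e in face]
    dual_edges.append((f, g, e))
dual_tree = spanning_tree(range(6), dual_edges)

assert len(tree) == 7 and len(dual_tree) == 5
assert len(tree) + len(dual_tree) == len(edges)   # E = (V-1) + (F-1)
```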
In particular, the minimum spanning tree of G is complementary to the maximum spanning tree of the dual graph.[22] However, this does not work for shortest path trees, even approximately: there exist planar graphs such that, for every pair of a spanning tree in the graph and a complementary spanning tree in the dual graph, at least one of the two trees has distances that are significantly longer than the distances in its graph.[23] An example of this type of decomposition into interdigitating trees can be seen in some simple types of mazes, with a single entrance and no disconnected components of its walls. In this case both the maze walls and the space between the walls take the form of a mathematical tree. If the free space of the maze is partitioned into simple cells (such as the squares of a grid) then this system of cells can be viewed as an embedding of a planar graph, in which the tree structure of the walls forms a spanning tree of the graph and the tree structure of the free space forms a spanning tree of the dual graph.[24] Similar pairs of interdigitating trees can also be seen in the tree-shaped pattern of streams and rivers within a drainage basin and the dual tree-shaped pattern of ridgelines separating the streams.[25] This partition of the edges and their duals into two trees leads to a simple proof of Euler’s formula V − E + F = 2 for planar graphs with V vertices, E edges, and F faces. Any spanning tree and its complementary dual spanning tree partition the edges into two subsets of V − 1 and F − 1 edges respectively, and adding the sizes of the two subsets gives the equation E = (V − 1) + (F − 1) which may be rearranged to form Euler's formula. According to Duncan Sommerville, this proof of Euler's formula is due to K. G. C. Von Staudt’s Geometrie der Lage (Nürnberg, 1847).[26] In nonplanar surface embeddings the set of dual edges complementary to a spanning tree is not a dual spanning tree. 
Instead this set of edges is the union of a dual spanning tree with a small set of extra edges whose number is determined by the genus of the surface on which the graph is embedded. The extra edges, in combination with paths in the spanning trees, can be used to generate the fundamental group of the surface.[27] Additional properties Any counting formula involving vertices and faces that is valid for all planar graphs may be transformed by planar duality into an equivalent formula in which the roles of the vertices and faces have been swapped. Euler's formula, which is self-dual, is one example. Another given by Harary involves the handshaking lemma, according to which the sum of the degrees of the vertices of any graph equals twice the number of edges. In its dual form, this lemma states that in a plane graph, the sum of the numbers of sides of the faces of the graph equals twice the number of edges.[28] The medial graph of a plane graph is isomorphic to the medial graph of its dual. Two planar graphs can have isomorphic medial graphs only if they are dual to each other.[29] A planar graph with four or more vertices is maximal (no more edges can be added while preserving planarity) if and only if its dual graph is both 3-vertex-connected and 3-regular.[30] A connected planar graph is Eulerian (has even degree at every vertex) if and only if its dual graph is bipartite.[31] A Hamiltonian cycle in a planar graph G corresponds to a partition of the vertices of the dual graph into two subsets (the interior and exterior of the cycle) whose induced subgraphs are both trees. In particular, Barnette's conjecture on the Hamiltonicity of cubic bipartite polyhedral graphs is equivalent to the conjecture that every Eulerian maximal planar graph can be partitioned into two induced trees.[32] If a planar graph G has Tutte polynomial TG(x,y), then the Tutte polynomial of its dual graph is obtained by swapping x and y. 
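This argument swap can be checked on the smallest dual pair discussed earlier: the triangle has Tutte polynomial x² + x + y, while its dual, the three-edge dipole, has y² + y + x. The sketch below evaluates the Tutte polynomial at a point by deletion–contraction; it is exponential time and meant only for small illustrative graphs.

```python
from collections import defaultdict

def tutte(edges, x, y):
    """Evaluate the Tutte polynomial of a connected multigraph, given
    as a list of edges (self-loops allowed), by deletion-contraction."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # self-loop: factor y
        return y * tutte(rest, x, y)
    if not reachable(rest, u, v):                # bridge: factor x
        return x * tutte(contract(rest, u, v), x, y)
    return tutte(rest, x, y) + tutte(contract(rest, u, v), x, y)

def contract(edges, u, v):
    """Merge vertex v into u (the contracted edge is already removed)."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def reachable(edges, u, v):
    """Is v reachable from u along the given edges?"""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {u}, [u]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return v in seen

triangle = [(0, 1), (1, 2), (2, 0)]       # T(x, y) = x^2 + x + y
dipole = [(0, 1), (0, 1), (0, 1)]         # T(x, y) = y^2 + y + x

# Dualization swaps the arguments: T_triangle(x, y) = T_dipole(y, x).
assert tutte(triangle, 3, 5) == tutte(dipole, 5, 3)
# T(2, 0) counts acyclic orientations, T(0, 2) strong orientations.
assert tutte(triangle, 2, 0) == 6 and tutte(triangle, 0, 2) == 2
```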
For this reason, if some particular value of the Tutte polynomial provides information about certain types of structures in G, then swapping the arguments to the Tutte polynomial will give the corresponding information for the dual structures. For instance, the number of strong orientations is TG(0,2) and the number of acyclic orientations is TG(2,0).[33] For bridgeless planar graphs, graph colorings with k colors correspond to nowhere-zero flows modulo k on the dual graph. For instance, the four color theorem (the existence of a 4-coloring for every planar graph) can be expressed equivalently as stating that the dual of every bridgeless planar graph has a nowhere-zero 4-flow. The number of k-colorings is counted (up to an easily computed factor) by the Tutte polynomial value TG(1 − k,0) and dually the number of nowhere-zero k-flows is counted by TG(0,1 − k).[34] An st-planar graph is a connected planar graph together with a bipolar orientation of that graph, an orientation that makes it acyclic with a single source and a single sink, both of which are required to be on the same face as each other. Such a graph may be made into a strongly connected graph by adding one more edge, from the sink back to the source, through the outer face. The dual of this augmented planar graph is itself the augmentation of another st-planar graph.[35] Variations Directed graphs In a directed plane graph, the dual graph may be made directed as well, by orienting each dual edge by a 90° clockwise turn from the corresponding primal edge.[35] Strictly speaking, this construction is not a duality of directed planar graphs, because starting from a graph G and taking the dual twice does not return to G itself, but instead constructs a graph isomorphic to the transpose graph of G, the graph formed from G by reversing all of its edges. Taking the dual four times returns to the original graph. 
Weak dual The weak dual of a plane graph is the subgraph of the dual graph whose vertices correspond to the bounded faces of the primal graph. A plane graph is outerplanar if and only if its weak dual is a forest. For any plane graph G, let G+ be the plane multigraph formed by adding a single new vertex v in the unbounded face of G, and connecting v to each vertex of the outer face (multiple times, if a vertex appears multiple times on the boundary of the outer face); then, G is the weak dual of the (plane) dual of G+.[36] Infinite graphs and tessellations The concept of duality applies as well to infinite graphs embedded in the plane as it does to finite graphs. However, care is needed to avoid topological complications such as points of the plane that are neither part of an open region disjoint from the graph nor part of an edge or vertex of the graph. When all faces are bounded regions surrounded by a cycle of the graph, an infinite planar graph embedding can also be viewed as a tessellation of the plane, a covering of the plane by closed disks (the tiles of the tessellation) whose interiors (the faces of the embedding) are disjoint open disks. Planar duality gives rise to the notion of a dual tessellation, a tessellation formed by placing a vertex at the center of each tile and connecting the centers of adjacent tiles.[37] The concept of a dual tessellation can also be applied to partitions of the plane into finitely many regions. It is closely related to but not quite the same as planar graph duality in this case. For instance, the Voronoi diagram of a finite set of point sites is a partition of the plane into polygons within which one site is closer than any other. The sites on the convex hull of the input give rise to unbounded Voronoi polygons, two of whose sides are infinite rays rather than finite line segments. 
The dual of this diagram is the Delaunay triangulation of the input, a planar graph that connects two sites by an edge whenever there exists a circle that contains those two sites and no other sites. The edges of the convex hull of the input are also edges of the Delaunay triangulation, but they correspond to rays rather than line segments of the Voronoi diagram. This duality between Voronoi diagrams and Delaunay triangulations can be turned into a duality between finite graphs in either of two ways: by adding an artificial vertex at infinity to the Voronoi diagram, to serve as the other endpoint for all of its rays,[38] or by treating the bounded part of the Voronoi diagram as the weak dual of the Delaunay triangulation. Although the Voronoi diagram and Delaunay triangulation are dual, their embedding in the plane may have additional crossings beyond the crossings of dual pairs of edges. Each vertex of the Delaunay triangulation is positioned within its corresponding face of the Voronoi diagram. Each vertex of the Voronoi diagram is positioned at the circumcenter of the corresponding triangle of the Delaunay triangulation, but this point may lie outside its triangle. Nonplanar embeddings K7 is dual to the Heawood graph in the torus K6 is dual to the Petersen graph in the projective plane The concept of duality can be extended to graph embeddings on two-dimensional manifolds other than the plane. The definition is the same: there is a dual vertex for each connected component of the complement of the graph in the manifold, and a dual edge for each graph edge connecting the two dual vertices on either side of the edge. In most applications of this concept, it is restricted to embeddings with the property that each face is a topological disk; this constraint generalizes the requirement for planar graphs that the graph be connected. 
With this constraint, the dual of any surface-embedded graph has a natural embedding on the same surface, such that the dual of the dual is isomorphic to and isomorphically embedded in the original graph. For instance, the complete graph K7 is a toroidal graph: it is not planar but can be embedded in a torus, with each face of the embedding being a triangle. This embedding has the Heawood graph as its dual graph.[39] The same concept works equally well for non-orientable surfaces. For instance, K6 can be embedded in the projective plane with ten triangular faces as the hemi-icosahedron, whose dual is the Petersen graph embedded as the hemi-dodecahedron.[40] Even planar graphs may have nonplanar embeddings, with duals derived from those embeddings that differ from their planar duals. For instance, the four Petrie polygons of a cube (hexagons formed by removing two opposite vertices of the cube) form the hexagonal faces of an embedding of the cube in a torus. The dual graph of this embedding has four vertices forming a complete graph K4 with doubled edges. In the torus embedding of this dual graph, the six edges incident to each vertex, in cyclic order around that vertex, cycle twice through the three other vertices. In contrast to the situation in the plane, this embedding of the cube and its dual is not unique; the cube graph has several other torus embeddings, with different duals.[39] Many of the equivalences between primal and dual graph properties of planar graphs fail to generalize to nonplanar duals, or require additional care in their generalization. Another operation on surface-embedded graphs is the Petrie dual, which uses the Petrie polygons of the embedding as the faces of a new embedding. 
Unlike the usual dual graph, it has the same vertices as the original graph, but generally lies on a different surface.[41] Surface duality and Petrie duality are two of the six Wilson operations, and together generate the group of these operations.[42] Matroids and algebraic duals An algebraic dual of a connected graph G is a graph G* such that G and G* have the same set of edges, any cycle of G is a cut of G*, and any cut of G is a cycle of G*. Every planar graph has an algebraic dual, which is in general not unique (any dual defined by a plane embedding will do). The converse is actually true, as settled by Hassler Whitney in Whitney's planarity criterion:[43] A connected graph G is planar if and only if it has an algebraic dual. The same fact can be expressed in the theory of matroids. If M is the graphic matroid of a graph G, then a graph G* is an algebraic dual of G if and only if the graphic matroid of G* is the dual matroid of M. Then Whitney's planarity criterion can be rephrased as stating that the dual matroid of a graphic matroid M is itself a graphic matroid if and only if the underlying graph G of M is planar. If G is planar, the dual matroid is the graphic matroid of the dual graph of G. In particular, all dual graphs, for all the different planar embeddings of G, have isomorphic graphic matroids.[44] For nonplanar surface embeddings, unlike planar duals, the dual graph is not generally an algebraic dual of the primal graph. And for a non-planar graph G, the dual matroid of the graphic matroid of G is not itself a graphic matroid. 
However, it is still a matroid whose circuits correspond to the cuts in G, and in this sense can be thought of as a combinatorially generalized algebraic dual of G.[45] The duality between Eulerian and bipartite planar graphs can be extended to binary matroids (which include the graphic matroids derived from planar graphs): a binary matroid is Eulerian if and only if its dual matroid is bipartite.[31] The two dual concepts of girth and edge connectivity are unified in matroid theory by matroid girth: the girth of the graphic matroid of a planar graph is the same as the graph's girth, and the girth of the dual matroid (the graphic matroid of the dual graph) is the edge connectivity of the graph.[18] Applications Along with its use in graph theory, the duality of planar graphs has applications in several other areas of mathematical and computational study. In geographic information systems, flow networks (such as the networks showing how water flows in a system of streams and rivers) are dual to cellular networks describing drainage divides. This duality can be explained by modeling the flow network as a spanning tree on a grid graph of an appropriate scale, and modeling the drainage divide as the complementary spanning tree of ridgelines on the dual grid graph.[46] In computer vision, digital images are partitioned into small square pixels, each of which has its own color. The dual graph of this subdivision into squares has a vertex per pixel and an edge between pairs of pixels that share an edge; it is useful for applications including clustering of pixels into connected regions of similar colors.[47] In computational geometry, the duality between Voronoi diagrams and Delaunay triangulations implies that any algorithm for constructing a Voronoi diagram can be immediately converted into an algorithm for the Delaunay triangulation, and vice versa.[48] The same duality can also be used in finite element mesh generation. 
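The interchangeability of Voronoi and Delaunay computations can be illustrated in miniature: for a few sites in general position, the Delaunay edges can be read off directly from the empty-circumcircle characterization given earlier. The sketch below is a brute-force illustration, not a practical algorithm; real implementations use far faster incremental or divide-and-conquer methods.

```python
from itertools import combinations

def circumcircle(a, b, c):
    """Center and squared radius of the circle through points a, b, c."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        return None                                  # collinear points
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), (ux - ax) ** 2 + (uy - ay) ** 2

def delaunay_edges(points):
    """Delaunay edges by the empty-circumcircle rule: a triangle is in
    the triangulation when no other site lies inside its circumcircle.
    Assumes sites in general position; O(n^4), illustration only."""
    edges = set()
    for tri in combinations(points, 3):
        circle = circumcircle(*tri)
        if circle is None:
            continue
        (ux, uy), r2 = circle
        if all((px - ux) ** 2 + (py - uy) ** 2 > r2
               for px, py in points if (px, py) not in tri):
            edges.update(combinations(sorted(tri), 2))
    return edges

# Three hull sites and one interior site: the triangulation consists of
# three triangles and uses all six edges among the four sites.
sites = [(0, 0), (4, 0), (2, 3), (2, 1)]
assert len(delaunay_edges(sites)) == 6
```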
Lloyd's algorithm, a method based on Voronoi diagrams for moving a set of points on a surface to more evenly spaced positions, is commonly used as a way to smooth a finite element mesh described by the dual Delaunay triangulation. This method improves the mesh by making its triangles more uniformly sized and shaped.[49] In the synthesis of CMOS circuits, the function to be synthesized is represented as a formula in Boolean algebra. Then this formula is translated into two series–parallel multigraphs. These graphs can be interpreted as circuit diagrams in which the edges of the graphs represent transistors, gated by the inputs to the function. One circuit computes the function itself, and the other computes its complement. One of the two circuits is derived by converting the conjunctions and disjunctions of the formula into series and parallel compositions of graphs, respectively. The other circuit reverses this construction, converting the conjunctions and disjunctions of the formula into parallel and series compositions of graphs.[50] These two circuits, augmented by an additional edge connecting the input of each circuit to its output, are planar dual graphs.[51] History The duality of convex polyhedra was recognized by Johannes Kepler in his 1619 book Harmonices Mundi.[52] Recognizable planar dual graphs, outside the context of polyhedra, appeared as early as 1725, in Pierre Varignon's posthumously published work, Nouvelle Méchanique ou Statique. This was even before Leonhard Euler's 1736 work on the Seven Bridges of Königsberg that is often taken to be the first work on graph theory. 
Varignon analyzed the forces on static systems of struts by drawing a graph dual to the struts, with edge lengths proportional to the forces on the struts; this dual graph is a type of Cremona diagram.[53] In connection with the four color theorem, the dual graphs of maps (subdivisions of the plane into regions) were mentioned by Alfred Kempe in 1879, and extended to maps on non-planar surfaces by Lothar Heffter in 1891.[54] Duality as an operation on abstract planar graphs was introduced by Hassler Whitney in 1931.[55] Notes 1. van Lint, J. H.; Wilson, Richard Michael (1992), A Course in Combinatorics, Cambridge University Press, p. 411, ISBN 0-521-42260-4. 2. Bóna, Miklós (2006), A walk through combinatorics (2nd ed.), World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, p. 276, doi:10.1142/6177, ISBN 981-256-885-9, MR 2361255. 3. Ziegler, Günter M. (1995), "2.3 Polarity", Lectures on Polytopes, Graduate Texts in Mathematics, vol. 152, pp. 59–64. 4. Weisstein, Eric W., "Self-dual graph", MathWorld 5. Servatius, Brigitte; Christopher, Peter R. (1992), "Construction of self-dual graphs", The American Mathematical Monthly, 99 (2): 153–158, doi:10.2307/2324184, JSTOR 2324184, MR 1144356. 6. Thulasiraman, K.; Swamy, M. N. S. (2011), Graphs: Theory and Algorithms, John Wiley & Sons, Exercise 7.11, p. 198, ISBN 978-1-118-03025-7. 7. See the proof of Theorem 5 in Servatius & Christopher (1992). 8. Nishizeki, Takao; Chiba, Norishige (2008), Planar Graphs: Theory and Algorithms, Dover Books on Mathematics, Dover Publications, p. 16, ISBN 978-0-486-46671-2. 9. Jensen, Tommy R.; Toft, Bjarne (1995), Graph Coloring Problems, Wiley-Interscience Series in Discrete Mathematics and Optimization, vol. 39, Wiley, p. 17, ISBN 978-0-471-02865-9, note that "bridge" and "loop" are dual concepts. 10. Balakrishnan, V. K. (1997), Schaum's Outline of Graph Theory, McGraw Hill Professional, Problem 8.64, p. 229, ISBN 978-0-07-005489-9. 11. Foulds, L. R. 
(2012), Graph Theory Applications, Springer, pp. 66–67, ISBN 978-1-4612-0933-1. 12. Bondy, Adrian; Murty, U.S.R. (2008), "Planar Graphs", Graph Theory, Graduate Texts in Mathematics, vol. 244, Springer, Theorem 10.28, p. 267, doi:10.1007/978-1-84628-970-5, ISBN 978-1-84628-969-9, LCCN 2007923502 13. Angelini, Patrizio; Bläsius, Thomas; Rutter, Ignaz (2014), "Testing mutual duality of planar graphs", International Journal of Computational Geometry and Applications, 24 (4): 325–346, arXiv:1303.1640, doi:10.1142/S0218195914600103, MR 3349917. 14. Diestel, Reinhard (2006), Graph Theory, Graduate Texts in Mathematics, vol. 173, Springer, p. 25, ISBN 978-3-540-26183-4. 15. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990], Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, p. 1081, ISBN 0-262-03293-7 16. Godsil, Chris; Royle, Gordon F. (2013), Algebraic Graph Theory, Graduate Texts in Mathematics, vol. 207, Springer, Theorem 14.3.1, p. 312, ISBN 978-1-4613-0163-9. 17. Oxley, J. G. (2006), Matroid Theory, Oxford Graduate Texts in Mathematics, vol. 3, Oxford University Press, p. 93, ISBN 978-0-19-920250-8. 18. Cho, Jung Jin; Chen, Yong; Ding, Yu (2007), "On the (co)girth of a connected matroid", Discrete Applied Mathematics, 155 (18): 2456–2470, doi:10.1016/j.dam.2007.06.015, MR 2365057. 19. Hartvigsen, D.; Mardon, R. (1994), "The all-pairs min cut problem and the minimum cycle basis problem on planar graphs", SIAM Journal on Discrete Mathematics, 7 (3): 403–418, doi:10.1137/S0895480190177042. 20. Noy, Marc (2001), "Acyclic and totally cyclic orientations in planar graphs", American Mathematical Monthly, 108 (1): 66–68, doi:10.2307/2695680, JSTOR 2695680, MR 1857074. 21. Gabow, Harold N. (1995), "Centroids, representations, and submodular flows", Journal of Algorithms, 18 (3): 586–628, doi:10.1006/jagm.1995.1022, MR 1334365 22. Tutte, W. T. (1984), Graph theory, Encyclopedia of Mathematics and its Applications, vol. 
21, Reading, MA: Addison-Wesley Publishing Company, Advanced Book Program, p. 289, ISBN 0-201-13520-5, MR 0746795. 23. Riley, T. R.; Thurston, W. P. (2006), "The absence of efficient dual pairs of spanning trees in planar graphs", Electronic Journal of Combinatorics, 13 (1): Note 13, 7, doi:10.37236/1151, MR 2255413. 24. Lyons, Russell (1998), "A bird's-eye view of uniform spanning trees and forests", Microsurveys in discrete probability (Princeton, NJ, 1997), DIMACS Ser. Discrete Math. Theoret. Comput. Sci., vol. 41, Amer. Math. Soc., Providence, RI, pp. 135–162, MR 1630412. See in particular pp. 138–139. 25. Flammini, Alessandro (October 1996), Scaling Behavior for Models of River Network, Ph.D. thesis, International School for Advanced Studies, pp. 40–41. 26. Sommerville, D. M. Y. (1958), An Introduction to the Geometry of N Dimensions, Dover. 27. Eppstein, David (2003), "Dynamic generators of topologically embedded graphs", Proceedings of the 14th ACM/SIAM Symposium on Discrete Algorithms, pp. 599–608, arXiv:cs.DS/0207082. 28. Harary, Frank (1969), Graph Theory, Reading, Mass.: Addison-Wesley Publishing Co., Theorem 9.4, p. 142, MR 0256911. 29. Gross, Jonathan L.; Yellen, Jay, eds. (2003), Handbook of Graph Theory, CRC Press, p. 724, ISBN 978-1-58488-090-5. 30. He, Xin (1999), "On floor-plan of plane graphs", SIAM Journal on Computing, 28 (6): 2150–2167, doi:10.1137/s0097539796308874. 31. Welsh, D. J. A. (1969), "Euler and bipartite matroids", Journal of Combinatorial Theory, 6 (4): 375–377, doi:10.1016/s0021-9800(69)80033-5, MR 0237368. 32. Florek, Jan (2010), "On Barnette's conjecture", Discrete Mathematics, 310 (10–11): 1531–1535, doi:10.1016/j.disc.2010.01.018, MR 2601261. 33. Las Vergnas, Michel (1980), "Convexity in oriented matroids", Journal of Combinatorial Theory, Series B, 29 (2): 231–243, doi:10.1016/0095-8956(80)90082-9, MR 0586435. 34. Tutte, William Thomas (1953), A contribution to the theory of chromatic polynomials 35. 
Whitney conditions In differential topology, a branch of mathematics, the Whitney conditions are conditions on a pair of submanifolds of a manifold introduced by Hassler Whitney in 1965. A stratification of a topological space is a finite filtration by closed subsets Fi, such that the difference between successive members Fi and Fi−1 of the filtration is either empty or a smooth submanifold of dimension i. The connected components of the difference Fi − Fi−1 are the strata of dimension i. A stratification is called a Whitney stratification if all pairs of strata satisfy the Whitney conditions A and B, as defined below. The Whitney conditions in Rn Let X and Y be two disjoint (locally closed) submanifolds of Rn, of dimensions i and j. • X and Y satisfy Whitney's condition A if, whenever a sequence of points x1, x2, … in X converges to a point y in Y and the sequence of tangent i-planes Tm to X at the points xm converges to an i-plane T as m tends to infinity, then T contains the tangent j-plane to Y at y. • X and Y satisfy Whitney's condition B if, whenever sequences x1, x2, … of points in X and y1, y2, … of points in Y both converge to the same point y in Y, the sequence of secant lines Lm between xm and ym converges to a line L as m tends to infinity, and the sequence of tangent i-planes Tm to X at the points xm converges to an i-plane T as m tends to infinity, then L is contained in T. John Mather first pointed out that Whitney's condition B implies Whitney's condition A in the notes of his lectures at Harvard in 1970, which have been widely distributed. He also defined the notion of Thom–Mather stratified space, and proved that every Whitney stratification is a Thom–Mather stratified space and hence is a topologically stratified space. Another approach to this fundamental result was given earlier by René Thom in 1969.
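A concrete pair illustrating the difference between the two conditions (an illustrative example, not from the article) is Y = the x-axis in R² and X = the parabola y = x² with the origin removed: condition A holds for (X, Y) at the origin, but condition B fails. A quick numerical sketch:

```python
from math import hypot

def unit(x, y):
    """Normalize a direction vector in R^2."""
    n = hypot(x, y)
    return (x / n, y / n)

# X = {(t, t^2) : t != 0}, Y = the x-axis; take x_m = (t, t^2) in X and
# y_m = (t, 0) in Y, both converging to the origin as t -> 0.
for t in (1e-1, 1e-2, 1e-4):
    tangent = unit(1.0, 2 * t)  # tangent direction to X at (t, t^2)
    secant = unit(0.0, t * t)   # direction of the secant line L_m
    print(t, tangent, secant)

# The tangent directions tend to (1, 0): the limit plane T is the x-axis,
# which contains the tangent line to Y at the origin, so condition A holds.
# Every secant direction is (0, 1): the limit line L is vertical and is
# not contained in T, so condition B fails for these sequences.
```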
David Trotman showed in his 1977 Warwick thesis that a stratification of a closed subset in a smooth manifold M satisfies Whitney's condition A if and only if the subset of the space of smooth mappings from a smooth manifold N into M consisting of those maps which are transverse to all of the strata of the stratification is open (using the Whitney, or strong, topology). The subspace of mappings transverse to any countable family of submanifolds of M is always dense by Thom's transversality theorem. The density of the set of transverse mappings is often interpreted by saying that transversality is a 'generic' property for smooth mappings, while the openness is often interpreted by saying that the property is 'stable'. Whitney conditions have become so widely used because of Whitney's 1965 theorem that every algebraic variety, or indeed analytic variety, admits a Whitney stratification, i.e. admits a partition into smooth submanifolds satisfying the Whitney conditions. More general singular spaces can be given Whitney stratifications, such as semialgebraic sets (due to René Thom) and subanalytic sets (due to Heisuke Hironaka). This has led to their use in engineering, control theory and robotics. In a thesis under the direction of Wieslaw Pawlucki at the Jagiellonian University in Kraków, Poland, the Vietnamese mathematician Ta Lê Loi proved further that every definable set in an o-minimal structure can be given a Whitney stratification. See also • Thom–Mather stratified space • Topologically stratified space • Thom's first isotopy lemma • Stratified space References • Mather, John Notes on topological stability, Harvard, 1970 (available on his webpage at Princeton University). • Thom, René Ensembles et morphismes stratifiés, Bulletin of the American Mathematical Society Vol. 75, pp. 240–284, 1969. • Trotman, David Stability of transversality to a stratification implies Whitney (a)-regularity, Inventiones Mathematicae 50(3), pp. 
273–277, 1979. • Trotman, David Comparing regularity conditions on stratifications, Singularities, Part 2 (Arcata, Calif., 1981), volume 40 of Proc. Sympos. Pure Math., pp. 575–586. American Mathematical Society, Providence, R.I., 1983. • Whitney, Hassler Local properties of analytic varieties. Differential and Combinatorial Topology (A Symposium in Honor of Marston Morse) pp. 205–244 Princeton Univ. Press, Princeton, N. J., 1965. • Whitney, Hassler, Tangents to an analytic variety, Annals of Mathematics 81, no. 3 (1965), pp. 496–549.
Vector bundle In mathematics, a vector bundle is a topological construction that makes precise the idea of a family of vector spaces parameterized by another space $X$ (for example $X$ could be a topological space, a manifold, or an algebraic variety): to every point $x$ of the space $X$ we associate (or "attach") a vector space $V(x)$ in such a way that these vector spaces fit together to form another space of the same kind as $X$ (e.g. a topological space, manifold, or algebraic variety), which is then called a vector bundle over $X$. The simplest example is the case that the family of vector spaces is constant, i.e., there is a fixed vector space $V$ such that $V(x)=V$ for all $x$ in $X$: in this case there is a copy of $V$ for each $x$ in $X$ and these copies fit together to form the vector bundle $X\times V$ over $X$. Such vector bundles are said to be trivial. A more complicated (and prototypical) class of examples are the tangent bundles of smooth (or differentiable) manifolds: to every point of such a manifold we attach the tangent space to the manifold at that point. Tangent bundles are not, in general, trivial bundles. For example, the tangent bundle of the sphere is non-trivial by the hairy ball theorem. In general, a manifold is said to be parallelizable if, and only if, its tangent bundle is trivial. Vector bundles are almost always required to be locally trivial, which means they are examples of fiber bundles. Also, the vector spaces are usually required to be over the real or complex numbers, in which case the vector bundle is said to be a real or complex vector bundle (respectively). Complex vector bundles can be viewed as real vector bundles with additional structure. In the following, we focus on real vector bundles in the category of topological spaces. Definition and first consequences A real vector bundle consists of: 1. topological spaces $X$ (base space) and $E$ (total space) 2. a continuous surjection $\pi :E\to X$ (bundle projection) 3. 
for every $x$ in $X$, the structure of a finite-dimensional real vector space on the fiber $\pi ^{-1}(\{x\})$ where the following compatibility condition is satisfied: for every point $p$ in $X$, there is an open neighborhood $U\subseteq X$ of $p$, a natural number $k$, and a homeomorphism $\varphi \colon U\times \mathbb {R} ^{k}\to \pi ^{-1}(U)$ such that for all $x$ in $U$, • $(\pi \circ \varphi )(x,v)=x$ for all vectors $v$ in $\mathbb {R} ^{k}$, and • the map $v\mapsto \varphi (x,v)$ is a linear isomorphism between the vector spaces $\mathbb {R} ^{k}$ and $\pi ^{-1}(\{x\})$. The open neighborhood $U$ together with the homeomorphism $\varphi $ is called a local trivialization of the vector bundle. The local trivialization shows that locally the map $\pi $ "looks like" the projection of $U\times \mathbb {R} ^{k}$ on $U$. Every fiber $\pi ^{-1}(\{x\})$ is a finite-dimensional real vector space and hence has a dimension $k_{x}$. The local trivializations show that the function $x\to k_{x}$ is locally constant, and is therefore constant on each connected component of $X$. If $k_{x}$ is equal to a constant $k$ on all of $X$, then $k$ is called the rank of the vector bundle, and $E$ is said to be a vector bundle of rank $k$. Often the definition of a vector bundle includes that the rank is well defined, so that $k_{x}$ is constant. Vector bundles of rank 1 are called line bundles, while those of rank 2 are less commonly called plane bundles. The Cartesian product $X\times \mathbb {R} ^{k}$, equipped with the projection $X\times \mathbb {R} ^{k}\to X$, is called the trivial bundle of rank $k$ over $X$. 
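As a concrete sketch of the definition (an illustration, not from the article): the Möbius band is a rank-1 bundle over the circle that is locally trivial but not trivial. It is covered by two trivializing arcs, and over the two components of their overlap the trivializations differ by the sign ±1; the chart conventions below are choices made for this sketch.

```python
from math import pi

# Illustrative sketch: the Möbius band as a rank-1 bundle over the circle,
# with base points parameterized by an angle in [0, 2*pi).
# Two trivializing arcs cover the circle:
#   U = angles in (0, 2*pi)   (circle minus the point at angle 0)
#   V = angles in (-pi, pi)   (circle minus the point at angle pi)
# Their overlap has two components, and the two trivializations differ by
# the 1x1 invertible matrix g_UV(x) = +1 on one component, -1 on the other.

def g_UV(angle):
    if 0 < angle < pi:         # first component of the overlap
        return 1.0
    if pi < angle < 2 * pi:    # second component (equals (-pi, 0) shifted by 2*pi)
        return -1.0
    raise ValueError("angle not in the overlap of U and V")

# phi_U^{-1}(phi_V(x, v)) = (x, g_UV(x) * v): each value is a linear
# isomorphism of R, i.e. an invertible 1x1 matrix.
for angle in (0.5, 1.0, 4.0, 6.0):
    assert g_UV(angle) != 0.0
    assert g_UV(angle) ** 2 == 1.0  # consistent with g_VU = g_UV^{-1}
```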
Transition functions Given a vector bundle $E\to X$ of rank $k$, and a pair of neighborhoods $U$ and $V$ over which the bundle trivializes via ${\begin{aligned}\varphi _{U}\colon U\times \mathbb {R} ^{k}&\mathrel {\xrightarrow {\cong } } \pi ^{-1}(U),\\\varphi _{V}\colon V\times \mathbb {R} ^{k}&\mathrel {\xrightarrow {\cong } } \pi ^{-1}(V)\end{aligned}}$ the composite function $\varphi _{U}^{-1}\circ \varphi _{V}\colon (U\cap V)\times \mathbb {R} ^{k}\to (U\cap V)\times \mathbb {R} ^{k}$ is well-defined on the overlap, and satisfies $\varphi _{U}^{-1}\circ \varphi _{V}(x,v)=\left(x,g_{UV}(x)v\right)$ for some ${\text{GL}}(k)$-valued function $g_{UV}\colon U\cap V\to \operatorname {GL} (k).$ These are called the transition functions (or the coordinate transformations) of the vector bundle. The set of transition functions forms a Čech cocycle in the sense that $g_{UU}(x)=I,\quad g_{UV}(x)g_{VW}(x)g_{WU}(x)=I$ for all $U,V,W$ over which the bundle trivializes satisfying $U\cap V\cap W\neq \emptyset $. Thus the data $(E,X,\pi ,\mathbb {R} ^{k})$ defines a fiber bundle; the additional data of the $g_{UV}$ specifies a ${\text{GL}}(k)$ structure group in which the action on the fiber is the standard action of ${\text{GL}}(k)$. Conversely, given a fiber bundle $(E,X,\pi ,\mathbb {R} ^{k})$ with a ${\text{GL}}(k)$ cocycle acting in the standard way on the fiber $\mathbb {R} ^{k}$, there is associated a vector bundle. This is an example of the fibre bundle construction theorem for vector bundles, and can be taken as an alternative definition of a vector bundle. Subbundles Main article: Subbundle One simple method of constructing vector bundles is by taking subbundles of other vector bundles. Given a vector bundle $\pi :E\to X$ over a topological space, a subbundle is simply a subspace $F\subset E$ for which the restriction $\left.\pi \right|_{F}$ of $\pi $ to $F$ gives $\left.\pi \right|_{F}:F\to X$ the structure of a vector bundle also. 
In this case the fibre $F_{x}\subset E_{x}$ is a vector subspace for every $x\in X$. A subbundle of a trivial bundle need not be trivial, and indeed every real vector bundle over a compact space can be viewed as a subbundle of a trivial bundle of sufficiently high rank. For example, the Möbius band, a non-trivial line bundle over the circle, can be seen as a subbundle of the trivial rank 2 bundle over the circle. Vector bundle morphisms A morphism from the vector bundle π1: E1 → X1 to the vector bundle π2: E2 → X2 is given by a pair of continuous maps f: E1 → E2 and g: X1 → X2 such that g ∘ π1 = π2 ∘ f and, for every x in X1, the map π1−1({x}) → π2−1({g(x)}) induced by f is a linear map between vector spaces. Note that g is determined by f (because π1 is surjective), and f is then said to cover g. The class of all vector bundles together with bundle morphisms forms a category. Restricting to vector bundles for which the spaces are manifolds (and the bundle projections are smooth maps) and smooth bundle morphisms we obtain the category of smooth vector bundles. Vector bundle morphisms are a special case of the notion of a bundle map between fiber bundles, and are sometimes called (vector) bundle homomorphisms. A bundle homomorphism from E1 to E2 with an inverse which is also a bundle homomorphism (from E2 to E1) is called a (vector) bundle isomorphism, and then E1 and E2 are said to be isomorphic vector bundles. An isomorphism of a (rank k) vector bundle E over X with the trivial bundle (of rank k over X) is called a trivialization of E, and E is then said to be trivial (or trivializable). The definition of a vector bundle shows that any vector bundle is locally trivial. We can also consider the category of all vector bundles over a fixed base space X. As morphisms in this category we take those morphisms of vector bundles whose map on the base space is the identity map on X. 
That is, bundle morphisms for which the following diagram commutes: (Note that this category is not abelian; the kernel of a morphism of vector bundles is in general not a vector bundle in any natural way.) A vector bundle morphism between vector bundles π1: E1 → X1 and π2: E2 → X2 covering a map g from X1 to X2 can also be viewed as a vector bundle morphism over X1 from E1 to the pullback bundle g*E2. Sections and locally free sheaves Given a vector bundle π: E → X and an open subset U of X, we can consider sections of π on U, i.e. continuous functions s: U → E where the composite π ∘ s is such that (π ∘ s)(u) = u for all u in U. Essentially, a section assigns to every point of U a vector from the attached vector space, in a continuous manner. As an example, sections of the tangent bundle of a differential manifold are nothing but vector fields on that manifold. Let F(U) be the set of all sections on U. F(U) always contains at least one element, namely the zero section: the function s that maps every element x of U to the zero element of the vector space π−1({x}). With the pointwise addition and scalar multiplication of sections, F(U) becomes itself a real vector space. The collection of these vector spaces is a sheaf of vector spaces on X. If s is an element of F(U) and α: U → R is a continuous map, then αs (pointwise scalar multiplication) is in F(U). We see that F(U) is a module over the ring of continuous real-valued functions on U. Furthermore, if OX denotes the structure sheaf of continuous real-valued functions on X, then F becomes a sheaf of OX-modules. Not every sheaf of OX-modules arises in this fashion from a vector bundle: only the locally free ones do. (The reason: locally we are looking for sections of a projection U × Rk → U; these are precisely the continuous functions U → Rk, and such a function is a k-tuple of continuous functions U → R.) 
Even more: the category of real vector bundles on X is equivalent to the category of locally free and finitely generated sheaves of OX-modules. So we can think of the category of real vector bundles on X as sitting inside the category of sheaves of OX-modules; this latter category is abelian, so this is where we can compute kernels and cokernels of morphisms of vector bundles. A rank n vector bundle is trivial if and only if it has n linearly independent global sections. Operations on vector bundles Most operations on vector spaces can be extended to vector bundles by performing the vector space operation fiberwise. For example, if E is a vector bundle over X, then there is a bundle E* over X, called the dual bundle, whose fiber at x ∈ X is the dual vector space (Ex)*. Formally E* can be defined as the set of pairs (x, φ), where x ∈ X and φ ∈ (Ex)*. The dual bundle is locally trivial because the dual space of the inverse of a local trivialization of E is a local trivialization of E*: the key point here is that the operation of taking the dual vector space is functorial. There are many functorial operations which can be performed on pairs of vector spaces (over the same field), and these extend straightforwardly to pairs of vector bundles E, F on X (over the given field). A few examples follow. • The Whitney sum (named for Hassler Whitney) or direct sum bundle of E and F is a vector bundle E ⊕ F over X whose fiber over x is the direct sum Ex ⊕ Fx of the vector spaces Ex and Fx. • The tensor product bundle E ⊗ F is defined in a similar way, using fiberwise tensor product of vector spaces. • The Hom-bundle Hom(E, F) is a vector bundle whose fiber at x is the space of linear maps from Ex to Fx (which is often denoted Hom(Ex, Fx) or L(Ex, Fx)). The Hom-bundle is so-called (and useful) because there is a bijection between vector bundle homomorphisms from E to F over X and sections of Hom(E, F) over X. 
• Building on the previous example, given a section s of an endomorphism bundle Hom(E, E) and a function f: X → R, one can construct an eigenbundle by taking the fiber over a point x ∈ X to be the f(x)-eigenspace of the linear map s(x): Ex → Ex. Though this construction is natural, unless care is taken, the resulting object will not have local trivializations. Consider the case of s being the zero section and f having isolated zeroes. The fiber over these zeroes in the resulting "eigenbundle" will be isomorphic to the fiber over them in E, while everywhere else the fiber is the trivial 0-dimensional vector space. • The dual vector bundle E* is the Hom bundle Hom(E, R × X) of bundle homomorphisms of E and the trivial bundle R × X. There is a canonical vector bundle isomorphism Hom(E, F) = E* ⊗ F. Each of these operations is a particular example of a general feature of bundles: that many operations that can be performed on the category of vector spaces can also be performed on the category of vector bundles in a functorial manner. This is made precise in the language of smooth functors. An operation of a different nature is the pullback bundle construction. Given a vector bundle E → Y and a continuous map f: X → Y one can "pull back" E to a vector bundle f*E over X. The fiber over a point x ∈ X is essentially just the fiber over f(x) ∈ Y. Hence, Whitney summing E ⊕ F can be defined as the pullback bundle of the diagonal map from X to X × X where the bundle over X × X is E × F. Remark: Let X be a compact space. Any vector bundle E over X is a direct summand of a trivial bundle; i.e., there exists a bundle E' such that E ⊕ E' is trivial. This fails if X is not compact: for example, the tautological line bundle over the infinite real projective space does not have this property.[1] Additional structures and generalizations Vector bundles are often given more structure. For instance, vector bundles may be equipped with a vector bundle metric. 
Usually this metric is required to be positive definite, in which case each fibre of E becomes a Euclidean space. A vector bundle with a complex structure corresponds to a complex vector bundle, which may also be obtained by replacing real vector spaces in the definition with complex ones and requiring that all mappings be complex-linear in the fibers. More generally, one can typically understand the additional structure imposed on a vector bundle in terms of the resulting reduction of the structure group of a bundle. Vector bundles over more general topological fields may also be used. If, instead of a finite-dimensional vector space, the fiber F is taken to be a Banach space, then a Banach bundle is obtained.[2] Specifically, one must require that the local trivializations are Banach space isomorphisms (rather than just linear isomorphisms) on each of the fibers and that, furthermore, the transitions $g_{UV}\colon U\cap V\to \operatorname {GL} (F)$ are continuous mappings of Banach manifolds. In the corresponding theory for Cp bundles, all mappings are required to be Cp. Vector bundles are special fiber bundles, those whose fibers are vector spaces and whose cocycle respects the vector space structure. More general fiber bundles can be constructed in which the fiber may have other structures; for example sphere bundles are fibered by spheres. Smooth vector bundles A vector bundle (E, p, M) is smooth if E and M are smooth manifolds, p: E → M is a smooth map, and the local trivializations are diffeomorphisms. Depending on the required degree of smoothness, there are different corresponding notions of Cp bundles, infinitely differentiable C∞-bundles and real analytic Cω-bundles. In this section we will concentrate on C∞-bundles. The most important example of a C∞-vector bundle is the tangent bundle (TM, πTM, M) of a C∞-manifold M. 
A smooth vector bundle can be characterized by the fact that it admits transition functions as described above which are smooth functions on overlaps of trivializing charts U and V. That is, a vector bundle E is smooth if it admits a covering by trivializing open sets such that for any two such sets U and V, the transition function $g_{UV}:U\cap V\to \operatorname {GL} (k,\mathbb {R} )$ is a smooth function into the matrix group GL(k,R), which is a Lie group. Similarly, if the transition functions are: • Cr then the vector bundle is a Cr vector bundle, • real analytic then the vector bundle is a real analytic vector bundle (this requires the matrix group to have a real analytic structure), • holomorphic then the vector bundle is a holomorphic vector bundle (this requires the matrix group to be a complex Lie group), • algebraic functions then the vector bundle is an algebraic vector bundle (this requires the matrix group to be an algebraic group). The C∞-vector bundles (E, p, M) have a very important property not shared by more general C∞-fibre bundles. Namely, the tangent space Tv(Ex) at any v ∈ Ex can be naturally identified with the fibre Ex itself. This identification is obtained through the vertical lift vlv: Ex → Tv(Ex), defined as $\operatorname {vl} _{v}w[f]:=\left.{\frac {d}{dt}}\right|_{t=0}f(v+tw),\quad f\in C^{\infty }(E_{x}).$ The vertical lift can also be seen as a natural C∞-vector bundle isomorphism p*E → VE, where (p*E, p*p, E) is the pull-back bundle of (E, p, M) over E through p: E → M, and VE := Ker(p*) ⊂ TE is the vertical tangent bundle, a natural vector subbundle of the tangent bundle (TE, πTE, E) of the total space E. The total space E of any smooth vector bundle carries a natural vector field Vv := vlvv, known as the canonical vector field. More formally, V is a smooth section of (TE, πTE, E), and it can also be defined as the infinitesimal generator of the Lie-group action $(t,v)\mapsto e^{t}v$ given by the fibrewise scalar multiplication. 
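On the trivial bundle $E=M\times \mathbb {R} ^{k}$ with fibre coordinates $v^{1},\ldots ,v^{k}$, the canonical vector field and its flow can be written out explicitly (a standard computation, stated here as a sketch): $V=\sum _{i=1}^{k}v^{i}{\frac {\partial }{\partial v^{i}}},\qquad \Phi _{t}^{V}(x,v)=(x,e^{t}v),$ so that the flow of V is precisely the fibrewise scalar multiplication described above, and the zero set of V is the zero section.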
The canonical vector field V characterizes completely the smooth vector bundle structure in the following manner. As a preparation, note that when X is a smooth vector field on a smooth manifold M and x ∈ M such that Xx = 0, the linear mapping $C_{x}(X):T_{x}M\to T_{x}M;\quad C_{x}(X)Y=(\nabla _{Y}X)_{x}$ does not depend on the choice of the linear covariant derivative ∇ on M. The canonical vector field V on E satisfies the axioms 1. The flow (t, v) → ΦtV(v) of V is globally defined. 2. For each v ∈ E there is a unique limt→−∞ ΦtV(v) ∈ E. 3. Cv(V)∘Cv(V) = Cv(V) whenever Vv = 0. 4. The zero set of V is a smooth submanifold of E whose codimension is equal to the rank of Cv(V). Conversely, if E is any smooth manifold and V is a smooth vector field on E satisfying 1–4, then there is a unique vector bundle structure on E whose canonical vector field is V. For any smooth vector bundle (E, p, M) the total space TE of its tangent bundle (TE, πTE, E) has a natural secondary vector bundle structure (TE, p*, TM), where p* is the push-forward of the canonical projection p: E → M. The vector bundle operations in this secondary vector bundle structure are the push-forwards +*: T(E × E) → TE and λ*: TE → TE of the original addition +: E × E → E and scalar multiplication λ: E → E. K-theory The K-theory group, K(X), of a compact Hausdorff topological space is defined as the abelian group generated by isomorphism classes [E] of complex vector bundles modulo the relation that, whenever we have an exact sequence $0\to A\to B\to C\to 0,$ then $[B]=[A]+[C]$ in topological K-theory. KO-theory is a version of this construction which considers real vector bundles. K-theory with compact supports can also be defined, as well as higher K-theory groups. The famous periodicity theorem of Raoul Bott asserts that the K-theory of any space X is isomorphic to that of S2X, the double suspension of X. 
In algebraic geometry, one considers the K-theory groups consisting of coherent sheaves on a scheme X, as well as the K-theory groups of vector bundles on the scheme with the above equivalence relation. The two constructs are the same provided that the underlying scheme is smooth. See also General notions • Grassmannian: classifying spaces for vector bundle, among which projective spaces for line bundles • Characteristic class • Splitting principle • Stable bundle Topology and differential geometry • Gauge theory: the general study of connections on vector bundles and principal bundles and their relations to physics. • Connection: the notion needed to differentiate sections of vector bundles. Algebraic and analytic geometry • Algebraic vector bundle • Picard group • Holomorphic vector bundle Notes 1. Hatcher 2003, Example 3.6. 2. Lang 1995. Sources • Abraham, Ralph H.; Marsden, Jerrold E. (1978), Foundations of mechanics, London: Benjamin-Cummings, see section 1.5, ISBN 978-0-8053-0102-1. • Hatcher, Allen (2003), Vector Bundles & K-Theory (2.0 ed.). • Jost, Jürgen (2002), Riemannian Geometry and Geometric Analysis (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-42627-1, see section 1.5. • Lang, Serge (1995), Differential and Riemannian manifolds, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94338-1. • Lee, Jeffrey M. (2009), Manifolds and Differential Geometry, Graduate Studies in Mathematics, vol. 107, Providence: American Mathematical Society, ISBN 978-0-8218-4815-9. • Lee, John M. (2003), Introduction to Smooth Manifolds, New York: Springer, ISBN 0-387-95448-1 see Ch.5 • Rubei, Elena (2014), Algebraic Geometry, a concise dictionary, Berlin/Boston: Walter De Gruyter, ISBN 978-3-11-031622-3. External links • "Vector bundle", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Why is it useful to study vector bundles ? on MathOverflow • Why is it useful to classify the vector bundles of a space ? 
Whitney topologies In mathematics, and especially differential topology, functional analysis and singularity theory, the Whitney topologies are a countably infinite family of topologies defined on the set of smooth mappings between two smooth manifolds. They are named after the American mathematician Hassler Whitney. Construction Let M and N be two real, smooth manifolds. Furthermore, let C∞(M,N) denote the space of smooth mappings between M and N. The notation C∞ means that the mappings are infinitely differentiable, i.e. partial derivatives of all orders exist and are continuous.[1] Whitney Ck-topology For some integer k ≥ 0, let Jk(M,N) denote the k-jet space of mappings between M and N. The jet space can be endowed with a smooth structure (i.e. a structure as a C∞ manifold) which makes it into a topological space. This topology is used to define a topology on C∞(M,N). For a fixed integer k ≥ 0 consider an open subset U ⊂ Jk(M,N), and denote by Sk(U) the following: $S^{k}(U)=\{f\in C^{\infty }(M,N):(J^{k}f)(M)\subseteq U\}.$ The sets Sk(U) form a basis for the Whitney Ck-topology on C∞(M,N).[2] Whitney C∞-topology For each choice of k ≥ 0, the Whitney Ck-topology gives a topology for C∞(M,N); in other words the Whitney Ck-topology tells us which subsets of C∞(M,N) are open sets. Let us denote by Wk the set of open subsets of C∞(M,N) with respect to the Whitney Ck-topology. Then the Whitney C∞-topology is defined to be the topology whose basis is given by W, where:[2] $W=\bigcup _{k=0}^{\infty }W^{k}.$ Dimensionality Notice that C∞(M,N) has infinite dimension, whereas Jk(M,N) has finite dimension. In fact, Jk(M,N) is a real, finite-dimensional manifold. To see this, let ℝk[x1,…,xm] denote the space of polynomials, with real coefficients, in m variables of order at most k and with zero as the constant term. 
This is a real vector space with dimension $\dim \left\{\mathbb {R} ^{k}[x_{1},\ldots ,x_{m}]\right\}=\sum _{i=1}^{k}{\frac {(m+i-1)!}{(m-1)!\cdot i!}}=\left({\frac {(m+k)!}{m!\cdot k!}}-1\right).$ Writing a = dim{ℝk[x1,…,xm]} then, by the standard theory of vector spaces ℝk[x1,…,xm] ≅ ℝa, and so is a real, finite-dimensional manifold. Next, define: $B_{m,n}^{k}=\bigoplus _{i=1}^{n}\mathbb {R} ^{k}[x_{1},\ldots ,x_{m}],\implies \dim \left\{B_{m,n}^{k}\right\}=n\dim \left\{\mathbb {R} ^{k}[x_{1},\ldots ,x_{m}]\right\}=n\left({\frac {(m+k)!}{m!\cdot k!}}-1\right).$ Using b to denote the dimension of Bkm,n, we see that Bkm,n ≅ ℝb, and so is a real, finite-dimensional manifold. In fact, if M and N have dimension m and n respectively then:[3] $\dim \!\left\{J^{k}(M,N)\right\}=m+n+\dim \!\left\{B_{m,n}^{k}\right\}=m+n\left({\frac {(m+k)!}{m!\cdot k!}}\right).$ Topology Given the Whitney C∞-topology, the space C∞(M,N) is a Baire space, i.e. every residual set is dense.[4] References 1. Golubitsky, M.; Guillemin, V. (1974), Stable Mappings and Their Singularities, Springer, p. 1, ISBN 0-387-90072-1 2. Golubitsky & Guillemin (1974), p. 42. 3. Golubitsky & Guillemin (1974), p. 40. 4. Golubitsky & Guillemin (1974), p. 44.
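The final dimension formula is easy to spot-check (a quick numerical sketch; the helper name is ours):

```python
from math import comb

# dim J^k(M,N) = m + n * (m+k)! / (m! * k!), with m = dim M, n = dim N
# (the formula above, using the exact binomial coefficient (m+k choose k))
def dim_jet_space(m, n, k):
    return m + n * comb(m + k, k)

# k = 0: a 0-jet is just a source point and a target point, so
# J^0(M,N) = M x N has dimension m + n
assert dim_jet_space(3, 2, 0) == 3 + 2

# m = n = k = 1: a 1-jet of a map R -> R is a triple (x, y, dy/dx)
assert dim_jet_space(1, 1, 1) == 3
```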
A Course of Modern Analysis A Course of Modern Analysis: an introduction to the general theory of infinite processes and of analytic functions; with an account of the principal transcendental functions (colloquially known as Whittaker and Watson) is a landmark textbook on mathematical analysis written by Edmund T. Whittaker and George N. Watson, first published by Cambridge University Press in 1902.[1] The first edition was Whittaker's alone, but later editions were co-authored with Watson. History Its first, second, third, and fourth editions were published in 1902,[2] 1915,[3] 1920,[4] and 1927,[5] respectively. Since then, it has been continuously reprinted and is still in print today.[5][6] A revised, expanded and digitally reset fifth edition, edited by Victor H. Moll, was published in 2021.[7] The book is notable for being the standard reference and textbook for a generation of Cambridge mathematicians including John E. Littlewood and Godfrey H. Hardy. Mary L. Cartwright studied it as preparation for her final honours on the advice of fellow student Vernon C. Morton, later Professor of Mathematics at Aberystwyth University.[8] But its reach was much further than just the Cambridge school; André Weil in his obituary of the French mathematician Jean Delsarte noted that Delsarte always had a copy on his desk.[9] In 1941 the book was included among a "selected list" of mathematical analysis books for use in universities in an article published for that purpose in The American Mathematical Monthly.[10] Notable features Some idiosyncratic but interesting problems from an older era of the Cambridge Mathematical Tripos are in the exercises.
The book was one of the earliest to use decimal numbering for its sections, an innovation the authors attribute to Giuseppe Peano.[11] Contents Below are the contents of the fourth edition: Part I. The Process of Analysis 1. Complex Numbers 2. The Theory of Convergence 3. Continuous Functions and Uniform Convergence 4. The Theory of Riemann Integration 5. The fundamental properties of Analytic Functions; Taylor's, Laurent's, and Liouville's Theorems 6. The Theory of Residues; application to the evaluation of Definite Integrals 7. The expansion of functions in Infinite Series 8. Asymptotic Expansions and Summable Series 9. Fourier Series and Trigonometrical Series 10. Linear Differential Equations 11. Integral Equations Part II. The Transcendental Functions 1. The Gamma Function 2. The Zeta Function of Riemann 3. The Hypergeometric Function 4. Legendre Functions 5. The Confluent Hypergeometric Function 6. Bessel Functions 7. The Equations of Mathematical Physics 8. Mathieu Functions 9. Elliptic Functions. General theorems and the Weierstrassian Functions 10. The Theta Functions 11. The Jacobian Elliptic Functions 12. Ellipsoidal Harmonics and Lamé's Equation Reception Reviews of the first edition George B. Mathews, in a 1903 review article published in The Mathematical Gazette opens by saying the book is "sure of a favorable reception" because of its "attractive account of some of the most valuable and interesting results of recent analysis".[12] He notes that Part I deals mainly with infinite series, focusing on power series and Fourier expansions while including the "elements of" complex integration and the theory of residues. Part II, in contrast, has chapters on the gamma function, Legendre functions, the hypergeometric series, Bessel functions, elliptic functions, and mathematical physics. Arthur S. 
Hathaway, in another 1903 review published in the Journal of the American Chemical Society, notes that the book centers around complex analysis, but that topics such as infinite series are "considered in all their phases" along with "all those important series and functions" developed by mathematicians such as Joseph Fourier, Friedrich Bessel, Joseph-Louis Lagrange, Adrien-Marie Legendre, Pierre-Simon Laplace, Carl Friedrich Gauss, Niels Henrik Abel, and others in their respective studies of "practice problems".[13] He goes on to say it "is a useful book for those who wish to make use of the most advanced developments of mathematical analysis in theoretical investigations of physical and chemical questions."[13] In a third review of the first edition, Maxime Bôcher, in a 1904 review published in the Bulletin of the American Mathematical Society notes that while the book falls short of the "rigor" of French, German, and Italian writers, it is a "gratifying sign of progress to find in an English book such an attempt at rigorous treatment as is here made".[1] He notes that important parts of the book were otherwise non-existent in the English language. See also • Bateman Manuscript Project References 1. Bôcher, Maxime (1904). "Review: A Course of Modern Analysis, by E. T. Whittaker". Bulletin of the American Mathematical Society (review). 10 (7): 351–354. doi:10.1090/s0002-9904-1904-01123-4. (4 pages) 2. Whittaker, Edmund Taylor (1902). A Course Of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions (1st ed.). Cambridge, UK: at the University Press. OCLC 1072208628. (xvi+378 pages) 3. Whittaker, Edmund Taylor; Watson, George Neville (1915). A Course Of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions (2nd ed.). 
Cambridge, UK: at the University Press. OCLC 474155529. (viii+560 pages) 4. Whittaker, Edmund Taylor; Watson, George Neville (1920). A Course Of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions (3rd ed.). Cambridge, UK: at the University Press. OCLC 1170617940. 5. Whittaker, Edmund Taylor; Watson, George Neville (1927-01-02). A Course Of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions (4th ed.). Cambridge, UK: at the University Press. ISBN 0-521-06794-4. ISBN 978-0-521-06794-2. (vi+608 pages) (reprinted: 1935, 1940, 1946, 1950, 1952, 1958, 1962, 1963, 1992) 6. Whittaker, Edmund Taylor; Watson, George Neville (1996) [1927]. A Course of Modern Analysis. Cambridge Mathematical Library (4th reissued ed.). Cambridge, UK: Cambridge University Press. doi:10.1017/cbo9780511608759. ISBN 978-0-521-58807-2. OCLC 802476524. ISBN 0-521-58807-3. (reprinted: 1999, 2000, 2002, 2010) 7. Whittaker, Edmund Taylor; Watson, George Neville (2021-08-26) [2021-08-07]. Moll, Victor Hugo (ed.). A Course of Modern Analysis (5th revised ed.). Cambridge, UK: Cambridge University Press. doi:10.1017/9781009004091. ISBN 978-1-31651893-9. ISBN 1-31651893-0. Archived from the original on 2021-08-10. Retrieved 2021-12-26. (700 pages) 8. O'Connor, John J.; Robertson, Edmund Frederick (October 2003). "Dame Mary Lucy Cartwright". MacTutor. St. Andrews, UK: St. Andrews University. Archived from the original on 2021-03-21. Retrieved 2021-03-21. 9. O'Connor, John J.; Robertson, Edmund Frederick (December 2005). "Jean Frédéric Auguste Delsarte". MacTutor. St. Andrews, UK: St. Andrews University. Archived from the original on 2021-03-21. Retrieved 2021-03-21. 10. "A Selected List of Mathematics Books for Colleges". The American Mathematical Monthly. 48 (9): 600–609. 1941. 
doi:10.1080/00029890.1941.11991146. ISSN 0002-9890. JSTOR 2303868. (10 pages) 11. Kowalski, Emmanuel [in German] (2008-06-03). "Peano paragraphing". E. Kowalski's blog - Comments on mathematics, mostly. Archived from the original on 2021-02-25. Retrieved 2021-03-21. 12. Mathews, George Ballard (1903). "Review of A Course of Modern Analysis". The Mathematical Gazette (review). 2 (39): 290–292. doi:10.2307/3603560. ISSN 0025-5572. JSTOR 3603560. S2CID 221486387. (3 pages) 13. Hathaway, Arthur Stafford (February 1903). "A Course in Modern Analysis". Journal of the American Chemical Society (review). 25 (2): 220. doi:10.1021/ja02004a022. ISSN 0002-7863. Further reading • Jourdain, Philip E. B. (1916-01-01). "(1) A Course of Pure Mathematics. By G. H. Hardy. Cambridge University Press, 1908. Pp. xvi, 428. Cloth, 12s. net. (2) A Course of Pure Mathematics. By G. H. Hardy. Second edition. Cambridge University Press, 1914. Pp. xii, 443. Cloth, 12s. net. (3) A Course of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions. By E. T. Whittaker. Cambridge University Press, 1902. Pp. xvi, 378. Cloth, 12s. 6d. net. (4) A Course of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions. Second edition, completely revised. By E. T. Whittaker and G. N. Watson. Cambridge University Press, 1915. Pp. viii, 560. Cloth, 18s. net". VI. Critical Notices. Mind (review). XXV (4): 525–533. doi:10.1093/mind/XXV.4.525. ISSN 0026-4423. JSTOR 2248860. (9 pages) • Neville, Eric Harold (1921). "Review of A Course of Modern Analysis". The Mathematical Gazette (review). 10 (152): 283. doi:10.2307/3604927. ISSN 0025-5572. JSTOR 3604927. (1 page) • Wrinch, Dorothy Maud (1921). "Review of A Course of Modern Analysis. Third Edition". 
Science Progress in the Twentieth Century (1919-1933) (review). Sage Publications, Inc. 15 (60): 658. ISSN 2059-4941. JSTOR 43769035. (1 page) • "Review of A Course of Modern Analysis". The Mathematical Gazette (review). 14 (196): 245. 1928. doi:10.2307/3606904. ISSN 0025-5572. JSTOR 3606904. (1 page) • "Review of A Course of Modern Analysis. An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions". The American Mathematical Monthly (review). 28 (4): 176. 1921. doi:10.2307/2972291. hdl:2027/coo1.ark:/13960/t17m0tq6p. ISSN 0002-9890. JSTOR 2972291. • Φ (1916). "Review of A Course of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions. Second edition, completely revised". The Monist (review). 26 (4): 639–640. ISSN 0026-9662. JSTOR 27900617. (2 pages) • "Review of A Course of Modern Analysis. An Introduction to the General Theory of Infinite Processes and of Analytical Functions, with an Account of the Principal Transcendental Functions. Second Edition". Science Progress (1916–1919) (review). Sage Publications, Inc. 11 (41): 160–161. 1916. ISSN 2059-495X. JSTOR 43426733. (2 pages) • "Review of A Course of Modern Analysis: An introduction to the General Theory of Infinite Processes and of Analytical Functions; With an Account of the Principal Transcendental Functions". The Mathematical Gazette (review). 8 (124): 306–307. 1916. doi:10.2307/3604810. ISSN 0025-5572. JSTOR 3604810. S2CID 40238008. (2 pages) • Schubert, A. (1963). "E. T. Whittaker and G. N. Watson, A Course of Modern Analysis. An introduction to the general theory of infinite processes and of analytic functions; with an account of the principal transcendental functions. Fourth Edition. 608 S. Cambridge 1962. Cambridge University Press. Preis brosch. 27/6 net". 
ZAMM - Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik (review). 43 (9): 435. Bibcode:1963ZaMM...43R.435S. doi:10.1002/zamm.19630430916. ISSN 1521-4001. (1 page) • "Modern Analysis. By E. T. Whittaker and G. N. Watson Pp. 608. 27s. 6d. 1962. (Cambridge University Press)". The Mathematical Gazette (review). 47 (359): 88. February 1963. doi:10.1017/S0025557200049032. ISSN 0025-5572. • "A Course of Modern Analysis". Nature (review). 97 (2432): 298–299. 1916-06-08. Bibcode:1916Natur..97..298.. doi:10.1038/097298a0. ISSN 1476-4687. S2CID 3980161. (1 page) • "A Course of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions". Nature (review). 106 (2669): 531. 1920-12-23. Bibcode:1920Natur.106R.531.. doi:10.1038/106531c0. hdl:2027/coo1.ark:/13960/t17m0tq6p. ISSN 1476-4687. S2CID 40238008. (1 page) • M.-T., L. M. (1928-03-17). "A Course of Modern Analysis: an Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions". Nature (review). 121 (3046): 417. Bibcode:1928Natur.121..417M. doi:10.1038/121417a0. ISSN 1476-4687. (1 page) • Stuart, S. N. (1981). "Table errata: A course of modern analysis [fourth edition, Cambridge Univ. Press, Cambridge, 1927; Jbuch 53, 180] by E. T. Whittaker and G. N. Watson". Mathematics of Computation (errata). American Mathematical Society. 36 (153): 315–320 [319]. doi:10.1090/S0025-5718-1981-0595076-1. ISSN 0025-5718. JSTOR 2007758. 
(1 of 6 pages)
Whittaker function In mathematics, a Whittaker function is a special solution of Whittaker's equation, a modified form of the confluent hypergeometric equation introduced by Whittaker (1903) to make the formulas involving the solutions more symmetric. More generally, Jacquet (1966, 1967) introduced Whittaker functions of reductive groups over local fields, where the functions studied by Whittaker are essentially the case where the local field is the real numbers and the group is SL2(R). Whittaker's equation is ${\frac {d^{2}w}{dz^{2}}}+\left(-{\frac {1}{4}}+{\frac {\kappa }{z}}+{\frac {1/4-\mu ^{2}}{z^{2}}}\right)w=0.$ It has a regular singular point at 0 and an irregular singular point at ∞. Two solutions are given by the Whittaker functions Mκ,μ(z), Wκ,μ(z), defined in terms of Kummer's confluent hypergeometric functions M and U by $M_{\kappa ,\mu }\left(z\right)=\exp \left(-z/2\right)z^{\mu +{\tfrac {1}{2}}}M\left(\mu -\kappa +{\tfrac {1}{2}},1+2\mu ,z\right)$ $W_{\kappa ,\mu }\left(z\right)=\exp \left(-z/2\right)z^{\mu +{\tfrac {1}{2}}}U\left(\mu -\kappa +{\tfrac {1}{2}},1+2\mu ,z\right).$ The Whittaker function $W_{\kappa ,\mu }(z)$ is unchanged when μ is replaced by −μ; in other words, considered as a function of μ at fixed κ and z, it is even. When κ and z are real, the functions take real values for real and for imaginary values of μ. These functions of μ play a role in so-called Kummer spaces.[1] Whittaker functions appear as coefficients of certain representations of the group SL2(R), called Whittaker models. References 1. Louis de Branges (1968). Hilbert spaces of entire functions. Prentice-Hall. ASIN B0006BUXNM. Sections 55-57. • Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 13". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.).
Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. pp. 504, 537. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. See also chapter 14. • Bateman, Harry (1953), Higher transcendental functions (PDF), vol. 1, McGraw-Hill. • Brychkov, Yu.A.; Prudnikov, A.P. (2001) [1994], "Whittaker function", Encyclopedia of Mathematics, EMS Press. • Daalhuis, Adri B. Olde (2010), "Whittaker function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. • Jacquet, Hervé (1966), "Une interprétation géométrique et une généralisation P-adique des fonctions de Whittaker en théorie des groupes semi-simples", Comptes Rendus de l'Académie des Sciences, Série A et B, 262: A943–A945, ISSN 0151-0509, MR 0200390 • Jacquet, Hervé (1967), "Fonctions de Whittaker associées aux groupes de Chevalley", Bulletin de la Société Mathématique de France, 95: 243–309, doi:10.24033/bsmf.1654, ISSN 0037-9484, MR 0271275 • Rozov, N.Kh. (2001) [1994], "Whittaker equation", Encyclopedia of Mathematics, EMS Press. • Slater, Lucy Joan (1960), Confluent hypergeometric functions, Cambridge University Press, MR 0107026. • Whittaker, Edmund T. (1903), "An expression of certain known functions as generalized hypergeometric functions", Bulletin of the A.M.S., Providence, R.I.: American Mathematical Society, 10 (3): 125–134, doi:10.1090/S0002-9904-1903-01077-5 Further reading • Hatamzadeh-Varmazyar, Saeed; Masouri, Zahra (2012-11-01). "A fast numerical method for analysis of one- and two-dimensional electromagnetic scattering using a set of cardinal functions". Engineering Analysis with Boundary Elements. 36 (11): 1631–1639. doi:10.1016/j.enganabound.2012.04.014. ISSN 0955-7997. • Gerasimov, A. A.; Lebedev, Dmitrii R.; Oblezin, Sergei V. (2012). 
"New integral representations of Whittaker functions for classical Lie groups". Russian Mathematical Surveys. 67 (1): 1–92. arXiv:0705.2886. Bibcode:2012RuMaS..67....1G. doi:10.1070/RM2012v067n01ABEH004776. ISSN 0036-0279. • Baudoin, Fabrice; O’Connell, Neil (2011). "Exponential functionals of brownian motion and class-one Whittaker functions". Annales de l'Institut Henri Poincaré, Probabilités et Statistiques. 47 (4): 1096–1120. Bibcode:2011AIHPB..47.1096B. doi:10.1214/10-AIHP401. S2CID 113388. • McKee, Mark (April 2009). "An Infinite Order Whittaker Function". Canadian Journal of Mathematics. 61 (2): 373–381. doi:10.4153/CJM-2009-019-x. ISSN 0008-414X. S2CID 55587239. • Mathai, A. M.; Pederzoli, Giorgio (1997-03-01). "Some properties of matrix-variate Laplace transforms and matrix-variate Whittaker functions". Linear Algebra and Its Applications. 253 (1): 209–226. doi:10.1016/0024-3795(95)00705-9. ISSN 0024-3795. • Whittaker, J. M. (May 1927). "On the Cardinal Function of Interpolation Theory". Proceedings of the Edinburgh Mathematical Society. 1 (1): 41–46. doi:10.1017/S0013091500007318. ISSN 1464-3839. • Cherednik, Ivan (2009). "Whittaker Limits of Difference Spherical Functions". International Mathematics Research Notices. 2009 (20): 3793–3842. arXiv:0807.2155. doi:10.1093/imrn/rnp065. ISSN 1687-0247. S2CID 6253357. • Slater, L. J. (October 1954). "Expansions of generalized Whittaker functions". Mathematical Proceedings of the Cambridge Philosophical Society. 50 (4): 628–631. Bibcode:1954PCPS...50..628S. doi:10.1017/S0305004100029765. ISSN 1469-8064. S2CID 122348447. • Etingof, Pavel (1999-01-12). "Whittaker functions on quantum groups and q-deformed Toda operators". arXiv:math/9901053. • McNamara, Peter J. (2011-01-15). "Metaplectic Whittaker functions and crystal bases". Duke Mathematical Journal. 156 (1): 1–31. arXiv:0907.2675. doi:10.1215/00127094-2010-064. ISSN 0012-7094. S2CID 979197. • Mathai, A. M.; Pederzoli, Giorgio (1998-01-15). 
"A whittaker function of matrix argument". Linear Algebra and Its Applications. 269 (1): 91–103. doi:10.1016/S0024-3795(97)00059-1. ISSN 0024-3795. • Frenkel, E.; Gaitsgory, D.; Kazhdan, D.; Vilonen, K. (1998). "Geometric realization of Whittaker functions and the Langlands conjecture". Journal of the American Mathematical Society. 11 (2): 451–484. doi:10.1090/S0894-0347-98-00260-4. ISSN 0894-0347. S2CID 13221400.
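Returning to the definition above, the relation defining $M_{\kappa ,\mu }$ can be checked numerically against the elementary special case $M_{0,1/2}(z)=2\sinh(z/2)$, which follows from $M(1,2,z)=(e^{z}-1)/z$. A minimal Python sketch, summing Kummer's M directly from its power series (helper names are illustrative, not a standard API):

```python
from math import exp, sinh

def kummer_m(a, b, z, terms=80):
    """Kummer's confluent hypergeometric function M(a, b, z),
    summed from its power series sum_n (a)_n / (b)_n * z^n / n!."""
    total, term = 1.0, 1.0
    for n in range(terms):
        # Ratio of consecutive series terms: (a+n) z / ((b+n)(n+1)).
        term *= (a + n) / ((b + n) * (n + 1)) * z
        total += term
    return total

def whittaker_m(kappa, mu, z):
    """M_{kappa,mu}(z) = e^{-z/2} z^{mu+1/2} M(mu - kappa + 1/2, 1 + 2 mu, z)."""
    return exp(-z / 2) * z ** (mu + 0.5) * kummer_m(mu - kappa + 0.5, 1 + 2 * mu, z)

# Special case kappa = 0, mu = 1/2: M(1, 2, z) = (e^z - 1)/z, so the
# Whittaker function collapses to 2*sinh(z/2).
value = whittaker_m(0.0, 0.5, 1.3)
```

The series converges for all z, so a modest number of terms suffices for small arguments; for serious use a library such as mpmath (whose whitm and whitw cover both solutions) is preferable.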
Whittaker model In representation theory, a branch of mathematics, the Whittaker model is a realization of a representation of a reductive algebraic group such as GL2 over a finite or local or global field on a space of functions on the group. It is named after E. T. Whittaker even though he never worked in this area, because (Jacquet 1966, 1967) pointed out that for the group SL2(R) some of the functions involved in the representation are Whittaker functions. Irreducible representations without a Whittaker model are sometimes called "degenerate", and those with a Whittaker model are sometimes called "generic". The representation θ10 of the symplectic group Sp4 is the simplest example of a degenerate representation. Whittaker models for GL2 If G is the algebraic group GL2 and F is a local field, and τ is a fixed non-trivial character of the additive group of F and π is an irreducible representation of a general linear group G(F), then the Whittaker model for π is a representation π on a space of functions ƒ on G(F) satisfying $f\left({\begin{pmatrix}1&b\\0&1\end{pmatrix}}g\right)=\tau (b)f(g).$ Jacquet & Langlands (1970) used Whittaker models to assign L-functions to admissible representations of GL2. Whittaker models for GLn Let $G$ be the general linear group $\operatorname {GL} _{n}$, $\psi $ a smooth complex valued non-trivial additive character of $F$ and $U$ the subgroup of $\operatorname {GL} _{n}$ consisting of unipotent upper triangular matrices. A non-degenerate character on $U$ is of the form $\chi (u)=\psi (\alpha _{1}x_{12}+\alpha _{2}x_{23}+\cdots +\alpha _{n-1}x_{n-1n}),$ for $u=(x_{ij})$ ∈ $U$ and non-zero $\alpha _{1},\ldots ,\alpha _{n-1}$ ∈ $F$. If $(\pi ,V)$ is a smooth representation of $G(F)$, a Whittaker functional $\lambda $ is a continuous linear functional on $V$ such that $\lambda (\pi (u)v)=\chi (u)\lambda (v)$ for all $u$ ∈ $U$, $v$ ∈ $V$. 
Multiplicity one states that, for $\pi $ unitary irreducible, the space of Whittaker functionals has dimension at most equal to one. Whittaker models for reductive groups If G is a split reductive group and U is the unipotent radical of a Borel subgroup B, then a Whittaker model for a representation is an embedding of it into the induced (Gelfand–Graev) representation IndG U (χ), where χ is a non-degenerate character of U, such as the sum of the characters corresponding to simple roots. See also • Gelfand–Graev representation, roughly the sum of Whittaker models over a finite field. • Kirillov model References • Jacquet, Hervé (1966), "Une interprétation géométrique et une généralisation P-adique des fonctions de Whittaker en théorie des groupes semi-simples", Comptes Rendus de l'Académie des Sciences, Série A et B, 262: A943–A945, ISSN 0151-0509, MR 0200390 • Jacquet, Hervé (1967), "Fonctions de Whittaker associées aux groupes de Chevalley", Bulletin de la Société Mathématique de France, 95: 243–309, doi:10.24033/bsmf.1654, ISSN 0037-9484, MR 0271275 • Jacquet, H.; Langlands, Robert P. (1970), Automorphic forms on GL(2), Lecture Notes in Mathematics, Vol. 114, vol. 114, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0058988, ISBN 978-3-540-04903-6, MR 0401654 • J. A. Shalika, The multiplicity one theorem for $GL_{n}$, The Annals of Mathematics, 2nd. Ser., Vol. 100, No. 2 (1974), 171-193. Further reading • Jacquet, Hervé; Shalika, Joseph (1983). "The Whittaker models of induced representations". Pacific Journal of Mathematics. 109 (1): 107–120. doi:10.2140/pjm.1983.109.107. ISSN 0030-8730.
Wholeness axiom In mathematics, the wholeness axiom is a strong axiom of set theory introduced by Paul Corazza in 2000.[1] Statement The wholeness axiom states roughly that there is an elementary embedding j from the Von Neumann universe V to itself. This has to be stated carefully to avoid Kunen's inconsistency theorem stating (roughly) that no such embedding exists. More specifically, as Samuel Gomes da Silva states, "the inconsistency is avoided by omitting from the schema all instances of the Replacement Axiom for j-formulas".[2] Thus, the wholeness axiom differs from Reinhardt cardinals (another way of providing elementary embeddings from V to itself) by allowing the axiom of choice and instead modifying the axiom of replacement. However, Holmes, Forster & Libert (2012) write that Corazza's theory should be "naturally viewed as a version of Zermelo set theory rather than ZFC".[3] If the wholeness axiom is consistent, then it is also consistent to add to the wholeness axiom the assertion that all sets are hereditarily ordinal definable.[4] The consistency of stratified versions of the wholeness axiom, introduced by Hamkins (2001),[4] was studied by Apter (2012).[5] References 1. Corazza, Paul (2000), "The Wholeness Axiom and Laver Sequences", Annals of Pure and Applied Logic, 105 (1–3): 157–260, doi:10.1016/s0168-0072(99)00052-4 2. Samuel Gomes da Silva, Review of "The wholeness axioms and the class of supercompact cardinals" by Arthur Apter. 3. Holmes, M. Randall; Forster, Thomas; Libert, Thierry (2012), "Alternative set theories", Sets and extensions in the twentieth century, Handb. Hist. Log., vol. 6, Elsevier/North-Holland, Amsterdam, pp. 559–632, doi:10.1016/B978-0-444-51621-3.50008-6, MR 3409865. 4. Hamkins, Joel David (2001), "The wholeness axioms and V = HOD", Archive for Mathematical Logic, 40 (1): 1–8, arXiv:math/9902079, doi:10.1007/s001530050169, MR 1816602, S2CID 15083392. 5. Apter, Arthur W.
(2012), "The wholeness axioms and the class of supercompact cardinals", Bulletin of the Polish Academy of Sciences, Mathematics, 60 (2): 101–111, doi:10.4064/ba60-2-1, MR 2914539. External links • The Wholeness axiom in Cantor's attic
Why Johnny Can't Add Why Johnny Can't Add: The Failure of the New Math is a 1973 book by Morris Kline, in which the author severely criticized the teaching practices characteristic of the "New Math" fashion for school teaching, which were based on Bourbaki's approach to mathematical research, and were being pushed into schools in the United States.[1][2] Reactions were immediate, and the book became a best seller in its genre and was translated into many languages.[3] References 1. Jürgen Maass; Wolfgang Schlöglmann (2006). New Mathematics Education Research and Practice. Sense Publishers. p. 1. ISBN 978-90-77874-74-5. 2. Joseph W. Dauben; Christoph J. Scriba (23 September 2002). Writing the History of Mathematics: Its Historical Development. Springer Science & Business Media. p. 458. ISBN 978-3-7643-6167-9. 3. Fey, James T. (1978). "U.S.A.". Educational Studies in Mathematics. 9 (3). Springer: 339–353. https://www.jstor.org/stable/3481942. Further reading • "Review of Why Johnny Can't Add". Bulletin of the Orton Society. 24: 210. 1974-01-01. JSTOR 23769748. • Rising, Gerald R. (1974-01-01). "Review of Why Johnny Can't Add: The Failure of the New Math". The Arithmetic Teacher. 21 (5): 450. JSTOR 41190940. • Gillman, Leonard (1974-01-01). "Review of Why Johnny Can't Add: The Failure of the New Math". The American Mathematical Monthly. 81 (5): 531–532. doi:10.2307/2318615. JSTOR 2318615. • Niman, John (1973-01-01). "Review of Why Johnny Can't Add: The Failure of the New Math". Mathematics Magazine. 46 (4): 228–229. doi:10.2307/2688316. JSTOR 2688316. • McIntosh, Jerry (1973-01-01). Kline, Morris (ed.). "Kline's 'Gutsy Appraisal': New Math Needs Overhaul". The Phi Delta Kappan. 55 (1): 79–80. JSTOR 20297438. • Peak, Philip (1973-01-01). "Review of Why Johnny Can't Add: The Failure of the New Math (L, S, P)". The Mathematics Teacher.
66 (7): 641–642. JSTOR 27959458. • Moore, John W. (1973-01-01). "Why Johnny Can't Add". Journal of College Science Teaching. 3 (2): 167–168. JSTOR 42964980. External links • Text on-line, with permission of the current copyright holders
Wichmann–Hill Wichmann–Hill is a pseudorandom number generator proposed in 1982 by Brian Wichmann and David Hill.[1] It consists of three linear congruential generators with different prime moduli, each of which is used to produce a uniformly distributed number between 0 and 1. These are summed, modulo 1, to produce the result.[2] Summing three generators produces a pseudorandom sequence with cycle exceeding 6.95×1012.[3] Specifically, the moduli are 30269, 30307 and 30323, producing periods of 30268, 30306 and 30322. The overall period is the least common multiple of these: 30268×30306×30322/4 = 6953607871644. This has been confirmed by a brute-force search.[4][5] Implementation The following pseudocode is for implementation on machines capable of integer arithmetic up to 5,212,632: [r, s1, s2, s3] = function(s1, s2, s3) is // s1, s2, s3 should be random from 1 to 30,000. Use clock if available. s1 := mod(171 × s1, 30269) s2 := mod(172 × s2, 30307) s3 := mod(170 × s3, 30323) r := mod(s1/30269.0 + s2/30307.0 + s3/30323.0, 1) For machines limited to 16-bit signed integers, the following equivalent code only uses numbers up to 30,323: [r, s1, s2, s3] = function(s1, s2, s3) is // s1, s2, s3 should be random from 1 to 30,000. Use clock if available. s1 := 171 × mod(s1, 177) − 2 × floor(s1 / 177) s2 := 172 × mod(s2, 176) − 35 × floor(s2 / 176) s3 := 170 × mod(s3, 178) − 63 × floor(s3 / 178) r := mod(s1/30269 + s2/30307 + s3/30323, 1) The seed values s1, s2 and s3 must be initialized to non-zero values. References 1. Wichmann, Brian A.; Hill, I. David (1982). "Algorithm AS 183: An Efficient and Portable Pseudo-Random Number Generator". Journal of the Royal Statistical Society. Series C (Applied Statistics). 31 (2): 188–190. doi:10.2307/2347988. JSTOR 2347988. 2. McLeod, A. Ian (1985). "Remark AS R58: A Remark on Algorithm AS 183. An Efficient and Portable Pseudo-Random Number Generator". Journal of the Royal Statistical Society. Series C (Applied Statistics). 
34 (2): 198–200. doi:10.2307/2347378. JSTOR 2347378. Wichmann and Hill (1982) claim that their generator RANDOM will produce uniform pseudorandom numbers which are strictly greater than zero and less than one. However, depending on the precision of the machine, some zero values may be produced due to rounding error. The problem occurs with single-precision floating point when rounding to zero. 3. Wichmann, Brian; Hill, David (1984). "Correction: Algorithm AS 183: An Efficient and Portable Pseudo-Random Number Generator". Journal of the Royal Statistical Society. Series C (Applied Statistics). 33 (1): 123. doi:10.2307/2347676. JSTOR 2347676. 4. Rikitake, Kenji (16 March 2017). "AS183 PRNG algorithm internal state calculator in C". GitHub. 5. Zeisel, H. (1986). "Remark ASR 61: A Remark on Algorithm AS 183. An Efficient and Portable Pseudo-Random Number Generator". Journal of the Royal Statistical Society. Series C (Applied Statistics). 35 (1): 98. doi:10.1111/j.1467-9876.1986.tb01945.x. JSTOR 2347876. Wichmann and Hill (1982) suggested a pseudo-random generator based on summation of pseudo-random numbers generated by multiplicative congruential methods. This, however, is no more than an efficient method to implement a multiplicative congruential generator with a cycle length much greater than the maximal integer. Using the Chinese Remainder Theorem (see e.g. Knuth, 1981) one can prove that the same results are obtained using only one multiplicative congruential generator Xn+1 = α⋅Xn modulo m with α = 1655 54252 64690, m = 2781 71856 04309. The original version, however, is still necessary to make a generator with such lengthy constants work on ordinary computer arithmetic.
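The first pseudocode variant above translates almost line for line into Python; a minimal sketch (the function name and fixed seeds are mine, for illustration; production code should seed the three states with values from 1 to 30,000, e.g. from a clock):

```python
def wichmann_hill(s1, s2, s3):
    """One step of the Wichmann-Hill AS 183 generator.

    Takes the three LCG states (each from 1 to 30000) and returns
    (r, s1, s2, s3), where r is the combined uniform variate in [0, 1).
    """
    s1 = (171 * s1) % 30269
    s2 = (172 * s2) % 30307
    s3 = (170 * s3) % 30323
    # Sum the three scaled states modulo 1 to combine the generators.
    r = (s1 / 30269 + s2 / 30307 + s3 / 30323) % 1.0
    return r, s1, s2, s3

# Draw a short reproducible sequence from fixed seeds.
state = (1, 2, 3)
draws = []
for _ in range(1000):
    r, *state = wichmann_hill(*state)
    draws.append(r)
```

Python 2's random module shipped a WichmannHill class implementing this same algorithm; it was dropped in Python 3 in favour of the Mersenne Twister.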
Isserlis' theorem In probability theory, Isserlis' theorem or Wick's probability theorem is a formula that allows one to compute higher-order moments of the multivariate normal distribution in terms of its covariance matrix. It is named after Leon Isserlis. This theorem is also particularly important in particle physics, where it is known as Wick's theorem after the work of Wick (1950).[1] Other applications include the analysis of portfolio returns,[2] quantum field theory[3] and generation of colored noise.[4] Statement If $(X_{1},\dots ,X_{n})$ is a zero-mean multivariate normal random vector, then $\operatorname {E} [\,X_{1}X_{2}\cdots X_{n}\,]=\sum _{p\in P_{n}^{2}}\prod _{\{i,j\}\in p}\operatorname {E} [\,X_{i}X_{j}\,]=\sum _{p\in P_{n}^{2}}\prod _{\{i,j\}\in p}\operatorname {Cov} (\,X_{i},X_{j}\,),$ where the sum is over all the pairings of $\{1,\ldots ,n\}$, i.e. all distinct ways of partitioning $\{1,\ldots ,n\}$ into pairs $\{i,j\}$, and the product is over the pairs contained in $p$.[5][6] More generally, if $(Z_{1},\dots ,Z_{n})$ is a zero-mean complex-valued multivariate normal random vector, then the formula still holds. The expression on the right-hand side is also known as the hafnian of the covariance matrix of $(X_{1},\dots ,X_{n})$. Odd case If $n=2m+1$ is odd, there does not exist any pairing of $\{1,\ldots ,2m+1\}$. Under this hypothesis, Isserlis' theorem implies that $\operatorname {E} [\,X_{1}X_{2}\cdots X_{2m+1}\,]=0.$ This also follows from the fact that $-X=(-X_{1},\dots ,-X_{n})$ has the same distribution as $X$, which implies that $\operatorname {E} [\,X_{1}\cdots X_{2m+1}\,]=\operatorname {E} [\,(-X_{1})\cdots (-X_{2m+1})\,]=-\operatorname {E} [\,X_{1}\cdots X_{2m+1}\,]=0$. 
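The statement can be checked numerically. The following sketch (our code, not from the article) enumerates the pair partitions, evaluates the hafnian on the right-hand side, and compares it with a Monte Carlo estimate of the fourth moment for an equicorrelated covariance matrix:

```python
import numpy as np

def pairings(items):
    """Yield every partition of `items` into unordered pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            yield [(first, partner)] + sub

def isserlis_moment(cov):
    """Right-hand side of Isserlis' theorem: the hafnian of cov."""
    n = cov.shape[0]
    if n % 2:        # odd case: no pairings exist, the moment vanishes
        return 0.0
    return sum(np.prod([cov[i, j] for i, j in p])
               for p in pairings(list(range(n))))

# Equicorrelated covariance: unit variances, correlation 0.3
cov = np.full((4, 4), 0.3) + 0.7 * np.eye(4)
exact = isserlis_moment(cov)   # three pairings, each contributing 0.3 * 0.3

# Monte Carlo estimate of E[X1 X2 X3 X4] via a Cholesky factor
rng = np.random.default_rng(0)
X = rng.standard_normal((200_000, 4)) @ np.linalg.cholesky(cov).T
mc = X.prod(axis=1).mean()
```

For n = 6 the enumeration produces 15 pairings, in line with the double-factorial count (2m − 1)!! given in the even case.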
Even case In his original paper,[7] Leon Isserlis proves this theorem by mathematical induction, generalizing the formula for the $4^{\text{th}}$ order moments,[8] which takes the appearance $\operatorname {E} [\,X_{1}X_{2}X_{3}X_{4}\,]=\operatorname {E} [X_{1}X_{2}]\,\operatorname {E} [X_{3}X_{4}]+\operatorname {E} [X_{1}X_{3}]\,\operatorname {E} [X_{2}X_{4}]+\operatorname {E} [X_{1}X_{4}]\,\operatorname {E} [X_{2}X_{3}].$ If $n=2m$ is even, there exist $(2m)!/(2^{m}m!)=(2m-1)!!$ (see double factorial) pair partitions of $\{1,\ldots ,2m\}$: this yields $(2m)!/(2^{m}m!)=(2m-1)!!$ terms in the sum. For example, for $4^{\text{th}}$ order moments (i.e. $4$ random variables) there are three terms. For $6^{\text{th}}$-order moments there are $3\times 5=15$ terms, and for $8^{\text{th}}$-order moments there are $3\times 5\times 7=105$ terms. Proof Since the formula is linear on both sides, if we can prove the real case, we get the complex case for free. Let $\Sigma _{ij}=\operatorname {Cov} (X_{i},X_{j})$ be the covariance matrix, so that we have the zero-mean multivariate normal random vector $(X_{1},...,X_{n})\sim N(0,\Sigma )$. Since both sides of the formula are continuous with respect to $\Sigma $, it suffices to prove the case when $\Sigma $ is invertible. Using quadratic factorization $-x^{T}\Sigma ^{-1}x/2+v^{T}x-v^{T}\Sigma v/2=-(x-\Sigma v)^{T}\Sigma ^{-1}(x-\Sigma v)/2$, we get ${\frac {1}{\sqrt {(2\pi )^{n}\det \Sigma }}}\int e^{-x^{T}\Sigma ^{-1}x/2+v^{T}x}dx=e^{v^{T}\Sigma v/2}$ Differentiate under the integral sign with $\partial _{v_{1},...,v_{n}}|_{v_{1},...,v_{n}=0}$ to obtain $E[X_{1}\cdots X_{n}]=\partial _{v_{1},...,v_{n}}|_{v_{1},...,v_{n}=0}e^{v^{T}\Sigma v/2}$ . That is, we need only find the coefficient of term $v_{1}\cdots v_{n}$ in the Taylor expansion of $e^{v^{T}\Sigma v/2}$. If $n$ is odd, this is zero. So let $n=2m$, then we need only find the coefficient of term $v_{1}\cdots v_{n}$ in the polynomial ${\frac {1}{m!}}(v^{T}\Sigma v/2)^{m}$. 
Expanding the polynomial and counting the coefficient of $v_{1}\cdots v_{n}$, we obtain the formula. $\square $ Generalizations Gaussian integration by parts An equivalent formulation of Wick's probability formula is Gaussian integration by parts. If $(X_{1},\dots ,X_{n})$ is a zero-mean multivariate normal random vector, then $\operatorname {E} (X_{1}f(X_{1},\ldots ,X_{n}))=\sum _{i=1}^{n}\operatorname {Cov} (X_{1},X_{i})\operatorname {E} (\partial _{X_{i}}f(X_{1},\ldots ,X_{n})).$ This is a generalization of Stein's lemma. Wick's probability formula can be recovered by induction, considering the function $f:\mathbb {R} ^{n}\to \mathbb {R} $ defined by $f(x_{1},\ldots ,x_{n})=x_{2}\ldots x_{n}$. Among other things, this formulation is important in Liouville conformal field theory to obtain conformal Ward identities, BPZ equations[9] and to prove the Fyodorov-Bouchaud formula.[10] Non-Gaussian random variables For non-Gaussian random variables, the moment-cumulants formula[11] replaces Wick's probability formula. If $(X_{1},\dots ,X_{n})$ is a vector of random variables, then $\operatorname {E} (X_{1}\ldots X_{n})=\sum _{p\in P_{n}}\prod _{b\in p}\kappa {\big (}(X_{i})_{i\in b}{\big )},$ where the sum is over all the partitions of $\{1,\ldots ,n\}$, the product is over the blocks of $p$ and $\kappa {\big (}(X_{i})_{i\in b}{\big )}$ is the joint cumulant of $(X_{i})_{i\in b}$. See also • Wick's theorem • Cumulants • Normal distribution References 1. Wick, G.C. (1950). "The evaluation of the collision matrix". Physical Review. 80 (2): 268–272. Bibcode:1950PhRv...80..268W. doi:10.1103/PhysRev.80.268. 2. Repetowicz, Przemysław; Richmond, Peter (2005). "Statistical inference of multivariate distribution parameters for non-Gaussian distributed time series" (PDF). Acta Physica Polonica B. 36 (9): 2785–2796. Bibcode:2005AcPPB..36.2785R. 3. Perez-Martin, S.; Robledo, L.M. (2007). "Generalized Wick's theorem for multiquasiparticle overlaps as a limit of Gaudin's theorem". Physical Review C. 76 (6): 064314.
arXiv:0707.3365. Bibcode:2007PhRvC..76f4314P. doi:10.1103/PhysRevC.76.064314. S2CID 119627477. 4. Bartosch, L. (2001). "Generation of colored noise". International Journal of Modern Physics C. 12 (6): 851–855. Bibcode:2001IJMPC..12..851B. doi:10.1142/S0129183101002012. S2CID 54500670. 5. Janson, Svante (June 1997). Gaussian Hilbert Spaces. doi:10.1017/CBO9780511526169. ISBN 9780521561280. Retrieved 2019-11-30. 6. Michalowicz, J.V.; Nichols, J.M.; Bucholtz, F.; Olson, C.C. (2009). "An Isserlis' theorem for mixed Gaussian variables: application to the auto-bispectral density". Journal of Statistical Physics. 136 (1): 89–102. Bibcode:2009JSP...136...89M. doi:10.1007/s10955-009-9768-3. S2CID 119702133. 7. Isserlis, L. (1918). "On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables". Biometrika. 12 (1–2): 134–139. doi:10.1093/biomet/12.1-2.134. JSTOR 2331932. 8. Isserlis, L. (1916). "On Certain Probable Errors and Correlation Coefficients of Multiple Frequency Distributions with Skew Regression". Biometrika. 11 (3): 185–190. doi:10.1093/biomet/11.3.185. JSTOR 2331846. 9. Kupiainen, Antti; Rhodes, Rémi; Vargas, Vincent (2019-11-01). "Local Conformal Structure of Liouville Quantum Gravity". Communications in Mathematical Physics. 371 (3): 1005–1069. arXiv:1512.01802. Bibcode:2019CMaPh.371.1005K. doi:10.1007/s00220-018-3260-3. ISSN 1432-0916. S2CID 55282482. 10. Remy, Guillaume (2020). "The Fyodorov–Bouchaud formula and Liouville conformal field theory". Duke Mathematical Journal. 169. arXiv:1710.06897. doi:10.1215/00127094-2019-0045. S2CID 54777103. 11. Leonov, V. P.; Shiryaev, A. N. (January 1959). "On a Method of Calculation of Semi-Invariants". Theory of Probability & Its Applications. 4 (3): 319–329. doi:10.1137/1104031. Further reading • Koopmans, Lambert G. (1974). The spectral analysis of time series. San Diego, CA: Academic Press.
Cyclostationary process A cyclostationary process is a signal having statistical properties that vary cyclically with time.[1] A cyclostationary process can be viewed as multiple interleaved stationary processes. For example, the maximum daily temperature in New York City can be modeled as a cyclostationary process: the maximum temperature on July 21 is statistically different from the temperature on December 20; however, it is a reasonable approximation that the temperature on December 20 of different years has identical statistics. Thus, we can view the random process composed of daily maximum temperatures as 365 interleaved stationary processes, each of which takes on a new value once per year. Definition There are two differing approaches to the treatment of cyclostationary processes.[2] The stochastic approach is to view measurements as an instance of an abstract stochastic process model. As an alternative, the more empirical approach is to view the measurements as a single time series of data--that which has actually been measured in practice and, for some parts of theory, conceptually extended from an observed finite time interval to an infinite interval. Both mathematical models lead to probabilistic theories: abstract stochastic probability for the stochastic process model and the more empirical Fraction Of Time (FOT) probability for the alternative model. The FOT probability of some event associated with the time series is defined to be the fraction of time that event occurs over the lifetime of the time series. In both approaches, the process or time series is said to be cyclostationary if and only if its associated probability distributions vary periodically with time. 
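The interleaving idea can be made concrete with a toy simulation (illustrative code with made-up parameters, not a model from the literature): fixing a calendar day and sampling across years gives one stationary subsequence, while different days have clearly different statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
years, days = 200, 365
day = np.arange(days)
# Hypothetical seasonal model: the mean varies over the year, the noise is i.i.d.
seasonal_mean = 10.0 + 15.0 * np.sin(2 * np.pi * day / days)
temps = seasonal_mean + rng.normal(0.0, 3.0, size=(years, days))

# Fixing a calendar day and looking across years picks out one of the
# 365 interleaved stationary processes: same mean and variance each year.
day_100 = temps[:, 100]   # approx. N(seasonal_mean[100], 3**2) across years
day_300 = temps[:, 300]   # a different interleaved process, different mean
```

The two fixed-day series are each (approximately) stationary, but their means differ by tens of degrees, which is exactly the sense in which the full daily series is cyclostationary rather than stationary.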
However, in the non-stochastic time-series approach, there is an alternative but equivalent definition: A time series that contains no finite-strength additive sine-wave components is said to exhibit cyclostationarity if and only if there exists some nonlinear time-invariant transformation of the time series that produces finite-strength (non-zero) additive sine-wave components. Wide-sense cyclostationarity An important special case of cyclostationary signals is one that exhibits cyclostationarity in second-order statistics (e.g., the autocorrelation function). These are called wide-sense cyclostationary signals, and are analogous to wide-sense stationary processes. The exact definition differs depending on whether the signal is treated as a stochastic process or as a deterministic time series. Cyclostationary stochastic process A stochastic process $x(t)$ of mean $\operatorname {E} [x(t)]$ and autocorrelation function: $R_{x}(t,\tau )=\operatorname {E} \{x(t+\tau )x^{*}(t)\},\,$ where the star denotes complex conjugation, is said to be wide-sense cyclostationary with period $T_{0}$ if both $\operatorname {E} [x(t)]$ and $R_{x}(t,\tau )$ are cyclic in $t$ with period $T_{0},$ i.e.:[2] $\operatorname {E} [x(t)]=\operatorname {E} [x(t+T_{0})]{\text{ for all }}t$ $R_{x}(t,\tau )=R_{x}(t+T_{0};\tau ){\text{ for all }}t,\tau .$ The autocorrelation function is thus periodic in t and can be expanded in Fourier series: $R_{x}(t,\tau )=\sum _{n=-\infty }^{\infty }R_{x}^{n/T_{0}}(\tau )e^{j2\pi {\frac {n}{T_{0}}}t}$ where $R_{x}^{n/T_{0}}(\tau )$ is called cyclic autocorrelation function and equal to: $R_{x}^{n/T_{0}}(\tau )={\frac {1}{T_{0}}}\int _{-T_{0}/2}^{T_{0}/2}R_{x}(t,\tau )e^{-j2\pi {\frac {n}{T_{0}}}t}\mathrm {d} t.$ The frequencies $n/T_{0},\,n\in \mathbb {Z} ,$ are called cycle frequencies. Wide-sense stationary processes are a special case of cyclostationary processes with only $R_{x}^{0}(\tau )\neq 0$. 
Cyclostationary time series A signal that is just a function of time and not a sample path of a stochastic process can exhibit cyclostationarity properties in the framework of the fraction-of-time point of view. This way, the cyclic autocorrelation function can be defined by:[2] ${\widehat {R}}_{x}^{n/T_{0}}(\tau )=\lim _{T\rightarrow +\infty }{\frac {1}{T}}\int _{-T/2}^{T/2}x(t+\tau )x^{*}(t)e^{-j2\pi {\frac {n}{T_{0}}}t}\mathrm {d} t.$ If the time-series is a sample path of a stochastic process it is $R_{x}^{n/T_{0}}(\tau )=\operatorname {E} \left[{\widehat {R}}_{x}^{n/T_{0}}(\tau )\right]$. If the signal is further cycloergodic,[3] all sample paths exhibit the same cyclic time-averages with probability equal to 1 and thus $R_{x}^{n/T_{0}}(\tau )={\widehat {R}}_{x}^{n/T_{0}}(\tau )$ with probability 1. Frequency domain behavior The Fourier transform of the cyclic autocorrelation function at cyclic frequency α is called cyclic spectrum or spectral correlation density function and is equal to: $S_{x}^{\alpha }(f)=\int _{-\infty }^{+\infty }R_{x}^{\alpha }(\tau )e^{-j2\pi f\tau }\mathrm {d} \tau .$ The cyclic spectrum at zero cyclic frequency is also called average power spectral density. 
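As a numerical illustration of this time-average estimator (our code; the signal is a binary, rectangular-pulse special case of the linearly modulated signal treated in the example section), the finite-record average recovers the cyclic autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(0)
T0 = 8                                 # symbol period in samples
a = rng.choice([-1.0, 1.0], 20_000)    # i.i.d. symbols, sigma_a^2 = 1
x = np.repeat(a, T0)                   # rectangular pulse shaping

def cyclic_autocorr(x, n, tau, T0):
    """Time-average estimate of R_x^{n/T0}(tau) over the whole record."""
    t = np.arange(x.size)
    shifted = np.roll(x, -tau)         # x(t + tau); circular, edge effect negligible
    return np.mean(shifted * np.conj(x) * np.exp(-2j * np.pi * n * t / T0))

R0 = cyclic_autocorr(x, 0, 0, T0)          # average power: exactly 1 here
R1_0 = cyclic_autocorr(x, 1, 0, T0)        # cyclic feature at tau = 0: vanishes
R1_h = cyclic_autocorr(x, 1, T0 // 2, T0)  # nonzero cyclic feature at tau = T0/2
```

For this pulse the continuous-time formula predicts R_x^{1/T0}(T0/2) = −jσ_a²/π, and the discrete estimate lands close to that magnitude (about 0.33 rather than 1/π ≈ 0.318, the gap being a sampling effect).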
For a Gaussian cyclostationary process, its rate distortion function can be expressed in terms of its cyclic spectrum.[4] The reason $S_{x}^{\alpha }(f)$ is called the spectral correlation density function is that it equals the limit, as filter bandwidth approaches zero, of the expected value of the product of the output of a one-sided bandpass filter with center frequency $f+\alpha /2$ and the conjugate of the output of another one-sided bandpass filter with center frequency $f-\alpha /2$, with both filter outputs frequency shifted to a common center frequency, such as zero, as originally observed and proved in.[5] For time series, the reason the cyclic spectral density function is called the spectral correlation density function is that it equals the limit, as filter bandwidth approaches zero, of the average over all time of the product of the output of a one-sided bandpass filter with center frequency $f+\alpha /2$ and the conjugate of the output of another one-sided bandpass filter with center frequency $f-\alpha /2$, with both filter outputs frequency shifted to a common center frequency, such as zero, as originally observed and proved in.[6] Example: linearly modulated digital signal An example of cyclostationary signal is the linearly modulated digital signal : $x(t)=\sum _{k=-\infty }^{\infty }a_{k}p(t-kT_{0})$ where $a_{k}\in \mathbb {C} $ are i.i.d. random variables. The waveform $p(t)$, with Fourier transform $P(f)$, is the supporting pulse of the modulation. By assuming $\operatorname {E} [a_{k}]=0$ and $\operatorname {E} [|a_{k}|^{2}]=\sigma _{a}^{2}$, the auto-correlation function is: ${\begin{aligned}R_{x}(t,\tau )&=\operatorname {E} [x(t+\tau )x^{*}(t)]\\[6pt]&=\sum _{k,n}\operatorname {E} [a_{k}a_{n}^{*}]p(t+\tau -kT_{0})p^{*}(t-nT_{0})\\[6pt]&=\sigma _{a}^{2}\sum _{k}p(t+\tau -kT_{0})p^{*}(t-kT_{0}).\end{aligned}}$ The last summation is a periodic summation, hence a signal periodic in t. 
This way, $x(t)$ is a cyclostationary signal with period $T_{0}$ and cyclic autocorrelation function: ${\begin{aligned}R_{x}^{n/T_{0}}(\tau )&={\frac {1}{T_{0}}}\int _{-T_{0}/2}^{T_{0}/2}R_{x}(t,\tau )e^{-j2\pi {\frac {n}{T_{0}}}t}\,\mathrm {d} t\\[6pt]&={\frac {1}{T_{0}}}\int _{-T_{0}/2}^{T_{0}/2}\sigma _{a}^{2}\sum _{k=-\infty }^{\infty }p(t+\tau -kT_{0})p^{*}(t-kT_{0})e^{-j2\pi {\frac {n}{T_{0}}}t}\mathrm {d} t\\[6pt]&={\frac {\sigma _{a}^{2}}{T_{0}}}\sum _{k=-\infty }^{\infty }\int _{-T_{0}/2-kT_{0}}^{T_{0}/2-kT_{0}}p(\lambda +\tau )p^{*}(\lambda )e^{-j2\pi {\frac {n}{T_{0}}}(\lambda +kT_{0})}\mathrm {d} \lambda \\[6pt]&={\frac {\sigma _{a}^{2}}{T_{0}}}\int _{-\infty }^{\infty }p(\lambda +\tau )p^{*}(\lambda )e^{-j2\pi {\frac {n}{T_{0}}}\lambda }\mathrm {d} \lambda \\[6pt]&={\frac {\sigma _{a}^{2}}{T_{0}}}p(\tau )*\left\{p^{*}(-\tau )e^{j2\pi {\frac {n}{T_{0}}}\tau }\right\}.\end{aligned}}$ with $*$ indicating convolution. The cyclic spectrum is: $S_{x}^{n/T_{0}}(f)={\frac {\sigma _{a}^{2}}{T_{0}}}P(f)P^{*}\left(f-{\frac {n}{T_{0}}}\right).$ Typical raised-cosine pulses adopted in digital communications have thus only $n=-1,0,1$ non-zero cyclic frequencies. This same result can be obtained for the non-stochastic time series model of linearly modulated digital signals in which expectation is replaced with infinite time average, but this requires a somewhat modified mathematical method as originally observed and proved in.[7] Cyclostationary models It is possible to generalise the class of autoregressive moving average models to incorporate cyclostationary behaviour. For example, Troutman[8] treated autoregressions in which the autoregression coefficients and residual variance are no longer constant but vary cyclically with time. 
His work follows a number of other studies of cyclostationary processes within the field of time series analysis.[9][10] Polycyclostationarity In practice, signals exhibiting cyclicity with more than one incommensurate period arise and require a generalization of the theory of cyclostationarity. Such signals are called polycyclostationary if they exhibit a finite number of incommensurate periods and almost cyclostationary if they exhibit a countably infinite number. Such signals arise frequently in radio communications due to multiple transmissions with differing sine-wave carrier frequencies and digital symbol rates. The theory was introduced in [11] for stochastic processes and further developed in [12] for non-stochastic time series. Higher Order and Strict Sense Cyclostationarity The wide sense theory of time series exhibiting cyclostationarity, polycyclostationarity and almost cyclostationarity originated and developed by Gardner [13] was also generalized by Gardner to a theory of higher-order temporal and spectral moments and cumulants and a strict sense theory of cumulative probability distributions. The encyclopedic book [14] comprehensively teaches all of this and provides a scholarly treatment of the originating publications by Gardner and contributions thereafter by others. 
Applications • Cyclostationarity has extremely diverse applications in essentially all fields of engineering and science, as thoroughly documented in [15] and.[16] A few examples are: • Cyclostationarity is used in telecommunications for signal synchronization, transmitter and receiver optimization, and spectrum sensing for cognitive radio;[17] • In signals intelligence, cyclostationarity is used for signal interception;[18] • In econometrics, cyclostationarity is used to analyze the periodic behavior of financial-markets; • Queueing theory utilizes cyclostationary theory to analyze computer networks and car traffic; • Cyclostationarity is used to analyze mechanical signals produced by rotating and reciprocating machines. Angle-time cyclostationarity of mechanical signals Mechanical signals produced by rotating or reciprocating machines are remarkably well modelled as cyclostationary processes. The cyclostationary family accepts all signals with hidden periodicities, either of the additive type (presence of tonal components) or multiplicative type (presence of periodic modulations). This happens to be the case for noise and vibration produced by gear mechanisms, bearings, internal combustion engines, turbofans, pumps, propellers, etc. The explicit modelling of mechanical signals as cyclostationary processes has been found useful in several applications, such as in noise, vibration, and harshness (NVH) and in condition monitoring.[19] In the latter field, cyclostationarity has been found to generalize the envelope spectrum, a popular analysis technique used in the diagnostics of bearing faults. One peculiarity of rotating machine signals is that the period of the process is strictly linked to the angle of rotation of a specific component – the “cycle” of the machine. At the same time, a temporal description must be preserved to reflect the nature of dynamical phenomena that are governed by differential equations of time. 
Therefore, the angle-time autocorrelation function is used, $R_{x}(\theta ,\tau )=\operatorname {E} \{x(t(\theta )+\tau )x^{*}(t(\theta ))\},\,$ where $\theta $ stands for angle, $t(\theta )$ for the time instant corresponding to angle $\theta $ and $\tau $ for time delay. Processes whose angle-time autocorrelation function exhibits a component periodic in angle, i.e. such that $R_{x}(\theta ,\tau )$ has a non-zero Fourier-Bohr coefficient for some angular period $\Theta $, are called (wide-sense) angle-time cyclostationary. The double Fourier transform of the angle-time autocorrelation function defines the order-frequency spectral correlation, $S_{x}^{\alpha }(f)=\lim _{S\rightarrow +\infty }{\frac {1}{S}}\int _{-S/2}^{S/2}\int _{-\infty }^{+\infty }R_{x}(\theta ,\tau )e^{-j2\pi f\tau }e^{-j2\pi \alpha {\frac {\theta }{\Theta }}}\,\mathrm {d} \tau \,\mathrm {d} \theta $ where $\alpha $ is an order (unit in events per revolution) and $f$ a frequency (unit in Hz). For constant speed of rotation, $\omega $, angle is proportional to time, $\theta =\omega t$. Consequently, the angle-time autocorrelation is simply a cyclicity-scaled traditional autocorrelation; that is, the cycle frequencies are scaled by $\omega $. On the other hand, if the speed of rotation changes with time, then the signal is no longer cyclostationary (unless the speed varies periodically). Therefore, it is not a model for cyclostationary signals. It is not even a model for time-warped cyclostationarity, although it can be a useful approximation for sufficiently slow changes in speed of rotation.[20] References 1. Gardner, William A.; Antonio Napolitano; Luigi Paura (2006). "Cyclostationarity: Half a century of research". Signal Processing. Elsevier. 86 (4): 639–697. doi:10.1016/j.sigpro.2005.06.016. 2. Gardner, William A. (1991). "Two alternative philosophies for estimation of the parameters of time-series". IEEE Trans. Inf. Theory. 37 (1): 216–218. doi:10.1109/18.61145. 3. 1983 R. A.
Boyles and W. A. Gardner. CYCLOERGODIC PROPERTIES OF DISCRETE-PARAMETER NONSTATIONARY STOCHASTIC PROCESSES. IEEE Transactions on Information Theory, Vol. IT-29, No. 1, pp. 105-114. 4. Kipnis, Alon; Goldsmith, Andrea; Eldar, Yonina (May 2018). "The Distortion Rate Function of Cyclostationary Gaussian Processes". IEEE Transactions on Information Theory. 65 (5): 3810–3824. arXiv:1505.05586. doi:10.1109/TIT.2017.2741978. S2CID 5014143. 5. W. A. Gardner. INTRODUCTION TO RANDOM PROCESSES WITH APPLICATIONS TO SIGNALS AND SYSTEMS. Macmillan, New York, 434 pages, 1985 6. W. A. Gardner. STATISTICAL SPECTRAL ANALYSIS: A NONPROBABILISTIC THEORY. Prentice-Hall, Englewood Cliffs, NJ, 565 pages, 1987. 7. W. A. Gardner. STATISTICAL SPECTRAL ANALYSIS: A NONPROBABILISTIC THEORY. Prentice-Hall, Englewood Cliffs, NJ, 565 pages, 1987. 8. Troutman, B.M. (1979) "Some results in periodic autoregression." Biometrika, 66 (2), 219–228 9. Jones, R.H., Brelsford, W.M. (1967) "Time series with periodic structure." Biometrika, 54, 403–410 10. Pagano, M. (1978) "On periodic and multiple autoregressions." Ann. Stat., 6, 1310–1317. 11. W. A. Gardner. STATIONARIZABLE RANDOM PROCESSES. IEEE Transactions on Information Theory, Vol. IT-24, No. 1, pp. 8-22. 1978 12. W. A. Gardner. STATISTICAL SPECTRAL ANALYSIS: A NONPROBABILISTIC THEORY. Prentice-Hall, Englewood Cliffs, NJ, 565 pages, 1987. 13. W. A. Gardner. STATISTICAL SPECTRAL ANALYSIS: A NONPROBABILISTIC THEORY. Prentice-Hall, Englewood Cliffs, NJ, 565 pages, 1987. 14. A. Napolitano, Cyclostationary Processes and Time Series: Theory, Applications, and Generalizations. Academic Press, 2020. 15. W. A. Gardner. STATISTICALLY INFERRED TIME WARPING: EXTENDING THE CYCLOSTATIONARITY PARADIGM FROM REGULAR TO IRREGULAR STATISTICAL CYCLICITY IN SCIENTIFIC DATA. EURASIP Journal on Advances in Signal Processing volume 2018, Article number: 59. doi: 10.1186/s13634-018-0564-6 16. A. 
Napolitano, Cyclostationary Processes and Time Series: Theory, Applications, and Generalizations. Academic Press, 2020. 17. W. A. Gardner. CYCLOSTATIONARITY IN COMMUNICATIONS AND SIGNAL PROCESSING. Piscataway, NJ: IEEE Press. 504 pages.1984. 18. W. A. Gardner. SIGNAL INTERCEPTION: A UNIFYING THEORETICAL FRAMEWORK FOR FEATURE DETECTION. IEEE Transactions on Communications, Vol. COM-36, No. 8, pp. 897-906. 1988 19. Antoni, Jérôme (2009). "Cyclostationarity by examples". Mechanical Systems and Signal Processing. Elsevier. 23 (4): 987–1036. doi:10.1016/j.ymssp.2008.10.010. 20. 2018 W. A. Gardner. STATISTICALLY INFERRED TIME WARPING: EXTENDING THE CYCLOSTATIONARITY PARADIGM FROM REGULAR TO IRREGULAR STATISTICAL CYCLICITY IN SCIENTIFIC DATA. EURASIP Journal on Advances in Signal Processing volume 2018, Article number: 59. doi: 10.1186/s13634-018-0564-6 External links • Noise in mixers, oscillators, samplers, and logic: an introduction to cyclostationary noise manuscript annotated presentation presentation
Widest path problem In graph algorithms, the widest path problem is the problem of finding a path between two designated vertices in a weighted graph, maximizing the weight of the minimum-weight edge in the path. The widest path problem is also known as the maximum capacity path problem. It is possible to adapt most shortest path algorithms to compute widest paths, by modifying them to use the bottleneck distance instead of path length.[1] However, in many cases even faster algorithms are possible. For instance, in a graph that represents connections between routers in the Internet, where the weight of an edge represents the bandwidth of a connection between two routers, the widest path problem is the problem of finding an end-to-end path between two Internet nodes that has the maximum possible bandwidth.[2] The smallest edge weight on this path is known as the capacity or bandwidth of the path. As well as its applications in network routing, the widest path problem is also an important component of the Schulze method for deciding the winner of a multiway election,[3] and has been applied to digital compositing,[4] metabolic pathway analysis,[5] and the computation of maximum flows.[6] A closely related problem, the minimax path problem or bottleneck shortest path problem, asks for the path that minimizes the maximum weight of any of its edges. It has applications that include transportation planning.[7] Any algorithm for the widest path problem can be transformed into an algorithm for the minimax path problem, or vice versa, by reversing the sense of all the weight comparisons performed by the algorithm, or equivalently by replacing every edge weight by its negation.
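Adapting a shortest-path algorithm as described above amounts to replacing the sum of edge weights by their minimum, and minimization by maximization. A Dijkstra-style sketch for the bandwidth application (our code; the vertex names are made up, and positive edge weights are assumed) is:

```python
import heapq

def widest_path(graph, s, t):
    """Width (maximum possible bottleneck weight) of a widest s-t path.

    graph maps each vertex to a list of (neighbor, weight) pairs with
    positive weights; returns None if t is unreachable from s."""
    best = {s: float("inf")}          # widest known bottleneck to each vertex
    heap = [(-best[s], s)]            # max-heap via negated widths
    while heap:
        width, u = heapq.heappop(heap)
        width = -width
        if u == t:                    # first extraction of t is optimal
            return width
        if width < best.get(u, 0):    # stale queue entry, skip
            continue
        for v, w in graph.get(u, []):
            cand = min(width, w)      # bottleneck of the extended path
            if cand > best.get(v, 0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return None

# Hypothetical router bandwidths: a->b->d has bottleneck 5, a->c->d only 4.
net = {"a": [("b", 10), ("c", 4)], "b": [("d", 5)], "c": [("d", 8)]}
```

The only changes from the textbook shortest-path version are the `min` in the relaxation step and the reversed comparison/priority order.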
Undirected graphs In an undirected graph, a widest path may be found as the path between the two vertices in the maximum spanning tree of the graph, and a minimax path may be found as the path between the two vertices in the minimum spanning tree.[8][9][10] In any graph, directed or undirected, there is a straightforward algorithm for finding a widest path once the weight of its minimum-weight edge is known: simply delete all smaller edges and search for any path among the remaining edges using breadth-first search or depth-first search. Based on this test, there also exists a linear time algorithm for finding a widest s-t path in an undirected graph, that does not use the maximum spanning tree. The main idea of the algorithm is to apply the linear-time path-finding algorithm to the median edge weight in the graph, and then either to delete all smaller edges or contract all larger edges according to whether a path does or does not exist, and recurse in the resulting smaller graph.[9][11][12] Fernández, Garfinkel & Arbiol (1998) use undirected bottleneck shortest paths in order to form composite aerial photographs that combine multiple images of overlapping areas. In the subproblem to which the widest path problem applies, two images have already been transformed into a common coordinate system; the remaining task is to select a seam, a curve that passes through the region of overlap and divides one of the two images from the other. Pixels on one side of the seam will be copied from one of the images, and pixels on the other side of the seam will be copied from the other image. Unlike other compositing methods that average pixels from both images, this produces a valid photographic image of every part of the region being photographed. They weigh the edges of a grid graph by a numeric estimate of how visually apparent a seam across that edge would be, and find a bottleneck shortest path for these weights. 
Using this path as the seam, rather than a more conventional shortest path, causes their system to find a seam that is difficult to discern at all of its points, rather than allowing it to trade off greater visibility in one part of the image for lesser visibility elsewhere.[4] A solution to the minimax path problem between the two opposite corners of a grid graph can be used to find the weak Fréchet distance between two polygonal chains. Here, each grid graph vertex represents a pair of line segments, one from each chain, and the weight of an edge represents the Fréchet distance needed to pass from one pair of segments to another.[13] If all edge weights of an undirected graph are positive, then the minimax distances between pairs of points (the maximum edge weights of minimax paths) form an ultrametric; conversely every finite ultrametric space comes from minimax distances in this way.[14] A data structure constructed from the minimum spanning tree allows the minimax distance between any pair of vertices to be queried in constant time per query, using lowest common ancestor queries in a Cartesian tree. The root of the Cartesian tree represents the heaviest minimum spanning tree edge, and the children of the root are Cartesian trees recursively constructed from the subtrees of the minimum spanning tree formed by removing the heaviest edge. The leaves of the Cartesian tree represent the vertices of the input graph, and the minimax distance between two vertices equals the weight of the Cartesian tree node that is their lowest common ancestor. Once the minimum spanning tree edges have been sorted, this Cartesian tree can be constructed in linear time.[15] Directed graphs In directed graphs, the maximum spanning tree solution cannot be used. 
Instead, several different algorithms are known; the choice of which algorithm to use depends on whether a start or destination vertex for the path is fixed, or whether paths for many start or destination vertices must be found simultaneously. All pairs The all-pairs widest path problem has applications in the Schulze method for choosing a winner in multiway elections in which voters rank the candidates in preference order. The Schulze method constructs a complete directed graph in which the vertices represent the candidates and every two vertices are connected by an edge. Each edge is directed from the winner to the loser of a pairwise contest between the two candidates it connects, and is labeled with the margin of victory of that contest. Then the method computes widest paths between all pairs of vertices, and the winner is the candidate whose vertex has wider paths to each opponent than vice versa.[3] The results of an election using this method are consistent with the Condorcet method – a candidate who wins all pairwise contests automatically wins the whole election – but it generally allows a winner to be selected, even in situations where the Condorcet method itself fails.[16] The Schulze method has been used by several organizations including the Wikimedia Foundation.[17] To compute the widest path widths for all pairs of nodes in a dense directed graph, such as the ones that arise in the voting application, the asymptotically fastest known approach takes time O(n^((3+ω)/2)) where ω is the exponent for fast matrix multiplication. Using the best known algorithms for matrix multiplication, this time bound becomes O(n^2.688).[18] Instead, the reference implementation for the Schulze method uses a modified version of the simpler Floyd–Warshall algorithm, which takes O(n^3) time.[3] For sparse graphs, it may be more efficient to repeatedly apply a single-source widest path algorithm.
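The modified Floyd–Warshall computation mentioned above replaces edge-weight addition with the minimum and path minimization with the maximum. A sketch (our code, with 0 denoting a missing edge and positive widths assumed) is:

```python
def all_pairs_widest(w):
    """All-pairs widest path widths via a modified Floyd-Warshall.

    w is an n-by-n matrix of direct edge widths (w[i][j] = 0 when there
    is no edge i -> j); returns d with d[i][j] the width of the widest
    path from i to j, considering intermediate vertices one at a time."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if i != j:
                    # A path through k is as wide as its narrower half.
                    d[i][j] = max(d[i][j], min(d[i][k], d[k][j]))
    return d
```

In the Schulze application the entries of w are the pairwise margins of victory, and d[i][j] > d[j][i] means candidate i beats candidate j in the path comparison.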
Single source If the edges are sorted by their weights, then a modified version of Dijkstra's algorithm can compute the bottlenecks between a designated start vertex and every other vertex in the graph, in linear time. The key idea behind the speedup over a conventional version of Dijkstra's algorithm is that the sequence of bottleneck distances to each vertex, in the order that the vertices are considered by this algorithm, is a monotonic subsequence of the sorted sequence of edge weights; therefore, the priority queue of Dijkstra's algorithm can be implemented as a bucket queue: an array indexed by the numbers from 1 to m (the number of edges in the graph), where array cell i contains the vertices whose bottleneck distance is the weight of the edge with position i in the sorted order. This method allows the widest path problem to be solved as quickly as sorting; for instance, if the edge weights are represented as integers, then the time bounds for integer sorting a list of m integers would apply also to this problem.[12] Single source and single destination Berman & Handler (1987) suggest that service vehicles and emergency vehicles should use minimax paths when returning from a service call to their base. In this application, the time to return is less important than the response time if another service call occurs while the vehicle is in the process of returning. By using a minimax path, where the weight of an edge is the maximum travel time from a point on the edge to the farthest possible service call, one can plan a route that minimizes the maximum possible delay between receipt of a service call and arrival of a responding vehicle.[7] Ullah, Lee & Hassoun (2009) use maximin paths to model the dominant reaction chains in metabolic networks; in their model, the weight of an edge is the free energy of the metabolic reaction represented by the edge.[5] Another application of widest paths arises in the Ford–Fulkerson algorithm for the maximum flow problem. 
Repeatedly augmenting a flow along a maximum capacity path in the residual network of the flow leads to a small bound, O(m log U), on the number of augmentations needed to find a maximum flow; here, the edge capacities are assumed to be integers that are at most U. However, this analysis does not depend on finding a path that has the exact maximum capacity; any path whose capacity is within a constant factor of the maximum suffices. Combining this approximation idea with the shortest path augmentation method of the Edmonds–Karp algorithm leads to a maximum flow algorithm with running time O(mn log U).[6] It is possible to find maximum-capacity paths and minimax paths with a single source and single destination very efficiently even in models of computation that allow only comparisons of the input graph's edge weights and not arithmetic on them.[12][19] The algorithm maintains a set S of edges that are known to contain the bottleneck edge of the optimal path; initially, S is just the set of all m edges of the graph. At each iteration of the algorithm, it splits S into an ordered sequence of subsets S1, S2, ... of approximately equal size; the number of subsets in this partition is chosen in such a way that all of the split points between subsets can be found by repeated median-finding in time O(m). The algorithm then reweights each edge of the graph by the index of the subset containing the edge, and uses the modified Dijkstra algorithm on the reweighted graph; based on the results of this computation, it can determine in linear time which of the subsets contains the bottleneck edge weight. It then replaces S by the subset Si that it has determined to contain the bottleneck weight, and starts the next iteration with this new set S.
The number of subsets into which S can be split increases exponentially with each step, so the number of iterations is proportional to the iterated logarithm function, O(log* n), and the total time is O(m log* n).[19] In a model of computation where each edge weight is a machine integer, the use of repeated bisection in this algorithm can be replaced by a list-splitting technique of Han & Thorup (2002), allowing S to be split into O(√m) smaller sets Si in a single step and leading to a linear overall time bound.[20]

Euclidean point sets

A variant of the minimax path problem has also been considered for sets of points in the Euclidean plane. As in the undirected graph problem, this Euclidean minimax path problem can be solved efficiently by finding a Euclidean minimum spanning tree: every path in the tree is a minimax path. However, the problem becomes more complicated when a path is desired that not only minimizes the hop length but also, among paths with the same hop length, minimizes or approximately minimizes the total length of the path. The solution can be approximated using geometric spanners.[21] In number theory, the unsolved Gaussian moat problem asks whether minimax paths in the Gaussian primes have bounded minimax edge length. That is, does there exist a constant B such that, for every pair of points p and q in the infinite Euclidean point set defined by the Gaussian primes, the minimax path in the Gaussian primes between p and q has minimax edge length at most B?[22]

References

1. Pollack, Maurice (1960), "The maximum capacity through a network", Operations Research, 8 (5): 733–736, doi:10.1287/opre.8.5.733, JSTOR 167387
2. Shacham, N. (1992), "Multicast routing of hierarchical data", IEEE International Conference on Communications (ICC '92), vol. 3, pp. 1217–1221, doi:10.1109/ICC.1992.268047, hdl:2060/19990017646, ISBN 978-0-7803-0599-1, S2CID 60475077; Wang, Zheng; Crowcroft, J. (1995), "Bandwidth-delay based routing algorithms", IEEE Global Telecommunications Conference (GLOBECOM '95), vol. 3, pp. 2129–2133, doi:10.1109/GLOCOM.1995.502780, ISBN 978-0-7803-2509-8, S2CID 9117583
3. Schulze, Markus (2011), "A new monotonic, clone-independent, reversal symmetric, and Condorcet-consistent single-winner election method", Social Choice and Welfare, 36 (2): 267–303, doi:10.1007/s00355-010-0475-4, S2CID 1927244
4. Fernández, Elena; Garfinkel, Robert; Arbiol, Roman (1998), "Mosaicking of aerial photographic maps via seams defined by bottleneck shortest paths", Operations Research, 46 (3): 293–304, doi:10.1287/opre.46.3.293, JSTOR 222823
5. Ullah, E.; Lee, Kyongbum; Hassoun, S. (2009), "An algorithm for identifying dominant-edge metabolic pathways", IEEE/ACM International Conference on Computer-Aided Design (ICCAD 2009), pp. 144–150
6. Ahuja, Ravindra K.; Magnanti, Thomas L.; Orlin, James B. (1993), "7.3 Capacity Scaling Algorithm", Network Flows: Theory, Algorithms and Applications, Prentice Hall, pp. 210–212, ISBN 978-0-13-617549-0
7. Berman, Oded; Handler, Gabriel Y. (1987), "Optimal Minimax Path of a Single Service Unit on a Network to Nonservice Destinations", Transportation Science, 21 (2): 115–122, doi:10.1287/trsc.21.2.115
8. Hu, T. C. (1961), "The maximum capacity route problem", Operations Research, 9 (6): 898–900, doi:10.1287/opre.9.6.898, JSTOR 167055
9. Punnen, Abraham P. (1991), "A linear time algorithm for the maximum capacity path problem", European Journal of Operational Research, 53 (3): 402–404, doi:10.1016/0377-2217(91)90073-5
10. Malpani, Navneet; Chen, Jianer (2002), "A note on practical construction of maximum bandwidth paths", Information Processing Letters, 83 (3): 175–180, doi:10.1016/S0020-0190(01)00323-4, MR 1904226
11. Camerini, P. M. (1978), "The min-max spanning tree problem and some extensions", Information Processing Letters, 7 (1): 10–14, doi:10.1016/0020-0190(78)90030-3
12. Kaibel, Volker; Peinhardt, Matthias A. F. (2006), On the bottleneck shortest path problem (PDF), ZIB-Report 06-22, Konrad-Zuse-Zentrum für Informationstechnik Berlin
13. Alt, Helmut; Godau, Michael (1995), "Computing the Fréchet distance between two polygonal curves" (PDF), International Journal of Computational Geometry and Applications, 5 (1–2): 75–91, doi:10.1142/S0218195995000064
14. Leclerc, Bruno (1981), "Description combinatoire des ultramétriques", Centre de Mathématique Sociale. École Pratique des Hautes Études. Mathématiques et Sciences Humaines (in French) (73): 5–37, 127, MR 0623034
15. Demaine, Erik D.; Landau, Gad M.; Weimann, Oren (2009), "On Cartesian trees and range minimum queries", Automata, Languages and Programming, 36th International Colloquium, ICALP 2009, Rhodes, Greece, July 5–12, 2009, Lecture Notes in Computer Science, vol. 5555, pp. 341–353, doi:10.1007/978-3-642-02927-1_29, hdl:1721.1/61963, ISBN 978-3-642-02926-4
16. More specifically, the only kind of tie that the Schulze method fails to break is between two candidates who have equally wide paths to each other.
17. See Jesse Plamondon-Willard, Board election to use preference voting, May 2008; Mark Ryan, 2008 Wikimedia Board Election results, June 2008; 2008 Board Elections, June 2008; and 2009 Board Elections, August 2009.
18. Duan, Ran; Pettie, Seth (2009), "Fast algorithms for (max, min)-matrix multiplication and bottleneck shortest paths", Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '09), pp. 384–391. For an earlier algorithm that also used fast matrix multiplication to speed up all pairs widest paths, see Vassilevska, Virginia; Williams, Ryan; Yuster, Raphael (2007), "All-pairs bottleneck paths for general graphs in truly sub-cubic time", Proceedings of the 39th Annual ACM Symposium on Theory of Computing (STOC '07), New York: ACM, pp. 585–589, CiteSeerX 10.1.1.164.9808, doi:10.1145/1250790.1250876, ISBN 9781595936318, MR 2402484, S2CID 9353065, and Chapter 5 of Vassilevska, Virginia (2008), Efficient Algorithms for Path Problems in Weighted Graphs (PDF), Ph.D. thesis, Report CMU-CS-08-147, Carnegie Mellon University School of Computer Science
19. Gabow, Harold N.; Tarjan, Robert E. (1988), "Algorithms for two bottleneck optimization problems", Journal of Algorithms, 9 (3): 411–417, doi:10.1016/0196-6774(88)90031-4, MR 0955149
20. Han, Yijie; Thorup, M. (2002), "Integer sorting in O(n√log log n) expected time and linear space", Proc. 43rd Annual Symposium on Foundations of Computer Science (FOCS 2002), pp. 135–144, doi:10.1109/SFCS.2002.1181890, ISBN 978-0-7695-1822-0, S2CID 5245628
21. Bose, Prosenjit; Maheshwari, Anil; Narasimhan, Giri; Smid, Michiel; Zeh, Norbert (2004), "Approximating geometric bottleneck shortest paths", Computational Geometry. Theory and Applications, 29 (3): 233–249, doi:10.1016/j.comgeo.2004.04.003, MR 2095376
22. Gethner, Ellen; Wagon, Stan; Wick, Brian (1998), "A stroll through the Gaussian primes", American Mathematical Monthly, 105 (4): 327–337, doi:10.2307/2589708, JSTOR 2589708, MR 1614871
Wikipedia
Antichain

In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two distinct elements in the subset are incomparable. The size of the largest antichain in a partially ordered set is known as its width. By Dilworth's theorem, this also equals the minimum number of chains (totally ordered subsets) into which the set can be partitioned. Dually, the height of the partially ordered set (the length of its longest chain) equals by Mirsky's theorem the minimum number of antichains into which the set can be partitioned. The family of all antichains in a finite partially ordered set can be given join and meet operations, making them into a distributive lattice. For the partially ordered set of all subsets of a finite set, ordered by set inclusion, the antichains are called Sperner families and their lattice is a free distributive lattice, with a Dedekind number of elements. More generally, counting the number of antichains of a finite partially ordered set is #P-complete.

Definitions

Let $S$ be a partially ordered set. Two elements $a$ and $b$ of $S$ are called comparable if $a\leq b{\text{ or }}b\leq a.$ If two elements are not comparable, they are called incomparable; that is, $x$ and $y$ are incomparable if neither $x\leq y{\text{ nor }}y\leq x.$ A chain in $S$ is a subset $C\subseteq S$ in which each pair of elements is comparable; that is, $C$ is totally ordered. An antichain in $S$ is a subset $A$ of $S$ in which each pair of distinct elements is incomparable; that is, there is no order relation between any two distinct elements in $A.$ (However, some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than two distinct elements of the antichain.)

Height and width

A maximal antichain is an antichain that is not a proper subset of any other antichain.
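For a small finite poset, the definitions above can be checked directly by exhaustive search. The following Python sketch is illustrative only (exponential time; efficient polynomial-time width algorithms exist), and the `leq` predicate encoding the order relation is an assumption of the example.

```python
from itertools import combinations

def is_antichain(subset, leq):
    """True if every pair of distinct elements is incomparable
    under the partial order encoded by leq(a, b)."""
    return all(not leq(a, b) and not leq(b, a)
               for a, b in combinations(subset, 2))

def width(elements, leq):
    """Width of the poset: size of a maximum antichain, found by
    brute force over all subsets, largest first."""
    elements = list(elements)
    for k in range(len(elements), 0, -1):
        if any(is_antichain(s, leq) for s in combinations(elements, k)):
            return k
    return 0
```

For example, under the divisibility order on {1, ..., 6}, the set {4, 5, 6} is a maximum antichain, so the width is 3.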
A maximum antichain is an antichain that has cardinality at least as large as every other antichain. The width of a partially ordered set is the cardinality of a maximum antichain. Any antichain can intersect any chain in at most one element, so if we can partition the elements of an order into $k$ chains then the width of the order must be at most $k$ (if an antichain had more than $k$ elements, then by the pigeonhole principle two of its elements would belong to the same chain, a contradiction). Dilworth's theorem states that this bound can always be reached: there always exists an antichain, and a partition of the elements into chains, such that the number of chains equals the number of elements in the antichain, which must therefore also equal the width.[1] Similarly, one can define the height of a partial order to be the maximum cardinality of a chain. Mirsky's theorem states that in any partial order of finite height, the height equals the smallest number of antichains into which the order may be partitioned.[2]

Sperner families

An antichain in the inclusion ordering of subsets of an $n$-element set is known as a Sperner family. The number of different Sperner families is counted by the Dedekind numbers,[3] the first few of which are 2, 3, 6, 20, 168, 7581, 7828354, 2414682040998, 56130437228687557907788 (sequence A000372 in the OEIS). Even the empty set has two antichains in its power set: one containing a single set (the empty set itself) and one containing no sets.

Join and meet operations

Any antichain $A$ corresponds to a lower set $L_{A}=\{x:\exists y\in A{\mbox{ such that }}x\leq y\}.$ In a finite partial order (or more generally a partial order satisfying the ascending chain condition) all lower sets have this form.
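This antichain/lower set correspondence can be made concrete: an antichain generates its lower set, and is recovered as the maximal elements of that lower set. A sketch over an explicit finite ground set follows; the `universe` parameter and `leq` predicate are assumptions of this illustration.

```python
def lower_set(A, universe, leq):
    """L_A = {x : x <= y for some y in A}, over a finite ground set."""
    return {x for x in universe if any(leq(x, y) for y in A)}

def maximal_elements(S, leq):
    """Elements of S with nothing strictly above them in S. For a
    finite lower set, this recovers the generating antichain."""
    return {x for x in S if not any(x != y and leq(x, y) for y in S)}
```

Under divisibility on {1, ..., 12}, the antichain {4, 6} generates the lower set {1, 2, 3, 4, 6}, whose maximal elements are again {4, 6}.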
The union of any two lower sets is another lower set, and the union operation corresponds in this way to a join operation on antichains: $A\vee B=\{x\in A\cup B:\nexists y\in A\cup B{\mbox{ such that }}x<y\}.$ Similarly, we can define a meet operation on antichains, corresponding to the intersection of lower sets: $A\wedge B=\{x\in L_{A}\cap L_{B}:\nexists y\in L_{A}\cap L_{B}{\mbox{ such that }}x<y\}.$ The join and meet operations on all finite antichains of finite subsets of a set $X$ define a distributive lattice, the free distributive lattice generated by $X.$ Birkhoff's representation theorem for distributive lattices states that every finite distributive lattice can be represented via join and meet operations on antichains of a finite partial order, or equivalently as union and intersection operations on the lower sets of the partial order.[4]

Computational complexity

A maximum antichain (and its size, the width of a given partially ordered set) can be found in polynomial time.[5] Counting the number of antichains in a given partially ordered set is #P-complete.[6]

References

1. Dilworth, Robert P. (1950), "A decomposition theorem for partially ordered sets", Annals of Mathematics, 51 (1): 161–166, doi:10.2307/1969503, JSTOR 1969503
2. Mirsky, Leon (1971), "A dual of Dilworth's decomposition theorem", American Mathematical Monthly, 78 (8): 876–877, doi:10.2307/2316481, JSTOR 2316481
3. Kahn, Jeff (2002), "Entropy, independent sets and antichains: a new approach to Dedekind's problem", Proceedings of the American Mathematical Society, 130 (2): 371–378, doi:10.1090/S0002-9939-01-06058-0, MR 1862115
4. Birkhoff, Garrett (1937), "Rings of sets", Duke Mathematical Journal, 3 (3): 443–454, doi:10.1215/S0012-7094-37-00334-X
5. Felsner, Stefan; Raghavan, Vijay; Spinrad, Jeremy (2003), "Recognition algorithms for orders of small width and graphs of small Dilworth number", Order, 20 (4): 351–364 (2004), doi:10.1023/B:ORDE.0000034609.99940.fb, MR 2079151, S2CID 1363140
6. Provan, J. Scott; Ball, Michael O. (1983), "The complexity of counting cuts and of computing the probability that a graph is connected", SIAM Journal on Computing, 12 (4): 777–788, doi:10.1137/0212053, MR 0721012

External links

• Weisstein, Eric W. "Antichain". MathWorld.
• "Antichain". PlanetMath.