Social choice theory
Social choice theory or social choice is a theoretical framework for analyzing how individual opinions, preferences, interests, or welfares can be combined to reach a collective decision or some measure of social welfare.[1] Whereas choice theory is concerned with individuals making choices based on their preferences, social choice theory is concerned with how to translate the preferences of individuals into the preferences of a group. A non-theoretical example of a collective decision is enacting a law or set of laws under a constitution. Another example is voting, where individual preferences over candidates are collected to elect a person who best represents the group's preferences.[2]
Social choice blends elements of welfare economics and public choice theory. It is methodologically individualistic, in that it aggregates preferences and behaviors of individual members of society. Using elements of formal logic for generality, analysis proceeds from a set of seemingly reasonable axioms of social choice to form a social welfare function (or constitution).[3] Results uncovered the logical incompatibility of various axioms, as in Arrow's theorem, revealing an aggregation problem and suggesting reformulation or theoretical triage in dropping some axiom(s).[1]
Overlap with public choice theory
"Public choice" and "social choice" are heavily overlapping fields of endeavor.
Social choice and public choice theory may overlap but are disjoint if narrowly construed. The Journal of Economic Literature classification codes place Social Choice under Microeconomics at JEL D71 (with Clubs, Committees, and Associations) whereas most Public Choice subcategories are in JEL D72 (Economic Models of Political Processes: Rent-Seeking, Elections, Legislatures, and Voting Behavior).
Social choice theory (and public choice theory) dates from Condorcet's formulation of the voting paradox, though it arguably goes back further to Ramon Llull's 1299 publication.
Kenneth Arrow's Social Choice and Individual Values (1951) and Arrow's impossibility theorem are often acknowledged as the basis of modern social choice theory and public choice theory.[1] In addition to Arrow's theorem and the voting paradox, the Gibbard–Satterthwaite theorem, the Condorcet jury theorem, the median voter theorem, and May's theorem are among the better-known results from social choice theory.
Amartya Sen's Nobel Prize-winning work was also highly influential. See the section on interpersonal utility comparison below for more about Sen's work.
Later work also considers approaches to compensations and fairness, liberty and rights, axiomatic domain restrictions on preferences of agents, variable populations, strategy-proofness of social-choice mechanisms, natural resources,[1][4] capabilities and functionings,[5] and welfare,[6] justice,[7] and poverty.[8]
Interpersonal utility comparison
Social choice theory is the study of theoretical and practical methods to aggregate or combine individual preferences into a collective social welfare function. The field generally assumes that individuals have preferences, and that these can be modeled using utility functions. But much of the research in the field assumes that those utility functions are internal to humans, lack a meaningful unit of measure, and cannot be compared across different individuals.[9] Whether this type of interpersonal utility comparison is possible or not significantly alters the available mathematical structures for social welfare functions and social choice theory.
In one perspective, following Jeremy Bentham, utilitarians have argued that preferences and utility functions of individuals are interpersonally comparable and may therefore be added together to arrive at a measure of aggregate utility. Utilitarian ethics call for maximizing this aggregate.
In contrast, many twentieth-century economists, following Lionel Robbins, questioned whether mental states, and the utilities they reflect, can be measured at all and, a fortiori, whether interpersonal comparisons of utility, and the social choice theory built on them, are meaningful. Consider for instance the law of diminishing marginal utility, according to which the utility of an added quantity of a good decreases with the amount of the good the individual already possesses. It has been used to defend transfers of wealth from the "rich" to the "poor" on the premise that the former do not derive as much utility as the latter from an extra unit of income. Robbins (1935, pp. 138–40) argues that this notion is beyond positive science; that is, one cannot measure changes in the utility of someone else, nor is such measurement required by positive theory.
Advocates of interpersonal comparisons of utility have argued that Robbins claimed too much. John Harsanyi agrees that full comparability of mental states such as utility is never possible, but believes that human beings are nonetheless able to make some interpersonal comparisons of utility because they share common backgrounds, cultural experiences, etc. In the example from Amartya Sen (1970, p. 99), it should be possible to say that Emperor Nero's gain from burning Rome was outweighed by the loss incurred by the rest of the Romans. Harsanyi and Sen thus argue that at least partial comparability of utility is possible, and social choice theory proceeds under that assumption.
Sen proposes, however, that comparability of interpersonal utility need not be partial. Under Sen's theory of informational broadening, even complete interpersonal comparison of utility would lead to socially suboptimal choices because mental states are malleable. A starving peasant may have a particularly sunny disposition and thereby derive high utility from a small income. This fact should not nullify, however, his claim to compensation or equality in the realm of social choice.
Social decisions should accordingly be based on immalleable factors. Sen proposes interpersonal utility comparisons based on a wide range of data. His theory is concerned with access to advantage, viewed as an individual's access to goods that satisfy basic needs (e.g., food), freedoms (in the labor market, for instance), and capabilities. We can proceed to make social choices based on real variables, and thereby address actual position, and access to advantage. Sen's method of informational broadening allows social choice theory to escape the objections of Robbins, which looked as though they would permanently harm social choice theory.
Additionally, since the seminal results of Arrow's impossibility theorem and the Gibbard–Satterthwaite theorem, many positive results focusing on the restriction of the domain of preferences of individuals have elucidated such topics as optimal voting. The initial results emphasized the impossibility of satisfactorily providing a social choice function free of dictatorship and inefficiency in the most general settings. Later results have found natural restrictions that can accommodate many desirable properties.
Empirical studies
Since Arrow, social choice analysis has primarily been extremely theoretical and formal in character. However, since ca. 1960 attention has also been paid to empirical applications of social choice theoretical insights, first and foremost by the American political scientist William H. Riker.
The vast majority of such studies have been focused on finding empirical examples of the Condorcet paradox.[10][11]
A summary of 37 individual studies, covering a total of 265 real-world elections, large and small, found 25 instances of a Condorcet paradox, for a total likelihood of 9.4%[11]: 325 (and this may be a high estimate, since cases of the paradox are more likely to be reported than cases without). On the other hand, the empirical identification of a Condorcet paradox presupposes extensive data on the decision-makers' preferences over all alternatives, which is only very rarely available.
While examples of the paradox seem to occur occasionally in small settings (e.g., parliaments), very few examples have been found in larger groups (e.g., electorates), although some have been identified.[12]
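The pairwise-majority check behind these empirical studies is easy to state in code. Below is a minimal sketch (the candidate names and ballots are hypothetical) that searches a profile of ranked ballots for a Condorcet winner and reports `None` when the majorities cycle, as in Condorcet's paradox:

```python
def condorcet_winner(ballots):
    """Return the candidate who beats every rival in pairwise majority
    contests, or None when no such candidate exists (e.g., a cycle)."""
    candidates = set(ballots[0])
    for c in candidates:
        if all(
            2 * sum(b.index(c) < b.index(d) for b in ballots) > len(ballots)
            for d in candidates - {c}
        ):
            return c
    return None

# The classic three-voter cycle: A > B > C > A in pairwise majorities.
paradox = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]
print(condorcet_winner(paradox))  # None: the paradox occurs

# With more agreement among voters, a Condorcet winner exists:
consensus = [("A", "B", "C"), ("A", "C", "B"), ("B", "A", "C")]
print(condorcet_winner(consensus))  # A
```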
Rules
Let $X$ be a set of possible 'states of the world' or 'alternatives'. Society wishes to choose a single state from $X$. For example, in a single-winner election, $X$ may represent the set of candidates; in a resource allocation setting, $X$ may represent all possible allocations.
Let $I$ be a finite set, representing a collection of individuals. For each $i\in I$, let $u_{i}:X\longrightarrow \mathbb {R} $ be a utility function, describing the amount of happiness an individual $i$ derives from each possible state.
A social choice rule is a mechanism which uses the data $(u_{i})_{i\in I}$ to select some element(s) from $X$ which are 'best' for society. The question of what 'best' means is the basic question of social choice theory. The following rules are most common:
• The utilitarian rule - also called the max-sum rule - aims to maximize the sum of utilities, thus prioritizing efficiency.
• The egalitarian rule - also called the max-min rule - aims to maximize the smallest utility, thus prioritizing fairness.
• The proportional-fair rule - sometimes called the max-product rule - maximizes the product of utilities, striking a balance between efficiency and fairness.
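The three rules can be compared on a toy profile. The sketch below uses a hypothetical utility matrix (three alternatives, three individuals) chosen so that each rule selects a different alternative:

```python
from math import prod

# Hypothetical utilities: profiles[a][i] is individual i's utility in state a.
profiles = {
    "x": [12, 1, 1],  # high total, but one individual is very badly off
    "y": [3, 3, 3],   # perfectly equal, modest total
    "z": [5, 2, 4],   # a compromise between total and equality
}

max_sum = max(profiles, key=lambda a: sum(profiles[a]))    # utilitarian rule
max_min = max(profiles, key=lambda a: min(profiles[a]))    # egalitarian rule
max_prod = max(profiles, key=lambda a: prod(profiles[a]))  # proportional-fair rule

print(max_sum, max_min, max_prod)  # x y z
```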
Functions
A social choice function or a voting rule takes individuals' complete and transitive preferences over a set of candidates (also called alternatives) and returns some (possibly singleton) subset of the candidates. We can think of this subset as the winners of an election. This differs from a social welfare function, which returns a linear order of the set of alternatives rather than simply selecting some subset. We can compare different social choice functions based on which axioms or mathematical properties they satisfy.[2] For example, instant-runoff voting (IRV) satisfies the independence of clones criterion, whereas the Borda count does not; conversely, the Borda count satisfies the monotonicity criterion, whereas IRV does not.
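To make the contrast between IRV and the Borda count concrete, here is a minimal sketch of both rules on a small hypothetical profile on which they disagree (tie-handling is kept deliberately simple):

```python
from collections import Counter

def borda(ballots):
    """Borda count: a candidate at position p on a ballot over m
    candidates earns m - 1 - p points; the top scorers win."""
    m = len(ballots[0])
    scores = Counter()
    for b in ballots:
        for pos, cand in enumerate(b):
            scores[cand] += m - 1 - pos
    best = max(scores.values())
    return {c for c, s in scores.items() if s == best}

def irv(ballots):
    """Instant-runoff voting: repeatedly eliminate the candidate with
    the fewest first-place votes until someone has a strict majority."""
    ballots = [list(b) for b in ballots]
    while True:
        firsts = Counter(b[0] for b in ballots)
        leader, votes = firsts.most_common(1)[0]
        if 2 * votes > len(ballots) or len(firsts) == 1:
            return {leader}
        loser = min(firsts, key=firsts.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

profile = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2
print(borda(profile), irv(profile))  # {'B'} {'A'}: the two rules disagree
```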
Theorems
Arrow's impossibility theorem is what often comes to mind when one thinks about impossibility theorems in voting. However, Arrow was concerned with social welfare functions, not social choice functions. There are several famous theorems concerning social choice functions. The Gibbard–Satterthwaite theorem states that any non-dictatorial voting rule that is resolute (it always returns a single winner, no matter what the ballots are) and non-imposed (every alternative could be chosen) is manipulable when there are three or more alternatives (candidates). That is, some voter can cast a ballot that misrepresents their preferences to obtain a result more favorable to them under their sincere preferences. The Campbell–Kelley theorem states that, if a Condorcet winner exists, then selecting that winner is the unique resolute, neutral, anonymous, and non-manipulable voting rule.[2] May's theorem states that when there are only two candidates, simple majority vote is the unique neutral, anonymous, and positively responsive voting rule.[13]
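The kind of manipulation the Gibbard–Satterthwaite theorem guarantees can be demonstrated directly. In the hypothetical four-candidate profile below, scored by the Borda count, voter 0 sincerely prefers B but A wins; by insincerely "burying" A in last place, voter 0 flips the outcome to B:

```python
def borda_winner(ballots):
    """Single Borda winner (the profiles below produce no ties)."""
    m = len(ballots[0])
    scores = {}
    for b in ballots:
        for pos, c in enumerate(b):
            scores[c] = scores.get(c, 0) + (m - 1 - pos)
    return max(scores, key=scores.get)

sincere = [("B", "A", "C", "D"),  # voter 0's true preferences
           ("A", "B", "C", "D"),
           ("A", "B", "C", "D")]
print(borda_winner(sincere))  # A

# Voter 0 misrepresents their preferences, burying A last:
manipulated = [("B", "C", "D", "A")] + sincere[1:]
print(borda_winner(manipulated))  # B, which voter 0 sincerely prefers to A
```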
See also
• Compensation principle
• Computational social choice
• Condorcet paradox
• Emotional choice theory
• Extended sympathy
• Game theory
• Group decision-making
• Justice (economics)
• Liberal paradox
• Mechanism design
• Nakamura number
• Rational choice theory
• Rule according to higher law
• Voting system
Notes
1. Amartya Sen (2008). "Social Choice". The New Palgrave Dictionary of Economics, 2nd Edition, Abstract & TOC.
2. Zwicker, William S.; Moulin, Herve (2016), Brandt, Felix; Conitzer, Vincent; Endriss, Ulle; Lang, Jerome (eds.), "Introduction to the Theory of Voting", Handbook of Computational Social Choice, Cambridge: Cambridge University Press, pp. 23–56, doi:10.1017/cbo9781107446984.003, ISBN 978-1-107-44698-4, retrieved 2021-12-24
3. For example, in Kenneth J. Arrow (1951). Social Choice and Individual Values, New York: Wiley, ch. II, section 2, A Notation for Preferences and Choice, and ch. III, "The Social Welfare Function".
4. Walter Bossert and John A. Weymark (2008). "Social Choice (New Developments)," The New Palgrave Dictionary of Economics, 2nd Edition, Abstract & TOC.
5. Kaushik, Basu; Lòpez-Calva, Luis F. (2011). Functionings and Capabilities. Handbook of Social Choice and Welfare. Vol. 2. pp. 153–187. doi:10.1016/S0169-7218(10)00016-X. ISBN 9780444508942.
6. d'Aspremont, Claude; Gevers, Louis (2002). Chapter 10 Social welfare functionals and interpersonal comparability. Handbook of Social Choice and Welfare. Vol. 1. pp. 459–541. doi:10.1016/S1574-0110(02)80014-5. ISBN 9780444829146.
7. Amartya Sen ([1987] 2008). "Justice," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract & TOC.
Bertil Tungodden (2008). "Justice (New Perspectives)," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
Louis Kaplow (2008). "Pareto Principle and Competing Principles," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
Amartya K. Sen (1979 [1984]). Collective Choice and Social Welfare, New York: Elsevier, (description):
ch. 9, "Equity and Justice," pp. 131-51.
ch. 9*, "Impersonality and Collective Quasi-Orderings," pp. 152-160.
Kenneth J. Arrow (1983). Collected Papers, v. 1, Social Choice and Justice, Cambridge, MA: Belknap Press, Description, contents, and chapter-preview links.
Charles Blackorby, Walter Bossert, and David Donaldson, 2002. "Utilitarianism and the Theory of Justice", in Handbook of Social Choice and Welfare, edited by Kenneth J. Arrow, Amartya K. Sen, and Kotaro Suzumura, v. 1, ch. 11, pp. 543–596. Abstract.
8. Dutta, Bhaskar (2002). Chapter 12 Inequality, poverty and welfare. Handbook of Social Choice and Welfare. Vol. 1. pp. 597–633. doi:10.1016/S1574-0110(02)80016-9. ISBN 9780444829146.
9. Lionel Robbins (1932, 1935, 2nd ed.). An Essay on the Nature and Significance of Economic Science, London: Macmillan. Links for 1932 HTML and 1935 facsimile.
10. Kurrild-Klitgaard, Peter (2014). "Empirical social choice: An introduction". Public Choice. 158 (3–4): 297–310. doi:10.1007/s11127-014-0164-4. ISSN 0048-5829. S2CID 148982833.
11. Van Deemen, Adrian (2014). "On the empirical relevance of Condorcet's paradox". Public Choice. 158 (3–4): 311–330. doi:10.1007/s11127-013-0133-3. ISSN 0048-5829. S2CID 154862595.
12. Kurrild-Klitgaard, Peter (2014). "An empirical example of the Condorcet paradox of voting in a large electorate". Public Choice. 107 (1/2): 135–145. doi:10.1023/A:1010304729545. ISSN 0048-5829. S2CID 152300013.
13. May, Kenneth O. (October 1952). "A Set of Independent Necessary and Sufficient Conditions for Simple Majority Decision". Econometrica. 20 (4): 680–684. doi:10.2307/1907651. JSTOR 1907651.
References
• Arrow, Kenneth J. (1951, 2nd ed., 1963). Social Choice and Individual Values, New York: Wiley. ISBN 0-300-01364-7
• _____, (1972). "General Economic Equilibrium: Purpose, Analytic Techniques, Collective Choice", Nobel Prize Lecture, Link to text, with Section 8 on the theory and background.
• _____, (1983). Collected Papers, v. 1, Social Choice and Justice, Oxford: Blackwell ISBN 0-674-13760-4
• Arrow, Kenneth J., Amartya K. Sen, and Kotaro Suzumura, eds. (1997). Social Choice Re-Examined, 2 vol., London: Palgrave Macmillan ISBN 0-312-12739-1 & ISBN 0-312-12741-3
• _____, eds. (2002). Handbook of Social Choice and Welfare, v. 1. Chapter-preview links.
• _____, ed. (2011). Handbook of Social Choice and Welfare, v. 2, Amsterdam: Elsevier. Chapter-preview links.
• Bossert, Walter and John A. Weymark (2008). "Social Choice (New Developments)," The New Palgrave Dictionary of Economics, 2nd Edition, London: Palgrave Macmillan Abstract.
• Dryzek, John S. and Christian List (2003). "Social Choice Theory and Deliberative Democracy: A Reconciliation," British Journal of Political Science, 33(1), pp. 1–28, https://www.jstor.org/discover/10.2307/4092266?uid=3739936&uid=2&uid=4&uid=3739256&sid=21102056001967, 2002 PDF link.
• Feldman, Allan M. and Roberto Serrano (2006). Welfare Economics and Social Choice Theory, 2nd ed., New York: Springer ISBN 0-387-29367-1, ISBN 978-0-387-29367-7 Arrow-searchable chapter previews.
• Fleurbaey, Marc (1996). Théories économiques de la justice, Paris: Economica.
• Gaertner, Wulf (2006). A primer in social choice theory. Oxford: Oxford University Press. ISBN 978-0-19-929751-1.
• Harsanyi, John C. (1987). "Interpersonal Utility Comparisons," The New Palgrave: A Dictionary of Economics, v. 2, London: Palgrave, pp. 955–58.
• Moulin, Herve (1988). Axioms of cooperative decision making. Cambridge: Cambridge University Press. ISBN 978-0-521-42458-5.
• Myerson, Roger B. (June 2013). "Fundamentals of social choice theory". Quarterly Journal of Political Science. 8 (3): 305–337. CiteSeerX 10.1.1.297.6781. doi:10.1561/100.00013006.
• Nitzan, Shmuel (2010). Collective Preference and Choice. Cambridge, UK: Cambridge University Press. ISBN 978-0-521-72213-1.
• Robbins, Lionel (1935). An Essay on the Nature and Significance of Economic Science, 2nd ed., London: Macmillan, ch. VI
• ____, (1938). "Interpersonal Comparisons of Utility: A Comment," Economic Journal, 43(4), 635–41.
• Sen, Amartya K. (1970 [1984]). Collective Choice and Social Welfare, New York: Elsevier ISBN 0-444-85127-5 Description.
• _____, (1998). "The Possibility of Social Choice", Nobel Prize Lecture .
• _____, (1987). "Social Choice," The New Palgrave: A Dictionary of Economics, v. 4, London: Palgrave, pp. 382–93.
• _____, (2008). "Social Choice,". The New Palgrave Dictionary of Economics, 2nd Edition, London: Palgrave Abstract.
• Shoham, Yoav; Leyton-Brown, Kevin (2009). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. New York: Cambridge University Press. ISBN 978-0-521-89943-7.. A comprehensive reference from a computational perspective; see Chapter 9. Downloadable free online.
• Suzumura, Kotaro (1983). Rational Choice, Collective Decisions, and Social Welfare, Cambridge: Cambridge University Press ISBN 0-521-23862-5
• Taylor, Alan D. (2005). Social choice and the mathematics of manipulation. New York: Cambridge University Press. ISBN 978-0-521-00883-9.
External links
• List, Christian. "Social Choice Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
• Social Choice Bibliography by J. S. Kelly Archived 2017-12-23 at the Wayback Machine
• Electowiki, a wiki covering many subjects of social choice and voting theory
Social golfer problem
In discrete mathematics, the social golfer problem (SGP) is a combinatorial-design problem derived from a question posted in the Usenet newsgroup sci.op-research in May 1998.[1] The problem is as follows: a group of 32 golfers plays golf once a week in groups of 4. Schedule these golfers to play for as many weeks as possible without any two golfers playing in the same group more than once.
More generally, this problem can be defined for any $n=g\times s$ golfers who play in $g$ groups of $s$ golfers for $w$ weeks. The solution involves either affirming or denying the existence of a schedule and, if such a schedule exists, determining the number of unique schedules and constructing them.
Challenges
The SGP is a challenging problem to solve for two main reasons:[2]
First is the large search space resulting from the combinatorial and highly symmetrical nature of the problem. There are a total of $(n!)^{w}$ schedules in the search space. For each schedule, the weeks ($w!$), the groups within each week ($g!$), the players within each group ($s!$), and the individual players ($n!$) can all be permuted. This leads to a total of $w!\times g!\times s!\times n!$ isomorphic schedules, i.e., schedules that are identical up to one of these symmetry operations. Due to its high symmetry, the SGP is commonly used as a standard benchmark for symmetry breaking in constraint programming (symmetry-breaking constraints).
Second is the choice of variables. The SGP can be seen as an optimization problem to maximize the number of weeks in the schedule. Hence, incorrectly defined initial points and other variables in the model can lead the process to an area in the search space with no solution.
Solutions
The SGP is closely related to the Steiner system S(2,4,32): a schedule in which every pair of the 32 golfers played together exactly once would form a resolvable S(2,4,32), with the weeks as parallel classes. Soon after the problem was proposed in 1998, a solution for 9 weeks was found and the existence of a solution for 11 weeks was proven to be impossible. For the latter, note that each player must play with 3 new players each week. Over a schedule lasting 11 weeks, a player would therefore be grouped with a total of $3\times 11=33$ other players. Since there are only 31 other players in total, this is not possible.[3] A solution for 10 weeks could be obtained from results already published in 1996.[4] It was independently rediscovered using a different method in 2004,[5] which is the solution presented below.
10-week solution to the social golfer problem (Aguado, 2004):
| Week | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Group 6 | Group 7 | Group 8 |
|------|---------|---------|---------|---------|---------|---------|---------|---------|
| 01 | 0,1,2,3 | 4,5,22,23 | 6,7,20,21 | 8,25,26,27 | 9,10,11,24 | 12,13,15,30 | 14,28,29,31 | 16,17,18,19 |
| 02 | 0,4,8,28 | 1,6,18,23 | 2,7,17,22 | 3,5,26,31 | 9,13,14,27 | 10,15,19,21 | 11,25,29,30 | 12,16,20,24 |
| 03 | 0,11,14,21 | 1,7,10,28 | 2,15,20,25 | 3,13,22,24 | 4,9,18,31 | 5,16,27,30 | 6,8,19,29 | 12,17,23,26 |
| 04 | 0,18,24,27 | 1,9,19,26 | 2,8,11,16 | 3,10,17,25 | 4,7,12,29 | 5,6,14,15 | 13,20,23,28 | 21,22,30,31 |
| 05 | 0,6,13,26 | 1,4,11,15 | 2,9,21,28 | 3,8,14,23 | 5,12,18,25 | 7,19,24,30 | 10,16,22,29 | 17,20,27,31 |
| 06 | 0,7,25,31 | 1,5,24,29 | 2,12,14,19 | 3,18,28,30 | 4,6,10,27 | 8,13,17,21 | 9,15,16,23 | 11,20,22,26 |
| 07 | 0,5,19,20 | 1,14,22,25 | 2,23,27,29 | 3,4,16,21 | 6,9,17,30 | 7,11,13,18 | 8,10,12,31 | 15,24,26,28 |
| 08 | 0,15,17,29 | 1,13,16,31 | 2,4,26,30 | 3,6,11,12 | 5,7,8,9 | 10,14,18,20 | 19,22,27,28 | 21,23,24,25 |
| 09 | 0,9,12,22 | 1,8,20,30 | 2,5,10,13 | 3,7,15,27 | 4,14,17,24 | 6,16,25,28 | 11,19,23,31 | 18,21,26,29 |
| 10 | 0,10,23,30 | 1,12,21,27 | 2,6,24,31 | 3,9,20,29 | 4,13,19,25 | 5,11,17,28 | 7,14,16,26 | 8,15,18,22 |
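Checking a proposed schedule is much easier than finding one: it suffices to verify that no pair of golfers appears together in more than one group. The sketch below illustrates such a verifier on a toy instance (n=4, g=2, s=2, rather than the full 32-golfer table):

```python
from itertools import combinations

def is_valid_sgp_schedule(weeks):
    """True iff no pair of golfers shares a group more than once."""
    seen = set()
    for week in weeks:
        for group in week:
            for pair in combinations(sorted(group), 2):
                if pair in seen:
                    return False
                seen.add(pair)
    return True

# Toy instance: 4 golfers in 2 groups of 2; the pairing bound
# (n - 1) / (s - 1) = 3 weeks is met by this round-robin schedule.
tiny = [
    [(0, 1), (2, 3)],
    [(0, 2), (1, 3)],
    [(0, 3), (1, 2)],
]
print(is_valid_sgp_schedule(tiny))                       # True
print(is_valid_sgp_schedule(tiny + [[(0, 1), (2, 3)]]))  # False: pairs repeat
```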
There are many approaches to solving the SGP, including design theory techniques, SAT formulations (as a propositional satisfiability problem),[6][7] constraint-based approaches,[8] metaheuristic methods, and the radix approach.
The radix approach assigns golfers to groups based on the addition of numbers in base $s$.[9] Variables in the general case of the SGP can be redefined as $n=s^{k}$ golfers who play in $g=s^{k-1}$ groups of $s$ golfers, for any integer $k$. The maximum number of weeks that these golfers can play without regrouping any two golfers is $(s^{k}-1)/(s-1)$: each golfer meets $s-1$ new golfers per week and has only $s^{k}-1$ possible partners.
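The construction of [9] is not reproduced here, but a standard arithmetic construction in the same spirit, sketched below for the special case $n=s^{2}$ with $s$ prime, shows how base-$s$ labels can achieve the $(s^{2}-1)/(s-1)=s+1$ week bound: golfers are treated as points $(x,y)$ of an $s\times s$ grid, and each week groups them along parallel lines of a different slope.

```python
def arithmetic_schedule(s):
    """Schedule n = s*s golfers (s prime), labelled by their base-s
    digit pairs (x, y) -> s*x + y.  Week m groups golfers lying on the
    lines y = m*x + c (mod s); a final week groups them by x.  Since s
    is prime, every pair of golfers meets exactly once, achieving
    (s*s - 1) / (s - 1) = s + 1 weeks."""
    label = lambda x, y: s * x + y
    weeks = [
        [[label(x, (m * x + c) % s) for x in range(s)] for c in range(s)]
        for m in range(s)
    ]
    weeks.append([[label(x, y) for y in range(s)] for x in range(s)])
    return weeks

# 9 golfers in 3 groups of 3 for 4 weeks:
for week in arithmetic_schedule(3):
    print(week)
```

Two golfers with the same $x$ digit meet only in the final week; two golfers with different $x$ digits meet exactly in the week whose slope $m$ solves $y_2 - y_1 = m(x_2 - x_1)$ mod $s$, which has a unique solution because $s$ is prime.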
Applications
Working in groups is encouraged in classrooms because it fosters active learning and the development of critical-thinking and communication skills. The SGP has been used to assign students to groups in undergraduate chemistry classes[9] and to breakout rooms in online meeting software[10] to maximize student interaction and socialization.
The SGP has also been used as a model to study tournament scheduling.[11]
See also
• Steiner system
• Kirkman's schoolgirl problem
• Euler's officer problem
• Round-robin tournament
References
1. Harvey, Warwick. "Problem 010: Social Golfers Problem". www.csplib.org. Retrieved 6 September 2021.
2. Liu, Ke; Löffler, Sven; Hofstedt, Petra (2019). "Social Golfer Problem Revisited". In van den Herik, Jaap; Rocha, Ana Paula; Steels, Luc (eds.). Agents and Artificial Intelligence: 11th International Conference, ICAART 2019, Prague, Czech Republic, February 19–21, 2019, Revised Selected Papers. Springer International Publishing. pp. 72–99. doi:10.1007/978-3-030-37494-5_5. ISBN 978-3-030-37494-5.
3. Triska, Markus. "The Social Golfer Problem". www.metalevel.at.
4. Shen, Hao (1996). "Existence of resolvable group divisible designs with block size four and group size two or three". Journal of Shanghai Jiaotong University. 1 (1): 68–70. MR 1454271.
5. Aguado, Alejandro. "A 10 Days Solution to the Social Golfer Problem" (PDF). Math Puzzles. Retrieved 9 September 2021.
6. Lardeux, Frédéric; Monfroy, Eric (2015). "Expressively Modeling the Social Golfer Problem in SAT". Procedia Computer Science. 51: 336–345. doi:10.1016/j.procs.2015.05.252.
7. Triska, Markus; Musliu, Nysret (April 2012). "An improved SAT formulation for the social golfer problem". Annals of Operations Research. 194 (1): 427–438. doi:10.1007/s10479-010-0702-5.
8. Liu, Ke; Löffler, Sven; Hofstedt, Petra (2019). "Solving the Social Golfers Problems by Constraint Programming in Sequential and Parallel". In Rocha, Ana; Steels, Luc; van den Herik, Jaap (eds.). Proceedings of the 11th International Conference on Agents and Artificial Intelligence. Science and Technology Publications. pp. 29–39. doi:10.5220/0007252300290039. ISBN 978-989-758-350-6.
9. Limpanuparb, Taweetham; Datta, Sopanant; Tawornparcha, Piyathida; Chinsukserm, Kridtin (2021). "ACAD-Feedback: Online Framework for Assignment, Collection, Analysis, and Distribution of Self, Peer, Instructor, and Group Feedback". Journal of Chemical Education. 98 (9): 3038–3044. doi:10.1021/acs.jchemed.1c00424.
10. Miller, Alice; Barr, Matthew; Kavanagh, William; Valkov, Ivaylo; Purchase, Helen C (2021). "Breakout Group Allocation Schedules and the Social Golfer Problem with Adjacent Group Sizes". Symmetry. 13 (13). doi:10.3390/sym13010013.
11. Lambers, Roel; Rothuizen, Laurent; Spieksma, Frits C. R. (2021). "The Traveling Social Golfer Problem: The Case of the Volleyball Nations League". In Stuckey, Peter J. (ed.). Integration of Constraint Programming, Artificial Intelligence, and Operations Research: 18th International Conference, CPAIOR 2021, Vienna, Austria, July 5–8, 2021, Proceedings. Springer. pp. 149–162. doi:10.1007/978-3-030-78230-6_10. ISBN 978-3-030-78230-6.
External links
• Wolfram Community: Radix Approach to Solving the Social Golfer Problem and Graph Visualization
• Wolfram Mathworld: Social Golfer Problem
Société mathématique de France
The Société Mathématique de France (SMF) is the main professional society of French mathematicians.
The society was founded in 1872 by Émile Lemoine and is one of the oldest mathematical societies in existence. It publishes several academic journals: Annales Scientifiques de l'École Normale Supérieure, Astérisque, Bulletin de la Société Mathématique de France, Gazette des mathématiciens, Mémoires de la Société Mathématique de France, Panoramas et Synthèses, and Revue d'histoire des mathématiques.[1]
List of presidents
• 1873: Michel Chasles[2]
• 1874: Laffon de Ladebat
• 1875: Irénée-Jules Bienaymé
• 1876: Jules de La Gournerie (1814–1883)
• 1877: Amédée Mannheim
• 1878: Jean Gaston Darboux
• 1879: Pierre Ossian Bonnet
• 1880: Camille Jordan
• 1881: Edmond Laguerre
• 1882: Georges Henri Halphen
• 1883: Eugène Rouché
• 1884: Émile Picard
• 1885: Paul Appell
• 1886: Henri Poincaré
• 1887: Georges Fouret
• 1888: Charles-Ange Laisant
• 1889: Désiré André
• 1890: Julien Haton de La Goupillière
• 1891: Édouard Collignon
• 1892: Eugène Vicaire
• 1893: Georges Humbert
• 1894: Henry Picquet
• 1895: Édouard Goursat
• 1896: Gabriel Koenigs
• 1897: Émile Picard
• 1898: Léon Lecornu
• 1899: Émile Guyou (1843–1915)
• 1900: Henri Poincaré
• 1901: Maurice d’Ocagne
• 1902: Louis Raffy
• 1903: Paul Painlevé
• 1904: Emmanuel Carvallo
• 1905: Émile Borel
• 1906: Jacques Hadamard
• 1907: Émile Blutel
• 1908: Raoul Perrin
• 1909: Charles Bioche
• 1910: Raoul Bricard
• 1911: Lucien Lévy
• 1912: Marie Henri Andoyer
• 1913: François Cosserat
• 1914: Ernest Vessiot
• 1915: Élie Cartan
• 1916: Maurice Fouché
• 1917: Claude Guichard
• 1918: Edmond Maillet
• 1919: Henri Lebesgue
• 1920: Jules Drach
• 1921: Auguste Boulanger
• 1922: Eugène Cahen
• 1923: Paul Appell
• 1924: Paul Lévy
• 1925: Paul Montel
• 1926: Pierre Fatou
• 1927: Bertrand de Defontviolant
• 1928: Alexandre Thybaut
• 1929: André Auric
• 1930: Émile Jouguet
• 1931: Arnaud Denjoy
• 1932: Gaston Julia
• 1933: Alfred-Marie Liénard
• 1934: Jean Chazy
• 1935: Maurice Fréchet
• 1936: René Garnier
• 1937: Joseph Pérès
• 1938: Georges Valiron
• 1939: Henri Vergne
• 1940: ?
• 1941: Théophile Got
• 1942: Charles Platrier
• 1943: Bertrand Gambier
• 1944: Jacques Chapelon
• 1945: Georges Darmois
• 1946: Jean Favard
• 1947: Albert Châtelet
• 1948: Maurice Janet
• 1949: Roger Brard
• 1950: Henri Cartan
• 1951: André Lamothe
• 1952: Marie-Louise Dubreil-Jacotin
• 1953: Szolem Mandelbrojt
• 1954: Jean Leray
• 1955: André Marchaud
• 1956: Maurice Roy
• 1957: André Marchaud
• 1958: Paul Dubreil
• 1959: André Lichnerowicz
• 1960: Marcel Brelot
• 1961: Gustave Choquet
• 1962: Laurent Schwartz
• 1963: Pierre Lelong
• 1964: Jean Dieudonné
• 1965: Charles Ehresmann
• 1966: André Revuz
• 1967: Georges Reeb
• 1968: René Thom
• 1969: Charles Pisot
• 1970: Jean-Pierre Serre
• 1971: Jean Cerf
• 1972–1973: Jean-Pierre Kahane
• 1974: Georges Poitou
• 1975: Yvette Amice
• 1976: Claude Godbillon
• 1977: Jacques Neveu
• 1978: Jean-Louis Koszul
• 1979–1980: Marcel Berger
• 1981: Michel Hervé
• 1982–1983: Christian Houzel
• 1984: Jean-Louis Verdier
• 1985: Bernard Malgrange
• 1986–1987: Jean-François Méla
• 1988: Michel Demazure
• 1989: Gérard Schiffmann
• 1990–1992: Jean-Pierre Bourguignon
• 1992–1994: Daniel Barlet
• 1994–1996: Rémy Langevin
• 1996–1998: Jean-Jacques Risler
• 1998–2001: Mireille Martin-Deschamps
• 2001–2004: Michel Waldschmidt
• 2004–2007: Marie-Françoise Roy
• 2007–2010: Stéphane Jaffard
• 2010–2012: Bernard Helffer
• 2012–2013: Aline Bonami
• 2013–2016: Marc Peigné
• 2016–2019: Stéphane Seuret
• 2020– : Fabien Durand
See also
• European Mathematical Society
• Centre International de Rencontres Mathématiques
• List of mathematical societies
References
1. "Les Publications | Société Mathématique de France".
2. Anciens Présidents, SMF
External links
• Webpage of the society
The European Mathematical Society
International member societies
• European Consortium for Mathematics in Industry
• European Society for Mathematical and Theoretical Biology
National member societies
• Austria
• Belarus
• Belgium
• Belgian Mathematical Society
• Belgian Statistical Society
• Bosnia and Herzegovina
• Bulgaria
• Croatia
• Cyprus
• Czech Republic
• Denmark
• Estonia
• Finland
• France
• Mathematical Society of France
• Society of Applied & Industrial Mathematics
• Société Française de Statistique
• Georgia
• Germany
• German Mathematical Society
• Association of Applied Mathematics and Mechanics
• Greece
• Hungary
• Iceland
• Ireland
• Israel
• Italy
• Italian Mathematical Union
• Società Italiana di Matematica Applicata e Industriale
• The Italian Association of Mathematics applied to Economic and Social Sciences
• Latvia
• Lithuania
• Luxembourg
• Macedonia
• Malta
• Montenegro
• Netherlands
• Norway
• Norwegian Mathematical Society
• Norwegian Statistical Association
• Poland
• Portugal
• Romania
• Romanian Mathematical Society
• Romanian Society of Mathematicians
• Russia
• Moscow Mathematical Society
• St. Petersburg Mathematical Society
• Ural Mathematical Society
• Slovakia
• Slovak Mathematical Society
• Union of Slovak Mathematicians and Physicists
• Slovenia
• Spain
• Catalan Society of Mathematics
• Royal Spanish Mathematical Society
• Spanish Society of Statistics and Operations Research
• The Spanish Society of Applied Mathematics
• Sweden
• Swedish Mathematical Society
• Swedish Society of Statisticians
• Switzerland
• Turkey
• Ukraine
• United Kingdom
• Edinburgh Mathematical Society
• Institute of Mathematics and its Applications
• London Mathematical Society
Academic Institutional Members
• Abdus Salam International Centre for Theoretical Physics
• Academy of Sciences of Moldova
• Bernoulli Center
• Centre de Recerca Matemàtica
• Centre International de Rencontres Mathématiques
• Centrum voor Wiskunde en Informatica
• Emmy Noether Research Institute for Mathematics
• Erwin Schrödinger International Institute for Mathematical Physics
• European Institute for Statistics, Probability and Operations Research
• Institut des Hautes Études Scientifiques
• Institut Henri Poincaré
• Institut Mittag-Leffler
• Institute for Mathematical Research
• International Centre for Mathematical Sciences
• Isaac Newton Institute for Mathematical Sciences
• Mathematisches Forschungsinstitut Oberwolfach
• Mathematical Research Institute
• Max Planck Institute for Mathematics in the Sciences
• Research Institute of Mathematics of the Voronezh State University
• Serbian Academy of Science and Arts
• Mathematical Society of Serbia
• Stefan Banach International Mathematical Center
• Thomas Stieltjes Institute for Mathematics
Institutional Members
• Central European University
• Faculty of Mathematics at the University of Barcelona
• Cellule MathDoc
Société de Mathématiques Appliquées et Industrielles
The Société de Mathématiques Appliquées et Industrielles (SMAI) is a French scientific society that promotes applied mathematics, playing a role similar to that of the Society for Industrial and Applied Mathematics (SIAM).
Not to be confused with Society for Industrial and Applied Mathematics.
SMAI was founded in 1983 to contribute to the development of applied mathematics through research, commercial applications, publications, teaching, and industrial training. As of 2009, the society had nearly 1,300 members, including both individuals and institutions.
SMAI is directed by a board elected by the general assembly. Its chief activities are:
• to organize conferences and workshops,
• to publish the thrice-yearly bulletin Matapli, which contains overviews, book reviews, and information about theses and upcoming conferences,
• to publish scholarly journals including Modélisation Mathématique et Analyse Numérique (M2AN), Contrôle Optimisation et Calcul des Variations (COCV), Probabilités et Statistiques (P&S), Recherche opérationnelle (RO), ESAIM: Proceedings and Surveys, and the cross-disciplinary journal MathematicS in Action (MathS in A.).
Prizes
• Prix Jacques-Louis Lions, established in 2003 with INRIA and CNES, recognized by the Académie des Sciences
• Prix Blaise Pascal, established in 1984 with GAMNI, recognized by the Académie des Sciences
• Grand Prix Louis Bachelier, established in 2007 with the Natixis Foundation for Quantitative Research; awarded by the French Academy of Sciences until 2012 and jointly with the London Mathematical Society since 2014
• Prix Lagrange de l'ICIAM, established in 1998 with SEMA (Spain) and SIMAI (Italy)
• Prix Maurice Audin, awarded by SMF and SMAI
Interest groups
SMAI contains five special interest groups, organized by specific mathematical areas, as follows:
• SMAI-GAMNI (Groupe thématique pour l’Avancement des Méthodes Numériques de l’Ingénieur) which promotes the use of numerical analysis in industry.
• SMAI-MAIRCI (Mathématiques Appliquées, Informatique, Réseaux, Calcul, Industrie) which is at the frontiers of interdisciplinary applied mathematics.
• SMAI-MAS (Modélisation Aléatoire et Statistique) which promotes the use of statistics and probability in industry.
• SMAI-MODE (Mathématiques de l’Optimisation et de la Décision) which is dedicated to fields such as nonlinear analysis, optimization, discrete mathematics, operational research, and mathematical modeling in economics, finance, and the social sciences.
• SMAI-SIGMA (Signal - Image - Géométrie - Modélisation - Approximation; formerly SMAI-AFA Association Française d’Approximation) to promote the study and use of approximations in general.
External links
• Société de Mathématiques Appliquées et Industrielles (SMAI)
Industrial and applied mathematics
Computational
• Algorithms
• design
• analysis
• Automata theory
• Coding theory
• Computational geometry
• Constraint programming
• Computational logic
• Cryptography
• Information theory
Discrete
• Computer algebra
• Computational number theory
• Combinatorics
• Graph theory
• Discrete geometry
Analysis
• Approximation theory
• Clifford analysis
• Clifford algebra
• Differential equations
• Ordinary differential equations
• Partial differential equations
• Stochastic differential equations
• Differential geometry
• Differential forms
• Gauge theory
• Geometric analysis
• Dynamical systems
• Chaos theory
• Control theory
• Functional analysis
• Operator algebra
• Operator theory
• Harmonic analysis
• Fourier analysis
• Multilinear algebra
• Exterior
• Geometric
• Tensor
• Vector
• Multivariable calculus
• Exterior
• Geometric
• Tensor
• Vector
• Numerical analysis
• Numerical linear algebra
• Numerical methods for ordinary differential equations
• Numerical methods for partial differential equations
• Validated numerics
• Variational calculus
Probability theory
• Distributions (random variables)
• Stochastic processes / analysis
• Path integral
• Stochastic variational calculus
Mathematical
physics
• Analytical mechanics
• Lagrangian
• Hamiltonian
• Field theory
• Classical
• Conformal
• Effective
• Gauge
• Quantum
• Statistical
• Topological
• Perturbation theory
• in quantum mechanics
• Potential theory
• String theory
• Bosonic
• Topological
• Supersymmetry
• Supersymmetric quantum mechanics
• Supersymmetric theory of stochastic dynamics
Algebraic structures
• Algebra of physical space
• Feynman integral
• Poisson algebra
• Quantum group
• Renormalization group
• Representation theory
• Spacetime algebra
• Superalgebra
• Supersymmetry algebra
Decision sciences
• Game theory
• Operations research
• Optimization
• Social choice theory
• Statistics
• Mathematical economics
• Mathematical finance
Other applications
• Biology
• Chemistry
• Psychology
• Sociology
• "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"
Related
• Mathematics
• Mathematical software
Organizations
• Society for Industrial and Applied Mathematics
• Japan Society for Industrial and Applied Mathematics
• Société de Mathématiques Appliquées et Industrielles
• International Council for Industrial and Applied Mathematics
• European Community on Computational Methods in Applied Sciences
• Category
• Mathematics portal / outline / topics list
Society for Industrial and Applied Mathematics
Society for Industrial and Applied Mathematics (SIAM) is a professional society dedicated to applied mathematics, computational science, and data science through research, publications, and community. SIAM is the world's largest scientific society devoted to applied mathematics, and roughly two-thirds of its membership resides within the United States.[3] Founded in 1951,[4] the organization began holding annual national meetings in 1954,[5][6] and now hosts conferences, publishes books and scholarly journals, and engages in advocacy on issues of interest to its membership.[1][7] Members include engineers, scientists, and mathematicians, both those employed in academia and those working in industry. The society supports educational institutions promoting applied mathematics.
Not to be confused with Société de Mathématiques Appliquées et Industrielles.
Society for Industrial and Applied Mathematics
Formation: 1951
Type: 501(c)(3)[1]
Tax ID no.: 23-1496016
Headquarters: Philadelphia, Pennsylvania, United States
Location: University City, Philadelphia
Coordinates: 39.9558056°N, 75.1967729°W
Fields: Applied Mathematics
Membership: 14,500[2]
President: Sven Leyffer
Revenue (2015): $13,458,671[1]
Website: www.siam.org
SIAM is one of the four member organizations of the Joint Policy Board for Mathematics.[8]
Membership
Membership is open to both individuals and organizations. By the end of its first full year of operation, SIAM had 130 members; by 1968, it had 3,700.[5][9]
Student members can join SIAM chapters affiliated with and run by students and faculty at universities. Most universities with SIAM chapters are in the United States (including Harvard[10] and MIT[11]), but SIAM chapters also exist in other countries, for example at Oxford,[12] at the École Polytechnique Fédérale de Lausanne[13] and at Peking University.[14] SIAM publishes the SIAM Undergraduate Research Online, a venue for undergraduate research in applied and computational mathematics. (SIAM also offers the SIAM Visiting Lecture Program, which helps arrange visits from industrial mathematicians to speak to student groups about applied mathematics and their own professional experiences.[15][16])
In 2009, SIAM instituted a Fellows program to recognize certain members who have made outstanding contributions to the fields that SIAM serves.[17]
Activity groups
The society includes a number of activity groups (SIAGs) to allow for more focused group discussions and collaborations. Activity groups organize domain-specific conferences and minisymposia, and award prizes.[18]
Unlike special interest groups in similar academic associations such as the ACM, activity groups are chartered for a fixed period of time, typically two years, and must petition the SIAM Council and Board for renewal. Charter approval is largely based on group size, as topics that were considered hot at one time may have fewer active researchers later.[19]
Current Activity Groups:
• Algebraic Geometry
• Analysis of Partial Differential Equations
• Applied and Computational Discrete Algorithms
• Applied Mathematics Education
• Computational Science and Engineering
• Control and Systems Theory
• Data Science
• Discrete Mathematics
• Dynamical Systems
• Financial Mathematics and Engineering
• Geometric Design
• Geosciences
• Imaging Science
• Life Sciences
• Linear Algebra
• Mathematical Aspects of Materials Science
• Mathematics of Planet Earth
• Nonlinear Waves and Coherent Structures
• Optimization
• Orthogonal Polynomials and Special Functions
• Supercomputing
• Uncertainty Quantification
Publications
Journals
As of 2018, SIAM publishes 18 research journals:[20]
• SIAM Journal on Applied Mathematics (SIAP), since 1966
• formerly Journal of the Society for Industrial and Applied Mathematics, since 1953
• Theory of Probability and Its Applications (TVP), since 1956
• translation of Teoriya Veroyatnostei i ee Primeneniya
• SIAM Review (SIREV), since 1959
• SIAM Journal on Control and Optimization (SICON), since 1976
• formerly SIAM Journal on Control, since 1966
• formerly Journal of the Society for Industrial and Applied Mathematics, Series A: Control, since 1962
• SIAM Journal on Numerical Analysis (SINUM), since 1966
• formerly Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis, since 1964
• SIAM Journal on Mathematical Analysis (SIMA), since 1970
• SIAM Journal on Computing (SICOMP), since 1972
• SIAM Journal on Matrix Analysis and Applications (SIMAX), since 1988
• formerly SIAM Journal on Algebraic and Discrete Methods, since 1980
• SIAM Journal on Scientific Computing (SISC), since 1993
• formerly SIAM Journal on Scientific and Statistical Computing, since 1980
• SIAM Journal on Discrete Mathematics (SIDMA), since 1988
• SIAM Journal on Optimization (SIOPT), since 1991
• SIAM Journal on Applied Dynamical Systems (SIADS), since 2002
• Multiscale Modeling and Simulation (MMS), since 2003
• SIAM Journal on Imaging Sciences (SIIMS), since 2008
• SIAM Journal on Financial Mathematics (SIFIN), since 2010
• SIAM/ASA Journal on Uncertainty Quantification (JUQ), since 2013
• SIAM Journal on Applied Algebra and Geometry (SIAGA), since 2017
• SIAM Journal on Mathematics of Data Science (SIMODS), since 2018
Books
SIAM publishes roughly 20 books each year,[21] including textbooks, conference proceedings and monographs. Many of these are issued in themed series, such as "Advances in design and control", "Financial mathematics" and "Monographs on discrete mathematics and applications". In particular, SIAM distributes books produced by Gilbert Strang's Wellesley-Cambridge Press, such as his Introduction to Linear Algebra (5th edition, 2016). Organizations such as libraries can obtain DRM-free access to SIAM books in eBook format for a subscription fee.[21]
Conferences
SIAM organizes conferences and meetings throughout the year focused on various topics in applied math and computational science. For example, SIAM has hosted an annual conference on data mining since 2001.[22] The establishment of the SIAM Conferences on Discrete Mathematics, held every two years, has been regarded as a sign of the growth of graph theory as a prominent topic of study.[23]
In conjunction with the Association for Computing Machinery, SIAM also organizes the annual Symposium on Discrete Algorithms, using the format of a theoretical computer science conference rather than the mathematics conference format that SIAM typically uses for its conferences.[24]
Prizes and recognition
SIAM recognizes applied mathematicians and computational scientists for their contributions to the fields it serves. Prizes include:[25]
• Germund Dahlquist Prize: Awarded to a young scientist (normally under 45) for original contributions to fields associated with Germund Dahlquist (numerical solution of differential equations and numerical methods for scientific computing).[26]
• Ralph E. Kleinman Prize: Awarded for "outstanding research, or other contributions, that bridge the gap between mathematics and applications...Each prize may be given either for a single notable achievement or for a collection of such achievements."[27]
• J.D. Crawford Prize: Awarded to "one individual for recent outstanding work on a topic in nonlinear science, as evidenced by a publication in English in a peer-reviewed journal within the four calendar years preceding the meeting at which the prize is awarded"[28]
• Jürgen Moser Lecture: Awarded to "a person who has made distinguished contributions to nonlinear science".[29]
• Richard C. DiPrima Prize: Awarded to "a young scientist who has done outstanding research in applied mathematics (defined as those topics covered by SIAM journals) and who has completed his/her doctoral dissertation and completed all other requirements for his/her doctorate during the period running from three years prior to the award date to one year prior to the award date".[30]
• George Pólya Prize: "is given every two years, alternately in two categories: (1) for a notable application of combinatorial theory; (2) for a notable contribution in another area of interest to George Pólya such as approximation theory, complex analysis, number theory, orthogonal polynomials, probability theory, or mathematical discovery and learning."[31]
• W. T. and Idalia Reid Prize: Awarded for research in and contributions to areas of differential equations and control theory.[32]
• Theodore von Kármán Prize: Awarded for "notable application of mathematics to mechanics and/or the engineering sciences made during the five to ten years preceding the award".[33]
• James H. Wilkinson Prize in Numerical Analysis and Scientific Computing: Awarded for "research in, or other contributions to, numerical analysis and scientific computing during the six years preceding the award".[34]
John von Neumann Lecture
The John von Neumann Lecture prize was established in 1959 with funds from IBM and other industry corporations, and is awarded for "outstanding and distinguished contributions to the field of applied mathematical sciences and for the effective communication of these ideas to the community".[35] The recipient receives a monetary award and presents a survey lecture at the Annual Meeting.
MathWorks Math Modeling (M3) Challenge
The MathWorks Math Modeling Challenge is an applied mathematics modeling competition for high school students in the United States. Scholarship prizes totaled $60,000 in 2006 and have since been raised to $150,000.[36][37] It is funded by MathWorks.[38][39] Originally, the prize was sponsored by the financial services company Moody's and known as the Moody's Mega Math Challenge.[40]
Leadership
The chief elected officer of SIAM is the president, elected for a single two-year term.[41] SIAM employs an executive director and staff.[1]
The following people have been presidents of the society:[42]
• William E. Bradley, Jr. (1952–1953)
• Donald Houghton (1953–1954)
• Harold W. Kuhn (1954–1955)
• John Mauchly (1955–1956)
• Thomas Southard (1956–1958)
• Donald Thomsen, Jr. (1958–1959)
• Brockway McMillan (1959–1960)
• F. Joachim Weyl (1960–1961)
• Robert Rinehart (1961–1962)
• Joseph P. LaSalle (1962–1963)
• Alston Householder (1963–1964)
• J. Barkley Rosser (1964–1966)
• Garrett Birkhoff (1966–1968)
• J. Wallace Givens (1968–1970)
• Burton Colvin (1970–1972)
• C. C. Lin (1972–1974)
• Herbert Keller (1974–1976)
• Werner Rheinboldt (1976–1978)
• Richard C. DiPrima (1979–1980)
• Seymour Parter (1981–1982)
• Hirsh Cohen (1983–1984)
• Gene H. Golub (1985–1986)
• C. William Gear (1987–1988)
• Ivar Stakgold (1989–1990)
• Robert E. O’Malley, Jr. (1991–1992)
• Avner Friedman (1993–1994)
• Margaret H. Wright (1995–1996)
• John Guckenheimer (1997–1998)
• Gilbert Strang (1999–2000)
• Thomas A. Manteuffel (2001–2002)
• James (Mac) Hyman (2003–2004)
• Martin Golubitsky (2005–2006)
• Cleve Moler (2007–2008)
• Doug Arnold (2009–2010)
• L. N. Trefethen (2011–2012)
• Irene Fonseca (2013–2014)
• Pamela Cook (2015–2016)
• Nicholas J. Higham (2017–2018)
• Lisa Fauci (2019–2020)
• Susanne Brenner (2021–2022)
• Sven Leyffer (2023–2024)
See also
• American Mathematical Society
• Japan Society for Industrial and Applied Mathematics
References
1. "SOCIETY FOR INDUSTRIAL AND APPLIED MATHEMATICS Form 990 2015". ProPublica. Retrieved 10 December 2017.
2. "SIAM: Membership". Retrieved 2017-12-10.
3. Higham, Nicholas J. (2015-09-15). The Princeton Companion to Applied Mathematics. Princeton University Press. ISBN 9781400874477.
4. Grier, David Alan (2006). "Irene Stegun, the "Handbook of Mathematical Functions", and the Lingering Influence of the New Deal". The American Mathematical Monthly. 113 (7): 585–597. doi:10.2307/27642002. JSTOR 27642002.
5. "Society for Industrial and Applied Mathematics". MacTutor History of Mathematics archive. 2005. Retrieved 2017-12-10.
6. "News". The American Statistician. 8 (5): 2–22. 1954. doi:10.1080/00031305.1954.10482760. JSTOR 2681543.
7. Leary, Warren E. (1991-07-10). "14 Scientific Groups Warn Senate About Money Drain of Space Lab". The New York Times. ISSN 0362-4331. Retrieved 2017-12-10.
Risen, Tom (2016-06-28). "Scientists Warn Congress Not to Ignore Climate Change". U.S. News & World Report. Retrieved 2017-12-10.
Henriques, Martha (2017-02-01). "US scientists protest Trump's 'travel ban' in open letter signed by 150 universities and societies". International Business Times UK. Retrieved 2017-12-10.
Thibodeau, Patrick. "Computer scientists say meme research doesn't threaten free speech". Computerworld. Retrieved 2017-12-10.
8. Pitcher, Everett (1988-12-31). A history of the second fifty years, American Mathematical Society 1939-88. American Mathematical Society. ISBN 9780821896761.
9. Scientific and Technical Societies of the United States. National Academy of Sciences. 1968.
10. "Harvard University Chapter of SIAM". Harvard University Chapter of SIAM. Retrieved 2017-12-10.
11. "MIT SIAM". web.mit.edu. Retrieved 2017-12-10.
12. Pain, Elisabeth (2010-10-01). "Expand Your Professional-Skills Training". Science. Retrieved 2017-12-10.
13. "The EPFL Chapter of the Society for Industrial and Applied Mathematics". The EPFL Chapter of the Society for Industrial and Applied Mathematics. Retrieved 2017-12-10.
14. "Student Chapters". Society for Industrial and Applied Mathematics. Retrieved 30 August 2017.
15. S.P Keeler; T.A. Grandine (2013). "Getting Math off the Ground: Applied Math at Boeing". In Damlamian, Alain; Francisco, José Rodrigues; Sträßer, Rudolf (eds.). Educational Interfaces between Mathematics and Industry. Springer. p. 31.
16. "Regional Conferences". Pi Mu Epsilon. Retrieved 30 August 2017.
17. "Fellows Program". SIAM. Retrieved 2012-12-04.
18. "Activity Groups". Society for Industrial and Applied Mathematics. Retrieved 25 April 2017.
19. Crowley, James; Cook, Pam. "A Closer Look at SIAM Activity Groups". SIAM News. Retrieved 25 April 2017.
20. "Journals". SIAM. Retrieved 2018-07-08.
21. Armstrong, Michelle (2013-01-01). "SIAM eBooks". The Charleston Advisor. 14 (3): 47–49. doi:10.5260/chara.14.3.47.
22. Hakikur, Rahman (2008-07-31). Data Mining Applications for Empowering Knowledge Societies. IGI Global. pp. xii. ISBN 9781599046594.
23. Chartrand, Gary; Zhang, Ping (2013-05-20). A First Course in Graph Theory. Courier Corporation. p. 381. ISBN 9780486297309.
24. Winkler, Peter, How (and Why!) to Write a SODA Paper. Distributed by Howard Karloff with the call for papers for SODA 1998.
25. "Prizes, Awards, Lectures and Fellows". SIAM. Retrieved 2012-12-04.
26. "Germund Dahlquist Prize". SIAM. Retrieved 2012-12-04.
27. "Ralph E. Kleinman Prize". SIAM. Retrieved 2012-12-04.
28. "J.D. Crawford Prize (SIAG/Dynamical Systems)". SIAM. Retrieved 2012-12-04.
29. "Jurgen Moser Lecture (SIAG/Dynamical Systems)". SIAM. Retrieved 2013-09-28.
30. "The Richard C. DiPrima Prize". SIAM. Retrieved 2012-12-04.
31. "George Pólya Prize". SIAM. Retrieved 2012-12-04.
32. "W. T. and Idalia Reid Prize in Mathematics". SIAM. Retrieved 2012-12-04.
33. "Theodore von Kármán Prize". SIAM. Retrieved 2012-12-04.
34. "James H. Wilkinson Prize in Numerical Analysis and Scientific Computing". SIAM. Retrieved 2012-12-04.
35. "The John von Neumann Lecture". Society for Industrial and Applied Mathematics. Retrieved 25 April 2017.
36. Gordon, Jane (2006-04-23). "That Was Easy: Social Security Problem Solved". The New York Times. ISSN 0362-4331. Retrieved 2017-12-10.
37. Nicosia, Mareesa (2017-05-14). "The Habits of America's Top Math Students: Survey Shines Light on Study Groups, Sleep, Enthusiasm". The 74. Retrieved 2017-12-10.
38. Persinger, Ryanne (December 9, 2017). "U.S. Education Department looks into discrimination claims". Philadelphia Tribune. Retrieved 10 December 2017.
39. Knapp, Alex (July 17, 2017). "Moody's Foundation Pulls Sponsorship Of High School Math Competition". Forbes. Retrieved 10 December 2017.
40. "Companies with a heart: In search of better corporate philanthropy". The Economist. 2008-02-26. Retrieved 2017-12-10.
41. "Microsoft Word - newbylaws2.doc" (PDF). Retrieved 2017-12-10.
42. "Presidents of". SIAM. Retrieved 2017-12-10.
External links
• Official website
• M3Challenge.SIAM.org
Wikipedia
Gabon Mathematical Society
The Gabon Mathematical Society (in French: Société Mathématique du Gabon, SMG) is a learned society of mathematicians in Gabon, recognized by the International Mathematical Union as the national mathematical organization for its country.[1] It was founded in 2013 and its current president is Philibert Nang, of the École Normale Supérieure, Libreville.[2]
References
1. Gabon (associate member), International Mathematical Union, retrieved 2015-01-24.
2. Nang, Philibert (March 6, 2014). "APPLICATION OF GABON FOR IMU ASSOCIATE MEMBERSHIP" (PDF). International Mathematical Union. Retrieved January 24, 2014.
Socle (mathematics)
In mathematics, the term socle has several related meanings.
Socle of a group
In the context of group theory, the socle of a group G, denoted soc(G), is the subgroup generated by the minimal normal subgroups of G. It can happen that a group has no minimal non-trivial normal subgroup (that is, every non-trivial normal subgroup properly contains another such subgroup) and in that case the socle is defined to be the subgroup generated by the identity. The socle is a direct product of minimal normal subgroups.[1]
As an example, consider the cyclic group Z12 with generator u, which has two minimal normal subgroups, one generated by u4 (which gives a normal subgroup with 3 elements) and the other by u6 (which gives a normal subgroup with 2 elements). Thus the socle of Z12 is the group generated by u4 and u6, which is just the group generated by u2.
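This computation is easy to check by brute force. The following sketch (an illustration, not part of the source) uses additive notation, so the generator u corresponds to the residue 1 in Z/12Z and u^k to the residue k:

```python
def subgroup_generated(gens, n):
    """Subgroup of the additive group Z/nZ generated by the given residues."""
    elems = {0}
    while True:
        new = {(x + g) % n for x in elems for g in gens} - elems
        if not new:
            return elems
        elems |= new

# Minimal normal subgroups of Z_12: <u^4> (order 3) and <u^6> (order 2).
assert len(subgroup_generated([4], 12)) == 3
assert len(subgroup_generated([6], 12)) == 2

# Their product, the socle, equals <u^2>, of order 6.
socle = subgroup_generated([4, 6], 12)
assert socle == subgroup_generated([2], 12)
```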
The socle is a characteristic subgroup, and hence a normal subgroup. It is not necessarily transitively normal, however.
If a group G is a finite solvable group, then the socle can be expressed as a product of elementary abelian p-groups. Thus, in this case, it is just a product of copies of Z/pZ for various p, where the same p may occur multiple times in the product.
Socle of a module
In the context of module theory and ring theory the socle of a module M over a ring R is defined to be the sum of the minimal nonzero submodules of M. It can be considered as a dual notion to that of the radical of a module. In set notation,
$\mathrm {soc} (M)=\sum _{N{\text{ is a simple submodule of }}M}N.$
Equivalently,
$\mathrm {soc} (M)=\bigcap _{E{\text{ is an essential submodule of }}M}E.$
The socle of a ring R can refer to one of two sets in the ring. Considering R as a right R-module, $\mathrm {soc} (R_{R})$ is defined, and considering R as a left R-module, $\mathrm {soc} ({}_{R}R)$ is defined. Both of these socles are ring ideals, and it is known they are not necessarily equal.
• If M is an Artinian module, soc(M) is itself an essential submodule of M.
• A module is semisimple if and only if soc(M) = M. Rings for which soc(M) = M for all M are precisely semisimple rings.
• soc(soc(M)) = soc(M).
• M is a finitely cogenerated module if and only if soc(M) is finitely generated and soc(M) is an essential submodule of M.
• Since the sum of semisimple modules is semisimple, the socle of a module could also be defined as the unique maximal semisimple submodule.
• From the definition of rad(R), it is easy to see that rad(R) annihilates soc(R). If R is a finite-dimensional unital algebra and M a finitely generated R-module then the socle consists precisely of the elements annihilated by the Jacobson radical of R.[2]
Socle of a Lie algebra
In the context of Lie algebras, a socle of a symmetric Lie algebra is the eigenspace of its structural automorphism that corresponds to the eigenvalue −1. (A symmetric Lie algebra decomposes into the direct sum of its socle and cosocle.)[3]
See also
• Injective hull
• Radical of a module
• Cosocle
References
1. Robinson 1996, p.87.
2. J. L. Alperin; Rowen B. Bell, Groups and Representations, 1995, ISBN 0-387-94526-1, p. 136
3. Mikhail Postnikov, Geometry VI: Riemannian Geometry, 2001, ISBN 3540411089,p. 98
• Alperin, J.L.; Bell, Rowen B. (1995). Groups and Representations. Springer-Verlag. p. 136. ISBN 0-387-94526-1.
• Anderson, Frank Wylie; Fuller, Kent R. (1992). Rings and Categories of Modules. Springer-Verlag. ISBN 978-0-387-97845-1.
• Robinson, Derek J. S. (1996), A course in the theory of groups, Graduate Texts in Mathematics, vol. 80 (2 ed.), New York: Springer-Verlag, pp. xviii+499, doi:10.1007/978-1-4419-8594-1, ISBN 0-387-94461-3, MR 1357169
Descartes' theorem
In geometry, Descartes' theorem states that for every four kissing, or mutually tangent, circles, the radii of the circles satisfy a certain quadratic equation. By solving this equation, one can construct a fourth circle tangent to three given, mutually tangent circles. The theorem is named after René Descartes, who stated it in 1643.
Frederick Soddy's 1936 poem The Kiss Precise summarizes the theorem in terms of the bends (inverse radii) of the four circles:
The sum of the squares of all four bends
Is half the square of their sum[1]
Special cases of the theorem apply when one or two of the circles is replaced by a straight line (with zero bend) or when the bends are integers or square numbers. A version of the theorem using complex numbers allows the centers of the circles, and not just their radii, to be calculated. With an appropriate definition of curvature, the theorem also applies in spherical geometry and hyperbolic geometry. In higher dimensions, an analogous quadratic equation applies to systems of pairwise tangent spheres or hyperspheres.
History
Geometrical problems involving tangent circles have been pondered for millennia. In ancient Greece of the third century BC, Apollonius of Perga devoted an entire book to the topic, Ἐπαφαί [Tangencies]. It has been lost, and is known largely through a description of its contents by Pappus of Alexandria and through fragmentary references to it in medieval Islamic mathematics.[2] However, Greek geometry was largely focused on straightedge and compass construction. For instance, the problem of Apollonius, closely related to Descartes' theorem, asks for the construction of a circle tangent to three given circles which need not themselves be tangent.[3] Instead, Descartes' theorem is formulated using algebraic relations between numbers describing geometric forms. This is characteristic of analytic geometry, a field pioneered by René Descartes and Pierre de Fermat in the first half of the 17th century.[4]
Descartes discussed the tangent circle problem briefly in 1643, in two letters to Princess Elisabeth of the Palatinate.[5] Descartes initially posed to the princess the problem of Apollonius. After Elisabeth's partial results revealed that solving the full problem analytically would be too tedious, he simplified the problem to the case in which the three given circles are mutually tangent, and in solving this simplified problem he came up with the equation describing the relation between the radii, or curvatures, of four pairwise tangent circles. This result became known as Descartes' theorem.[6][7] Unfortunately, the reasoning through which Descartes found this relation has been lost.[8]
Japanese mathematics frequently concerned problems involving circles and their tangencies,[9] and Japanese mathematician Yamaji Nushizumi stated a form of Descartes’ circle theorem in 1751. Like Descartes, he expressed it as a polynomial equation on the radii rather than their curvatures.[10][11] The special case of this theorem for one straight line and three circles was recorded on a Japanese sangaku tablet from 1824.[12]
Descartes' theorem was rediscovered in 1826 by Jakob Steiner,[13] in 1842 by Philip Beecroft,[14] and in 1936 by Frederick Soddy. Soddy chose to format his version of the theorem as a poem, The Kiss Precise, and published it in Nature. The kissing circles in this problem are sometimes known as Soddy circles. Soddy also extended the theorem to spheres,[1] and in another poem described the chain of six spheres each tangent to its neighbors and to three given mutually tangent spheres, a configuration now called Soddy's hexlet.[15] Thorold Gosset extended the theorem and the poem to arbitrary dimensions.[16] The generalization is sometimes called the Soddy–Gosset theorem,[17] although both the hexlet and the three-dimensional version were known earlier, in sangaku and in the 1886 work of Robert Lachlan.[12][18][19]
A problem involving Descartes' theorem, asking for the height of a circle in a Pappus chain, was one of many "killer" problems used in oral examinations in the Soviet Union to keep Jews out of the Moscow State University mathematics program.[20]
Multiple proofs of the theorem have been published. Steiner's proof uses Pappus chains and Viviani's theorem. Proofs by Philip Beecroft and by H. S. M. Coxeter involve four more circles, passing through triples of tangencies of the original three circles; Coxeter also provided a proof using inversive geometry. Additional proofs involve arguments based on symmetry, calculations in exterior algebra, or algebraic manipulation of Heron's formula.[21][22]
Statement
Descartes' theorem is most easily stated in terms of the circles' curvatures.[23] The signed curvature (or bend) of a circle is defined as $k=\pm 1/r$, where $r$ is its radius. The larger a circle, the smaller is the magnitude of its curvature, and vice versa. The sign in $k=\pm 1/r$ (represented by the $\pm $ symbol) is positive for a circle that is externally tangent to the other circles, like the three black circles in the image. For an internally tangent circle like the large red circle, that circumscribes the other circles, the sign is negative. If a straight line is considered a degenerate circle with zero curvature (and thus infinite radius), Descartes' theorem also applies to a line and three circles that are all three mutually tangent.[1]
For four circles that are tangent to each other at six distinct points, with curvatures $k_{i}$ for $i=1,\dots ,4$, Descartes' theorem says:
$(k_{1}+k_{2}+k_{3}+k_{4})^{2}=2\,(k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2}).$
$(1)$
If one of the four curvatures is considered to be a variable, and the rest to be constants, this is a quadratic equation. To find the radius of a fourth circle tangent to three given kissing circles, the quadratic equation can be solved as[13][24]
$k_{4}=k_{1}+k_{2}+k_{3}\pm 2{\sqrt {k_{1}k_{2}+k_{2}k_{3}+k_{3}k_{1}}}.$
$(2)$
The $\pm $ symbol indicates that in general this equation has two solutions, so any triple of mutually tangent circles has two possible fourth tangent circles (one of which may degenerate to a straight line). Problem-specific criteria may favor one of these two solutions over the other in any given problem.[21]
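As a numerical illustration (not part of the source article), equation (2) can be applied directly; for three mutually tangent unit circles it yields the two Soddy circles:

```python
from math import sqrt

def fourth_curvatures(k1, k2, k3):
    """Both solutions of equation (2) for the curvature of a fourth tangent circle."""
    s = k1 + k2 + k3
    root = 2 * sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + root, s - root

# Three mutually tangent unit circles (curvature 1 each):
inner, outer = fourth_curvatures(1, 1, 1)
# inner = 3 + 2*sqrt(3): the small circle nestled between the three;
# outer = 3 - 2*sqrt(3) < 0: the large circle enclosing them (negative bend).

# Both solutions satisfy equation (1):
for k4 in (inner, outer):
    assert abs((1 + 1 + 1 + k4) ** 2 - 2 * (1 + 1 + 1 + k4 ** 2)) < 1e-9
```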
The theorem does not apply to systems of circles with more than two circles tangent to each other at the same point. It requires that the points of tangency be distinct.[8] When more than two circles are tangent at a single point, there can be infinitely many such circles, with arbitrary curvatures; see pencil of circles.[25]
Locating the circle centers
To determine a circle completely, not only its radius (or curvature), but also its center must be known. The relevant equation is expressed most clearly if the Cartesian coordinates $(x,y)$ are interpreted as a complex number $z=x+iy$. The equation then looks similar to Descartes' theorem and is therefore called the complex Descartes theorem. Given four circles with curvatures $k_{i}$ and centers $z_{i}$ for $i\in \{1,2,3,4\}$, the following equality holds in addition to equation (1):
$(k_{1}z_{1}+k_{2}z_{2}+k_{3}z_{3}+k_{4}z_{4})^{2}=2\,(k_{1}^{2}z_{1}^{2}+k_{2}^{2}z_{2}^{2}+k_{3}^{2}z_{3}^{2}+k_{4}^{2}z_{4}^{2}).$
$(3)$
Once $k_{4}$ has been found using equation (2), one may proceed to calculate $z_{4}$ by solving equation (3) as a quadratic equation, leading to a form similar to equation (2):
$z_{4}={\frac {z_{1}k_{1}+z_{2}k_{2}+z_{3}k_{3}\pm 2{\sqrt {k_{1}k_{2}z_{1}z_{2}+k_{2}k_{3}z_{2}z_{3}+k_{1}k_{3}z_{1}z_{3}}}}{k_{4}}}.$
Again, in general there are two solutions for $z_{4}$ corresponding to the two solutions for $k_{4}$. The plus/minus sign in the above formula for $z_{4}$ does not necessarily correspond to the plus/minus sign in the formula for $k_{4}$.[17][26][27]
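A minimal numeric sketch (an illustration, not from the source): for three unit circles centered at the vertices of an equilateral triangle of side 2, the inner Soddy circle found from equations (2) and (3) sits at the triangle's centroid. Python's `cmath.sqrt` handles the complex square root:

```python
from cmath import sqrt

def fourth_centers(k1, z1, k2, z2, k3, z3, k4):
    """Both sign choices for z4 in the complex Descartes theorem (equation 3)."""
    s = k1 * z1 + k2 * z2 + k3 * z3
    root = 2 * sqrt(k1 * k2 * z1 * z2 + k2 * k3 * z2 * z3 + k1 * k3 * z1 * z3)
    return (s + root) / k4, (s - root) / k4

# Three unit circles centered at 0, 2, and 1 + sqrt(3)i, with inner Soddy
# curvature k4 = 3 + 2*sqrt(3) from equation (2):
k4 = 3 + 2 * 3 ** 0.5
z_plus, z_minus = fourth_centers(1, 0, 1, 2, 1, 1 + 3 ** 0.5 * 1j, k4)

# Here the plus sign gives the correct center, the triangle's centroid:
centroid = 1 + 1j / 3 ** 0.5
assert abs(z_plus - centroid) < 1e-9
# Sanity check: its distance to each unit-circle center is 1 + 1/k4.
for z in (0, 2, 1 + 3 ** 0.5 * 1j):
    assert abs(abs(z_plus - z) - (1 + 1 / k4)) < 1e-9
```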
Special cases
Three congruent circles
When three of the four circles are congruent, their centers form an equilateral triangle, as do their points of tangency. The two possibilities for a fourth circle tangent to all three are concentric, and equation (2) reduces to[28]
$k_{4}=(3\pm 2{\sqrt {3}})k_{1}.$
One or more straight lines
If one of the three circles is replaced by a straight line tangent to the remaining circles, then its curvature is zero and drops out of equation (1). For instance, if $k_{3}=0$, then equation (1) can be factorized as[29]
${\begin{aligned}&{\bigl (}{\sqrt {k_{1}}}+{\sqrt {k_{2}}}+{\sqrt {k_{4}}}{\bigr )}{\bigl (}{{\sqrt {k_{2}}}+{\sqrt {k_{4}}}-{\sqrt {k_{1}}}}{\bigr )}\\[3mu]&\quad {}\cdot {\bigl (}{\sqrt {k_{1}}}+{\sqrt {k_{4}}}-{\sqrt {k_{2}}}{\bigr )}{\bigl (}{\sqrt {k_{1}}}+{\sqrt {k_{2}}}-{\sqrt {k_{4}}}{\bigr )}=0,\end{aligned}}$
and equation (2) simplifies to[30]
$k_{4}=k_{1}+k_{2}\pm 2{\sqrt {k_{1}k_{2}}}.$
Taking the square root of both sides leads to another alternative formulation of this case (with $k_{1}\geq k_{2}$),
${\sqrt {k_{4}}}={\sqrt {k_{1}}}\pm {\sqrt {k_{2}}},$
which has been described as "a sort of demented version of the Pythagorean theorem".[23]
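For instance (an illustrative check, not from the source), circles of curvature 1 and 4 tangent to each other and to a common line give fourth curvatures 9 and 1, and indeed $\sqrt{9}=\sqrt{1}+\sqrt{4}$:

```python
from math import sqrt, isclose

def fourth_with_line(k1, k2):
    """Both solutions of k4 = k1 + k2 ± 2*sqrt(k1*k2), the k3 = 0 case."""
    return k1 + k2 + 2 * sqrt(k1 * k2), k1 + k2 - 2 * sqrt(k1 * k2)

big, small = fourth_with_line(1, 4)
assert isclose(big, 9) and isclose(small, 1)
# The "demented Pythagorean theorem": sqrt(k4) = sqrt(k1) ± sqrt(k2).
assert isclose(sqrt(big), sqrt(1) + sqrt(4))
assert isclose(sqrt(small), sqrt(4) - sqrt(1))
```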
If two circles are replaced by lines, the tangency between the two replaced circles becomes a parallelism between their two replacement lines. In this case, with $k_{2}=k_{3}=0$, equation (2) is reduced to the trivial
$\displaystyle k_{4}=k_{1}.$
This corresponds to the observation that, for all four curves to remain mutually tangent, the other two circles must be congruent.[17][24]
Integer curvatures
When four tangent circles described by equation (2) all have integer curvatures, the alternative fourth circle described by the second solution to the equation must also have an integer curvature. This is because both solutions differ from an integer by the square root of an integer, and so either solution can only be an integer if this square root, and hence the other solution, is also an integer. Every four integers that satisfy the equation in Descartes' theorem form the curvatures of four tangent circles.[31] Integer quadruples of this type are also closely related to Heronian triangles, triangles with integer sides and area.[32]
Starting with any four mutually tangent circles, and repeatedly replacing one of the four with its alternative solution (Vieta jumping), in all possible ways, leads to a system of infinitely many tangent circles called an Apollonian gasket. When the initial four circles have integer curvatures, so does each replacement, and therefore all of the circles in the gasket have integer curvatures. Any four tangent circles with integer curvatures belong to exactly one such gasket, uniquely described by its root quadruple: the four largest circles, which have the four smallest curvatures. This quadruple can be found, starting from any other quadruple from the same gasket, by repeatedly replacing the smallest circle by a larger one that solves the same Descartes equation, until no such reduction is possible.[31]
A root quadruple is said to be primitive if it has no nontrivial common divisor. Every primitive root quadruple can be found from a factorization of a sum of two squares, $n^{2}+m^{2}=de$, as the quadruple $(-n,\,d+n,\,e+n,\,d+e+n-2m)$. To be primitive, it must satisfy the additional conditions $\gcd(n,d,e)=1$, and $-n\leq 0\leq 2m\leq d\leq e$. Factorizations of sums of two squares can be obtained using the sum of two squares theorem. Any other integer Apollonian gasket can be formed by multiplying a primitive root quadruple by an arbitrary integer, and any quadruple in one of these gaskets (that is, any integer solution to the Descartes equation) can be formed by reversing the replacement process used to find the root quadruple. For instance, the gasket with root quadruple $(-10,18,23,27)$, shown in the figure, is generated in this way from the factorized sum of two squares $10^{2}+2^{2}=8\cdot 13$.[31]
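The construction can be checked numerically (a sketch, not from the source). Applied to $10^{2}+2^{2}=8\cdot 13$ it reproduces the root quadruple $(-10,18,23,27)$, and the replacement step $k\mapsto 2(\text{sum of the others})-k$ stays within integer solutions:

```python
def root_quadruple(n, m, d, e):
    """Root quadruple from a factorization n*n + m*m = d*e."""
    assert n * n + m * m == d * e
    return [-n, d + n, e + n, d + e + n - 2 * m]

def satisfies_descartes(q):
    return sum(q) ** 2 == 2 * sum(k * k for k in q)

def replace(q, i):
    """Vieta jumping: swap q[i] for the other root of the Descartes quadratic."""
    q = list(q)
    q[i] = 2 * (sum(q) - q[i]) - q[i]
    return q

q = root_quadruple(10, 2, 8, 13)
assert q == [-10, 18, 23, 27] and satisfies_descartes(q)
# Replacing the largest curvature grows the gasket; the result is again integral:
assert replace(q, 3) == [-10, 18, 23, 35]
assert satisfies_descartes(replace(q, 3))
```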
Ford circles
Main article: Ford circle
The special cases of one straight line and integer curvatures combine in the Ford circles. These are an infinite family of circles tangent to the $x$-axis of the Cartesian coordinate system at its rational points. Each fraction $p/q$ (in lowest terms) has a circle tangent to the line at the point $(p/q,0)$ with curvature $2q^{2}$. Three of these curvatures, together with the zero curvature of the axis, meet the conditions of Descartes' theorem whenever the denominators of two of the corresponding fractions sum to the denominator of the third. The two Ford circles for fractions $p/q$ and $r/s$ (both in lowest terms) are tangent when $|ps-qr|=1$. When they are tangent, they form a quadruple of tangent circles with the $x$-axis and with the circle for their mediant $(p+r)/(q+s)$.[33]
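As a check (illustrative, not from the source), the Ford circles at 1/2 and 1/3 are tangent, and together with the x-axis and the circle at their mediant 2/5 they satisfy Descartes' theorem:

```python
from fractions import Fraction

def curvature(f):
    """Curvature of the Ford circle at the reduced fraction f."""
    return 2 * f.denominator ** 2

a, b = Fraction(1, 2), Fraction(1, 3)
# Tangency criterion |p*s - q*r| = 1:
assert abs(a.numerator * b.denominator - a.denominator * b.numerator) == 1

mediant = Fraction(a.numerator + b.numerator, a.denominator + b.denominator)
ks = [0, curvature(a), curvature(b), curvature(mediant)]  # the line has bend 0
assert ks == [0, 8, 18, 50]
assert sum(ks) ** 2 == 2 * sum(k * k for k in ks)   # equation (1)
```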
The Ford circles belong to a special Apollonian gasket with root quadruple $(0,0,1,1)$, bounded between two parallel lines, which may be taken as the $x$-axis and the line $y=1$. This is the only Apollonian gasket containing a straight line, and not bounded within a negative-curvature circle. The Ford circles are the circles in this gasket that are tangent to the $x$-axis.[31]
Geometric progression
Main article: Coxeter's loxodromic sequence of tangent circles
When the four radii of the circles in Descartes' theorem are assumed to be in a geometric progression with ratio $\rho $, the curvatures are also in the same progression (in reverse). Plugging this ratio into the theorem gives the equation
$2(1+\rho ^{2}+\rho ^{4}+\rho ^{6})=(1+\rho +\rho ^{2}+\rho ^{3})^{2},$
which has only one real solution greater than one, the ratio
$\rho =\varphi +{\sqrt {\varphi }}\approx 2.89005\ ,$
where $\varphi $ is the golden ratio. If the same progression is continued in both directions, each consecutive four numbers describe circles obeying Descartes' theorem. The resulting double-ended geometric progression of circles can be arranged into a single spiral pattern of tangent circles, called Coxeter's loxodromic sequence of tangent circles. It was first described, together with analogous constructions in higher dimensions, by H. S. M. Coxeter in 1968.[34][35]
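The ratio is easy to verify numerically (an illustrative check, not from the source); four consecutive curvatures in the progression satisfy equation (1):

```python
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2          # golden ratio
rho = phi + sqrt(phi)            # ≈ 2.89005
assert isclose(rho, 2.89005, abs_tol=1e-5)

# Four consecutive curvatures 1, rho, rho^2, rho^3 satisfy Descartes' theorem:
ks = [rho ** i for i in range(4)]
assert isclose(sum(ks) ** 2, 2 * sum(k * k for k in ks))
```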
Soddy circles of a triangle
Main article: Soddy circles of a triangle
Any triangle in the plane has three externally tangent circles centered at its vertices. Letting $A,B,C$ be the three points, $a,b,c$ be the lengths of the opposite sides, and $ s={\tfrac {1}{2}}(a+b+c)$ be the semiperimeter, these three circles have radii $s-a,s-b,s-c$. By Descartes' theorem, two more circles, sometimes called Soddy circles, are tangent to these three circles. They are separated by the incircle, one interior to it and one exterior.[36][37][38] Descartes' theorem can be used to show that the inner Soddy circle's curvature is $ (4R+r+2s)/\Delta $, where $\Delta $ is the triangle's area, $R$ is its circumradius, and $r$ is its inradius. The outer Soddy circle has curvature $ (4R+r-2s)/\Delta $.[39] The inner curvature is always positive, but the outer curvature can be positive, negative, or zero. Triangles whose outer circle degenerates to a straight line with curvature zero have been called "Soddyian triangles".[39]
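These curvature formulas can be cross-checked against equation (2); a sketch for the 3–4–5 right triangle (an illustration, not from the source):

```python
from math import sqrt, isclose

a, b, c = 3.0, 4.0, 5.0
s = (a + b + c) / 2                            # semiperimeter = 6
area = sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula: 6
R = a * b * c / (4 * area)                     # circumradius = 2.5
r = area / s                                   # inradius = 1

# Vertex circles have radii s-a, s-b, s-c = 3, 2, 1:
k1, k2, k3 = 1 / (s - a), 1 / (s - b), 1 / (s - c)
root = 2 * sqrt(k1 * k2 + k2 * k3 + k3 * k1)
inner = k1 + k2 + k3 + root                    # 23/6
outer = k1 + k2 + k3 - root                    # -1/6

assert isclose(inner, (4 * R + r + 2 * s) / area)
assert isclose(outer, (4 * R + r - 2 * s) / area)
```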
One of the many proofs of Descartes' theorem is based on this connection to triangle geometry and on Heron's formula for the area of a triangle as a function of its side lengths. If three circles are externally tangent, with radii $r_{1},r_{2},r_{3},$ then their centers $P_{1},P_{2},P_{3}$ form the vertices of a triangle with side lengths $r_{1}+r_{2},$ $r_{1}+r_{3},$ and $r_{2}+r_{3},$ and semiperimeter $r_{1}+r_{2}+r_{3}.$ By Heron's formula, this triangle $\triangle P_{1}P_{2}P_{3}$ has area
${\sqrt {r_{1}r_{2}r_{3}(r_{1}+r_{2}+r_{3})}}.$
Now consider the inner Soddy circle with radius $r_{4},$ centered at point $P_{4}$ inside the triangle. Triangle $\triangle P_{1}P_{2}P_{3}$ can be broken into three smaller triangles $\triangle P_{1}P_{2}P_{4},$ $\triangle P_{4}P_{2}P_{3},$ and $\triangle P_{1}P_{4}P_{3},$ whose areas can be obtained by substituting $r_{4}$ for one of the other radii in the area formula above. The area of the first triangle equals the sum of these three areas:
${\begin{aligned}{\sqrt {r_{1}r_{2}r_{3}(r_{1}+r_{2}+r_{3})}}={}&{\sqrt {r_{1}r_{2}r_{4}(r_{1}+r_{2}+r_{4})}}+{}\\&{\sqrt {r_{1}r_{3}r_{4}(r_{1}+r_{3}+r_{4})}}+{}\\&{\sqrt {r_{2}r_{3}r_{4}(r_{2}+r_{3}+r_{4})}}.\end{aligned}}$
Careful algebraic manipulation shows that this formula is equivalent to equation (1), Descartes' theorem.[21]
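The area identity itself can be tested numerically (an illustrative sketch, not from the source): for three unit circles the inner Soddy radius is $1/(3+2{\sqrt {3}})$, and the three smaller Heron areas sum to the large one:

```python
from math import sqrt, isclose

def heron_area(r1, r2, r3):
    """Area of the triangle of centers of three externally tangent circles."""
    return sqrt(r1 * r2 * r3 * (r1 + r2 + r3))

r1 = r2 = r3 = 1.0
r4 = 1 / (3 + 2 * sqrt(3))   # inner Soddy radius, from equation (2)

total = heron_area(r1, r2, r3)
parts = (heron_area(r1, r2, r4)
         + heron_area(r1, r3, r4)
         + heron_area(r2, r3, r4))
assert isclose(total, parts)
```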
This analysis covers all cases in which four circles are externally tangent; one is always the inner Soddy circle of the other three. The cases in which one of the circles is internally tangent to the other three and forms their outer Soddy circle are similar. Again the four centers $P_{1},P_{2},P_{3},P_{4}$ form four triangles, but (letting $P_{4}$ be the center of the outer Soddy circle) the triangle sides incident to $P_{4}$ have lengths that are differences of radii, $r_{4}-r_{1},$ $r_{4}-r_{2},$ and $r_{4}-r_{3},$ rather than sums. $P_{4}$ may lie inside or outside the triangle formed by the other three centers; when it is inside, this triangle's area equals the sum of the other three triangle areas, as above. When it is outside, the quadrilateral formed by the four centers can be subdivided by a diagonal into two triangles, in two different ways, giving an equality between the sum of two triangle areas and the sum of the other two triangle areas. In every case, the area equation reduces to Descartes' theorem. This method does not apply directly to the cases in which one of the circles degenerates to a line, but those can be handled as a limiting case of circles.[21]
Generalizations
Arbitrary four-circle configurations
Descartes' theorem can be expressed as a matrix equation and then generalized to other configurations of four oriented circles by changing the matrix. Let $\mathbf {k} $ be a column vector of the four circle curvatures and let $\mathbf {Q} $ be a symmetric matrix whose coefficients $q_{i,j}$ represent the relative orientation between the ith and jth oriented circles at their intersection point:
$\mathbf {Q} ={\begin{bmatrix}{\phantom {-}}1&-1&-1&-1\\-1&{\phantom {-}}1&-1&-1\\-1&-1&{\phantom {-}}1&-1\\-1&-1&-1&{\phantom {-}}1\\\end{bmatrix}},\qquad \mathbf {Q} ^{-1}={\frac {1}{4}}{\begin{bmatrix}{\phantom {-}}1&-1&-1&-1\\-1&{\phantom {-}}1&-1&-1\\-1&-1&{\phantom {-}}1&-1\\-1&-1&-1&{\phantom {-}}1\\\end{bmatrix}}.$
Then equation (1) can be rewritten as the matrix equation[17][40]
$\mathbf {k} ^{\mathsf {T}}\mathbf {Q} ^{-1}\mathbf {k} =0.$
As a generalization of Descartes' theorem, a modified symmetric matrix $\mathbf {Q} $ can represent any desired configuration of four circles by replacing each coefficient with the inclination $q_{i,j}$ between two circles, defined as
$q_{i,j}={\frac {r_{i}^{2}+r_{j}^{2}-d_{i,j}^{2}}{2r_{i}r_{j}}},$
where $r_{i},r_{j}$ are the respective radii of the circles, and $d_{i,j}$ is the Euclidean distance between their centers.[41][42][43] When the circles intersect, $q_{i,j}=\cos(\theta _{i,j})$, the cosine of the intersection angle between the circles. The inclination, sometimes called inversive distance, is $1$ when the circles are tangent and oriented the same way at their point of tangency, $-1$ when the two circles are tangent and oriented oppositely at the point of tangency, $0$ for orthogonal circles, outside the interval $[-1,1]$ for non-intersecting circles, and $\infty $ in the limit as one circle degenerates to a point.[40][35]
The equation $\mathbf {k} ^{\mathsf {T}}\mathbf {Q} ^{-1}\mathbf {k} =0$ is satisfied for any arbitrary configuration of four circles in the plane, provided $\mathbf {Q} $ is the appropriate matrix of pairwise inclinations.[40]
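A numerical check (an illustration, not from the source): building $\mathbf {Q} $ from pairwise radii and center distances for four mutually tangent circles recovers the ±1 matrix above, whose inverse is $\mathbf {Q} /4$, and the quadratic form vanishes:

```python
from math import sqrt, isclose

def inclination(ri, rj, dij):
    return (ri ** 2 + rj ** 2 - dij ** 2) / (2 * ri * rj)

# Three unit circles at the vertices of an equilateral triangle of side 2,
# plus the inner Soddy circle (curvature 3 + 2*sqrt(3)) at the centroid:
k4 = 3 + 2 * sqrt(3)
centers = [(0, 0), (2, 0), (1, sqrt(3)), (1, sqrt(3) / 3)]
radii = [1, 1, 1, 1 / k4]
ks = [1, 1, 1, k4]

def dist(p, q):
    return sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

Q = [[inclination(radii[i], radii[j], dist(centers[i], centers[j]))
      for j in range(4)] for i in range(4)]
# Mutual tangency: diagonal 1, off-diagonal -1, so Q^{-1} = Q/4 here.
quad = sum(ks[i] * (Q[i][j] / 4) * ks[j] for i in range(4) for j in range(4))
assert isclose(quad, 0, abs_tol=1e-9)
```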
Spherical and hyperbolic geometry
Descartes' theorem generalizes to mutually tangent great or small circles in spherical geometry if the curvature of the $j$th circle is defined as $ k_{j}=\cot \rho _{j},$ the cotangent of the oriented intrinsic radius $\rho _{j}.$ Then:[42][17]
$(k_{1}+k_{2}+k_{3}+k_{4})^{2}=2(k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2})+4.$
Solving for one of the curvatures in terms of the other three,
$k_{4}=k_{1}+k_{2}+k_{3}\pm 2{\sqrt {k_{1}k_{2}+k_{2}k_{3}+k_{3}k_{1}-1}}.$
As a matrix equation,
$\mathbf {k} ^{\mathsf {T}}\mathbf {Q} ^{-1}\mathbf {k} =-1.$
The quantity $1/k_{j}=\tan \rho _{j}$ is the "stereographic diameter" of a small circle. This is the Euclidean length of the diameter in the stereographically projected plane when some point on the circle is projected to the origin. For a great circle, such a stereographic projection is a straight line through the origin, so $k_{j}=0$.[44]
Likewise, the theorem generalizes to mutually tangent circles in hyperbolic geometry if the curvature of the $j$th cycle is defined as $ k_{j}=\coth \rho _{j},$ the hyperbolic cotangent of the oriented intrinsic radius $\rho _{j}.$ Then:[17][42]
$(k_{1}+k_{2}+k_{3}+k_{4})^{2}=2(k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2})-4.$
Solving for one of the curvatures in terms of the other three,
$k_{4}=k_{1}+k_{2}+k_{3}\pm 2{\sqrt {k_{1}k_{2}+k_{2}k_{3}+k_{3}k_{1}+1}}.$
As a matrix equation,
$\mathbf {k} ^{\mathsf {T}}\mathbf {Q} ^{-1}\mathbf {k} =1.$
This formula also holds for mutually tangent configurations in hyperbolic geometry including hypercycles and horocycles, if $k_{j}$ is taken to be the reciprocal of the stereographic diameter of the cycle. This is the diameter under stereographic projection (the Poincaré disk model) when one endpoint of the diameter is projected to the origin.[45] Hypercycles do not have a well-defined center or intrinsic radius and horocycles have an ideal point for a center and infinite intrinsic radius, but $|k_{j}|>1$ for a hyperbolic circle, $|k_{j}|=1$ for a horocycle, $|k_{j}|<1$ for a hypercycle, and $k_{j}=0$ for a geodesic.[46]
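The three solved forms differ only in a constant under the radical; the following sketch (illustrative, not from the source) unifies them and checks the corresponding identity in each geometry:

```python
from math import sqrt, isclose

CONST = {"euclidean": 0, "spherical": -1, "hyperbolic": 1}

def fourth_curvature(k1, k2, k3, geometry, sign=1):
    """One solution for the fourth curvature; the constant under the radical
    is 0 (Euclidean), -1 (spherical), or +1 (hyperbolic)."""
    c = CONST[geometry]
    return k1 + k2 + k3 + sign * 2 * sqrt(k1 * k2 + k2 * k3 + k3 * k1 + c)

def identity_holds(ks, geometry):
    """(sum k)^2 = 2 * sum k^2 - 4c, matching the displayed equations."""
    c = CONST[geometry]
    return isclose(sum(ks) ** 2, 2 * sum(k * k for k in ks) - 4 * c)

for geometry in CONST:
    k4 = fourth_curvature(2, 2, 2, geometry)
    assert identity_holds([2, 2, 2, k4], geometry)
```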
Higher dimensions
In $n$-dimensional Euclidean space, the maximum number of mutually tangent hyperspheres is $n+2$. For example, in 3-dimensional space, five spheres can be mutually tangent. The curvatures of the hyperspheres satisfy
${\biggl (}\sum _{i=1}^{n+2}k_{i}{\biggr )}^{\!2}=n\,\sum _{i=1}^{n+2}k_{i}^{2}$
with the case $k_{i}=0$ corresponding to a flat hyperplane, generalizing the 2-dimensional version of the theorem.[17][42] Although there is no 3-dimensional analogue of the complex numbers, the relationship between the positions of the centers can be re-expressed as a matrix equation, which also generalizes to $n$ dimensions.[17]
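Solving the $n$-dimensional quadratic for one curvature gives $k=(S\pm {\sqrt {n(S^{2}-(n-1)T)}})/(n-1)$, where $S$ and $T$ are the sum and sum of squares of the other $n+1$ curvatures. A sketch (an illustration, not from the source):

```python
from math import sqrt, isclose

def last_curvatures(ks, n):
    """Both solutions for the remaining curvature of n+2 mutually tangent
    hyperspheres in n dimensions, given the other n+1 curvatures."""
    assert len(ks) == n + 1
    S = sum(ks)
    T = sum(k * k for k in ks)
    root = sqrt(n * (S * S - (n - 1) * T))
    return (S + root) / (n - 1), (S - root) / (n - 1)

# n = 2 recovers equation (2):
assert isclose(last_curvatures([1, 1, 1], 2)[0], 3 + 2 * sqrt(3))

# Four mutually tangent unit spheres in 3D admit a small inner sphere of
# curvature 2 + sqrt(6) and an enclosing sphere of curvature 2 - sqrt(6):
inner, outer = last_curvatures([1, 1, 1, 1], 3)
for k in (inner, outer):
    ks = [1, 1, 1, 1, k]
    assert isclose(sum(ks) ** 2, 3 * sum(x * x for x in ks))
```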
In three dimensions, suppose that three mutually tangent spheres are fixed, and a fourth sphere $S_{1}$ is given, tangent to the three fixed spheres. The three-dimensional version of Descartes' theorem can be applied to find a sphere $S_{2}$ tangent to $S_{1}$ and the fixed spheres, then applied again to find a new sphere $S_{3}$ tangent to $S_{2}$ and the fixed spheres, and so on. The result is a cyclic sequence of six spheres each tangent to its neighbors in the sequence and to the three fixed spheres, a configuration called Soddy's hexlet, after Soddy's discovery and publication of it in the form of another poem in 1936.[15][47]
Higher-dimensional configurations of mutually tangent hyperspheres in spherical or hyperbolic geometry, with curvatures defined as above, satisfy
${\biggl (}\sum _{i=1}^{n+2}k_{i}{\biggr )}^{\!2}=nC+n\,\sum _{i=1}^{n+2}k_{i}^{2},$
where $C=2$ in spherical geometry and $C=-2$ in hyperbolic geometry.[42][17]
See also
• Circle packing in a circle
• Euler's four-square identity
• Malfatti circles
References
1. Soddy, F. (June 1936), "The Kiss Precise", Nature, 137 (3477): 1021, Bibcode:1936Natur.137.1021S, doi:10.1038/1371021a0, S2CID 6012051
2. Hogendijk, Jan P. (1986), "Arabic traces of lost works of Apollonius", Archive for History of Exact Sciences, 35 (3): 187–253, doi:10.1007/BF00357307, JSTOR 41133783, MR 0851067
3. Court, Nathan Altshiller (October 1961), "The problem of Apollonius", The Mathematics Teacher, 54 (6): 444–452, doi:10.5951/MT.54.6.0444, JSTOR 27956431
4. Boyer, Carl B. (2004) [1956], "Chapter 5: Fermat and Descartes", History of Analytic Geometry, Dover Publications, pp. 74–102, ISBN 978-0486438320
5. Descartes, René (1901), Adam, Charles; Tannery, Paul (eds.), Oeuvres de Descartes (in French), vol. 4: Correspondance Juillet 1643 – Avril 1647, Paris: Léopold Cerf, "325. Descartes a Elisabeth", pp. 37–42; "328. Descartes a Elisabeth", pp. 45–50
Bos, Erik-Jan (2010), "Princess Elizabeth of Bohemia and Descartes' letters (1650–1665)", Historia Mathematica, 37 (3): 485–502, doi:10.1016/j.hm.2009.11.004
6. Shapiro, Lisa (2007), The Correspondence between Princess Elisabeth of Bohemia and René Descartes, The Other Voice in Early Modern Europe, University of Chicago Press, pp. 37–39, 73–77, ISBN 978-0-226-20444-4
7. Mackenzie, Dana (March–April 2023), "The princess and the philosopher", American Scientist, vol. 111, no. 2, pp. 80–84, ProQuest 2779946948
8. Coxeter, H. S. M. (January 1968), "The problem of Apollonius", The American Mathematical Monthly, 75 (1): 5–15, doi:10.1080/00029890.1968.11970941, JSTOR 2315097
9. Yanagihara, K. (1913), "On some geometrical propositions in Wasan, the Japanese native mathematics", Tohoku Mathematical Journal, 3: 87–95, JFM 44.0052.02
10. Michiwaki, Yoshimasa (2008), "Geometry in Japanese mathematics", in Selin, Helaine (ed.), Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures, Springer Netherlands, pp. 1018–1019, doi:10.1007/978-1-4020-4425-0_9133
11. Takinami, Susumu; Michiwaki, Yoshimasa (1984), "On the Descartes circle theorem" (PDF), Journal for History of Mathematics, Korean Society for History of Mathematics, 1 (1): 1–8
12. Rothman, Tony; Fugakawa, Hidetoshi (May 1998), "Japanese temple geometry", Scientific American, 278 (5): 84–91, Bibcode:1998SciAm.278e..84R, doi:10.1038/scientificamerican0598-84, JSTOR 26057787; see top illustration, p. 86. Another tablet from 1822 (center, p. 88) concerns Soddy's hexlet, a configuration of three-dimensional tangent spheres.
13. Steiner, Jakob (January 1826), "Fortsetzung der geometrischen Betrachtungen (Heft 2, S. 161)", Journal für die reine und angewandte Mathematik, 1826 (1), pp. 252–288, fig. 2–25 taf. III, doi:10.1515/crll.1826.1.252, S2CID 121590578
14. Beecroft, Philip (1842), "Properties of circles in mutual contact", The Lady's and Gentleman's Diary (139): 91–96
15. Soddy, Frederick (December 1936), "The hexlet", Nature, 138 (3501): 958, Bibcode:1936Natur.138..958S, doi:10.1038/138958a0, S2CID 28170211
16. "The Kiss Precise", Nature, 139 (3506): 62, January 1937, Bibcode:1937Natur.139Q..62., doi:10.1038/139062a0
17. Lagarias, Jeffrey C.; Mallows, Colin L.; Wilks, Allan R. (2002), "Beyond the Descartes circle theorem", The American Mathematical Monthly, 109 (4): 338–361, arXiv:math/0101066, doi:10.2307/2695498, JSTOR 2695498, MR 1903421
18. Hidetoshi, Fukagawa; Kazunori, Horibe (2014), "Sangaku – Japanese Mathematics and Art in the 18th, 19th and 20th Centuries", in Greenfield, Gary; Hart, George; Sarhangi, Reza (eds.), Bridges Seoul Conference Proceedings, Tessellations Publishing, pp. 111–118
19. Lachlan, R. (1886), "On Systems of Circles and Spheres", Philosophical Transactions of the Royal Society of London, 177: 481–625, JSTOR 109492; see "Spheres touching one another", pp. 585–587
20. Egenhoff, Jay (December 2014), "Math as a tool of anti-semitism", The Mathematics Enthusiast, University of Montana, Maureen and Mike Mansfield Library, 11 (3): 649–664, doi:10.54870/1551-3440.1320; see question 7, pp. 559–560
21. Levrie, Paul (2019), "A straightforward proof of Descartes's circle theorem", The Mathematical Intelligencer, 41 (3): 24–27, doi:10.1007/s00283-019-09883-x, hdl:10067/1621880151162165141, MR 3995314, S2CID 253818666
22. Pedoe, Daniel (1967), "On a theorem in geometry", The American Mathematical Monthly, 74 (6): 627–640, doi:10.2307/2314247, JSTOR 2314247, MR 0215169
23. Mackenzie, Dana (January–February 2010), "A tisket, a tasket, an Apollonian gasket", Computing Science, American Scientist, vol. 98, no. 1, pp. 10–14, JSTOR 27859441, All of these reciprocals look a little bit extravagant, so the formula is usually simplified by writing it in terms of the curvatures or the bends of the circles.
24. Wilker, J. B. (1969), "Four proofs of a generalization of the Descartes circle theorem", The American Mathematical Monthly, 76 (3): 278–282, doi:10.2307/2316373, JSTOR 2316373, MR 0246207
25. Glaeser, Georg; Stachel, Hellmuth; Odehnal, Boris (2016), "The parabolic pencil – a common line element", The Universe of Conics, Springer, p. 327, doi:10.1007/978-3-662-45450-3, ISBN 978-3-662-45449-7
26. Northshield, Sam (2014), "Complex Descartes circle theorem", The American Mathematical Monthly, 121 (10): 927–931, doi:10.4169/amer.math.monthly.121.10.927, hdl:1951/69912, JSTOR 10.4169/amer.math.monthly.121.10.927, MR 3295667, S2CID 16335704
27. Tupan, Alexandru (2022), "On the complex Descartes circle theorem", The American Mathematical Monthly, 129 (9): 876–879, doi:10.1080/00029890.2022.2104084, MR 4499753, S2CID 251417228
28. This is a special case of a formula for the radii of circles in a Steiner chain with concentric inner and outer circles, given by Sheydvasser, Arseniy (2023), "3.1 Steiner's porism and 3.6 Steiner's porism revisited", Linear Fractional Transformations, Springer International Publishing, pp. 75–81, 99–101, doi:10.1007/978-3-031-25002-6
29. Hajja, Mowaffaq (2009), "93.33 on a Morsel of Ross Honsberger", The Mathematical Gazette, 93 (527): 309–312, JSTOR 40378744
30. Dergiades, Nikolaos (2007), "The Soddy circles" (PDF), Forum Geometricorum, 7: 191–197, MR 2373402
31. Graham, Ronald L.; Lagarias, Jeffrey C.; Mallows, Colin L.; Wilks, Allan R.; Yan, Catherine H. (2003), "Apollonian circle packings: number theory", Journal of Number Theory, 100 (1): 1–45, arXiv:math/0009113, doi:10.1016/S0022-314X(03)00015-5, MR 1971245, S2CID 16607718
32. Bradley, Christopher J. (March 2003), "Heron triangles and touching circles", The Mathematical Gazette, 87 (508): 36–41, doi:10.1017/s0025557200172080, JSTOR 3620562
33. McGonagle, Annmarie; Northshield, Sam (2014), "A new parameterization of Ford circles", Pi Mu Epsilon Journal, 13 (10): 637–643, JSTOR 24345283, MR 3235834
34. Coxeter, H. S. M. (1968), "Loxodromic sequences of tangent spheres", Aequationes Mathematicae, 1 (1–2): 104–121, doi:10.1007/BF01817563, MR 0235456, S2CID 119897862
35. Weiss, Asia (1981), "On Coxeter's Loxodromic Sequences of Tangent Spheres", in Davis, Chandler; Grünbaum, Branko; Sherk, F.A. (eds.), The Geometric Vein: The Coxeter Festschrift, Springer, pp. 241–250, doi:10.1007/978-1-4612-5648-9_16
36. Lemoine, Émile (1891), "Sur les triangles orthologiques et sur divers sujets de la géométrie du triangle" [On orthologic triangles and on various subjects of triangle geometry], Compte rendu de la 19me session de l'association française pour l'avancement des sciences, pt. 2, Congrès de Limoges 1890 (in French), Paris: Secrétariat de l'association, pp. 111–146, especially §4 "Sur les intersections deux a deux des coniques qui ont pour foyers-deux sommets d'un triangle et passent par le troisième" [On the intersections in pairs of the conics which have as foci two vertices of a triangle and pass through the third], pp. 128–144
37. Veldkamp, G. R. (1985), "The Isoperimetric Point and the Point(s) of Equal Detour in a Triangle", The American Mathematical Monthly, 92 (8): 546–558, doi:10.1080/00029890.1985.11971677, JSTOR 2323159
38. Garcia, Ronaldo; Reznik, Dan; Moses, Peter; Gheorghe, Liliana (2022), "Triads of conics associated with a triangle", KoG, Croatian Society for Geometry and Graphics (26): 16–32, arXiv:2112.15232, doi:10.31896/k.26.2, S2CID 245634505
39. Jackson, Frank M. (2013), "Soddyian Triangles" (PDF), Forum Geometricorum, 13: 1–6
40. Kocik, Jerzy (2007), A theorem on circle configurations, arXiv:0706.0372
Kocik, Jerzy (2010), "Golden window" (PDF), Mathematics Magazine, 83 (5): 384–390, doi:10.4169/002557010X529815
Kocik, Jerzy (2019), Proof of Descartes circle formula and its generalization clarified, arXiv:1910.09174
41. Coolidge, Julian Lowell (1916), "X. The Oriented Circle", A Treatise on the Circle and the Sphere, Clarendon, pp. 351–407, also see p. 109, p. 408
42. Mauldon, J. G. (1962), "Sets of equally inclined spheres", Canadian Journal of Mathematics, 14: 509–516, doi:10.4153/CJM-1962-042-6
43. Rigby, J. F. (1981), "The geometry of cycles, and generalized Laguerre inversion", in Davis, Chandler; Grünbaum, Branko; Sherk, F.A. (eds.), The Geometric Vein: The Coxeter Festschrift, Springer, pp. 355–378, doi:10.1007/978-1-4612-5648-9_26
44. A definition of stereographic distance can be found in Li, Hongbo; Hestenes, David; Rockwood, Alyn (2001), "Spherical conformal geometry with geometric algebra" (PDF), Geometric Computing with Clifford Algebras, Springer, pp. 61–75, doi:10.1007/978-3-662-04621-0_3, ISBN 978-3-642-07442-4
45. This concept of distance was called the "pseudo-chordal distance" for the complex unit disk as a model for the hyperbolic plane by Carathéodory, Constantin (1954), "§§1.3.86–88 Chordal and Pseudo-chordal Distance", Theory of Functions of a Complex Variable, vol. I, translated by Steinhardt, Fritz, Chelsea, pp. 81–86, MR 0060009
46. Eriksson, Nicholas; Lagarias, Jeffrey C. (2007), "Apollonian Circle Packings: Number Theory II. Spherical and Hyperbolic Packings", The Ramanujan Journal, 14 (3): 437–469, arXiv:math/0403296, doi:10.1007/s11139-007-9052-6, S2CID 14024662
47. Barnes, John (2012), "Soddy's hexlet", Gems of Geometry (2nd ed.), Heidelberg: Springer, pp. 173–177, doi:10.1007/978-3-642-30964-9, ISBN 978-3-642-30963-2, MR 2963305
|
Wikipedia
|
Soddy circles of a triangle
In geometry, the Soddy circles of a triangle are two circles associated with any triangle in the plane. Their centers are the Soddy centers of the triangle. They are all named for Frederick Soddy, who rediscovered Descartes' theorem on the radii of mutually tangent quadruples of circles.
Any triangle has three externally tangent circles centered at its vertices. Two more circles, its Soddy circles, are tangent to the three circles centered at the vertices; their centers are called Soddy centers. The line through the Soddy centers is the Soddy line of the triangle. These circles are related to many other notable features of the triangle. They can be generalized to additional triples of tangent circles centered at the vertices in which one circle surrounds the other two.
Construction
Let $A,B,C$ be the three vertices of a triangle, let $a,b,c$ be the lengths of the opposite sides, and let $ s={\tfrac {1}{2}}(a+b+c)$ be the semiperimeter. Then the three mutually tangent circles centered at $A,B,C$ have radii $s-a,s-b,s-c$, respectively. By Descartes' theorem, two more circles, sometimes also called Soddy circles, are tangent to these three circles. The centers of these two tangent circles are the Soddy centers of the triangle.
Related features
Each of the three circles centered at the vertices crosses two sides of the triangle at right angles, at one of the three intouch points of the triangle, where its incircle is tangent to the side. The two circles tangent to these three circles are separated by the incircle, one interior to it and one exterior. The Soddy centers lie at the common intersections of three hyperbolas, each having two triangle vertices as foci and passing through the third vertex.[1][2][3]
The inner Soddy center is an equal detour point: the polyline connecting any two triangle vertices through the inner Soddy point is longer than the line segment connecting those vertices directly, by an amount that does not depend on which two vertices are chosen.[4] By Descartes' theorem, the inner Soddy circle's curvature is $ (4R+r+2s)/\Delta $, where $\Delta $ is the triangle's area, $R$ is its circumradius, and $r$ is its inradius. The outer Soddy circle has curvature $ (4R+r-2s)/\Delta $.[5] When this curvature is positive, the outer Soddy center is another equal detour point; otherwise the equal detour point is unique.[4] When the outer Soddy circle has negative curvature, its center is the isoperimetric point of the triangle: the three triangles formed by this center and two vertices of the starting triangle all have the same perimeter.[4] Triangles whose outer circle degenerates to a straight line with curvature zero have been called "Soddyian triangles".[5]
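As a concrete check of these curvature formulas (the example is mine, not from the cited sources), consider a 3–4–5 right triangle: the circles centered at its vertices have radii 3, 2, 1, and applying Descartes' theorem to their curvatures reproduces the closed forms $(4R+r\pm 2s)/\Delta$ quoted above.

```python
import math

# Numerical check (illustrative 3-4-5 right triangle) that the Descartes-
# theorem curvatures of the two Soddy circles match the closed forms
# (4R + r + 2s)/area and (4R + r - 2s)/area quoted in the text.
a, b, c = 3.0, 4.0, 5.0
s = (a + b + c) / 2                                   # semiperimeter = 6
area = math.sqrt(s * (s - a) * (s - b) * (s - c))     # Heron: area = 6
R = a * b * c / (4 * area)                            # circumradius = 2.5
r = area / s                                          # inradius = 1

# curvatures of the three mutually tangent circles centered at the vertices
k1, k2, k3 = 1 / (s - a), 1 / (s - b), 1 / (s - c)
root = math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
inner = k1 + k2 + k3 + 2 * root   # Descartes' theorem, "+" root: 23/6
outer = k1 + k2 + k3 - 2 * root   # "-" root: -1/6 (negative, so it surrounds)

assert math.isclose(inner, (4 * R + r + 2 * s) / area)
assert math.isclose(outer, (4 * R + r - 2 * s) / area)
```

Here the outer curvature is negative, so for this triangle the outer Soddy center is the isoperimetric point rather than a second equal detour point.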
Excentric circles
As well as the three externally tangent circles formed from a triangle, three more triples of tangent circles also have their centers at the triangle vertices, but with one of the circles surrounding the other two. Their triples of radii are $(-s,s-c,s-b),$ $(s-c,-s,s-a),$ or $(s-b,s-a,-s),$ where a negative radius indicates that the circle is tangent to the other two in its interior. Their points of tangency lie on the lines through the sides of the triangle, with each triple of circles having tangencies at the points where one of the three excircles is tangent to these lines. The pairs of tangent circles to these three triples of circles behave in analogous ways to the pair of inner and outer circles, and are also sometimes called Soddy circles.[6] Instead of lying on the intersection of the three hyperbolas, the centers of these circles lie where the opposite branch of one hyperbola with foci at the two vertices and passing through the third intersects the two ellipses with foci at other pairs of vertices and passing through the third.[1]
Soddy lines
The line through both Soddy centers, called the Soddy line, also passes through the incenter of the triangle, which is the homothetic center of the two Soddy circles,[6] and through the Gergonne point, the intersection of the three lines connecting the intouch points of the triangle to the opposite vertices.[7] Four mutually tangent circles define six points of tangency, which can be grouped in three pairs of tangent points, each pair coming from two disjoint pairs of circles. The three lines through these three pairs of tangent points are concurrent, and the points of concurrency defined in this way from the inner and outer circles define two more triangle centers called the Eppstein points that also lie on the Soddy line.[7][8]
The three additional pairs of excentric Soddy circles are each associated with a Soddy line through their centers. Each passes through the corresponding excenter of the triangle, which is the center of similitude for the two circles. Each Soddy line also passes through an analog of the Gergonne point and the Eppstein points. The four Soddy lines concur at the de Longchamps point, the reflection of the orthocenter of the triangle about the circumcenter.[6][7][9]
References
1. Lemoine, Émile (1891), "Sur les triangles orthologiques et sur divers sujets de la géométrie du triangle" [On orthologic triangles and on various subjects of triangle geometry], Compte rendu de la 19me session de l'association française pour l'avancement des sciences, pt. 2, Congrès de Limoges 1890 (in French), Paris: Secrétariat de l'association, pp. 111–146, especially §4 "Sur les intersections deux a deux des coniques qui ont pour foyers-deux sommets d'un triangle et passent par le troisième" [On the intersections in pairs of the conics which have as foci two vertices of a triangle and pass through the third], pp. 128–144
2. Veldkamp, G. R. (1985), "The Isoperimetric Point and the Point(s) of Equal Detour in a Triangle", The American Mathematical Monthly, 92 (8): 546–558, doi:10.1080/00029890.1985.11971677, JSTOR 2323159
3. Garcia, Ronaldo; Reznik, Dan; Moses, Peter; Gheorghe, Liliana (2022), "Triads of conics associated with a triangle", KoG, Croatian Society for Geometry and Graphics (26): 16–32, arXiv:2112.15232, doi:10.31896/k.26.2, S2CID 245634505
4. Hajja, Mowaffaq; Yff, Peter (2007), "The isoperimetric point and the point(s) of equal detour in a triangle", Journal of Geometry, 87 (1–2): 76–82, doi:10.1007/s00022-007-1906-y, JSTOR 2323159, MR 2372517, S2CID 122898960
5. Jackson, Frank M. (2013), "Soddyian Triangles" (PDF), Forum Geometricorum, 13: 1–6
6. Vandeghen, A. (1964), "Soddy's circles and the De Longchamps point of a triangle", Mathematical Notes, The American Mathematical Monthly, 71 (2): 176–179, doi:10.2307/2311750, JSTOR 2311750, MR 1532529
7. Gisch, David; Ribando, Jason M. (2004), "Apollonius' problem: a study of solutions and their connections" (PDF), American Journal of Undergraduate Research, 3 (1), doi:10.33697/ajur.2004.010, archived from the original (PDF) on 2017-08-11
8. Eppstein, David (2001), "Tangent spheres and triangle centers", The American Mathematical Monthly, 108 (1): 63–66, arXiv:math/9909152, doi:10.1080/00029890.2001.11919724, JSTOR 2695679
9. Longuet-Higgins, Michael S. (2000), "A fourfold point of concurrence lying on the Euler line of a triangle", The Mathematical Intelligencer, 22 (1): 54–59, doi:10.1007/bf03024448
External links
• Bogomolny, Alexander, "Soddy circles and David Eppstein's centers", Cut-the-knot
Soddy's hexlet
In geometry, Soddy's hexlet is a chain of six spheres (shown in grey in Figure 1), each of which is tangent to both of its neighbors and also to three mutually tangent given spheres. In Figure 1, the three given spheres are the red inner sphere and two spheres (not shown) above and below the plane on which the centers of the hexlet spheres lie. In addition, the hexlet spheres are tangent to a fourth sphere (the blue outer sphere in Figure 1), which is not tangent to the three others.
According to a theorem published by Frederick Soddy in 1937,[1] it is always possible to find a hexlet for any choice of mutually tangent spheres A, B and C. Indeed, there is an infinite family of hexlets related by rotation and scaling of the hexlet spheres (Figure 1); in this, Soddy's hexlet is the spherical analog of a Steiner chain of six circles.[2] Consistent with Steiner chains, the centers of the hexlet spheres lie in a single plane, on an ellipse. Soddy's hexlet was also discovered independently in Japan, as shown by Sangaku tablets from 1822 in Kanagawa prefecture.[3]
Definition
Soddy's hexlet is a chain of six spheres, labeled S1–S6, each of which is tangent to three given spheres, A, B and C, that are themselves mutually tangent at three distinct points. (For consistency throughout the article, the hexlet spheres will always be depicted in grey, spheres A and B in green, and sphere C in blue.) The hexlet spheres are also tangent to a fourth fixed sphere D (always shown in red) that is not tangent to the three others, A, B and C.
Each sphere of Soddy's hexlet is also tangent to its neighbors in the chain; for example, sphere S4 is tangent to S3 and S5. The chain is closed, meaning that every sphere in the chain has two tangent neighbors; in particular, the initial and final spheres, S1 and S6, are tangent to one another.
Annular hexlet
The annular Soddy's hexlet is a special case (Figure 2), in which the three mutually tangent spheres consist of a single sphere of radius r (blue) sandwiched between two parallel planes (green) separated by a perpendicular distance 2r. In this case, Soddy's hexlet consists of six spheres of radius r packed like ball bearings around the central sphere and likewise sandwiched. The hexlet spheres are also tangent to a fourth sphere (red), which is not tangent to the other three.
The chain of six spheres can be rotated about the central sphere without affecting their tangencies, showing that there is an infinite family of solutions for this case. As they are rotated, the spheres of the hexlet trace out a torus (a doughnut-shaped surface); in other words, a torus is the envelope of this family of hexlets.
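That exactly six equal spheres close up the chain in the annular case follows from elementary trigonometry: the hexlet centers lie on a circle of radius 2r around the central sphere, and neighboring spheres of radius r touch when their centers are 2r apart, i.e. subtend an angle of 60 degrees. A quick check (illustrative, not from the source):

```python
import math

# Annular case: central sphere and hexlet spheres all have radius r, so the
# hexlet centers lie on a circle of radius 2r.  Adjacent hexlet spheres are
# tangent when the chord between their centers is 2r, i.e. when they subtend
# 2*asin(r / 2r) = 60 degrees -- so exactly six spheres close the ring.
angle = 2 * math.asin(0.5)                    # angle between adjacent centers
assert math.isclose(2 * math.pi / angle, 6)   # six such angles fill the circle
```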
Solution by inversion
The general problem of finding a hexlet for three given mutually tangent spheres A, B and C can be reduced to the annular case using inversion. This geometrical operation always transforms spheres into spheres or into planes, which may be regarded as spheres of infinite radius. A sphere is transformed into a plane if and only if the sphere passes through the center of inversion. An advantage of inversion is that it preserves tangency; if two spheres are tangent before the transformation, they remain so after. Thus, if the inversion transformation is chosen judiciously, the problem can be reduced to a simpler case, such as the annular Soddy's hexlet. Inversion is reversible; repeating an inversion in the same point returns the transformed objects to their original size and position.
Inversion in the point of tangency between spheres A and B transforms them into parallel planes, which may be denoted as a and b. Since sphere C is tangent to both A and B and does not pass through the center of inversion, C is transformed into another sphere c that is tangent to both planes; hence, c is sandwiched between the two planes a and b. This is the annular Soddy's hexlet (Figure 2). Six spheres s1–s6 may be packed around c and likewise sandwiched between the bounding planes a and b. Re-inversion restores the three original spheres, and transforms s1–s6 into a hexlet for the original problem. In general, these hexlet spheres S1–S6 have different radii.
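A minimal numeric sketch of this step, with hypothetical illustrative spheres and inversion in the unit sphere at the origin: a sphere with center p and radius rho not passing through the origin maps to a sphere with center p/(d² − rho²) and radius rho/|d² − rho²|, where d = |p|, and tangency survives the map.

```python
import math

def invert_sphere(p, rho):
    """Image of a sphere (center p, radius rho) under inversion in the unit
    sphere at the origin, assuming the sphere does not pass through 0."""
    d2 = sum(x * x for x in p)
    t = d2 - rho ** 2          # nonzero because the sphere avoids the origin
    center = tuple(x / t for x in p)
    return center, rho / abs(t)

# two externally tangent spheres (tangent at the point (4, 0, 0))
pa, ra = invert_sphere((3, 0, 0), 1.0)
pb, rb = invert_sphere((5, 0, 0), 1.0)

# the images are again tangent: center distance equals the sum of the radii
dist = math.dist(pa, pb)
assert math.isclose(dist, ra + rb)
```

The same tangency check applies after re-inversion, which is why a hexlet solved in the annular picture transforms back into a hexlet for the original three spheres.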
An infinite variety of hexlets may be generated by rotating the six balls s1–s6 in their plane by an arbitrary angle before re-inverting them. The envelope produced by such rotations is the torus that surrounds the sphere c and is sandwiched between the two planes a and b; thus, the torus has an inner radius r and outer radius 3r. After the re-inversion, this torus becomes a Dupin cyclide (Figure 3).
Dupin cyclide
The envelope of Soddy's hexlets is a Dupin cyclide, an inversion of the torus. Thus Soddy's construction shows that a cyclide of Dupin is the envelope of a 1-parameter family of spheres in two different ways, and each sphere in either family is tangent to two spheres in the same family and three spheres in the other family.[4] This result was probably known to Charles Dupin, who discovered the cyclides that bear his name in his 1803 dissertation under Gaspard Monge.[5]
Relation to Steiner chains
The intersection of the hexlet with the plane of its spherical centers produces a Steiner chain of six circles.
Parabolic and hyperbolic hexlets
It is assumed that spheres A and B are the same size.
In any elliptic hexlet, such as the one shown at the top of the article, there are two planes tangent to the hexlet. For an elliptic hexlet to exist, the radius of C must be less than one quarter that of A. If C's radius is exactly one quarter of A's, each sphere becomes a plane at one point of its journey around the chain. The inverted image is still a normal elliptic hexlet, however; in this parabolic hexlet, a sphere turns into a plane precisely when its inverted image passes through the centre of inversion. A parabolic hexlet has only one tangent plane, and the line of its centres is a parabola.
If C is even larger than that, a hyperbolic hexlet is formed, and now there are no tangent planes at all. Label the spheres S1 to S6. S1 thus cannot go very far until it becomes a plane (where its inverted image passes through the centre of inversion) and then reverses its concavity (where its inverted image surrounds the centre of inversion). Now the line of the centres is a hyperbola.
The limiting case is when A, B and C are all the same size. The hexlet now becomes straight. S1 is small as it passes through the hole between A, B and C, and grows until it becomes a plane tangent to them. The centre of inversion now coincides with a point of tangency of the image of S6, so S6 also becomes a plane tangent to A, B and C. As S1 proceeds, its concavity is reversed and it surrounds all the other spheres, tangent to A, B, C, S2 and S6. S2 pushes upwards and grows to become a tangent plane while S6 shrinks. S1 then takes S6's former position as a tangent plane. It then reverses concavity again and passes through the hole again, beginning another round trip. Now the line of centres is a degenerate hyperbola: it has collapsed into two straight lines.[2]
Sangaku tablets
Japanese mathematicians discovered the same hexlet over one hundred years before Soddy did. They analysed the packing problems in which circles and polygons, balls and polyhedrons come into contact and often found the relevant theorems independently before their discovery by Western mathematicians. They often published these as sangaku. The sangaku about the hexlet was made by Irisawa Shintarō Hiroatsu in the school of Uchida Itsumi, and dedicated to the Samukawa Shrine in May 1822. The original sangaku has been lost but was recorded in Uchida's book of Kokonsankan in 1832. A replica of the sangaku was made from the record and dedicated to the Hōtoku museum in the Samukawa Shrine in August, 2009.[6]
The sangaku by Irisawa consists of three problems. The third problem relates to Soddy's hexlet: "the diameter of the outer circumscribing sphere is 30 sun. The diameters of the nucleus balls are 10 sun and 6 sun each. The diameter of one of the balls in the chain of balls is 5 sun. Then I asked for the diameters of the remaining balls. The answer is 15 sun, 10 sun, 3.75 sun, 2.5 sun and 2 + 8/11 sun."[7]
In his answer, Irisawa writes down the method for calculating the diameters of the balls; in modern notation it amounts to the following formulas. Let the ratios of the diameter of the outer ball to the diameters of the two nucleus balls be $a_{1},a_{2}$, and the ratios of its diameter to those of the chain balls be $c_{1},\dots ,c_{6}$. We want to represent $c_{2},\dots ,c_{6}$ in terms of $a_{1}$, $a_{2}$, and $c_{1}$. If
$K={\sqrt {3\left(a_{1}a_{2}+a_{2}c_{1}+c_{1}a_{1}-\left({\frac {a_{1}+a_{2}+c_{1}+1}{2}}\right)^{2}\right)}}$
then,
${\begin{aligned}c_{2}&=(a_{1}+a_{2}+c_{1}-1)/2-K\\c_{3}&=(3a_{1}+3a_{2}-c_{1}-3)/2-K\\c_{4}&=2a_{1}+2a_{2}-c_{1}-2\\c_{5}&=(3a_{1}+3a_{2}-c_{1}-3)/2+K\\c_{6}&=(a_{1}+a_{2}+c_{1}-1)/2+K.\end{aligned}}$
Then $c_{1}+c_{4}=c_{2}+c_{5}=c_{3}+c_{6}$.
If $r_{1},\dots ,r_{6}$ are the diameters of the six chain balls, we get the formula:
${\frac {1}{r_{1}}}+{\frac {1}{r_{4}}}={\frac {1}{r_{2}}}+{\frac {1}{r_{5}}}={\frac {1}{r_{3}}}+{\frac {1}{r_{6}}}.$
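These formulas can be checked directly against Irisawa's stated answer. The verification below (a sketch of mine, using exact rational arithmetic via Python's `fractions`) plugs in the sangaku's data: outer sphere of diameter 30 sun, nucleus balls of 10 and 6 sun, first chain ball of 5 sun.

```python
from fractions import Fraction

# Verify Irisawa's sangaku answer with the formulas above (exact arithmetic).
D = 30                                         # outer sphere diameter, in sun
a1, a2, c1 = Fraction(D, 10), Fraction(D, 6), Fraction(D, 5)   # 3, 5, 6

# K^2 = 3*(a1*a2 + a2*c1 + c1*a1 - ((a1+a2+c1+1)/2)^2) = 81/4, so K = 9/2
K2 = 3 * (a1 * a2 + a2 * c1 + c1 * a1 - ((a1 + a2 + c1 + 1) / 2) ** 2)
K = Fraction(9, 2)
assert K * K == K2

c2 = (a1 + a2 + c1 - 1) / 2 - K
c3 = (3 * a1 + 3 * a2 - c1 - 3) / 2 - K
c4 = 2 * a1 + 2 * a2 - c1 - 2
c5 = (3 * a1 + 3 * a2 - c1 - 3) / 2 + K
c6 = (a1 + a2 + c1 - 1) / 2 + K

# diameters of the remaining chain balls: 15, 10, 3.75, 2.5, 2 + 8/11 sun
diameters = [D / c for c in (c2, c3, c4, c5, c6)]
assert diameters == [15, 10, Fraction(15, 4), Fraction(5, 2), Fraction(30, 11)]
assert c1 + c4 == c2 + c5 == c3 + c6           # the opposite-ball identity
```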
See also
• Descartes' theorem
• Inversive geometry
• Sangaku
Notes
1. Soddy 1937
2. Ogilvy 1990
3. Rothman 1998
4. Coxeter 1952
5. O'Connor & Robertson 2000
6. Yamaji & Nishida 2009, p. 443.
7. Amano 1992, pp. 21–24.
References
• Amano, Hiroshi (1992), Sangaku Collection in Kanagawa prefecture (Kanagawa-ken Sangaku-syū in Japanese), Amano, Hiroshi.
• Coxeter, HSM (1952), "Interlocked rings of spheres", Scripta Mathematica, 18: 113–121.
• Fukagawa, Hidetoshi; Rothman, Tony (2008), Sacred Mathematics: Japanese Temple Geometry, Princeton University Press, ISBN 978-0-691-12745-3
• O'Connor, John J.; Robertson, Edmund F. (2000), "Pierre Charles François Dupin", MacTutor History of Mathematics archive.
• Ogilvy, C.S. (1990), Excursions in Geometry, Dover, ISBN 0-486-26530-7.
• Soddy, Frederick (1937), "The bowl of integers and the hexlet", Nature, London, 139 (3506): 77–79, doi:10.1038/139077a0.
• Rothman, T (1998), "Japanese Temple Geometry", Scientific American, 278: 85–91, doi:10.1038/scientificamerican0598-84.
• Yamaji, Katsunori; Nishida, Tomomi, eds. (2009), Dictionary of Wasan (Wasan no Jiten in Japanese), Asakura, ISBN 978-4-254-11122-4.
External links
Wikimedia Commons has media related to Soddy's hexlet.
• Weisstein, Eric W. "Hexlet". MathWorld.
• B. Allanson. "Animation of Soddy's hexlet".
• Japanese Temple Geometry at the Wayback Machine (archived March 19, 2019) – Animation 0 of SANGAKU PROBLEM 0 shows the case in which the radii of spheres A and B are equal and the centers of spheres A, B and C lie on a line. Animation 1 shows the case in which the radii of A and B are equal but the centers of A, B and C do not lie on a line. Animation 2 shows the case in which the radii of A and B are unequal. Animation 3 shows the case in which the centers of A, B and C lie on a line and the radii of A and B vary.
• Replica of Sangaku at Hōtoku museum in Samukawa Shrine at the Wayback Machine (archived August 26, 2016) – The third problem relates to Soddy's hexlet.
• The page of Kokonsankan (1832) - Department of Mathematics, Kyoto University
• The page of Kokonsankan (1832) – The left page relates to Soddy's hexlet.
Sofia Olhede
Sofia Charlotta Olhede (born 1977)[1] is a British-Swedish mathematical statistician known for her research on wavelets, graphons, and high-dimensional statistics[2] and for her columns on algorithmic bias. She is a professor of statistical science at the EPFL (École Polytechnique Fédérale de Lausanne).[1]
Sofia Charlotta Olhede
Olhede in 2021
Born: 1977 (age 45–46)
Nationality: British, Swedish
Known for: Wavelets, graphons, high-dimensional statistics
Academic background
Education: Mathematics
Alma mater: Imperial College London
Thesis: Analysis via Time, Frequency and Scale of Nonstationary Signals (2003)
Doctoral advisor: Andrew T. Walden
Academic work
Discipline: Mathematics
Sub-discipline: Mathematical statistics
Institutions: EPFL (École Polytechnique Fédérale de Lausanne)
Website: https://www.epfl.ch/labs/sds/
Education and career
Olhede earned a master's degree from Imperial College London in 2000, and completed her doctorate there in 2003.[3] Her dissertation, Analysis via Time, Frequency and Scale of Nonstationary Signals, was supervised by Andrew T. Walden.[4]
She began her academic career as a lecturer in statistics at Imperial in 2002, and moved to University College London as a professor in 2007. At University College London, she was also an honorary professor of computer science and an honorary senior research associate in mathematics.[3] She became a professor at the Chair of Statistical Data Science at EPFL in 2019.[1][5]
She was also a member of the Public Policy Commission of the Law Society of England and Wales,[6] and served as university liaison director for University College London at the Alan Turing Institute for 2015–2016.[7]
Research
Her scientific work includes non-parametric function regression, high dimensional time series[8] and point process analysis,[9] and network data analysis.[10]
Recognition
Olhede won an Engineering and Physical Sciences Research Council Leadership Fellowship in 2010,[11] and an ERC consolidator fellowship in 2016.[12] She was elected as a fellow of the Institute of Mathematical Statistics in 2018 "for seminal contributions to the theory and application of large and heterogeneous networks, random fields and point process, for advancing research in data science, and for service to the profession through editorial and committee work".[13]
Selected works
• Janson, Svante; Olhede, Sofia (2021). "Can smooth graphons in several dimensions be represented by smooth graphons on $[0,1]$?". arXiv:2101.07587 [math.CO].
• Sykulski, Adam M.; Olhede, Sofia C.; Guillaumin, Arthur P.; Lilly, Jonathan M.; Early, Jeffrey J. (2019). "The debiased Whittle likelihood". Biometrika. 106 (2): 251–266. doi:10.1093/biomet/asy071.
• Lunagómez, Simón; Olhede, Sofia C.; Wolfe, Patrick J. (2020). "Modeling Network Populations via Graph Distances". Journal of the American Statistical Association: 1–18. arXiv:1904.07367. doi:10.1080/01621459.2020.1763803. S2CID 119310085.
• Maugis, P.-A. G.; Olhede, S. C.; Priebe, C. E.; Wolfe, P. J. (2020). "Testing for Equivalence of Network Distribution Using Subgraph Counts". Journal of Computational and Graphical Statistics. 29 (3): 455–465. arXiv:1701.00505. doi:10.1080/10618600.2020.1736085. S2CID 201049943.
References
1. 20 professors appointed at ETH Zurich and EPFL (Press release), Board of the Swiss Federal Institutes of Technology, retrieved 2021-02-09
2. Professor Sofia Olhede awarded £366k EPSRC grant to study modelling and inference for massive populations of heterogeneous point processes, AHRC/EPSRC Science and Heritage Programme, Centre for Sustainable Heritage, Bartlett School of Graduate Studies, University College London, 22 September 2015, retrieved 2018-10-03
3. Curriculum vitae (PDF), 2016, retrieved 2018-10-03
4. Sofia Olhede at the Mathematics Genealogy Project
5. Testa, Andrea (17 December 2018), "Appointment of two Full Professors at SB", EPFL News
6. Using algorithms to deliver justice – bias or boost?, Law Society of England and Wales, 14 June 2018, retrieved 2018-10-03
7. Alan Turing Institute Appoints New Directors, Alan Turing Institute, 22 December 2015, retrieved 2018-10-03
8. Olhede, S.; Walden, A. T. (8 April 2004). "The Hilbert spectrum via wavelet projections". Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences. 460 (2044): 955–975. Bibcode:2004RSPSA.460..955O. doi:10.1098/rspa.2003.1199. hdl:10044/1/1100. ISSN 1364-5021. S2CID 59451275.
9. Rajala, T.; Murrell, D. J.; Olhede, S. C. (23 April 2018). "Detecting multivariate interactions in spatial point patterns with Gibbs models and variable selection". Journal of the Royal Statistical Society, Series C (Applied Statistics). 67 (5): 1237–1273. doi:10.1111/rssc.12281. ISSN 0035-9254.
10. Olhede, S. C.; Wolfe, P. J. (14 October 2014). "Network histograms and universality of blockmodel approximation". Proceedings of the National Academy of Sciences. 111 (41): 14722–14727. arXiv:1312.5306. Bibcode:2014PNAS..11114722O. doi:10.1073/pnas.1400374111. ISSN 0027-8424. PMC 4205664. PMID 25275010.
11. UCL professor recognised as 'next-gen' scientific leader, University College London, 22 July 2010, retrieved 2018-10-03
12. UCL (27 January 2016). "Four UCL Staff Awarded European Research Council Consolidator Awards". UCL Mathematical & Physical Sciences. Retrieved 2021-02-18.
13. "Introducing the 2018 class of IMS Fellows", IMS Bulletin, Institute of Mathematical Statistics, 15 May 2018
External links
• Home page
• Personal website at EPFL
• Website of the Chair of Statistical Data Science
• Sofia Olhede publications indexed by Google Scholar
Shift space
In symbolic dynamics and related branches of mathematics, a shift space or subshift is a set of infinite words that represent the evolution of a discrete system. In fact, shift spaces and symbolic dynamical systems are often considered synonyms. The most widely studied shift spaces are the subshifts of finite type and the sofic shifts.
In the classical framework[1] a shift space is any subset $\Lambda $ of $A^{\mathbb {Z} }:=\{(x_{i})_{i\in \mathbb {Z} }:\ x_{i}\in A\ \forall i\in \mathbb {Z} \}$, where $A$ is a finite set, that is closed in the Tychonoff (product) topology and invariant under translations. More generally, one can define a shift space as a closed and translation-invariant subset of $A^{\mathbb {G} }$, where $A$ is any non-empty set and $\mathbb {G} $ is any monoid.[2][3]
Definition
Let $\mathbb {G} $ be a monoid, and given $g,h\in \mathbb {G} $, denote the operation of $g$ with $h$ by the product $gh$. Let $\mathbf {1} _{\mathbb {G} }$ denote the identity of $\mathbb {G} $. Consider a non-empty set $A$ (an alphabet) with the discrete topology, and define $A^{\mathbb {G} }$ as the set of all patterns over $A$ indexed by $\mathbb {G} $. For $\mathbf {x} =(x_{i})_{i\in \mathbb {G} }\in A^{\mathbb {G} }$ and a subset $N\subset \mathbb {G} $, we denote the restriction of $\mathbf {x} $ to the indices of $N$ as $\mathbf {x} _{N}:=(x_{i})_{i\in N}$.
On $A^{\mathbb {G} }$, we consider the prodiscrete topology, which makes $A^{\mathbb {G} }$ a Hausdorff and totally disconnected topological space. In the case of $A$ being finite, it follows that $A^{\mathbb {G} }$ is compact. However, if $A$ is not finite, then $A^{\mathbb {G} }$ is not even locally compact.
This topology will be metrizable if and only if $\mathbb {G} $ is countable, and, in any case, the base of this topology consists of a collection of open/closed sets (called cylinders), defined as follows: given a finite set of indices $D\subset \mathbb {G} $, and for each $i\in D$, let $a_{i}\in A$. The cylinder given by $D$ and $(a_{i})_{i\in D}\in A^{|D|}$ is the set
${\big [}(a_{i})_{i\in D}{\big ]}_{D}:=\{\mathbf {x} \in A^{\mathbb {G} }:\ x_{i}=a_{i},\ \forall i\in D\}.$
When $D=\{g\}$, we denote the cylinder fixing the symbol $b$ at the entry indexed by $g$ simply as $[b]_{g}$.
In other words, a cylinder ${\big [}(a_{i})_{i\in D}{\big ]}_{D}$ is the set of all infinite patterns of $A^{\mathbb {G} }$ which contain the finite pattern $(a_{i})_{i\in D}\in A^{|D|}$.
Given $g\in \mathbb {G} $, the g-shift map on $A^{\mathbb {G} }$ is denoted by $\sigma ^{g}:A^{\mathbb {G} }\to A^{\mathbb {G} }$ and defined as
$\sigma ^{g}{\big (}(x_{i})_{i\in \mathbb {G} }{\big )}=(x_{gi})_{i\in \mathbb {G} }$.
A shift space over the alphabet $A$ is a set $\Lambda \subset A^{\mathbb {G} }$ that is closed under the topology of $A^{\mathbb {G} }$ and invariant under translations, i.e., $\sigma ^{g}(\Lambda )\subset \Lambda $ for all $g\in \mathbb {G} $.[note 1] We consider in the shift space $\Lambda $ the induced topology from $A^{\mathbb {G} }$, which has as basic open sets the cylinders ${\big [}(a_{i})_{i\in D}{\big ]}_{\Lambda }:={\big [}(a_{i})_{i\in D}{\big ]}\cap \Lambda $.
For each $k\in \mathbb {N} ^{*}$, define ${\mathcal {N}}_{k}:=\bigcup _{N\subset \mathbb {G} \atop \#N=k}A^{N}$, and ${\mathcal {N}}_{A^{\mathbb {G} }}^{f}:=\bigcup _{k\in \mathbb {N} }{\mathcal {N}}_{k}=\bigcup _{N\subset \mathbb {G} \atop \#N<\infty }A^{N}$. An equivalent way to define a shift space is to take a set of forbidden patterns $F\subset {\mathcal {N}}_{A^{\mathbb {G} }}^{f}$ and define a shift space as the set
$X_{F}:=\{\mathbf {x} \in A^{\mathbb {G} }:\ \forall N\subset \mathbb {G} ,\forall g\in \mathbb {G} ,\ \left(\sigma ^{g}(\mathbf {x} )\right)_{N}=\mathbf {x} _{gN}\notin F\}.$
Intuitively, a shift space $X_{F}$ is the set of all infinite patterns that do not contain any forbidden finite pattern of $F$.
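As a concrete sketch (in Python, with names of our own choosing), take the golden mean shift: the shift space over $A=\{0,1\}$ defined by the single forbidden word $11$. A finite word occurs in a point of this shift space exactly when it avoids the forbidden factor:

```python
# Sketch: admissibility of finite words for the golden mean shift, the shift
# space over {0, 1} defined by the single forbidden word "11". For this
# particular F, avoiding the forbidden factor is also sufficient for a word
# to occur in some point of X_F (pad with 0s on both sides).
FORBIDDEN = ["11"]

def admissible(word):
    # A word occurring in a point of X_F contains no forbidden factor.
    return not any(f in word for f in FORBIDDEN)

print(admissible("010010"))  # True
print(admissible("0110"))    # False: contains the forbidden factor "11"
```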
Language of shift space
Given a shift space $\Lambda \subset A^{\mathbb {G} }$ and a finite set of indices $N\subset \mathbb {G} $, let $W_{\emptyset }(\Lambda ):=\{\epsilon \}$, where $\epsilon $ stands for the empty word, and for $N\neq \emptyset $ let $W_{N}(\Lambda )\subset A^{N}$ be the set of all finite configurations of $A^{N}$ that appear in some sequence of $\Lambda $, i.e.,
$W_{N}(\Lambda ):=\{(w_{i})_{i\in N}\in A^{N}:\ \exists \ \mathbf {x} \in \Lambda {\text{ s.t. }}x_{i}=w_{i}\ \forall i\in N\}.$
Note that, since $\Lambda $ is a shift space, if $M\subset \mathbb {G} $ is a translation of $N\subset \mathbb {G} $, i.e., $M=gN$ for some $g\in \mathbb {G} $, then $(w_{j})_{j\in M}\in W_{M}(\Lambda )$ if and only if there exists $(v_{i})_{i\in N}\in W_{N}(\Lambda )$ such that $w_{j}=v_{i}$ if $j=gi$. In other words, $W_{M}(\Lambda )$ and $W_{N}(\Lambda )$ contain the same configurations modulo translation. We will call the set
$W(\Lambda ):=\bigcup _{N\subset \mathbb {G} \atop \#N<\infty }W_{N}(\Lambda )$
the language of $\Lambda $. In the general context stated here, the language of a shift space does not have the same meaning as in formal language theory; however, in the classical framework, which considers the alphabet $A$ to be finite and $\mathbb {G} $ to be $\mathbb {N} $ or $\mathbb {Z} $ with the usual addition, the language of a shift space is a formal language.
Classical framework
The classical framework for shift spaces consists of considering the alphabet $A$ as finite, and $\mathbb {G} $ as the set of non-negative integers ($\mathbb {N} $) with the usual addition, or the set of all integers ($\mathbb {Z} $) with the usual addition. In both cases, the identity element $\mathbf {1} _{\mathbb {G} }$ corresponds to the number 0. Furthermore, when $\mathbb {G} =\mathbb {N} $, since all $\mathbb {N} \setminus \{0\}$ can be generated from the number 1, it is sufficient to consider a unique shift map given by $\sigma (\mathbf {x} )_{n}=x_{n+1}$ for all $n$. On the other hand, for the case of $\mathbb {G} =\mathbb {Z} $, since all $\mathbb {Z} $ can be generated from the numbers {-1, 1}, it is sufficient to consider two shift maps given for all $n$ by $\sigma (\mathbf {x} )_{n}=x_{n+1}$ and by $\sigma ^{-1}(\mathbf {x} )_{n}=x_{n-1}$.
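The action of these generating shift maps can be sketched in code (an illustrative representation of our own choosing), modeling a point of $A^{\mathbb {Z} }$ as a function from the integers to the alphabet:

```python
# Sketch: a point of A^Z represented as a Python function x: Z -> A;
# the two generating shift maps act by reindexing.
def shift(x):          # sigma:      (sigma x)_n = x_{n+1}
    return lambda n: x(n + 1)

def shift_inv(x):      # sigma^{-1}: (sigma^{-1} x)_n = x_{n-1}
    return lambda n: x(n - 1)

x = lambda n: n % 2          # the 2-periodic point ...010101...
y = shift(x)                 # shifted copy
z = shift_inv(shift(x))      # sigma^{-1} after sigma recovers x

print([y(n) for n in range(4)])  # [1, 0, 1, 0]
print([z(n) for n in range(4)])  # [0, 1, 0, 1]
```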
Furthermore, whenever $\mathbb {G} $ is $\mathbb {N} $ or $\mathbb {Z} $ with the usual addition (independently of the cardinality of $A$), due to its algebraic structure, it is sufficient to consider only cylinders of the form
$[a_{0}a_{1}...a_{n}]:=\{(x_{i})_{i\in \mathbb {G} }:\ x_{i}=a_{i}\ \forall i=0,..,n\}.$
Moreover, the language of a shift space $\Lambda \subset A^{\mathbb {G} }$ will be given by
$W(\Lambda ):=\bigcup _{n\geq 0}W_{n}(\Lambda ),$
where $W_{0}:=\{\epsilon \}$ and $\epsilon $ stands for the empty word, and
$W_{n}(\Lambda ):=\{(a_{i})_{i=0,\ldots ,n-1}\in A^{n}:\ \exists \ \mathbf {x} \in \Lambda {\text{ s.t. }}x_{i}=a_{i}\ \forall i=0,\ldots ,n-1\}.$
In the same way, for the particular case of $\mathbb {G} =\mathbb {Z} $, it follows that to define a shift space $\Lambda =X_{F}$ we do not need to specify the index of $\mathbb {G} $ on which the forbidden words of $F$ are defined, that is, we can just consider $F\subset \bigcup _{n\geq 1}A^{n}$ and then
$X_{F}=\{\mathbf {x} \in A^{\mathbb {Z} }:\ \forall i\in \mathbb {Z} ,\ \forall k\geq 0,\ (x_{i}...x_{i+k})\notin F\}.$
However, if $\mathbb {G} =\mathbb {N} $ and we define a shift space $\Lambda =X_{F}$ as above, without specifying the indices at which the words are forbidden, then we will only capture shift spaces that are invariant under the shift map, that is, such that $\sigma (X_{F})=X_{F}$. In fact, to define a shift space $X_{F}\subset A^{\mathbb {N} }$ such that $\sigma (X_{F})\subsetneq X_{F}$, it is necessary to specify from which index on the words of $F$ are forbidden.
In particular, in the classical framework of $A$ being finite and $\mathbb {G} $ being $\mathbb {N} $ or $\mathbb {Z} $ with the usual addition, it follows that $M_{F}$ (defined below) is finite if and only if $F$ is finite, which leads to the classical definition of a shift of finite type as those shift spaces $\Lambda \subset A^{\mathbb {G} }$ such that $\Lambda =X_{F}$ for some finite $F$.
Some types of shift spaces
Among several types of shift spaces, the most widely studied are the shifts of finite type and the sofic shifts.
In the case when the alphabet $A$ is finite, a shift space $\Lambda $ is a shift of finite type if we can take a finite set of forbidden patterns $F$ such that $\Lambda =X_{F}$, and $\Lambda $ is a sofic shift if it is the image of a shift of finite type under a sliding block code[1] (that is, a map $\Phi $ that is continuous and commutes with all $g$-shift maps). If $A$ is finite and $\mathbb {G} $ is $\mathbb {N} $ or $\mathbb {Z} $ with the usual addition, then the shift $\Lambda $ is a sofic shift if and only if $W(\Lambda )$ is a regular language.
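For instance, the even shift (binary sequences in which every block of $0$s strictly between two $1$s has even length) is a standard example of a sofic shift that is not of finite type, and its language is regular. The sketch below (state names are our own choosing) recognizes that language with a small deterministic finite automaton:

```python
# Sketch: a DFA for the language of the even shift. Blocks of 0s strictly
# between two 1s must have even length; leading/trailing blocks of 0s in a
# finite factor are unconstrained. States: 'start' (no 1 seen yet),
# 'even'/'odd' (parity of 0s since the last 1), 'dead' (forbidden factor seen).
DELTA = {
    ("start", "0"): "start", ("start", "1"): "even",
    ("even", "0"): "odd",    ("even", "1"): "even",
    ("odd", "0"): "even",    ("odd", "1"): "dead",
    ("dead", "0"): "dead",   ("dead", "1"): "dead",
}

def in_language(word):
    state = "start"
    for c in word:
        state = DELTA[(state, c)]
    return state != "dead"

print(in_language("1001"))  # True: two 0s between the 1s
print(in_language("101"))   # False: a single 0 between two 1s
```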
The name "sofic" was coined by Weiss (1973), based on the Hebrew word סופי meaning "finite", to refer to the fact that this is a generalization of a finiteness property.[4]
When $A$ is infinite, it is possible to define shifts of finite type as shift spaces $\Lambda $ for which one can take a set $F$ of forbidden words such that
$M_{F}:=\{g\in \mathbb {G} :\ \exists N\subset \mathbb {G} {\text{ s.t. }}g\in N{\text{ and }}(w_{i})_{i\in N}\in F\},$
is finite and $\Lambda =X_{F}$.[3] In this context of an infinite alphabet, a sofic shift is defined as the image of a shift of finite type under a particular class of sliding block codes.[3] Both the finiteness of $M_{F}$ and the additional conditions on the sliding block codes are trivially satisfied whenever $A$ is finite.
Topological dynamical systems on shift spaces
Shift spaces are the topological spaces on which symbolic dynamical systems are usually defined.
Given a shift space $\Lambda \subset A^{\mathbb {G} }$ and a $g$-shift map $\sigma ^{g}:\Lambda \to \Lambda $ it follows that the pair $(\Lambda ,\sigma ^{g})$ is a topological dynamical system.
Two shift spaces $\Lambda \subset A^{\mathbb {G} }$ and $\Gamma \subset B^{\mathbb {G} }$ are said to be topologically conjugate (or simply conjugate) if for each $g$-shift map the topological dynamical systems $(\Lambda ,\sigma ^{g})$ and $(\Gamma ,\sigma ^{g})$ are topologically conjugate, that is, if there exists a homeomorphism $\Phi :\Lambda \to \Gamma $ such that $\Phi \circ \sigma ^{g}=\sigma ^{g}\circ \Phi $. Such maps are known as generalized sliding block codes, or simply as sliding block codes whenever $\Phi $ is uniformly continuous.[3]
Although any continuous map $\Phi $ from $\Lambda \subset A^{\mathbb {G} }$ to itself defines a topological dynamical system $(\Lambda ,\Phi )$, in symbolic dynamics it is usual to consider only continuous maps $\Phi :\Lambda \to \Lambda $ which commute with all $g$-shift maps, i.e., maps which are generalized sliding block codes. The dynamical system $(\Lambda ,\Phi )$ is known as a generalized cellular automaton (or just as a cellular automaton whenever $\Phi $ is uniformly continuous).
Examples
The first trivial example of a shift space (of finite type) is the full shift $A^{\mathbb {N} }$.
Let $A=\{a,b\}$. The set of all infinite words over $A$ containing at most one $b$ is a sofic subshift that is not of finite type. The set of all infinite words over $A$ in which the letters $b$ form blocks of prime length is not sofic (this can be shown by using the pumping lemma).
The space of infinite strings in two letters, $\{0,1\}^{\mathbb {N} }$, is called the Bernoulli process. It is homeomorphic to the Cantor set.
The bi-infinite space of strings in two letters, $\{0,1\}^{\mathbb {Z} }$, is often identified with the baker's map, or rather is isomorphic (as a measure-preserving system) to the baker's map.
See also
• Tent map
• Bit shift map
• Gray code
Footnotes
1. It is common to refer to a shift space using just the expression shift or subshift. However, some authors use the terms shift and subshift for sets of infinite patterns that are just invariant under the $g$-shift maps, and reserve the term shift space for those that are also closed in the prodiscrete topology.
References
1. Lind, Douglas A.; Marcus, Brian (1995). An introduction to symbolic dynamics and coding. Cambridge: Cambridge University press. ISBN 978-0-521-55900-3.
2. Ceccherini-Silberstein, T.; Coornaert, M. (2010). Cellular Automata and Groups. Springer Monographs in Mathematics. Springer Verlag. doi:10.1007/978-3-642-14034-1. ISBN 978-3-642-14033-4.
3. Sobottka, Marcelo (September 2022). "Some Notes on the Classification of Shift Spaces: Shifts of Finite Type; Sofic Shifts; and Finitely Defined Shifts". Bulletin of the Brazilian Mathematical Society. New Series. 53 (3): 981–1031. arXiv:2010.10595. doi:10.1007/s00574-022-00292-x. ISSN 1678-7544. S2CID 254048586.
4. Weiss, Benjamin (1973), "Subshifts of finite type and sofic systems", Monatsh. Math., 77 (5): 462–474, doi:10.1007/bf01295322, MR 0340556, S2CID 123440583. Weiss does not describe the origin of the word other than calling it a neologism; however, its Hebrew origin is stated by MathSciNet reviewer R. L. Adler.
Further reading
• Ceccherini-Silberstein, T.; Coornaert, M. (2010). Cellular Automata and Groups. Springer Monographs in Mathematics. Springer Verlag. ISBN 978-3-642-14034-1.
• Lind, Douglas; Marcus, Brian (1995). An Introduction to Symbolic Dynamics and Coding. Cambridge UK: Cambridge University Press. ISBN 0-521-55900-6.
• Lothaire, M. (2002). "Finite and Infinite Words". Algebraic Combinatorics on Words. Cambridge UK: Cambridge University Press. ISBN 0-521-81220-8. Retrieved 2008-01-29.
• Morse, Marston; Hedlund, Gustav A. (1938). "Symbolic Dynamics". American Journal of Mathematics. 60 (4): 815–866. doi:10.2307/2371264. JSTOR 2371264.
• Sobottka, M. (2022). "Some Notes on the Classification of Shift Spaces: Shifts of Finite Type; Sofic Shifts; and Finitely Defined Shifts". Bulletin of the Brazilian Mathematical Society. New Series. 53 (3): 981–1031. arXiv:2010.10595. doi:10.1007/s00574-022-00292-x. S2CID 254048586.
Sofiya Ostrovska
Sofiya Ostrovska (born 1958) is a Ukrainian mathematician interested in probability theory and approximation theory, and known for her research on q-Bernstein polynomials, the q-analogs of the Bernstein polynomials. She has also published works in computer science concerning software engineering. She is a professor of mathematics at Atılım University in Turkey.[1]
Early life and education
Ostrovska was born on 26 September 1958 in Sloviansk, then part of the Soviet Union.[2] Her parents, Larisa Semenovna Kudina and Iossif Ostrovskii, were both mathematicians, and her younger brother, Mikhail Ostrovskii, became a mathematics professor at St. John's University (New York City).[3]
She studied mathematics at Kharkov State University (renamed as the National University of Kharkiv in 1999), earning a bachelor's degree in 1977 and master's degree in 1980. She completed her Ph.D. in 1989 at Kyiv State University, again later renamed as the Taras Shevchenko National University of Kyiv.[2]
Career
From 1984 to 1993 she was an assistant professor at the Kharkiv Polytechnic Institute, and from 1993 to 1995 she was an associate professor at Kharkov State University. In 1995 she moved to the H.S. Skovoroda Kharkiv National Pedagogical University and in 1996 she relocated to Turkey, initially as an associate professor at Dokuz Eylül University in İzmir. She became a full professor at the İzmir Institute of Technology in 2000, and took her present position at Atılım University in 2001.
References
1. "Academic staff", Department of Mathematics, Atılım University, retrieved 2022-03-19
2. Curriculum vitae, Atılım University, 2019, retrieved 2022-03-19
3. Catrina, F.; Dilworth, S.; Kadets, V.; Kutzarova, D.; Plichko, A.; Popov, M.; Randrianantoanina, B.; Rosenthal, D.; Shulman, V. (2021), "Mikhail Ostrovskii (for his 60th birthday)", Carpathian Math. Publ., 13 (1): 272–283
External links
• Sofiya Ostrovska publications indexed by Google Scholar
Soft configuration model
In applied mathematics, the soft configuration model (SCM) is a random graph model subject to the principle of maximum entropy under constraints on the expectation of the degree sequence of sampled graphs.[1] Whereas the configuration model (CM) uniformly samples random graphs of a specific degree sequence, the SCM only retains the specified degree sequence on average over all network realizations; in this sense the SCM has very relaxed constraints relative to those of the CM ("soft" rather than "sharp" constraints[2]). The SCM for graphs of size $n$ has a nonzero probability of sampling any graph of size $n$, whereas the CM is restricted to only graphs having precisely the prescribed connectivity structure.
Model formulation
The SCM is a statistical ensemble of random graphs $G$ having $n$ vertices ($n=|V(G)|$) labeled $\{v_{j}\}_{j=1}^{n}=V(G)$, producing a probability distribution on ${\mathcal {G}}_{n}$ (the set of graphs of size $n$). Imposed on the ensemble are $n$ constraints, namely that the ensemble average of the degree $k_{j}$ of vertex $v_{j}$ is equal to a designated value ${\widehat {k}}_{j}$, for all $v_{j}\in V(G)$. The model is fully parameterized by its size $n$ and expected degree sequence $\{{\widehat {k}}_{j}\}_{j=1}^{n}$. These constraints are both local (one constraint associated with each vertex) and soft (constraints on the ensemble average of certain observable quantities), and thus yield a canonical ensemble with an extensive number of constraints.[2] The conditions $\langle k_{j}\rangle ={\widehat {k}}_{j}$ are imposed on the ensemble by the method of Lagrange multipliers (see Maximum-entropy random graph model).
Derivation of the probability distribution
The probability $\mathbb {P} _{\text{SCM}}(G)$ of the SCM producing a graph $G$ is determined by maximizing the Gibbs entropy $S[G]$ subject to constraints $\langle k_{j}\rangle ={\widehat {k}}_{j},\ j=1,\ldots ,n$ and normalization $\sum _{G\in {\mathcal {G}}_{n}}\mathbb {P} _{\text{SCM}}(G)=1$. This amounts to optimizing the multi-constraint Lagrange function below:
${\begin{aligned}&{\mathcal {L}}\left(\alpha ,\{\psi _{j}\}_{j=1}^{n}\right)\\[6pt]={}&-\sum _{G\in {\mathcal {G}}_{n}}\mathbb {P} _{\text{SCM}}(G)\log \mathbb {P} _{\text{SCM}}(G)+\alpha \left(1-\sum _{G\in {\mathcal {G}}_{n}}\mathbb {P} _{\text{SCM}}(G)\right)+\sum _{j=1}^{n}\psi _{j}\left({\widehat {k}}_{j}-\sum _{G\in {\mathcal {G}}_{n}}\mathbb {P} _{\text{SCM}}(G)k_{j}(G)\right),\end{aligned}}$
where $\alpha $ and $\{\psi _{j}\}_{j=1}^{n}$ are the $n+1$ multipliers to be fixed by the $n+1$ constraints (normalization and the expected degree sequence). Setting to zero the derivative of the above with respect to $\mathbb {P} _{\text{SCM}}(G)$ for an arbitrary $G\in {\mathcal {G}}_{n}$ yields
$0={\frac {\partial {\mathcal {L}}\left(\alpha ,\{\psi _{j}\}_{j=1}^{n}\right)}{\partial \mathbb {P} _{\text{SCM}}(G)}}=-\log \mathbb {P} _{\text{SCM}}(G)-1-\alpha -\sum _{j=1}^{n}\psi _{j}k_{j}(G)\ \Rightarrow \ \mathbb {P} _{\text{SCM}}(G)={\frac {1}{Z}}\exp \left[-\sum _{j=1}^{n}\psi _{j}k_{j}(G)\right],$
the constant $Z:=e^{\alpha +1}=\sum _{G\in {\mathcal {G}}_{n}}\exp \left[-\sum _{j=1}^{n}\psi _{j}k_{j}(G)\right]=\prod _{1\leq i<j\leq n}\left(1+e^{-(\psi _{i}+\psi _{j})}\right)$[3] being the partition function normalizing the distribution; the above exponential expression applies to all $G\in {\mathcal {G}}_{n}$, and thus is the probability distribution. Hence we have an exponential family parameterized by $\{\psi _{j}\}_{j=1}^{n}$, which are related to the expected degree sequence $\{{\widehat {k}}_{j}\}_{j=1}^{n}$ by the following equivalent expressions:
$\langle k_{q}\rangle =\sum _{G\in {\mathcal {G}}_{n}}k_{q}(G)\mathbb {P} _{\text{SCM}}(G)=-{\frac {\partial \log Z}{\partial \psi _{q}}}=\sum _{j\neq q}{\frac {1}{e^{\psi _{q}+\psi _{j}}+1}}={\widehat {k}}_{q},\ q=1,\ldots ,n.$
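These fixed-point equations can be solved numerically. The sketch below (the target degree sequence and the damping/iteration choices are our own, ad hoc) uses the substitution $x_{j}=e^{-\psi _{j}}$, under which $1/(e^{\psi _{i}+\psi _{j}}+1)=x_{i}x_{j}/(1+x_{i}x_{j})$, and a damped fixed-point iteration of the kind commonly used for such maximum-entropy models:

```python
# Numerical sketch: fit the SCM multipliers psi_j so that expected degrees
# match a target sequence, via x_j = exp(-psi_j) and a damped fixed point.
import math

target = [1.0, 1.5, 1.5, 2.0]   # invented, achievable target on n = 4 vertices
n = len(target)
x = [1.0] * n                    # x_j = exp(-psi_j), initial guess

def expected_degree(q, x):
    # <k_q> = sum_{j != q} x_q x_j / (1 + x_q x_j)
    return sum(x[q] * x[j] / (1.0 + x[q] * x[j]) for j in range(n) if j != q)

for _ in range(3000):
    new = [target[q] / sum(x[j] / (1.0 + x[q] * x[j]) for j in range(n) if j != q)
           for q in range(n)]
    x = [0.5 * xo + 0.5 * xn for xo, xn in zip(x, new)]  # damped update

psi = [-math.log(xi) for xi in x]
print([round(expected_degree(q, x), 3) for q in range(n)])  # ≈ [1.0, 1.5, 1.5, 2.0]
```

With the fitted multipliers, a graph is sampled by including each edge $(i,j)$ independently with probability $x_{i}x_{j}/(1+x_{i}x_{j})$.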
References
1. van der Hoorn, Pim; Gabor Lippner; Dmitri Krioukov (2017-10-10). "Sparse Maximum-Entropy Random Graphs with a Given Power-Law Degree Distribution". arXiv:1705.10261.
2. Garlaschelli, Diego; Frank den Hollander; Andrea Roccaverde (January 30, 2018). "Covariance structure behind breaking of ensemble equivalence in random graphs" (PDF). Archived (PDF) from the original on February 4, 2023. Retrieved September 14, 2018.
3. Park, Juyong; M.E.J. Newman (2004-05-25). "The statistical mechanics of networks". arXiv:cond-mat/0405566.
Constrained optimization
In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied.
Relation to constraint-satisfaction problems
The constrained-optimization problem (COP) is a significant generalization of the classic constraint-satisfaction problem (CSP) model.[1] COP is a CSP that includes an objective function to be optimized. Many algorithms are used to handle the optimization part.
General form
A general constrained minimization problem may be written as follows:[2]
${\begin{array}{rcll}\min &~&f(\mathbf {x} )&\\\mathrm {subject~to} &~&g_{i}(\mathbf {x} )=c_{i}&{\text{for }}i=1,\ldots ,n\quad {\text{Equality constraints}}\\&~&h_{j}(\mathbf {x} )\geq d_{j}&{\text{for }}j=1,\ldots ,m\quad {\text{Inequality constraints}}\end{array}}$
where $g_{i}(\mathbf {x} )=c_{i}~\mathrm {for~} i=1,\ldots ,n$ and $h_{j}(\mathbf {x} )\geq d_{j}~\mathrm {for~} j=1,\ldots ,m$ are constraints that are required to be satisfied (these are called hard constraints), and $f(\mathbf {x} )$ is the objective function that needs to be optimized subject to the constraints.
In some problems, often called constraint optimization problems, the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated.
Solution methods
Many constrained optimization algorithms can be adapted to the unconstrained case, often via the use of a penalty method. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence. This is referred to as the Maratos effect.[3]
Substitution method
For very simple problems, say a function of two variables subject to a single equality constraint, it is most practical to apply the method of substitution.[4] The idea is to substitute the constraint into the objective function to create a composite function that incorporates the effect of the constraint. For example, assume the objective is to maximize $f(x,y)=x\cdot y$ subject to $x+y=10$. The constraint implies $y=10-x$, which can be substituted into the objective function to create $p(x)=x(10-x)=10x-x^{2}$. The first-order necessary condition gives ${\frac {\partial p}{\partial x}}=10-2x=0$, which can be solved for $x=5$ and, consequently, $y=10-5=5$.
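The computation above can be checked numerically (a quick sketch; the grid is an arbitrary choice):

```python
# Sketch: maximize p(x) = 10x - x^2, obtained by substituting y = 10 - x into
# f(x, y) = x*y; a coarse grid search confirms the stationary point x = 5.
p = lambda x: 10 * x - x * x

xs = [i / 100 for i in range(1001)]     # grid on [0, 10]
x_best = max(xs, key=p)
print(x_best, 10 - x_best, p(x_best))   # 5.0 5.0 25.0
```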
Lagrange multiplier
Main article: Lagrange multipliers
If the constrained problem has only equality constraints, the method of Lagrange multipliers can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints. Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables.
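As a small worked sketch (the problem instance is our own choosing): minimizing $x^{2}+y^{2}$ subject to $x+y=1$ turns the stationarity conditions $2x=\lambda $, $2y=\lambda $ together with the constraint into a linear system, which we solve with a tiny Gaussian elimination:

```python
# Sketch: solve the Lagrange (KKT) system for min x^2 + y^2 s.t. x + y = 1.
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting on the augmented matrix.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Rows: 2x - lam = 0;  2y - lam = 0;  x + y = 1.
A = [[2, 0, -1],
     [0, 2, -1],
     [1, 1, 0]]
b = [0, 0, 1]
x, y, lam = solve(A, b)
print(x, y, lam)  # 0.5 0.5 1.0
```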
Inequality constraints
With inequality constraints, the problem can be characterized in terms of the geometric optimality conditions, Fritz John conditions and Karush–Kuhn–Tucker conditions, under which simple problems may be solvable.
Linear programming
If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a linear programming problem. This can be solved by the simplex method, which usually works in polynomial time in the problem size but is not guaranteed to, or by interior point methods which are guaranteed to work in polynomial time.
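For a toy illustration (not how production solvers work, and with an invented instance), a two-variable LP can be solved by enumerating the vertices of the feasible polygon, since an optimum of an LP is always attained at a vertex:

```python
# Sketch: max 3x + 2y  s.t.  x + y <= 4,  x <= 2,  x >= 0,  y >= 0,
# solved by intersecting constraint pairs and keeping feasible vertices.
from itertools import combinations

cons = [((1, 1), 4), ((1, 0), 2), ((-1, 0), 0), ((0, -1), 0)]  # a·(x,y) <= b

def intersect(c1, c2):
    (a1, b1), (a2, b2) = c1, c2
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if det == 0:
        return None                      # parallel constraint boundaries
    x = (b1 * a2[1] - b2 * a1[1]) / det  # Cramer's rule
    y = (a1[0] * b2 - a2[0] * b1) / det
    return (x, y)

def feasible(p):
    return all(a[0] * p[0] + a[1] * p[1] <= b + 1e-9 for a, b in cons)

verts = [p for c1, c2 in combinations(cons, 2)
         if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(verts, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])  # (2.0, 2.0) 10.0
```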
Nonlinear programming
If the objective function or some of the constraints are nonlinear, and some constraints are inequalities, then the problem is a nonlinear programming problem.
Quadratic programming
If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a quadratic programming problem. It is one type of nonlinear programming. It can still be solved in polynomial time by the ellipsoid method if the objective function is convex; otherwise the problem may be NP hard.
KKT conditions
Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers. It can be applied under differentiability and convexity.
Branch and bound
Constraint optimization can be solved by branch-and-bound algorithms. These are backtracking algorithms storing the cost of the best solution found during execution and using it to avoid part of the search. More precisely, whenever the algorithm encounters a partial solution that cannot be extended to form a solution of better cost than the stored best cost, the algorithm backtracks, instead of trying to extend this solution.
Assuming that the cost (the total value of the satisfied soft constraints) is to be maximized, the efficiency of these algorithms depends on how the cost obtainable by extending a partial solution is estimated. Indeed, whenever the algorithm can backtrack from a partial solution, part of the search is skipped. The lower the estimated cost, the better the algorithm, since a lower estimate is more likely to fall below the best cost found so far, triggering a backtrack.
On the other hand, this estimated cost cannot be lower than the cost that can actually be obtained by extending the solution, as otherwise the algorithm could backtrack while a solution better than the best found so far still exists. As a result, the algorithm requires an upper bound on the cost obtainable from extending a partial solution, and this upper bound should be as small (that is, as tight) as possible.
A variation of this approach called Hansen's method uses interval methods.[5] It inherently implements rectangular constraints.
First-choice bounding functions
One way of evaluating this upper bound for a partial solution is to consider each soft constraint separately. For each soft constraint, the maximal possible value over all assignments to the unassigned variables is assumed. The sum of these values is an upper bound because the soft constraints cannot assume higher values. However, it is generally not exact, because the maximal values of different soft constraints may derive from different assignments: a soft constraint may be maximal for $x=a$ while another constraint is maximal for $x=b$.
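The pieces above can be put together in a minimal sketch (the problem data are invented): branch and bound maximizing a sum of soft constraints, pruning with the first-choice bound:

```python
# Sketch: branch and bound for a toy weighted CSP (maximization), pruning with
# the "first-choice" bound: each soft constraint contributes its maximum over
# all completions of the current partial assignment.
from itertools import product

DOMAIN = [0, 1]
VARS = ["x", "y", "z"]
SOFT = [  # (scope, value function over an assignment dict)
    (("x", "y"), lambda a: 3 if a["x"] != a["y"] else 0),
    (("y", "z"), lambda a: 2 if a["y"] == a["z"] else 1),
    (("x",),     lambda a: 1 if a["x"] == 1 else 0),
]

def bound(partial):
    # Upper bound: maximize each soft constraint independently over free vars.
    total = 0
    for scope, f in SOFT:
        free = [v for v in scope if v not in partial]
        total += max(f({**partial, **dict(zip(free, vals))})
                     for vals in product(DOMAIN, repeat=len(free)))
    return total

best_value = -1

def search(partial, order):
    global best_value
    if not order:  # full assignment: bound() is exact here
        best_value = max(best_value, bound(partial))
        return
    if bound(partial) <= best_value:
        return     # prune: the optimistic estimate cannot beat the incumbent
    var, rest = order[0], order[1:]
    for val in DOMAIN:
        search({**partial, var: val}, rest)

search({}, VARS)
print(best_value)  # 6, attained at x=1, y=0, z=0
```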
Russian doll search
This method[6] runs a branch-and-bound algorithm on $n$ problems, where $n$ is the number of variables. Each such problem is the subproblem obtained by dropping a sequence of variables $x_{1},\ldots ,x_{i}$ from the original problem, along with the constraints containing them. After the problem on variables $x_{i+1},\ldots ,x_{n}$ is solved, its optimal cost can be used as a bound while solving the other problems.
In particular, the cost estimate of a solution having $x_{i+1},\ldots ,x_{n}$ as unassigned variables is added to the cost that derives from the evaluated variables. Virtually, this corresponds to ignoring the evaluated variables and solving the problem on the unassigned ones, except that the latter problem has already been solved. More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using an arbitrary other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding problem, which is already known at this point.
There is a similarity between the Russian doll search method and dynamic programming. Like dynamic programming, Russian doll search solves sub-problems in order to solve the whole problem. But, whereas dynamic programming directly combines the results obtained on sub-problems to get the result of the whole problem, Russian doll search only uses them as bounds during its search.
Bucket elimination
The bucket elimination algorithm can be adapted for constraint optimization. A given variable can be indeed removed from the problem by replacing all soft constraints containing it with a new soft constraint. The cost of this new constraint is computed assuming a maximal value for every value of the removed variable. Formally, if $x$ is the variable to be removed, $C_{1},\ldots ,C_{n}$ are the soft constraints containing it, and $y_{1},\ldots ,y_{m}$ are their variables except $x$, the new soft constraint is defined by:
$C(y_{1}=a_{1},\ldots ,y_{m}=a_{m})=\max _{a}\sum _{i}C_{i}(x=a,y_{1}=a_{1},\ldots ,y_{m}=a_{m}).$
Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated with a bucket of constraints; the bucket of a variable contains all constraints in which the variable is the highest in the ordering. Bucket elimination proceeds from the last variable to the first. For each variable, all constraints of its bucket are replaced as above to remove the variable. The resulting constraint is then placed in the appropriate bucket.
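A minimal sketch of this elimination step (a hypothetical implementation, not from the source, matching the maximization in the formula above; constraints are given as explicit value tables):

```python
from itertools import product

def bucket_elimination(domains, constraints):
    # domains: list of value lists; constraints: list of (scope, table),
    # where scope is a tuple of variable indices and table maps value
    # tuples to numbers.  Returns the maximum total value, eliminating
    # variables from last to first.
    n = len(domains)
    buckets = [[] for _ in range(n)]
    for c in constraints:
        buckets[max(c[0])].append(c)  # bucket of the highest variable in scope
    result = 0
    for x in range(n - 1, -1, -1):
        if not buckets[x]:
            continue
        # variables of the bucket other than x become the new constraint's scope
        rest = sorted({v for scope, _ in buckets[x] for v in scope} - {x})
        table = {}
        for vals in product(*(domains[v] for v in rest)):
            env = dict(zip(rest, vals))
            # maximize over the eliminated variable, summing the bucket
            table[vals] = max(
                sum(t[tuple({**env, x: a}[v] for v in scope)]
                    for scope, t in buckets[x])
                for a in domains[x])
        if rest:
            buckets[max(rest)].append((tuple(rest), table))
        else:
            result += table[()]  # no variables left: a constant
    return result
```

The new table built for each bucket is exactly the constraint $C$ of the formula above; placing it in the bucket of its highest remaining variable keeps the invariant that every constraint is processed exactly once.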
See also
• Constrained least squares
• Distributed constraint optimization
• Constraint satisfaction problem (CSP)
• Constraint programming
• Integer programming
• Penalty method
• Superiorization
References
1. Rossi, Francesca; van Beek, Peter; Walsh, Toby (2006-01-01), Rossi, Francesca; van Beek, Peter; Walsh, Toby (eds.), "Chapter 1 – Introduction", Foundations of Artificial Intelligence, Handbook of Constraint Programming, Elsevier, vol. 2, pp. 3–12, doi:10.1016/s1574-6526(06)80005-2, retrieved 2019-10-04
2. Martins, J. R. R. A.; Ning, A. (2021). Engineering Design Optimization. Cambridge University Press. ISBN 978-1108833417.
3. Wenyu Sun; Ya-Xiang Yuan (2010). Optimization Theory and Methods: Nonlinear Programming. Springer. ISBN 978-1441937650. p. 541.
4. Prosser, Mike (1993). "Constrained Optimization by Substitution". Basic Mathematics for Economists. New York: Routledge. pp. 338–346. ISBN 0-415-08424-5.
5. Leader, Jeffery J. (2004). Numerical Analysis and Scientific Computation. Addison Wesley. ISBN 0-201-73499-0.
6. Verfaillie, Gérard, Michel Lemaître, and Thomas Schiex. "Russian doll search for solving constraint optimization problems." AAAI/IAAI, Vol. 1. 1996.
Further reading
• Bertsekas, Dimitri P. (1982). Constrained Optimization and Lagrange Multiplier Methods. New York: Academic Press. ISBN 0-12-093480-9.
• Dechter, Rina (2003). Constraint Processing. Morgan Kaufmann. ISBN 1-55860-890-7.
Viterbi algorithm
The Viterbi algorithm is a dynamic programming algorithm for obtaining the maximum a posteriori probability estimate of the most likely sequence of hidden states—called the Viterbi path—that results in a sequence of observed events, especially in the context of Markov information sources and hidden Markov models (HMM).
The algorithm has found universal application in decoding the convolutional codes used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-space communications, and 802.11 wireless LANs. It is now also commonly used in speech recognition, speech synthesis, diarization,[1] keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.
History
The Viterbi algorithm is named after Andrew Viterbi, who proposed it in 1967 as a decoding algorithm for convolutional codes over noisy digital communication links.[2] It has, however, a history of multiple invention, with at least seven independent discoveries, including those by Viterbi, Needleman and Wunsch, and Wagner and Fischer.[3] It was introduced to natural language processing as a method of part-of-speech tagging as early as 1987.
Viterbi path and Viterbi algorithm have become standard terms for the application of dynamic programming algorithms to maximization problems involving probabilities.[3] For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is commonly called the "Viterbi parse".[4][5][6] Another application is in target tracking, where the track is computed that assigns a maximum likelihood to a sequence of observations.[7]
Extensions
A generalization of the Viterbi algorithm, termed the max-sum algorithm (or max-product algorithm) can be used to find the most likely assignment of all or some subset of latent variables in a large number of graphical models, e.g. Bayesian networks, Markov random fields and conditional random fields. The latent variables need, in general, to be connected in a way somewhat similar to a hidden Markov model (HMM), with a limited number of connections between variables and some type of linear structure among the variables. The general algorithm involves message passing and is substantially similar to the belief propagation algorithm (which is the generalization of the forward-backward algorithm).
With the algorithm called iterative Viterbi decoding, one can find the subsequence of an observation that matches best (on average) a given hidden Markov model. This algorithm was proposed by Qi Wang et al. to deal with turbo codes.[8] Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, reestimating the score for a filler until convergence.
An alternative algorithm, the Lazy Viterbi algorithm, has been proposed.[9] For many applications of practical interest, under reasonable noise conditions, the lazy decoder is much faster than the original Viterbi decoder. While the original Viterbi algorithm calculates every node in the trellis of possible outcomes, the Lazy Viterbi algorithm maintains a prioritized list of nodes to evaluate in order, and the number of calculations required is typically fewer (and never more) than for the ordinary Viterbi algorithm for the same result. However, it is not as easy to parallelize in hardware.
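The best-first idea can be sketched as a Dijkstra-style search on the trellis (a hypothetical minimal sketch in log space, not Feldman et al.'s actual decoder; it assumes all probabilities involved are strictly positive, so that all edge weights are positive):

```python
import heapq
from itertools import count
from math import log

def lazy_viterbi(obs, n_states, start_p, trans_p, emit_p):
    # Nodes (t, state) are expanded in order of decreasing path probability
    # (increasing -log), so under low noise only a fraction of the trellis
    # is ever evaluated.  The first final-column node settled is optimal.
    T, tick = len(obs), count()  # tick breaks ties in the heap
    heap = [(-log(start_p[i] * emit_p[i][obs[0]]), next(tick), 0, i, None)
            for i in range(n_states)]
    heapq.heapify(heap)
    done, parent = set(), {}
    while heap:
        d, _, t, i, par = heapq.heappop(heap)
        if (t, i) in done:
            continue
        done.add((t, i))
        parent[(t, i)] = par
        if t == T - 1:
            path, node = [], (t, i)  # backtrack through settled parents
            while node is not None:
                path.append(node[1])
                node = parent[node]
            return path[::-1]
        for j in range(n_states):
            if (t + 1, j) not in done:
                heapq.heappush(heap,
                               (d - log(trans_p[i][j] * emit_p[j][obs[t + 1]]),
                                next(tick), t + 1, j, (t, i)))
    return None
```

Because every edge weight is a positive negative-log-probability, shortest path in the trellis coincides with the most probable state sequence, and the priority queue realizes the "evaluate nodes in order" behavior described above.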
Pseudocode
This algorithm generates a path $X=(x_{1},x_{2},\ldots ,x_{T})$, which is a sequence of states $x_{n}\in S=\{s_{1},s_{2},\dots ,s_{K}\}$ that generate the observations $Y=(y_{1},y_{2},\ldots ,y_{T})$ with $y_{n}\in O=\{o_{1},o_{2},\dots ,o_{N}\}$, where $N$ is the number of possible observations in the observation space $O$.
Two 2-dimensional tables of size $K\times T$ are constructed:
• Each element $T_{1}[i,j]$ of $T_{1}$ stores the probability of the most likely path so far ${\hat {X}}=({\hat {x}}_{1},{\hat {x}}_{2},\ldots ,{\hat {x}}_{j})$ with ${\hat {x}}_{j}=s_{i}$ that generates $Y=(y_{1},y_{2},\ldots ,y_{j})$.
• Each element $T_{2}[i,j]$ of $T_{2}$ stores ${\hat {x}}_{j-1}$ of the most likely path so far ${\hat {X}}=({\hat {x}}_{1},{\hat {x}}_{2},\ldots ,{\hat {x}}_{j-1},{\hat {x}}_{j}=s_{i})$ $\forall j,2\leq j\leq T$
The table entries $T_{1}[i,j],T_{2}[i,j]$ are filled by increasing order of $K\cdot j+i$:
$T_{1}[i,j]=\max _{k}{(T_{1}[k,j-1]\cdot A_{ki}\cdot B_{iy_{j}})}$,
$T_{2}[i,j]=\operatorname {argmax} _{k}{(T_{1}[k,j-1]\cdot A_{ki}\cdot B_{iy_{j}})}$,
with $A_{ki}$ and $B_{iy_{j}}$ as defined below. Note that $B_{iy_{j}}$ does not need to appear in the latter expression, as it is non-negative and independent of $k$ and thus does not affect the argmax.
Input
• The observation space $O=\{o_{1},o_{2},\dots ,o_{N}\}$,
• the state space $S=\{s_{1},s_{2},\dots ,s_{K}\}$,
• an array of initial probabilities $\Pi =(\pi _{1},\pi _{2},\dots ,\pi _{K})$ such that $\pi _{i}$ stores the probability that $x_{1}=s_{i}$,
• a sequence of observations $Y=(y_{1},y_{2},\ldots ,y_{T})$ such that $y_{t}=o_{i}$ if the observation at time $t$ is $o_{i}$,
• transition matrix $A$ of size $K\times K$ such that $A_{ij}$ stores the transition probability of transiting from state $s_{i}$ to state $s_{j}$,
• emission matrix $B$ of size $K\times N$ such that $B_{ij}$ stores the probability of observing $o_{j}$ from state $s_{i}$.
Output
• The most likely hidden state sequence $X=(x_{1},x_{2},\ldots ,x_{T})$
function VITERBI$(O,S,\Pi ,Y,A,B):X$
for each state $i=1,2,\ldots ,K$ do
$T_{1}[i,1]\leftarrow \pi _{i}\cdot B_{iy_{1}}$
$T_{2}[i,1]\leftarrow 0$
end for
for each observation $j=2,3,\ldots ,T$ do
for each state $i=1,2,\ldots ,K$ do
$T_{1}[i,j]\gets \max _{k}{(T_{1}[k,j-1]\cdot A_{ki}\cdot B_{iy_{j}})}$
$T_{2}[i,j]\gets \arg \max _{k}{(T_{1}[k,j-1]\cdot A_{ki}\cdot B_{iy_{j}})}$
end for
end for
$z_{T}\gets \arg \max _{k}{(T_{1}[k,T])}$
$x_{T}\leftarrow s_{z_{T}}$
for $j=T,T-1,\ldots ,2$ do
$z_{j-1}\leftarrow T_{2}[z_{j},j]$
$x_{j-1}\leftarrow s_{z_{j-1}}$
end for
return $X$
end function
Restated in a succinct near-Python:
function viterbi$(O,S,\Pi ,Tm,Em):best\_path$  # Tm: transition matrix; Em: emission matrix
    $trellis\leftarrow matrix(length(S),length(O))$  # To hold probability of each state given each observation
    $pointers\leftarrow matrix(length(S),length(O))$  # To hold backpointer to best prior state
    for s in $range(length(S))$:  # Determine each hidden state's probability at time 0…
        $trellis[s,0]\leftarrow \Pi [s]\cdot Em[s,O[0]]$
    for o in $range(1,length(O))$:  # …and after, tracking each state's most likely prior state, k
        for s in $range(length(S))$:
            $k\leftarrow \arg \max(trellis[k,o-1]\cdot Tm[k,s]\cdot Em[s,O[o]]\ {\mathsf {for}}\ k\ {\mathsf {in}}\ range(length(S)))$
            $trellis[s,o]\leftarrow trellis[k,o-1]\cdot Tm[k,s]\cdot Em[s,O[o]]$
            $pointers[s,o]\leftarrow k$
    $best\_path\leftarrow list()$
    $k\leftarrow \arg \max(trellis[k,length(O)-1]\ {\mathsf {for}}\ k\ {\mathsf {in}}\ range(length(S)))$  # Find k of best final state
    for o in $range(length(O)-1,-1,-1)$:  # Backtrack from last observation
        $best\_path.insert(0,S[k])$  # Insert previous state on most likely path
        $k\leftarrow pointers[k,o]$  # Use backpointer to find best previous state
    return $best\_path$
Explanation
Suppose we are given a hidden Markov model (HMM) with state space $S$, initial probabilities $\pi _{i}$ of being in the hidden state $i$ and transition probabilities $a_{i,j}$ of transitioning from state $i$ to state $j$. Say, we observe outputs $y_{1},\dots ,y_{T}$. The most likely state sequence $x_{1},\dots ,x_{T}$ that produces the observations is given by the recurrence relations[10]
${\begin{aligned}V_{1,k}&=\mathrm {P} {\big (}y_{1}\mid k{\big )}\cdot \pi _{k},\\V_{t,k}&=\max _{x\in S}\left(\mathrm {P} {\big (}y_{t}\mid k{\big )}\cdot a_{x,k}\cdot V_{t-1,x}\right).\end{aligned}}$
Here $V_{t,k}$ is the probability of the most probable state sequence $\mathrm {P} {\big (}x_{1},\dots ,x_{t},y_{1},\dots ,y_{t}{\big )}$ responsible for the first $t$ observations that has $k$ as its final state. The Viterbi path can be retrieved by saving back pointers that remember which state $x$ was used in the second equation. Let $\mathrm {Ptr} (k,t)$ be the function that returns the value of $x$ used to compute $V_{t,k}$ if $t>1$, or $k$ if $t=1$. Then
${\begin{aligned}x_{T}&=\arg \max _{x\in S}(V_{T,x}),\\x_{t-1}&=\mathrm {Ptr} (x_{t},t).\end{aligned}}$
Here we're using the standard definition of arg max.
The complexity of this implementation is $O(T\times \left|{S}\right|^{2})$. A better running time exists if the maximum in the internal loop is instead found by iterating only over states that directly link to the current state (i.e., there is an edge from $k$ to $j$). Then using amortized analysis one can show that the complexity is $O(T\times (\left|{S}\right|+\left|{E}\right|))$, where $E$ is the number of edges in the graph.
Example
Consider a village where all villagers are either healthy or have a fever, and only the village doctor can determine whether each has a fever. The doctor diagnoses fever by asking patients how they feel. The villagers may only answer that they feel normal, dizzy, or cold.
The doctor believes that the health condition of the patients operates as a discrete Markov chain. There are two states, "Healthy" and "Fever", but the doctor cannot observe them directly; they are hidden from the doctor. On each day, there is a certain chance that a patient will tell the doctor "I feel normal", "I feel cold", or "I feel dizzy", depending on the patient's health condition.
The observations (normal, cold, dizzy) along with a hidden state (healthy, fever) form a hidden Markov model (HMM), and can be represented as follows in the Python programming language:
obs = ("normal", "cold", "dizzy")
states = ("Healthy", "Fever")
start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {
"Healthy": {"Healthy": 0.7, "Fever": 0.3},
"Fever": {"Healthy": 0.4, "Fever": 0.6},
}
emit_p = {
"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
"Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
}
In this piece of code, start_p represents the doctor's belief about which state the HMM is in when the patient first visits (all the doctor knows is that the patient tends to be healthy). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately {'Healthy': 0.57, 'Fever': 0.43}. The trans_p represents the change of the health condition in the underlying Markov chain. In this example, a patient who is healthy today has only a 30% chance of having a fever tomorrow. The emit_p represents how likely each possible observation (normal, cold, or dizzy) is, given the underlying condition (healthy or fever). A patient who is healthy has a 50% chance of feeling normal; one who has a fever has a 60% chance of feeling dizzy.
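The quoted equilibrium distribution can be checked by iterating the chain to its fixed point (a quick sketch, not part of the example code):

```python
# P(Healthy tomorrow) = 0.7 * P(Healthy today) + 0.4 * P(Fever today);
# iterating from any starting belief converges to the stationary distribution.
p_healthy = 0.6
for _ in range(100):
    p_healthy = 0.7 * p_healthy + 0.4 * (1 - p_healthy)
print(round(p_healthy, 2), round(1 - p_healthy, 2))  # -> 0.57 0.43
```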
A patient visits three days in a row, and the doctor discovers that the patient feels normal on the first day, cold on the second day, and dizzy on the third day. The doctor has a question: what is the most likely sequence of health conditions of the patient that would explain these observations? This is answered by the Viterbi algorithm.
def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{}]
    for st in states:
        V[0][st] = {"prob": start_p[st] * emit_p[st][obs[0]], "prev": None}
    # Run Viterbi when t > 0
    for t in range(1, len(obs)):
        V.append({})
        for st in states:
            max_tr_prob = V[t - 1][states[0]]["prob"] * trans_p[states[0]][st] * emit_p[st][obs[t]]
            prev_st_selected = states[0]
            for prev_st in states[1:]:
                tr_prob = V[t - 1][prev_st]["prob"] * trans_p[prev_st][st] * emit_p[st][obs[t]]
                if tr_prob > max_tr_prob:
                    max_tr_prob = tr_prob
                    prev_st_selected = prev_st
            max_prob = max_tr_prob
            V[t][st] = {"prob": max_prob, "prev": prev_st_selected}
    for line in dptable(V):
        print(line)
    opt = []
    max_prob = 0.0
    best_st = None
    # Get most probable state and its backtrack
    for st, data in V[-1].items():
        if data["prob"] > max_prob:
            max_prob = data["prob"]
            best_st = st
    opt.append(best_st)
    previous = best_st
    # Follow the backtrack till the first observation
    for t in range(len(V) - 2, -1, -1):
        opt.insert(0, V[t + 1][previous]["prev"])
        previous = V[t + 1][previous]["prev"]
    print("The steps of states are " + " ".join(opt) + " with highest probability of %s" % max_prob)

def dptable(V):
    # Print a table of steps from dictionary
    yield " " * 5 + "     ".join(("%3d" % i) for i in range(len(V)))
    for state in V[0]:
        yield "%.7s: " % state + " ".join("%.7s" % ("%lf" % v[state]["prob"]) for v in V)
The function viterbi takes the following arguments: obs is the sequence of observations, e.g. ['normal', 'cold', 'dizzy']; states is the set of hidden states; start_p is the start probability; trans_p are the transition probabilities; and emit_p are the emission probabilities. For simplicity of code, we assume that the observation sequence obs is non-empty and that trans_p[i][j] and emit_p[i][j] are defined for all states i, j.
In the running example, the forward/Viterbi algorithm is used as follows:
viterbi(obs, states, start_p, trans_p, emit_p)
The output of the script is
$ python viterbi_example.py
0 1 2
Healthy: 0.30000 0.08400 0.00588
Fever: 0.04000 0.02700 0.01512
The steps of states are Healthy Healthy Fever with highest probability of 0.01512
This reveals that the observations ['normal', 'cold', 'dizzy'] were most likely generated by states ['Healthy', 'Healthy', 'Fever']. In other words, given the observed activities, the patient was most likely to have been healthy on the first day and also on the second day (despite feeling cold that day), and only to have contracted a fever on the third day.
The operation of Viterbi's algorithm can be visualized by means of a trellis diagram. The Viterbi path is essentially the shortest path through this trellis.
Soft output Viterbi algorithm
The soft output Viterbi algorithm (SOVA) is a variant of the classical Viterbi algorithm.
SOVA differs from the classical Viterbi algorithm in that it uses a modified path metric which takes into account the a priori probabilities of the input symbols, and produces a soft output indicating the reliability of the decision.
The first step in the SOVA is the selection of the survivor path, passing through one unique node at each time instant, t. Since each node has 2 branches converging at it (with one branch being chosen to form the survivor path, and the other being discarded), the difference in the branch metrics (or cost) between the chosen and discarded branches indicates the amount of error in the choice.
This cost is accumulated over the entire sliding window (usually at least five constraint lengths) to indicate the soft-output measure of reliability of the hard bit decision of the Viterbi algorithm.
See also
• Expectation–maximization algorithm
• Baum–Welch algorithm
• Forward-backward algorithm
• Forward algorithm
• Error-correcting code
• Viterbi decoder
• Hidden Markov model
• Part-of-speech tagging
• A* search algorithm
References
1. Xavier Anguera et al., "Speaker Diarization: A Review of Recent Research", IEEE TASLP, retrieved 19 August 2010.
2. Forney, G. David Jr. (29 April 2005). The Viterbi Algorithm: A Personal History.
3. Daniel Jurafsky; James H. Martin. Speech and Language Processing. Pearson Education International. p. 246.
4. Schmid, Helmut (2004). Efficient parsing of highly ambiguous context-free grammars with bit vectors (PDF). Proc. 20th Int'l Conf. on Computational Linguistics (COLING). doi:10.3115/1220355.1220379.
5. Klein, Dan; Manning, Christopher D. (2003). A* parsing: fast exact Viterbi parse selection (PDF). Proc. 2003 Conf. of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL). pp. 40–47. doi:10.3115/1073445.1073461.
6. Stanke, M.; Keller, O.; Gunduz, I.; Hayes, A.; Waack, S.; Morgenstern, B. (2006). "AUGUSTUS: Ab initio prediction of alternative transcripts". Nucleic Acids Research. 34 (Web Server issue): W435–W439. doi:10.1093/nar/gkl200. PMC 1538822. PMID 16845043.
7. Quach, T.; Farooq, M. (1994). "Maximum Likelihood Track Formation with the Viterbi Algorithm". Proceedings of 33rd IEEE Conference on Decision and Control. Vol. 1. pp. 271–276. doi:10.1109/CDC.1994.410918.{{cite conference}}: CS1 maint: multiple names: authors list (link)
8. Qi Wang; Lei Wei; Rodney A. Kennedy (2002). "Iterative Viterbi Decoding, Trellis Shaping, and Multilevel Structure for High-Rate Parity-Concatenated TCM". IEEE Transactions on Communications. 50: 48–55. doi:10.1109/26.975743.
9. A fast maximum-likelihood decoder for convolutional codes (PDF). Vehicular Technology Conference. December 2002. pp. 371–375. doi:10.1109/VETECF.2002.1040367.
10. Xing E, slide 11.
General references
• Viterbi AJ (April 1967). "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm". IEEE Transactions on Information Theory. 13 (2): 260–269. doi:10.1109/TIT.1967.1054010. (note: the Viterbi decoding algorithm is described in section IV.) Subscription required.
• Feldman J, Abou-Faycal I, Frigo M (2002). "A fast maximum-likelihood decoder for convolutional codes". Proceedings IEEE 56th Vehicular Technology Conference. Vol. 1. pp. 371–375. CiteSeerX 10.1.1.114.1314. doi:10.1109/VETECF.2002.1040367. ISBN 978-0-7803-7467-6. S2CID 9783963.
• Forney GD (March 1973). "The Viterbi algorithm". Proceedings of the IEEE. 61 (3): 268–278. doi:10.1109/PROC.1973.9030. Subscription required.
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 16.2. Viterbi Decoding". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
• Rabiner LR (February 1989). "A tutorial on hidden Markov models and selected applications in speech recognition". Proceedings of the IEEE. 77 (2): 257–286. CiteSeerX 10.1.1.381.3454. doi:10.1109/5.18626. S2CID 13618539. (Describes the forward algorithm and Viterbi algorithm for HMMs).
• Shinghal, R. and Godfried T. Toussaint, "Experiments in text recognition with the modified Viterbi algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-l, April 1979, pp. 184–193.
• Shinghal, R. and Godfried T. Toussaint, "The sensitivity of the modified Viterbi algorithm to the source statistics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-2, March 1980, pp. 181–185.
External links
• Implementations in Java, F#, Clojure, C# on Wikibooks
• Tutorial on convolutional coding with viterbi decoding, by Chip Fleming
• A tutorial for a Hidden Markov Model toolkit (implemented in C) that contains a description of the Viterbi algorithm
• Viterbi algorithm by Dr. Andrew J. Viterbi (scholarpedia.org).
Implementations
• Mathematica has an implementation as part of its support for stochastic processes
• Susa signal processing framework provides the C++ implementation for Forward error correction codes and channel equalization here.
• C++
• C#
• Java
• Java 8
• Julia (HMMBase.jl)
• Perl
• Prolog
• Haskell
• Go
• SFIHMM includes code for Viterbi decoding.
Softmax function
The softmax function, also known as softargmax[1]: 184 or normalized exponential function,[2]: 198 converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom.
This article is about the smooth approximation of one-hot arg max. For the smooth approximation of max, see LogSumExp.
Definition
The softmax function takes as input a vector z of K real numbers, and normalizes it into a probability distribution consisting of K probabilities proportional to the exponentials of the input numbers. That is, prior to applying softmax, some vector components could be negative or greater than one, and might not sum to 1; but after applying softmax, each component will be in the interval $(0,1)$ and the components will add up to 1, so that they can be interpreted as probabilities. Furthermore, the larger input components will correspond to larger probabilities.
The standard (unit) softmax function $\sigma \colon \mathbb {R} ^{K}\to (0,1)^{K}$, where $K\geq 1$, is defined by the formula
$\sigma (\mathbf {z} )_{i}={\frac {e^{z_{i}}}{\sum _{j=1}^{K}e^{z_{j}}}}\ \ {\text{ for }}i=1,\dotsc ,K{\text{ and }}\mathbf {z} =(z_{1},\dotsc ,z_{K})\in \mathbb {R} ^{K}.$
In words, it applies the standard exponential function to each element $z_{i}$ of the input vector $\mathbf {z} $ and normalizes these values by dividing by the sum of all these exponentials. The normalization ensures that the sum of the components of the output vector $\sigma (\mathbf {z} )$ is 1. The term "softmax" derives from the amplifying effects of the exponential on any maxima in the input vector. For example, the standard softmax of $(1,2,8)$ is approximately $(0.001,0.002,0.997)$, which amounts to assigning almost all of the total unit weight in the result to the position of the vector's maximal element (of 8).
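This computation can be sketched directly; subtracting the maximum before exponentiating is a common numerical-stability step that leaves the result unchanged, since it scales numerator and denominator by the same constant:

```python
import math

def softmax(z):
    m = max(z)  # shift by the max: exp(z_i - m) avoids overflow, result unchanged
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

print([round(p, 3) for p in softmax([1, 2, 8])])  # -> [0.001, 0.002, 0.997]
```

The shift invariance also explains why softmax depends only on the differences between the inputs, not on their absolute values.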
In general, instead of e a different base b > 0 can be used. If 0 < b < 1, smaller input components will result in larger output probabilities, and decreasing the value of b will create probability distributions that are more concentrated around the positions of the smallest input values. Conversely, if b > 1, larger input components will result in larger output probabilities, and increasing the value of b will create probability distributions that are more concentrated around the positions of the largest input values. Writing $b=e^{\beta }$ or $b=e^{-\beta }$ (for real β) yields the expressions:
$\sigma (\mathbf {z} )_{i}={\frac {e^{\beta z_{i}}}{\sum _{j=1}^{K}e^{\beta z_{j}}}}{\text{ or }}\sigma (\mathbf {z} )_{i}={\frac {e^{-\beta z_{i}}}{\sum _{j=1}^{K}e^{-\beta z_{j}}}}{\text{ for }}i=1,\dotsc ,K.$
In some fields, the base is fixed, corresponding to a fixed scale, while in others the parameter β is varied.
Interpretations
Smooth arg max
See also: Arg max
The name "softmax" is misleading. The function is not a smooth maximum (that is, a smooth approximation to the maximum function), but is rather a smooth approximation to the arg max function: the function whose value is the index of a vector's largest element. In fact, the term "softmax" is also used for the closely related LogSumExp function, which is a smooth maximum. For this reason, some prefer the more accurate term "softargmax", but the term "softmax" is conventional in machine learning.[3][4] This section uses the term "softargmax" to emphasize this interpretation.
Formally, instead of considering the arg max as a function with categorical output $1,\dots ,n$ (corresponding to the index), consider the arg max function with one-hot representation of the output (assuming there is a unique maximum arg):
$\operatorname {arg\,max} (z_{1},\,\dots ,\,z_{n})=(y_{1},\,\dots ,\,y_{n})=(0,\,\dots ,\,0,\,1,\,0,\,\dots ,\,0),$
where the output coordinate $y_{i}=1$ if and only if $i$ is the arg max of $(z_{1},\dots ,z_{n})$, meaning $z_{i}$ is the unique maximum value of $(z_{1},\,\dots ,\,z_{n})$. For example, in this encoding $\operatorname {arg\,max} (1,5,10)=(0,0,1),$ since the third argument is the maximum.
This can be generalized to multiple arg max values (multiple equal $z_{i}$ being the maximum) by dividing the 1 between all max args; formally 1/k where k is the number of arguments assuming the maximum. For example, $\operatorname {arg\,max} (1,\,5,\,5)=(0,\,1/2,\,1/2),$ since the second and third argument are both the maximum. In case all arguments are equal, this is simply $\operatorname {arg\,max} (z,\dots ,z)=(1/n,\dots ,1/n).$ Points z with multiple arg max values are singular points (or singularities, and form the singular set) – these are the points where arg max is discontinuous (with a jump discontinuity) – while points with a single arg max are known as non-singular or regular points.
With the last expression given in the introduction, softargmax is now a smooth approximation of arg max: as $\beta \to \infty $, softargmax converges to arg max. There are various notions of convergence of a function; softargmax converges to arg max pointwise, meaning for each fixed input z as $\beta \to \infty $, $\sigma _{\beta }(\mathbf {z} )\to \operatorname {arg\,max} (\mathbf {z} ).$ However, softargmax does not converge uniformly to arg max, meaning intuitively that different points converge at different rates, and may converge arbitrarily slowly. In fact, softargmax is continuous, but arg max is not continuous at the singular set where two coordinates are equal, while the uniform limit of continuous functions is continuous. The reason it fails to converge uniformly is that for inputs where two coordinates are almost equal (and one is the maximum), the arg max is the index of one or the other, so a small change in input yields a large change in output. For example, $\sigma _{\beta }(1,\,1.0001)\to (0,\,1),$ but $\sigma _{\beta }(1,\,0.9999)\to (1,\,0),$ and $\sigma _{\beta }(1,\,1)=(1/2,\,1/2)$ for all $\beta $: the closer the points are to the singular set $(x,x)$, the slower they converge. However, softargmax does converge compactly on the non-singular set.
Conversely, as $\beta \to -\infty $, softargmax converges to arg min in the same way, where here the singular set is points with two arg min values. In the language of tropical analysis, the softmax is a deformation or "quantization" of arg max and arg min, corresponding to using the log semiring instead of the max-plus semiring (respectively min-plus semiring), and recovering the arg max or arg min by taking the limit is called "tropicalization" or "dequantization".
It is also the case that, for any fixed β, if one input $z_{i}$ is much larger than the others relative to the temperature, $T=1/\beta $, the output is approximately the arg max. For example, a difference of 10 is large relative to a temperature of 1:
$\sigma (0,\,10):=\sigma _{1}(0,\,10)=\left(1/\left(1+e^{10}\right),\,e^{10}/\left(1+e^{10}\right)\right)\approx (0.00005,\,0.99995)$
However, if the difference is small relative to the temperature, the value is not close to the arg max. For example, a difference of 10 is small relative to a temperature of 100:
$\sigma _{1/100}(0,\,10)=\left(1/\left(1+e^{1/10}\right),\,e^{1/10}/\left(1+e^{1/10}\right)\right)\approx (0.475,\,0.525).$
As $\beta \to \infty $, temperature goes to zero, $T=1/\beta \to 0$, so eventually all differences become large (relative to a shrinking temperature), which gives another interpretation for the limit behavior.
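The two worked computations above can be reproduced directly; a small Python sketch (the function name is illustrative):

```python
import numpy as np

def softmax_temp(z, T=1.0):
    """Softmax at temperature T (i.e. beta = 1/T)."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())  # stability shift; does not change the output
    return e / e.sum()

# difference of 10 is large relative to temperature 1 ...
print(softmax_temp([0.0, 10.0], T=1))    # ~ (0.00005, 0.99995): near arg max
# ... but small relative to temperature 100
print(softmax_temp([0.0, 10.0], T=100))  # ~ (0.475, 0.525): near uniform
```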
Probability theory
In probability theory, the output of the softargmax function can be used to represent a categorical distribution – that is, a probability distribution over K different possible outcomes.
Statistical mechanics
In statistical mechanics, the softargmax function is known as the Boltzmann distribution (or Gibbs distribution):[5]: 7 the index set $\{1,\,\dots ,\,k\}$ consists of the microstates of the system; the inputs $z_{i}$ are the energies of those states; the denominator is known as the partition function, often denoted by Z; and the factor β is called the coldness (or thermodynamic beta, or inverse temperature).
Applications
The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression)[2]: 206–209 , multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks.[6] Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the jth class given a sample vector x and a weighting vector w is:
$P(y=j\mid \mathbf {x} )={\frac {e^{\mathbf {x} ^{\mathsf {T}}\mathbf {w} _{j}}}{\sum _{k=1}^{K}e^{\mathbf {x} ^{\mathsf {T}}\mathbf {w} _{k}}}}$
This can be seen as the composition of K linear functions $\mathbf {x} \mapsto \mathbf {x} ^{\mathsf {T}}\mathbf {w} _{1},\ldots ,\mathbf {x} \mapsto \mathbf {x} ^{\mathsf {T}}\mathbf {w} _{K}$ and the softmax function (where $\mathbf {x} ^{\mathsf {T}}\mathbf {w} $ denotes the inner product of $\mathbf {x} $ and $\mathbf {w} $). The operation is equivalent to applying a linear operator defined by the weight vectors $\mathbf {w} _{k}$ to vectors $\mathbf {x} $, thus transforming the original, possibly high-dimensional, input to vectors in a K-dimensional space $\mathbb {R} ^{K}$.
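As an illustration of this prediction step, here is a minimal Python sketch; the input x and weight matrix W are made up, and the name `predict_proba` is not from the source:

```python
import numpy as np

def predict_proba(x, W):
    """Class probabilities for softmax (multinomial logistic) regression.
    W has one weight column per class; scores are the K inner products x^T w_k."""
    scores = x @ W              # shape (K,): one linear score per class
    scores -= scores.max()      # stability shift; the output is unchanged
    e = np.exp(scores)
    return e / e.sum()

# toy example: 2-dimensional input, K = 3 classes (weights are made up)
x = np.array([1.0, 2.0])
W = np.array([[0.5, -0.2, 0.1],
              [0.3,  0.8, -0.5]])
p = predict_proba(x, W)
print(p, p.sum())  # probabilities over the 3 classes, summing to 1
```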
Neural networks
The standard softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression.
Since the function maps a vector and a specific index $i$ to a real value, the derivative needs to take the index into account:
${\frac {\partial }{\partial q_{k}}}\sigma ({\textbf {q}},i)=\sigma ({\textbf {q}},i)(\delta _{ik}-\sigma ({\textbf {q}},k)).$
This expression is symmetrical in the indexes $i,k$ and thus may also be expressed as
${\frac {\partial }{\partial q_{k}}}\sigma ({\textbf {q}},i)=\sigma ({\textbf {q}},k)(\delta _{ik}-\sigma ({\textbf {q}},i)).$
Here, the Kronecker delta is used for simplicity (cf. the derivative of a sigmoid function, being expressed via the function itself).
In order to achieve stable numerical computations of the derivative, one often subtracts a constant from the input vector. In theory, this changes neither the output nor the derivative, but in practice it is more stable, since it explicitly controls the largest value that appears in any exponent.
If the function is scaled with the parameter $\beta $, then these expressions must be multiplied by $\beta $.
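The derivative formula and the stability shift can be illustrated together; a Python sketch (function names are illustrative), with a finite-difference check of one Jacobian entry:

```python
import numpy as np

def softmax(q):
    e = np.exp(q - np.max(q))  # subtract max: output and derivative unchanged
    return e / e.sum()

def softmax_jacobian(q):
    """J[i, k] = d sigma_i / d q_k = sigma_i (delta_ik - sigma_k)."""
    s = softmax(q)
    return np.diag(s) - np.outer(s, s)

q = np.array([1.0, 2.0, 0.5])
J = softmax_jacobian(q)

# finite-difference check of the entry d sigma_0 / d q_1
eps = 1e-6
num = (softmax(q + eps * np.eye(3)[1])[0] - softmax(q)[0]) / eps
print(np.isclose(J[0, 1], num, atol=1e-5))  # True
```

Because the outputs always sum to 1, each column of the Jacobian sums to 0.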
See multinomial logit for a probability model which uses the softmax activation function.
Reinforcement learning
In the field of reinforcement learning, a softmax function can be used to convert values into action probabilities. The function commonly used is:[7]
$P_{t}(a)={\frac {\exp(q_{t}(a)/\tau )}{\sum _{i=1}^{n}\exp(q_{t}(i)/\tau )}}{\text{,}}$
where the action value $q_{t}(a)$ corresponds to the expected reward of following action a and $\tau $ is called a temperature parameter (in allusion to statistical mechanics). For high temperatures ($\tau \to \infty $), all actions have nearly the same probability and the lower the temperature, the more expected rewards affect the probability. For a low temperature ($\tau \to 0^{+}$), the probability of the action with the highest expected reward tends to 1.
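A minimal Python sketch of this action-selection rule, with made-up action values (the function name is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_policy(q_values, tau):
    """Convert estimated action values into selection probabilities at temperature tau."""
    z = np.asarray(q_values, dtype=float) / tau
    e = np.exp(z - z.max())  # stability shift
    return e / e.sum()

q = [1.0, 2.0, 0.0]            # estimated action values (made-up numbers)
for tau in (10.0, 1.0, 0.1):   # high tau -> near-uniform, low tau -> near-greedy
    print(tau, np.round(softmax_policy(q, tau), 3))

action = rng.choice(len(q), p=softmax_policy(q, 1.0))  # sample an action
```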
Computational complexity and remedies
In neural network applications, the number K of possible outcomes is often large, e.g. in the case of neural language models that predict the most likely outcome out of a vocabulary which might contain millions of possible words.[8] This can make the calculations for the softmax layer (i.e. the matrix multiplications to determine the $z_{i}$, followed by the application of the softmax function itself) computationally expensive.[8][9] Moreover, the gradient descent backpropagation method for training such a neural network involves calculating the softmax for every training example, and the number of training examples can also become large. The computational effort for the softmax became a major limiting factor in the development of larger neural language models, motivating various remedies to reduce training times.[8][9]
Approaches that reorganize the softmax layer for more efficient calculation include the hierarchical softmax and the differentiated softmax.[8] The hierarchical softmax (introduced by Morin and Bengio in 2005) uses a binary tree structure where the outcomes (vocabulary words) are the leaves and the intermediate nodes are suitably selected "classes" of outcomes, forming latent variables.[9][10] The desired probability (softmax value) of a leaf (outcome) can then be calculated as the product of the probabilities of all nodes on the path from the root to that leaf.[9] Ideally, when the tree is balanced, this would reduce the computational complexity from $O(K)$ to $O(\log _{2}K)$.[10] In practice, results depend on choosing a good strategy for clustering the outcomes into classes.[9][10] A Huffman tree was used for this in Google's word2vec models (introduced in 2013) to achieve scalability.[8]
A second kind of remedy is based on approximating the softmax (during training) with modified loss functions that avoid the calculation of the full normalization factor.[8] These include methods that restrict the normalization sum to a sample of outcomes (e.g. importance sampling, target sampling).[8][9]
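As an illustration of the hierarchical idea, the following Python sketch builds a fixed balanced tree over four outcomes with made-up node parameters; each internal node contributes a sigmoid factor, and a leaf's probability is the product along its root-to-leaf path. This is a toy sketch of the principle, not the Morin–Bengio implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Balanced binary tree over K = 4 outcomes (leaves). Each internal node n has a
# parameter vector v_n; P(go left at n) = sigmoid(v_n . h). Parameters are made up.
h = np.array([0.5, -1.0])                 # hidden representation of the context
nodes = {"root": np.array([0.2, 0.4]),    # decides between {0, 1} and {2, 3}
         "left": np.array([-0.3, 0.1]),   # decides between leaf 0 and leaf 1
         "right": np.array([0.7, -0.2])}  # decides between leaf 2 and leaf 3

# path to each leaf: (node, branch) pairs, branch = +1 for left, -1 for right
paths = {0: [("root", +1), ("left", +1)],
         1: [("root", +1), ("left", -1)],
         2: [("root", -1), ("right", +1)],
         3: [("root", -1), ("right", -1)]}

def leaf_prob(k):
    """Probability of outcome k = product of branch probabilities on its path."""
    p = 1.0
    for node, branch in paths[k]:
        p *= sigmoid(branch * (nodes[node] @ h))  # sigmoid(-x) = 1 - sigmoid(x)
    return p

probs = [leaf_prob(k) for k in range(4)]
print(probs, sum(probs))  # the four leaf probabilities sum to 1
```

Only the O(log K) nodes on one path are needed to score a single outcome, which is the source of the speedup.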
Mathematical properties
Geometrically the softmax function maps the vector space $\mathbb {R} ^{K}$ to the relative interior of the standard $(K-1)$-simplex, cutting the dimension by one (the range is a $(K-1)$-dimensional simplex in $K$-dimensional space), due to the linear constraint that all outputs sum to 1, meaning the range lies on a hyperplane.
Along the main diagonal $(x,\,x,\,\dots ,\,x),$ softmax is just the uniform distribution on outputs, $(1/n,\dots ,1/n)$: equal scores yield equal probabilities.
More generally, softmax is invariant under translation by the same value in each coordinate: adding $\mathbf {c} =(c,\,\dots ,\,c)$ to the inputs $\mathbf {z} $ yields $\sigma (\mathbf {z} +\mathbf {c} )=\sigma (\mathbf {z} )$, because it multiplies each exponent by the same factor, $e^{c}$ (because $e^{z_{i}+c}=e^{z_{i}}\cdot e^{c}$), so the ratios do not change:
$\sigma (\mathbf {z} +\mathbf {c} )_{j}={\frac {e^{z_{j}+c}}{\sum _{k=1}^{K}e^{z_{k}+c}}}={\frac {e^{z_{j}}\cdot e^{c}}{\sum _{k=1}^{K}e^{z_{k}}\cdot e^{c}}}=\sigma (\mathbf {z} )_{j}.$
Geometrically, softmax is constant along diagonals: this is the dimension that is eliminated, and corresponds to the softmax output being independent of a translation in the input scores (a choice of 0 score). One can normalize input scores by assuming that the sum is zero (subtract the average: $\mathbf {c} $ where $ c={\frac {1}{n}}\sum z_{i}$), and then the softmax takes the hyperplane of points that sum to zero, $ \sum z_{i}=0$, to the open simplex of positive values that sum to 1, $ \sum \sigma (\mathbf {z} )_{i}=1$, analogously to how the exponential function takes 0 to 1, $e^{0}=1$, and is positive.
By contrast, softmax is not invariant under scaling. For instance, $\sigma {\bigl (}(0,\,1){\bigr )}={\bigl (}1/(1+e),\,e/(1+e){\bigr )}$ but $\sigma {\bigl (}(0,2){\bigr )}={\bigl (}1/\left(1+e^{2}\right),\,e^{2}/\left(1+e^{2}\right){\bigr )}.$
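Both properties, invariance under translation and non-invariance under scaling, are easy to verify numerically; a brief Python sketch:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([1.0, 2.0, 3.0])
print(np.allclose(softmax(z), softmax(z + 100.0)))  # True: shift-invariant
print(np.allclose(softmax(z), softmax(2.0 * z)))    # False: not scale-invariant
```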
The standard logistic function is the special case for a 1-dimensional axis in 2-dimensional space, say the x-axis in the (x, y) plane. One variable is fixed at 0 (say $z_{2}=0$), so $e^{0}=1$, and the other variable can vary, denote it $z_{1}=x$, so $ e^{z_{1}}/\sum _{k=1}^{2}e^{z_{k}}=e^{x}/\left(e^{x}+1\right),$ the standard logistic function, and $ e^{z_{2}}/\sum _{k=1}^{2}e^{z_{k}}=1/\left(e^{x}+1\right),$ its complement (meaning they add up to 1). The 1-dimensional input could alternatively be expressed as the line $(x/2,\,-x/2)$, with outputs $e^{x/2}/\left(e^{x/2}+e^{-x/2}\right)=e^{x}/\left(e^{x}+1\right)$ and $e^{-x/2}/\left(e^{x/2}+e^{-x/2}\right)=1/\left(e^{x}+1\right).$
The softmax function is also the gradient of the LogSumExp function, a smooth maximum:
${\frac {\partial }{\partial z_{i}}}\operatorname {LSE} (\mathbf {z} )={\frac {\exp z_{i}}{\sum _{j=1}^{K}\exp z_{j}}}=\sigma (\mathbf {z} )_{i},\quad {\text{ for }}i=1,\dotsc ,K,\quad \mathbf {z} =(z_{1},\,\dotsc ,\,z_{K})\in \mathbb {R} ^{K},$
where the LogSumExp function is defined as $\operatorname {LSE} (z_{1},\,\dots ,\,z_{n})=\log \left(\exp(z_{1})+\cdots +\exp(z_{n})\right)$.
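This gradient identity can be checked with central finite differences; a Python sketch (function names are illustrative):

```python
import numpy as np

def logsumexp(z):
    m = np.max(z)
    return m + np.log(np.sum(np.exp(z - m)))  # stable log(sum(exp(z)))

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([0.5, 1.5, -1.0])
eps = 1e-6
# central differences approximate the gradient of LSE in each coordinate
grad = np.array([(logsumexp(z + eps * e_i) - logsumexp(z - eps * e_i)) / (2 * eps)
                 for e_i in np.eye(len(z))])
print(np.allclose(grad, softmax(z), atol=1e-8))  # True
```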
History
The softmax function was used in statistical mechanics as the Boltzmann distribution in the foundational paper Boltzmann (1868),[11] formalized and popularized in the influential textbook Gibbs (1902).[12]
The use of the softmax in decision theory is credited to Luce (1959),[13]: 1 who used the axiom of independence of irrelevant alternatives in rational choice theory to deduce the softmax in Luce's choice axiom for relative preferences.
In machine learning, the term "softmax" is credited to John S. Bridle in two 1989 conference papers, Bridle (1990a):[13]: 1 and Bridle (1990b):[3]
We are concerned with feed-forward non-linear networks (multi-layer perceptrons, or MLPs) with multiple outputs. We wish to treat the outputs of the network as probabilities of alternatives (e.g. pattern classes), conditioned on the inputs. We look for appropriate output non-linearities and for appropriate criteria for adaptation of the parameters of the network (e.g. weights). We explain two modifications: probability scoring, which is an alternative to squared error minimisation, and a normalised exponential (softmax) multi-input generalisation of the logistic non-linearity.[14]: 227
For any input, the outputs must all be positive and they must sum to unity. ...
Given a set of unconstrained values, $V_{j}(x)$, we can ensure both conditions by using a Normalised Exponential transformation:
$Q_{j}(x)=\left.e^{V_{j}(x)}\right/\sum _{k}e^{V_{k}(x)}$
This transformation can be considered a multi-input generalisation of the logistic, operating on the whole output layer. It preserves the rank order of its input values, and is a differentiable generalisation of the 'winner-take-all' operation of picking the maximum value. For this reason we like to refer to it as softmax.[15]: 213
Example
If we take an input of [1, 2, 3, 4, 1, 2, 3], the softmax of that is [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175]. The output has most of its weight where the "4" was in the original input. This is what the function is normally used for: to highlight the largest values and suppress values which are significantly below the maximum value. But note: softmax is not scale invariant, so if the input were [0.1, 0.2, 0.3, 0.4, 0.1, 0.2, 0.3] (which sums to 1.6) the softmax would be [0.125, 0.138, 0.153, 0.169, 0.125, 0.138, 0.153]. This shows that for values between 0 and 1, softmax in fact de-emphasizes the maximum value (note that 0.169 is not only less than 0.475, it is also less than the initial proportion of 0.4/1.6 = 0.25).
Computation of this example using Python code:
>>> import numpy as np
>>> a = [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0]
>>> np.exp(a) / np.sum(np.exp(a))
array([0.02364054, 0.06426166, 0.1746813, 0.474833, 0.02364054,
0.06426166, 0.1746813])
Here is an example of Julia code:
julia> A = [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0]; # semicolon to suppress interactive output
julia> exp.(A) ./ sum(exp, A)
7-element Array{Float64,1}:
0.0236405
0.0642617
0.174681
0.474833
0.0236405
0.0642617
0.174681
Here is an example of R code:
> z <- c(1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0)
> softmax <- exp(z)/sum(exp(z))
> softmax
[1] 0.02364054 0.06426166 0.17468130 0.47483300 0.02364054 0.06426166 0.17468130
Here is an example of Elixir code:[16]
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> Nx.divide(Nx.exp(t), Nx.sum(Nx.exp(t)))
#Nx.Tensor<
f64[2][2]
[
[0.03205860328008499, 0.08714431874203257],
[0.23688281808991013, 0.6439142598879722]
]
>
Here is an example of Raku code:
> my @z = [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0];
> say @z.map: {exp($_)/sum(@z.map: {exp($_)})}
(0.023640543021591385 0.06426165851049616 0.17468129859572226 0.4748329997443803 0.023640543021591385 0.06426165851049616 0.17468129859572226)
See also
• Softplus
• Multinomial logistic regression
• Dirichlet distribution – an alternative way to sample categorical distributions
• Partition function
• Exponential tilting – a generalization of Softmax to more general probability distributions
Notes
1. Positive β corresponds to the maximum convention, and is usual in machine learning, corresponding to the highest score having highest probability. The negative −β corresponds to the minimum convention, and is conventional in thermodynamics, corresponding to the lowest energy state having the highest probability; this matches the convention in the Gibbs distribution, interpreting β as coldness.
2. The notation β is for the thermodynamic beta, which is inverse temperature: $\beta =1/T$, $T=1/\beta .$
3. For $\beta =0$ (coldness zero, infinite temperature), $b=e^{\beta }=e^{0}=1$, and this becomes the constant function $(1/n,\dots ,1/n)$, corresponding to the discrete uniform distribution.
4. In statistical mechanics, fixing β is interpreted as having coldness and temperature of 1.
References
1. Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). "6.2.2.3 Softmax Units for Multinoulli Output Distributions". Deep Learning. MIT Press. pp. 180–184. ISBN 978-0-26203561-3.
2. Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer. ISBN 0-387-31073-8.
3. Sako, Yusaku (2018-06-02). "Is the term "softmax" driving you nuts?". Medium.
4. Goodfellow, Bengio & Courville 2016, pp. 183–184: The name "softmax" can be somewhat confusing. The function is more closely related to the arg max function than the max function. The term "soft" derives from the fact that the softmax function is continuous and differentiable. The arg max function, with its result represented as a one-hot vector, is not continuous or differentiable. The softmax function thus provides a "softened" version of the arg max. The corresponding soft version of the maximum function is $\operatorname {softmax} (\mathbf {z} )^{\top }\mathbf {z} $. It would perhaps be better to call the softmax function "softargmax," but the current name is an entrenched convention.
5. LeCun, Yann; Chopra, Sumit; Hadsell, Raia; Ranzato, Marc’Aurelio; Huang, Fu Jie (2006). "A Tutorial on Energy-Based Learning" (PDF). In Gökhan Bakır; Thomas Hofmann; Bernhard Schölkopf; Alexander J. Smola; Ben Taskar; S.V.N Vishwanathan (eds.). Predicting Structured Data. Neural Information Processing series. MIT Press. ISBN 978-0-26202617-8.
6. ai-faq What is a softmax activation function?
7. Sutton, R. S. and Barto A. G. Reinforcement Learning: An Introduction. The MIT Press, Cambridge, MA, 1998. Softmax Action Selection
8. Onal, Kezban Dilek; Zhang, Ye; Altingovde, Ismail Sengor; Rahman, Md Mustafizur; Karagoz, Pinar; Braylan, Alex; Dang, Brandon; Chang, Heng-Lu; Kim, Henna; McNamara, Quinten; Angert, Aaron (2018-06-01). "Neural information retrieval: at the end of the early years". Information Retrieval Journal. 21 (2): 111–182. doi:10.1007/s10791-017-9321-y. ISSN 1573-7659. S2CID 21684923.
9. Chen, Wenlin; Grangier, David; Auli, Michael (August 2016). "Strategies for Training Large Vocabulary Neural Language Models". Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany: Association for Computational Linguistics: 1975–1985. doi:10.18653/v1/P16-1186. S2CID 6035643.
10. Morin, Frederic; Bengio, Yoshua (2005-01-06). "Hierarchical Probabilistic Neural Network Language Model" (PDF). International Workshop on Artificial Intelligence and Statistics. PMLR: 246–252.
11. Boltzmann, Ludwig (1868). "Studien über das Gleichgewicht der lebendigen Kraft zwischen bewegten materiellen Punkten" [Studies on the balance of living force between moving material points]. Wiener Berichte. 58: 517–560.
12. Gibbs, Josiah Willard (1902). Elementary Principles in Statistical Mechanics.
13. Gao, Bolin; Pavel, Lacra (2017). "On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning". arXiv:1704.00805 [math.OC].
14. Bridle, John S. (1990a). Soulié F.F.; Hérault J. (eds.). Probabilistic Interpretation of Feedforward Classification Network Outputs, with Relationships to Statistical Pattern Recognition. Neurocomputing: Algorithms, Architectures and Applications (1989). NATO ASI Series (Series F: Computer and Systems Sciences). Vol. 68. Berlin, Heidelberg: Springer. pp. 227–236. doi:10.1007/978-3-642-76153-9_28.
15. Bridle, John S. (1990b). D. S. Touretzky (ed.). Training Stochastic Model Recognition Algorithms as Networks can Lead to Maximum Mutual Information Estimation of Parameters. Advances in Neural Information Processing Systems 2 (1989). Morgan-Kaufmann.
16. "Nx/Nx at main · elixir-nx/Nx". GitHub.
Sofya Kovalevskaya
Sofya Vasilyevna Kovalevskaya (Russian: Софья Васильевна Ковалевская), born Korvin-Krukovskaya (15 January [O.S. 3 January] 1850 – 10 February 1891), was a Russian mathematician who made noteworthy contributions to analysis, partial differential equations and mechanics. She was a pioneer for women in mathematics around the world – the first woman to obtain a doctorate (in the modern sense) in mathematics, the first woman appointed to a full professorship in northern Europe and one of the first women to work for a scientific journal as an editor.[1] According to historian of science Ann Hibner Koblitz, Kovalevskaya was "the greatest known woman scientist before the twentieth century".[2]: 255
Sofya Kovalevskaya (Софья Ковалевская)
[Photograph: Kovalevskaya in 1880]
Born: Sofya Vasilyevna Korvin-Krukovskaya, 15 January 1850, Moscow, Russia
Died: 10 February 1891 (aged 41), Stockholm, Sweden
Resting place: Norra begravningsplatsen
Other names: Sophie Kowalevski, Sophie Kowalevsky
Alma mater: University of Göttingen (PhD)
Known for: Cauchy–Kowalevski theorem, Kovalevskaya top
Spouse: Vladimir Kovalevskij (m. 1868; died 1883)
Children: Sofia (1878)
Fields: Mathematics, mechanics
Institutions: Stockholm University, Russian Academy of Sciences
Thesis (1874)
Doctoral advisor: Karl Weierstrass
Historian of mathematics Roger Cooke writes:
... the more I reflect on her life and consider the magnitude of her achievements, set against the weight of the obstacles she had to overcome, the more I admire her. For me she has taken on a heroic stature achieved by very few other people in history. To venture, as she did, into academia, a world almost no woman had yet explored, and to be consequently the object of curious scrutiny, while a doubting society looked on, half-expecting her to fail, took tremendous courage and determination. To achieve, as she did, at least two major results of lasting value to scholarship, is evidence of a considerable talent, developed through iron discipline.[3]: 1
Her sister was the socialist Anne Jaclard.
There are several alternative transliterations of her name. She herself used Sophie Kowalevski (or occasionally Kowalevsky) in her academic publications.
Background and early education
Sofya Kovalevskaya (née Korvin-Krukovskaya) was born in Moscow, the second of three children. Her father, Lieutenant General Vasily Vasilyevich Korvin-Krukovsky, served in the Imperial Russian Army as head of the Moscow Artillery before retiring to Polibino, his family estate in Pskov Oblast in 1858, when Kovalevskaya was eight years old. He was a member of the minor nobility, of mixed (Bela)Russian–Polish descent (Polish on his father's side), with possible partial ancestry from the royal Corvin family of Hungary, and served as Marshal of Nobility for Vitebsk province. (There may also have been some Romani ancestry on the father's side.[4])
Her mother, Yelizaveta Fedorovna Shubert (Schubert), descended from a family of German immigrants to St. Petersburg who lived on Vasilievsky Island. Her maternal great-grandfather was the astronomer and geographer Friedrich Theodor Schubert (1758−1825), who emigrated to Russia from Germany around 1785. He became a full member of the St. Petersburg Academy of Science and head of its astronomical observatory. His son, Kovalevskaya's maternal grandfather, was General Theodor Friedrich von Schubert (1789−1865), who was head of the military topographic service, and an honorary member of the Russian Academy of Sciences, as well as Director of the Kunstkamera museum.
Kovalevskaya's parents provided her with a good early education. At various times, her governesses were native speakers of English, French, and German. When she was 11 years old, she was intrigued by a foretaste of what she was to learn later in her lessons in calculus; the wall of her room had been papered with pages from lecture notes by Ostrogradsky, left over from her father's student days.[5] She was tutored privately in elementary mathematics by Iosif Ignatevich Malevich.
The physicist Nikolai Nikanorovich Tyrtov noted her unusual aptitude when she managed to understand his textbook by discovering for herself an approximate construction of trigonometric functions which she had not yet encountered in her studies.[6] Tyrtov called her a "new Pascal" and suggested she be given a chance to pursue further studies under the tutelage of N. Strannoliubskii.[7] In 1866–67 she spent much of the winter with her family in St. Petersburg, where she received private tutoring from Strannoliubskii, a well-known advocate of higher education for women, who taught her calculus. During that same period, the son of a local priest introduced her sister Anna to progressive ideas influenced by the radical movement of the 1860s, providing her with copies of radical journals of the time discussing Russian nihilism.[8]
Although the word nihilist (нигилист) often was used in a negative sense, it did not have that meaning for the young Russians of the 1860s (шестидесятники):
After the famous writer Ivan Turgenev used the word nihilist to refer to Bazarov, the young hero of his 1862 novel Fathers and Children, a certain segment of the "new people" adopted that name as well, despite its negative connotations in most quarters.... For the nihilists, science appeared to be the most effective means of helping the mass of people to a better life. Science pushed back the barriers of religion and superstition, and "proved" through the theory of evolution that (peaceful) social revolutions were the way of nature. For the early nihilists, science was virtually synonymous with truth, progress and radicalism; thus, the pursuit of a scientific career was viewed in no way as a hindrance to social activism. In fact, it was seen as a positive boost to progressive forces, an active blow against backwardness.[9]: 2–4
Despite her obvious talent for mathematics, she could not complete her education in Russia. At that time, women were not allowed to attend universities in Russia and most other countries. In order to study abroad, Kovalevskaya needed written permission from her father (or husband). Accordingly, in 1868 she contracted a "fictitious marriage" with Vladimir Kovalevskij, a young paleontology student, book publisher and radical, who was the first to translate and publish the works of Charles Darwin in Russia. They moved from Russia to Germany in 1869, after a brief stay in Vienna, in order to pursue advanced studies.[10]
Student years
In April 1869, following Sofia's and Vladimir's brief stay in Vienna, where she attended lectures in physics at the university, they moved to Heidelberg. Through great efforts, she obtained permission to audit classes with the professors' approval at the University of Heidelberg. There she attended courses in physics and mathematics under such teachers as Hermann von Helmholtz, Gustav Kirchhoff and Robert Bunsen.[2]: 87–89 Vladimir, meanwhile, went on to the University of Jena to pursue a doctorate in paleontology.
In October 1869, shortly after attending courses in Heidelberg, she visited London with Vladimir, who spent time with his colleagues Thomas Huxley and Charles Darwin, while she was invited to attend George Eliot's Sunday salons.[10] There, at age nineteen, she met Herbert Spencer and was led into a debate, at Eliot's instigation, on "woman's capacity for abstract thought". Although there is no record of the details of their conversation, she had just completed a lecture course in Heidelberg on mechanics, and she may just possibly have made mention of the Euler equations governing the motion of a rigid body (see following section). George Eliot was writing Middlemarch at the time, in which one finds the remarkable sentence: "In short, woman was a problem which, since Mr. Brooke's mind felt blank before it, could hardly be less complicated than the revolutions of an irregular solid."[11] This was well before she made her notable contribution of the "Kovalevskaya top" to the brief list of known examples of integrable rigid body motion (see following section).
In October 1870, Kovalevskaya moved to Berlin, where she began to take private lessons with Karl Weierstrass, since the university would not allow her even to audit classes. He was very impressed with her mathematical skills, and over the subsequent three years taught her the same material that comprised his lectures at the university.
In 1871 she traveled briefly to Paris with Vladimir in order to help in the Paris Commune, where Kovalevskaya attended to the injured and her sister Anyuta was active in the Commune.[2]: 104–106 With the fall of the Commune, however, both Anyuta and her common-law husband Victor Jaclard, who was leader of the Montmartre contingent of the National Guard and a prominent Blanquiste, were arrested. Although Anyuta managed to escape to London, Jaclard was sentenced to execution. However, with the assistance of Sofia's and Anyuta's father General Krukovsky, who had come urgently to Paris to help Anyuta and who wrote to Adolphe Thiers asking for clemency, they managed to save Victor Jaclard.[2]: 107–108
Kovalevskaya returned to Berlin and continued her studies with Weierstrass for three more years. In 1874 she presented three papers—on partial differential equations, on the dynamics of Saturn's rings, and on elliptic integrals—to the University of Göttingen as her doctoral dissertation. With the support of Weierstrass, this earned her a doctorate in mathematics summa cum laude, after Weierstrass succeeded in having her exempted from the usual oral examinations.[10]
Kovalevskaya thereby became the first woman to have been awarded a doctorate (in the modern sense of the word) in mathematics. Her paper on partial differential equations contains what is now commonly known as the Cauchy–Kovalevskaya theorem, which proves the existence and analyticity of local solutions to such equations under suitably defined initial/boundary conditions.
Last years in Germany and Sweden
In 1874, Kovalevskaya and her husband Vladimir returned to Russia, but Vladimir failed to secure a professorship because of his radical beliefs. (Kovalevskaya never would have been considered for such a position because of her sex.) During this time they tried a variety of schemes to support themselves, including real estate development and involvement with an oil company. But in the late 1870s they developed financial problems, leading to bankruptcy.[12][2]
In 1875, for some unknown reason, perhaps the death of her father, Sofia and Vladimir decided to spend several years together as an actual married couple. Three years later their daughter, Sofia (called "Fufa"), was born. After almost two years devoted to raising her daughter, Kovalevskaya put Fufa under the care of relatives and friends, resumed her work in mathematics, and left Vladimir for what would be the last time.
Vladimir, who had always suffered severe mood swings, became more unstable. In 1883, faced with worsening mood swings and the possibility of being prosecuted for his role in a stock swindle, Vladimir committed suicide.[10]
That year, with the help of the mathematician Gösta Mittag-Leffler, whom she had known as a fellow student of Weierstrass, Kovalevskaya was able to secure a position as a privat-docent at Stockholm University in Sweden.[10] Kovalevskaya met Mittag-Leffler's sister, the actress, novelist, and playwright Anne Charlotte Edgren-Leffler. Until Kovalevskaya's death the two women shared a close friendship.[13]
In 1884 Kovalevskaya was appointed to a five-year position as Extraordinary Professor (assistant professor in modern terminology) and became an editor of Acta Mathematica. In 1888 she won the Prix Bordin of the French Academy of Science, for her work "Mémoire sur un cas particulier du problème de la rotation d'un corps pesant autour d'un point fixe, où l'intégration s'effectue à l'aide des fonctions ultraelliptiques du temps".[10][14] Her submission featured the celebrated discovery of what is now known as the "Kovalevskaya top", which was subsequently shown to be the only other case of rigid body motion that is "completely integrable" other than the tops of Euler and Lagrange.[15]
In 1889 Kovalevskaya was appointed Ordinary Professor (full professor) at Stockholm University, the first woman in Europe in modern times to hold such a position.[2]: 218 After much lobbying on her behalf (and a change in the Academy's rules) she was made a Corresponding Member of the Russian Academy of Sciences, but she was never offered a professorship in Russia.
Kovalevskaya, who was involved in the progressive political and feminist currents of late nineteenth-century Russian nihilism, wrote several non-mathematical works as well, including a memoir, A Russian Childhood, two plays (in collaboration with Duchess Anne Charlotte Edgren-Leffler) and a partly autobiographical novel, Nihilist Girl (1890).
In 1889, Kovalevskaya fell in love with Maxim Kovalevsky, a distant relation of her deceased husband,[16] but insisted on not marrying him because she would not be able to settle down and live with him.[3]: 18
Kovalevskaya died of epidemic influenza complicated by pneumonia in 1891 at age forty-one, after returning from a vacation in Nice with Maxim.[2]: 231 She is buried in Solna, Sweden, at Norra begravningsplatsen.
Kovalevskaya's mathematical results, such as the Cauchy–Kowalevski theorem, and her pioneering role as a female mathematician in an almost exclusively male-dominated field, have made her the subject of several books, including a biography by Ann Hibner Koblitz,[2] a biography in Russian by Polubarinova-Kochina[17] (translated into English by M. Burov with the title Love and Mathematics: Sofya Kovalevskaya, Mir Publishers, 1985), and a book about her mathematics by R. Cooke.[10]
Tributes
Sonya Kovalevsky High School Mathematics Day is a grant-making program of the Association for Women in Mathematics (AWM), funding workshops across the United States which encourage girls to explore mathematics. While the AWM currently does not have grant money to support this program, multiple universities continue the program with their own funding.[18]
The Kovalevsky Lecture is sponsored annually by the AWM and the Society for Industrial and Applied Mathematics, and is intended to highlight significant contributions of women in the fields of applied or computational mathematics.
The Kovalevskaia Fund, founded in 1985 with the purpose of supporting women in science in developing countries, was named in her honor.
The lunar crater Kovalevskaya is named in her honor.
A gymnasium (high school) and a progymnasium in Vilnius and a gymnasium in Velikiye Luki are named after Sofya Kovalevskaya.
The Alexander von Humboldt Foundation of Germany bestows a biennial Sofia Kovalevskaya Award on promising young researchers.
Saint Petersburg, Moscow, and Stockholm have streets named in honor of Kovalevskaya.
On 30 June 2021, a satellite named after her (ÑuSat 22 or "Sofya", COSPAR 2021-059AS) was launched into space as part of the Satellogic Aleph-1 constellation.
• Bust by Finnish sculptor Walter Runeberg
• Commemorative coin, 2000
• Soviet Union postage stamp, 1951
In film
Kovalevskaya has been the subject of three film and TV biographies.
• Sofya Kovalevskaya (1956) directed by Iosef Shapiro, starring Yelena Yunger, Lev Kolesov and Tatyana Sezenyevskaya.[19]
• Berget på månens baksida ("A Hill on the Dark Side of the Moon") (1983) directed by Lennart Hjulström, starring Gunilla Nyroos as Sofja Kovalewsky and Bibi Andersson as Anne Charlotte Edgren-Leffler, Duchess of Cajanello, and sister to Gösta Mittag-Leffler.[20]
• Sofya Kovalevskaya (1985 TV) directed by Azerbaijani director Ayan Shakhmaliyeva, starring Yelena Safonova as Sofia.[21]
In fiction
• Little Sparrow: A Portrait of Sophia Kovalevsky (1983), Don H. Kennedy, Ohio University Press, Athens, Ohio
• Beyond the Limit: The Dream of Sofya Kovalevskaya (2002), a biographical novel by mathematician and educator Joan Spicci, published by Tom Doherty Associates, LLC, is an historically accurate portrayal of her early married years and quest for an education. It is based in part on 88 of Kovalevskaya's letters, which the author translated from Russian to English.
• Against the Day, a 2006 novel by Thomas Pynchon was speculated before release to be based on the life of Kovalevskaya, but in the finished novel she appears as a minor character.
• "Too Much Happiness" (2009), short story by Alice Munro, published in the August 2009 issue of Harper's Magazine features Kovalevskaya as a main character. It was later published in a collection of the same name.
See also
• Cauchy–Kowalevski theorem
• Kowalevski top
• Timeline of women in science
• Timeline of women in mathematics
Selected publications
• Kowalevski, Sophie (1875), "Zur Theorie der partiellen Differentialgleichung", Journal für die reine und angewandte Mathematik, 80: 1–32 (The surname given in the paper is "von Kowalevsky".)
• Kowalevski, Sophie (1884), "Über die Reduction einer bestimmten Klasse Abel'scher Integrale 3ten Ranges auf elliptische Integrale", Acta Mathematica, 4 (1): 393–414, doi:10.1007/BF02418424
• Kowalevski, Sophie (1885), "Über die Brechung des Lichtes In Cristallinischen Mitteln", Acta Mathematica, 6 (1): 249–304, doi:10.1007/BF02400418
• Kowalevski, Sophie (1889), "Sur le probleme de la rotation d'un corps solide autour d'un point fixe", Acta Mathematica, 12 (1): 177–232, doi:10.1007/BF02592182
• Kowalevski, Sophie (1890), "Sur une propriété du système d'équations différentielles qui définit la rotation d'un corps solide autour d'un point fixe", Acta Mathematica, 14 (1): 81–93, doi:10.1007/BF02413316
• Kowalevski, Sophie (1891), "Sur un théorème de M. Bruns", Acta Mathematica, 15 (1): 45–52, doi:10.1007/BF02392602, S2CID 124051110
• Kovalevskaya, Sofia (2021). Mathematician with the Soul of a Poet: Poems and Plays of Sofia Kovalevskaya. Translated by Coleman, Sandra DeLozier. Bohannon Hall Press. ISBN 979-8985029802.
Novel
• Nihilist Girl, translated by Natasha Kolchevska with Mary Zirin; introduction by Natasha Kolchevska. Modern Language Association of America (2001) ISBN 0-87352-790-9
References
1. "Sofya Vasilyevna Kovalevskaya". Encyclopædia Britannica Online Academic Edition. Encyclopædia Britannica. Retrieved 22 October 2011.
2. Koblitz, Ann Hibner (1993). A convergence of lives: Sofia Kovalevskaia: scientist, writer, revolutionary (Reprinted in hardcover. ed.). New Brunswick (New Jersey): Rutgers University Press. ISBN 9780813519630.
3. Roger L. Cooke, "The life of S. V. Kovalevskaya", in V. B. Kuznetsov, ed., The Kowalevski Property, American Mathematical Society, 2002, p. 1–19.
4. Marie-Louise Dubreil-Jacotin. "Women mathematicians". JOC/EFR. Archived from the original on June 7, 2011. Retrieved June 3, 2012.
5. "Best of Russia --- Famous Russians --- Scientists". TRISTARMEDIA | Web Design, Web Development, Multimedia, Creative Web Solutions. Archived from the original on 3 September 2011. Retrieved 21 October 2011.
6. F. V. Korvin-Krukovskii, "Sofia Vasilevna Korvin-Krukovskaia," Russkaia Starina, vol. 71, no. 9 (1891), p. 623-636.
7. Rappaport, Karen D. "S. Kovalevsky: A Mathematical Lesson." The American Mathematical Monthly 88 (October 1981): 564-573.
8. Sofya Kovalevskaya, A Russian Childhood, translated, edited, and introduced by Beatrice Stillman; with an analysis of Kovalevskaya's Mathematics by P. Y. Kochina. Springer-Verlag, c1978 ISBN 0-387-90348-8
9. Ann Hibner Koblitz, Science, Women and Revolution in Russia, Routledge, 2000.
10. Roger Cooke, The Mathematics of Sonya Kovalevskaya, Springer-Verlag, 1984.
11. George Eliot (Mary Ann Evans), Middlemarch, Chapter IV, last sentence.
12. Kochina, Pelageya (1985). Love and Mathematics: Sofia Kovalevskaya. Moscow: Mir Publisher.
13. McFadden, Margaret. Golden Cables of Sympathy: The Transatlantic Sources of Nineteenth-Century Feminism. University Press of Kentucky, 1999.
14. Sofʹja Vasilʹevna Kovalevskaja, Mémoire sur un cas particulier du problème de la rotation d'un corps pesant autour d'un point fixe où l'intégration s'effectue à l'aide de fonctions ultraelliptiques du temps, Imprimerie nationale, 1894
15. Cooke, Roger (1984). The Mathematics of Sonya Kovalevskaya. Springer. p. 159. ISBN 9781461297666.
16. Bruno, Leonard C. (2003) [1999]. Math and mathematicians : the history of math discoveries around the world. Baker, Lawrence W. Detroit, Mich.: U X L. p. 251. ISBN 0787638145. OCLC 41497065.
17. P. Ia. Polubarinova-Kochina, Sofia Vasilevna Kovalevskaia 1850-1891, Nauka, 1981.
18. "Kovalevsky Days - AWM Association for Women in Mathematics". sites.google.com. Retrieved 2018-08-21.
19. 'Sofya Kovalevskaya' at IMDb
20. 'Berget på månens baksida' at IMDb
21. 'Sofya Kovalevskaya' at IMDb
Further reading
• Cooke, Roger (1984). The Mathematics of Sonya Kovalevskaya (Springer-Verlag) ISBN 0-387-96030-9
• Kennedy, Don H. (1983). Little Sparrow, a Portrait of Sofia Kovalevsky. Athens: Ohio University Press. ISBN 0-8214-0692-2
• Koblitz, Ann Hibner (1993). A Convergence of Lives: Sofia Kovalevskaia -- Scientist, Writer, Revolutionary. Lives of women in science, 99-2518221-2 (2., revised ed.). New Brunswick, N.J.: Rutgers Univ. P. ISBN 0-8135-1962-4
• Koblitz, Ann Hibner (1987). "Sofia Vasilevna Kovalevskaia", in Louise S. Grinstein and Paul J. Campbell (eds.), Women of Mathematics: A Bio-Bibliographic Sourcebook, Greenwood Press, New York, ISBN 978-0-313-24849-8
• The Legacy of Sonya Kovalevskaya: proceedings of a symposium sponsored by the Association for Women in Mathematics and the Mary Ingraham Bunting Institute, held October 25–28, 1985. Contemporary mathematics, 0271-4132; 64. Providence, R.I.: American Mathematical Society. 1987. ISBN 0-8218-5067-9
• Sophie (Sonja) Vasiljevna Kovalevsky at Svenskt kvinnobiografiskt lexikon
This article incorporates material from Sofia Kovalevskaya on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
External links
• "Sofia Kovalevskaya", Biographies of Women Mathematicians, Agnes Scott College
• O'Connor, John J.; Robertson, Edmund F., "Sofya Kovalevskaya", MacTutor History of Mathematics Archive, University of St Andrews
• Women's History - Sofia Kovalevskaya
• Brief biography of Sofia Kovalevskaya by Yuriy Belits. University of Colorado at Denver, March 17, 2005.
• Biography (in Russian)
• Association for Women in Mathematics
• Sof'i Kovalevskoy street, Saint Petersburg (OpenStreetMap)
• Sof'i Kovalevskoy street, Moscow (OpenStreetMap)
Sohail Nadeem
Sohail Nadeem is a Pakistani Professor of Applied Mathematics and Chairman of the Mathematics Department at Quaid-i-Azam University. He is a Young Fellow of The World Academy of Sciences and an elected Fellow of the Pakistan Academy of Sciences. He is a recipient of the 2022 Obada Prize.[1][2][3][4][5]
Early life and education
Sohail Nadeem was born on 15 March 1975. He attended Quaid-i-Azam University, Islamabad, Pakistan, from his first degree through the PhD level, obtaining his M.Sc., M.Phil., and PhD in Applied Mathematics in 1998, 2000, and 2004, respectively.[3][1][4][5]
Career
In 2000, he was appointed as a senior research assistant in the Department of Mathematics at Quaid-i-Azam University, Islamabad. In 2002, he became a lecturer at the COMSATS Institute of Information Technology, Abbottabad, and in 2003 an assistant professor at the same institution. In 2005, he returned to Quaid-i-Azam University, where he became an associate professor in 2011 and a full professor in 2015.[4][5]
Awards and memberships
In 2011, he was named a Young Fellow of The World Academy of Sciences (TWAS), Italy. In 2012, he was elected a member of the Pakistan Academy of Sciences and, in the same year, received its Salam Prize for Mathematics.[4][5] He also received the PCST Productive Scientist Award (category A) for the years 2012–2013, and in 2016 he was awarded the Pakistan Academy of Sciences gold medal in Mathematics; he was elected a fellow of the academy in 2019.[5] In 2022, he won the Obada Prize.[6]
References
1. "Loop | Prof Dr. Sohail Nadeem". loop.frontiersin.org. Retrieved 2022-06-28.
2. "Sohail". www.elivapress.com. Retrieved 2022-06-28.
3. "Nadeem, Sohail". TWAS. Retrieved 2022-06-28.
4. "Dr. Sohail Nadeem | Department of Mathematics". Retrieved 2022-06-28.
5. Nadeem, Sohail. "Curriculum vitae". twas.org.
6. "Bot Verification". obadaprize.com. Retrieved 2022-06-28.
Leonhard Sohncke
Leonhard Sohncke (22 February 1842, Halle – 1 November 1897, Munich) was a German mathematician who classified the 65 space groups in which chiral crystal structures form, called Sohncke groups. He was a professor of physics at the Technische Hochschule Karlsruhe (now the Karlsruhe Institute of Technology) from 1871 to 1883, at Jena from 1883 to 1886, and at the Technical University of Munich from 1886 to 1897.
His father, Ludwig Adolph Sohncke (1807–1853), was professor of mathematics at the University of Halle. He published several books, including Geschichte der Geometrie, hauptsächlich mit Bezug auf die neueren Methoden (1839), a translation of Aperçu historique sur l'origine et le développement des méthodes en géométrie (1837) by Michel Chasles.
References
• Paul Seidel, Leben und Werke von Leonhard Sohncke (1842–1897), einem Mitbegründer des Oberrheinischen Geologischen Vereins, Jber. Mitt. oberrhein. geol. Ver., N. F., 91, 101–112, 2009.
• Fritz Erk, Leonhard Sohncke, Meteorologische Zeitschrift volume 15 (1898), pp. 81–84.
• Sebastian Finsterwalder, Hermann Ebert: Leonhard Sohncke, Jahresbericht der Königlich Technischen Hochschule in München 1897/98, Anhang pp. 1–21.
• Siegmund Günther (1908), "Sohncke, Leonhard", Allgemeine Deutsche Biographie (ADB) (in German), vol. 54, Leipzig: Duncker & Humblot, pp. 377–379
• Leonhard Sohncke at the Mathematics Genealogy Project
L. A. Sohnke
L. A. Sohnke was a German mathematician who worked on the complex multiplication of elliptic functions.
References
• Sohnke, L. A. (1837), "Aequationes modulares pro transformatione Functionum Ellipticarum", Journal für die reine und angewandte Mathematik, 16: 97, doi:10.1515/crll.1837.16.97, ISSN 0075-4102
Soil moisture velocity equation
The soil moisture velocity equation[1] describes the speed at which water moves vertically through unsaturated soil under the combined actions of gravity and capillarity, a process known as infiltration. The equation is an alternative form of the Richardson/Richards' equation,[2][3] the key difference being that the dependent variable is the position of the wetting front $z$, which is a function of time, the water content, and media properties. The soil moisture velocity equation consists of two terms. The first, "advection-like" term was developed to simulate surface infiltration[4] and was later extended to the water table;[5] it was verified using data collected in a column experiment patterned after the famous experiment by Childs & Poulovassilis (1962)[6] and against exact solutions.[7][1]
Soil moisture velocity equation
The soil moisture velocity equation[1] or SMVE is a Lagrangian reinterpretation of the Eulerian Richards' equation wherein the dependent variable is the position z of a wetting front of a particular moisture content $\theta $ with time.
$\left.{\frac {dz}{dt}}\right\vert _{\theta }={\frac {\partial K(\theta )}{\partial \theta }}\left[1-\left({\frac {\partial \psi (\theta )}{\partial z}}\right)\right]-D(\theta ){\frac {\partial ^{2}\psi /\partial z^{2}}{\partial \psi /\partial z}}$
where:
$z$ is the vertical coordinate [L] (positive downward),
$\theta $ is the water content of the soil at a point [-]
$K(\theta )$ is the unsaturated hydraulic conductivity [L T−1],
$\psi (\theta )$ is the capillary pressure head [L],
$D(\theta )$ is the soil water diffusivity, which is defined as: $K(\theta )\partial \psi /\partial \theta $, [L2 T−1]
$t$ is time [T].
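The constitutive relations and the resulting diffusivity can be sketched numerically. The following Python fragment assumes hypothetical Brooks–Corey relations; the helper names (`Se`, `psi_b`, `lam`) and all parameter values are illustrative assumptions, not taken from the article or its references:

```python
# Hypothetical Brooks-Corey soil: all parameter values are illustrative
# assumptions, not taken from the article or its references.
theta_r, theta_s = 0.05, 0.45    # residual / saturated water content [-]
psi_b, lam = 20.0, 0.5           # air-entry suction [cm], pore-size index [-]
K_s = 10.0                       # saturated hydraulic conductivity [cm/h]

def Se(theta):
    """Effective saturation [-]."""
    return (theta - theta_r) / (theta_s - theta_r)

def K(theta):
    """Unsaturated hydraulic conductivity K(theta) [cm/h]."""
    return K_s * Se(theta) ** (3.0 + 2.0 / lam)

def psi(theta):
    """Capillary pressure head psi(theta) [cm]; negative (suction)."""
    return -psi_b * Se(theta) ** (-1.0 / lam)

def D(theta, dtheta=1e-6):
    """Soil water diffusivity D = K(theta) * dpsi/dtheta [cm^2/h],
    with dpsi/dtheta estimated by a central finite difference."""
    dpsi_dtheta = (psi(theta + dtheta) - psi(theta - dtheta)) / (2.0 * dtheta)
    return K(theta) * dpsi_dtheta
```

With the pressure head taken as negative (suction), $\partial\psi/\partial\theta$ is positive, so $D(\theta)$ is positive wherever the soil is unsaturated.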
The first term on the right-hand side of the SMVE is called the "advection-like" term, while the second term is called the "diffusion-like" term. The advection-like term of the Soil Moisture Velocity Equation is particularly useful for calculating the advance of wetting fronts for a liquid invading an unsaturated porous medium under the combined action of gravity and capillarity, because it is convertible to an ordinary differential equation by neglecting the diffusion-like term,[5] and it avoids the problem of the representative elementary volume through the use of a fine water-content discretization and solution method.
This equation was converted into a set of three ordinary differential equations (ODEs)[5] using the method of lines[8] to convert the partial derivatives on the right-hand side of the equation into appropriate finite difference forms. These three ODEs represent the dynamics of infiltrating water, falling slugs, and capillary groundwater, respectively.
Derivation
This derivation of the 1-D soil moisture velocity equation[1] for calculating vertical flux $q$ of water in the vadose zone starts with conservation of mass for an unsaturated porous medium without sources or sinks:
${\frac {\partial \theta }{\partial t}}+{\frac {\partial q}{\partial z}}=0.$
We next insert the unsaturated Buckingham–Darcy flux:[9]
$q=-K(\theta ){\frac {\partial \psi (\theta )}{\partial z}}+K(\theta ),$
yielding Richards' equation[2] in mixed form because it includes both the water content $\theta $and capillary head $\psi (\theta )$:
${\frac {\partial \theta }{\partial t}}={\frac {\partial }{\partial z}}\left[K(\theta )\left({\frac {\partial \psi (\theta )}{\partial z}}-1\right)\right]$.
Applying the chain rule of differentiation to the right-hand side of Richards' equation:
${\frac {\partial \theta }{\partial t}}={\frac {\partial }{\partial z}}K(\theta (z,t)){\frac {\partial }{\partial z}}\psi (\theta (z,t))+K(\theta ){\frac {\partial ^{2}}{\partial z^{2}}}\psi (\theta (z,t))-{\frac {\partial }{\partial z}}K(\theta (z,t))$.
Assuming that the constitutive relations for unsaturated hydraulic conductivity and soil capillarity are solely functions of the water content, $K=K(\theta )$and $\psi =\psi (\theta )$, respectively:
${\frac {\partial \theta }{\partial t}}=K'(\theta )\psi '(\theta )\left({\frac {\partial \theta }{\partial z}}\right)^{2}+K(\theta )\left[\psi ''(\theta )\left({\frac {\partial \theta }{\partial z}}\right)^{2}+\psi '(\theta ){\frac {\partial ^{2}\theta }{\partial z^{2}}}\right]-K'(\theta ){\frac {\partial \theta }{\partial z}}$.
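This chain-rule expansion can be verified symbolically, for example with SymPy. The constitutive choices $K(\theta)=\theta^{3}$ and $\psi(\theta)=\theta^{-1/2}$ and the test profile $\theta(z,t)$ below are assumptions made only to exercise the algebra:

```python
import sympy as sp

# Symbolic check of the chain-rule expansion, using illustrative (assumed)
# constitutive relations and a smooth test water-content field theta(z, t).
z, t = sp.symbols('z t', positive=True)
theta = sp.exp(-z) * (1 + t)                    # test profile theta(z, t)

K = theta**3                                    # K(theta) = theta^3
psi = theta**sp.Rational(-1, 2)                 # psi(theta) = theta^(-1/2)

# Right-hand side of Richards' equation in mixed form: d/dz [ K (dpsi/dz - 1) ]
rhs = sp.diff(K * (sp.diff(psi, z) - 1), z)

# The same quantity built term by term from the chain-rule expansion:
th = sp.symbols('th', positive=True)
Kf, psif = th**3, th**sp.Rational(-1, 2)
Kp = sp.diff(Kf, th).subs(th, theta)            # K'(theta)
psip = sp.diff(psif, th).subs(th, theta)        # psi'(theta)
psipp = sp.diff(psif, th, 2).subs(th, theta)    # psi''(theta)
th_z = sp.diff(theta, z)
th_zz = sp.diff(theta, z, 2)
expanded = (Kp * psip * th_z**2
            + K * (psipp * th_z**2 + psip * th_zz)
            - Kp * th_z)

assert sp.simplify(rhs - expanded) == 0         # the two forms agree
```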
This equation implicitly defines a function $Z_{R}(\theta ,t)$ that describes the position of a particular moisture content within the soil. Employing the implicit function theorem, the change of variable is performed via the cyclic rule by dividing both sides of this equation by $-{\partial \theta }/{\partial z}$, resulting in:
${\frac {\partial Z_{R}}{\partial t}}=-K'(\theta )\psi '(\theta ){\frac {\partial \theta }{\partial z}}-K(\theta )\psi ''(\theta ){\frac {\partial \theta }{\partial z}}-K(\theta )\psi '(\theta ){\frac {\partial ^{2}\theta /\partial z^{2}}{\partial \theta /\partial z}}+K'(\theta )$,
which can be written as:
${\frac {\partial Z_{R}}{\partial t}}=-K'(\theta )\left[{\frac {\partial \psi (\theta )}{\partial z}}-1\right]-K(\theta )\left[\psi ''(\theta ){\frac {\partial \theta }{\partial z}}+\psi '(\theta ){\frac {\partial ^{2}\theta /\partial z^{2}}{\partial \theta /\partial z}}\right]$.
Inserting the definition of the soil water diffusivity:
$D(\theta )\equiv K(\theta ){\frac {\partial \psi }{\partial \theta }}$
into the previous equation produces:
${\frac {\partial Z_{R}}{\partial t}}=-K'(\theta )\left[{\frac {\partial \psi (\theta )}{\partial z}}-1\right]-D(\theta ){\frac {\partial ^{2}\psi /\partial z^{2}}{\partial \psi /\partial z}}$
If we consider the velocity of a particular water content $\theta $, then we can write the equation in the form of the Soil Moisture Velocity Equation:
$\left.{\frac {dz}{dt}}\right\vert _{\theta }={\frac {\partial K(\theta )}{\partial \theta }}\left[1-\left({\frac {\partial \psi (\theta )}{\partial z}}\right)\right]-D(\theta ){\frac {\partial ^{2}\psi /\partial z^{2}}{\partial \psi /\partial z}}$
Physical significance
Written in moisture content form, 1-D Richards' equation is[10]
${\frac {\partial \theta }{\partial t}}={\frac {\partial }{\partial z}}\left(D(\theta ){\frac {\partial \theta }{\partial z}}\right)+{\frac {\partial K(\theta )}{\partial z}}$
Where D(θ) [L2/T] is 'the soil water diffusivity' as previously defined.
Note that with $\theta $ as the dependent variable, physical interpretation is difficult because all the factors that affect the divergence of the flux are wrapped up in the soil moisture diffusivity term $D(\theta )$. However, in the SMVE, the three factors that drive flow are in separate terms that have physical significance.
The primary assumptions used in the derivation of the Soil Moisture Velocity Equation, namely that $K=K(\theta )$ and $\psi =\psi (\theta )$, are not overly restrictive. Analytical and experimental results show that these assumptions are acceptable under most conditions in natural soils. In this case, the Soil Moisture Velocity Equation is equivalent to the 1-D Richards' equation, albeit with a change in dependent variable. This change of dependent variable is convenient because it reduces the complexity of the problem: compared to Richards' equation, which requires the calculation of the divergence of the flux, the SMVE represents a flux calculation, not a divergence calculation. The first term on the right-hand side of the SMVE represents the two scalar drivers of flow, gravity and the integrated capillarity of the wetting front. Considering just that term, the SMVE becomes:
${\frac {\partial Z_{R}}{\partial t}}=-K'(\theta )\left[{\frac {\partial \psi (\theta )}{\partial z}}-1\right]$
where ${\partial \psi (\theta )}/{\partial z}$ is the capillary head gradient that is driving the flux and the remaining conductivity term $K'(\theta )$ represents the ability of gravity to conduct flux through the soil. This term is responsible for the true advection of water through the soil under the combined influences of gravity and capillarity. As such, it is called the "advection-like" term.
Neglecting gravity and the scalar wetting front capillarity, we can consider only the second term on the right-hand side of the SMVE. In this case the Soil Moisture Velocity Equation becomes:
${\frac {\partial Z_{R}}{\partial t}}=-D(\theta ){\frac {\partial ^{2}\psi /\partial z^{2}}{\partial \psi /\partial z}}$
This term is strikingly similar to Fick's second law of diffusion. For this reason, this term is called the "diffusion-like" term of the SMVE.
This term represents the flux due to the shape of the wetting front, $-D(\theta ){\partial ^{2}\psi /\partial z^{2}}$, divided by the spatial gradient of the capillary head, ${\partial \psi /\partial z}$. Looking at this diffusion-like term, it is reasonable to ask when it might be negligible. The first answer is that this term will be zero when the first derivative is constant, $\partial \psi /\partial z=C$, because the second derivative then equals zero. One example where this occurs is the equilibrium hydrostatic moisture profile, when $\partial \psi /\partial z=-1$ with z defined as positive upward. This is a physically realistic result because an equilibrium hydrostatic moisture profile is known to produce no fluxes.
Another instance when the diffusion-like term will be nearly zero is in the case of sharp wetting fronts, where the denominator of the diffusion-like term $\partial \psi /\partial z\to \infty $, causing the term to vanish. Notably, sharp wetting fronts are notoriously difficult to resolve and accurately solve with traditional numerical Richards' equation solvers.[11]
Finally, in the case of dry soils, $K(\theta )$ tends towards $0$, making the soil water diffusivity $D(\theta )$ tend towards zero as well. In this case, the diffusion-like term would produce no flux.
Comparison against exact solutions of Richards' equation for infiltration into idealized soils developed by Ross & Parlange (1994)[12] revealed[1] that neglecting the diffusion-like term resulted in greater than 99% accuracy in calculated cumulative infiltration. This result indicates that the advection-like term of the SMVE, converted into an ordinary differential equation using the method of lines, is an accurate ODE solution of the infiltration problem. This is consistent with the result published by Ogden et al.,[5] who found errors in simulated cumulative infiltration of 0.3% when using 263 cm of tropical rainfall over an 8-month simulation to drive infiltration simulations that compared the advection-like SMVE solution against the numerical solution of Richards' equation.
Solution
The advection-like term of the SMVE can be solved using the method of lines and a finite moisture content discretization. This solution of the SMVE advection-like term replaces the 1-D Richards' equation PDE with a set of three ordinary differential equations (ODEs). These three ODEs are:
Infiltration fronts
With reference to Figure 1, water infiltrating the land surface can flow through the pore space between $\theta _{d}$ and $\theta _{i}$. Using the method of lines to convert the SMVE advection-like term into an ODE:
${\frac {\partial K(\theta )}{\partial \theta }}={\frac {K(\theta _{d})-K(\theta _{i})}{\theta _{d}-\theta _{i}}}.$
Given a ponded depth of water $h_{p}$ on the land surface, the Green and Ampt (1911)[13] assumption is employed, whereby
${\frac {\partial \psi (\theta )}{\partial z}}={\frac {|\psi (\theta _{d})|+h_{p}}{z_{j}}},$
represents the capillary head gradient that is driving the flow in the $j^{th}$ discretization or "bin". Therefore, the finite water-content equation in the case of infiltration fronts is:
$\left({\frac {dz}{dt}}\right)_{j}={\frac {K(\theta _{d})-K(\theta _{i})}{\theta _{d}-\theta _{i}}}\left({\frac {|\psi (\theta _{d})|+h_{p}}{z_{j}}}+1\right).$
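As a sketch, the infiltration-front equation above can be integrated with a simple forward-Euler step. All parameter values and the function name `front_velocity` are illustrative assumptions:

```python
# Forward-Euler advance of a single infiltration-front "bin" using the
# finite water-content equation; all values are illustrative.
K_d, K_i = 1.0, 0.01           # K(theta_d), K(theta_i) [cm/h]
theta_d, theta_i = 0.40, 0.10  # surface and initial water contents [-]
psi_d = 15.0                   # |psi(theta_d)| [cm]
h_p = 0.5                      # ponded depth on the land surface [cm]

def front_velocity(z_j):
    """(dz/dt)_j [cm/h] for a front at depth z_j [cm]."""
    dK_dtheta = (K_d - K_i) / (theta_d - theta_i)
    return dK_dtheta * ((psi_d + h_p) / z_j + 1.0)

z, dt = 0.1, 0.01              # initial front depth [cm], time step [h]
for _ in range(1000):          # 10 h of simulated time
    z += dt * front_velocity(z)
# Early on the capillary term (psi_d + h_p)/z dominates; at depth the
# velocity tends to the gravity-driven rate (K_d - K_i)/(theta_d - theta_i).
```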
Falling slugs
After rainfall stops and all surface water infiltrates, water in bins that contain infiltration fronts detaches from the land surface. Assuming that the capillarity at the leading and trailing edges of this 'falling slug' of water is balanced, the water falls through the media at the incremental conductivity associated with the $j^{\text{th}}\ \Delta \theta $ bin:
$\left({\frac {dz}{dt}}\right)_{j}={\frac {K(\theta _{j})-K(\theta _{j-1})}{\theta _{j}-\theta _{j-1}}}$.
This capillary-free approach is very similar to the kinematic wave approximation.
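A minimal numerical illustration of the falling-slug velocity, with assumed conductivities and water contents:

```python
# Velocity of a "falling slug" between bins j-1 and j; the values for K
# and theta are illustrative assumptions.
K_j, K_jm1 = 0.80, 0.60            # K(theta_j), K(theta_{j-1}) [cm/h]
theta_j, theta_jm1 = 0.35, 0.30    # bin water contents [-]
v_slug = (K_j - K_jm1) / (theta_j - theta_jm1)   # [cm/h]
# v_slug = 4.0 cm/h, independent of depth (kinematic-wave-like behaviour)
```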
Capillary groundwater fronts
In this case, the flux of water to the $j^{\text{th}}$ bin occurs between bin j and i. Therefore, in the context of the method of lines:
${\frac {\partial K(\theta )}{\partial \theta }}={\frac {K(\theta _{j})-K(\theta _{i})}{\theta _{j}-\theta _{i}}},$
and
${\frac {\partial \psi (\theta )}{\partial z}}={\frac {|\psi (\theta _{j})|}{H_{j}}}$
which yields:
$\left({\frac {dH}{dt}}\right)_{j}={\frac {K(\theta _{j})-K(\theta _{i})}{\theta _{j}-\theta _{i}}}\left({\frac {|\psi (\theta _{j})|}{H_{j}}}-1\right).$
Note the "−1" in parentheses, representing the fact that gravity and capillarity act in opposite directions. The performance of this equation was verified[7] using a column experiment fashioned after that by Childs and Poulovassilis (1962).[6] Results of that validation showed that the finite water-content vadose zone flux calculation method performed comparably to the numerical solution of Richards' equation. Data from this column experiment are useful for evaluating models of near-surface water table dynamics.
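The groundwater-front equation above can likewise be sketched with forward Euler; it relaxes the front height $H_j$ toward the equilibrium at which capillarity balances gravity, $H_j = |\psi(\theta_j)|$. All values below are illustrative assumptions, not those of the cited column experiment:

```python
# Forward-Euler relaxation of a capillary groundwater front toward its
# equilibrium height |psi(theta_j)|; all values are illustrative.
K_j, K_i = 0.50, 0.05          # K(theta_j), K(theta_i) [cm/h]
theta_j, theta_i = 0.35, 0.15  # bin water contents [-]
psi_j = 30.0                   # |psi(theta_j)| [cm]

def dH_dt(H):
    """(dH/dt)_j [cm/h]; gravity and capillarity act in opposition."""
    return (K_j - K_i) / (theta_j - theta_i) * (psi_j / H - 1.0)

H, dt = 5.0, 0.05              # initial front height [cm], time step [h]
for _ in range(20000):         # 1000 h of simulated time
    H += dt * dH_dt(H)
# H has relaxed to the equilibrium where |psi(theta_j)|/H = 1, i.e. H = 30 cm.
```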
It is noteworthy that the SMVE advection-like term solved using the finite moisture-content method completely avoids the need to estimate the specific yield. Calculating the specific yield as the water table nears the land surface is made cumbersome by non-linearities; the SMVE solved using a finite moisture-content discretization handles this automatically in the case of a dynamic near-surface water table.
Notice and awards
The paper on the Soil Moisture Velocity Equation was highlighted by the editor in the issue of J. Adv. Modeling of Earth Systems in which it first appeared, and is in the public domain. The paper describing the finite moisture-content solution of the advection-like term of the Soil Moisture Velocity Equation was selected for the 2015 Coolest Paper Award by the early career members of the International Association of Hydrogeologists.
References
1. Ogden, F.L., M.B. Allen, W. Lai, J. Zhu, C.C. Douglas, M. Seo, and C.A. Talbot, 2017. The Soil Moisture Velocity Equation, J. Adv. Modeling Earth Syst., https://doi.org/10.1002/2017MS000931
2. Richardson, L. F. (1922), Weather Prediction by Numerical Process, Cambridge Univ. Press, Cambridge, U. K., pp. 108. online: https://archive.org/details/weatherpredictio00richrich accessed March 23, 2018.
3. Richards, L. A. (1931), Capillary conduction of liquids through porous mediums, J. Appl. Phys., 1(5), 318–333.
4. Talbot, C.A., and F. L. Ogden (2008), A method for computing infiltration and redistribution in a discretized moisture content domain, Water Resour. Res., 44(8), doi: 10.1029/2008WR006815.
5. Ogden, F. L., W. Lai, R. C. Steinke, J. Zhu, C. A. Talbot, and J. L. Wilson (2015), A new general 1-D vadose zone solution method, Water Resour.Res., 51, doi:10.1002/2015WR017126.
6. Childs, E. C., and A. Poulovassilis (1962), The moisture profile above a moving water table, Soil Sci. J., 13(2), 271–285.
7. Ogden, F. L., W. Lai, R. C. Steinke, and J. Zhu (2015b), Validation of finite water-content vadose zone dynamics method using column experiments with a moving water table and applied surface flux, Water Resour. Res., doi:10.1002/2014WR016454.
8. Griffiths, Graham; Schiesser, William; Hamdi, Samir (2007). "Method of lines". Scholarpedia. 2 (7): 2859. Bibcode:2007SchpJ...2.2859H. doi:10.4249/scholarpedia.2859.
9. Jury, W. A., and R. Horton, 2004. Soil physics. John Wiley & Sons.
10. Philip, J.R., 1957. Theory of infiltration 1: The infiltration equation and its solution. Soil Sci. 83(5):345-357.
11. Farthing, M. W., & Ogden, F. L. (2017). Numerical Solution of Richards’ Equation: A Review of Advances and Challenges. Soil Science Society of America J.
12. Ross, P.J., and J.-Y. Parlange, 1994. Comparing exact and numerical solutions of Richards' equation for 1-dimensional infiltration and drainage, Soil Sci. 157(6):341-344.
13. Green, W. H., and G. A. Ampt (1911), Studies on soil physics, 1, The flow of air and water through soils, J. Agric. Sci., 4(1), 1–24.
External links
• YouTube video of SMVE-based solution slowed during rainfall to highlight behavior, with fixed water table at 1.0 m and evapotranspiration from a 0.5 m root zone
|
Wikipedia
|
Sokhotski–Plemelj theorem
The Sokhotski–Plemelj theorem (Polish spelling is Sochocki) is a theorem in complex analysis, which helps in evaluating certain integrals. The real-line version of it (see below) is often used in physics, although rarely referred to by name. The theorem is named after Julian Sochocki, who proved it in 1868, and Josip Plemelj, who rediscovered it as a main ingredient of his solution of the Riemann–Hilbert problem in 1908.
Not to be confused with Casorati–Sokhotski–Weierstrass theorem.
Statement of the theorem
Let C be a smooth closed simple curve in the plane, and $\varphi $ an analytic function on C. Note that the Cauchy-type integral
$\phi (z)={\frac {1}{2\pi i}}\int _{C}{\frac {\varphi (\zeta )\,d\zeta }{\zeta -z}},$
cannot be evaluated for any z on the curve C. However, on the interior and exterior of the curve, the integral produces analytic functions, which will be denoted $\phi _{i}$ inside C and $\phi _{e}$ outside. The Sokhotski–Plemelj formulas relate the limiting boundary values of these two analytic functions at a point z on C and the Cauchy principal value ${\mathcal {P}}$ of the integral:
$\lim _{w\to z}\phi _{i}(w)={\frac {1}{2\pi i}}{\mathcal {P}}\int _{C}{\frac {\varphi (\zeta )\,d\zeta }{\zeta -z}}+{\frac {1}{2}}\varphi (z),$
$\lim _{w\to z}\phi _{e}(w)={\frac {1}{2\pi i}}{\mathcal {P}}\int _{C}{\frac {\varphi (\zeta )\,d\zeta }{\zeta -z}}-{\frac {1}{2}}\varphi (z).$
Subsequent generalizations relax the smoothness requirements on curve C and the function φ.
Version for the real line
See also: Kramers–Kronig relations
Especially important is the version for integrals over the real line.
$\lim _{\varepsilon \to 0^{+}}{\frac {1}{x\pm i\varepsilon }}=\mp i\pi \delta (x)+{\mathcal {P}}{{\Big (}{\frac {1}{x}}{\Big )}}.$
where $\delta (x)$ is the Dirac delta function. This should be interpreted as an integral equality, as follows.
Let f be a complex-valued function which is defined and continuous on the real line, and let a and b be real constants with $a<0<b$. Then
$\lim _{\varepsilon \to 0^{+}}\int _{a}^{b}{\frac {f(x)}{x\pm i\varepsilon }}\,dx=\mp i\pi f(0)+{\mathcal {P}}\int _{a}^{b}{\frac {f(x)}{x}}\,dx,$
where ${\mathcal {P}}$ denotes the Cauchy principal value. (Note that this version makes no use of analyticity.)
Proof of the real version
A simple proof is as follows.
$\lim _{\varepsilon \to 0^{+}}\int _{a}^{b}{\frac {f(x)}{x\pm i\varepsilon }}\,dx=\mp i\pi \lim _{\varepsilon \to 0^{+}}\int _{a}^{b}{\frac {\varepsilon }{\pi (x^{2}+\varepsilon ^{2})}}f(x)\,dx+\lim _{\varepsilon \to 0^{+}}\int _{a}^{b}{\frac {x^{2}}{x^{2}+\varepsilon ^{2}}}\,{\frac {f(x)}{x}}\,dx.$
For the first term, we note that $\varepsilon /[\pi (x^{2}+\varepsilon ^{2})]$ is a nascent delta function, and therefore approaches a Dirac delta function in the limit. Therefore, the first term equals $\mp i\pi f(0)$.
For the second term, we note that the factor $x^{2}/(x^{2}+\varepsilon ^{2})$ approaches 1 for $|x|\gg \varepsilon $, approaches 0 for $|x|\ll \varepsilon $, and is exactly symmetric about 0. Therefore, in the limit, it turns the integral into a Cauchy principal value integral.
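This real-line identity can also be checked numerically. The sketch below is a midpoint-rule quadrature with f(x) = exp(−x²) and the interval [−2, 2] chosen for convenience; since this f is even, the principal-value term vanishes on the symmetric interval, so the integral should approach −iπ f(0) = −iπ as ε → 0⁺.

```python
import math

def sp_integral(eps, a=-2.0, b=2.0, n=400_000):
    """Midpoint-rule approximation of the integral of
    exp(-x^2) / (x + i*eps) over [a, b]."""
    h = (b - a) / n
    total = 0.0 + 0.0j
    for k in range(n):
        x = a + (k + 0.5) * h
        total += math.exp(-x * x) / complex(x, eps) * h
    return total

# f(x) = exp(-x^2) is even, so the principal-value term vanishes on the
# symmetric interval; the formula then predicts a limit of -i*pi*f(0).
val = sp_integral(1e-3)
```

With ε = 10⁻³ the real part nearly cancels by symmetry and the imaginary part is close to −π, in line with the nascent-delta argument above.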
For a simple proof of the complex version of the formula and of the version for polydomains, see: Mohammed, Alip (February 2007). "The torus related Riemann problem". Journal of Mathematical Analysis and Applications. 326 (1): 533–555. doi:10.1016/j.jmaa.2006.03.011.
Physics application
In quantum mechanics and quantum field theory, one often has to evaluate integrals of the form
$\int _{-\infty }^{\infty }dE\,\int _{0}^{\infty }dt\,f(E)\exp(-iEt)$
where E is some energy and t is time. This expression, as written, is undefined (since the time integral does not converge), so it is typically modified by adding a negative real term −εt to the exponent and then taking ε to 0⁺, i.e.:
$\lim _{\varepsilon \to 0^{+}}\int _{-\infty }^{\infty }dE\,\int _{0}^{\infty }dt\,f(E)\exp(-iEt-\varepsilon t)=-i\lim _{\varepsilon \to 0^{+}}\int _{-\infty }^{\infty }{\frac {f(E)}{E-i\varepsilon }}\,dE=\pi f(0)-i{\mathcal {P}}\int _{-\infty }^{\infty }{\frac {f(E)}{E}}\,dE,$
where the latter step uses the real version of the theorem.
Heitler function
In theoretical quantum optics, the derivation of a master equation in Lindblad form often requires the following integral,[1] which is a direct consequence of the Sokhotski–Plemelj theorem and is often called the Heitler function:
$\int _{0}^{\infty }d\tau \,\exp(-i(\omega \pm \nu )\tau )=\pi \delta (\omega \pm \nu )-i{\mathcal {P}}{\Big (}{\frac {1}{\omega \pm \nu }}{\Big )}$
See also
• Singular integral operators on closed curves (account of the Sokhotski–Plemelj theorem for the unit circle and a closed Jordan curve)
• Kramers–Kronig relations
• Hilbert transform
References
1. Breuer, Heinz-Peter; Petruccione, Francesco (2002). The Theory of Open Quantum Systems. Oxford University Press. p. 145. doi:10.1093/acprof:oso/9780199213900.001.0001. ISBN 978-0-19-852063-4.
Literature
• Weinberg, Steven (1995). The Quantum Theory of Fields, Volume 1: Foundations. Cambridge Univ. Press. ISBN 0-521-55001-7. Chapter 3.1.
• Merzbacher, Eugen (1998). Quantum Mechanics. Wiley, John & Sons, Inc. ISBN 0-471-88702-1. Appendix A, equation (A.19).
• Henrici, Peter (1986). Applied and Computational Complex Analysis, vol. 3. Willey, John & Sons, Inc.
• Plemelj, Josip (1964). Problems in the sense of Riemann and Klein. New York: Interscience Publishers.
• Gakhov, F. D. (1990), Boundary value problems. Reprint of the 1966 translation, Dover Publications, ISBN 0-486-66275-6
• Muskhelishvili, N. I. (1949). Singular integral equations, boundary problems of function theory and their application to mathematical physics. Melbourne: Dept. of Supply and Development, Aeronautical Research Laboratories.
• Blanchard, Bruening: Mathematical Methods in Physics (Birkhauser 2003), Example 3.3.1 4
• Sokhotskii, Y. W. (1873). On definite integrals and functions used in series expansions. St. Petersburg.{{cite book}}: CS1 maint: location missing publisher (link)
Solder form
In mathematics, more precisely in differential geometry, a soldering (or sometimes solder form) of a fiber bundle to a smooth manifold is a manner of attaching the fibers to the manifold in such a way that they can be regarded as tangent. Intuitively, soldering expresses in abstract terms the idea that a manifold may have a point of contact with a certain model Klein geometry at each point. In extrinsic differential geometry, the soldering is simply expressed by the tangency of the model space to the manifold. In intrinsic geometry, other techniques are needed to express it. Soldering was introduced in this general form by Charles Ehresmann in 1950.[1]
Soldering of a fibre bundle
Let M be a smooth manifold, and G a Lie group, and let E be a smooth fibre bundle over M with structure group G. Suppose that G acts transitively on the typical fibre F of E, and that dim F = dim M. A soldering of E to M consists of the following data:
1. A distinguished section o : M → E.
2. A linear isomorphism of vector bundles θ : TM → o*VE from the tangent bundle of M to the pullback of the vertical bundle of E along the distinguished section.
In particular, this latter condition can be interpreted as saying that θ determines a linear isomorphism
$\theta _{x}:T_{x}M\rightarrow V_{o(x)}E$
from the tangent space of M at x to the (vertical) tangent space of the fibre at the point determined by the distinguished section. The form θ is called the solder form for the soldering.
Special cases
By convention, whenever the choice of soldering is unique or canonically determined, the solder form is called the canonical form, or the tautological form.
Affine bundles and vector bundles
Suppose that E is an affine vector bundle (a vector bundle without a choice of zero section). Then a soldering on E specifies first a distinguished section: that is, a choice of zero section o, so that E may be identified as a vector bundle. The solder form is then a linear isomorphism
$\theta \colon TM\to V_{o}E,$
However, for a vector bundle there is a canonical isomorphism between the vertical space at the origin and the fibre VoE ≈ E. Making this identification, the solder form is specified by a linear isomorphism
$TM\to E.$
In other words, a soldering on an affine bundle E is a choice of isomorphism of E with the tangent bundle of M.
Often one speaks of a solder form on a vector bundle, where it is understood a priori that the distinguished section of the soldering is the zero section of the bundle. In this case, the structure group of the vector bundle is often implicitly enlarged by the semidirect product of GL(n) with the typical fibre of E (which is a representation of GL(n)).[2]
Examples
• As a special case, for instance, the tangent bundle itself carries a canonical solder form, namely the identity.
• If M has a Riemannian metric (or pseudo-Riemannian metric), then the covariant metric tensor gives an isomorphism $g\colon TM\to T^{*}M$ from the tangent bundle to the cotangent bundle, which is a solder form.
• In Hamiltonian mechanics, the solder form is known as the tautological one-form, or alternately as the Liouville one-form, the Poincaré one-form, the canonical one-form, or the symplectic potential.
Applications
• A solder form on a vector bundle allows one to define the torsion and contorsion tensors of a connection.
• Solder forms occur in the sigma model, where they glue together the tangent space of a spacetime manifold to the tangent space of the field manifold.
• Vierbeins, or tetrads in general relativity, look like solder forms, in that they glue together coordinate charts on the spacetime manifold, to the preferred, usually orthonormal basis on the tangent space, where calculations can be considerably simplified. That is, the coordinate charts are the $TM$ in the definitions above, and the frame field is the vertical bundle $VE$. In the sigma model, the vierbeins are explicitly the solder forms.
Principal bundles
In the language of principal bundles, a solder form on a smooth principal G-bundle P over a smooth manifold M is a horizontal and G-equivariant differential 1-form on P with values in a linear representation V of G such that the associated bundle map from the tangent bundle TM to the associated bundle P×G V is a bundle isomorphism. (In particular, V and M must have the same dimension.)
A motivating example of a solder form is the tautological or fundamental form on the frame bundle of a manifold.
The reason for the name is that a solder form solders (or attaches) the abstract principal bundle to the manifold M by identifying an associated bundle with the tangent bundle. Solder forms provide a method for studying G-structures and are important in the theory of Cartan connections. The terminology and approach is particularly popular in the physics literature.
Notes
1. Kobayashi (1957).
2. Cf. Kobayashi (1957) section 11 for a discussion of the companion reduction of the structure group.
References
• Ehresmann, C. (1950). "Les connexions infinitésimales dans un espace fibré différentiel". Colloque de Topologie, Bruxelles: 29–55.
• Kobayashi, Shoshichi (1957). "Theory of Connections". Ann. Mat. Pura Appl. 43 (1): 119–194. doi:10.1007/BF02411907.
• Kobayashi, Shoshichi & Nomizu, Katsumi (1996). Foundations of Differential Geometry, Vol. 1 & 2 (New ed.). Wiley Interscience. ISBN 0-471-15733-3.
Ramanujan–Soldner constant
In mathematics, the Ramanujan–Soldner constant (also called the Soldner constant) is a mathematical constant defined as the unique positive zero of the logarithmic integral function. It is named after Srinivasa Ramanujan and Johann Georg von Soldner.
Its value is approximately μ ≈ 1.45136923488338105028396848589202744949303228… (sequence A070769 in the OEIS)
Since the logarithmic integral is defined by
$\mathrm {li} (x)=\int _{0}^{x}{\frac {dt}{\ln t}},$
then using $\mathrm {li} (\mu )=0,$ we have
$\mathrm {li} (x)\;=\;\mathrm {li} (x)-\mathrm {li} (\mu )=\int _{0}^{x}{\frac {dt}{\ln t}}-\int _{0}^{\mu }{\frac {dt}{\ln t}}=\int _{\mu }^{x}{\frac {dt}{\ln t}},$
thus easing calculation for numbers greater than μ. Also, since the exponential integral function satisfies the equation
$\mathrm {li} (x)\;=\;\mathrm {Ei} (\ln {x}),$
the only positive zero of the exponential integral occurs at the natural logarithm of the Ramanujan–Soldner constant, whose value is approximately ln(μ) ≈ 0.372507410781366634461991866… (sequence A091723 in the OEIS)
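Since li(x) = Ei(ln x), the constant can be computed by finding the positive zero of the exponential integral and exponentiating. The sketch below uses the standard power series Ei(z) = γ + ln z + Σ_{n≥1} zⁿ/(n·n!) with bisection; the truncation length and the bracketing interval are choices made here for illustration.

```python
import math

EULER_GAMMA = 0.5772156649015329

def Ei(z, terms=40):
    """Exponential integral via its power series (valid for z > 0)."""
    total = EULER_GAMMA + math.log(z)
    for n in range(1, terms + 1):
        total += z ** n / (n * math.factorial(n))
    return total

def soldner_constant():
    """Bisect Ei(z) = 0 on [0.1, 1.0]; then mu = exp(root),
    since li(x) = Ei(ln x)."""
    lo, hi = 0.1, 1.0            # Ei(0.1) < 0 < Ei(1.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Ei(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return math.exp(0.5 * (lo + hi))
```

The bisection converges to ln μ ≈ 0.3725074107813666, so exponentiating recovers μ ≈ 1.4513692348833811 to machine precision.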
External links
• Weisstein, Eric W. "Soldner's Constant". MathWorld.
Functional completeness
In logic, a functionally complete set of logical connectives or Boolean operators is one which can be used to express all possible truth tables by combining members of the set into a Boolean expression.[1][2] A well-known complete set of connectives is { AND, NOT }. Each of the singleton sets { NAND } and { NOR } is functionally complete. However, the set { AND, OR } is incomplete, due to its inability to express NOT.
A gate or set of gates which is functionally complete can also be called a universal gate or universal set of gates.
A functionally complete set of gates may utilise or generate 'garbage bits' as part of its computation which are either not part of the input or not part of the output to the system.
In a context of propositional logic, functionally complete sets of connectives are also called (expressively) adequate.[3]
From the point of view of digital electronics, functional completeness means that every possible logic gate can be realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from either only binary NAND gates, or only binary NOR gates.
Introduction
Modern texts on logic typically take as primitive some subset of the connectives: conjunction ($\land $); disjunction ($\lor $); negation ($\neg $); material conditional ($\to $); and possibly the biconditional ($\leftrightarrow $). Further connectives can be defined, if so desired, by defining them in terms of these primitives. For example, NOR (sometimes denoted $\downarrow $, the negation of the disjunction) can be expressed as the conjunction of two negations:
$A\downarrow B:=\neg A\land \neg B$
Similarly, the negation of the conjunction, NAND (sometimes denoted as $\uparrow $), can be defined in terms of disjunction and negation. It turns out that every binary connective can be defined in terms of $\{\neg ,\land ,\lor ,\to ,\leftrightarrow \}$, so this set is functionally complete.
However, it still contains some redundancy: this set is not a minimal functionally complete set, because the conditional and biconditional can be defined in terms of the other connectives as
${\begin{aligned}A\to B&:=\neg A\lor B\\A\leftrightarrow B&:=(A\to B)\land (B\to A).\end{aligned}}$
It follows that the smaller set $\{\neg ,\land ,\lor \}$ is also functionally complete. But this is still not minimal, as $\lor $ can be defined as
$A\lor B:=\neg (\neg A\land \neg B).$
Alternatively, $\land $ may be defined in terms of $\lor $ in a similar manner, or $\lor $ may be defined in terms of $\rightarrow $:
$\ A\vee B:=\neg A\rightarrow B.$
No further simplifications are possible. Hence, every two-element set of connectives containing $\neg $ and one of $\{\land ,\lor ,\rightarrow \}$ is a minimal functionally complete subset of $\{\neg ,\land ,\lor ,\to ,\leftrightarrow \}$.
Formal definition
Given the Boolean domain B = {0,1}, a set F of Boolean functions ƒi: Bni → B is functionally complete if the clone on B generated by the basic functions ƒi contains all functions ƒ: Bn → B for all integers n ≥ 1. In other words, the set is functionally complete if every Boolean function that takes at least one variable can be expressed in terms of the functions ƒi. Since every Boolean function of at least one variable can be expressed in terms of binary Boolean functions, F is functionally complete if and only if every binary Boolean function can be expressed in terms of the functions in F.
A more natural condition would be that the clone generated by F consist of all functions ƒ: Bn → B, for all integers n ≥ 0. However, the examples given above are not functionally complete in this stronger sense because it is not possible to write a nullary function, i.e. a constant expression, in terms of F if F itself does not contain at least one nullary function. With this stronger definition, the smallest functionally complete sets would have 2 elements.
Another natural condition would be that the clone generated by F together with the two nullary constant functions be functionally complete or, equivalently, functionally complete in the strong sense of the previous paragraph. The example of the Boolean function given by S(x, y, z) = z if x = y and S(x, y, z) = x otherwise shows that this condition is strictly weaker than functional completeness.[4][5][6]
Characterization of functional completeness
Further information: Post's lattice
Emil Post proved that a set of logical connectives is functionally complete if and only if it is not a subset of any of the following sets of connectives:
• The monotonic connectives; changing the truth value of any connected variables from F to T without changing any from T to F never makes these connectives change their return value from T to F, e.g. $\vee ,\wedge ,\top ,\bot $.
• The affine connectives, such that each connected variable either always or never affects the truth value these connectives return, e.g. $\neg ,\top ,\bot ,\leftrightarrow ,\nleftrightarrow $.
• The self-dual connectives, which are equal to their own de Morgan dual; if the truth values of all variables are reversed, so is the truth value these connectives return, e.g. $\neg $, MAJ(p,q,r).
• The truth-preserving connectives; they return the truth value T under any interpretation which assigns T to all variables, e.g. $\vee ,\wedge ,\top ,\rightarrow ,\leftrightarrow $.
• The falsity-preserving connectives; they return the truth value F under any interpretation which assigns F to all variables, e.g. $\vee ,\wedge ,\bot ,\nrightarrow ,\nleftrightarrow $.
In fact, Post gave a complete description of the lattice of all clones (sets of operations closed under composition and containing all projections) on the two-element set {T, F}, nowadays called Post's lattice, which implies the above result as a simple corollary: the five mentioned sets of connectives are exactly the maximal clones.
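Post's criterion can be applied mechanically to a binary connective given as a truth table. The sketch below tests membership in each of the five classes; testing affineness by enumerating the eight functions c0 ⊕ c1·x ⊕ c2·y is one standard encoding, chosen here for brevity.

```python
from itertools import product

def nand(x, y):
    """NAND on {0, 1}."""
    return 1 - (x & y)

def truth_preserving(f):  return f(1, 1) == 1
def falsity_preserving(f): return f(0, 0) == 0

def monotone(f):
    # f never drops from 1 to 0 when an input flips from 0 to 1.
    return all(f(a, b) <= f(c, d)
               for a, b, c, d in product((0, 1), repeat=4)
               if a <= c and b <= d)

def self_dual(f):
    # f equals its own de Morgan dual.
    return all(f(x, y) == 1 - f(1 - x, 1 - y)
               for x, y in product((0, 1), repeat=2))

def affine(f):
    # f is affine iff it equals c0 XOR c1*x XOR c2*y for some constants.
    return any(all(f(x, y) == c0 ^ (c1 & x) ^ (c2 & y)
                   for x, y in product((0, 1), repeat=2))
               for c0, c1, c2 in product((0, 1), repeat=3))

def sheffer(f):
    """Post: a binary connective is complete by itself iff it lies in
    none of the five maximal clones."""
    return not any(p(f) for p in (truth_preserving, falsity_preserving,
                                  monotone, self_dual, affine))
```

Running sheffer over all 16 binary connectives singles out exactly NAND and NOR, in agreement with the next section.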
Minimal functionally complete operator sets
When a single logical connective or Boolean operator is functionally complete by itself, it is called a Sheffer function[7] or sometimes a sole sufficient operator. There are no unary operators with this property. NAND and NOR, which are dual to each other, are the only two binary Sheffer functions. These were discovered, but not published, by Charles Sanders Peirce around 1880, and rediscovered independently and published by Henry M. Sheffer in 1913.[8] In digital electronics terminology, the binary NAND gate (↑) and the binary NOR gate (↓) are the only binary universal logic gates.
The following are the minimal functionally complete sets of logical connectives with arity ≤ 2:[9]
One element
{↑}, {↓}.
Two elements
$\{\vee ,\neg \}$, $\{\wedge ,\neg \}$, $\{\to ,\neg \}$, $\{\gets ,\neg \}$, $\{\to ,\bot \}$, $\{\gets ,\bot \}$, $\{\to ,\nleftrightarrow \}$, $\{\gets ,\nleftrightarrow \}$, $\{\to ,\nrightarrow \}$, $\{\to ,\nleftarrow \}$, $\{\gets ,\nrightarrow \}$, $\{\gets ,\nleftarrow \}$, $\{\nrightarrow ,\neg \}$, $\{\nleftarrow ,\neg \}$, $\{\nrightarrow ,\top \}$, $\{\nleftarrow ,\top \}$, $\{\nrightarrow ,\leftrightarrow \}$, $\{\nleftarrow ,\leftrightarrow \}.$
Three elements
$\{\lor ,\leftrightarrow ,\bot \}$, $\{\lor ,\leftrightarrow ,\nleftrightarrow \}$, $\{\lor ,\nleftrightarrow ,\top \}$, $\{\land ,\leftrightarrow ,\bot \}$, $\{\land ,\leftrightarrow ,\nleftrightarrow \}$, $\{\land ,\nleftrightarrow ,\top \}.$
There are no minimal functionally complete sets containing more than three connectives of arity at most 2.[9] In order to keep the lists above readable, operators that ignore one or more inputs have been omitted. For example, an operator that ignores the first input and outputs the negation of the second can be replaced by a unary negation.
Examples
• Examples using the completeness of NAND (↑), as illustrated by:[10]
• ¬A ≡ A ↑ A
• A ∧ B ≡ ¬(A ↑ B) ≡ (A ↑ B) ↑ (A ↑ B)
• A ∨ B ≡ (A ↑ A) ↑ (B ↑ B)
• Examples using the completeness of NOR (↓), as illustrated by:[11]
• ¬A ≡ A ↓ A
• A ∨ B ≡ ¬(A ↓ B) ≡ (A ↓ B) ↓ (A ↓ B)
• A ∧ B ≡ (A ↓ A) ↓ (B ↓ B)
Note that an electronic circuit or a software function can be optimized by reuse, to reduce the number of gates. For instance, the "A ∧ B" operation, when expressed by ↑ gates, is implemented with the reuse of "A ↑ B",
X ≡ (A ↑ B); A ∧ B ≡ X ↑ X
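The NAND identities above, including the reuse of X = A ↑ B, can be checked exhaustively over both truth values; a quick sketch:

```python
from itertools import product

def NAND(a, b):
    """Sheffer stroke on Booleans."""
    return not (a and b)

# Verify the identities from the text for every assignment of A and B.
for A, B in product((False, True), repeat=2):
    assert (not A) == NAND(A, A)
    assert (A and B) == NAND(NAND(A, B), NAND(A, B))
    assert (A or B) == NAND(NAND(A, A), NAND(B, B))
    # Reuse, as in the text: X = A NAND B, then A AND B = X NAND X.
    X = NAND(A, B)
    assert (A and B) == NAND(X, X)
```

The analogous NOR identities can be checked the same way by swapping ∧ and ∨.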
In other domains
Apart from logical connectives (Boolean operators), functional completeness can be introduced in other domains. For example, a set of reversible gates is called functionally complete, if it can express every reversible operator.
The 3-input Fredkin gate is a functionally complete reversible gate by itself – a sole sufficient operator. There are many other three-input universal logic gates, such as the Toffoli gate.
In quantum computing, the Hadamard gate and the T gate are universal, albeit with a slightly more restrictive definition than that of functional completeness.
Set theory
There is an isomorphism between the algebra of sets and the Boolean algebra; that is, they have the same structure. Consequently, if Boolean operators are mapped to set operators, the discussion above carries over to sets: there are many minimal complete sets of set-theoretic operators that can generate any other set relation. The most popular minimal complete operator sets are {¬, ∩} and {¬, ∪}. If the universal set is forbidden, set operators are restricted to being falsity- (Ø-) preserving, and cannot be equivalent to a functionally complete Boolean algebra.
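This translation can be illustrated with finite sets. Since absolute complement requires a universal set, the sketch below fixes a small explicit universe U (an arbitrary choice for the demonstration) and recovers union and difference from {complement, intersection} alone:

```python
U = frozenset(range(8))        # explicit universe, needed for complements
A = frozenset({1, 2, 3, 4})
B = frozenset({3, 4, 5, 6})

def comp(S):
    """Absolute complement relative to U: the set-theoretic negation."""
    return U - S

# Union recovered from {complement, intersection} via De Morgan:
union = comp(comp(A) & comp(B))
assert union == A | B

# Relative complement (set difference) from the same two primitives:
diff = A & comp(B)
assert diff == A - B
```

The same construction with {complement, union} recovers intersection, mirroring the dual pair {¬, ∧} and {¬, ∨} above.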
See also
• Algebra of sets – Identities and relationships involving sets
• Boolean algebra – Algebraic manipulation of "true" and "false"
• Completeness (logic) – Characteristic of some logical systems
• List of Boolean algebra topics
• NAND logic – Logic constructed only from NAND gates
• NOR logic – Making other gates using just NOR gates
• One instruction set computer – Abstract machine that uses only one instructionPages displaying short descriptions of redirect targets
References
1. Enderton, Herbert (2001), A mathematical introduction to logic (2nd ed.), Boston, MA: Academic Press, ISBN 978-0-12-238452-3. ("Complete set of logical connectives").
2. Nolt, John; Rohatyn, Dennis; Varzi, Achille (1998), Schaum's outline of theory and problems of logic (2nd ed.), New York: McGraw–Hill, ISBN 978-0-07-046649-4. ("[F]unctional completeness of [a] set of logical operators").
3. Smith, Peter (2003), An introduction to formal logic, Cambridge University Press, ISBN 978-0-521-00804-4. (Defines "expressively adequate", shortened to "adequate set of connectives" in a section heading.)
4. Wesselkamper, T.C. (1975), "A sole sufficient operator", Notre Dame Journal of Formal Logic, 16: 86–88, doi:10.1305/ndjfl/1093891614
5. Massey, G.J. (1975), "Concerning an alleged Sheffer function", Notre Dame Journal of Formal Logic, 16 (4): 549–550, doi:10.1305/ndjfl/1093891898
6. Wesselkamper, T.C. (1975), "A Correction To My Paper" A. Sole Sufficient Operator", Notre Dame Journal of Formal Logic, 16 (4): 551, doi:10.1305/ndjfl/1093891899
7. The term was originally restricted to binary operations, but since the end of the 20th century it is used more generally. Martin, N.M. (1989), Systems of logic, Cambridge University Press, p. 54, ISBN 978-0-521-36770-7.
8. Scharle, T.W. (1965), "Axiomatization of propositional calculus with Sheffer functors", Notre Dame J. Formal Logic, 6 (3): 209–217, doi:10.1305/ndjfl/1093958259.
9. Wernick, William (1942) "Complete Sets of Logical Functions," Transactions of the American Mathematical Society 51: 117–32. In his list on the last page of the article, Wernick does not distinguish between ← and →, or between $\nleftarrow $ and $\nrightarrow $.
10. "NAND Gate Operations" at http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nand.html
11. "NOR Gate Operations" at http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nor.html
Solid Klein bottle
In mathematics, a solid Klein bottle is a three-dimensional topological space (a 3-manifold) whose boundary is the Klein bottle.[1]
It is homeomorphic to the quotient space obtained by gluing the top disk of a cylinder $\scriptstyle D^{2}\times I$ to the bottom disk by a reflection across a diameter of the disk.
Alternatively, one can visualize the solid Klein bottle as the trivial product $\scriptstyle M{\ddot {o}}\times I$ of the Möbius strip and an interval $\scriptstyle I=[0,1]$. In this model one can see that the core central curve at 1/2 has a regular neighborhood which is again a trivial Cartesian product, $\scriptstyle M{\ddot {o}}\times [{\frac {1}{2}}-\varepsilon ,{\frac {1}{2}}+\varepsilon ]$, whose boundary is a Klein bottle.
References
1. Carter, J. Scott (1995), How Surfaces Intersect in Space: An Introduction to Topology, K & E series on knots and everything, vol. 2, World Scientific, p. 169, ISBN 9789810220662.
Solid of revolution
In geometry, a solid of revolution is a solid figure obtained by rotating a plane figure around some straight line (the axis of revolution), which may not intersect the generatrix (except at its boundary). The surface created by this revolution and which bounds the solid is the surface of revolution.
Assuming that the curve does not cross the axis, the solid's volume is equal to the length of the circle described by the figure's centroid multiplied by the figure's area (Pappus's second centroid theorem).
A representative disc is a three-dimensional volume element of a solid of revolution. The element is created by rotating a line segment (of length w) around some axis (located r units away), so that a cylindrical volume of πr²w units is enclosed.
Finding the volume
Two common methods for finding the volume of a solid of revolution are the disc method and the shell method of integration. To apply these methods, it is easiest to draw the graph in question; identify the area that is to be revolved about the axis of revolution; determine the volume of either a disc-shaped slice of the solid, with thickness δx, or a cylindrical shell of width δx; and then find the limiting sum of these volumes as δx approaches 0, a value which may be found by evaluating a suitable integral. A more rigorous justification can be given by attempting to evaluate a triple integral in cylindrical coordinates with two different orders of integration.
Disc method
Main article: Disc integration
The disc method is used when the slice that was drawn is perpendicular to the axis of revolution; i.e. when integrating parallel to the axis of revolution.
The volume of the solid formed by rotating the area between the curves of f(y) and g(y) and the lines y = a and y = b about the y-axis is given by
$V=\pi \int _{a}^{b}\left|f(y)^{2}-g(y)^{2}\right|\,dy\,.$
If g(y) = 0 (e.g. revolving an area between the curve and the y-axis), this reduces to:
$V=\pi \int _{a}^{b}f(y)^{2}\,dy\,.$
The method can be visualized by considering a thin horizontal rectangle at y between f(y) on top and g(y) on the bottom, and revolving it about the y-axis; it forms a ring (or disc in the case that g(y) = 0), with outer radius f(y) and inner radius g(y). The area of a ring is π(R² − r²), where R is the outer radius (in this case f(y)) and r is the inner radius (in this case g(y)). The volume of each infinitesimal disc is therefore πf(y)² dy. The limit of the Riemann sum of the volumes of the discs between a and b becomes the integral given above.
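As an illustrative sketch of the disc method (the function name `disc_volume` and the midpoint Riemann sum are choices made here, not taken from the source), the following Python snippet approximates the integral above and recovers the volume of a unit hemisphere:

```python
import math

def disc_volume(f, a, b, g=lambda y: 0.0, n=100_000):
    """Midpoint-rule approximation of V = pi * integral_a^b |f(y)^2 - g(y)^2| dy."""
    h = (b - a) / n
    total = sum(abs(f(a + (i + 0.5) * h) ** 2 - g(a + (i + 0.5) * h) ** 2)
                for i in range(n))
    return math.pi * total * h

# Revolving x = sqrt(1 - y^2) for 0 <= y <= 1 about the y-axis gives a
# unit hemisphere, whose volume is 2*pi/3.
V = disc_volume(lambda y: math.sqrt(1 - y * y), 0.0, 1.0)
print(V)  # ~2.0944
```

Any smooth f and g on [a, b] can be substituted; only the integrand changes.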
Assuming the applicability of Fubini's theorem and the multivariate change of variables formula, the disk method may be derived in a straightforward manner by (denoting the solid as D):
$V=\iiint _{D}dV=\int _{a}^{b}\int _{g(z)}^{f(z)}\int _{0}^{2\pi }r\,d\theta \,dr\,dz=2\pi \int _{a}^{b}\int _{g(z)}^{f(z)}r\,dr\,dz=2\pi \int _{a}^{b}\left.{\tfrac {1}{2}}r^{2}\right|_{g(z)}^{f(z)}\,dz=\pi \int _{a}^{b}\left(f(z)^{2}-g(z)^{2}\right)dz$
Cylinder method
Main article: Shell integration
The cylinder method is used when the slice that was drawn is parallel to the axis of revolution; i.e. when integrating perpendicular to the axis of revolution.
The volume of the solid formed by rotating the area between the curves of f(x) and g(x) and the lines x = a and x = b about the y-axis is given by
$V=2\pi \int _{a}^{b}x|f(x)-g(x)|\,dx\,.$
If g(x) = 0 (e.g. revolving an area between curve and y-axis), this reduces to:
$V=2\pi \int _{a}^{b}x|f(x)|\,dx\,.$
The method can be visualized by considering a thin vertical rectangle at x with height f(x) − g(x), and revolving it about the y-axis; it forms a cylindrical shell. The lateral surface area of a cylinder is 2πrh, where r is the radius (in this case x), and h is the height (in this case f(x) − g(x)). Summing up all of the surface areas along the interval gives the total volume.
This method may be derived with the same triple integral, this time with a different order of integration:
$V=\iiint _{D}dV=\int _{a}^{b}\int _{g(r)}^{f(r)}\int _{0}^{2\pi }r\,d\theta \,dz\,dr=2\pi \int _{a}^{b}\int _{g(r)}^{f(r)}r\,dz\,dr=2\pi \int _{a}^{b}r(f(r)-g(r))\,dr.$
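The shell formula can likewise be checked numerically; `shell_volume` below is a hypothetical helper using the same midpoint rule as a sketch, not a definitive implementation:

```python
import math

def shell_volume(f, a, b, g=lambda x: 0.0, n=100_000):
    """Midpoint-rule approximation of V = 2*pi * integral_a^b x |f(x) - g(x)| dx."""
    h = (b - a) / n
    total = sum((x := a + (i + 0.5) * h) * abs(f(x) - g(x)) for i in range(n))
    return 2 * math.pi * total * h

# Revolving the triangle under f(x) = 1 - x on [0, 1] about the y-axis
# gives a cone of base radius 1 and height 1, volume pi/3.
V = shell_volume(lambda x: 1 - x, 0.0, 1.0)
print(V)  # ~1.0472
```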
Parametric form
When a curve is defined by its parametric form (x(t),y(t)) in some interval [a,b], the volumes of the solids generated by revolving the curve around the x-axis or the y-axis are given by[1]
$V_{x}=\int _{a}^{b}\pi y^{2}\,{\frac {dx}{dt}}\,dt\,,$
$V_{y}=\int _{a}^{b}\pi x^{2}\,{\frac {dy}{dt}}\,dt\,.$
Under the same circumstances the areas of the surfaces of the solids generated by revolving the curve around the x-axis or the y-axis are given by[2]
$A_{x}=\int _{a}^{b}2\pi y\,{\sqrt {\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\,dt\,,$
$A_{y}=\int _{a}^{b}2\pi x\,{\sqrt {\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\,dt\,.$
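The parametric volume formula can be spot-checked on the unit sphere; the helper name `parametric_volume_x` and the midpoint rule below are illustrative choices:

```python
import math

def parametric_volume_x(y, dxdt, a, b, n=200_000):
    """Midpoint-rule approximation of V_x = integral_a^b pi * y(t)^2 * (dx/dt) dt."""
    h = (b - a) / n
    total = sum(math.pi * y(t := a + (i + 0.5) * h) ** 2 * dxdt(t)
                for i in range(n))
    return total * h

# Upper unit semicircle parametrized so that x runs from -1 to 1:
# x(t) = -cos(t), y(t) = sin(t), t in [0, pi].  Revolving it about the
# x-axis sweeps out the unit sphere, volume 4*pi/3.
V = parametric_volume_x(math.sin, math.sin, 0.0, math.pi)
print(V)  # ~4.18879
```

Note the parametrization must traverse the curve so that x increases; otherwise the signed integral changes sign.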
Polar form
For a polar curve $r=f(\theta )$ where $\alpha \leq \theta \leq \beta $, the volumes of the solids generated by revolving the curve around the x-axis or y-axis are
$V_{x}=\int _{\alpha }^{\beta }\left(\pi r^{2}\sin ^{2}{\theta }\cos {\theta }\,{\frac {dr}{d\theta }}-\pi r^{3}\sin ^{3}{\theta }\right)d\theta \,,$
$V_{y}=\int _{\alpha }^{\beta }\left(\pi r^{2}\sin {\theta }\cos ^{2}{\theta }\,{\frac {dr}{d\theta }}+\pi r^{3}\cos ^{3}{\theta }\right)d\theta \,.$
The areas of the surfaces of the solids generated by revolving the curve around the x-axis or the y-axis are given by
$A_{x}=\int _{\alpha }^{\beta }2\pi r\sin {\theta }\,{\sqrt {r^{2}+\left({\frac {dr}{d\theta }}\right)^{2}}}\,d\theta \,,$
$A_{y}=\int _{\alpha }^{\beta }2\pi r\cos {\theta }\,{\sqrt {r^{2}+\left({\frac {dr}{d\theta }}\right)^{2}}}\,d\theta \,.$
See also
• Gabriel's Horn
• Guldinus theorem
• Pseudosphere
• Surface of revolution
• Ungula
Notes
1. Sharma, A. K. (2005). Application Of Integral Calculus. Discovery Publishing House. p. 168. ISBN 81-7141-967-4.
2. Singh, Ravish R. (1993). Engineering Mathematics (6th ed.). Tata McGraw-Hill. p. 6.90. ISBN 0-07-014615-2.
References
• "Volumes of Solids of Revolution". CliffsNotes.com. 12 Apr 2011. Archived from the original on 2012-03-19.
• Ayres, Frank; Mendelson, Elliott (2008). Calculus. Schaum's Outlines. McGraw-Hill Professional. pp. 244–248. ISBN 978-0-07-150861-2. (online copy, p. 244, at Google Books)
• Weisstein, Eric W. "Solid of Revolution". MathWorld.
Solid harmonics
In physics and mathematics, the solid harmonics are solutions of the Laplace equation in spherical polar coordinates, assumed to be (smooth) functions $\mathbb {R} ^{3}\to \mathbb {C} $. There are two kinds: the regular solid harmonics $R_{\ell }^{m}(\mathbf {r} )$, which are well-defined at the origin, and the irregular solid harmonics $I_{\ell }^{m}(\mathbf {r} )$, which are singular at the origin. Both sets of functions play an important role in potential theory, and are obtained by rescaling spherical harmonics appropriately:
$R_{\ell }^{m}(\mathbf {r} )\equiv {\sqrt {\frac {4\pi }{2\ell +1}}}\;r^{\ell }Y_{\ell }^{m}(\theta ,\varphi )$
$I_{\ell }^{m}(\mathbf {r} )\equiv {\sqrt {\frac {4\pi }{2\ell +1}}}\;{\frac {Y_{\ell }^{m}(\theta ,\varphi )}{r^{\ell +1}}}$
Derivation, relation to spherical harmonics
Introducing r, θ, and φ for the spherical polar coordinates of the 3-vector r, and assuming that $\Phi $ is a (smooth) function $\mathbb {R} ^{3}\to \mathbb {C} $, we can write the Laplace equation in the following form
$\nabla ^{2}\Phi (\mathbf {r} )=\left({\frac {1}{r}}{\frac {\partial ^{2}}{\partial r^{2}}}r-{\frac {{\hat {l}}^{2}}{r^{2}}}\right)\Phi (\mathbf {r} )=0,\qquad \mathbf {r} \neq \mathbf {0} ,$
where ${\hat {l}}^{2}$ is the square of the nondimensional angular momentum operator,
$\mathbf {\hat {l}} =-i\,(\mathbf {r} \times \mathbf {\nabla } ).$
It is known that spherical harmonics $Y_{\ell }^{m}$ are eigenfunctions of ${\hat {l}}^{2}$:
${\hat {l}}^{2}Y_{\ell }^{m}\equiv \left[{{\hat {l}}_{x}}^{2}+{\hat {l}}_{y}^{2}+{\hat {l}}_{z}^{2}\right]Y_{\ell }^{m}=\ell (\ell +1)Y_{\ell }^{m}.$
Substitution of $\Phi (\mathbf {r} )=F(r)\,Y_{\ell }^{m}$ into the Laplace equation gives, after dividing out the spherical harmonic function, the following radial equation and its general solution,
${\frac {1}{r}}{\frac {\partial ^{2}}{\partial r^{2}}}rF(r)={\frac {\ell (\ell +1)}{r^{2}}}F(r)\Longrightarrow F(r)=Ar^{\ell }+Br^{-\ell -1}.$
The particular solutions of the total Laplace equation are regular solid harmonics:
$R_{\ell }^{m}(\mathbf {r} )\equiv {\sqrt {\frac {4\pi }{2\ell +1}}}\;r^{\ell }Y_{\ell }^{m}(\theta ,\varphi ),$
and irregular solid harmonics:
$I_{\ell }^{m}(\mathbf {r} )\equiv {\sqrt {\frac {4\pi }{2\ell +1}}}\;{\frac {Y_{\ell }^{m}(\theta ,\varphi )}{r^{\ell +1}}}.$
The regular solid harmonics correspond to harmonic homogeneous polynomials, i.e. homogeneous polynomials which are solutions to Laplace's equation.
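This correspondence can be spot-checked numerically. The sketch below (function names are illustrative) verifies by central finite differences that $R_{2}^{0}=(3z^{2}-r^{2})/2$, a homogeneous quadratic, satisfies Laplace's equation:

```python
def R20(x, y, z):
    """Regular solid harmonic R_2^0 = (3 z^2 - r^2) / 2 (Racah normalization)."""
    return (3 * z * z - (x * x + y * y + z * z)) / 2

def laplacian(f, x, y, z, h=1e-3):
    """Central finite-difference Laplacian; exact up to rounding for quadratics."""
    return ((f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z))
            + (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z))
            + (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h))) / h**2

print(laplacian(R20, 0.3, -0.7, 1.1))  # ~0: R_2^0 is harmonic
```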
Racah's normalization
Racah's normalization (also known as Schmidt's semi-normalization) is applied to both functions
$\int _{0}^{\pi }\sin \theta \,d\theta \int _{0}^{2\pi }d\varphi \;R_{\ell }^{m}(\mathbf {r} )^{*}\;R_{\ell }^{m}(\mathbf {r} )={\frac {4\pi }{2\ell +1}}r^{2\ell }$
(and analogously for the irregular solid harmonic) instead of normalization to unity. This is convenient because in many applications the Racah normalization factor appears unchanged throughout the derivations.
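As an illustrative check of this normalization (the discretization below is an arbitrary choice), the angular integral for $R_{1}^{0}=z=r\cos \theta $ on the unit sphere can be evaluated numerically; it should equal $4\pi /(2\ell +1)=4\pi /3$ for $\ell =1$:

```python
import math

# Racah-normalization integral for R_1^0 on the sphere r = 1, by the
# midpoint rule in theta; the integrand is independent of phi, which
# contributes a factor 2*pi.
N = 200_000
dth = math.pi / N
integral = sum(
    math.sin(th := (i + 0.5) * dth) * math.cos(th) ** 2 * dth
    for i in range(N)
) * 2 * math.pi
print(integral)  # ~4.18879 = 4*pi/3
```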
Addition theorems
The translation of the regular solid harmonic gives a finite expansion,
$R_{\ell }^{m}(\mathbf {r} +\mathbf {a} )=\sum _{\lambda =0}^{\ell }{\binom {2\ell }{2\lambda }}^{1/2}\sum _{\mu =-\lambda }^{\lambda }R_{\lambda }^{\mu }(\mathbf {r} )R_{\ell -\lambda }^{m-\mu }(\mathbf {a} )\;\langle \lambda ,\mu ;\ell -\lambda ,m-\mu |\ell m\rangle ,$
where the Clebsch–Gordan coefficient is given by
$\langle \lambda ,\mu ;\ell -\lambda ,m-\mu |\ell m\rangle ={\binom {\ell +m}{\lambda +\mu }}^{1/2}{\binom {\ell -m}{\lambda -\mu }}^{1/2}{\binom {2\ell }{2\lambda }}^{-1/2}.$
The similar expansion for irregular solid harmonics gives an infinite series,
$I_{\ell }^{m}(\mathbf {r} +\mathbf {a} )=\sum _{\lambda =0}^{\infty }{\binom {2\ell +2\lambda +1}{2\lambda }}^{1/2}\sum _{\mu =-\lambda }^{\lambda }R_{\lambda }^{\mu }(\mathbf {r} )I_{\ell +\lambda }^{m-\mu }(\mathbf {a} )\;\langle \lambda ,\mu ;\ell +\lambda ,m-\mu |\ell m\rangle $
with $|r|\leq |a|\,$. The quantity between angle brackets is again a Clebsch–Gordan coefficient,
$\langle \lambda ,\mu ;\ell +\lambda ,m-\mu |\ell m\rangle =(-1)^{\lambda +\mu }{\binom {\ell +\lambda -m+\mu }{\lambda +\mu }}^{1/2}{\binom {\ell +\lambda +m-\mu }{\lambda -\mu }}^{1/2}{\binom {2\ell +2\lambda +1}{2\lambda }}^{-1/2}.$
The addition theorems were proved in different manners by several authors.[1][2]
Complex form
The regular solid harmonics are homogeneous, polynomial solutions to the Laplace equation $\Delta R=0$. Separating the indeterminate $z$ and writing $ R=\sum _{a}p_{a}(x,y)z^{a}$, the Laplace equation is easily seen to be equivalent to the recursion formula
$p_{a+2}={\frac {-\left(\partial _{x}^{2}+\partial _{y}^{2}\right)p_{a}}{\left(a+2\right)\left(a+1\right)}}$
so that any choice of polynomials $p_{0}(x,y)$ of degree $\ell $ and $p_{1}(x,y)$ of degree $\ell -1$ gives a solution to the equation. One particular basis of the space of homogeneous polynomials (in two variables) of degree $k$ is $\left\{(x^{2}+y^{2})^{m}(x\pm iy)^{k-2m}\mid 0\leq m\leq k/2\right\}$. Note that it is the (unique up to normalization) basis of eigenvectors of the rotation group $SO(2)$: The rotation $\rho _{\alpha }$ of the plane by $\alpha \in [0,2\pi ]$ acts as multiplication by $e^{\pm i(k-2m)\alpha }$ on the basis vector $(x^{2}+y^{2})^{m}(x+iy)^{k-2m}$.
If we combine the degree $\ell $ basis and the degree $\ell -1$ basis with the recursion formula, we obtain a basis of the space of harmonic, homogeneous polynomials (in three variables this time) of degree $\ell $ consisting of eigenvectors for $SO(2)$ (note that the recursion formula is compatible with the $SO(2)$-action because the Laplace operator is rotationally invariant). These are the complex solid harmonics:
${\begin{aligned}R_{\ell }^{\pm \ell }&=(x\pm iy)^{\ell }z^{0}\\R_{\ell }^{\pm (\ell -1)}&=(x\pm iy)^{\ell -1}z^{1}\\R_{\ell }^{\pm (\ell -2)}&=(x^{2}+y^{2})(x\pm iy)^{\ell -2}z^{0}+{\frac {-(\partial _{x}^{2}+\partial _{y}^{2})\left((x^{2}+y^{2})(x\pm iy)^{\ell -2}\right)}{1\cdot 2}}z^{2}\\R_{\ell }^{\pm (\ell -3)}&=(x^{2}+y^{2})(x\pm iy)^{\ell -3}z^{1}+{\frac {-(\partial _{x}^{2}+\partial _{y}^{2})\left((x^{2}+y^{2})(x\pm iy)^{\ell -3}\right)}{2\cdot 3}}z^{3}\\R_{\ell }^{\pm (\ell -4)}&=(x^{2}+y^{2})^{2}(x\pm iy)^{\ell -4}z^{0}+{\frac {-(\partial _{x}^{2}+\partial _{y}^{2})\left((x^{2}+y^{2})^{2}(x\pm iy)^{\ell -4}\right)}{1\cdot 2}}z^{2}+{\frac {(\partial _{x}^{2}+\partial _{y}^{2})^{2}\left((x^{2}+y^{2})^{2}(x\pm iy)^{\ell -4}\right)}{1\cdot 2\cdot 3\cdot 4}}z^{4}\\R_{\ell }^{\pm (\ell -5)}&=(x^{2}+y^{2})^{2}(x\pm iy)^{\ell -5}z^{1}+{\frac {-(\partial _{x}^{2}+\partial _{y}^{2})\left((x^{2}+y^{2})^{2}(x\pm iy)^{\ell -5}\right)}{2\cdot 3}}z^{3}+{\frac {(\partial _{x}^{2}+\partial _{y}^{2})^{2}\left((x^{2}+y^{2})^{2}(x\pm iy)^{\ell -5}\right)}{2\cdot 3\cdot 4\cdot 5}}z^{5}\\&\;\,\vdots \end{aligned}}$
and in general
$R_{\ell }^{\pm m}={\begin{cases}\sum _{k}(\partial _{x}^{2}+\partial _{y}^{2})^{k}\left((x^{2}+y^{2})^{(\ell -m)/2}(x\pm iy)^{m}\right){\frac {(-1)^{k}z^{2k}}{(2k)!}}&\ell -m{\text{ is even}}\\\sum _{k}(\partial _{x}^{2}+\partial _{y}^{2})^{k}\left((x^{2}+y^{2})^{(\ell -1-m)/2}(x\pm iy)^{m}\right){\frac {(-1)^{k}z^{2k+1}}{(2k+1)!}}&\ell -m{\text{ is odd}}\end{cases}}$
for $0\leq m\leq \ell $.
Plugging in spherical coordinates $x=r\sin \theta \cos \varphi $, $y=r\sin \theta \sin \varphi $, $z=r\cos \theta $ and using $x^{2}+y^{2}=r^{2}\sin ^{2}\theta =r^{2}(1-\cos ^{2}\theta )$ one finds the usual relationship to spherical harmonics, $R_{\ell }^{m}=r^{\ell }e^{im\varphi }P_{\ell }^{m}(\cos \theta )$ with a polynomial $P_{\ell }^{m}$ which is (up to normalization) the associated Legendre polynomial, so that $R_{\ell }^{m}=r^{\ell }Y_{\ell }^{m}(\theta ,\varphi )$ (again, up to the specific choice of normalization).
Real form
By a simple linear combination of solid harmonics of ±m these functions are transformed into real functions, i.e. functions $\mathbb {R} ^{3}\to \mathbb {R} $. The real regular solid harmonics, expressed in Cartesian coordinates, are real-valued homogeneous polynomials of order $\ell $ in x, y, z. The explicit form of these polynomials is of some importance. They appear, for example, in the form of spherical atomic orbitals and real multipole moments. The explicit Cartesian expression of the real regular harmonics will now be derived.
Linear combination
We write in agreement with the earlier definition
$R_{\ell }^{m}(r,\theta ,\varphi )=(-1)^{(m+|m|)/2}\;r^{\ell }\;\Theta _{\ell }^{|m|}(\cos \theta )e^{im\varphi },\qquad -\ell \leq m\leq \ell ,$
with
$\Theta _{\ell }^{m}(\cos \theta )\equiv \left[{\frac {(\ell -m)!}{(\ell +m)!}}\right]^{1/2}\,\sin ^{m}\theta \,{\frac {d^{m}P_{\ell }(\cos \theta )}{d\cos ^{m}\theta }},\qquad m\geq 0,$
where $P_{\ell }(\cos \theta )$ is a Legendre polynomial of degree ℓ. The m-dependent phase $(-1)^{(m+|m|)/2}$ is known as the Condon–Shortley phase.
The following expression defines the real regular solid harmonics:
${\begin{pmatrix}C_{\ell }^{m}\\S_{\ell }^{m}\end{pmatrix}}\equiv {\sqrt {2}}\;r^{\ell }\;\Theta _{\ell }^{m}{\begin{pmatrix}\cos m\varphi \\\sin m\varphi \end{pmatrix}}={\frac {1}{\sqrt {2}}}{\begin{pmatrix}(-1)^{m}&\quad 1\\-(-1)^{m}i&\quad i\end{pmatrix}}{\begin{pmatrix}R_{\ell }^{m}\\R_{\ell }^{-m}\end{pmatrix}},\qquad m>0.$
and for m = 0:
$C_{\ell }^{0}\equiv R_{\ell }^{0}.$
Since the transformation is by a unitary matrix the normalization of the real and the complex solid harmonics is the same.
z-dependent part
Upon writing u = cos θ the m-th derivative of the Legendre polynomial can be written as the following expansion in u
${\frac {d^{m}P_{\ell }(u)}{du^{m}}}=\sum _{k=0}^{\left\lfloor (\ell -m)/2\right\rfloor }\gamma _{\ell k}^{(m)}\;u^{\ell -2k-m}$
with
$\gamma _{\ell k}^{(m)}=(-1)^{k}2^{-\ell }{\binom {\ell }{k}}{\binom {2\ell -2k}{\ell }}{\frac {(\ell -2k)!}{(\ell -2k-m)!}}.$
Since z = r cos θ it follows that this derivative, times an appropriate power of r, is a simple polynomial in z,
$\Pi _{\ell }^{m}(z)\equiv r^{\ell -m}{\frac {d^{m}P_{\ell }(u)}{du^{m}}}=\sum _{k=0}^{\left\lfloor (\ell -m)/2\right\rfloor }\gamma _{\ell k}^{(m)}\;r^{2k}\;z^{\ell -2k-m}.$
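The expansion can be verified by direct computation. The helper `Pi` below (a hypothetical name) evaluates $\Pi _{\ell }^{m}(z)$ from the $\gamma _{\ell k}^{(m)}$ formula and, for $\ell =2$, $m=0$, reproduces the closed form $(3z^{2}-r^{2})/2$:

```python
from math import comb, factorial

def Pi(l, m, z, r):
    """Pi_l^m(z) = r^(l-m) * d^m P_l(u)/du^m at u = z/r, via the gamma expansion."""
    total = 0.0
    for k in range((l - m) // 2 + 1):
        g = ((-1) ** k / 2 ** l * comb(l, k) * comb(2 * l - 2 * k, l)
             * factorial(l - 2 * k) / factorial(l - 2 * k - m))
        total += g * r ** (2 * k) * z ** (l - 2 * k - m)
    return total

# Compare with the closed form (3 z^2 - r^2)/2 at an arbitrary point.
z, r = 0.6, 1.3
print(Pi(2, 0, z, r), (3 * z * z - r * r) / 2)  # both ~ -0.305
```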
(x,y)-dependent part
Consider next, recalling that x = r sin θ cos φ and y = r sin θ sin φ,
$r^{m}\sin ^{m}\theta \cos m\varphi ={\frac {1}{2}}\left[(r\sin \theta e^{i\varphi })^{m}+(r\sin \theta e^{-i\varphi })^{m}\right]={\frac {1}{2}}\left[(x+iy)^{m}+(x-iy)^{m}\right]$
Likewise
$r^{m}\sin ^{m}\theta \sin m\varphi ={\frac {1}{2i}}\left[(r\sin \theta e^{i\varphi })^{m}-(r\sin \theta e^{-i\varphi })^{m}\right]={\frac {1}{2i}}\left[(x+iy)^{m}-(x-iy)^{m}\right].$
Further
$A_{m}(x,y)\equiv {\frac {1}{2}}\left[(x+iy)^{m}+(x-iy)^{m}\right]=\sum _{p=0}^{m}{\binom {m}{p}}x^{p}y^{m-p}\cos(m-p){\frac {\pi }{2}}$
and
$B_{m}(x,y)\equiv {\frac {1}{2i}}\left[(x+iy)^{m}-(x-iy)^{m}\right]=\sum _{p=0}^{m}{\binom {m}{p}}x^{p}y^{m-p}\sin(m-p){\frac {\pi }{2}}.$
In total
$C_{\ell }^{m}(x,y,z)=\left[{\frac {(2-\delta _{m0})(\ell -m)!}{(\ell +m)!}}\right]^{1/2}\Pi _{\ell }^{m}(z)\;A_{m}(x,y),\qquad m=0,1,\ldots ,\ell $
$S_{\ell }^{m}(x,y,z)=\left[{\frac {2(\ell -m)!}{(\ell +m)!}}\right]^{1/2}\Pi _{\ell }^{m}(z)\;B_{m}(x,y),\qquad m=1,2,\ldots ,\ell .$
List of lowest functions
We list explicitly the lowest functions up to and including ℓ = 5. Here ${\bar {\Pi }}_{\ell }^{m}(z)\equiv \left[{\tfrac {(2-\delta _{m0})(\ell -m)!}{(\ell +m)!}}\right]^{1/2}\Pi _{\ell }^{m}(z).$
${\begin{aligned}{\bar {\Pi }}_{0}^{0}&=1&{\bar {\Pi }}_{3}^{1}&={\frac {1}{4}}{\sqrt {6}}(5z^{2}-r^{2})&{\bar {\Pi }}_{4}^{4}&={\frac {1}{8}}{\sqrt {35}}\\{\bar {\Pi }}_{1}^{0}&=z&{\bar {\Pi }}_{3}^{2}&={\frac {1}{2}}{\sqrt {15}}\;z&{\bar {\Pi }}_{5}^{0}&={\frac {1}{8}}z(63z^{4}-70z^{2}r^{2}+15r^{4})\\{\bar {\Pi }}_{1}^{1}&=1&{\bar {\Pi }}_{3}^{3}&={\frac {1}{4}}{\sqrt {10}}&{\bar {\Pi }}_{5}^{1}&={\frac {1}{8}}{\sqrt {15}}(21z^{4}-14z^{2}r^{2}+r^{4})\\{\bar {\Pi }}_{2}^{0}&={\frac {1}{2}}(3z^{2}-r^{2})&{\bar {\Pi }}_{4}^{0}&={\frac {1}{8}}(35z^{4}-30r^{2}z^{2}+3r^{4})&{\bar {\Pi }}_{5}^{2}&={\frac {1}{4}}{\sqrt {105}}(3z^{2}-r^{2})z\\{\bar {\Pi }}_{2}^{1}&={\sqrt {3}}z&{\bar {\Pi }}_{4}^{1}&={\frac {\sqrt {10}}{4}}z(7z^{2}-3r^{2})&{\bar {\Pi }}_{5}^{3}&={\frac {1}{16}}{\sqrt {70}}(9z^{2}-r^{2})\\{\bar {\Pi }}_{2}^{2}&={\frac {1}{2}}{\sqrt {3}}&{\bar {\Pi }}_{4}^{2}&={\frac {1}{4}}{\sqrt {5}}(7z^{2}-r^{2})&{\bar {\Pi }}_{5}^{4}&={\frac {3}{8}}{\sqrt {35}}z\\{\bar {\Pi }}_{3}^{0}&={\frac {1}{2}}z(5z^{2}-3r^{2})&{\bar {\Pi }}_{4}^{3}&={\frac {1}{4}}{\sqrt {70}}\;z&{\bar {\Pi }}_{5}^{5}&={\frac {3}{16}}{\sqrt {14}}\\\end{aligned}}$
The lowest functions $A_{m}(x,y)\,$ and $B_{m}(x,y)\,$ are:
m Am Bm
0 $1\,$ $0\,$
1 $x\,$ $y\,$
2 $x^{2}-y^{2}\,$ $2xy\,$
3 $x^{3}-3xy^{2}\,$ $3x^{2}y-y^{3}\,$
4 $x^{4}-6x^{2}y^{2}+y^{4}\,$ $4x^{3}y-4xy^{3}\,$
5 $x^{5}-10x^{3}y^{2}+5xy^{4}\,$ $5x^{4}y-10x^{2}y^{3}+y^{5}\,$
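Since $A_{m}$ and $B_{m}$ are the real and imaginary parts of $(x+iy)^{m}$, the table entries can be spot-checked with complex arithmetic (the sample point below is an arbitrary choice):

```python
# Check A_3 = x^3 - 3xy^2 and B_3 = 3x^2 y - y^3 against Re and Im of (x + iy)^3.
x, y, m = 1.3, 0.7, 3
w = (x + 1j * y) ** m
print(w.real, x**3 - 3 * x * y**2)   # A_3: both ~0.286
print(w.imag, 3 * x**2 * y - y**3)   # B_3: both ~3.206
```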
References
1. R. J. A. Tough and A. J. Stone, J. Phys. A: Math. Gen. Vol. 10, p. 1261 (1977)
2. M. J. Caola, J. Phys. A: Math. Gen. Vol. 11, p. L23 (1978)
• Steinborn, E. O.; Ruedenberg, K. (1973). "Rotation and Translation of Regular and Irregular Solid Spherical Harmonics". In Lowdin, Per-Olov (ed.). Advances in quantum chemistry. Vol. 7. Academic Press. pp. 1–82. ISBN 9780080582320.
• Thompson, William J. (2004). Angular momentum: an illustrated guide to rotational symmetries for physical systems. Weinheim: Wiley-VCH. pp. 143–148. ISBN 9783527617838.
Solid partition
In mathematics, solid partitions are natural generalizations of partitions and plane partitions defined by Percy Alexander MacMahon.[1] A solid partition of $n$ is a three-dimensional array of non-negative integers $n_{i,j,k}$ (with indices $i,j,k\geq 1$) such that
$\sum _{i,j,k}n_{i,j,k}=n$
and
$n_{i+1,j,k}\leq n_{i,j,k},\quad n_{i,j+1,k}\leq n_{i,j,k}\quad {\text{and}}\quad n_{i,j,k+1}\leq n_{i,j,k}$ for all $i,j{\text{ and }}k.$
Let $p_{3}(n)$ denote the number of solid partitions of $n$. As the definition of solid partitions involves three-dimensional arrays of numbers, they are also called three-dimensional partitions, in a notation where plane partitions are two-dimensional partitions and ordinary partitions are one-dimensional partitions. Solid partitions and their higher-dimensional generalizations are discussed in the book by Andrews.[2]
Ferrers diagrams for solid partitions
Another representation for solid partitions is in the form of Ferrers diagrams. The Ferrers diagram of a solid partition of $n$ is a collection of $n$ points or nodes, $\lambda =(\mathbf {y} _{1},\mathbf {y} _{2},\ldots ,\mathbf {y} _{n})$, with $\mathbf {y} _{i}\in \mathbb {Z} _{\geq 0}^{4}$ satisfying the condition:[3]
Condition FD: If the node $\mathbf {a} =(a_{1},a_{2},a_{3},a_{4})\in \lambda $, then so do all the nodes $\mathbf {y} =(y_{1},y_{2},y_{3},y_{4})$ with $0\leq y_{i}\leq a_{i}$ for all $i=1,2,3,4$.
For instance, the Ferrers diagram
$\left({\begin{smallmatrix}0\\0\\0\\0\end{smallmatrix}}{\begin{smallmatrix}0\\0\\1\\0\end{smallmatrix}}{\begin{smallmatrix}0\\1\\0\\0\end{smallmatrix}}{\begin{smallmatrix}1\\0\\0\\0\end{smallmatrix}}{\begin{smallmatrix}1\\1\\0\\0\end{smallmatrix}}\right)\ ,$
where each column is a node, represents a solid partition of $5$. There is a natural action of the permutation group $S_{4}$ on a Ferrers diagram – this corresponds to permuting the four coordinates of all nodes. This generalises the operation denoted by conjugation on usual partitions.
Equivalence of the two representations
Given a Ferrers diagram, one constructs the solid partition (as in the main definition) as follows.
Let $n_{i,j,k}$ be the number of nodes in the Ferrers diagram with coordinates of the form $(i-1,j-1,k-1,*)$ where $*$ denotes an arbitrary value. The collection $n_{i,j,k}$ form a solid partition. One can verify that condition FD implies that the conditions for a solid partition are satisfied.
Given a set of $n_{i,j,k}$ that form a solid partition, one obtains the corresponding Ferrers diagram as follows.
Start with the Ferrers diagram with no nodes. For every non-zero $n_{i,j,k}$, add $n_{i,j,k}$ nodes $(i-1,j-1,k-1,y_{4})$ for $0\leq y_{4}<n_{i,j,k}$ to the Ferrers diagram. By construction, it is easy to see that condition FD is satisfied.
For example, the Ferrers diagram with $5$ nodes given above corresponds to the solid partition with
$n_{1,1,1}=n_{2,1,1}=n_{1,2,1}=n_{1,1,2}=n_{2,2,1}=1$
with all other $n_{i,j,k}$ vanishing.
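The conversion just described can be sketched in a few lines of Python (variable names are illustrative):

```python
from collections import Counter

# Nodes of the Ferrers diagram above (one 4-tuple per column).
nodes = [(0, 0, 0, 0), (0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 0), (1, 1, 0, 0)]

# n_{i,j,k} counts the nodes of the form (i-1, j-1, k-1, *).
n = Counter((a + 1, b + 1, c + 1) for (a, b, c, d) in nodes)
print(sorted(n.items()))
# [((1, 1, 1), 1), ((1, 1, 2), 1), ((1, 2, 1), 1), ((2, 1, 1), 1), ((2, 2, 1), 1)]
```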
Generating function
Let $p_{3}(0)\equiv 1$. Define the generating function of solid partitions, $P_{3}(q)$, by
$P_{3}(q):=\sum _{n=0}^{\infty }p_{3}(n)q^{n}=1+q+4q^{2}+10q^{3}+26q^{4}+59q^{5}+140q^{6}+\cdots .$
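For small $n$ these coefficients can be reproduced by brute force, growing the four-dimensional Ferrers diagrams of the previous section one node at a time. The function below is an illustrative sketch, not one of the published enumeration algorithms:

```python
def solid_partitions(n):
    """Count solid partitions of n by enumerating 4-dimensional Ferrers
    diagrams (downward-closed node sets in Z_{>=0}^4) of size n."""
    def addable(I, p):
        # p may be added iff every coordinate-wise predecessor is already in I
        return all(
            tuple(p[j] - (j == i) for j in range(4)) in I
            for i in range(4) if p[i] > 0
        )
    ideals = {frozenset()}          # all diagrams of the current size
    for _ in range(n):
        bigger = set()
        for I in ideals:
            # candidate new nodes: the origin and successors of existing nodes
            cands = {(0, 0, 0, 0)}
            for p in I:
                for i in range(4):
                    cands.add(tuple(p[j] + (j == i) for j in range(4)))
            for c in cands - I:
                if addable(I, c):
                    bigger.add(I | {c})
        ideals = bigger
    return len(ideals)

print([solid_partitions(n) for n in range(7)])  # [1, 1, 4, 10, 26, 59, 140]
```

The de-duplication through a set of frozensets is exponentially wasteful, which is precisely why the specialized algorithms of Bratley–McKay and Knuth are needed for larger $n$.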
The generating functions of integer partitions and plane partitions have simple product formulae, due to Euler and MacMahon, respectively. However, a generating function guessed by MacMahon fails to correctly reproduce the number of solid partitions of 6.[3] It appears that there is no simple formula for the generating function of solid partitions; in particular, there cannot be any formula analogous to the product formulas of Euler and MacMahon.[4]
Exact enumeration using computers
Given the lack of an explicitly known generating function, the numbers of solid partitions of larger integers have been enumerated numerically. There are two algorithms that are used to enumerate solid partitions and their higher-dimensional generalizations. The work of Atkin et al. used an algorithm due to Bratley and McKay.[5] In 1970, Knuth proposed a different algorithm, based on enumerating topological sequences, which he used to evaluate the numbers of solid partitions of all integers $n\leq 28$.[6] Mustonen and Rajesh extended the enumeration to all integers $n\leq 50$.[7] In 2010, S. Balakrishnan proposed a parallel version of Knuth's algorithm that has been used to extend the enumeration to all integers $n\leq 72$.[8] One finds
$p_{3}(72)=3464274974065172792\ ,$
which is a 19-digit number, illustrating the difficulty of carrying out such exact enumerations.
Asymptotic behavior
It is conjectured that there exists a constant $c$ such that[9][7][10]
$\lim _{n\rightarrow \infty }{\frac {\log p_{3}(n)}{n^{3/4}}}=c.$
References
1. P. A. MacMahon, Combinatory Analysis. Cambridge Univ. Press, London and New York, Vol. 1, 1915 and Vol. 2, 1916; see vol. 2, p 332.
2. G. E. Andrews, The theory of partitions, Cambridge University Press, 1998.
3. A. O. L. Atkin, P. Bratley, I. G. McDonald and J. K. S. McKay, Some computations for m-dimensional partitions, Proc. Camb. Phil. Soc., 63 (1967), 1097–1100.
4. Stanley, Richard P. (1999). Enumerative Combinatorics, volume 2. Cambridge University Press. p. 402.
5. P. Bratley and J. K. S. McKay, "Algorithm 313: Multi-dimensional partition generator", Comm. ACM, 10 (Issue 10, 1967), p. 666.
6. D. E. Knuth, "A note on solid partitions", Math. Comp., 24 (1970), 955–961.
7. Ville Mustonen and R. Rajesh, "Numerical Estimation of the Asymptotic Behaviour of Solid Partitions of an Integer", J. Phys. A: Math. Gen. 36 (2003), no. 24, 6651.cond-mat/0303607
8. Srivatsan Balakrishnan, Suresh Govindarajan and Naveen S. Prabhakar, "On the asymptotics of higher-dimensional partitions", J.Phys. A: Math. Gen. 45 (2012) 055001 arXiv:1105.6231.
9. Destainville, N., & Govindarajan, S. (2015). Estimating the asymptotics of solid partitions. Journal of Statistical Physics, 158, 950-967
10. D P Bhatia, M A Prasad and D Arora, "Asymptotic results for the number of multidimensional partitions of an integer and directed compact lattice animals", J. Phys. A: Math. Gen. 30 (1997) 2281
External links
• OEIS sequence A000293 (Solid (i.e., three-dimensional) partitions)
• The Solid Partitions Project of IIT Madras
• The Mathworld entry for Solid Partitions
Quasitopos
In mathematics, specifically category theory, a quasitopos is a generalization of a topos. A topos has a subobject classifier classifying all subobjects, but in a quasitopos, only strong subobjects are classified. Quasitoposes are also required to be finitely cocomplete and locally cartesian closed.[1] A solid quasitopos is one for which 0 is a strong subobject of 1.[2]
References
1. Wyler, Oswald (1991). Lecture Notes on Topoi and Quasitopoi. ISBN 978-9810201531. Retrieved 3 February 2017.
2. Monro, G.P. (September 1986). "Quasitopoi, logic and heyting-valued models". Journal of Pure and Applied Algebra. 42 (2): 141–164. doi:10.1016/0022-4049(86)90077-0.
External links
• Quasitopos at the nLab
Solid set
In mathematics, specifically in order theory and functional analysis, a subset $S$ of a vector lattice $X$ is said to be solid, and is called an ideal, if for all $s\in S$ and $x\in X,$ $|x|\leq |s|$ implies $x\in S.$ If $S\subseteq X$ then the ideal generated by $S$ is the smallest ideal in $X$ containing $S.$ An ideal generated by a singleton set is called a principal ideal in $X.$ (An ordered vector space whose order is Archimedean is said to be Archimedean ordered.[1])
Examples
The intersection of an arbitrary collection of ideals in $X$ is again an ideal and furthermore, $X$ is clearly an ideal of itself; thus every subset of $X$ is contained in a unique smallest ideal.
In a locally convex vector lattice $X,$ the polar of every solid neighborhood of the origin is a solid subset of the continuous dual space $X^{\prime }$; moreover, the family of all solid equicontinuous subsets of $X^{\prime }$ is a fundamental family of equicontinuous sets, the polars (in bidual $X^{\prime \prime }$) form a neighborhood base of the origin for the natural topology on $X^{\prime \prime }$ (that is, the topology of uniform convergence on equicontinuous subset of $X^{\prime }$).[2]
Properties
• A solid subspace of a vector lattice $X$ is necessarily a sublattice of $X.$[1]
• If $N$ is a solid subspace of a vector lattice $X$ then the quotient $X/N$ is a vector lattice (under the canonical order).[1]
See also
• Vector lattice – Partially ordered vector space, ordered as a lattice
References
1. Schaefer & Wolff 1999, pp. 204–214.
2. Schaefer & Wolff 1999, pp. 234–242.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Solid sweep
The sweep Sw of a solid S is the solid created when a motion M is applied to S. The solid S should be considered to be a set of points in the Euclidean space $\mathbb {R} ^{3}$. The swept solid Sw generated by sweeping S over M then contains all the points over which the points of S have moved during the motion M.
Solid torus
In mathematics, a solid torus is the topological space formed by sweeping a disk around a circle.[1] It is homeomorphic to the Cartesian product $S^{1}\times D^{2}$ of the disk and the circle,[2] endowed with the product topology.
A standard way to visualize a solid torus is as a toroid, embedded in 3-space. However, it should be distinguished from a torus, which has the same visual appearance: the torus is the two-dimensional surface bounding a toroid, while the solid torus also includes the compact interior region enclosed by that surface.
A solid torus is a torus plus the volume inside the torus. Real-world objects that approximate a solid torus include O-rings, non-inflatable lifebuoys, ring doughnuts, and bagels.
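Concretely (an illustrative formula, not from the source), the standard solid torus of tube radius $r$ and central radius $R$, with $0 < r < R$, is the set

```latex
\left\{ (x, y, z) \in \mathbb{R}^3 \;:\; \left( \sqrt{x^2 + y^2} - R \right)^2 + z^2 \le r^2 \right\},
```

whose boundary (obtained by replacing $\le$ with equality) is the ordinary torus.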
Topological properties
The solid torus is a connected, compact, orientable 3-dimensional manifold with boundary. The boundary is homeomorphic to $S^{1}\times S^{1}$, the ordinary torus.
Since the disk $D^{2}$ is contractible, the solid torus has the homotopy type of a circle, $S^{1}$.[3] Therefore the fundamental group and homology groups are isomorphic to those of the circle:
${\begin{aligned}\pi _{1}\left(S^{1}\times D^{2}\right)&\cong \pi _{1}\left(S^{1}\right)\cong \mathbb {Z} ,\\H_{k}\left(S^{1}\times D^{2}\right)&\cong H_{k}\left(S^{1}\right)\cong {\begin{cases}\mathbb {Z} &{\text{if }}k=0,1,\\0&{\text{otherwise}}.\end{cases}}\end{aligned}}$
See also
• Cheerios
• Hyperbolic Dehn surgery
• Reeb foliation
• Whitehead manifold
• Donut
References
1. Falconer, Kenneth (2004), Fractal Geometry: Mathematical Foundations and Applications (2nd ed.), John Wiley & Sons, p. 198, ISBN 9780470871355.
2. Matsumoto, Yukio (2002), An Introduction to Morse Theory, Translations of mathematical monographs, vol. 208, American Mathematical Society, p. 188, ISBN 9780821810224.
3. Ravenel, Douglas C. (1992), Nilpotence and Periodicity in Stable Homotopy Theory, Annals of mathematics studies, vol. 128, Princeton University Press, p. 2, ISBN 9780691025728.
Solinas prime
In mathematics, a Solinas prime, or generalized Mersenne prime, is a prime number that has the form $f(2^{m})$, where $f(x)$ is a low-degree polynomial with small integer coefficients.[1][2] These primes allow fast modular reduction algorithms and are widely used in cryptography. They are named after Jerome Solinas.
This class of numbers encompasses a few other categories of prime numbers:
• Mersenne primes, which have the form $2^{k}-1$,
• Crandall or pseudo-Mersenne primes, which have the form $2^{k}-c$ for small odd $c$.[3]
Modular reduction algorithm
Let $f(t)=t^{d}-c_{d-1}t^{d-1}-...-c_{0}$ be a monic polynomial of degree $d$ with coefficients in $\mathbb {Z} $ and suppose that $p=f(2^{m})$ is a Solinas prime. Given a number $n<p^{2}$ with up to $2md$ bits, we want to find a number congruent to $n$ mod $p$ with only as many bits as $p$ – that is, with at most $md$ bits.
First, represent $n$ in base $2^{m}$:
$n=\sum _{j=0}^{2d-1}A_{j}2^{mj}$
Next, generate a $d$-by-$d$ matrix $X=(X_{i,j})$ by stepping $d$ times the linear-feedback shift register defined over $\mathbb {Z} $ by the polynomial $f$: starting with the $d$-integer register $[0|0|...|0|1]$, shift right one position, injecting $0$ on the left and adding (component-wise) the output value times the vector $[c_{0},...,c_{d-1}]$ at each step (see [1] for details). Let $X_{i,j}$ be the integer in the $j$th register on the $i$th step and note that the first row of $X$ is given by $(X_{0,j})=[c_{0},...,c_{d-1}]$. Then if we denote by $B=(B_{i})$ the integer vector given by:
$(B_{0}...B_{d-1})=(A_{0}...A_{d-1})+(A_{d}...A_{2d-1})X$,
it can be easily checked that:
$\sum _{j=0}^{d-1}B_{j}2^{mj}\equiv \sum _{j=0}^{2d-1}A_{j}2^{mj}\mod p$.
Thus $B$ represents an $md$-bit integer congruent to $n$.
For judicious choices of $f$ (again, see [1]), this algorithm involves only a relatively small number of additions and subtractions (and no divisions!), so it can be much more efficient than the naive modular reduction algorithm, which computes $n-p\cdot \lfloor n/p\rfloor $ and therefore requires a costly multiple-precision division.
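The steps above can be sketched in code. The following is a minimal, unoptimized illustration (the function name is our own); a production implementation would precompute $X$ and work with fixed-width machine words:

```python
def solinas_reduce(n, m, c):
    """Reduce n (0 <= n < p**2) modulo the Solinas prime
    p = 2**(m*d) - c[d-1]*2**(m*(d-1)) - ... - c[0],
    where c = [c_0, ..., c_{d-1}], using only shifts, additions and a
    few final corrections (no division by p)."""
    d = len(c)
    p = (1 << (m * d)) - sum(cj << (m * j) for j, cj in enumerate(c))
    mask = (1 << m) - 1
    # Base-2**m digits A_0 .. A_{2d-1} of n.
    A = [(n >> (m * j)) & mask for j in range(2 * d)]
    # Matrix X: row i represents 2**(m*(d+i)) in base 2**m, modulo p.
    # It is produced by stepping the linear-feedback shift register.
    X = []
    reg = [0] * (d - 1) + [1]
    for _ in range(d):
        out = reg[-1]                        # value shifted out on the right
        reg = [0] + reg[:-1]                 # shift right, inject 0 on the left
        reg = [r + out * cj for r, cj in zip(reg, c)]
        X.append(list(reg))
    # B = (A_0 .. A_{d-1}) + (A_d .. A_{2d-1}) X
    B = [A[i] + sum(A[d + k] * X[k][i] for k in range(d)) for i in range(d)]
    r = sum(B[j] << (m * j) for j in range(d))
    while r >= p:                            # a few corrections for typical f
        r -= p
    while r < 0:                             # possible when some c_j < 0
        r += p
    return r
```

For example, p-192 corresponds to $m=64$ and $c=[1,1,0]$, since $2^{192}-2^{64}-1=f(2^{64})$ with $f(t)=t^{3}-t-1$; p-256 corresponds to $m=32$ and $c=[1,0,0,-1,0,0,-1,1]$.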
Examples
Four of the recommended primes in NIST's document "Recommended Elliptic Curves for Federal Government Use" are Solinas primes:
• p-192 $2^{192}-2^{64}-1$
• p-224 $2^{224}-2^{96}+1$
• p-256 $2^{256}-2^{224}+2^{192}+2^{96}-1$
• p-384 $2^{384}-2^{128}-2^{96}+2^{32}-1$
Curve448 uses the Solinas prime $2^{448}-2^{224}-1.$
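For such primes the reduction can also be written directly from the defining identity. For Curve448's prime, $2^{448}\equiv 2^{224}+1{\pmod {p}}$, so the bits above position 448 simply fold back in (an illustrative sketch, not Curve448's optimized carry-chain code):

```python
def reduce_p448(n):
    """Reduce n >= 0 modulo p = 2**448 - 2**224 - 1 by repeatedly folding
    the high part back in, using 2**448 = 2**224 + 1 (mod p)."""
    p = (1 << 448) - (1 << 224) - 1
    while n >> 448:                       # high part still nonzero
        hi, lo = n >> 448, n & ((1 << 448) - 1)
        n = lo + (hi << 224) + hi         # hi * 2**448 -> hi * (2**224 + 1)
    while n >= p:                         # at most a couple of subtractions
        n -= p
    return n
```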
See also
• Mersenne prime
References
1. Solinas, Jerome A. (1999). Generalized Mersenne Numbers (PDF) (Technical report). Center for Applied Cryptographic Research, University of Waterloo. CORR-99-39.
2. Solinas, Jerome A. (2011). "Generalized Mersenne Prime". In Tilborg, Henk C. A. van; Jajodia, Sushil (eds.). Encyclopedia of Cryptography and Security. Springer US. pp. 509–510. doi:10.1007/978-1-4419-5906-5_32. ISBN 978-1-4419-5905-8.
3. US patent 5159632, Richard E. Crandall, "Method and apparatus for public key exchange in a cryptographic system", issued 1992-10-27, assigned to NeXT Computer, Inc.
Descent algebra
In algebra, Solomon's descent algebra of a Coxeter group is a subalgebra of the integral group ring of the Coxeter group, introduced by Solomon (1976).
The descent algebra of the symmetric group
In the special case of the symmetric group Sn, the descent algebra is given by the elements of the group ring such that permutations with the same descent set have the same coefficients. (The descent set of a permutation σ consists of the indices i such that σ(i) > σ(i+1).) The descent algebra of the symmetric group Sn has dimension 2^(n−1). It contains the peak algebra as a left ideal.
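Both facts about S_n — that there are exactly 2^(n−1) descent classes, and that the span of the descent-class sums is closed under the group-ring product (Solomon's theorem) — can be verified by brute force for small n. A sketch (function names are ours), shown here for n = 4:

```python
from itertools import permutations
from collections import Counter

def descent_set(sigma):
    # sigma is a tuple (sigma(1), ..., sigma(n)); descents are the
    # positions i (1-based) with sigma(i) > sigma(i+1)
    return frozenset(i + 1 for i in range(len(sigma) - 1)
                     if sigma[i] > sigma[i + 1])

def compose(a, b):
    # (a o b)(i) = a(b(i)); permutations of {1, ..., n} as tuples
    return tuple(a[b[i] - 1] for i in range(len(a)))

n = 4
classes = {}
for sigma in permutations(range(1, n + 1)):
    classes.setdefault(descent_set(sigma), []).append(sigma)

assert len(classes) == 2 ** (n - 1)   # dimension of the descent algebra

# Closure: the product of two descent-class sums in the group ring has
# coefficients that are constant on each descent class.
for S in classes:
    for T in classes:
        coeff = Counter(compose(a, b) for a in classes[S] for b in classes[T])
        for members in classes.values():
            assert len({coeff[sigma] for sigma in members}) == 1
```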
References
• Solomon, Louis (1976), "A Mackey formula in the group ring of a Coxeter group", J. Algebra, 41 (2): 255–264, doi:10.1016/0021-8693(76)90182-4, MR 0444756
Solomon Mikhlin
Solomon Grigor'evich Mikhlin (Russian: Соломо́н Григо́рьевич Ми́хлин, real name Zalman Girshevich Mikhlin) (the family name is also transliterated as Mihlin or Michlin) (23 April 1908 – 29 August 1990[1]) was a Soviet mathematician who worked in the fields of linear elasticity, singular integrals and numerical analysis: he is best known for the introduction of the symbol of a singular integral operator, which eventually led to the foundation and development of the theory of pseudodifferential operators.[2]
Solomon Grigor'evich Mikhlin
Born: 23 April 1908, Cholmieč, Rečyca Raion, Minsk Governorate, Russian Empire
Died: 29 August 1990 (aged 82),[1] Saint Petersburg (former Leningrad)
Nationality: Soviet
Alma mater: Leningrad University (1929)
Known for: elasticity theory, singular integrals, numerical analysis
Awards: Order of the Badge of Honour (1961); Laurea honoris causa of the Karl-Marx-Stadt Polytechnic (1968); membership of the German Academy of Sciences Leopoldina (1970); membership of the Accademia Nazionale dei Lincei (1981)
Fields: Mathematics and mechanics
Institutions: Seismological Institute of the USSR Academy of Sciences (1932–1941); Kazakh University in Alma Ata (1941–1944); Leningrad University (now Saint Petersburg State University) (1944–1990)
Academic advisor: Vladimir Smirnov (Leningrad University, master thesis)
Doctoral students: see the teaching activity section
Other notable students: Vladimir Maz'ya
Biography
He was born in Kholmech, Rechytsa District, Minsk Governorate (in present-day Belarus) on 23 April 1908; Mikhlin (1968) himself states in his resume that his father was a merchant, but this assertion could be untrue since, in that period, people sometimes lied about their parents' profession in order to overcome political restrictions on access to higher education. According to a different version, his father was a melamed (teacher) at a primary religious school (kheder), and the family was of modest means: according to the same source, Zalman was the youngest of five children. His first wife was Victoria Isaevna Libina; Mikhlin's book (Mikhlin 1965) is dedicated to her memory. She died of peritonitis in 1961 during a boat trip on the Volga. In 1940 they adopted a son, Grigory Zalmanovich Mikhlin, who later emigrated to Haifa, Israel. His second wife was Eugenia Yakovlevna Rubinova, born in 1918, who was his companion for the rest of his life.
Education and academic career
He graduated from a secondary school in Gomel in 1923 and entered the State Herzen Pedagogical Institute in 1925. In 1927 he transferred to the Department of Mathematics and Mechanics of Leningrad State University as a second-year student, passing all the exams of the first year without attending lectures. Among his university professors were Nikolai Maximovich Günther and Vladimir Ivanovich Smirnov. The latter became his master thesis supervisor: the topic of the thesis was the convergence of double series,[3] and it was defended in 1929. Sergei Lvovich Sobolev studied in the same class as Mikhlin. In 1930 he started his teaching career, working in some Leningrad institutes for short periods, as Mikhlin himself records in the document (Mikhlin 1968). In 1932 he obtained a position at the Seismological Institute of the USSR Academy of Sciences, where he worked until 1941: in 1935 he received the degree "Doktor nauk" in Mathematics and Physics, without having to earn the "kandidat nauk" degree first, and in 1937 he was promoted to the rank of professor. During World War II he became professor at the Kazakh University in Alma Ata. From 1944 on, Mikhlin was professor at Leningrad State University. From 1964 to 1986 he headed the Laboratory of Numerical Methods at the Research Institute of Mathematics and Mechanics of the same university; from 1986 until his death he was a senior researcher at that laboratory.
Honours
He received the Order of the Badge of Honour (Russian: Орден Знак Почёта) in 1961:[4] the names of the recipients of this prize were usually published in newspapers. He was awarded the Laurea honoris causa by the Karl-Marx-Stadt (now Chemnitz) Polytechnic in 1968 and was elected a member of the German Academy of Sciences Leopoldina in 1970 and of the Accademia Nazionale dei Lincei in 1981. As Fichera (1994, p. 51) states, in his own country he did not receive honours comparable to his scientific stature, mainly because of the racial policy of the communist regime, briefly described in the following section.
Influence of communist antisemitism
He lived through one of the most difficult periods of contemporary Russian history. The state of the mathematical sciences during this period is well described by Lorentz (2002): the rise of Marxist ideology in Soviet universities and the Academy was one of the main features of that period. Local administrators and communist party functionaries interfered with scientists on either ethnic or ideological grounds. As a matter of fact, during the war and during the creation of a new academic system, Mikhlin did not experience the same difficulties as younger Soviet scientists of Jewish origin: for example, he was included in the Soviet delegation to the International Congress of Mathematicians in Edinburgh in 1958.[5] However, Fichera (1994, pp. 56–60), examining the life of Mikhlin, finds it surprisingly similar to the life of Vito Volterra under the fascist regime. He notes that antisemitism in communist countries took different forms from its Nazi counterpart: the communist regime did not aim at the outright murder of Jews, but imposed on them a number of restrictions, sometimes very cruel ones, in order to make their lives difficult. Between 1963 and 1981 Fichera met Mikhlin at several conferences in the Soviet Union, and realised that he was in a state of isolation, almost marginalized within his own community: Fichera describes several episodes revealing this fact.[6] Perhaps the most illuminating one is the election of Mikhlin to the Accademia Nazionale dei Lincei: in June 1981, Solomon G. Mikhlin was elected Foreign Member of the class of mathematical and physical sciences of the Lincei. He was first proposed as a winner of the Antonio Feltrinelli Prize, but the almost certain confiscation of the prize by the Soviet authorities induced the Lincei members to elect him as a member instead: they decided to honour him in a way that no political authority could alienate.[7] However, Mikhlin was not allowed by the Soviet authorities to visit Italy,[8] so Fichera and his wife brought the tiny golden lynx, the symbol of Lincei membership, directly to Mikhlin's apartment in Leningrad on 17 October 1981: the only guests at that "ceremony" were Vladimir Maz'ya and his wife Tatyana Shaposhnikova.
They just have power, but we have theorems. Therefore we are stronger!
— Solomon G. Mikhlin, cited by Vladimir Maz'ya (2014, p. 142)
Death
According to Fichera (1994, pp. 60–61), who refers to a conversation with Mark Vishik and Olga Oleinik, on 29 August 1990 Mikhlin left home to buy medicines for his wife Eugenia. On public transport he suffered a fatal stroke. He had no documents with him, and was therefore identified only some time after his death: this may explain the differing death dates reported in several biographies and obituary notices.[9] Fichera also writes that Mikhlin's wife Eugenia survived him by only a few months.
Work
Research activity
He was the author of monographs and textbooks which became classics for their style. His research is devoted mainly to the following fields.[10]
Elasticity theory and boundary value problems
In mathematical elasticity theory, Mikhlin was concerned with three themes: the plane problem (mainly from 1932 to 1935), the theory of shells (from 1954) and the Cosserat spectrum (from 1967 to 1973).[11] Dealing with the plane elasticity problem, he proposed two methods for its solution in multiply connected domains. The first is based upon the so-called complex Green's function and the reduction of the related boundary value problem to integral equations. The second is a certain generalization of the classical Schwarz algorithm for the solution of the Dirichlet problem in a given domain by splitting it into simpler problems in smaller domains whose union is the original one. Mikhlin studied its convergence and gave applications to special applied problems. He proved existence theorems for the fundamental problems of plane elasticity involving inhomogeneous anisotropic media; these results are collected in the book (Mikhlin 1957). Concerning the theory of shells, several of Mikhlin's articles deal with it. He studied the error of the approximate solution for shells similar to plane plates, and found that this error is small for the so-called purely rotational state of stress. As a result of his study of this problem, Mikhlin also gave a new (invariant) form of the basic equations of the theory. He also proved a theorem on perturbations of positive operators in a Hilbert space which allowed him to obtain an error estimate for the problem of approximating a sloping shell by a plane plate.[12] Mikhlin also studied the spectrum of the operator pencil of the classical linear elastostatic operator or Navier–Cauchy operator
${\boldsymbol {\mathcal {A}}}(\omega ){\boldsymbol {u}}=\Delta _{2}{\boldsymbol {u}}+\omega \nabla \left(\nabla \cdot {\boldsymbol {u}}\right)$
where $u$ is the displacement vector, $\scriptstyle \Delta _{2}$ is the vector laplacian, $\scriptstyle \nabla $ is the gradient, $\scriptstyle \nabla \cdot $ is the divergence and $\omega $ is a Cosserat eigenvalue. The full description of the spectrum and the proof of the completeness of the system of eigenfunctions are also due to Mikhlin, and partly to V.G. Maz'ya in their only joint work.[13]
Singular integrals and Fourier multipliers
He is one of the founders of the multi-dimensional theory of singular integrals, jointly with Francesco Tricomi and Georges Giraud, and also one of the main contributors. By singular integral we mean an integral operator of the following form
$Au=v({\boldsymbol {x}})=\int _{\mathbb {R} ^{n}}{\frac {f({\boldsymbol {x}},{\boldsymbol {\theta }})}{r^{n}}}u({\boldsymbol {y}})\mathrm {d} {\boldsymbol {y}}$
where ${\boldsymbol {x}}\in \mathbb {R} ^{n}$ is a point in n-dimensional euclidean space, $r=|{\boldsymbol {y}}-{\boldsymbol {x}}|$ and $\scriptstyle {\boldsymbol {\theta }}={\frac {{\boldsymbol {y}}-{\boldsymbol {x}}}{r}}$ are the hyperspherical coordinates (the polar coordinates or the spherical coordinates when $n=2$ or $n=3$, respectively) of the point ${\boldsymbol {y}}$ with respect to the point ${\boldsymbol {x}}$. Such operators are called singular since the singularity of the kernel of the operator is so strong that the integral does not exist in the ordinary sense, but only in the sense of the Cauchy principal value.[14] Mikhlin was the first to develop a theory of singular integral equations as a theory of operator equations in function spaces. In the papers (Mikhlin 1936a) and (Mikhlin 1936b) he found a rule for the composition of double singular integrals (i.e. in 2-dimensional euclidean spaces) and introduced the very important notion of the symbol of a singular integral. This enabled him to show that the algebra of bounded singular integral operators is isomorphic to the algebra of either scalar or matrix-valued functions. He proved Fredholm's theorems for singular integral equations and systems of such equations under the hypothesis of non-degeneracy of the symbol; he also proved that the index of a single singular integral equation in euclidean space is zero. In 1961 Mikhlin developed a theory of multidimensional singular integral equations on Lipschitz spaces. These spaces are widely used in the theory of one-dimensional singular integral equations; however, the direct extension of the related theory to the multidimensional case meets some technical difficulties, and Mikhlin suggested another approach to this problem. Precisely, he obtained the basic properties of this kind of singular integral equations as a by-product of the Lp-space theory of these equations. Mikhlin also proved[15] a now classical theorem on multipliers of the Fourier transform in the Lp-space, based on an analogous theorem of Józef Marcinkiewicz on Fourier series.
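For reference, the multiplier theorem in the form usually quoted today (the Mikhlin–Hörmander formulation, added here for context) reads: if $m$ is defined on $\mathbb{R}^n \setminus \{0\}$ and satisfies

```latex
\left| \partial^{\alpha} m(\xi) \right| \;\le\; C\, |\xi|^{-|\alpha|}
\qquad \text{for all multi-indices } |\alpha| \le \left\lfloor n/2 \right\rfloor + 1,\; \xi \ne 0,
```

then the operator $u \mapsto \mathcal{F}^{-1}\left( m\, \widehat{u} \right)$ is bounded on $L^p(\mathbb{R}^n)$ for every $1 < p < \infty$.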
A complete collection of his results in this field up to 1965, as well as the contributions of other mathematicians like Tricomi, Giraud, Calderón and Zygmund,[16] is contained in the monograph (Mikhlin 1965).[17]
A synthesis of the theories of singular integrals and linear partial differential operators was accomplished, in the mid-1960s, by the theory of pseudodifferential operators: Joseph J. Kohn, Louis Nirenberg, Lars Hörmander and others carried out this synthesis, but, as is universally acknowledged, the theory owes its rise to the discoveries of Mikhlin.[2] This theory has numerous applications to mathematical physics. Mikhlin's multiplier theorem is widely used in different branches of mathematical analysis, particularly in the theory of differential equations. The analysis of Fourier multipliers was later advanced by Lars Hörmander, Walter Littman, Elias Stein, Charles Fefferman and others.
Partial differential equations
In four papers published in the period 1940–1942, Mikhlin applied the method of potentials to the mixed problem for the wave equation. In particular, he solved the mixed problem for the wave equation in two space dimensions in the half plane by reducing it to the planar Abel integral equation. For plane domains with a sufficiently smooth curvilinear boundary he reduced the problem to an integro-differential equation, which he was also able to solve when the boundary of the given domain is analytic. In 1951 Mikhlin proved the convergence of the Schwarz alternating method for second order elliptic equations.[18] He also applied the methods of functional analysis, at the same time as Mark Vishik but independently of him, to the investigation of boundary value problems for degenerate second order elliptic partial differential equations.
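The alternating method whose convergence Mikhlin established can be illustrated in the simplest possible setting (a toy example added here, not taken from Mikhlin's papers): the equation $u''=0$ on $[0,1]$ with two overlapping subdomains. A harmonic function of one variable is linear, so each subdomain solve reduces to linear interpolation between the current boundary values:

```python
def schwarz_alternating(sweeps):
    """Alternating Schwarz method for u'' = 0 on [0, 1], u(0) = 0, u(1) = 1,
    with overlapping subdomains [0, 0.6] and [0.4, 1].  Each subdomain
    solve is linear interpolation between the current boundary values."""
    v, w = 0.0, 0.0                      # guesses for u(0.6) and u(0.4)
    for _ in range(sweeps):
        w = v * 0.4 / 0.6                # solve on [0, 0.6], evaluate at 0.4
        v = w + (1.0 - w) * 0.2 / 0.6    # solve on [0.4, 1], evaluate at 0.6
    return v, w

# The exact solution is u(x) = x, so the interface values converge to
# 0.6 and 0.4; the error contracts by the factor 4/9 at every sweep.
```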
Numerical mathematics
His work in this field can be divided into several branches:[19] in the following, four main branches are described, and a sketch of his last research is also given. The papers within the first branch are summarized in the monograph (Mikhlin 1964), which contains the study of the convergence of variational methods for problems connected with positive operators, in particular for some problems of mathematical physics. Both "a priori" and "a posteriori" estimates of the errors of the approximations given by these methods are proved. The second branch deals with the notion of stability of a numerical process, introduced by Mikhlin himself. When applied to the variational method, this notion enables him to state necessary and sufficient conditions for minimizing errors in the solution of the given problem when the error arising in the numerical construction of the algebraic system resulting from the application of the method is sufficiently small, no matter how large the order of the system is. The third branch is the study of variational-difference and finite element methods. Mikhlin studied the completeness of the coordinate functions used in these methods in the Sobolev space W^{1,p}, deriving the order of approximation as a function of the smoothness properties of the functions to be approximated. He also characterized the class of coordinate functions which give the best order of approximation, and studied the stability of the variational-difference process and the growth of the condition number of the variational-difference matrix. Mikhlin also studied the finite element approximation in weighted Sobolev spaces related to the numerical solution of degenerate elliptic equations. He found the optimal order of approximation for some methods of solution of variational inequalities.
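A minimal instance of the variational (Ritz/Galerkin) approach with piecewise-linear coordinate functions, added here as an illustration on a toy problem of our own choosing: for $-u'' = 2$ on $(0,1)$ with $u(0)=u(1)=0$, the Ritz system is tridiagonal, and for this one-dimensional problem the discrete solution happens to coincide with the exact solution $u(x)=x(1-x)$ at the nodes.

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system (no pivoting)."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0] = sup[0] / diag[0] if n > 1 else 0.0
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / denom
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / denom
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def ritz_poisson_1d(n):
    """Ritz/Galerkin solution of -u'' = 2, u(0) = u(1) = 0, using n equal
    elements and piecewise-linear "hat" coordinate functions.
    Returns the values at the interior nodes x_i = i*h, i = 1 .. n-1."""
    h = 1.0 / n
    m = n - 1
    sub = [-1.0 / h] * (m - 1)
    diag = [2.0 / h] * m
    sup = [-1.0 / h] * (m - 1)
    rhs = [2.0 * h] * m     # integral of f = 2 against each hat function
    return solve_tridiagonal(sub, diag, sup, rhs)
```

With n = 8, the nodal values agree with x(1−x) up to rounding error.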
The fourth branch of his research in numerical mathematics is a method for the solution of Fredholm integral equations which he called the resolvent method: its essence relies on the possibility of substituting the kernel of the integral operator by its variational-difference approximation, so that the resolvent of the new kernel can be expressed by simple recurrence relations. This eliminates the need to construct and solve large systems of equations.[20] During his last years, Mikhlin contributed to the theory of errors in numerical processes,[21] proposing the following classification of errors.
1. Approximation error: is the error due to the replacement of an exact problem by an approximating one.
2. Perturbation error: is the error due to the inaccuracies in the computation of the data of the approximating problem.
3. Algorithm error: is the intrinsic error of the algorithm used for the solution of the approximating problem.
4. Rounding error: is the error due to the limits of computer arithmetic.
This classification is useful since it enables one to develop computational methods adjusted to diminish the errors of each particular type, following the divide et impera (divide and rule) principle.
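The interplay between two of these error types — approximation error and rounding error — can be seen in a classic experiment (an illustration added here, not from Mikhlin's work): differentiating sin at x = 1 with a forward difference. Shrinking the step h first reduces the approximation error, of order h, until the rounding error, of order eps/h, takes over:

```python
import math

def forward_difference(f, x, h):
    """One-sided difference quotient approximating f'(x)."""
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)
errors = {h: abs(forward_difference(math.sin, 1.0, h) - exact)
          for h in (1e-1, 1e-5, 1e-9, 1e-13)}
# The error shrinks from h = 1e-1 to h = 1e-5 (approximation error
# dominates); for very small h, cancellation in the numerator makes
# rounding error dominate and the accuracy typically degrades again.
```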
Teaching activity
He was the "kandidat nauk" advisor of Tatyana O. Shaposhnikova. He was also a mentor and friend of Vladimir Maz'ya: although he was never Maz'ya's official supervisor, his friendship with the young undergraduate Maz'ya had a great influence on shaping Maz'ya's mathematical style.
Selected publications
Books
• Mikhlin, S.G. (1957), Integral equations and their applications to certain problems in mechanics, mathematical physics and technology, International Series of Monographs in Pure and Applied Mathematics, vol. 5, Oxford–London–Edinburgh–New York–Paris–Frankfurt: Pergamon Press, pp. XII+338, Zbl 0077.09903. The book of Mikhlin summarizing his results in the plane elasticity problem: according to Fichera (1994, pp. 55–56) this is a widely known monograph in the theory of integral equations.
• Mikhlin, S.G. (1964), Variational methods in mathematical physics, International Series of Monographs in Pure and Applied Mathematics, vol. 50, Oxford–London–Edinburgh–New York–Paris–Frankfurt: Pergamon Press, pp. XXXII+584, Zbl 0119.19002.
• Mikhlin, S.G. (1965), Multidimensional singular integrals and integral equations, International Series of Monographs in Pure and Applied Mathematics, vol. 83, Oxford–London–Edinburgh–New York–Paris–Frankfurt: Pergamon Press, pp. XII+255, MR 0185399, Zbl 0129.07701. A masterpiece in the multidimensional theory of singular integrals and singular integral equations summarizing all the results from the beginning to the year of publication, and also sketching the history of the subject.
• Mikhlin, Solomon G.; Prössdorf, Siegfried (1986), Singular Integral Operators, Berlin–Heidelberg–New York: Springer Verlag, p. 528, ISBN 978-3-540-15967-4, MR 0867687, Zbl 0612.47024.
• Mikhlin, S.G. (1991), Error analysis in numerical processes, Pure and Applied Mathematics. A Wiley-Interscience Series of Text Monographs & Tracts, vol. 1237, Chichester: John Wiley & Sons, p. 283, ISBN 978-0-471-92133-2, MR 1129889, Zbl 0786.65038. This book summarizes the contributions of Mikhlin and of the former Soviet school of numerical analysis to the problem of error analysis in numerical solutions of various kinds of equations: it was also reviewed by Stummel (1993, pp. 204–206) for the Bulletin of the American Mathematical Society.
• Mikhlin, Solomon G.; Morozov, Nikita Fedorovich; Paukshto, Michael V. (1995), The integral equations of the theory of elasticity, Teubner-Texte zur Mathematik, vol. 135, Leipzig: Teubner Verlag, p. 375, doi:10.1007/978-3-663-11626-4, ISBN 3-8154-2060-1, MR 1314625, Zbl 0817.45004.
Papers
• Michlin, S.G. (1932), "Sur la convergence uniforme des séries de fonctions analytiques", Matematicheskii Sbornik (in French), 39 (3): 88–96, JFM 58.0302.03, Zbl 0006.31701.
• Mikhlin, Solomon G. (1936a), "Équations intégrales singulières à deux variables indépendantes", Recueil Mathématique (Matematicheskii Sbornik), New Series (in Russian), 1(43) (4): 535–552, Zbl 0016.02902. The paper, with French title and abstract, where Solomon Mikhlin introduces the symbol of a singular integral operator as a means to calculate the composition of such kind of operators and solve singular integral equations: the integral operators considered here are defined by integration on the whole n-dimensional (for n = 2) euclidean space.
• Mikhlin, Solomon G. (1936b), "Complément à l'article "Équations intégrales singulières à deux variables indépendantes", Recueil Mathématique (Matematicheskii Sbornik), New Series (in Russian), 1(43) (6): 963–964, JFM 62.1251.02. In this paper, with French title and abstract, Solomon Mikhlin extends the definition of the symbol of a singular integral operator introduced before in the paper (Mikhlin 1936a) to integral operators defined by integration on a (n − 1)-dimensional closed manifold (for n = 3) in n-dimensional euclidean space.
• Mikhlin, Solomon G. (1948), "Singular integral equations", Uspekhi Matematicheskikh Nauk (in Russian), 3 (25): 29–112, MR 0027429.
• Mikhlin, S.G. (1951), "On the Schwarz algorithm", Doklady Akademii Nauk SSSR, novaya Seriya (in Russian), 77: 569–571, Zbl 0054.04204.
• Mikhlin, Solomon G. (1952a), "An estimate of the error of approximating elastic shells by plane plates", Prikladnaya Matematika i Mekhanika (in Russian), 16 (4): 399–418, Zbl 0048.42304.
• Mikhlin, Solomon G. (1952b), "A theorem in operator theory and its application to the theory of elastic shells", Doklady Akademii Nauk SSSR, novaya Seriya (in Russian), 84: 909–912, Zbl 0048.42401.
• Mikhlin, Solomon G. (1956a), "The theory of multidimensional singular integral equations", Vestnik Leningradskogo Universiteta, Seriya Matematika, Mekhanika, Astronomija (in Russian), 11 (1): 3–24, Zbl 0075.11402.
• Mikhlin, Solomon G. (1956b), "On the multipliers of Fourier integrals", Doklady Akademii Nauk SSSR, New Series (in Russian), 109: 701–703, Zbl 0073.08402.
• Mikhlin, Solomon G. (1966), "On Cosserat functions", Probl. Mat. Analiza, kraevye Zadachi integral'nye Uravenya (in Russian), Leningrad, pp. 59–69, Zbl 0166.37505.
• Mikhlin, Solomon G. (1973), "The spectrum of a family of operators in the theory of elasticity", Uspekhi Matematicheskikh Nauk (in Russian), 28 (3(171)): 43–82, MR 0415422, Zbl 0291.35065
• Mikhlin, S.G. (1974), "On a method for the approximate solution of integral equations", Vestn. Leningr. Univ., Ser. Mat. Mekh. Astron. (in Russian), 13 (3): 26–33, Zbl 0308.45014.
See also
• Linear elasticity
• Mikhlin multiplier theorem
• Multiplier (Fourier analysis)
• Singular integrals
• Singular integral equations
Notes
1. See the section "Death" for a description of the circumstances and for the probable reason for the discrepancies between the death dates reported by different biographical sources.
2. According to Fichera (1994, p. 54) and the references cited therein: see also (Maz'ya 2014, p. 143). For more information on this subject, see the entries on singular integral operators and on pseudodifferential operators.
3. A part of this thesis is probably reproduced in his paper (Michlin 1932), where he thanks his master Vladimir Ivanovich Smirnov but does not acknowledge him as a thesis advisor.
4. See (Mikhlin 1968, p. 4).
5. See the report of the conference by Aleksandrov & Kurosh (1959, p. 250).
6. Almost all recollections of Gaetano Fichera concerning how this situation influenced his relationships with Mikhlin are presented in (Fichera 1994, pp. 56–61).
7. According to Fichera (1994, p. 59).
8. According to Maz'ya (2000, p. 2).
9. See for example Fichera (1994) and the memorial page at the St. Petersburg Mathematical Society (2006).
10. Comprehensive descriptions of his work appear in the papers (Fichera 1994), (Fichera & Maz'ya 1978) and in the references cited therein.
11. According to Fichera & Maz'ya (1978, p. 167).
12. The references pertaining to this work are (Mikhlin 1952a) and (Mikhlin 1952b).
13. See the comprehensive survey paper by Kozhevnikov (1999), which describes the subject in its historical development, including more recent developments. The work of Mikhlin and his collaborators is summarized in the paper (Mikhlin 1973): for a detailed analytical treatment, see also appendix I, pp. 271–311 of the posthumous book (Mikhlin, Morozov & Paukshto 1995).
14. See the entry "Singular integral" for more details on this subject.
15. See references (Mikhlin 1956b) and (Mikhlin 1965, pp. 225–240).
16. According to Fichera (1994, p. 52), Mikhlin himself (partially preceded by Bochner (1951)) shed light on the relationship between his theory of singular integrals and Calderón–Zygmund theory, proving in the paper (Mikhlin 1956a) that, for kernels of convolution type, i.e. kernels depending on the difference y − x of the two variables x and y but not on the variable x alone, the symbol is the Fourier transform (in a generalized sense) of the kernel of the given singular integral operator.
17. Also the treatise (Mikhlin & Prössdorf 1986) contains a lot of information on this field, and an exposition of both the one-dimensional and the multidimensional theory.
18. See (Mikhlin 1951) for further details.
19. He is, according to Fichera (1994, p. 55), one of the pioneers of modern numerical analysis together with Boris Galerkin, Alexander Ostrowski, John von Neumann, Walter Ritz and Mauro Picone.
20. See (Mikhlin 1974) and the references therein.
21. See the book (Mikhlin 1991) and, for an overview of the contents, see also its review by Stummel (1993, pp. 204–206).
References
Biographical and general references
• Aleksandrov, P. S.; Kurosh, A. G. (1959), "International Congress of Mathematicians in Edinburg", Uspekhi Matematicheskikh Nauk (in Russian), 14 (1(142)): 249–253.
• Babich, Vasilii Mikhailovich; Bakelman, Ilya Yakovlevich; Koshelev, Alexander Ivanovich; Maz'ya, Vladimir Gilelevich (1968), "Solomon Grigor'evich Mikhlin (on the sixtieth anniversary of his birth)", Uspekhi Matematicheskikh Nauk (in Russian), 23 (4(142)): 269–272, MR 0228313, Zbl 0157.01202.
• Bakelman, Ilya Yakovlevich; Birman, Mikhail Shlemovich; Ladyzhenskaya, Olga Aleksandrovna (1958), "Solomon Grigor'evich Mikhlin (on the fiftieth anniversary of his birth)", Uspekhi Matematicheskikh Nauk (in Russian), 13 (5(83)): 215–221, Zbl 0085.00701.
• Dem'yanovich, Yuri Kazimirovich; Il'in, Valentin Petrovich; Koshelev, Alexander Ivanovich; Oleinik, Olga Arsen'evna; Sobolev, Sergei L'vovich (1988), "Solomon Grigor'evich Mikhlin (on his eightieth birthday)", Uspekhi Matematicheskikh Nauk (in Russian), 43 (4(262)): 239–240, Bibcode:1988RuMaS..43..249D, doi:10.1070/RM1988v043n04ABEH001906, MR 0228313, S2CID 250917521, Zbl 0157.01202.
• Fichera, Gaetano (1994), "Solomon G. Mikhlin (1908–1990)", Atti della Accademia Nazionale dei Lincei, Rendiconti Lincei, Matematica e Applicazioni, Serie XI (in Italian), 5 (1): 49–61, Zbl 0852.01034. A detailed commemorative paper, referencing the works of Bakelman, Birman & Ladyzhenskaya (1958), Babich et al. (1968) and Dem'yanovich et al. (1988) for the bibliographical details.
• Fichera, G.; Maz'ya, V. (1978), "In honor of professor Solomon G. Mikhlin on the occasion of his seventieth birthday", Applicable Analysis, 7 (3): 167–170, doi:10.1080/00036817808839188, Zbl 0378.01018. A short survey of Mikhlin's work by a friend and a pupil: not as complete as the commemorative paper (Fichera 1994), but very useful for the English-speaking reader.
• Kantorovich, Leonid Vital'evich; Koshelev, Alexander Ivanovich; Oleinik, Olga Arsen'evna; Sobolev, Sergei L'vovich (1978), "Solomon Grigor'evich Mikhlin (on his seventieth birthday)", Uspekhi Matematicheskikh Nauk (in Russian), 33 (2(200)): 213–216, Bibcode:1978RuMaS..33..209K, doi:10.1070/RM1978v033n02ABEH002313, MR 0495520, S2CID 250776686, Zbl 0378.01017.
• Lorentz, G.G. (2002), "Mathematics and politics in the Soviet Union from 1928 to 1953", Journal of Approximation Theory, 116 (2): 169–223, doi:10.1006/jath.2002.3670, MR 1911079, Zbl 1006.01009. See also the final version available from the "George Lorentz" section of the Approximation Theory web page at the Mathematics Department of the Ohio State University (retrieved on 25 October 2009).
• Maz'ya, Vladimir (2000), "In memory of Gaetano Fichera" (PDF), in Ricci, Paolo Emilio (ed.), Problemi attuali dell'analisi e della fisica matematica. Atti del II simposio internazionale (Taormina, 15–17 ottobre 1998). Dedicato alla memoria del Prof. Gaetano Fichera., Roma: Aracne Editrice, pp. 1–4, Zbl 0977.01027. Some vivid recollections of Gaetano Fichera by his colleague and friend Vladimir Gilelevich Maz'ya: there is a short description of the "ceremony" for the election of Mikhlin as a foreign member of the Accademia Nazionale dei Lincei.
• Maz'ya, Vladimir G. (2014), Differential equations of my young years, Basel: Birkhäuser Verlag, pp. xiii+191, ISBN 978-3-319-01808-9, MR 3288312, Zbl 1303.01002.
• Solomon Grigor'evich Mikhlin's entry at the Russian Wikipedia, retrieved 28 May 2010.
• Mikhlin, Solomon G. (7 September 1968), ЛИЧНЫЙ ЛИСТОК ПО УЧЕТУ КАДРОВ [Formation record list] (in Russian), USSR, pp. 1–5. An official resume written by Mikhlin himself for use by the public authorities in the former Soviet Union: it contains very useful (if not unique) information about his early career and schooling.
Scientific references
• Bochner, Salomon (1 December 1951), "Theta Relations with Spherical Harmonics", PNAS, 37 (12): 804–808, Bibcode:1951PNAS...37..804B, doi:10.1073/pnas.37.12.804, PMC 1063475, PMID 16589032, Zbl 0044.07501.
• Kozhevnikov, Alexander (1999), "A history of the Cosserat spectrum", in Rossman, Jürgen; Takáč, Peter; Wildenhain, Günther (eds.), The Maz'ya anniversary collection. Vol. 1: On Maz'ya's work in functional analysis, partial differential equations and applications. Based on talks given at the conference, Rostock, Germany, August 31 – September 4, 1998, Operator Theory. Advances and Applications, vol. 109, Basel: Birkhäuser Verlag, pp. 223–234, ISBN 978-3-7643-6201-0, Zbl 0936.35118.
• Stummel, F. (1993), "Review: Error analysis in numerical processes, by Solomon G. Mikhlin", Bulletin of the American Mathematical Society, 28 (1): 204–206, doi:10.1090/s0273-0979-1993-00357-4.
External links
• Maz'ya, Vladimir G.; Shaposhnikova, Tatyana O.; Tampieri, Daniele (March 2011), "Solomon Grigoryevich Mikhlin", in O'Connor, John J.; Robertson, Edmund F. (eds.), MacTutor History of Mathematics Archive, University of St Andrews
• Solomon G. Mikhlin at the Mathematics Genealogy Project.
• St. Petersburg Mathematical Society (2006), Solomon Grigor'evich Mikhlin, retrieved 13 November 2009. Memorial page at the St. Petersburg Mathematical Pantheon.
Solomon Lefschetz
Solomon Lefschetz ForMemRS (Russian: Соломо́н Ле́фшец; 3 September 1884 – 5 October 1972) was a Russian-born American mathematician who did fundamental work on algebraic topology, its applications to algebraic geometry, and the theory of non-linear ordinary differential equations.[3][1][4][5]
Solomon Lefschetz ForMemRS
Born: 3 September 1884, Moscow, Russian Empire
Died: 5 October 1972 (aged 88), Princeton, New Jersey, US
Citizenship: US
Alma mater: École Centrale Paris; Clark University
Known for: Lefschetz fixed-point theorem, Picard–Lefschetz theory, Lefschetz connection, Lefschetz hyperplane theorem, Lefschetz duality, Lefschetz manifold, Lefschetz number, Lefschetz principle, Lefschetz zeta function, Lefschetz pencil, Lefschetz theorem on (1,1)-classes
Awards: Bôcher Memorial Prize (1924), National Medal of Science (1964), Leroy P. Steele Prize (1970), Fellow of the Royal Society[1]
Fields: Algebraic topology
Institutions: University of Nebraska; University of Kansas; Princeton University; National Autonomous University of Mexico[2]; Brown University
Thesis: On the Existence of Loci with Given Singularities (1911)
Doctoral advisor: William Edward Story[3]
Doctoral students: Edward Begle, Richard Bellman, Felix Browder, Clifford Dowker, George F. D. Duff, Ralph Fox, Ralph Gomory, John McCarthy, Robert Prim, Paul A. Smith, Norman Steenrod, Arthur Harold Stone, Clifford Truesdell, Albert W. Tucker, John Tukey, Henry Wallman, Shaun Wylie[3]
Other notable students: Sylvia de Neymet
Life
He was born in Moscow, the son of Alexander Lefschetz and his wife Sarah or Vera Lifschitz, Jewish traders who used to travel around Europe and the Middle East (they held Ottoman passports). Shortly thereafter, the family moved to Paris. He was educated there in engineering at the École Centrale Paris, but emigrated to the US in 1905.
He was badly injured in an industrial accident in 1907, losing both hands.[6] He moved towards mathematics, receiving a Ph.D. in algebraic geometry from Clark University in Worcester, Massachusetts, in 1911.[7] He then took positions at the University of Nebraska and the University of Kansas, moving to Princeton University in 1924, where he was soon given a permanent position. He remained there until 1953.
In the application of topology to algebraic geometry, he followed the work of Charles Émile Picard, whom he had heard lecture in Paris at the École Centrale Paris. He proved theorems on the topology of hyperplane sections of algebraic varieties, which provide a basic inductive tool (these are now seen as allied to Morse theory, though a Lefschetz pencil of hyperplane sections is a more subtle system than a Morse function because hyperplanes intersect each other). The Picard–Lefschetz formula in the theory of vanishing cycles is a basic tool relating the degeneration of families of varieties, with 'loss' of topology, to monodromy. He was an Invited Speaker of the ICM in 1920 in Strasbourg.[8] His book L'analysis situs et la géométrie algébrique from 1924, though foundationally opaque by current technical standards of homology theory, was in the long term very influential: one could say that it was one of the sources for the eventual proof of the Weil conjectures through SGA 7, and also for the study of Picard groups of Zariski surfaces. In 1924 he was awarded the Bôcher Memorial Prize for his work in mathematical analysis. He was elected to the United States National Academy of Sciences in 1925 and the American Philosophical Society in 1929.[9][10]
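The Picard–Lefschetz formula makes the 'loss' of topology quantitative: it describes how the monodromy around an ordinary double point acts on cycles through the vanishing cycle. In one common sign convention (the sign depends on dimension and orientation conventions), it reads:

```latex
% Monodromy T around a critical value with vanishing cycle \delta,
% acting on a middle-dimensional cycle \gamma;
% \langle\cdot,\cdot\rangle denotes the intersection pairing.
T(\gamma) \;=\; \gamma \,\pm\, \langle \gamma, \delta \rangle\, \delta .
```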
The Lefschetz fixed-point theorem, now a basic result of topology, was developed by him in papers from 1923 to 1927, initially for manifolds. Later, with the rise of cohomology theory in the 1930s, he contributed to the intersection-number approach (that is, in cohomological terms, the ring structure) via the cup product and duality on manifolds. His work on topology was summed up in his monograph Algebraic Topology (1942). From 1944 he worked on differential equations.
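In the later cohomological language, the fixed-point theorem is usually stated via the Lefschetz number (a standard modern formulation, not Lefschetz's original notation): for a continuous map f : X → X on a compact triangulable space X,

```latex
% Lefschetz number: alternating sum of traces of the induced maps
% on rational homology
\Lambda(f) \;=\; \sum_{k \ge 0} (-1)^{k}\,
   \operatorname{tr}\!\bigl( f_{*} \,\big|\, H_{k}(X;\mathbb{Q}) \bigr),
% and the theorem asserts that a nonzero Lefschetz number forces a fixed point:
\Lambda(f) \neq 0 \;\Longrightarrow\; \exists\, x \in X : f(x) = x .
```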
He was editor of the Annals of Mathematics from 1928 to 1958. During this time, the Annals became an increasingly well-known and respected journal, and Lefschetz played an important role in this.[11]
In 1945 he travelled to Mexico for the first time, where he joined the Institute of Mathematics at the National University of Mexico as a visiting professor. He visited frequently for long periods, and during 1953–1966 he spent most of his winters in Mexico City.[11] He played an important role in the foundation of mathematics in Mexico, and sent several students back to Princeton. His students included Emilio Lluis, José Adem, Samuel Gitler, Santiago López de Medrano, Francisco Javier González-Acuña and Alberto Verjovsky.[2]
Lefschetz came out of retirement in 1958, because of the launch of Sputnik, to augment the mathematical component of Glenn L. Martin Company's Research Institute for Advanced Studies (RIAS) in Baltimore, Maryland. His team became the world's largest group of mathematicians devoted to research in nonlinear differential equations.[12] The RIAS mathematics group stimulated the growth of nonlinear differential equations through conferences and publications. He left RIAS in 1964 to form the Lefschetz Center for Dynamical Systems at Brown University, Providence, Rhode Island.[13]
Selected works
• L'Analysis situs et la géométrie algébrique, Paris, Gauthier-Villars 1924[14]
• Intersections and transformations of complexes and manifolds, Transactions of the American Mathematical Society, vol. 28, 1926, pp. 1–49; the fixed-point theorem was published in vol. 29, 1927, pp. 429–462.
• Géométrie sur les surfaces et les variétés algébriques, Paris, Gauthier-Villars 1929[15]
• Topology, AMS 1930[16]
• Algebraic Topology, New York, American Mathematical Society 1942
• Introduction to topology, Princeton 1949
• with Joseph P. LaSalle, Stability by Liapunov's direct method with applications, New York, Academic Press 1961[17]
• Algebraic geometry, Princeton 1953, 2nd edn., 1964
• Differential equations: geometric theory, Interscience, 1957,[18] 2nd edn., 1963
• Stability of nonlinear control systems, 1965
• Reminiscences of a mathematical immigrant in the United States, American Mathematical Monthly, vol.77, 1970, pp. 344–350.
References
1. Hodge, W. V. D. (1973). "Solomon Lefschetz 1884-1972". Biographical Memoirs of Fellows of the Royal Society. 19: 433–453. doi:10.1098/rsbm.1973.0016. S2CID 122747688.
2. "Mathematics in Mexico" (PDF). Sociedad Matematica Mexicana.
3. Solomon Lefschetz at the Mathematics Genealogy Project
4. Markus, L. (1973). "Solomon Lefschetz: An appreciation in memoriam". Bull. Amer. Math. Soc. 79 (4): 663–680. doi:10.1090/s0002-9904-1973-13256-2.
5. O'Connor, John J.; Robertson, Edmund F., "Solomon Lefschetz", MacTutor History of Mathematics Archive, University of St Andrews
6. Mathematical Apocrypha: Stories and Anecdotes of Mathematicians and the Mathematical, p. 148, at Google Books
7. Lefschetz, Solomon (1911). On the existence of loci with given singularities (Ph.D.). Clark University. OCLC 245921866 – via ProQuest.
8. "Quelques remarques sur la multiplication complexe by S. Lefschetz" (PDF). Compte rendu du Congrès international des mathématiciens tenu à Strasbourg du 22 au 30 Septembre 1920. 1921. pp. 300–307. Archived from the original (PDF) on 2017-10-29.
9. "Solomon Lefschetz". www.nasonline.org. Retrieved 2023-07-20.
10. "APS Member History". search.amphilsoc.org. Retrieved 2023-07-20.
11. Griffiths, Phillip; Spencer, Donald; Whitehead, George (1992). "Solomon Lefschetz 1884-1972" (PDF). National Academy of Sciences. Archived from the original (PDF) on 2014-12-22.
12. Allen, K. N. (1988, January). Undaunted genius. Clark News, 11(1), p. 9.
13. About LCDS (Lefschetz Center for Dynamical Systems @ Brown University)
14. Alexander, James W. (1925). "Review: S. Lefschetz, L'Analysis Situs et la Géométrie Algébrique". Bull. Amer. Math. Soc. 31 (9): 558–559. doi:10.1090/s0002-9904-1925-04116-6.
15. Zariski, Oscar (1930). "Review: S. Lefschetz, Géométrie sur les Surfaces et les Variétés Algébriques". Bulletin of the American Mathematical Society. 36 (9): 617–618. doi:10.1090/s0002-9904-1930-05017-x.
16. Smith, Paul A. (1931). "Lefschetz on Topology". Bulletin of the American Mathematical Society. 37 (9, Part 1): 645–648. doi:10.1090/S0002-9904-1931-05201-0.
17. Antosiewicz, H. A. (1963). "Review: Joseph LaSalle and Solomon Lefschetz, Stability by Liapunov's direct method with applications". Bulletin of the American Mathematical Society. 69 (2): 209–210. doi:10.1090/s0002-9904-1963-10915-5.
18. Haas, Felix (1958). "Review: S. Lefschetz, Differential equations: Geometric theory". Bulletin of the American Mathematical Society. 64 (4): 203–206. doi:10.1090/s0002-9904-1958-10212-8.
External links
Wikiquote has quotations related to Solomon Lefschetz.
• Works by or about Solomon Lefschetz at Internet Archive
• Works by Solomon Lefschetz at LibriVox (public domain audiobooks)
• "Fine Hall in its golden age: Remembrances of Princeton in the early fifties" by Gian-Carlo Rota. Contains a lengthy section on Lefschetz at Princeton.
• Gompf: What is a Lefschetz Pencil?, Notices AMS 2005
• National Academy of Sciences Biographical Memoir
|
Wikipedia
|
Solomon's knot
Solomon's knot (Latin: sigillum Salomonis, lit. 'Solomon's seal') is a traditional decorative motif used since ancient times, and found in many cultures. Despite the name, it is classified as a link, and is not a true knot according to the definitions of mathematical knot theory.
Basic Solomon's knot
Braid length: 7
Braid no.: 4
Crossing no.: 4
Hyperbolic volume: 0
Linking no.: 0
Stick no.: 5
Unknotting no.: 2
Conway notation: [4]
A–B notation: 4²₁
Thistlethwaite: L4a1
Last / Next: L2a1 / L5a1
Other: alternating
Structure
The Solomon's knot consists of two closed loops, which are doubly interlinked in an interlaced manner. If laid flat, the Solomon's knot is seen to have four crossings where the two loops interweave under and over each other. This contrasts with two crossings in the simpler Hopf link.
In most artistic representations, the parts of the loops that alternately cross over and under each other become the sides of a central square, while four loopings extend outward in four directions. The four extending loopings may have oval, square, or triangular endings, or may terminate with free-form shapes such as leaves, lobes, blades, wings etc.
Occurrences
The Solomon's knot often occurs in ancient Roman mosaics, usually represented as two interlaced ovals.
Sepphoris National Park, Israel, has Solomon's Knots in stone mosaics at the site of an ancient synagogue.
Across the Middle East, historical Islamic sites show Solomon's knot as part of Muslim tradition. It appears over the doorway of an early twentieth century CE mosque/madrasa in Cairo. Two versions of Solomon's knot are included in the recently excavated Yattir Mosaic in Jordan. To the east, it is woven into an antique Central Asian prayer rug. To the west, Solomon's knot appeared in Moorish Spain, and it shines in leaded glass windows in a late twentieth century CE mosque in the United States. The British Museum, London, England has a fourteenth-century CE Egyptian Qur'an with a Solomon's Knot as its frontispiece.
University of California at Los Angeles Fowler Museum of Cultural History, USA has a large African collection that includes nineteenth and twentieth century CE Yoruba glass beadwork crowns and masks decorated with Solomon's Knots.
Home of Peace Mausoleum, a Jewish Cemetery, Los Angeles, California, USA has multiple images of Solomon's knot in stone and concrete bas reliefs sculpted 1934 CE.
Saint Sophia's Greek Orthodox Cathedral, "Byzantine District" of Los Angeles, California, USA has an olive wood Epitaphios (bier for Christ) with Solomon's knots carved at each corner. The Epitaphios is used in the Greek Easter services.
Powell Library University of California at Los Angeles, USA has ceiling beams in the Main Reading Room covered with Solomon's Knots. Built in 1926 CE, the reading room also features a central Dome of Wisdom bordered by Solomon's knots.
Name
In Latin, this configuration was sometimes known as sigillum Salomonis, meaning literally 'seal of Solomon'. It was associated with the Biblical monarch Solomon because of his reputation for wisdom and knowledge (and in some legends, his occult powers). This phrase is usually rendered into English as "Solomon's knot", since "seal of Solomon" has other conflicting meanings (often referring to either a Star of David or pentagram). In the study of ancient mosaics, the Solomon's knot is often known as a "guilloche knot" or "duplex knot", while a Solomon's knot in the center of a decorative configuration of four curving arcs is known as a "pelta-swastika" (where pelta is Latin for "shield").
Among other names currently in use are the following:
• "Foundation Knot" applies to the interweaving or interlacing which is the basis for many elaborate Celtic designs, and is used in the United States in crochet and macramé patterns.
• "Imbolo" describes the knot design on the textiles of the Kuba people of Congo.[1]
• Nodo di Salomone is the Italian term for Solomon's knot, and is used to name the Solomon's knot mosaic found at the ruins of a synagogue at Ostia, the ancient seaport for Rome.[2]
Symbolism
Since the knot has been used across a number of cultures and historical eras, it can be given a range of symbolic interpretations.
Because there is no visible beginning or ending, it may represent immortality and eternity—as does the more complicated Buddhist Endless Knot.
Because the knot seems to be two entwined figures, it is sometimes interpreted as a Lover's Knot, although that name may indicate another knot.
Because of its religious connections, the knot is sometimes designated an all-faith symbol of faith; at the same time, it appears in many places as a valued secular symbol of prestige, importance, and beauty.
Solomon's Knot appears on tombstones and mausoleums in Jewish graveyards and catacombs in many nations. In this context, Solomon's Knot is currently interpreted to symbolize eternity.
Some seek to connect it with Solomon by translating the Hebrew word peka'im (פקעים) found in the Bible at I Kings 6:18 and I Kings 7:24 as meaning "knobs" or "knots", and interpreting it to refer to Solomon's knot; however, the more accepted modern translation of this word is "gourd-shaped ornaments".
In Africa, Solomon's knot is found on glass beadwork, textiles, and carvings of the Yoruba people. When the knot appears in this culture, it often denotes royal status; thus, it is featured on crowns, tunics, and other ceremonial objects. Also in Africa, the Knot is found on Kasai velvet, the raffia woven cloth of the Kuba people. They attribute mystical meaning to it, as do the Akan people of West Africa who stamp it on their sacred Adinkra cloth. In the Adinkra symbol system, a version of Solomon's knot is the Kramo-bone symbol, interpreted as meaning "one being bad makes all appear to be bad".
In Latvia, when Solomon's knot is used on textiles and metal work, it is associated with time, motion, and the powers of ancient pagan gods.
In modern science, some versions of the conventionalized sign for an atom (electrons orbiting a nucleus) are variations of Solomon's knot. The logo of the Joomla software program is a Solomon's knot.
See also
• Quatrefoil
• Swastika
• Whitehead link
• Comacine masters
References
1. Paulus Gerdes, Mozambican Ethnomathematics Research Centre
2. sapere.it, Il nodo di Salomone
Further reading
A book-length illustrated study of Solomon's Knot is Seeing Solomon's Knot, With Photographs by Joel Lipton by Lois Rose Rose, Los Angeles, 2005 (official website http://www.StoneandScott.com/solomonsknot.asp Archived 2016-03-10 at the Wayback Machine).
A few archaeological reports, art books, craft manuals, museum catalogs, auction catalogs, travel books, and religious documents which discuss or depict the Solomon's Knot configuration are listed below:
• Bronze Age Civilization of Central Asia, The: Recent Soviet Discoveries. Armonk, New York: M.E. Sharpe, 1981. (Early examples of Solomon's Knot from the Gonur 1 settlement, figure 4, p. 233.)
• Chen, Lydia. Chinese Knotting. Taiwan: Echo Publishing Company, 1981, ISBN 0-8048-1389-2. (Instructions for creating a "flat" or Solomon's Knot, p. 58.)
• Christie's Catalog: The Erlenmeyer Collection of Ancient Near Eastern Stamp Seals and Amulets. London: Christie, Manson & Woods, Auction June 6, 1989. (Cruciform interlace carved stone seal, Ubaid, circa 4500 BCE, Lot 185.)
• Fraser, Douglas and Herbert M. Cole, eds. African Art and Leadership. Madison, Milwaukee, and London: University of Wisconsin Press, 1972.
• Cole, Ibo Art and Authority, p. 85.
• Fraser: Symbols of Ashanti Kingship, pp. 143–144.
• Fraser: King's ceremonial stool, personal choices of various African leaders, p. 209, p. 215, p. 283, p. 290, p. 318.
• Fraser: More attention should be paid to the significance of the Solomon's Knot motif, p. 318.
• Laine, Daniel. African Kings. Berkeley, Toronto: Ten Speed Press, 1991, ISBN 1-58008-272-6. (Two Nigerian chiefs, Oba Oyebade Lipede and Alake of Abeokuta, wear garments with embroidered Solomon's Knots, p. 63.)
• Lusini, Aldo. The Cathedral of Sienna. Sienna, Italy: 1950. (The choir stall, carved 1363 to 1425: photographs of stalls showing variations of Solomon's Knot, plate 49, pp. 20–21.)
• Wolpert, Stuart. "UCLA Chemists Make Molecular Rings in the Shape of King Solomon's Knot, a Symbol of Wisdom," News release from the University of California at Los Angeles, January 10, 2007, Newsroom.
External links
Wikimedia Commons has media related to Solomon's knot.
• "L4a1 knot-theoretic link", The Knot Atlas.
Knot theory (knots and links)
Hyperbolic
• Figure-eight (4₁)
• Three-twist (5₂)
• Stevedore (6₁)
• 6₂
• 6₃
• Endless (7₄)
• Carrick mat (8₁₈)
• Perko pair (10₁₆₁)
• (−2,3,7) pretzel (12n242)
• Whitehead (5²₁)
• Borromean rings (6³₂)
• L10a140
• Conway knot (11n34)
Satellite
• Composite knots
• Granny
• Square
• Knot sum
Torus
• Unknot (0₁)
• Trefoil (3₁)
• Cinquefoil (5₁)
• Septafoil (7₁)
• Unlink (0²₁)
• Hopf (2²₁)
• Solomon's (4²₁)
Invariants
• Alternating
• Arf invariant
• Bridge no.
• 2-bridge
• Brunnian
• Chirality
• Invertible
• Crosscap no.
• Crossing no.
• Finite type invariant
• Hyperbolic volume
• Khovanov homology
• Genus
• Knot group
• Link group
• Linking no.
• Polynomial
• Alexander
• Bracket
• HOMFLY
• Jones
• Kauffman
• Pretzel
• Prime
• list
• Stick no.
• Tricolorability
• Unknotting no. and problem
Notation
and operations
• Alexander–Briggs notation
• Conway notation
• Dowker–Thistlethwaite notation
• Flype
• Mutation
• Reidemeister move
• Skein relation
• Tabulation
Other
• Alexander's theorem
• Berge
• Braid theory
• Conway sphere
• Complement
• Double torus
• Fibered
• Knot
• List of knots and links
• Ribbon
• Slice
• Sum
• Tait conjectures
• Twist
• Wild
• Writhe
• Surgery theory
• Category
• Commons
Crosses
In modern use
• Alcoraz
• Anchored/Saint Clement
• Anuradhapura
• Archangels
• Archiepiscopal
• Armenian
• Arrow/Barby
• Balkenkreuz
• Bolnisi
• Bottony
• Branch
• Bulgarian
• Burgundy
• Byzantine
• Calvary
• Camargue
• Canterbury
• Catherine wheel
• Celtic
• Variant
• Cercelée
• Coptic
• Crosslet
• Fitchy
• Crucifix
• Cruciform halo
• Double
• Ethiopian
• Evangelists
• Fleury
• Fitchy
• Forked
• Fourchy
• Fylfot
• Globus cruciger
• Archbishop's variant
• Gnostic
• Grapevine/Saint Nino
• Greek
• Greek Orthodox
• Huguenot
• Iron
• Jeremiah
• Jerusalem/Crusaders
• Jerusalem (Kingdom)
• Latin/Roman
• Macedonian
• Maltese
• Marada
• Marian
• Maronite
• Moline
• Nordic
• Novgorod
• Occitan
• Order of Christ
• Papal
• Patonce
• Pattée
• Fitchée
• Patriarchal
• Pommy
• Portate/Saint Gilbert
• Potent
• Quadrate
• Resistance
• Ringed
• Russian
• Russian Orthodox
• Salem
• Saltire/Saint Andrew
• Saint Chad
• Saint David
• Saint Florian
• Saint George
• Saint James/Santiago
• Saint John
• Saint Patrick
• Saint Peter
• Saint Philip
• Saint Piran
• Saint Thomas
• Serbian
• Serbian Orthodox
• Short Sword
• Syriac (Eastern)
• Syriac (Western)
• Tau/Saint Anthony
Historical
• Avellane
• Aviz
• Black
• Blanc croix rouge
• Brigid
• Carolingian
• Chouan
• Consecration
• Coptic
• Coptic (Early)
• Cross cramponnée
• Crown
• Cuthbert's pectoral
• Engrailed
• Erminée
• Gammadion
• Jewelled
• Katanga
• Lazarus
• Lorraine
• Neith
• Nestorian
• Peñalba
• Pierced
• Quarterly
• Saint Alban
• Saint Julian
• Templar
• Teutonic Order
• Two-barred
• Victory
• Voided
By function
• Altar
• Blessing
• Conciliation
• Heraldry
• Nordic
• Pisan
• High
• Market
• Mercat
• Memorial
• Mission
• Necklace
• Pectoral
• Plague
• Preaching
• Processional
• Lalibela
• Rood/Triumphal cross
• Summit
• Wayside
Christograms, Chrismons
• Chi Rho
• IX monogram
• Labarum
• Signum manus
• Staurogram/Monogrammatic/Tau Rho
See also
• Ankh
• Armenian eternity sign
• Ichthys
• Irminsul
• Kolovrat
• Lauburu
• Mjölnir
• Rose
• Rota
• Solomon's knot
• Scientology
• Shamrock
• Shield of the Trinity
• Sunwheel swastika
• Sun
• Swastika
• Triskelion/Triskele
• Descriptions in antiquity of the execution cross
• List of tallest crosses in the world
• Christianity portal
• Arts portal
Solomon
Family and
reputed relations
• David
• Davidic line
• Menelik I
• Solomonic dynasty
• Naamah
• Pharaoh's daughter
• Queen of Sheba
• Rehoboam
Occurrences
• Judgement of Solomon
• Solomon in Islam
• Solomon's shamir
• Solomon's Temple
• Throne of Solomon
• Valley of the ants
Reputed works
• Protocanonical
• Ecclesiastes
• Proverbs
• Psalm 72
• Psalm 127
• Song of Songs
• Deuterocanonical
• Book of Wisdom
• Apocryphal
• Odes of Solomon
• Prayer of Solomon
• Psalms of Solomon
• Testament of Solomon
• Grimoires
• Key of Solomon
• The Lesser Key of Solomon
• Magical Treatise of Solomon
Related articles
• King Solomon's Mines
• Seal of Solomon
• Solomonic column
• Solomon's knot
• Solomon's Pools
• United Monarchy
Robert M. Solovay
Robert Martin Solovay (born December 15, 1938) is an American mathematician specializing in set theory.
Robert M. Solovay
Robert Solovay in 1983 (photo by George Bergman)
Born: December 15, 1938, Brooklyn, New York, U.S.
Nationality: American
Alma mater: University of Chicago
Known for: Solovay model; Solovay–Strassen primality test; Zero sharp; Martin's axiom; Solovay–Kitaev theorem
Awards: Paris Kanellakis Award (2003)
Scientific career
Fields: Mathematics
Institutions: University of California, Berkeley
Doctoral advisor: Saunders Mac Lane
Doctoral students: Matthew Foreman; Judith Roitman; Betül Tanbay; W. Hugh Woodin
Biography
Solovay earned his Ph.D. from the University of Chicago in 1964 under the direction of Saunders Mac Lane, with a dissertation on A Functorial Form of the Differentiable Riemann–Roch theorem.[1] Solovay has spent his career at the University of California at Berkeley, where his Ph.D. students include W. Hugh Woodin and Matthew Foreman.[2]
Work
Solovay's theorems include:
• Solovay's theorem showing that, if one assumes the existence of an inaccessible cardinal, then the statement "every set of real numbers is Lebesgue measurable" is consistent with Zermelo–Fraenkel set theory without the axiom of choice;
• Isolating the notion of 0#;
• Proving that the existence of a real-valued measurable cardinal is equiconsistent with the existence of a measurable cardinal;
• Proving that if $\lambda $ is a strong limit singular cardinal greater than a strongly compact cardinal, then $2^{\lambda }=\lambda ^{+}$ holds;
• Proving that if $\kappa $ is an uncountable regular cardinal, and $S\subseteq \kappa $ is a stationary set, then $S$ can be decomposed into the union of $\kappa $ disjoint stationary sets;
• With Stanley Tennenbaum, developing the method of iterated forcing and showing the consistency of Suslin's hypothesis;
• With Donald A. Martin, showed the consistency of Martin's axiom with arbitrarily large cardinality of the continuum;
• Outside of set theory, developing (with Volker Strassen) the Solovay–Strassen primality test, used to identify large natural numbers that are prime with high probability. This method has had implications for cryptography;
• Regarding the P versus NP problem, he proved with T. P. Baker and J. Gill that relativizing arguments cannot prove $\mathrm {P} \neq \mathrm {NP} $.[3]
• Proving that GL (the normal modal logic which has the instances of the schema $\Box (\Box A\to A)\to \Box A$ as additional axioms) completely axiomatizes the logic of the provability predicate of Peano arithmetic;
• With Alexei Kitaev, proving that a finite set of quantum gates can efficiently approximate an arbitrary unitary operator on one qubit in what is now known as Solovay–Kitaev theorem.
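The Solovay–Strassen test mentioned above can be sketched in a few lines of Python. The following is an independent textbook-style implementation, not Solovay and Strassen's own code, and the function names are illustrative: it draws random bases a, and declares n composite as soon as some a violates the Euler criterion a^((n−1)/2) ≡ (a/n) (mod n), where (a/n) is the Jacobi symbol.

```python
import random
from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # pull out factors of 2 using (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # quadratic reciprocity swap
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # 0 when gcd(a, n) > 1

def solovay_strassen(n, rounds=20):
    """Return False if n is definitely composite, True if n is probably prime.

    A composite n survives all rounds with probability at most 2**-rounds."""
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        if gcd(a, n) > 1:
            return False
        j = jacobi(a, n) % n        # map -1 to n - 1 for the comparison
        if pow(a, (n - 1) // 2, n) != j:
            return False            # a is an Euler witness for compositeness
    return True
```

Each round that does not expose n as composite at least halves the probability of error, which is why a modest number of rounds suffices in practice.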
Selected publications
• Solovay, Robert M. (1970). "A model of set-theory in which every set of reals is Lebesgue measurable". Annals of Mathematics. Second Series. 92 (1): 1–56. doi:10.2307/1970696. JSTOR 1970696.
• Solovay, Robert M. (1967). "A nonconstructible Δ¹₃ set of integers". Transactions of the American Mathematical Society. American Mathematical Society. 127 (1): 50–75. doi:10.2307/1994631. JSTOR 1994631.
• Solovay, Robert M. and Volker Strassen (1977). "A fast Monte-Carlo test for primality". SIAM Journal on Computing. 6 (1): 84–85. doi:10.1137/0206006.
See also
• Provability logic
References
1. Robert M. Solovay at the Mathematics Genealogy Project
2. "Robert M. Solovay | Department of Mathematics at University of California Berkeley".
3. Emerson, T. (1994-10-10). "Relativizations of the P=?NP question over the reals (and other ordered rings)". Theoretical Computer Science. 133 (1): 15–22. doi:10.1016/0304-3975(94)00068-9. ISSN 0304-3975.
External links
• Robert M. Solovay at the Mathematics Genealogy Project
• Robert Solovay at DBLP Bibliography Server
Winners of the Paris Kanellakis Theory and Practice Award
• Adleman, Diffie, Hellman, Merkle, Rivest, Shamir (1996)
• Lempel, Ziv (1997)
• Bryant, Clarke, Emerson, McMillan (1998)
• Sleator, Tarjan (1999)
• Karmarkar (2000)
• Myers (2001)
• Franaszek (2002)
• Miller, Rabin, Solovay, Strassen (2003)
• Freund, Schapire (2004)
• Holzmann, Kurshan, Vardi, Wolper (2005)
• Brayton (2006)
• Buchberger (2007)
• Cortes, Vapnik (2008)
• Bellare, Rogaway (2009)
• Mehlhorn (2010)
• Samet (2011)
• Broder, Charikar, Indyk (2012)
• Blumofe, Leiserson (2013)
• Demmel (2014)
• Luby (2015)
• Fiat, Naor (2016)
• Shenker (2017)
• Pevzner (2018)
• Alon, Gibbons, Matias, Szegedy (2019)
• Azar, Broder, Karlin, Mitzenmacher, Upfal (2020)
• Blum, Dinur, Dwork, McSherry, Nissim, Smith (2021)
• Burrows, Ferragina, Manzini (2022)
Solovay model
In the mathematical field of set theory, the Solovay model is a model constructed by Robert M. Solovay (1970) in which all of the axioms of Zermelo–Fraenkel set theory (ZF) hold, exclusive of the axiom of choice, but in which all sets of real numbers are Lebesgue measurable. The construction relies on the existence of an inaccessible cardinal.
In this way Solovay showed that in the proof of the existence of a non-measurable set from ZFC (Zermelo–Fraenkel set theory plus the axiom of choice), the axiom of choice is essential, at least granted that the existence of an inaccessible cardinal is consistent with ZFC.
Statement
ZF stands for Zermelo–Fraenkel set theory, and DC for the axiom of dependent choice.
Solovay's theorem is as follows. Assuming the existence of an inaccessible cardinal, there is an inner model of ZF + DC of a suitable forcing extension V[G] such that every set of reals is Lebesgue measurable, has the perfect set property, and has the Baire property.
Construction
Solovay constructed his model in two steps, starting with a model M of ZFC containing an inaccessible cardinal κ.
The first step is to take a Levy collapse M[G] of M by adding a generic set G for the notion of forcing that collapses all cardinals less than κ to ω. Then M[G] is a model of ZFC with the property that every set of reals that is definable over a countable sequence of ordinals is Lebesgue measurable, and has the Baire and perfect set properties. (This includes all definable and projective sets of reals; however for reasons related to Tarski's undefinability theorem the notion of a definable set of reals cannot be defined in the language of set theory, while the notion of a set of reals definable over a countable sequence of ordinals can be.)
The second step is to construct Solovay's model N as the class of all sets in M[G] that are hereditarily definable over a countable sequence of ordinals. The model N is an inner model of M[G] satisfying ZF + DC such that every set of reals is Lebesgue measurable, has the perfect set property, and has the Baire property. The proof of this uses the fact that every real in M[G] is definable over a countable sequence of ordinals, and hence N and M[G] have the same reals.
Instead of using Solovay's model N, one can also use the smaller inner model L(R) of M[G], consisting of the constructible closure of the real numbers, which has similar properties.
Complements
Solovay suggested in his paper that the use of an inaccessible cardinal might not be necessary. Several authors proved weaker versions of Solovay's result without assuming the existence of an inaccessible cardinal. In particular Krivine (1969) showed there was a model of ZFC in which every ordinal-definable set of reals is measurable, Solovay showed there is a model of ZF + DC in which there is some translation-invariant extension of Lebesgue measure to all subsets of the reals, and Shelah (1984) showed that there is a model in which all sets of reals have the Baire property (so that the inaccessible cardinal is indeed unnecessary in this case).
The case of the perfect set property was solved by Specker (1957), who showed (in ZF) that if every set of reals has the perfect set property and the first uncountable cardinal ℵ1 is regular then ℵ1 is inaccessible in the constructible universe. Combined with Solovay's result, this shows that the statements "There is an inaccessible cardinal" and "Every set of reals has the perfect set property" are equiconsistent over ZF.
Finally, Shelah (1984) showed that the consistency of an inaccessible cardinal is also necessary for constructing a model in which all sets of reals are Lebesgue measurable. More precisely, he showed that if every Σ¹₃ set of reals is measurable, then the first uncountable cardinal ℵ1 is inaccessible in the constructible universe, so that the condition about an inaccessible cardinal cannot be dropped from Solovay's theorem. Shelah also showed that the Σ¹₃ condition is close to best possible by constructing a model (without using an inaccessible cardinal) in which all Δ¹₃ sets of reals are measurable. See Raisonnier (1984), Stern (1985), and Miller (1989) for expositions of Shelah's result.
Shelah & Woodin (1990) showed that if supercompact cardinals exist then every set of reals in L(R), the constructible sets generated by the reals, is Lebesgue measurable and has the Baire property; this includes every "reasonably definable" set of reals.
References
• Krivine, Jean-Louis (1969), "Modèles de ZF + AC dans lesquels tout ensemble de réels définissable en termes d'ordinaux est mesurable-Lebesgue", Comptes Rendus de l'Académie des Sciences, Série A et B, 269: A549–A552, ISSN 0151-0509, MR 0253894
• Krivine, Jean-Louis (1971), "Théorèmes de consistance en théorie de la mesure de R. Solovay", Séminaire Bourbaki vol. 1968/69 Exposés 347-363, Lecture Notes in Mathematics, vol. 179, pp. 187–197, doi:10.1007/BFb0058812, ISBN 978-3-540-05356-9
• Miller, Arnold W. (1989), "Review of "Can You Take Solovay's Inaccessible Away? by Saharon Shelah"", The Journal of Symbolic Logic, Association for Symbolic Logic, 54 (2): 633–635, doi:10.2307/2274892, ISSN 0022-4812, JSTOR 2274892
• Raisonnier, Jean (1984), "A mathematical proof of S. Shelah's theorem on the measure problem and related results.", Israel Journal of Mathematics, 48: 48–56, doi:10.1007/BF02760523, MR 0768265
• Shelah, Saharon (1984), "Can you take Solovay's inaccessible away?", Israel Journal of Mathematics, 48 (1): 1–47, doi:10.1007/BF02760522, ISSN 0021-2172, MR 0768264
• Shelah, Saharon; Woodin, Hugh (1990), "Large cardinals imply that every reasonably definable set of reals is Lebesgue measurable", Israel Journal of Mathematics, 70 (3): 381–394, doi:10.1007/BF02801471, ISSN 0021-2172, MR 1074499
• Solovay, Robert M. (1970), "A model of set-theory in which every set of reals is Lebesgue measurable", Annals of Mathematics, Second Series, 92 (1): 1–56, doi:10.2307/1970696, ISSN 0003-486X, JSTOR 1970696, MR 0265151
• Specker, Ernst (1957), "Zur Axiomatik der Mengenlehre (Fundierungs- und Auswahlaxiom)", Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 3 (13–20): 173–210, doi:10.1002/malq.19570031302, ISSN 0044-3050, MR 0099297
• Stern, Jacques (1985), "Le problème de la mesure", Astérisque (121): 325–346, ISSN 0303-1179, MR 0768968
Solution in radicals
A solution in radicals or algebraic solution is a closed-form expression, and more specifically a closed-form algebraic expression, that is the solution of a polynomial equation, and relies only on addition, subtraction, multiplication, division, raising to integer powers, and the extraction of nth roots (square roots, cube roots, and other integer roots).
Not to be confused with Algebraic number.
A well-known example is the solution
$x={\frac {-b\pm {\sqrt {b^{2}-4ac\ }}}{2a}}$
of the quadratic equation
$ax^{2}+bx+c=0.$
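This formula can be evaluated directly; the short Python sketch below (illustrative, not part of the source) uses the standard `cmath` module so that a negative discriminant yields the correct complex roots rather than an error.

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0 via the radical formula (a != 0)."""
    d = cmath.sqrt(b * b - 4 * a * c)   # principal square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# The formula works whether the discriminant is positive, zero, or negative:
for a, b, c in [(1, -3, 2), (1, 0, 1), (2, 1, -6)]:
    for x in quadratic_roots(a, b, c):
        assert abs(a * x * x + b * x + c) < 1e-9
```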
There exist more complicated algebraic solutions for cubic equations[1] and quartic equations.[2] The Abel–Ruffini theorem,[3]: 211 and, more generally, Galois theory, state that some quintic equations, such as
$x^{5}-x+1=0,$
do not have any algebraic solution. The same is true for every higher degree. However, for any degree there are some polynomial equations that have algebraic solutions; for example, the equation $x^{10}=2$ can be solved as $x=\pm {\sqrt[{10}]{2}}.$ The eight other solutions are nonreal complex numbers, which are also algebraic and have the form $x=\pm r{\sqrt[{10}]{2}},$ where r is a fifth root of unity, which can be expressed with two nested square roots. See also Quintic function § Other solvable quintics for various other examples in degree 5.
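The description of the solutions of $x^{10}=2$ can be checked numerically; the following Python sketch (not part of the source) builds all ten roots from the two real signs and the five fifth roots of unity.

```python
import cmath

# The ten solutions of x**10 = 2 are x = s * r * 2**(1/10), where s = +1 or -1
# and r ranges over the five fifth roots of unity.
fifth_roots = [cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
solutions = [s * r * 2 ** 0.1 for s in (1, -1) for r in fifth_roots]

assert len(solutions) == 10
for z in solutions:
    assert abs(z ** 10 - 2) < 1e-9   # each candidate really satisfies x**10 = 2
```

Exactly two of these values are real (those with r = 1), matching the statement that the eight other solutions are nonreal complex numbers.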
Évariste Galois introduced a criterion allowing one to decide which equations are solvable in radicals. See Radical extension for the precise formulation of his result.
Algebraic solutions form a subset of closed-form expressions, because the latter permit transcendental functions (non-algebraic functions) such as the exponential function, the logarithmic function, and the trigonometric functions and their inverses.
See also
• Solvable quintics
• Solvable sextics
• Solvable septics
References
1. Nickalls, R. W. D., "A new approach to solving the cubic: Cardano's solution revealed," Mathematical Gazette 77, November 1993, 354-359.
2. Carpenter, William, "On the solution of the real quartic," Mathematics Magazine 39, 1966, 28-30.
3. Jacobson, Nathan (2009), Basic Algebra 1 (2nd ed.), Dover, ISBN 978-0-486-47189-1
Linear differential equation
In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form
$a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''\cdots +a_{n}(x)y^{(n)}=b(x)$
This article is about linear differential equations with one independent variable. For similar equations with two or more independent variables, see Partial differential equation § Linear equations of second order.
where a0(x), ..., an(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y′, ..., y(n) are the successive derivatives of an unknown function y of the variable x.
Such an equation is an ordinary differential equation (ODE). A linear differential equation may also be a linear partial differential equation (PDE), if the unknown function depends on several variables, and the derivatives that appear in the equation are partial derivatives.
A linear differential equation or a system of linear equations such that the associated homogeneous equations have constant coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is also true for a linear equation of order one, with non-constant coefficients. An equation of order two or higher with non-constant coefficients cannot, in general, be solved by quadrature. For order two, Kovacic's algorithm allows deciding whether there are solutions in terms of integrals, and computing them if any.
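The order-one case can be made concrete. The sketch below (an illustrative stdlib implementation, not from the source; the function name is hypothetical) solves y′ + p(t)y = q(t) by the integrating-factor quadrature formula, approximating both integrals with the trapezoid rule.

```python
import math

def solve_first_order(p, q, y0, x, steps=2000):
    """Evaluate at x the solution of y' + p(t)*y = q(t) with y(0) = y0, using
    the quadrature formula
        y(x) = exp(-P(x)) * (y0 + integral_0^x exp(P(t)) * q(t) dt),
    where P is the antiderivative of p with P(0) = 0; both integrals are
    approximated by the composite trapezoid rule."""
    h = x / steps
    P = 0.0                       # running antiderivative of p
    integral = 0.0                # running value of the inner integral
    prev = math.exp(P) * q(0.0)   # integrand at t = 0
    for i in range(1, steps + 1):
        t = i * h
        P += h * (p(t - h) + p(t)) / 2
        cur = math.exp(P) * q(t)
        integral += h * (prev + cur) / 2
        prev = cur
    return math.exp(-P) * (y0 + integral)

# y' + t*y = t with y(0) = 3 has the exact solution y = 1 + 2*exp(-t**2/2).
y = solve_first_order(lambda t: t, lambda t: t, y0=3.0, x=1.0)
assert abs(y - (1 + 2 * math.exp(-0.5))) < 1e-4
```

This is exactly "solving by quadrature": once the two integrals are known, no further differential-equation machinery is needed.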
The solutions of homogeneous linear differential equations with polynomial coefficients are called holonomic functions. This class of functions is stable under sums, products, differentiation, and integration, and it contains many usual and special functions, such as the exponential function, the logarithm, sine, cosine, the inverse trigonometric functions, the error function, the Bessel functions, and the hypergeometric functions. Representing a holonomic function by its defining differential equation and initial conditions makes most operations of calculus algorithmic on these functions, such as computation of antiderivatives, limits, asymptotic expansion, and numerical evaluation to any precision, with a certified error bound.
Basic terminology
The highest order of derivation that appears in a (linear) differential equation is the order of the equation. The term b(x), which does not depend on the unknown function and its derivatives, is sometimes called the constant term of the equation (by analogy with algebraic equations), even when this term is a non-constant function. If the constant term is the zero function, then the differential equation is said to be homogeneous, as it is a homogeneous polynomial in the unknown function and its derivatives. The equation obtained by replacing, in a linear differential equation, the constant term by the zero function is the associated homogeneous equation. A differential equation has constant coefficients if only constant functions appear as coefficients in the associated homogeneous equation.
A solution of a differential equation is a function that satisfies the equation. The solutions of a homogeneous linear differential equation form a vector space. In the ordinary case, this vector space has a finite dimension, equal to the order of the equation. All solutions of a linear differential equation are found by adding to a particular solution any solution of the associated homogeneous equation.
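The vector-space property can be illustrated numerically. The Python sketch below (names are illustrative; not from the source) checks, via a central finite-difference second derivative, that every linear combination of sin and cos solves the homogeneous equation y″ + y = 0.

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def combo(a, b):
    """A linear combination a*sin + b*cos of two solutions of y'' + y = 0."""
    return lambda x: a * math.sin(x) + b * math.cos(x)

# sin and cos both solve y'' + y = 0, and so does every linear combination:
# the solution set is a vector space of dimension 2, the order of the equation.
for a, b in [(1, 0), (0, 1), (2, -3), (0.5, 4)]:
    y = combo(a, b)
    for x in (0.0, 0.7, 2.1):
        assert abs(second_derivative(y, x) + y(x)) < 1e-4
```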
Linear differential operator
Main article: Differential operator
A basic differential operator of order i is a mapping that maps any differentiable function to its ith derivative, or, in the case of several variables, to one of its partial derivatives of order i. It is commonly denoted
${\frac {d^{i}}{dx^{i}}}$
in the case of univariate functions, and
${\frac {\partial ^{i_{1}+\cdots +i_{n}}}{\partial x_{1}^{i_{1}}\cdots \partial x_{n}^{i_{n}}}}$
in the case of functions of n variables. The basic differential operators include the derivative of order 0, which is the identity mapping.
A linear differential operator (abbreviated, in this article, as linear operator or, simply, operator) is a linear combination of basic differential operators, with differentiable functions as coefficients. In the univariate case, a linear operator has thus the form[1]
$a_{0}(x)+a_{1}(x){\frac {d}{dx}}+\cdots +a_{n}(x){\frac {d^{n}}{dx^{n}}},$
where a0(x), ..., an(x) are differentiable functions, and the nonnegative integer n is the order of the operator (if an(x) is not the zero function).
Let L be a linear differential operator. The application of L to a function f is usually denoted Lf or Lf(x), if one needs to specify the variable (this must not be confused with a multiplication). A linear differential operator is a linear operator, since it maps sums to sums and the product by a scalar to the product by the same scalar.
As the sum of two linear operators is a linear operator, as well as the product (on the left) of a linear operator by a differentiable function, the linear differential operators form a vector space over the real numbers or the complex numbers (depending on the nature of the functions that are considered). They also form a free module over the ring of differentiable functions.
The language of operators allows a compact writing for differential equations: if
$L=a_{0}(x)+a_{1}(x){\frac {d}{dx}}+\cdots +a_{n}(x){\frac {d^{n}}{dx^{n}}},$
is a linear differential operator, then the equation
$a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}=b(x)$
may be rewritten
$Ly=b(x).$
There are several variants of this notation; in particular, the variable of differentiation may or may not appear explicitly in y and in the right-hand side of the equation, as in Ly(x) = b(x) or Ly = b.
The kernel of a linear differential operator is its kernel as a linear mapping, that is the vector space of the solutions of the (homogeneous) differential equation Ly = 0.
In the case of an ordinary differential operator of order n, Carathéodory's existence theorem implies that, under very mild conditions, the kernel of L is a vector space of dimension n, and that the solutions of the equation Ly(x) = b(x) have the form
$S_{0}(x)+c_{1}S_{1}(x)+\cdots +c_{n}S_{n}(x),$
where c1, ..., cn are arbitrary numbers. Typically, the hypotheses of Carathéodory's theorem are satisfied in an interval I, if the functions b, a0, ..., an are continuous in I, and there is a positive real number k such that |an(x)| > k for every x in I.
Homogeneous equation with constant coefficients
A homogeneous linear differential equation has constant coefficients if it has the form
$a_{0}y+a_{1}y'+a_{2}y''+\cdots +a_{n}y^{(n)}=0$
where a0, ..., an are (real or complex) numbers. In other words, it has constant coefficients if it is defined by a linear operator with constant coefficients.
The study of these differential equations with constant coefficients dates back to Leonhard Euler, who introduced the exponential function ex, which is the unique solution of the equation f′ = f such that f(0) = 1. It follows that the nth derivative of ecx is cnecx, and this allows solving homogeneous linear differential equations rather easily.
Let
$a_{0}y+a_{1}y'+a_{2}y''+\cdots +a_{n}y^{(n)}=0$
be a homogeneous linear differential equation with constant coefficients (that is a0, ..., an are real or complex numbers).
Searching for solutions of this equation that have the form eαx is equivalent to searching for the constants α such that
$a_{0}e^{\alpha x}+a_{1}\alpha e^{\alpha x}+a_{2}\alpha ^{2}e^{\alpha x}+\cdots +a_{n}\alpha ^{n}e^{\alpha x}=0.$
Factoring out eαx (which is never zero) shows that α must be a root of the characteristic polynomial
$a_{0}+a_{1}t+a_{2}t^{2}+\cdots +a_{n}t^{n}$
of the differential equation, which is the left-hand side of the characteristic equation
$a_{0}+a_{1}t+a_{2}t^{2}+\cdots +a_{n}t^{n}=0.$
When these roots are all distinct, one has n distinct solutions that are not necessarily real, even if the coefficients of the equation are real. These solutions can be shown to be linearly independent, by considering the Vandermonde determinant of the values of these solutions at x = 0, ..., n – 1. Together they form a basis of the vector space of solutions of the differential equation (that is, the kernel of the differential operator).
Example
$y''''-2y'''+2y''-2y'+y=0$
has the characteristic equation
$z^{4}-2z^{3}+2z^{2}-2z+1=0.$
Its zeros are i, −i, and 1 (the latter with multiplicity 2). The solution basis is thus
$e^{ix},\;e^{-ix},\;e^{x},\;xe^{x}.$
A real basis of solution is thus
$\cos x,\;\sin x,\;e^{x},\;xe^{x}.$
In the case where the characteristic polynomial has only simple roots, the preceding provides a complete basis of the solution vector space. In the case of multiple roots, more linearly independent solutions are needed to obtain a basis. These have the form
$x^{k}e^{\alpha x},$
where k is a nonnegative integer, α is a root of the characteristic polynomial of multiplicity m, and k < m. For proving that these functions are solutions, one may remark that if α is a root of the characteristic polynomial of multiplicity m, the characteristic polynomial may be factored as P(t)(t − α)m. Thus, applying the differential operator of the equation is equivalent to applying first the operator $ {\frac {d}{dx}}-\alpha $ m times, and then the operator that has P as characteristic polynomial. By the exponential shift theorem,
$\left({\frac {d}{dx}}-\alpha \right)\left(x^{k}e^{\alpha x}\right)=kx^{k-1}e^{\alpha x},$
and thus one gets zero after k + 1 applications of $ {\frac {d}{dx}}-\alpha $.
As, by the fundamental theorem of algebra, the sum of the multiplicities of the roots of a polynomial equals the degree of the polynomial, the number of above solutions equals the order of the differential equation, and these solutions form a base of the vector space of the solutions.
In the common case where the coefficients of the equation are real, it is generally more convenient to have a basis of the solutions consisting of real-valued functions. Such a basis may be obtained from the preceding basis by remarking that, if a + ib is a root of the characteristic polynomial, then a – ib is also a root, of the same multiplicity. Thus a real basis is obtained by using Euler's formula, and replacing $x^{k}e^{(a+ib)x}$ and $x^{k}e^{(a-ib)x}$ by $x^{k}e^{ax}\cos(bx)$ and $x^{k}e^{ax}\sin(bx)$.
Second-order case
A homogeneous linear differential equation of the second order may be written
$y''+ay'+by=0,$
and its characteristic polynomial is
$r^{2}+ar+b.$
If a and b are real, there are three cases for the solutions, depending on the discriminant D = a2 − 4b. In all three cases, the general solution depends on two arbitrary constants c1 and c2.
• If D > 0, the characteristic polynomial has two distinct real roots α, and β. In this case, the general solution is
$c_{1}e^{\alpha x}+c_{2}e^{\beta x}.$
• If D = 0, the characteristic polynomial has a double root −a/2, and the general solution is
$(c_{1}+c_{2}x)e^{-ax/2}.$
• If D < 0, the characteristic polynomial has two complex conjugate roots α ± βi, and the general solution is
$c_{1}e^{(\alpha +\beta i)x}+c_{2}e^{(\alpha -\beta i)x},$
which may be rewritten in real terms, using Euler's formula as
$e^{\alpha x}(c_{1}\cos(\beta x)+c_{2}\sin(\beta x)).$
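The three cases above can be sketched in code. The following function (a minimal illustration; the name and string output are ours, not standard) classifies y″ + ay′ + by = 0 by its discriminant and returns a basis of real solutions:

```python
def general_solution_basis(a, b):
    """Basis of real solutions of y'' + a y' + b y = 0 (a, b real),
    classified by the discriminant D = a^2 - 4b, returned as strings."""
    D = a * a - 4 * b
    if D > 0:
        # two distinct real roots of r^2 + a r + b
        alpha = (-a + D ** 0.5) / 2
        beta = (-a - D ** 0.5) / 2
        return [f"exp({alpha}*x)", f"exp({beta}*x)"]
    if D == 0:
        # double root -a/2
        r = -a / 2
        return [f"exp({r}*x)", f"x*exp({r}*x)"]
    # D < 0: complex conjugate roots -a/2 +/- i*sqrt(-D)/2
    re, im = -a / 2, (-D) ** 0.5 / 2
    return [f"exp({re}*x)*cos({im}*x)", f"exp({re}*x)*sin({im}*x)"]

print(general_solution_basis(0, 1))   # y'' + y = 0: the oscillatory case
```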
To find the solution y(x) satisfying y(0) = d1 and y′(0) = d2, one equates the values of the above general solution at 0 and of its derivative there to d1 and d2, respectively. This results in a linear system of two linear equations in the two unknowns c1 and c2. Solving this system gives the solution of a so-called Cauchy problem, in which the values at 0 of the solution of the differential equation and of its derivative are specified.
Non-homogeneous equation with constant coefficients
A non-homogeneous equation of order n with constant coefficients may be written
$y^{(n)}(x)+a_{1}y^{(n-1)}(x)+\cdots +a_{n-1}y'(x)+a_{n}y(x)=f(x),$
where a1, ..., an are real or complex numbers, f is a given function of x, and y is the unknown function (for the sake of simplicity, "(x)" will be omitted in the following).
There are several methods for solving such an equation. The best method depends on the nature of the function f that makes the equation non-homogeneous. If f is a linear combination of exponential and sinusoidal functions, then the exponential response formula may be used. If, more generally, f is a linear combination of functions of the form xneax, xn cos(ax), and xn sin(ax), where n is a nonnegative integer, and a a constant (which need not be the same in each term), then the method of undetermined coefficients may be used. Still more generally, the annihilator method applies when f satisfies a homogeneous linear differential equation, typically, a holonomic function.
The most general method is the variation of constants, which is presented here.
The general solution of the associated homogeneous equation
$y^{(n)}+a_{1}y^{(n-1)}+\cdots +a_{n-1}y'+a_{n}y=0$
is
$y=u_{1}y_{1}+\cdots +u_{n}y_{n},$
where (y1, ..., yn) is a basis of the vector space of the solutions and u1, ..., un are arbitrary constants. The method of variation of constants takes its name from the following idea. Instead of considering u1, ..., un as constants, they can be considered as unknown functions that have to be determined for making y a solution of the non-homogeneous equation. For this purpose, one adds the constraints
${\begin{aligned}0&=u'_{1}y_{1}+u'_{2}y_{2}+\cdots +u'_{n}y_{n}\\0&=u'_{1}y'_{1}+u'_{2}y'_{2}+\cdots +u'_{n}y'_{n}\\&\;\;\vdots \\0&=u'_{1}y_{1}^{(n-2)}+u'_{2}y_{2}^{(n-2)}+\cdots +u'_{n}y_{n}^{(n-2)},\end{aligned}}$
which imply (by product rule and induction)
$y^{(i)}=u_{1}y_{1}^{(i)}+\cdots +u_{n}y_{n}^{(i)}$
for i = 1, ..., n – 1, and
$y^{(n)}=u_{1}y_{1}^{(n)}+\cdots +u_{n}y_{n}^{(n)}+u'_{1}y_{1}^{(n-1)}+u'_{2}y_{2}^{(n-1)}+\cdots +u'_{n}y_{n}^{(n-1)}.$
Replacing in the original equation y and its derivatives by these expressions, and using the fact that y1, ..., yn are solutions of the original homogeneous equation, one gets
$f=u'_{1}y_{1}^{(n-1)}+\cdots +u'_{n}y_{n}^{(n-1)}.$
This equation and the above ones with 0 as left-hand side form a system of n linear equations in u′1, ..., u′n whose coefficients are known functions (f, the yi, and their derivatives). This system can be solved by any method of linear algebra. The computation of antiderivatives gives u1, ..., un, and then y = u1y1 + ⋯ + unyn.
As antiderivatives are defined up to the addition of a constant, one finds again that the general solution of the non-homogeneous equation is the sum of an arbitrary solution and the general solution of the associated homogeneous equation.
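For a concrete second-order instance, the system above reduces to the classical Wronskian formulas u1′ = −y2 f/W and u2′ = y1 f/W, where W = y1y2′ − y2y1′. The following SymPy sketch (our own worked example, not from the article) applies them to y″ + y = sec x:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sec(x)                      # non-homogeneous term of y'' + y = sec(x)
y1, y2 = sp.cos(x), sp.sin(x)      # basis of solutions of y'' + y = 0
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))  # Wronskian (= 1 here)

# Solving the linear system for u1', u2' gives the Wronskian formulas.
u1 = sp.integrate(-y2 * f / W, x)  # = log(cos(x))
u2 = sp.integrate(y1 * f / W, x)   # = x

yp = sp.simplify(u1 * y1 + u2 * y2)

# Check that yp is indeed a particular solution: yp'' + yp = sec(x).
assert sp.simplify(sp.diff(yp, x, 2) + yp - f) == 0
print(yp)
```

The general solution is then yp plus c1 cos x + c2 sin x, in agreement with the statement that follows.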
First-order equation with variable coefficients
The general form of a linear ordinary differential equation of order 1, after dividing out the coefficient of y′(x), is:
$y'(x)=f(x)y(x)+g(x).$
If the equation is homogeneous, i.e. g(x) = 0, one may rewrite and integrate:
${\frac {y'}{y}}=f,\qquad \log y=k+F,$
where k is an arbitrary constant of integration and $F=\textstyle \int f\,dx$ is any antiderivative of f. Thus, the general solution of the homogeneous equation is
$y=ce^{F},$
where c = ek is an arbitrary constant.
For the general non-homogeneous equation, one may multiply it by the reciprocal e−F of a solution of the homogeneous equation.[2] This gives
$y'e^{-F}-yfe^{-F}=ge^{-F}.$
As $-fe^{-F}={\tfrac {d}{dx}}\left(e^{-F}\right),$ the product rule allows rewriting the equation as
${\frac {d}{dx}}\left(ye^{-F}\right)=ge^{-F}.$
Thus, the general solution is
$y=ce^{F}+e^{F}\int ge^{-F}dx,$
where c is a constant of integration, and F is any antiderivative of f (changing the antiderivative amounts to changing the constant of integration).
Example
Solving the equation
$y'(x)+{\frac {y(x)}{x}}=3x.$
The associated homogeneous equation $y'(x)+{\frac {y(x)}{x}}=0$ gives
${\frac {y'}{y}}=-{\frac {1}{x}},$
that is
$y={\frac {c}{x}}.$
Dividing the original equation by one of these solutions gives
$xy'+y=3x^{2}.$
That is
$(xy)'=3x^{2},$
$xy=x^{3}+c,$
and
$y(x)=x^{2}+c/x.$
For the initial condition
$y(1)=\alpha ,$
one gets the particular solution
$y(x)=x^{2}+{\frac {\alpha -1}{x}}.$
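This worked example can be verified symbolically; the short SymPy check below (ours, for illustration) confirms both the differential equation and the initial condition:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha')
y = x**2 + (alpha - 1) / x

# y should satisfy y' + y/x = 3x ...
assert sp.simplify(sp.diff(y, x) + y / x - 3 * x) == 0
# ... and the initial condition y(1) = alpha.
assert sp.simplify(y.subs(x, 1) - alpha) == 0
```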
System of linear differential equations
Main article: Matrix differential equation
A system of linear differential equations consists of several linear differential equations that involve several unknown functions. In general one restricts the study to systems such that the number of unknown functions equals the number of equations.
An arbitrary linear ordinary differential equation and a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. That is, if $y',y'',\ldots ,y^{(k)}$ appear in an equation, one may replace them by new unknown functions $y_{1},\ldots ,y_{k}$ that must satisfy the equations $y'=y_{1}$ and $y_{i}'=y_{i+1},$ for i = 1, ..., k – 1.
A linear system of the first order, which has n unknown functions and n differential equations, may normally be solved for the derivatives of the unknown functions. If this is not the case, the system is a differential-algebraic system, which is a different theory. Therefore, the systems that are considered here have the form
${\begin{aligned}y_{1}'(x)&=b_{1}(x)+a_{1,1}(x)y_{1}+\cdots +a_{1,n}(x)y_{n}\\[1ex]&\;\;\vdots \\[1ex]y_{n}'(x)&=b_{n}(x)+a_{n,1}(x)y_{1}+\cdots +a_{n,n}(x)y_{n},\end{aligned}}$
where the $b_{i}$ and the $a_{i,j}$ are functions of x. In matrix notation, this system may be written (omitting "(x)")
$\mathbf {y} '=A\mathbf {y} +\mathbf {b} .$
The solving method is similar to that for a single first-order linear differential equation, but with complications stemming from the noncommutativity of matrix multiplication.
Let
$\mathbf {u} '=A\mathbf {u} .$
be the homogeneous equation associated to the above matrix equation. Its solutions form a vector space of dimension n, and are therefore the columns of a square matrix of functions $U(x)$, whose determinant is not the zero function. If n = 1, or A is a matrix of constants, or, more generally, if A commutes with its antiderivative $\textstyle B=\int Adx$, then one may choose U equal to the exponential of B. In fact, in these cases, one has
${\frac {d}{dx}}\exp(B)=A\exp(B).$
In the general case there is no closed-form solution for the homogeneous equation, and one has to use either a numerical method, or an approximation method such as the Magnus expansion.
Knowing the matrix U, the general solution of the non-homogeneous equation is
$\mathbf {y} (x)=U(x)\mathbf {y_{0}} +U(x)\int U^{-1}(x)\mathbf {b} (x)\,dx,$
where the column matrix $\mathbf {y_{0}} $ is an arbitrary constant of integration.
If initial conditions are given as
$\mathbf {y} (x_{0})=\mathbf {y} _{0},$
the solution that satisfies these initial conditions is
$\mathbf {y} (x)=U(x)U^{-1}(x_{0})\mathbf {y_{0}} +U(x)\int _{x_{0}}^{x}U^{-1}(t)\mathbf {b} (t)\,dt.$
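In the constant-coefficient case, U(x) = exp(xA), so the homogeneous solution is y(x) = exp(xA) y0. A minimal numeric sketch (the truncated-series `expm` helper is our own, adequate for small matrices):

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential by truncated Taylor series (fine for small ||M||)."""
    result = np.eye(len(M))
    term = np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

# y' = A y with constant A has U(x) = exp(x A), so y(x) = exp(x A) y0.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])      # encodes y1' = y2, y2' = -y1, i.e. y1'' = -y1
y0 = np.array([1.0, 0.0])

def solve_at(x):
    return expm(x * A) @ y0

# For this A and y0 the exact solution is (cos x, -sin x).
print(solve_at(1.0))
```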
Higher order with variable coefficients
A linear ordinary equation of order one with variable coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is not the case for order at least two. This is the main result of Picard–Vessiot theory which was initiated by Émile Picard and Ernest Vessiot, and whose recent developments are called differential Galois theory.
The impossibility of solving by quadrature can be compared with the Abel–Ruffini theorem, which states that an algebraic equation of degree at least five cannot, in general, be solved by radicals. This analogy extends to the proof methods and motivates the denomination of differential Galois theory.
Similarly to the algebraic case, the theory allows deciding which equations may be solved by quadrature, and if possible solving them. However, for both theories, the necessary computations are extremely difficult, even with the most powerful computers.
Nevertheless, the case of order two with rational coefficients has been completely solved by Kovacic's algorithm.
Cauchy–Euler equation
Cauchy–Euler equations are examples of equations of any order, with variable coefficients, that can be solved explicitly. These are the equations of the form
$x^{n}y^{(n)}(x)+a_{n-1}x^{n-1}y^{(n-1)}(x)+\cdots +a_{0}y(x)=0,$
where $a_{0},\ldots ,a_{n-1}$ are constant coefficients.
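Substituting y = x^r into a Cauchy–Euler equation turns each term x^k y^(k) into r(r−1)⋯(r−k+1) x^r, so r must be a root of the resulting "indicial" polynomial. A small sketch (the helper name `indicial_poly` is ours):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def indicial_poly(coeffs):
    """coeffs = [a_0, ..., a_n] of the terms a_k x^k y^(k) (with a_n the
    leading coefficient); returns the polynomial in r obtained from y = x^r,
    in low-to-high coefficient order."""
    poly = np.zeros(1)
    for k, a in enumerate(coeffs):
        falling = np.array([1.0])           # builds r (r-1) ... (r-k+1)
        for j in range(k):
            falling = P.polymul(falling, [-j, 1.0])
        poly = P.polyadd(poly, a * falling)
    return poly

# x^2 y'' - 2 x y' + 2 y = 0:  a_0 = 2, a_1 = -2, a_2 = 1
p = indicial_poly([2.0, -2.0, 1.0])         # r^2 - 3r + 2
roots = np.roots(p[::-1])                   # np.roots wants highest degree first
print(sorted(roots.real))                   # exponents 1 and 2: solutions x, x^2
```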
Holonomic functions
Main article: holonomic function
A holonomic function, also called a D-finite function, is a function that is a solution of a homogeneous linear differential equation with polynomial coefficients.
Most functions that are commonly considered in mathematics are holonomic or quotients of holonomic functions. In fact, holonomic functions include polynomials, algebraic functions, logarithm, exponential function, sine, cosine, hyperbolic sine, hyperbolic cosine, inverse trigonometric and inverse hyperbolic functions, and many special functions such as Bessel functions and hypergeometric functions.
Holonomic functions have several closure properties; in particular, sums, products, derivative and integrals of holonomic functions are holonomic. Moreover, these closure properties are effective, in the sense that there are algorithms for computing the differential equation of the result of any of these operations, knowing the differential equations of the input.[3]
The usefulness of the concept of holonomic functions results from Zeilberger's theorem, which follows.[3]
A holonomic sequence is a sequence of numbers that may be generated by a recurrence relation with polynomial coefficients. The coefficients of the Taylor series at a point of a holonomic function form a holonomic sequence. Conversely, if the sequence of the coefficients of a power series is holonomic, then the series defines a holonomic function (even if the radius of convergence is zero). There are efficient algorithms for both conversions, that is, for computing the recurrence relation from the differential equation, and vice versa.[3]
It follows that, if one represents (in a computer) holonomic functions by their defining differential equations and initial conditions, most calculus operations can be done automatically on these functions, such as computing derivatives, indefinite and definite integrals, fast computation of Taylor series (thanks to the recurrence relation on their coefficients), evaluation to high precision with a certified bound on the approximation error, limits, localization of singularities, asymptotic behavior at infinity and near singularities, proofs of identities, etc.[4]
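The differential-equation-to-recurrence conversion can be illustrated by hand. For f″ + f = 0, substituting a power series Σ cₙxⁿ yields the holonomic recurrence (n+2)(n+1)cₙ₊₂ + cₙ = 0; the sketch below (our own illustration) generates the Taylor coefficients of cosine from it, using exact rational arithmetic:

```python
from fractions import Fraction
import math

# From f'' + f = 0, the Taylor coefficients c_n of a solution satisfy
# (n+2)(n+1) c_{n+2} + c_n = 0.  Initial conditions f(0)=1, f'(0)=0
# select the cosine function.
c = [Fraction(1), Fraction(0)]
for n in range(18):
    c.append(-c[n] / ((n + 2) * (n + 1)))

# The even coefficients are (-1)^k / (2k)!; the odd ones vanish.
for k in range(10):
    assert c[2 * k] == Fraction((-1) ** k, math.factorial(2 * k))
    assert c[2 * k + 1] == 0
print(c[:6])
```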
See also
• Continuous-repayment mortgage
• Fourier transform
• Laplace transform
• Linear difference equation
• Variation of parameters
References
1. Gershenfeld 1999, p.9
2. Motivation: In analogy to completing the square technique we write the equation as y′ − fy = g, and try to modify the left side so it becomes a derivative. Specifically, we seek an "integrating factor" h = h(x) such that multiplying by it makes the left side equal to the derivative of hy, namely hy′ − hfy = (hy)′. This means h′ = −f, so that h = e−∫ f dx = e−F, as in the text.
3. Zeilberger, Doron (1990), "A holonomic systems approach to special functions identities", Journal of Computational and Applied Mathematics, 32 (3): 321–368.
4. Benoit, A.; Chyzak, F.; Darrasse, A.; Gerhold, S.; Mezzarobba, M.; Salvy, B. (2010), "The Dynamic Dictionary of Mathematical Functions (DDMF)", International Congress on Mathematical Software, Springer, Berlin, Heidelberg, pp. 35–41.
• Birkhoff, Garrett & Rota, Gian-Carlo (1978), Ordinary Differential Equations, New York: John Wiley and Sons, Inc., ISBN 0-471-07411-X
• Gershenfeld, Neil (1999), The Nature of Mathematical Modeling, Cambridge, UK.: Cambridge University Press, ISBN 978-0-521-57095-4
• Robinson, James C. (2004), An Introduction to Ordinary Differential Equations, Cambridge, UK.: Cambridge University Press, ISBN 0-521-82650-0
External links
• http://eqworld.ipmnet.ru/en/solutions/ode.htm
• Dynamic Dictionary of Mathematical Function. Automatic and interactive study of many holonomic functions.
Solution of triangles
Solution of triangles (Latin: solutio triangulorum) is the main trigonometric problem of finding the characteristics of a triangle (angles and lengths of sides), when some of these are known. The triangle can be located on a plane or on a sphere. Applications requiring triangle solutions include geodesy, astronomy, construction, and navigation.
Solving plane triangles
A general form triangle has six main characteristics (see picture): three linear (side lengths a, b, c) and three angular (α, β, γ). The classical plane trigonometry problem is to specify three of the six characteristics and determine the other three. A triangle can be uniquely determined in this sense when given any of the following:[1][2]
• Three sides (SSS)
• Two sides and the included angle (SAS, side-angle-side)
• Two sides and an angle not included between them (SSA), if the side length adjacent to the angle is shorter than the other side length.
• A side and the two angles adjacent to it (ASA)
• A side, the angle opposite to it and an angle adjacent to it (AAS).
For all cases in the plane, at least one of the side lengths must be specified. If only the angles are given, the side lengths cannot be determined, because any similar triangle is a solution.
Trigonometric relations
The standard method of solving the problem is to use fundamental relations.
Law of cosines
$a^{2}=b^{2}+c^{2}-2bc\cos \alpha $
$b^{2}=a^{2}+c^{2}-2ac\cos \beta $
$c^{2}=a^{2}+b^{2}-2ab\cos \gamma $
Law of sines
${\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}$
Sum of angles
$\alpha +\beta +\gamma =180^{\circ }$
Law of tangents
${\frac {a-b}{a+b}}={\frac {\tan[{\frac {1}{2}}(\alpha -\beta )]}{\tan[{\frac {1}{2}}(\alpha +\beta )]}}.$
There are other (sometimes practically useful) universal relations: the law of cotangents and Mollweide's formula.
Notes
1. To find an unknown angle, the law of cosines is safer than the law of sines. The reason is that the value of sine for the angle of the triangle does not uniquely determine this angle. For example, if sin β = 0.5, the angle β can equal either 30° or 150°. Using the law of cosines avoids this problem: within the interval from 0° to 180° the cosine value unambiguously determines its angle. On the other hand, if the angle is small (or close to 180°), then it is more robust numerically to determine it from its sine than its cosine because the arc-cosine function has a divergent derivative at 1 (or −1).
2. We assume that the relative position of specified characteristics is known. If not, the mirror reflection of the triangle will also be a solution. For example, three side lengths uniquely define either a triangle or its reflection.
Three sides given (SSS)
Let three side lengths a, b, c be specified. To find the angles α, β, the law of cosines can be used:[3]
${\begin{aligned}\alpha &=\arccos {\frac {b^{2}+c^{2}-a^{2}}{2bc}}\\[4pt]\beta &=\arccos {\frac {a^{2}+c^{2}-b^{2}}{2ac}}.\end{aligned}}$
Then angle γ = 180° − α − β.
Some sources recommend finding angle β from the law of sines, but (as Note 1 above states) there is a risk of confusing an acute angle value with an obtuse one.
Another method of calculating the angles from known sides is to apply the law of cotangents.
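The SSS procedure above translates directly into code; the helper below (our own sketch, angles in degrees) uses only the law of cosines plus the angle sum:

```python
import math

def solve_sss(a, b, c):
    """Angles (alpha, beta, gamma) in degrees of a triangle with sides a, b, c.
    The law of cosines is used, since arccos is unambiguous on [0, 180] degrees."""
    alpha = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    beta = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    return alpha, beta, 180.0 - alpha - beta

print(solve_sss(3.0, 4.0, 5.0))   # the 3-4-5 right triangle: gamma = 90
```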
Two sides and the included angle given (SAS)
Here the lengths of sides a, b and the angle γ between these sides are known. The third side can be determined from the law of cosines:[4]
$c={\sqrt {a^{2}+b^{2}-2ab\cos \gamma }}.$
Now we use the law of cosines to find the second angle:
$\alpha =\arccos {\frac {b^{2}+c^{2}-a^{2}}{2bc}}.$
Finally, β = 180° − α − γ.
Two sides and non-included angle given (SSA)
This case is not solvable in all cases; a solution is guaranteed to be unique only if the side length adjacent to the angle is shorter than the other side length. Assume that two sides b, c and the angle β are known. The equation for the angle γ follows from the law of sines:[5]
$\sin \gamma ={\frac {c}{b}}\sin \beta .$
Denote D = (c/b) sin β (the right-hand side of the equation). There are four possible cases:
1. If D > 1, no such triangle exists because the side b does not reach line BC. For the same reason a solution does not exist if the angle β ≥ 90° and b ≤ c.
2. If D = 1, a unique solution exists: γ = 90°, i.e., the triangle is right-angled.
3. If D < 1 two alternatives are possible.
1. If b ≥ c, then β ≥ γ (the larger side corresponds to a larger angle). Since no triangle can have two obtuse angles, γ is an acute angle and the solution γ = arcsin D is unique.
2. If b < c, the angle γ may be acute: γ = arcsin D or obtuse: γ′ = 180° − γ. The figure on right shows the point C, the side b and the angle γ as the first solution, and the point C′, side b′ and the angle γ′ as the second solution.
Once γ is obtained, the third angle α = 180° − β − γ.
The third side can then be found from the law of sines:
$a=b\ {\frac {\sin \alpha }{\sin \beta }}$
or from the law of cosines:
$a=c\cos \beta \pm {\sqrt {b^{2}-c^{2}\sin ^{2}\beta }}$
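The case analysis for the ambiguous SSA configuration can be sketched as follows (our own helper, angles in degrees; degenerate zero-angle solutions are filtered out):

```python
import math

def solve_ssa(b, c, beta_deg):
    """All (alpha, beta, gamma) solutions in degrees for the SSA case:
    sides b, c and angle beta (opposite side b) given."""
    beta = math.radians(beta_deg)
    D = c / b * math.sin(beta)
    if D > 1:
        return []                            # side b does not reach the base line
    gammas = [math.degrees(math.asin(D))]    # acute candidate
    if D < 1 and b < c:
        gammas.append(180.0 - gammas[0])     # second, obtuse candidate
    solutions = []
    for gamma in gammas:
        alpha = 180.0 - beta_deg - gamma
        if alpha > 0:                        # discard degenerate triangles
            solutions.append((alpha, beta_deg, gamma))
    return solutions

print(solve_ssa(6.0, 8.0, 30.0))             # ambiguous case: two triangles
```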
A side and two adjacent angles given (ASA)
The known characteristics are the side c and the angles α, β. The third angle γ = 180° − α − β.
Two unknown sides can be calculated from the law of sines:[6]
$a=c\ {\frac {\sin \alpha }{\sin \gamma }};\quad b=c\ {\frac {\sin \beta }{\sin \gamma }}.$
or
$a=c{\frac {\sin \alpha }{\sin \alpha \cos \beta +\sin \beta \cos \alpha }}$
$b=c{\frac {\sin \beta }{\sin \alpha \cos \beta +\sin \beta \cos \alpha }}$
A side, one adjacent angle and the opposite angle given (AAS)
The procedure for solving an AAS triangle is the same as that for an ASA triangle: first, find the third angle by using the angle sum property of a triangle, then find the other two sides using the law of sines.
Other given lengths
In many cases, triangles can be solved given three pieces of information some of which are the lengths of the triangle's medians, altitudes, or angle bisectors. Posamentier and Lehmann[7] list the results for the question of solvability using no higher than square roots (i.e., constructibility) for each of the 95 distinct cases; 63 of these are constructible.
Solving spherical triangles
The general spherical triangle is fully determined by three of its six characteristics (3 sides and 3 angles). The lengths of the sides a, b, c of a spherical triangle are their central angles, measured in angular units rather than linear units. (On a unit sphere, the angle (in radians) and length around the sphere are numerically the same. On other spheres, the angle (in radians) is equal to the length around the sphere divided by the radius.)
Spherical geometry differs from planar Euclidean geometry, so the solution of spherical triangles is built on different rules. For example, the sum of the three angles α + β + γ depends on the size of the triangle. In addition, unequal similar triangles do not exist on a sphere, so the problem of constructing a triangle with three specified angles has a unique solution. The basic relations used to solve a problem are similar to those of the planar case: see Spherical law of cosines and Spherical law of sines.
Among other relationships that may be useful are the half-side formula and Napier's analogies:[8]
• $\tan {\frac {c}{2}}\cos {\frac {\alpha -\beta }{2}}=\tan {\frac {a+b}{2}}\cos {\frac {\alpha +\beta }{2}}$
• $\tan {\frac {c}{2}}\sin {\frac {\alpha -\beta }{2}}=\tan {\frac {a-b}{2}}\sin {\frac {\alpha +\beta }{2}}$
• $\cot {\frac {\gamma }{2}}\cos {\frac {a-b}{2}}=\tan {\frac {\alpha +\beta }{2}}\cos {\frac {a+b}{2}}$
• $\cot {\frac {\gamma }{2}}\sin {\frac {a-b}{2}}=\tan {\frac {\alpha -\beta }{2}}\sin {\frac {a+b}{2}}.$
Three sides given (spherical SSS)
Known: the sides a, b, c (in angular units). The triangle's angles are computed using the spherical law of cosines:
$\alpha =\arccos \left({\frac {\cos a-\cos b\ \cos c}{\sin b\ \sin c}}\right),$
$\beta =\arccos \left({\frac {\cos b-\cos c\ \cos a}{\sin c\ \sin a}}\right),$
$\gamma =\arccos \left({\frac {\cos c-\cos a\ \cos b}{\sin a\ \sin b}}\right).$
Two sides and the included angle given (spherical SAS)
Known: the sides a, b and the angle γ between them. The side c can be found from the spherical law of cosines:
$c=\arccos \left(\cos a\cos b+\sin a\sin b\cos \gamma \right).$
The angles α, β can be calculated as above, or by using Napier's analogies:
$\alpha =\arctan \ {\frac {2\sin a}{\tan({\frac {\gamma }{2}})\sin(b+a)+\cot({\frac {\gamma }{2}})\sin(b-a)}},$
$\beta =\arctan \ {\frac {2\sin b}{\tan({\frac {\gamma }{2}})\sin(a+b)+\cot({\frac {\gamma }{2}})\sin(a-b)}}.$
This problem arises in the navigation problem of finding the great circle between two points on the earth specified by their latitude and longitude; in this application, it is important to use formulas which are not susceptible to round-off errors. For this purpose, the following formulas (which may be derived using vector algebra) can be used:
${\begin{aligned}c&=\arctan {\frac {\sqrt {(\sin a\cos b-\cos a\sin b\cos \gamma )^{2}+(\sin b\sin \gamma )^{2}}}{\cos a\cos b+\sin a\sin b\cos \gamma }},\\\alpha &=\arctan {\frac {\sin a\sin \gamma }{\sin b\cos a-\cos b\sin a\cos \gamma }},\\\beta &=\arctan {\frac {\sin b\sin \gamma }{\sin a\cos b-\cos a\sin b\cos \gamma }},\end{aligned}}$
where the signs of the numerators and denominators in these expressions should be used to determine the quadrant of the arctangent.
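These round-off-resistant forms map naturally onto `atan2`, which handles the quadrant selection automatically. A direct transcription (the helper name `spherical_sas` is ours; all arguments and results in radians):

```python
import math

def spherical_sas(a, b, gamma):
    """Side c and angles alpha, beta of a spherical triangle from sides a, b
    and the included angle gamma, via the atan2 forms above."""
    cos_g, sin_g = math.cos(gamma), math.sin(gamma)
    c = math.atan2(
        math.hypot(math.sin(a) * math.cos(b) - math.cos(a) * math.sin(b) * cos_g,
                   math.sin(b) * sin_g),
        math.cos(a) * math.cos(b) + math.sin(a) * math.sin(b) * cos_g)
    alpha = math.atan2(math.sin(a) * sin_g,
                       math.sin(b) * math.cos(a) - math.cos(b) * math.sin(a) * cos_g)
    beta = math.atan2(math.sin(b) * sin_g,
                      math.sin(a) * math.cos(b) - math.cos(a) * math.sin(b) * cos_g)
    return c, alpha, beta

# Octant triangle: two quarter-circle sides meeting at a right angle.
print(spherical_sas(math.pi / 2, math.pi / 2, math.pi / 2))
```

In the navigation application, a and b are the colatitudes of the two points and γ is their longitude difference, so c is the great-circle distance.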
Two sides and non-included angle given (spherical SSA)
This problem is not solvable in all cases; a solution is guaranteed to be unique only if the side length adjacent to the angle is shorter than the other side length. Known: the sides b, c and the angle β not between them. A solution exists if the following condition holds:
$b>\arcsin(\sin c\,\sin \beta ).$
The angle γ can be found from the spherical law of sines:
$\gamma =\arcsin \left({\frac {\sin c\,\sin \beta }{\sin b}}\right).$
As for the plane case, if b < c then there are two solutions: γ and 180° − γ.
We can find other characteristics by using Napier's analogies:
${\begin{aligned}a&=2\arctan \left[\tan \left({\tfrac {1}{2}}(b-c)\right){\frac {\sin \left({\tfrac {1}{2}}(\beta +\gamma )\right)}{\sin \left({\tfrac {1}{2}}(\beta -\gamma )\right)}}\right],\\[4pt]\alpha &=2\operatorname {arccot} \left[\tan \left({\tfrac {1}{2}}(\beta -\gamma )\right){\frac {\sin \left({\tfrac {1}{2}}(b+c)\right)}{\sin \left({\tfrac {1}{2}}(b-c)\right)}}\right].\end{aligned}}$
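The existence condition and the two-solution ambiguity can be sketched in a few lines; this is an illustrative helper (name assumed) based on the law of sines, not a full validity check:

```python
import math

def ssa_gamma_candidates(b, c, beta):
    """Candidate values of gamma for the spherical SSA case (sides b, c,
    non-included angle beta), from the spherical law of sines.

    Returns [] when no solution exists, one candidate when b >= c,
    and both gamma and pi - gamma in the ambiguous case b < c.
    """
    s = math.sin(c) * math.sin(beta) / math.sin(b)
    if s > 1:  # the condition b > arcsin(sin c sin beta) fails
        return []
    gamma = math.asin(s)
    return [gamma] if b >= c else [gamma, math.pi - gamma]
```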
A side and two adjacent angles given (spherical ASA)
Known: the side c and the angles α, β. First we determine the angle γ using the spherical law of cosines:
$\gamma =\arccos(\sin \alpha \sin \beta \cos c-\cos \alpha \cos \beta ).\,$
We can find the two unknown sides from the spherical law of cosines (using the calculated angle γ):
$a=\arccos \left({\frac {\cos \alpha +\cos \beta \cos \gamma }{\sin \beta \sin \gamma }}\right),$
$b=\arccos \left({\frac {\cos \beta +\cos \alpha \cos \gamma }{\sin \alpha \sin \gamma }}\right),$
or by using Napier's analogies:
${\begin{aligned}a&=\arctan \left[{\frac {2\sin \alpha }{\cot({\frac {c}{2}})\sin(\beta +\alpha )+\tan({\frac {c}{2}})\sin(\beta -\alpha )}}\right],\\[4pt]b&=\arctan \left[{\frac {2\sin \beta }{\cot({\frac {c}{2}})\sin(\alpha +\beta )+\tan({\frac {c}{2}})\sin(\alpha -\beta )}}\right].\end{aligned}}$
A side, one adjacent angle and the opposite angle given (spherical AAS)
Known: the side a and the angles α, β. The side b can be found from the spherical law of sines:
$b=\arcsin \left({\frac {\sin a\,\sin \beta }{\sin \alpha }}\right).$
If the angle for the side a is acute and α > β, another solution exists:
$b=\pi -\arcsin \left({\frac {\sin a\,\sin \beta }{\sin \alpha }}\right).$
We can find other characteristics by using Napier's analogies:
${\begin{aligned}c&=2\arctan \left[\tan \left({\tfrac {1}{2}}(a-b)\right){\frac {\sin \left({\tfrac {1}{2}}(\alpha +\beta )\right)}{\sin \left({\frac {1}{2}}(\alpha -\beta )\right)}}\right],\\[4pt]\gamma &=2\operatorname {arccot} \left[\tan \left({\tfrac {1}{2}}(\alpha -\beta )\right){\frac {\sin \left({\tfrac {1}{2}}(a+b)\right)}{\sin \left({\frac {1}{2}}(a-b)\right)}}\right].\end{aligned}}$
Three angles given (spherical AAA)
Known: the angles α, β, γ. From the spherical law of cosines we infer:
$a=\arccos \left({\frac {\cos \alpha +\cos \beta \cos \gamma }{\sin \beta \sin \gamma }}\right),$
$b=\arccos \left({\frac {\cos \beta +\cos \gamma \cos \alpha }{\sin \gamma \sin \alpha }}\right),$
$c=\arccos \left({\frac {\cos \gamma +\cos \alpha \cos \beta }{\sin \alpha \sin \beta }}\right).$
Solving right-angled spherical triangles
The above algorithms become much simpler if one of the angles of a triangle (for example, the angle C) is a right angle. Such a spherical triangle is fully determined by two of its elements, and the other three can be calculated using Napier's pentagon or the following relations.
$\sin a=\sin c\cdot \sin A$ (from the spherical law of sines)
$\tan a=\sin b\cdot \tan A$
$\cos c=\cos a\cdot \cos b$ (from the spherical law of cosines)
$\tan b=\tan c\cdot \cos A$
$\cos A=\cos a\cdot \sin B$ (also from the spherical law of cosines)
$\cos c=\cot A\cdot \cot B$
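The consistency of these relations can be checked numerically: pick the two legs of a right-angled spherical triangle, derive the remaining parts from two of the relations, and the others hold automatically. A small sketch (sample values assumed):

```python
import math

# Legs a, b of a spherical triangle with a right angle at C.
a, b = 0.7, 0.5
c = math.acos(math.cos(a) * math.cos(b))   # cos c = cos a cos b
A = math.atan2(math.tan(a), math.sin(b))   # from tan a = sin b tan A
B = math.atan2(math.tan(b), math.sin(a))   # from tan b = sin a tan B

residuals = [
    math.sin(a) - math.sin(c) * math.sin(A),        # sin a = sin c sin A
    math.tan(b) - math.tan(c) * math.cos(A),        # tan b = tan c cos A
    math.cos(A) - math.cos(a) * math.sin(B),        # cos A = cos a sin B
    math.cos(c) - 1 / (math.tan(A) * math.tan(B)),  # cos c = cot A cot B
]
```

All residuals vanish to machine precision, as the relations are exact identities.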
Some applications
Triangulation
Main article: Triangulation
If one wants to measure the distance d from shore to a remote ship via triangulation, one marks on the shore two points with known distance l between them (the baseline). Let α, β be the angles between the baseline and the direction to the ship.
From the formulae above (ASA case, assuming planar geometry) one can compute the distance as the triangle height:
$d={\frac {\sin \alpha \,\sin \beta }{\sin(\alpha +\beta )}}\ell ={\frac {\tan \alpha \,\tan \beta }{\tan \alpha +\tan \beta }}\ell .$
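The planar computation can be sketched directly (function name is illustrative):

```python
import math

def distance_to_ship(alpha, beta, baseline):
    """Planar triangulation: perpendicular distance from the baseline to
    the target, given the two base angles (radians) and baseline length."""
    return math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta) * baseline
```

The sine and tangent forms on the line above are algebraically identical, as dividing the numerator and denominator of the first by cos α cos β shows.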
For the spherical case, one can first compute the length of side from the point at α to the ship (i.e. the side opposite to β) via the ASA formula
$\tan b={\frac {2\sin \beta }{\cot(l/2)\sin(\alpha +\beta )+\tan(l/2)\sin(\alpha -\beta )}},$
and insert this into the AAS formula for the right subtriangle that contains the angle α and the sides b and d:
$\sin d=\sin b\sin \alpha ={\frac {\tan b}{\sqrt {1+\tan ^{2}b}}}\sin \alpha .$
(The planar formula is in fact the leading term of the Taylor expansion in l of the spherical solution for d.)
This method is used in cabotage (coastal navigation). The angles α, β are defined by observation of familiar landmarks from the ship.
As another example, if one wants to measure the height h of a mountain or a high building, the angles α, β from two ground points to the top are specified. Let ℓ be the distance between these points. From the same ASA case formulas we obtain:
$h={\frac {\sin \alpha \,\sin \beta }{\sin(\beta -\alpha )}}\ell ={\frac {\tan \alpha \,\tan \beta }{\tan \beta -\tan \alpha }}\ell .$
The distance between two points on the globe
Main article: Great-circle distance
To calculate the distance between two points on the globe,
Point A: latitude λA, longitude LA, and
Point B: latitude λB, longitude LB
we consider the spherical triangle ABC, where C is the North Pole. Some characteristics are:
$a=90^{\circ }-\lambda _{\mathrm {B} },\,$
$b=90^{\circ }-\lambda _{\mathrm {A} },\,$
$\gamma =L_{\mathrm {A} }-L_{\mathrm {B} }.\,$
Since two sides and the included angle are given, we obtain from the formulas above
$\mathrm {AB} =R\arccos \left[\sin \lambda _{\mathrm {A} }\,\sin \lambda _{\mathrm {B} }+\cos \lambda _{\mathrm {A} }\,\cos \lambda _{\mathrm {B} }\,\cos \left(L_{\mathrm {A} }-L_{\mathrm {B} }\right)\right].$
Here R is the Earth's radius.
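A direct transcription of this formula might look as follows; the default radius of 6371 km is a commonly quoted mean Earth radius and is an assumed constant, not taken from the text:

```python
import math

def great_circle_distance(lat_a, lon_a, lat_b, lon_b, radius=6371.0):
    """Great-circle distance via the spherical law of cosines.

    Latitudes and longitudes in degrees; result in the units of radius.
    """
    la, lb = math.radians(lat_a), math.radians(lat_b)
    central = math.acos(math.sin(la) * math.sin(lb)
                        + math.cos(la) * math.cos(lb)
                        * math.cos(math.radians(lon_a - lon_b)))
    return radius * central
```

For nearly coincident points the arccosine form is ill conditioned, for the same round-off reasons noted in the SAS discussion; robust implementations prefer an arctangent (or haversine) form there.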
See also
• Congruence
• Hansen's problem
• Hinge theorem
• Lénárt sphere
• Snellius–Pothenot problem
External links
• Trigonometric Delights, by Eli Maor, Princeton University Press, 1998. Ebook version, in PDF format, full text presented.
• Trigonometry by Alfred Monroe Kenyon and Louis Ingold, The Macmillan Company, 1914. In images, full text presented. Google book.
• Spherical trigonometry on MathWorld.
• Intro to Spherical Trig. Includes discussion of The Napier circle and Napier's rules
• Spherical Trigonometry — for the use of colleges and schools by I. Todhunter, M.A., F.R.S. Historical Math Monograph posted by Cornell University Library.
• Triangulator – Triangle solver. Solve any plane triangle problem with the minimum of input data. Drawing of the solved triangle.
• TriSph – Free software to solve the spherical triangles, configurable to different practical applications and configured for gnomonic.
• Spherical Triangle Calculator – Solves spherical triangles.
• TrianCal – Triangles solver by Jesus S.
Equation
In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign =.[2][3] The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation.[4]
Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables.[5][6]
The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length.[1]
Description
An equation is written as two expressions, connected by an equals sign ("=").[2] The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation. Very often the right-hand side of an equation is assumed to be zero. This does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides.
The most common type of equation is a polynomial equation (commonly called also an algebraic equation) in which the two sides are polynomials. The sides of a polynomial equation contain one or more terms. For example, the equation
$Ax^{2}+Bx+C-y=0$
has left-hand side $Ax^{2}+Bx+C-y$, which has four terms, and right-hand side $0$, consisting of just one term. The names of the variables suggest that x and y are unknowns, and that A, B, and C are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or A, B, and C may be ordinary variables).
An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount of grain must be removed from the other pan to keep the scale in balance. More generally, an equation remains in balance if the same operation is performed on both of its sides.
Properties
Two equations or two systems of equations are equivalent, if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to:
• Adding or subtracting the same quantity to both sides of an equation. This shows that every equation is equivalent to an equation in which the right-hand side is zero.
• Multiplying or dividing both sides of an equation by a non-zero quantity.
• Applying an identity to transform one side of the equation. For example, expanding a product or factoring a sum.
• For a system: adding to both sides of an equation the corresponding side of another equation, multiplied by the same quantity.
If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. For example, the equation $x=1$ has the solution $x=1.$ Raising both sides to the exponent of 2 (which means applying the function $f(s)=s^{2}$ to both sides of the equation) changes the equation to $x^{2}=1$, which not only has the previous solution but also introduces the extraneous solution, $x=-1.$ Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation.
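The squaring example can be made concrete with a toy solution filter over a small candidate set (names are illustrative):

```python
def solutions(equation, candidates):
    """Return the candidate values that satisfy the given equation."""
    return [x for x in candidates if equation(x)]

candidates = [-1, 0, 1]
before = solutions(lambda x: x == 1, candidates)      # x = 1
after = solutions(lambda x: x ** 2 == 1, candidates)  # both sides squared
```

The original solution x = 1 survives the squaring, but the extraneous solution x = −1 appears alongside it.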
The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination.
Examples
Analogous illustration
An equation is analogous to a weighing scale, balance, or seesaw.
Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation).
In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same.
Parameters and unknowns
See also: Expression (mathematics)
Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters.
An example of an equation involving x and y as unknowns and the parameter R is
$x^{2}+y^{2}=R^{2}.$
When R is chosen to have the value 2 (R = 2), this equation would be recognized in Cartesian coordinates as the equation of the circle of radius 2 centered at the origin. Hence, the equation with R unspecified is the general equation for such a circle.
Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients (parameters) are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written ax2 + bx + c = 0.
The process of finding the solutions, or, in case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions.
A system of equations is a set of simultaneous equations, usually in several unknowns for which the common solutions are sought. Thus, a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system
${\begin{aligned}3x+5y&=2\\5x+8y&=3\end{aligned}}$
has the unique solution x = −1, y = 1.
Identities
Main articles: Identity (mathematics) and List of trigonometric identities
An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable.
In algebra, an example of an identity is the difference of two squares:
$x^{2}-y^{2}=(x+y)(x-y)$
which is true for all x and y.
Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are:
$\sin ^{2}(\theta )+\cos ^{2}(\theta )=1$
and
$\sin(2\theta )=2\sin(\theta )\cos(\theta )$
which are both true for all values of θ.
For example, to solve for the value of θ that satisfies the equation:
$3\sin(\theta )\cos(\theta )=1\,,$
where θ is limited to between 0 and 45 degrees, one may use the above identity for the product to give:
${\frac {3}{2}}\sin(2\theta )=1\,,$
yielding the following solution for θ:
$\theta ={\frac {1}{2}}\arcsin \left({\frac {2}{3}}\right)\approx 20.9^{\circ }.$
Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, restricting θ to be between 0 and 45 degrees would restrict the solution to only one number.
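The worked solution above checks out numerically; a brief sketch:

```python
import math

theta = 0.5 * math.asin(2 / 3)   # the solution restricted to [0, 45] degrees
assert abs(3 * math.sin(theta) * math.cos(theta) - 1) < 1e-12
theta_deg = math.degrees(theta)  # approximately 20.9 degrees, as in the text
```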
Algebra
Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches merely to establish the existence or absence of a solution and, if solutions exist, to count their number.
Polynomial equations
Main article: Polynomial equation
In general, an algebraic equation or polynomial equation is an equation of the form
$P=0$, or
$P=Q$ [lower-alpha 1]
where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.).
For example,
$x^{5}-3x+1=0$
is a univariate algebraic (polynomial) equation with integer coefficients and
$y^{4}+{\frac {xy}{2}}={\frac {x^{3}}{3}}-xy^{2}+y^{2}-{\frac {1}{7}}$
is a multivariate polynomial equation over the rational numbers.
Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates.
A large amount of research has been devoted to compute efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
Systems of linear equations
A system of linear equations (or linear system) is a collection of linear equations involving one or more variables.[lower-alpha 2] For example,
${\begin{alignedat}{7}3x&&\;+\;&&2y&&\;-\;&&z&&\;=\;&&1&\\2x&&\;-\;&&2y&&\;+\;&&4z&&\;=\;&&-2&\\-x&&\;+\;&&{\tfrac {1}{2}}y&&\;-\;&&z&&\;=\;&&0&\end{alignedat}}$
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by
${\begin{alignedat}{2}x&\,=\,&1\\y&\,=\,&-2\\z&\,=\,&-2\end{alignedat}}$
since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.
In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
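As a sketch of such a computational algorithm, the three-equation system above can be solved by a Gauss-Jordan variant of elimination; exact rational arithmetic via `fractions.Fraction` avoids rounding (the function name is illustrative):

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination in exact rational
    arithmetic. Assumes the system has a unique solution."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(rhs)]
         for row, rhs in zip(A, b)]
    for col in range(n):
        # Swap a row with a nonzero pivot into place, then eliminate
        # this column from every other row.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# The system from the text: 3x + 2y - z = 1, 2x - 2y + 4z = -2,
# -x + y/2 - z = 0.
solution = gauss_solve([[3, 2, -1], [2, -2, 4], [-1, Fraction(1, 2), -1]],
                       [1, -2, 0])
```

The result matches the solution x = 1, y = −2, z = −2 stated above.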
Geometry
Analytic geometry
In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form $ax+by+cz+d=0$, where $a,b,c$ and $d$ are real numbers and $x,y,z$ are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values $a,b,c$ are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is as the solution set of a single linear equation with values in $\mathbb {R} ^{2}$ or as the solution set of two linear equations with values in $\mathbb {R} ^{3}.$
A conic section is the intersection of a cone with equation $x^{2}+y^{2}=z^{2}$ and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the focuses of a conic.
The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians.
Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra.
Cartesian equations
In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics.
One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines).
The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation x2 + y2 = 4.
Parametric equations
Main article: Parametric equation
A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter.[7][8] For example,
${\begin{aligned}x&=\cos t\\y&=\sin t\end{aligned}}$
are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve.
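That every parameter value yields a point of the curve can be checked by sampling t and testing the implicit equation x² + y² = 1:

```python
import math

# Sample the parametric equations x = cos t, y = sin t and confirm each
# sampled point lies on the unit circle.
points = [(math.cos(t), math.sin(t))
          for t in (k * math.pi / 6 for k in range(12))]
max_residual = max(abs(x * x + y * y - 1) for x, y in points)
```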
The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).
Number theory
Diophantine equations
Main article: Diophantine equation
A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of linear Diophantine equation is ax + by = c where a, b, and c are constants. An exponential Diophantine equation is one for which exponents of the terms of the equation can be unknowns.
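For the linear case ax + by = c, the classical extended Euclidean algorithm produces one integer solution whenever gcd(a, b) divides c; a short sketch (function names assumed):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_linear_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y = c, or None when
    gcd(a, b) does not divide c (in which case none exists)."""
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None
    return x * (c // g), y * (c // g)
```

All other solutions follow from one by adding multiples of (b/g, −a/g).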
Diophantine problems have fewer equations than unknowns and involve finding integers that satisfy all the equations simultaneously. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it.
The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
Algebraic and transcendental numbers
Main articles: Algebraic number and Transcendental number
An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental.
Algebraic geometry
Main article: Algebraic geometry
Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry.
The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations.
Differential equations
Main article: Differential equation
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.
If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
Ordinary differential equations
Main article: Ordinary differential equation
An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.
Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
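One of the simplest such numerical methods is forward Euler, sketched here for an ODE with a known closed form so the approximation can be compared against it (a minimal illustration, not a production solver):

```python
import math

def euler(f, y0, t0, t1, steps):
    """Fixed-step forward Euler approximation of y(t1) for the initial
    value problem y' = f(t, y), y(t0) = y0."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)  # follow the tangent direction for one step
        t += h
    return y

# y' = y with y(0) = 1 has the exact solution y(t) = e^t, so the
# approximation at t = 1 approaches e as the step count grows.
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100_000)
```

The global error of forward Euler shrinks linearly with the step size, which is why so many steps are needed for even modest accuracy.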
Partial differential equations
Main article: Partial differential equation
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.
Types of equations
Equations can be classified according to the types of operations and quantities involved. Important types include:
• An algebraic equation or polynomial equation is an equation in which both sides are polynomials (see also system of polynomial equations). These are further classified by degree:
• linear equation for degree one
• quadratic equation for degree two
• cubic equation for degree three
• quartic equation for degree four
• quintic equation for degree five
• sextic equation for degree six
• septic equation for degree seven
• octic equation for degree eight
• A Diophantine equation is an equation where the unknowns are required to be integers
• A transcendental equation is an equation involving a transcendental function of its unknowns
• A parametric equation is an equation in which the solutions for the variables are expressed as functions of some other variables, called parameters appearing in the equations
• A functional equation is an equation in which the unknowns are functions rather than simple quantities
• Equations involving derivatives, integrals and finite differences:
• A differential equation is a functional equation involving derivatives of the unknown functions, where the function and its derivatives are evaluated at the same point, such as $f'(x)=x^{2}$. Differential equations are subdivided into ordinary differential equations for functions of a single variable and partial differential equations for functions of multiple variables
• An integral equation is a functional equation involving the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from a differential equation primarily through a change of variable substituting the function by its derivative; however, this is not the case when the integral is taken over an open surface
• An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from integral and differential equations through a similar change of variable.
• A functional differential equation or delay differential equation is a functional equation involving derivatives of the unknown functions, evaluated at multiple points, such as $f'(x)=f(x-2)$
• A difference equation is an equation where the unknown is a function f that occurs in the equation through f(x), f(x−1), ..., f(x−k), for some integer k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation
• A stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process
See also
• Formula
• History of algebra
• Indeterminate equation
• List of equations
• List of scientific equations named after people
• Term (logic)
• Theory of equations
• Cancelling out
Notes
1. As such an equation can be rewritten P – Q = 0, many authors do not consider this case explicitly.
2. The subject of this article is basic in mathematics, and is treated in many textbooks. Among them, Lay 2005, Meyer 2001, and Strang 2005 contain the material of this article.
References
1. Recorde, Robert, The Whetstone of Witte ... (London, England: Jhon Kyngstone, 1557), the third page of the chapter "The rule of equation, commonly called Algebers Rule."
2. "Equation - Math Open Reference". www.mathopenref.com. Retrieved 2020-09-01.
3. "Equations and Formulas". www.mathsisfun.com. Retrieved 2020-09-01.
4. Marcus, Solomon; Watt, Stephen M. "What is an Equation?". Retrieved 2019-02-27.
5. Lachaud, Gilles. "Équation, mathématique". Encyclopædia Universalis (in French).
6. "A statement of equality between two expressions. Equations are of two types, identities and conditional equations (or usually simply "equations")". "Equation", in Mathematics Dictionary, Glenn James and Robert C. James (eds.), Van Nostrand, 3rd ed. 1968 (1st ed. 1948), p. 131.
7. Thomas, George B., and Finney, Ross L., Calculus and Analytic Geometry, Addison Wesley Publishing Co., fifth edition, 1979, p. 91.
8. Weisstein, Eric W. "Parametric Equations." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/ParametricEquations.html
External links
• Winplot: General Purpose plotter that can draw and animate 2D and 3D mathematical equations.
• Equation plotter: A web page for producing and downloading pdf or postscript plots of the solution sets to equations and inequations in two variables (x and y).
Feasible region
In mathematical optimization, a feasible region, feasible set, search space, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints.[1] This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down.
For example, consider the problem of minimizing the function $x^{2}+y^{4}$ with respect to the variables $x$ and $y,$ subject to $1\leq x\leq 10$ and $5\leq y\leq 12.\,$ Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12. The feasible set of the problem is separate from the objective function, which states the criterion to be optimized and which in the above example is $x^{2}+y^{4}.$
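As a sketch of this example (the feasibility check and the brute-force grid search are my own illustration, not a general solution method), the objective increases in both variables over the box, so the minimum sits at the corner (1, 5):

```python
# Feasible set: pairs (x, y) with 1 <= x <= 10 and 5 <= y <= 12.
def is_feasible(x, y):
    return 1 <= x <= 10 and 5 <= y <= 12

def objective(x, y):
    return x**2 + y**4

# Coarse integer grid search over the feasible box (illustration only).
candidates = [(x, y) for x in range(1, 11) for y in range(5, 13)]
best = min(candidates, key=lambda p: objective(*p))
print(best, objective(*best))  # (1, 5) 626
```

Note how the feasible set (`is_feasible`) and the objective are kept separate, mirroring the distinction made in the text.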
In many problems, the feasible set reflects a constraint that one or more variables must be non-negative. In pure integer programming problems, the feasible set is the set of integers (or some subset thereof). In linear programming problems, the feasible set is a convex polytope: a region in multidimensional space whose boundaries are formed by hyperplanes and whose corners are vertices.
Constraint satisfaction is the process of finding a point in the feasible region.
Convex feasible set
See also: Convex optimization
A convex feasible set is one in which a line segment connecting any two feasible points goes through only other feasible points, and not through any points outside the feasible set. Convex feasible sets arise in many types of problems, including linear programming problems, and they are of particular interest because, if the problem has a convex objective function that is to be minimized (or a concave one that is to be maximized), it will generally be easier to solve in the presence of a convex feasible set and any local optimum will also be a global optimum.
No feasible set
If the constraints of an optimization problem are mutually contradictory, there are no points that satisfy all the constraints and thus the feasible region is the empty set. In this case the problem has no solution and is said to be infeasible.
Bounded and unbounded feasible sets
Feasible sets may be bounded or unbounded. For example, the feasible set defined by the constraint set {x ≥ 0, y ≥ 0} is unbounded because in some directions there is no limit on how far one can go and still be in the feasible region. In contrast, the feasible set formed by the constraint set {x ≥ 0, y ≥ 0, x + 2y ≤ 4} is bounded because the extent of movement in any direction is limited by the constraints.
In linear programming problems with n variables, a necessary but insufficient condition for the feasible set to be bounded is that the number of constraints be at least n + 1 (as illustrated by the above example).
If the feasible set is unbounded, there may or may not be an optimum, depending on the specifics of the objective function. For example, if the feasible region is defined by the constraint set {x ≥ 0, y ≥ 0}, then the problem of maximizing x + y has no optimum since any candidate solution can be improved upon by increasing x or y; yet if the problem is to minimize x + y, then there is an optimum (specifically at (x, y) = (0, 0)).
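The two cases in this example can be checked directly (a small sketch; the particular candidate point is my own):

```python
# Feasible set {x >= 0, y >= 0}: unbounded.
def feasible(x, y):
    return x >= 0 and y >= 0

def objective(x, y):
    return x + y

# Maximizing: any feasible candidate can be improved, so no optimum.
x, y = 3.0, 4.0
assert feasible(x + 1, y) and objective(x + 1, y) > objective(x, y)

# Minimizing: x + y >= 0 on the feasible set, and (0, 0) attains 0,
# so the minimum exists at the origin.
assert feasible(0, 0) and objective(0, 0) == 0
print("minimum attained at (0, 0):", objective(0, 0))
```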
Candidate solution
In optimization and other branches of mathematics, and in search algorithms (a topic in computer science), a candidate solution is a member of the set of possible solutions in the feasible region of a given problem. A candidate solution does not have to be a likely or reasonable solution to the problem—it is simply in the set that satisfies all constraints; that is, it is in the set of feasible solutions. Algorithms for solving various types of optimization problems often narrow the set of candidate solutions down to a subset of the feasible solutions, whose points remain as candidate solutions while the other feasible solutions are henceforth excluded as candidates.
The space of all candidate solutions, before any feasible points have been excluded, is called the feasible region, feasible set, search space, or solution space. This is the set of all possible solutions that satisfy the problem's constraints. Constraint satisfaction is the process of finding a point in the feasible set.
Genetic algorithm
In the case of the genetic algorithm, the candidate solutions are the individuals in the population being evolved by the algorithm.[2]
Calculus
In calculus, an optimal solution is sought using the first derivative test: the first derivative of the function being optimized is equated to zero, and any values of the choice variable(s) that satisfy this equation are viewed as candidate solutions (while those that do not are ruled out as candidates). There are several ways in which a candidate solution might not be an actual solution. First, it might give a minimum when a maximum is being sought (or vice versa), and second, it might give neither a minimum nor a maximum but rather a saddle point or an inflection point, at which a temporary pause in the local rise or fall of the function occurs. Such candidate solutions may be able to be ruled out by use of the second derivative test, the satisfaction of which is sufficient for the candidate solution to be at least locally optimal. Third, a candidate solution may be a local optimum but not a global optimum.
In taking antiderivatives of monomials of the form $x^{n},$ the candidate solution using Cavalieri's quadrature formula would be ${\tfrac {1}{n+1}}x^{n+1}+C.$ This candidate solution is in fact correct except when $n=-1.$
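A quick numeric sanity check of Cavalieri's candidate (the helper names and the finite-difference step are my own): differentiating x^(n+1)/(n+1) should recover x^n whenever n ≠ −1.

```python
def antiderivative(n):
    """Candidate antiderivative of x**n from Cavalieri's formula (n != -1)."""
    if n == -1:
        raise ValueError("n = -1 is the exception: the antiderivative is ln|x|")
    return lambda x: x**(n + 1) / (n + 1)

def numeric_derivative(f, x, h=1e-6):
    # Central finite difference, error O(h**2).
    return (f(x + h) - f(x - h)) / (2 * h)

F = antiderivative(3)          # candidate antiderivative of x**3
print(abs(numeric_derivative(F, 2.0) - 2.0**3) < 1e-5)  # True
```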
Linear programming
In the simplex method for solving linear programming problems, a vertex of the feasible polytope is selected as the initial candidate solution and is tested for optimality; if it is rejected as the optimum, an adjacent vertex is considered as the next candidate solution. This process is continued until a candidate solution is found to be the optimum.
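A full simplex implementation is beyond this sketch, but the fact the method relies on, that a linear objective over a bounded polytope attains its optimum at a vertex, can be illustrated by enumerating the vertices of the bounded feasible set {x ≥ 0, y ≥ 0, x + 2y ≤ 4} from the earlier section (this is vertex enumeration, not the simplex algorithm itself):

```python
# Vertices of the polytope {x >= 0, y >= 0, x + 2y <= 4}.
vertices = [(0.0, 0.0), (4.0, 0.0), (0.0, 2.0)]

def objective(x, y):
    return x + y  # linear objective, to be maximized

# The optimum of a linear objective over a bounded polytope is
# attained at one of its vertices, so comparing vertices suffices.
best = max(vertices, key=lambda v: objective(*v))
print(best, objective(*best))  # (4.0, 0.0) 4.0
```

The simplex method reaches the same vertex by walking along edges from an initial vertex instead of enumerating all of them.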
References
1. Beavis, Brian; Dobbs, Ian (1990). Optimisation and Stability Theory for Economic Analysis. New York: Cambridge University Press. p. 32. ISBN 0-521-33605-8.
2. Whitley, Darrell (1994). "A genetic algorithm tutorial" (PDF). Statistics and Computing. 4 (2): 65–85. doi:10.1007/BF00175354. S2CID 3447126.
Algebraic equation
In mathematics, an algebraic equation or polynomial equation is an equation of the form
$P=0$
where P is a polynomial with coefficients in some field, often the field of the rational numbers. For many authors, the term algebraic equation refers only to univariate equations, that is polynomial equations that involve only one variable. On the other hand, a polynomial equation may involve several variables. In the case of several variables (the multivariate case), the term polynomial equation is usually preferred to algebraic equation.
For example,
$x^{5}-3x+1=0$
is an algebraic equation with integer coefficients and
$y^{4}+{\frac {xy}{2}}-{\frac {x^{3}}{3}}+xy^{2}+y^{2}+{\frac {1}{7}}=0$
is a multivariate polynomial equation over the rationals.
Some but not all polynomial equations with rational coefficients have a solution that is an algebraic expression that can be found using a finite number of operations that involve only those same types of coefficients (that is, can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but for degree five or more it can only be done for some equations, not all. Much research has been devoted to efficiently computing accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root-finding algorithm) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
Terminology
The term "algebraic equation" dates from the time when the main problem of algebra was to solve univariate polynomial equations. This problem was completely solved during the 19th century; see Fundamental theorem of algebra, Abel–Ruffini theorem and Galois theory.
Since then, the scope of algebra has been dramatically enlarged. In particular, it includes the study of equations that involve nth roots and, more generally, algebraic expressions. This makes the term algebraic equation ambiguous outside the context of the old problem. So the term polynomial equation is generally preferred when this ambiguity may occur, especially when considering multivariate equations.
History
The study of algebraic equations is probably as old as mathematics: the Babylonian mathematicians, as early as 2000 BC could solve some kinds of quadratic equations (displayed on Old Babylonian clay tablets).
Univariate algebraic equations over the rationals (i.e., with rational coefficients) have a very long history. Ancient mathematicians wanted the solutions in the form of radical expressions, like $x={\frac {1+{\sqrt {5}}}{2}}$ for the positive solution of $x^{2}-x-1=0$. The ancient Egyptians knew how to solve equations of degree 2 in this manner. The Indian mathematician Brahmagupta (597–668 AD) explicitly described the quadratic formula in his treatise Brāhmasphuṭasiddhānta published in 628 AD, but written in words instead of symbols. In the 9th century Muhammad ibn Musa al-Khwarizmi and other Islamic mathematicians derived the quadratic formula, the general solution of equations of degree 2, and recognized the importance of the discriminant. During the Renaissance in 1545, Gerolamo Cardano published the solution of Scipione del Ferro and Niccolò Fontana Tartaglia to equations of degree 3 and that of Lodovico Ferrari for equations of degree 4. Finally Niels Henrik Abel proved, in 1824, that equations of degree 5 and higher do not have general solutions using radicals. Galois theory, named after Évariste Galois, showed that some equations of at least degree 5 do not even have an idiosyncratic solution in radicals, and gave criteria for deciding if an equation is in fact solvable using radicals.
Areas of study
The algebraic equations are the basis of a number of areas of modern mathematics: Algebraic number theory is the study of (univariate) algebraic equations over the rationals (that is, with rational coefficients). Galois theory was introduced by Évariste Galois to specify criteria for deciding if an algebraic equation may be solved in terms of radicals. In field theory, an algebraic extension is an extension such that every element is a root of an algebraic equation over the base field. Transcendental number theory is the study of the real numbers which are not solutions to an algebraic equation over the rationals. A Diophantine equation is a (usually multivariate) polynomial equation with integer coefficients for which one is interested in the integer solutions. Algebraic geometry is the study of the solutions in an algebraically closed field of multivariate polynomial equations.
Two equations are equivalent if they have the same set of solutions. In particular the equation $P=Q$ is equivalent to $P-Q=0$. It follows that the study of algebraic equations is equivalent to the study of polynomials.
A polynomial equation over the rationals can always be converted to an equivalent one in which the coefficients are integers. For example, multiplying through by 42 = 2·3·7 and grouping its terms in the first member, the previously mentioned polynomial equation $y^{4}+{\frac {xy}{2}}={\frac {x^{3}}{3}}-xy^{2}+y^{2}-{\frac {1}{7}}$ becomes
$42y^{4}+21xy-14x^{3}+42xy^{2}-42y^{2}+6=0.$
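This clearing of denominators can be verified mechanically (a sketch using Python's exact rational arithmetic; the coefficient list is just the equation above written out): the least common multiple of the denominators 2, 3 and 7 is 42.

```python
from fractions import Fraction
from math import lcm

# Coefficients of y**4 + (1/2)xy - (1/3)x**3 + x*y**2 - y**2 + 1/7 = 0,
# one entry per term, after grouping everything in the first member.
coeffs = [Fraction(1), Fraction(1, 2), Fraction(-1, 3),
          Fraction(1), Fraction(-1), Fraction(1, 7)]

m = lcm(*(c.denominator for c in coeffs))
print(m)  # 42
cleared = [c * m for c in coeffs]
print([int(c) for c in cleared])  # [42, 21, -14, 42, -42, 6]
```

Multiplying through by the LCM of the denominators always works, which is why the conversion to integer coefficients is always possible.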
Because sine, exponentiation, and 1/T are not polynomial functions,
$e^{T}x^{2}+{\frac {1}{T}}xy+\sin(T)z-2=0$
is not a polynomial equation in the four variables x, y, z, and T over the rational numbers. However, it is a polynomial equation in the three variables x, y, and z over the field of the elementary functions in the variable T.
Theory
Polynomials
Main article: Polynomial § Solving polynomial equations
Given an equation in unknown x
$(\mathrm {E} )\qquad a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{1}x+a_{0}=0$,
with coefficients in a field K, one can equivalently say that the solutions of (E) in K are the roots in K of the polynomial
$P=a_{n}X^{n}+a_{n-1}X^{n-1}+\dots +a_{1}X+a_{0}\quad \in K[X]$.
It can be shown that a polynomial of degree n in a field has at most n roots. The equation (E) therefore has at most n solutions.
If K' is a field extension of K, one may consider (E) to be an equation with coefficients in K and the solutions of (E) in K are also solutions in K' (the converse does not hold in general). It is always possible to find a field extension of K known as the rupture field of the polynomial P, in which (E) has at least one solution.
Existence of solutions to real and complex equations
The fundamental theorem of algebra states that the field of the complex numbers is algebraically closed, that is, all polynomial equations with complex coefficients and degree at least one have a solution.
It follows that all polynomial equations of degree 1 or more with real coefficients have a complex solution. On the other hand, an equation such as $x^{2}+1=0$ does not have a solution in $\mathbb {R} $ (the solutions are the imaginary units i and –i).
While the real solutions of real equations are intuitive (they are the x-coordinates of the points where the curve y = P(x) intersects the x-axis), the existence of complex solutions to real equations can be surprising and less easy to visualize.
However, a monic polynomial of odd degree must necessarily have a real root. The associated polynomial function in x is continuous, and it approaches $-\infty $ as x approaches $-\infty $ and $+\infty $ as x approaches $+\infty $. By the intermediate value theorem, it must therefore assume the value zero at some real x, which is then a solution of the polynomial equation.
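The intermediate-value argument is effectively constructive: bisection maintains an interval on which the polynomial changes sign. A sketch for the monic odd-degree example x³ − x − 2 (the polynomial and bracket are my own choices):

```python
def p(x):
    return x**3 - x - 2  # monic, odd degree, single real root

# p(-10) < 0 < p(10), so the intermediate value theorem guarantees
# a real root in [-10, 10]; bisection narrows the bracket.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if p(mid) <= 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(round(root, 5))  # approximately 1.52138
```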
Connection to Galois theory
There exist formulas giving the solutions of real or complex polynomials of degree less than or equal to four as a function of their coefficients. Abel showed that it is not possible to find such a formula in general (using only the four arithmetic operations and taking roots) for equations of degree five or higher. Galois theory provides a criterion which allows one to determine whether the solution to a given polynomial equation can be expressed using radicals.
Explicit solution of numerical equations
Approach
The explicit solution of a real or complex equation of degree 1 is trivial. Solving an equation of higher degree n reduces to factoring the associated polynomial, that is, rewriting (E) in the form
$a_{n}(x-z_{1})\dots (x-z_{n})=0$,
where the solutions are then the $z_{1},\dots ,z_{n}$. The problem is then to express the $z_{i}$ in terms of the $a_{i}$.
This approach applies more generally if the coefficients and solutions belong to an integral domain.
Factoring
If an equation P(x) = 0 of degree n has a rational root α, the associated polynomial can be factored to give the form P(X) = (X – α)Q(X) (by dividing P(X) by X – α or by writing P(X) – P(α) as a linear combination of terms of the form Xk – αk, and factoring out X – α). Solving P(x) = 0 thus reduces to solving the degree n – 1 equation Q(x) = 0. See for example the case n = 3.
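The division by X − α can be carried out by synthetic division (Horner's scheme). A sketch (the example polynomial is my own): P(x) = x³ − 6x² + 11x − 6 has the rational root 1, and deflating by it leaves Q(x) = x² − 5x + 6.

```python
def deflate(coeffs, alpha):
    """Divide P(x) by (x - alpha) via synthetic division.
    coeffs lists P's coefficients from highest degree down;
    returns (quotient coefficients, remainder), where remainder = P(alpha)."""
    q = [coeffs[0]]
    for c in coeffs[1:]:
        q.append(c + alpha * q[-1])
    return q[:-1], q[-1]

# P(x) = x**3 - 6x**2 + 11x - 6, root alpha = 1:
quotient, remainder = deflate([1, -6, 11, -6], 1)
print(quotient, remainder)  # [1, -5, 6] 0  -> Q(x) = x**2 - 5x + 6
```

A zero remainder confirms that α really is a root; the degree n − 1 quotient is then solved in turn.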
Elimination of the sub-dominant term
To solve an equation of degree n,
$(\mathrm {E} )\qquad a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{1}x+a_{0}=0$,
a common preliminary step is to eliminate the term of degree n − 1: by setting $x=y-{\frac {a_{n-1}}{n\,a_{n}}}$, equation (E) becomes
$a_{n}y^{n}+b_{n-2}y^{n-2}+\dots +b_{1}y+b_{0}=0$.
Leonhard Euler developed this technique for the case n = 3 but it is also applicable to the case n = 4, for example.
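The substitution can be checked mechanically by expanding P(y + s) with the binomial theorem (a sketch with exact rational arithmetic; the function name and example polynomial are my own). For x³ + 3x² + 1 the shift is x = y − 1, and the y² coefficient indeed vanishes:

```python
from fractions import Fraction
from math import comb

def depress(coeffs):
    """coeffs = [a_n, ..., a_0] of P(x), highest degree first (a_n != 0).
    Substitute x = y - a_{n-1}/(n*a_n) and return the coefficients
    of the resulting polynomial in y, highest degree first."""
    n = len(coeffs) - 1
    s = Fraction(-coeffs[1], n * coeffs[0])        # the shift
    asc = [Fraction(c) for c in reversed(coeffs)]  # a_0, ..., a_n
    b = [Fraction(0)] * (n + 1)
    for k, a in enumerate(asc):                    # expand a_k * (y + s)**k
        for j in range(k + 1):
            b[j] += a * comb(k, j) * s ** (k - j)
    return list(reversed(b))

# P(x) = x**3 + 3x**2 + 1; the shift x = y - 1 removes the y**2 term:
print(depress([1, 3, 0, 1]))  # [1, 0, -3, 3], i.e. y**3 - 3y + 3
```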
Quadratic equations
Main article: Quadratic equation
To solve a quadratic equation of the form $ax^{2}+bx+c=0$ one calculates the discriminant Δ defined by $\Delta =b^{2}-4ac$.
If the polynomial has real coefficients, it has:
• two distinct real roots if $\Delta >0$ ;
• one real double root if $\Delta =0$ ;
• no real root if $\Delta <0$, but two complex conjugate roots.
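The three cases above translate directly into code (a sketch for real coefficients; the function name is my own):

```python
import math
import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 with real coefficients, a != 0."""
    delta = b * b - 4 * a * c
    if delta > 0:
        r = math.sqrt(delta)
        return ((-b + r) / (2 * a), (-b - r) / (2 * a))  # two distinct real roots
    if delta == 0:
        return (-b / (2 * a),)                           # one real double root
    r = cmath.sqrt(delta)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))      # complex conjugate roots

print(solve_quadratic(1, -3, 2))  # (2.0, 1.0)
print(solve_quadratic(1, 0, 1))   # (1j, -1j)
```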
Cubic equations
Main article: Cubic equation
The best-known method for solving cubic equations, by writing roots in terms of radicals, is Cardano's formula.
Quartic equations
Main article: Quartic equation
For detailed discussions of some solution methods see:
• Tschirnhaus transformation (general method, not guaranteed to succeed);
• Bezout method (general method, not guaranteed to succeed);
• Ferrari method (solutions for degree 4);
• Euler method (solutions for degree 4);
• Lagrange method (solutions for degree 4);
• Descartes method (solutions for degree 2 or 4);
A quartic equation $ax^{4}+bx^{3}+cx^{2}+dx+e=0$ with $a\neq 0$ may be reduced to a quadratic equation by a change of variable provided it is either biquadratic (b = d = 0) or quasi-palindromic (e = a, d = b).
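A biquadratic example of this reduction (the particular quartic is my own): x⁴ − 5x² + 4 = 0 becomes the quadratic z² − 5z + 4 = 0 under z = x², whose roots z = 4 and z = 1 give the four roots x = ±2, ±1.

```python
import math

# Biquadratic quartic (b = d = 0): x**4 - 5x**2 + 4 = 0.
# Substituting z = x**2 gives the quadratic z**2 - 5z + 4 = 0.
a, b, c = 1, -5, 4
delta = b * b - 4 * a * c
z1 = (-b + math.sqrt(delta)) / (2 * a)  # 4.0
z2 = (-b - math.sqrt(delta)) / (2 * a)  # 1.0

# Each non-negative z yields the pair x = +/- sqrt(z).
roots = sorted({s * math.sqrt(z) for z in (z1, z2) for s in (1, -1)})
print(roots)  # [-2.0, -1.0, 1.0, 2.0]
```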
Some cubic and quartic equations can be solved using trigonometry or hyperbolic functions.
Higher-degree equations
Main articles: Abel–Ruffini theorem and Galois group
Évariste Galois and Niels Henrik Abel showed independently that in general a polynomial of degree 5 or higher is not solvable using radicals. Some particular equations do have solutions, such as those associated with the cyclotomic polynomials of degrees 5 and 17.
Charles Hermite, on the other hand, showed that polynomials of degree 5 are solvable using elliptic functions.
Otherwise, one may find numerical approximations to the roots using root-finding algorithms, such as Newton's method.
See also
• Algebraic function
• Algebraic number
• Root finding
• Linear equation (degree = 1)
• Quadratic equation (degree = 2)
• Cubic equation (degree = 3)
• Quartic equation (degree = 4)
• Quintic equation (degree = 5)
• Sextic equation (degree = 6)
• Septic equation (degree = 7)
• System of linear equations
• System of polynomial equations
• Linear Diophantine equation
• Linear equation over a ring
• Cramer's theorem (algebraic curves), on the number of points usually sufficient to determine a bivariate n-th degree curve
References
• "Algebraic equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Weisstein, Eric W. "Algebraic Equation". MathWorld.
Differential equation
In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives.[1] In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.
Not to be confused with Difference equation.
The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are soluble by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly.
Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
History
Differential equations came into existence with the invention of calculus by Newton and Leibniz. In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum,[2] Isaac Newton listed three kinds of differential equations:
${\begin{aligned}{\frac {dy}{dx}}&=f(x)\\[4pt]{\frac {dy}{dx}}&=f(x,y)\\[4pt]x_{1}{\frac {\partial y}{\partial x_{1}}}&+x_{2}{\frac {\partial y}{\partial x_{2}}}=y\end{aligned}}$
In all these cases, y is an unknown function of x (or of x1 and x2), and f is a given function.
He solves these examples and others using infinite series and discusses the non-uniqueness of solutions.
Jacob Bernoulli proposed the Bernoulli differential equation in 1695.[3] This is an ordinary differential equation of the form
$y'+P(x)y=Q(x)y^{n}\,$
for which the following year Leibniz obtained solutions by simplifying it.[4]
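The simplification can be recovered with the standard substitution v = y^(1−n) (a sketch of the usual textbook reduction, not Leibniz's original wording):

```latex
% Divide the Bernoulli equation y' + P(x) y = Q(x) y^n by y^n:
%   y^{-n} y' + P(x) y^{1-n} = Q(x).
% With v = y^{1-n} one has v' = (1-n) y^{-n} y', so the equation
% becomes first-order linear in v:
\frac{v'}{1-n} + P(x)\,v = Q(x)
\quad\Longleftrightarrow\quad
v' + (1-n)\,P(x)\,v = (1-n)\,Q(x).
```

The linear equation in v can then be solved by an integrating factor, and y is recovered from v.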
Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange.[5][6][7][8] In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.[9]
The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics.
In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat),[10] in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now a common part of mathematical physics curriculum.
Example
In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time.
In some cases, this differential equation (called an equation of motion) may be solved explicitly.
An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
Types
Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts.
Ordinary differential equations
Main articles: Ordinary differential equation and Linear differential equation
An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.
Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and in many cases one may express their solutions in terms of integrals.
Most ODEs that are encountered in physics are linear. Therefore, most special functions may be defined as solutions of linear differential equations (see Holonomic function).
As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer.
Partial differential equations
Main article: Partial differential equation
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. Stochastic partial differential equations generalize partial differential equations for modeling randomness.
Non-linear differential equations
Main article: Non-linear differential equations
A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function are not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.[11]
Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations.
Equation order and degree
The order of the differential equation is the highest order of derivative of the unknown function that appears in the differential equation. For example, an equation containing only first-order derivatives is a first-order differential equation, an equation containing the second-order derivative is a second-order differential equation, and so on.[12][13]
When it is written as a polynomial equation in the unknown function and its derivatives, the degree of the differential equation is, depending on the context, the polynomial degree in the highest derivative of the unknown function,[14] or its total degree in the unknown function and its derivatives. In particular, a linear differential equation has degree one for both meanings, but the non-linear differential equation $y'+y^{2}=0$ is of degree one for the first meaning but not for the second one.
Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin-film equation, which is a fourth order partial differential equation.
Examples
In the first group of examples u is an unknown function of x, and c and ω are constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing between linear and nonlinear differential equations, and between homogeneous differential equations and heterogeneous ones.
• Heterogeneous first-order linear constant coefficient ordinary differential equation:
${\frac {du}{dx}}=cu+x^{2}.$
• Homogeneous second-order linear ordinary differential equation:
${\frac {d^{2}u}{dx^{2}}}-x{\frac {du}{dx}}+u=0.$
• Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator:
${\frac {d^{2}u}{dx^{2}}}+\omega ^{2}u=0.$
• Heterogeneous first-order nonlinear ordinary differential equation:
${\frac {du}{dx}}=u^{2}+4.$
• Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L:
$L{\frac {d^{2}u}{dx^{2}}}+g\sin u=0.$
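Equations such as these can also be handled by a computer algebra system. As a sketch, SymPy's `dsolve` (one of the commands listed in the Software section) finds the general solution of the harmonic oscillator equation, and `checkodesol` substitutes the result back into the equation:

```python
# Solve the harmonic oscillator u'' + omega^2 u = 0 with SymPy's dsolve,
# then verify the result by substituting it back with checkodesol.
from sympy import Function, symbols, Eq, dsolve, checkodesol

x = symbols('x')
w = symbols('omega', positive=True)  # assume a positive frequency
u = Function('u')

ode = Eq(u(x).diff(x, 2) + w**2 * u(x), 0)
sol = dsolve(ode, u(x))            # general solution with constants C1, C2
ok, residual = checkodesol(ode, sol)
print(sol)                          # e.g. a combination of sin and cos terms
print(ok)                           # True: the solution satisfies the ODE
```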
In the next group of examples, the unknown function u depends on two variables x and t or x and y.
• Homogeneous first-order linear partial differential equation:
${\frac {\partial u}{\partial t}}+t{\frac {\partial u}{\partial x}}=0.$
• Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation:
${\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0.$
• Homogeneous third-order non-linear partial differential equation, the KdV equation:
${\frac {\partial u}{\partial t}}=6u{\frac {\partial u}{\partial x}}-{\frac {\partial ^{3}u}{\partial x^{3}}}.$
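A simple way to check a candidate solution of a partial differential equation is to substitute it into the equation. For instance, the following sketch in SymPy verifies that $u=x^{3}-3xy^{2}$ (a harmonic polynomial, chosen here purely for illustration) satisfies the Laplace equation:

```python
# Verify that u(x, y) = x**3 - 3*x*y**2 is harmonic, i.e. satisfies
# the Laplace equation u_xx + u_yy = 0.
from sympy import symbols, diff, simplify

x, y = symbols('x y')
u = x**3 - 3*x*y**2

laplacian = diff(u, x, 2) + diff(u, y, 2)   # 6*x + (-6*x)
print(simplify(laplacian))                  # 0
```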
Existence of solutions
Solving differential equations is not like solving algebraic equations: not only can the solutions themselves be difficult to describe, but whether solutions exist at all, and whether they are unique, are notable questions of interest in their own right.
For first order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point $(a,b)$ in the xy-plane, define some rectangular region $Z$, such that $Z=[l,m]\times [n,p]$ and $(a,b)$ is in the interior of $Z$. If we are given a differential equation $ {\frac {dy}{dx}}=g(x,y)$ and the condition that $y=b$ when $x=a$, then there is locally a solution to this problem provided that $g(x,y)$ is continuous on $Z$. This solution exists on some interval with its center at $a$, but it may not be unique; if, in addition, $ {\frac {\partial g}{\partial y}}$ is continuous on $Z$, then the solution is unique by the Picard–Lindelöf theorem. (See Ordinary differential equation for other results.)
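The failure of uniqueness can be made concrete. A classical example (used here for illustration) is ${\frac {dy}{dx}}=2{\sqrt {y}}$ with $y(0)=0$: the right-hand side is continuous but not Lipschitz at $y=0$, and both $y=0$ and $y=x^{2}$ solve the problem. A sketch verifying both symbolically:

```python
# Existence without uniqueness: y' = 2*sqrt(y), y(0) = 0 has (at least)
# two solutions, y = 0 and y = x**2 (for x >= 0).  sqrt(y) is continuous
# at y = 0 but not Lipschitz there.
from sympy import symbols, sqrt, simplify, Integer

x = symbols('x', positive=True)   # restrict to x > 0 so sqrt(x**2) = x

for y in (Integer(0), x**2):
    residual = y.diff(x) - 2*sqrt(y)
    assert simplify(residual) == 0   # both candidates satisfy the ODE

# Both solutions also vanish as x -> 0, so both satisfy y(0) = 0.
print("two distinct solutions pass through (0, 0)")
```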
However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order:
$f_{n}(x){\frac {d^{n}y}{dx^{n}}}+\cdots +f_{1}(x){\frac {dy}{dx}}+f_{0}(x)y=g(x)$
such that
${\begin{aligned}y(x_{0})&=y_{0},&y'(x_{0})&=y'_{0},&y''(x_{0})&=y''_{0},&\ldots \end{aligned}}$
If $f_{n}(x)$ is nonzero and $f_{0},f_{1},\ldots ,f_{n}$ and $g$ are continuous on some interval containing $x_{0}$, then a solution $y$ exists and is unique on that interval.[15]
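For a concrete instance, the following sketch uses SymPy's `dsolve`, which accepts the initial conditions of such a problem through its `ics` argument. The initial value problem $y''+y=0$, $y(0)=0$, $y'(0)=1$ has the unique solution $y=\sin x$:

```python
# A second-order linear IVP with continuous coefficients has a unique
# solution; SymPy's dsolve accepts the initial conditions via `ics`.
from sympy import Function, symbols, Eq, dsolve, sin

x = symbols('x')
y = Function('y')

ode = Eq(y(x).diff(x, 2) + y(x), 0)
sol = dsolve(ode, y(x), ics={y(0): 0, y(x).diff(x).subs(x, 0): 1})
print(sol)   # Eq(y(x), sin(x))
```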
Related concepts
• A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times.
• Integral equations may be viewed as the analog to differential equations where instead of the equation involving derivatives, the equation contains integrals.[16]
• An integro-differential equation (IDE) is an equation that combines aspects of a differential equation and an integral equation.
• A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example, the Wiener process in the case of diffusion equations.
• A stochastic partial differential equation (SPDE) is an equation that generalizes SDEs to include space-time noise processes, with applications in quantum field theory and statistical mechanics.
• An ultrametric pseudo-differential equation is an equation which contains p-adic numbers in an ultrametric space. Mathematical models that involve ultrametric pseudo-differential equations use pseudo-differential operators instead of differential operators.
• A differential algebraic equation (DAE) is a differential equation comprising differential and algebraic terms, given in implicit form.
Connection to difference equations
See also: Time scale calculus
The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation.
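A minimal sketch of this idea in Python: the forward Euler method replaces $y'=y$, $y(0)=1$ with the difference equation $y_{n+1}=y_{n}+hy_{n}$, and the discrete value at $x=1$ approaches $e$ as the step size $h$ shrinks (the step counts below are arbitrary choices):

```python
# Approximate the ODE y' = y, y(0) = 1 by the difference equation
# y_{n+1} = y_n + h*y_n (forward Euler).  The discrete solution at
# x = 1 tends to e = 2.71828... as h -> 0.
import math

def euler(h, steps):
    y = 1.0
    for _ in range(steps):
        y += h * y          # one step of the difference equation
    return y

for n in (10, 100, 1000):
    approx = euler(1.0 / n, n)              # approximation of y(1)
    print(n, approx, abs(approx - math.e))  # error shrinks with h
```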
Applications
The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Many differential equations used to model real-life problems are not directly solvable, i.e. they do not have closed-form solutions; instead, their solutions can be approximated using numerical methods.
Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.
The large number of differential equations that have received names in various scientific areas testifies to the importance of the topic. See List of named differential equations.
Software
Some computer algebra systems (CAS) can solve differential equations symbolically. Notable systems and their commands include:
• Maple:[17] dsolve
• Mathematica:[18] DSolve[]
• Maxima:[19] ode2(equation, y, x)
• SageMath:[20] desolve()
• SymPy:[21] sympy.solvers.ode.dsolve(equation)
• Xcas:[22] desolve(y'=k*y,y)
See also
• Exact differential equation
• Functional differential equation
• Initial condition
• Integral equations
• Numerical methods for ordinary differential equations
• Numerical methods for partial differential equations
• Picard–Lindelöf theorem on existence and uniqueness of solutions
• Recurrence relation, also known as 'difference equation'
• Abstract differential equation
• System of differential equations
References
1. Dennis G. Zill (15 March 2012). A First Course in Differential Equations with Modeling Applications. Cengage Learning. ISBN 978-1-285-40110-2.
2. Newton, Isaac. (c.1671). Methodus Fluxionum et Serierum Infinitarum (The Method of Fluxions and Infinite Series), published in 1736 [Opuscula, 1744, Vol. I. p. 66].
3. Bernoulli, Jacob (1695), "Explicationes, Annotationes & Additiones ad ea, quae in Actis sup. de Curva Elastica, Isochrona Paracentrica, & Velaria, hinc inde memorata, & paratim controversa legundur; ubi de Linea mediarum directionum, alliisque novis", Acta Eruditorum
4. Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-56670-0
5. Frasier, Craig (July 1983). "Review of The evolution of dynamics, vibration theory from 1687 to 1742, by John T. Cannon and Sigalia Dostrovsky" (PDF). Bulletin of the American Mathematical Society. New Series. 9 (1).
6. Wheeler, Gerard F.; Crummett, William P. (1987). "The Vibrating String Controversy". Am. J. Phys. 55 (1): 33–37. Bibcode:1987AmJPh..55...33W. doi:10.1119/1.15311.
7. For a special collection of the 9 groundbreaking papers by the three authors, see First Appearance of the wave equation: D'Alembert, Leonhard Euler, Daniel Bernoulli. - the controversy about vibrating strings Archived 2020-02-09 at the Wayback Machine (retrieved 13 Nov 2012). Herman HJ Lynge and Son.
8. For Lagrange's contributions to the acoustic wave equation, consult Acoustics: An Introduction to Its Physical Principles and Applications Allan D. Pierce, Acoustical Soc of America, 1989; page 18. (retrieved 9 Dec 2012)
9. Speiser, David. Discovering the Principles of Mechanics 1600-1800, p. 191 (Basel: Birkhäuser, 2008).
10. Fourier, Joseph (1822). Théorie analytique de la chaleur (in French). Paris: Firmin Didot Père et Fils. OCLC 2688081.
11. Boyce, William E.; DiPrima, Richard C. (1967). Elementary Differential Equations and Boundary Value Problems (4th ed.). John Wiley & Sons. p. 3.
12. Weisstein, Eric W. "Ordinary Differential Equation Order." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/OrdinaryDifferentialEquationOrder.html
13. Order and degree of a differential equation Archived 2016-04-01 at the Wayback Machine, accessed Dec 2015.
14. Elias Loomis (1887). Elements of the Differential and Integral Calculus (revised ed.). Harper & Bros. p. 247. Extract of page 247
15. Zill, Dennis G. (2001). A First Course in Differential Equations (5th ed.). Brooks/Cole. ISBN 0-534-37388-7.
16. Chen, Ricky T. Q.; Rubanova, Yulia; Bettencourt, Jesse; Duvenaud, David (2018-06-19). "Neural Ordinary Differential Equations". arXiv:1806.07366 [cs.LG].
17. "dsolve - Maple Programming Help". www.maplesoft.com. Retrieved 2020-05-09.
18. "DSolve - Wolfram Language Documentation". www.wolfram.com. Retrieved 2020-06-28.
19. Schelter, William F. Gaertner, Boris (ed.). "Differential Equations - Symbolic Solutions". The Computer Algebra Program Maxima - a Tutorial (in Maxima documentation on SourceForge). Archived from the original on 2022-10-04.
20. "Basic Algebra and Calculus — Sage Tutorial v9.0". doc.sagemath.org. Retrieved 2020-05-09.
21. "ODE". SymPy 1.11 documentation. 2022-08-22. Archived from the original on 2022-09-26.
22. "Symbolic algebra and Mathematics with Xcas" (PDF).
Further reading
• Abbott, P.; Neill, H. (2003). Teach Yourself Calculus. pp. 266–277.
• Blanchard, P.; Devaney, R. L.; Hall, G. R. (2006). Differential Equations. Thompson.
• Boyce, W.; DiPrima, R.; Meade, D. (2017). Elementary Differential Equations and Boundary Value Problems. Wiley.
• Coddington, E. A.; Levinson, N. (1955). Theory of Ordinary Differential Equations. McGraw-Hill.
• Ince, E. L. (1956). Ordinary Differential Equations. Dover.
• Johnson, W. (1913). A Treatise on Ordinary and Partial Differential Equations. John Wiley and Sons. In University of Michigan Historical Math Collection
• Polyanin, A. D.; Zaitsev, V. F. (2003). Handbook of Exact Solutions for Ordinary Differential Equations (2nd ed.). Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-297-2.
• Porter, R. I. (1978). "XIX Differential Equations". Further Elementary Analysis.
• Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
• Daniel Zwillinger (12 May 2014). Handbook of Differential Equations. Elsevier Science. ISBN 978-1-4832-6396-0.
External links
• Media related to Differential equations at Wikimedia Commons
• Lectures on Differential Equations MIT Open CourseWare Videos
• Online Notes / Differential Equations Paul Dawkins, Lamar University
• Differential Equations, S.O.S. Mathematics
• Introduction to modeling via differential equations Introduction to modeling by means of differential equations, with critical remarks.
• Mathematical Assistant on Web Symbolic ODE tool, using Maxima
• Exact Solutions of Ordinary Differential Equations
• Collection of ODE and DAE models of physical systems MATLAB models
• Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC
• Khan Academy Video playlist on differential equations Topics covered in a first year course in differential equations.
• MathDiscuss Video playlist on differential equations
Exact differential equation
In mathematics, an exact differential equation or total differential equation is a certain kind of ordinary differential equation which is widely used in physics and engineering.
Definition
Given a simply connected and open subset D of R2 and two functions I and J which are continuous on D, an implicit first-order ordinary differential equation of the form
$I(x,y)\,dx+J(x,y)\,dy=0,$
is called an exact differential equation if there exists a continuously differentiable function F, called the potential function,[1][2] so that
${\frac {\partial F}{\partial x}}=I$
and
${\frac {\partial F}{\partial y}}=J.$
An exact equation may also be presented in the following form:
$I(x,y)+J(x,y)\,y'(x)=0$
where the same constraints on I and J apply for the differential equation to be exact.
The nomenclature of "exact differential equation" refers to the exact differential of a function. For a function $F(x_{0},x_{1},...,x_{n-1},x_{n})$, the exact or total derivative with respect to $x_{0}$ is given by
${\frac {dF}{dx_{0}}}={\frac {\partial F}{\partial x_{0}}}+\sum _{i=1}^{n}{\frac {\partial F}{\partial x_{i}}}{\frac {dx_{i}}{dx_{0}}}.$
Example
The function $F:\mathbb {R} ^{2}\to \mathbb {R} $ given by
$F(x,y)={\frac {1}{2}}(x^{2}+y^{2})+c$
is a potential function for the differential equation
$x\,dx+y\,dy=0.\,$
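This can be checked directly: the partial derivatives of $F$ recover the coefficients of $dx$ and $dy$. A sketch in SymPy:

```python
# Check that F(x, y) = (x**2 + y**2)/2 is a potential function for
# x dx + y dy = 0: its partial derivatives recover I = x and J = y.
from sympy import symbols, diff, Rational

x, y = symbols('x y')
F = Rational(1, 2) * (x**2 + y**2)

assert diff(F, x) == x   # dF/dx = I(x, y)
assert diff(F, y) == y   # dF/dy = J(x, y)
print("F is a potential function")
```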
First order exact differential equations
Identifying first order exact differential equations
Let the functions $ M$, $ N$, $ M_{y}$, and $ N_{x}$, where the subscripts denote the partial derivative with respect to the relative variable, be continuous in the region $ R:\alpha <x<\beta ,\gamma <y<\delta $. Then the differential equation
$M(x,y)+N(x,y){\frac {dy}{dx}}=0$
is exact if and only if
$M_{y}(x,y)=N_{x}(x,y)$
That is, there exists a function $\psi (x,y)$, called a potential function, such that
$\psi _{x}(x,y)=M(x,y){\text{ and }}\psi _{y}(x,y)=N(x,y)$
So, in general:
$M_{y}(x,y)=N_{x}(x,y)\iff {\begin{cases}\exists \psi (x,y)\\\psi _{x}(x,y)=M(x,y)\\\psi _{y}(x,y)=N(x,y)\end{cases}}$
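The criterion is easy to apply mechanically. A sketch in SymPy testing it on two candidate equations (both chosen here purely for illustration):

```python
# Test the exactness criterion M_y == N_x on two candidate equations:
#   (2*x*y + 3) dx + (x**2 - 1) dy = 0   -- exact
#   y dx + (-x) dy = 0                    -- not exact
from sympy import symbols, diff, simplify

x, y = symbols('x y')

def is_exact(M, N):
    """Return True when M_y - N_x simplifies to zero."""
    return simplify(diff(M, y) - diff(N, x)) == 0

print(is_exact(2*x*y + 3, x**2 - 1))  # True  (M_y = N_x = 2*x)
print(is_exact(y, -x))                # False (M_y = 1, N_x = -1)
```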
Proof
The proof has two parts.
First, suppose there is a function $\psi (x,y)$ such that$\psi _{x}(x,y)=M(x,y){\text{ and }}\psi _{y}(x,y)=N(x,y)$
It then follows that$M_{y}(x,y)=\psi _{xy}(x,y){\text{ and }}N_{x}(x,y)=\psi _{yx}(x,y)$
Since $M_{y}$ and $N_{x}$ are continuous, $\psi _{xy}$ and $\psi _{yx}$ are also continuous, which guarantees their equality by the symmetry of second derivatives.
The second part of the proof involves the construction of $\psi (x,y)$ and can also be used as a procedure for solving first order exact differential equations. Suppose that $M_{y}(x,y)=N_{x}(x,y)$ and let there be a function $\psi (x,y)$ for which $\psi _{x}(x,y)=M(x,y){\text{ and }}\psi _{y}(x,y)=N(x,y)$
Begin by integrating the first equation with respect to $x$. In practice, it doesn't matter if you integrate the first or the second equation, so long as the integration is done with respect to the appropriate variable.
${\frac {\partial \psi }{\partial x}}(x,y)=M(x,y)$
$\psi (x,y)=\int {M(x,y)dx}+h(y)$
$\psi (x,y)=Q(x,y)+h(y)$
where $Q(x,y)$ is any differentiable function such that $Q_{x}=M$. The function $h(y)$ plays the role of a constant of integration, but instead of just a constant, it is a function of $y$, since $M$ is a function of both $x$ and $y$ and we are integrating only with respect to $x$.
We now show that it is always possible to find an $h(y)$ such that $\psi _{y}=N$.
$\psi (x,y)=Q(x,y)+h(y)$
Differentiate both sides with respect to $y$.
${\frac {\partial \psi }{\partial y}}(x,y)={\frac {\partial Q}{\partial y}}(x,y)+h'(y)$
Set the result equal to $N$ and solve for $h'(y)$.
$h'(y)=N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)$
In order to determine $h'(y)$ from this equation, the right-hand side must depend only on $y$. This can be proven by showing that its derivative with respect to $x$ is always zero, so differentiate the right-hand side with respect to $x$.
${\frac {\partial N}{\partial x}}(x,y)-{\frac {\partial }{\partial x}}{\frac {\partial Q}{\partial y}}(x,y)={\frac {\partial N}{\partial x}}(x,y)-{\frac {\partial }{\partial y}}{\frac {\partial Q}{\partial x}}(x,y)$
Since $Q_{x}=M$,
${\frac {\partial N}{\partial x}}(x,y)-{\frac {\partial M}{\partial y}}(x,y)$
Now, this is zero based on our initial supposition that $M_{y}(x,y)=N_{x}(x,y)$
Therefore,
$h'(y)=N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)$
$h(y)=\int {\left(N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)\right)dy}$
$\psi (x,y)=Q(x,y)+\int {\left(N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)\right)dy}+C$
And this completes the proof.
Solutions to first order exact differential equations
First order exact differential equations of the form
$M(x,y)+N(x,y){\frac {dy}{dx}}=0$
can be written in terms of the potential function $\psi (x,y)$
${\frac {\partial \psi }{\partial x}}+{\frac {\partial \psi }{\partial y}}{\frac {dy}{dx}}=0$
where
${\begin{cases}\psi _{x}(x,y)=M(x,y)\\\psi _{y}(x,y)=N(x,y)\end{cases}}$
This is equivalent to taking the exact differential of $\psi (x,y)$.
${\frac {\partial \psi }{\partial x}}+{\frac {\partial \psi }{\partial y}}{\frac {dy}{dx}}=0\iff {\frac {d}{dx}}\psi (x,y(x))=0$
The solutions to an exact differential equation are then given by
$\psi (x,y(x))=c$
and the problem reduces to finding $\psi (x,y)$.
This can be done by integrating the two expressions $M(x,y)dx$ and $N(x,y)dy$ and then writing down each term in the resulting expressions only once and summing them up in order to get $\psi (x,y)$.
The reasoning behind this is the following. Since
${\begin{cases}\psi _{x}(x,y)=M(x,y)\\\psi _{y}(x,y)=N(x,y)\end{cases}}$
it follows, by integrating both sides, that
${\begin{cases}\psi (x,y)=\int {M(x,y)dx}+h(y)=Q(x,y)+h(y)\\\psi (x,y)=\int {N(x,y)dy}+g(x)=P(x,y)+g(x)\end{cases}}$
Therefore,
$Q(x,y)+h(y)=P(x,y)+g(x)$
where $Q(x,y)$ and $P(x,y)$ are differentiable functions such that $Q_{x}=M$ and $P_{y}=N$.
In order for both sides to yield the exact same expression, namely $\psi (x,y)$, the term $h(y)$ must be contained within the expression for $P(x,y)$: it cannot be contained within $g(x)$, since $h(y)$ is a function of $y$ alone while $g(x)$ is a function of $x$ alone. By the same reasoning, $g(x)$ must be contained within the expression $Q(x,y)$.
Ergo,
$Q(x,y)=g(x)+f(x,y){\text{ and }}P(x,y)=h(y)+d(x,y)$
for some expressions $f(x,y)$ and $d(x,y)$. Plugging these into the above equation, we find that
$g(x)+f(x,y)+h(y)=h(y)+d(x,y)+g(x)\Rightarrow f(x,y)=d(x,y)$
and so $f(x,y)$ and $d(x,y)$ turn out to be the same function. Therefore,
$Q(x,y)=g(x)+f(x,y){\text{ and }}P(x,y)=h(y)+f(x,y)$
Since we already showed that
${\begin{cases}\psi (x,y)=Q(x,y)+h(y)\\\psi (x,y)=P(x,y)+g(x)\end{cases}}$
it follows that
$\psi (x,y)=g(x)+f(x,y)+h(y)$
So, we can construct $\psi (x,y)$ by computing $\int {M(x,y)dx}$ and $\int {N(x,y)dy}$, taking the terms common to the two resulting expressions (that is, $f(x,y)$), and then adding the terms found uniquely in each of them, $g(x)$ and $h(y)$.
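The whole procedure can be sketched in SymPy on the illustrative exact equation $(2xy+3)+(x^{2}-1)y'=0$ (the equation itself is an assumed example, not taken from the text above):

```python
# Solve the exact equation (2*x*y + 3) + (x**2 - 1) y' = 0 by the
# construction above: integrate M in x, fix up h(y) from N, merge.
from sympy import symbols, integrate, diff

x, y = symbols('x y')
M = 2*x*y + 3        # psi_x
N = x**2 - 1         # psi_y

assert diff(M, y) == diff(N, x)      # exactness check: both equal 2*x

Q = integrate(M, x)                  # x**2*y + 3*x  (h(y) still missing)
h = integrate(N - diff(Q, y), y)     # h'(y) = N - Q_y = -1, so h(y) = -y
psi = Q + h
print(psi)                           # x**2*y + 3*x - y

# The implicit solutions are psi(x, y) = c.
assert diff(psi, x) == M and diff(psi, y) == N
```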
Second order exact differential equations
The concept of exact differential equations can be extended to second order equations.[3] Consider starting with the first-order exact equation:
$I\left(x,y\right)+J\left(x,y\right){dy \over dx}=0$
Since both functions $I\left(x,y\right)$, $J\left(x,y\right)$ are functions of two variables, implicitly differentiating the multivariate function yields
${dI \over dx}+\left({dJ \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}\left(J\left(x,y\right)\right)=0$
Expanding the total derivatives gives that
${dI \over dx}={\partial I \over \partial x}+{\partial I \over \partial y}{dy \over dx}$
and that
${dJ \over dx}={\partial J \over \partial x}+{\partial J \over \partial y}{dy \over dx}$
Combining the $ {dy \over dx}$ terms gives
${\partial I \over \partial x}+{dy \over dx}\left({\partial I \over \partial y}+{\partial J \over \partial x}+{\partial J \over \partial y}{dy \over dx}\right)+{d^{2}y \over dx^{2}}\left(J\left(x,y\right)\right)=0$
If the equation is exact, then $ {\partial J \over \partial x}={\partial I \over \partial y}$. Additionally, the total derivative of $J\left(x,y\right)$ is equal to its implicit ordinary derivative $ {dJ \over dx}$. This leads to the rewritten equation
${\partial I \over \partial x}+{dy \over dx}\left({\partial J \over \partial x}+{dJ \over dx}\right)+{d^{2}y \over dx^{2}}\left(J\left(x,y\right)\right)=0$
Now, let there be some second-order differential equation
$f\left(x,y\right)+g\left(x,y,{dy \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}\left(J\left(x,y\right)\right)=0$
If ${\partial J \over \partial x}={\partial I \over \partial y}$ for exact differential equations, then
$\int \left({\partial I \over \partial y}\right)dy=\int \left({\partial J \over \partial x}\right)dy$
and
$\int \left({\partial I \over \partial y}\right)dy=\int \left({\partial J \over \partial x}\right)dy=I\left(x,y\right)-h\left(x\right)$
where $h\left(x\right)$ is some arbitrary function only of $x$ that was differentiated away to zero upon taking the partial derivative of $I\left(x,y\right)$ with respect to $y$. Although the sign on $h\left(x\right)$ could be positive, it is more intuitive to think of the integral's result as $I\left(x,y\right)$ that is missing some original extra function $h\left(x\right)$ that was partially differentiated to zero.
Next, if
${dI \over dx}={\partial I \over \partial x}+{\partial I \over \partial y}{dy \over dx}$
then the term ${\partial I \over \partial x}$ should be a function only of $x$ and $y$, since partial differentiation with respect to $x$ will hold $y$ constant and not produce any derivatives of $y$. In the second order equation
$f\left(x,y\right)+g\left(x,y,{dy \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}\left(J\left(x,y\right)\right)=0$
only the term $f\left(x,y\right)$ is purely a function of $x$ and $y$. Let ${\partial I \over \partial x}=f\left(x,y\right)$. Then
$f\left(x,y\right)={dI \over dx}-{\partial I \over \partial y}{dy \over dx}$
Since the total derivative of $I\left(x,y\right)$ with respect to $x$ is equivalent to the implicit ordinary derivative ${dI \over dx}$ , then
$f\left(x,y\right)+{\partial I \over \partial y}{dy \over dx}={dI \over dx}={d \over dx}\left(I\left(x,y\right)-h\left(x\right)\right)+{dh\left(x\right) \over dx}$
So,
${dh\left(x\right) \over dx}=f\left(x,y\right)+{\partial I \over \partial y}{dy \over dx}-{d \over dx}\left(I\left(x,y\right)-h\left(x\right)\right)$
and
$h\left(x\right)=\int \left(f\left(x,y\right)+{\partial I \over \partial y}{dy \over dx}-{d \over dx}\left(I\left(x,y\right)-h\left(x\right)\right)\right)dx$
Thus, the second order differential equation
$f\left(x,y\right)+g\left(x,y,{dy \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}\left(J\left(x,y\right)\right)=0$
is exact only if $g\left(x,y,{dy \over dx}\right)={dJ \over dx}+{\partial J \over \partial x}$ and only if the below expression
$\int \left(f\left(x,y\right)+{\partial I \over \partial y}{dy \over dx}-{d \over dx}\left(I\left(x,y\right)-h\left(x\right)\right)\right)dx=\int \left(f\left(x,y\right)-{\partial \left(I\left(x,y\right)-h\left(x\right)\right) \over \partial x}\right)dx$
is a function solely of $x$. Once $h\left(x\right)$ is calculated with its arbitrary constant, it is added to $I\left(x,y\right)-h\left(x\right)$ to make $I\left(x,y\right)$. If the equation is exact, then we can reduce to the first order exact form which is solvable by the usual method for first-order exact equations.
$I\left(x,y\right)+J\left(x,y\right){dy \over dx}=0$
Now, however, in the final implicit solution there will be a $C_{1}x$ term from integration of $h\left(x\right)$ with respect to $x$ twice as well as a $C_{2}$, two arbitrary constants as expected from a second-order equation.
Example
Given the differential equation
$\left(1-x^{2}\right)y''-4xy'-2y=0$
one can always easily check for exactness by examining the $y''$ term. In this case, both the partial and total derivative of $1-x^{2}$ with respect to $x$ are $-2x$, so their sum is $-4x$, which is exactly the term in front of $y'$. With one of the conditions for exactness met, one can calculate that
$\int \left(-2x\right)dy=I\left(x,y\right)-h\left(x\right)=-2xy$
Letting $f\left(x,y\right)=-2y$, then
$\int \left(-2y-2xy'-{d \over dx}\left(-2xy\right)\right)dx=\int \left(-2y-2xy'+2xy'+2y\right)dx=\int \left(0\right)dx=h\left(x\right)$
So, $h\left(x\right)$ is indeed a function only of $x$ and the second order differential equation is exact. Therefore, $h\left(x\right)=C_{1}$ and $I\left(x,y\right)=-2xy+C_{1}$. Reduction to a first-order exact equation yields
$-2xy+C_{1}+\left(1-x^{2}\right)y'=0$
Integrating $I\left(x,y\right)$ with respect to $x$ yields
$-x^{2}y+C_{1}x+i\left(y\right)=0$
where $i\left(y\right)$ is some arbitrary function of $y$. Differentiating with respect to $y$ gives an equation correlating the derivative and the $y'$ term.
$-x^{2}+i'\left(y\right)=1-x^{2}$
So, $i\left(y\right)=y+C_{2}$ and the full implicit solution becomes
$C_{1}x+C_{2}+y-x^{2}y=0$
Solving explicitly for $y$ yields
$y={\frac {C_{1}x+C_{2}}{1-x^{2}}}$
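As a check, the following sketch substitutes this explicit solution back into the original second-order equation:

```python
# Verify that y = (C1*x + C2)/(1 - x**2) solves
# (1 - x**2) y'' - 4*x*y' - 2*y = 0 for arbitrary constants C1, C2.
from sympy import symbols, diff, simplify

x, C1, C2 = symbols('x C1 C2')
ysol = (C1*x + C2) / (1 - x**2)

residual = (1 - x**2)*diff(ysol, x, 2) - 4*x*diff(ysol, x) - 2*ysol
print(simplify(residual))  # 0
```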
Higher order exact differential equations
The concepts of exact differential equations can be extended to any order. Starting with the exact second order equation
${d^{2}y \over dx^{2}}\left(J\left(x,y\right)\right)+{dy \over dx}\left({dJ \over dx}+{\partial J \over \partial x}\right)+f\left(x,y\right)=0$
it was previously shown that the equation is defined such that
$f\left(x,y\right)={dh\left(x\right) \over dx}+{d \over dx}\left(I\left(x,y\right)-h\left(x\right)\right)-{\partial J \over \partial x}{dy \over dx}$
Implicit differentiation of the exact second-order equation $n$ times will yield an $\left(n+2\right)$th order differential equation with new conditions for exactness that can be readily deduced from the form of the equation produced. For example, differentiating the above second-order differential equation once to yield a third-order exact equation gives the following form
${d^{3}y \over dx^{3}}\left(J\left(x,y\right)\right)+{d^{2}y \over dx^{2}}{dJ \over dx}+{d^{2}y \over dx^{2}}\left({dJ \over dx}+{\partial J \over \partial x}\right)+{dy \over dx}\left({d^{2}J \over dx^{2}}+{d \over dx}\left({\partial J \over \partial x}\right)\right)+{df\left(x,y\right) \over dx}=0$
where
${df\left(x,y\right) \over dx}={d^{2}h\left(x\right) \over dx^{2}}+{d^{2} \over dx^{2}}\left(I\left(x,y\right)-h\left(x\right)\right)-{d^{2}y \over dx^{2}}{\partial J \over \partial x}-{dy \over dx}{d \over dx}\left({\partial J \over \partial x}\right)=F\left(x,y,{dy \over dx}\right)$
and where $F\left(x,y,{dy \over dx}\right)$ is a function only of $x,y$ and ${dy \over dx}$. Combining all ${dy \over dx}$ and ${d^{2}y \over dx^{2}}$ terms not coming from $F\left(x,y,{dy \over dx}\right)$ gives
${d^{3}y \over dx^{3}}\left(J\left(x,y\right)\right)+{d^{2}y \over dx^{2}}\left(2{dJ \over dx}+{\partial J \over \partial x}\right)+{dy \over dx}\left({d^{2}J \over dx^{2}}+{d \over dx}\left({\partial J \over \partial x}\right)\right)+F\left(x,y,{dy \over dx}\right)=0$
Thus, the three conditions for exactness for a third-order differential equation are: the ${d^{2}y \over dx^{2}}$ term must be $2{dJ \over dx}+{\partial J \over \partial x}$, the ${dy \over dx}$ term must be ${d^{2}J \over dx^{2}}+{d \over dx}\left({\partial J \over \partial x}\right)$ and
$F\left(x,y,{dy \over dx}\right)-{d^{2} \over dx^{2}}\left(I\left(x,y\right)-h\left(x\right)\right)+{d^{2}y \over dx^{2}}{\partial J \over \partial x}+{dy \over dx}{d \over dx}\left({\partial J \over \partial x}\right)$
must be a function solely of $x$.
Example
Consider the nonlinear third-order differential equation
$yy'''+3y'y''+12x^{2}=0$
If $J\left(x,y\right)=y$, then $y''\left(2{dJ \over dx}+{\partial J \over \partial x}\right)$ is $2y'y''$ and $y'\left({d^{2}J \over dx^{2}}+{d \over dx}\left({\partial J \over \partial x}\right)\right)=y'y''$, which together sum to $3y'y''$, exactly the term that appears in the equation. For the last condition of exactness,
$F\left(x,y,{dy \over dx}\right)-{d^{2} \over dx^{2}}\left(I\left(x,y\right)-h\left(x\right)\right)+{d^{2}y \over dx^{2}}{\partial J \over \partial x}+{dy \over dx}{d \over dx}\left({\partial J \over \partial x}\right)=12x^{2}-0+0+0=12x^{2}$
which is indeed a function only of $x$, so the differential equation is exact. Integrating $12x^{2}$ twice with respect to $x$ yields $h\left(x\right)=x^{4}+C_{1}x+C_{2}=I\left(x,y\right)$. Rewriting the equation as a first-order exact differential equation yields
$x^{4}+C_{1}x+C_{2}+yy'=0$
Integrating $I\left(x,y\right)$ with respect to $x$ (absorbing constant factors into the arbitrary constants) gives ${x^{5} \over 5}+C_{1}x^{2}+C_{2}x+i\left(y\right)=0$. Differentiating with respect to $y$ and equating to the coefficient of $y'$ in the first-order equation gives $i'\left(y\right)=y$, so $i\left(y\right)={y^{2} \over 2}+C_{3}$. The full implicit solution becomes
${x^{5} \over 5}+C_{1}x^{2}+C_{2}x+C_{3}+{y^{2} \over 2}=0$
The explicit solution, then, is
$y=\pm {\sqrt {C_{1}x^{2}+C_{2}x+C_{3}-{\frac {2x^{5}}{5}}}}$
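This can also be spot-checked numerically (a sketch, not from the source, with sample values for the constants). A useful identity here is $yy'''+3y'y''=\left(y^{2}/2\right)'''$, which is why the equation integrates so cleanly; the code below checks the original nonlinear equation directly with finite differences:

```python
# Numerical verification that y = sqrt(C1*x^2 + C2*x + C3 - 2*x^5/5)
# solves y y''' + 3 y' y'' + 12 x^2 = 0.  C1, C2, C3 are sample constants
# chosen so the radicand is positive on the test points.
import math

C1, C2, C3 = 0.7, -0.2, 2.0

def y(x):
    return math.sqrt(C1 * x**2 + C2 * x + C3 - 2 * x**5 / 5)

def residual(x, h=1e-3):
    yp   = (y(x + h) - y(x - h)) / (2 * h)
    ypp  = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    yppp = (y(x + 2*h) - 2 * y(x + h) + 2 * y(x - h) - y(x - 2*h)) / (2 * h**3)
    return y(x) * yppp + 3 * yp * ypp + 12 * x**2

for x in (0.2, 0.5, -0.4):
    assert abs(residual(x)) < 1e-2, (x, residual(x))
print("third-order exact solution verified")
```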
See also
• Exact differential
• Inexact differential equation
Rational difference equation
A rational difference equation is a nonlinear difference equation of the form[1][2][3][4]
$x_{n+1}={\frac {\alpha +\sum _{i=0}^{k}\beta _{i}x_{n-i}}{A+\sum _{i=0}^{k}B_{i}x_{n-i}}}~,$
where the initial conditions $x_{0},x_{-1},\dots ,x_{-k}$ are such that the denominator never vanishes for any n.
First-order rational difference equation
A first-order rational difference equation is a nonlinear difference equation of the form
$w_{t+1}={\frac {aw_{t}+b}{cw_{t}+d}}.$
When $a,b,c,d$ and the initial condition $w_{0}$ are real numbers, this difference equation is called a Riccati difference equation.[3]
Such an equation can be solved by writing $w_{t}$ as a nonlinear transformation of another variable $x_{t}$ which itself evolves linearly. Then standard methods can be used to solve the linear difference equation in $x_{t}$.
Equations of this form arise from the infinite resistor ladder problem.[5][6]
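For a concrete illustration (a sketch, not from the cited sources): an infinite ladder of unit resistors leads to the recursion $R_{n+1}=1+R_{n}/(1+R_{n})=(2R_{n}+1)/(R_{n}+1)$, a Riccati difference equation with $a=2,b=1,c=1,d=1$, whose fixed point is the golden ratio:

```python
# Infinite ladder of 1-ohm resistors: each step adds one series resistor
# and one parallel resistor, giving R_{n+1} = 1 + R_n/(1 + R_n).
# The fixed point solves R^2 - R - 1 = 0, i.e. R = (1 + sqrt(5))/2.
import math

R = 1.0  # truncate the ladder at a single resistor, then extend step by step
for _ in range(50):
    R = 1.0 + R / (1.0 + R)

assert abs(R - (1 + math.sqrt(5)) / 2) < 1e-12
print(R)
```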
Solving a first-order equation
First approach
One approach[7] to developing the transformed variable $x_{t}$, when $ad-bc\neq 0$, is to write
$y_{t+1}=\alpha -{\frac {\beta }{y_{t}}}$
where $\alpha =(a+d)/c$ and $\beta =(ad-bc)/c^{2}$ and where $w_{t}=y_{t}-d/c$.
Substituting $y_{t}=x_{t+1}/x_{t}$ and multiplying through by $x_{t+1}$ then yields the linear recurrence
$x_{t+2}-\alpha x_{t+1}+\beta x_{t}=0.$
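The transformation can be checked numerically (a sketch with illustrative coefficients, not from the source): iterate the nonlinear map directly, solve the linear recurrence for $x_{t}$, and recover $w_{t}=x_{t+1}/x_{t}-d/c$:

```python
# Check the first approach with sample coefficients a, b, c, d (ad - bc != 0):
# the sequence x_t obeys x_{t+2} = alpha*x_{t+1} - beta*x_t, and
# w_t = x_{t+1}/x_t - d/c reproduces the nonlinear iteration.

a, b, c, d = 1.0, 2.0, 1.0, 3.0           # ad - bc = 1 != 0
alpha = (a + d) / c
beta = (a * d - b * c) / c**2

w0 = 1.0
x = [1.0, w0 + d / c]                      # x_0 = 1, x_1 = y_0 * x_0
for t in range(30):
    x.append(alpha * x[-1] - beta * x[-2])

w = w0
for t in range(30):
    w_linear = x[t + 1] / x[t] - d / c     # recover w_t from the linear solution
    assert abs(w - w_linear) < 1e-9, t
    w = (a * w + b) / (c * w + d)          # direct nonlinear iteration
print("first-approach transformation verified")
```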
Second approach
This approach[8] gives a first-order difference equation for $x_{t}$ instead of a second-order one, for the case in which $(d-a)^{2}+4bc$ is non-negative. Write $x_{t}=1/(\eta +w_{t})$ implying $w_{t}=(1-\eta x_{t})/x_{t}$, where $\eta $ is given by $\eta =(d-a+r)/2c$ and where $r={\sqrt {(d-a)^{2}+4bc}}$. Then it can be shown that $x_{t}$ evolves according to
$x_{t+1}=\left({\frac {d-\eta c}{\eta c+a}}\right)\!x_{t}+{\frac {c}{\eta c+a}}.$
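Again as an illustrative sketch (sample coefficients, not from the source), the first-order linear recurrence can be checked against direct iteration of the nonlinear map:

```python
# Check the second approach for a case with (d - a)^2 + 4bc >= 0:
# x_t = 1/(eta + w_t) evolves by the affine map given above.
import math

a, b, c, d = 1.0, 2.0, 1.0, 3.0
r = math.sqrt((d - a)**2 + 4 * b * c)      # = sqrt(12) for these values
eta = (d - a + r) / (2 * c)

w = 1.0
x = 1.0 / (eta + w)
for t in range(30):
    w = (a * w + b) / (c * w + d)                              # nonlinear step
    x = (d - eta * c) / (eta * c + a) * x + c / (eta * c + a)  # linear step
    assert abs(x - 1.0 / (eta + w)) < 1e-12, t
print("second-approach transformation verified")
```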
Third approach
The equation
$w_{t+1}={\frac {aw_{t}+b}{cw_{t}+d}}$
can also be solved by treating it as a special case of the more general matrix equation
$X_{t+1}=-(E+BX_{t})(C+AX_{t})^{-1},$
where all of A, B, C, E, and X are n × n matrices (in this case n = 1); the solution of this is[9]
$X_{t}=N_{t}D_{t}^{-1}$
where
${\begin{pmatrix}N_{t}\\D_{t}\end{pmatrix}}={\begin{pmatrix}-B&-E\\A&C\end{pmatrix}}^{t}{\begin{pmatrix}X_{0}\\I\end{pmatrix}}.$
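In the scalar case the block matrix is just the coefficient matrix of the Möbius map. A sketch (with sample coefficients, not from the source): taking $B=-a$, $E=-b$, $A=c$, $C=d$ makes $X_{t+1}=(aX_{t}+b)/(cX_{t}+d)$, the block matrix becomes $\left({\begin{smallmatrix}a&b\\c&d\end{smallmatrix}}\right)$, and $w_{t}=N_{t}/D_{t}$ can be computed by a matrix power:

```python
# Scalar (n = 1) instance of the matrix solution: w_t = N_t / D_t with
# (N_t, D_t) = [[a, b], [c, d]]^t (w_0, 1), checked against direct iteration.

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(M, t):
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(t):
        R = mat_mul(R, M)
    return R

a, b, c, d = 1.0, 2.0, 1.0, 3.0
w0 = 1.0
w = w0
for t in range(1, 15):
    w = (a * w + b) / (c * w + d)           # direct iteration
    P = mat_pow([[a, b], [c, d]], t)
    N = P[0][0] * w0 + P[0][1]
    D = P[1][0] * w0 + P[1][1]
    assert abs(w - N / D) < 1e-9, t
print("matrix-power solution verified")
```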
Application
It was shown by Balvers and Mitchell[10] that a dynamic matrix Riccati equation of the form
$H_{t-1}=K+A'H_{t}A-A'H_{t}C(C'H_{t}C)^{-1}C'H_{t}A,$
which can arise in some discrete-time optimal control problems, can be solved using the second approach above if the matrix C has only one more row than column.
References
1. Skellam, J. G. (1951). "Random dispersal in theoretical populations", Biometrika 38, 196–218, eqns. (41), (42)
2. Camouzis, Elias; Ladas, G. (November 16, 2007). Dynamics of Third-Order Rational Difference Equations with Open Problems and Conjectures. CRC Press. ISBN 9781584887669 – via Google Books.
3. Kulenovic, Mustafa R. S.; Ladas, G. (July 30, 2001). Dynamics of Second Order Rational Difference Equations: With Open Problems and Conjectures. CRC Press. ISBN 9781420035384 – via Google Books.
4. Newth, Gerald, "World order from chaotic beginnings", Mathematical Gazette 88, March 2004, 39-45 gives a trigonometric approach.
5. "Equivalent resistance in ladder circuit". Stack Exchange. Retrieved 21 February 2022.
6. "Thinking Recursively: How to Crack the Infinite Resistor Ladder Puzzle!". Youtube. Retrieved 21 February 2022.
7. Brand, Louis, "A sequence defined by a difference equation," American Mathematical Monthly 62, September 1955, 489–492. online
8. Mitchell, Douglas W., "An analytic Riccati solution for two-target discrete-time control," Journal of Economic Dynamics and Control 24, 2000, 615–622.
9. Martin, C. F., and Ammar, G., "The geometry of the matrix Riccati equation and associated eigenvalue method," in Bittani, Laub, and Willems (eds.), The Riccati Equation, Springer-Verlag, 1991.
10. Balvers, Ronald J., and Mitchell, Douglas W., "Reducing the dimensionality of linear quadratic control problems," Journal of Economic Dynamics and Control 31, 2007, 141–159.
Further reading
• Simons, Stuart, "A non-linear difference equation," Mathematical Gazette 93, November 2009, 500–504.
Solvable Lie algebra
In mathematics, a Lie algebra ${\mathfrak {g}}$ is solvable if its derived series terminates in the zero subalgebra. The derived Lie algebra of the Lie algebra ${\mathfrak {g}}$ is the subalgebra of ${\mathfrak {g}}$, denoted
$[{\mathfrak {g}},{\mathfrak {g}}]$
that consists of all linear combinations of Lie brackets of pairs of elements of ${\mathfrak {g}}$. The derived series is the sequence of subalgebras
${\mathfrak {g}}\geq [{\mathfrak {g}},{\mathfrak {g}}]\geq [[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]]\geq [[[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]],[[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]]]\geq ...$
If the derived series eventually arrives at the zero subalgebra, then the Lie algebra is called solvable.[1] The derived series for Lie algebras is analogous to the derived series for commutator subgroups in group theory, and solvable Lie algebras are analogs of solvable groups.
Any nilpotent Lie algebra is a fortiori solvable but the converse is not true. The solvable Lie algebras and the semisimple Lie algebras form two large and generally complementary classes, as is shown by the Levi decomposition. The solvable Lie algebras are precisely those that can be obtained from semidirect products, starting from 0 and adding one dimension at a time.[2]
A maximal solvable subalgebra is called a Borel subalgebra. The largest solvable ideal of a Lie algebra is called the radical.
Characterizations
Let ${\mathfrak {g}}$ be a finite-dimensional Lie algebra over a field of characteristic 0. The following are equivalent.
• (i) ${\mathfrak {g}}$ is solvable.
• (ii) ${\rm {ad}}({\mathfrak {g}})$, the adjoint representation of ${\mathfrak {g}}$, is solvable.
• (iii) There is a finite sequence of ideals ${\mathfrak {a}}_{i}$ of ${\mathfrak {g}}$:
${\mathfrak {g}}={\mathfrak {a}}_{0}\supset {\mathfrak {a}}_{1}\supset ...{\mathfrak {a}}_{r}=0,\quad [{\mathfrak {a}}_{i},{\mathfrak {a}}_{i}]\subset {\mathfrak {a}}_{i+1}\,\,\forall i.$
• (iv) $[{\mathfrak {g}},{\mathfrak {g}}]$ is nilpotent.[3]
• (v) For ${\mathfrak {g}}$ $n$-dimensional, there is a finite sequence of subalgebras ${\mathfrak {a}}_{i}$ of ${\mathfrak {g}}$:
${\mathfrak {g}}={\mathfrak {a}}_{0}\supset {\mathfrak {a}}_{1}\supset ...{\mathfrak {a}}_{n}=0,\quad \operatorname {dim} {\mathfrak {a}}_{i}/{\mathfrak {a}}_{i+1}=1\,\,\forall i,$
with each ${\mathfrak {a}}_{i+1}$ an ideal in ${\mathfrak {a}}_{i}$.[4] A sequence of this type is called an elementary sequence.
• (vi) There is a finite sequence of subalgebras ${\mathfrak {g}}_{i}$ of ${\mathfrak {g}}$,
${\mathfrak {g}}={\mathfrak {g}}_{0}\supset {\mathfrak {g}}_{1}\supset ...{\mathfrak {g}}_{r}=0,$
such that ${\mathfrak {g}}_{i+1}$ is an ideal in ${\mathfrak {g}}_{i}$ and ${\mathfrak {g}}_{i}/{\mathfrak {g}}_{i+1}$ is abelian.[5]
• (vii) The Killing form $B$ of ${\mathfrak {g}}$ satisfies $B(X,Y)=0$ for all X in ${\mathfrak {g}}$ and Y in $[{\mathfrak {g}},{\mathfrak {g}}]$.[6] This is Cartan's criterion for solvability.
Properties
Lie's Theorem states that if $V$ is a finite-dimensional vector space over an algebraically closed field of characteristic zero, and ${\mathfrak {g}}$ is a solvable Lie algebra, and if $\pi $ is a representation of ${\mathfrak {g}}$ over $V$, then there exists a simultaneous eigenvector $v\in V$ of the endomorphisms $\pi (X)$ for all elements $X\in {\mathfrak {g}}$.[7]
• Every Lie subalgebra and quotient of a solvable Lie algebra are solvable.[8]
• Given a Lie algebra ${\mathfrak {g}}$ and an ideal ${\mathfrak {h}}$ in it,
${\mathfrak {g}}$ is solvable if and only if both ${\mathfrak {h}}$ and ${\mathfrak {g}}/{\mathfrak {h}}$ are solvable.[8][2]
The analogous statement is true for nilpotent Lie algebras provided ${\mathfrak {h}}$ is contained in the center. Thus, an extension of a solvable algebra by a solvable algebra is solvable, while a central extension of a nilpotent algebra by a nilpotent algebra is nilpotent.
• A solvable nonzero Lie algebra has a nonzero abelian ideal, the last nonzero term in the derived series.[2]
• If ${\mathfrak {a}},{\mathfrak {b}}\subset {\mathfrak {g}}$ are solvable ideals, then so is ${\mathfrak {a}}+{\mathfrak {b}}$.[1] Consequently, if ${\mathfrak {g}}$ is finite-dimensional, then there is a unique solvable ideal ${\mathfrak {r}}\subset {\mathfrak {g}}$ containing all solvable ideals in ${\mathfrak {g}}$. This ideal is the radical of ${\mathfrak {g}}$.[2]
• A solvable Lie algebra ${\mathfrak {g}}$ has a unique largest nilpotent ideal ${\mathfrak {n}}$, called the nilradical, the set of all $X\in {\mathfrak {g}}$ such that ${\rm {ad}}_{X}$ is nilpotent. If D is any derivation of ${\mathfrak {g}}$, then $D({\mathfrak {g}})\subset {\mathfrak {n}}$.[9]
Completely solvable Lie algebras
A Lie algebra ${\mathfrak {g}}$ is called completely solvable or split solvable if it has an elementary sequence (in the sense of condition (v) above) of ideals in ${\mathfrak {g}}$ from $0$ to ${\mathfrak {g}}$. A finite-dimensional nilpotent Lie algebra is completely solvable, and a completely solvable Lie algebra is solvable. Over an algebraically closed field a solvable Lie algebra is completely solvable, but the $3$-dimensional real Lie algebra of the group of Euclidean isometries of the plane is solvable but not completely solvable.
A solvable Lie algebra ${\mathfrak {g}}$ is split solvable if and only if the eigenvalues of ${\rm {ad}}_{X}$ lie in the ground field $k$ for all $X$ in ${\mathfrak {g}}$.[2]
Examples
Abelian Lie algebras
Every abelian Lie algebra ${\mathfrak {a}}$ is solvable by definition, since its commutator $[{\mathfrak {a}},{\mathfrak {a}}]=0$. This includes the Lie algebra of diagonal matrices in ${\mathfrak {gl}}(n)$, which are of the form
$\left\{{\begin{bmatrix}*&0&0\\0&*&0\\0&0&*\end{bmatrix}}\right\}$
for $n=3$. The Lie algebra structure on a vector space $V$ given by the trivial bracket $[m,n]=0$ for any two elements $m,n\in V$ gives another example.
Nilpotent Lie algebras
Another class of examples comes from nilpotent Lie algebras, since every nilpotent Lie algebra is solvable. One example is the Lie algebra of strictly upper triangular matrices, which for $n=3$ is the class of matrices of the form
$\left\{{\begin{bmatrix}0&*&*\\0&0&*\\0&0&0\end{bmatrix}}\right\}$
In addition, the Lie algebra of upper triangular matrices in ${\mathfrak {gl}}(n)$ forms a solvable Lie algebra. This includes matrices of the form
$\left\{{\begin{bmatrix}*&*&*\\0&*&*\\0&0&*\end{bmatrix}}\right\}$
and is denoted ${\mathfrak {b}}_{k}$.
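The derived series of ${\mathfrak {b}}_{3}$ can be computed directly (a sketch, not from the source): starting from the standard basis of elementary matrices, repeatedly span all brackets $[X,Y]=XY-YX$ and record the dimensions, which should be $6,3,1,0$ ($ {\mathfrak {b}}_{3}\supset {\mathfrak {n}}_{3}\supset \operatorname {span} \{E_{13}\}\supset 0$):

```python
# Derived series of b_3, the 3x3 upper triangular matrices, over the rationals.
from fractions import Fraction

N = 3

def unit(i, j):
    # elementary matrix E_{ij}, flattened to a length-9 tuple
    return tuple(Fraction(int(r == i and c == j))
                 for r in range(N) for c in range(N))

def mul(A, B):
    return tuple(sum(A[i*N + k] * B[k*N + j] for k in range(N))
                 for i in range(N) for j in range(N))

def bracket(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return tuple(x - y for x, y in zip(AB, BA))

def row_reduce(vectors):
    """Return a basis (nonzero reduced rows) for the span of the vectors."""
    basis, pivot_cols = [], []
    for v in vectors:
        row = list(v)
        for b, p in zip(basis, pivot_cols):
            if row[p]:
                f = row[p] / b[p]
                row = [x - f * y for x, y in zip(row, b)]
        for p, x in enumerate(row):
            if x:
                basis.append(row)
                pivot_cols.append(p)
                break
    return [tuple(b) for b in basis]

# basis of b_3: diagonal and strictly upper elementary matrices
g = [unit(i, j) for i in range(N) for j in range(i, N)]
dims = [len(g)]
while g:
    g = row_reduce(bracket(X, Y) for X in g for Y in g)
    dims.append(len(g))

print(dims)  # [6, 3, 1, 0] -- the series terminates, so b_3 is solvable
assert dims == [6, 3, 1, 0]
```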
Solvable but not split-solvable
Let ${\mathfrak {g}}$ be the set of matrices on the form
$X=\left({\begin{matrix}0&\theta &x\\-\theta &0&y\\0&0&0\end{matrix}}\right),\quad \theta ,x,y\in \mathbb {R} .$
Then ${\mathfrak {g}}$ is solvable, but not split solvable.[2] It is isomorphic with the Lie algebra of the group of translations and rotations in the plane.
Non-example
A semisimple Lie algebra ${\mathfrak {l}}$ is never solvable, since its radical ${\text{Rad}}({\mathfrak {l}})$, which is the largest solvable ideal in ${\mathfrak {l}}$, is trivial.[1] (p. 11)
Solvable Lie groups
Because the term "solvable" is also used for solvable groups in group theory, there are several possible definitions of solvable Lie group. For a Lie group $G$, the candidate conditions are:
• termination of the usual derived series of the group $G$ (as an abstract group);
• termination of the closures of the derived series;
• having a solvable Lie algebra.
See also
• Cartan's criterion
• Killing form
• Lie-Kolchin theorem
• Solvmanifold
• Dixmier mapping
Notes
1. Humphreys 1972
2. Knapp 2002
3. Knapp 2002 Proposition 1.39.
4. Knapp 2002 Proposition 1.23.
5. Fulton & Harris 1991
6. Knapp 2002 Proposition 1.46.
7. Knapp 2002 Theorem 1.25.
8. Serre, Ch. I, § 6, Definition 2.
9. Knapp 2002 Proposition 1.40.
External links
• EoM article Lie algebra, solvable
• EoM article Lie group, solvable
References
• Fulton, W.; Harris, J. (1991). Representation theory. A first course. Graduate Texts in Mathematics. Vol. 129. New York: Springer-Verlag. ISBN 978-0-387-97527-6. MR 1153249.
• Humphreys, James E. (1972). Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Vol. 9. New York: Springer-Verlag. ISBN 0-387-90053-5.
• Knapp, A. W. (2002). Lie groups beyond an introduction. Progress in Mathematics. Vol. 120 (2nd ed.). Boston·Basel·Berlin: Birkhäuser. ISBN 0-8176-4259-5..
• Jean-Pierre Serre: Complex Semisimple Lie Algebras, Springer, Berlin, 2001. ISBN 3-5406-7827-1
Solvable group
In mathematics, more specifically in the field of group theory, a solvable group or soluble group is a group that can be constructed from abelian groups using extensions. Equivalently, a solvable group is a group whose derived series terminates in the trivial subgroup.
Motivation
Historically, the word "solvable" arose from Galois theory and the proof of the unsolvability of the general quintic equation. Specifically, a polynomial equation is solvable in radicals if and only if the corresponding Galois group is solvable[1] (note that this theorem holds only in characteristic 0). This means that associated to a polynomial $f\in F[x]$ there is a tower of field extensions
$F=F_{0}\subseteq F_{1}\subseteq F_{2}\subseteq \cdots \subseteq F_{m}=K$
such that
1. $F_{i}=F_{i-1}[\alpha _{i}]$ where $\alpha _{i}^{m_{i}}\in F_{i-1}$, so $\alpha _{i}$ is a solution to the equation $x^{m_{i}}-a$ where $a\in F_{i-1}$
2. $F_{m}$ contains a splitting field for $f(x)$
Example
For example, the smallest Galois field extension of $\mathbb {Q} $ containing the element
$a={\sqrt[{5}]{{\sqrt {2}}+{\sqrt {3}}}}$
gives a solvable group. It has associated field extensions
$\mathbb {Q} \subseteq \mathbb {Q} ({\sqrt {2}})\subseteq \mathbb {Q} ({\sqrt {2}},{\sqrt {3}})\subseteq \mathbb {Q} ({\sqrt {2}},{\sqrt {3}})\left(e^{2i\pi /5}\right)\subseteq \mathbb {Q} ({\sqrt {2}},{\sqrt {3}})\left(e^{2i\pi /5},a\right)$
giving a solvable Galois group containing the following composition factors:
• $\mathrm {Aut} \left(\mathbb {Q({\sqrt {2}})} \right/\mathbb {Q} )\cong \mathbb {Z} /2$ with group action $f\left(\pm {\sqrt {2}}\right)=\mp {\sqrt {2}},\ f^{2}=1$, and minimal polynomial $x^{2}-2$.
• $\mathrm {Aut} \left(\mathbb {Q({\sqrt {2}},{\sqrt {3}})} \right/\mathbb {Q({\sqrt {2}})} )\cong \mathbb {Z} /2$ with group action $g\left(\pm {\sqrt {3}}\right)=\mp {\sqrt {3}},\ g^{2}=1$, and minimal polynomial $x^{2}-3$.
• $\mathrm {Aut} \left(\mathbb {Q} ({\sqrt {2}},{\sqrt {3}})\left(e^{2i\pi /5}\right)/\mathbb {Q} ({\sqrt {2}},{\sqrt {3}})\right)\cong \mathbb {Z} /4$ with group action generated by $h\left(e^{2im\pi /5}\right)=e^{4im\pi /5}$, so that $h^{n}\left(e^{2im\pi /5}\right)=e^{2^{n}\cdot 2im\pi /5},\ 0\leq n\leq 3,\ h^{4}=1$, and minimal polynomial $x^{4}+x^{3}+x^{2}+x+1=(x^{5}-1)/(x-1)$, whose roots are the 5th roots of unity excluding $1$.
• $\mathrm {Aut} \left(\mathbb {Q} ({\sqrt {2}},{\sqrt {3}})\left(e^{2i\pi /5},a\right)/\mathbb {Q} ({\sqrt {2}},{\sqrt {3}})\left(e^{2i\pi /5}\right)\right)\cong \mathbb {Z} /5$ with group action $j^{l}(a)=e^{2li\pi /5}a,\ j^{5}=1$, and minimal polynomial $x^{5}-\left({\sqrt {2}}+{\sqrt {3}}\right)$.
Here $1$ denotes the identity automorphism. Each of the defining group actions changes a single extension while keeping all of the other extensions fixed. For example, an element of this group is the group action $fgh^{3}j^{4}$. A general element of the group can be written as $f^{a}g^{b}h^{n}j^{l},\ 0\leq a,b\leq 1,\ 0\leq n\leq 3,\ 0\leq l\leq 4$, for a total of 80 elements.
Note that this group is not abelian. For example:
$hj(a)=h(e^{2i\pi /5}a)=e^{4i\pi /5}a$
$jh(a)=j(a)=e^{2i\pi /5}a$
In fact, in this group, $jh=hj^{3}$. The solvable group is isomorphic to $(\mathbb {C} _{5}\rtimes _{\varphi }\mathbb {C} _{4})\times (\mathbb {C} _{2}\times \mathbb {C} _{2}),\ \mathrm {where} \ \varphi _{h}(j)=hjh^{-1}=j^{2}$, defined using the semidirect product and direct product of cyclic groups. In this group, $\mathbb {C} _{4}$ is not a normal subgroup.
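The semidirect-product factor $\mathbb {C} _{5}\rtimes \mathbb {C} _{4}$ generated by $h$ and $j$ can be modeled concretely (a sketch, not from the source): write each element $h^{n}j^{l}$ as a pair $(n\bmod 4,\ l\bmod 5)$ and derive the multiplication rule from $hjh^{-1}=j^{2}$, equivalently $h^{-1}jh=j^{3}$ since $2\cdot 3\equiv 1{\pmod {5}}$:

```python
# Model C5 ⋊ C4: element h^n j^l is the pair (n, l).  Moving h^{n2} left
# past j^{l1} uses h^{-n2} j h^{n2} = j^{3^{n2}}, giving the product rule
# h^{n1} j^{l1} · h^{n2} j^{l2} = h^{n1+n2} j^{l1*3^{n2} + l2}.
from itertools import product

def mul(u, v):
    (n1, l1), (n2, l2) = u, v
    return ((n1 + n2) % 4, (l1 * pow(3, n2, 5) + l2) % 5)

G = list(product(range(4), range(5)))
assert len(G) == 20
# brute-force associativity check confirms the rule defines a group
assert all(mul(mul(u, v), w) == mul(u, mul(v, w))
           for u in G for v in G for w in G)

h, j = (1, 0), (0, 1)
assert mul(j, h) == mul(h, mul(j, mul(j, j)))   # j h = h j^3
assert mul(h, j) != mul(j, h)                   # non-abelian
# with the central C2 x C2 factor from f and g, the full Galois group
# has 2 * 2 * 20 = 80 elements
print("C5 semidirect C4 relations verified")
```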
Definition
A group G is called solvable if it has a subnormal series whose factor groups (quotient groups) are all abelian, that is, if there are subgroups 1 = G0 < G1 < ⋅⋅⋅ < Gk = G such that Gj−1 is normal in Gj, and Gj /Gj−1 is an abelian group, for j = 1, 2, …, k.
Or equivalently, if its derived series, the descending normal series
$G\triangleright G^{(1)}\triangleright G^{(2)}\triangleright \cdots ,$
where every subgroup is the commutator subgroup of the previous one, eventually reaches the trivial subgroup of G. These two definitions are equivalent, since for every group H and every normal subgroup N of H, the quotient H/N is abelian if and only if N includes the commutator subgroup of H. The least n such that G(n) = 1 is called the derived length of the solvable group G.
For finite groups, an equivalent definition is that a solvable group is a group with a composition series all of whose factors are cyclic groups of prime order. This is equivalent because a finite group has finite composition length, and every simple abelian group is cyclic of prime order. The Jordan–Hölder theorem guarantees that if one composition series has this property, then all composition series will have this property as well. For the Galois group of a polynomial, these cyclic groups correspond to nth roots (radicals) over some field. The equivalence does not necessarily hold for infinite groups: for example, since every nontrivial subgroup of the group Z of integers under addition is isomorphic to Z itself, it has no composition series, but the normal series {0, Z}, with its only factor group isomorphic to Z, proves that it is in fact solvable.
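The derived-series definition is easy to check by direct computation for small permutation groups (an illustrative sketch, not from the source): for $S_{4}$ the series is $S_{4}\triangleright A_{4}\triangleright V_{4}\triangleright 1$, with orders $24,12,4,1$:

```python
# Derived series of S4.  Permutations are tuples p with p[i] the image of i;
# each derived subgroup is generated by all commutators a b a^{-1} b^{-1}.
from itertools import permutations

def compose(p, q):            # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def closure(gens, n):
    e = tuple(range(n))
    group, frontier = {e}, [e]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(g, s)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

def derived_subgroup(G, n):
    comms = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a in G for b in G}
    return closure(comms, n)

n = 4
G = set(permutations(range(n)))          # S4, order 24
sizes = [len(G)]
while len(G) > 1:
    G = derived_subgroup(G, n)
    sizes.append(len(G))

print(sizes)  # [24, 12, 4, 1]: the series reaches the trivial group,
assert sizes == [24, 12, 4, 1]           # so S4 is solvable
```

Running the same computation for $S_{5}$ stalls at the 60-element subgroup $A_{5}$, matching the non-example discussed below.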
Examples
Abelian groups
The basic examples of solvable groups are abelian groups. They are trivially solvable, since a subnormal series is formed by just the group itself and the trivial group. But non-abelian groups may or may not be solvable.
Nilpotent groups
More generally, all nilpotent groups are solvable. In particular, finite p-groups are solvable, as all finite p-groups are nilpotent.
Quaternion groups
In particular, the quaternion group is a solvable group given by the group extension
$1\to \mathbb {Z} /2\to Q\to \mathbb {Z} /2\times \mathbb {Z} /2\to 1$
where the kernel $\mathbb {Z} /2$ is the subgroup generated by $-1$.
Group extensions
Group extensions form the prototypical examples of solvable groups. That is, if $G$ and $G'$ are solvable groups, then any extension
$1\to G\to G''\to G'\to 1$
defines a solvable group $G''$. In fact, all solvable groups can be formed from such group extensions.
Non-abelian group which is non-nilpotent
A small example of a solvable, non-nilpotent group is the symmetric group S3. In fact, as the smallest simple non-abelian group is A5 (the alternating group of degree 5), it follows that every group with order less than 60 is solvable.
Finite groups of odd order
The Feit–Thompson theorem states that every finite group of odd order is solvable. In particular this implies that if a finite group is simple, it is either cyclic of prime order or of even order.
Non-example
The group S5 is not solvable — it has a composition series {E, A5, S5} (and the Jordan–Hölder theorem states that every other composition series is equivalent to that one), giving factor groups isomorphic to A5 and C2; and A5 is not abelian. Generalizing this argument, coupled with the fact that An is a normal, maximal, non-abelian simple subgroup of Sn for n > 4, we see that Sn is not solvable for n > 4. This is a key step in the proof that for every n > 4 there are polynomials of degree n which are not solvable by radicals (Abel–Ruffini theorem). This property is also used in complexity theory in the proof of Barrington's theorem.
Subgroups of GL2
Consider the subgroups
$B=\left\{{\begin{bmatrix}*&*\\0&*\end{bmatrix}}\right\}{\text{, }}U=\left\{{\begin{bmatrix}1&*\\0&1\end{bmatrix}}\right\}$ of $GL_{2}(\mathbb {F} )$
for some field $\mathbb {F} $. Then, the group quotient $B/U$ can be found by taking arbitrary elements in $B,U$, multiplying them together, and figuring out what structure this gives. So
${\begin{bmatrix}a&b\\0&c\end{bmatrix}}\cdot {\begin{bmatrix}1&d\\0&1\end{bmatrix}}={\begin{bmatrix}a&ad+b\\0&c\end{bmatrix}}$
Note that the determinant condition on $GL_{2}$ implies $ac\neq 0$, hence the diagonal matrices (those with $b=0$) form a subgroup of $B$ isomorphic to $\mathbb {F} ^{\times }\times \mathbb {F} ^{\times }$. For fixed $a,b$, the equation $ad+b=0$ has the solution $d=-b/a$, which lies in $\mathbb {F} $. Since we can take any matrix in $B$ and multiply it by the matrix
${\begin{bmatrix}1&d\\0&1\end{bmatrix}}$
with $d=-b/a$, we can get a diagonal matrix in $B$. This shows the quotient group $B/U\cong \mathbb {F} ^{\times }\times \mathbb {F} ^{\times }$.
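This isomorphism can be verified concretely over a small finite field (a sketch, not from the source): over $\mathbb {F} _{5}$, $|B|=p(p-1)^{2}$, $|U|=p$, and every coset $MU$ contains exactly one diagonal matrix, whose diagonal entries are $(a,c)$:

```python
# Verify B/U over F_5: each coset of U in B is represented by a unique
# diagonal matrix, so B/U is parametrized by (a, c) in F* x F*.
from itertools import product

p = 5

def mul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

B = [((a, b), (0, c)) for a, b, c in product(range(p), repeat=3)
     if a % p and c % p]                     # invertible upper triangular
U = [((1, d), (0, 1)) for d in range(p)]     # unipotent subgroup

assert len(B) == p * (p - 1)**2 and len(U) == p

for M in B:
    # M * u = [[a, a*d + b], [0, c]]: exactly one d kills the corner entry
    diag = [X for u in U for X in [mul(M, u)] if X[0][1] == 0]
    assert len(diag) == 1
    assert (diag[0][0][0], diag[0][1][1]) == (M[0][0], M[1][1])
print("B/U isomorphic to F* x F* verified over F_5")
```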
Remark
Notice that this description gives the decomposition of $B$ as $\mathbb {F} \rtimes (\mathbb {F} ^{\times }\times \mathbb {F} ^{\times })$ where $(a,c)$ acts on $b$ by $(a,c)(b)=ab$. This implies $(a,c)(b+b')=(a,c)(b)+(a,c)(b')=ab+ab'$. Also, a matrix of the form
${\begin{bmatrix}a&b\\0&c\end{bmatrix}}$
corresponds to the element $(b)\times (a,c)$ in the group.
Borel subgroups
For a linear algebraic group $G$, a Borel subgroup is defined as a subgroup which is closed, connected, and solvable in $G$, and is a maximal subgroup with these properties (note that the first two are topological properties). For example, in $GL_{n}$ and $SL_{n}$ the groups of upper-triangular and of lower-triangular matrices are two of the Borel subgroups. The subgroup $B$ of $GL_{2}$ given above is a Borel subgroup.
Borel subgroup in GL3
In $GL_{3}$ there are the subgroups
$B=\left\{{\begin{bmatrix}*&*&*\\0&*&*\\0&0&*\end{bmatrix}}\right\},{\text{ }}U_{1}=\left\{{\begin{bmatrix}1&*&*\\0&1&*\\0&0&1\end{bmatrix}}\right\}$
Notice $B/U_{1}\cong \mathbb {F} ^{\times }\times \mathbb {F} ^{\times }\times \mathbb {F} ^{\times }$, hence the Borel subgroup has the form
$U_{1}\rtimes (\mathbb {F} ^{\times }\times \mathbb {F} ^{\times }\times \mathbb {F} ^{\times })$
Borel subgroup in product of simple linear algebraic groups
In the product group $GL_{n}\times GL_{m}$ the Borel subgroup can be represented by matrices of the form
${\begin{bmatrix}T&0\\0&S\end{bmatrix}}$
where $T$ is an $n\times n$ upper triangular matrix and $S$ is a $m\times m$ upper triangular matrix.
Z-groups
Any finite group all of whose Sylow subgroups are cyclic is a semidirect product of two cyclic groups, and in particular solvable. Such groups are called Z-groups.
OEIS values
Numbers of solvable groups with order n are (starting with n = 0)
0, 1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14, 1, 5, 1, 5, 2, 2, 1, 15, 2, 2, 5, 4, 1, 4, 1, 51, 1, 2, 1, 14, 1, 2, 2, 14, 1, 6, 1, 4, 2, 2, 1, 52, 2, 5, 1, 5, 1, 15, 2, 13, 2, 2, 1, 12, 1, 2, 4, 267, 1, 4, 1, 5, 1, 4, 1, 50, ... (sequence A201733 in the OEIS)
The orders of non-solvable groups are
60, 120, 168, 180, 240, 300, 336, 360, 420, 480, 504, 540, 600, 660, 672, 720, 780, 840, 900, 960, 1008, 1020, 1080, 1092, 1140, 1176, 1200, 1260, 1320, 1344, 1380, 1440, 1500, ... (sequence A056866 in the OEIS)
Properties
Solvability is closed under a number of operations.
• If G is solvable, and H is a subgroup of G, then H is solvable.[2]
• If G is solvable, and there is a homomorphism from G onto H, then H is solvable; equivalently (by the first isomorphism theorem), if G is solvable, and N is a normal subgroup of G, then G/N is solvable.[3]
• The previous properties can be expanded into the following "three for the price of two" property: if N is a normal subgroup of G, then G is solvable if and only if both N and G/N are solvable.
• In particular, if G and H are solvable, the direct product G × H is solvable.
Solvability is closed under group extension:
• If H and G/H are solvable, then so is G; in particular, if N and H are solvable, their semidirect product is also solvable.
It is also closed under wreath product:
• If G and H are solvable, and X is a G-set, then the wreath product of G and H with respect to X is also solvable.
For any positive integer N, the solvable groups of derived length at most N form a subvariety of the variety of groups, as they are closed under the taking of homomorphic images, subalgebras, and (direct) products. The direct product of a sequence of solvable groups with unbounded derived length is not solvable, so the class of all solvable groups is not a variety.
Burnside's theorem
Main article: Burnside's theorem
Burnside's theorem states that if G is a finite group of order paqb where p and q are prime numbers, and a and b are non-negative integers, then G is solvable.
Related concepts
Supersolvable groups
Main article: supersolvable group
As a strengthening of solvability, a group G is called supersolvable (or supersoluble) if it has an invariant normal series whose factors are all cyclic. Since a normal series has finite length by definition, uncountable groups are not supersolvable. In fact, all supersolvable groups are finitely generated, and an abelian group is supersolvable if and only if it is finitely generated. The alternating group A4 is an example of a finite solvable group that is not supersolvable.
If we restrict ourselves to finitely generated groups, we can consider the following arrangement of classes of groups:
cyclic < abelian < nilpotent < supersolvable < polycyclic < solvable < finitely generated group.
Virtually solvable groups
A group G is called virtually solvable if it has a solvable subgroup of finite index. This is similar to virtually abelian. Clearly all solvable groups are virtually solvable, since one can just choose the group itself, which has index 1.
Hypoabelian
A solvable group is one whose derived series reaches the trivial subgroup at a finite stage. For an infinite group, the finite derived series may not stabilize, but the transfinite derived series always stabilizes. A group whose transfinite derived series reaches the trivial group is called a hypoabelian group, and every solvable group is a hypoabelian group. The first ordinal α such that G(α) = G(α+1) is called the (transfinite) derived length of the group G, and it has been shown that every ordinal is the derived length of some group (Malcev 1949).
See also
• Prosolvable group
• Parabolic subgroup
Notes
1. Milne. Field Theory (PDF). p. 45.
2. Rotman (1995), Theorem 5.15, p. 102, at Google Books
3. Rotman (1995), Theorem 5.16, p. 102, at Google Books
References
• Malcev, A. I. (1949), "Generalized nilpotent algebras and their associated groups", Mat. Sbornik, New Series, 25 (67): 347–366, MR 0032644
• Rotman, Joseph J. (1995), An Introduction to the Theory of Groups, Graduate Texts in Mathematics, vol. 148 (4 ed.), Springer, ISBN 978-0-387-94285-8
External links
• OEIS sequence A056866 (Orders of non-solvable groups)
• Solvable groups as iterated extensions
|
Wikipedia
|
Uses of trigonometry
Amongst the lay public of non-mathematicians and non-scientists, trigonometry is known chiefly for its application to measurement problems, yet it is also often used in ways that are far more subtle, such as its place in the theory of music; still other uses are more technical, such as in number theory. The mathematical topics of Fourier series and Fourier transforms rely heavily on knowledge of trigonometric functions and find application in a number of areas, including statistics.
Trigonometry
• Outline
• History
• Usage
• Functions (inverse)
• Generalized trigonometry
Reference
• Identities
• Exact constants
• Tables
• Unit circle
Laws and theorems
• Sines
• Cosines
• Tangents
• Cotangents
• Pythagorean theorem
Calculus
• Trigonometric substitution
• Integrals (inverse functions)
• Derivatives
Thomas Paine's statement
In Chapter XI of The Age of Reason, the American revolutionary and Enlightenment thinker Thomas Paine wrote:[1]
The scientific principles that man employs to obtain the foreknowledge of an eclipse, or of any thing else relating to the motion of the heavenly bodies, are contained chiefly in that part of science that is called trigonometry, or the properties of a triangle, which, when applied to the study of the heavenly bodies, is called astronomy; when applied to direct the course of a ship on the ocean, it is called navigation; when applied to the construction of figures drawn by a ruler and compass, it is called geometry; when applied to the construction of plans of edifices, it is called architecture; when applied to the measurement of any portion of the surface of the earth, it is called land-surveying. In fine, it is the soul of science. It is an eternal truth: it contains the mathematical demonstration of which man speaks, and the extent of its uses are unknown.
History
Great Trigonometrical Survey
From 1802 until 1871, the Great Trigonometrical Survey was a project to survey the Indian subcontinent with high precision. Starting from a coastal baseline, mathematicians and geographers triangulated vast distances across the country. One of the key achievements was measuring the heights of the Himalayan peaks and determining that Mount Everest is the highest point on Earth.[2]
Historical use for multiplication
For the 25 years preceding the invention of the logarithm in 1614, prosthaphaeresis was the only known generally applicable way of approximating products quickly. It used the identities for the trigonometric functions of sums and differences of angles in terms of the products of trigonometric functions of those angles.
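For instance, the identity cos a · cos b = ½[cos(a − b) + cos(a + b)] reduces a multiplication to an addition, a subtraction, halving, and table lookups. A small sketch of the idea (my illustration, not from the text):

```python
import math

def prosthaphaeresis_product(x, y):
    """Multiply x, y in [-1, 1] using only cosine "table lookups", addition,
    subtraction, and halving: cos(a)cos(b) = (cos(a - b) + cos(a + b)) / 2."""
    a = math.acos(x)   # inverse lookup: the angle whose cosine is x
    b = math.acos(y)
    return (math.cos(a - b) + math.cos(a + b)) / 2

assert abs(prosthaphaeresis_product(0.25, 0.8) - 0.2) < 1e-12
```

Historically the lookups were done in printed trigonometric tables; here `math.acos` and `math.cos` stand in for them.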
Some modern uses
Scientific fields that make use of trigonometry include:
acoustics, architecture, astronomy, cartography, civil engineering, geophysics, crystallography, electrical engineering, electronics, land surveying and geodesy, many physical sciences, mechanical engineering, machining, medical imaging, number theory, oceanography, optics, pharmacology, probability theory, seismology, statistics, and visual perception
That these fields involve trigonometry does not mean knowledge of trigonometry is needed in order to learn anything about them. It does mean that some things in these fields cannot be understood without trigonometry. For example, a professor of music may perhaps know nothing of mathematics, but would probably know that Pythagoras was the earliest known contributor to the mathematical theory of music.
In some of the fields of endeavor listed above it is easy to imagine how trigonometry could be used. For example, in navigation and land surveying, the occasions for the use of trigonometry are in at least some cases simple enough that they can be described in a beginning trigonometry textbook. In the case of music theory, the application of trigonometry is related to work begun by Pythagoras, who observed that the sounds made by plucking two strings of different lengths are consonant if both lengths are small integer multiples of a common length. The resemblance between the shape of a vibrating string and the graph of the sine function is no mere coincidence. In oceanography, the resemblance between the shapes of some waves and the graph of the sine function is also not coincidental. In some other fields, among them climatology, biology, and economics, there are seasonal periodicities. The study of these often involves the periodic nature of the sine and cosine function.
Fourier series
Many fields make use of trigonometry in more advanced ways than can be discussed in a single article. Often those involve what are called Fourier series, after the 18th- and 19th-century French mathematician and physicist Joseph Fourier. Fourier series have a surprisingly diverse array of applications in many scientific fields, in particular in all of the phenomena involving seasonal periodicities mentioned above, and in wave motion, and hence in the study of radiation, of acoustics, of seismology, of modulation of radio waves in electronics, and of electric power engineering.
A Fourier series is a sum of this form:
$\square +\underbrace {\square \cos \theta +\square \sin \theta } _{1}+\underbrace {\square \cos(2\theta )+\square \sin(2\theta )} _{2}+\underbrace {\square \cos(3\theta )+\square \sin(3\theta )} _{3}+\cdots \,$
where each of the squares ($\square $) is a different number, and one is adding infinitely many terms. Fourier used these for studying heat flow and diffusion (diffusion is the process whereby, when you drop a sugar cube into a gallon of water, the sugar gradually spreads through the water, or a pollutant spreads through the air, or any dissolved substance spreads through any fluid).
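As a numerical illustration (mine, not from the text): when f is itself a short sum of this form, the numbers in the squares can be recovered with the standard orthogonality integrals aₙ = (1/π)∫₀²π f(θ)cos(nθ)dθ and bₙ = (1/π)∫₀²π f(θ)sin(nθ)dθ, here approximated by a Riemann sum.

```python
import math

def fourier_coeffs(f, n_max, steps=20000):
    """Numerically estimate the coefficients of cos(n*theta) and sin(n*theta)
    for f on [0, 2*pi], using the standard orthogonality integrals."""
    h = 2 * math.pi / steps
    thetas = [k * h for k in range(steps)]
    a, b = [], []
    for n in range(1, n_max + 1):
        a.append(sum(f(t) * math.cos(n * t) for t in thetas) * h / math.pi)
        b.append(sum(f(t) * math.sin(n * t) for t in thetas) * h / math.pi)
    return a, b

# f is itself a short Fourier sum, so the integrals should recover its coefficients
f = lambda t: 3 * math.sin(t) - 0.5 * math.cos(2 * t)
a, b = fourier_coeffs(f, 2)
assert abs(b[0] - 3) < 1e-6 and abs(a[1] + 0.5) < 1e-6
```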
Fourier series are also applicable to subjects whose connection with wave motion is far from obvious. One ubiquitous example is digital compression whereby images, audio and video data are compressed into a much smaller size which makes their transmission feasible over telephone, internet and broadcast networks. Another example, mentioned above, is diffusion. Among others are: the geometry of numbers, isoperimetric problems, recurrence of random walks, quadratic reciprocity, the central limit theorem, Heisenberg's inequality.
Fourier transforms
A more abstract concept than Fourier series is the idea of Fourier transform. Fourier transforms involve integrals rather than sums, and are used in a similarly diverse array of scientific fields. Many natural laws are expressed by relating rates of change of quantities to the quantities themselves. For example: The rate of change of population is sometimes jointly proportional to (1) the present population and (2) the amount by which the present population falls short of the carrying capacity. This kind of relationship is called a differential equation. If, given this information, one tries to express population as a function of time, one is trying to "solve" the differential equation. Fourier transforms may be used to convert some differential equations to algebraic equations for which methods of solving them are known. Fourier transforms have many uses. In almost any scientific context in which the words spectrum, harmonic, or resonance are encountered, Fourier transforms or Fourier series are nearby.
Statistics, including mathematical psychology
Intelligence quotients are sometimes held to be distributed according to the bell-shaped curve. About 40% of the area under the curve is in the interval from 100 to 120; correspondingly, about 40% of the population scores between 100 and 120 on IQ tests. Nearly 9% of the area under the curve is in the interval from 120 to 140; correspondingly, about 9% of the population scores between 120 and 140 on IQ tests, etc. Similarly many other things are distributed according to the "bell-shaped curve", including measurement errors in many physical measurements. Why the ubiquity of the "bell-shaped curve"? There is a theoretical reason for this, and it involves Fourier transforms and hence trigonometric functions. That is one of a variety of applications of Fourier transforms to statistics.
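These percentages can be checked against the normal cumulative distribution function. The sketch below assumes the conventional IQ scaling of mean 100 and standard deviation 15 (an assumption of this illustration, not stated in the text):

```python
import math

def normal_cdf(x, mu=100.0, sigma=15.0):
    # cumulative distribution function of the bell-shaped (normal) curve
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

share_100_120 = normal_cdf(120) - normal_cdf(100)  # roughly 40% of the area
share_120_140 = normal_cdf(140) - normal_cdf(120)  # roughly 9% of the area
assert 0.39 < share_100_120 < 0.42
assert 0.08 < share_120_140 < 0.10
```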
Trigonometric functions are also applied when statisticians study seasonal periodicities, which are often represented by Fourier series.
Number theory
There is a hint of a connection between trigonometry and number theory. Loosely speaking, one could say that number theory deals with qualitative rather than quantitative properties of numbers. For example, list the fractions between 0 and 1 whose denominator is 42:
${\frac {1}{42}},\qquad {\frac {2}{42}},\qquad {\frac {3}{42}},\qquad \dots \dots ,\qquad {\frac {39}{42}},\qquad {\frac {40}{42}},\qquad {\frac {41}{42}}.$
Discard the ones that are not in lowest terms; keep only those that are in lowest terms:
${\frac {1}{42}},\qquad {\frac {5}{42}},\qquad {\frac {11}{42}},\qquad \dots ,\qquad {\frac {31}{42}},\qquad {\frac {37}{42}},\qquad {\frac {41}{42}}.$
Then bring in trigonometry:
$\cos \left(2\pi \cdot {\frac {1}{42}}\right)+\cos \left(2\pi \cdot {\frac {5}{42}}\right)+\cdots +\cos \left(2\pi \cdot {\frac {37}{42}}\right)+\cos \left(2\pi \cdot {\frac {41}{42}}\right)$
The value of the sum is −1, because 42 has an odd number of prime factors and none of them is repeated: 42 = 2 × 3 × 7. (If there had been an even number of non-repeated factors then the sum would have been 1; if there had been any repeated prime factors (e.g., 60 = 2 × 2 × 3 × 5) then the sum would have been 0; the sum is the Möbius function evaluated at 42.) This hints at the possibility of applying Fourier analysis to number theory.
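This computation is easy to reproduce numerically; the sketch below (my illustration) evaluates the cosine sum over the fractions in lowest terms and checks it against the three cases just described:

```python
import math

def cosine_sum(n):
    """Sum of cos(2*pi*k/n) over 1 <= k < n with k/n in lowest terms.
    The text asserts this equals the Moebius function evaluated at n."""
    return sum(math.cos(2 * math.pi * k / n)
               for k in range(1, n) if math.gcd(k, n) == 1)

assert round(cosine_sum(42)) == -1   # 42 = 2 * 3 * 7: odd number of distinct primes
assert round(cosine_sum(35)) == 1    # 35 = 5 * 7: even number of distinct primes
assert round(cosine_sum(60)) == 0    # 60 = 2 * 2 * 3 * 5: a repeated prime factor
```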
Solving non-trigonometric equations
Various types of equations can be solved using trigonometry.
For example, a linear difference equation or linear differential equation with constant coefficients has solutions expressed in terms of the eigenvalues of its characteristic equation; if some of the eigenvalues are complex, the complex terms can be replaced by trigonometric functions of real terms, showing that the dynamic variable exhibits oscillations.
Similarly, cubic equations with three real solutions have an algebraic solution that is unhelpful in that it contains cube roots of complex numbers; again an alternative solution exists in terms of trigonometric functions of real terms.
References
1. Thomas, Paine (2004). The Age of Reason. Dover Publications. p. 52.
2. "Triangles and Trigonometry". Mathigon. Retrieved 2019-02-06.
Solving quadratic equations with continued fractions
In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is
$ax^{2}+bx+c=0,$
where a ≠ 0.
The quadratic equation on a number $x$ can be solved using the well-known quadratic formula, which can be derived by completing the square. That formula always gives the roots of the quadratic equation, but the solutions are expressed in a form that often involves a quadratic irrational number, which can be evaluated as a decimal fraction only by applying an additional root-extraction algorithm.
If the roots are real, there is an alternative technique that obtains a rational approximation to one of the roots by manipulating the equation directly. The method works in many cases, and long ago it stimulated further development of the analytical theory of continued fractions.
Simple example
Here is a simple example to illustrate the solution of a quadratic equation using continued fractions. We begin with the equation
$x^{2}=2$
and manipulate it directly. Subtracting one from both sides we obtain
$x^{2}-1=1.$
This is easily factored into
$(x+1)(x-1)=1$
from which we obtain
$(x-1)={\frac {1}{1+x}}$
and finally
$x=1+{\frac {1}{1+x}}.$
Now comes the crucial step. We substitute this expression for x back into itself, recursively, to obtain
$x=1+{\cfrac {1}{1+\left(1+{\cfrac {1}{1+x}}\right)}}=1+{\cfrac {1}{2+{\cfrac {1}{1+x}}}}.$
But now we can make the same recursive substitution again, and again, and again, pushing the unknown quantity x as far down and to the right as we please, and obtaining in the limit the infinite continued fraction
$x=1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+\ddots }}}}}}}}}}={\sqrt {2}}.$
By applying the fundamental recurrence formulas we may easily compute the successive convergents of this continued fraction to be 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, ..., where each successive convergent is formed by taking the numerator plus the denominator of the preceding term as the denominator in the next term, then adding in the preceding denominator to form the new numerator. This sequence of denominators is a particular Lucas sequence known as the Pell numbers.
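A short sketch of this computation (mine, not from the article), generating the convergents from the fundamental recurrence hₙ = 2hₙ₋₁ + hₙ₋₂, kₙ = 2kₙ₋₁ + kₙ₋₂ for the partial quotients 1, 2, 2, 2, ...:

```python
from fractions import Fraction

def sqrt2_convergents(count):
    """Convergents of 1 + 1/(2 + 1/(2 + ...)) via the fundamental recurrence
    h_n = 2*h_{n-1} + h_{n-2}, k_n = 2*k_{n-1} + k_{n-2}."""
    h_prev, h = 1, 3   # numerators of the first two convergents 1/1 and 3/2
    k_prev, k = 1, 2   # denominators (the Pell numbers 1, 2, 5, 12, ...)
    out = [Fraction(h_prev, k_prev), Fraction(h, k)]
    for _ in range(count - 2):
        h_prev, h = h, 2 * h + h_prev
        k_prev, k = k, 2 * k + k_prev
        out.append(Fraction(h, k))
    return out

convs = sqrt2_convergents(7)
assert [(c.numerator, c.denominator) for c in convs] == \
    [(1, 1), (3, 2), (7, 5), (17, 12), (41, 29), (99, 70), (239, 169)]
```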
Algebraic explanation
We can gain further insight into this simple example by considering the successive powers of
$\omega ={\sqrt {2}}-1.$
That sequence of successive powers is given by
${\begin{aligned}\omega ^{2}&=3-2{\sqrt {2}},&\omega ^{3}&=5{\sqrt {2}}-7,&\omega ^{4}&=17-12{\sqrt {2}},\\\omega ^{5}&=29{\sqrt {2}}-41,&\omega ^{6}&=99-70{\sqrt {2}},&\omega ^{7}&=169{\sqrt {2}}-239,\,\end{aligned}}$
and so forth. Notice how the fractions derived as successive approximants to √2 appear in this geometric progression.
Since 0 < ω < 1, the sequence {ωn} clearly tends toward zero, by well-known properties of the positive real numbers. This fact can be used to prove, rigorously, that the convergents discussed in the simple example above do in fact converge to √2, in the limit.
We can also find these numerators and denominators appearing in the successive powers of
$\omega ^{-1}={\sqrt {2}}+1.$
The sequence of successive powers {ω−n} does not approach zero; it grows without limit instead. But it can still be used to obtain the convergents in our simple example.
Notice also that the set obtained by forming all the combinations a + b√2, where a and b are integers, is an example of an object known in abstract algebra as a ring, and more specifically as an integral domain. The number ω is a unit in that integral domain. See also algebraic number field.
General quadratic equation
Continued fractions are most conveniently applied to solve the general quadratic equation expressed in the form of a monic polynomial
$x^{2}+bx+c=0$
which can always be obtained by dividing the original equation by its leading coefficient. Starting from this monic equation we see that
${\begin{aligned}x^{2}+bx&=-c\\x+b&={\frac {-c}{x}}\\x&=-b-{\frac {c}{x}}\,\end{aligned}}$
But now we can apply the last equation to itself recursively to obtain
$x=-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-\ddots \,}}}}}}}}$
If this infinite continued fraction converges at all, it must converge to one of the roots of the monic polynomial x2 + bx + c = 0. Unfortunately, this particular continued fraction does not converge to a finite number in every case. We can easily see that this is so by considering the quadratic formula and a monic polynomial with real coefficients. If the discriminant of such a polynomial is negative, then both roots of the quadratic equation have imaginary parts. In particular, if b and c are real numbers and b2 − 4c < 0, all the convergents of this continued fraction "solution" will be real numbers, and they cannot possibly converge to a root of the form u + iv (where v ≠ 0), which does not lie on the real number line.
General theorem
By applying a result obtained by Euler in 1748 it can be shown that the continued fraction solution to the general monic quadratic equation with real coefficients
$x^{2}+bx+c=0$
given by
$x=-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-\ddots \,}}}}}}}}$
either converges or diverges depending on both the coefficient b and the value of the discriminant, b2 − 4c.
If b = 0 the general continued fraction solution is totally divergent; the convergents alternate between 0 and $\infty $. If b ≠ 0 we distinguish three cases.
1. If the discriminant is negative, the fraction diverges by oscillation, which means that its convergents wander around in a regular or even chaotic fashion, never approaching a finite limit.
2. If the discriminant is zero the fraction converges to the single root of multiplicity two.
3. If the discriminant is positive the equation has two real roots, and the continued fraction converges to the larger (in absolute value) of these. The rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges.
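A numerical sketch of case 3 (my example, not from the article, taking b = −3, c = 2, so the roots are 1 and 2): truncating the continued fraction at increasing depth converges to the root of larger absolute value.

```python
def continued_fraction_root(b, c, depth=60):
    """Evaluate the depth-fold truncation of x = -b - c/(-b - c/(... - b)).
    When the continued fraction converges, the limit is a root of x^2 + b*x + c = 0."""
    x = -b                 # innermost (depth-zero) truncation
    for _ in range(depth):
        x = -b - c / x
    return x

root = continued_fraction_root(b=-3.0, c=2.0)   # x^2 - 3x + 2 = (x - 1)(x - 2)
assert abs(root - 2.0) < 1e-9                   # converges to the larger root
```

The per-step error shrinks roughly by the ratio of the smaller root to the larger one (here 1/2), matching the convergence-rate remark above.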
When the monic quadratic equation with real coefficients is of the form x2 = c, the general solution described above is useless because division by zero is not well defined. As long as c is positive, though, it is always possible to transform the equation by subtracting a perfect square from both sides and proceeding along the lines illustrated with √2 above. In symbols, if
$x^{2}=c\qquad (c>0)$
just choose some positive real number p such that
$p^{2}<c.$
Then by direct manipulation we obtain
${\begin{aligned}x^{2}-p^{2}&=c-p^{2}\\(x+p)(x-p)&=c-p^{2}\\x-p&={\frac {c-p^{2}}{p+x}}\\x&=p+{\frac {c-p^{2}}{p+x}}\\&=p+{\cfrac {c-p^{2}}{p+\left(p+{\cfrac {c-p^{2}}{p+x}}\right)}}\\&=p+{\cfrac {c-p^{2}}{2p+{\cfrac {c-p^{2}}{2p+{\cfrac {c-p^{2}}{2p+\ddots \,}}}}}}\,\end{aligned}}$
and this transformed continued fraction must converge because all the partial numerators and partial denominators are positive real numbers.
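A sketch of this transformed fraction (my example, taking c = 13 and p = 3, so that c − p² = 4; any positive p with p² < c works):

```python
def sqrt_via_cf(c, p, depth=50):
    """Approximate sqrt(c) with x = p + (c - p^2)/(2p + (c - p^2)/(2p + ...)),
    truncated at the given depth; requires 0 < p^2 < c."""
    assert 0 < p * p < c
    x = p                  # innermost truncation
    for _ in range(depth):
        x = p + (c - p * p) / (p + x)
    return x

assert abs(sqrt_via_cf(13, 3) - 13 ** 0.5) < 1e-12
assert abs(sqrt_via_cf(2, 1) - 2 ** 0.5) < 1e-12
```

With c = 2 and p = 1 this reproduces the √2 expansion worked out in the simple example above.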
Complex coefficients
By the fundamental theorem of algebra, if the monic polynomial equation x2 + bx + c = 0 has complex coefficients, it must have two (not necessarily distinct) complex roots. Unfortunately, the discriminant b2 − 4c is not as useful in this situation, because it may be a complex number. Still, a modified version of the general theorem can be proved.
The continued fraction solution to the general monic quadratic equation with complex coefficients
$x^{2}+bx+c=0\qquad (b\neq 0)$
given by
$x=-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-\ddots \,}}}}}}}}$
converges or not depending on the value of the discriminant, b2 − 4c, and on the relative magnitude of its two roots.
Denoting the two roots by r1 and r2 we distinguish three cases.
1. If the discriminant is zero the fraction converges to the single root of multiplicity two.
2. If the discriminant is not zero, and |r1| ≠ |r2|, the continued fraction converges to the root of maximum modulus (i.e., to the root with the greater absolute value).
3. If the discriminant is not zero, and |r1| = |r2|, the continued fraction diverges by oscillation.
In case 2, the rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges.
This general solution of monic quadratic equations with complex coefficients is usually not very useful for obtaining rational approximations to the roots, because the criteria are circular (that is, the relative magnitudes of the two roots must be known before we can conclude that the fraction converges, in most cases). But this solution does find useful applications in the further analysis of the convergence problem for continued fractions with complex elements.
See also
• Lucas sequence
• Methods of computing square roots
• Pell's equation
References
• H. S. Wall, Analytic Theory of Continued Fractions, D. Van Nostrand Company, Inc., 1948 ISBN 0-8284-0207-8
Solving the Riddle of Phyllotaxis
Solving the Riddle of Phyllotaxis: Why the Fibonacci Numbers and the Golden Ratio Occur in Plants is a book on the mathematics of plant structure, and in particular on phyllotaxis, the arrangement of leaves on plant stems. It was written by Irving Adler, and published in 2012 by World Scientific. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.[1]
Background
Irving Adler (1913–2012) was known as a peace protester, schoolteacher, and children's science book author[2] before, in 1961, earning a doctorate in abstract algebra. Even later in his life, Adler began working on phyllotaxis, the mathematical structure of leaves on plant stems. This book, which collects several of his papers on the subject previously published in journals and edited volumes,[3] is the last of his 85 books to be published before his death.[2]
Topics
Different plants arrange their leaves differently, for instance on alternating sides of the plant stem, or rotated from each other by other fractions of a full rotation between consecutive leaves. In these patterns, rotations by 1/2, 1/3, 3/8, or 5/8 of a full turn are common, and it does not appear to be coincidental that the numerators and denominators of these fractions are all Fibonacci numbers. Higher Fibonacci numbers often appear in the number of spiral arms in the spiraling patterns of sunflower seed heads, or the helical patterns of pineapple cells.[1]
The papers are arranged chronologically; they include four journal papers from the 1970s, another from the late 1990s, and a preface and book chapter also from the 1990s. Among them, the first is the longest, and reviewer Adhemar Bultheel calls it "the most fundamental"; it uses the idea of "contact pressure" to cause plant parts to maximize their distance from each other and maintain a consistent angle of divergence from each other, and makes connections with the mathematical theories of circle packing and space-filling curves. Subsequent papers refine this theory, make additional connections for instance to the theory of continued fractions, and provide a more general overview.[4]
Interspersed with the theoretical results in this area are historical asides discussing, among others, the work on phyllotaxis of Theophrastus (the first to study phyllotaxis), Leonardo da Vinci (the first to apply mathematics to phyllotaxis), Johannes Kepler (the first to recognize the importance of the Fibonacci numbers to phyllotaxis), and later naturalists and mathematicians.[1]
Audience and reception
Reviewer Peter Ruane found the book gripping, writing that it can be read by a mathematically inclined reader with no background knowledge in phyllotaxis. He suggests, however, that it might be easier to read the papers in the reverse of their chronological order, as the broader overview papers were written later in this sequence.[1] And Yuri V. Rogovchenko calls its publication "a thoughtful tribute to Dr. Adler’s multi-faceted career as a researcher, educator, political activist, and author".[3]
References
1. Ruane, Peter (May 2013), "Review of Solving the Riddle of Phyllotaxis", MAA Reviews, Mathematical Association of America
2. "Teacher and writer Irving Adler dies at 99", The Washington Post, September 30, 2012
3. Rogovchenko, Yuri V., "Review of Solving the Riddle of Phyllotaxis", zbMATH, Zbl 1274.00029
4. Bultheel, Adhemar (November 2012), "Review of Solving the Riddle of Phyllotaxis", EMS Reviews, European Mathematical Society
Solèr's theorem
In mathematics, Solèr's theorem is a result concerning certain infinite-dimensional vector spaces. It states that any orthomodular form that has an infinite orthonormal sequence is a Hilbert space over the real numbers, complex numbers or quaternions.[1][2] Originally proved by Maria Pia Solèr, the result is significant for quantum logic[3][4] and the foundations of quantum mechanics.[5][6] In particular, Solèr's theorem helps to fill a gap in the effort to use Gleason's theorem to rederive quantum mechanics from information-theoretic postulates.[7][8] It is also an important step in the Heunen-Kornell axiomatisation of the category of Hilbert spaces.[9]
Physicist John C. Baez notes,
Nothing in the assumptions mentions the continuum: the hypotheses are purely algebraic. It therefore seems quite magical that [the division ring over which the Hilbert space is defined] is forced to be the real numbers, complex numbers or quaternions.[6]
Writing a decade after Solèr's original publication, Pitowsky calls her theorem "celebrated".[7]
Statement
Let $\mathbb {K} $ be a division ring. That means it is a ring in which one can add, subtract, multiply, and divide but in which the multiplication need not be commutative. Suppose this ring has a conjugation, i.e. an operation $x\mapsto x^{*}$ for which
${\begin{aligned}&(x+y)^{*}=x^{*}+y^{*},\\&(xy)^{*}=y^{*}x^{*}{\text{ (the order of multiplication is inverted), and }}\\&(x^{*})^{*}=x.\end{aligned}}$
Consider a vector space V with scalars in $\mathbb {K} $, and a mapping
$(u,v)\mapsto \langle u,v\rangle \in \mathbb {K} $
which is $\mathbb {K} $-linear in the left (or in the right) entry, satisfying the identity
$\langle u,v\rangle =\langle v,u\rangle ^{*}.$
This is called a Hermitian form. Suppose this form is non-degenerate in the sense that
$\langle u,v\rangle =0{\text{ for all values of }}u{\text{ only if }}v=0.$
For any subspace S let $S^{\bot }$ be the orthogonal complement of S. Call the subspace "closed" if $S^{\bot \bot }=S.$
Call this whole vector space, and the Hermitian form, "orthomodular" if for every closed subspace S we have that $S+S^{\bot }$ is the entire space. (The term "orthomodular" derives from the study of quantum logic. In quantum logic, the distributive law is taken to fail due to the uncertainty principle, and it is replaced with the "modular law," or in the case of infinite-dimensional Hilbert spaces, the "orthomodular law."[6])
A set of vectors $ u_{i}\in V$ is called "orthonormal" if
$\langle u_{i},u_{j}\rangle =\delta _{ij}.$
The result is this:
If this space has an infinite orthonormal set, then the division ring of scalars is either the field of real numbers, the field of complex numbers, or the ring of quaternions.
References
1. Solèr, M. P. (1995-01-01). "Characterization of hilbert spaces by orthomodular spaces". Communications in Algebra. 23 (1): 219–243. doi:10.1080/00927879508825218. ISSN 0092-7872.
2. Prestel, Alexander (1995-12-01). "On Solèr's characterization of Hilbert spaces". Manuscripta Mathematica. 86 (1): 225–238. doi:10.1007/bf02567991. ISSN 0025-2611. S2CID 123553981.
3. Coecke, Bob; Moore, David; Wilce, Alexander (2000). "Operational Quantum Logic: An Overview". Current Research in Operational Quantum Logic. Springer, Dordrecht. pp. 1–36. arXiv:quant-ph/0008019. doi:10.1007/978-94-017-1201-9_1. ISBN 978-90-481-5437-1. S2CID 2479454.
4. Moretti, Valter; Oppio, Marco (2018). "The correct formulation of Gleason's theorem in quaternionic Hilbert spaces". Annales Henri Poincaré. 19 (11): 3321–3355. arXiv:1803.06882. Bibcode:2018AnHP...19.3321M. doi:10.1007/s00023-018-0729-8. ISSN 1424-0661. S2CID 53630146.
5. Holland, Samuel S. (1995). "Orthomodularity in infinite dimensions; a theorem of M. Solèr". Bulletin of the American Mathematical Society. 32 (2): 205–234. arXiv:math/9504224. Bibcode:1995math......4224H. doi:10.1090/s0273-0979-1995-00593-8. ISSN 0273-0979. S2CID 17438283.
6. Baez, John C. (1 December 2010). "Solèr's Theorem". The n-Category Café. Retrieved 2017-07-22.
7. Pitowsky, Itamar (2006). "Quantum Mechanics as a Theory of Probability". Physical Theory and its Interpretation. The Western Ontario Series in Philosophy of Science. Vol. 72. Springer, Dordrecht. pp. 213–240. arXiv:quant-ph/0510095. doi:10.1007/1-4020-4876-9_10. ISBN 978-1-4020-4875-3. S2CID 14339351.
8. Grinbaum, Alexei (2007-09-01). "Reconstruction of Quantum Theory" (PDF). The British Journal for the Philosophy of Science. 58 (3): 387–408. doi:10.1093/bjps/axm028. ISSN 0007-0882.
Cassinelli, G.; Lahti, P. (2017-11-13). "Quantum mechanics: why complex Hilbert space?". Philosophical Transactions of the Royal Society A. 375 (2106): 20160393. Bibcode:2017RSPTA.37560393C. doi:10.1098/rsta.2016.0393. ISSN 1364-503X. PMID 28971945.
9. Heunen, Chris; Kornell, Andre (2022). "Axioms for the category of Hilbert spaces". Proceedings of the National Academy of Sciences. 119 (9): e2117024119. arXiv:2109.07418. Bibcode:2022PNAS..11917024H. doi:10.1073/pnas.2117024119. PMC 8892366. PMID 35217613.
|
Wikipedia
|
Somers' D
In statistics, Somers’ D, sometimes incorrectly referred to as Somer’s D, is a measure of ordinal association between two possibly dependent random variables X and Y. Somers’ D takes values between $-1$ when all pairs of the variables disagree and $1$ when all pairs of the variables agree. Somers’ D is named after Robert H. Somers, who proposed it in 1962.[1]
Somers’ D plays a central role in rank statistics and is the parameter behind many nonparametric methods.[2] It is also used as a quality measure of binary choice or ordinal regression (e.g., logistic regressions) and credit scoring models.
Somers’ D for sample
We say that two pairs $(x_{i},y_{i})$ and $(x_{j},y_{j})$ are concordant if the ranks of both elements agree; that is, if $x_{i}>x_{j}$ and $y_{i}>y_{j}$, or if $x_{i}<x_{j}$ and $y_{i}<y_{j}$. We say that two pairs are discordant if the ranks of both elements disagree; that is, if $x_{i}>x_{j}$ and $y_{i}<y_{j}$, or if $x_{i}<x_{j}$ and $y_{i}>y_{j}$. If $x_{i}=x_{j}$ or $y_{i}=y_{j}$, the pair is neither concordant nor discordant.
Let $(x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{n},y_{n})$ be a set of observations of two possibly dependent random variables X and Y. Define the Kendall tau rank correlation coefficient $\tau $ as
$\tau ={\frac {N_{C}-N_{D}}{n(n-1)/2}},$
where $N_{C}$ is the number of concordant pairs and $N_{D}$ is the number of discordant pairs. Somers’ D of Y with respect to X is defined as $D_{YX}=\tau (X,Y)/\tau (X,X)$.[2] Note that Kendall's tau is symmetric in X and Y, whereas Somers’ D is asymmetric in X and Y.
As $\tau (X,X)$ quantifies the number of pairs with unequal X values, Somers’ D is the difference between the number of concordant and discordant pairs, divided by the number of pairs with X values in the pair being unequal.
Somers’ D for distribution
Let two independent bivariate random variables $(X_{1},Y_{1})$ and $(X_{2},Y_{2})$ have the same probability distribution $\operatorname {P} _{XY}$. Again, Somers’ D, which measures ordinal association of random variables X and Y in $\operatorname {P} _{XY}$, can be defined through Kendall's tau
${\begin{aligned}\tau (X,Y)&=\operatorname {E} {\Bigl (}\operatorname {sgn}(X_{1}-X_{2})\operatorname {sgn}(Y_{1}-Y_{2}){\Bigr )}\\&=\operatorname {P} {\Bigl (}\operatorname {sgn}(X_{1}-X_{2})\operatorname {sgn}(Y_{1}-Y_{2})=1{\Bigr )}-\operatorname {P} {\Bigl (}\operatorname {sgn}(X_{1}-X_{2})\operatorname {sgn}(Y_{1}-Y_{2})=-1{\Bigr )},\\\end{aligned}}$
or the difference between the probabilities of concordance and discordance. Somers’ D of Y with respect to X is defined as $D_{YX}=\tau (X,Y)/\tau (X,X)$. Thus, $D_{YX}$ is the difference between the two corresponding probabilities, conditional on the X values not being equal. If X has a continuous probability distribution, then $\tau (X,X)=1$ and Kendall's tau and Somers’ D coincide. Somers’ D normalizes Kendall's tau for possible mass points of variable X.
If X and Y are both binary with values 0 and 1, then Somers’ D is the difference between two probabilities:
$D_{YX}=\operatorname {P} (Y=1\mid X=1)-\operatorname {P} (Y=1\mid X=0).$
Somers' D for binary dependent variables
In practice, Somers' D is most often used when the dependent variable Y is a binary variable,[2] i.e. for binary classification or prediction of binary outcomes including binary choice models in econometrics. Methods for fitting such models include logistic and probit regression.
Several statistics can be used to quantify the quality of such models: the area under the receiver operating characteristic (ROC) curve, Goodman and Kruskal's gamma, Kendall's tau (Tau-a), Somers’ D, etc. Somers’ D is probably the most widely used of the available ordinal association statistics.[3] Identical in this context to the Gini coefficient, Somers’ D is related to the area under the ROC curve (AUC) by[2]
$\mathrm {AUC} ={\frac {D_{XY}+1}{2}}$.
In the case where the independent (predictor) variable X is discrete and the dependent (outcome) variable Y is binary, Somers’ D equals
$D_{XY}={\frac {N_{C}-N_{D}}{N_{C}+N_{D}+N_{T}}},$
where $N_{T}$ is the number of neither concordant nor discordant pairs that are tied on variable X and not on variable Y.
Example
Suppose that the independent (predictor) variable X takes three values, 0.25, 0.5, or 0.75, and dependent (outcome) variable Y takes two values, 0 or 1. The table below contains observed combinations of X and Y:
Frequencies of (Y, X) pairs:

Y \ X  0.25  0.5  0.75
0      3     5    2
1      1     7    6
The number of concordant pairs equals
$N_{C}=3\times 7+3\times 6+5\times 6=69.$
The number of discordant pairs equals
$N_{D}=1\times 5+1\times 2+7\times 2=21.$
The number of tied pairs equals the total number of pairs with unequal Y values, minus the concordant and discordant pairs:
$N_{T}=(3+5+2)\times (1+7+6)-69-21=50.$
Thus, Somers’ D equals
$D_{XY}={\frac {69-21}{69+21+50}}\approx 0.34.$
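The calculation can be reproduced with a short script. The sketch below is illustrative (the function and variable names are not from the references); it counts concordant, discordant, and X-tied pairs directly, following the $D_{XY}$ formula above:

```python
from itertools import combinations

def somers_d_xy(pairs):
    """D_XY = (N_C - N_D) / (N_C + N_D + N_T): pairs tied on Y are excluded,
    and N_T counts pairs tied on X but not on Y (see the formula above)."""
    nc = nd = nt = 0
    for (x1, y1), (x2, y2) in combinations(pairs, 2):
        if y1 == y2:
            continue                      # tied on Y: excluded entirely
        if x1 == x2:
            nt += 1                       # tied on X but not on Y
        elif (x1 - x2) * (y1 - y2) > 0:
            nc += 1                       # concordant
        else:
            nd += 1                       # discordant
    return (nc - nd) / (nc + nd + nt)

# Rebuild the 24 observations from the frequency table of the example
freq = {(0.25, 0): 3, (0.5, 0): 5, (0.75, 0): 2,
        (0.25, 1): 1, (0.5, 1): 7, (0.75, 1): 6}
data = [xy for xy, count in freq.items() for _ in range(count)]
print(somers_d_xy(data))                  # 48/140 ≈ 0.342857
```

The double loop over all pairs is O(n²); for large samples, rank-based O(n log n) algorithms are normally used instead.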
References
1. Somers, R. H. (1962). "A new asymmetric measure of association for ordinal variables". American Sociological Review. 27 (6). doi:10.2307/2090408. JSTOR 2090408.
2. Newson, Roger (2002). "Parameters behind "nonparametric" statistics: Kendall's tau, Somers' D and median differences". Stata Journal. 2 (1): 45–64.
3. O'Connell, A. A. (2006). Logistic Regression Models for Ordinal Response Variables. SAGE Publications.
Somos' quadratic recurrence constant
In mathematics, Somos' quadratic recurrence constant, named after Michael Somos, is the number
$\sigma ={\sqrt {1{\sqrt {2{\sqrt {3\cdots }}}}}}=1^{1/2}\;2^{1/4}\;3^{1/8}\cdots .\,$
This can easily be rewritten as the far more quickly converging product representation
$\sigma =\sigma ^{2}/\sigma =\left({\frac {2}{1}}\right)^{1/2}\left({\frac {3}{2}}\right)^{1/4}\left({\frac {4}{3}}\right)^{1/8}\left({\frac {5}{4}}\right)^{1/16}\cdots ,$
which can then be compactly represented in infinite product form by:
$\sigma =\prod _{k=1}^{\infty }\left(1+{\frac {1}{k}}\right)^{\frac {1}{2^{k}}}.$
The constant σ arises when studying the asymptotic behaviour of the sequence
$g_{0}=1\,;\,g_{n}=ng_{n-1}^{2},\qquad n>1,\,$
with first few terms 1, 1, 2, 12, 576, 1658880, ... (sequence A052129 in the OEIS). This sequence can be shown to have asymptotic behaviour as follows:[1]
$g_{n}\sim {\frac {\sigma ^{2^{n}}}{n+2+O({\frac {1}{n}})}}.$
Guillera and Sondow give a representation in terms of the derivative of the Lerch transcendent:
$\ln \sigma ={\frac {-1}{2}}{\frac {\partial \Phi }{\partial s}}\!\left({\frac {1}{2}},0,1\right)$
where ln is the natural logarithm and $\Phi $(z, s, q) is the Lerch transcendent.
Finally,
$\sigma =1.661687949633594121296\dots \;$ (sequence A112302 in the OEIS).
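This value is easy to reproduce numerically; the sketch below (illustrative only) sums the logarithmic series equivalent to the infinite product and also checks the first terms of the recurrence:

```python
import math

# σ = ∏_{k≥1} (1 + 1/k)^(1/2^k) = exp( Σ_{k≥2} ln(k) / 2^k )
sigma = math.exp(sum(math.log(k) / 2**k for k in range(2, 200)))
print(sigma)  # 1.6616879496335941...

# The recurrence g_0 = 1, g_n = n * g_{n-1}^2 that σ governs:
g = [1]
for n in range(1, 7):
    g.append(n * g[-1]**2)
print(g)  # [1, 1, 2, 12, 576, 1658880, 16511297126400]
```

Since the k-th factor contributes $\ln k/2^{k}$, the series converges geometrically and 200 terms far exceed double precision.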
Notes
1. Weisstein, Eric W. "Somos's Quadratic Recurrence Constant". MathWorld.
References
• Steven R. Finch, Mathematical Constants (2003), Cambridge University Press, p. 446. ISBN 0-521-81805-2.
• Jesus Guillera and Jonathan Sondow, "Double integrals and infinite products for some classical constants via analytic continuations of Lerch's transcendent", Ramanujan Journal 16 (2008), 247–270 (Provides an integral and a series representation). arXiv:math/0506319
Song Sun
Song Sun (Chinese: 孙崧; pinyin: Sūn Sōng, born in 1987) is a Chinese mathematician whose research concerns geometry and topology. A Sloan Research Fellow, he is a professor at the Department of Mathematics of the University of California, Berkeley, where he has been since 2018. In 2019, he was awarded the Oswald Veblen Prize in Geometry.
Biography
Sun attended Huaining High School in Huaining County, Anhui, China, before being admitted to the Special Class for the Gifted Young at the University of Science and Technology of China in 2002.[1] After graduating from the program with a B.S. in 2006, he moved to the United States to pursue graduate studies at the University of Wisconsin, obtaining his Ph.D. in mathematics (differential geometry) in 2010.[1][2] His doctoral advisor was Xiuxiong Chen, and his dissertation was titled "Kempf–Ness theorem and uniqueness of extremal metrics".[3]
Sun worked as a research associate at Imperial College London before becoming an assistant professor at Stony Brook University in 2013.[2] He was awarded the Sloan Research Fellowship in 2014.[2] In 2018, he was appointed an associate professor at the Department of Mathematics of the University of California, Berkeley.[4]
He was an invited speaker at the 2018 International Congress of Mathematicians, in Rio de Janeiro.[5] For 2021 he received the Breakthrough Prize in Mathematics – New Horizons in Mathematics.[6]
Conjecture on Fano manifolds and Veblen Prize
In 2019, Sun was awarded the prestigious Oswald Veblen Prize in Geometry, together with his former advisor Xiuxiong Chen and Simon Donaldson, for proving a long-standing conjecture on Fano manifolds, which states that "a Fano manifold admits a Kähler–Einstein metric if and only if it is K-stable". It had been one of the most actively investigated topics in geometry since a rough version of it was conjectured in the 1980s by Shing-Tung Yau, who had previously proved the Calabi conjecture.[7] The conjecture was later given a precise formulation by Donaldson, based in part on earlier work of Gang Tian. The solution by Chen, Donaldson and Sun was published in the Journal of the American Mathematical Society in 2015 as a three-article series, "Kähler–Einstein metrics on Fano manifolds, I, II and III".[7][8]
Major publications
• Donaldson, Simon; Sun, Song (2014). "Gromov-Hausdorff limits of Kähler manifolds and algebraic geometry". Acta Math. 213 (1): 63–106. doi:10.1007/s11511-014-0116-3. S2CID 120450769.
• Chen, Xiuxiong; Donaldson, Simon; Sun, Song (2015). "Kähler-Einstein metrics on Fano manifolds. I: Approximation of metrics with cone singularities". J. Amer. Math. Soc. 28 (1): 183–197. arXiv:1211.4566. doi:10.1090/S0894-0347-2014-00799-2. S2CID 119641827.
• Chen, Xiuxiong; Donaldson, Simon; Sun, Song (2015). "Kähler-Einstein metrics on Fano manifolds. II: Limits with cone angle less than 2π". J. Amer. Math. Soc. 28 (1): 199–234. arXiv:1212.4714. doi:10.1090/S0894-0347-2014-00800-6. S2CID 119140033.
• Chen, Xiuxiong; Donaldson, Simon; Sun, Song (2015). "Kähler-Einstein metrics on Fano manifolds. III: Limits as cone angle approaches 2π and completion of the main proof". J. Amer. Math. Soc. 28 (1): 235–278. arXiv:1302.0282. doi:10.1090/S0894-0347-2014-00801-8. S2CID 119575364.
References
1. "陈秀雄孙崧荣获维布伦奖". University of Science and Technology of China Initiative Foundation. 2018-11-20. Retrieved 2019-04-09.
2. "Song Sun Awarded Prestigious Sloan Fellowship for Mathematics". Stony Brook University. 2014-02-18. Retrieved 2019-04-03.
3. "Song Sun". The Mathematics Genealogy Project. Retrieved 2019-04-03.
4. "Song Sun". Department of Mathematics, University of California Berkeley. Retrieved 2019-04-03.
5. "Invited Section Lectures Speakers". ICM 2018. Retrieved 2019-04-03.
6. Breakthrough Prize in Mathematics 2021
7. "2019 Oswald Veblen Prize in Geometry to Xiuxiong Chen, Simon Donaldson, and Song Sun". American Mathematical Society. 2018-11-19. Retrieved 2019-04-09.
8. "Song Sun to receive the 2019 Oswald Veblen Prize in Geometry. Congratulations!". Department of Mathematics, University of California Berkeley. Retrieved 2019-04-03.
Recipients of the Oswald Veblen Prize in Geometry
• 1964 Christos Papakyriakopoulos
• 1964 Raoul Bott
• 1966 Stephen Smale
• 1966 Morton Brown and Barry Mazur
• 1971 Robion Kirby
• 1971 Dennis Sullivan
• 1976 William Thurston
• 1976 James Harris Simons
• 1981 Mikhail Gromov
• 1981 Shing-Tung Yau
• 1986 Michael Freedman
• 1991 Andrew Casson and Clifford Taubes
• 1996 Richard S. Hamilton and Gang Tian
• 2001 Jeff Cheeger, Yakov Eliashberg and Michael J. Hopkins
• 2004 David Gabai
• 2007 Peter Kronheimer and Tomasz Mrowka; Peter Ozsváth and Zoltán Szabó
• 2010 Tobias Colding and William Minicozzi; Paul Seidel
• 2013 Ian Agol and Daniel Wise
• 2016 Fernando Codá Marques and André Neves
• 2019 Xiuxiong Chen, Simon Donaldson and Song Sun
Authority control: Academics
• Google Scholar
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Laguerre polynomials
In mathematics, the Laguerre polynomials, named after Edmond Laguerre (1834–1886), are solutions of Laguerre's differential equation:
$xy''+(1-x)y'+ny=0,\ y=y(x)$
which is a second-order linear differential equation. This equation has nonsingular solutions only if n is a non-negative integer.
Sometimes the name Laguerre polynomials is used for solutions of
$xy''+(\alpha +1-x)y'+ny=0~.$
where n is still a non-negative integer. Then they are also named generalized Laguerre polynomials, as will be done here (alternatively associated Laguerre polynomials or, rarely, Sonine polynomials, after their inventor[1] Nikolay Yakovlevich Sonin).
More generally, a Laguerre function is a solution when n is not necessarily a non-negative integer.
The Laguerre polynomials are also used for Gaussian quadrature to numerically compute integrals of the form
$\int _{0}^{\infty }f(x)e^{-x}\,dx.$
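A short sketch using NumPy's Gauss–Laguerre nodes and weights (`numpy.polynomial.laguerre.laggauss`); with n nodes the rule is exact for polynomial integrands of degree at most 2n − 1:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

x, w = laggauss(20)      # 20-point rule for ∫_0^∞ f(x) e^{-x} dx

# Exact for polynomials: ∫_0^∞ x^3 e^{-x} dx = 3! = 6
print(w @ x**3)

# Rapidly convergent for smooth f: ∫_0^∞ cos(x) e^{-x} dx = 1/2
print(w @ np.cos(x))
```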
These polynomials, usually denoted L0, L1, …, are a polynomial sequence which may be defined by the Rodrigues formula,
$L_{n}(x)={\frac {e^{x}}{n!}}{\frac {d^{n}}{dx^{n}}}\left(e^{-x}x^{n}\right)={\frac {1}{n!}}\left({\frac {d}{dx}}-1\right)^{n}x^{n},$
reducing to the closed form of a following section.
They are orthogonal polynomials with respect to an inner product
$\langle f,g\rangle =\int _{0}^{\infty }f(x)g(x)e^{-x}\,dx.$
The rook polynomials in combinatorics are more or less the same as Laguerre polynomials, up to elementary changes of variables. Further see the Tricomi–Carlitz polynomials.
The Laguerre polynomials arise in quantum mechanics, in the radial part of the solution of the Schrödinger equation for a one-electron atom. They also describe the static Wigner functions of oscillator systems in quantum mechanics in phase space. They further enter in the quantum mechanics of the Morse potential and of the 3D isotropic harmonic oscillator.
Physicists sometimes use a definition for the Laguerre polynomials that is larger by a factor of n! than the definition used here. (Likewise, some physicists may use somewhat different definitions of the so-called associated Laguerre polynomials.)
The first few polynomials
These are the first few Laguerre polynomials:
n  $L_{n}(x)\,$
0  $1\,$
1  $-x+1\,$
2  ${\tfrac {1}{2}}(x^{2}-4x+2)\,$
3  ${\tfrac {1}{6}}(-x^{3}+9x^{2}-18x+6)\,$
4  ${\tfrac {1}{24}}(x^{4}-16x^{3}+72x^{2}-96x+24)\,$
5  ${\tfrac {1}{120}}(-x^{5}+25x^{4}-200x^{3}+600x^{2}-600x+120)\,$
6  ${\tfrac {1}{720}}(x^{6}-36x^{5}+450x^{4}-2400x^{3}+5400x^{2}-4320x+720)\,$
n  ${\tfrac {1}{n!}}((-x)^{n}+n^{2}(-x)^{n-1}+\dots +n({n!})(-x)+n!)\,$
Recursive definition, closed form, and generating function
One can also define the Laguerre polynomials recursively, defining the first two polynomials as
$L_{0}(x)=1$
$L_{1}(x)=1-x$
and then using the following recurrence relation for any k ≥ 1:
$L_{k+1}(x)={\frac {(2k+1-x)L_{k}(x)-kL_{k-1}(x)}{k+1}}.$
Furthermore,
$xL'_{n}(x)=nL_{n}(x)-nL_{n-1}(x).$
In the solution of some boundary value problems, these characteristic values can be useful:
$L_{k}(0)=1,L_{k}'(0)=-k.$
The closed form is
$L_{n}(x)=\sum _{k=0}^{n}{\binom {n}{k}}{\frac {(-1)^{k}}{k!}}x^{k}.$
The generating function for them likewise follows,
$\sum _{n=0}^{\infty }t^{n}L_{n}(x)={\frac {1}{1-t}}e^{-tx/(1-t)}.$
Polynomials of negative index can be expressed using the ones with positive index:
$L_{-n}(x)=e^{x}L_{n-1}(-x).$
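As a quick check (illustrative Python), the three-term recurrence agrees with the closed form $L_{n}(x)=\sum _{k=0}^{n}{\binom {n}{k}}{\frac {(-1)^{k}}{k!}}x^{k}$ given above:

```python
from math import comb, factorial

def laguerre_rec(n, x):
    """Evaluate L_n(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - x                     # L_0, L_1
    for k in range(1, n):
        prev, cur = cur, ((2*k + 1 - x)*cur - k*prev) / (k + 1)
    return cur

def laguerre_sum(n, x):
    """Closed form: sum_k C(n,k) (-1)^k x^k / k!."""
    return sum(comb(n, k) * (-1)**k / factorial(k) * x**k for k in range(n + 1))

for n in range(8):
    assert abs(laguerre_rec(n, 0.7) - laguerre_sum(n, 0.7)) < 1e-12
```

The recurrence is the standard way to evaluate $L_{n}$ in floating point, since the alternating closed-form sum loses accuracy for large n.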
Generalized Laguerre polynomials
For arbitrary real α the polynomial solutions of the differential equation[2]
$x\,y''+\left(\alpha +1-x\right)y'+n\,y=0$
are called generalized Laguerre polynomials, or associated Laguerre polynomials.
One can also define the generalized Laguerre polynomials recursively, defining the first two polynomials as
$L_{0}^{(\alpha )}(x)=1$
$L_{1}^{(\alpha )}(x)=1+\alpha -x$
and then using the following recurrence relation for any k ≥ 1:
$L_{k+1}^{(\alpha )}(x)={\frac {(2k+1+\alpha -x)L_{k}^{(\alpha )}(x)-(k+\alpha )L_{k-1}^{(\alpha )}(x)}{k+1}}.$
The simple Laguerre polynomials are the special case α = 0 of the generalized Laguerre polynomials:
$L_{n}^{(0)}(x)=L_{n}(x).$
The Rodrigues formula for them is
$L_{n}^{(\alpha )}(x)={x^{-\alpha }e^{x} \over n!}{d^{n} \over dx^{n}}\left(e^{-x}x^{n+\alpha }\right)={\frac {x^{-\alpha }}{n!}}\left({\frac {d}{dx}}-1\right)^{n}x^{n+\alpha }.$
The generating function for them is
$\sum _{n=0}^{\infty }t^{n}L_{n}^{(\alpha )}(x)={\frac {1}{(1-t)^{\alpha +1}}}e^{-tx/(1-t)}.$
Explicit examples and properties of the generalized Laguerre polynomials
• Laguerre functions are defined by confluent hypergeometric functions and Kummer's transformation as[3]
$L_{n}^{(\alpha )}(x):={n+\alpha \choose n}M(-n,\alpha +1,x).$
where $ {n+\alpha \choose n}$ is a generalized binomial coefficient. When n is an integer the function reduces to a polynomial of degree n. It has the alternative expression[4]
$L_{n}^{(\alpha )}(x)={\frac {(-1)^{n}}{n!}}U(-n,\alpha +1,x)$
in terms of Kummer's function of the second kind.
• The closed form for these generalized Laguerre polynomials of degree n is[5]
$L_{n}^{(\alpha )}(x)=\sum _{i=0}^{n}(-1)^{i}{n+\alpha \choose n-i}{\frac {x^{i}}{i!}}$
derived by applying Leibniz's theorem for differentiation of a product to Rodrigues' formula.
• Laguerre polynomials have a differential operator representation, much like the closely related Hermite polynomials. Namely, let $D={\frac {d}{dx}}$ and consider the differential operator $M=qxD^{2}+(\alpha +1)D$. Then $\exp(-tM)x^{n}=(-1)^{n}q^{n}t^{n}n!L_{n}^{(\alpha )}\left({\frac {x}{qt}}\right)$.
• The first few generalized Laguerre polynomials are:
${\begin{aligned}L_{0}^{(\alpha )}(x)&=1\\L_{1}^{(\alpha )}(x)&=-x+(\alpha +1)\\L_{2}^{(\alpha )}(x)&={\frac {x^{2}}{2}}-(\alpha +2)x+{\frac {(\alpha +1)(\alpha +2)}{2}}\\L_{3}^{(\alpha )}(x)&={\frac {-x^{3}}{6}}+{\frac {(\alpha +3)x^{2}}{2}}-{\frac {(\alpha +2)(\alpha +3)x}{2}}+{\frac {(\alpha +1)(\alpha +2)(\alpha +3)}{6}}\end{aligned}}$
• The coefficient of the leading term is (−1)n/n!;
• The constant term, which is the value at 0, is
$L_{n}^{(\alpha )}(0)={n+\alpha \choose n}={\frac {\Gamma (n+\alpha +1)}{n!\,\Gamma (\alpha +1)}};$
• If α is non-negative, then Ln(α) has n real, strictly positive roots (notice that $\left((-1)^{n-i}L_{n-i}^{(\alpha )}\right)_{i=0}^{n}$ is a Sturm chain), which are all in the interval $\left(0,n+\alpha +(n-1){\sqrt {n+\alpha }}\,\right].$
• The polynomials' asymptotic behaviour for large n, but fixed α and x > 0, is given by[6][7]
${\begin{aligned}&L_{n}^{(\alpha )}(x)={\frac {n^{{\frac {\alpha }{2}}-{\frac {1}{4}}}}{\sqrt {\pi }}}{\frac {e^{\frac {x}{2}}}{x^{{\frac {\alpha }{2}}+{\frac {1}{4}}}}}\sin \left(2{\sqrt {nx}}-{\frac {\pi }{2}}\left(\alpha -{\frac {1}{2}}\right)\right)+O\left(n^{{\frac {\alpha }{2}}-{\frac {3}{4}}}\right),\\[6pt]&L_{n}^{(\alpha )}(-x)={\frac {(n+1)^{{\frac {\alpha }{2}}-{\frac {1}{4}}}}{2{\sqrt {\pi }}}}{\frac {e^{-x/2}}{x^{{\frac {\alpha }{2}}+{\frac {1}{4}}}}}e^{2{\sqrt {x(n+1)}}}\cdot \left(1+O\left({\frac {1}{\sqrt {n+1}}}\right)\right),\end{aligned}}$
which can be summarized as
${\frac {L_{n}^{(\alpha )}\left({\frac {x}{n}}\right)}{n^{\alpha }}}\approx e^{x/2n}\cdot {\frac {J_{\alpha }\left(2{\sqrt {x}}\right)}{{\sqrt {x}}^{\alpha }}},$
where $J_{\alpha }$ is the Bessel function.
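The closed form and the recurrence of the previous section can be checked against each other; the sketch below is illustrative only, and uses `math.gamma` for the generalized binomial coefficient $ {n+\alpha \choose n-i}=\Gamma (n+\alpha +1)/{\bigl (}(n-i)!\,\Gamma (\alpha +i+1){\bigr )}$:

```python
from math import factorial, gamma

def genlaguerre_rec(n, alpha, x):
    """L_n^{(alpha)}(x) via the three-term recurrence; works for real alpha."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - x
    for k in range(1, n):
        prev, cur = cur, ((2*k + 1 + alpha - x)*cur - (k + alpha)*prev) / (k + 1)
    return cur

def genlaguerre_sum(n, alpha, x):
    """Closed form with generalized binomial coefficients (alpha > -1)."""
    total = 0.0
    for i in range(n + 1):
        binom = gamma(n + alpha + 1) / (gamma(alpha + i + 1) * factorial(n - i))
        total += (-1)**i * binom * x**i / factorial(i)
    return total

for n in range(6):
    for alpha in (0.0, 0.5, 2.3):
        assert abs(genlaguerre_rec(n, alpha, 1.1) - genlaguerre_sum(n, alpha, 1.1)) < 1e-10
```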
As a contour integral
Given the generating function specified above, the polynomials may be expressed in terms of a contour integral
$L_{n}^{(\alpha )}(x)={\frac {1}{2\pi i}}\oint _{C}{\frac {e^{-xt/(1-t)}}{(1-t)^{\alpha +1}\,t^{n+1}}}\;dt,$
where the contour circles the origin once in a counterclockwise direction without enclosing the essential singularity at 1.
Recurrence relations
The addition formula for Laguerre polynomials:[8]
$L_{n}^{(\alpha +\beta +1)}(x+y)=\sum _{i=0}^{n}L_{i}^{(\alpha )}(x)L_{n-i}^{(\beta )}(y).$
Laguerre's polynomials satisfy the recurrence relations
$L_{n}^{(\alpha )}(x)=\sum _{i=0}^{n}L_{n-i}^{(\alpha +i)}(y){\frac {(y-x)^{i}}{i!}},$
in particular
$L_{n}^{(\alpha +1)}(x)=\sum _{i=0}^{n}L_{i}^{(\alpha )}(x)$
and
$L_{n}^{(\alpha )}(x)=\sum _{i=0}^{n}{\alpha -\beta +n-i-1 \choose n-i}L_{i}^{(\beta )}(x),$
or
$L_{n}^{(\alpha )}(x)=\sum _{i=0}^{n}{\alpha -\beta +n \choose n-i}L_{i}^{(\beta -i)}(x);$
moreover
${\begin{aligned}L_{n}^{(\alpha )}(x)-\sum _{j=0}^{\Delta -1}{n+\alpha \choose n-j}(-1)^{j}{\frac {x^{j}}{j!}}&=(-1)^{\Delta }{\frac {x^{\Delta }}{(\Delta -1)!}}\sum _{i=0}^{n-\Delta }{\frac {n+\alpha \choose n-\Delta -i}{(n-i){n \choose i}}}L_{i}^{(\alpha +\Delta )}(x)\\[6pt]&=(-1)^{\Delta }{\frac {x^{\Delta }}{(\Delta -1)!}}\sum _{i=0}^{n-\Delta }{\frac {n+\alpha -i-1 \choose n-\Delta -i}{(n-i){n \choose i}}}L_{i}^{(n+\alpha +\Delta -i)}(x)\end{aligned}}$
They can be used to derive the four 3-point-rules
${\begin{aligned}L_{n}^{(\alpha )}(x)&=L_{n}^{(\alpha +1)}(x)-L_{n-1}^{(\alpha +1)}(x)=\sum _{j=0}^{k}{k \choose j}L_{n-j}^{(\alpha +k)}(x),\\[10pt]nL_{n}^{(\alpha )}(x)&=(n+\alpha )L_{n-1}^{(\alpha )}(x)-xL_{n-1}^{(\alpha +1)}(x),\\[10pt]&{\text{or }}\\{\frac {x^{k}}{k!}}L_{n}^{(\alpha )}(x)&=\sum _{i=0}^{k}(-1)^{i}{n+i \choose i}{n+\alpha \choose k-i}L_{n+i}^{(\alpha -k)}(x),\\[10pt]nL_{n}^{(\alpha +1)}(x)&=(n-x)L_{n-1}^{(\alpha +1)}(x)+(n+\alpha )L_{n-1}^{(\alpha )}(x)\\[10pt]xL_{n}^{(\alpha +1)}(x)&=(n+\alpha )L_{n-1}^{(\alpha )}(x)-(n-x)L_{n}^{(\alpha )}(x);\end{aligned}}$
combined, they give these additional, useful recurrence relations:
${\begin{aligned}L_{n}^{(\alpha )}(x)&=\left(2+{\frac {\alpha -1-x}{n}}\right)L_{n-1}^{(\alpha )}(x)-\left(1+{\frac {\alpha -1}{n}}\right)L_{n-2}^{(\alpha )}(x)\\[10pt]&={\frac {\alpha +1-x}{n}}L_{n-1}^{(\alpha +1)}(x)-{\frac {x}{n}}L_{n-2}^{(\alpha +2)}(x)\end{aligned}}$
Since $L_{n}^{(\alpha )}(x)$ is a monic polynomial of degree $n$ in $\alpha $, there is the partial fraction decomposition
${\begin{aligned}{\frac {n!\,L_{n}^{(\alpha )}(x)}{(\alpha +1)_{n}}}&=1-\sum _{j=1}^{n}(-1)^{j}{\frac {j}{\alpha +j}}{n \choose j}L_{n}^{(-j)}(x)\\&=1-\sum _{j=1}^{n}{\frac {x^{j}}{\alpha +j}}\,\,{\frac {L_{n-j}^{(j)}(x)}{(j-1)!}}\\&=1-x\sum _{i=1}^{n}{\frac {L_{n-i}^{(-\alpha )}(x)L_{i-1}^{(\alpha +1)}(-x)}{\alpha +i}}.\end{aligned}}$
The second equality follows by the following identity, valid for integer i and n and immediate from the expression of $L_{n}^{(\alpha )}(x)$ in terms of Charlier polynomials:
${\frac {(-x)^{i}}{i!}}L_{n}^{(i-n)}(x)={\frac {(-x)^{n}}{n!}}L_{i}^{(n-i)}(x).$
For the third equality apply the fourth and fifth identities of this section.
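The addition formula at the top of this section lends itself to an exact check with rational arithmetic; the following sketch (illustrative, for non-negative integer α and β) uses the integer closed form of $L_{n}^{(\alpha )}$:

```python
from fractions import Fraction
from math import comb, factorial

def L(n, alpha, x):
    """Exact L_n^{(alpha)}(x) for integer alpha >= 0 and rational x."""
    return sum(Fraction((-1)**i * comb(n + alpha, n - i), factorial(i)) * x**i
               for i in range(n + 1))

# Addition formula: L_n^{(a+b+1)}(x+y) = sum_i L_i^{(a)}(x) L_{n-i}^{(b)}(y)
n, a, b = 5, 2, 3
x, y = Fraction(1, 3), Fraction(3, 4)
lhs = L(n, a + b + 1, x + y)
rhs = sum(L(i, a, x) * L(n - i, b, y) for i in range(n + 1))
assert lhs == rhs
```

The identity follows from multiplying the two generating functions $(1-t)^{-\alpha -1}e^{-tx/(1-t)}$ and $(1-t)^{-\beta -1}e^{-ty/(1-t)}$, which is why the exact equality holds for every n.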
Derivatives of generalized Laguerre polynomials
Differentiating the power series representation of a generalized Laguerre polynomial k times leads to
${\frac {d^{k}}{dx^{k}}}L_{n}^{(\alpha )}(x)={\begin{cases}(-1)^{k}L_{n-k}^{(\alpha +k)}(x)&{\text{if }}k\leq n,\\0&{\text{otherwise.}}\end{cases}}$
This points to a special case (α = 0) of the formula above: for integer α = k the generalized polynomial may be written
$L_{n}^{(k)}(x)=(-1)^{k}{\frac {d^{k}L_{n+k}(x)}{dx^{k}}},$
the shift by k sometimes causing confusion with the usual parenthesis notation for a derivative.
Moreover, the following equation holds:
${\frac {1}{k!}}{\frac {d^{k}}{dx^{k}}}x^{\alpha }L_{n}^{(\alpha )}(x)={n+\alpha \choose k}x^{\alpha -k}L_{n}^{(\alpha -k)}(x),$
which generalizes with Cauchy's formula to
$L_{n}^{(\alpha ')}(x)=(\alpha '-\alpha ){\alpha '+n \choose \alpha '-\alpha }\int _{0}^{x}{\frac {t^{\alpha }(x-t)^{\alpha '-\alpha -1}}{x^{\alpha '}}}L_{n}^{(\alpha )}(t)\,dt.$
The derivative with respect to the second variable α has the form,[9]
${\frac {d}{d\alpha }}L_{n}^{(\alpha )}(x)=\sum _{i=0}^{n-1}{\frac {L_{i}^{(\alpha )}(x)}{n-i}}.$
This is evident from the contour integral representation above.
The generalized Laguerre polynomials obey the differential equation
$xL_{n}^{(\alpha )\prime \prime }(x)+(\alpha +1-x)L_{n}^{(\alpha )\prime }(x)+nL_{n}^{(\alpha )}(x)=0,$
which may be compared with the equation obeyed by the kth derivative of the ordinary Laguerre polynomial,
$xL_{n}^{[k]\prime \prime }(x)+(k+1-x)L_{n}^{[k]\prime }(x)+(n-k)L_{n}^{[k]}(x)=0,$
where $L_{n}^{[k]}(x)\equiv {\frac {d^{k}L_{n}(x)}{dx^{k}}}$ for this equation only.
In Sturm–Liouville form the differential equation is
$-\left(x^{\alpha +1}e^{-x}\cdot L_{n}^{(\alpha )}(x)^{\prime }\right)'=n\cdot x^{\alpha }e^{-x}\cdot L_{n}^{(\alpha )}(x),$
which shows that $L_{n}^{(\alpha )}$ is an eigenvector for the eigenvalue n.
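The derivative identity ${\frac {d^{k}}{dx^{k}}}L_{n}^{(\alpha )}=(-1)^{k}L_{n-k}^{(\alpha +k)}$ can be verified exactly on polynomial coefficients; the sketch below (illustrative, integer α) differentiates the coefficient list directly:

```python
from fractions import Fraction
from math import comb, factorial

def coeffs(n, alpha):
    """Coefficient list [c_0, ..., c_n] of L_n^{(alpha)} for integer alpha >= 0."""
    return [Fraction((-1)**i * comb(n + alpha, n - i), factorial(i)) for i in range(n + 1)]

def deriv(c):
    """Differentiate a polynomial given by its coefficient list."""
    return [i * c[i] for i in range(1, len(c))]

# Check d^k/dx^k L_n^{(alpha)} = (-1)^k L_{n-k}^{(alpha+k)} for n = 6, alpha = 1
n, alpha = 6, 1
for k in range(1, n + 1):
    c = coeffs(n, alpha)
    for _ in range(k):
        c = deriv(c)
    assert c == [(-1)**k * t for t in coeffs(n - k, alpha + k)]
```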
Orthogonality
The generalized Laguerre polynomials are orthogonal over [0, ∞) with respect to the measure with weighting function xα e−x:[10]
$\int _{0}^{\infty }x^{\alpha }e^{-x}L_{n}^{(\alpha )}(x)L_{m}^{(\alpha )}(x)dx={\frac {\Gamma (n+\alpha +1)}{n!}}\delta _{n,m},$
which follows from
$\int _{0}^{\infty }x^{\alpha '-1}e^{-x}L_{n}^{(\alpha )}(x)dx={\alpha -\alpha '+n \choose n}\Gamma (\alpha ').$
If $\Gamma (x,\alpha +1,1)$ denotes the gamma distribution then the orthogonality relation can be written as
$\int _{0}^{\infty }L_{n}^{(\alpha )}(x)L_{m}^{(\alpha )}(x)\Gamma (x,\alpha +1,1)dx={n+\alpha \choose n}\delta _{n,m}.$
The associated, symmetric kernel polynomial has the representations (Christoffel–Darboux formula)
${\begin{aligned}K_{n}^{(\alpha )}(x,y)&:={\frac {1}{\Gamma (\alpha +1)}}\sum _{i=0}^{n}{\frac {L_{i}^{(\alpha )}(x)L_{i}^{(\alpha )}(y)}{\alpha +i \choose i}}\\[4pt]&={\frac {1}{\Gamma (\alpha +1)}}{\frac {L_{n}^{(\alpha )}(x)L_{n+1}^{(\alpha )}(y)-L_{n+1}^{(\alpha )}(x)L_{n}^{(\alpha )}(y)}{{\frac {x-y}{n+1}}{n+\alpha \choose n}}}\\[4pt]&={\frac {1}{\Gamma (\alpha +1)}}\sum _{i=0}^{n}{\frac {x^{i}}{i!}}{\frac {L_{n-i}^{(\alpha +i)}(x)L_{n-i}^{(\alpha +i+1)}(y)}{{\alpha +n \choose n}{n \choose i}}};\end{aligned}}$
recursively
$K_{n}^{(\alpha )}(x,y)={\frac {y}{\alpha +1}}K_{n-1}^{(\alpha +1)}(x,y)+{\frac {1}{\Gamma (\alpha +1)}}{\frac {L_{n}^{(\alpha +1)}(x)L_{n}^{(\alpha )}(y)}{\alpha +n \choose n}}.$
Moreover,
$y^{\alpha }e^{-y}K_{n}^{(\alpha )}(\cdot ,y)\to \delta (y-\cdot ).$
Turán's inequality can be derived here:
$L_{n}^{(\alpha )}(x)^{2}-L_{n-1}^{(\alpha )}(x)L_{n+1}^{(\alpha )}(x)=\sum _{k=0}^{n-1}{\frac {\alpha +n-1 \choose n-k}{n{n \choose k}}}L_{k}^{(\alpha -1)}(x)^{2}>0.$
The following integral is needed in the quantum mechanical treatment of the hydrogen atom,
$\int _{0}^{\infty }x^{\alpha +1}e^{-x}\left[L_{n}^{(\alpha )}(x)\right]^{2}dx={\frac {(n+\alpha )!}{n!}}(2n+\alpha +1).$
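Because $\int _{0}^{\infty }x^{k}e^{-x}\,dx=k!$, both the orthogonality relation and the hydrogen-atom integral above can be checked exactly for integer α. The following sketch is illustrative:

```python
from fractions import Fraction
from math import comb, factorial

def coeffs(n, alpha):
    """Coefficients [c_0, ..., c_n] of L_n^{(alpha)} for integer alpha >= 0."""
    return [Fraction((-1)**i * comb(n + alpha, n - i), factorial(i)) for i in range(n + 1)]

def weighted_inner(n, m, alpha, shift=0):
    """Exact integral of x^(alpha+shift) e^{-x} L_n^{(alpha)} L_m^{(alpha)} over [0, inf),
    expanding the product and using ∫ x^k e^{-x} dx = k!."""
    a, b = coeffs(n, alpha), coeffs(m, alpha)
    return sum(a[i] * b[j] * factorial(i + j + alpha + shift)
               for i in range(len(a)) for j in range(len(b)))

alpha = 2
for n in range(5):
    for m in range(5):
        expected = Fraction(factorial(n + alpha), factorial(n)) if n == m else 0
        assert weighted_inner(n, m, alpha) == expected        # orthogonality
    # Hydrogen-atom normalization integral, weight x^(alpha+1):
    assert weighted_inner(n, n, alpha, shift=1) == \
        Fraction(factorial(n + alpha), factorial(n)) * (2*n + alpha + 1)
```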
Series expansions
Let a function have the (formal) series expansion
$f(x)=\sum _{i=0}^{\infty }f_{i}^{(\alpha )}L_{i}^{(\alpha )}(x).$
Then
$f_{i}^{(\alpha )}=\int _{0}^{\infty }{\frac {L_{i}^{(\alpha )}(x)}{i+\alpha \choose i}}\cdot {\frac {x^{\alpha }e^{-x}}{\Gamma (\alpha +1)}}\cdot f(x)\,dx.$
The series converges in the associated Hilbert space L2[0, ∞) if and only if
$\|f\|_{L^{2}}^{2}:=\int _{0}^{\infty }{\frac {x^{\alpha }e^{-x}}{\Gamma (\alpha +1)}}|f(x)|^{2}\,dx=\sum _{i=0}^{\infty }{i+\alpha \choose i}|f_{i}^{(\alpha )}|^{2}<\infty .$
Further examples of expansions
Monomials are represented as
${\frac {x^{n}}{n!}}=\sum _{i=0}^{n}(-1)^{i}{n+\alpha \choose n-i}L_{i}^{(\alpha )}(x),$
while binomials have the parametrization
${n+x \choose n}=\sum _{i=0}^{n}{\frac {\alpha ^{i}}{i!}}L_{n-i}^{(x+i)}(\alpha ).$
This leads directly to
$e^{-\gamma x}=\sum _{i=0}^{\infty }{\frac {\gamma ^{i}}{(1+\gamma )^{i+\alpha +1}}}L_{i}^{(\alpha )}(x)\qquad {\text{convergent iff }}\Re (\gamma )>-{\tfrac {1}{2}}$
for the exponential function. The incomplete gamma function has the representation
$\Gamma (\alpha ,x)=x^{\alpha }e^{-x}\sum _{i=0}^{\infty }{\frac {L_{i}^{(\alpha )}(x)}{1+i}}\qquad \left(\Re (\alpha )>-1,x>0\right).$
In quantum mechanics
In quantum mechanics the Schrödinger equation for the hydrogen-like atom is exactly solvable by separation of variables in spherical coordinates. The radial part of the wave function is a (generalized) Laguerre polynomial.[11]
Vibronic transitions in the Franck-Condon approximation can also be described using Laguerre polynomials.[12]
Multiplication theorems
Erdélyi gives the following two multiplication theorems.[13]
${\begin{aligned}&t^{n+1+\alpha }e^{(1-t)z}L_{n}^{(\alpha )}(zt)=\sum _{k=n}^{\infty }{k \choose n}\left(1-{\frac {1}{t}}\right)^{k-n}L_{k}^{(\alpha )}(z),\\[6pt]&e^{(1-t)z}L_{n}^{(\alpha )}(zt)=\sum _{k=0}^{\infty }{\frac {(1-t)^{k}z^{k}}{k!}}L_{n}^{(\alpha +k)}(z).\end{aligned}}$
Relation to Hermite polynomials
The generalized Laguerre polynomials are related to the Hermite polynomials:
${\begin{aligned}H_{2n}(x)&=(-1)^{n}2^{2n}n!L_{n}^{(-1/2)}(x^{2})\\[4pt]H_{2n+1}(x)&=(-1)^{n}2^{2n+1}n!xL_{n}^{(1/2)}(x^{2})\end{aligned}}$
where the Hn(x) are the Hermite polynomials based on the weighting function exp(−x2), the so-called "physicist's version."
Because of this, the generalized Laguerre polynomials arise in the treatment of the quantum harmonic oscillator.
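Both relations can be spot-checked numerically with simple recurrences for $H_{n}$ and $L_{n}^{(\alpha )}$; the sketch below is illustrative only:

```python
from math import factorial

def hermite(n, x):
    """Physicists' Hermite polynomial via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2*x*h - 2*k*h_prev
    return h

def genlaguerre(n, alpha, x):
    """L_n^{(alpha)}(x) via the three-term recurrence (any real alpha)."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - x
    for k in range(1, n):
        prev, cur = cur, ((2*k + 1 + alpha - x)*cur - (k + alpha)*prev) / (k + 1)
    return cur

x = 0.7
for n in range(5):
    lhs = hermite(2*n, x)
    rhs = (-1)**n * 4**n * factorial(n) * genlaguerre(n, -0.5, x*x)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
    lhs = hermite(2*n + 1, x)
    rhs = (-1)**n * 2 * 4**n * factorial(n) * x * genlaguerre(n, 0.5, x*x)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```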
Relation to hypergeometric functions
The Laguerre polynomials may be defined in terms of hypergeometric functions, specifically the confluent hypergeometric functions, as
$L_{n}^{(\alpha )}(x)={n+\alpha \choose n}M(-n,\alpha +1,x)={\frac {(\alpha +1)_{n}}{n!}}\,_{1}F_{1}(-n,\alpha +1,x)$
where $(a)_{n}$ is the Pochhammer symbol (which in this case represents the rising factorial).
Hardy–Hille formula
The generalized Laguerre polynomials satisfy the Hardy–Hille formula[14][15]
$\sum _{n=0}^{\infty }{\frac {n!\,\Gamma \left(\alpha +1\right)}{\Gamma \left(n+\alpha +1\right)}}L_{n}^{(\alpha )}(x)L_{n}^{(\alpha )}(y)t^{n}={\frac {1}{(1-t)^{\alpha +1}}}e^{-(x+y)t/(1-t)}\,_{0}F_{1}\left(;\alpha +1;{\frac {xyt}{(1-t)^{2}}}\right),$
where the series on the left converges for $\alpha >-1$ and $|t|<1$. Using the identity
$\,_{0}F_{1}(;\alpha +1;z)=\,\Gamma (\alpha +1)z^{-\alpha /2}I_{\alpha }\left(2{\sqrt {z}}\right),$
(see generalized hypergeometric function), this can also be written as
$\sum _{n=0}^{\infty }{\frac {n!}{\Gamma (1+\alpha +n)}}L_{n}^{(\alpha )}(x)L_{n}^{(\alpha )}(y)t^{n}={\frac {1}{(xyt)^{\alpha /2}(1-t)}}e^{-(x+y)t/(1-t)}I_{\alpha }\left({\frac {2{\sqrt {xyt}}}{1-t}}\right).$
This formula is a generalization of the Mehler kernel for Hermite polynomials, which can be recovered from it by using the relations between Laguerre and Hermite polynomials given above.
Physicist Scaling Convention
The generalized Laguerre polynomials are used to describe the quantum wavefunction for hydrogen atom orbitals. In the introductory literature on this topic,[16][17][18] a different scaling is used for the generalized Laguerre polynomials than the scaling presented in this article. In the convention taken here, the generalized Laguerre polynomials can be expressed as [19]
$L_{n}^{(\alpha )}(x)={\frac {\Gamma (\alpha +n+1)}{\Gamma (\alpha +1)n!}}\,_{1}F_{1}(-n;\alpha +1;x),$
where $\,_{1}F_{1}(a;b;x)$ is the confluent hypergeometric function. In the physics literature, such as [18], the generalized Laguerre polynomials are instead defined as
${\bar {L}}_{n}^{(\alpha )}(x)={\frac {\left[\Gamma (\alpha +n+1)\right]^{2}}{\Gamma (\alpha +1)n!}}\,_{1}F_{1}(-n;\alpha +1;x).$
The physicist version is related to the standard version by
${\bar {L}}_{n}^{(\alpha )}(x)=(n+\alpha )!L_{n}^{(\alpha )}(x).$
There is yet another convention in use, though less frequently, in the physics literature. Under this convention the Laguerre polynomials are given by [20][21][22]
${\tilde {L}}_{n}^{(\alpha )}(x)=(-1)^{\alpha }{\bar {L}}_{n-\alpha }^{(\alpha )}(x).$
Umbral Calculus Convention
Generalized Laguerre polynomials are linked to umbral calculus: when multiplied by $n!$, they form a Sheffer sequence for $D/(D-I)$. In the umbral calculus convention,[23] the default Laguerre polynomials are defined to be
${\mathcal {L}}_{n}(x)=n!L_{n}^{(-1)}(x)=\sum _{k=0}^{n}L(n,k)(-x)^{k}$
where $ L(n,k)={\binom {n-1}{k-1}}{\frac {n!}{k!}}$ are the signless Lah numbers. $ ({\mathcal {L}}_{n}(x))_{n\in \mathbb {N} }$ is a sequence of polynomials of binomial type, i.e., they satisfy
${\mathcal {L}}_{n}(x+y)=\sum _{k=0}^{n}{\binom {n}{k}}{\mathcal {L}}_{k}(x){\mathcal {L}}_{n-k}(y)$
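The Lah-number expansion and the binomial-type identity above are easy to verify directly for small $n$. A Python sketch in exact arithmetic (function names are illustrative):

```python
from fractions import Fraction
from math import comb, factorial

def lah(n, k):
    """Signless Lah number L(n, k) = C(n-1, k-1) * n!/k!."""
    if k == 0:
        return 1 if n == 0 else 0
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

def L_umbral(n, x):
    """Umbral-convention Laguerre polynomial: sum_k L(n, k) * (-x)^k."""
    return sum(lah(n, k) * (-x) ** k for k in range(n + 1))

# check the binomial-type identity at sample points x = 2, y = 5
x, y = Fraction(2), Fraction(5)
for n in range(7):
    lhs = L_umbral(n, x + y)
    rhs = sum(comb(n, k) * L_umbral(k, x) * L_umbral(n - k, y)
              for k in range(n + 1))
    assert lhs == rhs
```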
See also
• Orthogonal polynomials
• Rodrigues' formula
• Angelescu polynomials
• Bessel polynomials
• Denisyuk polynomials
• Transverse mode, an important application of Laguerre polynomials to describe the field intensity within a waveguide or laser beam profile.
Notes
1. N. Sonine (1880). "Recherches sur les fonctions cylindriques et le développement des fonctions continues en séries". Math. Ann. 16 (1): 1–80. doi:10.1007/BF01459227. S2CID 121602983.
2. A&S p. 781
3. A&S p. 509
4. A&S p. 510
5. A&S p. 775
6. Szegő, p. 198.
7. D. Borwein, J. M. Borwein, R. E. Crandall, "Effective Laguerre asymptotics", SIAM J. Numer. Anal., vol. 46 (2008), no. 6, pp. 3285–3312 doi:10.1137/07068031X
8. A&S equation (22.12.6), p. 785
9. Koepf, Wolfram (1997). "Identities for families of orthogonal polynomials and special functions". Integral Transforms and Special Functions. 5 (1–2): 69–102. CiteSeerX 10.1.1.298.7657. doi:10.1080/10652469708819127.
10. "Associated Laguerre Polynomial".
11. Ratner, Mark A.; Schatz, George C. (2001). Quantum Mechanics in Chemistry. Prentice Hall. pp. 90–91. ISBN 0-13-895491-7.
12. Jong, Mathijs de; Seijo, Luis; Meijerink, Andries; Rabouw, Freddy T. (2015-06-24). "Resolving the ambiguity in the relation between Stokes shift and Huang–Rhys parameter". Physical Chemistry Chemical Physics. 17 (26): 16959–16969. Bibcode:2015PCCP...1716959D. doi:10.1039/C5CP02093J. hdl:1874/321453. ISSN 1463-9084. PMID 26062123. S2CID 34490576.
13. C. Truesdell, "On the Addition and Multiplication Theorems for the Special Functions", Proceedings of the National Academy of Sciences, Mathematics, (1950) pp. 752–757.
14. Szegő, p. 102.
15. W. A. Al-Salam (1964), "Operational representations for Laguerre and other polynomials", Duke Math J. 31 (1): 127–142.
16. Griffiths, David J. (2005). Introduction to quantum mechanics (2nd ed.). Upper Saddle River, NJ: Pearson Prentice Hall. ISBN 0131118927.
17. Sakurai, J. J. (2011). Modern quantum mechanics (2nd ed.). Boston: Addison-Wesley. ISBN 978-0805382914.
18. Merzbacher, Eugen (1998). Quantum mechanics (3rd ed.). New York: Wiley. ISBN 0471887021.
19. Abramowitz, Milton (1965). Handbook of mathematical functions, with formulas, graphs, and mathematical tables. New York: Dover Publications. ISBN 978-0-486-61272-0.
20. Schiff, Leonard I. (1968). Quantum mechanics (3d ed.). New York: McGraw-Hill. ISBN 0070856435.
21. Messiah, Albert (2014). Quantum Mechanics. Dover Publications. ISBN 9780486784557.
22. Boas, Mary L. (2006). Mathematical methods in the physical sciences (3rd ed.). Hoboken, NJ: Wiley. ISBN 9780471198260.
23. Rota, Gian-Carlo; Kahaner, D; Odlyzko, A (1973-06-01). "On the foundations of combinatorial theory. VIII. Finite operator calculus". Journal of Mathematical Analysis and Applications. 42 (3): 684–760. doi:10.1016/0022-247X(73)90172-8. ISSN 0022-247X.
References
• Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 22". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 773. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
• G. Szegő, Orthogonal polynomials, 4th edition, Amer. Math. Soc. Colloq. Publ., vol. 23, Amer. Math. Soc., Providence, RI, 1975.
• Koornwinder, Tom H.; Wong, Roderick S. C.; Koekoek, Roelof; Swarttouw, René F. (2010), "Orthogonal Polynomials", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
• B. Spain, M.G. Smith, Functions of mathematical physics, Van Nostrand Reinhold Company, London, 1970. Chapter 10 deals with Laguerre polynomials.
• "Laguerre polynomials", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Eric W. Weisstein, "Laguerre Polynomial", From MathWorld—A Wolfram Web Resource.
• George Arfken and Hans Weber (2000). Mathematical Methods for Physicists. Academic Press. ISBN 978-0-12-059825-0.
External links
• Timothy Jones. "The Legendre and Laguerre Polynomials and the elementary quantum mechanical model of the Hydrogen Atom".
• Weisstein, Eric W. "Laguerre polynomial". MathWorld.
Sonja Lyttkens
Sonja Lyttkens (26 August 1919 – 18 December 2014)[1] was a Swedish mathematician, the third woman to earn a mathematics doctorate in Sweden and the first of these women to obtain a permanent university position in mathematics.[2] She is also known for her work to make academia less hostile to women,[3] and for pointing out that the Swedish taxation system of the time, which provided an income deduction for husbands of non-working wives, pressured women even in low-income families not to work. Her observations helped push Sweden into taxing married people separately from their spouses.[4][5]
Education and career
Lyttkens grew up in Halmstad and Karlskrona, and moved to Kalmar in 1930. She moved again to Uppsala in 1937 to study mathematics, but her studies were interrupted by marriage and children.[6] She earned a licentiate in 1951,[6] and completed her Ph.D. at Uppsala University in 1956. Her dissertation, The Remainder In Tauberian Theorems, concerned Tauberian theorems and was jointly supervised by Arne Beurling and Lennart Carleson.[7] She was the third woman to earn a doctorate in mathematics in Sweden, after Louise Petrén-Overton in 1911 and Ingrid Lindström in 1947.[2]
Although Sofya Kovalevskaya had become a full professor of mathematics in a private university in Stockholm in 1884,[8] women were forbidden from holding public university positions in Sweden until 1925,[8][3] and both Petrén and Lindström became schoolteachers.[2] Lyttkens obtained a permanent position as a senior lecturer at Uppsala University in 1963,[2][3] and in 1970 she became the university's first female inspektor (an honorary chair of a student union), for the Kalmar nation.[3] She retired in 1984.[1]
Personal life
Lyttkens was the daughter of Swedish sculptor Anna Petrus and her husband, physician Harald Lyttkens. Two of her children, Ulla Lyttkens and Harald Hamrell, both became film actors and directors.[6]
As well as working in mathematics, Lyttkens also painted watercolors before and after her retirement, and had several exhibitions of her paintings.[1]
References
1. "Sonja Lyttkens 1919–2014, Mathematician", Celebrities buried in the Old Cemetery in Uppsala, Church of Sweden, retrieved 2020-02-04
2. Balog, Antal; Szász, Domokos; Recski, András; Katona, Gyula O. H., eds. (1998), "Round Table D: Women and Mathematics", European Congress of Mathematics: Budapest, July 22–26, 1996, Volume 2, Progress in Mathematics, vol. 169, Springer, p. 362, ISBN 9783764354985
3. Women in Uppsala University history, Uppsala University, retrieved 2020-02-04
4. Gustaffson, Siv (1995), "Single mothers in Sweden: Why is poverty less severe?", in McFate, Katherine; Lawson, Roger; Wilson, William Julius (eds.), Poverty, Inequality, and the Future of Social Policy: Western States in the New World Order, Russell Sage Foundation, pp. 291–327, ISBN 9781610446686. See in particular p. 299.
5. Nyberg, Anita (2012), "Retour sur l'imposition séparée en Suède", Travail, Genre et Sociétés (in French), 27 (1): 163, doi:10.3917/tgs.027.0163
6. "Sonja Lyttkens", Upsala Nya Tidning (in Swedish), 14 January 2015
7. Sonja Lyttkens at the Mathematics Genealogy Project
8. Lyttkens, Sonja (25 November 1962), "Under strecket 1962: "Kvinna i matematikens värld"", Svenska Dagbladet (in Swedish)
Further reading
• Sonja Lyttkens at Svenskt kvinnobiografiskt lexikon
Sonja Petrović (statistician)
Sonja Petrović is a Serbian-American statistician and associate professor in the Department of Applied Mathematics, College of Computing, at Illinois Institute of Technology. Her research focuses on mathematical statistics and algebraic statistics, applied and computational algebraic geometry, and random graph (network) models.[1] She was elected to the International Statistical Institute in 2015.[2]
Sonja Petrović
Citizenship: American
Alma mater:
• University of Kentucky (Ph.D.)
• University of Tennessee at Chattanooga (B.S.)
Awards: Elected to the International Statistical Institute
Scientific career
Fields: Mathematics
Institutions: Illinois Institute of Technology
Thesis: Algebraic and Combinatorial Properties of Certain Toric Ideals in the Theory and Applications (2008)
Doctoral advisor: Uwe Nagel
Website: http://www.sonjapetrovicstats.com
Education and career
Petrović did her undergraduate work at the University of Tennessee at Chattanooga and received her B.S. degree in applied mathematics, magna cum laude, in 2003. She minored in music performance at Chattanooga.[3] Petrović did her doctoral work in mathematics at the University of Kentucky in Lexington, Kentucky, specializing in commutative algebra. Her dissertation, Algebraic and Combinatorial Properties of Certain Toric Ideals in the Theory and Applications, was directed by Uwe Nagel. Petrović was awarded her Ph.D. by Kentucky in 2008.[4]
After her doctoral studies, Petrović held a post-doctoral position at the University of Illinois at Chicago from 2008 to 2011. She was a research fellow at the Statistical and Applied Mathematical Sciences Institute based in Research Triangle Park, North Carolina, in 2009, participating in the Program on Algebraic Methods in Systems Biology and Statistics.[3] After holding the position of assistant professor of statistics at Pennsylvania State University from 2011 to 2013, Petrović joined the faculty of Illinois Tech as an assistant professor of applied mathematics in 2013. She was promoted to associate professor at Illinois Tech in 2017.
In 2011, Petrović visited the Mittag-Leffler Institute in Djursholm, Sweden and participated in the program "Algebraic Geometry with a View Towards Applications".[5] In 2016, she was a long-term participant in the "Theoretical Foundations of Statistical Network Analysis Program" at the Isaac Newton Institute of Mathematical Sciences in Cambridge, United Kingdom.[6] Petrović was a co-organizer of the “Summer School on Randomness and Learning in Non-Linear Algebra” at the Max Planck Institute for Mathematics located in Leipzig, Germany in July 2019.[7]
She received an Illinois Tech College of Science Junior Research Excellence Award in 2015 and an Excellence in Teaching Award for the College of Science in April 2018.[8]
References
1. "Illinois Tech: Sonja Petrović". Illinois Institute of Technology. Retrieved 8 April 2021.
2. "Individual Members". International Statistics Institute. Retrieved 8 April 2021.
3. "Sonja Petrovic - Penn State Personal Web Server". Penn State University. Retrieved 8 April 2021.
4. Sonja Petrović at the Mathematics Genealogy Project
5. "Algebraic Geometry with a View Towards Applications". Mittag-Leffler Institute. Retrieved 15 April 2021.
6. "Theoretical Foundations of Statistical Network Analysis Program". Isaac Newton Institute for Mathematical Statistics. 25 February 2021. Retrieved 15 April 2021.
7. "Sonja Petrović". Max-Planck Institut für Mathematik. Retrieved 8 April 2021.
8. "2018 Excellence in Teaching Awards Announced". Illinois Tech Today. Illinois Institute of Technology. Retrieved 8 April 2021.
External links
• Sonja Petrović Author Profile at MathSciNet
• Sonja Petrović publications indexed by Google Scholar
• Official website
Sonya Christian
Sonya Christian is the Chancellor of the California Community Colleges[1] and the 6th Chancellor of the Kern Community College District (Kern District). She previously served as the 10th President of Bakersfield College from 2013–2021.
Sonya Christian
Chancellor, California Community Colleges
Incumbent; assumed office 2021
10th President of Bakersfield College
In office 2013–2021
Personal details
Born: Kerala, India
Alma mater: University of Kerala (BS); University of Southern California (MS); University of California, Los Angeles (Ph.D.)
Education and career
Christian received a bachelor of science degree from the University of Kerala in Kerala, India. She earned her master of science in applied mathematics from the University of Southern California. She earned her doctorate from the University of California, Los Angeles.[2]
In 1991, she began her career at Bakersfield College as a mathematics professor. During her 12 years at Bakersfield College, she also served as division chair, then Dean of Science, Engineering, Allied Health, and Mathematics.[3]
In 2003, Christian was named Associate Vice President for Instruction at Lane Community College in Eugene, Oregon. She later became Vice President of Academic and Student Affairs and Chief Academic Officer.[2]
She returned to Bakersfield College in 2013 after being named Bakersfield College's 10th president.[4] During her tenure, she focused her work on student success with equity. In 2015, those efforts were rewarded when Bakersfield College's "Making it Happen" program was named a 2015 Exemplary Program by the Board of Governors of California Community Colleges.[5] The California Community Colleges honored Bakersfield College with the 2018 Chancellor's Student Success Award for its work with High-Touch, High-Tech Transfer Pathways.[6] That program was also honored in 2019, when Bakersfield College was named a 2019 Innovation of the Year Award winner.[7] Also in 2019, Bakersfield College was named as a recipient of the Council for Higher Education Accreditation International Quality Group's CIQG Quality Award for the college's work in improving student outcomes.[8]
Christian also served as Chair of the Accrediting Commission for Community and Junior Colleges (2020–2022). She was appointed by the Governor to the Student Centered Funding Formula Oversight Committee,[9] where she served from 2018 to 2022. She serves on the boards of the Campaign for College Opportunity and the Equity in Policy Implementation Board, and chairs the California Community Colleges Women's Caucus.
On April 19, 2021, the Kern Community College District Board of Trustees announced that Christian would become the district's sixth chancellor, beginning her term in July 2021.[10]
Awards and accolades
The National Council for Marketing & Public Relations awarded Christian its District 6 Pacesetter of the Year Award.[11]
Assemblyman Rudy Salas nominated Christian as 2016 Woman of the Year for CA-32.[12]
In 2018, the Delano Chamber of Commerce selected Christian as co-recipient of the annual Educational Award, which she shared with Assemblymember Rudy Salas.
In 2019, Christian was named Woman of the Year by the Kern County Hispanic Chamber of Commerce.[13]
References
1. "Meet the Chancellor | California Community Colleges Chancellor's Office". www.cccco.edu. Retrieved 2023-07-17.
2. "Meet Chancellor Christian | Kern Community College District". www.kccd.edu. Retrieved 2023-01-04.
3. "BC's new president hired for 'depth and breadth' of experience | KBAK". bakersfieldnow.com. Retrieved 2023-01-04.
4. [email protected], RACHEL COOK Californian staff writer. "Bakersfield College hires former faculty member to be 10th president". The Bakersfield Californian. Retrieved 2023-01-04.
5. "Bakersfield College's "Making it Happen" Named 2015 Exemplary Program by the Board of Governors of California Community Colleges | Bakersfield College". www.bakersfieldcollege.edu. Retrieved 2023-01-05.
6. alanavillemez (2018-12-18). "Bakersfield College and College of the Desert Honored with Student Success Award". CCCCO News Center. Retrieved 2023-01-05.
7. "2019 Innovation of the Year Award Winners | The League for Innovation in the Community College". www.league.org. Retrieved 2023-01-05.
8. "Bakersfield College receives 2019 CIQG Quality Award". The Bakersfield Californian. Retrieved 2023-01-05.
9. "SCFFoversightcommittee - Committee Members". www.scffoversightcommittee.org. Retrieved 2023-01-05.
10. Mayer, Steven. "Sonya Christian chosen to serve as sixth chancellor of the Kern Community College District". The Bakersfield Californian. Retrieved 2023-01-05.
11. [email protected], LAUREN FOREMAN, The Bakersfield Californian. "THE GRADE: The ins & outs of education in Kern County". The Bakersfield Californian. Retrieved 2023-01-04.{{cite web}}: CS1 maint: multiple names: authors list (link)
12. "Bakersfield College President Sonya Christian named Woman of the Year | Bakersfield College". www.bakersfieldcollege.edu. Retrieved 2023-01-04.
13. staff, Eyewitness News (2017-02-16). "BC President named woman of the year by Kern County Hispanic Chamber of Commerce". KBAK. Retrieved 2023-01-04.
Sophia Levy
Sophia Hazel Levy McDonald (December 12, 1888 – December 6, 1963) was an American astronomer, numerical analyst, and mathematics educator.[1] She became the second tenured woman on mathematics faculty at the University of California, Berkeley, at a time when it was unusual for top mathematics programs to have even one female mathematician.[2] Her main research topic concerned the orbits of comets and minor planets.[1]
Education and career
Levy majored in astronomy at the University of California, Berkeley, graduating in 1910, and continued at the university for graduate study in astronomy, supporting herself as Watson Assistant in Astronomy, University Fellow in Astronomy, assistant to the dean of the graduate division, and secretary to the California State Board of Education. She completed her Ph.D. in 1920, but continued in her secretarial work for a few more years, returning to the university to manage the office of the University of California Press and again taking a position as assistant to the dean.[1]
After obtaining a position as a research assistant in astronomy, she was hired in 1921 or 1923 as an instructor in mathematics. She obtained a tenure-track position as assistant professor in 1924 or 1925,[3] the second to do so after Pauline Sperry. Beginning in 1933, six mathematics faculty members including Levy, Annie Biddle, three male instructors, and a male assistant professor were all considered for termination, as part of an increased push for research excellence at the university that also included the hire of Griffith C. Evans as a new department chair in 1936. Biddle was let go, with the explanation that because she had married she did not need to remain employed, while the three male instructors were kept on, with the explanation for at least one being that because he was married and had a child he did need to remain employed. Levy and the other assistant professor were also kept on, in Levy's case at least in part because she was unmarried and was supporting another family member, her mother.[2]
Continuing at Berkeley, Levy helped found the Northern California Section of the Mathematical Association of America in 1939, and served as its second chair and later as its sectional governor. She was also councilor general of Pi Mu Epsilon. She was tenured as an associate professor in 1940, and promoted to full professor in 1949. She retired in 1954, becoming a professor emerita.[1]
Research and publications
Levy's main research topic was theoretical astronomy, and involved calculations of the orbits of comets and minor planets, the perturbations of those orbits by Jupiter, and the use of the observed perturbations to more accurately estimate the mass of Jupiter.[1] Her dissertation, The theory of motion of the planet (175) Andromache, concerned the minor planet 175 Andromache, and was supervised by Armin Otto Leuschner.[4] Some of her early work was represented in a paper with Leuschner and Anna Estelle Glancy in the Memoirs of the National Academy of Sciences, and she continued to collaborate with Leuschner for many years.[1]
After becoming a mathematics instructor, she also came to work in numerical analysis, "including such subjects as interpolation methods, mechanical quadratures, the numerical solution of algebraic and transcendental equations, Fourier analysis and periodogram analysis". During World War II she directed a mathematics education program for the US Army at Berkeley,[1] and published a textbook, Introductory Artillery Mathematics and Antiaircraft Mathematics (University of California Press, 1943).[1][5]
Personal life
Sophia Levy was born in Alameda, California, on December 12, 1888; her parents were also native Californians, born in former California Gold Rush communities.[1]
In 1944 she married another Berkeley mathematician, John Hector McDonald. The marriage came after McDonald's retirement from Berkeley.[1] By waiting for his retirement to marry him, Levy evaded the university's anti-nepotism rules which might well have terminated her job (but not his) if they married while he was still an active faculty member, as happened for instance at another university to Josephine M. Mitchell.[2]
Her husband died in 1953, and she died on December 6, 1963, in Oakland, California.[1]
References
1. Lenzen, V. F.; Einarsson, S.; Evans, G. C. (April 1965), "Sophia Levy McDonald, Mathematics: Berkeley, 1888–1963, Professor Emeritus", University of California: In Memoriam
2. Kessel, Cathy (March 2022), "Tenured Women at Berkeley Before 1980" (PDF), Notices of the American Mathematical Society, 69 (3): 427–438, doi:10.1090/noti2448, S2CID 246755695
3. Lenzen, Einarsson & Evans (1965) give the later dates, while Kessel (2022) gives the earlier dates, noting the discrepancy in a footnote.
4. "Sophia Hazel Levy McDonald (Sophia Hazel Levy) (1888–1963)", AstroGen, American Astronomical Society, retrieved 2022-02-16
5. Bacon, H. M. (March 1944), "Review of Introductory Artillery Mathematics and Antiaircraft Mathematics", The Mathematics Teacher, 37 (3): 138–139, JSTOR 27952848
Sophie Dabo-Niang
Sophie Dabo-Niang (née Dabo) is a Senegalese and French mathematician, statistician, and professor[1] who has done outreach to increase the status of African mathematicians.
Sophie Dabo
Born: Senegal
Occupation(s): Mathematician, professor
Children: 4
Biography
Early life
Dabo-Niang was encouraged to pursue mathematics by her parents and her teachers, and knew early in high school that she wanted to study mathematics.[1]
Education
Dabo-Niang earned her PhD in 2002 from Pierre and Marie Curie University in Paris.[1] She enjoys passing on her passion for mathematics to her students.[1]
Marriage and children
As of 2016, Dabo-Niang is married.[1] She had 3 children between starting her master's degree and finishing her doctoral thesis, and has 4 children in total.[1][2] She has said that balancing parenting and her mathematics career has been a challenge, and she credits her persistence to her desire to succeed and the support of her husband.[1]
Mathematical work
Dabo-Niang has published articles on functional statistics, nonparametric and semiparametric estimation for weakly dependent processes, spatial statistics, and mathematical epidemiology.[3]
Dabo-Niang serves as an editor of the journal Revista Colombiana de Estadística[4][5] and is on the scientific committee of the Centre International de Mathématiques Pures et Appliquées (CIMPA).[6]
Professorship and developing country outreach
Sophie Dabo-Niang has successfully supervised the doctoral theses of several students in Africa.[7] As of January 2021 she is a full professor at the University of Lille and is supervising and co-supervising multiple African students.[1] She has taught master's-level statistics courses, including in Senegal.[1]
She introduced the subfield of spatial statistics at a university in Dakar, Senegal, and supervised the first Senegalese and Mauritanian doctoral students focusing on the field. She often serves on thesis juries in Africa.[1]
Dabo-Niang has coordinated scientific events in Africa. In Senegal, she coordinated a CIMPA event and an event to encourage young girls in the mathematical sciences.[1] She serves as the chair of the Developing Countries Committee for the European Mathematical Society.[5][8]
Books
• Functional and Operatorial Statistics. Contributions to Statistics. Sophie Dabo-Niang, Frédéric Ferraty (eds.). Physica-Verlag Heidelberg. 2008. ISBN 978-3-7908-2061-4. Retrieved 2021-01-15.{{cite book}}: CS1 maint: others (link)
• Mathematical Modeling of Random and Deterministic Phenomena. Solym Mawaki Mamou-Abi, Sophie Dabo-Niang, Jean-Jacques Salone (eds.). ISTE, Wiley. 2020-02-01. ISBN 978-1-78630-454-4. Retrieved 2021-01-15.{{cite book}}: CS1 maint: others (link)
Articles
• Dabo-Niang, Sophie; Rhomari, Noureddine (2003-01-01). "Estimation non paramétrique de la régression avec variable explicative dans un espace métrique". Comptes Rendus Mathematique. 336 (1): 75–80. doi:10.1016/S1631-073X(02)00012-2. ISSN 1631-073X. Retrieved 2021-01-15.
• Dabo-Niang, Sophie (2007). "Kernel Regression Estimation for Continuous Spatial Processes". Mathematical Methods of Statistics. 16 (4): 298–317. doi:10.3103/S1066530707040023. S2CID 121410227. Retrieved 2021-01-15.
• Dabo-Niang, Sophie; Ferraty, Frédéric; Vieu, Philippe (2007-06-15). "On the using of modal curves for radar waveforms classification". Computational Statistics & Data Analysis. 51 (10): 4878–4890. doi:10.1016/j.csda.2006.07.012. ISSN 0167-9473. Retrieved 2021-01-15.
• Chebana, Fateh; Dabo‐Niang, Sophie; Ouarda, Taha B. M. J. (2012). "Exploratory functional flood frequency analysis and outlier detection". Water Resources Research. 48 (4). doi:10.1029/2011WR011040. ISSN 1944-7973.
• Dabo-Niang, Sophie; Yao, Anne-Françoise (2013-01-01). "Kernel spatial density estimation in infinite dimension space". Metrika. 76 (1): 19–52. doi:10.1007/s00184-011-0374-4. ISSN 1435-926X. S2CID 121408701. Retrieved 2021-01-15.
Honours, decorations, awards and distinctions
The African Women in Mathematics Association has profiled Dabo-Niang. She was honored by Femmes et Mathématiques in 2015.[9]
See also
• Female education in STEM
References
1. "Sophie Dabo | African Women in Mathematics Association". Retrieved 2021-01-14.
2. Sciences & Co (Director) (2017-03-22). Egalité Hommes/Femmes au cœur des laboratoires scientifiques. Event occurs at 2:50. Retrieved 2021-01-15.
3. "Sophie DABO Contact, Faculty Profile - Université de Lille". Université de Lille. Retrieved 2021-01-15.
4. "Rev.Colomb.Estad. - Editorial board". www.scielo.org.co. Retrieved 2021-01-15.
5. Vauzeilles, Jacqueline (2015). Rapport d'activité du LEM (PDF). Lille Économie Management. p. 114.
6. "Executive Team | CIMPA". www.cimpa.info. Retrieved 2021-01-15.
7. "Catalogue SUDOC". www.sudoc.abes.fr. Retrieved 2021-01-15.
8. nickgill (2018-02-20). "Members". EMS-CDC. Retrieved 2021-01-15.
9. "Mathématiciennes africaines Meeting Agenda" (Professional Organization). FEMMES ET MATHÉMATIQUES. 2015-05-30. Retrieved 2021-01-15.
External links
• personal website
• podcast recording of her workshop on "Functional estimation in high dimensional data : Application to classification"
• Sophie Dabo-Niang publications indexed by Google Scholar
Sophie Germain's theorem
In number theory, Sophie Germain's theorem is a statement about the divisibility of solutions to the equation $x^{p}+y^{p}=z^{p}$ of Fermat's Last Theorem for an odd prime $p$.
Formal statement
Specifically, Sophie Germain proved that at least one of the numbers $x$, $y$, $z$ must be divisible by $p^{2}$ if an auxiliary prime $q$ can be found such that two conditions are satisfied:
1. No two nonzero $p^{\mathrm {th} }$ powers differ by one modulo $q$; and
2. $p$ is itself not a $p^{\mathrm {th} }$ power modulo $q$.
Conversely, the first case of Fermat's Last Theorem (the case in which $p$ does not divide $xyz$) must hold for every prime $p$ for which even one auxiliary prime can be found.
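The two conditions are straightforward to test by brute force for small primes. A Python sketch (function names are illustrative; trial-division primality testing is enough at this scale):

```python
def is_prime(m):
    """Trial-division primality check (fine at this scale)."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def is_auxiliary(p, q):
    """Do Germain's two conditions hold for the auxiliary prime q?"""
    residues = {pow(a, p, q) for a in range(1, q)} - {0}
    # 1. no two nonzero p-th powers differ by one modulo q
    cond1 = all((r + 1) % q not in residues for r in residues)
    # 2. p itself is not a p-th power modulo q
    cond2 = (p % q) not in residues
    return cond1 and cond2

def find_auxiliary(p, bound=200):
    return [q for q in range(3, bound) if is_prime(q) and is_auxiliary(p, q)]

# the safe prime q = 2p + 1, when it is prime, is a classic auxiliary prime
assert 7 in find_auxiliary(3)
assert 11 in find_auxiliary(5)
```

For example, for $p=3$ and $q=7$ the nonzero cubes modulo 7 are {1, 6}: no two are consecutive modulo 7, and 3 is not among them, so 7 is an auxiliary prime for 3.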
History
Germain identified such an auxiliary prime $q$ for every prime less than 100. The theorem and its application to primes $p$ less than 100 were attributed to Germain by Adrien-Marie Legendre in 1823.[1]
Notes
1. Legendre AM (1823). "Recherches sur quelques objets d'analyse indéterminée et particulièrement sur le théorème de Fermat". Mém. Acad. Roy. des Sciences de l'Institut de France. 6. Didot, Paris, 1827. Also appeared as Second Supplément (1825) to Essai sur la théorie des nombres, 2nd edn., Paris, 1808; also reprinted in Sphinx-Oedipe 4 (1909), 97–128.
References
• Laubenbacher R, Pengelley D (2007) "Voici ce que j'ai trouvé": Sophie Germain's grand plan to prove Fermat's Last Theorem
• Mordell LJ (1921). Three Lectures on Fermat's Last Theorem. Cambridge: Cambridge University Press. pp. 27–31.
• Ribenboim P (1979). 13 Lectures on Fermat's Last Theorem. New York: Springer-Verlag. pp. 54–63. ISBN 978-0-387-90432-0.
Sophie Germain Counter Mode
A new mode called Sophie Germain Counter Mode (SGCM) has been proposed as a variant of the Galois/Counter Mode of operation for block ciphers. Instead of the binary field GF($2^{128}$), it uses modular arithmetic in GF(p), where p is the safe prime $2^{128}+12451$ with corresponding Sophie Germain prime $(p-1)/2=2^{127}+6225$.[1] SGCM does prevent the specific "weak key" attack described in its paper; however, there are other ways of modifying the message that achieve the same forgery probability against SGCM as is possible against GCM: by modifying a valid n-word message, one can create an SGCM forgery with probability circa $n/2^{128}$.[2] That is, its authentication bounds are no better than those of Galois/Counter Mode. When implemented in hardware, SGCM has a higher gate count than GCM. However, its authors expect software implementations of SGCM to have similar or superior performance to GCM on most software platforms.
References
1. Markku-Juhani O. Saarinen (2011-06-16). "SGCM: The Sophie Germain Counter Mode". Cryptology ePrint Archive. Report 2011/326.
2. Scott Fluhrer (2011-07-18). "Re: AES-GCM weakness". Crypto Forum Research Group mailing list.
Safe and Sophie Germain primes
In number theory, a prime number p is a Sophie Germain prime if 2p + 1 is also prime. The number 2p + 1 associated with a Sophie Germain prime is called a safe prime. For example, 11 is a Sophie Germain prime and 2 × 11 + 1 = 23 is its associated safe prime.

Sophie Germain primes are named after French mathematician Sophie Germain, who used them in her investigations of Fermat's Last Theorem.[1] One attempt by Germain to prove Fermat's Last Theorem was to let p be a prime number of the form 8k + 7 and to let n = p − 1. In this case, $x^{n}+y^{n}=z^{n}$ is unsolvable. Germain's proof, however, remained unfinished.[2][3]

Through her attempts to solve Fermat's Last Theorem, Germain developed a result now known as Germain's Theorem: if p is an odd prime and 2p + 1 is also prime, then in any solution of $x^{p}+y^{p}=z^{p}$, p must divide x, y, or z. The case in which p does not divide x, y, or z is called the first case. Sophie Germain's work was the most progress achieved on Fermat's Last Theorem at that time.[2] Later work by Kummer and others always divided the problem into first and second cases.

Sophie Germain primes and safe primes have applications in public key cryptography and primality testing. It has been conjectured that there are infinitely many Sophie Germain primes, but this remains unproven.
Individual numbers
The first few Sophie Germain primes (those less than 1000) are
2, 3, 5, 11, 23, 29, 41, 53, 83, 89, 113, 131, 173, 179, 191, 233, 239, 251, 281, 293, 359, 419, 431, 443, 491, 509, 593, 641, 653, 659, 683, 719, 743, 761, 809, 911, 953, ... OEIS: A005384
Hence, the first few safe primes are
5, 7, 11, 23, 47, 59, 83, 107, 167, 179, 227, 263, 347, 359, 383, 467, 479, 503, 563, 587, 719, 839, 863, 887, 983, 1019, 1187, 1283, 1307, 1319, 1367, 1439, 1487, 1523, 1619, 1823, 1907, ... OEIS: A005385
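The two lists above can be reproduced directly from the definition; a short Python sketch (trial division is plenty at this size):

```python
def is_prime(n: int) -> bool:
    """Simple trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Sophie Germain primes below 1000: p prime with 2p + 1 also prime
germain = [p for p in range(2, 1000) if is_prime(p) and is_prime(2 * p + 1)]
# Their associated safe primes
safe = [2 * p + 1 for p in germain]

print(germain[:8])  # [2, 3, 5, 11, 23, 29, 41, 53]
print(safe[:8])     # [5, 7, 11, 23, 47, 59, 83, 107]
```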
In cryptography much larger Sophie Germain primes like 1,846,389,521,368 + 11600 are required.
Two distributed computing projects, PrimeGrid and Twin Prime Search, include searches for large Sophie Germain primes. Some of the largest known Sophie Germain primes are given in the following table.[4]
Value (number of digits, time of discovery), discoverer:
• 2618163402417 × 2^1290000 − 1 (388342 digits, February 2016), found by Dr. James Scott Brown in a distributed PrimeGrid search using the programs TwinGen and LLR[5]
• 18543637900515 × 2^666667 − 1 (200701 digits, April 2012), found by Philipp Bliedung in a distributed PrimeGrid search using the programs TwinGen and LLR[6]
• 183027 × 2^265440 − 1 (79911 digits, March 2010), found by Tom Wu using LLR[7]
• 648621027630345 × 2^253824 − 1 and 620366307356565 × 2^253824 − 1 (76424 digits, November 2009), found by Zoltán Járai, Gábor Farkas, Tímea Csajbók, János Kasza and Antal Járai[8][9]
• 1068669447 × 2^211088 − 1 (63553 digits, May 2020), found by Michael Kwok[10]
• 99064503957 × 2^200008 − 1 (60220 digits, April 2016), found by S. Urushihata[11]
• 607095 × 2^176311 − 1 (53081 digits, September 2009), found by Tom Wu[12]
• 48047305725 × 2^172403 − 1 (51910 digits, January 2007), found by David Underbakke using TwinGen and LLR[13]
• 137211941292195 × 2^171960 − 1 (51780 digits, May 2006), found by Járai et al.[14]
On 2 December 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé, and Paul Zimmermann announced the computation of a discrete logarithm modulo the 240-digit (795-bit) prime RSA-240 + 49204 (the first safe prime above RSA-240), using a number field sieve algorithm; see Discrete logarithm records.
Properties
There is no special primality test for safe primes the way there is for Fermat primes and Mersenne primes. However, Pocklington's criterion can be used to prove the primality of 2p + 1 once one has proven the primality of p.
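A sketch of that observation in Python: given a proven prime p ≥ 3, certifying N = 2p + 1 needs only a single witness, since the fully factored part p of N − 1 exceeds √N. The witness search bound of 100 is an illustrative choice, not part of the criterion:

```python
from math import gcd

def is_prime(n: int) -> bool:
    """Trial division, used only to certify the (smaller) prime p."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def pocklington_safe(p: int) -> bool:
    """Prove primality of N = 2p + 1 from the primality of p (p >= 3).
    Pocklington: N - 1 = 2p with prime p > sqrt(N), so N is prime if some a
    satisfies a^(N-1) = 1 (mod N) and gcd(a^((N-1)/p) - 1, N) = 1."""
    assert is_prime(p)
    N = 2 * p + 1
    for a in range(2, min(N, 100)):
        if pow(a, N - 1, N) == 1 and gcd(pow(a, 2, N) - 1, N) == 1:
            return True   # primality of N is certified
    return False          # no certificate found (N is composite, or unlucky)
```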
Just as every term except the last one of a Cunningham chain of the first kind is a Sophie Germain prime, so every term except the first of such a chain is a safe prime. Safe primes ending in 7, that is, of the form 10n + 7, are the last terms in such chains when they occur, since 2(10n + 7) + 1 = 20n + 15 is divisible by 5.
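A sketch that grows such a chain greedily from a starting prime:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine at this scale."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def cunningham_chain(p: int) -> list[int]:
    """Cunningham chain of the first kind starting at the prime p:
    keep applying x -> 2x + 1 while the result stays prime."""
    assert is_prime(p)
    chain = [p]
    while is_prime(2 * chain[-1] + 1):
        chain.append(2 * chain[-1] + 1)
    return chain

print(cunningham_chain(2))    # [2, 5, 11, 23, 47] -- 47 ends in 7, and 95 = 5 * 19
print(cunningham_chain(89))   # [89, 179, 359, 719, 1439, 2879]
```

Note that the first chain terminates exactly as the text predicts: its last term, 47, ends in 7, so the next candidate 95 is divisible by 5.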
Modular restrictions
With the exception of 7, a safe prime q is of the form 6k − 1 or, equivalently, q ≡ 5 (mod 6) – as is p > 3. Similarly, with the exception of 5, a safe prime q is of the form 4k − 1 or, equivalently, q ≡ 3 (mod 4) — trivially true since (q − 1) / 2 must evaluate to an odd natural number. Combining both forms using lcm(6, 4) we determine that a safe prime q > 7 also must be of the form 12k − 1 or, equivalently, q ≡ 11 (mod 12).
It follows that, for any safe prime q > 7:
• both 3 and 12 are quadratic residues mod q (per law of quadratic reciprocity)
• neither 3 nor 12 is a primitive root of q
• the only safe primes that are also full reptend primes in base 12 are 5 and 7
• q divides 3^((q−1)/2) − 1
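These restrictions are easy to confirm empirically; a quick Python check over all safe primes q > 7 below 10^4 (the bound is arbitrary):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# safe primes q > 7 below 10000: q prime and (q - 1)/2 prime
qs = [q for q in range(9, 10000, 2)
      if is_prime(q) and is_prime((q - 1) // 2)]

assert all(q % 12 == 11 for q in qs)                  # q = 12k - 1
assert all(pow(3, (q - 1) // 2, q) == 1 for q in qs)  # q | 3^((q-1)/2) - 1
print(len(qs), "safe primes checked")
```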
If p is a Sophie Germain prime greater than 3, then p must be congruent to 2 mod 3. For, if not, it would be congruent to 1 mod 3 and 2p + 1 would be congruent to 3 mod 3, impossible for a prime number.[15] Similar restrictions hold for larger prime moduli, and are the basis for the choice of the "correction factor" 2C in the Hardy–Littlewood estimate on the density of the Sophie Germain primes.[16]
If a Sophie Germain prime p is congruent to 3 (mod 4) (OEIS: A002515, Lucasian primes), then its matching safe prime 2p + 1 (congruent to 7 modulo 8) will be a divisor of the Mersenne number 2p − 1. Historically, this result of Leonhard Euler was the first known criterion for a Mersenne number with a prime index to be composite.[17] It can be used to generate the largest Mersenne numbers (with prime indices) that are known to be composite.[18]
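Euler's criterion makes this divisibility a one-line check: for every Sophie Germain prime p ≡ 3 (mod 4), the safe prime q = 2p + 1 satisfies 2^p ≡ 1 (mod q), i.e. q divides 2^p − 1.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Lucasian primes: Sophie Germain primes congruent to 3 mod 4
lucasian = [p for p in range(3, 3000)
            if p % 4 == 3 and is_prime(p) and is_prime(2 * p + 1)]

for p in lucasian:
    # 2 is a quadratic residue mod q = 2p + 1 (q = 7 mod 8),
    # so 2^((q-1)/2) = 2^p = 1 (mod q) by Euler's criterion
    assert pow(2, p, 2 * p + 1) == 1

print(lucasian[:4])   # e.g. p = 11 gives 23 | 2^11 - 1 = 2047 = 23 * 89
```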
Infinitude and density
Unsolved problem in mathematics:
Are there infinitely many Sophie Germain primes?
(more unsolved problems in mathematics)
It is conjectured that there are infinitely many Sophie Germain primes, but this has not been proven.[16] Several other famous conjectures in number theory generalize this and the twin prime conjecture; they include Dickson's conjecture, Schinzel's hypothesis H, and the Bateman–Horn conjecture.
A heuristic estimate for the number of Sophie Germain primes less than n is[16]
$2C{\frac {n}{(\ln n)^{2}}}\approx 1.32032{\frac {n}{(\ln n)^{2}}}$
where
$C=\prod _{p>2}{\frac {p(p-2)}{(p-1)^{2}}}\approx 0.660161$
is Hardy–Littlewood's twin prime constant. For n = 104, this estimate predicts 156 Sophie Germain primes, which has a 20% error compared to the exact value of 190. For n = 107, the estimate predicts 50822, which is still 10% off from the exact value of 56032. The form of this estimate is due to G. H. Hardy and J. E. Littlewood, who applied a similar estimate to twin primes.[19]
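The comparison in the text can be reproduced numerically; here only the n = 10^4 case, to keep trial division cheap:

```python
from math import log

def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

C = 0.660161  # Hardy-Littlewood twin prime constant (truncated, as above)

def hl_estimate(n: float) -> float:
    """Heuristic count of Sophie Germain primes below n: 2C * n / (ln n)^2."""
    return 2 * C * n / log(n) ** 2

actual = sum(1 for p in range(2, 10**4) if is_prime(p) and is_prime(2 * p + 1))
print(round(hl_estimate(10**4)), actual)   # 156 vs 190, as stated in the text
```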
A sequence (p, 2p + 1, 2(2p + 1) + 1, ...) in which all of the numbers are prime is called a Cunningham chain of the first kind. Every term of such a sequence except the last is a Sophie Germain prime, and every term except the first is a safe prime. Extending the conjecture that there exist infinitely many Sophie Germain primes, it has also been conjectured that arbitrarily long Cunningham chains exist,[20] although infinite chains are known to be impossible.[21]
Strong primes
A prime number q is a strong prime if q + 1 and q − 1 both have some large (around 500 digits) prime factors. For a safe prime q = 2p + 1, the number q − 1 naturally has a large prime factor, namely p, and so a safe prime q meets part of the criteria for being a strong prime. The running times of some methods of factoring a number with q as a prime factor depend partly on the size of the prime factors of q − 1. This is true, for instance, of the p − 1 method.
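A minimal sketch of Pollard's p − 1 makes the point: it succeeds when some prime factor p of n has p − 1 built only from small primes, and stalls when n is a product of safe primes. The moduli below are toy-sized, chosen purely for illustration:

```python
from math import gcd

def pollard_p_minus_1(n: int, bound: int) -> int:
    """Pollard's p-1 method: returns a nontrivial factor of n when some
    prime factor p of n has bound-smooth p - 1, otherwise returns 1."""
    a = 2
    for j in range(2, bound + 1):
        a = pow(a, j, n)          # a = 2^(2*3*...*bound) mod n
    d = gcd(a - 1, n)
    return d if 1 < d < n else 1

# 299 = 13 * 23, and 13 - 1 = 12 is 5-smooth, so a small bound succeeds:
print(pollard_p_minus_1(299, 5))      # 13
# 1081 = 23 * 47 is a product of safe primes (22 = 2*11, 46 = 2*23),
# so the bound must reach the large prime factor 11 before anything happens:
print(pollard_p_minus_1(1081, 5))     # 1 (failure)
print(pollard_p_minus_1(1081, 11))    # 23
```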
Applications
Cryptography
Safe primes are also important in cryptography because of their use in discrete logarithm-based techniques like Diffie–Hellman key exchange. If 2p + 1 is a safe prime, the multiplicative group of integers modulo 2p + 1 has a subgroup of large prime order. It is usually this prime-order subgroup that is desirable, and the reason for using safe primes is so that the modulus is as small as possible relative to p.
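A toy sketch of Diffie–Hellman over a safe-prime group; q = 227 is an illustrative choice, whereas real deployments use standardized groups of 2048 bits or more:

```python
import secrets

q = 227                  # a safe prime: q = 2p + 1
p = (q - 1) // 2         # the Sophie Germain prime 113, the subgroup order
g = pow(2, 2, q)         # squaring an element lands in the order-p subgroup

a = secrets.randbelow(p - 1) + 1      # Alice's secret exponent
b = secrets.randbelow(p - 1) + 1      # Bob's secret exponent
A = pow(g, a, q)                      # Alice's public value
B = pow(g, b, q)                      # Bob's public value

shared_alice = pow(B, a, q)
shared_bob = pow(A, b, q)
assert shared_alice == shared_bob     # both sides derive the same secret
assert pow(g, p, q) == 1              # g indeed lies in the order-p subgroup
```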
A prime number p = 2q + 1 is called a safe prime if q is prime. Thus, p = 2q + 1 is a safe prime if and only if q is a Sophie Germain prime, so finding safe primes and finding Sophie Germain primes are equivalent in computational difficulty. The notion of a safe prime can be strengthened to a strong prime, for which both p − 1 and p + 1 have large prime factors. Safe and strong primes were useful as the factors of secret keys in the RSA cryptosystem, because they prevent the system being broken by some factorization algorithms such as Pollard's p − 1 algorithm. However, with the current factorization technology, the advantage of using safe and strong primes appears to be negligible.[22]
Similar issues apply in other cryptosystems as well, including Diffie–Hellman key exchange and similar systems that depend on the security of the discrete log problem rather than on integer factorization.[23] For this reason, key generation protocols for these methods often rely on efficient algorithms for generating strong primes, which in turn rely on the conjecture that these primes have a sufficiently high density.[24]
In Sophie Germain Counter Mode, it was proposed to use the arithmetic in the finite field of order equal to the safe prime 2^128 + 12451, to counter weaknesses in Galois/Counter Mode using the binary finite field GF(2^128). However, SGCM has been shown to be vulnerable to many of the same cryptographic attacks as GCM.[25]
Primality testing
In the first version of the AKS primality test paper, a conjecture about Sophie Germain primes was used to lower the worst-case time complexity from O(log^12 n) to O(log^6 n). A later version of the paper was shown to have time complexity O(log^7.5 n), which can likewise be lowered to O(log^6 n) using the conjecture.[26] Later variants of AKS have been proven to have complexity O(log^6 n) without any conjectures or use of Sophie Germain primes.
Pseudorandom number generation
Safe primes obeying certain congruences can be used to generate pseudo-random numbers of use in Monte Carlo simulation.
Similarly, Sophie Germain primes may be used in the generation of pseudo-random numbers. The decimal expansion of 1/q will produce a stream of q − 1 pseudo-random digits, if q is the safe prime of a Sophie Germain prime p, with p congruent to 3, 9, or 11 modulo 20.[27] Thus "suitable" prime numbers q are 7, 23, 47, 59, 167, 179, etc. (OEIS: A000353) (corresponding to p = 3, 11, 23, 29, 83, 89, etc.) (OEIS: A000355). The result is a stream of length q − 1 digits (including leading zeros). So, for example, using q = 23 generates the pseudo-random digits 0, 4, 3, 4, 7, 8, 2, 6, 0, 8, 6, 9, 5, 6, 5, 2, 1, 7, 3, 9, 1, 3. Note that these digits are not appropriate for cryptographic purposes, as the value of each can be derived from its predecessor in the digit-stream.
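The digit stream for q = 23 quoted above can be reproduced by long division:

```python
def reciprocal_digits(q: int) -> list[int]:
    """The q - 1 decimal digits of 1/q (including leading zeros), by long
    division; this is the full period when 10 is a primitive root mod q."""
    digits, r = [], 1
    for _ in range(q - 1):
        r *= 10
        digits.append(r // q)
        r %= q
    return digits

print(reciprocal_digits(23))
# [0, 4, 3, 4, 7, 8, 2, 6, 0, 8, 6, 9, 5, 6, 5, 2, 1, 7, 3, 9, 1, 3]
```

Each digit is determined by the previous remainder, which is exactly why the stream is unusable for cryptographic purposes, as noted above.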
In popular culture
Sophie Germain primes are mentioned in the stage play Proof[28] and the subsequent film.[29]
References
1. Specifically, Germain proved that the first case of Fermat's Last Theorem, in which the exponent divides one of the bases, is true for every Sophie Germain prime, and she used similar arguments to prove the same for all other primes up to 100. For details see Edwards, Harold M. (2000), Fermat's Last Theorem: A Genetic Introduction to Algebraic Number Theory, Graduate Texts in Mathematics, vol. 50, Springer, pp. 61–65, ISBN 9780387950020.
2. Dalmedico, Amy (1991). "Sophie Germain". Scientific American. 265 (6): 116–123. doi:10.1038/scientificamerican1291-116. JSTOR 24938838 – via JSTOR.
3. Laubenbacher, Reinhard; Pengelley, David (2010-11-01). ""Voici ce que j'ai trouvé:" Sophie Germain's grand plan to prove Fermat's Last Theorem". Historia Mathematica. 37 (4): 641–692. doi:10.1016/j.hm.2009.12.002. ISSN 0315-0860.
4. The Top Twenty Sophie Germain Primes — from the Prime Pages. Retrieved 17 May 2020.
5. "PrimeGrid's Sophie Germain Prime Search" (PDF). PrimeGrid. Archived (PDF) from the original on 2022-10-09. Retrieved 29 February 2016.
6. "PrimeGrid's Sophie Germain Prime Search" (PDF). PrimeGrid. Archived (PDF) from the original on 2022-10-09. Retrieved 18 April 2012.
7. The Prime Database: 183027*2^265440-1. From The Prime Pages.
8. The Prime Database: 648621027630345*2^253824-1.
9. The Prime Database: 620366307356565*2^253824-1
10. The Prime Database: 1068669447*2^211088-1 From The Prime Pages.
11. The Prime Database: 99064503957*2^200008-1 From The Prime Pages.
12. The Prime Database: 607095*2^176311-1.
13. The Prime Database: 48047305725*2^172403-1.
14. The Prime Database: 137211941292195*2^171960-1.
15. Krantz, Steven G. (2010), An Episodic History of Mathematics: Mathematical Culture Through Problem Solving, Mathematical Association of America, p. 206, ISBN 9780883857663.
16. Shoup, Victor (2009), "5.5.5 Sophie Germain primes", A Computational Introduction to Number Theory and Algebra, Cambridge University Press, pp. 123–124, ISBN 9780521516440.
17. Ribenboim, P. (1983), "1093", The Mathematical Intelligencer, 5 (2): 28–34, doi:10.1007/BF03023623, MR 0737682.
18. Dubner, Harvey (1996), "Large Sophie Germain primes", Mathematics of Computation, 65 (213): 393–396, CiteSeerX 10.1.1.106.2395, doi:10.1090/S0025-5718-96-00670-9, MR 1320893.
19. Ribenboim, Paulo (1999), Fermat's Last Theorem for Amateurs, Springer, p. 141, ISBN 9780387985084.
20. Wells, David (2011), Prime Numbers: The Most Mysterious Figures in Math, John Wiley & Sons, p. 35, ISBN 9781118045718, If the strong prime k-tuples conjecture is true, then Cunningham chains can reach any length.
21. Löh, Günter (1989), "Long chains of nearly doubled primes", Mathematics of Computation, 53 (188): 751–759, doi:10.1090/S0025-5718-1989-0979939-8, MR 0979939.
22. Rivest, Ronald L.; Silverman, Robert D. (November 22, 1999), Are 'strong' primes needed for RSA? (PDF), archived (PDF) from the original on 2022-10-09
23. Cheon, Jung Hee (2006), "Security analysis of the strong Diffie–Hellman problem", 24th Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT'06), St. Petersburg, Russia, May 28 – June 1, 2006, Proceedings (PDF), Lecture Notes in Computer Science, vol. 4004, Springer-Verlag, pp. 1–11, doi:10.1007/11761679_1.
24. Gordon, John A. (1985), "Strong primes are easy to find", Proceedings of EUROCRYPT 84, A Workshop on the Theory and Application of Cryptographic Techniques, Paris, France, April 9–11, 1984, Lecture Notes in Computer Science, vol. 209, Springer-Verlag, pp. 216–223, doi:10.1007/3-540-39757-4_19.
25. Yap, Wun-She; Yeo, Sze Ling; Heng, Swee-Huay; Henricksen, Matt (2013), "Security analysis of GCM for communication", Security and Communication Networks, 7 (5): 854–864, doi:10.1002/sec.798.
26. Agrawal, Manindra; Kayal, Neeraj; Saxena, Nitin (2004), "PRIMES is in P" (PDF), Annals of Mathematics, 160 (2): 781–793, doi:10.4007/annals.2004.160.781, JSTOR 3597229, archived (PDF) from the original on 2022-10-09
27. Matthews, Robert A. J. (1992), "Maximally periodic reciprocals", Bulletin of the Institute of Mathematics and Its Applications, 28 (9–10): 147–148, MR 1192408.
28. Peterson, Ivars (Dec 21, 2002), "Drama in numbers: putting a passion for mathematics on stage", Science News, doi:10.2307/4013968, JSTOR 4013968, [Jean E.] Taylor pointed out that the example of a Germain prime given in the preliminary text was missing the term "+ 1." "When I first went to see 'Proof' and that moment came up in the play, I was happy to hear the 'plus one' clearly spoken," Taylor says.
29. Ullman, Daniel (2006), "Movie Review: Proof" (PDF), Notices of the AMS, 53 (3): 340–342, archived (PDF) from the original on 2022-10-09, There are a couple of breaks from realism in Proof where characters speak in a way that is for the benefit of the audience rather than the way mathematicians would actually talk among themselves. When Hal (Harold) remembers what a Germain prime is, he speaks to Catherine in a way that would be patronizing to another mathematician.
External links
• Safe prime at PlanetMath.
• M. Abramowitz, I. A. Stegun, ed. (1972). Handbook of Mathematical Functions. Applied Math. Series. Vol. 55 (Tenth Printing ed.). National Bureau of Standards. p. 870.
Sophie Germain Prize
The Sophie Germain Prize (in French: Prix Sophie Germain) is an annual mathematics prize from the French Academy of Sciences conferred since the year 2003. It is named after the French mathematician Sophie Germain, and comes with a prize of €8000.[1][2][3][4]
"The Sophie Germain Prize of the Institut de France has been awarded every year by the French Academy of Sciences since 2003 to researchers who have carried out fundamental research in mathematics. Through this prize, the Academy of Sciences furthers its mission of encouraging the advancement of science."[5]
Recipients
• 2003 Claire Voisin
• 2004 Henri Berestycki
• 2005 Jean-François Le Gall
• 2006 Michael Harris
• 2007 Ngô Bảo Châu
• 2008 Håkan Eliasson
• 2009 Nessim Sibony
• 2010 Guy Henniart
• 2011 Yves Le Jan
• 2012 Lucien Birgé
• 2013 Albert Fathi
• 2014 Bernhard Keller
• 2015 Carlos Simpson
• 2016 François Ledrappier
• 2017 Xiaonan Ma
• 2018 Isabelle Gallagher
• 2019 Bertrand Toën
• 2020 Georges Skandalis
• 2021 Étienne Fouvry
• 2022 Thierry Bodineau[5]
See also
• List of mathematics awards
References
1. http://www.academie-sciences.fr/pdf/prix/laureats_2015.pdf
2. http://www.academie-sciences.fr/pdf/prix/laureats_2017.pdf
3. "Prix Sophie-Germain".
4. Prix Sophie Germain page officielle
5. "Thierry Bodineau awarded the Sophie Germain Prize". IHES. 2022-10-18. Retrieved 2022-12-08.
Sophie Germain's identity
In mathematics, Sophie Germain's identity is a polynomial factorization named after Sophie Germain stating that
${\begin{aligned}x^{4}+4y^{4}&={\bigl (}(x+y)^{2}+y^{2}{\bigr )}\cdot {\bigl (}(x-y)^{2}+y^{2}{\bigr )}\\&=(x^{2}+2xy+2y^{2})\cdot (x^{2}-2xy+2y^{2}).\end{aligned}}$
Beyond its use in elementary algebra, it can also be used in number theory to factorize integers of the special form $x^{4}+4y^{4}$, and it frequently forms the basis of problems in mathematics competitions.[1][2][3]
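The identity is straightforward to check by brute force over a grid of integers:

```python
def germain_factors(x: int, y: int) -> tuple[int, int]:
    """The two factors of x^4 + 4y^4 given by Sophie Germain's identity."""
    return (x * x + 2 * x * y + 2 * y * y,
            x * x - 2 * x * y + 2 * y * y)

# exhaustive check over a small grid
for x in range(-20, 21):
    for y in range(-20, 21):
        f1, f2 = germain_factors(x, y)
        assert f1 * f2 == x**4 + 4 * y**4

print(germain_factors(2, 1))   # (10, 2): 2^4 + 4*1^4 = 20 = 10 * 2
```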
History
Although the identity has been attributed to Sophie Germain, it does not appear in her works. Instead, in her works one can find the related identity[4][5]
${\begin{aligned}x^{4}+y^{4}&=(x^{2}-y^{2})^{2}+2(xy)^{2}\\&=(x^{2}+y^{2})^{2}-2(xy)^{2}.\\\end{aligned}}$
Modifying this equation by multiplying $y$ by ${\sqrt {2}}$ gives
$x^{4}+4y^{4}=(x^{2}+2y^{2})^{2}-4(xy)^{2},$
a difference of two squares, from which Germain's identity follows.[5] The inaccurate attribution of this identity to Germain was made by Leonard Eugene Dickson in his History of the Theory of Numbers, which also stated (equally inaccurately) that it could be found in a letter from Leonhard Euler to Christian Goldbach.[5][6]
The identity can be proven simply by multiplying the two terms of the factorization together, and verifying that their product equals the right hand side of the equality.[7] A proof without words is also possible based on multiple applications of the Pythagorean theorem.[1]
Applications to integer factorization
One consequence of Germain's identity is that the numbers of the form
$n^{4}+4^{n}$
cannot be prime for $n>1$. (For $n=1$, the result is the prime number 5.) They are obviously not prime if $n$ is even, and if $n$ is odd they have a factorization given by the identity with $x=n$ and $y=2^{(n-1)/2}$.[3][7] These numbers (starting with $n=0$) form the integer sequence
1, 5, 32, 145, 512, 1649, 5392, 18785, 69632, ... (sequence A001589 in the OEIS).
Many of the appearances of Sophie Germain's identity in mathematics competitions come from this corollary of it.[2][3]
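The factorization behind this corollary, for odd n > 1:

```python
def factor_n4_plus_4n(n: int) -> tuple[int, int]:
    """Nontrivial factors of n^4 + 4^n for odd n > 1, from Germain's
    identity with x = n and y = 2^((n-1)/2)."""
    assert n % 2 == 1 and n > 1
    y = 2 ** ((n - 1) // 2)
    return ((n + y) ** 2 + y * y,
            (n - y) ** 2 + y * y)

for n in range(3, 31, 2):
    f1, f2 = factor_n4_plus_4n(n)
    assert f1 * f2 == n**4 + 4**n and f1 > 1 and f2 > 1   # never prime

print(factor_n4_plus_4n(3))   # (29, 5): 3^4 + 4^3 = 145 = 29 * 5
```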
Another special case of the identity with $x=1$ and $y=2^{k}$ can be used to produce the factorization
${\begin{aligned}\Phi _{4}(2^{2k+1})&=2^{4k+2}+1\\&=(2^{2k+1}-2^{k+1}+1)\cdot (2^{2k+1}+2^{k+1}+1),\\\end{aligned}}$
where $\Phi _{4}(x)=x^{2}+1$ is the fourth cyclotomic polynomial. As with the cyclotomic polynomials more generally, $\Phi _{4}$ is an irreducible polynomial, so this factorization of infinitely many of its values cannot be extended to a factorization of $\Phi _{4}$ as a polynomial, making this an example of an aurifeuillean factorization.[8]
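A quick check of this aurifeuillean factorization for small k:

```python
def phi4_aurifeuillean(k: int) -> tuple[int, int]:
    """The two aurifeuillean factors of Phi_4(2^(2k+1)) = 2^(4k+2) + 1."""
    f1 = 2 ** (2 * k + 1) - 2 ** (k + 1) + 1
    f2 = 2 ** (2 * k + 1) + 2 ** (k + 1) + 1
    assert f1 * f2 == 2 ** (4 * k + 2) + 1
    return f1, f2

print(phi4_aurifeuillean(1))   # (5, 13):  2^6 + 1  = 65   = 5 * 13
print(phi4_aurifeuillean(2))   # (25, 41): 2^10 + 1 = 1025 = 25 * 41
```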
Generalization
Germain's identity has been generalized to the functional equation
$f(x)^{2}+4f(y)^{2}={\bigl (}f(x+y)+f(y){\bigr )}{\bigl (}f(x-y)+f(y){\bigr )},$
which by Sophie Germain's identity is satisfied by the square function.[4]
References
1. Moreno, Samuel G.; García-Caballero, Esther M. (2019), "Proof without words: Sophie Germain's identity", The College Mathematics Journal, 50 (3): 197, doi:10.1080/07468342.2019.1603533, MR 3955328, S2CID 191131755
2. "CC79: Show that if $n$ is an integer greater than 1, then $n^{4}+4$ is not prime" (PDF), The contest corner, Crux Mathematicorum, 40 (6): 239, June 2014; originally from 1979 APICS Math Competition
3. Engel, Arthur (1998), Problem-Solving Strategies, Problem Books in Mathematics, New York: Springer-Verlag, p. 121, doi:10.1007/b97682, ISBN 0-387-98219-1, MR 1485512
4. Łukasik, Radosław; Sikorska, Justyna; Szostok, Tomasz (2018), "On an equation of Sophie Germain", Results in Mathematics, 73 (2), Paper No. 60, doi:10.1007/s00025-018-0820-y, MR 3783549, S2CID 253591505
5. Whitty, Robin, "Sophie Germain's identity" (PDF), Theorem of the day
6. Dickson, Leonard Eugene (1919), History of the Theory of Numbers, Volume I: Divisibility and Primality, Carnegie Institute of Washington, p. 382
7. Bogomolny, Alexander, "Sophie Germain's identity", Cut-the-Knot, retrieved 2023-06-19
8. Granville, Andrew; Pleasants, Peter (2006), "Aurifeuillian factorization", Mathematics of Computation, 75 (253): 497–508, doi:10.1090/S0025-5718-05-01766-7, MR 2176412
Sophie Morel
Sophie Morel (born 1979)[3] is a French mathematician, specializing in number theory. She is a CNRS directrice de recherches in mathematics at École normale supérieure de Lyon. In 2012 she received one of the ten prizes of the European Mathematical Society.
Sophie Morel
Born: 1979 (age 43–44)
Nationality: French
Alma mater: École Normale Supérieure; Université Paris-Sud
Awards: EMS Prize (2012); AWM-Microsoft Research Prize (2014)[1]
Fields: Mathematics
Institutions: École Normale Supérieure de Lyon; Princeton University; Harvard University
Doctoral advisor: Gérard Laumon[2]
Biography
In a 2011 interview, Morel credited a math magazine bought while in 9th grade as well as summer camps for developing her interest in mathematics[4] and in a 2012 interview she mentioned being a keen distance runner.[5] She studied in Paris at the École Normale Supérieure, graduating in 1999.[6] In 2005 she finished her Ph.D. at the University of Paris-Sud, under the supervision of Gérard Laumon.[5][6] Her thesis made progress on the Langlands program.[5]
After her Ph.D., she was a Clay Research Fellow between 2005 and 2011. In December 2009 she was appointed as a professor of mathematics at Harvard University,[7] becoming the first woman in mathematics to be tenured there.[8] From 2012 to 2020, she was a professor of mathematics at Princeton University, where she was also the Henry Burchard Fine Professor in 2015.[9] Morel moved to the École Normale Supérieure de Lyon as a CNRS directrice de recherches in mathematics in 2020.
Recognition
She gave an invited talk at the International Congress of Mathematicians in 2010, in the "Number Theory" section.[10] In 2012 she received one of the prestigious European Mathematical Society Prizes for young researchers, and in May 2013 she was announced as the winner of the inaugural 2014 AWM-Microsoft Research Prize in Algebra and Number Theory.[11][6]
Selected publications
• Morel, Sophie (2006). "Complexes pondérés sur les compactifications de Baily-Borel: Le cas des variétés de Siegel". Journal of the American Mathematical Society. 21 (1): 23–61. doi:10.1090/S0894-0347-06-00538-8.
• Morel, Sophie (2010). On the Cohomology of Certain Non-Compact Shimura Varieties. Annals of Mathematics Studies. Vol. 173. Princeton University Press. ISBN 978-1400835393.
• Morel, Sophie (2011). "Cohomologie d'intersection des variétés modulaires de Siegel, suite". Compositio Mathematica. 147 (6): 1671–1740. arXiv:1806.09910. doi:10.1112/S0010437X11005409. ISSN 0010-437X.
• Morel, Sophie; Suh, Junecue (2019). "The standard sign conjecture on algebraic cycles: The case of Shimura varieties". Journal für die reine und angewandte Mathematik (Crelle's Journal). 2019 (748): 139–151. arXiv:1408.0461. doi:10.1515/crelle-2016-0048. ISSN 0075-4102.
References
1. "AWM Awards Inaugural Research Prizes" (PDF), Mathematics People, Notices of the AMS, 60 (7): 930, August 2013.
2. Sophie Morel at the Mathematics Genealogy Project
3. Birth year from German National Library catalog entry, retrieved 2021-03-24
4. "An Interview with Sophie Morel, Part 1" (PDF), Girls' Angle Bulletin, 5 (1): 3–6, October 2011
5. "Interview: Sophie Morel, Harvard University", EWM Newsletter, European Women in Mathematics, 21: 8–9, November 6, 2012
6. "Sophie Morel". École normale supérieure (in French). Retrieved March 21, 2021.
7. Bradt, Steve (January 14, 2010), "Mathematician gains dual appointments Sophie Morel will join FAS, Radcliffe Institute", Harvard Gazette.
8. Lewin, Tamar (March 12, 2010), "Women Making Gains on Faculty at Harvard", The New York Times.
9. Office of Communications. "Faculty chosen for endowed professorships". Princeton University. Retrieved March 21, 2021.
10. "ICM Plenary and Invited Speakers since 1897". International Congress of Mathematicians.
11. "Prof. Sophie Morel wins inaugural AWM-Microsoft Research Prize in Algebra and Number Theory". Princeton University Mathematics Department.
External links
• Official website
Sophie Piccard
Sophie Piccard (1904–1990) was a Russian-Swiss mathematician who became the first female full professor (professor ordinarius) in Switzerland.[1][2] Her research concerned set theory, group theory, linear algebra, and the history of mathematics.[1]
Sophie Piccard
Born: 27 September 1904, Saint Petersburg, Russia
Died: 6 January 1990 (aged 85), Fribourg, Switzerland
Nationality: Russian-Swiss
Occupation: Professor
Alma mater: University of Lausanne
Discipline: Mathematics
Sub-disciplines: Set theory; group theory; linear algebra; history of mathematics
Institutions: University of Neuchâtel
Early life and education
Piccard was born on September 27, 1904, in Saint Petersburg, to a French Huguenot mother and a Swiss father. She earned a diploma in Smolensk in 1925, where her father, Eugène-Ferdinand Piccard, was a university professor and her mother a language teacher at the lycée. Soon afterwards she moved to Switzerland with her parents, escaping the unrest in Russia that her mother, Eulalie Piccard, would become known for writing about. Sophie Piccard's Russian degree was not recognized in Switzerland, and she earned another from the University of Lausanne in 1927, going on to complete a doctorate there in 1929 under the supervision of Dmitry Mirimanoff.[1][2][3]
Career and later life
She worked outside of mathematics until 1936, when she began teaching part-time at the University of Neuchâtel as an assistant to Rudolf Gaberel. Gaberel died in 1938 and she inherited his position, becoming a professor extraordinarius (associate professor); she was promoted to professor ordinarius in 1943, as the chair of higher geometry and probability theory at Neuchâtel.[1][2][3]
Piccard died on January 6, 1990, in Fribourg.[1]
Contributions
Piccard was an invited speaker at the International Congress of Mathematicians in 1932 and again in 1936.[4]
In 1939 she published the book Sur les ensembles de distances des ensembles de points d'un espace euclidien (Mémoires de l'Université de Neuchâtel 13, Paris, France: Librairie Gauthier-Villars et Cie., 1939).[1][5] Its subject was the sets of distances that a collection of points in a Euclidean space might determine. The book included early research on Golomb rulers, finite sets of integer points on a line whose pairwise distances are all distinct. She published a theorem claiming that any two Golomb rulers with the same distance set must be congruent to each other; this turned out to be false for certain sets of six points, but true otherwise.[6][7][8]
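Piccard's claim, and its six-point counterexample, are easy to state in code; the specific point sets below are the standard homometric pair discussed in the literature on this problem (Bloom's 1977 counterexample, reference [6]):

```python
def distance_multiset(points: list[int]) -> list[int]:
    """Sorted multiset of all pairwise distances of integer points on a line."""
    pts = sorted(points)
    return sorted(pts[j] - pts[i]
                  for i in range(len(pts)) for j in range(i + 1, len(pts)))

def is_golomb_ruler(points: list[int]) -> bool:
    """A Golomb ruler has all pairwise distances distinct."""
    d = distance_multiset(points)
    return len(d) == len(set(d))

# Two non-congruent six-point Golomb rulers with the same distance set,
# contradicting Piccard's published theorem:
A = [0, 1, 4, 10, 12, 17]
B = [0, 1, 8, 11, 13, 17]
assert is_golomb_ruler(A) and is_golomb_ruler(B)
assert distance_multiset(A) == distance_multiset(B)
assert A != B and A != [17 - x for x in reversed(B)]   # not a mirror image either
```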
References
1. Riddle, Larry (10 January 2014), "Sophie Piccard", Biographies of Women Mathematicians, Agnes Scott College.
2. Zaslavsky, Sandrine (1 February 2010), "Piccard, Sophie", Historical Dictionary of Switzerland.
3. Ogilvie, Marilyn Bailey; Harvey, Joy Dorothy (2000), "Piccard, Sophie (ca. 1906– )", The Biographical Dictionary of Women in Science: Pioneering Lives from Ancient Times to the Mid-20th Century, Vol. 2: L-Z, Taylor & Francis, pp. 1019–1020, ISBN 9780415920407.
4. ICM Plenary and Invited Speakers since 1897, International Mathematical Union, retrieved 2 October 2015.
5. Ficken, F. A. (1943), "Review: Sophie Piccard, Sur les ensembles de distances des ensembles de points d'un espace Euclidien", Bulletin of the American Mathematical Society, 49 (1): 29–31, doi:10.1090/s0002-9904-1943-07825-1.
6. Bloom, Gary S. (1977), "A counterexample to a theorem of S. Piccard", Journal of Combinatorial Theory, Series A, 22 (3): 378–379, doi:10.1016/0097-3165(77)90013-9, MR 0439657.
7. Yovanof, G. S.; Golomb, S. W. (1998), "The polynomial model in the study of counterexamples to S. Piccard's theorem", Ars Combinatoria, 48: 43–63, MR 1623042.
8. Bekir, Ahmad; Golomb, Solomon W. (2007), "There are no further counterexamples to S. Piccard's theorem", IEEE Transactions on Information Theory, 53 (8): 2864–2867, doi:10.1109/TIT.2007.899468, MR 2400501.
Further reading
• Schumacher, Mireille, "Des mathématiciennes en Suisse", Tangente (in French), 157: 38–39.
Sophomore's dream
In mathematics, the sophomore's dream is the pair of identities (especially the first)
${\begin{alignedat}{2}&\int _{0}^{1}x^{-x}\,dx&&=\sum _{n=1}^{\infty }n^{-n}\\&\int _{0}^{1}x^{x}\,dx&&=\sum _{n=1}^{\infty }(-1)^{n+1}n^{-n}=-\sum _{n=1}^{\infty }(-n)^{-n}\end{alignedat}}$
discovered in 1697 by Johann Bernoulli.
The numerical values of these constants are approximately 1.291285997... and 0.7834305107..., respectively.
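Both constants can be checked numerically with nothing beyond the standard library; the series converge very quickly, and a composite Simpson rule handles the integrals once the integrands are extended continuously by the value 1 at x = 0 (the term and step counts below are arbitrary):

```python
def series(sign: int, terms: int = 30) -> float:
    """sum_{n>=1} sign^(n+1) * n^(-n): sign=+1 for the first identity,
    sign=-1 for the alternating second one."""
    return sum(sign ** (n + 1) * float(n) ** (-n) for n in range(1, terms))

def simpson(f, a: float, b: float, n: int = 2000) -> float:
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def f_neg(x: float) -> float:
    return 1.0 if x == 0 else x ** (-x)   # x^-x, extended by f(0) = 1

def f_pos(x: float) -> float:
    return 1.0 if x == 0 else x ** x      # x^x, extended by f(0) = 1

print(series(+1), simpson(f_neg, 0.0, 1.0))   # both ~ 1.291285997
print(series(-1), simpson(f_pos, 0.0, 1.0))   # both ~ 0.783430510
```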
The name "sophomore's dream"[1] is in contrast to the name "freshman's dream" which is given to the incorrect[note 1] identity $ (x+y)^{n}=x^{n}+y^{n}$. The sophomore's dream has a similar too-good-to-be-true feel, but is true.
Proof
The proofs of the two identities are completely analogous, so only the proof of the second is presented here. The key ingredients of the proof are:
• to write $ x^{x}=\exp(x\ln x)$ (using the notation ln for the natural logarithm and exp for the exponential function);
• to expand $ \exp(x\ln x)$ using the power series for exp; and
• to integrate termwise, using integration by substitution.
In detail, $x^{x}$ can be expanded as
$x^{x}=\exp(x\log x)=\sum _{n=0}^{\infty }{\frac {x^{n}(\log x)^{n}}{n!}}.$
Therefore,
$\int _{0}^{1}x^{x}\,dx=\int _{0}^{1}\sum _{n=0}^{\infty }{\frac {x^{n}(\log x)^{n}}{n!}}\,dx.$
By uniform convergence of the power series, one may interchange summation and integration to yield
$\int _{0}^{1}x^{x}\,dx=\sum _{n=0}^{\infty }\int _{0}^{1}{\frac {x^{n}(\log x)^{n}}{n!}}\,dx.$
To evaluate the above integrals, one may change the variable in the integral via the substitution $ x=\exp(-{\frac {u}{n+1}}).$ With this substitution, the bounds of integration are transformed to $0<u<\infty ,$ giving the identity
$\int _{0}^{1}x^{n}(\log x)^{n}\,dx=(-1)^{n}(n+1)^{-(n+1)}\int _{0}^{\infty }u^{n}e^{-u}\,du.$
By Euler's integral identity for the Gamma function, one has
$\int _{0}^{\infty }u^{n}e^{-u}\,du=n!,$
so that
$\int _{0}^{1}{\frac {x^{n}(\log x)^{n}}{n!}}\,dx=(-1)^{n}(n+1)^{-(n+1)}.$
Summing these (and changing the indexing so it starts at n = 1 instead of n = 0) yields the formula.
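The identities lend themselves to a quick numerical check. The sketch below (plain Python; a simple midpoint rule stands in for a library integrator, and the helper names are illustrative) compares each integral against a truncated series; the terms $n^{-n}$ decay so fast that 20 of them already exhaust double precision.

```python
def series(sign=1, terms=20):
    # Partial sum of sum_{n>=1} sign^(n+1) * n^(-n); converges extremely fast.
    return sum(sign ** (n + 1) * n ** (-float(n)) for n in range(1, terms + 1))

def midpoint_integral(f, a, b, steps=100_000):
    # Midpoint rule; both integrands are bounded on (0, 1), so this suffices.
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

I1 = midpoint_integral(lambda x: x ** -x, 0.0, 1.0)  # ≈ 1.2912859970...
I2 = midpoint_integral(lambda x: x ** x, 0.0, 1.0)   # ≈ 0.7834305107...
print(abs(I1 - series(1)))    # difference should be tiny
print(abs(I2 - series(-1)))   # difference should be tiny
```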
Historical proof
The original proof, given in Bernoulli,[2] and presented in modernized form in Dunham,[3] differs from the one above in how the termwise integral $ \int _{0}^{1}x^{n}(\log x)^{n}\,dx$ is computed, but is otherwise the same, omitting technical details to justify steps (such as termwise integration). Rather than integrating by substitution, yielding the Gamma function (which was not yet known), Bernoulli used integration by parts to iteratively compute these terms.
The integration by parts proceeds as follows, varying the two exponents independently to obtain a recursion. An indefinite integral is computed initially, omitting the constant of integration $+C$ both because this was done historically, and because it drops out when computing the definite integral.
Integrating $ \int x^{m}(\log x)^{n}\,dx$ by substituting $ u=(\log x)^{n}$ and $ dv=x^{m}\,dx$ yields:
${\begin{aligned}\int x^{m}(\log x)^{n}\,dx&={\frac {x^{m+1}(\log x)^{n}}{m+1}}-{\frac {n}{m+1}}\int x^{m+1}{\frac {(\log x)^{n-1}}{x}}\,dx\qquad {\text{(for }}m\neq -1{\text{)}}\\&={\frac {x^{m+1}}{m+1}}(\log x)^{n}-{\frac {n}{m+1}}\int x^{m}(\log x)^{n-1}\,dx\qquad {\text{(for }}m\neq -1{\text{)}}\end{aligned}}$
(also in the list of integrals of logarithmic functions). This reduces the power on the logarithm in the integrand by 1 (from $n$ to $n-1$) and thus one can compute the integral inductively, as
$\int x^{m}(\log x)^{n}\,dx={\frac {x^{m+1}}{m+1}}\cdot \sum _{i=0}^{n}(-1)^{i}{\frac {(n)_{i}}{(m+1)^{i}}}(\log x)^{n-i}$
where $ (n)_{i}$ denotes the falling factorial; there is a finite sum because the induction stops at 0, since n is an integer.
In this case $ m=n$, and they are integers, so
$\int x^{n}(\log x)^{n}\,dx={\frac {x^{n+1}}{n+1}}\cdot \sum _{i=0}^{n}(-1)^{i}{\frac {(n)_{i}}{(n+1)^{i}}}(\log x)^{n-i}.$
Integrating from 0 to 1, all the terms vanish except the last term at 1,[note 2] which yields:
$\int _{0}^{1}{\frac {x^{n}(\log x)^{n}}{n!}}\,dx={\frac {1}{n!}}{\frac {1^{n+1}}{n+1}}(-1)^{n}{\frac {(n)_{n}}{(n+1)^{n}}}=(-1)^{n}(n+1)^{-(n+1)}.$
This is equivalent to computing Euler's integral identity $\Gamma (n+1)=n!$ for the Gamma function on a different domain (corresponding to changing variables by substitution), as Euler's identity itself can also be computed via an analogous integration by parts.
See also
• Series (mathematics)
Notes
1. Incorrect in general, but correct when one is working in a commutative ring of prime characteristic p with n being a power of p. The correct result in a general commutative context is given by the binomial theorem.
2. All the terms vanish at 0 because $ \lim _{x\to 0^{+}}x^{m}(\log x)^{n}=0$ by l'Hôpital's rule (Bernoulli omitted this technicality), and all but the last term vanish at 1 since log 1 = 0.
References
Formula
• Bernoulli, Johann (1697). Opera omnia. Vol. 3. pp. 376–381.
• Borwein, Jonathan; Bailey, David H.; Girgensohn, Roland (2004). Experimentation in Mathematics: Computational Paths to Discovery. pp. 4, 44. ISBN 9781568811369.
• Dunham, William (2005). "Chapter 3: The Bernoullis (Johann and $x^{x}$)". The Calculus Gallery, Masterpieces from Newton to Lebesgue. Princeton University Press. pp. 46–51. ISBN 9780691095653.
• OEIS, (sequence A083648 in the OEIS) and (sequence A073009 in the OEIS)
• Pólya, George; Szegő, Gábor (1998), "Part I, problem 160", Problems and Theorems in Analysis, p. 36, ISBN 9783540636403
• Weisstein, Eric W. "Sophomore's Dream". MathWorld.
• Max R. P. Grossmann (2017): Sophomore's dream. 1,000,000 digits of the first constant
Function
• Literature for x^x and Sophomore's Dream, Tetration Forum, 03/02/2010
• The Coupled Exponential, Jay A. Fantini, Gilbert C. Kloepfer, 1998
• Sophomore's Dream Function, Jean Jacquelin, 2010, 13 pp.
• Lehmer, D. H. (1985). "Numbers associated with Stirling numbers and $x^{x}$". Rocky Mountain Journal of Mathematics. 15 (2): 461. doi:10.1216/RMJ-1985-15-2-461.
• Gould, H. W. (1996). "A Set of Polynomials Associated with the Higher Derivatives of y = xx". Rocky Mountain Journal of Mathematics. 26 (2): 615. doi:10.1216/rmjm/1181072076.
Footnotes
1. It appears in Borwein, Bailey & Girgensohn 2004.
2. Bernoulli 1697.
3. Dunham 2005.
Sophus Lie
Marius Sophus Lie (/liː/ LEE; Norwegian: [liː]; 17 December 1842 – 18 February 1899) was a Norwegian mathematician. He largely created the theory of continuous symmetry and applied it to the study of geometry and differential equations.
• Born: 17 December 1842, Nordfjordeid, Norway
• Died: 18 February 1899 (aged 56), Kristiania, Norway
• Nationality: Norwegian
• Alma mater: University of Christiania
• Known for: one-parameter group, differential invariant, contact transformation, infinitesimal transformation, W-curve, Carathéodory–Jacobi–Lie theorem, Lie algebra, Lie bracket, Lie group, Lie product formula, Lie sphere geometry, Lie theory, Lie transform, Lie's theorem, Lie's third theorem, Lie–Kolchin theorem
• Awards: Lobachevsky Medal (1897), ForMemRS (1895)
• Fields: mathematics
• Institutions: University of Christiania, University of Leipzig
• Doctoral advisors: Carl Anton Bjerknes, Cato Maximilian Guldberg
• Doctoral students: Hans Blichfeldt, Lucjan Emil Böttcher, Gerhard Kowalewski, Kazimierz Żorawski, Élie Cartan, Elling Holst, Edgar Odell Lovett
Life and career
Marius Sophus Lie was born on 17 December 1842 in the small town of Nordfjordeid. He was the youngest of six children born to Lutheran pastor Johann Herman Lie and his wife, who came from a well-known Trondheim family.[1]
He received his primary education in Moss, on the south-eastern coast, before attending high school in Oslo (then known as Christiania). After graduating from high school, his ambition of a military career was dashed when the army rejected him because of poor eyesight, and he enrolled instead at the University of Christiania.
Sophus Lie's first mathematical work, Repräsentation der Imaginären der Plangeometrie, was published in 1869 by the Academy of Sciences in Christiania and also by Crelle's Journal. That same year he received a scholarship and travelled to Berlin, where he stayed from September to February 1870. There, he met Felix Klein and they became close friends. When he left Berlin, Lie travelled to Paris, where he was joined by Klein two months later. There, they met Camille Jordan and Gaston Darboux. But on 19 July 1870 the Franco-Prussian War began and Klein (who was Prussian) had to leave France very quickly. Lie left for Fontainebleau where he was arrested, suspected of being a German spy, garnering him fame in Norway. He was released from prison after a month, thanks to the intervention of Darboux.[2]
Lie obtained his PhD at the University of Christiania (in present-day Oslo) in 1871 with a thesis entitled Over en Classe geometriske Transformationer (On a Class of Geometric Transformations).[3] It would be described by Darboux as "one of the most handsome discoveries of modern Geometry". The next year, the Norwegian Parliament established an extraordinary professorship for him. That same year, Lie visited Klein, who was then at Erlangen and working on the Erlangen program.
Beginning in 1872, Lie spent eight years, together with Peter Ludwig Mejdell Sylow, editing and publishing the mathematical works of their countryman Niels Henrik Abel.
At the end of 1872, Sophus Lie proposed to Anna Birch, then eighteen years old, and they were married in 1874. The couple had three children: Marie (b. 1877), Dagny (b. 1880) and Herman (b. 1884).
From 1876, he co-edited the journal Archiv for Mathematik og Naturvidenskab, together with the physician Jacob Worm-Müller, and the biologist Georg Ossian Sars.
In 1884, Friedrich Engel arrived at Christiania to help him, with the support of Klein and Adolph Mayer (who were both professors at Leipzig by then). Engel would help Lie to write his most important treatise, Theorie der Transformationsgruppen, published in Leipzig in three volumes from 1888 to 1893. Decades later, Engel would also be one of the two editors of Lie's collected works.
In 1886, Lie became a professor at Leipzig, replacing Klein, who had moved to Göttingen. In November 1889, Lie suffered a mental breakdown and was hospitalized until June 1890. He subsequently returned to his post, but over the years his anaemia progressed to the point where he decided to return to his homeland. He tendered his resignation in May 1898 and left for home that September. He died in 1899, at the age of 56, of pernicious anaemia, a disease caused by impaired absorption of vitamin B12.
He was made Honorary Member of the London Mathematical Society in 1878, Member of the French Academy of Sciences in 1892, Foreign Member of the Royal Society of London in 1895 and foreign associate of the National Academy of Sciences of the United States of America in 1895.
Legacy
Lie's principal tool, and one of his greatest achievements, was the discovery that continuous transformation groups (now called, after him, Lie groups) could be better understood by "linearizing" them, and studying the corresponding generating vector fields (the so-called infinitesimal generators). The generators are subject to a linearized version of the group law, now called the commutator bracket, and have the structure of what is today called a Lie algebra.[4][5]
Hermann Weyl used Lie's work on group theory in his papers from 1922 and 1923, and Lie groups today play a role in quantum mechanics.[5] However, the subject of Lie groups as it is studied today is vastly different from what the research by Sophus Lie was about and "among the 19th century masters, Lie's work is in detail certainly the least known today".[6]
Sophus Lie was an eager proponent of the establishment of the Abel Prize. Inspired by the Nansen Fund, named after Fridtjof Nansen, and by the absence of a prize for mathematics among the Nobel Prizes, he gathered support for the establishment of an award for outstanding work in pure mathematics.[7]
Lie advised many doctoral students who went on to become successful mathematicians. Élie Cartan became widely regarded as one of the greatest mathematicians of the 20th century. Kazimierz Żorawski's work was proved to be of importance to a variety of fields. Hans Frederick Blichfeldt made contributions to various fields of mathematics.
Books
• Lie, Sophus (1888), Theorie der Transformationsgruppen I (in German), Leipzig: B. G. Teubner. Written with the help of Friedrich Engel. English translation available: Edited and translated from the German and with a foreword by Joël Merker, see ISBN 978-3-662-46210-2 and arXiv:1003.3202
• Lie, Sophus (1890), Theorie der Transformationsgruppen II (in German), Leipzig: B. G. Teubner. Written with the help of Friedrich Engel.
• Lie, Sophus (1891), Vorlesungen über differentialgleichungen mit bekannten infinitesimalen transformationen (in German), Leipzig: B. G. Teubner. Written with the help of Georg Scheffers.[8]
• Lie, Sophus (1893), Vorlesungen über continuierliche Gruppen (in German), Leipzig: B. G. Teubner. Written with the help of Georg Scheffers.[9]
• Lie, Sophus (1893), Theorie der Transformationsgruppen III (in German), Leipzig: B. G. Teubner. Written with the help of Friedrich Engel.
• Lie, Sophus (1896), Geometrie der Berührungstransformationen (in German), Leipzig: B. G. Teubner. Written with the help of Georg Scheffers.[10]
• Lie, Sophus; Engel, Friedrich; Heegaard, Poul (eds.), Gesammelte Abhandlungen, Leipzig: Teubner, 7 vols., 1922–1960.[11][12]
See also
• Lie derivative
• List of simple Lie groups
• List of things named after Sophus Lie
Notes
1. James, Ioan (2002). Remarkable Mathematicians. Cambridge University Press. p. 201. ISBN 978-0-521-52094-2.
2. Darboux, Gaston (1899). "Sophus Lie". Bull. Amer. Math. Soc. 5 (7): 367–370. doi:10.1090/s0002-9904-1899-00628-1.
3. Lie, Sophus (1871). Over en classe geometriske Transformationer (PhD). University of Christiania.
4. Helgason, Sigurdur (1994), "Sophus Lie, the Mathematician" (PDF), Proceedings of the Sophus Lie Memorial Conference, Oslo, August, 1992, Oslo: Scandinavian University Press, pp. 3–21.
5. Gale, Thomson. "Marius Sophus Lie Biography". World of Mathematics. Retrieved 23 January 2009.
6. Hermann, Robert, ed. (1975), Sophus Lie's 1880 transformation group paper, Lie groups: History, frontiers and applications, vol. 1, Math Sci Press, p. iii, ISBN 0-915692-10-4
7. "The History of the Abel Prize". www.abelprize.no. Retrieved 4 February 2021.
8. Lovett, E. O. (1898). "Review: Vorlesungen über Differentialgleichungen mit bekannten infinitesimalen Transformationen". Bull. Amer. Math. Soc. 4 (4): 155–167. doi:10.1090/s0002-9904-1898-00476-7.
9. Brooks, J. M. (1895). "Review: Vorlesungen über continuerliche Gruppen mit geometrischen und anderen Anwendungen". Bull. Amer. Math. Soc. 1 (10): 241–248. doi:10.1090/s0002-9904-1895-00283-9.
10. Lovett, E. O. (1897). "Review: Geometrie der Berührungstransformationen". Bull. Amer. Math. Soc. 3 (9): 321–350. doi:10.1090/s0002-9904-1897-00430-x.
11. Schilling, O. F. G. (1939). "Book Review: Sophus Lie's Gesammelte Abhandlungen. Geometrische Abhandlungen, Volumes I & II". Bulletin of the American Mathematical Society. 45 (7): 513–514. doi:10.1090/S0002-9904-1939-07032-8. ISSN 0002-9904.
12. Carmichael, R. D. (1930). "Book Review: vol. IV of Sophus Lie's Gesammelte Abhandlungen (Samlede Avhandlinger, Norwegian edition published by Aschehoug)". Bulletin of the American Mathematical Society. 36 (5): 337–338. doi:10.1090/S0002-9904-1930-04950-2. ISSN 0002-9904. (with links to 1923 review of Vol. III, 1925 review of Vol. V, & 1928 review of Vol. VI)
References
• Fritzsche, Bernd (1999), "Sophus Lie: A Sketch of his Life and Work", Journal of Lie Theory, vol. 9, no. 1, pp. 1–38, ISSN 0949-5932, MR 1680023, Zbl 0927.01029, retrieved 2 December 2010
• Freudenthal, Hans (1970–1980), "Lie, Marius Sophus", Dictionary of Scientific Biography, Charles Scribner's Sons
• Stubhaug, Arild (2002), The mathematician Sophus Lie: It was the audacity of my thinking, Springer-Verlag, ISBN 3-540-42137-8
• Yaglom, Isaak Moiseevich (1988), Grant, Hardy; Shenitzer, Abe (eds.), Felix Klein and Sophus Lie: Evolution of the idea of symmetry in the nineteenth century, Birkhäuser, ISBN 3-7643-3316-2
External links
• Chisholm, Hugh, ed. (1911). "Lie, Marius Sophus" . Encyclopædia Britannica (11th ed.). Cambridge University Press.
• O'Connor, John J.; Robertson, Edmund F. (February 2000), "Sophus Lie", MacTutor History of Mathematics Archive, University of St Andrews
• Works by Sophus Lie at Project Gutenberg
• Works by or about Sophus Lie at Internet Archive
• "The foundations of the theory of infinite continuous transformation groups – I" An English translation of a key paper by Lie (Part I)
• "The foundations of the theory of infinite continuous transformation groups – II" An English translation of a key paper by Lie (Part II)
• "On complexes – in particular, line and sphere complexes – with applications to the theory of partial differential equations" An English translation of a key paper by Lie
• "Foundations of an invariant theory of contact transformations" An English translation of a key paper by Lie
• "The infinitesimal contact transformations of mechanics" An English translation of a key paper by Lie
• U. Amaldi, "On the principal results obtained in the theory of continuous groups since the death of Sophus Lie (1898–1907)" English translation of a survey paper that followed his death
|
Sorgenfrey plane
In topology, the Sorgenfrey plane is a frequently cited counterexample to many otherwise plausible-sounding conjectures. It consists of the product of two copies of the Sorgenfrey line, which is the real line $\mathbb {R} $ under the half-open interval topology. The Sorgenfrey line and plane are named for the American mathematician Robert Sorgenfrey.
A basis for the Sorgenfrey plane, denoted $\mathbb {S} $ from now on, is therefore the set of rectangles that include the west edge, southwest corner, and south edge, and omit the southeast corner, east edge, northeast corner, north edge, and northwest corner. Open sets in $\mathbb {S} $ are unions of such rectangles.
$\mathbb {S} $ is an example of a space that is a product of Lindelöf spaces that is not itself a Lindelöf space. The so-called anti-diagonal $\Delta =\{(x,-x)\mid x\in \mathbb {R} \}$ is an uncountable discrete subset of this space, and this is a non-separable subset of the separable space $\mathbb {S} $. It shows that separability does not inherit to closed subspaces. Note that $K=\{(x,-x)\mid x\in \mathbb {Q} \}$ and $\Delta \setminus K$ are closed sets; it can be proved that they cannot be separated by open sets, showing that $\mathbb {S} $ is not normal. Thus it serves as a counterexample to the notion that the product of normal spaces is normal; in fact, it shows that even the finite product of perfectly normal spaces need not be normal.
See also
• List of topologies
• Moore plane
References
• Kelley, John L. (1955). General Topology. van Nostrand. Reprinted as Kelley, John L. (1975). General Topology. Springer-Verlag. ISBN 0-387-90125-6.
• Robert Sorgenfrey, "On the topological product of paracompact spaces", Bull. Amer. Math. Soc. 53 (1947) 631–632.
• Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978]. Counterexamples in Topology (Dover reprint of 1978 ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-486-68735-3. MR 0507446.
Sorting algorithm
In computer science, a sorting algorithm is an algorithm that puts elements of a list into an order. The most frequently used orders are numerical order and lexicographical order, and either ascending or descending. Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output.
Formally, the output of any sorting algorithm must satisfy two conditions:
1. The output is in monotonic order (each element is no smaller/larger than the previous element, according to the required order).
2. The output is a permutation (a reordering, yet retaining all of the original elements) of the input.
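The two formal conditions can be stated directly in code; a minimal sketch (the helper name is illustrative):

```python
from collections import Counter

def is_valid_sort(output, original):
    """Check the two formal conditions on a sorting algorithm's output."""
    # 1. Monotonic order: each element is no smaller than its predecessor.
    monotonic = all(a <= b for a, b in zip(output, output[1:]))
    # 2. Permutation: the same elements with the same multiplicities.
    permutation = Counter(output) == Counter(original)
    return monotonic and permutation

print(is_valid_sort([1, 2, 2, 3], [2, 1, 3, 2]))  # True
print(is_valid_sort([1, 2, 3], [2, 2, 3]))        # False: not a permutation
```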
For optimum efficiency, the input data should be stored in a data structure which allows random access rather than one that allows only sequential access.
History and concepts
From the beginning of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. Among the authors of early sorting algorithms around 1951 was Betty Holberton, who worked on ENIAC and UNIVAC.[1][2] Bubble sort was analyzed as early as 1956.[3] Asymptotically optimal algorithms have been known since the mid-20th century – new algorithms are still being invented, with the widely used Timsort dating to 2002, and the library sort being first published in 2006.
Comparison sorting algorithms have a fundamental requirement of Ω(n log n) comparisons (some input sequences will require a multiple of n log n comparisons, where n is the number of elements in the array to be sorted). Algorithms not based on comparisons, such as counting sort, can have better performance.
Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide-and-conquer algorithms, data structures such as heaps and binary trees, randomized algorithms, best, worst and average case analysis, time–space tradeoffs, and upper and lower bounds.
Sorting small arrays optimally (in fewest comparisons and swaps) or fast (i.e. taking into account machine specific details) is still an open research problem, with solutions only known for very small arrays (<20 elements). Similarly optimal (by various definitions) sorting on a parallel machine is an open research topic.
Classification
Sorting algorithms can be classified by:
• Computational complexity
• Best, worst and average case behavior in terms of the size of the list. For typical serial sorting algorithms, good behavior is O(n log n), with parallel sort in O(log2 n), and bad behavior is O(n2). Ideal behavior for a serial sort is O(n), but this is not possible in the average case. Optimal parallel sorting is O(log n).
• Swaps for "in-place" algorithms.
• Memory usage (and use of other computer resources). In particular, some sorting algorithms are "in-place". Strictly, an in-place sort needs only O(1) memory beyond the items being sorted; sometimes O(log n) additional memory is considered "in-place".
• Recursion: Some algorithms are either recursive or non-recursive, while others may be both (e.g., merge sort).
• Stability: stable sorting algorithms maintain the relative order of records with equal keys (i.e., values).
• Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
• General method: insertion, exchange, selection, merging, etc. Exchange sorts include bubble sort and quicksort. Selection sorts include cycle sort and heapsort.
• Whether the algorithm is serial or parallel. The remainder of this discussion almost exclusively concentrates upon serial algorithms and assumes serial operation.
• Adaptability: Whether or not the presortedness of the input affects the running time. Algorithms that take this into account are known as adaptive.
• Online: An algorithm such as insertion sort that is online can sort a stream of input as it arrives, without having the whole list available in advance.
Stability
Stable sort algorithms sort equal elements in the same order that they appear in the input. For example, consider sorting a hand of cards by rank while ignoring suit. There are then multiple different correctly sorted versions of the original list. Stable sorting algorithms choose one of these, according to the following rule: if two items compare as equal (like two 5 cards of different suits), then their relative order will be preserved, i.e. if one comes before the other in the input, it will come before the other in the output.
Stability is important to preserve order over multiple sorts on the same data set. For example, say that student records consisting of name and class section are sorted dynamically, first by name, then by class section. If a stable sorting algorithm is used in both cases, the sort-by-class-section operation will not change the name order; with an unstable sort, it could be that sorting by section shuffles the name order, resulting in a nonalphabetical list of students.
More formally, the data being sorted can be represented as a record or tuple of values, and the part of the data that is used for sorting is called the key. In the card example, cards are represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable if whenever there are two records R and S with the same key, and R appears before S in the original list, then R will always appear before S in the sorted list.
When equal elements are indistinguishable, such as with integers, or more generally, any data where the entire element is the key, stability is not an issue. Stability is also not an issue if all keys are different.
Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison, so that comparisons between two objects with otherwise equal keys are decided using the order of the entries in the original input list as a tie-breaker. Remembering this order, however, may require additional time and space.
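The key-extension trick can be sketched in Python. Here heapsort, built on the standard heapq module, stands in for an arbitrary unstable sort; the names stabilized and heap_sort are illustrative, not standard.

```python
import heapq

def heap_sort(seq):
    # A simple heapsort; as a standalone sort it is not stable.
    heap = list(seq)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

def stabilized(unstable_sort, items, key):
    """Make any sort stable by extending the key with each item's
    original position as a tie-breaker."""
    decorated = [(key(x), i, x) for i, x in enumerate(items)]
    # The sort never compares the items themselves: (key, index) is unique.
    return [x for _, _, x in unstable_sort(decorated)]

records = [("Alice", 2), ("Bob", 1), ("Carol", 2), ("Dave", 1)]
print(stabilized(heap_sort, records, key=lambda r: r[1]))
# ties on the section number keep the original name order:
# [('Bob', 1), ('Dave', 1), ('Alice', 2), ('Carol', 2)]
```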
One application for stable sorting algorithms is sorting a list using a primary and secondary key. For example, suppose we wish to sort a hand of cards such that the suits are in the order clubs (♣), diamonds (♦), hearts (♥), spades (♠), and within each suit, the cards are sorted by rank. This can be done by first sorting the cards by rank (using any sort), and then doing a stable sort by suit:
Within each suit, the stable sort preserves the ordering by rank that was already done. This idea can be extended to any number of keys and is utilised by radix sort. The same effect can be achieved with an unstable sort by using a lexicographic key comparison, which, e.g., compares first by suit, and then compares by rank if the suits are the same.
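A minimal Python illustration of the two equivalent approaches (Python's built-in sorted is a stable Timsort, so it serves for the stable second pass):

```python
import random

SUITS = "♣♦♥♠"  # clubs < diamonds < hearts < spades

deck = [(rank, suit) for suit in SUITS for rank in range(1, 14)]
random.shuffle(deck)

# Two passes: sort by rank first, then a stable sort by suit.
by_rank_then_suit = sorted(sorted(deck, key=lambda c: c[0]),
                           key=lambda c: SUITS.index(c[1]))

# One pass with a lexicographic (suit, rank) key gives the same order.
lexicographic = sorted(deck, key=lambda c: (SUITS.index(c[1]), c[0]))

print(by_rank_then_suit == lexicographic)  # True
```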
Comparison of algorithms
In these tables, n is the number of records to be sorted. The columns "Best", "Average" and "Worst" give the time complexity in each case, under the assumption that the length of each key is constant, and therefore that all comparisons, swaps and other operations can proceed in constant time. "Memory" denotes the amount of extra storage needed additionally to that used by the list itself, under the same assumption. The run times and the memory requirements listed are inside big O notation, hence the base of the logarithms does not matter. The notation log2 n means (log n)2.
Comparison sorts
Below is a table of comparison sorts. A comparison sort cannot perform better than O(n log n) on average.[4]
Comparison sorts
Name | Best | Average | Worst | Memory | Stable | Method | Other notes
Quicksort $n\log n$ $n\log n$ $n^{2}$ $\log n$ No Partitioning Quicksort is usually done in-place with O(log n) stack space.[5][6]
Merge sort $n\log n$ $n\log n$ $n\log n$ n Yes Merging Highly parallelizable (up to O(log n) using the Three Hungarians' Algorithm).[7]
In-place merge sort — — $n\log ^{2}n$ 1 Yes Merging Can be implemented as a stable sort based on stable in-place merging.[8]
Introsort $n\log n$ $n\log n$ $n\log n$ $\log n$ No Partitioning & Selection Used in several STL implementations.
Heapsort $n\log n$ $n\log n$ $n\log n$ 1 No Selection
Insertion sort n $n^{2}$ $n^{2}$ 1 Yes Insertion O(n + d), in the worst case over sequences that have d inversions.
Block sort n $n\log n$ $n\log n$ 1 Yes Insertion & Merging Combines a block-based $O(n)$ in-place merge algorithm[9] with a bottom-up merge sort.
Timsort n $n\log n$ $n\log n$ n Yes Insertion & Merging Makes n-1 comparisons when the data is already sorted or reverse sorted.
Selection sort $n^{2}$ $n^{2}$ $n^{2}$ 1 No Selection Stable with $O(n)$ extra space, when using linked lists, or when made as a variant of Insertion Sort instead of swapping the two items.[10]
Cubesort n $n\log n$ $n\log n$ n Yes Insertion Makes n-1 comparisons when the data is already sorted or reverse sorted.
Shellsort $n\log n$ $n^{4/3}$ $n^{3/2}$ 1 No Insertion Small code size.
Bubble sort n $n^{2}$ $n^{2}$ 1 Yes Exchanging Tiny code size.
Exchange sort $n^{2}$ $n^{2}$ $n^{2}$ 1 No Exchanging Tiny code size.
Tree sort $n\log n$ $n\log n$ $n\log n$(balanced) n Yes Insertion When using a self-balancing binary search tree.
Cycle sort $n^{2}$ $n^{2}$ $n^{2}$ 1 No Selection In-place with theoretically optimal number of writes.
Library sort $n\log n$ $n\log n$ $n^{2}$ n No Insertion Similar to a gapped insertion sort. It requires randomly permuting the input to warrant with-high-probability time bounds, which makes it not stable.
Patience sorting n $n\log n$ $n\log n$ n No Insertion & Selection Finds all the longest increasing subsequences in O(n log n).
Smoothsort n $n\log n$ $n\log n$ 1 No Selection An adaptive variant of heapsort based upon the Leonardo sequence rather than a traditional binary heap.
Strand sort n $n^{2}$ $n^{2}$ n Yes Selection
Tournament sort $n\log n$ $n\log n$ $n\log n$ n[11] No Selection Variation of Heapsort.
Cocktail shaker sort n $n^{2}$ $n^{2}$ 1 Yes Exchanging A variant of bubble sort which deals well with small values at the end of the list.
Comb sort $n\log n$ $n^{2}$ $n^{2}$ 1 No Exchanging Faster than bubble sort on average.
Gnome sort n $n^{2}$ $n^{2}$ 1 Yes Exchanging Tiny code size.
Odd–even sort n $n^{2}$ $n^{2}$ 1 Yes Exchanging Can be run on parallel processors easily.
Non-comparison sorts
The following table describes integer sorting algorithms and other sorting algorithms that are not comparison sorts. As such, they are not limited to Ω(n log n).[12] Complexities below assume n items to be sorted, with keys of size k, digit size d, and r the range of numbers to be sorted. Many of them are based on the assumption that the key size is large enough that all entries have unique key values, and hence that n ≪ 2k, where ≪ means "much less than". In the unit-cost random-access machine model, algorithms with running time of $\scriptstyle n\cdot {\frac {k}{d}}$, such as radix sort, still take time proportional to Θ(n log n), because n is limited to be not more than $2^{\frac {k}{d}}$, and a larger number of elements to sort would require a bigger k in order to store them in the memory.[13]
Non-comparison sorts
Name | Best | Average | Worst | Memory | Stable | n ≪ 2k | Notes
Pigeonhole sort — $n+2^{k}$ $n+2^{k}$ $2^{k}$ Yes Yes Cannot sort non-integers.
Bucket sort (uniform keys) — $n+k$ $n^{2}\cdot k$ $n\cdot k$ Yes No Assumes uniform distribution of elements from the domain in the array.[14]
Also cannot sort non-integers.
Bucket sort (integer keys) — $n+r$ $n+r$ $n+r$ Yes Yes If r is $O(n)$, then average time complexity is $O(n)$.[15]
Counting sort — $n+r$ $n+r$ $n+r$ Yes Yes If r is $O(n)$, then average time complexity is $O(n)$.[14]
LSD Radix Sort $n$ $n\cdot {\frac {k}{d}}$ $n\cdot {\frac {k}{d}}$ $n+2^{d}$ Yes No ${\frac {k}{d}}$ recursion levels, 2d for count array.[14][15]
Unlike most distribution sorts, this can sort non-integers.
MSD Radix Sort — $n\cdot {\frac {k}{d}}$ $n\cdot {\frac {k}{d}}$ $n+2^{d}$ Yes No Stable version uses an external array of size n to hold all of the bins.
Same as the LSD variant, it can sort non-integers.
MSD Radix Sort (in-place) — $n\cdot {\frac {k}{1}}$ $n\cdot {\frac {k}{1}}$ $2^{1}$ No No d=1 for in-place, $k/1$ recursion levels, no count array.
Spreadsort n $n\cdot {\frac {k}{d}}$ $n\cdot \left({{\frac {k}{s}}+d}\right)$ ${\frac {k}{d}}\cdot 2^{d}$ No No Asymptotics are based on the assumption that n ≪ 2k, but the algorithm does not require this.
Burstsort — $n\cdot {\frac {k}{d}}$ $n\cdot {\frac {k}{d}}$ $n\cdot {\frac {k}{d}}$ No No Has better constant factor than radix sort for sorting strings. Though relies somewhat on specifics of commonly encountered strings.
Flashsort n $n+r$ $n^{2}$ n No No Requires uniform distribution of elements from the domain in the array to run in linear time. If distribution is extremely skewed then it can go quadratic if underlying sort is quadratic (it is usually an insertion sort). In-place version is not stable.
Postman sort — $n\cdot {\frac {k}{d}}$ $n\cdot {\frac {k}{d}}$ $n+2^{d}$ — No A variation of bucket sort, which works very similarly to MSD Radix Sort. Specific to post service needs.
Samplesort can be used to parallelize any of the non-comparison sorts, by efficiently distributing data into several buckets and then passing down sorting to several processors, with no need to merge as buckets are already sorted between each other.
Others
Some algorithms are slow compared to those discussed above, such as bogosort, whose run time is unbounded, and stooge sort, which has O(n^2.7) run time. These sorts are usually described for educational purposes to demonstrate how the run time of algorithms is estimated. The following table describes some sorting algorithms that are impractical for real-life use in traditional software contexts due to extremely poor performance or specialized hardware requirements.
NameBestAverageWorstMemoryStableComparisonOther notes
Bead sort n S S $n^{2}$ — No Works only with positive integers. Requires specialized hardware for it to run in guaranteed $O(n)$ time. There is a possibility for software implementation, but running time will be $O(S)$, where S is the sum of all integers to be sorted; in the case of small integers, it can be considered to be linear.
Simple pancake sort $1$ $n$ $n$ 1 No Yes Count is number of flips.
Merge-insertion sort $n\log n$ comparisons $n\log n$ comparisons $n\log n$ comparisons Varies No Yes Makes very few comparisons in the worst case compared to other sorting algorithms. Mostly of theoretical interest due to implementational complexity and suboptimal data moves.
"I Can't Believe It Can Sort"[16] $n^{2}$ $n^{2}$ $n^{2}$ 1 No Yes Notable primarily for appearing to be an erroneous implementation of either Insertion Sort or Exchange Sort.
Spaghetti (Poll) sort n n n $n^{2}$ Yes Polling This is a linear-time, analog algorithm for sorting a sequence of items, requiring O(n) stack space, and the sort is stable. This requires n parallel processors. See spaghetti sort#Analysis.
Sorting network Varies Varies Varies Varies Varies (stable sorting networks require more comparisons) Yes Order of comparisons are set in advance based on a fixed network size.
Bitonic sorter $\log ^{2}n$ parallel $\log ^{2}n$ parallel $n\log ^{2}n$ non-parallel 1 No Yes An effective variation of Sorting networks.
Bogosort n $(n\times n!)$ Unbounded 1 No Yes Random shuffling. Used for example purposes only, as even the expected best-case runtime is awful.[17]
Worst case is unbounded when using randomization, but a deterministic version guarantees $O(n\times n!)$ worst case.
Stooge sort $n^{\log 3/\log 1.5}$ $n^{\log 3/\log 1.5}$ $n^{\log 3/\log 1.5}$ n No Yes Slower than most sorting algorithms (even naive ones), with a time complexity of O(n^(log 3/log 1.5)) = O(n^2.7095...). Can be made stable, and is also a sorting network.
Slowsort $n^{\Omega (\log n)}$ $n^{\Omega (\log n)}$ $n^{\Omega (\log n)}$ n No Yes A multiply and surrender algorithm, antonymous with divide-and-conquer algorithm.
Franceschini's method[18] $-$ $n\log n$ $n\log n$ 1 Yes Yes Makes O(n) data moves in the worst case. Possesses ideal comparison sort asymptotic bounds but is only of theoretical interest.
Theoretical computer scientists have detailed other sorting algorithms that provide better than O(n log n) time complexity assuming additional constraints, including:
• Thorup's algorithm, a randomized algorithm for sorting keys from a domain of finite size, taking O(n log log n) time and O(n) space.[19]
• A randomized integer sorting algorithm taking $O\left(n{\sqrt {\log \log n}}\right)$ expected time and O(n) space.[20]
• One of the authors of the previously mentioned algorithm also claims to have discovered an algorithm taking $O\left(n{\sqrt {\log n}}\right)$ time and O(n) space that sorts real numbers.[21] He further claims that, without any added assumptions on the input, it can be modified to achieve $O\left(n\log n/{\sqrt {\log \log n}}\right)$ time and O(n) space.
Popular sorting algorithms
While there are a large number of sorting algorithms, in practical implementations a few algorithms predominate. Insertion sort is widely used for small data sets, while for large data sets an asymptotically efficient sort is used, primarily heapsort, merge sort, or quicksort. Efficient implementations generally use a hybrid algorithm, combining an asymptotically efficient algorithm for the overall sort with insertion sort for small lists at the bottom of a recursion. Highly tuned implementations use more sophisticated variants, such as Timsort (merge sort, insertion sort, and additional logic), used in Android, Java, and Python, and introsort (quicksort and heapsort), used (in variant forms) in some C++ sort implementations and in .NET.
For more restricted data, such as numbers in a fixed interval, distribution sorts such as counting sort or radix sort are widely used. Bubble sort and variants are rarely used in practice, but are commonly found in teaching and theoretical discussions.
When physically sorting objects (such as alphabetizing papers, tests or books) people intuitively generally use insertion sorts for small sets. For larger sets, people often first bucket, such as by initial letter, and multiple bucketing allows practical sorting of very large sets. Often space is relatively cheap, such as by spreading objects out on the floor or over a large area, but operations are expensive, particularly moving an object a large distance – locality of reference is important. Merge sorts are also practical for physical objects, particularly as two hands can be used, one for each list to merge, while other algorithms, such as heapsort or quicksort, are poorly suited for human use. Other algorithms, such as library sort, a variant of insertion sort that leaves spaces, are also practical for physical use.
Simple sorts
Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on small data, due to low overhead, but not efficient on large data. Insertion sort is generally faster than selection sort in practice, due to fewer comparisons and good performance on almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes, and thus is used when write performance is a limiting factor.
Insertion sort
Main article: Insertion sort
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list similar to how we put money in our wallet.[22] In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shellsort is a variant of insertion sort that is more efficient for larger lists.
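The insertion step described above can be sketched in a few lines of Python (an illustrative implementation, not drawn from any particular library):

```python
def insertion_sort(a):
    """Sort list a in place; stable, O(n^2) worst case, fast on nearly sorted input."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot right to open a gap for key.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

The inner while loop is exactly the "expensive shifting" mentioned above: inserting near the front of an almost-full prefix moves every following element over by one.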
Selection sort
Main article: Selection sort
Selection sort is an in-place comparison sort. It has O(n2) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and also has performance advantages over more complicated algorithms in certain situations.
The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list.[23] It does no more than n swaps, and thus is useful where swapping is very expensive.
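A minimal Python sketch of the find-minimum-and-swap loop, showing that at most n − 1 swaps are performed:

```python
def selection_sort(a):
    """In-place selection sort: O(n^2) comparisons, but at most n-1 swaps."""
    n = len(a)
    for i in range(n - 1):
        m = i
        # Find the index of the smallest remaining element.
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        if m != i:
            a[i], a[m] = a[m], a[i]  # one swap per pass, at most
    return a
```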
Efficient sorts
Practical general sorting algorithms are almost always based on an algorithm with average time complexity (and generally worst-case complexity) O(n log n), of which the most common are heapsort, merge sort, and quicksort. Each has advantages and drawbacks, with the most significant being that simple implementation of merge sort uses O(n) additional space, and simple implementation of quicksort has O(n2) worst-case complexity. These problems can be solved or ameliorated at the cost of a more complex algorithm.
While these algorithms are asymptotically efficient on random data, for practical efficiency on real-world data various modifications are used. First, the overhead of these algorithms becomes significant on smaller data, so often a hybrid algorithm is used, commonly switching to insertion sort once the data is small enough. Second, the algorithms often perform poorly on already sorted data or almost sorted data – these are common in real-world data, and can be sorted in O(n) time by appropriate algorithms. Finally, they may also be unstable, and stability is often a desirable property in a sort. Thus more sophisticated algorithms are often employed, such as Timsort (based on merge sort) or introsort (based on quicksort, falling back to heapsort).
Merge sort
Main article: Merge sort
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list.[24] Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n). It is also easily applied to lists, not only arrays, as it only requires sequential access, not random access. However, it has additional O(n) space complexity, and involves a large number of copies in simple implementations.
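The scheme above is commonly implemented top-down by recursion; a short illustrative Python sketch (the `<=` in the merge step is what preserves stability):

```python
def merge_sort(a):
    """Stable top-down merge sort; O(n log n) time, O(n) extra space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal elements in order (stable)
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```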
Merge sort has seen a relatively recent surge in popularity for practical implementations, due to its use in the sophisticated algorithm Timsort, which is used for the standard sort routine in the programming languages Python[25] and Java (as of JDK7[26]). Merge sort itself is the standard routine in Perl,[27] among others, and has been used in Java at least since 2000 in JDK1.3.[28]
Heapsort
Main article: Heapsort
Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree.[29] Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heapsort to run in O(n log n) time, and this is also the worst case complexity.
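An illustrative in-place Python sketch using an explicit sift-down; the heap lives inside the array itself, with no library heap structure:

```python
def heapsort(a):
    """In-place heapsort: build a max-heap, then repeatedly move the root to the end."""
    def sift_down(root, end):
        # Restore the heap property for the subtree rooted at `root`, within a[0..end].
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                        # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):       # heapify, O(n) overall
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]               # largest remaining element to the end
        sift_down(0, end - 1)
    return a
```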
Quicksort
Main article: Quicksort
Quicksort is a divide-and-conquer algorithm which relies on a partition operation: to partition an array, an element called a pivot is selected.[30][31] All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in-place. The lesser and greater sublists are then recursively sorted. This yields average time complexity of O(n log n), with low overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, quicksort is one of the most popular sorting algorithms and is available in many standard programming libraries.
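A short Python sketch of the partition-and-recurse idea; for clarity it builds new lists rather than partitioning in place (so it is not the memory-efficient variant described above), and it uses a random pivot:

```python
import random

def quicksort(a):
    """Average O(n log n); a random pivot makes the O(n^2) worst case very unlikely."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```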
The important caveat about quicksort is that its worst-case performance is O(n2); while this is rare, in naive implementations (choosing the first or last element as pivot) this occurs for sorted data, which is a common case. The most complex issue in quicksort is thus choosing a good pivot element, as consistently poor choices of pivots can result in drastically slower O(n2) performance, but good choice of pivots yields O(n log n) performance, which is asymptotically optimal. For example, if at each step the median is chosen as the pivot then the algorithm works in O(n log n). Finding the median, such as by the median of medians selection algorithm, is however an O(n) operation on unsorted lists and therefore exacts significant overhead with sorting. In practice choosing a random pivot almost certainly yields O(n log n) performance.
If a guarantee of O(n log n) performance is important, there is a simple modification to achieve that. The idea, due to Musser, is to set a limit on the maximum depth of recursion.[32] If that limit is exceeded, then sorting is continued using the heapsort algorithm. Musser proposed that the limit should be $1+2\lfloor \log _{2}(n)\rfloor $, which is approximately twice the maximum recursion depth one would expect on average with a randomly ordered array.
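Musser's modification can be sketched as follows; this illustrative version is not in-place and uses Python's heapq for the heapsort fallback, so it shows the depth-limit idea rather than a production introsort:

```python
import heapq
import math

def introsortish(a, maxdepth=None):
    """Quicksort with Musser's depth limit 1 + 2*floor(log2 n);
    falls back to heapsort (via heapq) once the limit is exceeded."""
    if len(a) <= 1:
        return list(a)
    if maxdepth is None:
        maxdepth = 1 + 2 * math.floor(math.log2(len(a)))
    if maxdepth == 0:
        # Recursion too deep: finish this sublist with heapsort instead.
        h = list(a)
        heapq.heapify(h)
        return [heapq.heappop(h) for _ in range(len(h))]
    pivot = a[len(a) // 2]
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return (introsortish(less, maxdepth - 1) + equal
            + introsortish(greater, maxdepth - 1))
```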
Shellsort
Main article: Shellsort
Shellsort was invented by Donald Shell in 1959.[33] It improves upon insertion sort by moving out-of-order elements more than one position at a time. The concept behind Shellsort is that insertion sort performs in $O(kn)$ time, where k is the greatest distance between two out-of-place elements. This means that generally it performs in O(n2), but for data that is mostly sorted, with only a few elements out of place, it performs faster. So, by first sorting elements far away, and progressively shrinking the gap between the elements to sort, the final sort computes much faster. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.
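The gapped insertion sort can be sketched as follows, using Shell's original gap sequence n/2, n/4, ..., 1 (better gap sequences exist, as noted below):

```python
def shellsort(a):
    """Insertion sort performed over progressively shrinking gaps."""
    gap = len(a) // 2
    while gap > 0:
        # Gapped insertion sort: every gap-th slice becomes sorted.
        for i in range(gap, len(a)):
            key, j = a[i], i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]
                j -= gap
            a[j] = key
        gap //= 2      # the final pass (gap == 1) is a plain insertion sort
    return a
```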
The worst-case time complexity of Shellsort is an open problem and depends on the gap sequence used, with known complexities ranging from O(n2) to O(n4/3) and Θ(n log2 n). This, combined with the fact that Shellsort is in-place, needs only a relatively small amount of code, and does not require use of the call stack, makes it useful in situations where memory is at a premium, such as in embedded systems and operating system kernels.
Bubble sort and variants
Bubble sort, and variants such as the Comb sort and cocktail sort, are simple, highly inefficient sorting algorithms. They are frequently seen in introductory texts due to ease of analysis, but they are rarely used in practice.
Bubble sort
Main article: Bubble sort
Bubble sort is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass.[34] This algorithm's average time and worst-case performance is O(n2), so it is rarely used to sort large, unordered data sets. Bubble sort can be used to sort a small number of items (where its asymptotic inefficiency is not a high penalty). Bubble sort can also be used efficiently on a list of any length that is nearly sorted (that is, the elements are not significantly out of place). For example, if any number of elements are out of place by only one position (e.g. 0123546789 and 1032547698), bubble sort's exchange will get them in order on the first pass, the second pass will find all elements in order, so the sort will take only 2n time.[35]
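A Python sketch of the pass-until-no-swaps loop; after each pass the largest remaining element is in its final place, so the scanned range can shrink:

```python
def bubble_sort(a):
    """Repeatedly swap adjacent out-of-order pairs; stop when a pass makes no swaps."""
    n = len(a)
    while True:
        swapped = False
        for i in range(1, n):
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
        n -= 1                 # the largest element has bubbled to position n-1
        if not swapped:
            return a
```

On nearly sorted input the second pass finds nothing to swap and the loop exits, giving the linear-time behavior described above.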
Comb sort
Comb sort is a relatively simple sorting algorithm based on bubble sort and originally designed by Włodzimierz Dobosiewicz in 1980.[36] It was later rediscovered and popularized by Stephen Lacey and Richard Box with a Byte Magazine article published in April 1991. The basic idea is to eliminate turtles, or small values near the end of the list, since in a bubble sort these slow the sorting down tremendously. (Rabbits, large values around the beginning of the list, do not pose a problem in bubble sort.) It accomplishes this by initially swapping elements that are a certain distance from one another in the array, rather than only swapping elements if they are adjacent to one another, and then shrinking the chosen distance until it is operating as a normal bubble sort. Thus, if Shellsort can be thought of as a generalized version of insertion sort that swaps elements spaced a certain distance away from one another, comb sort can be thought of as the same generalization applied to bubble sort.
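An illustrative sketch using the commonly cited shrink factor of 1.3 (the exact factor is an empirical choice, not fixed by the algorithm):

```python
def comb_sort(a):
    """Bubble sort with a shrinking gap, which quickly moves 'turtles' forward."""
    gap, swapped = len(a), True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))     # shrink the gap; floor at 1
        swapped = False
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a
```

Once the gap reaches 1 the loop is an ordinary bubble sort, terminating on the first swap-free pass.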
Exchange sort
Exchange sort is sometimes confused with bubble sort, although the algorithms are in fact distinct.[37][38] Exchange sort works by comparing the first element with all elements above it, swapping where needed, thereby guaranteeing that the first element is correct for the final sort order; it then proceeds to do the same for the second element, and so on. It lacks the advantage which bubble sort has of detecting in one pass if the list is already sorted, but it can be faster than bubble sort by a constant factor (one less pass over the data to be sorted; half as many total comparisons) in worst case situations. Like any simple O(n2) sort it can be reasonably fast over very small data sets, though in general insertion sort will be faster.
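The compare-with-all-later-elements scheme can be sketched as:

```python
def exchange_sort(a):
    """Compare a[i] with every later element, swapping so a[i] ends up correct."""
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[j] < a[i]:
                a[i], a[j] = a[j], a[i]   # a[i] is final once the inner loop ends
    return a
```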
Distribution sorts
Distribution sort refers to any sorting algorithm where data is distributed from their input to multiple intermediate structures which are then gathered and placed on the output. For example, both bucket sort and flashsort are distribution based sorting algorithms. Distribution sorting algorithms can be used on a single processor, or they can be a distributed algorithm, where individual subsets are separately sorted on different processors, then combined. This allows external sorting of data too large to fit into a single computer's memory.
Counting sort
Counting sort is applicable when each input is known to belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and O(|S|) memory where n is the length of the input. It works by creating an integer array of size |S| and using the ith bin to count the occurrences of the ith member of S in the input. Each input is then counted by incrementing the value of its corresponding bin. Afterward, the counting array is looped through to arrange all of the inputs in order. This sorting algorithm often cannot be used because S needs to be reasonably small for the algorithm to be efficient, but it is extremely fast and demonstrates great asymptotic behavior as n increases. It also can be modified to provide stable behavior.
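A stable counting sort over integers in range(k) can be sketched as follows; the prefix-sum pass turns counts into output positions, which is what makes the stable variant possible:

```python
def counting_sort(a, k):
    """Sort integers in range(k); O(n + k) time, O(k) extra counts. Stable."""
    counts = [0] * k
    for x in a:
        counts[x] += 1
    # Prefix sums: counts[v] becomes the index where the first copy of v belongs.
    total = 0
    for v in range(k):
        counts[v], total = total, total + counts[v]
    out = [None] * len(a)
    for x in a:                  # scanning the input in order keeps the sort stable
        out[counts[x]] = x
        counts[x] += 1
    return out
```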
Bucket sort
Main article: Bucket sort
Bucket sort is a divide-and-conquer sorting algorithm that generalizes counting sort by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm.
A bucket sort works best when the elements of the data set are evenly distributed across all buckets.
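An illustrative sketch for keys uniformly distributed in [0, 1); the per-bucket sort here is Python's built-in sorted, though insertion sort is the typical choice for small buckets:

```python
def bucket_sort(a, n_buckets=10):
    """Distribute values in [0, 1) into buckets, sort each, concatenate.
    Linear on average for uniformly distributed input."""
    buckets = [[] for _ in range(n_buckets)]
    for x in a:
        # Map each value to a bucket; clamp guards against x == 1.0 edge cases.
        buckets[min(int(x * n_buckets), n_buckets - 1)].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))    # any sort works here; buckets are small on average
    return out
```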
Radix sort
Main article: Radix sort
Radix sort is an algorithm that sorts numbers by processing individual digits. n numbers consisting of k digits each are sorted in O(n · k) time. Radix sort can process digits of each number either starting from the least significant digit (LSD) or starting from the most significant digit (MSD). The LSD algorithm first sorts the list by the least significant digit while preserving their relative order using a stable sort. Then it sorts them by the next digit, and so on from the least significant to the most significant, ending up with a sorted list. While the LSD radix sort requires the use of a stable sort, the MSD radix sort algorithm does not (unless stable sorting is desired). In-place MSD radix sort is not stable. It is common for the counting sort algorithm to be used internally by the radix sort. A hybrid sorting approach, such as using insertion sort for small bins, improves performance of radix sort significantly.
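An LSD radix sort for non-negative integers can be sketched with one stable bucketing pass per digit (here the bucketing pass plays the role of the internal counting sort):

```python
def radix_sort(a, base=10):
    """LSD radix sort for non-negative integers; O(n * k) for k digits."""
    if not a:
        return a
    place = 1
    while place <= max(a):
        buckets = [[] for _ in range(base)]
        for x in a:                        # appending in input order keeps each pass stable
            buckets[(x // place) % base].append(x)
        a = [x for b in buckets for x in b]
        place *= base                      # move on to the next more significant digit
    return a
```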
Memory usage patterns and index sorting
When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but due to the recursive way that it copies portions of the array it becomes much less practical when the array does not fit in RAM, because it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the disk-swapping problem. This procedure is sometimes called "tag sort".[39]
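The index ("tag sort") idea can be sketched as follows; the records and field names are hypothetical, chosen only to illustrate sorting a small index instead of moving bulky records:

```python
# Hypothetical records: a small sort key attached to a large payload.
records = [{"key": 30, "payload": "c" * 1000},
           {"key": 10, "payload": "a" * 1000},
           {"key": 20, "payload": "b" * 1000}]

# Sort a compact index of positions instead of the records themselves.
index = sorted(range(len(records)), key=lambda i: records[i]["key"])

# One pass through the index yields the records in key order, if needed.
sorted_keys = [records[i]["key"] for i in index]
```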
Another technique for overcoming the memory-size problem is external sorting, which combines two algorithms in a way that takes advantage of the strength of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit in RAM, the contents of each chunk sorted using an efficient algorithm (such as quicksort), and the results merged using a k-way merge similar to that used in merge sort. This is faster than performing either merge sort or quicksort over the entire list.[40][41]
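The chunk-and-merge scheme can be sketched as a toy external sort; this illustrative version spills each sorted run to a temporary file and merges the runs with heapq.merge (a k-way merge), standing in for real disk-resident data:

```python
import heapq
import os
import tempfile

def external_sort(values, chunk_size=4):
    """Sort runs that fit in 'memory', spill each to a temp file, then k-way merge."""
    paths = []
    for start in range(0, len(values), chunk_size):
        chunk = sorted(values[start:start + chunk_size])   # in-memory sort per chunk
        fd, path = tempfile.mkstemp(text=True)
        with os.fdopen(fd, "w") as f:
            f.writelines(f"{x}\n" for x in chunk)
        paths.append(path)
    files = [open(p) for p in paths]
    runs = [(int(line) for line in f) for f in files]
    merged = list(heapq.merge(*runs))                      # k-way merge of sorted runs
    for f in files:
        f.close()
    for p in paths:
        os.remove(p)
    return merged
```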
Techniques can also be combined. For sorting very large sets of data that vastly exceed system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.
Related algorithms
Related problems include approximate sorting (sorting a sequence to within a certain amount of the correct order), partial sorting (sorting only the k smallest elements of a list, or finding the k smallest elements, but unordered) and selection (computing the kth smallest element). These can be solved inefficiently by a total sort, but more efficient algorithms exist, often derived by generalizing a sorting algorithm. The most notable example is quickselect, which is related to quicksort. Conversely, some sorting algorithms can be derived by repeated application of a selection algorithm; quicksort and quickselect can be seen as the same pivoting move, differing only in whether one recurses on both sides (quicksort, divide-and-conquer) or one side (quickselect, decrease-and-conquer).
A kind of opposite of a sorting algorithm is a shuffling algorithm. These are fundamentally different because they require a source of random numbers. Shuffling can also be implemented by a sorting algorithm, namely by a random sort: assigning a random number to each element of the list and then sorting based on the random numbers. This is generally not done in practice, however, and there is a well-known simple and efficient algorithm for shuffling: the Fisher–Yates shuffle.
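The Fisher–Yates shuffle mentioned above is only a few lines:

```python
import random

def fisher_yates(a):
    """In-place unbiased shuffle: O(n) time, one random pick per position."""
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)       # 0 <= j <= i, inclusive
        a[i], a[j] = a[j], a[i]
    return a
```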
Sorting algorithms are ineffective for finding an order in many situations, usually when elements have no reliable comparison function (crowdsourced preferences such as voting systems), when comparisons are very costly (sports), or when it would be impossible to pairwise compare all elements for all criteria (search engines). In these cases, the problem is usually referred to as ranking and the goal is to find the "best" result for some criteria according to probabilities inferred from comparisons or rankings. A common example is in chess, where players are ranked with the Elo rating system, and rankings are determined by a tournament system instead of a sorting algorithm.
See also
• Collation – Assembly of written information into a standard order
• K-sorted sequence
• Pairwise comparison
• Schwartzian transform – Programming idiom for efficiently sorting a list by a computed key
• Search algorithm – Any algorithm which solves the search problem
• Quantum sort – Sorting algorithms for quantum computers
References
1. "Meet the 'Refrigerator Ladies' Who Programmed the ENIAC". Mental Floss. 2013-10-13. Archived from the original on 2018-10-08. Retrieved 2016-06-16.
2. Lohr, Steve (Dec 17, 2001). "Frances E. Holberton, 84, Early Computer Programmer". NYTimes. Archived from the original on 16 December 2014. Retrieved 16 December 2014.
3. Demuth, Howard B. (1956). Electronic Data Sorting (PhD thesis). Stanford University. ProQuest 301940891.
4. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009), "8", Introduction To Algorithms (3rd ed.), Cambridge, MA: The MIT Press, p. 167, ISBN 978-0-262-03293-3
5. Sedgewick, Robert (1 September 1998). Algorithms In C: Fundamentals, Data Structures, Sorting, Searching, Parts 1-4 (3 ed.). Pearson Education. ISBN 978-81-317-1291-7. Retrieved 27 November 2012.
6. Sedgewick, R. (1978). "Implementing Quicksort programs". Comm. ACM. 21 (10): 847–857. doi:10.1145/359619.359631. S2CID 10020756.
7. Ajtai, M.; Komlós, J.; Szemerédi, E. (1983). An O(n log n) sorting network. STOC '83. Proceedings of the fifteenth annual ACM symposium on Theory of computing. pp. 1–9. doi:10.1145/800061.808726. ISBN 0-89791-099-0.
8. Huang, B. C.; Langston, M. A. (December 1992). "Fast Stable Merging and Sorting in Constant Extra Space". Comput. J. 35 (6): 643–650. CiteSeerX 10.1.1.54.8381. doi:10.1093/comjnl/35.6.643.
9. Kim, P. S.; Kutzner, A. (2008). Ratio Based Stable In-Place Merging. TAMC 2008. Theory and Applications of Models of Computation. LNCS. Vol. 4978. pp. 246–257. CiteSeerX 10.1.1.330.2641. doi:10.1007/978-3-540-79228-4_22. ISBN 978-3-540-79227-7.
10. "SELECTION SORT (Java, C++) – Algorithms and Data Structures". Algolist.net. Archived from the original on 9 December 2012. Retrieved 14 April 2018.
11. Prof. E. Rahm. "Sortierverfahren" (PDF). Dbs.uni-leipzig.de. Archived (PDF) from the original on 23 August 2022. Retrieved 1 March 2022.
12. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), "8", Introduction To Algorithms (2nd ed.), Cambridge, MA: The MIT Press, p. 165, ISBN 0-262-03293-7
13. Nilsson, Stefan (2000). "The Fastest Sorting Algorithm?". Dr. Dobb's. Archived from the original on 2019-06-08. Retrieved 2015-11-23.
14. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. ISBN 0-262-03293-7.
15. Goodrich, Michael T.; Tamassia, Roberto (2002). "4.5 Bucket-Sort and Radix-Sort". Algorithm Design: Foundations, Analysis, and Internet Examples. John Wiley & Sons. pp. 241–243. ISBN 978-0-471-38365-9.
16. Fung, Stanley P. Y. (3 October 2021). "Is this the simplest (and most surprising) sorting algorithm ever?". arXiv:2110.01111 [cs.DS].
17. Gruber, H.; Holzer, M.; Ruepp, O., "Sorting the slow way: an analysis of perversely awful randomized sorting algorithms", 4th International Conference on Fun with Algorithms, Castiglioncello, Italy, 2007 (PDF), Lecture Notes in Computer Science, vol. 4475, Springer-Verlag, pp. 183–197, doi:10.1007/978-3-540-72914-3_17, archived (PDF) from the original on 2020-09-29, retrieved 2020-06-27.
18. Franceschini, G. (June 2007). "Sorting Stably, in Place, with O(n log n) Comparisons and O(n) Moves". Theory of Computing Systems. 40 (4): 327–353. doi:10.1007/s00224-006-1311-1.
19. Thorup, M. (February 2002). "Randomized Sorting in O(n log log n) Time and Linear Space Using Addition, Shift, and Bit-wise Boolean Operations". Journal of Algorithms. 42 (2): 205–230. doi:10.1006/jagm.2002.1211. S2CID 9700543.
20. Han, Yijie; Thorup, M. (2002). Integer sorting in O(n√(log log n)) expected time and linear space. The 43rd Annual IEEE Symposium on Foundations of Computer Science. pp. 135–144. doi:10.1109/SFCS.2002.1181890. ISBN 0-7695-1822-2.
21. Han, Yijie (2020-04-01). "Sorting Real Numbers in $$O\big (n\sqrt{\log n}\big )$$ Time and Linear Space". Algorithmica. 82 (4): 966–978. doi:10.1007/s00453-019-00626-0. ISSN 1432-0541.
22. Wirth, Niklaus (1986). Algorithms & Data Structures. Upper Saddle River, NJ: Prentice-Hall. pp. 76–77. ISBN 978-0130220059.
23. Wirth 1986, pp. 79–80
24. Wirth 1986, pp. 101–102
25. "Tim Peters's original description of timsort". python.org. Archived from the original on 22 January 2018. Retrieved 14 April 2018.
26. "OpenJDK's TimSort.java". java.net. Archived from the original on 14 August 2011. Retrieved 14 April 2018.
27. "sort – perldoc.perl.org". perldoc.perl.org. Archived from the original on 14 April 2018. Retrieved 14 April 2018.
28. Merge sort in Java 1.3, Sun. Archived 2009-03-04 at the Wayback Machine
29. Wirth 1986, pp. 87–89
30. Wirth 1986, p. 93
31. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009), Introduction to Algorithms (3rd ed.), Cambridge, MA: The MIT Press, pp. 171–172, ISBN 978-0262033848
32. Musser, David R. (1997), "Introspective Sorting and Selection Algorithms", Software: Practice and Experience, 27 (8): 983–993, doi:10.1002/(SICI)1097-024X(199708)27:8<983::AID-SPE117>3.0.CO;2-#
33. Shell, D. L. (1959). "A High-Speed Sorting Procedure" (PDF). Communications of the ACM. 2 (7): 30–32. doi:10.1145/368370.368387. S2CID 28572656. Archived from the original (PDF) on 2017-08-30. Retrieved 2020-03-23.
34. Wirth 1986, pp. 81–82
35. "kernel/groups.c". GitHub. Archived from the original on 2021-02-25. Retrieved 2012-05-05.
36. Brejová, B. (15 September 2001). "Analyzing variants of Shellsort". Inf. Process. Lett. 79 (5): 223–227. doi:10.1016/S0020-0190(00)00223-4.
37. "Exchange Sort Algorithm". CodingUnit Programming Tutorials. Archived from the original on 2021-07-10. Retrieved 2021-07-10.
38. "Exchange Sort". JavaBitsNotebook.com. Archived from the original on 2021-07-10. Retrieved 2021-07-10.
39. "tag sort Definition from PC Magazine Encyclopedia". Pcmag.com. Archived from the original on 6 October 2012. Retrieved 14 April 2018.
40. Donald Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching, Second Edition. Addison-Wesley, 1998, ISBN 0-201-89685-0, Section 5.4: External Sorting, pp. 248–379.
41. Ellis Horowitz and Sartaj Sahni, Fundamentals of Data Structures, H. Freeman & Co., ISBN 0-7167-8042-9.
Further reading
• Knuth, Donald E. (1998), Sorting and Searching, The Art of Computer Programming, vol. 3 (2nd ed.), Boston: Addison-Wesley, ISBN 0-201-89685-0
• Sedgewick, Robert (1980), "Efficient Sorting by Computer: An Introduction", Computational Probability, New York: Academic Press, pp. 101–130, ISBN 0-12-394680-8
External links
The Wikibook Algorithm implementation has a page on the topic of: Sorting algorithms
The Wikibook A-level Mathematics has a page on the topic of: Sorting algorithms
Wikimedia Commons has media related to Sorting algorithms.
• Sorting Algorithm Animations at the Wayback Machine (archived 3 March 2015).
• Sequential and parallel sorting algorithms – Explanations and analyses of many sorting algorithms.
• Dictionary of Algorithms, Data Structures, and Problems – Dictionary of algorithms, techniques, common functions, and problems.
• Slightly Skeptical View on Sorting Algorithms – Discusses several classic algorithms and promotes alternatives to the quicksort algorithm.
• 15 Sorting Algorithms in 6 Minutes (Youtube) – Visualization and "audibilization" of 15 Sorting Algorithms in 6 Minutes.
• A036604 sequence in OEIS database titled "Sorting numbers: minimal number of comparisons needed to sort n elements" – Performed by Ford–Johnson algorithm.
• Sorting Algorithms Used on Famous Paintings (Youtube) – Visualization of Sorting Algorithms on Many Famous Paintings.
• A Comparison of Sorting Algorithms – Runs a series of tests of 9 of the main sorting algorithms using Python timeit and Google Colab.
Sorting number
In mathematics and computer science, the sorting numbers are a sequence of numbers introduced in 1950 by Hugo Steinhaus for the analysis of comparison sort algorithms. These numbers give the worst-case number of comparisons used by both binary insertion sort and merge sort. However, there are other algorithms that use fewer comparisons.
Formula and examples
The $n$th sorting number is given by the formula[1]
$n\,S(n)-2^{S(n)}+1,$
where
$S(n)=\lfloor 1+\log _{2}n\rfloor .$
The sequence of numbers given by this formula (starting with $n=1$) is
0, 1, 3, 5, 8, 11, 14, 17, 21, 25, 29, 33, 37, 41, ... (sequence A001855 in the OEIS).
The same sequence of numbers can also be obtained from the recurrence relation[2],
$A(n)=A{\bigl (}\lfloor n/2\rfloor {\bigr )}+A{\bigl (}\lceil n/2\rceil {\bigr )}+n-1$
or closed form
$A(n)=n\lceil \log _{2}n\rceil -2^{\lceil \log _{2}n\rceil }+1.$
It is an example of a 2-regular sequence.[2]
Asymptotically, the value of the $n$th sorting number fluctuates between approximately $n\log _{2}n-n$ and $n\log _{2}n-0.915n,$ depending on the ratio between $n$ and the nearest power of two.[1]
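The formula, the recurrence, and the closed form above can be checked against each other and against the listed sequence values with a short Python sketch (the function names are ours, chosen for illustration). Bit lengths give exact values of $\lfloor 1+\log _{2}n\rfloor$ and $\lceil \log _{2}n\rceil$ without floating-point error:

```python
def S(n):
    # S(n) = floor(1 + log2 n), computed exactly as the bit length of n
    return n.bit_length()

def sorting_number(n):
    # Steinhaus's formula: n*S(n) - 2**S(n) + 1
    return n * S(n) - 2 ** S(n) + 1

def closed_form(n):
    # A(n) = n*ceil(log2 n) - 2**ceil(log2 n) + 1; ceil(log2 n) = (n-1).bit_length()
    k = (n - 1).bit_length()
    return n * k - 2 ** k + 1

def recurrence(n):
    # A(n) = A(floor(n/2)) + A(ceil(n/2)) + n - 1, with A(1) = 0
    if n <= 1:
        return 0
    return recurrence(n // 2) + recurrence((n + 1) // 2) + n - 1
```

All three agree for every n, and the first values reproduce 0, 1, 3, 5, 8, 11, 14, 17, 21, ... as listed above.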
Application to sorting
In 1950, Hugo Steinhaus observed that these numbers count the number of comparisons used by binary insertion sort, and conjectured (incorrectly) that they give the minimum number of comparisons needed to sort $n$ items using any comparison sort. The conjecture was disproved in 1959 by L. R. Ford Jr. and Selmer M. Johnson, who found a different sorting algorithm, the Ford–Johnson merge-insert sort, using fewer comparisons.[1]
The same sequence of sorting numbers also gives the worst-case number of comparisons used by merge sort to sort $n$ items.[2]
Other applications
The sorting numbers (shifted by one position) also give the sizes of the shortest possible superpatterns for the layered permutations.[3]
References
1. Ford, Lester R. Jr.; Johnson, Selmer M. (1959), "A tournament problem", American Mathematical Monthly, 66 (5): 387–389, doi:10.2307/2308750, JSTOR 2308750, MR 0103159
2. Allouche, Jean-Paul; Shallit, Jeffrey (1992), "The ring of $k$-regular sequences", Theoretical Computer Science, 98 (2): 163–197, doi:10.1016/0304-3975(92)90001-V, MR 1166363. See Example 28, p. 192.
3. Albert, Michael; Engen, Michael; Pantone, Jay; Vatter, Vincent (2018), "Universal layered permutations", Electronic Journal of Combinatorics, 25 (3): P23:1–P23:5, doi:10.37236/7386, S2CID 52100342
SOS-convexity
A multivariate polynomial is SOS-convex (or sum of squares convex) if its Hessian matrix H can be factored as $H(x)=S^{T}(x)S(x)$, where S is a matrix (possibly rectangular) whose entries are polynomials in x.[1] In other words, the Hessian matrix is an SOS matrix polynomial.
An equivalent definition is that the form defined as $g(x,y)=y^{T}H(x)y$ is a sum of squares of forms.[2]
Connection with convexity
If a polynomial is SOS-convex, then it is also convex. Since establishing whether a polynomial is SOS-convex amounts to solving a semidefinite programming problem, SOS-convexity can be used as a proxy for establishing whether a polynomial is convex. In contrast, deciding whether a generic polynomial of degree larger than four is convex is an NP-hard problem.[3]
The first example of a polynomial that is convex but not SOS-convex was constructed by Amir Ali Ahmadi and Pablo Parrilo in 2009.[4] It is a homogeneous polynomial that is a sum of squares, given by:[4]
$p(x)=32x_{1}^{8}+118x_{1}^{6}x_{2}^{2}+40x_{1}^{6}x_{3}^{2}+25x_{1}^{4}x_{2}^{4}-43x_{1}^{4}x_{2}^{2}x_{3}^{2}-35x_{1}^{4}x_{3}^{4}+3x_{1}^{2}x_{2}^{4}x_{3}^{2}-16x_{1}^{2}x_{2}^{2}x_{3}^{4}+24x_{1}^{2}x_{3}^{6}+16x_{2}^{8}+44x_{2}^{6}x_{3}^{2}+70x_{2}^{4}x_{3}^{4}+60x_{2}^{2}x_{3}^{6}+30x_{3}^{8}$
In the same year, Grigoriy Blekherman proved in a non-constructive manner that there exist convex forms that are not representable as sums of squares.[5] An explicit example of a convex form (with degree 4 and 272 variables) that is not a sum of squares was claimed by James Saunderson in 2021.[6]
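The convexity of the Ahmadi–Parrilo form displayed above can be spot-tested numerically (an informal check of ours, not a proof; a rigorous certificate requires the semidefinite programming machinery mentioned earlier). Convexity is equivalent to every second directional derivative $d^{T}H(x)d$ being nonnegative, which a central finite difference approximates at randomly sampled points and directions:

```python
import random

def p(x1, x2, x3):
    # The Ahmadi–Parrilo form (degree 8, three variables), transcribed from above
    return (32*x1**8 + 118*x1**6*x2**2 + 40*x1**6*x3**2 + 25*x1**4*x2**4
            - 43*x1**4*x2**2*x3**2 - 35*x1**4*x3**4 + 3*x1**2*x2**4*x3**2
            - 16*x1**2*x2**2*x3**4 + 24*x1**2*x3**6 + 16*x2**8
            + 44*x2**6*x3**2 + 70*x2**4*x3**4 + 60*x2**2*x3**6 + 30*x3**8)

def second_directional_derivative(f, x, d, h=1e-4):
    # Central-difference approximation of d^T H(x) d along direction d
    xp = [xi + h*di for xi, di in zip(x, d)]
    xm = [xi - h*di for xi, di in zip(x, d)]
    return (f(*xp) - 2*f(*x) + f(*xm)) / h**2

def spot_check_convexity(f, trials=500, tol=-1e-2, seed=0):
    # Convexity requires d^T H(x) d >= 0 everywhere; sample random points and
    # directions, with a loose tolerance for finite-difference error.
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.uniform(-1, 1) for _ in range(3)]
        d = [rng.uniform(-1, 1) for _ in range(3)]
        if second_directional_derivative(f, x, d) < tol:
            return False
    return True
```

Passing this check only samples convexity; it says nothing about SOS-convexity, which is exactly the gap the counterexample exhibits.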
Connection with non-negativity and sum-of-squares
In 2013, Amir Ali Ahmadi and Pablo Parrilo showed that every convex homogeneous polynomial in n variables and of degree 2d is SOS-convex if and only if either (a) n = 2, (b) 2d = 2, or (c) n = 3 and 2d = 4.[7] Remarkably, the same characterization holds for non-negative homogeneous polynomials in n variables and of degree 2d that can be represented as sums of squares of polynomials (see Hilbert's seventeenth problem).
References
1. Helton, J. William; Nie, Jiawang (2010). "Semidefinite representation of convex sets". Mathematical Programming. 122 (1): 21–64. arXiv:0705.4068. doi:10.1007/s10107-008-0240-y. ISSN 0025-5610. S2CID 1352703.
2. Ahmadi, Amir Ali; Parrilo, Pablo A. (2010). "On the equivalence of algebraic conditions for convexity and quasiconvexity of polynomials". 49th IEEE Conference on Decision and Control (CDC). pp. 3343–3348. doi:10.1109/CDC.2010.5717510. hdl:1721.1/74151. ISBN 978-1-4244-7745-6. S2CID 11904851.
3. Ahmadi, Amir Ali; Olshevsky, Alex; Parrilo, Pablo A.; Tsitsiklis, John N. (2013). "NP-hardness of deciding convexity of quartic polynomials and related problems". Mathematical Programming. 137 (1–2): 453–476. arXiv:1012.1908. doi:10.1007/s10107-011-0499-2. ISSN 0025-5610. S2CID 2319461.
4. Ahmadi, Amir Ali; Parrilo, Pablo A. (2009). "A positive definite polynomial Hessian that does not factor". Proceedings of the 48h IEEE Conference on Decision and Control (CDC) held jointly with 2009 28th Chinese Control Conference. pp. 1195–1200. doi:10.1109/CDC.2009.5400519. hdl:1721.1/58871. ISBN 978-1-4244-3871-6. S2CID 5344338.
5. Blekherman, Grigoriy (2009-10-04). "Convex Forms that are not Sums of Squares". arXiv:0910.0656 [math.AG].
6. Saunderson, James (2022). "A Convex Form That is Not a Sum of Squares". Mathematics of Operations Research. 48: 569–582. arXiv:2105.08432. doi:10.1287/moor.2022.1273. S2CID 234763204.
7. Ahmadi, Amir Ali; Parrilo, Pablo A. (2013). "A Complete Characterization of the Gap between Convexity and SOS-Convexity". SIAM Journal on Optimization. 23 (2): 811–833. arXiv:1111.4587. doi:10.1137/110856010. ISSN 1052-6234. S2CID 16801871.
See also
• Hilbert's seventeenth problem
• Polynomial SOS
• Sum-of-squares optimization
Sotero Prieto Rodríguez
Sotero Prieto Rodríguez (December 25, 1884 – May 22, 1935) was a Mexican mathematician who taught at the National Autonomous University of Mexico. Among his students were physicist Manuel Sandoval Vallarta, physicist and mathematician Carlos Graef Fernández, and engineer and Rector of UNAM Nabor Carrillo Flores.[1]
Sotero Prieto Rodríguez
Born: December 25, 1884, Guadalajara, Mexico
Died: May 22, 1935 (aged 50), Mexico City
Nationality: Mexican
Alma mater: National Autonomous University of Mexico
Occupation: Mathematician
Spouse: Isabel Rio de la Loza Salazar (m. 1918)
Children: Agustín Prieto Río de la Loza, Raúl Prieto Río de la Loza
Early life
Sotero Prieto Rodríguez was the son of the mining engineer and mathematics teacher Raúl Prieto González Bango and of Teresa Rodríguez de Prieto.[1] He was a cousin of Isabel Prieto de Landázuri, a distinguished poet considered the first Mexican romantic. In 1897, at twelve years of age, Prieto arrived in Mexico City and began his preparatory studies at the Instituto Colón of don Toribio Soto, finishing them at the Escuela Nacional Preparatoria in 1901. In 1902 he was accepted as a student at the Escuela Nacional de Ingenieros, where he followed the course of civil engineering, which he completed in 1906, although he never received the corresponding degree.
Career
While still very young, he began teaching and carrying out mathematical studies. He notably shaped the change and progress of mathematical research in Mexico through his influence on the then new generation of engineers and students of the exact sciences at the National Autonomous University of Mexico.[2]
He was a teacher of Manuel Sandoval Vallarta in the Escuela Nacional Preparatoria and of Alberto Barajas Celis, Carlos Graef Fernández and of Nabor Carrillo Flores in the Escuela Nacional de Ingenieros, currently the Facultad de Ingeniería.
In 1932, he established the Mathematics Section of the Sociedad Científica "Antonio Alzate", currently the Academia Nacional de Ciencias de México,[3] where his students presented the results of their research.
Death
According to some people close to him, Prieto had said that if he reached fifty years of age without having achieved some great discovery in his specialty, he would commit suicide, a statement that no one took seriously. However, at midday on May 22, 1935, in house number 2 of Génova Street, Mexico City, while he was alone, he tragically fulfilled the promise he had made to himself.[4] According to his family, however, the reasons for his suicide were different.
References
1. Figueroa, Antonio Rivera (2014). Cálculo Diferencial: Fundamentos, Aplicaciones y Notas Históricas. Grupo Editorial Patria. p. xviii. ISBN 9786074388985.
2. "70 aniversario de la Facultad de Ciencias de la UNAM" (in Spanish). Archived from the original on February 9, 2011. Retrieved August 18, 2012.
3. "Academia Mexicana de Ciencias". www.amc.unam.mx. Archived from the original on 2000-07-06.
4. "Sotero Prieto Rodríguez" (in Spanish). Retrieved July 9, 2020.
Bibliography
• Vasconcelos, José.- La Raza Cósmica.- México, Editorial Botas, S.A., 1926. p. 156.
• Armendáriz, Antonio. – Hermandad Pitagórica.- México, Novedades, March 21, 1987, P. Editorial.
Chris Soteros
Christine Elaine Soteros is a Canadian applied mathematician. She is professor and acting head of the department of mathematics and statistics at the University of Saskatchewan,[1] and site director of the Pacific Institute for the Mathematical Sciences for their Saskatchewan site.[2] Her research involves the folding and packing behavior of DNA, proteins, and other string-like biomolecules, and the knot theory of random space curves.[3][4]
Soteros graduated from the University of Windsor in 1980.[2] She completed her Ph.D. in chemical engineering at Princeton University in 1988. Her dissertation, Studies of Metal Hydride Phase Transitions Using the Cluster Variation Method, was supervised by Carol K. Hall.[5] After postdoctoral research at the University of Toronto,[2] working with Stuart Whittington and De Witt Sumners,[4] she became a faculty member at the University of Saskatchewan in 1989.[2]
References
1. Mathematics faculty, University of Saskatchewan, retrieved 2019-08-15
2. Chris Soteros Appointed PIMS University of Saskatchewan Site Director, Pacific Institute for the Mathematical Sciences, January 20, 2016, retrieved 2019-08-15
3. Rowley, Mari-Lou (June 13, 2018), A tango with tangled polymers, phys.org
4. Pi, Pie, Knotted Structures, and Biophysics, Biophysical Society, retrieved 2019-08-15
5. Chris Soteros at the Mathematics Genealogy Project
External links
• Home page
Soul theorem
In mathematics, the soul theorem is a theorem of Riemannian geometry that largely reduces the study of complete manifolds of non-negative sectional curvature to that of the compact case. Jeff Cheeger and Detlef Gromoll proved the theorem in 1972 by generalizing a 1969 result of Gromoll and Wolfgang Meyer. The related soul conjecture, formulated by Cheeger and Gromoll at that time, was proved twenty years later by Grigori Perelman.
Soul theorem
Cheeger and Gromoll's soul theorem states:[1]
If (M, g) is a complete connected Riemannian manifold with nonnegative sectional curvature, then there exists a closed totally convex, totally geodesic embedded submanifold such that M is diffeomorphic to the total space of its normal bundle.
Such a submanifold is called a soul of (M, g). By the Gauss equation and total geodesicity, the induced Riemannian metric on the soul automatically has nonnegative sectional curvature. Gromoll and Meyer had earlier studied the case of positive sectional curvature, where they showed that a soul is given by a single point, and hence that M is diffeomorphic to Euclidean space.[2]
Very simple examples, as below, show that the soul is not uniquely determined by (M, g) in general. However, Vladimir Sharafutdinov constructed a 1-Lipschitz retraction from M to any of its souls, thereby showing that any two souls are isometric. This mapping is known as the Sharafutdinov retraction.[3]
Cheeger and Gromoll also posed the converse question of whether there is a complete Riemannian metric of nonnegative sectional curvature on the total space of every vector bundle over a closed manifold of positive sectional curvature.[4]
Examples.
• As can be directly seen from the definition, every compact manifold is its own soul. For this reason, the theorem is often stated only for non-compact manifolds.
• As a very simple example, take M to be Euclidean space $\mathbb {R} ^{n}$. The sectional curvature is 0 everywhere, and any point of M can serve as a soul of M.
• Now take the paraboloid $M=\{(x,y,z):z=x^{2}+y^{2}\}$, with the metric g being the ordinary Euclidean distance coming from the embedding of the paraboloid in Euclidean space $\mathbb {R} ^{3}$. Here the sectional curvature is positive everywhere, though not constant. The origin (0, 0, 0) is a soul of M. Not every point x of M is a soul of M, since there may be geodesic loops based at x, in which case $\{x\}$ would not be totally convex.[5]
• One can also consider an infinite cylinder $M=\{(x,y,z):x^{2}+y^{2}=1\}$, again with the induced Euclidean metric. The sectional curvature is 0 everywhere. Any "horizontal" circle $\{(x,y,z):x^{2}+y^{2}=1,z=c\}$ with fixed $c$ is a soul of M. Non-horizontal cross sections of the cylinder are not souls since they are neither totally convex nor totally geodesic.[6]
Soul conjecture
As mentioned above, Gromoll and Meyer proved that if g has positive sectional curvature then the soul is a point. Cheeger and Gromoll conjectured that this would hold even if g had nonnegative sectional curvature, with positivity only required of all sectional curvatures at a single point.[7] This soul conjecture was proved by Grigori Perelman, who established the more powerful fact that Sharafutdinov's retraction is a Riemannian submersion, and even a submetry.[8]
References
1. Cheeger & Ebin 2008, Chapter 8; Petersen 2016, Theorem 12.4.1; Sakai 1996, Theorem V.3.4.
2. Petersen 2016, p. 462; Sakai 1996, Corollary V.3.5.
3. Chow et al. 2010, Theorem I.25.
4. Yau 1982, Problem 6.
5. Petersen 2016, Example 12.4.4; Sakai 1996, p. 217.
6. Petersen 2016, Example 12.4.3; Sakai 1996, p. 217.
7. Sakai 1996, p. 217; Yau 1982, Problem 18.
8. Petersen 2016, p. 469.
Sources.
• Cheeger, Jeff; Ebin, David G. (2008). Comparison theorems in Riemannian geometry (Revised reprint of the 1975 original ed.). Providence, RI: AMS Chelsea Publishing. doi:10.1090/chel/365. ISBN 978-0-8218-4417-5. MR 2394158.
• Cheeger, Jeff; Gromoll, Detlef (1972). "On the structure of complete manifolds of nonnegative curvature". Annals of Mathematics. Second Series. 96 (3): 413–443. doi:10.2307/1970819. ISSN 0003-486X. JSTOR 1970819. MR 0309010.
• Chow, Bennett; Chu, Sun-Chin; Glickenstein, David; Guenther, Christine; Isenberg, James; Ivey, Tom; Knopf, Dan; Lu, Peng; Luo, Feng; Ni, Lei (2010). The Ricci flow: techniques and applications. Part III. Geometric-analytic aspects. Mathematical Surveys and Monographs. Vol. 163. Providence, RI: American Mathematical Society. doi:10.1090/surv/163. ISBN 978-0-8218-4661-2. MR 2604955.
• Gromoll, Detlef; Meyer, Wolfgang (1969). "On complete open manifolds of positive curvature". Annals of Mathematics. Second Series. 90 (1): 75–90. doi:10.2307/1970682. ISSN 0003-486X. JSTOR 1970682. MR 0247590. S2CID 122543838.
• Perelman, Grigori (1994). "Proof of the soul conjecture of Cheeger and Gromoll". Journal of Differential Geometry. 40 (1): 209–212. doi:10.4310/jdg/1214455292. ISSN 0022-040X. MR 1285534. Zbl 0818.53056.
• Petersen, Peter (2016). Riemannian geometry. Graduate Texts in Mathematics. Vol. 171 (Third edition of 1998 original ed.). Springer, Cham. doi:10.1007/978-3-319-26654-1. ISBN 978-3-319-26652-7. MR 3469435. Zbl 1417.53001.
• Sakai, Takashi (1996). Riemannian geometry. Translations of Mathematical Monographs. Vol. 149. Providence, RI: American Mathematical Society. doi:10.1090/mmono/149. ISBN 0-8218-0284-4. MR 1390760. Zbl 0886.53002.
• Sharafutdinov, V. A. (1979). "Convex sets in a manifold of nonnegative curvature". Mathematical Notes. 26 (1): 556–560. doi:10.1007/BF01140282. S2CID 119764156.
• Yau, Shing Tung (1982). "Problem section". In Yau, Shing-Tung (ed.). Seminar on Differential Geometry. Annals of Mathematics Studies. Vol. 102. Princeton, NJ: Princeton University Press. pp. 669–706. doi:10.1515/9781400881918-035. ISBN 9781400881918. MR 0645762. Zbl 0479.53001.
Entropy rate
In the mathematical theory of probability, the entropy rate or source information rate of a stochastic process is, informally, the time density of the average information in a stochastic process. For stochastic processes with a countable index, the entropy rate $H(X)$ is the limit of the joint entropy of $n$ members of the process $X_{k}$ divided by $n$, as $n$ tends to infinity:
$H(X)=\lim _{n\to \infty }{\frac {1}{n}}H(X_{1},X_{2},\dots X_{n})$
when the limit exists. An alternative, related quantity is:
$H'(X)=\lim _{n\to \infty }H(X_{n}|X_{n-1},X_{n-2},\dots X_{1})$
For strongly stationary stochastic processes, $H(X)=H'(X)$. The entropy rate can be thought of as a general property of stochastic sources; this is the asymptotic equipartition property. The entropy rate may be used to estimate the complexity of stochastic processes. It is used in diverse applications ranging from characterizing the complexity of languages, blind source separation, through to optimizing quantizers and data compression algorithms. For example, a maximum entropy rate criterion may be used for feature selection in machine learning.[1]
Entropy rates for Markov chains
Since a stochastic process defined by a Markov chain that is irreducible, aperiodic and positive recurrent has a stationary distribution, the entropy rate is independent of the initial distribution.
For example, for such a Markov chain $Y_{k}$ defined on a countable number of states, given the transition matrix $P_{ij}$, $H(Y)$ is given by:
$\displaystyle H(Y)=-\sum _{ij}\mu _{i}P_{ij}\log P_{ij}$
where $\mu _{i}$ is the asymptotic distribution of the chain.
A simple consequence of this definition is that an i.i.d. stochastic process has an entropy rate that is the same as the entropy of any individual member of the process.
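The formula can be made concrete with a short Python sketch (the two-state transition matrix is our illustrative choice, not from the sources; logarithms are taken base 2, giving bits per step). The stationary distribution is found by power iteration, which converges for an irreducible, aperiodic chain from any starting distribution:

```python
import math

def stationary_distribution(P, iters=10000):
    # Power iteration: repeatedly apply mu <- mu P until convergence
    n = len(P)
    mu = [1.0 / n] * n
    for _ in range(iters):
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
    return mu

def entropy_rate(P):
    # H(Y) = -sum_ij mu_i P_ij log2 P_ij, in bits per step
    mu = stationary_distribution(P)
    n = len(P)
    return -sum(mu[i] * P[i][j] * math.log2(P[i][j])
                for i in range(n) for j in range(n) if P[i][j] > 0)

P = [[0.9, 0.1],
     [0.2, 0.8]]
# Stationary distribution (2/3, 1/3); entropy rate is the mu-weighted average
# of the per-row entropies H(0.9, 0.1) and H(0.2, 0.8).
```

The i.i.d. special case is visible here too: if every row of P equals the same distribution, the entropy rate reduces to the entropy of that distribution.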
See also
• Information source (mathematics)
• Markov information source
• Asymptotic equipartition property
• Maximal entropy random walk - chosen to maximize entropy rate
References
1. Einicke, G. A. (2018). "Maximum-Entropy Rate Selection of Features for Classifying Changes in Knee and Ankle Dynamics During Running". IEEE Journal of Biomedical and Health Informatics. 28 (4): 1097–1103. doi:10.1109/JBHI.2017.2711487. PMID 29969403. S2CID 49555941.
• Cover, T. and Thomas, J. (1991) Elements of Information Theory, John Wiley and Sons, Inc., ISBN 0-471-06259-6
Source unfolding
In computational geometry, the source unfolding of a convex polyhedron is a net obtained by cutting the polyhedron along the cut locus of a point on the surface of the polyhedron. The cut locus of a point $p$ consists of all points on the surface that have two or more shortest geodesics to $p$. For every convex polyhedron, and every choice of the point $p$ on its surface, cutting the polyhedron on the cut locus will produce a result that can be unfolded into a flat plane, producing the source unfolding. The resulting net may, however, cut across some of the faces of the polyhedron rather than only cutting along its edges.[1]
The source unfolding can also be continuously transformed from the polyhedron to its flat net, keeping flat the parts of the net that do not lie along edges of the polyhedron, as a blooming of the polyhedron.[2] The unfolded shape of the source unfolding is always a star-shaped polygon, with all of its points visible by straight line segments from the image of $p$; this is in contrast to the star unfolding, a different method for producing nets that does not always produce star-shaped polygons.[1]
An analogous unfolding method can be applied to any higher-dimensional convex polytope, cutting the surface of the polytope into a net that can be unfolded into a flat hyperplane.[3]
References
1. Demaine, Erik; O'Rourke, Joseph (2007), "24.1.1 Source unfolding", Geometric Folding Algorithms, Cambridge University Press, pp. 359–362, ISBN 978-0-521-71522-5
2. Demaine, Erik D.; Demaine, Martin L.; Hart, Vi; Iacono, John; Langerman, Stefan; O'Rourke, Joseph (2011), "Continuous blooming of convex polyhedra", Graphs and Combinatorics, 27 (3): 363–376, doi:10.1007/s00373-011-1024-3, MR 2787423. Announced at the Japan Conference on Computational Geometry and Graphs, 2009.
3. Miller, Ezra; Pak, Igor (2008), "Metric combinatorics of convex polyhedra: Cut loci and nonoverlapping unfoldings", Discrete & Computational Geometry, 39 (1–3): 339–388, doi:10.1007/s00454-008-9052-3, MR 2383765. Announced in 2003.
Vertex (graph theory)
In discrete mathematics, and more specifically in graph theory, a vertex (plural vertices) or node is the fundamental unit of which graphs are formed: an undirected graph consists of a set of vertices and a set of edges (unordered pairs of vertices), while a directed graph consists of a set of vertices and a set of arcs (ordered pairs of vertices). In a diagram of a graph, a vertex is usually represented by a circle with a label, and an edge is represented by a line or arrow extending from one vertex to another.
From the point of view of graph theory, vertices are treated as featureless and indivisible objects, although they may have additional structure depending on the application from which the graph arises; for instance, a semantic network is a graph in which the vertices represent concepts or classes of objects.
The two vertices forming an edge are said to be the endpoints of this edge, and the edge is said to be incident to the vertices. A vertex w is said to be adjacent to another vertex v if the graph contains an edge (v,w). The neighborhood of a vertex v is an induced subgraph of the graph, formed by all vertices adjacent to v.
Types of vertices
The degree of a vertex v in a graph, denoted 𝛿(v), is the number of edges incident to it. An isolated vertex is a vertex with degree zero; that is, a vertex that is not an endpoint of any edge (the example image illustrates one isolated vertex).[1] A leaf vertex (also pendant vertex) is a vertex with degree one. In a directed graph, one can distinguish the outdegree (number of outgoing edges), denoted 𝛿+(v), from the indegree (number of incoming edges), denoted 𝛿−(v); a source vertex is a vertex with indegree zero, while a sink vertex is a vertex with outdegree zero. A simplicial vertex is one whose neighbors form a clique: every two neighbors are adjacent. A universal vertex is a vertex that is adjacent to every other vertex in the graph.
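These vertex types follow directly from counting incidences. A small Python sketch (the graph and names are our illustrative choices) computes them for a graph on five vertices, reading the same pairs once as undirected edges and once as directed arcs:

```python
def degrees(vertices, edges):
    # Undirected degree: number of edges incident to each vertex
    deg = {v: 0 for v in vertices}
    for u, w in edges:
        deg[u] += 1
        deg[w] += 1
    return deg

vertices = [1, 2, 3, 4, 5]
edges = [(1, 2), (1, 3), (2, 3), (3, 4)]    # vertex 5 touches no edge

deg = degrees(vertices, edges)
isolated = [v for v in vertices if deg[v] == 0]   # degree zero
leaves = [v for v in vertices if deg[v] == 1]     # degree one (pendant)

# Reading the same pairs as arcs of a directed graph:
outdeg = {v: 0 for v in vertices}
indeg = {v: 0 for v in vertices}
for u, w in edges:
    outdeg[u] += 1
    indeg[w] += 1
sources = [v for v in vertices if indeg[v] == 0]
sinks = [v for v in vertices if outdeg[v] == 0]
```

Note that under the definitions above an isolated vertex, such as vertex 5 here, counts as both a source and a sink.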
A cut vertex is a vertex the removal of which would disconnect the remaining graph; a vertex separator is a collection of vertices the removal of which would disconnect the remaining graph into small pieces. A k-vertex-connected graph is a graph in which removing fewer than k vertices always leaves the remaining graph connected. An independent set is a set of vertices no two of which are adjacent, and a vertex cover is a set of vertices that includes at least one endpoint of each edge in the graph. The vertex space of a graph is a vector space having a set of basis vectors corresponding with the graph's vertices.
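The cut-vertex definition can be tested directly by deletion. The brute-force Python sketch below (ours; practical implementations instead use linear-time depth-first-search methods) removes each vertex in turn and checks whether the remaining graph stays connected, assuming the input graph is connected to begin with:

```python
def is_connected(vertices, edges):
    # Depth-first search from an arbitrary start vertex
    vs = list(vertices)
    if not vs:
        return True
    adj = {v: [] for v in vs}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    seen = {vs[0]}
    stack = [vs[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vs)

def cut_vertices(vertices, edges):
    # v is a cut vertex if deleting v (and its incident edges)
    # disconnects the remaining graph
    return [v for v in vertices
            if not is_connected([u for u in vertices if u != v],
                                [e for e in edges if v not in e])]
```

For the path 1–2–3 the only cut vertex is the middle one, while a triangle has none, since removing any one vertex still leaves an edge joining the other two.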
A graph is vertex-transitive if it has symmetries that map any vertex to any other vertex. In the context of graph enumeration and graph isomorphism it is important to distinguish between labeled vertices and unlabeled vertices. A labeled vertex is a vertex that is associated with extra information that enables it to be distinguished from other labeled vertices; two graphs can be considered isomorphic only if the correspondence between their vertices pairs up vertices with equal labels. An unlabeled vertex is one that can be substituted for any other vertex based only on its adjacencies in the graph and not based on any additional information.
Vertices in graphs are analogous to, but not the same as, vertices of polyhedra: the skeleton of a polyhedron forms a graph, the vertices of which are the vertices of the polyhedron, but polyhedron vertices have additional structure (their geometric location) that is not assumed to be present in graph theory. The vertex figure of a vertex in a polyhedron is analogous to the neighborhood of a vertex in a graph.
See also
• Node (computer science)
• Graph theory
• Glossary of graph theory
References
1. File:Small Network.png; example image of a network with 8 vertices and 10 edges
• Gallo, Giorgio; Pallotino, Stefano (1988). "Shortest path algorithms". Annals of Operations Research. 13 (1): 1–79. doi:10.1007/BF02288320. S2CID 62752810.
• Berge, Claude, Théorie des graphes et ses applications. Collection Universitaire de Mathématiques, II Dunod, Paris 1958, viii+277 pp. (English edition, Wiley 1961; Methuen & Co, New York 1962; Russian, Moscow 1961; Spanish, Mexico 1962; Roumanian, Bucharest 1969; Chinese, Shanghai 1963; Second printing of the 1962 first English edition. Dover, New York 2001)
• Chartrand, Gary (1985). Introductory graph theory. New York: Dover. ISBN 0-486-24775-9.
• Biggs, Norman; Lloyd, E. H.; Wilson, Robin J. (1986). Graph theory, 1736-1936. Oxford [Oxfordshire]: Clarendon Press. ISBN 0-19-853916-9.
• Harary, Frank (1969). Graph theory. Reading, Mass.: Addison-Wesley Publishing. ISBN 0-201-41033-8.
• Harary, Frank; Palmer, Edgar M. (1973). Graphical enumeration. New York, Academic Press. ISBN 0-12-324245-2.
External links
• Weisstein, Eric W. "Graph Vertex". MathWorld.
|
Wikipedia
|
Suslin's problem
In mathematics, Suslin's problem is a question about totally ordered sets posed by Mikhail Yakovlevich Suslin (1920) and published posthumously. It has been shown to be independent of the standard axiomatic system of set theory known as ZFC; Solovay & Tennenbaum (1971) showed that the statement can neither be proven nor disproven from those axioms, assuming ZF is consistent.
(Suslin is also sometimes written with the French transliteration as Souslin, from the Cyrillic Суслин.)
Un ensemble ordonné (linéairement) sans sauts ni lacunes et tel que tout ensemble de ses intervalles (contenant plus qu'un élément) n'empiétant pas les uns sur les autres est au plus dénumerable, est-il nécessairement un continue linéaire (ordinaire)?
Is a (linearly) ordered set without jumps or gaps and such that every set of its intervals (containing more than one element) not overlapping each other is at most denumerable, necessarily an (ordinary) linear continuum?
The original statement of Suslin's problem from (Suslin 1920)
Formulation
Suslin's problem asks: Given a non-empty totally ordered set R with the four properties
1. R does not have a least nor a greatest element;
2. the order on R is dense (between any two distinct elements there is another);
3. the order on R is complete, in the sense that every non-empty bounded subset has a supremum and an infimum; and
4. every collection of mutually disjoint non-empty open intervals in R is countable (this is the countable chain condition for the order topology of R),
is R necessarily order-isomorphic to the real line R?
If the requirement for the countable chain condition is replaced with the requirement that R contains a countable dense subset (i.e., R is a separable space), then the answer is indeed yes: any such set R is necessarily order-isomorphic to R (proved by Cantor).
The condition for a topological space that every collection of non-empty disjoint open sets is at most countable is called the Suslin property.
Implications
Any totally ordered set that is not isomorphic to R but satisfies properties 1–4 is known as a Suslin line. The Suslin hypothesis says that there are no Suslin lines: that every countable-chain-condition dense complete linear order without endpoints is isomorphic to the real line. An equivalent statement is that every tree of height ω1 either has a branch of length ω1 or an antichain of cardinality $\aleph _{1}$.
The generalized Suslin hypothesis says that for every infinite regular cardinal κ every tree of height κ either has a branch of length κ or an antichain of cardinality κ. The existence of Suslin lines is equivalent to the existence of Suslin trees and to Suslin algebras.
The Suslin hypothesis is independent of ZFC. Jech (1967) and Tennenbaum (1968) independently used forcing methods to construct models of ZFC in which Suslin lines exist. Jensen later proved that Suslin lines exist if the diamond principle, a consequence of the axiom of constructibility V = L, is assumed. (Jensen's result was a surprise, as it had previously been conjectured that V = L implies that no Suslin lines exist, on the grounds that V = L implies that there are "few" sets.) On the other hand, Solovay & Tennenbaum (1971) used forcing to construct a model of ZFC without Suslin lines; more precisely, they showed that Martin's axiom plus the negation of the continuum hypothesis implies the Suslin hypothesis.
The Suslin hypothesis is also independent of both the generalized continuum hypothesis (proved by Ronald Jensen) and of the negation of the continuum hypothesis. It is not known whether the generalized Suslin hypothesis is consistent with the generalized continuum hypothesis; however, since the combination implies the negation of the square principle at a singular strong limit cardinal—in fact, at all singular cardinals and all regular successor cardinals—it implies that the axiom of determinacy holds in L(R) and is believed to imply the existence of an inner model with a superstrong cardinal.
See also
• List of statements independent of ZFC
• Continuum hypothesis
• AD+
• Cantor's isomorphism theorem
References
• K. Devlin and H. Johnsbråten, The Souslin Problem, Lecture Notes in Mathematics (405) Springer 1974.
• Jech, Tomáš (1967), "Non-provability of Souslin's hypothesis", Comment. Math. Univ. Carolinae, 8: 291–305, MR 0215729
• Souslin, M. (1920), "Problème 3" (PDF), Fundamenta Mathematicae, 1: 223, doi:10.4064/fm-1-1-223-224
• Solovay, R. M.; Tennenbaum, S. (1971), "Iterated Cohen Extensions and Souslin's Problem", Annals of Mathematics, 94 (2): 201–245, doi:10.2307/1970860, JSTOR 1970860
• Tennenbaum, S. (1968), "Souslin's problem.", Proc. Natl. Acad. Sci. U.S.A., 59 (1): 60–63, Bibcode:1968PNAS...59...60T, doi:10.1073/pnas.59.1.60, MR 0224456, PMC 286001, PMID 16591594
• Grishin, V. N. (2001) [1994], "Suslin hypothesis", Encyclopedia of Mathematics, EMS Press
Set theory
Overview
• Set (mathematics)
Axioms
• Adjunction
• Choice
• countable
• dependent
• global
• Constructibility (V=L)
• Determinacy
• Extensionality
• Infinity
• Limitation of size
• Pairing
• Power set
• Regularity
• Union
• Martin's axiom
• Axiom schema
• replacement
• specification
Operations
• Cartesian product
• Complement (i.e. set difference)
• De Morgan's laws
• Disjoint union
• Identities
• Intersection
• Power set
• Symmetric difference
• Union
• Concepts
• Methods
• Almost
• Cardinality
• Cardinal number (large)
• Class
• Constructible universe
• Continuum hypothesis
• Diagonal argument
• Element
• ordered pair
• tuple
• Family
• Forcing
• One-to-one correspondence
• Ordinal number
• Set-builder notation
• Transfinite induction
• Venn diagram
Set types
• Amorphous
• Countable
• Empty
• Finite (hereditarily)
• Filter
• base
• subbase
• Ultrafilter
• Fuzzy
• Infinite (Dedekind-infinite)
• Recursive
• Singleton
• Subset · Superset
• Transitive
• Uncountable
• Universal
Theories
• Alternative
• Axiomatic
• Naive
• Cantor's theorem
• Zermelo
• General
• Principia Mathematica
• New Foundations
• Zermelo–Fraenkel
• von Neumann–Bernays–Gödel
• Morse–Kelley
• Kripke–Platek
• Tarski–Grothendieck
• Paradoxes
• Problems
• Russell's paradox
• Suslin's problem
• Burali-Forti paradox
Set theorists
• Paul Bernays
• Georg Cantor
• Paul Cohen
• Richard Dedekind
• Abraham Fraenkel
• Kurt Gödel
• Thomas Jech
• John von Neumann
• Willard Quine
• Bertrand Russell
• Thoralf Skolem
• Ernst Zermelo
South African Mathematical Society
The South African Mathematical Society (SAMS) is a professional mathematical society of South Africa. The Society was established in 1957.[1] The SAMS publishes a research journal Quaestiones Mathematicae, as well the Notices of the South African Mathematical Society (which serves as a general communications bulletin of the society), and holds its Annual Congress. The Society also helps represent South African mathematics and mathematicians in various national and international structures, including the International Mathematical Union, African Mathematical Union, Southern Africa Mathematical Sciences Association, Association for Mathematics Education of South Africa, and others.
South African Mathematical Society
Formation1957 (1957)
Membership
300+
President
Zurab Janelidze
Key people
Immediate Past President: Precious Sibanda, Vice-president: Sethuti Moshokoa, General Secretary: Karin-Therese Howell
Websitewww.sams.ac.za
The SAMS has more than 300 members.
History
The South African Mathematical Society was established in 1957, originally under the English name 'The South African Mathematical Association' and the corresponding Afrikaans name ‘Die Suid-Afrikaanse Wiskundige Vereniging’.[2] Dr Johann van der Mark from the Mathematics Division, National Physics Research Laboratory, Institute of Physics of the Council for Scientific and Industrial Research (CSIR) in Pretoria, was a key figure in setting up and launching the South African Mathematical Association.[2] The first Chairman of the Council of the Association, for the period 1957–1958, was James M. Hyslop (University of the Witwatersrand).[2]
The SAMS was one of the groups involved in launching, in 1992–1993 the new Association for Mathematics Education of South Africa (AMESA),[3] a professional association for mathematics education in South Africa.
SAMS and apartheid
From the moment the SAMS was launched in 1957, its Constitution, membership, election and office-holder rules were non-discriminatory and did not contain any racially restrictive language. In August 1962 the SAMS Council decided that, notwithstanding the apartheid laws and social expectations in the country, the SAMS would not form separate branches of the SAMS for non-white members.[4] Nevertheless, until the early 1990s, the SAMS had only a few non-white members; thus the Society had two black members in 1977 and four in 1980.[4]
During the apartheid era, the SAMS continued to experience difficulties in terms of its international recognition and international activities because of the general international human rights, protest and boycott movements directed against the South African regime. Thus the American Mathematical Society originally established a reciprocity agreement with the SAMS in 1972, after an investigation by the AMS showed that the membership rules for admittance to the SAMS were non-discriminatory. However, after protests by civil rights activists, the AMS cancelled the reciprocity agreement in 1974.[5][6] The AMS restored its reciprocity agreement with the SAMS in 1994, after the fall of apartheid.[7] The SAMS annual Distinguished Visitor program experienced a high rate of declined invitations and postponed visits due to concerns of the international mathematical community about apartheid.[4] One of the most publicized visits under the program was that of Peter Hilton, who was the SAMS Distinguished Visitor for 1981. Hilton published a letter in the Notices of the American Mathematical Society, prior to his visit, explaining the conditions on which he accepted the visit[4] and later published an account of his visit.[8]
Membership
Generally, membership in SAMS requires holding a bachelor's degree, or its equivalent, in a mathematical discipline. Admission to membership requires nomination by two current members of the Society and must be approved by the SAMS Council.[9]
At present there are four membership categories:[9]
• Full members
• Special members, eligible for reduced membership fees (often applicable to graduate students and retirees, and to members of foreign mathematical societies with which the SAMS has reciprocity agreements)
• Institutional members (applies to certain institutions that are regarded as a single member of the SAMS)
• Honorary members
Awards
The SAMS grants two awards:
• The SAMS Award for Research Distinction, awarded for outstanding research achievements, to "recognise and reward substantial research carried out in South Africa, which does credit to South African Mathematics".[10]
• The SAMS Award for the Advancement of Mathematics, "to recognise and reward exceptional and distinguished service to the cultivation of Mathematics in South Africa".[10]
The SAMS also has one student award:
• The SAMS Bronze Medal is awarded to the best honors students in Mathematics or Applied Mathematics at South African universities.[11]
References
1. The South African Mathematical Society, MacTutor History of Mathematics archive. Accessed October 22, 2016
2. P. Maritz, The South African Mathematical Society 1957 - 2007, South African Mathematical Society. Accessed October 22, 2016.
3. History of AMESA, Association for Mathematics Education of South Africa. Accessed October 23, 2016
4. P. Maritz, The South African Mathematical Society 1957 – 2007Section H: Delicate Issues. South African Mathematical Society. Accessed October 23, 2016
5. Everett Pitcher, A history of the second fifty years, American Mathematical Society 1939–88, American Mathematical Society, 1988, ISBN 978-0-8218-0125-3; p. 156
6. Charlene Morrow, Teri Perl, Notable Women in Mathematics: A Biographical Dictionary, Greenwood Press, 1998, ISBN 0313291314; p. 73
7. Bettye Anne Case, A Century of Mathematical Meetings, American Mathematical Society, 1995, ISBN 0821804650; p 90
8. Peter Hilton, Reflections on a visit to South Africa, Focus, vol. 1, no. 4, Nov. – Dec 1981. Mathematical Association of America
9. Constitution. South African Mathematical Society. Accessed October 23, 2016
10. Awards of the SAMS, South African Mathematical Society. Accessed March 7, 2022.
11. SAMS Bronze Medal, South African Mathematical Society. Accessed March 7, 2021.
External links
• Sams.ac.za: official South African Mathematical Society−SAMS website
• Nsc.co.za: Quaestiones Mathematicae — research journal published by the SAMS.
South East Asian Mathematics Competition
The South East Asian Mathematics Competition (SEAMC) is an annual three-day non-profit mathematics competition for Southeast Asian students at different grade levels. It is a qualifying competition organized by Eunoia Ventures for invitation to the World Mathematics Championships.[1] [2][3]
South East Asian Mathematics Competition
GenreMathematics competition
FrequencyAnnual
InauguratedMarch 2001
Most recent21 August 2021
Websitehttps://seamc.asia
Teams have participated from China, Thailand, Hong Kong, Malaysia, Singapore, Brunei, Vietnam, Cambodia, and Nepal.[3]
The host venue of the SEAMC changes annually. An online version was held due to the COVID-19 pandemic.[4]
Eligibility
• The Senior Competition is open to all students in Grade 12 (Year 13) or younger.[2]
• The Junior Competition is open to all students in Grade 9 (Year 10) or younger.[2]
• The Secondary Competition is open to all students in Grade 7 (Year 8) or younger during the month of the event.[2]
• The Primary Competition is open to all students in Grade 5 (Year 6) or younger.[5][6]
The competition
History
SEAMC is a 2–3 day mathematics collaboration experience that brings together school students from South East and North East Asia.
SEAMC was conceived of by Steve Warry, who taught at Alice Smith School in Kuala Lumpur.[2] He organised SEAMC in March 2001. He died one week prior to the first competition.[2] Teams competed for the Warry Cup, which is named in Steve's honour.[3]
From 2014, the NEAMC sister event has been organised for students in Northeast Asia.[7] The organizers enlisted the Nanjing International School to host it initially in February 2014 with the help of Malcolm Coad.[2][8]
In 2017, the SNEAMC family of events became the World Mathematics Championships.[2]
Format
Each school enters teams of 3 students each.[2] The competition has nine rounds.[2]
All WMC qualifying competitions have:
• 3 days of engagement
• 9 equally weighted rounds
• 6 skills categories for prizes
• The best summed ranking across all 9 rounds wins
School teams engage within the Communication skills rounds.
The Collaboration skills rounds (Open, Lightning and Innovation) are contested in buddy teams of three.[2] The Challenge skills rounds are undertaken individually.
Three of the skills rounds are knowledge based (subject-specific skills and procedures), three are strategy focused (plan and execute), and three depend upon creativity (new and imaginative ideas).
Each of the knowledge, strategy and creativity skill categories is therefore engaged in alone, in school teams, and in buddy teams.
Past questions can be found around the web.[6]
In many SEAMC competitions, there are initial icebreaker events.[5]
Prizes
• All participants receive a transcript of relative attainment in each of the 9 rounds.
• The highest ranked individuals in each category receive medals.
• The highest ranked individuals across all 9 rounds receive medals.
• The best ranked school team across all 9 rounds receive a respectively named Cup (for the SEAMC Junior competition, this is the original Warry Cup).
The better ranked teams across all of the competition venues that year are invited to the ultimate World Mathematics Championships showdown, hosted by Trinity College, University of Melbourne in the following July each year.
Results
Past team winners
• 2020 - UWCSEA East, Singapore
• 2019 - British School Manila, Philippines
• 2018 - British School Manila, Philippines; Saigon South International School Ho Chi Minh City, Vietnam
• 2017 - British School Manila, Philippines (Senior), Singapore American School, Singapore (Junior)[9]
• 2016 - British International School Ho Chi Minh City, Vietnam [5]
• 2015 - Singapore Chinese School, Singapore
• 2014 - Hong Kong International School, Hong Kong[8]
• 2013 - Chinese International School, Hong Kong
• 2012 - Chinese International School, Hong Kong
• 2011 - West Island School, Hong Kong
• 2010 - British International School, Vietnam (Jaeho Han, Cheewon Oh, Jungmin Kang)
• 2009 - German Swiss International School, Hong Kong
• 2008 - UWCSEA Dover, Singapore
• 2007 - KGV, Hong Kong, China
• 2006 - KGV, Hong Kong, China
• 2005 - Island School, Hong Kong, China
• 2004 - Island School, Hong Kong, China
• 2003 - Garden IS, Kuala Lumpur
• 2002 - Island School, Hong Kong, China
• 2001 - South Island School, Hong Kong, China
World Mathematics Championship June 2018 Results[10]
Senior Level
• Winner : Julian Yu
• Runner Up : Yan Pui Matthew Ling
• Runner Up : Wye Yew Ho
• Runner Up : Kevin Xin
• Runner Up : Linda Wang
Junior Level
• Winner : Seung Jae Yang
• Runner Up : Arunav Maheshwari
• Runner Up : Jangju Lee
• Runner Up : Ryusuke Suehiro
• Runner Up : Ravi Bahukhandi
• Runner Up : Soumyaditya Choudhuri
• Runner Up : Tanai Chotanphuti
World Mathematics Championship December 2018 Results
• Winner : Palis Pisuttisarun
• Runner Up : Ho Wang Tang
• Runner Up : Byung Hoo Park
• Runner Up : Rocco Jiang
Past individual winners
• 2020 - This has not been held yet due to the COVID-19 pandemic.
• 2019 - Andrew Chang, Singapore American School, Singapore
• 2018 - Juhee (Jessie) Hong, Singapore American School, Singapore
• 2017 - Tiffany Ong, British School Manila; Rahul Arya, King George V School
• 2016 - Otto Winata, Sampoerna Academy Medan, Indonesia
• 2015 - Alex Lee, Taipei European School, Taipei
• 2014 - Tie between Kyung Chan Lee, Garden International School, Kuala Lumpur, and Michael Wu, Hong Kong International School, Hong Kong
• 2013 - Joanna Cheng, South Island School, Hong Kong
• 2012 - Charles Meng, Chinese International School, Hong Kong
• 2011 - Alexander Cooke, South Island School, Hong Kong
• 2010 - Ki Yun Kim, JIS, Indonesia
• 2009 - Joon Young Lee, ISB, China
• 2008 - Dong Wook Chung, UWCSEA, Singapore
• 2007 - Oliver Huang, KGV, Hong Kong
• 2006 - En Seng Ng, SAS, Singapore
• 2005 - Tiffany Lau, Island School, Hong Kong
• 2004 - Otto Chan, Island School, Hong Kong
• 2003 - Ernest Chia, Garden IS, Kuala Lumpur
• 2002 - Ernest Chia, Garden IS, Kuala Lumpur
• 2001 - John Chan, WIS, Hong Kong
References
1. "Eunoia Ventures". Eunoia Ventures. Archived from the original on 30 November 2020. Retrieved 8 September 2021.
2. "Competition Academy". Archived from the original on 2018-03-06. Retrieved 11 April 2021.
3. "SEAMC : The S.E. Asia Mathematics Competition". Retrieved 2021-04-11.
4. "Southeast Asian Mathematics Competition | Guidelines". seamc. Retrieved 2021-09-08.
5. "2016 SEAMC Competition". www.nordangliaeducation.com. Retrieved 2021-04-11.
6. "SEAMC : The S.E. Asia Mathematics Competition: Past SEAMC Questions". Retrieved 2021-04-11.
7. Eunoia Ventures, Eunoia ventures (23 June 2019). "WMC".
8. "South East Asian Maths Competition (2014) | Alice Smith School". 21 Mar 2014. Archived from the original on 2017-09-06. Retrieved 11 Apr 2021.
9. "South East Asian Mathematics Competition (SEAMC) 2017". KGV - ESF. 2017-05-12. Retrieved 2021-04-11.
10. "WMC Finals - Awards Ceremony". Facebook. 2017-05-12. Retrieved 2021-06-22.
External links
• Official website of Eunoia Ventures
Souček space
In mathematics, Souček spaces are generalizations of Sobolev spaces, named after the Czech mathematician Jiří Souček. One of their main advantages is that they offer a way to deal with the fact that the Sobolev space W1,1 is not a reflexive space; since W1,1 is not reflexive, it is not always true that a bounded sequence has a weakly convergent subsequence, which is a desideratum in many applications.
Definition
Let Ω be a bounded domain in n-dimensional Euclidean space with smooth boundary. The Souček space W1,μ(Ω; Rm) is defined to be the space of all ordered pairs (u, v), where
• u lies in the Lebesgue space L1(Ω; Rm);
• v (thought of as the gradient of u) is a regular Borel measure on the closure of Ω;
• there exists a sequence of functions uk in the Sobolev space W1,1(Ω; Rm) such that
$\lim _{k\to \infty }u_{k}=u{\mbox{ in }}L^{1}(\Omega ;\mathbf {R} ^{m})$
and
$\lim _{k\to \infty }\nabla u_{k}=v$
weakly-∗ in the space of all Rm×n-valued regular Borel measures on the closure of Ω.
Properties
• The Souček space W1,μ(Ω; Rm) is a Banach space when equipped with the norm given by
$\|(u,v)\|:=\|u\|_{L^{1}}+\|v\|_{M},$
i.e. the sum of the L1 and total variation norms of the two components.
References
• Souček, Jiří (1972). "Spaces of functions on domain Ω, whose k-th derivatives are measures defined on Ω̅". Časopis Pěst. Mat. 97: 10–46, 94. doi:10.21136/CPM.1972.117746. ISSN 0528-2195. MR0313798
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Space-filling polyhedron
In geometry, a space-filling polyhedron is a polyhedron that can be used to fill all of three-dimensional space via translations, rotations and/or reflections, where filling means that, taken together, all the instances of the polyhedron constitute a partition of three-space. Any periodic tiling or honeycomb of three-space can in fact be generated by translating a primitive cell polyhedron.
Any parallelepiped tessellates Euclidean 3-space, and more specifically any of five parallelohedra such as the rhombic dodecahedron, which is one of nine edge-transitive and face-transitive solids. Examples of other space-filling polyhedra include the set of five convex polyhedra with regular faces, which include the triangular prism, hexagonal prism, gyrobifastigium, cube, and truncated octahedron; a set that intersects with that of the five parallelohedra.
An example of a study relating to space-filling polyhedrons is the Weaire–Phelan structure.
The cube is the only Platonic solid that can fill space by itself, although a tiling of space using a combination of tetrahedra and octahedra is possible.
Each regular tiling of the plane can be extended into space as prisms over the same base: triangular prisms from triangles, cubes (square prisms) from squares, and hexagonal prisms from hexagons. Layering these prismatic lattices on top of one another fills three-dimensional space.
References
• Space-Filling Polyhedron, MathWorld
• Arthur L. Loeb (1991). "Space-filling Polyhedra". Space Structures. Boston, MA: Birkhäuser. pp. 127–132. doi:10.1007/978-1-4612-0437-4_16. ISBN 978-1-4612-0437-4.
Space cardioid
The space cardioid is a 3-dimensional curve derived from the cardioid. It has a parametric representation using trigonometric functions.
Definition
The general form of the equation is most easily understood in parametric form, as follows:
$X(t)=((k+\cos(t))\cos(t),(j+\cos(t))\sin(t),\sin(t))\,$
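A minimal numerical sketch of this parametrization (the values k = j = 1 are illustrative assumptions; the equation leaves both parameters free):

```python
# Sketch: sampling the space cardioid X(t) = ((k + cos t) cos t,
# (j + cos t) sin t, sin t) over one full period.
import math

def space_cardioid(t, k=1.0, j=1.0):
    return ((k + math.cos(t)) * math.cos(t),
            (j + math.cos(t)) * math.sin(t),
            math.sin(t))

# One full period; the curve is closed because X is 2*pi-periodic.
points = [space_cardioid(2 * math.pi * i / 200) for i in range(201)]
```

With these parameter values the curve starts at (2, 0, 0) and returns to its starting point after one period.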
See also
• Trigonometric functions
References
Space form
In mathematics, a space form is a complete Riemannian manifold M of constant sectional curvature K. The three most fundamental examples are Euclidean n-space, the n-dimensional sphere, and hyperbolic space, although a space form need not be simply connected.
Reduction to generalized crystallography
The Killing–Hopf theorem of Riemannian geometry states that the universal cover of an n-dimensional space form $M^{n}$ with curvature $K=-1$ is isometric to $H^{n}$, hyperbolic space, with curvature $K=0$ is isometric to $R^{n}$, Euclidean n-space, and with curvature $K=+1$ is isometric to $S^{n}$, the n-dimensional sphere of points distance 1 from the origin in $R^{n+1}$.
By rescaling the Riemannian metric on $H^{n}$, we may create a space $M_{K}$ of constant curvature $K$ for any $K<0$. Similarly, by rescaling the Riemannian metric on $S^{n}$, we may create a space $M_{K}$ of constant curvature $K$ for any $K>0$. Thus the universal cover of a space form $M$ with constant curvature $K$ is isometric to $M_{K}$.
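The rescaling step uses the standard fact that scaling the metric by a constant scales sectional curvature by its reciprocal, sketched here:

```latex
g' = \lambda g \quad (\lambda > 0)
\qquad\Longrightarrow\qquad
K_{g'}(\sigma) = \frac{1}{\lambda}\, K_{g}(\sigma)
\quad \text{for every 2-plane } \sigma .
```

Taking $\lambda = 1/|K|$ on $H^{n}$ (curvature $-1$) or on $S^{n}$ (curvature $+1$) therefore yields the model $M_{K}$ of constant curvature $K$ described above.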
This reduces the problem of studying space forms to studying discrete groups of isometries $\Gamma $ of $M_{K}$ which act properly discontinuously. Note that the fundamental group of $M$, $\pi _{1}(M)$, will be isomorphic to $\Gamma $. Groups acting in this manner on $R^{n}$ are called crystallographic groups. Groups acting in this manner on $H^{2}$ and $H^{3}$ are called Fuchsian groups and Kleinian groups, respectively.
See also
• Borel conjecture
References
• Goldberg, Samuel I. (1998), Curvature and Homology, Dover Publications, ISBN 978-0-486-40207-9
• Lee, John M. (1997), Riemannian manifolds: an introduction to curvature, Springer
Space group
In mathematics, physics and chemistry, a space group is the symmetry group of a repeating pattern in space, usually in three dimensions.[1] The elements of a space group (its symmetry operations) are the rigid transformations of the pattern that leave it unchanged. In three dimensions, space groups are classified into 219 distinct types, or 230 types if chiral copies are considered distinct. Space groups are discrete cocompact groups of isometries of an oriented Euclidean space in any number of dimensions. In dimensions other than 3, they are sometimes called Bieberbach groups.
In crystallography, space groups are also called the crystallographic or Fedorov groups, and represent a description of the symmetry of the crystal. A definitive source regarding 3-dimensional space groups is the International Tables for Crystallography Hahn (2002).
History
Space groups in 2 dimensions are the 17 wallpaper groups which have been known for several centuries, though the proof that the list was complete was only given in 1891, after the much more difficult classification of space groups had largely been completed.[2]
In 1879 the German mathematician Leonhard Sohncke listed the 65 space groups (called Sohncke groups) whose elements preserve the chirality.[3] More accurately, he listed 66 groups, but both the Russian mathematician and crystallographer Evgraf Fedorov and the German mathematician Arthur Moritz Schoenflies noticed that two of them were really the same. The space groups in three dimensions were first enumerated in 1891 by Fedorov[4] (whose list had two omissions (I43d and Fdd2) and one duplication (Fmm2)), and shortly afterwards in 1891 were independently enumerated by Schönflies[5] (whose list had four omissions (I43d, Pc, Cc, ?) and one duplication (P421m)). The correct list of 230 space groups was found by 1892 during correspondence between Fedorov and Schönflies.[6] William Barlow (1894) later enumerated the groups with a different method, but omitted four groups (Fdd2, I42d, P421d, and P421c) even though he already had the correct list of 230 groups from Fedorov and Schönflies; the common claim that Barlow was unaware of their work is incorrect. Burckhardt (1967) describes the history of the discovery of the space groups in detail.
Elements
The space groups in three dimensions are made from combinations of the 32 crystallographic point groups with the 14 Bravais lattices, each of the latter belonging to one of 7 lattice systems. What this means is that the action of any element of a given space group can be expressed as the action of an element of the appropriate point group followed optionally by a translation. A space group is thus some combination of the translational symmetry of a unit cell (including lattice centering), the point group symmetry operations of reflection, rotation and improper rotation (also called rotoinversion), and the screw axis and glide plane symmetry operations. The combination of all these symmetry operations results in a total of 230 different space groups describing all possible crystal symmetries.
The number of replicates of the asymmetric unit in a unit cell is thus the number of lattice points in the cell times the order of the point group. This ranges from 1 in the case of space group P1 to 192 for a space group like Fm3m, the NaCl structure.
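The two counts quoted above can be checked directly; a minimal sketch (using the standard facts that the trivial point group 1 has order 1, the full octahedral point group m3m has order 48, and a face-centred conventional cell contains 4 lattice points):

```python
# Worked check of "lattice points per cell x point-group order" for the
# two examples in the text: P1 and Fm3m (the NaCl structure).
lattice_points = {"P": 1, "I": 2, "F": 4}   # per conventional unit cell
point_group_order = {"1": 1, "m-3m": 48}

replicates_P1 = lattice_points["P"] * point_group_order["1"]
replicates_Fm3m = lattice_points["F"] * point_group_order["m-3m"]
```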
Elements fixing a point
The elements of the space group fixing a point of space are the identity element, reflections, rotations and improper rotations, including inversion points.
Translations
The translations form a normal abelian subgroup of rank 3, called the Bravais lattice (so named after French physicist Auguste Bravais). There are 14 possible types of Bravais lattice. The quotient of the space group by the Bravais lattice is a finite group which is one of the 32 possible point groups.
Glide planes
A glide plane is a reflection in a plane, followed by a translation parallel to that plane. It is noted by $a$, $b$, or $c$, depending on which axis the glide is along. There is also the $n$ glide, a glide along half of a face diagonal, and the $d$ glide, a glide along a fourth of a face or space diagonal of the unit cell. The latter is called the diamond glide plane as it features in the diamond structure. In 17 space groups, due to the centering of the cell, the glides occur in two perpendicular directions simultaneously, i.e. the same glide plane can be called b or c, a or b, or a or c. For example, group Abm2 could also be called Acm2, and group Ccca could be called Cccb. In 1992, it was suggested to use the symbol e for such planes. The symbols for five space groups have been modified accordingly:
Space group no. | 39   | 41   | 64   | 67   | 68
New symbol      | Aem2 | Aea2 | Cmce | Cmme | Ccce
Old symbol      | Abm2 | Aba2 | Cmca | Cmma | Ccca
Screw axes
A screw axis is a rotation about an axis, followed by a translation along the direction of the axis. These are noted by a number, n, describing the degree of rotation: the number is how many times the operation must be applied to complete a full rotation (e.g., 3 means a rotation of one third of the way around the axis each time). The degree of translation is then added as a subscript showing how far along the axis the translation is, as a fraction of the parallel lattice vector. So 21 is a twofold rotation followed by a translation of 1/2 of the lattice vector.
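As an illustrative sketch (the choice of axis and the function name are ours), a 2₁ screw axis along c can be written directly in fractional coordinates:

```python
# A 2_1 screw axis along the c direction in fractional coordinates:
# rotate 180 degrees about z, then translate by half the lattice vector c.
# Applying it twice shifts the point by the full lattice vector (0, 0, 1),
# which is why the operation is compatible with the lattice.

def screw_21_z(p):
    x, y, z = p
    return (-x, -y, z + 0.5)  # 180 deg about z, then +c/2

p = (0.125, 0.25, 0.375)
print(screw_21_z(p))               # (-0.125, -0.25, 0.875)
print(screw_21_z(screw_21_z(p)))   # (0.125, 0.25, 1.375)
```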
General formula
The general formula for the action of an element of a space group is
$y = M \cdot x + D$
where M is the matrix part, D is the vector part, and the element transforms point x into point y. In general, D = D(lattice) + D(M), where D(M) is a unique function of M that is zero when M is the identity. The matrices M form a point group that is a basis of the space group; the lattice must be symmetric under that point group, but the crystal structure itself may not be symmetric under that point group as applied to any particular point (that is, without a translation). For example, the diamond cubic structure does not have any point where the cubic point group applies.
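The general formula can be packaged as a Seitz-style operator (M | D); a minimal plain-Python sketch (helper names are illustrative) shows how two such operators compose, using a c-glide as the example:

```python
# The action y = M.x + D treated as a Seitz-style operator (M | D).
# Substitution gives the composition rule:
#   (M1|D1)(M2|D2) x = M1(M2 x + D2) + D1 = (M1 M2 | M1 D2 + D1) x.
# Plain lists in fractional coordinates; no third-party dependencies.

def apply(op, x):
    M, D = op
    return [sum(M[i][j] * x[j] for j in range(3)) + D[i] for i in range(3)]

def compose(op1, op2):
    M1, D1 = op1
    M2, D2 = op2
    M = [[sum(M1[i][k] * M2[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    D = [sum(M1[i][j] * D2[j] for j in range(3)) + D1[i] for i in range(3)]
    return (M, D)

# A c-glide perpendicular to b: reflect y -> -y, then translate by c/2.
c_glide = ([[1, 0, 0], [0, -1, 0], [0, 0, 1]], [0.0, 0.0, 0.5])

squared = compose(c_glide, c_glide)
print(squared[0])  # the identity matrix
print(squared[1])  # [0.0, 0.0, 1.0] -- a pure lattice translation
```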
The lattice dimension can be less than the overall dimension, resulting in a "subperiodic" space group. For (overall dimension, lattice dimension):
• (1,1): One-dimensional line groups
• (2,1): Two-dimensional line groups: frieze groups
• (2,2): Wallpaper groups
• (3,1): Three-dimensional line groups; with the 3D crystallographic point groups, the rod groups
• (3,2): Layer groups
• (3,3): The space groups discussed in this article
Chirality
The 65 "Sohncke" space groups, not containing any mirrors, inversion points, improper rotations or glide planes, yield chiral crystals, not identical to their mirror image; whereas space groups that do include at least one of those give achiral crystals. Achiral molecules sometimes form chiral crystals, but chiral molecules always form chiral crystals, in one of the space groups that permit this.
Among the 65 Sohncke groups are 22 that come in 11 enantiomorphic pairs.
Combinations
Only certain combinations of symmetry elements are possible in a space group. Translations are always present, and the space group P1 has only translations and the identity element. The presence of mirrors implies glide planes as well, and the presence of rotation axes implies screw axes as well, but the converses are not true. An inversion and a mirror imply two-fold screw axes, and so on.
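A small 2D sketch (names illustrative) of why mirrors bring glide reflections with them: composing a mirror with a lattice translation parallel to the mirror gives an operation whose square is a pure translation, the signature of a glide:

```python
# 2D sketch: a mirror across the x-axis (y -> -y) composed with the lattice
# translation (1, 0), which is parallel to the mirror line. The square of
# the result is the pure translation (2, 0) -- so the composite is a glide
# reflection rather than a mirror.

def mirror_then_translate(p):
    x, y = p
    return (x + 1.0, -y)

p = (0.25, 0.125)
print(mirror_then_translate(p))                         # (1.25, -0.125)
print(mirror_then_translate(mirror_then_translate(p)))  # (2.25, 0.125)
```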
Notation
There are at least ten methods of naming space groups. Some of these methods can assign several different names to the same space group, so altogether there are many thousands of different names.
Number
The International Union of Crystallography publishes tables of all space group types, and assigns each a unique number from 1 to 230. The numbering is arbitrary, except that groups with the same crystal system or point group are given consecutive numbers.
International symbol notation
Hermann–Mauguin notation
The Hermann–Mauguin (or international) notation describes the lattice and some generators for the group. It has a shortened form called the international short symbol, which is the one most commonly used in crystallography, and usually consists of a set of four symbols. The first describes the centering of the Bravais lattice (P, A, C, I, R or F). The next three describe the most prominent symmetry operation visible when projected along one of the high-symmetry directions of the crystal. These symbols are the same as used in point groups, with the addition of the glide planes and screw axes described above. By way of example, the space group of quartz is P3121, showing that it exhibits primitive centering of the motif (i.e., once per unit cell), with a threefold screw axis and a twofold rotation axis. Note that it does not explicitly contain the crystal system, although this is unique to each space group (in the case of P3121, it is trigonal).
In the international short symbol the first symbol (31 in this example) denotes the symmetry along the major axis (c-axis in trigonal cases), the second (2 in this case) along axes of secondary importance (a and b) and the third symbol the symmetry in another direction. In the trigonal case there also exists a space group P3112. In this space group the twofold axes are not along the a and b-axes but in a direction rotated by 30°.
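Splitting a short symbol into its centering letter and direction symbols can only be sketched under an assumed plain-text convention, since ASCII loses the subscripts; here we assume space-separated fields with screw subscripts written after an underscore (this input format is our assumption, not an official serialization):

```python
# Split an international short symbol into centering letter and up to three
# direction symbols, assuming an input convention of space-separated fields
# with screw subscripts after an underscore, e.g. "P 3_1 2 1" for quartz's
# P3121. The convention and the function name are assumptions.

def parse_short_symbol(symbol):
    parts = symbol.split()
    centering, directions = parts[0], parts[1:]
    return centering, directions

centering, ops = parse_short_symbol("P 3_1 2 1")
print(centering)  # P (primitive lattice)
print(ops)        # ['3_1', '2', '1'] -- symmetry along c, then a/b, then the third direction
```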
The international symbols and international short symbols for some of the space groups were changed slightly between 1935 and 2002, so several space groups have 4 different international symbols in use.
The viewing directions of the 7 crystal systems are shown as follows.
Position | Triclinic | Monoclinic | Orthorhombic | Tetragonal | Trigonal | Hexagonal | Cubic
1        | —         | b          | a            | c          | c        | c         | a
2        | —         | —          | b            | a          | a        | a         | [111]
3        | —         | —          | c            | [110]      | [210]    | [210]     | [110]
Hall notation[7]
Space group notation with an explicit origin. Rotation, translation and axis-direction symbols are clearly separated and inversion centers are explicitly defined. The construction and format of the notation make it particularly suited to computer generation of symmetry information. For example, group number 3 has three Hall symbols: P 2y (P 1 2 1), P 2 (P 1 1 2), P 2x (P 2 1 1).
Schönflies notation
The space groups with a given point group are numbered 1, 2, 3, ... (in the same order as their international number) and this number is added as a superscript to the Schönflies symbol for the point group. For example, groups numbers 3 to 5, whose point group is C2, have Schönflies symbols $C_2^1$, $C_2^2$, $C_2^3$.
Fedorov notation
Shubnikov symbol
Strukturbericht designation
A related notation for crystal structures, given as a letter and index: A for elements (monatomic), B for AB compounds, C for AB2 compounds, D for AmBn compounds, E–K for more complex compounds, L for alloys, O for organic compounds, S for silicates. Some structure designations share the same space group. For example, space group 225 hosts the A1, B1, and C1 structures, and space group 221 hosts Ah and B2.[8] However, crystallographers would not use Strukturbericht notation to describe a space group; rather, it describes a specific crystal structure (space group plus atomic arrangement, i.e. motif).
Orbifold notation (2D)
Fibrifold notation (3D)
As the name suggests, the orbifold notation describes the orbifold, given by the quotient of Euclidean space by the space group, rather than generators of the space group. It was introduced by Conway and Thurston, and is not used much outside mathematics. Some of the space groups have several different fibrifolds associated to them, so have several different fibrifold symbols.
Coxeter notation
Spatial and point symmetry groups, represented as modifications of the pure reflectional Coxeter groups.
Geometric notation[9]
A geometric algebra notation.
Classification systems
There are (at least) 10 different ways to classify space groups into classes. The relations between some of these are described in the following table. Each classification system is a refinement of the ones below it. To understand an explanation given here it may be necessary to understand the next one down.
(Crystallographic) space group types (230 in three dimensions)
Two space groups, considered as subgroups of the group of affine transformations of space, have the same space group type if they are the same up to an affine transformation of space that preserves orientation. Thus e.g. a change of angle between translation vectors does not affect the space group type if it does not add or remove any symmetry. A more formal definition involves conjugacy (see Symmetry group). In three dimensions, for 11 of the affine space groups, there is no chirality-preserving (i.e. orientation-preserving) map from the group to its mirror image, so if one distinguishes groups from their mirror images these each split into two cases (such as P41 and P43). So instead of the 54 affine space groups that preserve chirality there are 54 + 11 = 65 space group types that preserve chirality (the Sohncke groups). For most chiral crystals, the two enantiomorphs belong to the same crystallographic space group, such as P213 for FeSi,[10] but for others, such as quartz, they belong to two enantiomorphic space groups.
Affine space group types (219 in three dimensions)
Two space groups, considered as subgroups of the group of affine transformations of space, have the same affine space group type if they are the same up to an affine transformation, even if that transformation inverts orientation. The affine space group type is determined by the underlying abstract group of the space group. In three dimensions, fifty-four of the affine space group types preserve chirality and give chiral crystals. The two enantiomorphs of a chiral crystal have the same affine space group.
Arithmetic crystal classes (73 in three dimensions)
Sometimes called Z-classes. These are determined by the point group together with the action of the point group on the subgroup of translations. In other words, the arithmetic crystal classes correspond to conjugacy classes of finite subgroups of the general linear group GLn(Z) over the integers. A space group is called symmorphic (or split) if there is a point such that all symmetries are the product of a symmetry fixing this point and a translation. Equivalently, a space group is symmorphic if it is a semidirect product of its point group with its translation subgroup. There are 73 symmorphic space groups, with exactly one in each arithmetic crystal class. There are also 157 nonsymmorphic space group types, with varying numbers in the arithmetic crystal classes.
Arithmetic crystal classes may be interpreted as different orientations of the point groups in the lattice, with the group elements' matrix components being constrained to have integer coefficients in lattice space. This is rather easy to picture in the two-dimensional, wallpaper group case. Some of the point groups have reflections, and the reflection lines can be along the lattice directions, halfway in between them, or both.
• None: C1: p1; C2: p2; C3: p3; C4: p4; C6: p6
• Along: D1: pm, pg; D2: pmm, pmg, pgg; D3: p31m
• Between: D1: cm; D2: cmm; D3: p3m1
• Both: D4: p4m, p4g; D6: p6m
(Geometric) crystal classes (32 in three dimensions)
Sometimes called Q-classes. The crystal class of a space group is determined by its point group: the quotient by the subgroup of translations, acting on the lattice. Two space groups are in the same crystal class if and only if their point groups, which are subgroups of GLn(Z), are conjugate in the larger group GLn(Q).
Bravais flocks (14 in three dimensions)
These are determined by the underlying Bravais lattice type, and correspond to conjugacy classes of lattice point groups in GLn(Z), where the lattice point group is the group of symmetries of the underlying lattice that fix a point of the lattice, and contains the point group.
Crystal systems (7 in three dimensions)
Crystal systems are an ad hoc modification of the lattice systems to make them compatible with the classification according to point groups. They differ from crystal families in that the hexagonal crystal family is split into two subsets, called the trigonal and hexagonal crystal systems. The trigonal crystal system is larger than the rhombohedral lattice system, the hexagonal crystal system is smaller than the hexagonal lattice system, and the remaining crystal systems and lattice systems are the same.
Lattice systems (7 in three dimensions)
The lattice system of a space group is determined by the conjugacy class of the lattice point group (a subgroup of GLn(Z)) in the larger group GLn(Q). In three dimensions the lattice point group can have one of the 7 different orders 2, 4, 8, 12, 16, 24, or 48. The hexagonal crystal family is split into two subsets, called the rhombohedral and hexagonal lattice systems.
Crystal families (6 in three dimensions)
The point group of a space group does not quite determine its lattice system, because occasionally two space groups with the same point group may be in different lattice systems. Crystal families are formed from lattice systems by merging the two lattice systems whenever this happens, so that the crystal family of a space group is determined by either its lattice system or its point group. In 3 dimensions the only two lattice families that get merged in this way are the hexagonal and rhombohedral lattice systems, which are combined into the hexagonal crystal family. The 6 crystal families in 3 dimensions are called triclinic, monoclinic, orthorhombic, tetragonal, hexagonal, and cubic. Crystal families are commonly used in popular books on crystals, where they are sometimes called crystal systems.
Conway, Delgado Friedrichs, Huson and Thurston (2001) gave another classification of the space groups, called fibrifold notation, according to the fibrifold structures on the corresponding orbifold. They divided the 219 affine space groups into reducible and irreducible groups. The reducible groups fall into 17 classes corresponding to the 17 wallpaper groups, and the remaining 35 irreducible groups are the same as the cubic groups and are classified separately.
In other dimensions
Bieberbach's theorems
In n dimensions, an affine space group, or Bieberbach group, is a discrete subgroup of isometries of n-dimensional Euclidean space with a compact fundamental domain. Bieberbach (1911, 1912) proved that the subgroup of translations of any such group contains n linearly independent translations, and is a free abelian subgroup of finite index, and is also the unique maximal normal abelian subgroup. He also showed that in any dimension n there are only a finite number of possibilities for the isomorphism class of the underlying group of a space group, and moreover the action of the group on Euclidean space is unique up to conjugation by affine transformations. This answers part of Hilbert's eighteenth problem. Zassenhaus (1948) showed that conversely any group that is the extension of Zn by a finite group acting faithfully is an affine space group. Combining these results shows that classifying space groups in n dimensions up to conjugation by affine transformations is essentially the same as classifying isomorphism classes for groups that are extensions of Zn by a finite group acting faithfully.
It is essential in Bieberbach's theorems to assume that the group acts as isometries; the theorems do not generalize to discrete cocompact groups of affine transformations of Euclidean space. A counter-example is given by the 3-dimensional Heisenberg group of the integers acting by translations on the Heisenberg group of the reals, identified with 3-dimensional Euclidean space. This is a discrete cocompact group of affine transformations of space, but does not contain a subgroup Z3.
Classification in small dimensions
This table gives the number of space group types in small dimensions, including the numbers of various classes of space group. The numbers of enantiomorphic pairs are given in parentheses.
Dim | Crystal families (OEIS A004032) | Crystal systems (A004031) | Bravais lattices (A256413) | Abstract crystallographic point groups (A006226) | Geometric crystal classes, Q-classes (A004028) | Arithmetic crystal classes, Z-classes (A004027) | Affine space group types (A004029) | Crystallographic space group types (A006227)
0 [1] | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
1 [2] | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 2
2 [3] | 4 | 4 | 5 | 9 | 10 | 13 | 17 | 17
3 [4] | 6 | 7 | 14 | 18 | 32 | 73 | 219 (+11) | 230
4 [5] | 23 (+6) | 33 (+7) | 64 (+10) | 118 | 227 (+44) | 710 (+70) | 4783 (+111) | 4894
5 [6] | 32 | 59 | 189 | 239 | 955 | 6079 | 222018 (+79) | 222097
6 [7] | 91 | 251 | 841 | 1594 | 7103 | 85308 (+?) | 28927915 (+?) | ?
1. Trivial group
2. One is the group of integers and the other is the infinite dihedral group; see symmetry groups in one dimension.
3. These 2D space groups are also called wallpaper groups or plane groups.
4. In 3D, there are 230 crystallographic space group types, which reduce to 219 affine space group types because some types differ from their mirror image; such pairs are said to differ by enantiomorphous character (e.g. P3112 and P3212). Usually space group refers to 3D. The space groups were enumerated independently by Barlow (1894), Fedorov (1891a) and Schönflies (1891).
5. The 4895 4-dimensional groups were enumerated by Brown, Bülow, Neubüser, Wondratschek & Zassenhaus (1978). Neubüser, Souvignier & Wondratschek (2002) corrected the number of enantiomorphic groups from 112 to 111, so the total number of groups is 4783 + 111 = 4894. There are 44 enantiomorphic point groups in 4-dimensional space; if enantiomorphic groups are counted as different, the total number of point groups is 227 + 44 = 271.
6. Plesken & Schulz (2000) enumerated the ones of dimension 5. Souvignier (2003) counted the enantiomorphs.
7. Plesken & Schulz (2000) enumerated the ones of dimension 6; corrected figures were found later.[11] The initially published number of 826 lattice types in Plesken & Hanrath (1984) was corrected to 841 in Opgenorth, Plesken & Schulz (1998). See also Janssen et al. (2002). Souvignier (2003) counted the enantiomorphs, but that paper relied on old erroneous CARAT data for dimension 6.
Magnetic groups and time reversal
In addition to crystallographic space groups there are also magnetic space groups (also called two-color (black and white) crystallographic groups, or Shubnikov groups). These symmetries contain an element known as time reversal. They treat time as an additional dimension, and the group elements can include time reversal as reflection in it. They are of importance in magnetic structures that contain ordered unpaired spins, i.e. ferro-, ferri- or antiferromagnetic structures as studied by neutron diffraction. The time reversal element flips a magnetic spin while leaving all other structure the same, and it can be combined with a number of other symmetry elements. Including time reversal there are 1651 magnetic space groups in 3D (Kim 1999, p. 428). It has also been possible to construct magnetic versions for other overall and lattice dimensions (Daniel Litvin's papers, (Litvin 2008), (Litvin 2005)). Frieze groups are magnetic 1D line groups, layer groups are magnetic wallpaper groups, and the axial 3D point groups are magnetic 2D point groups. The number of original and magnetic groups by (overall, lattice) dimension is as follows (Palistrant 2012) (Souvignier 2006):
Overall dim | Lattice dim | Name | Ordinary symbol | Ordinary count | Magnetic symbol | Magnetic count
0 | 0 | Zero-dimensional symmetry group | $G_{0}$ | 1 | $G_{0}^{1}$ | 2
1 | 0 | One-dimensional point groups | $G_{10}$ | 2 | $G_{10}^{1}$ | 5
1 | 1 | One-dimensional discrete symmetry groups | $G_{1}$ | 2 | $G_{1}^{1}$ | 7
2 | 0 | Two-dimensional point groups | $G_{20}$ | 10 | $G_{20}^{1}$ | 31
2 | 1 | Frieze groups | $G_{21}$ | 7 | $G_{21}^{1}$ | 31
2 | 2 | Wallpaper groups | $G_{2}$ | 17 | $G_{2}^{1}$ | 80
3 | 0 | Three-dimensional point groups | $G_{30}$ | 32 | $G_{30}^{1}$ | 122
3 | 1 | Rod groups | $G_{31}$ | 75 | $G_{31}^{1}$ | 394
3 | 2 | Layer groups | $G_{32}$ | 80 | $G_{32}^{1}$ | 528
3 | 3 | Three-dimensional space groups | $G_{3}$ | 230 | $G_{3}^{1}$ | 1651
4 | 0 | Four-dimensional point groups | $G_{40}$ | 271 | $G_{40}^{1}$ | 1202
4 | 1 | — | $G_{41}$ | 343 | — | —
4 | 2 | — | $G_{42}$ | 1091 | — | —
4 | 3 | — | $G_{43}$ | 1594 | — | —
4 | 4 | Four-dimensional discrete symmetry groups | $G_{4}$ | 4894 | $G_{4}^{1}$ | 62227
Table of space groups in 2 dimensions (wallpaper groups)
Table of the wallpaper groups using the classification of the 2-dimensional space groups:
Crystal system, Bravais lattice | Int'l | Schön. | Orbifold | Cox. | Ord. | Arithmetic class | Wallpaper groups (orbifold)
Oblique | 1 | C1 | (1) | [ ]+ | 1 | None | p1 (1)
Oblique | 2 | C2 | (22) | [2]+ | 2 | None | p2 (2222)
Rectangular | m | D1 | (*) | [ ] | 2 | Along | pm (**), pg (××)
Rectangular | 2mm | D2 | (*22) | [2] | 4 | Along | pmm (*2222), pmg (22*)
Centered rectangular | m | D1 | (*) | [ ] | 2 | Between | cm (*×)
Centered rectangular | 2mm | D2 | (*22) | [2] | 4 | Between | cmm (2*22), pgg (22×)
Square | 4 | C4 | (44) | [4]+ | 4 | None | p4 (442)
Square | 4mm | D4 | (*44) | [4] | 8 | Both | p4m (*442), p4g (4*2)
Hexagonal | 3 | C3 | (33) | [3]+ | 3 | None | p3 (333)
Hexagonal | 3m | D3 | (*33) | [3] | 6 | Between | p3m1 (*333), p31m (3*3)
Hexagonal | 6 | C6 | (66) | [6]+ | 6 | None | p6 (632)
Hexagonal | 6mm | D6 | (*66) | [6] | 12 | Both | p6m (*632)
For each geometric class, the possible arithmetic classes are
• None: no reflection lines
• Along: reflection lines along lattice directions
• Between: reflection lines halfway in between lattice directions
• Both: reflection lines both along and between lattice directions
Table of space groups in 3 dimensions
№ | Crystal system (count), Bravais lattice | Int'l | Schön. | Orbifold | Cox. | Ord. | Space groups (international short symbol)
1 | Triclinic (2) | 1 | C1 | 11 | [ ]+ | 1 | P1
2 | Triclinic | 1 | Ci | 1× | [2+,2+] | 2 | P1
3–5 | Monoclinic (13) | 2 | C2 | 22 | [2]+ | 2 | P2, P21, C2
6–9 | Monoclinic | m | Cs | *11 | [ ] | 2 | Pm, Pc, Cm, Cc
10–15 | Monoclinic | 2/m | C2h | 2* | [2,2+] | 4 | P2/m, P21/m, C2/m, P2/c, P21/c, C2/c
16–24 | Orthorhombic (59) | 222 | D2 | 222 | [2,2]+ | 4 | P222, P2221, P21212, P212121, C2221, C222, F222, I222, I212121
25–46 | Orthorhombic | mm2 | C2v | *22 | [2] | 4 | Pmm2, Pmc21, Pcc2, Pma2, Pca21, Pnc2, Pmn21, Pba2, Pna21, Pnn2, Cmm2, Cmc21, Ccc2, Amm2, Aem2, Ama2, Aea2, Fmm2, Fdd2, Imm2, Iba2, Ima2
47–74 | Orthorhombic | mmm | D2h | *222 | [2,2] | 8 | Pmmm, Pnnn, Pccm, Pban, Pmma, Pnna, Pmna, Pcca, Pbam, Pccn, Pbcm, Pnnm, Pmmn, Pbcn, Pbca, Pnma, Cmcm, Cmce, Cmmm, Cccm, Cmme, Ccce, Fmmm, Fddd, Immm, Ibam, Ibca, Imma
75–80 | Tetragonal (68) | 4 | C4 | 44 | [4]+ | 4 | P4, P41, P42, P43, I4, I41
81–82 | Tetragonal | 4 | S4 | 2× | [2+,4+] | 4 | P4, I4
83–88 | Tetragonal | 4/m | C4h | 4* | [2,4+] | 8 | P4/m, P42/m, P4/n, P42/n, I4/m, I41/a
89–98 | Tetragonal | 422 | D4 | 224 | [2,4]+ | 8 | P422, P4212, P4122, P41212, P4222, P42212, P4322, P43212, I422, I4122
99–110 | Tetragonal | 4mm | C4v | *44 | [4] | 8 | P4mm, P4bm, P42cm, P42nm, P4cc, P4nc, P42mc, P42bc, I4mm, I4cm, I41md, I41cd
111–122 | Tetragonal | 42m | D2d | 2*2 | [2+,4] | 8 | P42m, P42c, P421m, P421c, P4m2, P4c2, P4b2, P4n2, I4m2, I4c2, I42m, I42d
123–142 | Tetragonal | 4/mmm | D4h | *224 | [2,4] | 16 | P4/mmm, P4/mcc, P4/nbm, P4/nnc, P4/mbm, P4/mnc, P4/nmm, P4/ncc, P42/mmc, P42/mcm, P42/nbc, P42/nnm, P42/mbc, P42/mnm, P42/nmc, P42/ncm, I4/mmm, I4/mcm, I41/amd, I41/acd
143–146 | Trigonal (25) | 3 | C3 | 33 | [3]+ | 3 | P3, P31, P32, R3
147–148 | Trigonal | 3 | S6 | 3× | [2+,6+] | 6 | P3, R3
149–155 | Trigonal | 32 | D3 | 223 | [2,3]+ | 6 | P312, P321, P3112, P3121, P3212, P3221, R32
156–161 | Trigonal | 3m | C3v | *33 | [3] | 6 | P3m1, P31m, P3c1, P31c, R3m, R3c
162–167 | Trigonal | 3m | D3d | 2*3 | [2+,6] | 12 | P31m, P31c, P3m1, P3c1, R3m, R3c
168–173 | Hexagonal (27) | 6 | C6 | 66 | [6]+ | 6 | P6, P61, P65, P62, P64, P63
174 | Hexagonal | 6 | C3h | 3* | [2,3+] | 6 | P6
175–176 | Hexagonal | 6/m | C6h | 6* | [2,6+] | 12 | P6/m, P63/m
177–182 | Hexagonal | 622 | D6 | 226 | [2,6]+ | 12 | P622, P6122, P6522, P6222, P6422, P6322
183–186 | Hexagonal | 6mm | C6v | *66 | [6] | 12 | P6mm, P6cc, P63cm, P63mc
187–190 | Hexagonal | 6m2 | D3h | *223 | [2,3] | 12 | P6m2, P6c2, P62m, P62c
191–194 | Hexagonal | 6/mmm | D6h | *226 | [2,6] | 24 | P6/mmm, P6/mcc, P63/mcm, P63/mmc
195–199 | Cubic (36) | 23 | T | 332 | [3,3]+ | 12 | P23, F23, I23, P213, I213
200–206 | Cubic | m3 | Th | 3*2 | [3+,4] | 24 | Pm3, Pn3, Fm3, Fd3, Im3, Pa3, Ia3
207–214 | Cubic | 432 | O | 432 | [3,4]+ | 24 | P432, P4232, F432, F4132, I432, P4332, P4132, I4132
215–220 | Cubic | 43m | Td | *332 | [3,3] | 24 | P43m, F43m, I43m, P43n, F43c, I43d
221–230 | Cubic | m3m | Oh | *432 | [3,4] | 48 | Pm3m, Pn3n, Pm3n, Pn3m, Fm3m, Fm3c, Fd3m, Fd3c, Im3m, Ia3d
Note: An e plane is a double glide plane, one having glides in two different directions. They are found in seven orthorhombic, five tetragonal and five cubic space groups, all with a centered lattice. The use of the symbol e became official with Hahn (2002).
The lattice system can be found as follows. If the crystal system is not trigonal then the lattice system is of the same type. If the crystal system is trigonal, then the lattice system is hexagonal unless the space group is one of the seven in the rhombohedral lattice system consisting of the 7 trigonal space groups in the table above whose name begins with R. (The term rhombohedral system is also sometimes used as an alternative name for the whole trigonal system.) The hexagonal lattice system is larger than the hexagonal crystal system, and consists of the hexagonal crystal system together with the 18 groups of the trigonal crystal system other than the seven whose names begin with R.
The Bravais lattice of the space group is determined by the lattice system together with the initial letter of its name, which for the non-rhombohedral groups is P, I, F, A or C, standing for the primitive, body-centered, face-centered, A-face-centered or C-face-centered lattices. There are seven rhombohedral space groups, with initial letter R.
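The rule in the paragraphs above can be condensed into a small lookup (the function name is ours): the lattice system equals the crystal system except in the trigonal case, where the initial letter of the symbol decides.

```python
# Lattice system from the crystal system plus the symbol's initial letter;
# only the trigonal case needs the letter (R -> rhombohedral, else hexagonal).

def lattice_system(crystal_system, symbol):
    if crystal_system != "trigonal":
        return crystal_system
    return "rhombohedral" if symbol.startswith("R") else "hexagonal"

print(lattice_system("trigonal", "R3m"))    # rhombohedral
print(lattice_system("trigonal", "P3121"))  # hexagonal
print(lattice_system("cubic", "Fm3m"))      # cubic
```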
Derivation of the crystal class from the space group
1. Leave out the Bravais type
2. Convert all symmetry elements with translational components into their respective symmetry elements without translation symmetry (Glide planes are converted into simple mirror planes; Screw axes are converted into simple axes of rotation)
3. Axes of rotation, rotoinversion axes and mirror planes remain unchanged.
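The three steps can be sketched for symbols written in an assumed spaced ASCII convention (an underscore marks a screw-axis subscript; both the convention and the helper name are our assumptions, since plain text loses the subscript formatting):

```python
# Steps 1-3 applied to a short symbol such as "P 2_1/c": drop the centering
# letter, strip screw translations, and turn glide letters (a, b, c, d, e, n)
# into mirrors m, leaving the point-group symbol.

GLIDE_LETTERS = set("abcden")

def to_point_group(space_group):
    parts = space_group.split()[1:]  # step 1: leave out the Bravais letter
    out = []
    for part in parts:
        # step 2: strip screw-axis subscripts (handles forms like "2_1/c")
        part = "/".join(p.split("_")[0] for p in part.split("/"))
        # step 2 (cont.): glide planes become simple mirror planes
        part = "".join("m" if ch in GLIDE_LETTERS else ch for ch in part)
        out.append(part)  # step 3: rotation axes and mirrors stay unchanged
    return " ".join(out)

print(to_point_group("P 2_1/c"))  # 2/m
print(to_point_group("P n m a"))  # m m m
```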
References
1. Hiller, Howard (1986). "Crystallography and cohomology of groups". The American Mathematical Monthly. 93 (10): 765–779. doi:10.2307/2322930. JSTOR 2322930.
2. Fedorov (1891b).
3. Sohncke, Leonhard (1879). Die Entwicklung einer Theorie der Krystallstruktur [The Development of a Theory of Crystal Structure] (in German). Leipzig, Germany: B.G. Teubner.
4. Fedorov (1891a).
5. Schönflies, Arthur M. (1891). Krystallsysteme und Krystallstruktur [Crystal Systems and Crystal Structure] (in German). Leipzig, Germany: B.G. Teubner.
6. von Fedorow, E. (1892). "Zusammenstellung der kirstallographischen Resultate des Herrn Schoenflies und der meinigen" [Compilation of the crystallographic results of Mr. Schoenflies and of mine]. Zeitschrift für Krystallographie und Mineralogie (in German). 20: 25–75.
7. Sydney R. Hall; Ralf W. Grosse-Kunstleve. "Concise Space-Group Symbols".
8. "Strukturbericht - Wikimedia Commons". commons.wikimedia.org.
9. David Hestenes; Jeremy Holt (January 2007). "The Crystallographic Space Groups in Geometric Algebra" (PDF). Journal of Mathematical Physics. 48 (2): 023514. Bibcode:2007JMP....48b3514H. doi:10.1063/1.2426416.
10. J.C.H. Spence and J.M. Zuo (1994). "On the minimum number of beams needed to distinguish enantiomorphs in X-ray and electron diffraction". Acta Crystallographica Section A. 50 (5): 647–650. doi:10.1107/S0108767394002850.
11. "The CARAT Homepage". Retrieved 11 May 2015.
• Barlow, W (1894), "Über die geometrischen Eigenschaften starrer Strukturen und ihre Anwendung auf Kristalle" [On the geometric properties of rigid structures and their application to crystals], Zeitschrift für Kristallographie, 23: 1–63, doi:10.1524/zkri.1894.23.1.1, S2CID 102301331
• Bieberbach, Ludwig (1911), "Über die Bewegungsgruppen der Euklidischen Räume" [On the groups of rigid transformations in Euclidean spaces], Mathematische Annalen, 70 (3): 297–336, doi:10.1007/BF01564500, ISSN 0025-5831, S2CID 124429194
• Bieberbach, Ludwig (1912), "Über die Bewegungsgruppen der Euklidischen Räume (Zweite Abhandlung.) Die Gruppen mit einem endlichen Fundamentalbereich" [On the groups of rigid transformations in Euclidean spaces (Second essay.) Groups with a finite fundamental domain], Mathematische Annalen, 72 (3): 400–412, doi:10.1007/BF01456724, ISSN 0025-5831, S2CID 119472023
• Brown, Harold; Bülow, Rolf; Neubüser, Joachim; Wondratschek, Hans; Zassenhaus, Hans (1978), Crystallographic groups of four-dimensional space, New York: Wiley-Interscience [John Wiley & Sons], ISBN 978-0-471-03095-9, MR 0484179
• Burckhardt, Johann Jakob (1947), Die Bewegungsgruppen der Kristallographie [Groups of Rigid Transformations in Crystallography], Lehrbücher und Monographien aus dem Gebiete der exakten Wissenschaften (Textbooks and Monographs from the Fields of the Exact Sciences), vol. 13, Verlag Birkhäuser, Basel, MR 0020553
• Burckhardt, Johann Jakob (1967), "Zur Geschichte der Entdeckung der 230 Raumgruppen" [On the history of the discovery of the 230 space groups], Archive for History of Exact Sciences, 4 (3): 235–246, doi:10.1007/BF00412962, ISSN 0003-9519, MR 0220837, S2CID 121994079
• Conway, John Horton; Delgado Friedrichs, Olaf; Huson, Daniel H.; Thurston, William P. (2001), "On three-dimensional space groups", Beiträge zur Algebra und Geometrie, 42 (2): 475–507, ISSN 0138-4821, MR 1865535
• Fedorov, E. S. (1891a), "Симметрія правильныхъ системъ фигуръ" [Simmetriya pravil'nykh sistem figur, The symmetry of regular systems of figures], Записки Императорского С.-Петербургского Минералогического Общества (Zapiski Imperatorskova Sankt Petersburgskova Mineralogicheskova Obshchestva, Proceedings of the Imperial St. Petersburg Mineralogical Society), 2nd series (in Russian), 28 (2): 1–146
• English translation: Fedorov, E. S. (1971). Symmetry of Crystals. American Crystallographic Association Monograph No. 7. Translated by David and Katherine Harker. Buffalo, NY: American Crystallographic Association. pp. 50–131.
Space hierarchy theorem
In computational complexity theory, the space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous theorems for time are the time hierarchy theorems.
The foundation for the hierarchy theorems lies in the intuition that with either more time or more space comes the ability to compute more functions (or decide more languages). The hierarchy theorems are used to demonstrate that the time and space complexity classes form a hierarchy where classes with tighter bounds contain fewer languages than those with more relaxed bounds. Here we define and prove the space hierarchy theorem.
The space hierarchy theorems rely on the concept of space-constructible functions. The deterministic and nondeterministic space hierarchy theorems state that for all space-constructible functions f(n),
${\mathsf {SPACE}}\left(o(f(n))\right)\subsetneq {\mathsf {SPACE}}(f(n))$,
where SPACE stands for either DSPACE or NSPACE, and o refers to the little o notation.
Statement
Formally, a function $f:\mathbb {N} \longrightarrow \mathbb {N} $ is space-constructible if $f(n)\geq \log ~n$ and there exists a Turing machine which computes the function $f(n)$ in space $O(f(n))$ when starting with an input $1^{n}$, where $1^{n}$ represents a string of n consecutive 1s. Most of the common functions that we work with are space-constructible, including polynomials, exponentials, and logarithms.
For every space-constructible function $f:\mathbb {N} \longrightarrow \mathbb {N} $, there exists a language L that is decidable in space $O(f(n))$ but not in space $o(f(n))$.
Proof
The goal is to define a language that can be decided in space $O(f(n))$ but not space $o(f(n))$. The language is defined as L:
$L=\{~(\langle M\rangle ,10^{k}):M{\mbox{ uses space }}\leq f(|\langle M\rangle ,10^{k}|){\mbox{ and time }}\leq 2^{f(|\langle M\rangle ,10^{k}|)}{\mbox{ and }}M{\mbox{ does not accept }}(\langle M\rangle ,10^{k})~\}$
For any machine M that decides a language in space $o(f(n))$, L will differ in at least one place from the language of M. Namely, for some large enough k, M will use space $\leq f(|\langle M\rangle ,10^{k}|)$ on input $(\langle M\rangle ,10^{k})$, and L will therefore differ from the language of M on that input.
On the other hand, L is in ${\mathsf {SPACE}}(f(n))$. The algorithm for deciding the language L is as follows:
1. On an input x, compute $f(|x|)$ using space-constructibility, and mark off $f(|x|)$ cells of tape. Whenever an attempt is made to use more than $f(|x|)$ cells, reject.
2. If x is not of the form $\langle M\rangle ,10^{k}$ for some TM M, reject.
3. Simulate M on input x for at most $2^{f(|x|)}$ steps (using $f(|x|)$ space). If the simulation tries to use more than $f(|x|)$ space or more than $2^{f(|x|)}$ operations, then reject.
4. If M accepted x during this simulation, then reject; otherwise, accept.
Note on step 3: Execution is limited to $2^{f(|x|)}$ steps in order to avoid the case where M does not halt on the input x, that is, the case where M uses only $O(f(|x|))$ space as required but runs forever.
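The step bound in step 3 is justified by counting configurations: a machine restricted to a fixed number of work cells has only exponentially many distinct configurations, so any run longer than that must repeat a configuration and therefore never halt. A minimal sketch of the count (the machine parameters below are hypothetical, chosen only for illustration):

```python
def config_bound(states, input_len, cells, alphabet):
    """Upper bound on distinct configurations of a machine using at most
    `cells` work-tape cells: choices of control state, input-head position,
    work-head position, and work-tape contents."""
    return states * input_len * cells * alphabet ** cells

# A hypothetical machine: 4 states, input length 8, 3 binary work cells.
# Any halting run takes at most this many steps:
print(config_bound(4, 8, 3, 2))  # 768 = 4 * 8 * 3 * 2**3
```

Since this bound is $2^{O(f(|x|))}$, cutting the simulation off after exponentially many steps loses no halting computations.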
The above proof holds for the case of PSPACE, but some changes need to be made for the case of NPSPACE. The crucial point is that while on a deterministic TM, acceptance and rejection can be inverted (crucial for step 4), this is not possible on a non-deterministic machine.
For the case of NPSPACE, L needs to be redefined first:
$L=\{~(\langle M\rangle ,10^{k}):M{\mbox{ uses space }}\leq f(|\langle M\rangle ,10^{k}|){\mbox{ and }}M{\mbox{ accepts }}(\langle M\rangle ,10^{k})~\}$
Now, the algorithm needs to be changed to accept L by modifying step 4 to:
• If M accepted x during this simulation, then accept; otherwise, reject.
L cannot be decided by a TM using $o(f(n))$ cells. Assume L can be decided by some TM M using $o(f(n))$ cells; then, by the Immerman–Szelepcsényi theorem, ${\overline {L}}$ can also be decided by a TM (call it ${\overline {M}}$) using $o(f(n))$ cells. This leads to a contradiction, so the assumption must be false:
1. If $w=(\langle {\overline {M}}\rangle ,10^{k})$ (for some large enough k) is not in ${\overline {L}}$ then M will accept it, therefore ${\overline {M}}$ rejects w, therefore w is in ${\overline {L}}$ (contradiction).
2. If $w=(\langle {\overline {M}}\rangle ,10^{k})$ (for some large enough k) is in ${\overline {L}}$ then M will reject it, therefore ${\overline {M}}$ accepts w, therefore w is not in ${\overline {L}}$ (contradiction).
Comparison and improvements
The space hierarchy theorem is stronger than the analogous time hierarchy theorems in several ways:
• It only requires s(n) to be at least log n instead of at least n.
• It can separate classes with any asymptotic difference, whereas the time hierarchy theorem requires them to be separated by a logarithmic factor.
• It only requires the function to be space-constructible, not time-constructible.
It seems to be easier to separate classes in space than in time. Indeed, whereas the time hierarchy theorem has seen little remarkable improvement since its inception, the nondeterministic space hierarchy theorem has seen at least one important improvement by Viliam Geffert in his 2003 paper "Space hierarchy theorem revised". This paper made several generalizations of the theorem:
• It relaxes the space-constructibility requirement. Instead of merely separating the union classes ${\mathsf {DSPACE}}(O(s(n)))$ and ${\mathsf {DSPACE}}(o(s(n)))$, it separates ${\mathsf {DSPACE}}(f(n))$ from ${\mathsf {DSPACE}}(g(n))$ where $f(n)$ is an arbitrary $O(s(n))$ function and $g(n)$ is a computable $o(s(n))$ function. These functions need not be space-constructible or even monotone increasing.
• It identifies a unary language, or tally language, which is in one class but not the other. In the original theorem, the separating language was arbitrary.
• It does not require $s(n)$ to be at least log n; it can be any nondeterministically fully space-constructible function.
Refinement of space hierarchy
If space is measured as the number of cells used regardless of alphabet size, then ${\mathsf {SPACE}}(f(n))={\mathsf {SPACE}}(O(f(n)))$ because one can achieve any linear compression by switching to a larger alphabet. However, by measuring space in bits, a much sharper separation is achievable for deterministic space. Instead of being defined up to a multiplicative constant, space is now defined up to an additive constant. However, because any constant amount of external space can be saved by storing the contents into the internal state, we still have ${\mathsf {SPACE}}(f(n))={\mathsf {SPACE}}(f(n)+O(1))$.
In the following, assume that f is space-constructible and that SPACE denotes deterministic space.
• For a wide variety of sequential computational models, including for Turing machines, SPACE(f(n)-ω(log(f(n)+n))) ⊊ SPACE(f(n)). This holds even if SPACE(f(n)-ω(log(f(n)+n))) is defined using a different computational model than ${\mathsf {SPACE}}(f(n))$ because the different models can simulate each other with $O(\log(f(n)+n))$ space overhead.
• For certain computational models, we even have SPACE(f(n)-ω(1)) ⊊ SPACE(f(n)). In particular, this holds for Turing machines if we fix the alphabet, the number of heads on the input tape, the number of heads on the worktape (using a single worktape), and add delimiters for the visited portion of the worktape (that can be checked without increasing space usage). SPACE(f(n)) does not depend on whether the worktape is infinite or semi-infinite. We can also have a fixed number of worktapes if f(n) is either a space-constructible tuple giving the per-tape space usage, or a SPACE(f(n)-ω(log(f(n))))-constructible number giving the total space usage (not counting the overhead for storing the length of each tape).
The proof is similar to the proof of the space hierarchy theorem, but with two complications: The universal Turing machine has to be space-efficient, and the reversal has to be space-efficient. One can generally construct universal Turing machines with $O(\log(space))$ space overhead, and under appropriate assumptions, just $O(1)$ space overhead (which may depend on the machine being simulated). For the reversal, the key issue is how to detect if the simulated machine rejects by entering an infinite (space-constrained) loop. Simply counting the number of steps taken would increase space consumption by about $f(n)$. At the cost of a potentially exponential time increase, loops can be detected space-efficiently as follows:[1]
Modify the machine to erase everything and go to a specific configuration A on success. Use depth-first search to determine whether A is reachable in the space bound from the starting configuration. The search starts at A and goes over configurations that lead to A. Because of determinism, this can be done in place and without going into a loop.
It can also be determined whether the machine exceeds a space bound (as opposed to looping within the space bound) by iterating over all configurations about to exceed the space bound and checking (again using depth-first search) whether the initial configuration leads to any of them.
Corollaries
Corollary 1
For any two functions $f_{1}$, $f_{2}:\mathbb {N} \longrightarrow \mathbb {N} $, where $f_{1}(n)$ is $o(f_{2}(n))$ and $f_{2}$ is space-constructible, ${\mathsf {SPACE}}(f_{1}(n))\subsetneq {\mathsf {SPACE}}(f_{2}(n))$.
This corollary lets us separate various space complexity classes. The function $n^{k}$ is space-constructible for every natural number k; therefore, for any two natural numbers $k_{1}<k_{2}$ we can prove ${\mathsf {SPACE}}(n^{k_{1}})\subsetneq {\mathsf {SPACE}}(n^{k_{2}})$. This idea can be extended to real numbers in the following corollary. This demonstrates the detailed hierarchy within the PSPACE class.
Corollary 2
For any two nonnegative real numbers $a_{1}<a_{2},{\mathsf {SPACE}}(n^{a_{1}})\subsetneq {\mathsf {SPACE}}(n^{a_{2}})$.
Corollary 3
NL ⊊ PSPACE.
Proof
Savitch's theorem shows that ${\mathsf {NL}}\subseteq {\mathsf {SPACE}}(\log ^{2}n)$, while the space hierarchy theorem shows that ${\mathsf {SPACE}}(\log ^{2}n)\subsetneq {\mathsf {SPACE}}(n)$. The result is this corollary along with the fact that TQBF ∉ NL since TQBF is PSPACE-complete.
This could also be proven using the non-deterministic space hierarchy theorem to show that NL ⊊ NPSPACE, and using Savitch's theorem to show that PSPACE = NPSPACE.
Corollary 4
PSPACE ⊊ EXPSPACE.
This last corollary shows the existence of decidable problems that are intractable. In other words, their decision procedures must use more than polynomial space.
Corollary 5
There are problems in PSPACE requiring an arbitrarily large exponent to solve; therefore PSPACE does not collapse to ${\mathsf {DSPACE}}(n^{k})$ for any constant k.
Corollary 6
SPACE(n) ≠ PTIME.
To see this, assume the contrary: then any problem decided in space $O(n)$ is decided in time $O(n^{c})$ for some constant c, so any problem $L$ decided in space $O(n^{b})$ is decided in time $O((n^{b})^{c})=O(n^{bc})$. Since ${\mathsf {P}}:=\bigcup _{k\in \mathbb {N} }{\mathsf {DTIME}}(n^{k})$ is closed under such a change of bound, that is, $\bigcup _{k\in \mathbb {N} }{\mathsf {DTIME}}(n^{bk})\subseteq {\mathsf {P}}$, it follows that $L\in {\mathsf {P}}$. This implies that ${\mathsf {SPACE}}(n^{b})\subseteq {\mathsf {P}}\subseteq {\mathsf {SPACE}}(n)$ for all b, but the space hierarchy theorem implies that ${\mathsf {SPACE}}(n^{2})\not \subseteq {\mathsf {SPACE}}(n)$, and Corollary 6 follows.

Note that this argument proves neither ${\mathsf {P}}\not \subseteq {\mathsf {SPACE}}(n)$ nor ${\mathsf {SPACE}}(n)\not \subseteq {\mathsf {P}}$: to reach a contradiction we used the negation of both statements, that is, both inclusions, so we can only deduce that at least one of them fails. It is currently unknown which fails, but it is conjectured that both do, that is, that P and SPACE(n) are incomparable.[2] This question is related to the time complexity of (nondeterministic) linear bounded automata, which accept the complexity class ${\mathsf {NSPACE}}(n)$ (also known as the class of context-sensitive languages, CSL); by the above, CSL is not known to be decidable in polynomial time (see also Kuroda's two problems on LBA).
See also
• Time hierarchy theorem
References
1. Sipser, Michael (1978). "Halting Space-Bounded Computations". Proceedings of the 19th Annual Symposium on Foundations of Computer Science.
2. "How do we know that P ≠ LINSPACE without knowing if one is a subset of the other?". MathOverflow. https://mathoverflow.net/questions/40770/how-do-we-know-that-p-linspace-without-knowing-if-one-is-a-subset-of-the-othe/40771#40771
• Arora, Sanjeev; Barak, Boaz (2009). Computational complexity. A modern approach. Cambridge University Press. ISBN 978-0-521-42426-4. Zbl 1193.68112.
• Luca Trevisan. Notes on Hierarchy Theorems. Handout 7. CS172: Automata, Computability and Complexity. U.C. Berkeley. April 26, 2004.
• Viliam Geffert. Space hierarchy theorem revised. Theoretical Computer Science, volume 295, number 1–3, p. 171-187. February 24, 2003.
• Sipser, Michael (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Pages 306–310 of section 9.1: Hierarchy theorems.
• Papadimitriou, Christos (1993). Computational Complexity (1st ed.). Addison Wesley. ISBN 0-201-53082-1. Section 7.2: The Hierarchy Theorem, pp. 143–146.
Bravais lattice
In geometry and crystallography, a Bravais lattice, named after Auguste Bravais (1850),[1] is an infinite array of discrete points generated by a set of discrete translation operations described in three-dimensional space by
$\mathbf {R} =n_{1}\mathbf {a} _{1}+n_{2}\mathbf {a} _{2}+n_{3}\mathbf {a} _{3},$
where the ni are any integers, and ai are primitive translation vectors, or primitive vectors, which lie in different directions (not necessarily mutually perpendicular) and span the lattice. The choice of primitive vectors for a given Bravais lattice is not unique. A fundamental aspect of any Bravais lattice is that, for any choice of direction, the lattice appears exactly the same from each of the discrete lattice points when looking in that chosen direction.
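The translation formula above can be made concrete with a few lines of Python. This sketch (ours, purely illustrative, not part of the article) enumerates all lattice points R whose coefficients satisfy |ni| ≤ n_max for a given set of primitive vectors:

```python
import itertools

def lattice_points(a1, a2, a3, n_max):
    """All R = n1*a1 + n2*a2 + n3*a3 with each integer |ni| <= n_max."""
    pts = []
    for n1, n2, n3 in itertools.product(range(-n_max, n_max + 1), repeat=3):
        # Combine the primitive vectors component by component.
        pts.append(tuple(n1 * u + n2 * v + n3 * w
                         for u, v, w in zip(a1, a2, a3)))
    return pts

# Simple cubic lattice with unit primitive vectors along the axes:
pts = lattice_points((1, 0, 0), (0, 1, 0), (0, 0, 1), 1)
print(len(pts))  # (2*1 + 1)**3 = 27 points
```

In the full (infinite) lattice the coefficients range over all integers, so translating the point set by any primitive vector maps it onto itself; this is exactly the "looks the same from every lattice point" property stated above.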
The Bravais lattice concept is used to formally define a crystalline arrangement and its (finite) frontiers. A crystal is made up of one or more atoms, called the basis or motif, at each lattice point. The basis may consist of atoms, molecules, or polymer strings of solid matter, and the lattice provides the locations of the basis.
Two Bravais lattices are often considered equivalent if they have isomorphic symmetry groups. In this sense, there are 5 possible Bravais lattices in 2-dimensional space and 14 possible Bravais lattices in 3-dimensional space. The 14 possible symmetry groups of Bravais lattices are 14 of the 230 space groups. In the context of the space group classification, the Bravais lattices are also called Bravais classes, Bravais arithmetic classes, or Bravais flocks.[2]
Unit cell
In crystallography, there is the concept of a unit cell which comprises the space between adjacent lattice points as well as any atoms in that space. A unit cell is defined as a space that, when translated through a subset of all vectors described by $\mathbf {R} =n_{1}\mathbf {a} _{1}+n_{2}\mathbf {a} _{2}+n_{3}\mathbf {a} _{3}$, fills the lattice space without overlapping or voids. (I.e., a lattice space is a multiple of a unit cell.)[3] There are mainly two types of unit cells: primitive unit cells and conventional unit cells. A primitive cell is the very smallest component of a lattice (or crystal) which, when stacked together with lattice translation operations, reproduces the whole lattice (or crystal).[4] Note that the translations must be lattice translation operations that cause the lattice to appear unchanged after the translation. If arbitrary translations were allowed, one could make a primitive cell half the size of the true one, and translate twice as often, as an example.

Another way of defining the size of a primitive cell that avoids invoking lattice translation operations, is to say that the primitive cell is the smallest possible component of a lattice (or crystal) that can be repeated to reproduce the whole lattice (or crystal), and that contains exactly one lattice point. In either definition, the primitive cell is characterized by its small size. There are clearly many choices of cell that can reproduce the whole lattice when stacked (two lattice halves, for instance), and the minimum size requirement distinguishes the primitive cell from all these other valid repeating units. If the lattice or crystal is 2-dimensional, the primitive cell has a minimum area; likewise in 3 dimensions the primitive cell has a minimum volume.

Despite this rigid minimum-size requirement, there is not one unique choice of primitive unit cell. In fact, all cells whose borders are primitive translation vectors will be primitive unit cells.
The fact that there is not a unique choice of primitive translation vectors for a given lattice leads to the multiplicity of possible primitive unit cells. Conventional unit cells, on the other hand, are not necessarily minimum-size cells. They are chosen purely for convenience and are often used for illustration purposes. They are loosely defined.
Primitive unit cells are defined as unit cells with the smallest volume for a given crystal. (A crystal is a lattice and a basis at every lattice point.) To have the smallest cell volume, a primitive unit cell must contain (1) only one lattice point and (2) the minimum amount of basis constituents (e.g., the minimum number of atoms in a basis). For the former requirement, counting the number of lattice points in a unit cell is such that, if a lattice point is shared by m adjacent unit cells around that lattice point, then the point is counted as 1/m. The latter requirement is necessary since there are crystals that can be described by more than one combination of a lattice and a basis. For example, a crystal, viewed as a lattice with a single kind of atom located at every lattice point (the simplest basis form), may also be viewed as a lattice with a basis of two atoms. In this case, a primitive unit cell is a unit cell having only one lattice point in the first way of describing the crystal in order to ensure the smallest unit cell volume.
There can be more than one way to choose a primitive cell for a given crystal and each choice will have a different primitive cell shape, but the primitive cell volume is the same for every choice and each choice will have the property that a one-to-one correspondence can be established between primitive unit cells and discrete lattice points over the associated lattice. All primitive unit cells with different shapes for a given crystal have the same volume by definition; for a given crystal, if n is the density of lattice points in a lattice ensuring the minimum amount of basis constituents and v is the volume of a chosen primitive cell, then nv = 1, resulting in v = 1/n, so every primitive cell has the same volume of 1/n.[3]
Among all possible primitive cells for a given crystal, an obvious primitive cell may be the parallelepiped formed by a chosen set of primitive translation vectors. (Again, these vectors must make a lattice with the minimum amount of basis constituents.)[3] That is, the set of all points $\mathbf {r} =x_{1}\mathbf {a} _{1}+x_{2}\mathbf {a} _{2}+x_{3}\mathbf {a} _{3}$ where $0\leq x_{i}<1$ and $\mathbf {a} _{i}$ is the chosen primitive vector. This primitive cell does not always show the clear symmetry of a given crystal. In this case, a conventional unit cell easily displaying the crystal symmetry is often used. The conventional unit cell volume will be an integer-multiple of the primitive unit cell volume.
Origin of concept
See also: Crystallographic point group
Any lattice can be specified by the length of its two primitive translation vectors and the angle between them. There are an infinite number of possible lattices one can describe in this way. Some way to categorize different types of lattices is desired. One way to do so is to recognize that some lattices have inherent symmetry. One can impose conditions on the length of the primitive translation vectors and on the angle between them to produce various symmetric lattices. These symmetries themselves are categorized into different types, such as point groups (which includes mirror symmetries, inversion symmetries and rotation symmetries) and translational symmetries. Thus, lattices can be categorized based on what point group or translational symmetry applies to them.
In two dimensions, the most basic point group corresponds to rotational invariance under 2π and π, or 1- and 2-fold rotational symmetry. This actually applies automatically to all 2D lattices, and is the most general point group. Lattices contained in this group (technically all lattices, but conventionally all lattices that don't fall into any of the other point groups) are called oblique lattices. From there, there are 4 further combinations of point groups with translational elements (or equivalently, 4 types of restriction on the lengths/angles of the primitive translation vectors) that correspond to the 4 remaining lattice categories: square, hexagonal, rectangular, and centered rectangular. Thus altogether there are 5 Bravais lattices in 2 dimensions.
Likewise, in 3 dimensions, there are 14 Bravais lattices: 1 general "wastebasket" category (triclinic) and 13 more categories. These 14 lattice types are classified by their point groups into 7 lattice systems (triclinic, monoclinic, orthorhombic, tetragonal, cubic, trigonal, and hexagonal).
In 2 dimensions
Further information: Lattice (group)
In two-dimensional space there are 5 Bravais lattices,[5] grouped into four lattice systems, shown in the table below. Below each diagram is the Pearson symbol for that Bravais lattice.
Note: In the unit cell diagrams in the following table the lattice points are depicted using black circles and the unit cells are depicted using parallelograms (which may be squares or rectangles) outlined in black. Although each of the four corners of each parallelogram connects to a lattice point, only one of the four lattice points technically belongs to a given unit cell and each of the other three lattice points belongs to one of the adjacent unit cells. This can be seen by imagining moving the unit cell parallelogram slightly left and slightly down while leaving all the black circles of the lattice points fixed.
The 5 Bravais lattices, grouped by lattice system:

Lattice system    Point group (Schönflies notation)    Primitive (p)       Centered (c)
Monoclinic (m)    C2                                   Oblique (mp)        —
Orthorhombic (o)  D2                                   Rectangular (op)    Centered rectangular (oc)
Tetragonal (t)    D4                                   Square (tp)         —
Hexagonal (h)     D6                                   Hexagonal (hp)      —
The unit cells are specified according to the relative lengths of the cell edges (a and b) and the angle between them (θ). The area of the unit cell can be calculated by evaluating the norm ‖a × b‖, where a and b are the lattice vectors. The properties of the lattice systems are given below:
Lattice system    Area                                  Axial distances (edge lengths)    Axial angle
Monoclinic        $ab\,\sin \theta $                    —                                 —
Orthorhombic      $ab$                                  —                                 θ = 90°
Tetragonal        $a^{2}$                               a = b                             θ = 90°
Hexagonal         ${\frac {\sqrt {3}}{2}}\,a^{2}$       a = b                             θ = 120°
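The area column can be checked directly, since ‖a × b‖ = |a||b| sin θ for two-dimensional lattice vectors. A short sketch (the parameter values are illustrative, not taken from any particular lattice):

```python
import math

def cell_area(a_len, b_len, theta_deg):
    """Unit-cell area ||a x b|| = |a| |b| sin(theta) for 2D lattice vectors."""
    return a_len * b_len * math.sin(math.radians(theta_deg))

print(cell_area(3.0, 4.0, 90.0))             # rectangular: ab = 12.0
print(round(cell_area(2.0, 2.0, 120.0), 4))  # hexagonal: (sqrt(3)/2)*a**2 = 3.4641
```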
In 3 dimensions
In three-dimensional space there are 14 Bravais lattices. These are obtained by combining one of the seven lattice systems with one of the centering types. The centering types identify the locations of the lattice points in the unit cell as follows:
• Primitive (P): lattice points on the cell corners only (sometimes called simple)
• Base-centered (S: A, B, or C): lattice points on the cell corners with one additional point at the center of each face of one pair of parallel faces of the cell (sometimes called end-centered)
• Body-centered (I): lattice points on the cell corners, with one additional point at the center of the cell
• Face-centered (F): lattice points on the cell corners, with one additional point at the center of each of the faces of the cell
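Applying the 1/m sharing rule from the unit-cell section to these centering types gives the number of lattice points belonging to one conventional cell. A small sketch (the helper function is ours, purely illustrative):

```python
def points_per_cell(corners=8, faces=0, body=0):
    """Lattice points belonging to one 3D unit cell: a corner point is shared
    by 8 cells (counted 1/8), a face point by 2 cells (counted 1/2), and a
    body point belongs wholly to the cell."""
    return corners / 8 + faces / 2 + body

print(points_per_cell())          # primitive (P): 1.0
print(points_per_cell(body=1))    # body-centered (I): 2.0
print(points_per_cell(faces=2))   # base-centered (S): 2.0
print(points_per_cell(faces=6))   # face-centered (F): 4.0
```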
Not all combinations of lattice systems and centering types are needed to describe all of the possible lattices, as it can be shown that several of these are in fact equivalent to each other. For example, the monoclinic I lattice can be described by a monoclinic C lattice by different choice of crystal axes. Similarly, all A- or B-centred lattices can be described either by a C- or P-centering. This reduces the number of combinations to 14 conventional Bravais lattices, shown in the table below.[6]: 744 Below each diagram is the Pearson symbol for that Bravais lattice.
Note: In the unit cell diagrams in the following table all the lattice points on the cell boundary (corners and faces) are shown; however, not all of these lattice points technically belong to the given unit cell. This can be seen by imagining moving the unit cell slightly in the negative direction of each axis while keeping the lattice points fixed. Roughly speaking, this can be thought of as moving the unit cell slightly left, slightly down, and slightly out of the screen. This shows that only one of the eight corner lattice points (specifically the front, left, bottom one) belongs to the given unit cell (the other seven lattice points belong to adjacent unit cells). In addition, only one of the two lattice points shown on the top and bottom face in the Base-centered column belongs to the given unit cell. Finally, only three of the six lattice points on the faces in the Face-centered column belongs to the given unit cell.
The 14 Bravais lattices, grouped by crystal family and lattice system:

Crystal family    Lattice system    Point group (Schönflies notation)    Primitive (P)    Base-centered (S)    Body-centered (I)    Face-centered (F)
Triclinic (a)     Triclinic         Ci                                   aP               —                    —                    —
Monoclinic (m)    Monoclinic        C2h                                  mP               mS                   —                    —
Orthorhombic (o)  Orthorhombic      D2h                                  oP               oS                   oI                   oF
Tetragonal (t)    Tetragonal        D4h                                  tP               —                    tI                   —
Hexagonal (h)     Rhombohedral      D3d                                  hR               —                    —                    —
Hexagonal (h)     Hexagonal         D6h                                  hP               —                    —                    —
Cubic (c)         Cubic             Oh                                   cP               —                    cI                   cF
The unit cells are specified according to six lattice parameters which are the relative lengths of the cell edges (a, b, c) and the angles between them (α, β, γ). The volume of the unit cell can be calculated by evaluating the triple product a · (b × c), where a, b, and c are the lattice vectors. The properties of the lattice systems are given below:
Crystal family    Lattice system    Volume    Axial distances (edge lengths)[6]: 758    Axial angles[6]    Corresponding examples
Triclinic    Triclinic    $abc{\sqrt {1-\cos ^{2}\alpha -\cos ^{2}\beta -\cos ^{2}\gamma +2\cos \alpha \cos \beta \cos \gamma }}$    —    —    K2Cr2O7, CuSO4·5H2O, H3BO3
Monoclinic    Monoclinic    $abc\,\sin \beta $    —    α = γ = 90°    Monoclinic sulphur, Na2SO4·10H2O, PbCrO3
Orthorhombic    Orthorhombic    $abc$    —    α = β = γ = 90°    Rhombic sulphur, KNO3, BaSO4
Tetragonal    Tetragonal    $a^{2}c$    a = b    α = β = γ = 90°    White tin, SnO2, TiO2, CaSO4
Hexagonal    Rhombohedral    $a^{3}{\sqrt {1-3\cos ^{2}\alpha +2\cos ^{3}\alpha }}$    a = b = c    α = β = γ    Calcite (CaCO3), cinnabar (HgS)
Hexagonal    Hexagonal    ${\frac {\sqrt {3}}{2}}\,a^{2}c$    a = b    α = β = 90°, γ = 120°    Graphite, ZnO, CdS
Cubic    Cubic    $a^{3}$    a = b = c    α = β = γ = 90°    NaCl, zinc blende, copper metal, KCl, Diamond, Silver
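The triple product a · (b × c) mentioned above reproduces the volume column. A minimal Python sketch (the lattice parameters are illustrative, not tied to any of the listed materials):

```python
import math

def cross(u, v):
    """Cross product of 3-vectors u and v."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cell_volume(a, b, c):
    """Unit-cell volume |a . (b x c)| for lattice vectors a, b, c."""
    return abs(dot(a, cross(b, c)))

# Simple cubic cell, a = 4: volume a**3 = 64
print(cell_volume((4.0, 0.0, 0.0), (0.0, 4.0, 0.0), (0.0, 0.0, 4.0)))

# Hexagonal cell, a = b = 3, gamma = 120 deg, c = 5:
# the table's formula gives (sqrt(3)/2) * a**2 * c ~ 38.97
a1 = (3.0, 0.0, 0.0)
a2 = (3.0 * math.cos(math.radians(120)), 3.0 * math.sin(math.radians(120)), 0.0)
a3 = (0.0, 0.0, 5.0)
print(round(cell_volume(a1, a2, a3), 2))
```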
Some basic information for the lattice systems and Bravais lattices in three dimensions is summarized in the diagram at the beginning of this page. The seven-sided polygon (heptagon) and the number 7 at the centre indicate the seven lattice systems. The inner heptagons indicate the lattice angles, lattice parameters, Bravais lattices and Schönflies notations for the respective lattice systems.
In 4 dimensions
In four dimensions, there are 64 Bravais lattices. Of these, 23 are primitive and 41 are centered. Ten Bravais lattices split into enantiomorphic pairs.[7]
See also
• Crystal habit
• Crystal system
• Miller index
• Reciprocal lattice
• Translation operator (quantum mechanics)
• Translational symmetry
• Zone axis
References
1. Aroyo, Mois I.; Müller, Ulrich; Wondratschek, Hans (2006). "Historical Introduction". International Tables for Crystallography. A1 (1.1): 2–5. CiteSeerX 10.1.1.471.4170. doi:10.1107/97809553602060000537. Archived from the original on 4 July 2013. Retrieved 21 April 2008.
2. "Bravais class". Online Dictionary of Crystallography. IUCr. Retrieved 8 August 2019.
3. Ashcroft, Neil; Mermin, Nathaniel (1976). Solid State Physics. Saunders College Publishing. pp. 71–72. ISBN 0030839939.
4. Peidong Yang (2016). "Materials & Solid State Chemistry (course notes)" (PDF). UC Berkeley. Chem 253.
5. Kittel, Charles (1996) [1953]. "Chapter 1". Introduction to Solid State Physics (Seventh ed.). New York: John Wiley & Sons. p. 10. ISBN 978-0-471-11181-8. Retrieved 21 April 2008.
6. Hahn, Theo, ed. (2002). International Tables for Crystallography, Volume A: Space Group Symmetry. International Tables for Crystallography. Vol. A (5th ed.). Berlin, New York: Springer-Verlag. doi:10.1107/97809553602060000100. ISBN 978-0-7923-6590-7.
7. Brown, Harold; Bülow, Rolf; Neubüser, Joachim; Wondratschek, Hans; Zassenhaus, Hans (1978), Crystallographic groups of four-dimensional space, New York: Wiley-Interscience [John Wiley & Sons], ISBN 978-0-471-03095-9, MR 0484179
Further reading
• Bravais, A. (1850). "Mémoire sur les systèmes formés par les points distribués régulièrement sur un plan ou dans l'espace" [Memoir on the systems formed by points regularly distributed on a plane or in space]. J. École Polytech. (in French). 19: 1–128. (English: Memoir 1, Crystallographic Society of America, 1949).
External links
• Catalogue of Lattices (by Nebe and Sloane)
• Smith, Walter Fox (2002). "The Bravais Lattices Song".
Continuous functions on a compact Hausdorff space
In mathematical analysis, and especially functional analysis, a fundamental role is played by the space of continuous functions on a compact Hausdorff space $X$ with values in the real or complex numbers. This space, denoted by ${\mathcal {C}}(X),$ is a vector space with respect to the pointwise addition of functions and scalar multiplication by constants. It is, moreover, a normed space with norm defined by
$\|f\|=\sup _{x\in X}|f(x)|,$
the uniform norm. The uniform norm defines the topology of uniform convergence of functions on $X.$ The space ${\mathcal {C}}(X)$ is a Banach algebra with respect to this norm. (Rudin 1973, §11.3)
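For a concrete feel for the uniform norm, the supremum can be approximated by maximising over a fine sample of the compact space. A minimal sketch on $X=[0,1]$ (the grid resolution and the test functions are illustrative assumptions, not from the source):

```python
import math

def sup_norm(f, grid):
    """Approximate the uniform norm ||f|| = sup |f(x)| by maximising
    over a finite sample of the compact space X (here a grid in [0, 1])."""
    return max(abs(f(x)) for x in grid)

grid = [k / 1000 for k in range(1001)]
f = lambda x: x * (1 - x)            # ||f|| = 1/4, attained at x = 1/2
g = lambda x: math.sin(math.pi * x)  # ||g|| = 1, attained at x = 1/2

print(sup_norm(f, grid))  # 0.25
print(sup_norm(g, grid))  # 1.0
```

For genuinely continuous functions on a compact space the grid maximum converges to the true supremum as the grid is refined; here the maximising point x = 1/2 happens to lie on the grid.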
Properties
• By Urysohn's lemma, ${\mathcal {C}}(X)$ separates points of $X$: If $x,y\in X$ are distinct points, then there is an $f\in {\mathcal {C}}(X)$ such that $f(x)\neq f(y).$
• The space ${\mathcal {C}}(X)$ is infinite-dimensional whenever $X$ is an infinite space (since it separates points). Hence, in particular, it is generally not locally compact.
• The Riesz–Markov–Kakutani representation theorem gives a characterization of the continuous dual space of ${\mathcal {C}}(X).$ Specifically, this dual space is the space of Radon measures on $X$ (regular Borel measures), denoted by $\operatorname {rca} (X).$ This space, with the norm given by the total variation of a measure, is also a Banach space belonging to the class of ba spaces. (Dunford & Schwartz 1958, §IV.6.3)
• Positive linear functionals on ${\mathcal {C}}(X)$ correspond to (positive) regular Borel measures on $X,$ by a different form of the Riesz representation theorem. (Rudin 1966, Chapter 2)
• If $X$ is infinite, then ${\mathcal {C}}(X)$ is not reflexive, nor is it weakly complete.
• The Arzelà–Ascoli theorem holds: A subset $K$ of ${\mathcal {C}}(X)$ is relatively compact if and only if it is bounded in the norm of ${\mathcal {C}}(X),$ and equicontinuous.
• The Stone–Weierstrass theorem holds for ${\mathcal {C}}(X).$ In the case of real functions, if $A$ is a subring of ${\mathcal {C}}(X)$ that contains all constants and separates points, then the closure of $A$ is ${\mathcal {C}}(X).$ In the case of complex functions, the statement holds with the additional hypothesis that $A$ is closed under complex conjugation.
• If $X$ and $Y$ are two compact Hausdorff spaces, and $F:{\mathcal {C}}(X)\to {\mathcal {C}}(Y)$ is a homomorphism of algebras which commutes with complex conjugation, then $F$ is continuous. Furthermore, $F$ has the form $F(h)(y)=h(f(y))$ for some continuous function $f:Y\to X.$ In particular, if $C(X)$ and $C(Y)$ are isomorphic as algebras, then $X$ and $Y$ are homeomorphic topological spaces.
• Let $\Delta $ be the space of maximal ideals in ${\mathcal {C}}(X).$ Then there is a one-to-one correspondence between Δ and the points of $X.$ Furthermore, $\Delta $ can be identified with the collection of all complex homomorphisms ${\mathcal {C}}(X)\to \mathbb {C} .$ Equip $\Delta $ with the initial topology with respect to this pairing with ${\mathcal {C}}(X)$ (that is, the Gelfand transform). Then $X$ is homeomorphic to Δ equipped with this topology. (Rudin 1973, §11.13)
• A sequence in ${\mathcal {C}}(X)$ is weakly Cauchy if and only if it is (uniformly) bounded in ${\mathcal {C}}(X)$ and pointwise convergent. In particular, ${\mathcal {C}}(X)$ is only weakly complete for $X$ a finite set.
• The vague topology is the weak* topology on the dual of ${\mathcal {C}}(X).$
• The Banach–Alaoglu theorem implies that any normed space is isometrically isomorphic to a subspace of $C(X)$ for some $X.$
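A classical constructive instance of the Stone–Weierstrass theorem (in its original Weierstrass form on $[0,1]$) is approximation by Bernstein polynomials. The sketch below is illustrative, not part of the original article; it shows the uniform error for $f(x)=|x-1/2|$ shrinking as the degree grows:

```python
from math import comb

def bernstein(f, n):
    """n-th Bernstein polynomial of f on [0, 1]; it converges to f
    uniformly, the classical constructive case of Weierstrass's theorem."""
    def B(x):
        return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(n + 1))
    return B

f = lambda x: abs(x - 0.5)
grid = [k / 200 for k in range(201)]
for n in (4, 16, 64):
    err = max(abs(bernstein(f, n)(x) - f(x)) for x in grid)
    print(n, round(err, 4))  # uniform error decreases as n grows
```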
Generalizations
The space $C(X)$ of real or complex-valued continuous functions can be defined on any topological space $X.$ In the non-compact case, however, $C(X)$ is not in general a Banach space with respect to the uniform norm, since it may contain unbounded functions. Hence it is more typical to consider the space, denoted here $C_{B}(X),$ of bounded continuous functions on $X.$ This is a Banach space (in fact a commutative Banach algebra with identity) with respect to the uniform norm. (Hewitt & Stromberg 1965, Theorem 7.9)
It is sometimes desirable, particularly in measure theory, to further refine this general definition by considering the special case when $X$ is a locally compact Hausdorff space. In this case, it is possible to identify a pair of distinguished subsets of $C_{B}(X)$: (Hewitt & Stromberg 1965, §II.7)
• $C_{00}(X),$ the subset of $C(X)$ consisting of functions with compact support. This is called the space of functions vanishing in a neighborhood of infinity.
• $C_{0}(X),$ the subset of $C(X)$ consisting of functions such that for every $r>0,$ there is a compact set $K\subseteq X$ such that $|f(x)|<r$ for all $x\in X\backslash K.$ This is called the space of functions vanishing at infinity.
The closure of $C_{00}(X)$ in the uniform norm is precisely $C_{0}(X).$ In particular, the latter is a Banach space.
References
• Dunford, N.; Schwartz, J.T. (1958), Linear operators, Part I, Wiley-Interscience.
• Hewitt, Edwin; Stromberg, Karl (1965), Real and abstract analysis, Springer-Verlag.
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Rudin, Walter (1966), Real and complex analysis, McGraw-Hill, ISBN 0-07-054234-1.
Space partitioning
In geometry, space partitioning is the process of dividing a space (usually a Euclidean space) into two or more disjoint subsets (see also partition of a set). In other words, space partitioning divides a space into non-overlapping regions. Any point in the space can then be identified to lie in exactly one of the regions.
Overview
Space-partitioning systems are often hierarchical, meaning that a space (or a region of space) is divided into several regions, and then the same space-partitioning system is recursively applied to each of the regions thus created. The regions can be organized into a tree, called a space-partitioning tree.
Most space-partitioning systems use planes (or, in higher dimensions, hyperplanes) to divide space: points on one side of the plane form one region, and points on the other side form another. Points exactly on the plane are usually arbitrarily assigned to one or the other side. Recursively partitioning space using planes in this way produces a BSP tree, one of the most common forms of space partitioning.
Uses
In computer graphics
Space partitioning is particularly important in computer graphics, especially in ray tracing, where it is frequently used to organize the objects in a virtual scene. A typical scene may contain millions of polygons, so performing a ray/polygon intersection test with each one would be computationally prohibitive.
Storing objects in a space-partitioning data structure (a k-d tree or BSP tree, for example) makes it easy and fast to perform certain kinds of geometry queries. For example, in determining whether a ray intersects an object, space partitioning can reduce the number of intersection tests to just a few per primary ray, yielding a logarithmic time complexity with respect to the number of polygons.[1][2][3]
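A minimal k-d tree sketch (illustrative code, not from the source) shows the key idea: a query can prune entire half-spaces that cannot contain a hit, which is what makes such geometry queries fast.

```python
def build_kd(points, depth=0):
    """Build a 2D k-d tree by recursively splitting on alternating axes."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid], "axis": axis,
        "left": build_kd(points[:mid], depth + 1),
        "right": build_kd(points[mid + 1:], depth + 1),
    }

def range_query(node, lo, hi, out):
    """Collect points inside the box [lo, hi], descending into a
    half-space only when the box actually reaches into it."""
    if node is None:
        return
    p, axis = node["point"], node["axis"]
    if all(lo[i] <= p[i] <= hi[i] for i in range(2)):
        out.append(p)
    if lo[axis] <= p[axis]:      # box reaches into the "left" half-space
        range_query(node["left"], lo, hi, out)
    if p[axis] <= hi[axis]:     # box reaches into the "right" half-space
        range_query(node["right"], lo, hi, out)

pts = [(1, 1), (2, 5), (4, 3), (6, 2), (7, 7), (8, 4)]
tree = build_kd(pts)
hits = []
range_query(tree, (2, 1), (7, 5), hits)
print(sorted(hits))  # the points with 2 <= x <= 7 and 1 <= y <= 5
```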
Space partitioning is also often used in scanline algorithms to eliminate polygons outside the camera's viewing frustum, limiting the number of polygons processed by the pipeline. It is also used in collision detection: determining whether two objects are close to each other can be much faster using space partitioning.
In integrated circuit design
In integrated circuit design, an important step is design rule check, which ensures that the completed design is manufacturable. The check involves rules that specify widths, spacings, and other geometric patterns. A modern design can have billions of polygons that represent wires and transistors, so efficient checking relies heavily on geometry queries. For example, a rule may specify that any polygon must be at least n nanometers from any other polygon. This is converted into a geometry query by enlarging each polygon by n/2 on all sides and querying for intersecting polygons.
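The enlarge-and-intersect idea can be sketched with axis-aligned rectangles standing in for general polygons (a simplifying assumption; real DRC tools handle arbitrary geometry and use spatial indexes rather than all-pairs tests):

```python
def inflate(rect, margin):
    """Grow an axis-aligned rectangle (x0, y0, x1, y1) by `margin` on all sides."""
    x0, y0, x1, y1 = rect
    return (x0 - margin, y0 - margin, x1 + margin, y1 + margin)

def overlaps(a, b):
    """Strict overlap test; shapes exactly min_spacing apart just touch and pass."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def spacing_violations(rects, min_spacing):
    """Find index pairs closer than min_spacing: inflate each rectangle
    by min_spacing / 2 and look for intersections, as described above."""
    grown = [inflate(r, min_spacing / 2) for r in rects]
    return [(i, j) for i in range(len(rects)) for j in range(i + 1, len(rects))
            if overlaps(grown[i], grown[j])]

wires = [(0, 0, 10, 2), (0, 5, 10, 7), (0, 7.5, 10, 9)]
print(spacing_violations(wires, 2))  # only the 2nd and 3rd wires are < 2 apart
```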
In probability and statistical learning theory
The number of components in a space partition plays a central role in some results in probability theory. See Growth function for more details.
In geography and GIS
There are many studies and applications where geographical space is partitioned by hydrological, administrative, mathematical, or other criteria.
In cartography and GIS (geographic information systems), it is common to identify the cells of a partition by standard codes: for example, the HUC code identifying hydrographical basins and sub-basins, ISO 3166-2 codes identifying countries and their subdivisions, or arbitrary DGGs (discrete global grids) identifying quadrants or locations.
Data structures
Common space-partitioning systems include:
• BSP trees
• Quadtrees
• Octrees
• k-d trees
• Bins
• R-trees
Number of components
Suppose the n-dimensional Euclidean space is partitioned by $r$ hyperplanes that are $(n-1)$-dimensional. What is the number of components in the partition? The largest number of components is attained when the hyperplanes are in general position, i.e., no two are parallel and no three have the same intersection. Denote this maximum number of components by $Comp(n,r)$. Then the following recurrence relation holds:[4][5]
$Comp(n,r)=Comp(n,r-1)+Comp(n-1,r-1)$
$Comp(0,r)=1$ (a zero-dimensional space is a single point);
$Comp(n,0)=1$ (with no hyperplanes, the whole space is a single component).
And its solution is:
$Comp(n,r)=\sum _{k=0}^{n}{r \choose k}$ if $r\geq n$
$Comp(n,r)=2^{r}$ if $r\leq n$
(consider, e.g., $r$ mutually perpendicular hyperplanes; each additional hyperplane divides each existing component in two),
which is upper-bounded as:
$Comp(n,r)\leq r^{n}+1$
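The recurrence and the closed form can be checked directly against each other; a short sketch:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def components(n, r):
    """Maximum number of regions cut out of n-dimensional space by r
    hyperplanes in general position, via the recurrence above."""
    if n == 0 or r == 0:
        return 1
    return components(n, r - 1) + components(n - 1, r - 1)

# Agrees with the closed form sum_{k=0}^{min(n,r)} C(r, k),
# which equals 2^r when r <= n:
print(components(2, 4), sum(comb(4, k) for k in range(3)))  # 11 11
print(components(3, 3))                                     # 8 = 2^3
```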
See also
• Binary space partitioning
• Discrete global grid
• Polygon partition
• Tessellation
References
1. Tomas Nikodym (2010). "Ray Tracing Algorithm For Interactive Applications" (PDF). Czech Technical University, FEE.
2. Ingo Wald, William R. Mark; et al. (2007). "State of the Art in Ray Tracing Animated Scenes". Eurographics. CiteSeerX 10.1.1.108.8495.
3. Ray Tracing - Auxiliary Data Structures
4. Vapnik, V. N.; Chervonenkis, A. Ya. (1971). "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities". Theory of Probability & Its Applications. 16 (2): 266. doi:10.1137/1116025. This is an English translation, by B. Seckler, of the Russian paper: "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities". Dokl. Akad. Nauk. 181 (4): 781. 1968. The translation was reproduced as: Vapnik, V. N.; Chervonenkis, A. Ya. (2015). "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities". Measures of Complexity. p. 11. doi:10.1007/978-3-319-21852-6_3. ISBN 978-3-319-21851-9.
5. See also detailed discussions and explanations on the case n=2 and the general case. See also Winder, R. O. (1966). "Partitions of N-Space by Hyperplanes". SIAM Journal on Applied Mathematics. 14 (4): 811–818. doi:10.1137/0114068..
Skew polygon
In geometry, a skew polygon is a polygon whose vertices are not all coplanar.[1] Skew polygons must have at least four vertices. The interior surface (or area) of such a polygon is not uniquely defined.
Skew infinite polygons (apeirogons) have vertices which are not all collinear.
A zig-zag skew polygon or antiprismatic polygon[2] has vertices which alternate on two parallel planes, and thus must be even-sided.
Regular skew polygons in 3 dimensions (and regular skew apeirogons in two dimensions) are always zig-zag.
Antiprismatic skew polygon in three dimensions
A regular skew polygon is isogonal with equal edge lengths. In 3 dimensions a regular skew polygon is a zig-zag skew (or antiprismatic) polygon, with vertices alternating between two parallel planes. The side edges of an n-antiprism can define a regular skew 2n-gon.
A regular skew n-gon can be given a Schläfli symbol {p}#{ } as a blend of a regular polygon {p} and an orthogonal line segment { }.[3] The symmetry operation between sequential vertices is glide reflection.
Examples are shown on the uniform square and pentagon antiprisms. The star antiprisms also generate regular skew polygons with different connection order of the top and bottom polygons. The filled top and bottom polygons are drawn for structural clarity, and are not part of the skew polygons.
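The side-edge construction from an n-antiprism can be checked numerically. In this sketch (the unit circumradius and half-height h = 0.5 are arbitrary illustrative choices) the skew 2n-gon is built directly and verified to have equal edge lengths and vertices on exactly two parallel planes:

```python
import math

def skew_2n_gon(n, h=0.5):
    """Vertices of the regular skew 2n-gon traced by the side edges of an
    n-antiprism: they alternate between two parallel circles, the top one
    rotated by pi/n relative to the bottom."""
    verts = []
    for k in range(2 * n):
        angle = k * math.pi / n           # half the angular step of the base n-gon
        z = h if k % 2 else -h            # zig-zag between the two planes
        verts.append((math.cos(angle), math.sin(angle), z))
    return verts

v = skew_2n_gon(4)  # skew octagon, from a square antiprism
edges = [math.dist(v[i], v[(i + 1) % 8]) for i in range(8)]
print(all(abs(e - edges[0]) < 1e-12 for e in edges))  # True: equal edge lengths
print(len({round(z, 9) for _, _, z in v}))            # 2: two parallel planes
```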
Regular zig-zag skew polygons
| Polygon | Schläfli symbol | Symmetry |
|---|---|---|
| Skew square | {2}#{ } | s{2,4} |
| Skew hexagon | {3}#{ } | s{2,6} |
| Skew octagon | {4}#{ } | s{2,8} |
| Skew decagon | {5}#{ } | s{2,10} |
| Skew decagon | {5/2}#{ } | sr{2,5/2} |
| Skew decagon | {5/3}#{ } | s{2,10/3} |
| Skew dodecagon | {6}#{ } | s{2,12} |
A regular compound skew 2n-gon can be similarly constructed by adding a second skew polygon by a rotation. These share the same vertices as the prismatic compound of antiprisms.
Regular compounds of zig-zag skew polygons
• Skew squares: two {2}#{ }; three {2}#{ }
• Skew hexagons: two {3}#{ }
• Skew decagons: two {5/3}#{ }
Petrie polygons are regular skew polygons defined within regular polyhedra and polytopes. For example, the five Platonic solids have 4-, 6-, and 10-sided regular skew polygons, as seen in these orthogonal projections with red edges around their respective projective envelopes. The tetrahedron and the octahedron include all the vertices in their respective zig-zag skew polygons, and can be seen as a digonal antiprism and a triangular antiprism respectively.
Regular skew polygon as vertex figure of regular skew polyhedron
A regular skew polyhedron has regular polygon faces, and a regular skew polygon vertex figure.
Three infinite regular skew polyhedra are space-filling in 3-space; others exist in 4-space, some within the uniform 4-polytopes.
Skew vertex figures of the 3 infinite regular skew polyhedra
• {4,6|4}: regular skew hexagon {3}#{ }
• {6,4|4}: regular skew square {2}#{ }
• {6,6|3}: regular skew hexagon {3}#{ }
Isogonal skew polygons in three dimensions
An isogonal skew polygon is a skew polygon with one type of vertex, connected by two types of edges. Isogonal skew polygons with equal edge lengths can also be considered quasiregular. It is similar to a zig-zag skew polygon, existing on two planes, except that one edge is allowed to cross to the opposite plane while the other edge stays on the same plane.
Isogonal skew polygons can be defined on even-sided n-gonal prisms, alternately following an edge of one side polygon and moving between polygons. For example, on the vertices of a cube, vertices alternate between the top and bottom squares, with red edges between sides and blue edges along each side.
Examples: isogonal skew octagons on the cube (one following square diagonals, one on the cube itself, and one on a crossed cube); skew dodecagons on the hexagonal prism; and a skew icosikaitetragon on a twisted prism.
Regular skew polygons in four dimensions
In 4 dimensions, a regular skew polygon can have vertices on a Clifford torus and related by a Clifford displacement. Unlike zig-zag skew polygons, skew polygons on double rotations can include an odd number of sides.
The Petrie polygons of the regular 4-polytopes define regular zig-zag skew polygons. The Coxeter number of each Coxeter group symmetry expresses how many sides a Petrie polygon has: 5 sides for the 5-cell, 8 sides for the tesseract and 16-cell, 12 sides for the 24-cell, and 30 sides for the 120-cell and 600-cell.
When orthogonally projected onto the Coxeter plane, these regular skew polygons appear as regular polygon envelopes in the plane.
| Coxeter group | Petrie polygon | Regular 4-polytopes |
|---|---|---|
| A4, [3,3,3] | Pentagon | 5-cell {3,3,3} |
| B4, [4,3,3] | Octagon | tesseract {4,3,3}, 16-cell {3,3,4} |
| F4, [3,4,3] | Dodecagon | 24-cell {3,4,3} |
| H4, [5,3,3] | Triacontagon | 120-cell {5,3,3}, 600-cell {3,3,5} |
The n-n duoprisms and dual duopyramids also have 2n-gonal Petrie polygons. (The tesseract is a 4-4 duoprism, and the 16-cell is a 4-4 duopyramid.)
| Petrie polygon | Duoprism and duopyramid |
|---|---|
| Hexagon | 3-3 duoprism, 3-3 duopyramid |
| Decagon | 5-5 duoprism, 5-5 duopyramid |
| Dodecagon | 6-6 duoprism, 6-6 duopyramid |
See also
• Petrie polygon
• Quadrilateral#Skew quadrilaterals
• Regular skew polyhedron
• Skew apeirohedron (infinite skew polyhedron)
• Skew lines
Citations
1. Coxeter 1973, §1.1 Regular polygons; "If the vertices are all coplanar, we speak of a plane polygon, otherwise a skew polygon."
2. Regular complex polytopes, p. 6
3. Abstract Regular Polytopes, p. 217
References
• McMullen, Peter; Schulte, Egon (December 2002), Abstract Regular Polytopes (1st ed.), Cambridge University Press, ISBN 0-521-81496-0 p. 25
• Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. ISBN 0-486-23729-X. "Skew Polygons (Saddle Polygons)" §2.2
• Coxeter, H.S.M. (1973) [1948]. Regular Polytopes (3rd ed.). New York: Dover.
• Coxeter, H.S.M. (1974). Regular Complex Polytopes. Chapter 1. Regular polygons, 1.5. Regular polygons in n dimensions, 1.7. Zigzag and antiprismatic polygons, 1.8. Helical polygons. 4.3. Flags and Orthoschemes, 11.3. Petrie polygons
• Coxeter, H. S. M. Petrie Polygons. Regular Polytopes, 3rd ed. New York: Dover, 1973. (sec 2.6 Petrie Polygons pp. 24–25, and Chapter 12, pp. 213–235, The generalized Petrie polygon)
• Coxeter, H. S. M. & Moser, W. O. J. (1980). Generators and Relations for Discrete Groups. New York: Springer-Verlag. ISBN 0-387-09212-9. (1st ed, 1957) 5.2 The Petrie polygon {p,q}.
• John Milnor: On the total curvature of knots, Ann. Math. 52 (1950) 248–257.
• J.M. Sullivan: Curves of finite total curvature, arXiv:math/0606007v2
External links
• Weisstein, Eric W. "Skew polygon". MathWorld.
• Weisstein, Eric W. "Petrie polygon". MathWorld.
Spacetime
In physics, spacetime is any mathematical model that fuses the three dimensions of space and the one dimension of time into a single four-dimensional continuum. Spacetime diagrams are useful in visualizing and understanding relativistic effects such as how different observers perceive where and when events occur.
Until the turn of the 20th century, the assumption had been that the three-dimensional geometry of the universe (its description in terms of locations, shapes, distances, and directions) was distinct from time (the measurement of when events occur within the universe). However, space and time took on new meanings with the Lorentz transformation and special theory of relativity.
In 1908, Hermann Minkowski presented a geometric interpretation of special relativity that fused time and the three spatial dimensions of space into a single four-dimensional continuum now known as Minkowski space. This interpretation proved vital to the general theory of relativity, wherein spacetime is curved by mass and energy.
Fundamentals
Definitions
Non-relativistic classical mechanics treats time as a universal quantity of measurement which is uniform throughout space, and separate from space. Classical mechanics assumes that time has a constant rate of passage, independent of the observer's state of motion, or anything external.[1] Furthermore, it assumes that space is Euclidean; it assumes that space follows the geometry of common sense.[2]
In the context of special relativity, time cannot be separated from the three dimensions of space, because the observed rate at which time passes for an object depends on the object's velocity relative to the observer.[3]: 214–217 General relativity also provides an explanation of how gravitational fields can slow the passage of time for an object as seen by an observer outside the field.
In ordinary space, a position is specified by three numbers, known as dimensions. In the Cartesian coordinate system, these are called x, y, and z. A position in spacetime is called an event, and requires four numbers to be specified: the three-dimensional location in space, plus the position in time (Fig. 1). An event is represented by a set of coordinates x, y, z and t. Spacetime is thus four dimensional.
Unlike the analogies used in popular writings to explain events, such as firecrackers or sparks, mathematical events have zero duration and represent a single point in spacetime.[4] Although it is possible to be in motion relative to the popping of a firecracker or a spark, it is not possible for an observer to be in motion relative to an event.
The path of a particle through spacetime can be considered to be a succession of events. The series of events can be linked together to form a line which represents a particle's progress through spacetime. That line is called the particle's world line.[5]: 105
Mathematically, spacetime is a manifold, which is to say, it appears locally "flat" near each point in the same way that, at small enough scales, a globe appears flat.[6] A scale factor, $c$ (conventionally called the speed of light), relates distances measured in space with distances measured in time. The magnitude of this scale factor (nearly 300,000 kilometres or 190,000 miles in space being equivalent to one second in time), along with the fact that spacetime is a manifold, implies that at ordinary, non-relativistic speeds and at ordinary, human-scale distances, there is little that humans might observe which is noticeably different from what they might observe if the world were Euclidean. It was only with the advent of sensitive scientific measurements in the mid-1800s, such as the Fizeau experiment and the Michelson–Morley experiment, that puzzling discrepancies began to be noted between observation versus predictions based on the implicit assumption of Euclidean space.[7]
In special relativity, an observer will, in most cases, mean a frame of reference from which a set of objects or events is being measured. This usage differs significantly from the ordinary English meaning of the term. Reference frames are inherently nonlocal constructs, and according to this usage of the term, it does not make sense to speak of an observer as having a location. In Fig. 1-1, imagine that the frame under consideration is equipped with a dense lattice of clocks, synchronized within this reference frame, that extends indefinitely throughout the three dimensions of space. Any specific location within the lattice is not important. The latticework of clocks is used to determine the time and position of events taking place within the whole frame. The term observer refers to the entire ensemble of clocks associated with one inertial frame of reference.[8]: 17–22 In this idealized case, every point in space has a clock associated with it, and thus the clocks register each event instantly, with no time delay between an event and its recording. A real observer, however, will see a delay between the emission of a signal and its detection due to the finite speed of light. To synchronize the clocks, in the data reduction following an experiment, the time at which a signal is received is corrected to reflect the time at which it would have been recorded by an idealized lattice of clocks.
In many books on special relativity, especially older ones, the word "observer" is used in the more ordinary sense of the word. It is usually clear from context which meaning has been adopted.
Physicists distinguish between what one measures or observes (after one has factored out signal propagation delays), versus what one visually sees without such corrections. Failure to understand the difference between what one measures/observes versus what one sees is the source of much error among beginning students of relativity.[9]
History
Figure 1-2. Michelson and Morley expected that motion through the aether would cause a differential phase shift between light traversing the two arms of their apparatus. The most logical explanation of their negative result, aether dragging, was in conflict with the observation of stellar aberration.
By the mid-1800s, various experiments such as the observation of the Arago spot and differential measurements of the speed of light in air versus water were considered to have proven the wave nature of light as opposed to a corpuscular theory.[10] Propagation of waves was then assumed to require the existence of a waving medium; in the case of light waves, this was considered to be a hypothetical luminiferous aether.[note 1] However, the various attempts to establish the properties of this hypothetical medium yielded contradictory results. For example, the Fizeau experiment of 1851, conducted by French physicist Hippolyte Fizeau, demonstrated that the speed of light in flowing water was less than the sum of the speed of light in air plus the speed of the water by an amount dependent on the water's index of refraction.[11] Among other issues, the dependence of the partial aether-dragging implied by this experiment on the index of refraction (which is dependent on wavelength) led to the unpalatable conclusion that aether simultaneously flows at different speeds for different colors of light.[12] The famous Michelson–Morley experiment of 1887 (Fig. 1-2) showed no differential influence of Earth's motions through the hypothetical aether on the speed of light, and the most likely explanation, complete aether dragging, was in conflict with the observation of stellar aberration.[7]
George Francis FitzGerald in 1889,[13] and Hendrik Lorentz in 1892, independently proposed that material bodies traveling through the fixed aether were physically affected by their passage, contracting in the direction of motion by an amount that was exactly what was necessary to explain the negative results of the Michelson–Morley experiment. (No length changes occur in directions transverse to the direction of motion.)
By 1904, Lorentz had expanded his theory such that he had arrived at equations formally identical with those that Einstein was to derive later (i.e. the Lorentz transformation).[14] As a theory of dynamics (the study of forces and torques and their effect on motion), his theory assumed actual physical deformations of the physical constituents of matter.[15]: 163–174 Lorentz's equations predicted a quantity that he called local time, with which he could explain the aberration of light, the Fizeau experiment and other phenomena.
Hendrik Lorentz
Henri Poincaré
Albert Einstein
Hermann Minkowski
Figure 1-3.
Henri Poincaré was the first to combine space and time into spacetime.[16][17]: 73–80, 93–95 He argued in 1898 that the simultaneity of two events is a matter of convention.[18][note 2] In 1900, he recognized that Lorentz's "local time" is actually what is indicated by moving clocks by applying an explicitly operational definition of clock synchronization assuming constant light speed.[note 3] In 1900 and 1904, he suggested the inherent undetectability of the aether by emphasizing the validity of what he called the principle of relativity, and in 1905/1906[19] he mathematically perfected Lorentz's theory of electrons in order to bring it into accordance with the postulate of relativity. While discussing various hypotheses on Lorentz invariant gravitation, he introduced the innovative concept of a 4-dimensional spacetime by defining various four vectors, namely four-position, four-velocity, and four-force.[20][21] He did not pursue the 4-dimensional formalism in subsequent papers, however, stating that this line of research seemed to "entail great pain for limited profit", ultimately concluding "that three-dimensional language seems the best suited to the description of our world".[21] Furthermore, even as late as 1909, Poincaré continued to describe the dynamical interpretation of the Lorentz transform.[15]: 163–174
In 1905, Albert Einstein analyzed special relativity in terms of kinematics (the study of moving bodies without reference to forces) rather than dynamics. His results were mathematically equivalent to those of Lorentz and Poincaré. He obtained them by recognizing that the entire theory can be built upon two postulates: The principle of relativity and the principle of the constancy of light speed. His work was filled with vivid imagery involving the exchange of light signals between clocks in motion, careful measurements of the lengths of moving rods, and other such examples.[22][note 4]
In addition, Einstein in 1905 superseded previous attempts of an electromagnetic mass–energy relation by introducing the general equivalence of mass and energy, which was instrumental for his subsequent formulation of the equivalence principle in 1907, which declares the equivalence of inertial and gravitational mass. By using the mass–energy equivalence, Einstein showed, in addition, that the gravitational mass of a body is proportional to its energy content, which was one of the early results in developing general relativity. While it would appear that he did not at first think geometrically about spacetime,[3]: 219 in the further development of general relativity Einstein fully incorporated the spacetime formalism.
When Einstein published in 1905, another of his competitors, his former mathematics professor Hermann Minkowski, had also arrived at most of the basic elements of special relativity. Max Born recounted a meeting he had with Minkowski, seeking to be Minkowski's student/collaborator:[24]
I went to Cologne, met Minkowski and heard his celebrated lecture 'Space and Time' delivered on 2 September 1908. [...] He told me later that it came to him as a great shock when Einstein published his paper in which the equivalence of the different local times of observers moving relative to each other was pronounced; for he had reached the same conclusions independently but did not publish them because he wished first to work out the mathematical structure in all its splendor. He never made a priority claim and always gave Einstein his full share in the great discovery.
Minkowski had been concerned with the state of electrodynamics after Michelson's disruptive experiments at least since the summer of 1905, when Minkowski and David Hilbert led an advanced seminar attended by notable physicists of the time to study the papers of Lorentz, Poincaré et al. Minkowski saw Einstein's work as an extension of Lorentz's, and was most directly influenced by Poincaré.[25]
On 5 November 1907 (a little more than a year before his death), Minkowski introduced his geometric interpretation of spacetime in a lecture to the Göttingen Mathematical society with the title, The Relativity Principle (Das Relativitätsprinzip).[note 5] On 21 September 1908, Minkowski presented his famous talk, Space and Time (Raum und Zeit),[26] to the German Society of Scientists and Physicians. The opening words of Space and Time include Minkowski's famous statement that "Henceforth, space for itself, and time for itself shall completely reduce to a mere shadow, and only some sort of union of the two shall preserve independence." Space and Time included the first public presentation of spacetime diagrams (Fig. 1-4), and included a remarkable demonstration that the concept of the invariant interval (discussed below), along with the empirical observation that the speed of light is finite, allows derivation of the entirety of special relativity.[note 6]
The spacetime concept and the Lorentz group are closely connected to certain types of sphere, hyperbolic, or conformal geometries and their transformation groups already developed in the 19th century, in which invariant intervals analogous to the spacetime interval are used.[note 7]
Einstein, for his part, was initially dismissive of Minkowski's geometric interpretation of special relativity, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness). However, in order to complete his search for general relativity that started in 1907, the geometric interpretation of relativity proved to be vital, and in 1916, Einstein fully acknowledged his indebtedness to Minkowski, whose interpretation greatly facilitated the transition to general relativity.[15]: 151–152 Since there are other types of spacetime, such as the curved spacetime of general relativity, the spacetime of special relativity is today known as Minkowski spacetime.
Spacetime in special relativity
Spacetime interval
In three dimensions, the distance $\Delta {d}$ between two points can be defined using the Pythagorean theorem:
$(\Delta {d})^{2}=(\Delta {x})^{2}+(\Delta {y})^{2}+(\Delta {z})^{2}$
Although two viewers may measure the x, y, and z position of the two points using different coordinate systems, the distance between the points will be the same for both (assuming that they are measuring using the same units). The distance is "invariant".
In special relativity, however, the distance between two points is no longer the same if measured by two different observers when one of the observers is moving, because of Lorentz contraction. The situation is even more complicated if the two points are separated in time as well as in space. For example, if one observer sees two events occur at the same place, but at different times, a person moving with respect to the first observer will see the two events occurring at different places, because (from their point of view) they are stationary, and the position of the event is receding or approaching. Thus, a different measure must be used to measure the effective "distance" between two events.
In four-dimensional spacetime, the analog to distance is the interval. Although time comes in as a fourth dimension, it is treated differently than the spatial dimensions. Minkowski space hence differs in important respects from four-dimensional Euclidean space. The fundamental reason for merging space and time into spacetime is that space and time are separately not invariant, which is to say that, under the proper conditions, different observers will disagree on the length of time between two events (because of time dilation) or the distance between the two events (because of length contraction). But special relativity provides a new invariant, called the spacetime interval, which combines distances in space and in time. All observers who measure the time and distance between any two events will end up computing the same spacetime interval. Suppose an observer measures two events as being separated in time by $\Delta t$ and a spatial distance $\Delta x.$ Then the spacetime interval $(\Delta {s})^{2}$ between the two events that are separated by a distance $\Delta {x}$ in space and by $\Delta {ct}=c\Delta t$ in the $ct$-coordinate is:
$(\Delta s)^{2}=(\Delta ct)^{2}-(\Delta x)^{2},$
or for three space dimensions,
$(\Delta s)^{2}=(\Delta ct)^{2}-(\Delta x)^{2}-(\Delta y)^{2}-(\Delta z)^{2}.$[30]
The constant $c,$ the speed of light, converts time units (like seconds) into space units (like meters). The squared interval $\Delta s^{2}$ is a measure of separation between events A and B that are time separated and in addition space separated either because there are two separate objects undergoing events, or because a single object in space is moving inertially between its events. The separation interval is derived by squaring the spatial distance separating event B from event A and subtracting it from the square of the spatial distance traveled by a light signal in that same time interval $\Delta t$. If the event separation is due to a light signal, then this difference vanishes and $\Delta s=0$.
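A hedged sketch of these definitions (the function name is illustrative, not from the source): the squared interval can be computed directly, and a light signal yields a vanishing interval, as stated above.

```python
def interval_squared(dt_seconds, dx, dy=0.0, dz=0.0, c=299_792_458.0):
    """Squared spacetime interval (signature +---) between two events.

    dt_seconds is the time separation in seconds; dx, dy, dz are in meters.
    """
    return (c * dt_seconds) ** 2 - dx ** 2 - dy ** 2 - dz ** 2

# A light signal covering 299,792,458 m in one second: the interval vanishes.
print(interval_squared(1.0, 299_792_458.0))  # 0.0
```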
When the two events considered are infinitesimally close to each other, we may write
$ds^{2}=c^{2}dt^{2}-dx^{2}-dy^{2}-dz^{2}.$
In a different inertial frame, say with coordinates $(t',x',y',z')$, the spacetime interval $ds'$ can be written in the same form as above. Because of the constancy of the speed of light, light events in all inertial frames have zero interval, $ds=ds'=0$. For any other infinitesimal event where $ds\neq 0$, one can prove that $ds^{2}=ds'^{2}$, which in turn upon integration leads to $s=s'$.[31]: 2 The invariance of the interval between any two events across all inertial frames of reference is one of the fundamental results of the special theory of relativity.
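The invariance $ds^{2}=ds'^{2}$ can be checked numerically. The following sketch (with an illustrative `boost` helper, in units where $c=1$) applies Lorentz boosts of several speeds and verifies that the interval is unchanged:

```python
import math

def boost(ct, x, beta):
    """Lorentz boost along x by speed v = beta*c, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * (ct - beta * x), gamma * (x - beta * ct)

ct, x = 5.0, 2.0
s2 = ct ** 2 - x ** 2               # interval in the unprimed frame
for beta in (0.1, 0.5, 0.9):
    ct_p, x_p = boost(ct, x, beta)
    assert math.isclose(ct_p ** 2 - x_p ** 2, s2)  # same interval in primed frame
print("interval s^2 =", s2, "in every frame")
```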
Although for brevity, one frequently sees interval expressions expressed without deltas, including in most of the following discussion, it should be understood that in general, $x$ means $\Delta {x}$, etc. We are always concerned with differences of spatial or temporal coordinate values belonging to two events, and since there is no preferred origin, single coordinate values have no essential meaning.
The equation above is similar to the Pythagorean theorem, except with a minus sign between the $(ct)^{2}$ and the $x^{2}$ terms. The spacetime interval is the quantity $s^{2},$ not $s$ itself. The reason is that unlike distances in Euclidean geometry, intervals in Minkowski spacetime can be negative. Rather than deal with square roots of negative numbers, physicists customarily regard $s^{2}$ as a distinct symbol in itself, rather than the square of something.[3]: 217
In general $s^{2}$ can assume any real value. If $s^{2}$ is positive, the spacetime interval is referred to as timelike. Since the spatial distance traversed by any massive object is always less than the distance traveled by light in the same time interval, intervals along the world line of a massive object are always timelike. If $s^{2}$ is negative, the spacetime interval is said to be spacelike, and $s$ itself is imaginary. Spacetime intervals are equal to zero when $x=\pm ct.$ In other words, the spacetime interval between two events on the world line of something moving at the speed of light is zero. Such an interval is termed lightlike or null. A photon arriving in our eye from a distant star will not have aged, despite having (from our perspective) spent years in its passage.
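The three cases can be made concrete with a small classifier (illustrative code, not from the source; units where $c=1$, so time is measured in meters):

```python
def classify(ct, x, tol=1e-12):
    """Classify the separation of an event (ct, x) from the origin, with c = 1."""
    s2 = ct ** 2 - x ** 2
    if s2 > tol:
        return "timelike"
    if s2 < -tol:
        return "spacelike"
    return "lightlike"

print(classify(2.0, 1.0))   # timelike: reachable slower than light
print(classify(1.0, 2.0))   # spacelike: would require faster-than-light travel
print(classify(1.5, 1.5))   # lightlike: on the light cone, x = ±ct
```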
A spacetime diagram is typically drawn with only a single space and a single time coordinate. Fig. 2-1 presents a spacetime diagram illustrating the world lines (i.e. paths in spacetime) of two photons, A and B, originating from the same event and going in opposite directions. In addition, C illustrates the world line of a slower-than-light-speed object. The vertical time coordinate is scaled by $c$ so that it has the same units (meters) as the horizontal space coordinate. Since photons travel at the speed of light, their world lines have a slope of ±1. In other words, every meter that a photon travels to the left or right requires approximately 3.3 nanoseconds of time.
There are two sign conventions in use in the relativity literature:
$s^{2}=(ct)^{2}-x^{2}-y^{2}-z^{2}$
and
$s^{2}=-(ct)^{2}+x^{2}+y^{2}+z^{2}$
These sign conventions are associated with the metric signatures (+−−−) and (−+++). A minor variation is to place the time coordinate last rather than first. Both conventions are widely used within the field of study.
Reference frames
To gain insight in how spacetime coordinates measured by observers in different reference frames compare with each other, it is useful to work with a simplified setup with frames in a standard configuration. With care, this allows simplification of the math with no loss of generality in the conclusions that are reached. In Fig. 2-2, two Galilean reference frames (i.e. conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame S′ (pronounced "S prime") belongs to a second observer O′.
• The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′.
• Frame S′ moves in the x-direction of frame S with a constant velocity v as measured in frame S.
• The origins of frames S and S′ are coincident when time t = 0 for frame S and t′ = 0 for frame S′.[5]: 107
Fig. 2-3a redraws Fig. 2-2 in a different orientation. Fig. 2-3b illustrates a spacetime diagram from the viewpoint of observer O. Since S and S′ are in standard configuration, their origins coincide at times t = 0 in frame S and t′ = 0 in frame S′. The ct′ axis passes through the events in frame S′ which have x′ = 0. But the points with x′ = 0 are moving in the x-direction of frame S with velocity v, so that they are not coincident with the ct axis at any time other than zero. Therefore, the ct′ axis is tilted with respect to the ct axis by an angle θ given by
$\tan(\theta )=v/c.$
The x′ axis is also tilted with respect to the x axis. To determine the angle of this tilt, we recall that the slope of the world line of a light pulse is always ±1. Fig. 2-3c presents a spacetime diagram from the viewpoint of observer O′. Event P represents the emission of a light pulse at x′ = 0, ct′ = −a. The pulse is reflected from a mirror situated a distance a from the light source (event Q), and returns to the light source at x′ = 0, ct′ = a (event R).
The same events P, Q, R are plotted in Fig. 2-3b in the frame of observer O. The light paths have slopes = 1 and −1, so that △PQR forms a right triangle with PQ and QR both at 45 degrees to the x and ct axes. Since OP = OQ = OR, the angle between x′ and x must also be θ.[5]: 113–118
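A quick numeric sketch of the tilt (illustrative, not from the source): the angle $\theta =\arctan(v/c)$ applies to both the ct′ and x′ axes, and both approach the 45° light line as $v\to c$.

```python
import math

for beta in (0.3, 0.5, 0.9, 0.99):        # v/c
    theta = math.degrees(math.atan(beta))  # tilt of the ct' (and x') axis
    print(f"v = {beta}c: axes tilted {theta:.1f} deg toward the 45 deg light line")
```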
While the rest frame has space and time axes that meet at right angles, the moving frame is drawn with axes that meet at an acute angle. The frames are actually equivalent. The asymmetry is due to unavoidable distortions in how spacetime coordinates can map onto a Cartesian plane, and should be considered no stranger than the manner in which, on a Mercator projection of the Earth, the relative sizes of land masses near the poles (Greenland and Antarctica) are highly exaggerated relative to land masses near the Equator.
Light cone
In Fig. 2–4, event O is at the origin of a spacetime diagram, and the two diagonal lines represent all events that have zero spacetime interval with respect to the origin event. These two lines form what is called the light cone of the event O, since adding a second spatial dimension (Fig. 2-5) makes the appearance that of two right circular cones meeting with their apices at O. One cone extends into the future (t>0), the other into the past (t<0).
A light (double) cone divides spacetime into separate regions with respect to its apex. The interior of the future light cone consists of all events that are separated from the apex by more time (temporal distance) than necessary to cross their spatial distance at lightspeed; these events comprise the timelike future of the event O. Likewise, the timelike past comprises the interior events of the past light cone. For timelike intervals, Δct is greater than Δx, making the squared interval positive. The region exterior to the light cone consists of events that are separated from the event O by more space than can be crossed at lightspeed in the given time. These events comprise the so-called spacelike region of the event O, denoted "Elsewhere" in Fig. 2-4. Events on the light cone itself are said to be lightlike (or null separated) from O. Because of the invariance of the spacetime interval, all observers will assign the same light cone to any given event, and thus will agree on this division of spacetime.[3]: 220
The light cone has an essential role within the concept of causality. It is possible for a not-faster-than-light-speed signal to travel from the position and time of O to the position and time of D (Fig. 2-4). It is hence possible for event O to have a causal influence on event D. The future light cone contains all the events that could be causally influenced by O. Likewise, it is possible for a not-faster-than-light-speed signal to travel from the position and time of A to the position and time of O. The past light cone contains all the events that could have a causal influence on O. In contrast, assuming that signals cannot travel faster than the speed of light, any event, such as B or C, in the spacelike region (Elsewhere) can neither affect event O nor be affected by event O through such signalling. Under this assumption, any causal relationship between event O and any events in the spacelike region of a light cone is excluded.[32]
Relativity of simultaneity
All observers will agree that for any given event, an event within the given event's future light cone occurs after the given event. Likewise, for any given event, an event within the given event's past light cone occurs before the given event. The before–after relationship observed for timelike-separated events remains unchanged no matter what the reference frame of the observer, i.e. no matter how the observer may be moving. The situation is quite different for spacelike-separated events. Fig. 2-4 was drawn from the reference frame of an observer moving at v = 0. From this reference frame, event C is observed to occur after event O, and event B is observed to occur before event O. From a different reference frame, the orderings of these non-causally-related events can be reversed. In particular, one notes that if two events are simultaneous in a particular reference frame, they are necessarily separated by a spacelike interval and thus are noncausally related. The observation that simultaneity is not absolute, but depends on the observer's reference frame, is termed the relativity of simultaneity.[33]
Fig. 2-6 illustrates the use of spacetime diagrams in the analysis of the relativity of simultaneity. The events in spacetime are invariant, but the coordinate frames transform as discussed above for Fig. 2-3. The three events (A, B, C) are simultaneous from the reference frame of an observer moving at v = 0. From the reference frame of an observer moving at v = 0.3c, the events appear to occur in the order C, B, A. From the reference frame of an observer moving at v = −0.5c, the events appear to occur in the order A, B, C. The white line represents a plane of simultaneity being moved from the past of the observer to the future of the observer, highlighting events residing on it. The gray area is the light cone of the observer, which remains invariant.
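The reordering described for Fig. 2-6 can be reproduced with the Lorentz transformation of the time coordinate, $ct'=\gamma (ct-\beta x)$. The event positions below are assumed for illustration, since the figure's exact coordinates are not given in the text:

```python
import math

def ct_prime(ct, x, beta):
    """Time coordinate of an event in a frame moving at v = beta*c (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * (ct - beta * x)

# Three events simultaneous (ct = 0) in frame S, at assumed positions x = -1, 0, +1.
events = {"A": -1.0, "B": 0.0, "C": +1.0}

for beta in (0.3, -0.5):
    order = sorted(events, key=lambda name: ct_prime(0.0, events[name], beta))
    print(f"v = {beta:+.1f}c: temporal order {order}")
# v = +0.3c gives C, B, A; v = -0.5c gives A, B, C, matching the text.
```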
A spacelike spacetime interval gives the same distance that an observer would measure if the events being measured were simultaneous to the observer. A spacelike spacetime interval hence provides a measure of proper distance, i.e. the true distance = ${\sqrt {-s^{2}}}.$ Likewise, a timelike spacetime interval gives the same measure of time as would be presented by the cumulative ticking of a clock that moves along a given world line. A timelike spacetime interval hence provides a measure of the proper time = ${\sqrt {s^{2}}}.$[3]: 220–221
Invariant hyperbola
In Euclidean space (having spatial dimensions only), the set of points equidistant (using the Euclidean metric) from some point form a circle (in two dimensions) or a sphere (in three dimensions). In (1+1)-dimensional Minkowski spacetime (having one temporal and one spatial dimension), the points at some constant spacetime interval away from the origin (using the Minkowski metric) form curves given by the two equations
$(ct)^{2}-x^{2}=\pm s^{2},$
with $s^{2}$ some positive real constant. These equations describe two families of hyperbolae in an x–ct spacetime diagram, which are termed invariant hyperbolae.
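Points on one branch of a timelike invariant hyperbola can be generated with hyperbolic functions (a sketch, not from the source): since $\cosh ^{2}\varphi -\sinh ^{2}\varphi =1$, the parameterization $ct=s\cosh \varphi$, $x=s\sinh \varphi$ satisfies $(ct)^{2}-x^{2}=s^{2}$ for every $\varphi$.

```python
import math

s = 2.0                                   # fixed timelike interval
for phi in (-1.5, -0.5, 0.0, 0.5, 1.5):   # rapidity-like parameter
    ct, x = s * math.cosh(phi), s * math.sinh(phi)
    assert math.isclose(ct ** 2 - x ** 2, s ** 2)
print("every point lies on the hyperbola (ct)^2 - x^2 = s^2")
```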
In Fig. 2-7a, each magenta hyperbola connects all events having some fixed spacelike separation from the origin, while the green hyperbolae connect events of equal timelike separation.
The magenta hyperbolae, which cross the x axis, are timelike curves, which is to say that these hyperbolae represent actual paths that can be traversed by (constantly accelerating) particles in spacetime: Between any two events on one hyperbola a causality relation is possible, because the inverse of the slope—representing the necessary speed—for all secants is less than $c$. On the other hand, the green hyperbolae, which cross the ct axis, are spacelike curves because all intervals along these hyperbolae are spacelike intervals: No causality is possible between any two points on one of these hyperbolae, because all secants represent speeds larger than $c$.
Fig. 2-7b reflects the situation in (1+2)-dimensional Minkowski spacetime (one temporal and two spatial dimensions) with the corresponding hyperboloids. The invariant hyperbolae displaced by spacelike intervals from the origin generate hyperboloids of one sheet, while the invariant hyperbolae displaced by timelike intervals from the origin generate hyperboloids of two sheets.
The (1+2)-dimensional boundary between space- and timelike hyperboloids, established by the events forming a zero spacetime interval to the origin, is made up by degenerating the hyperboloids to the light cone. In (1+1)-dimensions the hyperbolae degenerate to the two grey 45°-lines depicted in Fig. 2-7a.
Time dilation and length contraction
Fig. 2-8 illustrates the invariant hyperbola for all events that can be reached from the origin in a proper time of 5 meters (approximately 1.67×10−8 s). Different world lines represent clocks moving at different speeds. A clock that is stationary with respect to the observer has a world line that is vertical, and the elapsed time measured by the observer is the same as the proper time. For a clock traveling at 0.3 c, the elapsed time measured by the observer is 5.24 meters (1.75×10−8 s), while for a clock traveling at 0.7 c, the elapsed time measured by the observer is 7.00 meters (2.34×10−8 s). This illustrates the phenomenon known as time dilation. Clocks that travel faster take longer (in the observer frame) to tick out the same amount of proper time, and they travel further along the x–axis within that proper time than they would have without time dilation.[3]: 220–221 The measurement of time dilation by two observers in different inertial reference frames is mutual. If observer O measures the clocks of observer O′ as running slower in his frame, observer O′ in turn will measure the clocks of observer O as running slower.
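The figures quoted above follow from the time-dilation factor $\gamma =1/{\sqrt {1-v^{2}/c^{2}}}$; a sketch reproducing them (the function name is illustrative, not from the source):

```python
import math

def coordinate_time(proper_time, beta):
    """Elapsed coordinate time for a clock moving at v = beta*c,
    in the same units as proper_time (here, meters of ct)."""
    return proper_time / math.sqrt(1.0 - beta ** 2)

tau = 5.0  # meters of proper time, as in Fig. 2-8
for beta in (0.0, 0.3, 0.7):
    print(f"v = {beta}c: observer measures {coordinate_time(tau, beta):.2f} m")
# v = 0.3c gives 5.24 m and v = 0.7c gives 7.00 m, matching the text.
```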
Length contraction, like time dilation, is a manifestation of the relativity of simultaneity. Measurement of length requires measurement of the spacetime interval between two events that are simultaneous in one's frame of reference. But events that are simultaneous in one frame of reference are, in general, not simultaneous in other frames of reference.
Fig. 2-9 illustrates the motions of a 1 m rod that is traveling at 0.5 c along the x axis. The edges of the blue band represent the world lines of the rod's two endpoints. The invariant hyperbola illustrates events separated from the origin by a spacelike interval of 1 m. The endpoints O and B measured when t′ = 0 are simultaneous events in the S′ frame. But to an observer in frame S, events O and B are not simultaneous. To measure length, the observer in frame S measures the endpoints of the rod as projected onto the x-axis along their world lines. The projection of the rod's world sheet onto the x axis yields the foreshortened length OC.[5]: 125
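The foreshortened length OC in Fig. 2-9 corresponds to the length-contraction factor ${\sqrt {1-v^{2}/c^{2}}}$; a sketch for the 1 m rod at 0.5 c (the helper name is illustrative):

```python
import math

def contracted_length(rest_length, beta):
    """Measured length of a rod with the given rest length, moving at v = beta*c."""
    return rest_length * math.sqrt(1.0 - beta ** 2)

print(f"1 m rod at 0.5c measures {contracted_length(1.0, 0.5):.3f} m")
# about 0.866 m
```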
(not illustrated) Drawing a vertical line through A so that it intersects the x′ axis demonstrates that, even as OB is foreshortened from the point of view of observer O, OA is likewise foreshortened from the point of view of observer O′. In the same way that each observer measures the other's clocks as running slow, each observer measures the other's rulers as being contracted.
In regards to mutual length contraction, Fig. 2-9 illustrates that the primed and unprimed frames are mutually rotated by a hyperbolic angle (analogous to ordinary angles in Euclidean geometry).[note 8] Because of this rotation, the projection of a primed meter-stick onto the unprimed x-axis is foreshortened, while the projection of an unprimed meter-stick onto the primed x′-axis is likewise foreshortened.
Mutual time dilation
Mutual time dilation and length contraction tend to strike beginners as inherently self-contradictory concepts. If an observer in frame S measures a clock, at rest in frame S′, as running slower than his own, while S′ is moving at speed v in S, then the principle of relativity requires that an observer in frame S′ likewise measures a clock in frame S, moving at speed −v in S′, as running slower than hers. How two clocks can each run slower than the other is an important question that "goes to the heart of understanding special relativity."[3]: 198
This apparent contradiction stems from not correctly taking into account the different settings of the necessary, related measurements; once these settings are accounted for, the apparent contradiction is resolved. It is not about the abstract ticking of two identical clocks, but about how to measure, in one frame, the temporal distance between two ticks of a moving clock. It turns out that in mutually observing the duration between ticks of clocks, each moving in the respective frame, different sets of clocks must be involved. In order to measure in frame S the tick duration of a moving clock W′ (at rest in S′), one uses two additional, synchronized clocks W1 and W2, at rest at two arbitrarily fixed points in S with spatial distance d.
Two events can be defined by the condition "two clocks are simultaneously at one place", i.e., when W′ passes each W1 and W2. For both events the two readings of the collocated clocks are recorded. The difference of the two readings of W1 and W2 is the temporal distance of the two events in S, and their spatial distance is d. The difference of the two readings of W′ is the temporal distance of the two events in S′. In S′ these events are only separated in time, they happen at the same place in S′. Because of the invariance of the spacetime interval spanned by these two events, and the nonzero spatial separation d in S, the temporal distance in S′ must be smaller than the one in S: the smaller temporal distance between the two events, resulting from the readings of the moving clock W′, belongs to the slower running clock W′.
Conversely, for judging in frame S′ the temporal distance of two events on a moving clock W (at rest in S), one needs two clocks at rest in S′.
In this comparison the clock W is moving past with velocity −v. Recording again the four readings for the events, defined by "two clocks simultaneously at one place", results in the analogous temporal distances of the two events, now temporally and spatially separated in S′, and only temporally separated but collocated in S. To keep the spacetime interval invariant, the temporal distance in S must be smaller than in S′, because of the spatial separation of the events in S′: now clock W is observed to run slower.
The necessary recordings for the two judgements, with "one moving clock" and "two clocks at rest" in respectively S or S′, involve two different sets, each with three clocks. Since different sets of clocks are involved in the measurements, there is no inherent necessity that the measurements be reciprocally "consistent" such that, if one observer measures the moving clock to be slow, the other observer necessarily measures the first observer's clock to be fast.[3]: 198–199
Figure 2-10. Mutual time dilation
Fig. 2-10 illustrates the previous discussion of mutual time dilation with Minkowski diagrams. The upper picture reflects the measurements as seen from frame S "at rest" with unprimed, rectangular axes, and frame S′ "moving with v > 0", coordinatized by primed, oblique axes, slanted to the right; the lower picture shows frame S′ "at rest" with primed, rectangular coordinates, and frame S "moving with −v < 0", with unprimed, oblique axes, slanted to the left.
Each line drawn parallel to a spatial axis (x, x′) represents a line of simultaneity. All events on such a line have the same time value (ct, ct′). Likewise, each line drawn parallel to a temporal axis (ct, ct′) represents a line of equal spatial coordinate values (x, x′).
One may designate in both pictures the origin O (= O′) as the event where the respective "moving clock" is collocated with the "first clock at rest" in both comparisons. Obviously, for this event the readings on both clocks in both comparisons are zero. As a consequence, the worldlines of the moving clocks are the ct′-axis slanted to the right (upper picture, clock W′) and the ct-axis slanted to the left (lower picture, clock W). The worldlines of W1 and W′1 are the corresponding vertical time axes (ct in the upper picture, and ct′ in the lower picture).
In the upper picture the place for W2 is taken to be Ax > 0, and thus the worldline (not shown in the pictures) of this clock intersects the worldline of the moving clock (the ct′-axis) in the event labelled A, where "two clocks are simultaneously at one place". In the lower picture the place for W′2 is taken to be Cx′ < 0, and so in this measurement the moving clock W passes W′2 in the event C.
In the upper picture the ct-coordinate At of the event A (the reading of W2) is labeled B, thus giving the elapsed time between the two events, measured with W1 and W2, as OB. For a comparison, the length of the time interval OA, measured with W′, must be transformed to the scale of the ct-axis. This is done by the invariant hyperbola (see also Fig. 2-8) through A, connecting all events with the same spacetime interval from the origin as A. This yields the event C on the ct-axis, and obviously: OC < OB, the "moving" clock W′ runs slower.
To show the mutual time dilation immediately in the upper picture, the event D may be constructed as the event at x′ = 0 (the location of clock W′ in S′), that is simultaneous to C (OC has equal spacetime interval as OA) in S′. This shows that the time interval OD is longer than OA, showing that the "moving" clock runs slower.[5]: 124
In the lower picture the frame S is moving with velocity −v in the frame S′ at rest. The worldline of clock W is the ct-axis (slanted to the left), the worldline of W′1 is the vertical ct′-axis, and the worldline of W′2 is the vertical through event C, with ct′-coordinate D. The invariant hyperbola through event C scales the time interval OC to OA, which is shorter than OD; also, B is constructed (similar to D in the upper pictures) as simultaneous to A in S, at x = 0. The result OB > OC corresponds again to above.
The word "measure" is important. In classical physics an observer cannot affect an observed object, but the object's state of motion can affect the observer's observations of the object.
Twin paradox
Many introductions to special relativity illustrate the differences between Galilean relativity and special relativity by posing a series of "paradoxes". These paradoxes are, in fact, ill-posed problems, resulting from our unfamiliarity with velocities comparable to the speed of light. The remedy is to solve many problems in special relativity and to become familiar with its so-called counter-intuitive predictions. The geometrical approach to studying spacetime is considered one of the best methods for developing a modern intuition.[34]
The twin paradox is a thought experiment involving identical twins, one of whom makes a journey into space in a high-speed rocket, returning home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin observes the other twin as moving, and so at first glance, it would appear that each should find the other to have aged less. The twin paradox sidesteps the justification for mutual time dilation presented above by avoiding the requirement for a third clock.[3]: 207 Nevertheless, the twin paradox is not a true paradox because it is easily understood within the context of special relativity.
The impression that a paradox exists stems from a misunderstanding of what special relativity states. Special relativity does not declare all frames of reference to be equivalent, only inertial frames. The traveling twin's frame is not inertial during periods when she is accelerating. Furthermore, the difference between the twins is observationally detectable: the traveling twin needs to fire her rockets to be able to return home, while the stay-at-home twin does not.[35][note 9]
These distinctions should result in a difference in the twins' ages. The spacetime diagram of Fig. 2-11 presents the simple case of a twin going straight out along the x axis and immediately turning back. From the standpoint of the stay-at-home twin, there is nothing puzzling about the twin paradox at all. The proper time measured along the traveling twin's world line from O to C, plus the proper time measured from C to B, is less than the stay-at-home twin's proper time measured from O to A to B. More complex trajectories require integrating the proper time between the respective events along the curve (i.e. the path integral) to calculate the total amount of proper time experienced by the traveling twin.[35]
Complications arise if the twin paradox is analyzed from the traveling twin's point of view.
Weiss's nomenclature, designating the stay-at-home twin as Terence and the traveling twin as Stella, is hereafter used.[35]
Stella is not in an inertial frame. Given this fact, it is sometimes incorrectly stated that full resolution of the twin paradox requires general relativity.[35]
A pure SR analysis would be as follows: Analyzed in Stella's rest frame, she is motionless for the entire trip. When she fires her rockets for the turnaround, she experiences a pseudo force which resembles a gravitational force.[35] Figs. 2-6 and 2-11 illustrate the concept of lines (planes) of simultaneity: Lines parallel to the observer's x-axis (xy-plane) represent sets of events that are simultaneous in the observer frame. In Fig. 2-11, the blue lines connect events on Terence's world line which, from Stella's point of view, are simultaneous with events on her world line. (Terence, in turn, would observe a set of horizontal lines of simultaneity.) Throughout both the outbound and the inbound legs of Stella's journey, she measures Terence's clocks as running slower than her own. But during the turnaround (i.e. between the bold blue lines in the figure), a shift takes place in the angle of her lines of simultaneity, corresponding to a rapid skip-over of the events in Terence's world line that Stella considers to be simultaneous with her own. Therefore, at the end of her trip, Stella finds that Terence has aged more than she has.[35]
Although general relativity is not required to analyze the twin paradox, application of the Equivalence Principle of general relativity does provide some additional insight into the subject. Stella is not stationary in an inertial frame. Analyzed in Stella's rest frame, she is motionless for the entire trip. When she is coasting her rest frame is inertial, and Terence's clock will appear to run slow. But when she fires her rockets for the turnaround, her rest frame is an accelerated frame and she experiences a force which is pushing her as if she were in a gravitational field. Terence will appear to be high up in that field and because of gravitational time dilation, his clock will appear to run fast, so much so that the net result will be that Terence has aged more than Stella when they are back together.[35] The theoretical arguments predicting gravitational time dilation are not exclusive to general relativity. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence, including Newton's theory.[3]: 16
Gravitation
This introductory section has focused on the spacetime of special relativity, since it is the easiest to describe. Minkowski spacetime is flat, takes no account of gravity, is uniform throughout, and serves as nothing more than a static background for the events that take place in it. The presence of gravity greatly complicates the description of spacetime. In general relativity, spacetime is no longer a static background, but actively interacts with the physical systems that it contains. Spacetime curves in the presence of matter, can propagate waves, bends light, and exhibits a host of other phenomena.[3]: 221 A few of these phenomena are described in the later sections of this article.
Basic mathematics of spacetime
Galilean transformations
A basic goal is to be able to compare measurements made by observers in relative motion. Suppose an observer O in frame S has measured the time and space coordinates of an event, assigning this event three Cartesian coordinates and the time as measured on his lattice of synchronized clocks (x, y, z, t) (see Fig. 1-1). A second observer O′ in a different frame S′ measures the same event in her coordinate system and her lattice of synchronized clocks (x′, y′, z′, t′). With inertial frames, neither observer is under acceleration, and a simple set of equations allows us to relate coordinates (x, y, z, t) to (x′, y′, z′, t′). Given that the two coordinate systems are in standard configuration, meaning that they are aligned with parallel (x, y, z) coordinates and that t = 0 when t′ = 0, the coordinate transformation is as follows:[36][37]
$x'=x-vt$
$y'=y$
$z'=z$
$t'=t.$
Fig. 3-1 illustrates that in Newton's theory, time is universal, not the velocity of light.[38]: 36–37 Consider the following thought experiment: The red arrow illustrates a train that is moving at 0.4 c with respect to the platform. Within the train, a passenger shoots a bullet with a speed of 0.4 c in the frame of the train. The blue arrow illustrates that a person standing on the train tracks measures the bullet as traveling at 0.8 c. This is in accordance with our naive expectations.
More generally, assume that frame S′ is moving at velocity v with respect to frame S, and that within frame S′, observer O′ measures an object moving with velocity u′. Its velocity u with respect to frame S follows from x = ut, x′ = x − vt, and t = t′: we can write x′ = ut − vt = (u − v)t = (u − v)t′. Since u′ = x′/t′, this leads to
$u'=u-v$ or $u=u'+v,$
which is the common-sense Galilean law for the addition of velocities.
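As an illustrative numeric sketch (not part of the cited sources), the Galilean rule reproduces the train-and-bullet example of Fig. 3-1:

```python
# Galilean velocity addition: u = u' + v (the classical, low-speed rule).
c = 299_792_458.0  # speed of light, m/s

def galilean_add(u_prime, v):
    """Velocity u in frame S of an object moving at u' in frame S',
    where S' moves at velocity v relative to S."""
    return u_prime + v

# Train moving at 0.4 c; bullet fired at 0.4 c in the train frame:
u = galilean_add(0.4 * c, 0.4 * c)
print(u / c)  # ~0.8, the naive expectation of Fig. 3-1
```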
Relativistic composition of velocities
The composition of velocities is quite different in relativistic spacetime. To reduce the complexity of the equations slightly, we introduce a common shorthand for the ratio of the speed of an object relative to light,
$\beta =v/c$
Fig. 3-2a illustrates a red train that is moving forward at a speed given by v/c = β = s/a. From the primed frame of the train, a passenger shoots a bullet with a speed given by u′/c = β′ = n/m, where the distance is measured along a line parallel to the red x′ axis rather than parallel to the black x axis. What is the composite velocity u of the bullet relative to the platform, as represented by the blue arrow? Referring to Fig. 3-2b:
1. From the platform, the composite speed of the bullet is given by u = c(s + r)/(a + b).
2. The two yellow triangles are similar because they are right triangles that share a common angle α. In the large yellow triangle, the ratio s/a = v/c = β.
3. The ratios of corresponding sides of the two yellow triangles are constant, so that r/a = b/s = n/m = β′. So b = u′s/c and r = u′a/c.
4. Substitute the expressions for b and r into the expression for u in step 1 to yield Einstein's formula for the addition of velocities:[38]: 42–48
$u={v+u' \over 1+(vu'/c^{2})}.$
The relativistic formula for addition of velocities presented above exhibits several important features:
• If u′ and v are both very small compared with the speed of light, then the product vu′/c2 becomes vanishingly small, and the overall result becomes indistinguishable from the Galilean formula (Newton's formula) for the addition of velocities: u = u′ + v. The Galilean formula is a special case of the relativistic formula applicable to low velocities.
• If u′ is set equal to c, then the formula yields u = c regardless of the starting value of v. The velocity of light is the same for all observers regardless their motions relative to the emitting source.[38]: 49
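Both features can be verified numerically. The following is an illustrative sketch, not from the cited sources:

```python
# Einstein velocity addition: u = (v + u') / (1 + v*u'/c**2).
c = 299_792_458.0  # speed of light, m/s

def relativistic_add(u_prime, v):
    """Composite velocity u in frame S of an object moving at u' in S'."""
    return (v + u_prime) / (1.0 + v * u_prime / c**2)

# The train-and-bullet example: 0.4 c "plus" 0.4 c is less than 0.8 c.
print(relativistic_add(0.4 * c, 0.4 * c) / c)  # ~0.6897

# Setting u' = c returns c regardless of v: light speed is invariant.
print(relativistic_add(c, 0.9 * c) / c)        # 1.0

# At everyday speeds the result is indistinguishable from Galilean addition.
print(relativistic_add(10.0, 5.0))             # ~15.0
```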
Time dilation and length contraction revisited
It is straightforward to obtain quantitative expressions for time dilation and length contraction. Fig. 3-3 is a composite image containing individual frames taken from two previous animations, simplified and relabeled for the purposes of this section.
To reduce the complexity of the equations slightly, there are a variety of different shorthand notations for ct:
$\mathrm {T} =ct$ and $w=ct$ are common.
One also sees very frequently the use of the convention $c=1.$
In Fig. 3-3a, segments OA and OK represent equal spacetime intervals. Time dilation is represented by the ratio OB/OK. The invariant hyperbola has the equation $w={\sqrt {x^{2}+k^{2}}}$ where k = OK, and the red line representing the world line of a particle in motion has the equation w = x/β = xc/v. A bit of algebraic manipulation yields $ OB=OK/{\sqrt {1-v^{2}/c^{2}}}.$
The expression involving the square root symbol appears very frequently in relativity, and one over the expression is called the Lorentz factor, denoted by the Greek letter gamma $\gamma $:[39]
$\gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}={\frac {1}{\sqrt {1-\beta ^{2}}}}$
If v is greater than or equal to c, the expression for $\gamma $ becomes physically meaningless, implying that c is the maximum possible speed in nature. For any v greater than zero, the Lorentz factor will be greater than one, although the shape of the curve is such that for low speeds, the Lorentz factor is extremely close to one.
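The behavior of the Lorentz factor can be sketched numerically (an illustrative example, not from the cited sources):

```python
import math

def lorentz_gamma(beta):
    """Lorentz factor gamma = 1/sqrt(1 - beta**2) for beta = v/c, |beta| < 1."""
    if not abs(beta) < 1.0:
        raise ValueError("the Lorentz factor is defined only for |v| < c")
    return 1.0 / math.sqrt(1.0 - beta**2)

print(lorentz_gamma(0.0))    # 1.0 exactly
print(lorentz_gamma(1e-3))   # ~1.0000005: negligible at everyday speeds
print(lorentz_gamma(0.5))    # ~1.1547
print(lorentz_gamma(0.99))   # ~7.089: grows without bound as beta -> 1
```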
In Fig. 3-3b, segments OA and OK represent equal spacetime intervals. Length contraction is represented by the ratio OB/OK. The invariant hyperbola has the equation $x={\sqrt {w^{2}+k^{2}}}$, where k = OK, and the edges of the blue band representing the world lines of the endpoints of a rod in motion have slope 1/β = c/v. Event A has coordinates (x, w) = (γk, γβk). Since the tangent line through A and B has the equation w = (x − OB)/β, we have γβk = (γk − OB)/β and
$OB/OK=\gamma (1-\beta ^{2})={\frac {1}{\gamma }}$
Lorentz transformations
The Galilean transformations and their consequent commonsense law of addition of velocities work well in our ordinary low-speed world of planes, cars and balls. Beginning in the mid-1800s, however, sensitive scientific instrumentation began finding anomalies that did not fit well with the ordinary addition of velocities.
Lorentz transformations are used to transform the coordinates of an event from one frame to another in special relativity.
The Lorentz factor appears in the Lorentz transformations:
${\begin{aligned}t'&=\gamma \left(t-{\frac {vx}{c^{2}}}\right)\\x'&=\gamma \left(x-vt\right)\\y'&=y\\z'&=z\end{aligned}}$
The inverse Lorentz transformations are:
${\begin{aligned}t&=\gamma \left(t'+{\frac {vx'}{c^{2}}}\right)\\x&=\gamma \left(x'+vt'\right)\\y&=y'\\z&=z'\end{aligned}}$
When v ≪ c and x is small enough, the $v^{2}/c^{2}$ and $vx/c^{2}$ terms approach zero, and the Lorentz transformations approximate to the Galilean transformations.
$t'=\gamma (t-vx/c^{2}),$ $x'=\gamma (x-vt)$ etc., most often really mean $\Delta t'=\gamma (\Delta t-v\Delta x/c^{2}),$ $\Delta x'=\gamma (\Delta x-v\Delta t)$ etc. Although for brevity the Lorentz transformation equations are written without deltas, x means Δx, etc. We are, in general, always concerned with the space and time differences between events.
Calling one set of transformations the normal Lorentz transformations and the other the inverse transformations is misleading, since there is no intrinsic difference between the frames. Different authors call one or the other set of transformations the "inverse" set. The forwards and inverse transformations are trivially related to each other, since the S frame can only be moving forwards or in reverse with respect to S′. So inverting the equations simply entails switching the primed and unprimed variables and replacing v with −v.[40]: 71–79
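This symmetry between the two transformation sets can be checked numerically. The following is an illustrative sketch in units where c = 1:

```python
import math

def lorentz(t, x, v, c=1.0):
    """Transform event (t, x) from frame S into frame S' moving at velocity v."""
    g = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return g * (t - v * x / c**2), g * (x - v * t)

def lorentz_inverse(t_p, x_p, v, c=1.0):
    """The inverse transform: swap primed/unprimed variables, replace v with -v."""
    return lorentz(t_p, x_p, -v, c)

t, x, v = 600.0, 300.0, 0.5
t_p, x_p = lorentz(t, x, v)
t2, x2 = lorentz_inverse(t_p, x_p, v)
print(abs(t2 - t) < 1e-9 and abs(x2 - x) < 1e-9)  # True: round trip recovers the event
```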
Example: Terence and Stella are at an Earth-to-Mars space race. Terence is an official at the starting line, while Stella is a participant. At time t = t′ = 0, Stella's spaceship accelerates instantaneously to a speed of 0.5 c. The distance from Earth to Mars is 300 light-seconds (about 90.0×10⁶ km). Terence observes Stella crossing the finish-line clock at t = 600.00 s. But Stella observes the time on her ship chronometer to be $t^{\prime }=\gamma \left(t-vx/c^{2}\right)=519.62\ {\text{s}}$ as she passes the finish line, and she calculates the distance between the starting and finish lines, as measured in her frame, to be 259.81 light-seconds (about 77.9×10⁶ km).
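The numbers in this example follow directly from the transformation, as an illustrative check in units of seconds and light-seconds (c = 1):

```python
import math

c = 1.0              # one light-second per second
v = 0.5 * c          # Stella's cruising speed
t, x = 600.0, 300.0  # the finish-line event in Terence's frame

gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
t_prime = gamma * (t - v * x / c**2)  # Stella's chronometer at the finish line
d_prime = x / gamma                   # length-contracted race distance in her frame

print(round(t_prime, 2))  # 519.62 s
print(round(d_prime, 2))  # 259.81 light-seconds
```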
Deriving the Lorentz transformations
There have been many dozens of derivations of the Lorentz transformations since Einstein's original work in 1905, each with its particular focus. Although Einstein's derivation was based on the invariance of the speed of light, there are other physical principles that may serve as starting points. Ultimately, these alternative starting points can be considered different expressions of the underlying principle of locality, which states that the influence that one particle exerts on another can not be transmitted instantaneously.[41]
The derivation given here and illustrated in Fig. 3-5 is based on one presented by Bais[38]: 64–66 and makes use of previous results from the Relativistic Composition of Velocities, Time Dilation, and Length Contraction sections. Event P has coordinates (w, x) in the black "rest system" and coordinates (w′, x′) in the red frame that is moving with velocity parameter β = v/c. To determine w′ and x′ in terms of w and x (or the other way around) it is easier at first to derive the inverse Lorentz transformation.
1. There can be no such thing as length expansion/contraction in the transverse directions. y′ must equal y and z′ must equal z, otherwise whether a fast moving 1 m ball could fit through a 1 m circular hole would depend on the observer. The first postulate of relativity states that all inertial frames are equivalent, and transverse expansion/contraction would violate this law.[40]: 27–28
2. From the drawing, w = a + b and x = r + s
3. From previous results using similar triangles, we know that s/a = b/r = v/c = β.
4. Because of time dilation, a = γw′
5. Substituting equation (4) into s/a = β yields s = γw′β.
6. Length contraction and similar triangles give us r = γx′ and b = βr = βγx′
7. Substituting the expressions for s, a, r and b into the equations in Step 2 immediately yield
$w=\gamma w'+\beta \gamma x'$
$x=\gamma x'+\beta \gamma w'$
The above equations are alternate expressions for the t and x equations of the inverse Lorentz transformation, as can be seen by substituting ct for w, ct′ for w′, and v/c for β. From the inverse transformation, the equations of the forwards transformation can be derived by solving for t′ and x′.
Linearity of the Lorentz transformations
The Lorentz transformations have a mathematical property called linearity, since x′ and t′ are obtained as linear combinations of x and t, with no higher powers involved. The linearity of the transformation reflects a fundamental property of spacetime that was tacitly assumed in the derivation, namely, that the properties of inertial frames of reference are independent of location and time. In the absence of gravity, spacetime looks the same everywhere.[38]: 67 All inertial observers will agree on what constitutes accelerating and non-accelerating motion.[40]: 72–73 Any one observer can use her own measurements of space and time, but there is nothing absolute about them. Another observer's conventions will do just as well.[3]: 190
A result of linearity is that if two Lorentz transformations are applied sequentially, the result is also a Lorentz transformation.
Example: Terence observes Stella speeding away from him at 0.500 c, and he can use the Lorentz transformations with β = 0.500 to relate Stella's measurements to his own. Stella, in her frame, observes Ursula traveling away from her at 0.250 c, and she can use the Lorentz transformations with β = 0.250 to relate Ursula's measurements with her own. Because of the linearity of the transformations and the relativistic composition of velocities, Terence can use the Lorentz transformations with β = 0.666 to relate Ursula's measurements with his own.
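The composed velocity parameter in this example can be checked with the relativistic addition formula (an illustrative sketch):

```python
def compose_beta(beta1, beta2):
    """Velocity parameter of the composition of two collinear boosts:
    (beta1 + beta2) / (1 + beta1*beta2), the addition formula with c = 1."""
    return (beta1 + beta2) / (1.0 + beta1 * beta2)

# Terence -> Stella at 0.500 c, Stella -> Ursula at 0.250 c:
print(compose_beta(0.500, 0.250))  # ~0.667, Terence -> Ursula
```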
Doppler effect
The Doppler effect is the change in frequency or wavelength of a wave for a receiver and source in relative motion. For simplicity, we consider here two basic scenarios: (1) The motions of the source and/or receiver are exactly along the line connecting them (longitudinal Doppler effect), and (2) the motions are at right angles to that line (transverse Doppler effect). We ignore scenarios where the motions lie along intermediate angles.
Longitudinal Doppler effect
The classical Doppler analysis deals with waves that are propagating in a medium, such as sound waves or water ripples, and which are transmitted between sources and receivers that are moving towards or away from each other. The analysis of such waves depends on whether the source, the receiver, or both are moving relative to the medium. Given the scenario where the receiver is stationary with respect to the medium, and the source is moving directly away from the receiver at a speed of vs for a velocity parameter of βs, the wavelength is increased, and the observed frequency f is given by
$f={\frac {1}{1+\beta _{s}}}f_{0}$
On the other hand, given the scenario where source is stationary, and the receiver is moving directly away from the source at a speed of vr for a velocity parameter of βr, the wavelength is not changed, but the transmission velocity of the waves relative to the receiver is decreased, and the observed frequency f is given by
$f=(1-\beta _{r})f_{0}$
Light, unlike sound or water ripples, does not propagate through a medium, and there is no distinction between a source moving away from the receiver or a receiver moving away from the source. Fig. 3-6 illustrates a relativistic spacetime diagram showing a source separating from the receiver with a velocity parameter β, so that the separation between source and receiver at time w is βw. Because of time dilation, $w=\gamma w^{\prime }$. Since the slope of the green light ray is −1, $T=w+\beta w=\gamma w^{\prime }(1+\beta )$. Hence, the relativistic Doppler effect is given by[38]: 58–59
$f={\sqrt {\frac {1-\beta }{1+\beta }}}\,f_{0}.$
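As an illustrative check of this formula (Python; β > 0 means the source recedes):

```python
import math

def doppler_shift(f0, beta):
    """Relativistic longitudinal Doppler: received frequency for a source
    receding with velocity parameter beta (use beta < 0 for approach)."""
    return math.sqrt((1.0 - beta) / (1.0 + beta)) * f0

print(doppler_shift(1.0, 0.5))   # ~0.577: recession redshifts
print(doppler_shift(1.0, -0.5))  # ~1.732: approach blueshifts
print(doppler_shift(1.0, 0.0))   # 1.0: no relative motion, no shift
```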
Transverse Doppler effect
Suppose that a source and a receiver, both approaching each other in uniform inertial motion along non-intersecting lines, are at their closest approach to each other. It would appear that the classical analysis predicts that the receiver detects no Doppler shift. Due to subtleties in the analysis, that expectation is not necessarily true. Nevertheless, when appropriately defined, transverse Doppler shift is a relativistic effect that has no classical analog. The subtleties are these:[42]: 541–543
• Fig. 3-7a. What is the frequency measurement when the receiver is geometrically at its closest approach to the source? This scenario is most easily analyzed from the frame S' of the source.[note 10]
• Fig. 3-7b. What is the frequency measurement when the receiver sees the source as being closest to it? This scenario is most easily analyzed from the frame S of the receiver.
Two other scenarios are commonly examined in discussions of transverse Doppler shift:
• Fig. 3-7c. If the receiver is moving in a circle around the source, what frequency does the receiver measure?
• Fig. 3-7d. If the source is moving in a circle around the receiver, what frequency does the receiver measure?
In scenario (a), the point of closest approach is frame-independent and represents the moment where there is no change in distance versus time (i.e. dr/dt = 0 where r is the distance between receiver and source) and hence no longitudinal Doppler shift. The source observes the receiver as being illuminated by light of frequency f′, but also observes the receiver as having a time-dilated clock. In frame S, the receiver is therefore illuminated by blueshifted light of frequency
$f=f'\gamma =f'/{\sqrt {1-\beta ^{2}}}$
In scenario (b) the illustration shows the receiver being illuminated by light from when the source was closest to the receiver, even though the source has moved on. Because the source's clocks are time dilated as measured in frame S, and since dr/dt was equal to zero at this point, the light from the source, emitted from this closest point, is redshifted with frequency
$f=f'/\gamma =f'{\sqrt {1-\beta ^{2}}}$
Scenarios (c) and (d) can be analyzed by simple time dilation arguments. In (c), the receiver observes light from the source as being blueshifted by a factor of $\gamma $, and in (d), the light is redshifted. The only seeming complication is that the orbiting objects are in accelerated motion. However, if an inertial observer looks at an accelerating clock, only the clock's instantaneous speed is important when computing time dilation. (The converse, however, is not true.)[42]: 541–543 Most reports of transverse Doppler shift refer to the effect as a redshift and analyze the effect in terms of scenarios (b) or (d).[note 11]
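The pure time-dilation shifts of these scenarios reduce to a factor of γ, as this illustrative sketch shows (f_source stands for the source's proper frequency f′):

```python
import math

def lorentz_gamma(beta):
    """Lorentz factor for velocity parameter beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

f_source, beta = 1.0, 0.6
g = lorentz_gamma(beta)  # 1.25 for beta = 0.6

# Scenarios (a) and (c): the receiver measures blueshifted light, f = f' * gamma
print(f_source * g)      # 1.25

# Scenarios (b) and (d): the receiver measures redshifted light, f = f' / gamma
print(f_source / g)      # 0.8
```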
Energy and momentum
Main articles: Four-momentum, Momentum, and Mass–energy equivalence
Extending momentum to four dimensions
In classical mechanics, the state of motion of a particle is characterized by its mass and its velocity. Linear momentum, the product of a particle's mass and velocity, is a vector quantity, possessing the same direction as the velocity: p = mv. It is a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change.
In relativistic mechanics, the momentum vector is extended to four dimensions. Added to the momentum vector is a time component that allows the spacetime momentum vector to transform like the spacetime position vector $(x,t)$. In exploring the properties of the spacetime momentum, we start, in Fig. 3-8a, by examining what a particle looks like at rest. In the rest frame, the spatial component of the momentum is zero, i.e. p = 0, but the time component equals mc.
We can obtain the transformed components of this vector in the moving frame by using the Lorentz transformations, or we can read it directly from the figure because we know that $(mc)^{\prime }=\gamma mc$ and $p^{\prime }=-\beta \gamma mc$, since the red axes are rescaled by gamma. Fig. 3-8b illustrates the situation as it appears in the moving frame. It is apparent that the space and time components of the four-momentum go to infinity as the velocity of the moving frame approaches c.[38]: 84–87
We will use this information shortly to obtain an expression for the four-momentum.
Momentum of light
Light particles, or photons, travel at the speed of c, the constant that is conventionally known as the speed of light. This statement is not a tautology, since many modern formulations of relativity do not start with constant speed of light as a postulate. Photons therefore propagate along a lightlike world line and, in appropriate units, have equal space and time components for every observer.
A consequence of Maxwell's theory of electromagnetism is that light carries energy and momentum, and that their ratio is a constant: $E/p=c$. Rearranging, $E/c=p$, and since for photons, the space and time components are equal, E/c must therefore be equated with the time component of the spacetime momentum vector.
Photons travel at the speed of light, yet have finite momentum and energy. For this to be so, the mass term in γmc must be zero, meaning that photons are massless particles. Infinity times zero is an ill-defined quantity, but E/c is well-defined.
By this analysis, if the energy of a photon equals E in the rest frame, it equals $E^{\prime }=(1-\beta )\gamma E$ in a moving frame. This result can be derived by inspection of Fig. 3-9 or by application of the Lorentz transformations, and is consistent with the analysis of Doppler effect given previously.[38]: 88
Mass–energy relationship
Consideration of the interrelationships between the various components of the relativistic momentum vector led Einstein to several famous conclusions.
• In the low speed limit as β = v/c approaches zero, γ approaches 1, so the spatial component of the relativistic momentum $\beta \gamma mc=\gamma mv$ approaches mv, the classical term for momentum. Following this perspective, γm can be interpreted as a relativistic generalization of m. Einstein proposed that the relativistic mass of an object increases with velocity according to the formula $m_{\text{rel}}=\gamma m$.
• Likewise, comparing the time component of the relativistic momentum with that of the photon, $\gamma mc=m_{\text{rel}}c=E/c$, so that Einstein arrived at the relationship $E=m_{\text{rel}}c^{2}$. Simplified to the case of zero velocity, this is Einstein's famous equation relating energy and mass.
Another way of looking at the relationship between mass and energy is to consider a series expansion of γmc2 at low velocity:
$E=\gamma mc^{2}={\frac {mc^{2}}{\sqrt {1-\beta ^{2}}}}$ $\approx mc^{2}+{\frac {1}{2}}mv^{2}...$
The second term is just an expression for the kinetic energy of the particle. Mass indeed appears to be another form of energy.[38]: 90–92 [40]: 129–130, 180
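The quality of this low-velocity expansion can be checked numerically, as an illustrative sketch in units with m = c = 1:

```python
import math

m, c = 1.0, 1.0

def total_energy(v):
    """Exact relativistic energy E = gamma * m * c**2."""
    return m * c**2 / math.sqrt(1.0 - (v / c)**2)

v = 0.01 * c                        # one percent of light speed
exact = total_energy(v)
approx = m * c**2 + 0.5 * m * v**2  # rest energy + Newtonian kinetic energy

# The discrepancy is tiny, of the order of the next expansion term ~ (3/8) m v**4 / c**2
print(exact - approx)
```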
The concept of relativistic mass that Einstein introduced in 1905, mrel, although amply validated every day in particle accelerators around the globe (or indeed in any instrumentation whose use depends on high velocity particles, such as electron microscopes,[43] old-fashioned color television sets, etc.), has nevertheless not proven to be a fruitful concept in physics in the sense that it is not a concept that has served as a basis for other theoretical development. Relativistic mass, for instance, plays no role in general relativity.
For this reason, as well as for pedagogical concerns, most physicists currently prefer a different terminology when referring to the relationship between mass and energy.[44] "Relativistic mass" is a deprecated term. The term "mass" by itself refers to the rest mass or invariant mass, and is equal to the invariant length of the relativistic momentum vector. Expressed as a formula,
$E^{2}-p^{2}c^{2}=m_{\text{rest}}^{2}c^{4}$
This formula applies to all particles, massless as well as massive. For photons, where $m_{\text{rest}}$ equals zero, it yields $E=\pm pc$.[38]: 90–92
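The frame independence of this relation can be verified directly: compute E and p for a particle in one frame, boost to a second frame, and confirm that $E^{2}-p^{2}c^{2}$ is unchanged. A small Python check, in units with c = 1 (the mass and velocities are illustrative):

```python
import math

# Units with c = 1. A particle of rest mass m moving with velocity beta.
m = 0.938          # illustrative rest mass (roughly a proton mass in GeV)
beta = 0.6
gamma = 1.0 / math.sqrt(1.0 - beta**2)

E, p = gamma * m, gamma * m * beta         # energy and momentum in frame S

# Boost to a frame S' moving with velocity u along the same axis;
# (E, p) transform exactly like (ct, x).
u = 0.8
gu = 1.0 / math.sqrt(1.0 - u**2)
E2 = gu * (E - u * p)
p2 = gu * (p - u * E)

inv_S  = E**2 - p**2       # invariant length in frame S
inv_S2 = E2**2 - p2**2     # invariant length in frame S'
print(abs(inv_S - m**2) < 1e-12, abs(inv_S2 - m**2) < 1e-12)  # True True
```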
Four-momentum
Because of the close relationship between mass and energy, the four-momentum (also called 4-momentum) is also called the energy–momentum 4-vector. Using an uppercase P to represent the four-momentum and a lowercase p to denote the spatial momentum, the four-momentum may be written as
$P\equiv (E/c,{\vec {p}})=(E/c,p_{x},p_{y},p_{z})$ or alternatively,
$P\equiv (E,{\vec {p}})=(E,p_{x},p_{y},p_{z})$ using the convention that $c=1.$[40]: 129–130, 180
Conservation laws
Main article: Conservation law
In physics, conservation laws state that certain particular measurable properties of an isolated physical system do not change as the system evolves over time. In 1915, Emmy Noether discovered that underlying each conservation law is a fundamental symmetry of nature.[45] The fact that physical processes don't care where in space they take place (space translation symmetry) yields conservation of momentum, the fact that such processes don't care when they take place (time translation symmetry) yields conservation of energy, and so on. In this section, we examine the Newtonian views of conservation of mass, momentum and energy from a relativistic perspective.
Total momentum
To understand how the Newtonian view of conservation of momentum needs to be modified in a relativistic context, we examine the problem of two colliding bodies limited to a single dimension.
In Newtonian mechanics, two extreme cases of this problem may be distinguished yielding mathematics of minimum complexity:
(1) The two bodies rebound from each other in a completely elastic collision.
(2) The two bodies stick together and continue moving as a single particle. This second case is the case of completely inelastic collision.
For both cases (1) and (2), momentum, mass, and total energy are conserved. However, kinetic energy is not conserved in cases of inelastic collision. A certain fraction of the initial kinetic energy is converted to heat.
In case (2), two masses with momenta ${\boldsymbol {p}}_{\boldsymbol {1}}=m_{1}{\boldsymbol {v}}_{\boldsymbol {1}}$ and ${\boldsymbol {p}}_{\boldsymbol {2}}=m_{2}{\boldsymbol {v}}_{\boldsymbol {2}}$ collide to produce a single particle of conserved mass $m=m_{1}+m_{2}$ traveling at the center of mass velocity of the original system, ${\boldsymbol {v_{cm}}}=\left(m_{1}{\boldsymbol {v_{1}}}+m_{2}{\boldsymbol {v_{2}}}\right)/\left(m_{1}+m_{2}\right)$. The total momentum ${\boldsymbol {p=p_{1}+p_{2}}}$ is conserved.
Fig. 3-10 illustrates the inelastic collision of two particles from a relativistic perspective. The time components $E_{1}/c$ and $E_{2}/c$ add up to total E/c of the resultant vector, meaning that energy is conserved. Likewise, the space components ${\boldsymbol {p_{1}}}$ and ${\boldsymbol {p_{2}}}$ add up to form p of the resultant vector. The four-momentum is, as expected, a conserved quantity. However, the invariant mass of the fused particle, given by the point where the invariant hyperbola of the total momentum intersects the energy axis, is not equal to the sum of the invariant masses of the individual particles that collided. Indeed, it is larger than the sum of the individual masses: $m>m_{1}+m_{2}$.[38]: 94–97
Looking at the events of this scenario in reverse sequence, we see that non-conservation of mass is a common occurrence: when an unstable elementary particle spontaneously decays into two lighter particles, total energy is conserved, but the mass is not. Part of the mass is converted into kinetic energy.[40]: 134–138
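This behavior is easy to confirm numerically. In the sketch below (Python, units with c = 1, illustrative masses and speeds), two unit rest masses collide head-on and stick together; the invariant mass of the fused particle exceeds the sum of the individual rest masses because the incoming kinetic energy contributes to it:

```python
import math

def four_momentum(m, beta):
    """Return (E, p) for rest mass m and speed beta, in units with c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * m, gamma * m * beta

# Two equal rest masses colliding head-on and sticking together.
E1, p1 = four_momentum(1.0,  0.6)
E2, p2 = four_momentum(1.0, -0.6)

E, p = E1 + E2, p1 + p2                  # total four-momentum is conserved
m_fused = math.sqrt(E**2 - p**2)         # invariant mass of the fused particle

# With beta = 0.6, gamma = 1.25, so m_fused = 2.5 > m1 + m2 = 2.0.
print(m_fused > 2.0)  # True: kinetic energy shows up as extra invariant mass
```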
Choice of reference frames
Figure 3-11.
(above) Lab Frame.
(right) Center of Momentum Frame.
The freedom to choose any frame in which to perform an analysis allows us to pick one which may be particularly convenient. For analysis of momentum and energy problems, the most convenient frame is usually the "center-of-momentum frame" (also called the zero-momentum frame, or COM frame). This is the frame in which the space component of the system's total momentum is zero. Fig. 3-11 illustrates the breakup of a high speed particle into two daughter particles. In the lab frame, the daughter particles are preferentially emitted in a direction oriented along the original particle's trajectory. In the COM frame, however, the two daughter particles are emitted in opposite directions, although their masses and the magnitude of their velocities are generally not the same.
Energy and momentum conservation
In a Newtonian analysis of interacting particles, transformation between frames is simple because all that is necessary is to apply the Galilean transformation to all velocities. Since $v'=v-u$, the momentum $p'=p-mu$. If the total momentum of an interacting system of particles is observed to be conserved in one frame, it will likewise be observed to be conserved in any other frame.[40]: 241–245
Conservation of momentum in the COM frame amounts to the requirement that p = 0 both before and after collision. In the Newtonian analysis, conservation of mass dictates that $m=m_{1}+m_{2}$. In the simplified, one-dimensional scenarios that we have been considering, only one additional constraint is necessary before the outgoing momenta of the particles can be determined—an energy condition. In the one-dimensional case of a completely elastic collision with no loss of kinetic energy, the outgoing velocities of the rebounding particles in the COM frame will be precisely equal and opposite to their incoming velocities. In the case of a completely inelastic collision with total loss of kinetic energy, the final velocity of the combined particle will be zero.[40]: 241–245
Newtonian momenta, calculated as $p=mv$, fail to behave properly under Lorentzian transformation. The linear transformation of velocities $v'=v-u$ is replaced by the highly nonlinear $v^{\prime }=(v-u){\Big /}\left(1-{\frac {vu}{c^{2}}}\right)$ so that a calculation demonstrating conservation of momentum in one frame will be invalid in other frames. Einstein was faced with either having to give up conservation of momentum, or to change the definition of momentum. This second option was what he chose.[38]: 104
Figure 3-12a. Energy–momentum diagram for decay of a charged pion.
Figure 3-12b. Graphing calculator analysis of charged pion decay.
The relativistic conservation law for energy and momentum replaces the three classical conservation laws for energy, momentum and mass. Mass is no longer conserved independently, because it has been subsumed into the total relativistic energy. This makes the relativistic conservation of energy a simpler concept than in nonrelativistic mechanics, because the total energy is conserved without any qualifications. Kinetic energy converted into heat or internal potential energy shows up as an increase in mass.[40]: 127
Example: Because of the equivalence of mass and energy, elementary particle masses are customarily stated in energy units, where 1 MeV = $10^{6}$ electron volts. A charged pion is a particle of mass 139.57 MeV (approx. 273 times the electron mass). It is unstable, and decays into a muon of mass 105.66 MeV (approx. 207 times the electron mass) and an antineutrino, which has an almost negligible mass. The difference between the pion mass and the muon mass is 33.91 MeV.
$\pi ^{-}\to \mu ^{-}+{\bar {\nu }}_{\mu }$
Fig. 3-12a illustrates the energy–momentum diagram for this decay reaction in the rest frame of the pion. Because of its negligible mass, the neutrino travels at very nearly the speed of light. The relativistic expression for its energy, like that of the photon, is $E_{\nu }=pc,$ which is also the value of the space component of its momentum. To conserve momentum, the space component of the muon's momentum must be equal in magnitude to that of the neutrino, but opposite in direction.
Algebraic analyses of the energetics of this decay reaction are available online,[46] so Fig. 3-12b presents instead a graphing calculator solution. The energy of the neutrino is 29.79 MeV, and the kinetic energy of the muon is 33.91 MeV − 29.79 MeV = 4.12 MeV. Most of the energy is carried off by the near-zero-mass neutrino.
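For readers who want the algebra behind those numbers, a short calculation reproduces them. For a two-body decay at rest with one (nearly) massless product, conservation of energy and momentum gives $E_{\nu }=(m_{\pi }^{2}-m_{\mu }^{2})/(2m_{\pi })$:

```python
m_pi, m_mu = 139.57, 105.66    # pion and muon masses in MeV (c = 1)

# Two-body decay at rest with a massless neutrino: conservation of
# energy and momentum gives E_nu = (m_pi^2 - m_mu^2) / (2 m_pi).
E_nu = (m_pi**2 - m_mu**2) / (2 * m_pi)
E_mu = m_pi - E_nu             # total muon energy (conservation of energy)
T_mu = E_mu - m_mu             # muon kinetic energy

print(round(E_nu, 2), round(T_mu, 2))   # 29.79 4.12
```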
Beyond the basics
The topics in this section are of significantly greater technical difficulty than those in the preceding sections and are not essential for understanding Introduction to curved spacetime.
Rapidity
Figure 4-1a. A ray through the unit circle $x^{2}+y^{2}=1$ in the point (cos a, sin a), where a is twice the area between the ray, the circle, and the x-axis.
Figure 4-1b. A ray through the unit hyperbola $x^{2}-y^{2}=1$ in the point (cosh a, sinh a), where a is twice the area between the ray, the hyperbola, and the x-axis.
Lorentz transformations relate coordinates of events in one reference frame to those of another frame. Relativistic composition of velocities is used to add two velocities together. The formulas to perform the latter computations are nonlinear, making them more complex than the corresponding Galilean formulas.
This nonlinearity is an artifact of our choice of parameters.[8]: 47–59 We have previously noted that in an x–ct spacetime diagram, the points at some constant spacetime interval from the origin form an invariant hyperbola. We have also noted that the coordinate systems of two spacetime reference frames in standard configuration are hyperbolically rotated with respect to each other.
The natural functions for expressing these relationships are the hyperbolic analogs of the trigonometric functions. Fig. 4-1a shows a unit circle with sin(a) and cos(a), the only difference between this diagram and the familiar unit circle of elementary trigonometry being that a is interpreted, not as the angle between the ray and the x-axis, but as twice the area of the sector swept out by the ray from the x-axis. (Numerically, the angle and 2 × area measures for the unit circle are identical.) Fig. 4-1b shows a unit hyperbola with sinh(a) and cosh(a), where a is likewise interpreted as twice the tinted area.[47] Fig. 4-2 presents plots of the sinh, cosh, and tanh functions.
For the unit circle, the slope of the ray is given by
${\text{slope}}=\tan a={\frac {\sin a}{\cos a}}.$
In the Cartesian plane, rotation of point (x, y) into point (x', y') by angle θ is given by
${\begin{pmatrix}x'\\y'\\\end{pmatrix}}={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{pmatrix}}{\begin{pmatrix}x\\y\\\end{pmatrix}}.$
In a spacetime diagram, the velocity parameter $\beta $ is the analog of slope. The rapidity, φ, is defined by[40]: 96–99
$\beta \equiv \tanh \phi \equiv {\frac {v}{c}},$
where
$\tanh \phi ={\frac {\sinh \phi }{\cosh \phi }}={\frac {e^{\phi }-e^{-\phi }}{e^{\phi }+e^{-\phi }}}.$
The rapidity defined above is very useful in special relativity because many expressions take on a considerably simpler form when expressed in terms of it. For example, rapidity is simply additive in the collinear velocity-addition formula:[8]: 47–59
$\beta ={\frac {\beta _{1}+\beta _{2}}{1+\beta _{1}\beta _{2}}}=$ ${\frac {\tanh \phi _{1}+\tanh \phi _{2}}{1+\tanh \phi _{1}\tanh \phi _{2}}}=$ $\tanh(\phi _{1}+\phi _{2}),$
or in other words, $\phi =\phi _{1}+\phi _{2}.$
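Additivity is straightforward to verify numerically (a Python sketch with arbitrary illustrative velocities):

```python
import math

beta1, beta2 = 0.5, 0.8        # illustrative collinear velocities (c = 1)

# Relativistic velocity addition...
beta_direct = (beta1 + beta2) / (1 + beta1 * beta2)

# ...is ordinary addition when expressed as rapidities.
phi1, phi2 = math.atanh(beta1), math.atanh(beta2)
beta_rapidity = math.tanh(phi1 + phi2)

print(abs(beta_direct - beta_rapidity) < 1e-12)  # True
```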
The Lorentz transformations take a simple form when expressed in terms of rapidity. The γ factor can be written as
$\gamma ={\frac {1}{\sqrt {1-\beta ^{2}}}}={\frac {1}{\sqrt {1-\tanh ^{2}\phi }}}$ $=\cosh \phi ,$
$\gamma \beta ={\frac {\beta }{\sqrt {1-\beta ^{2}}}}={\frac {\tanh \phi }{\sqrt {1-\tanh ^{2}\phi }}}$ $=\sinh \phi .$
Transformations describing relative motion with uniform velocity and without rotation of the space coordinate axes are called boosts.
Substituting γ and γβ into the transformations as previously presented and rewriting in matrix form, the Lorentz boost in the x-direction may be written as
${\begin{pmatrix}ct'\\x'\end{pmatrix}}={\begin{pmatrix}\cosh \phi &-\sinh \phi \\-\sinh \phi &\cosh \phi \end{pmatrix}}{\begin{pmatrix}ct\\x\end{pmatrix}},$
and the inverse Lorentz boost in the x-direction may be written as
${\begin{pmatrix}ct\\x\end{pmatrix}}={\begin{pmatrix}\cosh \phi &\sinh \phi \\\sinh \phi &\cosh \phi \end{pmatrix}}{\begin{pmatrix}ct'\\x'\end{pmatrix}}.$
In other words, Lorentz boosts represent hyperbolic rotations in Minkowski spacetime.[40]: 96–99
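A quick numerical check confirms both that the two matrices above are inverses of each other and that a boost preserves the spacetime interval (Python; the event coordinates and velocity are illustrative):

```python
import math

def boost(phi):
    """2x2 Lorentz boost of rapidity phi in the x-direction, acting on (ct, x)."""
    return [[math.cosh(phi), -math.sinh(phi)],
            [-math.sinh(phi), math.cosh(phi)]]

def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

phi = math.atanh(0.6)               # rapidity corresponding to beta = 0.6
event = [3.0, 4.0]                  # (ct, x) in some frame

ct_p, x_p = apply(boost(phi), event)          # boosted coordinates
ct_b, x_b = apply(boost(-phi), [ct_p, x_p])   # inverse boost: negate the rapidity

# The inverse boost recovers the original event, and the interval is invariant.
print(abs(ct_b - 3.0) < 1e-12, abs(x_b - 4.0) < 1e-12)           # True True
print(abs((ct_p**2 - x_p**2) - (3.0**2 - 4.0**2)) < 1e-12)       # True
```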
The advantages of using hyperbolic functions are such that some textbooks such as the classic ones by Taylor and Wheeler introduce their use at a very early stage.[8][48][note 12]
4‑vectors
Four‑vectors have been mentioned above in context of the energy–momentum 4‑vector, but without any great emphasis. Indeed, none of the elementary derivations of special relativity require them. But once understood, 4‑vectors, and more generally tensors, greatly simplify the mathematics and conceptual understanding of special relativity. Working exclusively with such objects leads to formulas that are manifestly relativistically invariant, which is a considerable advantage in non-trivial contexts. For instance, demonstrating relativistic invariance of Maxwell's equations in their usual form is not trivial, while it is merely a routine calculation (really no more than an observation) using the field strength tensor formulation. On the other hand, general relativity, from the outset, relies heavily on 4‑vectors, and more generally tensors, representing physically relevant entities. Relating these via equations that do not rely on specific coordinates requires tensors, capable of connecting such 4‑vectors even within a curved spacetime, and not just within a flat one as in special relativity. The study of tensors is outside the scope of this article, which provides only a basic discussion of spacetime.
Definition of 4-vectors
A 4-tuple, $A=\left(A_{0},A_{1},A_{2},A_{3}\right)$ is a "4-vector" if its components $A_{i}$ transform between frames according to the Lorentz transformation.
If using $(ct,x,y,z)$ coordinates, A is a 4–vector if it transforms (in the x-direction) according to
${\begin{aligned}A_{0}'&=\gamma \left(A_{0}-(v/c)A_{1}\right)\\A_{1}'&=\gamma \left(A_{1}-(v/c)A_{0}\right)\\A_{2}'&=A_{2}\\A_{3}'&=A_{3}\end{aligned}}$
which comes from simply replacing ct with A0 and x with A1 in the earlier presentation of the Lorentz transformation.
As usual, when we write x, t, etc. we generally mean Δx, Δt etc.
The last three components of a 4–vector must be a standard vector in three-dimensional space. Therefore, a 4–vector must transform like $(c\Delta t,\Delta x,\Delta y,\Delta z)$ under Lorentz transformations as well as rotations.[34]: 36–59
Properties of 4-vectors
• Closure under linear combination: If A and B are 4-vectors, then $C=aA+bB$ is also a 4-vector.
• Inner-product invariance: If A and B are 4-vectors, then their inner product (scalar product) is invariant, i.e. their inner product is independent of the frame in which it is calculated. Note how the calculation of inner product differs from the calculation of the inner product of a 3-vector. In the following, ${\vec {A}}$ and ${\vec {B}}$ are 3-vectors:
$A\cdot B\equiv $ $A_{0}B_{0}-A_{1}B_{1}-A_{2}B_{2}-A_{3}B_{3}\equiv $ $A_{0}B_{0}-{\vec {A}}\cdot {\vec {B}}$
In addition to being invariant under Lorentz transformation, the above inner product is also invariant under rotation in 3-space.
Two vectors are said to be orthogonal if $A\cdot B=0.$ Unlike the case with 3-vectors, orthogonal 4-vectors are not necessarily at right angles with each other. The rule is that two 4-vectors are orthogonal if they are offset by equal and opposite angles from the 45° line which is the world line of a light ray. This implies that a lightlike 4-vector is orthogonal with itself.
• Invariance of the magnitude of a vector: The magnitude of a vector is the inner product of a 4-vector with itself, and is a frame-independent property. As with intervals, the magnitude may be positive, negative or zero, so that the vectors are referred to as timelike, spacelike or null (lightlike). Note that a null vector is not the same as a zero vector. A null vector is one for which $A\cdot A=0,$ while a zero vector is one whose components are all zero. Special cases illustrating the invariance of the norm include the invariant interval $c^{2}t^{2}-x^{2}$ and the invariant length of the relativistic momentum vector $E^{2}-p^{2}c^{2}.$[40]: 178–181 [34]: 36–59
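These invariance properties can be spot-checked numerically. The sketch below (Python, c = 1, arbitrary illustrative components) boosts two 4-vectors and confirms that their inner product is unchanged, and that a lightlike 4-vector is orthogonal to itself:

```python
import math

def minkowski_dot(A, B):
    """Inner product with signature (+, -, -, -)."""
    return A[0]*B[0] - A[1]*B[1] - A[2]*B[2] - A[3]*B[3]

def boost_x(A, beta):
    """Lorentz boost of a 4-vector along the x-axis (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return [g*(A[0] - beta*A[1]), g*(A[1] - beta*A[0]), A[2], A[3]]

A = [5.0, 1.0, 2.0, 3.0]   # illustrative components
B = [4.0, 2.0, 0.0, 1.0]

before = minkowski_dot(A, B)
after  = minkowski_dot(boost_x(A, 0.6), boost_x(B, 0.6))
print(abs(before - after) < 1e-12)  # True: inner product is frame independent

# A lightlike (null) 4-vector is orthogonal to itself:
L = [1.0, 1.0, 0.0, 0.0]
print(minkowski_dot(L, L) == 0.0)   # True
```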
Examples of 4-vectors
• Displacement 4-vector: Otherwise known as the spacetime separation, this is (Δt, Δx, Δy, Δz), or for infinitesimal separations, (dt, dx, dy, dz).
$dS\equiv (dt,dx,dy,dz)$
• Velocity 4-vector: This results when the displacement 4-vector is divided by $d\tau $, where $d\tau $ is the proper time between the two events that yield dt, dx, dy, and dz.
$V\equiv {\frac {dS}{d\tau }}={\frac {(dt,dx,dy,dz)}{dt/\gamma }}=$ $\gamma \left(1,{\frac {dx}{dt}},{\frac {dy}{dt}},{\frac {dz}{dt}}\right)=$ $(\gamma ,\gamma {\vec {v}})$
Figure 4-3a. The momentarily comoving reference frames of an accelerating particle as observed from a stationary frame.
Figure 4-3b. The momentarily comoving reference frames along the trajectory of an accelerating observer (center).
The 4-velocity is tangent to the world line of a particle, and has a length equal to one unit of time in the frame of the particle.
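The unit-length property is immediate from the definition, since $V\cdot V=\gamma ^{2}(1-v^{2})=1$ in units with c = 1. A minimal check (Python, illustrative 3-velocity):

```python
import math

def four_velocity(vx, vy, vz):
    """4-velocity (gamma, gamma*v) for a particle of 3-velocity v, with c = 1."""
    v2 = vx*vx + vy*vy + vz*vz
    g = 1.0 / math.sqrt(1.0 - v2)
    return [g, g*vx, g*vy, g*vz]

V = four_velocity(0.3, 0.4, 0.0)     # illustrative 3-velocity, |v| = 0.5
norm2 = V[0]**2 - V[1]**2 - V[2]**2 - V[3]**2
print(abs(norm2 - 1.0) < 1e-12)      # True: the 4-velocity has unit magnitude
```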
An accelerated particle does not have an inertial frame in which it is always at rest. However, an inertial frame can always be found which is momentarily comoving with the particle. This frame, the momentarily comoving reference frame (MCRF), enables application of special relativity to the analysis of accelerated particles.
Since photons move on null lines, $d\tau =0$ for a photon, and a 4-velocity cannot be defined. There is no frame in which a photon is at rest, and no MCRF can be established along a photon's path.
• Energy–momentum 4-vector:
$P\equiv (E/c,{\vec {p}})=(E/c,p_{x},p_{y},p_{z})$
As indicated before, there are varying treatments for the energy-momentum 4-vector so that one may also see it expressed as $(E,{\vec {p}})$ or $(E,{\vec {p}}c).$ The first component is the total energy (including mass) of the particle (or system of particles) in a given frame, while the remaining components are its spatial momentum. The energy-momentum 4-vector is a conserved quantity.
• Acceleration 4-vector: This results from taking the derivative of the velocity 4-vector with respect to $\tau .$
$A\equiv {\frac {dV}{d\tau }}=$ ${\frac {d}{d\tau }}(\gamma ,\gamma {\vec {v}})=$ $\gamma \left({\frac {d\gamma }{dt}},{\frac {d(\gamma {\vec {v}})}{dt}}\right)$
• Force 4-vector: This is the derivative of the momentum 4-vector with respect to $\tau .$
$F\equiv {\frac {dP}{d\tau }}=$ $\gamma \left({\frac {dE}{dt}},{\frac {d{\vec {p}}}{dt}}\right)=$ $\gamma \left({\frac {dE}{dt}},{\vec {f}}\right)$
As expected, the final components of the above 4-vectors are all standard 3-vectors corresponding to spatial 3-momentum, 3-force etc.[40]: 178–181 [34]: 36–59
4-vectors and physical law
The first postulate of special relativity declares the equivalency of all inertial frames. A physical law holding in one frame must apply in all frames, since otherwise it would be possible to differentiate between frames. Newtonian momenta fail to behave properly under Lorentzian transformation, and Einstein preferred to change the definition of momentum to one involving 4-vectors rather than give up on conservation of momentum.
Physical laws must be based on constructs that are frame independent. This means that physical laws may take the form of equations connecting scalars, which are always frame independent. However, equations involving 4-vectors require the use of tensors with appropriate rank, which themselves can be thought of as being built up from 4-vectors.[40]: 186
Acceleration
It is a common misconception that special relativity is applicable only to inertial frames, and that it is unable to handle accelerating objects or accelerating reference frames. Actually, accelerating objects can generally be analyzed without needing to deal with accelerating frames at all. It is only when gravitation is significant that general relativity is required.[49]
Properly handling accelerating frames does require some care, however. The difference between special and general relativity is that (1) In special relativity, all velocities are relative, but acceleration is absolute. (2) In general relativity, all motion is relative, whether inertial, accelerating, or rotating. To accommodate this difference, general relativity uses curved spacetime.[49]
In this section, we analyze several scenarios involving accelerated reference frames.
Dewan–Beran–Bell spaceship paradox
The Dewan–Beran–Bell spaceship paradox (Bell's spaceship paradox) is a good example of a problem where intuitive reasoning unassisted by the geometric insight of the spacetime approach can lead to issues.
In Fig. 4-4, two identical spaceships float in space and are at rest relative to each other. They are connected by a string which is capable of only a limited amount of stretching before breaking. At a given instant in our frame, the observer frame, both spaceships accelerate in the same direction along the line between them with the same constant proper acceleration.[note 13] Will the string break?
When the paradox was new and relatively unknown, even professional physicists had difficulty working out the solution. Two lines of reasoning lead to opposite conclusions. Both arguments, which are presented below, are flawed even though one of them yields the correct answer.[40]: 106, 120–122
1. To observers in the rest frame, the spaceships start a distance L apart and remain the same distance apart during acceleration. In the frame of the accelerating spaceships, however, the distance L measured in the rest frame is the length-contracted version of a proper distance L' = γL. After a sufficiently long time, γ will increase to a sufficiently large factor that the string must break.
2. Let A and B be the rear and front spaceships. In the frame of the spaceships, each spaceship sees the other spaceship doing the same thing that it is doing. A says that B has the same acceleration that he has, and B sees that A matches her every move. So the spaceships stay the same distance apart, and the string does not break.[40]: 106, 120–122
The problem with the first argument is that there is no "frame of the spaceships." There cannot be, because the two spaceships measure a growing distance between the two. Because there is no common frame of the spaceships, the length of the string is ill-defined. Nevertheless, the conclusion is correct, and the argument is mostly right. The second argument, however, completely ignores the relativity of simultaneity.[40]: 106, 120–122
A spacetime diagram (Fig. 4-5) makes the correct solution to this paradox almost immediately evident. Two observers in Minkowski spacetime accelerate with constant magnitude $k$ acceleration for proper time $\sigma $ (acceleration and elapsed time measured by the observers themselves, not some inertial observer). They are comoving and inertial before and after this phase. In Minkowski geometry, the length along the line of simultaneity $A'B''$ turns out to be greater than the length along the line of simultaneity $AB$.
The length increase can be calculated with the help of the Lorentz transformation. If, as illustrated in Fig. 4-5, the acceleration is finished, the ships will remain at a constant offset in some frame $S'.$ If $x_{A}$ and $x_{B}=x_{A}+L$ are the ships' positions in $S,$ the positions in frame $S'$ are:[50]
${\begin{aligned}x'_{A}&=\gamma \left(x_{A}-vt\right)\\x'_{B}&=\gamma \left(x_{A}+L-vt\right)\\L'&=x'_{B}-x'_{A}=\gamma L\end{aligned}}$
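Plugging in illustrative numbers makes the result concrete. With β = 0.6, the proper separation between the ships after the acceleration phase is 25% longer than the unchanged separation L measured in frame S (a Python sketch):

```python
import math

# Ships at x_A and x_B = x_A + L in frame S after both complete the same
# acceleration program, now moving at speed beta (illustrative value).
L, beta = 1.0, 0.6
gamma = 1.0 / math.sqrt(1.0 - beta**2)   # gamma = 1.25 for beta = 0.6

L_prime = gamma * L      # separation in the ships' new rest frame S'
print(L_prime > L)       # True: the proper length has grown, so the string breaks
```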
The "paradox", as it were, comes from the way that Bell constructed his example. In the usual discussion of Lorentz contraction, the rest length is fixed and the moving length shortens as measured in frame $S$. As shown in Fig. 4-5, Bell's example asserts the moving lengths $AB$ and $A'B'$ measured in frame $S$ to be fixed, thereby forcing the rest frame length $A'B''$ in frame $S'$ to increase.
Accelerated observer with horizon
Certain special relativity problem setups can lead to insight about phenomena normally associated with general relativity, such as event horizons. In the text accompanying Fig. 2-7, the magenta hyperbolae represented actual paths that are tracked by a constantly accelerating traveler in spacetime. During periods of positive acceleration, the traveler's velocity asymptotically approaches the speed of light, while, measured in our frame, the traveler's acceleration constantly decreases.
Fig. 4-6 details various features of the traveler's motions with more specificity. At any given moment, her space axis is formed by a line passing through the origin and her current position on the hyperbola, while her time axis is the tangent to the hyperbola at her position. The velocity parameter $\beta $ approaches a limit of one as $ct$ increases. Likewise, $\gamma $ approaches infinity.
The shape of the invariant hyperbola corresponds to a path of constant proper acceleration. This is demonstrable as follows:
1. We remember that $\beta =ct/x.$
2. Since the invariant hyperbola satisfies $x^{2}-c^{2}t^{2}=s^{2}$ (with $s$ its x-intercept), we conclude that $\beta (ct)=ct/{\sqrt {c^{2}t^{2}+s^{2}}}.$
3. $\gamma =1/{\sqrt {1-\beta ^{2}}}=$ ${\sqrt {c^{2}t^{2}+s^{2}}}/s$
4. From the relativistic force law, $F=dp/dt=$$dpc/d(ct)=d(\beta \gamma mc^{2})/d(ct).$
5. Substituting $\beta (ct)$ from step 2 and the expression for $\gamma $ from step 3 yields $F=mc^{2}/s,$ which is a constant expression.[38]: 110–113
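Steps 1–5 can be checked numerically. In the sketch below (Python, c = 1, written with $x^{2}-c^{2}t^{2}=s^{2}$ so that the x-intercept s is real), the force obtained by differentiating the relativistic momentum along the hyperbola comes out the same at every sampled time:

```python
import math

# Worldline of constant proper acceleration (c = 1): x(t) = sqrt(s**2 + t**2),
# where s is the x-intercept of the invariant hyperbola.
m, s = 1.0, 2.0      # illustrative mass and intercept

def momentum(t):
    x = math.sqrt(s**2 + t**2)
    beta = t / x
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return beta * gamma * m          # relativistic momentum p = beta*gamma*m

# F = dp/dt by central finite difference, sampled at several times:
h = 1e-6
forces = [(momentum(t + h) - momentum(t - h)) / (2 * h) for t in (0.5, 2.0, 10.0)]

# Analytically beta*gamma = t/s, so F = m/s at every point on the hyperbola.
print(all(abs(F - m / s) < 1e-6 for F in forces))  # True
```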
Fig. 4-6 illustrates a specific calculated scenario. Terence (A) and Stella (B) initially stand together 100 light hours from the origin. Stella lifts off at time 0, her spacecraft accelerating at 0.01 c per hour. Every twenty hours, Terence radios updates to Stella about the situation at home (solid green lines). Stella receives these regular transmissions, but the increasing distance (offset in part by time dilation) causes her to receive Terence's communications later and later as measured on her clock, and she never receives any communications from Terence after 100 hours on his clock (dashed green lines).[38]: 110–113
After 100 hours according to Terence's clock, Stella enters a dark region. She has traveled outside Terence's timelike future. On the other hand, Terence can continue to receive Stella's messages to him indefinitely. He just has to wait long enough. Spacetime has been divided into distinct regions separated by an apparent event horizon. So long as Stella continues to accelerate, she can never know what takes place behind this horizon.[38]: 110–113
Introduction to curved spacetime
Main articles: Introduction to general relativity and General relativity
Basic propositions
Newton's theories assumed that motion takes place against the backdrop of a rigid Euclidean reference frame that extends throughout all space and all time. Gravity is mediated by a mysterious force, acting instantaneously across a distance, whose actions are independent of the intervening space.[note 14] In contrast, Einstein denied that there is any background Euclidean reference frame that extends throughout space. Nor is there any such thing as a force of gravitation, only the structure of spacetime itself.[8]: 175–190
In spacetime terms, the path of a satellite orbiting the Earth is not dictated by the distant influences of the Earth, Moon and Sun. Instead, the satellite moves through space only in response to local conditions. Since spacetime is everywhere locally flat when considered on a sufficiently small scale, the satellite is always following a straight line in its local inertial frame. We say that the satellite always follows along the path of a geodesic. No evidence of gravitation can be discovered following alongside the motions of a single particle.[8]: 175–190
In any analysis of spacetime, evidence of gravitation requires that one observe the relative accelerations of two bodies or two separated particles. In Fig. 5-1, two separated particles, free-falling in the gravitational field of the Earth, exhibit tidal accelerations due to local inhomogeneities in the gravitational field such that each particle follows a different path through spacetime. The tidal accelerations that these particles exhibit with respect to each other do not require forces for their explanation. Rather, Einstein described them in terms of the geometry of spacetime, i.e. the curvature of spacetime. These tidal accelerations are strictly local. It is the cumulative total effect of many local manifestations of curvature that result in the appearance of a gravitational force acting at a long range from Earth.[8]: 175–190
Two central propositions underlie general relativity.
• The first crucial concept is coordinate independence: The laws of physics cannot depend on what coordinate system one uses. This is a major extension of the principle of relativity from the version used in special relativity, which states that the laws of physics must be the same for every observer moving in non-accelerated (inertial) reference frames. In general relativity, to use Einstein's own (translated) words, "the laws of physics must be of such a nature that they apply to systems of reference in any kind of motion."[51]: 113 This leads to an immediate issue: In accelerated frames, one feels forces that seemingly would enable one to assess one's state of acceleration in an absolute sense. Einstein resolved this problem through the principle of equivalence.[52]: 137–149
• The equivalence principle states that in any sufficiently small region of space, the effects of gravitation are the same as those from acceleration.
In Fig. 5-2, person A is in a spaceship, far from any massive objects, that undergoes a uniform acceleration of g. Person B is in a box resting on Earth. Provided that the spaceship is sufficiently small so that tidal effects are non-measurable (given the sensitivity of current gravity measurement instrumentation, A and B presumably should be Lilliputians), there are no experiments that A and B can perform which will enable them to tell which setting they are in.[52]: 141–149
An alternative expression of the equivalence principle is to note that in Newton's universal law of gravitation, $F=GMm_{g}/r^{2}=m_{g}g,$ and in Newton's second law, $F=m_{i}a,$ there is no a priori reason why the gravitational mass $m_{g}$ should be equal to the inertial mass $m_{i}$. The equivalence principle states that these two masses are identical.[52]: 141–149
To go from the elementary description above of curved spacetime to a complete description of gravitation requires tensor calculus and differential geometry, topics both requiring considerable study. Without these mathematical tools, it is possible to write about general relativity, but it is not possible to demonstrate any non-trivial derivations.
Further information: Introduction to general relativity and General relativity
Curvature of time
In the discussion of special relativity, forces played no more than a background role. Special relativity assumes the ability to define inertial frames that fill all of spacetime, all of whose clocks run at the same rate as the clock at the origin. Is this really possible? In a nonuniform gravitational field, experiment dictates that the answer is no. Gravitational fields make it impossible to construct a global inertial frame. In small enough regions of spacetime, local inertial frames are still possible. General relativity involves the systematic stitching together of these local frames into a more general picture of spacetime.[34]: 118–126
Years before publication of the general theory in 1916, Einstein used the equivalence principle to predict the existence of gravitational redshift in the following thought experiment: (i) Assume that a tower of height h (Fig. 5-3) has been constructed. (ii) Drop a particle of rest mass m from the top of the tower. It falls freely with acceleration g, reaching the ground with velocity $v=(2gh)^{1/2}$, so that its total energy E, as measured by an observer on the ground and expressed in mass units (i.e. as E/c²), is $m+{\frac {{\frac {1}{2}}mv^{2}}{c^{2}}}=m+{\frac {mgh}{c^{2}}}$ (iii) A mass-energy converter transforms the total energy of the particle into a single high energy photon, which it directs upward. (iv) At the top of the tower, an energy-mass converter transforms the energy of the photon E' back into a particle of rest mass m'.[34]: 118–126
It must be that m = m', since otherwise one would be able to construct a perpetual motion device. We therefore predict that E' = m, so that
${\frac {E'}{E}}={\frac {h\nu \,'}{h\nu }}={\frac {m}{m+{\frac {mgh}{c^{2}}}}}\approx 1-{\frac {gh}{c^{2}}}$
A photon climbing in Earth's gravitational field loses energy and is redshifted. Early attempts to measure this redshift through astronomical observations were somewhat inconclusive, but definitive laboratory observations were performed by Pound & Rebka (1959) and later by Pound & Snider (1964).[53]
Light has an associated frequency, and this frequency may be used to drive the workings of a clock. The gravitational redshift leads to an important conclusion about time itself: Gravity makes time run slower. Suppose we build two identical clocks whose rates are controlled by some stable atomic transition. Place one clock on top of the tower, while the other clock remains on the ground. An experimenter on top of the tower observes that signals from the ground clock are lower in frequency than those of the clock next to her on the tower. Light going up the tower is just a wave, and it is impossible for wave crests to disappear on the way up. Exactly as many oscillations of light arrive at the top of the tower as were emitted at the bottom. The experimenter concludes that the ground clock is running slow, and can confirm this by bringing the tower clock down to compare side by side with the ground clock.[3]: 16–18 For a 1 km tower, the discrepancy would amount to about 9.4 nanoseconds per day, easily measurable with modern instrumentation.
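The size of this effect is easy to check numerically. A minimal sketch in plain Python, using standard values for g and c (assumptions of convenience, not figures taken from the cited experiments), reproduces the ~9.4 ns/day figure quoted above:

```python
# Fractional gravitational rate difference between clocks separated by
# height h in a uniform field: gh / c^2.
g = 9.81        # m/s^2, surface gravity
h = 1000.0      # m, tower height
c = 2.998e8     # m/s, speed of light

fractional_shift = g * h / c**2
discrepancy_ns = fractional_shift * 86400.0 * 1e9   # nanoseconds per day
print(f"fractional shift: {fractional_shift:.2e}")
print(f"discrepancy:      {discrepancy_ns:.1f} ns/day")   # ~9.4
```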
Clocks in a gravitational field do not all run at the same rate. Experiments such as the Pound–Rebka experiment have firmly established curvature of the time component of spacetime. The Pound–Rebka experiment says nothing about curvature of the space component of spacetime. But the theoretical arguments predicting gravitational time dilation do not depend on the details of general relativity at all. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence.[3]: 16 This includes Newtonian gravitation. A standard demonstration in general relativity is to show how, in the "Newtonian limit" (i.e. the particles are moving slowly, the gravitational field is weak, and the field is static), curvature of time alone is sufficient to derive Newton's law of gravity.[54]: 101–106
Newtonian gravitation is a theory of curved time. General relativity is a theory of curved time and curved space. Given G as the gravitational constant, M as the mass of a Newtonian star, and orbiting bodies of insignificant mass at distance r from the star, the spacetime interval for Newtonian gravitation is one for which only the time coefficient is variable:[3]: 229–232
$\Delta s^{2}=\left(1-{\frac {2GM}{c^{2}r}}\right)(c\Delta t)^{2}-\,(\Delta x)^{2}-(\Delta y)^{2}-(\Delta z)^{2}$
Curvature of space
The $(1-2GM/(c^{2}r))$ coefficient in front of $(c\Delta t)^{2}$ describes the curvature of time in Newtonian gravitation, and this curvature completely accounts for all Newtonian gravitational effects. As expected, the deviation of this coefficient from unity is directly proportional to $G$ and $M$, and because of the $r$ in the denominator, it grows as one approaches the gravitating body, meaning that time is increasingly curved.
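To get a feel for the size of this term, one can evaluate 2GM/(c²r) at the surfaces of familiar bodies; the masses and radii below are standard reference values:

```python
# Size of the time-curvature term 2GM/(c^2 r) in the Newtonian metric,
# evaluated at the surface of the Earth and of the Sun.
G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8     # m/s, speed of light

bodies = {
    "Earth": (5.972e24, 6.371e6),   # mass (kg), surface radius (m)
    "Sun":   (1.989e30, 6.957e8),
}
terms = {name: 2 * G * M / (c**2 * r) for name, (M, r) in bodies.items()}
for name, term in terms.items():
    print(f"{name}: 2GM/(c^2 r) = {term:.2e}")
```

Both values are tiny, consistent with how weak everyday gravitational time dilation is.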
But general relativity is a theory of curved space and curved time, so if there are terms modifying the spatial components of the spacetime interval presented above, shouldn't their effects be seen on, say, planetary and satellite orbits due to curvature correction factors applied to the spatial terms?
The answer is that they are seen, but the effects are tiny. The reason is that planetary velocities are extremely small compared to the speed of light, so that for planets and satellites of the solar system, the $(c\Delta t)^{2}$ term dwarfs the spatial terms.[3]: 234–238
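This suppression can be quantified: along a planet's worldline, Δx ≈ vΔt, so the spatial terms are smaller than the temporal term by a factor of roughly (v/c)². A one-line check, assuming Earth's mean orbital speed of about 29.8 km/s:

```python
# Along a planet's worldline, Delta x ~ v * Delta t, so the spatial terms in
# Delta s^2 are suppressed relative to (c Delta t)^2 by roughly (v/c)^2.
c = 2.998e8       # m/s
v_earth = 2.98e4  # m/s, Earth's mean orbital speed (standard value)

ratio = (v_earth / c)**2
print(f"(v/c)^2 for Earth: {ratio:.1e}")   # ~1e-8
```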
Despite the minuteness of the spatial terms, the first indications that something was wrong with Newtonian gravitation were discovered over a century-and-a-half ago. In 1859, Urbain Le Verrier, in an analysis of available timed observations of transits of Mercury over the Sun's disk from 1697 to 1848, reported that known physics could not explain the orbit of Mercury unless there existed a planet or asteroid belt within Mercury's orbit. The perihelion of Mercury's orbit exhibited an excess rate of precession over that which could be explained by the tugs of the other planets.[55] The ability to detect and accurately measure the minute value of this anomalous precession (only 43 arc seconds per tropical century) is testimony to the sophistication of 19th century astrometry.
Because Le Verrier was the famous astronomer who had earlier discovered the existence of Neptune "at the tip of his pen" by analyzing wobbles in the orbit of Uranus, his announcement triggered a two-decade-long period of "Vulcan-mania", as professional and amateur astronomers alike hunted for the hypothetical new planet. This search included several false sightings of Vulcan. It was ultimately established that no such planet or asteroid belt existed.[56]
In 1916, Einstein was to show that this anomalous precession of Mercury is explained by the spatial terms in the curvature of spacetime. Curvature in the temporal term, being simply an expression of Newtonian gravitation, has no part in explaining this anomalous precession. The success of his calculation was a powerful indication to Einstein's peers that the general theory of relativity could be correct.
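Einstein's result can be reproduced with the standard weak-field formula for the perihelion advance per orbit, 6πGM/(c²a(1−e²)). The sketch below uses published orbital parameters for Mercury; the formula is the textbook general-relativistic expression, not Einstein's original derivation:

```python
import math

# General-relativistic perihelion advance per orbit: 6*pi*G*M / (c^2 a (1 - e^2)).
# Orbital elements are standard published values for Mercury.
GM_sun = 1.32712e20       # m^3/s^2, solar gravitational parameter (G*M)
c = 2.998e8               # m/s
a = 5.7909e10             # m, semi-major axis
e = 0.2056                # orbital eccentricity
period_days = 87.969      # orbital period

dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 36525.0 / period_days
arcsec_per_century = math.degrees(dphi) * 3600 * orbits_per_century
print(f"anomalous precession: {arcsec_per_century:.1f} arcsec/century")  # ~43
```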
The most spectacular of Einstein's predictions was his calculation that the curvature terms in the spatial components of the spacetime interval could be measured in the bending of light around a massive body. Light has a slope of ±1 on a spacetime diagram. Its movement in space is equal to its movement in time. For the weak field expression of the invariant interval, Einstein calculated an exactly equal but opposite sign curvature in its spatial components.[3]: 234–238
$\Delta s^{2}=\left(1-{\frac {2GM}{c^{2}r}}\right)(c\Delta t)^{2}-\left(1+{\frac {2GM}{c^{2}r}}\right)\left[(\Delta x)^{2}+(\Delta y)^{2}+(\Delta z)^{2}\right]$
In Newton's gravitation, the $(1-2GM/(c^{2}r))$ coefficient in front of $(c\Delta t)^{2}$ predicts bending of light around a star. In general relativity, the $(1+2GM/(c^{2}r))$ coefficient in front of $\left[(\Delta x)^{2}+(\Delta y)^{2}+(\Delta z)^{2}\right]$ predicts a doubling of the total bending.[3]: 234–238
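Numerically, the weak-field deflection of a light ray grazing the Sun is 4GM/(c²b), with b the impact parameter; curvature of time alone gives half of this. A sketch with standard solar values:

```python
import math

# Weak-field deflection of light grazing the Sun. General relativity predicts
# 4GM/(c^2 b); curvature of time alone gives half of that, 2GM/(c^2 b).
GM_sun = 1.32712e20    # m^3/s^2, solar gravitational parameter
c = 2.998e8            # m/s
b = 6.957e8            # m, impact parameter ~ one solar radius

deflection_gr = 4 * GM_sun / (c**2 * b)        # radians
deflection_time_only = deflection_gr / 2.0

to_arcsec = math.degrees(1) * 3600
print(f"full GR:        {deflection_gr * to_arcsec:.2f} arcsec")   # ~1.75
print(f"time-only:      {deflection_time_only * to_arcsec:.2f} arcsec")
```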
The story of the 1919 Eddington eclipse expedition and Einstein's rise to fame is well told elsewhere.[57]
Sources of spacetime curvature
In Newton's theory of gravitation, the only source of gravitational force is mass.
In contrast, general relativity identifies several sources of spacetime curvature in addition to mass. In the Einstein field equations, the sources of gravity are presented on the right-hand side in $T_{\mu \nu },$ the stress–energy tensor.
Fig. 5-5 classifies the various sources of gravity in the stress–energy tensor:
• $T^{00}$ (red): The total mass–energy density, including any contributions to the potential energy from forces between the particles, as well as kinetic energy from random thermal motions.
• $T^{0i}$ and $T^{i0}$ (orange): These are momentum density terms. Even if there is no bulk motion, energy may be transmitted by heat conduction, and the conducted energy will carry momentum.
• $T^{ij}$ are the rates of flow of the i-component of momentum per unit area in the j-direction. Even if there is no bulk motion, random thermal motions of the particles will give rise to momentum flow, so the i = j terms (green) represent isotropic pressure, and the i ≠ j terms (blue) represent shear stresses.[58]
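As a concrete illustration of this layout, the stress–energy tensor of a perfect fluid in its rest frame is purely diagonal: mass–energy density in the $T^{00}$ slot, isotropic pressure in the three diagonal $T^{ii}$ slots, and vanishing momentum-density and shear entries. A minimal numpy sketch with arbitrary illustrative values:

```python
import numpy as np

# Stress-energy tensor of a perfect fluid in its rest frame:
#   T = diag(rho c^2, p, p, p)
# T^{00} is the mass-energy density; the T^{0i} momentum densities vanish
# (no bulk motion or heat flow); the diagonal T^{ii} carry the isotropic
# pressure; the off-diagonal T^{ij} (shear stresses) vanish.
c = 2.998e8      # m/s
rho = 1.0e3      # kg/m^3, illustrative mass density (water-like)
p = 1.0e5        # Pa, illustrative pressure (about 1 atm)

T = np.diag([rho * c**2, p, p, p])
print(T)
assert np.allclose(T, T.T)    # the stress-energy tensor is symmetric
```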
One important conclusion to be derived from the equations is that, colloquially speaking, gravity itself creates gravity.[note 15] Energy has mass. Even in Newtonian gravity, the gravitational field is associated with an energy, $E=mgh,$ called the gravitational potential energy. In general relativity, the energy of the gravitational field feeds back into creation of the gravitational field. This makes the equations nonlinear and hard to solve in anything other than weak field cases.[3]: 240 Numerical relativity is a branch of general relativity using numerical methods to solve and analyze problems, often employing supercomputers to study black holes, gravitational waves, neutron stars and other phenomena in the strong field regime.
Energy-momentum
Figure 5-6. (left) Mass-energy warps spacetime. (right) Rotating mass–energy distributions with angular momentum J generate gravitomagnetic fields H.
In special relativity, mass-energy is closely connected to momentum. Just as space and time are different aspects of a more comprehensive entity called spacetime, mass–energy and momentum are merely different aspects of a unified, four-dimensional quantity called four-momentum. In consequence, if mass–energy is a source of gravity, momentum must also be a source. The inclusion of momentum as a source of gravity leads to the prediction that moving or rotating masses can generate fields analogous to the magnetic fields generated by moving charges, a phenomenon known as gravitomagnetism.[59]
It is well known that the force of magnetism can be deduced by applying the rules of special relativity to moving charges. (An eloquent demonstration of this was presented by Feynman in volume II, chapter 13–6 of his Lectures on Physics, available online.[60]) Analogous logic can be used to demonstrate the origin of gravitomagnetism. In Fig. 5-7a, two parallel, infinitely long streams of massive particles have equal and opposite velocities −v and +v relative to a test particle at rest and centered between the two. Because of the symmetry of the setup, the net force on the central particle is zero. Assume $v\ll c$ so that velocities are simply additive. Fig. 5-7b shows exactly the same setup, but in the frame of the upper stream. The test particle has a velocity of +v, and the bottom stream has a velocity of +2v. Since the physical situation has not changed, only the frame in which things are observed, the test particle should not be attracted towards either stream. But it is not at all clear that the forces exerted on the test particle are equal. (1) Since the bottom stream is moving faster than the top, each particle in the bottom stream has a larger mass energy than a particle in the top. (2) Because of Lorentz contraction, there are more particles per unit length in the bottom stream than in the top stream. (3) Another contribution to the active gravitational mass of the bottom stream comes from an additional pressure term which, at this point, we do not have sufficient background to discuss. All of these effects together would seemingly demand that the test particle be drawn towards the bottom stream.
The test particle is not drawn to the bottom stream because of a velocity-dependent force that serves to repel a particle that is moving in the same direction as the bottom stream. This velocity-dependent gravitational effect is gravitomagnetism.[3]: 245–253
Matter in motion through a gravitomagnetic field is hence subject to so-called frame-dragging effects analogous to electromagnetic induction. It has been proposed that such gravitomagnetic forces underlie the generation of the relativistic jets (Fig. 5-8) ejected by some rotating supermassive black holes.[61][62]
Pressure and stress
Quantities that are directly related to energy and momentum should be sources of gravity as well, namely internal pressure and stress. Taken together, mass-energy, momentum, pressure and stress all serve as sources of gravity: Collectively, they are what tells spacetime how to curve.
General relativity predicts that pressure acts as a gravitational source with exactly the same strength as mass–energy density. The inclusion of pressure as a source of gravity leads to dramatic differences between the predictions of general relativity versus those of Newtonian gravitation. For example, the pressure term sets a maximum limit to the mass of a neutron star. The more massive a neutron star, the more pressure is required to support its weight against gravity. The increased pressure, however, adds to the gravity acting on the star's mass. Above a certain mass determined by the Tolman–Oppenheimer–Volkoff limit, the process becomes runaway and the neutron star collapses to a black hole.[3]: 243, 280
The stress terms become highly significant when performing calculations such as hydrodynamic simulations of core-collapse supernovae.[63]
These predictions for the roles of pressure, momentum and stress as sources of spacetime curvature are elegant and play an important role in theory. In regards to pressure, the early universe was radiation dominated,[64] and it is highly unlikely that any of the relevant cosmological data (e.g. nucleosynthesis abundances, etc.) could be reproduced if pressure did not contribute to gravity, or if it did not have the same strength as a source of gravity as mass–energy. Likewise, the mathematical consistency of the Einstein field equations would be broken if the stress terms did not contribute as a source of gravity.
Definitions: Active, passive, and inertial mass
Bondi distinguishes between different possible types of mass: (1) active mass ($m_{a}$) is the mass which acts as the source of a gravitational field; (2) passive mass ($m_{p}$) is the mass which reacts to a gravitational field; (3) inertial mass ($m_{i}$) is the mass which reacts to acceleration.[65]
• $m_{p}$ is the same as gravitational mass ($m_{g}$) in the discussion of the equivalence principle.
In Newtonian theory,
• The third law of action and reaction dictates that $m_{a}$ and $m_{p}$ must be the same.
• On the other hand, whether $m_{p}$ and $m_{i}$ are equal is an empirical result.
In general relativity,
• The equality of $m_{p}$ and $m_{i}$ is dictated by the equivalence principle.
• There is no "action and reaction" principle dictating any necessary relationship between $m_{a}$ and $m_{p}$.[65]
Pressure as a gravitational source
The classic experiment to measure the strength of a gravitational source (i.e. its active mass) was first conducted in 1797 by Henry Cavendish (Fig. 5-9a). Two small but dense balls are suspended on a fine wire, making a torsion balance. Bringing two large test masses close to the balls introduces a detectable torque. Given the dimensions of the apparatus and the measurable spring constant of the torsion wire, the gravitational constant G can be determined.
To study pressure effects by compressing the test masses is hopeless, because attainable laboratory pressures are insignificant in comparison with the mass-energy of a metal ball.
However, the repulsive electromagnetic pressures resulting from protons being tightly squeezed inside atomic nuclei are typically on the order of 10²⁸ atm ≈ 10³³ Pa ≈ 10³³ kg·s⁻²·m⁻¹. This amounts to about 1% of the nuclear mass density of approximately 10¹⁸ kg/m³ (after factoring in c² ≈ 9×10¹⁶ m²·s⁻²).[66]
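The "about 1%" figure follows directly from the quantities quoted: the gravitationally relevant comparison of a pressure P with a mass density ρ is the dimensionless ratio P/(ρc²):

```python
# The gravitationally relevant comparison of a pressure P with a mass
# density rho is the dimensionless ratio P / (rho c^2).
c = 2.998e8      # m/s
P = 1.0e33       # Pa, order of the electrostatic pressure inside nuclei
rho = 1.0e18     # kg/m^3, order of the nuclear mass density

ratio = P / (rho * c**2)
print(f"P / (rho c^2) = {ratio:.1%}")   # ~1%
```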
Figure 5-10. Lunar laser ranging experiment. (left) This retroreflector was left on the Moon by astronauts on the Apollo 11 mission. (right) Astronomers all over the world have bounced laser light off the retroreflectors left by Apollo astronauts and Russian lunar rovers to measure precisely the Earth-Moon distance.
If pressure does not act as a gravitational source, then the ratio $m_{a}/m_{p}$ should be lower for nuclei with higher atomic number Z, in which the electrostatic pressures are higher. L. B. Kreuzer (1968) did a Cavendish experiment using a Teflon mass suspended in a mixture of the liquids trichloroethylene and dibromoethane having the same buoyant density as the Teflon (Fig. 5-9b). Fluorine has atomic number Z = 9, while bromine has Z = 35. Kreuzer found that repositioning the Teflon mass caused no differential deflection of the torsion bar, hence establishing active mass and passive mass to be equivalent to a precision of 5×10⁻⁵.[67]
Although Kreuzer originally considered this experiment merely to be a test of the ratio of active mass to passive mass, Clifford Will (1976) reinterpreted the experiment as a fundamental test of the coupling of sources to gravitational fields.[68]
In 1986, Bartlett and Van Buren noted that lunar laser ranging had detected a 2 km offset between the Moon's center of figure and its center of mass. This indicates an asymmetry in the distribution of Fe (abundant in the Moon's core) and Al (abundant in its crust and mantle). If pressure did not contribute equally to spacetime curvature as does mass–energy, the Moon would not be in the orbit predicted by classical mechanics. They used their measurements to tighten the limits on any discrepancies between active and passive mass to about 10⁻¹².[69]
Gravitomagnetism
The existence of gravitomagnetism was proven by Gravity Probe B (GP-B), a satellite-based mission which launched on 20 April 2004.[70] The spaceflight phase lasted until 2005. The mission aim was to measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism.
Initial results confirmed the relatively large geodetic effect (which is due to simple spacetime curvature, and is also known as de Sitter precession) to an accuracy of about 1%. The much smaller frame-dragging effect (which is due to gravitomagnetism, and is also known as Lense–Thirring precession) was difficult to measure because of unexpected charge effects causing variable drift in the gyroscopes. Nevertheless, by August 2008, the frame-dragging effect had been confirmed to within 15% of the expected result,[71] while the geodetic effect was confirmed to better than 0.5%.[72][73]
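Both GP-B numbers can be estimated from the standard orbit-averaged weak-field formulas. The sketch below assumes a circular polar orbit at GP-B's roughly 642 km altitude and a textbook value for Earth's spin angular momentum; it is an order-of-magnitude estimate, not the mission's precise prediction:

```python
import math

# Orbit-averaged weak-field precession rates for a gyroscope in a circular
# polar orbit, roughly Gravity Probe B's configuration (~642 km altitude):
#   geodetic (de Sitter):            (3/2) * G * M * v / (c^2 * r^2)
#   frame dragging (Lense-Thirring): G * J / (2 * c^2 * r^3)
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
M = 5.972e24             # kg, Earth's mass
J = 5.86e33              # kg m^2/s, Earth's spin angular momentum (textbook value)
r = 6.371e6 + 6.42e5     # m, orbital radius (Earth radius + altitude)

v = math.sqrt(G * M / r)                       # circular orbital speed
geodetic = 1.5 * G * M * v / (c**2 * r**2)     # rad/s
frame_drag = G * J / (2 * c**2 * r**3)         # rad/s

to_mas_per_yr = math.degrees(1) * 3600 * 1000 * 3.156e7   # rad/s -> mas/yr
print(f"geodetic:       {geodetic * to_mas_per_yr:.0f} mas/yr")    # ~6600
print(f"frame dragging: {frame_drag * to_mas_per_yr:.0f} mas/yr")  # ~40
```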
Subsequent measurements of frame dragging by laser-ranging observations of the LARES, LAGEOS-1 and LAGEOS-2 satellites have improved on the GP-B measurement, with results (as of 2016) demonstrating the effect to within 5% of its theoretical value,[74] although there has been some disagreement on the accuracy of this result.[75]
Another effort, the Gyroscopes in General Relativity (GINGER) experiment, seeks to use three 6 m ring lasers mounted at right angles to each other 1400 m below the Earth's surface to measure this effect.[76][77]
Technical topics
Is spacetime really curved?
In Poincaré's conventionalist views, the essential criteria according to which one should select a Euclidean versus non-Euclidean geometry would be economy and simplicity. A realist would say that Einstein discovered spacetime to be non-Euclidean. A conventionalist would say that Einstein merely found it more convenient to use non-Euclidean geometry. The conventionalist would maintain that Einstein's analysis said nothing about what the geometry of spacetime really is.[78]
Such being said,
1. Is it possible to represent general relativity in terms of flat spacetime?
2. Are there any situations where a flat spacetime interpretation of general relativity may be more convenient than the usual curved spacetime interpretation?
In response to the first question, a number of authors including Deser, Grishchuk, Rosen, Weinberg, etc. have provided various formulations of gravitation as a field in a flat manifold. Those theories are variously called "bimetric gravity", the "field-theoretical approach to general relativity", and so forth.[79][80][81][82] Kip Thorne has provided a popular review of these theories.[83]: 397–403
The flat spacetime paradigm posits that matter creates a gravitational field that causes rulers to shrink when they are turned from circumferential orientation to radial, and that causes the ticking rates of clocks to dilate. The flat spacetime paradigm is fully equivalent to the curved spacetime paradigm in that they both represent the same physical phenomena. However, their mathematical formulations are entirely different. Working physicists routinely switch between using curved and flat spacetime techniques depending on the requirements of the problem. The flat spacetime paradigm turns out to be especially convenient when performing approximate calculations in weak fields. Hence, flat spacetime techniques will be used when solving gravitational wave problems, while curved spacetime techniques will be used in the analysis of black holes.[83]: 397–403
Asymptotic symmetries
The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in general relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group.
In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner[84] and Rainer K. Sachs[85] addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at lightlike infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group — not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group, and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity.

The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that general relativity does not reduce to special relativity in the case of weak fields at long distances.[86]: 35
Riemannian geometry
This section is an excerpt from Riemannian geometry.
Elliptic geometry is also sometimes called "Riemannian geometry".
Riemannian geometry is the branch of differential geometry that studies Riemannian manifolds, defined as smooth manifolds with a Riemannian metric (an inner product on the tangent space at each point that varies smoothly from point to point). This gives, in particular, local notions of angle, length of curves, surface area and volume. From those, some other global quantities can be derived by integrating local contributions.
Riemannian geometry originated with the vision of Bernhard Riemann expressed in his inaugural lecture "Ueber die Hypothesen, welche der Geometrie zu Grunde liegen" ("On the Hypotheses on which Geometry is Based").[87] It is a very broad and abstract generalization of the differential geometry of surfaces in R3. Development of Riemannian geometry resulted in synthesis of diverse results concerning the geometry of surfaces and the behavior of geodesics on them, with techniques that can be applied to the study of differentiable manifolds of higher dimensions. It enabled the formulation of Einstein's general theory of relativity, made profound impact on group theory and representation theory, as well as analysis, and spurred the development of algebraic and differential topology.
Curved manifolds
Main articles: Manifold, Lorentzian manifold, and Differentiable manifold
For physical reasons, a spacetime continuum is mathematically defined as a four-dimensional, smooth, connected Lorentzian manifold $(M,g)$. This means the smooth Lorentz metric $g$ has signature $(3,1)$. The metric determines the geometry of spacetime, as well as determining the geodesics of particles and light beams. About each point (event) on this manifold, coordinate charts are used to represent observers in reference frames. Usually, Cartesian coordinates $(x,y,z,t)$ are used. Moreover, for simplicity's sake, units of measurement are usually chosen such that the speed of light $c$ is equal to 1.[88]
A reference frame (observer) can be identified with one of these coordinate charts; any such observer can describe any event $p$. Another reference frame may be identified by a second coordinate chart about $p$. Two observers (one in each reference frame) may describe the same event $p$ but obtain different descriptions.[88]
Usually, many overlapping coordinate charts are needed to cover a manifold. Given two coordinate charts, one containing $p$ (representing an observer) and another containing $q$ (representing another observer), the intersection of the charts represents the region of spacetime in which both observers can measure physical quantities and hence compare results. The relation between the two sets of measurements is given by a non-singular coordinate transformation on this intersection. The idea of coordinate charts as local observers who can perform measurements in their vicinity also makes good physical sense, as this is how one actually collects physical data—locally.[88]
For example, two observers, one of whom is on Earth, but the other one who is on a fast rocket to Jupiter, may observe a comet crashing into Jupiter (this is the event $p$). In general, they will disagree about the exact location and timing of this impact, i.e., they will have different 4-tuples $(x,y,z,t)$ (as they are using different coordinate systems). Although their kinematic descriptions will differ, dynamical (physical) laws, such as momentum conservation and the first law of thermodynamics, will still hold. In fact, relativity theory requires more than this in the sense that it stipulates these (and all other physical) laws must take the same form in all coordinate systems. This introduces tensors into relativity, by which all physical quantities are represented.
Geodesics are said to be timelike, null, or spacelike if the tangent vector to one point of the geodesic is of this nature. Paths of particles and light beams in spacetime are represented by timelike and null (lightlike) geodesics, respectively.[88]
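This classification can be made concrete in flat spacetime. With c = 1 and the sign convention of the intervals used earlier in this article (Δs² = (cΔt)² − Δx² − Δy² − Δz²), a tangent vector is timelike, null, or spacelike according to the sign of its squared interval. A small numpy sketch:

```python
import numpy as np

# Minkowski metric in the sign convention used for intervals in this article:
# Delta s^2 = (c Delta t)^2 - (Delta x)^2 - (Delta y)^2 - (Delta z)^2, c = 1.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def classify(v):
    """Classify a tangent vector by the sign of its squared interval."""
    s2 = float(v @ eta @ v)
    if s2 > 0:
        return "timelike"
    if s2 < 0:
        return "spacelike"
    return "null"

print(classify(np.array([1.0, 0.2, 0.0, 0.0])))   # massive particle: timelike
print(classify(np.array([1.0, 1.0, 0.0, 0.0])))   # light ray: null
print(classify(np.array([0.3, 1.0, 0.0, 0.0])))   # spacelike
```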
Privileged character of 3+1 spacetime
This section is an excerpt from Anthropic principle § Dimensions of spacetime.
There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional).[90] Let the number of spatial dimensions be N and the number of temporal dimensions be T. That N = 3 and T = 1, setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T differ from 1. The argument is often of an anthropic character and possibly the first of its kind, albeit before the complete concept came into vogue.
The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena".[91] Immanuel Kant argued that 3-dimensional space was a consequence of the inverse square law of universal gravitation. While Kant's argument is historically important, John D. Barrow said that it "gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002:204).[note 16]
In 1920, Paul Ehrenfest showed that if there is only a single time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy.[92] Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are $5+2k$ spatial dimensions, where k is a positive whole number, then wave impulses become distorted. In 1922, Hermann Weyl claimed that Maxwell's theory of electromagnetism can be expressed in terms of an action only for a four-dimensional manifold.[93] Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse.[94]
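Ehrenfest's orbital-stability result can be verified with a short calculation. For an attractive central force falling off as 1/r^(N−1) (the analogue of the inverse-square law with N spatial dimensions), circular orbits sit at extrema of an effective potential, and they are stable only when that extremum is a minimum, which happens only for N < 4. A sketch with unit constants (k, L, and unit mass are normalization assumptions):

```python
# Stability of circular orbits under an attractive central force F ~ 1/r^(N-1),
# the analogue of gravity with N spatial dimensions. Effective potential for a
# unit-mass test body with angular momentum L and force constant k:
#   V_eff(r) = -k / ((N - 2) * r**(N - 2)) + L**2 / (2 * r**2)
def v_eff(r, N, k=1.0, L=1.0):
    return -k / ((N - 2) * r**(N - 2)) + L**2 / (2 * r**2)

def curvature_at_circular_orbit(N, k=1.0, L=1.0, eps=1e-5):
    # Circular-orbit radius from V_eff'(r0) = 0: r0^(N-4) = L^2 / k.
    # (N = 4 is the marginal, degenerate case and is excluded here.)
    r0 = (L**2 / k) ** (1.0 / (N - 4))
    # Numerical second derivative of V_eff at r0; positive means stable.
    return (v_eff(r0 + eps, N) - 2 * v_eff(r0, N) + v_eff(r0 - eps, N)) / eps**2

for N in (3, 5, 6):
    verdict = "stable" if curvature_at_circular_orbit(N) > 0 else "unstable"
    print(f"N = {N}: circular orbit is {verdict}")
```

The analytic result behind the numerics is that the second derivative at the circular radius is proportional to (4 − N), so N = 3 is stable and every N > 4 is unstable.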
Max Tegmark expands on the preceding argument in the following anthropic manner.[95] If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if T > 1, Tegmark maintains that protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.)[95] Lastly, if N < 3, gravitation of any kind becomes problematic, and the universe would probably be too simple to contain observers. For example, when N < 3, nerves cannot cross without intersecting.[95] Hence anthropic and other arguments rule out all cases except N = 3 and T = 1, which describes the world around us.
On the other hand, by considering the formation of black holes from an ideal monatomic gas under its self-gravity, Wei-Xiang Feng showed that (3+1)-dimensional spacetime is the marginal dimensionality. Moreover, it is the unique dimensionality that can afford a "stable" gas sphere with a "positive" cosmological constant. However, a self-gravitating gas cannot be stably bound if the mass of the sphere exceeds ~10²¹ solar masses, due to the small positive value of the cosmological constant observed.[96]
In 2019, James Scargill argued that complex life may be possible with two spatial dimensions. According to Scargill, a purely scalar theory of gravity may enable a local gravitational force, and 2D networks may be sufficient for complex neural networks.[97][98]
See also
• Basic introduction to the mathematics of curved spacetime
• Complex spacetime
• Einstein's thought experiments
• Four-dimensionalism
• Geography
• Global spacetime structure
• List of spacetimes
• Metric space
• Philosophy of space and time
• Present
• Time geography
Notes
1. luminiferous from the Latin lumen, light, + ferens, carrying; aether from the Greek αἰθήρ (aithēr), pure air, clear sky
2. By stating that simultaneity is a matter of convention, Poincaré meant that to talk about time at all, one must have synchronized clocks, and the synchronization of clocks must be established by a specified, operational procedure (convention). This stance represented a fundamental philosophical break from Newton, who conceived of an absolute, true time that was independent of the workings of the inaccurate clocks of his day. This stance also represented a direct attack against the influential philosopher Henri Bergson, who argued that time, simultaneity, and duration were matters of intuitive understanding.[18]
3. The operational procedure adopted by Poincaré was essentially identical to what is known as Einstein synchronization, even though a variant of it was already a widely used procedure by telegraphers in the middle 19th century. Basically, to synchronize two clocks, one flashes a light signal from one to the other, and adjusts for the time that the flash takes to arrive.[18]
4. A hallmark of Einstein's career, in fact, was his use of visualized thought experiments (Gedanken–Experimente) as a fundamental tool for understanding physical issues. For special relativity, he employed moving trains and flashes of lightning for his most penetrating insights. For curved spacetime, he considered a painter falling off a roof, accelerating elevators, blind beetles crawling on curved surfaces and the like. In his great Solvay Debates with Bohr on the nature of reality (1927 and 1930), he devised multiple imaginary contraptions intended to show, at least in concept, means whereby the Heisenberg uncertainty principle might be evaded. Finally, in a profound contribution to the literature on quantum mechanics, Einstein considered two particles briefly interacting and then flying apart so that their states are correlated, anticipating the phenomenon known as quantum entanglement.[23]
5. In the original version of this lecture, Minkowski continued to use such obsolescent terms as the ether, but the posthumous publication in 1915 of this lecture in the Annals of Physics (Annalen der Physik) was edited by Sommerfeld to remove this term. Sommerfeld also edited the published form of this lecture to revise Minkowski's judgement of Einstein from being a mere clarifier of the principle of relativity, to being its chief expositor.[24]
6. (In the following, the group G∞ is the Galilean group and the group Gc the Lorentz group.) "With respect to this it is clear that the group Gc in the limit for c = ∞, i.e. as group G∞, exactly becomes the full group belonging to Newtonian Mechanics. In this state of affairs, and since Gc is mathematically more intelligible than G∞, a mathematician may, by a free play of imagination, hit upon the thought that natural phenomena actually possess an invariance, not for the group G∞, but rather for a group Gc, where c is definitely finite, and only exceedingly large using the ordinary measuring units."[26]
7. For instance, the Lorentz group is a subgroup of the conformal group in four dimensions.[27]: 41–42 The Lorentz group is isomorphic to the Laguerre group transforming planes into planes,[27]: 39–42 it is isomorphic to the Möbius group of the plane,[28]: 22 and is isomorphic to the group of isometries in hyperbolic space which is often expressed in terms of the hyperboloid model.[29]: 3.2.3
8. In a Cartesian plane, ordinary rotation leaves a circle unchanged. In spacetime, hyperbolic rotation preserves the hyperbolic metric.
9. Even with no acceleration or deceleration, i.e. using one inertial frame O for the constant, high-velocity outward journey and another inertial frame I for the constant, high-velocity inward journey, the sum of the elapsed times in those frames (O and I) is shorter than the elapsed time in the stationary inertial frame S. Thus acceleration and deceleration are not the cause of the shorter elapsed time during the outward and inward journeys. Rather, the use of two different constant, high-velocity inertial frames for the outward and inward journeys is the real cause of the shorter total elapsed time. Granted, if the same twin travels the outward and inward legs of the journey and safely switches from one leg to the other, acceleration and deceleration are required; but if the travelling twin could ride the high-velocity outward inertial frame and instantaneously switch to the high-velocity inward inertial frame, the example would still work. The point is that the real reason should be stated clearly: the asymmetry arises because the sum of elapsed times in two different inertial frames (O and I) is compared with the elapsed time in a single inertial frame S.
10. The ease of analyzing a relativistic scenario often depends on the frame in which one chooses to perform the analysis. In this linked image, we present alternative views of the transverse Doppler shift scenario where source and receiver are at their closest approach to each other. (a) If we analyze the scenario in the frame of the receiver, we find that the analysis is more complicated than it should be. The apparent position of a celestial object is displaced from its true position (or geometric position) because of the object's motion during the time it takes its light to reach an observer. The source would be time-dilated relative to the receiver, but the redshift implied by this time dilation would be offset by a blueshift due to the longitudinal component of the relative motion between the receiver and the apparent position of the source. (b) It is much easier if, instead, we analyze the scenario from the frame of the source. An observer situated at the source knows, from the problem statement, that the receiver is at its closest point to him. That means that the receiver has no longitudinal component of motion to complicate the analysis. Since the receiver's clocks are time-dilated relative to the source, the light that the receiver receives is therefore blue-shifted by a factor of gamma.
11. Not all experiments characterize the effect in terms of a redshift. For example, the Kündig experiment was set up to measure transverse blueshift using a Mössbauer source mounted at the center of a centrifuge rotor and an absorber at the rim.
12. Rapidity arises naturally as a coordinate on the pure boost generators inside the Lie algebra of the Lorentz group. Likewise, rotation angles arise naturally as coordinates (modulo 2π) on the pure rotation generators in the Lie algebra. (Together they coordinatize the whole Lie algebra.) A notable difference is that the resulting rotations are periodic in the rotation angle, while the resulting boosts are not periodic in rapidity (but rather one-to-one). The similarity between boosts and rotations is a formal resemblance.
13. In relativity theory, proper acceleration is the physical acceleration (i.e., measurable acceleration as by an accelerometer) experienced by an object. It is thus acceleration relative to a free-fall, or inertial, observer who is momentarily at rest relative to the object being measured.
14. Newton himself was acutely aware of the inherent difficulties with these assumptions, but as a practical matter, making these assumptions was the only way that he could make progress. In 1692, he wrote to his friend Richard Bentley: "That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro' a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it."
15. More precisely, the gravitational field couples to itself. In Newtonian gravity, the potential due to two point masses is simply the sum of the potentials of the two masses, but this does not apply to GR. This can be thought of as the result of the equivalence principle: If gravitation did not couple to itself, two particles bound by their mutual gravitational attraction would not have the same inertial mass (due to negative binding energy) as their gravitational mass.[54]: 112–113
16. This is because the law of gravitation (or any other inverse-square law) follows from the concept of flux and the proportional relationship of flux density and field strength. If N = 3, then 3-dimensional solid objects have surface areas proportional to the square of their size in any selected spatial dimension. In particular, a sphere of radius r has a surface area of 4πr². More generally, in a space of N dimensions, the strength of the gravitational attraction between two bodies separated by a distance of r would be inversely proportional to r^(N−1).
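The flux argument in the last note can be checked directly: spreading a fixed total flux over the surface bounding a ball in N dimensions yields a field strength falling off as r^−(N−1). A minimal sketch (the function names and unit flux are illustrative choices):

```python
import math

def sphere_area(N, r):
    """Surface area of the (N-1)-sphere bounding a ball of radius r in
    N spatial dimensions: 2 * pi^(N/2) / Gamma(N/2) * r^(N-1)."""
    return 2 * math.pi ** (N / 2) / math.gamma(N / 2) * r ** (N - 1)

def field_strength(N, r, flux=1.0):
    """Spread a fixed total flux over the bounding sphere; the field
    strength then falls off as r^-(N-1)."""
    return flux / sphere_area(N, r)

# N = 3 reproduces the familiar 4*pi*r^2 sphere and the inverse-square law:
ratio = field_strength(3, 2.0) / field_strength(3, 1.0)
print(sphere_area(3, 1.0))   # 4*pi, about 12.566
print(ratio)                 # about 0.25, i.e. 1/2^2

# N = 4 gives an inverse-cube law instead:
print(field_strength(4, 2.0) / field_strength(4, 1.0))   # about 0.125
```

Doubling the separation thus quarters the field only in three spatial dimensions; in four it drops by a factor of eight, which is the sense in which the observed inverse-square law singles out N = 3.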
Additional details
1. Different reporters viewing the scenarios presented in this figure interpret the scenarios differently depending on their knowledge of the situation. (i) A first reporter, at the center of mass of particles 2 and 3 but unaware of the large mass 1, concludes that a force of repulsion exists between the particles in scenario A while a force of attraction exists between the particles in scenario B. (ii) A second reporter, aware of the large mass 1, smiles at the first reporter's naiveté. This second reporter knows that in reality, the apparent forces between particles 2 and 3 really represent tidal effects resulting from their differential attraction by mass 1. (iii) A third reporter, trained in general relativity, knows that there are, in fact, no forces at all acting between the three objects. Rather, all three objects move along geodesics in spacetime.
References
1. Rynasiewicz, Robert (12 August 2004). "Newton's Views on Space, Time, and Motion". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 11 December 2015. Retrieved 24 March 2017.
2. Davis, Philip J. (2006). Mathematics & Common Sense: A Case of Creative Tension. Wellesley, Massachusetts: A.K. Peters. p. 86. ISBN 978-1-4398-6432-6.
3. Schutz, Bernard (2004). Gravity from the Ground Up: An Introductory Guide to Gravity and General Relativity (Reprint ed.). Cambridge: Cambridge University Press. ISBN 0-521-45506-5. Archived from the original on 17 January 2023. Retrieved 24 May 2017.
4. Lawden, D. F. (1982). Introduction to Tensor Calculus, Relativity and Cosmology (3rd ed.). Mineola, New York: Dover Publications. p. 7. ISBN 978-0-486-42540-5.
5. Collier, Peter (2017). A Most Incomprehensible Thing: Notes Towards a Very Gentle Introduction to the Mathematics of Relativity (3rd ed.). Incomprehensible Books. ISBN 978-0-9573894-6-5.
6. Rowland, Todd. "Manifold". Wolfram Mathworld. Wolfram Research. Archived from the original on 13 March 2017. Retrieved 24 March 2017.
7. French, A.P. (1968). Special Relativity. Boca Raton, Florida: CRC Press. pp. 35–60. ISBN 0-7487-6422-4.
8. Taylor, Edwin F.; Wheeler, John Archibald (1992). Spacetime Physics: Introduction to Special Relativity (2nd ed.). San Francisco, California: Freeman. ISBN 0-7167-0336-X. Retrieved 14 April 2017.
9. Scherr, Rachel E.; Shaffer, Peter S.; Vokos, Stamatis (July 2001). "Student understanding of time in special relativity: Simultaneity and reference frames" (PDF). American Journal of Physics. College Park, Maryland: American Association of Physics Teachers. 69 (S1): S24–S35. arXiv:physics/0207109. Bibcode:2001AmJPh..69S..24S. doi:10.1119/1.1371254. S2CID 8146369. Archived (PDF) from the original on 28 September 2018. Retrieved 11 April 2017.
10. Hughes, Stefan (2013). Catchers of the Light: Catching Space: Origins, Lunar, Solar, Solar System and Deep Space. Paphos, Cyprus: ArtDeCiel Publishing. pp. 202–233. ISBN 978-1-4675-7992-6. Archived from the original on 17 January 2023. Retrieved 7 April 2017.
11. Williams, Matt (28 January 2022). "What is Einstein's Theory of Relativity?". Universe Today. Archived from the original on 3 August 2022. Retrieved 13 August 2022.
12. Stachel, John (2005). "Fresnel's (Dragging) Coefficient as a Challenge to 19th Century Optics of Moving Bodies." (PDF). In Kox, A. J.; Eisenstaedt, Jean (eds.). The Universe of General Relativity. Boston: Birkhäuser. pp. 1–13. ISBN 0-8176-4380-X. Archived from the original (PDF) on 13 April 2017.
13. "George Francis FitzGerald". The Linda Hall Library. Archived from the original on 17 January 2023. Retrieved 13 August 2022.
14. "The Nobel Prize in Physics 1902". NobelPrize.org. Archived from the original on 23 June 2017. Retrieved 13 August 2022.
15. Pais, Abraham (1982). "Subtle is the Lord...": The Science and the Life of Albert Einstein (11th ed.). Oxford: Oxford University Press. ISBN 0-19-853907-X.
16. Darrigol, O. (2005), "The Genesis of the theory of relativity" (PDF), Séminaire Poincaré, 1: 1–22, Bibcode:2006eins.book....1D, doi:10.1007/3-7643-7436-5_1, ISBN 978-3-7643-7435-8, archived (PDF) from the original on 28 February 2008, retrieved 17 July 2017
17. Miller, Arthur I. (1998). Albert Einstein's Special Theory of Relativity. New York: Springer-Verlag. ISBN 0-387-94870-8.
18. Galison, Peter (2003). Einstein's Clocks, Poincaré's Maps: Empires of Time. New York: W. W. Norton & Company, Inc. pp. 13–47. ISBN 0-393-02001-0.
19. Poincare, Henri (1906). "On the Dynamics of the Electron (Sur la dynamique de l'électron)". Rendiconti del Circolo Matematico di Palermo. 21: 129–176. Bibcode:1906RCMP...21..129P. doi:10.1007/bf03013466. hdl:2027/uiug.30112063899089. S2CID 120211823. Archived from the original on 11 July 2017. Retrieved 15 July 2017.
20. Zahar, Elie (1989) [1983], "Poincaré's Independent Discovery of the relativity principle", Einstein's Revolution: A Study in Heuristic, Chicago: Open Court Publishing Company, ISBN 0-8126-9067-2
21. Walter, Scott A. (2007). "Breaking in the 4-vectors: the four-dimensional movement in gravitation, 1905–1910". In Renn, Jürgen; Schemmel, Matthias (eds.). The Genesis of General Relativity, Volume 3. Berlin: Springer. pp. 193–252. Archived from the original on 15 July 2017. Retrieved 15 July 2017.
22. Einstein, Albert (1905). "On the Electrodynamics of Moving Bodies ( Zur Elektrodynamik bewegter Körper)". Annalen der Physik. 322 (10): 891–921. Bibcode:1905AnP...322..891E. doi:10.1002/andp.19053221004. Archived from the original on 6 November 2018. Retrieved 7 April 2018.
23. Isaacson, Walter (2007). Einstein: His Life and Universe. Simon & Schuster. pp. 26–27, 122–127, 145–146, 345–349, 448–460. ISBN 978-0-7432-6473-0.
24. Weinstein, Galina (2012). "Max Born, Albert Einstein and Hermann Minkowski's Space–Time Formalism of Special Relativity". arXiv:1210.6929 [physics.hist-ph].
25. Galison, Peter Louis (1979). "Minkowski's space–time: From visual thinking to the absolute world". Historical Studies in the Physical Sciences. 10: 85–121. doi:10.2307/27757388. JSTOR 27757388.
26. Minkowski, Hermann (1909). "Raum und Zeit" [Space and Time]. Jahresbericht der Deutschen Mathematiker-Vereinigung. B.G. Teubner: 1–14. Archived from the original on 28 July 2017. Retrieved 17 July 2017.
27. Cartan, É.; Fano, G. (1955) [1915]. "La théorie des groupes continus et la géométrie". Encyclopédie des Sciences Mathématiques Pures et Appliquées. 3 (1): 39–43. Archived from the original on 23 March 2018. Retrieved 6 April 2018. (Only pages 1–21 were published in 1915, the entire article including pp. 39–43 concerning the groups of Laguerre and Lorentz was posthumously published in 1955 in Cartan's collected papers, and was reprinted in the Encyclopédie in 1991.)
28. Kastrup, H. A. (2008). "On the advancements of conformal transformations and their associated symmetries in geometry and theoretical physics". Annalen der Physik. 520 (9–10): 631–690. arXiv:0808.2730. Bibcode:2008AnP...520..631K. doi:10.1002/andp.200810324. S2CID 12020510.
29. Ratcliffe, J. G. (1994). "Hyperbolic geometry". Foundations of Hyperbolic Manifolds. New York. pp. 56–104. ISBN 0-387-94348-X.{{cite book}}: CS1 maint: location missing publisher (link)
30. Curtis, W. D.; Miller, F. R. (1985). Differential Manifolds and Theoretical Physics. Academic Press. p. 223. ISBN 978-0-08-087435-7. Archived from the original on 17 January 2023. Retrieved 16 January 2018.
31. Landau, L. D.; Lifshitz, E. M. (2013). The Classical Theory of Fields (Vol. 2).
32. Curiel, Erik; Bokulich, Peter. "Lightcones and Causal Structure". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 17 May 2019. Retrieved 26 March 2017.
33. Savitt, Steven. "Being and Becoming in Modern Physics. 3. The Special Theory of Relativity". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 11 March 2017. Retrieved 26 March 2017.
34. Schutz, Bernard F. (1985). A first course in general relativity. Cambridge, UK: Cambridge University Press. p. 26. ISBN 0-521-27703-5.
35. Weiss, Michael. "The Twin Paradox". The Physics and Relativity FAQ. Archived from the original on 27 April 2017. Retrieved 10 April 2017.
36. Mould, Richard A. (1994). Basic Relativity (1st ed.). Springer. p. 42. ISBN 978-0-387-95210-9. Retrieved 22 April 2017.
37. Lerner, Lawrence S. (1997). Physics for Scientists and Engineers, Volume 2 (1st ed.). Jones & Bartlett Pub. p. 1047. ISBN 978-0-7637-0460-5. Retrieved 22 April 2017.
38. Bais, Sander (2007). Very Special Relativity: An Illustrated Guide. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-02611-7.
39. Forshaw, Jeffrey; Smith, Gavin (2014). Dynamics and Relativity. John Wiley & Sons. p. 118. ISBN 978-1-118-93329-9. Retrieved 24 April 2017.
40. Morin, David (2017). Special Relativity for the Enthusiastic Beginner. CreateSpace Independent Publishing Platform. ISBN 978-1-5423-2351-2.
41. Landau, L. D.; Lifshitz, E. M. (2006). The Classical Theory of Fields, Course of Theoretical Physics, Volume 2 (4th ed.). Amsterdam: Elsevier. pp. 1–24. ISBN 978-0-7506-2768-9.
42. Morin, David (2008). Introduction to Classical Mechanics: With Problems and Solutions. Cambridge University Press. ISBN 978-0-521-87622-3.
43. Rose, H. H. (21 April 2008). "Optics of high-performance electron microscopes". Science and Technology of Advanced Materials. 9 (1): 014107. Bibcode:2008STAdM...9a4107R. doi:10.1088/0031-8949/9/1/014107. PMC 5099802. PMID 27877933.
44. Griffiths, David J. (2013). Revolutions in Twentieth-Century Physics. Cambridge: Cambridge University Press. p. 60. ISBN 978-1-107-60217-5. Retrieved 24 May 2017.
45. Byers, Nina (1998). "E. Noether's Discovery of the Deep Connection Between Symmetries and Conservation Laws". arXiv:physics/9807044.
46. Nave, R. "Energetics of Charged Pion Decay". Hyperphysics. Department of Physics and Astronomy, Georgia State University. Archived from the original on 21 May 2017. Retrieved 27 May 2017.
47. Thomas, George B.; Weir, Maurice D.; Hass, Joel; Giordano, Frank R. (2008). Thomas' Calculus: Early Transcendentals (Eleventh ed.). Boston: Pearson Education, Inc. p. 533. ISBN 978-0-321-49575-4.
48. Taylor, Edwin F.; Wheeler, John Archibald (1992). Spacetime Physics (2nd ed.). W. H. Freeman. ISBN 0-7167-2327-1.
49. Gibbs, Philip. "Can Special Relativity Handle Acceleration?". The Physics and Relativity FAQ. math.ucr.edu. Archived from the original on 7 June 2017. Retrieved 28 May 2017.
50. Franklin, Jerrold (2010). "Lorentz contraction, Bell's spaceships, and rigid body motion in special relativity". European Journal of Physics. 31 (2): 291–298. arXiv:0906.1919. Bibcode:2010EJPh...31..291F. doi:10.1088/0143-0807/31/2/006. S2CID 18059490.
51. Lorentz, H. A.; Einstein, A.; Minkowski, H.; Weyl, H. (1952). The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity. Dover Publications. ISBN 0-486-60081-5.
52. Mook, Delo E.; Vargish, Thomas (1987). Inside Relativity. Princeton, New Jersey: Princeton University Press. ISBN 0-691-08472-6.
53. Mester, John. "Experimental Tests of General Relativity" (PDF). Laboratoire Univers et Théories. Archived from the original (PDF) on 18 March 2017. Retrieved 9 June 2017.
54. Carroll, Sean M. (2 December 1997). "Lecture Notes on General Relativity". arXiv:gr-qc/9712019.
55. Le Verrier, Urbain (1859). "Lettre de M. Le Verrier à M. Faye sur la théorie de Mercure et sur le mouvement du périhélie de cette planète". Comptes rendus hebdomadaires des séances de l'Académie des Sciences. 49: 379–383.
56. Worrall, Simon (4 November 2015). "The Hunt for Vulcan, the Planet That Wasn't There". National Geographic. Archived from the original on 24 May 2017.
57. Levine, Alaina G. (May 2016). "May 29, 1919: Eddington Observes Solar Eclipse to Test General Relativity". This Month in Physics History. APS News. American Physical Society. Archived from the original on 2 June 2017.
58. Hobson, M. P.; Efstathiou, G.; Lasenby, A. N. (2006). General Relativity. Cambridge: Cambridge University Press. pp. 176–179. ISBN 978-0-521-82951-9.
59. Thorne, Kip S. (1988). Fairbank, J. D.; Deaver, B. S. Jr.; Everitt, W. F.; Michelson, P. F. (eds.). Near zero: New Frontiers of Physics (PDF). W. H. Freeman and Company. pp. 573–586. S2CID 12925169. Archived from the original (PDF) on 28 July 2017.
60. Feynman, R. P.; Leighton, R. B.; Sands, M. (1964). The Feynman Lectures on Physics, vol. 2 (New Millennium ed.). Basic Books. pp. 13–6 to 13–11. ISBN 978-0-465-02416-2. Archived from the original on 17 January 2023. Retrieved 1 July 2017.
61. Williams, R. K. (1995). "Extracting X rays, γ rays, and relativistic e−–e+ pairs from supermassive Kerr black holes using the Penrose mechanism". Physical Review D. 51 (10): 5387–5427. Bibcode:1995PhRvD..51.5387W. doi:10.1103/PhysRevD.51.5387. PMID 10018300.
62. Williams, R. K. (2004). "Collimated escaping vortical polar e−–e+ jets intrinsically produced by rotating black holes and Penrose processes". The Astrophysical Journal. 611 (2): 952–963. arXiv:astro-ph/0404135. Bibcode:2004ApJ...611..952W. doi:10.1086/422304. S2CID 1350543.
63. Kuroda, Takami; Kotake, Kei; Takiwaki, Tomoya (2012). "Fully General Relativistic Simulations of Core-Collapse Supernovae with An Approximate Neutrino Transport". The Astrophysical Journal. 755 (1): 11. arXiv:1202.2487. Bibcode:2012ApJ...755...11K. doi:10.1088/0004-637X/755/1/11. S2CID 119179339.
64. Wollack, Edward J. (10 December 2010). "Cosmology: The Study of the Universe". Universe 101: Big Bang Theory. NASA. Archived from the original on 14 May 2011. Retrieved 15 April 2017.
65. Bondi, Hermann (1957). DeWitt, Cecile M.; Rickles, Dean (eds.). The Role of Gravitation in Physics: Report from the 1957 Chapel Hill Conference. Berlin: Max Planck Research Library. pp. 159–162. ISBN 978-3-86931-963-6. Archived from the original on 28 July 2017. Retrieved 1 July 2017.
66. Crowell, Benjamin (2000). General Relativity. Fullerton, CA: Light and Matter. pp. 241–258. Archived from the original on 18 June 2017. Retrieved 30 June 2017.
67. Kreuzer, L. B. (1968). "Experimental measurement of the equivalence of active and passive gravitational mass". Physical Review. 169 (5): 1007–1011. Bibcode:1968PhRv..169.1007K. doi:10.1103/PhysRev.169.1007.
68. Will, C. M. (1976). "Active mass in relativistic gravity-Theoretical interpretation of the Kreuzer experiment". The Astrophysical Journal. 204: 224–234. Bibcode:1976ApJ...204..224W. doi:10.1086/154164. Archived from the original on 28 September 2018. Retrieved 2 July 2017.
69. Bartlett, D. F.; Van Buren, Dave (1986). "Equivalence of active and passive gravitational mass using the moon". Phys. Rev. Lett. 57 (1): 21–24. Bibcode:1986PhRvL..57...21B. doi:10.1103/PhysRevLett.57.21. PMID 10033347.
70. "Gravity Probe B: FAQ". Archived from the original on 2 June 2018. Retrieved 2 July 2017.
71. Gugliotta, G. (16 February 2009). "Perseverance Is Paying Off for a Test of Relativity in Space". New York Times. Archived from the original on 3 September 2018. Retrieved 2 July 2017.
72. Everitt, C.W.F.; Parkinson, B.W. (2009). "Gravity Probe B Science Results—NASA Final Report" (PDF). Archived (PDF) from the original on 23 October 2012. Retrieved 2 July 2017.
73. Everitt; et al. (2011). "Gravity Probe B: Final Results of a Space Experiment to Test General Relativity". Physical Review Letters. 106 (22): 221101. arXiv:1105.3456. Bibcode:2011PhRvL.106v1101E. doi:10.1103/PhysRevLett.106.221101. PMID 21702590. S2CID 11878715.
74. Ciufolini, Ignazio; Paolozzi, Antonio Rolf Koenig; Pavlis, Erricos C.; Koenig, Rolf (2016). "A test of general relativity using the LARES and LAGEOS satellites and a GRACE Earth gravity model". Eur Phys J C. 76 (3): 120. arXiv:1603.09674. Bibcode:2016EPJC...76..120C. doi:10.1140/epjc/s10052-016-3961-8. PMC 4946852. PMID 27471430.
75. Iorio, L. (February 2017). "A comment on "A test of general relativity using the LARES and LAGEOS satellites and a GRACE Earth gravity model. Measurement of Earth's dragging of inertial frames," by I. Ciufolini et al". The European Physical Journal C. 77 (2): 73. arXiv:1701.06474. Bibcode:2017EPJC...77...73I. doi:10.1140/epjc/s10052-017-4607-1. S2CID 118945777.
76. Cartlidge, Edwin (20 January 2016). "Underground ring lasers will put general relativity to the test". physicsworld.com. Institute of Physics. Archived from the original on 12 July 2017. Retrieved 2 July 2017.
77. "Einstein right using the most sensitive Earth rotation sensors ever made". Phys.org. Science X network. Archived from the original on 10 May 2017. Retrieved 2 July 2017.
78. Murzi, Mauro. "Jules Henri Poincaré (1854–1912)". Internet Encyclopedia of Philosophy (ISSN 2161-0002). Archived from the original on 23 December 2020. Retrieved 9 April 2018.
79. Deser, S. (1970). "Self-Interaction and Gauge Invariance". General Relativity and Gravitation. 1 (1): 9–18. arXiv:gr-qc/0411023. Bibcode:1970GReGr...1....9D. doi:10.1007/BF00759198. S2CID 14295121.
80. Grishchuk, L. P.; Petrov, A. N.; Popova, A. D. (1984). "Exact Theory of the (Einstein) Gravitational Field in an Arbitrary Background Space–Time". Communications in Mathematical Physics. 94 (3): 379–396. Bibcode:1984CMaPh..94..379G. doi:10.1007/BF01224832. S2CID 120021772. Archived from the original on 25 February 2021. Retrieved 9 April 2018.
81. Rosen, N. (1940). "General Relativity and Flat Space I". Physical Review. 57 (2): 147–150. Bibcode:1940PhRv...57..147R. doi:10.1103/PhysRev.57.147.
82. Weinberg, S. (1964). "Derivation of Gauge Invariance and the Equivalence Principle from Lorentz Invariance of the S-Matrix". Physics Letters. 9 (4): 357–359. Bibcode:1964PhL.....9..357W. doi:10.1016/0031-9163(64)90396-8.
83. Thorne, Kip (1995). Black Holes & Time Warps: Einstein's Outrageous Legacy. W. W. Norton & Company. ISBN 978-0-393-31276-8.
84. Bondi, H.; Van der Burg, M.G.J.; Metzner, A. (1962). "Gravitational waves in general relativity: VII. Waves from axisymmetric isolated systems". Proceedings of the Royal Society of London A. A269 (1336): 21–52. Bibcode:1962RSPSA.269...21B. doi:10.1098/rspa.1962.0161. S2CID 120125096.
85. Sachs, R. (1962). "Asymptotic symmetries in gravitational theory". Physical Review. 128 (6): 2851–2864. Bibcode:1962PhRv..128.2851S. doi:10.1103/PhysRev.128.2851.
86. Strominger, Andrew (2017). "Lectures on the Infrared Structure of Gravity and Gauge Theory". arXiv:1703.05448 [hep-th]. ...redacted transcript of a course given by the author at Harvard in spring semester 2016. It contains a pedagogical overview of recent developments connecting the subjects of soft theorems, the memory effect and asymptotic symmetries in four-dimensional QED, nonabelian gauge theory and gravity with applications to black holes. To be published Princeton University Press, 158 pages.
87. maths.tcd.ie
88. Bär, Christian; Fredenhagen, Klaus (2009). "Lorentzian Manifolds" (PDF). Quantum Field Theory on Curved Spacetimes: Concepts and Mathematical Foundations. Dordrecht: Springer. pp. 39–58. ISBN 978-3-642-02779-6. Archived from the original (PDF) on 13 April 2017. Retrieved 14 April 2017.
89. Tegmark, Max (1 April 1997). "On the dimensionality of spacetime". Classical and Quantum Gravity. 14 (4): L69–L75. arXiv:gr-qc/9702052. Bibcode:1997CQGra..14L..69T. doi:10.1088/0264-9381/14/4/002. ISSN 0264-9381. S2CID 250904081.
90. Skow, Bradford (2007). "What makes time different from space?" (PDF). Noûs. 41 (2): 227–252. CiteSeerX 10.1.1.404.7853. doi:10.1111/j.1468-0068.2007.00645.x. Archived from the original (PDF) on 24 August 2016. Retrieved 13 April 2018.
91. Leibniz, Gottfried (1880). "Discourse on metaphysics". Die philosophischen schriften von Gottfried Wilhelm Leibniz. Vol. 4. Weidmann. pp. 427–463. Retrieved 13 April 2018.
92. Ehrenfest, Paul (1920). "Welche Rolle spielt die Dreidimensionalität des Raumes in den Grundgesetzen der Physik?" [What role does the three-dimensionality of space play in the fundamental laws of physics?]. Annalen der Physik. 61 (5): 440–446. Bibcode:1920AnP...366..440E. doi:10.1002/andp.19203660503. Also see Ehrenfest, P. (1917) "In what way does it become manifest in the fundamental laws of physics that space has three dimensions?" Proceedings of the Amsterdam Academy 20:200.
93. Weyl, H. (1922). Space, Time, and Matter. Dover reprint, p. 284.
94. Tangherlini, F. R. (1963). "Schwarzschild field in n dimensions and the dimensionality of space problem". Nuovo Cimento. 27 (3): 636–651. Bibcode:1963NCim...27..636T. doi:10.1007/BF02784569. S2CID 119683293.
95. Tegmark, Max (April 1997). "On the dimensionality of spacetime" (PDF). Classical and Quantum Gravity. 14 (4): L69–L75. arXiv:gr-qc/9702052. Bibcode:1997CQGra..14L..69T. doi:10.1088/0264-9381/14/4/002. S2CID 15694111. Retrieved 16 December 2006.
96. Feng, W.X. (3 August 2022). "Gravothermal phase transition, black holes and space dimensionality". Physical Review D. 106 (4): L041501. arXiv:2207.14317. Bibcode:2022PhRvD.106d1501F. doi:10.1103/PhysRevD.106.L041501. S2CID 251196731.
97. Scargill, J. H. C. (26 February 2020). "Existence of life in 2 + 1 dimensions". Physical Review Research. 2 (1): 013217. arXiv:1906.05336. Bibcode:2020PhRvR...2a3217S. doi:10.1103/PhysRevResearch.2.013217. S2CID 211734117.
98. "Life could exist in a 2D universe (according to physics, anyway)". technologyreview.com. Retrieved 16 June 2021.
Further reading
• Barrow, John D.; Tipler, Frank J. (1986). The Anthropic Cosmological Principle (1st ed.). Oxford University Press. ISBN 978-0-19-282147-8. LCCN 87028148.
• George F. Ellis and Ruth M. Williams (1992) Flat and curved space–times. Oxford Univ. Press. ISBN 0-19-851164-7
• Lorentz, H. A., Einstein, Albert, Minkowski, Hermann, and Weyl, Hermann (1952) The Principle of Relativity: A Collection of Original Memoirs. Dover.
• Lucas, John Randolph (1973) A Treatise on Time and Space. London: Methuen.
• Penrose, Roger (2004). The Road to Reality. Oxford: Oxford University Press. ISBN 0-679-45443-8. Chpts. 17–18.
• Taylor, E. F.; Wheeler, John A. (1992). Spacetime Physics, Second Edition. Internet Archive: W. H. Freeman. ISBN 0-7167-2327-1.
• Arkani-Hamed, Nima (1 December 2017). The Doom of Spacetime: Why It Must Dissolve Into More Fundamental Structures (Speech). The 2,384Th Meeting Of The Society. Washington, D.C. Retrieved 16 July 2022.
External links
• Media related to Spacetime at Wikimedia Commons
• Albert Einstein on space–time 13th edition Encyclopædia Britannica Historical: Albert Einstein's 1926 article
• Encyclopedia of Space–time and gravitation Scholarpedia Expert articles
• Stanford Encyclopedia of Philosophy: "Space and Time: Inertial Frames" by Robert DiSalle.
Space-filling curve
In mathematical analysis, a space-filling curve is a curve whose range reaches every point in a higher dimensional region, typically the unit square (or more generally an n-dimensional unit hypercube). Because Giuseppe Peano (1858–1932) was the first to discover one, space-filling curves in the 2-dimensional plane are sometimes called Peano curves, but that phrase also refers to the Peano curve, the specific example of a space-filling curve found by Peano.
The closely related FASS curves (approximately space-Filling, self-Avoiding, Simple, and Self-similar curves) can be thought of as finite approximations of a certain type of space-filling curves.[1][2][3][4][5][6]
Definition
Intuitively, a curve in two or three (or higher) dimensions can be thought of as the path of a continuously moving point. To eliminate the inherent vagueness of this notion, Jordan in 1887 introduced the following rigorous definition, which has since been adopted as the precise description of the notion of a curve:
A curve (with endpoints) is a continuous function whose domain is the unit interval [0, 1].
In the most general form, the range of such a function may lie in an arbitrary topological space, but in the most commonly studied cases, the range will lie in a Euclidean space such as the 2-dimensional plane (a planar curve) or the 3-dimensional space (space curve).
Sometimes, the curve is identified with the image of the function (the set of all possible values of the function), instead of the function itself. It is also possible to define curves without endpoints as continuous functions on the real line (or on the open unit interval (0, 1)).
History
In 1890, Giuseppe Peano discovered a continuous curve, now called the Peano curve, that passes through every point of the unit square.[7] His purpose was to construct a continuous mapping from the unit interval onto the unit square. Peano was motivated by Georg Cantor's earlier counterintuitive result that the infinite number of points in a unit interval is the same cardinality as the infinite number of points in any finite-dimensional manifold, such as the unit square. The problem Peano solved was whether such a mapping could be continuous; i.e., a curve that fills a space. Peano's solution does not set up a continuous one-to-one correspondence between the unit interval and the unit square, and indeed such a correspondence does not exist (see § Properties below).
It was common to associate the vague notions of thinness and 1-dimensionality to curves; all normally encountered curves were piecewise differentiable (that is, they have piecewise continuous derivatives), and such curves cannot fill up the entire unit square. Therefore, Peano's space-filling curve was found to be highly counterintuitive.
From Peano's example, it was easy to deduce continuous curves whose ranges contained the n-dimensional hypercube (for any positive integer n). It was also easy to extend Peano's example to continuous curves without endpoints, which filled the entire n-dimensional Euclidean space (where n is 2, 3, or any other positive integer).
Most well-known space-filling curves are constructed iteratively as the limit of a sequence of piecewise linear continuous curves, each one more closely approximating the space-filling limit.
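This iterative scheme can be sketched in code. The following is a minimal illustration (not taken from the source) of the conventional index-to-coordinate mapping for the nth Hilbert approximation on a 2^order × 2^order grid; the function name `d2xy` and the bit-manipulation formulation are standard conventions assumed here.

```python
def d2xy(order, d):
    """Map an index d along the order-n Hilbert approximation curve
    to grid coordinates (x, y) on a 2**order x 2**order grid."""
    x = y = 0
    s = 1          # current cell size
    t = d
    while s < (1 << order):
        rx = 1 & (t // 2)       # which half of the square horizontally
        ry = 1 & (t ^ rx)       # which half vertically
        if ry == 0:             # rotate/flip the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Successive indices trace a piecewise linear curve that visits every
# grid cell exactly once, with consecutive cells always adjacent:
points = [d2xy(3, d) for d in range(64)]
```

Each increase in `order` refines the previous approximation; rescaled to the unit square, the limit of these piecewise linear curves is the space-filling Hilbert curve.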
Peano's ground-breaking article contained no illustrations of his construction, which is defined in terms of ternary expansions and a mirroring operator. But the graphical construction was perfectly clear to him—he made an ornamental tiling showing a picture of the curve in his home in Turin. Peano's article also ends by observing that the technique can be obviously extended to other odd bases besides base 3. His choice to avoid any appeal to graphical visualization was motivated by a desire for a completely rigorous proof owing nothing to pictures. At that time (the beginning of the foundation of general topology), graphical arguments were still included in proofs, yet were becoming a hindrance to understanding often counterintuitive results.
A year later, David Hilbert published in the same journal a variation of Peano's construction.[8] Hilbert's article was the first to include a picture helping to visualize the construction technique, essentially the same as illustrated here. The analytic form of the Hilbert curve, however, is more complicated than Peano's.
Outline of the construction of a space-filling curve
Let ${\mathcal {C}}$ denote the Cantor space $\mathbf {2} ^{\mathbb {N} }$.
We start with a continuous function $h$ from the Cantor space ${\mathcal {C}}$ onto the entire unit interval $[0,\,1]$. (The restriction of the Cantor function to the Cantor set is an example of such a function.) From it, we get a continuous function $H$ from the topological product ${\mathcal {C}}\;\times \;{\mathcal {C}}$ onto the entire unit square $[0,\,1]\;\times \;[0,\,1]$ by setting
$H(x,y)=(h(x),h(y)).\,$
Since the Cantor set is homeomorphic to the product ${\mathcal {C}}\times {\mathcal {C}}$, there is a continuous bijection $g$ from the Cantor set onto ${\mathcal {C}}\;\times \;{\mathcal {C}}$. The composition $f$ of $H$ and $g$ is a continuous function mapping the Cantor set onto the entire unit square. (Alternatively, we could use the theorem that every compact metric space is a continuous image of the Cantor set to get the function $f$.)
Finally, one can extend $f$ to a continuous function $F$ whose domain is the entire unit interval $[0,\,1]$. This can be done either by using the Tietze extension theorem on each of the components of $f$, or by simply extending $f$ "linearly" (that is, on each deleted open interval $(a,\,b)$ in the construction of the Cantor set, we define the extension part of $F$ on $(a,\,b)$ to be the line segment within the unit square joining the values $f(a)$ and $f(b)$).
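The construction can be made concrete on finite digit sequences. The sketch below (an illustration of the idea, not from the source) represents a Cantor-set point by its sequence of ternary digits (each 0 or 2, encoded here as bits 0/1), uses digit interleaving for the homeomorphism $g$, and uses the Cantor-function digit reinterpretation for $h$:

```python
def h(bits):
    """Cantor function restricted to the Cantor set: reread the ternary
    digits (0 or 2, encoded as bits 0/1) as binary digits of a number in [0, 1]."""
    return sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))

def f(bits):
    """Map a Cantor-set point (given by its digit sequence) onto the unit
    square: g splits the digits into even- and odd-indexed subsequences,
    then H = (h, h) is applied componentwise."""
    return h(bits[0::2]), h(bits[1::2])

# With longer digit sequences, f approaches any target point of the square;
# e.g. all-zero digits give the corner (0, 0).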
Properties
If a curve is not injective, then one can find two intersecting subcurves of the curve, each obtained by considering the images of two disjoint segments of the curve's domain (the unit interval). The two subcurves intersect if the intersection of the two images is non-empty. One might be tempted to think that curves intersecting means that they necessarily cross each other from one side to the other, like two non-parallel lines at their intersection point. However, two curves (or two subcurves of one curve) may contact one another without crossing, as, for example, a line tangent to a circle does.
A non-self-intersecting continuous curve cannot fill the unit square because that will make the curve a homeomorphism from the unit interval onto the unit square (any continuous bijection from a compact space onto a Hausdorff space is a homeomorphism). But a unit square has no cut-point, and so cannot be homeomorphic to the unit interval, in which all points except the endpoints are cut-points. There exist non-self-intersecting curves of nonzero area, the Osgood curves, but by Netto's theorem they are not space-filling.[9]
For the classic Peano and Hilbert space-filling curves, where two subcurves intersect (in the technical sense), there is self-contact without self-crossing. A space-filling curve can be (everywhere) self-crossing if its approximation curves are self-crossing. A space-filling curve's approximations can be self-avoiding, as the figures above illustrate. In 3 dimensions, self-avoiding approximation curves can even contain knots. Approximation curves remain within a bounded portion of n-dimensional space, but their lengths increase without bound.
Space-filling curves are special cases of fractal curves. No differentiable space-filling curve can exist. Roughly speaking, differentiability puts a bound on how fast the curve can turn. Michał Morayne proved that the continuum hypothesis is equivalent to the existence of a Peano curve such that at each point of the real line at least one of its components is differentiable.[10]
The Hahn–Mazurkiewicz theorem
The Hahn–Mazurkiewicz theorem is the following characterization of spaces that are the continuous image of curves:
A non-empty Hausdorff topological space is a continuous image of the unit interval if and only if it is a compact, connected, locally connected, second-countable space.
Spaces that are the continuous image of a unit interval are sometimes called Peano spaces.
In many formulations of the Hahn–Mazurkiewicz theorem, second-countable is replaced by metrizable. These two formulations are equivalent. In one direction a compact Hausdorff space is a normal space and, by the Urysohn metrization theorem, second-countable then implies metrizable. Conversely, a compact metric space is second-countable.
Kleinian groups
There are many natural examples of space-filling, or rather sphere-filling, curves in the theory of doubly degenerate Kleinian groups. For example, Cannon & Thurston (2007) showed that the circle at infinity of the universal cover of a fiber of a mapping torus of a pseudo-Anosov map is a sphere-filling curve. (Here the sphere is the sphere at infinity of hyperbolic 3-space.)
Integration
Wiener pointed out in The Fourier Integral and Certain of its Applications that space-filling curves could be used to reduce Lebesgue integration in higher dimensions to Lebesgue integration in one dimension.
See also
• Dragon curve
• Gosper curve
• Hilbert curve
• Koch curve
• Moore curve
• Murray polygon
• Sierpiński curve
• Space-filling tree
• Spatial index
• Hilbert R-tree
• Bx-tree
• Z-order (curve) (Morton order)
• Cannon–Thurston map
• List of fractals by Hausdorff dimension
Notes
1. Przemyslaw Prusinkiewicz and Aristid Lindenmayer. "The Algorithmic Beauty of Plants". 2012. p. 12
2. Jeffrey Ventrella. "Brainfilling Curves - A Fractal Bestiary". 2011. p. 43
3. Marcia Ascher. "Mathematics Elsewhere: An Exploration of Ideas Across Cultures". 2018. p. 179.
4. "Fractals in the Fundamental and Applied Sciences". 1991. pp. 341–343.
5. Przemyslaw Prusinkiewicz; Aristid Lindenmayer; F. David Fracchia. "Synthesis of Space-Filling Curves on the Square Grid". 1989.
6. "FASS-curve". D. Frettlöh, E. Harriss, F. Gähler: Tilings encyclopedia, https://tilings.math.uni-bielefeld.de/
7. Peano 1890.
8. Hilbert 1891.
9. Sagan 1994, p. 131.
10. Morayne, Michał (1987). "On differentiability of Peano type functions". Colloquium Mathematicum. 53 (1): 129–132. doi:10.4064/cm-53-1-129-132. ISSN 0010-1354.
References
• Cannon, James W.; Thurston, William P. (2007) [1982], "Group invariant Peano curves", Geometry & Topology, 11 (3): 1315–1355, doi:10.2140/gt.2007.11.1315, ISSN 1465-3060, MR 2326947
• Hilbert, D. (1891), "Ueber die stetige Abbildung einer Linie auf ein Flächenstück", Mathematische Annalen (in German), 38 (3): 459–460, doi:10.1007/BF01199431, S2CID 123643081
• Mandelbrot, B. B. (1982), "Ch. 7: Harnessing the Peano Monster Curves", The Fractal Geometry of Nature, W. H. Freeman.
• McKenna, Douglas M. (1994), "SquaRecurves, E-Tours, Eddies, and Frenzies: Basic Families of Peano Curves on the Square Grid", in Guy, Richard K.; Woodrow, Robert E. (eds.), The Lighter Side of Mathematics: Proceedings of the Eugene Strens Memorial Conference on Recreational Mathematics and its History, Mathematical Association of America, pp. 49–73, ISBN 978-0-88385-516-4.
• Peano, G. (1890), "Sur une courbe, qui remplit toute une aire plane", Mathematische Annalen (in French), 36 (1): 157–160, doi:10.1007/BF01199438, S2CID 179177780.
• Sagan, Hans (1994), Space-Filling Curves, Universitext, Springer-Verlag, doi:10.1007/978-1-4612-0871-6, ISBN 0-387-94265-3, MR 1299533.
External links
• Multidimensional Space-Filling Curves
• Proof of the existence of a bijection at cut-the-knot
Java applets:
• Peano Plane Filling Curves at cut-the-knot
• Hilbert's and Moore's Plane Filling Curves at cut-the-knot
• All Peano Plane Filling Curves at cut-the-knot
Minkowski space
In mathematical physics, Minkowski space (or Minkowski spacetime) (/mɪŋˈkɔːfski, -ˈkɒf-/[1]) is a combination of three-dimensional Euclidean space and time into a four-dimensional manifold in which the spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it "was grown on experimental physical grounds."
Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized. While the individual components in Euclidean space and time might differ due to length contraction and time dilation, in Minkowski spacetime, all frames of reference will agree on the total interval in spacetime between events.[nb 1] Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently than the three spatial dimensions.
In 3-dimensional Euclidean space, the isometry group (the maps preserving the regular Euclidean distance) is the Euclidean group. It is generated by rotations, reflections and translations. When time is appended as a fourth dimension, the further transformations of translations in time and Lorentz boosts are added, and the group of all these transformations is called the Poincaré group.
Spacetime is equipped with an indefinite non-degenerate bilinear form, variously called the Minkowski metric,[2] the Minkowski norm squared or Minkowski inner product depending on the context.[nb 2] The Minkowski inner product is defined so as to yield the spacetime interval between two events when given their coordinate difference vector as argument.[3] Equipped with this inner product, the mathematical model of spacetime is called Minkowski space. The group of transformations for Minkowski space that preserve the spacetime interval (as opposed to the spatial Euclidean distance) is the Poincaré group (as opposed to the isometry group).
History
Complex Minkowski spacetime
See also: Four-dimensional space
In his second relativity paper in 1905–06, Henri Poincaré showed[4] how, by taking time to be an imaginary fourth spacetime coordinate ict, where c is the speed of light and i is the imaginary unit, Lorentz transformations can be visualized as ordinary rotations in four-dimensional Euclidean space. Each point in this space represents an event in spacetime; for a boost, the rotation plane is spanned by the time axis and the direction of relative motion between the two observers, and the rotation angle is related to their relative velocity.
To see this, consider the coordinates of an event in spacetime represented as a four-vector (t, x, y, z). A Lorentz transformation can be represented as a matrix that acts on the four-vector and changes its components. This matrix can be thought of as a rotation matrix in four-dimensional space, which rotates the four-vector in a particular coordinate plane.
$x^{2}+y^{2}+z^{2}+(ict)^{2}={\text{constant}}.$
Rotations in planes spanned by two space unit vectors appear in coordinate space as well as in physical spacetime as Euclidean rotations, and are interpreted in the ordinary sense. The "rotation" in a plane spanned by a space unit vector and a time unit vector, while formally still a rotation in coordinate space, is a Lorentz boost in physical spacetime with real inertial coordinates. The analogy with Euclidean rotations is only partial, since the radius of the sphere is actually imaginary, which turns rotations into rotations in hyperbolic space (see hyperbolic rotation).
This idea, which was mentioned only briefly by Poincaré, was elaborated by Minkowski in a paper in German published in 1908 called "The Fundamental Equations for Electromagnetic Processes in Moving Bodies".[5] He reformulated Maxwell equations as a symmetrical set of equations in the four variables (x, y, z, ict) combined with redefined vector variables for electromagnetic quantities, and he was able to show directly and very simply their invariance under Lorentz transformation. He also made other important contributions and used matrix notation for the first time in this context. From his reformulation he concluded that time and space should be treated equally, and so arose his concept of events taking place in a unified four-dimensional spacetime continuum.
Real Minkowski spacetime
In a further development in his 1908 "Space and Time" lecture,[6] Minkowski gave an alternative formulation of this idea that used a real time coordinate instead of an imaginary one, representing the four variables (x, y, z, t) of space and time in coordinate form in a four dimensional real vector space. Points in this space correspond to events in spacetime. In this space, there is a defined light-cone associated with each point, and events not on the light-cone are classified by their relation to the apex as spacelike or timelike. It is principally this view of spacetime that is current nowadays, although the older view involving imaginary time has also influenced special relativity.
In the English translation of Minkowski's paper, the Minkowski metric as defined below is referred to as the line element. The Minkowski inner product below appears unnamed when referring to orthogonality (which he calls normality) of certain vectors, and the Minkowski norm squared is referred to (somewhat cryptically, perhaps this is translation dependent) as "sum".
Minkowski's principal tool is the Minkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.g. proper time and length contraction) and to provide a geometrical interpretation to the generalization of Newtonian mechanics to relativistic mechanics. For these special topics, see the referenced articles; the presentation below is principally confined to the mathematical structure (the Minkowski metric, the quantities derived from it, and the Poincaré group as the symmetry group of spacetime) that follows from the invariance of the spacetime interval on the spacetime manifold as a consequence of the postulates of special relativity, not to the specific application or derivation of that invariance. This structure provides the background setting of all present relativistic theories, barring general relativity, for which flat Minkowski spacetime still provides a springboard as curved spacetime is locally Lorentzian.
Minkowski, aware of the fundamental restatement of the theory which he had made, said
The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.
— Hermann Minkowski, 1908, 1909[6]
Though Minkowski took an important step for physics, Albert Einstein saw its limitation:
At a time when Minkowski was giving the geometrical interpretation of special relativity by extending the Euclidean three-space to a quasi-Euclidean four-space that included time, Einstein was already aware that this is not valid, because it excludes the phenomenon of gravitation. He was still far from the study of curvilinear coordinates and Riemannian geometry, and the heavy mathematical apparatus entailed.[7]
For further historical information see references Galison (1979), Corry (1997) and Walter (1999).
Causal structure
Where x, y, and z are Cartesian coordinates in 3-dimensional space, c is the constant representing the universal speed limit, and t is time, the four-dimensional vector v = (ct, x, y, z) = (ct, r) is classified according to the sign of c2t2 − r2. A vector is timelike if c2t2 > r2, spacelike if c2t2 < r2, and null or lightlike if c2t2 = r2. This can be expressed in terms of the sign of η(v, v) as well, which depends on the signature. The classification of any vector will be the same in all frames of reference that are related by a Lorentz transformation (but not by a general Poincaré transformation because the origin may then be displaced) because of the invariance of the spacetime interval under Lorentz transformation.
The set of all null vectors at an event[nb 3] of Minkowski space constitutes the light cone of that event. Given a timelike vector v, there is a worldline of constant velocity associated with it, represented by a straight line in a Minkowski diagram.
Once a direction of time is chosen,[nb 4] timelike and null vectors can be further decomposed into various classes. For timelike vectors one has
1. future-directed timelike vectors whose first component is positive, (tip of vector located in absolute future in figure) and
2. past-directed timelike vectors whose first component is negative (absolute past).
Null vectors fall into three classes:
1. the zero vector, whose components in any basis are (0, 0, 0, 0) (origin),
2. future-directed null vectors whose first component is positive (upper light cone), and
3. past-directed null vectors whose first component is negative (lower light cone).
Together with spacelike vectors there are 6 classes in all.
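The six-way classification above can be sketched in code. The following helper is purely illustrative (function name, tolerance, and units with c = 1 are our own choices), not anything from the literature:

```python
# Classify a four-vector v = (ct, x, y, z) by the sign of c^2 t^2 - r^2,
# using units with c = 1; eps guards against floating-point noise.

def classify(v, eps=1e-12):
    t, x, y, z = v
    s = t * t - (x * x + y * y + z * z)   # c^2 t^2 - r^2 with c = 1
    if s > eps:
        return "future-directed timelike" if t > 0 else "past-directed timelike"
    if s < -eps:
        return "spacelike"
    # null (lightlike) cases
    if max(abs(t), abs(x), abs(y), abs(z)) <= eps:
        return "zero vector"
    return "future-directed null" if t > 0 else "past-directed null"

assert classify((2, 1, 0, 0)) == "future-directed timelike"
assert classify((1, 1, 0, 0)) == "future-directed null"
assert classify((0, 3, 4, 0)) == "spacelike"
assert classify((0, 0, 0, 0)) == "zero vector"
```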
An orthonormal basis for Minkowski space necessarily consists of one timelike and three spacelike unit vectors. If one wishes to work with non-orthonormal bases, it is possible to have other combinations of vectors. For example, one can easily construct a (non-orthonormal) basis consisting entirely of null vectors, called a null basis.
Vector fields are called timelike, spacelike or null if the associated vectors are timelike, spacelike or null at each point where the field is defined.
Properties of time-like vectors
Time-like vectors have special importance in the theory of relativity as they correspond to events which are accessible to the observer at (0, 0, 0, 0) with a speed less than that of light. Of most interest are time-like vectors that are similarly directed, i.e. all either in the forward or in the backward cone. Such vectors have several properties not shared by space-like vectors. These arise because both forward and backward cones are convex, whereas the space-like region is not convex.
Scalar product
The scalar product of two time-like vectors u1 = (t1, x1, y1, z1) and u2 = (t2, x2, y2, z2) is
$\eta (u_{1},u_{2})=u_{1}\cdot u_{2}=c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}.$
Positivity of scalar product: An important property is that the scalar product of two similarly directed time-like vectors is always positive. This can be seen from the reversed Cauchy–Schwarz inequality below. It follows that if the scalar product of two vectors is zero, then at least one of them must be space-like. The scalar product of two space-like vectors can be positive or negative, as can be seen by considering products of two space-like vectors having orthogonal spatial components and times of either different or the same signs.
Using the positivity property of time-like vectors it is easy to verify that a linear sum with positive coefficients of similarly directed time-like vectors is also similarly directed time-like (the sum remains within the light-cone because of convexity).
Norm and reversed Cauchy inequality
The norm of a time-like vector u = (ct, x, y, z) is defined as
$\left\|u\right\|={\sqrt {\eta (u,u)}}={\sqrt {c^{2}t^{2}-x^{2}-y^{2}-z^{2}}}$
The reversed Cauchy inequality is another consequence of the convexity of either light-cone.[8] For two distinct similarly directed time-like vectors u1 and u2 this inequality is
$\eta (u_{1},u_{2})>\left\|u_{1}\right\|\left\|u_{2}\right\|$
or algebraically,
$c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}>{\sqrt {\left(c^{2}t_{1}^{2}-x_{1}^{2}-y_{1}^{2}-z_{1}^{2}\right)\left(c^{2}t_{2}^{2}-x_{2}^{2}-y_{2}^{2}-z_{2}^{2}\right)}}$
From this the positivity property of the scalar product can be seen.
The reversed triangle inequality
For two similarly directed time-like vectors u and w, the inequality is[9]
$\left\|u+w\right\|\geq \left\|u\right\|+\left\|w\right\|,$
where the equality holds when the vectors are linearly dependent.
The proof uses the algebraic definition with the reversed Cauchy inequality:[10]
${\begin{aligned}\left\|u+w\right\|^{2}&=\left\|u\right\|^{2}+2\,\eta (u,w)+\left\|w\right\|^{2}\\[5mu]&\geq \left\|u\right\|^{2}+2\left\|u\right\|\left\|w\right\|+\left\|w\right\|^{2}=\left(\left\|u\right\|+\left\|w\right\|\right)^{2}.\end{aligned}}$
The result now follows by taking the square root on both sides.
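The two reversed inequalities can be checked numerically. The sketch below (units with c = 1, sample vectors our own) assumes the (+ − − −) form of the scalar product used in this section:

```python
# Numeric check of the reversed Cauchy and reversed triangle inequalities
# for two similarly directed time-like vectors, with c = 1.
import math

def eta(u, v):
    return u[0]*v[0] - u[1]*v[1] - u[2]*v[2] - u[3]*v[3]

def norm(u):
    return math.sqrt(eta(u, u))

u1 = (3.0, 1.0, 1.0, 0.0)   # time-like, future-directed
u2 = (5.0, 2.0, 0.0, 1.0)   # time-like, future-directed

# reversed Cauchy: eta(u1, u2) > ||u1|| ||u2|| for distinct directions
assert eta(u1, u2) > norm(u1) * norm(u2)

# reversed triangle: ||u1 + u2|| >= ||u1|| + ||u2||
s = tuple(a + b for a, b in zip(u1, u2))
assert norm(s) >= norm(u1) + norm(u2)
```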
Mathematical structure
It is assumed below that spacetime is endowed with a coordinate system corresponding to an inertial frame. This provides an origin, which is necessary for spacetime to be modeled as a vector space. This addition is not strictly required: more sophisticated treatments, analogous to an affine space, can remove the extra structure. However, this is not the introductory convention and it is not covered here.
For an overview, Minkowski space is a 4-dimensional real vector space equipped with a non-degenerate, symmetric bilinear form on the tangent space at each point in spacetime, here simply called the Minkowski inner product, with metric signature either (+ − − −) or (− + + +). The tangent space at each event is a vector space of the same dimension as spacetime, 4.
Tangent vectors
In practice, one need not be concerned with the tangent spaces. The vector space structure of Minkowski space allows for the canonical identification of vectors in tangent spaces at points (events) with vectors (points, events) in Minkowski space itself. See e.g. Lee (2003, Proposition 3.8.) or Lee (2012, Proposition 3.13.) These identifications are routinely done in mathematics. They can be expressed formally in Cartesian coordinates as[11]
${\begin{aligned}\left(x^{0},\,x^{1},\,x^{2},\,x^{3}\right)\ &\leftrightarrow \ \left.x^{0}\mathbf {e} _{0}\right|_{p}+\left.x^{1}\mathbf {e} _{1}\right|_{p}+\left.x^{2}\mathbf {e} _{2}\right|_{p}+\left.x^{3}\mathbf {e} _{3}\right|_{p}\\&\leftrightarrow \ \left.x^{0}\mathbf {e} _{0}\right|_{q}+\left.x^{1}\mathbf {e} _{1}\right|_{q}+\left.x^{2}\mathbf {e} _{2}\right|_{q}+\left.x^{3}\mathbf {e} _{3}\right|_{q}\end{aligned}}$
with basis vectors in the tangent spaces defined by
$\left.\mathbf {e} _{\mu }\right|_{p}=\left.{\frac {\partial }{\partial x^{\mu }}}\right|_{p}{\text{ or }}\mathbf {e} _{0}|_{p}=\left({\begin{matrix}1\\0\\0\\0\end{matrix}}\right){\text{, etc}}.$
Here p and q are any two events, and the second basis vector identification is referred to as parallel transport. The first identification is the canonical identification of vectors in the tangent space at any point with vectors in the space itself. The appearance of basis vectors in tangent spaces as first-order differential operators is due to this identification. It is motivated by the observation that a geometrical tangent vector can be associated in a one-to-one manner with a directional derivative operator on the set of smooth functions. This is promoted to a definition of tangent vectors in manifolds that are not necessarily embedded in Rn. This definition of tangent vectors is not the only possible one, as ordinary n-tuples can be used as well.
Definitions of tangent vectors as ordinary vectors
A tangent vector at a point p may be defined, here specialized to Cartesian coordinates in Lorentz frames, as 4 × 1 column vectors v associated to each Lorentz frame related by Lorentz transformation Λ such that the vector v in a frame related to some frame by Λ transforms according to v → Λv. This is the same way in which the coordinates xμ transform. Explicitly,
${\begin{aligned}x'^{\mu }&={\Lambda ^{\mu }}_{\nu }x^{\nu },\\v'^{\mu }&={\Lambda ^{\mu }}_{\nu }v^{\nu }.\end{aligned}}$
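As an illustration of the transformation rule v′ = Λv, the sketch below applies a boost along x (rapidity value and c = 1 are our own choices) and checks that the classification-determining quantity c2t2 − r2 is unchanged:

```python
# A Lorentz boost along x with rapidity phi acts on components by v' = Lam v
# and leaves c^2 t^2 - r^2 invariant (c = 1 throughout).
import math

def boost_x(phi):
    ch, sh = math.cosh(phi), math.sinh(phi)
    return [[ch, -sh, 0, 0],
            [-sh, ch, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

def apply(L, v):
    # v'^mu = Lam^mu_nu v^nu
    return [sum(L[i][j] * v[j] for j in range(4)) for i in range(4)]

def interval(v):
    return v[0]**2 - v[1]**2 - v[2]**2 - v[3]**2

v = [2.0, 1.0, 0.5, 0.0]        # components (ct, x, y, z)
vp = apply(boost_x(0.7), v)
assert abs(interval(vp) - interval(v)) < 1e-12
```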
This definition is equivalent to the definition given above under a canonical isomorphism.
For some purposes it is desirable to identify tangent vectors at a point p with displacement vectors at p, which is, of course, admissible by essentially the same canonical identification.[12] The identifications of vectors referred to above in the mathematical setting can correspondingly be found in a more physical and explicitly geometrical setting in Misner, Thorne & Wheeler (1973). They offer varying degrees of sophistication (and rigor) depending on which part of the material one chooses to read.
Metric signature
The metric signature refers to which sign the Minkowski inner product yields when given space (spacelike to be specific, defined further down) and time basis vectors (timelike) as arguments. Further discussion about this theoretically inconsequential, but practically necessary, choice for purposes of internal consistency and convenience is deferred to the box below.
The choice of metric signature
In general, but with several exceptions, mathematicians and general relativists prefer spacelike vectors to yield a positive sign, (− + + +), while particle physicists tend to prefer timelike vectors to yield a positive sign, (+ − − −). Authors covering several areas of physics, e.g. Steven Weinberg and Landau and Lifshitz ((− + + +) and (+ − − −) respectively), stick to one choice regardless of topic. Arguments for the former convention include "continuity" from the Euclidean case corresponding to the non-relativistic limit c → ∞. Arguments for the latter include that minus signs, otherwise ubiquitous in particle physics, go away. Yet other authors, especially of introductory texts, e.g. Kleppner & Kolenkow (1978), do not choose a signature at all, but instead opt to coordinatize spacetime such that the time coordinate (but not time itself!) is imaginary. This removes the need for the explicit introduction of a metric tensor (which may seem an extra burden in an introductory course), and one need not be concerned with covariant vectors and contravariant vectors (or raising and lowering indices) to be described below. The inner product is instead effected by a straightforward extension of the dot product in R3 to R3 × C. This works in the flat spacetime of special relativity, but not in the curved spacetime of general relativity, see Misner, Thorne & Wheeler (1973, Box 2.1, Farewell to ict) (who, by the way, use (− + + +)). MTW also argues that it hides the true indefinite nature of the metric and the true nature of Lorentz boosts, which are not rotations. It also needlessly complicates the use of tools of differential geometry that are otherwise immediately available and useful for geometrical description and calculation – even in the flat spacetime of special relativity, e.g. of the electromagnetic field.
Terminology
Mathematically associated to the bilinear form is a tensor of type (0,2) at each point in spacetime, called the Minkowski metric.[nb 5] The Minkowski metric, the bilinear form, and the Minkowski inner product are all the same object; it is a bilinear function that accepts two (contravariant) vectors and returns a real number. In coordinates, this is the 4×4 matrix representing the bilinear form.
For comparison, in general relativity, a Lorentzian manifold L is likewise equipped with a metric tensor g, which is a nondegenerate symmetric bilinear form on the tangent space TpL at each point p of L. In coordinates, it may be represented by a 4×4 matrix depending on spacetime position. Minkowski space is thus a comparatively simple special case of a Lorentzian manifold. Its metric tensor is in coordinates the same symmetric matrix at every point of M, and its arguments can, per above, be taken as vectors in spacetime itself.
Introducing more terminology (but not more structure), Minkowski space is thus a pseudo-Euclidean space with total dimension n = 4 and signature (3, 1) or (1, 3). Elements of Minkowski space are called events. Minkowski space is often denoted R3,1 or R1,3 to emphasize the chosen signature, or just M. It is perhaps the simplest example of a pseudo-Riemannian manifold.
Then mathematically, the metric is a bilinear form on an abstract four-dimensional real vector space $V$, that is,
$\eta :V\times V\rightarrow \mathbb {R} $
where $\eta $ has signature $(-,+,+,+)$, and signature is a coordinate-invariant property of $\eta $. The space of bilinear maps forms a vector space which can be identified with $V^{*}\otimes V^{*}$, and $\eta $ may be equivalently viewed as an element of this space. By making a choice of orthonormal basis $\{e_{\mu }\}$, we can identify $M:=(V,\eta )$ with the space $\mathbb {R} ^{1,3}:=(\mathbb {R} ^{4},\eta _{\mu \nu })$, where $\eta _{\mu \nu }={\text{diag}}(-1,+1,+1,+1)$. The notation is meant to emphasise the fact that $M$ and $\mathbb {R} ^{1,3}$ are not just vector spaces but have added structure.
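A minimal sketch of the signature statement: in an orthonormal basis η becomes the diagonal matrix diag(−1, +1, +1, +1), and its signature is the (basis-independent) count of negative and positive eigenvalue signs. Pure Python, no external libraries assumed:

```python
# eta_{mu nu} in an orthonormal basis, signature (-, +, +, +).
eta = [[-1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]

# For a diagonal matrix the eigenvalues are the diagonal entries;
# the signature is the multiset of their signs.
signs = [1 if eta[i][i] > 0 else -1 for i in range(4)]
assert sorted(signs) == [-1, 1, 1, 1]
```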
An interesting example of non-inertial coordinates for (part of) Minkowski spacetime are the Born coordinates. Another useful set of coordinates are the light-cone coordinates.
Pseudo-Euclidean metrics
Main articles: Pseudo-Euclidean space and Lorentzian manifolds
The Minkowski inner product is not an inner product, since it is not positive-definite, i.e. the quadratic form η(v, v) need not be positive for nonzero v. The positive-definite condition has been replaced by the weaker condition of non-degeneracy. The bilinear form is said to be indefinite. The Minkowski metric η is the metric tensor of Minkowski space. It is a pseudo-Euclidean metric, or more generally a constant pseudo-Riemannian metric in Cartesian coordinates. As such it is a nondegenerate symmetric bilinear form, a type (0, 2) tensor. It accepts two arguments up, vp, vectors in TpM, p ∈ M, the tangent space at p in M. Due to the above-mentioned canonical identification of TpM with M itself, it accepts arguments u, v with both u and v in M.
As a notational convention, vectors v in M, called 4-vectors, are denoted in italics, and not, as is common in the Euclidean setting, with boldface v. The latter is generally reserved for the 3-vector part (to be introduced below) of a 4-vector.
The definition [13]
$u\cdot v=\eta (u,\,v)$
yields an inner product-like structure on M, previously and henceforth called the Minkowski inner product; it is similar to the Euclidean inner product, but describes a different geometry. It is also called the relativistic dot product. If the two arguments are the same,
$u\cdot u=\eta (u,u)\equiv \|u\|^{2}\equiv u^{2},$
the resulting quantity will be called the Minkowski norm squared. The Minkowski inner product satisfies the following properties.
Linearity in first argument
$\eta (au+v,\,w)=a\eta (u,\,w)+\eta (v,\,w),\quad \forall u,\,v\in M,\;\forall a\in \mathbb {R} $
Symmetry
$\eta (u,\,v)=\eta (v,\,u)$
Non-degeneracy
$\eta (u,\,v)=0,\;\forall v\in M\ \Rightarrow \ u=0$
The first two conditions imply bilinearity. The defining difference between a pseudo-inner product and an inner product proper is that the former is not required to be positive definite, that is, η(u, u) < 0 is allowed.
The most important feature of the inner product and norm squared is that these are quantities unaffected by Lorentz transformations. In fact, it can be taken as the defining property of a Lorentz transformation that it preserves the inner product (i.e. the value of the corresponding bilinear form on two vectors). This approach is taken more generally for all classical groups definable this way in classical group. There, the matrix Φ is identical in the case O(3, 1) (the Lorentz group) to the matrix η to be displayed below.
Two vectors v and w are said to be orthogonal if η(v, w) = 0. For a geometric interpretation of orthogonality in the special case when η(v, v) ≤ 0 and η(w, w) ≥ 0 (or vice versa), see hyperbolic orthogonality.
A vector e is called a unit vector if η(e, e) = ±1. A basis for M consisting of mutually orthogonal unit vectors is called an orthonormal basis.[14]
For a given inertial frame, an orthonormal basis in space, combined with the unit time vector, forms an orthonormal basis in Minkowski space. The number of positive and negative unit vectors in any such basis is a fixed pair of numbers, equal to the signature of the bilinear form associated with the inner product. This is Sylvester's law of inertia.
More terminology (but not more structure): The Minkowski metric is a pseudo-Riemannian metric, more specifically, a Lorentzian metric, even more specifically, the Lorentz metric, reserved for 4-dimensional flat spacetime with the remaining ambiguity only being the signature convention.
Minkowski metric
Not to be confused with Minkowski distance which is also called Minkowski metric.
From the second postulate of special relativity, together with homogeneity of spacetime and isotropy of space, it follows that the spacetime interval between two arbitrary events called 1 and 2 is:[15]
$c^{2}\left(t_{1}-t_{2}\right)^{2}-\left(x_{1}-x_{2}\right)^{2}-\left(y_{1}-y_{2}\right)^{2}-\left(z_{1}-z_{2}\right)^{2}.$
This quantity is not consistently named in the literature; the term interval sometimes refers instead to the square root of the quantity defined here.[16][17]
The invariance of the interval under coordinate transformations between inertial frames follows from the invariance of
$c^{2}t^{2}-x^{2}-y^{2}-z^{2}$
provided the transformations are linear. This quadratic form can be used to define a bilinear form
$u\cdot v=c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}.$
via the polarization identity. This bilinear form can in turn be written as
$u\cdot v=u^{\textsf {T}}[\eta ]v\,.$
where [η] is a $4\times 4$ matrix associated with η. While possibly confusing, it is common practice to denote [η] by just η. The matrix is read off from the explicit bilinear form as
$\eta ={\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}},$
and the bilinear form
$u\cdot v=\eta (u,v),$
with which this section started by assuming its existence, is now identified.
For definiteness and shorter presentation, the signature (− + + +) is adopted below. This choice (or the other possible choice) has no (known) physical implications. The symmetry group preserving the bilinear form with one choice of signature is isomorphic (under the map given here) with the symmetry group preserving the other choice of signature. This means that both choices are in accord with the two postulates of relativity. Switching between the two conventions is straightforward. If the metric tensor η has been used in a derivation, go back to the earliest point where it was used, substitute −η for η, and retrace forward to the desired formula with the desired metric signature.
Standard basis
A standard or orthonormal basis for Minkowski space is a set of four mutually orthogonal vectors {e0, e1, e2, e3} such that
$-\eta (e_{0},e_{0})=\eta (e_{1},e_{1})=\eta (e_{2},e_{2})=\eta (e_{3},e_{3})=1$
and for which
$\eta (e_{\mu },e_{\nu })=0$
when $ \mu \neq \nu \,.$
These conditions can be written compactly in the form
$\eta (e_{\mu },e_{\nu })=\eta _{\mu \nu }.$
Relative to a standard basis, the components of a vector v are written (v0, v1, v2, v3) where the Einstein notation is used to write v = vμ eμ. The component v0 is called the timelike component of v while the other three components are called the spatial components. The spatial components of a 4-vector v may be identified with a 3-vector v = (v1, v2, v3).
In terms of components, the Minkowski inner product between two vectors v and w is given by
$\eta (v,w)=\eta _{\mu \nu }v^{\mu }w^{\nu }=v^{0}w_{0}+v^{1}w_{1}+v^{2}w_{2}+v^{3}w_{3}=v^{\mu }w_{\mu }=v_{\mu }w^{\mu },$
and
$\eta (v,v)=\eta _{\mu \nu }v^{\mu }v^{\nu }=v^{0}v_{0}+v^{1}v_{1}+v^{2}v_{2}+v^{3}v_{3}=v^{\mu }v_{\mu }.$
Here lowering of an index with the metric was used.
There are many possible choices of standard basis obeying the condition $\eta (e_{\mu },e_{\nu })=\eta _{\mu \nu }.$ Any two such bases are related in some sense by a Lorentz transformation, either by a change-of-basis matrix $\Lambda _{\nu }^{\mu }$, a real $4\times 4$ matrix satisfying
$\Lambda _{\rho }^{\mu }\eta _{\mu \nu }\Lambda _{\sigma }^{\nu }=\eta _{\rho \sigma }.$
or $\Lambda ,$ a linear map on the abstract vector space satisfying, for any pair of vectors $u,v,$
$\eta (\Lambda u,\Lambda v)=\eta (u,v).$
Then if we have two different bases $\{e_{0},e_{1},e_{2},e_{3}\}$ and $\{e_{0}',e_{1}',e_{2}',e_{3}'\}$, we can write $e_{\mu }'=e_{\nu }\Lambda _{\mu }^{\nu }$ or $e_{\mu }'=\Lambda e_{\mu }$. While it might be tempting to think of $\Lambda _{\nu }^{\mu }$ and $\Lambda $ as the same thing, mathematically they are elements of different spaces, and act on the space of standard bases from different sides.
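The matrix condition $\Lambda _{\rho }^{\mu }\eta _{\mu \nu }\Lambda _{\sigma }^{\nu }=\eta _{\rho \sigma }$ can be verified numerically for a sample boost. The sketch below (signature (− + + +), rapidity value our own) uses only plain Python:

```python
# Verify Lam^T eta Lam = eta for a boost in the t-x plane,
# with eta = diag(-1, 1, 1, 1).
import math

eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

phi = 0.3
ch, sh = math.cosh(phi), math.sinh(phi)
Lam = [[ch, sh, 0, 0], [sh, ch, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

lhs = matmul(transpose(Lam), matmul(eta, Lam))
assert all(abs(lhs[i][j] - eta[i][j]) < 1e-12 for i in range(4) for j in range(4))
```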
Raising and lowering of indices
Main articles: Raising and lowering indices and tensor contraction
Technically, a non-degenerate bilinear form provides a map between a vector space and its dual; in this context, the map is between the tangent spaces of M and the cotangent spaces of M. At a point in M, the tangent and cotangent spaces are dual vector spaces (so the dimension of the cotangent space at an event is also 4). Just as an authentic inner product on a vector space with one argument fixed, by Riesz representation theorem, may be expressed as the action of a linear functional on the vector space, the same holds for the Minkowski inner product of Minkowski space.[19]
Thus if vμ are the components of a vector in a tangent space, then ημν vμ = vν are the components of a vector in the cotangent space (a linear functional). Due to the identification of vectors in tangent spaces with vectors in M itself, this is mostly ignored, and vectors with lower indices are referred to as covariant vectors. In this latter interpretation, the covariant vectors are (almost always implicitly) identified with vectors (linear functionals) in the dual of Minkowski space. The ones with upper indices are contravariant vectors. In the same fashion, the inverse of the map from tangent to cotangent spaces, explicitly given by the inverse of η in matrix representation, can be used to define raising of an index. The components of this inverse are denoted ημν. It happens that ημν = ημν. These maps between a vector space and its dual can be denoted η♭ (eta-flat) and η♯ (eta-sharp) by the musical analogy.[20]
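Index lowering and raising as just described can be sketched componentwise. The numbers below are our own; the signature is (− + + +), for which ημν and ημν share the same diagonal:

```python
# Lower an index with eta_{mu nu}, then raise it back with eta^{mu nu}.
eta = [-1, 1, 1, 1]        # diagonal of eta_{mu nu}
eta_inv = [-1, 1, 1, 1]    # diagonal of eta^{mu nu}; here eta^{-1} = eta

v_up = [2.0, 1.0, 0.0, 3.0]                     # contravariant components v^mu
v_dn = [eta[m] * v_up[m] for m in range(4)]     # v_mu = eta_{mu nu} v^nu
assert v_dn == [-2.0, 1.0, 0.0, 3.0]

v_back = [eta_inv[m] * v_dn[m] for m in range(4)]  # v^mu = eta^{mu nu} v_nu
assert v_back == v_up
```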
Contravariant and covariant vectors are geometrically very different objects. The first can and should be thought of as arrows. A linear functional can be characterized by two objects: its kernel, which is a hyperplane passing through the origin, and its norm. Geometrically thus, covariant vectors should be viewed as a set of hyperplanes, with spacing depending on the norm (bigger = smaller spacing), with one of them (the kernel) passing through the origin. The mathematical term for a covariant vector is 1-covector or 1-form (though the latter is usually reserved for covector fields).
One quantum mechanical analogy explored in the literature is that of a de Broglie wave (scaled by a factor of Planck's reduced constant) associated with a momentum four-vector to illustrate how one could imagine a covariant version of a contravariant vector. The inner product of two contravariant vectors could equally well be thought of as the action of the covariant version of one of them on the contravariant version of the other. The inner product is then how many times the arrow pierces the planes.[18] The mathematical reference, Lee (2003), offers the same geometrical view of these objects (but mentions no piercing).
The electromagnetic field tensor is a differential 2-form, whose geometrical description can likewise be found in MTW.
One may, of course, ignore geometrical views altogether (as is the style in e.g. Weinberg (2002) and Landau & Lifshitz (2002)) and proceed algebraically in a purely formal fashion. The time-proven robustness of the formalism itself, sometimes referred to as index gymnastics, ensures that moving vectors around and changing from contravariant to covariant vectors and vice versa (as well as higher order tensors) is mathematically sound. Incorrect expressions tend to reveal themselves quickly.
Coordinate free raising and lowering
Given a bilinear form $\eta :M\times M\rightarrow \mathbb {R} $, the lowered version of a vector can be thought of as the partial evaluation of $\eta $, that is, there is an associated partial evaluation map
$\eta (\cdot ,-):M\rightarrow M^{*};v\mapsto \eta (v,\cdot ).$
The lowered vector $\eta (v,\cdot )\in M^{*}$ is then the dual map $u\mapsto \eta (v,u)$. Note it does not matter which argument is partially evaluated due to symmetry of $\eta $.
Non-degeneracy is then equivalent to injectivity of the partial evaluation map, or equivalently non-degeneracy tells us the kernel of the map is trivial. In finite dimension, as we have here, and noting that the dimension of a finite dimensional space is equal to the dimension of the dual, this is enough to conclude the partial evaluation map is a linear isomorphism from $M$ to $M^{*}$. This then allows definition of the inverse partial evaluation map,
$\eta ^{-1}:M^{*}\rightarrow M,$
which allows us to define the inverse metric
$\eta ^{-1}:M^{*}\times M^{*}\rightarrow \mathbb {R} ,\eta ^{-1}(\alpha ,\beta )=\eta (\eta ^{-1}(\alpha ),\eta ^{-1}(\beta ))$
where the two different usages of $\eta ^{-1}$ can be told apart by the argument each is evaluated on. This can then be used to raise indices. If we work in a coordinate basis, we find that the components of the inverse metric $\eta ^{-1}$ are indeed the matrix inverse of the components of $\eta .$
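The closing remark can be illustrated in a two-dimensional (t, x) toy computation with a made-up non-orthonormal basis, where the 2 × 2 inverse is explicit (basis-change matrix B and all numbers are our own choices):

```python
# In a coordinate basis the components of the inverse metric are the matrix
# inverse of the components of eta. Restricted to the t-x plane (c = 1).
eta = [[-1.0, 0.0], [0.0, 1.0]]     # eta in an orthonormal basis
B = [[1.0, 0.5], [0.0, 1.0]]        # an invertible change of basis (assumed)

# metric components in the new basis: g = B^T eta B
g = [[sum(B[k][i] * eta[k][l] * B[l][j] for k in range(2) for l in range(2))
      for j in range(2)] for i in range(2)]

# explicit 2x2 matrix inverse: components of the inverse metric g^{-1}
det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
g_inv = [[g[1][1] / det, -g[0][1] / det], [-g[1][0] / det, g[0][0] / det]]

# g g^{-1} is the identity
prod = [[sum(g[i][k] * g_inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```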
The formalism of the Minkowski metric
The present purpose is to show, semi-rigorously, how one may formally apply the Minkowski metric to two vectors and obtain a real number, i.e. to display the role of the differentials and how they disappear in a calculation. The setting is that of smooth manifold theory, and concepts such as covector fields and exterior derivatives are introduced.
A formal approach to the Minkowski metric
A full-blown version of the Minkowski metric in coordinates as a tensor field on spacetime has the appearance
$\eta _{\mu \nu }dx^{\mu }\otimes dx^{\nu }=\eta _{\mu \nu }dx^{\mu }\odot dx^{\nu }=\eta _{\mu \nu }dx^{\mu }dx^{\nu }.$
Explanation: The coordinate differentials are 1-form fields. They are defined as the exterior derivative of the coordinate functions xμ. These quantities evaluated at a point p provide a basis for the cotangent space at p. The tensor product (denoted by the symbol ⊗) yields a tensor field of type (0, 2), i.e. the type that expects two contravariant vectors as arguments. On the right hand side, the symmetric product (denoted by the symbol ⊙ or by juxtaposition) has been taken. The equality holds since, by definition, the Minkowski metric is symmetric.[21] The notation on the far right is also sometimes used for the related, but different, line element. It is not a tensor. For elaboration on the differences and similarities, see Misner, Thorne & Wheeler (1973, Box 3.2 and section 13.2.)
Tangent vectors are, in this formalism, given in terms of a basis of differential operators of the first order,
$\left.{\frac {\partial }{\partial x^{\mu }}}\right|_{p},$
where p is an event. This operator applied to a function f gives the directional derivative of f at p in the direction of increasing xμ with xν, ν ≠ μ fixed. They provide a basis for the tangent space at p.
The exterior derivative df of a function f is a covector field, i.e. an assignment of a cotangent vector to each point p, by definition such that
$df(X)=Xf,$
for each vector field X. A vector field is an assignment of a tangent vector to each point p. In coordinates X can be expanded at each point p in the basis given by the ∂/∂xν|p. Applying this with f = xμ, the coordinate function itself, and X = ∂/∂xν, called a coordinate vector field, one obtains
$dx^{\mu }\left({\frac {\partial }{\partial x^{\nu }}}\right)={\frac {\partial x^{\mu }}{\partial x^{\nu }}}=\delta _{\nu }^{\mu }.$
Since this relation holds at each point p, the dxμ|p provide a basis for the cotangent space at each p and the bases dxμ|p and ∂/∂xν|p are dual to each other,
$\left.dx^{\mu }\right|_{p}\left(\left.{\frac {\partial }{\partial x^{\nu }}}\right|_{p}\right)=\delta _{\nu }^{\mu }.$
at each p. Furthermore, one has
$\alpha \otimes \beta (a,b)=\alpha (a)\beta (b)$
for general one-forms on a tangent space α, β and general tangent vectors a, b. (This can be taken as a definition, but may also be proved in a more general setting.)
Thus when the metric tensor is fed two vector fields a, b, both expanded in terms of the basis coordinate vector fields, the result is
$\eta _{\mu \nu }dx^{\mu }\otimes dx^{\nu }(a,b)=\eta _{\mu \nu }a^{\mu }b^{\nu },$
where aμ, bν are the component functions of the vector fields. The above equation holds at each point p, and the relation may as well be interpreted as the Minkowski metric at p applied to two tangent vectors at p.
As mentioned, in a vector space, such as that modelling the spacetime of special relativity, tangent vectors can be canonically identified with vectors in the space itself, and vice versa. This means that the tangent spaces at each point are canonically identified with each other and with the vector space itself. This explains how the right hand side of the above equation can be employed directly, without regard to the spacetime point at which the metric is evaluated or to the tangent space from which the vectors come.
This situation changes in general relativity. There one has
$g(p)_{\mu \nu }\left.dx^{\mu }\right|_{p}\left.dx^{\nu }\right|_{p}(a,b)=g(p)_{\mu \nu }a^{\mu }b^{\nu },$
where now η → g(p), i.e., g is still a metric tensor but now depending on spacetime and is a solution of Einstein's field equations. Moreover, a, b must be tangent vectors at spacetime point p and can no longer be moved around freely.
Chronological and causality relations
Let x, y ∈ M. We say that
1. x chronologically precedes y if y − x is future-directed timelike. This relation has the transitive property and so can be written x < y.
2. x causally precedes y if y − x is future-directed null or future-directed timelike. It gives a partial ordering of spacetime and so can be written x ≤ y.
Suppose x ∈ M is timelike. Then the simultaneous hyperplane for x is $\{y:\eta (x,y)=0\}.$ Since this hyperplane varies as x varies, there is a relativity of simultaneity in Minkowski space.
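A sketch of the two relations just defined (c = 1, helper names our own):

```python
# x chronologically precedes y when y - x is future-directed timelike;
# x causally precedes y when y - x is future-directed timelike or null.
def interval2(v):
    return v[0]**2 - v[1]**2 - v[2]**2 - v[3]**2   # c^2 t^2 - r^2, c = 1

def chronologically_precedes(x, y):
    d = [b - a for a, b in zip(x, y)]
    return d[0] > 0 and interval2(d) > 0

def causally_precedes(x, y):
    d = [b - a for a, b in zip(x, y)]
    return d[0] > 0 and interval2(d) >= 0

origin = (0, 0, 0, 0)
assert chronologically_precedes(origin, (2, 1, 0, 0))
assert causally_precedes(origin, (1, 1, 0, 0))          # null separation
assert not chronologically_precedes(origin, (1, 1, 0, 0))
```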
Generalizations
Main articles: Lorentzian manifold and Super Minkowski space
A Lorentzian manifold is a generalization of Minkowski space in two ways. The total number of spacetime dimensions is not restricted to be 4 (it can be 2 or more), and a Lorentzian manifold need not be flat, i.e. it allows for curvature.
Complexified Minkowski space
Complexified Minkowski space is defined as Mc = M ⊕ iM.[22] Its real part is the Minkowski space of four-vectors, such as the four-velocity and the four-momentum, which are independent of the choice of orientation of the space. The imaginary part, on the other hand, may consist of four-pseudovectors, such as angular velocity and magnetic moment, which change their direction with a change of orientation. We introduce a pseudoscalar i which also changes sign with a change of orientation. Thus, elements of Mc are independent of the choice of the orientation.
The inner product-like structure on Mc is defined as u ⋅ v = η(u,v) for any u,v ∈ Mc. A relativistic pure spin of an electron or any half spin particle is described by ρ ∈ Mc as ρ = u+is, where u is the four-velocity of the particle, satisfying u2 = 1 and s is the 4D spin vector,[23] which is also the Pauli–Lubanski pseudovector satisfying s2 = −1 and u⋅s = 0.
Generalized Minkowski space
Minkowski space refers to a mathematical formulation in four dimensions. However, the mathematics can easily be extended or simplified to create an analogous generalized Minkowski space in any number of dimensions. If n ≥ 2, n-dimensional Minkowski space is a vector space of real dimension n on which there is a constant Minkowski metric of signature (n − 1, 1) or (1, n − 1). These generalizations are used in theories where spacetime is assumed to have more or fewer than 4 dimensions. String theory and M-theory are two examples where n > 4. In string theory, conformal field theories with 1 + 1 spacetime dimensions appear.
de Sitter space can be formulated as a submanifold of generalized Minkowski space as can the model spaces of hyperbolic geometry (see below).
Curvature
Since Minkowski spacetime is flat, its three spatial components always obey the Pythagorean theorem. Minkowski space is a suitable basis for special relativity, a good description of physical systems over finite distances when gravitation is not significant. However, in order to take gravity into account, physicists use the theory of general relativity, which is formulated in the mathematics of non-Euclidean geometry. When this geometry is used as a model of physical space, it is known as curved space.
Even in curved space, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities).[nb 6] More abstractly, we say that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski space. Thus, the structure of Minkowski space is still essential in the description of general relativity.
Geometry
The meaning of the term geometry for the Minkowski space depends heavily on the context. Minkowski space is not endowed with a Euclidean geometry, nor with any of the generalized Riemannian geometries with intrinsic curvature, such as those exhibited by the model spaces of hyperbolic geometry (negative curvature) and the geometry modeled by the sphere (positive curvature). The reason is the indefiniteness of the Minkowski metric. Minkowski space is, in particular, not a metric space and not a Riemannian manifold with a Riemannian metric. However, Minkowski space contains submanifolds endowed with a Riemannian metric yielding hyperbolic geometry.
Model spaces of hyperbolic geometry of low dimension, say 2 or 3, cannot be isometrically embedded in Euclidean space with one more dimension, i.e. ℝ3 or ℝ4 respectively, with the Euclidean metric g, disallowing easy visualization.[nb 7][24] By comparison, model spaces with positive curvature are just spheres in Euclidean space of one higher dimension.[25] Hyperbolic spaces can be isometrically embedded in spaces of one more dimension when the embedding space is endowed with the Minkowski metric η.
Define $\mathbf {H} _{R}^{1(n)}\subset \mathbf {M} ^{n+1}$ to be the upper sheet (ct > 0) of the hyperboloid
$\mathbf {H} _{R}^{1(n)}=\left\{\left(ct,x^{1},\ldots ,x^{n}\right)\in \mathbf {M} ^{n+1}:c^{2}t^{2}-\left(x^{1}\right)^{2}-\cdots -\left(x^{n}\right)^{2}=R^{2},ct>0\right\}$
in generalized Minkowski space Mn+1 of spacetime dimension n + 1. This is one of the surfaces of transitivity of the generalized Lorentz group. The induced metric on this submanifold,
$h_{R}^{1(n)}=\iota ^{*}\eta ,$
the pullback of the Minkowski metric η under inclusion, is a Riemannian metric. With this metric $\mathbf {H} _{R}^{1(n)}$ is a Riemannian manifold. It is one of the model spaces of Riemannian geometry, the hyperboloid model of hyperbolic space. It is a space of constant negative curvature −1/R².[26] The 1 in the upper index refers to an enumeration of the different model spaces of hyperbolic geometry, and the n for its dimension. A 2(2) corresponds to the Poincaré disk model, while 3(n) corresponds to the Poincaré half-space model of dimension n.
Preliminaries
In the definition above, ι: $\mathbf {H} _{R}^{1(n)}\rightarrow \mathbf {M} ^{n+1}$ is the inclusion map and the superscript star denotes the pullback. The present purpose is to describe this and similar operations as a preparation for the actual demonstration that $\mathbf {H} _{R}^{1(n)}$ is a hyperbolic space.
Behavior of tensors under inclusion, pullback of covariant tensors under general maps and pushforward of vectors under general maps
Behavior of tensors under inclusion:
For inclusion maps from a submanifold S into M and a covariant tensor α of order k on M it holds that
$\iota ^{*}\alpha \left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right)=\alpha \left(\iota _{*}X_{1},\,\iota _{*}X_{2},\,\ldots ,\,\iota _{*}X_{k}\right)=\alpha \left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right),$
where X1, X2, …, Xk are vector fields on S. The subscript star denotes the pushforward (to be introduced later), and it is in this special case simply the identity map (as is the inclusion map). The latter equality holds because a tangent space to a submanifold at a point is in a canonical way a subspace of the tangent space of the manifold itself at the point in question. One may simply write
$\iota ^{*}\alpha =\alpha |_{S},$
meaning (with slight abuse of notation) the restriction of α to accept as input vectors tangent to some s ∈ S only.
Pullback of tensors under general maps:
The pullback of a covariant k-tensor α (one taking only contravariant vectors as arguments) under a map F: M → N is a linear map
$F^{*}\colon T_{F(p)}^{k}N\rightarrow T_{p}^{k}M,$
where for any vector space V,
$T^{k}V=\underbrace {V^{*}\otimes V^{*}\otimes \cdots \otimes V^{*}} _{k{\text{ times}}}.$
It is defined by
$F^{*}(\alpha )\left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right)=\alpha \left(F_{*}X_{1},\,F_{*}X_{2},\,\ldots ,\,F_{*}X_{k}\right),$
where the subscript star denotes the pushforward of the map F, and X1, X2, …, Xk are vectors in TpM. (This is in accord with what was detailed about the pullback of the inclusion map. In the general case here, one cannot proceed as simply because F∗X1 ≠ X1 in general.)
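For a linear map F with matrix J, this definition reduces to matrix algebra: the pullback of a covariant 2-tensor with matrix A is JᵀAJ, since F∗α(X, Y) = α(JX, JY). A minimal numeric sketch (the matrices are arbitrary examples):

```python
import numpy as np

# For a linear map F(x) = J x, the pullback F* of a covariant 2-tensor alpha
# with matrix A has matrix J^T A J, because
# (F* alpha)(X, Y) = alpha(J X, J Y) = X^T (J^T A J) Y.
J = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0]])          # a linear map R^2 -> R^3
A = np.diag([1.0, -1.0, 2.0])       # a symmetric 2-tensor on R^3
pullback = J.T @ A @ J              # the pulled-back 2-tensor on R^2

X = np.array([1.0, 0.0])
Y = np.array([0.0, 1.0])
lhs = X @ pullback @ Y              # (F* alpha)(X, Y)
rhs = (J @ X) @ A @ (J @ Y)         # alpha(F_* X, F_* Y)
assert np.isclose(lhs, rhs)
```

For a nonlinear F the same formula holds pointwise, with J the Jacobian of F at the point in question.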
The pushforward of vectors under general maps:
Heuristically, pulling back a tensor to p ∈ M from F(p) ∈ N feeding it vectors residing at p ∈ M is by definition the same as pushing forward the vectors from p ∈ M to F(p) ∈ N feeding them to the tensor residing at F(p) ∈ N.
Further unwinding the definitions, the pushforward F∗: TpM → TF(p)N of a vector field under a map F: M → N between manifolds is defined by
$F_{*}(X)f=X(f\circ F),$
where f is a function on N. When M = ℝm, N= ℝn the pushforward of F reduces to DF: ℝm → ℝn, the ordinary differential, which is given by the Jacobian matrix of partial derivatives of the component functions. The differential is the best linear approximation of a function F from ℝm to ℝn. The pushforward is the smooth manifold version of this. It acts between tangent spaces, and is in coordinates represented by the Jacobian matrix of the coordinate representation of the function.
The corresponding pullback is the dual map from the dual of the range tangent space to the dual of the domain tangent space, i.e. it is a linear map,
$F^{*}\colon T_{F(p)}^{*}N\rightarrow T_{p}^{*}M.$
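Since the pushforward acts in coordinates as the Jacobian, it can be approximated numerically as a directional derivative. A sketch using a polar-coordinates map (the function names are illustrative):

```python
import numpy as np

# The pushforward F_* at a point p acts on a tangent vector X as the
# Jacobian-vector product DF(p) X. Here F is the polar-coordinates map
# (r, theta) -> (r cos theta, r sin theta).
def F(p):
    r, theta = p
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def pushforward(F, p, X, eps=1e-7):
    """Numeric directional derivative DF(p) X by central differences."""
    return (F(p + eps * X) - F(p - eps * X)) / (2 * eps)

p = np.array([2.0, 0.0])
X = np.array([1.0, 0.0])       # the radial direction at p
print(pushforward(F, p, X))    # approx [1, 0]: d/dr of (r cos 0, r sin 0)
```

The angular direction (0, 1) at the same point is pushed forward to approximately (0, 2), matching the Jacobian column (−r sin θ, r cos θ) at r = 2, θ = 0.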
Hyperbolic stereographic projection
In order to exhibit the metric, it is necessary to pull it back via a suitable parametrization. A parametrization of a submanifold S of M is a map U ⊂ Rm → M whose range is an open subset of S. If S has the same dimension as M, a parametrization is just the inverse of a coordinate map φ: M → U ⊂ Rm. The parametrization to be used is the inverse of hyperbolic stereographic projection. This is illustrated in the figure to the right for n = 2. It is instructive to compare to stereographic projection for spheres.
Stereographic projection σ: $\mathbf {H} _{R}^{n}\rightarrow \mathbb {R} ^{n}$ and its inverse σ−1: $\mathbb {R} ^{n}\rightarrow \mathbf {H} _{R}^{n}$ are given by
${\begin{aligned}\sigma (\tau ,\mathbf {x} )=\mathbf {u} &={\frac {R\mathbf {x} }{R+\tau }},\\\sigma ^{-1}(\mathbf {u} )=(\tau ,\mathbf {x} )&=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right),\end{aligned}}$
where, for simplicity, τ ≡ ct. The (τ, x) are coordinates on Mn+1 and the u are coordinates on Rn.
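These two formulas can be checked numerically: σ⁻¹ should land on the upper hyperboloid sheet, and σ should undo it. A small sketch (n = 2; the function names mirror the symbols above and are not a standard API):

```python
import numpy as np

R = 2.0

def sigma(tau, x):
    """Hyperbolic stereographic projection: u = R x / (R + tau)."""
    return R * x / (R + tau)

def sigma_inv(u):
    """Its inverse; returns (tau, x) on the upper hyperboloid sheet."""
    s = np.dot(u, u)                      # |u|^2
    tau = R * (R**2 + s) / (R**2 - s)
    x = 2 * R**2 * u / (R**2 - s)
    return tau, x

u = np.array([0.3, -0.5])
tau, x = sigma_inv(u)
# The image satisfies tau^2 - |x|^2 = R^2 with tau > 0 ...
assert np.isclose(tau**2 - np.dot(x, x), R**2) and tau > 0
# ... and projecting back recovers u exactly.
assert np.allclose(sigma(tau, x), u)
```

The round trip is exact because (R² − |u|²)(R + τ) = 2R³, so σ(σ⁻¹(u)) = u identically for |u| < R.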
Detailed derivation
Let
$\mathbf {H} _{R}^{n}=\left\{\left(\tau ,x^{1},\ldots ,x^{n}\right)\in \mathbf {M} ^{n+1}:-\tau ^{2}+\left(x^{1}\right)^{2}+\cdots +\left(x^{n}\right)^{2}=-R^{2},\tau >0\right\}$
and let
$S=(-R,0,\ldots ,0)$
If
$P=\left(\tau ,x^{1},\ldots ,x^{n}\right)\in \mathbf {H} _{R}^{n},$
then it is geometrically clear that the vector
${\overrightarrow {PS}}$
intersects the hyperplane
$\left\{\left(\tau ,x^{1},\ldots ,x^{n}\right)\in M:\tau =0\right\}$
exactly once, in a point denoted
$U=\left(0,u^{1}(P),\ldots ,u^{n}(P)\right)\equiv (0,\mathbf {u} ).$
One has
${\begin{aligned}S+{\overrightarrow {SU}}&=U\Rightarrow {\overrightarrow {SU}}=U-S,\\S+{\overrightarrow {SP}}&=P\Rightarrow {\overrightarrow {SP}}=P-S\end{aligned}}.$
or
${\begin{aligned}{\overrightarrow {SU}}&=(0,\mathbf {u} )-(-R,\mathbf {0} )=(R,\mathbf {u} ),\\{\overrightarrow {SP}}&=(\tau ,\mathbf {x} )-(-R,\mathbf {0} )=(\tau +R,\mathbf {x} ).\end{aligned}}$
By construction of stereographic projection one has
${\overrightarrow {SU}}=\lambda (\tau ){\overrightarrow {SP}}.$
This leads to the system of equations
${\begin{aligned}R&=\lambda (\tau +R),\\\mathbf {u} &=\lambda \mathbf {x} .\end{aligned}}$
The first of these is solved for $\lambda $, yielding $\lambda =R/(\tau +R)$, and one obtains for stereographic projection
$\sigma (\tau ,\mathbf {x} )=\mathbf {u} ={\frac {R\mathbf {x} }{R+\tau }}.$
Next, the inverse $\sigma ^{-1}(u)=(\tau ,\mathbf {x} )$ must be calculated. Use the same considerations as before, but now with
${\begin{aligned}U&=(0,\mathbf {u} )\\P&=(\tau (\mathbf {u} ),\mathbf {x} (\mathbf {u} )).\end{aligned}}$
One gets
${\begin{aligned}\tau &={\frac {R(1-\lambda )}{\lambda }},\\\mathbf {x} &={\frac {\mathbf {u} }{\lambda }},\end{aligned}}$
but now with $\lambda $ depending on $\mathbf {u} .$ The condition for P lying in the hyperboloid is
$-\tau ^{2}+|\mathbf {x} |^{2}=-R^{2},$
or
$-{\frac {R^{2}(1-\lambda )^{2}}{\lambda ^{2}}}+{\frac {|\mathbf {u} |^{2}}{\lambda ^{2}}}=-R^{2},$
leading to
$\lambda ={\frac {R^{2}-|u|^{2}}{2R^{2}}}.$
With this $\lambda $, one obtains
$\sigma ^{-1}(\mathbf {u} )=(\tau ,\mathbf {x} )=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right).$
Pulling back the metric
One has
$h_{R}^{1(n)}=\eta |_{\mathbf {H} _{R}^{1(n)}}=\left(dx^{1}\right)^{2}+\ldots +\left(dx^{n}\right)^{2}-d\tau ^{2}$
and the map
$\sigma ^{-1}:\mathbb {R} ^{n}\rightarrow \mathbf {H} _{R}^{1(n)};\quad \sigma ^{-1}(\mathbf {u} )=(\tau (\mathbf {u} ),\,\mathbf {x} (\mathbf {u} ))=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},\,{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right).$
The pulled back metric can be obtained by straightforward methods of calculus;
$\left.\left(\sigma ^{-1}\right)^{*}\eta \right|_{\mathbf {H} _{R}^{1(n)}}=\left(dx^{1}(\mathbf {u} )\right)^{2}+\ldots +\left(dx^{n}(\mathbf {u} )\right)^{2}-\left(d\tau (\mathbf {u} )\right)^{2}.$
One computes according to the standard rules for computing differentials (though one is really computing the rigorously defined exterior derivatives),
${\begin{aligned}dx^{1}(\mathbf {u} )&=d\left({\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}\right)={\frac {\partial }{\partial u^{1}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{1}+\ldots +{\frac {\partial }{\partial u^{n}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{n}+{\frac {\partial }{\partial \tau }}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}d\tau ,\\&\ \ \vdots \\dx^{n}(\mathbf {u} )&=d\left({\frac {2R^{2}u^{n}}{R^{2}-|u|^{2}}}\right)=\cdots ,\\d\tau (\mathbf {u} )&=d\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)=\cdots ,\end{aligned}}$
and substitutes the results into the right hand side. This yields
$\left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}={\frac {4R^{4}\left[\left(du^{1}\right)^{2}+\ldots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{2}}}\equiv h_{R}^{2(n)}.$
Detailed outline of computation
One has
${\begin{aligned}{\frac {\partial }{\partial u^{1}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{1}&={\frac {2R^{2}\left(R^{2}-|u|^{2}\right)+4R^{2}\left(u^{1}\right)^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}du^{1},\\{\frac {\partial }{\partial u^{2}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{2}&={\frac {4R^{2}u^{1}u^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}du^{2},\end{aligned}}$
and
${\frac {\partial }{\partial \tau }}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}d\tau =0.$
With this one may write
$dx^{1}(\mathbf {u} )={\frac {2R^{2}\left(R^{2}-|u|^{2}\right)du^{1}+4R^{2}u^{1}(\mathbf {u} \cdot d\mathbf {u} )}{\left(R^{2}-|u|^{2}\right)^{2}}},$
from which
$\left(dx^{1}(\mathbf {u} )\right)^{2}={\frac {4R^{4}\left(R^{2}-|u|^{2}\right)^{2}\left(du^{1}\right)^{2}+16R^{4}\left(R^{2}-|u|^{2}\right)\left(\mathbf {u} \cdot d\mathbf {u} \right)u^{1}du^{1}+16R^{4}\left(u^{1}\right)^{2}\left(\mathbf {u} \cdot d\mathbf {u} \right)^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.$
Summing this formula one obtains
${\begin{aligned}&\left(dx^{1}(\mathbf {u} )\right)^{2}+\ldots +\left(dx^{n}(\mathbf {u} )\right)^{2}\\={}&{\frac {4R^{4}\left(R^{2}-|u|^{2}\right)^{2}\left[\left(du^{1}\right)^{2}+\ldots +\left(du^{n}\right)^{2}\right]+16R^{4}\left(R^{2}-|u|^{2}\right)(\mathbf {u} \cdot d\mathbf {u} )^{2}+16R^{4}|u|^{2}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}\\={}&{\frac {4R^{4}\left(R^{2}-|u|^{2}\right)^{2}\left[\left(du^{1}\right)^{2}+\ldots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{4}}}+R^{2}{\frac {16R^{4}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.\end{aligned}}$
Similarly, for τ one gets
$d\tau =\sum _{i=1}^{n}{\frac {\partial }{\partial u^{i}}}\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)du^{i}=\sum _{i=1}^{n}{\frac {4R^{3}u^{i}du^{i}}{\left(R^{2}-|u|^{2}\right)^{2}}},$
yielding
$-d\tau ^{2}=-\left({\frac {4R^{3}\left(\mathbf {u} \cdot d\mathbf {u} \right)}{\left(R^{2}-|u|^{2}\right)^{2}}}\right)^{2}=-R^{2}{\frac {16R^{4}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.$
Now add this contribution to finally get
$\left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}={\frac {4R^{4}\left[\left(du^{1}\right)^{2}+\ldots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{2}}}\equiv h_{R}^{2(n)}.$
This last equation shows that the metric on the ball is identical to the Riemannian metric $h_{R}^{2(n)}$ in the Poincaré ball model, another standard model of hyperbolic geometry.
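The pullback computation above can be verified symbolically for n = 2 (a sketch, assuming SymPy is available; the variable names are ours): build the Jacobian of σ⁻¹, contract it with η in the (τ, x¹, x²) ordering, and compare with the conformal Poincaré-ball factor.

```python
import sympy as sp

# Symbolic check (n = 2) that pulling back eta = dx1^2 + dx2^2 - dtau^2
# along sigma^{-1} gives the Poincare-ball metric
# 4 R^4 (du1^2 + du2^2) / (R^2 - |u|^2)^2.
R, u1, u2 = sp.symbols('R u1 u2', positive=True)
s = u1**2 + u2**2                       # |u|^2
tau = R * (R**2 + s) / (R**2 - s)
x1 = 2 * R**2 * u1 / (R**2 - s)
x2 = 2 * R**2 * u2 / (R**2 - s)

J = sp.Matrix([tau, x1, x2]).jacobian([u1, u2])
eta = sp.diag(-1, 1, 1)                 # eta in the (tau, x1, x2) ordering
G = sp.simplify(J.T * eta * J)          # pulled-back metric matrix

conf = 4 * R**4 / (R**2 - s)**2         # expected conformal factor
diff = (G - conf * sp.eye(2)).applyfunc(sp.simplify)
assert diff == sp.zeros(2, 2)
```

The check confirms the 4R⁴ coefficient and that all cross terms in (u ⋅ du) cancel between the spatial part and −dτ².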
Alternative calculation using the pushforward
The pullback can be computed in a different fashion. By definition,
$\left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}(V,\,V)=h_{R}^{1(n)}\left(\left(\sigma ^{-1}\right)_{*}V,\,\left(\sigma ^{-1}\right)_{*}V\right)=\eta |_{\mathbf {H} _{R}^{1(n)}}\left(\left(\sigma ^{-1}\right)_{*}V,\,\left(\sigma ^{-1}\right)_{*}V\right).$
In coordinates,
$\left(\sigma ^{-1}\right)_{*}V=\left(\sigma ^{-1}\right)_{*}\left(V^{i}{\frac {\partial }{\partial u^{i}}}\right)=V^{i}{\frac {\partial x^{j}}{\partial u^{i}}}{\frac {\partial }{\partial x^{j}}}+V^{i}{\frac {\partial \tau }{\partial u^{i}}}{\frac {\partial }{\partial \tau }}=Vx^{j}{\frac {\partial }{\partial x^{j}}}+V\tau {\frac {\partial }{\partial \tau }}.$
One has from the formula for σ–1
${\begin{aligned}Vx^{j}&=V^{i}{\frac {\partial }{\partial u^{i}}}\left({\frac {2R^{2}u^{j}}{R^{2}-|u|^{2}}}\right)={\frac {2R^{2}V^{j}}{R^{2}-|u|^{2}}}+{\frac {4R^{2}u^{j}\langle \mathbf {V} ,\,\mathbf {u} \rangle }{\left(R^{2}-|u|^{2}\right)^{2}}},\quad \left({\text{here }}V|u|^{2}=2\sum _{k=1}^{n}V^{k}u^{k}\equiv 2\langle \mathbf {V} ,\,\mathbf {u} \rangle \right)\\V\tau &=V\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)={\frac {4R^{3}\langle \mathbf {V} ,\,\mathbf {u} \rangle }{\left(R^{2}-|u|^{2}\right)^{2}}}.\end{aligned}}$
Lastly,
$\eta \left(\sigma _{*}^{-1}V,\,\sigma _{*}^{-1}V\right)=\sum _{j=1}^{n}\left(Vx^{j}\right)^{2}-(V\tau )^{2}={\frac {4R^{4}|V|^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}=h_{R}^{2(n)}(V,\,V),$
and the same conclusion is reached.
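This pushforward route can also be checked numerically (a sketch for n = 3 with arbitrary sample values of u and V; `sigma_inv` mirrors the formula for σ⁻¹ above): push V forward by a finite-difference directional derivative and evaluate η on the result.

```python
import numpy as np

# Numeric check of eta((sigma^{-1})_* V, (sigma^{-1})_* V)
# against the Poincare-ball value 4 R^4 |V|^2 / (R^2 - |u|^2)^2.
R = 1.5
u = np.array([0.2, 0.4, -0.1])      # a point in the ball, |u| < R
V = np.array([1.0, -2.0, 0.5])      # a tangent vector at u

def sigma_inv(u):
    """Inverse stereographic projection, returned as (tau, x^1, ..., x^n)."""
    s = np.dot(u, u)
    return np.concatenate(([R * (R**2 + s) / (R**2 - s)],
                           2 * R**2 * u / (R**2 - s)))

# Pushforward (sigma^{-1})_* V as a central-difference directional derivative.
eps = 1e-6
W = (sigma_inv(u + eps * V) - sigma_inv(u - eps * V)) / (2 * eps)

# eta(W, W) in the (tau, x) ordering, with eta = dx^2 - dtau^2:
lhs = np.dot(W[1:], W[1:]) - W[0]**2
rhs = 4 * R**4 * np.dot(V, V) / (R**2 - np.dot(u, u))**2
assert np.isclose(lhs, rhs, rtol=1e-5)
```

Up to finite-difference error, η evaluated on the pushed-forward vector agrees with the Poincaré-ball metric evaluated on V, which is the stated conclusion.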
See also
• Hyperbolic quaternion
• Hyperspace
• Introduction to the mathematics of general relativity
• Minkowski plane
Remarks
1. This makes spacetime distance an invariant.
2. Consistent use of the terms "Minkowski inner product", "Minkowski norm" or "Minkowski metric" is intended for the bilinear form here, since it is in widespread use. It is by no means "standard" in the literature, but no standard terminology seems to exist.
3. Translate the coordinate system so that the event is the new origin.
4. This corresponds to the time coordinate either increasing or decreasing when proper time for any particle increases. An application of T flips this direction.
5. For comparison and motivation of terminology, take a Riemannian metric, which provides a positive definite symmetric bilinear form, i. e. an inner product proper at each point on a manifold.
6. This similarity between flat space and curved space at infinitesimally small distance scales is foundational to the definition of a manifold in general.
7. There is an isometric embedding into ℝn according to the Nash embedding theorem (Nash (1956)), but the embedding dimension is much higher, n = (m/2)(m + 1)(3m + 11) for a Riemannian manifold of dimension m.
Notes
1. "Minkowski" Archived 2019-06-22 at the Wayback Machine. Random House Webster's Unabridged Dictionary.
2. Lee 1997, p. 31
3. Schutz, John W. (1977). Independent Axioms for Minkowski Space–Time (illustrated ed.). CRC Press. pp. 184–185. ISBN 978-0-582-31760-4. Extract of page 184
4. Poincaré 1905–1906, pp. 129–176 Wikisource translation: On the Dynamics of the Electron
5. Minkowski 1907–1908, pp. 53–111 *Wikisource translation: s:Translation:The Fundamental Equations for Electromagnetic Processes in Moving Bodies.
6. Minkowski 1908–1909, pp. 75–88 Various English translations on Wikisource: "Space and Time."
7. Cornelius Lanczos (1972) "Einstein's Path from Special to General Relativity", pages 5–19 of General Relativity: Papers in Honour of J. L. Synge, L. O'Raifeartaigh editor, Clarendon Press, see page 11
8. See Schutz's proof p 148, also Naber p.48
9. Schutz p.148, Naber p.49
10. Schutz p.148
11. Lee 1997, p. 15
12. Lee 2003, See Lee's discussion on geometric tangent vectors early in chapter 3.
13. Giulini 2008 pp. 5,6
14. Gregory L. Naber (2003). The Geometry of Minkowski Spacetime: An Introduction to the Mathematics of the Special Theory of Relativity (illustrated ed.). Courier Corporation. p. 8. ISBN 978-0-486-43235-9. Archived from the original on 2022-12-26. Retrieved 2022-12-26. Extract of page 8 Archived 2022-12-26 at the Wayback Machine
15. Sean M. Carroll (2019). Spacetime and Geometry (illustrated, herdruk ed.). Cambridge University Press. p. 7. ISBN 978-1-108-48839-6.
16. Sard 1970, p. 71
17. Minkowski, Landau & Lifshitz 2002, p. 4
18. Misner, Thorne & Wheeler 1973
19. Lee 2003. One point in Lee's proof of existence of this map needs modification (Lee deals with Riemannian metrics.). Where Lee refers to positive definiteness to show injectivity of the map, one needs instead appeal to non-degeneracy.
20. Lee 2003, The tangent-cotangent isomorphism p. 282.
21. Lee 2003
22. Y. Friedman, A Physically Meaningful Relativistic Description of the Spin State of an Electron, Symmetry 2021, 13(10), 1853; https://doi.org/10.3390/sym13101853 Archived 2023-08-13 at the Wayback Machine
23. Jackson, J. D. (1998). Classical Electrodynamics (3rd ed.). Hoboken, NJ: John Wiley & Sons.
24. Lee 1997, p. 66
25. Lee 1997, p. 33
26. Lee 1997
References
• Corry, L. (1997). "Hermann Minkowski and the postulate of relativity". Arch. Hist. Exact Sci. 51 (4): 273–314. doi:10.1007/BF00518231. ISSN 0003-9519. S2CID 27016039.
• Catoni, F.; et al. (2008). Mathematics of Minkowski Space. Frontiers in Mathematics. Basel: Birkhäuser Verlag. doi:10.1007/978-3-7643-8614-6. ISBN 978-3-7643-8613-9. ISSN 1660-8046.
• Galison, P. L. (1979). R McCormach; et al. (eds.). Minkowski's Space–Time: from visual thinking to the absolute world. Historical Studies in the Physical Sciences. Vol. 10. Johns Hopkins University Press. pp. 85–121. doi:10.2307/27757388. JSTOR 27757388.
• Giulini, D. (2008). "The Rich Structure of Minkowski Space". arXiv:0802.4345.
• Kleppner, D.; Kolenkow, R. J. (1978) [1973]. An Introduction to Mechanics. London: McGraw-Hill. ISBN 978-0-07-035048-9.
• Landau, L.D.; Lifshitz, E.M. (2002) [1939]. The Classical Theory of Fields. Course of Theoretical Physics. Vol. 2 (4th ed.). Butterworth–Heinemann. ISBN 0-7506-2768-9.
• Lee, J. M. (2003). Introduction to Smooth manifolds. Springer Graduate Texts in Mathematics. Vol. 218. ISBN 978-0-387-95448-6.
• Lee, J. M. (2012). Introduction to Smooth manifolds. Springer Graduate Texts in Mathematics. ISBN 978-1-4419-9981-8.
• Lee, J. M. (1997). Riemannian Manifolds – An Introduction to Curvature. Springer Graduate Texts in Mathematics. Vol. 176. New York · Berlin · Heidelberg: Springer Verlag. ISBN 978-0-387-98322-6.
• Minkowski, Hermann (1907–1908), "Die Grundgleichungen für die elektromagnetischen Vorgänge in bewegten Körpern" [The Fundamental Equations for Electromagnetic Processes in Moving Bodies], Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse: 53–111
• Published translation: Carus, Edward H. (1918). "Space and Time". The Monist. 28 (288): 288–302. doi:10.5840/monist19182826.
• Wikisource translation: The Fundamental Equations for Electromagnetic Processes in Moving Bodies.
• Minkowski, Hermann (1908–1909), "Raum und Zeit" [Space and Time], Physikalische Zeitschrift, 10: 75–88 Various English translations on Wikisource: Space and Time.
• Misner, Charles W.; Thorne, Kip. S.; Wheeler, John A. (1973), Gravitation, W. H. Freeman, ISBN 978-0-7167-0344-0.
• Naber, G. L. (1992). The Geometry of Minkowski Spacetime. New York: Springer-Verlag. ISBN 978-0-387-97848-2.
• Nash, J. (1956). "The Imbedding Problem for Riemannian Manifolds". Annals of Mathematics. 63 (1): 20–63. doi:10.2307/1969989. JSTOR 1969989. MR 0075639.
• Penrose, Roger (2005). "18 Minkowskian geometry". Road to Reality : A Complete Guide to the Laws of the Universe. Alfred A. Knopf. ISBN 9780679454434.
• Poincaré, Henri (1905–1906), "Sur la dynamique de l'électron" [On the Dynamics of the Electron], Rendiconti del Circolo Matematico di Palermo, 21: 129–176, Bibcode:1906RCMP...21..129P, doi:10.1007/BF03013466, hdl:2027/uiug.30112063899089, S2CID 120211823 Wikisource translation: On the Dynamics of the Electron.
• Robb A A: Optical Geometry of Motion; a New View of the Theory of Relativity Cambridge 1911, (Heffers). http://www.archive.org/details/opticalgeometryoOOrobbrich.
• Robb A A: Geometry of Time and Space, 1936 Cambridge Univ Press http://www.archive.org/details/geometryoftimean032218mbp.
• Sard, R. D. (1970). Relativistic Mechanics - Special Relativity and Classical Particle Dynamics. New York: W. A. Benjamin. ISBN 978-0805384918.
• Shaw, R. (1982). "§ 6.6 Minkowski space, § 6.7,8 Canonical forms pp 221–242". Linear Algebra and Group Representations. Academic Press. ISBN 978-0-12-639201-2.
• Walter, Scott A. (1999). "Minkowski, Mathematicians, and the Mathematical Theory of Relativity". In Goenner, Hubert; et al. (eds.). The Expanding Worlds of General Relativity. Boston: Birkhäuser. pp. 45–86. ISBN 978-0-8176-4060-6.
• Weinberg, S. (2002), The Quantum Theory of Fields, vol. 1, Cambridge University Press, ISBN 978-0-521-55001-7.
External links
Media related to Minkowski diagrams at Wikimedia Commons
• Animation clip on YouTube visualizing Minkowski space in the context of special relativity.
• The Geometry of Special Relativity: The Minkowski Space - Time Light Cone
• Minkowski space at PhilPapers