source_id (int64, 1–4.64M) | question (string, 0–28.4k chars) | response (string, 0–28.8k chars) | metadata (dict)
---|---|---|---|
356,512 |
I am totally new to academia, so I am really not sure how mathematicians work together. Can more experienced mathematicians here shed some light on how you find coauthors? I guess one way to do this is to attend conferences. But if one doesn't have the chance to do so, are there other options?
|
Alright, I'll try to outline several ways in which mathematicians find co-authors. (Since I'm still quite young, I lack the experience to judge how things might change later during a career, but given that you're new to academia, I hope that what follows fits your situation.) I'll try to write my answer as a kind of classification scheme.

Preliminary remark concerning terminology: Actually, mathematicians do not find co-authors - they find collaborators. By this I mean that it is probably not a good idea to approach people in order to "write a paper with them", but rather in order to "discuss and work with them on an interesting problem or idea". If this work turns out to be fruitful (and novel), you can then turn your results into a paper that you co-author together.

Your advisor: When you're new to academia, you are most likely doing a PhD (or something related), so you will have an advisor. In some (not all) cases your advisor is the most natural choice to discuss your ideas with, to seek explanations from, and so on. Depending on how large your advisor's share of your research is, you might write a paper alone or together with your advisor.

How to start a collaboration with your advisor: Please be advised that there are very different types of advisors, who work and behave quite differently. Some might push you quite often and actively urge you to work with them; others are much more reserved and will let you do your thing unless you actively go for a collaboration with them. Make sure that you find out, and keep in mind, what type of advisor yours is.

Other people from your institution: Another natural choice is to work with other people from your institution. This can vary in several ways: you might work with people at your level, with people who are a bit more advanced in their career than you are, or with people who are much more experienced than you.
When you are completely new to academia, it can be helpful to work with people who have a bit more experience than you (say, more advanced PhD students, or postdocs). They can offer you a bit of advice and guidance (in addition to your advisor). You might work with people who are very close to your field or research project, or you might discuss with people who have specialized in other things; when you find a common question of interest, you can combine your knowledge and ideas from two different fields to attack the question. Note that this can range from people working in nearby fields to people who do things that are, at first glance, very different from yours (but I would suspect that the "younger" you are, the better it is to work with people whose background is similar to yours).

How to start a collaboration with people from your institution: First rule: meet people, and discuss with them - at lunch, at tea, whenever. Do not sit alone in your office or at home all the time (well, Corona won't last forever...). Do not think "I'll discuss with XYZ now, maybe we can work on a project together", but rather think "I have a question in mind in which XYZ might be interested; let's see what she/he thinks about it". Of course, the details depend on which fields people are working in. Is there a PhD student or a postdoc with a field of research similar to yours? Go and discuss with them as often as you can! Ask them for their expertise; ask them questions you are thinking about (not necessarily research questions; also things which are probably known, but not to you, yet). Ask them what they are thinking about. Is there a postdoc in a field that is related, but not quite the same as yours? Ask them which problems they find interesting, and why. Ask them if they know how their work relates to your field, and whether they have done things similar to what you do.
$(*)$ Important: After discussing a topic that you found interesting, re-think what you discussed, look things up, and keep coming back to your colleagues with new information, new insights or new questions. Similarly, if somebody keeps coming back to you, and you find their questions or insights interesting, take the opportunity to discuss even more with them. After all, collaborations are not planned or directed - they grow.

One additional remark: Sometimes collaborations occur which might seem a bit surprising at first glance, as I can illustrate by a personal anecdote. During my PhD I gained some experience in functional analysis and operator theory, and briefly after I started my first postdoc position, a PhD student from stochastics knocked on my door: he was trying to solve an inverse problem in statistics which was given by a linear integral equation, and somebody had apparently told him that I'm the linear operator guy. Fortunately, I had time (or, say: I took the time), so we discussed the issue in several meetings and finally resolved it. In the end we wrote a joint paper, together with his advisor, that combined functional analysis and mathematical statistics. What do we learn from this anecdote? Well, if a problem occurs in your research that seems to stem from another field, it can be a good idea to just knock on somebody's door. And if somebody knocks on your door, it can be a good idea to take some time for them.

People from other institutions, part 1: visits at your university. Researchers usually travel a lot, so it will probably happen quite often that people from other institutions come to your university to give a talk and/or to stay for a (often short) period during which they work face-to-face with people from your institute. These researchers might have been invited by your advisor, or by somebody else, and their visits can be a good opportunity to find collaborators.
How to start a collaboration with a visiting researcher: Well, I'm beginning to repeat myself: talk to them, ask them questions, discuss. Most visiting researchers give a seminar talk, so attend such talks; if you find the topic of the talk interesting or useful (let alone both), ask them questions about it (after the talk, at tea, during lunch - often there's also a joint dinner after a talk, so you can go there, too). If you want to discuss even more, just knock on the research visitor's door. Most people are happy when they notice that somebody is interested in their work. Besides all this: $(*)$ applies, of course.

People from other institutions, part 2: visit other institutions yourself (if possible). Some PhD students have the possibility of research stays (a few days or even a few weeks) at other institutions. Since such visits are planned in advance, your host will expect you and will be prepared to spend considerable time working with you, so this is a great chance for collaboration.

How to visit other institutions: Of course, it depends on whether your institution (or your advisor) has sufficient funding. If funding is available, your advisor is most likely to know good opportunities for a visit or a research stay. Your advisor knows your research topic(s), and she/he knows other researchers in the field, so she/he might ask them whether it is possible for you to visit them. If your advisor does not suggest a research stay on their own - just ask.

People from other institutions, part 3: conferences and workshops. Of course, you can meet a lot of interesting and clever people there, and it is a good opportunity to find collaborators. However, many (not all) young academics tend to have a few misconceptions about conferences, so here is some advice.

How to find collaborators at a workshop/conference: First, and most importantly: a conference is a social event!
This means that it is not all about the talks - or rather, it is about the talks only insofar as talks are social interactions themselves. More concretely, this means (among other things):

If you attend a talk, ask questions! Do not be afraid to ask stupid questions. (In contrast to what some people claim, there do exist stupid questions, but the point is that they do no harm at all.) I tend to ask many questions, and a considerable fraction of them turns out to be stupid afterwards. But I learn from both types of questions (the stupid and the good ones), and people tend to remember the good ones much better than the stupid ones (at least I believe so).

The coffee breaks are important! Do not waste them on the preparation of your talk (instead, have your talk prepared before the conference commences), or on reading a paper, or on checking emails. Go drink coffee and talk to people (I personally like neither coffee nor tea, so I eat cookies instead). Ask the speakers additional questions there if you found their talk interesting; if you gave a talk, be there to give people the opportunity to ask you questions. This way, you will meet many people, and you will notice that you share common interests (and expertise!) with some of them. If this happens: keep in touch. After the conference, continue interesting discussions via email or video calls. Again, $(*)$ applies (this time by means of electronic communication rather than face-to-face).
|
{
"source": [
"https://mathoverflow.net/questions/356512",
"https://mathoverflow.net",
"https://mathoverflow.net/users/155590/"
]
}
|
356,618 |
Pyknotic and condensed sets have been introduced recently as a convenient framework for working with topological rings/algebras/groups/modules/etc. Recently there has been much (justified) excitement about these ideas and the theories coming from them, such as Scholze's analytic geometry. (Small note: the difference between pyknotic and condensed is essentially set-theoretic, as explained by Peter Scholze here.) On the other side, cohesion is a notion first introduced by Lawvere many years ago that aims to axiomatise what it means to be a category of "spaces". It has been developed further by Schreiber in the context of synthetic higher differential geometry (and also by Shulman in cohesive HoTT and by Rezk in global homotopy theory, to give a few other names in this direction). Recently, David Corfield started a very interesting discussion on the relation between these two notions at the $n$-Category Café. The aim of this question is basically to ask what's in the title: What is the precise relation between pyknoticity and cohesiveness? Along with a few subquestions: (On algebraic cohesion) It seems to me that the current notion of cohesion only works for smooth, differential-geometric spaces: we don't really have a good notion of algebraic cohesion (i.e. cohesion for schemes/stacks/etc.) or $p$-adic variants (rigid/Berkovich/adic/etc. spaces). Is this indeed the case? (On the relevance of cohesion to AG and homotopy theory) Despite its very young age, it's already clear that condensed/pyknotic technology is very useful and is probably going to be fruitfully applied to problems in homotopy theory and algebraic geometry. Can the same be said of cohesion? (On "condensed cohesion") Cohesion is a relative notion: not only do we have cohesive topoi but also cohesive morphisms of topoi, which recover the former in the special case of cohesive morphisms to the punctual topos.
Scholze has suggested in the comments of the linked $n$-CatCafé discussion that we should consider cohesion not only with respect to $\mathrm{Sets}$, but also with respect to condensed sets. What benefits does this approach present? Is this (or some variant of this idea) a convenient notion of "cohesion" for algebraic geometry?
|
The work on analytic geometry is all joint with Dustin Clausen! Your main question seems a little vague to me, but let me try to get at it by answering the subquestions. See also the discussion at the nCatCafe. Also, as David Corfield comments, much of this had been observed long before: https://nforum.ncatlab.org/discussion/5473/etale-site/?Focus=43431#Comment_43431 Yes, I think cohesion does not work in algebraic or $p$-adic contexts. The issue is that schemes or rigid-analytic spaces are just not locally contractible. Cohesion does not seem to have been applied in algebraic or $p$-adic contexts. However, I realized recently (before this nCatCafe discussion), in my project with Laurent Fargues on the geometrization of the local Langlands correspondence, that the existence of the left adjoint to pullback ("relative homology") is a really useful structure in the pro-etale setting. I'm still somewhat confused about many things, but to some extent it can be used as a replacement for the functor $f_!$ of compactly supported cohomology, and it has the advantage that its definition is completely canonical and that it exists and has good properties even without any assumptions on $f$ (like being of finite dimension), at least after passing to "solid $\ell$-adic sheaves". So it may be that the existence of this left adjoint, which I believe is a main part of cohesion, may play some important role. As I already hinted at in 2, this relative notion of cohesiveness may be a convenient notion. In brief, there are no sites relevant to algebraic geometry that are cohesive over sets, but there are such sites that are (essentially) cohesive over condensed sets; for example, the big pro-etale site of all schemes over a separably closed field $k$. So in this way the approach relative to condensed sets has benefits.
All of these questions sidestep the question of why condensed sets are not cohesive over sets, when cohesion is meant to model "toposes of spaces" and condensed sets are meant to be "the topos of spaces". I think the issue here is simply that for Lawvere a "space" was always built from locally contractible pieces, while work in algebraic geometry has taught us that schemes are just not built in this way. But things are OK if instead of "locally contractible" (= "locally contractible onto a point") one says "locally contractible onto a profinite set", and this leads to the idea of cohesion relative to the topos of condensed sets. Let me use this opportunity to point out that this dichotomy between locally contractible things, as in the familiar geometry over $\mathbb R$, and profinite things, as codified in condensed sets, is one of the key things that Dustin and I had to overcome in our work on analytic geometry. To prove our results on liquid $\mathbb R$-vector spaces we have to resolve real vector spaces by locally profinite sets!
|
{
"source": [
"https://mathoverflow.net/questions/356618",
"https://mathoverflow.net",
"https://mathoverflow.net/users/130058/"
]
}
|
356,884 |
Consider a commutative associative unital algebra over a field of characteristic zero. Is it true that any derivation of it preserves its nil-radical? More explicitly, let $D$ be a derivation of an algebra $A$, and let $N$ denote the nil-radical of $A$. Is it true that $D(N)\subset N$?
|
Suppose $x\in N$ , so that $x^n=0$ for some $n$ . Then using the product rule for derivations many times, we see that $$
0=D^n(x^n)=n! D(x)^n+Y,
$$ where $Y$ is divisible by $x$. In particular $D(x)^n = -Y/n!$ is divisible by $x$ (here we use characteristic zero), so $D(x)^{n^2}=(D(x)^n)^n$ is divisible by $x^n$, and therefore vanishes. Thus, $D(x)$ is nilpotent, and therefore $D(N)\subset N$.
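The computation above is easy to check symbolically for small $n$: the sympy sketch below (where the generic derivation $D$ on $k[x]$ determined by an arbitrary value $D(x)=a(x)$ is my illustrative choice) verifies that $D^n(x^n) - n!\,D(x)^n$ is divisible by $x$.

```python
import sympy as sp

x = sp.symbols('x')
a = sp.Function('a')(x)   # D(x) = a(x): an arbitrary derivation of k[x]

def D(f):
    # since every expression here is a function of x, the derivation
    # acts as a(x) * d/dx, which encodes the Leibniz rule automatically
    return sp.expand(a * sp.diff(f, x))

n = 4
expr = x**n
for _ in range(n):
    expr = D(expr)                       # expr = D^n(x^n)

Y = sp.expand(expr - sp.factorial(n) * a**n)
# every term of Y carries a factor of x, i.e. Y vanishes at x = 0
assert Y.subs(x, 0) == 0
```

In the quotient where $x^n=0$, the divisibility of $Y$ by $x$ is exactly what makes $D(x)^n$ a multiple of $x$.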
|
{
"source": [
"https://mathoverflow.net/questions/356884",
"https://mathoverflow.net",
"https://mathoverflow.net/users/16183/"
]
}
|
357,077 |
Short version: A note of mine was rejected by the arXiv moderation (something I didn't even know was possible) on account of being "unrefereeable". The moderation process provides absolutely no feedback as to why and does not answer questions. I can think of various reasons but I don't know which are actually relevant, and I'm afraid that trying to resubmit the note, even with substantial changes, might get me banned permanently. So I'm looking for advice from people with more experience either in the subject or in dealing with the arXiv, on what to do next (e.g., "forget it, it's crap", "try to improve it", "upload it somewhere else", "do <this-or-that> to establish dialogue with the arXiv moderators", something of the sort), or simply for insight. [Meta-question here as to whether this question was appropriate for MO.] The detailed story (this is long, but I thought it important to get all the specifics clear; actual questions follow): A little over a week ago, I asked a question on MO about a delay-differential equation modeling a variant of the classical SIR epidemiological model in which individuals recover after a constant time instead of an exponentially distributed one. A little later, I found that I was able to answer my own question by finding an exact closed-form solution to this model: I wrote a short answer here and, since the answer garnered some interest, a longer discussion comparing the two models in a blog post (in French). A number of people then encouraged me to try to give this a little more publicity than a blog post. (My main conclusion is that constant-time recovery, which seems a little less unrealistic than exponential-process recovery, gives faster initial growth and a sharper, more pronounced epidemic peak for the same reproduction number, contagiousness and expected recovery time, while still having the same attack rate: in a world where a lot of modeling is done using SIR, I think this is worth pointing out.)
So I wrote a note on the subject, expanding a little more on what I could say about the comparison between this constant-time-recovery variant and classical SIR, and adding some illustrative graphs and remarks on random oriented graphs. After getting the required endorsements, I submitted this note to the arXiv (on 2020-04-06) in the math.CA ("Classical Analysis and ODEs") category. The submission simply vanished without a trace, so I inquired, and the arXiv help desk told me that the submission had been rejected by the moderators with the following comment: Our moderators have determined that your submission is not of sufficient interest for inclusion within arXiv. This decision was reached after examining your submission. The moderators have rejected your submission as "unrefereeable": your article does not contain sufficient original or substantive scholarly research. As a result, we have removed your submission. Please note that our moderators are not referees and provide no reviews with such decisions. For in-depth reviews of your work, please seek feedback from another forum. Please do not resubmit this paper without contacting arXiv moderation and obtaining a positive response. Resubmission of removed papers may result in the loss of your submission privileges. I must admit I didn't know there was even such a thing as arXiv moderation (since there is already the endorsement hurdle to cross) or, if there was, I thought it was limited to removing completely off-topic material and obvious crackpottery like proofs that Einstein was wrong. And I certainly disagree with the assessment that my note is "unrefereeable" (it would probably not get past the refereeing process in any moderately prestigious journal, but I find it hard to believe that no journal would even consider sending it to a referee).
I made an honest effort to ascertain whether the closed-form solution I wrote down had been previously known, and could come up with nothing: but of course, this sort of thing is very hard to be sure of, and I may have missed some general theory which would imply it trivially. I also believe the remarks I make near the end of the note, concerning the link between extinction probabilities and attack rates of epidemics, extinction probabilities of Galton-Watson processes, and reachable nodes in random oriented graphs, are of interest. The problem wouldn't be so bad if I could at least have some kind of dialogue with the arXiv moderators, e.g., inquire into how this judgment was made and what kind of changes would get it reconsidered. But I wrote to moderation at arxiv.org to ask for clarification and got no answer whatsoever. So I'm taking their advice and trying to "seek feedback from another forum". Clearly it was a mistake of mine to submit a note with so few references, and I should probably have framed the main result as a precise theorem. Hindsight is always 20/20. Now I don't know whether this can be fixed or whether fixing it would be enough: I have heard chilling stories about how the arXiv is capricious in banning people silently and permanently for trying to upload something they don't like, which makes me wish to be careful before I try anything like re-uploading. Also, it may have been a mistake of mine to create my arXiv account using a personal rather than institutional email address, I don't know. It is also possible that the arXiv is currently overwhelmed by papers on the pandemic and has taken a hard line against anything remotely related to epidemiology or Covid-19.
I thought it best not to upload the note to viXra, which would probably classify it in everyone's eyes as crackpotology, so the best I could come up with (beyond self-hosting) was to place it on "HAL Archives Ouvertes", a web site created by some French institutions which, however, does not have the same goals as the arXiv (it's more about storage than dissemination) and does not seem to provide a way to publish the source files. Questions: Can somebody provide insight as to how the arXiv moderation process works and how moderators form their decisions, or why my note might have been rejected? Is there some way to communicate with the arXiv moderators? Is there any way I could ask for a second opinion after improving my note (e.g., adding many references) and without risking a ban? Is there anything else I might try to do with this note apart from just giving up? (Various people have suggested bioRxiv or even PLOS One, but I wouldn't want to risk being blacklisted by every scientific preprint site in existence in the attempt to make one result public.)
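The qualitative comparison claimed in the note (faster growth and a higher peak for constant-time recovery, but the same attack rate) can be illustrated with a crude Euler discretization of both models; the parameter values and step size below are my own choices, not taken from the note.

```python
# classical SIR (recovery at rate gamma) vs. the constant-recovery-time
# variant (recovery exactly T after infection), matched to the same
# R0 = beta/gamma = beta*T = 2; all numbers are illustrative choices
dt, horizon = 0.01, 60.0
n = int(horizon / dt)
beta, gamma, T = 2.0, 1.0, 1.0
k = int(T / dt)              # recovery delay in steps
I0 = 1e-4                    # initial infected fraction

# classical SIR
S, I = 1.0 - I0, I0
peak_c = I
for _ in range(n):
    new = beta * S * I * dt
    S -= new
    I += new - gamma * I * dt
    peak_c = max(peak_c, I)
attack_c = 1.0 - S

# constant-time recovery: recoveries now are the infections one delay ago
S, I = 1.0 - I0, I0
inc = [0.0] * (n + 1)        # inc[t] = new infections during step t
inc[0] = I0                  # the seed recovers k steps in
peak_d = I
for t in range(1, n + 1):
    new = beta * S * I * dt
    rec = inc[t - k] if t >= k else 0.0
    S -= new
    I += new - rec
    inc[t] = new
    peak_d = max(peak_d, I)
attack_d = 1.0 - S

assert peak_d > peak_c                   # sharper, higher peak
assert abs(attack_c - attack_d) < 0.03   # essentially the same attack rate
```

Both models satisfy the same final-size relation for a given $R_0$, which is why the attack rates agree even though the peaks differ.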
|
Q1: The arXiv moderation procedure is described here; as you can read, "unrefereeable content" is a placeholder for a paper "in need of significant review and revision". Unlike refereeing, which has a relaxed time schedule, moderation is done in the 6-hour time frame between the closure of the submission window and the announcement of the new submissions. A typical moderator may find themselves deciding on a dozen submissions, so this is very much a rapid decision, and first impressions can make a big difference. A submission from a personal rather than institutional email address, formatted in a somewhat unusual way, with minimal references to the literature, on a topic where everyone and their dog seems to have an opinion, may very well trigger an unjustified negative decision. Q2: The mathematics moderators are listed here, but it is considered inappropriate to contact a moderator directly. The appeals process, described here, outlines the steps to take, and also points out that it may be a lengthy process. Q3: It is worth appealing, because that will allow the moderators to take the time they would not have in their ordinary workflow. A careful look at your note should convince them that this is substantial research.
|
{
"source": [
"https://mathoverflow.net/questions/357077",
"https://mathoverflow.net",
"https://mathoverflow.net/users/17064/"
]
}
|
357,197 |
John Horton Conway is known for many achievements:
Life, the three sporadic groups in the "Conway constellation," surreal numbers, his "Look-and-Say" sequence analysis, the Conway-Schneeberger $15$-theorem, the Free-Will theorem - the list goes on and on. But he was so prolific that I bet he established many less-celebrated
results not so widely known. Here is one:
a surprising closed billiard-ball trajectory in a regular tetrahedron (image from Izidor Hafner). Q. What are some of Conway's lesser-known results? Edit: Professor Conway passed away on April 11, 2020, from complications of Covid-19: https://www.princeton.edu/news/2020/04/14/mathematician-john-horton-conway-magical-genius-known-inventing-game-life-dies-age
|
Conway's office at Cambridge was notoriously messy. One day, he got tired of how hard he had to struggle to find a paper in there, and shut himself away for a few hours to come up with a solution to the problem. He proudly showed a sketch of his solution to Richard Guy, who said, "Congratulations, Conway – you've invented the filing cabinet."
|
{
"source": [
"https://mathoverflow.net/questions/357197",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
357,498 |
According to a polemical article by Adrian Mathias, Robert Solovay showed that Bourbaki's definition of the number 1, written out using the formalism in the 1970 edition of Théorie des Ensembles, requires 2,409,875,496,393,137,472,149,767,527,877,436,912,979,508,338,752,092,897 $\approx 2.4\cdot 10^{54}$ symbols and 871,880,233,733,949,069,946,182,804,910,912,227,472,430,953,034,182,177 $\approx 8.7\cdot 10^{53}$ connective links used in their treatment of bound variables. Mathias notes that at 80 symbols per line, 50 lines per page, 1,000 pages per book, this definition would fill up $6\cdot 10^{47}$ books. (If each book weighed a kilogram, these books would be about 200,000 times the mass of the Milky Way.) My question: can anyone verify Solovay's calculation? Solovay originally did this calculation using a program in Lisp. I asked him if he still had it, but it seems he does not. He has asked Mathias, and if it turns up I'll let people know. (I conjecture that Bourbaki's proof of 1+1=2, written on paper, would not fit inside the observable Universe.)
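Mathias's bookkeeping is easy to reproduce; the Milky Way mass used below (about $3\cdot 10^{42}$ kg) is my own rough literature figure, inserted only to recover the "about 200,000 times" comparison.

```python
# Solovay's symbol count for Bourbaki's definition of 1, as quoted by Mathias
n_symbols = 2409875496393137472149767527877436912979508338752092897

# 80 symbols/line, 50 lines/page, 1000 pages/book, as in Mathias's note
symbols_per_book = 80 * 50 * 1000
books = n_symbols // symbols_per_book

assert f"{n_symbols:.2e}" == "2.41e+54"
assert f"{float(books):.0e}" == "6e+47"

# at 1 kg per book, compare with a rough Milky Way mass of ~3e42 kg
# (my figure; the note just says "about 200,000 times")
milky_way_kg = 3e42
print(books / milky_way_kg)   # roughly 2e5
```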
|
These calculations have been carried out by José Grimm; see [1] as well as [2].
According to one version of the formalism in the original Bourbaki, Grimm gets $$16420314314806459564661629306079999627642979365493156625 \approx 1.6 \times 10^{55}$$ (see page 517 of [1, version 10]). The discrepancy with Solovay's number is probably due to some subtle difference in the interpretation of some detail. Note that the English translation of Bourbaki introduces some "small" changes, and Grimm gets a rather different value: $$5733067044017980337582376403672241161543539419681476659296689 \approx 5.7 \times 10^{60}$$ EDIT: As suggested in the comments, here are the full citations for Grimm's papers. [1] José Grimm. Implementation of Bourbaki's Elements of Mathematics in Coq: Part Two; Ordered Sets, Cardinals, Integers. [Research Report] RR-7150, Inria Sophia Antipolis; INRIA. 2018, pp. 826. inria-00440786v10. doi: 10.6092/issn.1972-5787/4771 [2] Grimm, J. (2010). Implementation of Bourbaki's Elements of Mathematics in Coq: Part One, Theory of Sets. Journal of Formalized Reasoning, 3(1), 79-126. doi: 10.6092/issn.1972-5787/1899
|
{
"source": [
"https://mathoverflow.net/questions/357498",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2893/"
]
}
|
358,020 |
Consider the following curious statement: $(S)$ $\;$ Let $X$ be a non-empty set and let $f:X \to X$ be fixpoint-free (that is, $f(x) \neq x$ for all $x\in X$). Then there are subsets $X_1, X_2, X_3 \subseteq X$ with $X_1\cup X_2\cup X_3 = X$ and $$X_i \cap f(X_i) = \emptyset$$ for $i \in \{1,2,3\}$. There are easy examples showing that one cannot get by with only $2$ subsets. Statement $(S)$ can be proved using the axiom of choice. Question. Does $(S)$ imply (AC)?
|
The three-set lemma is listed as form 285 in Howard and Rubin's "Consequences of the Axiom of Choice". According to their book, the earliest appearance seems to be a problem in a 1963 issue of the American Mathematical Monthly (problem 5077). As mentioned already in the comments by Emil Jerabek, this form of choice is not equivalent to full AC, but already follows from the Boolean prime ideal theorem (BPI). However, it is not equivalent to BPI either, since it already follows from the ordering principle (every set can be linearly ordered), which readily implies the axiom of choice for families of finite sets, and the latter implies the three-set lemma in question, as shown by Wisniewski ("On functions without fixed points", Comment. Math. Prace Mat. 17 (1973), 227-228); as proved by Mathias by turning a Fraenkel-Mostowski model into a model of ZF, the ordering principle does not imply BPI.
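For finite $X$ no choice is needed and the three sets can be found by brute force, as in the small sketch below (the particular $f$, whose functional graph contains a $3$-cycle, is my own example; an odd cycle is exactly the kind of easy example showing that two sets do not suffice).

```python
from itertools import product

def three_sets(f):
    """Brute-force the three-set lemma for a finite fixpoint-free f.

    We search for a colouring c with c(x) != c(f(x)) for every x; the
    colour classes X_i then satisfy X_i and f(X_i) disjoint, since an
    element of both would force some y with c(y) == c(f(y)).
    """
    X = sorted(f)
    for colours in product(range(3), repeat=len(X)):
        c = dict(zip(X, colours))
        if all(c[x] != c[f[x]] for x in X):
            return [{x for x in X if c[x] == i} for i in range(3)]
    return None

# a fixpoint-free map whose functional graph contains the 3-cycle 0->1->2->0
f = {0: 1, 1: 2, 2: 0, 3: 0, 4: 3}
parts = three_sets(f)
assert parts is not None
assert set().union(*parts) == set(f)
for Xi in parts:
    assert Xi.isdisjoint({f[x] for x in Xi})
```

The general ZFC proof runs the same colouring idea through the components of the functional graph, which is where choice enters for infinite $X$.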
|
{
"source": [
"https://mathoverflow.net/questions/358020",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8628/"
]
}
|
358,175 |
Consider some positive non-integer $\beta$ and a non-negative integer $p$ . Does anyone have any idea how to show that the determinant of the following matrix is non-zero? $$
\begin{pmatrix}
\frac{1}{\beta + 1} & \frac{1}{2} & \frac{1}{3} & \dots & \frac{1}{p+1}\\
\frac{1}{\beta + 2} & \frac{1}{3} & \frac{1}{4} & \dots & \frac{1}{p+2}\\
\frac{1}{\beta + 3} & \frac{1}{4} & \frac{1}{5} & \dots & \frac{1}{p+3}\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{1}{\beta + p + 1} & \frac{1}{p+2} & \frac{1}{p+3} & \dots & \frac{1}{2p+1}
\end{pmatrix}.
$$
|
I think the reference "Advanced Determinant Calculus" has a pointer to the answer, but I'll still elaborate, for the argument is ingenious. Suppose $x_i$'s and $y_j$'s, $1\leq i,j \leq n$, are numbers such that $x_i+y_j\neq 0$ for every $i,j$ combination. Then the following identity (the Cauchy alternant identity) holds: $$
\det ~\left(\frac{1}{x_i+y_j}\right)_{i,j} = \frac{\prod_{1\leq i<j\leq n}(x_i-x_j)(y_i-y_j)}{\prod_{1\leq i,j\leq n}(x_i+y_j)}.
$$ Thus the determinant of $$
\begin{pmatrix}
\frac{1}{\beta + 1} & \frac{1}{2} & \frac{1}{3} & \dots & \frac{1}{p+1}\\
\frac{1}{\beta + 2} & \frac{1}{3} & \frac{1}{4} & \dots & \frac{1}{p+2}\\
\frac{1}{\beta + 3} & \frac{1}{4} & \frac{1}{5} & \dots & \frac{1}{p+3}\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{1}{\beta + p + 1} & \frac{1}{p+2} & \frac{1}{p+3} & \dots & \frac{1}{2p+1}
\end{pmatrix}
$$ can be obtained by choosing $[x_1,\cdots, x_{p+1}] = [1, \cdots, p+1]$ and $[y_1,\cdots, y_{p+1}] = [\beta, 1, \cdots, p]$. The resulting determinant is certainly not zero, since $\beta$ is a positive non-integer: no numerator factor $x_i-x_j$ or $y_i-y_j$ vanishes, and no denominator factor $x_i+y_j$ vanishes. The proof of the identity is ingenious. Perform the column operations $C_j \to C_j-C_n$ for $j<n$, and remove common factors from the rows and columns. Then perform the row operations $R_j \to R_j-R_n$ for $j<n$. This renders the matrix block diagonal with two blocks, of sizes $n-1$ and $1$: the first block is the leading principal submatrix of the original matrix, and the second block is the single entry $1$. This induces a recursion for the determinant, which yields the desired result. Thanks for the good question and the reference.
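Here is a quick sympy check of the determinant formula for $p=3$ and the (arbitrarily chosen) non-integer $\beta = 1/2$; note that the denominator product runs over all pairs $(i,j)$, including $i=j$.

```python
import sympy as sp

p = 3
beta = sp.Rational(1, 2)     # any positive non-integer; 1/2 keeps things exact

xs = [sp.Integer(i) for i in range(1, p + 2)]            # x = [1, ..., p+1]
ys = [beta] + [sp.Integer(j) for j in range(1, p + 1)]   # y = [beta, 1, ..., p]

# the matrix from the question: entry (i, j) is 1/(x_i + y_j)
M = sp.Matrix(p + 1, p + 1, lambda i, j: 1 / (xs[i] + ys[j]))

# Cauchy alternant identity; the denominator runs over ALL pairs (i, j)
num = sp.prod((xs[i] - xs[j]) * (ys[i] - ys[j])
              for i in range(p + 1) for j in range(i + 1, p + 1))
den = sp.prod(xs[i] + ys[j] for i in range(p + 1) for j in range(p + 1))

assert sp.simplify(M.det() - num / den) == 0
assert M.det() != 0          # beta non-integer keeps every numerator factor non-zero
```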
|
{
"source": [
"https://mathoverflow.net/questions/358175",
"https://mathoverflow.net",
"https://mathoverflow.net/users/156685/"
]
}
|
358,606 |
In elementary calculus texts, Green's theorem is proved for regions enclosed by piecewise smooth, simple closed curves (and by extension, finite unions of such regions), including regions that are not simply connected. Can Green's theorem be further generalized? In particular, are there regions on which Green's theorem definitely does not hold?
|
I think this is an interesting and sort of deep question, so I'm going to answer it in part in the hope that my answer attracts even better answers.

I'll start with my first thought: surely there's no hope of formulating Green's theorem for an unbounded region, say the region $y > 0$. But then I thought about it for a moment and observed that if you consider a smooth vector field $F(v)$ on the plane such that $F(v) \to 0$ rapidly as $v \to \infty$, then we can extend $F$ to the sphere by stereographic projection; this sends $y > 0$ to a hemisphere and the boundary curve $y = 0$ to the bounding great circle, and you can apply Stokes' theorem to this situation. Unwinding the calculations, this would give you a version of "Green's theorem" even for unbounded regions, albeit one that applies only to a certain class of vector fields.

Then I thought about regions whose boundary is pathological, like the interior of the Koch snowflake. Here the boundary has infinite length, so surely there is no real hope of even defining the "boundary side" of Green's theorem. But then I noted that the Koch snowflake - like many pathological plane curves - has a very nice polygonal approximation, and it didn't sound insane that the boundary side could be defined as a limit of integrals over these approximations (again, maybe not for all vector fields). Sure enough, this has been worked out, and there is indeed a version of Green's theorem for fractal boundaries: Jenny Harrison and Alec Norton, The Gauss-Green theorem for fractal boundaries, Duke Math. J. 67, no. 3 (1992), pp. 575-588. doi: 10.1215/S0012-7094-92-06724-X, author pdf.

There are other crazy things to try, like removing a non-measurable set from the plane or something. But Green's theorem (and its parent, the fundamental theorem of calculus) is based on a very resilient idea, something like "when you sum differences, things cancel".
So in the spirit of the principle, "The fastest way to find something is to assert that it doesn't exist on the internet", I'll make a bold conjecture: Green's theorem can be generalized to any subset of the plane.
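As a sanity check on the classical statement that all of these generalizations start from, both sides of Green's theorem can be computed symbolically. Here is a minimal sketch with sympy on the unit square; the field $(P,Q)=(-y^3,x^3)$ is an arbitrary choice of mine, not anything from the discussion above:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
P, Q = -y**3, x**3  # an arbitrary smooth field, chosen only for this check

# Area side: double integral of dQ/dx - dP/dy over the unit square
area_side = sp.integrate(sp.diff(Q, x) - sp.diff(P, y), (x, 0, 1), (y, 0, 1))

# Boundary side: integrate P dx + Q dy counterclockwise over the four edges,
# each parametrized by t in [0, 1]
sides = [(t, sp.S(0)), (sp.S(1), t), (1 - t, sp.S(1)), (sp.S(0), 1 - t)]
boundary_side = sum(
    sp.integrate(P.subs({x: cx, y: cy}) * sp.diff(cx, t)
                 + Q.subs({x: cx, y: cy}) * sp.diff(cy, t), (t, 0, 1))
    for cx, cy in sides
)

assert sp.simplify(area_side - boundary_side) == 0
print(area_side)   # 2
```

Swapping in a field that blows up inside the region, or a region with fractal boundary as in the Harrison–Norton paper, is exactly where this naive check stops applying.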
|
{
"source": [
"https://mathoverflow.net/questions/358606",
"https://mathoverflow.net",
"https://mathoverflow.net/users/157024/"
]
}
|
359,705 |
The word elliptic appears quite often in mathematics; I will list a few occurrences below. For some of these, it is clear to me how they are related; for instance, elliptic functions (named after ellipses, see here ) are the functions on elliptic curves over $\mathbb C$ . For others, I do not know if there is a relationship at all. Ellipses Elliptic integrals Elliptic functions Elliptic curves Elliptic genera (in the sense of Hirzebruch) Elliptic (as opposed to parabolic or hyperbolic) isometries of the hyperbolic plane Elliptic partial differential operators, elliptic PDEs Elliptic cohomology I am interested in the etymology of this word, in particular, the origins of the different usages listed above. More precisely, I was wondering whether there is, in a way, a single "strain" for all uses of elliptic in mathematics, going all the way back to ellipses in Euclidean geometry.
|
Your saying "elliptic functions are the functions on elliptic curves over $\mathbb C$ " is somewhat misleading, I think. First came elliptic integrals measuring arc-length on an ellipse. These are generalizations of the inverse trig functions (take the ellipse to be a circle). The inverse functions to the elliptic integrals are elliptic functions . It was noted that the integrand of an elliptic integral is (after a change of variables) of the form $dx/\sqrt{f(x)}$ , where $f(x)$ is a cubic (or quartic, depending on your preference). This in turn leads to elliptic curves , which are curves of the form $y^2=f(x)$ , since then the integrand is $dx/y$ , and the integral is on the curve. At this point, one sees that the use of the word elliptic in elliptic curve is quite unfortunate, since the geometry of an elliptic curve is quite different from the geometry of the ellipse from which it derives its name. Further, there is the distinction between a (smooth algebraic) curve of genus $1$ , and such a curve with a marked base point that serves as the identity element for its group law. This is especially important if one is working over a non-algebraically closed field, but even over $\mathbb C$ , if elliptic curve includes the group law, then it presupposes the choice of a point.
|
{
"source": [
"https://mathoverflow.net/questions/359705",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14233/"
]
}
|
360,578 |
This is a question to research mathematicians, as well as to those concerned with the history and philosophy of mathematics. I am asking for a reference. In order to make the reference request as precise as possible, I am outlining the background and nature of my questions here: I did my Ph.D. in probability & statistics in 1994, and my formal mathematics education was completely based on set theory. Recently, I got interested in algebraic topology, and have started to read introductory texts like Allen Hatcher, or Laures & Szymik, and others. I am struck by the broad usage of category theory and started to wonder: (1) Is category theory the new language of mathematics, or recently the more preferred language? (2) Recognizing that set theory can be articulated or founded through category theory (the text from Rosebrugh and Lawvere), is category theory now seen as the foundation of mathematics? (3) Is the choice between category theory language and set theory language maybe depending on the field of mathematics, i.e. some fields tend to prefer set theory, others category theory? Edit: On (3), if such a preference actually exists, what is the underlying reason for that? Would someone be able to give me a good reference for questions like this? I would be very grateful for that. Later Edit: Just adding the link to a great, related discussion on MO: Could groups be used instead of sets as a foundation of mathematics? It discusses the question whether every mathematical statement could be encoded as a statement about groups, a fascinating thought. Could groups be used instead of sets as a foundation of mathematics?
|
Category theory and set theory are complementary to one another, not in competition. I think this 'debate' is a bit of academic controversialising rather than an actual difference. If you've done a bit of category theory, you will realize how important the category of sets is (for Yoneda's lemma, representability, existence of generators, etc). Even if you completely buy into homotopy type theory as a foundation for ∞-categories and homotopy theory, the theory of sets reappears in other garb as the theory of 0-types. A theory of sets is too natural an idea to escape. I just also want to note: If you write out the syntactic version of ETCS, you end up with something that is more or less equivalent to ZFC. The ETCC, on the other hand, is widely considered to be a dead-end. From the nLab: As pointed out by J. Isbell in 1967, one of Lawvere’s results (namely, the theorem on the ‘construction of categories by description’ on p.14) was mistaken, which left the axiomatics dangling with insufficient power to construct models for categories. Several ways to overcome these problems were suggested in the following but no system achieved univocal approval (cf. Blanc-Preller(1975), Blanc-Donnadieu(1976), Donnadieu(1975), McLarty(1991)). As ETCC also lacked the simplicity of ETCS, it rarely played a role in the practice of category theory in the following and was soon eclipsed by topos theory in the attention of the research community that generally preferred to hedge their foundations with appeals to Gödel-Bernays set-theory or Grothendieck universes. Edit: Just to clarify, I think most mathematicians working in category theory, homotopy theory, algebraic geometry, etc. are more or less agnostic about foundations, as long as they are equivalent in strength to ZFC (or stronger with universes). 
There have been arguments for ETCS(+Whatever) as a 'better' foundation, but when you get into hairy set-theoretic issues (for example, see the Appendix to lecture 2 of Scholze's notes on condensed mathematics), we are just as likely to work with ZFC because setting up ordinals in ETCS is an added annoyance. I added this edit just to clarify that I am not a partisan of either approach and appreciate both (and am not interested in bringing up this old argument about Tom's paper that I linked!!!)
|
{
"source": [
"https://mathoverflow.net/questions/360578",
"https://mathoverflow.net",
"https://mathoverflow.net/users/156936/"
]
}
|
360,889 |
What can be said about publishing mathematical papers on, e.g., viXra, if the motivation is its low barriers together with a lack of experience in publishing papers, and the idea is to build up a reputation, provided the content of the publication suffices for that purpose? Can that way of getting a foot in the door of publishing be recommended, or would it be better to resort to polishing door knobs at arXiv to get an endorsement? Personal experience, or that of someone you know, would of course also be interesting to me.
|
Yes, the place of publication can absolutely hurt your reputation. Specifically, I can tell you from having served on many hiring committees (and from conversations with professors at other universities about their hiring committees and tenure processes), that publications in predatory journals can hurt you. I'm talking specifically about journals whose model is to get the author to pay them, and whose peer review standards are a joke. Publications in journals like that can be interpreted as an author trying to side-step the normal process, or unethically inflate their numbers. It may be hard to break into the absolute top journals with your first few papers (unless you have a famous advisor/coauthor or went to a prestigious school). But there are plenty of good journals around and after a track record of publishing in good journals you will have less difficulty publishing in top journals (of course, it'll always be extremely hard to publish in the Annals and other super elite journals). For people starting out, I recommend at least checking Beall's list of predatory publishers to be sure you don't end up publishing somewhere that might be frowned upon later in your career. Also, don't let fear paralyze you from trying. Lots of editors and referees will go gently on new PhDs. I wish this was even more common, rather than pushing young people out of academia.
|
{
"source": [
"https://mathoverflow.net/questions/360889",
"https://mathoverflow.net",
"https://mathoverflow.net/users/31310/"
]
}
|
360,924 |
I am very interested in proofs that become shorter and simpler by going to higher dimension in $\mathbb R^n$ , or higher cardinality. By "higher" I mean that the proof uses a higher dimension or cardinality than the actual theorem. Specific examples: The proof of the 2-dimensional Brouwer Fixed Point Theorem given by Aigner and Ziegler in "Proofs from the BOOK" (based on the Lemma of Sperner). The striking feature is that the main proof argument is set up and run in $\mathbb R^3$ , and this 3-dimensional set-up makes the proof particularly short and simple. The proof about natural number Goodstein sequences that uses ordinal numbers to bound from above. The proof of the Finite Ramsey Theorem using the Infinite Ramsey Theorem. In fact, I would also be interested in an example where the theorem is e.g. about curves, lattice grids, or planar graphs $-$ and where the proof becomes strikingly simple when the object is embedded e.g. in a torus, sphere, or any other manifold. Are you aware of proofs that use such techniques?
|
Whitney's theorem is an example of this. To prove the weak version (i.e. embedding a manifold $M^n$ in $\mathbb{R}^{2n +1}$ ), you start by using a partition of unity to embed $M^n$ into $\mathbb{R}^{N}$ where $N$ is very large. This is relatively easy to do when $M^n$ is compact and takes a little bit of thought otherwise, but is significantly easier than trying to get an embedding in a lower dimension from scratch. You can then use transversality arguments to show that a generic projection map preserves the embedding of $M^n$ to cut down $N$ until you get to $\mathbb{R}^{2n +1}$ . To get the strong version of the theorem (embedding $M^n$ in $\mathbb{R}^{2n}$ ), there is another insight needed, which is using Whitney's trick to get rid of double points. As such, it's really the weak version where the high-dimensionality approach is used.
|
{
"source": [
"https://mathoverflow.net/questions/360924",
"https://mathoverflow.net",
"https://mathoverflow.net/users/156936/"
]
}
|
360,964 |
Sorry if this question is not well-suited here, but I thought research in mathematics might differ from other scientific fields, so I wanted to ask mathematicians. I am just starting graduate study in mathematics (my bachelor's degree was in another field), so I have no research experience in mathematics. Recently I came up with a problem by myself, thought it was interesting, and devoted some time to working out the results. Now I have some results, but I am not sure whether this has already been studied somewhere by someone. I tried looking up some possible keywords on Google Scholar, but found nothing. I am sure that more mature mathematicians already know their fields of study and the recent trends of research, so it won't be difficult for them to tell whether something is original or not. But if you come up with an idea that does not seem to belong to any field, how do you know whether your result is original or nontrivial? Thank you in advance!
|
(1) It depends a lot on the field. In fields that rely on specialized techniques discovered relatively recently or known only to a few, or fields where the questions involve recently-introduced objects, it's much easier to keep abreast of current research. On the other hand, in fields with elementary questions that could have been studied a hundred years ago, sometimes even senior mathematicians discover that their work was studied a hundred years ago. Of course, working in a trendy field carries its own risk, that someone else could be working on the same thing at the same time, but not much can be done about that. (2) If you're working in a specialized field, as others have said, the best thing is to ask your advisor. If you have an advisor in a specialized field and have ideas in a different field, the best thing would be to ask someone in that field. As a grad student you probably want to start with fellow grad students, but a senior mathematician would probably ask someone at their own level. If you have an idea that is more elementary, you should still ask your advisor, but there are certain mathematicians who know a lot of elementary and classical mathematics whom you could potentially ask. (3) With regard to literature review, one trick that helps a bit when keyword searches fail is to use citations. If your idea generalizes work of Paper X, or answers a question from Paper X, or uses in a fundamental way the results of Paper X, anyone else who had the same idea would likely cite Paper X. You can produce a list of papers citing Paper X on both Google Scholar and MathSciNet. (4) As a starting graduate student, even if your idea is completely new and original, it is likely that the greatest value it provides to you will be as practice for your future work. (I mean if you're good enough to do groundbreaking work right off the bat, you will probably do even more groundbreaking work once you get some experience under your belt.) 
So don't feel bad at all if you find out something was already well-known - the experience of formulating and solving your own problem makes you well-placed to do original research once you learn a bit more, as compared to someone who knows a lot but hasn't done this.
|
{
"source": [
"https://mathoverflow.net/questions/360964",
"https://mathoverflow.net",
"https://mathoverflow.net/users/151368/"
]
}
|
362,326 |
Question 0: Are there any mathematical phenomena related to the form of honeycomb cells? Question 1: Do hexagonal lattices perhaps satisfy certain optimality condition(s) related to it? The reason to ask: some considerations with the famous "K-means" clustering algorithm on the plane. It also tends to produce something similar to hexagons; moreover, perhaps, ruling out technicalities, the hexagonal lattice is optimal for the K-means functional, which is the question MO362135 . Question 2: Can it also be related to the bees' construction? Googling gives lots of sources on the question, but many of them focus on the non-mathematical sides of the question: how are bees able to produce such precise hexagonal forms? Why is it useful for them? Etc. Let me quote a relatively recent paper from Nature (2016),
"The hexagonal shape of the honeycomb cells depends on the construction behavior of bees",
Francesco Nazzi: Abstract. The hexagonal shape of the honey bee cells has attracted the
attention of humans for centuries. It is now accepted that bees build
cylindrical cells that later transform into hexagonal prisms through a
process that it is still debated. The early explanations involving the
geometers’ skills of bees have been abandoned in favor of new
hypotheses involving the action of physical forces, but recent data
suggest that mechanical shaping by bees plays a role. However, the
observed geometry can arise only if isodiametric cells are previously
arranged in a way that each one is surrounded by six other similar
cells; here I suggest that this is a consequence of the building
program adopted by bees and propose a possible behavioral rule
ultimately accounting for the hexagonal shape of bee cells.
|
There are two principles at play here: a mathematical principle that favors hexagonal networks, and a physical principle that favors a network with straight walls. The mathematical principle that prefers hexagonal planar networks is Euler's theorem applied to the two-torus $\mathbb{T}^2$ (to avoid boundary effects), $$V-E+F=0,$$ with $V$ the number of vertices, $E$ the number of edges, and $F$ the number of cells. Because every vertex is incident with three edges $^\ast$ and every edge is incident with two vertices, we have $2E = 3V$; combined with $V-E+F=0$ this gives $E/F=3$. Since every edge is adjacent to two cells, the average number of sides per cell is 6 --- hence a uniform network must be hexagonal. $^\ast$ A vertex with a higher coordination number than 3 is mechanically unstable: it will split to lower the surface energy. [Figure: total edge length of the unsplit configuration, the two diagonals of a unit square (blue), versus the total edge length after the splitting (gold), as a function of the length $x$ of the splitting.] Euler's theorem still allows for curved rather than straight walls of the cells. The physical principle that prefers straight walls is the minimization of surface area. [Figure source: "Honeybee combs: how the circular cells transform into rounded hexagons".] An experiment that appears to be directly relevant for honeybee combs is the transformation of a close-packed bundle of circular plastic straws into a hexagonal pattern upon heating by conduction until the melting point of the plastic. Likewise, the honeybee combs start out as such a close-packed bundle of circular cells (panel a of the figure). The wax walls of the cells are heated to the melting point by the bees and then become straight to reduce the surface energy (panel b).
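The Euler-characteristic bookkeeping above is easy to verify on an explicit honeycomb graph drawn on a torus. A minimal sketch; the two-vertices-per-unit-cell encoding below is a standard description of the honeycomb lattice, and the grid size is an arbitrary choice:

```python
# Honeycomb lattice on an n x m torus: two vertices per unit cell
# ("A" sublattice s=0, "B" sublattice s=1), periodic boundary conditions.
n, m = 4, 5

V = [(i, j, s) for i in range(n) for j in range(m) for s in (0, 1)]

# Each B-vertex is joined to three A-vertices (its own cell and two neighbors)
E = {frozenset({(i, j, 1), a})
     for i in range(n) for j in range(m)
     for a in [(i, j, 0), ((i + 1) % n, j, 0), (i, (j + 1) % m, 0)]}

F = n * m  # one hexagonal face per unit cell

assert len(V) - len(E) + F == 0                      # Euler characteristic of the torus
assert all(sum(v in e for e in E) == 3 for v in V)   # every vertex has degree 3
assert 2 * len(E) / F == 6                           # average number of sides per cell
print(len(V), len(E), F)   # 40 60 20
```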
|
{
"source": [
"https://mathoverflow.net/questions/362326",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10446/"
]
}
|
362,578 |
I uniformly mark $n^2$ points in $[0,1]^2$ . Then I want to draw $cn$ vertical lines and $cn$ horizontal lines such that in each small rectangle there is at most one marked point. Surely, for a given constant c it is not always possible. But it seems that for $c=100$ when n tends to infinity, the probability that such a cut exists should tend to one, as a variation of the law of the large numbers.
Do you have any idea how to prove this rigorously?
|
Given $n^2$ i.i.d. uniform points in $[0,1]^2$, the goal is to draw a configuration of $cn$ vertical lines and $cn$ horizontal lines such that in each small rectangle there is at most one marked point. We show below that $c$ must satisfy $c=\Omega(n^{1/3})$ for this to be typically possible: In fact, $\Theta(n^{4/3})$ lines are necessary and sufficient for such a configuration to exist with substantial probability. More precisely, denote by $p_n(k)$ the probability that some configuration of $k$ vertical lines and $k$ horizontal lines separates $n^2$ i.i.d. uniform points $\{x_j\}_{j=1}^{n^2}$ in $[0,1]^2$. Claim: For suitable constants $0<c_1<c_2<\infty$, we have (omitting integer part symbols): (a) $\; p_n(c_1 n^{4/3}) \to 0$ as $n \to \infty$, and (b) $\; p_n(c_2 n^{4/3}) \to 1$ as $n \to \infty$. This is proved below with $c_1=1/20$ and $c_2=3/2$; no attempt has been made to optimize these constants. Proof: Consider an auxiliary grid of $L:=n^{4/3}$ uniformly spaced vertical lines and $L$ uniformly spaced horizontal lines in the unit square. This grid defines $L^2$ grid squares of side length $1/L$. (a) Call a grid square $Q$ nice if it contains exactly two of the $n^2$ given points $\{x_j\}$. Observe that for two distinct grid squares, the events that they are nice are negatively correlated. Call a nice grid square $Q$ good if there is at most one other nice square in its row and at most one other nice square in its column. The probability that a specific grid square $Q$ is nice is $${n^2 \choose 2}L^{-4}(1-L^{-2})^{n^2-2}=(1/2+o(1))L^{-1}.$$ Given that $Q$ is nice, the conditional expectation of the number of nice squares (other than $Q$) in the row of $Q$ is $1/2+o(1)$. Thus, given that $Q$ is nice, Markov's inequality implies that the conditional probability that there are two or more additional nice squares in the row of $Q$ (besides $Q$ itself) is at most $1/4+o(1)$.
The same applies to the column of $Q$, and we deduce that $$P(Q \; {\rm is \; good}\; | \; Q \; {\rm is \; nice}) \ge 1/2+o(1) \,,$$ so $$P(Q \; {\rm is \; good} ) \ge (1/4+o(1))L^{-1} \, .$$ Let $G$ denote the number of good grid squares. Then the mean satisfies $$E(G) \ge (1/4+o(1))L \,.$$ Observe that if we replace one point $x_i$ by $x_i'$ then $G$ will change by at most 5, so McDiarmid's inequality, see [1, Theorem 3.1] or [2], implies that for $n$ large enough, $$P(G \le L/5) \le \exp\Bigl(-\frac{(L/21)^2}{25n^2}\Bigr) \to 0 \quad {\rm as} \; n \to \infty \,.$$ (Alternatively, one could invoke the Efron–Stein inequality or estimate the variance directly to verify this.)
Now suppose that $S$ is a set of vertical and horizontal lines that separate the points $\{x_j\}_{j=1}^{n^2}$ .
For each good grid square $Q$, a line of $S$ is required to separate the two points $x_i, x_j$ in the square, and each such line can be used for at most two good squares. Thus $|S| \ge G/2$, so $$p_n(L/20) \le P(\exists \; {\rm separating } \; S \; {\rm with } \; |S| \le L/10) \le P(G \le L/5) \to 0 \,.$$ (b) Denote by $M$ the number of pairs $(i,j)$ such that $1 \le i<j \le n^2$ and $x_i,x_j$ fall in the same grid square. Then $E(M) = {n^2 \choose 2}L^{-2} \le L/2$, and another application of McDiarmid's inequality implies that $P(M \ge L) \to 0$ as $n \to \infty$. Finally, construct a separating set of lines $S$ by combining the $2L$ lines of the auxiliary grid with one separating line for each pair $(i,j)$ counted in $M$ (we can take half of these lines vertical and half horizontal). Then $P(|S| \le 3L) \to 1$ as $n \to \infty$, and $p_n(3L/2) \to 1$ as well. [1] McDiarmid, Colin. "Concentration." In Probabilistic methods for algorithmic discrete mathematics, pp. 195-248. Springer, Berlin, Heidelberg, 1998. http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=8B1FFFE4553B63543AFEA0706E686E65?doi=10.1.1.168.5794&rep=rep1&type=pdf [2] McDiarmid, C. (1989). "On the method of bounded differences". Surveys in Combinatorics. London Math. Soc. Lecture Notes 141. Cambridge: Cambridge Univ. Press. pp. 148–188. MR 1036755
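The quantity $M$ from part (b) is easy to probe by simulation. A rough Monte Carlo sketch (the value $n=50$, the seed range, and the tolerance below are arbitrary choices of mine): with $L = n^{4/3}$ lines each way, the number of colliding pairs should concentrate near $E(M) = \binom{n^2}{2} L^{-2} \approx L/2$.

```python
import random
from collections import Counter

def colliding_pairs(n, seed):
    """Drop n^2 uniform points into an L x L grid, L = round(n^(4/3)),
    and count pairs of points landing in the same grid square."""
    rng = random.Random(seed)
    L = round(n ** (4 / 3))
    counts = Counter(
        (int(rng.random() * L), int(rng.random() * L)) for _ in range(n * n)
    )
    return L, sum(c * (c - 1) // 2 for c in counts.values())

n = 50
L = colliding_pairs(n, 0)[0]
avg = sum(colliding_pairs(n, s)[1] for s in range(20)) / 20
print(L, avg)   # avg should land near L/2
assert abs(avg - L / 2) < 15
```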
|
{
"source": [
"https://mathoverflow.net/questions/362578",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4298/"
]
}
|
362,627 |
My colleague and I are researchers in philosophy of mathematical practice and are working on developing an account of mathematical understanding. We have often seen it remarked that there is an important difference between merely verifying that a proof is correct and really understanding it. Bourbaki put it as follows: [E]very mathematician knows that a proof has not really been “understood” if one has done nothing more than verifying step by step the correctness of the deductions of which it is composed, and has not tried to gain a clear insight into the ideas which have led to the construction of this particular chain of deductions in preference to every other one. [Bourbaki, ‘The Architecture of Mathematics’, 1950, p.223] We are interested in examples which, from the perspective of a professional mathematician, illustrate this phenomenon. If you have ever experienced this difference between simply verifying a proof and understanding it, we would be interested to know which proof(s) and why you did not understand it (them) in the first place. We are especially interested in proofs that are no longer than a couple of pages in length. We would also be very grateful if you could provide some references to the proof(s) in question. We are sorry if this isn’t the appropriate place to post this, but we were hoping that professional mathematicians on MathOverflow could provide some examples that would help with our research.
|
Don Zagier has a well-known paper, A one-sentence proof that every prime $p\equiv 1\pmod 4$ is a sum of two squares . An undergraduate mathematics major should be able to verify that this proof is correct. But as you can see elsewhere on MathOverflow , most professional mathematicians are unable to "understand" this proof just by studying it in isolation. By lack of "understanding" is meant, for example, the inability to answer questions such as, "Where did those formulas come from? How did anybody ever come up with this proof in the first place? Is there some general principle on which this proof is based, that is not being presented explicitly in the proof?"
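For readers who want to poke at the proof computationally: the involution from Zagier's paper can be transcribed and checked on small primes. A sketch (the three-case formula is the one in the paper; the brute-force search bounds are just for illustration):

```python
def zagier(t):
    """Zagier's involution on S = {(x, y, z) in N^3 : x^2 + 4yz = p}."""
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - x - z)
    if x < 2 * y:                       # here y - z < x < 2y
        return (2 * y - x, y, x - y + z)
    return (x - 2 * y, x - y + z, y)    # here x > 2y

def check(p):
    S = [(x, y, z)
         for x in range(1, p) for y in range(1, p) for z in range(1, p)
         if x * x + 4 * y * z == p]
    assert all(zagier(zagier(t)) == t for t in S)   # it is an involution on S
    assert sum(t == zagier(t) for t in S) == 1      # ...with exactly one fixed point
    # |S| is therefore odd, so the "easy" involution (x,y,z) -> (x,z,y) also
    # has a fixed point (x,y,y), giving p = x^2 + (2y)^2
    assert any(y == z for _, y, z in S)

for p in [5, 13, 17, 29, 37, 41]:
    check(p)
print("ok")
```

The point made above survives the check: the code confirms that the map works while shedding no light on where the three cases come from.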
|
{
"source": [
"https://mathoverflow.net/questions/362627",
"https://mathoverflow.net",
"https://mathoverflow.net/users/159344/"
]
}
|
362,849 |
It can be checked that the Vandermonde determinant defined as $$V(\alpha_1, \cdots, \alpha_n) = \prod_{1 \le i < j \le n}(\alpha_i-\alpha_j) $$ is a harmonic function, that is $\Delta V = 0$ where $\Delta$ is the Laplace operator. Is there a deeper or more intuitive reason why this fact should hold? The straightforward proof of just computing the derivatives and checking doesn't provide any insights.
|
Consider the symmetric group action permuting the variables. The Vandermonde determinant $V$ is antisymmetric, meaning it spans an alternating representation—it's invariant under permutations, up to multiplication by the sign of the permutation. Applying any symmetric differential operator (such as the Laplacian) preserves antisymmetry, but lowers the degree (as long as the operator doesn't have any constant terms). And $V$ is the lowest-degree antisymmetric form. This is a fun, quick exercise. First note that $\deg V = \binom{n}{2}$ , which is $0 + 1 + \dotsb + (n-1)$ , and indeed all the monomials appearing in $V$ have the form $x_1^0 x_2^1 \dotsm x_n^{n-1}$ , up to permutation and coefficient of $\pm 1$ . None of the exponents here are repeated, and we realize that in any lower degree polynomial, there isn't enough room to have distinct exponents. Now if $f$ is any antisymmetric polynomial with a term $c x_1^{a_1} \dotsm x_n^{a_n}$ with a repeated exponent $a_i = a_j$ , then permuting by the transposition $(i \, j)$ leaves this term unchanged; but it has to take this term to $-c x_1^{a_1} \dotsm x_n^{a_n}$ in order for $f$ to be antisymmetric; so $c=0$ . Only terms with pairwise-distinct exponents can appear in $f$ , so $\deg f$ must be at least $\binom{n}{2}$ . This actually proves a bit more: up to scalar factor, $V$ is the unique antisymmetric polynomial of degree $\binom{n}{2}$ , and in fact any antisymmetric polynomial is divisible by $V$ . This also generalizes to other finite reflection groups. You can see, for example, Chapter 20 of Kane's book Reflection groups and invariant theory . But at the moment we just care about the property of having minimal degree. Now the point is that applying a symmetric differential operator preserves the antisymmetry property, but lowers the degree. But the only lower-degree antisymmetric form is just zero. 
I would argue that this approach provides insights: it generalizes to other reflection groups, which are deeply studied. For me, it came up in relation to apolarity and Waring rank, where it was useful to know which differential operators annihilate $V$ . (The above shows that symmetric differential operators lie in the ideal of annihilators, and it turns out they generate the ideal.)
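All three facts used above (harmonicity, antisymmetry, and the pairwise-distinct exponents in every monomial) can be confirmed symbolically for small $n$. A sketch with sympy, taking $n=4$ as an arbitrary test case:

```python
import sympy as sp

xs = sp.symbols('x1:5')  # x1, x2, x3, x4
V = sp.prod(xs[i] - xs[j] for i in range(4) for j in range(i + 1, 4))

# Harmonic: the Laplacian annihilates V
assert sum(sp.diff(V, v, 2) for v in xs).expand() == 0

# Antisymmetric: swapping two variables flips the sign
swap = V.subs({xs[0]: xs[1], xs[1]: xs[0]}, simultaneous=True)
assert (swap + V).expand() == 0

# Every monomial of V has pairwise-distinct exponents, and deg V = C(4,2) = 6
poly = sp.Poly(V, *xs)
assert all(len(set(mon)) == len(mon) for mon in poly.monoms())
assert poly.total_degree() == 6
```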
|
{
"source": [
"https://mathoverflow.net/questions/362849",
"https://mathoverflow.net",
"https://mathoverflow.net/users/83122/"
]
}
|
363,119 |
In Gian-Carlo Rota's "Ten lessons I wish I had been taught" he has a section, "Every mathematician has only a few tricks" , where he asserts that even mathematicians like Hilbert have only a few tricks which they use over and over again. Assuming Rota is correct, what are the few tricks that mathematicians use repeatedly?
|
$$
\sum_{i=1}^m\sum_{j=1}^n a_{i,j}=\sum_{j=1}^n\sum_{i=1}^m a_{i,j}
$$ (and its variants for other measure spaces). I still get misty-eyed whenever I read something that capitalizes on this trick in an unpredictable way.
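A toy instance of the trick (my example, not from the answer): count the pairs $(d,n)$ with $d \mid n$ and $n \le N$ in both orders, which yields the classical identity $\sum_{n\le N} d(n) = \sum_{d\le N} \lfloor N/d \rfloor$.

```python
N = 500

# Sum over n first: for each n, count its divisors d
by_n = sum(sum(1 for d in range(1, n + 1) if n % d == 0) for n in range(1, N + 1))

# Swap the order of summation: for each d, count its multiples n <= N
by_d = sum(N // d for d in range(1, N + 1))

assert by_n == by_d
print(by_n)
```

The right-hand side takes O(N) time against the naive O(N^2), which is one practical payoff of swapping the sums.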
|
{
"source": [
"https://mathoverflow.net/questions/363119",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7113/"
]
}
|
363,226 |
The question " Every mathematician has only a few tricks " originally had approximately the title of my question here, but originally admitted an interpretation asking for a small collection of tricks used by all mathematicians. That question now has many answers fitting this "there exist a small set of tricks used by all mathematicians" interpretation. I find that swapping the quantifiers gives a better question. I.e. I am more interested in hearing about the small collections of tricks of individual mathematicians. Pointing back to the other question above, and Rota's article, what are the few tricks of Erdős, or of Hilbert? Question: What are the few tricks of some individual mathematicians? Of course, as the comment in the earlier question quips, a mathematician never reveals tricks...but one can hope. In your answers, please include the name of the mathematician, and their few tricks...perhaps some cool places where the tricks are used, i.e. some "greatest hits" applications of the tricks. Note, I don't think that knowing these tricks can make you into Erdős or Hilbert, but a long time ago a friend told me that a talented mathematician he knew would approach research problems by asking himself how other mathematicians would attack the problem. This is sort of like writing in another author's style, which can be a useful exercise. Wouldn't it be neat to be able to ask yourself "How would Hilbert have attacked this problem?" MO is a good place to collect these, because it often takes extended reading (as intimated by Rota) to realize the few tricks used by a certain mathematician. As a community, we may be able to do this.
|
The question is worded in a way that seems to imply we might speak of other mathematicians' tricks, but I'm not sure I know the tricks of even my closest collaborators, except by osmosis; so I hope it's OK if I specify my own "one weird trick". The entirety of my research centres around the idea that, if $\chi$ is a non-trivial character of a compact group $K$ (understood either in the sense of "homomorphism to $\mathbb C^\times$ ", or the more general sense of $k \mapsto \operatorname{tr} \pi(k)$ for a non-trivial, irreducible representation $\pi$ of $K$ ), then $\int_K \chi(k)\mathrm dk$ equals $0$ . It's amazing the mileage you can get out of this; it usually arises for me when combining the Frobenius formula with the first-order approximation in Campbell–Baker–Hausdorff. Combining it with the second-order approximation in CBH gives exponential sums, which in my field we call Gauss sums although that seems to intersect only loosely with how number theorists think of the matter. Curiously, I have never found an application for the third-order approximation.
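A finite-group analogue of this vanishing trick, for the cyclic group $\mathbb Z/n$ (my illustration; the Haar integral becomes a finite sum over the group):

```python
import cmath

n = 12
for m in range(n):
    # the character k -> exp(2*pi*i*m*k/n) of Z/n, summed over the group
    s = sum(cmath.exp(2j * cmath.pi * m * k / n) for k in range(n))
    if m == 0:
        assert abs(s - n) < 1e-9   # trivial character: the sum is |G|
    else:
        assert abs(s) < 1e-9       # nontrivial character: the sum vanishes
print("ok")
```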
|
{
"source": [
"https://mathoverflow.net/questions/363226",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6269/"
]
}
|
363,254 |
This is probably a very easy question for experts in probability or measure theory. I have a sequence of finite measures $\mu_{n}$ on a non-compact metric space $X$ such that $\mu_{n}$ converges to $\mu$ in the following sense: $$ \int_{X}fd\mu_{n} \to \int_{X}fd\mu \ \ \ \ \ \text{ for all f continuous with compact support}
$$ I would like to say that $\mu_{n}(X)\to \mu(X)$ . I know this is false in general, but I have the additional condition that for every $\epsilon>0$ there is $n_{0}\in \mathbb{N}$ and $K\subset X$ compact such that $\mu_{n}(K^{c})\leq \epsilon$ for every $n\geq n_{0}$ . This looks very similar to the definition of tight sequence (which guarantees the result I would like). Is this equivalent? Additional assumptions: X is Polish and locally compact, precisely it is a closed surface with some finitely many points removed. All measures $\mu_{n}$ and $\mu$ are area measures of Riemannian metrics (with singularities at the points removed) on X.
|
|
{
"source": [
"https://mathoverflow.net/questions/363254",
"https://mathoverflow.net",
"https://mathoverflow.net/users/127739/"
]
}
|
363,720 |
I'd like to have a big-list of "great" short exact sequences that capture some vital phenomena. I'm learning module theory, so I'd like to get a good stock of examples to think about. An elementary example I have in mind is the SES: $$
0 \rightarrow I \cap J \rightarrow I \oplus J \rightarrow I + J \rightarrow 0
$$ from which one can recover the rank-nullity theorem for vector spaces and the Chinese remainder theorem.
I'm wondering what other 'bang-for-buck' short exact sequences exist which satisfy one of the criteria: They portray some deep relationship between the objects in the sequence that is non-obvious, or They describe an interesting relationship that is obvious, but is of important consequence.
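One concrete payoff of this sequence, in a toy case of my own choosing (not from the original post): for ideals $I = a\mathbb Z$ and $J = b\mathbb Z$ in $\mathbb Z$ we get $I\cap J = \operatorname{lcm}(a,b)\mathbb Z$ and $I+J = \gcd(a,b)\mathbb Z$, and comparing covolumes of the outer terms against the middle term yields the classical identity $\gcd(a,b)\cdot\operatorname{lcm}(a,b) = ab$. A sketch in Python, with the lcm computed by direct search so the check is not circular:

```python
from math import gcd

def lcm_direct(a, b):
    # smallest positive common multiple, found by brute-force search
    return next(k for k in range(b, a * b + 1, b) if k % a == 0)

# For I = aZ, J = bZ inside Z, the sequence specializes to
# 0 -> lcm(a,b)Z -> aZ (+) bZ -> gcd(a,b)Z -> 0; multiplicativity of
# covolumes along the sequence forces gcd * lcm = a * b.
for a, b in [(4, 6), (12, 18), (7, 13)]:
    assert gcd(a, b) * lcm_direct(a, b) == a * b
```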
|
There is one obvious sequence that underlies all vector analysis and a lot that builds on it, no matter whether it's applied analysis, PDE, physics, or the original foundations of algebraic topology. Yet it is rarely written out, as the people in the applied fields prefer to split it into its constituent statements and the people in pure mathematics are inclined to immediately write down some generalization instead. What I am talking about is of course the relationship between the classic differential operators on 3D vector fields: $$0 \to \mathbb R\to C^\infty(\mathbb{R}^3;\mathbb{R}) \stackrel{\operatorname{grad}}{\to} C^\infty(\mathbb{R}^3;\mathbb{R}^3) \stackrel{\operatorname{curl}}{\to} C^\infty(\mathbb{R}^3;\mathbb{R}^3) \stackrel{\operatorname{div}}{\to} C^\infty(\mathbb{R}^3;\mathbb{R}) \to 0 $$
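Exactness in the middle of this sequence encodes the familiar identities $\operatorname{curl}\circ\operatorname{grad} = 0$ and $\operatorname{div}\circ\operatorname{curl} = 0$. Here is a quick numerical sanity check by central differences; the sample fields and the evaluation point are arbitrary choices of mine:

```python
import math

h = 1e-4  # step for central differences

def f(p):                       # an arbitrary smooth scalar field
    x, y, z = p
    return x * x * math.sin(y) + math.exp(z) * y

def F(p):                       # an arbitrary smooth vector field
    x, y, z = p
    return (x * y, y * z + math.cos(x), z * x * x)

def partial(g, p, i):
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (g(q1) - g(q2)) / (2 * h)

def grad(g, p):
    return tuple(partial(g, p, i) for i in range(3))

def curl(G, p):
    d = lambda j, i: partial(lambda q: G(q)[j], p, i)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

def div(G, p):
    return sum(partial(lambda q: G(q)[i], p, i) for i in range(3))

p0 = (0.3, -0.7, 1.1)
assert all(abs(c) < 1e-4 for c in curl(lambda q: grad(f, q), p0))  # curl(grad f) = 0
assert abs(div(lambda q: curl(F, q), p0)) < 1e-4                   # div(curl F) = 0
```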
|
{
"source": [
"https://mathoverflow.net/questions/363720",
"https://mathoverflow.net",
"https://mathoverflow.net/users/123769/"
]
}
|
364,080 |
If $M$ and $N$ are closed smooth manifolds, and $M\times S^1$ is diffeomorphic to $N\times S^1$ , is it true that $M$ and $N$ are diffeomorphic?
|
If $M$ is of dimension $<4$ then the answer is YES, because there are no exotic structures on $M$ and there are full classification results. [EDIT: In the case of 3-manifolds this is true except for some surface bundles over $S^1$ with fiber genus $>1$ and periodic monodromy; see Stability of 3-manifolds.] It is not true in dimension 4. For example, any closed simply-connected 4-manifold $M$ and an exotic copy $M'$ are h-cobordant by a theorem of Wall. Thus $M\times S^1$ is h-cobordant to $M'\times S^1$ (as one can extend the previous h-cobordism trivially along the $S^1$ component). This is then a trivial cobordism by the high-dimensional s-cobordism theorem, which says that such an h-cobordism is trivial if the Whitehead torsion of $\pi_1(M\times S^1)$ vanishes, and indeed $Wh(\pi_1(M\times S^1))= Wh(\mathbb Z)=0$ by a result of Bass. So $M\times S^1$ and $M'\times S^1$ are in fact diffeomorphic. When the dimension is $>4$ the answer is YES if $M$ is simply-connected. To see this, notice that it is enough to show that $M$ and $M'$ are h-cobordant. Since $M\times S^1$ is diffeomorphic to $M'\times S^1$ there is a map $f:M \to M'\times S^1$. Since $M$ is simply-connected, $f$ has a lift $\bar{f}$ to the universal cover $M'\times \mathbb R$. We claim that the image of $\bar{f}$ separates $M'\times \mathbb R$. Otherwise we can cut $M'\times \mathbb R$ along $Im(\bar{f})$ and connect the two boundary components by an arc $\gamma$. This arc in the original manifold $M'\times \mathbb R$ gives rise to a closed curve $\gamma'$ which transversally intersects $Im(\bar{f})$ at a single point. But $M'\times \mathbb R$ is simply-connected and thus $\gamma'$ is homotopic to a point disjoint from $Im(\bar{f})$, contradicting the homotopy-invariant count of transverse intersection points. Now, since $M$ is compact we can find a cobordism from $Im(\bar{f})\approx M$ to $M'\times \{t\}$ for some sufficiently large $t\in \mathbb R$.
Since everything is simply-connected and the projection map induces isomorphisms on homology, by Hurewicz's theorem we can conclude that this is an h-cobordism, and so $M$ is diffeomorphic to $M'$.
|
{
"source": [
"https://mathoverflow.net/questions/364080",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5259/"
]
}
|
364,099 |
Is it possible to (locally) approximate an arbitrary smooth diffeomorphism by a polynomial diffeomorphism ? More precisely: Let $f:\mathbb{R}^d\rightarrow\mathbb{R}^d$ be a smooth diffeomorphism for $d>1$ . For $U\subset\mathbb{R}^d$ bounded and open and $\varepsilon>0$ , is there a diffeomorphism $p=(p_1, \cdots, p_d) : U\rightarrow\mathbb{R}^d$ (with inverse $q:=p^{-1} : p(U)\rightarrow U$ ) such that both $\|f - p\|_{\infty;\,U}:=\sup_{x\in U}|f(x) - p(x)| < \varepsilon$ , $\ \textbf{and}$ each component of $p$ and of $q=(q_1,\cdots,q_d)$ is a polynomial, i.e. $p_i, q_i\in\mathbb{R}[x_1, \ldots, x_d]$ for each $i=1, \ldots, d$ ? Clearly, by Stone-Weierstrass there is a polynomial map $p : \mathbb{R}^d\rightarrow\mathbb{R}^d$ with $\|f - p\|_{\infty;\,U} < \varepsilon$ and such that $q:=(\left.p\right|_U)^{-1}$ exists; in general, however, this $q$ will not be a polynomial map. Do you have any ideas/references under which conditions on $f$ an approximation of the above kind can be guaranteed nonetheless? $\textbf{Note:}$ This is a crosspost from https://math.stackexchange.com/questions/3689873/approximation-of-smooth-diffeomorphisms-by-polynomial-diffeomorphisms
|
The answer is 'no', because polynomial mappings with polynomial inverses preserve volumes up to a constant multiple. To see why this property holds, suppose that $p:\mathbb{R}^d\to\mathbb{R}^d$ is a polynomial mapping with polynomial inverse $q:\mathbb{R}^d\to\mathbb{R}^d$ . Then $p$ and $q$ extend to $\mathbb{C}^d$ as polynomial maps with polynomial inverses. This means that the Jacobian determinant of $p$ on $\mathbb{C}^d$ is a complex polynomial with no zeros and hence must be a (nonzero) constant. Now, consider a diffeomorphism $f:\mathbb{R}^d\to\mathbb{R}^d$ that is radial , i.e., $f(x) = m(|x|^2)x$ for some smooth function $m>0$ . One can easily choose $m$ in such a way that $m(4)=1/2$ and $m(9)=4/3$ , so that $f$ maps the ball of radius $2$ about the origin diffeomorphically onto the ball of radius $1$ about the origin while it maps the ball of radius $3$ about the origin diffeomorphically onto the ball of radius $4$ about the origin. Let $\epsilon>0$ be very small and suppose that $\|f-p\|_{\infty;U} <\epsilon$ for $U$ chosen to be some very large ball centered on the origin. Then $p$ maps the sphere of radius $2$ about the origin to within an $\epsilon$ -neighborhood of the sphere of radius $1$ , while it maps the sphere of radius $3$ about the origin to within an $\epsilon$ -neighborhood of the sphere of radius $4$ . It's easy to see from this that $p$ cannot have constant Jacobian determinant. Added remark: The group $\mathrm{SDiff}(\mathbb{R}^d)$ consisting of volume-preserving diffeomorphisms of $\mathbb{R}^d$ is a 'Lie group' in Sophus Lie's original sense (i.e., a group of diffeomorphisms defined by the satisfaction of a system of differential equations; in this case, that the Jacobian determinant be equal to $1$ ). 
The subgroup $\mathcal{SP}(\mathbb{R}^d)\subset \mathrm{SDiff}(\mathbb{R}^d)$ consisting of volume-preserving polynomial diffeomorphisms with polynomial inverses however, is not a 'Lie subgroup' in Lie's original sense when $d>1$ , as it cannot be defined by the satisfaction of a system of differential equations: It contains all of the mappings of the form $p(x) = x + a\,(b{\cdot}x)^m$ where $a,b\in\mathbb{R}^d$ satisfy $a\cdot b = 0$ and $m>1$ is an integer (indeed, $p^{-1}(y) = y - a\,(b{\cdot}y)^m$ ), plus, it contains $\mathrm{SL}(d,\mathbb{R})$ and the subgroup consisting of the translations. Using this, it is easy to show that, for any $f\in\mathrm{SDiff}(\mathbb{R}^d)$ and for any integer $k$ , there exists a $p\in \mathcal{SP}(\mathbb{R}^d)$ such that $f$ and $p$ have the same Taylor series at the origin up to and including order $k$ . Thus, $\mathcal{SP}(\mathbb{R}^d)$ cannot be defined by a system of differential equations (in Lie's sense). Using this Taylor approximation property, one can prove that $\mathcal{SP}(\mathbb{R}^d)$ , like $\mathrm{SDiff}(\mathbb{R}^d)$ , acts transitively on $n$ -tuples of distinct points in $\mathbb{R}^d$ for any integer $n$ . Whether one can prove that $\mathcal{SP}(\mathbb{R}^d)$ can 'uniformly approximate' $\mathrm{SDiff}(\mathbb{R}^d)$ on compact sets is an interesting question.
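The shear-like maps $p(x) = x + a\,(b\cdot x)^m$ above are easy to experiment with. Below is a sketch for $d = 2$ with my own concrete choices $a = (1,0)$, $b = (0,1)$, $m = 3$: the numerically computed Jacobian determinant is $1$ at every sample point, and $q(y) = y - a\,(b\cdot y)^m$ really does invert $p$.

```python
m = 3

def p(x1, x2):
    # p(x) = x + a (b.x)^m with a = (1, 0), b = (0, 1), so a.b = 0
    return (x1 + x2**m, x2)

def q(y1, y2):
    # the polynomial inverse q(y) = y - a (b.y)^m
    return (y1 - y2**m, y2)

def jac_det(f, x1, x2, h=1e-6):
    # central-difference Jacobian determinant of f at (x1, x2)
    fx = [(f(x1 + h, x2)[i] - f(x1 - h, x2)[i]) / (2 * h) for i in range(2)]
    fy = [(f(x1, x2 + h)[i] - f(x1, x2 - h)[i]) / (2 * h) for i in range(2)]
    return fx[0] * fy[1] - fx[1] * fy[0]

for pt in [(0.0, 0.0), (1.5, -2.0), (0.3, 0.7)]:
    assert abs(jac_det(p, *pt) - 1) < 1e-6            # volume-preserving
    back = q(*p(*pt))
    assert all(abs(u - v) < 1e-9 for u, v in zip(back, pt))
```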
|
{
"source": [
"https://mathoverflow.net/questions/364099",
"https://mathoverflow.net",
"https://mathoverflow.net/users/160188/"
]
}
|
365,070 |
In A survey of homogeneous structures by Macpherson (Discrete Mathematics, vol. 311, 2011), a stable or unstable theory is defined as (Definition 3.3.1): A complete theory $T$ is unstable if there is a formula $\varphi(\overline{x}, \overline{y})$ (where $\ell(\overline{x}) = r$ and $\ell(\overline{y}) = s)$ , some model $M$ of $T$ , and $\overline{a_i} \in M^r$ and $\overline{b_i}\in M^s$ (for $i \in \Bbb{N}$ ) such that for all $i, j \in\Bbb{N}$ , $$M \vDash \varphi(\overline{a_i}, \overline{b_j})\iff i \leqslant j$$ The theory $T$ is stable otherwise. I wonder what is the intuition behind this definition. More specifically, under what situation is it conceived? Why is it called stable theory? Does the name stable related to the notion of stability in geometry or physics?
|
I am going to explain some motivation by relating this definition of stability to other definitions, and discussing some examples. For simplicity I am going to assume we are working in a countable language. An alternate definition of stability, for a complete theory $T$, is as follows. Definition. $T$ is stable if there is some infinite cardinal $\lambda$ such that $T$ is $\lambda$-stable, i.e., for any $M\models T$, and any $A\subseteq M$, if $|A|\leq\lambda$ then for any $n$, $|S_n^T(A)|\leq\lambda$. (In the previous definition, $S_n^T(A)$ is the space of complete $n$-types with parameters from $A$. One usually drops $T$ from this notation when $T$ is fixed. Also, in checking stability of $T$, it is enough to restrict to $1$-types by an easy exercise.) So the slogan is that stable theories have "few types", since in general $|S^T_n(A)|$ is only bounded by $2^{|A|+\aleph_0}$. Since types describe the behavior of elements of models (or rather the "potential" behavior in elementary extensions), we have the idea that stable theories are nice because the number of possible behaviors is constrained. Counting types this way is an important part of stability theory and Shelah's work in classifying first-order theories according to the spectrum function $I(T,\kappa)$, which counts the number of non-isomorphic models of $T$ of cardinality $\kappa$. Part of the idea is that in order for this function to be well-behaved, and for there to be hope of classifying the models of a theory, one needs stability (in a strong form).
It is also worth mentioning here that there are strong restrictions on the set of $\lambda$ for which a theory $T$ can be $\lambda$-stable. For example, if $T$ is stable in some $\lambda$ then it is stable in unboundedly many $\lambda$. Moreover, if $T$ is $\aleph_0$-stable then it is stable in all infinite $\lambda$. But this is getting off track a little, so I won't say more. To give some intuition for the connection between the two definitions of stability, consider the following example. Example 1. Let $T$ be the theory of a dense linear order without endpoints (this determines a complete theory). We can see in $T$ that there are a lot of types. For example, just consider $1$-types over $\mathbb{Q}$. For any irrational $\alpha\in\mathbb{R}\setminus\mathbb{Q}$, we have a type $p_{\alpha}$ which says $x<r$ for any rational $r>\alpha$, and $x>r$ for any rational $r<\alpha$. This type is finitely satisfiable in $\mathbb{Q}$ (hence is a bona fide type, and complete by QE for this theory). It is clear that different irrationals give distinct types, and so we obtain $2^{\aleph_0}$ types over a countable parameter set. Conclusion: DLO is not $\aleph_0$-stable (by the second definition). This argument can be generalized to any $\lambda$ (this is Exercise 4.5.21 in Dave Marker's book Model Theory: An Introduction). In the previous example, we kill stability and obtain a lot of types by using cuts in orderings. So the amazing result is that, in some sense, this is precisely the way to kill stability, but with respect to a more "local" notion of ordering as described by the definition you've given. Indeed, the following is part of Shelah's Unstable Formula Theorem, which is Theorem 2.2 in Chapter 2 of Classification Theory. Unstable Formula Theorem (abridged). Let $\phi(\bar{x},\bar{y})$ be a formula. The following are equivalent.
$\phi(\bar{x},\bar{y})$ is unstable in every $\lambda\geq\aleph_0$ , i.e., for every $\lambda\geq\aleph_0$ , there is a subset $A$ (of some model of $T$ ) such that $|A|\leq\lambda<|S_\phi(A)|$ (where $S_\phi(A)$ denotes the space of $\phi$ -types with parameters from $A$ ). $\phi(\bar{x},\bar{y})$ is unstable in some $\lambda\geq\aleph_0$ . $\phi(\bar{x},\bar{y})$ has the order property (i.e., satisfies the condition in your definition). The proof of this theorem is quite beautiful, and draws from both model theory and combinatorics, going back to a combinatorial result of Erdos and Makkai from this paper . In order to fully establish the equivalence between the two notions of stability for a theory (rather than a formula), one needs to show that if $T$ is not $\lambda$ -stable for any $\lambda$ (which means there are a lot of complete types) then this can be detected by a single formula (i.e. there are a lot of $\phi$ -types for a fixed $\phi(\bar{x},\bar{y})$ ). This is Theorem 2.13 in the same chapter of Shelah's book. Remark (on the word "stable"). Stable theories were first defined by Shelah in the 1969 paper Stable Theories . Much of the motivation in this paper comes from work of Morley on totally transcendental theories , which are a subclass of stable theories (in a countable language, totally transcendental is the same as $\aleph_0$ -stable). In this paper, Shelah proves the first "spectrum theorem" for type-counting. In particular he proves that if $T$ is a complete theory (we are still in a countable language) then one of the following holds: For any model $M$ and $A\subseteq M$ , $|S_1(A)|\leq |A|+2^{\aleph_0}$ . For any model $M$ and $A\subseteq M$ , $|S_1(A)|\leq |A|^{\aleph_0}$ ; and (1) fails. For any infinite $\lambda$ , there is a model $M$ and $A\subseteq M$ such that $|A|=\lambda<|S_1(A)|$ . 
Shelah calls the first two cases "stable", presumably because the cardinalities of the type spaces are "stable" as functions of the cardinalities of the parameter sets (modulo some cardinal arithmetic). For example, in the first case, we can see that $T$ is $\lambda$-stable according to the above definition whenever $\lambda\geq 2^{\aleph_0}$. In the second case $T$ is $\lambda$-stable whenever $\lambda^{\aleph_0}=\lambda$. This result narrows the "stability spectrum" for a particular theory, and Shelah further refines this in his book. Example 2 (an $\aleph_0$-stable theory). Let $T$ be the theory of algebraically closed fields of characteristic $0$ (this determines a complete theory in the language of rings). This theory has quantifier elimination in the language of rings (this is known to algebraic geometers as the result of Chevalley saying that the projection of a constructible set is constructible). We can use this to deduce that $T$ is $\lambda$-stable for any infinite $\lambda$. Indeed, suppose $A$ is a subset of some model of $T$. We can assume for simplicity that $A$ is a model (note that $|acl(A)|=|A|+\aleph_0$). By quantifier elimination, a $1$-type over $A$ is determined by polynomial equations and inequations with coefficients in $A$. So one of two things can happen. Either the $1$-type contains a polynomial equation (and thus specifies an element in $A$) or the $1$-type says that no polynomial equation is satisfied (and thus describes a "transcendental" over $A$). So there are $|A|+1=|A|$ many $1$-types, i.e., $|S_1(A)|=|A|$. Example 3 (a superstable theory). Let $T$ be the theory of the additive group of integers. This theory does not have quantifier elimination in the language of groups. In order to obtain QE, one needs binary relation symbols $\equiv_n$ specifying congruence modulo $n$ for all $n\geq 2$ (note that each $\equiv_n$ is definable in the language of groups using existential quantifiers).
We can use these congruence relations to see that $T$ is not $\aleph_0$ -stable. In particular, for any (possibly infinite) set $P$ of primes, there is a $1$ -type over $\emptyset$ which contains $x\equiv_n 0$ for all $n\in P$ and $x\not\equiv_n 0$ for all primes $n\not\in P$ . So we get $2^{\aleph_0}$ types over $\emptyset$ . On the other hand, $T$ is $\lambda$ -stable for any $\lambda\geq 2^{\aleph_0}$ . This takes a little more work to write out, but one essentially uses QE to show that a $1$ -type over a model $A$ either specifies an element of $A$ , or specifies a new element plus divisibility conditions for primes. So we get $|A|+2^{\aleph_0}$ types. In general, a theory that is $\lambda$ -stable for sufficiently large $\lambda$ (i.e. case (1) in the Remark above) is called superstable . In both of the previous examples, a good thought experiment is whether it would be easy to check stability using the order property definition (i.e., show that no formula admits an order). Indeed, in my experience the order property definition is a good way to show that a theory is unstable , but in order to show a theory is stable one usually uses type-counting or more sophisticated methods (e.g., forking). On the other hand, going back to Example 1, while type-counting in dense linear orders wasn't all that hard, it is completely trivial to find a formula with the order property. What I've said only scratches the surface of stability, and there is much more to say. Probably others will add. Indeed, if this website were called Model Theory MathOverflow , then your question could easily be a community wiki.
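A finite shadow of the type-counting in Example 3 can be checked by machine: each finite set $P$ of primes is realized by an honest integer (for instance the product of $P$), and distinct sets give distinct divisibility patterns, which is the finite satisfiability underlying the $2^{\aleph_0}$ types. A small sketch (the particular list of primes is an arbitrary truncation of mine):

```python
from math import prod
from itertools import combinations

primes = [2, 3, 5, 7, 11]

def pattern(n):
    # divisibility pattern of n with respect to the fixed prime list
    return tuple(n % p == 0 for p in primes)

# each subset P of the prime list is realized by the witness prod(P)
# (prod of the empty tuple is 1), and distinct subsets give distinct patterns
subsets = [list(c) for r in range(len(primes) + 1)
           for c in combinations(primes, r)]
patterns = {tuple(P): pattern(prod(P)) for P in subsets}
assert all(pattern(prod(P)) == tuple(p in P for p in primes) for P in subsets)
assert len(set(patterns.values())) == len(subsets)   # the map is injective
```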
|
{
"source": [
"https://mathoverflow.net/questions/365070",
"https://mathoverflow.net",
"https://mathoverflow.net/users/120374/"
]
}
|
365,387 |
I have over the years learned some tricks which save a lot of time,
and I wish I had known them earlier. Some tricks are LaTeX-specific, but other tricks are more general. Let me start with a few examples: Use LaTeX macros and definitions for easy reuse. This is particularly useful when making many similar-looking figures. Another example is to make a macro that includes the $q$ when typing q-binomial coefficients. This ensures consistency. In documents with many TikZ figures, compilation time can become quite brutal. However, spreading out all figures over many files is also inconvenient. Solution: Use one standalone file, where each figure appears as a separate .pdf page. Then include the .pdf pages as figures in the main document. All figures are in one .tex file, making it easy to reuse macros. I find this trick extremely useful, as it does not lead to duplicate code spread over several files. Use BibTeX and .bib files. I prefer to use doi2bib to convert DOIs to a .bib entry (some light editing might be needed). For collaboration, use git. Also, Dropbox or similar for backups. Keeping track of versions has saved me several times. Learn regular expressions, for search-and-replace in .tex files. This is useful for converting hard-coded syntax into macros. Get electronic (local) copies of standard references, and make sure to name them in a sane manner. Then it is easy to quickly search for the correct book. These are available when the wifi is down, or while traveling. Do file reorganization and cleanup regularly. Get final versions of your published papers, and store them in a folder, as you'll need them for job applications. Hunting down (your own!) published papers in pay-walled journals can be surprisingly tedious! Take the time to move code snippets from project-specific notebooks, and turn them into software packages for easy reuse. Also, it is sometimes worth spending time optimizing code - waiting for code to run does not seem like a big deal, but I have noticed that small improvements in my work-flow can have a big impact.
I am much more likely to try out a conjecture if it is easy to run the code.
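To make the first two tricks concrete, here is a minimal sketch of the kind of setup described above; the macro name and file layout are my own illustrative choices, not a standard.

```latex
% consistent q-binomials: the q is typed once, inside the macro
% (requires \usepackage{amsmath})
\newcommand{\qbinom}[2]{\genfrac{[}{]}{0pt}{}{#1}{#2}_{q}}
% usage: $\qbinom{n}{k}$

% standalone figures: a single figures.tex uses
%   \documentclass[tikz]{standalone}
% with one tikzpicture per page, compiled once to figures.pdf;
% the main document then pulls in a figure by page number:
% \usepackage{graphicx}
% \includegraphics[page=3]{figures.pdf}
```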
|
Quiver, by Varkor, provides a graphical interface to generate commutative diagrams. I find it extremely useful. Check out his blog: https://varkor.github.io/blog/2020/11/25/announcing-quiver.html
|
{
"source": [
"https://mathoverflow.net/questions/365387",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1056/"
]
}
|
365,411 |
I am hoping that the brilliant MathOverflow geometers can help me out. Question 1. Suppose that I have a fixed finite-length straightedge and fixed finite-size compass. Can I still construct all constructible points in the plane? I know the answers in several variations. If I have compasses of arbitrary size, then I don't need a straightedge at all. This is the Mohr–Mascheroni compass-only theorem. If I have an infinite straightedge (or arbitrarily large), then I need only a single compass of any fixed size. This is the rusty compass theorem. Indeed, the Poncelet–Steiner theorem shows that I need only an infinite straightedge and a single circle of known center and radius. But what I don't know is the remaining case, where both the straightedge and compass are limited in size. The difficult case seems to be where you have two points very far apart and you want to construct the line joining them. Will Sawin's comment answers the question I had asked above. But it doesn't seem to answer the relative version of the question, which is what I had had in mind: Question 2. Suppose that I have a fixed finite-length straightedge and fixed finite-size compass. Can I still construct all constructible points in the plane, relative to a fixed finite set of points? In other words, is the tool set of a finite straightedge and finite compass fully equivalent to the arbitrary-size tool set we usually think about?
|
A bounded-length straightedge can emulate an arbitrarily large straightedge (even without requiring any compass), so the rusty compass theorem is sufficient. Note that, in particular, it suffices to show that there exists an $\varepsilon > 0$ such that a straightedge of length $1$ is capable of joining two points separated by any distance $\leq 1 + \varepsilon$ (and therefore emulates a straightedge of length $1 + \varepsilon$ , and therefore arbitrarily long straightedges by iterating this process). We can use Pappus's theorem to establish this result for any $\varepsilon < 1$ : https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem In particular, given two points $A$ and $c$ (separated by a distance slightly greater than 1) which we wish to join, draw a line $g$ through $A$ and a line $h$ through $c$ which approach relatively close to each other. Then add arbitrary points $B, C$ on $g$ and $a, b$ on $h$ such that the four new points are within distance $1$ of each other and the two original points. We assume wlog $b$ is between $a, c$ and $B$ is between $A, C$ . Then we can construct $X$ by intersecting the short (length $< 1$ ) lines $Ab, Ba$ , and construct $Z$ by intersecting the short lines $Bc, Cb$ . Then $Y$ can be constructed by intersecting the short lines $XZ$ and $Ca$ . Now, $Y$ is positioned collinear with $A$ and $c$ and between them, so we can use it as a 'stepping stone' to draw a straight line between $A$ and $c$ . The result follows. EDIT: I decided to do this with the edge of a coaster and two points slightly too far apart to be joined directly:
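The collinearity of $X$, $Y$, $Z$ that the construction relies on can be verified in exact rational arithmetic. Here is a sketch with coordinates chosen arbitrarily by me ($A, B, C$ on the $x$-axis and $a, b, c$ on the line $y = x + 1$):

```python
from fractions import Fraction as Fr

def line(P, Q):
    # homogeneous coordinates (u, v, w) of the line ux + vy + w = 0 through P, Q
    (x1, y1), (x2, y2) = P, Q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def meet(l, m):
    # affine intersection point of two non-parallel lines
    a1, b1, c1 = l
    a2, b2, c2 = m
    d = Fr(a1 * b2 - a2 * b1)
    return ((b1 * c2 - b2 * c1) / d, (c1 * a2 - c2 * a1) / d)

A, B, C = (0, 0), (2, 0), (5, 0)   # three points on one line
a, b, c = (0, 1), (1, 2), (3, 4)   # three points on another line

X = meet(line(A, b), line(a, B))
Y = meet(line(A, c), line(a, C))
Z = meet(line(B, c), line(b, C))

# Pappus: X, Y, Z are collinear (3x3 determinant with a column of ones)
det = (X[0] * (Y[1] - Z[1]) - X[1] * (Y[0] - Z[0])
       + (Y[0] * Z[1] - Z[0] * Y[1]))
assert det == 0
```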
|
{
"source": [
"https://mathoverflow.net/questions/365411",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1946/"
]
}
|
365,569 |
I have just graduated from the University of Chicago and no longer have access to online journal resources, but I cannot afford to pay for them directly. Normally, I would be able to access library resources for a small fee in the campus library. However, due to Covid-19 closures, this is no longer an option. Previous responses to a similar question often involve physical access to a library, which is impossible for many for the foreseeable future. They also consider some access to online resources, but I was wondering if there were any Covid-specific online resources which have come about given the recent crisis. Even if they are not specific to the crisis, online tools which are accessible now are increasingly important, so they would be useful.
|
Let me try to summarize this long discussion in the comments. There are many free resources. arXiv . It is true that not all mathematicians post their papers on the arXiv, for various reasons. But some of those who don't, post them on their personal sites. There are also other depositories, for example in Europe (see 5 below). NSF depository . NSF's stated policy is that all results of NSF-sponsored research "must be available to the public at most in 1 year since their publication". I checked: the site is somewhat confusing but it works. Many journals are freely available, and many more make their papers available after some time (usually 4-5 years). When you choose a journal to publish your paper, take this into account! Here is a convenient catalog of online journals . I am sure many other libraries have similar catalogs; I find this one convenient. The journals with free access are marked green, with partial access yellow and red. Access may depend on your location or on the date of publication. Finally there are "pirate" sites. Some of them may have huge collections, larger than many university libraries. They frequently change their names and location. Some keywords may be "bookfi" or "genesis" for books, and "sci-hub" for journal articles. (Some of them may be illegal in some countries). A simple search on Google and especially on Google Scholar sometimes finds what you need. It could be a place you do not expect. Some saved/cached copy. Some preprint depository that you do not know. Some personal web site, etc. The very important resource is MathScinet, which unfortunately has no free version. But its German competitor Zentralblatt Math is partially free. Whatever you search there, it gives you only 3 items for free. But by clever choice of search criteria
you can obtain amazing results. For really old items, there is also Jahrbuch which is free (it is a subset of Zentralblatt). For new papers, Google Scholar is excellent, especially if you know the author's name and title of the paper. It also sometimes finds you a free copy when available. EDIT. I asked NSF, and they explained that all NSF-supported papers older than 1 year are
really available, though the site is somewhat confusing. One has to click on
the title of the paper, and then on a little square which says "pdf". EDIT 2: I collected some links to free resources on my web page .
|
{
"source": [
"https://mathoverflow.net/questions/365569",
"https://mathoverflow.net",
"https://mathoverflow.net/users/140709/"
]
}
|
365,808 |
Sorry if something like this has already been asked; I searched but couldn't find anything similar to my question. I'm a senior undergraduate and currently doing my senior thesis. My senior thesis is not original work; however, it's quite demanding and I'm learning a lot of high-level topics. I have been lurking around arXiv and started reading "Solved and Unsolved Problems in Number Theory" by Daniel Shanks. My plan is to work on some open problems and play around with them so that I can try to get a publication before I graduate. My main reason for trying to get a publication is to increase my chances to get into a good graduate program (my GPA is not that great and I don't have the money to apply to many programs, so unless I publish something I'll probably only apply to safety schools). With that being said, if I were to do original work, how would I go about publishing? I might end up modifying a problem too much and proving something that might not be interesting, so I feel it'll get rejected from a journal for not being profound. I will also attack problems with all I know, so I might end up using some heavy tools that aren't part of an undergraduate curriculum, so I don't know if I would send them to an undergraduate research journal. Maybe I could just upload to Dropbox or arXiv, but then it's not a publication. I have thought about asking my advisors about this, but I'd rather not, since I'm aware I'm probably being overly ambitious and should probably focus on my thesis instead. Which I can agree with, hence I'll probably play around with problems on the weekends only or once a week. I'm also aware I might end up not publishing anything at all; however, in my mind, unless I give it a shot I won't know. Either way I'll have fun and end up learning a lot about research, so I don't see a downside. (In case my background is relevant, my senior thesis is about perfectoid spaces.
I've already taken a graduate course on commutative algebra, have taken a basic course on p-adic analysis, started learning about point-free topology, already know the basics of category theory, am still learning more about algebraic geometry, will learn about adic spaces soon/already know a bit about Krull valuations, am learning about homological algebra through Weibel's book, have started reading Szamuely's Galois theory book, will have to learn about étale cohomology soon, will also learn some things from almost mathematics, etc.)
|
There are many undergraduate journals that would not be likely to reject your work as "not profound enough." Basically, if it's written while the author was an undergraduate, and contains anything novel at all (at the level one would expect of an undergraduate), then a journal can be found for it. This includes well-written expository accounts of existing texts, especially if they work out some more examples. The most famous undergrad math journal is Involve, which has higher standards than the others (more like what a grad student or professor might do). But there are plenty of others that accept papers that might not make it to Involve: List of undergrad math journals List of general undergrad journals Tons of resources for undergrad research Like the rest of us, you should do the research first, and think later about where it can be published. I agree with Sam that this should be guided by your advisor. Don't be afraid to have a conversation with your advisor stating that you are hoping for a journal paper in one of these journals. That can guide how focused the research experience should be, and can guide how you write the thesis. It's not overly ambitious at all. One last point: I don't think it makes sense to tether your perceptions of admission to grad school to whether or not you have a journal publication. For one thing, even after you finish the research, it'll take you weeks or months to write the paper. Then, there will be 6-12 months while the paper is being refereed. Then a back and forth with the referees. The point is: grad programs in math would be crazy to expect undergrads to already have publications before applying. However, if you have a good draft, you can share that when you apply. I think it would matter less than your recommendation letters (another reason to talk to your advisor often), GPA, and test scores. Having fun with it, as you say, is the best idea. 
Whether or not you prove new results, your advisor can relay your passion and depth of understanding to graduate programs, and that will matter much more than your publication record at this stage of your career. Good luck!
|
{
"source": [
"https://mathoverflow.net/questions/365808",
"https://mathoverflow.net",
"https://mathoverflow.net/users/155670/"
]
}
|
365,947 |
I think a related question might be this ( Set-Theoretic Issues/Categories ). There are many ways in which you can avoid set-theoretic paradoxes in dealing with category theory (see for instance Shulman - Set theory for category theory ). Some important results in category theory assume some kind of ‘smallness’ of your category in practice. A very much used result in homological algebra is the Freyd–Mitchell embedding theorem: Every small abelian category admits a fully faithful exact embedding into a category $\text{$R$-mod}$ for a suitable ring $R$ . Now, in everyday usage of this result, the restriction that the category be small is not important: for instance, if you want to do diagram chasing in a diagram in any abelian category, you can always restrict your attention to the abelian subcategory generated by the objects and maps of the diagram, and that category will be small. I am wondering: What are results of category theory, commonly used in mathematical practice, in which considerations of size are crucial? Shulman in [op. cit.] gives what I think is an example, the Freyd Special Adjoint Functor Theorem: a functor from a complete, locally small, and well-powered category with a cogenerating set to a locally small category has a left adjoint if and only if it preserves small limits. I would find it interesting to see some discussion on this topic.
|
Very often one has the feeling that set-theoretic issues are somewhat cheatable, and people feel like they have eluded foundations when they manage to cheat them. Even worse, some claim that foundations are irrelevant because each time they dare to be relevant, they can be cheated. What these people haven't understood is that the best foundation is the one that allows the most cheating (without falling apart). In the relationship between foundation and practice, though, what matters the most is the phenomenology of every-day mathematics. In order to make this statement clear, let me state the uncheatable lemma. In the later discussion, we will see the repercussion of this lemma. Lemma (The uncheatable).
A locally small, large-cocomplete category is a poset.

The lemma shows that no matter how fat the sets in which you enrich your category are, there is no chance that the category is absolutely cocomplete.

Example . In the category of sets, the large coproduct of all sets is not a set. If you enlarge the universe in such a way that it is, then some other (even larger) coproduct will not exist. This is inescapable and always boils down to the Russell paradox.

Remark . Notice that obvious analogs of this lemma are true also for categories based on Grothendieck universes (as opposed to sets and classes). One can't escape the truth by changing its presentation.

Excursus . Very recently Thomas Forster, Adam Lewicki, and Alice Vidrine have tried to reboot category theory in Stratified Set Theory in their paper Category Theory with Stratified Set Theory (arXiv: https://arxiv.org/abs/1911.04704 ). One could consider this as a kind of solution to the uncheatable lemma. But it's hard to tell whether it is a true solution or a more or less equivalent linguistic reformulation. This theory is at its early stages.

At this point one could say that I haven't shown any concrete problem; we all know that the class of all sets is not a set, and it appears as a piece of quite harmless news to us. In the rest of the discussion, I will try to show that the uncheatable lemma has consequences in the daily use of category theory. Categories will be assumed to be locally small with respect to some category of sets. Let me recall a standard result from the theory of Kan extensions.

Lemma (Kan). Let $\mathsf{B} \stackrel{f}{\leftarrow} \mathsf{A} \stackrel{g}{\to} \mathsf{C}$ be a span where $\mathsf{A}$ is small and $\mathsf{C}$ is (small) cocomplete. Then the left Kan extension $\mathsf{lan}_f g$ exists.

Kan extensions are a useful tool in everyday practice, with applications in many different topics of category theory. 
In this lemma (which is one of the most used in this topic) the set-theoretic issue is far from being hidden: $\mathsf{A}$ needs to be small (with respect to the size of $\mathsf{C}$)! There is no chance that the lemma is true when $\mathsf{A}$ is a large category. Indeed, since colimits can be computed via Kan extensions, the lemma would imply that every (small) cocomplete category is large-cocomplete, which is not allowed by the uncheatable . Also, there is no chance to solve the problem by saying: well, let's just consider $\mathsf{C}$ to be large-cocomplete , again because of the uncheatable . This problem is hard to avoid because the size of the categories of our interest is in fact always larger than the size of their inhabitants (this just means that most of the time Ob $\mathsf{C}$ is a proper class, as big as the size of the enrichment).

Notice that the Kan extension problem recovers the Adjoint functor theorem one, because adjoints are computed via Kan extensions of identities of large categories, $$\mathsf{R} = \mathsf{lan}_\mathsf{L}(1) \qquad \mathsf{L} = \mathsf{ran}_\mathsf{R}(1) .$$ Indeed, in that case, the solution set condition is precisely what is needed in order to cut down the size of some colimits that otherwise would be too large to compute, as can be synthesized by the sharp version of the Kan lemma.

Sharp Kan lemma. Let $\mathsf{B} \stackrel{f}{\leftarrow} \mathsf{A} \stackrel{g}{\to} \mathsf{C}$ be a span where $\mathsf{B}(f-,b)$ is a small presheaf for every $b \in \mathsf{B}$ and $\mathsf{C}$ is (small) cocomplete. Then the left Kan extension $\mathsf{lan}_f g$ exists.

Indeed this lemma allows $\mathsf{A}$ to be large, but we must pay a tribute to its presheaf category: $f$ needs to be somehow locally small (with respect to the size of $\mathsf{C}$).

Kan lemma Fortissimo. Let $ \mathsf{A} \stackrel{f}{\to} \mathsf{B} $ be a functor. 
The following are equivalent:

1. For every $g :\mathsf{A} \to \mathsf{C}$ where $\mathsf{C}$ is a small-cocomplete category, $\mathsf{lan}_f g$ exists.
2. $\mathsf{lan}_f y$ exists, where $y$ is the Yoneda embedding in the category of small presheaves $y: \mathsf{A} \to \mathcal{P}(\mathsf{A})$ .
3. $\mathsf{B}(f-,b)$ is a small presheaf for every $b \in \mathsf{B}$ .

Even unconsciously, the previous discussion is one of the reasons for the popularity of locally presentable categories. Indeed, having a dense generator is a good compromise between generality and tameness . As evidence of this, in the context of accessible categories the sharp Kan lemma can be simplified.

Tame Kan lemma. Let $\mathsf{B} \stackrel{f}{\leftarrow} \mathsf{A} \stackrel{g}{\to} \mathsf{C}$ be a span of accessible categories, where $f$ is an accessible functor and $\mathsf{C}$ is (small) cocomplete. Then the left Kan extension $\mathsf{lan}_f g$ exists (and is accessible).

Warning . The proof of the previous lemma is based on the density (as opposed to codensity) of $\lambda$ -presentable objects in an accessible category. Thus the lemma is not valid for the right Kan extension.

References for Sharp. I am not aware of a reference for this result. It can follow from a careful analysis of Prop. A.7 in my paper Codensity: Isbell duality, pro-objects, compactness and accessibility . The structure of the proof remains the same; presheaves must be replaced by small presheaves.

References for Tame. This is an exercise; it can follow directly from the sharp Kan lemma, but it's enough to properly combine the usual Kan lemma, Prop. A.1&2 of the above-mentioned paper, and the fact that accessible functors have arity.

This answer is connected to this other one.
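For completeness, the uncheatable lemma itself has a short standard proof. The sketch below is Freyd's classical argument, dualized to the cocomplete form stated above; it is my own addition and not taken from the answer:

```latex
% Freyd's argument, dualized to cocompleteness (a sketch, added for completeness).
\begin{proof}[Proof sketch of the uncheatable lemma]
Suppose $\mathsf{C}$ is locally small and admits coproducts indexed by arbitrary
classes, and that some hom-set contains two distinct parallel arrows
$f \neq g \colon a \to b$. Let $M = \operatorname{Mor}(\mathsf{C})$ and form the
copower $M \cdot a = \coprod_{M} a$. By the universal property of the coproduct,
arrows $M \cdot a \to b$ correspond bijectively to functions
$M \to \mathsf{C}(a,b)$; restricting to functions valued in $\{f,g\}$ already gives
\[
  \lvert \mathsf{C}(M \cdot a,\, b) \rvert \;\geq\; 2^{\lvert M \rvert}
  \;>\; \lvert M \rvert ,
\]
contradicting the fact that $\mathsf{C}(M \cdot a, b)$ is a subclass of $M$.
Hence all parallel arrows coincide, i.e.\ $\mathsf{C}$ is a preorder, and a
skeletal such category is a poset.
\end{proof}
```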
|
{
"source": [
"https://mathoverflow.net/questions/365947",
"https://mathoverflow.net",
"https://mathoverflow.net/users/160378/"
]
}
|
366,070 |
In a lot of computational math, operations research, such as algorithm design for optimization problems and the like, authors like to use $$\langle \cdot, \cdot \rangle$$ as opposed to $$(\cdot)^T (\cdot)$$ Even when the space is clearly Euclidean and the operation is clearly the dot product. What is the benefit or advantage for doing so? Is it so that the notations generalize nicely to other spaces? Update: Thank you for all the great answers! Will take a while to process...
|
Mathematical notation in a given mathematical field $X$ is basically a correspondence $$ \mathrm{Notation}: \{ \hbox{well-formed expressions}\} \to \{ \hbox{abstract objects in } X \}$$ between mathematical expressions (or statements) on the written page (or blackboard, electronic document, etc.) and the mathematical objects (or concepts and ideas) in the heads of ourselves, our collaborators, and our audience. A good notation should make this correspondence $\mathrm{Notation}$ (and its inverse) as close to a (natural) isomorphism as possible. Thus, for instance, the following properties are desirable (though not mandatory): (Unambiguity) Every well-formed expression in the notation should have a unique mathematical interpretation in $X$ . (Related to this, one should strive to minimize the possible confusion between an interpretation of an expression using the given notation $\mathrm{Notation}$ , and the interpretation using a popular competing notation $\widetilde{\mathrm{Notation}}$ .) (Expressiveness) Conversely, every mathematical concept or object in $X$ should be describable in at least one way using the notation. (Preservation of quality, I) Every "natural" concept in $X$ should be easily expressible using the notation. (Preservation of quality, II) Every "unnatural" concept in $X$ should be difficult to express using the notation. [In particular, it is possible for a notational system to be too expressive to be suitable for a given application domain.] Contrapositively, expressions that look clean and natural in the notation system ought to correspond to natural objects or concepts in $X$ . (Error correction/detection) Typos in a well-formed expression should create an expression that is easily corrected (or at least detected) to recover the original intended meaning (or a small perturbation thereof). (Suggestiveness, I) Concepts that are "similar" in $X$ should have similar expressions in the notation, and conversely. 
(Suggestiveness, II) The calculus of formal manipulation in $\mathrm{Notation}$ should resemble the calculus of formal manipulation in other notational systems $\widetilde{\mathrm{Notation}}$ that mathematicians in $X$ are already familiar with. (Transformation) "Natural" transformation of mathematical concepts in $X$ (e.g., change of coordinates, or associativity of multiplication) should correspond to "natural" manipulation of their symbolic counterparts in the notation; similarly, application of standard results in $X$ should correspond to a clean and powerful calculus in the notational system. [In particularly good notation, the converse is also true: formal manipulation in the notation in a "natural" fashion can lead to discovering new ways to "naturally" transform the mathematical objects themselves.] etc. To evaluate these sorts of qualities, one has to look at the entire field $X$ as a whole; the quality of notation cannot be evaluated in a purely pointwise fashion by inspecting the notation $\mathrm{Notation}^{-1}(C)$ used for a single mathematical concept $C$ in $X$ . In particular, it is perfectly permissible to have many different notations $\mathrm{Notation}_1^{-1}(C), \mathrm{Notation}_2^{-1}(C), \dots$ for a single concept $C$ , each designed for use in a different field $X_1, X_2, \dots$ of mathematics. (In some cases, such as with the metrics of quality in desiderata 1 and 7, it is not even enough to look at the entire notational system $\mathrm{Notation}$ ; one must also consider its relationship with the other notational systems $\widetilde{\mathrm{Notation}}$ that are currently in popular use in the mathematical community, in order to assess the suitability of use of that notational system.) 
Returning to the specific example of expressing the concept $C$ of a scalar quantity $c$ being equal to the inner product of two vectors $u, v$ in a standard vector space ${\bf R}^n$ , there are not just two notations commonly used to capture $C$ , but in fact over a dozen (including several mentioned in other answers): Pedestrian notation : $c = \sum_{i=1}^n u_i v_i$ (or $c = u_1 v_1 + \dots + u_n v_n$ ). Euclidean notation : $c = u \cdot v$ (or $c = \vec{u} \cdot \vec{v}$ or $c = \mathbf{u} \cdot \mathbf{v}$ ). Hilbert space notation : $c = \langle u, v \rangle$ (or $c = (u,v)$ ). Riemannian geometry notation : $c = \eta(u,v)$ , where $\eta$ is the Euclidean metric form (also $c = u \neg (\eta \cdot v)$ or $c = \iota_u (\eta \cdot v)$ ; one can also use $\eta(-,v)$ in place of $\eta \cdot v$ . Alternative names for the Euclidean metric include $\delta$ and $g$ ). Musical notation : $c = u_\flat(v)$ (or $c = u^\flat(v)$ ). Matrix notation : $c = u^T v$ (or $c = \mathrm{tr}(vu^T)$ or $c = u^* v$ or $c = u^\dagger v$ ). Bra-ket notation : $c = \langle u| v\rangle$ . Einstein notation, I (without matching superscript/subscript requirement): $c = u_i v_i$ (or $c=u^iv^i$ , if vector components are denoted using superscripts). Einstein notation, II (with matching superscript/subscript requirement): $c = \eta_{ij} u^i v^j$ . Einstein notation, III (with matching superscript/subscript requirement and also implicit raising and lowering operators): $c = u^i v_i$ (or $c = u_i v^i$ or $c = \eta_{ij} u^i v^j$ ). Penrose abstract index notation : $c = u^\alpha v_\alpha$ (or $c = u_\alpha v^\alpha$ or $c = \eta_{\alpha \beta} u^\alpha v^\beta$ ). [In the absence of derivatives this is nearly identical to Einstein notation III, but distinctions between the two notational systems become more apparent in the presence of covariant derivatives ( $\nabla_\alpha$ in Penrose notation, or a combination of $\partial_i$ and Christoffel symbols in Einstein notation).] 
Hodge notation : $c = \mathrm{det}(u \wedge *v)$ (or $u \wedge *v = c \omega$ , with $\omega$ the volume form). [Here we are implicitly interpreting $u,v$ as covectors rather than vectors.] Geometric algebra notation : $c = \frac{1}{2} \{u,v\}$ , where $\{u,v\} := uv+vu$ is the anticommutator. Clifford algebra notation : $uv + vu = 2c1$ . Measure theory notation : $c = \int_{\{1,\dots,n\}} u(i) v(i)\ d\#(i)$ , where $d\#$ denotes counting measure. Probabilistic notation : $c = n {\mathbb E} u_{\bf i} v_{\bf i}$ , where ${\bf i}$ is drawn uniformly at random from $\{1,\dots,n\}$ . Trigonometric notation : $c = |u| |v| \cos \angle(u,v)$ . Graphical notations such as Penrose graphical notation , which would use something like $\displaystyle c =\bigcap_{u\ \ v}$ to capture this relation. etc. It is not a coincidence that there is a lot of overlap and similarity between all these notational systems; again, see desiderata 1 and 7. Each of these notations is tailored to a different mathematical domain of application. For instance: Matrix notation would be suitable for situations in which many other matrix operations and expressions are in use (e.g., the rank one operators $vu^T$ ). Riemannian or abstract index notation would be suitable in situations in which linear or nonlinear changes of variable are frequently made. Hilbert space notation would be suitable if one intends to eventually generalize one's calculations to other Hilbert spaces, including infinite dimensional ones. Euclidean notation would be suitable in contexts in which other Euclidean operations (e.g., cross product) are also in frequent use. Einstein and Penrose abstract index notations are suitable in contexts in which higher rank tensors are heavily involved. 
Einstein I is more suited for Euclidean applications or other situations in which one does not need to make heavy use of covariant operations, otherwise Einstein III or Penrose is preferable (and the latter particularly desirable if covariant derivatives are involved). Einstein II is suitable for situations in which one wishes to make the dependence on the metric explicit. Clifford algebra notation is suitable when working over fields of arbitrary characteristic, in particular if one wishes to allow characteristic 2. And so on and so forth. There is no unique "best" choice of notation to use for this concept; it depends on the intended context and application domain. For instance, matrix notation would be unsuitable if one does not want the reader to accidentally confuse the scalar product $u^T v$ with the rank one operator $vu^T$ , Hilbert space notation would be unsuitable if one frequently wished to perform coordinatewise operations (e.g., Hadamard product) on the vectors and matrices/linear transformations used in the analysis, and so forth. (See also Section 2 of Thurston's " Proof and progress in mathematics ", in which the notion of derivative is deconstructed in a fashion somewhat similar to the way the notion of inner product is here.) ADDED LATER: One should also distinguish between the "one-time costs" of a notation (e.g., the difficulty of learning the notation and avoiding standard pitfalls with that notation, or the amount of mathematical argument needed to verify that the notation is well-defined and compatible with other existing notations), and the "recurring costs" that are incurred with each use of the notation. The desiderata listed above are primarily concerned with lowering the "recurring costs", but the "one-time costs" are also a significant consideration if one is only using the mathematics from the given field $X$ on a casual basis rather than a full-time one. 
In particular, it can make sense to offer "simplified" notational systems to casual users of, say, linear algebra even if there are more "natural" notational systems (scoring more highly on the desiderata listed above) that become more desirable to switch to if one intends to use linear algebra heavily on a regular basis.
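As a quick numerical sanity check that several of the notational systems listed above all name the same scalar, one can compare a few of them directly. This is a NumPy sketch of my own (the vectors are arbitrary illustrative data, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(5)
v = rng.standard_normal(5)

pedestrian = sum(u[i] * v[i] for i in range(5))     # c = sum_i u_i v_i
euclidean  = np.dot(u, v)                           # c = u . v
hilbert    = np.inner(u, v)                         # c = <u, v>
matrix     = u.T @ v                                # c = u^T v
trace_form = np.trace(np.outer(v, u))               # c = tr(v u^T)
cos_theta  = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
trig       = np.linalg.norm(u) * np.linalg.norm(v) * cos_theta  # |u||v| cos(angle)

# All of these are the same number, up to floating-point error.
assert np.allclose([euclidean, hilbert, matrix, trace_form, trig], pedestrian)
```

Of course, the point of the answer is not that these expressions compute different things, but that each one composes differently with the rest of its notational system.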
|
{
"source": [
"https://mathoverflow.net/questions/366070",
"https://mathoverflow.net",
"https://mathoverflow.net/users/120345/"
]
}
|
366,097 |
I'm trying to analytically find the following expectation $$\mathbb{E}\left[ a \mathcal{Q} \left( \sqrt{b } \gamma \right) \right],$$ where $a$ and $b$ are constant values, $\mathcal{Q}$ is the Gaussian Q-function, which is defined as $\mathcal{Q}(x) = \frac{1}{\sqrt{2 \pi}}\int_{x}^{\infty} e^{-u^2/2}du$ and $\gamma$ is a random variable with Gamma distribution, i.e., $f_{\gamma}(y) = \frac{1}{\Gamma(\kappa)\theta^{\kappa}} y^{\kappa-1} e^{-y/\theta} $ . By using Mathematica, I've found the following solution: $$\mathbb{E}\left[ a \mathcal{Q} \left( \sqrt{b } \gamma \right) \right] = a 2^{-\frac{\kappa }{2}-3} b^{-\frac{\kappa }{2}-\frac{1}{2}} \theta ^{-\kappa -1} \left(2 \sqrt{2} \sqrt{b} \theta \, _2\tilde{F}_2\left(\frac{\kappa +1}{2},\frac{\kappa }{2};\frac{1}{2},\frac{\kappa +2}{2};\frac{1}{2 b \theta ^2}\right)-\kappa \, _2\tilde{F}_2\left(\frac{\kappa +1}{2},\frac{\kappa +2}{2};\frac{3}{2},\frac{\kappa +3}{2};\frac{1}{2 b \theta ^2}\right)\right),$$ however, I'd like to know the steps to find this solution or to find another one.
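Independently of the hypergeometric closed form, any candidate answer can be sanity-checked numerically by integrating $a\,\mathcal{Q}(\sqrt{b}\,y)$ against the Gamma density and cross-checking with Monte Carlo. A SciPy sketch (the parameter values $a, b, \kappa, \theta$ below are illustrative assumptions, not from the question):

```python
import numpy as np
from scipy import integrate, special, stats

a, b = 2.0, 0.5          # illustrative constants
kappa, theta = 3.0, 1.5  # illustrative Gamma shape and scale

def Q(x):
    # Gaussian Q-function: Q(x) = P(Z > x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * special.erfc(x / np.sqrt(2.0))

# E[a Q(sqrt(b) * gamma)] by quadrature against the Gamma(kappa, theta) pdf.
integrand = lambda y: a * Q(np.sqrt(b) * y) * stats.gamma.pdf(y, kappa, scale=theta)
e_quad, _ = integrate.quad(integrand, 0.0, np.inf)

# Cross-check by Monte Carlo.
rng = np.random.default_rng(0)
e_mc = np.mean(a * Q(np.sqrt(b) * rng.gamma(kappa, theta, size=200_000)))

print(e_quad, e_mc)  # the two estimates should agree closely
```

Evaluating the Mathematica expression (e.g. with mpmath's regularized $_2F_2$) and comparing it against `e_quad` for a few parameter sets is a cheap way to catch transcription errors before attempting a derivation.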
|
|
{
"source": [
"https://mathoverflow.net/questions/366097",
"https://mathoverflow.net",
"https://mathoverflow.net/users/103291/"
]
}
|
366,310 |
One of the major results in graph theory is the graph structure theorem of Robertson and Seymour https://en.wikipedia.org/wiki/Graph_structure_theorem . It gives a deep and fundamental connection between the theory of graph minors and topological embeddings, and is frequently applied in algorithms. I have been working with these results for years, and recently heard someone say that Robertson and Seymour called this result a "red herring". Is this true, and why would they possibly call such a breakthrough result a red herring? (Edit: my question refers to the 2003 graph structure theorem, not to the graph minor theorem which they established later)
|
Seymour and Robertson have indeed said that, and in fact they wrote it in the 2003 article in which they published the graph structure theorem. Here is the quote from Robertson and Seymour, "Graph Minors. XVI. Excluding a non-planar graph" (Journal of Combinatorial Theory, Series B, Vol. 89, Issue 1, Sept. 2003, pages 43–76, doi: 10.1016/S0095-8956(03)00042-X ).
Their theorem 1.3 is the earliest version of what we now call the graph structure theorem. While 1.3 has been one of the main goals of this series of papers, it turns out to have been a red herring. There is another result (theorem 3.1) which is proved in this paper, and from which 1.3 is then derived; and in all the future applications in this series of papers, it is not 1.3 but 3.1 that will be needed. My reading is, they were calling it a red herring because, at this point, they realized the importance of the concept of a tangle . (I think Reinhard Diestel said that the notion of a tangle is perhaps the deepest single innovation for graph theory stemming from this proof.) Continuing the quote of Robertson and Seymour (the bold font is from me, not from the article): Let us explain how theorem 3.1 is used to prove 1.3. Evidently we would like to eliminate the "tree-structure" part of 1.3 and concentrate on the internal structure of one of the "nodes" of the tree. How can we do so? An inductive argument looks plausible at first sight; if there is no low order cutset of G dividing it into two substantial pieces then G itself must be almost a "node" if the theorem is to be true, while if there is such a cutset we may express G as a clique-sum of two smaller graphs, and hope to apply our inductive hypothesis to these graphs. But there is a difficulty here; it is possible that these smaller graphs have an L-minor while G does not. Fortunately there is a way to focus in on a "node" which does not involve any decomposing, as follows. We can assume that the tree is as refined as possible in the sense that no node can be split into two smaller nodes, and so for every low order cutset of G, most of any node will lie on one side or the other of the cutset (except for nodes of bounded cardinality, which we can ignore). 
Therefore if we fix some node, every small cutset has a "big" side (containing most of the node) and a "small" side, and it turns out that no three small sides have union G. Thus a node defines a "tangle" , which is such an assignment of big and small sides to the low order cutsets; and conversely, it can be shown that any tangle in G of sufficiently high "order" will be associated with some node of the tree-structure. Hence a convenient way to analyze the internal structure of the nodes is to analyze the local structure of G with respect to some high order tangle , and this is the content of theorem 3.1.
|
{
"source": [
"https://mathoverflow.net/questions/366310",
"https://mathoverflow.net",
"https://mathoverflow.net/users/161328/"
]
}
|
366,312 |
Consider coloring the edges of a complete graph of even order. This can be seen as completing an order-$n$ symmetric Latin square, except for the leading diagonal. My question pertains to whether we can always complete the edge coloring in $n-1$ colors given a certain set of colors. The number of colors I fix is exactly equal to $\frac{k(k+2)}{2}$, where $k=\frac{n}{2}$, and the fixed colors form the last four consecutive subdiagonals (and, by symmetry, superdiagonals) in the partial Latin square. For example, in the case of $K_8$, I fix the following colors: \begin{bmatrix}X&&&&1&3&7&4\\&X&&&&2&4&1\\&&X&&&&3&5\\&&&X&&&&6\\1&&&&X&&&\\3&2&&&&X&&\\7&4&3&&&&X&\\4&1&5&6&&&&X\end{bmatrix} A completion to a proper edge coloring in this case would be: \begin{bmatrix}X&5&6&2&1&3&7&4\\5&X&7&3&6&2&4&1\\6&7&X&4&2&1&3&5\\2&3&4&X&7&5&1&6\\1&6&2&7&X&4&5&3\\3&2&1&5&4&X&6&7\\7&4&3&1&5&6&X&2\\4&1&5&6&3&7&2&X\end{bmatrix} Can the above always be done if the colors I fix follow the same pattern for all even-order complete graphs? Note that the pattern followed in the precoloring consists of two portions: i) the last $k-1$ subdiagonals are taken from a canonical $(n-1)$-edge coloring of the complete graph on $n-1$ vertices, where $n$ is even. By canonical, I mean the commutative idempotent 'anti-circulant' Latin square. For instance, the canonical coloring of the complete graph on $7$ vertices is \begin{bmatrix}1&5&2&6&3&7&4\\5&2&6&3&7&4&1\\2&6&3&7&4&1&5\\6&3&7&4&1&5&2\\3&7&4&1&5&2&6\\7&4&1&5&2&6&3\\4&1&5&2&6&3&7\end{bmatrix} ii) The $k$-th subdiagonal consists of entries in the pattern $1, 2, 3, \dots$, taking into account the previous entries to create an appropriate entry. In the example above, the last diagonal I took was $1-2-3-6$; it could also have been $1-2-3-7$. And, if the completion exists, would it be unique? Any hints? Thanks beforehand.
|
|
{
"source": [
"https://mathoverflow.net/questions/366312",
"https://mathoverflow.net",
"https://mathoverflow.net/users/100231/"
]
}
|
366,765 |
QUICK FINAL UPDATE : Just wanted to thank you MO users for all your support. Special thanks for the fast answers, I've accepted first one, appreciated the clarity it gave me. I've updated my torus algorithm with ${\rm cr}(G)$ . Works fine on my full test set, i.e. evidence for ${\rm cr}(G)={\rm pcr}(G)$ on torus. More on this later, will test sharper bound from last answer as well. I'm going to submit in time! Thanks again MO users for all your help! Original post: I apologize if „crisis“ is too strong a word, but I am in a mode of panic, if that's the right word: In two weeks, I should be submitting my Ph.D. Thesis, but I have just received bad news, or I should say information that makes me very concerned. It is really an emergency situation: My thesis is in computer science, algorithms related to graph drawings on the sphere and the torus. One of the cornerstone mathematical results I am relying on is the graph edge crossing lemma (or edge crossing inequality). It gives a lower bound for the minimum number of edge crossings ${\rm cr}(G)$ for any drawing of the graph $G$ with $n$ vertices and $e$ edges $${\rm cr}(G)\geq \frac{e^3}{64n^2}$$ for $e>4n$ . PROBLEM: I am reading in the article of Pach and Tóth that there is a possibility that mathematics papers on crossing numbers operate with different definitions. There is the crossing number ${\rm cr}(G)$ (minimum of edge crossings in a drawing of $G$ ), but also the pair crossing number ${\rm pcr}(G)$ , the minimum number of edge pairs crossing in a drawing of $G$ . I double-checked my algorithms and, based on this definition, I clearly apply the pair crossing number ${\rm pcr}(G)$ CRITICAL QUESTION: Can you confirm to me that the edge crossing lemma remains valid on the sphere and the torus also for the pair crossing number ${\rm pcr}(G)$ ? Reference: János Pach and Géza Tóth. Which crossing number is it anyway? J. Combin. Theory Ser. B, 80(2): 225–246, 2000. 
And Wikipedia article as a starting point https://en.wikipedia.org/wiki/Crossing_number_inequality
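As a quick numerical sketch (my own illustration, not part of the question), the crossing lemma's lower bound $e^3/(64n^2)$ for $e>4n$ can be evaluated directly; the function name and interface below are hypothetical:

```python
# Hypothetical helper illustrating the crossing lemma: cr(G) >= e^3 / (64 n^2)
# whenever e > 4n; outside that range the lemma gives no information.

def crossing_lower_bound(n: int, e: int) -> float:
    """Lower bound on the crossing number of a graph with n vertices, e edges."""
    if e <= 4 * n:
        return 0.0  # the lemma only applies when e > 4n
    return e ** 3 / (64 * n ** 2)

# Example: K_10 has n = 10, e = 45, so e > 4n and the bound is
# 45^3 / 6400 ~ 14.2 (the true crossing number of K_10 is 60).
print(crossing_lower_bound(10, 45))
```

The bound is far from tight for small graphs, but its cubic growth in $e$ is what makes it useful in algorithmic applications like the one described above.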
|
$\DeclareMathOperator\cr{cr}\DeclareMathOperator\pcr{pcr}$ For the pair crossing number $\pcr(G)$, the short answer is: yes, the crossing lemma holds for drawings on the sphere, but it is not known whether it also holds on the torus. The best and most current reference for you could be the survey article by Schaefer, updated in February 2020: “The Graph Crossing Number and its Variants: A Survey” from the Electronic Journal of Combinatorics
( https://doi.org/10.37236/2713 ). The relevant pages for you are pages 5 and 6 with the following quote from Schaefer: “Since the Hanani–Tutte theorem is not known to be true for the torus, this means that we do not currently have a proof of the crossing lemma for $\pcr$ or $\pcr_−$ on the torus.” Generally, $\pcr(G)\leq \cr(G)$ . It is still an open problem whether they are equal or not. The first proofs of the crossing lemma did not make the distinction. The first one to raise the ambiguity was Mohar (1995) in a conference talk. The Pach and Tóth (2000) paper that you mention does make the distinction between $\pcr(G)$ and $\cr(G)$ , and applies Hanani–Tutte in the proof of the crossing lemma, which ensures that it also holds for $\pcr(G)$ . The issue is that you can apply Hanani–Tutte for the sphere (and the projective plane), but you cannot apply it for the torus. For surfaces of genus $\geq4$ it is known to be false, see Fulek and Kynčl (2019). This means the torus is really “in-between”. Edit: Adding the references Bojan Mohar (1995): Problem mentioned at the special session on Topological Graph Theory, Mathfest, Burlington, Vermont. (cited from: L.A. Székely (2016): Turán’s Brick Factory Problem: The Status of the Conjectures of Zarankiewicz and Hill . In: R. Gera et al. (eds.)(2016): Graph Theory—favorite conjectures and open problems. 1.) Hanani–Tutte Theorem https://en.wikipedia.org/wiki/Hanani%E2%80%93Tutte_theorem Radoslav Fulek and Jan Kynčl (2019): Counterexample to an Extension of the Hanani–Tutte Theorem on the Surface of Genus 4 . Combinatorica, 39(6):1267–1279
|
{
"source": [
"https://mathoverflow.net/questions/366765",
"https://mathoverflow.net",
"https://mathoverflow.net/users/161819/"
]
}
|
368,213 |
The Navier-Stokes equations are as follows, $$\dot{u}+(u\cdot \nabla ) u -\nu \nabla^2 u =-\nabla p$$ where $u$ is the velocity field, $\nu$ is the viscosity, and $p$ is the pressure. Some elementary manipulations show that if you zoom in by a factor of $\lambda$, then you expect viscosity to scale as $\lambda^{\frac{3}{2}}$. So, for example, if you zoom in to the length scale of a cell, you expect viscosity to be around a million times larger than humans experience it. This is not observed, however, which makes sense since we expect the components of a cell to move around extremely quickly. (EDIT: this is observed - see answer - my initial google searches were untrustworthy, damn google). Nonetheless, the calculation above suggests that they feel like they are moving through one of the most viscous fluids imaginable. What then is the mechanism that prevents this? I have seen some explanations through the ideas of 'microviscosity' and 'macroviscosity' in the physics community, but I couldn't find much of a theoretical backing for them. I'm wondering if there is a more mathematical explanation, perhaps directly from the Navier-Stokes equation itself (seems unlikely), or something from a kinetic theory point of view? For example some kind of statistical model of water molecules that reproduces the result?
|
There is a beautiful article (a write-up of a talk, actually), by E.M. Purcell, Life at low Reynolds number , that explains how bacteria swim. Low Reynolds number is the technical way to phrase the statement in the OP that motion at that scale feels like moving in a tar pit. The governing equation is the linearized Navier-Stokes equation, a.k.a. the Stokes equation, which lacks the inertial $v\nabla v$ term. The linearity of the Stokes equation means that the swimming technique which we would use, moving arms or legs back and forth, will not work. Purcell calls this the "scallop theorem": opening and closing the shells of a scallop will just move the object back and forth, without net forward motion. Inertia can still play a role on short time scales, as explained in Emergency cell swimming. The way bacteria move in the absence of inertia is the way a corkscrew enters a material upon turning, the corkscrew being the flagellum. In fact, any nonsymmetrical object, when turned, will propagate in a tar pit. Typical velocities are $1$ mm/min; as Purcell says: "Motion at low Reynolds number is very majestic, slow, and regular." Here is a visualization of a sperm cell moving by rotating its flagellum (published just this week in Science Advances ). Note that the rotation is only clearly visible in three dimensions. Two-dimensional projections suggest a beating motion (first reported by Van Leeuwenhoek in the 17th century), which is not an effective means of propagation at low Reynolds number.
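To make "low Reynolds number" concrete, here is a back-of-envelope estimate (my own numbers, chosen as typical order-of-magnitude values, not taken from the answer) of $\mathrm{Re} = vL/\nu$ for a bacterium versus a human swimmer in water:

```python
# Order-of-magnitude Reynolds numbers; all values are rough assumptions.
v_bacterium = 30e-6   # swimming speed, m/s (~30 micrometers per second)
L_bacterium = 2e-6    # body length, m (~2 micrometers)
nu_water = 1e-6       # kinematic viscosity of water, m^2/s

Re_bacterium = v_bacterium * L_bacterium / nu_water

v_human, L_human = 1.0, 1.7  # m/s and m, a swimming human for comparison
Re_human = v_human * L_human / nu_water

print(Re_bacterium)  # ~6e-5: inertia is utterly negligible
print(Re_human)      # ~1.7e6: inertia dominates
```

The eleven orders of magnitude between the two regimes are exactly why intuition from human swimming fails for cells.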
|
{
"source": [
"https://mathoverflow.net/questions/368213",
"https://mathoverflow.net",
"https://mathoverflow.net/users/161947/"
]
}
|
368,373 |
I have a matrix $$ A= \begin{pmatrix} 0 & a & d & c\\ \bar a & 0 & b & d \\ \bar d & \bar b & 0 & a \\ \bar c & \bar d & \bar a & 0 \end{pmatrix} $$ As you can see, the matrix is always self-adjoint for any $a, b, c, d \in \mathbb C$ . But it has a funny property (that I found by playing with some numbers): If $a,b,c$ are arbitrary real numbers and also $d$ is real, then the spectrum of $A$ is in general not symmetric with respect to zero. To illustrate this, we take $d := 2$ , $a := 5$ , $b := 3$ , $c := 4$ then the eigenvalues are $$\sigma(A):=\{10.5178, -6.54138, -3.51783, -0.458619\}$$ But once I take $d \in i \mathbb R$ , the spectrum becomes immediately symmetric. In fact, $d := 2 i$ , $a := 5$ , $b := 3$ , $c := 4$ leads to eigenvalues $$\sigma(A)=\{-9.05607, 9.05607, -0.993809, 0.993809\}$$ Is there any particular symmetry that only exists for $d \in i\mathbb R$ that implies this nice inflection symmetry? I am less interested in a brute-force computation of the spectrum than of an explanation of what symmetry causes the inflection symmetry.
|
For real $a,b,c$ and imaginary $d$ the matrix $A$ has chiral symmetry , meaning it anticommutes with a matrix $X$ that squares to the identity: $$X=\left(
\begin{array}{cccc}
0 & 0 & 0 & -i \\
0 & 0 & i & 0 \\
0 & -i & 0 & 0 \\
i & 0 & 0 & 0 \\
\end{array}
\right),\;\;XA+AX=0,\;\;X^2=I.$$ Hence the spectrum of $A$ has $\pm$ symmetry: $$\det (\lambda-A)=\det(\lambda X^2-XAX)=\det(\lambda+X^2A)=\det(\lambda+A),$$ so if $\lambda$ is an eigenvalue then also $-\lambda$ .
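The anticommutation relation can be checked numerically in a few lines of plain Python (no external libraries); the sample values of $a,b,c,d$ below are my own arbitrary test choices, matching the constraint that $a,b,c$ are real and $d$ is purely imaginary:

```python
# Verify X^2 = I and XA + AX = 0 for real a, b, c and purely imaginary d.

def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

a, b, c = 5.0, 3.0, 4.0   # arbitrary reals
d = 2j                     # purely imaginary

A = [[0, a, d, c],
     [a, 0, b, d],
     [d.conjugate(), b, 0, a],
     [c, d.conjugate(), a, 0]]

X = [[0, 0, 0, -1j],
     [0, 0, 1j, 0],
     [0, -1j, 0, 0],
     [1j, 0, 0, 0]]

X2 = matmul(X, X)
XA, AX = matmul(X, A), matmul(A, X)

assert all(abs(X2[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(4) for j in range(4))
assert all(abs(XA[i][j] + AX[i][j]) < 1e-12
           for i in range(4) for j in range(4))
print("chiral symmetry verified: X^2 = I and XA + AX = 0")
```

Working the entries out by hand, every entry of $XA+AX$ is a multiple of $d+\bar d$, which vanishes exactly when $d$ is purely imaginary; with real $d$ the anticommutation, and hence the spectral symmetry, fails.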
|
{
"source": [
"https://mathoverflow.net/questions/368373",
"https://mathoverflow.net",
"https://mathoverflow.net/users/119875/"
]
}
|
368,463 |
(I am posting this in my capacity as chair of the ICM programme committee.) ICM 2022 will feature a number of "special lectures", both at the sectional and plenary level, see last year's report of the ICM structure committee . The idea is that these are lectures that differ from the traditional ICM format (author of a recent breakthrough result talking about their work). Some possibilities are a Bourbaki-style lecture where a recent breakthrough result (or series of results) is put into a broader context, a "double act" where related results are presented by two speakers, a survey lecture on a subfield relevant for some recent development, a lecture that doesn't fit into any of the existing sections, a lecture creating new connections between different areas of mathematics, but these are not meant to be exhaustive in any way. So what special lecture(s) would you like to see at the next ICM? (Unless it is self-evident, please state what makes the lecture you would like to see "special". If you would like to nominate someone for an "ordinary" plenary lecture instead, please do so by sending me an email.)
|
How about a lecture on proof assistants/formal proofs? Most mathematicians are still skeptical of the value of proof assistants, and it's certainly true that proof assistants are still very difficult for the average mathematician to use. However, I think that much of the skepticism stems from a lack of understanding of what proof assistants have to offer. A popular misconception is that proof assistants just give you a laborious way of increasing your certainty of the correctness of a proof from 99% to 99.9999%. But that's not where their primary value lies, IMO. For example, having a large body of formalized mathematics available could help machine learning algorithms figure out what constitutes "interesting" mathematics and help them autonomously discover interesting new definitions and concepts—something that seems beyond what computers can do now. For another example, there are increasingly many cases where editors can't find a referee for a complicated and potentially important paper because the referees are skeptical and don't want to waste time studying something that might be wrong. If proof assistants become sufficiently easy to use that authors are routinely required to formally verify their proofs before submission, then referees can focus on the more rewarding work of assessing whether a result is interesting and important instead of spending the bulk of their time checking correctness. A good lecture on this topic could give the subject a valuable boost. Incidentally, if you want to poll people to assess interest, I would recommend polling younger people. This is one topic where I would value the opinion of younger mathematicians and students more than the opinion of senior mathematicians. EDIT: Kevin Buzzard ended up giving precisely such a talk at ICM 2022: The rise of formalism in mathematics .
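For readers who have never seen a machine-checked proof, here is a minimal illustration (my own, not from the answer) of what the genre looks like in Lean 4; both statements are checked by the kernel, so a referee would not need to re-verify them by hand:

```lean
-- A toy formally verified statement in Lean 4 (core library only):
-- commutativity of addition on the natural numbers, via the library lemma.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A statement that holds by definitional unfolding: Nat.add recurses on
-- its second argument, so n + 0 reduces to n and `rfl` closes the goal.
theorem add_zero' (n : Nat) : n + 0 = n := rfl
```

Real formalization efforts (mathlib, the Liquid Tensor Experiment) scale this idea up to research-level mathematics, which is precisely what Buzzard's ICM talk surveys.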
|
{
"source": [
"https://mathoverflow.net/questions/368463",
"https://mathoverflow.net",
"https://mathoverflow.net/users/38566/"
]
}
|
368,515 |
I am currently reading Kervaire-Milnor's paper "Groups of Homotopy Spheres I", Annals of Mathematics , and I am trying to prove (or disprove) the following result. The more elementary the proof, the better. If two smooth manifolds are homeomorphic, then their stable tangent
bundles (i.e. the Whitney sum of the tangent bundle with the trivial
line bundle) are vector bundle isomorphic. I am trying to prove this as an intermediate step to give an alternative proof for KM's Theorem 3.1: Every homotopy sphere is $s$ -parallelizable.
|
The result you are hoping for is in fact false. In section 9 of Microbundles: Part I , Milnor constructs an open set $U \subset \mathbb{R}^m$ . With its standard smooth structure, the (stable) tangent bundle of $U\times\mathbb{R}^k \subset \mathbb{R}^{m+k}$ is trivial, while in Corollary 9.3, Milnor shows that it admits a smooth structure for which the tangent bundle has a non-zero Pontryagin class. As Pontryagin classes are stable, the stable tangent bundle of the latter manifold is not trivial, and hence not isomorphic to the stable tangent bundle of $U\times\mathbb{R}^k$ with its standard smooth structure. Milnor, John W. , Microbundles , Topology 3, Suppl. 1, 53-80 (1964). ZBL0124.38404 .
|
{
"source": [
"https://mathoverflow.net/questions/368515",
"https://mathoverflow.net",
"https://mathoverflow.net/users/152049/"
]
}
|
368,716 |
Let $Q\in \mathbb{Z}[x]$ be a polynomial defining an injective function $\mathbb{Z}\to\mathbb{Z}$ . Does it define an injective function $\mathbb{Z}/p\mathbb{Z}\to\mathbb{Z}/p\mathbb{Z}$ for some prime $p$ ?
|
Consider $Q(x)=x(2x-1)(3x-1)$ . This gives an injective map $\mathbb Z\to \mathbb Z$ , because $n<m \implies Q(n)<Q(m)$ . However, this $Q$ is not injective over $\mathbb Z/p\mathbb Z$ for any $p$ because $Q(x)=0$ has three solutions when $p\geq 5$ and two solutions when $p\in \{2,3\}$ .
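The root counts claimed above are easy to confirm by brute force; the following short check (my own illustration) counts the roots of $Q$ modulo each small prime:

```python
# Count roots of Q(x) = x(2x-1)(3x-1) modulo p: more than one root
# means Q is not injective on Z/pZ.

def Q(x):
    return x * (2 * x - 1) * (3 * x - 1)

def roots_mod(p):
    return [x for x in range(p) if Q(x) % p == 0]

for p in [2, 3, 5, 7, 11, 13]:
    r = len(roots_mod(p))
    assert r >= 2, p          # never injective
    print(p, r)               # 2 roots for p in {2, 3}, 3 roots for p >= 5
```

For $p\geq5$ the three roots $0$, $2^{-1}$, $3^{-1}$ are distinct because $2^{-1}=3^{-1}$ would force $p\mid(3-2)$; for $p\in\{2,3\}$ one factor becomes a unit and two roots remain.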
|
{
"source": [
"https://mathoverflow.net/questions/368716",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
368,957 |
Let $A(X) := \sum_{1 \leq i,j \leq X} \frac{1}{\mathrm{lcm}(i,j)}$. My guess is that there exists a constant $C$ such that $A(X) \sim C (\log X)^2$.
|
$$ \sum_{1 \leq i,j \leq X} \frac{1}{\mathrm{lcm}(i,j)} = \sum_{1 \leq i,j \leq X} \frac{\mathrm{gcd}(i,j)}{ij} $$ $$ = \sum_{1 \leq i,j \leq X} \frac{\sum_{d|i,j} \phi(d)}{ij}$$ $$ = \sum_{d \leq X} \phi(d) \sum_{1 \leq i,j \leq X: d|i,j} \frac{1}{ij}$$ $$ = \sum_{d \leq X} \frac{\phi(d)}{d^2} \sum_{1 \leq i',j' \leq X/d} \frac{1}{i'j'}$$ $$ = \sum_{d \leq X} \frac{\phi(d)}{d^2} ( \sum_{1 \leq i \leq X/d} \frac{1}{i})^2$$ $$ = \sum_{d \leq X} \frac{\phi(d)}{d^2} ( \log(X/d) + O(1))^2$$ $$ = \sum_{d \leq X} \frac{\phi(d)}{d^2} ( \log^2(X) - 2 \log(d) \log(X) + \log^2(d) + O( \log X ) )$$ $$ = A_0 \log^2 X - 2 A_1 \log X + A_2 + O( \log^2 X )$$ where $$ A_j := \sum_{d \leq X} \frac{\phi(d) \log^j d}{d^2}.$$ One can compute asymptotics for the $A_j$ by Perron's formula, but we proceed instead by elementary means. Since $\phi(d) = \sum_{d=ab} \mu(a) b$ we have $$ A_j = \sum_{ab \leq X} \frac{\mu(a) b \log^j(ab)}{a^2b^2}$$ $$ = \sum_{a \leq X} \frac{\mu(a)}{a^2} (\frac{\log^{j+1}(X)}{j+1} + O( (1 + \log^j(a)) \log^j X ) )$$ $$ = \frac{1}{j+1} \log^{j+1} X \sum_{a \leq X} \frac{\mu(a)}{a^2} + O(\log^j X)$$ $$ = \frac{1}{(j+1)\zeta(2)} \log^{j+1} X + O(\log^j X).$$ Hence $$ \sum_{1 \leq i,j \leq X} \frac{1}{\mathrm{lcm}(i,j)} = \frac{1}{3 \zeta(2)} \log^3 X + O(\log^2 X).$$
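The first step of the computation, $\sum 1/\mathrm{lcm}(i,j) = \sum \gcd(i,j)/(ij)$, is an exact identity (since $\mathrm{lcm}(i,j)\gcd(i,j)=ij$) and can be sanity-checked numerically, along with a rough comparison to the asymptotic; this check is my own illustration, not part of the answer (it uses `math.lcm`, available from Python 3.9):

```python
# Check the identity sum 1/lcm(i,j) = sum gcd(i,j)/(i*j) and compare the sum
# with the leading asymptotic term log(X)^3 / (3*zeta(2)).
from math import gcd, lcm, log, pi

def S(X):
    return sum(1 / lcm(i, j) for i in range(1, X + 1) for j in range(1, X + 1))

def S_via_gcd(X):
    return sum(gcd(i, j) / (i * j) for i in range(1, X + 1) for j in range(1, X + 1))

X = 200
zeta2 = pi ** 2 / 6
assert abs(S(X) - S_via_gcd(X)) < 1e-9   # exact identity, up to float rounding
print(S(X), log(X) ** 3 / (3 * zeta2))   # same order; the error is O(log^2 X)
```

Agreement with the main term is only rough at such small $X$, as expected, since the $O(\log^2 X)$ error is not much smaller than $\log^3 X$ there.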
|
{
"source": [
"https://mathoverflow.net/questions/368957",
"https://mathoverflow.net",
"https://mathoverflow.net/users/163643/"
]
}
|
368,963 |
Recently I saw an MO post Algebraic graph invariant $\mu(G)$ which links Four-Color-Theorem with Schrödinger operators: further topological characterizations of graphs? that got me interested. It is about a graph parameter that is derived from the Laplacian of a graph. Its origins are in spectral operator theory, but it is quite strong in characterizing important properties of graphs. So I was quite fascinated by the link it creates between different branches of mathematics. I went through other posts on MO that discuss this topic as well, and in the meantime I read a few linked articles that work with the graph Laplacian. I understand that they view an (undirected) graph as a metric graph embedded in a surface, and the metric on the graph is approximated by Riemannian metrics which give the edge distance along the edges, and which is close to zero everywhere else on the surface. The eigenvalues of the surface Laplacian approximate the eigenvalues of the graph Laplacian, and a lot of surprisingly useful conclusions follow, about connectivity and embeddability of the graph, and even about minor-monotonicity. I have gained a technical understanding of what is happening and how these eigenvalues (and their multiplicity) are determined, using the graph Laplacian. I also have a basic understanding of the role of a Laplacian in differential geometry, like the Laplacian of a function $f$ at a point $x$ measures by how much the average value of $f$ over small spheres around $x$ deviates from $f(x)$ , or I think of it to represent the flux density of the gradient flow of $f$ . But I am failing to gain or develop such an intuition for the graph Laplacian. Conceptually or intuitively, what does a graph Laplacian represent? I am trying to understand, how can it be so powerful when applied to graphs? (I am aware that the graph Laplacian can be defined using the graph adjacency matrix, but I was unable to link this with my differential geometry intuition)
|
How to understand the Graph Laplacian (3-step recipe for the impatient): 1. Read the answer here by Muni Pydi. This is essentially a concentrate of a comprehensive article, which is very nice and well-written ( see here ). 2. Work through Muni's example. In particular, temporarily forget about the adjacency matrix and use the incidence matrix instead. Why? Because the incidence matrix shows the relation between nodes and edges, and that in turn can be reinterpreted as a coupling between vectors (the values at the nodes) and dual vectors (the values at the edges). See point 3 below. 3. Now, after 1 and 2, think of this: you know the Laplacian in $\mathbb{R}^n$ or more generally in differential geometry. The first step is to discretize: think of laying a regular grid on your manifold and discretize all operations ( derivatives become differences between adjacent points ). Now you are already in the realm of graph Laplacians. But not quite: the grid is a very special type of graph, for instance the degree of a node is always the same. So you need to generalize a notch further: forget the underlying manifold, and DEFINE THE DERIVATIVES and the LAPLACIAN directly on the Graph. If you do the above, you will see that the Laplacian on the Graph is just what you imagine it to be, the Divergence of the Gradient . Except that here the Gradient maps functions on the nodes to functions on the edges (via the discrete derivative , where every edge is a direction..) and the divergence maps the gradient back into a function on the nodes: the one which measures the value at a node with respect to its neighbors. So, nodes-edges-nodes, that is the way (that is why I said focus on the incidence matrix). Hope it helps
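The "divergence of the gradient" picture can be made concrete in a few lines: with an oriented incidence matrix $B$ (one column per edge), the Laplacian is $L = BB^{\mathsf T}$, which coincides with degree-minus-adjacency. The small example graph below is my own illustration:

```python
# L = B B^T (divergence of the gradient) equals D - A (degree minus adjacency),
# demonstrated on a small 4-vertex graph in plain Python.

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # example graph: a triangle plus a pendant
n = 4

# oriented incidence matrix: column e has +1 at one endpoint, -1 at the other
B = [[0] * len(edges) for _ in range(n)]
for e, (u, v) in enumerate(edges):
    B[u][e] = 1
    B[v][e] = -1

# L = B B^T: each (i, j) entry sums B[i][e] * B[j][e] over edges e
L = [[sum(B[i][e] * B[j][e] for e in range(len(edges))) for j in range(n)]
     for i in range(n)]

deg = [sum(1 for u, v in edges if i in (u, v)) for i in range(n)]
A = [[sum(1 for e in edges if set(e) == {i, j}) for j in range(n)] for i in range(n)]
D_minus_A = [[deg[i] if i == j else -A[i][j] for j in range(n)] for i in range(n)]

assert L == D_minus_A
print(L)  # rows sum to zero: constants are in the kernel, as for the usual Laplacian
```

Note that $L$ does not depend on the chosen edge orientations, which is why the undirected graph has a well-defined Laplacian.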
|
{
"source": [
"https://mathoverflow.net/questions/368963",
"https://mathoverflow.net",
"https://mathoverflow.net/users/161328/"
]
}
|
368,976 |
If $K$ is a finitely generated field extension of $k$ , then there exists an irreducible affine $k$ -variety with function field $K$ . The idea is that if $x_1, \dots, x_n$ are generators of $K$ under $k$ , i.e each elements of $K$ is a rational function in $x_1, \dots , x_n$ , then the kernel of the map $k[t_1,\dots, t_n]\to K$ is a prime ideal and the induced map between their field fractions is an isomorphism: $(k[t_1,\dots, t_n]/I)_0\cong K$ This means $Z(I)\subseteq k^n$ is the affine irreducible variety which field fraction corresponds to $K$ . Now I have the following problem: In this case I have $k$ equal to the function field of $\mathbb{P}^2$ , and $K$ equal to the finite extension $k((\frac{l_2}{l_1})^{\frac{1}{n}},\dots, , (\frac{l_k}{l_1})^{\frac{1}{n}})$ . In the paper the author tells us $K$ determine an algebraic (affine?) surface $X$ with normal singularities and a natural map $\pi: X\to \mathbb{P}^2$ . I don't understand how to define this natural map $\pi$ and what is exactly this surface $X$ . I think that $K$ determine an affine variety up to birational morphisms and so I don't understand how to define exactly $X$ . Can you give me an example for $n=2$ and $k=3$ , please?
|
|
{
"source": [
"https://mathoverflow.net/questions/368976",
"https://mathoverflow.net",
"https://mathoverflow.net/users/158821/"
]
}
|
369,179 |
I am looking for ideas that began as small and maybe naïve or weak in some obscure and not very known paper, school or book but at some point in history turned into big powerful tools in research opening new paths or suggesting new ways of thinking maybe somewhere else. I would like to find examples (with early references of first appearances if possible or available) of really big and powerful ideas nowadays that began in some obscure or small paper maybe in a really innocent way. What I am pursuing with this question is to fix here some examples showing how Mathematics behave like an enormous resonance chamber of ideas where one really small idea in a maybe very far topic can end being a powerful engine after some iterations maybe in a field completely different. I think that this happens much more in mathematics than in other disciplines due to the highly coherent connectedness of our field in comparison with others and it is great that Mathematics in this way give a chance to almost every reasonable idea after maybe some initial time required to mature it in the minds, hands and papers of the correct mathematicians (who do not necessarily have to be the same that first found that idea). Summarizing, I am looking for ideas, concepts, objects, results (theorems), definitions, proofs or ways of thinking in general that appeared earlier in history (it does not have to be very early but just before the correct way of using the idea came to us) as something very obscure and not looking very useful and that then, after some undetermined amount of time, became a really powerful and deep tool opening new borders and frontiers in some (maybe other) part of the vast landscape of mathematics. Edit: I really do not understand the aim in closing this question as it is actually at research level. I am clearly asking for tools that developed into modern research topics. 
I recognize that some answers are not research level answers, but then you should downvote the answer, not the question. I am really surprised by this decision as one of the persons that vote to close suggested it for publication in a place where it is clear that some of the most valuable answers that this question has received would have never occur precisely because the site that this person suggested is not research oriented. I do not imagine people on HSM answering about species or pointfree topology sincerely as these topics are really current research and not history (and I am interested mainly in current research topics). I do not agree with the fact that a limitation in reading understanding of some people can be enough to close a legitimate question, a question that it is worth for us as mathematicians to do and to show to other people that think that mathematics is useful and powerful the day after being published ignoring thus the true way mathematics is done, with its turnabouts and surprises; a discipline where a simple idea has the power to change the field as $0$ did, as the positional systems did, as sheaves did, or as species did. I am really sad for this decision. It is a pity that so many mathematicians regret the actual way in which their field develops, reject to explain and expose this behavior and hide themselves from this kind of questions about the internal development of ideas in mathematics. I challenge all those who voted to close this question as off-topic to look in HSM for any mention about "locale theory" there.
|
In a letter to Frobenius, Dedekind made the following curious observation: if we see the multiplication table of a finite group $G$ as a matrix (considering each element of the group as an abstract variable) and take the determinant, then the resulting polynomial factors into a product of $c$ distinct irreducible polynomials, each with multiplicity equal to its degree, where $c$ is the number of conjugacy classes of $G$ . This is now known as Frobenius determinant theorem, and it is what led Frobenius to develop the whole representation theory of finite groups ( https://en.wikipedia.org/wiki/Frobenius_determinant_theorem ).
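For the cyclic group $\mathbb{Z}/3$ the group matrix is a circulant, and the factorization Dedekind observed is just the classical circulant determinant formula: three linear factors, one per conjugacy class (here, per linear character). The numerical check below is my own illustration:

```python
# Group determinant of Z/3: det of the circulant M[i][j] = x_{(j-i) mod 3}
# equals the product of the three character sums sum_k x_k * w^(j*k).
import cmath

x = [1.7, -0.4, 2.3]   # arbitrary real test values for the three group elements

M = [[x[(j - i) % 3] for j in range(3)] for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

w = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity
factors = [sum(x[k] * w ** (j * k) for k in range(3)) for j in range(3)]
product = factors[0] * factors[1] * factors[2]

assert abs(det3(M) - product) < 1e-9
print("group determinant = product of 3 linear factors (3 conjugacy classes)")
```

For a nonabelian group the factors of degree $>1$ appear with multiplicity equal to their degree, which is the part of Dedekind's observation that led Frobenius to characters of higher-dimensional representations.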
|
{
"source": [
"https://mathoverflow.net/questions/369179",
"https://mathoverflow.net",
"https://mathoverflow.net/users/158098/"
]
}
|
369,672 |
First of all, sorry if this post is not appropriate for this forum. I have a habit that every time I read a beautiful article I look at the author's homepage, and I often find amazing things. Recently I read a paper of Andrew Hicks, and when I opened his homepage I found many links about his invention: Flawless wing mirrors (car mirror) . ( Image Source ) I would not have been surprised if this invention had been made by a non-mathematician. His mirror is an amazing invention to me because I see it every day, but I didn't know its inventor was a mathematician! Anyway, I want to ask Question 1: Are there mathematicians who have done outstanding/prominent non-mathematical work, such as inventions, patents, solving social/economic/etc. problems, papers in these areas, etc.? Of course, one can say that almost all technology nowadays is based on the work of mathematicians, but I'm asking for specific contributions/innovations. I also want to ask a similar question (maybe it will be useful for those who are looking for a job!): Question 2: Which mathematicians are working in non-mathematical areas/companies? Note: Please add to your answers the name and the work of the mathematician.
|
Charles Lutwidge Dodgson (1832-1898), better known as Lewis Carroll .
|
{
"source": [
"https://mathoverflow.net/questions/369672",
"https://mathoverflow.net",
"https://mathoverflow.net/users/90655/"
]
}
|
369,710 |
Let me begin by formulating a concrete (if not 100% precise) question, and then I'll explain what my real agenda is. Two key facts about forcing are (1) the definability of forcing; i.e., the existence of a notion $\Vdash^\star$ (to use Kunen's notation) such that $p\Vdash \phi$ if and only if $(p \Vdash^\star \phi)^M$ , and (2) the truth lemma; i.e., anything true in $M[G]$ is forced by some $p\in G$ . I am wondering if there is a way to "axiomatize" these facts by saying what properties forcing must have, without actually introducing a poset or saying that $G$ is a generic filter or that forcing is a statement about all generic filters, etc. And when I say that forcing "must have" these properties, I mean that by using these axioms, we can go ahead and prove that $M[G]$ satisfies ZFC, and only worry later about how to construct something that satisfies the axioms. Now for my hidden agenda. As some readers know, I have written A beginner's guide to forcing where I try to give a motivated exposition of forcing. But I am not entirely satisfied with it, and I have recently been having some interesting email conversations with Scott Aaronson that have prompted me to revisit this topic. I am (and I think Scott is) fairly comfortable with the exposition up to the point where one recognizes that it would be nice if one could add some function $F : \aleph_2^M \times \aleph_0 \to \lbrace 0,1\rbrace$ to a countable transitive model $M$ to get a bigger countable transitive model $M[F]$ . It's also easy to grasp, by analogy from algebra, that one also needs to add further sets "generated by $F$ ." And with some more thought, one can see that adding arbitrary sets to $M$ can create contradictions, and that even if you pick an $F$ that is "safe," it's not immediately clear how to add a set that (for example) plays the role of the power set of $F$ , since the "true" powerset of $F$ (in $\mathbf{V}$ ) is clearly the wrong thing to add. 
It's even vaguely plausible that one might want to introduce "names" of some sort to label the things you want to add, and to keep track of the relations between them, before you commit to saying exactly what these names are names of . But then there seems to be a big conceptual leap to saying, "Okay, so now instead of $F$ itself, let's focus on the poset $P$ of finite partial functions, and a generic filter $G$ . And here's a funny recursive definition of $P$ -names." Who ordered all that ? In Cohen's own account of the discovery of forcing, he wrote: There are certainly moments in any mathematical discovery when the resolution of a problem takes place at such a subconscious level that, in retrospect, it seems impossible to dissect it and explain its origin. Rather, the entire idea presents itself at once, often perhaps in a vague form, but gradually becomes more precise. So a 100% motivated exposition may be a tad ambitious. However, it occurs to me that the following strategy might be fruitful. Take one of the subtler axioms, such as Comprehension or Powerset. We can "cheat" by looking at the textbook proof that $M[G]$ satisfies the axiom. This proof is actually fairly short and intuitive if you are willing to take for granted certain things, such as the meaningfulness of this funny $\Vdash$ symbol and its two key properties (definability and the truth lemma). The question I have is whether we can actually produce a rigorous proof that proceeds "backwards": We don't give the usual definitions of a generic filter or of $\Vdash$ or even of $M[G]$ , but just give the bare minimum that is needed to make sense of the proof that $M[G]$ satisfies ZFC. Then we "backsolve" to figure out that we need to introduce a poset and a generic filter in order to construct something that satisfies the axioms. If this can be made to work, then I think it would greatly help "ordinary mathematicians" grasp the proof. 
In ordinary mathematics, expanding a structure $M$ to a larger structure $M[G]$ never requires anything as elaborate as the forcing machinery, so it feels like you're getting blindsided by some deus ex machina . Of course the reason is that the axioms of ZFC are so darn complicated. So it would be nice if one could explain what's going on by first looking at what is needed to prove that $M[G]$ satisfies ZFC, and use that to motivate the introduction of a poset, etc. By the way, I suspect that in practice, many people learn this stuff somewhat "backwards" already. Certainly, on my first pass through Kunen's book, I skipped the ugly technical proof of the definability of forcing and went directly to the proof that $M[G]$ satisfies ZFC. So the question is whether one can push this backwards approach even further, and postpone even the introduction of the poset until after one sees why a poset is needed.
|
I have proposed such an axiomatization. It is published in Comptes Rendus: Mathématique, which has returned to the Académie des Sciences in 2020 and is now completely open access. Here is a link: https://doi.org/10.5802/crmath.97 The axiomatization I have proposed is as follows: Let $(M, \mathbb P, R, \left\{\Vdash\phi : \phi\in L(\in)\right\}, C)$ be a quintuple such that: $M$ is a transitive model of $ZFC$ . $\mathbb P$ is a partial ordering with maximum. $R$ is a definable in $M$ and absolute ternary relation (the $\mathbb P$ -membership relation, usually denoted by $M\models a\in_p b$ ). $\Vdash\phi$ is, if $\phi$ is a formula with $n$ free variables, a definable $n+1$ -ary predicate in $M$ called the forcing predicate corresponding to $\phi$ . $C$ is a predicate (the genericity predicate). As usual, we use $G$ to denote a filter satisfying the genericity predicate $C$ . Assume that the following axioms hold: (1) The downward closedness of forcing: Given a formula $\phi$ , for all $\overline{a}$ , $p$ and $q$ , if $M\models (p\Vdash\phi)[\overline{a}]$ and $q\leq p$ , then $M\models (q\Vdash\phi)[\overline{a}]$ . (2) The downward closedness of $\mathbb P$ -membership: For all $p$ , $q$ , $a$ and $b$ , if $M\models a\in_p b$ and $q\leq p$ , then $M\models a\in_q b$ . (3) The well-foundedness axiom: The binary relation $\exists p; M\models a\in_p b$ is well-founded and well-founded in $M$ . In particular, it is left-small in $M$ , that is, $\left\{a : \exists p; M\models a\in_p b\right\}$ is a set in $M$ . (4) The generic existence axiom: For each $p\in \mathbb P$ , there is a generic filter $G$ containing $p$ as an element. Let $F_G$ denote the transitive collapse of the well-founded relation $\exists p\in G; M\models a\in_p b$ . (5) The canonical naming for individuals axiom: $\forall a\in M;\exists b\in M; \forall G; F_G(b)=a$ . (6) The canonical naming for $G$ axiom: $\exists c\in M;\forall G; F_G(c)= G$ . 
Let $M[G]$ denote the direct image of $M$ under $F_G$ . The next two axioms are the fundamental duality that you have mentioned: (7) $M[G]\models \phi[F_G(\overline{a})]$ iff $\exists p\in G; M\models (p\Vdash\phi)[\overline{a}]$ , for all $\phi$ , $\overline{a}$ , $G$ . (8) $M\models (p\Vdash\phi)[\overline{a}]$ iff $\forall G\ni p; M[G]\models \phi[F_G(\overline{a})]$ , for all $\phi$ , $\overline{a}$ , $p$ . Finally, the universality of $\mathbb P$ -membership axiom. (9) Given an individual $a$ , if $a$ is a downward closed relation between individuals and conditions, then there is a $\mathbb P$ -imitation $c$ of $a$ , that is, $M\models b\in_p c$ iff $(b,p)\in a$ , for all $b$ and $p$ . It follows that $(M, \mathbb P, R, \left\{\Vdash\phi : \phi\in L(\in)\right\}, C, G)$ represent a standard forcing-generic extension: The usual definitions of the forcing predicates can be recovered, the usual definition of genericity can also be recovered ( $G$ intersects every dense set in $M$ ), $M[G]$ is a model of $ZFC$ determined by $M$ and $G$ and it is the least such model. (Axiom $(9)$ is used only in the proof that $M[G]$ is a model).
|
{
"source": [
"https://mathoverflow.net/questions/369710",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3106/"
]
}
|
369,846 |
This should be well-known, but I can't find a reference (or a proof, or a counter-example...). Let $d$ be a positive square-free integer. Suppose that there is no element in the ring of integers of $\mathbb{Q}(\sqrt{d})$ with norm $-1$ . Then I believe that no element of $\mathbb{Q}(\sqrt{d})$ has norm $-1\ $ (in fancy terms, the homomorphism $H^2(G,\mathscr{O}^*)\rightarrow H^2(G,\mathbb{Q}(\sqrt{d})^*)$ , with $G:=\operatorname{Gal}(\mathbb{Q}(\sqrt{d})/\mathbb{Q})=$ $\mathbb{Z}/2 $ , is injective). Is that correct? If yes, I'd appreciate a proof or a reference.
|
This is false. The smallest counterexample is $d = 34$ . Let $K = \mathbb{Q}(\sqrt{34})$ . The fundamental unit in $\mathcal{O}_{K} = \mathbb{Z}[\sqrt{34}]$ is $35 + 6 \sqrt{34}$ , which has norm $1$ , and therefore, there is no element in $\mathcal{O}_{K}$ with norm $-1$ . However, $\frac{3}{5} + \frac{1}{5} \sqrt{34}$ has norm $-1$ , so there is an element of norm $-1$ in $K$ .
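Both norm computations are easy to check with exact rational arithmetic (an illustrative sketch, not part of the answer):

```python
from fractions import Fraction

def norm(a, b, d=34):
    # Norm of a + b*sqrt(d) in Q(sqrt(d)): a^2 - d*b^2.
    return a*a - d*b*b

print(norm(35, 6))                           # 1: the fundamental unit has norm +1
print(norm(Fraction(3, 5), Fraction(1, 5)))  # -1: a field element of norm -1
```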
|
{
"source": [
"https://mathoverflow.net/questions/369846",
"https://mathoverflow.net",
"https://mathoverflow.net/users/40297/"
]
}
|
369,930 |
Below, we compute with exact real numbers using a realistic / conservative model of computability like Type Two Effectivity . Assume that there is an algorithm that, given a symmetric real matrix $M$ , finds an eigenvector $v$ of $M$ of unit length. Let $$M(\epsilon) = \begin{cases}
\left[\begin{matrix}1 & \epsilon\\ \epsilon & 1\end{matrix}\right]
,& \epsilon \geq 0 \\
\left[\begin{matrix}1 - \epsilon & 0\\0 & 1 + \epsilon\end{matrix}\right]
,& \epsilon \leq 0
\end{cases}$$ and assume that it's possible to find an eigenvector $v$ of $M(\epsilon)$ . If $\epsilon > 0$ then $v$ must necessarily be $\pm \frac 1 {\sqrt 2}\left[\begin{matrix}1\\1\end{matrix}\right]$ or $\pm \frac 1 {\sqrt 2}\left[\begin{matrix}-1\\1\end{matrix}\right]$ . Observe that in all four cases, the $L^1$ norm of $v$ is $\sqrt 2$ . If $\epsilon < 0$ , then $v$ must necessarily be $\pm\left[\begin{matrix}1\\0\end{matrix}\right]$ or $\pm\left[\begin{matrix}0\\1\end{matrix}\right]$ . Observe that in all four cases, the $L^1$ norm of $v$ is $1$ . It's easily determinable whether the $L^1$ norm of $v$ is less than $\sqrt 2$ or greater than $1$ . Therefore we can decide whether $\epsilon \leq 0$ or $\epsilon \geq 0$ , which is impossible! In a way, this is strange, because many sources say that the Singular Value Decomposition (SVD) and Schur Decomposition (which are generalisations of the Spectral Decomposition) are numerically stable. They're also widely used in numerical applications. But I've just tested the examples above for small $\epsilon$ using SciPy and got incorrect results. So my question is, how do numerical analysts get around this problem? Or why is this apparently not a problem? I could venture some guesses: While finding eigenvectors of general matrices may be impossible, it is possible to find their eigenvalues. Also, it's possible to "shift" a problematic matrix by some small $\epsilon$ so that its eigendecomposition is computable.
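The discontinuity argument above can be made concrete with the closed-form eigenvectors of this particular $2\times 2$ family (a pure-Python illustration of the $L^1$-norm test, not a general eigenvector algorithm):

```python
import math

def eigvec_l1(eps):
    # L1 norm of a unit eigenvector of M(eps), using the closed forms
    # from the question (any of the four sign choices gives the same norm).
    if eps >= 0:
        v = (1 / math.sqrt(2), 1 / math.sqrt(2))  # eigenvector of [[1, e], [e, 1]]
    else:
        v = (1.0, 0.0)                            # eigenvector of diag(1 - e, 1 + e)
    return abs(v[0]) + abs(v[1])

# The L1 norm jumps between sqrt(2) and 1 as eps crosses 0, however small eps is:
print(eigvec_l1(1e-15), eigvec_l1(-1e-15))
```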
|
The singular value decomposition, when applied to a real symmetric matrix $A = \sum_i \lambda_i(A) u_i(A) u_i(A)^T$ , computes a stable mathematical object (spectral measure $\mu_A = \sum_i \delta_{\lambda_i(A)} u_i(A) u_i(A)^T$ , which is a projection-valued measure ) using a partially unstable coordinate system (the eigenvalues $\lambda_i(A)$ and eigenvectors $u_i(A)$ ; the eigenvalues are stable, but the eigenvectors are not). The numerical instability of the latter reflects the coordinate singularities of this coordinate system, but does not contradict the stability of the former. But in numerical computations we have to use the latter rather than the former, because standard computer languages have built-in data representations for numbers and vectors, but usually do not have built-in data representations for projection-valued measures. An analogy is with floating-point arithmetic. The operation of multiplication of two floating point numbers (expressed in binary $x = \sum_i a_i(x) 2^{-i}$ or decimal $x = \sum_i b_i(x) 10^{-i}$ ) is a stable (i.e., continuous) operation on the abstract real numbers ${\bf R}$ , but when viewed in a binary or decimal representation system becomes "uncomputable". For instance, the square of $1.414213\dots$ could be either $1.99999\dots$ or $2.0000\dots$ , depending on exactly what is going on in the $\dots$ ; hence questions such as "what is the first digit of the square of $1.414213\dots$ " are uncomputable. But this is an artefact of the numeral representation system used and is not an indicator of any lack of stability or computability for any actual computational
problem that involves the abstract real numbers (rather than an artificial problem that is sensitive to the choice of numeral representation used). In contrast, floating point division when the denominator is near zero is a true singularity; regardless of what numeral system one uses, this operation is genuinely discontinuous (in a dramatic fashion) on the abstract reals and generates actual instabilities that cannot be explained away as mere coordinate singularity artefacts. Returning back to matrices, whereas the individual eigenvectors $u_i(A)$ of a real symmetric matrix $A$ are not uniquely defined (there is a choice of sign for $u_i(A)$ , even when there are no repeated eigenvalues) or continuously dependent on $A$ , the spectral measure $\mu_A := \sum_i \delta_{\lambda_i(A)} u_i(A) u_i(A)^T$ is unambiguous; it is the unique projection-valued measure for which one has the functional calculus $$ f(A) = \int_{\bf R} f(E)\ d\mu_A(E)$$ for any polynomial $f$ (or indeed for any continuous function $f \colon {\bf R} \to {\bf R}$ ). The spectral measure $\mu_A$ depends continuously on $A$ in the vague topology ; indeed one has the inequality $$ \| f(A) - f(B) \|_F \leq \|f\|_\text{Lip} \|A-B\|_F$$ for any real symmetric $A,B$ and any Lipschitz $f$ , where $\|\|_F$ denotes the Frobenius norm (also known as the Hilbert-Schmidt norm or 2-Schatten norm). This allows for the possibility for stable computation of this measure, and indeed standard algorithms such as tridiagonalisation methods using (for instance) the QR factorisation and Householder reflections do allow one to compute this measure in a numerically stable fashion (e.g., small roundoff errors only lead to small variations in any test $\int_{\bf R} f(E)\ d\mu_A(E)$ of the spectral measure $\mu_A$ against a given test function $f$ ), although actually demonstrating this stability rigorously for a given numerical SVD algorithm does require a non-trivial amount of effort. 
The practical upshot of this is that if one uses a numerically stable SVD algorithm to compute a quantity that can be expressed as a numerically stable function of the spectral measure (e.g., the inverse $A^{-1}$ , assuming that the spectrum is bounded away from zero), then the computation will be stable, despite the fact that the representation of this spectral measure in eigenvalue/eigenvector form may contain coordinate instabilities. In examples involving eigenvalue collision such as the one you provided in your post, the eigenvectors can change dramatically (while the eigenvalues remains stable), but when the time comes to apply the SVD to compute a stable quantity such as the inverse $A^{-1}$ , these dramatic changes "miraculously" cancel each other out and the algorithm becomes numerically stable again. (This is analogous to how a stable floating point arithmetic computation (avoiding division by very small denominators) applied to an input $x = 1.99999\dots$ and an input $x' = 2.00000\dots$ will lead to outcomes that are very close to each other (as abstract real numbers), even though all the digits in the representations of $x$ and $x'$ are completely different; the changes in digits "cancel each other out" at the end of the day.) [The situation is a bit more interesting when applying the SVD to a non-symmetric matrix $A = \sum_i \sigma_i(A) u_i(A) v_i(A)^T$ . Now one gets two spectral measures, $\mu_{(A^* A)^{1/2}} = \sum_i \delta_{\sigma_i(A)} v_i(A) v_i(A)^T$ and $\mu_{(AA^*)^{1/2}} = \sum_i \delta_{\sigma_i(A)} u_i(A) u_i(A)^T$ which are numerically stable, but these don't capture the full strength of the SVD (for instance, they are not sufficient for computing $A^{-1}$ ). 
The non-projection-valued spectral measure $\mu_A = \sum_i \delta_{\sigma_i(A)} u_i(A) v_i(A)^T$ does capture the full SVD in this case, but is only stable using the vague topology on the open half-line $(0,+\infty)$ , that is to say $\int_0^\infty f(E)\ d\mu_A(E)$ varies continuously with $A$ as long as $f$ is a test function compactly supported in $(0,+\infty)$ , but is unstable if tested by functions that do not vanish at the origin. This is ultimately due to a genuine singularity in the polar decomposition of a non-selfadjoint matrix when the matrix becomes singular, which in one dimension is simply the familiar singularity in the polar decomposition of a complex number near the origin.]
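The cancellation described above can be seen numerically in the $2\times 2$ family from the question (a hypothetical stdlib-only sketch, not from the answer): the inverses of the two branches of $M(\pm\epsilon)$ agree entrywise to order $\epsilon$, even though their eigenvectors differ completely.

```python
def inv2(M):
    # Inverse of a 2x2 matrix given as nested tuples.
    (a, b), (c, d) = M
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

eps = 1e-12
A = ((1.0, eps), (eps, 1.0))               # M(eps), eps > 0: eigenvectors (1, ±1)/sqrt(2)
B = ((1.0 + eps, 0.0), (0.0, 1.0 - eps))   # M(-eps): eigenvectors (1, 0) and (0, 1)

diff = max(abs(inv2(A)[i][j] - inv2(B)[i][j]) for i in range(2) for j in range(2))
print(diff)  # on the order of eps: the stable map A -> A^{-1} hides the eigenvector jump
```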
|
{
"source": [
"https://mathoverflow.net/questions/369930",
"https://mathoverflow.net",
"https://mathoverflow.net/users/75761/"
]
}
|
370,028 |
I stumbled upon these very nice-looking notes by Brian Lawrence on the period of the Fibonacci numbers over finite fields. In them, he shows that the period of the Fibonacci sequence over $\mathbb{F}_p$ divides $p$ or $p-1$ or $p+1$ . I am wondering if there are explicit lower bounds on this period. Is it true, for instance, that as $p \to \infty$ , so does the period? A quick calculation on my computer shows that there are some "large" primes with period under 100:

p      period
9901   66
19489  58
28657  92
|
This may be too elementary for this site; if your question is closed, you might try asking on MathStackExchange. Many questions about the period can be answered by using the formula $$ F_n = (A^n-B^n)/(A-B), $$ where $A$ and $B$ are the roots of $T^2-T-1$ . So if $\sqrt5$ is in your finite field, then so are $A$ and $B$ , and since $AB=-1$ , the period divides $p-1$ by Fermat's little theorem. If not, then you're in the subgroup of $\mathbb F_{p^2}$ consisting of elements of norm $\pm1$ , so the period divides $2(p+1)$ . If you want a small period, then take primes that divide $A^n-1$ , or really its norm, so take primes dividing $(A^n-1)(B^n-1)$ , where $A$ and $B$ are $\frac12(1\pm\sqrt5)$ . An open question is in the other direction: are there infinitely many $p\equiv\pm1\pmod5$ such that the period is maximal, i.e., equal to $p-1$ ? BTW, the source you quote isn't quite correct: if $p\equiv\pm2\pmod5$ , then the period divides $2(p+1)$ , but might not divide $p+1$ . The simplest example is $p=3$ , where the Fibonacci sequence is $$ 0,1,1,2,0,2,2,1,\quad 0,1,1,2,0,2,2,1,\ldots. $$ Note that the first 0 does not necessarily mean that the sequence will start to repeat. What happens is that the term before the first $0$ is $p-1$ , so the first part of the sequence repeats with negative signs before you get to a consecutive $0$ and $1$ .
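The period itself is easy to compute by brute force (a quick empirical sketch, not part of the answer): it is the smallest $n > 0$ with $F_n \equiv 0$ and $F_{n+1} \equiv 1 \pmod p$.

```python
def pisano_period(p):
    # Smallest n > 0 with F(n) ≡ 0 and F(n+1) ≡ 1 (mod p); the Fibonacci
    # sequence mod p repeats with exactly this period.
    a, b = 0, 1
    n = 0
    while True:
        a, b = b, (a + b) % p
        n += 1
        if (a, b) == (0, 1):
            return n

print(pisano_period(3))     # 8, matching the sequence 0,1,1,2,0,2,2,1 above
print(pisano_period(9901))  # 66, matching the question's data
```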
|
{
"source": [
"https://mathoverflow.net/questions/370028",
"https://mathoverflow.net",
"https://mathoverflow.net/users/92401/"
]
}
|
370,762 |
If two complex projective manifolds are homotopy equivalent are they homeomorphic?
|
For curves this follows from the classification of (2-dimensional topological) surfaces, and for simply-connected surfaces this follows from Freedman's theorem. My former colleagues Anatoly Libgober and John Wood found examples of pairs of 3-folds which are complete intersections and are homotopy equivalent but not diffeomorphic, in fact have distinct Pontryagin classes. See Example 9.2 . Since in this case $H^4(M;\mathbb{Z})\cong \mathbb{Z}$ , this implies that the manifolds are not homeomorphic by the topological invariance of rational Pontryagin classes (see Ben Wieland's comment). For the higher dimensional case see: Fang, Fuquan , Topology of complete intersections , Comment. Math. Helv. 72, No. 3, 466-480 (1997). ZBL0896.14028 .
|
{
"source": [
"https://mathoverflow.net/questions/370762",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
370,898 |
Lately, I have been learning about the replication crisis , see How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth (good YouTube video) — by Michael Shermer and Stuart Ritchie. According to Wikipedia, the replication crisis (also known as the replicability crisis or reproducibility crisis) is an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate or reproduce. The replication crisis affects the social sciences and medicine most severely. Has the replication crisis impacted (pure) mathematics, or is mathematics unaffected? How should results in mathematics be reproduced? How can complicated proofs be replicated, given that so few people are able to understand them to begin with?
|
Mathematics does have its own version of the replicability problem, but for various reasons it is not as severe as in some scientific literature. A good example is the classification of finite simple groups - this was a monumental achievement (mostly) completed in the 1980's, spanning tens of thousands of pages written by dozens of authors. But over the past 20 years there has been significant ongoing effort undertaken by Gorenstein, Lyons, Solomon, and others to consolidate the proof in one place. This is partially to simplify and iron out kinks in the proof, but also out of a very real concern that the proof will be lost as experts retire and the field attracts fewer and fewer new researchers. This is one replicability issue in mathematics: some bodies of mathematical knowledge slide into folklore or arcana unless there is a concerted effort by the next generation to organize and preserve them. Another example is the ongoing saga of Mochizuki's proposed proof of the abc conjecture . The proof involve thousands of pages of work that remains obscure to all but a few, and there remains serious disagreement over whether the argument is correct . There are numerous other examples where important results are called into question because few experts spend the time and energy necessary to carefully work through difficult foundational theory - symplectic geometry provides another recent example. Why do I think these issues are not as big of a problem for mathematics as analogous issues in the sciences? Negative results: If you set out to solve an important mathematical problem but instead discover a disproof or counterexample, this is often just as highly valued as a proof. This provides a check against the perverse incentives which motivate some empirical researchers to stretch their evidence for the sake of getting a publication. 
Interconnectedness: Most mathematical research is part of an ecosystem of similar results about similar objects, and in an area with enough activity it is difficult for inconsistencies to develop and persist unnoticed. Generalization: Whenever there is a major mathematical breakthrough it is normally followed by a flurry of activity to extend it and solve other related problems. This entails not just replicating the breakthrough but clarifying it and probing its limits - a good example of this is all the work in the Langlands program which extends and clarifies Wiles' work on the modularity theorem. Purity: social science and psychology research is hard because the results of an experiment depend on norms and empirical circumstances which can change significantly over time - for instance, many studies about media consumption before the 90's were rendered almost irrelevant by the internet. The foundations of an area of mathematics can change, but the logical correctness of a mathematical argument can't (more or less).
|
{
"source": [
"https://mathoverflow.net/questions/370898",
"https://mathoverflow.net",
"https://mathoverflow.net/users/161631/"
]
}
|
370,971 |
Let $G$ be a regular graph of valence $d$ with finitely many vertices, let $A_G$ be its adjacency matrix, and let $$P_G(X)=\det(X-A_G)\in\mathbb{Z}[X]$$ be the adjacency polynomial of $G$ , i.e., the characteristic polynomial of $A_G$ . In some graphs that came up in my work, the adjacency polynomials $P_G(X)$ have a lot of factors in $\mathbb Z[X]$ , many of them repeated factors. So my questions are: Is it common for the adjacency polynomial of a regular graph to be highly factorizable in $\mathbb Z[X]$ , and to have many repeated factors? If not, what are the graph-theoretic consequences of having many small-degree factors? If not, what are the graph-theoretic consequences of having factors appearing to power greater than 1? To give an idea of the numbers involved, one example was a connected 3-regular graph with 64 vertices, and $$
P_G(X) =
(x - 3)x^{3}(x + 1)^{3}(x^2 - 3 x + 1)^{3}(x^2 - x - 3)^{3}(x^2 - x - 1)^{6}
(x^2 + x - 3)^{3}(x^3 - 3 x^2 - x + 4)^{2}(x^3 - 4 x + 1)
(x^6 - x^5 - 11 x^4 + 9 x^3 + 31 x^2 - 19 x - 8)^{3}
$$ I've looked at a couple of references and tried a Google search, but didn't find anything relevant.
|
Expanding on Richard's comment: let me rename your graph to $S$ and consider the adjacency matrix $A$ abstractly as a linear operator acting on the free vector space $\mathbb{C}[S]$ on (the vertices of) $S$ , and let $G$ be its automorphism group (this is why I wanted a new name). Then $\mathbb{C}[S]$ is a completely reducible representation of $G$ and $A$ is an endomorphism of this representation. Hence if we write $$\mathbb{C}[S] \cong \bigoplus_i n_i V_i$$ where $V_i$ are the irreducibles, then $A$ is an element of the endomorphism algebra $$\text{End}_G(\mathbb{C}[S]) \cong \prod_i M_{n_i}(\mathbb{C}).$$ This means more explicitly that $A$ is conjugate over $\mathbb{C}$ to a block diagonal matrix with a block for each isotypic component (hence its characteristic polynomial factors accordingly). In the nicest possible case the decomposition above is multiplicity-free in which case the endomorphism algebra is a product of copies of $\mathbb{C}$ and we just have that $A$ must act by a scalar $\lambda_i$ on each $V_i$ that occurs in the decomposition, which contributes a multiplicity of $\dim V_i$ to $\lambda_i$ as a root of the characteristic polynomial and hence, over $\mathbb{Q}$ , contributes a multiplicity of $\dim V_i$ to the minimal polynomial of $\lambda_i$ as a factor of the characteristic polynomial. (I think the result of this analysis comes out the same if you work over $\mathbb{Q}$ from the beginning but it's more annoying to describe.) I work through a few examples of this in my old blog post The Schrodinger equation on a finite graph , where I was trying to understand via a toy model the quantum mechanical phenomenon of group symmetries introducing "degeneracies," which is physics speak for eigenvalues (of the Hamiltonian in this case) of multiplicity greater than $1$ . 
The "most degenerate" case is the complete graph $S = K_n$ , where $G = S_n$ and the corresponding representation is a copy of the trivial representation and an irreducible representation of degree $n-1$ . This means the adjacency matrix $A$ must have at most two eigenvalues, one with multiplicity $1$ and one with multiplicity $n-1$ , which turn out to be $n-1$ and $-1$ respectively (this is easily computed by computing $\text{tr}(A)$ and $\text{tr}(A^2)$ , or just finding all the eigenvectors of $A + I$ ), inducing a factorization $$\det (tI - A) = (t - n + 1)(t + 1)^{n-1}$$ with a factor of multiplicity $n-1$ . One of the "least degenerate" cases where the automorphism group still acts transitively on vertices is $S = C_n$ the cycle graph, where $G = D_n$ is the dihedral group and the corresponding representation splits up into mostly $2$ -dimensional irreps. This reflects the fairly mild degeneracies of the eigenvalues of the adjacency matrix, which are $2 \cos \frac{2\pi k}{n}, k = 0, \dots n-1$ , and/but which also organize themselves into nontrivial Galois orbits coming from the action of the Galois group of $\mathbb{Q}(\zeta_n)$ .
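The "most degenerate" case above is easy to verify exactly without any CAS (a stdlib-only sketch using the Faddeev-LeVerrier algorithm; not part of the answer): for $K_5$ the characteristic polynomial comes out as $t^5 - 10t^3 - 20t^2 - 15t - 4 = (t-4)(t+1)^4$.

```python
from fractions import Fraction

def charpoly(A):
    # Faddeev-LeVerrier algorithm: exact coefficients of det(tI - A),
    # returned as [1, c1, ..., cn] for t^n + c1 t^{n-1} + ... + cn.
    n = len(A)
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    M = [[Fraction(0)] * n for _ in range(n)]
    c = [Fraction(1)]
    for k in range(1, n + 1):
        # M_k = A*M_{k-1} + c_{k-1}*I;  c_k = -tr(A*M_k)/k
        M = matmul(A, M)
        for i in range(n):
            M[i][i] += c[-1]
        AM = matmul(A, M)
        c.append(-sum(AM[i][i] for i in range(n)) / k)
    return c

n = 5
K5 = [[Fraction(int(i != j)) for j in range(n)] for i in range(n)]
print(charpoly(K5))  # [1, 0, -10, -20, -15, -4], i.e. (t - 4)(t + 1)^4
```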
|
{
"source": [
"https://mathoverflow.net/questions/370971",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11926/"
]
}
|
371,159 |
I need to verify the value of the following integral $$ 4n(n-1)\int_0^1 \frac{1}{8t^3}\left[\frac{(2t-t^2)^{n+1}}{(n+1)}-\frac{t^{2n+2}}{n+1}-t^4\{\frac{(2t-t^2)^{n-1}}{n-1}-\frac{t^{2n-2}}{n-1} \} \right] dt.$$ The integrand (factor of $4n(n-1)$ included) is the pdf of a certain random variable for $n\geq 3$ , and hence I expect it to be 1. If somebody could kindly put it into a computer algebra system like MATHEMATICA, I would be most obliged; I do not have access to any CAS software. PS: I do not know of any free CAS software. If there is any, somebody may please share.
|
The integral can be rewritten as \begin{align*}
I&=\frac{n(n-1)}{2}\int_0^1\frac{t^{n-2}(2-t)^{n+1}-t^{2n-1}}{n+1}-\frac{t^n(2-t)^{n-1}-t^{2n-1}}{n-1}\,dt\\[6pt]
&=\frac{1}{2n+2}+\frac{n(n-1)}{2}\int_0^1\frac{t^{n-2}(2-t)^{n+1}}{n+1}-\frac{t^n(2-t)^{n-1}}{n-1}\,dt.
\end{align*} Integrating by parts, we obtain $$\int_0^1\frac{t^{n-2}(2-t)^{n+1}}{n+1}\,dt=\frac{1}{n^2-1}+\int_0^1\frac{t^{n-1}(2-t)^n}{n-1}\,dt.$$ Therefore, \begin{align*}
I&=\frac{1}{2}+\frac{n}{2}\int_0^1t^{n-1}(2-t)^n-t^n(2-t)^{n-1}\,dt\\[6pt]
&=\frac{1}{2}+\frac{1}{2}\int_0^1(t^n(2-t)^n)'\,dt=\frac{1}{2}+\frac{1}{2}=1.
\end{align*} P.S. You can use SageMath and WolframAlpha for symbolic calculations. Both are free.
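For readers without a CAS, the claim can also be checked numerically with nothing but the standard library (a brute-force sketch of the original integral, truncated at $t = 10^{-9}$ to stay away from the removable singularity at $0$):

```python
def integrand(t, n):
    # Integrand from the question, including the 4n(n-1) prefactor.
    return 4*n*(n-1) / (8*t**3) * (
        ((2*t - t**2)**(n+1) - t**(2*n+2)) / (n+1)
        - t**4 * ((2*t - t**2)**(n-1) - t**(2*n-2)) / (n-1)
    )

def simpson(f, a, b, m=20000):
    # Composite Simpson's rule with m (even) subintervals.
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, m))
    return s * h / 3

for n in range(3, 8):
    print(n, simpson(lambda t: integrand(t, n), 1e-9, 1.0))  # each value should be 1
```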
|
{
"source": [
"https://mathoverflow.net/questions/371159",
"https://mathoverflow.net",
"https://mathoverflow.net/users/158175/"
]
}
|
371,172 |
Let $f:[0,\infty)\to \mathbb{R}$ be supported on $[0,1]$ , with $\int_0^1 f(x) dx = 1$ . Let $\mathcal{L} f$ be its Laplace transform. How slowly may $$\int_{-\infty}^\infty |\mathcal{L} f(\sigma+i t)| dt$$ grow as $\sigma\to -\infty$ ? It is clear it could be $\ll e^{\epsilon |\sigma|}$ (just let $f$ be supported on $[0,\epsilon]$ ). Could it grow polynomially in $|\sigma|$ ? Linearly in $|\sigma|$ ?
|
|
{
"source": [
"https://mathoverflow.net/questions/371172",
"https://mathoverflow.net",
"https://mathoverflow.net/users/398/"
]
}
|
371,606 |
Sometimes I see spider webs in very complex surroundings, like in the middle of twigs in a tree or in a bush. I keep thinking “if you understand the spider web, you understand the space around it”. What fascinates me is that, in some sense, it gives a discrete view on the continuous space surrounding it. I started to wonder what good mathematical models for spider webs are. Obvious candidates are geometric graphs embedded in surfaces, or rather in space. One could argue that Tutte’s Spring Theorem from 1963 is the base model: a planar geometric graph, given as the equilibrium position for a system of springs representing the edges of the graph. It is the minimum-energy configuration of the system of springs (see the picture for illustration). There are generalizations of such minimum-energy configurations for convex graph embeddings into space (Linial, Lovász, Wigderson 1988), where you place, for example, four vertices of the graph at the vertices of a simplex in $\mathbb R^3$ . I think such systems of springs are good models, because the threads of the spider web are elastic. However, when viewed as models for spider webs, I wonder whether these minimum-energy spring models are missing two aspects: The purpose of spider webs is to catch prey, so I feel the ideal model should also consider (A) maximizing the area covered (or the volume of the convex hull) and (B) minimizing the distances between the edges. To me, formalizing (A) and (B) and combining it with the minimum-energy principle for a system of springs would be the ideal mathematical model for spider webs. Now, it is not obvious to me whether the minimum-energy principle alone determines a geometric graph satisfying (A) and/or (B). Asking differently, if you add conditions like (A) or (B) to the minimum-energy principle, will this lead to different geometric graphs? My second, broader question: Are you aware of any mathematical models developed explicitly to model spider webs? 
I checked MO and MSE and searched on the internet, but could not find anything. Maybe I am looking in the wrong fields, I wonder. Any help would be greatly appreciated! References: Tutte, W. T. (1963), "How to draw a graph", Proceedings of the London Mathematical Society, 13: 743–767, doi:10.1112/plms/s3-13.1.743 Linial, N.; Lovász, L.; Wigderson, A. (1988), "Rubber bands, convex embeddings and graph connectivity", Combinatorica, 8(1): 91–102, doi:10.1007/BF02122557 The picture is from Daniel Spielman’s lecture notes pdf on the web
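The minimum-energy spring configuration in Tutte's theorem is easy to compute: with the outer face pinned to a convex polygon, each interior vertex must sit at the barycenter of its neighbours, a linear condition solvable by simple relaxation. A minimal sketch (my own illustration on a hypothetical wheel graph, not anything from the question):

```python
# Tutte-style spring embedding: pin the outer face, then each interior
# vertex ends up at the barycenter of its neighbours -- the unique
# minimum-energy position for unit-strength springs.

# hypothetical wheel graph: outer 4-cycle plus a hub (vertex 4)
edges = [(0, 1), (1, 2), (2, 3), (3, 0),
         (0, 4), (1, 4), (2, 4), (3, 4)]
neighbours = {}
for u, v in edges:
    neighbours.setdefault(u, []).append(v)
    neighbours.setdefault(v, []).append(u)

outer = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
pos = {v: outer.get(v, (0.3, 0.8)) for v in neighbours}  # arbitrary start

for _ in range(200):            # Gauss-Seidel style relaxation
    for v in neighbours:
        if v not in outer:      # only interior vertices move
            nbrs = neighbours[v]
            pos[v] = (sum(pos[u][0] for u in nbrs) / len(nbrs),
                      sum(pos[u][1] for u in nbrs) / len(nbrs))

print(pos[4])   # hub converges to the barycenter (0.5, 0.5)
```

For larger planar graphs the same relaxation converges to Tutte's planar embedding; conditions like (A) or (B) would change the objective and hence, in general, the equilibrium positions.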
|
In response to the second question (which I interpret as asking for math models of spider webs as they appear in Nature): There exist several distinct types of spider webs. The most common type, the orb web of araneids , has been modeled in Simple Model for the Mechanics of Spider Webs (2010). A key property of the orb web model is that the web is free of stress concentrations even when a few spiral threads are broken. This is distinctly different from usual elastic materials in which a crack causes stress concentrations and weakens the material. The model highlights the mechanical adaptability of the web: spiders can increase the number of spiral threads to make a dense web (to catch small insects) or they can adjust the number of radial threads (to adapt to environmental conditions or reduce the cost of making the web) – in both cases without reducing the damage tolerance of the web. Left panel: Construction of the orb web described in the cited paper. Right panel: Naturally occurring orb web ( Wikipedia ).
|
{
"source": [
"https://mathoverflow.net/questions/371606",
"https://mathoverflow.net",
"https://mathoverflow.net/users/156936/"
]
}
|
371,750 |
The curve, given in polar coordinates as $r(\theta)=\sin(\theta)/\theta$ , is plotted below. This is similar to the classical cardioid, but it is not the same curve (the curve above is not even algebraic, I believe). Does this curve have a name? Does it show up somewhere?
This curve has the property that it solves $\mathrm{Im}(1/z+\log(z))=0$ , if this perhaps rings a bell. This particular curve arises in some research I am working on at the moment, and it would be great if it perhaps connects to some classical area. Edit: Thanks for the great references!
As a reward, here is a more artistic rendering of the shape
using a type of complex dynamical systems.
|
The name of the curve is cochleoid (= shell-shaped, rather than cardioid = heart-shaped). I compare the two below (gold = cochleoid, blue = cardioid). The distinction shell/heart refers to the additional windings remarked upon by მამუკა ჯიბლაძე; without these windings the two shapes would be qualitatively the same.
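The defining property $\mathrm{Im}(1/z+\log z)=0$ stated in the question is also easy to check numerically: along the parametrization $z=\frac{\sin\theta}{\theta}e^{i\theta}$ one gets $\mathrm{Im}(1/z)=-\theta$ and $\mathrm{Im}(\log z)=\theta$ for $0<\theta<\pi$. A quick sketch of the check (my own, not from the post):

```python
# Verify Im(1/z + log z) = 0 on the curve r(t) = sin(t)/t for 0 < t < pi.
import cmath
import math

def residual(theta):
    z = (math.sin(theta) / theta) * cmath.exp(1j * theta)
    return (1 / z + cmath.log(z)).imag

worst = max(abs(residual(0.01 * k)) for k in range(1, 310))  # theta up to 3.09
print(worst)   # effectively zero (floating-point error only)
```

The restriction to $(0,\pi)$ keeps $r>0$ and the principal branch of the logarithm; the outer windings correspond to other branches.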
|
{
"source": [
"https://mathoverflow.net/questions/371750",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1056/"
]
}
|
372,480 |
This question is essentially a reposting of this question from Math.SE, which has a partial answer. YCor suggested I repost it here. Our starting point is a theorem of Matumoto: every group $Q$ is the outer automorphism group of some group $G_Q$ [1]. It seems to be a research theme to place restrictions on the groups involved. For example, Bumagin and Wise proved that if we restrict $Q$ to be countable then we may take $G_Q$ to be finitely generated [2], and more recently Logan proved that if we restrict $Q$ to be a finitely generated, residually finite group then we may take $G_Q$ to be residually finite [3, Corollary D] (this paper also cites quite a few other papers which play this game). However, all the results I have found always produce infinite groups $G_Q$ , even when the "input" groups $Q$ are finite. For example, Matumoto's groups $G_Q$ are fundamental groups of graphs of groups (so are always infinite), Bumagin and Wise use a variant of Rips' construction (so, as $Q$ is finite, their groups $G_Q$ have finite index in metric small cancellation groups, so are infinite), and Logan's groups $G_Q$ are HNN-extensions of hyperbolic triangle groups (so again are infinite). So we have a question: Does every finite group $Q$ occur as the outer automorphism group of some finite group $G_Q$ ? The answer is "yes" if we take $Q$ to be finite abelian or a symmetric group; this is what the answer to the original Math.SE question proves. [1] Matumoto, Takao. "Any group is represented by an outerautomorphism group." Hiroshima Mathematical Journal 19.1 (1989): 209-219. ( Project Euclid ) [2] Bumagin, Inna, and Daniel T. Wise. "Every group is an outer automorphism group of a finitely generated group." Journal of Pure and Applied Algebra 200.1-2 (2005): 137-147. ( doi ) [3] Logan, Alan D. "Every group is the outer automorphism group of an HNN-extension of a fixed triangle group." Advances in Mathematics 353 (2019): 116-152. ( doi , arXiv )
|
Yes. For each finite group $Q$ I'll construct a finite group $H$ with $\mathrm{Out}(H)\simeq Q$ ; moreover $H$ will be constructed as a semidirect product $D\ltimes P$ , with $P$ a $p$ -group of exponent $p$ and nilpotency class $<p$ (with the prime $p$ arbitrarily chosen $>|Q|+1$ ) and $D$ abelian of order coprime to $p$ (actually $D$ being a power of a cyclic group of order $p-1$ ). I'll use Lie algebras, which are convenient tools to encode $p$ -groups whose nilpotency class is less than $p$ , taking advantage of linear algebra. In Lie algebras, we denote $[x_1,\dots,x_m]=[x_1,[x_2,\dots,[x_{m-1},x_m]\cdots]]$ . Also I choose the convention to let permutations act on the left. The base field is $K=\mathbf{F}_p$ , $p$ prime. Fix $n\ge 1$ . Let $\mathfrak{f}_n$ be the free Lie $K$ -algebra on generators $(e_1,\dots,e_n)$ . It admits a unique grading in $\mathbf{Z}^n$ such that $e_i$ has degree $E_i$ , where $(E_i)$ is the canonical basis of $\mathbf{Z}^n$ ; it is called the multi-grading. For instance, $[e_3,[e_1,e_3]]$ has multi-degree $(1,0,2,0,\dots,0)$ . Let $I$ be a finite-codimensional multi-graded ideal contained in $[\mathfrak{f}_n,\mathfrak{f}_n]$ : so the quotient $\mathfrak{g}=\mathfrak{f}_n/I$ is naturally multi-graded. There is a natural action of ${K^*}^n$ on $\mathfrak{g}$ : namely $(t_1,\dots,t_n)$ acts on $\mathfrak{g}_{(m_1,\dots,m_n)}$ by multiplication by $\prod_{i=1}^n t_i^{m_i}$ . Let $D\subset\mathrm{Aut}(\mathfrak{g})$ be the image of this action. Also denote by $c$ the nilpotency class of $\mathfrak{g}$ : we assume $p>c$ . Using that $p>c$ , we endow, à la Malcev–Lazard, $\mathfrak{g}$ with the group law given by the Baker-Campbell-Hausdorff formula: $xy=x+y+\frac12[x,y]+\dots$ . We thus view $\mathfrak{g}$ as both a Lie algebra and a group; we denote it as $G$ when endowed with the group law (but feel free to refer to the Lie algebra law in $G$ ); this is a $p$ -group of exponent $p$ and nilpotency class $c<p$ . Define $H=D\ltimes G$ . 
Every permutation $\sigma\in\mathfrak{S}_n$ induces an automorphism $u_\sigma$ of $\mathfrak{f}_n$ , defined by $u_\sigma(e_i)=e_{\sigma(i)}$ . Write $\Gamma_I=\Gamma_{\mathfrak{g}}=\{\sigma\in\mathfrak{S}_n:u_\sigma(I)=I\}$ . Proposition 1. The natural map $\Gamma_\mathfrak{g}\to\mathrm{Out}(H)$ is an isomorphism. We need a lemma: Lemma 2. Define $M$ as $\mathbf{F}_p^*\ltimes\mathbf{F}_p$ . Then $\mathrm{Out}(M^n)$ is reduced to the symmetric group permuting the $n$ factors. Proof. Let $f$ be an automorphism. Since $M^n$ is a product of $n$ directly indecomposable center-free groups, its automorphism group permutes the $n$ (isomorphic) factors. Hence after composing with a permutation, we can suppose that $f$ preserves each factor. We are then reduced to checking that every automorphism of $\mathbf{F}_p^*\ltimes\mathbf{F}_p$ is inner. Indeed, after composing with an inner automorphism, we can suppose that it maps the Hall subgroup $\mathbf{F}_p^*$ into itself. Then after composing with an inner automorphism, we can also suppose it acts as the identity on $\mathbf{F}_p$ . It easily follows that this is the identity. (Note: for conciseness I used some slightly fancy results in this proof, but this lemma can be checked more elementarily.) $\Box$ Proof of the proposition. After composing with an inner automorphism, we can suppose that $f$ maps the Hall subgroup $D$ into itself. Next, $f$ induces an automorphism of $H/[G,G]$ , which can naturally be identified with $M^n$ of the previous lemma (recall that $I\subset [G,G]=[\mathfrak{g},\mathfrak{g}]$ ). Hence after composing with conjugation by some element of $D$ , we can suppose that $f$ both preserves $D$ and acts on $H/[G,G]\simeq M^n$ by permuting the factors (without changing coordinates). Hence $f$ acts as the identity on $D$ , and $f(e_i)=e_{\sigma(i)}+w_i$ for all $i$ , with $w_i\in [G,G]$ ( $+$ is the Lie algebra addition). Now, for $d\in D$ , we have $f(d)=d$ , so $f(de_id^{-1})=df(e_i)d^{-1}$ . 
Choose $d$ as the action of $(t,\dots,t)$ . Then this gives $t(e_{\sigma(i)}+w_i)=te_{\sigma(i)}+df(w_i)d^{-1}$ . Hence $w_i$ is an eigenvector for the eigenvalue $t$ in $[\mathfrak{g},\mathfrak{g}]$ , on which $d$ has the eigenvalues $(t^2,t^3,\dots,t^c)$ . Choosing $t$ of order $p-1$ , we see that $t$ is not an eigenvalue, and hence $w_i=0$ . Hence up to inner automorphisms, every automorphism of $H$ is induced by a permutation of the $n$ coordinates. Necessarily such a permutation has to be in $\Gamma_\mathfrak{g}$ . $\Box$ . To conclude we have to prove: Proposition 3. For every finite group $Q$ of order $n$ and every prime $p>n+1$ , there exists a finite-codimensional multi-graded ideal $I$ in $\mathfrak{f}_n$ $(=\mathfrak{f}_n(\mathbf{F}_p))$ such that $\Gamma_I\simeq Q$ . (And such that $\mathfrak{f}_n/I$ has nilpotency class $\le n+1$ .) This starts with the following lemma, which provides for each finite group $Q$ a relation whose automorphism group is the group $L_Q\subset\mathfrak{S}(Q)$ of left translations of $Q$ . Lemma 4. Let $Q$ be a group, $n=|Q|$ , and $q=(q_1,\dots,q_n)$ an injective $n$ -tuple of $Q$ . Define $X=Qq\subset Q^n$ . Then $L_Q=\{\sigma\in\mathfrak{S}(Q):\sigma X=X\}$ . Proof . Clearly $L_Q$ preserves $X$ . Conversely, if $\sigma$ preserves $X$ , after composing with a left translation we can suppose that $\sigma(q_1)=q_1$ , so $\sigma(q)\in\{q_1\}\times Q^{n-1}$ ; since $X\cap (\{q_1\}\times Q^{n-1})=\{q\}$ , we deduce $\sigma(q)=q$ , which in turn implies $\sigma=\mathrm{Id}$ . $\Box$ . Proof of the proposition. Write $\mathfrak{f}_Q\simeq\mathfrak{f}_n$ for the free Lie algebra over the generating family $(e_q)_{q\in Q}$ . It can be viewed as graded in $\mathbf{Z}^Q$ , with basis $(E_q)_{q\in Q}$ . Write $E=\sum_q E_q$ . For $q\in Q^n$ , define $\xi_q=[e_{q_n},e_{q_1},e_{q_2},\dots,e_{q_n}]$ (note that it is homogeneous of degree $E+E_{q_n}$ ; in particular $(\xi_h)_{h\in X}$ is linearly independent). 
Fix an injective $n$ -tuple $q$ and define $X$ as in the proof of the lemma; for convenience suppose $q_n=1$ .
Define $J$ as the $n$ -dimensional subspace of $\mathfrak{f}_Q$ with basis $(\xi_h)_{h\in X}$ . Define $I=J\oplus \mathfrak{f}_Q^{n+2}$ , where $\mathfrak{f}_Q^i$ is the $i$ -th term in the lower central series. Hence $I$ is an ideal, and $\mathfrak{g}=\mathfrak{f}_Q/I$ is defined by killing all $i$ -fold commutators for $i\ge n+1$ and certain particular $(n+1)$ -fold commutators. (Since we assume $p>n+1$ , we can view it as a group as previously.) Claim. For $h=(h_1,\dots,h_n)\in Q^n$ with $h_{n-1}\neq h_n$ , we have $\xi_h\in I$ if and only if $h\in X$ . By definition, $h\in X$ implies the condition. Now suppose that $h$ satisfies the condition. First, the condition $h_{n-1}\neq h_n$ ensures that $\xi_h\neq 0$ ; it is homogeneous in the multi-grading. If it belongs to $J$ , its multi-degree is therefore some permute of $(2,1,\dots,1)$ . This is the case if and only if the $h_i$ are pairwise distinct, so we now assume it; its degree is therefore equal to $E_{h_n}+E$ . Now $J_{E+E_{h_n}}$ is 1-dimensional, and generated by $\xi_{h_nq}$ . Hence $\xi_h$ is a scalar multiple of $\xi_{h_nq}$ : $$[e_{h_n},e_{h_1},\dots,e_{h_{n-1}},e_{h_n}]=\lambda
[e_{h_n},e_{h_nq_1},\dots,e_{h_nq_{n-1}},e_{h_n}].$$ The next lemma implies that $h_i=h_nq_i$ for all $i\in\{1,\dots,n-1\}$ . So $h\in X$ . The claim is proved. The claim implies that for every permutation $\sigma$ of $Q$ , if the automorphism $u_\sigma$ of $\mathfrak{f}_Q$ preserves $I$ , then $\sigma$ has to preserve $X$ , and hence (Lemma 4) $\sigma$ is a left translation of $Q$ . This finishes the proof. $\Box$ . Lemma 5. Consider the free Lie algebra on $(e_1,\dots,e_n)$ . If for some permutation $\sigma$ of $\{1,\dots,n-1\}$ and scalar $\lambda$ we have $$[e_n,e_1,\dots,e_{n-1},e_n]=\lambda [e_n,e_{\sigma(1)},\dots,e_{\sigma(n-1)},e_n],$$ then $\sigma$ is the identity and $\lambda=1$ . Proof. Use the representation $f$ in $\mathfrak{gl}_n$ mapping $e_i$ to the elementary matrix $\mathcal{E}_{i-1,i}$ (consider indices modulo $n$ ). Then $[e_n,e_1,\dots,e_{n-1},e_n]=[e_n,e_1,[e_2,\dots,e_n]]$ maps to $$[\mathcal{E}_{n-1,n},\mathcal{E}_{n,1},\mathcal{E}_{1,n}]=[\mathcal{E}_{n-1,n},\mathcal{E}_{n,n}-\mathcal{E}_{1,1}]=\mathcal{E}_{n-1,n}.$$ Let by contradiction $j$ be maximal such that $\sigma(j)\neq j$ ; note that $2\le j\le n-1$ and $n\ge 3$ . Then $[e_n,e_{\sigma(1)},\dots,e_{\sigma(n-1)},e_n]=[e_n,e_{\sigma(1)},\dots,e_{\sigma(j)},[e_{j+1},\dots,e_n]]$ maps to $$w=[\mathcal{E}_{n-1,n},\mathcal{E}_{\sigma(1)-1,\sigma(1)},\dots,\mathcal{E}_{\sigma(j)-1,\sigma(j)},\mathcal{E}_{j,n}],$$ which cannot be zero. Hence $[\mathcal{E}_{\sigma(j)-1,\sigma(j)},\mathcal{E}_{j,n}]\neq 0$ . Since $\sigma(j)<j$ , this implies $\sigma(j)-1=n$ (modulo $n$ ), that is, $\sigma(j)=1$ . So $$w=-[\mathcal{E}_{n-1,n},\mathcal{E}_{\sigma(1)-1,\sigma(1)},\dots,\mathcal{E}_{\sigma(j-1)-1,\sigma(j-1)},\mathcal{E}_{j,1}].$$ In turn, $[\mathcal{E}_{\sigma(j-1)-1,\sigma(j-1)},\mathcal{E}_{j,1}]\neq 0$ , using that $\sigma(j-1)\neq 1$ , implies $\sigma(j-1)=j$ , and so on; we deduce that $\sigma$ is the cycle $j\mapsto j+1$ (modulo $n-1$ ). 
Eventually we obtain $$w=[\mathcal{E}_{n-1,1},\mathcal{E}_{1,2},\mathcal{E}_{2,1}]=\mathcal{E}_{n-1,1}.$$ So $\mathcal{E}_{n-1,n}=\lambda \mathcal{E}_{n-1,1}$ , a contradiction. (Note on Lemma 5: one has $[e_1,e_2,e_1,e_2]=[e_2,e_1,e_1,e_2]$ , but this and its obvious consequences are probably the only identities between Lie monomials in the free Lie algebra, beyond the ones obtained from skew-symmetry in the last two variables.) Note on the result: the resulting group $H$ roughly has size $|Q|^{|Q|}$ , which is probably not optimal. In Proposition 3, $I$ is strictly contained in $\mathfrak{f}_Q^{n+1}$ as soon as $|Q|\ge 3$ , so the nilpotency class of $G$ is then equal to $n+1$ . (For $|Q|=1$ , choosing $p\ge 3$ outputs $H$ as the group $M=M_p$ , which has trivial Out, and $G$ is abelian then; for $|Q|=2$ , this outputs a group $H$ of order $(p-1)^2.p^3$ for a chosen prime $p\ge 5$ , and $G$ has nilpotency class $2$ . For $|Q|=3$ it outputs a group $H$ of order $(p-1)^3.p^{29}$ for $p\ge 5$ , which is already quite big.) To improve the bounds in explicit cases by running this method, one should describe $Q$ as a permutation group of a set $Y$ , such that $Q$ can be described as the stabilizer of some $\ell$ -ary relation $R$ contained in the set of pairwise distinct $\ell$ -tuples of $Y$ , for $\ell$ as small as possible.
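Lemma 4 above is concrete enough to check by brute force on a small example. Here is a quick sketch (my own, with the hypothetical choice $Q=\mathbf{Z}/5$ and $q=(0,1,\dots,4)$): among all $120$ permutations of $Q$, exactly the $5$ left translations preserve $X=Qq$.

```python
# Brute-force check of Lemma 4 for Q = Z/5, q = (0,1,2,3,4), X = Q.q.
from itertools import permutations

n = 5
Q = range(n)
q = tuple(Q)                                        # an injective n-tuple
X = {tuple((g + x) % n for x in q) for g in Q}      # the orbit X = Qq

# permutations of Q (as lookup tables s[x]) whose induced action preserves X
stab = [s for s in permutations(Q)
        if {tuple(s[x] for x in t) for t in X} == X]

translations = sorted(tuple((g + x) % n for x in Q) for g in Q)
print(len(stab), sorted(stab) == translations)      # 5 True
```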
|
{
"source": [
"https://mathoverflow.net/questions/372480",
"https://mathoverflow.net",
"https://mathoverflow.net/users/35478/"
]
}
|
372,831 |
I am a young PhD student (24) at a German university and I am not sure whether this is the right place to ask this kind of question. If not, feel free to move it elsewhere or delete it completely. Currently, I have a half-time position in Analysis, and my doctoral advisor more and more turns out to be not very involved in my PhD. I started 1 1/2 years ago at the age of 22, and my PhD advisor was at least somewhat involved while I wrote my Master thesis, but gave me much freedom. He gave this thesis the best grade possible and I felt I also deserved it to an extent. The topic was one that I had chosen myself, and I learned many new things writing it, combining different areas that I did not know before. This time was quite stressful for me personally, as I tend to pressure myself too hard if I have to perform like this. After that I was quite exhausted, but wanted to pursue a PhD at my university in the field I wrote my Master thesis in, because it felt like the right thing to do; I really like the people and also the topic I wrote my thesis on. However, things changed rapidly once I officially became a PhD student. The first thing my advisor told me was that he had no time to spend on me, as his oldest PhD student had to finish after over four years. This did not surprise me, and at that time I had the luck to write a paper with a guy who still is like a mathematical godfather to me (and much more capable than me), whom I met at a conference. The paper that we wrote had quite nice new ideas in it; however, I felt my part in creating it was minor. But I also did some good work, I think. At the start of the year I investigated some other question on my own and managed to produce a positive result with the tools I learned from writing the first paper. I also extended the question quite a bit to write a paper on my own about the topic, without any advising at all. The only thing I currently do is to speak with some colleagues of mine about very specific questions. 
I sent this "paper" to my advisor, and the only thing he told me is that he would be too busy to read it in the near future. Currently, I have another collaboration going with the guy I mentioned before and several others who advise me more than my own advisor does, although they work at completely different universities. So currently I am quite lucky to have some guidance and a perspective in research. Finally, my PhD advisor didn't give me a question to work on. He just mentioned very vaguely that one could maybe extend some of the concepts used in my Master thesis, but he couldn't tell me any possible applications of these abstractions. So I did not feel it would be promising to work on this. He also does not meet up with me on a weekly basis to discuss. Furthermore, my advisor also holds a record of suggesting topics to his PhD students that are completely inept to work on at this stage of their mathematical career. My older "PhD brothers", for example, spent two years working on a big conjecture in one specific field without making any progress whatsoever. My PhD advisor also had no new idea how to approach that problem; he basically just told them to try it without giving much help. So I frequently ask myself the following question: "Do I feel it is worth pursuing a PhD under these circumstances?" I saw how other advisors work with their PhD students, and I feel their advisors have a clear initial idea of the "what" and the "how". Moreover, they meet up and discuss the current problems that arise while pursuing the question. All this I do not have at the moment. I really enjoy teaching courses, but I do not have the impression that I am moving forward in research much, and that really pulls me down. And I also feel that this whole situation damages me mentally to a point where I frequently get anxiety attacks. On the other hand, I know that I could earn good money in industry with my qualities and my intellectual capacity. 
So my question is: Would you advise me to quit my PhD and to try it at another university in my field? Or should I stay and fight? Or should I just skip the PhD and do something that earns me money and gives me more structure? I know that the "right" answer to this question is not determined in any way, and that it might fit in the category "vague question" that we usually try to avoid on this platform. But I do not know where else to ask it, and I would like to get answers from people with more academic experience than I have. I cannot really pigeonhole my whole situation and do not know what to do at the moment.
|
I second Nate's suggestion to look at https://academia.stackexchange.com , there are already many similar questions (with answers, some of them specifically from mathematicians) on that site that may help you. But since "go somewhere else" is not exactly the kind of answer someone in your situation needs, here are a few thoughts from a random person on the internet: First, it sounds to me as if your PhD is actually going rather well: You've already obtained independent results, written papers, and initiated fruitful collaborations on your own. If that is not what you should demonstrate for this degree, I don't know what is. (Of course it could -- always! -- be going better, and the experience could be more pleasant for you.) So I wouldn't worry about your chances of graduating. In fact, one possible (but certainly not the only) explanation is that your advisor is thinking the same thing: "They're doing well on their own, they don't need my help, and it's better for their career if they're working independently anyway." (Of course, this can also be a convenient rationalization of laziness or poor time management on their end...) If this is the case, I'd sit down with them, explain to them that you in fact do need their help, and negotiate exactly what kind and on what schedule. If that doesn't work (and you haven't graduated by then), switching advisors or getting a formal co-advisor (either in the same department or a different university) is certainly not unheard of. Finally (and this is the reason I am writing this answer now), you write And I also feel that this whole situation damages me mentally to a point where I frequently get anxiety attacks. It's completely normal to have doubts and frustrations during your PhD (and the timing seems about on schedule for it, as well), but this is a strong emotional response that you should take seriously and seek help dealing with. 
Here I don't necessarily mean professional help (although there are certainly professionals that can help with this), but finding a trusted person you can talk to about these issues on a regular basis to prevent them from building up. (Here especially, https://academia.stackexchange.com can give you much better recommendations since this is something that happens in all disciplines.)
|
{
"source": [
"https://mathoverflow.net/questions/372831",
"https://mathoverflow.net",
"https://mathoverflow.net/users/91126/"
]
}
|
373,441 |
I received an email today about the award of the 2020 Nobel Prize in Physics to Roger Penrose , Reinhard Genzel and Andrea Ghez . Roger Penrose receives one-half of the prize "for the discovery that black hole formation is a robust prediction of the general theory of relativity." Genzel and Ghez share one-half "for the discovery of a supermassive compact object at the centre of our galaxy". Roger Penrose is an English mathematical physicist who has made contributions to the mathematical physics of general relativity and cosmology. I have checked some of his works which relate to mathematics, and I have found the paper M. Ko, E. T. Newman, R. Penrose, The Kähler structure of asymptotic twistor space , Journal of Mathematical Physics 18 (1977) 58–64, doi: 10.1063/1.523151 , which seems to indicate that Penrose has contributed widely to the mathematics of general relativity, such as tensors and manifolds. Now my question here is: Question: What are the contributions of Sir Roger Penrose, the winner of the 2020 Nobel Prize in Physics, to the mathematics of general relativity, such as tensors and manifolds? We may motivate this question by adding a nice question pointed out in the comment by Alexandre Eremenko below, where he asks: Is Sir Roger Penrose the first true mathematician to receive a Nobel Prize in Physics? If the answer is yes, then Sir Roger Penrose would tell us "before being a physicist you should be a mathematician". On the other hand, in my opinion the first mathematician to be awarded several physics prizes is the American mathematical and theoretical physicist Edward Witten . His research, such as on cosmology and modern physics (Einstein's general relativity), seems to intersect with Sir Roger Penrose's. Related question : Penrose’s singularity theorem
|
It seems (as mentioned by Sam Hopkins above) that the Singularity Theorem is the official reason for the Nobel award. But that is by no means the only (and perhaps not even the most important) contribution of Sir Roger Penrose to mathematical physics (not to mention his works as a geometer and his research on tilings, and so many other things). In Physics, his grand idea is Twistor Theory , an ongoing project which is still far from completion, but which has been incorporated into other areas (see for instance here for its connection to String Theory; there is also another connection with the Bohm-Hiley approach using Clifford Algebras, see here ). But his influence goes even beyond that: Penrose invented Spin Networks in the late sixties as a way to discretize space-time. The core idea was subsequently incorporated into the grand rival of String Theory, Loop Quantum Gravity . As far as I know, all approaches to a background-independent Quantum Theory of Gravity use spin networks, one way or the other. Moral: Congratulations, Sir Roger! ADDENDUM @TimothyChow mentioned that my answer does not address the OP's question, namely Penrose's contributions to General Relativity. I have mentioned two big ideas of Penrose, namely Spin Networks and Twistor Theory. The first one is, as far as I know, not directly related to standard relativity, but rather to "building" a discrete space-time. It is not entirely unrelated, though, because the core idea is that space-time, the main actor of GR, is an emergent phenomenon . The ultimate goal of spin networks, and also of all theories which capitalize on them, is to generate a description of the universe which accommodates Quantum Mechanics and at the same time enables the recovery of GR as a limiting process . 
As for the second theory, Twistors, I am obviously not the right person to speak about them, as they are a quite involved matter, with many ramifications, from multi-dimensional complex manifolds to sheaf cohomology theory, and a lot more. But, for this post, I can say this: the core idea is almost childish, and yet absolutely deep. Here it is: Penrose, thinking about Einstein's universe, realized that light lines are fundamental, not space-time points . Think for simplicity of projective space: you reverse the order. Rather than lines being made of points, it is points which are the focal intersection of light rays. The set of light rays, endowed with a suitable topology, makes up twistor space (it is a complex manifold of even dimension). Now, according to Penrose, relativity should be done inside twistor space, and normal space-time can be recovered from it using the "points trick" and the Penrose mapping, which transforms twistor coordinates into the Lorentzian ones. What is more, twistor space provides some degrees of freedom for QM as well. How? Well, think of a set of tilting light rays. Rather than a well-defined space-time point you will get a "fuzzy point". But here I stop.
|
{
"source": [
"https://mathoverflow.net/questions/373441",
"https://mathoverflow.net",
"https://mathoverflow.net/users/51189/"
]
}
|
373,775 |
I raised the following question as part of another MO question , but I am following the suggestion of Nate Eldredge to make it a question in its own right. For many years, there has been a valuable web resource, hosted by Purdue, on the Consequences of the Axiom of Choice . Unfortunately, the page is no longer functioning, as you will quickly discover if you try submitting a form number. The URLs have changed. I suspect that Purdue redesigned its website at some point, changing the URLs, and that since Herman Rubin died a couple of years ago, there is now nobody responsible for maintaining the Axiom of Choice page. I tried emailing a couple of random people in the Purdue mathematics department to find out if something could be done to revive the page, but have received no response. I am wondering if there is a way to revive this resource, ideally in a way that will prevent it from suffering a similar extinction risk a few years down the line. Perhaps some people can turn the page into a wiki, much in the way the OEIS evolved from a personal project of Neil Sloane's into a wiki? Also, maybe someone reading this knows more than I do about the situation at Purdue and can comment on what would be involved in making the data publicly available again.
|
Sorry I just saw this, and thank you @martin-sleziak for informing me of this question! I'm still investigating what went wrong, but cgraph is back online: https://cgraph.inters.co About the original "Consequences of the axiom of choice" website I know Paul Howard was working on a new version (hopefully with cgraph integration), I will try to find out what is the status and post here again. Either way, please feel free to use cgraph, either from the website or by installing the program locally. Let me know if you need any help, want to offer any help, or you discover any problems either per email or by opening an issue at either repository: https://gitlab.common-lisp.net/idimitriou/jeffrey https://github.com/ioannad/jeffrey By the way, I have pledged to maintain cgraph for life, and I am open to suggestions to integrate it with or expand it for any wikis anyone wants to create. Just drop me a line!
|
{
"source": [
"https://mathoverflow.net/questions/373775",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3106/"
]
}
|
373,849 |
The HoTT community is quite friendly, and produces many motivational introductions to HoTT. The blog and the HoTT book are quite helpful. However, I want to get my hands directly on the subject, and am looking for a formal treatment of HoTT. Therefore this question: what are the formal definitions of the gadgets in homotopy type theory (HoTT)? I expect an answer with absolutely no motivations and no explanations. It should define all terminology, such as type, term, " : ", "for all", "exists", identity, equivalence, etc. It should also clarify which words are undefined. I feel like I'm almost asking for a computer program... so if that's your answer, I'm happy to study it.
|
Here are some resources: The appendix of the homotopy type theory book gives two formal presentations of homotopy type theory. Martín Escardó wrote lecture notes Introduction to Univalent Foundations of Mathematics with Agda which are at the same time written as traditional mathematics and formalized in Agda (so as formal as it gets). Designed for teaching purposes is Egbert Rijke's formalization HoTT-intro which accompanies his upcoming textbook on homotopy type theory. The HoTT library is a formalization of homotopy type theory in Coq. Start reading in the order of files imported in HoTT.v . The HoTT-Agda library is quite similar to the HoTT library, except formalized in Agda. The UniMath library is another formalization of univalent mathematics in Coq.
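As a taste of how such a formal presentation begins, here is a minimal sketch in Lean 4 syntax (my own illustration, not taken from any of the resources above; the name `Path` is mine) of the Martin-Löf identity type together with its eliminator, the basic gadget from which the homotopical notions are built:

```lean
-- The identity type: for a : A, `Path.refl a` is the only canonical
-- inhabitant of `Path a a`.
inductive Path {A : Type} : A → A → Type where
  | refl (a : A) : Path a a

-- Path induction (the J eliminator): to prove C x y p for every
-- path p, it suffices to treat the reflexivity case.
def Path.elim {A : Type} {C : (x y : A) → Path x y → Type}
    (c : (a : A) → C a a (Path.refl a)) :
    (x y : A) → (p : Path x y) → C x y p
  | _, _, Path.refl a => c a
```

In HoTT one then adds, on top of rules like these, the univalence axiom; the references above spell out the full list of rules.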
|
{
"source": [
"https://mathoverflow.net/questions/373849",
"https://mathoverflow.net",
"https://mathoverflow.net/users/124549/"
]
}
|
374,089 |
Is there a generally available (commercial or not) complete implementation of the Risch algorithm for determining whether an elementary function has an elementary antiderivative? The Wikipedia article on symbolic integration claims that the general case of the Risch algorithm was solved and implemented in Axiom by Manuel Bronstein, and an answer to another MO question says the same thing. However, I have some doubts, based on the following comment by Manuel Bronstein himself on the USENET newsgroup sci.math.symbolic on September 5, 2003: "If Axiom returns an unevaluated integral, then it has proven that no elementary antiderivative exists. There are however some cases where Axiom can return an error message saying that you've hit an unimplemented branch of the algorithm, in which case it cannot conclude. So Richard was right in pointing out that the Risch algorithm is not fully implemented there either. Axiom is unique in making the difference between unimplemented branches and proofs of non-integrability, and also in actually proving the algebraic independence of the building blocks of the integrand before concluding nonintegrability (others typically assume this independence after performing some heuristic dependence checking)." Bronstein unfortunately passed away on June 6, 2005 . It is possible that he completed the implementation before he died, but I haven't been able to confirm that. I do know that Bronstein never managed to finish his intended book on the integration of algebraic functions. [ EDIT: As a further check, I emailed Barry Trager. He confirmed that the implementation that he and Bronstein worked on was not complete. He did not know much about other implementations but was not aware of any complete implementations.] I have access to Maple 2018, and it doesn't seem to have a complete implementation either. A useful test case is the following integral, taken from the (apparently unpublished) paper Trager's algorithm for the integration of algebraic functions revisited by Daniel Schultz: $$\int \frac{29x^2+18x-3}{\sqrt{x^6+4x^5+6x^4-12x^3+33x^2-16x}}\,dx$$ Schultz explicitly provides an elementary antiderivative in his paper, but Maple 2018 returns the integral unevaluated.
|
No computer algebra system implements a complete decision process for the integration of mixed transcendental and algebraic functions. The integral from the excellent paper of Schultz may be solved by Maple if you convert the integrand to RootOf notation (why this is not done internally in Maple is an interesting question): int(convert((29*x^2+18*x-3)/(x^6+4*x^5+6*x^4-12*x^3+33*x^2-16*x)^(1/2),RootOf),x); My experiments suggest Maple has the best implementation of the Risch-Trager-Bronstein algorithm for the integration of purely algebraic integrals in terms of elementary functions (ref: table 1, section 3 of Sam Blake, A Simple Method for Computing Some Pseudo-Elliptic Integrals in Terms of Elementary Functions , arXiv: 2004.04910 ). However, Maple's implementation does not integrate expressions containing parameters or nested radicals (both of which have some support in AXIOM and FriCAS). It would seem that some significant progress has been made in computing the logarithmic part of a mixed transcendental-algebraic integral by Miller [1]. Though, as far as I know, no computer algebra system has implemented his algorithm. It is also not clear if Miller's algorithm can deal with parameters; for example, the Risch-Trager-Bronstein algorithm has difficulties with the following pseudo-elliptic integral $$\int\frac{\left(p x^2-q\right) \left(p x^2-x+q\right)dx}{x \left(p x^2+2 x+q\right) \sqrt{2 p^2x^4+2 p x^3+(4 p q+1) x^2+2 q x+2 q^2}} = - \frac{1}{\sqrt{2}}\log (x) + \frac{1}{\sqrt{2}}\log \left(\sqrt{2} y +2 p x^2+x+2q\right) - \frac{3}{\sqrt{5}}\tanh ^{-1}\left(\frac{\sqrt{5} y}{3 p x^2+3 q+x}\right),$$ where $y=\sqrt{2 p^2 x^4+2 p x^3+(4 pq+1)x^2+2 q x+2 q^2}$ . My heuristic in the previously-linked paper computes this integral quickly with the substitution $u=\frac{px^2+q}{p x}$ .
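For what it's worth, the displayed identity can be sanity-checked numerically; the following sketch (my own, with the arbitrary sample point $p=q=1$, $x_0=2$) compares a central finite difference of the stated antiderivative against the integrand. Since the argument of $\tanh^{-1}$ exceeds $1$ at this point, the code differentiates its real part $\frac12\log|(1+v)/(1-v)|$, which has the same derivative $v'/(1-v^2)$:

```python
import math

p, q = 1.0, 1.0

def y(x):
    return math.sqrt(2*p**2*x**4 + 2*p*x**3 + (4*p*q + 1)*x**2
                     + 2*q*x + 2*q**2)

def integrand(x):
    return ((p*x**2 - q) * (p*x**2 - x + q)
            / (x * (p*x**2 + 2*x + q) * y(x)))

def atanh_re(v):
    # real part of artanh, valid for |v| != 1
    return 0.5 * math.log(abs((1 + v) / (1 - v)))

def F(x):
    # the claimed elementary antiderivative
    return (-math.log(x) / math.sqrt(2)
            + math.log(math.sqrt(2)*y(x) + 2*p*x**2 + x + 2*q) / math.sqrt(2)
            - 3 / math.sqrt(5)
              * atanh_re(math.sqrt(5) * y(x) / (3*p*x**2 + 3*q + x)))

x0, h = 2.0, 1e-6
deriv = (F(x0 + h) - F(x0 - h)) / (2 * h)   # central finite difference
assert abs(deriv - integrand(x0)) < 1e-6
```

This is of course only a spot check at one point, not a proof of the identity.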
In regards to the mixed algebraic-transcendental case of the Risch-Trager-Bronstein algorithm, an integral which cannot be solved with Maple, Mathematica, AXIOM or FriCAS (and possibly other CAS) is $$\int \frac{\left(\sqrt{x}+1\right) \left(e^{2x \sqrt{x}} -a\right) \sqrt{a^2+2 a x e^{2 \sqrt{x}} +cx e^{2 \sqrt{x}} +x^2 e^{4 \sqrt{x}}}}{x \sqrt{x}e^{\sqrt{x}} \left(a+x e^{2 \sqrt{x}} \right)} dx.$$ This integral is interesting as it returns two distinct messages from AXIOM and FriCAS suggesting their respective implementations are incomplete. FriCAS returns (1) -> integrate(((-a+exp(2*x^(1/2))*x)*x^(-3/2)*(1+x^(1/2))*(a^2+2*a*exp(2*x^(1/2))*x+c*exp(2*x^(1/2))*x+exp(4*x^(1/2))*x^2)^(1/2))/(exp(x^(1/2))*(a+exp(2*x^(1/2))*x)),x)
>> Error detected within library code:
integrate: implementation incomplete (has polynomial part) While AXIOM returns (1) -> integrate(((-a+exp(2*x^(1/2))*x)*x^(-3/2)*(1+x^(1/2))*(a^2+2*a*exp(2*x^(1/2))*x+c*exp(2*x^(1/2))*x+exp(4*x^(1/2))*x^2)^(1/2))/(exp(x^(1/2))*(a+exp(2*x^(1/2))*x)),x)
>> Error detected within library code:
integrate: implementation incomplete (constant residues) [1] Miller, B. (2012). “ On the Integration of Elementary Functions: Computing the Logarithmic Part ”. Thesis (Ph.D.) Texas Tech University, Dept. of Mathematics and Statistics.
|
{
"source": [
"https://mathoverflow.net/questions/374089",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3106/"
]
}
|
374,102 |
I don't know if it is suitable for MathOverflow; if not, please direct it to a suitable site. I don't understand the following: I find that there are many ways a graph is associated with an algebraic structure, namely the zero divisor graph ( Anderson and Livingston - The zero-divisor graph of a commutative ring ), the non-commuting graph ( Abdollahi, Akbari, and Maimani - Non-commuting graph of a group ) and many others. All these papers receive hundreds of citations, which means many people work in this field. I read the papers; they basically try to find the properties of the associated graph from the algebraic structure, namely when it is connected, complete, or planar, its girth, etc. My questions are: We already have a list of many unsolved problems in Abstract Algebra and Graph Theory, so why do we mix the two topics in order to get more problems? It is evident that if we just associate a graph with an algebraic structure then it is going to give us new problems like finding the structure of the graph, because we just have a new graph. Are we able to solve any existing problems in group theory or ring theory by associating a suitable graph structure? Unfortunately I could not find that in any of the papers. Can someone show me by giving an example of a problem in group theory or ring theory which can be solved by associating a suitable graph structure? For example suppose I take the ring $(\mathbb Z_n,+,.)$ , i.e. the ring of integers modulo $n$ . What unsolved problems about $\mathbb Z_n$ can we solve by associating the zero divisor graph to it? NOTE: I got some answers/comments where people said that we study those graphs because we are curious and find them interesting. I am not sure that this is how mathematics works: every subject developed because it had a certain motivation. So I don't think the reason "mathematicians are curious about it, so they study it" stands. As a matter of fact, I am looking for the reason why people study this field of algebraic graph theory.
|
The following answer basically involves things I learned about from others in a conversation on the topic of this question, which I have heard voiced many times and is a reasonable question. The paper Anisotropic groups of type $A_n$ and the commuting graph of finite simple groups by Yoav Segev and Gary M. Seitz uses the commuting graph of a finite simple group to make progress on the Margulis-Platonov conjecture. This earlier Annals paper of Segev On finite homomorphic images of the multiplicative group of a division algebra also uses commuting graphs to solve known conjectures. Of course the graph is not the only thing used and I am not enough of an expert on the subject to say how central the graphs are to the paper. So the commuting graph of a finite group definitely came up naturally. I am unaware of similar ring theoretic examples. I am also unaware of how planarity and other graph theoretic properties of such graphs play a role in any applications. Let me also note the nice paper of Peter Cameron The power graph of a finite group, II which shows that groups with isomorphic power graphs have the same number of elements of each order. This wasn't motivated from outside the theory but I think is still a sign the graph is relevant.
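To make the object itself concrete, here is a toy computation (mine, not from the cited papers) of the commuting graph of the smallest nonabelian group $S_3$: vertices are the noncentral elements, and edges join commuting pairs.

```python
from itertools import permutations

def compose(s, t):
    # (s ∘ t)(i) = s(t(i)), with permutations stored as tuples
    return tuple(s[i] for i in t)

G = list(permutations(range(3)))          # the 6 elements of S_3
e = (0, 1, 2)

center = [g for g in G if all(compose(g, h) == compose(h, g) for h in G)]
assert center == [e]                      # S_3 has trivial center

vertices = [g for g in G if g not in center]
edges = {(g, h) for g in vertices for h in vertices
         if g < h and compose(g, h) == compose(h, g)}

# The only noncentral commuting pair is the two 3-cycles, which
# generate the same cyclic subgroup of order 3.
assert len(vertices) == 5 and len(edges) == 1
```

For $S_3$ the commuting graph is thus five vertices with a single edge (between the two 3-cycles); the papers above study such graphs for far larger simple groups.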
|
{
"source": [
"https://mathoverflow.net/questions/374102",
"https://mathoverflow.net",
"https://mathoverflow.net/users/141429/"
]
}
|
374,180 |
Suppose we have a function $f(x_1 ,x_2 ,x_3 ,x_4).$ We know that we can factor it in two ways as $f(x_1 ,x_2 ,x_3 ,x_4)=\phi_1 (x_1 ,x_2 )\phi_2(x_3 ,x_4 )=\psi_1 (x_1,x_3)\psi_2(x_2,x_4)$ Show that we can completely factor the function as: $f(x_1 ,x_2 ,x_3 ,x_4)=\varphi_1(x_1)\varphi_2(x_2)\varphi_3(x_3)\varphi_4(x_4).$ I stumbled a little on this elementary problem, as the proof is not as immediate as I thought, but eventually I could prove it. Here the overlap (common refinement) of the partitions {{1,2},{3,4}} and {{1,3},{2,4}} is {{1},{2},{3},{4}}, and indeed satisfying the first two partitions implies that we can factor by the overlap of both partitions. I wonder if there is a general statement/theory of this.
|
Here is a fairly straightforward proof which also proves various generalizations of your problem. Choose $c,d$ such that $\phi_2(c,d) \neq 0$ . If no such $c,d$ exist, then $f$ is identically $0$ and can be completely factored trivially. Now, $$\phi_1(x_1, x_2)=\psi_1(x_1, c)\psi_2(x_2, d) \phi_2(c,d)^{-1},$$ for all $x_1$ , $x_2$ . Similarly, choosing $a$ , $b$ such that $\phi_1(a,b) \neq 0$ , we have $$\phi_2(x_3, x_4)=\psi_1(a, x_3)\psi_2(b, x_4) \phi_1(a,b)^{-1},$$ for all $x_3$ , $x_4$ . Thus, $$f(x_1 ,x_2 ,x_3 ,x_4)=\phi_1(a,b)^{-1}\phi_2(c,d)^{-1}\psi_1(x_1, c)\psi_2(x_2, d) \psi_1(a, x_3)\psi_2(b, x_4), $$ for all $x_1,x_2,x_3,x_4$ . $\Box$ The same proof also proves the following generalization. Given a partition $\alpha$ of $[n]$ , we say that $f(x_1, \dotsc, x_n)$ factors with respect to $\alpha$ if for each $A \in \alpha$ there exists a function $f_A$ (which only depends on the variables $x_i$ for $i \in A$ ) such that $f(x_1, \dotsc, x_n)=\prod_{A \in \alpha} f_A$ . Given two partitions $\alpha$ and $\beta$ of $[n]$ , $\alpha \wedge \beta$ is the partition of $[n]$ whose sets are the non-empty sets of the form $A \cap B$ for $A \in \alpha$ and $B \in \beta$ . Lemma. Let $\alpha$ and $\beta$ be partitions of $[n]$ . If $f(x_1, \dotsc, x_n)$ factors with respect to both $\alpha$ and $\beta$ , then $f(x_1, \dotsc, x_n)$ factors with respect to $\alpha \wedge \beta$ . Note that I am only using the fact that the function takes values in some field or some group. I am not sure if the result still holds if inverses do not exist (this was asked by Richard Stanley in the comments below). Update. The above lemma does not always hold for monoids, as shown by Harry West in an answer to Functions over monoids which factor in two different ways .
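The construction in the proof is completely explicit and can be sanity-checked on a toy instance; the functions and domain below are my own illustrative choices, not from the question:

```python
from fractions import Fraction
from itertools import product

# an explicitly factorable f, and its two partial factorizations
def f(x1, x2, x3, x4): return (x1 + 1) * (x2 + 2) * (x3 + 3) * (x4 + 4)
def phi1(x1, x2): return (x1 + 1) * (x2 + 2)
def phi2(x3, x4): return (x3 + 3) * (x4 + 4)
def psi1(x1, x3): return (x1 + 1) * (x3 + 3)
def psi2(x2, x4): return (x2 + 2) * (x4 + 4)

a, b = 0, 0        # phi1(a, b) = 2  != 0
c, d = 0, 0        # phi2(c, d) = 12 != 0

# the four single-variable factors produced by the proof
def g1(x1): return Fraction(psi1(x1, c), phi1(a, b))
def g2(x2): return psi2(x2, d)
def g3(x3): return Fraction(psi1(a, x3), phi2(c, d))
def g4(x4): return psi2(b, x4)

D = range(3)
assert all(f(x1, x2, x3, x4) == g1(x1) * g2(x2) * g3(x3) * g4(x4)
           for x1, x2, x3, x4 in product(D, repeat=4))
```

The exact rational arithmetic makes the check of the complete factorization $f=\varphi_1\varphi_2\varphi_3\varphi_4$ literal rather than approximate.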
|
{
"source": [
"https://mathoverflow.net/questions/374180",
"https://mathoverflow.net",
"https://mathoverflow.net/users/116621/"
]
}
|
374,306 |
I wonder whether the following property holds true: For every real symmetric matrix $S$ , which is positive in both senses: $$\forall x\in{\mathbb R}^n,\,x^TSx\ge0,\qquad\forall 1\le i,j\le n,\,s_{ij}\ge0,$$ then $\sqrt S$ (the unique square root among positive semi-definite symmetric matrices) is positive in both senses too. In other words, it is entrywise non-negative. At least, this is true if $n=2$ . By continuity of $S\mapsto\sqrt S$ , we may assume that $S$ is positive definite. Denoting $$\sqrt S=\begin{pmatrix} a & b \\ b & c \end{pmatrix},$$ we do have $a,c>0$ . Because $s_{12}=b(a+c)$ is $\ge0$ , we infer $b\ge0$ .
|
No. If $$A = \begin{pmatrix}10&-1&5\\-1&10&5\\5&5&10\end{pmatrix},$$ then $A$ is positive definite but does not have all entries positive, while $$
A^2 = \begin{pmatrix}126&5&95\\5&126&95\\95&95&150\end{pmatrix}
$$ is positive in both senses.
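The counterexample is easy to verify in exact integer arithmetic; the following sketch (written for this answer, using Sylvester's criterion for positive definiteness) checks that $S=A^2$ is positive in both senses while $A$, its unique positive semidefinite square root, has a negative entry:

```python
A = [[10, -1, 5],
     [-1, 10, 5],
     [ 5,  5, 10]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det(M):
    # cofactor expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

S = matmul(A, A)
assert S == [[126, 5, 95], [5, 126, 95], [95, 95, 150]]
assert all(entry >= 0 for row in S for entry in row)   # S is entrywise >= 0

# Sylvester's criterion: A is positive definite (so S = A^2 is too),
# hence A is the unique positive semidefinite square root of S ...
minors = [det([row[:k] for row in A[:k]]) for k in (1, 2, 3)]
assert minors == [10, 99, 440]

# ... yet sqrt(S) = A has a negative entry.
assert A[0][1] < 0
```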
|
{
"source": [
"https://mathoverflow.net/questions/374306",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8799/"
]
}
|
374,732 |
If $V \hookrightarrow W$ and $W \hookrightarrow V$ are injective linear maps, then is there an isomorphism $V \cong W$ ? If we assume the axiom of choice, the answer is yes : use the fact that every linearly independent set can be extended to a basis and apply the usual Schroeder-Bernstein theorem . If we don't assume the axiom of choice, and we work in ZF, say (or some other formalism with excluded middle), then vector spaces don't necessarily have bases (in fact, Blass showed that there must be a vector space without a basis over some field), so we can't use the same proof strategy. Nevertheless, there's room for optimism, since Schroeder-Bernstein still holds for sets in ZF. So one might hope that it also holds for vector spaces in ZF. Question: Work in ZF (or some other formalism with excluded middle but without choice). If $V \hookrightarrow W$ and $W \hookrightarrow V$ are injective linear maps of vector spaces over a field $k$ , then is there an isomorphism $V \cong W$ ? Variation 1: What if we assume that $k$ is finite, or even that $k = \mathbb F_p$ for a prime $p$ ? Variation 2: What if we assume that $V$ is a direct summand of $W$ and vice versa? The following consequence of Bumby's theorem appears to be constructive: If $k$ is a ring and every $k$ -module is injective, then $k$ -modules satisfy Schroeder-Bernstein. But the condition "every module over a field is injective" sounds pretty choice-ey to me. I suppose it's worth noting, though: Variation 3: Does "Every vector space over any field is injective" imply choice? How about "Every vector space over $\mathbb F_p$ is injective"?
|
Without the axiom of choice, it is possible that there is a vector space $U\neq 0$ over a field $k$ with no nonzero linear functionals. Let $V$ be the direct sum of countably many copies of $U$ , and $W=V\oplus k$ . Then each of $V$ and $W$ embeds in the other, but they are not isomorphic, since $V$ doesn’t have any nonzero linear functionals, but $W$ does. I don't think there's any restriction on the field $k$ , so this answers Variation 1 as well.
|
{
"source": [
"https://mathoverflow.net/questions/374732",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2362/"
]
}
|
374,737 |
Let $\Omega$ be a domain in $\mathbb{C}^n$ . Let $\mathbb{D}$ denote the open unit disc in $\mathbb{C}$ . Let $C_b(\Omega)$ and $C_b(\mathbb{D})$ denote the space of all bounded continuous complex valued functions on $\Omega$ and $\mathbb{D}$ respectively. Let $T:C_b(\Omega)\longrightarrow C_b(\mathbb{D})$ be a positive linear operator which is unital. Suppose $f\in C_b(\Omega)$ be such that $f(z)\neq 0$ for any $z\in \Omega$ . Will it imply that $Tf(y)\neq 0$ for every $y\in\mathbb{D}$ ? If not then under what additional conditions will $T$ satisfy this property?
|
|
{
"source": [
"https://mathoverflow.net/questions/374737",
"https://mathoverflow.net",
"https://mathoverflow.net/users/153432/"
]
}
|
374,979 |
I have been wondering if there are many cases of an author having published two (or more?) papers in the same issue of the same journal. I vaguely recall having seen one or two cases like this, maybe old papers, but cannot remember them vividly. I have the impression such a situation would make sense should the two papers be on the same topic, say, one is sort of a (substantial) continuation of the other, for instance (given the fact that in mathematics there is some pressure against publishing too often in the same journal). I am of course asking this question for papers having a sole author (or maybe the same set of authors).
|
Roger Howe famously filled an entire issue of Pacific Journal of Mathematics ( volume 73, no.2 , 1977) with 8 different papers. (Also, Euler...)
|
{
"source": [
"https://mathoverflow.net/questions/374979",
"https://mathoverflow.net",
"https://mathoverflow.net/users/15155/"
]
}
|
375,647 |
I am reading a nice booklet (in Italian) containing the exchange of letters that André and Simone Weil had in 1940, when André was in Rouen prison for having refused to perform his military duties. Of course, among these letters, there is the famous one where André describes his mathematical work to his sister, whose English translation was published in 2005 in the Notices AMS. Referring to this translation, at page 340 we can read: [...] this bridge exists; it is the theory of the field of algebraic functions over a finite field of constants (that is to say, a finite number of elements: also said to be a Galois field, or earlier "Galois imaginaries" because Galois first defined them and studied them; they are the algebraic extensions of a field with p elements formed by the numbers 0, 1, 2, ..., p − 1 where one calculates with them modulo p, p = prime number). They appear already in Dedekind. A young student in Göttingen, killed in 1914 or 1915, studied, in his dissertation that appeared in 1919 (work done entirely on his own, says his teacher Landau), zeta functions for certain of these fields, and showed that the ordinary methods of the theory of algebraic numbers applied to them. My Italian book contains a note at this point, saying Di questo "giovane studente" non abbiamo altre notizie , which can be translated as We have no further information about this "young student". This seems a bit strange to me: if an important result on zeta functions is really due to this student, his name should be known, at least among the experts in the field. So let me ask the following: Question. Who is the "young student in Göttingen, killed in 1914 or 1915" André Weil is talking about?
|
This must have been Heinrich Kornblum (1890-1914). [note by E. Landau in German, my translation] $^1$ The author, born in Wohlau on August 23, 1890, had before the war independently made the discovery that Dirichlet's classic proof of the theorem of prime numbers in an arithmetic progression (along with the later elementary reasons for the non-vanishing of the known series) had an analogue in the theory of prime functions in residue classes with a double modulus ($p,M$). His doctoral dissertation on this self-chosen topic was already essentially finished when, as a war volunteer, he fell in October 1914 at Poelkapelle. Only recently I received from his estate the manuscript (known to me since 1914). I hereby publish the most beautiful and interesting parts. Kornblum's approach is characterized by high elegance and shows that science has lost in him a very promising researcher.
|
{
"source": [
"https://mathoverflow.net/questions/375647",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7460/"
]
}
|
375,759 |
In my paper On the optimal error bound for the first step in the method of cyclic alternating projections , I defined functions $f_n:[0,1]\to\mathbb{R}$ , $n\geqslant 2$ , by $$
f_n(c)=\sup\{\|P_n\dotsm P_2 P_1-P_0\|\,|\,c_F(H_1,\dotsc,H_n)\leqslant c\},
$$ where (1) the supremum is taken over all complex Hilbert spaces $H$ and systems of closed subspaces $H_1,\dotsc,H_n$ of $H$ such that the Friedrichs number $c_F(H_1,\dotsc,H_n)$ is less than or equal to $c$ . The Friedrichs number is a quantitative characteristic of a system of closed subspaces, and $c$ is a given number from $[0,1]$ . (2) $P_i$ is the orthogonal projection onto $H_i$ , $i=1,2,\dotsc,n$ , and $P_0$ is the orthogonal projection onto the subspace $H_1\cap H_2\cap\dotsb\cap H_n$ . Now I have some doubts about the validity of this definition. Namely, I know that all sets do not constitute a set and, if I understand things right, all Hilbert spaces do not constitute a set (even all one-dimensional Hilbert spaces do not constitute a set). So, can I take the supremum? Please help me; I will be very grateful for any comments, remarks, and answers. In response to Mike Miller's comment : Unfortunately, I also know almost nothing about set theory. Yes, I understand that in fact I take the supremum of a "set" of real numbers, namely, of the "set" $A=A_n(c)$ which consists of all $a\in\mathbb{R}$ for which there exist a complex Hilbert space $H$ and a system of closed subspaces $H_1,\dotsc,H_n$ of $H$ such that $c_F(H_1,\dotsc,H_n)\leqslant c$ and $\|P_n\dotsm P_2 P_1-P_0\|=a$ . But I do not understand why the "set" $A$ is a set.
|
It is true that we cannot use an arbitrary property $^1$ $P$ to define a set, in the sense that the collection of all things with property $P$ need not be a set. However, the axiom (scheme) of separation says that we can use an arbitrary property to define a subset : whenever $X$ is a set, the collection $\{x\in X: P(x)\}$ is also a set. So just take $X=\mathbb{R}$ and $P(x)$ = "There is a Hilbert space such that …". Per Separation, we get that your collection of reals $A$ is in fact a set. And we may now take its supremum. Note that this illustrates an important point about how $\mathsf{ZFC}$ (and its variants) get around Russell's paradox: It's size , $^{2}$ not complexity of definition , which controls whether or not a collection is a set or a proper class in $\mathsf{ZFC}$ . $^{3}$ Part of the success of $\mathsf{ZFC}$ is due to the ease with which we can in fact verify that something is a set. The only time you'll run into trouble is when you want to form a set which isn't a priori part of some bigger thing you already know is a set; here we may have to think a bit (although the axiom (scheme) of replacement similarly makes things usually very easy, once it's mastered). EDIT: Per the comments below , let me sketch how to define "complete metric space" in the language of set theory. As you'll see, even the sketch is quite lengthy; if there's a particular point you'd like further information on, I suggest asking a separate question at MSE. Here's the sequence of definitions we need to whip up: We need to talk about ordered pairs, functions, and Cartesian products. We need to build $\mathbb{N}$ , so that we can build $\mathbb{Q}_{\ge 0}$ , so that we can build $\mathbb{R}_{\ge 0}$ ; along the way we'll need the notions of equivalence relation and equivalence class, of course. 
While the previous two points will be enough to define metric spaces ("An ordered pair $(X,\delta)$ where $X$ is a set and $\delta:X^2\rightarrow\mathbb{R}$ such that [stuff]"), to define complete metric spaces we'll also need the notions of infinite sequence and equivalence relation/class . The first bulletpoint is standard set-theoretic fare which you'll see treated in the beginning of any text on set theory, so I'll skip it; if you're interested, though, you can start with the wiki page on ordered pairs . The third is really the first in disguise: an infinite sequence is just a function with domain $\mathbb{N}$ . So all the "meat" is in bulletpoint 2. We proceed as follows: First, we'll use the von Neumann approach to $\mathbb{N}$ : an ordinal is a hereditarily transitive set, ordinals are ordered by $\in$ , and the finite ordinals are the ordinals which do not contain any (nonempty) limit ordinal. We then identify $\mathbb{N}$ with the finite ordinals — more jargonily, $\mathbb{N}=\omega$ . We define addition and multiplication of ordinals via transfinite recursion as usual. Next, we consider the equivalence relation $\sim$ on $\omega\times(\omega\setminus\{0\})$ as follows: $$\langle a,b\rangle\sim\langle c,d\rangle \iff ad=bc,$$ and we let $\mathbb{Q}_{\ge0}$ be the set of $\sim$ -classes. We lift the ordering on $\omega$ to $\mathbb{Q}_{\ge 0}$ in the obvious way. Now we're ready to define $\mathbb{R}_{\ge 0}$ , via Dedekind cuts: an element of $\mathbb{R}_{\ge 0}$ is a nonempty, downwards-closed, bounded-above subset of $\mathbb{Q}_{\ge0}$ . The ordering on $\mathbb{R}_{\ge 0}$ is just $\subseteq$ . With all this in hand, the naive definitions of metric space, Cauchy sequence, and complete metric space translate into the language of set theory directly (if tediously). 
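The first step of bulletpoint 2 can even be mimicked in miniature; below is a toy model (illustrative only — the real construction lives inside ZF, not in a programming language) of the finite von Neumann ordinals as hereditarily finite sets, with the order given literally by membership:

```python
def succ(n):
    # von Neumann successor: n ∪ {n}
    return n | frozenset([n])

zero = frozenset()
ordinals = [zero]
for _ in range(5):
    ordinals.append(succ(ordinals[-1]))

# the ordinal n has exactly n elements, namely the ordinals below it,
# and the ordering is set membership
for k, n in enumerate(ordinals):
    assert len(n) == k
    assert all(m in n for m in ordinals[:k])

# hereditary transitivity: every element of an ordinal is also a subset of it
assert all(m <= n for n in ordinals for m in n)
```

Using `frozenset` (rather than `set`) makes the sets hashable, so they can be members of other sets, exactly as in the cumulative hierarchy.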
The point is that all of this is first-order in set theory , with axioms like Powerset (which, despite what they mean intuitively, are indeed first-order) doing the heavy lifting needed to show that the objects we want actually exist at all. (For a bit more about the nuance of "first-order in set theory," see this recent answer of mine .) $^1$ Really I mean "first-order formula," but I don't want to get too much into the details. $^{2}$ Specifically, in a precise sense we have: a class is a proper class iff it surjects onto the class of ordinals. This is not the same as the principle of limitation of size , but it's of similar flavor. $^3$ I should observe that this isn't the only possible response to the need to distinguish between sets and proper classes: there are other set theories (e.g. $\mathsf{NF}$ , $\mathsf{GPK^+_\infty}$ , ...) which take the other approach. However, these theories make it harder to check whether something is in fact a set.
|
{
"source": [
"https://mathoverflow.net/questions/375759",
"https://mathoverflow.net",
"https://mathoverflow.net/users/48157/"
]
}
|
375,778 |
I've been obsessed with this one problem for many months now, and today is the sad day that I admit to myself I won't be able to solve it and I need your help. The problem is simple. We let $$\mathbb{E}_{n\in\mathbb{N}}[f(n)]:=\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}f(n)$$ denote the expected value of a function $f(n)$ (if it exists). This means that for some fixed choice of sequence $(a_n)_{n=1}^{\infty}$ the quantities $\mathbb{E}_{n\in\mathbb{N}}[a_{kn}]$ give the "average value of every $k$-th element". For example, $\mathbb{E}_{n\in\mathbb{N}}[a_{3n}]$ is the average of $a_{3},a_{6},a_{9}$, and so on. If all of these quantities exist, we call the sequence $a_n$ "sequentially summable". The problem is as follows: Show that for any bounded sequentially summable $a_n$ \begin{equation}\sum_{k=1}^{\infty}\frac{\lambda(k)\mathbb{E}_{n\in\mathbb{N}}[a_{kn}]}{k}=0\tag{1}\end{equation} where $\lambda(n)$ denotes the Liouville function. Moving the term $\mathbb{E}_{n\in\mathbb{N}}[a_n]$ to the other side, this gives an absolutely lovely formula for $\mathbb{E}_{n\in\mathbb{N}}[a_n]$ in terms of the $\mathbb{E}_{n\in\mathbb{N}}[a_{kn}]$ ( $k\geq 2$ ), which can be appreciated even by people who have done no mathematics. Similarly, I would expect that the conjecture holds with $\lambda(n)$ replaced by its "twin" $\mu(n)$ , the Möbius function. Throughout the rest of this question, I will give all of the partial results and a general outline of how they are obtained.
This first partial result answers the question "why on earth would these values be 0???": Partial Result $\#1$ : For any bounded sequentially summable sequence $a_n$ , it holds that \begin{equation}\lim_{s\to1^+}\sum_{k=1}^{\infty}\frac{\lambda(k)\mathbb{E}_{n\in\mathbb{N}}[a_{kn}]}{k^s}=\lim_{s\to1^+}\sum_{k=1}^{\infty}\frac{\mu(k)\mathbb{E}_{n\in\mathbb{N}}[a_{kn}]}{k^{s}}=0\tag{2}\end{equation} and thus if the sum in (1) converges it must converge to $0$. This result is obtained by inverting the summation order and exploiting the fact that $$\sum_{n=1}^{\infty}\frac{\mu(n)}{n^s}=\prod_{p}\left(1-\frac{1}{p^s}\right)>0$$ which means that the triangle inequality does not "lose" any sign cancelation. The following partial result is much stronger: Partial Result $\#2$ : There exists an absolute constant $c_0$ such that for any sequentially summable sequence $a_n$ and $N>0$ \begin{equation}\left|\sum_{k=1}^N\frac{\lambda(k)\mathbb{E}_{n\in\mathbb{N}}[a_{kn}]}{k}\right|<c_0m\tag{3}\end{equation} where $$m^2=\limsup_{N\to\infty}\frac{1}{N}\sum_{n=1}^Na^2_{n}.$$ An outline of the proof of this result is given in this Math.SE question, but in essence the result comes from the fact that the boundedness of \begin{equation}\left|\sum_{k=1}^N\frac{f(k)\mathbb{E}_{n\in\mathbb{N}}[a_{kn}]}{k}\right|\end{equation} for some function $f(k)$ is essentially equivalent to tight enough bounds on the partial sums $$\sum_{k=1}^{N}\frac{f(mk)}{k}$$ for all values of $m<N$ , which $\mu(n)$ definitely has. The boundedness in Partial Result $\#2$ can be translated to bounds when $\mu(n)$ is replaced by $\lambda(n)$ by exploiting the identity $\lambda(n)=\sum_{d^2|n}\mu\left(\frac{n}{d^2}\right)$ and the relative uniformity of the bound w.r.t. our choice of $a_n$ . Here is the next partial result: Partial Result $\#3$ : If, for any bounded sequentially summable sequence $a_n$ , we have that \begin{equation}
\lim_{N\to\infty}\frac{1}{N}\sum_{k=1}^{N}\lambda(k)\mathbb{E}_{n\in\mathbb{N}}[a_{kn}]=0
\end{equation} then our conjecture (1) holds. This is a very important partial result since it is often much easier to show that the average of coefficients is zero than to show that they converge when summed with a factor of $\frac{1}{k}$ . The proof works in a "smoothing" manner, where Partial Result $\#3$ can be considered as control over short intervals and Partial Result $\#2$ as control over large intervals, which we can bound as error. At this point I will note that the condition that $a_n$ be bounded is quite important. For example, if $a_n=\Lambda(n)$ is the von Mangoldt function then by the PNT $\mathbb{E}_{n\in\mathbb{N}}[a_n]=1$ but, since $kn$ will have "few" prime powers for $k>1$ , we have that $\mathbb{E}_{n\in\mathbb{N}}[a_{kn}]=0$ and thus our sum converges, but not to $0$ . I add as well that $$\sum_{k=1}^{N}\frac{\mu(k)}{k}\cdot\frac{k}{N}\sum_{n=1}^{[N/k]}a_{kn}=\frac{a_1}{N}$$ due to simple inversion of summation order, and so since $\frac{k}{N}\approx\frac{1}{[N/k]}$ and $$\frac{1}{[N/k]}\sum_{n=1}^{[N/k]}a_{kn}\approx \mathbb{E}_{n\in\mathbb{N}}[a_{kn}]$$ we get further intuition for the result.
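That inversion of summation order can be confirmed exactly in a few lines; the sketch below (my own, with a random rational sequence) uses the weight $\frac{\mu(k)}{N}$, i.e. the normalization $\frac{k}{N}$ in place of $\frac{1}{[N/k]}$, for which the interchange of summation is exact:

```python
from fractions import Fraction
import random

def mobius(n):
    # Möbius function by trial division
    result, d, m = 1, 2, n
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:
                return 0          # square factor
            result = -result
        d += 1
    if m > 1:
        result = -result          # leftover prime factor
    return result

random.seed(0)
N = 50
a = [None] + [Fraction(random.randint(-5, 5)) for _ in range(N)]  # a[1..N]

total = sum(Fraction(mobius(k), N) * sum(a[k * n] for n in range(1, N // k + 1))
            for k in range(1, N + 1))
assert total == a[1] / N
```

Exchanging the sums turns the double sum into $\frac1N\sum_{m\le N} a_m \sum_{k\mid m}\mu(k)$, and the inner sum vanishes except at $m=1$.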
|
This identity is true, though somewhat tricky to prove and the infinite series here might only converge conditionally rather than absolutely. The key lemma is Lemma 1 (Fourier representation of averages along homogeneous arithmetic progressions). Let $a_n$ be a bounded sequentially summable sequence. Then there exist complex numbers $c_\xi$ for each $\xi \in {\mathbb Q}/{\mathbb Z}$ such that $\sum_{\xi \in {\mathbb Q}/{\mathbb Z}} |c_\xi|^2 \ll 1$ and $$ {\mathbb E}_{n \in {\mathbb N}} a_{kn} = \sum_{q|k; 1 \leq b < q; (b,q)=1} c_{b/q}.$$ for each natural number $k$ . Proof We first use the Furstenberg correspondence principle to move things to a compact abelian group (specifically, the profinite integers $\hat {\mathbb Z}$ ). It is convenient to introduce a generalised limit functional $\tilde \lim_{N \to \infty} \colon \ell^\infty({\mathbb N}) \to {\mathbb C}$ that is a continuous linear functional extending the usual limit functional (this can be created by the Hahn-Banach theorem or by using an ultrafilter). On every cyclic group ${\mathbb Z}/q{\mathbb Z}$ we then have a complex bounded density measure $\mu_q$ defined by $$ \mu_q(\{b \hbox{ mod } q \}) := \frac{1}{q} \tilde \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N a_{b+nq}$$ for any integer $b$ (the particular choice of coset representative $b$ is not important). One can check that $\mu_q$ pushes forward to $\mu_{q'}$ under the quotient map from ${\mathbb Z}/q{\mathbb Z}$ to ${\mathbb Z}/q'{\mathbb Z}$ whenever $q'$ divides $q$ , hence by the Kolmogorov extension theorem, there is a complex bounded density measure $\mu$ on the profinite integers $\hat {\mathbb Z}$ that pushes forward to all of the $\mu_q$ , in particular $$ \mu( b + q \hat {\mathbb Z} ) = \frac{1}{q} \tilde \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N a_{b+nq}$$ for any residue class $b \hbox{ mod } q$ . 
Specialising to $b=0$ and using the sequential summability hypothesis we conclude $$ \mu( q \hat {\mathbb Z} ) = \frac{1}{q} {\mathbb E}_{n \in {\mathbb N}} a_{qn}.$$ Now we use the Fourier transform to move to frequency space. The Radon-Nikodym derivative of $\mu_q$ with respect to Haar probability measure on ${\mathbb Z}/q{\mathbb Z}$ is bounded, hence the Radon-Nikodym derivative of $\mu$ with respect to Haar probability measure $\mathrm{Haar}_{\hat {\mathbb Z}}$ on the compact abelian group $\hat {\mathbb Z}$ is bounded. By Fourier expansion and Plancherel's theorem on $\hat {\mathbb Z}$ (which has Pontryagin dual ${\mathbb Q}/{\mathbb Z}$ ) we conclude the Fourier expansion $$ \frac{d\mu}{d\mathrm{Haar}_{\hat {\mathbb Z}}}(x) = \sum_{\xi \in {\mathbb Q}/{\mathbb Z}} c_\xi e^{2\pi i x \xi} $$ (in an $L^2$ sense) where $x \xi \in {\mathbb R}/{\mathbb Z}$ is defined in the obvious fashion and the Fourier coefficients $c_\xi$ are square-summable. In particular (by Parseval or a suitable form of Poisson summation) we have $$ \mu( q \hat {\mathbb Z} ) = \frac{1}{q} \sum_{b \in {\mathbb Z}/q{\mathbb Z}} c_{b/q}$$ and thus we have a compact formula for the average values of the $a_{kn}$ in terms of the Fourier coefficients: $$ {\mathbb E}_{n \in {\mathbb N}} a_{kn} = \sum_{b \in {\mathbb Z}/k{\mathbb Z}} c_{b/k}.$$ Reducing the fractions to lowest terms we conclude that $$ {\mathbb E}_{n \in {\mathbb N}} a_{kn} = \sum_{q|k; 1 \leq b < q; (b,q)=1} c_{b/q}$$ as desired. $\Box$ Remark 2 One can avoid the use of generalised limits (and thus the axiom of choice or some slightly weaker version thereof) by establishing a truncated version of this lemma in which one only considers those natural numbers $k$ dividing some large modulus $Q$ , and replaces the averages ${\mathbb E}_{n \in {\mathbb N}}$ by ${\mathbb E}_{n \in [N]}$ for some $N$ much larger than $Q$ . Then one has a similar formula with errors that go to zero in the limit $N \to \infty$ (for $Q$ fixed). 
One can then either recover the full strength of Lemma 1 by any number of standard compactness arguments (e.g., Tychonoff, Banach-Alaoglu, Hahn-Banach, Arzela-Ascoli, or ultrafilters) or else just use the truncated version in the arguments below and manage all the error terms that appear. In particular it is possible to solve the problem without use of the axiom of choice. I leave these variations of the argument to the interested reader. $\diamond$ Remark 3 The Fourier coefficients $c_{b/q}$ can be given explicitly by the formula $$ c_{b/q} = \tilde \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N a_n e(-nb/q)$$ (the minus sign in the exponential $e(\theta) := e^{2\pi i \theta}$ is an arbitrary convention and may be omitted if desired). The desired representation then follows from the Fourier inversion formula in ${\mathbb Z}/k{\mathbb Z}$ , and the square-summability follows from the large sieve (e.g., equation (20) from this paper of Montgomery ; many other references exist for this). $\diamond$ Returning to the problem, we use the above lemma to write $$ \sum_{k=1}^N \frac{\lambda(k) {\mathbb E}_{n \in {\mathbb N}} a_{kn}}{k}
= \sum_{k=1}^N \frac{\lambda(k) \sum_{q|k; 1 \leq b < q; (b,q)=1} c_{b/q}}{k}$$ $$ = \sum_{q \leq N} \sum_{1 \leq b < q: (b,q)=1} c_{b/q} \sum_{l \leq N/q} \frac{\lambda(ql)}{ql}.$$ Note the cancellation present in the inner sum. To exploit this cancellation, let $\varepsilon>0$ be a small parameter. For $N$ large enough, we have from dominated convergence that $$ \sum_{\varepsilon N \leq q \leq N} \sum_{1 \leq b < q: (b,q)=1} |c_{b/q}|^2 \leq \varepsilon$$ and $$ \sum_{q \leq \varepsilon N} \sum_{1 \leq b < q: (b,q)=1} |c_{b/q}|^2 \ll 1$$ so by Cauchy-Schwarz one can bound the preceding expression in magnitude by $$ \ll \left(\sum_{q \leq \varepsilon N} \sum_{1 \leq b < q: (b,q)=1} \left|\sum_{l \leq N/q} \frac{\lambda(ql)}{ql}\right|^2 + \varepsilon \sum_{\varepsilon N \leq q \leq N} \sum_{1 \leq b < q: (b,q)=1} \left|\sum_{l \leq N/q} \frac{\lambda(ql)}{ql}\right|^2\right)^{1/2}.$$ There are at most $q$ values of $b$ for each $q$ , so this can be bounded by $$ \ll \left(\sum_{q \leq \varepsilon N} \frac{1}{q} \left|\sum_{l \leq N/q} \frac{\lambda(l)}{l}\right|^2 + \varepsilon \sum_{\varepsilon N \leq q \leq N} \frac{1}{q} \left|\sum_{l \leq N/q} \frac{\lambda(l)}{l}\right|^2\right)^{1/2}.$$ From the prime number theorem one has $$ \sum_{l \leq N/q} \frac{\lambda(l)}{l} \ll \log^{-10}(2+N/q)$$ (say). From this and some calculation one can bound the preceding expression by $$ (\log^{-19}(1/\varepsilon) + \varepsilon)^{1/2}$$ which goes to zero as $\varepsilon \to 0$ , giving the claim.
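The only analytic input in this last step is the PNT-strength decay of $\sum_{l \le x} \lambda(l)/l$ ; its slow drift toward zero is easy to observe numerically (the cutoff $10^5$ and the tolerance are arbitrary choices for illustration):

```python
def liouville(N):
    # lam[n] = (-1)^Omega(n) via a smallest-prime-factor sieve
    spf = list(range(N + 1))
    for i in range(2, int(N ** 0.5) + 1):
        if spf[i] == i:  # i is prime
            for j in range(i * i, N + 1, i):
                if spf[j] == j:
                    spf[j] = i
    lam = [0, 1]
    for n in range(2, N + 1):
        lam.append(-lam[n // spf[n]])
    return lam

N = 10 ** 5
lam = liouville(N)
partial = [0.0]
for n in range(1, N + 1):
    partial.append(partial[-1] + lam[n] / n)

# the partial sums creep toward 0; the full sum is 0 by the PNT,
# since sum lambda(n)/n^s = zeta(2s)/zeta(s) and zeta has a pole at s = 1
assert abs(partial[N]) < abs(partial[10])
assert abs(partial[N]) < 0.2
```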
|
{
"source": [
"https://mathoverflow.net/questions/375778",
"https://mathoverflow.net",
"https://mathoverflow.net/users/159298/"
]
}
|
375,998 |
This question was inspired by the following: https://math.stackexchange.com/questions/3882691/lfloor-xn-rfloor-lfloor-yn-rfloor-is-a-perfect-square Is there a real nonintegral $x>1$ s.t. $\lfloor x^n \rfloor$ is a perfect square for all positive integers $n$ ? I am asking because the question is interesting in and of itself, but also because the proof techniques should be interesting.
|
There is no such number. Suppose $\alpha>1$ is a real number such that $\lfloor \alpha^n \rfloor$ is a square for all $n\in {\Bbb N}$ . Put $\beta=\sqrt{\alpha}$ . Now for each $n$ we have $$
m^2 + 1 > \alpha^n \ge m^2
$$ for some integer $m$ , so that taking square-roots $$
m + \frac{1}{2m} > \beta^n \ge m.
$$ In other words $\beta^n$ has exponentially small fractional part, indeed at most $1/(2m) \approx 1/(2\beta^n)$ . A theorem of Pisot now states that $\beta$ must be a Pisot--Vijayaraghavan (or PV) number (see for example the Wikipedia page on PV numbers ). That is $\beta$ is an algebraic integer $>1$ such that all its Galois conjugates are $<1$ in absolute value. Suppose that $\beta$ has degree $k$ and that $\beta_1$ , $\ldots$ , $\beta_{k-1}$ are its Galois conjugates. Now for all $n$ , we must have $$
\beta^n + \beta_1^n + \ldots +\beta_{k-1}^{n} \in {\Bbb Z},
$$ and from our assumption it follows that for large $n$ $$
|\beta_1^n + \ldots +\beta_{k-1}^n |\le \frac{3}{4\beta^n}.
$$ Write each $\beta_j$ in polar coordinates as $r_j e^{2\pi i\theta_j}$ . Note that $$
\beta r_1 \cdots r_{k-1} \ge 1,
$$ since this is the absolute value of the norm of $\beta$ . Moreover by Dirichlet's theorem we may find arbitrarily large $n$ with $\Vert n\theta_j \Vert \le 1/10^6$ for all $1\le j\le k-1$ . Then, for such $n$ , \begin{align*}
Re(\beta_1^n + \ldots + \beta_{k-1}^n) &\ge 0.99 (r_1^n +\ldots +r_{k-1}^n) \ge 0.99 (k-1) (r_1\cdots r_{k-1})^{n/(k-1)} \\
&\ge 0.99 (k-1) \beta^{-n/(k-1)},
\end{align*} by AM-GM. That gives a contradiction. The argument also shows that if $\alpha^n$ is within a bounded distance of a square, then $\beta$ again is a PV number, and moreover its degree $k$ must be $2$ and its norm must be $1$ in size. In other words, we must have $\beta = (r+ \sqrt{r^2 \pm 4})/2$ for some natural number $r$ . This is in keeping with Jeremy Rouse's comment above.
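The first half of the argument, and the closing remark, can be illustrated numerically with a known PV number, say $\beta = 1+\sqrt 2$ , whose conjugate is $1-\sqrt 2\approx -0.414$ : the integers $s_n=\beta^n+(1-\sqrt2)^n$ satisfy $s_n=2s_{n-1}+s_{n-2}$ , so $\beta^n$ lies within $|1-\sqrt2|^n$ of an integer, and $\alpha=\beta^2=3+2\sqrt2$ has $\alpha^n$ within a bounded distance of the square $s_n^2$ , matching the final remark with $r=2$ (this sketch just checks the numerics, not the theorem):

```python
import math

beta = 1 + math.sqrt(2)   # a PV number; its conjugate is 1 - sqrt(2)
conj = 1 - math.sqrt(2)

s = [2, 2]                # s_n = beta^n + conj^n is an integer
for _ in range(2, 20):
    s.append(2 * s[-1] + s[-2])

for n in range(1, 10):
    # beta^n is exponentially close to the integer s_n ...
    assert round(beta ** n) == s[n]
    dist = abs(beta ** n - s[n])
    assert abs(dist - abs(conj) ** n) < 1e-9
    # ... so alpha^n = beta^{2n} stays within bounded distance of s_n^2,
    # since s_n^2 = beta^{2n} + 2*(-1)^n + conj^{2n}
    assert abs(beta ** (2 * n) - s[n] ** 2) < 2.5
```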
|
{
"source": [
"https://mathoverflow.net/questions/375998",
"https://mathoverflow.net",
"https://mathoverflow.net/users/122188/"
]
}
|
376,084 |
I've seen internet jokes (more than one) between mathematicians, like this one here, about someone studying a set with interesting properties. Then, after a lot of research (presumably some years of work), they find out that such a set can only be the empty set, making the years of work useless (or at least disappointing), I guess. Is this something that happens commonly? Do you know any real examples of this? EDIT: I like how someone interpreted this question in the comments as "are there verifiably true examples of this well-known 'urban legend template'?"
|
Jonathan Borwein, page 10 of Generalisations, Examples and Counter-examples in Analysis and Optimisation, wrote, Thirty years ago I was the external examiner for a PhD thesis on Pareto optimization by a student in a well-known Business school. It studied infinite dimensional Banach space partial orders with five properties that allowed most finite-dimensional results to be extended. This surprised me and two days later I had proven that those five properties forced the space to have a norm compact unit ball – and so to be finite-dimensional. This discovery gave me an even bigger headache as one chapter was devoted to an infinite dimensional model in portfolio management. The seeming impasse took me longer to disentangle. The error was in the first sentence which started “Clearly the infimum is ...”. So many errors are buried in “clearly, obviously” or “it is easy to see”. Many years ago my then colleague Juan Schäffer told me “if it really is easy to see, it is easy to give the reason.” If a routine but not immediate calculation is needed then provide an outline. Authors tend to labour the points they personally had difficulty with; these are often neither the same nor the only places where the reader needs detail! My written report started “There are no objects such as are studied in this thesis.” Failure to find a second, even contrived example, might have avoided what was a truly embarrassing thesis defence.
|
{
"source": [
"https://mathoverflow.net/questions/376084",
"https://mathoverflow.net",
"https://mathoverflow.net/users/166253/"
]
}
|
376,175 |
My question concerns categorical "proofs" or "guessing" of arithmetical identities using category theory. I'm not at all a specialist in CT, but I try to get my hands dirty and understand something of this fascinating subject. Trying to prove something concrete, the exercise was to prove the identity $$ \gcd(n,m) \cdot \operatorname{lcm}(n,m) = n\cdot m$$ Using the concepts of product and coproduct in a specific category (objects are integers and arrows are given by divisibility) I've been able to prove it with very minimalistic assumptions, only the notion of divisibility of integers. No use of prime numbers or unique factorization with primes or other specific theorems like Bezout's theorem, etc... I'm wondering if someone has other examples like that one. Someone gave me the reference Marcelo Fiore, Tom Leinster, Objects of Categories as Complex Numbers , Advances in Mathematics 190 (2005), 264-277, doi: 10.1016/j.aim.2004.01.002 , arXiv: math/0212377 , which is interesting but it doesn't really answer my question. In that paper, if I've correctly understood, they show that if something is true with complex numbers it is true in more general categories. I would be interested if someone has some "simple" examples of concrete applications of CT proving or "guessing" arithmetical identities. Thanks a lot for any clue
|
I don't know what counts as an "arithmetical identity" for you, but there's a rich family of interesting examples coming from groupoid cardinality . To say it very tersely, if $X$ is a groupoid with at most countably many isomorphism classes of objects, such that each automorphism group $\text{Aut}(x)$ is finite, we can define its groupoid cardinality $$|X| = \sum_{x \in \pi_0(X)} \frac{1}{|\text{Aut}(x)|}$$ if this sum converges, where $\pi_0(X)$ denotes the set of isomorphism classes of objects. Groupoids such that this sum converges are called tame . Groupoid cardinality has the following properties: Normalization: $|\bullet| = 1$ , where $\bullet$ denotes the terminal groupoid. Additivity: $|X \sqcup Y| = |X| + |Y|$ . Multiplicativity: $|X \times Y| = |X| |Y|$ . Covering: If $f : X \to Y$ is an $n$ -fold covering map of groupoids then $n |X| = |Y|$ . These properties (in fact just normalization, additivity and covering) uniquely determine $|X|$ for finite groupoids (finitely many objects and morphisms). $|X|$ is even countably additive and this together with normalization and covering determines it for tame groupoids. Now here are a family of interesting examples of identities proven by computing a groupoid cardinality in two different ways. Let $X$ be a finite set and consider the groupoid of families of finite sets over $X$ . On the one hand this is $\text{FinSet}^X$ , so it has groupoid cardinality $$|\text{FinSet}|^{|X|} = \left( \sum_{n \ge 0} \frac{1}{n!} \right)^{|X|} = e^{|X|}.$$ On the other hand this is the groupoid $\text{FinSet}/X$ of finite sets equipped with a map to $X$ . (I'm being a little cavalier about notation here; I need the map to $X$ to not necessarily be a bijection but I am only considering bijections as morphisms so this is not really a slice category strictly speaking. What I am really doing is naming various categories and then implicitly taking their cores , I guess.) 
The groupoid of sets of cardinality $n$ equipped with a map to $X$ has an $n!$ -fold cover given by the groupoid of totally ordered sets of cardinality $n$ equipped with a map to $X$ , which is discrete and has cardinality $|X|^n$ , where the $n!$ -foldness comes from the action of the symmetric group $S_n$ . The covering property then gives that this groupoid has groupoid cardinality $\frac{|X|^n}{n!}$ , which gives $$e^{|X|} = \sum_{n \ge 0} \frac{|X|^n}{n!}.$$ So we've described an equivalence of groupoids (between $\text{FinSet}^X$ and $\text{FinSet}/X$ ) which categorifies a familiar basic property of the exponential function, and in particular which gives a conceptual interpretation of the terms of the Taylor series. We similarly have $\text{FinSet}^{X \sqcup Y} \cong \text{FinSet}^X \times \text{FinSet}^Y$ which gives $$e^{|X| + |Y|} = e^{|X|} e^{|Y|}.$$ Unwinding what this gives in terms of the groupoid of finite sets equipped with a map to $X \sqcup Y$ , we get that every set equipped with a map to $X \sqcup Y$ canonically disconnects into two pieces, its fiber over $X$ and its fiber over $Y$ , which gives an equivalence $\text{FinSet}/(X \sqcup Y) \cong \text{FinSet}/X \times \text{FinSet}/Y$ ; this reflects the fact that $\text{FinSet}$ (the usual one, with all functions as morphisms) is an extensive category . The equivalence $\text{FinSet}^X \cong \text{FinSet}/X$ reflects the fact that $\text{FinSet}$ is the "classifying space of finite sets," and can be thought of as a special case of the Grothendieck construction . One of my favorite generalizations of this circle of ideas is that $X$ can be replaced with a groupoid. In this case we can consider the free symmetric monoidal groupoid on $X$ , which explicitly is given by $$\text{Sym}(X) = \bigsqcup_{n \ge 0} \frac{X^n}{S_n}$$ where the fraction notation denotes taking the action groupoid / homotopy quotient . 
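The covering computation above is orbit-stabilizer in disguise: for the $S_n$ -action on $X^n$ , each orbit contributes $1/|\mathrm{Stab}|$ to the cardinality of the action groupoid, and these contributions sum to $|X|^n/n!$ . A brute-force check for a $2$ -element $X$ (the set and the range of $n$ are arbitrary small choices):

```python
from fractions import Fraction
from itertools import permutations, product
from math import factorial

X = range(2)  # a 2-element set
for n in range(1, 6):
    tuples = list(product(X, repeat=n))
    seen = set()
    total = Fraction(0)
    for t in tuples:
        if t in seen:
            continue
        # orbit of t under S_n permuting coordinates, and its stabilizer size
        orbit = {tuple(t[p[i]] for i in range(n)) for p in permutations(range(n))}
        stab = sum(1 for p in permutations(range(n))
                   if tuple(t[p[i]] for i in range(n)) == t)
        seen |= orbit
        total += Fraction(1, stab)  # groupoid cardinality of one orbit
    # covering property: the action groupoid X^n / S_n has cardinality |X|^n / n!
    assert total == Fraction(len(tuples), factorial(n))
```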
As the notation suggests, if $X$ is tame we have $$|\text{Sym}(X)| = \sum_{n \ge 0} \frac{|X|^n}{n!}$$ and equivalences $\text{Sym}(X \sqcup Y) \cong \text{Sym}(X) \times \text{Sym}(Y)$ as before. The connection to the previous construction is that if $X$ is a finite set then the free symmetric monoidal groupoid on $X$ is $\text{FinSet}^X \cong \text{FinSet}/X$ (equipped with disjoint union), and in particular $\text{FinSet}$ itself (the groupoid of finite sets and bijections) is the free symmetric monoidal groupoid on a point. The significance of this construction at this level of generality is that many groupoids of finite sets with extra structure (which can equivalently be thought of as combinatorial species ) are free symmetric monoidal groupoids (with respect to disjoint union); said in a more down-to-earth way, their objects have canonical "connected component" decompositions. The identity $|\text{Sym}(X)| = e^{|X|}$ , possibly with some weights thrown in, then becomes a version of the exponential formula in combinatorics. There are many examples of this; I'll limit myself to one of my favorites, which is taken from this blog post and which also appears as Exercise 5.13a in Stanley's Enumerative Combinatorics, Vol. II . Let $\pi$ be a finitely generated group and consider the groupoid $[B \pi, \text{FinSet}]$ of finite sets equipped with an action of $\pi$ . This groupoid is usually not tame, but it is always "weighted tame" in the sense that we can consider its groupoid cardinality weighted by a factor of $x^n$ for finite sets of size $n$ and the coefficients of the corresponding generating function converge (and in fact are sums of finitely many rational numbers and so are rational). We can compute its weighted groupoid cardinality in two ways as follows. 
On the one hand, as above, the groupoid of actions of $\pi$ on a set of size $n$ has an $n!$ -fold cover given by the groupoid of actions of $\pi$ on a totally ordered set of size $n$ , which is discrete with cardinality $|\text{Hom}(\pi, S_n)|$ , where again the $n!$ -foldness comes from the action of $S_n$ . Said another way, the groupoid is equivalent to the action groupoid / homotopy quotient of the action of $S_n$ by conjugation on the set of homomorphisms $\pi \to S_n$ . It follows that the weighted groupoid cardinality is $$\sum_{n \ge 0} \frac{|\text{Hom}(\pi, S_n)|}{n!} x^n.$$ On the other hand, this groupoid is free symmetric monoidal: every $\pi$ -set has a canonical decomposition into transitive $\pi$ -sets. The groupoid of transitive actions of $\pi$ on a set of size $n$ has an $n$ -fold cover given by the groupoid of transitive actions of $\pi$ on a set of size $n$ with a basepoint. The stabilizer of such a basepoint is a subgroup of $\pi$ of index $n$ , and this is an equivalence of groupoids; in other words, this groupoid is discrete and has size the number $s_n(\pi)$ of subgroups of $\pi$ of index $n$ . It follows that the weighted groupoid cardinality is $$\exp \left( \sum_{n \ge 1} \frac{s_n(\pi)}{n} x^n \right).$$ So we get an identity $$\sum_{n \ge 0} \frac{|\text{Hom}(\pi, S_n)|}{n!} x^n = \exp \left( \sum_{n \ge 1} \frac{s_n(\pi)}{n} x^n \right).$$ Specializing to various finitely generated groups $\pi$ gives various interesting identities, as I explain in this blog post . 
For example if $\pi = \mathbb{Z}$ we are considering the groupoid of finite sets equipped with a bijection, and we get $$\frac{1}{1 - x} = \sum_{n \ge 0} x^n = \exp \left( \sum_{n \ge 1} \frac{x^n}{n} \right) = \exp \log \frac{1}{1 - x}.$$ This identity expresses the uniqueness of cycle decomposition, and among other things gives a conceptual interpretation of the Taylor series of the logarithm (it is expressing the weighted groupoid cardinality of the groupoid of cycles, or equivalently the groupoid of finite transitive $\mathbb{Z}$ -sets). I also take $\pi = C_k, F_2, \mathbb{Z}^2, \pi_1(\Sigma_g)$ in the blog post above. In this blog post I take $\pi = \Gamma \cong C_2 \ast C_3$ to be the modular group, which gives a generating function $$\sum_{n \ge 1} \frac{s_n(\Gamma)}{n} x^n = \log \left( \sum_{n \ge 0} \frac{o_2(S_n) o_3(S_n)}{n!} x^n \right)$$ for the number of subgroups of $\Gamma$ of index $n$ ( A005133 on the OEIS), where $o_k(S_n)$ denotes the number of elements of $S_n$ of order dividing $k$ .
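As one more concrete sanity check of the boxed identity, take $\pi = C_2$ : homomorphisms $C_2 \to S_n$ correspond to elements of order dividing $2$ , counted by the involution numbers $I(n)$ with $I(n) = I(n-1) + (n-1)I(n-2)$ , while $s_1 = s_2 = 1$ and $s_n = 0$ otherwise, so the identity specializes to the classical EGF $\sum_n I(n)\,x^n/n! = \exp(x + x^2/2)$ . A quick check with exact rational arithmetic (the truncation degree is an arbitrary choice):

```python
from fractions import Fraction
from math import factorial

M = 12  # compare series coefficients up to degree M

# I(n): elements of S_n of order dividing 2
I = [1, 1]
for n in range(2, M + 1):
    I.append(I[-1] + (n - 1) * I[-2])

def poly_mul(p, q, M):
    # truncated product of coefficient lists
    r = [Fraction(0)] * (M + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= M:
                r[i + j] += pi * qj
    return r

# coefficients of exp(g) for g = x + x^2/2, summing g^k / k! term by term
g = [Fraction(0), Fraction(1), Fraction(1, 2)] + [Fraction(0)] * (M - 2)
exp_g = [Fraction(0)] * (M + 1)
term = [Fraction(1)] + [Fraction(0)] * M  # current g^k / k!
for k in range(M + 1):
    for i in range(M + 1):
        exp_g[i] += term[i]
    term = [t / (k + 1) for t in poly_mul(term, g, M)]

assert all(exp_g[n] == Fraction(I[n], factorial(n)) for n in range(M + 1))
```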
|
{
"source": [
"https://mathoverflow.net/questions/376175",
"https://mathoverflow.net",
"https://mathoverflow.net/users/41405/"
]
}
|
376,486 |
I have seen similar questions, but none of the answers relate to my difficulty, which I will now proceed to convey. Let $(M,g)$ be a Riemannian manifold. The Levi-Civita connection is the unique connection that satisfies two conditions: agreeing with the metric, and being torsion-free. Agreeing with the metric is easy to understand. This is equivalent to requiring that the parallel-transport isomorphisms between tangent spaces at different points along a path be isometries. Makes sense. Let's imagine for a second what happens if we stop with this condition, and take the case of $M=\mathbb{R}^2$ , with $g$ being the usual metric. Then it's easy to think of non-trivial ways to define parallel transport other than the one induced by the Levi-Civita connection. For example, imagine the following way to do parallel transport: if $\gamma$ is a path in $\mathbb{R}^2$ , then the associated map from $TM_{\gamma(s)}$ to $TM_{\gamma(t)}$ will be the rotation by the angle $p_2(\gamma(s))-p_2(\gamma(t))$ , where $p_i$ is the projection of $\mathbb{R}^2$ onto the $i^\text{th}$ coordinate. So I guess torsion-free-ness is supposed to rule this kind of example out. Now I'm somewhat confused. One of the answers to a similar question says that any two connections that agree with the metric have the same geodesics, and that in that case choosing a torsion-free one is just a way of choosing a canonical one. That seems incorrect, as $\gamma(t)=(0,t)$ is a geodesic of $\mathbb{R}^2$ with the Levi-Civita connection but not the one I just described... Let's think from a different direction. In the case of $\mathbb{R}^2$ , if $\nabla$ is the usual (and therefore Levi-Civita) connection then $\nabla_XY$ is just $XY$ , and $\nabla_YX$ is just $YX$ . So of course we have torsion-free-ness. 
So I guess one way to think of torsion-free-ness is saying that you want the parallel transport induced by the connection to be the one associated with $\mathbb{R}^n$ via the local trivializations. Except that this seems over-simplistic: torsion-free-ness is weaker than the condition that $\nabla_XY=XY$ and $\nabla_YX=YX$ . So why this crazy weaker condition that $\nabla_XY-\nabla_YX=[X, Y]$ ? What does that even mean geometrically? Why is this sensible? How would one say that in words that are similar to "it means that the connection is the connection induced from the trivializations" except more correct than that?
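The rotation example above can be checked numerically. One way to encode it (an assumption about the encoding, not something stated in the question): the transport rule corresponds to the connection $d + J\,dy$ , where $J$ is the generator of rotations, so a parallel vector field along $\gamma$ solves $v' = -\dot y\,J v$ . The sketch below checks that this transport is an isometry while the line $\gamma(t)=(0,t)$ fails the geodesic equation $\ddot\gamma + \dot y\,J\dot\gamma = 0$ :

```python
import math

J = ((0.0, -1.0), (1.0, 0.0))  # generator of rotations

def matvec(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def transport(v, steps=4000):
    # solve v' = -ydot * J v along gamma(t) = (cos t, sin t) with RK4
    h = 2 * math.pi / steps
    def f(t, w):
        Jw = matvec(J, w)
        ydot = math.cos(t)  # derivative of sin t
        return (-ydot * Jw[0], -ydot * Jw[1])
    for i in range(steps):
        t = i * h
        k1 = f(t, v)
        k2 = f(t + h / 2, (v[0] + h / 2 * k1[0], v[1] + h / 2 * k1[1]))
        k3 = f(t + h / 2, (v[0] + h / 2 * k2[0], v[1] + h / 2 * k2[1]))
        k4 = f(t + h, (v[0] + h * k3[0], v[1] + h * k3[1]))
        v = (v[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             v[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return v

v = transport((1.0, 0.0))
assert abs(v[0] ** 2 + v[1] ** 2 - 1.0) < 1e-9     # transport preserves the metric
assert abs(v[0] - 1.0) < 1e-6 and abs(v[1]) < 1e-6  # net angle y(0) - y(2 pi) = 0

# gamma(t) = (0, t): gamma'' = 0 and ydot = 1, so being a geodesic would
# need J gamma' = 0, but J(0, 1) = (-1, 0) != 0 -- not a geodesic here
assert matvec(J, (0.0, 1.0)) == (-1.0, 0.0)
```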
|
I think that the literal answer is that the Levi-Civita connection of $g$ is trying to describe the metric $g$ and nothing else . It is the only connection-assignment that is uniquely defined by the metric and its first derivatives and nothing else, in the sense that, if you have a diffeomorphism-equivariant assignment $g\to C(g)$ where $C(g)$ is a connection that depends only on $g$ and its first derivatives, then $C(g)$ is the Levi-Civita connection. Note that the restriction to first derivatives is necessary. For example, there is a unique connection on $TM$ that is compatible with $g$ and satisfies $$
\nabla_XY -\nabla_YX - [X,Y] = \mathrm{d}S(X)\,Y - \mathrm{d}S(Y)\,X,
$$ where $S= S(g)$ is the scalar curvature of $g$ . However, this canonical connection depends on three derivatives of $g$ . Meanwhile, connections with torsion can arise naturally from other structures: For example, on a Lie group, there is a unique connection for which the left-invariant vector fields are parallel and a unique connection for which the right-invariant vector fields are parallel. When the identity component of the group is nonabelian, these are distinct connections with nonvanishing torsion, while their average is a canonical connection that is torsion-free. (This latter connection need not be metric compatible, of course.) A more well-known example is the unique connection associated to an Hermitian metric on a complex manifold that is compatible with both the metric and the complex structure and whose torsion is of type (0,2). It's not unreasonable to ask whether imposing the torsion-free condition, just because you can, right out of the gate is too restrictive. Einstein tried for years to devise a 'unified field theory' that would geometrize all of the known forces of nature by considering connections compatible with the metric (i.e., the gravitational field) that had torsion. There is a book containing the correspondence between Einstein and Élie Cartan ( Letters on absolute parallelism ) in which Einstein would propose a set of field equations that would constrain the torsion so that they describe the other known forces (just as the Einstein equations constrain the gravitational field) and Cartan would analyze them to determine whether they had the necessary 'flexibility' to describe the known phenomena without being so 'flexible' that they couldn't make predictions. It's very interesting reading. This tradition of seeking a physical interpretation of torsion has continued, off and on, since then, with several attempts to generalize Einstein's theory of gravity (aka, 'general relativity'). 
Some of these are described in Misner, Thorne, and Wheeler, and references are given to others. In fact, quite recently, Thibault Damour (IHÉS), famous for his work on black holes, and a collaborator have been working on a gravitational theory-with-torsion, which they call 'torsion bigravity'. (See arXiv:1906.11859 [gr-qc] and arXiv:2007.08606 [gr-qc].) [To be frank, though, I'm not aware that any of these alternative theories have made any predictions that disagree with GR that have been verified by experiment. I think we all would have heard about that.] I guess the point is that 'why impose torsion-free?' is actually a very reasonable question to ask, and, indeed, it has been asked many times. One answer is that, if you are only trying to understand the geometry of a metric, you might as well go with the most natural connection, and the Levi-Civita connection is the best one of those in many senses. Another answer is that, if you have some geometric or physical phenomenon that can be captured by a metric and another tensor that can be interpreted as (part of) the torsion of the connection, then, sure, go ahead and incorporate that information into the connection and see where it leads you. Remark on connections with the same geodesics: I realize that I didn't respond to the OP's confusion about connections with the same geodesics vs. compatible with a metric $g$ but with torsion.
(I did respond in a comment that turned out to be wrong, so I deleted it. Hopefully, this will be better.) First, about torsion (of a connection on TM). The torsion $T^\nabla$ of a (linear) connection on $TM$ is a section of the bundle $TM\otimes\Lambda^2(T^*M)$ . Here is an (augmented) Fundamental Lemma of (pseudo-)Riemannian geometry: Lemma 1: If $g$ is a (nondegenerate) pseudo-Riemannian metric on $M$ and $\tau$ is a section of $TM\otimes\Lambda^2(T^*M)$ , then there is a unique linear connection $\nabla$ on $TM$ such that $\nabla g = 0$ and $T^\nabla = \tau$ . (The usual FLRG is the special case $\tau=0$ .) Note that this $\nabla$ depends algebraically on $\tau$ and the $1$ -jet of $g$ . The proof of Lemma 1 is the usual linear algebra. Second, if $\nabla$ and $\nabla^*$ are two linear connections on $TM$ ,
their difference is well-defined and is a section of $TM\otimes T^*M\otimes T^*M$ . Specifically $\nabla^* - \nabla:TM\times TM\to TM$ has the property that, on vector fields $X$ and $Y$ , we have $$
\left({\nabla^*} - \nabla\right)(X,Y) = {\nabla^*}_XY-\nabla_XY.
$$ Lemma 2: Two linear connections, $\nabla$ and $\tilde\nabla$ , have the same geodesics (i.e., each curve $\gamma$ is a geodesic for one if and only if it is a geodesic for the other) if and only if $\tilde\nabla - \nabla$ is a section of the subbundle $TM\otimes\Lambda^2(T^*M)\subset TM\otimes T^*M\otimes T^*M$ . Proof: In local coordinates $x = (x^i)$ , let $\Gamma^i_{jk}$ (respectively, $\tilde\Gamma^i_{jk}$ ) be the coefficients of $\nabla$ (respectively, $\tilde\nabla$ ). Then $$
\tilde\nabla-\nabla = (\tilde\Gamma^i_{jk}-\Gamma^i_{jk})\ \partial_i\otimes \mathrm{d}x^j\otimes\mathrm{d}x^k.
$$ Meanwhile, a curve $\gamma$ in the $x$ -coordinates is a $\nabla$ -geodesic
(respectively, a $\tilde\nabla$ -geodesic) iff $$
\ddot x^i + \Gamma^i_{jk}(x)\,\dot x^j\dot x^k = 0\qquad
(\text{respectively},\ \ddot x^i + \tilde\Gamma^i_{jk}(x)\,\dot x^j\dot x^k = 0).
$$ These are the same equations iff $(\tilde\Gamma^i_{jk}(x)-\Gamma^i_{jk}(x))\,y^jy^k\equiv0$ for all $y^i$ , i.e., iff $$
{\tilde\nabla}-\nabla = \tfrac12({\tilde\Gamma}^i_{jk}-\Gamma^i_{jk})\ \partial_i\otimes \mathrm{d}x^j\wedge\mathrm{d}x^k.\quad \square
$$ Finally, we examine when two $g$ -compatible connections have the same geodesics: Lemma 3: If $g$ is a nondegenerate (pseudo-)Riemannian metric, and $\nabla$ and $\nabla^*$ are linear connections on $TM$ that satisfy $\nabla g = \nabla^*g = 0$ , then they have the same geodesics if and only if the expression $$
\phi(X,Y,Z) = g\bigl( X,(\nabla^*{-}\nabla)(Y,Z)\bigr)
$$ is skew-symmetric in $X$ , $Y$ , and $Z$ . Proof: $\nabla g = \nabla^* g = 0$ implies $\phi(X,Y,Z)+\phi(Z,Y,X)=0$ , while they have the same geodesics if and only if $\phi(X,Y,Z)+\phi(X,Z,Y)=0$ . Corollary: If $g$ is a nondegenerate (pseudo-)Riemannian metric, then the space of linear connections $\nabla$ on $TM$ that satisfy $\nabla g = 0$ and have the same geodesics as $\nabla^g$ , the Levi-Civita connection of $g$ , is a vector space naturally isomorphic to $\Omega^3(M)$ , the space of $3$ -forms on $M$ .
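The computational content of Lemma 2 — that the geodesic equation only sees the part of $\Gamma^i_{jk}$ symmetric in the lower indices — is easy to check with random data: adding any tensor antisymmetric in $j,k$ leaves the quadratic form $\Gamma^i_{jk}\,y^jy^k$ unchanged. A quick sketch:

```python
import random

random.seed(0)
n = 3
# random "Christoffel symbols" G[i][j][k]
G = [[[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
     for _ in range(n)]
# a random tensor antisymmetric in the lower indices j, k
A = [[[0.0] * n for _ in range(n)] for _ in range(n)]
for i in range(n):
    for j in range(n):
        for k in range(j + 1, n):
            A[i][j][k] = random.uniform(-1, 1)
            A[i][k][j] = -A[i][j][k]

def quad(C, y):
    # C^i_{jk} y^j y^k, the term driving the geodesic equation
    return [sum(C[i][j][k] * y[j] * y[k] for j in range(n) for k in range(n))
            for i in range(n)]

y = [random.uniform(-1, 1) for _ in range(n)]
Gt = [[[G[i][j][k] + A[i][j][k] for k in range(n)] for j in range(n)]
      for i in range(n)]
# same geodesic equation for Gamma and Gamma + (antisymmetric part)
assert all(abs(a - b) < 1e-12 for a, b in zip(quad(G, y), quad(Gt, y)))
```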
|
{
"source": [
"https://mathoverflow.net/questions/376486",
"https://mathoverflow.net",
"https://mathoverflow.net/users/98901/"
]
}
|
376,839 |
In his talk, The Future of Mathematics , Dr. Kevin Buzzard states that Lean is the only existing proof assistant suitable for formalizing all of math . In the Q&A part of the talk (at 1:00:00 ) he justifies this as follows: Automation is very difficult with set theory Simple type theory is too simple Univalent type theory hasn't been successful in proof assistants My question is about the first of these: Why is automation very difficult with set theory (compared to dependent type theory)?
|
I apologize for writing a lengthy answer, but I get the feeling that discussions about foundations for formalized mathematics are often hindered by a lack of information. I have used proof assistants for a while now, and also worked on their design and implementation. While I will be quick to tell jokes about set theory, I am bitterly aware of the shortcomings of type theory, very likely more so than the typical set theorist. (Ha, ha, "typical set theorist"!) If anyone can show me how to improve proof assistants with set theory, I will be absolutely delighted! But it is not enough to just have good ideas – you need to test them in practice on large projects, as many phenomena related to formalized mathematics only appear once we reach a certain level of complexity. The components of a proof assistant The architecture of modern proof assistants is the result of several decades of experimentation, development and practical experience. A proof assistant incorporates not one, but several formal systems. The central component of a proof assistant is the kernel , which validates every inference step and makes sure that proofs are correct. It does so by implementing a formal system $F$ (the foundation ) which is expressive enough to allow formalization of a large amount of mathematics, but also simple enough to allow an efficient and correct implementation. The foundational system implemented in the kernel is too rudimentary to be directly usable for sophisticated mathematics. Instead, the user writes their input in a more expressive formal language $V$ (the vernacular ) that is designed to be practical and useful. Typically $V$ is quite complex so that it can accommodate various notational conventions and other accepted forms of mathematical expression. A second component of the proof assistant, the elaborator , translates $V$ to $F$ and passes the translations to the kernel for verification. 
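To make the role of the kernel concrete, here is a toy LCF-style kernel for the implication fragment of propositional logic (entirely illustrative; the names and design are made up, and real kernels are of course far richer). The point is that values of `Thm` can only be produced by the trusted rules, so every theorem is correct by construction:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Imp:
    hyp: object
    concl: object

_KERNEL = object()  # private token: only the rules below can mint theorems

class Thm:
    def __init__(self, hyps, concl, _token=None):
        if _token is not _KERNEL:
            raise ValueError("Thm values come only from kernel rules")
        self.hyps = frozenset(hyps)
        self.concl = concl

def assume(p):
    """p |- p"""
    return Thm({p}, p, _token=_KERNEL)

def intro(p, th):
    """From  hyps, p |- q  derive  hyps |- p -> q."""
    return Thm(th.hyps - {p}, Imp(p, th.concl), _token=_KERNEL)

def elim(th_imp, th_hyp):
    """Modus ponens: from  |- p -> q  and  |- p  derive  |- q."""
    if not isinstance(th_imp.concl, Imp) or th_imp.concl.hyp != th_hyp.concl:
        raise ValueError("modus ponens mismatch")
    return Thm(th_imp.hyps | th_hyp.hyps, th_imp.concl.concl, _token=_KERNEL)

# |- p -> p, derived rather than asserted:
p = Var("p")
identity = intro(p, assume(p))
assert identity.concl == Imp(p, p) and identity.hyps == frozenset()
```

Everything the elaborator produces must pass through rules like these; a bug anywhere else can annoy the user but cannot manufacture an invalid `Thm`.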
A proof assistant may incorporate a third formal language $M$ (the meta-language ), which is used to implement proof search, decision procedures, and other automation techniques. Because the purpose of $M$ is to implement algorithms, it typically resembles a programming language. The distinction between $M$ and $V$ may not be very sharp, and sometimes they are combined into a single formalism. From a mathematical point of view, $M$ is less interesting than $F$ and $V$ , so we shall ignore it. Suitability of foundation $F$ The correctness of the entire system depends on the correctness of the kernel. A bug in the kernel allows invalid proofs to be accepted, whereas a bug in any other component is just an annoyance. Therefore, the foundation $F$ should be simple so that we can implement it reliably. It should not be so exotic that logicians cannot tell how it relates to the accepted foundations of mathematics. Computers are fast, so it does not matter (too much) if the translation from $V$ to $F$ creates verbose statements. Also, $F$ need not be directly usable by humans. A suitable variant of set theory or type theory fits these criteria. Indeed Mizar is based on set theory, while HOL, Lean, Coq, and Agda use type theory in the kernel. Since both set theory and type theory are mathematically very well understood, and more or less equally expressive, the choice will hinge on technical criteria, such as availability and efficiency of proof-checking algorithms. Suitability of vernacular $V$ A much more interesting question is what makes the vernacular $V$ suitable. For the vernacular to be useful, it has to reflect mathematical practice as much as possible. It should allow expression of mathematical ideas and concepts directly in familiar terms, and without unnecessary formalistic hassle. On the other hand, $V$ should be a formal language so that the elaborator can translate it to the foundation $F$ .
To learn more about what makes $V$ good, we need to carefully observe how mathematicians actually write mathematics. They produce complex webs of definitions, theorems, and constructions, therefore $V$ should support management of large collections of formalized mathematics. In this regard we can learn a great deal by looking at how programmers organize software. For instance, saying that a body of mathematics is "just a series of definitions, theorems and proofs" is a naive idealization that works in certain contexts, but certainly not in practical formalization of mathematics. Mathematicians omit a great deal of information in their writings, and are quite willing to sacrifice formal correctness for succinctness. The reader is expected to fill in the missing details, and to rectify the imprecisions. The proof assistant is expected to do the same. To illustrate this point, consider the following snippet of mathematical text: Let $U$ and $V$ be vector spaces and $f : U \to V$ a linear map. Then $f(2 \cdot x + y) = 2 \cdot f(x) + f(y)$ for all $x$ and $y$ . Did you understand it? Of course. But you might be quite surprised to learn how much guesswork and correction your brain carried out: The field of scalars is not specified, but this does not prevent you from understanding the text. You simply assumed that there is some underlying field of scalars $K$ . You might find out more about $K$ in subsequent text. ( $K$ is an existential variable .) Strictly speaking " $f : U \to V$ " does not make sense because $U$ and $V$ are not sets, but structures $U = (|U|, 0_U, {+}_U, {-}_U, {\cdot}_U)$ and $V = (|V|, 0_V, {+}_V, {-}_V, {\cdot}_V)$ . Of course, you correctly surmised that $f$ is a map between the carriers , i.e., $f : |U| \to |V|$ . (You inserted an implicit coercion from a vector space to its carrier.) What do $x$ and $y$ range over? For $f(x)$ and $f(y)$ to make sense, it must be the case that $x \in |U|$ and $y \in |U|$ .
(You inferred the domain of $x$ and $y$ .) In the equation, $+$ on the left-hand side means $+_{U}$ , and $+$ on the right-hand side ${+}_V$ , and similarly for scalar multiplication. (You reconstructed the implicit arguments of $+$ .) The symbol $2$ normally denotes a natural number, as every child knows, but clearly it is meant to denote the scalar $1_K +_K 1_K$ . (You interpreted " $2$ " in the notation scope appropriate for the situation at hand.) The vernacular $V$ must support these techniques, and many more, so that they can be implemented in the elaborator. It cannot be anything as simple as ZFC with first-order logic and definitional extensions, or bare Martin-Löf type theory. You may consider the development of $V$ to be outside the scope of mathematics and logic, but then do not complain when computer scientists fashion it after their technology. I have never seen any serious proposals for a vernacular based on set theory. Or to put it another way, as soon as we start expanding and transforming set theory to fit the requirements for $V$ , we end up with a theoretical framework that looks a lot like type theory. (You may entertain yourself by thinking how set theory could be used to detect that $f : U \to V$ above does not make sense unless we insert coercions – for if everything is a set then so are $U$ and $V$ , in which case $f : U \to V$ does make sense.) Detecting mistakes An important aspect of suitability of foundation is its ability to detect mistakes. Of course, its purpose is to prevent logical errors, but there is more to mistakes than just violation of logic. There are formally meaningful statements which, with very high probability, are mistakes. Consider the following snippet, and read it carefully: Definition: A set $X$ is jaberwocky when for every $x \in X$ there exists a bryllyg $U \subseteq X$ and an uffish $K \subseteq X$ such that $x \in U$ and $U \in K$ .
Even if you have never read Lewis Carroll's works, you should wonder about " $U \in K$ ". It looks like " $U \subseteq K$ " would make more sense, since $U$ and $K$ are both subsets of $X$ . Nevertheless, a proof assistant whose foundation $F$ is based on ZFC will accept the above definition as valid, even though it is very unlikely that the human intended it. A proof assistant based on type theory would reject the definition by stating that " $U \in K$ " is a type error. So suppose we use a set-theoretic foundation $F$ that accepts any syntactically valid formula as meaningful. In such a system writing " $U \in K$ " is meaningful and therefore the above definition will be accepted by the kernel. If we want the proof assistant to actually assist the human, it has to contain an additional mechanism that will flag " $U \in K$ " as suspect, despite the kernel being happy with it. But what is this additional mechanism, if not just a second kernel based on type theory? I am not saying that it is impossible to design a proof assistant based on set theory. After all, Mizar , the most venerable of them all, is designed precisely in this way – set theory with a layer of type-theoretic mechanisms on top. But I cannot help but wonder: why bother with the set-theoretic kernel that requires a type-theoretic fence to insulate the user from the unintended permissiveness of set theory?
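To make the contrast concrete, here is a hedged Lean 4 sketch of the two examples above, assuming Mathlib; the exact lemma names and whether `simp` alone closes the goal depend on the library version:

```lean
import Mathlib

-- The vernacular statement from the text: the elaborator infers the
-- carriers, the coercions, and the implicit arguments of + and •,
-- exactly the guesswork the reader's brain performs.
example {K U V : Type*} [Field K]
    [AddCommGroup U] [Module K U] [AddCommGroup V] [Module K V]
    (f : U →ₗ[K] V) (x y : U) :
    f ((2 : K) • x + y) = (2 : K) • f x + f y := by
  simp

-- The "jaberwocky" slip is caught by the type checker: with U K : Set X,
-- the membership U ∈ K is a type error, since members of K have type X,
-- not Set X. Uncommenting the line below makes elaboration fail.
-- example {X : Type*} (U K : Set X) : Prop := U ∈ K
```

A set-theoretic kernel, by contrast, would accept both statements as well-formed formulas, which is precisely the point made above.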
|
{
"source": [
"https://mathoverflow.net/questions/376839",
"https://mathoverflow.net",
"https://mathoverflow.net/users/30352/"
]
}
|
377,922 |
$\DeclareMathOperator\Spec{Spec}\DeclareMathOperator\ev{ev}$ Teaching algebraic geometry, in particular schemes, I am struggling to provide intuitive proofs. In particular, I find it counter-intuitive that points are prime ideals. I discovered a trick which I suspect is not new. Basically, you build the functor of points into the definition. I want to modify the definition of $\Spec(R)$ as follows: As a set, $\Spec(R)$ is simply all pairs $x=(k_x, \ev_x)$ where $k_x$ is a field and $\ev_x:R\to k_x$ is a homomorphism. Then as usual, elements of $R$ are called functions and the value of a function $f\in R$ at a point $x$ is $f(x)\mathrel{:=}\ev_x(f)\in k_x$ . Then it continues as usual: a closed set is where some collection of functions vanishes; a basic open set is where some function is invertible. Of course, there are some problems with this approach: The class of all fields is not a set. Technically, we can limit ourselves to some very large set of "test fields". So this can be swept under the rug. $\Spec(R)$ with this definition is not $T_0$ . But after getting used to spaces being not Hausdorff it should be easy to take it to the next level with spaces being not $T_0$ . Of course, to every non- $T_0$ space there is a canonically associated $T_0$ space where you identify topologically indistinguishable points, so you recover the usual construction of $\Spec(R)$ this way. Nevertheless, I find this approach much more intuitive, because it seems like a natural question to solve some system of equations in some unknown field, rather than studying prime ideals (which is of course basically the same thing, language aside). Is this not new? Are there any lecture notes following this approach? Of course, the full "functor of points" approach sort of contains this one, but notice that to do what I want I do not need the Yoneda lemma, I do not ask for functoriality, so I do not need to sweep under the rug all the tedious checks of naturality.
So I find it more basic than functor of points. Here is an example. When we construct the localization of a ring $R$ with respect to a multiplicative set $S$ we prove that prime ideals of $S^{-1}R$ are in bijection with a subset of ideals of $R$ . With this approach the corresponding statement is a simple consequence of the universal property of the localization, there is nothing more to prove. Another example. Prove that the map $\mathbb{A}^1\to \mathbb{A}^3$ given by $t\to (t^3, t^4, t^5)$ has image $Z(xz-y^2, x^3-yz, x^2 y -z^2)$ . This becomes simply high school algebra.
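The last example can indeed be dispatched mechanically; here is a small Python sketch (it checks one containment only: that the parametrization satisfies the three equations, not that every point of the zero set lies in the image):

```python
# Verify that t -> (t^3, t^4, t^5) lands in Z(xz - y^2, x^3 - yz, x^2*y - z^2).
# After substitution each equation becomes a polynomial identity in t, so
# checking it at more integer values of t than its degree (here at most 10)
# proves it vanishes identically.

def defining_equations(x, y, z):
    return (x * z - y**2, x**3 - y * z, x**2 * y - z**2)

assert all(defining_equations(t**3, t**4, t**5) == (0, 0, 0)
           for t in range(-20, 21))
```

The harder direction (that nothing outside the image satisfies all three equations) is where the actual algebra happens.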
|
Actually, you have rediscovered a nice motivation for using prime ideals as points. Indeed, your collection of points consists of triples $(R, k_x, \mathrm{ev}_x)$ where $k_x$ is a field and $\mathrm{ev}_x \colon R \to k_x$ is a homomorphism. The collection of all such triples is a class rather than a set. In any case, you need not change the universe to get the underlying topological space whose functions are given by $R$ , much as you won't change the universe when you reconstruct a differential manifold from the algebra of differentiable functions. A nice solution is to impose an equivalence relation on the collection of points. Define $$(R, k_x, \mathrm{ev}_x) \sim (R, k_y, \mathrm{ev}_y)$$ whenever there are field extensions $i_1 \colon k_x \hookrightarrow K$ and $i_2 \colon k_y \hookrightarrow K$ such that $i_1 \circ \mathrm{ev}_x = i_2 \circ \mathrm{ev}_y$ . After all, a point with coordinates in a field remains the same if we consider the coordinates in a bigger field. Now take the quotient of the equivalence relation. It is clear that the triples $(R, k_x, \mathrm{ev}_x)$ are classified by $\mathrm{Im}(\mathrm{ev}_x)$ , equivalently by $\mathrm{Ker}(\mathrm{ev}_x)$ , which turns out to be a prime ideal. Thus, every equivalence class has a canonical representative $(R, \kappa(\mathfrak{p}), \mathrm{ev}_\mathfrak{p})$ where $\mathfrak{p}$ is a prime ideal in $R$ , $\kappa(\mathfrak{p}) = R_\mathfrak{p}/\mathfrak{p}R_\mathfrak{p}$ is the residue field of $\mathfrak{p}$ , and $\mathrm{ev}_\mathfrak{p} \colon R \to \kappa(\mathfrak{p})$ is the canonical map. So in fact points as maps to fields are classified by primes, have a canonical field where elements of $R$ may be evaluated, and the collection formed by the equivalence classes is clearly a set. Of course, the next step is to define a sheaf of rings that, in some sense, might be interpreted as a sheaf of functions on $\mathrm{Spec}(R)$ .
This is exactly the motivation I use for the philosophy "points are primes" in algebraic geometry in my graduate courses, under the name "the sermon of points". Of course, this point of view is well known, though it is rarely displayed in print.
|
{
"source": [
"https://mathoverflow.net/questions/377922",
"https://mathoverflow.net",
"https://mathoverflow.net/users/89514/"
]
}
|
378,150 |
Let $E$ be a linear subspace of ${\bigwedge}^2({\mathbb R}^n)$ . What is the minimal dimension of $E$ that guarantees $E$ contains a nonzero element of the form $X\wedge Y$ , with $X, Y\in{\mathbb R}^n$ ? When $n=3$ , dimension $1$ is enough. When $n=4$ we would need dimension $4$ . For general $n$ , it is easy to see $E$ having dimension $\frac{(n-1)(n-2)}{2}+1$ is sufficient, but I don't know if that is optimal.
|
Partial answer: the minimal dimension is at least ${n-2 \choose 2} + 1$ , with equality if $n-1$ is a power of $2$ .
For example, if $n=5$ the minimum is $4$ , curiously the same as for $n=4$ ,
and less than the "easy" bound of ${5-1 \choose 2} + 1 = 7$ . Let $N = {n \choose 2}$ , which is the dimension of the alternating square of
an $n$ -dimensional vector space $V$ . Then the pure tensors $X \wedge Y$ constitute a homogeneous subset of dimension $2n-3$ ; projectively this is
the Plücker embedding in $(N-1)$ -dimensional projective space
of the Grassmannian ${\rm Gr}(2,n)$ of $2$ -planes in $V$ ,
which has dimension $2n-4$ .
Thus a general linear space of codimension less than $2n-4$ will miss ${\rm Gr}(2,n)$ for lack of sufficient degrees of freedom.
This gives the lower bound ${n-2 \choose 2} + 1 = N-(2n-4)$ . Over an algebraically closed field this necessary condition is also sufficient,
and the general linear subspace of codimension $2n-4$ meets the Plücker variety
in $d_n$ points counted with multiplicity, where $d_n$ is the degree of
the Plücker variety. It is known that $d_n$ is the Catalan number $C_{n-2} = \frac1{n-1}{2n-4 \choose n-2}$ . The field of real numbers is not algebraically closed,
but every polynomial of odd degree has a root.
Thus if $d_n$ is odd we are still guaranteed a real intersection.
This happens when $n-1$ is a power of $2$ , i.e. $n=3,5,9,17,\ldots$ .
We've now proven that the bound ${n-2 \choose 2} + 1$ is attained for such $n$ . For $n=4$ it is well-known that the real Grassmannian ${\rm Gr}(2,4)$ is a quadric of signature $(3,3)$ , so as Yuval found it takes
a subspace of dimension at least $4$ to guarantee a real intersection.
For $n \geq 6$ that are not of the form $2^m + 1$ , I do not know
by how much the real answer exceeds the lower bound ${n-2 \choose 2} + 1$ .
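The parity fact doing the work in the last step, namely that the degree $d_n = C_{n-2}$ is odd exactly when $n-1$ is a power of $2$ , is easy to check numerically; a minimal Python sketch:

```python
from math import comb

def plucker_degree(n):
    # Degree of the Pluecker embedding of Gr(2, n): the Catalan number
    # C_{n-2} = (1/(n-1)) * binom(2n-4, n-2).
    return comb(2 * n - 4, n - 2) // (n - 1)

def is_power_of_two(k):
    return k > 0 and k & (k - 1) == 0

# d_n is odd exactly when n - 1 is a power of 2, i.e. n = 3, 5, 9, 17, ...
for n in range(3, 200):
    assert (plucker_degree(n) % 2 == 1) == is_power_of_two(n - 1)
```

This matches the classical fact that the Catalan number $C_m$ is odd precisely when $m = 2^k - 1$, with $m = n-2$ here.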
|
{
"source": [
"https://mathoverflow.net/questions/378150",
"https://mathoverflow.net",
"https://mathoverflow.net/users/130379/"
]
}
|
378,181 |
There are two groups, $G_1$ and $G_2$ . They are both acting on a set $S$ . $S$ may have some structure. The groups may too. The actions respect them. $G_1$ is mysterious. Perhaps all we know about it is the way it acts on $S$ . We'd like to know more. $G_2$ is well-known. We might be able to learn about $G_1$ from watching how its action interacts with the action of $G_2$ . Intersection of their orbits, for instance. Is this situation systematically studied under some name? Beyond the case when the actions commute.
|
|
{
"source": [
"https://mathoverflow.net/questions/378181",
"https://mathoverflow.net",
"https://mathoverflow.net/users/169951/"
]
}
|
378,192 |
Problem setting : $ \underset{x}{\text{min}} \|Ax-b\|$ , where $A \in \mathcal{R}^{m \times n}, m\gg n $ , full rank. L1 loss is used for robust estimation using IRLS. The corresponding equation to solve turns out to be $ A^{T}WAx=A^{T}Wb$ , where $W=\mathrm{diag}(d_i), d_i=1/|e_{i}|$ , $e_{i}=a_{i}^{T}x-b_{i}$ , $a_{i}$ is the ith row of $A$ , $b_{i}$ is the ith element of $b$ . For $e_{i}$ close to $0$ , the value of $d_{i}$ is very large. For my specific case, the range of $d_i$ is from $10^{-3}$ to $10^5$ . To avoid high values of $d_i$ , it is taken as $d_i=1/(|e_i|+\delta)$ where $\delta>0$ is a small number near $0$ . Let $\delta=10^{-3}$ . This brings the range of $d_i$ as $10^{-3}$ to $10^3$ . The range of values of $d_i$ is still high to bring numerical stability. It makes $ A^{T}WA$ a near singular matrix. Please suggest a way to avoid numerical instability. Thanks in advance!
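A minimal pure-Python sketch of the damped IRLS iteration described above, reduced to the one-parameter case min_a sum_i |a*x_i - b_i| (the toy data below is hypothetical). In the matrix case the same iteration applies, but rather than forming A^T W A, which squares the condition number, one should solve each weighted least-squares subproblem via a QR factorization (or lstsq) applied to sqrt(W) A:

```python
# Damped IRLS for L1 regression in one parameter. The weights are
# d_i = 1/(|e_i| + delta), exactly as in the question.

def irls_l1_slope(xs, bs, delta=1e-3, iters=50):
    # Ordinary least-squares starting point.
    a = sum(x * b for x, b in zip(xs, bs)) / sum(x * x for x in xs)
    for _ in range(iters):
        w = [1.0 / (abs(a * x - b) + delta) for x, b in zip(xs, bs)]
        a = (sum(wi * x * b for wi, x, b in zip(w, xs, bs))
             / sum(wi * x * x for wi, x in zip(w, xs)))
    return a

xs = [1, 2, 3, 4, 5]
bs = [2, 4, 6, 8, 100]   # true slope 2, last observation is a gross outlier
est = irls_l1_slope(xs, bs)   # converges near the robust slope 2
```

The damping delta caps the weights, and working with sqrt(W) A instead of A^T W A halves the effect of the weight spread on the conditioning of the solve.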
|
|
{
"source": [
"https://mathoverflow.net/questions/378192",
"https://mathoverflow.net",
"https://mathoverflow.net/users/169969/"
]
}
|
378,207 |
Consider the vertices of an $n$ -dimensional cube. The distance between two vertices is measured as the minimum number of edges between the two vertices. Now consider a subset of these vertices.
If we call the total set of vertices $T$ and the subset $S$ , then our purpose is to partition $S$ into two sets $A$ and $B$ and, for these sets, find vertices $x_A$ and $x_B$ from $T$ such that the sum of the distances from $x_A$ to the vertices of $A$ plus the sum of the distances from $x_B$ to the vertices of $B$ is minimized. How to approach this question? Given that we know the distance relation for every pair of vertices, is it possible to find the minimum distance through some simple calculation?
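One useful observation (a hedged sketch, not a full solution): for a fixed part $A$, an optimal center $x_A$ can be chosen coordinate by coordinate, by majority vote, because the edge distance on the cube is Hamming distance and so decomposes as a sum of independent per-coordinate costs. The remaining difficulty is the choice of the partition of $S$ into $A$ and $B$. A small Python check:

```python
from itertools import combinations, product

def majority_center(A, n):
    # In each coordinate, pick the bit agreeing with the majority of A.
    return tuple(int(2 * sum(v[i] for v in A) >= len(A)) for i in range(n))

def cost(A, x):
    # Total Hamming distance from x to the vertices of A.
    return sum(sum(vi != xi for vi, xi in zip(v, x)) for v in A)

def brute_force_center(A, n):
    return min(product((0, 1), repeat=n), key=lambda x: cost(A, x))

# Sanity check: majority vote matches brute force on every subset of the 3-cube.
n = 3
verts = list(product((0, 1), repeat=n))
for r in range(1, len(verts) + 1):
    for A in combinations(verts, r):
        assert cost(A, majority_center(A, n)) == cost(A, brute_force_center(A, n))
```

So once the partition is fixed, the inner optimization is linear in $n \cdot |S|$; the combinatorial search is over partitions only.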
|
|
{
"source": [
"https://mathoverflow.net/questions/378207",
"https://mathoverflow.net",
"https://mathoverflow.net/users/169985/"
]
}
|
378,692 |
Christmas is just around the corner and I haven't bought all the gifts for my family yet.
My Dad has a PhD in Mathematics, he works in Graph theory and his thesis was about Quasiperiodic tilings.
What do you think would make a good gift for him?
I'll appreciate anything you could think of!
Thanks for reading, hope you have a great day . p.s.: after reading all the tags in this website I think this is the right one for this kind of question? please correct me if I'm wrong!
|
I'm surprised no one has yet suggested a lifetime supply of Hagoromo chalk.
|
{
"source": [
"https://mathoverflow.net/questions/378692",
"https://mathoverflow.net",
"https://mathoverflow.net/users/170387/"
]
}
|
378,725 |
The Ackermann function $A(m,n)$ is a binary function on the natural numbers defined by a certain double recursion, famous for exhibiting extremely fast-growing behavior. One finds various slightly different formulations of the Ackermann function, with slightly different initial conditions. I prefer the following version: \begin{eqnarray*}
A(0,n) &=& n+1 \\
A(m,0) &=& 1, \quad\text{ except for }A(1,0)=2\text{ and }A(2,0)=0\\
A(m+1,n+1) &=& A\bigl(m,A(m+1,n)\bigr).
\end{eqnarray*} I like this version because it has the nice properties that \begin{eqnarray*}
A(0,n) &=& n+1\\
A(1,n) &=& n+2\\
A(2,n) &=& 2n\\
A(3,n) &=& 2^n\\
A(4,n) &=& \underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2}}}}}}_{n\text{ twos}}\\
\end{eqnarray*} In this way, the levels of the Ackermann function exhibit increasingly fast-growing behavior. Each horizontal level of the Ackermann function, the function $A_m(n)=A(m,n)$ considered as a unary function with $m$ fixed, is defined by recursion using the previous level, and consequently all these functions are primitive recursive . Furthermore, these functions are unbounded in the primitive recursive functions in the sense that every primitive recursive function is eventually bounded by some horizontal level $A_m$ of the Ackermann function. It follows from this that the diagonal of the Ackermann function $n\mapsto A(n,n)$ is not primitive recursive. My question is about the sections of the Ackermann function taken on the other coordinate, the vertical sections of the Ackerman function, the functions $A^n(m)=A(m,n)$ , for fixed $n$ . Question. Are the vertical sections of the Ackermann function $A^n$ each primitive recursive? I don't expect the answer to depend on which particular version of the Ackermann function one uses, but just in case, let me mention another standard version, which has a smoother definition, which works better in many of the inductive arguments, even though the level functions are not as nice. \begin{array}{lcl}\operatorname {A} (0,n)&=&n+1\\\operatorname {A} (m+1,0)&=&\operatorname {A} (m,1)\\\operatorname {A} (m+1,n+1)&=&\operatorname {A} (m,\operatorname {A} (m+1,n))\end{array}
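The identities claimed above for this version of the Ackermann function can be checked directly with a memoized implementation (small arguments only: both the values and the recursion depth explode quickly):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return {1: 2, 2: 0}.get(m, 1)   # the special initial conditions
    return A(m - 1, A(m, n - 1))

assert all(A(1, n) == n + 2 for n in range(20))
assert all(A(2, n) == 2 * n for n in range(20))
assert all(A(3, n) == 2 ** n for n in range(9))
assert [A(4, n) for n in range(4)] == [1, 2, 4, 16]   # towers of n twos
```

Already $A(4,4) = 2^{2^{2^2}} = 65536$ strains a naive recursion, which is the fast-growing behavior the question is about.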
|
No, already $A(n,3)$ is not primitive recursive. Let me use the essentially equivalent up-arrow notation : $A(n,m)=2\uparrow^{n-1}m$ , and argue why $f(n)=2\uparrow^n 3$ is not PR. I claim $f(2n-2)\geq 2\uparrow^{n+1}n=A(n,n)$ . $n\mapsto A(n,n)$ outgrows all $n\mapsto A(m,n)$ , so by a classical argument it eventually outgrows all PR functions, and hence so will $f$ after we justify the claim. It is enough to show that for $m\geq 3$ we have $2\uparrow^n m\geq 2\uparrow^{n-1}(m+1)$ . We show this by induction on $m$ . Indeed, $2\uparrow^n m=2\uparrow^{n-1}(2\uparrow^n(m-1))$ , so it is enough to show $2\uparrow^n(m-1)\geq m+1$ . This holds true for $m=3$ as $2\uparrow^n 2=4$ for all $n$ , and for larger $m$ we have $2\uparrow^n(m-1)=2\uparrow^{n-1}(2\uparrow^n(m-2))\geq 2\uparrow^{n-1}m\geq m+1$ . Let me note that the bound $f(2n-2)\geq 2\uparrow^{n+1}n$ is far from optimal, in fact it should hold for $f(n+2)$ , at least for large enough $n$ .
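The small instances used in this argument can be checked directly; a minimal Python sketch of the up-arrow notation (only tiny cases are feasible, since the values explode):

```python
def up(a, n, b):
    # Knuth's up-arrow a ↑^n b, with ↑^1 being ordinary exponentiation.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

# 2 ↑^n 2 = 4 for every n, as used in the base case of the induction.
assert all(up(2, n, 2) == 4 for n in range(1, 6))

# The inequality 2 ↑^n m >= 2 ↑^{n-1} (m+1) for m >= 3, in tiny cases.
assert up(2, 2, 3) >= up(2, 1, 4)    # 16 >= 16
assert up(2, 2, 4) >= up(2, 1, 5)    # 65536 >= 32
assert up(2, 3, 3) >= up(2, 2, 4)    # 65536 >= 65536
```

The first inequality instance is tight, matching the remark that the bound in the answer is far from optimal in general but sharp in places.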
|
{
"source": [
"https://mathoverflow.net/questions/378725",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1946/"
]
}
|
378,777 |
Edward Nelson advocated weak versions of arithmetic (called predicative arithmetic) that couldn't prove the totality of exponentiation. Since his theory extends Robinson arithmetic, the incompleteness theorems apply to it. But if the incompleteness theorems are proven in theories stronger than those he accepts, he could presumably reject them. So my questions are first, did Nelson doubt either of the incompleteness theorems? And second, can the incompleteness theorems be proved in weak systems of arithmetic that don't prove the totality of exponentiation? The closest thing I can find to an answer is an excerpt from his book Predicative Arithmetic, in which he says on page 81 "at least one of these two pillars of finitary mathematical logic, the Hilbert-Ackermann Consistency Theorem and Gödel's Second Theorem, makes an appeal to impredicative concepts."
|
Gödel’s second incompleteness theorem requires neither exponentiation nor “impredicative concepts”. The systems Nelson works in are fragments of arithmetic interpretable on definable cuts in $Q$ ; one such fragment is the bounded arithmetic $I\Delta_0+\Omega_1$ (this appears to be what Nelson calls $Q_4$ in the Predicative arithmetic book). The theory $I\Delta_0+\Omega_1$ (and even weak fragments of it with more restricted induction, such as $PV_1$ ) is perfectly capable of proving the second incompleteness theorem (for theories with a polynomial-time set of axioms, which is not a real constraint).
|
{
"source": [
"https://mathoverflow.net/questions/378777",
"https://mathoverflow.net",
"https://mathoverflow.net/users/163672/"
]
}
|
378,976 |
$\DeclareMathOperator\Var{Var}\DeclareMathOperator\CRings{CRings}\DeclareMathOperator\Grp{Grp}\DeclareMathOperator\Sets{Sets}$ I'm not a logician/set theorist, and I have some questions on set theory and references that may seem "trivial" for experts. Still I ask the question – if you have references, this would be interesting. In algebraic geometry (see Hartshorne's book, Appendix A) the following theorem is proved: Let $\Var(k)$ be the "category of non-singular quasi-projective varieties over an algebraically closed field $k$ and morphisms of varieties over $k$ ". This category is defined in Hartshorne's book. Theorem 1.1. There is a unique intersection theory $A^*(X)$ for algebraic cycles on $X\in \Var(k)$ modulo rational equivalence satisfying the axioms A1–A7. The axioms A1–A7 are listed on pages 426–427 in the book. For a variety $X\in \Var(k)$ one defines
a commutative unital ring $A^*(X)$ – the Chow ring – and this construction is unique. There is only one way to do this, meaning there is a unique functor $A^*(-) : \Var(k) \rightarrow \CRings$ such that axioms A1–A7 hold. Here $\CRings$ is the category of commutative unital rings and maps of unital rings. In algebra one defines a group $(G, \bullet)$ as a set $G$ with an operation $\bullet: G\times G \rightarrow G$ satisfying $3$ axioms: G1 associativity, G2 existence of an identity, and G3 existence of inverses. One defines a morphism of groups and the "category of groups" $\Grp$ . Clearly the category of groups $\Grp$ contains non-isomorphic groups, so the axioms G1–G3 do not uniquely determine
a single group. There are many different groups satisfying G1–G3. In ZF set theory, set theorists write down 9 axioms ZF1–ZF9, and these axioms determine $\Sets$ – the "category of sets". $\Sets$ is a category with "sets" as objects and "maps between sets" as morphisms. We would like the category $\Sets$ to be uniquely determined by the axioms ZF1–ZF9, similarly to what happens for the Chow ring. Is it? Is there a unique category $\Sets$ fulfilling the axioms ZF1–ZF9? If yes I ask for a reference. For reference, Wikipedia has the page ZF set theory .
|
We say that a mathematical theory is categorical if it has exactly one model, up to isomorphism. We intend some theories to be categorical, for instance the Peano axioms for natural numbers, Euclid's planar geometry, and set theory. Other theories are designed not to be categorical, i.e., the theory of a group, the theory of a ring, etc. You are asking whether there are general theorems about categoricity, and whether in particular the Zermelo-Fraenkel set theory is categorical. First we have: Theorem: If a theory expressed in first-order logic is categorical, then it axiomatizes a unique (up to isomorphism) finite structure. Thus Zermelo-Fraenkel set theory and Peano arithmetic are not categorical. In fact, they both have many models: Theorem: (Löwenheim-Skolem theorem) If a theory expressed in first-order logic has an infinite model, then it has an infinite model of every cardinality. How should one react to these results? Perhaps we need not worry about it. So what if there are many models of Peano arithmetic and set theory? If we can accept the fact that there are many different groups, why not accept the fact that there are many different set theories? The mathematical universe just gets richer this way (but the search for "absolute truth" has to shift focus). We could also "blame" first-order logic for these undesirable phenomena. For instance, whereas Peano axioms do not "pin down" the natural numbers, the category-theoretic notion of the natural numbers object does: all natural number objects in a category are isomorphic (because they are all the initial algebras for the functor $X \mapsto 1 + X$ ). This is possible because the category-theoretic description speaks about the entire category , not just the object of natural numbers. 
We can in fact do the same for set theory: assuming a suitable "category of classes", the Zermelo-Fraenkel universe of sets can be characterized (uniquely up to isomorphism) as a certain initial "ZF-algebra" (this point of view has been studied in algebraic set theory ). Note however that from a foundational point of view we have not achieved much, as we just shifted the problem from sets to classes. As an algebraist, should you be worried that the category of sets is not "uniquely determined" by the Zermelo-Fraenkel axioms? I don't think so. Algebra is generally quite robust, and works equally well in all models of set theory. Of course, there are also parts of algebra that depend on the ambient set theory, but is that not a source of interesting mathematics?
|
{
"source": [
"https://mathoverflow.net/questions/378976",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
379,107 |
I'm fairly confident that the following assertion is true (but I will confess that I did not verify the octahedral axiom yet): Let $T$ be a triangulated category and $C$ any category (let's say small to avoid alarming my set theorist friends). Then, the category of functors $C \to T$ inherits a natural triangulated structure from T. By "natural" and "inherits" I mean that the shift map $[1]$ on our functor category sends each $F:C \to T$ to the functor $F[1]$ satisfying $F[1](c) = F(c)[1]$ on each object $c$ of $C$ ; and similarly, distinguished triangles of functors $$F \to G \to H \to F[1]$$ are precisely the ones for which over each object $c$ of $C$ we have a distinguished triangle in $T$ of the form $$F(c) \to G(c) \to H(c) \to F[1](c).$$ The main question is whether this has been written up in some standard book or paper (I couldn't find it in Gelfand-Manin for instance). Perhaps it is considered too obvious and relegated to an elementary exercise. Mostly, I am interested in inheriting t-structures and hearts from $T$ to functor categories $C \to T$ , and would appreciate any available reference which deals with such matters.
|
The statement is false. For example, take $C=[1]\times [1]$ to be a square and $\mathcal{T} = h\mathsf{Sp}$ to be the homotopy category of spectra. Now consider the square $X$ with $X(0,0) = S^2$ , $X(1,0) = S^1$ , and the other values zero, and the other square $Y$ with $Y(1,0) = S^1$ and $Y(1,1) = S^0$ . Take the maps $S^2 \to S^1$ and $S^1 \to S^0$ to be $\eta$ , and consider the natural transformation $X \to Y$ which is given by multiplication by 2 on $X(1,0)=S^1 \to S^1 = Y(1,0)$ . If this map had a cofiber, then, from the initial to final vertex we would get a map $S^3 \to S^0$ . Following the square one direction, we see that we would have some representative for the Toda bracket $\langle \eta, 2, \eta\rangle$ . Following the other direction, we factor through zero. But this Toda bracket consists of the classes $2\nu$ and $-2\nu$ ; in particular, it does not contain zero. [Of course, this example can be generalized to any nontrivial Toda bracket/Massey product in any triangulated category you're more familiar with.] Indeed, the Toda bracket is exactly the obstruction to 'filling in the cube' for the natural transformation $X \to Y$ . Anyway- this is one of many reasons to drop triangulated categories in favor of one of the many modern alternatives (e.g. stable $\infty$ -categories, derivators, etc.). As for t-structures and so on, in the land of stable $\infty$ -categories these are easy to come by. (See, e.g., Higher Algebra section 1.2.1 and Proposition 1.4.4.11 for various tricks for building these.)
|
{
"source": [
"https://mathoverflow.net/questions/379107",
"https://mathoverflow.net",
"https://mathoverflow.net/users/18263/"
]
}
|
379,323 |
Let $p(x)$ be a polynomial, $p(x) \in \mathbb{Q}[x]$ , and $p^{(m+1)}(x)=p(p^{(m)}(x))$ for any positive integer $m$ . If $p^{(2)}(x) \in \mathbb{Z}[x]$ , it does not follow that $p(x) \in \mathbb{Z}[x]$ . Is it possible to conclude that $p(x) \in \mathbb{Z}[x]$ if $p^{(2)}(x) \in \mathbb{Z}[x]$ and $p^{(3)}(x) \in \mathbb{Z}[x]$ ? More generally, suppose there exist positive integers $k_1 < k_2$ such that $p^{(k_1)}(x) \in \mathbb{Z}[x]$ and $p^{(k_2)}(x) \in \mathbb{Z}[x]$ . Does it follow that $p(x) \in \mathbb{Z}[x]$ ?
|
$\newcommand\ZZ{\mathbb{Z}}\newcommand\QQ{\mathbb{Q}}$ The statement is true. Notation : I'm going to change the name of the polynomial to $f$ , so that $p$ can be a prime. Fix a prime $p$ , let $\QQ_p$ be the $p$ -adic numbers, $\ZZ_p$ the $p$ -adic integers and $v$ the $p$ -adic valuation.
Let $\QQ_p^{alg}$ be an algebraic closure of $\QQ_p$ , then $v$ extends to a unique valuation on $\QQ_p^{alg}$ , which we also denote by $v$ . We recall the notion of a Newton polygon: Let $f(x) = f_0 + f_1 x + \cdots + f_d x^d$ be a polynomial in $\QQ_p[x]$ .
The Newton polygon of $f$ is the piecewise linear path from $(0, v(f_0))$ to $(d, v(f_d))$ which is the lower convex hull of the points $(j, v(f_j))$ .
We let the Newton polygon pass through the points $(j, N_j)$ , for $0 \leq j \leq d$ , and we set $s_j = N_j - N_{j-1}$ ; the $s_j$ are called the slopes of the Newton polygon. Since the Newton polygon is convex, we have $s_1 \leq s_2 \leq \cdots \leq s_d$ . There are two main Facts about Newton polygons: (Fact 1) Let $f$ and $\bar{f}$ be two polynomials and let the slopes of their Newton polygons be $(s_1, s_2, \ldots, s_d)$ and $(\bar{s}_1, \bar{s}_2, \ldots, \bar{s}_{\bar{d}})$ respectively. Then the slopes of $f \bar{f}$ are the list $(s_1, s_2, \ldots, s_d, \bar{s}_1, \bar{s}_2, \ldots, \bar{s}_{\bar{d}})$ , sorted into increasing order. (Fact 2) Let $\theta_1$ , $\theta_2$ , ... $\theta_d$ be the roots of $f$ in $\QQ_p^{alg}$ . Then, after reordering the roots appropriately, we have $v(\theta_j) = -s_j$ . Here is the lemma that does the main work: Lemma : Let $f$ be a polynomial in $\QQ_p[x]$ which is not in $\ZZ_p[x]$ , and suppose that the constant term $f_0$ is in $\ZZ_p$ . Then $f^{(2)}$ is not in $\ZZ_p[x]$ . Remark : An instructive example with $f_0 \not\in \ZZ_p$ is to take $p=2$ and $f(x) = 2 x^2 + 1/2$ , so that $f(f(x)) = 8 x^4 + 4 x^2+1$ . You might enjoy going through this proof and seeing why it doesn't apply to this case. Proof : We use all the notations related to Newton polygons above. Note that the leading term of $f^{(2)}$ is $f_d^{d+1}$ , so if $f_d \not\in \ZZ_p$ we are done; we therefore assume that $f_d \in \ZZ_p$ .
So $v(f_0)$ and $v(f_d) \geq 0$ , but (since $f \not\in \ZZ_p[x]$ ), there is some $j$ with $v(f_j) < 0$ . Thus the Newton polygon has both a downward portion and an upward portion.
Let the slopes of the Newton polygon be $s_1 \leq s_2 \leq \cdots \leq s_k \leq 0 \leq s_{k+1} \leq \cdots \leq s_d$ . Thus, $(k,N_k)$ is the most negative point on the Newton polygon; we abbreviate $N_k = -b$ and $N_d = a$ . Let $\theta_1$ , ..., $\theta_d$ be the roots of $f$ , numbered so that $v(\theta_j) = - s_j$ .
We have $f(x) = f_d \prod_j (x-\theta_j)$ and so $f^{(2)}(x) = f_d \prod_j (f(x) - \theta_j)$ . We will compute (part of) the Newton polygon of $f^{(2)}$ by merging the slopes of the Newton polygons of the polynomials $f(x) - \theta_j$ , as in Fact 1. Case 1: $1 \leq j \leq k$ . Then $v(\theta_j) = - s_j \geq 0$ . Using our assumption that $f_0 \in \ZZ_p$ , the constant term of $f(x) - \theta_j$ has valuation $\geq 0$ . Therefore, the upward sloping parts of the Newton polygons of $f(x)$ and of $f(x) - \theta_j$ are the same, so the list of slopes of Newton polygon of $f(x) - \theta_j$ ends with $(s_{k+1}, s_{k+2}, \ldots, s_d)$ . Thus, the height change of the Newton polygon from its most negative point to the right end is $s_{k+1} + s_{k+2} + \cdots + s_d = a+b$ . Case 2: $k+1 \leq j \leq d$ . Then $v(\theta_j) < 0$ , so the left hand point of the Newton polygon of $f(x) - \theta_j$ is $(0, v(\theta_j)) = (0, -s_j)$ , and the right hand point is $(d, v(f_d)) = (d, a)$ . We see that the total height change over the entire Newton polygon is $a+s_j$ and thus the height change of the Newton polygon from its most negative point to the right end is $\geq a+s_j$ . The right hand side of the Newton polygon of $f^{(2)}$ is at height $v(f_d^{d+1}) = (d+1) a$ . Since we shuffle the slopes of the factors together (Fact 1), the Newton polygon of $f^{(2)}$ drops down from its right endpoint by the sum of the height drops of all the factors. So the lowest point of the Newton polygon of $f^{(2)}$ is at least as negative as $$(d+1) a - k(a+b) - \sum_{j=k+1}^d (a+s_j).$$ We now compute $$(d+1) a - k(a+b) - \sum_{j=k+1}^d (a+s_j) = (d+1) a - k(a+b) - (d-k) a - \sum_{j=k+1}^d s_j$$ $$ = (d+1) a - k(a+b) - (d-k) a- (a+b)= -(k+1)b < 0 .$$ Since this is negative, we have shown that the Newton polygon goes below the $x$ -axis, and we win. $\square$ We now use this lemma to prove the requested results. 
Theorem 1: Let $g \in \QQ_p[x]$ and suppose that $g^{(2)}$ and $g^{(3)}$ are in $\ZZ_p[x]$ . Then $g \in \ZZ_p[x]$ . Proof : Note that $g(g(0))$ and $g(g(g(0)))$ are in $\ZZ_p$ . Put $$f(x) = g{\big (}x+g(g(0)){\big )} - g(g(0)).$$ Then $f^{(2)}(x) = g^{(2)}{\big (} x+g(g(0)) {\big )} - g(g(0))$ , so $f^{(2)}$ is in $\ZZ_p[x]$ .
Also, $f(0) = g^{(3)}(0) - g^{(2)}(0) \in \ZZ_p$ . So, by the contrapositive of the lemma, $f(x) \in \ZZ_p[x]$ and thus $g(x) \in \ZZ_p[x]$ . $\square$ We also have the stronger version: Theorem 2: Let $h \in \QQ_p[x]$ and suppose that $h^{(k_1)}$ and $h^{(k_2)}$ are in $\ZZ_p[x]$ for some relatively prime $k_1$ and $k_2$ . Then $h \in \ZZ_p[x]$ . Proof : Since $GCD(k_1, k_2) = 1$ , every sufficiently large integer $m$ is of the form $c_1 k_1 + c_2 k_2$ for $c_1$ , $c_2 \geq 0$ , and thus $h^{(m)}$ is in $\ZZ_p[x]$ for every sufficiently large $m$ .
Suppose for the sake of contradiction that $h(x) \not\in \ZZ_p[x]$ . Then there is some largest $r$ for which $h^{(r)}(x) \not\in \ZZ_p[x]$ . But for this value of $r$ , we have $h^{(2r)}$ and $h^{(3r)}$ in $\ZZ_p[x]$ , contradicting Theorem 1. $\square$ From this question on math.SE , I have recently learned that this question is from the 2019 Japanese Math Olympiad. (Fortunately, this question was asked in 2020.) I can't read Japanese, but if anyone is able to track down and translate the official solution, I'd be interested. Back when I was training for Olympiads in the late '90s, I remember that the Japanese solutions were always very clever and surprising.
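For anyone who wants to see the Remark's example concretely, here is a quick pure-Python check (exact rational arithmetic via the standard library; the `compose` helper is purely illustrative) that $f(x)=2x^2+1/2$ has $f^{(2)}\in \mathbb{Z}[x]$ while neither $f$ nor $f^{(3)}$ is integral — consistent with Theorem 1, which forbids $f^{(2)}$ and $f^{(3)}$ from both being integral when $f$ is not:

```python
from fractions import Fraction

def compose(p, q):
    # p, q are coefficient lists [c0, c1, ...]; returns p(q(x)) via Horner's scheme.
    result = [Fraction(0)]
    for c in reversed(p):
        new = [Fraction(0)] * (len(result) + len(q) - 1)
        for i, r in enumerate(result):
            for j, s in enumerate(q):
                new[i + j] += r * s   # result * q
        new[0] += c                   # ... + c
        result = new
    return result

def is_integral(p):
    return all(c.denominator == 1 for c in p)

# The Remark's example: f(x) = 1/2 + 2x^2, which is not in Z[x]
f = [Fraction(1, 2), Fraction(0), Fraction(2)]
f2 = compose(f, f)   # f^(2)(x) = 1 + 4x^2 + 8x^4, integral
f3 = compose(f, f2)  # f^(3), whose constant term f(1) = 5/2 is not integral
print(is_integral(f), is_integral(f2), is_integral(f3))  # False True False
```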
|
{
"source": [
"https://mathoverflow.net/questions/379323",
"https://mathoverflow.net",
"https://mathoverflow.net/users/70464/"
]
}
|
379,678 |
Given some sufficiently smooth function $f$ what conditions would be sufficient for its Fourier coefficients, as defined by $$
\hat{f}(n) := \int_{0}^{2\pi}\cos(nx)f(x)\ dx, \quad \text{for } n = 1,2,\ldots,
$$ to be monotonic? Given the decay properties of Fourier coefficients, the monotonicity result would translate to $$
|\hat{f}(n)| \geq |\hat{f}(n+1)|, \quad n = 1,2,\ldots.
$$ I haven't been able to find any literature regarding this and a result of this nature would be very interesting.
|
It suffices that $f$ be (the restriction to $[0,2\pi]$ of) a completely monotone real-valued function defined on $[0,\infty)$ . Indeed, then for some finite measure $\mu$ on $[0,\infty)$ and all real $x\ge0$ we have $$f(x)=\int_0^\infty\mu(da) e^{-a x},$$ whence for natural $n$ $$\hat f(n)=\int_0^\infty\mu(da) \int_0^{2\pi}dx\,\cos(nx)e^{-a x}
=\int_0^\infty\mu(da) \frac{a \left(1-e^{-2 \pi a}\right)}{a^2+n^2},$$ which is obviously decreasing in $n$ (to $0$ , by dominated convergence or by the Riemann--Lebesgue lemma). Note that, if $f(x)\equiv1$ or $f(x)\equiv x$ , then $\hat f(n)=0$ for all natural $n$ . So, if $f$ has the desired property, then the function $[0,2\pi]\ni x\mapsto a+bx+f(x)$ also has it for any real $a$ and $b$ . Also, clearly, if $f$ has the desired property, then so does the function $$[0,2\pi]\ni x\mapsto f^-(x):=f(2\pi-x)$$ -- because $\widehat{f^-}(n)=\hat f(n)$ for all natural $n$ . It follows that, if $f$ and $g$ have the desired property, then the function $$[0,2\pi]\ni x\mapsto a+bx+f(x)+g(2\pi-x)$$ also has it for any real $a$ and $b$ . Added: As noted in a comment by Fedor Petrov, if $f(x)=h(\pi-x)$ for some odd function $h$ and all $x\in[0,2\pi]$ , then $\hat f(n)=0$ for all natural $n$ . It follows from this answer by fedja that, if $$f(x)=\int_1^\infty[\mu(dp) x^p+\nu(dp)(2\pi-x)^p]<\infty$$ for some measures $\mu$ and $\nu$ on $[1,\infty)$ and all $x\in[0,2\pi]$ , then $f$ has the desired property.
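(A quick numerical sanity check of the displayed closed form and of the claimed monotonicity, for an arbitrarily chosen $a$; the trapezoid-rule integrator is only illustrative.)

```python
import math

def fhat(n, a, steps=50000):
    # Trapezoid rule for \int_0^{2pi} cos(nx) e^{-ax} dx
    h = 2 * math.pi / steps
    total = 0.5 * (1.0 + math.cos(2 * math.pi * n) * math.exp(-2 * math.pi * a))
    for k in range(1, steps):
        x = k * h
        total += math.cos(n * x) * math.exp(-a * x)
    return h * total

a = 1.3
closed = [a * (1 - math.exp(-2 * math.pi * a)) / (a * a + n * n) for n in range(1, 8)]
numeric = [fhat(n, a) for n in range(1, 8)]
print(max(abs(c - m) for c, m in zip(closed, numeric)))  # small
print(all(closed[i] > closed[i + 1] for i in range(6)))  # decreasing in n
```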
|
{
"source": [
"https://mathoverflow.net/questions/379678",
"https://mathoverflow.net",
"https://mathoverflow.net/users/160454/"
]
}
|
379,733 |
For example, consider the following problem $$\frac{\partial u}{\partial t} = k\frac{\partial^2 u}{\partial x^2},\hspace{0.5cm} u(x,0)=f(x),\hspace{0.5cm} u(0,t)=0,\hspace{0.5cm} u(L,t)=0$$ Textbooks (e.g., Paul's Online Notes ) usually apply separation of variables, assuming that $u(x,t)=\varphi(x)G(t)$ without any explanation why this assumption can be made. Do we lose any solutions that way given that there are functions of two variables $x$ and $t$ that are not products of functions of individual variables? Separation of variables gives the following solution when we consider only boundary conditions: $$u_n(x,t) = \sin\left(\frac{n\pi x}{L}\right)e^{-k\left(\frac{n\pi}{L}\right)^2t},\hspace{0.5cm}n=1,2,3,\dotsc.$$ The equation is linear, so we can take a superposition of $u_n$ : $$u(x,t) = \sum\limits_{n=1}^{\infty}B_n\sin\left(\frac{n\pi x}{L}\right)e^{-k\left(\frac{n\pi}{L}\right)^2t}$$ where $B_n$ are found from the initial condition: $$B_n = \frac{2}{L}\int\limits_0^Lf(x)\sin\left(\frac{n\pi x}{L}\right)dx,\hspace{0.5cm}n=1,2,3,\dotsc.$$ Are there solutions $u(x,t)$ that cannot be represented this way (not for this particular pde but in general)? What happens in the case of non-linear equations? Can we apply separation of variables there?
|
Consider your purported solution $u(x,t)$ at fixed $t$ , i.e., think of it as a function only of $x$ . Such a function can be expanded in a complete set of functions $f_n (x)$ , $$
u(x,t)=\sum_{n} u_n f_n (x)
$$ What happens when you now choose a different fixed $t$ ? As long as the boundary conditions in the $x$ direction don't change (which is the case in your example), you can still expand in the same set $f_n (x)$ , so the only place where the $t$ -dependence enters is in the coefficients $u_n $ - they are what changes when you expand a different function of $x$ in the same set of $f_n (x)$ . So the complete functional dependence of $u(x,t)$ can be written as $$
u(x,t)=\sum_{n} u_n (t) f_n (x)
$$ Thus, when we make a separation ansatz, we are not assuming that our solutions are products. We are merely stating that we can construct a basis of product form in which our solutions can be expanded. That is not a restriction for a large class of problems. As is evident from the preceding argument, this goes wrong when the boundary conditions in the $x$ direction do depend on $t$ - then we cannot expand in the same set $f_n (x)$ for each $t$ . For example, if the domain were triangular such that the length of the $x$ -interval depends on $t$ , the frequencies in the sine functions in your example would become $t$ -dependent.
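To make the expansion argument concrete for the original heat-equation example, here is a small numerical sketch ($L$, $k$, and the initial profile are arbitrary choices of mine; the quadrature is crude but sufficient): the coefficients $u_n(t) = B_n e^{-k(n\pi/L)^2 t}$ are computed from the initial data, and the truncated series recovers $f$ at $t=0$ while respecting the boundary conditions for all $t$.

```python
import math

L, k = 1.0, 0.5
f = lambda x: x * (L - x)   # sample initial condition with f(0) = f(L) = 0

def B(n, steps=8000):
    # B_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx, trapezoid rule
    # (the endpoint terms vanish because sin(0) = sin(n pi) = 0)
    h = L / steps
    s = sum(f(j * h) * math.sin(n * math.pi * j * h / L) for j in range(1, steps))
    return 2.0 / L * h * s

coeffs = [B(n) for n in range(1, 51)]

def u(x, t):
    return sum(c * math.sin(n * math.pi * x / L) * math.exp(-k * (n * math.pi / L) ** 2 * t)
               for n, c in enumerate(coeffs, start=1))

print(u(0.3, 0.0), f(0.3))     # series recovers the initial condition
print(u(0.0, 0.7), u(L, 0.7))  # boundary conditions hold
```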
|
{
"source": [
"https://mathoverflow.net/questions/379733",
"https://mathoverflow.net",
"https://mathoverflow.net/users/124262/"
]
}
|
379,921 |
Context: A submission to a very good generalist journal X received one positive referee report recommending publication and two shorter opinions which both deemed the paper a solid and valuable contribution and thus worthy of publication but perhaps not a priority given the backlog this journal has, thus only weakly recommending publication. Given the fairly high standing of the journal, this logically resulted in a rejection. Questions: Would it be appropriate upon resubmission to a new journal Y to inform the editor of Y of the existing detailed referee report at X? (To be clear: I do not mean sending them the actual report, just making them aware of its existence.) Upshot: the referee's time is not wasted, in particular as the detailed referee report from X is the most thorough report I have ever received (out of over 100), the paper is quite technical in places and the expert referee has spent a considerable amount of time on it. Drawback: The paper starts on the bad footing of being a rejected one, but I think this should not be a big problem if Y is e.g. a specialist journal or if Y is deemed not as "highly ranked" as X. Moreover, during the review process at X the referee asked for some clarifications and suggested some improvements, which has led to a better paper. I think that even when resubmitting elsewhere we should keep the thank you to an anonymous referee for suggested improvements, which already shows the paper was rejected, thus the drawback is there anyway. If the answer is yes to 1), should one ask the editor of X beforehand whether it is ok to pass on their contact details to potentially transfer the referee report to the editor at Y? Personally I think the answer is "Yes" to both questions but hearing other people's perspective would be very interesting.
|
For many journals the referee is asked to tick a box when they submit their report to indicate whether or not they (1) allow the report to be used for another journal and (2) whether their identity may be disclosed to the editors of that other journal. Given this practice, the answer to question 1 would be a "yes". Whether or not the report is then actually transferred from journal X to journal Y (anonymously or with the identity of the referee disclosed) would be something journal X decides based on whether or not the referee authorized them to do so. Concerning question 2, I don't think you need permission beforehand from journal X when you inform journal Y of the existence of a detailed report. This complication will hopefully become a thing of the past, when more and more journals migrate to a practice where referee reports are public documents, even if anonymous.
|
{
"source": [
"https://mathoverflow.net/questions/379921",
"https://mathoverflow.net",
"https://mathoverflow.net/users/130882/"
]
}
|
379,946 |
I am looking for an explicit (preferably simple) example of an ODE with time-independent coefficients in $\mathbb{R}^3$ such that there does not exist an Euler-Lagrange equation $$\frac{\partial L}{\partial q^i}=\frac{d}{dt}\frac{\partial L}{\partial \dot q^i}, \, \, i=1,2,3,$$ with the same solutions. I prefer to distinguish two cases: either $L$ depends explicitly on time or not.
|
Note: I'm updating my answer to give a better (i.e., simpler) example plus a little more information about how to derive the example from Douglas' results (which may not be entirely clear upon first reading of his paper). This also addresses the question of time-dependent Lagrangians originally raised by the OP. Have a look at Jesse Douglas' paper Solution of the inverse problem of the calculus of variations, Trans. Amer. Math. Soc. 50 (1941), 71–128. In this paper, Douglas derives necessary conditions for a system of second-order equations in any number of dependent variables to be the Euler-Lagrange equations of a first-order functional. He shows how to reduce the problem to an overdetermined linear system whose compatibility can be checked by differentiation. For example, it's a consequence of his results that the system $$
\ddot x = 0,\quad\ddot y = 0,\quad \ddot z = (y^2+z^2)\tag1
$$ is not equivalent (in the sense of having the same solutions) to the Euler-Lagrange equations for any nondegenerate first-order functional $$
\int \phi(t,x,y,z,\dot x, \dot y,\dot z)\,\mathrm{d}t,\tag2
$$ where 'nondegenerate' has the usual meaning that the Hessian of $\phi$ with respect to the variables $(\dot x, \dot y,\dot z)$ is invertible. What Douglas does is show that the problem of finding such a $\phi$ is equivalent to finding a nondegenerate solution of an overdetermined linear system of equations for the components of the Hessian of $\phi$ with respect to the variables $(\dot x, \dot y,\dot z)$ . This is his system (4.7–10) together with the nondegeneracy condition (4.11). For the given right-hand sides in the above system (1), one easily finds that Douglas' system (4.7–10) implies $$
\frac{\partial^2\phi}{\partial\dot x\,\partial\dot z}
=\frac{\partial^2\phi}{\partial\dot y\,\partial\dot z}
=\frac{\partial^2\phi}{\partial\dot z\,\partial\dot z} = 0,
$$ which implies that $\phi$ cannot be nondegenerate. Remark: Most of Douglas' paper concerns explicitly working out, in the case of two dependent variables, the consequences of the criterion he derives in Part II for an arbitrary number of dependent variables. Part II is quite short and readable while the later parts are much more technical.
|
{
"source": [
"https://mathoverflow.net/questions/379946",
"https://mathoverflow.net",
"https://mathoverflow.net/users/16183/"
]
}
|
380,206 |
At the age of 16, Leonhard Euler defended his Master's Thesis, where he discussed and compared Descartes' and Newton's approaches to planet motion. I don't know anything else about it. In particular, I don’t know what position the young Euler supported. Is there any modern account of this dissertation? In English or French? Edit . The only source I know of is of dubious value, to say the least. On the occasion of a celebration of Euler's tri-centenary, I was offered a comics, authored by Andreas K. & Alice K. Heyne (illustrations by Elena S. Pini). There is a lot of good to say about it. But on page 10, block 3.1, Euler is concluding is defense with the words ... so the planets are dragged along by aether vortices. I wonder whether the authors have any source to support this citation. By the way, I wish to mention that it took a considerable time for Newton's theory of gravitation to be accepted in France, and more generally in continental Europe. Descartes' reputation was so high that any contradiction to his writings was a priori rejected. The Principia were published in 1687, but they penetrated the French scientific community only once Emilie du Chatelet translated them circa 1745, on Voltaire's request.
|
Martin Mattmüller, in his article Leonhard Euler, seine Heimatstadt und ihre Universität ("Leonhard Euler, his hometown and its university") on Euler's hometown Basel, writes that this public talk (not a dissertation or written thesis), which Euler gave in 1724, is lost, and that it is not known which position he supported. Euler had already obtained his magister degree in 1723.
|
{
"source": [
"https://mathoverflow.net/questions/380206",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8799/"
]
}
|
380,828 |
In my recent researches, I encountered functions $f$ satisfying the following functional inequality: $$
(*)\; f(x)\geq f(y)(1+x-y) \; ; \; x,y\in \mathbb{R}.
$$ Since $f$ is convex (because $\displaystyle f(x)=\sup_y [f(y)+f(y)(x-y)]$ ), it is left and right differentiable. Also, it is obvious that all functions of the form $f(t)=ce^t$ with $c\geq 0$ satisfy $(*)$ . Now, my questions: (1) Is $f$ everywhere differentiable? (2) Are there any other solutions for $(*)$ ? (3) Is this functional inequality well-known (any references (paper, book, website, etc.) for such functional inequalities)? Thanks in advance
|
Replace $x$ with $x+y$ to get $f(x+y)\ge f(y)(1+x)$ or $f(x+y)-f(y)\ge xf(y)$ .
Replace $y$ with $x+y$ and then interchange $x$ and $y$ to get $f(x+y)-f(y)\le xf(x+y)$ .
Together, $$
xf(y)\le f(x+y)-f(y)\le xf(x+y).
$$ Dividing by $x$ and taking the limit as $x\to0$ implies that $f$ is differentiable with $f'=f$ .
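A quick brute-force check (illustrative only) that $ce^t$ satisfies $(*)$ — for $f = e^t$ it reduces to $e^{x-y}\ge 1+(x-y)$ — while a generic convex function such as $t^2$ does not:

```python
import math

def satisfies_star(f, pts, tol=1e-12):
    # Check f(x) >= f(y)*(1 + x - y) on all pairs from a grid.
    return all(f(x) >= f(y) * (1 + x - y) - tol for x in pts for y in pts)

pts = [i / 10 for i in range(-50, 51)]  # grid on [-5, 5]

print(satisfies_star(math.exp, pts))                   # True  (c = 1)
print(satisfies_star(lambda t: 3 * math.exp(t), pts))  # True  (c = 3)
print(satisfies_star(lambda t: t * t, pts))            # False (fails e.g. at x=4, y=3)
```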
|
{
"source": [
"https://mathoverflow.net/questions/380828",
"https://mathoverflow.net",
"https://mathoverflow.net/users/40520/"
]
}
|
381,357 |
I am wondering if the orthogonal group $O_n({\bf Q})$ is dense in $O_n({\bf R})$ ? It is easily checked for $n = 2$ but I think that there is a general principle concerning compact algebraic groups underneath.
|
There's an easy argument based on the Cayley transform: If $a$ is a skew-symmetric $n$ -by- $n$ real matrix, then $I_n+a$ is invertible (since $(I_n-a)(I_n+a)=I_n-a^2$ is a positive definite symmetric matrix and hence invertible), and $$
A = (I_n-a)(I_n+a)^{-1}
is orthogonal (i.e., $AA^T = I_n$ ). Note that $(I_n+A)(I_n+a) = 2I_n$ , so $I_n+A$ is invertible. Conversely, if $A$ is an orthogonal $n$ -by- $n$ matrix such that $I_n+A$ is invertible, one can solve the above equation uniquely in the form $$
a = (I_n+A)^{-1}(I_n-A) = -a^T.
$$ This establishes a rational 'parametrization' (known as the Cayley transform) of $\mathrm{SO}_n(\mathbb{R})$ . Plainly, $a$ has rational entries if and only if $A$ has rational entries. The density of $\mathrm{O}_n(\mathbb{Q})$ in $\mathrm{O}_n(\mathbb{R})$ follows immediately.
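As a sanity check, one can run the Cayley transform in exact rational arithmetic: starting from a rational skew-symmetric $a$, the matrix $A=(I_3-a)(I_3+a)^{-1}$ comes out orthogonal with rational entries. (Pure-stdlib sketch; the Gauss–Jordan inverse and the particular $a$ are only illustrative.)

```python
from fractions import Fraction as Fr

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_inv(M):
    # Gauss-Jordan elimination over the rationals
    n = len(M)
    aug = [row[:] + [Fr(int(i == j)) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

a = [[Fr(0), Fr(1, 2), Fr(-1, 3)],
     [Fr(-1, 2), Fr(0), Fr(2, 5)],
     [Fr(1, 3), Fr(-2, 5), Fr(0)]]          # rational, skew-symmetric
I = [[Fr(int(i == j)) for j in range(3)] for i in range(3)]
IplusA = [[I[i][j] + a[i][j] for j in range(3)] for i in range(3)]
IminusA = [[I[i][j] - a[i][j] for j in range(3)] for i in range(3)]
A = mat_mul(IminusA, mat_inv(IplusA))       # the Cayley transform of a
At = [[A[j][i] for j in range(3)] for i in range(3)]
print(mat_mul(A, At) == I)                  # True: A A^T = I, exactly
```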
|
{
"source": [
"https://mathoverflow.net/questions/381357",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6129/"
]
}
|
381,379 |
Bernstein polynomials preserves nicely several global properties of the function to be approximated: if e.g. $f:[0,1]\to\mathbb R$ is non-negative, or monotone, or convex; or if it has, say, non-negative $17$ -th derivative, on $[0,1]$ , it is easy to see that the same holds true for the polynomials $B_nf$ . In particular, since all $B_n$ fix all affine functions, if $f\le ax+b$ , also $B_nf(x)\le ax+b$ , whence it follows immediately $B_nf\le f$ for concave $f$ . On the other hand, comparing $B_nf$ and $B_{n+1}f$ turns out to be harder, due to the different choice of nodes where $f$ is evaluated. Consider for instance the Bernstein polynomials of the function $\sqrt{x}$ , $$p_n(x):=\sum_{k=0}^n{n\choose k}\sqrt{\frac kn}\,x^k(1-x)^{n-k}.$$ Question: Is this sequence of polynomial increasing? More generally, when is $B_nf$ increasing? Some tentative approaches and remarks. 1. To compare $p_{n+1}$ with $p_n$ we may write the binomial coefficients in the expression for $p_{n+1}(x)$ as ${n+1\choose k}={n\choose k}+{n\choose k-1}$ ; splitting correspondingly the sum into two sums, and shifting the index in the latter,
we finally get $$p_{n+1}(x)-p_n(x)=\sum_{k=0}^n{n\choose k}\bigg[x\sqrt{\frac{k+1}{n+1}}+(1-x)\sqrt{\frac k{n+1}}-\sqrt{\frac kn}\,\bigg]x^k(1-x)^{n-k},$$ which at least has non-negative terms approximately for $\frac kn<x$ ; this is still not decisive. 2. Monotonicity of the sequence $B_nf$ is somewhat reminiscent of that of the real exponential sequence $\big(1+\frac xn\big)^n$ . Precisely, let $\delta_n:f\mapsto \frac{f(\cdot+\frac1n)-f(\cdot)}{\frac1n}$ denote the discrete difference operator, and $e_0:f\mapsto f(0)$ the evaluation at $0$ . Then the Bernstein operator $f\mapsto B_nf$ can be written as $B_n=e_0\displaystyle \Big({\bf 1} + \frac{x\delta_n}n\Big)^n$ (which, at least for analytic functions, converges to the Taylor series $e^{xD}$ at $0$ ). Unfortunately, the analogy seems to stop here. 3. The picture below shows the graphs of $\sqrt x$ and of the first ten $p_n(x)$ .
(The convergence is somewhat slow; indeed it is $O(n^{-1/4})$ , as per Kac's general estimate $O(n^{-\frac \alpha2})$ for $\alpha$ -Hölder functions). The picture leaves some doubts about the endpoints; yet there should be no surprise, since $p_n(0)=0$ , $p_n'(0)=\sqrt{n}\uparrow+\infty$ , $p_n(1)=1$ , $p_n'(1)=\frac1{1+\sqrt{1-\frac1n}}\downarrow\frac12$ .
|
As noted by Paata Ivanishvili, if $f$ is concave on $[0,1]$ , then the Bernstein polynomials $B_n(f,p)$ are increasing in $n$ .
Here is a probabilistic proof: Let $I_j$ for $j \ge 1$ be independent variables taking value 1 with probability $p$ and $0$ with probability $1-p$ . Then $X_n:=\sum_{j=1}^n I_j$ has a binomial Bin $(n,p)$ distribution and the Bernstein polynomial can be written as $B_n(f,p)=E[f(X_n/n)]$ . Now for every $j \in [1,n+1]$ , the random variable $Y_j=Y_j(n)=X_{n+1}-I_j$ also has a Bin $(n,p)$ distribution
and $$ {X_{n+1}} = {\sum_{j=1}^{n+1} (Y_j/n)} \, .$$ For concave $f$ , Jensen's inequality gives $$ f \left(\frac{\sum_{j=1}^{n+1} (Y_j/n)}{n+1} \right) \ge \left(\frac{\sum_{j=1}^{n+1} f(Y_j/n)}{n+1} \right) $$ whence $$B_{n+1}(f,p)=E f \left(\frac{X_{n+1}}{n+1}\right)=E f \left(\frac{\sum_{j=1}^{n+1} (Y_j/n)}{n+1} \right) \ge E \left(\frac{\sum_{j=1}^{n+1} f(Y_j/n)}{n+1} \right) =B_n(f,p) $$
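The monotonicity (for concave $f$) is easy to test numerically; a small sketch using the probabilistic formula $B_n(f,p)=E[f(X_n/n)]$ directly:

```python
import math

def bernstein(f, n, p):
    # B_n(f, p) = E[f(X_n/n)], with X_n ~ Binomial(n, p)
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) * f(j / n)
               for j in range(n + 1))

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    vals = [bernstein(math.sqrt, n, p) for n in range(1, 40)]
    # increasing in n, and bounded above by sqrt(p) (since sqrt is concave)
    assert all(v <= w + 1e-12 for v, w in zip(vals, vals[1:]))
    assert vals[-1] <= math.sqrt(p)
print("B_n(sqrt, p) is increasing in n at all sampled p")
```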
|
{
"source": [
"https://mathoverflow.net/questions/381379",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6101/"
]
}
|
381,456 |
By density of primes, I mean the proportion of integers between $1$ and $x$ which are prime. The prime number theorem says that this is asymptotically $1/\log(x)$ . I want something much weaker, namely that the proportion just goes to zero, at whatever rate. And I want the easiest proof possible. The simplest proof I know uses estimates involving the binomial coefficient $\binom{2n}{n}$ , but the argument still feels a bit involved. Does anyone know an even simpler proof that the density of primes goes to zero?
|
I'm summarising the discussion in GH from MO's answer as a separate answer for clarity. The fact that the primes have (natural) density zero can be deduced from a (seemingly) more general statement: Theorem Let $1 < n_1 < n_2 < \dots$ be a sequence of natural numbers that are pairwise coprime. Then this sequence has zero (natural) density. Proof There are two cases, depending on whether the sum $\sum_{k=1}^\infty \frac{1}{n_k}$ diverges or not. Case 1: $\sum_{k=1}^\infty \frac{1}{n_k} < \infty$ . Then for any $\varepsilon>0$ , the density of $n_k$ inside a dyadic block $[2^j,2^{j+1})$ must be less than $\varepsilon$ for all but finitely many $j$ . From this one easily verifies that the $n_k$ have natural density zero. Case 2: $\sum_{k=1}^\infty \frac{1}{n_k} = \infty$ . Then $\prod_{k=1}^\infty (1-\frac{1}{n_k})=0$ . Thus for any $\varepsilon > 0$ , there exists a finite $K$ such that $\prod_{k=1}^K (1 - \frac{1}{n_k}) \leq \varepsilon$ . On the other hand, by the Chinese remainder theorem and the pairwise coprimality hypothesis, the set of natural numbers coprime to all of $n_1,\dots,n_K$ has density at most $\prod_{k=1}^K (1 - \frac{1}{n_k}) \leq \varepsilon$ . Since this set contains all but finitely many of the $n_j$ by hypothesis, the $n_j$ have zero natural density. $\Box$ Informally, the pairwise coprimality hypothesis produces a competition between the small values of $n_k$ and the large values of $n_k$ ; if there are too many small values then there can't be too many large values. In particular pairwise coprimality is incompatible with positive (upper) natural density. (If one tries to occupy any dyadic block $[2^j,2^{j+1})$ with $n_k$ 's to density at least $\varepsilon$ , this will thin out the set of possible candidates for (much) larger $n_k$ by a factor of approximately $1-\varepsilon$ . So if enough dyadic blocks attain this density, the set of candidates will eventually have its density reduced to at most $\varepsilon$ . 
So having a non-zero density in this sequence is ultimately "self-defeating", and one has no choice but to eventually concede that the density is zero.) In the specific case that $n_k$ is the sequence of primes, one can skip Case 1 by supplying a separate proof of Euler's theorem $\sum_p \frac{1}{p} = \infty$ (or equivalently $\prod_p (1-\frac{1}{p}) = 0$ ). For instance one can use the Euler product identity $\prod_p (1-\frac{1}{p})^{-1} = \sum_n \frac{1}{n}$ . Remark This argument is completely ineffective as it does not provide any explicit decay rate on the density of the $n_k$ inside any fixed large interval $[1,x]$ . However, effective bounds for this theorem can be obtained by other means. Indeed, by replacing each of the $n_k$ with an arbitrary prime factor we see that the number of $n_k$ in $[1,x]$ cannot exceed the number $\pi(x)$ of primes in $[1,x]$ , and this bound is of course optimal.
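(Though the argument is ineffective, the conclusion is easy to watch numerically; a throwaway sieve, for illustration only:)

```python
from bisect import bisect_right

def primes_up_to(n):
    # Sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

primes = primes_up_to(1 << 20)
# density pi(x)/x along x = 2^4, 2^5, ..., 2^20
densities = [bisect_right(primes, 1 << j) / (1 << j) for j in range(4, 21)]
print(densities[0], densities[-1])  # 0.375 down to about 0.078
```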
|
{
"source": [
"https://mathoverflow.net/questions/381456",
"https://mathoverflow.net",
"https://mathoverflow.net/users/126543/"
]
}
|
381,527 |
Apologies if the answer is trivial, this is far from my domain.
In order to define the field of Hahn series , one needs the following fact: if $A,B$ are two well-ordered subsets of $\mathbb{R}$ (or any ordered group — with the induced order of course), the subset $A+B:=\{a+b\,|\,a\in A,b\in B\} $ is well-ordered. How does one see that?
|
Ramsey theory! Suppose $A + B$ is not well-ordered. Then there is a strictly decreasing sequence $a_1 + b_1 > a_2 + b_2 > \cdots$ . Observe that for any $i < j$ , either $a_i > a_j$ or $b_i > b_j$ (or both). Make a graph with vertex set $\mathbb{N}$ by putting an edge between $i$ and $j$ if $a_i > a_j$ , for any $i < j$ . By the countably infinite Ramsey theorem, there is either an infinite clique or an infinite anticlique, and hence either a strictly decreasing sequence in $A$ or a strictly decreasing sequence in $B$ , contradiction.
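The Ramsey step has a finite analogue that can be checked computationally (an Erdős–Szekeres-style bound rather than the infinite Ramsey theorem used above): among $N$ pairs $(a_i, b_i)$ whose sums strictly decrease, for $i < j$ we must have $a_i > a_j$ or $b_i > b_j$, and one of the two coordinates must carry a strictly decreasing subsequence of length at least $\sqrt{N}$. A rough Python sketch with invented names:

```python
import math
import random

def longest_decreasing(xs):
    """Length of the longest strictly decreasing subsequence (O(n^2) DP)."""
    n = len(xs)
    best = [1] * n
    for j in range(n):
        for i in range(j):
            if xs[i] > xs[j]:
                best[j] = max(best[j], best[i] + 1)
    return max(best, default=0)

random.seed(0)
N = 100
a = [random.uniform(0, 50) for _ in range(N)]
b = [1000 - i - a[i] for i in range(N)]  # forces a_i + b_i = 1000 - i, strictly decreasing

# The dichotomy from the answer: for i < j, either a_i > a_j or b_i > b_j.
assert all(a[i] > a[j] or b[i] > b[j] for j in range(N) for i in range(j))

# Finite Ramsey/Erdős–Szekeres conclusion: one coordinate has a decreasing
# subsequence of length at least sqrt(N).
L = max(longest_decreasing(a), longest_decreasing(b))
print(L, math.isqrt(N))
```

In the infinite setting, Ramsey's theorem upgrades "a long decreasing subsequence in one coordinate" to "an infinite decreasing subsequence", which is the contradiction with well-ordering.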
|
{
"source": [
"https://mathoverflow.net/questions/381527",
"https://mathoverflow.net",
"https://mathoverflow.net/users/40297/"
]
}
|
381,620 |
Something I learned (probably in middle school) that always bothered me is that the truth values of "and" and "but" are basically the same. If you were going to assign a truth-functional interpretation of "but" in first-order logic, it would be the same as "and". There's been an explosion of logical systems that are alternatives to first-order logic, such as fuzzy logic. Is there a logical system that can distinguish "and" and "but"?
|
Interpreting “ $X \text{ but } Y$ ” as $$X \wedge Y \wedge \diamond(X\wedge\neg Y)$$ is a reasonable starting point. (“X and Y and it would be possible to have X and not Y”.) This works for the basic examples I found in online dictionaries:
“He was poor but proud”
“She’s 83 but she still goes swimming every day”
“My brother went but I did not”
“He stumbled but did not fall”
“She fell but wasn’t hurt”
This correctly identifies that “he is a bachelor but unmarried” is not an appropriate use of “but”. And this also shows the difference between such examples as:
“That comment was harsh but fair.” (It was harsh and fair, while some comments are harsh and unfair.)
“That comment was fair but harsh.” (It was fair and harsh, while some comments are fair and compassionate.)
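The proposed reading $X \wedge Y \wedge \diamond(X\wedge\neg Y)$ can be evaluated mechanically in a toy Kripke-style model, where $\diamond$ quantifies over a finite set of alternative worlds. The worlds and atom names below are invented purely for illustration:

```python
def but(x, y, actual, alternatives):
    """Evaluate 'x but y': x and y hold in the actual world, and some
    alternative world makes x true and y false (the diamond clause)."""
    possibly_x_not_y = any(w[x] and not w[y] for w in alternatives)
    return actual[x] and actual[y] and possibly_x_not_y

# "He was poor but proud": felicitous, since a poor-and-not-proud world exists.
worlds = [
    {"poor": True, "proud": True},    # the actual world
    {"poor": True, "proud": False},   # a possible world: poor and not proud
    {"poor": False, "proud": True},
]
print(but("poor", "proud", worlds[0], worlds))  # True

# "He is a bachelor but unmarried": infelicitous, since no accessible world
# has a married bachelor, so the diamond clause fails.
bach_worlds = [
    {"bachelor": True, "unmarried": True},
    {"bachelor": False, "unmarried": False},
]
print(but("bachelor", "unmarried", bach_worlds[0], bach_worlds))  # False
```

The only difference from plain conjunction is the diamond clause, which is exactly what separates "but" from "and" in this analysis.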
|
{
"source": [
"https://mathoverflow.net/questions/381620",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3711/"
]
}
|
381,908 |
Many mathematical subfields often use the axiom of choice and proofs by contradiction. I heard from people supporting constructive mathematics that often one can rewrite the definitions and theorems so that both the axiom of choice and proofs by contradiction aren't needed anymore. An example is the theory of locales . This is a reformulation of topology that doesn't need the axiom of choice to prove analogues of results that classically need some form of choice, such as Tychonoff's theorem . I wonder: Are there some tricks behind this "reformulation"? When I have a theorem with a classical proof, how can I reformulate it so that it is provable constructively? In case this question is too broad, I would be interested in the following example: The usual proof of Gödel's completeness theorem by Henkin isn't constructive. Is there some easy way to reformulate the theorem and the proof constructively? Or is it necessary to define, with much non-trivial work, categorical semantics and state the theorem in this new context?
|
If you want a "general method" that "always works" to turn a classical theorem into a constructive one, there are double-negation translations : if you add enough $\neg\neg$ s to a classical theorem, you can make a constructively provable statement. However, this rarely produces a constructively meaningful result. Aside from this, there are no fully general methods, but there are general heuristic (and, in some cases, formalizable) techniques and tricks that can be applied. The most basic technique is learning to recognize when uses of excluded middle, proof by contradiction, and choice are totally unnecessary. This involves inspecting definitions as well as theorems and proofs, looking for occurrences of negated statements that can be turned into classically-equivalent positive ones. For instance, if you defined an injective function to be one such that if $x\neq y$ then $f(x)\neq f(y)$ , then using that definition is going to involve a lot of excluded middle; but if you remove the unnecessary contrapositive and define it to mean that if $f(x)=f(y)$ then $x=y$ , many proofs immediately become more constructive. Another technique is to replace a negated statement by a classically-equivalent positive one. For instance, if $x$ and $y$ are real numbers, then in constructive analysis we generally replace $x\neq y$ by $x \mathrel{\#} y$ (" $x$ is apart from $y$ "), meaning that they are at least some positive rational distance apart. To a certain extent, this can even be formalized: in Linear logic for constructive mathematics I showed that if a proof can be written in a certain variant of affine logic, then it can automatically be interpreted constructively if enough negative statements are replaced by positive ones in this way. A third technique is to avoid asking for the bare existence of "ideal" objects, such as ultrafilters, prime ideals, zeros of functions, etc. Instead we can replace these by appropriate "approximating systems".
For instance, without some form of choice we may not be able to prove that some spectrum has enough points (filters or ideals) to be a nontrivial topological space; but we can instead consider the frame in which those filters would exist as representing a point-free space (locale). Somewhat analogously, without excluded middle we can't prove the usual Intermediate Value Theorem about the actual existence of some zero; but instead we can approximate such a zero by finding a point whose image is as close to zero as desired. These replacements aren't always classically equivalent. For instance, there are also constructive versions of the intermediate value theorem in which the hypotheses on the function are strengthened, e.g. to say that it never "hovers" around zero. Such strengthenings can often be found by inspecting the proof to see what is "really used" once excluded middle is pared away; often it happens that these stronger hypotheses are satisfied by most practical applications anyway. These are just a few of the ideas that occur to me; other more dedicated constructivists will probably be able to list many more. Note that a common theme in many of these techniques is to pay attention to computability. Mathematics (even constructive mathematics) always involves idealizations and infinite objects (unless you're a dyed-in-the-wool finitist/predicativist), but certain kinds of "completed infinity" are impossible to compute with, so we can make them more constructive by thinking about how we might represent an approximation computationally, or what actual data would witness the truth of some negative statement. Finally, regarding Gödel's completeness theorem, as you may know it is equivalent to either weak König's lemma (if the language is countable) or the Boolean Prime Ideal Theorem (if the language is arbitrary). Since these are known to be unprovable constructively, so is the completeness theorem as Gödel stated it. 
I would say that the version in categorical semantics is the "correct" constructive completeness theorem; sometimes it's unavoidable that a certain amount of nontrivial work is required to turn something into a constructive version (for instance, locales also require some work vis-a-vis topological spaces). This can be regarded as an instance of the general method of "avoiding ideal objects": instead of a single ideal set-model that requires non-constructive principles for its existence, we consider the collection of more concrete categorical models.
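The "approximate intermediate value theorem" mentioned above has a concrete computational face: instead of the bare existence of a zero, one produces a point whose image is within any desired $\varepsilon$ of zero. A rough sketch (classical bisection used for illustration; a fully constructive proof would avoid the exact sign test below, e.g. by working with a trisection-style argument, but the computed witness is the same):

```python
def approx_zero(f, lo, hi, eps):
    """Find x with |f(x)| < eps, for continuous f with f(lo) <= 0 <= f(hi).

    This is the 'approximate' IVT: we never claim an exact zero exists,
    only a point whose image is as close to zero as desired.
    """
    while True:
        mid = (lo + hi) / 2
        v = f(mid)
        if abs(v) < eps:
            return mid
        if v < 0:
            lo = mid
        else:
            hi = mid

# Approximate a zero of t^2 - 2 on [0, 2], i.e. approximate sqrt(2).
x = approx_zero(lambda t: t * t - 2, 0.0, 2.0, 1e-9)
print(x)
```

Termination follows from continuity: the bisection midpoints converge to a zero, so $|f(\text{mid})|$ eventually drops below `eps`.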
|
{
"source": [
"https://mathoverflow.net/questions/381908",
"https://mathoverflow.net",
"https://mathoverflow.net/users/172789/"
]
}
|
382,003 |
The letter below is written by my son. I have been sending him text books and looking for answers on the internet to keep his interest up. He has progressed so far on his own and now he needs direction and assistance from a professional in mathematics. Any advice or assistance you can provide is greatly appreciated. My name is ---, I'm 25, I've been in prison for the past 6 years, and I'm self-taught in mathematics. I began with a list of courses required in a standard undergraduate curriculum and studied the required texts from each course. I covered the basics in this way before branching off into my own interests, beginning with partial differential equations and eventually landing in scattering theory. I began studying mathematics because it was fun and interesting (and passed the time), but it has since become so much more. The progress that I've made, combined with the observation that I am capable of at least understanding research in my fields of interest, has compelled me to take the next step into conducting research of my own, and my current goal is to make advances of publishable value. I am just beginning in this process, yet already I have made progress studying scattering resonances. At the moment, I'm working on a number of problems related to resonance counting. In particular, my primary focus is on "inverse resonance counting": By assuming an asymptotic formula for the resonance counting function (as well as some other results concerning distribution), my goal is to determine properties of the potential. Similarly, in the case of a surface with hyperbolic ends, the goal is to determine properties of the surface from knowledge of an exact asymptotic formula for the counting function. My primary resources at present are Mathematical Theory of Scattering Resonances by Dyatlov and Zworski, and Spectral Theory of Infinite-Area Hyperbolic Surfaces by Borthwick. 
I'm not sure what I'm asking for here, I just know that I am ready for the next step and seek some guidance as I enter the world of research mathematics. I encounter many problems when it comes to research, such as staying up to date on current topics, finding open problems which suit my skills and interests, and finding papers on topics I need to study more deeply. For example, right now I am in need of results on how resonances change under smooth, small changes in the potential. One of my texts mentioned the paper of P. D. Stefanov, Stability of Resonances Under Smooth Perturbations of the Boundary (1994), but I need more, and that paper makes no citation to papers of the same content. How do I find papers which are similar, or even cite this one? In short, without direct access to the internet or fellow researchers, I hit many roadblocks which are not math-related, and that can be frustrating. I'm looking for ways to make my unconventional research process go a little more smoothly. If anyone has any suggestions, please let me know here. And thanks in advance.
|
I have received a response back from my son. He said: "I took Calculus my first and only year at Michigan State University, prior to my incarceration. That is the highest course I have taken formally. Shortly after beginning my sentence, I asked my father for a multivariable calculus textbook and he sent me one. I studied it deeply, and enjoyed it so much that I asked my dad if he could find anything online about what's after calculus in a standard undergraduate curriculum. He found MIT's opencourseware website, and sent me screenshots of a page listing which courses were required of an undergrad math major at MIT, some sample undergrad course loads, as well as pages listing course titles, with descriptions and prerequisites. From there I would ask my dad for all the info on a given course, including required textbooks, lecture notes, and problem sets. Many courses even gave dates on which the problems were due. He'd order me the book(s), and print out and mail me the problems and notes. Starting with linear algebra and a course on ordinary differential equations, I proceeded this way for a couple of years. Often, I would also study the chapters in the books which weren't required by the course and at least attempt every problem. Here is a sample of some of the books required with the courses:
Strang - Introduction to Linear Algebra
Zill - A First Course in Differential Equations
Pinter - A Book of Abstract Algebra
Rudin - Principles of Mathematical Analysis
Ahlfors - Complex Analysis
do Carmo - Differential Geometry of Curves and Surfaces
Simmons - Introduction to Topology and Modern Analysis
Lee - Introduction to Smooth Manifolds
As I matured mathematically, I stopped using the opencourseware site, and began studying in areas which had interested me. I no longer read textbooks linearly, nor do I try to digest every concept in every book I obtain. But I have many, many books.
Among them are about 4 on PDE's in general, a handful on more technical but related topics, like perturbations, scaling, dimensional analysis, waves. I own Hormander's, The Analysis of Linear Partial Differential Operators, Vols 1-4, and Reed and Simon's, Methods of Modern Mathematical Physics, Vols 1-3. I have 2 on semiclassical and microlocal analysis, 2 on analytic number theory. Of course in addition to Volume 3 of the Reed and Simon series, the books mentioned in the original post are my resources on scattering theory. I also have a handful of introductory physics texts. Aside from books, I have a few research articles in scattering related to my recent attempts at research. Scattering is the only field in which I've made a serious attempt at research. Hopefully this answers your question, and if not, please follow up!"
|
{
"source": [
"https://mathoverflow.net/questions/382003",
"https://mathoverflow.net",
"https://mathoverflow.net/users/172846/"
]
}
|