http://mathoverflow.net/questions/9834/heuristic-behind-the-fourier-mukai-transform/

## Heuristic behind the Fourier-Mukai transform
What is the heuristic idea behind the Fourier-Mukai transform? What is the connection to the classical Fourier transform?
Moreover, could someone recommend a concise introduction to the subject?
---
A survey: arxiv.org/abs/1109.3083 – Thomas Riepe Sep 15 2011 at 5:22
## 6 Answers
First, recall the classical Fourier transform. It's something like this: Take a function $f(x)$, and then the Fourier transform is the function $g(y) := \int f(x)e^{2\pi i xy} dx$. I really know almost nothing about the classical Fourier transform, but one of the main points is that the Fourier transform is supposed to be an invertible operation.
The Fourier-Mukai transform in algebraic geometry gets its name because it at least superficially resembles the classical Fourier transform. (And of course because it was studied by Mukai.) Let me give a rough picture of the Fourier-Mukai transform and how it resembles the classical situation.
1. Take two varieties $X$ and $Y$, and a sheaf $\mathcal{P}$ on $X \times Y$. The sheaf $\mathcal{P}$ is sometimes called the "integral kernel". Take a sheaf $\mathcal{F}$ on $X$. Think of $\mathcal{F}$ as being analogous to the function $f(x)$ in the classical situation. Think of $\mathcal{P}$ as being analogous to, in the classical situation, some function of $x$ and $y$.
2. Now pull the sheaf back along the projection $p_1 : X \times Y \to X$. Think of the pullback $p_1^\ast \mathcal{F}$ as being analogous to the function $F(x,y) := f(x)$. Think of $\mathcal{P}$ as being analogous to the function $e^{2\pi i xy}$ (but maybe not exactly, see below).
3. Next, take the tensor product $p_1^\ast \mathcal{F} \otimes \mathcal{P}$. This is analogous to the function $F(x,y) e^{2\pi i xy}$ $=$ $f(x)e^{2\pi i xy}$.
4. Finally, push $p_1^\ast\mathcal{F} \otimes \mathcal{P}$ down along the projection $p_2: X \times Y \to Y$. The result is the Fourier-Mukai transform of $\mathcal{F}$ --- it is $p_{2,\ast} (p_1^\ast \mathcal{F} \otimes \mathcal{P})$. This last pushforward step can be thought of as "integration along the fiber" --- here the fiber direction is the $X$ direction. So the analogous thing in the classical situation is $g(y) = \int f(x)e^{2\pi i xy}dx$ --- the Fourier transform of $f(x)$!
But to make all of this rigorous, we have to deal with derived categories of (coherent) sheaves, not just (coherent) sheaves. The main difficulty is in doing the pushforward. The pushforward of a coherent sheaf is not always coherent. But we can use the derived pushforward instead, at the "price" of having to deal with derived categories.
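In other words, the rigorous statement is that the Fourier-Mukai transform with kernel $\mathcal{P}$ is the functor (standard notation: $D^b$ is the bounded derived category of coherent sheaves, $\otimes^L$ the derived tensor product, and $Rp_{2,\ast}$ the derived pushforward)

$\Phi_{\mathcal{P}} : D^b(X) \to D^b(Y), \qquad \Phi_{\mathcal{P}}(\mathcal{F}) = Rp_{2,\ast}\left( p_1^{\ast}\mathcal{F} \otimes^L \mathcal{P} \right).$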
When $X$ is an abelian variety, $Y$ is the dual abelian variety, and $\mathcal{P}$ is the so-called Poincare line bundle on $X \times Y$, then the Fourier-Mukai transform gives an equivalence of the derived category of coherent sheaves on $X$ with the derived category of coherent sheaves on $Y$. I think this was proven by Mukai. I think this is supposed to be analogous to the statement I made about the classical Fourier transform being invertible. In other words I think the Poincare line bundle is really supposed to be analogous to the function $e^{2\pi i xy}$. A more general choice of $\mathcal{P}$ corresponds to, in the classical situation, so-called integral transforms, which have been previously discussed here. This is probably why $\mathcal{P}$ is called the integral kernel. You may also be interested in reading about Pontryagin duality, which is a version of the Fourier transform for locally compact abelian topological groups --- this is obviously quite similar, at least superficially, to Mukai's result about abelian varieties. However I don't know enough to say anything more than that.
There are some cool theorems of Orlov, I forget the precise statements (but you can probably easily find them in any of the books suggested so far), which say that in certain cases any derived equivalence is induced by a Fourier-Mukai transform. Note that the converse is not true: some random Fourier-Mukai transform (i.e. some random choice of the sheaf $\mathcal{P}$) is probably not a derived equivalence.
I think Huybrechts' book "Fourier-Mukai transforms in algebraic geometry" is a good book to look at.
Edit: I hope this gives you a better idea of what is going on, though I have to admit that I don't know of any good heuristic idea behind, e.g., Mukai's result --- it is analogous to the Fourier transform and to Pontryagin duality, and thus I suppose we can apply whatever heuristic ideas we have about the Fourier transform to the Fourier-Mukai transform --- but I don't know of any heuristic ideas that explain the Fourier-Mukai transform in a direct way, without appealing to any analogies to things that are outside of algebraic geometry proper. Hopefully somebody else can say something about that.
But --- there is certainly something deep going on. Just as CommRing behaves a lot like Set^op, I think there is probably some kind of general phenomenon that sheaves (or vector bundles) behave a lot like functions, which is what's happening here. Pullback of sheaves behaves a lot like pullback of functions... Pushforward of sheaves behaves a lot like integration of functions... Tensor product of sheaves behaves a lot like multiplication of functions...
---
The LaTeX is not displaying very well for me, is it displaying OK for others? – Kevin Lin Dec 27 2009 at 0:39
It looks okay to me. In what way is it not displaying well for you? – Anton Geraschenko♦ Dec 27 2009 at 0:48
The math keeps spilling over the right margin. – Kevin Lin Dec 27 2009 at 0:51
ncatlab.org/nlab/show/geometric+function+theory – Reid Barton Dec 27 2009 at 20:34
I tend to disagree, you write: "The main difficulty is in doing the pushforward. The pushforward of a coherent sheaf is not always coherent." If the map is proper, the pushforward is coherent; if it's not proper, the derived category won't help. The real reason to use the derived category is that there are higher direct images. In particular, without the derived category, base change would not work, so you cannot prove anything about the F-M transform (e.g., you cannot write its inverse). – Roman Fedorov Feb 9 2012 at 22:40
You may want to look at Tom Bridgeland's PhD thesis.
---
The following answers might be useful:
The last one has my sketch of an answer which I'll post here once it gets better.
---
I second Kevin's suggestion of Huybrechts' book, but if you want to to look at something shorter first I recommend the notes by Hille and van den Bergh.
---
Just a complement to the answer of Kevin Lin.
There is a case where the analogy between sheaves and functions is more than an analogy: the case of varieties over finite fields. More precisely, if $X$ is a variety over $\mathbb{F}_p$ and $\mathcal{F}$ is an l-adic constructible sheaf on $X$, one can associate to $\mathcal{F}$ a function (in the set-theoretic sense) on the set of $\mathbb{F}_p$-points of $X$ by mapping $x$ to the trace of the Frobenius acting on the fiber of $\mathcal{F}$ at $x$. This defines a sheaf/function correspondence compatible with all the analogies cited by Kevin.
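In symbols, the function attached to $\mathcal{F}$ is usually written (standard notation: $\mathrm{Frob}_x$ is the Frobenius at $x \in X(\mathbb{F}_p)$ and $\mathcal{F}_{\bar{x}}$ is the stalk at a geometric point over $x$)

$t_{\mathcal{F}}(x) = \mathrm{Tr}\left(\mathrm{Frob}_x, \mathcal{F}_{\bar{x}}\right).$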
If we fix a nontrivial additive character of $\mathbb{F}_p$, then one has the usual Fourier transform for functions on $\mathbb{F}_p$. One can ask for an analogue for l-adic sheaves over the affine space $\mathbb{A}$. It exists: it is the Fourier-Deligne transform. The fact that the function associated to the Fourier-Deligne transform of a sheaf is the (usual) Fourier transform of the function associated to the sheaf is a consequence of the Grothendieck trace formula.
In fact, the Fourier-Deligne transform is a Fourier-Mukai transform for the derived category of l-adic constructible sheaves on $\mathbb{A}$! OK, when one speaks about Fourier-Mukai, one thinks about complex algebraic geometry and categories of coherent sheaves, but I think that keeping the above situation, where we have a genuine sheaf/function dictionary, in mind can be useful. This dictionary was one of the motivations for the formulation of the geometric Langlands program (see some expository articles of Frenkel, for example).
---
Alexander Polishchuk, Abelian Varieties, Theta Functions and the Fourier Transform, Cambridge Tracts in Mathematics 153, Cambridge University Press, 2003. This also happens to be one of my favourite books.
---
http://mathhelpforum.com/calculus/195289-tangent-line.html
1. ## Tangent Line
Find the equation of the line tangent to the graph of $f(x) = -4x^3 + 3\sqrt[5]{x} +4$ at the point $(-1,f(-1))$. Leave your answer in the form $y = mx + b$.
I just want to verify if what I'm doing is correct... It has been awhile since I have done one of these.
I assumed I should use the equation $y = f(a) + f'(a)(x-a)$
Which gave me:
$f(-1) = 8 + 3\sqrt[5]{-1}$
$f'(-1) = -12 + \frac{3}{5\sqrt[5]{(-1)^4}} = -\frac{57}{5}$
$y = 8 + 3\sqrt[5]{-1} - \frac{57}{5}(x+1)$
$y = -\frac{57}{5}x + \left(8 - \frac{57}{5} + 3\sqrt[5]{-1}\right)$
$y = -\frac{57}{5}x + \left(-\frac{17}{5} + 3\sqrt[5]{-1}\right)$
Here is where I have trouble... I am unsure of the fifth root of -1. I have seen that it could be complex, or it could be 1... And I just can't verify it myself. I'm sure it is easier than I am making it out to be, but any help would be appreciated.
2. ## Re: Tangent Line
In the real numbers $\sqrt[5]{-1}=-1$, since $(-1)^5 = -1$.
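As a quick numeric sanity check of the computation above (plain Python; the real fifth root is implemented by hand so that the root of $-1$ comes out as $-1$ rather than a complex principal root), a central difference recovers the slope $-\frac{57}{5} = -11.4$ and the intercept $-\frac{17}{5} + 3(-1) = -\frac{32}{5} = -6.4$:

```python
import math

def real_fifth_root(x):
    # real fifth root: real_fifth_root(-1.0) == -1.0
    return math.copysign(abs(x) ** (1.0 / 5.0), x)

def f(x):
    return -4 * x**3 + 3 * real_fifth_root(x) + 4

a, h = -1.0, 1e-6
slope = (f(a + h) - f(a - h)) / (2 * h)   # central difference ~ f'(-1) = -57/5
intercept = f(a) - slope * a              # b in y = mx + b, ~ -32/5
print(slope, f(a), intercept)             # ~ -11.4, 5.0, -6.4
```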
---

http://letterstonature.wordpress.com/2008/07/28/are-the-dice-loaded/

# Letters to Nature
## Are The Dice Loaded?
July 28, 2008 by lukebarnes
I am currently reading Universes (1989) by John Leslie, Professor Emeritus of Philosophy at The University of Guelph, Ontario, Canada. The book, praised on the back cover by Antony Flew and Quentin Smith, discusses the issues surrounding the “fine-tuning” of the constants of nature, initial conditions, and even the forms of the laws of nature themselves to permit the existence of observers. I will not go into details of the fine-tuning here – readers are referred to “The Anthropic Cosmological Principle” by Barrow and Tipler.
This is a huge and hugely controversial area and I don’t want to bite off more than I can chew. (Leslie: “The ways in which ‘anthropic’ reasoning can be misunderstood form a long and dreary list”). Instead, I want to consider a single point made by Leslie, in response to the following quote from M. Scriven’s “Primary Philosophy” (1966):
If the world exists at all, it has to have some properties. What happened is just one of the possibilities. If we decide to toss a die ten times, it is guaranteed that a particular one of the $6^{10}$ possible combinations of ten throws is going to occur. Each is equally likely.
The argument is as follows: we cannot deduce anything interesting from the fine-tuning of the universe because the actual set of constants/initial conditions is just as likely as any other set. It is this claim (and this claim only) that I want to address, because I found Leslie’s treatment to be calling out for an example.
A casino has a game where a die is thrown, and it is advantageous for a player to throw sixes. The casino, naturally, is concerned about cheating players. On top of their usual security measures (cue clip from ‘Ocean’s Thirteen’ – security cameras linked to a computer that can distinguish genuine from false surprise etc.), the boss wonders if they can catch cheats using only the sequence of throws. One of his lackeys argues: any sequence of throws is as probable as any other sequence (of the same length), so we can’t draw any conclusions. Sounds reasonable.
Now, while I’m no Brendon Brewer, I have been known to dabble in the dark arts of Bayesian probability. So let’s try to put this argument into mathematical form.
Let: $S_n$ = a series of n sixes is thrown by a particular player.
$R_n$ = the series 2, 1, 3, 5, 5, 6, 2, 1, 2, 6, … (a typical ordered sequence of n throws)
$B$ = background information (e.g. a die has 6 sides)
$F$ = the die thrown is fair and unbiased
$L$ = the die thrown is loaded, rigged to throw a six on cue (i.e. $P(S_n | L\&B) = 1$). Once again, Ocean’s Thirteen furnishes an example. We will assume that the cheating player, having been smart enough to invent an undetectable loaded die, is dumb enough to use it indiscriminately, continuing to throw sixes without thinking that it might raise suspicions.
Then, the fact that the lackey refers to is this:
$P(S_n | F\&B) = P(R_n | F\&B) = 1 / 6^n$
From which he concludes that:
$P(L | S_n\&B) = P(L | B)$
i.e. a sequence of sixes, no matter how long, doesn’t make it any more likely that the die is loaded.
Having recast the claim in probabilistic terms, we can see that the claim is patently false. Bayes theorem allows us to express the probability that the die is fair given that n sixes have been thrown by a particular player:
$P(F | S_n\&B) = \frac{P(S_n | F\&B) P(F | B)} {P(S_n | B)}$
It will help to write, using the law of total probability:
$P(S_n | B) = P(S_n | F\&B) P(F | B) + P(S_n | \bar{F}\&B) P(\bar{F} | B)$
Where $\bar{F}$ = “not $F$” = the die is not fair. We will make the simplification that if the die is not fair, then it is loaded i.e. $\bar{F} = L$. Then:
$P(F | S_n\&B) = \frac{P(S_n | F\&B) P(F | B)} {P(S_n | F\&B) P(F | B) + P(S_n | L\&B) P(L | B)}$
We’ve evaluated some terms above:$P(S_n | F\&B) = 1 / 6^n$, $P(S_n | L\&B) = 1$.
Now, we need the all important prior: $P(F | B) = p$. Note that $P(L | B) = 1 - p$. To consider a concrete example, suppose that the casino boss has received a tip-off that a player could use a loaded die that will slip past conventional security measures. Unfortunately, 10,000 players will be at the tables in casinos across the country tonight, and the cheat could appear at any one of them. Hence in this simplified case, the prior probability that a particular player is using a loaded die, before a single die is thrown, is at most:
$P(L | B) = 1-p = \frac{1} {10,000}$
Thus, we have:
$P(F | S_n\&B) = \frac{6^{-n} p} {6^{-n} p + (1-p)}$
Now, the all-important question is: how many consecutive sixes need to be thrown before it is more likely that the die is loaded than fair? The critical value is $P(F | S_n\&B) = 1/2$. Then, solving for $n$ gives:
$n = \log_6 \frac{p} {1-p} = 5.14$
Thus, 6 sixes in a row will make it more likely than not that the player is cheating. If they want to be 99% sure that the player is cheating ($P(F | S_n\&B)$ = 1%), then they will have to wait for 8 sixes.
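Since the argument turns on two specific numbers ($n \approx 5.14$ and the 8-sixes threshold for 99% certainty), here is a minimal Python check of the posterior formula above; the prior $1/10{,}000$ is the post's hypothetical tip-off scenario.

```python
import math

p_loaded = 1.0 / 10_000           # P(L | B), the hypothetical prior from the post
p_fair = 1.0 - p_loaded           # P(F | B) = p

def posterior_fair(n):
    """P(F | S_n & B): probability the die is fair after n consecutive sixes."""
    like_fair = 6.0 ** (-n)       # P(S_n | F & B)
    like_loaded = 1.0             # P(S_n | L & B): the loaded die always throws six
    return like_fair * p_fair / (like_fair * p_fair + like_loaded * p_loaded)

print(math.log(p_fair / p_loaded, 6))      # critical n ~ 5.14
for n in (5, 6, 7, 8):
    print(n, round(posterior_fair(n), 4))  # 0.5625, 0.1765, 0.0345, 0.0059
```

Six sixes drop the probability of a fair die to about 18%, and eight bring it below 1%, as claimed.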
Note carefully the moral of the story: so long as $p < 1$ (it's possible that a player is cheating), there is always some number of consecutive sixes that makes it probable that the player is cheating. Note also the converse: no matter how many sixes a player throws, we can dispense with the loaded-die hypothesis so long as fair dice are being used by enough people, which lowers the prior. Both of these conclusions show that the sequence of sixes does call out for an explanation. The fact that a sequence of sixes is just as probable as any other sequence is irrelevant: what matters is that a sequence of sixes supports the hypothesis that the die is not fair, or that many, many people are actually playing.
In Leslie’s words:
“A chief reason for thinking that something stands in special need of explanation is that we actually glimpse some tidy way in which it might be explained.”
In other words, the hypothesis of a loaded die suggests both an explanation, and the need for one. Thus, we are indeed entitled to draw conclusions from the fine-tuning of the universe, because we can glimpse tidy explanations. Leslie’s primary conclusion regarding the fine-tuning of the universe is the same moral we have drawn from the loaded die fable: either many, many dice are being rolled (multiple universes plus the observational effect that universes that don’t permit observers cannot be observed), or the die has been manipulated by an intentional agent (guess who?). In other words, there is a selection effect at work: either an observational selection effect that “chooses” between actual universes, or an intentional selection in the mind of an agent who chooses between possible universes to find observer-permitting ones. (Or, to be completely rigorous, both. Leslie rightly rejects the possibility that “only one kind of world is logically or mathematically or cognitively possible”.)
So we can indeed draw conclusions from the fine-tuning of the universe. And what monumental conclusions they are!
More of my posts on fine-tuning are here.
### 10 Responses
1. That’s right (except that last sentence ). What matters is how the probability of what was actually observed varies *as a function of the hypothesis*. All of the other probability (of data sets that were not observed) could be redistributed around the data space arbitrarily (and in different ways for each hypothesis) and your inference remains exactly the same.
This is called the likelihood principle, and Bayesian Inference always satisfies it.
2. on July 30, 2008 at 8:02 am | Reply Owen
um… how many combinations are there of ten tosses of a coin? Or perhaps you meant rolls of a dice.
3. on July 30, 2008 at 9:06 am | Reply lukebarnes
Well spotted. I’ve corrected the quote
4. on July 31, 2008 at 12:25 pm | Reply dougaj4
Suppose in the casino analogy, we were just provided with the information that someone had thrown 10 sixes in a row, and no other information.
Possible explanations include:
There have been a huge number of people throwing dice for a very long time (and only information about 10 sixes in a row is transmitted).
The dice are weighted.
The dice were arranged that way.
Is this a reasonable analogy for the information we have about the physical constants of his Universe?
If so, it doesn’t tell us very much at all does it?
5. Hi Doug
I think that’s an excellent analogy and your conclusion is correct, that it doesn’t tell us much. But the reason for that conclusion isn’t due to the following fallacy:
The argument is as follows: we cannot deduce anything interesting from the fine-tuning of the universe because the actual set of constants/initial conditions is just as likely as any other set.
By the way, if anyone really wants to understand the anthropic principle, and how not to abuse it, I consider the following article to be a prerequisite.
http://www.cs.toronto.edu/~radford/ftp/anth.pdf
6. on August 3, 2008 at 2:17 pm | Reply lukebarnes
Brendon and Doug don’t seem to be very impressed by my conclusion, though they do agree. Let’s look at Doug’s options …
1. Many, many players with fair dice.
2. Weighted dice
3. The dice were arranged that way
I think options 2 and 3 collapse to the same idea: the hypothesis that the outcome of the throw of the die was manipulated by an intentional agent. The only difference is that a weighted die could fool someone into thinking it was a fair die, whereas just placing the die isn’t going to fool anyone. So were left with the loaded (a.k.a. manipulated) dice hypothesis or the many games of dice hypothesis.
In the context of the universe, this means that at least one of the following is true:
1.) There exists an ensemble of many, many (small-u) universes. This could be either large, causally disconnected spatial regions (bubbles), or previous cycles of an oscillating universe, or some other option. The first half of the 20th century taught us that the Earth, indeed the Milky way, is just a grain of sand in the vast expanse of the visible universe. Cosmic fine-tuning could tell us that the visible universe itself may be just a speck in the unimaginable size and diversity of the real universe. The constants that we all hold dear (gravitational, fine structure, properties of elementary particles, etc) may just be the result of random symmetry breaking. The search for the fundamental laws of nature may become the search for life-permitting options within an overwhelmingly lifeless landscape. Moreover, these other universes will presumably need to be generated by some mechanism, subject to some meta-laws of nature whose offspring are the laws of nature as we know them. (The alternative, that these other universes “just exist”, as a brute fact, seems unacceptable). But do these meta-laws need fine-tuning? It seems that if we’re searching for the ultimate laws of nature, cosmic fine-tuning could provide some vital clues.
2.) The life-permitting properties of our universe are the result of intentional causes. It would seem then that the fine-tuning of the universe, coupled with evidence against the existence of a universe-ensemble, furnishes a plausible design argument for the existence of God; one that, unlike William Paley’s, is immune from the effects of Darwinism. (Dallas Willard: “any sort of evolution of order of any kind will always presuppose pre-existing order and pre-existing entities governed by it. It follows as a simple matter of logic that not all order evolved”.)
At least one of these options has to be true, if Leslie’s reasoning is correct. Are these honestly unimpressive? Perhaps apatheism is more widespread than I thought.
7. on August 4, 2008 at 7:04 am | Reply dougaj4
“I think options 2 and 3 collapse to the same idea: the hypothesis that the outcome of the throw of the die was manipulated by an intentional agent.”
No, I have two important points of difference.
Option 2 was not supposed to be an analogy for an intelligent agent providing weighted dice (although it could be of course, but then it would be just like 3, as you said); the intended analogy was that the fundamental constants are what they are because that is what they have to be (for some reason unknown to us).
Secondly, the dice being arranged does not necessarily imply an intelligent entity intentionally arranging them that way. They may have been arranged by some unintelligent process which we don't know about. Even if they were arranged by some intelligent entity, I think it is misleading to call this an analogy for a god figure, because the entity may not have any of the features that "gods" traditionally have.
Brendon – I’m reading that paper, but I won’t comment until I’ve finished it.
8. [...] that talk, I read some internet articles that were rather woeful. It’s time to quote John Leslie again: “The ways in which ‘anthropic’ reasoning can be misunderstood form a long and dreary [...]
9. [...] and gets four aces each time – call this “M”, the “magic deal”. The probability of M, assuming he is dealing fairly, is approximately p(M | fair-deal) = one chance in . Now suppose that there are other poker games [...]
10. on January 5, 2013 at 7:19 am | Reply James
The problem with equating drawing a 6 on a die to complexity is obvious. But atheists are not using reason but bias.
We are not asking for something to happen that is no different than a 4 is to a 6. We are asking for extreme complexity to assemble itself when garbage should be the norm. A 6 is no different than a 4 in this respect. However, consciousness is much different than a ball of crap. One can comprehend the reality one is in: see it (when otherwise nothing would even know there is something to see), smell it, touch it, taste it, and hear it.
So all these infantile demonstrations show is a darkened intellect and denial of the obvious.
You have to come to the conclusion that there is actually something wrong with these people: that in denying God and constantly attacking those who clearly perceive Him, they have corrupted their thinking beyond repair.
---
http://mathoverflow.net/questions/55727/sparse-graphs-are-locally-tree-like/55763

# “Sparse graphs are locally tree-like”
I would like to be able to state with confidence that sparse graphs (graphs with small numbers of edges) are locally tree-like (they have few short cycles). Apparently "Sparse graphs are locally tree like in the sense that the typical size of loops is O(N)" - see citation below. Here I am pretty sure "N" is |V|, the number of nodes. But I can't find any proof or formal statement of this.
I am interested in "most" graphs, not all of them, so if my understanding is right this is not a question of extremal graph theory. For example, I would like to be able to say something like: if |E| = O(|V|) then most graphs have girth O(|V|), or most loops have length O(|V|).
[Macris 2006, Applications of correlation inequalities to low density graphical codes, www.springerlink.com/index/3416607227705N33.pdf]
---
I think N is $\log_2 |V|$, or something like that, in that paper. They consider binary vectors of length $N$. Furthermore, "most" sparse graphs have logarithmic diameter (say, random regular graphs of constant degree $d \geq 3$, or the giant component of Erdos-Rényi random graphs with $p=c/n$ and $c>1$ a constant), rather than linear. – Louigi Addario-Berry Feb 17 2011 at 14:34
[Thanks to everyone who has answered or commented - I am going to need to take some time to think about these answers, but they look very helpful.] – eddddd84 Feb 21 2011 at 16:47
## 4 Answers
I don't believe you can say that "most" graphs in this range have small girth, but there is a sense in which you can say they have few short cycles. For example, if you consider the model of random regular graphs of degree $d$ (graphs chosen uniformly from all $d$ regular graphs on $n$ vertices), and let $X_i$ denote the number of cycles of length $i$, then Bollobás and Wormald independently showed that the $X_i$ behaved asymptotically as independent Poisson variables with mean $(d-1)^i/(2i)$.
In other words: There's a positive probability that a graph contains each of $3$-cycles, $4$-cycles, etc. Because these events are asymptotically independent, "most" $d$-regular graphs have bounded girth. On the other hand, the number of cycles of each fixed length on average remains bounded even as the size of the graph tends to infinity. So if I fix a single vertex and look in the neighborhood of that vertex, I have to look farther and farther before I see any cycles at all. (But not too far... as Louigi noted, we can't expect to go much past the $\log n$ diameter of the graph). This is the "locally" part of "locally tree-like".
A similar situation should hold for Erdős–Rényi graphs like the ones mentioned in Louigi's comment.
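Here is a minimal empirical sketch of the Bollobás/Wormald claim (assumes the `networkx` library; the degree $d = 3$ and the graph sizes are arbitrary illustrative choices). It counts triangles in random $d$-regular graphs and compares with the asymptotic Poisson mean $(d-1)^3/6$:

```python
import networkx as nx

d = 3
poisson_mean = (d - 1) ** 3 / (2 * 3)     # (d-1)^i / (2i) with i = 3; here 4/3
for n in (100, 1000, 5000):
    trials = 100
    total = 0
    for seed in range(trials):
        G = nx.random_regular_graph(d, n, seed=seed)
        # nx.triangles counts, per node, the triangles through that node,
        # so each triangle is counted three times
        total += sum(nx.triangles(G).values()) // 3
    print(n, total / trials)              # stays near 4/3 as n grows

print("asymptotic Poisson mean:", poisson_mean)
```

The average number of triangles stays bounded as $n$ grows, which is the "locally tree-like" behavior described above: near any fixed vertex you typically have to look farther and farther before seeing a cycle.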
---
I think you would need a condition something like $|E|<(1+(1-\epsilon)\ln(V))V$. If $|E|=3|V|$ then it could be that every vertex is on six $3$-cycles. That is only one such graph, but I would expect the girth would be low. If the graph is regular of degree $3$ (so $|E|=\frac{3}{2}|V|$) then every vertex is on a cycle of length shorter than $\log_2(V)$.
If I recall correctly, a random tree has expected diameter less than $4\sqrt{V},$ so the expected girth of a graph with $|V|=|E|$ would be $O(\sqrt{V}).$
---
Your question fits into the area of random graphs, rather than extremal graph theory; and also expander graphs are relevant.
As mentioned previously, Erdős–Rényi graphs are a good and simple model for random graphs. For example, $G_{n,p}$ has $n$ vertices and each edge is independently randomly determined to exist with probability $p$.
If you're talking about sparse graphs, you have to quantify how sparse. Say, for example, $p = \frac{\log n}{n}$? Above a certain threshold (Alon and Spencer, "The Probabilistic Method", has many details) the graph has essentially a single "giant component". Around the threshold there is a transition (which is also understood in detail), and below it essentially every component is a tree.
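A quick way to see that transition numerically (a sketch assuming `networkx`; $n$ and the constants $c$ are arbitrary choices): below $c = 1$ essentially every component is a tree, while above it a giant component appears.

```python
import networkx as nx

n = 20000
for c in (0.5, 1.5):
    G = nx.fast_gnp_random_graph(n, c / n, seed=1)   # G(n, p) with p = c/n
    comps = list(nx.connected_components(G))
    largest = max(len(cc) for cc in comps)
    # a connected component on k nodes is a tree iff it has k - 1 edges
    trees = sum(1 for cc in comps
                if G.subgraph(cc).number_of_edges() == len(cc) - 1)
    print(f"c={c}: largest component {largest} of {n}; "
          f"{trees} of {len(comps)} components are trees")
```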
Expander graphs (there are many constructions) are typically sparse graphs which however are sufficiently connected that a random walk mixes rapidly. With expanders there should be a result about the typical cycle size and distribution of cycles by length, compared to the second eigenvalue of the Laplacian of the graph, which governs its expansion.
It appears you're looking at LDPC codes, whose vertices have (if I recall correctly from undergrad days) edges independently chosen at random, with each vertex choosing a number $d$ as its total number of edges, where $d$ comes from some distribution chosen to maximize efficiency as a code. Mitzenmacher, Luby, and others were involved in their creation and have analyzed the efficiency extensively. "Digital Fountain" is/was a company doing this.
LDPC codes offer a bit of independence if they are as described, but locally the edge probabilities will be correlated because of the distribution of $d$.
It might be possible to use Janson's inequality (Ch8 of Alon and Spencer) to analyze this, as long as you're in the situation where there are no "negatively correlated" pairs of probabilities in your sum. It only uses the second (and first) probability moments.
LDPC codes are probably good expanders, so you could use bounds from expander graph literature if true.
Off the top of my head, that's where this problem fits ... maybe I'll be able to fill in more details for some of this later.
---
I think the typical loop-length goes like $\log(N)$ rather than $N$...
if $\langle k \rangle$ is the average degree, the number of $l$-distant neighbours is approx $\langle k \rangle^l$, and hence when $\langle k \rangle^l=N$ we expect to have a loop, so $l \approx \log(N)/\log(\langle k \rangle)$
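For a concrete illustration of this heuristic (numbers chosen arbitrarily): with $N = 10^6$ and $\langle k \rangle = 3$, it gives $l \approx \log(10^6)/\log 3 \approx 12.6$, so typical loops are about a dozen edges long, logarithmic in $N$ rather than linear.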
---
http://physics.stackexchange.com/questions/14932/why-do-we-not-have-spin-greater-than-2?answertab=votes

# Why do we not have spin greater than 2?
It is commonly asserted that no consistent, interacting quantum field theory can be constructed with fields that have spin greater than 2 (possibly with some allusion to renormalization). I've also seen (see Bailin and Love, Supersymmetry) that we cannot have helicity greater than 1, absent gravity. I have yet to see an explanation as to why this is the case; so can anyone help? Thanks.
---
– Dan Sep 22 '11 at 2:18
Thanks. In the meantime, I'd also like to add that the question as to why we can't have spin greater than 2 also makes me wonder why we have Ramond-Ramond fields in string theory, which must surely have spin greater than 2 (since they have multiple Lorentz indices)? – James Sep 25 '11 at 15:19
@James: "spin" is tricky, it means maximum helicity, not number of indices. When all the indices are antisymmetric, the spin is 1, regardless of the number of indices, and this is why form fields are consistent in supergravity and strings. – Ron Maimon Sep 25 '11 at 16:26
This doesn't answer the question, but perhaps it's a useful reference for the OP: In his lectures on gravitation, Feynman explains why the graviton field must be integer (0, 1, 2, 3,...), then explains why 0 and 1 are out of the question. He then proceeds to attempt to construct a spin-2 theory, because it's the simplest that could work. – mtrencseni Sep 26 '11 at 20:18
## 1 Answer
Higher spin particles have to be coupled to conserved currents, and there are no conserved currents of high spin in quantum field theories. The only conserved currents are vector currents associated with internal symmetries, the stress-energy tensor current, the angular momentum tensor current, and the spin-3/2 supercurrent, for a supersymmetric theory.
This restriction on the currents constrains the spins to 0,1/2 (which do not need to be coupled to currents), spin 1 (which must be coupled to the vector currents), spin 3/2 (which must be coupled to a supercurrent) and spin 2 (which must be coupled to the stress-energy tensor). The argument is heuristic, and I do not think it rises to the level of a mathematical proof, but it is plausible enough to be a good guide.
### Preliminaries: All possible symmetries of the S-matrix
You should accept the following result of O'Raifeartaigh, Coleman and Mandula--- the continuous symmetries of the particle S-matrix, assuming a mass-gap and Lorentz invariance, are a Lie group of internal symmetries, plus the Lorentz group. This theorem is true, given its assumptions, but these assumptions leave out a lot of interesting physics:
• Coleman-Mandula assume that the symmetry is a symmetry of the S-matrix, meaning that it acts nontrivially on some particle state. This seems innocuous, until you realize that you can have a symmetry which doesn't touch particle states, but only acts nontrivially on objects like strings and membranes. Such symmetries would only be relevant for the scattering of infinitely extended infinite energy objects, so it doesn't show up in the S-matrix. The transformations would become trivial whenever these sheets close in on themselves to make a localized particle. If you look at Coleman and Mandula's argument (a simple version is presented in Argyres' supersymmetry notes, which gives the flavor. There is an excellent complete presentation in Weinberg's quantum field theory book, and the original article is accessible and clear), it almost begs for the objects which are charged under the higher symmetry to be spatially extended. When you have extended fundamental objects, it is not clear that you are doing field theory anymore. If the extended objects are solitons in a renormalizable field theory, you can zoom in on ultra-short distance scattering, and consider the ultra-violet fixed point theory as the field theory you are studying, and this is sufficient to understand most examples. But the extended-object exception is the most important one, and must always be kept in the back of the mind.
• Coleman and Mandula assume a mass gap. The standard extension of this theorem to the massless case just extends the maximal symmetry from the Poincare group to the conformal group, to allow the space-time part to be bigger. But Coleman and Mandula use analyticity properties which I am not sure can be used in a conformal theory with all the branch-cuts which are not controlled by mass-gaps. The result is extremely plausible, but I am not sure if it is still rigorously true. This is an exercise in Weinberg, which unfortunately I haven't done.
• Coleman and Mandula ignore supersymmetries. This is fixed by Haag–Lopuszanski–Sohnius, who use the Coleman-Mandula theorem to argue that the maximal symmetry structure of a quantum field theory is a superconformal group plus internal symmetries, and that the supersymmetry must close on the stress-energy tensor.
What the Coleman-Mandula theorem means in practice is that whenever you have a conserved current in a quantum field theory, and this current acts nontrivially on particles, then it must not carry any space-time indices other than the vector index, with the only exceptions being the geometric currents: a spinor supersymmetry current, $J^{\alpha\mu}$, the (Belinfante symmetric) stress-energy tensor $T^{\mu\nu}$, the (Belinfante) angular momentum tensor $S^{\mu\nu\lambda} = x^{\mu} T^{\nu\lambda} - x^\nu T^{\mu\lambda}$, and sometimes the dilation current $D^\mu = x_\nu T^{\nu\mu}$ and conformal and superconformal currents too.
The spin of the conserved currents is found by representation theory--- antisymmetric indices are spin 1, whether there are 1 or 2, so the spin of the internal symmetry currents is 1, and of the stress energy tensor is 2. The other geometric tensors derived from the stress energy tensor are also restricted to spin less then 2, with the supercurrent having spin 3/2.
### What is a QFT?
Here this is a practical question--- for this discussion, a quantum field theory is a finite collection of local fields, each corresponding to a representation of the Poincare group, with a local interaction Lagrangian which couples them together. Further, it is assumed that there is an ultra-violet regime where all the masses are irrelevant, and where all the couplings are still relatively small, so that perturbative particle exchange is ok. I say "regime" rather than "limit", because this isn't a real ultra-violet fixed point, which might not exist, and it does not require renormalizability, only unitarity in the regime where the theory is still perturbative.
Every particle must interact with something to be part of the theory. If you have a noninteracting sector, you throw it away as unobservable. The theory does not have to be renormalizable, but it must be unitary, so that the amplitudes must unitarize perturbatively. The couplings are assumed to be weak at some short distance scale, so that you don't make a big mess at short distances, but you can still analyze particle emission order by order.
The Froissart bound for a mass-gap theory states that the total cross-section cannot grow faster than the square of the logarithm of the energy. This means that any faster than constant growth in the scattering amplitude must be cancelled by something.
### Propagators for any spin
The propagators for massive/massless particles of any spin follow from group theory considerations. These propagators have the schematic form
$$s^J\over s-m^2$$
And the all-important s scaling, with its J-dependence can be extracted from the physically obvious angular dependence of the scattering amplitude. If you exchange a spin-J particle with a short propagation distance (so that the mass is unimportant) between two long plane waves (so that their angular momentum is zero), you expect the scattering amplitude to go like $\cos(\theta)^J$, just because rotations act on the helicity of the exchanged particle with this factor.
For example, when you exchange an electron between an electron and a positron, forming two photons, and the internal electron has an average momentum k and a helicity +, then if you rotate the contribution to the scattering amplitude from this exchange around the k-axis by an angle $\theta$ counterclockwise, you should get a phase of $\theta/2$ in the outgoing photon phases.
In terms of Mandelstam variables, the angular amplitude goes like $(1-t)^J$, since $t$ is, up to shifts and some scaling in $s$, the cosine of the scattering angle. For large $t$, this grows as $t^J$, but "t" is the "s" of a crossed channel (up to a little bit of shifting), and so crossing $t$ and $s$, you expect the growth to go with the power of the angular dependence. The denominator is fixed at $J=0$, and this law is determined by Regge theory.
So that for $J=0,1/2$, the propagators shrink at large momentum, for $J=1$, the scattering amplitudes are constant in some directions, and for $J>1$ they grow. This schematic structure is of course complicated by the actual helicity states you attach on the ends of the propagator, but the schematic form is what you use in Weinberg's argument.
### Spin 0, 1/2 are OK
Spin 0 and 1/2 are ok with no special treatment, and this argument shows you why: the propagator for spin 0 is
$$1\over k^2 + m^2$$
Which falls off in k-space at large k. This means that when you scatter by exchanging scalars, your tree diagrams are shrinking, so that they don't require new states to make the theory unitary.
Spinors have a propagator
$$1\over \gamma\cdot k + m$$
This also falls off at large k, but only linearly. The exchange of spinors does not make things worse, because spinor loops tend to cancel the linear divergence by symmetry in k-space, leaving log divergences which are symptomatic of a renormalizable theory.
So spinors and scalars can interact without revealing substructure, because their propagators do not require new things for unitarization. This is reflected in the fact that they can make renormalizable theories all by themselves.
### Spin 1
Introducing spin 1, you get a propagator that doesn't fall off. The massive propagator for spin 1 is
$${ g_{\mu\nu} - {k_\mu k_\nu\over m^2} \over k^2 + m^2 }$$
The numerator projects the helicity to be perpendicular to k, and the second term is problematic. There are directions in k-space where the propagator does not fall off at all! This means that when you scatter by spin-1 exchange, these directions can lead to a blow-up in the scattering amplitude at high energies which has to be cancelled somehow.
If you cancel the divergence with higher spin, you get a divergence there, and you need to cancel that with higher spin still, and so on, and you get infinitely many particle types. So the assumption is that you must get rid of this divergence intrinsically. The way to do this is to assume that the $k_\mu k_\nu$ term is always hitting a conserved current. Then its contribution vanishes.
This is what happens in massive electrodynamics. In this situation, the massive propagator is still ok for renormalizability, as noted by Schwinger and Feynman, and explained by Stueckelberg. The $k_\mu k_\nu$ is always hitting a $J^\mu$, and in x-space, it is proportional to the divergence of the current, which is zero because the current is conserved even with a massive photon (because the photon isn't charged).
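Schematically, the cancellation works like this (a sketch, suppressing coupling constants): sandwiching the massive spin-1 propagator between two conserved currents $J_1$ and $J_2$, with $k_\mu J_i^\mu = 0$, gives

$$J_1^{\mu}\,{g_{\mu\nu} - {k_\mu k_\nu \over m^2} \over k^2 + m^2}\,J_2^{\nu} = {J_1 \cdot J_2 \over k^2 + m^2}$$

so the dangerous $k_\mu k_\nu / m^2$ piece never contributes, and the exchange amplitude falls off just like the scalar case.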
The same argument works to kill the k-k part of the propagator in Yang-Mills fields, but it is much more complicated, because the Yang-Mills field itself is charged, so the local conservation law is usually expressed in a different way, etc,etc. The heuristic lesson is that spin-1 is only ok if you have a conservation law which cancels the non-shrinking part of the numerator. This requires Yang-Mills theory, and the result is also compatible with renormalizability.
If you have a spin-1 particle which is not a Yang-Mills field, you will need to reveal new structure to unitarize its longitudinal component, whose propagator is not properly shrinking at high energies.
### Spin 3/2
In this case, you have a Rarita-Schwinger field, and the propagator is going to grow like $\sqrt{s}$ at large energies, just from the Mandelstam argument presented before.
The propagator growth leads to unphysical growth in scattering exchanging this particle, unless the spin-3/2 field is coupled to a conserved current. The conserved current is the Supersymmetry current, by the Haag–Lopuszanski–Sohnius theorem, because it is a spinor of conserved currents.
This means that the spin-3/2 particle should interact with a spin-3/2 conserved supercurrent in order to be consistent, and the number of gravitinos is (less than or equal to) the number of supercharges.
The gravitinos are always introduced in a supermultiplet with the graviton, but I don't know if it is definitely impossible to introduce them with a spin-1 partner, and couple them to the supercurrent anyway. These spin-3/2/spin-1 multiplets will probably not be renormalizable barring some supersymmetry miracle. I haven't worked it out, but it might be possible.
### Spin 2
In this case, you have a perturbative graviton-like field $h_{\mu\nu}$, and the propagator contains terms growing linearly with s.
In order to cancel the growth in the numerator, you need the tensor particle to be coupled to a conserved current to kill the parts with too-rapid growth, and produce a theory which does not require new particles for unitarity. The conserved quantity must be a tensor $T_{\mu\nu}$. Now one can appeal to the Coleman-Mandula theorem and conclude that the conserved tensor current must be the stress-energy tensor, and this gives general relativity, since the stress-tensor includes the stress of the $h$ field too.
There is a second tensor conserved quantity, the angular momentum tensor $S_{\mu\nu\sigma}$, which is also spin-2 (it might look like it's spin 3, but it's antisymmetric on two of its indices). You can try to couple a spin-2 field to the angular momentum tensor. To see if this works requires a detailed analysis, which I haven't done, but I would guess that the result will just be a non-dynamical torsion coupled to the local spin, as required by the Einstein-Cartan theory.
Witten mentions yet another possibility for spin 2 in chapter 1 of Green Schwarz and Witten, but I don't remember what it is, and I don't know whether it is viable.
### Summary
I believe that these arguments are due to Weinberg, but I personally only read the sketchy summary of them in the first chapters of Green Schwarz and Witten. They do not seem to me to have the status of a theorem, because the argument is particle by particle, it requires independent exchange in a given regime, and it discounts the possibility that unitarity can be restored by some family of particles.
Of course, in string theory, there are fields of arbitrarily high spin, and unitarity is restored by propagating all of them together. For field theories with bound states which lie on Regge trajectories, you can have arbitrarily high spins too, so long as you consider all the trajectory contributions together, to restore unitarity (this was one of the original motivations for Regge theory--- unitarizing higher spin theories).
For example, in QCD, we have nuclei of high ground-state spin. So there are stable S-matrix states of high spin, but they come in families with other excited states of the same nuclei.
The conclusion here is that if you have higher spin particles, you can be pretty sure that you will have new particles of even higher spin at higher energies, and this chain of particles will not stop until you reveal new structure at some point. So the tensor mesons observed in the strong interaction mean that you should expect an infinite family of strongly interacting particles, petering out only when the quantum field substructure is revealed.
## Some comments
James said:
• It seems higher spin fields must be massless so that they have a gauge symmetry and thus a current to couple to
• A massless spin-2 particle can only be a graviton.
These statements are as true as the arguments above are convincing. From the cancellation required for the propagator to become sensible, higher spin fields are fundamentally massless at short distances. The spin-1 fields become massive by the Higgs mechanism, the spin 3/2 gravitinos become massive through spontaneous SUSY breaking, and this gets rid of Goldstone bosons/Goldstinos.
But all this stuff is, at best, only at the "mildly plausible" level of argument--- the argument is over propagator unitarization with each propagator separately having no cancellations. It's actually remarkable that it works as a guideline, and that there aren't a slew of supersymmetric exceptions of higher spin theories with supersymmetry enforcing propagator cancellations and unitarization. Maybe there are, and they just haven't been discovered yet. Maybe there's a better way to state the argument which shows that unitarity can't be restored by using positive spectral-weight particles.
### Big Rift in 1960s
James askes
• Why wasn't this pointed out earlier in the history of string theory?
The history of physics cannot be well understood without appreciating the unbelievable antagonism between the Chew/Mandelstam/Gribov S-matrix camp, and the Weinberg/Glashow/Polyakov Field theory camp. The two sides hated each other, did not hire each other, and did not read each other, at least not in the west. The only people that straddled both camps were older folks and Russians--- Gell-Mann more than Landau (who believed the Landau pole implied S-matrix), Gribov and Migdal more than anyone else in the west other than Gell-Mann and Wilson. Wilson did his PhD in S-matrix theory, for example, as did David Gross (under Chew).
In the 1970s, S-matrix theory just plain died. All practitioners jumped ship rapidly in 1974, with the triple-whammy of Wilsonian field theory, the discovery of the charm quark, and asymptotic freedom. These results killed S-matrix theory for thirty years. Those that jumped ship include all the original string theorists who stayed employed: notably Veneziano, who was convinced that gauge theory was right when 't Hooft showed that large-N gauge fields give the string topological expansion, and Susskind, who didn't mention Regge theory after the early 1970s. Everybody stopped studying string theory except Scherk and Schwarz, and Schwarz was protected by Gell-Mann, or else he would never have been tenured and funded.
This sorry history means that not a single S-matrix theory course is taught in the curriculum today, nobody studies it except a few theorists of advanced age hidden away in particle accelerators, and the main S-matrix theory, string-theory, is not properly explained and remains completely enigmatic even to most physicists. There were some good reasons for this--- some S-matrix people said silly things about the consistency of quantum field theory--- but to be fair, quantum field theory people said equally silly things about S-matrix theory.
Weinberg came up with these heuristic arguments in the 1960s, which convinced him that S-matrix theory was a dead end, or rather, to show that it was a tautological synonym for quantum field theory. Weinberg was motivated by models of pion-nucleon interactions, which was a hot S-matrix topic in the early 1960s. The solution to the problem is the chiral symmetry breaking models of the pion condensate, and these are effective field theories.
Building on this result, Weinberg became convinced that the only real solution to the S-matrix was a field theory of some particles with spin. He still says this every once in a while, but it is dead wrong. The most charitable interpretation is that every S-matrix has a field theory limit, where all but a finite number of particles decouple, but this is not true either (consider little string theory). String theory exists, and there are non-field theoretic S-matrices, namely all the ones in string theory, including little string theory in (5+1)d, which is non-gravitational.
### Lorentz indices
James comments:
• regarding spin, I tried doing the group theoretic approach to an antisymmetric tensor but got a little lost - doesn't an antisymmetric 2-form (for example) contain two spin-1 fields?
The group theory for an antisymmetric tensor is simple: it consists of an "E" and "B" field which can be turned into the pure chiral representations E+iB, E-iB. This was also called a "six-vector" sometimes, meaning E,B making an antisymmetric four-tensor.
You can do this using dotted and undotted indices more easily, if you realize that the representation theory of SU(2) is best done in indices--- see the "warm up" problem in this answer: Mathematically, what is color charge?
---
Where do I read more about the antagonism you talked about? – Dan Piponi Nov 30 '11 at 0:51
@ron Sounds like a nice bit of history that ought to be written up and published (other than as an answer on this web site!). My only exposure to S-matrices is the brief Veneziano story which I think is in the intro to most String Theory text books. – Dan Piponi Nov 30 '11 at 21:16
@Dan: I fantasize about writing an article or mongraph about this. But I am too young to remember when S-matrix was current, I only learned about it from reading literature. It would require thorough interviews with Chew and Mandelstam, both of whom are in their 80s. It is already too late to get Gribov's perspective. Lipatov would probably be close to Gribov, and might have good insight, but Lipatov is younger too. You must remember that S-matrix was still considered hopeless as recently as two years ago. I wrote some wikipedia pages to help rehabilitate it, but I wish I did more. – Ron Maimon Dec 1 '11 at 5:37
@drake: We observe spin 1: gluons, photons, W's Z's, spin 2: gravitons, and presumably will observe spin 3/2 gravitinos at some point. If you couple spin-1 charged under another vector, and you don't adjust the couplings just right, you will either break unitarity or break the other charge conservation. If you do adjust the coupling just right, all you have done is extended the gauge symmetry of the other charged vector to make a gauge theory. So this is not an exception. The masses are an issue--- the mass term breaks the gauge invariance, so it wrecks charge conservation in nonabelian case. – Ron Maimon Jul 27 '12 at 23:09
@drake: You don't get any violation of the argument or the result if it's a U(1) gauge theory. If you make the photon massive in QED, you don't do anything to consistency. This led people to think that a massive graviton or nonabelian boson is ok in the 1960, it's not so, as shown by Veltman/t'Hooft. This is an accident because the photon is neutral. If you have a nonabelian theory or gravity, a bare mass term will violate local gauge invariance and ruin the global conservation law once you gauge fix and look at internal loops. – Ron Maimon Jul 28 '12 at 6:16
---
## Geometric difference between a homotopy equivalence and a homeomorphism
Quote by homeomorphic I guess they have to be homotopy equivalent to CW complexes, since they can be realized as CW complexes, and they are unique up to homotopy equivalence. The way you construct them is what I have been talking about. Make a wedge of spheres to get generators of the non-zero homotopy group. By the Hurewicz theorem, it's isomorphic to the homology in that dimension because all the lower homotopy groups vanish. Then, just keep attaching cells to kill off all the homotopy groups above that dimension. This doesn't give you a very concrete construction, so in the end, you don't really know what you've built. But some Eilenberg-MacLane spaces occur in nature, so to speak, like ℝP^∞, ℂP^∞, or S^1. As far as I know, those are the only naturally occurring ones. Someone asked for examples of them when I first encountered them in my algebraic topology class, and the prof didn't seem to know of any other than those few examples and the abstract construction of them. Of course, maybe the term "naturally occurring" doesn't have too much meaning.
Not to go too far off-topic, but maybe a reasonable meaning for "x being naturally occurring" is that one is somewhat likely to either run into x or hear about it while doing research that is not too wildly unusual.
Yeah, and I forgot to say the closed (orientable?) surfaces are Eilenberg-MacLane. And aspherical manifolds of all sorts, since those are manifolds with vanishing higher homotopy groups (which came up in my 3-manifold readings).
The classifying spaces for flat bundles, i.e. bundles with discrete structure group, are all EMs. For finite groups these are probably all infinite-dimensional CW complexes. For instance the classifying space for Z2 bundles is the infinite real projective space. An example of a Z2 bundle is the tangent bundle of the Klein bottle (itself an EM space). It follows that the classifying map into the infinite Grassmann of 2-planes in Euclidean space can be factored through the infinite projective space. (I wonder though whether it can actually be factored through the two dimensional projective plane by following a ramified cover of the sphere by a torus with the antipodal map.) In terms of group cohomology this corresponds to the projection map $\pi_{1}$(K) -> Z2 obtained by modding out the maximal two dimensional lattice. Group cohomology is the same as the cohomology of the universal classifying space for vector bundles with that structure group - I think. In the case of the flat Klein bottle, this shows that its holonomy group is Z2.
Quote by mathwonk note that even an exotic sphere has a morse function with just two critical points. so when you attach that disc you cannot always attach it differentiably in the usual way. In fact this is apparently how one produces exotic spheres. you produce a manifold that is not an ordinary smooth sphere somehow, but that does have a morse function with only two critical points. then it is homeomorphic to a sphere.
reference?
The classifying spaces for flat bundles, i.e. bundles with discrete structure group, are all EMs.
Well, what I was saying is that most of those aren't some familiar space that has a name, like S^1.
For finite groups these are probably all infinite-dimensional CW complexes. For instance the classifying space for Z2 bundles is the infinite real projective space.
Not always. For example, for surface groups, as we just mentioned. And there are some aspherical manifolds that are finite-dimensional. This includes, for example, all hyperbolic 3-manifolds.
http://en.wikipedia.org/wiki/Aspherical_space
Group cohomology is the same as the cohomology of the universal classifying space for vector bundles with that structure group - I think
I would say principal bundles, rather than vector bundles. I would prefer to say the cohomology of G is the cohomology of a K(G,1). That's my favorite definition, and of course, it's equivalent to other definitions of group cohomology, like the one as a derived functor or from the bar resolution of Z over ZG.
Quote by homeomorphic Not always. For example, for surface groups, as we just mentioned. And there are some aspherical manifolds that are finite-dimensional. This includes, for example, all hyperbolic 3-manifolds. http://en.wikipedia.org/wiki/Aspherical_space
I think that the fundamental groups of aspherical manifolds are infinite, e.g. tori. The fundamental groups of closed orientable surfaces are infinite except for the sphere.
I would say principal bundles, rather than vector bundles. I would prefer to say the cohomology of G is the cohomology of a K(G,1). That's my favorite definition, and of course, it's equivalent to other definitions of group cohomology, like the one as a derived functor or from the bar resolution of Z over ZG.
Same thing I think.
For aspherical manifolds the fundamental domain in the universal covering space generates a free resolution of the integers over the fundamental group, e.g. for a 2-dimensional torus one has four vertices and edges and one rectangle as a basis over the group ring of Z×Z.
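For concreteness, here is the standard resolution this describes for $G = \mathbb{Z}\times\mathbb{Z} = \langle s, t \rangle$ (textbook material, spelled out as an editorial aside): the cellular chain complex of the universal cover of the torus gives

$0 \to \mathbb{Z}G \xrightarrow{\partial_2} (\mathbb{Z}G)^2 \xrightarrow{\partial_1} \mathbb{Z}G \xrightarrow{\varepsilon} \mathbb{Z} \to 0$

with $\partial_1(e_s) = s-1$, $\partial_1(e_t) = t-1$ and $\partial_2(e) = (t-1)e_s - (s-1)e_t$. One checks $\partial_1\partial_2 = (t-1)(s-1)-(s-1)(t-1) = 0$, and the generators correspond to the lifted vertex, the two edge classes, and the square of the CW structure.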
Quote by mathwonk Lavinia, have you read the classical reference, Milnor's Morse Theory? This is Thm. 3.5 proved in roughly the first 24 pages. He proves that the region on the manifold "below" c+e where c is a critical value, has the homotopy type of the region below c-e with the attachment of a single cell determined by the index of the critical point with value c. The passage from local to global you ask about may be Lemma 3.7. In it he proves that a homotopy equivalence between two spaces extends to one between the spaces obtained from them by attaching a cell. He makes use of deformation retractions and ultimately uses Whitehead's theorem that a map is a homotopy equivalence if it induces isomorphism on homotopy groups, at least for spaces dominated by CW complexes.
Thanks again Mathwonk. I just browsed through the first chapter. The lemmas you mention do the trick.
So what are two homotopy equivalent compact manifolds without boundary that are not homeomorphic?
read the first page of this paper for some related results: http://deepblue.lib.umich.edu/bitstr.../1/0000331.pdf but perhaps these examples are not compact.
here you go: (lens spaces) http://en.wikipedia.org/wiki/Spherical_3-manifold
to be explicit: "In particular, the lens spaces L(7,1) and L(7,2) give examples of two 3-manifolds that are homotopy equivalent but not homeomorphic."
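An editorial aside on why this works (standard lens-space theory, not from the thread itself): $L(p,q_1)$ and $L(p,q_2)$ are homotopy equivalent if and only if $q_1 q_2 \equiv \pm n^2 \pmod p$ for some $n$, and homeomorphic if and only if $q_1 \equiv \pm q_2^{\pm 1} \pmod p$. For $p = 7$: $1 \cdot 2 = 2 \equiv 3^2 \pmod 7$, so $L(7,1)$ and $L(7,2)$ are homotopy equivalent; but $2 \not\equiv \pm 1^{\pm 1} \pmod 7$, so they are not homeomorphic.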
I think that the fundamental groups of aspherical manifolds are infinite, e.g. tori. The fundamental groups of closed orientable surfaces are infinite except for the sphere.
True, I sometimes confuse finite with finitely generated. That's why I was confused. Yeah, I think if you even have a subgroup of finite order, you have to have an infinite-dimensional complex. The proof was really cool, but I'll have to try and remember it. There was a covering-spaces proof.
Quote by mathwonk to be explicit: "In particular, the lens spaces L(7,1) and L(7,2) give examples of two 3-manifolds that are homotopy equivalent but not homeomorphic."
pretty cool. So how do their Morse functions distinguish them?
I have been thinking about this and I came to the conclusion that the fact that this coincidence arises in the Milne universe (linearly expanding universe) is natural (actually it is not a coincidence in that model). In that model t = 1/H for every t, since $\dot a = k$ (the first time derivative of the scale factor is a constant; note that $H = \dot a / a$). I wonder now whether there is an explanation for the similarity between the Milne universe and the concordance model today at H0 = 71 km/s/Mpc.
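To spell out the step (a one-line editorial addition): with $a(t) = kt$,

$H = \frac{\dot a}{a} = \frac{k}{kt} = \frac{1}{t},$

so $t = 1/H$ holds identically, at every epoch, in the Milne model.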
# CrystaLaser specifications
## Specifications
We are expecting our laser any time. To get to know the laser better, we will investigate a number of things. These specifications are already given by the maker, but we will verify them.
### Polarization
The laser is TM (transverse magnetic), or P, or horizontally linearly polarized (in the specimen plane the laser is still TM polarized, when looking into the sample plane from the front of the microscope). We investigated this in two ways: 1) by putting a glass interface at Brewster's angle and measuring the reflected and transmitted power; at this angle essentially all the light is transmitted because the laser is P-polarized; 2) by putting in a polarizing beam splitter, which uses birefringence to separate the two polarizations (P is reflected and S is transmitted); by measuring and comparing the powers, the polarization is determined. We performed the experiment at 1.8 W, where P is 1.77 W and S is less than .03 W.
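As a sanity check on the Brewster method, here is a minimal sketch (my addition; it assumes an air-to-glass interface with n ≈ 1.5, since the actual glass index was not recorded) of the Brewster angle and the s/p reflectances there:

```python
import math

n1, n2 = 1.0, 1.5  # air -> glass (assumed index; not recorded in the notebook)
theta_b = math.atan(n2 / n1)  # Brewster's angle: arctan(n2/n1)
theta_t = math.asin(n1 * math.sin(theta_b) / n2)  # refraction angle via Snell's law

# Fresnel amplitude reflection coefficients at Brewster incidence
rs = (n1 * math.cos(theta_b) - n2 * math.cos(theta_t)) / \
     (n1 * math.cos(theta_b) + n2 * math.cos(theta_t))
rp = (n2 * math.cos(theta_b) - n1 * math.cos(theta_t)) / \
     (n2 * math.cos(theta_b) + n1 * math.cos(theta_t))

print(f"Brewster angle = {math.degrees(theta_b):.1f} deg")  # ~56.3 deg for n = 1.5
print(f"R_s = {rs**2:.3f}, R_p = {rp**2:.6f}")  # R_p -> 0: P-polarized light transmits
```

At Brewster incidence R_p vanishes, which is why a P-polarized beam transmits almost completely while any S component is partially reflected.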
### Beam waist at the output window
We used the knife edge method (this method determines the beam waist, not the beam diameter, directly). With an input power of 1.86 W, we located the 86.5% and 13.5% transmission points at the laser head (15 mm). This gave a beam waist (w0) of .82 mm (beam diameter = 1.64 mm).
### Possible power fluctuations if any
The power supply temperature is really critical. The laser starts at roughly 1.8 W, but if the temperature of the power supply is controlled well it reaches 2 W in a few minutes and stays there. It's really stupid of the manufacturer that they do not have any fans inside, so we put two chopper fans on top of it to keep it cool. If no fans are used, then within an hour the power supply reaches above 50 degrees Celsius, and then not only does the laser output fall, but the power supply also turns itself off every few minutes.
### Mode Profile
Higher order modes had been a serious problem in our old laser, which compelled us to buy this one. The success of our experiments depends on a TEM00 profile; the efficiency and stiffness of the trap are functions of the profile. So mode profiling is critical; we want our laser to be in TEM00. I am not going to discuss the technique of mode profiling; it can be learned from these links: [1] [2].
As a result it’s confirmed that this laser is TEM00 mode. Check out the pics:
A LabVIEW program was written to show a 3D Gaussian profile; it also contains a MatLab code[3].
## Specs by the Manufacturer
All the laser specs and the manual are in the document: [Specs[4]]
## Beam Profile
The original beam waist of the laser is .2 mm, but since we requested the 4x beam expansion option, the resultant beam waist is .84 mm at the output aperture of the laser. As is the nature of a Gaussian beam, it still converges in the far field, so there is a beam waist somewhere downstream; we do not know where. There are two ways to solve the problem: by using the Gaussian beam formula, for which we would need the beam parameters before the expansion optics and information about the expansion optics, which we do not have; or by experimentally measuring the beam size along the z-axis at many points and locating the minimum. Once this is found, we put the AOM there. So the experimental data gives us the beam waist and its distance from the laser in the z-direction. We use the scanning knife edge method to measure the beam waist.
### Method
• In this method we used a knife blade on a translation stage with 10 micron accuracy. The blade is moved transversely across the beam and the power of the uneclipsed portion is recorded with a power meter. The cross section of a Gaussian beam is given by:
$I(r)=I_0 exp(\frac {-2r^2}{w_L^2})$
Where I(r) is the intensity as a function of radius (distance in the transverse direction), I0 is the input intensity at r = 0, and wL is the beam radius. Here the beam radius is defined as the radius where the intensity is reduced to 1/e2 of the value at r = 0. This can be seen by letting r = wL.
[Figures: knife-edge setup and measured power profile]
The experimental data is obtained by gradually moving the blade across from point A to point B and recording the power. Without going into the math, the intensity at these points can be obtained. For starting point A:
$\mathbf{I_A(r=0)}=I_0 exp(-2)=I_0*.865$
For stopping point B
$\mathbf{I_B}=I_0 *(1-.865)$
By measuring the distance between these two points the beam waist is obtained, and the beam diameter is just twice it:
$\mathbf{\omega_0}=r_{.135}-r_{.865}$
this is the method we used below.
• The beam waist can also be measured the same way in terms of power. The power transmitted past a partially occluding knife edge is:
$\mathbf {p(r)}=\frac{P_0}{\omega_0} \sqrt{\frac{2}{\pi}} \int\limits_r^\infty \exp\left(-\frac{2r'^2}{\omega_0^2}\right) dr'$

After integrating for the transmitted power:

$\mathbf {p(r)}=\frac{P_0}{2}\,\mathrm{erfc}\left(2^{1/2}\frac{r}{\omega_0}\right)$
Now the positions where the power is 90% and 10% are measured, and their values are substituted here:
$\mathbf{\omega_0}=.783(r_{.1} - r_{.9})$
The difference between the methods: the first measures a value a little higher than the second (power) method, but the difference is still under 13%. So either method is good, but the second is more accurate. Here is a link to a LabVIEW code to calculate the beam waist with the knife edge method[5].
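For readers without LabVIEW, here is a minimal Python sketch of the same extraction (my addition; the scan below is synthetic placeholder data, not our measurement). It fits the erfc model above and also reports the .783·(r.1 − r.9) estimate:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc, erfcinv

def knife_edge(x, P0, x0, w0):
    # Transmitted power past the blade: p(x) = (P0/2) * erfc(sqrt(2) * (x - x0) / w0)
    return 0.5 * P0 * erfc(np.sqrt(2) * (x - x0) / w0)

# Placeholder scan (NOT our data): blade position (mm) vs transmitted power (W)
x = np.linspace(-2.0, 2.0, 41)
P = knife_edge(x, 1.86, 0.0, 0.82) + np.random.default_rng(0).normal(0, 0.003, x.size)

popt, _ = curve_fit(knife_edge, x, P, p0=[P.max(), 0.0, 1.0])
P0, x0, w0 = popt
print(f"fitted beam waist w0 = {w0:.3f} mm")

# Quick 10%/90% check, w0 ~ .783 * (r_.1 - r_.9), read off the fitted curve
x10 = x0 + w0 * erfcinv(0.2) / np.sqrt(2)  # blade position where p = 10% of P0
x90 = x0 + w0 * erfcinv(1.8) / np.sqrt(2)  # blade position where p = 90% of P0
print(f"10/90 estimate: w0 = {0.783 * (x10 - x90):.3f} mm")
```

Both numbers should agree with the .82 mm waist used to generate the synthetic scan, mirroring the two hand methods above.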
#### Data
We measured the beam size at intervals of 12.5, 15 and 25 mm over a range of 2000 mm from the output aperture of the laser head. The measurement is minimum at 612.5 mm from the laser; thus the beam waist is at 612.5±12.5 mm from the laser, and it is 1.26±.1 mm.
#### Analysis
Here the plot of beam diameter vs Z is presented. Experimental data is shown in blue and the model in red. As can be seen, the model does not fit the data: the experimental beam expands much faster than the model, which proves that the beam waist before the expansion optics must be relatively smaller. We are also missing an important characterization parameter. Real-world lasers behave differently, in that their beams do not follow the regular Gaussian formula for large propagation lengths (more than the Rayleigh range). That is why we will have to introduce a beam propagation factor called M2.
##### Beam propagation factor M2
The beam propagation factor M2 was specifically introduced to enable accurate calculation of the properties of laser beams which depart from the theoretically perfect TEM00 beam. This is important because it is quite literally impossible to construct a real world laser that achieves this theoretically ideal performance level.
M2 is defined as the ratio of a beam’s actual divergence to the divergence of an ideal, diffraction limited, Gaussian, TEM00 beam having the same waist size and location. Specifically, beam divergence for an ideal, diffraction limited beam is given by:
$\theta_{0}=\frac{\lambda}{\pi w_0}$
this is theoretical half divergence angle in radian.
$\theta_{R}=M^2\frac{\lambda} {\pi w_0}$
so
$M^2=\frac{\theta_{R}} {\theta_{0}}$
Where:
• λ is the laser wavelength
• θR is the far field divergence angle of the real beam.
• w0 is the beam waist radius and θ0 is the far field divergence angle of the theoretical beam.
• M2 is the beam propagation factor
This definition of M2 allows us to make a simple change to optical formulas, taking the M2 factor as a multiplier, to account for the actual beam divergence. This is the reason why M2 is also sometimes referred to as the "times diffraction limit number". More information about M2 is available in these links:[6][7]
My experimental beam waist is:
wR = .63 mm, with a theoretical divergence of .69 mrad. The data suggest a real far-field divergence angle of 1 mrad (half angle; wR/z at that z). This gives:
M2≈1.4
Now using the beam propagation formula with the M2 correction:
$w_R(z) = w_0 \, \sqrt{ 1+ {\left( \frac{z M^2}{z_\mathrm{R}} \right)}^2 } \ .$
instead of:
$w_R(z) = w_0 \, \sqrt{ 1+ {\left( \frac{z}{z_\mathrm{R}} \right)}^2 } \ .$
The result is obvious. The plot shows the real experimental data together with the theoretical fit, with and without the M2 correction.
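A minimal numeric sketch of this comparison (my addition, plugging in the numbers quoted above: w0 = .63 mm, M2 = 1.4, λ = 1064 nm):

```python
import numpy as np

lam = 1064e-9              # wavelength (m)
w0 = 0.63e-3               # beam waist (m), from the measurement above
M2 = 1.4                   # beam propagation factor, from the divergence ratio
zR = np.pi * w0**2 / lam   # Rayleigh range of the ideal beam (~1.17 m here)

z = np.linspace(0, 2.0, 5)                    # propagation distance (m)
w_ideal = w0 * np.sqrt(1 + (z / zR)**2)       # ideal Gaussian propagation
w_real = w0 * np.sqrt(1 + (z * M2 / zR)**2)   # with the M2 correction

for zi, wi, wr in zip(z, w_ideal, w_real):
    print(f"z = {zi:4.1f} m  ideal w = {wi*1e3:.2f} mm  M2-corrected w = {wr*1e3:.2f} mm")
```

The M2-corrected radius grows noticeably faster past a Rayleigh range, which is exactly the mismatch seen in the uncorrected fit.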
M2 is an important parameter and it is good to know it to complete the characterization of a laser. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9030190110206604, "perplexity_flag": "middle"} |
# First-order logic
First-order logic is a formal system used in mathematics, philosophy, linguistics, and computer science. It is also known as first-order predicate calculus, the lower predicate calculus, quantification theory, and predicate logic (a less precise term). First-order logic is distinguished from propositional logic by its use of quantified variables.
A theory about some topic is usually first-order logic together with a specified domain of discourse over which the quantified variables range, finitely many functions which map from that domain into it, finitely many predicates defined on that domain, and a recursive set of axioms which are believed to hold for those things. Sometimes "theory" is understood in a more formal sense, which is just a set of sentences in first-order logic.
The adjective "first-order" distinguishes first-order logic from higher-order logic in which there are predicates having predicates or functions as arguments, or in which one or both of predicate quantifiers or function quantifiers are permitted.[1] In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets.
There are many deductive systems for first-order logic that are sound (all provable statements are true) and complete (all true statements are provable). Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem.
First-order logic is of great importance to the foundations of mathematics, because it is the standard formal logic for axiomatic systems. Many common axiomatic systems, such as first-order Peano arithmetic and axiomatic set theory, including the canonical Zermelo–Fraenkel set theory (ZF), can be formalized as first-order theories. No first-order theory, however, has the strength to describe fully and categorically structures with an infinite domain, such as the natural numbers or the real line. Categorical axiom systems for these structures can be obtained in stronger logics such as second-order logic.
For a history of first-order logic and how it came to be the dominant formal logic, see José Ferreirós 2001.
## Introduction
While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification.
A predicate resembles a function that returns either True or False. Consider the following sentences: "Socrates is a philosopher", "Plato is a philosopher". In propositional logic these are treated as two unrelated propositions, denoted for example by p and q. In first-order logic, however, the sentences can be expressed in a more parallel manner using the predicate Phil(a), which asserts that the object represented by a is a philosopher. Thus if a represents Socrates then Phil(a) asserts the first proposition, p; if a instead represents Plato then Phil(a) asserts the second proposition, q. A key aspect of first-order logic is visible here: the string "Phil" is a syntactic entity which is given semantic meaning by declaring that Phil(a) holds exactly when a is a philosopher. An assignment of semantic meaning is called an interpretation.
First-order logic allows reasoning about properties that are shared by many objects, through the use of variables. For example, let Phil(a) assert that a is a philosopher and let Schol(a) assert that a is a scholar. Then the formula
$\text{Phil}(a)\to \text{Schol}(a) \,$
asserts that if a is a philosopher then a is a scholar. The symbol $\to$ is used to denote a conditional (if/then) statement. The hypothesis lies to the left of the arrow and the conclusion to the right. The truth of this formula depends on which object is denoted by a, and on the interpretations of "Phil" and "Schol".
Assertions of the form "for every a, if a is a philosopher then a is a scholar" require both the use of variables and the use of a quantifier. Again, let Phil(a) assert a is a philosopher and let Schol(a) assert that a is a scholar. Then the first-order sentence
$\forall a ( \text{Phil}(a) \to \text{Schol}(a)) \,$
asserts that no matter what a represents, if a is a philosopher then a is a scholar. Here $\forall$, the universal quantifier, expresses the idea that the claim in parentheses holds for all choices of a.
To show that the claim "If a is a philosopher then a is a scholar" is false, one would show there is some philosopher who is not a scholar. This counterclaim can be expressed with the existential quantifier $\exists$:
$\exists a ( \text{Phil}(a) \land \lnot \text{Schol}(a)) \,.$
Here:
• $\lnot$ is the negation operator: $\lnot \text{Schol}(a)$ is true if and only if $\text{Schol}(a) \,$ is false, in other words if and only if a is not a scholar.
• $\land$ is the conjunction operator: $\text{Phil}(a) \land \lnot \text{Schol}(a)$ asserts that a is a philosopher and also not a scholar.
The predicates Phil(a) and Schol(a) take only one parameter each. First-order logic can also express predicates with more than one parameter. For example, "there is someone who can be fooled every time" can be expressed as:
$\exists x (\mbox{Person}(x) \land \forall y (\mbox{Time}(y) \rightarrow \mbox{Canfool}(x,y))) \,.$
Here Person(x) is interpreted to mean x is a person, Time(y) to mean that y is a moment of time, and Canfool(x,y) to mean that (person) x can be fooled at (time) y. For clarity, this statement asserts that there is at least one person who can be fooled at all times, which is stronger than asserting that at all times at least one person exists who can be fooled. This would be expressed as:
$\forall y (\mbox{Time}(y) \rightarrow \exists x (\mbox{Person}(x) \land \mbox{Canfool}(x,y))) \,.$
Asserting the latter (that there is always at least one foolable person) does not signify whether this foolable person is always the same for all moments of time.
The range of the quantifiers is the set of objects that can be used to satisfy them. (In the informal examples in this section, the range of the quantifiers was left unspecified.) In addition to specifying the meaning of predicate symbols such as Person and Time, an interpretation must specify a nonempty set, known as the domain of discourse or universe, as a range for the quantifiers. Thus a statement of the form $\exists a \text{Phil}(a)$ is said to be true, under a particular interpretation, if there is some object in the domain of discourse of that interpretation that satisfies the predicate that the interpretation uses to assign meaning to the symbol Phil.
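As a concrete illustration of the two quantifier orders above, here is a small sketch (my addition; the "Canfool" relation is a made-up toy) that checks both readings by brute force over a finite domain:

```python
people = ["a", "b", "c"]
times = [0, 1, 2]
# Toy relation (made up): person i can be fooled only at time i.
canfool = {(p, t): people.index(p) == t for p in people for t in times}

# forall y exists x Canfool(x, y): at every time, at least one person can be fooled.
forall_exists = all(any(canfool[(p, t)] for p in people) for t in times)
# exists x forall y Canfool(x, y): one single person can be fooled at all times.
exists_forall = any(all(canfool[(p, t)] for t in times) for p in people)

print(forall_exists, exists_forall)  # True False: the two readings genuinely differ
```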
## Syntax
There are two key parts of first-order logic. The syntax determines which collections of symbols are legal expressions in first-order logic, while the semantics determine the meanings behind these expressions.
### Alphabet
Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is legal. There are two key types of legal expressions: terms, which intuitively represent objects, and formulas, which intuitively express predicates that can be true or false. The terms and formulas of first-order logic are strings of symbols which together form the alphabet of the language. As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols.
It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation. For example, the logical symbol $\land$ always represents "and"; it is never interpreted as "or". On the other hand, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate, depending on the interpretation at hand.
#### Logical symbols
There are several logical symbols in the alphabet, which vary by author but usually include:
• The quantifier symbols $\forall$ and $\exists$
• The logical connectives: $\land$ for conjunction, $\lor$ for disjunction, $\rightarrow$ for implication, $\leftrightarrow$ for biconditional, $\lnot$ for negation. Occasionally other logical connective symbols are included. Some authors use $\Rightarrow$, or Cpq, instead of $\rightarrow$, and $\Leftrightarrow$, or Epq, instead of $\leftrightarrow$, especially in contexts where $\to$ is used for other purposes. Moreover, the horseshoe $\supset$ may replace $\rightarrow$; the triple-bar $\equiv$ may replace $\leftrightarrow$, and a tilde (~), Np, or Fpq, may replace $\lnot$; ||, or Apq may replace $\lor$; and &, Kpq, or the middle dot, $\cdot$, may replace $\land$, especially if these symbols are not available for technical reasons. (Note: the aforementioned symbols Cpq, Epq, Np, Apq, and Kpq are used in Polish notation.)
• Parentheses, brackets, and other punctuation symbols. The choice of such symbols varies depending on context.
• An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z, … . Subscripts are often used to distinguish variables: x0, x1, x2, … .
• An equality symbol (sometimes, identity symbol) =; see the section on equality below.
Not all of these symbols are required: one of the quantifiers, together with negation, conjunction, variables, brackets and equality, suffices. There are numerous minor variations that may define additional logical symbols:
• Sometimes the truth constants T, Vpq, or $\top$, for "true" and F, Opq, or $\bot$, for "false" are included. Without any such logical operators of valence 0, these two constants can only be expressed using quantifiers.
• Sometimes additional logical connectives are included, such as the Sheffer stroke, Dpq (NAND), and exclusive or, Jpq.
#### Non-logical symbols
The non-logical symbols represent predicates (relations), functions and constants on the domain of discourse. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes. A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature.[2]
The traditional approach is to have only one, infinite, set of non-logical symbols (one signature) for all applications. Consequently, under the traditional approach there is only one language of first-order logic.[3] This approach is still common, especially in philosophically oriented books.
1. For every integer n ≥ 0 there is a collection of n-ary, or n-place, predicate symbols. Because they represent relations between n elements, they are also called relation symbols. For each arity n we have an infinite supply of them:
$P^n_0, P^n_1, P^n_2, P^n_3, \ldots$
2. For every integer n ≥ 0 there are infinitely many n-ary function symbols:
$f^n_0, f^n_1, f^n_2, f^n_3, \ldots$
In contemporary mathematical logic, the signature varies by application. Typical signatures in mathematics are {1, ×} or just {×} for groups, or {0, 1, +, ×, <} for ordered fields. There are no restrictions on the number of non-logical symbols. The signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur for example in modern proofs of the Löwenheim-Skolem theorem.
In this approach, every non-logical symbol is of one of the following types.
1. A predicate symbol (or relation symbol) with some valence (or arity, number of arguments) greater than or equal to 0. These are often denoted by uppercase letters P, Q, R,... .
• Relations of valence 0 can be identified with propositional variables. For example, P, which can stand for any statement.
• For example, P(x) is a predicate variable of valence 1. One possible interpretation is "x is a man".
• Q(x,y) is a predicate variable of valence 2. Possible interpretations include "x is greater than y" and "x is the father of y".
2. A function symbol, with some valence greater than or equal to 0. These are often denoted by lowercase letters f, g, h,... .
• Examples: f(x) may be interpreted as "the father of x". In arithmetic, it may stand for "-x". In set theory, it may stand for "the power set of x". In arithmetic, g(x,y) may stand for "x+y". In set theory, it may stand for "the union of x and y".
• Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet a, b, c,... . The symbol a may stand for Socrates. In arithmetic, it may stand for 0. In set theory, such a constant may stand for the empty set.
The traditional approach can be recovered in the modern approach by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols.
### Formation rules
The formation rules define the terms and formulas of first order logic. When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms.
#### Terms
The set of terms is inductively defined by the following rules:
1. Variables. Any variable is a term.
2. Functions. Any expression f(t1,...,tn) of n arguments (where each argument ti is a term and f is a function symbol of valence n) is a term. In particular, symbols denoting individual constants are 0-ary function symbols, and are thus terms.
Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term.
#### Formulas
The set of formulas (also called well-formed formulas[4] or wffs) is inductively defined by the following rules:
1. Predicate symbols. If P is an n-ary predicate symbol and t1, ..., tn are terms then P(t1,...,tn) is a formula.
2. Equality. If the equality symbol is considered part of logic, and t1 and t2 are terms, then t1 = t2 is a formula.
3. Negation. If φ is a formula, then $\neg$φ is a formula.
4. Binary connectives. If φ and ψ are formulas, then (φ $\rightarrow$ ψ) is a formula. Similar rules apply to other binary logical connectives.
5. Quantifiers. If φ is a formula and x is a variable, then $\forall x \varphi$ and $\exists x \varphi$ are formulas.
Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas.
For example,
$\forall x \forall y (P(f(x)) \rightarrow\neg (P(x) \rightarrow Q(f(y),x,z)))$
is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. On the other hand, $\forall x\, x \rightarrow$ is not a formula, although it is a string of symbols from the alphabet.
The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way by following the inductive definition (in other words, there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability.
This definition of a formula does not support defining an if-then-else function ite(c, a, b), where "c" is a condition expressed as a formula, that would return "a" if c is true, and "b" if it is false. This is because both predicates and functions can only accept terms as parameters, but the first parameter is a formula. Some languages built on first-order logic, such as SMT-LIB 2.0, add this.[5]
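These formation rules translate directly into data types. Here is a minimal sketch (my addition, not part of the article) in Python, where unique readability is automatic because formulas are trees rather than strings:

```python
from dataclasses import dataclass

# Terms (rules 1-2)
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Func:
    symbol: str
    args: tuple  # tuple of terms; a 0-ary Func plays the role of a constant symbol

# Formulas (rules 1, 3-5; equality and the other connectives are analogous)
@dataclass(frozen=True)
class Pred:
    symbol: str
    args: tuple  # tuple of terms

@dataclass(frozen=True)
class Not:
    sub: object

@dataclass(frozen=True)
class Implies:
    left: object
    right: object

@dataclass(frozen=True)
class ForAll:
    var: str
    body: object

# The example formula above: forall x forall y (P(f(x)) -> not(P(x) -> Q(f(y), x, z)))
phi = ForAll("x", ForAll("y", Implies(
    Pred("P", (Func("f", (Var("x"),)),)),
    Not(Implies(Pred("P", (Var("x"),)),
                Pred("Q", (Func("f", (Var("y"),)), Var("x"), Var("z"))))))))
print(phi)
```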
#### Notational conventions
For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is:
• $\lnot$ is evaluated first
• $\land$ and $\lor$ are evaluated next
• Quantifiers are evaluated next
• $\to$ is evaluated last.
Moreover, extra punctuation not required by the definition may be inserted to make formulas easier to read. Thus the formula
$(\lnot \forall x P(x) \to \exists x \lnot P(x))$
might be written as
$(\lnot [\forall x P(x)]) \to \exists x [\lnot P(x)].$
In some fields, it is common to use infix notation for binary relations and functions, instead of the prefix notation defined above. For example, in arithmetic, one typically writes "2 + 2 = 4" instead of "=(+(2,2),4)". It is common to regard formulas in infix notation as abbreviations for the corresponding formulas in prefix notation.
The definitions above use infix notation for binary connectives such as $\to$. A less common convention is Polish notation, in which one writes $\rightarrow$, $\wedge$, and so on in front of their arguments rather than between them. This convention allows all punctuation symbols to be discarded. Polish notation is compact and elegant, but rarely used in practice because it is hard for humans to read it. In Polish notation, the formula
$\forall x \forall y (P(f(x)) \rightarrow\neg (P(x) \rightarrow Q(f(y),x,z)))$
becomes "∀x∀y→Pfx¬→ PxQfyxz".
### Free and bound variables
Main article: Free variables and bound variables
In a formula, a variable may occur free or bound. Intuitively, a variable is free in a formula if it is not quantified: in $\forall y\, P(x,y)$, variable x is free while y is bound. The free and bound variables of a formula are defined inductively as follows.
1. Atomic formulas. If φ is an atomic formula then x is free in φ if and only if x occurs in φ. Moreover, there are no bound variables in any atomic formula.
2. Negation. x is free in $\neg$φ if and only if x is free in φ. x is bound in $\neg$φ if and only if x is bound in φ.
3. Binary connectives. x is free in (φ $\rightarrow$ ψ) if and only if x is free in either φ or ψ. x is bound in (φ $\rightarrow$ ψ) if and only if x is bound in either φ or ψ. The same rule applies to any other binary connective in place of $\rightarrow$.
4. Quantifiers. x is free in $\forall$y φ if and only if x is free in φ and x is a different symbol from y. Also, x is bound in $\forall$y φ if and only if x is y or x is bound in φ. The same rule holds with $\exists$ in place of $\forall$.
For example, in $\forall$x $\forall$y (P(x)$\rightarrow$ Q(x,f(x),z)), x and y are bound variables, z is a free variable, and w is neither because it does not occur in the formula.
Freeness and boundness can be also specialized to specific occurrences of variables in a formula. For example, in $P(x) \rightarrow \forall x\, Q(x)$, the first occurrence of x is free while the second is bound. In other words, the x in $P(x)$ is free while the $x$ in $\forall x\, Q(x)$ is bound.
A formula in first-order logic with no free variables is called a first-order sentence. These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil(x) is true must depend on what x represents. But the sentence $\exists x\, \text{Phil}(x)$ will be either true or false in a given interpretation.
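The inductive definition of free variables above is short enough to implement directly. A standalone sketch (my addition; terms are restricted to bare variables for brevity, and formulas are encoded as nested tuples):

```python
def free_vars(phi):
    """Free variables of a formula, following the inductive clauses above."""
    kind = phi[0]
    if kind == "pred":                # ("pred", symbol, (variable names, ...))
        return set(phi[2])
    if kind == "not":                 # ("not", subformula)
        return free_vars(phi[1])
    if kind == "implies":             # ("implies", left, right)
        return free_vars(phi[1]) | free_vars(phi[2])
    if kind in ("forall", "exists"):  # ("forall", variable, body)
        return free_vars(phi[2]) - {phi[1]}
    raise ValueError(f"unknown formula: {phi!r}")

# P(x) -> forall x Q(x): the x in P(x) is free, the x in Q(x) is bound
phi = ("implies", ("pred", "P", ("x",)), ("forall", "x", ("pred", "Q", ("x",))))
print(free_vars(phi))  # {'x'}, coming from the free occurrence in P(x)
```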
### Examples
#### Abelian groups
In mathematics the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then:
• The expressions +(x, y) and +(x, +(y, −(z))) are terms. These are usually written as x + y and x + y − z.
• The expressions +(x, y) = 0 and ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas. These are usually written as x + y = 0 and x + y − z ≤ x + y.
• The expression $(\forall x \forall y \, \mathop{\leq}(\mathop{+}(x, y), z) \to \forall x\, \forall y\, \mathop{+}(x, y) = 0)$ is a formula, which is usually written as $\forall x \forall y ( x + y \leq z) \to \forall x \forall y (x+y = 0).$
#### Loving relation
There are 10 different formulas, with 8 different meanings, that use the loving relation Lxy ("x loves y") and the quantifiers ∀ and ∃.

No column/row is empty:

1. $\forall x \exists y\, Lyx$: Everyone is loved by someone.
2. $\forall x \exists y\, Lxy$: Everyone loves someone.

One row/column is full:

3. $\exists x \forall y\, Lxy$: Someone loves everyone.
4. $\exists x \forall y\, Lyx$: Someone is loved by everyone.

The diagonal is nonempty/full:

5. $\exists x\, Lxx$: Someone loves himself.
6. $\forall x\, Lxx$: Everyone loves himself.

The matrix is nonempty/full:

7. $\exists x \exists y\, Lxy$: Someone loves someone.
8. $\exists x \exists y\, Lyx$: Someone is loved by someone.
9. $\forall x \forall y\, Lxy$: Everyone loves everyone.
10. $\forall x \forall y\, Lyx$: Everyone is loved by everyone.

[Figures: logical matrices illustrating the ten sentences for five individuals, and a Hasse diagram of the implications among them.]

The logical matrices represent the formulas for the case that there are five individuals that can love (vertical axis) and be loved (horizontal axis). Except for sentences 9 and 10, the matrices are examples: e.g. the matrix representing sentence 5 stands for "b loves himself", and the matrix representing sentences 7 and 8 stands for "c loves b".

It is important and instructive to distinguish sentence 1, $\forall x \exists y\, Lyx$, from sentence 3, $\exists x \forall y\, Lxy$: in both cases everyone is loved, but in the first case everyone is loved by someone (possibly a different someone for each person), while in the second case everyone is loved by the same person.

Some sentences imply each other; e.g. if 3 is true then 1 is also true, but not vice versa (see the Hasse diagram).
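These implications can be checked mechanically. The following sketch (my addition) enumerates every loving relation on a three-element domain and confirms that sentence 3 implies sentence 1 while the converse fails:

```python
from itertools import product

D = range(3)
pairs = [(x, y) for x in D for y in D]

def s1(L):  # sentence 1: forall x exists y Lyx (everyone is loved by someone)
    return all(any((y, x) in L for y in D) for x in D)

def s3(L):  # sentence 3: exists x forall y Lxy (someone loves everyone)
    return any(all((x, y) in L for y in D) for x in D)

three_implies_one, converse_fails = True, False
for bits in product([False, True], repeat=len(pairs)):  # all 512 relations
    L = {p for p, keep in zip(pairs, bits) if keep}
    if s3(L) and not s1(L):
        three_implies_one = False
    if s1(L) and not s3(L):
        converse_fails = True
print(three_implies_one, converse_fails)  # True True
```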
## Semantics
An interpretation of a first-order language assigns a denotation to all non-logical constants in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic. (It is also possible to define game semantics for first-order logic, but aside from requiring the axiom of choice, game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.)
The domain of discourse D is a nonempty set of "objects" of some kind. Intuitively, a first-order formula is a statement about these objects; for example, $\exists x P(x)$ states the existence of an object x such that the predicate P is true of it. The domain of discourse is the set of considered objects. For example, one can take $D$ to be the set of integer numbers.
The interpretation of a function symbol is a function. For example, if the domain of discourse consists of integers, a function symbol f of arity 2 can be interpreted as the function that gives the sum of its arguments. In other words, the symbol f is associated with the function I(f) which, in this interpretation, is addition.
The interpretation of a constant symbol is a function from the one-element set D0 to D, which can be simply identified with an object in D. For example, an interpretation may assign the value $I(c)=10$ to the constant symbol $c$.
The interpretation of an n-ary predicate symbol is a set of n-tuples of elements of the domain of discourse. This means that, given an interpretation, a predicate symbol, and n elements of the domain of discourse, one can tell whether the predicate is true of those elements according to the given interpretation. For example, an interpretation I(P) of a binary predicate symbol P may be the set of pairs of integers such that the first one is less than the second. According to this interpretation, the predicate P would be true if its first argument is less than the second.
### First-order structures
Main article: Structure (mathematical logic)
The most common way of specifying an interpretation (especially in mathematics) is to specify a structure (also called a model; see below). The structure consists of a nonempty set D that forms the domain of discourse and an interpretation I of the non-logical terms of the signature. This interpretation is itself a function:
• Each function symbol f of arity n is assigned a function I(f) from $D^n$ to $D$. In particular, each constant symbol of the signature is assigned an individual in the domain of discourse.
• Each predicate symbol P of arity n is assigned a relation I(P) over $D^n$ or, equivalently, a function from $D^n$ to $\{true, false\}$. Thus each predicate symbol is interpreted by a Boolean-valued function on D.
### Evaluation of truth values
A formula evaluates to true or false given an interpretation, and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as $y = x$. The truth value of this formula changes depending on whether x and y denote the same individual.
First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment:
1. Variables. Each variable x evaluates to μ(x)
2. Functions. Given terms $t_1, \ldots, t_n$ that have been evaluated to elements $d_1, \ldots, d_n$ of the domain of discourse, and a n-ary function symbol f, the term $f(t_1, \ldots, t_n)$ evaluates to $(I(f))(d_1,\ldots,d_n)$.
Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema.
1. Atomic formulas (1). A formula $P(t_1,\ldots,t_n)$ is assigned the value true or false depending on whether $\langle v_1,\ldots,v_n \rangle \in I(P)$, where $v_1,\ldots,v_n$ are the evaluations of the terms $t_1,\ldots,t_n$ and $I(P)$ is the interpretation of $P$, which by assumption is a subset of $D^n$.
2. Atomic formulas (2). A formula $t_1 = t_2$ is assigned true if $t_1$ and $t_2$ evaluate to the same object of the domain of discourse (see the section on equality below).
3. Logical connectives. A formula in the form $\neg \phi$, $\phi \rightarrow \psi$, etc. is evaluated according to the truth table for the connective in question, as in propositional logic.
4. Existential quantifiers. A formula $\exists x \phi(x)$ is true according to M and $\mu$ if there exists an evaluation $\mu'$ of the variables that only differs from $\mu$ regarding the evaluation of x and such that φ is true according to the interpretation M and the variable assignment $\mu'$. This formal definition captures the idea that $\exists x \phi(x)$ is true if and only if there is a way to choose a value for x such that φ(x) is satisfied.
5. Universal quantifiers. A formula $\forall x \phi(x)$ is true according to M and $\mu$ if φ(x) is true according to M and every variable assignment $\mu'$ that differs from $\mu$ only on the value of x. This captures the idea that $\forall x \phi(x)$ is true if every possible choice of a value for x causes φ(x) to be true.
If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and $\mu$ if and only if it is true according to M and every other variable assignment $\mu'$.
There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M; say that for each d in the domain the constant symbol cd is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows:
1. Existential quantifiers (alternate). A formula $\exists x \phi(x)$ is true according to M if there is some d in the domain of discourse such that $\phi(c_d)$ holds. Here $\phi(c_d)$ is the result of substituting cd for every free occurrence of x in φ.
2. Universal quantifiers (alternate). A formula $\forall x \phi(x)$ is true according to M if, for every d in the domain of discourse, $\phi(c_d)$ is true according to M.
This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments.
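The T-schema itself is short enough to implement for finite structures. Here is a standalone sketch (my addition), using the same nested-tuple encoding of formulas as in the free-variables sketch:

```python
def holds(phi, domain, interp, mu):
    """Truth of formula phi in a finite structure (domain, interp) under assignment mu."""
    kind = phi[0]
    if kind == "pred":    # ("pred", symbol, (variable names, ...))
        return tuple(mu[v] for v in phi[2]) in interp[phi[1]]
    if kind == "not":
        return not holds(phi[1], domain, interp, mu)
    if kind == "implies":
        return (not holds(phi[1], domain, interp, mu)) or holds(phi[2], domain, interp, mu)
    if kind == "exists":  # true if some d in the domain satisfies the body
        return any(holds(phi[2], domain, interp, {**mu, phi[1]: d}) for d in domain)
    if kind == "forall":  # true if every d in the domain satisfies the body
        return all(holds(phi[2], domain, interp, {**mu, phi[1]: d}) for d in domain)
    raise ValueError(f"unknown formula: {phi!r}")

# Structure: domain {0, 1, 2}, with P interpreted as the "less than" relation
domain = [0, 1, 2]
interp = {"P": {(a, b) for a in domain for b in domain if a < b}}

phi = ("forall", "x", ("exists", "y", ("pred", "P", ("x", "y"))))
print(holds(phi, domain, interp, {}))  # False: no y in the domain satisfies 2 < y
```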
### Validity, satisfiability, and logical consequence
See also: Satisfiability
If a sentence φ evaluates to True under a given interpretation M, one says that M satisfies φ; this is denoted $M \vDash \phi$. A sentence is satisfiable if there is some interpretation under which it is true.
Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula with free variables is said to be satisfied by an interpretation if the formula remains true regardless which individuals from the domain of discourse are assigned to its free variables. This has the same effect as saying that a formula is satisfied if and only if its universal closure is satisfied.
A formula is logically valid (or simply valid) if it is true in every interpretation. These formulas play a role similar to tautologies in propositional logic.
A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ.
### Algebraizations
An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators:
• Cylindric algebra, by Alfred Tarski and his coworkers;
• Polyadic algebra, by Paul Halmos;
• Predicate functor logic, mainly due to Willard Quine.
These algebras are all lattices that properly extend the two-element Boolean algebra.
Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers, has the same expressive power as relation algebra. This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical ZFC. They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered pair projection functions.
### First-order theories, models, and elementary classes
Further information: List of first-order theories
A first-order theory consists of a set of axioms in a particular first-order signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called effective. Some authors require theories to also include all logical consequences of the axioms.
A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory.
Many theories have an intended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models.
A theory is consistent if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete.
### Empty domains
Main article: Empty domain
The definition above requires that the domain of discourse of any interpretation must be a nonempty set. There are settings, such as inclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class.
There are several difficulties with empty domains, however:
• Many common rules of inference are only valid when the domain of discourse is required to be nonempty. One example is the rule stating that $\phi \lor \exists x \psi$ implies $\exists x (\phi \lor \psi)$ when x is not a free variable in φ. This rule, which is used to put formulas into prenex normal form, is sound in nonempty domains, but unsound if the empty domain is permitted (see the sketch after this list).
• The definition of truth in an interpretation that uses a variable assignment function cannot work with empty domains, because there are no variable assignment functions whose range is empty. (Similarly, one cannot assign interpretations to constant symbols.) This truth definition requires that one must select a variable assignment function (μ above) before truth values for even atomic formulas can be defined. Then the truth value of a sentence is defined to be its truth value under any variable assignment, and it is proved that this truth value does not depend on which assignment is chosen. This technique does not work if there are no assignment functions at all; it must be changed to accommodate empty domains.
Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition.
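The failure of the prenex rule mentioned above can be checked mechanically. The following sketch uses a stripped-down evaluator with hypothetical `('true',)` and `('false',)` atoms standing in for φ and ψ; over the empty domain the left side is true while the right side is false.

```python
def ev(phi, domain, mu=None):
    mu = mu or {}
    op = phi[0]
    if op == 'true':                      # a 0-ary, trivially true atom
        return True
    if op == 'false':
        return False
    if op == 'or':
        return ev(phi[1], domain, mu) or ev(phi[2], domain, mu)
    if op == 'exists':                    # vacuously false on an empty domain
        return any(ev(phi[2], domain, {**mu, phi[1]: d}) for d in domain)
    raise ValueError(op)

lhs = ('or', ('true',), ('exists', 'x', ('false',)))   # phi ∨ ∃x ψ
rhs = ('exists', 'x', ('or', ('true',), ('false',)))   # ∃x (phi ∨ ψ)
print(ev(lhs, []), ev(rhs, []))    # True False -> the rule fails
print(ev(lhs, [0]), ev(rhs, [0]))  # True True  -> sound on nonempty domains
```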
## Deductive systems
A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called derivations in proof theory. They are also often called proofs, but are completely formalized unlike natural-language mathematical proofs.
A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective.
A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus a sound argument is correct in every possible interpretation of the language, regardless whether that interpretation is about mathematics, economics, or some other area.
In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B.
### Rules of inference
Further information: List of rules of inference
A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion.
For example, one common rule of inference is the rule of substitution. If t is a term and φ is a formula possibly containing the variable x, then φ[t/x] (often denoted φ[x/t]) is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t, one can conclude φ[t/x] from φ provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t.)
To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by $\exists x (x = y)$, in the signature of (0,1,+,×,=) of arithmetic. If t is the term "x + 1", the formula φ[t/y] is $\exists x ( x = x+1)$, which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z, so that the formula after substitution is $\exists z ( z = x+1)$, which is again logically valid.
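The renaming step in this example is exactly what a capture-avoiding implementation of φ[t/x] must do. The sketch below (tuple-based terms and formulas, hypothetical names, covering only the constructs needed here) reproduces the substitution of x + 1 for y in $\exists x (x = y)$:

```python
import itertools

def term_vars(t):
    """Variables occurring in a term: a variable name or ('f', name, args)."""
    if isinstance(t, str):
        return {t}
    return set().union(*(term_vars(a) for a in t[2]))

def free_vars(phi):
    op = phi[0]
    if op == 'eq':
        return term_vars(phi[1]) | term_vars(phi[2])
    if op in ('exists', 'forall'):
        return free_vars(phi[2]) - {phi[1]}
    raise ValueError(op)

def subst_term(t, x, s):
    if isinstance(t, str):
        return s if t == x else t
    return ('f', t[1], [subst_term(a, x, s) for a in t[2]])

def subst(phi, x, t):
    """phi[t/x], renaming bound variables that would capture a variable of t."""
    op = phi[0]
    if op == 'eq':
        return ('eq', subst_term(phi[1], x, t), subst_term(phi[2], x, t))
    _, y, body = phi                # a quantifier
    if y == x:                      # x is bound here: nothing to substitute
        return phi
    if y in term_vars(t):           # y would capture a free variable of t:
        fresh = next(v for v in (f'z{i}' for i in itertools.count())
                     if v not in term_vars(t) | free_vars(body))
        body = subst(body, y, fresh)  # rename the bound variable first
        y = fresh
    return (op, y, subst(body, x, t))

phi = ('exists', 'x', ('eq', 'x', 'y'))
t = ('f', '+1', ['x'])                      # the term x + 1
print(subst(phi, 'y', t))
# ('exists', 'z0', ('eq', 'z0', ('f', '+1', ['x'])))  -- still logically valid
```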
The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically-defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule.
### Hilbert-style systems and natural deduction
A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom, a hypothesis that has been assumed for the derivation at hand, or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemes of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemes of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference.
Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof.
### Sequent calculus
Further information: Sequent calculus
The sequent calculus was developed to study the properties of natural deduction systems. Instead of working with one formula at a time, it uses sequents, which are expressions of the form
$A_1, \ldots, A_n \vdash B_1, \ldots, B_k,$
where $A_1, \ldots, A_n, B_1, \ldots, B_k$ are formulas and the turnstile symbol $\vdash$ is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that $(A_1 \land \cdots\land A_n)$ implies $(B_1\lor\cdots\lor B_k)$.
### Tableaux method
[Figure: a tableaux proof for the propositional formula ((a ∨ ¬b) ∧ b) → a.]
Further information: Method of analytic tableaux
Unlike the methods just described, the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has $\lnot A$ at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that $C \lor D$ is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent $C \lor D$ and children C and D.
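For the propositional fragment, the whole method fits in a few lines. The following sketch builds the tree implicitly through recursion: a branch is a set of formulas, disjunctions create the branching points described above, and a formula is provable when every branch starting from its negation closes. The tuple representation is hypothetical.

```python
def closed(branch):
    """A branch closes when it contains some literal and its negation."""
    lits = {f for f in branch if f[0] == 'var' or
            (f[0] == 'not' and f[1][0] == 'var')}
    return any(('not', l) in lits for l in lits)

def satisfiable(branch):
    if closed(branch):
        return False
    for f in branch:                 # pick any unexpanded (non-literal) formula
        rest = branch - {f}
        if f[0] == 'and':
            return satisfiable(rest | {f[1], f[2]})
        if f[0] == 'or':             # the branching point described above
            return satisfiable(rest | {f[1]}) or satisfiable(rest | {f[2]})
        if f[0] == '->':
            return (satisfiable(rest | {('not', f[1])})
                    or satisfiable(rest | {f[2]}))
        if f[0] == 'not' and f[1][0] != 'var':
            g = f[1]
            if g[0] == 'not':
                return satisfiable(rest | {g[1]})
            if g[0] == 'and':
                return (satisfiable(rest | {('not', g[1])})
                        or satisfiable(rest | {('not', g[2])}))
            if g[0] == 'or':
                return satisfiable(rest | {('not', g[1]), ('not', g[2])})
            if g[0] == '->':
                return satisfiable(rest | {g[1], ('not', g[2])})
    return True   # only literals left and the branch is open: satisfiable

def provable(a):
    return not satisfiable(frozenset({('not', a)}))

a, b = ('var', 'a'), ('var', 'b')
# the formula from the figure caption: ((a ∨ ¬b) ∧ b) → a
print(provable(('->', ('and', ('or', a, ('not', b)), b), a)))  # True
```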
### Resolution
The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving.
The resolution method works only with formulas that are disjunctions of literals (atomic formulas and their negations); arbitrary formulas must first be converted to this form through Skolemization. The resolution rule states that from the hypotheses $A_1 \lor\cdots\lor A_k \lor C$ and $B_1\lor\cdots\lor B_l\lor\lnot C$, the conclusion $A_1\lor\cdots\lor A_k\lor B_1\lor\cdots\lor B_l$ can be obtained.
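A sketch of the rule as a one-step operation on clauses (sets of literals), together with the standard saturation loop, for ground (propositional) clauses only; the unification needed for full first-order resolution is omitted, and the string-based literal representation is chosen for this illustration.

```python
from itertools import combinations

def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    """All conclusions A1∨...∨Ak∨B1∨...∨Bl from c1 = A∨C and c2 = B∨¬C."""
    return [(c1 - {l}) | (c2 - {neg(l)}) for l in c1 if neg(l) in c2]

def unsatisfiable(clauses):
    """Saturate under resolution; deriving the empty clause is a refutation."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = {frozenset(r) for a, b in combinations(clauses, 2)
               for r in resolvents(a, b)}
        if frozenset() in new:
            return True
        if new <= clauses:           # nothing new can be derived
            return False
        clauses |= new

print(unsatisfiable([{'p', 'q'}, {'~p', 'q'}, {'~q'}]))  # True
print(unsatisfiable([{'p', 'q'}, {'~p'}]))               # False
```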
### Provable identities
The following sentences can be called "identities" because the main connective in each is the biconditional; two of them are spot-checked in the sketch after the list.
$\lnot \forall x \, P(x) \Leftrightarrow \exists x \, \lnot P(x)$
$\lnot \exists x \, P(x) \Leftrightarrow \forall x \, \lnot P(x)$
$\forall x \, \forall y \, P(x,y) \Leftrightarrow \forall y \, \forall x \, P(x,y)$
$\exists x \, \exists y \, P(x,y) \Leftrightarrow \exists y \, \exists x \, P(x,y)$
$\forall x \, P(x) \land \forall x \, Q(x) \Leftrightarrow \forall x \, (P(x) \land Q(x))$
$\exists x \, P(x) \lor \exists x \, Q(x) \Leftrightarrow \exists x \, (P(x) \lor Q(x))$
$P \land \exists x \, Q(x) \Leftrightarrow \exists x \, (P \land Q(x))$ (where $x$ must not occur free in $P$)
$P \lor \forall x \, Q(x) \Leftrightarrow \forall x \, (P \lor Q(x))$ (where $x$ must not occur free in $P$)
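Identities like these can be checked by brute force over small finite domains. The sketch below enumerates every interpretation of a unary predicate P on domains of size up to 4; it confirms the first identity and, as a control, rejects a superficially similar but invalid "∨ distributes over ∀" claim (taking Q to be ¬P). This is evidence over the finite domains tried, not a proof.

```python
from itertools import product

def check(lhs, rhs, max_size=4):
    for n in range(1, max_size + 1):
        dom = range(n)
        for bits in product([False, True], repeat=n):  # every extension of P
            P = lambda d: bits[d]
            if lhs(dom, P) != rhs(dom, P):
                return False
    return True

# ¬∀x P(x)  ⇔  ∃x ¬P(x): holds in every interpretation tried
print(check(lambda dom, P: not all(P(d) for d in dom),
            lambda dom, P: any(not P(d) for d in dom)))          # True

# ∀x P(x) ∨ ∀x Q(x)  ⇔  ∀x (P(x) ∨ Q(x)) is NOT valid (with Q = ¬P):
print(check(lambda dom, P: all(P(d) for d in dom)
                            or all(not P(d) for d in dom),
            lambda dom, P: all(P(d) or not P(d) for d in dom)))  # False
```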
## Equality and its axioms
There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, so that any "two" members related by equality are in fact the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are:
1. Reflexivity. For each variable x, x = x.
2. Substitution for functions. For all variables x and y, and any function symbol f,
x = y → f(...,x,...) = f(...,y,...).
3. Substitution for formulas. For any variables x and y and any formula φ(x), if φ' is obtained by replacing any number of free occurrences of x in φ with y, such that these remain free occurrences of y, then
x = y → (φ → φ').
These are axiom schemes, each of which specifies an infinite set of axioms. The third scheme is known as Leibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second scheme, involving the function symbol f, is (equivalent to) a special case of the third scheme, using the formula
x = y → (f(...,x,...) = z → f(...,y,...) = z).
Many other properties of equality are consequences of the axioms above, for example:
1. Symmetry. If x = y then y = x.
2. Transitivity. If x = y and y = z then x = z.
### First-order logic without equality
An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as first-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation.
When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered.
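The passage from an arbitrary model (in the without-equality convention) to a normal model can be made explicit: quotient the domain by the interpreted equality relation. A sketch with hypothetical names, assuming `eq` is already an equivalence relation that is a congruence for the listed relations:

```python
def normal_model(domain, eq, rels):
    """eq: set of pairs (a congruence); rels: name -> set of tuples."""
    # send each element to its equivalence class
    cls = {a: frozenset(b for b in domain if (a, b) in eq) for a in domain}
    new_domain = set(cls.values())
    new_rels = {name: {tuple(cls[x] for x in tup) for tup in ext}
                for name, ext in rels.items()}
    return new_domain, new_rels

# 0 and 1 are interpreted as "equal"; since eq is a congruence for R,
# R holds of both, and they collapse to a single element in the quotient.
domain = {0, 1, 2}
eq = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)}
rels = {'R': {(0,), (1,)}}
print(normal_model(domain, eq, rels))
# ({frozenset({0, 1}), frozenset({2})}, {'R': {(frozenset({0, 1}),)}})
# (up to set ordering)
```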
First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted.
### Defining equality within a theory
If a theory has a binary formula A(x,y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemes as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument.
Some theories allow other ad hoc definitions of equality:
• In the theory of partial orders with one relation symbol ≤, one could define s = t to be an abbreviation for $s \le t \wedge t \le s$.
• In set theory with one relation $\in$, one may define s = t to be an abbreviation for $\forall x\, (s \in x \leftrightarrow t \in x) \wedge \forall x\, (x \in s \leftrightarrow x \in t)$. This definition of equality then automatically satisfies the axioms for equality. In this case, one should replace the usual axiom of extensionality, $\forall x \forall y [ \forall z (z \in x \Leftrightarrow z \in y) \Rightarrow x = y]$, by $\forall x \forall y [ \forall z (z \in x \Leftrightarrow z \in y) \Rightarrow \forall z (x \in z \Leftrightarrow y \in z) ]$, i.e. if x and y have the same elements, then they belong to the same sets.
## Metalogical properties
One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories.
### Completeness and undecidability
Gödel's completeness theorem, proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. Thus first-order logical consequence is semidecidable: it is possible to make an effective enumeration of all pairs of sentences (φ,ψ) such that ψ is a logical consequence of φ.
Unlike propositional logic, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of the halting problem.
There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. The Bernays–Schönfinkel class of first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework of description logics.
### The Löwenheim–Skolem theorem
The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has any infinite model then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language. That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable).
The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox.
### The compactness theorem
The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models.
The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures).
There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus one seeks to determine if the good and bad states are in different connected components of the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ(x,y) of first-order logic, in the signature of graphs, that expresses the idea that there is a path from x to y. Connectedness can be expressed in second-order logic, but not with only existential set quantifiers, as $\Sigma_1^1$ also enjoys compactness.
### Lindström's theorem
Main article: Lindström's theorem
Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type:
• A logical system satisfying Lindström's definition that contains first-order logic and satisfies both the Löwenheim–Skolem theorem and the compactness theorem must be equivalent to first-order logic.
• A logical system satisfying Lindström's definition that has a semidecidable logical consequence relation and satisfies the Löwenheim–Skolem theorem must be equivalent to first-order logic.
## Limitations
Although first-order logic is sufficient for formalizing much of mathematics, and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe.
For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for logical validity is impossible. This has led to the study of interesting decidable fragments such as C2, first-order logic with two variables and the counting quantifiers $\exists^{\ge n}$ and $\exists^{\le n}$ (these quantifiers are, respectively, "there exists at least n" and "there exists at most n") (Horrocks 2010).
### Expressiveness
The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical. Thus there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order.
### Formalizing natural languages
First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". But there are many more complicated features of natural language that cannot be expressed in (single-sorted) first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic" (Gamut 1991, p. 75).
| Type | Example | Comment |
| --- | --- | --- |
| Quantification over properties | If John is self-satisfied, then there is at least one thing he has in common with Peter | Requires a quantifier over predicates, which cannot be implemented in single-sorted first-order logic: Zj → ∃X(Xj ∧ Xp) |
| Quantification over properties | Santa Claus has all the attributes of a sadist | Requires quantifiers over predicates, which cannot be implemented in single-sorted first-order logic: ∀X(∀x(Sx → Xx) → Xs) |
| Predicate adverbial | John is walking quickly | Cannot be analysed as Wj ∧ Qj; predicate adverbials are not the same kind of thing as second-order predicates such as colour |
| Relative adjective | Jumbo is a small elephant | Cannot be analysed as Sj ∧ Ej; predicate adjectives are not the same kind of thing as second-order predicates such as colour |
| Predicate adverbial modifier | John is walking very quickly | — |
| Relative adjective modifier | Jumbo is terribly small | An expression such as "terribly", when applied to a relative adjective such as "small", results in a new composite relative adjective "terribly small" |
| Prepositions | Mary is sitting next to John | The preposition "next to" when applied to "John" results in the predicate adverbial "next to John" |
## Restrictions, extensions, and variations
There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity.
### Restricted languages
First-order logic can be studied in languages with fewer logical symbols than were described above.
• Because $\exists x \phi(x)$ can be expressed as $\neg \forall x \neg \phi(x)$, and $\forall x \phi(x)$ can be expressed as $\neg \exists x \neg \phi(x)$, either of the two quantifiers $\exists$ and $\forall$ can be dropped.
• Since $\phi \lor \psi$ can be expressed as $\lnot (\lnot \phi \land \lnot \psi)$ and $\phi \land \psi$ can be expressed as $\lnot(\lnot \phi \lor \lnot \psi)$, either $\vee$ or $\wedge$ can be dropped. In other words, it is sufficient to have $\neg$ and $\vee$, or $\neg$ and $\wedge$, as the only logical connectives.
• Similarly, it is sufficient to have only $\neg$ and $\rightarrow$ as logical connectives, or to have only the Sheffer stroke (NAND) or the Peirce arrow (NOR) operator (these reductions are spot-checked in the sketch after this list).
• It is possible to entirely avoid function symbols and constant symbols, rewriting them via predicate symbols in an appropriate way. For example, instead of using a constant symbol $\; 0$ one may use a predicate $\; 0(x)$ (interpreted as $\; x=0$ ), and replace every predicate such as $\; P(0,y)$ with $\forall x \;(0(x) \rightarrow P(x,y))$. A function such as $f(x_1,x_2,...,x_n)$ will similarly be replaced by a predicate $F(x_1,x_2,...,x_n,y)$ interpreted as $y = f(x_1,x_2,...,x_n)$. This change requires adding additional axioms to the theory at hand, so that interpretations of the predicate symbols used have the correct semantics.
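The propositional reductions in the second and third bullets can be verified exhaustively by truth table; a sketch:

```python
from itertools import product

nand = lambda p, q: not (p and q)

for p, q in product([False, True], repeat=2):
    assert (p or q) == (not (not p and not q))   # ∨ from ¬ and ∧
    assert (p and q) == (not (not p or not q))   # ∧ from ¬ and ∨
    assert (not p) == nand(p, p)                 # ¬ from NAND alone
    assert (p and q) == (not nand(p, q))         # ∧ from NAND alone
print("all reductions verified on every truth assignment")
```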
Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemes in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system.
It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include a pairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied.
### Many-sorted logic
Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted first-order logic allows variables to have different sorts, which have different domains. This is also called typed first-order logic, and the sorts are called types (as in data type), but it is not the same as first-order type theory. Many-sorted first-order logic is often used in the study of second-order arithmetic.
When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic. One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory, and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols $P_1(x)$ and $P_2(x)$ and the axiom
$\forall x ( P_1(x) \lor P_2(x)) \land \lnot \exists x (P_1(x) \land P_2(x))$.
Then the elements satisfying $P_1$ are thought of as elements of the first sort, and elements satisfying $P_2$ as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula φ(x), one writes
$\exists x (P_1(x) \land \phi(x))$.
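The reduction just described is a simple syntactic transformation. A sketch (tuple-based formulas and hypothetical names; sorted quantifiers carry their sort as a third component) that relativizes each quantifier to its sort predicate, using ∧ for existentials as above and the dual → for universals:

```python
SORT_PRED = {'s1': 'P1', 's2': 'P2'}

def relativize(phi):
    op = phi[0]
    if op in ('exists', 'forall'):
        _, var, sort, body = phi            # sorted quantifier (var : sort)
        guard = (SORT_PRED[sort], var)
        inner = relativize(body)
        # ∃x:s . φ  ~>  ∃x (P_s(x) ∧ φ)     ∀x:s . φ  ~>  ∀x (P_s(x) → φ)
        if op == 'exists':
            return ('exists', var, ('and', guard, inner))
        return ('forall', var, ('implies', guard, inner))
    if op in ('and', 'or', 'implies'):
        return (op, relativize(phi[1]), relativize(phi[2]))
    if op == 'not':
        return ('not', relativize(phi[1]))
    return phi                              # atomic formulas are unchanged

print(relativize(('exists', 'x', 's1', ('phi', 'x'))))
# ('exists', 'x', ('and', ('P1', 'x'), ('phi', 'x')))
```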
### Additional quantifiers
Additional quantifiers can be added to first-order logic.
• Sometimes it is useful to say that "P(x) holds for exactly one x", which can be expressed as $\exists ! x\, P(x)$. This notation, called uniqueness quantification, may be taken to abbreviate a formula such as $\exists x\, (P(x) \wedge \forall y\, (P(y) \rightarrow x = y))$; the sketch after this list unpacks it.
• First-order logic with extra quantifiers has new quantifiers Qx,..., with meanings such as "there are many x such that ...". Also see branching quantifiers and the plural quantifiers of George Boolos and others.
• Bounded quantifiers are often used in the study of set theory or arithmetic.
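A sketch checking, over a small finite domain, that the abbreviation of $\exists !$ in the first bullet agrees with the direct reading "exactly one witness":

```python
def exists_unique_abbrev(domain, P):
    # ∃x (P(x) ∧ ∀y (P(y) → x = y))
    return any(P(x) and all((not P(y)) or x == y for y in domain)
               for x in domain)

def exists_unique_count(domain, P):
    return sum(1 for x in domain if P(x)) == 1

for P in [lambda x: x == 1,      # exactly one witness
          lambda x: x >= 1,      # two witnesses
          lambda x: False]:      # no witness
    assert exists_unique_abbrev(range(3), P) == exists_unique_count(range(3), P)
print("abbreviation matches direct counting")
```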
### Infinitary logics
Main article: Infinitary logic
Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory.
Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus formulas are, essentially, identified with their parse trees, rather than with the strings being parsed.
The most commonly studied infinitary logics are denoted $L_{\alpha\beta}$, where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is $L_{\omega\omega}$. In the logic $L_{\infty\omega}$, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with less than κ constituents is known as $L_{\kappa\omega}$. For example, $L_{\omega_1\omega}$ permits countable conjunctions and disjunctions.
The set of free variables in a formula of $L_{\kappa\omega}$ can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another.[6] In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in $L_{\kappa\infty}$, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic $L_{\kappa\lambda}$ permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ.
### Non-classical and modal logics
• Intuitionistic first-order logic uses intuitionistic rather than classical propositional calculus; for example, ¬¬φ need not be equivalent to φ.
• First-order modal logic allows one to describe other possible worlds as well as this contingently true world which we inhabit. In some versions, the set of possible worlds varies depending on which possible world one inhabits. Modal logic has extra modal operators with meanings which can be characterized informally as, for example "it is necessary that φ" (true in all possible worlds) and "it is possible that φ" (true in some possible world). With standard first-order logic we have a single domain and each predicate is assigned one extension. With first-order modal logic we have a domain function that assigns each possible world its own domain, so that each predicate gets an extension only relative to these possible worlds. This allows us to model cases where, for example, Alex is a Philosopher, but might have been a Mathematician, and might not have existed at all. In the first possible world P(a) is true, in the second P(a) is false, and in the third possible world there is no a in the domain at all.
• First-order fuzzy logics are first-order extensions of propositional fuzzy logics rather than of classical propositional calculus.
### Higher-order logics
Main article: Higher-order logic
The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus
$\exists a ( \text{Phil}(a))$
is a legal first-order formula, but
$\exists \text{Phil} ( \text{Phil}(a))$
is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified.
Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as full semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics.
Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics.
## Automated theorem proving and formal methods
Further information: First-order theorem proving
Automated theorem proving refers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems. Finding derivations is a difficult task because the search space can be very large; an exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search.
The related area of automated proof verification uses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct.
Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and Isabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small, core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known as proof assistants. They may also use formal logics that are stronger than first-order logic, such as type theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write,[7] results are often formalized as a series of lemmas, for which derivations can be constructed separately.
Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification. Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences.
## See also
• ACL2 — A Computational Logic for Applicative Common Lisp.
• Equiconsistency
• Extension by definitions
• Hanf number
• Herbrandization
• Löwenheim number
• Prenex normal form
• Skolem normal form
• Table of logic symbols
• Tarski's World
• Truth table
• Type (model theory)
## Notes
1. Mendelson, Elliott (1964). Introduction to Mathematical Logic. Van Nostrand Reinhold. p. 56.
2. The word language is sometimes used as a synonym for signature, but this can be confusing because "language" can also refer to the set of formulas.
3. More precisely, there is only one language of each variant of one-sorted first-order logic: with or without equality, with or without functions, with or without propositional variables, ….
4. Some authors who use the term "well-formed formula" use "formula" to mean any string of symbols from the alphabet. However, most authors in mathematical logic use "formula" to mean "well-formed formula" and have no term for non-well-formed formulas. In every context, it is only the well-formed formulas that are of interest.
5. Some authors only admit formulas with finitely many free variables in Lκω, and more generally only formulas with < λ free variables in Lκλ.
6. Avigad et al. (2007) discuss the process of formally verifying a proof of the prime number theorem. The formalized proof required approximately 30,000 lines of input to the Isabelle proof verifier.
## References
• Andrews, Peter B. (2002); An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof, 2nd ed., Berlin: Kluwer Academic Publishers. Available from Springer.
• Avigad, Jeremy; Donnelly, Kevin; Gray, David; and Raff, Paul (2007); "A formally verified proof of the prime number theorem", ACM Transactions on Computational Logic, vol. 9 no. 1 doi:10.1145/1297658.1297660
• Barwise, Jon (1977); "An Introduction to First-Order Logic", in Barwise, Jon, ed. (1982). Handbook of Mathematical Logic. Studies in Logic and the Foundations of Mathematics. Amsterdam, NL: North-Holland. ISBN 978-0-444-86388-1.
• Barwise, Jon; and Etchemendy, John (2000); Language Proof and Logic, Stanford, CA: CSLI Publications (Distributed by the University of Chicago Press)
• Bocheński, Józef Maria (2007); A Précis of Mathematical Logic, Dordrecht, NL: D. Reidel, translated from the French and German editions by Otto Bird
• Ferreirós, José (2001); "The Road to Modern Logic — An Interpretation", Bulletin of Symbolic Logic, Volume 7, Issue 4, 2001, pp. 441–484, DOI 10.2307/2687794, JSTOR
• Gamut, L. T. F. (1991); Logic, Language, and Meaning, Volume 2: Intensional Logic and Logical Grammar, Chicago, IL: University of Chicago Press, ISBN 0-226-28088-8
• Hilbert, David; and Ackermann, Wilhelm (1950); Principles of Mathematical Logic, Chelsea (English translation of Grundzüge der theoretischen Logik, 1928 German first edition)
• Hodges, Wilfrid (2001); "Classical Logic I: First Order Logic", in Goble, Lou (ed.); The Blackwell Guide to Philosophical Logic, Blackwell
• Ebbinghaus, Heinz-Dieter; Flum, Jörg; and Thomas, Wolfgang (1994); Mathematical Logic, Undergraduate Texts in Mathematics, Berlin, DE/New York, NY: Springer-Verlag, Second Edition, ISBN 978-0-387-94258-2
http://mathoverflow.net/questions/86542/relationship-between-sequential-compactness-of-a-convex-set-and-its-extremal-poin | ## Relationship between sequential compactness of a convex set and its extremal points
Suppose that $X$ is a compact convex subset of a topological vector space. Suppose also that the extremal points of $X$ have the additional property that any sequence $x_n$ of extremal points has a subsequence $x_{n_k}$ converging to an extremal point $x$. Does this imply that $X$ is sequentially compact? If not, what additional conditions would imply $X$ is sequentially compact?
If the topological vector space is first countable, then the compactness of $X$ implies that it is sequentially compact as well, so we might as well assume that $X$ is not first countable. This question is a little out of my area of expertise, so I'm not sure if people ever deal with convex sets in vector spaces which are not first countable.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9652401208877563, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/97596/ring-structrures-on-rn/97597 | ## Ring structrures on R^n
Consider a commutative ring $A= ( \mathbb{R}^n , + , \times)$, where $+$ is the usual one. Assume further that $\times$ is continuous (with respect to the usual topology). Let $H$ be the set of non invertible elements of this ring.
For $k \geq 0$, what is the largest integer $n=n(k)$ such that $\mathbb{R}^n$ can be endowed with a ring structure as described above, for which the corresponding $H$ is a vector space of dimension at most $k$ ?
For example, in the $k=0$ case we are looking for a field, and it is thus well-known that $n(0)=2$ (realized by $\mathbb{C} \simeq \mathbb{R^2}$). More generally, I can show that $n(k) \leq k + 2$ holds for any $k$ (hence showing that the quantity $n(k)$ is well defined !).
Is it true that $n(k) = k+2$ holds for all $k \geq 0$ ? In particular, is there a ring structure (as described above) on $\mathbb{R}^3$ such that $H$ is a line ?
Up to now, the only lower bound I have is the trivial $n(k) \geq n(k-1) \geq \cdots \geq n(0)=2$.
-
Aren't you implicitly assuming that $A$ is an $\mathbb{R}$-algebra, isomorphic to $\mathbb{R}^n$ as a vector space (not just as a group)? – Laurent Moret-Bailly May 22 2012 at 6:12
If $x,y$ are vectors and $\lambda \in \mathbb{R}$, then $(\lambda x) \times y = \lambda (x \times y)$ follows from the continuity assumption. – js May 22 2012 at 10:13
## 1 Answer
For $k$ even, take the ring $\mathbb C[\epsilon]/\epsilon^{k/2+1}$. Non-invertible elements are multiples of $\epsilon$, which form a $k$-dimensional vector space. The ring has dimension $k+2$, so $n(k)=k+2$ for $k$ even.
For $k$ odd, take the ring $\mathbb R[\epsilon]/\epsilon^{k+1}$. By the same logic, this gives $n(k)\geq k+1$ for $k$ odd.
To make this lower bound an upper bound for $k$ odd, let us show that if the non-invertible elements form a vector space, then they form an ideal. They are closed under summation, being a vector space, and by multiplication by elements of the ring, since if $ab$ has an inverse then $a$ has an inverse. Thus we can quotient out by the non-invertible elements, and get a field. If $n(k)=k+2$ then that field can be $\mathbb C$. Then we can always find a copy of $\mathbb C$ inside the ring by a Hensel's lemma-type argument, so the ring is a vector space over $\mathbb C$, so it is even-dimensional.
This answers the question.
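As a small numeric sanity check (not part of the argument above): representing an element of $\mathbb R[\epsilon]/\epsilon^{k+1}$ by its coefficient list, an element is invertible exactly when its constant term is nonzero, so the non-invertible elements form a $k$-dimensional subspace, as used in the answer. A sketch in Python:

```python
# Elements of R[eps]/eps^(k+1) as coefficient lists [a0, ..., ak].

def mul(a, b):
    out = [0.0] * len(a)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < len(out):          # truncation: eps^(k+1) = 0
                out[i + j] += ai * bj
    return out

def inverse(a):
    """Return b with mul(a, b) = 1, or None if a0 = 0 (not invertible)."""
    if a[0] == 0:
        return None
    b = [1.0 / a[0]] + [0.0] * (len(a) - 1)
    for n in range(1, len(a)):            # solve for b[n] degree by degree
        b[n] = -sum(a[i] * b[n - i] for i in range(1, n + 1)) / a[0]
    return b

a = [2.0, 3.0, -1.0]                      # 2 + 3*eps - eps^2 in R[eps]/eps^3
print(mul(a, inverse(a)))                 # [1.0, 0.0, 0.0]
print(inverse([0.0, 1.0, 0.0]))           # None: eps has no inverse
```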
-
Thanks! But I don't see how to construct the embedding $\mathbb{C} \simeq A/H \to A$ (if you could indicate a reference where a similar argument is developed, it would be fine). – js May 22 2012 at 10:22
http://mathoverflow.net/questions/63519?sort=votes | ## Coefficients in cohomology
(Sorry if this is too elementary for this site)
I’m having some trouble understanding sheaf cohomology. It’s supposed to provide a theory of cohomology “with local coefficients”, and allow easy comparison between different theories like singular, Cech, de Rham and Alexander Spanier. What I don’t understand is: what’s all the fuss with coefficients that vary with each open set? Indeed, what’s all the fuss with changing coefficients in an ordinary cohomology theory as in Eilenberg Steenrod?
Homology is trying to measure the “holes” of a space; wouldn’t integer coefficients suffice already? I’m not really sure what cohomology is trying to measure; at least I think the first singular group is trying to measure some kind of “potential difference”, like explained in Hatcher’s book. It gets worse for me when the coefficient group isn’t the integers. But when I get to sheaf cohomology I’m totally dumbstruck as to what it’s trying to measure, and what useful information of the space can be extracted from it. Now if it’s just about comparisons of different theories I can live with that…
Can someone please give me an intuitive explanation of the fuss with all these different coefficients? Please start off with why we even use different coefficients in Eilenberg Steenrod. Sorry if this is too elementary.
-
I think this might be more appropriate for math.stackexchange.com. So singular theory is cohomology with coefficients in the constant sheaf. In topology this is all we really need, but not in algebraic geometry or number theory. Group cohomology/homology with constant coefficients is boring, you are just looking at the trivial module that the group acts on. This should be boring, the trivial representation does not tell you a whole lot about the group. – Sean Tilson Apr 30 2011 at 14:16
I agree with Sean. Group homology with twisted coefficients arises in interesting contexts all the time. – Jim Conant Apr 30 2011 at 14:19
George Whitehead's textbook "Elements of homotopy theory" has a long and well-motivated chapter on local coefficients in topology, and also about obstruction theory and Postnikov systems. It is hard to imagine to do these things systematically without changing coefficients explicitly. – Zoran Škoda May 1 2011 at 8:59
## 5 Answers
This (elementary and perfectly standard) example might help show the power of sheaves with non-constant coefficients:
First, think about the circle $S^1$. Suppose you want to understand (real) line bundles on the circle. You can certainly cover the circle with two open contractible subsets $U_1$ and $U_2$ (which you can take to be the complements of the north and south poles), and we know that any line bundle on a contractible space is trivial. So if you've got a line bundle $L$ over $S^1$, you can restrict it to either $U_i$ and get a trivial bundle $L_i$. $L$ is built from these $L_i$ and the way they are patched together over $U_1\cap U_2$.
Now what does it mean to patch the $L_i$ together over $U_{12}=U_1\cap U_2$? It means choosing an isomorphism $L_1|U_{12}\rightarrow L_2|U_{12}$. For any $x\in U_{12}$, the restriction of this isomorphism to the fiber $L_x$ over $x$ is an isomorphism between 1-dimensional vector spaces, and so (after choosing bases) can be identified with an element of ${\bf R}^*$ (the non-zero reals). Therefore your patching consists of a continuous map
$$U_{12}\rightarrow {\mathbb R}^*$$
which is to say, a Cech 1-cocycle for the sheaf of continuous ${\bf R}^{*}$-valued functions.
Now of course you could build a line bundle in some other way, say by starting with two different contractible sets $U_1$ and $U_2$. When do two sets of patching data give isomorphic line bundles? A little thought reveals that the answer is: When and only when the corresponding cocycles give the same class in
$$H^1(S^1,G^{*})$$
with $G^{*}$ being the sheaf of continuous ${\bf R}^*$-valued functions.
Therefore line bundles are classified by $H^1(S^1,G^{*})$. Now consider the exact sequence of sheaves
$$0 \rightarrow G \rightarrow G^*\rightarrow {\bf Z}/2{\bf Z}\rightarrow 0$$
where $G$ is the sheaf of continuous ${\bf R}$ valued functions, and the map on the left is exponentiation. Follow the long exact sequence of cohomology, use the fact that $G$ is acyclic, and conclude that $H^1(S^1,G^*)=H^1(S^1,{\bf Z}/2{\bf Z})={\bf Z}/2{\bf Z}$. In other words, there are exactly two real line bundles over $S^1$ --- and indeed there are: the cylinder and the Mobius strip.
Exercise: Do a similar calculation for ${\bf CP}^1$ (the Riemann sphere). Conclude that the set of (complex) line bundles is in one-one correspondence with $H^2({\bf CP}^1,{\bf Z})={\bf Z}$.
-
As soon as you proceed from the first ideas of ''counting holes'' in a space to more advanced problems in algebraic topology, you will begin to appreciate local coefficient systems. Even the passage from $Z$ to rings like $Z/2$ does not merely simplify computations, but allows you to detect more phenomena. For example, the map $RP^2 \to S^2$ that collapses $RP^1$ to a point is null in integral homology, but not in $Z/2$-homology. Think a few minutes about why this is not a contradiction to the universal coefficient theorem.
But local coefficient systems are useful in a variety of situations. Poincare duality for nonoriented manifolds has been mentioned (and in fact, it sheds light on the oriented case as well). Then there is obstruction theory: let $f:X \to Y$ be a fibration with fibre $F$, and let $g:Z \to Y$ be a map. A basic problem of homotopy theory is to decide whether there can be a lift $h: Z \to X$ of $g$ through $f$. There is a sequence of obstructions to the existence of such a thing; and these obstructions live in $H^n (Z; \pi_{n-1}(F))$, but with twisted coefficients if $Y$ is not simply-connected. Then the Leray-Serre spectral sequence comes to my mind: it relates the (co)homology of the base, the fibre and the total space of a fibration; and if the base isn't simply-connected, then local coefficients are inevitable.
Especially in the last two situations, the introduction of local coefficient systems makes the proofs more transparent even in the simply-connected case.
I admit that for most purposes of algebraic topology, the introduction of sheaves (more general than local coefficient systems) is overkill. The classical areas where sheaves are most important are complex analysis and algebraic geometry.
-
As Johannes Ebert says, the classical areas where sheaves are most important are complex analysis and algebraic geometry.
There are two completely different kinds of sheaves one might consider on a complex manifold: constructible sheaves (basically, locally constant along a stratification) and quasicoherent sheaves (modules over the ring of functions). It's kind of an amazing accident, and I think rather misleading, that "sheaf theory" is useful for studying both kinds of sheaves. Certainly there are theorems that apply to both kinds, but most interesting theorems require you to assume one or the other.
This is very much a matter of opinion, and I expect to get comments disagreeing with me! Let me give just one example of what I mean. For any sheaf at all, we can consider Cech cohomology using an open cover. In the constructible-sheaf world (like you were getting in topology), one likes to assume that the intersections of sets in the cover are contractible, so all cohomology comes from gluing. In the quasicoherent-sheaf world, one likes to assume that the sets in the cover are affine (and that the scheme is separated, so the intersections are likewise affine), again so all cohomology comes from gluing. Obviously one could state a general theorem about acyclic covers or somesuch, but it's crucial to bear in mind how different those are for the two kinds of sheaves.
(N.B. Of course there are sheaves that are neither constructible nor quasicoherent, and one occasionally does use them, but not as often as these two.)
-
Illuminating answer! But I have to disagree a bit, as there exist whole (and very classical) theories which rely on *mixing* the two types of sheaves. A large portion of the classical theory of compact Riemann surfaces is organized around the exponential sequence $\mathbb{Z} \to \mathcal{O} \to \mathcal{O}^{\times}$. The first sheaf is constructible, the middle sheaf coherent and the third is neither. However, the exp sequence does not exist in "algebraic algebraic geometry". – Johannes Ebert Apr 30 2011 at 19:23
I had the impression that Hatcher's book claims as motivation that local coefficients allow Poincaré Duality to work properly for non-orientable spaces.
-
A practical motivation: ordinary homology with, say, mod2 or rational coefficients are often easier to compute (and hence - to apply) than integral homology.
## spatial ciphers/cryptanalysis techniques? [closed]
Are there spatial ciphers/cryptanalysis techniques based on neighborhood spaces in a grid? More specifically, are there 5 orientation "spaces" with the values being north, south, east, west, neutral?
I know this sounds more like rigorous math, but I just need a simple explanation or some direction, as in resources, keywords, or links (SpringerLink, arXiv, etc.).
Thanks for the help everyone.
--
(update)
Well, in the context of the way I am handling the problem, I assigned these 5 values/symbols (N,S,E,W,I). Basically, I have globular shapes (whose descriptors don't matter) that are in an $n \times n$ grid of these "neighborhood spaces". They are not given - just a method I am tackling the problem with. I base these values (N,S,E,W,I) on whether these globules cross into neighboring spaces or are strictly inside a grid-cell. I treated globules extending/touching the grid lines shared between two neighboring cells as still crossing over into a neighboring space.
Also, it may not use a cipher, but a code. So foremost, I am looking for ways to construct a cipher or a code based on spatial alignments. I hope this is clear, so let me know if clarifications are needed.
Note: I = inside/neutral.
The obvious answer is that any string of symbols could encode data, and this is no different from just having a string of digits between $1$ and $5$. I don't know of any standard encoding which is formatted this way. I really can't imagine what there could be to say mathematically about this sort of data as compared to just strings of symbols from a $5$ character alphabet. – David Speyer Sep 7 2010 at 11:25
I only answered this as below because it hasn't been deleted yet, despite multiple downvotes, and the question's author has at least tried to amend and clarify the question, though it still remains very unclear to me, particularly his cipher vs code, and the comments about globs. Perhaps describing the problem itself would make things clearer... – sleepless in beantown Sep 8 2010 at 1:13
## 3 Answers
There is a classic substitution cipher technique that uses a 25-square ($5 \times 5$) grid which holds 25 of the 26 letters of the roman alphabet:
````
+ 1 2 3 4 5
1 A B C D E
2 F G H I J
3 K L M N O
4 P Q R S T
5 U V W X Y
````
It's effectively a short-hand technique for creating a substitution cipher by viewing the grid as a toroidal lattice, letting NORTH define the mapping {'A'$\to$'U', 'B'$\to$'V', ... 'Y'$\to$'T'}, etc. That is, NORTH maps the coordinates $(x,y)$ to $(NORTH_x(x),NORTH_y(y))$, for $x,y \in$ {1,2,3,4,5},
$NORTH_x(t)=t$, for $1 \le t \le 5$
$NORTH_y(t)=t-1$ for $2\le t \le 5$,
$NORTH_y(t)=5$ for $t=1$.
Similar definitions exist for SOUTH, EAST, and WEST. The Identity direction or NULL direction stands for the identity function, $NULL(t)=t$.
A message can be encoded with a single direction, meaning only one substitution cipher is used for the entire message. A message can be encoded with multiple directions, meaning that each sequential letter is encoded by a different direction, rolling over when you get to the end of the cipher.
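For concreteness, here is a minimal Python sketch (my own illustration, not part of the original answer or any standard cipher library) of the direction-based substitution just described; the sample message and key of directions below are made up:

```
# 5x5 grid holding A-Y (Z is omitted); directions shift coordinates
# on a torus, and I (identity) leaves a letter unchanged.
GRID = ["ABCDE", "FGHIJ", "KLMNO", "PQRST", "UVWXY"]
POS = {ch: (r, c) for r, row in enumerate(GRID) for c, ch in enumerate(row)}

SHIFTS = {  # (row shift, column shift) on the 5x5 torus
    "N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1), "I": (0, 0),
}

def encode(message, directions):
    """Encode by cycling through the key of directions, one per letter."""
    out = []
    for i, ch in enumerate(message):
        r, c = POS[ch]
        dr, dc = SHIFTS[directions[i % len(directions)]]
        out.append(GRID[(r + dr) % 5][(c + dc) % 5])
    return "".join(out)

print(encode("ATTACK", "NEI"))  # letters shifted N, E, I in turn
```

With the single direction "N" this reproduces the mapping 'A'$\to$'U', 'B'$\to$'V', ..., 'Y'$\to$'T' from above.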
Napoleon used a variation of this, with two $5 \times 5$ grids, with a pass-phrase used for the second grid.
This may have nothing to do with what you're asking for. Your explanation thus far is not illuminating enough for me to grasp what it is exactly that you are trying to do. Can you explain your ultimate end-goal? Is it to create an encrytion cipher? Is it to analyze an already existing encrypted message? Is it to analyze a particular encoding algorithm or technique?
What exactly is the underlying problem which you are attempting to solve?
Please do not leave comments as answers. – David Speyer Sep 7 2010 at 19:25
Please do not leave comments as answers – Yemon Choi Sep 7 2010 at 18:55
# Viewpoint: Reconnecting to superfluid turbulence
The Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste, Italy
Published October 6, 2008 | Physics 1, 26 (2008) | DOI: 10.1103/Physics.1.26
Images of vortex motion in superfluid helium reveal connections between quantum and classical turbulence and may lead to an understanding of complex flows in both superfluids and ordinary fluids.
Superfluid flows are interesting playgrounds, where hydrodynamics confronts quantum mechanics. One of the more important and interesting questions is what a complex turbulent flow would look like in a superfluid that was prevented from rotational motion except for circulation about individual, discrete vortex filaments, each having a single quantum of circulation about a core of atomic dimensions. This is a great simplification when compared to ordinary turbulence, in which vortices and eddies can have any strength and size. A number of recent works, which have substituted superfluids for ordinary fluids in standard turbulence experiments, have suggested that turbulence in the two fluids is nearly indistinguishable. However, in a recent paper in Physical Review Letters, M. S. Paoletti, M. E. Fisher, and D. P. Lathrop at the University of Maryland, and K. R. Sreenivasan of the Abdus Salam International Center for Theoretical Physics, Trieste, have probed turbulent superfluid flow at small enough scales to see a clear difference [1]. This was achieved by dressing the quantized vortices in turbulent superfluid liquid $^4$He with small clusters or particles of frozen hydrogen, formed by injecting a small amount of H$_2$ diluted with helium gas into the liquid helium, and then optically tracking their motion.
The difference is not only dramatic—strongly non-Gaussian distributions of velocity replacing the near-Gaussian statistics in classical homogeneous and isotropic turbulence—but it appears to also have a simple explanation. Reconnections between quantized vortices occurring at the microscopic level of the core can give rise to the same statistical signature that these authors have observed. Such events, established experimentally here as a robust feature, are necessary to fully explain turbulence in superfluids and fundamental to understanding how a pure superfluid like $^4$He at absolute zero can shed its turbulent energy in the complete absence of viscosity.
The reconnections we have in mind can be roughly described as follows: two vortex filaments that approach each other closely, attempt to cross, forming sharp cusps at the point of closest approach. At this point they can break apart, so that part of one vortex reconnects with part of the other, and so forth, significantly changing topology. Reconnections, which are a significant feature of superfluid turbulence, are not unique to it, and can occur in ordinary fluids [2], magnetized plasmas [3], and perhaps even between cosmic strings [4]. Reconnection between broken magnetic field lines in the sun is a relatively common occurrence leading to solar flares. However, there is a fundamental difference: classical reconnections are related to energy dissipation through viscosity, whereas in quantum fluids they take place due to a quantum stress acting at the scale of the core without changes of total energy [5].
Liquid $^4$He becomes superfluid below about 2.2 K, resulting from a type of Bose condensation as the de Broglie wavelength of the individual helium atoms becomes comparable to the average spacing between them. It then behaves as if it were composed of two intermingling and independent fluids: a superfluid with zero viscosity and zero entropy, and a viscous normal fluid, each having its own velocity field and density, where the ratio of superfluid to normal fluid density varies from 0 at the transition to 1 at absolute zero. From this model it follows that the superfluid component must also be irrotational (the curl of velocity must be zero) and this would have seemed to rule out turbulence altogether were it not for the peculiar vortices that are at the “core” of this story.
These vortices, first proposed by Onsager [6] and Feynman [7], can easily be seen [8] in solutions of the nonlinear Schrödinger equation (NLSE) for the condensate wave function of an ideal Bose gas. For these vortex solutions, a coherence length gives the distance over which the amplitude of the wave function rises radially from zero to some constant value. Since the superfluid density is given by the squared modulus of the wave function, this approximately defines the size of the vortex core, which for superfluid $^4$He is extremely small, on the order of one angstrom. The vortex circulation is obtained by integrating the superfluid velocity around a loop enclosing the superfluid-free core (thus avoiding the irrotational condition of the two fluid model) and the solitary stable value that results, namely Planck’s constant divided by the mass of a single helium atom, yields singly quantized line vortices [9].
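As a quick numerical aside (my own, not from the article), this single quantum of circulation evaluates to roughly $10^{-7}$ m$^2$/s:

```
# Back-of-the-envelope check of the quantum of circulation:
# Planck's constant divided by the mass of one helium-4 atom.
h = 6.62606896e-34      # Planck constant, J s (approximate)
m_He4 = 6.6464764e-27   # mass of a 4He atom, kg (approximate)

kappa = h / m_He4
print(f"quantum of circulation: {kappa:.3e} m^2/s")  # ~1.0e-7 m^2/s
```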
Feynman [7] suggested a model for turbulence in the superfluid, which he envisioned as a tangle of such quantized line vortices. But how could a collection of these vortices, having just one quanta of circulation each, resemble classical turbulence in a viscous fluid with all its swirls from large to small? More specifically, would the statistical properties of a turbulent superfluid match those of classical turbulence? For this, we start with the following picture for ordinary fluids: energy injected into a flow at some large scale is transferred without dissipation by a cascade process to smaller and smaller scales, until it is finally dissipated into heat at the smallest scale where viscosity becomes important.
In the 1940s, a dimensional analysis by Kolmogorov [10] corresponding to this picture of turbulence produced the well-known spectral energy density $E(k)=c\,\epsilon^{2/3}k^{-5/3}$ for wave numbers $k$ between those of energy injection and dissipation, where $c$ is a constant and $\epsilon$ the energy dissipation rate per unit mass. This spectral distribution should be independent of how the turbulence was generated in the first place. With this as background, Maurer and Tabeling [11] showed that for the turbulent flow between two counter-rotating discs, the same Kolmogorov energy spectrum with wave-number exponent $-5/3$ could be observed above and below the transition temperature in liquid $^4$He. Similar experiments with moving grids [12] also showed this quantum mimicry of classical turbulence. What is going on here?
These experiments had at least two things in common: the fraction of normal nonsuperfluid was small but not negligible, and the measurements were sensitive to scales much larger than that of individual vortex lines in the turbulent state. About the first, note that motion of a quantized vortex relative to the normal fluid produces a mutual friction force [13], coupling the two fluids at large scales (as well as providing dissipation at small ones), so it is not unthinkable then that both normal and superfluid act together to produce a Kolmogorov spectrum. This may take place [14] as a result of a partial or complete polarization, or local alignment of spin axes, of a large number of vortex filaments that mimics the range of eddies we see in classical flows. A simple example of such polarization under nonturbulent conditions is the well-known mimicking of solid body rotation in a rapidly rotating container filled with superfluid helium, which results from the alignment of a large array of quantized vortices all along the axis of rotation [8].
At the scale of individual vortices, Schwarz [15] developed numerical simulations of superfluid turbulence, based on the assumption that vortex filaments approaching each other too closely will reconnect (see the left panel of Fig. 1). Using entirely classical analysis, he was able to account for most of the experimental observations in the commonly studied thermal counterflow, a flow in which the normal fluid carries thermal energy away from a heater and a mass-conserving counter-current of superfluid is produced. Koplik and Levine [8], using the nonlinear Schrödinger equation, showed that Schwarz’ assumptions about reconnections were correct. Even this flow, which unlike the other experiments mentioned above, has no classical analog, also exhibits a classical decay when probed on length scales that are large compared to the average intervortex line spacing [16].
Vortex reconnections should be frequent in superfluid turbulence [17] and this is a fundamental difference from the classical case. At absolute zero, where there is neither viscosity nor mutual friction to dissipate energy, reconnections between vortices are expected [18] to lead to Kelvin waves along the cores (see right panel of Fig. 1), allowing the energy cascade to proceed beyond the level of the intervortex line spacing. Kelvin waves are defined as helical displacements of a rectilinear vortex line propagating along the core. When a vortex reconnection occurs, the cusps or kinks at the crossing point (see above) can relax into Kelvin waves and subsequent reconnections in the turbulent regime generate more waves whose nonlinear interactions lead to a wide spectrum of Kelvin waves extending to high frequencies. At the highest frequencies (wave numbers) these waves can generate phonons, thus dissipating the turbulent kinetic energy. The bridge between classical and quantum regimes of turbulence [19, 20], it seems, must be provided by numerous reconnection events.
In the work of Paoletti et al. [1], a thermal counterflow as described above is allowed to decay and then probed at the level of discrete vortex lines by illuminating the hydrogen particles moving with the vortices with a laser light sheet. Viewing the scattered light at right angles to the sheet with a CCD camera allows the motion of the vortices to be tracked (see Video 1). This relies on previous work showing that hydrogen tracers could be trapped on the vortices [21, 22]. Large velocities of recoil associated with reconnection events have recently been observed experimentally [23] and in simulations [24]. Paoletti et al. [1] are able to show that the observed, strongly non-Gaussian distributions of velocity due to these atypically large velocities are quantitatively consistent with the frequent reconnection of quantized line vortices. To the extent that turbulent flows are necessarily characterized by their statistical properties, this work provides a clear experimental foundation for a bridge connecting the classical and quantum turbulent regimes.
While insights from the well-studied turbulence problem in ordinary flows have allowed us to move forward in understanding quantum turbulence, the reverse might be said as well: the knowledge we gain there may well yield new insights into classical turbulence, a problem of immense interest in both engineering and large natural flows in fluids and plasmas, and for which a satisfying theoretical framework has yet to be found. Just as in the classical problem, experiments and simulations play a large role, and this leads to many challenges, especially as the temperature is lowered to a pure helium superflow regime. The work of Paoletti et al. [1] is a large step in this direction, allowing us to experimentally confirm our picture of how quantum turbulence proceeds. Going to very low temperatures will require different and more difficult techniques of generating the turbulence than these authors used (in the almost complete absence of the normal component) but ultimately the freely vibrating vortices there may give us the best opportunity to listen clearly to the strange and complex sounds emitted from an “instrument” whose quantum strings are plucked by reconnections.
### References
1. M. S. Paoletti, M. E. Fisher, K. R. Sreenivasan, and D. P. Lathrop, Phys. Rev. Lett. 101, 154501 (2008).
2. S. Kida, M. Takaoka, and F. Hussain, J. Fluid Mech. 230, 583 (1991).
3. E. R. Priest and T. G. Forbes, Magnetic Reconnection: MHD Theory and Applications (Cambridge University Press, 2007).
4. A. Hanany and K. Hashimoto, arXiv:hep-th/0501031v2 (2005).
5. M. Leadbeater, T. Winiecki, D. C. Samuels, C. F. Barenghi, and C. S. Adams, Phys. Rev. Lett. 86, 1410 (2001); C. F. Barenghi, Physica D 237, 2195 (2008).
6. R. J. Donnelly, Quantized Vortices in Helium II (Cambridge University Press, 1991).
7. R. P. Feynman, in Progress in Low Temperature Physics, Vol. 1, edited by C. J. Gorter (North-Holland, Amsterdam, 1955).
8. J. Koplik and H. Levine, Phys. Rev. Lett. 71, 1375 (1993).
9. W. F. Vinen, Proc. Roy. Soc. Lond. A Mat. 260, 218 (1961).
10. A. Kolmogorov, Dokl. Acad. Nauk SSSR 30, 301 (1941).
11. J. Maurer and P. Tabeling, Europhys. Lett. 43, 29 (1998).
12. S. R. Stalp, L. Skrbek, and R. J. Donnelly, Phys. Rev. Lett. 82, 4831 (1999).
13. H. E. Hall and W. F. Vinen, Proc. Roy. Soc. A238, 215 (1956).
14. W. F. Vinen and J. J. Niemela, J. Low Temp. Phys. 128, 167 (2002).
15. K. W. Schwarz, Phys. Rev. B 31, 5782 (1985).
16. L. Skrbek in Vortices and Turbulence at Very Low Temperatures, edited by C. F. Barenghi and Y. A. Sergeev (Springer, New York, 2008), p. 91.
17. M. Tsubota, T. Araki, and S. K. Nemirovskii, Phys. Rev. B 62, 11751 (2000).
18. B. V. Svistunov, Phys. Rev. B 52, 3647 (1995).
19. W. F. Vinen, J. Low Temp. Phys. 145, 7 (2006).
20. E. Kozik and B. V. Svistunov, arXiv:cond-mat/0703047v3 (2007).
21. D. R. Poole, C. F. Barenghi, Y. A. Sergeev, and W. F. Vinen, Phys. Rev. B 71, 064514 (2005).
22. G. P. Bewley, D. P. Lathrop, and K. R. Sreenivasan, Nature 441, 588 (2006).
23. G. P. Bewley, M. S. Paoletti, K. R. Sreenivasan and D. P. Lathrop, Proc. Natl. Acad. Sci. U.S.A. (to be published).
24. S. Nazarenko, J. Low Temp. Phys. 132, 1 (2003).
25. C. F. Barenghi, in Vortices and Turbulence at Very Low Temperatures, edited by C. F. Barenghi and Y. A. Sergeev (Springer, New York, 2008), p. 1.
### Highlighted article
#### Velocity Statistics Distinguish Quantum Turbulence from Classical Turbulence
M. S. Paoletti, Michael E. Fisher, K. R. Sreenivasan, and D. P. Lathrop
Published October 6, 2008 | PDF (free)
### Figures
M. S. Paoletti et al. [1]
Video 1: individual reconnection events are annotated by white circles and evidenced by groups of hydrogen clusters rapidly separating from one another. The clusters that are trapped on the vortices enable the authors to measure the separation as the vortices approach and retract from one another.
## Subgroup of lattice-ordered group
Let $H$ be a subgroup of a lattice-ordered group $G$. Suppose $H$ is a lattice-ordered group under the partial order induced from $G$, but $H$ is not a lattice-subgroup of $G$. For $a, b\in H$, let $c=\inf_H(a, b)$ and let $d=\inf_G(a, b)$. Is it necessarily the case that $c = d$ or not? Thanks
Isn't it true by definition? A lattice ordered group is an algebra with 4 operations: $\cdot, ^{-1}, \wedge, \vee$, so a lattice ordered subgroup is a subalgebra with respect to these 4 operations. – Mark Sapir Feb 18 at 3:50
Sometimes a lattice ordered group is defined as a partially ordered group were the partial order happens to be a lattice, i.e., any two elements have infimum and supremum. In this case the answer to this question is not so obvious. – Stefan Geschke Feb 18 at 12:56
@ Mark Sapir and Stefan Geschke, Thanks. I am trying to show $c=d$ but still I have not succeeded. – Rajnish Feb 18 at 21:24
As I understand, @Rajnish asks about a subgroup (not a lattice subgroup) which happens to be a lattice w.r. to the induced partial order, but which is NOT a sublattice of the whole group because the lattice operations in the subgroup are not the same as in the whole group. There are other algebraic structures (instead of a group) where this kind of a situation is common. – Wlodzimierz Holsztynski Feb 19 at 1:12
@ Wlodzimierz Thank you very much. I completely agree with the "a subgroup (not a lattice subgroup) which happens to be a lattice w.r. to the induced partial order". I am going to correct my question. – Rajnish Feb 19 at 5:21
## 1 Answer
No. A counterexample (essentially from Bourbaki's Algèbre VI.1 Exercice 12 a)) is the following.
We furnish $\mathbb{Z}$ with its usual structure of ordered group and consider the product of ordered groups $G=\mathbb{Z}^3$. This is a lattice, and for $(x,y,z),(u,v,w)\in G$ we have $$\textstyle\sup_G((x,y,z),(u,v,w))=(\sup(x,u),\sup(y,v),\sup(z,w)).$$ Now we consider the subgroup $H=\{(x,y,z)\in G\mid z=x+y\}$ of $G$, furnished with its induced structure of ordered group. This is also a lattice, as one readily checks that for $(x,y,x+y),(u,v,u+v)\in H$ we have $$\textstyle\sup_H((x,y,x+y),(u,v,u+v))=(\sup(x,u),\sup(y,v),\sup(x,u)+\sup(y,v)).$$ However, since $$\textstyle\sup_G((0,1,1),(1,0,1))=(1,1,1)\neq(1,1,2)=\sup_H((0,1,1),(1,0,1))$$ we see that $H$ is not a sublattice of $G$.
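For readers who want to check the arithmetic, here is a small Python verification (an addition to the original answer, not part of it):

```
# Compare the supremum taken in G = Z^3 (componentwise) with the
# supremum taken inside H = {(x, y, z) : z = x + y}.

def sup_G(p, q):
    return tuple(max(a, b) for a, b in zip(p, q))

def sup_H(p, q):
    # elements of H have the form (x, y, x + y)
    x, y = max(p[0], q[0]), max(p[1], q[1])
    return (x, y, x + y)

p, q = (0, 1, 1), (1, 0, 1)
print(sup_G(p, q))  # (1, 1, 1)
print(sup_H(p, q))  # (1, 1, 2)
```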
@Fred, Thank you. I did not get the idea used for the subgroup $H$. Why is it that on the third component $(x + y)\vee (u + v) = x\vee u + y\vee v$? – Rajnish Feb 20 at 21:18
Dear Rajnish, I do not understand your question. Please clarify. – Fred Rohrer Feb 20 at 21:41
@Fred, Thank you very much. I did not get the result for the third component in this line: $\sup_{H}((x,y,x+y), (u,v, u + v)) = (\sup(x,u), \sup(y,v), \sup(x,u) + \sup(y,v))$. – Rajnish Feb 20 at 22:11
First, $\sup(x,u)+\sup(y,v)$ is greater than $x+y$ and than $u+v$. Second, if $(a,b,a+b)\in H$ is greater than $(x,y,x+y)$ and $(u,v,u+v)$, then $\sup(x,u)$ is smaller than $a$ and $\sup(y,v)$ is smaller than $b$. Hence, $\sup(x,u)+\sup(y,v)$ is smaller than $a+b$. This yields the claim. (Note that the third component needs to be the sum of the first and the second in order for the triple to be an element of $H$.) – Fred Rohrer Feb 20 at 22:59
@ Fred, Thank you. I got it. – Rajnish Feb 20 at 23:55
# Thread:
1. ## Two positive integers are relatively prime iff their LCM = ab
Two positive integers $a,b\in\mathbb{Z}^+$ are relatively prime iff their $LCM(a,b)=ab$.
$LCM(a,b)=\frac{ab}{GCD(a,b)}$
1. Assume a and b are relatively prime.
$GCD(a,b)=1$
$LCM(a,b)=\frac{ab}{1}=ab$
2. Assume $LCM=ab$.
$LCM(a,b)=\frac{ab}{GCD(a,b)}\rightarrow GCD(a,b)=\frac{ab}{LCM(a,b)}$, but since $LCM(a,b)=ab$, we get $GCD(a,b)=1$.
2. Originally Posted by dwsmith
Two consecutive $\mathbb{Z}$ are relatively prime iff. their LCM=GCD.
$LCM(a,b)=\frac{ab}{GCD(a,b)}$
1. Assume a and b are relatively prime.
$GCD(a,b)=k\rightarrow k|a \ \mbox{and} \ k|b\rightarrow k|(\alpha a+\beta b)$
I am not sure if this is going in the right direction.
2. Assume LCM=GCD.
Not sure how to go this direction.
I don't understand the whole setup of this problem. Two consecutive integers are always relatively prime. And consider 3 and 4. Their LCM is 12 and their GCD is 1. Are you sure you copied it out right?
3. Originally Posted by undefined
I don't understand the whole setup of this problem. Two consecutive integers are always relatively prime. And consider 3 and 4. Their LCM is 12 and their GCD is 1. Are you sure you copied it out right?
I edited the original post. I had consecutive integers on the brain.
4. Originally Posted by dwsmith
I edited the original post. I had consecutive integers on the brain.
Hmm, I'm still not seeing it. My above example of 3 and 4 still applies... when two integers are relatively prime, their GCD is 1, and the only way to have LCM equal to 1 is if both integers are 1. Am I missing something?
5. Originally Posted by undefined
Hmm, I'm still not seeing it. My above example of 3 and 4 still applies... when two integers are relatively prime, their GCD is 1, and the only way to have LCM equal to 1 is if both integers are 1. Am I missing something?
Ok I re-edited it.
6. Originally Posted by dwsmith
Ok I re-edited it.
Ohh okay it makes sense now.
My first thought is to use prime factorisations. I think the most convenient notation for prime factorisation is to use an infinite product and allow exponents to equal 0. This has the added advantage that we don't have to treat 1 as a special case.
$a=p_1^{\alpha_1}p_2^{\alpha_2}\dots=\displaystyle\prod_{i=1}^\infty p_i^{\alpha_i}$
$b=p_1^{\beta_1}p_2^{\beta_2}\dots=\displaystyle\prod_{i=1}^\infty p_i^{\beta_i}$
where for simplicity we let $p_1=2, p_2=3, p_3=5, \dots$
We have that
$\text{lcm}(a,b)=p_1^{\max(\alpha_1,\beta_1)}p_2^{\max(\alpha_2,\beta_2)}\dots=\displaystyle \prod_{i=1}^\infty p_i^{\max(\alpha_i,\beta_i)}$
We also have that
$ab = \displaystyle \prod_{i=1}^\infty p_i^{\alpha_i+\beta_i}$
So we have $\text{lcm}(a,b)=ab \iff \forall i\in\mathbb{Z}, i>0:\max(\alpha_i,\beta_i)=\alpha_i+\beta_i$, which holds if and only if, for each $i$, either $\alpha_i=0$ or $\beta_i=0$ (or both). Can you show how this last condition is equivalent to $a$ and $b$ being relatively prime?
(Fixed a small typo)
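As a sanity check on the equivalence (my own addition, not part of the original thread), a brute-force test over small integers:

```
# Verify: for positive integers a, b we have lcm(a, b) = a*b
# exactly when gcd(a, b) = 1.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

for a in range(1, 50):
    for b in range(1, 50):
        assert (lcm(a, b) == a * b) == (gcd(a, b) == 1)
print("verified for 1 <= a, b < 50")
```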
7. Originally Posted by undefined
Ohh okay it makes sense now.
My first thought is to use prime factorisations. I think the most convenient notation for prime factorisation is to use an infinite product and allow exponents to equal 0. This has the added advantage that we don't have to treat 1 as a special case.
$a=p_1^{\alpha_1}p_2^{\alpha_2}\dots=\displaystyle\prod_{i=1}^\infty p_i^{\alpha_i}$
$b=p_1^{\beta_1}p_2^{\beta_2}\dots=\displaystyle\prod_{i=1}^\infty p_i^{\beta_i}$
I don't understand why we would say a and b are infinite products and if they are to the 0 power, wouldn't that make ever p then =1
8. Originally Posted by dwsmith
I don't understand why we would say a and b are infinite products and if they are to the 0 power, wouldn't that make ever p then =1
Well consider the integer 126. The typical way to express the prime factorisation is
$126=2^1\cdot3^2\cdot7^1$
But if we instead write
$126=2^1\cdot3^2\cdot5^0\cdot7^1\cdot11^0\cdot13^0\cdots$
this gives us flexibility when comparing the prime factorisations of two different integers, because we can just match up each corresponding index without having to use notation to indicate the highest non-zero index, etc.
9. Originally Posted by undefined
Well consider the integer 126. The typical way to express the prime factorisation is
$126=2^1\cdot3^2\cdot7^1$
But if we instead write
$126=2^1\cdot3^2\cdot5^0\cdot7^1\cdot11^0\cdot13^0\cdots$
this gives us flexibility when comparing the prime factorisations of two different integers, because we can just match up each corresponding index without having to use notation to indicate the highest non-zero index, etc.
I updated my first post with my work. Can you let me know what you think? Thanks.
10. Well if we're allowed to use that $\text{lcm}(a,b)=\displaystyle\frac{ab}{\gcd(a,b)}$ then the $\Leftarrow$ direction also follows easily.
Assume $\text{lcm}(a,b)=ab$
Then $\text{lcm}(a,b)=\displaystyle\frac{ab}{\gcd(a,b)}$
$\Rightarrow ab=\displaystyle\frac{ab}{\gcd(a,b)}$
$\Rightarrow \gcd(a,b)=1$
Edit: Ah, you just changed your first post and added another post saying you changed it. Looks good.
# Permittivity
A dielectric medium showing orientation of charged particles creating polarization effects. Such a medium can have a higher ratio of electric flux to charge (permittivity) than empty space
In electromagnetism, absolute permittivity is the measure of the resistance that is encountered when forming an electric field in a medium. In other words, permittivity is a measure of how an electric field affects, and is affected by, a dielectric medium. The permittivity of a medium describes how much electric field (more correctly, flux) is 'generated' per unit charge in that medium. More electric flux exists in a medium with a high permittivity (per unit charge) because of polarization effects. Permittivity is directly related to electric susceptibility, which is a measure of how easily a dielectric polarizes in response to an electric field. Thus, permittivity relates to a material's ability to transmit (or "permit") an electric field.
In SI units, permittivity ε is measured in farads per meter (F/m); electric susceptibility χ is dimensionless. They are related to each other through
$\varepsilon = \varepsilon_{\text{r}} \varepsilon_0 = (1+\chi)\varepsilon_0$
where εr is the relative permittivity of the material, and ε0 ≈ 8.854187817 × 10⁻¹² F/m is the vacuum permittivity.
## Explanation
In electromagnetism, the electric displacement field D represents how an electric field E influences the organization of electrical charges in a given medium, including charge migration and electric dipole reorientation. Its relation to permittivity in the very simple case of linear, homogeneous, isotropic materials with "instantaneous" response to changes in electric field is
$\mathbf{D}=\varepsilon \mathbf{E}$
where the permittivity ε is a scalar. If the medium is anisotropic, the permittivity is a second rank tensor.
In general, permittivity is not a constant, as it can vary with the position in the medium, the frequency of the field applied, humidity, temperature, and other parameters. In a nonlinear medium, the permittivity can depend on the strength of the electric field. Permittivity as a function of frequency can take on real or complex values.
In SI units, permittivity is measured in farads per meter (F/m or A²·s⁴·kg⁻¹·m⁻³). The displacement field D is measured in units of coulombs per square meter (C/m²), while the electric field E is measured in volts per meter (V/m). D and E describe the interaction between charged objects. D is related to the charge densities associated with this interaction, while E is related to the forces and potential differences.
## Vacuum permittivity
Main article: vacuum permittivity
The vacuum permittivity ε0 (also called permittivity of free space or the electric constant) is the ratio D/E in free space. It also appears in the Coulomb force constant, ke = 1/(4πε0).
Its value is[1]
$\begin{align} \varepsilon_0 & \stackrel{\mathrm{def}}{=}\ \frac{1}{c_0^2\mu_0} = \frac{1}{35950207149.4727056\pi}\ \frac{\text{F}}{\text{m}} \approx 8.8541878176\ldots\times 10^{-12}\ \text{F/m} \end{align}$
where
c0 is the speed of light in free space,[2]
µ0 is the vacuum permeability.
Constants c0 and μ0 are defined in SI units to have exact numerical values, shifting responsibility of experiment to the determination of the meter and the ampere.[3] (The approximation in the second value of ε0 above stems from π being an irrational number.)
## Relative permittivity
Main article: relative permittivity
The linear permittivity of a homogeneous material is usually given relative to that of free space, as a relative permittivity εr (also called dielectric constant, although this sometimes only refers to the static, zero-frequency relative permittivity). In an anisotropic material, the relative permittivity may be a tensor, causing birefringence. The actual permittivity is then calculated by multiplying the relative permittivity by ε0:
$\varepsilon = \varepsilon_{\text{r}} \varepsilon_0 = (1+\chi)\varepsilon_0,$
where χ (frequently written χe) is the electric susceptibility of the material.
The susceptibility is defined as the constant of proportionality (which may be a tensor) relating an electric field E to the induced dielectric polarization density P such that
$\mathbf{P} = \varepsilon_0\chi\mathbf{E},$
where ε0 is the electric permittivity of free space.
The susceptibility of a medium is related to its relative permittivity εr by
$\chi = \varepsilon_{\text{r}} - 1.$
So in the case of a vacuum,
$\chi = 0.$
The susceptibility is also related to the polarizability of individual particles in the medium by the Clausius-Mossotti relation.
The electric displacement D is related to the polarization density P by
$\mathbf{D} = \varepsilon_0\mathbf{E} + \mathbf{P} = \varepsilon_0 (1+\chi) \mathbf{E} = \varepsilon_{\text{r}} \varepsilon_0 \mathbf{E}.$
The permittivity ε and permeability µ of a medium together determine the phase velocity v = c/n of electromagnetic radiation through that medium:
$\varepsilon \mu = \frac{1}{v^2}.$
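As a small worked example (illustrative numbers only, not from the article), the relations above can be evaluated directly; the value εr = 2.1 below is an assumed, roughly silica-like relative permittivity for a non-magnetic material:

```
# Relate relative permittivity, susceptibility, and phase velocity
# v = 1/sqrt(eps * mu) for a non-magnetic material (mu_r = 1).
from math import sqrt, pi

eps0 = 8.8541878176e-12   # vacuum permittivity, F/m
mu0 = 4 * pi * 1e-7       # vacuum permeability, H/m
eps_r = 2.1               # assumed relative permittivity

chi = eps_r - 1           # electric susceptibility
eps = eps_r * eps0        # absolute permittivity
v = 1 / sqrt(eps * mu0)   # phase velocity
print(f"chi = {chi}, v = {v:.3e} m/s, n = {sqrt(eps_r):.3f}")
```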
## Dispersion and causality
In general, a material cannot polarize instantaneously in response to an applied field, and so the more general formulation as a function of time is
$\mathbf{P}(t)=\varepsilon_0 \int_{-\infty}^t \chi(t-t') \mathbf{E}(t') \, dt'.$
That is, the polarization is a convolution of the electric field at previous times with time-dependent susceptibility given by χ(Δt). The upper limit of this integral can be extended to infinity as well if one defines χ(Δt) = 0 for Δt < 0. An instantaneous response corresponds to Dirac delta function susceptibility χ(Δt) = χ δ(Δt).
It is more convenient in a linear system to take the Fourier transform and write this relationship as a function of frequency. Because of the convolution theorem, the integral becomes a simple product,
$\mathbf{P}(\omega)=\varepsilon_0 \chi(\omega) \mathbf{E}(\omega).$
This frequency dependence of the susceptibility leads to frequency dependence of the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material.
Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e. χ(Δt) = 0 for Δt < 0), a consequence of causality, imposes Kramers–Kronig constraints on the susceptibility χ(ω).
### Complex permittivity
A dielectric permittivity spectrum over a wide range of frequencies. ε′ and ε″ denote the real and the imaginary part of the permittivity, respectively. Various processes are labeled on the image: ionic and dipolar relaxation, and atomic and electronic resonances at higher energies.[4]
As opposed to the response of a vacuum, the response of normal materials to external fields generally depends on the frequency of the field. This frequency dependence reflects the fact that a material's polarization does not respond instantaneously to an applied field. The response must always be causal (arising after the applied field) which can be represented by a phase difference. For this reason, permittivity is often treated as a complex function of the (angular) frequency of the applied field ω: $\varepsilon \rightarrow \widehat{\varepsilon}(\omega)$ (since complex numbers allow specification of magnitude and phase). The definition of permittivity therefore becomes
$D_0 e^{-i \omega t} = \widehat{\varepsilon}(\omega) E_0 e^{-i \omega t},$
where
D0 and E0 are the amplitudes of the displacement and electrical fields, respectively,
i is the imaginary unit, i2 = −1.
The response of a medium to static electric fields is described by the low-frequency limit of permittivity, also called the static permittivity εs (also εDC ):
$\varepsilon_{\text{s}} = \lim_{\omega \rightarrow 0} \widehat{\varepsilon}(\omega).$
At the high-frequency limit, the complex permittivity is commonly referred to as ε∞. At the plasma frequency and above, dielectrics behave as ideal metals, with electron gas behavior. The static permittivity is a good approximation for alternating fields of low frequencies, and as the frequency increases a measurable phase difference δ emerges between D and E. The frequency at which the phase shift becomes noticeable depends on temperature and the details of the medium. For moderate field strengths (E0), D and E remain proportional, and
$\widehat{\varepsilon} = \frac{D_0}{E_0} = |\varepsilon|e^{i\delta}.$
Since the response of materials to alternating fields is characterized by a complex permittivity, it is natural to separate its real and imaginary parts, which is done by convention in the following way:
$\widehat{\varepsilon}(\omega) = \varepsilon'(\omega) + i\varepsilon''(\omega) = \frac{D_0}{E_0} \left( \cos\delta + i\sin\delta \right).$
where
ε′ is the real part of the permittivity, which is related to the stored energy within the medium;
ε″ is the imaginary part of the permittivity, which is related to the dissipation (or loss) of energy within the medium.
It is important to realize that the choice of sign for time-dependence, exp(-iωt), dictates the sign convention for the imaginary part of permittivity. The signs used here correspond to those commonly used in physics, whereas for the engineering convention one should reverse all imaginary quantities.
The complex permittivity is usually a complicated function of frequency ω, since it is a superimposed description of dispersion phenomena occurring at multiple frequencies. The dielectric function ε(ω) must have poles only for frequencies with positive imaginary parts, and therefore satisfies the Kramers–Kronig relations. However, in the narrow frequency ranges that are often studied in practice, the permittivity can be approximated as frequency-independent or by model functions.
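For instance, a single-pole Debye relaxation is one such model function; the sketch below (my own, with assumed, roughly water-like parameters) evaluates its real and imaginary parts at a few frequencies, using the physics sign convention exp(−iωt) adopted above, so that loss shows up as a positive imaginary part:

```
# Debye relaxation: eps(w) = eps_inf + (eps_s - eps_inf) / (1 - i w tau)
from math import pi

eps_inf, eps_s, tau = 5.0, 80.0, 8.3e-12   # assumed values; tau in seconds

def debye(omega):
    return eps_inf + (eps_s - eps_inf) / (1 - 1j * omega * tau)

for f in (1e8, 1e10, 1e12):                # frequencies in Hz
    e = debye(2 * pi * f)
    print(f"f = {f:.0e} Hz: eps' = {e.real:8.2f}, eps'' = {e.imag:8.2f}")
```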
At a given frequency, the imaginary part of $\widehat{\varepsilon}$ leads to absorption loss if it is positive (in the above sign convention) and gain if it is negative. More generally, the imaginary parts of the eigenvalues of the anisotropic dielectric tensor should be considered.
In the case of solids, the complex dielectric function is intimately connected to band structure. The primary quantity that characterizes the electronic structure of any crystalline material is the probability of photon absorption, which is directly related to the imaginary part of the optical dielectric function ε(ω). The optical dielectric function is given by the fundamental expression:[5]
$\varepsilon(\omega)=1+\frac{8\pi^2e^2}{m^2}\sum_{c,v}\int W_{c,v}(E) \left[ \varphi(\hbar \omega - E)-\varphi(\hbar \omega +E) \right] \, dE.$
In this expression, Wc,v(E) represents the product of the Brillouin zone-averaged transition probability at the energy E with the joint density of states,[6][7] Jc,v(E); φ is a broadening function, representing the role of scattering in smearing out the energy levels.[8] In general, the broadening is intermediate between Lorentzian and Gaussian;[9][10] for an alloy it is somewhat closer to Gaussian because of strong scattering from statistical fluctuations in the local composition on a nanometer scale.
### Tensorial permittivity
According to the Drude model of magnetized plasma, a more general expression which takes into account the interaction of the carriers with an alternating electric field at millimeter and microwave frequencies in an axially magnetized semiconductor requires the expression of the permittivity as a non-diagonal tensor.[11] (see also Electro-gyration).
$\mathbf{D}(\omega) = \begin{pmatrix} \varepsilon_{1} & -i \varepsilon_{2} & 0\\ i \varepsilon_{2} & \varepsilon_{1} & 0\\ 0 & 0 & \varepsilon_{z} \end{pmatrix} \mathbf{E}(\omega)$
If $\varepsilon_{2}$ vanishes, then the tensor is diagonal but not proportional to the identity, and the medium is said to be uniaxial.
### Classification of materials
Materials can be classified according to their permittivity and conductivity, σ. Materials with a large amount of loss inhibit the propagation of electromagnetic waves. In this case, generally when σ/(ωε′) ≫ 1, we consider the material to be a good conductor. Dielectrics are associated with lossless or low-loss materials, where σ/(ωε′) ≪ 1. Those that do not fall under either limit are considered to be general media. A perfect dielectric is a material that has no conductivity, thus exhibiting only a displacement current. Therefore it stores and returns electrical energy as if it were an ideal capacitor.
### Lossy medium
In the case of a lossy medium, i.e. when the conduction current is not negligible, the total current density flowing is:
$J_\text{tot} = J_{\text{c}} + J_{\text{d}} = \sigma E - i \omega \varepsilon' E = -i \omega \widehat{\varepsilon} E$
where
σ is the conductivity of the medium;
ε′ is the real part of the permittivity;
$\widehat{\varepsilon}$ is the complex permittivity.
The size of the displacement current is dependent on the frequency ω of the applied field E; there is no displacement current in a constant field.
In this formalism, the complex permittivity is defined as:[12]
$\widehat{\varepsilon} = \varepsilon' + i \frac{\sigma}{\omega}$
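Putting the two formulas together, here is a short sketch (my own; the material values are assumed, roughly seawater-like and glass-like) that builds the complex permittivity and applies the σ/(ωε′) test from the classification above:

```
# Classify a material as conductor / low-loss dielectric / general medium
# and form its complex permittivity eps' + i*sigma/omega.
from math import pi

eps0 = 8.8541878176e-12

def classify(sigma, eps_r, f):
    omega = 2 * pi * f
    ratio = sigma / (omega * eps_r * eps0)
    eps_hat = eps_r * eps0 + 1j * sigma / omega   # complex permittivity
    if ratio > 100:
        label = "good conductor"
    elif ratio < 0.01:
        label = "low-loss dielectric"
    else:
        label = "general lossy medium"
    return label, ratio, eps_hat

print(classify(4.0, 80, 1e6)[:2])     # seawater-like values at 1 MHz
print(classify(1e-12, 5, 1e6)[:2])    # dry-glass-like values at 1 MHz
```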
In general, the absorption of electromagnetic energy by dielectrics is covered by a few different mechanisms that influence the shape of the permittivity as a function of frequency:
• First, are the relaxation effects associated with permanent and induced molecular dipoles. At low frequencies the field changes slowly enough to allow dipoles to reach equilibrium before the field has measurably changed. For frequencies at which dipole orientations cannot follow the applied field because of the viscosity of the medium, absorption of the field's energy leads to energy dissipation. The mechanism of dipoles relaxing is called dielectric relaxation and for ideal dipoles is described by classic Debye relaxation.
• Second are the resonance effects, which arise from the rotations or vibrations of atoms, ions, or electrons. These processes are observed in the neighborhood of their characteristic absorption frequencies.
The above effects often combine to cause non-linear effects within capacitors. For example, dielectric absorption refers to the inability of a capacitor that has been charged for a long time to completely discharge when briefly discharged. Although an ideal capacitor would remain at zero volts after being discharged, real capacitors will develop a small voltage, a phenomenon that is also called soakage or battery action. For some dielectrics, such as many polymer films, the resulting voltage may be less than 1-2% of the original voltage. However, it can be as much as 15 - 25% in the case of electrolytic capacitors or supercapacitors.
### Quantum-mechanical interpretation
In terms of quantum mechanics, permittivity is explained by atomic and molecular interactions.
At low frequencies, molecules in polar dielectrics are polarized by an applied electric field, which induces periodic rotations. For example, at the microwave frequency, the microwave field causes the periodic rotation of water molecules, sufficient to break hydrogen bonds. The field does work against the bonds and the energy is absorbed by the material as heat. This is why microwave ovens work very well for materials containing water. There are two maxima of the imaginary component (the absorptive index) of water, one at the microwave frequency, and the other at far ultraviolet (UV) frequency. Both of these resonances are at higher frequencies than the operating frequency of microwave ovens.
At moderate frequencies, the energy is too high to cause rotation, yet too low to affect electrons directly, and is absorbed in the form of resonant molecular vibrations. In water, this is where the absorptive index starts to drop sharply, and the minimum of the imaginary permittivity is at the frequency of blue light (optical regime).
At high frequencies (such as UV and above), molecules cannot relax, and the energy is purely absorbed by atoms, exciting electron energy levels. Thus, these frequencies are classified as ionizing radiation.
While carrying out a complete ab initio (that is, first-principles) modelling is now computationally possible, it has not been widely applied yet. Thus, a phenomenological model is accepted as being an adequate method of capturing experimental behaviors. The Debye model and the Lorentz model use a 1st-order and 2nd-order (respectively) lumped system parameter linear representation (such as an RC and an LRC resonant circuit).
## Measurement
Main article: dielectric spectroscopy
The dielectric constant of a material can be found by a variety of static electrical measurements. The complex permittivity is evaluated over a wide range of frequencies by using different variants of dielectric spectroscopy, covering nearly 21 orders of magnitude from 10⁻⁶ to 10¹⁵ Hz. Also, by using cryostats and ovens, the dielectric properties of a medium can be characterized over an array of temperatures. In order to study systems for such diverse excitation fields, a number of measurement setups are used, each adequate for a special frequency range.
Various microwave measurement techniques are outlined in Chen et al.[13] Typical errors for the Hakki-Coleman method employing a puck of material between conducting planes are about 0.3%.[14]
• Low-frequency time domain measurements (10⁻⁶–10³ Hz)
• Low-frequency frequency domain measurements (10⁻⁵–10⁶ Hz)
• Reflective coaxial methods (10⁶–10¹⁰ Hz)
• Transmission coaxial method (10⁸–10¹¹ Hz)
• Quasi-optical methods (10⁹–10¹⁰ Hz)
• Terahertz time-domain spectroscopy (10¹¹–10¹³ Hz)
• Fourier-transform methods (10¹¹–10¹⁵ Hz)
At infrared and optical frequencies, a common technique is ellipsometry. Dual polarisation interferometry is also used to measure the complex refractive index for very thin films at optical frequencies.
## References
1. Current practice of standards organizations such as NIST and BIPM is to use c0, rather than c, to denote the speed of light in vacuum according to ISO 31. In the original Recommendation of 1983, the symbol c was used for this purpose. See NIST Special Publication 330, Appendix 2, p. 45.
2. Peter Y. Yu, Manuel Cardona (2001). Fundamentals of Semiconductors: Physics and Materials Properties. Berlin: Springer. p. 261. ISBN 3-540-25470-6.
3. José García Solé, Luisa E. Bausá (2001). An Introduction to the Optical Spectroscopy of Inorganic Solids. Wiley. Appendix A1, p. 263. ISBN 0-470-86885-6.
4. John H. Moore, Nicholas D. Spencer (2001). Encyclopedia of chemical physics and physical chemistry. Taylor and Francis. p. 105. ISBN 0-7503-0798-6.
5. Solé, José García; Bausá, Louisa E; Jaque, Daniel (2005-03-22). Solé and Bausa. p. 10. ISBN 3-540-25470-6.
6. Hartmut Haug, Stephan W. Koch (1994). Quantum Theory of the Optical and Electronic Properties of Semiconductors. World Scientific. p. 196. ISBN 981-02-1864-8.
7. Manijeh Razeghi (2006). Fundamentals of Solid State Engineering. Birkhauser. p. 383. ISBN 0-387-28152-5.
8. E. Prati (2003). "Propagation in gyroelectromagnetic guiding systems", J. of Electr. Wav. and Appl. 17, 8, 1177.
9. John S. Seybold (2005) Introduction to RF propagation. 330 pp, eq.(2.6), p.22.
10. Linfeng Chen, V. V. Varadan, C. K. Ong, Chye Poh Neo (2004). "Microwave theory and techniques for materials characterization". Microwave electronics. Wiley. p. 37. ISBN 0-470-84492-2.
11. Mailadil T. Sebastian (2008). Dielectric Materials for Wireless Communication. Elsevier. p. 19. ISBN 0-08-045330-9.
# Need help with this linear equation
Trying to figure out what went wrong with the way I solved this
So, a linear function is given. It's $g(x) = 10x + 5$
I know that the average rate of change is $\frac{g(a+h) - g(a)}{(a+h) - a}$.
I came up with $\frac{[10(a+h) +2] - [10a + 5]}{h}$
I then simplified that down and came up with 20, but apparently that's wrong?
What did I do wrong?
Edit: Ah, I forgot to say I'm trying to find the average rate of change of the function between $x = a$ and $x = a + h$.
It is not clear at all what you are asking. Your title says linear equations, but you seem to be talking about some kind of difference quotient? Also, if this is homework, please indicate it in the tags. – Daniel Littlewood Sep 23 '12 at 19:18
Yes, it's wrong. $$\dfrac{(10 (a+h) + 5) - (10 a + 5)}{h} = 10$$ – Robert Israel Sep 23 '12 at 19:19
Note $g(a+h)=10(a+h)+5$, not $10(a+h)+2$ as you have. Still, with what you have, your simplification went awry. You should not have obtained "$20$". Try going through it again (with the proper value of $g(a+h)$, of course)... – David Mitra Sep 23 '12 at 19:23
## 1 Answer
$\frac{g(a+h)-g(a)}{(a+h)-a}=\frac{10a+10h+5-10a-5}{h}=\frac{10h}{h}=10$ shows that any difference quotient is $10$.
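(A quick symbolic check of the same computation, added here for illustration:)

```
# Confirm that the average rate of change of g(x) = 10x + 5 is 10
# for every a and h.
import sympy as sp

a, h = sp.symbols("a h")
g = lambda x: 10 * x + 5
print(sp.simplify((g(a + h) - g(a)) / ((a + h) - a)))  # prints 10
```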
Figured what I did wrong; didn't distribute a minus sign. Simple error, thanks guy! – Brandt Sep 23 '12 at 19:26
# In the Paillier cryptosystem, is there a method to judge whether an encrypted number is less than 0 (without the private key)
Or, is there a cryptosystem that is both order-preserving and additively homomorphic?
A fully order-preserving cryptosystem would have a few issues.. – Thomas Jan 23 at 15:46
The message space of the Paillier Cryptosystem is $\mathbb{Z}_N$. So, in fact, yes it is possible to judge that, because they never are. (But that is probably not what you actually wanted to know.) – Maeher Jan 23 at 15:59
Might be possible to have someone prove a predicate about a ciphertext if they were the one who encrypted the value to get the given ciphertext. Not sure if that meets your needs. – mikeazo♦ Jan 24 at 0:46
Thank you all. So it seems that any order-preserving and additively homomorphic public key cryptosystem is broken, as shown by poncho... – phan Jan 24 at 1:35
## 2 Answers
In Paillier, if it were possible to determine whether an encrypted number is less than 0 (that is, is equivalent modulo N to a value $x$ where $N/2 < x < N$), then it would be possible to decrypt arbitrary encrypted values with only the public key. That is, if someone found such a method, they will have broken Paillier as a public key system.
The details are fairly straightforward; to compare an encrypted value $E(x)$ against a known value $y$, you compute $E(-y)$, use the homomorphic property to compute the value $E(x-y)$, and then use your black box to determine from this value whether $x-y<0$.
Hence, you can use binary searching, using the above method as the comparison method, to recover the value $x$ given $E(x)$, using $O(\log N)$ probes.
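To make the argument concrete, here is a toy sketch (entirely illustrative, with an insecure key size; the "oracle" below cheats by decrypting, standing in for the hypothetical black box):

```
# Demonstrate poncho's attack: given a "is the plaintext negative?"
# oracle, binary search recovers a Paillier plaintext in O(log N) calls.
import math, random

# tiny (insecure!) Paillier key, just for illustration
p, q = 2011, 2003
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)

def enc(m):
    r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

def add(c1, c2):             # homomorphic addition of plaintexts
    return c1 * c2 % n2

def oracle_is_negative(c):   # the hypothetical black box (cheats via dec)
    return dec(c) > n // 2   # "negative" = upper half of Z_N

secret = 123456
cx = enc(secret)

lo, hi = 0, n // 2           # assume 0 <= x < n/2
while lo < hi:               # binary search via the comparison method
    mid = (lo + hi) // 2
    c_diff = add(cx, enc(n - mid - 1))   # encrypts x - (mid + 1) mod n
    if oracle_is_negative(c_diff):       # i.e. x <= mid
        hi = mid
    else:
        lo = mid + 1
print(lo == secret)  # True
```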
You are right. Thank you. However, is there any symmetric encryption system that matches these two properties? – phan Jan 24 at 1:40
@phan I don't know of any additively homomorphic symmetric ciphers, but if such a cipher existed, it would not be IND-CPA secure. The attacker could use the order property to work out which of the two ciphertexts belongs to any one plaintext. – Thomas Jan 24 at 2:08
@Thomas Yes, any order-preserving systems I know lack semantic security, as do any deterministic encryption schemes like AES ... Hence I think it is acceptable for some applications ... – phan Jan 24 at 2:19
As mentioned above, this is not possible in a direct way. However, there exists a Zero Knowledge Proof that may do the job. It proves that a ciphertext encrypts one out of a publicly known set of plaintext messages. If these known messages only contain values greater than or equal to 0, this may be what you are looking for, but unfortunately the message and computation overhead is quite high for large sets.
Have a look at this pdf, page 17 "Proof that an encrypted message lies in a given set of messages".
-
Thank you for the interesting paper! However, if what I understand is correct, this protocol can be executed for each encrypted number only a few times; otherwise it serves as the binary-search-enabling black box mentioned by poncho... – phan Jan 27 at 15:32
As far as I can judge, poncho's counter-example makes different assumptions. In his case the black box is available to anyone and does not require the cooperation of any other party, so an attacker can query the box with any input he likes. However, in the case of the Zero Knowledge Proof (ZKP), the proof stringently requires the cooperation of the secret-holder. The basic idea of any ZKP is that a prover applying a ZKP can never reveal more to a verifier than what he has already stated. – Thomas Lieven Jan 29 at 15:26
# Tagged Questions
### Finite elements exercise
We consider in $\mathbb{R}^2$ the set of points $$\{M_1(-1,1),M_2(0,1), M_3(2,1),M_4(-1,0),M_5(1,0),M_6(2,0)\}$$ Let $\Omega$ be a rectangular structure consisting of the vertices $\{M_4(-1,0),M_6(2,0), ...
### Density of finite element functions in $W^{1,p}(\Omega)$
I would like to know if the following statement is true: For each $u \in W^{1,p}(\Omega)$ and $\varepsilon > 0$ there exists a piecewise affine function $u_{\varepsilon}$ and a triangulation of ...
### Discrete Sobolev space of $R^n$ valued maps
Can someone give me a reference, or any idea, for how to extend the discrete Sobolev space framework defined for scalar-valued maps to the space of vector-valued maps. Let's say $f:\Omega ...
# Why is this incorrect (regarding differentiating the natural log)?
We must differentiate the following:
$$[f(x) = \ln (3x^2 +3)]\space '$$
Why is this incorrect? I am just using the product rule:
$[f(x) = \ln (3x^2 +3)]\space ' = \dfrac{1}{x} \times (3x^2 + 3) + \ln(6x) = \dfrac{3x^2 +3}{x} + \ln(6x)$
My book gives the following answer:
$$\dfrac{6x}{3x^2 +3}$$
-
You need to use the chain rule, not the product rule. – Henry T. Horton Jan 21 at 23:06
It's logarithm of $3x^2+3$, not logarithm times $3x^2+3$. – Gerry Myerson Jan 21 at 23:07
@GerryMyerson But even if we remove the $\ln(6x)$ we still have a totally different answer. – Yoshi Jan 21 at 23:09
Yoshi, you miss my point, but I think you have got it now from the answers that have been posted. – Gerry Myerson Jan 21 at 23:49
## 2 Answers
There is no product here; you should be using the chain rule.
The start of your answer makes it look like you were differentiating $\log(x) \cdot (3x^2 + 3)$ instead of the given function, but the latter part of your attempt clarifies that you are just getting tangled up.
(Also, it's a bit strange that your book didn't reduce its final answer, but it's still correct.)
More precisely:
$[f(g(x))]' = f'(g(x)) \cdot g'(x)$.
In this case, $$[\log(3x^2 + 3)]' = \frac{1}{3x^2 +3} \cdot (3x^2 + 3)' = \frac{6x}{3x^2 + 3}$$ as your book suggests. Of course, we could divide top and bottom by $3$ to simplify our answer to:
$$\frac{2x}{x^2 + 1}$$
Going back to the original function, note that $\log(3x^2 + 3) = \log(3) + \log(x^2 + 1)$. If you now differentiate the function in this form, the derivative of the constant term $\log(3)$ will be $0$, and you will end up with the same answer as above (already in simplified form).
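For readers who want to double-check the algebra, a quick sketch with SymPy (assuming it is available) confirms both the chain-rule derivative and the reduced form:

```python
# A quick SymPy check of the chain-rule derivative above.
import sympy as sp

x = sp.symbols('x')
f = sp.log(3*x**2 + 3)
df = sp.diff(f, x)
print(df)                                        # 6*x/(3*x**2 + 3)
assert sp.simplify(df - 2*x/(x**2 + 1)) == 0     # matches the reduced form
```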
-
This is not a product, it is a composition. This is a common misconception for students when reading functional notation. The expression in parenthesis in functional notation is not multiplying its function-name, it's being evaluated inside the function.
You are interpreting it as if it were $\ln (x)\times (3x^2+3)$, but it is $3x^2+3$ composed with $\ln (x)$.
$\ln$ can never be written alone, it always has to be read with its input inside the parentheses. Likewise, $\sin(3x)$ is not $(\sin)$ times $3x$, it is $\sin$ evaluated at $3x$.
If the original function were $\ln(x) \times (3x^2+3)$, then you would be correctly applying the product rule, but you would have had to write this:
$$=\frac{1}{x} \times (3x^2+3)+\ln(x)\times 6x$$
You can see that the difference between what you wrote and this is, again, that your $\ln$ had been written without anything for it to be evaluated at.
-
# Thread:
1. ## The dummy variable
$\int_a^b{g(x)} dt = b*g(x) - a*g(x)$
When I apply the rule to the following equation--
$\frac{\pi}{2x}=\int_{0}^{\frac{\pi}{2}}\frac{\pi\left(1+\tanh\left(\frac{-\ln{\sin\theta}}{2} \right ) \right )}{4x}\, d\theta$
I get:
$\frac{\pi}{2}\frac{\pi\left(1+\tanh\left(\frac{-\ln{\sin\frac{\pi}{2}}}{2} \right ) \right )}{4x}=\frac{\pi^2}{8x}\neq\frac{\pi}{2x}$
which doesn't make sense. What am I missing?
2) How does the rule work for definite integrals whose intervals are infinite, such as in the gamma function?
Pardon my ignorance.
The rule you posted only works when the variable you're integrating over doesn't appear in the function itself; it can be shown as
$\int_{a}^{b} g(x)\, dt = g(x) \int_{a}^{b} dt = g(x) \times (b-a)$
so it would only be applicable if you can do that first step.
In the example you gave you cannot, because you're integrating with respect to $\theta$, so you cannot factor out that function of $\theta$.
3. Originally Posted by thelostchild
The rule you posted only works when the variable you're integrating over doesn't appear in the function itself; it can be shown as
$\int_{a}^{b} g(x)\, dt = g(x) \int_{a}^{b} dt = g(x) \times (b-a)$
so it would only be applicable if you can do that first step.
In the example you gave you cannot, because you're integrating with respect to $\theta$, so you cannot factor out that function of $\theta$.
Thanks. But I'm still not entirely clear on how you can integrate with respect to a variable that is not even in the equation. I would be much obliged if you could give a simple example.
4. Originally Posted by rainer
Thanks. But I'm still not entirely clear on how you can integrate with respect to a variable that is not even in the equation. I would be much obliged if you could give a simple example.
If you want something you know :
An antiderivative of 1 is t.
So $\int_a^b ~dt=\int_a^b 1 ~dt=\left. t\right|_a^b=b-a$
Satisfied?
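A quick symbolic check of the point made in this thread (assuming SymPy is available): integrating a function of $x$ with respect to $t$ just multiplies it by the length of the interval.

```python
# SymPy treats g(x) as a constant when integrating with respect to t.
import sympy as sp

x, t, a, b = sp.symbols('x t a b')
g = sp.Function('g')
print(sp.integrate(g(x), (t, a, b)))   # (-a + b)*g(x), i.e. b*g(x) - a*g(x)
```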
For a generally accessible and less technical introduction to the topic, see Introduction to special relativity.
Special relativity (SR) (also known as the special theory of relativity or STR) is the physical theory of measurement in inertial frames of reference proposed in 1905 by Albert Einstein (after considerable contributions of Hendrik Lorentz and Henri Poincaré) in the paper "On the Electrodynamics of Moving Bodies". [1] It generalizes Galileo's principle of relativity – that all uniform motion is relative, and that there is no absolute and well-defined state of rest (no privileged reference frames) – from mechanics to all the laws of physics, including both the laws of mechanics and of electrodynamics, whatever they may be. In addition, special relativity incorporates the principle that the speed of light is the same for all inertial observers regardless of the state of motion of the source. [2]
This theory has a wide range of consequences which have been experimentally verified. Special relativity overthrows Newtonian notions of absolute space and time by stating that time and space are perceived differently by observers in different states of motion. It yields the equivalence of matter and energy, as expressed in the mass-energy equivalence formula $E = mc^2$, where $c$ is the speed of light in a vacuum. The predictions of special relativity agree well with Newtonian mechanics in their common realm of applicability, specifically in experiments in which all velocities are small compared to the speed of light.
The theory is termed "special" because it applies the principle of relativity only to inertial frames. Einstein developed general relativity to apply the principle generally, that is, to any frame, and that theory includes the effects of gravity. Strictly, special relativity cannot be applied in accelerating frames or in gravitational fields.
Special relativity reveals that c is not just the velocity of a certain phenomenon, namely the propagation of electromagnetic radiation (light), but rather a fundamental feature of the way space and time are unified as spacetime. A consequence of this is that it is impossible for any particle that has mass to be accelerated to the speed of light.
For history and motivation, see the article: History of special relativity
## Postulates
In his autobiographical notes published in November 1949 Einstein described how he had arrived at the two fundamental postulates on which he based the special theory of relativity. After describing in detail the state of both mechanics and electrodynamics at the beginning of the 20th century, he wrote
"Reflections of this type made it clear to me as long ago as shortly after 1900, i. e. , shortly after Planck's trailblazing work, that neither mechanics nor electrodynamics could (except in limiting cases) claim exact validity. Gradually I despaired of the possibility of discovering the true laws by means of constructive efforts based on known facts. The longer and the more desperately I tried, the more I came to the conviction that only the discovery of a universal formal principle could lead us to assured results… How, then, could such a universal principle be found?"[3]
He discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of either the (then) known laws of mechanics or electrodynamics. These propositions were (1) the constancy of the velocity of light, and (2) the independence of physical laws (especially the constancy of the velocity of light) from the choice of inertial system. In his initial presentation of special relativity in 1905[4] he expressed these postulates as
• The Principle of Relativity - The laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems of inertial coordinates in uniform translatory motion.
• The Principle of Invariant Light Speed - Light in vacuum propagates with the speed c (a fixed constant) in terms of any system of inertial coordinates, regardless of the state of motion of the light source.
It should be noted that the derivation of special relativity depends not only on these two explicit postulates, but also on several tacit assumptions (which are made in almost all theories of physics), including the isotropy and homogeneity of space and the independence of measuring rods and clocks from their past history. [5]
Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations[6]. However, the most common set of postulates remains those employed by Einstein in his original paper. These postulates refer to the axiomatic basis of the Lorentz transformation, which is the essential core of special relativity. In all of Einstein's papers in which he presented derivations of the Lorentz transformation, he based it on these two principles. [7]
In addition to the papers referenced above—which give derivations of the Lorentz transformation and describe the foundations of special relativity—Einstein also wrote at least four papers giving heuristic arguments for the equivalence (and transmutability) of mass and energy. (It should be noted that this equivalence does not follow from the basic premises of special relativity. [8]) The first of these was "Does the Inertia of a Body Depend upon its Energy Content?" in 1905. In this and each of his subsequent three papers on this subject[9], Einstein augmented the two fundamental principles by postulating the relations involving momentum and energy of electromagnetic waves implied by Maxwell's equations (the assumption of which, of course, entails among other things the assumption of the constancy of the speed of light). He acknowledged in his 1907 survey paper on special relativity that it was problematic to rely on Maxwell's equations[10] for the heuristic mass-energy argument, and this is why he consistently based the derivation of Lorentz invariance (the essential core of special relativity) on just the two basic principles of relativity and light-speed invariance. He wrote
"The insight fundamental for the special theory of relativity is this: The assumptions relativity and light speed invariance are compatible if relations of a new type ("Lorentz transformation") are postulated for the conversion of coordinates and times of events… The universal principle of the special theory of relativity is contained in the postulate: The laws of physics are invariant with respect to Lorentz transformations (for the transition from one inertial system to any other arbitrarily chosen inertial system). This is a restricting principle for natural laws…"[11]
Thus many modern treatments of special relativity base it on the single postulate of universal Lorentz covariance, or, equivalently, on the single postulate of Minkowski spacetime. [12][13]
## Lack of an absolute reference frame
The principle of relativity, which states that there is no stationary reference frame, dates back to Galileo, and was incorporated into Newtonian physics. However, in the late 19th century, the existence of electromagnetic waves led physicists to suggest that the universe was filled with a substance known as "aether", which would act as the medium through which these waves, or vibrations traveled. The aether was thought to constitute an absolute reference frame against which speeds could be measured. In other words, the aether was the only fixed or motionless thing in the universe. Aether supposedly had some wonderful properties: it was sufficiently elastic that it could support electromagnetic waves, and those waves could interact with matter, yet it offered no resistance to bodies passing through it. The results of various experiments, including the Michelson-Morley experiment, indicated that the Earth was always 'stationary' relative to the aether – something that was difficult to explain, since the Earth is in orbit around the Sun. Einstein's elegant solution was to discard the notion of an aether and an absolute state of rest. Special relativity is formulated so as to not assume that any particular frame of reference is special; rather, in relativity, any reference frame moving with uniform motion will observe the same laws of physics. In particular, the speed of light in a vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) velocities.
## Consequences
Main article: Consequences of special relativity
Einstein has said that all of the consequences of special relativity can be derived from examination of the Lorentz transformations.
These transformations, and hence special relativity, lead to different physical predictions than Newtonian mechanics when relative velocities become comparable to the speed of light. The speed of light is so much larger than anything humans encounter that some of the effects predicted by relativity are initially counter-intuitive:
• Time dilation – the time lapse between two events is not invariant from one observer to another, but is dependent on the relative speeds of the observers' reference frames (e.g., the twin paradox which concerns a twin who flies off in a spaceship traveling near the speed of light and returns to discover that his or her twin sibling has aged much more).
• Relativity of simultaneity – two events happening in two different locations that occur simultaneously to one observer, may occur at different times to another observer (lack of absolute simultaneity).
• Lorentz contraction – the dimensions (e.g., length) of an object as measured by one observer may be smaller than the results of measurements of the same object made by another observer (e.g., the ladder paradox involves a long ladder traveling near the speed of light and being contained within a smaller garage).
• Composition of velocities – velocities (and speeds) do not simply 'add', for example if a rocket is moving at ⅔ the speed of light relative to an observer, and the rocket fires a missile at ⅔ of the speed of light relative to the rocket, the missile does not exceed the speed of light relative to the observer. (In this example, the observer would see the missile travel with a speed of 12/13 the speed of light.)
• Inertia and momentum – as an object's speed approaches the speed of light from an observer's point of view, its mass appears to increase thereby making it more and more difficult to accelerate it from within the observer's frame of reference.
• Equivalence of mass and energy, $E = mc^2$ – The energy content of an object at rest with mass $m$ equals $mc^2$. Conservation of energy implies that in any reaction a decrease of the sum of the masses of particles must be accompanied by an increase in kinetic energies of the particles after the reaction. Similarly, the mass of an object can be increased by taking in kinetic energies.
## Reference frames, coordinates and the Lorentz transformation
Diagram 1. Changing views of spacetime along the world line of a rapidly accelerating observer. In this animation, the vertical direction indicates time and the horizontal direction indicates distance, the dashed line is the spacetime trajectory ("world line") of the observer. The lower quarter of the diagram shows the events that are visible to the observer, and the upper quarter shows the light cone – those that will be able to see the observer. The small dots are arbitrary events in spacetime. The slope of the world line (deviation from being vertical) gives the relative velocity to the observer. Note how the view of spacetime changes when the observer accelerates.
Relativity theory depends on "reference frames". A reference frame is an observational perspective in space at rest, or in uniform motion, from which a position can be measured along 3 spatial axes. In addition, a reference frame has the ability to determine measurements of the time of events using a 'clock' (any reference device with uniform periodicity).
An event is an occurrence that can be assigned a single unique time and location in space relative to a reference frame: it is a "point" in space-time. Since the speed of light is constant in relativity in each and every reference frame, pulses of light can be used to unambiguously measure distances and refer back the times that events occurred to the clock, even though light takes time to reach the clock after the event has transpired.
For example, the explosion of a firecracker may be considered to be an "event". We can completely specify an event by its four space-time coordinates: The time of occurrence and its 3-dimensional spatial location define a reference point. Let's call this reference frame S.
In relativity theory we often want to calculate the position of a point from a different reference point.
Suppose we have a second reference frame S', whose spatial axes and clock exactly coincide with that of S at time zero, but it is moving at a constant velocity $v\,$ with respect to S along the $x\,$-axis.
Since there is no absolute reference frame in relativity theory, a concept of 'moving' doesn't strictly exist, as everything is always moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be comoving. Therefore S and S' are not comoving.
Let's define the event to have space-time coordinates $(t, x, y, z)\,$ in system S and $(t', x', y', z')\,$ in S'. Then the Lorentz transformation specifies that these coordinates are related in the following way:
$\begin{cases}t' = \gamma \left(t - \frac{v x}{c^{2}} \right) \\x' = \gamma (x - v t) \\y' = y \\z' = z ,\end{cases}$
where $\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$ is called the Lorentz factor and $c\,$ is the speed of light in a vacuum.
The $y\,$ and $z\,$ coordinates are unaffected, but the $x\,$ and $t\,$ axes are mixed up by the transformation. In a way this transformation can be understood as a hyperbolic rotation.
A quantity invariant under Lorentz transformations is known as a Lorentz scalar.
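As a concrete illustration, here is a small numerical sketch (Python, with units chosen so that c = 1; the sample event and speed are arbitrary assumptions) that applies the boost above and checks that the spacetime interval, a Lorentz scalar, is unchanged:

```python
# Apply the Lorentz transformation above (c = 1) and verify that the
# interval -t^2 + x^2 + y^2 + z^2 is the same in both frames.
import math

def boost_x(t, x, y, z, v):
    g = 1.0 / math.sqrt(1.0 - v * v)     # Lorentz factor gamma, |v| < 1
    return g * (t - v * x), g * (x - v * t), y, z

def interval(t, x, y, z):
    return -t**2 + x**2 + y**2 + z**2

event = (5.0, 2.0, 1.0, 0.5)             # an arbitrary event in frame S
moved = boost_x(*event, v=0.8)           # its coordinates in frame S'
assert math.isclose(interval(*event), interval(*moved))
```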
## Simultaneity
Event B is simultaneous with A in the green reference frame, but it occurred before in the blue frame, and will occur later in the red frame.
Main article: Relativity of simultaneity
From the first equation of the Lorentz transformation in terms of coordinate differences
$\Delta t' = \gamma \left(\Delta t - \frac{v \Delta x}{c^{2}} \right)$
it is clear that two events that are simultaneous in frame S (satisfying $\Delta t = 0\,$), are not necessarily simultaneous in another inertial frame S' (satisfying $\Delta t' = 0\,$). Only if these events are colocal in frame S (satisfying $\Delta x = 0\,$), will they be simultaneous in another frame S'.
## Time dilation and length contraction
Writing the Lorentz transformation and its inverse in terms of coordinate differences we get
$\begin{cases}\Delta t' = \gamma \left(\Delta t - \frac{v \Delta x}{c^{2}} \right) \\\Delta x' = \gamma (\Delta x - v \Delta t)\,\end{cases}$
and
$\begin{cases}\Delta t = \gamma \left(\Delta t' + \frac{v \Delta x'}{c^{2}} \right) \\\Delta x = \gamma (\Delta x' + v \Delta t')\,\end{cases}$
Suppose we have a clock at rest in the unprimed system S. Two consecutive ticks of this clock are then characterized by Δx = 0. If we want to know the relation between the times between these ticks as measured in both systems, we can use the first equation and find:
$\Delta t' = \gamma \Delta t \qquad ( \,$ for events satisfying $\Delta x = 0 )\,$
This shows that the time Δt' between the two ticks as seen in the 'moving' frame S' is larger than the time Δt between these ticks as measured in the rest frame of the clock. This phenomenon is called time dilation.
Similarly, suppose we have a measuring rod at rest in the unprimed system. In this system, the length of this rod is written as Δx. If we want to find the length of this rod as measured in the 'moving' system S', we must make sure to measure the distances x' to the end points of the rod simultaneously in the primed frame S'. In other words, the measurement is characterized by Δt' = 0, which we can combine with the fourth equation to find the relation between the lengths Δx and Δx':
$\Delta x' = \frac{\Delta x}{\gamma} \qquad ( \,$ for events satisfying $\Delta t' = 0 )\,$
This shows that the length Δx' of the rod as measured in the 'moving' frame S' is shorter than the length Δx in its own rest frame. This phenomenon is called length contraction or Lorentz contraction.
These effects are not merely appearances; they are explicitly related to our way of measuring time intervals between events which occur at the same place in a given coordinate system (called "co-local" events). These time intervals will be different in another coordinate system moving with respect to the first, unless the events are also simultaneous. Similarly, these effects also relate to our measured distances between separated but simultaneous events in a given coordinate system of choice. If these events are not co-local, but are separated by distance (space), they will not occur at the same spatial distance from each other when seen from another moving coordinate system.
See also the twin paradox.
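To put numbers to the two formulas, a short sketch (the speed 0.6 c and the unit tick and rod are hypothetical sample values):

```python
# At v = 0.6 c the Lorentz factor is 1.25: a 1 s proper tick is seen
# to take 1.25 s, and a 1 m proper rod is seen as 0.8 m.
import math

def gamma(beta):                 # beta = v / c
    return 1.0 / math.sqrt(1.0 - beta**2)

g = gamma(0.6)
print(g)            # 1.25
print(g * 1.0)      # Delta t' = gamma * Delta t   (time dilation)
print(1.0 / g)      # Delta x' = Delta x / gamma   (length contraction)
```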
## Causality and prohibition of motion faster than light
Diagram 2. Light cone
In diagram 2 the interval AB is 'time-like'; i.e., there is a frame of reference in which event A and event B occur at the same location in space, separated only by occurring at different times. If A precedes B in that frame, then A precedes B in all frames. It is hypothetically possible for matter (or information) to travel from A to B, so there can be a causal relationship (with A the cause and B the effect).
The interval AC in the diagram is 'space-like'; i.e., there is a frame of reference in which event A and event C occur simultaneously, separated only in space. However there are also frames in which A precedes C (as shown) and frames in which C precedes A. If it were possible for a cause-and-effect relationship to exist between events A and C, then paradoxes of causality would result. For example, if A was the cause, and C the effect, then there would be frames of reference in which the effect preceded the cause. Although this in itself won't give rise to a paradox, one can show[14][15] that faster-than-light signals can be sent back into one's own past. A causal paradox can then be constructed by sending the signal if and only if no signal was received previously.
Therefore, one of the consequences of special relativity is that (assuming causality is to be preserved), no information or material object can travel faster than light. On the other hand, the logical situation is not as clear in the case of general relativity, so it is an open question whether there is some fundamental principle that preserves causality (and therefore prevents motion faster than light) in general relativity.
Even without considerations of causality, there are other strong reasons why faster-than-light travel is forbidden by special relativity. For example, if a constant force is applied to an object for a limitless amount of time, then integrating F=dp/dt gives a momentum that grows without bound, but this is simply because p = mγv approaches infinity as v approaches c. To an observer who is not accelerating, it appears as though the object's inertia is increasing, so as to produce a smaller acceleration in response to the same force. This behavior is in fact observed in particle accelerators.
See also the Tachyonic Antitelephone.
## Composition of velocities
Main article: Velocity-addition formula
If the observer in S sees an object moving along the x axis at velocity w, then the observer in the S' system, a frame of reference moving at velocity v in the x direction with respect to S, will see the object moving with velocity w' where
$w'=\frac{w-v}{1-wv/c^2}.$
This equation can be derived from the space and time transformations above. Notice that if the object were moving at the speed of light in the S system (i.e., w = c), then it would also be moving at the speed of light in the S' system. Also, if both w and v are small with respect to the speed of light, we will recover the intuitive Galilean transformation of velocities: $w' \approx w-v$.
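A short sketch of the composition formula (exact rational arithmetic, units of c; the function name is an assumption), checked against the rocket-and-missile example from the Consequences section:

```python
# w' = (w - v) / (1 - w*v / c^2) with c = 1.
from fractions import Fraction

def compose(w, v):
    """Velocity in S' of an object moving at w in S, where S' moves
    at v relative to S (all collinear, in units of c)."""
    return (w - v) / (1 - w * v)

# Missile at 2/3 c in the rocket frame; the ground frame moves at
# v = -2/3 c relative to the rocket, so from the ground:
print(compose(Fraction(2, 3), Fraction(-2, 3)))   # 12/13, still < c
```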
## Mass, momentum, and energy
Main article: Mass in special relativity
Main article: Conservation of energy
In addition to modifying notions of space and time, special relativity forces one to reconsider the concepts of mass, momentum, and energy, all of which are important constructs in Newtonian mechanics. Special relativity shows, in fact, that these concepts are all different aspects of the same physical quantity in much the same way that it shows space and time to be interrelated.
There are a couple of (equivalent) ways to define momentum and energy in SR. One method uses conservation laws. If these laws are to remain valid in SR they must be true in every possible reference frame. However, if one does some simple thought experiments using the Newtonian definitions of momentum and energy one sees that these quantities are not conserved in SR. One can rescue the idea of conservation by making some small modifications to the definitions to account for relativistic velocities. It is these new definitions which are taken as the correct ones for momentum and energy in SR.
Given an object of invariant mass m traveling at velocity v the energy and momentum are given (and even defined) by
$E = \gamma m c^2 \,\!$
$\vec p = \gamma m \vec v \,\!$
where γ (the Lorentz factor) is given by
$\gamma = \frac{1}{\sqrt{1 - \beta^2}}$
where $\beta = \frac{v}{c}$ is the ratio of the velocity and the speed of light. The term γ occurs frequently in relativity, and comes from the Lorentz transformation equations.
Relativistic energy and momentum can be related through the formula
$E^2 - (p c)^2 = (m c^2)^2 \,\!$
which is referred to as the relativistic energy-momentum equation. It is interesting to observe that while the energy $E\,$ and the momentum $p\,$ are observer dependent (they vary from frame to frame), the combination $E^2 - (p c)^2$ is observer independent: it always equals $(m c^2)^2$.
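This frame-independence is easy to verify numerically; a small sketch (c = 1 units, arbitrary sample mass and speeds):

```python
# E^2 - p^2 equals m^2 at every speed when c = 1.
import math

def energy_momentum(m, v):
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * m, g * m * v              # (E, p) from the formulas above

m = 2.0
for v in (0.0, 0.1, 0.9, 0.999):
    E, p = energy_momentum(m, v)
    assert math.isclose(E**2 - p**2, m**2)
```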
For velocities much smaller than those of light, γ can be approximated using a Taylor series expansion and one finds that
$E \approx m c^2 + \begin{matrix} \frac{1}{2} \end{matrix} m v^2 \,\!$
$\vec p \approx m \vec v \,\!$
Barring the first term in the energy expression (discussed below), these formulas agree exactly with the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities.
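The expansion itself can be reproduced with SymPy (assuming it is available):

```python
# Low-speed expansion of gamma*m*c^2: the rest energy plus the
# Newtonian kinetic energy, with corrections of order v^4.
import sympy as sp

v, c, m = sp.symbols('v c m', positive=True)
E = m * c**2 / sp.sqrt(1 - v**2 / c**2)
print(sp.series(E, v, 0, 4))   # c**2*m + m*v**2/2 + O(v**4)
```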
Looking at the above formulas for energy, one sees that when an object is at rest (v = 0 and γ = 1) there is a non-zero energy remaining:
$E_{rest} = m c^2 \,\!$
This energy is referred to as rest energy. The rest energy does not cause any conflict with the Newtonian theory because it is a constant and, as far as kinetic energy is concerned, it is only differences in energy which are meaningful.
Taking this formula at face value, we see that in relativity, mass is simply another form of energy. In 1927 Einstein remarked about special relativity:
Under this theory mass is not an unalterable magnitude, but a magnitude dependent on (and, indeed, identical with) the amount of energy. [16]
This formula becomes important when one measures the masses of different atomic nuclei. By looking at the difference in masses, one can predict which nuclei have extra stored energy that can be released by nuclear reactions, providing important information which was useful in the development of nuclear energy and, consequently, the nuclear bomb. The implications of this formula on 20th-century life have made it one of the most famous equations in all of science.
## Relativistic mass
Introductory physics courses and some older textbooks on special relativity sometimes define a relativistic mass which increases as the velocity of a body increases. According to the geometric interpretation of special relativity, this is often deprecated and the term 'mass' is reserved to mean invariant mass and is thus independent of the inertial frame, i.e., invariant.
Using the relativistic mass definition, the mass of an object may vary depending on the observer's inertial frame in the same way that other properties such as its length may do so. Defining such a quantity may sometimes be useful in that doing so simplifies a calculation by restricting it to a specific frame. For example, consider a body with an invariant mass m moving at some velocity relative to an observer's reference system. That observer defines the relativistic mass of that body as:
$M = \gamma m\!$
"Relativistic mass" should not be confused with the "longitudinal" and "transverse mass" definitions that were used around 1900 and that were based on an inconsistent application of the laws of Newton: those used f=ma for a variable mass, while relativistic mass corresponds to Newton's dynamic mass in which
$p=Mv \!$
and
$f=dp/dt\!$.
Note also that the body does not actually become more massive in its proper frame, since the relativistic mass is only different for an observer in a different frame. The only mass that is frame independent is the invariant mass. When using the relativistic mass, the applicable reference frame should be specified if it isn't already obvious or implied. It also goes almost without saying that the increase in relativistic mass does not come from an increased number of atoms in the object. Instead, the relativistic mass of each atom and subatomic particle has increased.
Physics textbooks sometimes use the relativistic mass as it allows the students to utilize their knowledge of Newtonian physics to gain some intuitive grasp of relativity in their frame of choice (usually their own!). "Relativistic mass" is also consistent with the concepts "time dilation" and "length contraction".
## Force
The classical definition of ordinary force f is given by Newton's Second Law in its original form:
$\vec f = d\vec p/dt$
and this is valid in relativity.
Many modern textbooks rewrite Newton's Second Law as
$\vec f = M \vec a$
This form is not valid in relativity or in other situations where the relativistic mass M is varying.
This formula can be replaced in the relativistic case by
$\vec f = \gamma m \vec a + \gamma^3 m \frac{\vec v \cdot \vec a}{c^2} \vec v$
As seen from the equation, ordinary force and acceleration vectors are not necessarily parallel in relativity.
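A numerical sanity check of this expression (NumPy; c = 1, m = 1, and the sample velocity and acceleration are arbitrary assumptions): differentiate $\vec p = \gamma m \vec v$ along a straight-line motion and compare with the formula.

```python
# Central-difference derivative of p = gamma*m*v versus the closed form
# f = gamma*m*a + gamma^3*m*(v.a)*v, in units where c = 1.
import numpy as np

m, h = 1.0, 1e-6
v0 = np.array([0.3, 0.1, 0.0])    # velocity at t = 0 (sample values)
a0 = np.array([0.05, 0.2, 0.0])   # acceleration at t = 0

def momentum(v):
    return m * v / np.sqrt(1.0 - v @ v)

# along the motion v(t) = v0 + a0*t, near t = 0
f_numeric = (momentum(v0 + a0 * h) - momentum(v0 - a0 * h)) / (2 * h)

g = 1.0 / np.sqrt(1.0 - v0 @ v0)
f_formula = g * m * a0 + g**3 * m * (v0 @ a0) * v0
assert np.allclose(f_numeric, f_formula)
```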
However the four-vector expression relating four-force $F^\mu\,$ to invariant mass m and four-acceleration $A^\mu\,$ restores the same equation form
$F^\mu = mA^\mu\,$
## The geometry of space-time
Main article: Minkowski space
SR uses a 'flat' 4-dimensional Minkowski space, which is an example of a space-time. This space, however, is very similar to the standard 3 dimensional Euclidean space, and fortunately by that fact, very easy to work with.
The differential of distance (ds) in Cartesian 3D space is defined as:
$ds^2 = dx_1^2 + dx_2^2 + dx_3^2$
where $(dx_1, dx_2, dx_3)$ are the differentials of the three spatial dimensions. In the geometry of special relativity, a fourth dimension is added, derived from time, so that the equation for the differential of distance becomes:
$ds^2 = dx_1^2 + dx_2^2 + dx_3^2 - c^2 dt^2$
If we wished to make the time coordinate look like the space coordinates, we could treat time as imaginary: $x_4 = ict$. In this case the above equation becomes symmetric:
$ds^2 = dx_1^2 + dx_2^2 + dx_3^2 + dx_4^2$
This suggests what is in fact a profound theoretical insight as it shows that special relativity is simply a rotational symmetry of our space-time, very similar to rotational symmetry of Euclidean space. Just as Euclidean space uses a Euclidean metric, so space-time uses a Minkowski metric. Basically, SR can be stated in terms of the invariance of space-time interval (between any two events) as seen from any inertial reference frame. All equations and effects of special relativity can be derived from this rotational symmetry (the Poincaré group) of Minkowski space-time. According to Misner (1971 §2.3), ultimately the deeper understanding of both special and general relativity will come from the study of the Minkowski metric (described below) rather than a "disguised" Euclidean metric using ict as the time coordinate.
If we reduce the spatial dimensions to 2, so that we can represent the physics in a 3-D space
$ds^2 = dx_1^2 + dx_2^2 - c^2 dt^2$
We see that the null geodesics lie along a dual-cone:
defined by the equation
$ds^2 = 0 = dx_1^2 + dx_2^2 - c^2 dt^2$
or
$dx_1^2 + dx_2^2 = c^2 dt^2$
which is the equation of a circle with $r = c \times dt$. If we extend this to three spatial dimensions, the null geodesics are the 4-dimensional cone:
$ds^2 = 0 = dx_1^2 + dx_2^2 + dx_3^2 - c^2 dt^2$
$dx_1^2 + dx_2^2 + dx_3^2 = c^2 dt^2$
This null dual-cone represents the "line of sight" of a point in space. That is, when we look at the stars and say "The light from that star which I am receiving is X years old", we are looking down this line of sight: a null geodesic. We are looking at an event $d = \sqrt{x_1^2+x_2^2+x_3^2}$ meters away and d/c seconds in the past. For this reason the null dual cone is also known as the 'light cone'. (The point in the lower left of the picture below represents the star, the origin represents the observer, and the line represents the null geodesic "line of sight".)
The cone in the -t region is the information that the point is 'receiving', while the cone in the +t section is the information that the point is 'sending'.
The geometry of Minkowski space can be depicted using Minkowski diagrams, which are also useful in understanding many of the thought-experiments in special relativity.
## Physics in spacetime
Here, we see how to write the equations of special relativity in a manifestly Lorentz covariant form. The position of an event in spacetime is given by a contravariant four vector whose components are:
$x^\nu=\left(t, x, y, z\right)$
That is, $x^0 = t$, $x^1 = x$, $x^2 = y$, and $x^3 = z$. Superscripts are contravariant indices in this section rather than exponents except when they indicate a square. Subscripts are covariant indices which also range from zero to three, as with the spacetime gradient of a field φ:
$\partial_0 \phi = \frac{\partial \phi}{\partial t}, \quad \partial_1 \phi = \frac{\partial \phi}{\partial x}, \quad \partial_2 \phi = \frac{\partial \phi}{\partial y}, \quad \partial_3 \phi = \frac{\partial \phi}{\partial z}.$
### Metric and transformations of coordinates
Having recognised the four-dimensional nature of spacetime, we are driven to employ the Minkowski metric, η, given in components (valid in any inertial reference frame) as:
$\eta_{\alpha\beta} = \begin{pmatrix}-c^2 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{pmatrix}$
Its reciprocal is:
$\eta^{\alpha\beta} = \begin{pmatrix}-1/c^2 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{pmatrix}$
Then we recognize that co-ordinate transformations between inertial reference frames are given by the Lorentz transformation tensor Λ. For the special case of motion along the x-axis, we have:
$\Lambda^{\mu'}{}_\nu = \begin{pmatrix}\gamma & -\beta\gamma/c & 0 & 0\\-\beta\gamma c & \gamma & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{pmatrix}$
which is simply the matrix of a boost (like a rotation) between the x and t coordinates, where μ' indicates the row and ν indicates the column. Also, β and γ are defined as:
$\beta = \frac{v}{c},\ \gamma = \frac{1}{\sqrt{1-\beta^2}}.$
More generally, a transformation from one inertial frame (ignoring translations for simplicity) to another must satisfy:
$\eta_{\alpha\beta} = \eta_{\mu'\nu'} \Lambda^{\mu'}{}_\alpha \Lambda^{\nu'}{}_\beta \!$
where there is an implied summation of $\mu' \!$ and $\nu' \!$ from 0 to 3 on the right-hand side in accordance with the Einstein summation convention. The Poincaré group is the most general group of transformations which preserves the Minkowski metric, and this is the physical symmetry underlying special relativity.
All proper physical quantities are given by tensors. So to transform from one frame to another, we use the well-known tensor transformation law
$T^{\left[i_1',i_2',...i_p'\right]}_{\left[j_1',j_2',...j_q'\right]} = \Lambda^{i_1'}{}_{i_1}\Lambda^{i_2'}{}_{i_2}...\Lambda^{i_p'}{}_{i_p}\Lambda_{j_1'}{}^{j_1}\Lambda_{j_2'}{}^{j_2}...\Lambda_{j_q'}{}^{j_q}T^{\left[i_1,i_2,...i_p\right]}_{\left[j_1,j_2,...j_q\right]}$
where $\Lambda_{j_k'}{}^{j_k} \!$ is the reciprocal matrix of $\Lambda^{j_k'}{}_{j_k} \!$.
To see how this is useful, to transform the position of an event from an unprimed co-ordinate system S to a primed system S', we calculate
$\begin{pmatrix}t'\\ x'\\ y'\\ z'\end{pmatrix} = x^{\mu'}=\Lambda^{\mu'}{}_\nu x^\nu=\begin{pmatrix}\gamma & -\beta\gamma/c & 0 & 0\\-\beta\gamma c & \gamma & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{pmatrix}\begin{pmatrix}t\\ x\\ y\\ z\end{pmatrix} =\begin{pmatrix}\gamma t- \gamma\beta x/c\\\gamma x - \beta \gamma ct \\ y\\ z\end{pmatrix}$
which is the Lorentz transformation given above. All tensors transform by the same rule.
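As a quick numerical sanity check of these rules, the sketch below (Python with NumPy; the boost speed and event coordinates are arbitrary illustrative values) builds the boost matrix above, verifies the defining property $\eta_{\alpha\beta} = \eta_{\mu'\nu'} \Lambda^{\mu'}{}_\alpha \Lambda^{\nu'}{}_\beta$ in matrix form, and confirms that the spacetime interval of an event is unchanged by the boost.

```python
import numpy as np

c = 299_792_458.0                     # speed of light, m/s
beta = 0.5                            # boost speed v = 0.5c (illustrative)
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Boost matrix in the x^0 = t convention used in this section.
L = np.array([[gamma,             -beta * gamma / c, 0.0, 0.0],
              [-beta * gamma * c,  gamma,            0.0, 0.0],
              [0.0,                0.0,              1.0, 0.0],
              [0.0,                0.0,              0.0, 1.0]])

eta = np.diag([-c**2, 1.0, 1.0, 1.0])

# eta = Lambda^T eta Lambda (tolerance scaled by c^2: entries are ~1e17):
assert np.allclose(L.T @ eta @ L, eta, atol=1e-6 * c**2)

# Transform an event and check the interval is invariant.
x = np.array([1e-6, 150.0, 20.0, 5.0])    # (t, x, y, z), illustrative
xp = L @ x
print(x @ eta @ x, xp @ eta @ xp)         # equal up to rounding
```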
The squared length of the differential of the position four-vector $dx^\mu \!$ constructed using
$\mathbf{dx}^2 = \eta_{\mu\nu}dx^\mu dx^\nu = -(c \cdot dt)^2+(dx)^2+(dy)^2+(dz)^2\,$
is an invariant. Being invariant means that it takes the same value in all inertial frames, because it is a scalar (rank 0 tensor), and so no Λ appears in its trivial transformation. Notice that when the line element $\mathbf{dx}^2$ is negative, $d\tau=\sqrt{-\mathbf{dx}^2} / c$ is the differential of proper time, while when $\mathbf{dx}^2$ is positive, $\sqrt{\mathbf{dx}^2}$ is the differential of the proper distance.
The primary value of expressing the equations of physics in a tensor form is that they are then manifestly invariant under the Poincaré group, so that we do not have to do a special and tedious calculation to check that fact. Also in constructing such equations we often find that equations previously thought to be unrelated are, in fact, closely connected being part of the same tensor equation.
### Velocity and acceleration in 4D
Recognising other physical quantities as tensors also simplifies their transformation laws. First note that the velocity four-vector Uμ is given by
$U^\mu = \frac{dx^\mu}{d\tau} = \begin{pmatrix} \gamma \\ \gamma v_x \\ \gamma v_y \\ \gamma v_z \end{pmatrix}$
Recognising this, we can turn the awkward looking law about composition of velocities into a simple statement about transforming the velocity four-vector of one particle from one frame to another. Uμ also has an invariant form:
${\mathbf U}^2 = \eta_{\nu\mu} U^\nu U^\mu = -c^2 .$
So all velocity four-vectors have a magnitude of c. This is an expression of the fact that there is no such thing as being at coordinate rest in relativity: at the least, you are always moving forward through time. The acceleration 4-vector is given by $A^\mu = d{\mathbf U^\mu}/d\tau$. Given this, differentiating the above equation by τ produces
$2\eta_{\mu\nu}A^\mu U^\nu = 0. \!$
So in relativity, the acceleration four-vector and the velocity four-vector are orthogonal.
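A minimal numerical check of the normalization $\mathbf U^2 = -c^2$, in the same $x^0 = t$ convention (the 3-velocity is an arbitrary illustrative value):

```python
import numpy as np

c = 299_792_458.0
v3 = np.array([0.6 * c, 0.0, 0.0])          # illustrative 3-velocity
gamma = 1.0 / np.sqrt(1.0 - (v3 @ v3) / c**2)

U = np.concatenate(([gamma], gamma * v3))   # four-velocity (gamma, gamma v)
eta = np.diag([-c**2, 1.0, 1.0, 1.0])

print(U @ eta @ U, -c**2)                   # equal: magnitude is always c
```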
### Momentum in 4D
The momentum and energy combine into a covariant 4-vector:
$p_\nu = m \cdot \eta_{\nu\mu} U^\mu = \begin{pmatrix}-E \\ p_x\\ p_y\\ p_z\end{pmatrix}.$
where m is the invariant mass.
The invariant magnitude of the momentum 4-vector is:
$\mathbf{p}^2 = \eta^{\mu\nu}p_\mu p_\nu = -(E/c)^2 + p^2 .$
We can work out what this invariant is by first arguing that, since it is a scalar, it doesn't matter in which reference frame we calculate it, and then by transforming to a frame where the total momentum is zero.
$\mathbf{p}^2 = - (E_{rest}/c)^2 = - (m \cdot c)^2 .$
We see that the rest energy is an independent invariant. A rest energy can be calculated even for particles and systems in motion, by translating to a frame in which momentum is zero.
The rest energy is related to the mass according to the celebrated equation discussed above:
$E_{rest} = m c^2\,$
Note that the mass of systems measured in their center of momentum frame (where total momentum is zero) is given by the total energy of the system in this frame. It may not be equal to the sum of individual system masses measured in other frames.
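The following sketch illustrates that last point numerically, in natural units (c = 1; masses and speeds are illustrative): two identical particles collide head-on, and the mass of the system in its center-of-momentum frame exceeds the sum of the individual rest masses.

```python
import numpy as np

c = 1.0                                # natural units
m = 1.0                                # rest mass of each particle

gamma = 1.0 / np.sqrt(1.0 - 0.8**2)    # each particle moves at 0.8c
E_tot = 2.0 * gamma * m * c**2         # energies add
p_tot = 0.0                            # equal and opposite momenta cancel

M_system = np.sqrt(E_tot**2 - (p_tot * c)**2) / c**2
print(M_system, 2 * m)                 # 3.33... > 2.0
```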
### Force in 4D
To use Newton's third law of motion, both forces must be defined as the rate of change of momentum with respect to the same time coordinate. That is, it requires the 3D force defined above. Unfortunately, there is no tensor in 4D which contains the components of the 3D force vector among its components.
If a particle is not traveling at c, one can transform the 3D force from the particle's co-moving reference frame into the observer's reference frame. This yields a 4-vector called the four-force. It is the rate of change of the above energy momentum four-vector with respect to proper time. The covariant version of the four-force is:
$F_\nu = \frac{d p_{\nu}}{d \tau} = \begin{pmatrix} -{d E}/{d \tau} \\ {d p_x}/{d \tau} \\ {d p_y}/{d \tau} \\ {d p_z}/{d \tau} \end{pmatrix}$
where $\tau \,$ is the proper time.
In the rest frame of the object, the time component of the four-force is zero unless the "invariant mass" of the object is changing, in which case it is the negative of that rate of change times $c^2$. In general, though, the components of the four-force are not equal to the components of the three-force, because the three-force is defined by the rate of change of momentum with respect to coordinate time, i.e. $\frac{d p}{d t}$, while the four-force is defined by the rate of change of momentum with respect to proper time, i.e. $\frac{d p}{d \tau}$.
In a continuous medium, the 3D density of force combines with the density of power to form a covariant 4-vector. The spatial part is the result of dividing the force on a small cell (in 3-space) by the volume of that cell. The time component is the negative of the power transferred to that cell divided by the volume of the cell. This will be used below in the section on electromagnetism.
## Relativity and unifying electromagnetism
Main article: Classical electromagnetism and special relativity
Theoretical investigation in classical electromagnetism led to the discovery of wave propagation. Equations generalizing the electromagnetic effects found that the finite propagation speed of the E and B fields required certain behaviors on charged particles. The general study of moving charges forms the Liénard–Wiechert potential, which is a step towards special relativity.
The Lorentz transformation of the electric field of a moving charge into a non-moving observer's reference frame results in the appearance of a mathematical term commonly called the magnetic field. Conversely, the magnetic field generated by a moving charge disappears and becomes a purely electrostatic field in a comoving frame of reference. Maxwell's equations are thus simply an empirical fit to special relativistic effects in a classical model of the Universe. As electric and magnetic fields are reference frame dependent and thus intertwined, one speaks of electromagnetic fields. Special relativity provides the transformation rules for how an electromagnetic field in one inertial frame appears in another inertial frame.
### Electromagnetism in 4D
Main article: Covariant formulation of classical electromagnetism
Maxwell's equations in the 3D form are already consistent with the physical content of special relativity. But we must rewrite them to make them manifestly invariant.[17]
The charge density $\rho \!$ and current density $[J_x,J_y,J_z] \!$ are unified into the current-charge 4-vector:
$J^\mu = \begin{pmatrix}\rho \\ J_x\\ J_y\\ J_z\end{pmatrix}$
The law of charge conservation, $\frac{\partial \rho} {\partial t} + \nabla \cdot \mathbf{J} = 0$, becomes:
$\partial_\mu J^\mu = 0. \!$
The electric field $[E_x,E_y,E_z] \!$ and the magnetic induction $[B_x,B_y,B_z] \!$ are now unified into the (rank 2 antisymmetric covariant) electromagnetic field tensor:
$F_{\mu\nu} = \begin{pmatrix} 0 & -E_x & -E_y & -E_z \\ E_x & 0 & B_z & -B_y \\ E_y & -B_z & 0 & B_x \\ E_z & B_y & -B_x & 0 \end{pmatrix}$
The density, $f_\mu \!$, of the Lorentz force, $\mathbf{f} = \rho \mathbf{E} + \mathbf{J} \times \mathbf{B}$, exerted on matter by the electromagnetic field becomes:
$f_\mu = F_{\mu\nu}J^\nu .\!$
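As a sanity check, the sketch below (NumPy; field and source values are arbitrary, and factors of c are suppressed exactly as in the matrix above) builds $F_{\mu\nu}$ from E and B and verifies that the contraction $F_{\mu\nu}J^\nu$ reproduces the 3D Lorentz force density, with minus the power density $\mathbf{E}\cdot\mathbf{J}$ in the time slot.

```python
import numpy as np

E = np.array([1.0, 2.0, 3.0])        # illustrative field values
B = np.array([0.5, -1.0, 0.25])
rho = 2.0                            # illustrative charge density
J3 = np.array([1.0, 0.0, -2.0])      # illustrative current density

F = np.array([[0.0,  -E[0], -E[1], -E[2]],
              [E[0],  0.0,   B[2], -B[1]],
              [E[1], -B[2],  0.0,   B[0]],
              [E[2],  B[1], -B[0],  0.0]])

J = np.concatenate(([rho], J3))
f = F @ J                            # f_mu = F_{mu nu} J^nu

assert np.allclose(f[1:], rho * E + np.cross(J3, B))  # rho E + J x B
assert np.isclose(f[0], -E @ J3)                      # minus the power density
print(f)
```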
Faraday's law of induction, $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}} {\partial t}$, and Gauss's law for magnetism, $\nabla \cdot \mathbf{B} = 0$, combine to form:
$\partial_\lambda F_{\mu\nu}+ \partial _\mu F_{\nu \lambda}+ \partial_\nu F_{\lambda \mu} = 0. \!$
Although there appear to be 64 equations here, it actually reduces to just four independent equations. Using the antisymmetry of the electromagnetic field one can either reduce to an identity (0 = 0) or render redundant all the equations except for those with λ,μ,ν = either 1,2,3 or 2,3,0 or 3,0,1 or 0,1,2.
The electric displacement $[D_x,D_y,D_z] \!$ and the magnetic field $[H_x,H_y,H_z] \!$ are now unified into the (rank 2 antisymmetric contravariant) electromagnetic displacement tensor:
$\mathcal{D}^{\mu\nu} = \begin{pmatrix} 0 & D_x & D_y & D_z \\ -D_x & 0 & H_z & -H_y \\ -D_y & -H_z & 0 & H_x \\ -D_z & H_y & -H_x & 0 \end{pmatrix}$
Ampère's law, $\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}} {\partial t}$, and Gauss's law, $\nabla \cdot \mathbf{D} = \rho$, combine to form:
$\partial_\nu \mathcal{D}^{\mu \nu} = J^{\mu}. \!$
In a vacuum, the constitutive equations are:
$\mu_0 \mathcal{D}^{\mu\nu} = \eta^{\mu\alpha} \eta^{\nu\beta} F_{\alpha\beta}.$
Antisymmetry reduces these 16 equations to just six independent equations.
The energy density of the electromagnetic field combines with the Poynting vector and the Maxwell stress tensor to form the 4D electromagnetic stress-energy tensor. It is the flux (density) of the momentum 4-vector, and as a rank 2 mixed tensor it is:
$T_\alpha^\pi = F_{\alpha\beta} \mathcal{D}^{\pi\beta} - \frac{1}{4} \delta_\alpha^\pi F_{\mu\nu} \mathcal{D}^{\mu\nu}$
where $\delta_\alpha^\pi$ is the Kronecker delta. When the upper index is lowered with η, it becomes symmetric and is part of the source of the gravitational field.
The conservation of linear momentum and energy by the electromagnetic field is expressed by:
$f_\mu + \partial_\nu T_\mu^\nu = 0\!$
where $f_\mu \!$ is again the density of the Lorentz force. This equation can be deduced from the equations above (with considerable effort).
## Status
Main article: Status of special relativity
Special relativity is accurate only when the gravitational potential is much less than $c^2$; in a strong gravitational field one must use general relativity (which becomes special relativity in the limit of a weak field). At very small scales, such as at the Planck length and below, quantum effects must be taken into consideration, resulting in quantum gravity. However, at macroscopic scales and in the absence of strong gravitational fields, special relativity is experimentally tested to an extremely high degree of accuracy ($10^{-20}$)[18] and is thus accepted by the physics community. Experimental results which appear to contradict it are not reproducible and are thus widely believed to be due to experimental errors.
Because of the freedom one has to select how one defines units of length and time in physics, it is possible to make one of the two postulates of relativity a tautological consequence of the definitions, but one cannot do this for both postulates simultaneously, as when combined they have consequences which are independent of one's choice of definition of length and time.
Special relativity is mathematically self-consistent, and it is an organic part of all modern physical theories, most notably quantum field theory, string theory, and general relativity (in the limiting case of negligible gravitational fields).
Newtonian mechanics mathematically follows from special relativity at small velocities (compared to the speed of light); thus Newtonian mechanics can be considered as a special relativity of slow-moving bodies. See Status of special relativity for a more detailed discussion.
A few key experiments can be mentioned that led to special relativity:
• The Trouton–Noble experiment showed that the torque on a capacitor is independent of position and inertial reference frame – such experiments led to the first postulate
• The famous Michelson-Morley experiment gave further support to the postulate that detecting an absolute reference velocity was not achievable. It should be stated here that, contrary to many alternative claims, it said little about the invariance of the speed of light with respect to the source and observer's velocity, as both source and observer were travelling together at the same velocity at all times.
A number of experiments have been conducted to test special relativity against rival theories. These include:
• Kaufmann-Bucherer-Neumann experiments – electron deflection in approximate agreement with the Lorentz-Einstein prediction.
• Hammar experiment – no "ether flow obstruction"
• Kennedy–Thorndike experiment – time dilation in accordance with Lorentz transformations
• Rossi-Hall experiment – relativistic effects on a fast-moving particle's half-life
• Experiments to test emitter theory demonstrated that the speed of light is independent of the speed of the emitter.
In addition, particle accelerators routinely accelerate and measure the properties of particles moving at near the speed of light, where their behavior is completely consistent with relativity theory and inconsistent with the earlier Newtonian mechanics. These machines would simply not work if they were not engineered according to relativistic principles.
## References
1. ^ http://www.fourmilab.ch/etexts/einstein/specrel/www/ On the Electrodynamics of Moving Bodies, A. Einstein, Annalen der Physik, 17:891, June 30, 1905 (in English translation)
2. ^ Edwin F. Taylor and John Archibald Wheeler (1992). Spacetime Physics: Introduction to Special Relativity. W. H. Freeman. ISBN 0-7167-2327-1.
3. ^ Einstein, Autobiographical Notes, 1949.
4. ^ Einstein, On the Electrodynamics of Moving Bodies, 1905.
5. ^ Einstein, "Fundamental Ideas and Methods of the Theory of Relativity", 1920.
6. ^ For a survey of such derivations, see Lucas and Hodgson, Spacetime and Electromagnetism, 1990
7. ^ Einstein, On the Relativity Principle and the Conclusions Drawn from It, 1907; "The Principle of Relativity and Its Consequences in Modern Physics, 1910; "The Theory of Relativity", 1911; Manuscript on the Special Theory of Relativity, 1912; Theory of Relativity, 1913; Einstein, Relativity, the Special and General Theory, 1916; The Principle Ideas of the Theory of Relativity, 1916; What Is The Theory of Relativity?, 1919; The Principle of Relativity (Princeton Lectures), 1921; Physics and Reality, 1936; The Theory of Relativity, 1949.
8. ^ Rindler, Essential Relativity, 1977
9. ^ Einstein, The Principle of Conservation of Motion of the Center of Gravity and The Inertia of Energy, 1906; On the Inertia of Energy Required by the Relativity Principle, 1907; Elementary Derivation of the Equivalence of Mass and Energy, 1946.
10. ^ In a letter to Carl Seelig in 1955, Einstein wrote "I had already previously found that Maxwell's theory did not account for the micro-structure of radiation and could therefore have no general validity. ", Einstein letter to Carl Seelig, 1955.
11. ^ Einstein, Autobiographical Notes, 1949.
12. ^ Das, A., The Special Theory of Relativity, A Mathematical Exposition, Springer, 1993.
13. ^ Schutz, J. , Independent Axioms for Minkowski Spacetime, 1997.
14. ^ R. C. Tolman, The theory of the Relativity of Motion, (Berkeley 1917), p. 54
15. ^ G. A. Benford, D. L. Book, and W. A. Newcomb, The Tachyonic Antitelephone, Phys. Rev. D 2, 263 - 265 (1970) article
16. ^ Einstein on Newton 1927
17. ^ E. J. Post (1962). Formal Structure of Electromagnetics: General Covariance and Electromagnetics. Dover Publications Inc. ISBN 0-486-65427-3.
18. ^ The number of works is vast, see as example:
Sidney Coleman, Sheldon L. Glashow, Cosmic Ray and Neutrino Tests of Special Relativity, Phys. Lett. B405 (1997) 249-252, online
### Textbooks
• Einstein, Albert. "Relativity: The Special and the General Theory".
• Grøn, Øyvind; Hervik, Sigbjørn (2007). Einstein's General Theory of Relativity. New York: Springer. ISBN 978-0-387-69199-2.
• Silberstein, Ludwik (1914). The Theory of Relativity.
• Tipler, Paul; Llewellyn, Ralph (2002). Modern Physics (4th ed. ). W. H. Freeman Company. ISBN 0-7167-4345-0
• Schutz, Bernard F. A First Course in General Relativity, Cambridge University Press. ISBN 0-521-27703-5
• Taylor, Edwin, and Wheeler, John (1992). Spacetime Physics (2nd ed.). W. H. Freeman and Company. ISBN 0-7167-2327-1
• Einstein, Albert (1996). The Meaning of Relativity. Fine Communications. ISBN 1-56731-136-9
• Geroch, Robert (1981). General Relativity From A to B. University of Chicago Press. ISBN 0-226-28864-1
• Logunov, Anatoly A. (2005) Henri Poincaré and the Relativity Theory (transl. from Russian by G. Pontocorvo and V. O. Soleviev, edited by V. A. Petrov) Nauka, Moscow.
• Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1971). Gravitation. San Francisco: W. H. Freeman & Co. ISBN 0-7167-0334-3.
• Post, E. J. , Formal Structure of Electromagnetics: General Covariance and Electromagnetics, Dover Publications Inc. Mineola NY, 1962 reprinted 1997.
• Freund, Jürgen (2008). Special Relativity for Beginners - A Textbook for Undergraduates. World Scientific. ISBN-10 981-277-160-3
### Journal articles
• On the Electrodynamics of Moving Bodies, A. Einstein, Annalen der Physik, 17:891, June 30, 1905 (in English translation)
• Wolf, Peter and Gerard, Petit. "Satellite test of Special Relativity using the Global Positioning System", Physics Review A 56 (6), 4405-4409 (1997).
• Will, Clifford M. "Clock synchronization and isotropy of the one-way speed of light", Physics Review D 45, 403-411 (1992).
• Rizzi G. et al, "Synchronization Gauges and the Principles of Special Relativity", Found. Phys. 34 (2005) 1835-1887
• Alvager et al. , "Test of the Second Postulate of Special Relativity in the GeV region", Physics Letters 12, 260 (1964).
• Olivier Darrigol (2004) "The Mystery of the Poincaré-Einstein Connection", Isis 95 (4), 614 - 626.
## See also
People: Arthur Eddington | Albert Einstein | Hendrik Lorentz | Hermann Minkowski | Bernhard Riemann | Henri Poincaré | Alexander MacFarlane | Harry Bateman | Robert S. Shankland | Walter Ritz
Relativity: Theory of relativity | principle of relativity | general relativity | Fundamental Speed | frame of reference | inertial frame of reference | Lorentz transformations | Bondi k-calculus | Einstein synchronisation | Rietdijk-Putnam Argument
Physics: Newtonian Mechanics | spacetime | speed of light | simultaneity | physical cosmology | Doppler effect | relativistic Euler equations | Aether drag hypothesis | Lorentz ether theory | Moving magnet and conductor problem
Maths: Minkowski space | four-vector | world line | light cone | Lorentz group | Poincaré group | geometry | tensors | split-complex number
Philosophy: actualism | conventionalism | formalism
# In a non-degenerate plasma, why are e-e collisions negligible compared to e-ion for thermal conduction?
I'm trying to make some order of magnitude estimates of heat transfer in stars - to better understand 1) why conduction is said to be negligible (for non-degenerate matter) and 2) when convection happens and is dominant.
For thermal conduction, I have an expression for the flux of energy across a surface:
$f \approx - \rho \langle v \rangle \cdot \lambda \cdot a k_B \frac{dT}{dx}$
where $\lambda$ is the mean free path and $a$ is half the number of degrees of freedom; i.e., this is an expression for the thermal conductivity.
Anyway, everything I read (e.g. the section 'Electron Conductivity') says that the mean free path should be calculated for electron-ion collisions instead of electron-electron collisions. Can anyone explain this?
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8636213541030884, "perplexity_flag": "middle"} |
# Network science
Network science is an interdisciplinary academic field which studies complex networks such as telecommunication networks, computer networks, biological networks, cognitive and semantic networks, and social networks. The field draws on theories and methods including graph theory from mathematics, statistical mechanics from physics, data mining and information visualization from computer science, inferential modeling from statistics, and social structure from sociology. The United States National Research Council defines network science as "the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena."[1]
## Background and history
The study of networks has emerged in diverse disciplines as a means of analyzing complex relational data. The earliest known paper in this field is the famous Seven Bridges of Königsberg written by Leonhard Euler in 1736. Euler's mathematical description of vertices and edges was the foundation of graph theory, a branch of mathematics that studies the properties of pairwise relations in a network structure. The field of graph theory continued to develop and found applications in chemistry (Sylvester, 1878).
In the 1930s Jacob Moreno, a psychologist in the Gestalt tradition, arrived in the United States. He developed the sociogram and presented it to the public in April 1933 at a convention of medical scholars. Moreno claimed that "before the advent of sociometry no one knew what the interpersonal structure of a group 'precisely' looked like" (Moreno, 1953). The sociogram was a representation of the social structure of a group of elementary school students. The boys were friends of boys and the girls were friends of girls with the exception of one boy who said he liked a single girl. The feeling was not reciprocated. This network representation of social structure was found so intriguing that it was printed in The New York Times (April 3, 1933, page 17). The sociogram has found many applications and has grown into the field of social network analysis.
Probabilistic theory in network science developed as an off-shoot of graph theory with Paul Erdős and Alfréd Rényi's eight famous papers on random graphs. For social networks the exponential random graph model or p* is a notational framework used to represent the probability space of a tie occurring in a social network. An alternate approach to network probability structures is the network probability matrix, which models the probability of edges occurring in a network, based on the historic presence or absence of the edge in a sample of networks.
In 1998, David Krackhardt and Kathleen Carley introduced the idea of a meta-network with the PCANS Model. They suggest that "all organizations are structured along these three domains, Individuals, Tasks, and Resources". Their paper introduced the concept that networks occur across multiple domains and that they are interrelated. This field has grown into another sub-discipline of network science called dynamic network analysis.
More recently other network science efforts have focused on mathematically describing different network topologies. Duncan Watts reconciled empirical data on networks with mathematical representation, describing the small-world network. Albert-László Barabási and Réka Albert developed the scale-free network, a loosely defined network topology that contains hub vertices with many connections and whose degree distribution follows a power law. Although many networks, such as the internet, appear to maintain this aspect, other networks have long-tailed distributions of node degrees that only approximate scale-free ratios.
## Department of Defense Initiatives
The U.S. military first became interested in network-centric warfare as an operational concept based on network science in 1996. John A. Parmentola, the U.S. Army Director for Research and Laboratory Management, proposed to the Army’s Board on Science and Technology (BAST) on December 1, 2003 that Network Science become a new Army research area. The BAST, the Division on Engineering and Physical Sciences for the National Research Council (NRC) of the National Academies, serves as a convening authority for the discussion of science and technology issues of importance to the Army and oversees independent Army-related studies conducted by the National Academies. The BAST conducted a study to find out whether identifying and funding a new field of investigation in basic research, Network Science, could help close the gap between what is needed to realize Network-Centric Operations and the current primitive state of fundamental knowledge of networks.
As a result, the BAST issued the NRC study in 2005 titled Network Science (referenced above) that defined a new field of basic research in Network Science for the Army. Based on the findings and recommendations of that study and the subsequent 2007 NRC report titled Strategy for an Army Center for Network Science, Technology, and Experimentation, Army basic research resources were redirected to initiate a new basic research program in Network Science. To build a new theoretical foundation for complex networks, some of the key Network Science research efforts now ongoing in Army laboratories address:
• Mathematical models of network behavior to predict performance with network size, complexity, and environment
• Optimized human performance required for network-enabled warfare
• Networking within ecosystems and at the molecular level in cells.
As initiated in 2004 by Frederick I. Moxley with support he solicited from David S. Alberts, the Department of Defense helped to establish the first Network Science Center in conjunction with the U.S. Army at the United States Military Academy (USMA). Under the tutelage of Dr. Moxley and the faculty of the USMA, the first interdisciplinary undergraduate courses in Network Science were taught to cadets at West Point. Subsequently, the U.S. Department of Defense has funded numerous research projects in the area of Network Science.
In 2006, the U.S. Army and the United Kingdom (UK) formed the Network and Information Science International Technology Alliance, a collaborative partnership among the Army Research Laboratory, UK Ministry of Defense and a consortium of industries and universities in the U.S. and UK. The goal of the alliance is to perform basic research in support of Network- Centric Operations across the needs of both nations.
In 2009, the U.S. Army formed the Network Science CTA, a collaborative research alliance among the Army Research Laboratory, CERDEC, and a consortium of about 30 industrial R&D labs and universities in the U.S. The goal of the alliance is to develop a deep understanding of the underlying commonalities among intertwined social/cognitive, information, and communications networks, and as a result improve our ability to analyze, predict, design, and influence complex systems interweaving many kinds of networks.
Today, network science is an exciting and growing interdisciplinary field. Scientists from many diverse fields are working together. Network science holds the promise of increasing collaboration across disciplines, by sharing data, algorithms, and software tools.
## Network properties
Networks often have attributes that can be calculated to analyze their properties and characteristics. These network properties often define network models and can be used to analyze how certain models contrast with each other. Many of the definitions for other terms used in network science can be found in Glossary of graph theory.
### Density
The density $D$ of a network is defined as a ratio of the number of edges $E$ to the number of possible edges, given by the binomial coefficient $\tbinom N2$, giving $D = \frac{2E}{N(N-1)}.$
### Size
The size of a network can refer to the number of nodes $N$ or, less commonly, the number of edges $E$, which can range from $N-1$ (a tree) to $E_{max} = \tbinom N2$ (a complete graph).
### Average degree
The degree $k$ of a node is the number of edges connected to it. Closely related to the density of a network is the average degree, $\langle k \rangle = \tfrac{2E}{N}$. In the ER random graph model, $\langle k \rangle = p(N-1)$, where $p$ is the probability of two nodes being connected.
### Average path length
Average path length is calculated by finding the shortest path between all pairs of nodes, adding them up, and then dividing by the total number of pairs. This shows us, on average, the number of steps it takes to get from one member of the network to another.
### Diameter of a network
As another means of measuring network graphs, we can define the diameter of a network as the longest of all the calculated shortest paths in a network. In other words, once the shortest path length from every node to all other nodes is calculated, the diameter is the longest of all the calculated path lengths. The diameter is representative of the linear size of a network.
### Clustering coefficient
The clustering coefficient is a measure of an "all-my-friends-know-each-other" property. This is sometimes described as the friends of my friends are my friends. More precisely, the clustering coefficient of a node is the ratio of existing links connecting a node's neighbors to each other to the maximum possible number of such links. The clustering coefficient for the entire network is the average of the clustering coefficients of all the nodes. A high clustering coefficient for a network is another indication of a small world.
The clustering coefficient of the $i$'th node is
$C_i = {2e_i\over k_i{(k_i - 1)}}\,,$
where $k_i$ is the number of neighbours of the $i$'th node, and $e_i$ is the number of connections between these neighbours. The maximum possible number of connections between neighbours is, of course,
$\binom {k_i}{2} = \frac{k_i(k_i - 1)}{2}\,.$
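A sketch of how these properties can be computed in practice, using the networkx Python library on a small arbitrary example graph:

```python
import networkx as nx

# A small illustrative graph; any edge list would do.
G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)])

N, E = G.number_of_nodes(), G.number_of_edges()
print("density        ", nx.density(G))                   # 2E / (N(N-1))
print("average degree ", 2 * E / N)
print("avg path length", nx.average_shortest_path_length(G))
print("diameter       ", nx.diameter(G))
print("clustering     ", nx.clustering(G))                 # per node
print("avg clustering ", nx.average_clustering(G))
```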
### Connectedness
The way in which a network is connected plays a large part into how networks are analyzed and interpreted. Networks are classified in four different categories:
• Clique/Complete Graph: a completely connected network, where all nodes are connected to every other node. These networks are symmetric in that all nodes have in-links and out-links from all others.
• Giant Component: A single connected component which contains most of the nodes in the network.
• Weakly Connected Component: A collection of nodes in which there exists a path from any node to any other, ignoring directionality of the edges.
• Strongly Connected Component: A collection of nodes in which there exists a directed path from any node to any other.
### Node centrality
Node centrality can be viewed as a measure of influence or importance in a network model. There exist three main measures of centrality that are studied in network science:
• Closeness: represents the average distance that each node is from all other nodes in the network
• Betweenness: represents the number of shortest paths in a network that pass through that node
• Degree/Strength: represents the number of links that a particular node possesses in a network. In a directed network, one must differentiate between in-links and out-links by calculating in-degree and out-degree. The analogue of degree in a weighted network, strength is the sum of a node's edge weights. In-strength and out-strength are analogously defined for directed networks.
## Network models
Network models serve as a foundation to understanding interactions within empirical complex networks. Various random graph generation models produce network structures that may be used in comparison to real-world complex networks.
### Erdős–Rényi Random Graph model
This Erdős–Rényi model is generated with N=4 nodes. For each edge in the complete graph formed by all N nodes, a random number is generated and compared to a given probability p. If the random number is less than p, an edge is formed in the model.
The Erdős–Rényi model, named for Paul Erdős and Alfréd Rényi, is used for generating random graphs in which edges are set between nodes with equal probabilities. It can be used in the probabilistic method to prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs.
To generate an Erdős–Rényi model, two parameters must be specified: the number of nodes in the generated graph, $N$, and the probability that a link should be formed between any two nodes, $p$. A constant $\langle k \rangle$ may be derived from these two components with the formula $\langle k \rangle = 2E/N = p(N-1)$.
The Erdős–Rényi model has several interesting characteristics in comparison to other graphs. Because the model is generated without bias towards particular nodes, the degree distribution is binomial: $P(\operatorname{deg}(v) = k) = {n-1\choose k}p^k(1-p)^{n-1-k}$. Also as a result of this characteristic, the clustering coefficient tends to 0. The model tends to form a giant component in situations where $\langle k \rangle > 1$, in a process called percolation. The average path length is relatively short in this model and scales as $\log(N)$.
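A quick empirical check of these claims with networkx (the values of N and p are illustrative):

```python
import networkx as nx

N, p = 2000, 0.004
G = nx.erdos_renyi_graph(N, p, seed=42)

k_avg = 2 * G.number_of_edges() / N
print(k_avg, p * (N - 1))              # empirical vs expected <k>
print(nx.average_clustering(G))        # ~ p, i.e. near 0 for sparse graphs

# With <k> ~ 8 > 1, a giant component should have emerged:
giant = max(nx.connected_components(G), key=len)
print(len(giant) / N)                  # close to 1
```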
### Watts-Strogatz Small World model
The Watts and Strogatz model uses the concept of rewiring to achieve its structure. The model generator iterates through each edge in the original lattice structure. An edge may change its connected vertices according to a given rewiring probability. $\langle k \rangle = 4$ in this example.
The Watts and Strogatz model is a random graph generation model that produces graphs with small-world properties.
An initial lattice structure is used to generate a Watts-Strogatz model. Each node in the network is initially linked to its $\langle k \rangle$ closest neighbors. Another parameter is specified as the rewiring probability. Each edge has a probability $p$ that it will be rewired to the graph as a random edge. The expected number of rewired links in the model is $pE = pN\langle k \rangle/2$.
As the Watts-Strogatz model begins as a non-random lattice structure, it has a very high clustering coefficient along with a high average path length. Each rewire is likely to create a shortcut between highly connected clusters. As the rewiring probability increases, the clustering coefficient decreases more slowly than the average path length. In effect, this allows the average path length of the network to decrease significantly with only slight decreases in the clustering coefficient. Higher values of p force more rewired edges, which in effect makes the Watts-Strogatz model a random network.
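The following sketch (networkx; sizes and probabilities are illustrative) shows this effect: as p grows, the average path length collapses long before the clustering coefficient does.

```python
import networkx as nx

N, k = 1000, 6
for p in [0.0, 0.01, 0.1, 1.0]:
    G = nx.watts_strogatz_graph(N, k, p, seed=1)
    print(p,
          round(nx.average_clustering(G), 3),
          round(nx.average_shortest_path_length(G), 1))
```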
### Barabási–Albert (BA) Preferential Attachment model
The Barabási–Albert model is a random network model used to demonstrate a preferential attachment or a "rich-get-richer" effect. In this model, an edge is most likely to attach to nodes with higher degrees. The network begins with an initial network of m0 nodes. m0 ≥ 2 and the degree of each node in the initial network should be at least 1, otherwise it will always remain disconnected from the rest of the network.
In the BA model, new nodes are added to the network one at a time. Each new node is connected to $m$ existing nodes with a probability that is proportional to the number of links that the existing nodes already have. Formally, the probability pi that the new node is connected to node i is[2]
$p_i = \frac{k_i}{\sum_j k_j},$
where ki is the degree of node i. Heavily linked nodes ("hubs") tend to quickly accumulate even more links, while nodes with only a few links are unlikely to be chosen as the destination for a new link. The new nodes have a "preference" to attach themselves to the already heavily linked nodes.
The degree distribution of the BA model follows a power law. In log-log scale the power-law function is a straight line.[3]
The degree distribution resulting from the BA model is scale free, in particular, it is a power law of the form:
$P\left(k\right)\sim k^{-3} \,$
Hubs exhibit high betweenness centrality, which allows short paths to exist between nodes. As a result, the BA model tends to have very short average path lengths. The clustering coefficient of this model also tends to 0. While the diameter, D, of many models, including the Erdős–Rényi random graph model and several small-world networks, is proportional to log N, the BA model exhibits D ~ log log N (ultra-small world).[4] Note that the average path length scales with N in the same way as the diameter.
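A rough empirical look at the power-law tail, using the BA generator in networkx (parameters are illustrative): doubling k should cut $P(k)$ by roughly a factor of $2^3 = 8$.

```python
import collections
import networkx as nx

G = nx.barabasi_albert_graph(20_000, m=3, seed=7)
N = G.number_of_nodes()

counts = collections.Counter(d for _, d in G.degree())
for k in [3, 6, 12, 24, 48]:
    print(k, counts.get(k, 0) / N)     # empirical P(k), falling ~ k^-3
```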
## Network analysis
### Social network analysis
Social network analysis examines the structure of relationships between social entities.[5] These entities are often persons, but may also be groups, organizations, nation states, web sites, scholarly publications.
Since the 1970s, the empirical study of networks has played a central role in social science, and many of the mathematical and statistical tools used for studying networks have been first developed in sociology.[6] Amongst many other applications, social network analysis has been used to understand the diffusion of innovations, news and rumors. Similarly, it has been used to examine the spread of both diseases and health-related behaviors. It has also been applied to the study of markets, where it has been used to examine the role of trust in exchange relationships and of social mechanisms in setting prices. Similarly, it has been used to study recruitment into political movements and social organizations. It has also been used to conceptualize scientific disagreements as well as academic prestige. More recently, network analysis (and its close cousin traffic analysis) has gained significant use in military intelligence, for uncovering insurgent networks of both hierarchical and leaderless nature.[7][8]
### Biological network analysis
With the recent explosion of publicly available high-throughput biological data, the analysis of molecular networks has gained significant interest. The types of analysis in this context are closely related to social network analysis, but often focus on local patterns in the network. For example, network motifs are small subgraphs that are over-represented in the network. Activity motifs are similar over-represented patterns in the attributes of nodes and edges, given the network structure.
### Link analysis
Link analysis is a subset of network analysis, exploring associations between objects. An example may be examining the addresses of suspects and victims, the telephone numbers they have dialed and financial transactions that they have partaken in during a given timeframe, and the familial relationships between these subjects as a part of police investigation. Link analysis here provides the crucial relationships and associations between very many objects of different types that are not apparent from isolated pieces of information. Computer-assisted or fully automatic computer-based link analysis is increasingly employed by banks and insurance agencies in fraud detection, by telecommunication operators in telecommunication network analysis, by medical sector in epidemiology and pharmacology, in law enforcement investigations, by search engines for relevance rating (and conversely by the spammers for spamdexing and by business owners for search engine optimization), and everywhere else where relationships between many objects have to be analyzed.
#### Network robustness
The structural robustness of networks[9] is studied using percolation theory. When a critical fraction of nodes is removed the network becomes fragmented into small clusters. This phenomenon is called percolation[10] and it represents an order-disorder type of phase transition with critical exponents.
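A minimal percolation experiment along these lines (networkx; graph size, average degree and removal fractions are illustrative): random node removal leaves the giant component largely intact until the removed fraction approaches the critical value.

```python
import random
import networkx as nx

random.seed(0)
N = 2000
G0 = nx.erdos_renyi_graph(N, 0.003, seed=0)    # <k> ~ 6

for f in [0.0, 0.4, 0.7, 0.9]:                 # fraction of nodes removed
    G = G0.copy()
    G.remove_nodes_from(random.sample(list(G0.nodes), int(f * N)))
    giant = max(nx.connected_components(G), key=len)
    print(f, round(len(giant) / N, 3))         # giant-component fraction
```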
#### Pandemic Analysis
The SIR model is one of the best-known models for predicting the spread of global pandemics within an infectious population.
##### Susceptible to Infected
$\lambda = \beta {I\over N}$
The formula above describes the "force" of infection for each susceptible unit in an infectious population, where β is the transmission rate of the disease and $I/N$ is the fraction of the population that is currently infectious.
To track the change of those susceptible in an infectious population:
$\Delta S = -\beta S {I\over N} \Delta t$
##### Infected to Recovered
$\Delta R = \mu I\,\Delta t$
Over time, the number of those infected falls, and the number recovered grows, at the specified rate of recovery, represented by $\mu$ and equal to one over the average infectious period ($\mu = {1\over \tau}$), times the number of infectious individuals, $I$, and the change in time, $\Delta t$.
##### Infectious Period
Whether a population will be overcome by a pandemic, with regards to the SIR model, is dependent on the value of $R_0$, the average number of people infected by one infected individual:
$R_0 = \beta\tau = {\beta\over\mu}$
#### Web Link Analysis
Several Web search ranking algorithms use link-based centrality metrics, including (in order of appearance) Marchiori's Hyper Search, Google's PageRank, Kleinberg's HITS algorithm, the CheiRank and TrustRank algorithms. Link analysis is also conducted in information science and communication science in order to understand and extract information from the structure of collections of web pages. For example the analysis might be of the interlinking between politicians' web sites or blogs.
##### PageRank
PageRank works by randomly picking "nodes" or websites and then, with a certain probability, "randomly jumping" to other nodes. By randomly jumping to these other nodes, PageRank can completely traverse the network, as some webpages exist on the periphery and would not otherwise be as readily assessed.
Each node, $x_i$, has a PageRank defined as the sum, over the pages $j$ that link to $i$, of one over the number of outlinks (the "out-degree") of $j$, times the "importance" or PageRank of $j$.
$x_i^{(k+1)} = \sum_{j\rightarrow i}{1\over N_j}x_j^{(k)}$
###### Random Jumping
As explained above, PageRank enlists random jumps in attempts to assign PageRank to every website on the internet. These random jumps find websites that might not be found during the normal search methodologies such as Breadth-First Search and Depth-First Search.
The formula above is improved by adding these random-jump components. Without the random jumps, any page with no incoming links would receive a PageRank of 0.
The first component is $\alpha$, the probability that a random jump will occur. The contrasting quantity, $1 - \alpha$, is the "damping factor".
$R{(p)} = {\alpha\over N} + (1 - \alpha) \sum_{j\rightarrow i}{1\over N_j}x_j^{(k)}$
Another way of looking at it:
$R(A) = {R(B)\over B_{(outlinks)}} + \cdots + {R(n) \over n_{(outlinks)}}$
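A sketch of this iteration in plain Python/NumPy, with the random-jump term included (the 4-page link structure is an arbitrary illustration; treating dangling pages as jumping uniformly is a common convention, not something stated above):

```python
import numpy as np

def pagerank(adj, alpha=0.15, tol=1e-10):
    """Power iteration with random-jump probability alpha."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    M = np.zeros((n, n))                 # M[i, j] = P(step from page j to page i)
    for j in range(n):
        if out_deg[j] > 0:
            M[:, j] = adj[j, :] / out_deg[j]
        else:
            M[:, j] = 1.0 / n            # dangling page: jump anywhere
    x = np.full(n, 1.0 / n)
    while True:
        x_new = alpha / n + (1.0 - alpha) * (M @ x)
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new

# Tiny 4-page web: 0->1, 0->2, 1->2, 2->0, 3->2
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A))                       # page 2 ranks highest
```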
### Centrality measures
Information about the relative importance of nodes and edges in a graph can be obtained through centrality measures, widely used in disciplines like sociology. Centrality measures are essential when a network analysis has to answer questions such as: "Which nodes in the network should be targeted to ensure that a message or information spreads to all or most nodes in the network?" or conversely, "Which nodes should be targeted to curtail the spread of a disease?". Formally established measures of centrality are degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, and Katz centrality. The objective of network analysis generally determines the type of centrality measure(s) to be used; a short computational sketch follows the list below.
• Degree centrality of a node in a network is the number of links (vertices) incident on the node.
• Closeness centrality determines how “close” a node is to other nodes in a network by measuring the sum of the shortest distances (geodesic paths) between that node and all other nodes in the network.
• Betweenness centrality determines the relative importance of a node by measuring the amount of traffic flowing through that node to other nodes in the network. This is done by measuring the fraction of shortest paths connecting all pairs of nodes that contain the node of interest.
• Eigenvector centrality is a more sophisticated version of degree centrality where the centrality of a node not only depends on the number of links incident on the node but also the quality of those links. This quality factor is determined by the eigenvectors of the adjacency matrix of the network.
• Katz centrality of a node is measured by summing the geodesic paths between that node and all (reachable) nodes in the network. These paths are weighted, paths connecting the node with its immediate neighbors carry higher weights than those which connect with nodes farther away from the immediate neighbors.
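A sketch computing all five measures with networkx, on Zachary's karate club graph (a standard small social network bundled with the library; the Katz attenuation factor alpha=0.05 is an illustrative choice):

```python
import networkx as nx

G = nx.karate_club_graph()

measures = {
    "degree":      nx.degree_centrality(G),
    "closeness":   nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G),
    "katz":        nx.katz_centrality(G, alpha=0.05),
}
for name, scores in measures.items():
    top = max(scores, key=scores.get)
    print(f"{name:12s} top node: {top} ({scores[top]:.3f})")
```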
## Spread of content in networks
Content in a complex network can spread via two major methods: conserved spread and non-conserved spread.[11] In conserved spread, the total amount of content that enters a complex network remains constant as it passes through. The model of conserved spread can best be represented by a pitcher containing a fixed amount of water being poured into a series of funnels connected by tubes. Here, the pitcher represents the original source and the water is the content being spread. The funnels and connecting tubing represent the nodes and the connections between nodes, respectively. As the water passes from one funnel into another, the water disappears instantly from the funnel that was previously exposed to the water. In non-conserved spread, the amount of content changes as it enters and passes through a complex network. The model of non-conserved spread can best be represented by a continuously running faucet running through a series of funnels connected by tubes. Here, the amount of water from the original source is infinite. Also, any funnels that have been exposed to the water continue to experience the water even as it passes into successive funnels. The non-conserved model is the most suitable for explaining the transmission of most infectious diseases.
### The SIR Model
In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, $S(t)$; infected, $I(t)$; and recovered, $R(t)$. The three compartments are defined as follows:
• $S(t)$ is used to represent the number of individuals not yet infected with the disease at time t, or those susceptible to the disease
• $I(t)$ denotes the number of individuals who have been infected with the disease and are capable of spreading the disease to those in the susceptible category
• $R(t)$ is the compartment used for those individuals who have been infected and then recovered from the disease. Those in this category are not able to be infected again or to transmit the infection to others.
The flow of this model may be considered as follows:
$\color{blue}\mathcal{S} \rightarrow \mathcal{I} \rightarrow \mathcal{R}$
Using a fixed population, $N = S(t) + I(t) + R(t)$, Kermack and McKendrick derived the following equations:
$\frac{dS}{dt} = - \beta S I$
$\frac{dI}{dt} = \beta S I - \gamma I$
$\frac{dR}{dt} = \gamma I$
Several assumptions were made in formulating these equations. First, each individual in the population is assumed to have the same probability as every other individual of contracting the disease, with rate $\beta$, the contact or infection rate of the disease. Therefore, an infected individual makes contact and is able to transmit the disease to $\beta N$ others per unit time, and the fraction of contacts by an infected with a susceptible is $S/N$. The number of new infections in unit time per infective then is $\beta N (S/N)$, giving the rate of new infections (or those leaving the susceptible category) as $\beta N (S/N)I = \beta SI$ (Brauer & Castillo-Chavez, 2001). For the second and third equations, the population leaving the susceptible class equals the number entering the infected class; however, a fraction $\gamma$ of infectives (where $\gamma$ is the mean recovery rate, so that $1/\gamma$ is the mean infective period) leaves the infected class per unit time to enter the removed class. These simultaneous processes follow the Law of Mass Action, the widely accepted idea that the rate of contact between two groups in a population is proportional to the size of each of the groups concerned (Daley & Gani, 2005). Finally, it is assumed that the rate of infection and recovery is much faster than the time scale of births and deaths, and therefore these factors are ignored in this model.
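A forward-Euler integration of these equations takes only a few lines. The sketch below uses made-up parameter values, with $\beta$ scaled by the population size $N$ so that $dS/dt = -\beta S I$ stays well-behaved; it is an illustration, not part of the Kermack-McKendrick paper.

```python
def sir(beta, gamma, S, I, R, dt=0.1, steps=1600):
    """Euler steps for dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    for _ in range(steps):
        new_inf = beta * S * I            # rate of new infections, beta*S*I
        S, I, R = S - new_inf * dt, I + (new_inf - gamma * I) * dt, R + gamma * I * dt
    return S, I, R

N = 1000.0
print(sir(beta=0.3 / N, gamma=0.1, S=N - 1, I=1.0, R=0.0))
# with these numbers most of the population ends up in the recovered class
```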
More can be read on this model on the Epidemic model page.
## Interdependent networks
Main article: Interdependent networks
An interdependent network is a system of coupled networks where nodes of one or more networks depend on nodes in other networks. Such dependencies are enhanced by developments in modern technology. Dependencies may lead to cascading failures between the networks, and a relatively small failure can lead to a catastrophic breakdown of the system. Blackouts are a striking demonstration of the important role played by the dependencies between networks. A recent study developed a framework to study the cascading failures in a system of interdependent networks.[12][13]
## Network optimization
Network problems that involve finding an optimal way of doing something are studied under the name of combinatorial optimization. Examples include network flow, shortest path problem, transport problem, transshipment problem, location problem, matching problem, assignment problem, packing problem, routing problem, Critical Path Analysis and PERT (Program Evaluation & Review Technique).
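As a concrete taste of one of these problems, the sketch below solves a tiny shortest-path instance with NetworkX (the graph and its edge weights are invented for the example):

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2), ("b", "c", 1),
                           ("a", "c", 5), ("c", "d", 3)])
print(nx.shortest_path(G, "a", "d", weight="weight"))          # ['a', 'b', 'c', 'd']
print(nx.shortest_path_length(G, "a", "d", weight="weight"))   # 6
```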
## Network analysis and visualization tools
• Graph-tool and NetworkX, free and efficient Python modules for manipulation and statistical analysis of networks.
• igraph, an open source C library for the analysis of large-scale complex networks, with interfaces to R, Python and Ruby.
• Orange, a free data mining software suite, module orngNetwork
• Pajek, program for (large) network analysis and visualization.
• Tulip, a free data mining and visualization software dedicated to the analysis and visualization of relational data.
## References
• "Network Science Center," http://www.dodccrp.org/files/Network_Science_Center.asf
• "Connected: The Power of Six Degrees," http://ivl.slis.indiana.edu/km/movies/2008-talas-connected.mov
• R. Cohen, K. Erez, D. ben-Avraham, S. Havlin, "Resilience of the Internet to random breakdown" Phys. Rev. Lett. 85, 4626 (2000).
• R. Cohen, K. Erez, D. ben-Avraham, S. Havlin, "Breakdown of the Internet under intentional attack" Phys. Rev. Lett. 86, 3682 (2001)
• R. Cohen, S. Havlin, "Scale-free networks are ultrasmall" Phys. Rev. Lett. 90, 058701 (2003)
## Further reading
• "The Burgeoning Field of Network Science," http://themilitaryengineer.com/index.php?option=com_content&task=view&id=88
• S.N. Dorogovtsev and J.F.F. Mendes, Evolution of Networks: From biological networks to the Internet and WWW, Oxford University Press, 2003, ISBN 0-19-851590-1
• Linked: The New Science of Networks, A.-L. Barabási (Perseus Publishing, Cambridge, 2002)
• Network Science, Committee on Network Science for Future Army Applications, National Research Council, The National Academies Press (2005) ISBN 0-309-10026-7
• Network Science Bulletin, USMA (2007) ISBN 978-1-934808-00-9
• The Structure and Dynamics of Networks, Mark Newman, Albert-László Barabási, & Duncan J. Watts (Princeton University Press, 2006) ISBN 0-691-11357-2
• Dynamical processes on complex networks, Alain Barrat, Marc Barthelemy, Alessandro Vespignani (Cambridge University Press, 2008) ISBN 978-0-521-87950-7
• Network Science: Theory and Applications, Ted G. Lewis (Wiley, March 11, 2009) ISBN 0-470-33188-7
• Nexus: Small Worlds and the Groundbreaking Theory of Networks, Mark Buchanan (W. W. Norton & Company, June 2003) ISBN 0-393-32442-7
• Six Degrees: The Science of a Connected Age, Duncan J. Watts (W. W. Norton & Company, February 17, 2004) ISBN 0-393-32542-3
• netwiki Scientific wiki dedicated to network theory
• New Network Theory International Conference on 'New Network Theory'
• Network Workbench: A Large-Scale Network Analysis, Modeling and Visualization Toolkit
• Network analysis of computer networks
• Network analysis of organizational networks
• Network analysis of terrorist networks
• Network analysis of a disease outbreak
• Link Analysis: An Information Science Approach (book)
• Connected: The Power of Six Degrees (documentary)
• Influential Spreaders in Networks, M. Kitsak, L. K. Gallos, S. Havlin, F. Liljeros, L. Muchnik, H. E. Stanley, H.A. Makse, Nature Physics 6, 888 (2010)
• A short course on complex networks
• A course on complex network analysis by Albert-László Barabási | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 68, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9150864481925964, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/4169/solving-the-functional-equation-ffx-fxx | # Solving the functional Equation $f(f(x))=f(x)+x$
Let $f$ be continuous on $\mathbb{R}$. How does one find all continuous functions satisfying $f(f(x))=f(x)+x$?
-
The polynomial f(x)=((1+√5)/2)x is one solution. – yjj Sep 7 '10 at 0:28
Did you create this problem yourself? – ShreevatsaR Sep 7 '10 at 1:31
An [unsourced] tag would be useful. – T.. Sep 7 '10 at 2:01
@ShreevatsaR: Yes, I posted a similar problem where one is asked to find all functions such that $f(x^k)=f^{k}(x)$; that was the motivation for this problem – anonymous Sep 7 '10 at 4:29
## 2 Answers
This one is a problem from a journal or from competitions at the level of the Putnam contest (see reference below).
Hint: $g(x) = x + Af(x)$ satisfies $g(f^n(x))=A^ng(x)$ when $A^2 = A + 1$; consider the cases $n \to \pm \infty$.
Source for a similar problem, with solution: http://books.google.com/books?id=-CNbGp2ZFXUC&pg=PA21
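(Added aside, not part of the original answer: the hint is easy to sanity-check numerically. With $A$ the golden ratio, $f(x)=Ax$ satisfies both the functional equation and $g(f(x))=Ag(x)$. The snippet below is my own illustration.)

```python
A = (1 + 5 ** 0.5) / 2          # A satisfies A**2 = A + 1
f = lambda x: A * x
g = lambda x: x + A * f(x)

for x in (1.0, -2.5, 3.7):
    assert abs(f(f(x)) - (f(x) + x)) < 1e-9   # f(f(x)) = f(x) + x
    assert abs(g(f(x)) - A * g(x)) < 1e-9     # g(f(x)) = A * g(x)
print("checks pass")
```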
-
I like the book you mentioned. – Jack Sep 2 '11 at 23:00
In fact this belongs to the class of functional equations treated at http://eqworld.ipmnet.ru/en/solutions/fe/fe1220.pdf.
Let $\begin{cases}x=u(t)\\f=u(t+1)\end{cases}$ ,
Then $u(t+2)=u(t+1)+u(t)$
$u(t+2)-u(t+1)-u(t)=0$
$u(t)=C_1(t)\left(\dfrac{1+\sqrt{5}}{2}\right)^t+C_2(t)\left(\dfrac{1-\sqrt{5}}{2}\right)^t$ , where $C_1(t)$ and $C_2(t)$ are arbitrary periodic functions with unit period
$\therefore\begin{cases}x=C_1(t)\left(\dfrac{1+\sqrt{5}}{2}\right)^t+C_2(t)\left(\dfrac{1-\sqrt{5}}{2}\right)^t\\f=C_1(t)\left(\dfrac{1+\sqrt{5}}{2}\right)^{t+1}+C_2(t)\left(\dfrac{1-\sqrt{5}}{2}\right)^{t+1}\end{cases}$ , where $C_1(t)$ and $C_2(t)$ are arbitrary periodic functions with unit period
-
So, (assuming you have the details right,) any solution to the original problem yields a solution to the problem you solved. There's more work to be done to solve the original problem, though! – Hurkyl Sep 30 '12 at 21:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8574168682098389, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/162377/evaluating-int-frac-sqrt1x2x3dx | # Evaluating $\int \frac{\sqrt{1+x^2}}{x^3}dx$
Evaluating $\int \frac{\sqrt{1+x^2}}{x^3} dx$. Any suggestions? I thought of the substitution $x=\sinh t$, but that way I would have to evaluate the integral of $\frac{\cosh^2 t}{\sinh^3 t}$, which is harder for me.
-
$\def\csch{\mathop{\mathrm{csch}}}\cosh^2 t / \sinh^3 t = \coth^2 t \csch t = \csch t + \csch^3 t$ – Hurkyl Jun 25 '12 at 0:13
## 3 Answers
If you look at the form of the integrand, you can see it is
$$\sqrt{1+x^2}$$
Which is similar to our pythagorean identity
$$\sin^2(x) + \cos^2(x) = 1$$
If you fiddle around with the equation, you can obtain
$$\tan^2(x) + 1 = \sec^2(x)$$
$$\tan^2(x) + 1$$
looks just like
$$\sqrt{1+x^2}$$
So, we can make the substitution
$$x = \tan(u), \qquad dx = \sec^2(u)\,du$$
$$\sqrt{1 + x^2} = \sqrt{1 + \tan^2(u)} = \sqrt{\sec^2(u)} = \sec(u)$$
Substituting in for your original problem, we get
$$\int \frac{\sec(u)}{\tan^3(u)}\,\sec^2(u)\, du$$
From here, I would simplify by writing in terms of sine and cosine and then solve. Remember that once you arrive at your answer, it will be in terms of u, in this case. You must draw a triangle with angle u, and since you have x = tan(u), you must derive from your triangle and substitute everything back in terms of x, your original variable.
-
Put $t= \tan\theta$ , then what you get is $$\int \frac{\tan\theta}{\tan^{3}\theta} \cdot \frac{1}{\cos^{2}\theta} \ d\theta = \int \csc^{2}\theta \ d \theta$$
-
Just a nitpick, maybe you mean $x=\tan \theta$. Likewise $\sqrt{1+\tan^2 \theta}=\sec^2\theta\ne\tan\theta$ – E.O. Jun 24 '12 at 13:19
Rewrite the integrand as follows: $$I=\int \frac{\sqrt{1+x^2}}{x^3}dx=\int\left(\sqrt{1+\frac{1}{x^2}}\right)\frac{dx}{x^2}=-\int\left(\sqrt{1+\frac{1}{x^2}}\right)d\left(\frac{1}{x}\right)$$ Now let $t=\frac{1}{x}$ $$I=-\int\sqrt{1+t^2}dt$$ The latter is a table integral (or you can substitute $t=\sinh s$ now)
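(Added check, not from the original answer: SymPy confirms the antiderivative by differentiating it back to the integrand.)

```python
import sympy as sp

x = sp.symbols('x', positive=True)
F = sp.integrate(sp.sqrt(1 + x**2) / x**3, x)   # antiderivative of the integrand
print(sp.simplify(sp.diff(F, x) - sp.sqrt(1 + x**2) / x**3))  # should print 0
```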
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9369014501571655, "perplexity_flag": "middle"} |
http://programmingpraxis.com/2012/01/13/excels-xirr-function/?like=1&source=post_flair&_wpnonce=3b93935130 | # Programming Praxis
A collection of etudes, updated weekly, for the education and enjoyment of the savvy programmer
## Excel’s XIRR Function
### January 13, 2012
We studied numerical integration in a previous exercise. In today's exercise we will look at the inverse operation: numerically calculating a derivative.
The function that interests us in today's exercise is the XIRR function from Excel, which computes the internal rate of return of a series of cash flows that are not necessarily periodic. The XIRR function calculates the value of $x$ that satisfies the following equation, where $p_i$ is the $i$th cash flow, $d_i$ the date of the $i$th cash flow, and $d_0$ the date of the first cash flow:
$\sum_i \frac{p_i}{(1+x)^{(d_i-d_0)/365}} = 0$
The method used to estimate $x$ was devised by Sir Isaac Newton about three hundred years ago. If $x_n$ is an approximation to a root of a function, then a better approximation $x_{n+1}$ is given by
$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$
where $f'(x_n)$ is the derivative of $f$ at $x_n$. Mathematically, the derivative of a function at a given point is the slope of the tangent line at that point. Arithmetically, we calculate the slope of the tangent line by knowing the value of the function at a point $x$ and a nearby point $x+\epsilon$, then using the equation
$\frac{f(x+\epsilon) - f(x)}{(x+\epsilon)-x}$
to determine the slope of the line. Thus, to find $x$, pick an initial guess (0.1 or 10% works well for most interest calculations) and iterate until the difference between two successive values is small enough. For example, with payments of -10000, 2750, 4250, 3250, and 2750 on dates 1 Jan 2008, 1 March 2008, 30 October 2008, 15 February 2009, and 1 April 2009, the internal rate of return is 37.3%.
Your task is to write a function that mimics Excel’s XIRR function. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
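Here is one possible sketch in Python, before you look at the suggested solution (the function names and the `eps`/`tol` constants are my own choices; dates are passed as day offsets from the first cash flow):

```python
def xnpv(rate, flows, days):
    """The sum from the problem statement, as a function of the rate."""
    return sum(p / (1 + rate) ** (d / 365.0) for p, d in zip(flows, days))

def xirr(flows, days, guess=0.1, eps=1e-6, tol=1e-9):
    x = guess
    for _ in range(100):
        fx = xnpv(x, flows, days)
        slope = (xnpv(x + eps, flows, days) - fx) / eps   # numerical derivative
        x_next = x - fx / slope                           # Newton step
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

flows = [-10000, 2750, 4250, 3250, 2750]
days = [0, 60, 303, 411, 456]        # day offsets of the dates in the example above
print(round(xirr(flows, days), 3))   # about 0.373
```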
Posted by programmingpraxis
Filed in Exercises
### 2 Responses to “Excel’s XIRR Function”
1. ardnew said
January 13, 2012 at 5:06 PM
The XIRR function calculates the value of x which causes that particular summation to equal zero? I don’t see any equation other than Newton’s formula there, maybe I’m just being nit-picky
2. Joe said
September 24, 2012 at 2:26 AM
The XIRR has a constant of 365 built in. So it forces the resulting rate to be annualized. Is there another function similar to XIRR but with a user-definable parameter instead of a constant 365?
http://mathoverflow.net/revisions/7966/list | ## Return to Answer
2 added 314 characters in body
Take any two closed connected simply-connected homeomorphic smooth 4-manifolds that are not diffeomorphic. Then their products with $\mathbb R$ are diffeomorphic because the smooth structure on such a product is unique. (Indeed, since PL/O is 6-connected, it is enough to show that the associated PL structure is unique, but the set of PL-structures on a PL-manifold $M$ of dimension $\ge 5$ is bijective to the set of homotopy classes of maps from $M$ to $TOP/PL$, and the latter space is $K(\mathbb Z_2, 3)$, so the set of PL structures on $M$ is bijective to $H^3(M,\mathbb Z_2)$, which vanishes by Poincaré duality if $M$ is homotopy equivalent to a simply-connected $4$-manifold; in fact the argument shows that all we need is $H_1(M;\mathbb Z_2)=0$).
It follows that the closed simply-connected $4$-manifolds are tangentially homotopy equivalent, i.e. there is a homotopy equivalence that pulls the stable tangent bundles back to each other. A priori this homotopy equivalence need not be homotopic to a homeomorphism, but if one of your manifolds is stably parallelizable, so is the other one, and then the homeomorphism has to preserve the stable tangent bundle because the pullback of a trivial bundle is trivial.
1
Take any two closed connected homeomorphic smooth 4-manifolds that are not diffeomorphic. Then their products with $\mathbb R$ are diffeomorphic because the smooth structure on a 5-manifold is unique. It follows that the original manifolds are tangentially homotopy equivalent, i.e. there is a homotopy equivalence that pulls the stable tangent bundles back to each other. A priori this homotopy equivalence need not be homotopic to a homeomorphism, but if one of your manifolds is stably parallelizable, so is the other one, and then the homeomorphism has to preserve the stable tangent bundle because the pullback of a trivial bundle is trivial.
http://en.wikipedia.org/wiki/Rhumb_lines | # Rhumb line
(Redirected from Rhumb lines)
For the album, see The Rhumb Line. For the board game, see Rhumb Line (board game).
Image of a loxodrome, or rhumb-line, spiraling towards the North Pole
In navigation, a rhumb line (or loxodrome) is a line crossing all meridians of longitude at the same angle, i.e. a path derived from a defined initial bearing. That is, upon taking an initial bearing, one proceeds along the same bearing, without changing the direction as measured relative to true north.
## Usage
Its use in navigation is directly linked to the style, or projection of certain navigational maps. A rhumb line appears as a straight line on a Mercator projection map.[1]
The name is derived from Old French or Spanish respectively: "rumb" or "rumbo", a line on the chart which intersects all meridians at the same angle.[1] On a plane surface this would be the shortest distance between two points. Over the Earth's surface at low latitudes or over short distances it can be used for plotting the course of a vehicle, aircraft or ship.[1] Over longer distances and/or at higher latitudes the great circle route is significantly shorter than the rhumb line between the same two points. However the inconvenience of having to continuously change bearings while travelling a great circle route makes rhumb line navigation appealing in certain instances.[1]
The point can be illustrated with an East-West passage over 90 degrees of longitude along the equator, for which the great-circle and rhumb-line distances are the same at 5,400 nautical miles (10,000 km). At 20 degrees North the great-circle distance is 4,997 miles (8,042 km) while the rhumb-line distance is 5,074 miles (8,166 km), about 1½ percent further. But at 60 degrees North the great circle distance is 2,485 miles (3,999 km) while the rhumb-line is 2,700 miles (4,300 km), a difference of 8½ percent. A more extreme case is the air route between New York and Hong Kong, for which the rhumb-line path is 9,700 nautical miles (18,000 km). The great-circle route over the North Pole is 7,000 nautical miles (13,000 km), or 5½ hours less flying time at a typical cruising speed.
Some old maps in the Mercator projection have grids composed of lines of latitude and longitude but also show rhumb lines which are oriented directly towards North, at a right angle from the North, or at some angle from the North which is some simple rational fraction of a right angle. These rhumb lines would be drawn so that they would converge at certain points of the map: lines going in every direction would converge at each of these points. See compass rose. Such maps would necessarily have been in the Mercator projection; therefore not all old maps would have been capable of showing rhumb-line markings.
The radial lines on a compass rose are also called rhumbs. The expression "sailing on a rhumb" was used in the 16th–19th centuries to indicate a particular compass heading.[1]
Early navigators in the time before the invention of the chronometer used rhumb-line courses on long ocean passages, because the ship's latitude could be established accurately by sightings of the Sun or stars but there was no accurate way to determine the longitude. The ship would sail North or South until the latitude of the destination was reached, and the ship would then sail East or West along the rhumb-line (actually a parallel, which is a special case of the rhumb-line), maintaining a constant latitude and recording regular estimates of the distance sailed until evidence of land was sighted.[2]
## General and mathematical description
The effect of following a rhumb line course on the surface of a globe was first discussed by the Portuguese mathematician Pedro Nunes in 1537, in his Treatise in Defense of the Marine Chart, with further mathematical development by Thomas Harriot in the 1590s.
A rhumb line can be contrasted with a great circle, which is the path of shortest distance between two points on the surface of a sphere, but whose bearing is non-constant. If you were to drive a car along a great circle you would hold the steering wheel fixed, but to follow a rhumb line you would have to turn the wheel, turning it more sharply as the poles are approached. In other words, a great circle is locally "straight" with zero geodesic curvature, whereas a rhumb line has non-zero geodesic curvature.
Meridians of longitude and parallels of latitude provide special cases of the rhumb line, where their angles of intersection are respectively 0° and 90°. On a North-South passage the rhumb-line course coincides with a great circle, as it does on an East-West passage along the equator.
On a Mercator projection map, a rhumb line is a straight line; a rhumb line can be drawn on such a map between any two points on Earth without going off the edge of the map. But theoretically a loxodrome can extend beyond the right edge of the map, where it then continues at the left edge with the same slope (assuming that the map covers exactly 360 degrees of longitude).
Rhumb lines which cut meridians at oblique angles are loxodromic curves which spiral towards the poles.[1] On a Mercator projection the North and South poles occur at infinity and are therefore never shown. However the full loxodrome on an infinitely high map would consist of infinitely many line segments between the two edges. On a stereographic projection map, a loxodrome is an equiangular spiral whose center is the North (or South) Pole.
All loxodromes spiral from one pole to the other. Near the poles, they are close to being logarithmic spirals (which they are exactly on a stereographic projection, as noted above), so they wind round each pole an infinite number of times but reach the pole in a finite distance. The pole-to-pole length of a loxodrome is (assuming a perfect sphere) the length of the meridian divided by the cosine of the bearing away from true north. Loxodromes are not defined at the poles.
• Three views of a pole-to-pole loxodrome
### Mathematical derivation
Let β be the constant bearing from true north of the loxodrome and $\lambda_0\,\!$ be the longitude where the loxodrome passes the equator. Let $\lambda\,\!$ be the longitude of a point on the loxodrome. Under the Mercator projection the loxodrome will be a straight line
$x = \lambda\,$
$y = m (\lambda - \lambda_0)\,$
with slope $m=\cot(\beta)\,\!$. For a point with latitude $\phi\,$ and longitude $\lambda\,\!$ the position in the Mercator projection can be expressed as
$x= \lambda\,$
$y=\tanh^{-1}(\sin \phi).\,\!$
Then the latitude of the point will be
$\phi=\sin^{-1}(\tanh(m (\lambda-\lambda_0))),\,$
or, in terms of the Gudermannian function $\mathrm{gd}$, $\phi=\mathrm{gd}(m(\lambda-\lambda_0))$. In Cartesian coordinates this can be simplified to
$x = r \cos(\lambda) / \cosh(m (\lambda-\lambda_0)),\,$
$y = r \sin(\lambda) / \cosh(m (\lambda-\lambda_0)),\,$
$z = r \tanh(m (\lambda-\lambda_0)).\,$
Finding the loxodromes between two given points can be done graphically on a Mercator map, or by solving a nonlinear system of two equations in the two unknowns $m=\cot(\beta)$ and $\lambda_0$. There are infinitely many solutions; the shortest one is that which covers the actual longitude difference, i.e. does not make extra revolutions, and does not go "the wrong way around".
The distance between two points, measured along a loxodrome, is simply the absolute value of the secant of the bearing (azimuth) times the north-south distance (except for circles of latitude for which the distance becomes infinite).
The above formulas assume a spherical earth; the formulas for the spheroid are of course more complicated, but not hopelessly so.
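For the spherical case, the relations above translate directly into code. The sketch below is my own illustration (the test points are arbitrary): it returns the constant bearing and the rhumb-line distance, and assumes the longitude difference is already within ±180°.

```python
import math

def rhumb(lat1, lon1, lat2, lon2, R=6371.0):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlam = math.radians(lon2 - lon1)                 # assumed within +/-180 degrees
    dpsi = math.atanh(math.sin(phi2)) - math.atanh(math.sin(phi1))  # Mercator y
    bearing = math.atan2(dlam, dpsi)
    if abs(dphi) > 1e-12:
        dist = abs(dphi / math.cos(bearing)) * R     # |sec(bearing)| * north-south distance
    else:
        dist = abs(dlam) * math.cos(phi1) * R        # special case: along a parallel
    return math.degrees(bearing), dist

print(rhumb(40.7, -74.0, 51.5, -0.1))                # roughly New York to London
```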
## Etymology and historical description
The word "loxodrome" comes from Greek loxos : oblique + dromos : running (from dramein : to run). The word "rhumb" may come from Spanish/Portuguese rumbo/rumo (course, direction) and Greek ῥόμβος.[3]
The 1878 edition of The Globe Encyclopaedia of Universal Information describes loxodrome lines as:[4]
Loxodrom'ic Line is a curve which cuts every member of a system of lines of curvature of a given surface at the same angle. A ship sailing towards the same point of the compass describes such a line which cuts all the meridians at the same angle. In Mercator's Projection (q.v.) the Loxodromic lines are evidently straight.[4]
## On the Riemann sphere
Main article: Möbius transformation
The surface of the earth can be understood mathematically as a Riemann sphere, that is, as a projection of the sphere to the complex plane. In this case, loxodromes can be understood as certain classes of Möbius transformations.
## References
1. A Brief History of British Seapower, David Howarth, pub. Constable & Robinson, London, 2003, chapter 8.
2. at TheFreeDictionary
3. Ross, J.M. (editor) (1878). "The Globe Encyclopaedia of Universal Information", Vol. IV, Edinburgh, Scotland, Thomas C. Jack, Grange Publishing Works, retrieved from Google Books 2009-03-18.
http://mathoverflow.net/questions/42447?sort=votes | ## Subset higher power sum question (related to quadratic forms)
Let $\mathbb N_{n} = \{1,2,\cdots,n\}$.
Let $S$ be of cardinality $n$, where the elements of $S$ are integers from $\mathbb N_{n}$ and at least one element of $S$ is repeated (that is, at least one integer from $\mathbb N_{n}$ is skipped). One can easily find a set $S$ with the property that $\displaystyle \sum_{j \in S}j^{i} = \displaystyle \sum_{j \in \mathbb N_{n}}j^{i}$ when $i = 1$. (Example: $n=4$, $S=\{1,1,4,4\}$ has sum $10$, the same as the sum of the first $n$ consecutive integers.)
How about for $i \ge 2$? It is not obvious that higher power sum sets exist, due to the cardinality constraint on $S$ and $\mathbb N_{n}$. One cannot rule it out either. Is there an easy way to tackle such sumset questions?
For $i=2$ it is related to quadratic forms and integer norms. In an integer coordinate system, how many ways can a given integer norm occur when the coordinates are bounded?
-
## 2 Answers
It's a little easier to state an answer if you let $N_n=\lbrace0,1,\dots,n-1\rbrace$.
Let $n=2^k$, let $S$ be the multiset of integers with an odd number of ones in binary, each such integer appearing with multiplicity 2. Then it works for all $i\lt k$.
E.g., $k=3$, $S=\lbrace1,2,4,7\rbrace$, each taken twice, you get $1^i+2^i+4^i+7^i=0^i+3^i+5^i+6^i$ for $i=0,1,2$ (where, by convention, $0^0=1$).
If you really need the range to start at 1, just add 1 to everything, take $S=\lbrace2,3,5,8\rbrace$.
This has to do with the Tarry-Escott problem, q.v.
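(A quick brute-force check of this construction, added for illustration; the variable names are mine:)

```python
k = 4
n = 2 ** k
odd  = [m for m in range(n) if bin(m).count('1') % 2 == 1]   # odd number of ones in binary
even = [m for m in range(n) if bin(m).count('1') % 2 == 0]
for i in range(k):      # power sums agree for all i < k (note 0**0 == 1 in Python)
    assert sum(m ** i for m in odd) == sum(m ** i for m in even)
print(odd, even)
```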
-
This helped a lot thank you. – unknown (google) Oct 17 2010 at 0:58
How about if $n \ne 2^{k}$? Is there always such an $S$? Actually I am looking for a negative answer:)? If I have a negative answer there is a way to solve some hard problems in computer science in a somewhat easier manner. – unknown (google) Oct 17 2010 at 1:29
"q.v."? . – JBL Oct 17 2010 at 1:30
quod vide, q.v. – Gerry Myerson Oct 17 2010 at 7:03
q.v. = quod vide, Latin for "Google it". – Hugo van der Sanden Oct 17 2010 at 7:19
As soon as you have a sum of distinct $i$th powers, say $a_1^i+\dots+a_s^i$, equal to another sum of (not necessarily distinct) $i$th powers $b_1^i+\dots+b_s^i$ ($s$ is, of course, the same), you have the desired property for $n\ge\max\lbrace a_1,\dots,a_s,b_1,\dots,b_s\rbrace$. So, your question is about a "minimal" solution of $$a_1^i+\dots+a_s^i=b_1^i+\dots+b_s^i$$ in integers with $a_1,\dots,a_s$ distinct. The equation does not look pretty enough, and solutions for small $i$ can be found "by hand".
Let me conclude that your problem is a version of Waring's problem.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9218083024024963, "perplexity_flag": "head"} |
http://mathhelpforum.com/calculus/149898-potential-function.html | # Thread:
1. ## Potential Function
F(x,y) is defined by the integral:
$\int_{(2,\pi )}^{(x,y)} [-2uv^{2}\sin (u^{2}v)]\,du+[\cos (u^{2}v)-u^{2}v\sin (u^{2}v)]\,dv$
Express F(x,y) as a function of x and y, eliminating the integral sign.
_______________________
My approach:
I found the curl of the vector field to be zero, which means the integral is path-independent and a potential function exists... I'm now working towards building $f$ from $F$ and finally calculating $f(x,y)-f(2,\pi)$.
Which I did and I got:
$f(u,v)=v[\cos (u^{2}v)]$
which means the final value for the integral is:
$y\cos \left ({x^{2}y} \right )-\pi$
Does my answer seem correct to you?
2. ## Confirmed!
I just thought of a way to confirm that my answer is correct. Here's how:
If you differentiate $f(u,v)=v[\cos (u^{2}v)]$ with respect to 'u' you get the first integrand and if you differentiate with respect to 'v' you get the second integrand in F(x,y)...
So $f(u,v)=v[\cos (u^{2}v)]$ must have been the potential function to begin with.
Anybody wanna tap me on the shoulder?
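(Tap on the shoulder via SymPy, not part of the original post: both partial derivatives of $f(u,v)=v\cos(u^2v)$ reproduce the integrands, so both lines below should print 0.)

```python
import sympy as sp

u, v = sp.symbols('u v')
f = v * sp.cos(u**2 * v)
print(sp.simplify(sp.diff(f, u) + 2*u*v**2*sp.sin(u**2*v)))                 # 0
print(sp.simplify(sp.diff(f, v) - sp.cos(u**2*v) + u**2*v*sp.sin(u**2*v)))  # 0
```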
http://math.stackexchange.com/questions/290386/what-is-the-hat-number-problem?answertab=oldest | # What is the hat number problem?
I believe I remember the answer is surprisingly $\displaystyle \frac{1}{e}$ when calculating a certain permutation probability where people are switching hats. Do you know what I'm talking about?
It's supposedly applied mathematics where a number of people are switching hats and, surprisingly, a probability turns out to be $\displaystyle \frac{1}{e}$ or thereabouts.
## Update
Well, I found it, and I think it's surprising that it's 1/e: http://books.google.se/books?id=OVkoCcszEZ0C&pg=PA39&dq=hat&redir_esc=y#v=onepage&q=hat&f=false
-
Maybe look at the Wikipedia article on derangements. – André Nicolas Jan 30 at 7:34
## 1 Answer
At a party $n$ men take off their hats. The hats are then mixed up and each man randomly selects one. What is the probability that no man selects his own hat? Also show that as $n$ tends to $\infty$, the probability tends to $1 \over e$.
To solve this problem use Poincaré's theorem (inclusion-exclusion). The probability is $$p=\sum_{k=0}^n\frac{(-1)^k}{k!}$$ and note that $$\lim_{n \to \infty} \sum_{k=0}^n \frac{(-1)^k}{k!}= \frac{1}{e}$$
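(Illustrative check, not part of the original answer: both the exact partial sum and a Monte Carlo simulation approach $1/e \approx 0.3679$.)

```python
import math, random

n = 10
exact = sum((-1) ** k / math.factorial(k) for k in range(n + 1))

hits, trials = 0, 100000
for _ in range(trials):
    hats = list(range(n))
    random.shuffle(hats)
    hits += all(h != i for i, h in enumerate(hats))   # nobody got his own hat
print(exact, hits / trials, 1 / math.e)
```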
http://mathhelpforum.com/advanced-algebra/184321-determinant-matrix.html | # Thread:
1. ## Determinant of a matrix
Hello everyone,
$\begin{vmatrix}x&-1&0&\cdots &0&0\\0&x&-1&\cdots &0&0\\0&0&x&\ddots&\vdots&\vdots\\ \vdots&\vdots&\vdots &\ddots&-1&0\\0&0&0&\cdots&x&-1\\a_n&a_{n-1}&a_{n-2}&\cdots&a_1&a_0\end{vmatrix}$
I've tried to solve it by myself, but I couldn't get anywhere. I do wish to improve my skill in mathematics; I am not asking for the complete answer. Just give me something to start with, and then I will try again. I do appreciate your help. Thank you.
2. ## Re: Determinant of a matrix
$D_2=\begin{vmatrix}{x}&{-1}\\{a_1}&{a_0}\end{vmatrix}=a_0x+a_1$
$D_3=\begin{vmatrix}{x}&{-1}&{0}\\{0}&{x}&{-1}\\{a_2}&{a_1}&{a_0}\end{vmatrix}=xD_2+a_2\begin{ vmatrix}{-1}&{\;\;0}\\{\;\;x}&{-1}\end{vmatrix}=a_0x^2+a_1x+a_2$
Conjecture $D_{n+1}=a_0x^n+a_1x^{n-1}+\ldots +a_n$ and use induction. The way we have found $D_3$ will help you.
3. ## Re: Determinant of a matrix
Thank you very much!
Assuming $D_{k+1} = a_0x^k+a_1x^{k-1}+\cdots +a_k$,
$D_{k+2}=xD_{k+1}+(-1)^{k+3}a_{k+1}\begin{vmatrix}-1&0&\cdots&\cdots&0\\x&-1&0&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&0\\0&\cdots&\cdots&x&-1\end{vmatrix}$.
and as $\begin{vmatrix}-1&0&\cdots&\cdots&0\\x&-1&0&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&0\\0&\cdots&\cdots&x&-1\end{vmatrix}=(-1)^{k+1}$, we get
$D_{k+2}=xD_{k+1}+a_{k+1}$.
Therefore
$D_{k+2}=a_0x^{k+1}+a_1x^{k}+\cdots+a_{k+1}$
$D_2$ satisfies the first assumption, therefore via mathematical induction, $D_{n+1}=a_0x^n+a_1x^{n-1}+\cdots+a_n\ \ \ \forall n\in \mathbb{Z},n\geq 0.$
Am I correct? Thank you very much for your help, and I will work even harder.
4. ## Re: Determinant of a matrix
Originally Posted by joll
Am I correct?
Yes, you are. Only one thing: it should be $n\geq 1$ instead of $n\geq 0$ . If $n=0$, we have $D_{n+1}=D_1=\det [x]=x$ i.e. we have no $a_i$ and the rule defining the given determinant has no sense.
Thank you very much for your help,
You are welcome!
and I will work even harder.
Good. I am pretty sure.
5. ## Re: Determinant of a matrix
Thank you very much.
6. ## Re: Determinant of a matrix
Originally Posted by joll
Hello everyone,
$\begin{vmatrix}x&-1&0&\cdots &0&0\\0&x&-1&\cdots &0&0\\0&0&x&\ddots&\vdots&\vdots\\ \vdots&\vdots&\vdots &\ddots&-1&0\\0&0&0&\cdots&x&-1\\a_n&a_{n-1}&a_{n-2}&\cdots&a_1&a_0\end{vmatrix}$
I've tried to solve by myself, but I coudln't get anywhere. I do wish to improve my skill in mathematics; I am not asking for the complete answer. Just give me what to start, then I will try again. I do appreciate your help. Thank you.
it can also be done without using induction. just expand the determinant along the last row and you'll quickly get the answer.
7. ## Re: Determinant of a matrix
I see... that way, the determinant can be written
$(-1)^{n+1}a_n\begin{vmatrix}-1&0&\cdots&0&0\\x&-1&\cdots&0&0\\0&x&\ddots&\vdots&\vdots\\ \vdots&&\ddots&-1&0\\0&0&\cdots&x&-1\end{vmatrix}+ (-1)^{n+2}a_{n-1}\begin{vmatrix} x&0&\cdots&0&0\\0&-1&\cdots&0&0\\0&x&\ddots&\vdots&\vdots\\ \vdots &&\ddots&-1&0\\0&0&\cdots&x&-1\end{vmatrix}\dots$.
The determinant term in the $(n,i)$ cofactor (I mean, the matrix part of the equation above) is given by
$(-1)^{n-i}x^{i-1}$
Therefore
$D_{n+1}=a_0x^n+\dots+a_n$ (am I correct?).
I haven't been learning linear algebra for long yet, so knowing many ways to solve one problem really helps me. Thank you very much.
8. ## Re: Determinant of a matrix
Originally Posted by joll
I see... that way, the determinant can be written
$(-1)^{n+1}a_n\begin{vmatrix}-1&0&\cdots&0&0\\x&-1&\cdots&0&0\\0&x&\ddots&\vdots&\vdots\\ \vdots&&\ddots&-1&0\\0&0&\cdots&x&-1\end{vmatrix}+ (-1)^{n+2}a_{n-1}\begin{vmatrix} x&0&\cdots&0&0\\0&-1&\cdots&0&0\\0&x&\ddots&\vdots&\vdots\\ \vdots &&\ddots&-1&0\\0&0&\cdots&x&-1\end{vmatrix}\dots$.
The determinant term in the $(n,i)$ cofactor (I mean, the matrix part of the equation above) is given by
$(-1)^{n-i}x^{i-1}$
Therefore
$D_{n+1}=a_0x^n+\dots+a_n$ (am I correct?).
I haven't been learning linear algebra for long yet, so knowing many ways to solve one problem really helps me. Thank you very much.
yes, you've got the idea but you forgot that the original determinant is $(n+1) \times (n+1)$ not $n \times n$.
so, after expanding along the last row, you'll get that the determinant is equal to
$\sum_{i=1}^{n+1}(-1)^{n+1+i}a_{n+1-i}(-1)^{n+1-i}x^{i-1}=\sum_{i=1}^{n+1}a_{n+1-i}x^{i-1}=a_n + \ldots + a_0x^n.$
9. ## Re: Determinant of a matrix
Ah..yes, I forgot that. Thank you. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9289429783821106, "perplexity_flag": "middle"} |
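(Editorial addendum: the closed form from this thread is easy to confirm with SymPy for a small case, here the $4\times 4$ determinant, i.e. $n=3$.)

```python
import sympy as sp

x, a0, a1, a2, a3 = sp.symbols('x a0 a1 a2 a3')
M = sp.Matrix([[x, -1,  0,  0],
               [0,  x, -1,  0],
               [0,  0,  x, -1],
               [a3, a2, a1, a0]])
print(sp.expand(M.det()))   # a0*x**3 + a1*x**2 + a2*x + a3
```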
http://mathoverflow.net/questions/55092/martingales-in-both-discrete-and-continuous-setting/55101 | ## Martingales in both discrete and continuous setting
I am wondering, polynomials like
$S_n^4-6n S_n^2+3n^2+2n$ for $$S_n=\sum_{i=1}^n{X_i}$$ where $$\mathbb{P}(X_i=1)=\mathbb{P}(X_i=-1)=\frac{1}{2}$$ is a martingale (under the conventional filtration). While $$B_t^4-6t B_t^2+3t^2$$ for Brownian motion $B_t$ is also a martingale.
Note the difference between the two, and the similarity!
What is the general criterion on polynomials in $S_n$ and $n$, and likewise in $B_t$ and $t$, for them to be martingales?
Thanks.
-
## 1 Answer
One knows that $P(S_n,n)$ is a martingale if and only if $P(s+1,n+1)+P(s-1,n+1)=2P(s,n)$ and that $Q(B_t,t)$ is a martingale if and only if $2\partial_tQ(x,t)+\partial^2_{xx}Q(x,t)=0$.
Assume that $P(S_n,n)$ is a martingale and, for a given $d$ and for every $h>0$, let $$Q_h(x,t)=h^{d}P(x/\sqrt{h},t/h),$$ in the sense that one evaluates $P(s,n)$ at the integer parts $s$ and $n$ of $x/\sqrt{h}$ and $t/h$.
If $Q_h\to Q$ when $h\to0$, writing $\partial_t$ and $\partial^2_{xx}$ as limits of finite differences of orders $1$ and $2$, one sees that $2\partial_tQ+\partial^2_{xx}Q=0$, hence $Q(B_t,t)$ is a martingale.
Example: $P(s,n)=s^2-n$. For $d=1$, $Q_h(x,t)=x^2-t$ hence $Q(x,t)=x^2-t$.
Other example: $P(s,n)=s^4-6ns^2+3n^2+2n$. For $d=2$, $Q_h(x,t)=x^4-6tx^2+3t^2+2ht$ hence $Q(x,t)=x^4-6tx^2+3t^2$.
In the other direction, to deduce a martingale in $S_n$ and $n$ from a martingale in $B_t$ and $t$, one should probably replace each monomial by a sum of its first derivative. This means something like replacing $q(t)=3t^2$ by $\displaystyle\sum_{k=1}^n(\partial_tq)(k)=3n^2+3n$ but I did not look into the details.
Edit (Thanks to The Bridge for a comment on the part of this answer above this line)
Recall that a natural way to build in one strike a full family of martingales that are polynomial functions of $(B_t,t)$ is to consider so-called exponential martingales. For every parameter $u$, $$M^u_t=\exp(uB_t-u^2t/2)$$ is a martingale hence every "coefficient" of its expansion as a series of multiples of $u^i$ for nonnegative integers $i$ is also a martingale. This yields the well known fact that $$1,\ B_t,\ B^2_t-t,\ B^3_t-3tB_t,\ B^4_t-6tB_t^2+3t^2,$$ etc., are all martingales. One recognizes the sequence of Hermite polynomials $H_n(B_t,t)$, a fact which is not very surprising since these polynomials may be defined precisely through the expansion of $\exp(ux-u^2t/2)$.
So far, so good. But what could be an analogue of this for standard random walks? The exponential martingale becomes $$D^u_n=\exp(uS_n-(\ln\cosh(u))n)$$ and the rest is simultaneously straightforward (in theory) and somewhat messy (in practice): one should expand $\ln\cosh(u)$ along increasing powers of $u$ (warning, here comes the family of Bernoulli numbers), then deduce from this the expansion of $D^u_n$ along increasing powers of $u$, and finally collect the resulting sequence of martingales polynomial in $(S_n,n)$.
Let us see what happens in practice. Keeping only two terms in the expansion of $\ln\cosh(u)$ yields $\ln\cosh(u)=\frac12u^2-\frac1{12}u^4+O(u^6)$ hence $$\exp(-(\ln\cosh(u))n)=1-\frac12u^2n+\frac1{24}u^4(2n+3n^2)+O(u^6).$$ Multiplying this by $$\exp(uS_n)=1+uS_n+\frac12u^2S_n^2+\frac16u^3S_n^3+\frac1{24}u^4S_n^4+\frac1{120}u^5S_n^5+O(u^6),$$ and looking for the coefficients of the terms $u^i$ in this expansion yields the martingales $$1,\ S_n,\ S_n^2-n,\ S_n^3-3nS_n,$$ and $$S_n^4-6nS_n^2+2n+3n^2,\ S_n^5-10nS_n^3+5(2n+3n^2)S_n.$$ Thus, in $M_t^u$, $B_t$ scales like $1/u$ and $t$ like $1/u^2$ hence Hermite polynomials are homogeneous when one replaces $t$ by $B_t^2$. The analogues of Hermite polynomials for $(S_n,n)$, from degree $4$ on, are not homogeneous in the sense of this dimensional analysis where $n$ is like $S_n^2$. Ultimately, this is simply because in $D_n^u$ one has to compensate $uS_n$ by $(\ln\cosh(u))n$, which is not homogeneous in $u^2n$.
Note that this argument of non homogeneity carries through to continuous time processes. For instance, the exponential martingales for the standard Poisson process $(N_t)_t$ are $$\exp(uN_t-(\mathrm{e}^u-1)t),$$ and the rest of the argument is valid once one has noted that $\mathrm{e}^u-1$ is not a power of $u$.
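(Added illustration, not part of the original answer: SymPy reproduces the expansion just carried out. The coefficient of $u^i$ in $D^u_n$, multiplied by $i!$, is the $i$th polynomial martingale in $(S_n,n)$ listed above.)

```python
import sympy as sp

u, S, n = sp.symbols('u S n')
D = sp.exp(u * S - n * sp.log(sp.cosh(u)))        # the exponential martingale D_n^u
expansion = sp.expand(sp.series(D, u, 0, 6).removeO())
for i in range(6):
    print(i, sp.expand(sp.factorial(i) * expansion.coeff(u, i)))
# i = 4 prints S**4 - 6*n*S**2 + 3*n**2 + 2*n, matching the text
```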
-
Just to add, it is widely known that Hermite polynomials and Brownian Motions are deeply connected with regards to (local) martingale property (and the chaos decomposition property) . math.ucsd.edu/~pfitz/downloads/hermite.pdf – The Bridge Feb 12 2011 at 0:05
@Didier: this is great! I just have a few comments. 1. For functions not in $C^1$ (continuously differentiable to first order) in $t$ and $C^2$ in $B_t$, are there any examples of martingales of the form $Q(B_t, t)$? 2. Does the condition $P(s+1,n+1)+P(s-1,n+1)=2P(s,n)$ truly enumerate all martingales of the form $P(S_n, n)$? – Qiang Li Feb 13 2011 at 2:02
@Qiang Li: ("comments" in the sense of "questions".) Well, after an interesting initial question, you once again fall back on some standard textbook stuff. This is not MO purpose. Please get yourself some lecture notes (this time, on Brownian martingales), as was already suggested to you about other MO basic probability questions, and study them. Specific references were provided to you, did you go and read them? Considering the rythm of your questions on MO and elsewhere, this is doubtful. .../... – Didier Piau Feb 13 2011 at 11:19
.../... Anyway, about Brownian martingales, Durrett is excellent, Tsirelson is kind enough to put tau.ac.il/~tsirel/Courses/Brown/syllabus.html on the web, and many other good introductory texts exist. Re 2.: Does it, indeed? Either you did not spend one minute thinking about the answer, or you have no clue of what you are talking about. – Didier Piau Feb 13 2011 at 11:21
@Didier Piau : Sometimes I feel frustrated on MO 'cause I can't vote twice for a nice answer. – The Bridge Feb 14 2011 at 10:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9315977096557617, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/231393/proving-a-set-of-functions-from-mathbbn-to-1-0-is-countable/231397 | # Proving a set of functions from $\mathbb{N}$ to $\{1,0\}$ is countable
The question is to prove the set $S$ of all the functions $f:\mathbb{N}\to \{0,1\}$, for which $f^{-1}(\{1\})$ is finite, is countable.
After considering this for a while I do understand what it means, but I have no idea how to solve it. How do I even make a function from natural numbers to functions, and how can I prove such a function is bijective?
I'm not even sure how to start this, so I'll be happy with any push in the right direction.
edit: Thanks you guys for your answers. from them I realized I'm missing something since I didn't understand half of what you said. Though I'm a bit surprised since I only missed a 1-hour lecture once and I don't remember discussing most of what written here. The exercise itself is due in a bit less then a week. I'll go study for a bit and come back to this soon. Will leave the question open in the meantime.
Edit 2: OK, after asking around for a bit and reading some stuff, and then sitting for 15 minutes just thinking about all the pieces, I think I finally understand this. I haven't written the proof yet, but I feel like I know how to do this, so I'll be closing the question. Thanks again everyone.
-
Thanks for your edit/update: It seems you want to consider all/only functions that map a finite number of $n\in \mathbb{N}$ to $1$, and map all other (countably many) $n^{\prime} \in \mathbb{N}$ to $0$? Am I correct? – amWhy Nov 6 '12 at 15:32
Yep, these are the functions I try to prove there are countably infinite number of. – Nescio Nov 6 '12 at 15:44
## 4 Answers
You shouldn’t try to find a bijection: that gets very messy very quickly. You should look for a more indirect argument.
Here’s one possible approach. There’s an easy bijection between your set of functions and the set of finite subsets of $\Bbb N$: pair a function $f$ with $\{k\in\Bbb N:f(k)=1\}$. For each $n\in\Bbb N$ let $[\Bbb N]^n$ be the set of subsets of $\Bbb N$ having exactly $n$ elements. If you can show that each $[\Bbb N]^n$ is countable, you can then use the fact that the union of a countable family of countable sets is countable.
To show that $[\Bbb N]^n$ is countable, find an injection (one-to-one mapping) of $[\Bbb N]^n$ into $\Bbb N^n$, the set of $n$-tuples of natural numbers. (I’m assuming here that you’ve already shown that $\Bbb N^n$ is countable.) HINT: You can list any set of natural numbers in increasing order.
-
Actually we didn't talk about $\mathbb{N}^n$ like that at all, unless I missed a lot more than I can remember – Nescio Nov 7 '12 at 9:14
Hint: Any natural number can be written in binary. Prepend an infinite sequence of zeros to the front of such a binary representation, and then reverse it.
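(A small sketch of the hinted correspondence, added for illustration; the helper names are mine. Reading the binary digits of a natural number in reverse gives a function $\mathbb{N}\to\{0,1\}$ with finitely many ones, i.e. a finite subset of $\mathbb{N}$.)

```python
def to_subset(m):
    # positions of the ones in the reversed binary expansion of m
    return {i for i, bit in enumerate(reversed(bin(m)[2:])) if bit == '1'}

def to_number(s):
    return sum(2 ** i for i in s)

assert all(to_number(to_subset(m)) == m for m in range(100))
print(to_subset(13))   # {0, 2, 3}, since 13 = 1101 in binary
```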
-
Any such function is uniquely determined by $f^{-1}(\{1\})$, you can construct a bijection between the set of such functions and the set of all finite subsets of $\mathbb N$.
Thus, we need to prove that the set of all finite subsets of $\mathbb N$ is countable.
Hint Can you prove that for every $n$, the set of subsets of $\mathbb N$ with $n$ elements is countable?
Can you finish the problem from here?
-
Hint: first try to prove that the set of all functions for which $|f^{-1}(\{1\})|=1$ is countable. Then the same for the set of all functions for which $|f^{-1}(\{1\})|=2$. Then generalize.
-
I just touched up typos in the formatting...Check to make sure I got what you intended! – amWhy Nov 6 '12 at 15:06
Yes, this is it. Thanks! – Dan Shved Nov 6 '12 at 15:09
I tried looking at the special case $|f^{-1}(\{1\})|=1$, but I'm not exactly sure how to write it. I understand that there exists a function $g$ that maps every $n\in\mathbb{N}$ to the function for which $f(n)=1$, but how do I write it formally? – Nescio Nov 7 '12 at 9:09
http://mathhelpforum.com/advanced-algebra/189763-finite-group.html | # Thread:
1. ## Finite group
Let G be a finite group which possesses an automorphism $\sigma$ such that $\sigma(g)=g$ if and only if $g = 1$.
$(\Rightarrow)$
Suppose |G| = n and $\sigma(g)=g$.
First, I don't get why this would only work if g is 1.
2. ## Re: Finite group
I don't understand what we have to show.
3. ## Re: Finite group
Originally Posted by girdav
I don't understand what we have to show.
Then you aren't alone.
4. ## Re: Finite group
In your first post, you say that we have a finite group which possesses an automorphism $\sigma$ such that $\sigma (g)=g$ if and only if $g=1$.
Then, what do we have to do?
5. ## Re: Finite group
Originally Posted by girdav
In your first post, you say that we have a finite group which possesses an automorphism $\sigma$ such that $\sigma (g)=g$ if and only if $g=1$.
Then, what do we have to do?
Show it is a homomorphism, monic, and epi.
6. ## Re: Finite group
Originally Posted by dwsmith
Let G be a finite group which possesses an automorphism $\sigma$ such that $\sigma(g)=g$ iff. g = 1.
$(\Rightarrow)$
Suppose |G| = n and $\sigma(g)=g$.
First, I don't get why this would only work if g is 1.
If I had to guess, you are asking about the common problem: "If $G$ is a finite group and $\sigma\in\text{Aut}(G)$ such that $\sigma^2=\text{id}$ and $\sigma$ possesses no non-identity fixed points, then $\sigma$ is the inverse map and $G$ is abelian." | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9548876881599426, "perplexity_flag": "head"} |
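(Added illustration, not from the thread: on the cyclic group $\mathbb{Z}_7$, inversion $g \mapsto -g$ is exactly such an automorphism; it squares to the identity and fixes only $0$.)

```python
n = 7
sigma = lambda g: (-g) % n                                   # inversion on Z_7

assert all(sigma(sigma(g)) == g for g in range(n))           # sigma o sigma = id
assert all(sigma((a + b) % n) == (sigma(a) + sigma(b)) % n   # homomorphism
           for a in range(n) for b in range(n))
print([g for g in range(n) if sigma(g) == g])                # [0]
```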
http://nrich.maths.org/7120 | ### Converse
Clearly if a, b and c are the lengths of the sides of a triangle and the triangle is equilateral then a^2 + b^2 + c^2 = ab + bc + ca. Is the converse true, and if so can you prove it? That is if a^2 + b^2 + c^2 = ab + bc + ca is the triangle with side lengths a, b and c necessarily equilateral?
### Consecutive Squares
The squares of any 8 consecutive numbers can be arranged into two sets of four numbers with the same sum. True or false?
### Parabolic Patterns
The illustration shows the graphs of fifteen functions. Two of them have equations y=x^2 and y=-(x-4)^2. Find the equations of all the other graphs.
# Quadratic Transformations
##### Stage: 4 Challenge Level:
If you have never used the NRICH Number Plumber before, click here to watch a short introductory video.
Click on the image below to investigate the two functions and their graphs.
Try using the same input number for both functions. What do you notice?
Look at the steps which make up the functions.
How are the two functions related?
How are their graphs related?
If you have met the idea of expressing a function in the form $f(x)$, try to write one function in terms of the other.
Experiment with other pairs of functions linked in the same way.
Click on the image below to investigate two more functions and their graphs.
Try to find pairs of input numbers so that both machines give the same output. What do you notice?
Look at the steps which make up the functions.
How are the two functions related?
How are their graphs related?
If you have met the idea of expressing a function in the form $f(x)$, try to write one function in terms of the other.
Experiment with other pairs of functions linked in the same way.
http://mathhelpforum.com/advanced-algebra/212459-linear-transforamtion-proof.html | Thread:
1. Linear Transformation proof?
Let T: V-> W be a bijective linear transformation
Prove that if {v1,v2,...,vn} is a basis for V,
then {T(v1),T(v2),...,T(vn)} is a basis for W
2. Re: Linear Transformation proof?
Assuming $\{v_1,..,v_n\}$ is a basis for V.
To show the set $\{T(v_1),...,T(v_n)\}$ is a basis, we need to show 2 things:
1) Every vector in W can be written as a linear combination of this set (i.e. this set spans W)
2) This set is linearly independent
The first one is pretty straightforward: since $T$ is bijective, for every $w \in W$ we can find a vector in $V$ such that
$T(c_1v_1 + ... + c_nv_n) = w$, and by linearity that equals $c_1T(v_1)+...+c_nT(v_n) = w$. So we picked an arbitrary vector in W and wrote it as a linear combination of $T(v_1),...,T(v_n)$.
The second one is also pretty straightforward. If $\{T(v_1),...,T(v_n) \}$ were linearly dependent, there would be scalars $c_1,...,c_n$, not all zero, with $c_1T(v_1)+...+c_nT(v_n)=0$, i.e. $T(x_1)=0$ for $x_1 = c_1v_1+...+c_nv_n$, and $x_1 \not = 0$ because $\{v_1,...,v_n\}$ is a basis. Now pick any other vector $x_2 \in V$ and observe that $x_3 = x_1 + x_2$ satisfies $x_3 \not = x_2$, but $T(x_3) = T(x_1) + T(x_2) = 0 + T(x_2) = T(x_2)$. How can 2 different vectors in $V$ go to the same vector in W? $T$ was supposed to be one-to-one, so only the zero combination is possible, and thus $\{T(v_1),...,T(v_n) \}$ is linearly independent and hence a basis for W.
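A tiny numerical illustration of the statement (an added sketch, assuming numpy is available; the matrix is a hypothetical example, not from the original thread):

```python
import numpy as np

# If T is bijective (here: an invertible matrix) and {v1,...,vn} is a basis,
# the images T(v1),...,T(vn) should again be a basis, i.e. the matrix whose
# columns are the images has full rank.
rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))   # a random 4x4 matrix, invertible with probability 1
V = np.eye(4)                     # columns: the standard basis of R^4
TV = T @ V                        # columns: T(v1), ..., T(v4)
print(np.linalg.matrix_rank(TV))  # prints 4, so the images span and are independent
```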
http://mathhelpforum.com/calculus/43365-uncountable-set-limit-point.html | # Thread:
1. ## uncountable set: limit point
Prove that an uncountable set of $\bold{R}$ must have a limit point.
If a set $S \subset \bold{R}$ and is uncountable, then $\aleph_{0} < \text{card}(S)$. Now a point $x_0$ is a limit point of $S \subset \bold{R}$ if for every $\varepsilon > 0$, $(x_{0}- \varepsilon, \ x_{0}+ \varepsilon) \cap S \ \backslash \{x_0 \} \neq \emptyset$.
Suppose for contradiction that $S$ did not have a limit point. Would this then create a gap, contradicting the fact that the set is uncountable?
2. Originally Posted by particlejohn
Prove that an uncountable set of $\bold{R}$ must have a limit point.
If a set $S \subset \bold{R}$ and is uncountable, then $\aleph_{0} < \text{card}(S)$. Now a point $x_0$ is a limit point of $S \subset \bold{R}$ if for every $\varepsilon > 0$, $(x_{0}- \varepsilon, \ x_{0}+ \varepsilon) \cap S \ \backslash \{x_0 \} \neq \emptyset$.
Suppose for contradiction that $S$ did not have a limit point. Would this then create a gap, contradicting the fact that the set is uncountable?
i'll try it this way..
if $S$ does not have a limit point, then for every $s\in S$ there is an epsilon neighborhood $V_{\varepsilon,s}$ of $s$ that contains finitely many points in $S$.
Then, $S \subseteq \bigcup_{s \in S} V_{\varepsilon,s}$ is still finite.
EDIT: my mistake, sorry john..
3. Originally Posted by kalagota
i'll try it this way..
if $S$ does not have a limit point, then for every $s\in S$ there is an epsilon neighborhood $V_{\varepsilon,s}$ of $s$ that contains finitely many points in $S$.
Then, $S \subseteq \bigcup_{s \in S} V_{\varepsilon,s}$ is still finite.
which is the contradiction I was aiming for?
4. Hi
Originally Posted by kalagota
if $S$ does not have a limit point, then for every $s\in S$ there is an epsilon neighborhood $V_{\varepsilon,s}$ of $s$ that contains finitely many points in $S$.
Then, $S \subseteq \bigcup_{s \in S} V_{\varepsilon,s}$ is still finite.
I don't know how to answer the original question but this solution doesn't seem right to me : take $S=[0,1]$, for $s\in S,\, \{s\}$ is a finite set but $\bigcup_{s\in S}\{s\}=S$ is uncountable. (a union of countable sets is not necessarily countable)
oh yeah! it is the intersection of an arbitrary collection of finite sets that is finite..
thanks for that and sorry john for the mistakes..
EDIT: I also jumbled the concepts of countability and finiteness..
6. I did not try an answer before because I was unsure of what you are working with,
But here are some observations.
If $S$ has no limit points, then every point of $S$ is not a limit point of $S$.
If $t \in S$ there is an open interval such that $V_t = \left( {t - \delta ,t + \delta } \right)\,\& \,S \cap V_t = \left\{ t \right\}$.
Because between any two numbers there is a rational number, $\left( {\exists x_t \in Q} \right)\left[ {t \in \left( {x_t - \varepsilon ,x_t + \varepsilon } \right) \subseteq \left( {t - \delta ,t + \delta } \right)} \right]$.
Name $\left( {x_t - \varepsilon ,x_t + \varepsilon } \right) = U_{x_t }$. Note that $U_{x_t }$ contains exactly one point of $S$.
How many $U_{x_t }$ could there be? (This is a modification of a proof by John Kelly.)
7. Originally Posted by Plato
I did not try an answer before because I was unsure of what you are working with,
But here are some observations.
If $S$ has no limit points, then every point of $S$ is not a limit point of $S$.
If $t \in S$ there is an open interval such that $V_t = \left( {t - \delta ,t + \delta } \right)\,\& \,S \cap V_t = \left\{ t \right\}$.
Because between any two numbers there is a rational number, $\left( {\exists x_t \in Q} \right)\left[ {t \in \left( {x_t - \varepsilon ,x_t + \varepsilon } \right) \subseteq \left( {t - \delta ,t + \delta } \right)} \right]$.
Name $\left( {x_t - \varepsilon ,x_t + \varepsilon } \right) = U_{x_t }$. Note that $U_{x_t }$ contains exactly one point of $S$.
How many $U_{x_t }$ could there be? (This is a modification of a proof by John Kelly.)
There can be an infinite number of $U_{x_t}$.
8. Originally Posted by particlejohn
There can be an infinite number of $U_{x_t}$.
Yes, but only countably many! The rationals are countable.
But you are given that $S$ is uncountable.
Is that a contradiction?
9. That is a contradiction. Thanks. Just a quick question: $\left( {\exists x_t \in Q} \right)\left[ {t \in \left( {x_t - \varepsilon ,x_t + \varepsilon } \right) \subseteq \left( {t - \delta ,t + \delta } \right)} \right]$.
Is this basically saying that there is a rational number $x_t$, such that the neighborhood of $x_t$ is a subset of the neighborhood of the real number?
10. Originally Posted by particlejohn
Is this basically saying that there is a rational number $x_t$, such that the neighborhood of $x_t$ is a subset of the neighborhood of the real number?
What in the world does that mean: “subset of a number”?
It says that given any point $t \in S$ there is a neighborhood, V, of $t$ that contains no other point of $S$.
Then there is neighborhood, U, with a rational center that contains $t$ and is a subset of neighborhood, V.
11. Originally Posted by Plato
What in the world does that mean: “subset of a number”?
It says that given any point $t \in S$ there is a neighborhood, V, of $t$ that contains no other point of $S$.
Then there is neighborhood, U, with a rational center that contains $t$ and is a subset of neighborhood, V.
I said subset of neighborhood.
12. Originally Posted by particlejohn
Prove that an uncountable set of $\bold{R}$ must have a limit point.
If a set $S \subset \bold{R}$ and is uncountable, then $\aleph_{0} < \text{card}(S)$. Now a point $x_0$ is a limit point of $S \subset \bold{R}$ if for every $\varepsilon > 0$, $(x_{0}- \varepsilon, \ x_{0}+ \varepsilon) \cap S \ \backslash \{x_0 \} \neq \emptyset$.
Suppose for contradiction that $S$ did not have a limit point. Would this then create a gap, contradicting the fact that the set is uncountable?
Break the real line into countably many disjoint, bounded intervals ..., [-2,-1), [-1,0), [0, 1), [1, 2), ...
At least one of these intervals must contain infinitely many members of S-- otherwise S would be the union of countably many finite sets and would be countable. So...
13. Originally Posted by awkward
Break the real line into countably many disjoint, bounded intervals ..., [-2,-1), [-1,0), [0, 1), [1, 2), ...
At least one of these intervals must contain infinitely many members of S-- otherwise S would be the union of countably many finite sets and would be countable. So...
I think we have to distinguish the definition of limit points in topology.
The points in any interval in R are limit points. So in fact, all of those intervals are sets of limit points, even the end points.
14. I then list some candidates for S.
1. a set that contains an interval.
2. a set that contains the irrationals.
3. (both of course)
4. (and of course, R)
What are the other sets that are uncountable?
http://www.physicsforums.com/showthread.php?s=0076ac1cd845fd0bf344561eb63ff067&p=4257238 | Physics Forums
## Generators of Z6
I understand how {1} and {5} are generators of Z6.
{1} = {1, 2, 3, 4, 5, 0} = {0, 1, 2, 3, 4, 5}
{5} = {5, 4, 3, 2, 1, 0} = {0, 1, 2, 3, 4, 5}
But my book also says that {2, 3} also generates Z6 since 2 + 3 = 5, such as {2,3,4} and {3,4} I believe. Thus every subgroup containing 2 and 3 must also contain 5, except for {2,4}.
Can someone explain this to me? Ty in advance.
Your notation is incorrect. {1} is a set containing one element, 1. {1, 2, 3, 4, 5, 0} is a set containing six elements. Therefore {1} does not equal {1, 2, 3, 4, 5, 0}. The usual notation for the group generated by a set is a pair of angle brackets: <{1}> denotes the group generated by the set {1}. It is true that <{1}> = <{5}> = Z6. It is also (trivially) true that <{1, 2, 3, 4, 5, 0}> = Z6.

Note that in general, if S is a subset of Z6, <S> is the smallest subgroup of Z6 which contains all of the elements of S. If S is a subgroup, then S = <S>. Also, it's easy to verify that if S $\subseteq$ T, then <S> $\subseteq$ <T>.

Now what about <{2,3}>? This is a group, by definition, so it must be closed under addition. Thus <{2,3}> must contain 5 because 2+3=5. In other words, {5} $\subseteq$ <{2,3}>. Therefore Z6 = <{5}> $\subseteq$ <{2,3}>. For the reverse containment, we have {2,3} $\subseteq$ {1,2,3,4,5,0}, so <{2,3}> $\subseteq$ <{1,2,3,4,5,0}> = Z6. We conclude that <{2,3}> = Z6.
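To see the closure computation concretely, here is a small script (an added sketch in plain Python, not from the original thread):

```python
# Compute the subgroup <S> of Z_n generated by a subset S,
# by repeatedly closing S under addition mod n.
def generated_subgroup(S, n=6):
    H = {0} | set(S)
    while True:
        new = {(a + b) % n for a in H for b in H}
        if new <= H:
            return sorted(H)
        H |= new

print(generated_subgroup({1}))     # [0, 1, 2, 3, 4, 5]
print(generated_subgroup({5}))     # [0, 1, 2, 3, 4, 5]
print(generated_subgroup({2, 3}))  # [0, 1, 2, 3, 4, 5]  (closure forces 5 = 2+3, etc.)
print(generated_subgroup({2, 4}))  # [0, 2, 4]           (a proper subgroup)
```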
http://mathhelpforum.com/calculus/144699-stokes.html | # Thread:
1. ## stokes
May I know what is wrong with my workings?
2. Originally Posted by alexandrabel90
May I know what is wrong with my workings?
Yes - two things. First ${\bf n} = <1,1,1>$ and second, it has to be a unit normal so you have to divide each component by $\sqrt{3}$ .
3. Must it always be a unit normal vector?
Because I thought I could actually use N(x,y) = (-f_x, -f_y, 1) dA, where dA is the area element on the xy-plane.
4. There are two ways of thinking about this. A vector integral over a surface is usually written " $\int\int \vec{f}\cdot\vec{n}dS$" where n is a unit normal vector to the surface and $dS$ is the "differential of surface area".
Here, the surface is the plane x+ y+ z= 5. The way I prefer to calculate dS is to write z= 5- x- y so that the "position vector" of any point in the plane is $\vec{X}(x,y)= x\vec{i}+ y\vec{j}+ z\vec{k}= x\vec{i}+ y\vec{j}+ (5- x- y)\vec{k}$ in terms of the two parameters x and y. Now, the derivative vectors $\vec{X}_x= \vec{i}- \vec{k}$ and $\vec{X}_y= \vec{j}- \vec{k}$ are in the tangent plane and their cross product, $\vec{X}_x\times\vec{X}_y= \vec{i}+ \vec{j}+ \vec{k}$ is normal to the surface (you could have gotten that from the fact that $A\vec{i}+ B\vec{j}+ C\vec{k}$ is normal to Ax+ By+ Cz= D) and has length $\sqrt{3}$ so that $\vec{n}= \frac{1}{\sqrt{3}}\vec{i}+ \frac{1}{\sqrt{3}}\vec{j}+ \frac{1}{\sqrt{3}}\vec{k}$ is the unit normal and the "differential of surface area" is the length of that vector times dxdy or $\sqrt{3}dxdy$.
That is, here $\vec{n}dS= (\frac{1}{\sqrt{3}}\vec{i}+ \frac{1}{\sqrt{3}}\vec{j}+ \frac{1}{\sqrt{3}}\vec{k})\sqrt{3}dxdy= (\vec{i}+ \vec{j}+ \vec{k})dxdy$. The two "lengths", $\sqrt{3}$, cancel!
That is why I prefer to write " $d\vec{S}$" and write the integral of a vector function over a surface as $\int\int \vec{f}\cdot d\vec{S}$ to begin with.
Here, $\nabla\times\vec{f}= \vec{i}+ \vec{j}+ \vec{k}$ and $d\vec{S}= (\vec{i}+ \vec{j}+ \vec{k})dxdy$ so that $\nabla\times\vec{f}\cdot d\vec{S}= 3dxdy$ so that the integral is just 3 times the area of the surface- which is a "square of side length 2".
Note that $\vec{X}_x\times\vec{X}_y= -\vec{X}_y\times\vec{X}_x$ so that you have to choose in which order you will take the cross product. I took $\vec{X}_x\times\vec{X}_y= \vec{i}+ \vec{j}+ \vec{k}$ rather than $\vec{X}_y\times\vec{X}_x= -\vec{i}- \vec{j}- \vec{k}$.
That is equivalent to choosing an orientation for the surface and for its boundary. Here we are told "oriented clockwise as seen from the origin". Given that orientation for the boundary, a person "walking around" the boundary, with left side toward the interior, would have his/her head in the positive z direction. That means that we must choose the sign so that the z-component is positive.
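For what it's worth, a quick numerical check of the value computed above (an added sketch, assuming sympy is available; it takes curl f = (1,1,1) and n dS = (1,1,1) dx dy from the thread):

```python
import sympy as sp

x, y = sp.symbols('x y')
curl_f = sp.Matrix([1, 1, 1])   # curl of the field, as computed in the thread
n_dS = sp.Matrix([1, 1, 1])     # X_x cross X_y for the plane x + y + z = 5, times dx dy
flux = sp.integrate(curl_f.dot(n_dS), (x, 0, 2), (y, 0, 2))
print(flux)  # 12, i.e. 3 times the area of the square of side 2
```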
http://mathoverflow.net/questions/18268/discrete-stochastic-process-exponentially-correlated-bernoulli | ## discrete stochastic process: exponentially correlated Bernoulli?
There is a question that was asked on stackoverflow that at first sounds simple but I think it's a lot harder than it sounds.
Suppose we have a stationary random process that generates a sequence of random variables x[i] where each individual random variable has a Bernoulli distribution with probability p, but the correlation between any two of the random variables x[m] and x[n] is α^|m-n|.
How is it possible to generate such a process? The textbook examples of a Bernoulli process (right distribution, but independent variables) and a discrete-time IID Gaussian process passed through a low-pass filter (right correlation, but wrong distribution) are very simple by themselves, but cannot be combined in this way... can they? Or am I missing something obvious? If you take a Bernoulli process and pass it through a low-pass filter, you no longer have a discrete-valued process.
(I can't create tags, so please retag as appropriate... stochastic-process?)
-
you could try to take a process $x(t)$ like an Ornstein-Uhlenbeck, that has a correlation structure that decreases exponentially, and then define $B_n = 1_{x(n) > \alpha}$ where $\alpha$ is a well-chosen threshold - I have not done the computations, but I have the feeling that the correlation between these Bernoulli random variables also decreases exponentially. Do you really need the correlation to be equal to $\alpha^{|m-n|}$ ? Would an exponentially decreasing correlation be enough for your particular purpose ? – Alekk Mar 15 2010 at 14:59
thx for the suggestion... I'm posting this on behalf of someone else (see the link in the 1st sentence) so I do not know the stringency of their requirements. The problem seemed simple enough to state that I felt I could translate into a "proper" problem statement for mathoverflow. – Jason S Mar 15 2010 at 15:20
...and I had kind of the same hunch (make a continuous-value process, then use a threshold to produce a binary-value output) but don't quite know how to go about characterizing the output process w/r/t correlation, other than an empirical calculation on the computer. – Jason S Mar 15 2010 at 15:56
By the way, the SO problem is not $\alpha^{|m-n|}$, but $c|m-n|^{-\alpha}$. – Douglas Zare Mar 15 2010 at 18:09
Yes, that was pointed out to me... but I am suspicious + wondering if the OP meant alpha ^ |m-n|. Using the c |m-n| ^ (-alpha) formula, correlation is undefined for m=n. – Jason S Mar 15 2010 at 19:15
## 4 Answers
Here is a construction.
• Let $\{Y_i\}$ be independent Bernouilli random variables with probability $p$.
• Let $N(t)$ be a Poisson process chosen so that $P(N(1)=0)=\alpha$.
• Let $X_i = Y_{N(i)}$.
In words, we have some radioactive decay which tells us when to flip a new (biased) coin. $X_n$ is the last coin flipped at time $n$. The correlation between $X_m$ and $X_n$ comes from the possibility that there are no decays between time $m$ and time $n$, which happens with probability $\alpha^{|m-n|}$.
The conditional correlation between $X_m$ and $X_n$ is $1$ if $N(m) = N(n)$, and $0$ if $N(m)\ne N(n)$, so $\text{Cor}(X_n,X_m) = P(N(m)=N(n)) = \alpha^{|m-n|}.$
You can simplify this by saying that $N(i) = \sum_{t=1}^i B_t$ where $\{B_t\}$ are independent Bernoulli random variables which are $0$ with probability $\alpha$.
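For concreteness, a small simulation of this construction (an added sketch, assuming numpy; the parameter values are arbitrary):

```python
import numpy as np

def correlated_bernoulli(n, p, alpha, rng=None):
    """Stationary 0/1 sequence: each X_i is Bernoulli(p), and
    Corr(X_m, X_n) = alpha**|m - n|, via the 'keep the old coin with
    probability alpha, else flip a fresh one' construction above."""
    rng = np.random.default_rng(rng)
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p
    for i in range(1, n):
        # with probability alpha keep the old coin, else flip a new Bernoulli(p)
        x[i] = x[i - 1] if rng.random() < alpha else (rng.random() < p)
    return x

xs = correlated_bernoulli(200_000, p=0.3, alpha=0.8, rng=0)
# empirical lag-1 correlation should be close to alpha = 0.8
print(np.corrcoef(xs[:-1], xs[1:])[0, 1])
```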
-
fascinating! I think I understand... thanks! – Jason S Mar 15 2010 at 19:13
Brilliant answer – David Bar Moshe Mar 16 2010 at 9:44
Phrasing it in terms of a Poisson process seems overly complicated; the properties of Poisson processes aren't actually used. Couldn't one just phrase it as follows? Let $$X_{i+1} = \begin{cases} X_i & \text{with probability }\alpha; \\ \text{a new Bernoulli trial independent of }X_i & \text{with probability }1-\alpha. \end{cases}$$ – Michael Hardy Jun 2 2010 at 20:36
In other words:
Start with a random variable $X_0$ Bernoulli with parameter $p$, random variables $Y_n$ Bernoulli with parameter $\alpha$, random variables $Z_n$ Bernoulli with parameter $p$, and assume that all these are independent. Define recursively the sequence $(X_n)_{n\ge0}$ by setting $X_{n+1}=Y_nX_n+(1-Y_n)Z_n$ for every $n\ge0$.
Then $X_n$ and $X_{n+k}$ are conditionally correlated if and only if $Y_i=1$ for every $i$ from $n$ to $n+k-1$. This happens with probability $\alpha^k$, hence you are done.
This is Douglas Zare's idea, but with no Poisson process.
-
Interesting variation, thanks! – Jason S Mar 16 2010 at 14:06
The last line of my answer gave the same construction. My $B_i$ is your $1-Y_i$. – Douglas Zare Mar 16 2010 at 14:33
I suggest also to look a the paper: Generating spike-trains with specified correlations. By Jakob Macke, Philipp Berens, et al. (Max Planck Institute for Biological Cybernetics.).
They also offer a Matlab Package for 'Sampling from multivariate correlated binary and poisson random variables' ... also available at Matlab central:
-
The above solution is very nice, but relies on the very special structure of the desired process. In a much more general framework, I think that one could use a perfect simulation algorithm as described in:
Processes with long memory: Regenerative construction and perfect simulation, Francis Comets, Roberto Fernández, and Pablo A. Ferrari, Ann. Appl. Probab. 12, Number 3 (2002), 921-943.
-
http://mathoverflow.net/revisions/115477/list | ## Return to Question
Let us say that a finite set $A$ in the plane is $1$-separated if:
1) it has an even number of points;
2) no open ball of diameter $1$ contains more than $|A|/2$ points.
For a $1$-separated set $A$ define $G(A)$ to be a graph where two points $x,y$ in $A$ are joined by an edge iff the distance between them is at least $1$.
Question: can one find a finite set of graphs $G_1,\dots,G_n$ such that any $1$-separated set $A$ can be partitioned into non-empty $1$-separated sets $A_1,\dots,A_k$ such that $G(A_i)$ is isomorphic to one of the $G_j$'s?
Comment: The definition makes sense on the real line (the ball of diameter $1$ is replaced by an interval of length $1$). In that case we can take $n=1$ and $G_1$ to be a graph on two vertices joined by an edge (that is, $G(A)$ contains a matching).
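To make the definition of $G(A)$ concrete, a small illustrative script (an addition, assuming plain Python; the sample point set is hypothetical):

```python
# Vertices are the points of A, with an edge whenever the distance is >= 1.
from itertools import combinations
from math import dist

def G(A):
    """Return the edge set of G(A) for a finite point set A in the plane."""
    return {(i, j) for (i, p), (j, q) in combinations(enumerate(A), 2)
            if dist(p, q) >= 1}

A = [(0, 0), (2, 0), (0, 2), (2, 2)]   # a sample 1-separated set
print(G(A))  # all 6 pairs are at distance >= 1, so G(A) is complete
```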
http://mathoverflow.net/questions/62552?sort=newest | ## Functional equations relating to p-adic L-functions
Let f be a modular form of weight k for $\Gamma_0(N)$. Let us assume that $p \nmid N$. Then we can construct 2 p-adic L-functions corresponding to the 2 roots $\alpha$ and $\beta$ of the equation $x^2+a_px+p^{k-1}=0$ (assume we are not in the critical slope case) and they are related by $L_{(p,\alpha)}(f,j)=\frac{(1-p^{j-1}/\alpha)(1-\beta/p^j)L_{(p,\beta)}(f,j)}{(1-p^{(j-1)}/\beta)(1-\alpha/p^j)}$. So one of them determines the other. So I have 2 questions: what happens in the critical slope case, i.e. is there some relation between the p-adic L-function corresponding to the unit root and the critical slope one? The only reason one can even hope to get such a thing is that the roots have a relation between them, together with the usual functional equation for complex L-functions. My second question is how one should think of the Euler factors that appear. (Maybe that should be my first question.)
-
Just a trivial correction: $\alpha$ and $\beta$ are the roots of the Hecke polynomial, which is $X^2 - a_p X + p^{k-1}$ -- you are out by a sign on the linear term. – David Loeffler Apr 21 2011 at 21:07
Perrin-Riou's work gives a meaning to the Euler factors that arise, so you could take a look at that. – Rob Harron Apr 21 2011 at 22:21
@ david sorry for the typo. @ Rob I have tried to read that paper but I always find her work really hard to understand. I wish there were an exposition of that paper. – Arijit Apr 22 2011 at 12:59
I wouldn't call the interpolation factors that arise *Euler factors*; they are not Euler factors as far as I know. – Emerton Apr 22 2011 at 22:14
Correction to the previous comment; as Rob H. explains below, they are a ratio of Euler factors. So perhaps I should retract my preceding comment! – Emerton Apr 22 2011 at 22:17
## 2 Answers
To answer the second question:
The interpolation factor is the determinant of $1-\varphi$ on $D_{\text{cris}}$ divided by its determinant on $D_{\text{cris}}^\ast(1)$. As to why this is what it should be, you can trace that back to Coates & Perrin-Riou's original paper (p-adic L-functions attached to motives over Q) where right above their definition of the interpolation factor (equation 4.11) they say "Following a suggestion of R. Greenberg". Deligne suggested an interpretation of the interpolation factor as modifying some $\varepsilon$-factors in a way completely analogous to the modifications of the Gamma-factors at $\infty$, this appears in the papers Coates wrote after the Coates–Perrin-Riou paper, of which Motivic p-adic L-functions in the Durham proceedings is the most definitive account. So that's a couple of ways of thinking about the Euler factors that appear in the interpolation property.
Update: @arijit: to answer the question you asked in the comments of David's answer, I think what you do is the following: In the ordinary case, the Selmer group one generally looks at is the one Greenberg defined, i.e. the "ordinary" Selmer group whose definition is based on the existence of the subrepresentation on which Frobenius acts via the unit root $\alpha$. I.e. there is a sub $W\subseteq V$ and the local condition of the Selmer group is
$$\ker\left(H^1(D_p,V)\rightarrow H^2(I_p,V/W)\right)$$
Accordingly, you should be looking at the $p$-adic $L$-function given by $\alpha$. Now, the filtered $(\varphi,N)$-module $D=D_{\text{cris}}(V)$, has two $\varphi$-stable subspaces: the one coming from the bona fide subrepresentation, namely $D_{\text{cris}}(W)$, and a non-admissible sub $D^\prime$ coming from the non-unit root. If you use this sub, you can define a local condition for a Selmer group completely analogously to the Greenberg defintion, but in the cohomology of $(\phi,\Gamma)$-modules. This should be related to the critical $p$-adic $L$-function. Basically, different "refinements" of the filtered $(\varphi,N)$-module (or "triangulations" of the $(\phi,\Gamma)$-module, or "$p$-stabilizations" of the automorphic representation) should correspond to different Selmer groups and different $p$-adic $L$-functions. Someone please correct me if I've simply made this up!
-
Dear Rob, Thanks for this nice post. Cheers, Matt – Emerton Apr 22 2011 at 22:16
My pleasure! -Rob – Rob Harron Apr 22 2011 at 22:31
Thanks a lot.That is a great answer! – Arijit Apr 24 2011 at 0:44
The formula you give relating $L_\alpha$ and $L_\beta$ is correct, but it is only valid for $1 \le j \le k-1$, so it only gives you finitely many values and hence it doesn't show that one L-function determines the other.
There is another similar formula relating the values of $L_\alpha$ and $L_\beta$ for twists of $f$ by p-power Dirichlet characters, though. Essentially these formulae are a by-product of the fact that the values of the p-adic L-functions are related to values of the corresponding complex L-function. In the critical slope case, the special values don't uniquely determine the L-function.
There is much more on these critical-slope L-functions in Pollack + Stevens' papers "Overconvergent modular symbols and p-adic L-functions", and "Critical slope p-adic L-functions", as well as other more recent preprints by Bellaiche and by myself and Zerbes.
-
Thanks a lot for your reply. That clears a lot of things. – Arijit Apr 22 2011 at 12:58
Maybe I should add my motivation for asking this question: The arithmetically defined p-adic L-function is the characteristic polynomial of some Selmer group. So my question is which of these 2 should it be? – Arijit Apr 22 2011 at 13:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.930776059627533, "perplexity_flag": "middle"} |
http://stats.stackexchange.com/questions/6524/maximum-likelihood-and-sufficient-statistics | # Maximum likelihood and sufficient statistics
$f_T(t;B,C) = \dfrac{e^{-t/C}-e^{-t/B}}{C-B}$, where the mean is $C+B$ and $t>0$.
So far I have found my log-likelihood function and differentiated it as follows:
$\dfrac{\partial \ell}{\partial B} = \sum\left[\dfrac{t\,e^{t/C}}{B^{2}\left(e^{t/C}-e^{t/B}\right)}\right] + \dfrac{n}{C-B} = 0$
I have also found a similar $\partial \ell/\partial C$.
I have now been asked to comment on what can be found in the way of sufficient statistics for estimating these parameters, and on why there is no simple way of using maximum likelihood for estimation in this problem. I am simply unsure what to comment on. Any help would be appreciated. Thanks, Rachel
-
this site supports latex, please reformat your question. I am reluctant to do it myself, since the notation is not very clear. – mpiktas Jan 25 '11 at 13:41
## 2 Answers
OK, your question isn't perfectly clear but maybe I can help a little.
A statistic $T(X)$ is sufficient for a parameter $\theta$ if
$P(X|T(X), \theta) = P(X|T(X))$
In terms of likelihood functions you can verify that this implies
$f(x;\theta) = h(x)g(T(x); \theta)$
for some $h$ and $g$, which is known by a few different monikers (the factorization theorem/lemma/criterion, sometimes with a name or two attached). This is where @probabilityislogic's comment comes from, although like I said it's just a property of the likelihood function.
There are often a lot of different sufficient statistics (in particular, take $h=1$ and $g=f$, where $T(X)=X$ is just the entire dataset). Since the goal is to find a particular way to reduce the data without losing information, this leads into questions of minimal/complete sufficient statistics, etc. It's not clear what you need for your question, so I'll leave off there.
In terms of the MLE, your notation is a little confusing to me so I'll make a couple general comments. What problems can happen finding the MLE? It might not have a closed form, which is less a problem than a complication. It can fail to be unique, or occur at the edge of the parameter space, be infinite, etc. You need to at least define the parameter space, which you haven't done in your problem statement so far as I can tell.
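To illustrate the "no closed form" point numerically, a rough sketch (an addition, assuming numpy and scipy; it uses the fact, consistent with the stated mean $C+B$, that the question's density is that of a sum of two independent exponentials with means $B$ and $C$):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
B_true, C_true = 1.0, 3.0
t = rng.exponential(B_true, 5000) + rng.exponential(C_true, 5000)  # simulated data

def neg_log_lik(params):
    B, C = params
    if B <= 0 or C <= 0 or np.isclose(B, C):
        return np.inf
    # density from the question: (exp(-t/C) - exp(-t/B)) / (C - B)
    return -np.sum(np.log((np.exp(-t / C) - np.exp(-t / B)) / (C - B)))

fit = minimize(neg_log_lik, x0=[0.5, 2.0], method='Nelder-Mead')
print(fit.x)  # roughly (1.0, 3.0), up to swapping B and C (the likelihood is symmetric)
```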
-
The "failsafe" way to find a sufficient statistic in just about any problem: Calculate the Bayesian posterior distribution (up to prop constant) using a uniform prior. If a sufficient statistic exists, this method will tell you what it is. So basically, strip all multiplicative factors (those factors which do not involve your parameters, but may involve functions of the data) from your likelihood function. I would suggest that the "sum function" stuff is your sufficient statistic (although the notation was not clear at the time I answered this question).
NOTE: the use of a uniform prior may make the posterior improper but it will show you what the functions are sufficient for your problem.
-
This hasn't got anything to do with Bayes; it's just the factorization lemma and is a direct consequence of the definition of sufficiency. Bringing Bayes into the mix just complicates things. – JMS Feb 25 '11 at 1:53
@JMS - Yes I do understand your "factorisation" theorem argument. What I am saying is that using Bayes Theorem will 1) tell you if a sufficient statistic of reduced dimensionality exists, and 2) what the sufficient statistic is for your problem. The use of "sufficient statistics" is basically a way to bring "frequentist" statistics" closer to "Bayesian statistics" without admitting that one is doing so. They also reduce the calculations one has to perform in an analysis. Sufficiency is also closely related to the maximum entropy method (aka "ultimate inference"). – probabilityislogic Feb 25 '11 at 8:37
"What I am saying is that using Bayes Theorem will 1) tell you if a sufficient statistic of reduced dimensionality exists, and 2) what the sufficient statistic is for your problem." That has nothing to do with Bayes theorem, which is my point. Sufficiency is a property of the likelihood function and exists independently of whatever mode of inference you prefer. – JMS Feb 26 '11 at 3:39
You're missing my point entirely. Whatever mode of inference you prefer, sufficiency is simply a property of the likelihood function. Bayes theorem doesn't add anything at all. You aren't using Bayes theorem when you factor a likelihood! Nor does it add to the results that follow about minimal/complete/ancillary statistics. Deriving sufficient statistics doesn't require you to accept the principles of Bayesian inference; in fact it's intimately related to the ideas behind MLE. – JMS Feb 27 '11 at 5:02
Incidentally, having read many of those textbooks on statistical inference I'd take your bet. Casella & Berger, Bickel & Doksum, Lehman & Casella, etc all at least discuss Bayesian parameter estimation if not some decision theory. Certainly it doesn't get the treatment it deserves but it's unreasonable to expect an introductory text to go too deep - and you can't become a statistician just on the back of them anyway. I'm an avowed Bayesian myself, but a solid understanding of classical statistics is important. – JMS Feb 27 '11 at 5:12
http://physics.stackexchange.com/questions/17994/how-to-write-the-frohlich-hamiltonian-in-one-dimension?answertab=oldest | # How to write the Fröhlich Hamiltonian in one dimension?
I am currently working on a (functional) analysis problem refining Pekar's Ansatz (or adiabatic approximation, as it is called in his beautiful 1961 manuscript "Research in Electron Theory of Crystals").
Anyways, I have two related questions, which the members of this community may find simple.
The Fröhlich Hamiltonian is given as follows in three dimensions
$$H=\mathbf{p^{2}}+\sum_{k}a_{k}^{\dagger}a_{k}-\biggl(\frac{4\pi\alpha}{V}\biggr)^{\frac{1}{2}}\sum_{k}\biggl[\frac{a_{k}}{|\mathbf{k}|}e^{i\mathbf{k\cdot x}}+\frac{a_{k}^{\dagger}}{|\mathbf{k}|}e^{-i\mathbf{k\cdot x}}\biggr]$$
The physical scenario here is an electron moving in a 3-dimensional crystal. Each $k$ signifies a (vibrational) mode of the crystal.
If we restrict ourselves to just a 1-dimensional crystal, why is it that the Hamiltonian can be written as follows:
$$H=\mathbf{p^{2}}+\sum_{k}a_{k}^{\dagger}a_{k}-\biggl(\frac{4\pi\alpha}{V}\biggr)^{\frac{1}{2}}\sum_{k}\biggl[a_{k}e^{i\mathbf{k\cdot x}}+a_{k}^{\dagger}e^{-i\mathbf{k\cdot x}}\biggr]$$
Namely, why do we drop the $|\mathbf{k}|$ factor in the third term?
Furthermore, I see how the creation and annihilation operators work on the (bosonic) Fock space (referring to the crystal here), especially when we write the creation operator in the form $\sum_{k=0}^{\infty}\frac{(a^{\dagger})^{k}}{\sqrt{k!}}\left|0\right\rangle =\left|k\right\rangle$. Namely, the creation operator is jumping from one tensored state in Fock Space to the next. However, I also see the form $a_{k}=\frac{1}{\sqrt{2}}\bigl(k+\frac{d}{dk}\bigr)$. How are the two forms connected? How do you intuitively think of the latter form? For example, I thought of the former form as the creation operator jumping from one state in fock space to the next, but the latter form I am not quite sure.
-
This is a good question, but I think it would be better asked as two separate pieces. As a rule of thumb, you should be able to actually write your question in the title, and (maybe it's just me, but) I don't think your two questions are closely related enough to do that succinctly. I'd suggest removing the second question from this post and posting it separately. – David Zaslavsky♦ Dec 8 '11 at 6:16
Can you give the exact reference that says you can drop the |k|? You might have misread a nontrivial manipulation that might be apparent in context. – Ron Maimon Dec 8 '11 at 9:57
1
Also, the equation you have for $\left|k\right\rangle$ makes no sense --- there is no free $k$ index on the left hand side... – genneth Dec 8 '11 at 10:13
@genneth: The "k" in the formula is an occupation number on the right, but is summed over on the left. I was going to fix it to be an "n", but it does indicate a conceptual confusion. r.g.: Each k mode needs to have a seperate integer occupation number n, and the formula you wrote for the action of the creation operator is on one occupation number only. – Ron Maimon Dec 8 '11 at 21:07
## 1 Answer
The answer to your second question is simple. For a Harmonic oscillator, the creation and annihilation operators are related to the x and p operators by (up to a choice of units):
$$a = x+ ip$$ $$a^\dagger = x-ip$$
Writing the x operator as $i{\partial\over\partial p}$ reproduces your formula (up to phases and signs; the above phases and signs are correct in the usual physics conventions), with p playing the role of k. The k in your formula must be interpreted as the k operator.
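To spell out the standard oscillator algebra behind this (an added check; textbook material rather than part of the original answer): in the $k$-representation take
$$a=\frac{1}{\sqrt{2}}\left(k+\frac{d}{dk}\right),\qquad a^{\dagger}=\frac{1}{\sqrt{2}}\left(k-\frac{d}{dk}\right).$$
Acting on any test function $f(k)$,
$$[a,a^{\dagger}]f=\frac{1}{2}\Big[\big(k+\tfrac{d}{dk}\big)\big(k-\tfrac{d}{dk}\big)-\big(k-\tfrac{d}{dk}\big)\big(k+\tfrac{d}{dk}\big)\Big]f=f,$$
so $[a,a^{\dagger}]=1$. The ground state solves $a\psi_0=0$, giving $\psi_0(k)\propto e^{-k^{2}/2}$, and $a^{\dagger}$ sends $\psi_n$ to $\psi_{n+1}$ up to normalization; this ties the differential-operator form to the ladder action on occupation numbers.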
This is the polaron problem, which was studied heavily in the mid 1950s, after Frohlich deduced from the isotope effect that phonon-electron interactions must be responsible for superconductivity. The oscillators are the phonon modes, and the 1 over |k| tells you that long-wavelength phonons are singular, but I don't know the answer to the first question, because the identity is superficially impossible: one of the two forms will be dimensionally inconsistent. If you provide a reference to fix the conventions, one can decide which one is correct, and perhaps this is a simple misunderstanding. The phonons in your description all have the exact same frequency, for example, which is incorrect--- the dispersion for phonons should make the second term $\sum_k |k| a_k^{\dagger} a_k$.
User wsc tells me that the Frohlich Hamiltonian is used to model the interactions with optical phonons. These have a flat dispersion, so the phonon part is ok as you wrote it.
-
This is also very common in quantum optics, where the variance in x and y correspond to intensity and phase fluctuations respectively. – Antillar Maximus Dec 8 '11 at 11:53
2
Most of this is fine, but Froehlich hamiltonians generally describe an interaction with optical phonons, so the flat dispersion is correct. – wsc Dec 8 '11 at 15:28
@wsc: Sorry, I misread your comment. I will update the answer accordingly. – Ron Maimon Dec 9 '11 at 0:37
http://mathforum.org/mathimages/index.php?title=Divergence_Theorem&diff=6668&oldid=6652 | # Divergence Theorem
Fountain Flux
The water flowing out of a fountain demonstrates an important theorem for vector fields, the Divergence Theorem.
Field: Calculus
Created By: Brendan John
# Basic Description
Consider a fountain like the one pictured, particularly its top layer. The rate that water flows out of the fountain's spout is directly related to the amount of water that flows off the top layer. Because something like water isn't easily compressed like air, if more water is pumped out of the spout, then more water will have to flow over the boundaries of the top layer. This is essentially what the Divergence Theorem states: the total fluid being introduced into a volume is equal to the total fluid flowing out of the boundary of the volume.
# A More Mathematical Explanation
Note: understanding of this explanation requires some multivariable calculus.
The Divergence Theorem in its pure form applies to vector fields. Flowing water can be considered a vector field because at each point the water has a position and a velocity vector. Faster moving water is represented by a larger vector in our field. The divergence of a vector field is a measurement of the expansion or contraction of the field; if more water is being introduced then the divergence is positive. Analytically, the divergence of a field $F$ is
$\nabla\cdot\mathbf{F} =\partial{F_x}/\partial{x} + \partial{F_y}/\partial{y} + \partial{F_z}/\partial{z}$,
where $F _i$ is the component of $F$ in the $i$ direction. Intuitively, if F has a large positive rate of change in the x direction, the partial derivative with respect to x in this direction will be large, increasing total divergence. The divergence theorem requires that we sum divergence over an entire volume. If this sum is positive, then the field must indicate some movement out of the volume through its boundary, while if this sum is negative, the field must indicate some movement into the volume through its boundary. We use the notion of flux, the flow through a surface, to quantify this movement through the boundary, which itself is a surface.
The divergence theorem is formally stated as:
$\iiint\limits_V\left(\nabla\cdot\mathbf{F}\right)dV=\oiint\limits_{\partial V}\mathbf F\cdot\mathbf n\,{d}S .$
The left side of this equation is the sum of the divergence over the entire volume, and the right side of this equation is the sum of the field perpendicular to the volume's boundary at the boundary, which is the flux through the boundary.
### Example of Divergence Theorem Verification
The following example verifies that given a volume and a vector field, the Divergence Theorem is valid.
Consider the vector field $F = \begin{bmatrix} x^2 \\ 0\\ 0\\ \end{bmatrix}$.
For a volume, we will use a cube of edge length two, and vertices at (0,0,0), (2,0,0), (0,2,0), (0,0,2), (2,2,0), (2,0,2), (0,2,2), (2,2,2). This cube has a corner at the origin and all the points it contains are in positive regions.
• We begin by calculating the left side of the Divergence Theorem.
Step 1: Calculate the divergence of the field:
$\nabla\cdot F = 2x$
Step 2: Integrate the divergence of the field over the entire volume.
$\iiint\nabla\cdot F\,dV =\int_0^2\int_0^2\int_0^2 2x \, dxdydz$
$=\int_0^2\int_0^2 4\, dydz$
$=16$
• We now turn to the right side of the equation, the integral of flux.
Step 3: We first parametrize the parts of the surface which have non-zero flux.
Notice that the given vector field has vectors which only extend in the x-direction, since each vector has zero y and z components. Therefore, only two sides of our cube can have vectors normal to them, those sides which are perpendicular to the x-axis. Furthermore, the side of the cube perpendicular to the x axis with all points satisfying x = 0 cannot have any flux, since all vectors on this surface are zero vectors.
We are thus only concerned with one side of the cube since only one side has non-zero flux. This side is parametrized using
$X=\begin{bmatrix} x \\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} 2 \\ u\\ v\\ \end{bmatrix}\, , u \in (0,2)\, ,v \in (0,2)$
Step 4: With this parametrization, we find a general normal vector to our surface.
To find this normal vector, we find two vectors which are always tangent to (or contained in) the surface, and are not collinear. The cross product of two such vectors gives a vector normal to the surface.
The first vector is the partial derivative of our surface with respect to u: $\frac{\part{X}}{\part{u}} = \begin{bmatrix} 0\\ 1\\ 0\\ \end{bmatrix}$
The second vector is the partial derivative of our surface with respect to v: $\frac{\part{X}}{\part{v}} = \begin{bmatrix} 0\\ 0\\ 1\\ \end{bmatrix}$
The normal vector is finally the cross product of these two vectors, which is simply $N = \begin{bmatrix} 1\\ 0\\ 0\\ \end{bmatrix}.$
Step 5: Integrate the dot product of this normal vector with the given vector field.
The amount of the field normal to our surface is the flux through it, and is exactly what this integral gives us.
$\oiint\limits_{\partial V}\mathbf F\cdot\mathbf n\,{d}S .$
$= \int_0^2 \int_0^2 F \cdot N \,dsdt$
$= \int_0^2 \int_0^2 \begin{bmatrix} x^2 \\ 0\\ 0\\ \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0\\ 0\\ \end{bmatrix} \,dsdt = \int_0^2 \int_0^2 x^2dsdt = \int_0^2 \int_0^2 4 \,dsdt$
$=16$
• Both sides of the equation give 16, so the Divergence Theorem is indeed valid here. ■
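As a quick symbolic check of the example (an added sketch, assuming sympy is available):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([x**2, 0, 0])

# Left side: triple integral of div F over the cube [0, 2]^3
div_F = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
lhs = sp.integrate(div_F, (x, 0, 2), (y, 0, 2), (z, 0, 2))

# Right side: only the face x = 2 (unit normal (1, 0, 0)) has non-zero flux
rhs = sp.integrate(F[0].subs(x, 2), (y, 0, 2), (z, 0, 2))

print(lhs, rhs)  # 16 16
```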
http://mathoverflow.net/questions/90824/minmax-problem-for-polygons

## Minmax problem for polygons
Let $\text{Pol}_n$ be the set of all convex polygons in the plane with $n$ vertices. For $P\in \text{Pol}_n$ denote by $\text{Tr}(P)$ the set of all triangles whose vertices are vertices of $P$. I want to find an explicit formula for the function $$
\Phi(n)=\inf\limits_{P\in \text{Pol}_n}\max\limits_{T\in \text{Tr}(P)}\frac{\text{area}(T)}{\text{area}(P)}
$$ It is not hard to prove that $\Phi(3)=1$ and $\Phi(4)=1/2$. For $n\geq 5$ we have the estimate $\Phi(n)\geq 1/(n-2)$.
Here you can find some attempts to solve it. Any ideas are appreciated!
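For experimentation, here is a brute-force sketch in Python (my own addition; it uses the regular $n$-gon as a test polygon, so it only yields an upper bound for the infimum):

```python
# Max triangle-to-polygon area ratio over all vertex triangles of a regular n-gon.
from itertools import combinations
from math import sin, cos, pi

def max_ratio_regular_ngon(n):
    pts = [(cos(2*pi*k/n), sin(2*pi*k/n)) for k in range(n)]
    def tri_area(a, b, c):
        return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2
    poly_area = n * sin(2*pi/n) / 2   # regular n-gon with unit circumradius
    return max(tri_area(*t) for t in combinations(pts, 3)) / poly_area

for n in range(5, 11):
    print(n, max_ratio_regular_ngon(n))
```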
-
Somewhat related question: mathoverflow.net/questions/34865/… – Gerry Myerson Mar 11 2012 at 1:18
An algorithm for finding a maximal triangle is given in cs.bgu.ac.il/~sityon/soda06.pdf. I don't know whether the paper, or its references, contains any theoretical results on the size of said triangle. – Gerry Myerson Mar 11 2012 at 1:21
http://en.m.wikipedia.org/wiki/Ellsberg_paradox

Ellsberg paradox
The Ellsberg paradox is a paradox in decision theory and experimental economics in which people's choices violate the expected utility hypothesis.[1] One interpretation is that expected utility theory does not properly describe actual human choices.
It is generally taken to be evidence for ambiguity aversion. The paradox was popularized by Daniel Ellsberg, although a version of it was noted considerably earlier by John Maynard Keynes.[2]
Ellsberg raised two problems: the 1-urn problem and the 2-urn problem. The 1-urn problem, the better known of the two, is described here.
The 1 urn paradox
Suppose you have an urn containing 30 red balls and 60 other balls that are either black or yellow. You don't know how many black or how many yellow balls there are, but you know that the number of black balls plus the number of yellow balls equals 60. The balls are well mixed so that each individual ball is as likely to be drawn as any other. You are now given a choice between two gambles:
Gamble A: You receive \$100 if you draw a red ball.
Gamble B: You receive \$100 if you draw a black ball.
Also you are given the choice between these two gambles (about a different draw from the same urn):
Gamble C: You receive \$100 if you draw a red or yellow ball.
Gamble D: You receive \$100 if you draw a black or yellow ball.
This situation poses both Knightian uncertainty – how many of the non-red balls are yellow and how many are black, which is not quantified – and probability – whether the ball is red or non-red, which is ⅓ vs. ⅔.
Utility theory interpretation
Utility theory models the choice by assuming that in choosing between these gambles, people assume a probability that the non-red balls are yellow versus black, and then compute the expected utility of the two gambles.
Since the prizes are exactly the same, it follows that you will prefer Gamble A to Gamble B if and only if you believe that drawing a red ball is more likely than drawing a black ball (according to expected utility theory). Also, there would be no clear preference between the choices if you thought that a red ball was as likely as a black ball. Similarly it follows that you will prefer Gamble C to Gamble D if, and only if, you believe that drawing a red or yellow ball is more likely than drawing a black or yellow ball. It might seem intuitive that, if drawing a red ball is more likely than drawing a black ball, then drawing a red or yellow ball is also more likely than drawing a black or yellow ball. So, supposing you prefer Gamble A to Gamble B, it follows that you will also prefer Gamble C to Gamble D. And, supposing instead that you prefer Gamble B to Gamble A, it follows that you will also prefer Gamble D to Gamble C.
When surveyed, however, most people strictly prefer Gamble A to Gamble B and Gamble D to Gamble C. Therefore, some assumptions of the expected utility theory are violated.
Mathematical demonstration
Mathematically, your estimated probabilities of each color ball can be represented as: R, Y, and B. If you strictly prefer Gamble A to Gamble B, by utility theory, it is presumed this preference is reflected by the expected utilities of the two gambles: specifically, it must be the case that
$R \cdot U(\$100) + (1-R) \cdot U(\$0) > B\cdot U(\$100) + (1-B) \cdot U(\$0)$
where $U(\cdot)$ is your utility function. If $U(\$100) > U(\$0)$ (you strictly prefer \$100 to nothing), this simplifies to:
$R [U(\$100) - U(\$0)] > B [U(\$100) - U(\$0)]$
$\Longleftrightarrow R > B \;$
If you also strictly prefer Gamble D to Gamble C, the following inequality is similarly obtained:
$B\cdot U(\$100) + Y\cdot U(\$100) + R \cdot U(\$0) > R \cdot U(\$100) + Y\cdot U(\$100) + B \cdot U(\$0)$
This simplifies to:
$B [U(\$100) - U(\$0)] > R [U(\$100) - U(\$0)]$
$\Longleftrightarrow B > R \;$
This contradiction indicates that your preferences are inconsistent with expected-utility theory.
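To make this concrete, here is a small numerical illustration (my addition, not part of the article): whatever subjective probability $B$ one assigns to drawing a black ball, maximizing expected payoff can never produce the modal preferences (A over B together with D over C).

```python
# Expected payoffs of the four gambles under a subjective probability B for
# "black"; R = 1/3 is fixed by the urn, and Y = 2/3 - B.
R = 1/3
for B in (0.2, 1/3, 0.5):
    Y = 2/3 - B
    prefers_A_over_B = 100*R > 100*B                # holds iff R > B
    prefers_D_over_C = 100*(B + Y) > 100*(R + Y)    # holds iff B > R
    print(f"B = {B:.3f}: A over B: {prefers_A_over_B}, D over C: {prefers_D_over_C}")
# The two conditions are mutually exclusive, so no single B rationalizes the
# commonly observed preference pattern.
```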
Generality of the paradox
Note that the result holds regardless of your utility function. Indeed, the amount of the payoff is likewise irrelevant. Whichever gamble you choose, the prize for winning it is the same, and the cost of losing it is the same (no cost), so ultimately, there are only two outcomes: you receive a specific amount of money, or you receive nothing. Therefore it is sufficient to assume that you prefer receiving some money to receiving nothing (and in fact, this assumption is not necessary — in the mathematical treatment above, it was assumed U(\$100) > U(\$0), but a contradiction can still be obtained for U(\$100) < U(\$0) and for U(\$100) = U(\$0)).
In addition, the result holds regardless of your risk aversion. All the gambles involve risk. By choosing Gamble D, you have a 1 in 3 chance of receiving nothing, and by choosing Gamble A, you have a 2 in 3 chance of receiving nothing. If Gamble A was less risky than Gamble B, it would follow that Gamble C was less risky than Gamble D (and vice versa), so, risk is not averted in this way.
However, because the exact chances of winning are known for Gambles A and D, and not known for Gambles B and C, this can be taken as evidence for some sort of ambiguity aversion which cannot be accounted for in expected utility theory. It has been demonstrated that this phenomenon occurs only when the choice set permits comparison of the ambiguous proposition with a less vague proposition (but not when ambiguous propositions are evaluated in isolation).[3]
Possible explanations
There have been various attempts to provide decision-theoretic explanations of Ellsberg's observation. Since the probabilistic information available to the decision-maker is incomplete, these attempts sometimes focus on quantifying the non-probabilistic ambiguity which the decision-maker faces – see Knightian uncertainty. That is, these alternative approaches sometimes suppose that the agent formulates a subjective (though not necessarily Bayesian) probability for possible outcomes.
One such attempt is based on info-gap decision theory. The agent is told precise probabilities of some outcomes, though the practical meaning of the probability numbers is not entirely clear. For instance, in the gambles discussed above, the probability of a red ball is 30/90, which is a precise number. Nonetheless, the agent may not distinguish, intuitively, between this and, say, 30/91. No probability information whatsoever is provided regarding other outcomes, so the agent has very unclear subjective impressions of these probabilities.
In light of the ambiguity in the probabilities of the outcomes, the agent is unable to evaluate a precise expected utility. Consequently, a choice based on maximizing the expected utility is also impossible. The info-gap approach supposes that the agent implicitly formulates info-gap models for the subjectively uncertain probabilities. The agent then tries to satisfice the expected utility and to maximize the robustness against uncertainty in the imprecise probabilities. This robust-satisficing approach can be developed explicitly to show that the choices of decision-makers should display precisely the preference reversal which Ellsberg observed.[4]
Another possible explanation is that this type of game triggers a deceit aversion mechanism. Many humans naturally assume in real-world situations that if they are not told the probability of a certain event, it is to deceive them. People make the same decisions in the experiment that they would about related but not identical real-life problems where the experimenter would be likely to be a deceiver acting against the subject's interests. When faced with the choice between a red ball and a black ball, the probability of 30/90 is compared to the lower part of the 0/90-60/90 range (the probability of getting a black ball). The average person expects there to be fewer black balls than yellow balls because in most real-world situations, it would be to the advantage of the experimenter to put fewer black balls in the urn when offering such a gamble. On the other hand, when offered a choice between red and yellow balls and black and yellow balls, people assume that there must be fewer than 30 yellow balls as would be necessary to deceive them. When making the decision, it is quite possible that people simply forget to consider that the experimenter does not have a chance to modify the contents of the urn in between the draws. In real-life situations, even if the urn is not to be modified, people would be afraid of being deceived on that front as well.
A modification of utility theory to incorporate uncertainty as distinct from risk is Choquet expected utility, which also proposes a solution to the paradox.
Alternative explanations
Other alternative explanations include the competence hypothesis [5] and comparative ignorance hypothesis.[3] These theories attribute the source of the ambiguity aversion to the participant's pre-existing knowledge.
References
1. Ellsberg, Daniel (1961). "Risk, Ambiguity, and the Savage Axioms". Quarterly Journal of Economics 75 (4): 643–669. doi:10.2307/1884324. JSTOR 1884324.
2. Fox, Craig R.; Tversky, Amos (1995). "Ambiguity Aversion and Comparative Ignorance". Quarterly Journal of Economics 110 (3): 585–603. doi:10.2307/2946693. JSTOR 2946693.
3. Ben-Haim, Yakov (2006). Info-gap Decision Theory: Decisions Under Severe Uncertainty (2nd ed.). Academic Press. section 11.1. ISBN 0-12-373552-1.
4. Heath, Chip; Tversky, Amos (1991). "Preference and Belief: Ambiguity and Competence in Choice under Uncertainty". Journal of Risk and Uncertainty 4: 5–28.
• Anand, Paul (1993). Foundations of Rational Choice Under Risk. Oxford University Press. ISBN 0-19-823303-5.
• Keynes, John Maynard (1921). A Treatise on Probability. London: Macmillan.
• Schmeidler, D. (1989). "Subjective Probability and Expected Utility without Additivity". Econometrica 57 (3): 571–587. doi:10.2307/1911053. JSTOR 1911053.
http://mathematica.stackexchange.com/questions/tagged/modular-arithmetic+options

# Tagged Questions
### Solving/Reducing equations in $\mathbb{Z}/p\mathbb{Z}$
I was trying to find all the numbers $n$ for which $2^n=n\mod 10^k$ using Mathematica. My first try: Reduce[2^n == n, n, Modulus -> 100] However, I receive ...
### Linear Solve with Modular Arithmetic
I am interested in using LinearSolve[m,b] which will find a solution to the equation $m.x=b$, where I am in mod 2 arithmetic. Is there any way to perform this ...
### Factorizing polynomials over fields other than $\mathbb{C}$
I'd like to take a polynomial in $\mathbb{Z}_5[x]$ of the form $ax^2+bx+c$ and factor it into irreducible polynomials. For example: Input... x^2+4 Output... ...
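For comparison, the same factorization over $\mathbb{Z}_5$ can be done in SymPy as well (my sketch, not from the thread; note $x^2+4 \equiv x^2-1 \pmod 5$):

```python
# Factor x^2 + 4 over Z_5; SymPy prints coefficients in the symmetric
# range by default, so expect (x - 1)*(x + 1).
from sympy import symbols, factor

x = symbols('x')
print(factor(x**2 + 4, modulus=5))
```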
http://mathoverflow.net/questions/120209?sort=oldest

## Mersenne primes problem
Well, I need someone here with programming skills (because I have none) to check whether the problem I am proposing is at least true for the known Mersenne primes. Here is the list of the exponents of the known Mersenne primes:
http://wwwhomes.uni-bielefeld.de/achim/mersenne.html
And the problem is:
If the number $M_p=2^p-1$ is prime then it can be written in one of the two following forms:
$M_p=18k+1$ or $M_p=18k+13$, for $p\geq5$; that is, Mersenne primes, when divided by $18$, leave a remainder equal to $1$ or to $13$.
I feel that this is quite easy to program, but since my skills in programming are practically non-existent it would be so nice if someone here would do that job for me (and for himself, if he is interested in this kind of problem).
If this is a proven fact about Mersenne primes then please tell me where I can find the proof, because I did not find a fact of this kind when I was reading about known facts about Mersenne primes. I am sorry if this is something quite elementary; I have already posted some questions which turned out to be homework-type problems, and I did not see it at the moment of posting. Thank you.
-
I found this question somehow nice, so I answered it. But as a general advice, I think to use primarily math.stackexchange.com and MO only if the feedback there suggests it is a good idea to do so, in the long run will give the better experience for you. – quid Jan 29 at 14:12
@Antisha: This site is mainly reserved for professional mathematicians (with a Ph.D.) or graduate students (who are working on a thesis towards a Ph.D.). My impression is that you don't have this expertise, so most likely you will not be able to ask a question "of research level", i.e. one that is of interest here. Of course, with a good instinct or luck you might succeed, but the chances are against you. – GH Jan 29 at 14:39
@Antisha: Also, you are not supposed to ask here for comments or opinions about a result you proved. That belongs to discussion boards, blogs, conferences, journal submissions etc. – GH Jan 29 at 14:42
I didn't see this at the time of writing my post, but now that it is posted I would like you to give an opinion about the result; if it is not good enough I will post no more here. – Antisha Jan 29 at 14:53
## 2 Answers
No programming skills are needed, not even a computer, or even pocket-calculator ;)
Look, $M_p = 2^p - 1$ is congruent to $1$ or $13$ modulo $18$ (which is what you are asking) if and only if $2^p$ is $2$ or $14$ modulo $18$.
Now, modulo $18$, one has $2^1=2$, $2^2=4$, $2^3= 8$, $2^4= 16$, $2^5= 14$, $2^6=10$, $2^7=2$.
Thus, $2^n$ is congruent to $2$ modulo $18$ if and only if $n$ is $1$ modulo $6$, and $2^n$ is congruent to $14$ modulo $18$ if and only if $n$ is $5$ modulo $6$.
Since every prime (except $2$ and $3$) is $1$ or $5$ modulo $6$, and the exponent of a Mersenne prime is a prime number, the claim follows.
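Since the question explicitly asked for a computation, here is a quick empirical check in Python as well (my addition; the exponent list is a truncated list of known Mersenne-prime exponents):

```python
# Remainder of M_p = 2^p - 1 modulo 18 for some known Mersenne-prime exponents.
exponents = [5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279]
for p in exponents:
    r = (2**p - 1) % 18
    assert r in (1, 13)
    print(p, p % 6, r)   # r == 1 when p % 6 == 1, and r == 13 when p % 6 == 5
```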
-
You beat me by a minute or two... – GH Jan 29 at 14:11
I am amazed how the two of you see these things in such a short period of time. Okay, I will post no more until I have a really good question that is at research level. Thank you. – Antisha Jan 29 at 14:14
@GH: 120 seconds, to be precise ;) – quid Jan 29 at 14:16
I will now write a post about something about prime numbers that I managed to prove, and I would like to see your comments about the usefulness of such a formula. – Antisha Jan 29 at 14:27
It is an easy exercise to prove your statement for all Mersenne primes, not just the known ones, using that $2^6\equiv 1\pmod{9}$ and that $M_p$ is odd. Indeed, $p$ must be prime for $M_p=2^p-1$ to be prime. Hence either $p\equiv 1\pmod{6}$, in which case $M_p\equiv 2^1-1\equiv 1\pmod{9}$ and so (being odd) $M_p\equiv 1\pmod{18}$, or $p\equiv 5\pmod{6}$, in which case $M_p\equiv 2^5-1\equiv 4\pmod{9}$ and so (being odd) $M_p\equiv 13\pmod{18}$. I told you earlier that your questions are not of research level. Try MathStackExchange next time.
-
http://mathoverflow.net/questions/33923?sort=oldest

## Distribution of running maximum of a local martingale
Let $(\Omega, \mathcal{F}, \mathbb{P}, \mathcal{F}_t)$ be a given probability space satisfying the usual conditions, on which $W$ is a standard Brownian motion. For $x \ge 0$, consider $$X(t) = x + \int_0^t \sigma (X(s)) dW(s)$$ Assume $\sigma \in C^{0,1/2}_{loc}$, $\sigma(0) = 0$, $\sigma>0$ on $(0,\infty)$. By [Karatzas and Shreve 98], there exists a unique strong solution with absorbing state at zero. Denote the running maximum by $X^*(T) = \sup_{s\in [0,T]} X(s)$.
Question: For a fixed $T$, is it possible to show that $\mathbb{P} ( X^*(T) \ge \beta) = o(\beta^{-1})$ as $\beta \to \infty$?
I am trying to use a time-changed Brownian motion, i.e. $X(t) = x + B([X]_t)$, where $B$ is a BM and $[X]$ is the quadratic variation. There is also a density function available for the running maximum $B^* (T)$, i.e. $\mathbb{P}(B^*(T) \ge \beta) = 2 - 2 \Phi(\beta/\sqrt{T}) = o(\beta^{-1})$, where $\Phi(\cdot)$ is the c.d.f. of the standard normal distribution. But I could not succeed in proving the claim using those facts.
-
It is pretty clear that the estimate you are after will depend strongly on the behavior of $\sigma$ near 0. Can you be more specific about what $C^{0,1/2}_{loc}$ is? – Jeff Schenker Jul 30 2010 at 18:50
Actually, it is not going to depend on what $\sigma$ is like near zero much at all. It is a local martingale and, once it gets very close to zero, it is unlikely to escape. It depends more on how fast $\sigma$ grows as x goes to infinity. $C^{0,1/2}$ is the class of Holder continuous functions of exponent 1/2. This does guarantee a unique strong solution, but I don't think that has much bearing on the question. – George Lowther Jul 30 2010 at 19:12
The bound $\mathbb{P}(X^*_T>K)\le x/K$ follows from the local martingale property. Stopped at the first time it hits K, its expectation is x, but is equal to K with probability at least $\mathbb{P}(X^*_\infty>K)$, giving the inequality (actually, it is Doob's maximal inequality). It is not possible to achieve this bound, so the question can be understood as asking if we can get very close to it in some sense. – George Lowther Jul 30 2010 at 19:21
Doh! Of course this has nothing to do with the behavior near zero. I read it all too quickly and imagined the question was about the typical time to reach zero. Thanks for explaining the notation. – Jeff Schenker Jul 30 2010 at 20:08
## 1 Answer
No. It is true that $\mathbb{P}(X^*_T>\beta)=O(\beta^{-1})$, but you don't have a 'little-o' bound. In fact it fails, and $\beta\,\mathbb{P}(X^*_T>\beta)$ converges to a strictly positive value, precisely when X fails to be a martingale.
If $S$ is the first time at which $X$ hits $\beta>x$ then continuity gives $$X_{S\wedge T} = \beta 1_{\{X^*_T>\beta\}}+1_{\{X^*_T\le\beta\}}X_T$$ Take expectations, and use $\mathbb{E}[X_{S\wedge T}]=x$, which follows from the fact that $X$ stopped at time $S$ is a local martingale bounded by $\beta$, and hence a proper martingale. $$x=\beta\,\mathbb{P}(X^*_T>\beta)+\mathbb{E}[1_{\{X^*_T\le\beta\}}X_T].$$ The final expectation converges to $\mathbb{E}[X_T]$ as $\beta$ goes to infinity, by monotone convergence. This gives $$\lim_{\beta\to\infty}\beta\,\mathbb{P}(X^*_T>\beta)=x-\mathbb{E}[X_T].$$ Now, it is a well known result that if X is a nonnegative local martingale and $X_0$ is integrable then it is a supermartingale, so $\mathbb{E}[X_T]\le\mathbb{E}[X_0]$, and equality holds precisely when it is a martingale over the range [0,T]. So, in our case, $\mathbb{P}(X^*_T>\beta)=o(\beta^{-1})$ exactly when $\mathbb{E}[X_T]=x$ and X is a martingale over the range [0,T].
An example where the solution to your SDE fails to be a martingale is $\sigma(x)=x^2$, $dX=X^2\,dW$. The solution to this SDE can be written as $X=1/\Vert B\Vert$ for a 3-dimensional Brownian motion B started from the point $(x^{-1},0,0)$. You can calculate $\mathbb{E}[X_t]$ and determine that it is decreasing in t, so X is not a martingale - just a local martingale. This example appears in Rogers & Williams' book Diffusions, Markov Processes and Martingales as an example of a local martingale which is not a proper martingale.
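A quick Monte Carlo illustration of this example (my addition, not part of the answer): since $B_t$ is Gaussian, we can sample it directly at fixed times and watch $\mathbb{E}[X_t]=\mathbb{E}[1/\Vert B_t\Vert]$ decrease in $t$, as it must for a strict local martingale.

```python
# Estimate E[1/|B_t|] for 3-dimensional Brownian motion started at (1,0,0),
# so that X_0 = 1. For a true martingale the mean would stay equal to 1.
import numpy as np

rng = np.random.default_rng(0)
start = np.array([1.0, 0.0, 0.0])
for t in (0.25, 1.0, 4.0, 16.0):
    B = start + np.sqrt(t) * rng.standard_normal((10**6, 3))
    X = 1.0 / np.linalg.norm(B, axis=1)
    print(t, X.mean())   # decreases from 1 towards 0
```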
-
George, Thank you very much. – kenneth Jul 30 2010 at 23:41
http://mathhelpforum.com/algebra/156066-another-factorising-question.html

# Thread:
1. ## Another factorising question.
OK one more problem I have
$6(2x+y)^2-26x-13y+6$
There are several more like this that I need to do, but I should be able to do those if I can tackle this one. I have no clue where to start with this one.
2. Originally Posted by Gerard
OK one more problem I have
$6(2x+y)^2-26x-13y+6$
There are several more like this that I need to do, but I should be able to do those if I can tackle this one. I have no clue where to start with this one.
It can be re-written as $6 (2x + y)^2 - 13(2x + y) + 6$ which has the form $6a^2 - 13 a + 6$. So factorise $6a^2 - 13 a + 6$, substitute $a = 2x + y$ and simplify.
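A quick machine check of this factorisation with SymPy (my addition, not part of the original thread):

```python
from sympy import symbols, factor, expand

x, y, a = symbols('x y a')
print(factor(6*a**2 - 13*a + 6))              # (2*a - 3)*(3*a - 2)
expr = 6*(2*x + y)**2 - 26*x - 13*y + 6
print(factor(expr))                            # (4*x + 2*y - 3)*(6*x + 3*y - 2)
print(expand(expr - (4*x + 2*y - 3)*(6*x + 3*y - 2)))   # 0, confirming the answer
```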
3. Alright thanks for the help.
http://sagemath.org/doc/bordeaux_2008/level_one_forms.html

# Level One Modular Forms
## Computing $$\Delta$$
The modular form
$\Delta = q\prod(1-q^n)^{24} = \sum \tau(n)q^n$
is perhaps the world’s most famous modular form. We compute some terms from the definition.
```sage: R.<q> = QQ[[]]
sage: q * prod( 1-q^n+O(q^6) for n in (1..5) )^24
q - 24*q^2 + 252*q^3 - 1472*q^4 + 4830*q^5 - 6048*q^6 + O(q^7)
```
There are much better ways to compute $$\Delta$$, which amount to just a few polynomial multiplications over $$\ZZ$$.
```sage: D = delta_qexp(10^5) # less than 10 seconds
sage: D[:10]
q - 24*q^2 + 252*q^3 - 1472*q^4 + ...
sage: [p for p in primes(10^5) if D[p] % p == 0]
[2, 3, 5, 7, 2411]
sage: D[2411]
4542041100095889012
sage: f = eisenstein_series_qexp(12,6) - D[:6]; f
691/65520 + 2073*q^2 + 176896*q^3 + 4197825*q^4 + 48823296*q^5 + O(q^6)
sage: f % 691
O(q^6)
```
## The Victor Miller Basis
The Victor Miller basis for $$M_k(\mathrm{SL}_2(\ZZ))$$ is the reduced row echelon basis. It’s a lemma that it has all integer coefficients, and a rather nice diagonal shape.
```sage: victor_miller_basis(24, 6)
[
1 + 52416000*q^3 + 39007332000*q^4 + 6609020221440*q^5 + O(q^6),
q + 195660*q^3 + 12080128*q^4 + 44656110*q^5 + O(q^6),
q^2 - 48*q^3 + 1080*q^4 - 15040*q^5 + O(q^6)
]
sage: dimension_modular_forms(1,200)
17
sage: B = victor_miller_basis(200, 18) #5 seconds
sage: B
[
1 + 79288314420681734048660707200000*q^17 + O(q^18),
q + 2687602718106772837928968846869*q^17 + O(q^18),
...
q^16 + 96*q^17 + O(q^18)
]
```
Note: Craig Citro has made the above computation an order of magnitude faster in code he hasn’t quite got into Sage yet.
“I’ll clean those up and submit them soon, since I need them for something I’m working on ... I’m currently in the process of making spaces of modular forms of level one subclass the existing code, and actually take advantage of all our fast $$E_k$$ and $$\Delta$$ computation code, as well as cleaning things up a bit.”
http://physics.stackexchange.com/questions/14549/particle-antiparticle-annihilation-do-they-have-to-be-of-the-same-type?answertab=active

Particle antiparticle annihilation - do they have to be of the same type?
I read that a particle will meet its antiparticle and annihilate to generate a photon. Is it important for the pairs to be of the same type? What will happen when for example a neutron meets an antiproton or a proton meets a positron? Are there any rules to determine what happens when such particles meet?
-
3 Answers
Yes, there are rules that depend on the quantum numbers carried by the particles in question and on the energy available for the interaction.
In general we speak of annihilation when a particle meets its antiparticle, because all the characterising quantum numbers are equal and opposite in sign, so they add up to 0, allowing for a decay into two photons (two because you need momentum conservation).
A positron meeting a proton will be repelled by the electromagnetic interaction, unless it has very high energy and can interact with the quarks inside the proton, according to the rules of the standard model interactions.
When a neutron meets an antiproton, the only quantum number that is not equal and opposite is the charge, so we cannot have annihilation to just photons; instead the constituent antiquarks of the antiproton will annihilate with some of the quarks in the neutron. There will no longer be any baryons, just mesons and photons, and all these interactions are given by the rules and cross sections of the standard model.
-
I'm having trouble seeing why a reaction like $\mu^{+}e^{-}\rightarrow \gamma \gamma$ is disallowed by an argument like this, but I'm also having trouble generating a diagram that does this that doesn't have a ton of vertices involving annihilating neutrinos. – Jerry Schirmer Sep 11 '11 at 17:06
@Jerry that violates the lepton flavor conservation, so you need those weak verticies to make it go ahead. It's got to be massively suppressed. – dmckee♦ Sep 11 '11 at 17:24
@Jerry that is why we have nu_tau, nu_mu and nu_e neutrinos, to keep the flavour count. There are oscillations though, but that is another story. – anna v Sep 11 '11 at 18:58
The only thing that a particle and an antiparticle can do always for sure in theory is annihilate into gravitons. Everything else depends on the particle.
The reason people talk about particle/antiparticle annhilation is to convey that antiparticles are particles going backward in time, in an S-matrix particle-path picture, like Feynman diagrams. If you have interactions that can knock a particle sideways, then these interactions can also knock a particle back in time, so that any external interaction can produce particle/antiparticle pairs, and can lead to creation and annihilation of pairs. If the external potential can absorb the particles, if they aren't protected by conservation laws, then it can produce the particle and antiparticle singly.
There are no conservation laws which can forbid a particle from annihilating with its antiparticle, because gravity can always knock a particle back in time.
But some particles are hardly interacting, externally or internally. These particles don't annihilate with their antiparticle, they just ignore it. A neutrino is its own antiparticle, but two stopped neutrinos will just sit there next to each other, wavefunctions spreading, doing nothing much at all. Their cross section for annihilating into anything is negligible. Same with two photons, or two gravitons.
Charged particle/antiparticle pairs can annihilate into photons, usually two or three depending on the conservation laws, but not all particles are charged. Neutrons and antineutrons have a hard time finding each other, and I don't think it's fair to say that they annihilate. They can in theory turn into photons when they collide, but they will almost always just go past each other or, if they do collide, turn into mesons.
The most extreme example of annihilation is at the highest energies, a massive charged spinning black hole. The antiparticle has opposite charge. Two such black holes collide to make a big neutral probably spinning black hole, which then slowly decays.
-
"A neutrino is its own antiparticle" This may be true in the favored theories, but it is not settled. We'll get back to you in a few years... – dmckee♦ Sep 12 '11 at 17:36
@dmckee: the fact that there are irresponsible theorists in the world should not change the story we tell ourselves. That neutrinos are one chirality only was established by Feynman, Gell-Mann, Sudarshan, and Marshak in the 1950s! This predicts the neutrino masses order of magnitude right. It is impossible that neutrinos are Dirac and somebody has to say it. That does not mean that experimentalists should not verify that it is Majorana, not Dirac, but the theorists should not be dithering on this obvious fact. To have a Dirac neutrino you need a light sterile neutrino, absurd fine-tuning. – Ron Maimon Sep 12 '11 at 20:32
Ok--- now I know it's not just Georg. What's the problem here? The answer is correct. – Ron Maimon Oct 23 '11 at 18:57
+1: You are correct but I think people are downvoting because you are leaving out any discussion of the iconic particle-antiparticle annihilation of an electron-positron annihilation into 2 photons. This has very high cross section and extremely low cross section into anything else ( such as neutrino + antineutrino ). You also put down weak annihilation but graviton annihilation would have a lifetime of much greater than the age of the universe ( I'm guessing ). – FrankH Oct 23 '11 at 21:56
@FrankH: yes, you are right, but one downvote is just reflexive. The point of this is to answer the original question about why people talk about annihilation with regards to particle/antiparticle at all. The reason is to explain that any particle path that can go forward in time can double-back to a particle antiparticle annihilation, but the only universal thing that can go at the turnaround point is a graviton. – Ron Maimon Oct 24 '11 at 4:01
The major problem you run into with this question is that protons and neutrons are not fundamental particles.
The short answer is that $x + \bar{y}$ does not result in any annihilation at the vertex level (but sometimes other reactions are possible), but $n + \bar{p}$ is not expressed at the vertex level.
Instead a neutron is a composite object made up of two (matter) down quarks and one (matter) up quark and a seething mess of virtual particles (often called "the sea") that pop into and out of existence all the time.
An anti-proton is made up of two anti-up quarks and one anti-down quark and another seething mess of virtual particles.
So if a neutron meets an antiproton ($(udd+ \text{sea}) + (\bar{u}\bar{u}\bar{d} + \text{sea}) \to \text{??}$) you get reactions at the quark level, and the two composite particles can be destroyed, leaving zero baryons but a lot of hadronic spray.
-
http://math.stackexchange.com/questions/132698/burgess-on-quadratic-residues-and-non-residues?answertab=votes

# Burgess on quadratic residues and non-residues
Has any of you guys gone through Burgess's paper The distribution of quadratic residues and non-residues?
I'm having a hard time trying to disentangle his Lemma 2 (on page 108).
Do you know if there is a simpler presentation of it somewhere? For instance, is there a way to rephrase the following paragraph without resorting to the word "class"? I don't even know for sure what he is referring to by it...
"Divide the sets of values of $m_{1}, \ldots, m_{2r}$ into two classes, putting in the first class those which consist of at most $r$ distinct integers, each occurring an even number of times, and putting into the second class all other sets. The number of sets in the first class is less than $(2r)^{r}h^{r}$, and for each set the inner sum over $x$ is at most $p.$... The number of sets in the second class is at most $h^{2r}$ (trivially)."
I would really appreciate any suggestions you wish to make. If you have written about this and gained some intuition on those portions of the article (lemma 2 and lemma 3), I would be more than glad to have an opportunity to take a look at your write-ups.
Thanks.
Reference: D. A. Burgess. The distribution of quadratic residues and non-residues. Mathematika 4 (1957), 106-112.
-
"Class" is just a synonym for "set" here; often people prefer to say "class" when speaking of a set whose elements are themselves sets. – Greg Martin Apr 17 '12 at 1:56
## 2 Answers
Fix an integer $h$ and let $$S_h(x) = \sum_{t=0}^{h-1}\;\left(\frac{x+t}{p}\right)$$ where $(\cdot/p)$ represents the Legendre symbol. We want to prove that $$\sum_{x=0}^{p-1}S_h(x)^{2r} < (2r)^r p h^r + 4rh^{2r}\sqrt{p}$$ Developing the powers on the left and interchanging summations we obtain the sum $$\sum_{m_1\dots m_{2r} = 1}^h \sum_{x=0}^{p-1} \left( \frac{(x+m_1)(x+m_2)\dots(x+m_{2r})}{p} \right )$$
The outer sum is over the $h^{2r}$ tuples $(m_1,\dots,m_{2r})$ where each $m_i$ varies independently from $1$ to $h$. In order to bound this sum from above we divide the outer sum into two sets of tuples. In the first set we pick the tuples $(m_1,\dots,m_{2r})$ for which the polynomial $$(x+m_1)(x+m_2) \dots (x+m_{2r})$$ is a square; in this case we have $$\sum_{x=0}^{p-1} \left( \frac{(x+m_1)(x+m_2)\dots(x+m_{2r})\,}{p} \right ) \le p$$ because the value of the polynomial is a square for every $x$ and in consequence the value of the Legendre symbol inside the sum is always 0 or 1.
Now if the polynomial is a square that means that we can group the $m_i$'s of a tuple in $r$ pairs with the same value in each pair. So the number of tuples which lead to a square polynomial can be bounded above by the number of partitions of the $2r$ positions in $r$ pairs, times the number of independent $r$-tuples of values from 1 to $h$.
The number of partitions of $1,2,\dots, 2r$ in $r$ pairs is simply $$(2r-1)(2r-3)\dots 5\cdot 3\cdot 1$$ just observe that the first index can be paired with any of the other $2r-1$ positions, and now the first free index can be paired with any of the $2r-3$ free positions, and so on. But $$(2r-1)(2r-3)\dots 5\cdot 3\cdot 1 < (2r)^r$$ as there are $r$ factors all smaller than $2r$.
Now to each pair we can assign any integer from 1 to $h$; as there are $r$ pairs we have $h^r$ possible assignments. So finally the number of tuples $m_1,\dots,m_{2r}$ which lead to a square polynomial $(x+m_1)(x+m_2)\dots(x+m_{2r})$ is bounded by $(2r)^rh^r$
Note that this estimate for the number of tuples leading to a square polynomial is a little rough; for example, with $r=2$ and $h=3$ there are only 21 tuples in this group (starting with (1,1,1,1),(1,1,2,2),(1,1,3,3),(1,2,1,2),(1,2,2,1),(1,3,1,3),(1,3,3,1),$\dots$ ), but $(2r)^rh^r= 144$.
In the second group we pick the tuples $(m_1,\dots, m_{2r})$ for which $(x+m_1)\dots(x+m_{2r})$ is not a square. This means that we can write it as a product $g(x)^2F(x)$ where $F(x)$ is squarefree. For the Legendre symbol we then have $$\left(g(x)^2F(x) \over p \right ) = \left( F(x) \over p \right)$$ except possibly when $g(x) = 0$; as $g(x)$ has degree at most $r$ this means that the sums $$\sum_{x=1}^p \left(g(x)^2F(x)\over p\right) \quad\text{and}\quad \sum_{x=1}^p \left(F(x)\over p\right)$$ differ by at most $r$. For the right sum we can now use the bound (André Weil) $$\left \lvert \sum_{x=0}^{p-1} \left( F(x) \over p \right ) \right \rvert \leq (\deg F -1) \sqrt{p}$$ and so $$\left \lvert \sum_{x=0}^{p-1} \left( g(x)^2F(x) \over p \right ) \right \rvert \leq 2r + 2r\sqrt{p}< 4r\sqrt{p}$$
As the number of tuples in this case is at most $h^{2r}$ (the total number of tuples), we finally get the total upper bound $4rh^{2r}\sqrt{p}$ for the part of the sum restricted to this second set of tuples, giving the lemma.
EDIT 2: I have rewritten the proof to make it more clear.
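For intuition, a brute-force check of the lemma's bound for small parameters (my sketch, not part of the answer; it relies on SymPy's `legendre_symbol`, which returns 0 when $p$ divides its argument):

```python
# Check  sum_x S_h(x)^(2r)  <  (2r)^r * p * h^r + 4r * h^(2r) * sqrt(p)
# for a small prime p by direct enumeration.
from sympy.ntheory import legendre_symbol

def moment(p, h, r):
    total = 0
    for x in range(p):
        S = sum(legendre_symbol((x + t) % p, p) for t in range(h))
        total += S**(2*r)
    return total

p, h, r = 101, 4, 2
lhs = moment(p, h, r)
rhs = (2*r)**r * p * h**r + 4*r * h**(2*r) * p**0.5
print(lhs, rhs, lhs < rhs)   # the bound holds with room to spare here
```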
-
Would you explain this part : "there are at most (2r)^r ways to pick r pairs out of 2r quantities..." ? What's the exact link between those 2r-tuples we are considering and the number of pairs you mention? By a pair you mean something like (a,b), don't you? – absalon Apr 18 '12 at 21:48
I have edited the answer to make it more clear. I hope this helps. – Esteban Crespi Apr 19 '12 at 11:40
Thanks... I'll take a look at the new version of your post. – absalon Apr 19 '12 at 20:03
One quick question: when you write "...(x−m_1)(x−m_2)…(x−m_2r) is a square", do you actually mean the - sign? – absalon Apr 19 '12 at 20:12
No that's an error I correct it – Esteban Crespi Apr 19 '12 at 21:04
You would need to give much more detail for anyone to be sure what is being discussed. However, from later articles that refer to Burgess, it seems he is generally bounding the first quadratic nonresidue modulo some fixed prime. See http://oeis.org/A053760 and http://mathworld.wolfram.com/QuadraticNonresidue.html
The significance of "at most $r$ distinct integers, each occurring an even number of times" is that an actual square, call it $m^2,$ when reduced $\pmod p,$ gives a "quadratic residue." So I think he is saying squares where the prime factorization is restricted.
Well, I have never seen the paper. However, the size of the smallest quadratic nonresidue $\pmod p$ for a prime $p$ is found in practice to be quite small, and this is reflected in the consequences of assuming a Generalized Riemann Hypothesis; see the reference by Wedeniwski 2001. However, it may be that Burgess has one of the very best bounds that has actually been proved. See Hildebrand 1987
Here is a list that gives, more or less, primes with surprisingly large first quadratic nonresidues, OEIS
-
http://mathoverflow.net/questions/48671?sort=oldest

## Examples of non-rigorous but efficient mathematical methods in physics
There are applications of mathematics in physics which
• seem to work well enough for physicists (for example, they agree with the experimental data)
• but are considered unacceptable, or at least non-rigorous, by mathematicians
Thank you.
I apologize if this question may seem inappropriate for MO. I consider these examples a great source of research problems for mathematicians who are interested in mathematical physics.
-
Rule 42: Any question that gets an animated gif for an answer should be closed. Seriously, "big-list" questions need a lot more justification and background to be good questions. Why are you interested in this? How will it help you with your mathematical research? What problem in your research does this connect with? – Andrew Stacey Dec 8 2010 at 21:02
@Qiaochu Yuan: Both questions are concerned with the relation between mathematics and physics, but they are not the same. Applying intuition from physics to math problems is definitely distinct from my question. – Cristi Stoica Dec 8 2010 at 21:16
How about: the use of the renormalization 'group' (including perturbative renormalization group calculations using the epsilon expansion), in particular in the study of self-avoiding walks. It seems that very much is "known" by physicists which is not yet proven by mathematicians. I would love to see a complete discussion about this from both sides. What is the mathematical perspective on these methods? I'm not qualified to make an answer out of this. – Gregory Putzel Dec 9 2010 at 3:53
@Andrew Stacey. Well, I want to gather "bug reports" from mathematical physics, then "fix" them. I can't do this alone, but fortunately I am not the only interested in this. – Cristi Stoica Dec 9 2010 at 11:27
Yes, this is a very appropriate question for MO. – Dr Shello Dec 13 2010 at 4:50
## 10 Answers
[The original answer here was an animated image; judging by the comments below, an animation of a sequence of functions converging to the Dirac delta.]

-
Thank you. Nice and succinct :-) – Cristi Stoica Dec 8 2010 at 20:37
Maybe you should show the first and second derivatives, too? – Deane Yang Dec 8 2010 at 20:54
Nice, and certainly physics uses the delta function in a non-rigorous way. But... doesn't distribution theory basically put this on a pretty firm mathematical foundation? This seems to hence be a different example to the Feynman Path Integral one -- there is a rigorous version, it's just not used... – Matthew Daws Dec 8 2010 at 21:20
The OP asks "which of these techniques were eventually made rigorous?" Physicists were using delta functions long before mathematicians wrote down the rigorous theory of distributions. – Qiaochu Yuan Dec 8 2010 at 21:25
@Qiaochu: Yeah, okay, I should read more closely!! – Matthew Daws Dec 9 2010 at 22:07
Feynman's path integral in quantum field theory. It involves integration over spaces of fields, using measures that have not been made rigorous.
-
I think this is the classic prototype from modern physics, and it's a remarkable challenge to the thesis that mathematics and its applications to physics operate on identical postulates. Here's an example of a construction that completely lacks modern rigor and yet has been incredibly successful as a theory of the physical world. In all fairness, though, there is an ongoing attempt to put it on a rigorous basis. – Andrew L Dec 8 2010 at 21:09
@Laie: Thanks. I think this is very central, in the sense that the renormalization/regularization method justifies the standard model of particle physics, but in the same time it provides a source of discord between this and gravity. The infinites resulting are used by some physicists as justification for discrete models of spacetime, which are successful in computational physics, but I find them difficult to cope with the Lorentz invariance. – Cristi Stoica Dec 8 2010 at 22:49
@Andrew L: I would be grateful if you would provide more details, possible a link, about the ongoing attempt you mention. I know there are some such attempts, in particular by using dressed particles. Thank you. – Cristi Stoica Dec 8 2010 at 22:53
Two comments: 1) There is a rigorous notion of integration over spaces of fields. It works just fine for a number of quantum field theories, in spacetime dimensions 2 and 3. It can even be partly proven to work (see work by Balaban, Magnen, Rivasseau, Seneor, and others) in dimension 4. 2) The Standard Model of Particle Physics itself is not just non-rigorous, but almost certainly does not exist in the sense of the previous comment. There is no continuum limit for Higgs fields. – userN Dec 9 2010 at 4:51
Louigi, it means that QFTs with scalar fields are typically not asymptotically free. Some coupling becomes large at short distances and keeps you from taking the continuum limit. You need more information at short distances to define the theory. But of course there is no reason to think the Standard Model including Higgs should exist rigorously as a QFT at all energy scales and many reasons to think it does not. That's why particle theorists regard it as a low-energy effective theory and are hoping the LHC will provide some information about its short distance completion. – Jeff Harvey Dec 9 2010 at 16:37
In boundary value problems, physicists consider infinity (in space and in time) to be part of the boundary. Mathematicians know there's a distinction between compact and non-compact spaces.
-
Perhaps it would not be out of place to quote Miles Reid's Bourbaki seminar on the McKay correspondence here:
"The physicists want to do path integrals, that is, they want to integrate some "Action Man functional" over the space of all paths or loops $\gamma : [0; 1] \rightarrow Y$. This impossibly large integral is one of the major schisms between math and fizz. The physicists learn a number of computations in finite terms that approximate their path integrals, and when sufficiently skilled and imaginative, can use these to derive marvellous consequences; whereas the mathematicians give up on making sense of the space of paths, and not infrequently derive satisfaction or a misplaced sense of superiority from pointing out that the physicists' calculations can equally well be used (or abused!) to prove 0 = 1. Maybe it's time some of us also evolved some skill and imagination. The motivic integration treated in the next section builds a miniature model of the physicists' path integral,..."
-
"Maybe it's time some of us also evolved some skill and imagination." Love it. I wish I could apply the +1 directly to Miles Reid. – Jim Bryan Dec 9 2010 at 0:34
Another example from theoretical high-energy physics I've encountered: sometimes when physicists have some equation of motion for an arbitrary number $N$ of particles with positions $x_i$, e.g. something of the form $\frac{1}{N}\sum_i f(x_i) + \frac{1}{N^2}\sum_{ij} g(x_i, x_j) = 0$, they wish to know what the solutions to this equation look like for large $N$. A technique they use is to replace the variables $x_i$ with a probability measure $\mu$ on the space of their possible values, which is supposed to represent the number of $x_i$'s in a given region in the large $N$ limit, and instead of solving the original equation they solve the analogous equation in $\mu$, e.g. $\int f(x) \mathrm{d}\mu(x) + \int g(x, y) \mathrm{d}(\mu \times \mu) (x, y) = 0$. In fact it's not hard to come up with a toy example where the original equation can be solved exactly for all $N$ and the solutions "look like" a particular probability distribution in the large $N$ limit, but that probability distribution fails to satisfy the corresponding equation, and for that reason I have some doubt that this method can be turned into something rigorous.
-
Every $(x_i)_{1\le i\le N}$ which solves your first equation yields a (discrete) probability measure $\mu_N$ which solves your second equation. So what you are saying is that in a toy example: 1. the solution $\mu_N$ of the first equation is unique for every large enough $N$; 2. the probability measure $\mu_N$ "looks like" $\mu$ when $N\to+\infty$; 3. the probability measure $\mu$ does not solve the second equation. Hmmm... If "looks like" means "converges to", you might want to explain the relevant mode of convergence of measures (and/or the toy example itself). – Didier Piau Dec 9 2010 at 7:35
Well, coming up with an appropriate definition of "converges to" would be one of the difficulties in making the technique rigorous, but in the toy example the solution for a given $N$ consisted of the $N^{\mbox{th}}$ roots of unity in the complex plane, and the probability distribution they "look like" was the measure uniformly concentrated on the unit circle. I don't know if there's any notion of convergence that works, but the real examples I saw were of the same form (i.e. sets of points lying at regular intervals on submanifolds of $\mathbb{R}^n$ being approximated by uniform measures). – Phil Wild Dec 9 2010 at 17:39
Indeed the uniform probability distributions on the $N$th roots of unity converge to the uniform probability distribution on the unit circle when $N$ goes to infinity--for several modes of convergence that each have a perfectly rigorous definition thank you. But could you explain the "toy example where the original equation can be solved exactly etc." which you alluded to in your post? We know what are the measures $\mu_N$ now but what are the functions $f$ and $g$? – Didier Piau Dec 13 2010 at 7:33
I'm afraid I don't remember the details, though IIRC it was something simple like $\partial/\partial x_i \left(\sum_j |x_j|^2 - a \sum_{jk} |x_j - x_k|^4 \right) = 0$. – Phil Wild Dec 13 2010 at 17:53
Oh, upon rereading your first comment I should add that the solutions $\mu_N$ in the example I came up with were far from unique. – Phil Wild Dec 13 2010 at 18:09
Finally, a Math Overflow question that addresses my specialty: non-rigor!
Here are a few examples of non-rigor as applied to evidence for dualities:
1. Heterotic-Type II. In earlier times, the best evidence for heterotic-Type-II duality was (a) counting the number of supersymmetries of the theory, and (b) comparing the moduli spaces.
2. AdS-CFT. For AdS-CFT the earliest and best comparisons were counting the so-called anomalous dimensions of various operators. To date, I think the tests are far from rigorized (and yes, this would be a great problem to make mathematically precise).
3. Mirror Symmetry, early days. Recall that mirror symmetry in CY moduli space came from constructing a chart of the Euler characteristics of CY complete intersections and noticing the symmetry of the chart about zero. Other non-rigorous arguments involve counting the dimensions (just the dimensions) of the moduli of purportedly mirror objects. Then there's the old compute-on-flat-space-and-let-supersymmetry-take-care-of-the-rest trick.
4. Low energy effective field theory. The "fact" that string theory reduces to an oft-identifiable QFT in a low energy limit is a huge source of argumentation/inspiration in string theory. Accounting for (effective) black holes helped lead to M-theory in one context, and to the microscopic description of black-hole entropy in another. One can also argue for dualities by identifying equivalent field contents in two different models. This brings up another point.
5. Invariance of BPS states under perturbation. It is great to take a quantity that does not vary and evaluate it in a limit where it is easy to compute. This argument appears again and again in physics -- and also in math, of course (e.g. in the heat-kernel proof of the index theorem). BPS numbers are just that. (Of course, they do vary, and the continuity of the relevant physical parameters [numbers are not necessarily physical quantities] is what underlies interesting explanations of wall-crossing.)
I'm probably including too many that don't fit and excluding a lot that do. Very non-rigorous of me!
-
The use of random matrix theory to model energy levels of heavy nuclei and other physical systems. There is striking statistical evidence that the eigenvalues of large random self-adjoint matrices, the energy levels of heavy nuclei, and the normalized zeros of $L$-functions (!) are all spaced about the same.
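This universality is easy to see numerically; here is a minimal sketch (Python with NumPy; all sizes and parameters are illustrative choices) that samples symmetric Gaussian random matrices and looks at their normalized bulk eigenvalue spacings:

```python
import numpy as np

rng = np.random.default_rng(0)

def bulk_spacings(n=200, trials=200):
    """Normalized nearest-neighbour eigenvalue spacings from the
    bulk of the spectrum of random real symmetric matrices."""
    out = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / 2                 # symmetric, so eigenvalues are real
        ev = np.linalg.eigvalsh(h)
        mid = ev[n // 4 : 3 * n // 4]     # keep the bulk, away from the edges
        s = np.diff(mid)
        out.extend(s / s.mean())          # rescale to unit mean spacing
    return np.array(out)

s = bulk_spacings()
# Level repulsion: very small spacings are rare, unlike for independent
# points, whose spacings would be exponentially distributed.
print("P(spacing < 0.1) =", np.mean(s < 0.1))
```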
-
The replica method and the cavity method have been used by physicists to calculate thermodynamic quantities in various statistical mechanics settings (including quite a few classes of random combinatorial objects). The results are often exactly right, even though the method is not at all rigorous. Michel Talagrand has recently proven rigorously some of the results that have been obtained by these methods.
-
My favorite example of this is the use of the replica method by Mezard and Parisi in the mid-1980s to "prove" that the expected optimal value of the assignment problem (with costs chosen randomly from the uniform [0,1] distribution) is $\zeta(2) = \pi^2/6$. It wasn't until 2000 that Aldous published a rigorous proof. – Mike Spivey Dec 13 2010 at 3:42
The Hypernetted-chain approximation used in statistical mechanics.
It was used, for instance, by Laughlin in the theory of the fractional quantum Hall effect to estimate the energies of elementary excitations of Laughlin's wave function.
-
The Yang-Mills equations are experimentally well supported but still lack a rigorous mathematical foundation; the Clay Mathematics Institute's prize for the Yang-Mills mass gap problem is worth one million dollars.
-
http://mathhelpforum.com/advanced-statistics/160903-definition-clarification-state-space-support-sample-space.html

# Thread:
1. ## Definition clarification: "state-space", "support" and "sample space"
Hello, what is the difference between the meaning of "state-space", "support" and "sample space". We have just begun stochastic processes and these definitions seem very close to each other.
Thanks
2. Hello,
The sample space is the set from which you pick the outcomes (it's called $\Omega$ in general).
The support is the set where the random variable you're considering is non-zero.
You consider an outcome $\omega\in\Omega$ and a random variable $X$ (which is a mapping). If $X(\omega)\neq 0$, then $\omega$ belongs to the support of $X$.
From what I understand of a state-space, it is similar to the support, except that we're considering the different possible states of a stochastic process (e.g. a Markov chain).
http://mathoverflow.net/questions/102557/completion-and-tensor-product-of-algebras

## Completion and Tensor Product of Algebras
Let $A$ be a commutative ring with 1, $I$ an ideal in $A$, $B$ an $A$-algebra. I am trying to prove the following isomorphism of $A$-algebras: $$\big( A^* \otimes _A B \big) ^* \cong B^*$$ "$^*$" denotes the $I$-adic completion: every $A$-algebra $X$ may be endowed with the $I$-adic topology, defined by the ideal $IX$ in $X$, and the $I$-adic completion of the algebra is the completion with respect to this (uniform) topology.
I have so far been able to prove that the image of $B$ in $T :=A^*\otimes_A B$ under the map $1 \otimes id \colon B \to T$ is dense in $T$. At this stage I considered the map $(1 \otimes id)^* \colon B^* \to T^*$, and tried to show that it is an isomorphism, but I'm having trouble both with the injectivity and the surjectivity of this mapping.
Are these $A$-algebras always isomorphic? If so, how can this be proven? If not, how can a counterexample be constructed, and what do I have to require (Noetherianity? Flatness? Finiteness?) for them to be isomorphic?
-
## 2 Answers
It's true if $A$ is Noetherian.
For any $A$-algebra $C$ and any ideal $J$ in $A$, note that $C/JC$ is isomorphic to the tensor product algebra $C \otimes_A A/J$.
Now for any $n \geq 0$, $A/I^n$ is a module over $A^*$, and the multiplication map $A/I^n \otimes_A A^* \to A/I^n$ is an isomorphism, since $A$ is Noetherian --- see for example Proposition 10.13 of Atiyah and Macdonald's "Introduction to Commutative Algebra".
Let $T = A^* \otimes_A B$. Then we get isomorphisms
$T/I^n T \cong A/I^n \otimes_A (A^* \otimes_A B) \cong (A/I^n \otimes_A A^*) \otimes_A B \cong (A/I^n) \otimes_A B \cong B/I^nB$.
Now pass to the inverse limit to obtain an isomorphism $T^* \stackrel{\cong}{\longrightarrow} B^*$.
-
Thank you for your answer, Konstantin!
In fact, since asking the question, my advisor, Prof. Dan Haran, has found a proof which does not use any Noetherianity conditions:
For any $A$-algebra $C$, we denote by $\varphi_C$ the canonical mapping from $C$ to its $I$-adic completion $C^*$, and for an $A$-homomorphism $\psi \colon B \to C$ we denote by $\psi^*$ the induced $A$-homomorphism from $B^*$ to $C^*$.
Using the fact that the image of $A$ under $\varphi_A$ is dense in $A^*$, one can directly see that the image of $B$ under $1 \otimes id$ is dense in $T : = A^* \otimes_A B$. Therefore the image of $B^*$ under the induced mapping $(1 \otimes id)^*$ must also be dense in $T^*$.
The algebra mapping $\beta \colon A \to B$ induces the mapping $\beta^* \colon A^* \to B^*$, and using the universal property of the tensor product (as a co-product in the category of commutative $A$-algebras), we obtain a unique $A$-homomorphism $\psi \colon T \to B^*$ such that $\psi \circ (1 \otimes id) = \varphi_B$ and $\psi \circ (id \otimes 1) = \beta^*$.
Now $\varphi_B^* = \varphi_{B^* } \colon B^* \to (B^* )^*$ is an isomorphism, and $\psi^* \circ (1 \otimes id )^* = \varphi_B^*$ shows that $(1 \otimes id )^*$ is an injection. Therefore we can view $B^*$ as a complete dense $A$-subalgebra of $T^*$, which means that: $$T^* = \overline{B^* } \cong (B^* )^* \cong B^*$$ (where $\overline{B^* }$ is the closure of $B^*$ in $T^*$ ).
-
http://en.wikipedia.org/wiki/Monte_Carlo_integration

# Monte Carlo integration
An illustration of Monte Carlo integration. In this example, the domain D is the inner circle and the domain E is the square. Because the square's area can be easily calculated, the area of the circle can be estimated by the ratio (0.8) of the points inside the circle (40) to the total number of points (50), yielding an approximation for π/4 ≈ 0.8
In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at a regular grid,[1] Monte Carlo algorithms randomly choose the points at which the integrand is evaluated.[2] This method is particularly useful for higher-dimensional integrals.[3]
Informally, to estimate the area of a domain D, first pick a simple domain E whose area is easily calculated and which contains D. Now pick a sequence of random points that fall within E. Some fraction of these points will also fall within D. The area of D is then estimated as this fraction multiplied by the area of E.
Particular techniques to perform a Monte Carlo integration can be considered. Uniform sampling, stratified sampling and importance sampling are the most common.
## Overview
Consider the set Ω, subset of Rm on which the multidimensional definite integral
$I = \int_{\Omega}f(\overline{\mathbf{x}}) \, d\overline{\mathbf{x}}$
is to be calculated with known volume of Ω
$V = \int_{\Omega}d\overline{\mathbf{x}}$
The most naive approach to compute I is to sample points uniformly on Ω:[4] given N uniform samples,
$\overline{\mathbf{x}}_1, \cdots, \overline{\mathbf{x}}_N\in \Omega,$
I can be approximated by
$I \approx Q_N \equiv V \frac{1}{N} \sum_{i=1}^N f(\overline{\mathbf{x}}_i) = V \langle f\rangle$.
This is because the law of large numbers ensures that
$\lim_{N \to \infty} Q_N = I$.
Implementation issues such as pseudorandom number generators and limited floating-point precision can lead to systematic errors; nevertheless, this has to be taken into account only in very particular cases.
Given the estimation of I from QN, the error bars of QN can be estimated by the sample variance using the unbiased estimate of the variance:
$\mathrm{Var}(f)\equiv\sigma_N^2 = \frac{1}{N-1} \sum_{i=1}^N \left (f(\overline{\mathbf{x}}_i) - \langle f \rangle \right )^2.$
which leads to
$\mathrm{Var}(Q_N) = \frac{V^2}{N^2} \sum_{i=1}^N \mathrm{Var}(f) = V^2\frac{\mathrm{Var}(f)}{N} = V^2\frac{\sigma_N^2}{N}$.
As long as the sequence
$\left \{ \sigma_1^2, \sigma_2^2, \sigma_3^2, \ldots \right \}$
is bounded, this variance decreases asymptotically to zero as 1/N. The estimation of the error of QN is thus
$\delta Q_N\approx\sqrt{\mathrm{Var}(Q_N)}=V\frac{\sigma_N}{\sqrt{N}},$
which decreases as $\tfrac{1}{\sqrt{N}}$ and the familiar law of random walk applies: to reduce the error by a factor of 10 requires a 100-fold increase in the number of sample points. This result is quite strong in the sense that it does not depend on the number of dimensions of the integral: most of the deterministic methods like trapezoidal rule strongly depend on the dimension of the integral, because they use a grid to fill the space to compute the integral, and the grid grows exponentially with the dimensions.[5]
The above expression provides a statistical estimate of the error on the result, which is not a strict error bound; random sampling of the region may not uncover all the important features of the function, resulting in an underestimate of the error.
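A minimal implementation of this estimator and its error estimate, in Python with NumPy (the integrand $e^x$ below is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_integrate(f, a, b, n=100_000):
    """Plain Monte Carlo: returns Q_N = V <f> and the error
    estimate V * sigma_N / sqrt(N) over the interval [a, b]."""
    x = rng.uniform(a, b, n)
    fx = f(x)
    v = b - a
    return v * fx.mean(), v * fx.std(ddof=1) / np.sqrt(n)

q, err = mc_integrate(np.exp, 0.0, 1.0)
print(q, "+/-", err, "   exact:", np.e - 1)
```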
### Example
Relative error as a function of the number of samples, showing the scaling $\tfrac{1}{\sqrt{N}}$
A paradigmatic example of a Monte Carlo integration is the estimation of π. Consider the function
$H\left(x,y\right)=\begin{cases} 1 & \text{if }x^{2}+y^{2}\leq1\\ 0 & \text{else} \end{cases}$
and the set Ω = [−1,1] × [−1,1] with V = 4. Notice that
$I_\pi = \int_\Omega H(x,y) dx dy = \pi.$
Thus, a crude way of calculating the value of π with Monte Carlo integration is to pick N random numbers on Ω and compute
$Q_N = \frac{4}{N}\sum_{i=1}^N H(x_{i},y_{i})$
In the figure the relative error is shown, following the expected scaling.
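The same computation in a few lines of Python (an illustrative sketch) also exhibits the $1/\sqrt{N}$ decay of the relative error:

```python
import numpy as np

rng = np.random.default_rng(2)

for n in (10**2, 10**3, 10**4, 10**5, 10**6):
    x = rng.uniform(-1.0, 1.0, n)
    y = rng.uniform(-1.0, 1.0, n)
    q = 4.0 * np.mean(x**2 + y**2 <= 1.0)   # V = 4 times the mean of H
    print(f"N = {n:>7}   Q_N = {q:.5f}   rel. error = {abs(q - np.pi)/np.pi:.2e}")
```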
### Wolfram Mathematica Example
The code below describes a process of integrating the function
$f(x) = \frac{1}{1+\sinh(2x)\,\ln^{2}(x)}$
using the Monte-Carlo method in Mathematica:
```mathematica
func[x_] := 1/(1 + Sinh[2*x]*(Log[x])^2)
p = Plot[func[x], {x, 0.8, 3}];
p1 = Plot[PDF[NormalDistribution[1, 0.399], 1.1*x - 0.1], {x, 0.8, 3}];
Show[{p, p1}]
NSolve[D[func[x], x] == 0, x, Reals]
Distrib[x_, average_, var_] := PDF[NormalDistribution[average, var], 1.1*x - 0.1]
n = 10;
RV = RandomVariate[TruncatedDistribution[{0.8, 3}, NormalDistribution[1, 0.399]], n]
(* importance-sampling estimate of the integral *)
Int = 1/n Total[func[RV]/Distrib[RV, 1, 0.399]]*Integrate[Distrib[x, 1, 0.399], {x, 0.8, 3}]
(* reference value *)
NIntegrate[func[x], {x, 0.8, 3}]
(* crude uniform-average estimate for comparison *)
Int2 = ((3 - 0.8)/n) Total[func[RV]]
```
## Recursive stratified sampling
An illustration of Recursive Stratified Sampling. In this example, the function: $f(x,y) = \begin{cases}1 & x^2+y^2<1 \\0 & x^2+y^2 \ge 1 \end{cases}$ from the above illustration was integrated within a unit square using the suggested algorithm. The sampled points were recorded and plotted. Clearly stratified sampling algorithm concentrates the points in the regions where the variation of the function is largest.
Recursive stratified sampling is a generalization of one-dimensional adaptive quadratures to multi-dimensional integrals. On each recursion step the integral and the error are estimated using a plain Monte Carlo algorithm. If the error estimate is larger than the required accuracy the integration volume is divided into sub-volumes and the procedure is recursively applied to sub-volumes.
The ordinary 'dividing by two' strategy does not work in multiple dimensions, as the number of sub-volumes grows far too quickly to keep track of. Instead one estimates along which dimension a subdivision should bring the most dividends and only subdivides the volume along this dimension.
The stratified sampling algorithm concentrates the sampling points in the regions where the variance of the function is largest thus reducing the grand variance and making the sampling more effective, as shown on the illustration.
The popular MISER routine implements a similar algorithm.
### MISER Monte Carlo
The MISER algorithm is based on recursive stratified sampling. This technique aims to reduce the overall integration error by concentrating integration points in the regions of highest variance.[6]
The idea of stratified sampling begins with the observation that for two disjoint regions a and b with Monte Carlo estimates of the integral $E_a(f)$ and $E_b(f)$ and variances $\sigma_a^2(f)$ and $\sigma_b^2(f)$, the variance Var(f) of the combined estimate
$E(f) = \tfrac{1}{2} \left (E_a(f) + E_b(f) \right )$
is given by,
$\mathrm{Var}(f) = \frac{\sigma_a^2(f)}{4 N_a} + \frac{\sigma_b^2(f)}{4 N_b}$
It can be shown that this variance is minimized by distributing the points such that,
$\frac{N_a}{N_a + N_b} = \frac{\sigma_a}{\sigma_a + \sigma_b}$
Hence the smallest error estimate is obtained by allocating sample points in proportion to the standard deviation of the function in each sub-region.
The MISER algorithm proceeds by bisecting the integration region along one coordinate axis to give two sub-regions at each step. The direction is chosen by examining all d possible bisections and selecting the one which will minimize the combined variance of the two sub-regions. The variance in the sub-regions is estimated by sampling with a fraction of the total number of points available to the current step. The same procedure is then repeated recursively for each of the two half-spaces from the best bisection. The remaining sample points are allocated to the sub-regions using the formula for Na and Nb. This recursive allocation of integration points continues down to a user-specified depth where each sub-region is integrated using a plain Monte Carlo estimate. These individual values and their error estimates are then combined upwards to give an overall result and an estimate of its error.
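A toy version of a single such step over the unit square (a sketch only, not the actual MISER routine; the test integrand, sample sizes, and the variance proxy used for choosing the axis are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def miser_step(f, n_pre=2_000, n_main=20_000):
    """Bisect [0,1]^2 along the axis with the smaller combined-variance
    proxy, then allocate points with N_a/(N_a+N_b) = sigma_a/(sigma_a+sigma_b)."""
    pts = rng.uniform(size=(n_pre, 2))
    vals = f(pts[:, 0], pts[:, 1])
    best = None
    for axis in (0, 1):
        lo, hi = vals[pts[:, axis] < 0.5], vals[pts[:, axis] >= 0.5]
        proxy = lo.var(ddof=1) + hi.var(ddof=1)   # proxy for the combined variance
        if best is None or proxy < best[0]:
            best = (proxy, axis, lo.std(ddof=1), hi.std(ddof=1))
    _, axis, sa, sb = best
    na = int(n_main * sa / (sa + sb))
    nb = n_main - na

    def half_mean(lo, hi, m):
        p = rng.uniform(size=(m, 2))
        p[:, axis] = rng.uniform(lo, hi, m)
        return f(p[:, 0], p[:, 1]).mean()

    # Each half-square has volume 1/2, so the two estimates are averaged.
    return 0.5 * half_mean(0.0, 0.5, na) + 0.5 * half_mean(0.5, 1.0, nb)

f = lambda x, y: (x**2 + y**2 < 1.0).astype(float)
print(miser_step(f), "vs pi/4 =", np.pi / 4)
```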
## Importance sampling
Main article: Importance sampling
### VEGAS Monte Carlo
Main article: VEGAS algorithm
The VEGAS algorithm takes advantage of the information stored during the sampling, and uses it and importance sampling to efficiently estimate the integral I. It samples points from the probability distribution described by the function |f| so that the points are concentrated in the regions that make the largest contribution to the integral.
In general, if the Monte Carlo integral of f is sampled with points distributed according to a probability distribution described by the function g, we obtain an estimate:
$E_g(f; N) = E \left (\tfrac{f}{g}; N \right )$
with a corresponding variance,
$\mathrm{Var}_g(f; N) = \mathrm{Var} \left (\tfrac{f}{g}; N \right )$
If the probability distribution is chosen as
$g = \tfrac{|f|}{I(|f|)}$
then it can be shown that the variance $V_g(f; N)$ vanishes, and the error in the estimate will be zero. In practice it is not possible to sample from the exact distribution g for an arbitrary function, so importance sampling algorithms aim to produce efficient approximations to the desired distribution.
The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region that create the histogram of the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution.[7] In order to avoid the number of histogram bins growing like $K^d$, the probability distribution is approximated by a separable function:
$g(x_1, x_2, \ldots) = g_1(x_1) g_2(x_2) \ldots$
so that the number of bins required is only $Kd$. This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS.
VEGAS incorporates a number of additional features, and combines both stratified sampling and importance sampling.[7] The integration region is divided into a number of "boxes", with each box getting a fixed number of points (the goal is 2). Each box can then have a fractional number of bins, but if bins/box is less than two, VEGAS switches to a kind of variance reduction (rather than importance sampling).
This routine uses the VEGAS Monte Carlo algorithm to integrate the function f over the dim-dimensional hypercubic region defined by the lower and upper limits in the arrays xl and xu, each of size dim. The integration uses a fixed number of function calls. The result and its error estimate are based on a weighted average of independent samples.
The VEGAS algorithm computes a number of independent estimates of the integral internally, according to the iterations parameter described below, and returns their weighted average. Random sampling of the integrand can occasionally produce an estimate where the error is zero, particularly if the function is constant in some regions. An estimate with zero error causes the weighted average to break down and must be handled separately.
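The core of the idea fits in a short one-dimensional sketch (illustrative Python, not the actual VEGAS implementation): build a histogram of $|f|$ from a pre-sampling pass, then use the normalized histogram as the sampling density $g$:

```python
import numpy as np

rng = np.random.default_rng(4)

def one_vegas_pass(f, n_bins=50, n_pre=20_000, n_main=100_000):
    """Single refinement pass on [0, 1]."""
    xs = rng.uniform(size=n_pre)
    idx = np.minimum((xs * n_bins).astype(int), n_bins - 1)
    w = np.array([np.abs(f(xs[idx == i])).mean() if np.any(idx == i) else 1e-12
                  for i in range(n_bins)])
    p = w / w.sum()                 # probability of landing in each bin
    g = p * n_bins                  # piecewise-constant density (bin width 1/n_bins)
    bins = rng.choice(n_bins, size=n_main, p=p)
    x = (bins + rng.uniform(size=n_main)) / n_bins
    return np.mean(f(x) / g[bins])  # importance-sampling estimate of the integral

f = lambda x: np.exp(-100.0 * (x - 0.5)**2)     # sharply peaked integrand
print(one_vegas_pass(f), "vs ~", np.sqrt(np.pi) / 10.0)
```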
### Metropolis–Hastings algorithm
Main article: Metropolis–Hastings algorithm
Importance sampling provides a very important tool for performing Monte Carlo integration.[3] The main result of importance sampling for this method is that the uniform sampling of $\overline{\mathbf{x}}$ is a particular case of a more generic choice, in which the samples are drawn from any distribution $p(\overline{\mathbf{x}})$. The idea is that $p(\overline{\mathbf{x}})$ can be chosen to decrease the variance of the measurement QN.
Consider the following example, where one would like to numerically integrate a Gaussian function, centered at 0, with σ = 1, from −1000 to 1000. Naturally, if the samples are drawn uniformly on the interval [−1000, 1000], only a very small part of them would be significant to the integral. This can be improved by choosing a different distribution from which the samples are drawn, for instance by sampling according to a Gaussian distribution centered at 0, with σ = 1. Of course the "right" choice strongly depends on the integrand.
Formally, given a set of samples chosen from a distribution
$p(\overline{\mathbf{x}}) : \qquad \overline{\mathbf{x}}_1, \cdots, \overline{\mathbf{x}}_N \in V,$
the estimator for I is given by[3]

$Q_N \equiv \frac{1}{N} \sum_{i=1}^N \frac{f(\overline{\mathbf{x}}_i)}{p(\overline{\mathbf{x}}_i)}.$

If $p$ is known only up to a normalizing constant, one can use the self-normalized variant

$Q_N \equiv \frac{V}{Z_N} \sum_{i=1}^N \frac{f(\overline{\mathbf{x}}_i)}{p(\overline{\mathbf{x}}_i)}, \qquad Z_N \equiv \sum_{i=1}^N \frac{1}{p(\overline{\mathbf{x}}_i)},$

since $Z_N/N$ converges to the volume $V$ of the integration region. Notice that if $p(\overline{\mathbf{x}})$ is the uniform distribution, both estimators reduce to the one introduced in the introduction.
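For the Gaussian example above, a sketch in Python (proposal and sample size are illustrative choices) makes the variance reduction concrete; since here $f$ is proportional to the proposal density, the weighted estimator has essentially zero variance, as predicted above for $g = |f|/I(|f|)$:

```python
import numpy as np

rng = np.random.default_rng(5)

f = lambda x: np.exp(-x**2 / 2.0)          # exact integral over R: sqrt(2*pi)
n = 100_000

# Uniform sampling on [-1000, 1000]: almost every sample lands where f ~ 0.
x = rng.uniform(-1000.0, 1000.0, n)
print("uniform :", 2000.0 * f(x).mean())

# Sampling from p = N(0, 1) and weighting each sample by 1/p(x_i):
x = rng.normal(size=n)
p = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)
print("weighted:", np.mean(f(x) / p), "  exact:", np.sqrt(2.0 * np.pi))
```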
The Metropolis-Hastings algorithm is one of the most used algorithms to generate $\overline{\mathbf{x}}$ from $p(\overline{\mathbf{x}})$,[3] thus providing an efficient way of computing integrals.
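A minimal random-walk Metropolis sampler (an illustrative sketch; the target density, step size, and chain length are arbitrary assumptions) that draws from a density known only up to normalization:

```python
import numpy as np

rng = np.random.default_rng(6)

def metropolis(log_p, x0=0.0, n=50_000, step=1.0):
    """Random-walk Metropolis: propose x' = x + step * N(0,1) and
    accept with probability min(1, p(x')/p(x))."""
    xs = np.empty(n)
    x, lp = x0, log_p(x0)
    for i in range(n):
        x_new = x + step * rng.normal()
        lp_new = log_p(x_new)
        if np.log(rng.uniform()) < lp_new - lp:
            x, lp = x_new, lp_new
        xs[i] = x
    return xs

samples = metropolis(lambda x: -x**2 / 2.0)   # unnormalized standard normal
print(samples.mean(), samples.var())          # close to 0 and 1
```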
See also: Multicanonical ensemble
## Notes
1. Press et al, 2007, Chap. 4.
2. Press et al, 2007, Chap. 7.
3. ^ a b c d Newman, 1999, Chap. 2.
4. Newman, 1999, Chap. 1.
5. Press et al, 2007
6. Press, 1990, pp190-195.
7. ^ a b Lepage, 1978
## References
• R. E. Caflisch, Monte Carlo and quasi-Monte Carlo methods, Acta Numerica vol. 7, Cambridge University Press, 1998, pp. 1–49.
• S. Weinzierl, Introduction to Monte Carlo methods.
• W.H. Press, G.R. Farrar, Recursive Stratified Sampling for Multidimensional Monte Carlo Integration, Computers in Physics, v4 (1990).
• G.P. Lepage, A New Algorithm for Adaptive Multidimensional Integration, Journal of Computational Physics 27, 192-203, (1978)
• G.P. Lepage, VEGAS: An Adaptive Multi-dimensional Integration Program, Cornell preprint CLNS 80-447, March 1980
• J. M. Hammersley, D.C. Handscomb (1964) Monte Carlo Methods. Methuen. ISBN 0-416-52340-4
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
• Newman, MEJ; Barkema, GT (1999). Monte Carlo Methods in Statistical Physics. Clarendon Press.
• Robert, CP; Casella, G (2004). Monte Carlo Statistical Methods (2nd ed.). Springer. ISBN 978-1-4419-1939-7.
http://mathhelpforum.com/pre-calculus/153540-simplify-without-absolute-values.html

# Thread:
1. ## Simplify Without Absolute Values
Simplify and write the given numbers without using absolute values.
(1) $\left|\pi - \sqrt{5}\right| =$
(2) $\left|3 - \pi\right| + 7 =$
2. If a is greater than b, then
$|a - b| = a - b$
If a is less than b, then
$|a - b| = b - a$
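Applied here (a worked illustration of those rules): since $\pi \approx 3.1416 > \sqrt{5} \approx 2.2361$ and $\pi > 3$,

$\left|\pi - \sqrt{5}\right| = \pi - \sqrt{5}, \qquad \left|3 - \pi\right| + 7 = (\pi - 3) + 7 = \pi + 4.$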
3. Thank you so much for the absolute value rules needed to solve this problem. I can take it from here.
http://math.stackexchange.com/questions/242623/solving-a-1x4999-a-2x4998-a-3x4007-a-5000x0-n?answertab=active

# Solving $a_1x^{4999} + a_2x^{4998} + a_3x^{4997}+…+a_{5000}x^{0}=n$
How can we solve the equation
$$a_1x^{4999} + a_2x^{4998} + a_3x^{4997}+...+a_{5000}x^{0}=n$$
if we know the values $a_1,a_2,a_3,...,a_{5000},n$? Are there any open source solutions?
-
Tag description: DO NOT USE THIS TAG. The algebra tag is no longer being used. Please use the [algebra-precalculus] tag or the [abstract-algebra] tag instead. – Belgi Nov 22 '12 at 13:54
Out of curiosity, where did you get this problem? I assume you want a numerical solution, but it seems difficult to even evaluate this polynomial numerically. – littleO Nov 22 '12 at 13:55
of course it is from not-so-friendly geometric-like sequence and I solved it using WolframAlpha trial and verified it myself using binary search. And it turns out that solving the equation this way was wrong for this kind of problem. – thkang Nov 22 '12 at 14:16
## 1 Answer
The Abel–Ruffini theorem tells us there is no general formula (in radicals) for the roots of a polynomial of degree $>4$, so you should not expect to solve this one exactly.
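As for open-source tools, one numerical route is to compute all roots at once via the companion matrix, as NumPy does (a sketch; the coefficients below are random placeholders, and a degree this high can be slow and numerically ill-conditioned):

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.normal(size=5000)   # [a_1, ..., a_5000], decreasing degree

n = 1.0                          # the right-hand side
coeffs[-1] -= n                  # solve p(x) - n = 0

roots = np.roots(coeffs)         # eigenvalues of the companion matrix
real = roots[np.abs(roots.imag) < 1e-8].real
print(roots.size, "roots,", real.size, "of them (approximately) real")
```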
-
This is true for the generic polynomial, but can be very wrong for specific polynomials. For instance, we know the exact roots of $1+x+x^2+\cdots+x^n=0$ for every $n$. – Andrea Mori Nov 22 '12 at 14:12
@AndreaMori Of course the Galois group of the polynomial can be solvable, but the OP was looking for open-source code, and such a thing should give a correct output for every input. Numerical methods are the only real options, unless the case is very trivial. – Belgi Nov 22 '12 at 14:19
Absolutely. I guess that I was suggesting that although the situation looks bad in general, sometimes we do get lucky. – Andrea Mori Nov 22 '12 at 14:42
http://math.stackexchange.com/questions/210556/find-a-general-formula-from-piecewise-defined-function-ii/210612

# Find a general formula from piecewise-defined function (II)
This question is very similar to my previous one. I have: $s \in [0;100]$ and $s \in \mathbb{Z}$. The piece-wise definition is as follows: $$20 \le s \le 100 \to 0\\ 10 \le s \lt 20 \to 1\\ s \le 9 \to 2\\$$
Following the previous question's answers my best try was this:
$\displaystyle f(s) = 2 - \left \lfloor \frac{s}{10} \right \rfloor + H \left (\left \lfloor \frac{s}{10} \right \rfloor - 3 \right ) \cdot \left ( \left \lfloor \frac{s}{10} \right \rfloor - 2 \right )$
where $H(x)$ is the Heaviside Step Function. That involves too many calculations. There must be something simpler; I'm just too blind to see it.
-
You need to specify what functions can appear in the formula. For example, in C you could write something like (s <= 9 ? 2 : (s >= 10 && s < 20 ? 1 : 0)). – copper.hat Oct 10 '12 at 17:19
What is your objective exactly? Are you using this for a computer program or are you seeking a mathematical answer? – Emmad Kareem Oct 10 '12 at 17:43
@copper.hat: I seek a mathematical answer, like those of my previous question. Allowed functions should be floor, ceil, heaviside step function. But if an answer formula contains some other functions I could accept it. It's difficult to say which one are allowed beforehand. – rubik Oct 10 '12 at 18:05
@EmmadKareem: See my previous comment. – rubik Oct 10 '12 at 18:05
A mathematical answer is a bit vague. For what purpose do you want such a formula? One formula could be in terms of characteristic functions, eg, $s \mapsto 1_{\{0,...,9\}}(s)+ 2 \cdot 1_{\{10,...,19\}}(s)$. – copper.hat Oct 10 '12 at 18:20
## 2 Answers
With this definition of the Heaviside step function you may use for example : $$f(s)=\left(2-\left \lfloor \frac s{10}\right \rfloor\right)\operatorname{H}(19-s)$$ or the simple : $$f(s)=\operatorname{H}(9-s)+\operatorname{H}(19-s)$$
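A quick brute-force check of both formulas (a Python sketch, using the convention $H(0)=1$):

```python
def H(x):                        # Heaviside step with H(0) = 1
    return 1 if x >= 0 else 0

def target(s):                   # the piecewise definition
    return 0 if s >= 20 else (1 if s >= 10 else 2)

for s in range(101):
    f1 = (2 - s // 10) * H(19 - s)
    f2 = H(9 - s) + H(19 - s)
    assert f1 == f2 == target(s), s
print("both formulas agree with the piecewise definition on 0..100")
```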
-
Aw, your second one was rather obvious, I feel stupid now :) – rubik Oct 10 '12 at 19:17
@rubik: Agreed (I was influenced by your divisions by $10$ first and had to get out of the attractor :-)) – Raymond Manzoni Oct 10 '12 at 19:27
$$f(s)=1-\text{sgn}(\lfloor s/10-1 \rfloor)$$
-
Interesing idea with the sign function! – rubik Oct 10 '12 at 19:16
Thanks – draks ... Oct 10 '12 at 19:19
http://math.stackexchange.com/questions/246392/show-that-z-leftg-zg-right-1

# Show that $Z\left(G/Z(G)\right)=\{1\}$
This problem was given and I didn't give and point any ideas about it in the class in time. It says: if the group $G$ is perfect, then $Z\left(G/Z(G)\right)=\{1\}$, where $Z(G)$ is the centre of $G$. Can someone help me to tackle it well. Thank you!
Edit. A group $G$ is called perfect if $G=[G,G]$.
-
Start with the definition of the centre (or center) of a group, i.e. those elements which commute with every other element of the group. In particular, why does $G/Z(G)$ make sense?? – hardmath Nov 28 '12 at 10:38
This result is known as Grun's Lemma. Try to google it. – YACP Nov 28 '12 at 10:42
## 1 Answer
Note first that $gZ(G) \in Z(G/Z(G))$ if and only if $[g,x] \in Z(G)$ for all $x \in G$. Using this fact you can show that when $gZ(G) \in Z(G/Z(G))$, the map $x \mapsto [g,x]$ is a homomorphism $G \rightarrow Z(G)$. Since $Z(G)$ is Abelian, this homomorphism factors through $G/[G,G]$, which is trivial because $G$ is perfect; hence its kernel is all of $G$. Thus $g \in Z(G)$, which proves the claim.
-
+1 Very nicely and simply explained. – DonAntonio Nov 28 '12 at 12:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9464007616043091, "perplexity_flag": "head"} |
http://www.physicsforums.com/showthread.php?t=10690
## extremals - calc of variations
I am trying to find the extremal that minimizes $$\int_{0}^{1} \sqrt{y(1+y'^2)} dx$$
Because the integrand is not explicitly a function of the free variable x, I can use the shortcut (the Beltrami identity):
$$F - y'\,\frac{\partial F}{\partial y'} = \text{constant}$$
to solve for $y(x)$.
My problem is that after grinding through the algebra my y(x) is a function of itself, in other words I cannot isolate the variable I want to.
If anyone can help with this problem, or maybe argue that $y(x)$ can be isolated, it would be greatly appreciated.
Thanks in advance for the help!
(In case the formatting doesn't work, everything inside the integral is raised to the 1/2)
I'm not sure what you mean by "my y(x) is a function of itself". Assuming that by "F" you mean the integrand, $F = y^{1/2}(1+y'^2)^{1/2}$, then I get $\partial F/\partial y' = y^{1/2}y'(1+y'^2)^{-1/2}$, so $F - y'\,\partial F/\partial y' = \text{constant}$ becomes $y^{1/2}(1+y'^2)^{1/2} - y'^2 y^{1/2}(1+y'^2)^{-1/2} = \text{const}$. That's a rather complicated differential equation for y. You might be able to simplify it by multiplying the equation by $(1+y'^2)^{-1/2}$, then squaring both sides.
Right, I think that is the same result that I get. Upon the simplification that you mentioned, and then solving the diff eq, I think it turns out to be: $Kx - x\,y^{-1/2} + C = 2y^{1/2}$, where $C, K$ are constants; the $y$'s cannot be combined and therefore $y$ is a function of $y$. Hopefully I am just missing something here and there is an obvious solution. Thanks for the quick response.
I don't understand what you mean by "y's cannot be combined and therefore y is a function of y." The fact that you (or I!) cannot solve an equation doesn't mean that there is no solution.
If there is not a single solution for y as a function of x then there exist more than one solution to the differential equation. You might be able to determine which is correct for this problem when you apply the initial conditions to determine C.
For this particular equation, $Kx - x\,y^{-1/2} + C = 2y^{1/2}$, you might try multiplying through by $y^{1/2}$ to get
$(Kx+C)y^{1/2} - x - 2y = 0$. Now replace $y$ by $z^2$ so that the equation becomes $(Kx+C)z - x - 2z^2 = 0$, which is a quadratic equation in $z$. Use the quadratic formula to solve for $z$ and then square to get $y$.
Point well taken, good idea. I'll let you know if it works out.
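For completeness, here is one way the simplification works out (a sketch): the constant of the shortcut collapses to $y^{1/2}(1+y'^2)^{-1/2} = c$, i.e. $y = c^2(1+y'^2)$. This is separable, $dy/\sqrt{y/c^2 - 1} = \pm dx$, which integrates to $2c^2\sqrt{y/c^2-1} = \pm x + D$, so
$$y = c^2 + \frac{(x+D)^2}{4c^2},$$
a parabola; $y(x)$ can be isolated after all.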
Won't you get the same answer if you extremize $$\int_0^1y(1+y'^2)dx$$ instead? dhris
http://mathoverflow.net/questions/29750?sort=oldest

## Intuition and/or visualisation of Ito integral/Ito's lemma
Riemann sums can, for example, be visualized very intuitively by rectangles that approximate the area under the curve; see e.g. Wikipedia:Riemann sum.
Because of the unbounded total variation but bounded quadratic variation of Brownian paths, the Ito integral has an extra term (sometimes called the Ito correction term). The standard intuition for this is a Taylor expansion, sometimes Jensen's inequality.
But normally there is more than one intuition for a mathematical phenomenon, e.g. in Thurston's paper, "On Proof and Progress in Mathematics", he gives seven different elementary ways of thinking about the derivative.
My question
Could you give me some other intuitions for the Ito integral (and/or Ito's lemma as the so called "chain rule of stochastic calculus"). The more the better and from different fields of mathematics to see the big picture and connections. I am esp. interested in new intuitions and intuitions that are not so well known.
-
## 3 Answers
I find the intuitive explanation by Paul Wilmott particularly appealing.
Fix a small $h>0$. The stochastic integral $$\int_0^{h} f(W(t))\ dW(t)=\lim\limits_{N\to\infty}\sum\limits_{j=1}^{N} f\left(W(t_{j-1})\right)\left(W(t_{j})-W({t_{j-1}})\right),\quad t_j= h\frac{j}{N},$$ involves adding up an infinite number of random variables. Let's substitute every term $f\left(W(t_{j-1})\right)$ with its formal Taylor expansion. Then there are several contributions to the sum: those that are a sum of random variables and those that are a sum of the squares of random variables, and then there are higher-order terms.
Add up a large number of independent random variables and the Central Limit Theorem kicks in, the end result being a normally distributed random variable. Let's calculate its mean and standard deviation.
When we add up $N$ terms that are normal, each with a mean of $0$ and a standard deviation of $\sqrt{h/N}$, we end up with another normal, with a mean of $0$ and a standard deviation of $\sqrt{h}$. This is our $dW$. Notice how the $N$ disappears in the limit.
Now, if we add up the $N$ squares of the same normal terms then we get something which is normally distributed with a mean of $$N\left(\sqrt{\frac{h}{N}}\right)^2=h$$ and a standard deviation which is $h\sqrt{2/N}.$ This tends to zero as $N$ gets larger. In this limit we end up with, in a sense, our $dW^2(t)=dt$, because the randomness as measured by the standard deviation disappears leaving us just with the mean $dt$.
The higher-order terms have means and standard deviations that are too small, disappearing rapidly in the limit as $N\to\infty$.
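The $dW^2 = dt$ part of this picture is easy to check numerically (an illustrative Python sketch): the sum of the increments stays random of size $\sqrt h$, while the sum of their squares concentrates at $h$:

```python
import numpy as np

rng = np.random.default_rng(7)
h, N = 1.0, 10_000

for _ in range(6):
    dW = rng.normal(0.0, np.sqrt(h / N), size=N)   # increments over [0, h]
    print(f"sum dW = {dW.sum():+.4f}   sum dW^2 = {np.sum(dW**2):.4f}")
# sum dW fluctuates with standard deviation sqrt(h) = 1, while
# sum dW^2 stays near h = 1 with standard deviation h*sqrt(2/N) ~ 0.014.
```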
-
Robert Anderson used nonstandard analysis to generate Brownian motion from a finite random walk obtained from coin tosses, where "finite" means indexed by an infinite, non-standard natural number. The corresponding random walk has bounded variation under a non-standard bound. One can then do everything in terms of such a random walk, as has been done without rigorous justification before. The Itô integral can be obtained from a Stieltjes integral on the random walk; they differ only by an infinitesimal. An outline of the arguments can be found here. For the details, see:
MR0464380 (57 #4311) Anderson, Robert M. A non-standard representation for Brownian motion and Itô integration. Israel J. Math. 25 (1976), no. 1-2, 15--46.
-
I know this thread is already two years old, but, while preparing for a path integration exam, I arrived at an intuitive picture that sheds some light on the origin of the extra term. The picture represents an integral of a smooth function with respect to a concrete realization of Brownian motion. The sum of the areas of the green rectangles represents the difference between Ito (using the left point of each interval) and "anti-Ito" (using the right point of each interval) for sampling of the Brownian motion represented by the red line. Finer sampling leads to smaller rectangles, but they overlap more and more (because Brownian motion is not monotonic), so even if the area occupied by them tends to zero, the sum of their areas does not. This suggests (only suggests -- it is an upper bound on the difference, not a lower bound) that there is a "room" for Ito and "anti-Ito" to differ in their values. Stratonovich can be expected to lie somewhere in between.
Look at the following image:
-
(I included the image whose URL was cited.) – Joseph O'Rourke Jun 24 at 0:00
@Pavel: +1: Thank you, this is really interesting. Did you arrive at this picture yourself or did you see it in a paper, book etc? Then a reference would be great. Thank you again. – vonjd Jun 24 at 10:04
I arrived at it myself, so I am not aware of any reference to provide. It is certainly very heuristic. For example, some of the rectangles have actually negative sign. OTOH, if we know from a rigorous proof that the extra term is nonzero, this picture attributes that term to the fact that the error rectangles overlap. – Pavel Bažant Jun 26 at 10:00
http://mathoverflow.net/questions/34424/number-of-finite-simple-groups-of-given-order-is-at-most-2-is-a-classification/34432

## Number of finite simple groups of given order is at most 2 - is a classification-free proof possible?
This Wikipedia article states that the isomorphism type of a finite simple group is determined by its order, except that:
• $L_4(2)$ and $L_3(4)$ both have order 20160
• $O_{2n+1}(q)$ and $S_{2n}(q)$ have the same order for $q$ odd, $n > 2$
I think this means that for each integer g, there are 0, 1 or 2 simple groups of order g.
Do we need the full strength of the Classification of Finite Simple Groups to prove this, or is there a simpler way of proving it?
-
Here's a pedantic comment: my understanding (which might be wrong!) was that the final flourish of the classification was a computation which ruled out the existence of a 27th sporadic simple group; people knew what the order of this group would have to be, and enough of its character table (or perhaps the structure of its 2-Sylow or something) was constructed to get a contradiction and hence rule out its existence. Before this potential simple group had been ruled out one could almost certainly still prove the result you want. So strictly speaking the full strength isn't needed :-) – Kevin Buzzard Aug 3 2010 at 18:40
I would be surprised if one could prove without using the classification that there is a universal constant $N$ such that for each integer $g$, there are at most $N$ simple groups of order $g$. However, I would be very happy to be proven wrong! – Andy Putman Aug 3 2010 at 18:40
Chronologically speaking, there were several uniqueness proofs (of the form "there is exactly one simple group with order N, a centralizer of an involution of the form X, and possibly satisfying additional property P") that only appeared in the late 1980s, well after the proof of the classification was initially announced. These came at the end of chasing down lots of cases, so I would suspect the answer to your question is "essentially no" with anything resembling current technology. – S. Carnahan♦ Aug 3 2010 at 19:20
Sadly, I have nothing to say on the topic of the posting, but you may enjoy a related anecdote (involving A. Weil, but not a time machine) from J.S. Milne's webpage jmilne.org/math/apocrypha.html – algori Aug 3 2010 at 23:58
## 2 Answers
It is usually extraordinarily difficult to prove uniqueness of a simple group given its order, or even given its order and complete character table. In particular one of the last and hardest steps in the classification of finite simple groups was proving uniqueness of the Ree groups of type $^2G_2$ of order $q^3(q^3+1)(q-1)$, (for $q$ of the form $3^{2n+1}$) which was finally solved in a series of notoriously difficult papers by Thompson and Bombieri. Although they were trying to prove the group was unique, proving that there were at most 2 would have been no easier.
Another example is given in the paper by Higman in the book "finite simple groups" where he tries to characterize Janko's first group given not just its order 175560, but its entire character table. Even this takes several pages of complicated arguments.
In other words, there is no easy way to bound the number of simple groups of given order, unless a lot of very smart people have overlooked something easy.
-
Nice answer! A related comment: in Steinberg's 1967 Yale notes there is a discussion of known finite simple groups. "The group $H$ of D. Higman and Sims [...] Inspired by this construction, G. Higman then constructed his own group [$H'$] in terms of a very special geometry invented for the occasion. The two groups have the same order [44352000], and everyone seems to feel that they are isomorphic, but no one has yet proved this." – fherzig Aug 9 2010 at 13:27
I don't know though how long it took for the two groups to be proven isomorphic, I see that $H$ was only found in 1967. – fherzig Aug 9 2010 at 13:30
Emil Artin proved in 1955 in two papers that the above mentioned examples are the only instances of non-isomorphic finite simple groups having the same order. He proved the result only for the groups that were known till then. As new groups were being discovered, Jacques Tits took the responsibility of checking that there were no such further cases. For an exposition of this, one may look in 'Kimmerle and others, Proc. London Math. Soc. 60(3) (1990) 89–122'.
So, indeed the classification is used to some extent.
-
http://www.reference.com/browse/Nambu+dynamics
# Nambu mechanics
In mathematics, Nambu dynamics is a generalization of Hamiltonian mechanics involving multiple Hamiltonians. Recall that Hamiltonian mechanics is based upon the flows generated by a smooth Hamiltonian over a symplectic manifold. The flows are symplectomorphisms and hence obey Liouville's theorem. This was soon generalized to flows generated by a Hamiltonian over a Poisson manifold. In 1973, Yoichiro Nambu suggested a generalization involving Nambu-Poisson manifolds with more than one Hamiltonian.
In particular, we have a differentiable manifold $M$ and, for some integer $N \ge 2$, a smooth $N$-linear map from $N$ copies of $C^{\infty}(M)$ to itself which is completely antisymmetric, such that $\{h_1,\dots,h_{N-1},\cdot\}$ acts as a derivation,
$$\{h_1,\dots,h_{N-1},fg\}=\{h_1,\dots,h_{N-1},f\}\,g+f\,\{h_1,\dots,h_{N-1},g\},$$
and the generalized Jacobi identities hold:
$$\{f_1,\dots,f_{N-1},\{g_1,\dots,g_N\}\} = \{\{f_1,\dots,f_{N-1},g_1\},g_2,\dots,g_N\} + \{g_1,\{f_1,\dots,f_{N-1},g_2\},g_3,\dots,g_N\} + \cdots + \{g_1,\dots,g_{N-1},\{f_1,\dots,f_{N-1},g_N\}\},$$
i.e. $\{f_1,\dots,f_{N-1},\cdot\}$ acts as a (generalized) derivation over the $N$-fold product $\{\cdot,\dots,\cdot\}$.
There are $N-1$ Hamiltonians, $H_1,\dots,H_{N-1}$, generating a time flow
$$\frac{d}{dt}f=\{f,H_1,\dots,H_{N-1}\}.$$
The case where N = 2 gives a Poisson manifold.
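For orientation, a sketch of Nambu's original example (stated here as a standard illustration, not drawn from the text above): for $N = 3$ and $M = \mathbb{R}^3$ one may take the Jacobian determinant as the bracket,
$$\{f_1,f_2,f_3\} = \frac{\partial(f_1,f_2,f_3)}{\partial(x_1,x_2,x_3)} = \nabla f_1 \cdot (\nabla f_2 \times \nabla f_3),$$
and choosing the two Hamiltonians $H_1 = \tfrac12\left(L_1^2+L_2^2+L_3^2\right)$ and $H_2 = \tfrac12\left(L_1^2/I_1+L_2^2/I_2+L_3^2/I_3\right)$ recovers the Euler equations of a free rigid body from $\dot L_i = \{L_i, H_1, H_2\}$.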
Quantizing Nambu dynamics leads to interesting structures.
## References
Y. Nambu, Physical Review D 7, 2405 (1973).
http://mathhelpforum.com/math-challenge-problems/36391-simple-problem.html

# Thread:
1. ## simple problem??
Hi there
Just wondering if anyone can help me to solve the following problem.
I have to get a total of 28 by using all of the numbers 2, 3, 4, 5 in any order and only once.
I'd really appreciate it if anyone can get it out for me. I've gotten every number near 28 but not actually 28!!
Thanks
2. Originally Posted by michaela-donnelly
Hi there
Just wondering if anyone can help me to solve the following problem.
I have to get a total of 28 by using all of the numbers 2, 3, 4, 5 in any order and only once.
I'd really appreciate it if anyone can get it out for me. I've gotten every number near 28 but not actually 28!!
Thanks
$2^3 + (4 \times 5)$ ?
3. ## simple problem
Ahhh...thanks so much
4. Originally Posted by michaela-donnelly
Ahhh...thanks so much
You are welcome.
http://mathhelpforum.com/advanced-applied-math/49380-projectile-motion.html

# Thread:
1. ## Projectile Motion
I have been stuck on this for a while....
A projectile's launch speed is five times its speed at maximum height. Find the launch angle.
Anybody have any clue how to do this?
2. Originally Posted by Elite_Guard89
I have been stuck on this for a while....
A projectile's launch speed is five times its speed at maximum height. Find the launch angle.
Anybody have any clue how to do this?
Speed at maximum height $= U \cos \theta$ (since the vertical component of the velocity is equal to zero at maximum height).
Therefore $U = 5U \cos \theta$. Solve for $\theta$.
3. Originally Posted by mr fantastic
Speed at maximum height $= U \cos \theta$ (since the vertical component of the velocity is equal to zero at maximum height).
Therefore $U = 5U \cos \theta$. Solve for $\theta$.
I am now confused... what?
4. At maximum height the vertical component of the velocity is zero, so the speed there is just the (constant) horizontal component, $v = u\cos(\theta)$. Setting the launch speed equal to five times this, $u = 5u\cos(\theta)$, gives $\cos(\theta) = \tfrac{1}{5}$, i.e. $\theta = \arccos\left(\tfrac{1}{5}\right) \approx 78.5^\circ$. No further information, such as the value of $g$ or the launch and impact heights, is needed.
http://math.stackexchange.com/questions/272205/equality-of-unions-and-intersections

# equality of unions and intersections
Could you show me how to prove this?
Let $A_i \subset N$, $i \in I$, where $I$ is an arbitrary nonempty set. Prove that there exists an at most countable set $J \subset I$ such that:
$\bigcup_{i \in I}A_i = \bigcup_{j \in J} A_j$ and $\bigcap_{i \in I}A_i = \bigcap_{j \in J} A_j$.
-
## 1 Answer
HINT: I’m assuming that $N$ is $\Bbb N$, the set of natural numbers.
For each $n\in\bigcup_{i\in I}A_i$ choose an index $i(n)\in I$ such that $n\in A_{i(n)}$, and let $$J=\left\{i(n):n\in\bigcup_{i\in I}A_i\right\}\;;$$ can you finish the argument from here?
For the second result, apply the first result to $\{\Bbb N\setminus A_i:i\in I\}$ and use the De Morgan laws.
-
I'm really sorry. Could you explain it a bit further? I'm afraid I'm rather slow-thinking today. – Bilbo Jan 7 at 16:18
@Anna: Is it a problem seeing why $J$ is countable, or why $\bigcup_{i\in J}A_i=\bigcup_{i\in I}A_i$? Or both? – Brian M. Scott Jan 7 at 16:22
I see why J is countable. I just don't see why the unions are equal. If you showed me that, I guess I could do the intersections. – Bilbo Jan 7 at 17:02
@Anna: Suppose that $n\in\bigcup_{i\in I}A_i$; then $n\in A_{i(n)}\subseteq\bigcup_{i\in J}A_i$, so $\bigcup_{i\in I}A_i\subseteq\bigcup_{i\in J}A_i$. But $J\subseteq I$, so $\bigcup_{i\in J}A_i\subseteq\bigcup_{i\in I}A_i$, and the two unions are therefore equal. – Brian M. Scott Jan 7 at 17:09
http://en.wikipedia.org/wiki/Rotation_(mathematics) | # Rotation (mathematics)
Rotation of an object in two dimensions around a point $O.$
In geometry and linear algebra, a rotation is a transformation in a plane or in space that describes the motion of a rigid body around a fixed point. A rotation is different from a translation, which has no fixed points, and from a reflection, which "flips" the bodies it is transforming. A rotation and the above-mentioned transformations are isometries; they leave the distance between any two points unchanged after the transformation.
It is important to know the frame of reference when considering rotations, as all rotations are described relative to a particular frame of reference. In general, for any orthogonal transformation on a body in a coordinate system there is an inverse transformation which, if applied to the frame of reference, results in the body being at the same coordinates. For example, in two dimensions, rotating a body clockwise about a point while keeping the axes fixed is equivalent to rotating the axes counterclockwise about the same point while the body is kept fixed.
## Two dimensions
Main article: U(1)
A plane rotation around a point followed by another rotation around a different point results in a total motion which is either a rotation (as in this picture), or a translation.
A reflection against an axis followed by a reflection against a second axis not parallel to the first one results in a total motion that is a rotation around the point of intersection of the axes.
Only a single angle is needed to specify a rotation in two dimensions – the angle of rotation. To calculate the rotation two methods can be used, either matrix algebra or complex numbers. In each the rotation is acting to rotate an object counterclockwise through an angle θ about the origin.
### Matrix algebra
To carry out a rotation using matrices the point (x, y) to be rotated is written as a vector, then multiplied by a matrix calculated from the angle, $\theta$, like so:
$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$.
where (x′, y′) are the co-ordinates of the point after rotation, and the formulae for x′ and y′ can be seen to be
$\begin{align} x'&=x\cos\theta-y\sin\theta\\ y'&=x\sin\theta+y\cos\theta. \end{align}$
The vectors $\begin{bmatrix} x \\ y \end{bmatrix}$ and $\begin{bmatrix} x' \\ y' \end{bmatrix}$ have the same magnitude and are separated by an angle $\theta$ as expected.
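As a quick numerical sanity check of these formulas (a sketch assuming NumPy; the angle and point are arbitrary examples):

```python
import numpy as np

theta = np.pi / 6                                 # rotate counterclockwise by 30°
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v = np.array([1.0, 0.0])
v_rot = R @ v
print(v_rot)                                      # ≈ [0.866, 0.5]
print(np.linalg.norm(v), np.linalg.norm(v_rot))   # magnitudes agree
```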
### Complex numbers
Points can also be rotated using complex numbers, as the set of all such numbers, the complex plane, is geometrically a two dimensional plane. The point (x, y) in the plane is represented by the complex number
$z = x + iy \,$
This can be rotated through an angle θ by multiplying it by eiθ, then expanding the product using Euler's formula as follows:
$\begin{align} e^{i \theta} z &= (\cos \theta + i \sin \theta) (x + i y) \\ &= (x \cos \theta + i y \cos \theta + i x \sin \theta - y \sin \theta) \\ &= (x \cos \theta - y \sin \theta) + i (x \sin \theta + y \cos \theta) \\ &= x' + i y' , \end{align}$
which gives the same result as before:
$\begin{align} x'&=x\cos\theta-y\sin\theta\\ y'&=x\sin\theta+y\cos\theta. \end{align}$
Like complex numbers, rotations in two dimensions are commutative, unlike in higher dimensions. They have only one degree of freedom, as such rotations are entirely determined by the angle of rotation.[1]
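A one-line check (same sketch conventions as above) that the complex-number route lands on the same point as the matrix route:

```python
import numpy as np

theta = np.pi / 6
z = 1.0 + 0.0j                       # the point (1, 0) as a complex number
z_rot = np.exp(1j * theta) * z
print(z_rot.real, z_rot.imag)        # ≈ 0.866, 0.5 (matches the matrix result)
```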
## Three dimensions
Main article: SO(3)
Rotations in ordinary three-dimensional space differ from those in two dimensions in a number of important ways. Rotations in three dimensions are generally not commutative, so the order in which rotations are applied is important. They have three degrees of freedom, the same as the number of dimensions.
A three dimensional rotation can be specified in a number of ways. The most usual methods are as follows.
### Matrix algebra
Main article: Rotation matrix
As in two dimensions a matrix can be used to rotate a point (x, y, z) to a point (x′, y′, z′). The matrix used is a 3 × 3 matrix,
$\mathbf{A} = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$
This is multiplied by a vector representing the point to give the result
$\mathbf{A} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}$
The matrix A is a member of the three dimensional special orthogonal group, SO(3), that is it is an orthogonal matrix with determinant 1. That it is an orthogonal matrix means that its rows are a set of orthogonal unit vectors (so they are an orthonormal basis) as are its columns, making it simple to spot and check if a matrix is a valid rotation matrix. The determinant of a rotation matrix must be 1. The only other possibility for the determinant of an orthogonal matrix is -1, and this result means the transformation is a reflection, improper rotation or inversion in a point, i.e. not a rotation.
Matrices are often used for doing transformations, especially when a large number of points are being transformed, as they are a direct representation of the linear operator. Rotations represented in other ways are often converted to matrices before being used. They can be extended to represent rotations and translations at the same time using homogeneous coordinates. Transformations in this space are represented by 4 × 4 matrices, which are not rotation matrices but which have a 3 × 3 rotation matrix in the upper left corner.
The main disadvantage of matrices is that they are more expensive to calculate and do calculations with. Also in calculations where numerical instability is a concern matrices can be more prone to it, so calculations to restore orthonormality, which are expensive to do for matrices, need to be done more often.
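The validity check described above is easy to express in code; a minimal sketch (assuming NumPy, with example matrices of my own choosing):

```python
import numpy as np

def is_rotation_matrix(A, tol=1e-9):
    """Orthogonality (A Aᵀ = I) together with determinant +1."""
    A = np.asarray(A, dtype=float)
    orthogonal = np.allclose(A @ A.T, np.eye(A.shape[0]), atol=tol)
    return orthogonal and np.isclose(np.linalg.det(A), 1.0, atol=tol)

Rz = np.array([[0., -1., 0.],          # rotation by 90° about the z-axis
               [1.,  0., 0.],
               [0.,  0., 1.]])
print(is_rotation_matrix(Rz))          # True
print(is_rotation_matrix(-np.eye(3)))  # False: determinant -1, an inversion
```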
### Mobile frame rotations
Main article: Aircraft principal axes
The principal axes of rotation in space
One way of generalising the two dimensional angle of rotation is to specify three rotation angles, carried out in turn about the three principal axes. They individually can be labelled yaw, pitch, and roll, but in mathematics are more often known by their mathematical name, Euler angles. They have the advantage of modelling a number of physical systems such as gimbals, and joysticks, so are easily visualised, and are a very compact way of storing a rotation. But they are difficult to use in calculations as even simple operations like combining rotations are expensive to do, and suffer from a form of gimbal lock where the angles cannot be uniquely calculated for certain rotations.
### Euler rotations
Main article: Euler angles
Euler rotations of the Earth. Intrinsic (green), Precession (blue) and Nutation (red)
Euler rotations are a set of three rotations defined as the movement obtained by changing one of the Euler angles while leaving the other two constant. Euler rotations are never expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. They constitute a mixed axes of rotation system, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes and the third one is an intrinsic rotation around an axis fixed in the body that moves.
These rotations are called Precession, Nutation, and intrinsic rotation.
### Axis angle
Main article: Axis-angle representation
A rotation represented by an Euler axis and angle.
A second way of generalising the two dimensional angle of rotation is to specify an angle with the axis about which the rotation takes place. It can be used to model motion constrained by hinges and axles, and so is easily visualised, perhaps even more so than Euler angles. There are two ways to represent it:
• as a pair consisting of the angle and a unit vector for the axis, or
• as a vector obtained by multiplying the angle with this unit vector, called the rotation vector.
Usually the angle and axis pair is easier to work with, while the rotation vector is more compact, requiring only three numbers like Euler angles. But like Euler angles it is usually converted to another representation before being used.
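Applying an axis–angle rotation directly, without converting to a matrix, is usually done with Rodrigues' rotation formula, $v' = v\cos\theta + (k \times v)\sin\theta + k(k \cdot v)(1 - \cos\theta)$ for a unit axis $k$. A sketch (assuming NumPy; the test values are my own example):

```python
import numpy as np

def rotate_axis_angle(v, axis, theta):
    """Rodrigues' formula: rotate v by theta about the unit vector `axis`."""
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)          # make sure the axis is a unit vector
    v = np.asarray(v, dtype=float)
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1 - np.cos(theta)))

print(rotate_axis_angle([1, 0, 0], [0, 0, 1], np.pi / 2))   # ≈ [0, 1, 0]
```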
### Quaternions
Main article: Quaternions and spatial rotation
Quaternions are in some ways the least intuitive representation of three dimensional rotations. They are not the three dimensional instance of a general approach, like matrices; nor are they easily related to real world models, like Euler angles or axis angles. But they are more compact than matrices and easier to work with than all other methods, so are often preferred in real world applications.
A rotation quaternion consists of four real numbers, constrained so the length of the quaternion considered as a vector is 1. This constraint limits the degree of freedom of the quaternion to three, as required. It can be thought of as a generalisation of the complex numbers, by e.g. the Cayley–Dickson construction, and generates rotations in a similar way by multiplication. But unlike matrices and complex numbers two multiplications are needed:
$\mathbf{x' = qxq^{-1}},$
where q is the rotation quaternion, q−1 is its inverse, and x is the vector treated as a quaternion. The quaternion can be related to the rotation vector form of the axis angle rotation by the exponential map over the quaternions,
$\mathbf{q} = e^{\mathbf{v}/2},$
where v is the rotation vector treated as a quaternion.
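A bare-bones sketch of the sandwich product $\mathbf{x' = qxq^{-1}}$, with quaternions stored as $(w, x, y, z)$ arrays and $q = (\cos(\theta/2), \sin(\theta/2)\,n)$ for rotation by $\theta$ about a unit axis $n$ (assuming NumPy; the test rotation is my own example):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qrotate(v, axis, theta):
    n = np.asarray(axis, float) / np.linalg.norm(axis)
    q = np.concatenate([[np.cos(theta/2)], np.sin(theta/2) * n])
    q_inv = q * np.array([1, -1, -1, -1])   # conjugate = inverse for unit quaternions
    x = np.concatenate([[0.0], v])          # the vector as a pure quaternion
    return qmul(qmul(q, x), q_inv)[1:]

print(qrotate(np.array([1., 0., 0.]), [0, 0, 1], np.pi/2))   # ≈ [0, 1, 0]
```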
## Four dimensions
An orthogonal projection onto three dimensions of a hypercube being rotated in four-dimensional Euclidean space.
A general rotation in four dimensions has only one fixed point, the centre of rotation, and no axis of rotation. Instead the rotation has two mutually orthogonal planes of rotation, each of which is fixed in the sense that points in each plane stay within the planes. The rotation has two angles of rotation, one for each plane of rotation, through which points in the planes rotate. If these are ω1 and ω2 then all points not in the planes rotate through an angle between ω1 and ω2.
If ω1 = ω2 the rotation is a double rotation and all points rotate through the same angle so any two orthogonal planes can be taken as the planes of rotation. If one of ω1 and ω2 is zero, one plane is fixed and the rotation is simple. If both ω1 and ω2 are zero the rotation is the identity rotation.[2]
Rotations in four dimensions can be represented by 4th order orthogonal matrices, as a generalisation of the rotation matrix. Quaternions can also be generalised into four dimensions, as even multivectors of the four dimensional geometric algebra. A third approach, which only works in four dimensions, is to use a pair of unit quaternions.
Rotations in four dimensions have six degrees of freedom, most easily seen when two unit quaternions are used, as each has three degrees of freedom (they lie on the surface of a 3-sphere) and 2 × 3 = 6.
### Relativity
One application of this is special relativity, as it can be considered to operate in a four dimensional space, spacetime, spanned by three space dimensions and one of time. In special relativity this space is linear and the four dimensional rotations, called Lorentz transformations, have practical physical interpretations.
If a rotation is only in the three space dimensions, i.e. in a plane that is entirely in space, then this rotation is the same as a spatial rotation in three dimensions. But a rotation in a plane spanned by a space dimension and a time dimension is a hyperbolic rotation, a transformation between two different reference frames, which is sometimes called a "Lorentz boost". These transformations, which are not actual rotations, but squeeze mappings, are sometimes described with Minkowski diagrams. The study of relativity is concerned with the Lorentz group generated by the space rotations and hyperbolic rotations.[3]
## Generalizations
### Orthogonal matrices
The set of all rotation matrices described above, together with the operation of matrix multiplication, is the rotation group SO(3).
More generally, coordinate rotations in any dimension are represented by orthogonal matrices. The set of all orthogonal matrices of the n-th dimension which describe proper rotations (determinant = +1), together with the operation of matrix multiplication, forms the special orthogonal group: SO(n).
Orthogonal matrices have real elements. The analogous complex-valued matrices are the unitary matrices. The set of all unitary matrices in a given dimension n forms a unitary group of degree n, U(n); and the subgroup of U(n) representing proper rotations forms a special unitary group of degree n, SU(n). The elements of SU(2) are used in quantum mechanics to rotate spin.
## Footnotes
1. Lounesto 2001, p.30.
2. Lounesto 2001, pp. 85, 89.
3. Hestenes 1999, pp. 580–588.
## References
• Hestenes, David (1999). New Foundations for Classical Mechanics. Dordrecht: Kluwer Academic Publishers. ISBN 0-7923-5514-8.
• Lounesto, Pertti (2001). Clifford algebras and spinors. Cambridge: Cambridge University Press. ISBN 978-0-521-00551-7.
http://www.nag.com/numeric/CL/nagdoc_cl23/html/S/s20adc.html | # NAG Library Function Documentnag_fresnel_c (s20adc)
## 1 Purpose
nag_fresnel_c (s20adc) returns a value for the Fresnel Integral $C\left(x\right)$.
## 2 Specification
#include <nag.h>
#include <nags.h>
double nag_fresnel_c (double x)
## 3 Description
nag_fresnel_c (s20adc) evaluates an approximation to the Fresnel Integral
$C\left(x\right) = \int_0^x \cos\left(\frac{\pi}{2} t^2\right) \, dt .$
The function is based on Chebyshev expansions.
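A rough cross-check of the definition (not the NAG example program, just a sketch assuming SciPy, whose `scipy.special.fresnel` returns the pair $(S(x), C(x))$) is to compare direct quadrature of the integral with the library value:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import fresnel

x = 1.5
C_quad, _ = quad(lambda t: np.cos(np.pi * t**2 / 2), 0.0, x)
S_x, C_x = fresnel(x)            # returns (S(x), C(x))
print(C_quad, C_x)               # the two values agree to quadrature accuracy
```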
## 4 References
Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications
## 5 Arguments
1: x – double Input
On entry: the argument $x$ of the function.
## 6 Error Indicators and Warnings

None.
## 7 Accuracy
Let $\delta $ and $\epsilon $ be the relative errors in the argument and result respectively.
If $\delta $ is somewhat larger than the machine precision (i.e., if $\delta $ is due to data errors etc.), then $\epsilon $ and $\delta $ are approximately related by $\epsilon \simeq \left|x\mathrm{cos}\left(\pi {x}^{2}/2\right)/C\left(x\right)\right|\delta $.
However, if $\delta $ is of the same order as the machine precision, then rounding errors could make $\epsilon $ slightly larger than the above relation predicts.
For small $x$, $\epsilon \simeq \delta $ and there is no amplification of relative error.
For moderately large values of $x$, $\left|\epsilon \right|\simeq \left|2x\mathrm{cos}\left(\pi {x}^{2}/2\right)\right|\left|\delta \right|$ and the result will be subject to increasingly large amplification of errors. However, the above relation breaks down for large values of $x$ (i.e., when $1/{x}^{2}$ is of the order of the machine precision); in this region the relative error in the result is essentially bounded by $2/\pi x$.
Hence the effects of error amplification are limited and at worst the relative error loss should not exceed half the possible number of significant figures.
## 8 Further Comments

None.
## 9 Example
The following program reads values of the argument $x$ from a file, evaluates the function at each value of $x$ and prints the results.
### 9.1 Program Text
Program Text (s20adce.c)
### 9.2 Program Data
Program Data (s20adce.d)
### 9.3 Program Results
Program Results (s20adce.r)
http://johncarlosbaez.wordpress.com/2011/07/06/operads-and-the-tree-of-life/ | # Azimuth
## Operads and the Tree of Life
This week Lisa and I are visiting her 90-year-old mother in Montréal. Friday I’m giving a talk at the Université du Québec à Montréal. The main person I know there is André Joyal, an expert on category theory and algebraic topology. So, I decided to give a talk explaining how some ideas from these supposedly ‘pure’ branches of math show up in biology.
My talk is called ‘Operads and the Tree of Life’.
#### Trees
In biology, trees are very important:
So are trees of a more abstract sort: phylogenetic trees describe the history of evolution. The biggest phylogenetic tree is the ‘Tree of Life’. It includes all the organisms on our planet, alive now or anytime in the past. Here’s a rough sketch of this enormous tree:
Its structure is far from fully understood. So, biologists typically study smaller phylogenetic trees, like this tree of dog-like species worked out by Robert Wayne:
Abstracting still further, we can also think of a tree as a kind of purely mathematical structure, like this:
Trees are important in combinatorics, but also in algebraic topology. The reason is that in algebraic topology we get pushed into studying spaces equipped with enormous numbers of operations. We’d get hopelessly lost without a good way of drawing these operations. We can draw an operation $f$ with n inputs and one output as a little tree like this:
We can also draw the various ways of composing these operations. Composing them is just like building a big tree out of little trees!
An operation with n inputs and one output is called an n-ary operation. In the late 1960s, various mathematicians including Boardman and Vogt realized that spaces with tons of n-ary operations were crucial to algebraic topology. To handle all these operations, Peter May invented the concept of an operad. This formalizes the way operations can be drawn as trees. By now operads are a standard tool, not just in topology, but also in algebraic geometry, string theory and many other subjects.
But how do operads show up in biology?
When attending a talk by Susan Holmes on phylogenetic trees, I noticed that her work on phylogenetic trees was closely related to a certain operad. And when I discussed her work here, James Griffin pointed out that this operad can be built using a slight variant of a famous construction due to Boardman and Vogt: their so-called ‘W construction’!
I liked the idea that trees and operads in topology might be related to phylogenetic trees. And thinking further, I found that the relation was real, and far from a coincidence. In fact, phylogenetic trees can be seen as operations in a certain operad… and this operad is closely related to the way computational biologists model DNA evolution as a branching sort of random walk.
That’s what I’d like to explain now.
I’ll be a bit sketchy, because I’d rather get across the basic ideas than the technicalities. I could even be wrong about some fine points, and I’d be glad to talk about those in the comments. But the overall picture is solid.
#### Phylogenetic trees
First, let’s ponder the mathematical structure of a phylogenetic tree. First, it’s a tree: a connected graph with no circuits. Second, it’s a rooted tree, meaning it has one vertex which is designated the root. And third, the leaves are labelled.
I should explain the third part! For any rooted tree, the vertices with just one edge coming out of them are called leaves. If the root is drawn at the bottom of the tree, the leaves are usually drawn at the top. In biology, the leaves are labelled by names of species: these labels matter. In mathematics, we can label the leaves by numbers $1, 2, \dots, n,$ where $n$ is the number of leaves.
Summarizing all this, we can say a phylogenetic tree should at least be a leaf-labelled rooted tree.
That’s not all there is to it. But first, a comment. When you see a phylogenetic tree drawn by a biologist, it’ll pretty much always be a binary tree, meaning that as we move up any edge, away from the root, it either branches into two new edges or ends in a leaf. The reason is that while species often split into two as they evolve, it is less likely for a species to split into three or more new species all at once.
So, the phylogenetic trees we see in biology are usually leaf-labeled rooted binary trees. However, we often want to guess such a tree from some data. In this game, trees that aren’t binary become important too!
Why? Well, here another fact comes into play. In a phylogenetic tree, typically each edge can be labeled with a number saying how much evolution occurred along that edge. But as this number goes to zero, we get a tree that’s not binary anymore. So, we think of non-binary trees as conceptually useful ‘borderline cases’ between binary trees.
So, it’s good to think about phylogenetic trees that aren’t necessarily binary… and have edges labelled by numbers. Let’s make this into a formal definition:
Definition A phylogenetic tree is a leaf-labeled rooted tree where each edge not touching a leaf is labeled by a positive real number called its length.
By the way, I’m not claiming that biologists actually use this definition. I’ll write $\mathrm{Phyl}_n$ for the set of phylogenetic trees with $n$ leaves. This becomes a topological space in a fairly obvious way, where we can trace out a continuous path by continuously varying the edge lengths of a tree. But when some edge lengths approach zero, our graph converges to one where the vertices at ends of these edges ‘fuse into one’, leaving us with a graph with fewer vertices.
Here’s an example for you to check your understanding of what I just said. With the topology I’m talking about, there’s a continuous path in $\mathrm{Phyl}_3$ that looks like this:
These trees are upside-down, but don’t worry about that. You can imagine this path as a process where biologists slowly change their minds about a phylogenetic tree as new data dribbles in. As they change their minds, the tree changes shape in a continuous way.
For more on the space of phylogenetic trees, see:
• Louis Billera, Susan Holmes and Karen Vogtmann, Geometry of the space of phylogenetic trees, Advances in Applied Mathematics 27 (2001), 733-767.
#### Operads
How are phylogenetic trees related to operads? I have three things to say about this. First, they are the operations of an operad:
Theorem 1. There is an operad called the phylogenetic operad, or $\mathrm{Phyl},$ whose space of n-ary operations is $\mathrm{Phyl}_n.$
If you don’t know what an operad is, I’d better tell you now. They come in different flavors, and technically I’ll be using ‘symmetric topological operads’. But instead of giving the full definition, which you can find on the nLab, I think it’s better if I sketch some of the key points.
For starters, an operad $O$ consists of a topological space $O_n$ for each $n = 0,1,2,3 \dots$. The points in $O_n$ are called the n-ary operations of $O.$ You can visualize an n-ary operation $f \in O_n$ as a black box with $n$ input wires and one output wire:
Of course, this also looks like a tree.
We can permute the inputs of an n-ary operation and get a new n-ary operation, so we have an action of the permutation group $S_n$ on $O_n$. You visualize this as permuting input wires:
More importantly, we can compose operations! If we have an n-ary operation $f$, and n more operations, say $g_1, \dots, g_n$, we can compose $f$ with all the rest and get an operation called
$f \circ (g_1, \dots, g_n)$
Here’s how you should imagine it:
Composition and permutation must obey some laws, all of which are completely plausible if you draw them as pictures. For example, the associative law makes a composite of composites like this well-defined:
Now, these pictures look a lot like trees. So it shouldn’t come as a shock that phylogenetic trees are the operations of some operad $\mathrm{Phyl}.$ But let’s sketch why it’s true.
First, we can permute the ‘inputs’—meaning the labels on the leaves—of any phylogenetic tree and get a new phylogenetic tree. This is obvious.
Second, and more importantly, we can ‘compose’ phylogenetic trees. How do we do this? Simple: we glue the roots of a bunch of phylogenetic trees to the leaves of another and get a new one!
More precisely, suppose we have a phylogenetic tree with $n$ leaves, say $f$. And suppose we have $n$ more, say $g_1, \dots, g_n.$ Then we can glue the roots of $g_1, \dots, g_n$ to the leaves of $f$ to get a new phylogenetic tree called
$f \circ (g_1, \dots, g_n)$
Third and finally, all the operad laws hold. Since these laws all look obvious when you draw them using pictures, this is really easy to show.
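Here’s a toy encoding of this grafting operation (my own sketch, not anyone’s library): a tree is either an integer leaf label or a pair `(length, children)` for an internal vertex whose incoming edge has the given length. For brevity it skips the global relabelling of leaves and the convention that leaf edges carry no length.

```python
def compose(f, gs):
    """Graft the root of gs[i-1] onto leaf i of f (operadic composition)."""
    if isinstance(f, int):                 # a leaf: substitute the i-th tree
        return gs[f - 1]
    length, children = f
    return (length, [compose(c, gs) for c in children])

f  = (1.0, [1, 2])                         # a cherry: root edge, two leaves
g1 = (0.5, [1, 2])                         # another cherry
g2 = 1                                     # the identity tree: a bare leaf
print(compose(f, [g1, g2]))                # (1.0, [(0.5, [1, 2]), 1])
```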
If you’ve been paying careful attention, you should be worrying about something now. In operad theory, we think of an operation $f \in O_n$ as having n inputs and one output. For example, this guy has 3 inputs and one output:
But in biology, we think of a phylogenetic tree as having one input and n outputs. We start with one species (or other grouping of organisms) at the bottom of the tree, let it evolve and branch, and wind up with n of them!
In other words, operad theorists read a tree from top to bottom, while biologists read it from bottom to top.
Luckily, this isn’t a serious problem. Mathematicians often use a formal trick where they take an operation with n inputs and one output and think of it as having one input and n outputs. They use the prefix ‘co-’ to indicate this formal trick.
So, we could say that phylogenetic trees stand for ‘co-operations’ rather than operations. Soon this trick will come in handy. But not just yet!
#### The W construction
Boardman and Vogt had an important construction for getting new operads for old, called the ‘W construction’. Roughly speaking, if you start with an operad $O$, this gives a new operad $\mathrm{W}(O)$ whose operations are leaf-labelled rooted trees where:
1) all vertices except leaves are labelled by operations of $O,$ and a vertex with n input edges must be labelled by an n-ary operation of $O,$
and
2) all edges except those touching the leaves are labelled by numbers in $(0,1]$.
If you think about it, the operations of $\mathrm{W}(O)$ are strikingly similar to phylogenetic trees, except that:
1) in phylogenetic trees the vertices don’t seem to be labelled by operations of an operad,
and
2) we use arbitrary nonnegative numbers to label edges, instead of numbers in $(0,1]$.
The second point is a real difference, but it doesn’t matter much: if Boardman and Vogt had used nonnegative numbers instead of numbers in $(0,1]$ to label edges in the W construction, it would have worked just as well. Technically, they’d get a ‘weakly equivalent’ operad.
The first point is not a real difference. You see, there’s an operad called $\mathrm{Comm}$ which has exactly one operation of each arity. So, labelling vertices by operations of $\mathrm{Comm}$ is a completely trivial process.
As a result, we conclude:
Theorem 2. The phylogenetic operad is weakly equivalent to $\mathrm{W}(\mathrm{Comm})$.
If you’re not an expert on operads (such a person is called an ‘operadchik’), you may be wondering what $\mathrm{Comm}$ stands for. The point is that operads have ‘algebras’, where the abstract operations of the operad are realized as actual operations on some topological space. And the algebras of $\mathrm{Comm}$ are precisely commutative topological monoids: that is, topological spaces equipped with a commutative associative product!
#### Branching Markov processes and evolution
By now, if you haven’t fallen asleep, you should be brimming with questions, such as:
1) What does it mean that phylogenetic trees are the operations of some operad $\mathrm{Phyl}$? Why should we care?
2) What does it mean to apply the W construction to the operad $\mathrm{Comm}$? What’s the significance of doing this?
3) What does it mean that $\mathrm{Phyl}$ is weakly equivalent to $\mathrm{W}(\mathrm{Comm})$? You can see the definition of weak equivalence here, but it’s pretty technical, so it needs some explanation.
The answers to questions 2) and 3) take us quickly into fairly deep waters of category theory and algebraic topology—deep, that is, if you’ve never tried to navigate them. However, these waters are well-trawled by numerous experts, and I have little to say about questions 2) and 3) that they don’t already know. So given how long this talk already is, I’ll instead try to answer question 1). This is where some ideas from biology come into play.
I’ll summarize my answer in a theorem, and then explain what the theorem means:
Theorem 3. Given any continuous-time Markov process on a finite set $X$, the vector space $V$ whose basis is $X$ naturally becomes a coalgebra of the phylogenetic operad.
Impressive, eh? But this theorem is really just saying that biologists are already secretly using the phylogenetic operad.
Biologists who try to infer phylogenetic trees from present-day genetic data often use simple models where the genotype of each species follows a ‘random walk’. Also, species branch in two at various times. These models are called Markov models.
The simplest Markov model for DNA evolution is the Jukes–Cantor model. Consider a genome of fixed length: that is, one or more pieces of DNA having a total of $N$ base pairs. For example, this tiny genome has $N = 4$ base pairs, just enough to illustrate the 4 possible choices, which are called A, T, C and G:
Since there are 4 possible choices for each base pair, there are $4^N$ possible genotypes with $N$ base pairs. In the human genome, $N$ is about $3 \times 10^9$. So, there are about
$4^{3 \times 10^9} \approx 10^{1,800,000,000}$
genotypes of this length. That’s a lot!
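(A one-line check of that exponent: $\log_{10} 4^{3 \times 10^9} = 3 \times 10^9 \log_{10} 4$.)

```python
import math
print(3e9 * math.log10(4))   # ≈ 1.8e9, i.e. about 10^1,800,000,000 genotypes
```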
As time passes, the Jukes–Cantor model says that the human genome randomly walks through this enormous set of possibilities, with each base pair having the same rate of randomly flipping to any other base pair.
Biologists have studied many ways to make this model more realistic in many ways, but in a Markov model of DNA evolution we’ll typically have some finite set $X$ of possible genotypes, together with some random walk on this set. But the term ‘random walk’ is a bit imprecise: what I really mean is a ‘continuous-time Markov process’. So let me define that.
Fix a finite set $X$. For each time $t \in [0,\infty)$ and pair of points i, j in $X$, a continuous-time Markov process gives a number $T_{ij}(t) \in [0,1]$ saying the probability that starting at the point j at time zero, the random walk will go to the point i at time $t$. We can think of these numbers as forming an $X \times X$ square matrix $T(t)$ at each time $t$. We demand that four properties hold:
1) $T(t)$ depends continuously on $t$.
2) For all s, t we have $T(s) T(t) = T(s + t)$.
3) $T(0)$ is the identity matrix.
4) For all j and t we have:
$\sum_{i \in X} T_{i j}(t) = 1$.
All these properties make a lot of sense if you think a bit, though condition 2) says that the random walk does not change character with the passage of time, which would be false given external events like, say, ice ages. As far as math jargon goes, conditions 1)-3) say that $T$ is a continuous one-parameter semigroup, while condition 4) together with the fact that $T_{ij}(t) \in [0,1]$ says that at each time, $T(t)$ is a stochastic matrix.
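For the Jukes–Cantor process on $X = \{A, T, C, G\}$ these conditions are easy to verify numerically: $T(t) = e^{tQ}$, where $Q$ has every off-diagonal entry equal to some rate $\mu$ and diagonal entries $-3\mu$. A sketch (assuming NumPy and SciPy; the rate is an arbitrary example):

```python
import numpy as np
from scipy.linalg import expm

mu = 0.3                                      # an assumed mutation rate
Q = mu * (np.ones((4, 4)) - 4 * np.eye(4))    # columns of Q sum to zero
T = lambda t: expm(t * Q)

s, t = 0.7, 1.1
print(np.allclose(T(s) @ T(t), T(s + t)))     # 2) the semigroup law
print(np.allclose(T(0.0), np.eye(4)))         # 3) identity at t = 0
print(np.allclose(T(t).sum(axis=0), 1.0))     # 4) columns sum to 1
```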
Let $V$ be the vector space whose basis is $X$. To avoid getting confused, let’s write $e_i$ for the basis vector corresponding to $i \in X$. Any probability distribution on $X$ gives a vector in $V$. Why? Because it gives a probability $\psi_i$ for each $i \in X$, and we can think of these as the components of a vector $\psi \in V$.
Similarly, for any time $t \in [0,\infty)$, we can think of the matrix $T(t)$ as a linear operator
$T(t) : V \to V$
So, if we start with some probability distribution $\psi$ of genotypes, and let them evolve for a time $t$ according to our continuous-time Markov process, by the end the probability distribution will be $T(t) \psi$.
But species do more than evolve this way: they also branch! A phylogenetic tree describes a way for species to evolve and branch.
So, you might hope that any phylogenetic tree $f \in \mathrm{Phyl}_n$ gives a ‘co-operation’ that takes one probability distribution $\psi \in V$ as input and returns n probability distributions as output.
That’s true. But these n probability distributions will be correlated, so it’s better to think of them as a single probability distribution on the set $X^n.$ This can be seen as a vector in the vector space $V^{\otimes n}$, the tensor product of n copies of $V.$
So, any phylogenetic tree $f \in \mathrm{Phyl}_n$ gives a linear operator from $V$ to $V^{\otimes n}$. We’ll call it
$T(f) : V \to V^{\otimes n}$
because we’ll build it starting from the Markov process $T.$
Here’s a sketch of how we build it—I’ll give a more precise account in the next and final section. A phylogenetic tree is made of a bunch of vertices and edges. So, I just need to give you an operator for each vertex and each edge, and you can compose them and tensor them to get the operator $T(f)$:
1) For each vertex with one edge coming in and n coming out:
we need an operator
$V \to V^{\otimes n}$
that describes what happens when one species branches into n species. This operator takes the probability distribution we put in and makes n identical and perfectly correlated copies. To define this operator, we use the fact that the vector space $V$ has a basis $e_i$ labelled by the genotypes $i \in X.$ Here’s how the operator is defined:
$e_i \mapsto e_i \otimes \cdots \otimes e_i \in V^{\otimes n}$
2) For each edge of length $t$, we need an operator that describes a random walk of length $t.$ This operator is provided by our continuous-time Markov process: it’s
$T(t) : V \to V$
And that’s it! By combining these two kinds of operators, one for ‘branching’ and one for ‘random walking’, we get a systematic way to take any phylogenetic tree $f \in \mathrm{Phyl}_n$ and get an operator
$T(f) : V \to V^{\otimes n}$
In fact, these operators $T(f)$ obey just the right axioms to make $V$ into what’s called a ‘coalgebra’ of the phylogenetic operad. But to see this—that is, to prove Theorem 3—it helps to use a bit more operad technology.
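Before getting technical, here’s the recipe in code for the simplest nontrivial tree: evolve along a root edge of length $t_0$, branch in two, then evolve the lineages for times $t_1$ and $t_2$. (A sketch with arbitrary numbers; for simplicity I’m letting the leaf edges carry lengths too, which the definition above technically forbids.) The output is a correlated joint distribution on $X \times X$:

```python
import numpy as np
from scipy.linalg import expm

mu = 0.3
Q = mu * (np.ones((4, 4)) - 4 * np.eye(4))    # Jukes-Cantor again
T = lambda t: expm(t * Q)

psi = np.array([1.0, 0.0, 0.0, 0.0])          # start surely at genotype A
root = T(0.8) @ psi                           # random walk along the root edge

# branching vertex e_k -> e_k (x) e_k, then independent evolution of each lineage:
joint = np.einsum('ik,jk,k->ij', T(0.5), T(1.2), root)

print(joint.sum())                            # ≈ 1: still a probability distribution
print(joint)                                  # correlated distribution on X × X
```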
#### The proof
I haven’t even defined coalgebras of operads yet. And I don’t think I’ll bother. Why not? Well, while the proof of Theorem 3 is fundamentally trivial, it’s sufficiently sophisticated that only operadchiks would enjoy it without a lengthy warmup. And you’re probably getting tired by now.
So, to most of you reading this: bye! It was nice seeing you! And I hope you sensed the real point of this talk:
Some of the beautiful structures used in algebraic topology are also lurking in biology. These structures may or may not be useful in biology… but we’ll never know if we don’t notice them and say what they are! So, it makes sense for mathematicians to spend some time looking for them.
Now, let me sketch a proof of Theorem 3. It follows from a more general theorem:
Theorem 4. Suppose $V$ is an object in some symmetric monoidal topological category $C$. Suppose that $V$ is equipped with an action of the additive monoid $[0,\infty)$. Suppose also that $V$ is a cocommutative coalgebra. Then $V$ naturally becomes a coalgebra of the phylogenetic operad.
How does this imply Theorem 3? In Theorem 3, $C$ is the category of finite-dimensional real vector spaces. The action of $[0,\infty)$ on $V$ is the continuous-time Markov process. And $V$ becomes a cocommutative coalgebra because it’s a vector space with a distinguished basis, namely the finite set $X$. This makes $V$ into a cocommutative coalgebra in the usual way, where the comultiplication:
$\Delta: V \to V \otimes V$
‘duplicates’ basis vectors:
$\Delta : e_i \mapsto e_i \otimes e_i$
while the counit:
$\epsilon : V \to \mathbb{R}$
‘deletes’ them:
$\epsilon : e_i \mapsto 1$
These correspond to species splitting in two and species going extinct, respectively. (Biologists trying to infer phylogenetic trees often ignore extinction, but it’s mathematically and biologically natural to include it.) So, all the requirements are met to apply Theorem 4 and make $V$ into a coalgebra of the phylogenetic operad.
But how do we prove Theorem 4? It follows immediately from Theorem 5:
Theorem 5. The phylogenetic operad $\mathrm{Phyl}$ is the coproduct of the operad $\mathrm{Comm}$ and the additive monoid $[0,\infty)$, viewed as an operad with only 1-ary operations.
Given how coproducts work, this means that an algebra of both $\mathrm{Comm}$ and $[0,\infty)$ is automatically an algebra of $\mathrm{Phyl}$. In other words, any commutative algebra with an action of $[0,\infty)$ is an algebra of $\mathrm{Phyl}$. Dualizing, it follows that any cocommutative coalgebra with an action of $[0,\infty)$ is a coalgebra of $\mathrm{Phyl}.$ And that’s Theorem 4!
But why is Theorem 5 true? First of all, I should emphasize that the idea of using it was suggested by Tom Leinster in our last blog conversation on the phylogenetic operad. And in fact, Tom proved a result very similar to Theorem 5 here:
• Tom Leinster, Coproducts of operads, and the W-construction, 14 September 2000.
He gives an explicit description of the coproduct of an operad $O$ and a monoid, viewed as an operad with only unary operations. He works with non-symmetric, non-topological operads, but his ideas also work for symmetric, topological ones. Applying his ideas to the coproduct of $\mathrm{Comm}$ and $[0,\infty)$, we see that we get the phylogenetic operad!
And so, phylogenetic trees turn out to be related to coproducts of operads. Who’d have thought it? But we really don’t have as many fundamentally different ideas as you might think: it’s hard to have new ideas. So if you see biologists and algebraic topologists both drawing pictures of trees, you should expect that they’re related.
This entry was posted on Wednesday, July 6th, 2011 at 6:20 pm and is filed under biology, mathematics, networks.
### 39 Responses to Operads and the Tree of Life
1. Jacob Biamonte says:
Wow John, It’s me Jake, from CQT, I’m in Waterloo Ontario right now — I thought for a second you got on a plane to track me to down to get back to work on those rabbit pictures. I’ll be back in Singapore in a month. Don’t forget about me.
I wanted to mention to others something that we noticed when you were teaching me stuff at CQT some time ago. You explained to me what an operad was, even drew a slick table showing how these tie to other structures. I have that table still. The two things I found the most interesting (warning seemingly way off topic from the post) are:
* In classical switching function theory, operads are called read-once formulas. These are precisely the class of switching functions built from gate sets with no fan-in — that is, the gate set can be composed only to build a tree.
* In quantum network theory, they correspond to MERA networks, that can be contracted efficiently. Each of the three-legged tensors (in this case) is an isometry. To measure anything in quantum mechanics you take an inner product, and when you take an inner product, you contract a state (here represented as a tree) with a conjugated copy of itself. Each of the three-legged tensors will vanish when contracted to its isometric pair. This means the network can be contracted, and hence values can be measured efficiently.
* And from this, we can conclude that read-once formulas are necessarily satisfiable. That means they correspond to constraint equations that can be satisfied.
This is a bit off topic, but every time I hear about operads I think about these cases. cheers!
• John Baez says:
Hi, Jake!
You may think I’ve been slacking off on our network theory project. It’s sort of true. But I claim that this operad stuff is just another aspect of that project. As usual, I’m taking ideas from quantum theory and adapting them to stochastic processes. Amplitudes become probabilities, unitary matrices become stochastic matrices, $L^2$ becomes $L^1$, and so on. Even the Fock space idea shows up, though I didn’t make that explicit here.
In particular, the pictures of trees here can be seen as Feynman diagrams! But these are a bit different than the ones we’ve been talking about. The edges are ‘propagators’ where time evolution is described by the continuous-time Markov process $T(t).$ The vertices are ‘interactions’, but only of a very limited sort: the vector space $V = L^1(X)$ is a cocommutative coalgebra, so it comes with a ‘duplication’ operator
$\Delta : V \to V \otimes V$
and a ‘deletion’ operator
$\epsilon : V \to \mathbb{R}$
These correspond to ‘speciation’ (where a species splits in two) and ‘extinction’.
I haven’t completely figured out how this new stuff is connected to stochastic Petri nets, but it’s clearly part of the same story.
2. Dan says:
This may seem a bit nit-picky when you’re obviously more concerned with the big picture, but I’m confused so I’ll go ahead and ask and hope you’ll indulge me in what are probably stupid questions. See, I had to deal with C0-semigroups pretty extensively in a past life, but I was never fortunate enough to have ones that were either Markov or acting on a finite dimensional space, so I’m struggling a bit with your condition 4 above, namely, $\sum_{i \in X} T_{ij} (t)=1$ for all j and t, where $T_{ij}(t)$ is defined to be the probability of being at j if you started at i at t=0. If I try to put this into words, I get something like: “If you find yourself at j at time t, then you must have come from some i at time 0, and this is true for all j.” Is that a fair summary?
Personally, I tend to think in terms of initial conditions with C0-semigroups, so I would expect the condition $\sum_{j \in X} T_{ij} (t)=1$ for all i and t to hold, which I would summarize as “If you start at i, then you must end up at some j at time t, and this is true for all i and t.” Does this condition follow from the other in some way? Or is it meant to be a trivial consequence of your definition of $T_{ij}(t)$ as “the probability that starting at the point i at time zero, the random walk will go to the point j at time t?”
Thanks.
Dan.
• John Baez says:
Dan wrote:
I’m struggling a bit with your condition 4 above, namely, $\sum_{i \in X} T_{ij} (t)=1$ for all j and t, where $T_{ij}(t)$ is defined to be the probability of being at j if you started at i at $t=0$.
Whoops—you caught a typo. I switched around an i and a j here. Actually for me $T_{ij}(t)$ is defined to be the probability of being at i if you started at j at $t=0$.
Some other people may use the opposite convention—this might add to your confusion.
I’ll fix this. Thanks!
Does this condition follow from the other in some way?
No, these conditions are independent.
People like to talk about left stochastic matrices, which are matrices of nonnegative numbers where the columns sum to 1, and also right stochastic matrices, which are matrices of nonnegative numbers where the rows sum to 1.
One of these describes random processes where starting at any point you have a total probability 1 of going to some point or other.
The other describes random processes where ending at any point you have a total probability 1 of coming from some point or other.
However, which is which depends on a convention, namely whether you multiply column vectors by matrices on the left, or row vectors by matrices on the right.
In this talk I’m interested in processes where starting at any point you have a total probability 1 of going to some point or other. And, I’m multiplying column vectors by matrices on the left.
So, for all I care, there could be genotypes that our random walk will never get to—that’s okay. But every genotype has to go somewhere as it randomly walks around.
A matrix that’s both left and right stochastic is called ‘doubly stochastic’.
This is a good intro:
• Stochastic matrix, Wikipedia.
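For concreteness, here’s a tiny numerical illustration of the convention used in this talk (made-up numbers, assuming NumPy):

```python
import numpy as np

T = np.array([[0.9, 0.2],
              [0.1, 0.8]])        # left stochastic: each column sums to 1
psi = np.array([0.3, 0.7])        # a probability distribution as a column vector
print(T.sum(axis=0))              # [1. 1.]
print((T @ psi).sum())            # 1.0 (total probability is conserved)
```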
3. Ruchira Datta says:
Hi John,
Thanks for the enlightening discussion about operads and their applications to phylogenetic trees. I see several theorems about the correspondence between operads and phylogenetic trees, but I would really be interested in how the correspondence is used. I.e., is there a theorem that is stated only in terms of general operads, which we can pull through the correspondence, to get a theorem stated only in terms of phylogenetic trees and Markov processes?
You may be interested in the book Algebraic Statistics for Computational Biology and in particular in this review: “The Mathematics of Phylogenomics”.
4. John Baez says:
Ruchira wrote:
I would really be interested in how the correspondence is used.
Me too! But I just made it up; I haven’t used it for anything.
I.e., is there a theorem that is stated only in terms of general operads, which we can pull through the correspondence, to get a theorem stated only in terms of phylogenetic trees and Markov processes?
I don’t know yet—I haven’t gotten that far. Basically what happened is that I listened to Susan Holmes talk about the geometry of the space of n-leaved phylogenetic trees, and realized that these spaces form an operad. It seemed awfully familiar from my studies of algebraic topology. Then James Griffin and Tom Leinster figured out exactly how an algebraic topologist, or operad theorist, would think about this operad. Operads have ‘algebras’, so I then started wondering what the algebras of this operad were. I soon realized that the coalgebras of this operad were the branching Markov processes beloved by computational biologists working on phylogenetic trees! And that’s where I am now.
In short, so far I’ve noticed that computational biologists are using some quite interesting operads without knowing it. I don’t know yet if it will help them to know it! But I think these connections are always worth noting. If X is an example of a Y, I think it’s always worth pointing it out, unless it’s so obvious it doesn’t need pointing out. These connections have a tendency to pay off, sometimes in unexpected ways.
It’s possible that operad theory could turn out to be useful in studying phylogenetic trees. That would be great. But it’s also possible that work on phylogenetic trees could inspire new work on operads which then gets applied to something else, like algebraic geometry or string theory! That would be cool too.
For people who don’t know operads, this stuff I’m talking about is probably not ‘ready for prime time’. But for people who do, it’ll be a shockingly practical example of an operad.
5. David Corfield says:
Do biologists have a way to pass from trees in the enormous, but simple, space of genotypes to phylogenetic trees? Markov processes for the former may involve enormous transition matrices, but at least one could use a simple rate of genetic mutation at each site. But then interpreting that model in terms of phenotypes, speciation, etc. can’t be easy.
• John Baez says:
David wrote:
Do biologists have a way to pass between trees in the enormous, but simple, space of genotypes to phylogenetic trees?
I’m no expert on this, so take the following with a huge grain of salt, and hope that someone more knowledgeable chimes in:
In most of what I’ve seen, they aren’t observing trees in the space of genotypes: they’re observing the genotypes at the leaves of those trees and trying to infer the rest of the tree. In other words, trying to guess the past given the present. There is a huge amount of work devoted to that. A bunch of it uses Markov process models combined with maximum likelihood or Bayesian methods to guess the best tree. They don’t usually use the whole genotype: just some parts of the DNA. There’s a lot of software for doing this kind of inference, like PhyML, FastML, RAxML and MrBayes.
And in most of the work I’ve seen, the tree that’s inferred is simply called the phylogenetic tree. One assumes the edges are taxonomically significant units (e.g. species) and the vertices are taxonomically significant branching events (e.g. speciation events).
I don’t know if that answers your question.
By the way, I said that mostly we observe DNA data now and try to infer the tree in the past, but one exception I know involves the HIV virus. This mutates fast enough, and has been studied carefully enough, that people have been able to watch it evolve as time passes. Viruses don’t have ‘species’ in the conventional sense of being able to interbreed, but there are medically significant branches forming in the phylogenetic tree as we watch.
Another exception would involve situations where we have access to fossil DNA, like Neanderthal or woolly mammoth DNA.
And by the way, the whole ‘tree’ concept is a simplification of reality. For example, humans and Neanderthals interbred, and among bacteria there’s a lot of ‘lateral transmission’ of genetic material via plasmids and the like.
• carlo rovelli says:
Just a micro comment: did you notice the striking resemblance between this image (HIV phylogenetic tree, evolving as we observe) and Charles Darwin’s very first drawing of the “tree of life” in his notebook (in the Museum of Natural history in NYC)?:
Did they do so on purpose?
• John Baez says:
Hey, Carlo—it’s fun to see you here!
I don’t know if they made that tree look similar on purpose. At first I was going to say that all trees look sort of similar, but I think you’re right, there’s something more than that going on here, either coincidentally or not.
By the way, nobody except me can post images here, but if someone includes a link to an interesting image I can edit their comment to make it show up… and that’s what I just did. Of course I’m too lazy to do this very often.
6. Graham says:
Here is a link to a recent conference “Phylogenetics: New Data, New Phylogenetic Challenges”
http://www.newton.ac.uk/programmes/PLG/plgw05p.html
The talks were recorded. A lot of them are mathematical, and some are quite abstract, but I haven’t spotted anyone using operads yet.
• John Baez says:
Thanks! No operads in this one, but it offers us operadchiks a lot of food for thought:
• K. St. John, Exploring treespace.
Abstract: Phylogenies, or evolutionary histories, play a central role in modern biology, illustrating the interrelationships between species, and also aiding the prediction of structural, physiological, and biochemical properties. The reconstruction of the underlying evolutionary history from a set of morphological characters or biomolecular sequences is difficult since the optimality criteria favored by biologists are NP-hard, and the space of possible answers is huge. The number of possible phylogenetic trees for n taxa is (2n − 5)!!. Due to the hardness and the large number of possible answers, clever searching techniques and heuristics are used to estimate the underlying tree. We explore the underlying space of trees, under different metrics, in particular the nearest-neighbor-interchange (NNI), subtree-prune-and-regraft (SPR), tree-bisection-and-reconnection (TBR), and Robinson-Foulds (RF) distances.
The trees here have no labellings on edges, so the metrics are metrics on discrete sets… but still interesting!
7. nad says:
John wrote:
By the way, I said that mostly we observe DNA data now and try to infer the tree in the past, but one exception I know involves the HIV virus. This mutates fast enough, and has been studied carefully enough, that people have been able to watch it evolve as time passes.
Interesting, do you have any documentation about that? I could imagine that something like an educational video could come in handy for showing all the mutations.
• John Baez says:
Hi, Nad. I don’t have any really nice graphics illustrating HIV evolution, but here’s a good general overview of HIV evolution:
• Andrew Rambaut, David Posada, Keith A. Crandall and Edward C. Holmes, The causes and consequences of HIV evolution.
My friend Chris Lee is an expert on this stuff:
• Chris Lee, HIV positive selection mutation database.
and this database allowed him to calculate in detail the effect of various drugs on HIV evolution.
• Tom Leinster says:
There’s a lot about the evolution of HIV in the opening chapter, or perhaps the introduction, of Steve Jones’s book Almost Like a Whale. (In the US it goes by the title of Darwin’s Ghost. It’s a retelling of the Origin of Species.) If you’re after a non-technical description, you might like that.
Another virus that evolves very fast is influenza. That’s one of the things that makes vaccination difficult: the vaccines constantly have to be updated.
• nad says:
Thanks for the references John and Tom.
Another virus that evolves very fast is influenza. That’s one of the things that makes vaccination difficult: the vaccines constantly have to be updated.
Yes and in the end the vaccine might be so poisonous that it kills the sick one instead of the flu.
• John Baez says:
I think fears of vaccines are greatly overstated. I don’t know of anyone dying from a ‘poisonous vaccine’, though almost everything happens at least once.
• reperiendi says:
Here’s a nice article on it:
http://www.bunniestudios.com/blog/?p=353
8. Roger Witte says:
Excellent post – it must have been an excellent talk too!
I do take slight issue with your idea that Operad theorists have much to say (as yet) on issues ‘what does it mean …’
The point being that there are two distinct interesting questions with the same English wording
1) What are the mathematical definitions and consequences …. This is what you are thinking
2) What does it tell us about biology? A question that we should start to address now :)
• John Baez says:
Roger wrote:
Excellent post – it must have been an excellent talk too!
Thanks! But the talk is tomorrow, so while it may be predestined for excellence, I can’t be sure of that yet.
I do take slight issue with your idea that operad theorists have much to say (as yet) on issues ‘what does it mean …’
The point being that there are two distinct interesting questions with the same English wording:
1) What are the mathematical definitions and consequences … This is what you are thinking.
Actually I was thinking about the mathematical meaning, which is not always easily apparent, even if one has seen the definitions and theorems. I distinguish between being able to state results and having them fully integrated into one’s way of thinking. Only the latter lets one make significant progress.
In particular, for these questions:
2) What does it mean to apply the W construction to the operad $\mathrm{Comm}$? What’s the significance of doing this?
3) What does it mean that $\mathrm{Phyl}$ is weakly equivalent to $\mathrm{W}(\mathrm{Comm})$? You can see the definition of weak equivalence here, but it’s pretty technical, so it needs some explanation.
it takes a nontrivial change of worldview to appreciate why the W construction and weak equivalence are important and inevitable notions. They’re both aspects of the ‘homotopification’ of mathematics, where instead of demanding that equations hold, one demands that equations hold ‘up to coherent homotopy’. Operad theorists have a lot to say about what this means and why it’s important.
But yes, then there’s another layer: what does all this have to with biology? I doubt anyone knows.
9. Urs Schreiber says:
Hey John,
I am thinking that your statement “Phyl is weakly equivalent to W(Comm)” is way weaker than what you would want to say. This is tantamount to saying “Phyl is weakly equivalent to the point.”
While true, it seems to me that the whole point you would like to make is that Phyl is actually isomorphic to W(Comm) (degreewise homeomorphic).
• John Baez says:
Urs wrote:
I am thinking that your statement “Phyl is weakly equivalent to W(Comm)” is way weaker than what you would want to say. This is tantamount to saying “Phyl is weakly equivalent to the point.”
Hmm, I guess you’re right. Let me make sure I understand. Is every topological operad whose space of $n$-ary operations is contractible weakly equivalent to the terminal topological operad? The action of $S_n$ on the space of $n$-ary operations is irrelevant here?
While true, it seems to me that the whole point you would like to make is that Phyl is actually isomorphic to W(Comm) (degreewise homeomorphic).
I’m afraid something slightly weaker is true. The lengths of edges in the trees that give operations of W(Comm) are positive real numbers less than or equal to 1, while for Phyl they are positive real numbers. Since (0,1] isn’t homeomorphic to (0,∞), I’m afraid Phyl isn’t degreewise homeomorphic to W(Comm). They’re “almost homeomorphic” in some way that’s hard for me to state clearly.
• John Baez says:
As spaces, $\mathrm{W}(\mathrm{Comm})_n$ is some sort of compactification of $\mathrm{Phyl}_n$.
• Urs Schreiber says:
Is every topological operad whose space of n-ary operations is contractible weakly equivalent to the terminal topological operad?
Yes. There is necessarily a unique morphism to Comm, and by assumption this is then degreewise a weak homotopy equivalence of topological spaces. By this theorem these are weak equivalences of topological operads.
The action of $S_n$ on the space of n-ary operations is irrelevant here?
The action needs to be respected by the morphism of course, but for morphisms to Comm this is automatic. But given a morphism, its property of being a weak equivalence is just that it is degreewise a weak homotopy equivalence of topological spaces.
Notice that every topological operad which is 1. degreewise contractible and 2. “Sigma cofibrant” (has free permutation group action) is an $E_\infty$ operad.
while for Phyl they are positive real numbers
Oh, okay. Right, so it slightly fails to be isomorphic to W(Comm) then. But not in a very interesting way. As you say in the second message: you could just add $\infty$ as an admissible edge length, if you wanted to “fix” this.
• John Baez says:
Urs wrote:
As you say in the second message: you could just add $\infty$ as an admissible edge length, if you wanted to “fix” this.
When it comes to biology, adding $\infty$ as an admissible edge length for phylogenetic trees amounts to requiring that our Markov process settles down to some limit as $t \to \infty$. More precisely: an algebra of the phylogenetic operad extends to an algebra of the larger operad that allows infinite edge lengths iff $\lim_{t \to \infty} T(t) \Psi$ exists for all $\Psi$, where $T$ is the corresponding Markov process.
This isn’t true for all Markov processes, but it is for the ones that show up in Markov models of DNA evolution. So that’s okay. Then there’s another little nuance…
Boardman and Vogt’s W construction requires choosing a way to make $[0,1]$ into a topological monoid. I don’t have their book on me, so I don’t remember how they do it, but other sources seem to suggest two methods. One is:
$x \ast y = x + y - xy$
and the other is:
$x \circ y = x \, \mathrm{max} \, y$
The second one has the small advantage of being obviously associative, but the first is also associative (I just checked!), and it seems to have a bigger advantage, as far as I’m concerned.
Namely, I believe (but haven’t bothered to check) that the $\ast$ operation makes $[0,1]$ isomorphic, as a topological monoid, to $[0,\infty]$ with its usual addition (defined so that $\infty + x = x + \infty = \infty$).
If so, we can embed $[0,\infty)$ with its usual addition as a dense submonoid of $[0,1]$ with the $\ast$ operation.
And if so, this should make the phylogenetic operad into a dense suboperad of $\mathrm{W}(\mathrm{Comm}),$ where the inclusion is a homotopy equivalence.
That’s a fairly crisp statement of how they’re ‘almost isomorphic’.
And of course even if Boardman and Vogt’s way of making the closed interval into a topological monoid doesn’t make it isomorphic to $[0,\infty]$ with its usual addition, such a way exists.
• Todd Trimble says:
John wrote:
I believe (but haven’t bothered to check) that the $\ast$ operation makes $[0,1]$ isomorphic, as a topological monoid, to $[0,\infty]$ with its usual addition
We have
$\begin{aligned} x \ast y &= 1 - (1-x)(1-y) \\ &= \phi^{-1}(\phi(x)\phi(y)) \end{aligned}$
where $\phi(x) = 1-x$. So $\phi$ is a homomorphism from $\ast$ on $[0, 1]$ to ordinary multiplication on $[0, 1]$. And $[0, 1]$ under multiplication is isomorphic to $[0, \infty]$ under addition via $x \mapsto -\log(x)$. So, you were right.
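(A quick numerical spot-check of this isomorphism, not from the original thread; the helper names below are invented, and the composite map checked is $\psi(x) = -\log(1-x)$.)

```python
# Spot-check that psi(x) = -log(1 - x) is a monoid homomorphism from
# ([0,1), *) with x * y = x + y - x*y to ([0, infinity), +).
import math
import random

star = lambda x, y: x + y - x * y
psi = lambda x: -math.log(1.0 - x)

random.seed(0)
for _ in range(5):
    x, y = random.random(), random.random()
    assert abs(psi(star(x, y)) - (psi(x) + psi(y))) < 1e-12
print("psi(x * y) = psi(x) + psi(y) on random samples")
```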
• John Baez says:
Great, thanks!!!
10. John Baez says:
My talk seemed to go well. Afterwards, André Joyal made some interesting remarks. I’d like to write them down here so I don’t forget them.
First, a nice simple observation. He said that the phylogenetic operad formalizes a notion of time, ‘branching time’, which marches forward like the usual notion of time (described by the real numbers) except for certain points where it splits.
A species can split in two and evolve in two different ways. We wondered in what other situations the concept of ‘branching time’ could apply. An obvious candidate is the branching scenario in some versions of the ‘many-worlds interpretation’ of quantum mechanics, but I don’t know how to make this precise.
More technically, we talked about how the phylogenetic operad is related to Boardman and Vogt’s W construction.
My talk discussed the coproduct of the operads $\mathrm{Comm}$ and $[0,\infty)$. More generally, we can take the coproduct $O + M$ of any operad $O$ and any monoid $M$. The operations in $O + M$ can be drawn as (roughly) trees with vertices labelled by operations in $O$ and edges labelled by elements of $M.$
If we take $M = [0,\infty)$ this gives the edges nonnegative real ‘lengths’.
But we can also take $M = \mathbb{N}$. Then the edges have integral lengths. We can draw this by marking each edge with a number of ‘ticks’: little marks representing the ticks of a clock.
The operad $O + \mathbb{N}$ is interesting because it’s the operad freely generated by the operad $O$ and an extra unary operation. Let’s call this unary operation $\mathbf{tick}.$ In his book on algebraic set theory with Moerdijk, Joyal considered the monad generated by a monad and an extra unary operation. This idea was previously studied by Bénabou and Jibladze.
Moving in this circle of ideas, we saw that the operad $O + \mathbb{N}$ contains a copy of $O$ but also two copies of the free operad on the underlying collection of operations of $O$.
One way to get the latter is to take each operation $f$ of $O$ and postcompose it with the operation $\mathbf{tick}.$ The resulting operations
$\mathbf{tick} \circ f$
generate a copy of the free operad on the underlying collection of operations of $O$. The ticks serve as ‘parentheses’ separating the operations of $O$.
Another way is to take each operation $f$ of $O$ and precompose it with a bunch of copies of the operation $\mathbf{tick}.$ The resulting operations
$f \circ (\mathbf{tick}, \dots, \mathbf{tick})$
generate a second copy of the free operad on the underlying collection of operations of $O$.
The same trick works for $O + [0,\infty)$, taking our ‘tick’ to be any positive number.
The same trick also works for $O + [0,\infty]$. Here $[0,\infty]$ becomes a monoid with the obvious concept of addition, such that
$x + \infty = \infty + x = \infty$
As we’ve seen, $[0,\infty]$ is isomorphic to $[0,1]$ with the product
$x \ast y = x + y - x y$
From what we’ve seen earlier in this conversation, it follows that the operad $O + [0,\infty]$ is closely related to the Boardman–Vogt construction $\mathrm{W}(O)$. I now understand this a bit better. Since
$\infty + \infty = \infty$
there’s yet another nice way that the operad $O + [0,\infty]$ contains a copy of the free operad on the underlying set of operations of $O$. Namely, we take each operation $f$ of $O$ and both precompose and postcompose it with $\infty$:
$\mathbf{tick} \circ f \circ (\mathbf{tick}, \dots, \mathbf{tick})$
I believe Boardman and Vogt use this trick in their book. Of course they talk about $[0,1]$ instead of the isomorphic monoid $[0,\infty]$, but that doesn’t matter. What matters is that they’re sandwiching the operations in $O$ with an idempotent unary operation that’s not in $O$. We need to do this to have a chance for $\mathrm{W}(O)$ to be a cofibrant replacement of $O$.
Tom Leinster’s discussion of the $W$ construction here doesn’t mention this point. He describes $O + [0,1]$ and writes:
So the operad $O + [0,1]$ is almost exactly the operad $\mathrm{W}(O)$ defined by the Boardman–Vogt method. As far as I can see, the only point of difference is that in Boardman–Vogt, ‘by convention, the roots and twigs have length 1’ (Homotopy Invariant Algebraic Structures…, p. 73), whereas in the coproduct they have length 0. (The element 1 of the monoid $[0,1]$ plays no special role; the unit element is 0.)
All this is true except that the element 1 in the monoid $[0,1]$ (corresponding to the element $\infty$ in $[0, \infty]$) does play a special role in the Boardman–Vogt construction, by virtue of being a nontrivial idempotent.
• Tom Leinster says:
John, the stuff you’re doing prompted me to make a pdf version of my note available, since most people (including me) prefer pdf to ps these days. It’s here: http://www.maths.gla.ac.uk/~tl/w.pdf.
11. Mike Shulman says:
Very neat!
I don’t have any suggestions for what the W-construction and weak equivalence have to do with biology, but it seems clear that the reason coproducts of operads appear is that you’re considering the process of evolution of each species, and the process of branching of species, to be totally unrelated. Right?
Something seems a little weird, though: the behavior of evolution of a single species is controlled by the Markov process, but the behavior of branching of species is controlled by the operad. In other words, we choose a particular n-ary (co)operation in the phylogenetic operad which specifies all of the species branching that is to occur, with specified time periods in between branching events, and then the corresponding coaction morphism specifies a probability distribution over the genotypes of the resulting n species. Wouldn’t it be more realistic to also assign probabilities to the different kinds of branching that could occur?
• John Baez says:
Mike wrote:
I don’t have any suggestions for what the W-construction and weak equivalence have to do with biology, but it seems clear that the reason coproducts of operads appear is that you’re considering the process of evolution of each species, and the process of branching of species, to be totally unrelated. Right?
Right. And here “you” means not just little old me, but also biologists who are trying to infer phylogenetic trees from data about the DNA of various species we see today.
They do things like this: fix a Markov process and then seek the phylogenetic tree (in the sense I’ve defined here) and the genotype for the species at the root of this tree that maximize the likelihood that the species at the leaves have the genotypes we see today.
So, for example, Robert Wayne must have taken snippets of DNA from various kinds of dog-like animals and run some maximum likelihood or Bayesian algorithm to guess their family tree. (The post displays the resulting tree; the figure is not reproduced here.)
Wouldn’t it be more realistic to also assign probabilities to the different kinds of branching that could occur?
If we were trying to simulate evolution including the branching of species, we might try that. That could be a lot of fun.
For the applications described above, I don’t think it would be very practical. It might be practical if we had a good guess as to the probability per unit time that an organism with a given genotype would branch into two species. But since we don’t, we’d probably be forced into a very simple dumb guess, namely that the probability is some constant. And this, I believe, would not affect the results of the above sort of calculation.
• Graham says:
Actually biologists do assume a probability distribution on the branching patterns, and the choice can affect the results. You are right that little is known about speciation rates and extinction rates, so guesses have to be made. In a Bayesian context, the distribution can be seen as part of the prior. You hope that the molecular data will overwhelm the prior, but if the signal in the data is weak, it may not.
• Mike Shulman says:
I was thinking more along the lines of the Bayesian calculation inferring what sort of branching was likely to have occurred, as Graham suggests. But I think now I see that I asked the wrong question, since what people are really doing is drawing inferences about the branching. I should have said, “wouldn’t it be more realistic to also draw inferences about the Markov process (like in an HMM), rather than fixing a choice at the outset?”
I’m guessing that some people are already trying that too. But I was having a bit of fun trying to think about how to “mix up” the two aspects operadically.
• Graham says:
Yes, people already do that too, and more. A lot of Bayesian methods can be characterised as “co-estimate everything”.
The inference is like that for an HMM. The likelihood of the observed data at the leaves (the multiple sequence alignment, see http://en.wikipedia.org/wiki/Multiple_sequence_alignment) is calculated by working back recursively towards the root.
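(For readers who want to see this recursion concretely, here is a minimal sketch of Felsenstein-style pruning on a toy two-state model; it is not Graham's code, and the rate, tree shape, and branch lengths are all invented.)

```python
# Toy recursive likelihood computation on a hard-coded binary tree,
# with a 2-state symmetric Markov process on each branch.
import numpy as np

def transition(t, rate=1.0):
    """P(t) for a 2-state symmetric Markov process with the given rate."""
    p = 0.5 * (1.0 - np.exp(-2.0 * rate * t))
    return np.array([[1 - p, p], [p, 1 - p]])

def partial_likelihood(node):
    """node is an observed leaf state (int) or (left, t_left, right, t_right)."""
    if isinstance(node, int):
        vec = np.zeros(2)
        vec[node] = 1.0                                 # indicator of observed state
        return vec
    left, t_l, right, t_r = node
    l = transition(t_l) @ partial_likelihood(left)      # sum over child states
    r = transition(t_r) @ partial_likelihood(right)
    return l * r                                        # children are independent

# Tree: leaves 0 and 1 join after branches of length 0.3; that ancestor and
# another leaf 0 join at the root after branches of length 0.5.
tree = ((0, 0.3, 1, 0.3), 0.5, 0, 0.5)
print("likelihood:", partial_likelihood(tree) @ np.array([0.5, 0.5]))
```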
I only have a hazy idea about operads, and here is a very hazy idea: Statistical inference often seems to work ‘backwards’, using co-operations where the model uses operations, or vice-versa.
• Todd Trimble says:
I was about to ask innocently, “what’s an HMM?” Then I googled it and figured it referred to this. But I guess it wouldn’t hurt to record this fact for the similarly innocent. :-)
• John Baez says:
Graham wrote:
Actually biologists do assume a probability distribution on the branching patterns, and the choice can affect the results. You are right that little is known about speciation rates and extinction rates, so guesses have to be made.
Thanks for the correction! I only know a little about phylogenomics, so it’s nice to know that when I say something wrong, there’s a chance you’ll appear and correct me.
What branching patterns might be particularly likely and/or unlikely?
Also, you mention ‘extinction rates’. One person told me that extinction events were completely ignored in the process of guessing a phylogenetic tree from present-day DNA data. In other words, that you can work only with trees whose branches all make it to the present, without any harm. I’ve been hoping to find some context in which this actually causes problems. Do you know about that?
(Of course we need to think about extinction events when we also use DNA data from extinct species… but I’m not talking about that now.)
• Graham says:
“What branching patterns might be particularly likely and/or unlikely?”
Trees have a topology and they can have branch lengths or node times. As far as the topology alone is concerned, the main observation is that phylogenetic trees are unbalanced. I have just added an image to
http://www.azimuthproject.org/azimuth/show/Tree+of+life
to show what I mean. This imbalance shows up at every level, down to trees with 4 leaves, the smallest size at which imbalance can occur. All the obvious mathematical models (e.g. a birth-death process assuming constant rates of birth and death) make more balanced trees than this.
As far as node times are concerned it will depend on context. For example, for gene trees within a species, coalescent theory
(http://en.wikipedia.org/wiki/Coalescent_theory) gives a useful model. Looking backward in time, the coalescences happen very quickly, then slow down. For a gene sampled from say 10 individuals, and assuming a constant population size, the expected time for the first coalescence is proportional to 1/(10*9), the additional time to the next coalescence is proportional to 1/(9*8), and so on down to the last pair to meet, with expected time proportional to 1/(2*1). Looking forwards in time, this gives a tree that grows faster than exponentially.
“In other words, that you can work only with trees whose branches all make it to the present, without any harm. I’ve been hoping to find some context in which this actually causes problems. Do you know about that?”
It can cause problems if you want to estimate dates. For example, there are 22 Crocodylla (crocodiles, alligators and gharials), and they separated from the rest of the tree of life a very long time ago (let's say 100My for the sake of argument). If you assumed there were no extinctions, you would estimate the time of the most recent common ancestor of extant Crocodylla (that is, the time of the first speciation in the phylogenetic tree for Crocodylla) to be a very long time ago as well. Unlike the coalescence case, the expected times go like 1/21, 1/20, … 1/2, 1/1 going back in time, so – very roughly – you might estimate this time as 1/(1+1/2+⋯+1/21) = 1/3.64 ≈ 0.27 of 100My since the ancestor species separated from the rest of the tree of life, that is 73My ago. More realistically, there will have been many extinctions and quite likely there were once many more than 22 in this group. In this case, it could easily be that the most recent common ancestor of extant Crocodylla is very recent.
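(Both back-of-envelope numbers above are easy to re-derive; the few lines below just redo the arithmetic.)

```python
# Coalescent waiting times for 10 sampled lineages: proportional to 1/(k(k-1)).
times = [1.0 / (k * (k - 1)) for k in range(10, 1, -1)]
print([round(t, 4) for t in times])          # 1/90, 1/72, ..., 1/2

# Crocodylla: fraction 1/(1 + 1/2 + ... + 1/21) of 100 My.
H21 = sum(1.0 / k for k in range(1, 22))
print(round(H21, 2), round(1.0 / H21, 2))    # about 3.65 and 0.27
print("rough MRCA estimate:", round(100.0 * (1.0 - 1.0 / H21)), "My ago")  # 73
```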
It also causes problems if you know some dates from fossils and want to estimate speciation and extinction rates. This article might be a good place to start.
“Estimating diversification rates from phylogenetic information”. By Ricklefs R E, Trends Ecol Evol. 2007.
http://www.bio-nica.info/biblioteca/Ricklefs2007PhylogeneticInformation.pdf
Finally, ‘phylogenetics’ means estimating trees from genes and ‘phylogenomics’ means estimating trees from whole genomes.
• John Baez says:
Thanks for all that, Graham! As you noticed, I don’t even know the difference between phylogenetics and phylogenomics. Well, I do now. But I have a lot of catching up to do.
By the way, when I gave my talk at UQAM the fellow in charge of the combinatorics seminar, Franco Saliola, said he had been to a nice talk by the mathematical biologist Lior Pachter. Have you heard of him? His website says:
I work on the fundamental problem of comparative genomics: the determination of the origins and evolutionary history of the nucleotides in all extant genomes. My work incorporates various aspects of genomics, including the reconstruction of ancestral genomes (paleogenomics), the modeling of genome dynamics (phylogenomics and systems biology) and the assignment of function to genome elements (functional genomics and epigenomics).
In addition to working on algorithms and mathematical foundations for comparative genomics, I also work on genome projects and perform large scale computational analyses. I have been a member of the mouse, rat, chicken and fly genome sequencing consortia, and the ENCODE project.
My research draws on tools from discrete mathematics, algebra and statistics. I am also interested in questions in these subjects that are motivated by biology problems.
http://math.stackexchange.com/questions/128559/differentiation-of-generic-riemann-integrals?answertab=active

# Differentiation of Generic Riemann integrals
I am trying to show the following result. If $f(x)$ is Riemann integrable when $a_{1}\leq x\leq b_{1}$, and if, when $a_{1} \leq a < b < b_{1}$, we write $$\int _{a}^{b}f\left( x\right) dx=\phi \left( a,b\right),$$ and if $f(b+0)$ exists, then $$\lim _{\delta \rightarrow +0} \dfrac {\phi\left( a,b+\delta \right) -\phi \left( a,b\right) } {\delta }=f\left(b+0\right).$$ Then, if $f(x)$ is continuous at $a$ and $b$, deduce that $\dfrac {d} {da}\int _{a}^{b}f\left( x\right) dx=-f\left( a\right)$ and $\dfrac {d} {db}\int _{a}^{b}f\left( x\right) dx=f\left( b\right)$.
Thoughts towards the solution: Since $f(x)$ is continuous at $a$ and $b$, we know that the following limits exist: $$\lim _{\delta \rightarrow -0} \dfrac {\phi\left( a+\delta,b \right) -\phi \left( a,b\right) } {\delta }=-f\left(a -0\right)$$ $$\lim _{\delta \rightarrow +0} \dfrac {\phi\left( a,b+\delta \right) -\phi \left( a,b\right) } {\delta }=f\left(b+0\right)$$
Also I think $\dfrac {d} {da}\int _{a}^{b}f\left( x\right) dx=\int _{a}^{b}\dfrac {df} {da}dx$ and $\dfrac {d} {db}\int _{a}^{b}f\left( x\right) dx=\int _{a}^{b}\dfrac {df} {db}dx$, although I am not sure how to exploit these two ideas to show the result. Any help would be much appreciated.
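(One quick sanity check, not part of the original question: the two identities to be deduced can be tested numerically with a convenient $f$, here $f(x)=\cos x$, for which $\phi(a,b)=\sin b-\sin a$ is exact.)

```python
# Finite-difference check of d/db int_a^b f = f(b) and d/da int_a^b f = -f(a).
import math

phi = lambda a, b: math.sin(b) - math.sin(a)   # exact integral of cos(x)

a, b, h = 0.3, 1.2, 1e-6
print((phi(a, b + h) - phi(a, b)) / h, math.cos(b))    # d/db -> f(b)
print((phi(a + h, b) - phi(a, b)) / h, -math.cos(a))   # d/da -> -f(a)
```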
http://mathoverflow.net/questions/48122/the-convergence-of-jacobi-and-gauss-seidel-iteration

## The Convergence of Jacobi and Gauss-Seidel Iteration
Hi All!
I was supposed to find a solution of $Ax=b$ using the Jacobi and Gauss-Seidel methods. Here $A$ is a 100x100 symmetric, positive-definite matrix and $b$ is a vector filled with 1's. I am iterating ($k = 1,2,\dots$) both methods until the norm of $x^{(k+1)} - x^{(k)}$ falls below some precision, which means that $x$ is not changing and it is senseless to iterate more. Both methods end with the same $k$ (I changed the precision and $k$ is still the same). So is it possible that the convergence of Jacobi and Gauss-Seidel is the same?
I will appreciate every response.
I'm far from being a specialist in numerical analysis. But if I remember correctly, the rate of convergence of Gauss-Seidel and Jacobi are both quite sensitive to the problem at hand. Depending on the problem either one can be faster than the other. But this sounds more like homework than a research level question, so you should consider to head over to an alternative site, see mathoverflow.net/faq – Theo Buehler Dec 3 2010 at 2:24
I am not asking for making this, but I made it by myself, and I am confused because the convergence is the same, I am only asking whether it's possible for that kind of matrix. – vasilij8 Dec 3 2010 at 2:29
## 2 Answers
There are several important cases where it is proved that $\rho(G)<\rho(J)$, with $G$ and $J$ the iteration matrices associated to the Gauss-Seidel and Jacobi methods. See for instance my book Matrices. GTM 216, Springer-Verlag. For instance, in the tridiagonal case, $\rho(G)=\rho(J)^2$ thus G-S is twice faster as Jacobi.
What does twice as fast (or $k$ times as fast) mean? These are order-one methods, in the sense that a fixed number of exact digits is gained at each step. This number is $\tau=-\log_{10}\rho$. A method is twice as fast as another if the ratio $\tau_{\mathrm{one}}/\tau_{\mathrm{other}}$ equals $2$. Thus you should see a significant difference between the two methods. If not, there might be two reasons: either you are in an exceptional case where $\rho(G)=\rho(J)$, or something is wrong in your code.
In general, I do not recommend Jacobi and G-S. They are good examples in a course to beginners. But a slight change of G-S yields the relaxation method. With an optimal parameter, it is much faster. This is because $\rho(G)$ is very close to $1$ when $n$ is large, and thus $\tau$ is very small.
You need to be careful how you define rate of convergence. For Gauss-Seidel and Jacobi you split $A = M - K$ and rearrange: $$Ax = b \implies Mx = Kx + b \implies x = M^{-1}Kx + M^{-1}b =: Rx + c,$$ giving the iteration $x_{m+1} = Rx_m + c$. We (following Demmel's book) define the rate of convergence as the increase in the number of correct decimal places per iteration, $$r = -\log_{10}( \rho(R)),$$ where $\rho(R)$ is the spectral radius of $R$. It can be shown that for $A$ strictly row diagonally dominant, $$\rho(R_{\text{Gauss}}) \leq \rho(R_{\text{Jacobi}}) < 1,$$ indicating that the rate of convergence for Gauss-Seidel is greater than that of Jacobi.
However I have never seen a significant difference in speed between the two methods.
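(A small experiment makes the $\rho(G)=\rho(J)^2$ prediction concrete; this is a sketch on an SPD tridiagonal test matrix, shrunk to 50x50 so it runs quickly, not the original poster's code. Gauss-Seidel should take roughly half as many iterations as Jacobi.)

```python
# Jacobi vs Gauss-Seidel on the SPD tridiagonal matrix with 2 on the
# diagonal and -1 off it, for which rho(Gauss-Seidel) = rho(Jacobi)^2.
import numpy as np

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
D = np.diag(np.diag(A))                 # splitting A = D + L + U
L = np.tril(A, -1)
U = np.triu(A, 1)

def iterations(M, K, tol=1e-8, max_iter=100000):
    """Iterate x <- M^{-1}(b - K x) until successive iterates stop moving."""
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_new = np.linalg.solve(M, b - K @ x)
        if np.linalg.norm(x_new - x) < tol:
            return k
        x = x_new
    return max_iter

print("Jacobi:      ", iterations(D, L + U))
print("Gauss-Seidel:", iterations(D + L, U))

rho = lambda R: np.abs(np.linalg.eigvals(R)).max()
print("rho(J)^2 =", rho(np.linalg.solve(D, -(L + U))) ** 2)
print("rho(G)   =", rho(np.linalg.solve(D + L, -U)))
```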
http://math.stackexchange.com/questions/74763/how-to-represent-uniformity-of-a-surface/74806

# How to represent uniformity of a surface?
My knowledge of math is very basic, my statistics knowledge is even less. It was suggested to me that I try asking this question here, so here we go:
I am developing a software application that will measure and analyze properties of an object's surface. We want to determine how uniform the surface is and compare it to another object to determine which is "more uniform". Due to confidentiality issues as well as simplicity, I'll use an analogy of a dirt field:
• Field "A" is level, however it has many small pits and little mounds. All of the bumps and pits are pretty close to the same size, but they are everywhere on the field.
• Field "B" is also level and has much less pits and bumps than field A, however the pits and bumps it DOES have are quite large (and deep).
Let's say we have this great machine that can measure the the depth and height of the field in a 6" grid pattern - so we have this large set of data presenting an evenly spaced grid of points for each field.
The first part of my problem was to determine uniformity, and after asking around and doing some research it seems that "variance" is what I was after. I am now determining variance like this:

1. Determine the mean.
2. For each point, calculate the square of the distance from the mean.
3. Divide the sum of squared distances by the number of points.
So { 1, 3, 3, 2, 1 } gives 0.8.
Honestly I'm not sure I'm doing it correctly, but let's assume I am (for now). My next question is "what is 0.8?" It's my variance... OK, but how would I explain that to a layperson? Our application is not intended for the scientifically minded person, we basically want to convey the message that "Field A is smoother than field B" or "Field A, while it has more imperfections they are evenly spread out and not as intense as field B"
This is difficult to describe... I basically need to know what 0.8 "means"? Percentages are easy, you can say that "XYZ is 98% efficient" (at something) and most people can process that information and understand it. However I don't know how I can say "XYZ's surface is 0.8 uniform." The person would say "Well what does that mean? Is that good?"
That's my first problem: How to express uniformity in simple terms or relative to some other value that makes it easily understandable. If I plug some more severe numbers into my test program, say { 1,1,1,1,60 } I get a variance of 652.6875 - what!?
Please keep in mind, I really don't know what I'm talking about Math wise. What it boils down to is how to express uniformity of a set of values in a meaningful way to a layperson.
I also wasn't sure what tags to use, so if I missed some relevant ones please let me know.
Thanks for reading.
## 1 Answer
I guess it says something about the quality of the Wikipedia article on variance that you weren't able to answer your question by reading it. I suggest that if this answer is helpful to you (and especially since it will be helping you with a commercial project for free), you could edit the Wikipedia article to fix it in that respect. In particular, I suspect whoever wrote this section had this sort of question on mind; perhaps you could improve on that if you see what was missing for you.
First, your calculation of the variance is correct in the first case, but not in the second (which is a bit weird if you used the same program to compute them). In the second case, the mean is
$$\mu=\frac{1+1+1+1+60}5=\frac{64}5=12.8\;,$$
so the variance is
$$\begin{aligned} \sigma^2 &=\frac{4(1-12.8)^2+(60-12.8)^2}{5}\\ &=\frac{4(-11.8)^2+47.2^2}{5}\\ &=556.96\;. \end{aligned}$$
You can always check your answers for this sort of thing using Wolfram|Alpha. (Make sure to use "population variance" in this case; "sample variance" is something slightly different; the difference is explained in a section of the Wikipedia article.)
Now to your question of what to make of this. The reason you can't relate these numbers to anything more immediately meaningful is that the variance is a squared quantity, i.e., its units are the square of whatever units your data has, so it's not meaningful to compare it directly to the data. What you need for that is the standard deviation, which is just the square root of the variance. It makes sense: You square all the deviations from the mean; the remaining operations just average those squares, so the result is an average square deviation, so to get something resembling an average deviation you need to take its square root. Note that you can't just skip squaring and taking the square root, because if you directly average the deviations from the mean, you always get zero. (You might want to try that out.)
In your examples, the standard deviations are $\sqrt{0.8}\approx0.89$ and $\sqrt{556.96}=23.6$, respectively. You can interpret this as something like an average deviation from the mean (keeping in mind that the literal average deviation from the mean is zero). A similar measure is the average of the absolute value of the deviations, which in your examples would be $0.8$ and $18.88$, respectively. For a discussion of the difference between these two measures and the reasons for preferring the standard deviation, see Motivation behind standard deviation?.
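(The quoted numbers can be reproduced in a few lines of Python; this snippet is an illustration added alongside the answer, not part of it.)

```python
# Population variance, standard deviation, and mean absolute deviation
# for both example data sets from the question and answer.
def stats(data):
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n     # population variance
    mad = sum(abs(x - mean) for x in data) / n       # mean absolute deviation
    return mean, var, var ** 0.5, mad

print(stats([1, 3, 3, 2, 1]))    # (2.0, 0.8, 0.894..., 0.8)
print(stats([1, 1, 1, 1, 60]))   # (12.8, 556.96, 23.6, 18.88)
```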
http://www.physicsforums.com/showthread.php?p=3301894

Physics Forums
## Advection equation and Crank-Nicolson
Hi, I want to numerically model the advection equation using the Crank-Nicolson scheme. Yes, I know that it is highly oscillatory, but that is the point of the exercise, as I want to highlight this. The problem I'm having is how to apply the BC for grid point N in the scheme. The advection equation only needs one boundary condition, at point 0 in the domain, but because of the centred space discretisation the scheme requires an artificial boundary condition at the other end. The basic equation is
$$-\frac{\sigma}{4}f_{i-1}^{n+1}+f_{i}^{n+1}+\frac{\sigma}{4}f_{i+1}^{n+1}=\frac{\sigma}{4}f_{i-1}^{n}+f_{i}^{n}-\frac{\sigma}{4}f_{i+1}^{n}$$
So say I want the value at N+1 to be the same as N, that requires
$$\left.\frac{\partial f}{\partial x}\right|_{N}=\frac{f_{N+1}^{n}-f_{N-1}^{n}}{2\Delta x}=0$$
and hence
$$f_{N+1}=f_{N-1}$$
So if we sub that into the main scheme we get
$$f_{N}^{n+1}=f_{N}^{n}$$
So according to this the final grid point always remains at the initial condictions, which is clearly wrong. Does anyone know what is wrong with my assumptions?
Thanks for any info.
Lots of ways, some better/easier than others. You can try making your domain a cell too large at every boundary where you need to enforce a BC at the cell edge, then interpolate. You can also make the domain periodic, but then you get periodic solutions (which may or may not be acceptable).
Quote by olivermsun Lots of ways, some better/easier than others. You can try making your domain a cell too large at every boundary where you need to enforce a BC at the cell edge, then interpolate. You can also make the domain periodic, but then you get periodic solutions (which may or may not be acceptable).
I'm not quite sure how you implement the method you suggested. If you extend your domain, aren't you still going to have the same problem except with a larger domain? The solution still has to be in the form of a tridiagonal matrix.
Unfortunately I can't use a periodic domain as I want to see the solution at the final time compared with the initial time.
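(To make the discussion concrete, here is a bare-bones sketch of the scheme with the reflecting condition $f_{N+1}=f_{N-1}$ at the outflow end; every parameter value is invented, and the point is only to watch the last grid point stay frozen, as derived above.)

```python
# Crank-Nicolson advection with f_{N+1} = f_{N-1} at the outflow end.
import numpy as np

N, steps, sigma = 200, 200, 0.8            # sigma = c*dt/dx (invented values)
x = np.linspace(0.0, 1.0, N + 1)
f = np.exp(-200.0 * (x - 0.3) ** 2)        # Gaussian pulse initial condition

A = np.eye(N + 1)                          # left-hand-side matrix
for i in range(1, N):
    A[i, i - 1] = -sigma / 4.0
    A[i, i + 1] = +sigma / 4.0
# Row 0 stays the identity row (Dirichlet inflow f_0 = 0 via the RHS below).
# In row N the condition f_{N+1} = f_{N-1} cancels the off-diagonal terms,
# so the row reduces to f_N^{n+1} = f_N^n: the last point is frozen.

fN_start = f[-1]
for _ in range(steps):
    rhs = f.copy()
    rhs[1:N] += sigma / 4.0 * (f[0:N-1] - f[2:N+1])
    rhs[0] = 0.0
    f = np.linalg.solve(A, rhs)

print("last grid point:", fN_start, "->", f[-1])   # unchanged, as derived above
```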
http://divisbyzero.com/2012/04/18/an-interesting-multivariable-calculus-example/?like=1&_wpnonce=1e4d043b68

Division by Zero
A blog about math, puzzles, teaching, and academic technology
Posted by: Dave Richeson | April 18, 2012
An interesting multivariable calculus example
Earlier this semester in my Multivariable Calculus course we were discussing the second derivative test. Recall the pesky condition that if ${(a,b) }$ is a critical point and ${D(a,b)=f_{xx}(a,b)f_{yy}(a,b)-(f_{xy}(a,b))^{2}=0}$, then the test fails.
A student emailed me after class and asked the following question. Suppose a function ${f}$ has a critical point at ${(0,0)}$ and ${D(0,0)=0}$. Moreover, suppose that as we approach ${(0,0)}$ along ${x=0}$ we have ${f_{xx}(0,y)>0}$ when ${y<0}$ and ${f_{xx}(0,y)<0}$ when ${y>0}$. Is that enough to say that the critical point is not a maximum or a minimum? His thought process was that if we look at slices ${y=k}$ we get curves that are concave up when ${k<0}$ and curves that are concave down when ${k>0}$—surely that could not happen at a maximum or minimum.
I understood his intuition, but I was skeptical. Indeed, after a little playing around I came up with the following counterexample. The function is
${\displaystyle f(x,y)=\begin{cases} x^{4}+y^{2}e^{-x^{2}} & y\ge 0\\ x^{4}+x^{2}y^{2}+y^{2} & y<0. \end{cases}}$
The first partial derivatives are
${\displaystyle f_{x}(x,y)=\begin{cases} 4x^{3}-2xy^{2}e^{-x^{2}} & y\ge 0\\ 4x^{3}+2xy^{2} & y<0, \end{cases}}$
${\displaystyle f_{y}(x,y)=\begin{cases} 2ye^{-x^{2}} & y\ge 0\\ 2x^{2}y+2y & y<0. \end{cases}}$
Clearly ${(0,0)}$ is a critical point. The second partial derivatives are
${\displaystyle f_{xx}(x,y)=\begin{cases} 12x^{2}-2y^{2}e^{-x^{2}}+4x^{2}y^{2}e^{-x^{2}} & y\ge 0\\ 12x^{2}+2y^{2} & y<0, \end{cases}}$
${\displaystyle f_{yy}(x,y)=\begin{cases} 2e^{-x^{2}} & y\ge 0\\ 2x^{2}+2 & y<0, \end{cases}}$
${\displaystyle f_{xy}(x,y)=f_{yx}(x,y)=\begin{cases} -4xye^{-x^{2}} & y\ge 0\\ 4xy & y<0. \end{cases}}$
Thus ${D(0,0)=f_{xx}(0,0)f_{yy}(0,0)-(f_{xy}(0,0))^{2}=0\cdot 2-0^{2}=0}$. So the second derivative test fails. But observe that when ${x=0}$ we have
${\displaystyle f_{xx}(0,y)=\begin{cases} -2y^{2} & y\ge 0\\ 2y^{2} & y<0. \end{cases}}$
So ${f_{xx}(0,y)>0}$ when ${y<0}$ and ${f_{xx}(0,y)<0}$ when ${y>0}$. Yet it is easy to see that ${(0,0)}$ is a minimum: ${f(0,0)=0}$ and ${f(x,y)>0}$ for all ${(x,y)\ne (0,0)}$. (A graph of the function, not reproduced here, shows the concave-down cross sections for $x=0$ and $y>0$.)
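(In place of the graph, a quick numerical check of both claims; the grid resolution and the finite-difference step are arbitrary choices.)

```python
# f is positive away from (0,0), while the second x-derivative along
# x = 0 changes sign at y = 0.
import numpy as np

def f(x, y):
    return np.where(y >= 0,
                    x**4 + y**2 * np.exp(-x**2),
                    x**4 + x**2 * y**2 + y**2)

xs = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(xs, xs)
vals = f(X, Y)
print("f > 0 off the origin:", bool(vals[(X != 0) | (Y != 0)].min() > 0))

def f_xx(x, y, h=1e-4):                    # central second difference in x
    return (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2

print("f_xx(0,  0.5) =", float(f_xx(0.0,  0.5)))   # about -0.5 = -2*(0.5)^2
print("f_xx(0, -0.5) =", float(f_xx(0.0, -0.5)))   # about +0.5
```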
Posted in Math | Tags: critical point, multivariable calculus, second derivative test
Responses
1. I think the functions $(y-x^2)^4 + x^4$ and $((y-x^2)^2 + x^2)^2$ also work. They are inspired by the pitchfork bifurcation (compare with the shape of the contour $f_x=0$).
By: Jan Van lent on April 19, 2012
at 1:15 pm
• I actually meant $(y-x^2)^4 + y^4$ and $((y-x^2)^2 + y^2)^2$, but the other two also work. From the contour plots for all these functions it is clear that they use the same idea.
By: Jan Van lent on April 19, 2012
at 1:35 pm
• Great! Thanks. I haven’t checked the derivatives, but it looks good on Grapher.
By: Dave Richeson on April 19, 2012
at 2:29 pm
• I just realised that the examples I gave are closely related to the Rosenbrock function [1].
[1] http://en.wikipedia.org/wiki/Rosenbrock_function
By: Jan Van lent on April 19, 2012
at 3:52 pm
• Wow. Cool. Thanks for sharing that!
By: Dave Richeson on April 19, 2012
at 4:09 pm
http://physics.stackexchange.com/questions/tagged/harmonic-oscillator+mathematical-physics

# Tagged Questions
2 answers
357 views
### Non-Degeneracy of Eigenvalues of Number Operator for Simple Harmonic Oscillator [duplicate]
Possible Duplicate: Proof that the One-Dimensional Simple Harmonic Oscillator is Non-Degenerate? I'm trying to convince myself that the eigenvalues $n$ of the number operator ...
1 answer
187 views
### Question on Sakurai's treatment of the Harmonic Oscillator:
In Section 2.3 of the second edition of Modern Quantum Mechanics (which discusses the harmonic oscillator), Sakurai derives the relation $$Na\left|n\right> = (n-1)a\left|n\right>,$$ and states ...
2 answers
949 views
### Proof that the One-Dimensional Simple Harmonic Oscillator is Non-Degenerate?
The standard treatment of the one-dimensional quantum simple harmonic oscillator (SHO) using the raising and lowering operators arrives at the countable basis of eigenstates $\{\vert n \rangle\}_{n = \ldots}$ …
http://mathoverflow.net/questions/88424/ring-of-integers-as-subring-with-most-irreducibles

## Ring of Integers as subring with most irreducibles
Let $L$ be a number field. Is it possible to define its ring of integers $R$ by saying it's the subring with (in a fuzzy sense) the "most" irreducibles?
It seems you're looking for the universal property of normalization, which is the geometric version of "taking the integral closure". See mathoverflow.net/questions/46/… You might also want to replace "irreducibles" by "prime ideals" in your question. – François Brunault Feb 14 2012 at 13:27
You could make "most" less fuzzy by asking: does the ring of integers satisfy the property that if there is a subring $S$ of $L$ in which $x\in S$ is irreducible, then $x\in R$ and is irreducible or a product of irreducibles? One problem is that, unless the class number is 1, the ring of integers is not a UFD. So irreducibles behave badly. This is why, historically, one uses prime ideals rather than irreducibles. We have unique factorization of prime ideals. – Pace Nielsen Feb 14 2012 at 22:03
http://mathhelpforum.com/algebra/50092-help-please-its-gobledegook.html
1. ## help please its gobledegook
81 to the 3/4 I know the answer is 27 but how
and
express square root of 2 over square root of 2 + 1 in the form a+bsquare root2, where a and b are integers
2. Originally Posted by mathsfool
81 to the 3/4 I know the answer is 27 but how
and
express square root of 2 over square root of 2 + 1 in the form a+bsquare root2, where a and b are integers
$81^{\frac34} = \left(3^4\right)^{\frac34} =\left( \left(3^4\right)^{\frac14}\right)^3 = 3^3 = 27$
3. Originally Posted by mathsfool
...
express square root of 2 over square root of 2 + 1 in the form a+bsquare root2, where a and b are integers
I hope you mean:
$\frac{\sqrt{2}}{\sqrt{2} + 1}$
If so:
$\frac{\sqrt{2}}{\sqrt{2} + 1}\cdot \frac{\sqrt{2} - 1}{\sqrt{2} - 1} = \frac{2-\sqrt{2}}{2-1} = 2-1\cdot \sqrt{2}$
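(Both parts are easy to confirm with a computer algebra system; a short check, assuming sympy's `Rational`, `sqrt`, and `radsimp`:)

```python
# Verify 81^(3/4) = 27 and rationalize sqrt(2)/(sqrt(2)+1).
from sympy import Rational, sqrt, radsimp

print(81 ** Rational(3, 4))               # 27
print(radsimp(sqrt(2) / (sqrt(2) + 1)))   # 2 - sqrt(2), i.e. a = 2, b = -1
```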
http://mathhelpforum.com/advanced-statistics/201236-hard-probability-question-print.html

# Hard probability question
• July 21st 2012, 07:13 PM
smalltalk
Hard probability question
"A die is thrown until the sum of the outcomes is greater than 300. Find, approximately, the probability that at least 80 experiments are required."
I have the suspicion that if
X = number of throws required until sum>300
then X has a normal distribution with mean 85 and ... some variance.
How could I prove that?
Also, I calculated the probability using another method and I got 0.765 as a result, but when I simulated the experiment I got 0.89 as a result ... :(
• July 22nd 2012, 12:09 AM
richard1234
Re: Hard probability question
I doubt it's normal. We know that between 50 and 300 experiments are required, and the expected number of experiments is about 300/3.5 ≈ 85.7.
Edit: Between 51 and 301 (if you are considering "strictly greater than 300").
• July 22nd 2012, 06:33 AM
awkward
Re: Hard probability question
Quote:
Originally Posted by smalltalk
"A die is thrown until the sum of the outcomes is greater than 300. Find, approximately, the probability that at least 80 experiments are required."
I have the suspicion that if
X = number of throws required until sum>300
then X has a normal distribution with mean 85 and ... some variance.
How could I prove that?
Also, I calculated the probability using another method and I got 0.765 as a result, but when I simulated the experiment I got 0.89 as a result ... :(
Hi Smalltalk,
Let's say the outcome of one die throw is $X_i$. Since at least 80 throws are required exactly when the first 79 throws sum to at most 300, you want to find $\Pr(\sum_{i=1}^{79} X_i \le 300)$. For brevity, let's say $Y = \sum_{i=1}^{79} X_i$.
Y is approximately normally distributed, by the Central Limit Theorem. You need to compute the mean of Y (it's not 85), and you need to find its standard deviation before you can look up the sought-for probability in a table.
Hint: $var(Y) = \sum_{i=1}^{79} var(X_i)$.
Can you find the variance of $X_i$?
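(A sketch, not from the thread itself: simulate the event directly and compare with the CLT estimate. Note the counting convention: at least 80 throws are needed exactly when the first 79 throws total at most 300, and an off-by-one here shifts the answer by a few percent, which may explain part of the discrepancy reported above.)

```python
# Monte Carlo vs the CLT estimate for P(at least 80 throws needed).
import random
from math import erf, sqrt

random.seed(1)
trials = 100000
hits = sum(1 for _ in range(trials)
           if sum(random.randint(1, 6) for _ in range(79)) <= 300)
print("simulated:", hits / trials)

mu = 79 * 3.5                 # E[X_i] = 3.5 for one die
var = 79 * 35.0 / 12.0        # Var(X_i) = 91/6 - 49/4 = 35/12
z = (300.5 - mu) / sqrt(var)  # 300.5: continuity correction
print("normal approx:", 0.5 * (1 + erf(z / sqrt(2))))  # both come out near 0.94
```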
http://mathoverflow.net/questions/77128/does-the-weierstrass-function-have-a-point-of-increase/77202

## Does the Weierstrass function have a point of increase?
Problem
The Weierstrass function $W(x)$ is given by
$W(x)=\sum_{n\geq 0} a^n \cos(b^n \pi x)$
where $0< a <1$ and $b$ is an odd integer such that $ab > 1+3\pi/2$.
A function $f:\mathbb{R}\rightarrow \mathbb{R}$ is said to have a point of increase if there exists a $t \in \mathbb{R}$ and $\delta>0$ such that
$f(t-s)\leq f(t) \leq f(t+s) \quad \forall s \in [0,\delta]$.
So my question is does the Weierstrass function have a point of increase?
Motivation
In Burdzy's paper there is a proof that Brownian motion does not have a point of increase. There are nowhere differentiable functions with a point of increase that one could construct, but I have been having difficulty seeing whether the Weierstrass function does.
I would be grateful for any references or heuristics regarding this problem, or any comments as to the difficulty.
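(One numerical heuristic, emphatically not a proof: sample a truncated Weierstrass sum near a candidate point and test the defining inequalities on a small window. The parameters below satisfy $ab>1+3\pi/2$; the point $t$, the window $\delta$, and the truncation depth are arbitrary choices.)

```python
# Look for a one-sided increase of a truncated Weierstrass sum near t.
import math

a, b = 0.5, 13                    # ab = 6.5 > 1 + 3*pi/2 and b is odd
W = lambda x, terms=12: sum(a**n * math.cos(b**n * math.pi * x)
                            for n in range(terms))

t, delta = 0.123, 1e-3            # arbitrary candidate point and window
ss = [k * delta / 50 for k in range(1, 51)]
left  = all(W(t - s) <= W(t) for s in ss)
right = all(W(t) <= W(t + s) for s in ss)
print("looks like a point of increase at t?", left and right)  # False here
```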
I would say the thing to do is consult Hardy's paper ... Hardy, G. H. (1916) "Weierstrass's nondifferentiable function," Transactions of the American Mathematical Society, vol. 17, pages 301–325. – Gerald Edgar Oct 4 2011 at 14:59
## 2 Answers
The original proof of Weierstrass (see pages 4 to 7 in Edgar (ed.): Classics on Fractals, Westview Press, 2004) constructs, for any $x_0\in\mathbb{R}$, two sequences $(x'_n)$ and $(x''_n)$ such that $$x'_n < x_0 < x''_n,\qquad x'_n\to x_0,\qquad x''_n\to x_0,$$ but $$\frac{W(x'_n)-W(x_0)}{x'_n-x_0}\qquad\text{and}\qquad \frac{W(x''_n)-W(x_0)}{x''_n-x_0}$$ are of opposite signs and their absolute values tend to infinity. This shows that $W(x)$ has no point of increase and no point of decrease.
Note that $W$ can have a local maximum (at $0$, for example) or local minimum. And thus one-sided points of increase or decrease. – Gerald Edgar Oct 5 2011 at 11:44
Unfortunately I can't get a preview of the book but this seems like a nice way to prove the statement. Thanks. – Bati Oct 5 2011 at 12:11
@Gerald: I agree, and I believe it is difficult to find the local minima of $W(x)$. At any rate, Weierstrass' proof gives the following: (1) if $\lfloor 1/2-b^n x_0 \rfloor$ is even infinitely often, then the function is not locally decreasing at $x_0$ from the left and not locally increasing at $x_0$ from the right, in particular $x_0$ is not a local minimum; (2) if $\lfloor 1/2-b^n x_0 \rfloor$ is odd infinitely often, then the function is not locally increasing at $x_0$ from the left and not locally decreasing at $x_0$ from the right, in particular $x_0$ is not a local maximum. – GH Oct 5 2011 at 14:26
A similar function is proved to be nowhere monotonic in Gelbaum and Olmsted, Counterexamples in Analysis, Chapter 2, Example 21.
Actually nowhere monotonic is a weaker property (to me) than having no point of increase or decrease. – GH Oct 5 2011 at 5:49
Nowhere monotonic (ie, no interval on which it is monotonic) is weaker than nowhere differentiable. – George Lowther Oct 5 2011 at 8:37
Point taken. But Bati still might get something out of Example 21, and, anyway, I never pass up a chance to promote the Gelbaum and Olmsted book. – Gerry Myerson Oct 5 2011 at 22:52
http://www.physicsforums.com/showthread.php?p=4230005

Physics Forums
## Confusion regarding Lie groups
Hello! I'm currently trying to get things straight about Lie groups from two different perspectives. I have encountered Lie groups before in math and QM, but now I'm reading GR, where we are talking about coordinate and non-coordinate bases, and it seems that we should be able to find commuting generators, for example for SU(2), by just:
Smooth manifold --> Find a coordinate chart --> use the coordinate basis in tangent space --> we have three pairwise commuting generators.
Where does this break down?
Thanks in advance!
/Kontilera
You forgot a step: Smooth Lie group --> Find a coordinate chart --> use the coordinate basis in the tangent space at the identity --> extend to a left-invariant vector field. The result in general will differ from the local coordinate frame you started with, and so the Lie bracket won't be 0 anymore.
Ah okay, so my misunderstanding is a confusion about the difference between smooth manifolds and Lie groups. My line of thinking was that since every Lie group is a smooth manifold (correct?), every coordinate chart will induce a tangent-space basis which will commute pairwise. In my GR course a coordinate basis is defined by satisfying: $$[e_\mu, e_\nu] = 0,$$ while in my mathematics literature they are defined as directional derivatives along a set of coordinate axes (given by a chart on the manifold). But these two statements are very different for Lie groups.
Yes, I understood your line of thinking. Your error was in forgetting to keep track of how exactly the Lie algebra of a Lie group G is identified with the tangent space of G at the identity. Namely: if X and Y are two arbitrary vector fields on G, then they restrict to vectors in TeG (and hence induce elements of Lie(G) = {left-invariant vector fields on G}), but these induced elements X' and Y' are not going to be X and Y if X and Y were not left-invariant to begin with, and so [X,Y] will not equal [X',Y'] in general.
The two definitions of coordinate basis you mention are indeed equivalent. You can consult Lee Thm 18.6 for a proof. The key is that two vector fields Lie-commute (i.e. [X,Y] = 0) iff their flows commute for all times. Given a basis of Lie-commuting vector fields, you can thus compose the flows to construct coordinates.
I don't see in what sense the statements are different "for Lie groups". The statements concern only smooth manifold and are independent of any additional structures the manifold might carry.
Thanks, it doesn't seem obvious right now, but I will give it some time... maybe I'll come back tomorrow. :)
Last question: so on SU(2) I can find a coordinate basis which will satisfy $$[e_\mu, e_\nu] = 0,$$ but these coordinate axes will not be left-invariant and therefore don't say anything about the bigger structure of the group or its Lie algebra.
Right.
It's getting clearer and clearer for me, but I don't really get the dimensions to add up. Let's take the 2-sphere as an example. The tangent space at every point can be spanned by derivatives along longitude and latitude - so it's 2-dimensional. But making the identification that our derivative along the longitudinal axis is the L_z operator when we represent this algebra in QM, it seems that we should have a 3-dimensional algebra: L_z, L_y, L_x. So we have a 3-dimensional vector space consisting of the left-invariant vector fields, but our manifold and tangent space are two-dimensional. Is this correct? (I am thinking of SO(3) as topologically equivalent to the 2-sphere; this is correct, right?)
Mh, no: topologically, SO(3) is real projective 3-space RP³. See the very nice explanation of this on wiki: http://en.wikipedia.org/wiki/SO%283%29#Topology. In particular SO(3) is a 3-dimensional manifold. It has S³ as its universal (2-sheeted) cover.
Yeah, just realized this, glad you verified it. :) My mistake (which I committed the last time I dealt with SO(3) as well) is that I reason like this: "So, I want a group of rotations on R^3. Let's start with the vector (1,0,0); this can now be rotated to every vector on the 2-sphere by using all the elements of SO(3). However, we cannot rotate it to a vector not lying on the 2-sphere. So there should be a 1-1 correspondence between the elements of SO(3) and the 2-sphere." Where does this logic fail? :/ EDIT: This logic fails on the problem I brought up earlier, I guess. Obviously the dimensions aren't correct: starting from (1,0,0) we can only go along \phi or \theta, but SO(3) is generated by rotations around three different axes.
Quote by Kontilera "So, I want a group of rotations on R^3. Let's start with the vector (1,0,0); this can now be rotated to every vector on the 2-sphere by using all the elements of SO(3). However, we cannot rotate it to a vector not lying on the 2-sphere. So there should be a 1-1 correspondence between the elements of SO(3) and the 2-sphere." Where does this logic fail? :/
Because there isn't a unique SO(3) rotation that yields a given point on the 2-sphere. If I take your (1,0,0) vector and rotate it so it's directed to a given point on ##S^2##, I can now take that vector as an SO(3) rotation axis. Clearly, a rotation by any angle around that vector doesn't move the vector to a new point on the 2-sphere. Thus, these are all distinct rotations in an SO(2) subgroup of SO(3) that all correspond to the same point on the 2-sphere.
The moral is that when you relate the topology of some manifold to a Lie group that acts transitively on it, you have to factor out the isometry group. So, in this case, ##S^2 = SO(3)/SO(2)##. I don't know about real projective spaces, but I suppose that must be equivalent to what quasar987 said.
Right, I'm with you! As soon as I can find isometries not including the identity element, I should be careful about making fast assumptions about topological equivalences. :) I will continue my hike down the road of GR. Thanks for the help - both of you!
Oops, sorry—meant to say "isotropy group", not "isometry group"
http://math.stackexchange.com/questions/203873/how-many-triangles/203941 | # How many triangles
I saw this riddle today; it asks how many triangles are in this picture (an equilateral triangle subdivided by lines parallel to its sides into sixteen small triangles, four along each edge).
I don't know how to solve this (without counting directly), though I guess it has something to do with some recurrence.
How can I count the number of all triangles in the picture?
With the orientation of the triangles, every triangle has either a highest vertex or a lowest vertex. Take each vertex in turn and count the number of triangles for which it is the highest vertex; then do the same for the triangles for which it is the lowest vertex. You will see a pattern emerge ... – Mark Bennet Sep 28 '12 at 8:58
## 6 Answers
Say that instead of four triangles along each edge we have $n$. First count the triangles that point up. This is easy to do if you count them by top vertex. Each vertex in the picture is the top of one triangle for every horizontal grid line below it. Thus, the topmost vertex, which has $n$ horizontal gridlines below it, is the top vertex of $n$ triangles; each of the two vertices in the next row down is the top vertex of $n-1$ triangles; and so on. This gives us a total of
$$\begin{align*} \sum_{k=1}^nk(n+1-k)&=\frac12n(n+1)^2-\sum_{k=1}^nk^2\\ &=\frac12n(n+1)^2-\frac16n(n+1)(2n+1)\\ &=\frac16n(n+1)\Big(3(n+1)-(2n+1)\Big)\\ &=\frac16n(n+1)(n+2)\\ &=\binom{n+2}3 \end{align*}$$
upward-pointing triangles.
The downward-pointing triangles can be counted by their bottom vertices, but it’s a bit messier. First, each vertex not on the left or right edge of the figure is the bottom vertex of a triangle of height $1$, and there are $$\sum_{k=1}^{n-1}k=\binom{n}2$$ of them. Each vertex that is not on the left or right edge or on the slant grid lines adjacent to those edges is the bottom vertex of a triangle of height $2$, and there are
$$\sum_{k=1}^{n-3}k=\binom{n-2}2$$ of them. In general each vertex that is not on the left or right edge or on one of the $h-1$ slant grid lines nearest each of those edges is the bottom vertex of a triangle of height $h$, and there are
$$\sum_{k=1}^{n+1-2h}k=\binom{n+2-2h}2$$ of them.
Algebra beyond this point corrected.
The total number of downward-pointing triangles is therefore
$$\begin{align*} \sum_{h\ge 1}\binom{n+2-2h}2&=\sum_{k=0}^{\lfloor n/2\rfloor-1}\binom{n-2k}2\\ &=\frac12\sum_{k=0}^{\lfloor n/2\rfloor-1}(n-2k)(n-2k-1)\\ &=\frac12\sum_{k=0}^{\lfloor n/2\rfloor-1}\left(n^2-4kn+4k^2-n+2k\right)\\ &=\left\lfloor\frac{n}2\right\rfloor\binom{n}2+2\sum_{k=0}^{\lfloor n/2\rfloor-1}k^2-(2n-1)\sum_{k=0}^{\lfloor n/2\rfloor-1}k\\ &=\left\lfloor\frac{n}2\right\rfloor\binom{n}2+\frac13\left\lfloor\frac{n}2\right\rfloor\left(\left\lfloor\frac{n}2\right\rfloor-1\right)\left(2\left\lfloor\frac{n}2\right\rfloor-1\right)\\ &\qquad\qquad-\frac12(2n-1)\left\lfloor\frac{n}2\right\rfloor\left(\left\lfloor\frac{n}2\right\rfloor-1\right)\;. \end{align*}$$
Set $\displaystyle m=\left\lfloor\frac{n}2\right\rfloor$, and this becomes
$$\begin{align*} &m\binom{n}2+\frac13m(m-1)(2m-1)-\frac12(2n-1)m(m-1)\\ &\qquad\qquad=m\binom{n}2+m(m-1)\left(\frac{2m-1}3-n+\frac12\right)\;. \end{align*}$$
This simplifies to $$\frac1{24}n(n+2)(2n-1)$$ for even $n$ and to
$$\frac1{24}\left(n^2-1\right)(2n+3)$$ for odd $n$.
The final figure, then, is
$$\binom{n+2}3+\begin{cases} \frac1{24}n(n+2)(2n-1),&\text{if }n\text{ is even}\\\\ \frac1{24}\left(n^2-1\right)(2n+3),&\text{if }n\text{ is odd}\;. \end{cases}$$
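As a sanity check (not part of the original answer), a brute-force count agrees with this for small $n$. Here a triangle is a triple of grid points pairwise joined by grid lines but not all collinear; in the skewed coordinates $(i,j)$ with $i+j\le n$, the three grid directions are $i=\text{const}$, $j=\text{const}$ and $i+j=\text{const}$. A minimal Python sketch:

```python
from itertools import combinations
from math import comb

def brute(n):
    pts = [(i, j) for i in range(n + 1) for j in range(n + 1 - i)]
    # two points lie on a common grid line iff they agree in i, j, or i+j
    line = lambda p, q: p[0] == q[0] or p[1] == q[1] or sum(p) == sum(q)
    flat = lambda a, b, c: (a[0] == b[0] == c[0] or a[1] == b[1] == c[1]
                            or sum(a) == sum(b) == sum(c))
    return sum(1 for a, b, c in combinations(pts, 3)
               if line(a, b) and line(b, c) and line(a, c) and not flat(a, b, c))

def scott(n):
    down = n*(n + 2)*(2*n - 1)//24 if n % 2 == 0 else (n*n - 1)*(2*n + 3)//24
    return comb(n + 2, 3) + down

assert all(brute(n) == scott(n) for n in range(1, 8))  # passes; scott(4) == 27
```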
This of course presumes that I made no silly algebra errors. – Brian M. Scott Sep 28 '12 at 10:04
You could have checked $n=4$: your formula gives $\binom63+\frac{24}{24}=20+1=21$, which is not enough. – Marc van Leeuwen Sep 28 '12 at 11:41
@Marc: Actually, I did check it against $n=4$, but somehow I forgot about the triangles of height $1$. And yes, I see where the problem is; the upper bound of $\lfloor n/2\rfloor$ is fine when I’m summing the binomial coefficients, but not when I use it to count terms later on. – Brian M. Scott Sep 28 '12 at 11:55
@Marc: Incidentally, I also checked it $-$ correctly in that case $-$ against $n=5$. – Brian M. Scott Sep 28 '12 at 12:58
# Tabulating numbers
Let $u(n,k)$ denote the number of upwards-pointing triangles of size $k$ included in a triangle of size $n$, where size is a short term for edge length. Let $d(n,k)$ likewise denote the number of down triangles. You can tabulate a few numbers to get a feeling for these. In the following table, row $n$ and column $k$ will contain two numbers separated by a comma, $u(n,k), d(n,k)$.
$$\begin{array}{c|cccccc|c} n \backslash k & 1 & 2 & 3 & 4 & 5 & 6 & \Sigma \\\hline 1 & 1,0 &&&&&& 1 \\ 2 & 3,1 & 1,0 &&&&& 5 \\ 3 & 6,3 & 3,0 & 1,0 &&&& 13 \\ 4 & 10,6 & 6,1 & 3,0 & 1,0 &&& 27 \\ 5 & 15,10 & 10,3 & 6,0 & 3,0 & 1,0 && 48 \\ 6 & 21,15 & 15,6 & 10,1 & 6,0 & 3,0 & 1,0 & 78 \end{array}$$
# Finding a pattern
Now look for patterns:
• $u(n, 1) = u(n - 1, 1) + n$ as the size change added $n$ upwards-pointing triangles
• $d(n, 1) = u(n - 1, 1)$ as the downward-pointing triangles are based on triangle grid of size one smaller
• $u(n, n) = 1$ as there is always exactly one triangle of maximal size
• $d(2k, k) = 1$ as you need at least twice its edge length to contain a downward triangle.
• $u(n, k) = u(n - 1, k - 1)$ by using the small $(k-1)$-sized triangle at the top as a representative of the larger $k$-sized triangle, excluding the bottom-most (i.e. $n$th) row.
• $d(n, k) = u(n - k, k)$ as the grid continues to expand, adding one row at a time.
Using these rules, you can extend the table above arbitrarily.
The important fact to note is that you get the same sequence $1,3,6,10,15,21,\ldots$ over and over again, in every column. It describes grids of triangles of the same size and orientation, increasing the grid size by one in each step. For this reason, those numbers are also called triangular numbers. Once you know where the first triangle appears in a given column, the number of triangles in subsequent rows is easy.
# Looking up the sequence
Now take that sum column to OEIS, and you'll find this to be sequence A002717 which comes with a nice formula:
$$\left\lfloor\frac{n(n+2)(2n+1)}8\right\rfloor$$
There is also a comment stating that this sequence describes the
Number of triangles in triangular matchstick arrangement of side $n$.
Which sounds just like what you're asking.
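If you want to check the table and this formula by machine, the column patterns above give closed forms $u(n,k)=\binom{n-k+2}{2}$ and, via the last rule, $d(n,k)=u(n-k,k)$; summing over $k$ reproduces both the $\Sigma$ column and A002717. A short Python check, assuming these closed forms:

```python
from math import comb

u = lambda n, k: comb(n - k + 2, 2)                 # up-pointing triangles of size k
d = lambda n, k: u(n - k, k) if n >= 2 * k else 0   # down-pointing, need n >= 2k

total = lambda n: sum(u(n, k) + d(n, k) for k in range(1, n + 1))
a002717 = lambda n: n * (n + 2) * (2 * n + 1) // 8

print([total(n) for n in range(1, 7)])  # [1, 5, 13, 27, 48, 78], the table's sums
assert all(total(n) == a002717(n) for n in range(1, 100))
```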
# References
If you want to know how to obtain that formula without looking it up, or how to check that formula without simply trusting an encyclopedia, then some of the references given at OEIS will likely help you out:
• J. H. Conway and R. K. Guy, The Book of Numbers, p. 83.
• F. Gerrish, How many triangles, Math. Gaz., 54 (1970), 241-246.
• J. Halsall, An interesting series, Math. Gaz., 46 (1962), 55-56.
• M. E. Larsen, The eternal triangle – a history of a counting problem, College Math. J., 20 (1989), 370-392.
• C. L. Hamberg and T. M. Green, An application of triangular numbers, Mathematics Teacher, 60 (1967), 339-342. (Referenced by Larsen)
• B. D. Mastrantone, Comment, Math. Gaz., 55 (1971), 438-440.
• Problem 889, Math. Mag., 47 (1974), 289-292.
• L. Smiley, A Quick Solution of Triangle Counting, Mathematics Magazine, 66, #1, Feb '93, p. 40.
edit: I tried to write this as a generalization of the simpler case of counting the small triangles, but clearly didn't anticipate the problems that would arise. My proposed solution doesn't work, as it counts straight lines as well as triangles. It can be corrected in a number of different ways, but this just complicates a method which is already inefficient.
Label each corner of each triangle and consider them to be nodes on a graph. You can then write the adjacency matrix for that graph. Because the graph has such a regular structure, the matrix can be worked out easily enough, and scales up nicely. IMPORTANT: two nodes are "adjacent" if they are connected by a straight line, not just a line segment.
Adjacency matrices have the cool property that if you take their product, it tells you how many ways there are to get from one point to another. If you take the cube of the matrix, the diagonal entries will tell you how many ways there are to get from a point back to itself in three moves (i.e. form a triangle).
From there, you just need to take the trace of this matrix, and divide the result by six (since each triangle is counted six times; twice for each corner.)
edit: divide by six, not three.
I don't understand how the number of paths is the number of triangles. Can you please explain? – Belgi Sep 28 '12 at 9:17
The ij entry of the cube of the matrix tells you how many ways there are to get from node i to node j in three moves. As such, the entries on the diagonal correspond to closed loops of size 3. The only way to get a closed loop of size three is if you draw a triangle. – Julien Sep 28 '12 at 9:21
Actually as you describe it, you won't get the right result, but too much. This is because you count all cyclic paths of length $3$ this way, which includes cases where the path visits three points on a line; also all actual triangles it counts are counted $6$ times, one for each permutation of its vertices. Concretely you find $432$ for the case displayed, while the correct answer is $16+7+3+1=27$ (grouping triangles by size). – Marc van Leeuwen Sep 28 '12 at 11:36
I get 96 from the trace of the cube of the adjacency matrix, and 96/6 = 16 is (correctly) the number of triangles of side-length 1. But there are other triangles as well, and I don't see how to count those easily using this method. – Mark Bennet Sep 28 '12 at 12:13
Ah... I guess that doesn't work, then. Thanks, Marc, for pointing that out. – Julien Sep 28 '12 at 13:53
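For anyone curious, the numbers quoted in these comments are easy to reproduce (my snippet, not from the thread). With "adjacent = joined by a full grid line" the trace of the cube is $432 = 6\cdot 72$: the $27$ genuine triangles plus $45$ collinear triples; with "adjacent = joined by a unit segment" it is $96$, giving only the $16$ unit triangles:

```python
import numpy as np
from itertools import combinations

n = 4
pts = [(i, j) for i in range(n + 1) for j in range(n + 1 - i)]  # skew coords
idx = {p: t for t, p in enumerate(pts)}

def adjacency(pred):
    A = np.zeros((len(pts), len(pts)), dtype=int)
    for p, q in combinations(pts, 2):
        if pred(p, q):
            A[idx[p], idx[q]] = A[idx[q], idx[p]] = 1
    return A

on_grid_line = lambda p, q: p[0] == q[0] or p[1] == q[1] or sum(p) == sum(q)
UNIT = {(1, 0), (0, 1), (1, -1), (-1, 0), (0, -1), (-1, 1)}
unit_edge = lambda p, q: (q[0] - p[0], q[1] - p[1]) in UNIT

print(np.trace(np.linalg.matrix_power(adjacency(on_grid_line), 3)))    # 432
print(np.trace(np.linalg.matrix_power(adjacency(unit_edge), 3)) // 6)  # 16
```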
Let $T_n$ denote the number of triangles when the side-length of the large triangle is $n\geq0$. Then $T_0=0$, $T_1=1$. To obtain a recursion formula for the $T_n$ we have to distinguish even and odd $n$.
Assume $n=2m$, and let us add a $(2m+1)$'th row of triangles at the bottom. Then each "old" triangle will appear in $T_{2m+1}$; furthermore, each "old" gridpoint is the apex of a new triangle with base at the bottom of the new figure; and finally the new bottom gridpoints $z$ serve as bottom vertices of new downward-pointing triangles whose sizes are between $1$ and the distance of $z$ from the nearer lower corner of the figure. It follows that
$$T_{2m+1}=T_{2m}+\sum_{k=0}^{2m} (k+1)+2\sum_{k=1}^m k\ ,\qquad(*)$$
and similarly we obtain
$$T_{2m+2}=T_{2m+1}+\sum_{k=0}^{2m+1} (k+1)+2\sum_{k=1}^m k + (m+1)\ .$$
Plugging $T_{2m+1}$ from the first formula into the second we get
$$T_{2m+2}=T_{2m}+2\sum_{k=0}^{2m} (k+1) +(2m+2)+4\sum_{k=1}^m k +(m+1)=T_{2m}+6m^2+11m +5\ .$$
This implies $$T_{2m}=(m-1)m(2m-1)+11{(m-1)m\over2}+5m={1\over2}(4m^3+5m^2 +m)\ ,$$ and using $(*)$ we obtain $$T_{2m+1}={1\over2}(4m^3+11m^2+9m+2)\ .$$ (This coincides with Brian M. Scott's results.)
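A quick numerical check of these closed forms (my addition): they reproduce the values $T_0,\dots,T_6$ tabulated in the other answers.

```python
T_even = lambda m: (4*m**3 + 5*m**2 + m) // 2         # T_{2m}
T_odd = lambda m: (4*m**3 + 11*m**2 + 9*m + 2) // 2   # T_{2m+1}
print([T_even(0), T_odd(0), T_even(1), T_odd(1), T_even(2), T_odd(2), T_even(3)])
# [0, 1, 5, 13, 27, 48, 78]
```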
A lot of complicated answers for a simple problem. That's fine, but we're just trying to solve this simple problem here, not the full generalization of it. Just go by size.
Triangles of size 1 (as in 1 small triangle): 16.
Triangles of size 16 (the big triangle): 1.
Triangles of size 9: 3; this is pretty clear.
This leaves only triangles of size 4, which is probably the only tricky part of the problem. I guess every triangle is either going to have a point at the top and be flat on the bottom, or be flat on the top and have a point on the bottom. Once you recognize this, there's nothing tricky about this part either. There are 6 with the point on top, and 1 with the point on the bottom. So the total is
$$16 + 1 + 3 + 6 + 1 = 27$$
I actually wrote without counting in my question, but thanks anyway! – Belgi Oct 5 '12 at 23:59
@Belgi You are correct, my bad! Thanks for being kind about it. – Graphth Oct 6 '12 at 15:31
The total is $28$. It's $16+8+3+1$ triangles of each size grouping.
Welcome to MSE! Maybe you could help OP by pointing out briefly how you arrived at your summation? – gnometorule Feb 28 at 8:11
Aren't there only 27 triangles? I only saw 7 medium triangles, not eight. – Anonymous Mar 1 at 3:20
Anonymous hasn't got enough reputation to comment, cut some slack. – vonbrand Mar 1 at 4:05
This does not provide an answer to the question. To critique or request clarification from an author, leave a comment below their post - you can always comment on your own posts, and once you have sufficient reputation you will be able to comment on any post. – gnometorule Mar 1 at 4:19
http://mathhelpforum.com/algebra/49317-should-simple-where-am-i-going-wrong.html
1. ## Should be simple, where am I going wrong?
[5/(x-1) - 2x/(x+1)] - 1 is less than 0.
When I simplify (get a common denominator to subtract the 1), I get (-3x^2 + 7x + 6)/(x^2 - 1).
I am assuming I am making an error in the simplification here, but I have checked 3 times. The answers in the back of the book (for zeros) are 1, -1 (from the denominator; I get this part) and -(2/3) and 3. It's these two I cannot get from my numerator. Where is my error???
2. $\bigg[\frac{5}{x-1}-\frac{2x}{x+1}\bigg] -1 < 0$
$\frac{5x + 5 -2x^2 + 2x -(x^2 -1)}{x^2 -1} < 0$
$\frac{-3x^2 +7x +6}{(x+1)(x-1)} < 0$
Nope, that is correct, but you have to test and see in which intervals it would be positive and in which it would be negative.
Look at this post; you can use a chart like the one Moo used to see what is going on:
http://www.mathhelpforum.com/math-he...76-domain.html
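If you want to double-check the whole thing with a computer algebra system, here is a sketch assuming SymPy is available (printed forms may differ slightly between versions):

```python
import sympy as sp

x = sp.symbols('x', real=True)
expr = 5/(x - 1) - 2*x/(x + 1) - 1
frac = sp.cancel(sp.together(expr))   # (-3*x**2 + 7*x + 6)/(x**2 - 1)
num, den = sp.fraction(frac)
print(sp.solve(num, x))               # [-2/3, 3], zeros of the numerator
print(sp.solve(den, x))               # [-1, 1], excluded from the domain
print(sp.solve(expr < 0, x))          # (-oo, -1) U (-2/3, 1) U (3, oo)
```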
3. Thanks for the response; thought I was losing my mind. I had checked it 3 times.
http://mathoverflow.net/questions/116229/a-known-tangent-half-angle-formula | ## A “known” tangent half-angle formula?
In another posting I wrote about a trigonometric relation I had derived, but that ended up not being the main point of the posting:
http://mathoverflow.net/questions/116214/strange-pattern-in-rounding-errors
So as long as we're here, let's make it the main point of this posting. I posted something like this a couple of days ago on stackexchange, with no answers yet.
Let's make this two questions:
• Is this "known" in the sense of being in some book, refereed paper, or the like?;
• Is there a straightforward way to prove this?
Here's how I derived this relation: I showed that a certain function satisfies a certain differential equation; then I showed that a certain function emerges as the antiderivative that you get by the usual second-semester-calculus methods; then I said these ought to be the same thing because they both solve a geometry problem that arose in some amateur cartography of a (maybe?) somewhat impractical sort; then I checked it numerically, and it checked. But there ought to be a more straightforward way.
Here's the result: If $$\tan\gamma=\dfrac{\sin\alpha\sin\beta}{\cos\alpha+\cos\beta}$$ and $\alpha, \beta, \gamma\in(0,\pi)$ or $\alpha,\beta,\gamma\in(-\pi,0)$ then $$\tan\dfrac\gamma2=\tan\dfrac\alpha2\cdot\tan\dfrac\beta2.$$
A tangent half-angle formula that everyone knows, or at least that's out there in trigonometry-for-adults books that were occasionally published before about 1930, says $$\frac{\sin\alpha+\sin\beta}{\cos\alpha+\cos\beta} = \tan\frac{\alpha+\beta}{2}.$$ Does it make any sort of sense to say that there is some reason behind the fact that what I derived and this "known" identity are reminiscent of each other? (So I guess this is really three questions.)
I arrived at this identity in a horribly roundabout way, and as expected, several people have shown that it reduces to high-school trigonometry exercises, so at least in the proof, this proposition might be considered "trivial". But I think maybe in some of the consequences it might not be. But I don't feel like being specific about that just yet. – Michael Hardy Dec 14 at 17:02
## 3 Answers
Using the tangent double-angle formula $\tan\gamma=\frac{2\tan\tfrac{\gamma}{2}}{1-\tan^2\tfrac{\gamma}{2}}$ we get
$$\begin{align*}
\tan\gamma & = \frac{2\tan\tfrac{\beta}{2}\tan\tfrac{\alpha}{2}}{1-\tan^2\tfrac{\beta}{2}\tan^2\tfrac{\alpha}{2}} \\[10pt]
& = \frac{2\sin\tfrac{\beta}{2}\sin\tfrac{\alpha}{2}\cos\tfrac{\beta}{2}\cos\tfrac{\alpha}{2}}{\cos^2\tfrac{\beta}{2}\cos^2\tfrac{\alpha}{2}-\sin^2\tfrac{\beta}{2}\sin^2\tfrac{\alpha}{2}} \\[10pt]
& =\frac{\sin\beta\sin\alpha}{2(\cos\tfrac{\beta}{2}\cos\tfrac{\alpha}{2}-\sin\tfrac{\beta}{2}\sin\tfrac{\alpha}{2})(\cos\tfrac{\beta}{2}\cos\tfrac{\alpha}{2}+\sin\tfrac{\beta}{2}\sin\tfrac{\alpha}{2})} \\[10pt]
& =\frac{\sin\beta\sin\alpha}{2\cos\tfrac{\beta+\alpha}{2}\cos\tfrac{\beta-\alpha}{2}} \\[10pt]
& =\frac{\sin\beta\sin\alpha}{\cos\beta+\cos\alpha}
\end{align*}$$
Technically, this is proving $A$ implies $B$, where the question asked for a proof that $B$ implies $A$. But I suppose all the steps are reversible. – Gerry Myerson Dec 13 at 22:52
@GerryMyerson : The one thing that's not quite reversible is this: You can't just let $\gamma=\arctan\dfrac{\sin\alpha\sin\beta}{\cos\alpha+\cos\beta}$; rather, you have to choose the appropriate one of two points on the circle where the tangent has a given value. Although $\tan\gamma$ is the same regardless of which of those you pick, $\tan(\gamma/2)$ is not. That question need not be mentioned if you do the proof in the direction seen in this answer, but for the converse of that, the issue comes up. – Michael Hardy Dec 14 at 16:47
I probably ought to have said "if and only if" in my question, rather than just going one direction. – Michael Hardy Dec 14 at 16:47
@Gerry: You are right. This is a proof that $\tan\gamma=\ldots$ if $\tan(\gamma/2)=\ldots$, rather than the converse. Assuming $\tan(\gamma)=\ldots$, we get merely that $f(\tan(\gamma/2))=f(\tan(\beta/2)\tan(\alpha/2))$, where $f(t)=2t/(1-t^2)$, as you note in your answer. – Yoav Kallus Dec 14 at 18:05
Another way: let $r=\tan\alpha/2$, $s=\tan\beta/2$, $t=\tan\gamma/2$. Then $\sin\alpha=2r/(1+r^2)$, $\cos\alpha=(1-r^2)/(1+r^2)$, $\tan\gamma=2t/(1-t^2)$, and $${\sin\alpha\sin\beta\over\cos\alpha+\cos\beta}$$ reduces to $2rs/(1-(rs)^2)$, so the question reduces to deriving $rs=t$ from $${2t\over1-t^2}={2rs\over1-(rs)^2}$$
Here's one way to derive the identity:
Suppose $\tan(\gamma)=\frac{\sin(\alpha)\sin(\beta)}{\cos(\alpha)+\cos(\beta)}$. Multiplying this equation through by $\cos(\gamma)$ gives an expression for $\sin(\gamma)$ in terms of $\cos(\gamma)$ and functions of $\alpha$, $\beta$. We now take the Pythagorean identity $\cos^2(\gamma)+\sin^2(\gamma)=1$, and replace $\sin(\gamma)$ with the expression derived above, getting $$\left(\frac{1+\cos(\alpha)\cos(\beta)}{\cos(\alpha)+\cos(\beta)}\cos(\gamma)\right)^2=1$$ Some simplification was done, but the only trig identity used was the Pythagorean identity.
This yields $\cos(\gamma)=\pm\frac{\cos(\alpha)+\cos(\beta)}{1+\cos(\alpha)\cos(\beta)}$, $\sin(\gamma)=\pm\frac{\sin(\alpha)\sin(\beta)}{1+\cos(\alpha)\cos(\beta)}$. We assume $\alpha,\beta,\gamma\in(0,\pi)$, so this forces the $\pm$ signs to be $+$ (it seems like this identity fails when $\alpha,\beta,\gamma<0$).
We have a tangent half-angle formula $\tan(\gamma/2)=\frac{\sin(\gamma)}{1+\cos(\gamma)}$. Combining with the formulas for $\sin(\gamma)$, $\cos(\gamma)$ gives $$\tan(\gamma/2)=\frac{\sin(\alpha)\sin(\beta)}{(1+\cos(\alpha))(1+\cos(\beta))}$$ Using the tangent half-angle formula again gives the desired identity.
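A quick numerical confirmation for the case $\alpha,\beta\in(0,\pi)$, including the branch choice discussed in the comments above (my sketch, not part of the answer): taking $\gamma=\operatorname{atan2}(\sin\alpha\sin\beta,\ \cos\alpha+\cos\beta)$ picks out the representative in $(0,\pi)$.

```python
import math
import random

for _ in range(100_000):
    a = random.uniform(1e-3, math.pi - 1e-3)
    b = random.uniform(1e-3, math.pi - 1e-3)
    # atan2 selects the angle in (0, pi), since sin(a)*sin(b) > 0 here
    g = math.atan2(math.sin(a) * math.sin(b), math.cos(a) + math.cos(b))
    assert 0 < g < math.pi
    assert math.isclose(math.tan(g / 2), math.tan(a / 2) * math.tan(b / 2),
                        rel_tol=1e-6)
```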
But the identity does work when $\alpha,\beta,\gamma<0$. – Michael Hardy Dec 14 at 16:56
http://mathhelpforum.com/advanced-statistics/11144-dependant-probability-poisson-distribution-game-question.html
1. ## Dependent probability and the Poisson distribution -- a game question.
First off let me thank anyone who is willing to assist me with this. Here is the scenario...
A gambler places a number of consecutive $3 wagers. During each wager there is a 1 in 500 chance of initiating a three-tiered Event where a player may or may not win up to three consecutive jackpots. However, the chance to win the various jackpots during this Event (once initiated) and the amounts of the wins are dependent on how many previous non-winning wagers have occurred.
As an example…
If the player initiated the Event on his first through 100th wagers, there is a 50% chance of winning $10. If that is won, there is a 25% chance of winning an additional $25. If that is won, there is a further 10% chance of winning an additional $100. (Dependent probability = 50% chance of winning $10, 12.5% chance of winning $35, 1.25% chance of winning $135.)
If the player initiated the Event on his 101st through 250th wagers, there is a 70% chance of winning $20. If that is won, there is a 50% chance of winning an additional $50. If that is won, there is a further 25% chance of winning an additional $200. (Dependent probability = 70% chance of winning $20, 35% chance of winning $70, 8.75% chance of winning $270.)
If the player initiated the Event on his 251st through 550th wagers, there is a 100% chance of winning $30. If that is won, there is an 80% chance of winning an additional $75. If that is won, there is a further 60% chance of winning an additional $300. (Dependent probability = 100% chance of winning $30, 80% chance of winning $105, 48% chance of winning $405.)
If the player initiated the Event after his 550th wager, there is a 100% chance of winning all three jackpots (in this case a $100, a $250 and a $500 jackpot). (Dependent probability = 100% chance of winning $850.)
If the Event occurs (regardless of winning any jackpot(s) during the event) the non-winning wager counter resets.
How would you calculate the total odds for this game? What would the average loss to the player be per wager? Would it be necessary to use a Poisson distribution model to indicate the probability of the Event triggering occurrences over the course of 100,000,000 wagers?
I would certainly appreciate any help with this.
2. The approach to this problem that I would suggest is to start by calculating the proportion of events of each type (label them 1 through 4).
Then analyse the distribution of winnings for each event type (probably using a contingency tree to keep track of the results). (I assume that there are no additional stakes involved for the staged jackpot phase.)
As the type of an event is independent of that of the preceding event, we need only calculate the probability of each event type for a single sequence of wagers ending at an event.
The probability that the event occurs on the $n$-th wager is the probability that it has not occurred on any of the preceding $n-1$ wagers times the probability that it occurs on this one:
$$pr(n)=\frac{1}{500} \times \left( 1-\frac{1}{500} \right)^{n-1}$$
So the probability of a type 1 event:
$$P(type1)=\frac{1}{500}\,\sum_{r=1}^{100}\left( 1-\frac{1}{500} \right)^{r-1},$$
which is a geometric series and so:
$$P(type1)=\frac{1}{500}\times \frac{1-(1-1/500)^{100}}{1-(1-1/500)}\approx 0.1814.$$
Similar arguments show that:
$$P(type2)=\frac{(1-1/500)^{100}}{500}\times \frac{1-(1-1/500)^{150}}{1-(1-1/500)}\approx 0.2123$$
$$P(type3)=\frac{(1-1/500)^{250}}{500}\times \frac{1-(1-1/500)^{300}}{1-(1-1/500)}\approx 0.2737$$
$$P(type4)=\frac{(1-1/500)^{550}}{500}\times \frac{1}{1-(1-1/500)}\approx 0.3325$$
From these and the results of the analysis of the return when an event of each type occurs, we can calculate the mean return in a long run (say $N$) of wagers, as there are about $N/500$ events and we know the amount wagered ($3N).
It would also be wise to support this type of analysis with a simulation, to make sure that no hidden invalid assumptions have crept in.
RonL
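Following up on that suggestion, here is a minimal Monte-Carlo sketch (mine, not RonL's), with the tier table taken from post #1:

```python
import random

# (counter cap, [(chance, prize), ...]) for event types 1-4
TIERS = [(100, [(0.50, 10), (0.25, 25), (0.10, 100)]),
         (250, [(0.70, 20), (0.50, 50), (0.25, 200)]),
         (550, [(1.00, 30), (0.80, 75), (0.60, 300)]),
         (float('inf'), [(1.00, 100), (1.00, 250), (1.00, 500)])]

def simulate(n_wagers, p=1/500, seed=1):
    rng = random.Random(seed)
    winnings, counter = 0.0, 0
    for _ in range(n_wagers):
        counter += 1
        if rng.random() < p:                          # the Event triggers
            tiers = next(t for cap, t in TIERS if counter <= cap)
            for chance, prize in tiers:               # staged jackpots
                if rng.random() < chance:
                    winnings += prize
                else:
                    break
            counter = 0                               # reset regardless of wins
    return winnings / n_wagers

# Theory: sum of P(type) * E(type) over types, with E = 9.375, 49, 234, 850,
# gives about $358.8 per event, i.e. roughly $0.72 return per $3 wager.
print(simulate(5_000_000))
```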
3. ## perfect
Thank you! That makes perfect sense, even with my limited math skills!
4. You will note that here we are using the Geometric distribution for the number of wagers to the next event. This is the discrete analog of the (Negative) Exponential distribution, which would be appropriate for the time to the next event in continuous time (we are using the wager number as a proxy for discrete time here).
This results in the number of events in a fixed number of wagers having a Binomial distribution rather than a Poisson, which would be the case with continuous time.
With a small probability of an event per wager (as we have here), the difference between the discrete-time and continuous-time models is negligible, unless very high precision is required, or the ability to change the frequency of events to something much higher is required.
RonL
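To see how close the two models are at these parameters (my addition, assuming SciPy is available):

```python
from scipy.stats import binom, poisson

N, p = 100_000, 1/500   # wagers and per-wager event probability
lam = N * p             # expected number of events: 200
for k in (170, 200, 230):
    print(k, binom.cdf(k, N, p), poisson.cdf(k, lam))
# The two CDFs agree to within about p = 0.002 everywhere, which is the
# standard total-variation bound for the Poisson approximation here.
```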
http://mathhelpforum.com/discrete-math/183091-axiom-pairing-print.html | # Axiom of pairing
• June 15th 2011, 12:33 PM
VonNemo19
Axiom of pairing
Hello everyone, I'm just beginning set theory and I am having trouble understanding the significance of the axiom of pairing. I understand the axiom, but why do we need it? Can some of you guys/girls just elaborate on the axiom in the least technical way possible? I would appreciate it. I do not have a very specific question. I find that I learn best by simply talking about the topic at hand.
Thanks.
• June 15th 2011, 12:40 PM
Plato
Re: Axiom of pairing
Quote:
Originally Posted by VonNemo19
Hello everyone, I'm just beginning set theory and I am having trouble understanding the significance of the axiom of pairing. I understand the axiom, but why do we need it?
Have you read this?
• June 15th 2011, 01:42 PM
MoeBlee
Re: Axiom of pairing
Just in general, if you have two sets x and y, wouldn't you want to be allowed to form the set whose only two members are x and y?
Then, as you get down the road in set theory, even just a chapter or two away from where you are now, you'll see that certain mathematics formulated in set theory uses pairs of sets.
• June 15th 2011, 01:52 PM
bryangoodrich
Re: Axiom of pairing
Quote:
Originally Posted by VonNemo19
I understand the axiom, but why do we need it?
That is always a good question to ask when dealing with axiomatic theories, for if we did not need the axiom, why bother with it? Let me ask you this, as food for thought: without the axiom of pairs (i.e., with only the other axioms of ZF or ZFC), can you define the set that contains exactly the pair of sets A and B? Of course, even if we can achieve this by using other axioms, the axiom of pairing may simplify things, but then we can use it as a derived definition of a "set-theoretic phenomenon." This is why we don't need an axiom of intersections, yet we have an axiom of unions.
• June 15th 2011, 01:55 PM
MoeBlee
Re: Axiom of pairing
Just as a technical note (and the previous post somewhat alludes to this also): In ZF we don't need the pairing axiom, since it is derivable from the axiom schema of replacement. But of course, we do need (in some suitable sense of 'need') to be able to form pairs, whether derived as a theorem from the replacement schema or, without the replacement schema, as an axiom itself.
• June 15th 2011, 05:16 PM
Also sprach Zarathustra
Re: Axiom of pairing
Quote:
I understand the axiom, but why do we need it?
The axiom:
For all x and y there exists a set A whose two elements are x and y.
With that axiom we can prove (by induction):
$\left \{ \varnothing \right \}, \left \{ \left \{ \varnothing \right \} \right \},\left \{ \left \{ \left \{ \varnothing \right \} \right \} \right \},...$
are distinct sets.
Also, using the axiom you can prove the theorem:
$\forall a\, \exists c\, \forall b\, (b\in c \leftrightarrow b=a)$
• June 16th 2011, 10:09 AM
VonNemo19
Re: Axiom of pairing
Quote:
Originally Posted by MoeBlee
Just in general, if you have two sets x and y, wouldn't you want to be allowed to form the set whose only two members are x and y?
Then, as you get down the road in set theory, even just a chapter or two away from where you are now, you'll see that certain mathematics formulated in set theory uses pairs of sets.
Awesome. I'm trying to see this...and I will. Thanks for your input. :)
• June 16th 2011, 10:12 AM
VonNemo19
Re: Axiom of pairing
Quote:
Originally Posted by Plato
Yes. It reads like my book, my friend. Can you speak on this topic yourself? What does this wiki page mean to you? I would just like to hear your thoughts on the topic.
• June 16th 2011, 10:26 AM
bryangoodrich
Re: Axiom of pairing
Quote:
Originally Posted by MoeBlee
In ZF we don't need the pairing axiom, since it is derivable from the axiom schema of replacement.
Are you sure about this? Doesn't the axiom schema of replacement define the range of a definable bijection to be a set? To make the definable bijection we require what the axiom of pairing provides (e.g., cartesian products). The benefit of the axiom of pairing is to be able to build from (albeit finitely many) known sets to things like the singleton, the pairs, the ordered pairs, and n-tuples. Even if the axiom of replacement (with others) can satisfy the axiom of pairing, it should be noted that it would be like using a hatchet for a scalpel. The axiom schema of replacement is a very powerful axiom (and with the axiom of existence, can be used to satisfy the axiom of extensionality).
For the interested reader, a good read.
• June 16th 2011, 10:47 AM
MoeBlee
Re: Axiom of pairing
Quote:
Originally Posted by bryangoodrich
Yes, the pairing axiom is not needed in ZF. It can be derived as a theorem from the axioms of ZF without the pairing axiom.
Quote:
Originally Posted by bryangoodrich
Doesn't the axiom schema of replacement define the range of a definable bijection to be a set?
There are different formulations of the axiom schema of replacement (some stronger than others), but, most basically, the axiom schema of replacement tells us, somewhat loosely speaking here, that if we have a given set and a "class function" with that set as its domain, then the "class range" is a set. There's no need to involve the notion of bijection.
Quote:
Originally Posted by bryangoodrich
Even if the axiom of replacement (with others) can satisfy the axiom of pairing,
I don't know what you mean by a set of axioms "satisfying" something. A model satisfies a set of axioms; I don't know what would be meant by a set of axioms satisfying something. In any case, ZF without the pairing axiom proves the pairing axiom.
Quote:
Originally Posted by bryangoodrich
it should be noted that it would be like using a hatchet for a scalpel.
I'm not making any claims about what would be heuristically preferred. I'm merely pointing out a certain mathematical fact.
Quote:
Originally Posted by bryangoodrich
The axiom schema of replacement is a very powerful axiom (and with the axiom of existence, can be used to satisfy the axiom of extensionality).
I don't know what you mean by "the axiom of existence" nor what you mean by axioms satisfying anything. In any case, the axiom of extensionality is independent of the rest of the axioms of ZF.
EDIT: Perhaps by 'axiom of existence' you mean the principle that there exists at least one set. Sometimes such a principle is mentioned, however, from a technical point of view, it is superfluous, since by identity theory alone we get Ex x=x, and moreover, from the axiom schema of separation (either as an axiom or as derived from the axiom schema of replacement) we get ExAy ~yex, i.e., that there is at least one empty set.
• June 16th 2011, 10:51 AM
MoeBlee
Re: Axiom of pairing
To be even more specific: One instance of the axiom schema of replacement and the power set axiom together prove the pairing axiom. Moreover, we could even derive pairing if all we had were an appropriate instance of the axiom schema of replacement and an "axiom" that there exist an x and y such that x is not equal to y.
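In outline, the standard construction runs as follows (my summary of it): separation (itself derivable from replacement) gives an empty set $\varnothing$, and two applications of the power set axiom give $\wp(\wp(\varnothing)) = \{\varnothing, \{\varnothing\}\}$, a set with exactly two elements. Given any sets $x$ and $y$, apply replacement to it with the functional formula $\varphi(u,v) \equiv (u=\varnothing \wedge v=x) \vee (u=\{\varnothing\} \wedge v=y)$; the image is $\{x,y\}$, which is what pairing asserts.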
• June 16th 2011, 02:25 PM
bryangoodrich
Re: Axiom of pairing
Quote:
Originally Posted by MoeBlee
Yes, the pairing axiom is not needed in ZF.
The Wikipedia article to which Plato referred does articulate this. They do, however, indicate that the use of the axiom of replacement is restricted to sets of at least cardinality 2. The use of other axioms (existence/empty set, power set, or infinity) can be used to build up to sets of cardinality 2. That seems an important point, I believe.
Quote:
Originally Posted by MoeBlee
There are different formulations of the axiom schema of replacement (some stronger than others), but, most basically, the axiom schema of replacement tells us, somewhat loosely speaking here, that if we have a given set and a "class function" with that set as its domain, then the "class range" is a set. There's no need to involve the notion of bijection.
As you indicate "class functions" you have to realize the needed restriction to having sets (not proper classes) in ZFC. The Wikipedia article points out the need for definable bijections. (Thus, the classes need to be "small enough" so the bijection defines a set.) If we have "paradoxical sets" that are too "large" you will end up with something like the "set of all sets." Hrbacek and Jech, Introduction to Set Theory (3rd ed.), define it such that it "is intuitively obvious that the set F[A] is "no larger than" the set A" (p. 113). They go on to indicate that any problematic definitions of such functions will produce proper classes, basically. Therefore, in ZFC we do end up having class functions that are definable bijections.
Quote:
Originally Posted by MoeBlee
I'm not making any claims about what would be heuristically preferred.
Nor was I claiming that you were; hence my use of "it should be noted ..." to indicate a tangential statement about the topic I was making.
Quote:
Originally Posted by MoeBlee
I don't know what you mean by a set of axioms "satisfying" something.
A model of a sentence is an interpretation in which the sentence comes out true. Thus, $\Gamma$ implies $\mathcal{D}$ if every model of $\Gamma$ is a model of $\mathcal{D}$ (taken from Boolos, et al. Computability and Logic 5th., p. 137). I was speaking loosely that a model of ZF* is a model (i.e., satisfies) the sentence expressing the axiom of pairing, where ZF* is clearly indicating the theory of ZF excluding pairing. I apologize if my usage was confusing. I'll just say implies next time.
Quote:
Originally Posted by MoeBlee
the axiom of extensionality is independent of the rest of the axioms of ZF.
Too right. I meant to say the axiom schema of specification. This is demonstrated on the Replacement Wiki referenced above.
Quote:
Originally Posted by MoeBlee
I don't know what you mean by "the axiom of existence" ... [maybe] you mean the principle that there exists at least one set. Sometimes such a principle is mentioned, however, from a technical point of view, it is superfluous, since by identity theory alone we get Ex x=x, and moreover, from the axiom schema of separation (either as an axiom or as derived from the axiom schema of replacement) we get ExAy ~yex, i.e., that there is at least one empty set.
First, I don't see how a logical expression can satisfy the existence of a set. It begs the question what the existential quantifier is quantified over: what is the universe of discourse? If you assume sets or a set of all sets by which to reference, you've begged the question. This is why an axiomatic treatment of set theory needs an existence axiom. This can be tentatively assumed until one uses the far richer Infinity axiom (e.g., Halmos does this in Naive Set Theory, making an official assumption that "there exists a set" on page 8). It can also be defined explicitly as an axiom of empty set as Hrbacek and Jech do (p. 7; it is an empty set axiom, but they call it Existence).
• June 16th 2011, 03:07 PM
MoeBlee
Re: Axiom of pairing
Quote:
Originally Posted by bryangoodrich
the use of the axiom of replacement is restricted to sets of at least cardinality 2.
There is no mention of such a restriction in the axiom schema. Of course for sets of cardinality less than 2, I suspect there's not much need for the schema. But I don't know what relevance this matter of cardinality has here.
Maybe you mean that proving pairing with the axiom schema of replacement requires that we already have two different sets? Yes, I mentioned that above. From the axiom schema of replacement and the power set axiom, we do get that there exist at least two distinct sets.
Quote:
Originally Posted by bryangoodrich
The use of other axioms (existence/empty set, power set, or infinity) can be used to build up to sets of cardinality 2. That seems an important point, I believe.
When I said "from the axiom schema of replacement" I should have been clear to say that I mean from the axiom schema of replacement along with whatever other needed axioms of ZF not including the pairing axiom. I thought that might be taken for granted in context, but I agree that to be explicit we should mention it. However, the empty set axiom is itself derivable from the axiom schema of replacement (the version of replacement that I mentioned) and the axiom of infinity is not needed here either. All we need is replacement and power set from among the ordinary axioms.
Quote:
Originally Posted by bryangoodrich
As you indicate "class functions" you have to realize the needed restriction to having sets (not proper classes) in ZFC.
Of course, and that is why I said that part was "loosely stated" (or whatever words I used). It is quite customary to use talk of proper classes in ZFC with the proviso that such talk can be reduced back down to talk that does not avail of proper classes. Indeed, some texts that use ZFC use proper classes all over the place, since such mentions of proper classes can be "resolved" to some statement in which proper classes are not mentioned.
Quote:
Originally Posted by bryangoodrich
The Wikipedia article points out the need for definable bijections. (Thus, the classes need to be "small enough" so the bijection defines a set.)
Wikipedia can point out whatever it wants, but you see that in the version of the axiom schema of replacement that I posted there is not a mention or requirement of any bijection. And, offhand, among other formulations of replacement, I can't think of one that mentions anything about bijection.
Quote:
Originally Posted by bryangoodrich
If we have "paradoxical sets" that are too "large" you will end up with something like the "set of all sets."
Such considerations are reasonable, but you can see that the actual formulation of replacement doesn't require mentioning anything about bijection.
Quote:
Originally Posted by bryangoodrich
I'll just say implies next time.
Fair enough.
Quote:
Originally Posted by bryangoodrich
I meant to say the axiom schema of specification.
I lost what point we were on in that regard, but the axiom schema of separation is also derivable from a suitable formulation (such as the one I gave) of the axiom schema of replacement.
Quote:
Originally Posted by bryangoodrich
I don't see how a logical expression can satisfy the existence of a set.
Neither do I; indeed I didn't mention such a thing.
Quote:
Originally Posted by bryangoodrich
what is the universe of discourse?
There are two separate but connected matters: (1) In the syntax of ordinary first order logic with identity (such as the logic for ordinary set theory), we prove Ex x=x, irrespective of any semantics about universes for models. (2) However, that proof syntax does "match" our ordinary semantics, since our ordinary semantics requires that any universe of a model is non-empty.
But with the proof that there exists an empty set, we don't even need identity theory. The axiom schema of separation alone, with just first order logic (not even involving identity theory) proves ExAy ~yex. And again, the syntax of proof "matches" the semantics. Even without identity. Given ANY 1-place predicate symbol (or adjusted suitably for any n-place predicate symbol), we prove, for example, Ex(Px -> Px), and thus matching the semantics that requires that a universe for a model is non-empty.
Quote:
Halmos does this in Naive Set Theory, making an official assumption that "there exists a set".
Note that Halmos is not working in a formal context. However, sometimes authors who work even in a formal context do include such existence axioms. Some of those authors mention that they do that only for purposes of convenience since we really don't need such an axiom. Other authors who adopt such an axiom don't even bother discussing it much. But in any case, whether included as a formal axiom or not, it is superfluous, as is the empty set axiom, when we have the axiom schema of separation either as a theorem schema from the axiom schema of replacement or as its own independent axiom schema.
http://physics.stackexchange.com/questions/7738/why-theres-a-whirl-when-you-drain-the-bathtub?answertab=oldest | # Why there's a whirl when you drain the bathtub?
At first I thought it's because of Coriolis, but then someone told me that at the bathtub scale that's not the predominant force in this phenomenon.
@Marek: partial duplicate. I was expecting a "definitive" answer as to what DOES cause the whirl. We can agree Coriolis is out... – Dan Mar 29 '11 at 9:44
## 7 Answers
The whirl is due to the net angular momentum the water has before it starts draining, which is pretty much random.
If the circulation were due to Coriolis forces, the water would always drain in the same direction, but I did the experiment with my sink just now and observed the water to spin in different directions on different trials.
The Coriolis force is proportional to the velocity of the water and the angular velocity of Earth. Earth's angular velocity is $2\pi/24\ {\rm hours}$, or about $10^{-4}\ s^{-1}$. If water's velocity as it drains is $v$ the Coriolis acceleration is about $10^{-4} v\ s^{-1}$.
The water moves about a meter while draining, which takes a time $1\ m/v$, so the total velocity imparted by Coriolis forces could be at most $10^{-4} v\ s^{-1} * 1\ m/v = 10^{-4} \ m/s$.
So the Coriolis effect is quite a small effect. But this first-order Coriolis effect does not cause the water to rotate.
The direction of Coriolis force depends on your direction of motion. All the water in your tub is moving the same direction, so the Coriolis force pushes it all the same direction. The effect is that if the bathtub starts out perfectly flat and begins draining (and it points north), all the water will get pushed east. The two edges of the tub will have very slightly different depths of water, because the Coriolis force is pushing sideways.
The Coriolis force could create "spinning" on uniformly-moving water, but only as a second-order effect. As you move away from the equator, the Coriolis force changes. This change in the Coriolis force is because the angle between "north" and the angular velocity vector of Earth changes as you move around; as you go further north (in the Northern Hemisphere) the "north" direction gets closer and closer to making a right angle with the angular velocity vector, so the Coriolis force increases in strength. The size of this effect would be proportional to the ratio of the size of your tub to the radius of Earth. That ratio is $10^{-7}$, so this effect is completely negligible.
The Coriolis force could also create some "spinning" if different parts of the water are moving different speeds. If the tub is draining to the north in the northern hemisphere, and water near the drain is moving faster than water far away, then the water near the drain would be pushed east more than water far away is. If you subtracted out the average effect of the Coriolis force, what remained would be an easterly push near the drain and a westerly push far away. This gives a clockwise spin as viewed from above.
We've already estimated the typical velocities as $\omega L$, so the angular momentum per unit mass induced this way would be on the order of $\omega L^2$ (but maybe smaller by a factor of 10). That's only $10^{-4}\ m^2/s$. To get an equivalent effect, in a tub of $100\ L$, you could give just one liter of water on the edge of the pool a velocity of a few cm/s, something you surely do many time over when removing your body from the tub.
This effect is too small to affect your bathtub, but it's still observable under the right conditions. According to Wikipedia, Otto Tumlirz conducted several experiments in the early 20th century that demonstrated the effects of the Coriolis forces on a draining tub of water. The tub was allowed to settle for 24 hours in a controlled environment before the experiment began. This was enough to damp out the residual angular momentum left over from filling the tub up to the point where Coriolis effects were dominant.
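For the record, here are the order-of-magnitude numbers used above (my snippet):

```python
import math

omega = 2 * math.pi / 86400      # Earth's angular velocity: ~7.3e-5 1/s
L = 1.0                          # tub scale in metres
print(omega)                     # ~1e-4 1/s, as quoted above
print(omega * L)                 # Coriolis-imparted speed scale: ~1e-4 m/s
print(omega * L**2)              # specific angular momentum scale: ~1e-4 m^2/s
print(L / 6.4e6)                 # tub size / Earth's radius: ~1.6e-7
```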
Thanks! now I have to find a way to translate all this to my 4 year old daughter (who originally asked this question). – Dan Mar 29 '11 at 14:44
If I keep the water still for a very long time, so that it has very little angular momentum to begin with, would the water swirl? – Bernhard Heijstek Apr 26 '11 at 17:11
@phycker According to the reports of experiments, yes, eventually you would see a swirl. – Mark Eichenlaub Apr 26 '11 at 18:47
Coriolis effect is not limited to that caused by the earth's rotation. It is just another consequence of conservation of angular momentum, regardless of the scale. – Mike Dunlavey May 28 '12 at 3:25
The main effect is angular momentum (rotational inertia) in the water set up by various movements before you start observing, such as getting out of your bath.
This results in the water level being lower near the centre of rotation than further away, setting up centripetal forces which maintain the rotation. When the difference in levels is significant relative to the average water level, you notice the typical whirl effect.
There are other things happening too, including the Coriolis force.
A discussion by 'The Straight Dope' website
http://www.straightdope.com/columns/read/149/do-bathtubs-drain-counterclockwise-in-the-northern-hemisphere
references experimental work carried out by Ascher Shapiro in 1962, which concluded something like: it all depends on the shape of the container and how it's stirred before being left to empty.
Here is Shapiro's paper, but I feel you will need academic access via a university or library to read the full PDF:
http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=368912
""In the northern hemisphere, Coriolis forces are always making you turn to the right - clockwise."" Yes, to the right, but not clockwise. The wind moving to the center of low pressure moves to the right, thus missing the center and that starts the counterclockwise movement of the anticyclone. Fig 13 in the page You liked is about that. – Georg Mar 31 '11 at 21:32
It's because of the Coriolis effect. In the Northern hemisphere, it goes around in one direction, in the Southern hemisphere around the opposite direction, and goes straight down bang on the equator.
Locals demonstrate the effect to tourists in the video clip Water flow at the equator, Coriolis effect.
It's already been explained quite clearly that it is not due to the Coriolis effect. Further, if it were, you still wouldn't be able to see it at the equator because the Coriolis effect is zero there. – Mark Eichenlaub Mar 29 '11 at 15:42
Pff. The video is a show for tourists, not a scientific experiment. The vessels are not equally shaped and are not filled the same way. Just look at the math done in one of the answers. – Lagerbaer Mar 29 '11 at 15:49
@user2146 Sorry, for a moment there I forgot that all YouTube videos are completely credible and that guys running tourist traps in Kenya know more about physics than the entire scientific establishment. – Mark Eichenlaub Mar 29 '11 at 15:53
I wish I had more karma so I could downvote this further – Dan Mar 29 '11 at 16:00
@Larry: It is trivial to fake this. All you have to do is pour the water into the tub from slightly to one side or the other. Then it has angular momentum and coriolis force entirely due to the way you poured it. To Mark, I am very shy of argument by popularity ("entire scientific establishment"). There is an on that. – Mike Dunlavey May 28 '12 at 14:01
The whirl happens in the drain pipe: the flow that best drains the bathtub is a laminar flow that allows for some rotation in the pipe. What you see at the surface is the matching between the flow solution in the pipe and the flow solution at the surface.
The angular momentum of the flow gets modified a great deal as the pipe twists and turns, sometimes even siphoning up and down.
Since you want to explain it to your daughter, take a plastic bottle, cut the bottom open, turn it upside down, hold the top closed and fill it with water. Give her that bottle and have her release the top (which is on the bottom now, sorry for the bad phrasing). The water will whirl in different orientations whenever you repeat this (if it whirls at all), and she can influence it by accelerating the bottle in a circular motion, to understand that an initial disturbance is responsible for the whirl orientation.
great idea! nothing better than hands-on science for a young inquisitive mind – Dan Mar 31 '11 at 14:51
I am not sure it is an honest answer. What is going to happen during the release is that the potential energy of the water will go into two components: the acceleration of the mass center, and the internal energy. The latter will appear as a whirl around the mass center. – arivero Mar 31 '11 at 20:54
@arivero: one could of course use a spoon to introduce a small turbulence instead, I was just focusing on using as little resources as possible... – Tobias Kienzler Apr 1 '11 at 8:27
Tobias, my point is that even if you don't whirl by hand, it will appear. If it doesn't, it should mean that all the energy is going into energy of the mass center, which is unlikely. Of course I agree that you can influence it. – arivero Apr 2 '11 at 19:15
You can think about it like this: it takes one day for the earth to perform a full rotation (about 86,400 seconds); on the other hand, it takes a few seconds for your sink to drain (let's say 10 seconds). So it takes about 8,640 times longer for the earth to do a full rotation than it takes the water to drain down the sink. It is not too hard to see why the earth's rotation has essentially no influence on the process of draining a sink.
However, if the sink were the size of Lake Michigan and you were to drain it, Coriolis would play a role.
http://mathoverflow.net/questions/75925?sort=votes

## Do you know this form of an uncertainty principle?
I hope this question is focused enough - it's not about a real problem I have, but about finding out whether anyone knows of a similar thing.
You probably know the Heisenberg uncertainty principle: for any function $g\in L^2(\mathbb{R})$ for which the respective expressions exist, it holds that
$$\frac{1}{4}\|g\|_2^4 \leq \int_{\mathbb{R}} |x|^2 |g(x)|^2\, dx \int_{\mathbb{R}} |g'(x)|^2\, dx.$$
This inequality is not only important in quantum mechanics, but also in signal processing for the short-time Fourier transform, see here.
One can derive this by formally using partial integration $$\int_{\mathbb{R}} 1\,|g(x)|^2 dx = -\int_{\mathbb{R}} x\tfrac{d}{dx}|g(x)|^2dx \leq 2\int_{\mathbb{R}} |xg(x)|\,|g'(x)|dx$$ and Cauchy-Schwarz.
Now, changing just the order of the functions, you obtain this inequality $$\int_{\mathbb{R}} |g(x)|^2 dx \leq 2\int_{\mathbb{R}} |xg'(x)|\,|g(x)|dx \leq 2\left(\int_{\mathbb{R}} |xg'(x)|^2dx\right)^{1/2}\left(\int_{\mathbb{R}} |g(x)|^2dx\right)^{1/2}$$ which gives $$\|g\|_2\leq 2\|xg'\|_2.$$
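As a sanity check (my addition, using the standard Gaussian moments $\int_{\mathbb{R}} x^{2n}e^{-x^2}\,dx = \sqrt{\pi}\,(2n-1)!!/2^n$), take $g(x) = e^{-x^2/2}$: $$\|g\|_2^2 = \sqrt{\pi}, \qquad \int_{\mathbb{R}} x^2|g(x)|^2\,dx = \int_{\mathbb{R}} |g'(x)|^2\,dx = \frac{\sqrt{\pi}}{2}, \qquad \|xg'\|_2^2 = \int_{\mathbb{R}} x^4 e^{-x^2}\,dx = \frac{3\sqrt{\pi}}{4}.$$ So the Heisenberg inequality is saturated ($\pi/4 = \pi/4$), while $\|g\|_2^2 = \sqrt{\pi} > \tfrac{3}{4}\sqrt{\pi} = \|xg'\|_2^2$ shows that the factor $2$ in the second inequality cannot be dropped.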
Ok, this was just playing around. However, this inequality can also be motivated by an abstract consideration about uncertainty principles associated to group-related integral transforms (see my two blog posts). Interestingly, the Heisenberg uncertainty principle derives from the short-time Fourier transform and the last "uncertainty principle" derives from the wavelet transform.
The last fact bothers me: In contrast to the fact that both inequalities can be derived from two conceptually very different integral transforms (indeed both underlying groups are very different), they have a very similar formal derivation.
I have the following questions: Is anyone familiar with the last inequality? Could it be useful in any context? Is there some reason why these inequalities seem so entangled?
Isn't the last inequality (I assume you meant $\|g\|_2 \leq 2\|xg'\|_2$) just a manifestation of Hardy's inequality? en.wikipedia.org/wiki/Hardy%27s_inequality In which case, yes, people are familiar with it. – Willie Wong Sep 20 2011 at 13:37
In 1D Hardy's inequality is a bit awkward to express and you should work with functions vanishing at zero to get a neat formulation. Of course you can go from one to the other and back, but I would not say it is an instant consequence – Piero D'Ancona Sep 20 2011 at 15:53
## 2 Answers
There exists a plethora of inequalities relating weighted $L^p$ norms of a function and its derivatives. For instance you have the Caffarelli-Kohn-Nirenberg family of inequalities $$\|\,|x|^{-\gamma}u\|_{L^{r}}\le C \,\||x|^{-\alpha}\nabla u\|^{a}_{L^{p}}\,\||x|^{-\beta}u\|^{1-a}_{L^{q}}$$ which hold for a quite large range of parameters; note that here $\alpha,\beta,\gamma$ may assume negative values. You will not have difficulty in googling the vast literature on the subject (let me add that there is a recent paper of mine on arXiv with some improvements on this).
Wow, there are a lot of constants involved. As far as I've seen, it seems that the case $\gamma=0$, $a=1$ and $\alpha=-1$ is not included? – Dirk Sep 20 2011 at 10:24
You can take $n=1,a=1/2,\beta=\gamma=0,\alpha=-1$ to get your inequality, if I'm not mistaken – Piero D'Ancona Sep 20 2011 at 10:34
I got a little bit confused with all these different inequalities around (many of them called Caffarelli-Kohn-Nirenberg) but now I see. That's an interesting relation though. The Heisenberg uncertainty is also included? ($\gamma=\alpha=0$, $\beta=−1$, $a=1/2$) – Dirk Sep 20 2011 at 12:01
Yes indeed, and it's not limited to the $L^2$ framework as you see – Piero D'Ancona Sep 20 2011 at 15:47
I find the neatest "standard" uncertainty principle is the one with commutators, see e.g. http://galileo.phys.virginia.edu/classes/751.mf1i.fall02/GenUncertPrinciple.htm. I think that readily gives both your inequalities.
Of course the first one is an instance of the abstract "Robertson uncertainty principle". However, for the second it's not that straightforward... – Dirk Nov 21 2011 at 6:50
http://mathoverflow.net/questions/50633/formalising-the-principle-of-general-covariance-in-differential-geometry

## Formalising the principle of general covariance in differential geometry
(Edited)
1. Let $M$ be a smooth (pseudo-)Riemannian manifold and let $T(M)$ be the smooth tensor bundle on it. Is there a subspace $S \subset T(M)$, closed under (possibly infinitely many) finitary (possibly partially-defined) algebraic operations and finitely generated, containing exactly the tensor fields on $M$ which are invariant under some fixed subgroup $G \le \mathrm{Diff}(M)$?
2. In particular, the above is an open question because e.g. the $G$-invariant tensor fields have not been classified for general $G$. How about asking instead that $S$ contains only isometry-invariant tensor fields and in particular contains the metric tensor, inverse metric tensor, the Riemann tensor, the Ricci tensor, and the Ricci scalar?
3. Suppose there is such an algebraic structure, and I adjoin an arbitrary tensor field to it. Will there be a non-trivial subgroup of $\mathrm{Diff}(M)$ under which the extended structure is invariant? Conversely, suppose I fix a non-trivial subgroup $H < G$. Is there an algebraic structure of the same signature invariant under $H$ containing $S$?
## Motivation
We can detect, using purely algebraic means in a certain sense, whether an arbitrary complex number $\alpha \in \mathbb{C}$ is an algebraic number: Look for the smallest subfield $\mathbb{Q}(\alpha) \le \mathbb{C}$ containing both $\mathbb{Q}$ and $\alpha$. It is a vector space over $\mathbb{Q}$, and $\alpha$ is algebraic if and only if this vector space is finite-dimensional. Moreover, we can even detect whether $\alpha$ can be expressed in terms of radicals simply by examining the automorphism group of the field extension $\mathbb{Q}(\alpha) / \mathbb{Q}$.
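For instance, for $\alpha = \sqrt{2}$ one gets $$\mathbb{Q}(\sqrt{2}) = \{a + b\sqrt{2} : a, b \in \mathbb{Q}\}, \qquad \dim_{\mathbb{Q}} \mathbb{Q}(\sqrt{2}) = 2,$$ so $\sqrt{2}$ is algebraic, and $\mathrm{Aut}(\mathbb{Q}(\sqrt{2})/\mathbb{Q}) \cong \mathbb{Z}/2$, generated by $\sqrt{2} \mapsto -\sqrt{2}$.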
Here, I'm interested in whether or not something similar can be done for tensor fields on a manifold. The first question is analogous to the inverse Galois problem — we have a symmetry group, and we are looking for an algebraic structure which is invariant under it. Having found such an algebraic structure, we can ask whether or not there is an analogue of the fundamental theorem of Galois theory, which establishes a bijective correspondence between subextensions of a Galois field extension and the subgroups of its automorphism group.
I'm aware of at least one result in this area, broadly interpreted as an algebraic approach to differential geometry — namely Lovelock's theorem, which classifies all the symmetric divergence-free second-order natural (0, 2)-tensors on a manifold. A corollary of this theorem, I'm told, tells us that the Einstein field equations are essentially unique (in 4 dimensions).
Is there something wrong with the standard "invariant" characterization of a tensor, i.e. as a map from $T^1(M) \times \cdots \times T^1(M) \times T(M) \times \cdots \times T(M) \to C^{\infty}(M)$ which is multilinear over $C^{\infty}(M)$? $T(M)$ denotes the space of vector fields and $T^1(M)$ the space of $1$-forms. – Yakov Shlapentokh-Rothman Dec 29 2010 at 8:40
The confusion is that the $\Gamma$ you have defined is not a tensor. As Yakov mentions in his comment, the modern differential-geometric definition of a tensor is a section of a tensor bundle $\bigotimes^r TM \otimes \bigotimes^s T^*M$. In particular it has to be $C^\infty(M)$-linear in all entries. Your $\Gamma$, from its very definition, is not. – José Figueroa-O'Farrill Dec 29 2010 at 11:04
Could you rewrite your question and explicitly define what you mean by a "tensor bundle", a "tensor field", what it means for a "tensor field to be contained in a subspace", and, most importantly, what it means for tensor field to be "diffeomorphism-invariant"? It appears that you are using these terms in ways unfamiliar to many of us (perhaps because you are using definitions from a rather old treatise on general relativity?), so many of us are completely confused. – Deane Yang Dec 30 2010 at 3:50
How can we answer your question if you do not know the meaning of the words you are using? – Deane Yang Dec 30 2010 at 16:03
I don't want to hear your ideas. I want to know the definitions of the words you are using. I presume you are not using your own definitions. – Deane Yang Dec 30 2010 at 16:04
## 3 Answers
Hi, I'm not sure I understand what your question really is - frankly, I don't think that there is anything left to formalize about the principle of covariance in GR - but I hope that I can be of help nevertheless. Let me just state some remarks:
• "I've read that the principle of general covariance in general relativity is best understood as a gauge symmetry with respect to the diffeomorphism group".
The problem with this statement is that "gauge transformation" in a gauge theory means "a transformation of the mathematical model that does not have any measurable/observable effect". In this sense the existence of gauge transformations means that the mathematical model is redundant, there are degrees of freedom that are not observable. In general relativity a diffeomorphism represents a change of the reference frame. This is of course "observable" in the sense that observers living in different reference frames report different observations of the same event. So I'd say that this analogy is at least as misleading as it is helpful.
You can find more about this on the webpage of Ray Streater here: Diff M as a gauge group.
• "the link between this and the notion of manifest covariance is not obvious to me."
"Manifest" simply means that the covariance of an equation is easy to see (for an educated human), it does not have any deeper meaning.
• "Physicists have a "principle of general covariance" which basically states that physical laws (and in particular, physical quantities) can be stated in a form which is somehow coordinate-independent. The paradox is, such "coordinate-independent" quantities and equations are frequently stated in terms of (admittedly, arbitrary) coordinates!"
I'm not sure I understand what the paradox is. Let's say you sit in a train and pour yourself a cup of coffee. You report: "both the cup and the coffeepot don't move, therefore the coffee ends up in the cup". Let's say I observe this standing at a railroad crossing; I'll report "both the cup and the coffeepot move at 30 km/h to the east, therefore the coffee ends up in the cup". The principle of general covariance says that the event "the coffee ends up in the cup" needs to be a scalar, since all observers will agree on the fact. And the velocity of the cup and the coffeepot in the west-east direction needs to be a vector, because all observers will disagree about it according to the relative velocities of their frames of reference.
So, in general relativity, we say that
a) every specific choice of coordinates of (a patch of) spacetime corresponds to an observer, who can observe events and tell us about his observations,
b) given these observations we can predict what every other observer will report, by applying the diffeomorphism that takes one set of coordinates to the other set of coordinates to the mathematical gadgets that represent observable entities/effects.
The principle of general covariance says that every physical quantity has to transform in such a way that we don't get an inconsistency between what we predict another observer will report and what he actually reports. If you report that the coffee ends up in the cup, and I report that the coffee ends up on you because in my reference frame the cup moves with a different velocity than the coffeepot, the theory is in trouble.
More about general relativity, the principle of general covariance and the "hole argument" of Einstein can be found on the page spacetime on the nLab.
Hmmm. I think the point of disagreement is that "manifest covariance" cannot be given deeper meaning. Thank you for the links, however. As an analogy, I could say that a number is "manifestly algebraic" if I can give it in terms of radicals, and then the question is how to detect such numbers amongst all algebraic numbers. The answer, of course, is Galois theory, and here I'm hoping that there might be some sort of Galois theory of geometric/physical invariants. – Zhen Lin Dec 29 2010 at 10:23
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
The principle of general covariance has been explicitly described by J.-M. Souriau in his 1974 paper "Modèle de particule à spin dans le champ électromagnétique et gravitationnel"
http://www.jmsouriau.com/Publications/JMSouriau-ModPartSpin1974.pdf
It is in French; I don't know whether an English translation exists.
You can find a related paper in english, by Shlomo Sternberg, here:
http://www.academy.ac.il/data/books/539/Sternberg-einstein-lecture.pdf
Hope that helps.
Comment: The principle of general covariance gives you the so-called passive field equations, that is, for example, ${\rm div}(T) = 0$, i.e. $\nabla_\mu T^{\mu\nu} = 0$ (or the equations of geodesics, or more complex equations if you input more fields or data).
It works essentially this way. Let us say that a geometrical object is a space with a natural action of the group ${\rm Diff}(M)$, the diffeomorphisms of $M$. For example the space ${\frak M}$ of metrics with signature $(+,-,-,-)$, on which ${\rm Diff}(M)$ acts by pullback, $(\varphi,g) \mapsto \varphi^*(g)$, for $\varphi \in {\rm Diff}(M)$ and $g \in {\frak M}$. Now, the principle of general covariance (its active interpretation, not with charts or frames) says that any physical object (submitted to the field $g$) belongs to the quotient ${\frak Q} = {\frak M}/{\rm Diff}(M)$. Actually that is not completely exact: we must restrict the group to ${\rm Diff}_\bullet(M)$, the group of compactly supported diffeomorphisms.
Let $g \in {\frak M}$ and $\gamma = [g] \in {\frak Q}$. The "tangent space" at $\gamma$ is identified with the tangent space at $g \in {\frak M}$ (that is, the space of symmetric tensor fields on $M$) modulo the tangent space to the orbit of $g$; and the tangent space to the orbit of $g$ is identified with the space of Lie derivatives of $g$ along compactly supported vector fields.
Example: Let us look for a "covector" of $\frak Q$ at the point $\gamma$, and let us assume that it is given by a smooth symmetric contravariant tensor field $T^{\mu\nu}$ on $M$, according to $$\tau(\delta g) = \int_M T^{\mu\nu}\delta g_{\mu\nu}.$$ This is a linear form defined on the compactly supported tensor fields $\delta g$ on $M$. But to be defined on $T_\gamma \frak Q$, $\tau$ must satisfy the (Eulerian) condition $$\tau(\epsilon) = \int_M T^{\mu\nu}\epsilon_{\mu\nu} = 0 \quad \mbox{for all} \quad \epsilon = {\frak L}_\xi(g)$$ with $\xi$ any compactly supported vector field. And then, you can check that this is equivalent to ${\rm div}(T) = 0$.
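To spell out that check (my addition; I use the Levi-Civita connection $\nabla$ of $g$, for which $({\frak L}_\xi g)_{\mu\nu} = \nabla_\mu\xi_\nu + \nabla_\nu\xi_\mu$): by the symmetry of $T$ and integration by parts, with no boundary terms since $\xi$ has compact support, $$\tau({\frak L}_\xi g) = \int_M T^{\mu\nu}(\nabla_\mu\xi_\nu + \nabla_\nu\xi_\mu) = 2\int_M T^{\mu\nu}\,\nabla_\mu\xi_\nu = -2\int_M (\nabla_\mu T^{\mu\nu})\,\xi_\nu,$$ and this vanishes for every compactly supported $\xi$ exactly when $\nabla_\mu T^{\mu\nu} = 0$.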
This may give you a taste of what Souriau's paper above contains. So, the principle of general covariance is just a principle of invariance with respect to the action of the diffeomorphisms with compact support. It is possible to give a very precise meaning to all these heuristic considerations; it has still not been done completely.
Here is an outline of an answer to an earlier version of your question: what tensor fields are invariant under the group of diffeomorphisms? Here, a tensor field is assumed to be a section of a tensor bundle over a fixed manifold. A tensor bundle is defined to be the tensor product of a finite number of copies of the tangent bundle and a finite number of copies of the cotangent bundle. First, since you can map any point to any other point in the manifold with any given differential, it follows that given any two points, you can find local co-ordinates near each point such that the components of the tensor field at the two points are equal. It therefore suffices to study the tensor field at a single point.
We can also now restrict to diffeomorphisms that fix the given point. The tensor field at the point is just an element of $V \otimes \cdots \otimes V \otimes V^*\otimes\cdots\otimes V^*$, where $V$ is the tangent space at that point. Moreover the action of a diffeomorphism corresponds simply to the action of $GL(V)$ on this space. We therefore want to find all tensors that are fixed by the action of $GL(V)$. Once we have identified these tensors, then these will define natural sections of the tensor bundle that are invariant under all diffeomorphisms.
At this point, I need to defer to someone who knows $GL(n)$ representation theory a lot better than me to explain the known classification. Let me just point out that there are indeed examples, the simplest nonzero one being the identity map $\delta \in V\otimes V^*$. One thing that is worth noting though is that such nontrivial $GL(n)$-invariant elements exist only if there are the same number of $V$ factors as $V^*$ factors in the tensor product. And I believe that any such invariant element is simply a linear combination of tensor products of the $\delta$ tensor but where you use different "slots" for each $\delta$. So the symmetric group plays a crucial role here.
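To make the invariance of $\delta$ explicit: under $A \in GL(V)$, a $(1,1)$-tensor transforms by $t^i_{\ j} \mapsto A^i_{\ k}\, t^k_{\ l}\,(A^{-1})^l_{\ j}$, and $$A^i_{\ k}\,\delta^k_{\ l}\,(A^{-1})^l_{\ j} = A^i_{\ k}(A^{-1})^k_{\ j} = \delta^i_{\ j},$$ so $\delta$ is indeed fixed by every $A$.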
MathJax remains an inscrutable mystery to me. – Deane Yang Dec 30 2010 at 20:07
To give a reference, much of the last paragraph is explained in H. Weyl, The classical groups, their invariants and representations. – Willie Wong Dec 30 2010 at 20:34
I would add that I have no idea how to deal with an arbitrary subgroup $G$ of the group of diffeomorphisms. A more reasonable and more useful question is to ask about tensors or subspaces of tensors (at a fixed point in the manifold) that are invariant under a subgroup of $GL(V)$ (for example, $SL(V)$, $O(n)$, $U(n/2)$ for $n$ even, etc.). – Deane Yang Dec 30 2010 at 23:53
http://mathhelpforum.com/algebra/129952-greatest-value.html

# Thread:
1. ## The greatest value
Find the greatest value of E :
$$E = xy + x\sqrt{y^{2}-1} + y\sqrt{1-x^{2}} - \sqrt{(1-x^{2})(1-y^{2})}$$
2. First, identify any constraints on the variables. For example, any expression inside a square root has to be greater than or equal to 0.
$y^2 - 1 \geq 0$
$y^2 \geq 1$
$y \leq -1$ or $y \geq 1$
$1 - x^2 \geq 0$
$-x^2 \geq -1$
$x^2 \leq 1$
$-1 \leq x \leq 1$
$(1 - x^2)(1 - y^2) \geq 0$
$-1 \leq x \leq 1$ and $-1 \leq y \leq 1$, or ( $x \leq -1$ or $x \geq 1$) and ( $y \leq -1$ or $y \geq 1$)
Let's examine $-1 \leq y \leq 1$. We know $y \leq -1$ or $y \geq 1$. Therefore, $-1 < y < 1$ is false so the only solutions are $y = -1$ or $y = 1$.
Now let's examine $x \leq -1$ or $x \geq 1$. We know $-1 \leq x \leq 1$. Therefore, $x < -1$ and $x > 1$ are false so the only solutions are $x = -1$ or $x = 1$.
Now we can replace $-1 \leq y \leq 1$ with $y = -1$ or $y = 1$ and $x \leq -1$ or $x \geq 1$ with $x = -1$ or $x = 1$ to produce:
$-1 \leq x \leq 1$ and ( $y = -1$ or $y = 1$), or ( $x = -1$ or $x = 1$) and ( $y \leq -1$ or $y \geq 1$)
From here, we can replace y with -1 and determine for which value of x the simplified expression has a maximum. Then, we can replace y with 1 and repeat the procedure. Finally, we can compare those results to replacing x with -1 and 1 and finding for which value of y the simplified expression has a maximum.
If you're allowed to use calculus, then I know a much simpler method.
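To illustrate the procedure in the previous post, here is a quick numerical scan of the boundary case y = 1 (a sanity check on one case only, not the full solution; the cases y = -1 and x = ±1 would be handled the same way):

```python
import numpy as np

# On y = 1 the expression collapses to E(x, 1) = x + sqrt(1 - x^2), -1 <= x <= 1.
x = np.linspace(-1.0, 1.0, 200001)
E = x + np.sqrt(1.0 - x**2)
i = E.argmax()
print(x[i], E[i])  # ~0.70711, ~1.41421: maximum sqrt(2), attained at x = 1/sqrt(2)
```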
http://en.wikipedia.org/wiki/Incidence_matrix

# Incidence matrix
In mathematics, an incidence matrix is a matrix that shows the relationship between two classes of objects. If the first class is X and the second is Y, the matrix has one row for each element of X and one column for each element of Y. The entry in row x and column y is 1 if x and y are related (called incident in this context) and 0 if they are not. There are variations; see below.
## Graph theory
Incidence matrices are mostly used in graph theory.
### Undirected and directed graphs
An undirected graph
In graph theory an undirected graph G has two kinds of incidence matrices: unoriented and oriented. The incidence matrix (or unoriented incidence matrix) of G is a p × q matrix $(b_{ij})$, where p and q are the numbers of vertices and edges respectively, such that $b_{ij} = 1$ if the vertex $v_i$ and edge $x_j$ are incident and 0 otherwise.
For example the incidence matrix of the undirected graph shown on the right is a matrix consisting of 4 rows (corresponding to the four vertices, 1-4) and 4 columns (corresponding to the four edges, e1-e4):
$\begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ \end{pmatrix}$
If we look at the incidence matrix, we see that the sum of each column is equal to 2. This is because each edge is incident with exactly two vertices, one at each end.
The incidence matrix of a directed graph D is a p × q matrix $[b_{ij}]$ where p and q are the number of vertices and edges respectively, such that $b_{ij} = -1$ if the edge $x_j$ leaves vertex $v_i$, $1$ if it enters vertex $v_i$ and 0 otherwise (Note that many authors use the opposite sign convention.).
An oriented incidence matrix of an undirected graph G is the incidence matrix, in the sense of directed graphs, of any orientation of G. That is, in the column of edge e, there is one +1 in the row corresponding to one vertex of e and one −1 in the row corresponding to the other vertex of e, and all other rows have 0. All oriented incidence matrices of G differ only by negating some set of columns. In many uses, this is an insignificant difference, so one can speak of the oriented incidence matrix, even though that is technically incorrect.
The oriented or unoriented incidence matrix of a graph G is related to the adjacency matrix of its line graph L(G) by the following theorem:
$A(L(G)) = B(G)^{T}B(G) - 2I_q$
where $A(L(G))$ is the adjacency matrix of the line graph of G, B(G) is the incidence matrix, and $I_q$ is the identity matrix of dimension q.
The Kirchhoff matrix is obtained from the oriented incidence matrix M(G) by the formula
$M(G) M(G)^{T}.$
The integral cycle space of a graph is equal to the null space of its oriented incidence matrix, viewed as a matrix over the integers or real or complex numbers. The binary cycle space is the null space of its oriented or unoriented incidence matrix, viewed as a matrix over the two-element field.
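Both of these matrix identities are easy to verify numerically on the example graph above (a quick NumPy sketch; the orientation chosen for M below is arbitrary, since all oriented incidence matrices differ by negating columns):

```python
import numpy as np

# Unoriented incidence matrix B of the example: rows = vertices 1-4,
# columns = edges e1={1,2}, e2={1,3}, e3={1,4}, e4={3,4}.
B = np.array([[1, 1, 1, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]])

# Adjacency matrix of the line graph L(G): two edges are adjacent
# iff they share an endpoint (only e1 and e4 are disjoint).
A_LG = np.array([[0, 1, 1, 0],
                 [1, 0, 1, 1],
                 [1, 1, 0, 1],
                 [0, 1, 1, 0]])
assert np.array_equal(B.T @ B - 2 * np.eye(4, dtype=int), A_LG)

# An orientation gives the oriented incidence matrix M; then M M^T is
# the Kirchhoff (Laplacian) matrix D - A of G.
M = np.array([[ 1,  1,  1,  0],
              [-1,  0,  0,  0],
              [ 0, -1,  0,  1],
              [ 0,  0, -1, -1]])
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [1, 0, 1, 0]])
D = np.diag([3, 1, 2, 2])
assert np.array_equal(M @ M.T, D - A)
```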
### Signed and bidirected graphs
The incidence matrix of a signed graph is a generalisation of the oriented incidence matrix. It is the incidence matrix of any bidirected graph that orients the given signed graph. The column of a positive edge has a +1 in the row corresponding to one endpoint and a −1 in the row corresponding to the other endpoint, just like an edge in an ordinary (unsigned) graph. The column of a negative edge has either a +1 or a −1 in both rows. The line graph and Kirchhoff matrix properties generalize to signed graphs.
### Multigraphs
The definitions of incidence matrix apply to graphs with loops and multiple edges. The column of an oriented incidence matrix that corresponds to a loop is all zero, unless the graph is signed and the loop is negative; then the column is all zero except for ±2 in the row of its incident vertex.
### Hypergraphs
Because the edges of ordinary graphs can only have two vertices (one at each end), the column of an incidence matrix for graphs can only have two non-zero entries. By contrast, a hypergraph can have multiple vertices assigned to one edge; thus, the general case describes a hypergraph.
## Incidence structures
The incidence matrix of an incidence structure C is a p × q matrix $[b_{ij}]$, where p and q are the number of points and lines respectively, such that $b_{ij} = 1$ if the point $p_i$ and line $L_j$ are incident and 0 otherwise. In this case the incidence matrix is also a biadjacency matrix of the Levi graph of the structure. As there is a hypergraph for every Levi graph, and vice-versa, the incidence matrix of an incidence structure describes a hypergraph.
### Finite geometries
An important example is a finite geometry. For instance, in a finite plane, X is the set of points and Y is the set of lines. In a finite geometry of higher dimension, X could be the set of points and Y could be the set of subspaces of dimension one less than the dimension of the whole space; or X could be the set of all subspaces of one dimension d and Y the set of all subspaces of another dimension e.
### Block designs
Another example is a block design. Here X is a finite set of "points" and Y is a class of subsets of X, called "blocks", subject to rules that depend on the type of design. The incidence matrix is an important tool in the theory of block designs. For instance, it is used to prove the fundamental theorem of symmetric 2-designs, that the number of blocks equals the number of points.
http://mathoverflow.net/questions/50516/does-the-etale-fundamental-group-of-the-projective-line-minus-a-finite-number-of

## Does the etale fundamental group of the projective line minus a finite number of points over a finite field depend on the points?
Clearly the etale fundamental group of $\mathbb{P}^1_{\mathbb{C}} \setminus \{a_1,\ldots,a_r\}$ doesn't depend on the $a_i$'s, because it is the profinite completion of the topological fundamental group. Does the same hold when I replace $\mathbb{C}$ by a finite field? How about an algebraically closed field of positive characteristic?
(note that I'm talking about the full $\pi_1$ and not the prime-to-$p$ part)
Maybe you are aware that the fundamental group of a projective curve of genus $g>1$ does depend on moduli. See, e.g., this paper of Saidi: empslocal.ex.ac.uk/people/staff/ms220/Site/… I don't know the answer to your question but my guess is that it will depend on the $a_i$'s – Felipe Voloch Dec 27 2010 at 23:34
I don't know the answer, but it follows from Abhyankar's conjecture, proved by Raynaud and Harbater, that the finite quotients of the fundamental groups in the algebraically closed case are the same, which suggests that the fundamental groups might be isomorphic. – Angelo Dec 28 2010 at 9:16
## 2 Answers
It is a result of Tamagawa that for two affine curves $C_1, C_2$ over finite fields $k_1,k_2$ any continuous isomorphism $\pi_1(C_1)\rightarrow \pi_1(C_2)$ arises from an isomorphism of schemes $C_1\rightarrow C_2$. Hence, if $\pi_1(\mathbb{P}^1\setminus\{a_1,\ldots, a_r\})$ were independent of the choice of the $a_i$, then the isomorphism class of the schemes $\mathbb{P}^1\setminus\{a_1,\ldots, a_r\}$ would be independent of the choice of $a_1,\ldots,a_r$.
Tamagawa's result is Theorem 0.6 in this paper:
The Grothendieck conjecture for affine curves, A Tamagawa - Compositio Mathematica, 1997 http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=298922
In the case of an algebraically closed field, the answer is also that the fundamental group depends on the choice of the points that are being removed. Again by a theorem by Tamagawa: If $k$ is the algebraic closure of $\mathbb{F}_p$, and $G$ a profinite group not isomorphic to $(\hat{\mathbb{Z}}^{(p')})^2\times \mathbb{Z}_p$, then there are only finitely many $k$-isomorphism classes of smooth curves $C$ with fundamental group $G$ (the restriction on $G$ excludes ordinary elliptic curves).
This can be found in
Finiteness of isomorphism classes of curves in positive characteristic with prescribed fundamental groups, A Tamagawa - Journal of Algebraic Geometry, 2004
There is a lot of interesting information here! Note though that I think you are missing the word "isomorphism" at the end of the first line. – Pete L. Clark Dec 28 2010 at 11:36
Yes, thanks! Fixed. – Lars Dec 28 2010 at 11:45
No --- given two triples of Q-rational points, there is an automorphism of the projective line over Q carrying one to the other.
I don't see how this answers anything... You just claimed that in the case r=3 and the field=Q (notice that I required positive characteristic) the a_i's don't matter (which, btw, would be a positive answer in that case and not a negative one). – Makhalan Duff Dec 28 2010 at 6:09
http://mathoverflow.net/questions/20154?sort=oldest

## candidate for rigorous _mathematical_ definition of "canonical"?
In this question: http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical, people gave interesting "philosophical" takes on what the word "canonical" means. Moreover I perceived an underlying opinion that there was no formal mathematical definition.
Whilst looking for something else entirely, I just ran into Bill Messing's post
http://www.cs.nyu.edu/pipermail/fom/2007-December/012359.html
on the FOM (Foundations of Mathematics) mailing list. I'll just quote the last paragraph:
"It is my impression that there is very little FOM discussion of either Hilbert's epsilon symbol or of Bourbaki formulation of set theory. In particular the chapitre IV Structures of Bourbaki. For reasons, altogether mysterious to me, the second edition (1970) of this book supressed the appendix of the first edition (1958). This appendix gave what is, as far as I know, the only rigorous mathematical discussion of the definition of the word "canonical". Given the fact that Chevalley was, early in his career, a close friend of Herbrand and also very interested in logic, I have guessed that it was Chevalley who was the author of this appendix. But I have never asked any of the current or past members of Bourbaki whom I know whether this is correct."
It's a 4-day weekend here in the UK and I'm very unlikely to get to a library to find out what this suppressed appendix says. Wouldn't surprise me if someone could find this appendix on the web somewhere though! Is there really a mathematical definition of "canonical"??
NOTE: if anyone has more "philosophical" definitions of the word, they can put them in the other thread. I am hoping for something different here.
@Qiaochu: I have in my hand a paper by Nick Katz ("p-adic properties of modular schemes and modular forms") where he says that a certain exact sequence coming from the theory of elliptic curves has a "canonical, but not functorial, splitting" (page Ka-95, a.k.a. p163 of the book). – Kevin Buzzard Apr 2 2010 at 10:34
In the spirit of the old nonsense about a tree which falls in the forest when nobody is around to hear the noise, if a mathematical concept is defined in a place which is known to almost nobody then the definition may as well not exist. Or in the words of a US Supreme Court case struggling with another widely used undefined word, as long as we know it when we see it then probably there's no need for a rigorous definition (much like there's no need for rigorous foundations of category theory as long as it is being used in an essentially linguistic or "plug example into machine" manner). – BCnrd Apr 2 2010 at 15:15
Manjul Bhargava got a big laugh in one of his early talks on his higher composition laws. In the difficult degree five case, someone asked if the number field entity Manjul constructed had a certain desirable property, and he replied "No, but it's unique." en.wikipedia.org/wiki/Manjul_Bhargava – Will Jagy Apr 2 2010 at 15:15
@Dmitri: When writing my comments I had in mind what you say, but we could always go a step further and pass to the discrete subcategory (with only identity morphisms) to make "everything" canonical. :) So I thought the main interesting feature of those little examples I mentioned was that one can make natural useful constructions whose functoriality is very restricted compared with where we begin (surjections of appropriate sort for the unipotent radical and orthogonal complement examples, etc.) – BCnrd Apr 2 2010 at 18:52
Here's a nice little example. I'd say that the center of a group a canonical construction, but it does not prolong to a functor on the category of groups. – James Borger Apr 4 2010 at 11:24
## 2 Answers
Although the Bourbaki formulation of set theory is very seldom used in foundations, the existence of a definable Hilbert $\varepsilon$ operator has been well studied by set theorists but under a different name. The hypothesis that there is a definable well-ordering of the universe of sets is denoted V = OD (or V = HOD); this hypothesis is equivalent to the existence of a definable Hilbert $\varepsilon$ operator.
More precisely, an ordinal definable set is a set $x$ which is the unique solution to a formula $\phi(x,\alpha)$ where $\alpha$ is an ordinal parameter. Using the reflection principle and syntactic tricks, one can show that there is a single formula $\theta(x,\alpha)$ such that for every ordinal $\alpha$ there is a unique $x$ satisfying $\theta(x,\alpha)$ and every ordinal definable set is the unique solution of $\theta(x,\alpha)$ for some ordinal $\alpha$. Therefore, the (proper class) function $T$ defined by $T(\alpha) = x$ iff $\theta(x,\alpha)$ enumerates all ordinal definable sets.
The axiom V = OD is the sentence $\forall x \exists \alpha \theta(x,\alpha)$. If this statement is true, then given any formula $\phi(x,y,z,\ldots)$, one can define a Hilbert $\varepsilon$ operator $\varepsilon x \phi(x,y,z,\ldots)$ to be $T(\alpha)$ where $\alpha$ is the first ordinal $\alpha$ such that $\phi(T(\alpha),y,z,\ldots)$ (when there is one).
The statement V = OD is independent of ZFC. It implies the axiom of choice, but the axiom of choice does not imply V = OD; V = OD is implied by the axiom of constructibility V = L.
When I wrote the above (which is actually a reply to Messing) I was expecting that Bourbaki would define canonical in terms of their $\tau$ operator (Bourbaki's $\varepsilon$ operator). However, I was happily surprised when reading the 'état 9' that Thomas Sauvaget found, they make the correct observation that $\varepsilon$ operators do not generally give canonical objects.
A term is said to be 'canonically associated' to structures of a given species if (1) it makes no mention of objects other than 'constants' associated to such structures and (2) it is invariant under transport of structure. Thus, in the species of two element fields the terms 0 and 1 are canonically associated to the field F, but $\varepsilon x(x \in F)$ is not since there is no reason to believe that it is invariant under transport of structures. They also remark that $\varepsilon x(x \in F)$ is actually invariant under automorphisms, so the weaker requirement of invariance under automorphisms does not suffice for being canonical.
To translate 'canonically associated' in modern terms:
1) This condition amounts to saying that the 'term' is definable without parameters, without any choices involved. (Note that the language is not necessarily first-order.)
2) This amounts to 'functoriality' (in the loose sense) of the term over the core groupoid of the concrete category associated to the given species of structures.
So this seems to capture most of the points brought up in the answers to the earlier question.
Francois: you are telling me precisely the kind of mathematics that I had hoped this question would inspire. I am currently way behind though. I am happy with V=L. Let me try and translate what you say in your second para above. You say "let's well-order everything with <. Now given a non-empty bunch of sets we can take the 'smallest' one (wrt this ordering). Now if F is an arbitrary field of order 2, its 'smallest' element is automorphism-invariant (as Aut(F)=1) but not isomorphism-invariant (as if F and G are fields of order 2, the smallest elt in one might be 0 but in the other might be 1)" – Kevin Buzzard Apr 2 2010 at 15:30
So that's an example of something not canonical. But if I have enough mathematics to isolate the 0 and 1 of a field, clearly these are canonical---because they're isomorphism-invariant? But all this seems very far away from the statement (which I truly believe) that the isomorphisms of local class field theory (sending a uniformiser to a geometric Frobenius) are canonical. Am I now using canonical in a different way? Can one get from terms and functoriality to local class field theory isomorphisms? – Kevin Buzzard Apr 2 2010 at 15:30
Kevin, this is correct. The point can be summarized as follows: V = OD gives a canonical way of making choices, but not a way of making canonical choices. – François G. Dorais♦ Apr 2 2010 at 15:32
This is not my specialty so I can't say for sure, but I think sending a uniformizer to the geometric Frobenius is canonical and so is the inverse convention. The Bourbaki language of species is not first-order, so you can talk about a variety of higher-order objects (e.g. idèles and adèles would certainly be canonical) so the fact that there are small variations shouldn't affect things very much. – François G. Dorais♦ Apr 2 2010 at 15:41
@Francois: class field theory isn't hard to summarise. Let me try and abstractify the number theory away. I have a fixed field Q_p. If F is any finite extension of Q_p then I can associate two topological groups canonically to F: call them Wab(F) [the abelianisation of the Weil group associated to F] and M(F) [the multiplicative group F^*]. Now Wab() and M() can be extended to covariant functors from the cat of finite extensions of Q_p (with the morphisms being inclusions) to the cat of topological groups, and they can also be extended to contravariant functors between these categories! – Kevin Buzzard Apr 2 2010 at 16:00
There are scanned notes in French that were used for the initial text of Théorie des Ensembles on the Bourbaki Archives website.
In particular there are indeed notes by Chevalley named Livre I. Théorie des ensembles Chap. IV (état 7 ?) Structures (53 p.) which seem at first glance to define "canonique" in the broader context of "transport de structures, identifications" (see exemple 1 at the bottom of page 19 of that file).
That's a very interesting website! I'm not convinced that the reference you give is what Messing is referring to though. It seems to me that they are just giving some standard examples of canonical isomorphisms (e.g. "the integers" (however they have defined them) are canonically isomorphic to the subset of the rationals consisting of things which are integers...) – Kevin Buzzard Apr 2 2010 at 10:40
You're right. I had another look and I think the relevant file is the last one (état 9), which does have an appendix, as Messing mentions, in which at pages 37 and 38 we find: "un terme U est canoniquement associé à la structure générique $(s_1,...,s_p)$" to be defined as "U ne contient aucune lettre autre que les constantes de $T_\Sigma$ et est transportable relativement à $\Sigma$". It is also given the alternative name "intrinsèque pour $(s_1,...,s_p)$". – Thomas Sauvaget Apr 2 2010 at 11:06
You could well be right about état 9. At first glance---I can't make head nor tail of it! I wonder what it says! – Kevin Buzzard Apr 2 2010 at 11:27
http://mathhelpforum.com/math-topics/112121-town-hall-tiles.html

# Thread:
1. ## Town Hall Tiles
The diagram above (sorry my diagram isn't too great...) contains black and white tiles in a diamond formation. There are 7 tiles across and 25 tiles in total. Now I need to find how many tiles there would be in total if the number of tiles across was 149. The formula I am using is n+x^2/y
So for the 7 across, the formula would be 7+x^2/y = 25
The only problem is, I can't work out what x and y are.
If you could help me find what x and y are, that would be greatly appreciated.
Holly.
2. Originally Posted by everydaysahollyday
The diagram above contains black and white tiles in a diamond formation. There are 7 tiles across and 25 tiles in total. [...]
The standard formula for this would be $\dfrac{(n^2+1)}{2}$
In your case, working backwards, your x & y values will be:
$x = \dfrac{n-1}{2}$ and $y = 0.5$
for n=149
$x = \dfrac{149-1}{2} = 74$ ; $149 + \dfrac{74^2}{0.5} \, = \, 149 + \dfrac{5476}{0.5} = 11101$
just to check
$\dfrac{(149^2+1)}{2} = 11101$
Hope that helps
3. In case this helps, Aidan's formula "comes from" the formula:
sum of 1st n odd numbers = n^2; like 1 + 3 + 5 = 3^2
In your example, this occurs twice (above and below the middle row),
hence 2(1 + 3 + 5) + 7 = 25
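A quick numeric check of the closed form, building the total directly from the row lengths 1, 3, ..., n, ..., 3, 1 (a small Python sketch):

```python
def total_tiles(n):
    """Total number of tiles in a diamond that is n tiles across (n odd)."""
    above = sum(range(1, n, 2))   # rows above the middle: 1 + 3 + ... + (n - 2)
    return 2 * above + n          # mirror image below, plus the middle row

assert total_tiles(7) == 25 == (7**2 + 1) // 2
assert total_tiles(149) == 11101 == (149**2 + 1) // 2
```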
http://mathoverflow.net/questions/117087/is-the-moduli-space-of-genus-three-smooth-quartics-affine

## Is the moduli space of genus three smooth quartics affine?
Non-hyperelliptic curves of genus three are smooth quartics. Is the moduli space of such curves affine?
I think this follows from a more general result on smooth complete intersections, but I'm looking for a simple proof.
One idea would be to use a "good" compactification, i.e., such that the boundary divisor is ample. This can be done by using a suitable Grassmannian containing the Hilbert scheme.
I'd like to avoid something like this and give a more elementary argument. Is that possible?
Every nonempty divisor in $\mathbb{P}^{14}$ is ample and its complement is affine. Is that what you are looking for? – Jason Starr Dec 23 at 13:53
Beware that analogous results for smooth complete intersections do not hold in general. For instance, the moduli space of non-hyperelliptic genus $4$ curves, that is the moduli space of smooth complete intersections of degrees $(2,3)$ in $\mathbb{P}^4$, is not affine, because it contains complete curves. – Olivier Benoist Dec 23 at 14:24
@Olivier. You're right. I didn't mean to say that. Rather, more generally, the moduli space of smooth hypersurfaces of degree $d$ in projective $N$-space is affine if $d>N+1$. – Masse Dec 23 at 16:27
@Jason. That's what I was looking for indeed. Any nonempty effective divisor on $\mathbf P^{14}$ is ample. – Masse Dec 23 at 16:29
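For what it's worth, here is the dimension count behind Jason's comment (a sketch of the standard argument, as I understand it). Plane quartics form the projective space $$\mathbb{P}\big(H^0(\mathbb{P}^2, \mathcal{O}(4))\big) \cong \mathbb{P}^{14}, \qquad \text{since } \dim H^0(\mathbb{P}^2, \mathcal{O}(4)) = \binom{6}{2} = 15.$$ The singular quartics form a nonempty hypersurface $\Delta \subset \mathbb{P}^{14}$ (the discriminant), which is automatically ample, so $U = \mathbb{P}^{14} \setminus \Delta$ is affine. One then quotients by the reductive group $PGL_3$; since smooth quartics are GIT-stable, the quotient $U/PGL_3$ is again affine, of dimension $14 - 8 = 6 = 3g - 3$ for $g = 3$.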