http://mathhelpforum.com/differential-geometry/123253-what-limit-points-set.html
# Thread: What are the limit points of this set?

1. ## What are the limit points of this set?

What are the limit points of the set containing $x^n$, where $n$ ranges over the positive natural numbers, for each $x$ that is a member of the interval $[-1,1]$?

2. Originally Posted by kevinlightman
What are the limit points of the set containing $x^n$, where $n$ ranges over the positive natural numbers, for each $x$ that is a member of the interval $[-1,1]$?

$x^n = 1 \Longleftrightarrow x = 1$, or $x = -1$ and $n$ even; $x^n = -1$ if $x = -1$ and $n$ odd; and $x^n \rightarrow 0$ if $|x| < 1$.

Now prove the above and you get your answer.

Tonio

3. Originally Posted by tonio
$x^n = 1 \Longleftrightarrow x = 1$, or $x = -1$ and $n$ even; $x^n = -1$ if $x = -1$ and $n$ odd; and $x^n \rightarrow 0$ if $|x| < 1$. Now prove the above and you get your answer.

I have already done this; it is more the concept of limit points I don't understand. I believe $0$ to be one for every value not equal to $-1$ or $1$, but I can't see what the others would be.

4. Originally Posted by kevinlightman
I have already done this; it is more the concept of limit points I don't understand. I believe $0$ to be one for every value not equal to $-1$ or $1$, but I can't see what the others would be.

Tell me, what is a limit point? Why do you think that $0$ is a limit point of this set?
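For a quick numerical feel for Tonio's three cases, here is a tiny sketch (my own addition, not from the thread):

```python
# Powers x^n for sample x in [-1, 1]: for |x| < 1 the values crowd toward 0
# (so 0 is a limit point of the set {x^n : n >= 1}), while at x = 1 and
# x = -1 the set {x^n} is finite and hence has no limit points at all.
for x in (0.5, -0.9, 1.0, -1.0):
    print(x, [round(x ** n, 6) for n in (1, 2, 5, 20, 50)])
```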
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.962911069393158, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/121831/why-is-the-determinant-m-n-mathbb-r-to-mathbb-r-continuous/121834
# Why is the determinant $M_n(\mathbb R) \to \mathbb R$ continuous?

Why is the determinant, as a function from $M_n(\mathbb{R})$ to $\mathbb{R}$, continuous? Please, can anyone explain precisely and rigorously? So far I know the explanation which comes from the facts: polynomials are continuous, and sums and products of continuous functions are continuous. Also I have some confusion regarding the metric on $M_n(\mathbb{R})$.

- 3 Isn't it clear that polynomials are continuous functions? $M_n(\mathbb R)$ is the same as $\mathbb R^{n^2}$ under a different disguise. – azarel Mar 18 '12 at 20:50
- Non-rigorous, but perhaps helpful.... Determinants can be shown to be equivalent to the hyper-volume of a hyper-parallelepiped with all the edges extending from one vertex, defined by the columns (or rows) of a matrix. Given that this volume changes continuously with a change in any component vector, the determinant may be seen to be continuous. – Tpofofn May 17 '12 at 2:08

## 5 Answers

$M_n(\mathbb R)$ is just $\mathbb R^{n^2}$ with the Euclidean metric. $\det$ is continuous because it is a polynomial in the coordinates: $$\text{det}(x_{i,j})= \sum_\sigma \text{sgn}(\sigma) \prod_{i=1}^{n} x_{\sigma(i),i}$$

- It should be noted that $M_n(\mathbb{R})$ is often also given the metric induced by the operator norm. Of course the point is, it doesn't matter, because all norms on a finite-dimensional space induce the same topology. – user12014 May 17 '12 at 1:04

The coefficient maps $A\longmapsto a_{i,j}$ are continuous because they are linear on the finite-dimensional vector space $M_n(\mathbb{R})$. Here you want to refer to the topology of the latter as a normed space, which does not depend on the norm since they are all equivalent in finite dimension. Then the determinant is a polynomial in the coefficients, so it is continuous by composition of continuous maps.

Recall that the determinant can be computed by a sum of determinants of minors, that is, "sub"-matrices of smaller dimension. Now we can prove by induction that $\det$ is continuous:

• For $n=1$, $A\in M_1(\mathbb R)$ is simply a scalar; we have that $\det A=A$, and surely the identity function is continuous.

• Suppose that for $n$ we have that $\det$ is continuous on $M_n(\mathbb R)$, and let $A\in M_{n+1}(\mathbb R)$. We know that $\det A$ can be calculated as the alternating sum, over the first row, of entries times the $\det$ of the appropriate minor. So $\det A$ is written using sums and scalar multiples of $\det$ on a smaller dimension. From the induction hypothesis these are continuous, and therefore $\det$ is continuous on $(n+1)\times(n+1)$ matrices.

It's continuous because it's computable as a function from $\mathbb{R}^{n^2}$ to $\mathbb{R}$.

- 1 Would whoever downvoted this please speak up, especially if you think what I've said is incorrect. – Quinn Culver May 17 '12 at 17:40
- I didn't downvote you, but I'm curious if you can elaborate on why computable functions (in the relevant sense) are continuous? – Isaac Solomon May 26 '12 at 0:10
- – Quinn Culver May 26 '12 at 18:23
- Thanks for the link, this is very interesting! :) – Isaac Solomon May 26 '12 at 18:51
- @QuinnCulver If someone downvoted, it was probably because they thought the terminology was a little vague. I can see your idea (and this is how I think of it, too) is that the determinant is a composition of finitely many addition and multiplication operations, all of which are jointly continuous, and so the composition is continuous. – rschwieb Jun 8 '12 at 14:12

The function $$\det:\mathcal{M}_n(\mathbb{R})\rightarrow\mathbb{R}$$ is continuous because it is a scalar function and bounded. (Theory of operators) And not all polynomials are continuous…

- Could you say more about what you mean? In our context, I think the claim is simply that a polynomial $f(x_1, \ldots, x_m)$ in $m$ variables with coefficients in $\mathbf R$ induces a continuous function on $\mathbf R^m$. This is definitely true. And when you mention bounded operators it seems like you're implying that $\det$ is linear, which doesn't seem to be true in any obvious sense. Cheers, – Dylan Moreland Mar 18 '12 at 23:37
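To make the polynomial structure concrete, here is a small numerical sketch (my own addition, not part of the thread). It implements the Leibniz sum quoted in the first answer and checks it against `numpy.linalg.det`; the tiny-perturbation check at the end is continuity in action:

```python
from itertools import permutations
import numpy as np

def leibniz_det(A):
    """Determinant via the Leibniz sum over all permutations sigma."""
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):
        # sgn(sigma) = (-1)^(number of inversions)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        sign = -1.0 if inversions % 2 else 1.0
        prod = 1.0
        for i in range(n):
            prod *= A[sigma[i], i]
        total += sign * prod
    return total

A = np.random.rand(4, 4)
print(np.isclose(leibniz_det(A), np.linalg.det(A)))        # True
E = 1e-9 * np.random.rand(4, 4)                            # small perturbation
print(abs(leibniz_det(A + E) - leibniz_det(A)) < 1e-6)     # True
```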
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9659556746482849, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/45578/is-two-cars-colliding-at-50mph-the-same-as-one-car-colliding-into-a-wall-at-100/45579
# Is two cars colliding at 50mph the same as one car colliding into a wall at 100 mph?

I was watching a YouTube video the other day where an economist said that he challenged his physics professor on this question back when he was in school. His professor said each scenario is the same, while he said that they are different, and that he supplied a proof showing otherwise. He didn't say whether or not the cars are the same mass, but I assumed they were. To state it more clearly: in the first instance, each car is traveling at 50mph in the opposite direction and they collide with each other. In the second scenario, a car travels at 100 mph and crashes into a brick wall. Which one is "worse"?

When I first heard it, I thought, "of course they're the same!" But then I took a step back and thought about it again. It seems like in the first scenario the total energy of the system is the KE of the two cars, or $\frac{1}{2}mv^2 + \frac{1}{2}mv^2 = mv^2$. In the second scenario, it's the KE of the car plus wall, which is $\frac{1}{2}m(2v)^2 + 0 = 2mv^2$. So the car crashing into the wall has to absorb (and dissipate via heat) twice as much energy, so crashing into the wall is in fact worse. Is this correct?

To clarify, I'm not concerned with the difference between a wall and a car, and I don't think that's what the question is getting at. Imagine instead that in the second scenario, a car is crashing at 100mph into the same car sitting there at 0mph (with its brakes on, of course). The first scenario is the same: two of the same cars going 50mph in opposite directions collide. Are those two situations identical?

- 3 The "with its brakes on of course" is what makes both situations different; without the brakes they are the exact same, see my answer below. – Jaime Dec 1 '12 at 2:02
- Possible duplicate of Train crash: are these situations alike? and Classical car collision – voix Dec 1 '12 at 9:18
- 1 I thought Mythbusters tested this, and found that two cars at 50mph equals one car at 50mph hitting a wall. In essence the wall is like a plane of symmetry. – ja72 Dec 1 '12 at 14:55
- As Jaime said, if the second car doesn't have its brakes on, the situations are identical: one is viewed from the centre-of-mass frame, and the second is the same event viewed from a train moving in the same direction and at the same speed as one of the cars. – Marco Aita Dec 1 '12 at 19:13
- ..but a wall is not like a car (i.e. it is like a very massive car), so hitting a wall at 100mph is not like hitting an equally massive car at 100mph (see the chosen answer to Classical car collision for a good explanation). – Marco Aita Dec 1 '12 at 19:35

## 6 Answers

I don't think any of the other answers have made the following point clear enough, so I am going to give it a try. Both scenarios are very similar before the collision, but they differ greatly afterwards... From a stationary reference frame, you see the cars driving towards each other at 50mph, but of course if you choose a reference frame moving with the first car, then the second will be headed toward it at 100 mph. How is this different from the wall scenario? Well, from a stationary reference frame, after the crash both cars remain at rest, so the kinetic energy dissipated is $2\times \frac{1}{2}mv^2$. From the reference frame moving with the first car, the kinetic energy before the crash is $\frac{1}{2}m(2v)^2=4\times\frac{1}{2}mv^2$, but after the crash the cars do not remain at rest; they keep moving in the direction of the second car at half the speed.
So of course the kinetic energy after the crash is $2\times\frac{1}{2}mv^2$, and the total kinetic energy lost in the crash is the same as when considering a stationary reference frame. In the car-against-a-wall case, you do have the full dissipation of a kinetic energy of $4\times\frac{1}{2}mv^2$.

- To put a still finer point on this, and to tie this in with my comment to the OP, the KE before the collision is not the whole story. In the frame in which the car is stationary while the wall is moving, the KE is enormous both before and after. It is the difference that tells the story. – Alfred Centauri Dec 1 '12 at 2:05
- +1 for showing that the analysis works regardless of reference frame. When I discussed mine, I just assumed it would be easier to keep the reference frame the same in both situations: a 3rd person observation standing at zero mph relative to a wall. – smaccoun Dec 2 '12 at 19:54

Certainly they are not exactly the same - a wall is not the same thing as a car, and a crash is a very complicated physical event. Even if simple calculations involving momentum and energy or descriptions involving reference frames suggest that aspects of a car-car and car-wall collision are the same, the real collisions will be fairly different. In this case, though, simple considerations do reveal that the car-car crash at 50 mph is almost certainly safer than crashing 100 mph into a wall. Your energy calculation is a fine way to see this. Another is to consider the car-car collision from a frame co-moving with the second car. In this frame, you're going 100 mph and crash into a stationary car. So the question is like asking whether it is worse to crash into a stationary wall or a stationary car when going 100 mph (apart from the fact that the movement relative to the road is a little different). Of course crashing into the car is less dangerous than crashing into the wall, confirming your earlier result.

I have often heard the same problem rephrased so that you consider crashing into a wall at 100 mph or crashing into a car when you're both going 100 mph. It may be that this was the original problem the physics professor mentioned, and it got distorted somewhere in the game of telephone it played since then. In that scenario, some people say they are equally bad because the energy dissipated per car is the same. Personally, I would probably go for the wall because at least some of the car's energy should go into the wall, but here the details become important (e.g. what if I fly through the window and then hit the wall?), and the energy alone is not a strong enough difference to say which is worse. I imagine that either crash is very likely to be fatal at that speed.

Addressing your new question, two cars crashing head-on each at 50 mph is essentially the same as one car going 100 mph and crashing into a stationary car, by the relativity principle. However, relativity is broken by the existence of the road, so to the extent that the cars interact with the road during the collision there may be some differences.

- 1 I think the idea here is to get at the fundamental physics concepts, not whether or not hitting a car or a wall is worse. For that reason, I don't think your second point is at all convincing, and it is the very reason why I originally thought the answer would be the same (and why I think most people would). For the second proposition, and that may very well have been the original question, I think that one is most certainly the cars.
Using the same energy calculations, there is twice as much energy that needs to be dissipated for the cars colliding, so I think the cars are clearly worse there. – smaccoun Dec 1 '12 at 1:37

- I don't know what you're talking about, because phrases like "second such-and-such" are ambiguous; I don't know how you're counting. Please rephrase to say what specific concepts you are trying to refer to. – Mark Eichenlaub Dec 1 '12 at 1:28
- By second proposition, I meant your rephrased version of the problem (your 3rd paragraph, which is a different problem) where the 2 cars colliding are each going 100, and in the second scenario the car is going 100 and crashes into the wall. For an analysis of that, I would say the two cars colliding is worse, because the total energy is twice that of a single car going 100 and crashing into the car. – smaccoun Dec 1 '12 at 1:37
- I think your third paragraph, where you use reference frames, is possibly the misconception that this problem is getting at. I originally thought that too, and it is true that relative to the other car the first car is going 100mph. This is why I had to revert to an energy argument to see a difference. You could just as easily make the problem be only walls crashing at each other, or only cars crashing at each other. Again, I'm not sure about this, but my suspicion is that the reference frame argument isn't sufficient in its own right. But, I posted it on here to see some other perspectives :) – smaccoun Dec 1 '12 at 1:40
- 1 Sorry man, in my 2nd comment I meant your 4th paragraph, LOL. Whoo, I need to learn how to count – smaccoun Dec 1 '12 at 1:46

Actually, assuming that the oncoming car is the same mass as yours, colliding with an oncoming car at 50 MPH is equal to colliding with an ideal immovable wall at 50 MPH. Consider this: I'm going to set up one of two experiments. I'm either going to ram car A into car B, both of them moving 50 MPH in opposite directions, or I'm going to ram car A into a solid wall at 50 MPH. However, I'm going to put up a shroud so that you can only see car A; you will be unable to see either car B or the wall, whichever one my coin-flip tells me to use. Because you can now only see car A and its contents, how would you tell which experiment I'd decided to do?

- I would ask Schrödinger's Cat... – Mik Cox Dec 4 '12 at 0:23
- I agree that the situations are identical, but fail to see how the shroud experiment proves that. It seems like you assumed they're the same at the beginning, and then concluded they're the same at the end based on your beginning assumption. Can you explain more how the shroud illustrates that? – smaccoun Dec 4 '12 at 0:25
- Some people believe that head-on-at-50 is equivalent to brick-wall-at-100, while in fact it's equivalent to brick-wall-at-50. The shroud experiment encourages them to consider the different scenarios and figure out why head-on-at-50 is equivalent to brick-wall-at-50. – Aric TenEyck Dec 4 '12 at 2:19

I think that it makes sense to move away from the specific walls and cars and consider simply an inelastic collision of two masses, $m_1$ and $m_2$. Otherwise we get stuck in the details. When two bodies collide, the destructive effect of the collision depends only on their relative velocity $v_1-v_2$. The kinetic energy which has the destructive effect is equal to $$\frac{1}{2}\frac{m_1m_2}{m_1+m_2}(v_1-v_2)^2$$ The rest of the kinetic energy is associated with the movement of the center of mass of the system.
This energy does not change in the collision, and has no destructive effect. In the given case, if two identical cars moving toward each other, each with the same speed $v$, collide, the energy of destruction is $$\frac{1}{2}\frac{mm}{m+m}(v+v)^2=mv^2$$ Now, consider the case where a car collides with a massive barrier at speed $2v$. In this case $m_1=m$, $v_1=2v$, $m_2=\infty$, $v_2=0$, and the energy of destruction is $$\frac{1}{2}m(2v)^2=2mv^2$$ I.e., the latter case is much more dangerous.

The most straightforward way to see how different the two scenarios are is to: (1) consider two cars crashing into each other from a reference frame in which one of the cars is stationary and the other has a speed of 100mph; (2) consider the one car crashing into the wall with a speed of 100mph. Assuming the wall is substantial enough that its mass and physical strength far exceed those of the stationary car in (1), it's clear that the two scenarios significantly differ. In (2), the car crashes into a stationary and effectively immovable, indestructible object at 100mph, while in (1), the moving car crashes, at 100mph, into a stationary but otherwise identical object that, importantly, can both move and deform.

- Please see my edit. Do you think it would be the same if it was just walls crashing into walls, or just cars crashing into cars? Is my energy argument incorrect? Good points otherwise, but I don't think that's what the question is getting at. – smaccoun Dec 1 '12 at 1:50
- @smaccoun, the total KE of the system is a minimum in the COM reference frame. Consider, for example, the frame in which the car is stationary and the wall is moving at 100mph. The KE in that frame is enormously larger than in the frame in which the wall is stationary. Yet, the damage to the car (and perhaps the wall) does not depend on the reference frame. Your argument cannot depend on just the KE before the crash. – Alfred Centauri Dec 1 '12 at 1:59

Damage should be the same if two cars collide at $50$ mph each as when a car travelling $50\sqrt{2}$ mph crashes into a wall. The energy of destruction is an internal energy, so:

First case. Equation of energy conservation: $\frac{mv^2}{2}+\frac{mv^2}{2}= T$, where $T$ is the internal energy and $m$ the car mass; so the energy of destruction in the first case is $T = mv^2$.

Second case. Equation of energy conservation: $\frac{m(v\sqrt{2})^2}{2}=\frac{(m+M)u^2}{2}+ T$, where $T$ is the internal energy, $m$ the car mass, and $M$ the wall mass. Equation of momentum conservation: $mv\sqrt{2}=(m+M)u$, so the internal energy is $T= \frac{m(v\sqrt{2})^2}{2}\Big(1-\frac{m}{m+M}\Big)$. Since $M \gg m$, the energy of destruction in the second case is $T \approx mv^2$.

- The calculations are correct, but they solve the problem for the total energy of the system. If you are only interested in the damage caused to one car (because in the second case there is no "other car"), the same calculations show that to obtain the same damage (= change in internal energy = deformation + heat) you need to have the same speed $v$ (again considering the wall very massive, $M \gg m$, so that $u \approx 0$). – Marco Aita Dec 1 '12 at 19:08
- @Marco Aita, you are not right. Imagine a car standing before the wall, so damage will be equal for two cars in the second case. – voix Dec 2 '12 at 14:38
- In the first case each car dissipates $\frac{1}{2}mv^2$. In the case of an impact of a car with a wall of a very large (infinite) mass, the final speed is zero and therefore the car dissipates again an energy of $\frac{1}{2}mv^2$. If all this energy goes into deformation of the car, you will get the same damage to the car as in the first case. The problem is that we can't really tell how much of the dissipated energy goes into deforming the car, so my reasoning is partial.. ..but I don't understand your point with the car in front of the wall, can you elaborate? – Marco Aita Dec 3 '12 at 0:10
- @Marco Aita, if in the second case we put another car in front of the wall we will get the same damage to the car as in the first case. – voix Dec 3 '12 at 4:54
- ..I think we are talking about different things.. Anyway, the part with the wall is practically unsolvable unless we specify more about the wall characteristics.. Imagine the difference in hitting a massive solid steel wall and an equally massive marshmallow wall.. :-). It all depends on how much energy the wall absorbs.. – Marco Aita Dec 3 '12 at 13:09
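As a numerical companion to the answers above (my own sketch, not from the thread): for a perfectly inelastic collision, momentum conservation fixes the final speed, and the dissipated kinetic energy comes out frame-independent and equal to $\frac{1}{2}\mu(v_1-v_2)^2$ with reduced mass $\mu = \frac{m_1 m_2}{m_1+m_2}$, matching voix's formula:

```python
def dissipated_ke(m1, v1, m2, v2):
    """KE lost in a perfectly inelastic 1D collision (momentum conserved)."""
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)
    ke_before = 0.5 * m1 * v1 ** 2 + 0.5 * m2 * v2 ** 2
    ke_after = 0.5 * (m1 + m2) * v_final ** 2
    return ke_before - ke_after

m, v = 1000.0, 22.35            # kg; 22.35 m/s is about 50 mph

# Head-on at 50 mph each, in the lab frame and in a frame riding with car 2:
print(dissipated_ke(m, v, m, -v))       # m*v^2, about 5.0e5 J
print(dissipated_ke(m, 2 * v, m, 0.0))  # same number: frame-independent

# One car at 100 mph into an (effectively) immovable wall: twice as much.
print(dissipated_ke(m, 2 * v, 1e12, 0.0))  # about 2*m*v^2
```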
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9614531397819519, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/57092/list
Perhaps even simpler than the examples from electromagnetism in $\mathbb{R}^3$ minus some points is the following: The angle "function" $\varphi\colon S^1 \longrightarrow \mathbb{R}$ is not really globally defined, as turning around one time gives a discontinuity: it jumps by $2\pi$. Nevertheless, the differential $\mathrm{d}\varphi$ is a perfectly global one-form on $S^1$. It is the usual volume form, not being exact but closed for dimensional reasons. So the non-trivial first de Rham cohomology of $S^1$ is responsible for counting angles and the fact that $0 \ne 2\pi$ ;)

This can be upgraded to the more interesting statement that on an orientable compact manifold without boundary you have a non-trivial top-degree de Rham cohomology: again, the reason is that we can integrate a volume form, resulting in a non-zero volume. Thus (by Stokes' theorem) the volume form can not be exact. It is closed without thinking about it, simply for dimensional reasons.
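To spell out the Stokes argument implicit above (my own addition): if $\mathrm{d}\varphi$ were exact, say $\mathrm{d}\varphi = \mathrm{d}f$ for a globally defined smooth function $f$ on $S^1$, then

$$2\pi = \int_{S^1} \mathrm{d}\varphi = \int_{S^1} \mathrm{d}f = \int_{\partial S^1} f = 0,$$

since $\partial S^1 = \emptyset$. The contradiction shows that $\mathrm{d}\varphi$ is closed but not exact, i.e. $[\mathrm{d}\varphi] \neq 0$ in $H^1_{\mathrm{dR}}(S^1)$.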
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9322800040245056, "perplexity_flag": "head"}
http://mathoverflow.net/questions/90738/reconstruct-an-operator-by-its-eigenfunctions-e-gp-circ-x
## Reconstruct an operator by its eigenfunctions $e^{-g(p \circ x)}$

Is there some well-known description of an operator $D$ (pseudodifferential, of order greater than 0) for which the functions $e^{-g(p_1 x_1,...,p_n x_n)}$, where $g \colon \mathbb{R}^n_+ \to \mathbb{R}_+$ is a homogeneous function of order 1, are eigenfunctions for every $(p_1,...,p_n) \in \mathbb{R}^n_+$? I.e. $$D e^{-g(p_1 x_1,...,p_n x_n)} = \lambda_p e^{-g(p_1 x_1,...,p_n x_n)}.$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.887657105922699, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/8258/whats-a-nice-argument-that-shows-the-volume-of-the-unit-n-ball-in-rn-approaches/8267
## What’s a nice argument that shows the volume of the unit n-ball in R^n approaches 0?

Before you close for "homework problem", please note the tags. Last week, I gave my calculus 1 class the assignment to calculate the $n$-volume of the $n$-ball. They had finished up talking about finding volume by integrating the area of the cross-sections. I asked them to calculate a formula for $n=4$ and $5$, and take the limit of the general formula to get 0. Tomorrow I would like to give them a more geometric idea of why the volume goes to zero. Anyone have any ideas? :) Community wiki in case people want to add/modify this a bit.

- 4 Maybe "Can we prove without calculus that the volume of the unit n-sphere approaches 0 as n goes to infinity?" – Reid Barton Dec 8 2009 at 22:09
- 1 I'm pretty sure that we want the volume of the unit n-ball, since blah blah sphere is the boundary. – Harry Gindi Dec 8 2009 at 22:16
- I would like to thank everyone who responded/revised/commented on this thread! You came up with very beautiful arguments, and quickly enough for me to put something together for tomorrow morning. – B. Bischof Dec 9 2009 at 2:12
- 2 What the heck kind of calculus 1 class is this?!? I didn't even think about this problem until vector calculus, i.e. calculus 3! SIGH. I want my money back.......... – Andrew L Jun 9 2010 at 22:22
- 2 @Andrew All the machinery to do the cross-section limit proof is there in calc one automatically. I tend to deviate a little from the standard material on a fairly regular basis for my own entertainment. It is unclear the usefulness of these deviations. – B. Bischof Jun 10 2010 at 14:20

## 14 Answers

The ultimate reason is, of course, that the typical coordinate of a point in the unit ball is of size $\frac{1}{\sqrt{n}}\ll 1$. This can be turned into a simple geometric argument (as suggested by fedja) using the fact that an $n$-element set has $2^n$ subsets: At least $n/2$ of the coordinates of a point in the unit ball are at most $\sqrt{\frac{2}{n}}$ in absolute value, and the rest are at most $1$ in absolute value. Thus, the unit ball can be covered by at most $2^n$ bricks (right-angled parallelepipeds) of volume $$\left(2\sqrt{\frac{2}{n}}\right)^{n/2},$$ each corresponding to a subset for the small coordinates. Hence, the volume of the unit ball is at most $$2^n \cdot \left(2\sqrt{\frac{2}{n}}\right)^{n/2} = \left(\frac{128}{n}\right)^{n/4}\rightarrow0.$$ In fact, the argument shows that the volume of the unit ball decreases faster than any exponential, so the volume of the ball of any fixed radius also goes to $0$.

- (128/n)^{-n/4} actually tends to infinity. I think you should change all your "-n/4" to "n/4" to fix it up. – Jason DeVito Dec 9 2009 at 0:23
- 6 A very nice argument! – Greg Kuperberg Dec 9 2009 at 2:33
- Unfortunately, I do not see the last part of your argument, perhaps I am being dense. Could you explain how you conclude the volume is bounded by (128/n)^(n/4)? – B. Bischof Dec 9 2009 at 4:57
- 3 BTW, the software now makes it look like it was my argument. But it wasn't; it was contributed by fedya. – Greg Kuperberg Jun 9 2010 at 22:43

A variation on some of the previous arguments that gives some intuition without actually doing any calculation.
Consider $B_n$, the ball in $R^n$, and $C_n$, the cube $[-\frac{1}{2}, \frac{1}{2}]^n$. We make the following observations. 1. $C_n$ has volume $1$. 2. A typical point in $C_n$ will have about half its coordinates larger than $\frac{1}{4}$ in absolute value, so will be outside of $B_n$. In other words, almost none of the volume of $C_n$ is contained in $B_n$. 3. A typical point in $B_n$ will have no coordinates larger than $\frac{1}{2}$, since the sum of the squares of the coordinates is $1$ and this sum has to be divided among $n$ coordinates. (This is a weak version of the concentration of measure mentioned by Gil Kalai, and may be intuitively palatable). Looking at these, we see that in going from $C_n$ to $B_n$ we start with a volume of $1$, throw almost all of it away, and add almost nothing back in. - A calculus-free proof that the volume `$V_n$` of the unit $n$-sphere goes to 0 faster than any exponential. Equivalently, the volume `$r^nV_n$` of the sphere of radius $r$ goes to 0 for every $r$. It is inspired by the intuitive answers about concentration of measure. Claim. For any $0 < h < 1$, `$$V_n \le 2h V_{n-1} + (1-h^2)^{n/2} V_n.$$` Proof. Remove a slab from the middle of the $n$-ball with thickness $2h$, and bring together the remaining slices to make a lens shape. The radius of the equator of this lens is $\sqrt{1-h^2}$, and it clearly fits inside of an $n$-ball of that radius. Proof of main result. Rearrange the claim as a volume relation between adjacent dimensions: `$$V_n \le \frac{2h}{1-(1-h^2)^{n/2}} V_{n-1}.$$` For every $h$, the factor on the right is eventually close to $2h$, qed. In particular, if we take $h = 1/3$, then by the time that $n \ge 19$, the volume has turned around and is decreasing. - Because I like optimizing constants: h = 1/3 is not the best possible choice. However, for h = 1/3 we get that 2h/(1-(1-h^2)^(d/2)) < 1 for d > 18.654. The best possible d is around 18.295, which we get for h near 0.375. (No, not h = 3/8.) But no choice of h brings this critical d as low as 18. – Michael Lugo Dec 9 2009 at 13:57 In other words, you showed that the statement is true for n-spheres of any fixed radius, not just radius 1. – Reid Barton Dec 10 2009 at 17:10 Right. Although so did Fedja. – Greg Kuperberg Dec 10 2009 at 17:27 There is a simple argument by comparing to the unit ball of $\ell_1^n$. Let $K$ be the unit ball of $\ell_1^n$, i.e. the set of points with sum of coordinates (in absolute value) bounded by $1$. Then $K$ is the disjoint union of $2^n$ simplices (one per octant), and each simplex has volume $1/n!$. Now the Euclidean unit ball is contained in $\sqrt{n}K$, so its volume is at most $n^{n/2}2^n/n!$. This tends to $0$ and behaves like $(c/\sqrt{n})^n$ for some constant $c$. The value is sharp up to the value of $c$, as shown by the dual argument : the unit ball contains the cube $[-1/\sqrt{n},1/\sqrt{n}]^n$. - 1 More precisely, the constant c is 2e. – Michael Lugo Jan 20 2010 at 16:07 For $r=\frac{1}{2}$, I have a geometric argument. Let $I=[-\frac{1}{2},\frac{1}{2}]$. Now $Vol(D^{n-1} \times I) = Vol(D^{n-1})$. Take an annulus $A$, say with outer radius $\frac{1}{2}$ and inner radius $0.9\frac{1}{2}$. Remove $A \times [0.9\frac{1}{2}, \frac{1}{2}]$ from the top of $D^{n-1} \times I$. This still contains $D^n$ and has volume at least 0.1% less than $Vol(D^{n-1} \times I) = Vol(D^{n-1})$. So $Vol(D^n) < 0.999 Vol(D^{n-1})$. Done. 
- 1 But the volume of the unit ball is increasing from n=2 to n=5, which would seem to contradict your last line – alex Dec 9 2009 at 8:50
- Not if the radius is 1/2. I interpreted the original question as about the case r=1. – Reid Barton Dec 9 2009 at 17:04

Given an $n$-dimensional closed bounded convex symmetric body $E$, situate it in $R^n$ so that the Euclidean ball $B$ is the ellipsoid of maximal volume contained in $E$. In 1978 Szarek, building on work of Kashin, showed (much more than) that if $({{vol(E)}\over{vol(B)}})^{1/n}\le C$, then the Banach space that has $E$ for its unit ball contains a subspace of dimension $n/2$ which is $C^2$ isomorphic to a Hilbert space. However, it is easy to see that $\ell_\infty^n$ contains a subspace well isomorphic to Hilbert spaces only of dimension of order $\log n$. This is the most complicated answer I can think of to the original question, but it does show why someone might care about computing volume ratios.

- What does "$C^2$ isomorphic to a Hilbert space" mean? – L Spice Dec 25 2010 at 17:25
- 1 There is an isomorphism $T$ to a Hilbert space s.t. `$\|T\|\cdot \|T^{-1}\| \le C^2$`. – Bill Johnson Dec 25 2010 at 18:51

Maybe the fact that most points of the sphere are very close to the equator (concentration of measure) gives some conceptual explanation.

- Related to this point, compare the diameter of the unit n-ball and the diagonal of the unit n-cube. – Qiaochu Yuan Dec 8 2009 at 22:32
- It is better to say that most points in a ball lie near a hyperplane coming through its center. – Anton Petrunin Dec 8 2009 at 23:54
- 1 And so the volume of the n-ball will be a small number times the volume of the n−1-ball plus something negligible, and we expect exponential decay as n grows. This seems to be the intuitive content of some other answers, including Greg Kuperberg's and Agol's. – Harald Hanche-Olsen Dec 9 2009 at 0:39

Here's a geometric argument (still with a bit of calculus). The volume of the unit $(n+1)$-ball may be obtained by integrating the volumes of $n$-ball cross-sections from, say, south pole to north pole. We have $Vol(D^{n+1}) = Vol(D^{n}) \int_{-1}^1 \sqrt{1-z^2}^{n} dz$, since the volume of the $n$-ball of radius $r$ is the volume of the unit $n$-ball times $r^n$, and the radius of the $z$-cross-section is $\sqrt{1-z^2}$. Since for any $1 >\delta >0$, $(1-z^2)^{n/2}$ converges to $0$ uniformly on $[-1,-\delta] \cup [\delta, 1]$, it is not hard to see that these integrals converge to zero. So the ratio $Vol(D^{n+1})/Vol(D^n)$ converges to zero, and therefore $Vol(D^n)\to 0$ as $n\to \infty$. As Gil Kalai says, this argument shows that the volume gets concentrated near the equator.

I've come a bit late to this particular party, but here's another argument. This one includes most of the sphere in a suitable cone. Choose a small positive number x, to be optimized later. Then the volume of that part of the sphere with $0\leq x_n\leq x$ is at most $2^n x$ (the volume of the cube being $2^n$). Now consider the plane $x_n=x$. This intersects the sphere in an (n-1)-dimensional subsphere. Let C be the smallest cone that contains everything in the sphere that lies above this plane. A simple argument using similar triangles shows that the height of this cone is at most 1/x. Therefore, its volume is at most $2^{n-1}/(nx)$. Doubling all this to get both halves of the sphere, we get an upper bound of $2^n(2x+1/(nx))$, and taking $x=n^{-1/2}$ we get an upper bound for the ratio of $3n^{-1/2}$.
Of course, this is a weak bound, but I was trying to make the argument as simple as possible. (It's simpler in my head than I've managed to make it written down.) Apologies if this duplicates someone else's argument -- I checked, but could have missed something.

By a strange coincidence I found myself thinking about almost this very question last week on a walk home early one morning (yes, that's correct). I wanted, however, only the weaker result that the ratio of the volume of the unit ball in the $l^2$ norm of $\mathbb{R}^n$ to that of the unit ball in the $l^{\infty}$ norm of $\mathbb{R}^n$ (i.e., $2^n$), goes to zero as $n \to \infty$. This is what I came up with: Start with the volume of the 4-ball in $\mathbb{R}^4$. Notice that the region is entirely contained in that of the polydisk `$\{(x_1,x_2,x_3,x_4) | x_1^2 + x_2^2 \le 1, x_3^2 + x_4^2 \le 1\}$` and therefore the proportion of the volume of the 4-ball to that of the unit ball in the $l^{\infty}$ norm is at most $(\pi/4)^2$. Repeating this argument shows that the corresponding proportion in $2n$ or $2n+1$ variables is at most $(\pi/4)^n$. This goes to 0 as $n \to \infty$.

ADDENDUM: Having thought about it a little more, the above "polydisk" argument can be easily modified to answer the original question provided you exhibit one value of n for which the volume of the n-ball is less than one. The good news is that finding such an n is a straightforward calculus exercise involving repeated integration by parts. The bad news is that, if I've done the exercise correctly, the smallest such n is n = 13.

- can you please enclose your formulas using \$ signs? – psihodelia Dec 9 2009 at 19:15

Edit: As Matthias pointed out, the following argument only works for the ball with radius 1/2.

To measure volume, we need to agree on a unit of volume [1]. The traditional way of doing this is to set the volume of the unit cube to one. Now, think about the $n$-ball inscribed in the unit $n$-cube. As we increase $n$, the ball's diameter stays constant, but what happens to its volume? When $n = 1$, the ball takes up the whole unit cube, so its volume is one. When $n = 2$, the ball no longer takes up the whole unit cube, so its volume is less than one. When $n = 3$, the ball takes up even less of the unit cube, so its volume is even smaller.

There's an easy way to see that when you go from $\mathbb{R}^n$ to $\mathbb{R}^{n + 1}$, the fraction of the unit cube occupied by the inscribed ball goes down. Start with an $n$-ball inscribed in the unit $n$-cube, and extrude both objects into the $(n + 1)$st dimension. Now you have an $(n + 1)$-cylinder inscribed in an $(n + 1)$-cube. The fraction of the $(n + 1)$-cube occupied by the $(n + 1)$-cylinder is clearly the same as the fraction of the $n$-cube occupied by the $n$-ball. It's easy to see, however, that the $(n + 1)$-ball inscribed in the $(n + 1)$-cube fits inside the inscribed $(n + 1)$-cylinder with room to spare. This argument only shows that the volume of the unit-diameter $n$-ball decreases as $n$ grows; it doesn't show that the volume goes to zero. I'm hopeful, however, that a more sophisticated version of the same argument might do the trick!

Edit: A more sophisticated version of the same argument does do the trick, and Matthias posted it while I was writing my post! Hooray!

[1] To be more sophisticated about it: the differential n-form in $\mathbb{R}^n$ is only unique up to multiplication by a constant, so we need to settle on a constant.
A sphere packing argument (and some kissing number construction), because having smaller n-balls is equivalent to being able to pack more of them in the unit n-cube.

edit: This needed to use r=1/2, not unit n-balls. Fixing the associated values. This hurts the usefulness of the argument, but it still gives some geometric insight on how the n-ball fills the n-cube and the shape of the space between them.

First, we take n=4 and show how to put two n-balls $B_n$ having a radius of 1/2 into the unit n-cube $C_n$: we place the first at the center of the n-cube, and use this as origin. Then we place the second at (1/2,1/2,1/2,1/2) and wrap it into the n-cube. These two $B_n$ are now in $C_n$ (even if one can be considered as split in parts) and are disjoint because the distance between them is $\sqrt{4(1/2)^2} = 1$. This proves that the volume $V(B_4) < 1/2$.

Now, when n=4k, notice that we can place k other $B_n$ plus the one at center by using the same type of translation as before. For the first, we only set the first 4 coordinates to 1/2 and the rest to 0. For the second, only the next 4, etc... All these additional $B_n$ are at distance 1 from the centered one, and at distance $\sqrt{2}>1$ from each other, making them all disjoint. Thus, we have $V(B_{4k}) < 1/(k+1)$, which goes to 0 when n=4k increases. Of course, when n=4k+p, the same trick still works, but only with k+1 n-balls, which is not a problem.

Replacing the characteristic function of the unit ball by a suitable normal distribution with spherical symmetry when computing the volume should give approximately the correct answer. Since $$\frac{1}{(2\pi)^{n/2}}\int_{\mathbb R^n}(x_1^2+\dots+x_n^2)e^{-(x_1^2+\dots+x_n^2)/2}dx_1\cdots dx_n$$ is linear in $n$, one has to rescale by a factor of order $\frac{1}{\sqrt{n}}$, leading to a decay of order $(\lambda n)^{-n/2}$ for the volume of the unit sphere.

Let $f(n)$ be the n-volume of the n-sphere. Then the natural thing to ask about is not $f(n)$, but $f(n)/2^n$; this is the ratio of the volume of the n-sphere to the volume of the n-cube in which it is inscribed. It's natural that this should be very small by a concentration of measure argument. The "typical" distance of a point in the n-cube $[-1,1]^n$ from the origin is a constant times $\sqrt{n}$. More rigorously, pick a point uniformly at random from the $n$-cube; the square of its distance from the origin is $X_1^2 + X_2^2 + \ldots + X_n^2$, where $X_i$ is the $i$th coordinate, a uniform[-1,1] random variable. Thus $X_i^2$ has mean 1/3 and variance 4/45 (*), so the squared distance of a random point from the origin is roughly normally distributed with mean $n/3$ and variance $4n/45$. But points in the sphere are just those which have distance at most 1 from the origin, and these are quite rare.

(*) I'm not actually sure of this "4/45". In any case, it's a positive constant.
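For completeness, a quick numerical check of the closed form (my own addition; the formula $V_n = \pi^{n/2}/\Gamma(\frac{n}{2}+1)$ is standard and is what the cross-section recursion above integrates to):

```python
from math import pi, gamma

def unit_ball_volume(n):
    """Volume of the unit n-ball: pi^(n/2) / Gamma(n/2 + 1)."""
    return pi ** (n / 2) / gamma(n / 2 + 1)

for n in (1, 2, 3, 5, 13, 20, 50):
    print(n, unit_ball_volume(n))
# The volume peaks near n = 5 (about 5.26), drops below 1 at n = 13
# (about 0.91), and decays super-exponentially, e.g. about 1.7e-13 at n = 50.
```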
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 165, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9350258708000183, "perplexity_flag": "head"}
http://mathoverflow.net/questions/20224/2-omega-1-separable/20231
## $2^{\omega_1}$ separable?

I was rereading an answer to an old question of mine and it included a reference to the fact that $2^{\omega_1}$ is separable. I'm having a hard time finding a reference for this fact, and the proof is not immediately obvious to me. Can anyone provide me with a cite and/or a proof?

## 2 Answers

Should have searched a bit harder before asking this one. This is an immediate consequence of the Hewitt-Marczewski-Pondiczery theorem: Let $m \geq \aleph_0$. If $\{X_s : s \in S\}$ are topological spaces with $d(X_s) \leq m$ and $|S| \leq 2^m$, then $d(\prod_s X_s) \leq m$.

- 3 That's overkill, I think. $2^{[0;1]}$ separable is much easier to prove than the Hewitt-Marczewski-Pondiczery Theorem. In fact, the countable system of Walsh functions is dense in $\{-1;1\}^{[0,1)}$ ... mathworld.wolfram.com/WalshFunction.html – Gerald Edgar Apr 3 2010 at 15:47
- Interesting. Thanks. – David R. MacIver Apr 3 2010 at 16:03
- 2 Of course one also needs $\aleph_1 \le 2^{\aleph_0}$ for this simple proof to work, which is non-constructive (requires Axiom of Choice). Can one prove without choice that $2^{\omega_1}$ is separable? – Gerald Edgar Apr 4 2010 at 18:41
- @Gerald: Good question! The statement appears to imply `$\aleph_1 \leq 2^{\aleph_0}$`. If `$\{d_n\}_{n<\omega}$` enumerates a dense subset then the sets `$D_\alpha = \{n<\omega: d_n(\alpha) = 1\}$` must be distinct, since `$D_\alpha\setminus D_\beta$` cannot be empty when $\alpha \neq \beta$. – François G. Dorais♦ Apr 4 2010 at 19:37
- Is it not the case that $\aleph_1 \leq 2^{\aleph_0}$ without the axiom of choice? I think the following should work: By constructing with transfinite induction we can find $g_\alpha : [0, \alpha) \to \mathbb{Q}$ an injection (using the standard argument that countable ordinals embed in the rationals), and so $f_\alpha : \alpha \to \omega$ a bijection. We now define `$f : \omega_1 \to \{0, 1\}^\omega$` inductively as a diagonalisation of $f(\beta)$ for $\beta < \alpha$, which we can do without AC because we can explicitly count $\alpha$. – David R. MacIver Apr 22 2010 at 8:28

This is indeed the Hewitt-Marczewski-Pondiczery theorem. My proof, following Engelking, is here. It's in fact not that hard; the fact for a product of copies of 2-point discrete spaces already implies the general theorem pretty quickly.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9206590056419373, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/20697/is-the-magnetic-field-a-feature-of-our-universe-or-is-it-a-consequence-of-the-e
# Is the magnetic field a feature of our universe, or is it a consequence of the electric field?

Is the magnetic field a feature of our universe, or is it a side effect of the electric field? In other words: if we were to simulate in a computer a system of moving charged particles, taking into account only the equations for the electric field, would we see magnetic effects?

- – Qmechanic♦ Feb 8 '12 at 6:40

## 2 Answers

Electric and magnetic fields are fundamentally the same thing. They behave slightly differently, but that's because of the lack of monopoles and consequently "magnetic current". Check out Maxwell's laws (I'll refer to them as M1, M2, M3, M4). M1 and M2 are nearly the same, except for the zero in M2. This is because of the lack of monopoles. Similarly, M3 and M4 are nearly the same (with some extra constants floating around), except for the $J$ term in M4. This term is not there in M3 due to the lack of "magnetic current". If we discovered magnetic monopoles and magnetic current, the forms of the equations would become exactly the same.

OK. To drive in the fact that both fields are the same thing, imagine a pair of point charges ($q_1,q_2$), lying side by side. You give them both an equal and constant velocity $\vec{v}$ perpendicular to the line joining them. Now, $q_1$ is generating a magnetic field, as it is a moving charge. This magnetic field will interact with $q_2$, and push/pull it along the line joining their centers (to check this, imagine $q_1$ to be a wire carrying current in the direction of its motion). Our electrostatic "Coulomb force" will be $\frac{kq_1q_2}{r^2}$, as usual. Looks fine till now.

Now, you hop in your car and start following the charges. You go with the same velocity as them. Now, from your car, you see something different. You see two stationary charges, exerting a force $\frac{kq_1q_2}{r^2}$ onto each other. So what happened to the force caused by the magnetic field? After all, the net force acting on a body should be the same from any inertial, that is non-accelerating, frame (otherwise you would disagree on their accelerations etc). A similar scenario is when the charges are at rest, and you first measure them from rest and then you go into motion.

So here we have a bit of a paradox. It's solved pretty easily when you consider the (intentional) flaw in my argument. The Coulomb force only works in electro*static* conditions. It's a tiny bit less/more when the charges are in apparent motion, which compensates for the magnetic field. Thus, from one frame (the car), you see two charges at rest, interacting with each other electrostatically. Thus you only see an electric field (as you can play about with a third charge in your car, and you will only notice electric effects). From the ground, you calculate two types of attractions, and you can measure two types of fields. One can say that the electric field changed form into the magnetic field. Or you could say the opposite (depending on where you made your measurements first).

So what does this entail? Electric and magnetic fields are actually the same thing, and they change form whenever you go into relative motion. Note that there is no preferred reference frame for relative motion; so one cannot choose the frame with only the electric field and say "this frame has only $\vec{E}$, ergo $\vec{E}$ is fundamental and $\vec{B}$ is just a side effect".

Another way of looking at it is via the way light propagates. A light wave travels with mutually perpendicular oscillating electric and magnetic fields.
Since E/M fields are transmitted/mediated by light (the E/M interaction is due to exchange of photons), this is a good place to visualise it. From this, one can see more clearly how they are equivalent; there is no way to pick one field over the other here; they both have similar characteristics with respect to the light wave.

About your second question involving computer simulations: Well, it depends upon the simulation. If you only use the Coulombic electrostatic force, you should have no problem as long as you neglect magnetic fields and the particles have uniformish velocity. There will still be aberrations because you are not considering relativity (relativity can be partially deduced from inconsistencies in these equations). Really, it depends upon the accuracy of the simulation, as when dealing with charges only (no currents), the force due to magnetic fields is $\approx c^2$ times smaller than that due to the electric field.

If the particles accelerate, then we have the added issue of transmission of fields, as changes to the position of the particle propagate (at the speed of light, not instantaneously) to the field nearby. This is where the different electric field from moving charges comes from. Play with the velocity slider here to see how the field changes. Basically, if a point is at a distance r from a charge $q_1$, the force on a second charge $q_2$ placed at that point will be $\frac{kq_1q_2}{r^2}$ after $\frac{r}{c}$ seconds. If you incorporate the correct "field travels at the speed of light" concept, then your system will behave abnormally, and you will notice the lack of magnetic fields (assuming enough precision). So the answer depends on what you mean by "taking into account only electric fields".

I'd prefer not to call it a "side effect of the electric field", but rather to recognize that the "true" object is the electromagnetic field tensor $F_{\mu\nu}$, which contains both electric and magnetic effects. What you see in any instance depends upon what measurements you do. If you try to simulate "just" the electric effects in a system of moving charged particles, then you end up simulating "just" a system of particles moving under the inverse square law, like gravity but with the possibility of repulsive interactions.
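For what it's worth, here is a minimal sketch of the kind of simulation the question asks about (my own illustration, not from the thread): charges integrated under the instantaneous Coulomb force only. With action-at-a-distance inverse-square forces there is no retardation, so nothing in this model produces magnetic effects; they only appear once finite propagation speed / relativity is built in, as the first answer explains.

```python
K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def coulomb_step(pos, vel, q, m, dt):
    """One Euler step for N point charges under pairwise Coulomb forces (3D).

    pos, vel: lists of [x, y, z]; q, m: lists of charges and masses.
    The force law is purely electrostatic and instantaneous.
    """
    n = len(pos)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = [pos[i][k] - pos[j][k] for k in range(3)]
            d = sum(c * c for c in r) ** 0.5
            f = K * q[i] * q[j] / d ** 3   # inverse-square, directed along r
            for k in range(3):
                forces[i][k] += f * r[k]
    for i in range(n):
        for k in range(3):
            vel[i][k] += forces[i][k] / m[i] * dt
            pos[i][k] += vel[i][k] * dt
```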
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9456105828285217, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/13211/what-is-the-maximum-for-pearsons-chi-square-statistic
What is the maximum for Pearson's chi square statistic?

I actually know that the answer is $N(k-1)$ (where $k$ is the minimum of the number of rows and the number of columns). However, I cannot seem to find a simple proof for why the statistic is bounded by this. Any suggestions (or references)?

1 Answer

For some intuition about this, consider the square case ($k$ rows and columns), with $N=nk$. Then the maximal chi square occurs when all the marginal totals are equal (in this case $n$), and the values in the table are $n$ along the diagonal and $0$ off the diagonal, so that you have perfect association between the row and column variables. Then the chi square statistic is $$\sum (O-E)^2/E = k\cdot(n-n/k)^2/(n/k)+k\cdot(k-1)\cdot(0-n/k)^2/(n/k)$$ where the first part represents the sum over the $k$ diagonal elements and the second part is the sum over the off-diagonal elements. You can show that this sum is $nk(k-1)=N(k-1)$. Similar reasoning extends to the case where the number of rows and columns are not the same.

- One might add that for different numbers of rows and columns, $k$ then must be the minimum of these two. – caracal Jul 19 '11 at 8:17
- Lovely answer, thank you! – Tal Galili Jul 19 '11 at 12:14
- – Tal Galili Aug 1 '11 at 17:50
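A quick numerical check of the bound (my own sketch, not part of the thread): build the $k \times k$ table with $n$ on the diagonal and $0$ elsewhere, and compare the statistic with $N(k-1)$:

```python
def chi_square(table):
    """Pearson's chi-square statistic for a two-way contingency table."""
    n_rows, n_cols = len(table), len(table[0])
    N = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(n_rows)) for j in range(n_cols)]
    stat = 0.0
    for i in range(n_rows):
        for j in range(n_cols):
            expected = row_tot[i] * col_tot[j] / N
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

k, n = 4, 10
table = [[n if i == j else 0 for j in range(k)] for i in range(k)]
print(chi_square(table), n * k * (k - 1))  # both 120.0
```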
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9126049876213074, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/175899/to-show-if-og-pn-then-zg-neq-e?answertab=votes
To show if $O(G)=p^{n}$ then $Z(G)\neq \{e\}$. [duplicate]

Possible Duplicate: Normal and central subgroups of finite $p$-groups

I want to show that if $O(G)=p^{n}$ then $Z(G)\neq \{e\}$, where $p$ is a prime number and $Z(G)=\{a\in G | ax=xa, \forall x\in G\}$, which is known as the center of the group $G$. I think I have to use the Lagrange theorem, which states that in a finite group $G$, if $H$ is a subgroup of $G$ then $O(H)|O(G)$. But I don't see the right way to use this to prove the given result. -

– user1729 Jul 27 '12 at 15:48

marked as duplicate by Gerry Myerson, Chris Eagle, Zhen Lin, tomasz, J. M. Aug 17 '12 at 18:33

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

2 Answers

Hint. $$o(G) = o(Z(G)) + \sum_{C(a)\neq G} \frac {o(G)} {o(C(a))}$$ where $C(a)$ is the centralizer of $a$. -

Is this actually a class equation? – Kns Jul 27 '12 at 15:29

1 Well, this is known as the class equation (in group theory) – DonAntonio Jul 27 '12 at 15:33

Do you know the class equation? Define an action of the group on itself by conjugation: $$G\times G\to G\,\,,\,\,g\cdot x:=x^g:=g^{-1}xg$$ Now, under this action, we get $$\forall\,\,x\in G\,\,,\,\mathcal O(x):=\{x^g\;:\;g\in G\}\,\,,\,Stab(x):=\{g\in G\;:\;x^g=x\}=C_G(x)$$ Well, now just use the basic lemma $\,|\mathcal O(x)|=[G:Stab(x)]\,$ and, of course, the fact that $\,G\,$ is a $\,p-\,$group... -
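As a concrete sanity check (my own sketch using sympy, not part of the original thread), one can compute the center of a small $p$-group directly; the dihedral group of order $8 = 2^3$ is a convenient example.

```python
from sympy.combinatorics.named_groups import DihedralGroup

G = DihedralGroup(4)        # symmetries of a square, order 8 = 2^3
print(G.order())            # 8
print(G.center().order())   # 2, so Z(G) is nontrivial, as the theorem says
```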
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9402839541435242, "perplexity_flag": "head"}
http://mathoverflow.net/questions/10014/applications-of-the-chinese-remainder-theorem/10080
## Applications of the Chinese remainder theorem

As the title suggests I am interested in CRT applications. Wikipedia article on CRT lists some of the well known applications (e.g. used in the RSA algorithm, used to construct an elegant Gödel numbering for sequences...) Do you know some other (maybe not so well known) applications? Or interesting problems (recreational? or from mathematical competitions like IMO?) which can be solved using CRT. Or any good references or examples in that direction. I hope that with this I will gain a better understanding of CRT and how to use it in general. -

4 This looks a lot like the Pigeon-hole thread, so...community wiki at least? – Gjergji Zaimi Dec 29 2009 at 9:52

2 The Chinese remainder theorem is best learned in the generality of ring theory. That is, for coprime ideals $a_1,\dots,a_n$ of a ring $R$, $R/a$ is isomorphic to the product of the rings $R/a_i$, where $a$ is defined to be the product (and by coprimality also the intersection) of the ideals $a_i$ – Harry Gindi Dec 29 2009 at 10:43

2 Any question with the [big-list] tag should always be community wiki. I've converted it. – Anton Geraschenko♦ Dec 29 2009 at 17:04

Some months later: an outstanding list of applications! For those who are fond of wikipedia, I think it could provide nice hints to expand the section "Applications" in the wiki article linked in the question. – Pietro Majer Jul 30 2010 at 10:25

## 28 Answers

Secret sharing. Suppose we have $N$ people. We want any $k+1$ of them to be able to launch a missile attack, but no $k$ of them to have this power. Solution: Choose some large prime $p$ and a random polynomial $f(t)$ of degree $k$ with coefficients in $\mathbb{Z}/p$. Tell person $1$ the value of $f(1)$, person $2$ the value of $f(2)$ and so forth. (Also, everyone knows what $p$ is.) Set up the missiles to only launch when $f(0)$ is input. Any $k+1$ people can use the Chinese remainder theorem to compute $f$, and hence $f(0)$; any $k$ people do not have enough data to constrain $f(0)$ in any way. -

This may be a dumb comment, but... Why not do the following instead? Choose a large prime $p$ and elements $a_1,a_2,...,a_{k+1}$ in $\mathbb{Z}/p$. Tell person $j$ the value of $a_j$, for each $j$. Set up the missiles to only launch when $\sum_{j=1}^{k+1}a_j$ is input. I am not sure if there is an essential difference between this and your suggestion. – senti_today Aug 17 2010 at 21:23

Because N is larger than k+1. – David Speyer Aug 17 2010 at 21:38

Sorry, I misread the second sentence. – senti_today Aug 18 2010 at 3:38

3 FYI this is called Shamir secret-sharing. Am I right in thinking that the reason we work over $\mathbb{Z}/p$ is that one can sensibly talk about random polynomials (with the implicitly-chosen uniform distribution on each coefficient), and not have to specify a more-or-less arbitrary distribution if we, say, took real coefficients? – Cam McLeman Dec 2 2010 at 13:49

I agree that any $k+1$ people can compute $f$, using, say, Lagrange interpolation ... but, using the Chinese remainder theorem? I don't see $k+1$ different moduli here - only (mod $p$). – Greg Martin May 15 at 17:13
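For what it's worth, here is a small Python sketch of the scheme (my own illustration, not part of the thread; as Greg Martin's comment notes, the recovery step is really Lagrange interpolation over $\mathbb{Z}/p$ rather than CRT proper). The prime and the secret below are arbitrary choices.

```python
import random

P = 2**61 - 1  # a Mersenne prime; any prime larger than the secret works

def share(secret, k, n):
    # random degree-k polynomial f over Z/P with f(0) = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(k)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]   # person i gets (i, f(i))

def recover(points):
    # Lagrange interpolation at t = 0 over Z/P (needs k+1 distinct shares)
    secret = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, k=2, n=5)
print(recover(shares[:3]))                # 123456789: any 3 of 5 suffice
print(recover(random.sample(shares, 3)))  # 123456789 again
```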
Parallel computation: Suppose you have a huge computation to do that involves adding, multiplying and subtracting integers. Possibly also dividing but, if so, only division by numbers in a finite set S which you already know. Choose primes $p_1$, $p_2$, ..., $p_r$ which do not divide any element of $S$, and such that $p_1 p_2 \cdots p_r$ is surely larger than your answer. Split your computation over $r$ processors, the $i$th of which computes the answer modulo $p_i$. Use CRT to put your answer back together in the end. This was the method used in the recent computation of the Kazhdan-Lusztig-Vogan polynomials of E8. -

2 I was just about to add the computation of the character table of E8 to the list when I saw this answer. So instead, a belated +1. – Pete L. Clark Jul 31 2010 at 12:09

3 Dirk Kinnaes and I used this method to find that there are 284473580014525286666121752496600242239281330559895142380815894680093529086840703279353601839695794392738788556778405044037630360510510198592329618381688979038688482537860239885961238887812656969372196798484462132843557299991075493007550627926803688745250953668796106910118867088442300850000 binary matrices of order 34 such that each row and each column have 17 ones. However I did meet one computer-illiterate mathematician who believed me when I claimed that we counted them one at a time. – Brendan McKay Jul 12 at 11:52

The Chinese remainder theorem is used to resolve multiple range ambiguities in many radar systems. -

12 To elaborate on this, because it's more applied than the rest: Many radar systems work by sending EM pulses out at regular intervals, waiting in between pulses to look for reflections from objects. You want to calculate an object's distance from the time it takes to see a reflection. If time between pulses is very long, this works, but if you're observing something dynamic you want fast updates, so you need shorter time between pulses. But then you don't know which pulse's reflection you're seeing, so object range is only known modulo (speed of light)*(pulse interval). (cont'd) – Ian Weiner Jul 30 2010 at 2:14

3 (cont'd from above) The solution is to send out a few different types of pulses (say, with different wavelengths of light), with each type of pulse having its own pulse interval, and making those intervals coprime. Then you can use the CRT to calculate range mod some very large distance (where you know, practically speaking, that you won't be registering any reflections from such large distances). – Ian Weiner Jul 30 2010 at 2:23

1 This is detailed on p 306 of Roger Sullivan's book Microwave radar: imaging and advanced concepts – Steve Huntsman Oct 16 2010 at 2:10

Lagrange interpolation is a special case of the Chinese remainder theorem. The Jordan normal form can be proven extremely quickly using the Chinese remainder theorem for modules over a commutative ring. This proceeds by first proving the Jordan-Chevalley decomposition, and then the rest is a simple exercise of showing what the Jordan blocks actually look like. The first one is very surprising to people, but if you state Lagrange interpolation correctly, it's easy to see that the idea is not only similar but identical. -

2 This sounds interesting. Can you give a reference? – Idoneal Dec 29 2009 at 15:57

2 or elaborate on this? – Lior Bary-Soroker Dec 29 2009 at 19:13

Which one would you like to hear about? – Harry Gindi Dec 29 2009 at 20:20

3 I explained the connection between CRT and Lagrange interpolation on my old blog once: artofproblemsolving.com/Forum/… .
– Qiaochu Yuan Dec 29 2009 at 23:07

2 Why is this comment on my answer rather than the op? – Harry Gindi Dec 30 2009 at 15:10

There are some cute exercises based on the Chinese remainder theorem, e.g., (1) there exist an arbitrarily large number of consecutive integers, none of which is squarefree (1955 Putnam Competition), (2) there exist an arbitrarily large number of consecutive integers, none of which is powerful ($n$ is powerful if for every prime $p$ dividing $n$, we have $p^2|n$), (3) there exist an arbitrarily large number of consecutive positive integers, none of which is a sum of two squares, (4) the number of integers $1\cdot 2, 2\cdot 3, \dots, n\cdot (n+1)$ divisible by $n$ is $2^{\omega(n)}$, where $\omega(n)$ is the number of distinct prime divisors of $n$. -

1 By way of contrast to (2), I believe it is unknown whether there exist three consecutive integers each of which is powerful. – Gerry Myerson Jul 12 at 6:21

CRT combined with Dirichlet's theorem allows you to prove the existence of infinitely many primes satisfying any system of congruences that has a solution; I sketched a proof that this implies that the square roots of integers have no nontrivial linear dependences here. -

Here are some applications I don't see listed among the other answers.

1. Everyone knows $5^2$ ends in 5 and $6^2$ ends in 6. Your task: find multi-digit numbers whose squares end in themselves (e.g., $25^2$ ends in 25, $76^2$ ends in 76, ...). This problem can be given to students -- even children -- who know no particular mathematics and they discover experimentally for $n$ = 2, 3, 4, ... that there are usually two $n$-digit solutions (sometimes fewer than 2 solutions, but never more than 2). As for whether this pattern persists for all $n$, both that there usually are solutions and that there are at most two solutions among $n$-digit numbers, turn the problem into a congruence condition and then think about CRT.

2. If $f(x)$ is in ${\mathbf Z}[x]$ and all of its values $f(a)$ for $a$ in ${\mathbf Z}$ are multiples of either 2 or 3, then CRT implies all of its values are multiples of 2 or all of its values are multiples of 3. On the surface, this seems kind of miraculous, doesn't it? (Same result works by CRT replacing {2,3} with any finite set of prime numbers. While the analogous result for divisibility by a finite set of squared primes is probably true, it is still an open problem as far as I know. That is, if $p_1,...,p_r$ are primes and for every $a$ the integer $f(a)$ is divisible by one of $p_1^2,\dots,p_r^2$, can you show for one of the primes $p_i$ that every $f(a)$ is divisible by $p_i^2$?)

3. The Solovay-Strassen probabilistic primality test. Verifying that this test admits a witness for odd composite moduli uses CRT. When I teach undergraduate number theory, the SS test has always been the last topic in the course and it's a neat application of CRT.

4. If $a$ is not a square in $\mathbf Z$ then there are infinitely many primes $p$ such that $a \bmod p$ is not a square. This is an application of the Chinese remainder theorem and quadratic reciprocity. (This can be superseded in a quantitative sense if you use Dirichlet's theorem on primes in arithmetic progression.)

5. If $m|n$ then the reduction map ${\mathbf Z}/n{\mathbf Z} \rightarrow {\mathbf Z}/m{\mathbf Z}$ is easily seen to be surjective.
Please try to prove by elementary methods that the reduction map on units, $({\mathbf Z}/n{\mathbf Z})^\times \rightarrow ({\mathbf Z}/m{\mathbf Z})^\times$, is surjective without using CRT. Using CRT it is quite easy. (You can yank in Dirichlet's theorem on primes for a fast proof, but that's a rather deep result compared to CRT, so it wouldn't count as an elementary proof avoiding CRT.) -

I think you are using the symbol $\equiv$ in a nonstandard way in 2 above. – Idoneal Jan 17 2010 at 5:49

2 Huh? The notation looks correct to me. It's the congruence notation from modular arithmetic. To say a number is 0 mod m means it is a multiple of m, and that's the situation I'm describing in 2. In what way does the usage look nonstandard? (I said at the start that a runs over integers, and that remains true anywhere later on as well.) – KConrad Jan 17 2010 at 6:17

The statement starting at "That is," in 2. is not what you meant to write, based on what you say in the first sentence. It says "If $6$ divides $f(a)$ for all $a$, then either $2$ divides every $f(a)$, or $3$ divides every $f(a)$", which is certainly true and does not use CRT. I think that's why Idoneal was suggesting that it was a modified version of congruence. You could instead say "That is, if $f(a) \equiv 0, 2, 3$, or $4 \mod{6}$ for each $a$, then either..." – Zack Wolske Jul 12 at 1:35

Since this is a few years old, do you know if the question on squares is still open? – Zack Wolske Jul 12 at 1:36

@Zack: Thanks for pointing that out. I fixed the statement of 2. And as far as I know the question on squares is still unproved. – KConrad Jul 12 at 2:51

You already bring this idea up when you mention RSA but I want to make the point more generally: There are very good methods for solving polynomial equations modulo primes and prime powers. The only good way to solve an equation modulo $N$ is to factor $N$; solve modulo each prime power dividing $N$, and use Chinese remainder to put a solution back together. This is true even for computing square roots. -

Here is a neat example that predates David Speyer's example of fast parallel arithmetic, but uses the same trick. Richard J. Lipton, Yechezkel Zalcstein: Word Problems Solvable in Logspace. J. ACM 24(3): 522-526 (1977) In this paper, the Chinese remainder theorem is used to prove that the word problem for several types of groups is solvable in logspace. (The Chinese remainder theorem is not explicitly invoked, but one can use it to justify the algorithms.) For instance, the paper states: Corollary 6. The word problem for finitely generated free groups is solvable in logspace. The word problem for a finite group is to determine if a given product of group elements equals the identity. The results are proved by embedding a group into a group of matrices over the integers, then computing the matrix product modulo all small primes. -

In the proof of Gödel's First Incompleteness Theorem, you need to choose a way to encode formulas and proofs as numbers. The easy way to do this is to take $2^{i_0}3^{i_1}5^{i_2}\cdots p_j^{i_j}$. However you can instead use the Chinese Remainder Theorem to pick a number congruent to $i_0$ mod $p_0$, congruent to $i_1$ mod $p_1$, etc. (Of course, you need to pick big enough primes then.) The advantage of doing this is you no longer need exponentiation in your theory, just multiplication and addition. -

I assumed that this is what was meant by "used to construct an elegant Gödel numbering for sequences" in the original question.
– Andreas Blass Sep 18 2010 at 23:03

The Chinese Remainder Theorem gives a way to compute matrix exponentials. Indeed, let $A$ be a complex square matrix, put $B:=\mathbb C[A]$. This is a Banach algebra, and also a $\mathbb C[X]$-algebra ($X$ being an indeterminate). Let $S$ be the set of eigenvalues of $A$, $$\mu=\prod_{s\in S}\ (X-s)^{m(s)}$$ the minimal polynomial of $A$, and identify $B$ with $\mathbb C[X]/(\mu)$. The Chinese Remainder Theorem says that the canonical $\mathbb C[X]$-algebra morphism $$\Phi:B\to C:=\prod_{s\in S}\ \mathbb C[X]/(X-s)^{m(s)}$$ is bijective. Computing exponentials in $C$ is trivial, so the only missing piece in our puzzle is the explicit inversion of $\Phi$. Fix $s$ in $S$ and let $e_s$ be the element of $C$ which has a one at the $s$ place and zeros elsewhere. It suffices to compute $\Phi^{-1}(e_s)$. This element will be of the form $$f=\frac{\mu}{(X-s)^{m(s)}}\ g\ \mbox{ mod }\mu$$ with $f,g\in\mathbb C[X]$, the only requirement being $$g\equiv\frac{(X-s)^{m(s)}}{\mu}\mbox{ mod }(X-s)^{m(s)}$$ (the congruence taking place in the ring of rational fractions defined at $s$). So $g$ is given by Taylor's Formula. -

One application: to find the product of all elements in the multiplicative group $({\mathfrak o}/{\mathfrak a})^\times$, where $\mathfrak o$ is the ring of integers in a number field and ${\mathfrak a}\subset{\mathfrak o}$ is an ideal. The case ${\mathfrak o}={\mathbb Z}$ is Wilson's theorem. See for example arXiv:0711.3879v1 [math.NT]. -

Many properties of $\mathbb{Z}/n$ can be broken down to properties of $\mathbb{Z}/p^i$ using the Chinese Remainder Theorem. Here is an example that I have no idea how to prove otherwise: IMO Shortlist 1997 problem 15 is equivalent to proving that if some given integer is a square residue modulo $n$ and a cubic residue modulo $n$ at the same time, then it is a $6$-th power residue modulo $n$ as well. More generally, if $n$, $u$, $v$ are three positive integers, and some given $a\in\mathbb{Z}/n$ is both a $u$-th power and a $v$-th power in $\mathbb{Z}/n$, then $a$ is a $\mathrm{lcm}\left(u,v\right)$-th power in $\mathbb{Z}/n$. -

Quadratic Reciprocity. Perhaps the best application of the CRT is in a proof of Gauss's Quadratic Reciprocity, a proof due to Rousseau (my version found on my blog here) that uses just the CRT. It is very rare that proofs of the QR drop out without a lot of effort :) -

1 It is an application of CRT, but it does not qualify as "perhaps the best". Even among proofs of quadratic reciprocity, this is not one of the nicer ones (in terms of elegance or insight). I've seen a lot of such proofs and this wouldn't go in my top 5. – KConrad Jan 17 2010 at 19:17

12 Wow, tough crowd. – Pete L. Clark Jan 29 2010 at 6:22

The Chinese remainder theorem can be generalized as follows: Let $G$ be a group with normal subgroups $H,K$ of $G$. Then the canonical map $G/(H \cap K) \to G/H \times_{G/(HK)} G/K$ is an isomorphism. The proof is trivial! With the same argument: Let $R$ be a ring with ideals $I,J$, then the canonical map $R/(I \cap J) \to R/I \times_{R/(I+J)} R/J$ is an isomorphism. Now this reveals a geometric meaning of the Chinese remainder theorem: Let $X$ be a scheme and $A,B$ two closed subschemes of $X$. Then $A \cup B$ is a closed subscheme of $X$ (intersect the ideal sheaves) and we have $A \cup B = A \coprod_{A \cap B} B$. Thus the union of two closed subschemes is the pushout of $A$ and $B$ along $A \cap B$, which is very intuitive.
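A quick computational check of the non-coprime version stated above (a throwaway Python sketch of mine): take $R=\mathbb{Z}$, $I=(4)$, $J=(6)$, so that $I \cap J = (12)$ and $I+J = (2)$, and verify that the image of $\mathbb{Z}/12$ in $\mathbb{Z}/4 \times \mathbb{Z}/6$ is exactly the fiber product over $\mathbb{Z}/2$.

```python
# image of the canonical map Z/12 -> Z/4 x Z/6
image = {(n % 4, n % 6) for n in range(12)}

# fiber product over Z/2: pairs that agree after reduction mod 2
fiber = {(a, b) for a in range(4) for b in range(6) if a % 2 == b % 2}

print(image == fiber, len(image))  # True 12, so Z/12 = Z/4 x_{Z/2} Z/6
```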
- Pohlig–Hellman discrete logarithm computation is based on the CRT. A degree $n$ discrete Fourier Transform is identical to polynomial evaluation at primitive $n$-th roots of unity. The inverse transform is interpolation or the CRT (as mentioned in earlier posts). Montgomery reduction uses the CRT to reduce an integer $x$ modulo $N$ without division by $N$, by creating an integer equivalent to $x\bmod{N}$ and $0\bmod{2^r}$, then dividing by $2^r$. -

Actually, not only the Lagrange interpolation, but more generally the Hermite interpolation problem (i.e. with prescribed jets at some nodes) may be treated as an application of the CRT. I happened to write some lines on it in the wiki article: http://en.wikipedia.org/wiki/Chinese_remainder_theorem#Applications (where other applications and references are also given). -

A variant of the CRT that has many practical applications is the "explicit CRT" (see http://cr.yp.to/antiforgery/meecrt-20060914-ams.pdf, for a good introduction). One scenario where the explicit CRT (mod m) can be used is the following. Suppose we know the value of an integer x modulo primes p_1, ..., p_n, whose product exceeds, say, 4|x|. Now we could use a standard CRT computation to determine x, but suppose we actually wish to know x mod m, where m is some integer not among the p_i. The explicit CRT lets us compute x mod m using arithmetic operations that involve operands that are all about the same size as m (rather than x). This is useful when x >> m; if x is a 100-digit number but the p_i's and m are all 3- or 4-digit numbers, you could compute x mod m on a pocket calculator with an 8-digit display, given the values x mod p_1, ..., x mod p_n. This technique is especially useful when one needs to compute many large integers x modulo the same m (the integers x might be the coefficients of a polynomial and m might be the characteristic of a finite field, for example). This approach is used in http://arxiv.org/abs/0903.2785 to compute Hilbert class polynomials, and in http://arxiv.org/abs/1001.0402 to compute modular polynomials, both of which are notoriously large but can be efficiently computed with the explicit CRT. -

Mark Iwen uses the CRT to construct a Sparse Fourier Transform, see: A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods, M. A. Iwen, http://arxiv.org/abs/0708.1211 -

Also there are some examples with visible and nonvisible lattice points: http://www.jstor.org/pss/2317753 http://www.jstor.org/pss/2686720 Try to google for more examples. -

Here is an adorable application of the CRT in combinatorial number theory: In an exact covering system of congruences the two largest moduli must be identical. In other words, if you partition the integers into arithmetic series, then two of the series must have the same 'step size'. This is often given as an application of (basic) complex analysis but all you really need is the CRT! http://www.emis.de/journals/EJC/Volume_8/PDF/v8i2a1.pdf -

While I absolutely don't want to ruin Doron's wonderful polemic, let me add that the "complex-analytic" proof he quotes is completely algebraic. Just look at it from the correct angle: We take the equation (MNDR) and rewrite it as $\frac{1}{1-z}-\sum_{i=1}^{n-1} \frac{z^{b_i}}{1-z^{a_i}} = \frac{z^{b_n}}{1-z^{a_n}}$.
Bringing the left hand side to a common denominator, this denominator is going to be a polynomial which is nonzero on any primitive $a_n$-th root of unity, so if we multiply the equation with the $a_n$-th cyclotomic polynomial, the left hand side becomes $0$. But the right ... – darij grinberg Apr 4 2011 at 22:56

1 ... one doesn't. – darij grinberg Apr 4 2011 at 22:56

What we DO use here is the existence of a primitive $a_n$-th root of unity. This is somewhat tricky if we wish to avoid analysis, but many books do it right. – darij grinberg Apr 4 2011 at 22:57

That's a good point! – Burton Newman Apr 7 2011 at 0:35

CRT is used to generate Payam Numbers. A Payam number k can generate very prime sequences of the form k*M(x)*2^n+/-1, with n variable, M(x) a multiple of certain primes. The pursuit of very prime series using Payam numbers is a mathematical recreation currently located at http://www.mersenneforum.org/showthread.php?t=9755 -

I often deal with combinatorial numbers $x$ that are difficult to compute exactly, but it's possible to find congruences satisfied by them. You can combine the congruences using the Chinese Remainder Theorem into one concise congruence. If one day someone does compute the number $x$ by a lengthy computation, they can check that $x$ satisfies the congruence. For instance, the number of reduced Latin squares $R_{12}$ of order $12$ is unknown, but it satisfies $R_{12} \equiv 50400 \pmod {55440}$. -

One application that I heard about (but haven't actually used myself) is in computing combinatorial numbers $x$ that have very many digits. Sometimes, instead of dealing with arbitrary precision arithmetic, it is easier to compute $x \pmod \mu$ for many small moduli $\mu$ (which can be substantially faster than dealing with $x$ itself) then afterwards combine the congruences using the Chinese Remainder Theorem to find $x$ itself. Although this requires that you have an upper bound on $x$ or some idea of how large it should be. -

For parallel computation, see David Speyer's comment about the Kazhdan-Lusztig-Vogan polynomials of E_8 below. – Michael Lugo Dec 29 2009 at 22:23

As another example of this, I heard that people used to compute Bernoulli numbers using the von Staudt-Clausen Theorem ( en.wikipedia.org/wiki/… ). – darij grinberg Mar 20 2010 at 18:29

Ulrich Oberst wrote an article on that in Expositiones no. 3, 1985, "Anwendungen des chinesischen Restsatzes" ("Applications of the Chinese Remainder Theorem"). -

One answer I don't see here: Lagrange interpolation. If one takes, for example, the ring $\mathbb{Q}[x]$, and realizes CRT as a statement about rings and direct sums of $R/P$ over a set of co-prime $P,$ then one can construct polynomials which have cycles of arbitrary length in the rationals (or any number of cycles of arbitrary length). Lagrange interpolation has other applications, but the proof is CRT. -

Way down in the second answer. – Qiaochu Yuan Jan 29 2010 at 5:35

ah! my apologies, I missed that one. – Ben Weiss Jan 29 2010 at 6:37

Here's a nice problem that was among some IMO practice problems. I don't know the source. Problem: Prove that for each natural number $n$, there is some natural number $r$ for which the $n$ integers $r+1^2,r+2^2,\ldots r+n^2$ are all squarefree. Solution (sketch): For a large prime $p$, the probability that none of these $n$ integers is divisible by $p^2$ is $1-\frac{n}{p^2}$. Assuming independence for the $p_i$s we get $(1-\frac{n}{2^2})(1-\frac{n}{3^2})(1-\frac{n}{5^2})\ldots$, which converges to a positive number, so there must exist solutions.
To make this into a rigorous argument one needs CRT. -

The Mayan calendar system uses a number of different periodic processes, and provides a simple but very important example of a practical use of the CRT. The Tzolkin, or Day Count, has twenty weekdays (Ik, Akbal,... Auau) and thirteen numbers, 1-13. Each day, the day name advances, and so does the number. For example, 7 Ik is followed by 8 Akbal. These name/number pairs repeat in a 260 day cycle, which has been in continuous, uninterrupted use since at least 600BC.

The Haab, or Vague Year, is a 365 day year consisting of 19 months (Pop, Uo, ..., Cumku, Uayeb). The first 18 months have 20 days and Uayeb has five days. The Haab runs 0 Pop, 1 Pop, ..., 19 Pop, 0 Uo, 1 Uo,..., 4 Uayeb and then repeats to 0 Pop. Together, the Tzolkin and Haab form the calendar round, with dates given by Tzolkin then Haab, for example, 7 Ik 0 Cumku. This cycle repeats every 18980 days, about 52 years, which means that a calendar round date is good for most practical purposes (such as birth dates).

The earlier Mayan period, from around the 1st century BC to the 13th century AD, also featured a system known as the long count, recently made somewhat famous by the fact that it finished a 5126 year cycle on December 21, 2012. Long count dates have Kin (days) which run 0-19. 20 Kin make one Uinal, 18 Uinal make one Tun, 20 Tun make one Katun, and 20 Katun make one Baktun. Dates are written with Baktun first, as, for example, 9.7.17.12.14. After 13 Baktun, the date goes back to zero, so that 12.19.19.17.19 was followed by 0.0.0.0.0.

One major problem with studying the Mayan calendar is that the long count dates fell out of use hundreds of years before the Spanish arrived, and it is nontrivial to decide which Mayan long count dates correspond to which dates in the modern western calendar system - the 'correlation problem'. The key document, the Chronicle of Oxcutzcab, says that a tun ended 13 Ahua 8 Xul in AD 1539, thus tying together the long count (tun ending), the calendar round, and the Julian calendar. From ancient records, the long count is known to have begun on 4 Ahua, 8 Cumku. So, given 0.0.0.0.0 = 4 Ahua 8 Cumku, one needs to solve $x$.0.0 = 13 Ahua 8 Xul. The day number gives the equation $360 x \equiv 9 \pmod{13}$. Since 8 Xul is 125 days after 8 Cumku, the Haab gives the second equation $360 x \equiv 125 \pmod{365}$. So, there's a simple little use of CRT: Solve for $x$, and find $x \equiv 924 \pmod{949}$.

To finish the story, the year AD 1539 contains the long count tun 924 = 2.6.4.0.0 plus some multiple of 949 tun = 2.7.9.0.0. There is enough historical evidence to guess the date to within 949 tun (about 935 years), and so one learns that 11.16.0.0.0 is in AD 1539. Finally, the calendar round is still in use and so one can determine that 11.16.0.0.0 is November 12, 1539. I'll leave it as an exercise to determine that December 21, 2012 really was 0.0.0.0.0. -
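Out of curiosity, here is a short Python check of that CRT computation (my own addition, not part of the answer): brute force over one full period of the two moduli recovers the stated residue.

```python
# solve 360x = 9 (mod 13) and 360x = 125 (mod 365) by brute force
sols = [x for x in range(13 * 365)
        if 360 * x % 13 == 9 and 360 * x % 365 == 125]
print(sols)  # [924, 1873, 2822, 3771, 4720], i.e. x = 924 (mod 949)
```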
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 186, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9175175428390503, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/168556/finding-the-height-given-the-angle-of-elevation-and-depression?answertab=oldest
# Finding the height given the angle of elevation and depression.

Please, I need help with this problem. I'm a little confused about it :( From a point A, 10 ft above the water, the angle of elevation of the top of a lighthouse is 46 degrees and the angle of depression of its image is 50 degrees. Find the height of the lighthouse and its horizontal distance from the observer. I don't know where to start, because the problem doesn't have the opposite, hypotenuse, or adjacent side written on it, and I think I cannot use TOA since there are no "Adj" or "Opp" sides given in the problem. -

2 Since you are new, I want to give some advice about the site: To get the best possible answers, you should explain what your thoughts on the problem are so far. That way, people won't tell you things you already know, and they can write answers at an appropriate level; also, people are much more willing to help you if you show that you've tried the problem yourself. If this is homework, please add the [homework] tag; people will still help, so don't worry. Also, many would consider your post rude because it is a command ("Find..."), not a request for help, so please consider rewriting it. – Zev Chonoles♦ Jul 9 '12 at 8:35

It looks like you can set up two equations in two unknowns here. Let $x$ be the lighthouse's height and $y$ be the distance to the lighthouse. Then if the picture in my head is right, $x-10$ and $y$ are the opposite and adjacent sides to a $46^\circ$ angle, and $x+10$ and $y$ are the opp. and adj. sides to a $50^\circ$ angle. – Eugene Shvarts Jul 9 '12 at 8:36

1 @jhong: You should not repost your question if you have something to add - there is an "edit" button on the bottom left of the question, underneath the "homework" and "trigonometry" tags. I have taken all of the changes you made in your reposted question and put them here, and closed your reposted question as a duplicate. – Zev Chonoles♦ Jul 9 '12 at 10:31

## 2 Answers

Let $h$ be the height, $d$ the horizontal distance. Then you arrive at two equations (why?): $$d\tan 46^\circ=h-10,\qquad d\tan 50^\circ=h+10.$$ -

Saurabh Hota's diagram describes it perfectly! – Aneesh Karthik C Jul 9 '12 at 9:16

Write EG in terms of the angle $50^\circ$. Find $x$. Then the height will be $x\tan 46^\circ + 10$
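A quick numerical sketch of that system in Python (my own addition, using the two equations from the answer above):

```python
import math

t46, t50 = math.tan(math.radians(46)), math.tan(math.radians(50))
# d*tan(46) = h - 10 and d*tan(50) = h + 10; subtracting eliminates h
d = 20 / (t50 - t46)
h = 10 + d * t46
print(round(d, 1), round(h, 1))  # roughly 128.0 ft away, 142.6 ft tall
```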
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9512574076652527, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/99296/embedding-torus/99306
embedding torus

Hi everybody, could anyone please help me? Why is it impossible to embed a torus in $\mathbb{R}^3$ with index 1 (the usual Euclidean space with index 1, viewed as a semi-Riemannian manifold) as a semi-Riemannian submanifold? -

2 Answers

I suppose that by "usual" you mean $\mathbb{R}^3$ with the semi-Riemannian metric $dx^2 + dy^2 - dz^2$? If so, for any compact surface $S$ embedded in $\mathbb{R}^3$, since the metric is invariant under translation on $\mathbb{R}^3$ you are allowed to translate $S$ so that it is tangent to the light cone $x^2 + y^2 - z^2 = 0$. At any point of tangency the restricted metric is indefinite. To do this translation, first translate $S$ so that it is strictly inside the light cone, then let $S$ gently descend until the first moment that it touches the light cone. -

Thanks, but I didn't understand. Let me ask another question: why, with this metric, does $\mathbb{R}^3$ not have any compact semi-Riemannian submanifold? -

1 You should edit your question (and possibly register, so you don't keep creating new accounts). – S. Carnahan♦ Jun 12 at 7:34
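For concreteness, here is a small numerical sketch of the answer's mechanism (my own addition, using a round sphere, the simplest compact surface, placed tangent to the light cone from inside): the Gram determinant of the induced metric changes sign across the tangency circle, so the restriction cannot be a semi-Riemannian metric of fixed signature on the whole surface.

```python
import numpy as np

# the unit sphere centered at (0, 0, sqrt(2)) is tangent to the cone z^2 = x^2 + y^2
H = np.sqrt(2.0)

def emb(u, v):
    # standard spherical chart on the sphere
    return np.array([np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), H + np.cos(u)])

def induced_det(u, v, eps=1e-6):
    # Gram determinant of dx^2 + dy^2 - dz^2 pulled back to the sphere
    du = (emb(u + eps, v) - emb(u - eps, v)) / (2 * eps)
    dv = (emb(u, v + eps) - emb(u, v - eps)) / (2 * eps)
    g = lambda a, b: a[0] * b[0] + a[1] * b[1] - a[2] * b[2]
    return g(du, du) * g(dv, dv) - g(du, dv) ** 2

# tangency occurs along the circle u = 3*pi/4
for u in (3 * np.pi / 4 - 0.2, 3 * np.pi / 4, 3 * np.pi / 4 + 0.2):
    print(induced_det(u, 0.0))  # about -0.19, 0, +0.19: signature flips
```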
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9180542826652527, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/quantum-electrodynamics
# Tagged Questions

Quantum Electrodynamics (QED) is the quantum field theory believed to describe the electromagnetic interaction (and, in its electroweak extension, the weak nuclear force).

1answer 87 views
### Photons, where do they come from? [closed]
Photons, where do they come from? What exactly is a photon? I've certainly heard how they get produced, but it doesn't seem to make sense that some sort of particle should need to be produced just ...

1answer 55 views
### What are the limits of applicability of Coulomb's Law?
Coulomb's law is formally parallel to Newton's Law of Universal Gravitation, which is known to give way to General Relativity for very large masses. Does Coulomb's Law have any similar limits of ...

1answer 172 views
### Which is this formula Feynman talks about in the QED book?
I am reading the fantastic QED Feynman book. He talks in chapter 3 about a formula he considers too complicated to be written in the book. I would like to know which formula he talks about, although I ...

1answer 74 views
### QED photon propagator to one-loop order gets different answers
I'm a self-studying 14-year-old who has a passion for particle physics. I'm currently trying to calculate the QED photon propagator to one loop. However, in all the places I've looked, even with the ...

1answer 84 views
### How can an asymptotic expansion give an extremely accurate prediction, as in QED?
What is the meaning of "twenty digits accuracy" of certain QED calculations? If I take too few loops, or too many of them, the result won't be as accurate, so do people stop adding loops when the ...

0answers 21 views
### What does it mean to erase the which-path information of something?
In this particular case, I am told that very fast measurements erase which-path frequency information of photons. I'm not really sure what that means though. I do not entirely understand the concept ...

1answer 180 views
### Can a photon exhibit multiple frequencies?
Can a photon be a superposition of multiple frequency states? Kind of similar to how an electron can be a superposition of multiple spin states.

0answers 59 views
### How does this paper relate to standard QED?
This paper proposes a microscopic mechanism for generating the values of $c, \epsilon_0, \mu_0$. They state that their vacuum is assumed to contain ephemeral (meaning existing within the limits of ...

0answers 49 views
### How does QED deal with wavelength of quanta [duplicate]
Since QED treats photons as individual units (quanta), how does it treat the concept of the "wavelength" associated with the photon?

3answers 83 views
### Why doesn't a stationary electron lose energy by radiating electric field (as per coulomb's law)?
If an electron in a universe constantly generates an electric field, why does it not get annihilated? I am confused because I read that an accelerating charge radiates and loses energy. So, why won't ...

1answer 77 views
### Some questions about Ward-Takahashi Identity
I'm learning from Peskin and Schroeder's textbook of quantum field theory. I have proceeded to the Ward-Takahashi identity and have one question from when I looked at Wikipedia for reference. The following is ...

0answers 34 views
### Is it reasonable to interpret the Lamb shift as vacuum induced Stark shifts?
This is a pretty hand-wavy question about interpretation of the Lamb shift. I understand that one can calculate the Lamb shift diagrammatically to get an accurate result, but there exist ...
1answer 113 views
### Photon as the carrier of the electromagnetic force
My physics background goes as "far" as reading popsci books on QM, Particle Physics, and Cosmology, so pardon my ignorance in the questions below. I've read that the photon is the particle (quanta in ...

1answer 66 views
### Does a quadrupole transition mean emission of one photon with spin 2?
If it's true and spin-2 photons do exist, could you please point to some literature that discusses spin-2 photons? If not, then how exactly does a selection rule for quadrupole transition make sense ...

0answers 99 views
### Quantum Electrodynamics
I was wondering if anyone could give a simple explanation of how light interacts with matter. From what I have read in QED, electrons will repel each other because of their ability to emit and ...

0answers 233 views
### Do EM waves transmit spin polarization?
Suppose you have normal dipole antennas (transmitter and receiver). Spin polarized current (as opposed to normal current) is sent into the transmitter, it emits an EM wave and the receiver receives ...

1answer 116 views
### Are there 2 kinds of photons, one that mediates the electromagnetic interaction and the other the quanta of light?
It is usually said that photons are the force carriers or the mediators of the electromagnetic forces between electric charges. At the same time we know also that electromagnetic waves on the quantum ...

1answer 224 views
### Did the Feynman heuristic of "simple effects have simple causes" fail for spin statistics?
Someone here recently noted that "The spin-statistics thing isn't a problem, it is a theorem (a demonstrably valid proposition), and it shouldn't be addressed, it should be understood and celebrated." ...

1answer 67 views
### Alternative methods to derive the static potential in the NR limit of QED
In QED, one can relate the two-particle scattering amplitude to a static potential in the non-relativistic limit using the Born approximation. E.g. in Peskin and Schroeder pg. 125, the tree-level ...

3answers 150 views
### Does light really "travel"?
From what I've so far understood about light, a photon is emitted somewhere and after some time it's absorbed somewhere else. Have we had experiments that confirm the path taken or something akin to ...

3answers 136 views
### What starts the movement of a photon
Although a photon has no (rest) mass, it does have a measurable speed. Its movement can be altered by gravity. A photon "travels". If I turn on a flashlight, seen by someone at a distance, the photons ...

2answers 154 views
### Why does Quantum Electrodynamics Allow a Photon to Exist Temporarily as a Positron and an Electron?
In this question... Why does a photon colliding with an atomic nucleus cause pair production? ...I asked why a photon colliding with an atomic nucleus can become an electron and a positron. The ...

3answers 256 views
### Can the path of a charged particle under the influence of a magnetic field be considered piecewise linear?
Ordinarily we consider the path of a charged particle under the influence of a magnetic field to be curved. However, in order for the trajectory of the particle to change, it must emit a photon. ...

2answers 314 views
### Using photons to explain electrostatic force
I am trying to understand the idea of a force carrier with the following example. Let's say there are two charges $A$ and $B$ that are a fixed distance from each other. What is causing the force on ...
1answer 127 views
### Database of scattering amplitudes
I want to check whether my result for the invariant amplitude of the electron-electron scattering (to lowest order in $\alpha$; t+u channels) is correct or not. I can't find any reference that has ...

1answer 137 views
### Local $U(1)$ gauge invariance of QED
The Lagrangian density for QED is $$\mathcal{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+\bar{\psi}(i\gamma^{\mu}D_{\mu}-m)\psi$$ with $$F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$$ ...

2answers 226 views
### Ontology of the quantum field
I'll use QED as an example, but my question is relevant to any quantum field theory. When we have a particle in QED, where is its charge contained in the field? Is the field itself charged? If so, ...

1answer 60 views
### Vanishing of photon one-point function in QED
I would like to know why the photon one-point function vanishes in QED. I am aware that any $n$-point function vanishes for odd $n$ because of a 'charge-conjugation' argument; this does not apply to ...

1answer 97 views
### Two-photon scattering: colours
Is there a particular conservation principle that necessitates that the outgoing photon pair has the same frequencies as the incoming photon pair? I'm thinking in particular of these Feynman-like ...

0answers 59 views
### Photons interact with themselves
We know that photons are the antiparticles of themselves, and if they interact with each other through a higher order process, do they annihilate and again produce photons? Here is the Phys.SE question ...

1answer 210 views
### Spontaneous breaking of Lorentz invariance in gauge theories
I was browsing through the hep-th arXiv and came across this article: Spontaneous Lorentz Violation in Gauge Theories. A. P. Balachandran, S. Vaidya. arXiv:1302.3406 [hep-th]. (Submitted on 14 ...

2answers 94 views
### Field energy of/from virtual Photons
I have a slightly out-of-line question: Consider a single electron (or its current, if you please). The EM field surrounding it will (no doubt) have an EM field energy (T) to go with. The standard ...

0answers 65 views
### A question on charge renormalization in QED
Let us work with charge renormalization in QED. Consider the 2-point photon correlation function $\Pi_2(q^2)$ at the one-loop level. We normalize the coupling constant at $q^2=0$ (point of normalization). ...

2answers 152 views
### Where does the mass term come from in the Proca Lagrangian?
There are many good books describing how to construct the Lagrangian for an electromagnetic field in a medium. $$\mathcal{L}~=~-\frac{1}{16\pi}F^{\mu\nu}F_{\mu\nu}-\frac{1}{c}j^{\nu}A_{\nu}$$ ...

1answer 79 views
### Dichroism in uniaxial crystals
I need some help with this. Some books where I can find a real mathematical explanation of this effect would be a great help! A simple explanation of the effect would be good too.

2answers 150 views
### proof of radius of convergence of perturbation series in quantum electrodynamics zero
Can anyone show a detailed proof of why the radius of convergence of the perturbation series in quantum electrodynamics is zero? And how is the perturbation series constructed? So, as this argument requires ...

2answers 226 views
### Geometrical significance of gauge invariance of the QED Lagrangian
The QED Lagrangian is invariant under $\psi(x) \to e^{i\alpha(x)} \psi (x)$, $A_{\mu} \to A_{\mu}- \frac{1}{e}\partial_{\mu}\alpha(x)$. What is the geometric significance of this result? Also why is ...
1answer 292 views
### Feynman Rules for massive vector boson interactions
I am stuck at the beginning of a problem where I am given an interaction term that modifies the regular QED Lagrangian. It involves the interaction between a photon field and a massive vector boson: ...

1answer 97 views
### QED Commutation Relations Implications
In Brian Hatfield's book on QFT and Strings there is the following quote: In particular $$[A_i (x,t), E_j(y,t)] = -i \delta_{ij}\delta(x-y)$$ implies that [A_i(x,t),\nabla \cdot E(y,t)] = ...

3answers 194 views
### Is there a simple way to compute some physical constant from Feynman diagram statistics?
I've been playing around writing some software to generate Feynman diagrams for QED, respecting the vertex "rules" described here, and avoiding creating isomorphic duplicates. So from a starter ...

0answers 51 views
### How many particles are created in the strong electromagnetic field?
Consider a vacuum of a charged massless scalar field. Then the uniform and isotropic electric field $E$ is turned on for a time interval $\tau$. The question is, how many scalar particles are created? ...

0answers 88 views
### The state of Indefinite metric in Quantum Electrodynamics
I have faced difficulties grasping why the indefinite metric is introduced from nowhere in QED; after searching the internet I found that this is a problem in QED, because one needs it to preserve the theory's ...

2answers 286 views
### A thought experiment with Heisenberg's Uncertainty Principle [duplicate]
Possible Duplicate: Could the Heisenberg Uncertainty Principle turn out to be false? Thought Experiment Ponder, for a moment, if I had a cube with 10cm sides which I'll name The Box. By ...

1answer 85 views
### Is there Pair production in between charged plates
In classical electromagnetic theory, if parallel plates are charged oppositely and placed close to each other, charge will not flow from one plate to another. How does this situation ...

1answer 267 views
### Transparency of solids using bandgaps and relation to conduction and valence bands
I think I understand how a solid can appear transparent as long as the energy of the photons travelling through it is not absorbed in the material's bandgap. But how does this band gap relate to ...

0answers 136 views
### When can photon field amplitudes be written as field operators?
Suppose I have some classical field equation for two photon fields with amplitudes $A_1(z),A_2(z)$ (plane waves) given as ${A}_1=\alpha f(A_1,A_2) \\ {{A}_2}=\beta g(A_1,A_2)$ Under what ...

1answer 77 views
### Is diffraction affected by interaction between photons and electrons?
Suppose we take a sheet of ordinary metal, make a narrow slit in it, and shine a light beam through the slit onto a screen. The light beam will diffract from the edges of the slit and spread out onto ...

0answers 166 views
### Nonlinear refraction index of vacuum above Schwinger limit
This question is more about trying to feel the waters in our current abilities to compute (or roughly estimate) the refraction index of vacuum, specifically when high numbers of electromagnetic quanta ...

2answers 105 views
### A step in the derivation of the magnetic momentum of the electron in Zee's QFT book
In chapter III.6 of his Quantum Field Theory in a Nutshell, A. Zee sets out to derive the magnetic moment of an electron in quantum electrodynamics. He starts by replacing in the Dirac equation the ...

3answers 170 views
### QED: Would atoms without electrons be visible?
I have been reading a lot of QED books lately, and understand (as well as possible anyway) the interaction between electrons and photons. But I can't seem to get a clear indication of the interaction ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9235057234764099, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/3940-newtons-method.html
# Thread:

1. ## Newton's Method

I don't know how to do this question; would anyone be able to help? Thanks, nath [Attached thumbnail: the problem statement]

2. Newton's Method solves the equation $f(x) = 0$ as follows. Let $x_1$ be an approximation to the true root $x$, say $x_1 = x - h$. Then $0 = f(x) = f(x_1 + h) = f(x_1) + h\,f'(x_1) + (\text{error})$ and we assume that the error is negligible. Then $h$ is approximately $-f(x_1)/f'(x_1)$ and so another approximation to the root $x$ is $x_2 = x_1 + h = x_1 - f(x_1)/f'(x_1)$. In your case $f(x) = x^{1/3}$ and $f'(x) = \frac{1}{3}x^{-2/3}$, so $h = -3x_1$.

3. $a_1 = a - \frac{f(a)}{f'(a)}$, where $x = a$ is close to the root. I know I have to use the above, but how?

4. The point does not need to "be close to" the root. It can be in any interval that satisfies the necessary conditions for this algorithm to work. (I just do not remember the exact conditions).

5. How do I apply this to my question?

6. Originally Posted by nath_quam: $a_1 = a - \frac{f(a)}{f'(a)}$, where $x = a$ is close to the root. I know I have to use the above, but how? As rgep indicates, you set $f(x)=x^{1/3}$. Then: $f'(x)=\frac{1}{3}x^{-2/3}$. So: $a_1 = a - 3a = -2a$. Note this does not converge for any $a \ne 0$ (as is hinted at in the question). RonL
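A quick numerical illustration of CaptainBlack's point (my own sketch, not part of the thread): for $f(x)=x^{1/3}$ each Newton step sends $x$ to $-2x$, so the iterates bounce away from the root at $0$, doubling in size each step.

```python
import math

cbrt = lambda x: math.copysign(abs(x) ** (1 / 3), x)  # real cube root
f = cbrt
fp = lambda x: (1 / 3) * abs(x) ** (-2 / 3)           # derivative of x^(1/3)

x = 1.0
for _ in range(5):
    x = x - f(x) / fp(x)  # Newton step; algebraically x -> -2x here
    print(x)              # -2.0, 4.0, -8.0, 16.0, -32.0 (up to rounding)
```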
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9046929478645325, "perplexity_flag": "middle"}
http://divisbyzero.com/2010/08/16/goodsteins-unprovable-theorem/?like=1&source=post_flair&_wpnonce=8ac41143de
# Division by Zero

A blog about math, puzzles, teaching, and academic technology

Posted by: Dave Richeson | August 16, 2010

## Goodstein's unprovable theorem

Recently I learned about a family of sequences of nonnegative integers (called Goodstein sequences) and two remarkable theorems about these sequences.

Begin with any positive integer ${m_{1}}$. This is the first term in the sequence. For example, suppose we begin with ${m_{1}=18}$. The first step in computing the second term of the sequence, ${m_{2}}$, is to write ${m_{1}}$ in hereditary base 2 notation. That is, write ${m_{1}}$ using only powers of 2 (the base 2 expansion of ${m_{1}}$): ${m_{1}=\sum a_{i}2^{i}}$. Then write all of the exponents in their base 2 expansions. Then write all of the exponents of the exponents in that way, etc. Our number written in hereditary base 2 notation is ${m_{1}=18=2^{4}+2^{1}=2^{2^{2}}+2^{2^{0}}}$.

To obtain ${m_{2}}$ we change all of the 2's to 3's, then subtract 1. For our example, ${m_{2}=3^{3^{3}}+3^{3^{0}}-1}$. Notice that ${m_{2}}$ is a very large number; it is approximately ${7.63\times 10^{12}}$.

We continue in the same way. We obtain ${m_{n}}$ from ${m_{n-1}}$ by writing ${m_{n-1}}$ in hereditary base ${n}$ notation, changing all of the ${n}$'s to ${(n+1)}$'s, and subtracting 1.

${m_{1}=2^{2^{2}}+2^{2^{0}}=18}$

${m_{2}=3^{3^{3}}+3^{3^{0}}-1=3^{3^{3}}+2\cdot 3^{0}\approx 7.63\times 10^{12}}$

${m_{3}=4^{4^{4}}+2\cdot 4^{0}-1=4^{4^{4}}+4^{0}\approx 1.34\times 10^{154}}$

${m_{4}=5^{5^{5}}+5^{0}-1=5^{5^{5}}\approx 1.91\times 10^{2184}}$

${m_{5}=6^{6^{6}}-1\approx 2.66\times 10^{36305}}$

To obtain ${m_{6}}$ we must express ${m_{5}}$ in hereditary base 6 notation. First notice that ${m_{5}=6^{6^{6}}-1=5\cdot 6^{6^{6}-1}+5\cdot 6^{6^{6}-2}+\cdots+5\cdot 6^{1}+5\cdot 6^{0}.}$ (An easy way to see this is that written in base 6, ${6^{6^{6}}}$ is ${100\cdots 00_{6}}$, 1 followed by ${6^{6}}$ zeros. So ${6^{6^{6}}-1}$ is ${55\cdots 55_{6}}$, ${6^{6}}$ fives.) But this expansion of ${m_{5}}$ is not in hereditary base 6 notation yet. We must express the towers of exponents in base 6, etc. When that is done, we replace the 6's with 7's and subtract 1. In the end, ${m_{6}}$ is a gigantic number.

Theorem. For any choice of ${m_{1}}$, there is a ${k}$ such that ${m_{k}=0}$.

That is, this sequence, which looks like it is rocketing to infinity, will eventually become zero and terminate. Wow! The proof of this theorem is very sophisticated and uses the theory of ordinal numbers. I'll have to file this sequence away as an example that shows why we can't use the behavior of the first few (or first few million) terms of a sequence to determine the limiting behavior of a sequence.

Then, in 1982 Laurie Kirby and Jeff Paris proved the following theorem.

Theorem. Goodstein's Theorem is not provable using the Peano axioms of arithmetic.

In other words, this is exactly the type of theorem described in 1931 by Gödel's first incompleteness theorem! Recall what Gödel's theorem says. If there is an axiomatic system that is rich enough to express all elementary arithmetic (such as that formed from the Peano axioms), then it must be incomplete. In other words, there must be a true statement about arithmetic that cannot be proven from the axioms. In his proof Gödel produces an explicit example of a true, but unprovable statement. But it is complicated to grasp and more reminiscent of a logical paradox than a mathematical statement.
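For readers who want to experiment, here is a short Python sketch of the base-bumping step described above (my own code, not from the post). `bump(n, b)` rewrites $n$ in hereditary base $b$ and replaces every $b$ by $b+1$; a Goodstein step is a bump followed by subtracting 1. Starting values as small as 18 immediately produce astronomically large terms, so the demo iterates from $m_1 = 4$ instead.

```python
def bump(n, b):
    # rewrite n in hereditary base b, then replace every b by b + 1;
    # the recursion on 'power' handles the towers of exponents
    total, power = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        total += digit * (b + 1) ** bump(power, b)
        power += 1
    return total

def goodstein(m, steps):
    seq, base = [m], 2
    for _ in range(steps):
        if m == 0:
            break
        m = bump(m, base) - 1  # one Goodstein step
        base += 1
        seq.append(m)
    return seq

print(goodstein(4, 6))                 # [4, 26, 41, 60, 83, 109, 139]
print(bump(18, 2) - 1 == 3**27 + 2)    # True: the m_2 computed above
```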
Theorem. There is a $k$ such that $m_k = 0$.

That is, this sequence, which looks like it is rocketing to infinity, will eventually become zero and terminate. Wow! The proof of this theorem is very sophisticated and uses the theory of ordinal numbers. I'll have to file this sequence away as an example that shows why we can't use the behavior of the first few (or first few million) terms of a sequence to determine the limiting behavior of a sequence.

Then, in 1982 Laurie Kirby and Jeff Paris proved the following theorem.

Theorem. Goodstein's Theorem is not provable using the Peano axioms of arithmetic.

In other words, this is exactly the type of theorem described in 1931 by Gödel's first incompleteness theorem! Recall what Gödel's theorem says. If there is an axiomatic system that is rich enough to express all elementary arithmetic (such as the one formed from the Peano axioms), then it must be incomplete. In other words, there must be a true statement about arithmetic that cannot be proven from the axioms. In his proof Gödel produces an explicit example of a true but unprovable statement. But it is complicated to grasp and more reminiscent of a logical paradox than a mathematical statement.

The first nice mathematical example of such a statement was presented in 1977 by Paris and Harrington (in a field called Ramsey theory). Then in 1982 Kirby and Paris proved that Goodstein's theorem was unprovable, and they gave another elementary example, called "Hercules versus the hydra," which relates to the growth of the hydra (a tree) and its destruction by Hercules.

Posted in Math | Tags: axiomatic systems, Gödel's incompleteness theorem, Goodstein's theorem, Peano axioms, sequences

## Responses

1. "In other words, this is exactly the type of theorem predicted in 1931 by Gödel's first incompleteness theorem!" Heuristic correction: Godel didn't "predict" unprovable theorems, he PRESENTED A RECIPE to generate lots and lots of unprovable theorems. It's possible by that word "predicted" you were gesturing toward the common noob complaint "but those aren't "real" or "natural" theorems! they're so contrived!". If that's the issue, then whatever. Theorems are theorems. Math doesn't discriminate against its theorems – only mathematicians do. And in the case of Godelian issues, typically only those mathematicians who don't much bother with "fringe" maths. By: sherifffruitfly on August 17, 2010 at 12:09 am

2. Great post! The summer is a good time to be one of your blog readers. @sherifffruitfly I don't understand the distinction you are making. Correct me if I'm wrong, but while Godel's first incompleteness theorem proved the existence of such theorems and provided examples, it certainly did not enumerate the ways to construct all such theorems. Therefore one could certainly say that Goodstein's theorem "is exactly the type of theorem **described** in 1931 by Gödel's first incompleteness theorem." Checking the etymology of "predict" (and our intuitive sense of the word), "describing in advance" is a reasonable definition, which is exactly the sense in which it was used. Don't be so hasty! By: Brendan on August 17, 2010 at 1:47 am

• Thanks for the kind words Brendan! @sherifffruitfly I see what you're saying that "predicted" probably isn't the best word choice, although I don't think it is as bad as you do—I like Brendan's suggestion of "described." By: Dave Richeson on August 17, 2010 at 8:20 am

3. Very nice – I have been thinking about unprovable theorems a bit, as I recently wrote an article about recent progress in the field: http://www.newscientist.com/article/mg20727731.300-to-infinity-and-beyond-the-struggle-to-save-arithmetic.html?full=true The lexicon of 'concrete' unprovable theorems is more well developed than one might expect. See e.g. 48. Unprovable theorems, here: http://www.math.ohio-state.edu/~friedman/manuscripts.html Fast growing sequences and gigantic numbers play a prominent role. Another very striking example is H. Friedman's (again!) finite version of Kruskal's theorem on trees: http://en.wikipedia.org/wiki/Kruskal%27s_tree_theorem By: Richard Elwes on August 17, 2010 at 11:55 am

• I *meant* to link to your New Scientist article in my blog post—sorry about that. I read your article yesterday and saw the blurb about the Paris/Harrington theorem at the end—it was the first time I'd seen that. It is fascinating. I'm no expert on any of this, but I've read all or part of 4 or 5 books on Russell, Gödel, etc. in the last few months. So your article really caught my attention. Thanks. By: Dave Richeson on August 17, 2010 at 12:12 pm

4. How does 3^3^3 + 3^3^0 become 6.63 * 10^12 ????
The only way I can get 6.63 * 10^X is by cubing 3 seven times: 3^3^3^3^3^3^3 = 6.628 * 10^347. What's wrong? By: Vivek on August 17, 2010 at 1:59 pm

• Thanks for catching that typo. It should have been $7.63\times 10^{12}$, not $6.63\times 10^{12}$. I fixed it. By: Dave Richeson on August 17, 2010 at 2:35 pm

5. The link to Gödel's first incompleteness theorem needs correcting. By: Matthew on August 18, 2010 at 2:28 pm

• Thanks. My blog editor apparently didn't like the "ö" in the url. By: Dave Richeson on August 18, 2010 at 2:36 pm

6. I don't understand. If there exists a k such that m_k=0, then there should be a trivial proof. All you need to do is compute each m_n until you reach m_k=0. Since k exists, this computation will take finite time, and your proof will be of finite length and won't do anything beyond simple computation. Sure it'll take a long time, and we don't know what the proof will be, but it seems clear that such a proof exists. If the theorem is true, the only way I can see it being unprovable using the Peano axioms of arithmetic is if you can't compute m_(n+1) from m_(n) using the Peano axioms of arithmetic, which would mean you can't even prove what m_2 is. By: Mark James on August 18, 2010 at 11:58 pm

7. Okay, I misunderstood the problem. The theorem says that for all m_1 there exists a k such that m_k = 0, not just that if m_1 = 18 there exists a k such that m_k = 0. By: Mark James on August 19, 2010 at 3:19 am

8. It's things like this that drive me to want to continue studying mathematics at the graduate level, so I can understand and grasp these proofs. Is either proof particularly elegant or beautiful? By: Justin on September 2, 2010 at 2:49 am

9. By: "An Unprovable Theorem: Starting from a Sequence" (title translated from the Chinese) « Dementrock's Blog on October 3, 2010 at 5:56 am

10. [...] were expressible in the language of Peano arithmetic itself. In fact, the recently discovered Goodstein's Theorem is a super-simple number theory statement that you can prove easily using ordinals and ZFC, but [...] By: A Couple Nuances « Gracious Living on December 6, 2010 at 4:02 am

11. [...] We understand that Godel_0 promises exist, with one another with many good examples are actually found out "in the wild" that occur to be not created explicitly for this purpose, as in this article. [...] By: Are recursive forms of Godel's statement possible? | Q&A System on January 2, 2012 at 8:45 pm
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 38, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9219011664390564, "perplexity_flag": "middle"}
http://skywired.net/blog/2012/07/how-delta-sigma-works-part-3-the-controls-system-perspective/
Electronics, DSP, and ham radio

# How Delta-Sigma Works, part 3: The controls-system perspective

Posted on July 4, 2012

This post is part of a series on delta-sigma techniques: data converters, modulators, and more. A complete list of posts in the series is in the How Delta-Sigma Works tutorial page.

In the first installment of How Delta-Sigma Works, I presented the basic first-order delta-sigma converter loop. Now it is time to begin digging a little deeper and look at how the loop works. To do this, we will need the mathematical tools of closed-loop control systems. Even without the mathematics, thinking about a delta-sigma modulator as a closed-loop controller can bring insight into how it works.

In engineering, closed-loop control is often used to keep a system working at a setpoint despite environmental disturbances or variations in the system itself. A furnace thermostat is a simple closed-loop controller, turning on the heat when the temperature drops below a setpoint and turning it back off when the temperature is above. Another example is vehicle cruise control. Unlike a thermostat, which usually has a binary output (on/off), cruise control adjusts the engine fuel control to maintain a roughly constant speed. In my car, it does this by physically moving the gas pedal. The environmental variations cruise control can encounter include hills, the quality of the fuel, and headwinds or tailwinds. The closed-loop controller adjusts the gas pedal as needed to maintain constant speed despite these effects.

A simple closed-loop control system can be drawn as in the figure below. The reference input, r, is the command input to the controller. In the thermostat example, r is the setpoint temperature, while for cruise control, it is the set speed. The output, y, is the controlled value. This is not necessarily in the same units as the reference input r. For example, in the cruise control, y might be a direct measurement of speed, or it might be something else related, such as a cumulative count of revolutions of the vehicle's wheels.

In between the input r and the output y is the control loop. At the bottom of the loop, in its feedback path, a block h processes the output y into a form that is subtracted from the input r to create an error signal e. This error signal is an indication of how far the controller is from the desired operating point. The goal of a controller design is to keep e near, if not at, 0. The feedback block h can represent many different things. In the example of a cruise control system with the output y in units of distance, h might calculate speed by computing the derivative of y. In other systems, h might simply scale the value to convert its units. If y and r are already in the same units, such as in the thermostat example, h can pass y through unchanged ($h(y) = y$).

Now that the error signal has been calculated, it is processed by the controller gc, the output of which goes to the "plant" being controlled. (To a controls engineer, anything being controlled, whether a car, a heating system, or a giant factory, is a plant.) The plant is represented by the box gp. The processing in the controller gc is easy to grasp. The controller calculates some function of its input in order to find the command it should give the plant. What happens in this plant, gp, may be a little harder to imagine. The function gp is a mathematical model of the physics of the actual plant. It may be calculated from basic principles (a physicist's delight!), or it may be an empirical model derived from the inputs and outputs of the actual plant. In any event, a reasonable guess at the function gp is necessary before one can design a closed-loop controller.

A delta-sigma modulator also has a closed loop, which suggests that perhaps insight can be gained by comparing it to a controller. The first-order delta-sigma analog-to-digital converter from the first installment is shown again below. The resemblance to a closed-loop controller is clear when one groups the blocks as in the next figure. The subtractor has the same function in both diagrams, comparing the input to the output. The integrator functions as the controller, gc, and the analog-to-digital converter and its register are the plant, gp, being controlled. The digital-to-analog converter is the feedback path, h.

This grouping gives immediate insight into how the delta-sigma modulator does its magic: it is a linear controller for an analog-to-digital converter, which is the plant. The controller is always trying to drive that plant's output as close as possible to the setpoint, and does so by adding up (integrating) the error signal. Also, since the integrator is adding up the history of the error signal, it can be seen that although the output at any given moment may not equal the input, the long-term average will be equal. That integrator will try to keep the long-term average of the error, e, equal to 0.
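That behavior is easy to see in simulation. Here is a minimal Python sketch (mine, not from the original post) of the first-order loop just described: the integrator plays the controller gc, a 1-bit quantizer stands in for the ADC plant gp, and the feedback h is an ideal 1-bit DAC. The moving average of the ±1 output tracks the slowly varying input:

````
import numpy as np

def dsm1(x):
    # First-order delta-sigma modulator with a 1-bit quantizer.
    v = 0.0                 # integrator state (the controller g_c)
    y = 0.0                 # previous quantized output, fed back through h
    out = np.empty(len(x))
    for n, xn in enumerate(x):
        v += xn - y                      # integrate the error e = x - h(y)
        y = 1.0 if v >= 0 else -1.0      # 1-bit quantizer: the plant g_p
        out[n] = y
    return out

t = np.arange(4000)
x = 0.6 * np.sin(2 * np.pi * t / 1000)   # slow input well inside (-1, 1)
y = dsm1(x)
avg = np.convolve(y, np.ones(101) / 101, mode="same")
print(np.abs(avg - x)[200:-200].max())   # small: the average tracks the input
````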
There are many controllers that can control a given plant. PID (proportional-integral-derivative) controllers are simple and very popular, while more sophisticated controllers can be designed using other techniques. If a delta-sigma modulator is a control loop, it is reasonable to ask whether controllers other than a single integrator will result in a better-performing modulator. In fact, other control functions can be used in delta-sigma modulators and can give lower noise or more desirable characteristics in the frequency domain.

Finally, one should not get too carried away with a linear control model. Analog-to-digital and digital-to-analog converters are inherently non-linear, while the mathematics of control theory primarily deals with linear systems. Assuming linear behavior will get us a long way towards understanding delta-sigma techniques, but it is important not to take the analogy too far.

Next in this series: Noise shaping, the frequency-domain secret behind delta-sigma data converters.

This entry was posted in Design Notes and tagged controls system, delta-sigma, design, theory, tutorial. Bookmark the permalink.

### 3 Responses to How Delta-Sigma Works, part 3: The controls-system perspective

1. Akhil Kumar says: Thanx Stephen for posting such important information in a very simple way. But I have a question regarding digital-to-digital sigma-delta modulators. Can you please tell me how we can use a digital-to-digital sigma-delta modulator to decrease battery current fluctuations in the audio band? One more thing: I will be grateful to you if you can explain the digital-to-digital sigma-delta modulator using a waveform, like the analog-to-digital sigma-delta modulator, and based on noise shaping. Thank you very much Regards, Akhil Kumar

• Stephen Trier says: Akhil, These sound kind of like homework problems. Are they? The answer to how digital-to-digital reduces supply current fluctuation in the audio band lies in noise shaping. A CMOS device draws significant current only when it is switching.
If your output data stream pushes the noise in the data stream up into the high frequencies, the switching spikes will similarly be pushed up into high frequencies. Digital to digital delta sigma works exactly like analog to digital. The only big difference is that the whole thing has to be in discrete time, while with analog-to-digital you have an option of continuous or discrete time for the analog portion. The concept of noise shaping is equivalent and works essentially the same way in both. I hope that helps! Stephen 2. Stephen Lake says: Just wanted to say thank you for the series. Very clear explanations that clarified many of the questions I had regarding delta-sigma ADCs.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.884117841720581, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2370/simple-xor-cipher-extension?answertab=oldest
# Simple xor cipher extension

Probably the simplest cipher is the xor cipher with a single integer. One can extend this to use more than one integer by several means. I'm wondering if there is any benefit to doing more than this: Let $x_n$ be the $n$th data byte to encrypt and $p_k$ be the data stream (password or whatever) to encrypt with, where $p_k$ is cyclical. Then the "standard" way of doing the encryption is $$\large y_n = x_n \small\oplus\large\; p_k$$ But for certain functions $f,g,h$ $$\large y_n = x_{h(n)} \small\oplus\large\; f(p_{g(k,n)},n)$$ is invertible. My question is: without knowing what formula and functions are used, would such a method make it much more difficult to "crack" the encryption? (Basically, is there any real benefit to using a more complex formula? Of course, if anyone has any theoretical insight into how these modifications "enhance" the encryption then by all means ;) My guess is that it is too difficult, mathematically, to understand in general. For example, if invertibility exists and $f$, $g$, and $h$ have certain properties, do we end up with a more secure encryption than the standard xor cipher (which is easily attacked using frequency analysis)?) Assume $f,g$ and $h$ are the best possible functions in whatever way is needed. What I have noticed is that such complex formulas seem to increase the "randomness" of the output over the standard method. It would seem that if $f,g$ and $h$ are completely known to the cracker then it is equivalent to simply using a different $p_k$ and $x_n$, and equivalent to the standard method. -

## 2 Answers

If the keystream $p$ is as long as the plaintext $x$ (and is not reused), then $y_n := x_n \oplus p_n$ is a one-time pad, which is theoretically unbreakable, and thus needs no further strengthening. On the other hand, if $p$ is shorter than $x$, then what you have is basically equivalent to a Vigenère cipher, which is known to be quite weak. Thus, any additional obfuscation may well improve it, although it's still unlikely to produce a truly secure cipher. It should be noted that your construction is almost general enough to encompass arbitrary synchronous stream ciphers, which are of the form $y_n := x_n \oplus f(p, n)$. The only difference is that stream ciphers are generally designed (with good reason) so that each byte (or bit) of the output of $f$ depends essentially on the entire key $p$. -

Actually, your intuition is correct; if $f$, $g$ and $h$ are publicly known functions (and are unkeyed except for the $p_k$ inputs), then the more complex construction is not much stronger than the original simpler cipher. Indeed, it is vulnerable to ciphertext-only attacks just like the simpler method (assuming, of course, that the plaintext is longer than the key, and so some key bytes are used multiple times). Here are two possible avenues of attack, assuming that the attacker has some ciphertext which corresponds to some nonrandom plaintext:

• Guess that key byte $p_i$ is a specific value; then, the attacker can scan through $g(n)$ (you have $g(k,n)$; however, you don't define what $k$ is there; the key length?) to find the values of $n$ where it takes on the value $i$, and for those $n$, reconstruct the plaintext bytes $x_{h(n)} = y_n \oplus f( p_i, n)$. He can then see if those revealed plaintext bytes make sense, and if they don't, then he can eliminate that specific value for the key byte.
• He can also work the other direction; he can guess the values of a few consecutive plaintext bytes, say that the values $x_i, ..., x_{i+4}$ are the five characters " the ". He can then reconstruct the values of the key bytes that must hold for that decryption to work (so that if $h(n) = i$, then $f( p_{g(n)}, n ) = y_n \oplus x_i$), and then use the above attack to figure out what other plaintext values are implied by those key bytes. If the plaintext values make sense, then it is likely that the guess is correct (and you also get the other plaintext bytes as well). Note: this general approach of guessing consecutive plaintext bytes is known as "crib dragging".

Because of these types of attacks, we generally follow the advice that Ilmari gives, that is, we make every output of the keystream depend on every byte of the key. If we don't do this, then the attacker can search on part of the key, and that's a lot cheaper for the attacker. -
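To make the first attack concrete, here is a small Python sketch (mine, not from either answer; the simple choices of $f$, $g$, $h$ are purely illustrative). It implements the generalized scheme with publicly known $f$, $g$, $h$ and then guesses a single key byte by decrypting only the positions that key byte touches, discarding guesses that yield non-text bytes:

````
def encrypt(data, key, f=lambda p, n: (p + n) % 256,
            g=lambda n, klen: n % klen, h=lambda n: n):
    # Generalized repeating-key XOR: y[n] = x[h(n)] ^ f(key[g(n)], n).
    # With f the identity this is plain Vigenere-style XOR; as long as h
    # is a permutation the scheme stays invertible.
    return bytes(data[h(n)] ^ f(key[g(n, len(key))], n)
                 for n in range(len(data)))

def guess_key_byte(ct, i, klen, f=lambda p, n: (p + n) % 256):
    # Attack sketched above: for each candidate value of key byte i,
    # decrypt only the positions n with g(n) = i, and keep the candidates
    # whose revealed plaintext bytes all look like printable text.
    positions = [n for n in range(len(ct)) if n % klen == i]
    return [guess for guess in range(256)
            if all(32 <= ct[n] ^ f(guess, n) < 127 for n in positions)]

ct = encrypt(b"meet me at the usual place at noon today", b"key")
candidates = guess_key_byte(ct, 0, 3)
print(len(candidates), ord("k") in candidates)   # the true key byte survives
````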
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9442864656448364, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/1142/simple-algorithm-for-online-outlier-detection-of-a-generic-time-series
# Simple algorithm for online outlier detection of a generic time series

I am working with a large amount of time series. These time series are basically network measurements coming every 10 minutes, and some of them are periodic (e.g. the bandwidth), while some others aren't (e.g. the amount of routing traffic). I would like a simple algorithm for doing an online "outlier detection". Basically, I want to keep in memory (or on disk) the whole historical data for each time series, and I want to detect any outlier in a live scenario (each time a new sample is captured). What is the best way to achieve these results? I'm currently using a moving average in order to remove some noise, but then what next? Simple things like standard deviation, MAD, ... against the whole data set don't work well (I can't assume the time series are stationary), and I would like something more "accurate", ideally a black box like: double outlier_detection(double* vector, double value); where vector is the array of doubles containing the historical data, and the return value is the anomaly score for the new sample "value". -

1 I think we should encourage posters to post links as part of the question if they have posted the same question at another SE site. – user28 Aug 2 '10 at 21:47

Yes, you're completely right. Next time I'll mention that the message is crossposted. – gianluca Aug 2 '10 at 21:53

I also suggest you check out the other Related links on the right hand side of the page. This is a popular question and it has come up in a variety of questions previously. If they aren't satisfactory it is best to update your question about the specifics of your situation. – Andy W Sep 3 '12 at 15:10

Good catch, @Andy! Let's merge this question with the other one. – whuber♦ Sep 3 '12 at 16:08

## 11 Answers

Here is a simple R function that will find time series outliers (and optionally show them in a plot). It will handle seasonal and non-seasonal time series. The basic idea is to find robust estimates of the trend and seasonal components and subtract them. Then find outliers in the residuals. The test for residual outliers is the same as for the standard boxplot -- points greater than 1.5 IQR above or below the upper and lower quartiles are assumed outliers. The number of IQRs above/below these thresholds is returned as an outlier "score". So the score can be any positive number, and will be zero for non-outliers. I realise you are not implementing this in R, but I often find an R function a good place to start. Then the task is to translate this into whatever language is required.

````
tsoutliers <- function(x, plot=FALSE)
{
  x <- as.ts(x)
  if(frequency(x) > 1)
    resid <- stl(x, s.window="periodic", robust=TRUE)$time.series[,3]
  else
  {
    tt <- 1:length(x)
    resid <- residuals(loess(x ~ tt))
  }
  resid.q <- quantile(resid, prob=c(0.25, 0.75))
  iqr <- diff(resid.q)
  limits <- resid.q + 1.5*iqr*c(-1, 1)
  score <- abs(pmin((resid - limits[1])/iqr, 0) + pmax((resid - limits[2])/iqr, 0))
  if(plot)
  {
    plot(x)
    x2 <- ts(rep(NA, length(x)))
    x2[score > 0] <- x[score > 0]
    tsp(x2) <- tsp(x)
    points(x2, pch=19, col="red")
    return(invisible(score))
  }
  else
    return(score)
}
````
-

+1 from me, excellent. So > 1.5 X inter-quartile range is the consensus definition of an outlier for time-dependent series? That would be nice to have a scale-independent reference. – doug Aug 3 '10 at 3:06

The outlier test is on the residuals, so hopefully the time-dependence is small.
I don't know about a consensus, but boxplots are often used for outlier detection and seem to work reasonably well. There are better methods if someone wanted to make the function a little fancier. – Rob Hyndman Aug 3 '10 at 3:45

Really thank you for your help, I really appreciate it. I'm quite busy at work now, but I'm going to test an approach like yours as soon as possible, and I will come back with my final considerations about this issue. One thought: in your function, from what I see, I have to manually specify the frequency of the time series (when constructing it), and the seasonality component is considered only when the frequency is greater than 1. Is there a robust way to deal with this automatically? – gianluca Aug 3 '10 at 15:59

Yes, I have assumed the frequency is known and specified. There are methods to estimate the frequency automatically, but that would complicate the function considerably. If you need to estimate the frequency, try asking a separate question about it -- and I'll probably provide an answer! But it needs more space than I have available in a comment. – Rob Hyndman Aug 3 '10 at 23:40

Thank you, I'll post a different question. – gianluca Aug 4 '10 at 0:21

A good solution will have several ingredients, including:

• Use a resistant, moving window smooth to remove nonstationarity.

• Re-express the original data so that the residuals with respect to the smooth are approximately symmetrically distributed. Given the nature of your data, it's likely that their square roots or logarithms would give symmetric residuals.

• Apply control chart methods, or at least control chart thinking, to the residuals.

As far as that last one goes, control chart thinking shows that "conventional" thresholds like 2 SD or 1.5 times the IQR beyond the quartiles work poorly because they trigger too many false out-of-control signals. People usually use 3 SD in control chart work, whence 2.5 (or even 3) times the IQR beyond the quartiles would be a good starting point. I have more or less outlined the nature of Rob Hyndman's solution while adding to it two major points: the potential need to re-express the data and the wisdom of being more conservative in signaling an outlier. I'm not sure that Loess is good for an online detector, though, because it doesn't work well at the endpoints. You might instead use something as simple as a moving median filter (as in Tukey's resistant smoothing). If outliers don't come in bursts, you can use a narrow window (5 data points, perhaps, which will break down only with a burst of 3 or more outliers within a group of 5). Once you have performed the analysis to determine a good re-expression of the data, it's unlikely you'll need to change the re-expression. Therefore, your online detector really only needs to reference the most recent values (the latest window) because it won't use the earlier data at all. If you have really long time series you could go further to analyze autocorrelation and seasonality (such as recurring daily or weekly fluctuations) to improve the procedure. -

This is an extraordinary answer for practical analysis. I never would have thought to try 3 IQR beyond the quartiles. – John Robertson Oct 9 '12 at 18:45

@John, 1.5 IQR is Tukey's original recommendation for the longest whiskers on a boxplot and 3 IQR is his recommendation for marking points as "far outliers" (a riff on a popular 60's phrase). This is built into many boxplot algorithms.
The recommendation is analyzed theoretically in Hoaglin, Mosteller, & Tukey, Understanding Robust and Exploratory Data Analysis. – whuber♦ Oct 9 '12 at 21:38

If you're worried about assumptions with any particular approach, one approach is to train a number of learners on different signals, then use ensemble methods and aggregate over the "votes" from your learners to make the outlier classification. BTW, this might be worth reading or skimming since it references a few approaches to the problem. -

I am guessing a sophisticated time series model will not work for you because of the time it takes to detect outliers using this methodology. Therefore, here is a workaround:

1. First establish baseline 'normal' traffic patterns for a year based on manual analysis of historical data which accounts for time of the day, weekday vs weekend, month of the year etc.

2. Use this baseline along with some simple mechanism (e.g., the moving average suggested by Carlos) to detect outliers.

You may also want to review the statistical process control literature for some ideas. -

1 Yeah, this is exactly what I am doing: until now I manually split the signal into periods, so that for each of them I can define a confidence interval within which the signal is supposed to be stationary, and therefore I can use standard methods such as standard deviation, ... The real problem is that I can not decide the expected pattern for all the signals I have to analyze, and that's why I'm looking for something more intelligent. – gianluca Aug 2 '10 at 21:37

Here is one idea: Step 1: Implement and estimate a generic time series model on a one-time basis based on historical data. This can be done offline. Step 2: Use the resulting model to detect outliers. Step 3: At some frequency (perhaps every month?), re-calibrate the time series model (this can be done offline) so that your step 2 detection of outliers does not go too much out of step with current traffic patterns. Would that work for your context? – user28 Aug 2 '10 at 22:24

Please keep in mind I'm not too lazy to investigate these parameters; the point is that these values need to be set according to the expected pattern of the signal, and in my scenario I can't make any assumption. – gianluca Aug 2 '10 at 22:40

ARIMA models are classic time series models that can be used to fit time series data. I would encourage you to explore the application of ARIMA models. You could wait for Rob to be online and perhaps he will chime in with some ideas. – user28 Aug 2 '10 at 22:44

Seasonally adjust the data such that a normal day looks closer to flat. You could take today's 5:00pm sample and subtract or divide out the average of the previous 30 days at 5:00pm. Then look past N standard deviations (measured using pre-adjusted data) for outliers. This could be done separately for weekly and daily "seasons." -

Again, this works pretty well if the signal is supposed to have a seasonality like that, but if I use a completely different time series (e.g. the average TCP round trip time over time), this method will not work (since it would be better to handle that one with a simple global mean and standard deviation using a sliding window containing historical data). – gianluca Aug 2 '10 at 22:02

1 Unless you are willing to implement a general time series model (which brings in its cons in terms of latency etc) I am pessimistic that you will find a general implementation which at the same time is simple enough to work for all sorts of time series.
– user28 Aug 2 '10 at 22:06

Another comment: I know a good answer might be "so you might estimate the periodicity of the signal, and decide the algorithm to use according to it", but I didn't find a really good solution to this other problem (I played a bit with spectral analysis using DFT and time analysis using the autocorrelation function, but my time series contain a lot of noise and such methods give some crazy results most of the time). – gianluca Aug 2 '10 at 22:06

A comment to your last comment: that's why I'm looking for a more generic approach, but I need a kind of "black box" because I can't make any assumption about the analyzed signal, and therefore I can't create the "best parameter set for the learning algorithm". – gianluca Aug 2 '10 at 22:09

@gianluca As you have intimated, the underlying ARIMA structure can mask the anomaly. Incorrect formulation of possible cause variables such as hour of the day, day-of-the-week, holiday effects etc. can also mask the anomaly. The answer is fairly clear: you need to have a good equation to effectively detect anomalies. To quote Bacon, "For whoever knows the ways of Nature will more easily notice her deviations and, on the other hand, whoever knows her deviations will more accurately describe her ways." – IrishStat Sep 6 '12 at 17:03

(This answer responded to a duplicate (now closed) question at Detecting outstanding events, which presented some data in graphical form.)

Outlier detection depends on the nature of the data and on what you are willing to assume about them. General-purpose methods rely on robust statistics. The spirit of this approach is to characterize the bulk of the data in a way that is not influenced by any outliers and then point to any individual values that do not fit within that characterization. Because this is a time series, it adds the complication of needing to (re)detect outliers on an ongoing basis. If this is to be done as the series unfolds, then we are allowed only to use older data for the detection, not future data! Moreover, as protection against the many repeated tests, we would want to use a method that has a very low false positive rate. These considerations suggest running a simple, robust moving window outlier test over the data. There are many possibilities, but one simple, easily understood and easily implemented one is based on a running MAD: median absolute deviation from the median. This is a strongly robust measure of variation within the data, akin to a standard deviation. An outlying peak would be several MADs or more greater than the median. There is still some tuning to be done: how much of a deviation from the bulk of the data should be considered outlying and how far back in time should one look? Let's leave these as parameters for experimentation.
Here's an `R` implementation applied to data $x = (1,2,\ldots,n)$ (with $n=1150$ to emulate the data) with corresponding values $y$:

````
# Parameters to tune to the circumstances:
window <- 30
threshold <- 5
# An upper threshold ("ut") calculation based on the MAD:
library(zoo)  # rollapply()
ut <- function(x) {m <- median(x); m + threshold * median(abs(x - m))}
z <- rollapply(zoo(y), window, ut, align="right")
z <- c(rep(z[1], window-1), z)  # Use z[1] throughout the initial period
outliers <- y > z
# Graph the data, show the ut() cutoffs, and mark the outliers:
plot(x, y, type="l", lwd=2, col="#E00000", ylim=c(0, 20000))
lines(x, z, col="Gray")
points(x[outliers], y[outliers], pch=19)
````

Applied to a dataset like the red curve illustrated in the question, it produces this result: the data are shown in red, the 30-day window of median+5*MAD thresholds in gray, and the outliers--which are simply those data values above the gray curve--in black. (The threshold can only be computed beginning at the end of the initial window. For all data within this initial window, the first threshold is used: that's why the gray curve is flat between x=0 and x=30.) The effects of changing the parameters are (a) increasing the value of `window` will tend to smooth out the gray curve and (b) increasing `threshold` will raise the gray curve. Knowing this, one can take an initial segment of the data and quickly identify values of the parameters that best segregate the outlying peaks from the rest of the data. Apply these parameter values to checking the rest of the data. If a plot shows the method is worsening over time, that means the nature of the data are changing and the parameters might need re-tuning.

Notice how little this method assumes about the data: they do not have to be normally distributed; they do not need to exhibit any periodicity; they don't even have to be non-negative. All it assumes is that the data behave in reasonably similar ways over time and that the outlying peaks are visibly higher than the rest of the data.

If anyone would like to experiment (or compare some other solution to the one offered here), here is the code I used to produce data like those shown in the question.

````
n.length <- 1150
cycle.a <- 11
cycle.b <- 365/12
amp.a <- 800
amp.b <- 8000
set.seed(17)
x <- 1:n.length
baseline <- (1/2) * amp.a * (1 + sin(x * 2*pi / cycle.a)) * rgamma(n.length, 40, scale=1/40)
peaks <- rbinom(n.length, 1, exp(2*(-1 + sin(((1 + x/2)^(1/5) / (1 + n.length/2)^(1/5))*x * 2*pi / cycle.b))*cycle.b))
y <- peaks * rgamma(n.length, 20, scale=amp.b/20) + baseline
````
-

You could use the standard deviation of the last N measurements (you have to pick a suitable N). A good anomaly score would be how many standard deviations a measurement is from the moving average. -

Thank you for your response, but what if the signal exhibits a high seasonality (i.e. a lot of network measurements are characterized by a daily and weekly pattern at the same time, for example night vs day or weekend vs working days)? An approach based on standard deviation will not work in that case. – gianluca Aug 2 '10 at 20:57

For example, if I get a new sample every 10 minutes, and I'm doing outlier detection of the network bandwidth usage of a company, basically at 6pm this measure will fall down (this is an expected and totally normal pattern), and a standard deviation computed over a sliding window will fail (because it will trigger an alert for sure).
At the same time, if the measure falls down at 4pm (deviating from the usual baseline), this is a real outlier. – gianluca Aug 2 '10 at 20:58

What I do is group the measurements by hour and day of week and compare standard deviations of that. Still doesn't correct for things like holidays and summer/winter seasonality, but it's correct most of the time. The downside is that you really need to collect a year or so of data to have enough so that the stddev starts making sense. -

Thank you, that's exactly what I was trying to avoid (having a lot of samples as baseline), because I would like a really reactive approach (e.g. online detection, maybe "dirty", after 1-2 weeks of baseline). – gianluca Aug 10 '10 at 14:49

An alternative to the approach outlined by Rob Hyndman would be to use Holt-Winters Forecasting. The confidence bands derived from Holt-Winters can be used to detect outliers. Here is a paper that describes how to use Holt-Winters for "Aberrant Behavior Detection in Time Series for Network Monitoring". An implementation for RRDTool can be found here. -
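To give a feel for the Holt-Winters idea, here is a rough Python sketch (not from the answer or the linked paper; the parameter names and defaults are my own choices, in the spirit of Brutlag's method). It maintains level, trend, and seasonal terms, plus a smoothed absolute deviation per seasonal slot, and flags points that fall too far from the one-step forecast:

````
import numpy as np

def hw_outliers(y, period, alpha=0.1, beta=0.01, gamma=0.1, delta=3.0):
    # Additive Holt-Winters with a confidence band: flag y[t] when it is
    # more than `delta` smoothed absolute deviations from the forecast.
    level, trend = float(y[0]), 0.0
    season = np.zeros(period)                   # crude zero initialization
    dev = np.full(period, np.std(y[:period]) + 1e-9)
    flags = np.zeros(len(y), dtype=bool)
    for t in range(len(y)):
        s = t % period
        pred = level + trend + season[s]        # one-step-ahead forecast
        err = y[t] - pred
        flags[t] = abs(err) > delta * dev[s]
        dev[s] = gamma * abs(err) + (1 - gamma) * dev[s]
        old = level
        level = alpha * (y[t] - season[s]) + (1 - alpha) * (level + trend)
        trend = beta * (level - old) + (1 - beta) * trend
        season[s] = gamma * (y[t] - level) + (1 - gamma) * season[s]
    return flags
````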
Spectral analysis detects periodicity in stationary time series. The frequency domain approach based on spectral density estimation is an approach I would recommend as your first step. If for certain periods irregularity means a much higher peak than is typical for that period, then the series with such irregularities would not be stationary and spectral analysis would not be appropriate. But assuming you have identified the period that has the irregularities, you should be able to determine approximately what the normal peak height would be and then can set a threshold at some level above that average to designate the irregular cases. -

1 Could you explain how this solution would detect "local irregularities"? Presenting a worked example would be extremely helpful. (To be honest, I'm suggesting you do this because in carrying out such an exercise I believe you will discover that your suggestion is not effective for outlier detection. But I could be wrong...) – whuber♦ Sep 3 '12 at 15:03

@whuber The spectral analysis will only identify where all the peaks are. The next step would be to fit a time series model using sine and cosine terms with the frequencies determined from the spectral analysis and the amplitudes estimated from the data. If irregularities mean peaks with very high amplitudes then I think a threshold on the amplitude would be appropriate. If local irregularities means that for a period the amplitude sometimes is significantly larger than others, then the series is not stationary and spectral analysis would not be appropriate. – Michael Chernick Sep 3 '12 at 15:35

I don't follow the conclusion about lack of stationarity. For instance, the sum of a regular sinusoidal waveform and a marked Poisson point process would be stationary, but it would not exhibit any of the periodicity you seek. You would nevertheless find some strong peaks in the periodogram, but they would tell you nothing relevant to the irregular data peaks introduced by the Poisson process component. – whuber♦ Sep 3 '12 at 16:07

A stationary time series has a constant mean. If the peak for a periodic component can change over time it can cause the mean to change over time and hence the series would be nonstationary. – Michael Chernick Sep 3 '12 at 16:16

Anomaly detection requires the construction of an equation which describes the expectation. Intervention detection is available in both a non-causal and causal setting. If one has a predictor series like price then things can get a little complicated. Other responses here don't seem to take into account assignable cause attributable to user-specified predictor series like price and thus might be flawed. Quantity sold may well depend on price, perhaps previous prices and perhaps quantity sold in the past. The basis for the anomaly detection (pulses, seasonal pulses, level shifts and local time trends) is found in http://www.unc.edu/~jbhill/tsay.pdf. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9241793751716614, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/121984-affine-space.html
# Thread:

1. ## Affine space.

Let X be an affine space in F^n and let Y be an affine space in F^k. Consider the Cartesian product: X x Y = {(x,y) | x in X, y in Y} ⊂ F^n x F^k ≅ F^(n+k). Prove that if we regard X x Y as a subset of F^(n+k) under the identification ((x_1,...,x_n),(y_1,...,y_k)) = (x_1,...,x_n,y_1,...,y_k), then it is an affine space.

2. Originally Posted by Also sprach Zarathustra
Let X be an affine space in F^n and let Y be an affine space in F^k. Consider the Cartesian product: X x Y = {(x,y) | x in X, y in Y} ⊂ F^n x F^k ≅ F^(n+k). Prove that if we regard X x Y as a subset of F^(n+k) under the identification ((x_1,...,x_n),(y_1,...,y_k)) = (x_1,...,x_n,y_1,...,y_k), then it is an affine space.

What is your question here? What are $X,Y$? Are they simply subsets? Remember that when you talk about affine space you're simply talking about $\mathbb{F}^n$ for some $n$ without the usual vector space structure. Are you by any chance talking about algebraic subsets (zeroes of polynomials)? Please make a clear question!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8090872168540955, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/219567/real-analysis-intervals-and-boundedness
# Real Analysis Intervals and Boundedness

Let $I:=[a,b]$ and let $f:I\to\Bbb R$ be a (not necessarily continuous) function with the property that for every $x \in I$, the function $f$ is bounded on a neighborhood $V_{\delta_x}$ of $x$. Prove that $f$ is bounded on $I.$ I am not entirely sure where to start here. Can I say that $\lim x_n = x$ and so there exists $N_1$ in the natural numbers such that $x_n$ is in the neighborhood for any $n>N_1$, and then continue? -

1 Hint: $I$ is a compact set. What can you say about the collection $\{V_{\delta_x}\}_{x\in I}$ in relation to $I$? – J. Loreaux Oct 23 '12 at 18:45

We have not yet used compact sets in class. – Jackson Hart Oct 23 '12 at 18:49

Have you yet covered the fact that every bounded sequence of reals has a convergent subsequence? – Brian M. Scott Oct 23 '12 at 18:53

Yes, the B-W theorem. Could I say the following? Suppose that f is unbounded. Then there exists xn∈[a,b] with (for example) limn→∞f(xn)=∞. Because the sequence xn is bounded, it has a convergent subsequence, denote it also xn→x0∈[a,b]. Then use the fact that f is bounded near x0 to derive a contradiction. – Jackson Hart Oct 23 '12 at 18:56

Precisely so, Jackson. – Cameron Buie Oct 23 '12 at 18:59

## 2 Answers

Suppose that $f$ is not bounded on $I$. Then for each $n\in\Bbb N$ there is an $x_n\in I$ such that $|f(x_n)|\ge n$. The sequence $\langle x_n:n\in\Bbb N\rangle$ is bounded, since it lies in $I$, so it has a convergent subsequence $\langle x_{n_k}:k\in\Bbb N\rangle$. Let $x$ be the limit of this subsequence. There is an $m\in\Bbb N$ such that $x_{n_k}\in V_{\delta_x}$ for every $k\ge m$, contradicting the boundedness of $f$ on $V_{\delta_x}$.

Added: Recall that the points $x_n$ were chosen so that $|f(x_n)|\ge n$ for each $n\in\Bbb N$. In particular, $|f(x_{n_k})|\ge n_k$ for all $k\in\Bbb N$. Now let $M$ be any positive real number. There is a $k_0\in\Bbb N$ such that $n_{k_0}\ge M$. (Recall that $\langle x_{n_k}:k\in\Bbb N\rangle$ is a subsequence of $\langle x_n:n\in\Bbb N\rangle$, so $\langle n_k:k\in\Bbb N\rangle$ must be strictly increasing.) Now choose any $k\ge\max\{m,k_0\}$; then $n_k\ge n_{k_0}\ge M$, and $x_{n_k}\in V_{\delta_x}$ (because $k\ge m$), so $x_{n_k}$ is a point of $V_{\delta_x}$, and $|f(x_{n_k})|\ge M$. $M$ was arbitrary, so there are points of $V_{\delta_x}$ at which $f$ assumes values with arbitrarily large magnitudes $-$ which is exactly what it means for $f$ to be unbounded on $V_{\delta_x}$. -

It should be noted that $x\in I$ since $I$ is closed. – J. Loreaux Oct 23 '12 at 19:00

I understand everything here except the part that says |f(xn)|≥n. – Jackson Hart Oct 23 '12 at 19:04

Is it like that simply because it isn't bounded? – Jackson Hart Oct 23 '12 at 19:06

@Jackson: Yes: if it's not bounded, it assumes values with arbitrarily large magnitudes. In particular, for any $n\in\Bbb N$ there must be some point $x$ at which $f(x)\ge n$ or $f(x)\le -n$, i.e., $|f(x)|\ge n$. – Brian M. Scott Oct 23 '12 at 19:09

Ok, I get that now. I am still slightly confused for some reason. Could you help explain to me how this contradicts the boundedness of f on V? – Jackson Hart Oct 23 '12 at 19:12

If you are familiar with compactness, for every $x$ let $V_x$ be an appropriate neighbourhood of $x$, and $B_x$ an associated bound. Then the $V_x$ form an open cover of our interval $[a,b]$. There is therefore a finite subcover $\{V_{x_1},\dots, V_{x_n}\}$. Let $B=\max B_{x_i}$.
- We have not yet learned about compactness, so I am pretty sure that our teacher will not let us use that. – Jackson Hart Oct 23 '12 at 18:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9434703588485718, "perplexity_flag": "head"}
http://mathoverflow.net/questions/9001/inverting-a-covariance-matrix-numerically-stable/9544
## Inverting a covariance matrix numerically stable

Given an $n\times n$ covariance matrix $C$ where $n$ is around $250$, I need to calculate $x\cdot C^{-1}\cdot x^t$ for many vectors $x \in \mathbb{R}^n$ (the problem comes from approximating noise by an $n$-dimensional Gaussian distribution). What is the best way (in the sense of numerical stability) to solve these equations? One option is the Cholesky decomposition, because it is numerically quite stable and fast. Is the higher computational complexity of the singular value decomposition worth it? Or is there another, better possibility? -

## 2 Answers

Cholesky sounds like a good option for the following reason: once you've done the Cholesky factorization to get $C=LL^T$, where $L$ is triangular, $x^TC^{-1}x = ||L^{-1}x||^2$, and $L^{-1}x$ is easy to compute because it's a triangular system. The downsides to this are that even if $C$ is sparse, $L$ is probably dense, and also that you do the same amount of work for all $C$ and $x$, while other methods may allow you to exploit some special structure and get good approximations to the solution with less work. For those reasons, you might also consider Krylov subspace methods for computing $C^{-1}x$, like conjugate gradients (since $C$ is symmetric and positive definite), especially if $C$ is sparse. $n=250$ isn't terribly large, but still large enough that Krylov subspace methods could pay off if $C$ is sufficiently sparse. (There might actually be special methods for computing $x^TC^{-1}x$ itself as opposed to $C^{-1}x$, but I don't know of any.)

Edit: Since you care about stability, let me address that: Cholesky is pretty stable, as you note. Conjugate gradients is notoriously unstable, but it tends to work anyway, apparently. -

Sorry for having to abuse the answer field (if possible, maybe a moderator can accept Darsh's answer for me?): I just want to thank Darsh for the answer. I'll use Cholesky. -
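For reference, a sketch of the accepted approach in Python with NumPy/SciPy (my own illustration, not from the thread): factor $C$ once, then each quadratic form costs only a triangular solve.

````
import numpy as np
from scipy.linalg import solve_triangular

def quad_forms(C, X):
    # x^T C^{-1} x for each row x of X, via one Cholesky factorization
    # C = L L^T and the identity x^T C^{-1} x = ||L^{-1} x||^2.
    L = np.linalg.cholesky(C)                 # lower-triangular factor
    Z = solve_triangular(L, X.T, lower=True)  # L^{-1} x, all vectors at once
    return np.sum(Z * Z, axis=0)

rng = np.random.default_rng(0)
A = rng.standard_normal((250, 250))
C = A @ A.T + 250 * np.eye(250)               # a well-conditioned SPD matrix
X = rng.standard_normal((1000, 250))
vals = quad_forms(C, X)
# sanity check against a direct solve:
direct = np.einsum("ij,ij->i", X, np.linalg.solve(C, X.T).T)
assert np.allclose(vals, direct)
````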
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9524237513542175, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/257678/expectations-of-multiplied-wiener-processes
# Expectations of multiplied Wiener processes.

I wish to evaluate the following:

• $E[W(t-1)W(t)^2]$

• $E[W(t)^3]$

where $t > 1$, $W$ is a standard Brownian motion and we are at $\mathscr{F}_0$ now. I know that $E[W(t-1)W(t)] = \min{(t-1,t)} = (t-1)$ when $W$ is a standard Brownian motion, but I'm not sure how to solve the above expectations. I could do it easily if I could go $E[W(t-1)W(t)^2]=E[W(t-1)]E[W(t)^2]=0$ (duh); however, I can't do this because they're dependent random variables. Thanks. -

## 1 Answer

For the first one, let $X = W(t) - W(t-1)$ and write $W(t-1)W(t)^2 = W(t-1)(X + W(t-1))^2$. After expanding the square you have three terms, each of which is a product of independent random variables. For the second one: all you need to know is that $W(t)$ is normally distributed with mean 0 and variance $t$, so you know its density function, and can compute the expectation directly from that (integration by parts will help). -

1 I beg to differ for the second one: the distribution of `W(t)` is symmetric hence `E[u(W(t))]=0` for every odd function `u`, for example `u(x)=x^3`. – Did Dec 13 '12 at 6:34
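Spelling out the expansion suggested in the answer (this computation is not written out in the original thread): since $X = W(t)-W(t-1)$ is independent of $W(t-1)$, with $E[X] = E[W(t-1)] = 0$,

$$E[W(t-1)W(t)^2] = E[W(t-1)X^2] + 2E[W(t-1)^2 X] + E[W(t-1)^3] = E[W(t-1)]\,E[X^2] + 2E[W(t-1)^2]\,E[X] + 0 = 0,$$

where $E[W(t-1)^3] = 0$ because odd moments of a centered normal vanish. The same symmetry argument gives $E[W(t)^3] = 0$, as Did's comment notes.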
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9403489232063293, "perplexity_flag": "head"}
http://mathoverflow.net/questions/53488/algebraic-local-charts
## algebraic local charts

Hi, in differential geometry or in complex geometry one of the basic techniques for proving something is to do it on local charts and then to check that the construction glues with the other charts. What is the analogue of local charts (balls in $\mathbb{C}^n$) in algebraic geometry (not only complex algebraic geometry), i.e. something "smaller" or more local than the affine schemes? The question arises from the fact that many times I see that something is proved assuming that these local charts are Spec of some complete ring. Thank you in advance. -

I think the distinction between local charts and local local charts is superfluous ... – Martin Brandenburg Jan 27 2011 at 13:46

The examples that immediately come to my mind are charts that are adapted to the situation at hand. For instance an affine cover over which a given locally free module is actually free. This happens of course also in differential or complex geometry but there is a difference in that for many purposes balls are enough and hence can be chosen once and for all in these latter cases. Hence I agree with Martin. – Torsten Ekedahl Jan 27 2011 at 13:56

In the analytic case you have many more tools, like the inverse function theorem, which you do not have in the algebraic case. You can work around this using completions and then try to make the result algebraic, so I am asking if there is some rule for using this stuff. – unknown Jan 27 2011 at 14:11

In short, the answer is yes, if you use the etale topology. There is an inverse function theorem with respect to the etale topology. There is also a fairly general principle that you can lift solutions in the completion to solutions in an etale neighborhood (Artin approximation). A good toy example (and exercise!) is the fact that the map $C^* \to C^*$ given by $z \mapsto z^n$ becomes a local isomorphism in the etale topology. – ABayer Jan 28 2011 at 11:13

## 1 Answer

What you're looking for is the étale topology (I think). The fact is that Zariski opens are way too big to grasp stuff which should be more local than "Zariski local"; for example, any two smooth points on two $n$-dimensional varieties over $\mathbb{C}$ have isomorphic completed local rings (power series ring in $n$ indeterminates), but you will never find two isomorphic Zariski open neighborhoods, unless the varieties are birational! The way to go more locally is to consider étale morphisms to a scheme (which should morally be locally invertible, but they aren't in the algebraic context) to be "open subsets" of that scheme. Of course this doesn't make sense unless you modify the notion of topology, and in fact this is what led to the notion of Grothendieck topologies. A good introduction to this stuff is Milne's book "Étale cohomology", first chapters (I hope this is what you were looking for). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9326888918876648, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/94468-triple-integrals.html
# Thread:

1. ## Triple Integrals

Hey there! I'm trying to evaluate this triple integral, but I am having a lot of trouble dealing with the integral whose lower limit is a maximum of two functions. I am guessing there is some way one can handle such a limit... but then again... Thanks for your help!

2. If you sketch the region $A=\{(x,y): \ -2 \leq x \leq 2, \ \max (-\sqrt{4-x^2}, -\sqrt{3}|x| ) \leq y \leq \sqrt{4-x^2} \},$ you'll see that, in polar coordinates, we have $A=\{(r,\theta) : \ 0 \leq r \leq 2, \ \frac{-\pi}{3} \leq \theta \leq \frac{4 \pi}{3} \}.$ So, in cylindrical coordinates, your integral has a very simple form.
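As a numerical sanity check of that change of region (a Python sketch; the grid resolution is arbitrary), one can compare the area of $A$ computed from the Cartesian description against the polar sector formula:

```python
import numpy as np

# Sketch: compare the area of A computed from the Cartesian description
# (lower limit is a maximum of two curves) with the polar sector formula.
xs, ys = np.meshgrid(np.linspace(-2, 2, 2001), np.linspace(-2, 2, 2001))
upper = np.sqrt(np.clip(4 - xs**2, 0, None))
lower = np.maximum(-upper, -np.sqrt(3) * np.abs(xs))
inside = (ys >= lower) & (ys <= upper) & (xs**2 + ys**2 <= 4)

area_cartesian = inside.mean() * 16                # fraction of the 4x4 box
area_polar = 0.5 * 2**2 * (4*np.pi/3 + np.pi/3)    # (1/2) r^2 (theta span)
print(area_cartesian, area_polar)                  # both ~ 10*pi/3 = 10.47
```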
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9273369908332825, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/82286/rational-binomial-identity/82298
## Rational Binomial Identity

Can anyone give a reference, a proof, or an explanation of why Maple can evaluate this identity correctly:

$$n-i-1=(d-1)\sum_{l=1}^{n-i-1}\frac{\binom{n-i-1}{l}}{\binom{n-i+d-3}{l}}$$

- I recommend rewriting your question so that it makes more sense. Writing t for n-i-1 would be a good start, as well as asking what you really want, rather than a reference to why Maple does things correctly. Gerhard "Ask Me About System Design" Paseman, 2011.11.30 – Gerhard Paseman Nov 30 2011 at 16:50
- I don't get your question: do you mean that you are surprised that Maple can do something correctly? Or, rather than why, are you asking how Maple derives the identity? Anyway, the question is not clear as is; it needs motivation. See: mathoverflow.net/howtoask – Thierry Zell Nov 30 2011 at 17:20
- A possible explanation, a bit tautological: Maple is able to make a number of formal simplifications on expressions like yours, so as to reduce them to some of the thousands of standard identities that it contains in its memory. – Pietro Majer Nov 30 2011 at 17:25
- You can get Maple to show the steps it is taking. That may or may not give a satisfying result (if that is what you really want to know). Improve your notation and try the cases $d=1,2,3,4$ to get some insight. – Aaron Meyerowitz Nov 30 2011 at 17:52
- I am curious about why anyone would downvote this?! It is a perfectly reasonable question. – Igor Rivin Nov 30 2011 at 19:14

## 1 Answer

The canonical reference for this sort of thing is Petkovsek and Zeilberger's book "A=B". Maple (almost certainly) uses the Zeilberger-Wilf algorithm for hypergeometric summation (which really goes back to Bill Gosper). You can also read the Wilf-Zeilberger paper (Inventiones, around 1990).
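As a quick empirical check of the identity (a SymPy sketch, writing $t$ for $n-i-1$ as suggested in the comments; $d \ge 2$ is assumed, since for $d=1$ the right-hand side is zero):

```python
from sympy import binomial

# Verify t = (d-1) * sum_{l=1}^{t} C(t,l)/C(t+d-2,l) for small t and d,
# where t = n - i - 1. SymPy integers divide to exact rationals, so the
# comparison below is exact, not floating-point.
for t in range(1, 9):
    for d in range(2, 7):
        rhs = (d - 1) * sum(binomial(t, l) / binomial(t + d - 2, l)
                            for l in range(1, t + 1))
        assert rhs == t, (t, d, rhs)
print("identity verified for all tested (t, d)")
```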
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9186270236968994, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/4607/how-to-make-the-final-interpretation-of-pca
# How to make the final interpretation of PCA?

I have a question regarding loading the data back onto the original variables. For example: I have 10 variables a, b, c, ..., j. Using returns for the last 300 days, I get a return matrix of 300 x 10. Further, I have normalized the returns and calculated the 10 x 10 covariance matrix. I have then calculated the eigenvalues and eigenvectors, so I have 10 eigenvalues with 10 corresponding eigenvectors of size 10 x 1. The scree plot says that 5 components explain 80% of the variation, so now there are 5 eigenvectors and corresponding eigenvalues. How do I load them back onto the original variables, and how can I conclude which of the variables a, b, c, ..., j explains the maximum variation at time "t"?

## 2 Answers

To make things really clear, you have an original matrix $X$ of size $300 \times 10$ with all your returns. Now what you do is choose the first $k=5$ eigenvectors (i.e. enough to get 80% of the variation given your data) and form a matrix $U$ of size $10 \times 5$. Each of the columns of $U$ represents a portfolio of the original dataset, and all of them are orthogonal. PCA is a dimensionality-reduction method: you could use it to store your data in a matrix $Z$ of size $300 \times 5$ by doing: $$Z = X U$$ You can then recover an approximation of $X$, which we can call $\hat{X}$, as follows: $$\hat{X} = Z U^\intercal$$ Note that as your 5 eigenvectors only represent 80% of the variation of $X$, you will not have $X=\hat{X}$. In practice, for finance applications, I don't see why you would want to perform these reduction operations. In terms of factor analysis, you could sum the absolute values in each row of $U$; the variable with the highest score would be a good candidate, I think.

- When not relying on Bayesian techniques, I can see the advantage of PCA for dimension reduction. Consider high-dimension estimation of the covariance matrix where the number of observations is smaller than the number of securities. This typically leads to problems. Alternately, VAR or GARCH estimation on a small number of factors is usually faster with fewer parameters than estimating them on every security in the universe. – John Nov 27 '12 at 15:00
- @SRKX "In terms of factor analysis, you could sum the absolute value for each row of U; the vector with the highest score would be a good candidate I think." Candidate to do what? How would you use it to reach a trading decision? – ManInMoon Apr 26 at 7:37
- @ManInMoon the variable which adds the most variance to the sample. – SRKX♦ Apr 29 at 12:55

If you are asking which of the 10 variables is contributing most to the principal component, then look at your first eigenvector; each value reflects a single variable, so the largest value (by magnitude) in that eigenvector should give the variable with the largest contribution. Note that a large negative number means anticorrelation. The matrix you have is in fact a mapping from the 10-dimensional space of your variables onto the eigenspace of the matrix; the first eigenvector represents one of the basis vectors of this new eigenspace, expressed in the space of your 10-dimensional vectors. The analogy is that if you had 2 variables, x and y, then you could construct a similar 2d matrix and calculate its eigenvectors. The eigenvectors would show you the axes of the new space, and the first eigenvector is its principal component (axis). Caveat: I know a lot more about eigenvectors than I do about PCA, so there may be a subtlety I'm missing.
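To make the first answer concrete, here is a minimal NumPy sketch of the whole pipeline; the data is a random stand-in for the normalized returns, and the shapes follow the question (300 observations, 10 variables, $k=5$ retained components):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))      # stand-in for the normalized returns

C = np.cov(X, rowvar=False)             # 10 x 10 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)    # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
U = eigvecs[:, order[:5]]               # 10 x 5: top-5 eigenvectors

Z = X @ U                               # 300 x 5 reduced representation
X_hat = Z @ U.T                         # 300 x 10 approximation of X
print(np.linalg.norm(X - X_hat))        # nonzero: only part of the variance kept

# heuristic from the first answer: score each original variable by its
# summed absolute loadings across the retained components
scores = np.abs(U).sum(axis=1)
print("candidate variable:", "abcdefghij"[scores.argmax()])
```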
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9206182956695557, "perplexity_flag": "head"}
http://mathoverflow.net/questions/81233/a-small-variation-on-the-definition-of-epsilon-delta-absolutely-continuous
## A small variation on the definition of $\epsilon-\delta$ absolutely continuous measure? [closed]

A really simple question. Let $(X, M, \mu)$ be a measure space. A measure $\nu$ is absolutely continuous with respect to $\mu$ if $$\forall \epsilon \; \exists \delta : \quad \mu(E)<\delta \implies \nu(E)<\epsilon.$$ What is wrong with this alternate definition: $$\forall \epsilon \; \exists \delta : \quad \nu(E)<\epsilon \implies \mu(E)<\delta \ ?$$ More concretely, can anyone come up with an example of two measures which satisfy the second definition but not the first, and vice versa?

- "Close as too localized" – Qfwfq Nov 18 2011 at 14:37
- Your question would be more appropriate at math.stackexchange.com or one of the other sites listed in the FAQ. – S. Carnahan♦ Nov 19 2011 at 3:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9291394352912903, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/62393/questions-on-smoothness-of-riemann-metrics/62421
## Questions on smoothness of Riemann metrics

I've heard assertions of the following sort:

1. Let there be a Riemann metric (not very smooth, say of class $C^1$ or $C^2$, or maybe just $C^0$) in a neighbourhood of a point on a manifold. Then it is possible to choose coordinates so that the metric is $C^\infty$ or even analytic in them.
2. In the case of 3-dimensional manifolds it is possible to choose such coordinates globally, so the manifold becomes a smooth one. In the case of higher dimensions $n\ge4$ this is not true.

Are those assertions true? I heard them some time ago and am not sure I remember all the details. Is this a well-known thing? Are there some detailed references?

- For #1, see: DeTurck, Dennis M.; Kazdan, Jerry L. Some regularity theorems in Riemannian geometry. Ann. Sci. École Norm. Sup. (4) 14 (1981), no. 3, 249–260. You need assumptions on the curvature tensor (and its covariant derivatives) if you want higher regularity. I don't know what you mean by #2. Could you explain why the three-dimensional sphere has global co-ordinates? – Deane Yang Apr 20 2011 at 14:07
- In fact, for #1, in the DeTurck-Kazdan paper you find a counterexample in the first paragraph. Note that in this case "changing coordinates" actually corresponds to changing the atlas (as noted by Anton below). I wonder if for #1 you intend them to be Einstein manifolds? In which case the result is true using elliptic regularity. – Willie Wong Apr 20 2011 at 16:21
- On #2: if you believe that the best coordinates one can use are the harmonic ones, then in fact on any compact, closed manifold you will not be able to extend the coordinates globally... – Willie Wong Apr 20 2011 at 16:46
- @Igor: you posted a link to the Georgia Tech proxy, which most of us cannot go through :-p. The link Igor meant to post is MR: ams.org/mathscinet-getitem?mr=MR2204038 article: dx.doi.org/10.1090/S0002-9947-06-04090-6 – Willie Wong Apr 20 2011 at 17:34
- A more recent discussion of these issues is the following paper [Taylor, Trans. Amer. Math. Soc. 358 (2006), 2415-2423], available at ams.org/journals/tran/2006-358-06/… – Igor Belegradek Apr 20 2011 at 18:21

## 3 Answers

1. NO. Given a Riemannian manifold, it might be possible to improve smoothness by changing the atlas. It was proved by Shefel that the atlas with harmonic functions as coordinates is the best one. But the metric obtained this way might still be worse than $C^\infty$.
2. There is no local-global issue here: the harmonic atlas is defined locally, and it is the best one globally. So you get problems starting with dimension 2.

- Anton, what's the reference for Shefel? – Deane Yang Apr 20 2011 at 15:21
- @Deane: Shefel, S. Z. --- 1979 and 1982. Both in Russian; the second one is translated. – Anton Petrunin Apr 20 2011 at 15:40
- Anton, thanks. I'm still not sure which papers you're citing. Is it: "Smoothness of a conformal mapping of Riemannian spaces. (Russian) Sibirsk. Mat. Zh. 23 (1982), no. 1, 153–159, 222."? And how do his results compare to DeTurck and Kazdan? Did he prove the same results either earlier or independently? Or does he prove more? – Deane Yang Apr 20 2011 at 15:54
- @Deane: looking at the MR, the results seem to be roughly comparable. The main lemma mentioned in the MR is equivalent to theorem 2.1 of DeTurck-Kazdan.
And it looks like from the review of '79 you get an a priori estimate from the connection coefficients back up to the metric: Theorem 2 controls the regularity of the conformal map by the regularity of the conformal factor; compare that to Theorem 3.4 of DeTurck-Kazdan. So I would guess "roughly the same result, slightly earlier, not communicated well to the 'west' for the obvious reasons." – Willie Wong Apr 20 2011 at 16:41
- I will make sure to cite Shefel from now on. – Deane Yang Apr 21 2011 at 2:02

I confirm Anton's answer (No, and the phenomenon is essentially local), but I suggest another explanation, which works for $C^1$ 2-dimensional metrics. We look for a counterexample in the class of metrics that are $C^2$ everywhere except on some line, where they are only $C^1$. Then it is possible, and relatively easy, to cook up an example such that the curvature of the metric is discontinuous along this special line; you can do it in the class of conformally flat metrics whose conformal coefficient depends on one variable only, with the line being a level set of this variable. Since in order to determine the curvature of a metric you only need the distance function corresponding to this metric, and the distance function does not depend on how smooth your atlas is, you cannot make this metric smooth by changing the atlas.

If you combine the work of Jost-Karcher on almost linear co-ordinates with the work of DeTurck-Kazdan and Shefel on harmonic co-ordinates (I recommend a paper of Stefan Peters on a proof of the Gromov convergence theorem), you get the following: If there exist local co-ordinates in which a Riemannian metric $g$ is $C^1$ and has bounded sectional curvature, then there exist local (harmonic) co-ordinates in which the metric is $C^{1,\alpha}$ for every $\alpha \in (0,1)$. If, in addition to this, the covariant derivatives of the Ricci tensor up to order $k$ are locally bounded, then there exist local harmonic co-ordinates in which the metric is $C^{k+1,\alpha}$ for any $\alpha \in (0,1)$. If, in particular, the covariant derivatives of Ricci of all orders are bounded, then there exist local harmonic co-ordinates in which the metric is $C^\infty$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9085257053375244, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2813/how-does-djbs-nistp224-manage-to-fit-compressed-points-into-224-bits
How does DJB's nistp224 manage to fit compressed points into 224 bits?

DJB's `nistp224` program purports to be an implementation of elliptic curve Diffie-Hellman relative to the standard NIST P-224 elliptic curve. To the best of my understanding, ECDH relative to this curve should produce 225-bit public messages (compressed points consisting of a 224-bit x-coordinate and a 1-bit y-coordinate), which require 29 bytes to encode (in SEC1 format). Yet somehow `nistp224` manages to produce 28-byte messages, which are therefore one bit short. How does it do this? I find the source code utterly unenlightening. (Rationale for question: it sure would be nice if I could shave a byte off my messages on the wire.)

1 Answer

That's because you can do ECDH by exchanging only the x-coordinates of your public values; as long as the shared secret depends only on the x-coordinate, everything works out. Here's the fundamental property of elliptic curves that makes this work: the x-coordinate of $nP$ is only a function of the x-coordinate of $P$ (and $n$); it does not depend on the y-coordinate of $P$. So, if the two sides select their secret values as $a$ and $b$, they compute $aG$ and $bG$ respectively. They then each send the x-coordinate of their point to the other side. What both sides can then do is compute the x-coordinate of $a(bG) = b(aG) = (ab)G$. This x-coordinate is the same value on both sides. Now, neither side knows the y-coordinate of the common point; however, the x-coordinate is sufficient for a shared secret.

- Makes perfect sense. Thank you! – Zack Jun 5 '12 at 22:36
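Here is a toy illustration (not cryptographic; the tiny curve parameters are made up, and Python 3.8+ is needed for the modular inverses via `pow`) of the property the answer relies on: $P=(x,y)$ and $-P=(x,-y)$ share an x-coordinate, and so do $nP$ and $n(-P)$ for every $n$:

```python
# Toy curve y^2 = x^3 + a*x + b (mod p); parameters chosen small on purpose.
p, a, b = 97, 2, 3

def ec_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(n, P):
    """Double-and-add scalar multiplication."""
    R = None
    while n:
        if n & 1: R = ec_add(R, P)
        P = ec_add(P, P); n >>= 1
    return R

# find some point on the curve with y != 0, and its negation
P = next((x, y) for x in range(p) for y in range(1, p)
         if (y * y - (x**3 + a * x + b)) % p == 0)
Q = (P[0], (-P[1]) % p)               # same x, opposite y

for n in range(2, 10):
    nP, nQ = ec_mul(n, P), ec_mul(n, Q)
    assert (nP is None) == (nQ is None)
    if nP is not None:
        assert nP[0] == nQ[0]         # x-coordinates agree
print("x(nP) depends only on x(P) for all tested n")
```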
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8917549252510071, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/163067/terminology-of-multilinear-form/163077
# terminology of multilinear form

In the literature we see the terminology "multilinear form" or "$n$-form". I'm used to the word "form" referring to a homogeneous polynomial, but here we define it as a map $f:V^n\to F$ ($V$ an $F$-vector space) such that $f$ is linear in each of its arguments. I'm confused by the terminology: for example, is there some connection between a bilinear form and a quadratic form?

- If you write $n-$form with the hyphen INSIDE the $\TeX$ code, then it looks like a minus sign rather than a hyphen. I changed it to $n$-form. Similarly $F-$vector and $F$-vector. – Michael Hardy Jun 25 '12 at 23:08

## 1 Answer

If $B : V^2 \to F$ is a symmetric bilinear form, then $q(v) = B(v, v)$ defines a quadratic form $q : V \to F$. If $F$ does not have characteristic $2$, then conversely, if $q : V \to F$ is a quadratic form, then $$B(v, w) = \frac{q(v + w) - q(v) - q(w)}{2}$$ is a symmetric bilinear form. More generally, if $f : V^n \to F$ is a symmetric multilinear form, then (in sufficiently large characteristic) it can be (not quite canonically) identified with an element of the symmetric power $\text{Sym}^n(V^{\ast})$. This symmetric power lies in the symmetric algebra $\text{Sym}(V^{\ast})$, which one can think of as an abstract form of a polynomial ring (it is the ring of polynomial functions on $V$).
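A quick symbolic check of the polarization identity above (a SymPy sketch, using the quadratic form $q(v)=v^{\mathsf T}Av$ for a made-up symmetric matrix $A$):

```python
import sympy as sp

# Check that (q(v+w) - q(v) - q(w))/2 recovers the bilinear form v^T A w.
v1, v2, w1, w2 = sp.symbols('v1 v2 w1 w2')
A = sp.Matrix([[1, 2], [2, 5]])          # symmetric, so B is symmetric too
v = sp.Matrix([v1, v2]); w = sp.Matrix([w1, w2])

q = lambda u: (u.T * A * u)[0]           # quadratic form q(u) = u^T A u
B = sp.expand((q(v + w) - q(v) - q(w)) / 2)
assert B == sp.expand((v.T * A * w)[0])
print("polarization identity verified:", B)
```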
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9043903946876526, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/76582-mechanics-problem-angular-velocity.html
# Thread:

1. ## Mechanics problem - angular velocity

Hi, can anybody help with this question please? Two particles of mass m are joined by a thin rigid rod of negligible mass and length l. They lie on a frictionless plane, and a third particle of mass m, travelling with speed V0, hits one of the two particles in the way depicted in the attached figure. Compute the angular velocity of the rod about its mid point after the collision - a) if the collision is elastic, so that the third mass recoils and moves straight back; b) if the collision is completely inelastic, i.e. if the two particles colliding get stuck together. Thanks

2. ## Momentum (and energy)

Hello jackiemoon

First let me say that it is incorrect, and misleading, to talk about the angular velocity of the rod 'about its mid point'. The angular velocity of the rod is simply the angular velocity of the rod; it is not about any particular point. Why? If at a certain instant the rod makes an angle $\theta$ with a fixed line in the plane, then the angular velocity of the rod is the rate of change of $\theta$ with respect to time. You can calculate the angular velocity of the rod as follows:

• find the linear velocity of one point on the rod relative to another point on the rod
• then find the component of this velocity at right angles to the rod
• then divide this component by the distance between these two points.

One of these two points may be the centre of the rod and the other one end of the rod; or it may be easier to take the points at opposite ends of the rod and ignore the centre altogether. So, what principles can you use to solve your problem? I shall call the moving particle A, the particle on the rod that is struck B, and the other end of the rod C. Let's deal with (a) first: the elastic collision. You can say:

• At the moment of impact, there is no external impulse. So the (linear) momentum of the system along and perpendicular to the line AB is unchanged.
• The impulse on A is along the line BA, so A bounces back along BA, the line along which it came. Call this rebound velocity $v_1$.
• The impulse on C is along the line of the rod, so C moves along the line BC.
• Split the velocity of B into two components: one along the rod ($v_2$) and one perpendicular to the rod ($v_3$). Then the velocity of C = $v_2$, because it's a rigid rod.
• Finally, there is no loss of kinetic energy if the collision is perfectly elastic.

These will give you three equations, from which you can find, in terms of $v_0$, the velocity of A, and the two components of the velocity of B (and hence the velocity of C).
I make these, respectively: $v_1=\frac{v_0}{7}, v_2 = \frac{2\sqrt{2}v_0}{7}, v_3 = \frac{4\sqrt{2}v_0}{7}$

Bearing in mind what I said initially, the velocity of B relative to C is therefore $\frac{4\sqrt{2}v_0}{7}$ at right angles to the rod, and the angular velocity of the rod is therefore $\frac{4\sqrt{2}v_0}{7l}$

In part (b), after impact, A and B coalesce to form a particle with mass $2m$. So let the components of velocity of this new particle again be $v_2$ and $v_3$, with C's velocity, as before, being $v_2$. We can no longer use the KE equation, but the two components of momentum of the system along and at right angles to AB are still unchanged at impact. So this time you get two equations, which give (according to my working): $v_2 = \frac{\sqrt{2}v_0}{6}, v_3 = \frac{\sqrt{2}v_0}{4}$ and the angular velocity this time is $\frac{\sqrt{2}v_0}{4l}$

Grandad

3. Thanks for the reply Grandad. Your explanation is very helpful and I appreciate you taking the time to go into such detail.
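As a numerical sanity check of these results (a sketch; the figure is unavailable, but part (a)'s values only balance if A's velocity makes 45° with the rod, so that angle is assumed here, with $v_0=m=l=1$):

```python
import math

c = 1 / math.sqrt(2)        # cos 45 = sin 45; take the rod along the x-axis
v0 = 1.0
px = py = v0 * c            # incoming momentum components of A (mass 1)

# part (a): A rebounds with v1 along BA; B moves (v2, v3); C moves (v2, 0)
v1, v2, v3 = v0 / 7, 2 * math.sqrt(2) * v0 / 7, 4 * math.sqrt(2) * v0 / 7
assert math.isclose(-v1 * c + 2 * v2, px)               # momentum along rod
assert math.isclose(-v1 * c + v3, py)                   # momentum across rod
assert math.isclose(v1**2 + 2 * v2**2 + v3**2, v0**2)   # kinetic energy
print("part (a): omega =", v3)                          # v3 / l with l = 1

# part (b): A+B (mass 2) moves (v2, v3); C moves (v2, 0); KE not conserved
v2, v3 = math.sqrt(2) * v0 / 6, math.sqrt(2) * v0 / 4
assert math.isclose(3 * v2, px)                         # momentum along rod
assert math.isclose(2 * v3, py)                         # momentum across rod
print("part (b): omega =", v3)                          # v3 / l with l = 1
```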
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9153439402580261, "perplexity_flag": "head"}
http://www.conservapedia.com/Gaussian_Elimination
# Gaussian Elimination

### From Conservapedia

Gaussian Elimination is a simple method for solving a system of simultaneous linear equations by expressing them in matrix form and then adding and subtracting multiples of rows from each other in order to create all ones on the main diagonal and all zeros off the diagonal, thereby facilitating identification of solutions or the range of solutions. The three types of elementary row operations used are:

• swap rows
• multiply a row by a constant
• add or subtract rows from each other

## Example

Here, we work a simple example. Consider the two equations:

$3x-2y=7 \$
$x+\frac{y}{2} = -2 \$.

We represent this collection of equations in matrix form as:

$\begin{pmatrix}3 & -2 & 7\\1 & 1/2 & -2\end{pmatrix}$.

Now, the technique of Gaussian elimination allows us to add and subtract rows from each other, as well as multiples of rows. The idea is to achieve 1s on the main diagonal, and 0s elsewhere. This will become clear as we work the example. Since the row $\begin{pmatrix}4 & 2 & -8 \end{pmatrix}$ is a multiple of the second row ($4*\begin{pmatrix}1 & 1/2 & -2 \end{pmatrix}$), we can add this row to the first row:

$\begin{pmatrix}3 & -2 & 7\\1 & 1/2 & -2\end{pmatrix} \to \begin{pmatrix}3+4 & -2+2 & 7-8\\1 & 1/2 & -2\end{pmatrix}=\begin{pmatrix}7 & 0 & -1\\1 & 1/2 & -2\end{pmatrix}$

As you can see, we have already begun altering the matrix so that there will be 0s off of the main diagonal. In our next step, we will divide the top row by 7 and multiply the bottom row by 2 to put 1s in the main diagonal:

$\begin{pmatrix}7 & 0 & -1\\1 & 1/2 & -2\end{pmatrix} \to \begin{pmatrix}1 & 0 & -1/7\\2 & 1 & -4\end{pmatrix}$

Finally, we subtract twice the top row from the bottom to put a zero off the main diagonal in the bottom:

$\begin{pmatrix}1 & 0 & -1/7\\2 & 1 & -4\end{pmatrix} \to \begin{pmatrix}1 & 0 & -1/7\\0 & 1 & -4+2/7\end{pmatrix}=\begin{pmatrix}1 & 0 & -1/7\\0 & 1 & -26/7\end{pmatrix}$

This yields the solution to the original equations: $x=-1/7, y=-26/7 \$. To check, we insert these values into the original equations:

$3\cdot\frac{-1}{7} - 2\cdot\frac{-26}{7} = \frac{-3+52}{7} = \frac{49}{7} = 7$

$\frac{-1}{7} + \frac{1}{2}\cdot\frac{-26}{7} = \frac{-1-13}{7} = \frac{-14}{7} = -2$

as expected.
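The same elimination can be carried out programmatically; here is a short NumPy sketch of the row operations above:

```python
import numpy as np

# Reduce the augmented matrix with elementary row operations, then read
# off x and y from the last column.
M = np.array([[3.0, -2.0, 7.0],
              [1.0, 0.5, -2.0]])

M[0] += 4 * M[1]          # add 4x(row 2) to row 1: zeros out the y entry
M[0] /= M[0, 0]           # scale row 1 so the pivot is 1
M[1] -= M[1, 0] * M[0]    # clear the x entry in row 2
M[1] /= M[1, 1]           # scale row 2 so the pivot is 1

x, y = M[0, 2], M[1, 2]
print(x, y)               # -1/7 and -26/7
assert np.isclose(3*x - 2*y, 7) and np.isclose(x + y/2, -2)
```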
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8926413059234619, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/3509/why-does-srp-6a-use-k-hn-g-instead-of-the-k-3-in-srp-6/3520
Why does SRP-6a use k = H(N, g) instead of the k = 3 in SRP-6?

I've been reading up on the Secure Remote Password protocol (SRP). There are a couple of different versions of the protocol (the original published version being designated SRP-3, with two subsequent enhancements, 6 and 6a). There was a very limited attack on version 3, where the attacker could test two password guesses per falsified attempt against the server instead of just one. This was because in step 2, where the client submits ($g^x + g^b$) to the server, the attacker could in fact set $b$ to be a second password guess, and since the server doesn't have the password, it can't tell that $b$ isn't randomly chosen (like it's supposed to be). The attack and remedy are detailed in the paper for SRP-6. The remedy described in SRP-6 is that the client send $k·g^x + g^b$ instead, where $k = 3$. However, there is also a version SRP-6a (which is the version detailed on the page above), which sets $k = H(N, g)$. Since both $N$ and $g$ are publicly known values ($N$ is a large safe prime, and $g$ is a generator), and the $(g,N)$ group is used for all transactions in a given SRP system (independent of password, user or session), $k$ is still a constant value. So the question is: what's the justification/benefit of picking this alternate value $H(N,g)$ for $k$? This value of $k$ is used in the TLS/SRP implementation, but I haven't found any explanation for why.

- Might prevent some interactions between instances of the protocol that use different generators. – CodesInChaos Aug 10 '12 at 18:53
- @CodesInChaos Maybe... you mean a different (N,g)? Or just a different g for the same N? My understanding is that given an N, g is usually fixed by convention (check the RFC which has defined (N,g) tuples). – Robert I. Jr. Aug 11 '12 at 1:46
- Or are you saying that there might be an attack across different (N,g)'s entirely which reuse the same k? It seems like since k is constant across sessions, any attacks leveraging k would be more relevant using data collected from multiple sessions on the same (N,g,k) (i.e. other users on the same server). – Robert I. Jr. Aug 11 '12 at 1:54
- I suspect in some implementations the server chooses the (N,g) pair freely and can choose it in a way that N is identical to some other server, and uses a different specific g for some kind of attack. But I have no clue what that attack might be. – CodesInChaos Aug 11 '12 at 6:48

1 Answer

If k is a constant, such as 3, it becomes possible to select a pair (N,g) such that the discrete log of k to the base g is known, which would enable the two-for-one guessing attack again.

- I see, that makes sense. So then by doing $k = H(N,g)$, it becomes very unlikely that $\log_g(k)$ is known / trivially calculable. I suppose varying k would be possible (say, salting the hash of (N,g) per session), but not useful, because if the attacker could enable the two-for-one attack by solving $\log_g(k)$, he could also recover $a$ from $A$, or $x$ from $v$ (assuming he could grab the verifiers), which would allow him to break the protocol entirely. – Robert I. Jr. Aug 14 '12 at 22:46
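For illustration, here is a toy sketch of the $k = H(N, g)$ derivation. The parameters below are made up and far too small for real use; as I understand it, actual TLS-SRP (RFC 5054) uses a standardized group and SHA-1 over $N$ concatenated with $g$ padded to the byte length of $N$. The point is only that $k$ is bound to the specific $(N, g)$ pair:

```python
import hashlib

N = 0xE95E4A5F737059DC60DFC7AD95B3D8139515620F   # NOT a real SRP group
g = 2

def H(*parts):
    """Hash integers, each left-padded to the byte length of N."""
    width = (N.bit_length() + 7) // 8
    h = hashlib.sha256()
    for p in parts:
        h.update(p.to_bytes(width, 'big'))
    return int.from_bytes(h.digest(), 'big') % N

k = H(N, g)
print(hex(k))   # changes whenever (N, g) changes
```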
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9609854817390442, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/122764?sort=newest
## Characterising ergodicity of continuous maps

Hello all. Suppose $X$ is a Polish space, $\mu$ is a Borel probability measure on $X$, and $T:X \to X$ is a continuous $\mu$-preserving map which is not ergodic. Does there necessarily exist a Borel set $A \subset X$ such that

• $\mu(A) \in (0,1)$;
• $\mu(A \ \triangle \ T^{-1}(A)) = 0$;
• $A$ has non-empty interior?

What about if we replace the third point with the stronger requirement that $A$ is open? Many thanks, Julian.

- Does $T$ preserve the measure? – Joel Moreira Feb 24 at 1:07
- Good point! Let's assume it does. (I'll now edit the question accordingly.) – Julian Newman Feb 24 at 1:36

## 1 Answer

Let $T \colon X \to X$ be a minimal transformation of a compact metric space which is not uniquely ergodic, let $\mu$ be a non-ergodic $T$-invariant measure on $X$, and let $A$ be a set with nonempty interior such that $\mu(A \triangle T^{-1}A)=0$. I claim that necessarily $\mu(A)=1$, contradicting the above conjecture. (Some constructions of transformations with the above combination of properties may be found, for example, in the textbook Ergodic Theory on Compact Spaces by Denker, Grillenberger and Sigmund, or in John Oxtoby's classic 1952 article Ergodic sets.) Let $U \subseteq A$ be open and nonempty. Since $T$ is minimal we have $\bigcup_{n=0}^\infty T^{-n}U=X$, and indeed even $\bigcup_{n=0}^NT^{-n}U=X$ for some integer $N$ since $X$ is compact. In particular $\bigcup_{n=0}^N T^{-n}A=X$. Let us write $$\bigcup_{n=0}^N T^{-n}A = A \cup \bigcup_{n=1}^N \left(\left( T^{-n}A\right)\setminus \bigcup_{k=0}^{n-1} T^{-k}A\right)=A \cup \bigcup_{n=1}^N B_n,$$ say, which is a disjoint union. We would like to show that this union has measure identical to that of $A$. For each $n$ we have $$\mu(B_n)=\mu\left(T^{-n}A\setminus \bigcup_{k=0}^{n-1} T^{-k}A\right)\leq \mu\left(T^{-n}A \setminus T^{-(n-1)}A\right)=\mu\left(T^{-1}A \setminus A\right)=0$$ by invariance and the hypothesis $\mu(A \triangle T^{-1}A)=0$. It follows that $$\mu(A)=\mu\left(\bigcup_{n=0}^N T^{-n}A \right)=\mu(X)=1$$ so the desired situation cannot occur.

- Thank you. This is most helpful. Do you know any conditions on $X$ under which, if $\mu$ is a strictly positive probability measure on $X$, then every minimal $\mu$-preserving continuous transformation is ergodic? (E.g. is this true for Euclidean space $X=\mathbb{R}^n$?) – Julian Newman Feb 24 at 2:44
- @Julian: This is hopeless. On any reasonable space, there will be transformations that are minimal, but not strictly ergodic. – Anthony Quas Feb 24 at 3:26
- @Julian: this is equivalent to asking for a condition on $X$ such that every minimal transformation on $X$ is uniquely ergodic, i.e. has only one invariant measure. (If a transformation has two distinct invariant measures then a strict linear combination of the two is never ergodic.) Such conditions do exist: finite spaces $X$ have this property, as does the circle (I think), but as Anthony says this is a severely restrictive requirement. The broader stroke of your question seems to be whether ergodicity can be easily characterised using only topological concepts. The answer to this is "No".
– Ian Morris Feb 24 at 12:08
- @Ian and Anthony: Just to be clear, I did not say that I require every invariant probability measure of a minimal transformation to be ergodic - I just required that every strictly positive invariant probability measure of a minimal transformation had to be ergodic. (By strictly positive, I mean that its support is the whole of $X$.) Is this still equivalent to requiring that every minimal transformation is uniquely ergodic? (And in the case $X=\mathbb{R}^n$, if the requirement still is not satisfied, what about if we weaken the requirement by restricting to, say, diffeomorphisms on $X$?) – Julian Newman Feb 24 at 14:38
- @Julian: Every invariant probability measure of a minimal transformation is fully supported, because otherwise its support would be a nonempty closed invariant proper subset, contradicting minimality. So the two statements are equivalent. – Ian Morris Feb 24 at 16:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9240479469299316, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/12/09/young-tableaux/?like=1&source=post_flair&_wpnonce=5092e672ce
# The Unapologetic Mathematician

## Young Tableaux

We want to come up with some nice sets for our symmetric group to act on. Our first step in this direction is to define a "Young tableau". If $\lambda\vdash n$ is a partition of $n$, we define a Young tableau of shape $\lambda$ to be an array of numbers. We start with the Ferrers diagram of the partition $\lambda$, and we replace the dots with the numbers $1$ to $n$ in any order. Clearly, there are $n!$ Young tableaux of shape $\lambda$ if $\lambda\vdash n$. For example, if $\lambda=(2,1)$, the Ferrers diagram is

$\displaystyle\begin{array}{cc}\bullet&\bullet\\\bullet&\end{array}$

We see that $(2,1)\vdash3$, and so there are $3!=6$ Young tableaux of shape $(2,1)$. They are

$\displaystyle\begin{aligned}\begin{array}{cc}1&2\\3&\end{array}&,&\begin{array}{cc}1&3\\2&\end{array}&,&\begin{array}{cc}2&1\\3&\end{array}\\\begin{array}{cc}2&3\\1&\end{array}&,&\begin{array}{cc}3&1\\2&\end{array}&,&\begin{array}{cc}3&2\\1&\end{array}\end{aligned}$

We write $t_{i,j}$ for the entry in the $(i,j)$ place. For example, the last tableau above has $t_{1,1}=3$, $t_{1,2}=2$, and $t_{2,1}=1$. We also call a Young tableau $t$ of shape $\lambda$ a "$\lambda$-tableau", and we write $\mathrm{sh}(t)=\lambda$. We can write a generic $\lambda$-tableau as $t^\lambda$.
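For readers who like to experiment, here is a small Python sketch that enumerates the Young tableaux of shape $(2,1)$ exactly as described above, by filling the Ferrers diagram with the numbers $1$ to $n$ in every possible order:

```python
from itertools import permutations

# Enumerate the Young tableaux of shape (2, 1): fill the Ferrers diagram
# row by row with each permutation of 1..n.
shape = (2, 1)
n = sum(shape)

tableaux = []
for perm in permutations(range(1, n + 1)):
    entries = iter(perm)
    rows = tuple(tuple(next(entries) for _ in range(r)) for r in shape)
    tableaux.append(rows)

print(len(tableaux))   # 3! = 6
for t in tableaux:
    print(t)           # e.g. ((3, 2), (1,)) has t_{1,1}=3, t_{1,2}=2, t_{2,1}=1
```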
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 26, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9067655801773071, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Cluster_analysis
# Cluster analysis

The result of a cluster analysis shown as the coloring of the squares into three clusters.

Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, and bioinformatics.

Cluster analysis itself is not one specific algorithm, but the general task to be solved. It can be achieved by various algorithms that differ significantly in their notion of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances among the cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem. The appropriate clustering algorithm and parameter settings (including values such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative process of knowledge discovery or interactive multi-objective optimization that involves trial and error. It will often be necessary to modify data preprocessing and model parameters until the result achieves the desired properties.

Besides the term clustering, there are a number of terms with similar meanings, including automatic classification, numerical taxonomy, botryology (from Greek βότρυς "grape") and typological analysis. The subtle differences are often in the usage of the results: while in data mining the resulting groups are the matter of interest, in automatic classification it is primarily their discriminative power that is of interest. This often leads to misunderstandings between researchers coming from the fields of data mining and machine learning, since they use the same terms and often the same algorithms, but have different goals.

## Clusters and clusterings

The notion of a "cluster" cannot be precisely defined,[1] which is one of the reasons why there are so many clustering algorithms. There is of course a common denominator: a group of data objects. However, different researchers employ different cluster models, and for each of these cluster models again different algorithms can be given. The notion of a cluster, as found by different algorithms, varies significantly in its properties. Understanding these "cluster models" is key to understanding the differences between the various algorithms. Typical cluster models include:

• Connectivity models: for example, hierarchical clustering builds models based on distance connectivity.
• Centroid models: for example, the k-means algorithm represents each cluster by a single mean vector.
• Distribution models: clusters are modeled using statistical distributions, such as the multivariate normal distributions used by the expectation-maximization algorithm.
• Density models: for example, DBSCAN and OPTICS define clusters as connected dense regions in the data space.
• Subspace models: in biclustering (also known as co-clustering or two-mode clustering), clusters are modeled with both cluster members and relevant attributes.
• Group models: some algorithms (unfortunately) do not provide a refined model for their results and just provide the grouping information.
• Graph-based models: a clique, i.e., a subset of nodes in a graph such that every two nodes in the subset are connected by an edge, can be considered a prototypical form of cluster. Relaxations of the complete connectivity requirement (a fraction of the edges can be missing) are known as quasi-cliques.

A "clustering" is essentially a set of such clusters, usually containing all objects in the data set. Additionally, it may specify the relationship of the clusters to each other, for example a hierarchy of clusters embedded in each other. Clusterings can be roughly distinguished as:

• hard clustering: each object belongs to a cluster or not
• soft clustering (also: fuzzy clustering): each object belongs to each cluster to a certain degree (e.g. a likelihood of belonging to the cluster)

There are also finer distinctions possible, for example:

• strict partitioning clustering: here each object belongs to exactly one cluster
• strict partitioning clustering with outliers: objects can also belong to no cluster, and are considered outliers.
• overlapping clustering (also: alternative clustering, multi-view clustering): while usually a hard clustering, objects may belong to more than one cluster.
• hierarchical clustering: objects that belong to a child cluster also belong to the parent cluster
• subspace clustering: while an overlapping clustering, within a uniquely defined subspace, clusters are not expected to overlap.

## Clustering algorithms

Clustering algorithms can be categorized based on their cluster model, as listed above. The following overview will only list the most prominent examples of clustering algorithms, as there are probably a few dozen (if not over 100) published clustering algorithms. Not all provide models for their clusters and thus cannot easily be categorized. An overview of algorithms explained in Wikipedia can be found in the list of statistics algorithms. There is no objectively "correct" clustering algorithm, but as has been noted, "clustering is in the eye of the beholder."[1] The most appropriate clustering algorithm for a particular problem often needs to be chosen experimentally, unless there is a mathematical reason to prefer one cluster model over another. It should be noted that an algorithm designed for one kind of model has no chance on a data set that contains a radically different kind of model.[1] For example, k-means cannot find non-convex clusters.[1]

### Connectivity based clustering (hierarchical clustering)

Main article: Hierarchical clustering

Connectivity based clustering, also known as hierarchical clustering, is based on the core idea of objects being more related to nearby objects than to objects farther away. As such, these algorithms connect "objects" to form "clusters" based on their distance. A cluster can be described largely by the maximum distance needed to connect parts of the cluster. At different distances, different clusters will form, which can be represented using a dendrogram; this explains where the common name "hierarchical clustering" comes from: these algorithms do not provide a single partitioning of the data set, but instead provide an extensive hierarchy of clusters that merge with each other at certain distances. In a dendrogram, the y-axis marks the distance at which the clusters merge, while the objects are placed along the x-axis such that the clusters don't mix.
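Concretely, the hierarchy can be computed and then cut at a chosen distance to obtain flat clusters; a sketch using SciPy (the data and the cut level are made up):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two well-separated 2-D blobs; single-linkage merges within-blob points
# at small distances and only joins the blobs at a much larger distance.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

Z = linkage(X, method='single')                     # dendrogram merge table
labels = fcluster(Z, t=1.0, criterion='distance')   # cut at distance 1.0
print(len(set(labels)))                             # 2 clusters at this cut
```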
Connectivity based clustering is a whole family of methods that differ by the way distances are computed. Apart from the usual choice of distance functions, the user also needs to decide on the linkage criterion to use (since a cluster consists of multiple objects, there are multiple candidates to compute the distance to). Popular choices are known as single-linkage clustering (the minimum of object distances), complete-linkage clustering (the maximum of object distances) and UPGMA ("Unweighted Pair Group Method with Arithmetic Mean", also known as average linkage clustering). Furthermore, hierarchical clustering can be agglomerative (starting with single elements and aggregating them into clusters) or divisive (starting with the complete data set and dividing it into partitions).

While these methods are fairly easy to understand, the results are not always easy to use, as they will not produce a unique partitioning of the data set, but a hierarchy the user still needs to choose appropriate clusters from. The methods are not very robust towards outliers, which will either show up as additional clusters or even cause other clusters to merge (known as the "chaining phenomenon", in particular with single-linkage clustering). In the general case, the complexity is $\mathcal{O}(n^3)$, which makes these methods too slow for large data sets. For some special cases, optimal efficient methods (of complexity $\mathcal{O}(n^2)$) are known: SLINK[2] for single-linkage and CLINK[3] for complete-linkage clustering. In the data mining community these methods are recognized as a theoretical foundation of cluster analysis, but often considered obsolete. They did however provide inspiration for many later methods such as density based clustering.

• Single-linkage on Gaussian data. At 35 clusters, the biggest cluster starts fragmenting into smaller parts, while before it was still connected to the second largest due to the single-link effect.
• Single-linkage on density-based clusters. 20 clusters extracted, most of which contain single elements, since linkage clustering does not have a notion of "noise".

### Centroid-based clustering

Main article: k-means clustering

In centroid-based clustering, clusters are represented by a central vector, which may not necessarily be a member of the data set. When the number of clusters is fixed to k, k-means clustering gives a formal definition as an optimization problem: find the $k$ cluster centers and assign the objects to the nearest cluster center, such that the squared distances from the cluster centers are minimized. The optimization problem itself is known to be NP-hard, and thus the common approach is to search only for approximate solutions. A particularly well known approximate method is Lloyd's algorithm,[4] often actually referred to as the "k-means algorithm". It does, however, only find a local optimum, and it is commonly run multiple times with different random initializations. Variations of k-means often include such optimizations as choosing the best of multiple runs, but also restricting the centroids to members of the data set (k-medoids), choosing medians (k-medians clustering), choosing the initial centers less randomly (k-means++) or allowing a fuzzy cluster assignment (fuzzy c-means). Most k-means-type algorithms require the number of clusters - $k$ - to be specified in advance, which is considered to be one of the biggest drawbacks of these algorithms.
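A minimal sketch of Lloyd's iteration as just described (pure NumPy; the data, $k$, and seeds are made up, and empty-cluster handling is omitted for brevity):

```python
import numpy as np

def lloyd(X, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and
    center recomputation until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.4, (30, 2)), rng.normal(4, 0.4, (30, 2))])
labels, centers = lloyd(X, k=2)
print(centers.round(2))   # close to (0, 0) and (4, 4)
```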
Furthermore, the algorithms prefer clusters of approximately similar size, as they will always assign an object to the nearest centroid. This often leads to incorrectly cut borders between clusters (which is not surprising, as the algorithm optimizes cluster centers, not cluster borders). K-means has a number of interesting theoretical properties. First, it partitions the data space into a structure known as a Voronoi diagram. Second, it is conceptually close to nearest-neighbor classification, and as such it is popular in machine learning. Third, it can be seen as a variation of model-based classification, and Lloyd's algorithm as a variation of the expectation-maximization algorithm for this model, discussed below.

• k-means clustering examples
• K-means separates data into Voronoi cells, which assumes equal-sized clusters (not adequate here)
• K-means cannot represent density-based clusters

### Distribution-based clustering

The clustering model most closely related to statistics is based on distribution models. Clusters can then easily be defined as objects belonging most likely to the same distribution. A nice property of this approach is that it closely resembles the way artificial data sets are generated: by sampling random objects from a distribution. While the theoretical foundation of these methods is excellent, they suffer from one key problem known as overfitting, unless constraints are put on the model complexity. A more complex model will usually be able to explain the data better, which makes choosing the appropriate model complexity inherently difficult.

The most prominent method is known as the expectation-maximization algorithm (or, for short, EM clustering). Here, the data set is usually modeled with a fixed (to avoid overfitting) number of Gaussian distributions that are initialized randomly and whose parameters are iteratively optimized to fit better to the data set. This will converge to a local optimum, so multiple runs may produce different results. In order to obtain a hard clustering, objects are often then assigned to the Gaussian distribution they most likely belong to; for soft clusterings this is not necessary.

Distribution-based clustering is a semantically strong method, as it not only provides clusters, but also produces complex models for the clusters that can also capture correlation and dependence of attributes. However, using these algorithms puts an extra burden on the user: to choose appropriate data models to optimize, and for many real data sets, there may be no mathematical model available that the algorithm is able to optimize (e.g. assuming Gaussian distributions is a rather strong assumption on the data).

• EM clustering examples
• On Gaussian-distributed data, EM works well, since it uses Gaussians for modelling clusters
• Density-based clusters cannot be modeled using Gaussian distributions

### Density-based clustering

In density-based clustering,[5] clusters are defined as areas of higher density than the remainder of the data set. Objects in the sparse areas - that are required to separate clusters - are usually considered to be noise and border points. The most popular[6] density-based clustering method is DBSCAN.[7] In contrast to many newer methods, it features a well-defined cluster model called "density-reachability". Similar to linkage-based clustering, it is based on connecting points within certain distance thresholds.
OPTICS[8] is a generalization of DBSCAN that removes the need to choose an appropriate value for the range parameter $\varepsilon$, and produces a hierarchical result related to that of linkage clustering. DeLi-Clu,[9] Density-Link-Clustering, combines ideas from single-linkage clustering and OPTICS, eliminating the $\varepsilon$ parameter entirely and offering performance improvements over OPTICS by using an R-tree index. The key drawback of DBSCAN and OPTICS is that they expect some kind of density drop to detect cluster borders. Moreover, they cannot detect intrinsic cluster structures, which are prevalent in the majority of real-life data. A variation of DBSCAN, EnDBSCAN,[10] efficiently detects such kinds of structures. On data sets with, for example, overlapping Gaussian distributions - a common use case in artificial data - the cluster borders produced by these algorithms will often look arbitrary, because the cluster density decreases continuously. On a data set consisting of mixtures of Gaussians, these algorithms are nearly always outperformed by methods such as EM clustering that are able to precisely model this kind of data.

• Density-based clustering examples
• Density-based clustering with DBSCAN.
• DBSCAN assumes clusters of similar density, and may have problems separating nearby clusters
• OPTICS is a DBSCAN variant that handles different densities much better

### Newer developments

In recent years, considerable effort has been put into improving the performance of existing algorithms.[11] Among them are CLARANS (Ng and Han, 1994)[12] and BIRCH (Zhang et al., 1996).[13] With the recent need to process larger and larger data sets (also known as big data), the willingness to trade semantic meaning of the generated clusters for performance has been increasing. This led to the development of pre-clustering methods such as canopy clustering, which can process huge data sets efficiently, but the resulting "clusters" are merely a rough pre-partitioning of the data set, to then be analyzed with existing slower methods such as k-means clustering. Various other approaches to clustering have been tried, such as seed-based clustering.[14]

For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces. This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering, which also looks for arbitrarily rotated ("correlated") subspace clusters that can be modeled by giving a correlation of their attributes.
Examples for such clustering algorithms are CLIQUE[15] and SUBCLU.[16] Ideas from density-based clustering methods (in particular the DBSCAN/OPTICS family of algorithms) have been adapted to subspace clustering (HiSC,[17] hierarchical subspace clustering, and DiSH[18]) and correlation clustering (HiCO,[19] hierarchical correlation clustering, 4C,[20] using "correlation connectivity", and ERiC,[21] exploring hierarchical density-based correlation clusters). Several different clustering systems based on mutual information have been proposed. One is Marina Meilă's variation of information metric;[22] another provides hierarchical clustering.[23] Using genetic algorithms, a wide range of different fit functions can be optimized, including mutual information.[24] Message passing algorithms, a recent development in computer science and statistical physics, have also led to the creation of new types of clustering algorithms.[25]

## Evaluation of clustering results

Evaluation of clustering results is sometimes referred to as cluster validation. There have been several suggestions for a measure of similarity between two clusterings. Such a measure can be used to compare how well different data clustering algorithms perform on a set of data. These measures are usually tied to the type of criterion being considered in assessing the quality of a clustering method.

### Internal evaluation

When a clustering result is evaluated based on the data that was clustered itself, this is called internal evaluation. These methods usually assign the best score to the algorithm that produces clusters with high similarity within a cluster and low similarity between clusters. One drawback of using internal criteria in cluster evaluation is that high scores on an internal measure do not necessarily result in effective information retrieval applications.[26] Additionally, this evaluation is biased towards algorithms that use the same cluster model. For example, k-means clustering naturally optimizes object distances, and a distance-based internal criterion will likely overrate the resulting clustering. Therefore, the internal evaluation measures are best suited to get some insight into situations where one algorithm performs better than another, but this does not imply that one algorithm produces more valid results than another.[1] Validity as measured by such an index depends on the claim that this kind of structure exists in the data set. An algorithm designed for some kind of model has no chance if the data set contains a radically different set of models, or if the evaluation measures a radically different criterion.[1] For example, k-means clustering can only find convex clusters, and many evaluation indexes assume convex clusters. On a data set with non-convex clusters, neither the use of k-means nor of an evaluation criterion that assumes convexity is sound.

The following methods can be used to assess the quality of clustering algorithms based on internal criteria:

• Linear Algebra Measure (Laura A. Mather, 2000, JASIST 51:602-613)

One of the most common models in information retrieval (IR), the vector space model, represents a document set as a term-document matrix where each row corresponds to a term and each column corresponds to a document. Because of the use of matrices in IR, it is possible to apply linear algebra to this IR model. This paper describes an application of linear algebra to text clustering, namely, a metric for measuring cluster quality.
The metric is based on the theory that cluster quality is proportional to the number of terms that are disjoint across the clusters. The metric compares the singular values of the term-document matrix to the singular values of the matrices for each of the clusters to determine the amount of overlap of the terms across clusters. Because the metric can be difficult to interpret, a standardization of the metric is defined, which specifies the number of standard deviations a clustering of a document set is from an average, random clustering of that document set. Empirical evidence shows that the standardized cluster metric correlates with clustered retrieval performance when comparing clustering algorithms or multiple parameters for the same clustering algorithm.

• Davies–Bouldin index

The Davies–Bouldin index can be calculated by the following formula:

$DB = \frac {1} {n} \sum_{i=1}^{n} \max_{j\neq i}\left(\frac{\sigma_i + \sigma_j} {d(c_i,c_j)}\right)$

where $n$ is the number of clusters, $c_x$ is the centroid of cluster $x$, $\sigma_x$ is the average distance of all elements in cluster $x$ to centroid $c_x$, and $d(c_i,c_j)$ is the distance between centroids $c_i$ and $c_j$. Since algorithms that produce clusters with low intra-cluster distances (high intra-cluster similarity) and high inter-cluster distances (low inter-cluster similarity) will have a low Davies–Bouldin index, the clustering algorithm that produces a collection of clusters with the smallest Davies–Bouldin index is considered the best algorithm based on this criterion.

• Dunn index (J. C. Dunn 1974)

The Dunn index aims to identify dense and well-separated clusters. It is defined as the ratio between the minimal inter-cluster distance and the maximal intra-cluster distance. For each cluster partition, the Dunn index can be calculated by the following formula:[27]

$D = \min_{1\leq i \leq n}\left\{\min_{1\leq j \leq n,i\neq j}\left\{\frac {d(i,j)}{\max_{1\leq k \leq n}{d^{'}(k)}}\right\}\right\}$

where $d(i,j)$ represents the distance between clusters $i$ and $j$, and $d^{'}(k)$ measures the intra-cluster distance of cluster $k$. The inter-cluster distance $d(i,j)$ between two clusters may be any of a number of distance measures, such as the distance between the centroids of the clusters. Similarly, the intra-cluster distance $d^{'}(k)$ may be measured in a variety of ways, such as the maximal distance between any pair of elements in cluster $k$. Since internal criteria seek clusters with high intra-cluster similarity and low inter-cluster similarity, algorithms that produce clusters with a high Dunn index are more desirable.
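Both internal indices reduce to a few lines of code. The sketch below is my own illustration in Python/numpy; the function names are made up, and the Dunn variant uses one particular choice of inter- and intra-cluster distance from the options listed above.

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index as defined above (lower is better).
    Assumes at least two clusters with distinct centroids."""
    ids = np.unique(labels)
    cents = np.array([X[labels == c].mean(axis=0) for c in ids])
    # sigma_i: average distance of cluster members to their centroid
    sig = np.array([np.linalg.norm(X[labels == c] - cents[i], axis=1).mean()
                    for i, c in enumerate(ids)])
    n = len(ids)
    return np.mean([max((sig[i] + sig[j]) / np.linalg.norm(cents[i] - cents[j])
                        for j in range(n) if j != i)
                    for i in range(n)])

def dunn(X, labels):
    """Dunn index (higher is better): minimal inter-cluster distance over
    maximal intra-cluster diameter, using centroid distances between
    clusters and pairwise diameters within clusters."""
    ids = np.unique(labels)
    groups = [X[labels == c] for c in ids]
    cents = [g.mean(axis=0) for g in groups]
    inter = min(np.linalg.norm(cents[i] - cents[j])
                for i in range(len(ids)) for j in range(i + 1, len(ids)))
    diam = max(np.linalg.norm(g[:, None, :] - g[None, :, :], axis=2).max()
               for g in groups)
    return inter / diam
```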
### External evaluation

In external evaluation, clustering results are evaluated based on data that was not used for clustering, such as known class labels and external benchmarks. Such benchmarks consist of a set of pre-classified items, and these sets are often created by humans (experts). Thus, the benchmark sets can be thought of as a gold standard for evaluation. These types of evaluation methods measure how close the clustering is to the predetermined benchmark classes. However, it has recently been discussed whether this is adequate for real data, or only for synthetic data sets with a factual ground truth, since classes can contain internal structure, the attributes present may not allow separation of clusters, or the classes may contain anomalies.[28] Additionally, from a knowledge discovery point of view, the reproduction of known knowledge may not necessarily be the intended result.[28]

Some of the measures of quality of a cluster algorithm using external criteria include:

• Rand measure (William M. Rand 1971)[29]

The Rand index computes how similar the clusters (returned by the clustering algorithm) are to the benchmark classifications. One can also view the Rand index as a measure of the percentage of correct decisions made by the algorithm. It can be computed using the following formula:

$RI = \frac {TP + TN} {TP + FP + FN + TN}$

where $TP$ is the number of true positives, $TN$ is the number of true negatives, $FP$ is the number of false positives, and $FN$ is the number of false negatives. One issue with the Rand index is that false positives and false negatives are equally weighted. This may be an undesirable characteristic for some clustering applications. The F-measure addresses this concern.

• F-measure

The F-measure can be used to balance the contribution of false negatives by weighting recall through a parameter $\beta \geq 0$. Let precision and recall be defined as follows:

$P = \frac {TP } {TP + FP }$

$R = \frac {TP } {TP + FN}$

where $P$ is the precision rate and $R$ is the recall rate. We can calculate the F-measure by using the following formula:[26]

$F_{\beta} = \frac {(\beta^2 + 1)\cdot P \cdot R } {\beta^2 \cdot P + R}$

Notice that when $\beta=0$, $F_{0}=P$. In other words, recall has no impact on the F-measure when $\beta=0$, and increasing $\beta$ allocates an increasing amount of weight to recall in the final F-measure.

• Pair-counting F-measure is the F-measure applied to the set of object pairs, where objects are paired with each other when they are part of the same cluster. This measure is able to compare clusterings with different numbers of clusters.

• Jaccard index

The Jaccard index is used to quantify the similarity between two data sets. The Jaccard index takes on a value between 0 and 1. An index of 1 means that the two data sets are identical, and an index of 0 indicates that the data sets have no common elements. The Jaccard index is defined by the following formula:

$J(A,B) = \frac {|A \cap B| } {|A \cup B|} = \frac{TP}{TP + FP + FN}$

This is simply the number of unique elements common to both sets divided by the total number of unique elements in both sets.

• Fowlkes–Mallows index (E. B. Fowlkes & C. L. Mallows 1983)[30]

The Fowlkes–Mallows index computes the similarity between the clusters returned by the clustering algorithm and the benchmark classifications. The higher the value of the Fowlkes–Mallows index, the more similar the clusters and the benchmark classifications are. It can be computed using the following formula:

$FM = \sqrt{ \frac {TP}{TP+FP} \cdot \frac{TP}{TP+FN} }$

where $TP$ is the number of true positives, $FP$ is the number of false positives, and $FN$ is the number of false negatives. The $FM$ index is the geometric mean of the precision and recall $P$ and $R$, while the F-measure is their harmonic mean.[31] Moreover, precision and recall are also known as Wallace's indices $B^I$ and $B^{II}$.[32] A confusion matrix can be used to quickly visualize the results of a classification (or clustering) algorithm; it shows how different a cluster is from the gold standard cluster.
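All of the pair-counting measures above derive from the same four counts (TP, FP, FN, TN) over object pairs. Here is a small sketch of mine with illustrative names; the quadratic pair loop is fine for modest data set sizes, and it assumes each denominator is nonzero.

```python
from itertools import combinations

def pair_counts(pred, truth):
    """Classify all object pairs: TP = same cluster and same class, etc."""
    TP = FP = FN = TN = 0
    for i, j in combinations(range(len(truth)), 2):
        same_cluster = pred[i] == pred[j]
        same_class = truth[i] == truth[j]
        if same_cluster and same_class:
            TP += 1
        elif same_cluster:
            FP += 1
        elif same_class:
            FN += 1
        else:
            TN += 1
    return TP, FP, FN, TN

def external_scores(pred, truth, beta=1.0):
    """Rand, F_beta, Jaccard and Fowlkes-Mallows from the four pair counts."""
    TP, FP, FN, TN = pair_counts(pred, truth)
    P, R = TP / (TP + FP), TP / (TP + FN)  # pair precision and recall
    return {
        "rand": (TP + TN) / (TP + FP + FN + TN),
        "f_beta": (beta ** 2 + 1) * P * R / (beta ** 2 * P + R),
        "jaccard": TP / (TP + FP + FN),
        "fowlkes_mallows": (P * R) ** 0.5,
    }
```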
• The Mutual Information is an information-theoretic measure of how much information is shared between a clustering and a ground-truth classification, and can detect a non-linear similarity between two clusterings. Adjusted mutual information is the corrected-for-chance variant, which has a reduced bias for varying cluster numbers.

## Applications

Biology, computational biology and bioinformatics:

• Plant and animal ecology: cluster analysis is used to describe and to make spatial and temporal comparisons of communities (assemblages) of organisms in heterogeneous environments; it is also used in plant systematics to generate artificial phylogenies or clusters of organisms (individuals) at the species, genus or higher level that share a number of attributes.
• Transcriptomics: clustering is used to build groups of genes with related expression patterns (also known as coexpressed genes). Often such groups contain functionally related proteins, such as enzymes for a specific pathway, or genes that are co-regulated. High-throughput experiments using expressed sequence tags (ESTs) or DNA microarrays can be a powerful tool for genome annotation, a general aspect of genomics.
• Sequence analysis: clustering is used to group homologous sequences into gene families. This is a very important concept in bioinformatics, and in evolutionary biology in general. See evolution by gene duplication.
• High-throughput genotyping platforms: clustering algorithms are used to automatically assign genotypes.
• Human genetic clustering: the similarity of genetic data is used in clustering to infer population structures.

Medicine:

• Medical imaging: on PET scans, cluster analysis can be used to differentiate between different types of tissue and blood in a three-dimensional image. In this application, actual position does not matter, but the voxel intensity is considered as a vector, with a dimension for each image that was taken over time. This technique allows, for example, accurate measurement of the rate at which a radioactive tracer is delivered to the area of interest, without a separate sampling of arterial blood, an intrusive technique that is most common today.
• IMRT segmentation: clustering can be used to divide a fluence map into distinct regions for conversion into deliverable fields in MLC-based radiation therapy.

Market research:

• Cluster analysis is widely used in market research when working with multivariate data from surveys and test panels. Market researchers use cluster analysis to partition the general population of consumers into market segments and to better understand the relationships between different groups of consumers/potential customers, and for use in market segmentation, product positioning, new product development and selecting test markets.
• Grouping of shopping items: clustering can be used to group all the shopping items available on the web into a set of unique products. For example, all the items on eBay can be grouped into unique products (eBay doesn't have the concept of a SKU).

World wide web:

• Social network analysis: in the study of social networks, clustering may be used to recognize communities within large groups of people.
• Search result grouping: in the process of intelligent grouping of files and websites, clustering may be used to create a more relevant set of search results compared to normal search engines like Google. There are currently a number of web-based clustering tools such as Clusty.
• Slippy map optimization: Flickr's map of photos and other map sites use clustering to reduce the number of markers on a map. This makes the map both faster to render and less visually cluttered.

Computer science:

• Software evolution: clustering is useful in software evolution as it helps to reduce legacy properties in code by reforming functionality that has become dispersed. It is a form of restructuring and hence a form of direct preventative maintenance.
• Image segmentation: clustering can be used to divide a digital image into distinct regions for border detection or object recognition.
• Evolutionary algorithms: clustering may be used to identify different niches within the population of an evolutionary algorithm so that reproductive opportunity can be distributed more evenly amongst the evolving species or subspecies.
• Recommender systems: recommender systems are designed to recommend new items based on a user's tastes. They sometimes use clustering algorithms to predict a user's preferences based on the preferences of other users in the user's cluster.
• Markov chain Monte Carlo methods: clustering is often utilized to locate and characterize extrema in the target distribution.

Social science:

• Crime analysis: cluster analysis can be used to identify areas where there are greater incidences of particular types of crime. By identifying these distinct areas or "hot spots" where a similar crime has happened over a period of time, it is possible to manage law enforcement resources more effectively.
• Educational data mining: cluster analysis is used, for example, to identify groups of schools or students with similar properties.
• Typologies: from poll data, projects such as those undertaken by the Pew Research Center use cluster analysis to discern typologies of opinions, habits, and demographics that may be useful in politics and marketing.

Others:

• Field robotics: clustering algorithms are used for robotic situational awareness to track objects and detect outliers in sensor data.[33]
• Mathematical chemistry: to find structural similarity; for example, 3000 chemical compounds were clustered in the space of 90 topological indices.[34]
• Climatology: to find weather regimes or preferred sea level pressure atmospheric patterns.[35]
• Petroleum geology: cluster analysis is used to reconstruct missing bottom hole core data or missing log curves in order to evaluate reservoir properties.
• Physical geography: the clustering of chemical properties in different sample locations.

## See also

### Related topics

Main article: Data mining

### Related methods

See also category: Data clustering algorithms

## References

1. Estivill-Castro, V. (2002). "Why so many clustering algorithms". ACM SIGKDD Explorations Newsletter 4: 65. doi:10.1145/568574.568575.
2. R. Sibson (1973). "SLINK: an optimally efficient algorithm for the single-link cluster method". The Computer Journal (British Computer Society) 16 (1): 30–34.
3. D. Defays (1977). "An efficient algorithm for a complete link method". The Computer Journal (British Computer Society) 20 (4): 364–366.
4. Lloyd, S. (1982). "Least squares quantization in PCM". IEEE Transactions on Information Theory 28 (2): 129–137. doi:10.1109/TIT.1982.1056489.
5. Hans-Peter Kriegel, Peer Kröger, Jörg Sander, Arthur Zimek (2011). "Density-based Clustering". WIREs Data Mining and Knowledge Discovery 1 (3): 231–240. doi:10.1002/widm.30.
6.
Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu (1996). "A density-based algorithm for discovering clusters in large spatial databases with noise". In Evangelos Simoudis, Jiawei Han, Usama M. Fayyad. Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96). AAAI Press. pp. 226–231. ISBN 1-57735-004-9. 7. Mihael Ankerst, Markus M. Breunig, Hans-Peter Kriegel, Jörg Sander (1999). "OPTICS: Ordering Points To Identify the Clustering Structure". ACM SIGMOD international conference on Management of data. ACM Press. pp. 49–60. 8. Achtert, E.; Böhm, C.; Kröger, P. (2006). "DeLi-Clu: Boosting Robustness, Completeness, Usability, and Efficiency of Hierarchical Clustering by a Closest Pair Ranking". LNCS: Advances in Knowledge Discovery and Data Mining. Lecture Notes in Computer Science 3918: 119–128. doi:10.1007/11731139_16. ISBN 978-3-540-33206-0. 9. S Roy, D K Bhattacharyya (2005). "An Approach to find Embedded Clusters Using Density Based Techniques". LNCS Vol.3816. Springer Verlag. pp. 523–535. 10. Z. Huang. "Extensions to the k-means algorithm for clustering large data sets with categorical values". Data Mining and Knowledge Discovery, 2:283–304, 1998. 11. R. Ng and J. Han. "Efficient and effective clustering method for spatial data mining". In: Proceedings of the 20th VLDB Conference, pages 144-155, Santiago, Chile, 1994. 12. Tian Zhang, Raghu Ramakrishnan, Miron Livny. "An Efficient Data Clustering Method for Very Large Databases." In: Proc. Int'l Conf. on Management of Data, ACM SIGMOD, pp. 103–114. 13. Can, F.; Ozkarahan, E. A. (1990). "Concepts and effectiveness of the cover-coefficient-based clustering methodology for text databases". ACM Transactions on Database Systems 15 (4): 483. doi:10.1145/99935.99938. 14. Agrawal, R.; Gehrke, J.; Gunopulos, D.; Raghavan, P. (2005). "Automatic Subspace Clustering of High Dimensional Data". Data Mining and Knowledge Discovery 11: 5. doi:10.1007/s10618-005-1396-1. 15. Karin Kailing, Hans-Peter Kriegel and Peer Kröger. Density-Connected Subspace Clustering for High-Dimensional Data. In: Proc. SIAM Int. Conf. on Data Mining (SDM'04), pp. 246-257, 2004. 16. Achtert, E.; Böhm, C.; Kriegel, H. P.; Kröger, P.; Müller-Gorman, I.; Zimek, A. (2006). "Finding Hierarchies of Subspace Clusters". LNCS: Knowledge Discovery in Databases: PKDD 2006. Lecture Notes in Computer Science 4213: 446–453. doi:10.1007/11871637_42. ISBN 978-3-540-45374-1. 17. Achtert, E.; Böhm, C.; Kriegel, H. P.; Kröger, P.; Müller-Gorman, I.; Zimek, A. (2007). "Detection and Visualization of Subspace Cluster Hierarchies". LNCS: Advances in Databases: Concepts, Systems and Applications. Lecture Notes in Computer Science 4443: 152–163. doi:10.1007/978-3-540-71703-4_15. ISBN 978-3-540-71702-7. 18. Achtert, E.; Böhm, C.; Kröger, P.; Zimek, A. (2006). "Mining Hierarchies of Correlation Clusters". Proc. 18th International Conference on Scientific and Statistical Database Management (SSDBM): 119–128. doi:10.1109/SSDBM.2006.35. ISBN 0-7695-2590-3. 19. Böhm, C.; Kailing, K.; Kröger, P.; Zimek, A. (2004). "Computing Clusters of Correlation Connected objects". Proceedings of the 2004 ACM SIGMOD international conference on Management of data - SIGMOD '04. p. 455. doi:10.1145/1007568.1007620. ISBN 1581138598. 20. Achtert, E.; Bohm, C.; Kriegel, H. P.; Kröger, P.; Zimek, A. (2007). "On Exploring Complex Relationships of Correlation Clusters". 19th International Conference on Scientific and Statistical Database Management (SSDBM 2007). p. 7. 
doi:10.1109/SSDBM.2007.21. ISBN 0-7695-2868-6. 21. Meilă, Marina (2003). "Comparing Clusterings by the Variation of Information". Learning Theory and Kernel Machines: 173–187. 22. Alexander Kraskov, Harald Stögbauer, Ralph G. Andrzejak, and Peter Grassberger, "Hierarchical Clustering Based on Mutual Information", (2003) 23. B.J. Frey and D. Dueck, "Clustering by Passing Messages Between Data Points", Science 315: 972–976, doi:10.1126/science.1136800  Papercore summary Frey2007 24. ^ a b Christopher D. Manning, Prabhakar Raghavan & Hinrich Schutze. Introduction to Information Retrieval. Cambridge University Press. ISBN 978-0-521-86571-5. 25. Dunn, J. (1974). "Well separated clusters and optimal fuzzy partitions". Journal of Cybernetics 4: 95–104. doi:10.1080/01969727408546059. 26. ^ a b Ines Färber, Stephan Günnemann, Hans-Peter Kriegel, Peer Kröger, Emmanuel Müller, Erich Schubert, Thomas Seidl, Arthur Zimek (2010). "On Using Class-Labels in Evaluation of Clusterings". In Xiaoli Z. Fern, Ian Davidson, Jennifer Dy. MultiClust: Discovering, Summarizing, and Using Multiple Clusterings. ACM SIGKDD. 27. W. M. Rand (1971). "Objective criteria for the evaluation of clustering methods". (American Statistical Association) 66 (336): 846–850. doi:10.2307/2284239. JSTOR 2284239. 28. E. B. Fowlkes & C. L. Mallows (1983), "A Method for Comparing Two Hierarchical Clusterings", Journal of the American Statistical Association 78, 553–569. 29. L. Hubert et P. Arabie. Comparing partitions. J. of Classification, 2(1), 1985. 30. D. L. Wallace. Comment. Journal of the American Statistical Association, 78 :569– 579, 1983. 31. Bewley A. et al. "Real-time volume estimation of a dragline payload". "IEEE International Conference on Robotics and Automation",2011: 1571-1576. 32. Basak S.C., Magnuson V.R., Niemi C.J., Regal R.R. "Determining Structural Similarity of Chemicals Using Graph Theoretic Indices". Discr. Appl. Math., 19, 1988: 17-44. 33. Huth R. et al. "Classifications of Atmospheric Circulation Patterns: Recent Advances and Applications". Ann. N.Y. Acad. Sci., 1146, 2008: 105-152
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 49, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.871344268321991, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/41291/why-i-think-tension-should-be-twice-the-force-in-a-tug-of-war
# Why I think tension should be twice the force in a tug of war

I'm going to provide my argument for why I think the tension in a rope should be twice the force exerted on either side of it.

First, let's consider a different example. Say, there is a person named `A` and a block in space. `A` pushes on the block with a force of 100 N. Then, the block will also push `A` with a force of 100 N by Newton's third law.

Now, consider the case where instead of the block, there is a person `B` who is also pushing on `A` with a force of 100 N while `A` is pushing on him. `A` will experience a force of 100 N because he pushed on `B`, AND another 100 N because he is being pushed by `B`. Hence he will experience a force of 200 N. Similarly, `B` also experiences 200 N of force.

Now, back to the original problem. There are two people `A` and `B` in space with a taut rope (no tension currently) in between them. If only `A` is pulling and `B` is not, then I agree that the tension is equal to the force `A` exerts. This situation (in my opinion) becomes analogous to the above if `B` is also pulling. So, say both of them pull from either side with a force of 100 N. Then the rope at the end of `B` will pull `B` with a force of 100 N (this pull is caused by `A`). By Newton's third law, the rope will experience a pull of 100 N. But `B` is also pulling his end of the rope with 100 N. Therefore, the tension should be 200 N. Similarly, the end of the rope at `A` must pull `A` with 100 N of force (because `B` is pulling from the other side) and hence experience a force of 100 N itself by Newton's third law, plus another 100 N because `A` is pulling on the rope.

Apparently, the answer is not this (according to my searching on the web). So, could anyone tell me why this reasoning is wrong? Thanks.

EDIT: So apparently people don't agree with my first example, let alone the second. This is to the downvoters and the upvoters of the highest-rated answer: You all agree that if only `A` pushes `B` with a force of 100 N, then `A` and `B` both will get pushed by a force of 100 N in opposite directions, right? Then, in the case where `B` is also pushing with a force of 100 N, it doesn't make sense that the answer would be exactly the same. It doesn't seem right that no matter what `B` does, `B` and `A` will always experience the same force as they would have if `B` hadn't applied any force.

EDIT 2: I'm going to provide here a link to a question that I posted: Two people pushing off each other. According to the answer and the comments there, the reason as to why my first example is incorrect is different from the one provided here. So maybe you should all read the answer and the comments provided by the person and reconsider what you think.

-

Imagine a block hanging from the ceiling by a rope. Each hook (the one in the ceiling and the one on the block) pulls with 100 N, yet you would not get the idea that the tension is 200 N. In the rope example the ground takes care of the other 100 N. – miceterminator Oct 20 '12 at 14:42

Yes, but that's because only one side is doing the pulling. The block is being pulled by gravity downwards and since it is not moving it must be the case that the rope is pulling upwards with a force equal to its weight, which implies by Newton's third law that the block is pulling downwards on the rope with the same force. Similarly, at the other end, the rope pulls the ceiling with a force equal to the weight of the block and hence again by the third law, the ceiling pulls the rope with the same force.
So I see no reason to say that the tension would be twice the weight. – Alraxite Oct 20 '12 at 14:58

Sorry, I didn't read through your whole text. "If only 'A' is pulling and 'B' is not, then I agree that the tension is equal to the force 'A' exerts." This does not work: if you pull a massless rope with 100 N there is no tension; it would just accelerate infinitely. – miceterminator Oct 20 '12 at 15:09

5 @Alraxite: I suggest you get two spring scales and a rope and perform this experiment yourself. Your example is wrong. The action/reaction pair of A and the block has nothing to do with what B is doing. You can make this really clear by drawing free body diagrams for the three objects. – Jerry Schirmer Oct 20 '12 at 18:27

4 By the way, downvoters are of course entitled to their opinion, but I do think this is a good question because it asks about a conceptual problem, and a somewhat subtle one at that. The fact that it's based on a misconception is not a problem IMO, and in fact such questions often turn out to prompt insightful answers. – David Zaslavsky♦ Oct 21 '12 at 10:01

## 2 Answers

Your first example is facetious. If each is providing 100 N then each is feeling 100 N, period. In order to feel 200 N, each would have to provide 200 N. This is what Newton's Laws of Motion are all about; one does not feel their own force, only external forces, or when their own force comes into contact with an external body.

-

1 I absolutely don't follow. Your wording is too vague. Look at it this way: You agree that by the third law, forces always come in action-reaction pairs, right? Now, consider the same situation where the two people are pushing on each other. Let's consider these two forces: the force exerted by 'A' on 'B' and the force exerted by 'B' on 'A'. Note here, that I'm talking about the deliberate forces that they exert on each other. These do not form the third law's action-reaction pair. It's entirely B's choice that he is pushing. `continued` – Alraxite Oct 20 '12 at 15:52

1 Since these two do not form an action-reaction pair of each other, there must be two other reaction forces for these two. So now I ask, what are they? Think about it and then reply. – Alraxite Oct 20 '12 at 15:52

1 And how do they form a pair? Didn't I mention that it's entirely B's choice that he is pushing? There are four forces in this situation: two of them come from the two people, exerted deliberately, and the other two are the reaction forces to each of them, as a consequence of the third law. And now you've just brought two redundant equations into this discussion. What are x and y exactly? – Alraxite Oct 20 '12 at 16:12

2 I keep trying to explain why, but you say "no, I think I'll ignore the math". You can't ignore the math. You can't trust intuition, or common sense, or the fuzzy-wuzzies when it comes to physics. You must listen to the math; it is the only thing that has proven itself correct time and time again. – Ignacio Vazquez-Abrams Oct 20 '12 at 19:05

2 -1: Alraxite makes reasonable arguments (he confused me), and the responses to his questions are flippant and wrong. If you have two astronauts carrying nearly infinitely heavy backpacks, with ropes, one attached from A to B's chest and the other from B to A's chest, you wouldn't get confused that the pulls of A and B add up to determine how fast they approach each other. So why is it confusing when they are both pulling on the same rope? To be honest, I agree that it is confusing, but I haven't sorted out why.
There is some failure of intuition here among the trained, not among the untrained. – Ron Maimon Oct 31 '12 at 4:18

-

No. Your mistake is that the block also pushes back on A, with the same 100 N that A is pushing with, by Newton's third law. Assuming each has a mass of 100 kg, both the block and the person will accelerate at 1 m/s$^2$ for the duration of the push. If there are two people pushing on each other, then clearly they will also accelerate at 1 m/s$^2$, so the forces must be the same. (If this isn't obvious, consider placing a very massive, very thin, very strong wall between them, which can't alter the physics. Then it's just two 100 kg people pushing off a stiff wall with 100 N.)

The difference between the two scenarios is that in the first one the reaction force is provided by the block's structure, while in the second one by a work-performing human. In the first one the block goes away, while in the second one A and B's palms stay stationary. Thus A is able to push for twice as long and therefore do ~twice the work, so they will - as intuition says - end up going faster.

This translates directly to the rope-pulling scenario, simply substituting tension for the compression force at the guys' palms.

-

I'm not sure about the mistake you're referring to here. I have mentioned in my question that the block pushes on A. Your answer then goes on to say that it is obvious in the case where B is a person who is also pushing that he will experience the same force as the block. You have given a reason with a wall which I don't quite understand. Apparently the rest of your answer is about the details of how they push each other and I guess it does get confusing if you think about them pushing directly at each other's hands. So let me alter the situation a little. `continued` – Alraxite Oct 20 '12 at 17:17

Firstly, to keep it simple, both of them are using only one hand. Secondly, each hand is resting on the other's chest before they push. So, now it should be clear that when A pushes on B's chest, his chest provides an equal and opposite force, during which B's hand, which is on A's chest, provides another force. Thus twice the force. – Alraxite Oct 20 '12 at 17:18

@Alraxite: Hmm, it seems you settled on the same example I gave. This is very true, and somewhat puzzling still, because when a man is pulling with 100 N on a wall, it doesn't matter if it's a wall or another person pulling with 100 N back. – Ron Maimon Oct 31 '12 at 4:20

The situation with pushing on each other's hands is different from the situation of pushing on each other's chests; it's serial vs. parallel springs. – Ron Maimon Oct 31 '12 at 5:47

Yes, if they push on each other's chests it's a different game (parallel instead of serial springs) and it cannot be translated to the rope example without introducing a second rope. – Emilio Pisanty Oct 31 '12 at 11:11
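To close the loop on the rope itself (this is my addition, not part of the original thread): model the rope as massless, cut it at any point, and let $T$ be the tension across the cut, with end forces $F_A = F_B = 100~\mathrm{N}$. Newton's second law on each end segment gives

$$F_A - T = m_{\text{segment}}\, a = 0 \quad\Rightarrow\quad T = F_A = 100~\mathrm{N}, \qquad T - F_B = 0 \quad\Rightarrow\quad T = F_B = 100~\mathrm{N}.$$

Each person's 100 N pull is balanced by the rope pulling back on them with 100 N; the single number $T = 100~\mathrm{N}$ is transmitted along the rope, and the two pulls do not add.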
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9695247411727905, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Scaled_inverse_chi-squared_distribution
# Scaled inverse chi-squared distribution

| Property | Expression |
| --- | --- |
| Parameters | $\nu > 0$ (degrees of freedom), $\tau^2 > 0$ (scale) |
| Support | $x \in (0, \infty)$ |
| PDF | $\frac{(\tau^2\nu/2)^{\nu/2}}{\Gamma(\nu/2)}~ \frac{\exp\left[ \frac{-\nu \tau^2}{2 x}\right]}{x^{1+\nu/2}}$ |
| CDF | $\Gamma\left(\frac{\nu}{2},\frac{\tau^2\nu}{2x}\right) \left/\Gamma\left(\frac{\nu}{2}\right)\right.$ |
| Mean | $\frac{\nu \tau^2}{\nu-2}$ for $\nu > 2$ |
| Mode | $\frac{\nu \tau^2}{\nu+2}$ |
| Variance | $\frac{2 \nu^2 \tau^4}{(\nu-2)^2 (\nu-4)}$ for $\nu > 4$ |
| Skewness | $\frac{4}{\nu-6}\sqrt{2(\nu-4)}$ for $\nu > 6$ |
| Excess kurtosis | $\frac{12(5\nu-22)}{(\nu-6)(\nu-8)}$ for $\nu > 8$ |
| Entropy | $\frac{\nu}{2} + \ln\left(\frac{\tau^2\nu}{2}\Gamma\left(\frac{\nu}{2}\right)\right) - \left(1+\frac{\nu}{2}\right)\psi\left(\frac{\nu}{2}\right)$ |
| MGF | $\frac{2}{\Gamma(\frac{\nu}{2})}\left(\frac{-\tau^2\nu t}{2}\right)^{\frac{\nu}{4}} K_{\frac{\nu}{2}}\left(\sqrt{-2\tau^2\nu t}\right)$ |
| Characteristic function | $\frac{2}{\Gamma(\frac{\nu}{2})}\left(\frac{-i\tau^2\nu t}{2}\right)^{\frac{\nu}{4}} K_{\frac{\nu}{2}}\left(\sqrt{-2i\tau^2\nu t}\right)$ |

The scaled inverse chi-squared distribution is the distribution for $x = 1/s^2$, where $s^2$ is a sample mean of the squares of $\nu$ independent normal random variables that have mean 0 and inverse variance $1/\sigma^2 = \tau^2$. The distribution is therefore parametrised by the two quantities $\nu$ and $\tau^2$, referred to as the number of chi-squared degrees of freedom and the scaling parameter, respectively.

This family of scaled inverse chi-squared distributions is closely related to two other distribution families, those of the inverse-chi-squared distribution and the inverse gamma distribution. Compared to the inverse-chi-squared distribution, the scaled distribution has an extra parameter $\tau^2$, which scales the distribution horizontally and vertically, representing the inverse-variance of the original underlying process. Also, the scaled inverse chi-squared distribution is presented as the distribution for the inverse of the mean of $\nu$ squared deviates, rather than the inverse of their sum. The two distributions thus have the relation that if

$X \sim \mbox{Scale-inv-}\chi^2(\nu, \tau^2)$   then   $\frac{X}{\tau^2 \nu} \sim \mbox{inv-}\chi^2(\nu)$

Compared to the inverse gamma distribution, the scaled inverse chi-squared distribution describes the same data distribution, but using a different parametrization, which may be more convenient in some circumstances. Specifically, if

$X \sim \mbox{Scale-inv-}\chi^2(\nu, \tau^2)$   then   $X \sim \textrm{Inv-Gamma}\left(\frac{\nu}{2}, \frac{\nu\tau^2}{2}\right)$

Either form may be used to represent the maximum entropy distribution for a fixed first inverse moment $E(1/X)$ and first logarithmic moment $E(\ln(X))$.

The scaled inverse chi-squared distribution also has a particular use in Bayesian statistics, somewhat unrelated to its use as a predictive distribution for $x = 1/s^2$. Specifically, the scaled inverse chi-squared distribution can be used as a conjugate prior for the variance parameter of a normal distribution. In this context the scaling parameter is denoted by $\sigma_0^2$ rather than by $\tau^2$, and has a different interpretation. The application has been more usually presented using the inverse gamma distribution formulation instead; however, some authors, following in particular Gelman et al. (1995/2004), argue that the inverse chi-squared parametrisation is more intuitive.
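A quick way to work with this distribution numerically is through the inverse-chi-squared relation just stated: if $C \sim \chi^2_\nu$ then $\nu\tau^2/C \sim \mbox{Scale-inv-}\chi^2(\nu, \tau^2)$. The sketch below is my own illustration using numpy (the function name is made up); it draws samples this way and checks them against the closed-form mean from the table.

```python
import numpy as np

def rscaled_inv_chi2(nu, tau2, size, rng=None):
    """Sample Scale-inv-chi^2(nu, tau2): if C ~ chi^2_nu, then nu*tau2/C
    has the scaled inverse chi-squared distribution."""
    rng = np.random.default_rng() if rng is None else rng
    return nu * tau2 / rng.chisquare(nu, size=size)

# Sanity check against the tabulated mean nu*tau2/(nu - 2), valid for nu > 2:
x = rscaled_inv_chi2(nu=10, tau2=2.0, size=200_000)
print(x.mean(), 10 * 2.0 / (10 - 2))  # both should be close to 2.5
```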
## Characterization

The probability density function of the scaled inverse chi-squared distribution extends over the domain $x>0$ and is

$f(x; \nu, \tau^2)= \frac{(\tau^2\nu/2)^{\nu/2}}{\Gamma(\nu/2)}~ \frac{\exp\left[ \frac{-\nu \tau^2}{2 x}\right]}{x^{1+\nu/2}}$

where $\nu$ is the degrees of freedom parameter and $\tau^2$ is the scale parameter. The cumulative distribution function is

$F(x; \nu, \tau^2)= \Gamma\left(\frac{\nu}{2},\frac{\tau^2\nu}{2x}\right) \left/\Gamma\left(\frac{\nu}{2}\right)\right. = Q\left(\frac{\nu}{2},\frac{\tau^2\nu}{2x}\right)$

where $\Gamma(a,x)$ is the upper incomplete gamma function, $\Gamma(x)$ is the gamma function and $Q(a,x)$ is the corresponding regularized gamma function. The characteristic function is

$\varphi(t;\nu,\tau^2)= \frac{2}{\Gamma(\frac{\nu}{2})}\left(\frac{-i\tau^2\nu t}{2}\right)^{\!\!\frac{\nu}{4}}\!\!K_{\frac{\nu}{2}}\left(\sqrt{-2i\tau^2\nu t}\right) ,$

where $K_{\frac{\nu}{2}}(z)$ is the modified Bessel function of the second kind.

## Parameter estimation

The maximum likelihood estimate of $\tau^2$ is the harmonic mean of the samples:

$\tau^2 = n\left/\sum_{i=1}^n \frac{1}{x_i}\right. .$

The maximum likelihood estimate of $\frac{\nu}{2}$ can be found by applying Newton's method to

$\ln\left(\frac{\nu}{2}\right) - \psi\left(\frac{\nu}{2}\right) = \frac{1}{n}\sum_{i=1}^n \ln(x_i) - \ln(\tau^2),$

where $\psi(x)$ is the digamma function. An initial estimate can be found by taking the formula for the mean and solving it for $\nu$. Let $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$ be the sample mean. Then an initial estimate for $\nu$ is given by

$\frac{\nu}{2} = \frac{\bar{x}}{\bar{x} - \tau^2}.$
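A sketch of this estimation procedure in Python, assuming scipy is available; `fit_scaled_inv_chi2` is my own name, not a standard library function. It uses the closed-form $\tau^2$, the mean-based initial guess, and Newton steps on $g(a) = \ln a - \psi(a)$, whose derivative is $1/a - \psi'(a)$.

```python
import numpy as np
from scipy.special import digamma, polygamma

def fit_scaled_inv_chi2(x, n_iter=50):
    """MLE for (nu, tau2). tau2 has the closed form above; nu/2 solves
    ln(a) - psi(a) = mean(ln x) - ln(tau2) via Newton's method.
    Assumes the sample mean exceeds the harmonic-mean estimate tau2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tau2 = n / np.sum(1.0 / x)                    # closed-form MLE of tau^2
    target = np.mean(np.log(x)) - np.log(tau2)    # right-hand side above
    a = np.mean(x) / (np.mean(x) - tau2)          # initial guess for nu/2
    for _ in range(n_iter):
        f = np.log(a) - digamma(a) - target
        fprime = 1.0 / a - polygamma(1, a)        # derivative of ln(a) - psi(a)
        a = max(a - f / fprime, 1e-6)             # guarded Newton step
    return 2.0 * a, tau2                          # (nu, tau2)
```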
## Bayesian estimation of the variance of a Normal distribution

The scaled inverse chi-squared distribution has a second important application, in the Bayesian estimation of the variance of a Normal distribution.

According to Bayes' theorem, the posterior probability distribution for quantities of interest is proportional to the product of a prior distribution for the quantities and a likelihood function:

$p(\sigma^2|D,I) \propto p(\sigma^2|I) \; p(D|\sigma^2)$

where $D$ represents the data and $I$ represents any initial information about $\sigma^2$ that we may already have.

The simplest scenario arises if the mean $\mu$ is already known; or, alternatively, if it is the conditional distribution of $\sigma^2$ that is sought, for a particular assumed value of $\mu$. Then the likelihood term $\mathcal{L}(\sigma^2|D) = p(D|\sigma^2)$ has the familiar form

$\mathcal{L}(\sigma^2|D,\mu) = \frac{1}{\left(\sqrt{2\pi}\sigma\right)^n} \; \exp \left[ -\frac{\sum_i^n(x_i-\mu)^2}{2\sigma^2} \right]$

Combining this with the rescaling-invariant prior $p(\sigma^2|I) = 1/\sigma^2$, which can be argued (e.g. following Jeffreys) to be the least informative possible prior for $\sigma^2$ in this problem, gives a combined posterior probability

$p(\sigma^2|D, I, \mu) \propto \frac{1}{\sigma^{n+2}} \; \exp \left[ -\frac{\sum_i^n(x_i-\mu)^2}{2\sigma^2} \right]$

This form can be recognised as that of a scaled inverse chi-squared distribution, with parameters $\nu = n$ and $\tau^2 = s^2 = \frac{1}{n} \sum_i^n (x_i-\mu)^2$.

Gelman et al. remark that the re-appearance of this distribution, previously seen in a sampling context, may seem remarkable; but given the choice of prior the "result is not surprising".[1] In particular, the choice of a rescaling-invariant prior for $\sigma^2$ has the result that the probability for the ratio $\sigma^2 / s^2$ has the same form (independent of the conditioning variable) when conditioned on $s^2$ as when conditioned on $\sigma^2$:

$p(\tfrac{\sigma^2}{s^2}|s^2) = p(\tfrac{\sigma^2}{s^2}|\sigma^2)$

In the sampling-theory case, conditioned on $\sigma^2$, the probability distribution for $1/s^2$ is a scaled inverse chi-squared distribution; and so the probability distribution for $\sigma^2$ conditioned on $s^2$, given a scale-agnostic prior, is also a scaled inverse chi-squared distribution.

### Use as an informative prior

If more is known about the possible values of $\sigma^2$, a distribution from the scaled inverse chi-squared family, such as Scale-inv-$\chi^2$($n_0$, $s_0^2$), can be a convenient form to represent a less uninformative prior for $\sigma^2$, as if from the result of $n_0$ previous observations (though $n_0$ need not necessarily be a whole number):

$p(\sigma^2|I^\prime, \mu) \propto \frac{1}{\sigma^{n_0+2}} \; \exp \left[ -\frac{n_0 s_0^2}{2\sigma^2} \right]$

Such a prior would lead to the posterior distribution

$p(\sigma^2|D, I^\prime, \mu) \propto \frac{1}{\sigma^{n+n_0+2}} \; \exp \left[ -\frac{n s^2 + n_0 s_0^2}{2\sigma^2} \right]$

which is itself a scaled inverse chi-squared distribution with $\nu = n + n_0$ and $\tau^2 = (n s^2 + n_0 s_0^2)/(n + n_0)$. The scaled inverse chi-squared distributions are thus a convenient conjugate prior family for $\sigma^2$ estimation.

### Estimation of variance when mean is unknown

If the mean is not known, the most uninformative prior that can be taken for it is arguably the translation-invariant prior $p(\mu|I) \propto \text{const.}$, which gives the following joint posterior distribution for $\mu$ and $\sigma^2$:

$\begin{align} p(\mu, \sigma^2|D, I) \; \propto \; & \frac{1}{\sigma^{n+2}} \; \exp \left[ -\frac{\sum_i^n(x_i-\mu)^2}{2\sigma^2} \right] \\ = \; & \frac{1}{\sigma^{n+2}} \; \exp \left[ -\frac{\sum_i^n(x_i-\bar{x})^2}{2\sigma^2} \right] \; \exp \left[ -\frac{\sum_i^n(\mu -\bar{x})^2}{2\sigma^2} \right] \end{align}$

The marginal posterior distribution for $\sigma^2$ is obtained from the joint posterior distribution by integrating out over $\mu$:

$\begin{align} p(\sigma^2|D, I) \; \propto \; & \frac{1}{\sigma^{n+2}} \; \exp \left[ -\frac{\sum_i^n(x_i-\bar{x})^2}{2\sigma^2} \right] \; \int_{-\infty}^{\infty} \exp \left[ -\frac{\sum_i^n(\mu -\bar{x})^2}{2\sigma^2} \right] d\mu\\ = \; & \frac{1}{\sigma^{n+2}} \; \exp \left[ -\frac{\sum_i^n(x_i-\bar{x})^2}{2\sigma^2} \right] \; \sqrt{2 \pi \sigma^2 / n} \\ \propto \; & (\sigma^2)^{-(n+1)/2} \; \exp \left[ -\frac{(n-1)s^2}{2\sigma^2} \right] \end{align}$

This is again a scaled inverse chi-squared distribution, with parameters $n-1$ and $s^2 = \sum (x_i - \bar{x})^2/(n-1)$.
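The conjugate update above reduces to two lines of arithmetic. Here is a sketch of mine (the names are made up) that returns the posterior parameters for a known mean $\mu$, with $n_0 = 0$ recovering the $1/\sigma^2$-prior result:

```python
import numpy as np

def posterior_scaled_inv_chi2(x, mu, n0=0.0, s0_sq=0.0):
    """Conjugate update for the normal variance with known mean mu:
    prior Scale-inv-chi^2(n0, s0_sq) -> posterior Scale-inv-chi^2(nu_n, s_n^2)
    with nu_n = n0 + n and s_n^2 = (n0*s0_sq + sum((x - mu)^2)) / (n0 + n).
    n0 = 0 recovers the 1/sigma^2 (Jeffreys-style) prior result above."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    nu_n = n0 + n
    s_n_sq = (n0 * s0_sq + np.sum((x - mu) ** 2)) / nu_n
    return nu_n, s_n_sq
```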
## Related distributions

• If $X \sim \mbox{Scale-inv-}\chi^2(\nu, \tau^2)$ then $k X \sim \mbox{Scale-inv-}\chi^2(\nu, k \tau^2)\,$
• If $X \sim \mbox{inv-}\chi^2(\nu) \,$ (inverse-chi-squared distribution) then $X \sim \mbox{Scale-inv-}\chi^2(\nu, 1/\nu) \,$
• If $X \sim \mbox{Scale-inv-}\chi^2(\nu, \tau^2)$ then $\frac{X}{\tau^2 \nu} \sim \mbox{inv-}\chi^2(\nu) \,$ (inverse-chi-squared distribution)
• If $X \sim \mbox{Scale-inv-}\chi^2(\nu, \tau^2)$ then $X \sim \textrm{Inv-Gamma}\left(\frac{\nu}{2}, \frac{\nu\tau^2}{2}\right)$ (inverse-gamma distribution)
• The scaled inverse chi-squared distribution is a special case of the type 5 Pearson distribution.

## References

• Gelman, A., et al. (1995), Bayesian Data Analysis, pp. 474–475; also pp. 47, 480.
1. Gelman et al. (1995), Bayesian Data Analysis (1st ed.), p. 68.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 63, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8555334806442261, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/33607/group-of-positive-rationals-under-multiplication-not-isomorphic-to-group-of-rati?answertab=active
# Group of positive rationals under multiplication not isomorphic to group of rationals

A question that may sound very trivial; apologies beforehand. I am wondering why $( \mathbb{Q}_{>0} , \times )$ is not isomorphic to $( \mathbb{Q} , + )$.

I can see that for the case of $( \mathbb{Q} , \times )$, not required to be positive, one can argue the group contains an element of order 2 (namely $-1$). In the case of the requirement for all rationals to be positive this argument does not fly. What trivial fact am I missing here?

-

## 1 Answer

The isomorphism would have to map some element of $(\mathbb{Q},+)$ to $2$. There is no element of $(\mathbb{Q}_{>0},\times)$ whose square is $2$, but whatever number is mapped to $2$ has a half in $(\mathbb{Q},+)$.

More generally speaking, you can divide by any natural number $n$ in $(\mathbb{Q},+)$, but you can't generally take $n$-th roots in $(\mathbb{Q}_{>0},\times)$.

More abstractly speaking, you can introduce an invertible multiplication operation on $(\mathbb{Q},+)$ to turn it into a field (in fact that in a sense is the point of the construction of $\mathbb{Q}$) but you can't define a corresponding exponentiation operation within $(\mathbb{Q}_{>0},\times)$.

The isomorphism that you expected to exist exists not between $(\mathbb{Q},+)$ and $(\mathbb{Q}_{>0},\times)$ but between $(\mathbb{Q},+)$ and $(b^\mathbb{Q},\times)$ for any $b\in\mathbb{R}_{>0} \setminus\{1\}$. Since $b^\mathbb{Q}$ always contains irrational elements, this is never a subgroup of $(\mathbb{Q}_{>0},\times)$.

-

More or less, joriki's answer can be rephrased like this: if we find a property which is preserved by isomorphisms and which is fulfilled for only one of the groups, then the groups are not isomorphic. In this case, the property of the group $(G,\circ)$ is $(\forall x\in G)(\exists y\in G)\; y\circ y=x$. – Martin Sleziak Apr 18 '11 at 9:11

Martin's comment referred to an earlier version of the answer that only had the first couple of sentences. – joriki Apr 18 '11 at 9:28

It might be added that the property mentioned here (being able to divide by any natural number $n$, which would then correspond to being able to take $n$-th roots in the multiplicative case) is called being divisible. – Tobias Kildetoft Apr 18 '11 at 9:45

To clarify that slightly (since "being able to divide" is a property of humans and computers, not of groups :-): A group is called divisible if for each element $x$ and each natural number $n$ there is an element $y$ such that $ny$, defined as the $n$-fold sum of $y$, is $x$. Then $(\mathbb{Q},+)$ is divisible and $(\mathbb{Q}_{>0},\times)$ isn't. – joriki Apr 18 '11 at 10:19

Thanks for all the help! – pberlijn Apr 18 '11 at 10:26
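Written out explicitly (my summary of the accepted argument, not part of the original thread): suppose $\varphi : (\mathbb{Q},+) \to (\mathbb{Q}_{>0},\times)$ were an isomorphism and let $q = \varphi^{-1}(2)$. Then

$$\varphi(q) = \varphi\!\left(\tfrac{q}{2}+\tfrac{q}{2}\right) = \varphi\!\left(\tfrac{q}{2}\right)^{2} = 2, \qquad \text{so} \qquad \varphi\!\left(\tfrac{q}{2}\right) = \sqrt{2} \notin \mathbb{Q}_{>0},$$

a contradiction, since $\varphi(q/2)$ must be a positive rational while $\sqrt{2}$ is irrational.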
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485241770744324, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/36362-proof-equation.html
# Thread: 1. ## proof for an equation I have this equation here: x^2 + y^2 + z^2 = xy + xz + yz I need to somehow prove that the equation is true only when all three variables are equal, i.e., x = y = z. I would appreciate any advice or suggestions on how to accomplish this. 2. Originally Posted by gfreeman75 I have this equation here: x^2 + y^2 + z^2 = xy + xz + yz I need to somehow prove that the equation is true only when all three variables are equal, i.e., x = y = z. I would appreciate any advice or suggestions on how to accomplish this. Well, I'll probably get jumped on here but here goes ....... The equation can be re-arranged in two ways: x(x - y) + y(y - z) + z(z - x) = 0 .... (1) x(x - z) + y(y - x) + z(z - y) = 0 ....(2) Compare and equate coefficients of x, y and z: x - y = x - z => y = z. y - z = y - x => x = z. Therefore x = y = z is the only solution. 3. Originally Posted by gfreeman75 I have this equation here: x^2 + y^2 + z^2 = xy + xz + yz I need to somehow prove that the equation is true only when all three variables are equal, i.e., x = y = z. I would appreciate any advice or suggestions on how to accomplish this. $x^2 + y^2 + z^2 = xy + xz + yz$ $\Rightarrow 2(x^2 + y^2 + z^2) = 2(xy + xz + yz)$ $\Rightarrow 2(x^2 + y^2 + z^2) - 2(xy + xz + yz) = 0$ $\Rightarrow (x^2 + y^2 - 2xy)+(x^2 + z^2 - 2xz) + (z^2 + y^2 - 2zy) = 0$ $\Rightarrow (x-y)^2+(x-z)^2 + (z-y)^2 = 0$ $\text{This forces }x=y=z.$ Originally Posted by Mr.F Well, I'll probably get jumped on here but here goes ....... 4. Originally Posted by Isomorphism $x^2 + y^2 + z^2 = xy + xz + yz$ $\Rightarrow 2(x^2 + y^2 + z^2) = 2(xy + xz + yz)$ $\Rightarrow 2(x^2 + y^2 + z^2) - 2(xy + xz + yz) = 0$ $\Rightarrow (x^2 + y^2 - 2xy)+(x^2 + z^2 - 2xz) + (z^2 + y^2 - 2zy) = 0$ $\Rightarrow (x-y)^2+(x-z)^2 + (z-y)^2 = 0$ $\text{This forces }x=y=z.$ Very succinct. It took me at least twice as many lines when I tried working it out. 5. That's a pretty elegant solution to my problem, Isomorphism. Thank you. 6. Originally Posted by Isomorphism $x^2 + y^2 + z^2 = xy + xz + yz$ $\Rightarrow 2(x^2 + y^2 + z^2) = 2(xy + xz + yz)$ $\Rightarrow 2(x^2 + y^2 + z^2) - 2(xy + xz + yz) = 0$ $\Rightarrow (x^2 + y^2 - 2xy)+(x^2 + z^2 - 2xz) + (z^2 + y^2 - 2zy) = 0$ $\Rightarrow (x-y)^2+(x-z)^2 + (z-y)^2 = 0$ $\text{This forces }x=y=z.$ Well, I guess I'm saying that if you've got ax + by + cz = 0 .... (1) cx + ay + bz = 0 ....(2) then a = b = c if x, y and z are not equal to zero. A bit lame, I guess.
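The key step in post 3 is the algebraic identity $x^2+y^2+z^2-(xy+xz+yz)=\tfrac{1}{2}\left[(x-y)^2+(x-z)^2+(z-y)^2\right]$, from which $x=y=z$ follows because a sum of real squares vanishes only when each square does. If you want a machine check of the identity, here is a short sympy sketch (my addition, assuming sympy is installed):

```python
from sympy import symbols, expand, Rational

x, y, z = symbols('x y z', real=True)
lhs = x**2 + y**2 + z**2 - (x*y + x*z + y*z)
rhs = Rational(1, 2) * ((x - y)**2 + (x - z)**2 + (z - y)**2)
print(expand(lhs - rhs))  # prints 0, confirming the identity behind post 3
```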
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419593214988708, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/76564?sort=newest
## 3D Venn diagrams

Are there higher-dimensional versions of the concept of rotationally symmetric Venn diagrams, with closed curves replaced by closed surfaces or higher manifolds?

-

## 2 Answers

The n-simplex as a subset of ${\mathbb R}^{n+2}$ can serve as a stand-in for a Venn diagram. Or, if you like, you can fatten the vertices until you have overlapping $n$-balls.

More precisely, consider the convex hull of $e_1, e_2, \ldots , e_{n+1}$. This is the set $\{ \vec{x} \in R^{n+2}: \sum_i x_i = 1, \ \ \& \ \ 0\le x_i \}$ where $e_i$ represents the $i$th standard unit vector $[0,\ldots, 0,1,0,\ldots, 0]$ that has a $1$ in the $i$th position. Each vertex represents one of your sets. Each edge represents the intersection between two sets. Each triangle represents the intersection among $3$ sets, and each $k$-simplex (which is determined by a subset of size $k+1$ chosen from $\{1,2, \ldots , n+2\}$) represents the intersection among $(k+1)$ sets.

To imagine a model, use a protractor and draw the complete graph on $(k+2)$ vertices that are represented by the roots of unity $e^{2\pi i j/(k+2)}$ (the $i$ here is different from the index above).

-

1 Specifically, you can choose a ball of radius $\sqrt{2}$ around each point in the hyperplane that goes through $e_1,e_2,...,e_{n+1}$ – Will Sawin Sep 27 2011 at 23:50

1 The trick is to do this with $2^{n+1}$ distinct regions (counting the exterior region) in a dimension (potentially much) lower than $n+2$. At least that would be the analogue of the well-known 2-dimensional problem that has been solved in the case of $n$ prime by Griggs-Killian-Savage. – Patricia Hersh Jul 3 at 1:25

-

"four intersecting spheres form the highest order Venn diagram that is completely symmetric and can be visually represented" http://en.wikipedia.org/wiki/Venn_diagram#Extensions_to_higher_numbers_of_sets

-

1 The same page shows a two-dimensional Venn diagram of five ellipses that is rotationally symmetric, so I think that sentence is referring to a stronger condition. – Jonah Ostroff Sep 27 2011 at 22:22

The two-dimensional diagram is only symmetric up to a reflection, no? – Will Sawin Sep 27 2011 at 23:48

@Will I think Jonah was talking about this one which has some rotational symmetry: upload.wikimedia.org/wikipedia/commons/1/10/… – psd Sep 28 2011 at 22:20
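As a rough sanity check of the fattened-vertex construction in the first answer (and Will Sawin's radius-$\sqrt{2}$ suggestion), one can sample points and count which of the $2^k$ membership patterns actually occur. This probe is my own addition, not from the thread; it only gives Monte Carlo evidence, and with too few samples it can undercount rare regions.

```python
import numpy as np

def venn_patterns(k, radius=2**0.5, samples=200_000, seed=0):
    """Count distinct membership patterns for balls of the given radius
    centered at the standard basis vectors e_1..e_k of R^k."""
    rng = np.random.default_rng(seed)
    centers = np.eye(k)
    pts = rng.uniform(-1.5, 2.5, size=(samples, k))
    inside = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2) <= radius
    return len({tuple(row) for row in inside})

print(venn_patterns(3), venn_patterns(4))  # a full Venn arrangement gives 8 and 16
```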
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9340587258338928, "perplexity_flag": "middle"}
http://www.physicsforums.com/showpost.php?p=1024810&postcount=8
Thread: A Factorization Algorithm

Quote by Playdo, 1st post: "Clearly for certain special numbers N we have a one to one map of factor pairs of N and factor pairs of n+1 which is much smaller."

Why is that? If N is of the form $1 + a + \cdots + a^n$, then how do we know all factor pairs of N contain one factor of the form $1 + a + \cdots + a^m$? In other words, why is it clear that if a number has a base-$a$ expansion 111....111, then any factor pair of that number has at least one of the factors also of the form 111....111 (but with fewer 1's normally, of course)?

Also, for your interest, $\beta$ is not a capital letter. Capital beta looks just like the capital roman B. When using LaTeX, if you write a Greek letter in lower case, like so \pi, then you'll get the lower case $\pi$, and if you write it like \Pi then you'll get $\Pi$. Also, when using LaTeX in the same line as your text, use "itex" tags instead of "tex". So you'd write [ itex ]\phi ^2 - \phi - 1 = 0[ /itex ] to have the markup right in line $\phi ^2 - \phi - 1 = 0$. To have it bigger and on a separate line: [ tex ]\phi ^2 - \phi - 1 = 0[ /tex ]

$$\phi ^2 - \phi - 1 = 0$$

Quote by Playdo, 3rd post: "For every composite natural number $N$ there must be a reducible polynomial $p(x)$ over the natural numbers and a natural number $a$ for which $N=p(a)$ and at least one factor pair $s$ and $t$ of $p$ evaluated at $a$ satisfy $N=p(a)=s(a)t(a)$."

The last part, that at least one factor pair s and t satisfies N = p(a) = s(a)t(a), seems redundant given that p is reducible and N is composite. I don't quite understand the point of your proof; the result seems trivial. Let N be composite, say N = nm for n, m > 1. If you define 0 to be a natural, let a be any natural less than or equal to min{n,m}. Otherwise, let a be any natural strictly less than min{n,m}. Let p(x) = (x + (n-a))(x + (m-a)).

I'm in a rush right now, but I'll look at posts 2, 4, and 5 later.
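The doubt in the first quote is well placed: a factor pair of a base-$a$ repunit need not contain another repunit. A quick numeric probe (my own sketch, not from the thread; `repunit` and `is_repunit` are made-up helper names):

```python
from sympy import factorint

def repunit(a, n):
    # 1 + a + a**2 + ... + a**n, i.e. n+1 ones in base a
    return sum(a**k for k in range(n + 1))

def is_repunit(m, a):
    # is m equal to 1 + a + ... + a**j for some j >= 0?
    s, j = 1, 0
    while s < m:
        j += 1
        s += a**j
    return s == m

N = repunit(10, 2)                             # 111 in base 10
print(N, factorint(N))                         # 111 {3: 1, 37: 1}
print(is_repunit(3, 10), is_repunit(37, 10))   # False False
```

So 111 = 3 · 37 is a factor pair of a repunit in which neither factor is a repunit, which suggests the one-to-one correspondence claimed in the quote cannot hold for all factor pairs.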
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9059712290763855, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/26525/coiling-rope-in-a-box/43340
## Coiling Rope in a Box

What is the longest rope length L of radius r that can fit into a box? The rope is a smooth curve with a tubular neighborhood of radius r, such that the rope does not self-penetrate. For an open curve, each endpoint is surrounded by a ball of radius r. For a box of dimensions $1{\times}1{\times}\frac{1}{2}$ and rope with $r=\frac{1}{4}$, perhaps $L=\frac{1}{2}+\frac{\pi}{4} \approx 1.3$, achieved by a 'U'.

I know packing circles in a square is a notoriously difficult problem, but perhaps it is easier to pack a rope in a cube, because the continuity of the curve constrains the options? (I struggle with this every fall, packing a gardening hose in a rectangular tub.) I am more interested in general strategies for how to best coil the rope, rather than specific values of L. It seems that if r is large w.r.t. the box dimensions (as in the above example), no "penny-packing" cross-sectional structure is possible, where one layer nestles in the crevices of the preceding layer. This is a natural question and surely has been explored, but I didn't find much.

Edit 1. It seems a curvature constraint is needed to retain naturalness: The curve should not turn so sharply that the disks of radius r orthogonal to the curve that determine the tubular neighborhood interpenetrate.

Edit 2 (26Jun10). See also the MO question concerning decidability.

Edit 3 (12Aug10). Here is an observation on the 2D version, where a $1 {\times} 1 {\times} 2r$ box may only accommodate one layer of rope. If $k=\frac{1}{2r}$ is an even integer, then I can see two natural strategies for coiling the rope within the box:

[Figure: the two coiling strategies. Red is the rope core curve; blue marks the rope boundary.]

Interestingly, if I have calculated correctly, the length of the red rope curve is identical for the two strategies: $$L = 2 (k-1)[r \pi/2] + 2(k-1)^2 r \;.$$ For $r=\frac{1}{16}$, $k=8$ as illustrated, $L=\frac{7\pi}{16} + \frac{49}{8} \approx 7.5$. (As a check, for $r=\frac{1}{4}$, $k=2$, and $L$ evaluates to $\frac{\pi}{4}+\frac{1}{2}$ as in the first example above.)

- 2 arxiv.org/abs/1005.4609 – Steve Huntsman May 31 2010 at 2:50
- 1 Your example looks suboptimal to me — you can make it a little longer by pushing the parts with bigger radius into their corners and making the radius of the curved parts in these corners match the other curved parts (opening up a gap where the rope does not touch itself along the red semicircle). – David Eppstein May 31 2010 at 3:52
- @David Eppstein: I believe the red segment indicates the center of the rope, rather than a place where a thinner rope touches itself. I was confused for a bit as well, since the "thin rope" interpretation was inconsistent with the length calculation. – S. Carnahan♦ May 31 2010 at 6:58
- Scott's interpretation of the figure is correct: the red is the centerline of the curve. The rope has diameter $\frac{1}{2}$; its outline is in blue. – Joseph O'Rourke May 31 2010 at 9:50
- @David Eppstein: I see your point. Why not approach a right-angled 'U' of length $L=1.5$? This suggests a curvature constraint is needed. Now added. – Joseph O'Rourke May 31 2010 at 10:50

## 1 Answer

Part of this question has an ad hoc nature that, in my opinion, weakens it as a math question. How much spaghetti can you fit into a Volkswagen Beetle? Remark: Slightly more than otherwise if you remember to open the glove compartment.
Okay, the question is not quite that bad, but it is clearly sensitive to boundary behavior. If you allow the box to be an arbitrary convex shape (say), then who knows if it can ever be completely solved. So let's look at the boundary-independent part of the question.

In 2 dimensions, if you have a convex box of any shape with a very large inradius, then it is not hard to show that the asymptotic density of the rope is 1. You can go back and forth across the box in a boustrophedonic pattern (like the diagram on the left).

In 3 dimensions, one can speculate that the asymptotic density of the rope is $\pi/\sqrt{12}$, the same as the circle packing density in the plane. There is a theorem of Andras Bezdek and my dad, Włodzimierz Kuperberg, that the maximum density space packing with congruent circular cylinders of infinite length is attained when the cylinders are parallel. Their theorem includes the special case of the rope question in which most of the rope is parallel to itself. (On the other hand, it is not quite obvious that the entire theorem is a special case. Given a pile of stiff, straight spaghetti, can you always efficiently connect the ends to make one long noodle?)

In any case, I asked my dad this asymptotic rope question, and in his opinion, it is an open problem. A rope seems much harder to control than straight cylinders. Also, as far as I know, the same questions concerning either straight, round cylinders or one long rope in higher dimensions are also open.

It's my philosophy that an argument that a problem is open is a valid MO answer to an MO question.

-

- Not that you need my +1, but I was immensely amused by this answer. So there you go. – Willie Wong Oct 23 2010 at 21:50
- I consider the opinion of the Kuperbergs to be definitive in this arena. Thanks for investigating! – Joseph O'Rourke Oct 23 2010 at 23:21
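As an arithmetic sanity check of the coiling-length formula quoted in the question, a short sketch (the function name is made up for illustration):

```python
from math import pi

def coil_length(r):
    # L = 2(k-1)(r*pi/2) + 2(k-1)^2 * r, with k = 1/(2r) an even integer
    k = 1 / (2 * r)
    assert k == int(k) and int(k) % 2 == 0, "k must be an even integer"
    k = int(k)
    return 2 * (k - 1) * (r * pi / 2) + 2 * (k - 1) ** 2 * r

print(coil_length(1 / 16))                 # 7.499... = 7*pi/16 + 49/8
print(coil_length(1 / 4), pi / 4 + 1 / 2)  # both 1.2853..., matching the 'U'
```

Both numerical checks stated in Edit 3 come out as claimed.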
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9347479343414307, "perplexity_flag": "middle"}
http://pediaview.com/openpedia/Characteristic_impedance
# Characteristic impedance

*Figure: a transmission line drawn as two black wires. At a distance x into the line, there is a current phasor I(x) traveling through each wire, and there is a voltage difference phasor V(x) between the wires (bottom voltage minus top voltage). If $Z_0$ is the characteristic impedance of the line, then $V(x) / I(x) = Z_0$ for a wave moving rightward, or $V(x)/I(x) = -Z_0$ for a wave moving leftward.*

*Figure: schematic representation of a circuit where a source is coupled to a load with a transmission line having characteristic impedance $Z_0$.*

The characteristic impedance or surge impedance of a uniform transmission line, usually written Z0, is the ratio of the amplitudes of voltage and current of a single wave propagating along the line; that is, a wave travelling in one direction in the absence of reflections in the other direction. Characteristic impedance is determined by the geometry and materials of the transmission line and, for a uniform line, is not dependent on its length. The SI unit of characteristic impedance is the ohm.

The characteristic impedance of a lossless transmission line is purely resistive, with no reactive component. Energy supplied by a source at one end of such a line is transmitted through the line without being dissipated in the line itself. A transmission line of finite length (lossless or lossy) that is terminated at one end with a resistor equal to the characteristic impedance appears to the source like an infinitely long transmission line.

## Transmission line model

*Figure: schematic representation of the elementary components of a transmission line.*

The characteristic impedance of a transmission line is the ratio of the voltage and current of a wave travelling along the line. When the wave reaches the end of the line, in general, there will be a reflected wave which travels back along the line in the opposite direction. When this wave reaches the source, it adds to the transmitted wave and the ratio of the voltage and current at the input to the line will no longer be the characteristic impedance. This new ratio is called the input impedance. The input impedance of an infinite line is equal to the characteristic impedance since the transmitted wave is never reflected back from the end.

It can be shown that an equivalent definition is: the characteristic impedance of a line is that impedance which, when terminating an arbitrary length of line at its output, will produce an input impedance equal to the characteristic impedance. This is so because there is no reflection on a line terminated in its own characteristic impedance.

Applying the transmission line model based on the telegrapher's equations, the general expression for the characteristic impedance of a transmission line is:

$Z_0=\sqrt{\frac{R+j\omega L}{G+j\omega C}}$

where $R$ is the resistance per unit length, considering the two conductors to be in series, $L$ is the inductance per unit length, $G$ is the conductance of the dielectric per unit length, $C$ is the capacitance per unit length, $j$ is the imaginary unit, and $\omega$ is the angular frequency. Although an infinite line is assumed, since all quantities are per unit length, the characteristic impedance is independent of the length of the transmission line.

The voltage and current phasors on the line are related by the characteristic impedance as:

$\frac{V^+}{I^+} = Z_0 = -\frac{V^-}{I^-}$

where the superscripts $+$ and $-$ represent forward- and backward-traveling waves, respectively.
A surge of energy on a finite transmission line will see an impedance of Z0 prior to any reflections arriving; hence surge impedance is an alternative name for characteristic impedance.

## Lossless line

For a lossless line, R and G are both zero, so the equation for characteristic impedance reduces to:

$Z_0 = \sqrt{\frac{L}{C}}$

The imaginary term j has also canceled out, making Z0 a real expression, and so purely resistive.

## Surge impedance loading

In electric power transmission, the characteristic impedance of a transmission line is expressed in terms of the surge impedance loading (SIL), or natural loading, being the power loading at which reactive power is neither produced nor absorbed:

$\mathit{SIL}=\frac{{V_\mathrm{LL}}^2}{Z_0}$

in which $V_\mathrm{LL}$ is the line-to-line voltage in volts.

Loaded below its SIL, a line supplies reactive power to the system, tending to raise system voltages. Above it, the line absorbs reactive power, tending to depress the voltage. The Ferranti effect describes the voltage gain towards the remote end of a very lightly loaded (or open-ended) transmission line. Underground cables normally have a very low characteristic impedance, resulting in an SIL that is typically in excess of the thermal limit of the cable. Hence a cable is almost always a source of reactive power.

## Source

Based on the Wikipedia article "Characteristic impedance": http://en.wikipedia.org/w/index.php?title=Characteristic_impedance
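To make the formulas concrete, here is a small numeric sketch (the per-metre line constants and the 400 kV SIL figures below are illustrative assumptions, not data from the article):

```python
import numpy as np

def z0(R, L, G, C, f):
    """Characteristic impedance Z0 = sqrt((R + j*w*L) / (G + j*w*C))."""
    w = 2 * np.pi * f
    return np.sqrt((R + 1j * w * L) / (G + 1j * w * C))

# assumed per-metre constants: ohm/m, H/m, S/m, F/m
R, L, G, C = 0.1, 250e-9, 1e-6, 100e-12

print(z0(R, L, G, C, 1e6))   # lossy Z0 at 1 MHz (complex value)
print(np.sqrt(L / C))        # lossless limit sqrt(L/C) = 50.0 ohm

# surge impedance loading: SIL = V_LL^2 / Z0
V_LL, Z0_line = 400e3, 250.0          # assumed 400 kV line with Z0 ~ 250 ohm
print(V_LL**2 / Z0_line / 1e6, "MW")  # 640 MW
```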
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.888034462928772, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/108376/automorphisms-of-vertically-simple-complex-multisets
## Automorphisms of vertically simple complex multisets

Hello,

I defined automorphisms of the Selberg class in http://mathoverflow.net/questions/50581/automorphisms-of-the-selberg-class

Let's call "structural group of the extension $L$ of $K$", denoted by $Str(L/K)$, the group of automorphisms of $L$ which coincide with the identity on $K$, whenever $K$ is a substructure of the structure $L$: for example $K$ may be a subring of a ring $L$, or a submonoid, or whatever you want. When $K$ is a subfield of a field $L$, one gets $Str(L/K)=Gal(L/K)$.

Let $S$ denote the Selberg class, and $S'$ the set of self-dual functions in $S$. Following Andrew D. Droll's recent thesis (http://qspace.library.queensu.ca/jspui/bitstream/1974/7352/1/Droll_Andrew_D_201207_PhD.pdf), for all $F\in S$, let's denote by $Z(F)$ the "vertically simple complex multiset" of non-trivial zeroes of $F$. Moreover, let $Z_S$ denote $\bigcup_{F\in S}\{ Z(F)\}$ and $Z_{S'}=\bigcup_{F\in S'}\{ Z(F)\}$.

My question is: is there a natural way to define automorphisms of vertically simple complex multisets such that $Str(S/S')$ and $Str(Z_S/Z_{S'})$ are canonically isomorphic? If so, are such automorphisms of vertically simple complex multisets necessarily isometries?

Thanks in advance and happy birthday to this website.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8166866898536682, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/111865/isnt-the-intermediate-value-theorem-self-evident-for-continuous-functions?answertab=active
# Isn't the intermediate value theorem self-evident for continuous functions?

The way I understand the intermediate value theorem is this: if you have a function f that is continuous over a domain $[a,b]$ then there is a value $f(c)$, where $f(a)≤f(c)≤f(b)$, such that $a≤c≤b$.

This seems self-evident. If $f$ is continuous, then there exists an $f(c)$ such that $a≤c≤b$. But isn't this just a restatement of the fact that $f$ is continuous? Isn't the intermediate value theorem self-evident for continuous functions?

-

- 5 No, you need the fact that the real line as a topological space is connected (plus the fact that the image of a connected set under a continuous function is connected). – BenjaLim Feb 22 '12 at 0:53
- 10 "$f$ is continuous on $[a,b]$" means "for every $x$ in $[a,b]$, for every $\epsilon >0$, there exists $\delta >0$ such that, for every $y$ in $[a,b]$, if $|x-y|<\delta$, then $|f(x)-f(y)|<\epsilon$". Can you please explain how the intermediate value theorem is an obvious restatement of this condition? – Chris Eagle Feb 22 '12 at 0:54
- 6 The intermediate value theorem is intimately connected with the fact that the reals are complete (every nonempty set bounded above has a supremum; equivalently, every Cauchy sequence converges). In fact, it is equivalent to the supremum property / every Cauchy sequence converges, so it is not entirely "obvious" (in that the proof relies on these rather deep properties of the real numbers that are not easy to prove from first principles). – Arturo Magidin Feb 22 '12 at 0:55
- 18 You have a misunderstanding of the intermediate value theorem. The intermediate value theorem says that $f(x)$ must take every possible value between $f(a)$ and $f(b)$. It might "seem" self-evident, but it isn't true, for example, for continuous functions on the rational numbers. – Thomas Andrews Feb 22 '12 at 0:55
- 8 My favorite example of a continuous function from rationals to rationals that fails the intermediate value test is simply $f(x)=1/(2-x^2)$: rational function, gotta be continuous, but it blows up around the “missing” point $\sqrt{2}$. – Lubin Feb 22 '12 at 3:36

## 7 Answers

It's not self-evident; rather, it's a justification for the epsilon-delta definition of "continuous"! It's the bridge between the epsilon-delta view of the world that we learn in university, and the "point and line" view of the world which we know from high school. This is the content of two 1-hour lectures, but let me try to summarize it in a nutshell:

The fundamental problem in calculus can be thought of as: Estimate $f(a)$ using information about $f(b)$ where $b$ is some number near $a$. (You can directly measure only $f(b)$ but not $f(a)$.) So at the most primitive level, before we discuss derivatives or anything, we simply want to characterize functions for which $f(b)$ gives some estimate of $f(a)$ for $b$ close enough to $a$. These are the functions for which it makes some kind of sense to even try to make such an estimate. So what are these functions?

1. Function $f$ doesn't have a jump or snap at $a$. For example, if $f$ is "tension in a string", and the string breaks at $a$, measuring tension just before $a$ won't tell you the tension at $a$.
2. There isn't some kind of crazy $f(x)=\sin\left(\frac{1}{x}\right)$ for $x\neq 0$ and $f(0)=0$ "resonance" "shaking itself to pieces" phenomenon at $a$.
In other words, $f$ is continuous at $a$ ($f(a)$ can be estimated by looking at values of $f$ for inputs close enough to $a$) if no terrible crisis occurs for function $f$ at $a$. Good. How do you formalize this?

Along comes Arbogast at the end of the 18th century, and suggests that the property you need to demand is the intermediate value property. This is a good first approximation to a formalization of "continuity", because it takes care of "breakage" ("issue 1"). But it does not take care of "shaking" ("issue 2") - the topologist's sine curve satisfies the intermediate value property at $0$, but isn't "continuous". Even worse, it's satisfied by crazy functions like the Conway base 13 function, which shake so badly that it makes no sense to try to estimate them anywhere.

Then along comes Bernard Bolzano, works out epsilon-delta, and gives the modern definition of continuity. And, because he's a Catholic priest and not a great and famous mathematician like Arbogast, he has to prove that his definition implies the intermediate value property in order for anyone to take him seriously.

But, personal reputation of Bolzano aside, why in fact is this property so central? The epsilon-delta view of the world, put forward by Bolzano, posits that a line is more than just an infinite collection of points - rather, it is made up of interlocking epsilon-delta fuzz (in more technical terms, you're equipping the real line with the metric topology). One way of thinking about this is that, in reality, you never know exactly what a real number is - you can write down the first zillion digits of pi, but never all of it. So a real number is an inherently fuzzy concept, and the real line (as a topological space) is made up of interlocking fuzz rather than being made up of points.

Writing out the epsilon-delta construction of a continuous function (or any epsilon-delta construction) is painful, because you have all of these nested quantifiers to deal with fuzziness (for any epsilon there exists delta such that for any x between bla bla bla). It's not human language - it looks more like some kind of awful computer code. On the other hand, the intermediate value property is talking about points. It tells you that there exists a point c between $a$ and $b$ such that bla bla bla.

Punchline: The intermediate value theorem is the bridge between fuzz world and point world. Every other bridge between fuzz-world and point-world goes through it (all the ones you learn in undergrad calculus do, anyway). So it's not at all obvious... it's the statement that the horribly convoluted non-human-language epsilon-delta formalism captures all the properties of a continuous function that you want, and then some. It's the statement that the convoluted, non-intuitive epsilon-delta formalism happens to be the one which captures your geometric intuition. How surprising is that!

Formally, the intermediate value theorem is a consequence of completeness of the reals, which is roughly the statement that the real line doesn't have microscopic holes in it like the line of rational numbers has. So again - the passage from fuzz to points and back again has to make use of completeness of the real numbers, and the intermediate value theorem is where that happens.

The intermediate value theorem isn't important because it's surprising. It's important because it gives you the bridge which you need in order to cross from epsilon-delta world to point-world and back. Anyway, I've written far too much by now... Sorry for this tl;dr answer!
-

You can define continuity on the rational numbers, $\mathbb Q$, using the same definition that you use for real functions. But if you define $f(x)=0$ if $x^2<2$ and $f(x)=1$ if $x^2>2$, then you can show that this function is continuous on all the rationals. This function is continuous on the rational numbers precisely because $\sqrt 2$ is not a rational number. The "intermediate value theorem," then, is not true for the rationals.

Basically, there is a property, completeness, of the real numbers which makes this sort of thing impossible for continuous functions defined on them. There are essentially no "gaps" in the real numbers. "Obvious?" I don't think so.

One way of stating this property is that any non-empty subset of the real numbers which is bounded above has a "least upper bound," also known as a supremum. To prove the intermediate value theorem, let $a<b$ and $f$ a continuous function with $f(a)<f(b)$. Let $C\in(f(a),f(b))$. Let $U=\{x\in[a,b]:f(x)<C\}$. Then $U$ is a non-empty subset of the real numbers and all the elements of $U$ are less than $b$, so $U$ is bounded above. It turns out you can show the supremum of $U$ is a value such that $f(x)=C$.

[There is a sense in which the irrational reals can be seen as the set of non-decreasing onto functions from $\mathbb Q \rightarrow \{0,1\}$ that are continuous as functions on the rationals.]

-

To adopt a view slightly different to that of the other answerers: In terms of an intuitive picture of the real numbers, as a continuum, and an intuitive picture of a continuous function, as one whose graph is drawn without having to lift the pencil from the paper, the intermediate value theorem is intuitively clear.

However, if one wants to reason carefully about the notion of real number, or function, and to prove statements and analyze phenomena that go beyond the intermediate value theorem, then it becomes important to have precise technical definitions of all the concepts involved: namely of real numbers, and of a continuous function. The key technical point in the characterization of real numbers is their completeness (as many people here have mentioned), while the technical definition of continuity is somewhat involved.

In my view, the point of the rigorous proof of the intermediate value theorem is not primarily to confirm an intuitively obvious statement (although there is a certain pleasure in doing this), but rather to (a) confirm that the technical formalizations of real numbers and continuous functions capture the intuition they were intended to; and (b) to provide a model of how these formalizations can be harmonized with our intuitions.

Point (b) is very important, since formalizations are only as powerful as our ability to wield them, and intuition (for most of us) provides the basic guide as to how to construct our mathematical arguments; thus it is important to learn how to meld intuition with formal arguments, and constructing formal proofs of intuitively clear statements is one of the best ways to do this.

Note that the formal proof also suggests the development of concepts (connectedness of topological spaces, and its preservation under formation of continuous images) which go well beyond the intermediate value theorem, and extend to contexts in which there was no a priori intuition at all. (This is one of the fantastic features of formal mathematics: the formalism itself, when applied in a new context, can provide intuition!
In other words, we can draw analogies between what are apparently very disparate situations by virtue of certain structural similarities that they share in common.)

-

First of all, the statement you've given is not the Intermediate Value Theorem. In fact, what you've written is true for any function: one can take $c = a$ or $c = b$. So as a first step in understanding these "big theorems" in calculus, you need to be excruciatingly careful to make sure you get the statements correct.

(Often when I teach freshman calculus I ask for statements of theorems like IVT on midterm exams. It never fails to surprise me how often students get these questions wrong, even though I've long since come around to telling them which three or four theorems they should have memorized for any given exam. I think part of the problem is that -- given that they don't understand the statement of the theorem, at least not directly in terms of parsing the sentence; rather they may have a mental picture that they think the sentence corresponds to -- it is very hard for them to know whether they are reproducing the statement correctly or not. I still find it surprising that their rote memorization skills are not stronger, given that these skills are being tested at a fairly high level in certain other university classes. It's all a bit mysterious...)

But, no, the correct statement is not close at all to the definition of continuity. It is however quite close to the rough intuitive idea that you may have of continuity... but that doesn't make IVT obvious! Rather, it makes it a very important theorem, one which ensures that a formal definition captures certain elements of the intuition that led to it.

Anyway, in September 2011 a short article by S. Walk was published on exactly this: the title is "The Intermediate Value Theorem is Not Obvious -- and I Am Going to Prove It to You". I recommend it.

-

- Dear Pete, I just finished writing an answer I began an hour or so ago, and it makes a similar point to your third paragraph, which I think is a (maybe the) key point. Best wishes, – Matt E Feb 22 '12 at 3:17
- +1. Leave it to Pete to make the important point everyone except Brian made, and to make it more forcefully than any of us. We don't make hay of this point because most of us think it goes without saying. It DOESN'T, and it's important to point this out to a beginner. The important point isn't that there's an arbitrary point c on the compact interval where the function is defined between the endpoints, but rather that there necessarily is a point DIFFERENT FROM AND BETWEEN THE ENDPOINTS where the value of the function exists and is located between the values of the endpoints in the range. – Mathemagician1234 Feb 22 '12 at 5:56
- 1 @Mathemagician1234: No, it's not that there is such a point, but that the function takes every value between the values at the endpoints. – joriki Feb 22 '12 at 15:15
- @joriki: Right! There are two mistakes in the OP's statement of IVT. The quantification error is certainly the more serious: I just wanted to point out that, in conjunction with the non-restriction to interior points, it gives a statement which holds trivially for all functions. – Pete L. Clark Feb 22 '12 at 17:27
- @joriki, Pete: See, I made a different mistake when stating what it meant. That's what I intended, since the point between the endpoints was arbitrary, but it wasn't clear in how I stated it. In other words, the image of whatever curve has endpoints a and b is a subset of the range of f if the conditions of the IVT are met.
THAT'S what I should have said. – Mathemagician1234 Feb 22 '12 at 22:39

-

The construction of the real numbers as a geometric line is a deep and significant result. This forms the headwaters from which the important theorems of analysis on the line flow: completeness of the real numbers, the Heine-Borel Theorem and the Weierstrass Intermediate Value Theorem. The elegance of the Dedekind cut construction underlies this all. It is hardly self-evident or trivial.

-

No, the intermediate value theorem is not a restatement of continuity, especially when it's stated correctly: if $f$ is continuous on $[a,b]$, and $c$ is any real number between $f(a)$ and $f(b)$, then there is some $x_0$ between $a$ and $b$ such that $c=f(x_0)$. Try to see how this differs from what you wrote.

To see why this isn't just a restatement of continuity, consider the function $$f:\mathbb{Q}\to\mathbb{R}:x\mapsto x\;,$$ where $\mathbb{Q}$ is the set of rational numbers. This function, considered only as a function on the rationals, is continuous, $f(0)=0$, and $f(2)=2$, but there is no number $p$ anywhere in the domain of $f$, let alone between $0$ and $2$, such that $f(p)=\sqrt2$.

For another example, consider the function $$g:\mathbb{R}\setminus\{0\}\to\mathbb{R}\setminus\{0\}:x\mapsto \frac1x\;;$$ here $\mathbb{R}\setminus\{0\}$ means all of the real numbers except $0$. This function $g$ is continuous on its domain $\mathbb{R}\setminus\{0\}$, $g(-1)=-1$, $g(1)=1$, and $-1<\frac12<1$, but there is no $x\in\mathbb{R}\setminus\{0\}$ such that $-1\le x\le 1$ and $g(x)=\frac12$.

The problem with $f$ and $g$ is that their domains are not connected: intuitively speaking, they have ‘holes’ in them. $\mathbb{Q}$ is full of holes, with one at every irrational number; $\mathbb{R}\setminus\{0\}$ has only one hole, at $0$.

The intermediate value theorem is actually a special case of a more general result, namely, that if $f$ is a continuous function on a connected set $A$ (what connected means in general isn’t really important right now), then $f[A]$ is also a connected set. The connected sets in the real line are precisely the intervals, open, closed, or half-open, and bounded or unbounded. Thus, if $f$ is a continuous function on an interval $[a,b]$, the image of that interval must also be an interval of some kind; call it $J$. Since $f(a)$ and $f(b)$ are in $J$, and $J$ is an interval, everything between $f(a)$ and $f(b)$ must also be in $J$. Thus, if I pick any $c$ between $f(a)$ and $f(b)$, there must be some $x_0$ between $a$ and $b$ such that $c=f(x_0)$ $-$ which is exactly what the intermediate value theorem says.

-

- +1 for the wonderful explanation, would +2 if I could. – Alex Becker Feb 22 '12 at 1:10
- 3 Your first $f$ can't possibly take the value $\sqrt2$, because it is a function from $\mathbb Q$ to $\mathbb Q$. Perhaps a better example would be that $f : \mathbb Q \to \mathbb Q : x \mapsto x^2$ has no $p$ between $1$ and $2$ such that $f(p) = 2$. – Rahul Narain Feb 22 '12 at 1:23
- @Rahul: That was actually a typo: I wanted the codomain to be $\mathbb{R}$. Thanks for catching it. Your example (and Thomas Andrews’) is also nice. – Brian M. Scott Feb 22 '12 at 1:33
- +1 for the terrific explanation, and I doubt any of us will be able to give a better one. It's interesting, since I was taught the intermediate value theorem is really a trivial result from the standpoint of topology: the really deep and difficult result the IVT relies on is the fact that the real line with the usual topology is a connected space.
The best discussion and proof of this fact I've ever seen is in a most unlikely source: Elliott Mendelson's THE NUMBER SYSTEMS AND THE FOUNDATIONS OF ANALYSIS, now in a nice cheap Dover paperback. Elliott once told me he was rather proud of it. – Mathemagician1234 Feb 22 '12 at 5:45

-

Absolutely not! The definition of continuous function is quite technical (for every $\epsilon$ there exists a $\delta$, I forget the rest). There is no strong reason to believe that the graphs of continuous functions, as continuous function is ordinarily defined, will behave like the smooth curves of our imagination. For instance, there is a continuous nowhere differentiable function; indeed, most continuous functions are nowhere differentiable.

The Intermediate Value Theorem for $f$ is not a restatement of the fact that $f$ is continuous. For example, derivatives satisfy the IVT, and are not necessarily continuous.

-

- 2 I know that nowhere-differentiable functions are dense in $C(\mathbb R,\mathbb R)$ (by the Baire category theorem), but in what sense are most continuous functions nowhere differentiable? Cardinality? Measure? If so, under what measure? – Alex Becker Feb 22 '12 at 1:07
- Perhaps one can look at homepages.math.uic.edu/~marker/math414/fs.pdf. There are many other sources. And it is in the sense of category. – André Nicolas Feb 22 '12 at 1:32
- 1 @Alex: If I am remembering correctly, then indeed the set of differentiable functions is meager in $C(\mathbb{R},\mathbb{R})$ in an appropriate topology, so the "most" is in the sense of Baire category. (Certainly the cardinalities of the two sets are the same.) – Pete L. Clark Feb 22 '12 at 1:51
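One way to see the theorem in action rather than as a formality: the bisection method for root-finding is exactly the IVT made algorithmic, and it silently uses completeness (run on the rationals alone, the shrinking intervals around $\sqrt2$ would close in on a hole). A minimal sketch, not from the thread:

```python
def bisect(f, a, b, tol=1e-12):
    """Find x in [a, b] with f(x) = 0, assuming f is continuous and
    f(a), f(b) have opposite signs.  The IVT guarantees a root exists;
    completeness of the reals guarantees the nested intervals converge."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:     # sign change in [a, m]: keep the left half
            b = m
        else:                # otherwise the root is in [m, b]
            a, fa = m, fm
    return (a + b) / 2

print(bisect(lambda x: x * x - 2, 0.0, 2.0))  # 1.41421356..., i.e. sqrt(2)
```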
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 142, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9517285823822021, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/string-theory-landscape?sort=active&pagesize=15
# Tagged Questions

This tag applies to questions which deal with investigations of the string-theory landscape (defined as the space of solutions to the dynamical equations of string theory) by physical, mathematical, or statistical methods.

### How can string theory work without supersymmetry? (2 answers)
This question is inspired from reading Mitchell Porter's nice answer here to a question asking why supersymmetry should be expected naturally. Among other things, he explains that since weak scale ...

### Could all strings be one single string which weaves the fabric of the universe? (3 answers)
This question popped out of another discussion, about whether the photon needs a receiver to exist. Can a photon get emitted without a receiver? A universe containing only one electron was hypothetically ...

### Does a complete theory of quantum gravity require anthropic post-selection? (1 answer)
Does a complete theory of quantum gravity require anthropic post-selection? Certainly the black hole complementarity and causal patch conjectures highlight the essential role of observers, at least ...

### Are the $10^{500}$ different string theories being whittled down? (2 answers)
An example of a test: Ask each variant whether its estimate of the electron mass lies within $\pm\,x\%$ of the known value. This surely can't take long per theory. Although $10^{500}$ is huge, ...

### Does the ensemble of effective Lagrangians in the String theory landscape mostly include gauge theories? (0 answers)
String theory false vacua can be described by effective Lagrangians at low energy. Is there generally a correspondence between these effective Lagrangians and SU(N) gauge theories? Or do the effective ...

### Why doesn't the anthropic principle select for N=2 SUSY compactifications with an exactly zero cosmological constant? (1 answer)
The party line of the anthropic camp goes something like this. There are at least $10^{500}$ flux compactifications breaking SUSY out there with all sorts of values for the cosmological constant. Life ...

### Interplay between the cosmological constant and "microscopic" properties of string vacua (1 answer)
As far as I understand, string phenomenology is usually concerned with compactifications of string theory, M-theory or F-theory in which the uncompactified dimensions form a 4-dimensional Minkowski ...

### Are there stable string theory vacua with non-minimal cosmological constant? (1 answer)
Naive reasoning suggests that a string theory vacuum with cosmological constant Lambda1 is always unstable as long as there is a string theory vacuum with cosmological constant Lambda2 < Lambda1 ...

### P-adic Numbers and Eternal Inflation (0 answers)
In October(??) 2011, Leonard Susskind gave a talk and, with a few other people, wrote papers about p-adic numbers and measure problems in cosmology, see e.g. arXiv:1110.0496. Has there been any recent ...

### How seriously do string theorists take the "landscape"? (2 answers)
The string theory landscape seems to this outside observer to be an intermediate step in the intellectual progress toward a more robust theory that explains why our one universe has the particular ...

### String landscape in different dimensions (0 answers)
For D = 11 large (uncompactified) spacetime dimensions, the only "string theory" vacuum is M-theory. For D = 10, there are 5 vacua. Or maybe it's more correct to say 4, since type I is S-dual to ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9142234325408936, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/91222/finite-dimensional-subspaces-of-a-linear-space
# Finite dimensional subspaces of a linear space

Suppose $V$ is an infinite dimensional vector space. I do not want to assume the axiom of choice, so I will define a vector space $V$ to be infinite dimensional if there is a proper subspace $W\subseteq V$ such that $V$ and $W$ are isomorphic.

My question: Given $n \in \mathbb{N}$, is there a subspace $U\subseteq V$ of dimension $n$?

-

## 3 Answers

Even without the axiom of choice there is a sense in which a vector space is finitely generated, i.e. has a finite dimension: it is just to say that the vector space is a finite direct sum of copies of the field. To say that a vector space has an infinite dimension relies on a notion of dimension which indeed breaks down when the axiom of choice does not hold; however, one can say that a vector space is not of finite dimension, much like an infinite set is just a set which is not finite. This is a broader definition than the one you give: it is possible to have a vector space without a basis (thus not of finite dimension) in which every proper subspace has a finite dimension.

The following argument works regardless: Given a vector space which is not of finite dimension, we can choose $n$ vectors which are linearly independent; their span is the $U$ that you seek. We prove that by induction:

For $n=1$ it is obvious: take any nonzero vector in $V$.

Suppose that you chose $n$ vectors, call them $v_1,\ldots,v_n$. Since $V$ is not spanned by finitely many vectors, the span of $\{v_i\}_{i=1}^n$ is not the entire space $V$; therefore we can take some $v\in V$ which is not in this span. Now we have $v_1,\ldots,v_n,v_{n+1}=v$, as wanted.

Note that the induction only holds for finitely many vectors, and we cannot deduce that there are countably many linearly independent vectors. That would require some choice, namely the Principle of Dependent Choice. From this principle it follows that if we define something inductively then we can find an infinite sequence with the property we want.

-

Yes. Induction on $n$: For $n=0$, take $U=\{\mathbf{0}\}$. Assume the result is true for $k$. Let $U$ be a subspace of $V$ of dimension $k$. Then $U\neq V$, because $U$ is not isomorphic to any proper subspace of itself, but $V$ is. Therefore, there exists $v\in V-U$. Let $\beta$ be a basis for $U$ (which exists since we are assuming $U$ is of dimension $k$). Then $\beta\cup\{v\}$ is linearly independent, and spans $\mathrm{span}(U,v)$. So $\mathrm{span}(U,v)$ is of dimension $k+1$, proving that $V$ has a subspace of dimension $k+1$.

The choice of the single $v\in V-U$ does not require the Axiom of Choice, since $V-U$ is nonempty and this is a single choice.

-

Yes. You can sequentially find $n$ linearly independent vectors in $V$ by the process of always picking something outside the span of the preceding ones. No finite set will generate all of $V$, because then the theory of finite-dimensional vector spaces (where dimension can be defined without AC) would prohibit the existence of $W$. Thus there will always be room for one more linearly independent vector.
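The inductive step in these answers ("pick any vector outside the span so far") is effectively a greedy algorithm, and in any concrete setting each pick is explicit, so no choice principle is invoked. A finite-dimensional toy sketch (the function name and candidate list are made up for illustration):

```python
from sympy import Matrix

def pick_independent(candidates, n):
    """Greedily keep any candidate not in the span of those kept so far."""
    chosen = []
    for v in candidates:
        if len(chosen) == n:
            break
        M = Matrix([list(u) for u in chosen] + [list(v)])
        if M.rank() == len(chosen) + 1:   # v lies outside span(chosen)
            chosen.append(v)
    return chosen

cands = [(1, 0, 0, 0), (2, 0, 0, 0), (0, 1, 0, 0), (1, 1, 0, 0), (0, 0, 1, 0)]
print(pick_independent(cands, 3))
# [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]
```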
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400544762611389, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/tagged/matrix?page=3&sort=active&pagesize=50
Tagged Questions Questions on the manipulation of matrices in Mathematica. 2answers 231 views Dynamic matrix multiplication I have the following problem: I want to multiply two matrices and sum them to another matrix C. A.B+ 2B Easy! The matrix A varies as a function of three inputs a,b and c. The second matrix, B, is a ... 1answer 124 views How do I compose two functions (matrices) in Mathematica? [closed] I'm new to Mathematica. I've been reading the 'Documentation Center' but can't find a clue about composing two functions which in my case are matrices. Here's the setup. let ... 3answers 179 views How can I fill in a matrix at every iteration of a Do loop? I have a matrix whose dimensions are 26x26x3. For my assignment I have written a program which is using a Do loop (maxx=maxy=10,maxz=2): ... 2answers 135 views How to get a sub range of a larger matrix? This should be an easy question but I only found a partial solution using All. Say I have a matrix A = 1000 x 5, how to get only a part of that matrix, say B = n x ... 0answers 61 views please help to me, Where is my mistake? [closed] e = {{v, -37925.9}, {-37925.9, s}} v = (37925.9 - 12.74 x) s = (7585108 - 15029 x) Solve [e == 0 , x ] please help to me, Where is my mistake? 2answers 181 views Compiling LinearSolve[] or creating a compilable procedural version of it Earlier today I had a discussion with a representative at Premier Support about the 2 questions I've asked here over the past couple of days: Seeking strategies to deploy a function securely ... 2answers 191 views Correct way to generate large data sets (i.e.forward yield curve ) I would like to generate a set of forward yield curve matrix of size 1000 x 100. First I defined my SparseArray of 1000 x100: ... 2answers 99 views Matrix conditional operation Say I have a matrix of: tmp1 = 5; tmp2 = 5; tmp3 = RandomChoice[{0, 1, 2, 3, 4, 5}, {tmp1, tmp2}]; MatrixForm[tmp3] How to do a conditional operation of elements ... 1answer 80 views How can I find the row differences of a matrix? How can I compute the row differences of an m x n matrix to obtain an m-1 x n matrix; that is, given how do I obtain 3answers 233 views Correct way to populate a DiagonalMatrix? I would like to create a series of correlation matrices that starts with : sensMat[[1]] = DiagonalMatrix[ { 1,1,1,1,1 } ]) // MatrixForm and iterates in 0.1 ... 0answers 82 views Not getting the required eigenvalues [closed] I'm trying to use Mathematica to show that the eigenvalues of $U$ are $\pm\dfrac{1-i}{\sqrt{2}}$, where $U = (I + T + iS)(I - T- iS)^{-1}$ where \$ S = \left( \begin{matrix} 1 & 1 \\ 1 ... 1answer 290 views Problem with plotting eigenvalues I want to plot the eigenvalues of a matrix which is dependant on a parameter (well, actually I want to plot a tight-binding electronic band structure). The dimension of the hamitonian matrix is ... 3answers 209 views How do I combine the data from two tables according to a rule of my own devising? I have two matrices a = m x 5 and b =m x 5 (m is large, say 1,000) which are already sorted as below: How to generate a new matrix c = m x 5 which consists: Column 1: exact replica of Column 1 ... 2answers 136 views Correct way to compare arrays and do conditional evaluations I would like to compare two arrays a=: and b= and get: using: (b/.(b_?Positive->a+b))//MatrixForm but this doesn't seem to work? Can ... 3answers 172 views Correct way to remove matrix columns? 
I start off with m = 1000 x 5 matrix, and I would like to remove first column to get 1000 x 4 matrix and repeat again for ... 1answer 112 views Can Depth be used as an equivalent for MatrixQ? Given an expression x, are the following two statements interchangable with no exceptions? Depth[x] - 1 == 2 ... 1answer 173 views How to efficiently fill in matrix values? I would like to efficiently fill in or assign values to matrix m: m={ {a[1,1],a[1,2],a[1,3],a[1,4]}, {a[2,1],a[2,2],a[2,3],a[2,4]}, {a[3,1],a[3,2],a[3,3],a[3,4]}, ... 3answers 144 views MatrixForm explanation as why row extract is displayed as a column? I have a 5 x 5 matrix: cdsSpread5yrs = But after doing a row extract, why is it displaying as a column? ... 2answers 511 views How to transform a 3D image by an affine transformation matrix I have a question concerning Image Processing: I have a stack of images, which I can compose to a 3D image using Image3D. Additionally I have a 4x4 affine transformation matrix. I would like to ... 1answer 107 views RowReduce Problem Here are two examples: RowReduce[{{3, 1, a}, {2, 1, b}}] evaluates to {{1, 0, a - b}, {0, 1, -2 a + 3 b}} but ... 2answers 214 views Matrix Plot – Little Exercise Hi there, mathematicians. I'm not very good at coding plots in Mathematica, so I was hoping that one of you could help me solve a problem I'm having. I have the following matrix plot: ... 2answers 295 views Exact cover solution Is it possible to get a exact cover solution(s) and/or number of possible solutions in Mathematica? 3answers 132 views Need Help Writing (a Pascal) Matrix in Mathematica I want to write a function $f[n]$ in Mathematica which gives me an $n\times n$ lower triangular Pascal matrix with a row of zeros in between each nonzero row. That is, I want the matrices ... 1answer 157 views Verifying and deriving basic (block) matrix identities How can I use the new symbolic matrix/tensor capabilities to verify matrix identities, such as (1) or (2) Even better, how can I ask Mathematica to derive expressions for X, Y, Z, and U like ... 2answers 161 views Sort matrix by columns and rows without changing them I would like to sort a matrix in descending order first by the total of each column, then by the total of each row, but without changing their content. For example, if I had: ... 1answer 238 views A lot of matrix multiplication So I have a set of 15 $4\times 4$ matrices which I call $X_i, (i=1,2..15)$ and a set of 6 $4\times 4$ matrices which I call $y_j, (j=1,2...6)$. Now I have to calculate $(X_i y_j)-(y_j X_i^*)$ for ... 3answers 322 views How to manipulate gauge theory in Mathematica? I want to know if there is a way of typing into Mathematica an expression like the following, \epsilon^{\mu \nu \lambda} f^{abc} A^a_\mu A^b_\nu A^c_\lambda + g\epsilon^{\mu \nu \lambda} A^a_\mu ... 3answers 162 views Are table headings functional? Are table headings for aesthetics only? If I have: ... 4answers 170 views Generating all matrices with 1 (possibly) replaced by -1 I have a matrix $M$, whose dimension I am unsure of, which has only $\lbrace0,1\rbrace$ entries. I would like to generate all the possible matrices that result from changing (some subset) of the $1$'s ... 1answer 118 views Optimization of correlation calculation How can I make the following line of code run faster? Is there a way to do this calculation as a matrix vs vector than vector vs vector? In the code below, f is a ... 1answer 196 views Find an inverse matrix, regarding a parameter as real I have the matrix ... 
1answer 121 views How to modify a matrix to satisfy a special condition? I have matrix like this: How do I modify this matrix to make it satisfy the following condition: For each element {i, j} in the matrix the sum of the elements of row i must be equal to the sum ... 6answers 185 views Sorting Matrix elements I have matrix in as shown, consisting of real numbers and 0. How can I sort it to become out as shown? ... 3answers 191 views How to sum matrix elements based on finding the first (and second) non-zero elements of each row? I have a matrix: I would like to sum all the first non-zero elements of each row so that I get a value of $$25.5317 + 8.85471 + 6.90018 + 32.9436 + ...$$ and so on and simply ignore zero rows. ... 1answer 120 views How to do conditional matrix division when elements are a combination of zero and real numbers? I tried to divide two large matrices DLand PL of size 100,000 x 5 each and they look like this, DL= and PL= By using conditional function: ... 0answers 26 views Why isn't the matrix product computed? [duplicate] Possible Duplicate: Why does MatrixForm affect calculations? I have the following input: Why is the matrix not computed, giving me a $2 \times 2$ matrix? 1answer 94 views How to Sort on row by row basis? Say I have matrix how to sort on row by row basis to get: 2answers 215 views How to take matrix elements as input to another matrix or loop? Say I have matrix tmp1 of size 1000 x 5 (shown is just a small section for illustration purpose), consisting of real numbers and 0: How can I iteratively take every non-zero real number elements and ... 3answers 269 views Is there a way to do conditional matrix loop using 'continue' I have the following: ... 1answer 331 views using Mathematica's matrix multiplication in C++ Is there any way that I can utilize Mathematica's matrix multiplication in a C++ program? I'm making a 3D graphics engine (for class) in C++ and I would really like to use Mathematica for all of my ... 1answer 244 views Matrix Multiplication Modulo 2 I would like to perform matrix multiplication modulo 2. Hence, instead of the usual: A.B I did: ... 2answers 153 views How to do equality check of a large matrix and get the corresponding index position? Say I have a matrix A = 1000 x 5, and I want to compare it's first element (uppermost top left) to each of the 5 elements in the first column of matrix B of size 5 x 5. Whenever the first time the ... 1answer 386 views vectorial ODE in mathematica with matrix exponentials I want to solve the following equation in mathematica : DSolve[{X'[t] == A.X[t], X[0] == ( {{0},{0}} )}, X[t], t] It is a system of 2 ODEs coupled by the matrix A, ... 3answers 278 views How do we solve Eight Queens variation using primes? Using a $p_n$x $p_n$ matrix, how can we solve the Eight queens puzzle to find a prime in every row and column? ... 0answers 154 views Mathematica Complains about Non Symmetric Covariance matrix, when it's not the case I was doing some fitting with Mathematica7 using NonlinearModelFit. It's quite long the program to do the fit and that's why I am not displaying here ... It goes ok, and I can get the fit parameters ... 1answer 100 views how to get modify the following correlation matrix with a specific list of financial prices? The following takes the last 5 members of the Dow Jones index and plots the correlation matrix of the last 5 years of daily prices: ... 
- **Tridiagonal matrix for any n** (1 answer, 177 views) I'm pretty new to Mathematica and I need to figure out how to create a $n\times n$ tridiagonal matrix for any $n$. I don't have the slightest clue where to begin. Edit: got this far, not sure how to ...
- **Creating a matrix from the output of a variable inside a for loop** (1 answer, 130 views) I would like to enter the results of the following loop inside a matrix as its elements: ...
- **Can mathematica simplify an expression setting simplified answer equal to zero?** (0 answers, 170 views) Basically I want to reduce something like: Subscript[P, 4] - Subscript[P, 5] == Subscript[R, 4, 5]* Subscript[i, 4, 5] when only one of them is known (...
- **How to calculate FDCT** (2 answers, 321 views) I'm new to Mathemtica and I'm trying to calculate Discrete Cosine Transformation FDCT. I found the FourierDCT built-in function, but not DCT, so I need to implement it. I have tried couple of ideas ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9021876454353333, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/279013/nilpotent-blocks-of-matrices
# Nilpotent blocks of matrices

I'm trying to read through Hirsch and Smale's "Differential Equations, Dynamical Systems, and Linear Algebra", and I don't understand how one theorem follows from another. The first theorem says:

Theorem 1. Let $N$ be a nilpotent operator on a real or complex vector space $E$. Then $E$ has a basis giving $N$ a matrix of the form $$A = \operatorname{diag}\{A_1, \dots, A_r\}$$ where $A_j$ is an elementary nilpotent block, and the size of $A_k$ is a nonincreasing function of $k$. The matrices $A_1, \dots, A_r$ are uniquely determined by the operator $N$.

They go on to say that if $A$ is an elementary nilpotent matrix, then the dimension of $\ker A$ is $1$, since the rank of $A$ is $n-1$. That part I understand. What I don't understand is how Theorem 2 follows from Theorem 1:

Theorem 2. In Theorem 1 the number $r$ of blocks is equal to the dimension of $\ker A$.

Can you help me? Thanks!

- The block decomposition of your matrix $A$ corresponds to a decomposition of your vector space into a direct sum of $A$-invariant subspaces, say $V=V_1\oplus\cdots\oplus V_r$, where each $V_i$ is $A_i$-invariant. Therefore $\ker A=\ker A_1\oplus\cdots\oplus\ker A_r$. Now the claim easily follows. – Matemáticos Chibchas Jan 15 at 3:35

## 2 Answers

Each elementary nilpotent block has rank $n-1$, and the rank of the matrix is the sum of the ranks of the diagonal blocks. Equivalently, the nullity of the matrix is the sum of the nullities of the diagonal blocks. If the nullity of $A$ is $k$, it follows that there must be precisely $k$ blocks, each contributing $1$ to the nullity.

Also, the form that the book is talking about is much more general than just for nilpotent operators. This type of decomposition is called the Jordan Normal Form. The fact that the geometric multiplicity of an eigenvalue is the number of Jordan blocks is a fundamental fact of the Jordan Normal Form, and your result is an immediate consequence.

- I see, thanks for your help! – badatmath Jan 16 at 18:45

I found the answer to your question in Matrix Analysis and Applied Linear Algebra (Carl D. Meyer).

-
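As a quick numerical sanity check of the block count (my own illustration, not part of the thread; it assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.linalg import block_diag

def nilpotent_block(n):
    # Elementary nilpotent block: ones on the superdiagonal, zeros elsewhere.
    return np.eye(n, k=1)

# A block-diagonal nilpotent operator with r = 3 blocks, of sizes 3, 2, 1.
A = block_diag(nilpotent_block(3), nilpotent_block(2), nilpotent_block(1))

rank = np.linalg.matrix_rank(A)   # each k x k block contributes rank k - 1
nullity = A.shape[0] - rank       # rank-nullity theorem
print(nullity)                    # prints 3 = the number of blocks
```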
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9168463945388794, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/quantization
# Tagged Questions

The quantization tag has no wiki summary.

### Bohr-sommerfeld quatnization from the WKB approximation
(0 answers, 45 views) how can one prove the Bohr Sommerfeld quantization formula $$\oint p.dq =2\pi n$$ from the WKB ansatz solution for the Schroedinger equation ?? $\Psi(x)=e^{iS(x)/ \hbar}$ with $S$ the action ...

### Critical dimension in quantization of p-branes
(1 answer, 78 views) So I have what might be a fairly basic question, but my understanding that in the quantization of the the string, or the 1-brane, there are conditions on the number of spacetime dimensions to ensure ...

### Integer physics
(2 answers, 144 views) Are there interesting (aspects of) problems in modern physics that can be expressed solely in terms of integer numbers? Bonus points for quantum mechanics.

### Do semiclassical GR and charge quantisation imply magnetic monopoles?
(0 answers, 78 views) Assuming charge quantisation and semiclassical gravity, would the absence of magnetically charged black holes lead to a violation of locality, or some other inconsistency? If so, how? (I am not ...

### What is the action for an electromagnetic field if including magnetic charge
(1 answer, 85 views) Recently, I try to write an action of an electromagnetic field with magnetic charge and quantize it. But it seems not as easy as it seems to be. Does anyone know anything or think of anything like ...

### Weyl Ordering Rule
(2 answers, 161 views) While studying Path Integrals in Quantum Mechanics I have found that [Srednicki: Eqn. no. 6.6] the quantum Hamiltonian $\hat{H}(\hat{P},\hat{Q})$ can be given in terms of the classical Hamiltonian ...

### State space of QFT, CCR and quantization, and the spectrum of a field operator?
(1 answer, 83 views) In the canonical quantization of fields, CCR is postulated as (for scalar boson field): $$[\phi(x),\pi(y)]=i\delta(x-y)\qquad\qquad(1)$$ in analogy with the ordinary QM commutation relation: ...

### Allowed Quantum States - Filkelstein and Rubinstein constraints
(0 answers, 17 views) So basically i'm doing a report on Finkelstein and Rubinstein constraints. I have a system where the allowed quantum states satisfy ...

### Why one-dimensional strings, but not higher-dimensional shells/membranes?
(1 answer, 385 views) One way that I've seen to sort-of motivate string theory is to 'generalize' the relativistic point particle action, resulting in the Nambu-Goto action. However, once you see how to make this ...

### Quantization of strings on a curved backgrond
(2 answers, 119 views) usually when people want to quantize the string on flat background, they will try to find the the OPE of embeddings (by solving a green function in a 2D space) and use them to find the energy-momentum ...

### Canonical quantization in supersymmetric quantum mechanics
(1 answer, 190 views) Suppose you have a theory of maps $\phi: {\cal T} \to M$ with $M$ some Riemannian manifold, Lagrangian L~=~ \frac12 g_{ij}\dot\phi^i\dot\phi^j + \frac{i}{2}g_{ij}(\overline{\psi}^i ...

### Is the quantization of gravity necessary for a quantum theory of gravity?
(7 answers, 591 views) The other day in my string theory class, I asked the professor why we wanted to quantize gravity, in the sense that we want to treat the metric on space-time as a quantum field, as opposed to, for ...

### Geometric quantization of a hydrogen atom
(0 answers, 62 views) I want to know how to quantize a hydrogen atom as an example of geometric quantization. Apparently there is a derivation in the book "Geometric Quantization in Action: Applications of Harmonic ...
### What entities in Quantum Mechanics are known to be "not quantized"?
(4 answers, 181 views) Since all the traditional "continuous" quantities like time, energy, momentum, etc. are taken to be quantized implying that derived quantities will also be quantized, I was wondering if Quantum ...

### exponential potential $\exp(|x|)$
(0 answers, 217 views) For $a$ being positive what are the quantization conditions for an exponential potential? $$- \frac{d^{2}}{dx^{2}}y(x)+ ae^{|x|}y(x)=E_{n}y(x)$$ with boundary conditions $$y(0)=0=y(\infty)$$ I ...

### Definition of "Quantizing"
(2 answers, 163 views) Could anyone explain to me what "quantize" means in the following context? Quantize the 1-D harmonic oscillator for which $$H~=~{p^2\over 2m}+{1\over 2} m\omega^2 x^2.$$ I understand that the ...

### Operator Ordering Ambiguities
(1 answer, 169 views) I have been told that $$[\hat x^2,\hat p^2]=2i\hbar (\hat x\hat p+\hat p\hat x)$$ illustrates operator ordering ambiguity. What does that mean? I tried googling but to no avail.

### When can photon field amplitudes be written as field operators?
(0 answers, 136 views) Suppose I have some classical field equation for two photon fields with amplitudes $A_1(z),A_2(z)$ (plane waves) given as ${A}_1=\alpha f(A_1,A_2) \\ {{A}_2}=\beta g(A_1,A_2)$ Under what ...

### Quantization of Nambu–Goto action in multiples of Planck's constant?
(1 answer, 194 views) Isn't it possible? Quantization of Nambu–Goto action $$\mathcal{S} ~=~ -\frac{1}{2\pi\alpha'} \int \mathrm{d}^2 \Sigma \sqrt{{\dot{X}} ^2 - {X'}^2}~=~nh\qquad n \in\mathbb{Z}.$$

### Poincaré group on quantum Klein-Gordon field (C*-algebraic scenario)
(1 answer, 107 views) on the same topic as this question, I have been trying to fool around with the free real K-G field in flat spacetime on the C*-algebraic scenario (Haag-Kastler axioms, Weyl quantization, etc). Since ...

### Path integral and geometric quantization
(2 answers, 311 views) I was wondering how one obtains geometric quantization from a path integral. It's often assumed that something like this is possible, for example, when working with Chern-Simons theory, but rarely ...

### Reason for the discreteness arising in quantum mechanics?
(4 answers, 565 views) What is the most essential reason that actually leads to the quantization. I am reading the book on quantum mechanics by Griffiths. The quanta in the infinite potential well for e.g. arise due to the ...

### Ordering Ambiguity in Quantum Hamiltonian
(2 answers, 154 views) While dealing with General Sigma models (See e.g. Ref. 1) $$\tag{10.67} S ~=~ \frac{1}{2}\int \! dt ~g_{ij}(X) \dot{X^i} \dot{X^j},$$ where the Riemann metric can be expanded as, \tag{10.68} ...

### Equivalence of classical and quantized equation of motion for a free field
(1 answer, 145 views) Suppose a classical free field $\phi$ has a dynamic given in Poisson bracket form by $\partial_o\phi=\{H, \phi\}$. If we promote this field to an operator field, the dynamic after canonical ...

### Quantization and natural boundary conditions
(0 answers, 99 views) The Euler-Lagrange equations follow from minimizing the action. Usually this is done with fixed (e.g. vanishing) boundary conditions such that we do not have to worry about any boundary terms. ...

### When can a classical field theory be quantized?
(1 answer, 196 views) Given a classical field theory can it be always quantized? Put in another way, Does there necessarily need to exist a particle excitation given a generic classical field theory? By generic I mean all ...
### Dirac equation as canonical quantization?
(2 answers, 299 views) First of all, I'm not a physicist, I'm mathematics phd student, but I have one elementary physical question and was not able to find answer in standard textbooks. Motivation is quite simple: let me ...

### Bohr sommerfeld quantiztion rule and Gutzwiller trace
(0 answers, 132 views) assuming we can evaluate the eigenvalue staircase $N(E)$ in both manners with the Bohr-Sommerfeld quantization rule $N(E)2\pi \hbar = \oint _{C}p.dq$ and using the Gutzwiller trace $N(E)=$ ...

### Bohr Model of the Hydrogen Atom - Energy Levels of the Hydrogen Atom
(2 answers, 837 views) Why the allowed (stationary) orbits correspond to those for which the orbital angular momentum of the electron is an integer multiple of $\hbar=\frac {h}{2\pi}$? $$L=n\hbar$$ Bohr Quantization rule of ...

### quantization of this hamiltonian?
(2 answers, 133 views) let be the Hamiltonian $H=f(xp)$ if we consider canonical quantization so $f( -ix \frac{d}{dx}-\frac{i}{2})\phi(x)= E_{n} \phi(x)$ here 'f' is a real valued function so i believe that $f(xp)$ ...

### Dirac's quantization rule
(1 answer, 218 views) I first recall the Dirac's quantization rule, derived under the hypothesis that there would exit somewhere a magnetic charge: $\frac{gq}{4\pi} = \frac{n\hbar}{2}$ with $n$ natural. I am wondering ...

### Why do we use Planck's constant?
(2 answers, 672 views) I have been trying to reason why energy packets (i.e. photons) are assumed to be quantized. I know this originated from Max Planck, but may someone explain why energy couldn't be emitted continuously ...

### Generalizing Heisenberg Uncertainty Priniciple
(3 answers, 348 views) Writing the relationship between canonical momenta $\pi _i$ and canonical coordinates $x_i$ $$\pi _i =\text{ }\frac{\partial \mathcal{L}}{\partial \left(\frac{\partial x_i}{\partial t}\right)}$$ ...

### Computing a density of states of Hamiltonian $H=xp$
(1 answer, 179 views) How could I compute the integral $$N(E)~=~ \int dx \int dp~ H(E-xp)$$ the 'Area' inside the Phase space is taken for $x \ge 0$ and $p\ge 0$? The result should be N(E)~=~ ...

### understanding the oscillating part of the Gutzwiller trace
(1 answer, 83 views) given the density of states according to Gutzwiller's trace formula $g(E)= g_{smooth}(E)+ g_{osc}(E)$ i know that the 'smooth' part comes from $g_{smooth}(E)= \iint dxdp \delta(E-p^{2}-V(x))$ ...

### shouldn't we add the oscillating terms into Bohr-Sommerfeld quantization formula
(0 answers, 68 views) shouldn't be the quantization formula (in one dimension) equal to $N_{smooth}(E)+N_{osc}(E) = \oint_{C}p.dq$ ?? where the Oscillating term is just the correction from Gutzwiller trace formula or a ...

### Is the quantization of the harmonic oscillator unique?
(3 answers, 292 views) To put it a little better: Is there more than one quantum system, which ends up in the classical harmonic oscillator in the classial limit? I'm specifically, but not only, interested in an ...

### Magnetic monopole and electromagnetic field quantization procedure
(0 answers, 131 views) From the Maxwell's equations point of view, existence of magnetic monopole leads to unsuitability of the introduction of vector potential as $\vec B = \operatorname{rot}\vec A$. As a result, it was ...

### Some questions on observables in QM
(3 answers, 313 views) 1-In QM every observable is described mathematically by a linear Hermitian operator. Does that mean every Hermitian linear operator can represent an observable? 2-What are the criteria to say whether ...
### Pohlmeyer reduction of string theory for flat and AdS spaces
(0 answers, 45 views) The definition of Pohlmeyer invariants in flat-space (as per eq-2.16 in Urs Schreiber's DDF and Pohlmeyer invariants of (super)string) is the following: $Z^{\mu_1...\mu_N} (\mathcal{P}) =$ ...

### Pohlmeyer reduction of string theory for flat- and AdS- spaces
(0 answers, 71 views) The definition of Pohlmeyer invariants in flat-space (as per eq-2.16 in Urs Schreiber's DDF and Pohlmeyer invariants of (super)string) is the following: $Z^{\mu_1...\mu_N} (\mathcal{P}) =$ ...

### Quantum gravity at D = 3
(1 answer, 80 views) Quantization of gravity (general relativity) seems to be impossible for spacetime dimension D >= 4. Instead, quantum gravity is described by string theory which is something more than quantization ...

### Virasoro constraints in quantization of the Polyakov action
(2 answers, 167 views) The generators of the Virasoro algebra (actually two copies thereof) appear as constraints in the classical theory of the Polyakov action (after gauge fixing). However, when quantizing only "half" of ...

### How does one geometrically quantize the Bloch equations?
(1 answer, 65 views) I've just now rated David Bar Moshe's post (below) as an "answer", for which appreciation and thanks are given. Nonetheless there's more to be said, and in hopes of stimulating further posts, I've ...

### Rigorous proof of Bohr-Sommerfeld quantization
(3 answers, 364 views) Bohr-Sommerfeld quantization provides an approximate recipe for recovering the spectrum of a quantum integrable system. Is there a mathematically rigorous explanation why this recipe works? In ...

### Geometric quantization of identical particles
(2 answers, 123 views) Background: It is well known that the quantum mechanics of $n$ identical particles living on $\mathbb{R}^3$ can be obtained from the geometric quantization of the cotangent bundle of the manifold ...

### Canonical quantization of quantum field
(1 answer, 297 views) The canonical quantization of a quantum field prescribes that given a lagrangian, one can quantize the theory by imposing the commutation relations between the field operators and their conjugated ...

### What makes background gauge field quantization work?
(1 answer, 105 views) [Again I am unsure as to whether this is appropriate for this site since this is again from standard graduate text-books and not research level. Please do not answer the question if you think that ...

### Question about the parity of the ghost number operator in BRST quantization
(1 answer, 154 views) Given a Lie algebra $[K_i,K_j]=f_{ij}^k K_k$, and ghost fields satisfying the anticommutation relations $\{c^i,b_j\}=\delta_j^i$, the ghost number operator is then $U=c^ib_i$ (duplicate indices are ...

### Trouble with constrained quantization (Dirac bracket)
(2 answers, 248 views) Consider the following peculiar Lagrangian with two degrees of freedom $q_1$ and $q_2$ $$L = \dot q_1 q_2 + q_1\dot q_2 -\frac12(q_1^2 + q_2^2)$$ and the goal is to properly quantize it, following ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.90175461769104, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/90894-finding-derivative-function-print.html
# Finding a Derivative of a Function

• May 28th 2009, 08:55 PM
jimmyp

Hello again, I'm again having some trouble using the rules to work out the derivative of a function. Could someone please show me how I'd find the derivative of $y=\left(\frac{4x^2}{3-x}\right)^3$? That would be great. Thanks for reading! :)

• May 28th 2009, 09:23 PM
Chris L T521

Quote: Originally Posted by jimmyp
Hello again, I'm again having some trouble using the rules to work out the derivative of a function. Could someone please show me how I'd find the derivative of $y=\left(\frac{4x^2}{3-x}\right)^3$? That would be great. Thanks for reading! :)

This is a chain rule problem. Let's approach it this way.

Let $u=\frac{4x^2}{3-x}$. It follows that $y=u^3$.

Now by the chain rule, $\frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dx}$.

Since we let $u=\frac{4x^2}{3-x}$, it follows by the quotient rule that $\frac{du}{dx}=\frac{(3-x)(8x)-4x^2(-1)}{(3-x)^2}=\frac{24x-4x^2}{(3-x)^2}$.

Therefore, $\frac{dy}{dx}=\underbrace{3u^2}_{dy/du}\cdot \underbrace{\frac{24x-4x^2}{(3-x)^2}}_{du/dx}=3\left(\frac{4x^2}{3-x}\right)^2\cdot\frac{24x-4x^2}{(3-x)^2}$

This can be simplified as $\frac{48x^4\cdot4x\left(6-x\right)}{\left(3-x\right)^4}=\frac{192x^5\left(6-x\right)}{\left(3-x\right)^4}$

Does this make sense?

• May 28th 2009, 09:24 PM
TheEmptySet

Quote: Originally Posted by jimmyp
Hello again, I'm again having some trouble using the rules to work out the derivative of a function. Could someone please show me how I'd find the derivative of $y=\left(\frac{4x^2}{3-x}\right)^3$? That would be great. Thanks for reading! :)

First note that this is a function composition: $f(x)=x^3$, $g(x)=\frac{4x^2}{3-x}$

$y=f(g(x))=\left(\frac{4x^2}{3-x}\right)^3$

Before we use the chain rule, let's calculate the needed derivatives:

$f'(x)=3x^2$

$g'(x)=\frac{(3-x)(8x)-(4x^2)(-1)}{(3-x)^2}=\frac{24x-8x^2+4x^2}{(3-x)^2}=\frac{24x-4x^2}{(3-x)^2}=\frac{4x(6-x)}{(3-x)^2}$

Now the chain rule tells us that $y'=f'(g(x))\cdot g'(x)=3\left(\frac{4x^2}{3-x} \right)^2\left( \frac{4x(6-x)}{(3-x)^2}\right)=\frac{192x^5(6-x)}{(3-x)^4}$
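As a quick machine check of the result above (my own addition, not part of the thread; it assumes SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
y = (4*x**2 / (3 - x))**3

dydx = sp.diff(y, x)
expected = 192*x**5*(6 - x) / (3 - x)**4

# simplify() cancels the rational expression; printing 0 confirms the answer.
print(sp.simplify(dydx - expected))
```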
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.926836371421814, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/18844?sort=newest
## Does every smooth manifold of infinite topological type admit a complete Riemannian metric?

To elaborate a bit, I should say that the question of the existence of a complete metric is only of interest in the case of manifolds of infinite topological type; if a manifold is compact, any metric is complete, and if a noncompact manifold has finite topological type (i.e., is diffeomorphic to the interior of a compact manifold with boundary), one can construct a complete metric on the manifold with boundary via a partition of unity, and then divide by the square of a defining function to get a complete, asymptotically hyperbolic metric on the interior.

I have absolutely no intuition for how "wild" these manifolds can be. The only examples I can think of are infinite connected sums and quotients of negatively curved symmetric spaces by sufficiently complicated groups, but I'd imagine that one can construct some pathological examples by limiting arguments.

-

## 3 Answers

By the Whitney embedding theorem, any smooth manifold embeds into some Euclidean space as a closed subset. The induced metric is complete. In fact, a good exercise is to show that any Riemannian metric is conformal to a complete metric.

-

It's not my field, so I am really out on a limb here. Maybe someone can tell me why the following naïve idea wouldn't work: Make a Riemannian metric by partition of unity. For any point $p$ in the manifold, let $h(p)$ be the infimum of the lengths of all paths starting at $p$ which are not contained in any compact subset of the manifold. Now $h$ won't be smooth, so you need to smooth it without changing it substantially. Divide the metric by $h^2$.

- The idea is right I think, but your $h$ is unnecessarily complicated; there are easier ways to find a proper smooth positive function on a manifold. – Igor Belegradek Mar 20 2010 at 16:12
- Yeah, but is it enough for it to be proper? It seems to me it has to have some relationship with the metric to ensure that paths that stray outside of any compact set will be infinitely long in the rescaled metric. – Harald Hanche-Olsen Mar 20 2010 at 16:44
- The definition of $h$ uses "length", which gives a relationship with the metric. I am not claiming $h$ is proper, but it could be. – Igor Belegradek Mar 20 2010 at 17:39

This is not an answer to your question. I just wanted to point out that there are plenty of examples of complete Einstein manifolds of infinite topological type. I am aware of the following two papers at least:

1. Complete Ricci-flat Kähler manifolds of infinite topological type, by Anderson, Kronheimer and LeBrun
2. Continued fractions and Einstein manifolds of infinite topological type, by Calderbank and Singer

- On a somewhat related note, Sha and Yang have shown that for $n$ and $m \geq 2$, the connected sum of $S^n\times S^m$ with itself an infinite number of times carries a metric of positive Ricci curvature. I don't think these metrics are Einstein, though. – Jason DeVito Mar 20 2010 at 17:02
- Interesting. I don't work in this area, but Calderbank and Singer are friends, and I got to know about their results (and the earlier one I mentioned) through talking to them. – José Figueroa-O'Farrill Mar 20 2010 at 18:06
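For reference, the argument hinted at in the answers above can be written out in a few lines (my own sketch, not text from the thread). Pick any Riemannian metric $g$ on $M$ (e.g. via a partition of unity) and any smooth proper function $\varphi: M \to \mathbb{R}$; one exists on every smooth manifold, for instance the squared norm of a closed Whitney embedding. Set $\tilde g = g + d\varphi \otimes d\varphi$. Then for any curve $\gamma: [a,b] \to M$,

$$L_{\tilde g}(\gamma) \;\ge\; \int_a^b \left| \tfrac{d}{dt}\,\varphi(\gamma(t)) \right| dt \;\ge\; \left| \varphi(\gamma(b)) - \varphi(\gamma(a)) \right|,$$

so every $\tilde g$-metric ball of radius $R$ about $x_0$ lies in $\varphi^{-1}\big([\varphi(x_0)-R,\,\varphi(x_0)+R]\big)$, which is compact because $\varphi$ is proper. Closed bounded sets are therefore compact, and $(M,\tilde g)$ is complete by Hopf-Rinow.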
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364257454872131, "perplexity_flag": "head"}
http://www.reference.com/browse/habilitate
# Le Sage's theory of gravitation

Le Sage's theory of gravitation is the most common name for the kinetic theory of gravity originally proposed by Nicolas Fatio de Duillier in 1690 and later by Georges-Louis Le Sage in 1748. The theory proposed a mechanical explanation for Newton's gravitational force in terms of streams of tiny unseen particles (which Le Sage called ultra-mundane corpuscles) impacting on all material objects from all directions. According to this model, any two material bodies partially shield each other from the impinging corpuscles, resulting in a net imbalance in the pressure exerted by the impacting corpuscles on the bodies, tending to drive the bodies together. This mechanical explanation for gravity never gained widespread acceptance, although it continued to be studied occasionally by physicists until the beginning of the twentieth century, by which time it was generally considered to be conclusively discredited.

## Basic theory

The theory posits that the force of gravity is the result of tiny particles (corpuscles) moving at high speed in all directions, throughout the universe. The intensity of the flux of particles is assumed to be the same in all directions, so an isolated object A is struck equally from all sides, resulting in only an inward-directed pressure but no net directional force (P1). With a second object B present, however, a fraction of the particles that would otherwise have struck A from the direction of B is intercepted, so B works as a shield: from the direction of B, A will be struck by fewer particles than from the opposite direction. Likewise B will be struck by fewer particles from the direction of A than from the opposite direction. One can say that A and B are "shadowing" each other, and the two bodies are pushed toward each other by the resulting imbalance of forces (P2). Thus the apparent attraction between bodies is, according to this theory, actually a diminished push from the direction of other bodies, so the theory is sometimes called push gravity or shadow gravity, although it is more widely referred to as Le Sage gravity.

**Nature of collisions.** If the collisions of body A and the gravific particles were fully elastic, the intensity of the reflected particles would be as strong as that of the incoming ones, so no net directional force would arise. The same is true if a second body B is introduced, where B acts as a shield against gravific particles in the direction of A. The gravific particle C which ordinarily would strike A is blocked by B, but another particle D which ordinarily would not have struck A is re-directed by the reflection off B, and therefore replaces C. Thus if the collisions are fully elastic, the reflected particles between A and B would fully compensate for any shadowing effect. In order to account for a net gravitational force, it must be assumed that the collisions are not fully elastic, or at least that the reflected particles are slowed, so that their momentum is reduced after the impact. This results in streams with diminished momentum departing from A, and streams with undiminished momentum arriving at A, so a net directional momentum toward the center of A arises (P3).
Under this assumption, the reflected particles in the two-body case will not fully compensate for the shadowing effect, because the reflected flux is weaker than the incident flux.

**Inverse square law.** Since it is assumed that some or all of the gravific particles converging on an object are either absorbed or slowed by the object, it follows that the intensity of the flux of gravific particles emanating from the direction of a massive object is less than the flux converging on the object. We can imagine this imbalance of momentum flow - and therefore of the force exerted on any other body in the vicinity - distributed over a spherical surface centered on the object (P4). The imbalance of momentum flow over an entire spherical surface enclosing the object is independent of the size of the enclosing sphere, whereas the surface area of the sphere increases in proportion to the square of the radius. Therefore, the momentum imbalance per unit area decreases inversely as the square of the distance.

**Mass proportionality.** From the premises outlined so far, there arises only a force which is proportional to the surface of the bodies. But gravity is proportional to the masses. To satisfy the need for mass proportionality, the theory posits that a) the basic elements of matter are very small, so that gross matter consists mostly of empty space, and b) the particles are so small that only a small fraction of them would be intercepted by gross matter. The result is that the "shadow" of each body is proportional to the surface of every single element of matter. If it is then assumed that the elementary opaque elements of all matter are identical (i.e., having the same ratio of density to area), it will follow that the shadow effect is, at least approximately, proportional to the mass (P5).
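The inverse-square scaling can be illustrated with a few lines of code (my own toy model, not part of the original article; all numbers and the 4π normalization are arbitrary assumptions):

```python
import numpy as np

# Two small, perfectly absorbing bodies in an isotropic corpuscle flux.
# Each body blocks, as seen from the other, a solid angle ~ sigma / r**2,
# so the uncompensated push on the other scales as 1/r**2.
phi = 1.0              # momentum flux per unit area per steradian (toy units)
sigma1 = sigma2 = 1.0  # absorption cross sections (toy units)

for r in [1.0, 2.0, 4.0, 8.0]:
    F = phi * sigma1 * sigma2 / (4 * np.pi * r**2)
    print(r, F)        # doubling r quarters F: the shadow model's 1/r^2 law
```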
## Fatio

Nicolas Fatio presented the first formulation of his thoughts on gravitation in a letter to Christiaan Huygens in the spring of 1690. Two days later Fatio read the content of the letter before the Royal Society in London. In the following years Fatio composed several draft manuscripts of his major work De la Cause de la Pesanteur, but none of this material was published in his lifetime. In 1731 Fatio also sent his theory as a Latin poem, in the style of Lucretius, to the Paris Academy of Science, but it was dismissed. Some fragments of these manuscripts and copies of the poem were later acquired by Le Sage, who failed to find a publisher for Fatio's papers. It was not until 1929 that the only complete copy of Fatio's manuscript was published, by Bopp, and in 1949 Gagnebin used the collected fragments in Le Sage's possession to reconstruct the paper. The Gagnebin edition includes revisions made by Fatio as late as 1743, forty years after he composed the draft on which the Bopp edition was based. However, the second half of the Bopp edition contains the mathematically most advanced parts of Fatio's theory, which were not included by Gagnebin in his edition. For a detailed analysis of Fatio's work, and a comparison between the Bopp and the Gagnebin editions, see Zehe. The following description is mainly based on the Bopp edition.

### Features of Fatio's theory

**Fatio's pyramid (Problem I).** Fatio assumed that the universe is filled by minute particles, which are moving indiscriminately with very high speed and rectilinearly in all directions. To illustrate his thoughts he used the following example: Suppose an object C, on which an infinitely small plane zz and a sphere centered about zz are drawn. Into this sphere Fatio placed the pyramid PzzQ, in which some particles are streaming in the direction of zz and also some particles, which were already reflected by C and therefore depart from zz. Fatio proposed that the mean velocity of the reflected particles is lower, and therefore their momentum is weaker than that of the incident particles. The result is one stream, which pushes all bodies in the direction of zz. While the speed of the stream remains constant, the density of the stream increases at closer proximity to zz, and therefore its intensity is proportional to 1/r². And because one can draw an infinite number of such pyramids around C, the proportionality applies to the entire range around C.

**Reduced speed.** In order to justify the assumption that the particles travel with diminished velocities after their reflection, Fatio stated the following assumptions:

• Either ordinary matter, or the gravific particles, or both are inelastic, or
• the impacts are fully elastic, but the particles are not absolutely hard, and therefore are in a state of vibration after the impact, and/or
• due to friction the particles begin to rotate after their impacts.

These passages are the most incomprehensible parts of Fatio's theory, because he never clearly decided which sort of collision he actually preferred. However, in the last version of his theory in 1742 he shortened the related passages and ascribed "perfect elasticity or spring force" to the particles, and on the other hand "imperfect elasticity" to gross matter; therefore the particles would be reflected with diminished velocities. Additionally, Fatio faced another problem: what happens if the particles collide with each other? Inelastic collisions would lead to a steady decrease of the particle speed and therefore a decrease of the gravitational force. To avoid this problem, Fatio supposed that the diameter of the particles is very small compared to their mutual distance, so their interactions are very rare.

**Condensation.** Fatio thought for a long time that, since corpuscles approach material bodies at a higher speed than they recede from them (after reflection), there would be a progressive accumulation of corpuscles near material bodies (an effect which he called "condensation"). However, he later realized that although the incoming corpuscles are quicker, they are spaced further apart than are the reflected corpuscles, so the inward and outward flow rates are the same. Hence there is no secular accumulation of corpuscles, i.e., the density of the reflected corpuscles remains constant. Fatio also noted that, by increasing both the velocity and the elasticity of the corpuscles, the difference between the speeds of the incoming and reflected corpuscles (and hence the difference in densities) can be made arbitrarily small while still maintaining the same effective gravitational force.

**Porosity of gross matter.** In order to ensure mass proportionality, Fatio assumed that gross matter is extremely permeable to the flux of corpuscles. He sketched three models to justify this assumption:

• He assumed that matter is an accumulation of small "balls", whereby their diameter compared with their distance among themselves is "infinitely" small. But he rejected this proposal, because under this condition the bodies would approach each other and therefore wouldn't remain stable.
• Then he assumed that the balls could be connected through bars or lines and would form some kind of crystal lattice.
However, he rejected this model too - if several atoms are together, the gravific fluid is not able to penetrate this structure equally in all directions, and therefore mass proportionality is impossible.

• In the end Fatio also removed the balls and left only the lines or the net. By making the lines "infinitely" smaller than their distance among themselves, a maximum penetration capacity could thereby be achieved.

**Pressure force of the particles (Problem II).** Already in 1690 Fatio assumed that the "push force" exerted by the particles on a plane surface is one sixth of the force which would be produced if all particles were lined up normal to the surface. Fatio now gave a proof of this proposal by determining the force exerted by the particles on a certain point zz. He derived the formula p = ρv²·zz/6. This solution is very similar to the formula known in the kinetic theory of gases, p = ρv²/3, which was found by Daniel Bernoulli in 1738. This was the first time that a solution analogous to the corresponding result in kinetic theory was pointed out - long before the basic concept of the latter theory was developed. However, Bernoulli's value is twice as large as Fatio's, because, according to Zehe, Fatio only calculated the value mv for the change of momentum after the collision, not 2mv, and therefore got the wrong result. (His result is only correct in the case of totally inelastic collisions.) Fatio tried to use his solution not only for explaining gravitation, but for explaining the behaviour of gases as well. He tried to construct a thermometer which should indicate the "state of motion" of the air molecules and therefore estimate the temperature. But Fatio (unlike Bernoulli) did not identify heat with the movements of the air particles - he used another fluid which should be responsible for this effect. It is also unknown whether Bernoulli was influenced by Fatio or not.

**Infinity (Problem III).** In this chapter Fatio examines the connections between the term "infinity" and his theory. Fatio often justified his considerations with the fact that different phenomena are "infinitely smaller or larger" than others, and so many problems can be reduced to an undetectable value. For example, the diameter of the bars is infinitely smaller than their distance to each other; or the speed of the particles is infinitely larger than that of gross matter; or the speed difference between reflected and non-reflected particles is infinitely small.

**Resistance of the medium (Problem IV).** This is the mathematically most complex part of Fatio's theory. There he tried to estimate the resistance of the particle streams for moving bodies. Suppose u is the velocity of gross matter, v is the velocity of the gravific particles, and ρ the density of the medium. In the case v ≪ u and ρ = const., Fatio stated that the resistance is ρu². In the case v ≫ u and ρ = const., the resistance is 4/3·ρuv. Now, Newton stated that the lack of resistance to orbital motion requires an extreme sparseness of any medium in space. So Fatio decreased the density of the medium and stated that, to maintain sufficient gravitational force, this reduction must be compensated by changing v "inversely proportional to the square root of the density". This follows from Fatio's particle pressure, which is proportional to ρv². According to Zehe, Fatio's attempt to increase v to a very high value would actually leave the resistance very small compared with gravity, because the resistance in Fatio's model is proportional to ρuv but gravity (i.e. the particle pressure) is proportional to ρv².
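This resistance-versus-gravity tradeoff is easy to illustrate numerically (my own toy sketch, with assumed numbers; u is an Earth-like orbital speed, and the larger v matches the 10^5 times light speed figure Le Sage later adopted, mentioned below):

```python
# Toy scaling check (not Fatio's own computation): drag on a moving body
# scales as rho*u*v, while the gravity-producing particle pressure scales
# as rho*v**2, so drag/pressure ~ u/v shrinks as v grows.
u = 3.0e4                 # orbital speed of a planet, m/s (Earth-like)
c = 3.0e8                 # speed of light, m/s

for v in [c, 1.0e5 * c]:  # corpuscle speeds: light speed, then 1e5 * c
    ratio = (4.0 / 3.0) * u * v / v**2   # rho cancels out of the ratio
    print(v, ratio)       # the ratio falls from ~1e-4 to ~1e-9
```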
### Reception of Fatio's theory

Fatio was in communication with some of the most famous scientists of his time. There was a strong personal relationship between Isaac Newton and Fatio in the years 1690 to 1693. Newton's statements on Fatio's theory differed widely. For example, after describing the necessary conditions for a mechanical explanation of gravity, he wrote in an (unpublished) note in his own printed copy of the Principia in 1692: "The unique hypothesis by which gravity can be explained is however of this kind, and was first devised by the most ingenious geometer Mr. N. Fatio." On the other hand, Fatio himself stated that although Newton had commented privately that Fatio's theory was the best possible mechanical explanation of gravity, he also acknowledged that Newton tended to believe that the true explanation of gravitation was not mechanical. Also, Gregory noted in his "Memoranda": "Mr. Newton and Mr. Halley laugh at Mr. Fatio's manner of explaining gravity." This was allegedly noted by him on December 28, 1691. However, the real date is unknown, because both the ink and the quill which were used differ from the rest of the page. After 1694, the relationship between the two men cooled down.

Christiaan Huygens was the first person informed by Fatio of his theory, but he never accepted it. Fatio believed he had convinced Huygens of the consistency of his theory, but Huygens denied this in a letter to Gottfried Leibniz. There was also a short correspondence between Fatio and Leibniz on the theory. Leibniz criticized Fatio's theory for demanding empty space between the particles, which he (Leibniz) rejected on philosophical grounds.

Jakob Bernoulli expressed an interest in Fatio's theory, and urged Fatio to write his thoughts on gravitation in a complete manuscript, which Fatio actually did. Bernoulli then copied the manuscript, which now resides in the university library of Basel and was the basis of the Bopp edition.

Nevertheless, Fatio's theory remained largely unknown - with a few exceptions like Cramer and Le Sage - because he was never able to formally publish his works, and he fell under the influence of a group of religious fanatics called the "French prophets" (who belonged to the Camisards); his public reputation was therefore ruined.

## Cramer and Redeker

In 1731 the Swiss mathematician Gabriel Cramer published a dissertation, at the end of which appeared a sketch of a theory very similar to Fatio's - including the net structure of matter, the analogy to light, and shading - but without mentioning Fatio's name. It was known to Fatio that Cramer had access to a copy of his main paper, so he accused Cramer of merely repeating his theory without understanding it. It was also Cramer who informed Le Sage about Fatio's theory, in 1749. In 1736 the German physician Franz Albert Redeker also published a similar theory. Any connection between Redeker and Fatio is unknown.

## Le Sage

The first exposition of his theory, Essai sur l'origine des forces mortes, was sent by Le Sage to the Academy of Sciences at Paris in 1748, but it was never published. According to Le Sage, after creating and sending his essay he was informed of the theories of Fatio, Cramer and Redeker.
In 1756, one of his expositions of the theory was published for the first time, and in 1758 he sent a more detailed exposition, Essai de Chymie Méchanique, to a competition of the Academy of Sciences in Rouen. In this paper he tried to explain both the nature of gravitation and chemical affinities. The exposition which became accessible to a broader public was Lucrèce Newtonien (1784), in which the correspondence with Lucretius' concepts was fully developed. Another exposition of the theory was published from Le Sage's notes posthumously by Pierre Prévost in 1818.

### Le Sage's basic concept

Le Sage discussed the theory in great detail and proposed quantitative estimates for some of the theory's parameters.

• He called the gravitational particles ultramundane corpuscles, because he supposed them to originate beyond our known universe. The distribution of the ultramundane flux is isotropic, and the laws of its propagation are very similar to those of light.

• Le Sage argued that no gravitational force would arise if the matter-particle collisions were perfectly elastic. So he proposed that the particles and the basic constituents of matter are "absolutely hard", and asserted that this implies a complicated form of interaction: completely inelastic in the direction normal to the surface of ordinary matter, and perfectly elastic in the direction tangential to the surface. He then commented that this implies the mean speed of scattered particles is 2/3 of their incident speed. To avoid inelastic collisions between the particles, he supposed that their diameter is very small relative to their mutual distance.

• The resistance of the flux is proportional to uv (where v is the velocity of the particles and u that of gross matter), while gravity is proportional to v², so the ratio resistance/gravity can be made arbitrarily small by increasing v. Therefore he suggested that the ultramundane corpuscles might move at the speed of light, but after further consideration he adjusted this to 10^5 times the speed of light.

• To maintain mass proportionality, ordinary matter consists of cage-like structures, in which the diameter of the bars is only the 10^7th part of their mutual distance. The "bars" which constitute the cages were also thin (around 10^20 times as long as thick) relative to the dimensions of the cages, so the particles can travel through them nearly unhindered.

• Le Sage also attempted to use the shadowing mechanism to account for the forces of cohesion, and for forces of different strengths, by positing the existence of multiple species of ultramundane corpuscles of different sizes, as illustrated in Figure 9.

Le Sage said that he was the first one who drew all the consequences from the theory, and Prévost likewise said that Le Sage's theory was more developed than Fatio's. However, by comparing the two theories, and after a detailed analysis of Fatio's papers (which were also in Le Sage's possession), Zehe judged that Le Sage contributed nothing essentially new and often did not reach Fatio's level.

### Reception of Le Sage's theory

Le Sage's ideas were not well received during his day, except for some of his friends and associates like Pierre Prévost, Charles Bonnet, Jean-André Deluc and Simon Lhuilier.
They mentioned and described Le Sage's theory in their books and papers, which were used by their contemporaries as a secondary source for Le Sage's theory (because of the lack of published papers by Le Sage himself).

**Euler, Bernoulli, and Boscovich.** Leonhard Euler once remarked that Le Sage's model was "infinitely better" than those of all other authors, and that all objections are balanced out in this model, but later he said the analogy to light had no weight for him, because he believed in the wave nature of light. After further consideration, Euler came to disapprove of the model, and he wrote to Le Sage: "You must excuse me Sir, if I have a great repugnance for your ultramundane corpuscles, and I shall always prefer to confess my ignorance of the cause of gravity than to have recourse to such strange hypotheses."

Daniel Bernoulli was pleased by the similarity between Le Sage's model and his own thoughts on the nature of gases. However, Bernoulli himself was of the opinion that his own kinetic theory of gases was only a speculation, and likewise he regarded Le Sage's theory as highly speculative.

Roger Joseph Boscovich pointed out that Le Sage's theory is the first one which actually can explain gravity by mechanical means. However, he rejected the model because of the enormous and unused quantity of ultramundane matter. John Playfair described Boscovich's arguments by saying: "An immense multitude of atoms, thus destined to pursue their never ending journey through the infinity of space, without changing their direction, or returning to the place from which they came, is a supposition very little countenanced by the usual economy of nature. Whence is the supply of these innumerable torrents; must it not involve a perpetual exertion of creative power, infinite both in extent and in duration?" A very similar argument was later given by Maxwell (see the sections below). Additionally, Boscovich denied the existence of all contact and immediate impulse, and instead proposed repulsive and attractive actions at a distance.

**Lichtenberg, Kant, and Schelling.** Georg Christoph Lichtenberg's knowledge of Le Sage's theory was based on "Lucrece Newtonien" and a summary by Prévost. Lichtenberg originally believed (like Descartes) that every explanation of natural phenomena must be based on rectilinear motion and impulsion, and Le Sage's theory fulfilled these conditions. In 1790 he expressed in one of his papers his enthusiasm for the theory, believing that Le Sage's theory embraces all of our knowledge and makes any further dreaming on that topic useless. He went on by saying: "If it is a dream, it is the greatest and the most magnificent which was ever dreamed...", and that we can fill with it a gap in our books which can only be filled by a dream. He often referred to Le Sage's theory in his lectures on physics at the University of Göttingen. However, around 1796 Lichtenberg changed his views after being persuaded by the arguments of Immanuel Kant, who criticized any kind of theory that attempted to replace attraction with impulsion. Kant pointed out that the very existence of spatially extended configurations of matter, such as particles of non-zero radius, implies the existence of some sort of binding force to hold the extended parts of the particle together. Now, that force cannot be explained by the push from the gravitational particles, because those particles too must hold together in the same way. To avoid this circular reasoning, Kant asserted that there must exist a fundamental attractive force.
This was precisely the same objection that had always been raised against the impulse doctrine of Descartes in the previous century, and had led even the followers of Descartes to abandon that aspect of his philosophy. Another German philosopher, Friedrich Wilhelm Joseph Schelling, rejected Le Sage's model because its mechanistic materialism was incompatible with Schelling's very idealistic and anti-materialistic philosophy.

**Laplace.** Partly in consideration of Le Sage's theory, Pierre-Simon Laplace undertook to determine the necessary speed of gravity in order to be consistent with astronomical observations. He calculated that the speed must be "at least a hundred millions of times greater than that of light" in order to avoid unacceptably large inequalities due to aberration effects in the lunar motion. This was taken by most researchers, including Laplace, as support for the Newtonian concept of instantaneous action at a distance, and as an indication of the implausibility of any model such as Le Sage's. Laplace also argued that, to maintain mass proportionality, the upper limit for the Earth's molecular surface area is at most the ten-millionth part of the Earth's surface. To Le Sage's disappointment, Laplace never directly mentioned Le Sage's theory in his works.

## Kinetic theory

Because the theories of Fatio, Cramer and Redeker were not widely known, Le Sage's exposition of the theory enjoyed a resurgence of interest in the latter half of the nineteenth century, coinciding with the development of the kinetic theory.

**Leray.** Since Le Sage's particles must lose speed when colliding with ordinary matter (in order to produce a net gravitational force), a huge amount of energy must be converted to internal energy modes. If those particles have no internal energy modes, the excess energy can only be absorbed by ordinary matter. Addressing this problem, P. Leray proposed a particle model (perfectly similar to Le Sage's) in which he asserted that the absorbed energy is used by the bodies to produce magnetism and heat. He suggested that this might be an answer to the question of where the energy output of the stars comes from.

**Kelvin and Tait.** Le Sage's own theory became a subject of renewed interest in the latter part of the 19th century following a paper published by Kelvin in 1873. Unlike Leray, who treated the heat problem imprecisely, Kelvin stated that the absorbed energy represents a very great heat, sufficient to vaporize any object in a fraction of a second. So Kelvin reiterated an idea that Fatio had originally proposed in the 1690s for attempting to deal with the thermodynamic problem inherent in Le Sage's theory. He proposed that the excess heat might be absorbed by internal energy modes of the particles themselves, based on his proposal of the vortex nature of matter. In other words, the original translational kinetic energy of the particles is transferred to internal energy modes, chiefly vibrational or rotational, of the particles. Appealing to Clausius's proposition that the energy in any particular mode of a gas molecule tends toward a fixed ratio of the total energy, Kelvin went on to suggest that the energized but slower-moving particles would subsequently be restored to their original condition due to collisions (on the cosmological scale) with other particles. Kelvin also asserted that it would be possible to extract limitless amounts of free energy from the ultramundane flux, and described a perpetual motion machine to accomplish this.
(The flaw in Kelvin's reasoning was that Clausius's proposition would apply only if ordinary matter were in thermodynamic equilibrium with the ultramundane flux - in which case there would be no net gravitational effect.)

Subsequently, Peter Guthrie Tait called the Le Sage theory the only plausible explanation of gravitation which had been propounded at that time. He went on by saying: "The most singular thing about it is that, if it be true, it will probably lead us to regard all kinds of energy as ultimately Kinetic."

Kelvin himself, however, was not optimistic that Le Sage's theory could ultimately give a satisfactory account of phenomena. After his brief paper of 1873 noted above, he never returned to the subject, except to make the following comment: "This kinetic theory of matter is a dream, and can be nothing else, until it can explain chemical affinity, electricity, magnetism, gravitation, and the inertia of masses (that is, crowds) of vortices. Le Sage's theory might give an explanation of gravity and of its relation to inertia of masses, on the vortex theory, were it not for the essential aeolotropy of crystals, and the seemingly perfect isotropy of gravity. No finger post pointing towards a way that can possibly lead to a surmounting of this difficulty, or a turning of its flank, has been discovered, or imagined as discoverable."

**Preston.** Samuel Tolver Preston illustrated that many of the postulates introduced by Le Sage concerning the gravitational particles - such as rectilinear motion, rare interactions, etc. - could be collected under the single notion that they behave (on the cosmological scale) as the particles of a gas with an extremely long mean free path. Preston also accepted Kelvin's proposal of internal energy modes of the particles. He illustrated Kelvin's model by comparing it with the collision of a steel ring and an anvil: the anvil would not be shaken very much, but the steel ring would be set in a state of vibration and would therefore depart with diminished velocity. He also argued that the mean free path of the particles is at least the distance between the planets - over longer distances the particles regain their translational energy due to collisions with each other - so he concluded that over longer distances there would be no attraction between bodies, independent of their size. Paul Drude suggested that this could possibly be a connection with some theories of Carl Gottfried Neumann and Hugo von Seeliger, who proposed some sort of absorption of gravity in open space.

**Maxwell.** A review of the Kelvin-Le Sage theory was published by James Clerk Maxwell in the Ninth Edition of the Encyclopaedia Britannica under the title "Atom" in 1875. After describing the basic concept of the theory he wrote (with sarcasm, according to Aronson): "Here, then, seems to be a path leading towards an explanation of the law of gravitation, which, if it can be shown to be in other respects consistent with facts, may turn out to be a royal road into the very arcana of science."

Maxwell commented, regarding Kelvin's suggestion of different energy modes of the particles, that this implies the gravitational particles are not simple primitive entities, but rather systems with their own internal energy modes, which must be held together by (unexplained) forces of attraction.
He argues that the temperature of bodies must tend to approach that at which the average kinetic energy of a molecule of the body would be equal to the average kinetic energy of an ultramundane particle. He states that the latter quantity must be much greater than the former, and concludes that ordinary matter should be incinerated within seconds under the Le Sage bombardment. He wrote:

We have devoted more space to this theory than it seems to deserve, because it is ingenious, and because it is the only theory of the cause of gravitation which has been so far developed as to be capable of being attacked and defended.

Maxwell also argued that the theory requires "an enormous expenditure of external power" and would therefore violate the conservation of energy as the fundamental principle of nature. Preston responded to Maxwell's criticism by arguing that the kinetic energy of each individual simple particle could be made arbitrarily low by positing a sufficiently low mass (and higher number density) for the particles. But this issue was later discussed in a more detailed way by Poincaré, who showed that the thermodynamic problem within Le Sage models remained unresolved.

### Isenkrahe, Rysanek, du Bois-Reymond

Caspar Isenkrahe presented his model in a variety of publications between 1879 and 1915. His basic assumptions were very similar to those of Le Sage and Preston, but he gave a more detailed application of the kinetic theory. However, by asserting that the velocity of the corpuscles after collision was reduced without any corresponding increase in the energy of any other object, his model violated the conservation of energy. He noted that there is a connection between the weight of a body and its density (because any decrease in the density of an object reduces the internal shielding), so he went on to assert that warm bodies should be heavier than colder ones (related to the effect of thermal expansion).

In another model A. Rysanek in 1887 also gave a careful analysis, including an application of Maxwell's law of the particle velocities in a gas. He distinguished between a gravitational and a luminiferous aether. This separation of the two mediums was necessary, because according to his calculations the absence of any drag effect in the orbit of Neptune implies a lower limit for the particle velocity of $5 \cdot 10^{19}$ cm/sec. He (like Leray) argued that the absorbed energy is converted into heat, which might be transferred into the luminiferous aether and/or be used by the stars to maintain their energy output. However, these qualitative suggestions were unsupported by any quantitative evaluation of the amount of heat actually produced.

In 1888 Paul David Gustav du Bois-Reymond argued against Le Sage's model, partly because the predicted force of gravity in Le Sage's theory is not strictly proportional to mass. In order to achieve exact mass proportionality as in Newton's theory (which implies no shielding or saturation effects and an infinitely porous structure of matter), the ultramundane flux must be infinitely intense. Du Bois-Reymond rejected this as absurd. In addition, du Bois-Reymond, like Kant, observed that Le Sage's theory cannot meet its goal, because it invokes concepts like "elasticity" and "absolute hardness" etc., which (in his opinion) can only be explained by means of attractive forces. The same problem arises for the cohesive forces in molecules. As a result, the basic intent of such models, which is to dispense with elementary forces of attraction, is impossible.
## Wave models

### Keller and Boisbaudran

In 1863 F.A.E. and Em. Keller presented a theory using a Le Sage type mechanism in combination with longitudinal waves of the aether. They supposed that those waves propagate in every direction and lose some of their momentum after impact on bodies, so that between two bodies the pressure exerted by the waves is weaker than the pressure around them. In 1869 L. de Boisbaudran presented the same model as Leray (including absorption and the production of heat etc.), but like Keller he replaced the particles with longitudinal waves of the aether.

### Lorentz

After these attempts, other authors in the early 1900s substituted electromagnetic radiation for Le Sage's particles. This was in connection with the Lorentz ether theory and the electron theory of that time, in which the electrical constitution of matter was assumed. In 1900 Hendrik Lorentz wrote that Le Sage's particle model is not consistent with the electron theory of his time. But the realization that trains of electromagnetic waves could produce some pressure, in combination with the penetrating power of Röntgen rays (now called x-rays), led him to the conclusion that nothing speaks against the possible existence of an even more penetrating radiation than x-rays, which could replace Le Sage's particles. Lorentz showed that an attractive force between charged particles (which might be taken to model the elementary subunits of matter) would indeed arise, but only if the incident energy were entirely absorbed. This was the same fundamental problem which had afflicted the particle models. So Lorentz wrote:

The circumstance however, that this attraction could only exist, if in some way or other electromagnetic energy were continually disappearing, is so serious a difficulty, that what has been said cannot be considered as furnishing an explanation of gravitation. Nor is this the only objection that can be raised. If the mechanism of gravitation consisted in vibrations which cross the aether with the velocity of light, the attraction ought to be modified by the motion of the celestial bodies to a much larger extend than astronomical observations make it possible to admit.

In 1922 Lorentz first examined Martin Knudsen's investigation on rarefied gases, and in connection with that he discussed Le Sage's particle model, followed by a summary of his own electromagnetic Le Sage model - but he repeated his conclusion from 1900: without absorption, no gravitational effect. In 1913 David Hilbert referred to Lorentz's theory and criticised it by arguing that no force of the form 1/r² can arise if the mutual distance of the atoms is large enough when compared with their wavelength.

### J. J. Thomson

In 1904 J. J. Thomson considered a Le Sage-type model in which the primary ultramundane flux consisted of a hypothetical form of radiation much more penetrating even than x-rays. He argued that Maxwell's heat problem might be avoided by assuming that the absorbed energy is not converted into heat, but re-radiated in a still more penetrating form. He noted that this process could possibly explain where the energy of radioactive substances comes from - however, he stated that an internal cause of radioactivity is more probable. In 1911 Thomson went back to this subject in his article "Matter" in the Encyclopædia Britannica Eleventh Edition.
There he stated that this form of secondary radiation is somewhat analogous to how the passage of electrified particles through matter causes the radiation of the even more penetrating x-rays. He remarked:

It is a very interesting result of recent discoveries that the machinery which Le Sage introduced for the purpose of his theory has a very close analogy with things for which we have now direct experimental evidence....Röntgen rays, however, when absorbed do not, as far as we know, give rise to more penetrating Rontgen rays as they should to explain attraction, but either to less penetrating rays or to rays of the same kind.

### Tommasina and Brush

Unlike Lorentz and Thomson, Thomas Tommasina between 1903 and 1928 suggested long-wavelength radiation to explain gravity, and short-wavelength radiation for explaining the cohesive forces of matter. Charles F. Brush in 1911 also proposed long-wavelength radiation. But he later revised his view and changed to extremely short wavelengths.

## Later assessments

### Darwin

In 1905 George Darwin calculated the gravitational force between two bodies at extremely close range, to determine whether geometrical effects would lead to a deviation from Newton's law. Here Darwin replaced Le Sage's cage-like units of ordinary matter with microscopic hard spheres of uniform size. He concluded that only in the instance of perfectly inelastic collisions (zero reflection) would Newton's law stand up, thus reinforcing the thermodynamic problem of Le Sage's theory. Also, such a theory is only valid if the normal and the tangential components of impact are totally inelastic (contrary to Le Sage's scattering mechanism), and the elementary particles are exactly of the same size. He went on to say that the emission of light is the exact converse of the absorption of Le Sage's particles. A body with different surface temperatures will move in the direction of the colder part. In a later review of gravitational theories Darwin briefly described Le Sage's theory and said he gave the theory serious consideration, but then wrote:

I will not refer further to this conception, save to say that I believe that no man of science is disposed to accept it as affording the true road.

### Poincaré

Partially based on the calculations of Darwin, an important criticism was given by Henri Poincaré in 1908. He concluded that the attraction is proportional to $S\sqrt{\rho}\,v$, where $S$ is the earth's molecular surface area, $v$ is the velocity of the particles, and $\rho$ is the density of the medium. Following Laplace, he argued that to maintain mass proportionality the upper limit for $S$ is at most one ten-millionth of the earth's surface. Now, drag (i.e. the resistance of the medium) is proportional to $S\rho v$, and therefore the ratio of drag to attraction is inversely proportional to $Sv$. To reduce drag, Poincaré calculated a lower limit for $v$ of $24 \cdot 10^{17}$ times the speed of light. So there are lower limits for $Sv$ and $v$, and an upper limit for $S$; with those values one can calculate the heat produced, which is proportional to $S\rho v^3$. The calculation shows that the earth's temperature would rise by $10^{26}$ degrees per second. Poincaré noticed "that the earth could not long stand such a regime." Poincaré also analyzed some wave models (Tommasina and Lorentz), remarking that they suffered the same problems as the particle models. To reduce drag, superluminal wave velocities were necessary, and they would still be subject to the heating problem.
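To make the scaling argument explicit, here is a short reconstruction from the proportionalities quoted above; the bookkeeping constant $k$ is an editorial addition and not in the source. If the observed gravitational force $F$ is to be held fixed, then

$$F = k\,S\sqrt{\rho}\,v \quad\Longrightarrow\quad \sqrt{\rho} = \frac{F}{k\,S\,v},$$

so that

$$\frac{\text{drag}}{\text{attraction}} \;\propto\; \frac{S\rho v}{S\sqrt{\rho}\,v} \;=\; \sqrt{\rho} \;\propto\; \frac{1}{Sv}, \qquad \text{heating} \;\propto\; S\rho v^{3} \;\propto\; \frac{v}{S}.$$

Driving the drag down thus forces $v$ up, and since $S$ is bounded above by the mass-proportionality argument, the heating grows without bound - which is exactly the trap Poincaré describes.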
After describing a re-radiation model similar to Thomson's, he concluded: "Such are the complicated hypotheses to which we are led when we seek to make Le Sage's theory tenable". He also stated that if in Lorentz's model the absorbed energy were fully converted into heat, this would raise the earth's temperature by $10^{13}$ degrees per second. Poincaré then went on to consider Le Sage's theory in the context of the "new dynamics" that had been developed at the end of the 19th and beginning of the 20th centuries, specifically recognizing the relativity principle. For a particle theory he remarked that "it is difficult to imagine a law of collision compatible with the principle of relativity", and the problems of drag and heating remain.

## Predictions and criticism

### Matter and particles

#### Porosity of matter

A basic prediction of the theory is the extreme porosity of matter. As supposed by Fatio and Le Sage in 1690/1758 (and before them, Huygens), matter must consist mostly of empty space, so that the very small particles can penetrate bodies nearly undisturbed and therefore every single part of matter can take part in the gravitational interaction. This prediction has been (in some respects) confirmed over the course of time. Indeed, matter consists mostly of empty space, and certain particles like neutrinos can pass through matter nearly unhindered. However, the image of elementary particles as classical entities that interact directly, determined by their shapes and sizes (in the sense of the net structure proposed by Fatio/Le Sage and the equisized spheres of Isenkrahe/Darwin), is not consistent with current understanding of elementary particles. The Lorentz/Thomson proposal of electrically charged particles as the basic constituents of matter is inconsistent with current physics as well.

#### Cosmic radiation

Every Le Sage-type model assumes the existence of a space-filling isotropic flux or radiation of enormous intensity and penetrating capability. This has some similarity to the cosmic microwave background radiation (CMBR) discovered in the 20th century. CMBR is indeed a space-filling and fairly isotropic flux, but its intensity is extremely small, as is its penetrating capability. The flux of neutrinos, emanating from (for example) the sun, possesses the penetrating properties envisaged by Le Sage for his ultramundane corpuscles, but this flux is not isotropic (since individual stars are the main sources of neutrinos) and the intensity is even less than that of the CMBR. Of course, neither the CMBR nor neutrinos propagate at superluminal speeds, which is another necessary attribute of Le Sage's particles. From a more modern point of view, discarding the simple "push" concept of Le Sage, the suggestion that the neutrino (or some other particle similar to the neutrino) might be the mediating particle in a quantum field theory of gravitation was considered and disproved by Feynman.

### Gravitational shielding

Although matter is postulated to be very sparse in the Fatio-Le Sage theory, it cannot be perfectly transparent, because in that case no gravitational force would exist. However, the lack of perfect transparency leads to problems: with sufficient mass, the amount of shading produced by two pieces of matter becomes less than the sum of the shading that each of them would produce separately, due to the overlap of their shadows (P10, above).
This hypothetical effect, called gravitational shielding, implies that addition of matter does not result in a directly proportional increase in the gravitational mass. Therefore, in order to be viable, Fatio and Le Sage postulated that the shielding effect is so small as to be undetectable, which requires that the interaction cross-section of matter be extremely small (P10, below). This places an extremely high lower bound on the intensity of the flux required to produce the observed force of gravity. According to standard physics, any form of gravitational shielding is a violation of the equivalence principle and therefore is inconsistent with general relativity. For more historical information on the connection between gravitational shielding and Le Sage gravity, see Martins, and Borzeszkowski et al.

Since Isenkrahe's proposal on the connection between density, temperature and weight was based purely on the anticipated effects of changes in material density, and since temperature at a given density can be increased or decreased, Isenkrahe's comments do not imply any fundamental relation between temperature and gravitation. (There actually is a relation between temperature and gravitation, as well as between binding energy and gravitation, but these actual effects have nothing to do with Isenkrahe's proposal. See the section below on "Coupling to energy".) Regarding the prediction of a relation between gravitation and density, all experimental evidence indicates that there is no such relation.

### Speed of gravity

#### Drag

According to Le Sage's theory, an isolated body is subjected to drag if it is in motion relative to the unique isotropic frame of the ultramundane flux (i.e., the frame in which the speed of the ultramundane corpuscles is the same in all directions). This is due to the fact that, if a body is in motion, the particles striking the body from the front have a higher speed (relative to the body) than those striking the body from behind - this effect will act to decrease the distance between the sun and the earth. The magnitude of this drag is proportional to $vu$, where $v$ is the speed of the particles and $u$ is the speed of the body, whereas the characteristic force of gravity is proportional to $v^2$, so the ratio of drag to gravitational force is proportional to $u/v$. Thus for a given characteristic strength of gravity, the amount of drag for a given speed $u$ can be made arbitrarily small by increasing the speed $v$ of the ultramundane corpuscles. However, in order to reduce the drag to an acceptable level (i.e., consistent with observation) in terms of classical mechanics, the speed $v$ must be many orders of magnitude greater than the speed of light. This makes Le Sage theory fundamentally incompatible with the modern science of mechanics based on special relativity, according to which no particle (or wave) can exceed the speed of light. In addition, even if superluminal particles were possible, the effective temperature of such a flux would be sufficient to incinerate all ordinary matter in a fraction of a second.

#### Aberration

As shown by Laplace, another possible Le Sage effect is orbital aberration due to the finite speed of gravity. Unless the Le Sage particles are moving at speeds much greater than the speed of light, as Le Sage and Kelvin supposed, there is a time delay in the interactions between bodies (the transit time). In the case of orbital motion this results in each body reacting to a retarded position of the other, which creates a leading force component.
Contrary to the drag effect, this component will act to accelerate both objects away from each other. In order to maintain stable orbits, the effect of gravity must either propagate much faster than the speed of light or must not be a purely central force. This has been suggested by many as a conclusive disproof of any Le Sage type of theory. In contrast, general relativity is consistent with the lack of appreciable aberration identified by Laplace, because even though gravity propagates at the speed of light in general relativity, the expected aberration is almost exactly cancelled by velocity-dependent terms in the interaction.

### Range of gravity

In many particle models, such as Kelvin's, the range of gravity is limited due to the nature of particle interactions amongst themselves. The range is effectively determined by the rate at which the proposed internal modes of the particles can eliminate the momentum defects (shadows) that are created by passing through matter. Such predictions as to the effective range of gravity will vary, depending upon the specific aspects and assumptions as to the modes of interaction that are available during particle interactions. However, for this class of models the observed large-scale structure of the cosmos constrains such dispersion to rates that allow for the aggregation of such immense gravitational structures.

### Energy

#### Absorption

As noted in the historical section, a major problem for every Le Sage model is the energy and heat issue. As Maxwell and Poincaré showed, inelastic collisions lead to a vaporization of matter within fractions of a second, and the suggested solutions were not convincing. For example, Aronson gave a simple proof of Maxwell's assertion:

Suppose that, contrary to Maxwell's hypothesis, the molecules of gross matter actually possess more energy than the particles. In that case the particles would, on the average, gain energy in the collision and the particles intercepted by body B would be replaced by more energetic ones rebounding from body B. Thus the effect of gravity would be reversed: there would be a mutual repulsion between all bodies of mundane matter, contrary to observation. If, on the other hand, the average kinetic energies of the particles and of the molecules are the same, then no net transfer of energy would take place, and the collisions would be equivalent to elastic ones, which, as has been demonstrated, do not yield a gravitational force.

Likewise, Isenkrahe's violation of the energy conservation law is unacceptable, and Kelvin's application of Clausius' theorem leads (as noted by Kelvin himself) to some sort of perpetual motion mechanism. The suggestion of a secondary re-radiation mechanism for wave models attracted the interest of J. J. Thomson, but was not taken very seriously by either Maxwell or Poincaré, because it entails a gross violation of the second law of thermodynamics (huge amounts of energy spontaneously being converted from a colder to a hotter form), which is one of the most solidly established of all physical laws.

The energy problem has also been considered in relation to the idea of mass accretion in connection with the Expanding Earth theory. Among the early theorists to link mass increase in some sort of push gravity model to Earth expansion were Yarkovsky and Hilgenberg. The idea of mass accretion and the expanding earth theory are not currently considered to be viable by mainstream scientists.
This is because, among other reasons, according to the principle of mass-energy equivalence, if the Earth were absorbing the energy of the ultramundane flux at the rate necessary to produce the observed force of gravity (i.e. using the values calculated by Poincaré), its mass would be doubling in each fraction of a second.

#### Coupling to energy

Based on observational evidence, it is now known that gravity interacts with all forms of energy, and not just with mass. The electrostatic binding energy of the nucleus, the energy of weak interactions in the nucleus, and the kinetic energy of electrons in atoms all contribute to the gravitational mass of an atom, as has been confirmed to high precision in Eötvös-type experiments. This means, for example, that when the atoms of a quantity of gas are moving more rapidly, the gravitation of that gas increases. Le Sage's theory does not predict any such effect, nor does any of the known variants of Le Sage's theory.

## Non-Gravitational applications and analogies

### Mock gravity

Lyman Spitzer in 1941 calculated that absorption of radiation between two dust particles leads to a net attractive force which varies in proportion to 1/r² (evidently he was unaware of Le Sage's shadow mechanism and especially Lorentz's considerations on radiation pressure and gravity). George Gamow, who called this effect "mock gravity", proposed in 1949 that after the big bang the temperature of the electrons dropped faster than the temperature of the background radiation. Absorption of the radiation led to a Le Sage mechanism between the electrons, which might have had an important role in the process of galaxy formation shortly after the big bang. However, this proposal was disproved by Field in 1971, who showed that this effect was much too small, because electrons and the radiation were nearly in thermal equilibrium. Hogan and White proposed in 1986 that mock gravity might have influenced galaxy formation by absorption of pregalactic starlight. But it was shown by Wang and Field that any form of mock gravity is incapable of producing enough force to influence galaxy formation.

### Plasma

The Le Sage mechanism also has been identified as a significant factor in the behavior of dusty plasma. A. M. Ignatov has shown that an attractive force arises between two dust grains suspended in an isotropic collisionless plasma, due to inelastic collisions between ions of the plasma and the grains of dust. This attractive force is inversely proportional to the square of the distance between dust grains, and can counterbalance the Coulomb repulsion between dust grains.

### Vacuum energy

In quantum field theory the existence of virtual particles is proposed, which leads to the so-called Casimir effect. Casimir calculated that between two plates only particles with specific wavelengths should be counted when calculating the vacuum energy. Therefore the energy density between the plates is less when the plates are close together, leading to a net attractive force between the plates. However, the conceptual framework of this effect is very different from the theory of Fatio and Le Sage.

## Recent activity

The re-examination of Le Sage's theory in the 19th century identified several closely interconnected problems with the theory. These relate to excessive heating, frictional drag, shielding, and gravitational aberration. The recognition of these problems, in conjunction with a general shift away from mechanical theories, resulted in a progressive loss of interest in Le Sage's theory.
Ultimately in the twentieth century Le Sage's theory was eclipsed by Einstein's theory of general relativity.

In 1965 Richard Feynman examined the Fatio/Le Sage mechanism, primarily as an example of an attempt to explain a "complicated" physical law (in this case, Newton's inverse-square law of gravity) in terms of simpler primitive operations without the use of complex mathematics, and also as an example of a failed theory. He notes that the mechanism of "bouncing particles" reproduces the inverse-square force law and that "the strangeness of the mathematical relation will be very much reduced", but then notes that the scheme "does not work", because of the drag it predicts would be experienced by moving bodies, "so that is the end of that theory".

Although it is not regarded as a viable theory within the mainstream scientific community, there are occasional attempts to rehabilitate the theory outside the mainstream, including those of Radzievskii and Kagalnikova (1960), Shneiderov (1961), Buonomano and Engels (1976), Adamut (1982), Jaakkola (1996), Tom Van Flandern (1999), and Edwards (2007). A variety of Le Sage models and related topics are discussed in Edwards, et al.

## Secondary sources

• English summary of Prévost (1805).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9598684310913086, "perplexity_flag": "middle"}
http://dsp.stackexchange.com/questions/3505/doubt-on-weighted-least-square-estimation
# Doubt on Weighted Least Square Estimation

This is a page from the book Linear Algebra, Geodesy, and GPS by Gilbert Strang. The page explains the justification for using the inverse of the covariance matrix of the measurement vector $b$, in the overdetermined system $Ax = b$, as the best weight matrix for the best estimate of $x$.

1. What does the line "errors contained in the matrix" mean? (underlined in blue color)
2. How does the substitution marked with a blue curve take place?
3. Is $E\{rr^T\} = E\{bb^T\}$?

Any explanation is welcome. -

1 Do not cross-post the same question to multiple sites (or leave links). So far you have posted this here, on Cross Validated and Mathematica... If you keep doing this, you might end up getting suspended. – Lorem Ipsum Sep 29 '12 at 16:54

## 1 Answer

1. The linear model is $b=Ax+r$, so you can see the errors $r$ as the noise added to the measurements $b$.
2. The covariance $\Sigma_b=E\{(b-\bar{b})(b-\bar{b})^T\}=E\{(b-Ax)(b-Ax)^T\}=E\{rr^T\}$
3. $E\{rr^T\} \neq E\{bb^T\}$ unless $Ax=0$

-

1. Can you explain what this $\hat{b}$ is? Is it an estimated value like $\hat{x}$? If yes, where do you get it from? Or is it the mean of $b$? 2. How is $\hat{b} = Ax$? – rotating_image Sep 29 '12 at 13:00

Sorry, what I mean is $\bar{b}$, which is the mean of $b$. Since $r$ is assumed to be zero mean, then $\bar{b}=Ax$. – chaohuang Sep 29 '12 at 13:48

$E(b) = E(Ax + r)$... $E(b) = E(Ax) + E(r)$... $\bar{b} = AE(x)+ E(r)$... $\bar{b} = A\bar{x} + 0$... is $A\bar{x} = Ax$? – rotating_image Sep 29 '12 at 13:51

For BLUE, $x$ is deterministic. – chaohuang Sep 29 '12 at 14:15

I think for BLUE $E(x - \hat{x}) = 0$, not $\bar{x} = x$. – rotating_image Sep 29 '12 at 14:46
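Not part of the thread, but a minimal numerical sketch may help tie the pieces together: with weight matrix $W = \Sigma_b^{-1}$, the weighted least squares estimate is $\hat{x} = (A^T \Sigma_b^{-1} A)^{-1} A^T \Sigma_b^{-1} b$, the standard Gauss-Markov/BLUE estimator. The data below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined system b = A x + r with known noise covariance Sigma_b.
A = rng.normal(size=(20, 3))                        # 20 measurements, 3 unknowns
x_true = np.array([1.0, -2.0, 0.5])
Sigma_b = np.diag(rng.uniform(0.1, 2.0, size=20))   # heteroscedastic noise covariance
r = rng.multivariate_normal(np.zeros(20), Sigma_b)  # zero-mean errors, E{r r^T} = Sigma_b
b = A @ x_true + r

# Weighted least squares / BLUE with W = Sigma_b^{-1}:
#   x_hat = (A^T W A)^{-1} A^T W b
W = np.linalg.inv(Sigma_b)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

print("true x:      ", x_true)
print("WLS estimate:", x_hat)
```

Rows with noisier measurements get down-weighted by $\Sigma_b^{-1}$, which is the intuition behind choosing the inverse covariance as the weight matrix.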
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8889177441596985, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/89494?sort=newest
## Hilbert polynomial of an abelian scheme

This is coming out of Mumford's GIT, section 7.2, page 131. Let $A/S$ be an abelian scheme of dimension $g$ with polarization $\bar{\omega}$ of degree $d^2$. Then $\pi_*(L^\Delta(\bar{\omega})^3)$ is locally free on $S$ of rank $6^gd$, which defines the closed immersion $\varphi_3 : A \rightarrow \mathbb{P}(\pi_{*}(L^\Delta(\bar{\omega})^3))$. Equip this with a linear rigidification $\phi : \mathbb{P}(\pi_{*}(L^\Delta(\bar{\omega})^3)) \rightarrow \mathbb{P}_m \times S$ so that we get an embedding $I : A \rightarrow \mathbb{P}_m \times S$. Mumford then states the Hilbert polynomial of $I(A)$ is easily computed to be $P(X) = 6^gdX^g$. Exactly how does one go about finding this Hilbert polynomial? -

[edited only to insert "r" into the title's "Hilbert"] – Noam D. Elkies Feb 25 2012 at 20:05

## 1 Answer

Let us look at a single geometric fiber $X$. Let $L^\Delta(\bar\omega)|_X = \mathcal{O}_X(D)$. The Riemann-Roch theorem for abelian varieties (Mumford "Abelian Varieties", Chap. 3 Section 16) states that $$\chi(\mathcal{O}_X(D)) = D^g/g!$$ and moreover that $\chi(\mathcal{O}_X(D))^2 = \deg \phi$, where $\phi$ is the polarization map defined by $\mathcal{O}_X(D)$. So the Hilbert polynomial is $\chi(\mathcal{O}_X(3nD)) = 3^gn^gD^g/g! = 3^g n^g \chi(\mathcal{O}_X(D)) = 3^g n^g d$. I got almost the right answer (where did $2^g$ go?), so maybe I misunderstood the question, but I hope this is still helpful.

EDIT. Is the superscript $\Delta$ the symmetrization of the line bundle in question? Then it would explain why the $2^g$ above is missing... -

$L^\Delta(\bar{\omega}) = \Delta^*((1_A \times \bar{\omega})^*(P))$, $P$ the Poincare bundle, $\Delta$ the diagonal. – rghthndsd Feb 25 2012 at 18:47
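For what it's worth, given the definition in the comment above, the missing $2^g$ plausibly comes from the symmetrization, along the lines of the answer's edit. A sketch, under the assumption (not checked against Mumford's conventions) that on a geometric fiber $L^\Delta(\bar\omega)$ is algebraically equivalent to $M^2$ for a bundle $M$ inducing the polarization, so that $\chi(M) = d$: since $\chi(M^k) = k^g\,\chi(M)$ by Riemann-Roch,

$$\chi\big(L^\Delta(\bar\omega)\big)=\chi(M^2)=2^g d,\qquad \chi\big(L^\Delta(\bar\omega)^{3X}\big)=\chi\big(M^{6X}\big)=(6X)^g\,d=6^g d\,X^g,$$

which recovers Mumford's $P(X)=6^g d X^g$ and, at $X=1$, the rank $6^g d$ of $\pi_*(L^\Delta(\bar{\omega})^3)$.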
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8905699253082275, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/03/23/continuity-of-measures/
# The Unapologetic Mathematician

## Continuity of Measures

Again we start with definitions. An extended real-valued set function $\mu$ on a collection of sets $\mathcal{E}$ is "continuous from below" at a set $E\in\mathcal{E}$ if for every increasing sequence of sets $\{E_i\}\subseteq\mathcal{E}$ — that is, with each $E_i\subseteq E_{i+1}$ — for which $\lim_iE_i=E$ — remember that this limit can be construed as the infinite union of the sets in the sequence — we have $\lim_i\mu(E_i)=\mu(E)$. Similarly, $\mu$ is "continuous from above" at $E$ if for every decreasing sequence $\{E_i\}\subseteq\mathcal{E}$ for which $\lim_iE_i=E$ and which has $\lvert\mu(E_i)\rvert<\infty$ for at least one set in the sequence we have $\lim_i\mu(E_i)=\mu(E)$. Of course, as usual we say that $\mu$ is continuous from above (below) if it is continuous from above (below) at each set in its domain.

Now I say that a measure is continuous from above and below. First, if $\{A_i\}\subseteq\mathcal{A}$ is an increasing sequence whose limit is also in $\mathcal{A}$, then $\mu(\lim_iA_i)=\lim_i\mu(A_i)$. Let's define $A_0=\emptyset$ and calculate

$\displaystyle\begin{aligned}\mu\left(\lim\limits_{i\to\infty}A_i\right)&=\mu\left(\bigcup\limits_{i=1}^\infty A_i\right)\\&=\mu\left(\biguplus\limits_{i=1}^\infty\left(A_i\setminus A_{i-1}\right)\right)\\&=\sum\limits_{i=1}^\infty\mu\left(A_i\setminus A_{i-1}\right)\\&=\lim\limits_{n\to\infty}\sum\limits_{i=1}^n\mu\left(A_i\setminus A_{i-1}\right)\\&=\lim\limits_{n\to\infty}\mu\left(\biguplus\limits_{i=1}^n\left(A_i\setminus A_{i-1}\right)\right)\\&=\lim\limits_{n\to\infty}\mu\left(A_n\right)\end{aligned}$

where we've used countable (and finite) additivity to turn the disjoint union into a sum and back.

Next, if $\{A_i\}\subseteq\mathcal{A}$ is a decreasing sequence whose limit is also in $\mathcal{A}$, and if at least one of the $A_m$ has finite measure, then $\mu(\lim_iA_i)=\lim_i\mu(A_i)$. Indeed, if $A_m$ has finite measure then $\mu(A_n)\leq\mu(A_m)<\infty$ for all $n\geq m$ by monotonicity, and thus the limit must have finite measure as well. Now $\{A_m\setminus A_i\}$ is an increasing sequence, and we calculate

$\displaystyle\begin{aligned}\mu(A_m)-\mu\left(\lim\limits_{i\to\infty}A_i\right)&=\mu\left(A_m\setminus\lim\limits_{i\to\infty}A_i\right)\\&=\mu\left(\lim\limits_{i\to\infty}\left(A_m\setminus A_i\right)\right)\\&=\lim\limits_{i\to\infty}\mu\left(A_m\setminus A_i\right)\\&=\lim\limits_{i\to\infty}\left(\mu(A_m)-\mu(A_i)\right)\\&=\mu(A_m)-\lim\limits_{i\to\infty}\mu(A_i)\end{aligned}$

And thus a measure is continuous from above and from below.

On the other hand we have this partial converse: Let $\mu$ be a finite, non-negative, additive set function on an algebra $\mathcal{A}$. Then if $\mu$ either is continuous from below at every $A\in\mathcal{A}$ or is continuous from above at $\emptyset$, then $\mu$ is a measure. That is, either one of these continuity properties is enough to guarantee countable additivity.

Since $\mu$ is defined on an algebra, which is closed under finite unions, we can bootstrap from additivity to finite additivity. So let $\{A_i\}$ be a countably infinite sequence of pairwise disjoint sets in $\mathcal{A}$ whose (disjoint) union $A$ is also in $\mathcal{A}$, and define the two sequences in $\mathcal{A}$:

$\displaystyle B_n=\biguplus\limits_{i=1}^nA_i$

$\displaystyle C_n=A\setminus B_n$

If $\mu$ is continuous from below, $\{B_n\}$ is an increasing sequence converging to $A$.
We find

$\displaystyle\begin{aligned}\mu(A)&=\mu\left(\lim\limits_{n\to\infty}B_n\right)\\&=\lim\limits_{n\to\infty}\mu(B_n)\\&=\lim\limits_{n\to\infty}\mu\left(\biguplus\limits_{i=1}^nA_i\right)\\&=\lim\limits_{n\to\infty}\sum\limits_{i=1}^n\mu(A_i)\\&=\sum\limits_{i=1}^\infty\mu(A_i)\end{aligned}$

On the other hand, if $\mu$ is continuous from above at $\emptyset$, then $\{C_n\}$ is a decreasing sequence converging to $\emptyset$. We find

$\displaystyle\begin{aligned}\mu(A)&=\lim\limits_{n\to\infty}\mu(A)\\&=\lim\limits_{n\to\infty}\mu(B_n\uplus C_n)\\&=\lim\limits_{n\to\infty}\left(\mu(B_n)+\mu(C_n)\right)\\&=\lim\limits_{n\to\infty}\mu(B_n)+\lim\limits_{n\to\infty}\mu(C_n)\\&=\lim\limits_{n\to\infty}\mu\left(\biguplus\limits_{i=1}^nA_i\right)+\lim\limits_{n\to\infty}\mu(C_n)\\&=\lim\limits_{n\to\infty}\sum\limits_{i=1}^n\mu(A_i)+0\\&=\sum\limits_{i=1}^\infty\mu(A_i)\end{aligned}$

Posted by John Armstrong | Analysis, Measure Theory

## Comments

1. Very informative. Thanks! Comment by Tim Fortune | February 8, 2013 | Reply
2. Is there a way to find a counterexample for the monotonically decreasing sequence result, for $\mu(A_1) = \infty$? Comment by Alice | April 6, 2013 | Reply
3. I think you're slightly confused, Alice, which is fine; the condition is confusing. It states that, given a set $E\in\mathcal{E}$, a certain property must hold for all decreasing sequences starting from $E_1\in\mathcal{E}$ with finite measure that converge to $E$. It doesn't say anything about what must happen when a sequence that decreases to $E$ starts with a set of infinite measure; the property may or may not hold for such sequences, but that has no bearing on the continuity of the measure at $E$. Comment by | April 6, 2013 | Reply
4. Oh right, I think I mis-read this. I think I'm speaking of the property of the monotonicity of measure whereby the measure of the infinite intersection taken over a decreasing sequence is the limit of the measures of the sets $A_n$: $\mu\left(\bigcap_{n=1}^{\infty} A_n\right)= \lim_n \mu (A_n)$, where $A_{n+1} \subset A_n$.
So here, I'm asking for an example where the sequence starts with infinite measure, and this condition fails to hold. So sorry for this confusion! Comment by Alice | April 6, 2013 | Reply
5. Well, I could imagine cooking up a space with a countably infinite number of points assigned an infinite measure each; the sequence is $E$ along with all of them, and then removing one point at a time. The limit of the sequence is $E$ since each point will eventually be removed, but the measure of each set is infinite. I'm not really a measure theorist so I'm sort of waving my hands here at the idea that such a pathological space would be valid, but I think something along those lines should work. Comment by | April 6, 2013 | Reply
6. Would the Lebesgue measure on $[n, \infty)$ work? Comment by Alice | April 7, 2013 | Reply
7. Well, the trick is to get it so that the limit of the sequence has a finite measure. Otherwise the limit of the measures is the measure of the limit. I suppose in this case the limit is empty, so that might qualify. Comment by | April 7, 2013 | Reply
8. Well, the measure of the limit would be 0, which is definitely finite; I'm slightly dubious that the measure of the infinite intersection may not be different though...? Comment by Alice | April 7, 2013 | Reply
9. No, each of those sets has infinite measure, so the limit of the measures is infinite. But the intersection is empty (no real number can be in it), so the measure of the limit is zero. Comment by | April 8, 2013 | Reply
10. I see, thank you so much for clearing this up! Comment by Alice | April 8, 2013 | Reply
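To pin down the example from this exchange in the post's notation (an editorial summary, not part of the original thread): take Lebesgue measure $\mu$ on the real line and $A_n=[n,\infty)$. Then

$$A_{n+1}\subseteq A_n,\qquad\mu(A_n)=\infty\ \text{for all }n,\qquad\lim_{n\to\infty}A_n=\bigcap_{n=1}^\infty A_n=\emptyset,$$

so that

$$\mu\left(\lim_{n\to\infty}A_n\right)=0\neq\infty=\lim_{n\to\infty}\mu(A_n),$$

which is exactly why continuity from above demands that some $A_m$ have finite measure.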
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 57, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9558714032173157, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Shor's_algorithm
# Shor's algorithm

Shor's algorithm, named after mathematician Peter Shor, is a quantum algorithm (an algorithm which runs on a quantum computer) for integer factorization formulated in 1994. Informally it solves the following problem: Given an integer N, find its prime factors.

On a quantum computer, to factor an integer N, Shor's algorithm runs in polynomial time (the time taken is polynomial in log N, which is the size of the input).[1] Specifically it takes time $O((\log N)^3)$, demonstrating that the integer factorization problem can be efficiently solved on a quantum computer and is thus in the complexity class BQP. This is substantially faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time — about $O\left(e^{1.9 (\log N)^{1/3} (\log \log N)^{2/3}}\right)$.[2] The efficiency of Shor's algorithm is due to the efficiency of the quantum Fourier transform, and modular exponentiation by repeated squaring.

If a quantum computer with a sufficient number of qubits could operate without succumbing to noise and other quantum interference phenomena, Shor's algorithm could be used to break public-key cryptography schemes such as the widely used RSA scheme. RSA is based on the assumption that factoring large numbers is computationally infeasible. So far as is known, this assumption is valid for classical (non-quantum) computers; no classical algorithm is known that can factor in polynomial time. However, Shor's algorithm shows that factoring is efficient on an ideal quantum computer, so it may be feasible to defeat RSA by constructing a large quantum computer. It was also a powerful motivator for the design and construction of quantum computers and for the study of new quantum computer algorithms. It has also facilitated research on new cryptosystems that are secure from quantum computers, collectively called post-quantum cryptography.

In 2001, Shor's algorithm was demonstrated by a group at IBM, who factored 15 into 3 × 5, using an NMR implementation of a quantum computer with 7 qubits.[3] However, some doubts have been raised as to whether IBM's experiment was a true demonstration of quantum computation, since no entanglement was observed.[4] Since IBM's implementation, several other groups have implemented Shor's algorithm using photonic qubits, emphasizing that entanglement was observed.[5][6] In 2012, the factorization of 15 was repeated.[7] Also in 2012, the factorization of 21 was achieved, setting the record for the largest number factored with a quantum computer.[8]

## Procedure

The problem we are trying to solve is: given an odd composite number $N$, find an integer $d$, strictly between $1$ and $N$, that divides $N$. We are interested in odd values of $N$ because any even value of $N$ trivially has the number $2$ as a prime factor. We can use a primality testing algorithm to make sure that $N$ is indeed composite.

Moreover, for the algorithm to work, we need $N$ not to be the power of a prime. This can be tested by taking square, cubic, ..., $k$-th roots of $N$, for $k \le \log_{2}(N)$, and checking that none of these is an integer. (This actually excludes that $N = M^{k}$ for some integer $M$ and $k > 1$.)

Since $N$ is not a power of a prime, it is the product of two coprime numbers greater than $1$. As a consequence of the Chinese remainder theorem, the number $1$ has at least four distinct square roots modulo $N$, two of them being $1$ and $-1$.
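As a concrete illustration of this last point (an editorial example; $N = 15$ is also the number factored in the IBM experiment mentioned above): the congruence

$$x^2 \equiv 1 \pmod{15}$$

has the four solutions $x \equiv 1, 4, 11, 14 \pmod{15}$. Besides the trivial square roots $1$ and $14 \equiv -1$, there are the nontrivial roots $4$ and $11 \equiv -4$, and already $\gcd(4-1,\,15)=3$ and $\gcd(4+1,\,15)=5$ expose the prime factors.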
The aim of the algorithm is to find a square root $b$ of one, other than $1$ and $-1$; such a $b$ will lead to a factorization of $N$, as in other factoring algorithms like the quadratic sieve. In turn, finding such a $b$ is reduced to finding an element $a$ of even period with a certain additional property (as explained below, it is required that the condition of Step 6 of the classical part does not hold). The quantum algorithm is used for finding the period of randomly chosen elements $a$, as order-finding is a hard problem on a classical computer.

Shor's algorithm consists of two parts:

1. A reduction, which can be done on a classical computer, of the factoring problem to the problem of order-finding.
2. A quantum algorithm to solve the order-finding problem.

### Classical part

1. Pick a random number a < N.
2. Compute gcd(a, N). This may be done using the Euclidean algorithm.
3. If gcd(a, N) ≠ 1, then there is a nontrivial factor of N, so we are done.
4. Otherwise, use the period-finding subroutine (below) to find r, the period of the function $f(x) = a^x\ \mbox{mod}\ N$, i.e. the order $r$ of $a$ in $(\mathbb{Z}_N)^\times$, which is the smallest positive integer r for which $f(x+r) = f(x)$, or equivalently $a^{x+r}\ \mbox{mod}\ N=a^x\ \mbox{mod}\ N$.
5. If r is odd, go back to step 1.
6. If $a^{r/2} \equiv -1 \pmod{N}$, go back to step 1.
7. Otherwise, $\gcd(a^{r/2} \pm 1, N)$ gives a nontrivial factor of N. We are done.

For example: with $N = 15$ and $a = 2$ we get $r = 4$, so $a^{r/2} = 4$ and $\gcd(4 \pm 1, N)$ yields the factors $\gcd(3, 15) = 3$ and $\gcd(5, 15) = 5$.

### Quantum part: Period-finding subroutine

The quantum circuits used for this algorithm are custom designed for each choice of N and the random a used in $f(x) = a^x\ \mbox{mod}\ N$. Given N, find $Q = 2^q$ such that $N^2 \le Q < 2N^2$, which implies $Q/r > N$. The input and output qubit registers need to hold superpositions of values from 0 to Q − 1, and so have q qubits each. Using what might appear to be twice as many qubits as necessary guarantees that there are at least N different x which produce the same f(x), even as the period r approaches N/2.

Proceed as follows:

1. Initialize the registers to $Q^{-1/2} \sum_{x=0}^{Q-1} \left|x\right\rangle \left|0\right\rangle$ where x runs from 0 to Q − 1. This initial state is a superposition of Q states.
2. Construct f(x) as a quantum function and apply it to the above state, to obtain $Q^{-1/2} \sum_x \left|x\right\rangle \left|f(x)\right\rangle.$ This is still a superposition of Q states.
3. Apply the quantum Fourier transform to the input register. This transform (operating on a superposition of power-of-two $Q = 2^q$ states) uses a Qth root of unity such as $\omega = e^{2 \pi i /Q}$ to distribute the amplitude of any given $\left|x\right\rangle$ state equally among all Q of the $\left|y\right\rangle$ states, and to do so in a different way for each different x: $U_{QFT} \left|x\right\rangle = Q^{-1/2} \sum_y \omega^{x y} \left|y\right\rangle.$ This leads to the final state $Q^{-1} \sum_x \sum_y \omega^{x y} \left|y\right\rangle \left|f(x)\right\rangle.$ This is a superposition of many more than Q states, but many fewer than $Q^2$ states. Although there are $Q^2$ terms in the sum, the state $\left|y\right\rangle \left|f(x_0)\right\rangle$ can be factored out whenever $x_0$ and $x$ produce the same value.
Let
• $\omega = e^{2 \pi i /Q}$ be a Qth root of unity,
• r be the period of f,
• $x_0$ be the smallest of a set of x which yield the same given f(x) (we have $x_0 < r$), and
• b run from 0 to $\lfloor(Q-x_0-1)/r\rfloor$ so that $x_0 + rb < Q.$

Then $\omega^{ry}$ is a unit vector in the complex plane ($\omega$ is a root of unity and r and y are integers), and the coefficient of $Q^{-1}\left|y\right\rangle \left|f(x_0)\right\rangle$ in the final state is $\sum_{x:\, f(x)=f(x_0)} \omega^{x y} = \sum_{b} \omega^{(x_0 + r b) y} = \omega^{x_0y} \sum_{b} \omega^{r b y}.$ Each term in this sum represents a different path to the same result, and quantum interference occurs: constructive when the unit vectors $\omega^{ryb}$ point in nearly the same direction in the complex plane, which requires that $\omega^{ry}$ point along the positive real axis.

4. Perform a measurement. We obtain some outcome y in the input register and $f(x_0)$ in the output register. Since f is periodic, the probability of measuring some pair y and $f(x_0)$ is given by $\left| Q^{-1} \sum_{x:\, f(x)=f(x_0)} \omega^{x y} \right|^2 = Q^{-2} \left| \sum_{b} \omega^{(x_0 + r b) y} \right|^2 = Q^{-2} \left| \sum_{b} \omega^{ b r y} \right|^2.$ Analysis now shows that this probability is higher the closer the unit vector $\omega^{ry}$ is to the positive real axis, or the closer yr/Q is to an integer. Unless r is a power of 2, it won't be a factor of Q.
5. Perform a continued fraction expansion on y/Q to approximate it, producing some c/r′ that satisfies two conditions:
• A: r′ < N
• B: |y/Q − c/r′| < 1/(2Q).
By satisfaction of these conditions, r′ will be the appropriate period r with high probability.
6. Check whether f(x) = f(x + r′), i.e. whether $a^{r'} \equiv 1 \pmod{N}$. If so, we are done.
7. Otherwise, obtain more candidates for r by using values near y, or multiples of r′. If any candidate works, we are done.
8. Otherwise, go back to step 1 of the subroutine.

## Explanation of the algorithm

The algorithm is composed of two parts. The first part of the algorithm turns the factoring problem into the problem of finding the period of a function, and may be implemented classically. The second part finds the period using the quantum Fourier transform, and is responsible for the quantum speedup.

### Obtaining factors from period

The integers less than N and coprime with N form a finite Abelian group $G$ under multiplication modulo N. Its size is given by Euler's totient function $\phi(N)$. By the end of step 3, we have an integer a in this group. Since the group is finite, a must have a finite order r, the smallest positive integer such that $a^r \equiv 1\ \mbox{mod}\ N.$ Therefore, $N$ divides $a^r - 1$ (also written $N \mid a^r - 1$). Suppose we are able to obtain r, and it is even. (If r is odd, see step 5.) Now $b \equiv a^{r/2} \pmod{N}$ is a square root of 1 modulo $N$, different from 1. This is because $r$ is the order of $a$ modulo $N$, so $a^{r/2} \not\equiv 1 \pmod{N}$; otherwise the order of $a$ in this group would divide $r/2$, contradicting the minimality of $r$. If $a^{r/2} \equiv -1 \pmod{N}$, by step 6 we have to restart the algorithm with a different random number $a$. Eventually, we must hit an $a$, of order $r$ in $G$, such that $b \equiv a^{r/2} \not\equiv 1, -1 \pmod{N}$. This is because such a $b$ is a square root of 1 modulo $N$, other than 1 and $-1$, whose existence is guaranteed by the Chinese remainder theorem, since $N$ is not a prime power. We claim that $d = \operatorname{gcd}(b-1, N)$ is a proper factor of $N$, that is, $d \ne 1, N$.
In fact if $d = N$, then $N$ divides $b - 1$, so that $b \equiv 1 \pmod{N}$, against the construction of $b$. If on the other hand $d = \operatorname{gcd}(b-1, N) = 1$, then by Bézout's identity there are integers $u, v$ such that $(b-1) u + N v = 1$. Multiplying both sides by $b+1$ we obtain $(b^{2}-1) u + N(b+1) v = b+1$. Since $N$ divides $b^{2} - 1$ (because $b^{2} \equiv a^{r} \equiv 1 \pmod{N}$), we obtain that $N$ divides $b+1$, so that $b \equiv -1 \pmod{N}$, again contradicting the construction of $b$. Thus $d$ is the required proper factor of $N$.

### Finding the period

Shor's period-finding algorithm relies heavily on the ability of a quantum computer to be in many states simultaneously. Physicists call this behavior a "superposition" of states. To compute the period of a function f, we evaluate the function at all points simultaneously. Quantum physics does not allow us to access all this information directly, though. A measurement will yield only one of all possible values, destroying all others. If not for the no-cloning theorem, we could first measure f(x) without measuring x, and then make a few copies of the resulting state (which is a superposition of states all having the same f(x)). Measuring x on these states would provide different x values which give the same f(x), leading to the period. Because we cannot make exact copies of a quantum state, this method does not work. Therefore we have to carefully transform the superposition to another state that will return the correct answer with high probability. This is achieved by the quantum Fourier transform.

Shor thus had to solve three "implementation" problems. All of them had to be implemented "fast", which means that they can be implemented with a number of quantum gates that is polynomial in $\log N$.

1. Create a superposition of states. This can be done by applying Hadamard gates to all qubits in the input register. Another approach would be to use the quantum Fourier transform (see below).
2. Implement the function f as a quantum transform. To achieve this, Shor used repeated squaring for his modular exponentiation transformation (a classical sketch of repeated squaring appears below). It is important to note that this step is more difficult to implement than the quantum Fourier transform, in that it requires ancillary qubits and substantially more gates to accomplish.
3. Perform a quantum Fourier transform. By using controlled rotation gates and Hadamard gates, Shor designed a circuit for the quantum Fourier transform (with $Q = 2^q$) that uses just $q(q-1)/2 = O((\log Q)^2)$ gates.[9]

After all these transformations a measurement will yield an approximation to the period r. For simplicity assume that there is a y such that yr/Q is an integer. Then the probability of measuring such a y is 1. To see this, notice that then $e^{-2 \pi i b yr/Q} = 1$ for all integers b. Therefore the sum whose square gives us the probability of measuring y will be Q/r, since b takes roughly Q/r values, and thus the probability is $1/r^2$. There are r values of y such that yr/Q is an integer, and also r possibilities for $f(x_0)$, so the probabilities sum to 1.

Note: another way to explain Shor's algorithm is by noting that it is just the quantum phase estimation algorithm in disguise.

### The bottleneck

The runtime bottleneck of Shor's algorithm is quantum modular exponentiation, which is by far slower than the quantum Fourier transform and classical pre-/post-processing. There are several approaches to constructing and optimizing circuits for modular exponentiation.
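Classically, the modular exponentiation at issue is just exponentiation by repeated squaring, as in the sketch below (an editorial illustration, not Shor's quantum circuit; a quantum implementation must carry out the same arithmetic reversibly on superposed inputs, which is where the ancillary qubits and extra gates mentioned above come in). Python's built-in `pow(a, x, N)` computes the same thing.

```python
def modexp(a, x, N):
    """Compute a**x mod N by repeated squaring, using O(log x) multiplications."""
    result = 1
    base = a % N
    while x > 0:
        if x & 1:                          # low bit of x set: fold this power in
            result = (result * base) % N
        base = (base * base) % N           # square for the next binary digit of x
        x >>= 1
    return result

# Period r = 4 for a = 2, N = 15, as in the worked example above:
assert modexp(2, 4, 15) == 1
assert [modexp(2, k, 15) for k in range(1, 5)] == [2, 4, 8, 1]
```

The loop touches each binary digit of x once, so the classical cost is O(log x) modular multiplications; the quantum versions discussed next pay extra to make each multiplication reversible.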
The simplest and (currently) most practical approach is to mimic conventional arithmetic circuits with reversible gates, starting with ripple-carry adders. Knowing the base and the modulus of exponentiation facilitates further optimizations.[10][11] Reversible circuits typically use on the order of $n^3$ gates for $n$ qubits. Alternative techniques asymptotically improve gate counts by using quantum Fourier transforms, but are not competitive with fewer than 600 qubits due to high constants.

## Discrete logarithms

Suppose we know that $x \equiv g^r \pmod{p}$ for some r, and we wish to compute r, which is the discrete logarithm: $r = \log_g x \pmod{p}$. Consider the Abelian group $\left( \mathbb{Z}_{p} \right)^\times \times \left(\mathbb{Z}_p\right)^\times$, where each factor corresponds to modular multiplication of nonzero values, assuming p is prime. Now, consider the function $f(a,b) = g^a x^{-b} \pmod{p}.$ This gives us an Abelian hidden subgroup problem, as f corresponds to a group homomorphism. The kernel corresponds to the modular multiples of (r,1); indeed, $f(a,b) = g^{a - rb} \pmod{p}$, which equals 1 exactly when $a \equiv rb$ modulo the order of $g$. So, if we can find the kernel, we can find r.

## In popular culture

On the television show Stargate Universe, the lead scientist, Dr. Nicholas Rush, hoped to use Shor's algorithm to crack Destiny's master code. He taught a quantum cryptography class at the University of California, Berkeley, in which Shor's algorithm was studied. Shor's algorithm was also a correct answer to a question in a Physics Bowl competition in the episode "The Bat Jar Conjecture" of the TV series The Big Bang Theory.

## References

1. Vandersypen, Lieven M. K.; Steffen, Matthias; Breyta, Gregory; Yannoni, Costantino S.; Sherwood, Mark H. & Chuang, Isaac L. (2001), "Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance" (PDF), Nature 414 (6866): 883–887, arXiv:quant-ph/0112176, Bibcode:2001Natur.414..883V, doi:10.1038/414883a, PMID 11780055
2. Braunstein, S. L.; Caves, C. M.; Jozsa, R.; Linden, N.; Popescu, S.; Schack, R. (1999), "Separability of Very Noisy Mixed States and Implications for NMR Quantum Computing", Phys. Rev. Lett. 83 (5): 1054–1057, arXiv:quant-ph/9811018, Bibcode:1999PhRvL..83.1054B, doi:10.1103/PhysRevLett.83.1054
3. Lu, Chao-Yang; Browne, Daniel E.; Yang, Tao & Pan, Jian-Wei (2007), "Demonstration of a Compiled Version of Shor's Quantum Factoring Algorithm Using Photonic Qubits", Physical Review Letters 99 (25): 250504, arXiv:0705.1684, Bibcode:2007PhRvL..99y0504L, doi:10.1103/PhysRevLett.99.250504
4. Lanyon, B. P.; Weinhold, T. J.; Langford, N. K.; Barbieri, M.; James, D. F. V.; Gilchrist, A. & White, A. G. (2007), "Experimental Demonstration of a Compiled Version of Shor's Algorithm with Quantum Entanglement", Physical Review Letters 99 (25): 250505, arXiv:0705.1398, Bibcode:2007PhRvL..99y0505L, doi:10.1103/PhysRevLett.99.250505
5. Martín-López, Enrique; Anthony Laing, Thomas Lawson, Roberto Alvarez, Xiao-Qi Zhou & Jeremy L. O'Brien (2012). "Experimental realization of Shor's quantum factoring algorithm using qubit recycling". Nature Photonics. Retrieved October 23, 2012.
6. Markov, Igor L.; Saeedi, Mehdi (2012). "Constant-Optimized Quantum Circuits for Modular Multiplication and Exponentiation". Quantum Information and Computation 12 (5–6): 361–394. arXiv:1202.6614.
7. Markov, Igor L.; Saeedi, Mehdi (2013). "Faster Quantum Number Factoring via Circuit Synthesis". Phys. Rev. A 87: 012310. arXiv:1301.3210.

## Further reading

• Nielsen, Michael A. & Chuang, Isaac L.
(2000), Quantum Computation and Quantum Information, Cambridge University Press.

• Phillip Kaye, Raymond Laflamme, Michele Mosca, An Introduction to Quantum Computing, Oxford University Press, 2007, ISBN 0-19-857049-X

• "Explanation for the man in the street" by Scott Aaronson, "approved" by Peter Shor. (Shor wrote "Great article, Scott! That’s the best job of explaining quantum computing to the man on the street that I’ve seen.") An alternate metaphor for the QFT was presented in one of the comments.

Scott Aaronson suggests the following 12 references as further reading (out of "the 10105000 quantum algorithm tutorials that are already on the web."):

• Shor, Peter W. (1997), "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer", SIAM J. Comput. 26 (5): 1484–1509, arXiv:quant-ph/9508027v2, Bibcode:1999SIAMR..41..303S, doi:10.1137/S0036144598347011. Revised version of the original paper by Peter Shor ("28 pages, LaTeX. This is an expanded version of a paper that appeared in the Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, Nov. 20–22, 1994. Minor revisions made January, 1996").

• Quantum Computing and Shor's Algorithm, Matthew Hayward, 2005-02-17, imsa.edu, LaTeX2HTML version of the original 2750-line LaTeX document, also available as a 61-page PDF or postscript document.

• Quantum Computation and Shor's Factoring Algorithm, Ronald de Wolf, CWI and University of Amsterdam, January 12, 1999, 9-page postscript document.

• Shor's Factoring Algorithm, notes from Lecture 9 of Berkeley CS 294-2, dated 4 Oct 2004, 7-page postscript document.

• Chapter 6 Quantum Computation, 91-page postscript document, Caltech, Preskill, PH229.

• Quantum computation: a tutorial, by Samuel L. Braunstein.

• The Quantum States of Shor's Algorithm, by Neal Young, last modified: Tue May 21 11:47:38 1996.

• III. Breaking RSA Encryption with a Quantum Computer: Shor's Factoring Algorithm, lecture notes on quantum computation, Cornell University, Physics 481-681, CS 483; Spring 2006, by N. David Mermin. Last revised 2006-03-28, 30-page PDF document.

• arXiv quant-ph/0303175, Shor's Algorithm for Factoring Large Integers, C. Lavor, L. R. U. Manssur, R. Portugal. Submitted on 29 Mar 2003. This work is a tutorial on Shor's factoring algorithm by means of a worked-out example. Some basic concepts of quantum mechanics and quantum circuits are reviewed. It is intended for non-specialists who have basic knowledge of undergraduate linear algebra. 25 pages, 14 figures, introductory review.

• arXiv quant-ph/0010034, Shor's Quantum Factoring Algorithm, Samuel J. Lomonaco, Jr. Submitted October 9, 2000. This paper is a written version of a one-hour lecture given on Peter Shor's quantum factoring algorithm. 22 pages.

• Chapter 20 Quantum Computation, from Computational Complexity: A Modern Approach, draft of a book, dated January 2007, comments welcome!, Sanjeev Arora and Boaz Barak, Princeton University.

• A Step Toward Quantum Computing: Entangling 10 Billion Particles, from Discover Magazine, dated January 19, 2011.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 115, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8853034973144531, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/109635?sort=votes
## Automorphisms of subgroup of Hamming cube under distance constraint

Let $S$ be a subset of $\{0,1\}^n$ such that any two elements of $S$ are at least (Hamming) distance 5 apart. I'm looking for an upper bound on the size of the automorphism group of $S$. There's a trivial upper bound of $2^n n!$ (the number of automorphisms of $\{0,1\}^n$), and an easy lower bound of $2^{n/5}(n/5)!$ (take $S$ to be all elements of the form $xxxxx$, where $x$ is a bitstring of length $n/5$). Any bound of the form $n!/n^{cn}$ for any $c>0$ would be helpful.

- Such a set would be a 2-error-correcting binary code. You might search for constructions and see if some provide good lower bounds. – Aaron Meyerowitz Oct 14 at 22:00

Thanks for the comment. Haven't found anything yet that I've gotten to work, but I'll post here if I do. – rishig Oct 19 at 6:23

## 1 Answer

For the additive group $\{0,1\}^n$, it seems to me that every non-singular binary $n \times n$ matrix provides one $\mathbb{F}_2$-linear bijective map from $\{0,1\}^n$ to itself. As I recall, asymptotically the number of these bijective linear maps is at least $C\,2^{n^2}$, for some $C>0$. In other words, a strictly positive proportion of random $n \times n$ matrices over $\mathbb{F}_2$ has non-zero determinant, and an $n \times n$ matrix over $\mathbb{F}_2$ has $n \times n$ entries, each being 0 or 1. When you say automorphism, are you referring to automorphisms of the additive group $(\mathbb{F}_2)^n$? Thanks. David Bernier

- Thanks for your response. I'm referring to automorphisms of the hypercube [1], which are substantially more restricted than $\mathbb{F}_2^n$. For instance the linear map [[1 1] [1 0]] is an automorphism of $\mathbb{F}_2^2$ but not of the square. [1] en.wikipedia.org/wiki/Hyperoctahedral_group – rishig Oct 19 at 4:44

I might be mistaken. But perhaps the automorphism group is the same as the group of maps that preserve the Hamming distance on the metric space $\{0,1\}^n$ with the Hamming distance metric? Noam Elkies wrote a survey article on linear error-correcting codes that was published in the Notices of the AMS: Elkies, Noam; "Lattices, Linear Codes, and Invariants, Part II", Notices of the AMS, Volume 47, number 11, December 2000. ams.org/notices/200011/index.html – David Bernier Oct 20 at 17:47
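The claim about invertible matrices in the answer above can be made concrete: $|GL(n,\mathbb{F}_2)| = \prod_{k=0}^{n-1}(2^n - 2^k)$, so the proportion of invertible matrices tends to $\prod_{k\geq 1}(1 - 2^{-k}) \approx 0.2888$ as $n\to\infty$. A minimal Python sketch checking this (not part of the original thread):

```python
def gl2_count(n):
    # |GL(n, F_2)| = (2^n - 1)(2^n - 2)...(2^n - 2^(n-1)):
    # choose each row to lie outside the span of the previous rows
    count = 1
    for k in range(n):
        count *= 2**n - 2**k
    return count

for n in range(1, 9):
    print(n, gl2_count(n) / 2**(n * n))   # decreases towards ~0.28879
```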
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9078010320663452, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/240494/integrate-sqrt19x4-dx
# Integrate $\sqrt{1+9x^4} \, dx$ I have puzzled over this for at least an hour, and have made little progress. I tried letting $x^2 = \frac{1}{3}\tan\theta$, and got into a horrible muddle... Then I tried letting $u = x^2$, but still couldn't see any way to a solution. I am trying to calculate the length of the curve $y=x^3$ between $x=0$ and $x=1$ using $$L = \int_0^1 \sqrt{1+\left[\frac{dy}{dx}\right]^2} \, dx$$ but it's not much good if I can't find $$\int_0^1\sqrt{1+9x^4} \, dx$$ - 2 – Patrick Da Silva Nov 19 '12 at 9:42 2 parsing links from text is an arcane art; most early terminations can be fixed by adding url-encoding (in this case, `^` to `%5E`, `{` to `%7B`, `}` to `%7D`): wolframalpha.com/input/… – ysth Nov 19 '12 at 9:51 ## 2 Answers If you set $x=\sqrt{\frac{\tan\theta}{3}}$ you have: $$I = \frac{1}{2\sqrt{3}}\int_{0}^{\arctan 3}\sin^{-1/2}(\theta)\,\cos^{-5/2}(\theta)\,d\theta,$$ so, if you set $\theta=\arcsin(u)$, $$I = \frac{1}{2\sqrt{3}}\int_{0}^{\frac{3}{\sqrt{10}}} u^{-1/2} (1-u^2)^{-7/2} du,$$ now, if you set $u=\sqrt{y}$, you have: $$I = \frac{1}{4\sqrt{3}}\int_{0}^{\frac{9}{10}} y^{-3/4}(1-y)^{-7/2}\,dy$$ and this can be evaluated in terms of the incomplete Beta function. - Unfortunately I don't know anything about the incomplete Beta function, but thanks anyway. – daviewales Nov 20 '12 at 10:31 try letting $3x^2=\tan(\theta)$, or alternatively $3x^2= \sinh(\theta)$. - I think I let $x^2 = \frac{1}{3}\tan(\theta)$, which is the same thing. (I put the wrong thing in my initial post, but I'll edit it now.) – daviewales Nov 20 '12 at 10:28
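For reference, the definite integral can be checked numerically (a sketch assuming SciPy is available; not part of the original thread):

```python
import numpy as np
from scipy.integrate import quad

# arc length of y = x^3 on [0, 1]: integrand sqrt(1 + (dy/dx)^2) = sqrt(1 + 9x^4)
L, err = quad(lambda x: np.sqrt(1.0 + 9.0 * x**4), 0.0, 1.0)
print(L)   # ~ 1.548, consistent with the Beta-function evaluation above
```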
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9398636817932129, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/93495/list
Dear TCL,

If one does not confuse geodesics with minimal geodesics, then one can say that if you develop the cube onto the plane, then straight lines are geodesics. Of course, not all geodesics arise in this way and I've never considered the question of giving an explicit formula for the distance function.

If $P$ is an $(n-1)$-dimensional polyhedron in an $n$-dimensional normed space, which we consider as a piecewise flat Finsler space, then a large class of geodesics that do not pass through the $(n-3)$-skeleton of $P$ can be easily described as follows:

(1) On each facet they are straight lines.

(2) They cross the $(n-2)$-dimensional faces transversely and at each crossing they satisfy the generalized Snell-Descartes law (of refraction).

This statement is almost obvious because straight lines are geodesics in normed spaces (and hence on the facets, which I consider as pieces of normed spaces with the induced norm) and because the generalized Snell-Descartes law follows from Fermat's principle. In the case of the cube with the piecewise flat $\ell_\infty$ metric, this gives that straight lines in the developed cube are geodesics on the cube.

## The generalized Snell-Descartes law

This is a good time to say that what I'm writing now is basically an announcement of an essay I'm writing for a book with A.C. Thompson ("An invitation to Minkowski geometry"). The presentation here is a bit sketchy at some points (otherwise it would be too long), but if you reconstruct the pictures, I think everything will be clear.

A well-known secret in physics is that everything becomes simpler when you use momentum (a truly physical notion) rather than velocity (which is mere kinematics), and so it is with the laws of reflection and refraction. Moreover, once formulated in terms of momenta, the generalization to the Finsler (or normed-space) setting is obvious.

Correspondence between velocity and momentum, or one minute with the Legendre transform

Let $v$ be a unit vector on a normed space $(X,\|\cdot\|)$; the set of momenta associated to $v$ is the set of unit covectors $\xi \in X^*$, the dual normed space, such that the hyperplane $\xi = 1$ supports the unit ball of $X$ at the point $v$. We extend the definition to all non-zero vectors by homogeneity. You can also write this in terms of the subdifferential of the square of the norm. (Here I really should have inserted a nice picture!) The same procedure allows us to associate to every non-zero momentum a set of velocities. When the unit spheres of $X$ and $X^*$ are strictly convex, the correspondence between velocities and momenta is a bijection.

The Snell-Descartes law

Consider an $(n-1)$-dimensional cooriented subspace $W \subset {\mathbb R}^n$ (the wall) and consider a norm on each half-space that describes the propagation properties of light in this anisotropic medium. The norms do not have to agree on the wall, although they do when we look at piecewise flat Finsler metrics on polyhedra, and so for simplicity (I do not want to deal here with "critical angles" at which refraction becomes reflection) I will assume both norms agree on the subspace $W$.

Law of refraction: If the light ray hits the wall $W$ transversally at the origin with incoming unit momentum $\xi$ (i.e. $\|\xi\|^*_1 = 1$), then the outgoing momenta $\eta$ (there may be an infinity of light rays refracting from a single ray!)
are characterized by the conditions:

(a) $\|\eta\|^*_2 = 1$,

(b) The restrictions of the covectors (= linear forms) $\xi$ and $\eta$ to the subspace $W$ agree,

(the third condition is easier to draw than to state, and it distinguishes refraction from reflection)

(c) The points at which the hyperplane $\eta = 1$ supports the unit ball $B_2 = \{v : \|v\|_2 \leq 1\}$ lie on the same side of $W$ as the points at which the hyperplane $\xi = 1$ supports the unit ball $B_1 = \{v : \|v\|_1 \leq 1\}$.

Indeed, if we change (c) by

(c') The points at which the hyperplane $\eta = 1$ supports the unit ball $B_2 = \{ v : \|v\|_2 \leq 1 \}$ and the points at which the hyperplane $\xi = 1$ supports the unit ball $B_1 = \{ v : \|v\|_1 \leq 1 \}$ lie on different sides of $W$,

we obtain a slight generalization of the law of reflection for Minkowski spaces given in Corollary 3.2 of the paper by Gutkin and Tabachnikov.

Remarks.

1. The laws of refraction and reflection admit a very pretty synthetic interpretation in terms of pencils of hyperplanes. Have fun!

2. The case when condition (b) cannot be met is exactly the case where we have a "critical angle" and light is reflected instead of refracted. A good exercise is to check these constructions in the standard case and rediscover the usual laws of reflection and refraction (and the condition for critical angles) that you find in any physics textbook.

3. When the normed spaces are such that their unit balls and their duals are strictly convex, then to each incoming light ray corresponds one and only one outgoing light ray.

4. On the upper half-space take the usual Euclidean norm; on the lower half-space take the norm whose unit ball is a double cone with base equal to the unit disc on the wall and apexes $(0,0,\dots,0,1)$ and $(0,0,\dots,0,-1)$. You can verify that all light rays coming from the upper-half plane get refracted into the sheaf of vertical rays.
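As a quick sanity check of condition (b), consider the isotropic case of Remark 2's exercise (a sketch, under the usual convention that light in a medium of refractive index $n_i$ travels at speed $c/n_i$, so travel time is proportional to $n_i$ times Euclidean length and the dual norms are $\|\xi\|_i^* = |\xi|/n_i$). Unit momenta then satisfy $|\xi| = n_1$ and $|\eta| = n_2$, and if $\theta_1, \theta_2$ denote the angles of incidence and refraction measured from the normal to $W$, the tangential components appearing in (b) are $|\xi|\sin\theta_1$ and $|\eta|\sin\theta_2$. Condition (b) thus reads
$$n_1 \sin\theta_1 = n_2 \sin\theta_2,$$
which is the classical Snell-Descartes law.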
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 170, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9144237637519836, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/tagged/arithmetic?sort=faq&pagesize=15
# Tagged Questions

The arithmetic tag has no wiki summary.

2answers 112 views

### Timing attack on modular exponentiation

It is known that computing $a^x \bmod N$ takes $O(|x| + \mathrm{pop}(x))$ multiplications modulo $N$, where $|x|$ is the number of bits of $x$ and $\mathrm{pop}(x)$ is the number of $1$ bits (Hamming ...
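The cost claim in the teaser can be made concrete with left-to-right square-and-multiply (a minimal Python sketch, not from the thread; it also exhibits the timing leak, since the work done depends on the Hamming weight of $x$):

```python
def modexp(a, x, N):
    """Left-to-right square-and-multiply: |x| squarings plus pop(x) multiplies."""
    result = 1
    for bit in bin(x)[2:]:            # one iteration per bit of x
        result = result * result % N  # squaring, performed for every bit
        if bit == '1':
            result = result * a % N   # extra multiplication only for 1-bits
    return result
```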
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354171752929688, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/classical-mechanics
# Tagged Questions

[tag:classical-mechanics] entails the study of the trajectory of bodies under the influence of forces. More specific subtopics are: [tag:newtonian-mechanics], [tag:lagrangian-mechanics], [tag:hamiltonian-mechanics] for point particles and [tag:fluid-dynamics], [tag:statistical-mechanics] and ...

1answer 34 views

### Another Inclined plane question

I did the FBD, and I found too many variables which are not eliminating... Moreover, I believe this question is based on kinetic and static friction. But $\mu$ here is ambiguously defined... How do I ...

1answer 41 views

### Center of mass of three particles of masses 1kg, 2kg, 3kg lies at the point (1,2,3) [closed]

Center of mass of three particles of masses 1kg, 2kg, 3kg lies at the point (1,2,3) and center of mass of another system of particles 3kg and 2kg lies at the point (-1,3,-2). Where should we put a ...

1answer 58 views

### Peculiar Hamiltonian Phase space

I was solving an exercise of classical mechanics: Consider the following Hamiltonian $H(p,q,t) = \frac{p^2}{2m} + \lambda pq + \frac{1}{2}m\lambda^2\frac{q^6}{q^4+\alpha^4}$ where ...

0answers 38 views

### Why is the angle of impact complementary to the angle of launch in the simple equations for the range of a projectile?

I'm using the standard equation for the range of a projectile: $$d = \frac{v\cos\theta}{g} \left( v\sin\theta + \sqrt{v^2\sin^2\theta + 2gy_0}\right)$$ ...

3answers 111 views

### Physical interpretation of Poisson bracket properties

In classical Hamiltonian mechanics the evolution of any observable (a scalar function on the manifold at hand) is given as $$\frac{dA}{dt} = [A,H]+\frac{\partial A}{\partial t}$$ So the Poisson bracket is a ...

1answer 59 views

### Physics of a cold and hot top

Imagine two tops made up of exactly one thousand atoms. One is kept at 4 kelvin, the other at room temperature. 1. Would they weigh the same given an arbitrarily precise scale in the Earth's ...

2answers 48 views

### Constant of gravity in earth-fixed coordinate system

I have this problem: If the constant of gravity is measured to be $g_0$ in an earth-fixed coordinate system, what is the difference $g-g_0$, where $g$ is the real constant of gravity, as ...

0answers 38 views

### Impulse problem [closed]

The figure above shows a plot of the time-dependent force $F_x(t)$ acting on a particle in motion along the x-axis. What is the total impulse delivered to the particle? ...

1answer 34 views

### Is there a typo in this modified Lennard-Jones potential?

The standard 12-6 Lennard-Jones potential is given by $$U(r_{ij}) = 4\epsilon\left[ \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6} \right]$$ where ...

1answer 38 views

### Calculating the moment of inertia for a circle with a point mass on its perimeter

I want to calculate the tensor of the moment of inertia. Consider this situation: The dot represents a point mass, in size equal to $\frac{5}{4}m$. $m$ is the mass of the homogeneous circle. I'm ...

0answers 15 views

### Acceleration of spherical particles (micron-scale) by an external force

I am looking for an expression for the velocity of micron-sized (1–10 micron diameter) particles under accelerating forces. I have aerosols in mind. This is what I have in mind: The ...

1answer 55 views

### Why does the Lagrangian of a free particle depend on the square of the velocity?

Why does the Lagrangian of a free particle depend on the square of the velocity? For example, $L(v^4)$ also doesn't depend on the direction of $v$.

1answer 51 views

### Pendulum Wave Period

Recently I've seen various videos showing the pendulum wave effect. All of the videos which I have found have a pattern which repeats every $60\mathrm{s}$. I am trying to work out the relationship ...

1answer 53 views

### Why is there no such thing as a body in a state of acceleration?

It appears that velocity is a quantity of motion, meaning that all objects can have assigned to them a particular velocity. Through the application of forces (e.g. gravity, E&M) we measure changes ...

1answer 39 views

### Is this a correct interpretation of pressure?

So I am told that pressure = force per area --> F/A. When considering the units of force I find that force = kg * m/s^2. When considering the units of area I find that area = m^2. Thus the units of ...

2answers 45 views

### How to determine a reaction force?

An object sits on an inclined plane. The weight of the object will have a normal and parallel component. I always thought that the reaction of the plane was simply the negative of the normal component ...

1answer 47 views

### Stopping distance of two objects with equal kinetic energy

I'm working on a problem regarding two objects with the same kinetic energy. Two objects with masses $m_1$ and $m_2$ have the same kinetic energy and are both moving to the right. The same constant ...

1answer 213 views

### In the Lennard-Jones potential, why does the attractive part (dispersion) have an $r^{-6}$ dependence?

The Lennard-Jones potential has the form: $$U(r) = 4\epsilon\left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right]$$ The (attractive) $r^{-6}$ term describes the ...

0answers 47 views

### Would there be any possibility for anyone to survive when a Boeing 747 crashes into the Pacific Ocean at its normal cruising speed? [closed]

I know of no case of anyone surviving when an aircraft the size of a Boeing 747 crashes into the ocean at its normal cruising speed, but in a physics sense, would there be any possibility of anyone surviving ...

1answer 35 views

### Finding the coefficient of restitution

A ball moving with velocity $1 \hat i \ \mathrm{m\,s^{-1}}$ collides with a frictionless wall; after the collision the velocity of the ball becomes $\frac{1}{2} \hat j \ \mathrm{m\,s^{-1}}$. Find the coefficient of restitution ...

1answer 65 views

### Double Compound Pendulum: why use inertia about the center of mass for the bottom pendulum?

I'm trying to wrap my head around the kinetic energy of a double compound pendulum, like the one shown in the Wikipedia article on double pendulums. I know for computing the kinetic energy of the ...

1answer 26 views

### Forces and angles

"The little ball with the mass of 100g has gotten stuck in a chute as depicted in the picture. What forces, and how large are they, are acting on the ball?" This is how I solve it: I find ...

1answer 100 views

### Questions about angular momentum and 3-dimensional (3D) space?

Q1: As we know, in classical mechanics (CM), according to Noether's theorem, there is always one conserved quantity corresponding to one particular symmetry. Now consider a classical system in an $n$ ...

1answer 86 views

### Confusions about rotational dynamics and centripetal force

I am a high school student. I am having confusions about centripetal force and rotational motion. I have known that a body will be at rest or in uniform velocity if no force is applied. But ...

4answers 118 views

### How to create a frame of reference?

Is it possible to create an inertial frame of reference on the earth? How is it possible?

0answers 67 views

### Torque, lever and mass

The force used in a catapult is exerted near its axis. If we double the length of the arm of the catapult, but still use the same force at the same point as before near the same axis, does the ...

3answers 124 views

### Why does a rod rotate?

I'm a physics tutor tutoring high school students. A question confused me a lot. The question is: Suppose a massless rod of length $l$ has a particle of mass $m$ attached at its end and the rod is ...

0answers 60 views

### Small oscillations [closed]

I am asked to consider a fixed homogeneous rod of length $2L$ and mass density $\rho$, centered around $O$. A particle with mass $M$ is moving in the same plane. The attractive force between the ...

0answers 187 views

### Extended Born relativity, Nambu 3-form and ternary (n-ary) symmetry

Background: Classical mechanics is based on the Poincaré-Cartan two-form $$\omega_2=dx\wedge dp$$ where $p=\dot{x}$. Quantum mechanics is secretly a subtle modification of this. On the other hand, ...

0answers 44 views

### Closed-form equation for orientation and angular velocity over time

If a rigid body, rotating freely in 3d, experiences no friction or other external forces and has an initially diagonal inertia matrix $\mathbf{I}_0$ (with $I_{11}>I_{22}>I_{33}>0$) and ...

0answers 51 views

### Scaling arguments for the contact mechanics between two elastic spheres

I am studying a bit of granular dynamics, and I have seen that two spheres of radius $R$ in contact with a contact area of radius $a$ would need an applied force $F$ on these two spheres that is nonlinear ...

0answers 27 views

### Doubling the energy of an oscillating mass on a spring [closed]

From this question: Question 1. What do we need to change in order to double the total energy of a mass oscillating at the end of a spring? (a) increase the angular frequency by $\sqrt{2}$. ...

1answer 61 views

### Invariance, covariance and symmetry

Though often heard, often read, often felt being overused, I wonder what the precise definitions of invariance and covariance are. Could you please give me an example from quantum field theory? ...

2answers 46 views

### Force applied to a body moving at high speed

Consider a rod of length $l$ and uniform density moving at high speed. I want to deflect the rod; where do I need to apply the minimum force so that the rod is deflected?

1answer 81 views

### Statics of Rigid Bodies — Can there be two possible solutions?

I've been working on a question and there seem to be two possible solutions. My own solution does not match the one given in the book. However, after resolving forces and taking moments with both ...

0answers 39 views

### When can a center of mechanical momentum frame be found for an electromagnetic system?

In classical mechanics, a center of mechanical momentum frame can always be found for a system of particles interacting with one another locally. For an electromagnetic system where the charges ...

2answers 103 views

### Geometrical interpretation of complex eigenvectors in a system of differential equations

Let's consider a system of differential equations of the form $$\dot{X} = M X$$ in two dimensions ($X = (x(t), y(t))$). In the case that $M$ has real values, it is easy to give a geometric ...

1answer 240 views

### Goldstein's Classical Mechanics exercises solutions [duplicate]

Does anyone know where I can find some (good) solutions for Goldstein's book Classical Mechanics?

1answer 40 views

### What's the center of mass for a triatomic-molecule system?

My text uses the following example to explain the center of mass. There are three balls (mass $m$) sitting at the origin, at $x=l$ and at $x=2l$; each pair of neighboring masses is connected by a spring of constant $k$. ...

4answers 63 views

### Is frictional force right or wrong?

An experiment to disprove the statement "frictional force is irrespective of the surface area in contact": take an x rs note, fold it in half and put it in the pocket of a shirt, then invert the ...

0answers 31 views

### When is classical mechanics valid for describing the motion of atoms?

In molecular dynamics simulations, Newton's equations of motion are used to calculate the time evolution of the system. Once, I read in an introductory text that when the thermal de Broglie wavelength ...

2answers 132 views

### What's the physical significance of the off-diagonal elements in the moment of inertia matrix?

In the classical mechanics of the rotation of a rigid object, the general problem is to study the rotation about a given axis, so we need to figure out the moment of inertia around some axes. In 3-dimensional ...

1answer 229 views

### Classical results proved using quantum mechanics

Are there any results in classical mechanics that are easier to show by deriving a corresponding result in quantum mechanics and then taking the limit as $\hbar\rightarrow0$? (Are there classical ...

3answers 129 views

### Lagrangian mechanics and time derivatives in general coordinates

I am reading a book on analytical mechanics, on the Lagrangian formalism. I get the basic idea of the method: we can use any coordinates and write down the kinetic energy $T$ and potential $V$ in terms of the general ...

1answer 52 views

### Hollow stone columns provide more support?

In history class in elementary school I remember learning that the Greeks would build their stone columns hollow because they thought this provided more support. Is it true that a hollow column is ...

5answers 273 views

### Does the mass point move?

There is a question regarding basic physical understanding. Assume you have a mass point (or just a ball if you like) that is constrained on a line. You know that at $t=0$ its position is $0$, i.e., ...

2answers 74 views

### Runge-Lenz vector and Keplerian orbits

Is the loss of closed Keplerian orbits in relativistic mechanics directly tied to the absence of the Runge-Lenz vector?

1answer 39 views

### Finding the acceleration at an angle

"What's the maximum acceleration you can achieve in a water-slide at a 34 degree angle (if you can't use your arms and legs)?" This is the free-body diagram that I drew, assuming $g = 10\,\mathrm{m/s^2}$: ...

1answer 75 views

### Higher order covariant Lagrangian

I'm in search of examples of Lagrangians which are at least second order in the derivatives and are covariant, preferably for field theories. Up to now I could only find first-order (such as ...

2answers 62 views

### Resolution of vectors

What is the fundamental basis of the resolution of a vector? Suppose we have a vector $\vec{mg}$; now we resolve it into two components, horizontal and vertical. My question is what is the basis for telling ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9050428867340088, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/64058/matrix-row-selection
## Matrix row selection

Consider an $m \times n$ matrix $A$ with entries from some finite field $\mathbb{F}_q$, where $m\geq n$. Let us write matrix $A$ as follows: $A=[A_1^T~A_2^T~\ldots~A_l^T]^T$, where the $A_i$'s are given matrices. Furthermore assume that all the rows of each matrix $A_i$ are linearly independent. For a given set of numbers $k_1,k_2,\ldots,k_l$ with $\sum_i k_i\geq n$, the goal is to find a subset of $k_i$ rows of each $A_i$, denoted by $S_i$, such that $\operatorname{rank}[S_1^T~S_2^T~\ldots~S_l^T]=n$. Is it possible to devise a polynomial time algorithm that solves this problem? Note that $k_1,k_2,\ldots,k_l$ are picked such that a solution always exists.

- I have a feeling that exact set cover reduces to this problem. I am therefore not optimistic about finding a polynomial time solution. Gerhard "Ask Me About System Design" Paseman, 2011.05.05 – Gerhard Paseman May 5 2011 at 22:19
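To make the question concrete over $\mathbb{F}_2$, here is a brute-force checker (exponential in general; the question is precisely whether a polynomial-time method exists). Rows are encoded as integer bitmasks of width $n$, and all names are illustrative:

```python
import itertools

def rank_gf2(rows):
    """Rank over F_2 of a list of rows, each an int bitmask."""
    pivots = {}                        # leading-bit position -> reduced row
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = r
                break
            r ^= pivots[lead]          # eliminate the current leading bit
    return len(pivots)

def find_selection(blocks, ks, n):
    """Search for k_i rows from each block A_i whose union has rank n."""
    choices = [itertools.combinations(B, k) for B, k in zip(blocks, ks)]
    for pick in itertools.product(*choices):
        rows = [r for part in pick for r in part]
        if rank_gf2(rows) == n:
            return pick
    return None
```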
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9171409606933594, "perplexity_flag": "head"}
http://mathoverflow.net/questions/57184?sort=newest
## Filtrations generated by cadlag martingales

Let $(\Omega,P,\mathcal{F})$ be a probability space with filtration $\mathbb{F} = (\mathcal{F}_t)$, $t \in [0,T]$, where $T$ can be finite or infinite. Let $M$ be a cadlag (local) martingale with respect to $\mathbb{F}$, and let $\mathbb{F}^M$ be the filtration generated by $M$ and then completed with respect to $P$.

Question: Is $\mathbb{F}^M$ a right-continuous filtration?

Some facts:

1. If $X$ is a strong Markov process, then the completion of $\mathbb{F}^X$ is right-continuous. This is in Karatzas and Shreve.
2. A sort of converse: If $M$ is a local martingale in a right-continuous and complete filtration, it has a right-continuous modification.

One possible idea: A continuous local martingale can be expressed as a time-changed Brownian motion, which is strong Markov.

- It seems to me like an affirmative answer follows directly from the "càd" part of càdlàg... perhaps I'm overlooking some nuance? – Steve Huntsman Mar 3 2011 at 0:35

You're saying the completed filtration generated by any cadlag process is right continuous? If things aren't completed, sets like {M has a local maximum at time t} are in $\mathcal{F}_{t+}$ but not in $\mathcal{F}_t$. – weakstar Mar 3 2011 at 0:43

Ah, silly me... – Steve Huntsman Mar 3 2011 at 1:05

For a Brownian motion, those kinds of sets have probability zero. I'm not sure if that's the only kind of obstruction, but since martingales have the same paths as Brownian motion, I had hoped you might be able to say something. – weakstar Mar 3 2011 at 1:16

## 1 Answer

No, that is not true. Consider the following, defined on a filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in[0,T]},\mathbb{P})$.

1. $W$ is a standard Brownian motion.
2. $U$ is an $\mathcal{F}_0$-measurable Bernoulli random variable independent of $W$, with $\mathbb{P}(U=0)=\mathbb{P}(U=1)=1/2$.

Then, set $M_t=UW_t$. This is a continuous martingale. If $\mathcal{F}^M_t$ is its completed natural filtration then $U$ is $\mathcal{F}^M_t$-measurable for all $t > 0$. Then, $U$ is $\mathcal{F}^M_{0+}$-measurable but is not measurable with respect to $\mathcal{F}^M_0$ (which only contains sets with probability 0 and 1). So $\mathcal{F}^M_{0+}\not=\mathcal{F}^M_0$. Also, this is essentially the same as the example I gave in a previous answer of a Markov process which is not strong Markov.

As another example, to show that there is not really any simple way you can modify the question to get an affirmative answer, consider the following: a Brownian motion $W$ and a left-continuous, positive, and locally bounded adapted process $H$. Then, $M=H_0+\int H\,dW$ is a local martingale. Also, $M$ has quadratic variation $[M]=\int H^2_t\,dt$, which has left-derivative $H^2$ for all $t > 0$. So, $H_t$ is $\mathcal{F}^M_t$-measurable, as is $W_t=\int H^{-1}\,dM$. In fact, $\mathbb{F}^M$ is the completed natural filtration generated by $W$ and $H$. If $H$ is taken to be independent of $W$, then $\mathbb{F}^M$ will only be right-continuous if $\mathbb{F}^H$ is, and it is easy to pick left-continuous processes whose completed natural filtration fails to be right-continuous.

- Thanks. Maybe I'm being dumb, but can this counterexample be extended to a nonzero starting position? – weakstar Mar 3 2011 at 1:07

@weakstar: I'm not quite sure what you mean. Setting $M_t=UW_t+c$ extends it to a nonzero starting position.
If $W$ was started from a nonzero position and you wrote $M=UW_t$, then $U$ would be $\mathcal{F}^M_0$-measurable, so the argument breaks down. – George Lowther Mar 3 2011 at 1:26 Right, I meant your second suggestion, for Brownian motion starting away from zero. Just seeing if any affirmative answer can be salvaged. – weakstar Mar 3 2011 at 1:45 @George Lowther: Hi, could you please recall the version of the Blumenthal 0-1 law you are using to justify that sets in $\mathcal{F}_{0^+}^M$ have 0-1 measure. Best Regards – The Bridge Mar 3 2011 at 8:32 @The Bridge: I'm not using any such law, nor am I saying that $\mathcal{F}^M_{0+}$ only has sets with probability 0 or 1 (it doesn't). I was saying that $\mathcal{F}^M_{0}$ only has sets of measure 0 or 1 (in the first example) because it is generated by a random variable which is identically zero (and then completed). – George Lowther Mar 3 2011 at 9:02
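For intuition, here is a minimal simulation sketch of the first counterexample (the step size, horizon, and window length are my own arbitrary choices): if $U=0$ the path of $M=UW$ is identically zero, while if $U=1$ it is a Brownian path, which is nonzero on any initial interval almost surely, so watching $M$ on an arbitrarily short initial window already reveals $U$.

```python
import numpy as np

# Simulate M_t = U * W_t and recover U from a short initial segment of the path.
rng = np.random.default_rng(0)
dt, n = 1e-4, 10_000

for trial in range(5):
    U = rng.integers(0, 2)                      # Bernoulli(1/2), F_0-measurable
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))
    M = U * W

    eps_steps = 10                              # look only at (0, 10*dt]
    U_guess = int(np.max(np.abs(M[:eps_steps])) > 0.0)
    print(U, U_guess)                           # the two agree in every trial
```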
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9322545528411865, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/294087/permutations-and-equivalence-relations/294091
# Permutations and Equivalence Relations Let X be a nonempty set and let $\sigma \in$ Sym(X). Define the two-place relation $\sim$ on X as follows: $x\sim y$ if and only if $\sigma^{k}(x)=y$ for some integer k. Prove that $\sim$ is an equivalence relation. I know that Sym(X) is the set of bijective maps from X to X. Since the function is a bijection, it has an inverse. Also, my professor said something about how, on Z, negation is a permutation, and how it fixes 0 and maps 1 to -1 and so on. How do I get started on this? - ## 2 Answers HINT: For each $\sigma\in\operatorname{Sym}(X)$, $\sigma^{-1}$ is also a permutation of $X$, and $y=\sigma^k(x)$ if and only if $$\left(\sigma^{-1}\right)^k(y)=\left(\sigma^k\right)^{-1}(y)=\left(\sigma^k\right)^{-1}\left(\sigma^k(x)\right)=x\;.$$ This is what you need to prove symmetry of $\sim$. Reflexivity is easy: the identity map $x\mapsto x$ is a permutation of $X$. For transitivity, you must prove that if $\sigma,\tau\in\operatorname{Sym}(X)$, then $\sigma\circ\tau\in\operatorname{Sym}(X)$. - Oh snap. Just one second! :-) – Asaf Karagila Feb 4 at 0:06 @Asaf: I think that counts as simultaneous. (Evidently this simultaneity isn't an equivalence relation. :-)) – Brian M. Scott Feb 4 at 0:08 To prove that this is an equivalence relation you need to show three things: 1. Reflexivity: For every $x\in X$, $x\sim x$, that is, there is some $k$ such that $\sigma^k(x)=x$. Recall that if $f\colon A\to A$ is a function on some set $A$ then $f^0(a)=a$. 2. Symmetry: For every $x,y\in X$ if $x\sim y$ then $y\sim x$, that is, if there is some $k$ such that $\sigma^k(x)=y$ then there is some $n$ such that $\sigma^n(y)=x$. Recall that $(\sigma^k)^{-1}=\sigma^{-k}$ and deduce that $-k$ witnesses the symmetry of the relation. 3. Transitivity: For every $x,y,z\in X$ if $x\sim y$ and $y\sim z$ then $x\sim z$. For this recall that $\sigma^k\circ\sigma^n=\sigma^{k+n}$ and, as with the previous properties, deduce the transitivity of $\sim$. - does your point 2 apply if `k` has to be >= 0 ? – Angelo.Hannes Apr 22 at 13:30 @Angelo: Only if $\sigma$ is the product of finite cycles. Consider $X=\Bbb Z$ and $\sigma(x)=x+1$: if we require $k\geq 0$ then $-1\sim 0$ but it is impossible for $0\sim -1$, because no matter how many times you apply $\sigma$ on $0$ you will never get a negative integer. – Asaf Karagila Apr 22 at 13:35
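For a finite $X$ the relation is easy to visualize: the equivalence classes are exactly the orbits (cycles) of $\sigma$. A minimal Python sketch of this (the dictionary encoding of $\sigma$ is my own illustrative choice):

```python
def orbits(sigma):
    """Partition the domain of the bijection `sigma` into its orbits."""
    seen, result = set(), []
    for start in sigma:
        if start in seen:
            continue
        orbit, x = [], start
        while x not in seen:          # follow sigma^k(start) until it cycles back
            seen.add(x)
            orbit.append(x)
            x = sigma[x]
        result.append(orbit)
    return result

sigma = {1: 3, 2: 2, 3: 5, 4: 1, 5: 4}   # a permutation of {1,...,5}
print(orbits(sigma))                      # [[1, 3, 5, 4], [2]]
```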
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9258007407188416, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/41597/maximum-resolution-per-lens-size/41602
# Maximum resolution per lens size This question is more practical than theoretical, but I am interested in the theoretical considerations as well. My wife just bought a Samsung S3 phone with an 8 MP image sensor hiding behind a tiny lens. In daylight the pictures come out fine, but it suffers horribly in low-light conditions. Is there a theoretical limit as to how fine an image sensor can be behind a lens of a specific aperture, given a reasonable amount of ambient light and a reasonable shutter speed? Will increasing the sensor resolution beyond this limit decrease the actual resolution (the ability to resolve two points as individual points) of the final image? Thanks. - ## 1 Answer The resolution is controlled by diffraction at the smallest part of the lens system. The Wikipedia article on angular resolution goes into this in some detail. To quote the headline from this article, for a camera the spatial resolution at the detector (or film) is given by: $$\Delta \ell = 1.22 \frac{f\lambda}{D}$$ where $f$ is the distance from the plane of the lens to the detector, $\lambda$ is the wavelength of the light and $D$ is the camera aperture. Making the pixel size smaller than $\Delta \ell$ won't do any harm, but it won't make the pictures any sharper. I don't know if smartphone cameras contain a variable aperture. With conventional cameras larger apertures produce less diffraction, so the picture quality should actually improve in low light. However, larger apertures expose a larger area of the lens, and optical aberration dominates the image quality. The end result is that there is an optimum aperture below which diffraction dominates and above which optical aberration dominates. Incidentally, the poor performance at low light probably isn't due to diffraction. I'd guess it's just that the signal-to-noise ratio of the detected light falls so far that the pictures get very noisy. - I see, thanks. You've given me the keywords that I need to continue research. I want to see how far they are pushing this poor lens. However, how is quality reduced as lens size is reduced? All the professional cameras have large lenses, surely there is some advantage to a larger lens. – dotancohen Oct 24 '12 at 16:25 – John Rennie Oct 24 '12 at 17:01 Thanks John! I did realize that a larger lens lets in more light, but I was unaware of spherical aberration. – dotancohen Oct 24 '12 at 17:04
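A rough back-of-the-envelope check of the formula above, using plausible but assumed numbers for a phone camera module (a roughly 4 mm focal distance, an f/2 aperture, and a ~4.5 mm sensor width are my guesses, not manufacturer data):

```python
wavelength = 550e-9   # green light, metres
f = 4e-3              # assumed focal distance to the sensor, metres
D = 2e-3              # assumed aperture diameter, metres

spot = 1.22 * f * wavelength / D                 # Delta-l from the formula above
print(f"diffraction-limited spot: {spot*1e6:.2f} micrometres")   # ~1.34 um

# An 8 MP, 4:3 sensor that is ~4.5 mm wide has a pixel pitch of roughly
pixel = 4.5e-3 / (8e6 * 4/3) ** 0.5              # ~1.38 um
print(f"approximate pixel pitch:  {pixel*1e6:.2f} micrometres")
```

Under these assumptions the pixel pitch is already at the diffraction limit, which illustrates why making the pixels finer would not sharpen the image.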
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9439715147018433, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/40751/does-air-resistance-ever-slow-a-particle-down-to-zero-velocity
# Does air resistance ever slow a particle down to zero velocity? If a particle moves through a medium with air resistance (but no other forces), will it ever reach zero velocity in finite time? The air resistance is proportional to some power of the velocity, $v^\alpha$, and I have to try it with different $\alpha$. I've solved for the velocity as a function of time for several alphas, and all the solutions I've gotten decay to $v=0$ as $t \to\infty$, but none ever reach exactly $v=0$ for a finite value of $t$. This should be the case, right? - ## 2 Answers Have you considered the full range of values of $\alpha$? For $\alpha\ge1$, your conclusion is correct: the velocity approaches $v=0$ asymptotically at large times. If you consider $\alpha<1$, you can find solutions which reach $v=0$ in finite time. I'll leave the explicit solutions to you, but I find the time at which the particle stops to be $$T=\frac{v(0)^{1-\alpha}}{1-\alpha} \quad {\rm for}\quad \alpha<1.$$ Edit: I should just point out that my constant of proportionality was fixed to $1$. That is, I solved $v^\prime = -v^\alpha$. To get the answer in physical units, you'll have to reinstate it. - Ah, ok, I didn't even think of $\alpha$ < 0, because the 3 alphas that I had to try were 1 , 3/2 , 2. Thanks, though! – Steven Harris Oct 14 '12 at 19:00 No problem. It's not just $\alpha<0$ though! For example, try $\alpha=\frac{1}{2}$. – AClassicalCaseOfConfusion Oct 15 '12 at 7:42 Yes. With this resistance force alone, it would indeed reach zero speed only as $t\to\infty$. There is no contradiction with the real world: a real body is subject not only to this resistance force but to further forces as well, with a dependence on the size and shape of the body. - Instead of zero and infinity, if you set "any" minimum velocity that you approximate as zero for practical considerations, then it would reach that velocity in a finite time. The problem with zero is that beyond a certain point Brownian motion will be significant. – Prathyush Oct 14 '12 at 9:12
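A quick numerical check of the dichotomy, integrating $v' = -v^\alpha$ with a crude Euler scheme (the proportionality constant is fixed to $1$ as in the answer above; the step size, tolerance, and horizon are arbitrary choices):

```python
def stop_time(alpha, v0=1.0, dt=1e-3, t_max=100.0, tol=1e-9):
    """Integrate v' = -v**alpha until v falls below tol or time runs out."""
    v, t = v0, 0.0
    while v > tol and t < t_max:
        v = max(v - dt * v**alpha, 0.0)
        t += dt
    return t

for alpha in (0.5, 1.0, 1.5):
    print(alpha, stop_time(alpha))
# alpha = 0.5 stops near the predicted T = v0**(1-alpha)/(1-alpha) = 2;
# for alpha >= 1 the speed never vanishes: it merely decays below the
# tolerance (alpha = 1, around t ~ 20.7) or is still above it at t_max
# (alpha = 1.5).
```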
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951604962348938, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/56237/a-thought-on-definition-of-momentum
# A thought on definition of momentum Well, this is a simple, basic and I think even silly doubt. The first time I saw the definition of momentum as $p = mv$ I started to think why this is a good definition. So I've read the beginning of Newton's Principia where he said that momentum is a measure of quantity of motion. Well, this started to make sense: if there's more mass, there's more matter and so there's more movement going on. If also there's more velocity, the movement is greater. So it makes real sense that quantity of motion should be proportional both to mass and velocity. The one thing I've failed to grasp is: why should the proportionality constant be $1$? What's the reasoning behind setting $p = mv$ instead of $p = kmv$ for some constant $k$? Thanks in advance. And really sorry if this doubt is too silly and basic to be posted here. - – kleingordon Mar 8 at 2:36 There are no silly doubts. Yours is perfectly good, Newton would have also asked that. – Asphir Dom Mar 8 at 10:28 ## 2 Answers It's a good question! Physics is all about linking your intuition to science, so it's good that you're thinking about this. The statement that momentum should be proportional to mass and velocity is intuition. Like elfmotat says, you can choose your constant as long as it's consistent with units, I guess. If you want another reason, consider the time-derivative of the equation $p=mv$. It's Newton's Second Law! $F=ma$. In other words, $F=\frac{dp}{dt}$, which is a great reason for $k$ to be $1$. - If you're using consistent units, for example kilograms for mass, meters/second for velocity, and kilogram*meters/second for momentum, then for simplicity it is natural to choose $k=1$ as our definition of momentum. You can choose other constants, but it makes calculations unnecessarily sloppy. Any nonzero choice of constant $k$ is valid because it preserves important properties like momentum conservation. If, on the other hand, we're working with inconsistent units then a non-unity proportionality constant is required! For example, if we decided to measure momentum in units of kilogram*meters/second, mass in milligrams, and velocity in meters/second, then there's a required factor of $k=10^{-6}$. - Hi @elfmotat, so the reason why the constant of proportionality is $k = 1$ is the system of units that has been chosen? In other words, we decide to measure quantities in units such that $k = 1$? Thanks for your answer. – user1620696 Mar 8 at 17:13
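A one-line arithmetic check of the unit-mismatch factor mentioned in the second answer (the specific mass and velocity values here are arbitrary):

```python
mass_mg = 2.0e6            # 2 kg expressed in milligrams
velocity = 3.0             # m/s
p_SI = (mass_mg * 1e-6) * velocity    # convert mg -> kg first: 6.0 kg*m/s

k = 1e-6                   # the factor required by the mismatched units
assert abs(k * mass_mg * velocity - p_SI) < 1e-12
print(p_SI)                # 6.0 -- with consistent units we can keep k = 1
```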
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9448394179344177, "perplexity_flag": "middle"}
http://cms.math.ca/Reunions/ete13/abs/nii
CMS Summer Meeting 2013 Dalhousie University, June 4 - 7, 2013 Nonlocal Interactions in Social, Physical and Biological Sciences Org: Theodore Kolokolnikov (Dalhousie) and Michael Ward (UBC) TUM CHATURAPRUEK, Harvey Mudd College Crime Modeling with L\'evy Flights  We extend the Short \emph{et al.} model of crime to incorporate biased L\'evy Flights for the criminal's motion, with step-sizes distributed according to a power-law distribution. Such motion is considered to be more realistic than the biased diffusion that was originally proposed. This generalization leads to fractional Laplacians. We then investigate the effect of introducing the L\'evy Flights on the formation of hot-spots using linear stability and full numerics. Joint work with Jonah Breslau, Daniel Yazdi, Theodore Kolokolnikov, and Scott McCalla. YUXIN CHEN, Dalhousie University Equilibrium solutions to an aggregation model subject to exogenous and Newtonian endogenous forces in 2D  We study the equilibrium solutions to an aggregation system consisting of $N$ single-species particles and one alien particle in two-dimensional space. Starting with a discrete aggregation model subject to pairwise endogenous and exogenous forces in $2$D, we derive the continuum model by introducing the continuous particle density. Throughout the study, we take the pairwise endogenous force to be Newtonian. We show that three sets of equilibrium solutions occur under different exogenous forces exerted by the alien particle. Additionally, we analyze the stability of the annulus-like equilibrium solution with uniform density by linear perturbation off the boundaries of the domain. YANGHONG HUANG, Imperial College London Self-propelled particles with quasi-Morse potential  Rich patterns are observed in self-propelled particle systems with a Morse-like interaction potential $U(x)= C_ae^{-|x|/\ell_a}-C_re^{-|x|/\ell_r}$. However, the explicit forms of the observed patterns like flocks and mills are not available in higher dimensions. In this talk, the potential is replaced by a quasi-Morse potential [Carrillo \emph{et al.}, 2013, Physica D], which consists of the difference of two rescaled Bessel potentials. A few observed patterns can be obtained by solving some algebraic equations, leading to an extensive parametric study of the underlying particle system. The stability of certain patterns is also discussed. DAVID IRON, Dalhousie Lattice patterns in the periodic Gierer-Meinhardt system  We consider the Gierer-Meinhardt equations posed on the plane. The stability of spike solutions in which the spike centres line up on a lattice will depend on the value of the regular part of the quasi-periodic Green's function on that lattice. The Green's function may be represented as an infinite sum, but it converges very slowly. We will show how to evaluate this Green's function quickly and determine the stability of a given lattice formation. This is ongoing work with Dr. John Rumsey and Dr. Michael Ward. THEODORE KOLOKOLNIKOV, Dalhousie Vortex swarms  We investigate the dynamics of $N$ point vortices in the plane, in the limit of large $N$. We consider {\em relative equilibria}, which are rigidly rotating lattice-like configurations of vortices. These configurations were observed in several recent experiments [Durkin and Fajans, Phys. Fluids (2000) 12, 289–293; Grzybowski {\em et al.}, PRE (2001) 64, 011603].
We show that these solutions and their stability are fully characterized via a related {\em aggregation model} which was recently investigated in the context of biological swarms [Fetecau {\em et al.}, Nonlinearity (2011) 2681; Bertozzi {\em et al.}, M3AS (2011)]. By utilizing this connection, we give explicit analytic formulae for many of the configurations that have been observed experimentally. These include configurations of vortices of equal strength; the $N+1$ configurations of $N$ vortices of equal strength and one vortex of much higher strength; and more generally, $N+K$ configurations. We also give examples of configurations that have not been studied experimentally, including $N+2$ configurations where $N$ vortices aggregate inside an ellipse. CHRIS LEVY, Dalhousie University Dynamics and Stability of a 3D Model of Cell Signal Transduction with Delay  We consider a 3D model of cell signal transduction with delay. In this model, the deactivation of signalling proteins occurs throughout the cytosol and activation is localized to specific sites in the cell. We use matched asymptotic expansions to construct the dynamic solutions of signalling protein concentrations. The result of the asymptotic analysis is a system of delay differential equations (DDEs). This reduced DDE system is compared to numerical simulations of the full 3D system with delay. There are delay values which give rise to sustained oscillations. We implement the method of constrained coordinates numerically to improve the asymptotic results in this case. ALAN LINDSAY, Heriot-Watt University The Stability and Evolution of Curved Domains Arising From One Dimensional Localized Patterns  In many pattern-forming systems, narrow two-dimensional domains can arise whose cross sections are roughly one-dimensional localized solutions. This talk will present an investigation of this phenomenon for the variational Swift-Hohenberg equation. Stability of straight-line solutions is analyzed, leading to criteria for either curve buckling or curve disintegration. A high-order matched asymptotic expansion reveals a two-term expression for the geometric motion of curved domains which includes both elastic and surface diffusion-type regularizations of curve motion. This leads to novel equilibrium curves and space-filling pattern proliferation. A key ingredient in the generation of the labyrinthine patterns formed is the non-local interaction of the curved domain with its distal segments. Numerical tests are used to confirm and illustrate these phenomena. ALAN MACKEY, UCLA Two-species Particle Aggregation and Co-dimension One Solutions  Systems of pairwise-interacting particles model a cornucopia of physical systems, from insect swarms and bacterial colonies to nanoparticle self-assembly. We study a continuum model for two-species particle interaction in $\mathbb{R}^2$, and apply linear stability analysis of concentric ring steady states to characterize the steady-state patterns and instabilities which form. Conditions for linear well-posedness are determined and these results are compared to simulations of the discrete particle dynamics, showing the predictive power of the linear theory. XIAOFENG REN, George Washington University A double bubble solution in a ternary system with inhibitory long range interaction  We consider a ternary system of three constituents, a model motivated by the triblock copolymer theory.
The free energy of the system consists of two parts: an interfacial energy coming from the boundaries separating the three constituents, and a longer-range interaction energy that functions as an inhibitor to limit micro-domain growth. We show that a perturbed double bubble exists as a stable solution of the system. Each bubble is occupied by one constituent. The third constituent fills the complement of the double bubble. Two techniques are developed. First, one defines restricted classes of perturbed double bubbles. Each perturbed double bubble in a restricted class is obtained from a standard double bubble by a special perturbation. The second technique is the use of the so-called internal variables. The advantage of the internal variables is that they are only subject to linear constraints, and perturbed double bubbles in each restricted class represented by internal variables are elements of a Hilbert space. A local minimizer of the free energy in each restricted class is found as a fixed point of a nonlinear equation. This perturbed double bubble satisfies three of the four equations for critical points of the free energy. The unsolved equation is the 120-degree angle condition at triple junction points. Performing another minimization among the local minimizers from all restricted classes, a minimum of minimizers emerges and solves all the equations for critical points. W ABOU SALEM, University of Saskatchewan Semi-Relativistic Schroedinger-Poisson System of Equations with Long-Range Interactions  The evolution of the density matrix of interacting many-body quantum particles in the mean-field limit is given by the Hartree-von Neumann equation. Using the properties of the density matrix, this is equivalent to an infinite system of coupled nonlinear PDEs, the Schroedinger-Poisson system of equations. The semi-relativistic Schroedinger-Poisson system of equations describes the mean-field dynamics of interacting quantum particles with very high velocities, such as in plasmas. I discuss the derivation of the semi-relativistic Schroedinger-Poisson system of equations with long-range interactions and its global well-posedness in appropriate Sobolev spaces. I also describe the asymptotics of the solution as the mass of the particles tends to zero and to infinity, respectively. JONATHAN SHERRATT, Heriot-Watt University A Nonlocal Model for Cancer Invasion  Adhesion of cells to one another and to their environment is an important regulator of many biological processes, but has proved difficult to incorporate into continuum mathematical models. I will describe a new approach to the mathematical modelling of adhesion in cell populations, based on an integro-partial differential equation for cell density, in which the integral represents the sensing by cells of their local environment. This enables an effective representation of cell-cell adhesion, as well as random cell movement and cell proliferation. I will show how this modelling approach can be applied to cancer growth. In this context, the model is capable of supporting both benign (non-invasive) and invasive growth, according to the relative strengths of cell-cell and cell-matrix adhesion. I will go on to describe the use of the model to investigate the criticality of matrix heterogeneity in shaping invasion, making the testable prediction that a highly heterogeneous extracellular matrix can result in a "fingering" of the tumour front, which is a hallmark of invasive cancers.
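Several of the abstracts above (Huang, Kolokolnikov, Chen, Mackey) build on pairwise interaction potentials of Morse type. As a purely illustrative sketch, with made-up parameters and a sign convention chosen so that short-range repulsion is positive and long-range attraction is negative, one can locate the preferred pairwise spacing numerically:

```python
import numpy as np

# Illustrative Morse-type pair potential; parameters are arbitrary choices,
# not values taken from any of the talks above.
C_r, l_r = 1.5, 0.5     # short-range repulsion amplitude and range
C_a, l_a = 1.0, 2.0     # long-range attraction amplitude and range

x = np.linspace(0.01, 10.0, 2000)
U = C_r * np.exp(-x / l_r) - C_a * np.exp(-x / l_a)

i = np.argmin(U)
print(f"pairwise energy minimum near x = {x[i]:.2f} (U = {U[i]:.3f})")
# ~ x = 1.19: two particles interacting through this potential prefer a
# finite separation, the basic mechanism behind the patterns described above.
```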
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8850414156913757, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/87705/connected-sum-of-surfaces/87943
## Connected sum of surfaces I'm looking for a detailed reference about connected sums. I'd like it to contain a proof that a connected sum of connected surfaces is independent - up to homeomorphism - of the various choices involved in the process. There are several books in which it is stated, but I cannot find one in which it is proved. In particular, I do not see a simple argument implying that changing the orientation of the circle in the glueing has no influence on the homeomorphism class of the surface. I'm aware that for compact surfaces, this point more or less follows from the classification, but then what about non-compact ones? Any suggestion? - What do you mean by "non-compact"? Finite type with punctures, or something more general? – Igor Rivin Feb 6 2012 at 19:54 Maybe I should have said that I take the word "surface" in the topological sense, i.e. a topological space that is separated (i.e. Hausdorff) and locally homeomorphic to $\mathbb{R}^2$. Thus, by non-compact, I simply mean a surface in the above sense that is not compact as a topological space. There is a well-known classification theorem for compact surfaces (they have a finite number of connected components and these are all connected sums of (a sphere and) tori and projective spaces). – Baptiste Calmès Feb 6 2012 at 21:08 True or false: every orientable surface has an orientation-reversing homeomorphism? – Tom Goodwillie Feb 6 2012 at 22:38 Baptiste -- a small remark: did you in fact mean connected sum of connected surfaces? – algori Feb 6 2012 at 23:34 This follows from the classification of non-compact surfaces by Richards: ams.org/journals/tran/1963-106-02/… – Agol Feb 7 2012 at 3:55 ## 2 Answers For surfaces, smooth and topological classification is the same, so let me argue in the smooth category. In Bröcker and Jänich's book "Einführung in die Differentialtopologie" (I have seen references to an English translation), the following result is shown in full detail. Let $M_0$ and $M_1$ be two connected smooth manifolds and $D_i \subset M_i$ be two embedded discs. The diffeomorphism type of $M_0 \sharp M_1$ (formed along the two discs) only depends on the orientation-behaviour of the discs $D_i$ (if $M_i$ is orientable) and it does not depend on the choice of the discs if $M_i$ is nonorientable. Moreover, if both manifolds admit orientation-reversing self-diffeomorphisms, then the diffeomorphism type of $M_0 \sharp M_1$ does not depend on any choices. As indicated by Tom's comment, the question is whether any surface admits an orientation-reversing diffeomorphism. Theorem: ''Every smooth connected orientable surface $M$ admits an orientation-reversing involution.'' Proof: the closed case is easy (by a picture), so let us assume that $M$ is open. Take a handlebody decomposition of $M$; we can arrange it so that $M$ has only one $0$-handle and no $2$-handles. Now $M$ is the union $D^2=M_0 \subset M_1 \subset M_2 \subset \ldots$. $M_{n+1}$ is obtained from $M_{n}$ by gluing in a copy of $D^1 \times D^1$ along an orientation-preserving embedding $S^0 \times D^1 \to \partial M_{n}$. Now construct the orientation-reversing diffeomorphism $f$ inductively. On $M_0 =D^2$, take a reflection.
Assume that we have constructed $f_n:M_n \to M_n$ with the extra property that $f_n$ induces the identity on $\pi_0 (\partial M_n)$ (i.e., $f_n$ does not permute the components of the boundary) and each boundary component has a parametrization by $S^1 \subset R^2$ so that $f_n$ is given by $(x,y) \mapsto (x,-y)$ in these coordinates. There are two cases to distinguish in the induction step. 1st case: The attaching embedding $S^0 \times D^1 \to \partial M_n$ takes the two copies of the interval into two different components. We can find an isotopic embedding $S^0 \times D^1 \to \partial M_n$ that is $Z/2$-equivariant with respect to the already constructed $f_n$ and the self-map $(x,y) \mapsto (x,-y)$ of $D^1 \times D^1$. This is possible by the form of $f_n$ on the boundary. The involution extends to an involution $f_{n+1}$ of $M_{n+1}$, and because the two components where the embedding lands are joined together, the new $f_{n+1}$ does not permute the boundary components. 2nd case: The attaching embedding takes both components of $S^0 \times D^1$ into the same component of $\partial M_{n}$. This time we isotope the attaching embedding so that it becomes equivariant with respect to $f_n$ and $(x,y) \mapsto (-x,y)$. This is possible by taking a non-fixed point of $f_n$ on the boundary. The component is split into two parts, but the way I have arranged the gluing guarantees that the new $f_{n+1}$ still does not permute the components. Note that a straightforward adaptation of that argument also settles the closed case. - Thanks for this answer. However, although I did not mention it in the question, I would like to avoid any argument using a differentiable structure. Somehow, the spirit of my question is: "How come at the beginning of various introductory textbooks on algebraic topology (ex: Massey), this topological fact is stated as true and intuitive, although there seems to be no proof avoiding some kind of classification result, itself quite non-trivial in the non-compact case? Am I missing some elementary argument, or is there really a lot of work swept under the carpet at that point?". – Baptiste Calmès Feb 7 2012 at 10:48 I would say that there is a lot of work swept under the carpet. Unless the involution exists, things can go completely wrong. For example, $CP^2 \sharp CP^2$ is not even homotopy equivalent to the manifold $CP^2 \sharp \bar{CP^2}$ that you obtain if one of the embedded discs is negatively oriented. So the surface case is special. If you make a picture of my construction, you will find it ''intuitive''. P.S: I really think that the smooth structure makes everything easier. – Johannes Ebert Feb 7 2012 at 14:57 A relatively clean and intuitive proof is given in Kosinski's "Differential Manifolds," which works in the topological setting and essentially boils down to the following: If $M$ is path-connected and $i_1,i_2:D\rightarrow M$ are isotopic embeddings (smooth or topological), then by the so-called "Cerf-Palais disk theorem" (a consequence of the Isotopy Extension Property) there is an ambient isotopy $\Phi:M\times I\rightarrow M$ (smooth or topological) such that for all $t$, $\Phi_t$ is the identity outside a contractible compact set, and $\Phi(i_1(x),1)=i_2(x)$.
Intuitively, $\Phi$ translates the image of $i_1$ to the image of $i_2$, and tries hard not to affect anything else. So if $M, i_1, i_2$ are as above, $N$ is another topological (or smooth) manifold and $i:D\rightarrow N$ is another embedding, then let $M\#_1N$ be formed by attaching $N\setminus i(0)$ to $M\setminus i_1(0)$, and form $M\#_2N$ using $M\setminus i_2(0)$. Since these objects are actually pushouts, we can define a homeo(diffeo)morphism in pieces: If $y\in N\setminus i(0)$, send $y$ to itself; if $y\in M\setminus i_1(0)$, send $y$ to $\Phi(y,1)$. These will assemble to give the required equivalence from $M\#_1N$ to $M\#_2N$. (Then you could repeat the argument on the $N$ side, or just say it follows from commutativity.) The fact that the connected sum is associative and commutative follows naturally from the fact that it is actually a pushout (if you're careful, it does make a pushout in the smooth category). Then to show that it doesn't depend on the attaching disk, I think you need something equivalent to the "Cerf-Palais" theorem I mentioned. Edit: because of what was mentioned in the comments above, it was necessary for me to assume that the embeddings were isotopic to begin with. - Whoops, I may have made a huge gaffe: does the Isotopy Extension Property apply to topological manifolds? I know the original question is just about surfaces, but I'm wondering more generally. – William Feb 9 2012 at 0:32 Thanks for your answer. In my mind, though, the important case is the one where the maps are not isotopic (i.e. the surface is orientable, and the disks are glued in opposite ways). – Baptiste Calmès Feb 9 2012 at 9:33 Ah, well then in that case it is as Johannes said: there's no guarantee that the connected sums are even homotopy equivalent without some restriction on the embeddings or on the manifolds. – William Feb 9 2012 at 22:29 @William. I am talking about surfaces. I know about the classical counterexample in dimension 4 that Johannes Ebert was mentioning (captured by homology, by the way). However, it has to be true for surfaces, even non-compact ones, in view of the classification result mentioned by Agol. But this is definitely not a simple argument, and given no one has come up with an obvious trick, it certainly isn't intuitive before classification that connected sums of oriented surfaces are homeomorphic, whatever glueing you use. – Baptiste Calmès Feb 11 2012 at 12:19 It seems like it comes down to the case where you are embedding the disk into an oriented surface with embeddings which have different orientation parity (reversing/preserving), because in every other case you get an isotopy of the embeddings. Maybe there is an elementary cut-and-paste argument you could give for this case. After all, being able to prove with cut-and-paste techniques is what makes the theory of surfaces so elementary. – William Feb 11 2012 at 23:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 73, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395709037780762, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=66000
volume of the solid The region bounded by $$x = 1- y^4, \quad x=0$$ is rotated about the line $$x = 3$$. The volume of the resulting solid is ...... here's what I did: $$x = 1- y^4$$ in terms of y => $$y = (1-x)^{1/4}$$ my integral: $$\int_0^{1} 2\pi(3-x)(1-x)^{1/4}\,dx$$ anyone know what I have done incorrectly? Quote by ProBasket The region bounded by $$x = 1- y^4, \quad x=0$$ is rotated about the line $$x = 3$$. The volume of the resulting solid is ...... here's what I did: $$x = 1- y^4$$ in terms of y => $$y = (1-x)^{1/4}$$ my integral: $$\int_0^{1} 2\pi(3-x)(1-x)^{1/4}\,dx$$ anyone know what I have done incorrectly? This is a hollow body: you have to subtract the volume of the body with radius r = 3 - x(y) in the picture from that of the cylinder of radius R = 3. And integrate with respect to y, as the axis of rotation is parallel to the y axis. $$V=\pi \int_{-1}^1 {(3^2-r^2)dy}$$ ehild Quote by ProBasket The region bounded by $$x = 1- y^4, \quad x=0$$ is rotated about the line $$x = 3$$. The volume of the resulting solid is ...... here's what I did: $$x = 1- y^4$$ in terms of y => $$y = (1-x)^{1/4}$$ my integral: $$\int_0^{1} 2\pi(3-x)(1-x)^{1/4}\,dx$$ anyone know what I have done incorrectly? Normally, either the shell method or the washer method will work. In this case, you're going to have some problems with $$(1-x)^{\frac{1}{4}}$$. It's not a function, since an even root has both a positive and a negative root. For example: $$\sqrt {1-.84}= \pm .4$$ Without the bottom half of the graph, you can't find the volume. You could compensate just by finding the volume generated by the region bounded by $$y=(1-x)^{1/4}$$, $$x=0$$, $$y=0$$, and then multiplying by two. It would be easier just to use the washer method, as ehild suggested.
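A quick symbolic check of the washer-method integral suggested above, with inner radius $r = 3 - x(y) = 2 + y^4$ and outer radius $R = 3$:

```python
import sympy as sp

y = sp.symbols('y')
x = 1 - y**4                 # right-hand boundary of the region, -1 <= y <= 1

R_outer = 3                  # distance from the axis x = 3 to the line x = 0
r_inner = 3 - x              # distance from the axis to the curve, = 2 + y**4

V = sp.pi * sp.integrate(R_outer**2 - r_inner**2, (y, -1, 1))
print(V, float(V))           # 368*pi/45, approximately 25.69
```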
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 18, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9104442596435547, "perplexity_flag": "middle"}
http://www.cfd-online.com/W/index.php?title=Laplacian&diff=7288&oldid=1724
# Laplacian
The n-dimensional Laplacian operator in Cartesian coordinates is defined by

$\Delta u = \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2}$

It is an important differential operator which occurs in many equations of mathematical physics and is usually associated with dissipative effects (except in the case of the wave equation). Some of the important equations are

• Laplace equation: $\Delta u = 0$
• Poisson equation: $-\Delta u = f$
• Heat equation: $\frac{\partial u}{\partial t} - \Delta u = 0$
• Wave equation: $\frac{\partial^2 u}{\partial t^2} - \Delta u = 0$

Solutions of these equations are very smooth and in most cases are infinitely differentiable (when the associated data of the problem are sufficiently smooth). Folland (see reference below) explains the ubiquitous appearance of the Laplacian. Why is it so ubiquitous? The answer, which we shall prove, is that it commutes with translations and rotations and generates the ring of all differential operators with this property. Hence, the Laplacian is likely to turn up in the description of any physical process whose underlying physics is homogeneous (independent of position) and isotropic (independent of direction). Moreover it can be shown that any linear operator which commutes with translations and rotations must be a polynomial in $\Delta$, i.e., it must be of the form $\sum_j a_j \Delta^j$ where the $a_j$ are constants (see Folland).
The Laplacian operator is invariant under coordinate translation and rotation. The Laplace operator is also denoted as $\nabla^2$ since it is the divergence of the gradient operator $\Delta = \nabla^2 = \nabla \cdot \nabla$ ## Laplacian in cylindrical coordinates If $(x,r,\phi)$ are cylindrical coordinates, then the Laplacian of a scalar field variable $u$ is $\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial u}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 u}{\partial \phi^2}$ ## Laplacian in spherical coordinates If $(r,\theta,\phi)$ are spherical coordinates, then the Laplacian of a scalar field variable $u$ is $\Delta u = \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial u}{\partial r} \right) + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta} \left( \sin\theta \frac{\partial u}{\partial \theta} \right) + \frac{1}{r^2 \sin^2\theta} \frac{\partial^2 u}{\partial \phi^2}$ ## References • Gerald B. Folland (1995), Introduction to partial differential equations, Princeton University Press.
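As a minimal numerical sketch of the Cartesian definition above, here is the standard 5-point finite-difference Laplacian in 2D; the grid size and test function are arbitrary choices (for a quadratic test function the second differences are exact):

```python
import numpy as np

# 5-point Laplacian of u(x, y) = x**2 + y**2, which should equal 2 + 2 = 4.
n, L = 101, 1.0
h = L / (n - 1)
x = np.linspace(0.0, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = X**2 + Y**2

lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
       - 4.0 * u[1:-1, 1:-1]) / h**2

print(np.allclose(lap, 4.0))   # True on all interior points
```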
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8670505881309509, "perplexity_flag": "head"}
http://mathoverflow.net/questions/110746/distribution-of-sliding-correlation
## Distribution of sliding correlation 1. Suppose $w$ is a vector containing $L$ i.i.d. normally distributed samples. 2. Suppose $v$ is a vector containing $N < L$ i.i.d. normally distributed samples. 3. Suppose the elements of $w$ and $v$ are independent. The correlation between $w$ and $v$ is now calculated using a sliding correlator (the shorter vector $v$ is slid across the longer vector $w$) using $R(j) = \sum_{n=1}^{N} v_n w_{n+j}$ Question What are the mean and variance of the correlation $R$ in terms of the means and variances of $w$ and $v$? (Experimental results indicate that $R$ also has an approximately normal distribution.) Simpler case In the case where $v$ and $w$ have equal lengths ($L=N$) and the correlation is calculated using the inner (dot) product (no sliding) $P = \sum_{n=1}^N v_n w_n$, it can easily be shown that $P$ is approximately normally distributed (for large $N$, by the central limit theorem) with mean $\mu_P=N\mu_v \mu_w$ and variance $\sigma_P^2 = N(\sigma_v^2 \sigma_w^2 + \sigma_v^2\mu_w^2+\sigma_w^2\mu_v^2)$, due to independence. The sliding correlator, however, introduces correlation between the samples of $R$, and the above expressions for mean and variance no longer tell the whole story. How does one go about expressing the statistics for $R$? -
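A Monte Carlo sketch of the situation (all parameter values below are arbitrary illustrative choices): for each fixed lag $j$, $R(j)$ still has the dot-product statistics, and the new feature is the correlation between lags, since $R(j)$ and $R(j+1)$ reuse $N-1$ of the same samples of $w$.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, trials = 64, 16, 100_000
mu_v, sig_v = 0.3, 1.0
mu_w, sig_w = -0.2, 2.0

v = rng.normal(mu_v, sig_v, size=(trials, N))
w = rng.normal(mu_w, sig_w, size=(trials, L))

j = 5                                     # one fixed lag
R_j = (v * w[:, j:j + N]).sum(axis=1)

print(R_j.mean(), N * mu_v * mu_w)        # per-lag mean: N * mu_v * mu_w
var_pred = N * (sig_v**2 * sig_w**2 + sig_v**2 * mu_w**2 + sig_w**2 * mu_v**2)
print(R_j.var(), var_pred)                # per-lag variance matches the P case

# Across lags the samples of R are correlated (clearly nonzero whenever the
# means are nonzero):
R0 = (v * w[:, 0:N]).sum(axis=1)
R1 = (v * w[:, 1:N + 1]).sum(axis=1)
print(np.corrcoef(R0, R1)[0, 1])          # ~0.09 with these parameters
```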
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9210484027862549, "perplexity_flag": "head"}
http://mathoverflow.net/questions/10311/infinity-de-rham-quasi-isomorphism/69342
## Infinity de Rham quasi-isomorphism This question is similar to http://mathoverflow.net/questions/9457/do-chains-and-cochains-know-the-same-thing-about-the-manifold in the sense that both deal with a natural "comparison" quasi-isomorphism that does not preserve the ring structure. Let $M$ be a smooth manifold. There is a natural comparison map $Comp$ from the differential forms on $M$ to the smooth singular cochains of $M$ (i.e. the linear dual of the vector space spanned by smooth singular simplices). It is defined as follows: take a form $\omega$ of degree $p$ and set $Comp(\omega)$ to be the cochain $\sigma\mapsto \int_\triangle \sigma^*\omega$ where $\triangle$ is the standard $p$-dimensional simplex and $\sigma:\triangle\to M$ is a smooth singular simplex. $Comp$ is a map of complexes (Stokes' theorem) and moreover, a quasi-isomorphism (the de Rham theorem). But as simple examples show, it does not preserve the ring structure. However it is natural to ask whether the ring structures, up to quasi-isomorphism, of the differential forms and of the cochains contain the same information about $M$. This translates into the following questions. 1. Can $Comp$ be completed to a morphism of $A_\infty$-algebras? 2. If the answer to 1. is positive (it presumably is), what about the $E_\infty$ case? These questions also have natural rational versions. Namely, we can take an arbitrary polyhedron $X$ instead of $M$ and consider Sullivan's $\mathbf{Q}$-polynomial forms. There is a comparison quasi-isomorphism similar to the one above that will go from the $\mathbf{Q}$-polynomial forms of $X$ to the piecewise linear $\mathbf{Q}$-cochains. Can it be completed to a map of $A_\infty$ or $E_\infty$ algebras? - ## 2 Answers This theorem was proven in 1977 in "V. K. A. M. Gugenheim, On Chen's iterated integrals, Illinois J. Math. Volume 21, Issue 3 (1977), 703–715." - Yes. Here is one way to see it: before passing to dg-algebras, let's look at cosimplicial algebras and then later apply the normalized cochain (Moore) complex functor. Work in a smooth (oo,1)-topos, modeled by simplicial presheaves on a site of smooth loci. In there, we have for every manifold $X$ • the singular simplicial complex $X^{\Delta_{Diff}^\bullet}$ of smooth singular simplices on $X$, • the infinitesimal singular simplicial complex $X^{(\Delta^\bullet_{inf})}$ of infinitesimal singular simplices. There is a canonical injection $X^{(\Delta^\bullet_{inf})} \to X^{\Delta^\bullet_{Diff}}$. We may take degreewise (internally, i.e. smoothly) functions on these, to get the cosimplicial algebras $[X^{\Delta^\bullet_{inf}},R]$ and $[X^{\Delta^\bullet_{Diff}},R]$. The normalized cochain complex of chains on $[X^{\Delta^\bullet_{Diff}},R]$ is the complex of smooth singular cochains. The normalized cochain complex of chains on $[X^{\Delta^\bullet_{inf}},R]$ turns out, by some propositions of Anders Kock, to be the de Rham algebra, as discussed a bit at differential forms in synthetic differential geometry.
Therefore under the ordinary Dold-Kan correspondence we have a canonical morphism $$N^\bullet([X^{\Delta^\bullet_{Diff}},R] \to [X^{\Delta^\bullet_{inf}},R]) = C^\bullet_{smooth}(X) \to \Omega_{dR}^\bullet(X)$$ which is an equivalence of cochain complexes. But there is a refinement of the Dold-Kan correspondence, the monoidal Dold-Kan correspondence. And this says that this functor is also a weak equivalence of oo-monoid objects. - Thanks, Urs. Do I understand correctly that your answer concerns the $A_\infty$ case? If so, what about the $E_\infty$ case? I think this would translate as whether or not the Moore complex functor is symmetric lax monoidal. Is this translation correct? – algori Jan 6 2010 at 0:23
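For a concrete instance of the "simple examples" of non-multiplicativity alluded to in the question, here is a small sympy sketch on the affine $2$-simplex in $\mathbb{R}^2$ (the choice of simplex and of the forms $dx$, $dy$ is mine, purely for illustration), comparing $Comp(dx\wedge dy)$ with the Alexander-Whitney cup product of $Comp(dx)$ and $Comp(dy)$:

```python
import sympy as sp

s, t, u = sp.symbols('s t u')

# sigma: the affine 2-simplex in R^2 with vertices v0=(0,0), v1=(1,0), v2=(0,1),
# parametrized by sigma(s,t) = (s,t) on {s >= 0, t >= 0, s + t <= 1}.

# Comp(dx ^ dy)(sigma) = integral of dx ^ dy over sigma = signed area.
wedge = sp.integrate(sp.integrate(1, (t, 0, 1 - s)), (s, 0, 1))   # 1/2

# Alexander-Whitney cup product of the 1-cochains a = Comp(dx), b = Comp(dy):
# (a cup b)(sigma) = a(front edge v0 -> v1) * b(back edge v1 -> v2).
x_front = u          # x-coordinate along v0 -> v1, u in [0, 1]
y_back = u           # y-coordinate along v1 -> v2, u in [0, 1]
a_front = sp.integrate(sp.diff(x_front, u), (u, 0, 1))   # integral of dx = 1
b_back = sp.integrate(sp.diff(y_back, u), (u, 0, 1))     # integral of dy = 1

print(wedge, a_front * b_back)   # 1/2 versus 1: Comp is not a ring map on the nose
```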
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.885654091835022, "perplexity_flag": "head"}
http://mathoverflow.net/questions/42117?sort=newest
## Discrete subspaces of Hausdorff spaces ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) does every infinite hausdorff space contains a countable infinite discrete subspace? - This sounds like homework. What have you tried? Also, unless it is rewritten, this question might be considered too localised and be closed soon. Providing motivation which is of interest to research mathematicians may prevent closure. By the way, closure may be a useful hint. Gerhard "Obscure by Accident, Not Intention" Paseman, 2010.10.13 – Gerhard Paseman Oct 14 2010 at 5:59 ## 3 Answers In a more general light: folklore theorem: Every infinite topological space contains a homeomorphic copy of one (or more) of the following 5 spaces: 1. $\mathbf{N}$ in the indiscrete topology (only $\mathbf{N}$ and $\emptyset$ are open). 2. $\mathbf{N}$ in the co-finite topology (only $\mathbf{N}$ and all finite sets are closed). 3. $\mathbf{N}$ in the upper topology (the empty set and all sets $U(k) = \{ n \in \mathbf{N} : n \ge k \}$, $k \in \mathbf{N}$, are open). 4. $\mathbf{N}$ in the lower topology ($\mathbf{N}$, $\emptyset$, and all sets $L(k) = \{ n \in \mathbf{N} : n \le k \}$, $k \in \mathbf{N}$, are open). 5. $\mathbf{N}$ in the discrete topology (all subsets are open). As each of the spaces has the property that every infinite subspace of it is homeomorphic to the whole space, this list is minimal. And spaces 1-4 are not Hausdorff, which implies what you need, as being Hausdorff is hereditary. The nicest proof of this I know uses Ramsey's theorem (off hand I do not know a reference, who does?) using a partition of the triples or pairs of X, IIRC. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Not sure whether this is "research-level", but: first show that any infinite Hausdorff space has a proper infinite closed subset. Then proceed by induction. - Yes. Lemma 1 in http://www.emis.de/journals/HOA/IJMMS/6/197.pdf -
http://math.stackexchange.com/questions/272619/smallest-eigenspace-dimension-symmetric/272634
# Smallest eigenspace dimension for a symmetric matrix Hi, would you help me with the following: Let $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ be a symmetric matrix satisfying: $a_{1i} \neq 0$; the sum of each row equals $0$; and each diagonal element is the sum of the absolute values of the other entries in the row. Determine the dimension of the eigenspace corresponding to the smallest eigenvalue of $A$. Thanks a lot!! - Just a clarification, what do you mean by small? In terms of absolute value, or relative? I.e. would you consider 0 the smallest eigenvalue? – Calvin Lin Jan 8 at 4:23 The eigenvalues are nonnegative anyway by Gershgorin. – Salih Ucan Jan 8 at 4:25 Interesting, I didn't know that theorem. Well, it's clear that 0 is an eigenvalue, so you could phrase it as asking for the dimension of the kernel. – Calvin Lin Jan 8 at 4:30 ## 2 Answers First, let's write down something that everyone knows. By the properties of $A$, all diagonal entries of $A$ are positive and all off-diagonal entries of $A$ are nonpositive. Hence by the Gershgorin disc theorem and the symmetry of $A$, all eigenvalues of $A$ are nonnegative. In particular, $0$ is the smallest eigenvalue of $A$ and $(1,1,\ldots,1)^T$ is a corresponding eigenvector. Now let $e$ be the $(n-1)$-vector containing all ones. Write $A=\begin{pmatrix}a&b^T\\b&C\end{pmatrix}$ where $a$ is the $(1,1)$-th entry of $A$. By the properties of $A$, we have $$\begin{pmatrix}1&e^T\\0&I_{n-1}\end{pmatrix} A\begin{pmatrix}1&0\\e&I_{n-1}\end{pmatrix} =\begin{pmatrix}1&e^T\\0&I_{n-1}\end{pmatrix} \begin{pmatrix}0&b^T\\0&C\end{pmatrix} =\begin{pmatrix}0&0\\0&C\end{pmatrix}=:B \ \text{ (say)}.$$ Since all off-diagonal entries in the first row/column of $A$ are strictly negative, $C$ is strictly diagonally dominant. Hence all eigenvalues of $C$ are positive, i.e. $0$ is a simple eigenvalue of $B$. Therefore, by Sylvester's law of inertia, $0$ is also a simple eigenvalue of $A$. Hence the dimension of the eigenspace corresponding to the smallest eigenvalue of $A$ is $1$. - +1 I think yours is much better, and deals with it cleanly. I wasn't aware of Gershgorin initially, and didn't think of Sylvester to determine the signs (though in hindsight that should have been the approach). – Calvin Lin Jan 8 at 16:38 @CalvinLin I'm not flattering you, but I really like your eigenvector idea. And based on your communications to the OP, I doubt if the OP could understand my solution (no offence to him/her). In contrast, yours is very easy to understand, and that's why I think your solution is nice (at least the idea is nice; the presentation, after all those edits, is getting a bit messy, though). – user1551 Jan 8 at 17:00 Haha, well, there's always 'the way to approach linear algebra', and 'the way students do brute force matrix manipulation'. I'm strongly in favor of the former, as it indicates an understanding of the subject. Am slowly remembering all my Linear Algebra theorems. I never saw Gershgorin before, and it's really cool, especially about the disjoint discs. – Calvin Lin Jan 8 at 17:03 Since the matrix is symmetric, all eigenvalues are real. By the Gershgorin circle theorem, each eigenvalue lies in the disc centered at $a_{ii}$ with radius $a_{ii}$; hence all eigenvalues are non-negative. The smallest eigenvalue is clearly $0$, since the all-ones vector is an eigenvector with eigenvalue 0. Since each diagonal element is the sum of the absolute values in its row, and each row sums to 0, the only positive entries are on the diagonal, and the rest of the entries are nonpositive.
In particular, all the $a_{1i}$, $i\neq 1$, are negative. (Added explanation: If any of the non-diagonal elements were positive, then by considering the sum of that row we would get a positive value, so the row would not sum to 0. The mathematical proof is: $a_{kk} = \sum_{i \neq k} |a_{ki}|$, so $0 = |a_{kk} + \sum_{i\neq k} a_{ki}| \geq a_{kk} - \sum_{i \neq k} |a_{ki}| = 0$ by the triangle inequality. Since equality holds, this implies that each $a_{ki}$ must have the opposite sign to $a_{kk}$ (or could be 0).) Edit: Since a real symmetric matrix has a complete (orthogonal) eigenbasis, to calculate the dimension of the generalized eigenspace it is sufficient to consider just eigenvectors. Now, consider any other eigenvector $v$ that isn't a multiple of the all-ones vector. If it isn't a multiple of a vector with $\pm 1$ entries, let $v_k$ be (one of) the entries with the largest absolute value; then there exists $j$ such that $|v_j| < |v_k|$. Considering the expansion along row $k$, we get $$\sum_{i\neq k} |a_{k i} v_i| \leq \sum_{i \neq k} |a_{ki}| \cdot |v_k| \leq |a_{k k} v_k|.$$ However, we cannot have equality hold throughout, since we have $|v_j| < |v_k|$. Hence, the $k$th entry of $Av$ is not 0, so the eigenvalue is not 0. If $v$ is a multiple of a vector with $\pm1$ entries, consider the expansion along the first row. We now use the condition that $a_{1i} < 0$, which shows that in order for this eigenvector to have eigenvalue 0, it must be a multiple of $(1, 1, 1, \ldots, 1)$, which we already considered. Hence, there is no other possible eigenvector with eigenvalue 0, so the dimension of this eigenspace is 1. You should read user1551's solution, as it has a better way of dealing with the eigenvalues than such a crude brute-force computation. - @AdamW Agreed. I've now edited it to account for $\pm 1$, and actually used the condition that $a_{1i} \neq 0$. Otherwise, the conclusion need not be true, e.g. if we had 2-by-2 symmetric blocks. – Calvin Lin Jan 8 at 14:17 @Calvin I don't understand why the off-diagonal elements must be nonpositive? – Salih Ucan Jan 8 at 15:10 @Yobo This follows directly from the condition that "the sum of each row equals 0 and each diagonal element is the sum of absolute values of other entries in the row." Let me add an explanation. – Calvin Lin Jan 8 at 15:12 @Yobo, the rest of the entries are nonpositive; I'm not sure why you edited that. For example, consider $\begin{pmatrix} 2 &-1 &-1\\ -1 & 2 & -1 \\ -1 & -1 & 2 \\ \end{pmatrix}$ – Calvin Lin Jan 8 at 15:20 Thanks Calvin, I get the nonpositivity now; I am continuing to read your solution. – Salih Ucan Jan 8 at 15:22
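As a quick numerical sanity check of the conclusion (a sketch of my own in Python/NumPy, not from the thread; the construction simply encodes the hypotheses together with the sign pattern derived in the answers):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Strictly upper-triangular, nonpositive entries; keep the whole first
# row nonzero so that a_{1i} != 0, and allow zeros elsewhere.
B = np.triu(-rng.uniform(0.1, 1.0, size=(n, n)), k=1)
B[0, 1:] = -rng.uniform(0.5, 1.0, size=n - 1)
B[2, 4] = 0.0                         # zeros are allowed away from row/column 1
A = B + B.T                           # symmetric, nonpositive off-diagonal
np.fill_diagonal(A, -A.sum(axis=1))   # diagonal = sum of |off-diagonal| entries

assert np.allclose(A.sum(axis=1), 0)  # each row sums to 0
w = np.linalg.eigvalsh(A)             # ascending eigenvalues
print(w[:3])  # expect w[0] ~ 0 and w[1] > 0: the zero eigenvalue is simple
```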
http://mathoverflow.net/revisions/91881/list
# Criteria for Involutive Subbundles Preliminaries: Let $M$ be a smooth manifold with tangent bundle $TM$. A vector subbundle $VM$ of $TM$ is called involutive if the section space $\Gamma(VM)$ of $VM$ is closed under the Lie bracket of $\Gamma(TM)$, or in other words if $[X,Y] \in \Gamma(VM)$ for all $X,Y \in \Gamma(VM)$. On the other hand, the Lie bracket of two vector fields can be expressed entirely by the flow transformations of the fields; that is, we have: $$[X,Y] = \frac{1}{2}\frac{\partial^2}{\partial t^2} \Big|_{t=0}(Fl^Y _{-t}\circ Fl^X _{-t}\circ Fl^Y _{t}\circ Fl^X _{t})$$ where $Fl^X$ and $Fl^Y$ are the flow transformations of $X$ and $Y$ respectively. Now the question is: can we decide (and if yes, how) whether or not a subbundle of $TM$ is involutive entirely in terms of flow transformations? This would be useful if we want to prove that the bracket is closed on a subbundle about which we know very little directly, but whose associated flow transformations we know much about.
http://mathoverflow.net/revisions/77382/list
No, it is not enough to consider convex combinations of pairs of points in the connected set. A famous example is the moment curve $(t,t^2,t^3,\dots,t^n)$, where, when you take the convex hull, all convex combinations of $[n/2]$ points form a face of the convex hull. Caratheodory's theorem asserts that for every $X$ in $R^d$, a point in the convex hull of $X$ is in the convex hull of $d+1$ points from $X$. I vaguely remember that when $X$ is connected you can replace $d+1$ by $d$ but I am not sure about it. Added later: Indeed it is an old theorem that you can replace $d+1$ with $d$ when $X$ is connected. A recent theorem of Barany and Karasev asserts that if $X$ is a set in $R^d$ with the property that all projections of $X$ into a $k$-dimensional space are convex, then every point in the convex hull of $X$ is already in the convex hull of $d+1-k$ points from $X$.
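To make the Caratheodory statement concrete, here is a small illustrative sketch of my own in Python/NumPy (the function name and random test data are mine, not from the answer): starting from any convex combination, an affine dependence among the supporting points lets one zero out weights until at most $d+1$ remain.

```python
import numpy as np

def caratheodory_reduce(points, weights, tol=1e-12):
    """Reduce a convex combination in R^d to one using at most d+1 points."""
    P = np.asarray(points, float)
    w = np.asarray(weights, float).copy()
    d = P.shape[1]
    while np.count_nonzero(w > tol) > d + 1:
        idx = np.flatnonzero(w > tol)
        # Affine dependence: alpha != 0 with sum_i alpha_i = 0 and
        # sum_i alpha_i p_i = 0; read off a null vector of [P^T; 1^T].
        A = np.vstack([P[idx].T, np.ones(len(idx))])
        alpha = np.linalg.svd(A)[2][-1]
        if not np.any(alpha > tol):
            alpha = -alpha
        pos = alpha > tol
        t = np.min(w[idx][pos] / alpha[pos])  # largest step keeping weights >= 0
        w[idx] -= t * alpha                   # same point; one weight drops to 0
    return w

rng = np.random.default_rng(1)
P = rng.normal(size=(10, 2))        # 10 points in the plane (d = 2)
w0 = rng.dirichlet(np.ones(10))     # a convex combination...
x = w0 @ P                          # ...representing the point x
w = caratheodory_reduce(P, w0)
print(np.count_nonzero(w > 1e-12))  # at most d + 1 = 3 points used
print(np.allclose(w @ P, x), np.isclose(w.sum(), 1.0))
```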
http://math.stackexchange.com/questions/77292/textbook-that-brings-together-linear-algebra-and-pdes?answertab=votes
# Textbook that brings together linear algebra and PDEs? I'm looking for a textbook that goes into as much detail as possible about the parallels between linear algebra in finite, countable, and continuous "spaces." Specific topics that I'm trying to get a better (more general) understanding of are, for example: The relationship between Hermitian matrices and self-adjoint operators in general; How the orthogonality and completeness relations can be described in a way to include "normalization with the Dirac delta function" without making it a special case; How to understand "basis vectors" (such as with Dirac normalization) that don't appear to be "part of" the vector space that the normalized functions live in. I hope my terminology makes sense. The whole point is that I'm just trying to learn this stuff, and I don't quite know what the correct terminology is :-) I've had basic courses in linear algebra and applied PDEs, and I see a lot of parallels, but in the books I have these parallels are (sometimes) mentioned but hardly ever emphasised. I think that, for example, a book which aims to teach partial differential equations "from a linear algebra perspective" might be what I'm looking for. Any suggestions appreciated. - It sounds like you might be looking for a textbook about Sobolev spaces… – Zhen Lin Oct 30 '11 at 21:07 Hi, Mike. I think you will need to learn measure theory and basic functional analysis if you don't know these already. PDEs live in infinite-dimensional spaces, so your usual linear algebra is not sufficient. That is why we need functional analysis. Measure theory is needed to be able to use all kinds of nice limit theorems and because our functions are only defined "almost everywhere", since changing some point of a function doesn't change the integral. Then you can study this… – Jonas Teuwen Oct 30 '11 at 21:49 …I suggest Evans's Partial Differential Equations if you prefer the PDE approach, or Krylov's Lectures on Elliptic and Parabolic Equations in Sobolev Spaces if you prefer a more functional analytic approach. – Jonas Teuwen Oct 30 '11 at 21:49 ## 1 Answer Let me elaborate on my comment. If you're looking for some "linear algebra" treatment of PDEs you will get into the field called functional analysis. Functional analysis is a kind of infinite-dimensional linear algebra. Here we are working with function spaces, for example the space of square-integrable functions $L^2$ ($f$ is in $L^2$ if $\int |f|^2 < \infty$). It can be quickly seen that there is no finite basis that spans the complete space, so our space is infinite dimensional. So, we are working with function spaces; in PDE we are looking for function spaces where our solutions of the PDEs live. That is, we are finding functions that satisfy the equation. It turns out that looking at the derivative in a classical sense doesn't give you many tools to study certain properties of the solutions like regularity (how "differentiable" your function is) and so on. So we interpret the derivative in a "weak" (or distributional) sense. In this way we have a larger class of functions that satisfies our equation and we can apply the tools of functional analysis to that. These spaces are called the Sobolev spaces. If you would like me to explain in more detail why we would study those spaces, do ask. This was actually a short summary of why we would need functional analysis. Another thing you will need is measure theory.
Measure and integration theory studies a different type of integral than the one you're used to (the Riemann integral), namely the Lebesgue integral. This integral has many more nice properties; for example you have nice theorems that state that under mild conditions on $f_n, f$ we have $$\int f_n \to \int f.$$ The Riemann integral also possesses this property, but under quite "unnatural" conditions (uniform convergence for example). (A standard example showing that some hypothesis is needed: on $[0,1]$ the functions $f_n = n\,\chi_{(0,1/n)}$ converge pointwise to $0$ while $\int f_n = 1$ for all $n$; the dominated convergence theorem excludes this by requiring a single integrable dominating function.) I'm not sure how far along you are exactly, but if I understood correctly, you have an engineering mathematics background. So, you would need to know: Basic Real Analysis. Some examples: • The blog by Terence Tao: http://terrytao.wordpress.org/ contains nice lecture notes. You're looking for something like: • Bartle - Real Analysis. This book explains basic real analysis. Here you will get the formal definition of continuity, convergence, the Riemann integral and so on. This is really a prerequisite for measure theory. • Pugh - Real Mathematical Analysis is another option, as is • Rudin - Principles of Mathematical Analysis. This book is quite hard to study the subject from. Measure Theory: • Schilling - Measure, Integrals and Martingales. This is a quite inexpensive book and all the solutions are available online. It is a very gentle introduction. You will only need the measure theory bit, not the probability bit (with the martingales). • Folland - Real Analysis. This is my favorite; it is also not very easy to learn the subject from, but worth the effort. And last but not least: • Rudin - Real and Complex Analysis. I suppose the first few chapters are sufficient, but here the same holds as for the other book by Rudin which I have mentioned. • Terence Tao's blog. This website also has some nice (free!) notes. Functional analysis: • Werner - Funktionalanalysis. If you can read German, this is a gentle introduction. You only need to know the Banach spaces and Hilbert spaces; for the moment you will not need the topological vector spaces bit. • Conway - A Course in Functional Analysis. This is also quite dense, but might be worth the effort. You only need to know the basic theorems, but these books use a "measure theoretic" approach. • Rynne and Youngson - Linear Functional Analysis. This is an undergraduate book, and I think it contains all you need. Partial Differential Equations: • Evans - Partial Differential Equations. This is a standard book for such courses; I have studied the subject from this book as well. • Krylov - Lectures on Elliptic and Parabolic Equations in Sobolev Spaces. This is also nice if you prefer a more functional analytic approach (this book revolves around the study of Sobolev spaces on domains or on the whole space). This was a short list of suggestions. If you have any questions, do ask! - Thank you. It looks like there's a lot of good information in this answer, and it's going to take me a little while to process it. I may come back with more questions at some point. In the meantime, it appears that the Rynne and Youngson book is available at a local library, and I'll perhaps start by taking a look at that. (I like the sound of "undergraduate" :-) – Mike Witt Oct 31 '11 at 3:23 – Hans Lundmark Oct 31 '11 at 5:52
http://cms.math.ca/Events/winter11/abs/dp
2011 CMS Winter Meeting Delta Chelsea Hotel, Toronto, December 10 - 12, 2011 www.cms.math.ca//Events/winter11 Doctoral Prize YOUNESS LAMZOURI, University of Illinois at Urbana-Champaign Prime number races Although the primes are equidistributed in arithmetic progressions, it has been noted that certain residue classes tend to contain more primes in initial intervals of the positive integers. This phenomenon was first observed by Chebyshev in 1853. Since that time, "races" between primes in arithmetic progressions have been extensively studied. A prime number race $\{q;a_1,\dots,a_r\}$ is a game with $r$ players, where at time $t$ the score of the $j$-th player is the number of primes less than $t$ that are congruent to $a_j$ modulo $q$. In this lecture, I will review the history of this subject, and discuss recent progress towards understanding the origin of Chebyshev's bias and its generalizations. To study these questions we will require deep information on the zeros of certain analytic functions called Dirichlet $L$-functions.
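To see the phenomenon numerically, here is a small Python sketch of my own (not part of the abstract) running the classical race $\{4;\,3,\,1\}$; the helper `primes_below` is a plain sieve.

```python
def primes_below(n):
    sieve = bytearray([1]) * n
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n, p)))
    return [q for q in range(n) if sieve[q]]

c1 = c3 = lead3 = 0
for p in primes_below(100000):
    if p % 4 == 1:
        c1 += 1
    elif p % 4 == 3:
        c3 += 1
    lead3 += (c3 > c1)   # how often the class 3 (mod 4) is ahead so far
print(c1, c3, lead3)     # Chebyshev's bias: 3 (mod 4) leads at most primes
```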
http://physics.stackexchange.com/questions/tagged/hamiltonian-formalism?sort=unanswered&pagesize=15
# Tagged Questions

The hamiltonian-formalism tag has no wiki summary.

### About Turbulence modeling (1 answer, 295 views)
There is a paper titled "Lagrangian/Hamiltonian formalism for description of Navier-Stokes fluids" in PRL. After reading the paper, the question arises how far we can investigate turbulence with this ...

### Find generating function $F_1$ for canonical transformation (1 answer, 75 views)
I'd like to know the steps to follow to find the generating function $F_1(q,Q)$ given a canonical transformation. For example, considering the transformation $$q=Q^{1/2}e^{-P}$$ $$p=Q^{1/2}e^P$$ ...

### The consistency conditions of constrained Hamiltonian systems (0 answers, 40 views)
I am studying the Hamiltonian description of a constrained system. There are some questions that have puzzled me for days, which I have been stuck on. From the Lagrangian, we can obtain the primary ...

### Calculation of the non-Gaussianity parameter for primordial cosmological perturbations by the ADM Formalism (0 answers, 300 views)
Maldacena has used the ADM Formalism in one of his papers (http://arxiv.org/abs/astro-ph/0210603) in computing the three-point correlation function (i.e. the non-Gaussianity) parameter for ...

### Second order action ADM formalism (0 answers, 49 views)
I am trying to derive the second order action $$S_{(2)}~=~\frac{m_{pl}^{2}}{8}\int a^{2}[(h_{ij}')^{2}-(\partial_{i}h_{ij})^{2}]d^{4}x,$$ used for tensor fluctuations derived from the ADM ...

### An electron is subjected to an electromagnetic field: solve using the canonical equations (0 answers, 93 views)
So I was given the following vector field: $\vec{A}(t)=\{A_{0x}\cos(\omega t + \phi_x), A_{0y}\cos(\omega t + \phi_y), A_{0z}\cos(\omega t + \phi_z)\}$ where the amplitudes $A_{0i}$ and phase shifts ...

### Describing the movement of an object in a particular situation in the Lagrangian way (0 answers, 72 views)
Suppose there is an object M (sliding motion) moving with initial speed $v$ and initial location $x_0$. Unless otherwise noted, friction is assumed to be nonexistent. It then meets a circular mold ...

### How important are constrained Hamiltonian dynamics and BRST transformation as a formalism? (0 answers, 149 views)
I am to study BRST transformations, for which I'm currently trying to understand constrained Hamiltonian dynamics to treat systems with singular Lagrangians. The crude recipe followed is Lagrangian -> ...
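For the generating-function excerpt above, one possible worked sketch (my own computation, assuming the standard type-1 conventions $p=\partial F_1/\partial q$ and $P=-\partial F_1/\partial Q$): from $q=Q^{1/2}e^{-P}$ and $p=Q^{1/2}e^{P}$ one gets $pq=Q$ and $e^{P}=Q^{1/2}/q$, so on the $(q,Q)$ chart $p=Q/q$ and $P=\tfrac{1}{2}\ln Q-\ln q$. Integrating $p=\partial F_1/\partial q$ gives $F_1=Q\ln q+g(Q)$, and $P=-\partial F_1/\partial Q$ then forces $g'(Q)=-\tfrac{1}{2}\ln Q$, i.e. $$F_1(q,Q)=Q\ln q-\tfrac{1}{2}Q\ln Q+\tfrac{1}{2}Q$$ up to an additive constant; one checks directly that $\partial F_1/\partial q = Q/q = p$ and $-\partial F_1/\partial Q = \tfrac{1}{2}\ln Q - \ln q = P$.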
http://polymathprojects.org/2012/06/03/polymath-proposal-the-hot-spots-conjecture-for-acute-triangles/?like=1&_wpnonce=9501e873bf
# The polymath blog ## June 3, 2012 ### Polymath proposal: The Hot Spots Conjecture for Acute Triangles Filed under: polymath proposals,hot spots — Terence Tao @ 2:59 am Chris Evans has proposed a new polymath project, namely to attack the "Hot Spots conjecture" for acute-angled triangles. The details and motivation of this project can be found at the above link, but this blog post can serve as a place to discuss the problem (and, if the discussion takes off, to start organising a more formal polymath project around it). ## 111 Comments » 1. Some initial questions to get the ball rolling: It seems that there are two versions of the "hot spots" conjecture in the literature: (i) the hot spots conjecture for generic data; (ii) the hot spots conjecture for arbitrary data. As I understand it, (i) is equivalent to saying that a (generic) eigenfunction of the first nontrivial eigenvalue of the Laplacian (with Neumann boundary data) attains its maximum on the boundary, while (ii) is equivalent to the same claim, but for _all_ eigenfunctions and non-trivial eigenvalues. Given that (i) would be easier, I presume that this is the version of the conjecture to attack? Also, could the conjecture be solved numerically for specific triangles, such as the equilateral triangle? Comment by — June 3, 2012 @ 3:09 am • Brian J. McCartin has recently published a book, downloadable at http://www.m-hikari.com/mccartin-3.pdf, "LAPLACIAN EIGENSTRUCTURE OF THE EQUILATERAL TRIANGLE", where he states: "in 1833, Gabriel Lamé discovered analytical formulae for the complete eigenstructure of the Laplacian on the equilateral triangle under either Dirichlet or Neumann boundary conditions and a portion of the corresponding eigenstructure under a Robin boundary condition. Surprisingly, the associated eigenfunctions are also trigonometric. The physical context for his pioneering investigation was the propagation of heat throughout polyhedral bodies." Comment by — June 5, 2012 @ 8:42 am • In reply to Stuart Anderson: Brian J. McCartin also has a published paper in "Mathematical Problems in Engineering", Vol. 8, Issue 6, Pages 517-539, entitled: "Eigenstructure of the equilateral triangle, Part II: The Neumann problem". There, he gives a complete description of Lamé's eigenfunctions for an equilateral triangle with the Neumann boundary condition $\partial U/\partial n = 0$, $n$ the normal to the triangular boundary, $U$ an eigenfunction. In section 8 on Modal Properties, he gives beautiful expressions for the eigenfunctions in terms of pairs of integers (m, n) in Equations 8.1 and 8.2, where (8.1) covers what McCartin calls the symmetric, and (8.2) the antisymmetric modes (respectively). I'm intrigued by whether the modes in equations (8.1) and (8.2) always attain their max and min on the boundary of the triangular region (equilateral triangle); the 3D plots of several modes by McCartin seem consistent with max/min always being attained on the boundary. Lamé solved the equilateral triangle case, in the sense that he gave a complete set of eigenfunctions for the Laplacian eigenvalue problem with the $\partial U/\partial n = 0$ boundary value condition. David Bernier Comment by — June 5, 2012 @ 12:06 pm • I worked out the second eigenspace for the equilateral triangle and the answer is actually rather pretty. It is convenient to work in the plane $\{ (x,y,z): x+y+z=0\}$ and take the equilateral triangle spanned by (0,0,0), (1,0,-1), (1,-1,0).
Then it turns out that there is a two-dimensional eigenspace for the second eigenfunction, spanned by the real and imaginary parts of the complex eigenfunction $\exp( 2\pi i(x-y) / 3 ) + \exp( 2\pi i(y-z) / 3 ) + \exp( 2\pi i(z-x) / 3 )$. This complex eigenfunction maps the equilateral triangle to a slightly concave version of the equilateral triangle, so that whenever one takes a projection of this complex eigenfunction to create a real eigenfunction, the maximum and minimum are only attained at the corners. Because of this strict concavity it seems likely to me that this property persists with respect to small perturbations of the domain. (I think one can use the Rayleigh quotient formalism and some suitable changes of variable to show that second eigenvalues and eigenfunctions of a perturbed domain must be close to second eigenvalues and eigenfunctions of the original domain, though I'm not sure yet exactly what function space norms one can use to define closeness of eigenfunctions here.) Comment by — June 10, 2012 @ 3:39 am • I'm trying to understand how to verify the concavity of the image. If A designates the point (0,0,0), B designates the point (1, -1, 0) and C designates the point (1, 0, -1), then a point on the edge from A to B is a convex combination $tA + (1-t)B$ of the vectors A, B with $0 \leq t \leq 1$. It follows that x-y = 0 on the edge from A to B. Also, z = 0 along the edge from A to B. I would redo the convex combination with parameter t in [0, 1] for the two other edges: from A to C and from B to C. If I understand you well, the image is a figure (closed subset) of the complex plane C. David Bernier Comment by — June 10, 2012 @ 7:54 am • Yes, this is right (except that one has x+y=0 rather than x-y=0). For instance, on the line from A to B, the complex eigenfunction traces out the curve $t \mapsto \exp(4\pi i t / 3) + 2 \exp( - 2\pi i t/3)$ for $0 \leq t \leq 1$, which is a concave curve from 3 to $3 \exp(4 \pi i/3)$ which can be seen for instance here. The other two sides of the triangle give rotations of this arc, tracing out the concave triangle with vertices at $3, 3 \exp(2\pi i/3), 3 \exp(4\pi i/3)$. One can show that the eigenfunction has no critical points in the interior of the triangle, so that the image is precisely the interior of the concave triangle. Comment by — June 10, 2012 @ 3:54 pm 2. Reblogged this on Guzman's Mathematics Weblog. Comment by — June 3, 2012 @ 4:37 am 3. How about approximating the eigenfunction by polynomials? The Neumann boundary conditions already imply that the corners are critical points. Perhaps starting with that observation, for sufficiently low degree, the geometry of the triangle implies that one of them must be a max/min. In an ideal world, a 2d version of the Sturm comparison theorem (if one exists) could then show that this feature of the polynomial approximation remains true for the actual eigenfunction. Just some preliminary ideas… Comment by Igor Khavkine — June 3, 2012 @ 2:22 pm • I am afraid I am not familiar with the Sturm comparison theorem and so I don't quite follow your idea. I looked briefly at the Wikipedia page on it but I can't figure out its connection to eigenfunctions. Is there a way for example that the Sturm comparison theorem can be used to show that the second eigenfunction for the Neumann Laplacian on the interval [0,1] has its extrema at the endpoints?
Comment by — June 3, 2012 @ 10:43 pm • In my understanding, Sturm-type results allow you to establish some qualitative property for solutions of an ODE (a 1d elliptic problem) provided that the desired property holds for a nearby solution of a nearby ODE. The property usually considered is the presence of a zero (node) of an eigenfunction in some interval. I think a similar result holds for critical points of eigenfunctions. Say you could establish that the second eigenfunction of the Laplacian on a right-angled triangle satisfies the desired maximum principle by showing that its only critical points are at the vertices (so that whatever the maximum is, it would have to be at one of these points). Then appealing to a comparison theorem might be able to show that the same property holds for almost-right acute triangles. Mind you, this is all speculation at this point… Comment by Igor Khavkine — June 4, 2012 @ 6:52 am • Ah, so maybe a Sturm-type result can make rigorous the statement: "the second eigenfunction depends continuously on the domain"? By physical considerations, this statement ought to be true. But then it seems this would only work to prove the conjecture for triangles close to right triangles? As a side note, if such a continuity statement were to hold then the hot spots conjecture for acute triangles must be true by the following *non-math* proof: If it weren't true then by continuity there would be an open set of angles where it failed to hold. But after simulating many triplets of angles the conjecture always holds. C'mon, what are the odds we missed that open set? :-) Comment by — June 5, 2012 @ 3:51 am • It appears that Sturm's classical work has far-reaching generalizations, as described for instance in this monograph: Kurt Kreith, Oscillation Theory, LNM 324 (Springer, 1973). In particular, Chapter 3 features some comparison theorems for solutions to elliptic equations. Comment by Igor Khavkine — June 5, 2012 @ 1:14 pm 4. Although I mentioned a "combinatorial version of the conjecture" in the proposal, I didn't write out the details there to prevent clutter. Here is a brief write-up of a combinatorial approach: http://www.math.missouri.edu/~evanslc/Polymath/Combinatorics While in some sense this is just another side of the same coin, perhaps rephrasing the problem as a combinatorics problem may make certain aspects more clear to combinatorics professionals. Comment by — June 3, 2012 @ 7:43 pm 5. Also, here is the MATLAB program I've used to generate and display the eigenvectors of the graphs $G_n$: http://www.math.missouri.edu/~evanslc/Polymath/HotSpotsAny.m The input format is HotSpotsAny(n,a,b,c,e), where a,b,c are the edge weights, n is the number of rows in the graph, and e determines which eigenvector of the graph Laplacian is displayed. (Note that in the proposal write-up the graph $G_n$ has $2^n$ rows, so if you wanted to simulate the Fiedler vector for $G_6$ with edge weights a=1,b=2,c=3 then you would type HotSpotsAny(64,1,2,3,2)). For large n, the eigenvectors of the graph Laplacian should approximate the eigenfunctions of the Neumann Laplacian of the corresponding triangle… so the program can be used to roughly simulate the true eigenfunctions as well. (Also thanks to Robert Gastler of the Univ of Missouri who wrote the first version of this code) Comment by — June 3, 2012 @ 8:11 pm 6. Just as a loose comment: maybe ideas based on self-similarity of the whole triangle to its 4 pieces can help (i.e.
modeling the whole triangle as its scaled copies + the heat contact). Then without going into the graph approximation (which looks fruitful anyway), one can see some properties. Comment by — June 4, 2012 @ 1:37 pm • One advantage in the graph case is that after dividing and dividing the triangles you get to the graph $G_1$ which is simply a tree of four nodes, and there I think the theorem shouldn't be too hard to prove. And from there there might be an argument by induction. In the continuous case, no matter how many times you subdivide the triangle, after zooming in it is still the same triangle. On the other hand, in the continuous case, each of the sub-triangles is truly the same as the larger triangle. So you may be right that there is something to exploit there. Does anyone know of good examples where self-similarity techniques are used to solve a problem? In either case, there is the issue that while the biggest triangle has Neumann boundaries, the interior subtriangles have non-Neumann boundaries… Comment by — June 5, 2012 @ 3:39 am • Something that is true and uses that the whole triangle is the union of the 4 congruent pieces is the following. From each eigenfunction of the triangle with eigenvalue $\lambda$ we can build another eigenfunction with eigenvalue $4\lambda$ by scaling the triangle to half length, and then using even reflection through the edges. Note that this eigenfunction will have an interior maximum since every point on the boundary has a corresponding point inside (on the boundary of the triangle in the middle). I'm skeptical this observation could be of any use. Comment by Luis S — June 7, 2012 @ 10:10 am • Hmm… but, apart from the case of an equilateral triangle, when you divide a triangle into four sub-triangles, I don't believe that the sub-triangles are reflections of one another. Comment by — June 7, 2012 @ 5:43 pm • You're right. My bad. Comment by Luis S — June 10, 2012 @ 1:35 am 7. [...] polymath related items for this post. Firstly, there is a new polymath proposal over at the polymath blog, proposing to attack the "hot spots conjecture" (concerning a maximum principle for a [...] Pingback by — June 5, 2012 @ 12:44 am 8. In the case of an equilateral triangle, for the Hot Spots Conjecture to hold, the second eigenvalue must, by a symmetry argument, have multiplicity at least two. The refined conjecture calls for a simple eigenvalue in the scalene case, and eigenfunction extremes at the two smaller angles. Has anyone done numerical work yet to estimate how sensitive the eigenvalue degeneracy lifting is to perturbations of a starting equilateral triangle? Any proof will have to address the issue that the third eigenvalue may be arbitrarily close to the second and has an eigenfunction with extremes at a different pair of corners. Comment by Tracy Hall — June 5, 2012 @ 2:45 am • Here is some preliminary data. As I am not sure how to simulate the true eigenfunctions of the triangle, the following is for the graph $G_6$ whose eigenvectors should roughly approximate the true eigenfunctions. For the case that a=b=c=1 (equilateral triangle):
HotSpotsAny(64,1,1,1,2) yields the corresponding eigenvalue -0.001070825047269
HotSpotsAny(64,1,1,1,3) yields the corresponding eigenvalue -0.001070825047269
HotSpotsAny(64,1,1,1,4) yields the corresponding eigenvalue -0.003211901603854
We see that indeed the eigenvalue -0.001070825047269 has multiplicity two.
Perturbing a slightly, we have that for a=1.1, b = 1, c = 1 (isosceles triangle where the odd angle is larger than the other two):
HotSpotsAny(64,1.1,1,1,2) yields the corresponding eigenvalue -0.001078552707489
HotSpotsAny(64,1.1,1,1,3) yields the corresponding eigenvalue -0.001131412869938
Whereas for a=.9, b = 1, c = 1 (isosceles triangle where the odd angle is smaller than the other two):
HotSpotsAny(64,.9,1,1,2) yields the corresponding eigenvalue -0.001004876221957
HotSpotsAny(64,.9,1,1,3) yields the corresponding eigenvalue -0.001062028119964
In either case we see that the third eigenvalue is perturbed away from the second. What strikes me as interesting is how different the outcome is in increasing a by 0.1 versus decreasing a by 0.1. In the former case, the second eigenvalue barely changes, whereas in the latter case the second eigenvalue changes quite a bit. I imagine this ties into the heuristic "sharp corners insulate heat" — reducing a to .9 produces a sharper corner leading to more heat insulation and a much smaller (in absolute value) second eigenvalue, whereas increasing a to 1.1 makes that corner less sharp, but the other two corners are still relatively sharp so the heat insulation isn't affected as much. Just a guess though… Comment by — June 5, 2012 @ 4:49 am • It looks like the degeneracy lifting might be linear. It's the ratio of the second and third eigenvalues that matters more, which is close to the same in either case, and approximately half as far from 1 as the perturbation. Using (9/10, 1, 1) should be the same as using (1, 10/9, 10/9) and then scaling all eigenvalues by 90%. Comment by Tracy Hall — June 5, 2012 @ 5:38 am 9. If the sides of the triangle have lengths A, B, and C, currently you are using edge weights a=A, b=B, and c=C. This solves a different heat equation, when you pass to the limit, than the one you want. The assumption that each small triangle is at a uniform temperature gives an extra boost to the overall heat conduction rate in those directions along which the small triangles are longer, so in the limit you are modeling heat conduction in a medium with anisotropic thermal properties. It is just as though you started with a material of extremely high thermal conductivity and then sliced it in three different directions, with three different spacings, to insert thin strips of insulating material of a constant thickness. Based on some edge cases, I suspect that the correct formulas for edge weights to model an isotropic material in the limit are as follows: $a = 1/(-A^2+B^2+C^2)$, $b=1/(A^2-B^2+C^2)$, $c=1/(A^2+B^2-C^2)$. Interestingly enough, these formulas are only defined (with positive weights) in the case of acute triangles, which suggests that this approach, if it works, may not provide an independent proof of the known cases. Comment by Tracy Hall — June 5, 2012 @ 8:45 am • An excellent point… it may therefore be that the $G_n$ don't converge to the true triangle at all. And it would show that naive physical intuition might lead one astray… It might be interesting to see what the $G_n$ do converge to… if they converge at all (though remember, if we want to look at convergence, we have to divide a, b, and c by two every time we increase n by one, as the triangles halve in length each iteration). If they converge to a "dilated" Laplacian then maybe one could stretch the triangle to bring it back to a normal Laplacian (and therefore prove the conjecture for this stretched triangle).
This then goes back to what you mentioned: What is the true relation between the side lengths of the triangle A, B, and C and the graph weights a, b, c? Where did you get the relations $a = 1/(-A^2+B^2+C^2), b=1/(A^2-B^2+C^2), c=1/(A^2+B^2-C^2)$ from? One last thing: I am still not convinced that the $G_n$ shouldn't converge to the correct triangle (i.e. a=A,b=B,c=C). While it's true that the heat flow is anisotropic in the graphs (and hence their limit), the graphs $G_n$ are structurally equilateral in the "spacing of the nodes". That is, maybe the $G_n$ converge to an *equilateral* triangle with anisotropic heat flow… which then would be equivalent to the usual heat flow on a scalene triangle? Comment by — June 6, 2012 @ 12:21 am • Running the data for a=1, b = 1.5, c = 2, HotSpotsAny(2^n,1*(.5)^n,1.5*(.5)^n,2*(.5)^n,2) for n=1,…,6 yields:
n=1 yields the second eigenvalue as -0.567396047577611
n=2 yields the second eigenvalue as -0.079407110995481
n=3 yields the second eigenvalue as -0.010175074142110
n=4 yields the second eigenvalue as -0.001279605383718
n=5 yields the second eigenvalue as -0.0001601915458021453
n=6 yields the second eigenvalue as -0.00002003146751408886
So it seems that the second eigenvalue of the graphs $G_n$ isn't converging (to the true Laplacian or any dilated Laplacian for that matter)… which is disappointing. Perhaps the scaling factor of 1/2 is wrong… or perhaps the $G_n$ fail to model any diffusion on the triangle… Or it could be that n needs to be larger to see convergence… my MATLAB refuses to run n=7 so that's as far as I could go. Comment by — June 6, 2012 @ 12:59 am • There is a PDE Toolbox in MATLAB that can numerically find (via finite elements) eigenvalues and eigenfunctions for just about any domain in 2D. One can draw a domain, set Neumann boundary conditions, refine a mesh and solve. There is also a Python package called FEniCS (also finite elements), which can handle 2D and 3D. I was working on a Laplace eigensolver using this package. I might be able to post something soon. Any triangle I have tried gives the maximum and minimum for the second eigenfunction at the vertices connected by the longest side. Comment by Bartlomiej Siudeja — June 12, 2012 @ 5:53 pm • Indeed, and Matlab's FEM package is what I used to generate the first few examples. The devil, as with any numerical algorithm, is in the details. The PDE Toolbox uses piecewise linears, and the eigenvalue solve is through an Arnoldi iteration. You're restricted in how fine the mesh can be (due to memory management). You will see convergence, but have no real control on the asymptotic constant. Both deal.ii and FreeFem allow for higher order approximation, and more control over the numerical linear algebra. And both are freeware. Comment by — June 12, 2012 @ 5:56 pm • The FEniCS package can also use arbitrary element degree. But I agree, eventually you run into out-of-memory problems. I was implementing adaptive refinements based on the second eigenfunction and convergence is quite good. If you use second or higher order you can check the errors by applying the Laplacian to the approximation. This seems to work very well. Comment by Bartlomiej Siudeja — June 12, 2012 @ 6:17 pm • As the geometry becomes more degenerate, the adaptive strategies slow down (I'm running code now). Checking the error by applying a numerical Laplacian will only give a low-order check on the error.
Comment by — June 12, 2012 @ 6:26 pm • I was wondering if there was anything qualitatively different about the acute triangle case from the known cases. From the sketched writeup nothing was particularly dependent on acuteness. Comment by David Roberts — June 6, 2012 @ 12:33 am • The approach using the graphs $G_n$ does not make reference to the acuteness… but then again nothing has been proven using this approach yet (and it may well be that the approach of using the graphs $G_n$ is not the best approach)! It could be for example that the corresponding conjecture about the Fiedler vector for the graphs $G_n$ only holds true under certain conditions on a,b,c… conditions that correspond to the underlying triangle being obtuse. The proof of Banuelos and Burdzy using coupled reflected Brownian motion works when the triangle is obtuse and fails when the triangle is acute… but the non-probabilistic idea is as follows: Imagine an obtuse triangle laid out like Figure 1 in the proposal writeup (i.e. $\gamma > \frac{\pi}{2}$). Say that a heat distribution is "monotone" if, as you go from left to right, the value of heat is non-decreasing. Then Banuelos and Burdzy have shown that if the initial heat distribution is monotone, so too will the heat distribution be *at all future times*. It is this monotonicity property that distinguishes the obtuse case from the acute case. Actually what I said in the previous paragraph is a bit of an over-simplification. They didn't actually show that *every* monotone initial condition stays monotone under the heat flow… they just showed this for the initial condition which is an indicator function which takes the value 0 to the left of and 1 to the right of any vertical line through the triangle. But you only need to know the long-term behavior of one initial condition to capture the nature of the second eigenfunction (unless your initial condition is perpendicular to it… but they took care to account for that as well). Comment by — June 6, 2012 @ 12:54 am 10. Some quick computations for the special case a < b = c. It is possible (for our purposes) to "replace" the graph $G_1$ with a 3-node graph, with Laplacian $\hat{L} = \left( \begin{array}{ccc} -a & a & 0 \\ a & -a-2b & 2b \\ 0 & b & -b \end{array} \right)$. This corresponds to "merging" nodes 3 and 4. The non-zero eigenvalues of $\hat{L}$ are the solutions to $4ab + (2a+3b)\lambda + \lambda^2 = 0$. Since all coefficients are positive, the eigenvalue we are interested in (the smallest in magnitude) is $\lambda = -\frac{2a + 3b}{2} + \sqrt{(2a + 3b)^2/4 - 4ab}$. Computing the corresponding eigenvector, we get $v = \left(1, \frac{a+\lambda}{a}, \frac{a+\lambda}{a}\left(\frac{b+\lambda}{b}\right)^{-1}\right)$. Since $\lambda < 0$ and $a < b$, the maximum of this eigenvector lies indeed at the first coordinate. Actually, a direct computation shows that the characteristic equation for $L$ itself is $\lambda(b + \lambda)(4ab + (2a+3b)\lambda + \lambda^2) = 0$. So, the eigenvalues are the same as for $\hat{L}$, except that $\lambda = -b$ is an additional eigenvalue. For a non-zero eigenvalue not equal to $-b$, the corresponding eigenvector is $\left(1, \frac{a+\lambda}{a}, \frac{a+\lambda}{a}\left(\frac{b+\lambda}{b}\right)^{-1}, \frac{a+\lambda}{a}\left(\frac{b+\lambda}{b}\right)^{-1}\right)$. Again, since $\lambda < 0$, the maximum is attained at coordinate 1. Also, as expected from symmetry, the 3rd and 4th coordinates of the eigenvector are equal.
Comment by Johannes — June 5, 2012 @ 3:16 pm • typo: it should be ((a+lam) / a) / ((b+lam) / b) and not ((a+lam) / a)((b+lam) / b) Edit: Fixed Comment by Johannes — June 6, 2012 @ 8:01 am 11. As I understand your merge, we note that since nodes 3 and 4 will have the same value, we can record that as a single quantity… but really since it is the sum of the heat at nodes 3 and 4 there is twice as much heat flow. So wouldn't $\hat{L}$ be $\hat{L} = \left( \begin{array}{ccc} -a & a & 0 \\ a & -a-2b & 2b \\ 0 & 2b & -2b \end{array} \right)$? Comment by — June 6, 2012 @ 1:42 am • I got to this formula by considering the Markov process formulation — if we are at 2, we have probability 2*b to go to the "merged node" [3,4] (because we can go to 3 with prob b, or 4 with prob b). However, if we are at [3,4], then we will go to node 2 with probability b only. I think what happens is that the interpretation of the value of the eigenvector on this node corresponds to the average of 3 and 4, rather than the sum. But surely your way of merging should work as well. And I'm not really saying that it is useful to merge these nodes at all, I just started the computations that way :) Comment by Johannes — June 6, 2012 @ 7:35 am • Ah ok I sort of see… in your $\hat{L}$, the heat coming back in from node 3 is doubled (seen in the (2,3)-entry of $\hat{L}$)… so it is as though we assume that whatever heat flows in from 3 is matched by the invisible node 4? In any case $\hat{L}$ seems to work. Indeed, by Wolfram Alpha, the characteristic polynomial you got for L checks out (and includes that of $\hat{L}$): http://www.wolframalpha.com/input/?i=determinant%28{{-a-l%2Ca%2C0%2C0}%2C{a%2C-a-2b-l%2Cb%2Cb}%2C{0%2Cb%2C-b-l%2C0}%2C{0%2Cb%2C0%2C-b-l}}%29 I then went to compute the eigenvectors using Wolfram Alpha and saw horrific expressions… but then I saw that your eigenvectors were in terms of lambda itself. That makes things more elegant and can perhaps be used to prove the conjecture for $G_1$ for arbitrary a<b<c? Indeed, we seek the null space of $L-\lambda I = \left( \begin{array}{cccc} -a-\lambda & a & 0 & 0 \\ a & -a-b-c-\lambda & b & c \\ 0 & b & -b-\lambda & 0 \\ 0 & c & 0 & -c-\lambda \end{array} \right)$ when $\lambda$ is the second eigenvalue. If we let $v_k$ denote the components of the eigenvector then we can scale so that $v_2=1$ (provided $v_2$ isn't zero!), whence $v_1=a/(a+\lambda)$, $v_2=1$, $v_3=b/(b+\lambda)$, $v_4=c/(c+\lambda)$. This is what you had (but rescaled so that $v_2 = 1$ instead of $v_1=1$ in your case). However it isn't clear to me why $v_1$ and $v_3$ should be the extrema… For one thing, some of the $v_k$ are going to be negative (because we know a priori that the $v_k$ sum to 0, as the second eigenvector is perpendicular to the first, constant one), so it must be that $\lambda_2 <-a$. If we could show somehow that $-b<\lambda_2<-a$ (and this must be the case since, experimentally, if $v_2=1$ then $v_1$ is negative and $v_3,v_4$ are positive) then we would be done, since $v_1$ would be negative and $v_3$ would be the largest of the three positive vertices. Comment by — June 7, 2012 @ 3:17 am 12. [...] is <a href="http://polymathprojects.org/2012/06/03/polymath-proposal-the-hot-spots-conjecture-for-acute-triangle…" [...] Pingback by — June 6, 2012 @ 9:02 pm 13. Here are a few naive observations. None of these provide a roadmap to a proof, but may help build intuition. This is my first post on polymath, so please let me know if more/less detail is required.
First, I've taken the liberty of computing, using a finite element method, the desired eigenfunction of the Neumann Laplacian on a couple of acute triangles. The results are not surprising. I've plotted some contour lines as well. http://people.math.sfu.ca/~nigam/polymath-figures/isoceles1.jpg and http://people.math.sfu.ca/~nigam/polymath-figures/scaleneeig.jpg The domain is subdivided into a finite number N of smaller triangles. On each, I assumed the eigenfunction can be represented by a linear polynomial. Across interfaces, continuity is enforced. I used Matlab, and implemented a first-order conforming finite element method. In other words, I obtained an approximation from a finite-dimensional subspace consisting of piecewise linear polynomials. The approximation is found by considering the eigenvalue problem recast in variational form. This strategy reduces the eigenvalue question to that of finding the second eigenvector of a finite-dimensional matrix, which in turn is done using an iterative method. As N becomes large, my approximations should converge to the true desired eigenfunction. This follows from standard arguments in numerical analysis. I am happy to provide more details if needed. Additionally, I've computed a couple of approximate solutions to the heat equation in acute triangles. The initial condition is chosen to have an interior 'bump', and I wanted to see where this bump moved. Again, I've plotted the contour lines of the solutions as well, and one can see the bump both smoothing out, and migrating to the sharper corners: http://people.math.sfu.ca/~nigam/polymath-figures/isoceles.avi http://people.math.sfu.ca/~nigam/polymath-figures/isoceles2.avi http://people.math.sfu.ca/~nigam/polymath-figures/scalene.avi I think in the neighbourhood of the corner with interior angle $\frac{\pi}{\alpha}$, the asymptotic behaviour of the (nonconstant) eigenfunctions should be of the form $r^\alpha \cos(\alpha \theta) + o(r^{\alpha})$, where $r$ is the radial distance from the corner. Next, if one had an unbounded wedge of interior angle $\frac{\pi}{\alpha}$, the (non-constant) eigenfunctions of the Neumann Laplacian would be given in terms of $P_n(r,\theta) = J_{\alpha n} (\sqrt{\lambda}\, r) \cos(\alpha n \theta)$, $n=0,1,2,\dots$. The $J_{\alpha n}$ are Bessel functions of the first kind. The spectrum for this problem is continuous. When we consider the Neumann problem on a bounded wedge, the spectrum becomes discrete. This makes me think approximating the second eigenfunction of the Neumann Laplacian in terms of a linear combination of such $P_n(r,\theta)$ could be illuminating. This idea is not new, and a numerical strategy along these lines for the Dirichlet problem was described by Fox, Henrici and Moler (SIAM J. Numer. Anal., 1967). Betcke and Trefethen have a nice recent paper on an improved variant of this 'method of particular solutions' in SIAM Review, v. 47, n. 3, 2005. http://www.jstor.org/stable/10.2307/2949737 http://eprints.ma.man.ac.uk/589/01/MPSfinal.pdf Comment by — June 7, 2012 @ 8:20 pm
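Here is a minimal self-contained sketch of the kind of computation described in this comment, in Python/NumPy (piecewise-linear elements on a uniform refinement and a dense generalized eigensolver; an illustrative reconstruction under those choices, not the Matlab code used for the figures, and the acute triangle at the end is an arbitrary choice of mine):

```python
import numpy as np
from scipy.linalg import eigh

def second_neumann_eigenpair(v0, v1, v2, n=24):
    """P1 FEM for -Laplace u = lambda u with Neumann BC on triangle (v0,v1,v2)."""
    v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
    # Nodes at the barycentric lattice points (i + j <= n).
    idx, pts = {}, []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            idx[i, j] = len(pts)
            pts.append(((n - i - j) * v0 + i * v1 + j * v2) / n)
    pts = np.asarray(pts)
    # Uniform triangulation: one "up" cell and (where it fits) one "down" cell.
    tris = []
    for i in range(n):
        for j in range(n - i):
            tris.append((idx[i, j], idx[i + 1, j], idx[i, j + 1]))
            if i + j < n - 1:
                tris.append((idx[i + 1, j], idx[i + 1, j + 1], idx[i, j + 1]))
    N = len(pts)
    K = np.zeros((N, N))  # stiffness matrix: int grad(psi_i) . grad(psi_j)
    M = np.zeros((N, N))  # mass matrix: int psi_i psi_j
    for t in tris:
        x, y = pts[list(t), 0], pts[list(t), 1]
        b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
        c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
        area = 0.5 * abs(b[0] * c[1] - b[1] * c[0])
        Ke = (np.outer(b, b) + np.outer(c, c)) / (4.0 * area)
        Me = (area / 12.0) * (np.ones((3, 3)) + np.eye(3))
        for p in range(3):
            for q in range(3):
                K[t[p], t[q]] += Ke[p, q]
                M[t[p], t[q]] += Me[p, q]
    w, V = eigh(K, M)          # generalized eigenproblem K C = lambda M C
    return w[1], V[:, 1], pts  # w[0] ~ 0 is the constant mode

lam, u, pts = second_neumann_eigenpair((0.0, 0.0), (1.0, 0.0), (0.4, 0.9))
print("second eigenvalue ~", lam)
print("max of u at", pts[np.argmax(u)], "; min of u at", pts[np.argmin(u)])
```

Refining (larger n) and varying the triangle gives a quick check of where the second eigenfunction attains its extrema, which can be compared against the triangle's vertices.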
Also what makes you suspect that $P_n(r,\theta) = J_{\alpha n}(\sqrt{\lambda}\, r) \cos(\alpha n \theta)$, $n=0,1,2,\ldots$? And finally what do you mean by "unbounded/bounded wedge"? (These may be standard concepts in numerical analysis but I am not familiar with them :-/ ) Comment by — June 8, 2012 @ 5:22 pm • Chris, I'll attempt to answer in reverse order. - By a wedge I mean the domain $\{(r,\theta)\vert 0\leq r \leq a, 0\leq \theta \leq \frac{\pi}{\alpha}\}$. An unbounded wedge would have $0\leq r$. A bounded wedge would have finite a. - The separation of variables solution for the Neumann Laplacian on an unbounded wedge is given by $P_n(r,\theta)$. My thinking is that as one gets close to corners, the behaviour of the true eigenfunction must, in some asymptotic sense, be the same as that of $P_n(r,\theta)$ as $r\rightarrow 0^+$. In other words, if I am very close to a corner of the triangle, the domain appears as if it were an infinite wedge. - If I were separating variables on a wedge with angle $\beta$, solutions would have angular behaviour of the form $\cos(k\frac{\pi \theta}{\beta})$. I find it easier to set the angle to $\frac{\pi}{\alpha}$, in which case the angular behaviour is $\cos(k \alpha \theta)$. - The finite element matrices I used are quite large, but sparse. It's a bit unwieldy to post the matrix. But I can attempt to describe it. The original problem, in variational form, is to find $(u,\lambda)$ such that $\int_{\Omega} \nabla u \cdot \nabla v = \lambda \int_\Omega u\, v$ for all test functions $v$. I am assuming the approximate solution $u_N$ is a linear combination of basis functions $\psi_j$ which are described below. I am attempting to find the first nonzero eigenvalue $\lambda_N$ and eigenvector $C=(c_1,c_2,\ldots,c_N)^T$ such that $A C = \lambda_N M C.$ The entries of the matrix $A$ are of the form $\int_\Omega \nabla \psi_i \cdot \nabla \psi_j$. The entries of the matrix $M$ are of the form $\int_\Omega \psi_i \psi_j$. Suppose for a minute that it were possible to tessellate the acute triangle $\Omega$ by smaller right triangles. The collection of these triangles is called the mesh. Denote such a triangle by $T_h$. On $T_h$, we are saying that the desired eigenfunction is well-approximated by a linear polynomial: $a+bx+cy$. This polynomial is completely specified by its values at the vertices of $T_h$. Now consider a 'hat' function $\psi_i(x,y)$, which takes on value 1 at the ith vertex of the mesh, 0 on all adjacent vertices, is linear in between and zero everywhere else. Clearly a globally piece-wise linear function will be comprised of a linear combination of these hat functions. Most of the entries of $A$ and $M$ will be zero, since the support of $\psi_i$ does not intersect that of $\psi_j$ unless they are close. The matrices will be symmetric. Now, we are attempting to do all this on an acute triangle $\Omega$, so the smaller triangles $T_h$ in our mesh are affine maps of right-angled triangles. So we have to include this map while building the matrices. To get a 'feel' for the sparsity structure of $A$ on an acute triangle using linear FEM, here's a picture: http://people.math.sfu.ca/~nigam/polymath-figures/sparsity.jpg Comment by — June 8, 2012 @ 7:34 pm
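A minimal self-contained version of this hat-function construction (a sketch in Python with numpy/scipy assumed; the test triangle and refinement level are arbitrary choices, and dense matrices are used for brevity where a real code would use sparse storage and an iterative eigensolver as described above):

```python
import numpy as np
from scipy.linalg import eigh

def mesh_triangle(v, n):
    """Uniform n-fold refinement of the triangle with 3x2 vertex array v."""
    pts, idx = [], {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            idx[(i, j)] = len(pts)
            pts.append(v[0] + (v[1] - v[0]) * i / n + (v[2] - v[0]) * j / n)
    tris = []
    for i in range(n):
        for j in range(n - i):
            tris.append([idx[(i, j)], idx[(i + 1, j)], idx[(i, j + 1)]])
            if j < n - i - 1:
                tris.append([idx[(i + 1, j)], idx[(i + 1, j + 1)], idx[(i, j + 1)]])
    return np.array(pts), np.array(tris)

def assemble(pts, tris):
    """P1 stiffness matrix A and mass matrix M over the whole mesh."""
    N = len(pts)
    A, M = np.zeros((N, N)), np.zeros((N, N))
    for t in tris:
        p = pts[t]
        J = np.column_stack([p[1] - p[0], p[2] - p[0]])   # affine map Jacobian
        area = abs(np.linalg.det(J)) / 2
        # gradients of the three barycentric (hat) functions on this element
        G = np.linalg.inv(J).T @ np.array([[-1.0, 1.0, 0.0], [-1.0, 0.0, 1.0]])
        A[np.ix_(t, t)] += area * G.T @ G
        M[np.ix_(t, t)] += area / 12 * (np.ones((3, 3)) + np.eye(3))
    return A, M

pts, tris = mesh_triangle(np.array([[0.0, 0.0], [1.0, 0.0], [0.4, 0.9]]), 40)
A, M = assemble(pts, tris)
w, V = eigh(A, M)            # generalized eigenproblem A C = lambda M C
u2 = V[:, 1]                 # second eigenfunction (w[0] ~ 0 is the constant mode)
print("lambda_2 ~", w[1], "; |u2| maximised at", pts[np.argmax(np.abs(u2))])
```

On a sketch like this one can check directly whether the extremal entries of the discrete eigenvector land on boundary vertices of the big triangle, which is the discrete form of the conjecture discussed further down the thread.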
• Very nice graphics. I might suggest plotting the "line of nodes", the points where the "heat" is zero, in the first non-trivial eigenfunction. As Terry Tao has discussed, for an equilateral triangle, the first non-trivial eigenvalue for the Laplacian with Neumann boundary condition has multiplicity two. I'm pretty sure that these Neumann eigenvalues and eigenfunctions also correspond to "pure tones" for a freely vibrating membrane. Wikipedia has (animated) modes for a disk-like drum surface fixed to the rim, which unfortunately is the Dirichlet problem. Speaking non-rigorously, for the Neumann eigenfunctions of a triangle (for non-zero eigenvalue), the line/curve or "nodal line" should separate the regions of positive heat from those of negative heat. As a visual aid, we could use the "nodal lines" to guide us in finding the hot/cold spot: start from the "nodal line" and do a steepest ascent or steepest descent. Also, I succeeded in downloading the animation isoceles.avi from your website. It was possible using a media player to slow down the replay by 30X, going from 2 to 60 seconds. Comment by — June 14, 2012 @ 5:16 am • There has been some talk of nodal lines in the more recent comments on the second research page. I suspect that the nodal line always straddles the sharpest angle and is bowed out, whence an argument like the one used for the Dirichlet Neumann problem could maybe be put to use. But all of this would require quantitative bounds on the nodal line… I don't quite follow the steepest descent proposal… wouldn't that require lots of knowledge about $u$ itself, in which case we'd know the locations of the max/min already? Or maybe I am missing something? Comment by Chris Evans — June 14, 2012 @ 8:21 am • I should have been clearer in what I said about the nodal lines. I mention these in connection with enhancing the plots for eigenfunctions and perhaps for building physical intuition. So, I'm assuming we know $u$ already. In Nilima Nigam's figure: http://people.math.sfu.ca/~nigam/polymath-figures/isoceles1.jpg , the triangle has a base of length 1 and a height of 2. This isosceles triangle has angles of about 28, 76 and 76 degrees, so the smallest angle is quite acute. So, certainly here the nodal line straddles the sharpest angle and is bowed out (toward the base). In an isosceles triangle with angles 80, 50 and 50 degrees, I really don't know where the nodal line is for the eigenfunction associated to the first non-zero eigenvalue … I'll have to think about it … Comment by — June 14, 2012 @ 10:43 am • After reading Siudeja's post below, I think the Laugesen and Siudeja paper referred to there implies that an 80, 50, 50 degrees isosceles triangle has its nodal curve for the first eigenfunction along the bisector of the 80-degree angle. (See Figure 1 of the Laugesen and Siudeja paper, for example) Comment by — June 14, 2012 @ 11:33 am • It's easy enough for me to generate some figures showing the nodal line for a few triangles. As for your steepest descent comment: do you mean we should plot the nodal line in the same fixed triangle as time increases (in the heat equation), and then look at the direction of steepest descent? This is relatively easy to do. Or do you mean we should plot the nodal line for triangles with angles which are close, and somehow study the nodal lines in there? This I don't readily see how to do, since the domain (and hence the location of the nodal lines) will change from triangle to triangle. Let me know if this is still of interest, and I can throw up some graphics. Comment by — June 14, 2012 @ 2:47 pm
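If the eigenvector comes from a piecewise-linear discretization like the FEM sketch above, the nodal line is just its zero contour; with matplotlib (and the hypothetical names `pts`, `tris`, `u2` from that sketch):

```python
import matplotlib.pyplot as plt
from matplotlib.tri import Triangulation

tri = Triangulation(pts[:, 0], pts[:, 1], tris)
plt.tricontourf(tri, u2, levels=20)                               # the eigenfunction
plt.tricontour(tri, u2, levels=[0.0], colors="k", linewidths=2)   # its nodal line
plt.gca().set_aspect("equal")
plt.title("second Neumann eigenfunction and its nodal line")
plt.show()
```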
• I'll explain why lines of steepest descent interest me. They are everywhere tangent to the gradient of $u$, the lowest non-trivial eigenfunction of the Neumann Laplacian. For your height 2, base 1 isosceles triangle, it seems the "cold spot" has $u$ about -0.09, at the pointy vertex. Maybe we already know that the two other vertices at the base are the only two "hot spots" with $u$ being about +0.03 there, and maybe +0.025 on the base, half-way between the two "hot spot" vertices. Say we do a mirror image of the 3d graph of $u$ about the base, thus extending $u$; could it be that the extended $u$ has a saddle point at $x=0$, $y=0$? It seems that to the left of the axis of symmetry of the height 2, base 1 triangle, the steepest ascent curves should slope downwards and to the left, and to the right of the axis of symmetry, the steepest ascent curves starting at the pointy vertex should slope downwards and to the right. For nodal lines, I meant for the eigenfunctions. They show where $u$ is average-valued. For the isosceles case, we already know where they lie, by the results of Laugesen and Siudeja. Comment by — June 14, 2012 @ 3:58 pm 14. Is the case of an isosceles triangle still open? I was thinking of the following. Say the triangle has vertices $(a,0)$, $(-a,0)$ and $(0,h)$. If $u(x,y)$ is the second eigenfunction, then so are $u(-x,y)$ and their sum $v(x,y)=u(x,y)+u(-x,y)$. The function v is even in x, so it is also a Neumann eigenfunction of the right triangle with vertices $(0,0)$, $(0,h)$ and $(a,0)$. According to the description of the problem the case of the right triangle has already been worked out, so this would reduce to that case unless v is identically zero. If v is identically zero, it means that the second eigenfunction u was odd in x. In particular $u(0,y)=0$ for all y. The function u would be the first eigenfunction on the same right triangle as above but now with Dirichlet boundary condition on x=0 and Neumann on the other two edges. This function cannot change sign (otherwise $|u|$ has less energy). I would expect its maximum to take place at (a,0) but I don't know how to prove it (maybe some rearrangement along lines?). Comment by Luis S — June 7, 2012 @ 10:05 pm • As far as I know, the case for a general isosceles triangle is open, but don't quote me on that :) Of course some cases like obtuse-isosceles are solved. I believe the obtuse-isosceles triangle has an anti-symmetric second eigenfunction and the acute-isosceles triangle has a symmetric second eigenfunction (as this follows the heuristic that the hot spots are in the sharpest corners.) When you reduce to the case of the right triangle in your first paragraph, how do you know that it is still the second eigenfunction (as opposed to some other eigenfunction)? Comment by Anonymous — June 8, 2012 @ 9:35 pm • Because any other eigenfunction of the right triangle would produce a corresponding even eigenfunction of the isosceles triangle. Of course I can't rule out the second eigenfunction of the isosceles to be odd. Comment by Luis S — June 10, 2012 @ 1:37 am • According to Proposition 2.3 of the Banuelos-Burdzy paper, the second eigenfunction of a symmetric domain cannot be odd if the ratio between the diameter and the width is at least $2 j_0/\pi \approx 1.53$, where $j_0$ is the first non-trivial zero of the Bessel function $J_0$. So this handles isosceles triangles which are sufficiently pointy, at least… Comment by — June 10, 2012 @ 1:53 am • The isosceles right-angled triangle has diameter twice the width, so I'm not sure the ratio isn't at least 1.53 for all acute isosceles triangles.
Comment by — June 11, 2012 @ 11:46 pm • Should have been more careful: only the width perpendicular to the axis of symmetry counts. Comment by — June 11, 2012 @ 11:47 pm • Unfortunately so :-). Note that for the equilateral triangle there do exist odd second eigenfunctions (the second eigenspace is two-dimensional, and for each axis of symmetry there is one eigenfunction which is odd and one which is even with respect to that axis), so this method has to break down before that point. Using the Banuelos-Burdzy bound, one can handle all isosceles triangles whose apex has angle less than 38 degrees. Comment by — June 12, 2012 @ 3:05 am • It seems that this argument almost resolves the hot spots conjecture in the case of a narrow isosceles triangle (in which Banuelos-Burdzy excludes the possibility of an odd eigenfunction), except for the following issue: an even eigenfunction on the isosceles triangle is indeed also a Neumann eigenfunction on one of the two right-angled halves of that triangle, and hence by the hot spots conjecture for right-angled triangles (which follows, among other things, from this paper of Atar-Burdzy which covers the lip domain case), the extrema can only occur on the boundary of the right triangles. Unfortunately this also includes the axis of symmetry of the original triangle as well as the boundary of that triangle; if we can exclude an extremum occurring in the interior of this axis of symmetry then we are done. I think there are some extra monotonicity properties known for eigenfunctions of right triangles that can be extracted from the Atar-Burdzy paper which may be useful in this regard; I'll go have a look. Comment by — June 11, 2012 @ 6:16 pm • Aha! It appears that in Theorem 3.3 of the Banuelos-Burdzy paper it is shown that in a right-angled triangle ABC with right angle at B, if the second eigenfunction is simple, then (up to sign) it is non-decreasing in the AB and BC directions, which among other things shows that it cannot have an extremum in the interior of AB or BC (without being constant on those arcs, which by unique continuation would make the entire eigenfunction trivial). So, putting everything together, it seems that this implies that the hot spots conjecture is true for thin isosceles triangles, and more generally for isosceles triangles in which the second eigenfunction has multiplicity one and is even. I've written up the details on the wiki: http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture#Isosceles_triangles Comment by — June 11, 2012 @ 6:31 pm
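The arithmetic behind the $2j_0/\pi \approx 1.53$ threshold and the 38-degree figure quoted above (a back-of-envelope check with scipy; it assumes that for a tall isosceles triangle of unit equal side the diameter is that side and the width perpendicular to the symmetry axis is the base $2\sin(t/2)$, $t$ the apex angle):

```python
import numpy as np
from scipy.special import jn_zeros

j0 = jn_zeros(0, 1)[0]        # first zero of J_0: 2.4048...
ratio = 2 * j0 / np.pi        # Banuelos-Burdzy threshold: 1.5307...
# diameter/width = 1 / (2 sin(t/2)) >= ratio  <=>  t <= 2 arcsin(1/(2 ratio))
t_max = 2 * np.degrees(np.arcsin(1 / (2 * ratio)))
print(f"threshold = {ratio:.4f}, apex angle bound = {t_max:.1f} degrees")
```

This prints a threshold of 1.5307 and an apex-angle bound of about 38.1 degrees, matching the figure quoted in the comment above.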
15. I wanted to suggest another approach to the problem using the connection between the Laplacian and Brownian motion. Suppose I start off a Brownian motion at time 0 at some point $x$ in the triangle, and the Brownian motion reflects off the sides of the triangle. At time $t$ the probability density for the Brownian motion to be at location $y$ is given by the solution $v(t,y)$ of the heat equation with $v(0,y)=\delta(x-y)$ and Neumann boundary conditions. So if I can show that the Brownian motion is eventually more likely to be in a corner (per unit area) than anywhere else, that will give us the result. Here's a way to approach it: Suppose I fix $x$. I ask how many ways there are for Brownian motion to get from $x$ to $y$ in time $t$. For $t$ large, if $y$ is in a corner I might imagine that there are more ways for Brownian motion to get to $y$ than otherwise, because the Brownian motion has all the ways it would in free space, but also a ton of new ways that involve bouncing off the walls. Perhaps one can use some argument showing that there are more paths into some corners than anywhere else in the triangle. (Perhaps using the Strong Markov Property and Coupling?) Comment by paultupper — June 8, 2012 @ 5:36 pm • Sorry, I just realized that Chris had already proposed this approach in http://www.math.missouri.edu/~evanslc/Polymath/Polymath.pdf and referenced the following paper where they do something similar. http://en.scientificcommons.org/42822716 Comment by paultupper — June 8, 2012 @ 7:16 pm
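A crude Monte Carlo rendering of this reflected-Brownian-motion picture (a sketch only: the triangle, step size, and time horizon are arbitrary choices, and since the long-time density of reflected Brownian motion is uniform, any corner excess seen at moderate times reflects the slowly decaying second mode rather than the stationary law):

```python
import numpy as np

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.45, 0.6]])   # an acute test triangle
# inward half-plane description of the triangle: n . x >= d for each edge
normals, offsets = [], []
for i in range(3):
    p, q = V[i], V[(i + 1) % 3]
    n = np.array([-(q - p)[1], (q - p)[0]])
    n /= np.linalg.norm(n)
    if n @ (V.mean(0) - p) < 0:
        n = -n
    normals.append(n); offsets.append(n @ p)

def reflect_inside(X):
    # mirror any walker that stepped across an edge; a few passes handle corners
    for _ in range(5):
        for n, d in zip(normals, offsets):
            s = X @ n - d
            out = s < 0
            X[out] -= 2.0 * s[out, None] * n
    return X

rng = np.random.default_rng(1)
T, dt, walkers = 0.3, 1e-3, 20000
X = np.tile(V.mean(0), (walkers, 1))                  # all start at the centroid
for _ in range(int(T / dt)):
    X = reflect_inside(X + rng.normal(0.0, np.sqrt(dt), X.shape))

# occupation per unit area near each vertex (ball of radius r, clipped to the wedge)
r = 0.08
for i in range(3):
    u1, u2 = V[(i + 1) % 3] - V[i], V[(i + 2) % 3] - V[i]
    ang = np.arccos(u1 @ u2 / (np.linalg.norm(u1) * np.linalg.norm(u2)))
    frac = np.mean(np.linalg.norm(X - V[i], axis=1) < r)
    print(f"vertex {i}: angle {np.degrees(ang):5.1f} deg, density {frac/(0.5*ang*r*r):.2f}")
```

The per-vertex normalisation by the wedge area $\tfrac12 \theta r^2$ is what makes the "per unit area" comparison in the comment meaningful.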
16. If we could find a heat distribution on a right triangle such that the hot spot is never on the vertices or the hypotenuse or the short side, then could we combine two such heat distributions symmetrically in an isosceles acute triangle and get an isosceles acute triangle such that the hot spots are always on the interior? Does such a distribution exist? Is it among the known possibilities? Comment by — June 8, 2012 @ 8:46 pm • An issue is that if you reflect a right triangle and its eigenfunction, the eigenfunction you get for the larger triangle might not be the *second* eigenfunction. Comment by Anonymous — June 8, 2012 @ 9:40 pm • Theorem 3.3 of Banuelos-Burdzy shows that in a right angled triangle, the hot spots of the second eigenfunctions are only at the two acute vertices, so this potential counterexample is ruled out. Comment by — June 12, 2012 @ 3:06 am 17. Thanks for the detailed details. It seems the problem you want to solve is $c^TAc = \lambda_N c^TMc$? But solving $Ac=\lambda_N Mc$ suffices (maybe it is a necessary condition too?). But in any case it seems that the finite element method doesn't lead to a problem of "find the eigenvector of this matrix" so much as "solve this linear system". So in that sense it seems fundamentally different from the approach using graphs $G_n$ that I proposed (though I still suspect that the graphs $G_n$ correspond to some finite element method?) Looking at the structure of $u_N$, it is clear that its maximum will occur at one of the vertices of the mesh… that is, the hot spots conjecture for the finite elements would be the claim that "For each N, the vector c is such that the extrema of c correspond to vertices of the mesh which are on the boundary of the big triangle (and most likely should in fact be at the sharpest corners of the big triangle)". So maybe that can be shown somehow. As to the eigenfunction near the corner looking like the eigenfunction of the wedge, it certainly sounds plausible but I am not sure what the heuristic argument would be. If I start a point mass of heat in the corner and let heat flow, then for a *short* time t, I don't expect the heat flow to differ from the heat flow in a wedge (not enough time has elapsed for much heat to have reflected off the third side of the triangle). But for *long* time (which is where the second eigenfunction comes into play) it seems like the third side of the triangle should matter. But what you are saying does still "feel" right in that I would be surprised if it weren't the case. Speaking of wedges, it must be the case that the heat flow on the infinite wedge of angle $\alpha = \frac{2\pi}{N}$ can be obtained as follows: Divide, centered at the origin, your space into wedges $j\cdot\frac{2\pi}{N} < \theta < (j+1)\cdot\frac{2\pi}{N}$. Then look at the unreflected heat flow on $R^2$ and then "fold up" (as in origami) $R^2$ into a single wedge, adding up the values of heat. In the case of an equilateral triangle it must be possible to tile $R^2$ with the triangle, look at the unreflected heat flow, and then "fold it up" as well. Perhaps some trick involving folding can be used for a general triangle? Comment by — June 9, 2012 @ 4:42 am • Chris, you're right about the transposes: the eigensystem to solve is $AC = \lambda_N MC.$ I think your rephrasing of the conjecture in this setting is promising. The acuteness of the angle would need to be built into the interpretation somehow, since the entries of the matrices involved will be affected by the affine maps from the reference right triangles to the tessellating ones. I believe this is key. A decent way to start may be to begin with an isosceles triangle, divide it into 6 smaller right triangles, and explicitly write down the resulting matrices. The approximation $u_6$ will unfortunately be a lousy approximation to the true eigenfunction, but the exercise may still be illuminating. I think that because the sharper corner will be further from the third side, it will take longer for a bump to diffuse from there to the third side. It will dissipate faster due to the exponential decay in time of the second eigenfunction. This was the heuristic I was using. Comment by — June 9, 2012 @ 5:03 am 18. [...] "Hot spots conjecture" proposal has taken off, with 42 comments as of this time of writing. As such, it is time to take the [...] Pingback by — June 9, 2012 @ 5:51 am • As you can see, I've just officially designated this project as the Polymath7 project, and created a discussion page (for all meta-mathematical discussion of the project at http://polymathprojects.org/2012/06/09/polymath7-discussion-thread/ ) and a wiki page (at http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture ) for holding all the progress to date. I'm also going through the comments to clean up the LaTeX (see http://polymathprojects.org/how-to-use-latex-in-comments/ ). One slightly more mathematical question: what is the reference for the fact that the hot spots conjecture is resolved in the right-angled case? Does the Banuelos-Burdzy result cover this case? Also, in addition to the perturbed right-angled triangle case proposed earlier, another case that might be easier than the general case is that of an almost degenerate triangle, in which the three vertices are nearly collinear. It may be that in that case one can use perturbative techniques to get asymptotics for the second eigenfunction that can solve the conjecture for sufficiently degenerate triangles (and, if one is optimistic, maybe one could then use a continuity argument to push to the general case)… Comment by — June 9, 2012 @ 5:57 am • The article: The "Hot Spots" Conjecture for Domains with Two Axes of Symmetry David Jerison and Nikolai Nadirashvili Journal of the American Mathematical Society Vol. 13, No. 4 (Oct., 2000), pp. 741-772 which is available at http://www.math.purdue.edu/~banuelos/Papers/hotarticle.pdf seems to imply that if a convex region has symmetries in the x and y axes then the conjecture will hold.
Since four copies of a right triangle can be combined to form such a region, they should satisfy the conjecture. Also two copies of an isosceles triangle could be combined to form such a region, so they should also satisfy the conjecture. Comment by — June 9, 2012 @ 5:04 pm • What worries me is that if you create a larger domain by reflection (and reflecting the eigenfunction), there is no guarantee that the resulting eigenfunction is still the second eigenfunction of the larger domain. A (possibly incorrect but plausible) heuristic is that larger domains have a smaller (i.e. closer to zero) second eigenvalue (because in a larger domain it takes longer for the heat to reach equilibrium maybe?). If that is the case then it would be legal to make arguments on "folding in" a triangle (e.g. replacing an isosceles triangle with the one you get by folding it in half) but illegal to make arguments on "unfolding" a triangle (e.g. replacing a right triangle with the isosceles triangle it folds out to). At least in the case of an obtuse isosceles triangle with angles $\alpha, \alpha, \gamma$ where $\alpha < \frac{\pi}{2} < \gamma$, the second eigenfunction seems to be anti-symmetric with its extrema at the corners of angle $\alpha$. If this is "folded in" you get the function which is identically zero. On the other hand, if you start with the right triangle with angles $\alpha,\frac{\pi}{2}, \frac{\gamma}{2}$ and consider its second eigenfunction, when you "unfold it" to the obtuse isosceles triangle from before, while it is still an eigenfunction of the larger triangle, it is not the second eigenfunction (which was antisymmetric). If it is true that "folding in" is "legal", it may be possible to fold in an isosceles triangle as Luis S. suggested above to furnish a proof. Comment by — June 9, 2012 @ 6:59 pm • Together with Richard Laugesen (see "Minimizing Neumann fundamental tones of triangles: an optimal Poincaré inequality". J. Differential Equations, 249(1):118-135, 2010) I showed that the second eigenfunction for isosceles triangles is symmetric for "tall" isosceles triangles (apex angle below $\pi/3$) and antisymmetric for "wide" isosceles triangles (apex angle above $\pi/3$). The transition is at equilateral, with both cases having the same eigenvalue. This also means that a half of a "tall" isosceles triangle has the same second eigenfunction as the whole isosceles triangle. In general for Neumann boundary conditions there is no domain monotonicity. One can take a rectangle where the second eigenfunction is based on the longer side and put another one inside of it, along the diagonal, that has a longer long side. This will give a much smaller domain contained in a larger one, but with a smaller eigenvalue. Comment by Bartlomiej Siudeja — June 12, 2012 @ 5:32 pm • Very nice! Combining this with Luis's argument and Banuelos-Burdzy's monotonicity, it now appears that we have the hot spots conjecture for all subequilateral isosceles triangles. It seems like the superequilateral isosceles case is the one to focus on next… now we have odd eigenfunctions and so we have to somehow understand mixed Dirichlet-Neumann lowest eigenfunctions on right-angled triangles… Comment by — June 12, 2012 @ 7:09 pm • For this, perhaps start with a thin wedge $\Omega:= \{(r,\theta)\,\vert\, 0< r< R,\ 0< \theta< \epsilon\}$. The second Neumann eigenfunction can be written down on this explicitly. Now consider a process of deforming the domain whereby the curvilinear piece $r=R$ of the boundary $\partial \Omega$ is slowly flattened to a line segment.
Comment by — June 9, 2012 @ 7:29 pm • That's a good idea; I'll try computing the details. A related thought I had was to start with a thin isosceles triangle (say with vertices $(0,0), (1,+\varepsilon), (1, -\varepsilon)$), take the second eigenfunction $u_\varepsilon$ (normalized in $L^2$, say), rescale it to a fixed triangle (say with vertices $(0,0), (1,1), (1,-1)$) by considering the rescaled function $\varepsilon^{1/2} u_\varepsilon(x, \varepsilon y)$, and investigate what happens in the limit $\varepsilon \to 0$ (I believe there is enough compactness to extract a useful limit, but haven't checked this yet). My guess is that the limit should obey some exactly solvable ODE, but I haven't worked out the details yet; I should probably do the sector case first to build some intuition. Comment by — June 9, 2012 @ 8:33 pm • Well, the sector $\{ (r \cos \theta, r \sin \theta): 0 \leq r \leq R; 0 \leq \theta \leq \alpha \}$ at least was easy to work out. By separation of variables, one can use an eigenbasis of the form $u(r,\theta) = u(r) \cos \frac{\pi k \theta}{\alpha}$ for $k=0,1,2,\ldots$. For any non-zero k, the smallest eigenvalue of that angular wave number is the minimiser of the Rayleigh quotient $\int_0^R (u_r^2 + \frac{\pi^2 k^2}{\alpha^2} \frac{u^2}{r^2})\, r\, dr \,/ \int_0^R u^2\, r\, dr$; the $k=0$ case is similar but with the additional constraint $\int_0^R u\, r\, dr = 0$. This already reveals that the second eigenvalue will occur only at either k=0 or k=1, and for $\alpha$ small enough it can only occur at $k=0$ (because the $k=1$ least eigenvalue blows up like $1/\alpha^2$ as $\alpha \to 0$, while the $k=0$ eigenvalue is constant). Once $\alpha$ is small enough that we are in the k=0 regime (i.e. the second eigenfunction is radial), the role of $\alpha$ is no longer relevant, and the eigenfunction equation becomes the Bessel equation $u_{rr} + \frac{1}{r} u_r + \lambda u = 0$, which has solutions $J_0(\sqrt{\lambda} r)$ and $Y_0(\sqrt{\lambda} r)$. The $Y_0$ solution is too singular at the origin to actually be in the domain of the Neumann Laplacian, so the eigenfunctions are just $J_0(\sqrt{\lambda} r)$, with $\sqrt{\lambda} R$ being a zero of $J'_0$. The second eigenfunction then occurs when $\sqrt{\lambda} R$ is the first non-trivial zero of $J'_0$ ($3.8317\ldots$, according to Wolfram Alpha). This is a function with a maximum at the origin and a minimum at the circular arc, consistent with the hot spots conjecture. Now I'll try to see what happens for a thin isosceles triangle… Comment by — June 9, 2012 @ 9:18 pm
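A few-line check of these Bessel numerics (scipy assumed), including the non-degeneracy of the two stationary points, which is used again in the thin-triangle argument further below:

```python
import numpy as np
from scipy.special import j0, jnp_zeros

jp1 = jnp_zeros(0, 1)[0]          # first positive zero of J_0': 3.83170597...
print("sqrt(lambda) R =", jp1)
r = np.linspace(0.0, 1.0, 6)      # take R = 1
print("J_0(jp1 * r):", j0(jp1 * r))   # max 1 at r=0, min ~ -0.4028 at r=1
# non-degeneracy from the ODE u'' + u'/r + u = 0 (unit frequency):
# u''(0) = -1/2, and at jp1 (where u' = 0) we get u'' = -J_0(jp1) > 0
print("curvatures at the two stationary points:", -0.5, -j0(jp1))
```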
• I think the case of a thin isosceles triangle $T_\varepsilon$ with vertices $(0,0),(1,+\varepsilon), (1,-\varepsilon)$ is also going to be OK. Let $u_\varepsilon$ be an $L^2$ normalised second eigenfunction on $T_\varepsilon$ with eigenvalue $\lambda_\varepsilon$, thus $\int_{T_\varepsilon} u_\varepsilon^2 = 1$, and by integration by parts $\int_{T_\varepsilon} |\nabla u_\varepsilon|^2 = \lambda_\varepsilon$ and $\int_{T_\varepsilon} |\nabla^2 u_\varepsilon|^2 = \lambda_\varepsilon^2$. Ideally I would like to continue integrating by parts but I am having a bit of trouble dealing with the boundary terms; still the theory of regularity of Neumann eigenfunctions on a domain as nice as a triangle is presumably very well developed, so let me assume that we can get higher regularity bounds also. Using a Poincaré inequality, I think one can show that $\lambda_\varepsilon$ is bounded above uniformly in the limit $\varepsilon \to 0$, and by perturbing the Bessel function example from the sector case I think one can also get a bound from below. So (after passing to a subsequence if necessary) we can assume that $\lambda_\varepsilon$ converges to a limit $\lambda$ as $\varepsilon \to 0$. Now we look at the rescaled functions $v_\varepsilon(x,y) := \varepsilon^{1/2} u_\varepsilon(x,\varepsilon y)$ on the triangle $T_1$. These functions have unit $L^2$ norm, and obey the $H^1$ bounds $\int_{T_1} |\partial_x v_\varepsilon|^2 + \varepsilon^{-2} |\partial_y v_\varepsilon|^2 = O(1)$ and the $H^2$ bounds $\int_{T_1} |\partial_{xx} v_\varepsilon|^2 + \varepsilon^{-2} |\partial_{xy} v_\varepsilon|^2 + \varepsilon^{-4} |\partial_{yy} v_\varepsilon|^2 = O(1)$. This gives enough compactness to let $v_\varepsilon$ converge strongly in $H^1$ and weakly in $H^2$ (again after passing to a subsequence) to some limit $v$, which will be constant in the y direction, thus $v(x,y) = v(x)$. Now, the functions $v_\varepsilon$ obey the PDE $\partial_{xx} v_\varepsilon + \varepsilon^{-2} \partial_{yy} v_\varepsilon + \lambda_\varepsilon v_\varepsilon = 0$ and the boundary conditions $\partial_y v_\varepsilon = \pm \varepsilon^2 \partial_x v_\varepsilon$ when $y = \pm x$. We rewrite the PDE as $\partial_{yy} v_\varepsilon = \varepsilon^2 (-\partial_{xx} v_\varepsilon - \lambda_\varepsilon v_\varepsilon).$ The RHS is $\varepsilon^2 (- v''(x) - \lambda v(x))$ plus errors which are $o(\varepsilon^2)$ in some suitable function norm. Integrating this in y (from -x to x) and comparing with the boundary condition, we get that $2 \varepsilon^2 v'(x) + o(\varepsilon^2) = 2x \varepsilon^2 ( -v''(x) - \lambda v(x) ) + o(\varepsilon^2)$ and thus on taking limits we see that v obeys the Bessel equation $v'' + v'/x + \lambda v = 0$. The same sort of analysis as in the sector case then tells us that v has to be (a constant multiple of) $J_0(\sqrt{\lambda} x)$ with $\sqrt{\lambda}$ being the first root of $J'_0$, so it has a non-degenerate maximum at x=0 and non-degenerate minimum at x=1. Hopefully, there is enough uniform regularity on the $v_\varepsilon$ to then conclude that $\partial_{xx} v_\varepsilon$ is bounded away from zero near x=0 and x=1 for $\varepsilon$ small enough, which (in conjunction with the convergence properties of $v_\varepsilon$ to $v$, and the Neumann boundary conditions) should ensure that $v_\varepsilon$ still has a non-degenerate maximum at x=0 and a non-degenerate minimum at x=1. This would resolve the hot spots conjecture for all sufficiently degenerate isosceles triangles. Comment by — June 9, 2012 @ 10:09 pm • This is great! I think I have some idea on how to obtain the lower bound on the eigenvalue through an analytic perturbation argument from the sector. I've put these on the wiki under the 'thin sector' section. Comment by — June 10, 2012 @ 2:45 am
19. [...] 7 is officially underway. It is here. It has a wiki here. There is a discussion page here. [...] Pingback by — June 9, 2012 @ 5:22 pm 20. Here is a probabilistic argument that seems to mostly prove the conjecture for the graph $G_1$: http://www.math.missouri.edu/~evanslc/Polymath/TwoWalkers Of course, as the graph Laplacian for $G_1$ is just a 4×4 matrix, this is probably unnecessarily complicated. Comment by — June 9, 2012 @ 7:21 pm • Chris, Thanks for this note. It's very clear. Do you think it's possible to do a similar coupling if instead of your $G_n$ for $n>1$ you use a different graph that approximates the triangle? I was thinking of just taking a fine rectangular grid and just trimming everything that falls outside the triangle. Comment by paultupper — June 10, 2012 @ 3:57 pm • It may be possible… it would all come down to finding the correct "action charts" (coupling). I did play around with a few different graphs a while back to no avail, but it may be that I didn't come across the correct one. It may also help to take advantage of relations between $a,b,c$ besides just the fact that $a<b<c$. For example, we know that $a<b+c,\ b<a+c$, and $c<a+b$. As this idea is ultimately derived from that in the Banuelos and Burdzy paper, I was hoping that at least it might work in the case where the triangle is obtuse (which would impose an additional relation between $a,b,$ and $c$)… but I haven't managed to make it work there. Ultimately, however, what the argument in the Banuelos and Burdzy paper showed is that obtuse triangles are "monotonicity preserving", which is a stronger property than just having the extrema of the second eigenfunction in corners. If the acute triangles are not monotonicity preserving then such a coupling argument may not be useful. So this does lead to another question: Are acute triangles monotonicity preserving in any sense? Computer simulation might show this not to be the case. Also, as a side question, is there any way to characterize monotonicity preserving in terms of the eigenfunctions of the Laplacian? Comment by — June 10, 2012 @ 6:18 pm 21. It occurs to me that there is a chance of establishing the conjecture for acute triangles by rigorous numerics. Once one can deal with the boundary case of nearly degenerate acute triangles (in which one angle is very small), the remaining configuration space is compact. If we can verify the conjecture for a sufficiently dense mesh of possible angles $(\alpha,\beta,\gamma)$ of acute triangles, and also get some stability for each triangle in the mesh (in particular, by showing that all other stationary points of the eigenfunction are well away from the maximum and minimum, and that the second derivatives have some lower bound at the critical points) then one may be able to cover all possible cases. This wouldn't be a very elegant way to solve the problem, but it would at least be a solution. Comment by — June 9, 2012 @ 10:59 pm • Just to clarify: by dense mesh of possible angles, do you mean a dense subset of $\mathbb{R}^3$, with each point corresponding to a triangle? The challenge I foresee with a purely numerical attack with the usual tools is the following. There will be three levels of approximation. The first will be due to approximating the eigenfunctions by functions in a finite-dimensional function space (it may, or may not, be a subset of $H^1$ depending on the choice of method).
The second will be due to computing the discrete eigenfunctions and eigenvalues by an iterative process. The third will be sampling enough acute triangles. We won't be getting the true eigenvalues or eigenfunctions except in very special circumstances. I think that since the standard numerical error estimates for eigen-pair calculations rely on the domain shape, it would be challenging to get approximations of equal quality on the dense mesh you describe. Comment by — June 10, 2012 @ 4:13 am • Well, the problem is invariant under scaling, so we don't need to specify the sides, only the angles. As the angles add up to $\pi$, there are only two degrees of freedom, so the mesh will be in a compact subset of ${\bf R}^2$ rather than ${\bf R}^3$. It is true that we can only hope to get approximate eigenfunctions, but this may be good enough using the Rayleigh quotient formalism to get good bounds on eigenfunctions of nearby triangles (assuming that one has lower bounds on the spectral gap between the second eigenvalue and the third eigenvalue of the triangle in the mesh). The one catch is that one needs rather high regularity bounds in order to prevent the maximum or minimum from migrating off of an edge or corner into the interior; something like uniform bounds on the second derivative may be needed. I'm starting to look at the literature on stability of the second eigenfunctions of the Neumann problem. I found a paper of Carmona and Zheng at http://www.sciencedirect.com/science/article/pii/S0022123684710858 which looks promising, but haven't digested it yet. But there is a paper of Banuelos and Pang at http://www.emis.de/journals/EJDE/2008/145/banuelos.pdf which shows (among other things) that if one perturbs a triangle by a small amount, then the second eigenvalue only moves by a small amount, and (in the case of simple spectrum) the second eigenfunctions of the perturbation converge uniformly to second eigenfunctions of the original triangle in the limit. Comment by — June 10, 2012 @ 3:02 pm • In terms of the regularity of eigenfunctions, I think the stability of the almost-degenerate case will be most useful to start with. There are some results about the regularity of eigenfunctions in polygons, depending on the interior angles. The larger the angle, the worse the regularity – but for triangles, we have no re-entrant corners, so I think we will be able to find good bounds on weak second derivatives. Here are another couple of references which may be of interest. I unfortunately don't have access to the books until Tuesday. - Grisvard's book on Elliptic PDE in Nonsmooth Domains. I have a recollection that Chapter 2 considers the regularity of eigenfunctions in terms of interior angles. - Babuska and Osborn's lovely chapter in the Handbook of Numerical Analysis, which, in their quest for error estimates for the approximations, provides some estimates on higher derivatives of the eigenfunctions themselves: http://www.sciencedirect.com/science/article/pii/S1570865905800420 - Rodriguez and Hernandez consider the eigenvalues of the shifted Laplacian $-\Delta u + u$ via a variational approach. Estimate (2.10) in their paper bounds the $H^{1+\alpha}$ norm of the eigenfunction, where $\alpha$ depends on the interior angles of the polygon under consideration. For the triangle, $\alpha =1$ http://www.ams.org/journals/mcom/2003-72-243/S0025-5718-02-01467-9/S0025-5718-02-01467-9.pdf Comment by — June 10, 2012 @ 3:49 pm
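A small sketch of how the compact parameter space in this "mesh of angles" idea looks (Python; the one-degree spacing is an arbitrary choice, and the near-degenerate strip would be handled separately as discussed above):

```python
import numpy as np

h = np.radians(1.0)                     # mesh spacing: 1 degree (arbitrary)
alphas = np.arange(h, np.pi / 2, h)
cells = [(a, b) for a in alphas for b in alphas
         if max(a, b, np.pi - a - b) < np.pi / 2 - 1e-12]   # all angles acute
print(len(cells), "acute angle pairs to check at 1-degree resolution")
# each (alpha, beta) cell would then need an approximate second eigenpair,
# a spectral-gap lower bound, and a stability radius covering its neighbours
```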
22. A trivial remark: the second eigenfunction on any domain cannot have both a strict local maximum and a strict local minimum in the interior of the domain. This is because one could shave a little bit off of both the local maximum and the local minimum to reduce the Rayleigh quotient while still staying mean zero, contradicting the fact that the second eigenfunction must minimise the Rayleigh quotient on mean zero functions. Comment by — June 10, 2012 @ 12:55 am • I don't quite see it. If you shave the max and min, certainly the integral of grad square goes down, but so does the L^2 norm in the denominator of the Rayleigh quotient. Comment by Luis S — June 10, 2012 @ 2:08 am • Oops, you're right; I had thought that the decrease of the L^2 norm was lower order, but on revisiting the computation I see that they both decrease proportionally, so what I said was nonsense. [EDIT: in fact, there is a construction of Bass and Burdzy at http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.dmj/1092403814 which gives a counterexample to the hot spots conjecture (for a domain with holes) in which both the maximum and minimum are attained in the interior.] Comment by — June 10, 2012 @ 3:13 am • I think I understand the idea in (22), and finding a perturbation that makes the Rayleigh quotient strictly decrease is the difficulty. For the one-dimensional domain $[0, 1]$, I've calculated that $u(x) = \cos(\pi x)$ is the first non-trivial eigenfunction for the Laplacian on $[0, 1]$ with a Neumann boundary condition. If a mean zero function $w$ on $[0, 1]$ has a strict local max in the open interval $(0, 1)$, is it easy to perturb $w$ while keeping the mean at zero and decreasing the Rayleigh quotient? (Technically, the Rayleigh quotient of the negative of the Laplacian, so that the first non-trivial eigenvalue of the negative Laplacian on $[0, 1]$ with Neumann boundary condition is $\pi^2$.) Comment by — June 12, 2012 @ 8:31 pm 23. To record another idea: the Schwarz-Christoffel map from the upper half plane to the interior of a triangle with angles $\pi a,\pi b$, and $\pi(1-a-b)$ is given by $z=f(\zeta)=\int^\zeta \frac{dw}{(w-1)^{1-a} (w+1)^{1-b}}$. (I got this from http://en.wikipedia.org/wiki/Schwarz%E2%80%93Christoffel_mapping). The mapping is bijective and biholomorphic. The question about the second eigenfunction of the Neumann Laplacian on the triangle will be converted to a question about the second eigenfunction of a different operator (one which explicitly includes the angles in its symbol). What might one gain from this approach? One moves to working on a fixed domain (the upper half plane) with a variable operator, instead of all acute angles with a fixed operator (the Laplacian). I have not yet worked out what the variable operator looks like, but perhaps someone already knows a good reference? Zeev Nehari's book may have the desired identity. Comment by — June 10, 2012 @ 4:24 am • This approach is harder than necessary. The map $\left( \begin{array}{cc} a_1 & a_2 \\ 0 & b_2 \end{array} \right)$ takes me from the right triangle $(0,0), (1,0), (0,1)$ to the acute triangle $(0,0), (a_1,0), (a_2,b_2)$. This map is affine, so calculating pullbacks is not hard. One immediately sees the importance of the nearly-degenerate case, since in this instance the Jacobian of the transformation is large. The Ritz quotient on the acute triangle can be computed in terms of functions on the right triangle, with the angles of the acute triangle appearing explicitly.
Pull-backs of minimizers of the (appropriate) Ritz quotient on the acute triangle will be minimizers of the appropriate functional expressed on the right triangle. I'll type this up with details for the wiki. Comment by — June 10, 2012 @ 10:01 pm • One advantage of the Schwarz-Christoffel approach (as opposed to the affine approach) is that it preserves the Neumann boundary condition, which is useful if one wants to perform reflection techniques. Also the Laplacian just gets multiplied by a conformal factor, transforming the original eigenfunction equation $-\Delta u= \lambda u$ into a weighted eigenfunction equation $-\Delta u = \lambda \Omega u$ for some fixed weight function $\Omega$. The downside is of course that this weight is somewhat messy… Comment by — June 12, 2012 @ 2:48 am • Good point! My motivation for doing this calculation is to attempt what you suggested in Thread 21 above. In essence, getting a numerical approximation via the Schwarz mapping will be slower than using a finite element strategy on the mapped triangle. So we decided to come up with an efficient yet robust approximation method, which could be used to probe parameter space fast. Joe Coyle and I did the calculations for the affine-mapped triangle, and it doesn't look too bad. The Ritz quotient is modified, but by a matrix which has constant entries; the eigenvalue problem for the Laplacian is recast into one for an elliptic operator but with constant coefficients. We've written down two formulations of the problem on the wiki. Such calculations are frequently seen in the finite element literature (we used the book by Peter Monk on Finite Element Methods for Maxwell's Equations). I'm a bit worried about garbling the calculation on this blog, and have typed it up on the wiki: http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture#Computational_strategies We have some preliminary code running. Once we're confident it is working, we'd be happy to test regions of parameter space you think are of interest. Comment by — June 12, 2012 @ 3:51 am • To record one of our reformulations here: let the acute triangle have corners $(0,0), (a_1,0), (a_2,b_2)$ and the reference triangle have corners $(0,0), (1,0), (0,1)$. Let $B:= \left(\begin{array}{cc} a_1 & a_2\\ 0 & b_2\end{array} \right)$. This is the matrix corresponding to the map which takes the reference triangle to the acute one. The Neumann problem for the non-constant eigenfunctions is to find $(\hat{u}, \lambda)$ so that $\displaystyle -\nabla \cdot (M \nabla \hat{u}) = \lambda \hat{u}\,\,\,\mbox{in}\,\hat{\Omega}, \quad M\nabla \hat{u} \cdot \hat{n} =0\,\,\mbox{on}\,\partial\hat{\Omega},\quad \int_{\hat{\Omega}} \hat{u}\, \det(B)\, dV = a_1b_2\int_{\hat{\Omega}} \hat{u}\, dV= 0,$ where the matrix $\displaystyle M = B^{-1} (B^{-1})^T = \frac{1}{a_1^2b_2^2} \left(\begin{array}{cc} a_2^2+b_2^2 & -a_1a_2\\ -a_1a_2 & a_1^2\end{array} \right).$ Comment by — June 12, 2012 @ 3:57 am
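A quick symbolic check of the matrix $M$ above (a sketch with sympy assumed):

```python
import sympy as sp

a1, a2, b2 = sp.symbols("a1 a2 b2", positive=True)
B = sp.Matrix([[a1, a2], [0, b2]])
M = B.inv() * B.inv().T
M_claimed = sp.Matrix([[a2**2 + b2**2, -a1 * a2],
                       [-a1 * a2, a1**2]]) / (a1**2 * b2**2)
print(sp.simplify(M - M_claimed))    # the zero matrix
print(B.det())                       # a1*b2, the Jacobian weight det(B)
```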
24. A somewhat vague/random idea: Is it the case that the eigenfunctions/eigenvalues depend continuously on the boundary data? That is, if the normal derivative at the boundary weren't zero (Neumann condition) but were slightly off from zero, would the corresponding eigenfunctions/eigenvalues be close? If so, maybe it is possible somehow to get an approximate eigenfunction by taking the eigenfunctions for the infinite wedges of angles $\alpha,\beta$, and $\gamma$, sticking them at the three corners of the triangle, and superimposing (summing) them. For a given edge the contributions of the functions from the adjacent angles would satisfy $\frac{\partial u}{\partial \nu}=0$… so it would only be the contribution of the function from the opposite angle that causes deviation from the Neumann boundary condition. Maybe this would only be possible for certain choices of the angles (e.g. they are integer multiples of one another) in order to get the eigenvalues of the three functions to match. Comment by — June 11, 2012 @ 2:53 am • Chris, let me first see if I understand the idea. Suppose $(u,\lambda)$ solved $-\Delta u - \lambda u =0$ with zero Neumann data. Now suppose $(v,\lambda)$ solves $-\Delta v -\lambda v=0$ but with non-zero Neumann data. Is your question whether $(u-v)$ is small? It needn't be, since the problem for $v$ is not well posed. Say $v$ solved the problem with non-zero Neumann data. One could add an arbitrary multiple of $u$ to $v$, and the result would also solve the same problem. On the other hand, your suggestion of using the eigenfunctions of the infinite wedges and using a superposition is along the lines of what I'd suggested earlier, and in the Dirichlet case, is the method of particular solutions. You have to keep more than just the first eigenfunction for each corner, though. Comment by — June 11, 2012 @ 3:41 am • Yah, this doesn't look too promising then. And indeed you did mention superposition/the method of particular solutions before… so my comment can be disregarded :-) Comment by — June 11, 2012 @ 6:15 am 25. Some assorted comments: 1. In an acute triangle, I think I can show (by using Bessel function expansions in each of the corners) that the eigenfunctions are $C^{2,\alpha}$ for some $\alpha>0$. In fact at each vertex $x_0$, the eigenfunction looks like a multiple of the Bessel function $J_0( \sqrt{\lambda} |x-x_0| )$ plus higher order terms which decay like $O(|x-x_0|^{2+\alpha})$ for some $\alpha>0$ depending on the acuteness of the angle at that vertex (the narrower the angle, the higher order the decay; indeed, if the angle is $\theta$, the decay should be $O( |x-x_0|^{\pi/\theta})$ if I did my calculation correctly). Among other things, this shows that if an eigenfunction is non-zero at a vertex, then it is a strict local maximum (if it is positive) or a strict local minimum (if it is negative). Of course, if an eigenfunction is zero at a vertex, then it is not an extremum there, so this case is probably not critical to the analysis. 2. If one has an acute triangle in which (a) the second eigenvalue is simple, (b) the extrema of the second eigenfunction only occur on the boundary, and (c) at each extremum, the Hessian is strictly definite (negative definite for maxima, positive definite for minima), then I think one can show that for all sufficiently close perturbations of the triangle, the extrema continue to stay on the boundary. The point is that the perturbations are known to be close in the uniform topology (Theorem 1.5 of Banuelos-Pang), and hence locally close in C^2 by elliptic regularity (I think), and so the perturbation will also have strictly definite Hessian near the extrema of the original eigenfunction. From this and the Neumann boundary conditions, I think it is not possible for the perturbed eigenfunction to have a critical point near (but not on) the boundary. And if one is too far away from the extrema of the original eigenfunction, then from the uniform convergence it should be that the perturbation can't attain an extremum there. 3.
I'm playing around with trying to get good bounds on the eigenfunction for a narrow isosceles triangle ABC (with AB = AC, and angle BAC narrow) by reflecting the eigenfunction first across the base BC to get a new triangle BCA', and then reflecting across the two new edges A'B, A'C. The point is that this extended function still obeys the eigenfunction equation not just in the original triangle ABC, but in an extended region which among other things includes a sector with vertex A and sides equal to extensions of AB and AC by a factor of say 1.1. My plan is then to solve the eigenfunction equation in this sector using polar coordinates and Bessel function expansions. A key issue is that the Neumann boundary conditions don't extend beyond AB and AC, but otherwise things look relatively good. Don't have anything concrete to report yet on this though. (In thread 14 it seems we are closing in on solving the thin isosceles triangle case by other means, but it would be good to get an independent proof here as it has a better chance of extending to the non-symmetric case.) Comment by — June 11, 2012 @ 6:11 pm • The asymptotic behaviour you have at the corners is correct, I think. I'd described it earlier as 'in the neighbourhood of the corner with interior angle $\frac{\pi}{\alpha}$, the asymptotic behaviour of the (nonconstant) eigenfunctions should be of the form $r^\alpha\cos(\alpha \theta) + o(r^\alpha)$, where $r$ is the radial distance from the corner'; my rationale was that it is possible to rescale near a given vertex in terms of the distance to the corner, and in the rescaled coordinates, the geometry looks like a wedge. Comment by — June 11, 2012 @ 6:23 pm • After a surprisingly long and dirty amount of computation and estimation, I think I was able to get enough estimates on eigenfunctions in thin triangles to verify the hot spots conjecture for all sufficiently thin acute triangles ABC (where "thin" means that $\angle BAC$ is sufficiently small). I wrote up the details at http://michaelnielsen.org/polymath1/index.php?title=Thin_triangles . The basic idea is as follows. If we normalise ABC to have diameter 1, then the triangle ABC resembles a thin sector with vertex at A and unit radius, and we know from direct computation that the second eigenfunction of that sector is (up to constants) $J_0(j_1 r)$, where r is the distance to A, $j_1 = 3.8317\ldots$ is the first stationary point of $J_0$, and $J_0$ is the zeroth Bessel function. So we expect the eigenfunction u at ABC to be close to this function. One key property of this function is that its second radial derivative is bounded away from zero both near A and near the circular arc at the other end of the sector (because the stationary point of $J_0$ is non-degenerate). So we expect u to exhibit similar behaviour, thus we expect $|\partial_{rr} u| \sim 1$ both near A and near BC. Suppose for now that we can establish this bound. Now suppose for contradiction that we have an extremiser P of u in the interior of ABC. Since $J_0(j_1 r)$ attains its maximum at A and its minimum near BC, we can show that P has to either be near A or near BC. If P is near A, then $\partial_r u$ vanishes at P and at A, and so by Rolle's theorem we have $\partial_{rr} u = 0$ at some intermediate point between P and A, which contradicts the bound $|\partial_{rr} u| \sim 1$. Similarly, suppose that P is near BC.
If we let n be the outward normal to BC, then $\partial_n u = 0$ both at P and on BC, and so $\partial_{nn} u = 0$ for some intermediate point Q, which will also be close to BC. Because of uniform $C^2$ bounds on u (details of this are on the wiki), this implies that $\partial_{rr} u$ is small at Q, again contradicting the bound $|\partial_{rr} u| \sim 1$. The main technical difficulty is to actually establish the bound $|\partial_{rr} u| \sim 1$ near A and near BC. This is not too difficult near A as we have a very convergent Bessel expansion in that portion of the triangle, but it turned out to be annoyingly painful for me to get this bound near BC. Because of the need to control two derivatives, I couldn't directly compare with the sector eigenfunction $J_0(j_1 r)$, but instead had to analyse the Bessel expansion of the eigenfunction u both around B and around C (and in particular to focus on the top two terms of these expansions) and match together the expansions in order to get enough control on second derivatives of u to establish the claim. It's all a bit ad hoc and messier than it should be, and I hope that the argument will eventually be superseded by something slicker and stronger, but I think it does at least settle the thin case (and, in principle, even gives explicit (but poor) bounds on how thin the triangle needs to be, which is an improvement over the compactness methods I was proposing earlier). Comment by — June 12, 2012 @ 3:02 am • Nicely done! Comment by — June 12, 2012 @ 3:38 am • Great job! One comment. You mention that a lower bound can be established also. It is actually already proved, and it is exactly the square of the Bessel zero. See "Minimizing Neumann fundamental tones of triangles: an optimal Poincaré inequality". J. Differential Equations, 249(1):118-135, 2010. Comment by Bartlomiej Siudeja — June 12, 2012 @ 6:45 pm • This might not be relevant for eigenfunctions, but even though there is no domain monotonicity for Neumann eigenvalues, you can stretch a triangle to get an isosceles triangle with the same diameter and smaller second eigenvalue (the Rayleigh quotient goes down when stretching). This could help simplify BC boundary handling. Comment by Bartlomiej Siudeja — June 12, 2012 @ 7:06 pm 26. [...] previous research thread for the Polymath7 project "the Hot Spots Conjecture" is now quite full, so I am now [...] Pingback by — June 12, 2012 @ 8:59 pm 27. As this thread is pushing 100 comments, I have started a new research thread at http://polymathprojects.org/2012/06/12/polymath7-research-thread-1-the-hot-spots-conjecture/ This current thread will stay open in case one needs to make responses to existing comments that can't easily be made in a different post, but the idea is to migrate over to the new thread to make it easier to navigate and catch up. Comment by — June 12, 2012 @ 9:03 pm 28. [...] the "Hot Spots Conjecture for Acute Triangles". It was proposed by Chris Evans and they are working on it here. It is a more geometric/analytic/physics based question this time. Dr. Evans describes the [...] Pingback by — June 13, 2012 @ 11:00 pm 29. [...] as announced at Terry Tao's Blog, two new polymath items are on the horizon. There is a new polymath proposal at the polymath blog on the "Hot Spots Conjecture", proposed by Chris Evans, and that [...] Pingback by — June 15, 2012 @ 1:58 pm 30. [...] It's been quite an active discussion in the last week or so, with almost 200 comments across two threads (and a third thread freshly opened up just now).
While the problem is still not [...] Pingback by — June 15, 2012 @ 10:22 pm 32. I've been working on computational approaches to this problem. Here is a short video illustrating my simulations: https://vimeo.com/65842093 Comment by — May 9, 2013 @ 6:50 pm 33. (very preliminary) Comment by — May 9, 2013 @ 7:25 pm • Nicely done! Comment by — May 9, 2013 @ 8:10 pm • Thanks! Comment by — May 9, 2013 @ 8:48 pm
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 304, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9248330593109131, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/135138-finding-radius-just-point.html
# Thread:

1. ## Finding radius from just a point

The problem says to find the missing variables on the circle. That includes r, theta, and s. So what would be the first step to figuring out this problem? And how would I find the radius with only the point (7,4) given? Thanks.

2. $r^2=7^2+4^2$ and $\theta=\tan^{-1}\left (\frac{4}{7}\right )$, and for $s$ remember that the circumference is $2\pi r$ and there are 360 degrees in a circle (equivalently, $s=r\theta$ when $\theta$ is in radians).

3. Thank you so much Stroodle for your help!
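For what it's worth, here is a minimal Python sketch of the computation suggested above (my own illustration; the point $(7,4)$ comes from the thread, and $s=r\theta$ assumes $\theta$ in radians):

```python
import math

x, y = 7, 4                       # the given point on the circle
r = math.hypot(x, y)              # radius: sqrt(7**2 + 4**2) ~ 8.062
theta = math.atan2(y, x)          # central angle in radians ~ 0.519
s = r * theta                     # arc length: fraction theta/(2*pi) of 2*pi*r

print(r, math.degrees(theta), s)  # ~8.062, ~29.745 degrees, ~4.185
```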
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9111064672470093, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/404/where-can-i-find-useful-data-for-cryptography-coding-theory?answertab=oldest
# Where can I find useful data for cryptography/coding theory?

When implementing cryptographic/coding theory algorithms one needs to use data like big prime numbers, numbers in Z_n and their inverses, irreducible polynomials in Z_n[x] and so on... While it is sometimes easy to write a simple program to get simple examples of these, it's handy to have a list with some non-trivial examples. Do you know where I can find such data?

-

## 3 Answers

Generating prime integers and computing things modulo integers is easy with a programming language which features support for big integers. I usually use Java: it includes `java.math.BigInteger`, with which one can:

• generate prime integers of any length;
• do all basic operations, including modular reduction (`mod()` method);
• compute modular inverses (`modInverse()`);
• compute modular exponentiations (`modPow()`).

For irreducible polynomials, things are more complex: Java does not offer a generic method for testing irreducibility of polynomials (or, for that matter, anything related to polynomials). I do have my own library but it is not generally available (I implemented it for a customer). However, my friend Google points out that this library appears to be fairly complete in that area.

-

As far as very large prime numbers go, have a look at the following link. http://primes.utm.edu/largest.html It gives you a summary of the largest known primes. Good luck with calculating their multiplicative inverses, though. That will not be so easy.

- the multiplicative inverse of a prime number? 1/p ;) – Vicfred Aug 15 '11 at 3:22

2 No, the multiplicative inverses of numbers in the prime field $Z_p$ (i.e. $a$ modulo the prime number $p$). For that, you will need to use the Extended Euclidean Algorithm. Have a look at the following link to some lecture notes on how to do this. www.ijecbs.com/July2011/14.pdf – user476 Aug 15 '11 at 3:39

I know, I was just making it clear. Also, multiplicative inverses in Z_n can be computed faster using Euler's theorem and Fermat's little theorem. – Vicfred Aug 15 '11 at 3:41

1 I believe that you are right. Not certain about this, as it has been a while since I last did it. Good luck! – user476 Aug 15 '11 at 3:43

– user476 Aug 15 '11 at 3:57

If you use a high-level mathematical language (Mathematica, Maple, etc.), generating this data is very easy. I use Mathematica personally, but some are free and apparently very good. In a pinch, you can use Wolfram Alpha to do a lot:

Generate a random prime:

````RandomPrime[{2^1023,2^1024}]
````

Random integer:

````RandomInteger[{2^1023,2^1024}]
````

Inverse mod p (returns a random prime p, a random integer a, and a^-1 mod p):

````{#,#2,PowerMod[#2,-1,#]}&@@Flatten[{#,RandomInteger[{1,#}]}&/@{RandomPrime[{2^1023,2^1024}]}]
````

It gets tricky supplying large numbers because queries are limited to 200 characters. Also you can't store variables and use them in subsequent queries. That is why using the actual applications is better. (For example, the last query above would simply be PowerMod[a,-1,p] if you could store a and p after you generate them.)

- The fun part is getting Wolfram Alpha to generate a safe prime, short of running this query over and over again: {#,PrimeQ[#]}&/@{(RandomPrime[{2^1023,2^1024}]-1)/2} – PulpSpy Aug 15 '11 at 20:16

2 – TJ Ellis Aug 17 '11 at 23:46
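As a side note, the same kind of sample data is easy to generate in Python as well. A minimal sketch of my own (it assumes the third-party sympy package for prime generation; the modular arithmetic uses only built-ins, with `pow(a, -1, p)` requiring Python 3.8+):

```python
import secrets
from sympy import nextprime  # third-party; used here only to find a prime

# A random 256-bit prime (illustrative only; use a vetted crypto library in production).
p = nextprime(secrets.randbits(256))

a = secrets.randbelow(p - 1) + 1  # random nonzero element of Z_p
inv = pow(a, -1, p)               # modular inverse via built-in pow (Python 3.8+)
assert (a * inv) % p == 1

print(p)
print(a, inv, pow(a, 12345, p))   # modular exponentiation, also via pow
```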
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8713434338569641, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=2873060
Physics Forums

Recognitions: Science Advisor

## Collective modes and restoration of gauge invariance in superconductivity

After the first explanation of superconductivity by Bardeen, Cooper and Schrieffer, it was for several years a matter of concern to render the theory charge conserving and gauge invariant. I have been reading the article by Y. Nambu, Phys. Rev. Vol. 117, p. 648 (1960), who uses Ward identities to establish gauge invariance, and the book by Schrieffer, "Theory of Superconductivity" from 1964. While I can follow the steps of the calculation, the physical content is not quite clear to me. While the difference between a free and a "dressed" Green's function is quite clear to me, the concept of a dressed vertex is much less so. I only see it as a formal device to calculate the current-current correlation functions. The collective modes somehow fall out as homogeneous solutions of a Bethe-Salpeter-type equation. Schrieffer stresses that the mechanism in fact is not peculiar to superconductivity but holds also in normal metals. I know of some discussions of the "backflow" which are also not too clear to me. I suppose that these matters are better understood now, half a century later. Maybe someone knows a more pedagogical reference?

I'm not really qualified to answer your questions but maybe some references might help you on your way. If you are quite familiar with the functional integral approach then I think this is the more modern approach to dealing with superconductors in a gauge invariant manner as well as identifying the collective modes. See for example this article by van Otterlo, Golubev, Zaikin and Blatter, where they also discuss Ward identities. Another pedagogical description can be found in Altland and Simons' book Condensed Matter Field Theory. I think the connection to the dressed vertices might be found from the vertex generating functional (or effective potential) $\Gamma[\Psi]$ ($\Psi$ being the pairing field) that one can obtain with the functional integral technique (for definition and introduction see for example sec. 2.4 in Negele and Orland's book Quantum Many-Particle Systems). In Weinberg's book The Quantum Theory of Fields Vol. II you can find a treatment of superconductors in terms of the effective potential. Also see this article by Weinberg. I hope this helps in some way. Let me know if it was useful or if you find any more pedagogical references.

Recognitions: Science Advisor

Dear Jensa, thank you very much for your answer. I am having a look at the article by van Otterlo. I knew about the work by Weinberg, but I am not sure if it is what I am looking for. There seems to be a highly relevant article written by Martin in the book "Superconductivity" ed. by Parks. However, I only have partial access to it at the moment via Google Books. Maybe I'll come back to you after having read the articles.

Recognitions: Science Advisor

## Collective modes and restoration of gauge invariance in superconductivity

Dear Jensa, I have been reading some articles lately and I think I at least understand the problem now: Immediately after the paper by BCS, people became worried because the calculation of the Meissner effect was not gauge invariant, and it was shown that, at least for the response to longitudinal fields, violation of gauge invariance could lead to any result.
It became clear that to restore gauge invariance one has to consider collective excitations which are now known as Nambu-Goldstone bosons and correspond to variations of the phase of the gap parameter. The whole history is quite fascinating, as it led to the discovery of the Higgs mechanism in field theory, but that's another story (told e.g. in the Nobel lecture of Nambu). In a superconductor, current and charge can be carried both by particle-like excitations, resulting from acting with the Valatin-Bogoliubov operators on the vacuum, and by the collective Goldstone mode. The currents carried by both modes are conserved separately, if calculated correctly. I was trying to understand this from the following two articles:

1. S. Cremer, M. Sapir and D. Lurie, Collective Modes, Coupling Constants and Dynamical-Symmetry Rearrangement in Superconductivity, Nuovo Cim., Vol 6, pp. 179, 1971

2. L. Leplae, H. Umezawa and F. Mancini, Derivation and Application of the Boson Method in Superconductivity, Phys. Rep. Vol 10, pp. 151-272, (1974)

I liked the exposition in 1., but I do not understand exactly what he is doing in section 4. Especially he says: "We shall exhibit here a simple alternative technique which yields considerable physical insight into the dynamical mechanism involved (*). Our method is based on the well-known fact that the matrix elements of an operator density between physical one-particle states can be computed by coupling the operator in question to an external c-number field and evaluating the lowest-order transition amplitude induced by the external field. Applied to superconductivity, this technique will be shown to lead very rapidly to the results of LEPLAE and UMEZAWA (5), i.e. to the explicit expression of the current operator in terms of the physical quasi-particle and Goldstone field operators."

He then directly discusses some Feynman diagrams whose relevance is not directly clear to me from the aforesaid. The article 2. uses some apparently non-standard techniques but addresses more directly the interesting point which I never exactly understood and which seems to be at the heart of the problem: How exactly do we separate particle-like modes from collective modes? Although I like in principle the "modern" treatment as e.g. in the article by Otterlo you gave me, I don't see how to identify or treat single-particle-like excitations after having performed the Hubbard-Stratonovich transformation. Maybe you have some fresh ideas.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9375585913658142, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-challenge-problems/7009-question-4-a.html
# Thread:

1. ## Question 4

This week I will post 2 problems because I am not sure how this is still going to work.

First: Given the diophantine equation $x^2+y^3=n$, show that the number of solutions cannot exceed $n$. (Easy)

Second: The Dirichlet function. Show that the Dirichlet function is not Riemann integrable on $[0,1]$. (Hard) This Dirichlet function is defined to map rationals into zero and irrationals into one.

---

One more thing, can you rate these threads by selecting the number of stars, whether you liked them or hated them.

2. I am surprised that nobody responded. Either because the questions are too difficult (which I cannot imagine, the first one was really easy), or they are just not fun.

----

First Problem: Okay, you have $x^2+y^3=n$ with $n\geq 1$. That means $1\leq x\leq n^{1/2}$, because otherwise, since $y\geq 1$, there is no way to make the left-hand side small enough. Similarly, $1\leq y\leq n^{1/3}$. The best-case scenario is when all such pairs are solutions; in that case the count is at most $n^{1/2}\cdot n^{1/3}=n^{5/6}\leq n$.

---

Second Problem: $D(x)=\left\{ \begin{array}{cc}1&\mbox{ if irrational}\\ 0&\mbox{ if rational} \end{array} \right\}$

There are a number of things we need to know. First, if the function is integrable then the Riemann sum is well-defined as long as the mesh (norm) of the partition sequence tends to zero. Second, if $\int_a^b f(x)dx$ exists and $c\in (a,b)$ then both integrals exist and $\int_a^b f(x)dx=\int_a^c f(x) dx+\int_c^b f(x) dx$. Third, $e>1$ is irrational; therefore $1/e$ is irrational and $0<1/e<1$.

Now we can begin. The idea is that we are going to assume that the Dirichlet function is Riemann integrable and then arrive at a contradiction. If the Dirichlet function is integrable on $[0,1]$ then we can partition the interval in any way we choose as long as the mesh tends to zero. We will use the right endpoints. In that case, $\int_0^1 D(x)dx=\lim_{n\to \infty}\sum_{k=1}^n D\left( \frac{k}{n} \right)\cdot \frac{1}{n}$. But the trick is to realize that $\frac{k}{n}$ is rational! Thus $D\left(\frac{k}{n}\right)=0$, and so (limit of zero) $\int_0^1 D(x) dx=0$.

But, since the Dirichlet integral exists (by assumption), $\int_0^1 D(x)=\int_0^{1/e}D(x) dx+\int_{1/e}^1 D(x)dx$. That means its value is, again using right endpoints, $\lim_{n\to\infty}\sum_{k=1}^n D\left(\frac{k}{en}\right) \cdot \frac{1}{en}+\lim_{n\to\infty}\sum_{k=1}^n D\left(\frac{1}{e}+\frac{k(1-1/e)}{n}\right) \cdot \frac{1-1/e}{n}$. But $D\left(\frac{k}{en}\right)=0$, for it is rational. And $D\left(\frac{1}{e}+\frac{k(1-1/e)}{n}\right)=1$ for $1\le k\le n-1$, for then the point is irrational (a rational plus a nonzero rational multiple of the irrational $1/e$); the single term with $k=n$ hits the rational point $1$ and contributes at most $\frac{1-1/e}{n}\to 0$. Thus, $0+\lim_{n\to\infty}\sum_{k=1}^{n-1} 1\cdot \frac{1-1/e}{n}=\lim_{n\to\infty}\frac{(n-1)(1-1/e)}{n}=1-1/e\not = 0$.

Thus the limit depends on the choice of partition, so it is not well-defined, and the function is not integrable.

If you are curious how I invented this problem: when I was in Calculus class we were doing integration over surfaces and I asked whether every bounded (not necessarily continuous) function is integrable. My professor said it has to be discontinuous at uncountably many points. So I decided to test whether what he said was true or false by using the Dirichlet function (since it is discontinuous at uncountably many points), and he seemed to have been correct. So the credit for this problem should go to my professor.

3. For the first problem, how do you know that $y\geq 1$?

4. Originally Posted by TriKri For the first problem, how do you know that $y\geq 1$? Because it is a DIOPHANTINE equation. Traditionally they are solved in only the positive integers.

5.
I didn't try the second exercise, but I think there is an easier way to do it. If you define a partition $P$ of $[a,b]$ with every step equal to $(b-a)/n$, then when you calculate $L(f,P)$, I think its value is 0, while $U(f,P)$ is 1. Therefore the function is not Riemann integrable. I'm probably wrong; I would appreciate it if you told me so and explained why.

6. Originally Posted by arbolis I didn't try the second exercise, but I think there is an easier way to do it. If you define a partition $P$ of $[a,b]$ with every step equal to $(b-a)/n$, then when you calculate $L(f,P)$, I think its value is 0, while $U(f,P)$ is 1. Therefore the function is not Riemann integrable. I'm probably wrong; I would appreciate it if you told me so and explained why. When I came up with that solution above I did not yet know what you were referring to (about the supremum and infimum on intervals). Using that, it is much easier.

7. For the second one, couldn't you just divide $[0,1]$ into a partition $P$ and say that $\lim_{n \to \infty} \sum_{i=0}^{n-1} D(x_{i\,max})(x_{i+1}-x_{i}) = \lim_{n \to \infty} \sum_{i=0}^{n-1} 1\cdot(x_{i+1}-x_{i}) = 1$ but $\lim_{n \to \infty} \sum_{i=0}^{n-1} D(x_{i\,min})(x_{i+1}-x_{i}) = \lim_{n \to \infty} \sum_{i=0}^{n-1} 0\cdot(x_{i+1}-x_{i}) = 0$? Therefore $D(x)$ is not Riemann integrable.

8. Originally Posted by chiph588@ For the second one, couldn't you just divide $[0,1]$ into a partition $P$ and take the sums above? Yes chiph588@, that's exactly what I've done.

9. Whoops! I didn't see you posted that, sorry.

10. There is no problem!
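Since the thread turns on tag-dependent Riemann sums, here is a small Python illustration of post 7's point (my own sketch; it uses exact arithmetic from the third-party sympy package, because floating point cannot tell rational tags from irrational ones):

```python
from sympy import Rational, sqrt

def D(x):
    """Dirichlet function, evaluated on exact sympy numbers."""
    return 0 if x.is_rational else 1

def riemann_sum(n, offset):
    """Riemann sum of D on [0,1] over n equal pieces, with tag k/n + offset."""
    dx = Rational(1, n)
    return sum(D(k * dx + offset) * dx for k in range(1, n + 1))

for n in (10, 100, 1000):
    # Rational tags k/n give sum 0; nudging every tag down by a tiny irrational
    # amount (still inside each subinterval) gives sum 1.
    print(n, riemann_sum(n, 0), riemann_sum(n, -sqrt(2) / 10**6))
```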
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 44, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9571467638015747, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/19758/list
Lemma: In a polyhedron of this type with polygons $A$ and $B$ sharing an edge $e$, the two other polygons meeting $e$ must have the same number of sides.

Proof: By local symmetry reflecting through the perpendicular bisector of $e$, the angles are equal. Sergei Ivanov proved the same lemma in the comments.

Since 5 is odd, all of the polygons around a pentagon must have the same number of sides, since you can't have a nonconstant alternating sequence. So, the only possibilities are that all polygons are pentagons, or that each pentagon is surrounded by hexagons, and each of these hexagons is surrounded by 3 pentagons and 3 hexagons. In the latter case, attaching pentagonal pyramids to each pentagon extends each hexagon into an equilateral triangle, producing a polyhedron whose faces are equilateral triangles with 5 meeting at a vertex, an icosahedron, so the original was a truncated icosahedron.

Note that if you have equilateral triangles and squares meeting 4 to a vertex, then there are two possibilities for 3 squares and 1 triangle at a vertex, one with cubic symmetry and one with only dihedral symmetry with a belt which is an octagonal prism. By contrast, if you require that there are 3 congruent triangles meeting at a vertex, but drop the regularity assumption, you get a family of disphenoids, which generically have the Klein 4-group as symmetries and no reflective symmetry. These are related to ideal hyperbolic tetrahedra.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9308851957321167, "perplexity_flag": "head"}
http://mathoverflow.net/questions/99867?sort=votes
## Criteria for Positivity of Pseudodifferential Operators on Manifolds

Let $(M,g)$ be a Riemannian manifold and $L^2$ the Hilbert space given by the volume form associated to the metric. Let $L_0^2$ be the subspace which is orthogonal to the constant functions. When is a pseudodifferential operator on $M$ a positive operator on $L^2_0$?

For second order operators the Laplacian $\Delta$ is the main example. For order zero, the obvious examples are multiplication by $f$ where $f \in C^\infty(M)$ is a smooth function and $f > 0$. Conversely if $f < 0$ anywhere then it is clear that the multiplication operator is not positive.

If $A$ is positive on $L^2_0$ then $(\Delta^{p/2} A \Delta^{p/2} v, v) = (A \Delta^{p/2} v, \Delta^{p/2} v) > 0$ for $v \in L^2_0$ non-zero. So we can use the Laplacian as a sort of natural way to change the order of a given positive operator. Note that the principal symbol of such an operator is $||\xi||^{p}\sigma(A)(x,\xi)$.

How else can I construct more positive pseudodifferential operators? So far I can only come up with operators whose symbols in a fiber look like $||\xi||^{p}f(x)$. I am looking for "more interesting" symbols, such as those whose restriction to the co-sphere at a point is non-constant. Ideally of course I would just like a global criterion for a symbol to quantize to a positive operator, but something tells me that this is a hard problem. If it is any easier, I would also be interested in specific examples, like the sphere with the round metric.

-

1 Of course you already considered the squares of pseudodifferential operators $A^*A$ and for some reason they do not offer enough examples for your goals – Piero D'Ancona Jun 18 at 7:33

Thanks Piero, in fact I had overlooked this method because I hadn't tried it with non-self-adjoint operators! – Eric Jun 18 at 15:20

## 3 Answers

I think you may be looking for this paper:

Symplectic geometry and positivity of pseudo-differential operators, C. Fefferman and D. H. Phong

Abstract: In this paper we establish positivity for pseudo-differential operators under a condition that is essentially also necessary. The proof is based on a microlocalization procedure and a geometric lemma. http://www.pnas.org/content/79/2/710.short

Basically, you must require that the principal symbol be positive except in a set of small symplectic capacity (it cannot contain a symplectically embedded unit cube).

-

If $A$ is a symmetric partial differential operator of order $2k$ on a compact manifold whose principal symbol is positive definite, then for $\lambda\gg 0$ the operator $A+\lambda$ is positive definite. This follows by using the theory of pseudo-differential operators with parameters discussed for example in Shubin's book.

-

Let $A$ be a selfadjoint (pseudo)differential operator of order 2 on $(M,g)$ with a nonnegative symbol. It is a consequence of the Fefferman-Phong inequality that $A$ is semi-bounded from below, i.e. $A+C\ge 0$, where $C$ is a constant. Now you could object that the total symbol is not invariantly defined: true, but considering that $A$ acts on half-densities (identified with functions on a Riemannian manifold), you get that $$a_2+ \Re a_1$$ is indeed invariantly defined. Here the symbol of $A$ is $a_2+a_1+r_0$, where $a_j$ is of order $j$ and $r_0$ is of order 0, with $a_2\ge 0$ and $a_2+\Re a_1\ge 0$.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9230808615684509, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/168743-need-help-some-trig-model-graph.html
# Thread:

1. ## Need Help with some Trig for a model (graph)

I attached a crude drawing of the function I'm trying to graph. I'm doing a problem with the model $\frac{dT}{dt}=k(T-T_m)$, where $T_m$ is a function shown in the graph, but I need help with the trigonometry here. Note that the graph I drew is sloppy, but you should be able to get the idea. It's just a transformed sine function: $T_m(t)=a\sin(bt+c)+d$. The amplitude is $a=30$, and it's been moved up 80 units, so $d=80$. Since $b$ scales the natural period $2\pi$ so that $\frac{2\pi}{b}=\text{period}$, and the period appears to be 24, I think $b=\frac{\pi}{12}$. I'm kind of confused about what the phase shift should be, though.

2. The phase shift is where the function starts. As in, with $\sin(x)$, $\sin(0)=0$; it 'starts' halfway along an up-stroke. Basically (assuming you haven't touched the period), if your function starts halfway along an up-stroke, your phase shift will be zero. If it's halfway along a down-stroke, it will be $\pi$. If it is at the top of an up-stroke, it will be $\pi/2$, which is essentially $\cos(x)$, and if it is at the bottom of a down-stroke it will be $3\pi/2$. Basically, draw a sine curve and take a ruler. Put the ruler on the vertical (y-)axis and move it right until what is to the right of your ruler looks like the curve you want; the x-value at this point is what you want. This will be scaled when you bring in the period: if the phase shift (worked out as above) is $c$ and the angular frequency is $b$, then the equation you want is $\sin(b(t+c)) = \sin(bt+bc) = \sin(bt+c^{\prime})$.

3. Good work so far! Now the sine function you've got there would normally start out going up from its midpoint, which in your case is 80. However, your graph has it starting at its low point (I'm assuming). So, the entire graph has been shifted to the right by 1/4 of a period. What does that mean for the phase shift? [EDIT]: Didn't see Swlabr's post.
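A quick numerical check of the model with the thread's numbers (amplitude 30, midline 80, period 24) is below; it assumes, as post 3 does, that the graph starts at its minimum at $t=0$, which forces $\sin(c)=-1$, i.e. $c=-\pi/2$:

```python
import numpy as np

a, d = 30, 80             # amplitude and vertical shift from the thread
b = np.pi / 12            # period 24 -> b = 2*pi/24
c = -np.pi / 2            # start at the minimum: sin(c) = -1

T_m = lambda t: a * np.sin(b * t + c) + d

t = np.array([0, 6, 12, 18, 24])
print(T_m(t))             # [ 50.  80. 110.  80.  50.]: min, midline, max, midline, min
```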
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9536154270172119, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/130503/bounded-linear-functional-on-mathcall-2a-b?answertab=oldest
Bounded Linear functional on $\mathcal{L}_{2}[a,b]$

Let's say that $f:[a,b] \rightarrow \mathbb{R}$ is a measurable function such that $H: \mathcal{L}_{2}[a,b] \rightarrow \mathbb{R}$ defined as $H(g) = \int_{a}^{b}fg$ is finite for all $g \in \mathcal{L}_{2}[a,b]$. I was wondering if $H$ is a bounded linear functional on $\mathcal{L}_{2}[a,b]$.

-

1 Hardy-Littlewood-Polya's book Inequalities calls this result "the converse of Hölder's inequality". Cfr. Theorem 190 (161 for the infinite sum and 15 for the finite sum case). More precisely, Theorem 190 states that $(H(g)\ \text{is finite for all }g\in L^2[a, b] )\Rightarrow (f \in L^2[a, b])$, from which continuity of $H$ follows easily. This is exactly what Davide has done below. – Giuseppe Negro Apr 11 '12 at 23:49

2 Answers

We work on the normed space $E:=L^2[a,b]$ (writing $E$ for the space to avoid a clash with the functional $H$). In fact $f\in L^2[a,b]$, and the boundedness of $H$ then follows from the Cauchy-Schwarz inequality. Applying the hypothesis to $\operatorname{sgn}f\cdot|g|$ for $g\in E$, we get that $fg\in L^1[a,b]$ for each $g\in L^2[a,b]$. If $f\notin E$, we can find $A_k$ pairwise disjoint such that $1\leq \int_{A_k}|f(x)|^2dx$. Put $g:=\sum_{k\geq 1}\frac 1k\chi_{A_k}\frac f{\int_{A_k}|f(x)|^2dx}$. Then $g\in E$ but $fg\notin L^1[a,b]$, a contradiction.

-

Yes, this is a consequence of the Banach-Steinhaus theorem. Without loss of generality take everything to be positive, by taking absolute values everywhere and rotating the functions we evaluate the functionals on. Consider the family of bounded functionals given by $\Lambda_n : g \mapsto \int f_n g$ where $f_n = f \chi_{f\leq n}$ (these are bounded because $f_n$ is $L^\infty$ and the measure space is finite; this last assumption is easy to remove). By Banach-Steinhaus, this family is either uniformly bounded or is infinite on a dense $G_\delta$ set. But since we take everything positive we have $|\Lambda_n(g)| \leq |H (g)| <\infty$ for all $g$. By Banach-Steinhaus there exists a uniform bound $M$ on the operator norms $\|\Lambda_n\|$. Take any $g$ with $\|g\| = 1$. Then $\int f_n g \leq M$ uniformly and so by the dominated convergence theorem $\int f g \leq M$, which gives that the map is bounded. As a comment, a similar argument works for all $L^p$ ($1\leq p < \infty$) spaces over $\sigma$-finite measure spaces.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9173389673233032, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/112174-inclusion-exclusion.html
# Thread:

1. ## Inclusion/Exclusion

I've been working on this problem for a while. Could someone show me how to connect the last part? Let A, B, and C be finite sets. Prove: If $|A \cup B \cup C| = |A| + |B| + |C|$ then A, B, and C must be pairwise disjoint.

Here is what I have: Suppose A, B, and C are finite sets with $|A \cup B \cup C| = |A| + |B| + |C|$. By inclusion/exclusion, we know that $|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |A \cap C| - |B \cap C| + |A \cap B \cap C|$. By cancellation, we have: $|A \cap B \cap C| - |A \cap B| - |A \cap C| - |B \cap C| = 0$. I'm just not sure how to connect that to "Thus A, B, and C must be pairwise disjoint."

2. Originally Posted by absvalue Suppose A, B, and C are finite sets with $|A \cup B \cup C| = |A| + |B| + |C|$. By cancellation, we have: $|A \cap B \cap C| - |A \cap B| - |A \cap C| - |B \cap C| = 0$. Well, $A \cap B \cap C \subset A\cap B$, so rearranging gives $|A \cap C| + |B \cap C| = |A \cap B \cap C| - |A \cap B| \le 0$. Since the LHS must be non-negative, we have $|A \cap C| + |B \cap C| = 0$, so both of those terms must be 0. You can do a similar thing to get $|A \cap B| = 0$ (since $|A \cap B \cap C| \le |A \cap C| = 0$, the displayed identity then forces $|A \cap B| = 0$), proving the result.
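A brute-force sanity check of the statement over random subsets of a small universe (my own illustration, not part of the proof):

```python
import random

UNIVERSE = range(12)

def random_subset():
    return {x for x in UNIVERSE if random.random() < 0.4}

for _ in range(10_000):
    A, B, C = random_subset(), random_subset(), random_subset()
    if len(A | B | C) == len(A) + len(B) + len(C):
        # Whenever the hypothesis holds, the sets are pairwise disjoint.
        assert not (A & B) and not (A & C) and not (B & C)
print("no counterexample found")
```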
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280994534492493, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/2983/partial-collisions-for-md5
# Partial collisions for md5

Let $h$ be a bitstring and let $P(h, n)$ be the $n$-bit prefix of $h$. A partial collision of length $n$ for a hash function $H$ is a pair $(x,y)$ such that $P(H(x),n)=P(H(y),n)$. What is known about this type of collision? I'm particularly interested in md5. Can rainbow tables be adapted to quickly look for these? What values of $n$ are feasible to brute-force (perhaps in an optimized way, like with rainbow tables)? I'd also like to know about the pre-image case: given $x$ and $P(H(x),n)$, how to find $y$ such that it's an $n$-partial collision for $x$.

-

1 Silly question: we know how to make total collisions with MD5; why are you interested in partial collisions? – poncho Jun 19 '12 at 20:45

@poncho: because it might be cheaper (?). Plus collisions don't solve the preimage problem. – qwer Jun 19 '12 at 20:51

– mikeazo♦ Jun 21 '12 at 13:00

## 1 Answer

If $H$ is secure in the random oracle model and of width at least $n$, then $x\mapsto P(H(x),n)$ is also secure, inasmuch as an $n$-bit hash can be (proof sketch: any attack that distinguishes $x\mapsto P(H(x),n)$ from a random oracle is trivially turned into a distinguisher for $H$). The converse is not true.

Finding a collision requires $O(2^{n/2})$ queries to the oracle (or evaluations of the inner function of $H$). More precisely, it is expected to require about $\sqrt{\pi/2}\cdot2^{n/2}$ distinct queries, and odds of success reach 50% after about $\sqrt{\ln 4}\cdot2^{n/2}$ such queries (that's the generalized birthday problem). The standard paper on how to exhibit such collisions is Parallel Collision Search with Cryptanalytic Applications. Finding a preimage is expected to require about $2^n$ queries to the oracle (I plan to dig out the right formulas someday). It is possible to speed up an attack by pre-computation, e.g. with rainbow tables, in particular if the set of possible messages is known, or if $n$ is small enough.

Now we restrict to MD5, thus $n\le 128$. The difficulty of finding collisions is about as for a good hash ($\approx 1.3\cdot2^{n/2}$ MD5 inner rounds) when the messages are tried without consideration of the inner structure of the MD5 round function (e.g. random messages or words from a dictionary). Without constraints on the message, finding collisions among messages of at least 1024 bits has been possible for any $n$ since that breakthrough, works even with some constraints, now has little cost, and was recently made possible with 512-bit messages. The lower $n$ is, the easier it gets. AFAIK there is no shortcut for heavily constrained messages, e.g. passwords of less than 10 ASCII characters.

The difficulty of finding a preimage is about as for a good hash (about $2^n$ MD5 inner rounds) when the messages are tried without consideration of the inner structure of the MD5 round function, nor knowledge of the origin of the target value. I am not aware of a practical method better than that, except when the message is in a known set smaller than $2^n$, which is when rainbow tables shine; in that case, I do not see that lowering $n$ changes the cost.

-
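To make the birthday estimate above concrete, here is a minimal brute-force sketch in Python (my own illustration, not from the answer): it finds two messages whose MD5 digests agree on their first $n$ bits, using roughly $2^{n/2}$ trials and a lookup table. With $n=24$ this runs in well under a second; the cost doubles for every 2 extra bits.

```python
import hashlib
import itertools

def md5_prefix(data: bytes, n: int) -> int:
    """The first n bits of MD5(data), as an integer (n <= 128)."""
    digest = int.from_bytes(hashlib.md5(data).digest(), "big")
    return digest >> (128 - n)

def partial_collision(n: int):
    """Birthday search: about 2**(n/2) trials expected for an n-bit prefix collision."""
    seen = {}
    for i in itertools.count():
        msg = str(i).encode()
        p = md5_prefix(msg, n)
        if p in seen:
            return seen[p], msg
        seen[p] = msg

x, y = partial_collision(24)
print(x, y)
# The first 24 bits (6 hex digits) of the two digests agree:
print(hashlib.md5(x).hexdigest()[:6], hashlib.md5(y).hexdigest()[:6])
```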
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9196205735206604, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/70973-finding-equation-line.html
# Thread:

1. ## Finding the equation of a line

Find the slope-intercept equation of the line that passes through (-8, 9) and is perpendicular to x - 8y = 2.

2. $x-8y=2\Rightarrow y=\frac{1}{8}x-\frac{1}{4}\Rightarrow m=\frac{1}{8}\Rightarrow m'=-\frac{1}{m}=-8$. Here $m'$ is the slope of the perpendicular. The equation is $y-y_0=m'(x-x_0)$ where $(x_0,y_0)=(-8,9)$.

3. Originally Posted by keadyjr Find the slope-intercept equation of the line that passes through (-8, 9) and is perpendicular to x - 8y = 2. Hi keadyjr, First of all, find the slope of your line given by the equation x - 8y = 2. Let's put it in slope-intercept form: $y=\frac{1}{8}x-\frac{1}{4}$. So the slope is $\frac{1}{8}$. The slope of a line perpendicular to this line would be the negative reciprocal of $\frac{1}{8}$, or $-8$. Using the point (-8, 9) and a slope of -8, we can now find the y-intercept: $y=mx+b$, $9=-8(-8)+b$, $b=-55$. Substituting back into the slope-intercept form, we arrive at: $y=-8x-55$.
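A quick check of the thread's answer with sympy (purely illustrative):

```python
from sympy import Rational, Eq, solve, symbols

b = symbols("b")

m = Rational(1, 8)                     # slope of x - 8y = 2
m_perp = -1 / m                        # negative reciprocal: -8
b_val = solve(Eq(9, m_perp * (-8) + b), b)[0]
print(m_perp, b_val)                   # -8 -55, i.e. y = -8x - 55
```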
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9080202579498291, "perplexity_flag": "head"}
http://physics.aps.org/articles/print/v2/106
# Trend: Searching for the Higgs

Department of Physics and Astronomy, University of California, Riverside, Riverside, CA 92521, USA

Published December 14, 2009 | Physics 2, 106 (2009) | DOI: 10.1103/Physics.2.106

Researchers at the Large Hadron Collider are at the start of a challenging hunt for the Higgs boson, a particle thought to confer the property of mass on every other particle.

Since the 1970s, physicists have known that two fundamental forces of nature, the electromagnetic force and the weak force, can be unified into a single force—the electroweak force—if the particles that carry these forces are massless. The photon, which carries the electromagnetic force, is massless, but the particles that carry the weak force have substantial mass, explaining why the weak force is weaker than the electromagnetic force. This unification can still work if a new spin-zero boson, the Higgs boson, is introduced, allowing the particles that carry the weak force to be massive. In addition, interactions with the Higgs boson are responsible for the masses of all particles. These ideas form the basis of the standard model of particle physics, which is consistent with almost all observations. Gravity can act once particles have mass due to the Higgs boson—the Higgs boson is not the source of the gravitational force. The one outstanding missing piece in this entire picture is the Higgs boson itself. What are the prospects for its discovery?

## The standard model of particle physics and the Higgs boson

Matter is made up of spin-$1/2$ fermions, the particles known as leptons (the "light ones") and quarks. There are three families of leptons, each consisting of two particles: the electron with its corresponding neutrino ($\nu$), the muon ($\mu$) and its neutrino, and the tau lepton ($\tau$) and its neutrino [1]. Electrons are familiar from electric current and as constituents of atoms; they are the lightest electrically charged particles. Muons and tau leptons are also charged and can be considered to be heavier electrons. Neutrinos are neutral and (almost) massless. All of the leptons can be directly observed, some more easily than others. Quarks also come in three families, and they also have electrical charge, but their charges are fractions of the charge of the electron ($+2/3$ and $-1/3$). They cannot be directly observed—the particles we do observe, such as the proton and the pion, are made up of either three quarks or a quark and its antiparticle, an antiquark. Every particle has a corresponding antiparticle with the same mass but opposite charge, for example, the antiparticle of the electron is the positively charged positron. Quarks that are produced in particle interactions or decays materialize as "jets" of ordinary particles collimated close to the original quark direction [2].

Four fundamental forces act on the fundamental fermions: gravity, the weak force (responsible for nuclear beta decay), the electromagnetic force, and the strong force. These forces occur through the exchange of fundamental bosons: the graviton, the charged and neutral $W$ and $Z$ bosons, the photon, and eight gluons. (Gravity will not be discussed further here.) All of the fundamental fermions have interactions via the weak force, and all of the charged fundamental fermions have electromagnetic interactions. Only the quarks can interact via the strong force, and particles such as protons that are made up of quarks and therefore have strong interactions are called hadrons (the "heavy ones").
The fundamental particles and forces are summarized in Fig. 1. Photons, which have no mass, carry the electromagnetic force, whereas the massive charged $W$ and neutral $Z$ are responsible for the weak interactions; all of these particles are spin-one bosons. The minimal standard model [4] requires in addition a massive scalar boson, the Higgs boson, to allow the $W$ and $Z$ to be massive, as described by the Higgs mechanism [5]. The lowest energy state of the Higgs field has a nonzero value, which has the dimensions of mass. Particles obtain their mass from their interactions with this Higgs field—this is the reason the Higgs boson plays such a major role in physics. The photon has no such interactions, so it retains its massless character, while the masses of the $W$ and $Z$ are approximately $100$ times the mass of the proton. The asymmetry between the masses of the photon and the $W$ and $Z$ bosons is called "electroweak symmetry breaking." According to theory, the Higgs occurs as a doublet of complex scalar fields, giving four degrees of freedom. Three of the four degrees of freedom are unphysical but are needed as intermediate states in the theory, while the fourth degree of freedom corresponds to the single physical Higgs boson. Once the Higgs mechanism is included, the electromagnetic and weak interactions are unified into one interaction—the electroweak interaction [6].

The Higgs boson, or something else that plays its role, is necessary in the standard model, but it has not yet been observed. Therefore its discovery is of utmost importance in particle physics. Searches have most recently been carried out at the Large Electron Positron collider (LEP) at the European Organization for Nuclear Research (CERN) [7] and at the Tevatron proton-antiproton collider at Fermilab [8]. It is most likely, however, that it will be discovered at the Large Hadron Collider (LHC) [9] at CERN.

At what mass should we be looking for the Higgs? The mass of the Higgs boson is not specified in the standard model, but theorists think that it should be less than about $1000$ GeV (about $1000$ times the mass of the proton). In certain extensions of the standard model such as supersymmetry there may be other constraints on the mass. The couplings of the Higgs boson to other particles determine its production rate and its decays to other particles, and knowing these coupling strengths within the theory allows the prediction of its decays as functions of its unknown mass alone. Couplings of the Higgs boson to other elementary particles are directly related to its role in generating their masses. The Higgs boson is produced in interactions involving heavy particles, and its decays are in general into the heaviest particles that are kinematically possible. If the Higgs boson is heavier than twice the mass of the $W$ boson, it decays primarily into $W^+W^-$ and $ZZ$. If it is lighter, its decays to pairs of heavy fermions (a $b$ quark and its antiparticle the $\bar{b}$ quark, or a tau lepton $\tau^-$ and its antiparticle the $\tau^+$) become dominant [10].

## Indirect limits on the Higgs boson mass

The value of the Higgs boson mass affects the standard model predictions for electroweak quantities, such as the mass and width of the $W$ boson and the width and other parameters of the $Z$ boson, measured in electron-positron colliders, hadron colliders, and elsewhere through higher-order corrections to the basic calculations, which are dependent logarithmically on the Higgs mass.
(Such corrections are dependent on the square of the top quark mass and accurately predicted it before the top quark was discovered.) These electroweak quantities have been measured extremely precisely, for example at LEP, and global fits to the data with the standard model Higgs mass as a free parameter provide limits on the Higgs boson mass, as shown in Fig. 2 [11]. The quantity $\chi^2$ is a statistical measure of the agreement of the fit with the data, with the minimum value, $\chi^2_{\min}$, at the most probable value of the Higgs mass. The global electroweak fit yields $\Delta\chi^2=\chi^2-\chi^2_{\min}=1$ limits, corresponding to a 68% confidence level or one standard deviation errors on the Higgs mass of

$$m_H=87^{+35}_{-26}\ \mathrm{GeV} \qquad (1)$$

or a one-sided 95% confidence-level upper limit, including the band of theoretical uncertainty, on $m_H$ of $157$ GeV. Precision electroweak fits thus prefer a relatively low-mass Higgs boson.

## The Higgs boson in supersymmetric models

Supersymmetric extensions of the standard model [12, 13] are particularly interesting on theoretical grounds. In supersymmetric theories there is a link between fermions and bosons. Every particle has a supersymmetric partner with the same properties except that fermions have supersymmetric partners that are bosons, and bosons have supersymmetric partners that are fermions. For example, the supersymmetric partner of the electron, a spin-$1/2$ fermion, is the spin-$0$ scalar electron, or selectron; the supersymmetric partner of the spin-$1/2$ top quark is the spin-$0$ stop quark; and the supersymmetric partner of the spin-$1$ gluon is the spin-$1/2$ gluino. Since such supersymmetric partners of the known particles have not been discovered, supersymmetry is broken, that is, the partners have larger masses than the known particles. Supersymmetric theories provide a consistent framework for the unification of the interactions at a high-energy scale and for the stability of the electroweak scale. Supersymmetry appears to be essential for string theory. In many supersymmetric models, the Lightest Supersymmetric Particle (LSP) is stable (it does not decay) and is a candidate for dark matter [14]. The measurement of the muon anomalous magnetic moment is significantly inconsistent with the standard model [15] and may be accounted for by supersymmetry.

A general property of any supersymmetric extension of the standard model is the presence of at least two Higgs doublets, but there can be more. The simplest supersymmetric model is the minimal supersymmetric extension of the standard model (MSSM) [13]. In the MSSM there are two Higgs doublets, resulting in five physical Higgs bosons: three neutral ($h$, $H$, and $A$) and two charged ($H^\pm$). Masses and couplings in the MSSM depend on standard model parameters plus at least two other parameters, $\tan\beta$ and a mass parameter (usually $m_A$). The mass of the lightest Higgs boson, $m_h$, is less than the mass of the $Z$, $m_Z$, at the basic level and thus it was thought that it could have been found at LEP. However, $m_h$ is increased significantly by corrections due primarily to the effects of the top quark and its supersymmetric partner, the spin-$0$ stop quark. Calculations within the MSSM and other supersymmetry models obtain an upper limit for $m_h$ of typically about $130$ GeV [13]. Thus the lightest Higgs boson must be relatively light, as favored by the precision electroweak data.
In fact, fits to the precision electroweak data within the constrained minimal supersymmetric standard model (CMSSM) give [16]

$$m_h=110^{+8}_{-10}\ (\mathrm{exp})\pm 3\ (\mathrm{theor})\ \mathrm{GeV}. \qquad (2)$$

In the decoupling limit, $m_A^2\gg m_Z^2$, the lightest neutral Higgs boson $h$ couples in much the same way as the standard model Higgs. The $H$, $A$, and $H^\pm$ are much heavier and nearly degenerate.

## Searches at electron-positron colliders

Direct searches for the standard model Higgs boson were carried out at the LEP electron-positron collider, running at center-of-mass energies of $91$ to $209$ GeV, up until the end of 2000, the final year of the LEP program. The four LEP experiments were ALEPH [17], DELPHI [18], L3 [19], and OPAL [20]. In electron-positron colliders the Higgs boson would be produced in association with a $Z$: $e^+e^-\to HZ$ (that is, a high-energy collision between an electron and a positron would create a Higgs plus a $Z$ boson). Since electrons and positrons are fundamental particles, the collision makes use of their full energy. The Higgs and $Z$ bosons were searched for by reconstructing them from their decay products. At LEP energies, the kinematic limit for the mass of the Higgs boson is about $115$ GeV, so the dominant decay of the Higgs would be into a pair of $b$ quarks, with smaller fractions of tau lepton pairs, $W$ pairs (one $W$ is virtual, that is, its mass is not equal to the rest mass of the $W$ boson), or gluon pairs. An important constraint was the reconstruction of the mass of the accompanying $Z$ through its decay products, and identification of $b$ quarks was also used. The event configurations searched were the four-jet final state ($H\to b\bar{b}$, $Z\to q\bar{q}$), the missing energy final state ($H\to b\bar{b}$, $Z\to \nu\bar{\nu}$), the leptonic final state ($H\to b\bar{b}$, $Z\to e^+e^-$ or $H\to b\bar{b}$, $Z\to \mu^+\mu^-$), and the tau lepton final state ($H\to b\bar{b}$, $Z\to \tau^+\tau^-$ or $H\to \tau^+\tau^-$, $Z\to q\bar{q}$).

Reconstructing these decays requires an array of methods that have been designed into the experiments and used for other physics as well. Charged particles leave trails in tracking devices, such as drift chambers or silicon detectors, and their momenta can be measured from how much they bend in a magnetic field. Neutral particles such as photons leave energy deposits in the detectors. Electrons and muons are identified through their interactions with the material of the detector. Neutrinos do not interact with the amount of material in the detector and so are identified by missing energy in the reconstruction of the event, since the total energy is known from the center-of-mass energy of the electron-positron collision. Quarks are reconstructed from the jets of particles they produce, charged or neutral, since quarks cannot be directly observed. Jets from $b$ quarks can be distinguished by the rather long lifetimes of the hadrons containing the $b$ quarks. These hadrons decay at some distance from the overall event production point along the beams, and this displacement can be measured in precision tracking devices.

When searching for the Higgs boson, physicists look for events that meet the criteria expected for the Higgs. However, there are background events, which are those from other physics processes that mimic the characteristics of the Higgs signal. There are significant numbers of background events due to $W$ pairs and $Z$ pairs, which appear as four-fermion events due to their decays, and quark-antiquark events. A signal due to Higgs boson production would appear as an excess number of events compared with these known standard model backgrounds.
No statistically significant evidence was found for the Higgs boson, and a combination of the data of the four experiments gave a lower limit of $m_H > 114.4\ \text{GeV}$ at the $95\%$ confidence level [21]. However, in the last year of LEP running at center-of-mass energies above $206\ \text{GeV}$, some excess events were seen that were consistent with background plus a Higgs boson of mass about $115\ \text{GeV}$ [22]. The experiments requested an extension of the LEP program for six months, but the request was denied because it would delay the construction of the LHC, which was built in the same tunnel as LEP.

The four LEP experiments also searched for neutral Higgs bosons as predicted by the MSSM. The numbers of events produced and the Higgs decays in the MSSM are determined by the parameters of the particular MSSM model, so the interpretations of search results depend on these parameters. The lightest Higgs boson $h$ typically decays into a pair of $b$ quarks or a pair of tau leptons, and the main production mechanisms are $e^+e^- \to hZ$ and $e^+e^- \to hA$, so searches for the standard model Higgs boson can be interpreted within the MSSM. The searches of the four LEP experiments were combined to give limits on $m_h$ and $m_A$ of about $93\ \text{GeV}$ at the $95\%$ confidence level over most of the MSSM parameter space [23]. The limit on $m_h$ gradually approaches that of the standard model Higgs in the decoupling limit. In summary, no statistically significant evidence for a Higgs boson was obtained at LEP.

## Searches at hadron colliders

Higgs boson searches had been planned for the Superconducting Super Collider (SSC), a $40$-TeV (one TeV equals $1000\ \text{GeV}$) proton-proton collider whose construction had started in Texas but was canceled in 1993. After LEP, the search for the Higgs boson therefore moved to Fermilab, where the upgraded Tevatron collider and its experiments began taking data. At proton-proton or proton-antiproton colliders, unlike at electron-positron colliders, the colliding particles are not fundamental. Protons (antiprotons) are made up of quarks (antiquarks) and gluons, so the collisions involve quarks with quarks (antiquarks) or gluons, or gluons with gluons. The energies of the quarks or gluons within the proton or antiproton vary as steeply falling functions of the fraction of the total energy of the proton or antiproton. Therefore the effective center-of-mass energy of the collision is in general much less than that of the colliding protons and antiprotons and varies over a wide range. The energy transverse to the beam direction roughly balances, since the quarks and gluons travel in the same direction as the proton or antiproton.

To date, searches for the standard model Higgs boson have been performed at the Fermilab Tevatron proton-antiproton collider at a center-of-mass energy of $1.96\ \text{TeV}$ in the CDF [24] and D0 [25] experiments. The dominant production mechanism for Higgs bosons at the Tevatron would be through the interaction of a gluon in a proton with a gluon in an antiproton (gluon-gluon fusion). The Higgs can also be produced in association with a $W$ or $Z$ boson through the interaction of a quark in a proton with an antiquark in an antiproton (similar to the production of $HZ$ in an electron-positron collider). With the data accumulated so far, the Tevatron experiments are sensitive only to high-mass Higgs bosons that decay into $W$ pairs.
Searches for low-mass Higgs bosons are more difficult and require more data: there are very large backgrounds that mask evidence for a low-mass Higgs decaying into a pair of $b$ quarks or a pair of tau leptons. In order to control these backgrounds, researchers look for the low-mass Higgs in association with a $W$ or $Z$ boson, which reduces the number of possible Higgs events. In addition, the low-mass Higgs boson must be identified by reconstructing it from a pair of $b$-quark jets. There are still large numbers of background events, even with the requirement of identifying an accompanying $W$ or $Z$, and the mass peak from the pair of $b$ quarks must be well defined in order to observe it above the background.

To search for a Higgs that decays into a pair of $W$'s, the subsequent decays of the $W$ into a lepton ($e$, $\mu$, or $\tau$) and a neutrino are used. The signature for the Higgs is therefore events with two energetic electrons with opposite charge, or two muons with opposite charge, or an electron and a muon with opposite charge, plus large missing transverse energy due to the two neutrinos, which are not detected. (The tau contributes through its decay to an electron or a muon.) The main background is due to electromagnetic production of a pair of oppositely charged leptons when a quark-antiquark annihilation occurs (the Drell-Yan process), which is suppressed by the requirement of large missing transverse energy. Other backgrounds are due to $WW$, $ZZ$, $WZ$, and top quark pair production with subsequent decays into leptons. Both experiments compare the numbers of events observed with the numbers of background events expected, plus a possible signal due to a standard model Higgs boson of assumed mass produced at the rate predicted by the standard model. They use statistical methods to determine upper limits (at the $95\%$ confidence level) on the possible production rate for the Higgs boson compared with the standard model prediction. Neither experiment by itself can set a $95\%$ confidence-level upper limit below the standard model prediction (exclusion), but the combined results of the two experiments exclude a standard model Higgs boson of mass between $160$ and $170\ \text{GeV}$ [26]. Discovery of a Higgs boson with mass in the region $115$–$120\ \text{GeV}$ by the Tevatron is unlikely [27].

The Large Hadron Collider (LHC) at CERN, which will begin data taking with proton-proton collisions in early 2010 and will ultimately have a center-of-mass energy of $14\ \text{TeV}$, will be sensitive to the entire mass range of the standard model Higgs boson. Searches for the Higgs boson will then begin in the ATLAS [28] and Compact Muon Solenoid (CMS) [29] detectors. The most important production mechanisms for the Higgs at the LHC are similar to those at the Tevatron. The Higgs decay into $W$ or $Z$ pairs will be used for the high-mass region. For a low-mass Higgs boson, decays into pairs of $b$ quarks or $\tau$ leptons dominate; however, backgrounds from ordinary quarks and gluons are expected to be too large at the LHC to make these decay modes usable for a Higgs search. Therefore the search for the low-mass Higgs will rely on the decay into two photons, with a decay fraction of only about $0.002$. There is still considerable background in the two-photon channel due to real photon pairs produced in standard model processes and jets misidentified as photons, so the Higgs will be seen as a small peak on top of a large background [30, 31].
Accurate reconstruction of the photons in the detector is needed for the best definition of the peak. Finding a relatively light Higgs boson (which seems likely judging from the fits to precision electroweak data) at the LHC will be difficult and will require, in the language of high-energy physicists, several $\text{fb}^{-1}$ of integrated luminosity [32], as shown in Fig. 3. In this context, luminosity is a measure of the collision rate of the two beams, and integrated luminosity is a measure of the total number of collisions. One $\text{fb}^{-1}$ of integrated luminosity corresponds to the production of one event for a process with a theoretical cross section of $1\ \text{fb}$ and is thus a measure of the amount of data that needs to be acquired. In practical terms, it means two to three years of data taking after the LHC begins operation at full energy will be required for observation of the Higgs boson. It may be that a Higgs boson of mass $115\ \text{GeV}$, just above the LEP limit, will be found. If this Higgs is the lightest MSSM Higgs boson, then it is possible that supersymmetry will be discovered first, since the supersymmetric partners of quarks and gluons, the squarks and gluinos, could be produced copiously. This will be truly exciting!

## Acknowledgments

The author acknowledges support by the Department of Energy through grants DE-FG02-07ER41465 and DE-FG02-07ER41487 and by the National Science Foundation through grants PHY-0630052 and PHY-0612805. The author would also like to acknowledge fruitful discussions with students and colleagues, especially Nina Byers, Ernest Ma, Harry Tom, and Gillian Wilson.

### References

1. C. Amsler et al. (Particle Data Group), Phys. Lett. B 667, 1 (2008), and 2009 partial update for the 2010 edition, http://pdg.bl.gov.
2. G. Hanson et al., Phys. Rev. Lett. 35, 1609 (1975).
3. Contemporary Physics Education Project, http://cpepweb.org.
4. S. Weinberg, Phys. Rev. Lett. 19, 1264 (1967); A. Salam, in Elementary Particle Theory, edited by N. Svartholm (Almquist and Wiksells, Stockholm, 1968), p. 367.
5. P. W. Higgs, Phys. Lett. 12, 132 (1964); F. Englert and R. Brout, Phys. Rev. Lett. 13, 321 (1964); P. W. Higgs, Phys. Rev. Lett. 13, 508 (1964); Phys. Rev. 145, 1156 (1966).
6. For reviews, see J. F. Gunion, H. E. Haber, G. L. Kane, and S. Dawson, The Higgs Hunter's Guide (Addison-Wesley, Reading, Massachusetts, 1990); J. Ellis, G. Ridolfi, and F. Zwirner, C. R. Physique 8, 999 (2007); A. Djouadi, Phys. Rep. 457, 1 (2008).
7. CERN official web site, http://public.web.cern.ch/public/; John Adams Memorial Lecture, CERN, November 26, 1990, http://sl-div.web.cern.ch/sl-div/history/lep_doc.html.
8. Fermilab official web site, http://www.fnal.gov; R. R. Wilson, "The Tevatron," FERMILAB-TM-0763 (1978).
9. LHC official web site, http://public.web.cern.ch/public/en/LHC/LHC-en.html.
10. A. Djouadi, J. Kalinowski, and M. Spira, Comput. Phys. Commun. 108, 56 (1998).
11. The ALEPH, CDF, D0, DELPHI, L3, OPAL, and SLD Collaborations, the LEP Electroweak Working Group, the Tevatron Electroweak Working Group, and the SLD Electroweak and Heavy Flavour Groups, arXiv:0811.4682v1 (hep-ex) (2008); updates for Summer 2009 from http://lepewwg.web.cern.ch/LEPEWWG/.
12. J. Wess and B. Zumino, Nucl. Phys. B70, 39 (1974); Phys. Lett. B 49, 52 (1974); P. Fayet, Phys. Lett. B 69, 489 (1977); 84, 421 (1979); 86, 272 (1979).
13. For reviews with references to the original literature, see H. E. Haber and G. L. Kane, Phys. Rep. 117, 75 (1985); H. E. Haber, Phys. Rev. D 66, 010001 (2002); S. P. Martin, arXiv:hep-ph/9709356v5 (2008); A. Djouadi, Phys. Rep. 459, 1 (2008); Eur.
Phys. J. C 59, 389 (2009).
14. J. R. Ellis, J. S. Hagelin, D. V. Nanopoulos, and M. Srednicki, Phys. Lett. B 127, 233 (1983); J. R. Ellis, J. S. Hagelin, D. V. Nanopoulos, K. A. Olive, and M. Srednicki, Nucl. Phys. B 238, 453 (1984); H. Goldberg, Phys. Rev. Lett. 50, 1419 (1983).
15. G. W. Bennett et al., Phys. Rev. D 73, 072003 (2006).
16. O. Buchmueller et al., Phys. Lett. B 657, 87 (2007).
17. ALEPH Collaboration, Nucl. Instrum. Methods A 294, 121 (1990); 360, 481 (1995); D. Creanza et al., 409, 157 (1998).
18. P. Aarnio et al. (DELPHI Collaboration), Nucl. Instrum. Methods A 303, 233 (1991); P. Abreu et al. (DELPHI Collaboration), 378, 57 (1996); P. Chochula et al. (DELPHI Silicon Tracker Group), 412, 304 (1998).
19. B. Adeva et al. (L3 Collaboration), Nucl. Instrum. Methods A 289, 35 (1990); O. Adriani et al. (L3 Collaboration), Phys. Rep. 236, 1 (1993); J. A. Bakken et al., Nucl. Instrum. Methods A 275, 81 (1989); O. Adriani et al., 302, 53 (1991); B. Adeva et al., 323, 109 (1992); K. Deiters et al., 323, 162 (1992); M. Chemarin et al., 349, 345 (1994); M. Acciarri et al., 351, 300 (1994); G. Basti et al., 374, 293 (1996); A. Adam et al., 383, 342 (1996).
20. K. Ahmet et al. (OPAL Collaboration), Nucl. Instrum. Methods A 305, 275 (1991); S. Anderson et al., 403, 326 (1998); B. E. Anderson et al., IEEE Trans. Nucl. Science 41, 845 (1994); G. Aguillion et al., Nucl. Instrum. Methods A 417, 266 (1998).
21. R. Barate et al. (ALEPH, DELPHI, L3, OPAL Collaborations, and The LEP Working Group for Higgs Boson Searches), Phys. Lett. B 565, 61 (2003).
22. R. Barate et al. (ALEPH Collaboration), Phys. Lett. B 495, 1 (2000); M. Acciarri et al. (L3 Collaboration), 495, 18 (2000); G. Abbiendi et al. (OPAL Collaboration), 499, 38 (2001); P. Abreu et al. (DELPHI Collaboration), 499, 23 (2001).
23. The ALEPH, DELPHI, L3, OPAL Collaborations, and The LEP Working Group for Higgs Boson Searches, Eur. Phys. J. C 47, 547 (2006).
24. D. Acosta et al. (CDF Collaboration), Phys. Rev. D 71, 032001 (2005); R. Blair et al., "The CDF II Detector Technical Design Report," Report No. FERMILAB-PUB-96 390-E.
25. D0 Collaboration, Nucl. Instrum. Methods A 565, 463 (2006).
26. The Tevatron New Phenomena and Higgs Working Group for the CDF and D0 Collaborations, FERMILAB-PUB-09-060-E, arXiv:0903.4001v1 (hep-ex) (2009).
27. J. Conway, "The Search for the Higgs Boson," plenary presentation at the 2009 Europhysics Conference on High Energy Physics, Kraków, Poland, July 2009, http://indico.ifj.edu.pl/MaKaC/contributionDisplay.py?contribId=937&sessionId=31&confId=11.
28. G. Aad et al. (ATLAS Collaboration), JINST 3, S08003 (2008).
29. R. Adolphi et al. (CMS Collaboration), JINST 3, S08004 (2008).
30. ATLAS Physics Performance Technical Design Report, CERN/LHCC/99-15; S. Asai et al., Eur. Phys. J. C 32, Suppl. 2, 19 (2004); arXiv:hep-ph/0402254.
31. CMS Physics, Technical Design Report, Vol. II: Physics Performance, CERN/LHCC 2006-021, CMS TDR 8.2.
32. J.-J. Blaising et al., "Potential LHC Contributions to Europe's Future Strategy at the High-Energy Frontier" (2006), http://council-strategygroup.web.cern.ch/council-strategygroup/BB2/contributions/Blaising2.pdf; F. Gianotti, "ATLAS: preparing for the first LHC data," plenary presentation at the 2009 Europhysics Conference on High Energy Physics, Kraków, Poland, July 2009, http://indico.ifj.edu.pl/MaKaC/contributionDisplay.py?contribId=940&sessionId=32&confId=11.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 151, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8794591426849365, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/50551-combinations-how-many-ways-can-you-choose-20-a.html
# Thread:

1. ## Combinations - How many ways can you choose 20 ...

My teacher gave us this question and I'm not sure if I'm just going about it wrong, but it seems to me like there isn't enough information. The question is: A store sells red, green, and yellow marbles. In how many ways can a child buy 20 marbles? Wouldn't you have to know how many of each color there is, or is there another way to do it? Thanks!

2. Hello, jlt1209!

A store sells red, green, and yellow marbles. In how many ways can a child buy 20 marbles?

We will assume that the store has at least 20 of each color. The child could buy:
. . 1 red, 1 green, 18 yellow
. . 7 red, 5 green, 8 yellow
. . 5 red, 8 green, 7 yellow
. . 8 red, 12 green, 0 yellow
. . 20 red, 0 green, 0 yellow
The list goes on and on . . .

Here's my plan for counting all the possible combinations of colors. Place the 20 marbles in a row. Consider the 21 spaces that are between, before, and after the marbles.

. . $\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_\,o\_$

We will insert two "dividers" into the spaces. To the left of the first divider are the Red marbles. To the right of the second divider are the Yellow marbles. Between the two dividers are the Green marbles.

Example: . $o\:o\:o\:o\:o\,|\,o\:o\:o\:o\:o\:o\:o\:o\:o\,|\,o\:o\:o\:o\:o\:o$
. . This represents: 5 Red, 9 Green, 6 Yellow.

Example: . $o\:o\:o\:o\:o\:o\:o\,||\,o\:o\:o\:o\:o\:o\:o\:o\:o\:o\:o\:o\:o$
. . This represents: 7 Red, 0 Green, 13 Yellow.

Example: . $|\,o\:o\:o\:o\:o\:o\:o\:o\:o\:o\:o\:o\,|\,o\:o\:o\:o\:o\:o\:o\:o$
. . This represents: 0 Red, 12 Green, 8 Yellow.

In this way, we can create all possible combinations of colors. There are 21 choices for the first divider and 21 choices for the second divider.
. . It would seem that there are: $21^2 = 441$ ways.

But there is considerable duplication in this counting. Among the 441 items in our list, there is, for example:
. . (2,4): the first divider is in space 2, the second is in space 4.
. . (4,2): the first divider is in space 4, the second is in space 2.
But these two represent the same distribution of colors, and should be counted only once.

Of the 441 ordered pairs, 21 have both dividers in the same space; the remaining 420 split into 210 such "symmetric pairs," each counted twice, so 210 duplicates must be removed from our count.

Therefore, the child can select colors in: $441 - 210 \:=\:{\color{blue}231}$ ways. (This agrees with the standard "stars and bars" count $\binom{20+2}{2} = \binom{22}{2} = 231$.)
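A quick brute-force check of this count (my own sketch, not from the thread): enumerate the red and green counts directly, with yellow determined by the total of 20.

```python
# Brute-force verification that there are 231 ways (not from the thread).
from math import comb

# For each (red, green) pair, yellow = 20 - red - green is determined
# and non-negative, so it suffices to count the valid (red, green) pairs.
count = sum(1 for red in range(21) for green in range(21 - red))

print(count, comb(22, 2))  # 231 231
```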
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9095565676689148, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/46322/is-there-some-sort-of-pati-salam-model-with-mixed-generations
# Is there some sort of Pati-Salam model with mixed generations?

The evidence for an approximate "lepton as fourth colour" symmetry is so overwhelming in the particle spectrum that Hanlon's razor does not seem to apply. Still, my own incompetence fails to recognise an adequate model. The point is, once we have massive, see-saw-able neutrinos, we have enough leptons to organise approximate multiplets with three colours of a quark plus one colour-neutral lepton:

• $(\nu_1,t_r,t_g,t_b)$ at about 174.10 GeV
• $(\nu_2,b_r,b_g,b_b)$ around 3.64 GeV
• $(\tau,c_r,c_g,c_b)$ around 1.698 GeV
• $(\mu,s_r,s_g,s_b)$ around 121.95 MeV
• $(e,u_r,u_g,u_b)$ with null mass.
• $(\nu_3,d_r,d_g,d_b)$ about 8.75 MeV

But you can see the problem: there are two charged leptons with the two second-generation quarks, and then two neutral leptons with the third generation! So something must be done with the L-R SU(2)×SU(2) charges of the model, or with the quark assignment. In fact I believe to remember that the paper of Harari-Haut-Weyers, from which the mass assignments for u, d, s are taken, had a model where the right and left quarks were permuted between generations, so I should expect more work to exist in the literature.

My question is: do you know of some sort of "twisted Pati-Salam L-R model" where the above multiplets are valid?

EDIT 1: the representations. To start to play, they should not be taken from the standard model, but from left-right symmetric models with some Pati-Salam symmetry. This means that both leptons and quarks are in representations of $SU(2)_R$ and $SU(2)_L$ with right and left isospins of $\pm 1/2$ where they are in the doublet and $0$ where they are in the singlet. So the objection of Lubos Motl, below, is that for instance the muon above is in $+1/2$ of the left doublet while the strange quark is in $-1/2$ of the left doublet. And of course the same problem arises for the conjugate multiplet, on the $SU(2)_R$ side. But this was my original question! Is it possible to twist the symmetries and the generation-wise charge assignments to allow for such a mix?

EDIT 1.1: To clarify, my question is about the second and third generations. The first-generation multiplet $(e^L,u^L_{rgb})$, as well as its conjugate $(e^R,u^R_{rgb})$, are usual multiplets of $SU(4) \times SU(2)_L \times SU(2)_R$, the former being a doublet in L and a singlet in R, the latter being a singlet in L and a doublet in R. In this case there should be nothing surprising about a Higgs mechanism preserving SU(4), no more than the preservation of SU(3) in the Standard Model. Nobody is surprised that the three up quarks have equal mass, the three down quarks another mass but equal for all of them, and still $u^R$ and $u^L$ have different electroweak properties.

EDIT 1.2: Note for instance Gabriele Honecker's version of supersymmetric Pati-Salam, http://inspirehep.net/record/614377?ln=es, http://inspirehep.net/record/1185446?ln=es, where one generation has a different representation than the other two.

EDIT 2: the masses (just as motivation, not the real question!) Lubos points out that the values are numerological, but where do they come from? Well, this is irrelevant to the question, but it can be of marginal interest: the series is chosen so that all of the values fit the Koide formula: (174.10, 3.64, 1.698), (3.64, 1.698, 0.12195), (1.698, 0.12195, 0), (121.95, 0, 8.75). So the only input is 0 for up and 174.10 for top. Besides, the last triplet has the proportions of the Harari-Haut-Weyers model: up equal to zero and $m_d/m_s$ equal to $\tan^2 15^\circ$.
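As a quick numerical sanity check of this claim (my own sketch, not part of the question), each quoted triplet can be plugged into the Koide ratio $Q = (\sum m)/(\sum \pm\sqrt{m})^2$, which should come out at $2/3$; the negative branch of $\sqrt{m_s}$ is used in the second triplet, as explained just below.

```python
# Numerical check of Koide's relation for the quoted triplets (my own sketch).
from math import sqrt

def koide(masses, signs=(1, 1, 1)):
    """Q = sum(m) / (sum of signed square roots)**2; Koide predicts 2/3."""
    s = sum(sg * sqrt(m) for sg, m in zip(signs, masses))
    return sum(masses) / s**2

print(koide((174.10, 3.64, 1.698)))               # ~0.6666  (t, b, c)
print(koide((3.64, 1.698, 0.12195), (1, 1, -1)))  # ~0.6667  (b, c, s), -sqrt(m_s)
print(koide((1.698, 0.12195, 0)))                 # ~0.6666  (c, s, u)
print(koide((121.95, 0, 8.75)))                   # ~0.6667  (s, u, d), in MeV
```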
If some of you check the triplets against Koide, remember to take the negative sign for $\sqrt{m_s}$ in the second one. In this way, it is orthogonal to the charged lepton triplet. The link between the scb triplet and the charged lepton triplet is exploited to predict the masses after the breaking of the multiplets, by assuming that all the Koide equations still hold. EDIT 2.1: when Koide is written as $m_k=M(1+\sqrt 2 \cos ({2 \pi \over 3} k + \delta))^2$, then it can be seen by inspecting the above that $M_{scb}=3M_l$ and $\delta_{scb}=3\delta_l$. Assuming that this relation also survives the breaking of "SU(4)", it is then possible to use as input the masses of the electron and muon to predict all the other masses. And it works: the predictions are $173.26, 4.197, 1.77696, 1.359, 92.275, 5.32, 0.03564$; and the experiments (pdg2012) give, respectively, $173.5 \pm 1.0, 4.18 \pm 0.03, 1.77682 (16), 1.275 \pm 0.025, 95 \pm 5, \sim4.8, \sim2.3$ - I edited the question title, "some sort of" instead of "well-known". I was tempted to stress "valid", but really, as you see, I feel open about this. Just "published" should be enough. – arivero Dec 9 '12 at 18:33 ## 2 Answers There can't be any multiplets of a viable gauge group - which would include the electroweak $SU(2)$ - that look like that, because it's known that the left-handed parts of the quarks in your first two "multiplets" form an electroweak doublet, for example, while the remaining components of these "multiplets" - two flavors of neutrinos - surely don't. I can find dozens of similar inconsistencies in your list. Equivalently, the trace of the electroweak $T_3$ in most of your would-be multiplets is not zero, but it must be zero because the trace of every generator of a non-Abelian group must vanish in every representation. You just organized the particles into random multiplets because of numerological motivations (proximity of masses?) whose basic physics is obviously indefensible. So of course such random sets of particle species can't form multiplets. Moreover, you seem to misunderstand the nature of the true lepton and quark fields, because you clump together the left-handed ones and right-handed ones. They come from different representations of the gauge group, which must be discussed separately. The left-handed ones are doublets, the right-handed ones are singlets, and so on. It's because the electroweak interactions are chiral. You clumped completely different representations into "unified" groups regardless of their actual transformation group, by linking them to the observed particles. But the observed particles and their masses aren't coming from fields that transform uniformly under the gauge group, as the doublet-vs-singlet chiral character mentioned above indicates. Why don't you try to learn how the fields actually transform in the electroweak theory, and then study its possible generalizations? It doesn't make any sense to try to find would-be grand unified theories if you don't understand the group theory in the more modest and more well-established electroweak theory, and you obviously don't. - I guess that my question is whether I can use different L-R charge assignments in the L-R SU(2) groups for each generation, so that all the particles in the same multiplet go to the same state in the same representation. I have expanded a bit about it; it was clear from the context that I was not thinking of the electroweak $T_3$ but the L-R $T_3$. – arivero Dec 9 '12 at 15:19 Dear Alejandro, it just doesn't matter.
These multiplets are impossible as representations of any non-Abelian group because they don't pass the basic tests I have mentioned. A representation of a group isn't an arbitrary collection of objects that you clumped together for some crazy numerological reason concerning masses - which have nothing to do with the multiplets. A representation is made of basis vectors that must transform into each other, so their quantum numbers must reflect the actual group - and the quantum numbers are the charges under forces, not masses! Your collections can't be reps. – Luboš Motl Dec 9 '12 at 19:54 Ok, so your answer to my question is "No, such an arrangement cannot exist" and not "such an arrangement would need a complicated Higgs sector to put in the masses". It is a bit puzzling, because the only exotics here are the ($\mu$-$s$) and the ($\nu$-$t$) multiplets. – arivero Dec 9 '12 at 21:01 Or, are you claiming that $(u_r,u_g,u_b)$ should not be considered to have a symmetry under SU(3) colour, because $u^R$ and $u^L$ have different electroweak assignments? It makes sense to say it, but it is mostly a point of view, not a point of principle. – arivero Dec 9 '12 at 21:26 After some extra reading, I feel your answer is not helpful because it fails to mention relevant cases, well known to you, where different generations have different representations and charge assignments. For instance the Cvetic-Shiu-Uranga or Gabriele Honecker models. – arivero Dec 17 '12 at 16:12 I am going to try to elaborate an answer by myself to show what I am thinking about, but please feel free to add your own. The problem is that $\nu_1$ and $\mu$ are not in the same SU(2) state as $t$ and $s$. Both $\mu_L$ and $s_L$ are $SU(2)_R$ singlets, but in $SU(2)_L$ the former is $+1/2$ and the latter is $-1/2$. And of course the opposite problem happens with $\mu_R$ and $s_R$; they are $SU(2)_L$ singlets, but they differ in $SU(2)_R$. On the other hand, $\bar \mu_R$ is an $SU(2)_R$ singlet and $-1/2$ in $SU(2)_L$, and similarly with $\bar \mu_L$. Of course the lepton number becomes -1 instead of +1, but perhaps we can cope with this. It could then be sensible to build the multiplets as $(\bar \nu_1,t_r,t_g,t_b)$ $(\nu_2,b_r,b_g,b_b)$ $(\tau,c_r,c_g,c_b)$ $(\bar \mu,s_r,s_g,s_b)$ $(e,u_r,u_g,u_b)$ $(\nu_3,d_r,d_g,d_b)$ There is an extra point scored here: it makes the Koide equation for leptons take the same form as the Koide equation for quarks, with two pieces of the same kind and a different one. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9459537267684937, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/101125/how-to-compute-the-pareto-frontier-intuitively-speaking/101141
# How to compute the Pareto Frontier, intuitively speaking?

I'm working on a multi-objective optimization problem and we have 'alternatives' that are quantified on two dimensions - value and cost. Now the question is: how does one compute a Pareto frontier? I mean, I know you can apply algorithms that will do it for you, but I want to know the basic underlying algorithm/mathematical steps that would be employed to come up with a Pareto frontier - I want to be able to do it with pen and paper, even if the algorithm is NOT efficient. I looked all over, and it seems I only stumble upon various algorithms that can be employed, but not the actual intuition behind how one computes a Pareto frontier in the first place! -

## 1 Answer

The basic definition of the Pareto frontier is that it consists of exactly those alternatives that are not dominated by any other alternative. We say that an alternative $A$ dominates $B$ if $A$ outscores $B$ regardless of the tradeoff between value and cost, that is, if $A$ is both better and cheaper than $B$. Obviously, both the best and the cheapest alternative always belong to the Pareto frontier, and in fact, they are its endpoints. A simple algorithm to find the other alternatives (if any) on the Pareto frontier is to first sort the alternatives according to one of the objectives, say, cost. One then starts with the cheapest alternative (which, as noted, always belongs in the Pareto frontier) and skips successive alternatives in order of increasing cost until one finds one with a higher value. This alternative is then added to the frontier and the search is restarted from it. A step-by-step description of the algorithm, assuming that $A_1, \dotsc, A_n$ are the alternatives in increasing order of cost, goes like this:

Algorithm A:

1. Let $i := 1$.
2. Add $A_i$ to the Pareto frontier.
3. Find the smallest $j>i$ such that $\operatorname{value}(A_j) > \operatorname{value}(A_i)$.
4. If no such $j$ exists, stop. Otherwise let $i := j$ and repeat from step 2.

### Addendum

In the comments below, Nupul asked about computing the Pareto frontier for combinations of alternatives. There are two natural and interesting cases: one where the combinations are sets of alternatives (i.e. each alternative may appear at most once in a combination), and one where they are multisets (which can include the same alternative more than once). I'll address both cases below, but let me first give an alternative algorithm for computing the Pareto frontier for single alternatives:

Algorithm B:

1. Let $A_1, \dotsc, A_n$ be the alternatives sorted in order of increasing cost/value ratio. Let $i := 1$.
2. Add $A_i$ to the Pareto frontier $P$.
3. Find the smallest $j>i$ such that $A_j$ is not dominated by any alternative already in $P$.
4. If no such $j$ exists, stop. Otherwise let $i := j$ and repeat from step 2.

In step 3, to check whether $A_j$ is dominated by any alternative in $P$, it will suffice to find the most expensive alternative $B \in P$ which is still cheaper than $A_j$. (Of course, by symmetry, we could also choose $B$ as the least valuable alternative in $P$ that has higher value than $A_j$.) If $B$ does not dominate $A_j$, neither can any other alternative in $P$. Algorithm B is somewhat more complicated than algorithm A above (in particular, algorithm A runs in $O(n)$ time if the alternatives are already sorted, whereas algorithm B needs $O(n \log n)$ time in any case), but it turns out to generalize better. (A runnable sketch of algorithm A appears just below.) Now, let's consider sets of zero or more alternatives.
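Here is that sketch; it is my own illustration, not part of the original answer, and it assumes each alternative is a (cost, value) pair with distinct costs, higher value being better:

```python
# Minimal sketch of algorithm A (illustrative; assumes distinct costs).
def pareto_frontier(alternatives):
    """Return the non-dominated (cost, value) pairs, cheapest first."""
    ordered = sorted(alternatives)        # pre-step: sort by increasing cost
    frontier = [ordered[0]]               # the cheapest alternative always qualifies
    for cost, value in ordered[1:]:
        # Step 3: skip alternatives until one has strictly higher value.
        if value > frontier[-1][1]:
            frontier.append((cost, value))
    return frontier

print(pareto_frontier([(4, 2), (1, 1), (3, 4), (8, 5), (5, 6)]))
# -> [(1, 1), (3, 4), (5, 6)]; (4, 2) and (8, 5) are dominated.
```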
Obviously, we could just enumerate all such sets and then apply algorithm A or B, but this would be very inefficient: for $n$ alternatives, the number of sets is $2^n$. Instead, we can adapt algorithm $B$ to construct the combinations on the fly: Algorithm C: 1. Let $A_1, \dotsc, A_n$ be the alternatives sorted in order of increasing cost/value ratio. Let $i := 1$. Let $P := \lbrace\emptyset\rbrace$, where $\emptyset$ denotes the combination containing no alternatives. 2. For each combination $C \in P$, let $C^* := C \cup \lbrace A_i \rbrace$. If $C^*$ is not dominated by any combination already in $P$, add $C^*$ to $P$. 3. If $i = n$, stop. Otherwise increment $i$ by one and repeat from step 2. Again, as in algorithm B, we don't need to compare $C^*$ to every combination in $P$; it's enough to check whether $C^*$ is dominated by the most expensive combination in $P$ that is cheaper than $C^*$. What about multiset combinations? In this case, obviously, the Pareto frontier $P$ contains infinitely many combinations: in particular, for any combination $C \in P$, the combination $C + A_1$, where $A_1$ is the alternative with the lowest cost/value ratio, also belongs in $P$. However, the number of times any other alternative can appear in a non-dominated combination must be bounded. (The proof is a bit tedious, but follows from simple geometrical considerations.) Therefore we only need to consider the finite set $P_0$ of those combinations on the Pareto frontier which do not include the alternative $A_1$; the remaining combinations on the frontier are obtained by adding some number of $A_1$s to those combinations. For multiset combinations, we also have the following useful lemma: Lemma: Any combination that contains a dominated sub-combination must itself be dominated. In particular, this means that combinations in $P$ can only include alternatives that themselves belong in $P$. Thus, as a first step, we can use algorithm A (or B) to compute the Pareto frontier for single alternatives and discard any alternatives that are not part of it. For a complete algorithm to compute $P_0$, the following definition turns out to be useful: Dfn: A combination $B$ is said to $A$-dominate $C$ if the combination $B + nA$ dominates $C$ for some non-negative integer $n \in \mathbb N$. Equivalently, $B$ $A$-dominates $C$ iff $\operatorname{cost}(C) > \operatorname{cost}(B) + n\,\operatorname{cost}(A)$, where $n = \max \left(0, \left\lfloor \frac{\operatorname{value}(C)-\operatorname{value}(B)}{\operatorname{value}(A)} \right\rfloor \right)$. Using this definition, we can apply the following algorithm: Algorithm D: 1. Let $A_1, \dotsc, A_n$ be the (non-dominated) alternatives sorted in order of increasing cost/value ratio. Let $i := 2$. Let $P_0 := \lbrace\emptyset\rbrace$. 2. For each combination $C \in P_0$, let $C^* := C + A_i$. If $C^*$ is not $A_1$-dominated by any combination already in $P_0$, add $C^*$ to $P_0$. 3. Repeat from step 2 until no more combinations are added to $P_0$. 4. If $i = n$, stop. Otherwise increment $i$ by one and repeat from step 2. I think this algorithm should be correct, but to be honest, I'm not 100% sure that there aren't any mistakes or gaps in my proof sketch. So please test it thoroughly before using it for anything that matters. :-) I also think it should be possible to implement step 2 efficiently in a manner similar to algorithms B and C, but the situation is complicated by the fact that the rejection condition is $A_1$-domination rather than simple domination. 
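As a side note before the optimizations discussed next, here is a direct, unoptimized Python transcription of algorithm C (my own sketch, not the answerer's code); it assumes hypothetical (cost, value) pairs with positive values, and it checks each candidate only against combinations already in $P$, reading step 2 as a pass over a snapshot of $P$:

```python
# Unoptimized sketch of algorithm C for set combinations (illustrative only).
def pareto_set_combinations(alternatives):
    # Step 1: sort by increasing cost/value ratio (values assumed positive).
    alts = sorted(alternatives, key=lambda a: a[0] / a[1])
    P = [(0.0, 0.0, frozenset())]        # (cost, value, members); starts with {}
    for i, (c, v) in enumerate(alts):
        additions = []
        for cost, value, members in P:   # step 2, over a snapshot of P
            cand = (cost + c, value + v, members | {i})
            # Dominated: some existing combination is cheaper AND more valuable.
            dominated = any(pc < cand[0] and pv > cand[1] for pc, pv, _ in P)
            if not dominated:
                additions.append(cand)
        P.extend(additions)              # step 3: move on to the next A_i
    return P
```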
Of course, if $B$ dominates $C$, then it trivially $A$-dominates $C$ for any $A$, so one can at least first do a quick check for simple dominance. Another way to optimize step 2 is to note that if $C + A_i$ is $A_1$-dominated by any combination in $P_0$, then so will be $D + A_i$ for any combination $D$ that includes $C$ as a sub-combination. In particular, we can skip to step 4 as soon as we find that $\emptyset + A_i = A_i$ itself is $A_1$-dominated by some combination already in $P_0$. - Now isn't that lovely - short, sweet and simple! +1 for that. What about computing the Pareto frontier when combining alternatives? What you suggest would only be for a single alternative. What if multiple alternatives could be combined (values of value and cost being additive)? How would you compute it in that case? – PhD Jan 21 '12 at 22:33 Never mind - I guess I figured it out: you'd have to create an exhaustive list of all possible combinations of the alternatives and then run the above algorithm. I guess now I understand what the other fancy algorithms must be doing, since it's exponential in complexity! So simple yet so confounding! Thanks a ton! – PhD Jan 22 '12 at 1:40 If I understand you right, you should be able to simplify things a bit. Note that any combination involving a dominated alternative must itself be dominated; thus, you can first compute the Pareto frontier for the single alternatives, and then only consider combinations of those alternatives. (In fact, I suspect you can simplify this even further, but I'll have to think about it a bit more.) – Ilmari Karonen Jan 22 '12 at 13:14 Ps. Are your combinations sets or multisets? That is, can I include the same alternative more than once in a combination? (The question is interesting either way, but knowing which case you're primarily interested in would help in writing an answer.) – Ilmari Karonen Jan 22 '12 at 14:25 The combinations are sets. In my context including the same alternative twice makes no sense :) - it is either pursued/implemented/done or it isn't. – PhD Jan 22 '12 at 18:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 104, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9275965690612793, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/05/09/commutativity-in-series-ii/?like=1&source=post_flair&_wpnonce=bcc517a348
# The Unapologetic Mathematician

## Commutativity in Series II

We've seen that commutativity fails for conditionally convergent series. It turns out, though, that things are much nicer for absolutely convergent series. Any rearrangement of an absolutely convergent series is again absolutely convergent, and to the same limit.

Let $\sum_{k=0}^\infty a_k$ be an absolutely convergent series, and let $p:\mathbb{N}\rightarrow\mathbb{N}$ be a bijection. Define the rearrangement $b_k=a_{p(k)}$. Now given an $\epsilon>0$, absolute convergence tells us we can pick an $N$ so that any tail of the series of absolute values past that point is small. That is, for any $n\geq N$ we have

$\displaystyle\sum\limits_{k=n+1}^\infty\left|a_k\right|<\frac{\epsilon}{2}$

Now for $0\leq k\leq N$, the function $p^{-1}$ takes only a finite number of values (the inverse function exists because $p$ is a bijection). Let $M$ be the largest such value. Thus if $m>M$ we know that $p(m)>N$. Then for any such $m$ and any $d\geq1$ we have

$\displaystyle\sum\limits_{j=m+1}^{m+d}\left|b_j\right|=\sum\limits_{j=m+1}^{m+d}\left|a_{p(j)}\right|\leq\sum\limits_{k=N+1}^\infty\left|a_k\right|<\frac{\epsilon}{2}$

so the partial sums of the series of absolute values form a Cauchy sequence. Thus the tail of the series of the $b_j$, and thus the series itself, must converge absolutely.

Now a similar argument to the one we used when we talked about associativity for absolutely convergent series shows that the rearranged series has the same sum as the original.

This is well and good, but it still misses something. We can't handle reorderings that break up the order structure. For example, we might ask to add up all the odd terms, and then all the even terms. There is no bijection $p$ that handles this situation. And yet we can still make it work.

Unfortunately, I arrive in Maryland having left my references back in New Orleans. For now, I'll simply assert that for absolutely convergent series we can perform these more general rearrangements, though I'll patch this sometime.

Posted by John Armstrong | Analysis, Calculus
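As a quick numerical illustration of the dichotomy (my own sketch, not from the post, and only suggestive given floating point and truncation): rearranging a series into blocks of two positive terms followed by one negative term shifts the limit of the conditionally convergent alternating harmonic series, but leaves the absolutely convergent series $\sum_{k=0}^\infty (-1)^k/2^k = 2/3$ unchanged.

```python
# Numerical illustration only (my own sketch, not from the post).
from math import log

def block_sum(pos_terms, neg_terms, blocks):
    """Partial sum of the rearrangement: + + - , + + - , ..."""
    p, n, total = iter(pos_terms), iter(neg_terms), 0.0
    for _ in range(blocks):
        total += next(p) + next(p) + next(n)
    return total

N = 200000
# Alternating harmonic series: positives 1, 1/3, 1/5, ...; negatives -1/2, -1/4, ...
harm = block_sum((1 / (2*k + 1) for k in range(N)),
                 (-1 / (2*k + 2) for k in range(N)), 60000)
print(harm, 1.5 * log(2))   # both ~1.0397: the limit has moved away from log(2)

# Geometric series sum (-1)^k / 2^k: positives 1/4^k, negatives -(1/2)/4^k.
geo = block_sum((0.25**k for k in range(N)),
                (-0.5 * 0.25**k for k in range(N)), 50)
print(geo, 2/3)             # both ~0.6667: the sum is unchanged
```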
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 17, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9095591306686401, "perplexity_flag": "head"}