http://www.physicsforums.com/showthread.php?s=9cb9425338de8c4ec1c26c4456b3d67c&p=4187773
Physics Forums
## Lorentz transformation of y component for 4-momentum
I have 2 coordinate systems which move along the ##x,x'## axes. I have derived a Lorentz transformation for the ##x## component of momentum, which is one part of the 4-momentum vector ##p_\mu##. This is my derivation:
[itex]
\scriptsize
\begin{split}
p_x &= mv_x \gamma(v_x)\\
p_x &= \frac{m (v_x'+u)}{\left(1+v_x' \frac{u}{c^2}\right) \sqrt{1 - \left(v_x' + u \right)^2 / c^2 \left( 1+ v_x' \frac{u}{c^2} \right)^2}} \\
p_x &= \frac{m (v_x'+u) \left( 1+ v_x' \frac{u}{c^2} \right)}{\left(1+v_x' \frac{u}{c^2}\right) \sqrt{\left[c^2 \left( 1+ v_x' \frac{u}{c^2} \right)^2 - \left(v_x' + u \right)^2 \right] / c^2 }} \\
p_x &= \frac{m (v_x'+u)}{\sqrt{\left[c^2 \left( 1+ v_x' \frac{u}{c^2} \right)^2 - \left(v_x' + u \right)^2 \right] / c^2 }} \\
p_x &= \frac{m (v_x'+u)}{\sqrt{\left[c^2 \left( 1+ 2 v_x' \frac{u}{c^2} + v_x'^2 \frac{u^2}{c^4} \right) - v_x'^2 - 2 v_x' u - u^2 \right] / c^2 }} \\
p_x &= \frac{mv_x'+mu}{\sqrt{\left[c^2 + 2 v_x'u + v_x'^2 \frac{u^2}{c^2} - v_x'^2 - 2 v_x' u - u^2 \right] / c^2 }} \\
p_x &= \frac{mv_x'+mu}{\sqrt{\left[c^2 + v_x'^2 \frac{u^2}{c^2} - v_x'^2 - u^2 \right] / c^2 }} \\
p_x &= \frac{mv_x'+mu}{\sqrt{1 + v_x'^2 \frac{u^2}{c^4} - \frac{v_x'^2}{c^2} - \frac{u^2}{c^2} }} \\
p_x &= \frac{mv_x'+mu}{\sqrt{\left(1 - \frac{u^2}{c^2}\right) \left(1-\frac{v_x'^2}{c^2} \right)}} \\
p_x &= \gamma \left[mv_x' \gamma(v_x') + mu \gamma(v_x') \right] \\
p_x &= \gamma \left[mv_x' \gamma(v_x') + \frac{mc^2 \gamma(v_x') u}{c^2} \right] \\
p_x &= \gamma \left[p_x' + \frac{W'}{c^2} u\right]
\end{split}
[/itex]
I tried to derive the Lorentz transformation for momentum also in the ##y## direction, but I can't seem to get the relation ##p_y=p_y'## because in the end I can't get rid of ##2v_x'\frac{u}{c^2}## and ##\frac{v_y'^2}{c^2}##. Here is my attempt.
[itex]
\scriptsize
\begin{split}
p_y &= m v_y \gamma(v_y)\\
p_y &= \frac{m v_y'}{\gamma \left(1 + v_x' \frac{u}{c^2}\right) \sqrt{1 - v_y'^2/c^2\left( 1 + v_x' \frac{u}{c^2} \right)^2}}\\
p_y &= \frac{m v_y' \left( 1 + v_x' \frac{u}{c^2} \right)^2}{\gamma \left(1 + v_x' \frac{u}{c^2}\right) \sqrt{\left[c^2\left( 1 + v_x' \frac{u}{c^2} \right)^2 - v_y'^2\right]/c^2}}\\
p_y &= \frac{m v_y'}{\gamma \sqrt{\left[c^2\left( 1 + v_x' \frac{u}{c^2} \right)^2 - v_y'^2\right]/c^2}}\\
p_y &= \frac{m v_y'}{\gamma \sqrt{\left[c^2\left( 1 + 2 v_x' \frac{u}{c^2} + v_x'^2 \frac{u^2}{c^4}\right) - v_y'^2\right]/c^2}}\\
p_y &= \frac{m v_y'}{\gamma \sqrt{\left[c^2 + 2 v_x' u + v_x'^2 \frac{u^2}{c^2} - v_y'^2\right]/c^2}}\\
p_y &= \frac{m v_y'}{\gamma \sqrt{1 + 2 v_x' \frac{u}{c^2} + v_x'^2 \frac{u^2}{c^4} - \frac{v_y'^2}{c^2}}}\\
\end{split}
[/itex]
This is where it ends for me, and I would need someone to point the way and show me how I can get ##p_y = p_y'##. I haven't seen any derivation like this (for the ##y## component of momentum) on the internet.
Thank you.
This seems awfully complicated; plus, you are starting with an incorrect assumption about p_y. The simplest way to see how 4-momentum transforms is to realize that it is a 4-vector, just like the "position" (t, x, y, z). Its components are (E, p^x, p^y, p^z), and they transform the same way any other 4-vector does. That is, you have, for relative motion in the x direction (and in units where c = 1),
$$E' = \gamma \left( E - v p^x \right)$$ $$p'^x = \gamma \left( p^x - v E \right)$$ $$p'^y = p^y$$ $$p'^z = p^z$$
Which corresponds to
$$t' = \gamma \left( t - v x \right)$$ $$x' = \gamma \left( x - v t \right)$$ $$y' = y$$ $$z' = z$$
The transformation for the momentum 4-vector can be derived the same way the transformation for the position 4-vector is derived. The easiest way is to start with the invariance of rest mass: $m^2 = E^2 - (p^x)^2 - (p^y)^2 - (p^z)^2 = E'^2 - (p'^x)^2 - (p'^y)^2 - (p'^z)^2$, which corresponds to the invariance of the spacetime interval for the position 4-vector.
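A quick consistency check (added here for illustration, in the same units with c = 1): the quoted transformation really does preserve the invariant mass, since
$$E'^2 - (p'^x)^2 = \gamma^2 \left[ \left( E - v p^x \right)^2 - \left( p^x - v E \right)^2 \right] = \gamma^2 \left( 1 - v^2 \right) \left[ E^2 - (p^x)^2 \right] = E^2 - (p^x)^2$$
and the y and z components are untouched, so $m^2 = E^2 - (p^x)^2 - (p^y)^2 - (p^z)^2$ comes out the same in both frames.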
$p_x = m v_x \gamma(v_x)$ is wrong. It should be $p_x = m v_x \gamma \left(\sqrt{v_x^2 + v_y^2 + v_z^2} \right)$ (and similarly for the y and z components).
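For illustration, here is how the corrected starting point closes the gap in the opening post's ##y## derivation (this step is not spelled out in the thread; it assumes the standard velocity-transformation formulas, the same ones used for ##v_x## in the opening post, together with the standard identity ##\gamma(v) = \gamma(u)\,\gamma(v')\left(1 + \frac{u v_x'}{c^2}\right)## for the composition of Lorentz factors, where ##v## and ##v'## are the full speeds in the two frames):
$$p_y = m v_y \gamma(v) = m\,\frac{v_y'}{\gamma(u)\left(1 + \frac{u v_x'}{c^2}\right)}\,\gamma(u)\,\gamma(v')\left(1 + \frac{u v_x'}{c^2}\right) = m v_y' \gamma(v') = p_y'$$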
Quote by DrGreg $p_x = m v_x \gamma(v_x)$ is wrong. It should be $p_x = m v_x \gamma \left(\sqrt{v_x^2 + v_y^2 + v_z^2} \right)$ (and similarly for the y and z components).
How can this be wrong if I get a good result in my first derivation?
I am sorry, I can't just believe the statement that 4-momentum transforms just like spacetime. I would need a proof for this.
Quote by 71GA I am sorry, I can't just believe the statement that 4-momentum transforms just like spacetime. I would need a proof for this.
Do you believe the last equation I wrote down in my last post? That's the standard energy-momentum relation in SR. Once you have that, deriving the transformation law for the components of 4-momentum is exactly the same as deriving the standard Lorentz transformation from the invariance of the spacetime interval.
Quote by 71GA I am sorry, I can't just believe the statement that 4-momentum transforms just like spacetime. I would need a proof for this.
$$p^\alpha = m\frac{dx^\alpha}{d\tau} = \frac{dt}{d\tau} \, m\frac{dx^\alpha}{dt} = \left( \gamma mc, \gamma mv^x, \gamma mv^y, \gamma mv^z\right)$$
$dx^\alpha$ transforms via the Lorentz transform. $\tau$ is invariant.
All 4-vectors transform via the Lorentz transform; that's what makes them 4-vectors.
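In particular (spelling out the step that answers the original question): since the boost is along ##x##, ##dy' = dy##, and ##d\tau## is the same in both frames, so
$$p'^y = m\frac{dy'}{d\tau} = m\frac{dy}{d\tau} = p^y$$
and likewise for the ##z## component.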
Quote by PeterDonis Do you believe the last equation I wrote down in my last post? That's the standard energy-momentum relation in SR. Once you have that, deriving the transformation law for the components of 4-momentum is exactly the same as deriving the standard Lorentz transformation from the invariance of the spacetime interval.
I assume you mean:
[itex]
m^2 = E^2 - (p^x)^2 - (p^y)^2 - (p^z)^2 = E'^2 - (p'^x)^2 - (p'^y)^2 - (p'^z)^2
[/itex]
Why should I believe that? How exactly does that prove ##p_y = p_y'## if the relative speed ##u## between coordinate systems ##S## and ##S'## is in the direction of the ##x##, ##x'## axes? To translate momentum or energy into a different frame I referred to this site's section "Transforming Energy and Momentum to a New Frame".
Following this site's suggestions I have been able to derive 2 equations, which are (and this is half of the 4-momentum):
[itex]
\begin{split}
p_x &= \gamma \left[p_x' + \frac{W'}{c^2} u\right]\\
W &= \gamma \left[ W' + p' u \right]
\end{split}
[/itex]
But when I tried deriving ##p_y = p_y'## or ##p_z = p_z'## I couldn't prove them. And this is weird. This method should work for ##p_y## and ##p_z## just as it worked fine for ##W## and ##p_x##.
Quote by 71GA Why should I believe that?
Because it's a way of expressing the invariance of rest mass, which is an important part of SR. Are you saying you don't believe SR? What, exactly, are the things you're willing to accept as true in order to start your derivation of the Lorentz transformation?
Btw, if you look at the site you linked to, it has a formula that's equivalent to the one I gave: they write $E^2 - c^2 p^2 = m_0^2 c^4$, which is what I wrote if you adopt units in which c = 1 (and I wrote m instead of m_0), and recognize that this equation holds in every frame, so it holds for E' and p' as well as E and p (in fact they write this explicitly further down the page). In fact, the derivation they go on to do from this is exactly what I was talking about when I said you can derive the LT for 4-momentum from the invariance of rest mass.
Quote by 71GA Following this site's suggestions I have been able to derive 2 equations, which are (and this is half of the 4-momentum): $\begin{split} p_x &= \gamma \left[p_x' + \frac{W'}{c^2} u\right]\\ W &= \gamma \left[ W' + p' u \right] \end{split}$
Yes, these look right, they're the same transformation equations that I wrote down, except that they write W instead of E and u instead of v.
Quote by 71GA This method should work for ##p_y## and ##p_z## just as it worked fine for ##W## and ##p_x##.
Why do you think that? The relative motion is in the x direction, so the x direction is different than the y and z directions. The y and z momentum does not change at all for relative motion in the x direction. The y and z *velocities* change, because they are affected by time dilation, but the *momentum* in the y and z directions doesn't change.
Quote by 71GA How can this be wrong if I get a good result in my first derivation? I am sorry, I can't just believe the statement that 4-momentum transforms just like spacetime. I would need a proof for this.
I'm not sure what kind of proof you are looking for. If for a slower-than-light particle, you define proper time $\tau$ to be the quantity $\int \sqrt{1-\dfrac{v^2}{c^2}} dt$, then you can prove that the quantity $P^\mu = m \dfrac{dx^\mu}{d\tau}$ is a 4-vector, since $\tau$ is a scalar. It's easy to prove that in the limit as $v/c \rightarrow 0$, the spatial part goes to the Newtonian momentum. Is it really momentum? Well, what's your definition of momentum?
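Spelling out the limit mentioned above, for illustration: the spatial part of $P^\mu$ is $\gamma(v)\, m v^i$, and expanding the Lorentz factor gives
$$\frac{m v^i}{\sqrt{1 - v^2/c^2}} = m v^i \left( 1 + \frac{1}{2}\frac{v^2}{c^2} + \dots \right) \;\longrightarrow\; m v^i \qquad \text{as } v/c \to 0 .$$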
I presume the derivation in my opening post is not going to work, so I would like to try it your way. I decided I will follow this site in combination with this and this video (this is for reference, in case anyone else needs it). Now that I have decided to follow what all of you have been saying, I stumbled upon a problem. You see, our professor stated that the invariant interval is ##\Delta s^2 = \Delta x^2 - (c \Delta t)^2## if we omit dimensions ##y## and ##z##. So I presume for 4-D it would be ##\Delta s^2 = \Delta x^2 + \Delta y^2 + \Delta z^2 - (c \Delta t)^2##.
QUESTION1: The last equation for the invariant interval isn't like on most sites, as it has a negative time component and positive space components while yours is vice versa. Why is that?
QUESTION2: How do I derive the 4-momentum if I start with only the 3 equations below?
$\begin{split} \Delta s^2 &= \Delta x^2 + \Delta y^2 + \Delta z^2 - (c \Delta t)^2\\ p &= mv \gamma(v)\\ E &= mc^2 \gamma(v) = E_k + mc^2 \end{split}$
I am sorry for such questions, but our professor didn't use standard notations and therefore I am having a hard time now.
Quote by 71GA You see, our professor stated that the invariant interval is ##\Delta s^2 = \Delta x^2 - (c \Delta t)^2## if we omit dimensions ##y## and ##z##. So I presume for 4-D it would be ##\Delta s^2 = \Delta x^2 + \Delta y^2 + \Delta z^2 - (c \Delta t)^2##.
Yes, with the professor's sign convention (see below), this is correct.
Quote by 71GA QUESTION1: The last equation for the invariant interval isn't like on most sites, as it has a negative time component and positive space components while yours is vice versa. Why is that?
It's just a different sign convention. The professor's equation that you wrote above makes s^2 negative for a timelike interval and positive for a spacelike interval. The other sign convention makes s^2 positive for a timelike interval and negative for a spacelike interval. Some people prefer one, some the other. The physics is the same either way; you just have to keep track of which sign convention you're using and be consistent.
Quote by 71GA QUESTION2: How do I derive the 4-momentum if I start with only the 3 equations below?
We have
[tex]
E^2 = m^2 c^4 \gamma^2 \\
p^2 = m^2 v^2 \gamma^2
[/tex]
so
$$E^2 - p^2 c^2 = m^2 c^4 \gamma^2 \left( 1 - \frac{v^2}{c^2} \right) = m^2 c^4$$
which is the energy-momentum relation I wrote down earlier. If you want to expand out p^2 by components, you would have
$$E^2 - p_x^2 c^2 - p_y^2 c^2 - p_z^2 c^2 = m^2 c^4$$
where
[tex]
p_x = m v_x \gamma \\
p_y = m v_y \gamma \\
p_z = m v_z \gamma
[/tex]
Quote by PeterDonis Yes, with the professor's sign convention
So what my professor does is correct? Please answer me with YES/NO.
Quote by PeterDonis It's just a different sign convention. The professor's equation that you wrote above makes s^2 negative for a timelike interval and positive for a spacelike interval.
What bothers me is that the convention our professor used matches the hyperbola equations, and it seems to me yours doesn't. Let me explain why. First please take a look at the Minkowski diagram where there are 2 hyperbolas.
I have figured out that I could connect the invariant interval to the semimajor axis of the hyperbolas in the picture. I started out from the basic hyperbola equation (I'll do this for the hyperbola with ##a=b=2## in the picture - I can state this as: the asymptotes of the light cone X are perpendicular to each other).
[itex]
\begin{split}
\frac{x^2}{a^2} - \frac{y^2}{b^2} &= 1\\
\frac{x^2}{2^2} - \frac{y^2}{2^2} &= 1\\
x^2 - y^2 &= 2^2\\
\end{split}
[/itex]
And then I figured out that the axis we generally label ##y## is actually the ##ct## axis in the Minkowski diagram, while the axis we generally label ##x## stays the same in the Minkowski diagram. So I get an equation where the square root of the left-hand side represents the invariant interval ##\Delta s##.
[itex]
\begin{split}
x^2 - (ct)^2 &= s^2\\
\Delta s^2 &= \Delta x^2 - (c \Delta t)^2 \\
\end{split}
[/itex]
This equation corresponds to the convention below that my professor used.
[itex]
\Delta s^2 = \Delta x^2 + \Delta y^2 + \Delta z^2 - (c \Delta t)^2
[/itex]
I wonder what would change in my picture if I used your convention below
[itex]
\Delta s^2 = - \Delta x^2 - \Delta y^2 - \Delta z^2 + (c \Delta t)^2
[/itex]
I think your convention comes from a different basic hyperbola equation ##\frac{y^2}{b^2} - \frac{x^2}{a^2} = 1##, and I would therefore get hyperbolas which open to the left/right instead of ones that open up/down? Please correct me if I am wrong. This is what I know about the invariant interval and the conventions, but I don't know if I am correct.
Quote by PeterDonis $E^2 = m^2 c^4 \gamma^2 \\ p^2 = m^2 v^2 \gamma^2\\ E^2 - p^2 c^2 = m^2 c^4 \gamma^2 \left( 1 - \frac{v^2}{c^2} \right) = m^2 c^4$
I understand this, but the last equation is derived using your convention (scalar is positive and vectors all negative). How could I write it down for my convention? Is it like this?
[itex]
p^2 c^2 - E^2 = -m^2 c^4\\
(pc)^2 - E^2 = -(mc^2)^2\\
[/itex]
Well, I can see that the invariant is ##mc##, but it is negative! Is this OK? (Here it is also negative.) I ask this because the spacetime invariant ##\Delta s## was positive. If I divide the above equation by ##c^2## I get
[itex]
p^2 - \frac{E^2}{c^2} = -(mc)^2\\
[/itex]
At this point you would probably write the left-hand side of the equation using components, state that it is the dot product of the 4-momentum vector with itself, and therefore conclude that the 4-momentum vector is:
[itex]
\begin{split}
p_x^2 + p_y^2 +p_z^2 - \frac{E^2}{c^2} = p^\mu \cdot p^\mu\Longrightarrow \boxed{p^\mu = (p_x, p_y, p_z, \frac{E}{c})}
\end{split}
[/itex]
QUESTION1:
How do you know that ##p_x^2 + p_y^2 +p_z^2 - \frac{E^2}{c^2}## is a dot product of a 4-vector with itself?
QUESTION2:
Should the 4-momentum vector be ##p^\mu = (p_x, p_y, p_z, -\frac{E}{c})## instead of ##p^\mu = (p_x, p_y, p_z, \frac{E}{c})##, and why don't we usually write down a minus sign here?
[Edit: Added some clarifications.]
Quote by 71GA So what my professor does is correct? Please answer me with YES/NO.
YES
Quote by 71GA What bothers me is that the convention our professor used matches the hyperbola equations, and it seems to me yours doesn't.
There are two sets of hyperbolas. One set corresponds to a spacelike s^2--i.e., an interval where the spacelike components (x^2 + y^2 + z^2) are larger in magnitude than the timelike component (t^2). The hyperbolas you drew are from that set.
The other set corresponds to a timelike s^2--i.e., an interval where the timelike component is larger in magnitude than the spacelike components. This set of hyperbolas would indeed, as you said, spread to the left/right instead of up/down.
It's important to realize that the sign convention for s^2--i.e., whether you write the interval the way your professor did, with t^2 negative, or the way I did, with t^2 positive--is independent of the above distinction. You can write hyperbolas that spread left/right instead of up/down with the t^2 term negative; you just have to also write a negative s^2, which, as I said before, corresponds to a timelike interval with that sign convention. For example, considering just the t and x coordinates, we could write:
$$x^2 - c^2 t^2 = - 1$$
which would correspond to a hyperbola spreading left/right. This isn't the normal way of writing hyperbolas in high school math class, but it works.
Quote by 71GA the last equation is derived using your convention (scalar is positive and vectors all negative). How could I write it down for my convention? Is it like this? $p^2 c^2 - E^2 = -m^2 c^4\\ (pc)^2 - E^2 = -(mc^2)^2\\$ Well, I can see that the invariant is ##mc##, but it is negative! Is this OK?
You have just opened a different can of worms, but hopefully we can re-can them without too much trouble.
Consider intervals first; as we've seen, there are three different kinds (though we haven't talked much about the third yet):
(1) Timelike intervals, which have negative s^2 in your professor's sign convention and positive s^2 in mine;
(2) Spacelike intervals, which have positive s^2 in your professor's sign convention and negative s^2 in mine;
(3) Null intervals, which have s^2 = 0 (obviously the sign convention doesn't matter here).
These three kinds of intervals describe three physically different kinds of things:
Timelike intervals describe "lengths of time"--put another way, a curve with a timelike s^2 is a possible worldline for an ordinary observer with nonzero rest mass, and the length of the curve (i.e., s) is the elapsed time experienced by the observer.
Spacelike intervals describe ordinary "lengths in space"--put another way, a curve with a spacelike s^2 is a possible "curve in space at some instant of time" for some observer, and the length of the curve (s) is the distance measured by that observer.
Null intervals describe light rays--a curve with null s^2 is a possible worldline for a light ray.
Now consider the corresponding 3 kinds of 4-vectors with energy-momentum components:
(1) Timelike 4-momentum, which has positive m^2 (I'll leave out the factors of c here, we're now working in units where c = 1) in my sign convention.
(2) Spacelike 4-momentum, which has negative m^2 in my sign convention.
(3) Null 4-momentum, which has zero m^2.
A timelike 4-momentum describes the energy and momentum of a timelike object--i.e., one with nonzero rest mass (the rest mass is just the length, m, of the 4-momentum vector) which moves on a worldline with a timelike s^2.
A null 4-momentum describes the energy and momentum of a light ray, which moves on a worldline with null s^2.
A spacelike 4-momentum would then describe a hypothetical "object" (the usual name for these objects is "tachyons") which moves on a spacelike worldline, i.e., one with spacelike s^2. You can find a *lot* of articles about tachyons by Googling, but for a quick overview I recommend the Usenet Physics FAQ's article:
http://math.ucr.edu/home/baez/physic.../tachyons.html
The fact that we normally view energy as a real number, with a positive square, is why we normally adopt my sign convention, with m^2 positive for timelike 4-momentum, when describing 4-momentum vectors, even when we are using your professor's convention for intervals (with s^2 negative for timelike intervals). And hopefully that gets the worms most of the way back into the can for now.
I'm pressed for time right now so I'll defer responding to the two questions at the end of your post, since they raise some other issues we haven't touched on yet.
Now for those two questions:
Quote by 71GA QUESTION1: How do you know that ##p_x^2 + p_y^2 +p_z^2 - \frac{E^2}{c^2}## is a dot product of a 4-vector with itself?
Because the dot product of a vector with itself gives its squared length, and that's what you just wrote the formula for.
You may be confused because you're used to seeing a dot product written with all plus signs, and there's that minus sign in front of E^2. The dot product you're used to seeing is for ordinary Euclidean space, where all squared lengths are positive. The more technical way of saying this is that the metric of ordinary Euclidean space is "positive definite": the squared length of any nonzero vector is a positive number. The "metric" in ordinary Euclidean space is just the Pythagorean theorem in three dimensions: $s^2 = x^2 + y^2 + z^2$. And of course this is just the ordinary dot product of the vector (x, y, z) with itself.
In spacetime, as we have seen, we can have nonzero vectors with positive, negative, or zero squared length. (Your professor's sign convention makes spacelike squared lengths positive, which is natural when you are thinking about the analogy with Euclidean space; that's why it's so common.) So the concept of "dot product" needs to be generalized to cover this case. The way we generalize it is simple: the dot product is computed using the metric, meaning the analogue of the Pythagorean formula for spacetime. So the interval we've been looking at, $s^2 = x^2 + y^2 + z^2 - c^2 t^2$, is just the dot product of the spacetime "position vector" with itself, using the spacetime metric, in the same way as the ordinary Euclidean distance computed using the Pythagorean formula is the dot product of the spatial position vector with itself.
The energy-momentum 4-vector works the same way; in fact, *any* 4-vector in spacetime works the same way (just as we can compute the dot product of any ordinary 3-vector in Euclidean space the same way we did above for the position vector). That's why there's the minus sign in front of E^2. The sign convention (minus sign in front of E^2, instead of in front of the p^2 components) is something we already talked about, but I'll go into it a bit more in the answer to your other question below.
Quote by 71GA QUESTION2: Should the 4-momentum vector be ##p^\mu = (p_x, p_y, p_z, -\frac{E}{c})## instead of ##p^\mu = (p_x, p_y, p_z, \frac{E}{c})##, and why don't we usually write down a minus sign here?
The short answer is that, as you've written it, the second form is right and the first is not. However, there's a deeper issue here which is worth going into.
The proper way of writing a 4-vector, the things we've been talking about up to now, is with the index "upstairs", as you wrote it. So the ordinary "position vector" would be
$$x^{\mu} = (x, y, z, t)$$
Notice that there is no minus sign in front of the t. Similarly, the energy-momentum 4-vector would be
$$p^{\mu} = (p^x, p^y, p^z, \frac{E}{c})$$
with no minus sign in front of the E. (Note also that I wrote the x, y, z on the p components "upstairs", not "downstairs" as you wrote them. We'll come back to that.)
You will also, however, see objects written with the index "downstairs". For example, you might see something like this:
$$p_{\mu} = (p_x, p_y, p_z, - \frac{E}{c})$$
with a minus sign in front of the E. What's going on here?
The answer is that the object with the "downstairs" index is not a vector; it's a different kind of object, usually called a "1-form" or "covector". You can read some about it here:
http://en.wikipedia.org/wiki/Linear_functional
We don't need to go into a lot of detail about 1-forms; the key point is that, as long as we have a metric (which we do here), there is a 1-to-1 mapping between 1-forms and vectors, using the metric, which is written this way:
$$p_{\mu} = \eta_{\mu \nu} p^{\nu}$$
That $\eta_{\mu \nu}$ is the "metric tensor", which for our purposes here you can just think of as a 4 x 4 matrix with (1, 1, 1, -1) along the diagonal and 0 everywhere else, using your professor's sign convention. The metric tensor is also what we use to form the dot product of vectors, so we can write the energy-momentum relation as the dot product of the 4-momentum vector with itself thus:
$$\eta_{\mu \nu} p^{\mu} p^{\nu} = p^1 p^1 + p^2 p^2 + p^3 p^3 - p^0 p^0 = ( p^x )^2 + ( p^y )^2 + ( p^z )^2 - \left( \frac{E}{c} \right)^2 = - m^2 c^2$$
where we have used a very useful convention called the "Einstein summation convention", in which any index that is repeated (i.e., it appears both "upstairs" and "downstairs") is summed over, with values (1, 2, 3, 0) corresponding to the (x, y, z, t) components of the vector. I used the same convention in writing the mapping from vectors to 1-forms, but since the metric tensor is diagonal, the sum for each component collapses to only one term, and we have
[tex]
p_1 = \eta_{11} p^1 = p^x = p_x \\
p_2 = \eta_{22} p^2 = p^y = p_y \\
p_3 = \eta_{33} p^3 = p^z = p_z \\
p_0 = \eta_{00} p^0 = - \frac{E}{c}
[/tex]
That's where the minus sign comes from in the 1-form. Also, as you can see, the spatial momentum components are the same for the vector and the 1-form, so it doesn't really matter whether we write them "upstairs" or "downstairs" if we are using your professor's sign convention. (As an exercise, though, you might want to go back and re-write all these formulas using my sign convention. The first thing to re-write is the metric tensor: what does it look like with my sign convention?)
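A small numerical illustration of the index lowering and the dot product above (added here as a sketch; the 4-momentum values are made up, and units with c = 1 and the professor's (+, +, +, -) sign convention are assumed):

```python
import numpy as np

# Minkowski metric in the professor's sign convention (+, +, +, -),
# with components ordered as (x, y, z, t): a 4 x 4 diagonal matrix.
eta = np.diag([1.0, 1.0, 1.0, -1.0])

# A made-up 4-momentum with the index upstairs, p^mu = (p^x, p^y, p^z, E),
# for a particle of mass m = 2 moving with 3-velocity v (units with c = 1).
m = 2.0
v = np.array([0.3, 0.4, 0.0])
gamma = 1.0 / np.sqrt(1.0 - v @ v)
p_up = np.append(gamma * m * v, gamma * m)    # (gamma m v, gamma m)

# Lowering the index with the metric: p_mu = eta_{mu nu} p^nu.
p_down = eta @ p_up
print("p^mu =", p_up)    # spatial components unchanged by the lowering ...
print("p_mu =", p_down)  # ... while the energy component picks up a minus sign

# The dot product eta_{mu nu} p^mu p^nu equals -m^2, i.e. -(mc)^2 with c = 1.
print("p.p  =", p_up @ eta @ p_up)   # approximately -4.0
print("-m^2 =", -m**2)
```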
This is a lot to digest so I'll stop now. Please feel free to ask further questions when you've looked it over.
I don't mean to be rude, but being a touch more humble will help you along your academic road. I see that you're using modern physics in your class; that book barely goes into any depth with SR. To properly master SR you need to learn about covariant and contravariant transformations, and thus basically tensor analysis. Transforming between inertial frames becomes easy once you treat things with four-vectors and one-forms: just drop a lambda matrix in front of your vector and poof, transformed. The thing about four-vectors is that they are the same geometrical object in every inertial frame, while three-vectors are not. Understand this: the components of a vector can change, but a four-vector is the same in every single inertial frame.
Quote by HomogenousCow I don't mean to be rude, but being a touch more humble will help you along your academic road. I see that you're using modern physics in your class; that book barely goes into any depth with SR. To properly master SR you need to learn about covariant and contravariant transformations, and thus basically tensor analysis. Transforming between inertial frames becomes easy once you treat things with four-vectors and one-forms: just drop a lambda matrix in front of your vector and poof, transformed. The thing about four-vectors is that they are the same geometrical object in every inertial frame, while three-vectors are not. Understand this: the components of a vector can change, but a four-vector is the same in every single inertial frame.
I will definitely take this on board. Thank you. At the moment I am still studying the answer above, so I'll keep in touch in case I get more questions.
The central idea of SR is that space and time are bound together into one mathematical model, spacetime. In the same sense that in classical mechanics your y and your x mean nothing to nature, your t and your x mean nothing to nature; they are simply different viewpoints of a single space-time (or, more mathematically, simply different inertial frames that we choose to work in). We use tensors because they embody this concept: the components of a tensor transform with a matrix that is the inverse of the matrix that their bases transform through, hence when you write a tensor out in a basis the coordinate transform matrices multiply and become the identity matrix, the "1" of multilinear algebra.
Many would put this in more mathematical terms and say that the invariance of the space-time interval is the centerpiece of SR; this is true in a sense. The convention we choose for it is not important; whether you use (+---) or (-+++), the important thing is that it be kept invariant of which inertial frame we are in. Even more fundamentally, the inner product ds^2=-dt^2+dS^2 is actually just the action of the metric tensor on the same four-vector twice, measuring the "length" of that four-vector. This is more fundamental because when you move on to GR, you learn that the metric is different for different systems, depending on the energy and momentum flow in that region of space-time. This leads to many other strange qualities of relativistic mechanics:
- Because the metric is, in general, position dependent, we can no longer think of vectors as arrows spanning a length in space-time, but rather as entities that exist at each point of space-time.
- This leads to the fact that relative velocities are meaningless in GR; if you want to compare two four-velocities, you need to move them to one point in space-time and compare them. However, as it turns out, in curved space (where the metric is a function of space and time) the way you "slide" the two vectors affects them, and hence there is no way to compare two vectors that inhabit different points in space-time.
- There are no Lorentz frames: due to the fact that gravity is not something that can be isolated, in the sense that you cannot shield a particle from gravity, there truly exist no inertial frames in curved space-time; everyone is in free fall.
p.s. Yes, Physics does take pleasure in making your integrals and PDEs harder and nastier. If I know one thing it's that.
http://mathhelpforum.com/discrete-math/138796-counting-permutations-combinations.html
# Thread:
1. ## Counting (Permutations/Combinations)
A group of 10 is going to be selected from a pool of 8 men and 8 women...
In how many ways can the selection be carried out if:
a) we choose 10 people at random?
b) there must be 5 men and 5 women?
c) there must be more women than men?
Also, if you have time, can you answer:
- How many strings of 6 decimal digits have exactly three digits that are 4's?
I suck at these problems and help would be great! Thank you!
2. Originally Posted by achua7
A group of 10 is going to be selected from a pool of 8 men and 8 women...
In how many ways can the selection be carried out if:
a) we choose 10 people at random?
b) there must be 5 men and 5 women?
c) there must be more women than men?
Also, if you have time, can you answer:
- How many strings of 6 decimal digits have exactly three digits that are 4's?
I suck at these problems and help would be great! Thank you!
You only need to know that there are $\binom{n}{k}$ ways to choose a k-element subset from an n-element set.
Thus you get
a) $\binom{8+8}{10}=\binom{16}{10}=8'008$
b) $\binom{8}{5}\cdot\binom{8}{5}=\binom{8}{5}^2=3'136$
c) Let m be the number of men chosen; the corresponding number of women must then be 10-m. To satisfy the condition that m<10-m, m can be at most 4, and since only 8 women are available, 10-m can be at most 8, i.e. m is at least 2; so m can only assume the values m=2, 3, or 4. Hence, by summing all these cases we get
$\sum_{m=2}^4\binom{8}{m}\cdot\binom{8}{10-m}=\binom{8}{2}\binom{8}{8}+\binom{8}{3}\binom{8}{7}+\binom{8}{4}\binom{8}{6}=2'436$
As to the number of strings of digits (not numbers) of length 6 that contain exactly 3 times the digit 4: you can choose the positions of the 3 digits that are 4 in $\binom{6}{3}$ ways, and each of these can be combined with one of the $9^3$ ways to choose the remaining 3 digits, each different from 4.
Thus you get $\binom{6}{3}\cdot 9^3=14'580$ possibilities.
Note that there is a difference depending on whether we are talking here of the number of strings of length 6 or of numbers with 6 digits, because in the latter case, the first digit would not be allowed to be 0.
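A short sanity check of the counts above (added as an illustration; it just re-evaluates the binomial coefficients with Python's standard library, for the pool of 8 men and 8 women stated in the question):

```python
from math import comb

# a) any 10 people out of 8 + 8 = 16
print(comb(16, 10))                                            # 8008

# b) exactly 5 men and 5 women
print(comb(8, 5) * comb(8, 5))                                 # 3136

# c) more women than men: m men and 10 - m women, with m = 2, 3, 4
print(sum(comb(8, m) * comb(8, 10 - m) for m in range(2, 5)))  # 2436

# strings of 6 decimal digits containing exactly three 4s
print(comb(6, 3) * 9**3)                                       # 14580
```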
http://math.stackexchange.com/questions/206843/big-rudin-exercise-3-26-which-integral-is-larger?answertab=active
# Big Rudin Exercise 3.26 - Which integral is larger
This is exercise 3.26 in Rudin's Real & Complex Analysis:
If $f$ is a positive measurable function on $[0,1]$, which is larger, $$\int_0^1 f(x) \log f(x) \, dx$$ or $$\int_0^1 f(s) \, ds \int_0^1 \log f(t) \, dt$$
I tried a bunch of functions and always got the first to be larger, which suggests that Hölder's inequality won't help here (at least not a direct application). I couldn't find an example that made the second larger. I'm stuck otherwise.
(This is self-study, not homework)
Clarification: The integral here is the Lebesgue integral. The only answer so far is only applicable to Riemann integrable functions.
-
Can you prove the case of simple function first? – ziyuang Oct 6 '12 at 0:26
## 2 Answers
The function $x\mapsto x\log x$ is convex on $(0,\infty)$, as its second derivative is positive. Thus by Jensen's inequality,
$$\int_0^1 f(t)\log f(t) dt \geq \int_0^1 f(t) dt \log\left( \int_0^1 f(t) dt \right) .$$
The function $x\mapsto \log x$ is concave, so another application of Jensen's inequality yields
$$\log\left( \int_0^1 f(t) dt \right) \geq \int_0^1\log f(t) dt .$$
Combining these two inequalities proves the result.
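A quick numerical illustration of the two inequalities (added as a sketch, not part of the original answer; it uses one arbitrary positive $f$ and a crude midpoint rule, so it only shows the direction of the inequalities rather than proving anything):

```python
import numpy as np

def f(x):
    # an arbitrary positive, well-behaved choice of f on [0, 1]
    return np.exp(3 * x - 1)

n = 100_000
x = (np.arange(n) + 0.5) / n      # midpoints of a uniform grid on [0, 1]
dx = 1.0 / n

int_f     = np.sum(f(x)) * dx                  # integral of f
int_flogf = np.sum(f(x) * np.log(f(x))) * dx   # integral of f log f
int_logf  = np.sum(np.log(f(x))) * dx          # integral of log f

# The chain of inequalities used in the answer above:
print(int_flogf, ">=", int_f * np.log(int_f), ">=", int_f * int_logf)
print(int_flogf >= int_f * np.log(int_f) >= int_f * int_logf)   # True
```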
-
I don't see how your first inequality follows from the convexity of $x\mapsto x\log x$ and Jensen's inequality. – Christian Blatter Oct 9 '12 at 20:11
Let $\phi(x) = x\log x$. The left side is the integral of $\phi\circ f$, the right side is $\phi$ of the integral of $f$. – user15464 Oct 10 '12 at 0:08
Can you explain why my first inequality does not follow from Jensen's inequality? It still looks to me like it is a direct application of the statement of the inequality. – user15464 Oct 10 '12 at 12:26
You are right. Sorry for the inconvenience I have caused. – Christian Blatter Oct 10 '12 at 13:42
$$\int_0^1f(t)\,dt=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n f\left(\frac{i}{n}\right)$$ So in order to prove the inequality $$\int_0^1 f(x) \log f(x) dx \geq \int_0^1f(s)ds \int_0^1\log f(t)dt$$ it is adequate to show $$\frac{1}{n}\sum_{i=1}^n f\left(\frac{i}{n}\right) \log f\left(\frac{i}{n}\right) \geq \frac{1}{n}\sum_{j=1}^n f\left(\frac{j}{n}\right) \cdot \frac{1}{n}\sum_{k=1}^n\log f\left(\frac{k}{n}\right)$$
Since $\log f(x)$ increases as $f(x)$ increases, we can apply Chebychev's inequality to give $$\sum_{j=1}^n f\left(\frac{j}{n}\right) \cdot \sum_{k=1}^n\log f\left(\frac{k}{n}\right) \leq n \sum_{i=1}^n f\left(\frac{i}{n}\right) \log f\left(\frac{i}{n}\right)$$ from which the required result follows immediately.
-
This assumes the function is Riemann integrable, no? The exercise is about measurable functions and Lebesgue integral. – PeterM Oct 4 '12 at 8:39
Hmm, interesting. I'm trying to think what sort of non-Riemann-integrable function would make the inequality come out the opposite way. I need to think about this some more. But I'm sure that somewhere in the middle of it all, you'll use Chebychev's inequality or something equivalent. I might replace this answer with something different if I manage to work it out. – user22805 Oct 4 '12 at 8:52
I guess what my (partial) answer does tell us is that if there IS a counterexample to this inequality, then it's not almost-everywhere equal to any Riemann-integrable function. Which means we have to look for something REALLY peculiar. I think the right way to go might be to think in terms of finite linear combinations of indicator functions and apply Chebychev's inequality to the co-efficients. But I'm not sure how to use the fact that $\log x$ is monotonically non-decreasing (which surely we must have to use). – user22805 Oct 4 '12 at 18:57
Thanks for your insight. BTW, Chebychev's inequality hasn't been introduced in the text so far. – PeterM Oct 6 '12 at 1:38
http://mathoverflow.net/questions/30072/roots-of-sum-of-two-polynomials/30086
## roots of sum of two polynomials
I believe that there is no general theory for finding the roots of a sum of polynomials. In my case I have $$P_{n}(x)+AQ_{n}(x).$$ I am wondering how the roots of this sum depend on $A$.
-
What is assumed about $P_n$, $Q_n$ and their roots? – Andrey Rekalo Jun 30 2010 at 16:43
Highly non-trivial techniques to do this were developed by Goldstein and Schlag in math.uchicago.edu/~schlag/papers/gaps.pdf and math.uchicago.edu/~schlag/papers/equi.pdf . They had quite a specific problem in mind, so this might not be too helpful for you. The buzzword there is "Jensen formula". – Helge Jun 30 2010 at 19:43
in my case $P_n$ and $Q_n$ are known recursively defined polynomials of degree $n$, positive, with $A>0$. So all roots are complex – vilvarin Jul 1 2010 at 17:28
## 2 Answers
If they are complex polynomials or can be treated as such, then you could apply Rouche's theorem, where the location of the zeros is determined by the dominant polynomial within the sum. ("Walk the dog on the leash")
Possibly related: you could use the Wronskian to determine the values of A that make $P_n(x)$ and $Q_n(x)$ linearly independent.
Your question is related to Mason's theorem. There are a few papers which explore this specifically
1. MR1923392 (2003j:30012) Kim, Seon-Hong . Factorization of sums of polynomials. Acta Appl. Math. 73 (2002), no. 3, 275--284.
2. MR2103113 (2005h:30011) Kim, Seon-Hong . On zeros of certain sums of polynomials. Bull. Korean Math. Soc. 41 (2004), no. 4, 641--646
3. MR1911767 (2003d:11036) Pintér, Á. Zeros of the sum of polynomials. J. Math. Anal. Appl. 270 (2002), no. 1, 303--305.
-
Though in general you won't have a closed-form expression for the roots of your polynomials, it's possible to write down perturbation series for roots of a polynomial in a single variable in terms of the coefficients. These are basically the Puiseux series mentioned in this question.
This paper by Bernd Sturmfels (MR) sketches out the "global picture" of such series, though it's fairly complicated and I personally am not clear on whether there's a simple algorithm to decide which is the proper choice of series that will converge. See also the article "Algebraic equations and hypergeometric series" by M Passare, A Tsikh in the book: The Legacy of Niels Henrik Abel (MR).
What I've just written is probably a little unclear so I'll describe the simplest example. Suppose you'd like to write down a series for the roots of $a_2x^2+a_1x+a_0=0$. There are a pair of series which converges when $\left|\frac{a_1^2}{4a_0a_2}\right|<1$ and a pair which converges when $\left|\frac{a_1^2}{4a_0a_2}\right|>1$, and you can derive the first pair of series by treating $a_1x$ as a perturbation to the equation $a_2x^2+a_0=0$ and you can derive one of the second pair of series by treating $a_2x^2$ as a perturbation to $a_1x+a_0=0$ and the other by treating $a_0$ as a perturbation to the equation $a_2x^2+a_1x=0$.
By plugging in the coefficients of $P_n(x)+AQ_n(x)$ into the appropriate series I just described and looking at the leading order terms as functions of $A$, you will be able to derive the scaling of the corrections to the roots of $P_n(x)$.
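To see the dependence on $A$ concretely, here is a small numerical sketch (the two cubics are made-up examples, not anything from the question): as $A \to 0$ the roots of $P_n + A\,Q_n$ approach those of $P_n$, and as $A \to \infty$ they approach those of $Q_n$, which is the perturbative picture described above.

```python
import numpy as np

# Hypothetical example polynomials, coefficients in numpy's
# highest-degree-first convention:
# P(x) = x^3 - 2x + 1,  Q(x) = x^3 + x^2 - 1
P = np.array([1.0, 0.0, -2.0, 1.0])
Q = np.array([1.0, 1.0, 0.0, -1.0])

print("roots of P alone:", np.round(np.roots(P), 4))
for A in [0.01, 0.1, 1.0, 10.0, 100.0]:
    roots = np.roots(P + A * Q)          # roots of P(x) + A*Q(x)
    print(f"A = {A:>6}: roots = {np.round(roots, 4)}")
print("roots of Q alone:", np.round(np.roots(Q), 4))
```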
Apologies for the rather unexplicit answer, but this is just at the limit of what I understand.
http://mathoverflow.net/questions/83514/algorithm-for-checking-existance-of-real-roots-for-polynomials-in-more-than-one-v
## Algorithm for checking existence of real roots for polynomials in more than one variable
Is there a way to determine exactly (without the use of approximation methods) whether $p\in \mathbb{R}[x_1,\dots,x_n]$ has real roots?
Algorithms based on Sturm's theorem seem to be applicable to univariate polynomials only.
-
There are also multivariate versions of Sturm's theorem. – J.C. Ottem Dec 15 2011 at 12:58
## 2 Answers
Tarski's theorem on the decidability of the theory of real-closed fields provides a general algorithm that decides any question expressible in the first order language of real-closed fields. His algorithm can therefore determine, for any statement, whether it is true in the structure $\langle\mathbb{R},+,\cdot,0,1,\lt\rangle$. Thus, not only are the purely existential assertions (solvability of systems of equations) decidable in this context, but also more complex assertions involving iterated quantifiers, which would not seem without this result to be decidable even by approximation.
The way Tarski's argument proceeds is by elimination of quantifiers: every assertion in this language is equivalent to a quantifier-free assertion. In particular, the existence of a solution to $p(\vec x)=0$ is equivalent by Tarski's reduction to a quantifier-free assertion about the coefficients of the polynomial. That is, the algorithm reduces the question to a mere calculation involving the coefficients.
But if you are interested in actually using the algorithm in specific instances, rather than the theoretical question about whether in principle there is such an algorithm, then Tarski's algorithm may not actually be helpful. Although it has been implemented on computers, the algorithm takes something like a tower of exponential time in the size of the input, and evidently it has been proved that every quantifier-elimination algorithm must be at least double-exponential.
-
Thank you. My question was motivated by developing collision avoidance strategies. I am interested in an algorithm checking whether two (possibly n-dimensional) ellipsoids overlap. Using a polynomial representation of ellipsoids, one could check whether at least one real-valued solution exists in order to prove a collision. For a proper performance comparison with other algorithms (which might be based on numeric methods) I was interested in how the computational complexity behaves when the dimension of the problem grows. – ostap bender Dec 15 2011 at 14:27
I would expect that, for the specific problem of checking whether two ellipsoids overlap, there are more efficient algorithms than the relevant special case of Tarski's general algorithm. Unfortunately, this is far from my expertise, so I don't actually know any such algorithms. For conceptual purposes (if not for algorithmic ones), it might help to arrange, by an affine transformation, that one of your two ellipsoids is the unit ball. – Andreas Blass Dec 15 2011 at 15:17
Yes, there are indeed more efficient algorithms. As far as I know all of them use numerical approximation (in higher dimensions), either to calculate roots or to calculate a minimum/maximum. So I was curious whether there is an exact algorithm with a runtime bounded by the dimension (ideally with polynomial complexity). As far as I understood the discussion at mathoverflow.net/questions/43979/…, there is no such algorithm yet (at least not for the more general case of counting roots of fewnomials). – ostap bender Dec 20 2011 at 12:43
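For what it is worth, here is a sketch of the purely numerical kind of check mentioned in these comments (made-up matrices; SLSQP is a local optimizer, though the problem below is convex, and this is exactly the sort of approximate method the question wants to go beyond): two ellipsoids $(x-c_i)^T A_i (x-c_i) \le 1$ overlap exactly when the minimum of the first quadratic form over the second ellipsoid is at most 1.

```python
import numpy as np
from scipy.optimize import minimize

# Two made-up ellipsoids (x - c)^T A (x - c) <= 1 in R^3.
A1, c1 = np.diag([1.0, 4.0, 9.0]), np.array([0.0, 0.0, 0.0])
A2, c2 = np.diag([2.0, 1.0, 1.0]), np.array([1.2, 0.0, 0.0])

def q(x, A, c):
    # quadratic form (x - c)^T A (x - c)
    d = x - c
    return d @ A @ d

# Minimize q1 over the second ellipsoid; the ellipsoids overlap iff min <= 1.
res = minimize(
    lambda x: q(x, A1, c1),
    x0=c2,
    method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda x: 1.0 - q(x, A2, c2)}],
)
print("min of q1 over ellipsoid 2:", res.fun)
print("overlap" if res.fun <= 1.0 else "disjoint")
```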
This problem is solved in so-called Semi-algebraic Geometry. Here are some books:
Basu S. Algorithms in Semi-algebraic Geometry
Basu S., Pollack R., Roy M.-F. Algorithms in Real Algebraic Geometry
Bochnak J., Coste M., Roy M-F. Real algebraic geometry
Coste M. An introduction to semialgebraic geometry
http://crypto.stackexchange.com/questions/2640/lamport-signature-how-many-signatures-are-need-to-forge-a-signature?answertab=active
# Lamport signature: How many signatures are needed to forge a signature?
Lamport signature, "Signing the message": Note that now Alice's private key is used and should never be used again. The other 256 random numbers that she did not use for the signature she must never publish or use. Preferably she should delete them; otherwise, others gaining access to them would later be able to create false signatures.
Suppose Lamport's signature scheme were used incorrectly, say the same key were used more than once. How many signatures of distinct messages would you need to forge a signature?
I'm thinking: if you have one signature, and then a second message whose hash is the "opposite" (every 0 in the first hash is a 1 and every 1 is a 0), the two signatures together would reveal everything you needed from Alice's private key.
But it's probably not realistic to expect to get exactly those two messages. Is there some general formula for how many signatures you would need?
Thanks!
-
## 2 Answers
Each additional signature halves the security level.
A security level of about 64 bits can be broken by a determined attacker, and a level of 32 bits can be trivially broken on a single home computer.
So if you use 256 pairs, which is a reasonable level, since it offers 256-bit security against second-preimage attacks, and 128 bits against collisions, practical attacks are possible once you use the same key three times, and it's trivial to find message-signature pairs once you use it four times.
Note that at this point the attacker can't fully determine the message he wants to sign, he needs to try $2^{64}$ (after three sigs) or $2^{32}$ (after four sigs) different messages to find one that he can sign. This usually isn't a problem for the attacker, since many things he might want to sign have parts the attacker can choose freely.
## Why does it halve after every signature?
When you observe a single signature, you know one hash from each pair. So to create a signature you need to have a message hash that matches every single bit of the signature.
When you observe two signatures, you know both hashes from half of the pairs, and only one hash from the other half. So the message hash only needs to match the half where you only know one.
When you observe three signatures, you know both hashes from 3/4 of the pairs, and only one hash from the remaining 1/4. Now you only need to match 1/4th of the original bits.
etc.
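A small experiment illustrating this counting argument (a sketch only: toy parameters, SHA-256 as the message hash, and the key generation is simplified compared to a real Lamport implementation):

```python
import hashlib
import secrets

N = 32      # hash-bit pairs per key (a real key would use 256)
SIGS = 3    # how many distinct messages the same key is (mis)used for

# Private key: N pairs of random secrets (the public key would be their hashes).
sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(N)]

def msg_bits(msg):
    """First N bits of SHA-256(msg), one bit per key pair."""
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    return [(h >> i) & 1 for i in range(N)]

# The attacker watches SIGS signatures and records which secrets were revealed.
revealed = [[False, False] for _ in range(N)]
for k in range(SIGS):
    for i, b in enumerate(msg_bits(f"message {k}".encode())):
        revealed[i][b] = True       # signature k discloses sk[i][b]

both = sum(1 for r in revealed if r[0] and r[1])
print(f"pairs fully revealed: {both}/{N} "
      f"(expected about {N * (1 - 2 ** (1 - SIGS)):.1f})")
# A new message can be signed outright only if every one of its hash bits lands
# on an already-revealed secret; the pairs where just one secret is known are
# what force the ~2^64 (three sigs) or ~2^32 (four sigs) search mentioned above
# when 256 pairs are used.
```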
-
Thank you for good explanation! – Sup3rgnu May 20 '12 at 13:51
What do you mean by forge? If you are asking about (the common) existential forgery, then two message, signature pairs are enough, given that the messages differ in at least two bits.
As an example consider that you have the signatures for $m_1 = 1111$ and $m_2 = 1100$. Considering the preimages you now have, you can forge signatures for $m_3=1101$ and $m_4=1110$.
-
This will only work if there is no additional hashing step before the signing. – Paŭlo Ebermann♦ May 20 '12 at 17:36
You'd need to find a pre-image for $m_3$ and $m_4$, and that's hard, assuming the function used to hash the message is a good one. – CodesInChaos May 20 '12 at 18:23
http://physics.stackexchange.com/questions/18881/is-this-alternate-theory-of-gravity-as-cause-instead-of-effect-plausible
# Is this alternate theory of gravity as cause instead of effect plausible?
I came across this video today on YouTube that presents an interesting alternate theory of Gravity and the "missing" matter in the Universe that Dark Matter/Energy theories try to account for.
If I understand it correctly, it asks: "If it is possible for space-time to be bent without the need for mass, couldn't the gravitational effects we see, which under current theories require more mass than we have discovered, be a consequence of space-time that is somehow dented, either as a remnant of something in the past or just the way it exists? Perhaps the Big Bang resulted in an unfolding of space-time instead of an expansion of it, and the leftover folds account for the extra gravitational effects we see?"
I don't have the scientific background to evaluate this theory fully, and I'm interested in whether the theory presented in this video is remotely plausible to someone who has some expert knowledge of the subject.
-
## 2 Answers
This video is ridiculous. There is no content to it, and it is repeating hackneyed things that are obvious to anyone. Further, the idea could be presented in one sentence of text, saving people a lot of time:
"Can dark matter be spacetime curvature with no matter, and can dark energy be a straightening out of the rubber sheet in a rubber sheet conception of GR?"
The first is silly, since any curvature would necessarily behave as matter. The second is doubly-silly, because the rubber sheet is a terrible analogy for GR, in that it is the wrong components that are curved (space and not time), and the rubber sheet geodesics are repulsive, not attractive.
The rubber sheet is actually a good model of Newtonian gravitational interaction between long parallel rods (or point particles in 2d), to the extent that the rubber sheet is flat (not curved) but has height variations which can be used to drive masses toward each other in the Earth's gravitational field. The curvature of the sheet is second order (in the sense of calculus, it vanishes as the height squared), while the height variations are first order, so it is not a contradiction to imagine a flat sheet with height variations.
The rest of this answer is devoted to a discussion of the finer points.
### Curvature without matter
The theory of General Relativity is not just some made up stuff that you can modify willy nilly. You need to be consistent with the basic general principles of physics.
Suppose you have a region of space which is curved, and you put it in a big, constant, slowly varying gravitational field, say by bringing a big black hole close. What happens? The region of curvature must accelerate toward the black hole, by the equivalence principle. It must fall into the black hole by the black hole horizon property, and it must increase the mass of the black hole, by the horizon area theorem, which is the law of entropy increase.
So you have an object which responds to gravity just like any other matter, and it is matter by definition, if you like, whether you see something there or not. The total mass-energy is determined from the curvature.
### The inverse problem
There is a cute point of view very close to this idea which is the following
• Einstein equations relate $T_{\mu\nu}$ to $G_{\mu\nu}$. Solving for the metric is hard. But what if you just take any old metric and solve for $T$ (this is trivial), can't you then find infinitely many trivial solutions to GR?
The issue with this idea is that if you specify the curvature arbitrarily, the matter you get will be grossly unphysical: it will have negative energy, it will have matter flowing faster than the speed of light, and in many cases it will have a speed of sound greater than the speed of light. The restrictions on the inverse problem give rise to the energy conditions, which, in addition to the field equation, form the physical content of GR. Here are two of them (their standard tensor forms are written out just after this list):
• Null energy condition/Weak energy condition: The (borderline) energy component of T along any null-null direction is nonnegative.
• Strong energy condition: The energy component of T along any timelike or null direction exceeds the sum of the pressure components along the diagonal (in a local orthonormal frame).
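In the standard textbook notation (added here for reference; $k^\mu$ is any null vector, $t^\mu$ any timelike vector, and $T$ the trace of the stress-energy tensor), these conditions read
$$\text{NEC: } T_{\mu\nu} k^\mu k^\nu \ge 0, \qquad \text{WEC: } T_{\mu\nu} t^\mu t^\nu \ge 0, \qquad \text{SEC: } \left( T_{\mu\nu} - \tfrac{1}{2} T g_{\mu\nu} \right) t^\mu t^\nu \ge 0 .$$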
The weak energy condition can be colloquially restated as follows
• Gravity always focuses light
And heuristically, perhaps precisely, as the condition
• You can't use a local gravity field to get a light signal between two far-away points faster than the speed of light. (See this question: Does a Weak Energy Condition Violation Typically Lead to Causality Violation?)
The strong energy condition can be colloquially restated as follows:
• The pressure in matter as a function of density, when integrated from zero density, never has the speed of sound exceed the speed of light.
This condition has an implicit assumption that the pressure be found by classical thermodynamics, by going from a vacuum by adding density at a given temperature. It can be violated if you just have coherent particles making a scalar field expectation value in a vacuum, without making a superluminal speed of sound, just because the perturbations away from the vacuum still obey the strong energy condition, although the vacuum itself does not.
These two conditions are notable in that they allow you to describe two types of results. The weak energy condition gives theorems which are universal to GR in any setting, like closed-trapped surface singularity theorems, and area theorem, while the strong energy condition is used for more special situations where there are no scalar fields giving a bulk cosmological constant, like the big-bang singularity theorem (which fails with scalar field driven inflation).
If the warping introduced by hand violates the weak energy condition, it is difficult to see how it could not be used to signal faster than light, or to violate positive energy and make a perpetual motion machine. If it violates the strong energy condition, and it is not a homogenous scalar field, it is difficult to see how little bumps can't be used to propagate sound faster than light.
So it is believed that only homogenous classical fields violate the strong energy condition, and that nothing classical violates the weak energy condition.
### Inflation
The theory of inflation postulates that there is a homogeneous scalar field which had a large expectation value near the big bang, and a large energy density. This gives rise to accelerated expansion, which makes the universe equilibrate to a small-horizon-distance sphere called a deSitter space.
The deSitter phase lasts a short time, and seeds the modern era, where we are expanding normally. But we still see some residual deSitter-like acceleration, and this is almost certainly due to some residual field energy in our vacuum, a residual scalar (or many scalars) left behind in our vacuum after inflation ended.
These ideas are very natural in GR, and in fact are predictions of GR. So it is not reasonable to say that accelerated expansion and dark matter point to a violation of GR. This is like saying that the discovery of Neptune invalidates Newton's model of the solar system because it alters the orbit of Uranus. That's included in the theory.
But in the special case of vacuum energy, it is philosophically possible in classical GR to consider the energy as part of the equations, or part of the matter, and both positions are viable (classically). The name "dark energy" reflects the philosophical position that it should be considered matter. The name "cosmological constant" reflects the other view, that it should be considered part of the Einstein equations.
These two points of view can't really be distinguished from each other classically in any positivist way, so the two positions are classically equivalent. Whether one is true or the other is completely moot. Quantum mechanically, there is the question of whether deSitter space is stable, and if it is unstable to decay to a zero (or perhaps negative) cosmological constant, then this might be interpreted as resolving the question in favor of the "dark energy" point of view.
-
Calling that a "theory" is awfully generous; frankly, it is just wrong. All it really says is, "What if, instead of spacetime curvature being caused by mass, it wasn't caused by anything?" The video doesn't explain anything in quantitative detail or present any sort of testable result, and so as far as gravitational science is concerned, it's totally useless. It's nothing more than pseudo-philosophical speculation.
Incidentally, the existence of dark matter is a prediction of general relativity. Basically,
1. From observations of galaxy rotation curves and gravitational lensing, we can measure the curvature of spacetime around a galaxy.
2. Using Einstein's equation $G_{\mu\nu} = 8\pi T_{\mu\nu}$, we can then calculate the distribution of mass that would produce that amount of curvature. Note that Einstein's equation is local, which means that the curvature at a point only depends on the amount of mass (actually the stress-energy tensor) at that point, not on what curvature may have existed at any other point or any other time.
3. We can also calculate the mass distribution of the visible stars in the galaxy by measuring the amount and spectrum of light they give out.
The two calculated mass distributions don't match. In particular, the one calculated from Einstein's equation is greater than the one calculated from measuring the light. So general relativity is directly telling us that there is something there which is not visible, and yet interacts with gravity in a matter-like way. Hence the term "dark matter."
-
Sorry, I mean "theory" in the colloquial way, not the scientific use of the term. Thanks for the explanation. – JohnFx Dec 30 '11 at 1:37
http://mathhelpforum.com/advanced-algebra/206020-matrix-linear-transformation.html
# Thread:
1. ## Matrix of a linear transformation
Consider the following problem.
Let M denote the set of 2x2 real matrices. Let A be an element of M with trace 2 and determinant -3. Identifying M with R4, consider the linear transformation T: M -> M defined by T(B) = AB. Then which of the following statements are true?
a) T is diagonalizable. b) 2 is an eigenvalue of T. c) T is invertible. d) T(B) = B for some non-zero matrix B in the set M.
Based on the understanding that the matrix of a linear transformation is the matrix which is multiplied by the input vector from the domain to get the output vector belonging to the co-domain:
Here the definition of the transformation gives us the impression that the matrix A is, obviously, the matrix of the linear transformation. So, as per the given information, since the determinant of A is not zero it is invertible, and since its eigenvalues are distinct it is also diagonalizable. Therefore options a and c are true. Since the trace is 2 and the determinant is -3, the two eigenvalues are 3 and -1, which shows that option b is not true. Similarly, since 1 is not an eigenvalue of A, T(B) = AB can never equal B for a non-zero B. This means option d is also not true.
My question is this.
What does the phrase, "Identifying M with R4" have to do with the problem?
Is there something overlooked by me due to my ignoring of the significance of this phrase?
2. ## Re: Matrix of a linear transformation
"Identifying M with $R^4$" means thinking of $\begin{bmatrix}a & b \\ c & d \end{bmatrix}$ as being the same as (a, b, c, d) with the "usual" addition and scalar multiplication. That effectively means $\begin{bmatrix}a & b \\ c & d \end{bmatrix}+ \begin{bmatrix}e & f \\ g & h\end{bmatrix}= \begin{bmatrix}a+ e & b+ f \\ c+ g & d+ h\end{bmatrix}$ and $\alpha\begin{bmatrix}a & b \\ c & d \end{bmatrix}= \begin{bmatrix}\alpha a & \alpha b \\ \alpha c & \alpha d \end{bmatrix}$.
3. ## Re: Matrix of a linear transformation
Thank you HallsofIvy... I sort of assumed it to be default info... do you find any other mistake with the conclusions I've arrived at for the options of the given question?
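As a quick numerical illustration of the identification discussed in this thread (an editorial sketch, not part of the original posts; the particular sample matrix and the use of NumPy are assumptions), pick any 2x2 matrix with trace 2 and determinant -3 and write out the 4x4 matrix of T:

```python
import numpy as np

# One possible matrix with trace 2 and determinant -3 (an arbitrary choice)
A = np.array([[2.0, 1.0],
              [3.0, 0.0]])

# Identify B = [[a, b], [c, d]] with the vector (a, b, c, d) in R^4.
# Under this identification, T(B) = AB acts by the 4x4 matrix kron(A, I_2).
T = np.kron(A, np.eye(2))

print(np.sort(np.linalg.eigvals(A)))   # [-1.  3.]
print(np.sort(np.linalg.eigvals(T)))   # [-1. -1.  3.  3.]  (2 is not among them)
print(abs(np.linalg.det(T)) > 1e-12)   # True, so T is invertible
```

The eigenvalues of T are those of A, each repeated twice, so options a and c hold while b and d fail, in agreement with the discussion above.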
http://electromaniacs.com/content/view/237/9/
# Electrical impedance
Electrical impedance, or simply impedance, is a measure of opposition to a sinusoidal alternating electric current. The concept of electrical impedance generalises Ohm's law to AC circuit analysis. Unlike electrical resistance, the impedance of an electric circuit can be a complex number, but the same unit, the ohm, is used for both quantities. Oliver Heaviside coined the term "impedance" in July of 1886.
Generalized impedances in a circuit can be drawn with the same symbol as a resistor or with a labeled box.
## AC steady state
In general, the solutions for the voltages and currents in a circuit containing resistors, capacitors and inductors (in short, all linearly behaving components) are solutions to a linear ordinary differential equation. It can be shown that if the voltage and current sources in the circuit are sinusoidal and of constant frequency, the solutions take a form referred to as AC steady state. Thus, all of the voltages and currents in the circuit are sinusoidal and have constant amplitude, frequency and phase.
In AC steady state, v(t) is a sinusoidal function of time with constant amplitude Vp, constant frequency f, and constant phase $\varphi$:
$v(t) = V_\mathrm{p} \cos \left( 2 \pi f t + \varphi \right) = \Re \left( V_\mathrm{p} e^{j 2 \pi f t} e^{j \varphi} \right)$
where
j represents the imaginary unit ($\sqrt{-1}$)
$\Re (z)$ means the real part of the complex number z.
The phasor representation of v(t) is the constant complex number V:
$V = V_\mathrm{p} e^{j \varphi} \,$
For a circuit in AC steady state, all of the voltages and currents in the circuit have phasor representations as long as all the sources are of the same frequency. That is, each voltage and current can be represented as a constant complex number. For DC circuit analysis, each voltage and current is represented by a constant real number. Thus, it is reasonable to suppose that the rules developed for DC circuit analysis can be used for AC circuit analysis by using complex numbers instead of real numbers.
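As a small numerical illustration (a Python sketch; the amplitude, phase and frequency below are arbitrary choices, not anything fixed by the text), the phasor is a single complex constant from which the whole sinusoid can be rebuilt:

```python
import numpy as np

f = 50.0                      # frequency in Hz (arbitrary)
Vp, phi = 10.0, np.pi / 6     # peak amplitude and phase (arbitrary)

V = Vp * np.exp(1j * phi)     # the phasor: one complex constant

t = np.linspace(0.0, 0.04, 9)
v_direct = Vp * np.cos(2 * np.pi * f * t + phi)
v_from_phasor = np.real(V * np.exp(1j * 2 * np.pi * f * t))

print(np.allclose(v_direct, v_from_phasor))   # True
```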
## Definition of electrical impedance
The impedance of a circuit element is defined as the ratio of the phasor voltage across the element to the phasor current through the element:
$Z = \frac{V}{I}$
It should be noted that although Z is the ratio of two phasors, Z is not itself a phasor. That is, Z is not associated with some sinusoidal function of time.
For DC circuits, the resistance is defined by Ohm's law to be the ratio of the DC voltage across the resistor to the DC current through the resistor:
$R = \frac{V_\mathrm{R}}{I_\mathrm{R}}$
where
VR and IR above are DC (constant real) values.
Just as Ohm's law is generalized to AC circuits through the use of phasors, other results from DC circuit analysis such as voltage division, current division, Thevenin's theorem, and Norton's theorem generalize to AC circuits.
## Impedance of different devices
For a resistor:
$Z_\mathrm{resistor} = \frac{V_\mathrm{R}}{I_\mathrm{R}} = R \,$
For a capacitor:
$Z_\mathrm{capacitor} = \frac{V_\mathrm{C}}{I_\mathrm{C}} = \frac{1}{j \omega C} \ = \frac{-j}{\omega C} \,$
For an inductor:
$Z_\mathrm{inductor} = \frac{V_\mathrm{L}}{I_\mathrm{L}} = j \omega L \,$
For derivations, see Impedance of different devices (derivations).
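For instance, the three formulas can be evaluated directly (a Python sketch; the component values and the 60 Hz source are made-up numbers):

```python
import numpy as np

omega = 2 * np.pi * 60.0        # angular frequency of a 60 Hz source
R, L, C = 100.0, 0.5, 10e-6     # ohms, henries, farads (arbitrary values)

Z_R = R                         # purely real
Z_L = 1j * omega * L            # purely positive imaginary
Z_C = 1 / (1j * omega * C)      # purely negative imaginary

print(Z_R, Z_L, Z_C)
```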
## Reactance
Main article: Reactance
The term reactance refers to the imaginary part of the impedance. Some examples:
• A resistor's impedance is R (its resistance) and its reactance is 0.
• A capacitor's impedance is j(-1/ωC) and its reactance is -1/ωC.
• An inductor's impedance is jωL and its reactance is ωL.
It is important to note that the impedance of a capacitor or an inductor is a function of the frequency ω and is an imaginary quantity; however, it describes a perfectly real physical phenomenon, namely the phase shift between the voltage and current phasors caused by the capacitor or inductor. Earlier it was shown that the impedance of a resistor is constant and real; in other words, a resistor does not cause a phase shift between voltage and current as capacitors and inductors do.
When resistors, capacitors, and inductors are combined in an AC circuit, the impedances of the individual components can be combined in the same way that the resistances are combined in a DC circuit. The resulting equivalent impedance is in general, a complex quantity. That is, the equivalent impedance has a real part and an imaginary part. The real part is denoted with an R and the imaginary part is denoted with an X. Thus:
$Z_\mathrm{eq} = R_\mathrm{eq} + jX_\mathrm{eq} \,$
where
Req is termed the resistive part of the impedance
Xeq is termed the reactive part of the impedance.
It is therefore common to refer to a capacitor or an inductor as a reactance or equivalently, a reactive component (circuit element). Additionally, the impedance for a capacitance is negative imaginary while the impedance for an inductor is positive imaginary. Thus, a capacitive reactance refers to a negative reactance while an inductive reactance refers to a positive reactance.
A reactive component is distinguished by the fact that the sinusoidal voltage across the component is in quadrature with the sinusoidal current through the component. This implies that the component alternately absorbs energy from the circuit and then returns energy to the circuit. That is, unlike a resistance, a reactance does not dissipate power.
It is instructive to determine the value of the capacitive reactance at the frequency extremes. As the frequency approaches zero, the capacitive reactance grows without bound so that a capacitor approaches an open circuit for very low frequency sinusoidal sources. As the frequency increases, the capacitive reactance approaches zero so that a capacitor approaches a short circuit for very high frequency sinusoidal sources.
Conversely, the inductive reactance approaches zero as the frequency approaches zero so that an inductor approaches a short circuit for very low frequency sinusoidal sources. As the frequency increases, the inductive reactance increases so that an inductor approaches an open circuit for very high frequency sinusoidal sources.
## Combining impedances
Main article: Series and parallel circuits
Combining impedances in series, parallel, or in delta-wye configurations, is the same as for resistors. The difference is that combining impedances involves manipulation of complex numbers.
### In series
Combining impedances in series is simple:
$Z_\mathrm{eq} = Z_1 + Z_2 = (R_1 + R_2) + j(X_1 + X_2) \!\ .$
### In parallel
Combining impedances in parallel is much more difficult than combining simple properties like resistance or capacitance, due to a multiplication term.
$Z_\mathrm{eq} = Z_1 \| Z_2 = \left( {Z_\mathrm{1}}^{-1} + {Z_\mathrm{2}}^{-1}\right) ^{-1} = \frac{Z_\mathrm{1}Z_\mathrm{2}}{Z_\mathrm{1}+Z_\mathrm{2}} \!\ .$
In rationalized form the equivalent impedance is:
$Z_\mathrm{eq} = R_\mathrm{eq} + j X_\mathrm{eq} \!\ .$
$R_\mathrm{eq} = { (X_1 R_2 + X_2 R_1) (X_1 + X_2) + (R_1 R_2 - X_1 X_2) (R_1 + R_2) \over (R_1 + R_2)^2 + (X_1 + X_2)^2}$
$X_\mathrm{eq} = {(X_1 R_2 + X_2 R_1) (R_1 + R_2) - (R_1 R_2 - X_1 X_2) (X_1 + X_2) \over (R_1 + R_2)^2 + (X_1 + X_2)^2}$
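The rationalized expressions can be cross-checked against direct complex arithmetic (a Python sketch; the two impedance values are arbitrary):

```python
import numpy as np

Z1 = 30.0 + 40.0j     # R1 + jX1 (arbitrary)
Z2 = 50.0 - 20.0j     # R2 + jX2 (arbitrary)

# Direct complex-arithmetic result
Z_parallel = Z1 * Z2 / (Z1 + Z2)

# Real and imaginary parts from the rationalized formulas above
R1, X1, R2, X2 = Z1.real, Z1.imag, Z2.real, Z2.imag
den = (R1 + R2) ** 2 + (X1 + X2) ** 2
R_eq = ((X1 * R2 + X2 * R1) * (X1 + X2) + (R1 * R2 - X1 * X2) * (R1 + R2)) / den
X_eq = ((X1 * R2 + X2 * R1) * (R1 + R2) - (R1 * R2 - X1 * X2) * (X1 + X2)) / den

print(np.isclose(Z_parallel, R_eq + 1j * X_eq))   # True
```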
## Circuits with general sources
Impedance is defined by the ratio of two phasors where a phasor is the complex peak amplitude of a sinusoidal function of time. For more general periodic sources and even non-periodic sources, the concept of impedance can still be used. It can be shown that virtually all periodic functions of time can be represented by a Fourier series. Thus, a general periodic voltage source can be thought of as a (possibly infinite) series combination of sinusoidal voltage sources. Likewise, a general periodic current source can be thought of as a (possibly infinite) parallel combination of sinusoidal current sources.
Using the technique of Superposition, each source is activated one at a time and an AC circuit solution is found using the impedances calculated for the frequency of that particular source. The final solutions for the voltages and currents in the circuit are computed as sums of the terms calculated for each individual source. However, it is important to note that the actual voltages and currents in the circuit do not have a phasor representation. Phasors can be added together only when each represents a time function of the same frequency. Thus, the phasor voltages and currents that are calculated for each particular source must be converted back to their time domain representation before the final summation takes place.
This method can be generalized to non-periodic sources where the discrete sums are replaced by integrals. That is, a Fourier transform is used in place of the Fourier series.
## Magnitude and phase of impedance
Complex numbers are commonly expressed in two distinct forms. The rectangular form is simply the sum of the real part with the product of j and the imaginary part:
$Z = R + jX \,$
The polar form of a complex number is the real magnitude of the number multiplied by the complex phase. This can be written with exponentials, or in phasor notation:
$Z = \left|Z\right| e^ {j \varphi} = \left|Z\right|\angle \varphi$
where
$\left|Z\right| = \sqrt{R^2+X^2} = \sqrt{Z Z^*}$ is the magnitude of Z (Z* denotes the complex conjugate of Z), and
$\varphi = \arctan \bigg(\frac{X}{R} \bigg)$ is the angle.
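In code, the conversion between the rectangular and polar forms is a single library call (Python's cmath module assumed):

```python
import cmath

Z = 3.0 + 4.0j
magnitude, phase = cmath.polar(Z)       # (5.0, 0.927... rad)
Z_again = cmath.rect(magnitude, phase)  # back to rectangular form

print(magnitude, phase, Z_again)
```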
## Peak phasor versus rms phasor
A sinusoidal voltage or current has a peak amplitude value as well as an rms (root mean square) value. It can be shown that the rms value of a sinusoidal voltage or current is given by:
$V_\mathrm{rms} = \frac{V_\mathrm{peak}}{\sqrt{2}}$
$I_\mathrm{rms} = \frac{I_\mathrm{peak}}{\sqrt{2}}$
In many cases of AC analysis, the rms value of a sinusoid is more useful than the peak value. For example, to determine the amount of power dissipated by a resistor due to a sinusoidal current, the rms value of the current must be known. For this reason, phasor voltage and current sources are often specified as an rms phasor. That is, the magnitude of the phasor is the rms value of the associated sinusoid rather than the peak amplitude. Generally, rms phasors are used in electrical power engineering whereas peak phasors are often used in low-power circuit analysis.
In any event, the impedance is clearly the same. Whether peak phasors or rms phasors are used, the scaling factor cancels out when the ratio of the phasors is taken.
## Matched impedances
Main article: Impedance matching
When fitting components together to carry electromagnetic signals, it is important to match impedance, which can be achieved with various matching devices. Failing to do so is known as impedance mismatch and results in signal loss and reflections. The existence of reflections allows the use of a time-domain reflectometer to locate mismatches in a transmission system.
For example, a conventional radio frequency antenna for carrying broadcast television in North America was standardized to 300 ohms, using balanced, unshielded, flat wiring. However cable television systems introduced the use of 75 ohm unbalanced, shielded, circular wiring, which could not be plugged into most TV sets of the era. To use the newer wiring on an older TV, small devices known as baluns were widely available. Today most TVs simply standardize on 75 ohm feeds instead.
## Inverse quantities
The reciprocal of a non-reactive resistance is called conductance. Similarly, the reciprocal of an impedance is called admittance. The conductance is the real part of the admittance, and the imaginary part is called the susceptance. Conductance and susceptance are not the reciprocals of resistance and reactance in general, but only for impedances that are purely resistive or purely reactive; in the latter case a change of sign is required.
## Origin of impedances
The origin of j can be seen by calculating an electrical circuit by the direct method, without using impedances or phasors. The circuit is formed by a resistance, an inductance and a capacitor in series. The circuit is connected to a sinusoidal voltage source and we have waited long enough so that all the transitory phenomena have faded away; it is now in sinusoidal steady state. As the system is linear, the steady-state current will also be sinusoidal and of the same frequency as the voltage source. The only two quantities that we do not know are the amplitude of the current and its phase relative to the voltage source. If the voltage source is $\scriptstyle{V=V_\circ\cos(\omega t)}$ the current will be of the form $\scriptstyle{I=I_\circ\cos(\omega t+\varphi)}$, where $\scriptstyle{\varphi}$ is the relative phase of the current, which is unknown. The equation of the circuit is:
$V_\circ\cos(\omega t)= V_R+V_L+V_C$
where
$\scriptstyle{V_R}$, $\scriptstyle{V_L}$ and $\scriptstyle{V_C}$ are the voltages across the resistance, the inductance and the capacitor.
$V_R\,$ is equal to $RI_\circ\cos(\omega t+\varphi)$
The definition of inductance says:
$V_L=L\textstyle{{dI\over dt}}= L\textstyle{{d\left(I_\circ\cos(\omega t+\varphi)\right)\over dt}}= -\omega L I_\circ\sin(\omega t+\varphi)$.
The definition of capacitance says that $\scriptstyle{I=C{dV_C\over dt}}$. It is easy to verify (by differentiating this expression) that:
$V_C=\textstyle{{1\over \omega C}} I_\circ\sin(\omega t+\varphi)$.
Then the equation to solve is:
$V_\circ\cos(\omega t)= RI_\circ\cos(\omega t+\varphi) -\omega L I_\circ\sin(\omega t+\varphi)+ \textstyle{{1\over \omega C}} I_\circ\sin(\omega t+\varphi)$
That is, we have to find the two values $\scriptstyle{I_\circ}$ and $\scriptstyle{\varphi}$ that makes this equation true for all values of time $\scriptstyle{t}$.
To do this, another circuit must be considered, identical to the former and fed by a voltage source whose only difference with the former is that it started with a lag of a quarter of a period. The voltage of this source is $\scriptstyle{V=V_\circ\cos(\omega t - {\pi \over 2} ) = V_\circ\sin(\omega t) }$. The current in this circuit will be the same as in the former one but for a lag of a quarter of period:
$I=I_\circ\cos(\omega t + \varphi - {\pi \over 2})= I_\circ\sin(\omega t + \varphi) \,$.
The voltage is given by:
$V_\circ\sin(\omega t)= RI_\circ\sin(\omega t+\varphi) +\omega L I_\circ\cos(\omega t+\varphi)- \textstyle{{1\over \omega C}} I_\circ\cos(\omega t+\varphi)$
Some of the signs have changed because a cosine becomes a sine, and a sine becomes a negative cosine.
The first equation is added to the second one multiplied by j, in order to replace expressions of the form $\scriptstyle{\cos x+j\sin x}$ by $\scriptstyle{e^{jx} }$, using Euler's formula. This gives:
$V_\circ e^{j\omega t} =RI_\circ e^{j\left(\omega t+\varphi\right)}+j\omega LI_\circ e^{j\left(\omega t+\varphi\right)} +\textstyle{{1\over j\omega C}}I_\circ e^{j\left(\omega t+\varphi\right)}$
As $\scriptstyle{e^{j\omega t} }$ is not zero, we can divide the whole equation by this factor:
$V_\circ =RI_\circ e^{j\varphi}+j\omega LI_\circ e^{j\varphi} +\textstyle{{1\over j\omega C}}I_\circ e^{j\varphi}$
This gives:
$I_\circ e^{j\varphi}= \textstyle{V_\circ \over R + j\omega L + \scriptstyle{{1 \over j\omega C}}}$
The left side of the equation contains the two values we are trying to deduce: the amplitude of the current is the modulus of the complex number on the right, and its phase is the argument of that complex number.
The formula on the right is the usual formula obtained when writing circuit equations using phasors and impedances. The denominator is the sum of the impedances of the resistance, the inductor and the capacitor.
Even though the formula
$I= \textstyle{V_\circ \over R + j\omega L + \scriptstyle{{1 \over j\omega C}}}$
contains imaginary quantities, these imaginary parts affect the real behaviour of the circuit (for example, j·j = -1 produces real contributions), which means that the previously stated formula cannot be simplified to just
$I= \textstyle{V_\circ \over R}$
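The whole derivation can be sanity-checked numerically: take the phasor solution for the current and verify that it satisfies the original time-domain circuit equation (a Python sketch; all component values below are arbitrary):

```python
import numpy as np

# Arbitrary component values and source
R, L, C = 10.0, 0.1, 100e-6        # ohms, henries, farads
V0, w = 5.0, 2 * np.pi * 50.0      # volts, rad/s

# Phasor solution: I0*exp(j*phi) = V0 / (R + j*w*L + 1/(j*w*C))
I_phasor = V0 / (R + 1j * w * L + 1 / (1j * w * C))
I0, phi = abs(I_phasor), np.angle(I_phasor)

# Check the time-domain equation V0*cos(wt) = V_R + V_L + V_C
t = np.linspace(0.0, 0.1, 1000)
I = I0 * np.cos(w * t + phi)
V_R = R * I
V_L = -w * L * I0 * np.sin(w * t + phi)        # L dI/dt
V_C = (I0 / (w * C)) * np.sin(w * t + phi)     # from I = C dV_C/dt

print(np.allclose(V_R + V_L + V_C, V0 * np.cos(w * t)))   # True
```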
## Analogous impedances
### Electromagnetic impedance
In problems of electromagnetic wave propagation in a homogeneous medium, the intrinsic impedance of the medium is defined as:
$\eta = \sqrt{\frac{\mu}{\varepsilon}}$
where
μ and ε are the permeability and permittivity of the medium, respectively.
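For example, for free space this evaluates to roughly 377 ohms (a Python sketch using the vacuum constants shipped with SciPy):

```python
import math
from scipy.constants import mu_0, epsilon_0

eta_0 = math.sqrt(mu_0 / epsilon_0)
print(eta_0)   # ~376.73 ohms
```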
### Acoustic impedance
Main article: Acoustic impedance
In complete analogy to the electrical impedance discussed here, one also defines acoustic impedance, a complex number which describes how a medium absorbs sound by relating the amplitude and phase of an applied sound pressure to the amplitude and phase of the resulting sound flux.
### Data-transfer impedance
Another analogous coinage is the use of impedance by computer programmers to describe how easy or difficult it is to pass data and flow of control between parts of a system, commonly ones written in different languages. The common usage is to describe two programs or languages/environments as having a low or high impedance mismatch.
## Application to physical devices
Note that the equations above only apply to theoretical devices. Real resistors, capacitors, and inductors are more complex and each one may be modeled as a network of theoretical resistors, capacitors, and inductors. Rated impedances of real devices are actually nominal impedances, and are only accurate for a narrow frequency range, and are typically less accurate for higher frequencies. Even within its rated range, an inductor's resistance may be non-zero. Above the rated frequencies, resistors become inductive (power resistors more so), capacitors and inductors may become more resistive. The relationship between frequency and impedance may not even be linear outside of the device's rated range.
http://mathoverflow.net/questions/114032/restriction-of-sheaf/114035 | ## restriction of sheaf
Suppose $X$ is a smooth variety and $F$ is a locally free sheaf on $X$. Let $U$ be an open subset of $X$ and let $i$ denote the inclusion map. Is $i_*i^*F$ equal to $F$?
thanks.
-
## 3 Answers
No. For example, let $X = P^1$, $U = A^1$ and $F = O_X$. Then $i^*F = O_U$ and the global sections of $i^*F$ form the algebra of polynomials $k[t]$. Therefore $\Gamma(X,i_*i^*F) = \Gamma(U,i^*F) = k[t]$, while $\Gamma(X,F) = k$.
-
This is true if and only if the complement of $U$ has codimension at least $2$. To see that this condition is sufficient, see this MO answer. To see that it is necessary, see Sasha's example, or take any $X$ and any Cartier divisor $D$ on $X$ and note that for $U=X\setminus \mathrm{Supp}D$, $i^*\mathscr O_X(mD)\simeq \mathscr O_U\simeq i^*\mathscr O_X(nD)$ for any $m,n\in \mathbb Z$, so $i_*i^*F$ can't be $F$ for both choices.
Remark: for the codimension $2$ condition, you don't actually need smoothness. See the linked answer for more.
-
In general, we have the so-called projection formula: if $f:X\to Y$ is a morphism of ringed spaces, $\mathcal F$ an $\mathcal O_X$-module, and $E$ a locally free $\mathcal O_Y$-module of finite rank, then $f_* (\mathcal F \otimes f^{*} E) \simeq f_{*}\mathcal F \otimes E$.
Edit (following Will's remark): The projection formula yields, in the case of an open immersion $i:U \subset X$, the following identity: $i_* i^* F \simeq i_*\mathcal O_U \otimes F$. Therefore, if the complement of $U$ has codimension at least 2 in $X$, then $i_* i^* F\simeq F$ by normality of $X$.
-
You need the assumption $f_* \mathcal O_U=\mathcal O_X$, which is not satisfied for all open immersions. – Will Sawin Nov 21 at 8:39
Thanks, you are perfectly right! (I had in mind a morphism with connected fibers.) So here one can just say that $i_*i^*F = i_* \mathcal O_U \otimes F$. – Henri Nov 21 at 10:06
In particular, as $X$ is smooth hence normal, the desired property holds as soon as the complement of $U$ has codimension at least $2$. – Henri Nov 21 at 10:11
http://mathhelpforum.com/advanced-statistics/208006-triangles-hexagon.html | # Thread:
1. ## Triangles in a Hexagon
Consider all of the possibilities of generating a triangle with three diagonals and/or sides of a regular hexagon. In each case, find the probability that a point inside the hexagon is also inside the triangle. Explain each solution.
Attached Thumbnails
2. ## Re: Triangles in a Hexagon
Originally Posted by jthomp18
Consider all of the possibilities of generating a triangle with three diagonals and/or sides of a regular hexagon. In each case, find the probability that a point inside the hexagon is also inside the triangle. Explain each solution.
You have correctly identified the three types of triangles in a hexagon.
Now, you must find how many of each type are possible (do not over-count).
The area of a regular hexagon is $A=\frac{3\sqrt{3}}{2}\ell^2$ where $\ell$ is the length of a side of the hexagon.
Next, you need to find the area of each type of triangle.
You are looking for the ratio of the area of the triangles to area of the hexagon.
If you know complex variables, the numbers $v_k = \exp \left( {\frac{{k\pi i}}{3}} \right),\,k = 0,1, \cdots ,5$ are the vertices of a regular hexagon inscribed in the unit circle. From these you can use the semi-perimeter (Heron) formula to find the area of each of the three types of triangles.
If you don't know complex variables, I don't know how to help you further.
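Following the complex-number hint above, a brute-force check is also easy to write (an editorial sketch in Python, not part of the original thread):

```python
import cmath
from itertools import combinations
from collections import Counter

# Vertices of a regular hexagon inscribed in the unit circle (side length 1)
v = [cmath.exp(1j * k * cmath.pi / 3) for k in range(6)]

def tri_area(a, b, c):
    # Shoelace formula written with complex numbers
    return abs(((b - a).conjugate() * (c - a)).imag) / 2

hex_area = 3 * 3 ** 0.5 / 2   # (3*sqrt(3)/2) * side^2 with side = 1

# Ratio of triangle area to hexagon area = probability that a uniform point
# of the hexagon also lies inside the triangle
ratios = Counter(round(tri_area(*t) / hex_area, 6) for t in combinations(v, 3))
print(ratios)   # 6 triangles with ratio 1/6, 12 with ratio 1/3, 2 with ratio 1/2
```

The three ratios 1/6, 1/3 and 1/2 are exactly the probabilities asked for, one for each type of triangle shown in the first post.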
http://mathoverflow.net/questions/63777/characterizing-the-surcomplex-numbers | ## Characterizing the surcomplex numbers
Conway showed that the Field of surreal numbers ("${\bf No}$") is the maximal totally ordered Field.
Later Jacob Lurie showed that the Group of all partizan games ${\bf Pg}$ is the universally embedding partially ordered Abelian Group.
Is there some analogous functorial characterization of the Field of surcomplex numbers ${\bf No}[i]$?
Or might there be some sense in which ${\bf No}[i]$ isn't the "right" algebraic closure of ${\bf No}$? (Recall what happens when one takes the algebraic closure of the field of $p$-adic numbers: one gets a system that is unsatisfactory because it is not metrically complete, and then one has to pass to an even larger system to obtain the correct $p$-adic analogue of the field of complex numbers. Of course this is a vague analogy; in particular, the notion of metric completeness is not relevant in the case of ${\bf No}[i]$.)
Come to think of it, why is ${\bf No}[i]$ algebraically closed?
-
The property that adjoining $i$ makes your field algebraically closed is called being real closed and has many, many equivalent formulations: en.wikipedia.org/wiki/Real_closed_field – Qiaochu Yuan May 3 2011 at 5:59
Thanks! I see that Norman Alling, in his book "Foundations of Analysis over Surreal Number Fields" (which I obtained after posting my query), proves that the surreal numbers are real-closed by identifying them with formal power series of a suitable kind. He does not (as far as I can tell) characterize ${\bf Cx}$ functorially. Perhaps the right functorial characterization would involve fields equipped with an involution (complex conjugation) that satisfies various properties and in particular induces partial orderings based on real part, imaginary part, and modulus. – James Propp May 3 2011 at 16:44
http://math.stackexchange.com/questions/139503/in-the-history-of-mathematics-has-there-ever-been-a-mistake/139504 | # In the history of mathematics, has there ever been a mistake?
I was just wondering whether or not there have been mistakes in mathematics. Not a conjecture that ended up being false, but a theorem which had a proof that was accepted for a nontrivial amount of time and then someone found a hole in the argument. Does this happen anymore now that we have computers? I imagine not. But it seems totally possible that this could have happened back in the Enlightenment heyday.
Feel free to interpret this how you wish!
-
Lots, and yes it still happens nowadays (most mathematicians don't computer-verify their proofs). People aren't perfect (not even mathematicians!). One famous historical example is an incorrect proof of the four-color theorem (en.wikipedia.org/wiki/Four_color_theorem) which stood for 11 years. See also mathoverflow.net/questions/35468/… . – Qiaochu Yuan May 1 '12 at 17:41
I have a half-remembered story in my head of an old "proof" of the continuum hypothesis. The paper itself was perfectly sound, but one of the results it cited turned out to be flawed and brought the whole thing down. Perhaps someone else remembers more details. – Austin Mohr May 1 '12 at 18:45
I recall from "Surely You're Joking, Mr. Feynman" that when Richard Feynman first solved the problem that won him the nobel prize (and started the field of Quantum Electrodynamics), he realized it contradicted some other widely-believed theorem in Physics. It turns out the original paper which "proved" this theorem had a glaring flaw, but no one had ever bothered to double-check it (he later set to work, with a couple of grad students, to re-verify all of the theorems in quantum physics). I'm afraid I don't know enough about quantum physics to know what that theorem was, though. – BlueRaja - Danny Pflughoeft May 1 '12 at 21:36
## 15 Answers
[I posted this recently in another thread, but it works much better here, so I've deleted it from there. I spent some time a couple of years ago trying to track down unequivocally incorrect claims of false results, and this was the most remarkable one I found. ]
In 1933, Kurt Gödel showed that the class called $\lbrack\exists^*\forall^2\exists^*, {\mathrm{all}}, (0)\rbrack$ was decidable. These are the formulas that begin with $\exists a\exists b\ldots \exists m\forall n\forall p\exists q\ldots\exists z$, with exactly two $\forall$ quantifiers, with no intervening $\exists$s. These formulas may contain arbitrary relations amongst the variables, but no functions or constants, and no equality symbol. Gödel showed that there is a method which takes any formula in this form and decides whether it is satisfiable. (If there are three $\forall$s in a row, or an $\exists$ between the $\forall$s, there is no such method.)
In the final sentence of the same paper, Gödel added:
In conclusion, I would still like to remark that Theorem I can also be proved, by the same method, for formulas that contain the identity sign.
Mathematicians took Gödel's word for it, and proved results derived from this one, until the mid-1960s, when Stål Aanderaa realized that Gödel had been mistaken, and the argument Gödel used would not work. In 1983, Warren Goldfarb showed that not only was Gödel's argument invalid, but his claimed result was actually false, and the larger class was not decidable.
Gödel's original 1933 paper is Zum Entscheidungsproblem des logischen Funktionenkalküls (On the decision problem for the functional calculus of logic) which can be found on pages 306–327 of volume I of his Collected Works. (Oxford University Press, 1986.) There is an introductory note by Goldfarb on pages 226–231, of which pages 229–231 address Gödel's error specifically.
-
Should someone care to know more about this decision problem of predicate logic and the precise meaning of $\lbrack\exists^*\forall^2\exists^*, {\mathrm{all}}, (0)\rbrack$, please consult the book by Börger, Grädel, and Gurevich. This book also mentions Gödel's error, although I think the discussion is abstracted from Goldfarb's. – MJD May 1 '12 at 17:48
What do I need to know/read to understand what is being said here? I guess it's a naive question and i'm also declaring myself as a complete dummy, but I would love to understand what is there. – Gustavo Bandeira May 1 '12 at 19:18
@GustavoBandeira I don't think it's a naïve question or that you appear to be a dummy. The theorem is obscure and quite technical. You need to know what is a formula of first-order logic. You need to know what it means for a problem to be decidable. In particular, you need to understand that there is no way in general to tell if a predicate formula represents a true statement. I suggest you ask your question in a post and see what you get. – MJD May 1 '12 at 20:54
@GustavoBandeira I know how you feel! Just keep learning and you will get there. – MJD May 3 '12 at 16:43
@GustavoBandeira Also, there is a lot of very interesting mathematics you can do without trigonometry or calculus, and this is some of it. You can start learning about mathematical logic right now if you want to. – MJD May 3 '12 at 16:53
When trying to enumerate mathematical objects, it's notoriously easy to inadvertently assume that some condition must be true and conclude that all the examples have been found, without recognizing the implicit assumption. A classic example of this is in tilings of the plane by pentagons: for the longest time everyone 'knew' that there were five kinds of pentagons that could tile the plane. Then Richard Kershner found three more, and everyone knew that there were eight; Martin Gardner wrote about the 'complete list' in a 1975 Scientific American column, only to be corrected by a reader who had found a ninth - and then after reporting on that discovery, by Marjorie Rice, a housewife who devoted her free time to finding tessellations and found several more in the process. These days, she has a web page devoted to the subject, including a short history, at http://home.comcast.net/~tessellations/tessellations.htm
-
I love this example. – Grumpy Parsnip May 2 '12 at 16:46
A rare and pleasant example of early crowdsourcing. Maths has a long history of public discussion leading to this sort of thing happening; one of my big regrets is that this has become steadily less common since the 19th century. – Tynam May 3 '12 at 9:48
Is there even a finite number of tiling pentagons? – D. Thomine May 10 '12 at 20:55
Several examples come to my mind:
1) Hilbert's "proof" of the continuum hypothesis, in which an error was discovered by Olga Taussky when she was editing his collected works. This was shown to be undecidable by Paul Cohen later.
2) Cauchy's proof (published as lecture notes in his collected papers) of the fact that the pointwise limit of continuous functions is continuous. At the time, there was a poor understanding of the concept of continuity, until Weierstrass came along.
3) Lamé's proof of Fermat's last theorem, erroneous in that it was supposing unique factorization in rings of algebraic integers, which spurred the invention of ideals by Kummer.
-
The Cauchy example is widely cited, but when I looked into it a few years ago I found that the closer I looked the more complex it became. As you pointed out, the notions of continuity and convergence were still evolving. Under the ideas of continuity and convergence current at the time, Cauchy's result may have been correct, but the concepts evolved out from under him to include finer distinctions that did not exist at the time he wrote his proof. This sort of thing happens all the time in mathematics, and is not usually counted as an error. – MJD May 2 '12 at 13:30
The gap in Lame's proof was immediately pointed out by Liouville, even before publication. – franz lemmermeyer May 2 '12 at 17:28
One of the classic examples surely is the Perko pair of knots. For 75 years people thought that these two knots were distinct, even though they had found no invariants to distinguish between them. Then in 1974 Kenneth Perko (a (Math PhD holding) lawyer!) discovered that they were actually the same knot. Even Conway, apparently, in compiling his table, had missed this.
It is not by any means a significant error, but it is an intriguing one nonetheless.
-
I heard Curtis McMullen mention this in a talk once! It's a pretty cool story. – Steven-Owen May 2 '12 at 1:09
Oh I didn't notice it had been given there - apologies, that one's a better answer.... – Chuck May 2 '12 at 3:02
To be fair Perko had a PhD in math and had just become a lawyer at the time. So it's not like some random lawyer did this :) – Zarrax May 2 '12 at 3:06
@Zarrax I suspected he must've had some serious formal training in math, but never knew that - edited accordingly – Chuck May 2 '12 at 3:16
In 2003 a startling breakthrough was made (Review text only available to MathSciNet subscribers) in the theory of combinatorial differential manifolds. This theory was started by Gel'fand and MacPherson as a new combinatorial approach to topology, and one of the objects of its study is the matroid bundle. Much effort was spent in clarifying the relationship between real vector bundles and matroid bundles. From various previous results, the relationship is expected to be "complicated".
The Annals of Mathematics published in 2003 an article by Daniel Biss whose main theorem essentially showed that the opposite is true: that morally speaking there is no difference between studying real vector bundles and matroid bundles. This came as quite a shock to the field. (For an expert's account of the importance of this result, one should read the above-linked MathSciNet review.)
Unfortunately the article was retracted in 2009 after a flaw was found by (among others) Mnev. The story was popularised by Szpiro in his book of essays.
From Wikipedia one also finds the following account of the incident by someone familiar with the details and has expertise in the field, which contradicts some of the assertions/descriptions in Szpiro's essay. According to the various accounts, "experts" may have known about the error in the proof as early as 2005. But in the "recorded history" the first public announcement was not until 2007, and the erratum only published in 2009. So depending on your point of view, this may or may not count as a theorem accepted for some "nontrivial" amount of time.
-
The "telescope conjecture" of chromatic homotopy theory is an interesting example.
In 1984, Ravenel published a seminal paper called "Localization with respect to certain periodic homology theories" where he made a series of 7 or 8 important conjectures about the global structure of the ($p$-local) stable homotopy category of finite spaces. Four years later, Devinatz-Hopkins-Smith published "Nilpotence I" (while Hopkins was still a grad student!!), which along with the follow-up paper "Nilpotence II" proved all but one of Ravenel's conjectures, the telescope conjecture. Then in 1990, Ravenel published a disproof of this conjecture, and went so far as to write a paper entitled "Life after the telescope conjecture" in 1992 that detailed a new way forward. But then it turned out that his disproof had a flaw in it too! The telescope conjecture remains open to this day, although I think most experts believe that it is false.
-
The telescope conjecture says: If you start with a $p$-local finite space (or spectrum) of type $n$ and take the mapping telescope of any $v_n$-self map, then the result has the same Bousfield class as $K(n)$ itself. It's known that the Bousfield class of any such mapping telescope is independent of the original choices (this was proved in Nilpotence II); what's not known is whether this always coincides with $\langle K(n) \rangle$. – Aaron Mazel-Gee May 2 '12 at 1:32
In the sentence "But then it turned out that his disproof had a flaw in it too!" what does "too" refer to? I.e., what else in this story had a flaw in it? – Omar May 10 '12 at 21:10
@Omar: Sorry, that was unclear. I just meant to emphasize that at first people thought it was true, and then Ravenel thought he proved it was false, but that development was in fact a misstep. – Aaron Mazel-Gee May 11 '12 at 17:50
A fairly recent example that I know of is a paper by the name of "A counterexample to a 1961 'theorem' in homological algebra" by Amnon Neeman (2002). It was a fairly big deal for some people when they realized the 'theorem' was false. I don't know enough about the specifics to discuss it in depth, since it's not terribly close to what I work on, so here is the abstract of Neeman's paper in lieu of any discussion:
In 1961, Jan-Erik Roos published a "theorem", which says that in an $[AB4^*]$ abelian category, $\lim^1$ vanishes on Mittag–Leffler sequences. See Propositions 1 and 5 in [4]. This is a "theorem" that many people since have known and used. In this article, we outline a counterexample. We construct some strange abelian categories, which are perhaps of some independent interest. These abelian categories come up naturally in the study of triangulated categories. A much fuller discussion may be found in [3]. Here we provide a brief, self-contained, non-technical account. The idea is to make the counterexample easy to read for all the people who have used the result in their work. In the appendix, Deligne gives another way to look at the counterexample.
-
A famous example of this involves Vandiver's 1934 "proof" of one of the two steps in a line of attack on (an important case of) Fermat's Last Theorem. In algebraic number theory, there arise important positive integers called class numbers. In particular, for each prime p, a certain class number $h_p^+$ can be defined that is intimately connected with Fermat's Last Theorem.
Kummer proposed that (an important case of) Fermat's Last Theorem could be proved by
i) Proving that $h_p^+$ is not divisible by p
ii) Proving that $h_p^+$ not being divisible by p implies the "first case" of Fermat's Last Theorem.
In 1934, Vandiver published a proof of ii). In the introduction to "Cyclotomic Fields I and II", Serge Lang stated:
"...many years ago, Feit was unable to understand a step in Vandiver's 'proof' that $p$ not dividing $h_p^+$ implies the first case of Fermat's Last Theorem, and stimulated by this, Iwasawa found a precise gap which is such that there is no proof."
(In fact, Vandiver passed away believing that his proof was correct.)
I would like to know more about this history of this myself, and would gladly edit this post with more reliable information. For instance,
http://gow.epsrc.ac.uk/NGBOViewGrant.aspx?GrantRef=EP/C549074/1
says that Feit's observation occurred "around" 1980, which suggests that it was never published.
-
If I remember correctly, Wiles' original proof had a flaw also, right? – user12014 May 2 '12 at 1:24
But, I think his proof was not accepted as being correct, and the error in his first proof was pointed out during the time when people were reviewing the accuracy of his whole proof (it is quite long). So should it count as an example of what the OP is asking? – Rankeya May 2 '12 at 2:36
Btw, I am by no means aware of what actually happened. I got this from reading Simon Singh's book on FLT. – Rankeya May 2 '12 at 2:37
@Barry: I emailed Walter Feit about this back in 1992, asking him what the nature of the gap was. He told me that he had gotten stuck at a step in Vandiver's proof and asked Iwasawa about it, who couldn't straighten things out. By this time Feit had forgotten the details. I then wrote to another mathematician (still alive, so let me not mention the name in case there is an error in what follows) whose reply was this: "On page 119, he says 'Let $p$ be a prime divisor of one of the $w$'s in (2).' Then on p. 120 in the middle he seems to be using the "fact" that $x+\zeta{y}$ is divisible [contd.] – KCd May 2 '12 at 12:38
by $p$ (I can't guarantee that this is what he is doing since I didn't quite get (7) to work). However, in (2) there does not seem to be any guarantee that $p$ does not also occur in the denominator somewhere. In fact, it must, if $p$ does not divide $x+\zeta{y}$. He cannot ignore some ideals, since he needs them all on page 122." – KCd May 2 '12 at 12:39
Some technical results in the disintegration theory of von Neumann algebras (roughly speaking, results expressing an algebraic object as a "direct integral" of "simpler" algebraic objects) stated by Minoru Tomita in the 1950s turned out to not be OK. There was an entire chapter following Tomita's approach in Naimark's book Normed Rings that vanished from later editions when the errors came to light.
I am not clear on the details of exactly how Tomita's stuff was wrong. (This happened before I was born, and I am not that interested in the history of mathematics, so I only know what I have heard about this from people who were there when it happened.) I have heard one person say that Tomita made use of certain technical results that only held under certain "nice" hypotheses that were not met at the level of generality at which he was working. Another person said that Tomita's arguments simply weren't clear enough to admit close analysis of how he went wrong, but that flaws were evident once people produced counterexamples to statements of the results. I don't personally know which of these stories is closer to the truth.
I am not sure to what extent this work was "accepted for a nontrivial amount of time." The person who told me most of what I know about this conveyed to me that at the time, there was a sense in the air that there was something "fishy" about some of the theorems, and that counterexamples were circulated among people working in the area long before it all worked itself out in print.
-
I don't know how long some of his proofs stood, but Legendre is infamous for his repeated attempts at proving the parallel postulate.
-
Not sure if the following fits the criterion for constraint, but Hans Rademacher incident comes to mind (page 82, The Riemann Hypothesis: For the aficionado and virtuoso alike):
8.2 Hans Rademacher and False Hopes
In 1945, Time Magazine reported that Hans Rademacher had submitted a flawed proof of the Riemann Hypothesis to the journal Transactions of the American Mathematical Society. The text of the article follows: A sure way for any mathematician to achieve immortal fame would be to prove or disprove the Riemann hypothesis. This baffling theory, which deals with prime numbers, is usually stated in Riemann’s symbolism as follows: “All the nontrivial zeros of the zeta function of s, a complex variable, lie on the line where sigma is 1/2 (sigma being the real part of s).” The theory was propounded in 1859 by Georg Friedrich Bernhard Riemann (who revolutionized geometry and laid the foundations for Einstein’s theory of relativity). No layman has ever been able to understand it and no mathematician has ever proved it.
One day last month electrifying news arrived at the University of Chicago office of Dr. Adrian A. Albert, editor of the Transactions of the American Mathematical Society. A wire from the society’s secretary, University of Pennsylvania Professor John R. Kline, asked Editor Albert to stop the presses: a paper disproving the Riemann hypothesis was on the way. Its author: Professor Hans Adolf Rademacher, a refugee German mathematician now at Penn.
On the heels of the telegram came a letter from Professor Rademacher himself, reporting that his calculations had been checked and confirmed by famed Mathematician Carl Siegel of Princeton’s Institute for Advanced Study. Editor Albert got ready to publish the historic paper in the May issue. U.S. mathematicians, hearing the wildfire rumor, held their breath. Alas for drama, last week the issue went to press without the Rademacher article. At the last moment the professor wired meekly that it was all a mistake; on rechecking. Mathematician Siegel had discovered a flaw (undisclosed) in the Rademacher reasoning. U.S. mathematicians felt much like the morning after a phony armistice celebration. Sighed Editor Albert: “The whole thing certainly raised a lot of false hopes.” [142]
Edit: This link has further (dis)proofs of RH including de Branges saga.
-
This seems to be related enough to deserve to be in an answer:
The April 2013 issue of the Notices of the AMS features a long article Errors and Corrections in Mathematics Literature written by Joseph F. Grcar.
Not a specific mistake, rather an analysis of how mathematics journals and mathematicians deal with mistakes in general, compared to other sciences.
-
Has there ever been a mistake? LOL! Yeah, just a few. ;-)
OK, so that isn't exactly what you asked...
Well, there have been plenty of conjectures which everybody thought were correct, which in fact were not. The one that springs to mind is the Over-estimated Primes Conjecture. I can't seem to find a URL, but essentially there was a formula for estimating the number of primes less than $N$. Thing is, the formula always slightly over-estimates how many primes there really are... or so everybody thought. It turns out that if you make $N$ absurdly large, then the formula starts to under-estimate! Nobody expected that one. (The "absurdly large number" was something like $10^{10^{10^{10}}}$ or something silly like that.)
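Assuming the formula being half-remembered here is the logarithmic integral $\mathrm{li}(x)$ as an estimate of the prime-counting function $\pi(x)$ (the classical belief was $\pi(x)<\mathrm{li}(x)$ for all $x$; Littlewood showed the difference changes sign infinitely often, and Skewes bounded where the first sign change can occur), the overestimate at any size you can actually compute is easy to see. A small sketch using sympy (assumed to be available):

```
# Compare the prime-counting function pi(x) with the logarithmic integral li(x).
# For every x that is feasible to compute directly, li(x) overestimates pi(x),
# even though Littlewood proved the inequality eventually reverses.
from sympy import primepi, li, N

for x in [10**3, 10**4, 10**5, 10**6]:
    exact = primepi(x)            # number of primes <= x
    estimate = N(li(x))           # logarithmic integral approximation
    print(x, exact, estimate, estimate - exact)
```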
Fermat claimed to have had a proof for his infamous "last theorem". But given that the eventual proof is a triumph of modern mathematics running to over 200 pages and understood by only a handful of mathematicians worldwide, this cannot be the proof that Fermat had 300 years ago. Therefore, either 300 years of mathematicians have overlooked something really obvious, or Fermat was mistaken. (Since he never wrote down his proof, we can't claim that "other people believed it before it was proven false" though.)
Speaking of which, I'm told that Gauss or Cauchy [I forget which] published a proof for a special case of Fermat's last theorem - and then discovered that, no, he was wrong. (I don't recall how long it took or how many people believed it.)
-
Plausibly, yes. – MathematicalOrchid May 1 '12 at 20:56
Thanks for your answer; I imagined this kind of stuff happening a lot. I was curious as to whether people claim to have proved something, and other people see and believe the proof, only for someone later to come and punch holes in it. – Steven-Owen May 1 '12 at 22:03
"Speaking of which, I'm told that Guass or Cauchy [I forget which] published a proof for a special case of Fermat's last theorem - and then discovered that, no, he was wrong." To my taste this is too much hearsay for a site like this. If you were "told" this by a reliable source, please include the source, so that interested readers can investigate it. If you can't remember where you heard it from, is it really good form to put it in an answer? – Pete L. Clark Apr 28 at 18:45
A plentiful source of examples of "theorems" that were "proved" is supplied by the Italian school of algebraic geometry.
The Italians, most prominently Guido Castelnuovo, Federigo Enriques and Francesco Severi, derived some remarkable results on the classification of algebraic surfaces, relying strongly on geometrical insight. The problem was, their reliance on intuition ultimately led them astray, to the point where some of the things that were intuitively obvious to Severi were plain wrong. For an extreme example, Severi claimed to show that a degree 6 surface in 3-dimensional projective space has at most 52 nodes, while Mumford exhibited such a surface that in fact had 65 nodes. Wikipedia provides a short but informative discussion. There is also a great thread on Mathoverflow.
-
I haven't read all the answers, so I may be repeating, but one famous example is that the first version of Andrew Wiles' proof of Fermat's Last Theorem had a gap in it, and when it was discovered it took time to fill the gap.
-
Using the search function you could see this was mentioned several times on this page. – Asaf Karagila May 2 '12 at 21:56
http://mathoverflow.net/questions/86947?sort=newest | ## On two spectral sequences for the cohomology of a double complex
For a (bounded) double complex (of abelian groups or vector spaces) one can consider two spectral sequences that converge to the cohomology of the totalization: one can first compute either the cohomology of rows, or the cohomology of columns. Suppose that one of these spectral sequences degenerates at $E_1$ (i.e. the cohomology of rows yields the factors of the induced filtration of the limit). Do any 'nice' properties of the second spectral sequence follow?
-
Why the k-theory tag? – Dylan Wilson Jan 29 2012 at 9:52
And what is meant by nice? If you're asking whether the second spectral sequence has to degenerate quickly, I'm pretty sure one can cook up, for each $r$, fairly simple complexes where one spectral sequence degenerates immediately, and the other lasts til $E_r$. Take two exact sequences and place them far apart, maybe? – Dylan Wilson Jan 29 2012 at 9:57
Is there an arxiv tag 'homology' without k-theory?:) – Mikhail Bondarko Jan 29 2012 at 18:45
## 2 Answers
There is a basic way to see whether things like this should be true. Any bounded double complex of vector spaces over a field $k$ is (noncanonically) the direct sum of complexes of the following two sorts:
Squares: `$$\begin{matrix} k & \rightarrow & k \\ \uparrow & & \uparrow \\ k & \rightarrow & k \end{matrix}$$`
Staircases:```$$\begin{matrix}
k & \rightarrow & k & & & & \\
& & \uparrow & & & & \\
& & k & \rightarrow & k & & \\
& & & & \uparrow & & \\
& & & & k & \rightarrow & k\\
\end{matrix}$$``` We'll say that the "length" of a staircase is the number of nonzero entries in it, so the above staircase has length $6$. Staircases may have either odd or even length, and may start and end either with vertical or horizontal maps.
The operation of "forming the spectral sequence" commutes with direct sum, so it is enough to know what the spectral sequences of each of these look like.
The spectral sequence of a square is zero on every page except for the square itself; in particular, it converges in one step.
The spectral sequence of an odd-length staircase is one dimensional on every page except for the staircase itself. The one nonzero term is at one end of the staircase for the horizontal spectral sequence and at the other end for the vertical spectral sequence. So it also converges in one page.
The spectral sequence of an even-length staircase is the one which violates your claim. In one direction, it is zero on every page after the staircase itself. In the other direction, there are $m$ pages with two nonzero terms, where the length of the staircase was $2m$. These terms are at the two ends of the staircase, and they annihilate each other on the $(m+1)$st page.
In particular, a double complex which consists simply of a length $2m$ staircase will die on the first page in one direction, but will survive for $m$ pages in the other.
-
Thank you very much for such a simple and clear explanation!! – Mikhail Bondarko Jan 29 2012 at 18:47
Let $E,F$ be the two spectral sequences of the double complex and for simplicity assume they are in the first quadrant. If $E_1$ degenerates, say, $E_1^{i,j}=0$ for $j>0$ then you know that $F$ converges to $E_2$, i.e. $F_2^{\;i,j} \Rightarrow E_2^{i+j,0}$.
In general, I don't think much can be said about the properties of the second spectral sequence. As an example consider the LHS spectral sequence of a group extension $1 \to H \to G \to Q \to 1$. The double complex is $$Hom_{kQ}(X,Hom_{kH}(Y,M))$$ where $k$ is a ring, $M$ is a $kG$-module and $X,Y$ are projective resolutions. Now we have $$E_1^{i,j} = H^iHom_{kQ}^\ast(X,Hom_{kH}^j(Y,M))=H^i(Q;Hom_{kH}^j(Y,M))=0 \text{ if } j>0.$$ Thus $E_1$ degenerates, but the second spectral sequence $$F_2^{\; i,j} = H^i(Q;H^j(H;M)) \Rightarrow H^{i+j}(G;M)$$ can be arbitrarily complicated.
-
http://mathoverflow.net/questions/26515/simultaneous-block-decomposition-of-a-set-of-orthogonal-projections/26523 | ## Simultaneous Block decomposition of a set of orthogonal projections
An orthogonal projection is an Hermitian matrix $P$ such that $P^2=P$. Denote $U^*$ the conjugate transpose of a matrix $U$.
It can be easily shown that for two projections $P_1$ and $P_2$, there exists a unitary $U$ such that both $UP_1 U^*$ and $UP_2U^*$ are block diagonal with blocks of size one or two (And both resulting matrices have the same block structure).
My question is whether this block decomposition of projections can be generalised, for more than two projections: Given orthogonal projections $P_1, P_2, ..., P_k$, Is there a unitary $U$ such that for each $i$, $UP_i U^*$ is block diagonal with blocks of size at most $k$? (The resulting matrices must have the same block structure)
Two weaker questions are:
Is there a bound on the size of the blocks as a function of $k$ only, i.e. as a function of the number of projectors, independently of their dimensions?
If a block decomposition is not possible, then what about decomposing the projectors into $k$-diagonal matrices? (All entries of the matrix are zero except (possible) for the diagonal and the $k$-upper and $k$-lower diagonals)
I would deeply appreciate any help or reference on how to handle these problems.
Best regards,
Mateus
-
## 2 Answers
If $P$ is a projection then $I-2P$ is a reflection. Two reflections generate the dihedral group and all irreducible representations of the dihedral group have dimension at most two. This explains your observation about two projections.
But the alternating group $Alt(n)$ can be generated by three involutions when $n\ge9$ (a result of Nuzhin). Take the usual permutation representation of $Alt(n)$ with degree $n$. If $R$ is the permutation matrix representing an involution, then $R$ is symmetric and $(I-R)^2 =2(I-R)$. So $\frac12(I-R)$ is a projection.
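A quick numerical check of the identity used here, for a hypothetical involution in $S_4$ (numpy assumed available):

```
import numpy as np

# Permutation matrix R of the involution (1 2)(3 4) in S_4, acting on basis vectors 0..3.
perm = [1, 0, 3, 2]
R = np.eye(4)[perm]                    # R is symmetric and R @ R = I
P = 0.5 * (np.eye(4) - R)              # candidate projection (1/2)(I - R)

print(np.allclose(R @ R, np.eye(4)))   # True: R is an involution
print(np.allclose(P @ P, P))           # True: P is idempotent
print(np.allclose(P, P.T))             # True: P is symmetric, hence an orthogonal projection
```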
This permutation representation has one invariant subspace of degree one (the span of the constant vectors) and the orthogonal complement to this is irreducible with dimension $n-1$. This shows that the size of your blocks is not bounded by the number of projections.
-
Thank you very much for the nice answer. Do you have some indication on the more general case, where the matrices are just required to be k-diagonal? – Mateus de Oliveira May 31 2010 at 6:40
Each projection gives a decomposition of the vector space into two subspaces. So with two projections you have four subspaces. The four subspace problem is tame. For $m$ projections you have $2m$ subspaces, and the $n$-subspace problem is known to be wild for $n\ge 5$.
This point of view has not taken into account the requirement that pairs of subspaces have zero intersection. Once this has been taken into account you can explain your observation about two idempotents. For more than two idempotents I expect this is a wild problem.
-
http://physics.stackexchange.com/questions/52239/coulomb-gauge-fixing-and-normalizability?answertab=active | # Coulomb gauge fixing and “normalizability”
The Setup
Let Greek indices be summed over $0,1,\dots, d$ and Latin indices over $1,2,\dots, d$. Consider a vector potential $A_\mu$ on $\mathbb R^{d,1}$ defined to gauge transform as $$A_\mu\to A_\mu'=A_\mu+\partial_\mu\theta$$ for some real-valued function $\theta$ on $\mathbb R^{d,1}$. The usual claim about Coulomb gauge fixing is that the condition $$\partial^i A_i = 0$$ serves to fix the gauge in the sense that $\partial^iA_i' = 0$ only if $\theta = 0$. The usual argument for this (as far as I am aware) is that $\partial^i A'_i =\partial^iA_i + \partial^i\partial_i\theta$, so the Coulomb gauge conditions on $A_\mu$ and $A_\mu'$ give $\partial^i\partial_i\theta=0$, but the only sufficiently smooth, normalizable (Lebesgue-integrable?) solution to this (Laplace's) equation on $\mathbb R^d$ is $\theta(t,\vec x)=0$ for all $\vec x\in\mathbb R^d$.
My Question
What, if any, is the physical justification for the smoothness and normalizability constraints on the gauge function $\theta$?
EDIT 01/26/2013 Motivated by some of the comments, I'd like to add the following question: are there physically interesting examples in which the gauge function $\theta$ fails to be smooth and/or normalizable? References with more details would be appreciated. Lubos mentioned that perhaps monopoles or solitons could be involved in such cases; I'd like to know more!
Cheers!
-
Good question which is clearly more general than Coulomb gauge fixing! – Michael Brown Jan 26 at 9:03
@MichaelBrown Thanks! Yeah I'm learning some sugra at the moment, and this argument that exploits the vanishing of smooth, normalizable solutions to Laplace's equation seems to arise quite often, for instance when one wants to eliminate certain harmonic gauge field components. – joshphysics Jan 26 at 9:09
Smoothness of $\theta$ is needed because $A_\mu$ that is modified by derivatives of $\theta$ has to remain continuous - or smooth, but one level weaker requirement of smoothness than for $\theta$. The normalizability just means that $\theta$ never diverges in the bulk of the space and decreases sufficiently quickly at infinity. When it doesn't, one would have to discuss monopoles, instantons etc. but those things are a non-issue for U(1) gauge theory. – Luboš Motl Jan 26 at 9:51
At any rate "sufficient smoothness" and "normalizability" are conditions that are often required in physics and physicists don't spend much time with such stuff - they're natural physical conditions for various reasons. Mathematicians are often obsessed with these mathematical details - they're needed for rigorous proofs - but physicists are not. In fact, physicists really think exactly in the opposite way than what you suggest. They would assume that functions in physics are sufficiently smooth and well behaved - and if some of them are not, they would get concerned or alert. – Luboš Motl Jan 26 at 9:53
@joshphysics: To focus the answers, perhaps you could point out a couple of references where you have encountered this use of the word normalizable? – Qmechanic♦ Jan 26 at 13:06
## 1 Answer
It means that the gauge ambiguity is practically removed in the Coulomb gauge if you deal with a "nice" $\mathbf{A}$ (which is your purpose).
However, it does not mean you only deal with the radiation (propagating solutions). Transversal $\mathbf{A}$ is different from zero for a uniformly moving charge too.
-
http://math.stackexchange.com/questions/140304/normal-extensions-a-question-about-the-definition | Normal extensions (a question about the definition)
The definition of a normal extension in the book "Abstract algebra" is :
If $K$ is an algebraic extension of $F$ which is the splitting field over $F$ for a collection of polynomials $f(x)\in F[x]$ then $K$ is called a normal extension
I think that there is something here I don't understand: If $K$ is an algebraic extension of $F$ then by definition each element of $K$ is a root of a polynomial with coefficients in $F$.
So each element of $K$ corresponds to a polynomial in $F[x]$ (s.t the element is a root of this polynomial).
So I deduced that $K$ is the splitting field of the collection of polynomials in $F[x]$ that correspond to the elements in $K$. Hence every algebraic extension is also a normal one.
What part of my argument is wrong ?
-
1 Answer
Just because the extension contains a root of a polynomial doesn't mean that it contains all of the roots of the polynomial, which is a requirement for it to be the splitting field for the polynomial.
For example, $\mathbb{Q}(\sqrt[3]{2})$ is not normal because it is not the splitting field of the minimal polynomial of $\sqrt[3]{2}$, namely $x^3-2$. This can be seen through Theorem 13 on p. 572 in Dummit and Foote:
If $\mathbb{Q}(\sqrt[3]{2})$ were the splitting field for some polynomial (which doesn't split in $\mathbb{Q}$) then it would be the splitting field for any irreducible factor of that polynomial. In a field of characteristic $0$ (e.g. $\mathbb{Q}$), every irreducible polynomial is separable. So, by the theorem, since $x^3-2$ has a root in $\mathbb{Q}(\sqrt[3]{2})$, it would split in $\mathbb{Q}(\sqrt[3]{2})$ if the extension were normal.
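As a concrete check that not all roots of $x^3-2$ can lie in $\mathbb{Q}(\sqrt[3]{2})$: this field sits inside $\mathbb{R}$, but two of the three roots are non-real. A short sympy sketch (the library is assumed available) listing the roots:

```
from sympy import symbols, solve

x = symbols('x')
roots = solve(x**3 - 2, x)
print(roots)                          # one real root 2**(1/3) and two complex conjugate roots
print([r.is_real for r in roots])     # exactly one True: only one root is real
```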
-
$\mathbb{Q}(\sqrt[3]{2})$ is not normal because it is not the splitting field of any collection of polynomials... – lhf May 3 '12 at 11:32
@lhf thanks, I have clarified the answer. – Antonio Vargas May 3 '12 at 16:21
It's been a long time since the question was answered but I don't understand the proof of the Theorem that @AntonioVargas posted from that book. I don't understand the step where they apply the induction hypothesis. Could you explain that proof to me, please? – Kits89 Dec 29 '12 at 17:01
@Kits89 it would probably be best to open a new question about that. – Antonio Vargas Dec 29 '12 at 20:00
http://physics.stackexchange.com/tags/hydrogen/new | # Tag Info
## New answers tagged hydrogen
3
### Stark Effect on the 1st excited state of Hydrogen
What happens, essentially, is that the S and P wavefunctions get mixed to produce eigenstates that have shifted centres. This means the atom gets an induced electric dipole moment, whose interaction with the external field either lowers or raises the eigenenergy. More specifically, consider the wavefunctions of the states $|200\rangle$ and $|210\rangle$: ...
2
### Hydrogen wave function in momentum space
To get it in the momentum representation, one has to do the Fourier transform of this function. This reference can be useful: http://forum.sci.ccny.cuny.edu/Members/lombardi/publications/MOMREP-H-atom.pdf/view At the end, separation of variables after transformation to the momentum space is not trivial, and the mixing of quantum number is presented.
2
### Difference between atom and elementary particle questioned
A hydrogen atom ion $H^{+}$, with an atomic mass number of A=1, charge number Z=1, is the same as a proton. A hydrogen ion thus usually just refers to a proton. Depending on context, however, you may also have a hydrogen ion which is (a) an ion of a deuterium atom, in which case it is a bound state of a neutron and a proton, with atomic mass number A=2, ...
1
### Finding the wavelength of an electron in its ground state?
I believe you will find that the expectation value of the momentum is zero, which will sort of mess up your calculation of the wave length. Calculating the wavelength of the ground state of any bound system is folly anyway, since it will not have any nodes. The characteristic you may be looking for is the average radius, or the uncertainty in the position.
1
### Ground state energy of hydrogen molecule ion
I had a look at the paper, and I think the author means that the energy for the reaction: $$H + H^+ \rightarrow H_2^+$$ is negative i.e. the ground state energy of $H_2^+$ is less than the sum of the ground state energies of $H$ and $H^+$. The reason for this is simply the observation that the $H_2^+$ ion is stable. If the energy of $H_2^+$ were higher ...
Top 50 recent answers are included
http://quant.stackexchange.com/questions/3607/should-i-use-an-arithmetic-or-a-geometric-calculation-for-the-sharpe-ratio?answertab=votes | # Should I use an arithmetic or a geometric calculation for the Sharpe Ratio?
What are the advantages/disadvantages of using the arithmetic Sharpe Ratio vs the geometric Sharpe Ratio? Is one more correct? Or is one better in certain circumstances?
-
## 4 Answers
In addition to John's answer and just to make things clear:
The arithmetic mean is given by
$$\mu_a = \frac{1}{n} \sum_{i=1}^n x_i$$
The geometric mean is given by
$$\mu_g = \sqrt[n]{\prod_{i=1}^n (1+x_i)} -1$$
And we have that
$$\mu_g \leq \mu_a$$
So not only does the geometric Sharpe ratio take into account the "actual" (compounded) return of the portfolio, but it is also a more conservative measure.
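A minimal numerical illustration of the two averages and the resulting per-period ratios, using a made-up excess-return series (numpy assumed available; no annualization is attempted):

```
import numpy as np

r = np.array([0.02, -0.01, 0.03, 0.015, -0.02, 0.01])   # hypothetical excess returns

mu_a = r.mean()                                          # arithmetic mean
mu_g = np.prod(1.0 + r) ** (1.0 / len(r)) - 1.0          # geometric mean
sigma = r.std(ddof=1)                                    # sample standard deviation

print(mu_a, mu_g, mu_g <= mu_a)                          # the geometric mean never exceeds the arithmetic one
print(mu_a / sigma, mu_g / sigma)                        # "arithmetic" vs "geometric" Sharpe per period
```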
-
I'm not sure it makes sense to think of one as more correct than another. However, they do have significant differences. It may help to distinguish between ex-post evaluation of a strategy and ex-ante prediction of what the strategy's performance will be.
For simplicity, let's assume the log returns of the strategy are approximately i.i.d. univariate normal and the risk-free rate is a constant. If you were a mean-variance investor deciding between the risk-free rate and the strategy, you would estimate the mean and variance of the log returns, project them to the investor's horizon, and convert the normal to lognormal to obtain the arithmetic returns. So the Sharpe ratio that is consistent with the way it originated in financial theory, i.e. as the slope of the efficient frontier, would be this arithmetic ex ante expected Sharpe ratio.
However, the Sharpe ratio is also used in performance evaluation in different ways. I think one major reason that the geometric version is used is that the numerator will correspond with what the investor actually earned, the CAGR. This might be useful to some people, but personally I prefer to look at the CAGR by itself rather than in the Sharpe ratio. Further, the CAGR is the median of the lognormal distribution under some assumptions. I find it more intuitive to use the mean and keep ex-post consistent with ex-ante, which would bring me back to the arithmetic Sharpe.
Another reason to use the geometric version might be that it avoids the distributional issues with the lognormal distribution (since it has skewness/kurtosis). However, Opdyke (2007) provides the asymptotic distribution of the Sharpe ratio under fairly general assumptions.
-
There are many variants proposed; some useful, some not so much. As an investor, the most important thing is to compare the exact same ratio, calculated in the exact same way, for each prospect. As the prospect/fund the most important thing is to be clear about the statistic you are reporting so your investors make well informed decisions. So let's start with some definitions specific to your question.
• Sharpe ratio, sometimes called the "Modified Sharpe" ratio, is the arithmetic average of excess returns divided by the standard deviation of those returns, $r_t\in R$.
$\text{Sharpe Ratio} = \frac{E[R_t - R_{\text{free}}]}{\sigma}$
where $R_t\in[-1,\infty)$, which is a percentage, and $R_{free}$ is the risk-free return rate, typically taken as the current T-Bill rate.
It is commonly calculated over annual periods, but monthly, or daily periods are common too. It is a class of signal-to-noise ratio indicating the expected reward per unit of risk. This is the most commonly cited variant and is the measure proposed by professor Sharpe in 1994.
• The Geometric Sharpe ratio is the geometric average of compounded excess returns divided by the standard deviation of those compounded returns. This is equivalent to the arithmetic average and standard deviation of $\log(1+r_t)$, which is the more convenient calculation.
The difference is mostly in the use of the average of the excess returns for the period or of the compounded returns. Because the log returns are generally smaller, the "Geometric Sharpe ratio" generally suggests a higher number; about 12.5% higher for samples from a uniform distribution. Naturally, it is completely bogus to compare dissimilar ratios...so be sure you know which measure you are digesting.
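A short sketch of the log-return equivalence mentioned above, reusing the same hypothetical return series (numpy assumed available):

```
import numpy as np

r = np.array([0.02, -0.01, 0.03, 0.015, -0.02, 0.01])   # hypothetical per-period returns
log_r = np.log1p(r)                                      # log(1 + r_t)

geo_mean = np.prod(1.0 + r) ** (1.0 / len(r)) - 1.0      # geometric average of compounded returns
via_logs = np.expm1(log_r.mean())                        # exp(mean of log returns) - 1

print(np.isclose(geo_mean, via_logs))                    # True: the two computations agree
print(log_r.mean() / log_r.std(ddof=1))                  # Sharpe-style ratio computed on log returns
```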
-
For any real world applications, the difference between the arithmetic and geometric Sharpe ratios is likely to 'fall under the noise floor', i.e. be smaller, typically much smaller, than a standard error. This is even under the generous assumptions of stationarity and absence of omitted variables.
-
http://mathoverflow.net/questions/59390?sort=votes | ## When is a quasi-isomorphism necessarily a homotopy equivalence?
Under what circumstances is a quasi-isomorphism between two complexes necessarily a homotopy equivalence? For instance, this is true for chain complexes over a field (which are all homotopy equivalent to their homology). It's also true in an $\mathcal{A}_\infty$ setting.
Is it true for chain complexes of free Abelian groups? The case I'm particularly interested in is chain complexes of free $(\mathbb{Z}/2\mathbb{Z})[U]$ modules or free $\mathbb{Z}[U]$ modules, but I'm also interested in general statements.
-
Equivalent reformulation, considering the cone of the quasi-isomorphism: under what circumstances is an acyclic complex a split acyclic complex (i.e. spliced together from split short exact sequences)? True for complexes of projectives bounded to the right and, dually, for complexes of injectives bounded to the left. In free $\mathbb{Z}/4$-modules, the unbounded complex $\cdots \to \mathbb{Z}/4 \xrightarrow{2} \mathbb{Z}/4 \xrightarrow{2} \mathbb{Z}/4 \to \cdots$ is acyclic, but not split acyclic. – Matthias Künzer Mar 24 2011 at 6:27
## 1 Answer
If your complexes are bounded, this is always true for any ring, and more generally with free modules replaced by projectives. The statement is that $D^b(A\text{-mod})$ is equivalent to $\mathrm{Ho}(\text{Proj-}A\text{ mod})$, and you can find it in Weibel, Chapter 10.4. If your complexes are unbounded, things are more tricky. Then your statement is true over any ring of finite homological dimension. Basically you have two notions: K-projective complexes (which have the property that you want) and complexes of projectives. Bounded complexes of projectives are K-projective, but unbounded ones need not be unless you have the finiteness hypothesis (see Matthias' comment). See this post for the injective version of this story: http://mathoverflow.net/questions/41642/question-about-unbounded-derived-categories-of-quasicoherent-sheaves. In the cases you are interested in there is no problem.
-
http://mathhelpforum.com/calculus/93317-apostol-archimedes.html | # Thread:
1. ## apostol and archimedes
I am trying to understand apostol's argument confirming the integral area of a parabolic segment.
He gets to:
$\frac{b^3}{3}-\frac{b^3}{n} < A < \frac{b^3}{3}+\frac{b^3}{n}$ for every $n \ge 1$
There are three possibilities.
$A > \frac{b^3}{3}$ or $A = \frac{b^3}{3}$ or $A < \frac{b^3}{3}$
I am not really sure about this. I would get $-\frac{b^3}{n} < A - \frac{b^3}{3} < \frac{b^3}{n}$, so why does he get three possibilities? I understand any number is either less, equal or greater, but it's still a little confusing.
The next contradiction I think I sort of understand.
Suppose $A > \frac{b^3}{3}$ is true. From the second initial inequality we have:
$A-\frac{b^3}{3} < \frac{b^3}{n}$ for every $n \ge 1$
Divide both sides and multiply to get:
$n < \frac{b^3}{A-\frac{b^3}{3}}$
But this is obviously false when
(*) $n \ge \frac{b^3}{A-\frac{b^3}{3}}$
Ok, clearly that is true, but why is it a contradiction that
$A > \frac{b^3}{3}$
Is it because (*) is always a fraction and can be at most 1, therefore n cannot be any less than that because n is bigger than one?
Thanks
Regards
Craig.
2. Originally Posted by craigmain
I am trying to understand apostol's argument confirming the integral area of a parabolic segment.
He gets to:
$\frac{b^3}{3}-\frac{b^3}{n} < A < \frac{b^3}{3}+\frac{b^3}{n}$ for every $n \ge 1$
There are three possibilities.
$A > \frac{b^3}{3}$ or $A = \frac{b^3}{3}$ or $A < \frac{b^3}{3}$
Well, the three possibilities trivially exist; that they can all occur under the earlier inequality can be demonstrated if any value of $n$ allows them.
Try $n=3$, then we have from the top inequalities:
$0 < A < (2/3)b^3$
which permits all three cases $A>b^3/3$, $A=b^3/3$ and $A<b^3/3$
CB
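If it helps to see the squeeze numerically, here is a small sketch (not from Apostol, just an illustration in Python) computing the inscribed and circumscribed rectangle sums for $y=x^2$ on $[0,b]$; both straddle $b^3/3$, and their gap is exactly $b^3/n$:

```
# Lower and upper rectangle sums for the area under y = x^2 on [0, b].
b = 2.0
for n in [4, 16, 64, 256]:
    width = b / n
    lower = sum((k * width) ** 2 for k in range(n)) * width          # inscribed rectangles
    upper = sum((k * width) ** 2 for k in range(1, n + 1)) * width   # circumscribed rectangles
    print(n, lower, upper, b**3 / 3, upper - lower)                  # gap equals b^3 / n
```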
3. Originally Posted by CaptainBlack
Well, the three possibilities trivially exist; that they can all occur under the earlier inequality can be demonstrated if any value of $n$ allows them.
Try $n=3$, then we have from the top inequalities:
$0 < A < (2/3)b^3$
which permits all three cases $A>b^3/3$, $A=b^3/3$ and $A<b^3/3$
CB
I am not sure why that value is chosen though.
If I were trying to solve the inequality I would eliminate $\frac{b^3}{3}$ to get $-\frac{b^3}{n} < A - \frac{b^3}{3} < \frac{b^3}{n}$
How did he know that the area was converging to that value?
4. Originally Posted by craigmain
I am not sure why that value is chosen though.
If I were trying to solve the inequality I would eliminate $\frac{b^3}{3}$ to get $-\frac{b^3}{n} < A - \frac{b^3}{3} < \frac{b^3}{n}$
How did he know that the area was converging to that value?
Apostol or Archimedes? Archimedes: by cutting the shape from a piece of material of known area density and weighing it. Apostol: by experimental calculation.
and in either case by various heuristic arguments and calculations.
CB
5. Thanks very much.
I was worrying that I had missed something obvious. It's comforting to know they also struggled for a while before finding the final solution.
http://math.stackexchange.com/questions/68314/defining-a-phi-for-jensens-inequality-specifics-on-convexity | # Defining a phi for Jensen's inequality: specifics on convexity
According to the wikipedia page, the function $\varphi$ must be convex.
I would like to define a function $\varphi(x)$ where $$\varphi(f(x)) = f(x)*h(x)$$
This is because the integral at hand is $\int_0^1 f(x)h(x) \,dx$ and I want to use Jensen's inequality to pull the $h(x)$ outside of the integral, while leaving the $f(x)$ inside.
I can show that $\varphi(x) = x*h(x)$ is convex over $x$. However, I know that's not exactly what I'm doing in the above equation. The first equation is more like $\varphi(t) = t*h(x)$, which I don't think I can say if it's in general convex, since there are two variables (and do I need to show it's convex over $t$? or over $x$?). However, the function is convex over $x$ when I put in the value needed for the integral at hand, which is $f(x)$.
I do not want to use $\varphi(x) = f(x)*h(x)$ because that is not convex, and because I want to keep the $f(x)$ on the inside of the integral.
Can anyone help me understand this issue of convexity of $\phi$ better, and if the function is ok the way I defined it?
Thank you so much!
-
1
The one-loop phi $\varphi$ is called `\varphi` in TeX, if that's what you want. – Henning Makholm Sep 28 '11 at 20:45
@Henning: thanks, I just changed it! – Angada Sep 28 '11 at 20:48
## 1 Answer
In general, $\varphi(f(x)) = f(x) h(x)$ is impossible, because the right side depends on $h(x)$ as well as $f(x)$. The bounds you can get on $\int_0^1 f(x) h(x)\ dx$ (from Hölder, not Jensen) are $\left| \int_0^1 f(x) h(x)\ dx \right| \le \|f\|_p \|h\|_q$ where $1 \le p,q \le \infty$ and $1/p + 1/q = 1$. Here $\|f\|_p = \left( \int_0^1 |f(x)|^p \, dx\right)^{1/p}$ for $p < \infty$ while $\|f\|_\infty$ is the essential supremum of $|f(x)|$ on $[0,1]$.
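A quick numerical check of the Hölder bound for one arbitrarily chosen pair $f,h$ with $p=q=2$ (numpy and scipy assumed available):

```
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sin(3 * x) + 0.5      # arbitrary test functions on [0, 1]
h = lambda x: np.exp(-x)

lhs = abs(quad(lambda x: f(x) * h(x), 0, 1)[0])
norm_f = quad(lambda x: f(x) ** 2, 0, 1)[0] ** 0.5     # ||f||_2
norm_h = quad(lambda x: h(x) ** 2, 0, 1)[0] ** 0.5     # ||h||_2
print(lhs, norm_f * norm_h, lhs <= norm_f * norm_h)    # the bound holds
```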
-
http://crypto.stackexchange.com/questions/6219/why-do-the-elliptic-curves-recommended-by-nist-use-521-bits-rather-than-512?answertab=active | # Why do the elliptic curves recommended by NIST use 521 bits rather than 512?
Wikipedia says in reference to the elliptic curves officially recommended by NIST in FIPS 186-3:
Five prime fields for certain primes p of sizes 192, 224, 256, 384, and 521 bits. For each of the prime fields, one elliptic curve is recommended.
The first four bit sizes are immediately familiar from other cryptographic algorithms, but 521 seems to be the odd man out. Wikipedia even includes a footnote assuring readers that it is not a typo:
The sequence may seem suggestive of a typographic error. Nevertheless, the last value is 521 and not 512 bits.
Is there a cryptographically-sound reason for 521 bits instead of a more conventional power-of-two? If so, what is it? If not, how/why was 521 chosen?
-
I guess $2^{521}-1$ being prime was too nice to pass up. – CodesInChaos Feb 3 at 23:05
## 1 Answer
I very much suspect it's related to the fact that $2^{521}-1$ is prime. The previous similar prime is $2^{127}-1$ and the next such is $2^{607}-1$, so they're quite rare. Elliptic curve operations on such a field can be implemented somewhat faster than over another prime field of similar size but without this special form.
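To illustrate the kind of speed-up meant here, a sketch of the classical Mersenne-style reduction modulo $p = 2^{521}-1$ (not taken from any particular library): because $2^{521}\equiv 1 \pmod p$, a wide intermediate value can be reduced with shifts and masks instead of a general division.

```
P521 = (1 << 521) - 1

def reduce_p521(x):
    # Fold the high bits back in twice (2^521 = 1 mod p), then one conditional subtraction.
    x = (x & P521) + (x >> 521)
    x = (x & P521) + (x >> 521)
    return x - P521 if x >= P521 else x

import random
a, b = random.randrange(P521), random.randrange(P521)
assert reduce_p521(a * b) == (a * b) % P521   # agrees with ordinary modular reduction
```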
I doubt that very much serious thought went into this decision at all. If there are no breakthroughs in elliptic curve logarithm finding then 521-bit coordinate components are complete overkill. If there were a suitable breakthrough then there's no guarantee that 521 bits would be enough. Elliptic curves tend to be used when the size becomes a factor in the design of a system. It's hard to imagine what sort of constraints would have to operate to make a 521 bit curve order make sense.
-
http://physics.stackexchange.com/questions/tagged/wavefunction+waves | # Tagged Questions
0answers
56 views
### Double Slit Problem Involving Superposition of Wave Equation [closed]
Here's my question: To be clear it's part (iv) that's unclear to me. I can see that the important bit is that the exposure is over a LONG time. Hence, this must have some implication on the manner ...
1answer
36 views
### Nodes and Antinodes for standing wave
In the arrangement shown in the figure below, an object of mass m can be hung from a string (linear mass density $\mu$ = 2.00 g/m) that passes over a light (massless) pulley. The string is connected ...
1answer
95 views
### How does one find the wave velocity and the phase speed?
While I was studying beats, I tried to find a displacement function of any particle in the most generalized form. I ended up with $$y=2A\sin(\pi(t-x/v)(f_1+f_2))\cos(\pi(t-x/v)(f_1-f_2)).$$ Now, ...
1answer
114 views
### Cylindrical wave
I know that a wave dependent of the radius (cylindrical symmetry), has a good a approximations as $$u(r,t)=\frac{a}{\sqrt{r}}[f(x-vt)+f(x+vt)]$$ when $r$ is big. I would like to know how to deduce ...
3answers
292 views
### Confused over complex representation of the wave
My quantum mechanics textbook says that the following is a representation of a wave traveling in the +$x$ direction:$$\Psi(x,t)=Ae^{i\left(kx-\omega t\right)}\tag1$$ I'm having trouble visualizing ...
1answer
42 views
### How to compare differences in waves?
I have a series of waves that I would like to compare to one another. The measurements are two-dimensional with time on the x-axis and an intensity measurement on the y-axis. I'd like some way of ...
2answers
137 views
### Pulsed Spherical Wave
Can somebody help show me how a pulsed spherical wave has a wavefunction of the form U(r,t) = (1/r)a(t-r/c), where a(t) is an arbitrary function, r is the radius of the spherical wave, t is time, and ...
2answers
2k views
### Sinusoidal Wave Displacement Function
I am learning about waves (intro course) and as I was studying Wave Functions, I got a little confused. The book claims that the wave function of a sinusoidal wave moving in the +x direction is as ...
1answer
285 views
### How Light or Water Intensity is equal to square modulus of wave function of Light or Water Waves $I=|\psi|^2 \,$?
I've seen the Wave Function as a psi $\Psi$ $\psi$. And always heard that the wave function is the Complex Number as Imaginary and real number. But I've never seen it I've never seen components of ...
2answers
207 views
### matter wave and wave function
Is there any mathematical relationship between matter wave (or de Broglie wave) and wave function? Also, does each type of particle (e.g. photon, electron, positron etc.) have its own unique wave ...
1answer
319 views
### Relationship between classical electromagnetic wave frequency and quantum wave function + de broglie frequency
As it is. As I study through classical mechanics and quantum mechanics, I began to wonder whether there is a relationship between classical electromagnetic wave frequency and quantum wave function ...
1answer
763 views
### Relation between wavenumber and propagation constant
What is the exact difference between wavenumber and propagation constant in an electromagnetic wave propagating in a medium such as a transmission line, cause i am a bit confused. Does it have to do ...
2answers
789 views
### Speed of a particle in quantum mechanics: phase velocity vs. group velocity
Given that one usually defines two different velocities for a wave, these being the phase velocity and the group velocity, I was asking their meaning for the associated particle in quantum mechanics. ...
1answer
165 views
### Help me to visualize this wave equation in time, to which direction it moves?
The wave is $\bar{E} = E_{0} \sin(\frac{2\pi z}{\lambda} + \omega t) \,\bar{i} + E_{0} \cos(\frac{2 \pi z}{\lambda}+\omega t) \,\bar{j}$ Let's simplify with $z = 1$. Now the xy-axis is defined by parametrization ...
http://physics.stackexchange.com/questions/37820/are-covariant-vectors-representable-as-row-vectors-and-contravariant-as-column-v/37860 | # Are covariant vectors representable as row vectors and contravariant as column vectors
I would like to know what are the range of validity of the following statement:
Covariant vectors are representable as row vectors. Contravariant vectors are representable as column vectors.
For example, we know that the gradient of a function is representable as a row vector in ordinary space $\mathbb{R}^3$
$\nabla f = \left [ \frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z} \right ]$
and an ordinary vector is a column vector
$\mathbf{x} = \left[ x_1, x_2, x_3 \right]^T$
I think that this continues to be valid in special relativity (Minkowski metric is flat), but I'm not sure about it in general relativity.
Can you provide me some examples?
-
the gradient $\nabla f$ should be represented as a column vector as well - the dual row vector is given by the differential $\mathrm df$ – Christoph Sep 20 '12 at 7:19
@Christoph: $\nabla f=\mathrm{d}f$. – C.R. Sep 20 '12 at 9:29
@KarsusRen: $(\nabla f)^\flat=\nabla f\rfloor g=\mathrm df$; in practice, a bit of sloppiness doesn't hurt much (after all, we can always raise or lower the index as necessary by contraction with the metric tensor), but sometimes it does matter, eg when deriving the coordinate expression for the Laplace operator (or, more precisely, the Laplace-Beltrami operator) in curvilinear coordinates – Christoph Sep 20 '12 at 10:32
so is the usual gradient of a function in Cartesian coordinates a covariant vector representable as a row vector, or not? I've been lost... – linello Sep 20 '12 at 11:34
## 4 Answers
Yes, the statement holds true in general relativity as well. However, as we need to deal with tensors of higher and in particular mixed order, the rules of matrix multiplication (which is where the idea of the representation via row- and column-vectors comes from) are no longer sufficiently powerful:
Instead, the placement of the index determines if we're dealing with a contravariant (upper index) or a covariant (lower index) quantity.
Additionally, by convention an index which occurs in a product in both upper and lower position gets contracted, and equations must hold for all values of free indices.
If the given metric is non-Euclidean (which is already true in special relativity), mapping between co- and contravariant quantities is more involved than simple transposition and the actual values of the components in a given basis can change, eg $$p^\mu = (p^0,+\vec p)\\ p_\mu = (p^0,-\vec p)$$ and in general $$p_\mu = g_{\mu\nu}p^\nu$$ where $g_{\mu\nu}$ denotes the metric tensor and a sum $\nu=1\dots n$ is implied.
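A small numerical illustration of the index-lowering rule with the flat metric $\mathrm{diag}(+1,-1,-1,-1)$ (numpy assumed; the component values are made up):

```
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])        # Minkowski metric, signature (+,-,-,-), c = 1
p_upper = np.array([5.0, 1.0, 2.0, 3.0])    # contravariant components p^mu
p_lower = g @ p_upper                       # p_mu = g_{mu nu} p^nu
print(p_lower)                              # [ 5. -1. -2. -3.]: the spatial components flip sign
```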
-
Ok, so in general relativity with the index notation we can extend the usual matrices from linear algebra to (1,1) tensors, while for example (0,2) (completely covariant) tensors have no corresponding matrix in usual linear algebra right? I always have to use the metric tensor to raise/lower indices and get (1,1) tensors, is it right? – linello Sep 21 '12 at 7:06
@linello: essentially correct; that's also what happens when you represent a bilinear map $A:(u,v)\mapsto\mathbb R$ as a matrix via $u^TAv$ – Christoph Sep 21 '12 at 9:21
It is meaningful in general, though it is a matter of convention, not of truth. But it never leads to incorrect results if you make this convention.
This is thoroughly discussed in the entry ''How are matrices and tensors related?'' of Chapter B8: Quantum gravity of my theoretical physics FAQ at http://www.mat.univie.ac.at/~neum/physfaq/physics-faq.html
Note that in multivariate analysis one generally defines the gradient as the transpose of the (exterior) derivative, so ''gradient'' and ''derivative'' are slightly different notions. The transpose makes sense only given a metric, as it essentially consists in replacing raised/lowered indices by lowered/raised ones.
Thus unlike a covariant exterior derivative, a gradient is no longer covariant but contravariant (and hence a column vector).
-
So you are suggesting that is always true to treat covariant vectors as row vectors and contravariant as column vectors? – linello Sep 20 '12 at 11:35
@linello: It is a matter of convention, not of truth. But it never leads to incorrect results if you make this convention. I added to my answer a clarifying statement. – Arnold Neumaier Sep 20 '12 at 12:45
@linello: also, be aware that while this convention can work pretty well for vectors and one-forms, it doesn't help you at all when distinguishing covarant and contravariant components of higher-rank tensors. ${T^{a}}_{b}$ is a 2x2 matrix just the same as $T_{ab}$. – Jerry Schirmer Sep 20 '12 at 14:29
@JerrySchirmer: No. Only $T^a_b$ is a matrix (linear self-mapping) on the space of column vectors, hence has a simple interpretation in linear algebra. On the other hand, 2-forms and bivectors need multilinear algebra or a distinguished metric for their proper interpretation as linear mappings. – Arnold Neumaier Sep 20 '12 at 16:06
@ArnoldNeumaier: yet people write 2-forms as matrices all of the time. Take the matrix way of writing $g_{ab}$, for examplle. Yes, the algebra doens't work, but that's kind of my point--the row vector/column vector thing breaks down immediately once you go to higher rank tensors. – Jerry Schirmer Sep 21 '12 at 15:02
From my experience, I had a very hard time understanding the "physical" difference between contra- and co-variant things; I really understood them only when I read differential geometry and got involved with one-forms. Even more, some authors (like Shuch) argue that it is wrong to say that covectors are really vectors; they are different objects, they are one-forms!
-
TMS: you're mixing up co and contra - co-vectors are one-forms; one-forms are of course vectors insofar as they are elements of a vector space - they are just not tangent vectors – Christoph Sep 20 '12 at 12:04
Yes, sorry, I mistyped it, and of course they span a vector space; anyway, what Shuch meant is that one-forms are duals to the usual vectors and can't be in the same vector space with them, thus he suggests distinguishing them, that's all. – TMS Sep 20 '12 at 12:53
This is not a full answer, but rather an attempt to clear up some misconception about the gradient: In particular, in my opinion saying that the gradient is a covector doesn't make much sense.
There are two ways to interpret the concept of vectors and covectors:
The first one is to say there is only a single entity - the vector - which has covariant and contravariant components. This is inspired by classical tensor calculus: when doing calculations, we often do not care about the placement of the indices of a particular tensor - after all, we can always lower or raise them (ie go from column vectors to row vectors and vice versa) by contraction with the metric tensor.
If you take this point of view, differential and gradient are two names for the same entity. It is somewhat misleading to say that the gradient is a covector, as what we really mean is that the gradient is a vector whose covariant components are given by the partial derivatives (whereas its contravariant components are given by contraction of the covariant components with the inverse of the metric tensor).
The second point of view - which is the one I prefer - is that vectors (or, more precisely as we're doing differential geometry, tangent vectors) are distinct from covectors (aka 1-forms). However, the scalar product gives an isomorphism between tangent vectors and 1-forms. The gradient is the (pre-)image of the differential under this isomorphism and an actual vector.
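As a concrete sketch of this isomorphism, take polar coordinates on the plane, where the metric in the coordinate basis is $\mathrm{diag}(1,r^2)$; the gradient components are obtained from the differential by contracting with the inverse metric (numpy assumed, and the numerical values below are made up):

```
import numpy as np

r = 2.0
g = np.diag([1.0, r**2])           # metric in polar coordinates (r, theta), coordinate basis
df = np.array([0.7, 1.3])          # hypothetical components (df/dr, df/dtheta) at this point
grad_f = np.linalg.solve(g, df)    # raise the index: grad f = g^{-1} df
print(grad_f)                      # [0.7, 1.3/r**2]: the angular component picks up the 1/r^2 factor
```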
-
I completely disagree with your first point of view: There is no such thing as a single vector with both covariant and contravariant components, except in sloppy mode where concepts are not fully well-defined and confusion can easily arise. - I also disagree with your conclusion in that case: In coordinate-free notation, the only well-defined notion of gradient is the covector defined by $(\nabla f)^a=g^{ab}(\mathrm df)_b$. – Arnold Neumaier Sep 20 '12 at 16:11
@ArnoldNeumaier: I disagree with my first point of view as well, but that's the intuition I got from visiting lectures in theoretical physics - quantities with upper or lower indices weren't discriminated, and in particular all tensors of order 1 were called 4-vectors; as to the last part of your comment: that expression defines a vector, not a covector – Christoph Sep 20 '12 at 17:13
Of course, the gradient is a vector; that's what I had wanted to say, but it's too late to edit it. – Arnold Neumaier Sep 20 '12 at 17:39
http://crypto.stackexchange.com/questions/1987/what-is-the-flaw-in-this-model-for-homomorphic-encryption | # What is the flaw in this model for homomorphic encryption?
Imagine a field isomorphism $g : \mathbb{F}_1 \to \mathbb{F}_2$ given by some $g(x)$.
Assume a client planning to outsource his computations to a server translates every possible $x$ to $g(x)$ and sends it to the server; once he gets the result, he applies the inverse of $g$ to recover the answer.
I know the above is a straw-man solution for homomorphic encryption, but I am being naive about the possible problems with such a model. One problem I could think of is that simple frequency analysis would break the system (but this can be mitigated by coming up with padding schemes that retain the homomorphic nature), but what are the others?
The catch here is: most of the current homomorphic encryption schemes try to encrypt the data and perform operations on the encrypted data, but here, instead of encrypting it, just the input set is transformed to another field, and the mapping $g(x)$ is kept secret.
-
## 5 Answers
The last paragraph of Section 2.3 of Gentry's "easy" intro to homomorphic encryption, I believe, contains an important result which applies to your model.
Researchers [1, 8] showed that if $\mathcal{E}$ is a deterministic fully homomorphic encryption scheme (or, more broadly, one for which it is easy to tell whether two ciphertexts encrypt the same thing), then $\mathcal{E}$ can be broken in sub-exponential time, and in only polynomial time (i.e., efficiently) on a quantum computer.
Where 1 is Algorithms for black-box fields and their application to cryptography and 8 is Quantum algorithms for some hidden shift problems.
Since your isomorphism is deterministic, this would most definitely apply. Granted, just because it can definitely be broken in sub-exponential time does not mean there is no hope (RSA can be broken in sub-exponential time), but you are going to have to give a very good hardness proof to get anyone to use your system.
-
Thanks mikeazo, that was very informative! – sashank Mar 5 '12 at 4:28
The first problem with your ring homomorphism is that it is not an isomorphism, as $g(0) = g(2) = g(4) = g(6) = g(8) = 0$ and $g(1) = g(3) = g(5) = g(7) = g(9) = 5$, and thus division by $5$ does not recover the original data.
The second problem, of course, is that the homomorphism is not secret - you need a huge selection of possible homomorphisms (actually isomorphisms, if you want to be able to decrypt again), from which one is selected at random.
In the example ring $\mathbb Z_{10}$, multiplication by $1$, $3$, $7$ or $9$ are additive isomorphisms, i.e. they allow $g(x+y) = g(x) + g(y)$. But these are (other than the trivial one) not multiplicative homomorphisms: $(3·x)·(3·y) = 9·(x·y) \neq 3·(x·y)$.
So, we need more complicated rings which allow a non-trivial amount of different isomorphisms, and useful encodings of the data into these rings, and encoding of the operations we actually want to do to into multiplications and additions.
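A quick computational check of the $\mathbb Z_{10}$ example above (not part of the original answer; plain Python with $g(x)=3x \bmod 10$):

```python
n = 10
g = lambda x: (3 * x) % n            # multiplication by 3 in Z_10

add_ok = all(g((x + y) % n) == (g(x) + g(y)) % n for x in range(n) for y in range(n))
mul_ok = all(g((x * y) % n) == (g(x) * g(y)) % n for x in range(n) for y in range(n))
bijective = len({g(x) for x in range(n)}) == n

print(add_ok, mul_ok, bijective)     # True False True
```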
-
Thanks Paul, I got your answer, but I could not communicate properly earlier; I have now made the question more generic. – sashank Mar 4 '12 at 14:49
Homomorphic encryption schemes are a subset of public key schemes. A public key cryptosystem is 3 algorithms: encryption, decryption and key generation, with the property that D is the inverse of E. The first and last must be computable quickly by anyone, but decryption may only be computed quickly by those that know the key. In this case E and D are supposed to be the isomorphism and its inverse, and the public key is F2. You want to keep E and D secret, but that renders the scheme useless as a way of outsourcing computations, since the receiver of the ciphertext can only do operations on the ciphertext. For example, if the receiver's job is to take an encrypted message from the sender and modify it to the encryption of that message plus 1, there is no way for him to do that, since he doesn't know what the encryption of 1 is, or any other message for that matter.
If E and D are not secret then it becomes an issue of finding the right fields to use and the right data to add to the public key. How the field is encoded is also important. For finite fields for example there are several ways to describe the fields. For all the typical encodings of finite fields, given an explicit isomorphism it is easy to find an inverse for it, so in this case there is no way that any modification of what you have proposed can work.
-
Considering your isomorphism $g:\mathbb{F}_1\to \mathbb{F}_2$, there is the question of resources for computation. Most fully homomorphic systems are resource intensive, which affects their practicality.
In the case of $g:\mathbb{F}_1\to \mathbb{F}_2$ I believe the complexity of the computational problem depends on the $\mathbb{F}_1,\mathbb{F}_2$ you choose.
For example, if you choose $\mathbb{F}_1, \mathbb{F}_2$ to contain $2^{64}$ numbers, they would be big enough to cover any number that can be stored as an integer in modern Linux or Windows systems. One could of course imagine a mapping $g:\mathbb{F}_1\to \mathbb{F}_2$ such that each different number in $\mathbb{F}_1$ was mapped to a different number in $\mathbb{F}_2$.
Such an approach could be very strong, but:
- we would need to prove that the mapping can support the homomorphic requirements;
- we would need to compute the mapping, which may require a huge table of size $C \cdot 2^{64}$ ($C$ being some constant).
This is just an example, an illustration. What I am saying, more generally, is that you have a trade-off between complexity and practicality. That is very typical of homomorphic encryption. It is the main difficulty (IMO) in bringing H.E. to real-world problems.
-
I don't understand. Are you trying to achieve a homomorphic encryption scheme that is not public-key encryption? Because your scheme is definitely not a public-key encryption scheme. You are keeping your homomorphism "secret", which largely limits its applicability. The idea of public key encryption is to make the encryption key public.
-
Just FYI, at least one of the early fully homomorphic cryptosystems could actually be classified as secret key crypto (it was the one over the integers). You couldn't, however, run the $Recrypt$ procedure unless you released some public information, thus making it a public key system. Your point is still valid, however. – mikeazo♦ Mar 5 '12 at 1:43
You say keeping the homomorphism "secret" largely limits its applicability - can you explain what the limitations would be? – sashank Mar 5 '12 at 2:37
What I meant to say is that it cannot be a public-key system. Mike, can you please give me the reference as well. I would very much like to read the paper. – Jalaj Mar 5 '12 at 2:55
– mikeazo♦ Mar 5 '12 at 3:03
@mikeazo, this is a public key encryption scheme! I already have this paper. This was the second paper in the line and was later published in EUROCRYPT 2010. The public key is a random $x$! – Jalaj Mar 5 '12 at 13:13
http://mathoverflow.net/questions/63694?sort=votes | ## diameter of Voronoi cell of the lattice ? What about R^n ? What about small n =2,3,4 ?What about random lattice ?
Consider a lattice in $\mathbb{R}^n$ and its Voronoi cell. What is known about the diameter? About the shape? What are good references?
As far as I understand, they are not easy to compute.
Maybe in small dimensions like 2, 3, 4 there are some manageable results?
Is anything known about the diameter of a random lattice? (e.g. the components of the generating vectors are distributed as $N(0,1)$)
-
It's twice the covering radius of the lattice. – Roland Bacher May 2 2011 at 12:37
What is the covering radius? Is it half the diameter of the cell of the lattice? But the cell is not uniquely defined? Should I take the minimal one? Is it an exact value or an upper bound? Is it easy to prove? Thank you for the comment! – Alexander Chervov May 2 2011 at 16:17
The covering radius of a lattice $\lambda$ is the smallest real number $\rho$ such that the union of balls of radius $\rho$ centered at all lattice points is all of $\mathbb R^n$. It is thus by definition the maximal distance of a point of the Voronoi cell to its barycenter. Since the Voronoi cell is centrally symmetric, its diameter is twice the covering radius. – Roland Bacher May 2 2011 at 16:29
Dear Roland, thanks for your answer. Is there some way to compute or estimate the covering radius? At least in $\mathbb{R}^2$? – Alexander Chervov May 3 2011 at 18:37
## 2 Answers
Computing the covering radius of a lattice (wrt $\ell_2$ norm) is not known to be NP-hard. (Will, if you got that piece of information from SLG, I guess SLG is wrong.)
Computing the covering radius of linear codes (wrt Hamming metric) and of lattices (wrt $\ell_p$ norm but only for large $p>2$) is NP-hard. In fact these problems are even $\Pi_2$ hard to approximate for small constant approximation factors. See papers
1. http://dx.doi.org/10.1007/s00037-005-0193-y (The complexity of the covering radius problem, Guruswami, Micciancio and Regev, Computational Complexity 14(2):90-121)
2. http://dx.doi.org/10.1109/CCC.2006.23 (Hardness of the covering radius problem on lattices, Haviv and Regev, in CCC 2006)
Approximating the covering radius of lattices in $\ell_2$ norm is also conjectured (in reference 1) to be $\Pi_2$ hard to approximate within some small constant factor (and NP-hard for any constant factor), but as far as I know this is an open problem.
For any fixed dimension n, the covering radius problem can be solved in polynomial time. So, for small $n$ (certainly for $n=2,3,4$, and probably for up to $n\leq 20$ or so), the problem can be solved efficiently. There are several ways to do that, but they are all based on enumerating all the vertices of the Voronoi cell, which takes at least $n^{O(n)}$ time. See http://dx.doi.org/10.1145/1806689.1806739 and references therein. Computing the covering radius in single exponential time $2^{O(n)}$ is an open problem.
For larger dimension it may be more effective to approximate the covering radius within a factor 2 as suggested in reference 1 above, by picking a random point in space and computing its distance to the lattice. This can be done in $2^{O(n)}$ time.
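To illustrate that last factor-2 idea in the simplest possible setting, here is a sketch (not from the answer; the basis is the hexagonal lattice with minimal distance 1, and the brute-force nearest-point search is only reasonable in very small dimension):

```python
import numpy as np

rng = np.random.default_rng(0)
B = np.array([[1.0, 0.0], [0.5, np.sqrt(3) / 2]])   # rows = basis of the hexagonal lattice

def dist_to_lattice(p, box=4):
    # distance from p to the nearest lattice point, by brute force over nearby points
    pts = np.array([i * B[0] + j * B[1]
                    for i in range(-box, box + 1)
                    for j in range(-box, box + 1)])
    return np.linalg.norm(pts - p, axis=1).min()

# random points in the fundamental parallelepiped; the largest observed distance
# approaches the covering radius from below
samples = rng.random((5000, 2)) @ B
print(max(dist_to_lattice(p) for p in samples))     # close to 1/sqrt(3) ≈ 0.577
```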
-
Alex, the covering radius of a lattice is the circumradius of the Voronoi cell around the origin. For a lattice, all Voronoi cells are translates of each other. The points on the boundary of the Voronoi cell that achieve that maximum distance from the origin are called the deep holes. Let's see, the Voronoi cell around the origin is the set of points that are closer to the origin, or no farther away from the origin, than to any other lattice point. So, in $R^2,$ for the standard hexagonal circle packing the cell is a regular hexagon, which you can easily draw by hand. The covering radius is then the distance from the origin to a vertex of the hexagon. For a slightly skewed lattice and slightly irregular (but centrally symmetric) hexagon, the covering radius would be the distance to the farthest vertex from the origin. This is mostly from chapter 2 of SLG, which is Sphere Packings, Lattices and Groups by J. H. Conway and N. J. A. Sloane. Let's see, for the standard integer lattice in $R^2$ the cell would be a square.
It is known that calculating the Voronoi cell, deep holes, and in particular the covering radius is NP-hard; the number of steps required grows exponentially with the dimension $n.$ The great advance in the LLL algorithm is that it finds fairly short vectors, where the steps grow only as a polynomial in $n.$ And of course, for very small $n,$ it finds the shortest vector. Meanwhile, the algorithm finds an entire basis, so the comments about possible minimality refer to the first reported vector in the (integral) basis. A full basis is a different problem from a single short vector: in dimension 100 you would usually rather have a basis of all medium length vectors than a single very short one and 99 long ones.
Perhaps this will help: there is a useful language called Magma that has this, see here.
Meanwhile, for examples of lattices, with an emphasis on root lattices for Lie algebras, see here. Note that Gabriele Nebe has written an article finding all lattices with covering radius below a certain bound, in her normalization that is $\sqrt 2.$ This anticipates me and Pete Clark of MO,
http://mathoverflow.net/questions/39510/must-a-ring-which-admits-a-euclidean-quadratic-form-be-euclidean
so it is not clear what will be published on that...
-
http://www.reference.com/browse/dead+time | Definitions
Nearby Words
# Dead time
In particle and nuclear detector systems the dead time is the time after each event, during which the system is not able to record another event if it happens. An everyday life example of this is what happens when someone takes a photo using a flash - another picture cannot be taken immediately afterward because the flash needs a few seconds to recharge.
The total dead time of a detection system is usually due to the contributions of the intrinsic dead time of the detector (for example the drift time in a gaseous ionization detector), of the analog front end (for example the shaping time of a spectroscopy amplifier) and of the DAQ (the conversion time of the ADCs and the readout and storage times).
The intrinsic dead time of a detector is often due to its physical characteristics; for example a spark chamber is "dead" until the potential between the plates recovers above a high enough value. In other cases the detector, after a first event, is still "live" and does produce a signal for the successive event, but the signal is such that the detector readout is unable to discriminate and separate them, resulting in an event loss or in a so called "pile-up" event where, for example, a (possibly partial) sum of the deposited energies from the two events is recorded instead. In some cases this can be minimised by an appropriate design, but often only at the expense of other properties like energy resolution.
The analog electronics can also introduce dead time; in particular a shaping spectroscopy amplifier needs to integrate a fast rise, slow fall signal over the longest possible time (usually from .5 up to 10 microseconds) to attain the best possible resolution, such that the user needs to choose a compromise between event rate and resolution.
Trigger logic is another possible source of dead time; beyond the proper time of the signal processing, spurious triggers caused by noise need to be taken into account.
Finally, digitisation, readout and storage of the event, especially in detection systems with large numbers of channels like those used in modern High Energy Physics experiments, also contribute to the total dead time. To alleviate the issue, medium and large experiments use sophisticated pipelining and multi-level trigger logic to reduce the readout rates.
From the total time a detection system is running, the dead time must be subtracted to obtain the live time of the experiment.
## Paralyzable and non-paralyzable behaviour
A detector, or detection system, can be characterized by paralyzable or non-paralyzable behaviour.
In a non-paralyzable detector, an event happening during the dead time since the previous event is simply lost, so that with an increasing event rate the detector will reach a saturation rate equal to the inverse of the dead time.
In a paralyzable detector, an event happening during the dead time since the previous one will not just be missed, but will restart the dead time, so that with increasing rate the detector will reach a saturation point where it will be incapable of recording any event at all.
A semi-paralyzable detector exhibits an intermediate behaviour, in which an event arriving during the dead time does extend it, but not by the full amount, resulting in a detection rate that decreases when the event rate approaches saturation.
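A small simulation sketch (not part of the original article; the rate $f=5$ and dead time $\tau=0.1$ are arbitrary choices) contrasting the two behaviours, together with the standard ideal-model predictions $f/(1+f\tau)$ for the non-paralyzable case and $f\,e^{-f\tau}$ for the paralyzable case:

```python
import numpy as np

def measured_rate(f, tau, paralyzable, T=10000.0, seed=1):
    rng = np.random.default_rng(seed)
    events = np.sort(rng.random(rng.poisson(f * T)) * T)   # Poisson arrivals on [0, T]
    recorded, dead_until = 0, -1.0
    for t in events:
        if t >= dead_until:
            recorded += 1
            dead_until = t + tau
        elif paralyzable:
            dead_until = t + tau    # an unrecorded event still extends the dead time
    return recorded / T

f, tau = 5.0, 0.1
print(measured_rate(f, tau, False), f / (1 + f * tau))      # both close to 3.33
print(measured_rate(f, tau, True),  f * np.exp(-f * tau))   # both close to 3.03
```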
## Analysis
It will be assumed that the events are occurring randomly with an average frequency of f. That is, they constitute a Poisson process. The probability that an event will occur in an infinitesimal time interval dt is then f dt. The probability P(t) that an event will occur at time t to t+dt with no events occurring between t=0 and time t can be shown to be given by the exponential distribution (Lucke 1974, Meeks 2008):
$P(t)\,dt = f e^{-ft}\,dt,$
The expected time between events is then
$\langle t \rangle = \int_0^\infty t P(t)\,dt = 1/f$
### Non-paralyzable analysis
For the non-paralyzable case, with a dead time of $\tau$, the probability of measuring an event between $t=0$ and $t=\tau$ is zero. Otherwise the probabilities of measurement are the same as the event probabilities. The probability of measuring an event at time $t$ with no intervening measurements is then given by an exponential distribution shifted by $\tau$:
$P_m(t)\,dt = 0$ for $t \le \tau,$
$P_m(t)\,dt = \frac{f e^{-ft}\,dt}{\int_\tau^\infty f e^{-ft}\,dt} = f e^{-f(t-\tau)}\,dt$ for $t > \tau.$
The expected time between measurements is then
$\langle t_m \rangle = \int_\tau^\infty t P_m(t)\,dt = \langle t \rangle + \tau$
In other words, if events are recorded at a measured rate $N_m$ (counts per unit time) and the dead time is known, the true event rate $N$ may be estimated by
$N \approx \frac{N_m}{1 - \tau N_m}$
If the dead time is not known, a statistical analysis can yield the correct count. For example (Meeks 2008), if $t_i$ are a set of intervals between measurements, then the $t_i$ will have a shifted exponential distribution, but if a fixed value $D$ is subtracted from each interval, with negative values discarded, the distribution will be exponential as long as $D$ is greater than the dead time $\tau$. For an exponential distribution, the following relationship holds:
$\frac{\langle t^n \rangle}{\langle t \rangle^n} = n!$
where n is any integer. If the above function is estimated for many measured intervals with various values of D subtracted (and for various values of n) it should be found that for values of D above a certain threshold, the above equation will be nearly true, and the count rate derived from these modified intervals will be equal to the true count rate.
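A numerical illustration of this interval-shift trick (a sketch, not from the original text; it assumes the ideal non-paralyzable model, so each measured interval is the dead time $\tau$ plus an exponential waiting time):

```python
import numpy as np

rng = np.random.default_rng(2)
f, tau = 5.0, 0.1                                        # true rate and dead time

intervals = tau + rng.exponential(1 / f, size=200_000)   # simulated measured intervals

for D in [0.0, 0.05, 0.1, 0.15]:
    t = intervals - D
    t = t[t > 0]
    ratio = np.mean(t**2) / np.mean(t)**2   # approaches 2! = 2 once D >= tau
    print(D, round(ratio, 3), round(1 / np.mean(t), 3))   # last column -> true rate f
```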
## References
• W. R. Leo (1994). Techniques for Nuclear and Particle Physics Experiments. Springer. ISBN 3-540-57280-5.
Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors
http://cms.math.ca/10.4153/CMB-2011-127-8 | Canadian Mathematical Society
www.cms.math.ca
| | | | |
|----------|----|-----------|----|
| | | | | | |
| Site map | | | CMS store | |
location: Publications → journals → CMB
Abstract view
# Multiplicity Free Jacquet Modules
http://dx.doi.org/10.4153/CMB-2011-127-8
Canad. Math. Bull. 55(2012), 673-688
Published:2011-06-29
Printed: Dec 2012
• Avraham Aizenbud,
Massachusetts Institute of Technology, Cambridge, MA 02139, USA
• Dmitry Gourevitch,
Faculty of Mathematics and Computer Science, The Weizmann Institute of Science, Rehovot 76100, Israel
## Abstract
Let $F$ be a non-Archimedean local field or a finite field. Let $n$ be a natural number and $k$ be $1$ or $2$. Consider $G:=\operatorname{GL}_{n+k}(F)$ and let $M:=\operatorname{GL}_n(F) \times \operatorname{GL}_k(F)\lt G$ be a maximal Levi subgroup. Let $U\lt G$ be the corresponding unipotent subgroup and let $P=MU$ be the corresponding parabolic subgroup. Let $J:=J_M^G: \mathcal{M}(G) \to \mathcal{M}(M)$ be the Jacquet functor, i.e., the functor of coinvariants with respect to $U$. In this paper we prove that $J$ is a multiplicity free functor, i.e., $\dim \operatorname{Hom}_M(J(\pi),\rho)\leq 1$, for any irreducible representations $\pi$ of $G$ and $\rho$ of $M$. We adapt the classical method of Gelfand and Kazhdan, which proves the "multiplicity free" property of certain representations, to prove the "multiplicity free" property of certain functors. At the end we discuss whether other Jacquet functors are multiplicity free.
Keywords: multiplicity one, Gelfand pair, invariant distribution, finite group
MSC Classifications: 20G05 - Representation theory; 20C30 - Representations of finite symmetric groups; 20C33 - Representations of finite groups of Lie type; 46F10 - Operations with distributions; 47A67 - Representation theory
http://mathoverflow.net/questions/90021?sort=oldest | ## Mapping from a finite index subgroup onto the whole group
Dear All,
here is the question:
Does there exist a finitely generated group $G$ with a proper subgroup $H$ of finite index, and an (onto) homomorphism $\phi:G\to G$ such that $\phi(H)=G$?
My guess is "no", for the following reason (and this is basically where the question came from): in Semigroup Theory there is a notion of Rees index -- for a subsemigroup $T$ in a semigroup $S$, the Rees index is just $|S\setminus T|$. The thing is that group index and Rees index share the same features: say for almost all classical finiteness conditions $\mathcal{P}$, which make sense both for groups and semigroups, the passage of $\mathcal{P}$ to sub- or supergroups of finite index holds if and only if this passage holds for sub- or supersemigroups of finite Rees index. There are also some other cases of analogy between the indices. Now, the question from the post is "no" for Rees index in the semigroup case, so I wonder if the same is true for the groups.
Also, I beleive the answer to the question may shed some light on self-similar groups.
-
Hi Victor. I knew I must be missing something, but I can't believe it was something so obvious! I should have checked what you wrote more thoroughly. Anyway, I deleted my answer, since it contributes nothing and so it's better for the question to still have 0 answers. – Tara Brough Mar 2 2012 at 10:54
Hi Tara. It is quite strange, but for groups this question is much harder to deal with than that for semigroups. Actually we proved the semigroup statement with Nik for the purposes of seeing how hopficity is preserved by finite Rees extensions, so we found this property on the go, by accident. This property is inetersting on its own. – Victor Mar 2 2012 at 11:00
Perhaps you could give a reference for this fact in the semigroup case? – HW Mar 2 2012 at 11:03
HW, actually so far it appeared only in my thesis. I could send you (and anybody else interested) a pdf with the thesis, just write to me to [email protected] for this. – Victor Mar 2 2012 at 11:08
## 2 Answers
Here is a proof that there is no such finitely generated group. It's similar to Mal'cev's proof that finitely generated residually finite groups are non-Hopfian.
First, note that $\ker\phi$ is not contained in $H$---otherwise, $|\phi(G):\phi(H)|=|G:H|$. Let $k\in\ker\phi\smallsetminus H$. Because $\phi$ is surjective, there are elements $k_n$ for each $n\in\mathbb{N}$ such that $\phi^n(k_n)=k$.
Let $\eta:G\to\mathrm{Sym}(G/H)$ be the natural action by left translation. Then the homomorphisms $\eta\circ\phi^n$ are all distinct. Indeed,
$\eta\circ\phi^n(k_n)=\eta(k)\neq 1$
because $k\notin H$, whereas
$\eta\circ\phi^{m}(k_n)=\eta(1)=1$
for $m>n$. But there can only be finitely many distinct homomorphisms from a finitely generated group to a finite group.
-
This is very neat! Thank you. – Victor Mar 2 2012 at 13:56
And actually now I see that in the groups case, the argument is much easier :-) (see above comments). Well, "easier" only when you already know how to do it -- again, it is great to come up with such a beautiful proof! – Victor Mar 2 2012 at 14:04
Nice! Do you happen to know of an example if we remove the requirement that $H$ has finite index in $G$? I rather expect it's possible then, but I haven't thought of an example yet. – Tara Brough Mar 2 2012 at 15:46
Victor - indeed, it is much easier when you know how! The proof is almost identical to the proof of Mal'cev's theorem. – HW Mar 2 2012 at 16:26
@tara, there are f.g. groups G isomorphic to GxG. So take the composition of such an iso with a projection to a factor. – Benjamin Steinberg Mar 2 2012 at 17:26
Here is a variation on Henry's nice argument which uses Malcev's theorem. Let $N$ be the intersection of all finite index normal subgroups of $G$. Clearly $\phi(N)\subseteq N$ because a surjective endomorphism takes finite index normal subgroups to finite index normal subgroups. Thus $\phi$ induces a proper endomorphism of the finitely generated residually finite group $G/N$. By Malcev's theorem that f.g. residually finite groups are Hopfian, it follows $\phi$ induces an automorphism, which means $\ker \phi$ is contained in $N$. But then since each finite index subgroup contains a finite index normal subgroup, we have $\ker \phi\subseteq H$, which is a contradiction as Henry points out since in that case one would have $[G:H]=[\phi(G):\phi(H)]$.
-
That's quite nice, too! – Victor Mar 2 2012 at 14:34
In fact, the proof of Mal'cev's Theorem really shows that the kernel of any self-epimorphism is contained in every finite-index subgroup. – HW Mar 2 2012 at 20:56
Malcev's theorem is equivalent to the statement that the kernel of each self-epimorphism of a fg group is contained in the intersection of all finite index subgroup. – Benjamin Steinberg Mar 2 2012 at 22:25
Malcev's theorem follows from the fact that any surjective (weak) contraction of a compact metric space is an isometry. – Benjamin Steinberg Mar 3 2012 at 1:16
I prefer the second two proofs (which are essentially the same) since they clearly work for any algebraic structure. – Benjamin Steinberg Mar 3 2012 at 11:45
http://mathoverflow.net/questions/116833?sort=newest | ## unique sums in a finite direct product of sets of integers
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I am an algebraist, and I am wondering if there is a definition for the following:
Let $A_1$, $A_2$, $\ldots, A_n$ be sets of integers (or more generally, subsets of a group $G$). Say that (for the purposes of this question) $A_1\times A_2\times\cdots\times A_n$ is special provided that whenever $a_1+a_2+\cdots+a_n=b_1+b_2+\cdots+b_n$ with each $a_i,b_i\in A_i$, $1\leq i\leq n$, then $a_i=b_i$ for all $i$, $1\leq i\leq n$.
Is there some terminology for this property? Any information would be appreciated.
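For sets of integers the property is easy to test by brute force, which may help when experimenting with small examples (a sketch, not part of the original question; the function name `is_special` is just an ad hoc label):

```python
from itertools import product

def is_special(*sets):
    """True iff every value of a_1 + ... + a_n has a unique representation."""
    seen = {}
    for combo in product(*sets):
        s = sum(combo)
        if s in seen and seen[s] != combo:
            return False
        seen[s] = combo
    return True

print(is_special({0, 1}, {0, 2}, {0, 4}))   # True  (binary digits)
print(is_special({0, 1}, {0, 2}, {0, 3}))   # False (0 + 0 + 3 = 1 + 2 + 0)
```

Equivalently, the condition says that $|A_1+A_2+\cdots+A_n| = |A_1|\,|A_2|\cdots|A_n|$.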
-
In the case that $A_1,\dotsc,A_n$ are singletons, this notion as known as unique-sum sets (see e.g. cmc11.uni-jena.de/proceedings/frisco.pdf). – Martin Brandenburg Dec 20 at 8:07
@Martin: I suspect there must be something wrong in your statement because if $A_1, \ldots, A_n$ are singletons, then also $A_1 \times \ldots \times A_n$ is a singleton, and hence the condition in the question is trivially true. – boumol Dec 20 at 8:26
I think Martin is referring to $B_n[1]$ sets, which would -- I think! -- be the case $A_1=A_2=\dots= A_n$ – Yemon Choi Dec 20 at 9:37
While what Yemon Choi says is also a pertinent related notion, Martin Brandenburg meant something else, namely two-element sets (not singletons). The point is that one studies the question of when a set (or also multiset) has the property that all the sums of elements of distinct subsets are actually distinct. This corresponds directly to the case that the A_i are of the form {0,a_i} in the present problem. Yet since the present problem is invariant under shifts, this also applies in case each A_i contains (at most) two elements. – quid Dec 20 at 11:30
A more natural way to state the condition is that $|A_1+\cdots+A_h| = |A_1| \cdots |A_h|$ (the parameter $h$ is more common than $n$ here). These come up in additive combinatorics, but I don't know a name for them. If $A_1= \dots =A_h$, and you don't care about the ordering of the sum (i.e., one can reorder the $b_i$ so that $a_i=b_i$), these are called "Sidon Sets", also $B_h$-sets. – Kevin O'Bryant Dec 20 at 19:30
## 1 Answer
There is a lot of interesting work using the term "Tiling" (sometimes "Algebraic Tiling"), especially when there are two factors, although all the factors but one can be combined (I am thinking mainly of Abelian Groups). If $A$ is any infinite set of integers which increases quickly enough then there is a $B$ with $A \oplus B= \mathbb{Z}.$ For $A \oplus B=\mathbb{Z}$ with $A$ finite, there are results and open questions; cyclotomic polynomials are a useful tool. For $A \oplus B= \mathbb{N}$ (bases for the positive integers) both $A$ and $B$ are highly structured. Evidently there are connections to Musical Canons.
There have been uses of non-abelian factorizations in cryptography.
There are better and other references, but that gives a few entry points.
-
http://mathoverflow.net/questions/71736/number-of-closed-walks-on-an-n-cube | ## Number of closed walks on an $n$-cube
Is there a known formula for the number of closed walks of length (exactly) $r$ on the $n$-cube? If not, what are the best known upper and lower bounds?
Note: the walk can repeat vertices.
-
## 3 Answers
Yes (assuming a closed walk can repeat vertices). For any finite graph $G$ with adjacency matrix $A$, the total number of closed walks of length $r$ is given by
$$\text{tr } A^r = \sum_i \lambda_i^r$$
where $\lambda_i$ runs over all the eigenvalues of $A$. So it suffices to compute the eigenvalues of the adjacency matrix of the $n$-cube. But the $n$-cube is just the Cayley graph of $(\mathbb{Z}/2\mathbb{Z})^n$ with the standard generators, and the eigenvalues of a Cayley graph of any finite abelian group can be computed using the discrete Fourier transform (since the characters of the group automatically give eigenvectors of the adjacency matrix). We find that the eigenvalue $n - 2j$ occurs with multiplicity ${n \choose j}$, hence
$$\text{tr } A^r = \sum_{j=0}^n {n \choose j} (n - 2j)^r.$$
For fixed $n$ as $r \to \infty$ the dominant term is given by $n^r + (-n)^r$.
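A quick numerical check of this formula (not part of the original answer; it builds the adjacency matrix by brute force over the $2^n$ vertices, so it is only meant for small $n$):

```python
import numpy as np
from itertools import product
from math import comb

def cube_adjacency(n):
    verts = list(product([0, 1], repeat=n))
    A = np.zeros((2**n, 2**n), dtype=np.int64)
    for i, u in enumerate(verts):
        for j, v in enumerate(verts):
            if sum(a != b for a, b in zip(u, v)) == 1:   # adjacent iff Hamming distance 1
                A[i, j] = 1
    return A

n, r = 4, 6
trace = np.trace(np.linalg.matrix_power(cube_adjacency(n), r))
formula = sum(comb(n, j) * (n - 2 * j)**r for j in range(n + 1))
print(trace, formula)    # both 8704
```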
-
Thanks! that's what I needed. – Lev Reyzin Jul 31 2011 at 19:24
I'm guessing not, but is there any chance this expression has a closed form? – Lev Reyzin Aug 2 2011 at 17:12
@Lev: you mean without a summation over $n$? I doubt it. Is fixed $n$ as $r \to \infty$ not the regime you're interested in? – Qiaochu Yuan Aug 2 2011 at 17:36
In some sense, I'm more interested in fixed r as n gets large. The summation is certainly quite helpful, but of course if a closed form existed, it would even be nicer :) – Lev Reyzin Aug 2 2011 at 18:06
The number of such walks is $2^n$ (the number of vertices of the $n$-cube) times the number of walks that start (and end) at the origin. We may encode such a walk as a word in the letters $1, -1, \dots, n, -n$ where $i$ represents a positive step in the $i$th coordinate direction and $-i$ represents a negative step in the $i$th coordinate direction. The words that encode walks that start and end at the origin are encoded as shuffles of words of the form $i\ -i \ \ i \ -i \ \cdots\ i \ -i$, for $i$ from 1 to $n$. Since for each $i$ there is exactly one word of this form for each even length, the number of shuffles of these words of total length $m$ is the coefficient of $x^m/m!$ in $$\biggl(\sum_{k=0}^\infty \frac{x^{2k}}{(2k)!}\biggr)^{n} = \left(\frac{e^x + e^{-x}}{2}\right)^n.$$ Expanding by the binomial theorem, extracting the coefficient of $x^r/r!$, and multiplying by $2^n$ gives Qiaochu's formula.
Let $W(n,r)$ be the coefficient of $x^r/r!$ in $\cosh^n x$, so that $$W(n,r) = \frac{1}{2^n}\sum_{j=0}^n\binom{n}{j} (n-2j)^r.$$ Then we have the continued fraction, due originally to L. J. Rogers, $$\sum_{r=0}^\infty W(n,r) x^r = \cfrac{1}{1- \cfrac{1\cdot nx^2}{ 1- \cfrac{2(n-1)x^2}{1- \cfrac{3(n-2)x^2}{\frac{\ddots\strut} {\displaystyle 1-n\cdot 1 x^2} }}}}$$ A combinatorial proof of this formula, using paths that are essentially the same as walks on the $n$-cube, was given by I. P. Goulden and D. M. Jackson, Distributions, continued fractions, and the Ehrenfest urn model, J. Combin. Theory Ser. A 41 (1986), 21–-31.
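The generating-function statement is also easy to sanity-check with a computer algebra system (a sketch, not part of the original answer; the values $n=5$, $r=8$ are arbitrary small choices):

```python
import sympy as sp
from math import comb

x = sp.symbols('x')
n, r = 5, 8

# coefficient of x^r/r! in cosh(x)^n ...
egf_coeff = sp.factorial(r) * sp.series(sp.cosh(x)**n, x, 0, r + 1).removeO().coeff(x, r)
# ... versus the closed form W(n, r)
closed_form = sp.Rational(sum(comb(n, j) * (n - 2 * j)**r for j in range(n + 1)), 2**n)

print(egf_coeff, closed_form)    # both equal 26465
```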
Incidentally, the formula given above for $W(n,r)$ (equivalent to Qiaochu's formula) is given in Exercise 33b of Chapter 1 of the second edition of Richard Stanley's Enumerative Combinatorics, Volume 1 (not published yet, but available from his web page). Curiously, I had this page sitting on my desk for the past month (because I wanted to look at Exercise 35) but didn't notice until just now that this formula was on it.
-
thanks - this is nice. – Lev Reyzin Aug 2 2011 at 23:08
Assuming a "closed walk" can repeat vertices, we can count closed walks starting at $0$ by counting the $r$-sequences of $[n]$ so that each number appears an even number of times. The bijection is given by labeling edges by the coordinate that is toggled between the vertices. You can probably count these sequences by inclusion/exclusion and then multiply by $2^n/r$ to account for the choice of start position.
-
If we assume the path moves in each dimension 0 or 2 times, you can select ${n \choose r/2}$ dimensions and then permute them $r^1/2^r$ ways. This is a lower bound on the number of walks and is likely the right asymptotics. – Derrick Stolee Jul 31 2011 at 17:03
That should be $r!/2^r$. – Derrick Stolee Jul 31 2011 at 17:03
http://mathoverflow.net/questions/115371/question-on-local-cohomology/115380 | ## Question on local cohomology
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $M$ be a positively graded finitely generated module over a positively graded commutative ring $R$. Assume that $R_0$ is a local ring with maximal ideal $m_0$. Let $d$ be the Krull dimension of $M$. In the book "Local Cohomology: An Algebraic Introduction with Geometric Applications" by M. Brodmann and R. Y. Sharp, the authors claim that if $d=0$ then the set of associated prime ideals of $M$ is $\lbrace m_0 \oplus R_+ \rbrace$, and therefore there exists a $t\in \mathbb{N}$ such that $R_{+}^{t}M=0$.
My question is:
1. Why is Ass(M) $= \lbrace m_0 \oplus R_+ \rbrace$?
2. Why does it follow that $R_{+}^{t}M=0$ for some $t\in \mathbb{N}$?
-
## 2 Answers
First, claim 1 as stated is wrong - consider the zero module.
Second, I guess that you talk about a step in the proof of Theorem 15.3.1 in Brodmann-Sharp. If so, then you have more hypotheses than you mentioned. Beside others, $R$ is noetherian and - most important - $M$ is $0$-dimensional. (And $M$ is not "positively graded", a notion that seems not reasonable for graded modules.)
So, what you want to show is that under these hypotheses, $M$ is $R_+$-torsion. For this it suffices to show that $M$ is $\mathfrak{m}_0+R_+$-torsion. More generally, it suffices to show that a $0$-dimensional finitely generated graded module $M$ over a *local graded ring with *maximal ideal $\mathfrak{m}$ is $\mathfrak{m}$-torsion. And this is indeed the case. Namely, $0$-dimensionality means that the graded ring $R/(0:_RM)$ is $0$-dimensional. Now, $\sqrt{(0:_RM)}$ is the intersection of the graded primes containing $(0:_RM)$. But as $R/(0:_RM)$ is $0$-dimensional, $\mathfrak{m}$ is the only such prime, implying $\sqrt{(0:_RM)}=\mathfrak{m}$. Since $\mathfrak{m}$ is finitely generated (as $R$ is supposed to be noetherian), there exists $t\in\mathbb{N}$ with $\mathfrak{m}^t\subseteq(0:_RM)$, and this implies that $M$ is an $\mathfrak{m}$-torsion module as desired.
-
@Fred Rohrer: Why do we have that if $M$ is $m_{0}+R_+$ torsion then $M$ is $R_+$ torsion? Could you please make it more precise? – Axy Dec 4 at 10:13
Dear @Axy, this is because if $\mathfrak{a}$ and $\mathfrak{b}$ are ideals with $\mathfrak{a}\subseteq\mathfrak{b}$, then the $\mathfrak{b}$-torsion functor is a subfunctor of the $\mathfrak{a}$-torsion functor. – Fred Rohrer Dec 4 at 10:18
Sorry Fred Rohrer, but I still do not get it. My question is as follows: since $M$ is $m_0+R_+$ torsion, there exists $t$ such that $(m_0+R_+)^tM=0$. From this, how can we get $R_{+}^{k}M=0$ for some $k$? – Axy Dec 4 at 10:32
Dear @Axy, $R_+\subseteq\mathfrak{m}_0+R_+$, hence $R_+^t\subseteq(\mathfrak{m}_0+R_+)^t$, thus $R_+^tM\subseteq(\mathfrak{m}_0+R_+)^tM=0$, and therefore $R_+^tM=0$. – Fred Rohrer Dec 4 at 10:35
Thanks @Fred Rohrer for that. I am so stupid. Could you explain this to me: in the proof in that book, the authors proved that $M_{\mathfrak{p}_{0}}=N_{\mathfrak{p}_{0}}$ for each $\mathfrak{p}_0\in \text{Spec}(R_0)$. So why can we reduce the problem to the local case? I still do not understand the idea of the author. Sorry if you feel my question is stupid. – Axy Dec 4 at 10:44
Let $m = m_0 \oplus R_+$ which is the homogeneous maximal ideal (also it is maximal). Then the first condition implies that Min(M) = Ass(M) = {$m$}. In particular, dim $M =$ dim $R/m = 0$. This says that $M$ is an Artinian module. So there exists $s$ such that $M_s = 0$. So taking $t \ge s$ would do the job.
-
@Young su: What do you mean by Min(M) ? – Axy Dec 4 at 10:31
I meant the set of minimal primes in Supp(M). But here Supp(M) = {m} as well. – Youngsu Dec 5 at 3:47
http://matthewkahle.wordpress.com/2009/06/25/a-foolproof-cube-a-symmetric-etude-i/?like=1&source=post_flair&_wpnonce=eb8d9d939c | # The foolproof cube – a study in symmetry
Besides being a fun toy, and perhaps the most popular puzzle in human history, the Rubik’s Cube is an interesting mathematical example. It provides a nice example of a nonabelian group, and in another article I may discuss some features of this group structure. This expository article is about an experiment, where I made Rubik’s cubes with two or three colors, instead of six. In particular, I want to mention an interesting observation made by Dave Rosoff about one of the specially colored cubes: It turns out to be foolproof, in the sense that no matter how one breaks it apart and reassembles the pieces, it is still solvable by twisting the sides. It is well known that this is not a property that stock Rubik’s Cubes have.
The first observation about the physical construction of the cube is that it is made out of 21 smaller plastic pieces: a middle piece with 6 independently spinning center tiles, 8 corner cubies, and 12 edge cubies. The frame includes 6 of the stickers. Each corner cubie has 3, and each edge cubie has 2. (And this accounts for all of the $6 \times 9 = 54 = 6 + 8 \times 3 + 12 \times 2$ stickers.) These stickers are on a solid piece of plastic, so they always stay next to each other, no matter how much you scramble it by twisting the sides. (I remember getting really mad as a kid, after figuring out that someone else had messed around with the stickers on my cube. Still a pet peeve, taking stickers off a cube is like fingernails on a chalkboard for me.)
So anyway, it’s impossible to get two yellow stickers on the same edge cubie, for example, because that would make the cube impossible to solve: you couldn’t ever get the two stickers onto the same side. But this is not the only thing that can’t happen. Let’s restrict ourselves from now on to just the positions you can get to by taking the cube apart into plastic pieces, and putting them back together. If you take it apart, and put it back together randomly, will you necessarily be able to solve it by only twisting sides? (Of course you can always solve it by taking it back apart and putting it back together solved!) I knew, empirically, as a kid that you might not be able to solve it if you put it together haphazardly. Someone told me in high school that if you put it together “randomly,” your chances that it was solvable were exactly 1 in 12, and explained roughly why: it is impossible to flip an edge (gives a factor of 2), rotate a corner (a factor of 3), or to switch any two cubies (another factor of 2).
This seemed plausible at the time, but it wasn't until a graduate course in algebra that I could finally make mathematical sense of this. In fact, one of the problems on the take-home final was to prove that it's impossible to flip any edge (leaving the rest of the cube untouched!), through any series of twists. I thought about it for a day or two, and was extremely satisfied to finally figure out how to prove something that I had known in my heart to be true since I was a kid. It is a fun puzzle, and one can write out a proof that doesn't really use group theory in any essential way (although to be fair, group theory does provide a convenient language, and concise ways of thinking about things). For example, it is possible to assign a number 0 or 1 to every position, such that the number doesn't change when you twist any side (i.e. it is invariant). Then provided that this assignment gives a 0 to a solved cube and a 1 to a cube with a flipped edge, the invariance shows that the flipped-edge cube is unsolvable.
Several years ago, I got inspired to try different colorings of a Rubik’s Cube, just allowing some of the sides to have the same color. I was picky about how I wanted to do it, however. Each color class should “look the same,” up to a relabeling. A more precise way to say this is that every permutation of the colors is indistinguishable from some isometry. (Isometries of the cube are its symmetries: reflections, and rotations, and compositions of these. There are 48 in total.)
It turns out that this only gives a few possibilities. The first is the usual coloring by 6 colors. Although there are several ways to put 6 colors on the faces of a cube, for our purposes here there is really only one 6-color cube. There is also the "Zen cube," with only one color. ("Always changing, but always the same.") But there are a few intermediate possibilities that are interesting. First, with two colors, one can two-color the faces of a cube in essentially two different ways. Note that since we want each color class to look the same, each color gets three sides. The three sides of a color class either all three meet at a corner, or they don't. And these are the only two possibilities, after taking into account all of the symmetries.
So I bought some blank cubes and stickers, and made all four of the mathematically interesting possibilities. (I keep meaning to make a nice Zen cube, perhaps more interesting philosophically than mathematically, but I still haven't gotten around to it.) My friend Dave Rosoff and I played around with all of these, and found them somewhat entertaining. A first surprise was that they seem harder to solve than a regular Rubik's cube. Seems like it should be easier, in terms of various metrics: the number of indistinguishable positions being much smaller, or equivalently, the number of mechanical positions which are indistinguishable from the "true" solved position being much bigger. However in practice, what happens for many experienced cube solvers is that they get into positions that they don't recognize at the end: the same-colored stickers seem to mask your true position. Nevertheless, an experienced solver can handle all four of these cubes without too much difficulty.
When playing around with them, occasionally a cubie would pop out and fall to the floor. The thing to do is to just pop the piece back in arbitrarily, and then solve it as far as you can. It is usually an edge that pops, so the probability of having a solvable cube is 1/2. And if not, one can tell at the end, and then remedy the situation by flipping any edge back. Dave noticed something special about one of these four cubes, I think just from enough experience with solving it: no matter which piece got popped out, when he popped it back in randomly, the cube still seemed to be solvable. He thought it seemed too frequent to be a coincidence, so after a while of chatting about it, we convinced ourselves that this cube indeed had a special property: if one takes it apart into its 21 pieces, and reassembles it arbitrarily, it is always solvable by twists, a property we described as "foolproof." We talked about it for a while longer, and convinced ourselves that this is the only foolproof cube, up to symmetry and permuting colors. (Well, the Zen cube is foolproof too.)
So which of these four cubes is foolproof? This puzzle yields to a few basic insights, and does not actually require making models of each type of cube, although I would encourage anyone to do so who has extra blank cubes and stickers around, or who wants a neat Cube variant for their collection.
This entry was posted in expository and tagged combinatorics, cubes.
## 2 thoughts on “The foolproof cube – a study in symmetry”
1. Kenny Easwaran says:
My thought is that it must be the second one from the left – if you’ve put in an edge piece wrong, then you can do a move where you flip that edge and some monochromatic (green-green or blue-blue) edges, while if you’ve put a corner piece in wrong you can do a move where you rotate that corner piece along with a monochromatic corner piece. I’m not quite as sure what you do if you’ve accidentally switched two cubies. But since this is the only one where there’s a monochromatic corner, it seems that all the others are going to be susceptible to having a corner put back in wrong.
• matthewkahle says:
Hi Kenny, nice answer, of course you’re right.
One comment: it is not hard to see that one can swap any two edge or corner cubies on any of these four cubes, since each one has some pair of indistinguishable cubies to take up the slack. It seems like this is essentially the same argument as your argument that you can twist a corner on the foolproof cube.
http://mathoverflow.net/questions/100777/doubling-dimension-of-a-euclidean-space | ## Doubling dimension of a Euclidean space
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The doubling dimension of a metric space $X$ is the smallest positive integer $k$ such that every ball of $X$ can be covered by $2^k$ balls of half the radius.
It is well known that the doubling dimension $d(n)$ of the Euclidean space $\mathbb R^n$ is $O(n)$, which means that there is a constant $C$ such that for large $n$ one has $d(n)\leq Cn$. A posteriori, I can find a new constant $D$ that works for all $n$. I would like to have an explicit description of this new constant. In other words,
Question: What explicit and possibly nice and small constant $D>0$ would guarantee that $d(n)\leq Dn$, for all $n$?
Edit. As observed by Igor Rivin, $D=\log 2$ should be good for $n\geq7$, by a theorem of Verger-Gaugry. Any idea for all $n$? I have to clarify that at the moment I am not interested in the best possible constant, but in some good-looking constant, something to make a certain formula that I found aesthetically pleasant.
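For what it is worth, the standard volume-comparison argument already gives a crude explicit constant valid in every dimension (folklore, not a sharp bound, and not taken from the reference above): a maximal $(r/2)$-separated subset of a ball of radius $r$ yields a covering by balls of radius $r/2$, and the disjoint balls of radius $r/4$ around its points all lie in a ball of radius $5r/4$, so there are at most $(5/4)^n/(1/4)^n = 5^n$ of them. Hence
$$d(n) \le \lceil n\log_2 5\rceil \le 3n \qquad \text{for all } n\ge 1,$$
so, for instance, $D=3$ works universally, although it is far from optimal.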
Thank you in advance,
Valerio
-
## 1 Answer
As shown in this paper, Theorem 1.2, $D \leq \log 2.$ I remark that this paper came up in my answer to this question, and there is a bug for small $n$ ($n < 7$), but the author's interest was apparently similar to yours, so the large $n$ results should be correct. (The paper is "Covering a Ball with Smaller Equal Balls in $\mathbb{R}^n$," by Jean-Louis Verger-Gaugry.)
-
Thank you very much. What can we say for small $n$? I am really interested in all values of $n$. Maybe also to know that say $D=2$ is good enough would be OK. Indeed, for the moment I want to put this constant in a as nice as possible formula for all $n$ and then maybe discuss the fact that can be sharpened for $n\geq7$... – Valerio Capraro Jun 27 at 14:23
I am having a look at the paper and maybe I am missing something. He fixes a radius $T>\frac12$ and answers the question of how many balls of radius $\frac12$ are needed to cover a ball of radius $T$. My case is little different, because the covering balls have radius $T/2$. Well, it is possible that one can go down inductively and apply that formula, but I am little in trouble with that terrible formula. Moreover, that formula holds only for $T\leq\frac{n}{2\log(n)}$... In two words: I'm confused! – Valerio Capraro Jun 27 at 14:40
For your problem $T=1,$ and the inequality is vacuously satisfied... – Igor Rivin Jun 27 at 15:15
Yes, indeed! thanks again – Valerio Capraro Jun 27 at 15:33
I am sorry, but I cannot understand how you get $\log 2$. Indeed, let $f(n)$ be Verger-Gaugry's estimation in Theorem 1.2 but without $2^n$. It seems to me that $d$ should verify the property that $\log_2(f(n))\leq(d-1)n$, for all $n$. One can easily see that every $d>1$ is eventually good, but the function $f(n)$ diverges and then it seems to me impossible to find a constant $d\leq1$. Am I missing something? – Valerio Capraro Jun 30 at 16:48
http://mathhelpforum.com/discrete-math/33734-automata-proof-theory-computation.html | # Thread:
1. ## Automata Proof... Theory of Computation
Theorem
The class of regular languages is closed under the union operation.
(In other words, if $A_1$ and $A_2$ are regular languages, so is $A_1 \cup A_2$.)
Problem
Prove this theorem using the Proof by Construction method.
Useful Information
A regular language merely means that some finite automaton M recognizes A.
Definition of $A \cup B$:
$A \cup B = \{ x|x\in A \vee x\in B\}$
So:
We have $A_1$ which is recognized by the finite automaton $M_1$ and we have $A_2$ which is recognized by the finite automaton $M_2$.
Definition of a finite automaton (Deterministic):
$M = (Q, \Sigma , \delta , q_0, F)$ where:
$Q = \text{Finite set called the States}$
Meaning:
$Q = \{ q_0, q_1, q_2, ..., q_n\}$,
$\Sigma = \text{Finite set called the Alphabet}$
It's all possible symbols that can be used as an input, for example, binary is:
$\Sigma = \{ 0, 1\}$
$\delta = \text{The transition function}$
$\delta : Q \times \Sigma \longrightarrow Q$
$q_0 = \text{The Start State}$
$F = \text{Set of Accept States}$
Meaning that this set only contains the states from $Q$ that the computation accepts. They are usually recognizable.
Proof
Here's what I've gotten so far:
Let $M_1$ recognize $A_1$, where $M_1 = (Q_1, \Sigma , {\delta}_1, q_1, F_1)$, and
$M_2$ recognize $A_2$, where $M_2 = (Q_2, \Sigma , {\delta}_2, q_2, F_2)$.
Now, I have to construct an $M$ to recognize $A_1 \cup A_2$, where $M = (Q, \Sigma , \delta , q_0, F)$.
I know the first step, and that's to define Q:
1. $Q = \{ (r_1, r_2)|r_1\in Q_1 \ \wedge \ r_2\in Q_2\}$.
The Second Step is to define the alphabet:
2. We will assume that the alphabet is the same for both automata. If it happens that they have separate alphabets, then the alphabet for $M$ will be:
$\Sigma = {\Sigma}_1 \cup {\Sigma}_2$
From here I have no idea what to do... Anyone know how to do this?
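For what it's worth, here is a minimal executable sketch of the deterministic product construction that completes these steps (the dictionary encoding, state names, and the two toy machines below are illustrative assumptions, not part of the original problem):

```python
# Product construction: M recognizes A1 union A2 when M1, M2 are DFAs over the same alphabet.
def product_dfa(Q1, Q2, sigma, d1, d2, q1, q2, F1, F2):
    Q = {(r1, r2) for r1 in Q1 for r2 in Q2}
    # Run both machines in parallel: delta((r1, r2), a) = (delta1(r1, a), delta2(r2, a)).
    delta = {((r1, r2), a): (d1[r1, a], d2[r2, a]) for (r1, r2) in Q for a in sigma}
    q0 = (q1, q2)
    # Accept when at least one component accepts; intersection would use "and" instead.
    F = {(r1, r2) for (r1, r2) in Q if r1 in F1 or r2 in F2}
    return Q, delta, q0, F

def accepts(delta, q0, F, word):
    state = q0
    for a in word:
        state = delta[state, a]
    return state in F

# Toy example: M1 accepts words with an even number of 0s, M2 accepts words ending in 1.
sigma = {'0', '1'}
d1 = {('e', '0'): 'o', ('e', '1'): 'e', ('o', '0'): 'e', ('o', '1'): 'o'}
d2 = {('x', '0'): 'x', ('x', '1'): 'y', ('y', '0'): 'x', ('y', '1'): 'y'}
Q, delta, q0, F = product_dfa({'e', 'o'}, {'x', 'y'}, sigma, d1, d2, 'e', 'x', {'e'}, {'y'})
print(accepts(delta, q0, F, '010'))  # True: even number of 0s
print(accepts(delta, q0, F, '011'))  # True: ends in 1
print(accepts(delta, q0, F, '0'))    # False: odd number of 0s and does not end in 1
```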
2. I am in a class where we learned this, but my teacher didn't make us prove these properties formally. What we did was show that if we have two NFAs $M_1$ and $M_2$ which recognize the languages $A_1$ and $A_2$, respectively, then we can create a new machine $M_{1 \cup 2}$ which recognizes $A_1 \cup A_2$ by creating a new start state with an $\epsilon$ transition to the start states of $M_1$ and $M_2$. This way, the new machine will nondeterministically recognize $A_1$ or $A_2$. Then, since we have constructed an NFA $M_{1 \cup 2}$ which recognizes the language $A_1 \cup A_2$, it must be regular, since NFAs are equivalent to regular expressions. I will go through my book to see how you formalize the argument. Sorry if this doesn't help.
edit: OK, so your next step should be to create a new start state $q_0$ and describe your new transition function $\delta=\{ \delta_0 , \delta_1 , \delta_2 \}$ with $\delta_0 \left( {q_0 , \epsilon} \right) = \{ q_1 ,q_2 \}$ and keep $\delta_1$ and $\delta_2$ the same. Then you make $F=F_1 \cup F_2$. You should also make $Q=\{q_0\} \cup Q_1 \cup Q_2$. Hope this is correct and helps.
edit 2: I just saw you are working deterministically, so if you haven't learned about NFAs then this is kind of a useless post.
3. I know about NFAs and such, but this is a purely Deterministic Automaton and should be constructed strictly as a Deterministic Automaton.
But thanks, there's some useful information in there. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 50, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9347586035728455, "perplexity_flag": "head"} |
http://mathhelpforum.com/algebra/7848-need-help-what-probably-simple-quadratic-equation.html | # Thread:
1. ## Need help with what is probably a simple quadratic equation
Hi, I really need a hand with the following question
Find the values for k for which the quadratic equation x^2-2x+21=2k(x-7) has equal roots
I don't really know what to do, I know the discriminant must equal 0 but actually putting that usefully is completely beyond me somehow.
2. Solve for x in terms of k.
Get the equation in the form ax^2 + bx + c = 0. k will be in there somewhere in a, b, and/or c. Then, solve the quadratic. Instead of simplifying the quadratic formula to a constant, simplify it to terms with k.
Does that make sense? Shout at me if it doesn't.
3. Originally Posted by finlaymaguire
Hi, I really need a hand with the following question
Find the values for k for which the quadratic equation x^2-2x+21=2k(x-7) has equal roots
I don't really know what to do, I know the discriminant must equal 0 but actually putting that usefully is completely beyond me somehow.
rearrange the equation as:
$x^2-2(1+k)x+(21+14k)=0$
Now the quadratic formula tells us that:
$x=\frac{2(1+k) \pm \sqrt{4(1+k)^2-4(21+14k)}}{2}$
There are two distinct roots unless what is under the root (the discriminant) is zero, in which case there is only one root (or two equal roots).
So the condition for a double root is that:
$4(1+k)^2-4(21+14k)=0$
RonL
4. ahhhh, I can't believe I couldn't see that, so would that make the values of k be 2 and -10/9?
Thanks for the help by the way, I really appreciate it.
5. Originally Posted by finlaymaguire
ahhhh, I can't believe I couldn't see that, so would that make the values of k be 2 and -10/9?
Thanks for the help by the way, I really appreciate it.
No it would be:
$4(1+k)^2-4(21+14k)=0$ gives $k^2-12k-20=0$, so $k=6 \pm 2\sqrt{14}$.
RonL
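A quick symbolic cross-check of the double-root condition (an illustrative SymPy sketch, not part of the thread):

```python
# Double roots occur exactly when the discriminant (in x) of the rearranged quadratic vanishes.
from sympy import symbols, expand, discriminant, solve

x, k = symbols('x k', real=True)
quadratic = expand(x**2 - 2*x + 21 - 2*k*(x - 7))   # x^2 - 2(1+k)x + 21 + 14k
print(solve(discriminant(quadratic, x), k))          # k = 6 - 2*sqrt(14), 6 + 2*sqrt(14)
```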
http://mathoverflow.net/questions/39621?sort=oldest | ## Which groups have nice compactifications ?
Given a discrete group $G$, is there a nice criterion to decide whether there is a compact Hausdorff $G$-space $X$ that contains the discrete space $G$ as a subspace, such that the stabilizer of every point in $X$ is (virtually) cyclic?
For example, the free group admits such a compactification (as does any hyperbolic group, I think). Is it possible to decide whether $\mathbb{Z}^2$ admits such a compactification?
-
CAT(0) groups (such as $\mathbb{Z}^2$) admit a `visual boundary'. I don't have a reference to hand, but look at Metric spaces of non-positive curvature by Bridson and Haefliger for details. – HW Sep 22 2010 at 15:44
Browsing google hits for 'CAT(0) boundary', I'm reminded that Croke--Kleiner showed that the boundary is not an invariant of the group $G$, so my previous comment was, strictly speaking, slightly inaccurate. But it is well defined for the space $X$, which I think is what you wanted. – HW Sep 22 2010 at 15:48
I think that the CAT(0) boundary has too big stabilizers. In the example of $\mathbb{Z}^2$ the whole group acts trivially on the boundary. However, for any CAT(0) group $G$ the group $G\times \mathbb{Z}$ fixes a point at its boundary. So I think CAT(0) groups don't fit in this scheme. – HenrikRüping Sep 22 2010 at 15:53
Ah yes, good point. On the other hand, the construction is very natural, which suggests that it's the `right' one. Why do you want such small stabilisers? – HW Sep 22 2010 at 15:58
well the motivation was the free group acting on a tree. If I drop that condition, I could always take the one point compactification. Furthermore I am wondering, whether the CAT(0) boundary works, if the space doesn't contain $\mathbb{R}^2$ as a subspace. – HenrikRüping Sep 22 2010 at 16:14
## 2 Answers
There are many compactifications of particular groups. For your example of $\mathbb Z^2$: one construction for a compactification is to first embed it as a subgroup of $S^1 = \mathbb R / \mathbb Z$ by picking two rationally independent numbers for the images of the generators. Now compactify $\mathbb Z^2$ by making large elements connverge toward their image points in $S^1$. The stabilizer of any point is trivial.
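Concretely, with one assumed choice of irrationals (purely for illustration): send the generators to $\sqrt2$ and $\sqrt3$, so that $(m,n)\mapsto m\sqrt 2+n\sqrt 3 \pmod 1$; since $1,\sqrt2,\sqrt3$ are linearly independent over $\mathbb Q$, this map $\mathbb Z^2\to S^1$ is injective.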
The same method works to get a compactification associated with any action of $G$ on a compact space $X$. Just pick a point $x \in X$, and adjoin the closure of the orbit of $x$ at infinity in $G$. If the action has no fixed points in the closure of the orbit, then stabilizers are trivial. It's easier to avoid all but cyclic stabilizers. To make actions with small stabilizers, you can take products of examples; point stabilizers in the product become intersections of stabilizers in the factors. There are many tricks, some of them useful, for making compactifications that are Hausdorff metric spaces.
There's an ultimate (but non-constructive and of large cardinality) compactification, the Stone-Cech compactification, which has trivial point stabilizers for any group.
-
Regarding the question of `whether the CAT(0) boundary works, if the space doesn't contain $\mathbb{R}^2$ as a subspace' (see comments above), the Flat Plane Theorem asserts that any CAT(0) group that acts on a CAT(0) space without an isometrically embedded copy of $\mathbb{R}^2$ is word-hyperbolic. So in this case you can use the usual hyperbolic boundary. See Bridson & Haefliger for details.
-
http://math.stackexchange.com/questions/162712/relevance-of-differential-forms | # Relevance of Differential Forms
I recently started reading about differential forms, and I am trying to figure out their purpose. Let's say $\omega=y\,dx+x\,dy$, and we want to evaluate $\int_C \omega$ over the curve parametrized by $\phi(t)=(t^2,t^3)$ from 0 to 1. So we have $\int_C \omega=\int_C y\,dx+x\,dy$... Now I am trying to figure out what the purpose is in defining differential forms as functions that send points to $T^*_p C$... When I parametrize the integral, am I supposed to evaluate $\omega$ at $\omega_{\phi(t)}(\phi(t))\,dt$, so that $\int_C \omega=\int_0^1 \omega_{\phi(t)}(\phi(t))\,dt$, where $\omega_{\phi(t)}=t^3\,dx_\phi+t^2\,dy_\phi$, and so $\omega_{\phi(t)}(\phi(t))=\omega_{\phi(t)}(t^2\partial_x,t^3\partial_y)=2t^5$, where I used that $dx\partial_x=1, dx\partial_y=0$? I know that this is wrong though.
-
You're asking two separate questions here (what the point of differential forms is, and what you did wrong in this specific computation). You'd be better off separating them. – Qiaochu Yuan Jun 25 '12 at 4:28
The thing is, I know how to do the substitution from calculus, but I'm trying to figure it out from the point of view of functionals, if there is another way to look at it. – JLA Jun 25 '12 at 4:34
Though I edited it. – JLA Jun 25 '12 at 4:43
Your definition of differential form as "functions that send points in $T_pC$ to $T^*_pC$" is incorrect. You want something like "functions that send points in $M$ or $\mathbb{R}^n$ to elements of $T^*_pC$". So a 1-form assigns to each point a function which eats a vector and spits out a number. – Adam Saltz Jun 25 '12 at 5:12
It's $$\int_C \omega=\int_0^1 \omega_{\phi(t)}(\phi'(t))\,dt\ .$$ – Christian Blatter Jul 28 '12 at 8:21
## 1 Answer
The problem is that you need to look at $\omega_{\phi(t)}(\phi'(t))$, and not $\omega_{\phi(t)}(\phi(t))$ as you were doing.
Remember, $\omega_{\phi(t)}$ is an element of $T_{\phi(t)}^*C$, so $\omega_{\phi(t)}\colon \ T_{\phi(t)}C \to \mathbb{R}.$ This means that $\omega_{\phi(t)}$ has to take tangent vectors as inputs.
Since $\omega_{\phi(t)}(\phi'(t)) = (t^3\,dx_\phi + t^2\,dy_\phi)(2t\,\partial_x + 3t^2\partial_y) = 2t^4 + 3t^4 = 5t^4,$ we have $$\int_C \omega = \int_{[0,1]}\phi^*\omega = \int_0^1 \omega_{\phi(t)}(\phi'(t))\,dt = \int_0^1 5t^4\,dt = 1,$$ where $\phi^*\omega$ is the pullback of $\omega$.
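A quick symbolic check of this pullback computation (an illustrative SymPy sketch):

```python
# Pull back omega = y dx + x dy along phi(t) = (t^2, t^3) and integrate over [0, 1].
from sympy import symbols, diff, integrate

t = symbols('t')
x, y = t**2, t**3
integrand = y*diff(x, t) + x*diff(y, t)   # = 2t^4 + 3t^4 = 5t^4
print(integrate(integrand, (t, 0, 1)))    # -> 1
```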
When you ask why differential forms are "relevant," I take your question to mean, "Why are differential forms the objects that we integrate?" Or maybe, "Why is the definition of differential form so technical?"
I don't have time to answer that now, but I do know that there are a number of good answers on this very website (and possibly MathOverflow) already. Perhaps another user can provide the links.
-
http://mathoverflow.net/revisions/89436/list | ## Return to Question
# Bound on the (anticanonical) degree of some smooth toric Fano varieties
Does there exist a universal constant $C \geq 1$ such that if $X$ is a smooth, toric, Fano $n$-dimensional manifold admitting a Kähler-Einstein metric, then its anticanonical degree $(-K_X)^n$ is at most $C^n (n+1)^n$?
This would follow from the weakening of Ehrhart's conjecture that I proposed in http://mathoverflow.net/questions/88153/reference-request-ehrharts-conjecture-on-the-geometry-of-numbers (or at least this is what I understand from reading page 6 of the paper of Nill and Paffenholtz http://front.math.ucdavis.edu/0905.2054).
I wonder:
1. Is this known?
2. Is this interesting?
My knowledge of algebraic geometry is pitiful so please do not be offended if the question is really naive. I'm just trying to see what possible interesting consequences the weakened conjecture could have.
http://math.stackexchange.com/questions/215732/solve-logarithmic-equation?answertab=oldest | # Solve logarithmic equation
I'm getting stuck trying to solve this logarithmic equation:
$$\log( \sqrt{4-x} ) - \log( \sqrt{x+3} ) = \log(x)$$ I understand that the first and second terms can be combined & the logarithms share the same base so one-to-one properties apply and I get to: $$x = \frac{\sqrt{4-x}}{ \sqrt{x+3} }$$ Now if I square both sides to remove the radicals: $$x^2 = \frac{4-x}{x+3}$$ Then: $$x^2(x+3) = 4-x$$ $$x^3 +3x^2 + x - 4 = 0$$
Is this correct so far? How do I solve for x from here?
-
– Per Manne Oct 17 '12 at 16:52
Yes, very good! Can you check the exercise again? Wasn't there a $2$ or $\sqrt x$ somewhere? Wolfram Alpha says it has one real solution, but an ugly one: $x=0.893289\ldots$ – Berci Oct 17 '12 at 16:53
@Berci Thanks, I copied it correctly. It looks like they are looking for the ugly solution! – Justin Brown Oct 17 '12 at 17:11
## 2 Answers
Fine so far. I would just use Wolfram Alpha, which shows there is a root about $0.89329$. The exact value is a real mess. I tried the rational root theorem, which failed. If I didn't have Alpha, I would go for a numeric solution. You can see there is a solution in $(0,1)$ because the left side is $-4$ at $0$ and $+1$ at $1.$
-
Okay, I put the equation in Wolfram Alpha and got the same root, thank you. Do you have a link that explains how to get a numeric solution? – Justin Brown Oct 17 '12 at 17:14
@JustinBrown: There are many methods, discussed in numerical analysis texts. The simplest to explain is bisection. We know that $f(0) \lt 0$ and $f(1) \gt 0$, so there is a root in there. Check $f(0.5)=-2.625$ and we know the root is in $(0.5,1)$, then check $f(0.75)$ and so on. Keep going until the interval is small enough. There are other methods that converge faster. – Ross Millikan Oct 17 '12 at 17:22
I'm familiar with that technique I just didn't realize that is what you meant. Thank you! – Justin Brown Oct 17 '12 at 17:31
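A minimal bisection sketch for this particular cubic (illustrative; the bracket $(0,1)$ and tolerance are chosen just for this example):

```python
# Bisection on f(x) = x^3 + 3x^2 + x - 4, which changes sign on (0, 1).
def f(x):
    return x**3 + 3*x**2 + x - 4

lo, hi = 0.0, 1.0
while hi - lo > 1e-10:
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

print((lo + hi) / 2)   # approximately 0.893289
```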
It is correct so far.
There is clearly a root between $0$ and $1$. Either use numerical methods to find it is about $0.893289$ or (not recommended) solve the cubic to get $$\sqrt[3]{\frac{3}{2} - \sqrt{\frac{211}{108}}} + \sqrt[3]{\frac{3}{2} + \sqrt{\frac{211}{108}}} -1$$
-
http://unapologetic.wordpress.com/2010/11/22/the-character-table-as-change-of-basis/?like=1&source=post_flair&_wpnonce=d91e0e6d9a | # The Unapologetic Mathematician
## The Character Table as Change of Basis
Now that we’ve seen that the character table is square, we know that irreducible characters form an orthonormal basis of the space of class functions. And we also know another orthonormal basis of this space, indexed by the conjugacy classes $K\subseteq G$:
$\displaystyle\left\{\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}f_K\right\}$
A line in the character table corresponds to an irreducible character $\chi^{(i)}$, and its entries $\chi_K^{(i)}$ tell us how to write it in terms of the basis $\{f_K\}$:
$\displaystyle\chi^{(i)}=\sum\limits_K\chi_K^{(i)}f_K$
That is, it’s a change of basis matrix from one to the other. In fact, we can modify it slightly to exploit the orthonormality as well.
When dealing with lines in the character table, we found that we can write our inner product as
$\displaystyle\langle\chi,\psi\rangle=\sum\limits_K\frac{\lvert K\rvert}{\lvert G\rvert}\overline{\chi_K}\psi_K$
So let’s modify the table to replace the entry $\chi_K^{(i)}$ with $\sqrt{\lvert K\rvert/\lvert G\rvert}\chi_K^{(i)}$. Then we have
$\displaystyle\sum\limits_K\overline{\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_K^{(i)}\right)}\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_K^{(j)}\right)=\langle\chi^{(i)},\chi^{(j)}\rangle=\delta_{i,j}$
where we make use of our orthonormality relations. That is, if we use the regular dot product on the rows of the modified character table (considered as tuples of complex numbers) we find that they’re orthonormal. But this means that the modified table is a unitary matrix, and thus its columns are orthonormal as well. We conclude that
$\displaystyle\sum\limits_i\overline{\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_K^{(i)}\right)}\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_L^{(i)}\right)=\delta_{K,L}$
where now the sum is over a set indexing the irreducible characters. We rewrite these relations as
$\displaystyle\sum\limits_i\overline{\chi_K^{(i)}}\chi_L^{(i)}=\frac{\lvert G\rvert}{\lvert K\rvert}\delta_{K,L}$
We can use these relations to help fill out character tables. For instance, let’s consider the character table of $S_3$, starting from the first two rows:
$\displaystyle\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\mathrm{sgn}&1&-1&1\\\chi^{(3)}&a&b&c\end{array}$
where we know that the third row must exist for the character table to be square. Now our new orthogonality relations tell us on the first column that
$\displaystyle1^2+1^2+a^2=6$
Since $a=\chi^{(3)}(e)$, it is a dimension, and must be positive. That is, $a=2$. On the second column we see that
$\displaystyle1^2+1^2+b^2=\frac{6}{3}=2$
and so we must have $b=0$. Finally on the third column we see that
$\displaystyle1^2+1^2+c^2=\frac{6}{2}=3$
so $c=\pm1$.
To tell the difference, we can use the new orthogonality relations on the first and third or second and third columns, or the old ones on the first and third or second and third rows. Any of them will tell us that $c=-1$, and we’ve completed the character table without worrying about constructing any representations at all.
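A quick numerical check of both sets of orthogonality relations for the completed $S_3$ table (an illustrative sketch):

```python
# Rows: trivial, sign, and the 2-dimensional irreducible; columns: classes of e, (1 2), (1 2 3).
import numpy as np

G = 6
class_sizes = np.array([1, 3, 2])
chi = np.array([[1,  1,  1],
                [1, -1,  1],
                [2,  0, -1]])

# Row relations: sum_K (|K|/|G|) chi_i(K) chi_j(K) = delta_{ij}  (identity matrix)
print(chi @ np.diag(class_sizes / G) @ chi.T)

# Column relations: sum_i chi_i(K) chi_i(L) = (|G|/|K|) delta_{KL}  (diagonal 6, 2, 3)
print(chi.T @ chi)
```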
We should take note here that the conjugacy classes index one orthonormal basis of the space of class functions, and the irreducible representations index another. Since all bases of any given vector space have the same cardinality, the set of conjugacy classes and the set of irreducible representations have the same number of elements. However, there is no reason to believe that there is any particular correspondence between the elements of the two sets. And in general there isn’t any, but we will see that in the case of symmetric groups there is a way of making just such a correspondence.
http://mathhelpforum.com/differential-geometry/132928-shortest-distance-between-2-lines-3d.html | # Thread:
1. ## Shortest distance between 2 lines in 3D
Hi all,
I've been investigating the shortest distance between two lines in terms of vectors. So far I have that you need to find the common perpendicular, from which I can get two equations with three unknowns. I believe it's possible to find the ratio of the unknowns from this, but I'm not sure how to go about it. Any help appreciated.
Cheers
2. Originally Posted by DangerousDave
I've been investigating the shortest distance between two lines in terms of vectors. So far I have that you need to find the common perpendicular, from which I can get two equations with three unknowns. I believe it's possible to find the ratio of the unknowns from this, but I'm not sure how to go about it. Any help appreciated.
Suppose that we have two skew lines, $l_1 (t) = P + tD\quad \& \quad l_2 (t) = Q + tE$.
The distance between them is $\frac{{\left| {\overrightarrow {PQ} \cdot \left( {D \times E} \right)} \right|}}{{\left\| {D \times E} \right\|}}$
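A small numerical illustration of that formula (the two lines below are made-up example data):

```python
# Distance between the skew lines l1(t) = P + t*D and l2(t) = Q + t*E.
import numpy as np

P, D = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])   # the x-axis
Q, E = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])   # a line at height z = 1

n = np.cross(D, E)                                  # common perpendicular direction
print(abs(np.dot(Q - P, n)) / np.linalg.norm(n))    # 1.0
```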
3. To put this into words, would this be the scalar product of (the line between a known point on each line) with (the unit vector for the scalar product of the direction vectors for each line)? I'm not sure of the significance of the 'x' signs.
Is there no merit in the other approach I mentioned?
4. Originally Posted by DangerousDave
I'm not sure of the significance of the 'x' signs.
Are you saying that you do not understand the idea of the cross product of two vectors?
If that is true what I posted is of absolutely no use to you.
5. I'm afraid so. I've finally got word back from the prof of my course and he says that we don't need to know it and haven't been taught it, even though it's on the official syllabus. I had wondered since, any way you cut it, it seems to involve mathematical techniques we haven't been taught yet.
Perhaps I'll investigate cross products anyway.
Thanks for the help.
http://mathhelpforum.com/differential-geometry/154042-exponential-fourier-series-trouble-solving.html | # Thread:
1. ## Exponential Fourier Series - Trouble solving
Hello,
I have troubling simplifying the following problem. It is the second integral solving for c(k) of f involving [i] that is confusing.
I also was wondering if there is simpler way to solve for the exponetial fourier series for the given problem, than the way I done it. I would greatly appriciate guidance.
Thank you
2. Part of your problem is that for the $c_{k}$ integral, you can't use half the interval and multiply by two like you could for the $c_{0}$ case. You can only do that, in general, when your complete integrand is even. But the presence of that particular exponential function in the integrand precludes that. So you have to keep the full $[-\pi,\pi]$ interval for the limits in the $c_{k}$ integration, I'm afraid.
I don't know of any way to integrate $x^{2}e^{-ikx}$ other than by parts twice. I'm not sure I would recommend breaking the integrals up into the trig functions, because then you're going to have to do integration by parts twice on two integrals. There's no need, when working with the exponential Fourier series, to look at sin and cos individually. It's an easier integration with just the exponential in there.
So I would carry these changes through, and then we'll see what happens at the end. Sound good?
3. Originally Posted by Ackbeet
Part of your problem is that for the $c_{k}$ integral, you can't use half the interval and multiply by two like you could for the $c_{0}$ case. You can only do that, in general, when your complete integrand is even. But the presence of that particular exponential function in the integrand precludes that. So you have to keep the full $[-\pi,\pi]$ interval for the limits in the $c_{k}$ integration, I'm afraid.
I don't know of any way to integrate $x^{2}e^{-ikx}$ other than by parts twice. I'm not sure I would recommend breaking the integrals up into the trig functions, because then you're going to have to do integration by parts twice on two integrals. There's no need, when working with the exponential Fourier series, to look at sin and cos individually. It's an easier integration with just the exponential in there.
So I would carry these changes through, and then we'll see what happens at the end. Sound good?
I got the following expression after integrating over $[-\pi,\pi]$:
$c_{k} = \frac{1}{2\pi} \left(-\frac{\pi^{2}}{ik}e^{-ik\pi}+\frac{2\pi}{k^{2}}e^{-ik\pi}+\frac{2}{ik^{3}}e^{-ik\pi}+\frac{\pi^{2}}{ik}e^{ik\pi}+\frac{2\pi}{k^{2}}e^{ik\pi}-\frac{2}{ik^{3}}e^{ik\pi}\right)$
and I know that
$e^{ik\pi}=(-1)^{k}$
$\frac{1}{2\pi}\left(\frac{\pi^{2}}{ik}e^{ik\pi}-\frac{\pi^{2}}{ik}e^{-ik\pi}\right)=0$
$\frac{1}{2\pi}\left(\frac{2}{ik^{3}}e^{-ik\pi}-\frac{2}{ik^{3}}e^{ik\pi}\right)=0$
$\frac{1}{2\pi}\left(\frac{2\pi}{k^{2}}e^{ik\pi}+\frac{2\pi}{k^{2}}e^{-ik\pi}\right)=\frac{1}{k^{2}}e^{ik\pi}+\frac{1}{k^{2}}e^{-ik\pi}$
$c_{k}$ is supposed to be $\frac{2(-1^{k})}{k^{2}}$.
How do I get:
$\frac{1}{k^{2}}e^{ik\pi}+\frac{1}{k^{2}}e^{-ik\pi}$
to equal $\frac{2(-1^{k})}{k^{2}}$?
I would appreciate a response. Thank you.
4. Your k exponent should be outside the parentheses (otherwise it only applies to the 1). Otherwise, it looks fine.
5. Originally Posted by Ackbeet
Your k exponent should be outside the parentheses (otherwise it only applies to the 1). Otherwise, it looks fine.
Where does the 2 in $\frac{2(-1)^{k}}{k^{2}}$ come from? How do I get that? I just got it from the answer in the book but don't know where it comes from.
$\frac{1}{k^{2}}e^{ik\pi}+\frac{1}{k^{2}}e^{-ik\pi}=\frac{1}{k^{2}}(e^{ik\pi}+e^{-ik\pi})=\frac{1}{k^{2}}(\cos(k\pi)+i\sin(k\pi)+\cos(k\pi)-i\sin(k\pi))$
$=\frac{2}{k^{2}}\,\cos(k\pi).$
But now $\cos(k\pi)=(-1)^{k}$ for all $k\in\mathbb{Z}.$
The answer follows.
Does that help? It's basically all from the Euler formula, and the even-ness of cosine and the odd-ness of sine:
$e^{i\theta}=\cos(\theta)+i\sin(\theta).$
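If it helps, a quick numerical cross-check of the coefficients (an illustrative SciPy sketch, not part of the thread):

```python
# Check numerically that c_k = 2*(-1)^k / k^2 for f(x) = x^2 on (-pi, pi].
import numpy as np
from scipy.integrate import quad

def c(k):
    re, _ = quad(lambda x: x**2 * np.cos(k*x), -np.pi, np.pi)
    im, _ = quad(lambda x: -x**2 * np.sin(k*x), -np.pi, np.pi)   # odd integrand: this is 0
    return (re + 1j*im) / (2*np.pi)

for k in range(1, 5):
    print(k, c(k).real, 2*(-1)**k / k**2)
```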
7. I have a question regarding solving for the exponential Fourier series for a given function. The general formula is the following (the way I understood it):
$\displaystyle f(x)=\displaystyle \sum_{k=-\infty}^{\infty}c_{k}e^{ik\Omega x}$
where $\Omega T={2\pi}$
Now, to solve for the exponential Fourier series for the given problem above:
$\displaystyle f(x) = {x^{2}}$ $,\,\,-\pi < x \leq \pi$
Why do we solve for $c_{0}$? It's not in the general formula.
--------------------------------------------------------------------------
Whereas I have a different problem where I have to solve for the exponential Fourier series of a ${2\pi}$-periodic function for the following given function:
$\displaystyle f(x)=e^{-x},\,\,0 < x < {2\pi}$
In this case we just solve for $c_{k}$ only and not for $c_{0}$. Why is that?
I would appreciate an explanation. Thank you.
8. Why do we solve for $c_{0}$? It's not in the general formula.
I don't understand. If you have the sum
$\displaystyle{f(x)=\sum_{k=-\infty}^{\infty}c_{k}e^{ik\Omega x}},$
then the $k=0$ term has the expression $c_{0}$ in it. Does that qualify as "being in the general formula"?
In the problem you have there, I would think you would solve for $c_{0}$. Both functions you're dealing with have a nonzero average value. As I recall, the expression for the $c_{0}$ term looks a lot like the average value of the function over its period. *looking it up* In fact, it is the average value of the function over a period. Now if you have a function like $f(x)=x$ over the interval $(-\pi,\pi),$ and you can tell at a glance that its average value is zero, well then, you have no need to explicitly compute it. But such examples should not be considered to be the usual.
Make sense?
9. Alright, it makes sense. Thanks.
$\displaystyle f(x)=e^{-x},\,\,0 < x < {2\pi}$
Do I need to compute $c_{0}$? If not, why?
10. Well, do you think the average value of the function over that interval is nonzero?
11. Originally Posted by Ackbeet
Well, do you think the average value of the function over that interval is nonzero?
I guess I am confused after all and don't understand.
Do you mean that:
$\displaystyle{f(x)=\sum_{k=-\infty}^{\infty}c_{k}e^{ik\Omega x}}\iff {f(x)=c_{0} + \sum_ {k\neq 0}c_{k}e^{ik\Omega x}}$
12. Your double implication is correct, if the understood range of $k$ values is the integers. All you're doing there is splitting off one term from the sum.
13. Originally Posted by Ackbeet
Now if you have a function like $f(x)=x$ over the interval $(-\pi,\pi),$ and you can tell at a glance that its average value is zero, well then, you have no need to explicitly compute it. But such examples should not be considered to be the usual.
If I compute $c_{0}$ for this function:
$f(x)=x$ over the interval $(-\pi,\pi)$
I would get the following:
$\displaystyle c_{0} =\frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) dx =\frac{2}{2\pi}\int_{0}^{\pi}x dx =\frac{\pi}{2}$ the value is ${\neq 0}$
14. No, the formula you used is valid for an even integrand. You have an odd integrand.
15. Originally Posted by Ackbeet
No, the formula you used is valid for an even integrand. You have an odd integrand.
Which formula do I use for an odd integrand? By the way thanks for your response.
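(A short note on that last question, only as a sketch: for an odd integrand, or indeed any integrand, one can simply keep the whole interval and skip the halving trick; here $c_{0}=\frac{1}{2\pi}\int_{-\pi}^{\pi}x\,dx=0$, which matches the earlier remark that the average value of $f(x)=x$ over a period is zero.)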
http://mathoverflow.net/questions/120235?sort=newest | ## Is the derived category of abelian groups a subcategory of the stable homotopy category?
An extension of the Dold-Kan equivalence gives an adjunction between the stable homotopy category and the (unbounded) derived category of abelian groups $SH \rightleftarrows D(Ab)$.
Question 1: Is the right adjoint $D(Ab) \to SH$ faithful?
Question 2: If not, is there a class of objects on which it is faithful (for example compact objects).
-
## 2 Answers
I've found the following somewhat intricate way of answering Q1 in the affirmative. Any complex in $D(Ab)$ is quasi-isomorphic to a graded abelian group. Hence, it is enough to consider complexes concentrated in a single degree. Given an abelian group $A$ and $n\in\mathbb Z$, let $(A,n)$ be the abelian group $A$ concentrated in degree $n$. For simplicity, I will use the same notation for the Eilenberg-MacLane spectrum $\Sigma^nHA$. In the derived category we have $$D(Ab)((A,n),(B,n))=\operatorname{Hom}(A,B),$$ $$D(Ab)((A,n),(B,n+1))=\operatorname{Ext}(A,B),$$ $$D(Ab)((A,n),(B,m))=0\text{ otherwise}.$$ In the stable homotopy category we have the stable Eilenberg-MacLane groups $$SH((A,n),(B,m))=H^{m+k}(A,n+k;B),\quad k\gg 0.$$ It is well known, since E-ML's "On the groups..." (Annals), that $$SH((A,n),(B,n))=\operatorname{Hom}(A,B),$$ $$SH((A,n),(B,n+1))=\operatorname{Ext}(A,B),$$ and that the functor $D(Ab)\rightarrow SH$ is the identity on the previous morphism sets. Hence we are done. The groups $SH((A,n),(B,m))$ are however non-trivial for $m>n+1$, in general.
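As one standard instance of that last non-triviality (only an illustration): for $A=B=\mathbb Z/2$ one has $SH((\mathbb Z/2,n),(\mathbb Z/2,n+2))\cong\mathbb Z/2$, generated by the stable operation $\mathrm{Sq}^2$.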
-
Thanks for your answer Fernando. I was actually initially trying to find a counter-example using this method. Something I don't understand is "Any complex is quasi-isomorphic to a graded abelian group". What do you mean by this? I don't see how to reduce to complexes concentrated in a single degree. – name Jan 30 at 15:11
A graded abelian group need not be concentrated in a single degree. It's just a complex with trivial differentials. This holds for abelian groups and more generally for complexes of modules over a hereditary ring. The proof is straightforward, see 1.6 in homepages.math.uic.edu/~bshipley/… – Fernando Muro Jan 30 at 15:40
I think the answer to Question 1 is positive. Think of $SH$ as the homotopy category of modules over the sphere spectrum $S$. The category $D(Ab)$ is equivalent to the homotopy category of modules over the Eilenberg-Mac Lane spectrum $HZ$. Your adjunction is equivalent to the adjunction between $S$-modules and $HZ$ modules, where the right adjoint is pullback along the natural map of ring spectra $S\to HZ$, and the left adjoint is the functor $M\mapsto HZ\wedge M$.
Your question is equivalent to this: given $HZ$-modules $M, N$, is the map
$$(*)\qquad [M, N]_{HZ}\to [M, N]_S$$
injective? By adjunction
$[M,N]_S = [HZ \wedge M, N]_{HZ}$
and the map * is induced by the map $HZ\wedge M \to M$. I claim that the last map is a split surjection in the homotopy category of $HZ$-modules.
Edited to account for Fernando's comment.
Since every $HZ$-module splits (non-naturally) as a wedge sum of Eilenberg-MacLane modules, it is enough to check this claim when $M=HA$, in which case it is an easy calculation. The homotopy groups of $HZ\wedge HA$ are isomorphic to the homology groups of $HA$. By the Hurewicz theorem, this is $A$ in dimension zero. Using the general splitting result again, it follows that $HA$ is a summand of $HZ\wedge HA$ in the category of $HZ$-modules.
Therefore * is injective.
-
Why is it enough to check the case $M=HZ$? Somehow, you're deducing from this case that any $HZ$-module is a retract of an induced $HZ$-module, along the ring spectrum morphism $S\rightarrow HZ$. This may be true for this ring spectrum map (I don't know), but it is false in general (consider simply ring homomorphisms, e.g. from a field $k$ to a $k$-algebra with non-projective modules). Hence, if true, there should be a good reason for $S\rightarrow HZ$ to have this property. – Fernando Muro Jan 29 at 22:55
+1 You are right. I was originally going to say that it is enough to check it for $M$ an Eilenberg-Maclane spectrum, using the same splitting argument as you did (honest). Then somehow convinced myself in a hurry that I could get away with a categorical argument. But it does not work. The map $HZ \wedge M \to M$ splits, but not naturally. I will edit. – Gregory Arone Jan 29 at 23:28
Essentially, the problem with making a general categorical argument for this is that the unit map $M\to HZ\wedge M$ is not an $HZ$-module map. Or, to put it another way, the map $HZ\wedge HZ\to HZ$ splits as a map of $HZ$-modules but not as a map of $HZ$-bimodules. – Eric Wofsey Jan 29 at 23:42
Indeed, if a general categorical argument worked then you could replace $S \to HZ$ by $HZ \to HZ/p^2$ and "prove" that the derived category of $Z/p^2$ embeds into the derived category of $HZ$. – Tyler Lawson Jan 30 at 5:07
http://mathhelpforum.com/differential-geometry/155518-prove-set-interval.html | # Thread:
1. ## prove the set is an interval
Prove that the set [1,3)={x∈R:1≤x<3} is an interval.
Also, prove that for any two intervals I , J , if I intersect J is not equal to ∅ then I ∪ J is an interval.
2. Originally Posted by tn11631
Prove that the set [1,3)={x∈R:1≤x<3} is an interval.
Also, prove that for any two intervals I , J , if I intersect J is not equal to ∅ then I ∪ J is an interval.
How does your text material define interval?
3. Originally Posted by Plato
How does your text material define interval?
This is why I was getting confused because it's a written-up question, not from a text, and we were just given the formal definition of an interval from back in calc.
A subset I of R is an interval provided that for all x, y, z ∈ R, if x < y < z and x ∈ I and z ∈ I then y ∈ I. (Given Def of Interval)
However, when I was looking through previous books they just jumped into open and closed intervals and said nothing really about whether a set is an interval. For the first part, the set $[1,3)=\{x\in\mathbb{R}:1\le x<3\}$: I would say it's an interval, but I don't know how to prove that it is. And then for the second part, that for any two intervals $I$, $J$, if $I\cap J\neq\emptyset$ then $I\cup J$ is an interval, I'm not sure where to even start.
4. Originally Posted by tn11631
A subset I of R is an interval provided that for all x, y, z ∈ R, if x < y < z and x ∈ I and z ∈ I then y ∈ I. (Given Def of Interval)
Let’s use that definition.
If $\{a,b\}\subset I\cup J~\&~a<c<b$ we must show that $c\in I\cup J$.
If we have either $\{a,b\}\subset I$ or $\{a,b\}\subset J$ then we are done because each of $I~\&~J$ is an interval. WHY?
So suppose that $a\in I\setminus J$ and $b\in J\setminus I$. We are given that $\left( {\exists p \in I \cap J} \right)$.
Three cases: i) if $p=c$ we are done. Why?
ii) if $p<c<b$ because $J$ is an interval, so $c\in J\subset I\cup J$
iii) likewise if $a<c<p$ because $I$ is an interval, so $c\in I\subset I\cup J$.
We are done.
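For the first part, a short sketch along the same definition: if $x<y<z$ with $x,z\in[1,3)$, then $1\le x<y$ and $y<z<3$, so $y\in[1,3)$; hence $[1,3)$ is an interval.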
http://physics.stackexchange.com/questions/18080/some-questions-about-the-logics-of-the-principles-of-independence-of-motion-and | Some questions about the logics of the principles of independence of motion and composition of motion
In high-school level textbooks* one encounters often the principles of independence of motion and that of composition (or superpositions) of motions. In this context this is used as "independence of velocities" and superposition of velocities (not of forces).
This is often illustrated by the example of the motion of a projectile, where the vertical and horizontal motions are said to be independent and that the velocities add like vectors.
Now, if $x\colon \mathbb{R} \to \mathbb{R}^3$ describes the motion of the considered object, it is clear that one can decompose the velocity $v = \dot x$ arbitrarily by $v = v_1 + v_2$, where $v_1$ is arbitrary and $v_2 := v - v_1$.
This leads me to my first question: Am I correct that this is purely trivial math and contains no physics at all? If so, it would not deserve to be called "principle of composition of motions" or something like that and said to be fundamental.
However, it seems that one could interpret the decomposition above such that $v_1$ is the velocity of the object with reference to a frame of reference moving with $v_2$. If so, how can one see that this goes wrong in the relativistic case?
Now suppose you have two forces $F_1$ and $F_2$ which you can switch on and off, suppose that $F_i$ alone would result in a motion $x_i$ ($i=1,2$). Newtonian Mechanics tells us the principle of superposition of forces, i.e. if you turn both forces $F_1$ and $F_2$ on, the resulting motion $x$ is the solution of the differential equation $\ddot x = \frac1m F(x, \dot x, t)$ (where m is the Mass of our object) with $F = F_1 + F_2$.
One might interpret the principle of composition of motions such that always $\dot x = \dot x_1 + \dot x_2$ holds true. This is clearly the case if $F_i$ depends linearly on $(x,\dot x)$. However I think that it doesn't need to be true for nonlinear forces. This leads me to my third question: Is there any simple mechanical experiment where such nonlinear forces occur, which shows that in this case the "principle of composition of motions" doesn't hold?
*I have found this in some (older) german textbooks, for example: Kuhn Physik IIA Mechanik, p. 107, Grimsehl Physik II p.16,17
compare also
http://sirius.ucsc.edu/demoweb/cgi-bin/?mechan-no_rot-2nd_law and Arons
-
To your third question: Consider the motion of a projectile with air resistance $\sim v^2$. – student Mar 10 '12 at 16:08
1 Answer
There are two different ideas in the "superposition of motion", one which is kinematics, and the other dynamics. The kinematic law is a trivial decomposition of vectors--- the velocities form a vector space, and you can add them. This is also true in relativity, if an object A is moving with velocity v, and another object B is moving u faster than v, in that it covers u more distance per unit time (where distance and time are in the stationary frame), then v+u is the velocity of object B.
But in relativity, the difference velocity u is not the velocity of object B as measured in the frame of object A, because the A frame has different time and space axes. But velocities still form a vector space, only the symmetry of changing frames to moving with velocity v does not correspond to the trivial addition of vector velocities as it does in Galilean kinematics.
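For reference, the standard one-dimensional composition law (a textbook formula, not specific to this answer): if A moves with speed $v$ in the lab frame and B moves with speed $u'$ as measured in A's frame, then B's lab speed is $$w=\frac{v+u'}{1+vu'/c^2},$$ so changing frames does not act by plain addition on the measured velocities.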
The second question, the one about forces, is dynamical. You are asking why separate forces produce separate motions, and whether there are any cases where this fails. The answer is no, because there is a conservation law working here--- the conservation of momentum. When you apply a force F, you are adding F units of momentum to an object per unit time. When you apply a second force F', you are adding F' units of momentum to the object. The two forces add because momentum is a vector conserved quantity--- its separate components are separately conserved, and the components of the forces tell you how much of each momentum component is coming in.
Conserved quantities are those that add up to a constant no matter what happens, and it's always pure addition, even when the dynamics are nonlinear. So there is no case where two external forces lead to anything other than two additive momentum changes, and when the momentum is entirely contained in moving particles, this means that two forces on a moving particle produce additive changes in velocity, additive accelerations.
The justification of the Newtonian picture from conservation of momentum (and angular momentum) is more fundamental.
-
http://mathoverflow.net/questions/tagged/vanishing-cycles | ## Tagged Questions
### How to glue perverse sheaves of abelian groups?
Let $X$ be a complex algebraic variety and consider the category $P(X)$ of perverse sheaves of complex vector spaces. Let $f:X\rightarrow \mathbb C$ be a regular function, $Z$ its …
### Vanishing cycles in a nutshell?
To quote one source among many, "the general reference for vanishing cycles is [SGA 7] XIII and XV". Is there a more direct way to learn the main principles of this theory (i.e. wi …
### Computation of vanishing cycles
Here's the problem I'm looking at: $F$ is a perverse sheaf (or a regular holonomic D-module, or even a mixed hodge module) on $\mathbb{C}^2$ stratified by $z_1 = 0$, $z_2=0$. It …
http://mathhelpforum.com/calculus/44530-first-order-partial-derivatives.html | # Thread:
1. ## First-Order Partial Derivatives
Calculate the first-order partial derivative of the following:
for all (x,y) in $R^2$
I used this as a composition function and used the chain rule.
However, I am unsure of how to apply this formula.
First-order partial derivative of x would be
First-order partial derivative of y would be
My question is : how to compute this? Do I do another composition+chain rule?
Thank you.
2. Originally Posted by Paperwings
Calculate the first-order partial derivative of the following:
for all (x,y) in $R^2$
I used this as a composition function and used the chain rule.
However, I am unsure of how to apply this formula.
First-order partial derivative of x would be
First-order pairtal derivative of y would be
My question is : how to compute this? Do I do another composition+chain rule?
Thank you.
You cannot; it doesn't tell you what the function is. You have gone as far as you can.
This is analogous to saying
.
Since in terms of the derivative with respect to x we don't care whether or not there are y's in there, this is just a normal chain rule of a function of x with some constants.
3. One more question: suppose that g has a second derivative; how would I calculate the second partial derivatives of the function?
4. Hello,
Originally Posted by Paperwings
One more question: suppose that g has second derivative, how would I calculate the second partial derivatives of the function?
It depends on what second partial derivative you want.
You'll get the first one by taking the partial derivative of df/dx with respect to y. You'll get the second one by taking the partial derivative of df/dx with respect to x.
5. My book doesn't specify, but I'm pretty sure I'm supposed to find and . Thank you, Moo.
6. Originally Posted by Paperwings
My book doesn't specify, but I'm pretty sure I'm supposed to find and . Thank you, Moo.
Well then do what you did again
For $f_{xx}$, just hold $y$ constant again and use the chain rule and product rule, or a combination of both if warranted. Same thing with $f_{yy}$, except now $x$ is the variable held constant. If it is indeed $f_{xy}$, note that if the mixed partial derivatives are continuous then $f_{xy}=f_{yx}$, so you would only need to compute one.
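For a concrete illustration with an assumed example function (an illustration only, since the thread's actual function was attached as an image), take $f(x,y)=g(x^2+y^2)$: then $f_x=2x\,g'(x^2+y^2)$, $f_{xx}=2g'(x^2+y^2)+4x^2g''(x^2+y^2)$, and $f_{xy}=4xy\,g''(x^2+y^2)=f_{yx}$.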
http://math.stackexchange.com/questions/48495/volume-of-a-solid-of-revolution | # Volume of a solid of revolution
Let $[a,b]$ be an interval, $a\geq 0$ and $f:[a,b]\to \mathbb{R}_+$ continuous.
I want to calculate the volume of the solid of revolution obtained by rotating the area below the graph of $f$ around the $y$-axis. The result should be $$2\pi\int_{a}^bxf(x)~dx.$$
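A quick sanity check of that formula on a case with a known answer (illustrative: $f(x)=\sqrt{1-x^2}$ on $[0,1]$ swept around the $y$-axis gives a unit hemisphere of volume $2\pi/3$):

```python
# Shell formula 2*pi*Integral(x*f(x), a, b) against the hemisphere volume 2*pi/3.
from sympy import symbols, sqrt, integrate, pi

x = symbols('x', nonnegative=True)
print(2*pi*integrate(x*sqrt(1 - x**2), (x, 0, 1)))   # -> 2*pi/3
```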
For $h,r,t\geq 0$ the volume of a cylinder of radius $r+t$ in which a centred cylinder of radius $r$ is removed, both of height $h$, is $$\pi(r+t)^2h-\pi r^2h=\pi h(2rt+t^2).$$
This formula in mind, it seems reasonable to me that the volume of the solid is $$\begin{array}{rl} \lim_{k\to\infty} ~\sum_{i=1}^k\pi f\left(a+i\tfrac{b-a}{k}\right)\left(2\left(a+i\tfrac{b-a}{k}\right)\tfrac{1}{k}+\tfrac{1}{k^2}\right)&=\\ \pi\lim_{k\to\infty} ~\sum_{i=1}^k\left(\left( f\left(a+i\tfrac{b-a}{k}\right)2\left(a+i\tfrac{b-a}{k}\right)\tfrac{1}{k}\right)+\left(f\left(a+i\tfrac{b-a}{k}\right)\tfrac{1}{k^2}\right)\right)& \end{array}$$ With the 'definition' $\int_{a}^b g(x)\,dx=\lim_{k\to\infty} ~\sum_{i=1}^k g(a+i\tfrac{b-a}{k})\tfrac{1}{k}$, the first 'summand' of the infinite sum looks exactly like the solution integral. Why does the second summand disappear?
-
@Ana Lytics: It is because in the second term there is a $k^2$ in the denominator. – André Nicolas Jun 29 '11 at 19:16
## 2 Answers
If $f$ is continuous on $[a,b]$, then $|f(x)| \leq M$ for all $x \in [a,b]$, for some $M > 0$ fixed. Hence, $$\Bigg|\frac{{\sum\nolimits_{i = 1}^k {f(a + i\frac{{b - a}}{k})} }}{{k^2 }}\Bigg| \le \frac{{Mk}}{{k^2 }} = \frac{M}{k},$$ which tends to $0$ as $k \to \infty$.
-
Very nice. Easier than mine. :) – Beni Bogosel Jun 29 '11 at 19:21
@Beni - thanks. – Shai Covo Jun 29 '11 at 19:24
The second summand disappears because you have the factor $1/k^2$. Group in the following way: $$\lim_{k \to \infty} \frac{1}{k} \sum_{i=1}^k\frac{1}{k}f(a+i\frac{b-a}{k})$$
The sum converges to an integral, something finite, and divided by $k$ it converges to $0$.
-
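A quick numerical illustration of the bound in the first answer, using the usual Riemann width $(b-a)/k$ and the arbitrary choices $f(x)=e^x$, $[a,b]=[0,2]$: the leftover term $\sum_{i=1}^k f(x_i)/k^2$ is at most $Mk/k^2=M/k$ and dies off, while the shell sum tends to $2\pi\int_a^b x f(x)\,dx$.

```python
import numpy as np

a, b = 0.0, 2.0          # arbitrary interval for illustration
f = np.exp               # arbitrary continuous positive f

exact = 2 * np.pi * (np.exp(2) + 1)   # 2*pi * int_0^2 x e^x dx = 2*pi*(e^2 + 1)

for k in (10, 100, 1000, 10000):
    x = a + np.arange(1, k + 1) * (b - a) / k
    shell_sum = np.sum(2 * np.pi * x * f(x)) * (b - a) / k   # -> 2*pi*int x f(x) dx
    leftover = np.sum(f(x)) / k**2                           # <= max(f)*k/k^2 = M/k -> 0
    print(k, shell_sum, leftover)

print(exact)
```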
http://mathhelpforum.com/differential-geometry/120999-cauchy-sequence.html | # Thread:
1. ## Cauchy sequence
Does anyone have a concrete example of a Cauchy sequence that is not convergent?
I remember I studied these once, and every Cauchy sequence is convergent.
Is there any counterexample? Because I'm about to die.
3. Originally Posted by Krizalid
I remember I studied these once, and every Cauchy sequence is convergent.
Is there any counterexample? Because I'm about to die.
Every convergent sequence is Cauchy... but in general, the converse isn't true. It only holds in complete spaces such as $\mathbb{R}$.
4. oh, i'll have to review my notes.
5. Originally Posted by platinumpimp68plus1
Does anyone have a concrete example of a Cauchy sequence that is not convergent?
It all depends on which space you're working in. For example, working with $\mathbb{R}$ we have that $x_n=\left( 1+\frac{1}{n} \right) ^n$ is convergent, but if you're in $\mathbb{Q}$ this sequence is Cauchy but not convergent.
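A small numerical illustration of that last example: each term is an exact rational number, yet the terms pile up around $e$, which is not in $\mathbb{Q}$, so the sequence has no limit inside $\mathbb{Q}$.

```python
from fractions import Fraction
import math

# x_n = (1 + 1/n)^n is a rational number for every n,
# but the sequence is Cauchy and its limit e is irrational
for n in (1, 2, 5, 10, 100, 1000):
    x_n = (1 + Fraction(1, n)) ** n
    print(n, float(x_n), abs(float(x_n) - math.e))
```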
http://mathoverflow.net/questions/17230?sort=newest | ## Permutation representation inner product
Let $\rho : S_n \rightarrow \text{GL}(n, \mathbb{C})$ be the homomorphism mapping a permutation $g$ to its permutation matrix. Let $\chi(g) = \text{Trace}(\rho(g))$.
What is the value of $\langle \chi, \chi \rangle = \displaystyle \frac{1}{n!} \sum_{g \in S_n} \chi(g)^2$ ? Computing this expression for small $n$ yields $2$. Is this always true?
-
Hmmm... looks like a homework question! (reason: it IS one, in my course on rep's of finite groups). – Alain Valette May 2 2011 at 17:31
## 7 Answers
You are computing the inner product of $\chi$ with itself.
Since $\chi=\mathrm{triv}+\mathrm{std}$ as an $S_n$-module, with $\mathrm{triv}$ being the trivial representation, and $\mathrm{std}$ its orthogonal complement, which is an irreducible $S_n$-module, and since the inner product is, well, an inner product and distinct irreducible characters are orthogonal, your $2$ follows from $$\langle\chi,\chi\rangle=\langle\mathrm{triv}+\mathrm{std},\mathrm{triv}+\mathrm{std}\rangle=\langle\mathrm{triv},\mathrm{triv}\rangle+\langle\mathrm{std},\mathrm{std}\rangle=1+1$$.
-
Much as I like Burnside's Lemma, induced (permutation) representations, and other parts of group theory, I can't resist pointing out an alternative argument that uses essentially no group theory but relies on the fact that expectation (of random variables) is linear. Since $\chi(g)$ is the number of fixed-points of $g$, its square is the number of fixed ordered pairs $(x,y)$, where of course fixing a pair means fixing both its components. So the $\langle\chi,\chi\rangle$ in the question is the average number of fixed pairs of a permutation $g$, in other words the expectation (with respect to the uniform probability measure on $S_n$) of the random variable "number of fixed pairs." That random variable is the sum, over all pairs $(x,y)$, of the indicator variable $F_{x,y}$ whose value at any permutation $g$ is 1 or 0 according to whether $g$ fixes $x$ and $y$ or not. So $\langle\chi,\chi\rangle$ is the sum, over all $x,y$, of the expectations of these $F_{x,y}$, and these expectations are just the probabilities that a random permutation fixes $x$ and $y$.

For each of the $n$ pairs where $x=y$, that probability is $1/n$, so all these together contribute 1 to the sum. For each of the remaining $n^2-n$ pairs, the probability is $(1/n)(1/(n-1))$ (namely, probability $1/n$ to fix $x$ and conditional probability $1/(n-1)$ to fix $y$ given that $x$ is fixed). So these pairs also contribute 1 to the sum, for a total of 2.
-
I think that if one writes down a proof of Burnside's lemma, and reads it in the correct light and angle one gets the argument you wrote without changing a word :) – Mariano Suárez-Alvarez May 2 2011 at 16:14
Another fact, of which this is a special case, well-known to (many) group theorists, is that if $G$ is a finite transitive permutation group, and $H$ is a point-stabilizer, then the squared-norm of the permutation character $1_{H}^{G}$ is the number of distinct $(H,H)$-double cosets in $G$, which is the same as the number of orbits of $H$ on the points in the permutation action. This is a standard application of Mackey's formula for the restriction to one subgroup of a character (or representation) induced from another subgroup (this result can be found in standard texts such as Curtis and Reiner), after first applying Frobenius reciprocity to conclude that $\langle 1_{H}^{G}, 1_{H}^{G}\rangle$ is equal to $\langle (1_{H}^{G})_{H},1 \rangle$.
-
A small generalization. Any doubly transitive action of a group $G$ on a set $X$ has the property that $\frac{1}{|G|} \sum_{g \in G} \text{Fix}(g)^2 = 2$. This is because, by double transitivity, the diagonal action of $G$ on $X^2$ has precisely two orbits, the orbit where the first factor equals the second and the orbit where it doesn't, so the result follows by Burnside's lemma. Similarly, any $k$-transitive action of a group $G$ on a set $X$ with $k < |X|$ has the property that $\frac{1}{|G|} \sum_{g \in G} \text{Fix}(g)^k = B_k$, since the orbits of the diagonal action of $G$ on $X^k$ can naturally be put into bijection with partitions of a $k$-element set. For $k \ge |X|$ the sum evaluates to the number of partitions of a $k$-element set into at most $|X|$ parts (of course, the action can only be $k$-transitive for $k = |X|$ if $G$ is in fact the full group of permutations on $X$!).
The representation-theoretic upshot of all this is that the permutation representation corresponding to a doubly transitive group action always breaks down into the direct sum of one copy of the trivial representation and an irreducible representation. This is an easy way to write down nice irreducible representations of groups like $PSL_2(\mathbb{F}_q)$, which has a triply transitive action on $\mathbb{P}^1(\mathbb{F}_q)$ (edit: when $q$ is even!)
-
@Qiaochu: concerning your last comment: $PSL_2(q)$ is only triply transitive when $q$ is even; if $q$ is odd, the two-point stabilizers have two orbits (corresponding to the squares and non-squares in $\mathbb{F}_q^\times$). – Tom De Medts May 2 2011 at 10:30
@Tom: thanks for the correction. – Qiaochu Yuan May 2 2011 at 15:17
Alternatively, let $X=\{1,\dots,n\}$ and $A=\{(x,y,g)\in X\times X\times S_n:gx=x, gy=y\}$. The sum $\sum_g\chi(g)^2$ can be evaluated counting elements of $A$ in two different ways, as explained in W.R.Scott's Group theory, Thm. 10.1.6.
-
Isn't this just one of the standard proofs of Burnside's lemma for the action of S_n on X^2? – Qiaochu Yuan Mar 5 2010 at 23:23
Well, the statement is Burnside's lemma for the action of S_n on X^2 :) – Mariano Suárez-Alvarez Mar 6 2010 at 5:12
Here's another proof.
If $G$ acts transitively on $X$, then the permutation representation of $G$ is induced from the permutation representation of the stabilizer $S$ of an arbitrary point. As a result, $\langle \chi , \chi \rangle$ counts the number of orbits of $S$ on $X$.
In your situation $G=S_n$ is acting transitively on $X=\{1,\ldots,n\}$. Let $S$ be the stabilizer of $n$; this is pretty much $S_{n-1} \subset S_n$, which has two orbits on $X$: $\{1,\ldots, n-1\}$ and $\{n\}$.
-
The trace of a permutation matrix is the number of fixed points of the corresponding permutation. This is a special case of the identity proved in "An identity for fixed points of permutations" by Goldman, where the average of the $k^{th}$ powers of the number of fixed points is shown to be the $k^{th}$ Bell number $B_k$ when $k<n$. Your case follows because $B_2=2$.
-
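A brute-force check of the value $2$ (and of the Bell-number pattern mentioned in two of the answers) for small $n$; this is only a numerical sanity check, with $\chi(g)$ computed directly as the number of fixed points:

```python
from itertools import permutations
from math import factorial

def moment(n, k):
    """(1/n!) * sum over S_n of (number of fixed points)^k."""
    total = 0
    for p in permutations(range(n)):
        fixed = sum(1 for i, v in enumerate(p) if i == v)   # chi(g) = # fixed points
        total += fixed ** k
    return total / factorial(n)

for n in (2, 3, 4, 5, 6):
    print(n, moment(n, 2))          # 2.0 for every n >= 2

print(moment(5, 3), moment(5, 4))   # 5.0 and 15.0, the Bell numbers B_3 and B_4
```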
http://mathoverflow.net/questions/11457/strict-class-numbers-of-totally-real-fields | Strict Class Numbers of Totally Real Fields
In their paper Computing Systems of Hecke Eigenvalues Associated to Hilbert Modular Forms, Greenberg and Voight remark that
...it is a folklore conjecture that if one orders totally real fields by their discriminant, then a (substantial) positive proportion of fields will have strict class number 1.
I've tried searching for more details about this, but haven't found anything.
Is this conjecture based solely on calculations, or are there heuristics which explain why this should be true?
-
Are you bounding the degree of the field? If you don't then it's hard to make sense of this question, because I don't see why there will only be finitely many fields of a given discriminant. – Kevin Buzzard Jan 11 2010 at 20:06
@Buzzard: Yes, the degree of the field is bounded. The paper works with a totally real field F of degree n and it is assumed throughout that F has strict class number 1. I believe that this remark is meant to justify this restriction. – Ben Linowitz Jan 11 2010 at 20:17
Isn't it a classical result of Hermite that there are finitely many fields of bounded discriminant? – Dror Speiser Jan 11 2010 at 21:53
No :-) In fact it's a result of Golod and Schaferevich that there aren't :-) You need to bound the degree too. Unless I've slipped up. – Kevin Buzzard Jan 11 2010 at 23:14
@buzzard: in a Golod-Shafarevic tower, the discriminant is exponential in the degree: it's the so-called root discriminant that's constant. (Describing the set of number fields with bounded root discriminant is an extremely interesting and mysterious problem!) The set of number fields with discriminant < X, on the other hand, is indeed finite. In an appendix to a paper of Belolipetsky, Venkatesh and I show that the log-size of this set is at most (log X)^{1+eps} (the finiteness is much older, as Ben Linowitz points out.) – JSE Jan 12 2010 at 1:15
4 Answers
One heuristic is the following: if one imagines that the residue at $s = 1$ of the $\zeta$-function doesn't grow too rapidly, then the value is a combination of the regulator and the class number. I don't know any reason for the regulator not to also grow (there are a lot of units, after all!), and hence one can imagine that the class number then stays small.
This is part of a general heuristic that in random number fields there tends to be a trade-off between units and class number, so especially in the totally real case, when there are so many units, the class number should often be 1.
I learnt some of these heuristics from a colleague of mine who regards it more-or-less as an axiom that a random number field has very small class number. I think this view was formed through a mixture of back-of-the-envelope ideas of the type described above, together with a lot of experience computing with random number fields. So the answer to your question might be that it is a mixture of heuristics and computations.
Incidentally, in the real quadratic case, it is compatible with Cohen--Lenstra, but I think it goes back to Gauss. Also, there are generalizations of Cohen--Lenstra to the higher degree context, and I'm pretty sure that they are compatible with the class group/unit group trade-off heuristic described above.
-
A totally real field should have strict class number 1 if and only if it has class number 1 and units of every possible sign combination, so assuming that Cohen-Lenstra generalizes to higher degrees (as you say),I suppose I have my answer. Thanks! – Ben Linowitz Jan 11 2010 at 21:31
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Maybe it's worth a word about why Cohen-Lenstra predicts this behavior. Suppose K is a field with r archimedean places. Then Spec O_K can be thought of as analogous to a curve over a finite field k with r punctures, which is an affine scheme Spec R. Write C for the (unpunctured) curve. Then the class group of R is the quotient of Pic(C)(k) by the subgroup generated by the classes of the punctures -- or, what is the same, the quotient of Jac(C)(k) by the subgroup generated by degree-0 divisors supported on the punctures. (This last subgroup is just the image of a natural homomorphism from Z^{r-1} to Jac(C)(k).)
The Cohen-Lenstra philosophy is that these groups and the puncture data are "random" -- that is, you should expect that the p-part of the class group of R looks just like what you would get if you chose a random finite abelian p-group (where a group A is weighted by 1/|Aut(A)|) and mod out by the image of a random homomorphism from Z^{r-1}. (There are various ways in which this description is slightly off the mark but this gives the general point.)
It turns out that when r > 1 the chance is quite good that a random homomorphism from Z^{r-1} to A is surjective. In fact, the probability is close enough to 1 that when you take a product over all p you still get a positive number. In other words, when r > 1 Cohen-Lenstra predicts a positive probability that the class group will have trivial p-part for all p; in other words, it is trivial. (In fact, it predicts a precise probability, which fits experimental data quite well.)
When r = 1, on the other hand, the class group is just A itself, and the probability its p-part is trivial is on order 1-1/p. Now the product over all p is 0, so one does NOT expect to see a positive proportion of trivial class groups. And in fact, when there is just one archimedean place -- i.e. when K is imaginary quadratic -- this is just what happens!
-
I think the best way for one to become convinced that class numbers of real quadratic fields tend to be small, is to look at the continued fraction expansion of $\sqrt{D}$.
The period length of the continued fraction is about the regulator (up to a factor of $\log{D}$). One can easily compute some random continued fractions, and see that for most numbers, the length really is a small factor away from $\sqrt{D}$.
Numbers that have small continued fraction period length are very scarce. I believe it is not hard to prove that the number of $D$ up to $X$ for which the period length of $\sqrt{D}$ is less than a fixed integer $n$ is $O(X^{1-\epsilon})$ for some $\epsilon > 0$ (I think 1/2 should always work).
In a sense (very strict actually) the regulator counts how many numbers $n$ with $|n| < 2\sqrt{D}$ can be represented as $n = x^2-Dy^2$ for some integers $x, y$. Well, if $D$ is large and random, then it seems reasonable that many should. So the regulator should be around $\sqrt{D}$, and hence, by Dirichlet's class number formula, the class number should be very small.
Once you become convinced of the real quadratic case, the rest immediately follow, because you already believe things people said while waving their hands a lot. (This is a general philosophy in mathematics)
-
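To make the "compute some random continued fractions" suggestion in the answer above concrete, here is a small Python sketch using the standard recurrence for the continued fraction of $\sqrt{D}$; the sample range is an arbitrary choice, and for most $D$ the period comes out within a modest factor of $\sqrt{D}$:

```python
from math import isqrt
import random

def cf_period(D):
    """Length of the period of the continued fraction of sqrt(D), D not a square."""
    a0 = isqrt(D)
    m, d, a, length = 0, 1, a0, 0
    while a != 2 * a0:                 # the period of sqrt(D) ends with the term 2*a0
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        length += 1
    return length

random.seed(1)
sample = [D for D in random.sample(range(10**5, 10**6), 8) if isqrt(D) ** 2 != D]
for D in sorted(sample):
    print(D, cf_period(D), isqrt(D))   # compare the period length with sqrt(D)
```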
For ordinary class number 1 in the real quadratic case, see Cohen and Lenstra's Heuristics on Class Groups of Number Fields https://openaccess.leidenuniv.nl/retrieve/2845/346_069.pdf
Maybe it's not so much of a jump from there to see a heuristic arguing that a substantial positive proportion have strict class number 1.
-
http://mathoverflow.net/questions/10819/module-categories-over-repg | ## Module categories over $Rep(G)$.
Related to this question I also had some troubles to understand the classification of module categories over $Rep(G)$. Specifically, on page 12 of Ostrik's paper what is the category $\mathrm{Rep}^1(\tilde{H})$? $k^*$ acting as "identity character on V" means $a.v=av$ for all $a \in k^*$ and $v \in V$? Then what is the structure of module category over $Rep(G)$? Tensor product should be after restricting representations of $G$ to $H$ and then inducing back to $\tilde{H}$?
Concretely, I was thinking about the following example. Let $H$ be a subgroup of $G$. Then $Rep(H)$ is a module category over $Rep(G)$ via tensor product as $H$-modules. What is the decomposition of $Rep(H)$ in indecomposable module categories and what are the corresponding subgroups $H$ and cocyles $\omega \in H^2(H,\;k^*)$ for each indecomposable subcategory?
-
Why is this post community wiki? – Leonid Positselski Jan 5 2010 at 17:03
I'd think tensor product is by pulling back a representation from $G$ to $\tilde H$ and then tensoring over $\tilde H$. How is $\mathcal M$ a module category? – t3suji Jan 5 2010 at 17:09
One restricts a rep. of $G$ to $H$ and then tensors it over $H$ with some rep. from $\mathcal{O}$. The irreducible constituents of the tensor product are in the same orbit, $\mathcal{O}$. – Sebastian Burciu Jan 5 2010 at 17:13
@ Leonid: It gives a classification of all module categories over $Rep(G)$. Maybe I should have said this in the post. – Sebastian Burciu Jan 5 2010 at 17:17
@Sebastian Burciu: Are they? Take $G=H$ and $\mathcal O$ to be the orbit of the one-point orbit of the trivial representation of $G$, for instance. Aren't you saying that the tensor product of the trivial representation and any representation is trivial? – t3suji Jan 5 2010 at 17:19
## 1 Answer
Sebastian: your definition of $\mathrm{Rep}^1(\tilde H)$ is absolutely correct. If you have a representation of $G$ you can restrict it to $H$ and consider it as a representation of $\tilde H$ (this operation is called inflation). Now you can tensor it with any representation of $\tilde H$; this tensoring preserves $\mathrm{Rep}^1(\tilde H)$; this is a module category structure (same thing was explained above by t3suji).
The category $\mathrm{Rep}(H)$ considered as a module category over $\mathrm{Rep}(G)$ is indecomposable. It corresponds to the subgroup $H$ and the trivial cocycle $\omega$ (so $\tilde H$ is a direct product of $H$ and the multiplicative group $\mathbb{G}_m$).
-
Thank you very much! – Sebastian Burciu Jan 5 2010 at 18:20
http://math.stackexchange.com/questions/92522/finding-a-particular-solution-to-a-non-homogeneous-system-of-equations | # Finding a particular solution to a non-homogeneous system of equations
If one were asked to solve the set of equations below with the associated homogeneous system, I'd know how to do it.
$$S \leftrightarrow \begin{cases} 3x + 5y + z = 8\\\ x + 2y - 2z = 3 \end{cases}$$
$$S' \leftrightarrow \begin{cases} 3x + 5y + z = 0\\\ x + 2y - 2z = 0 \end{cases}$$
You'd find the solution of the homogeneous system $S'$ to be: \begin{equation} (x, y, z) = \{ k\cdot (-12, 7, 1) | k \in \mathbb{R} \} \end{equation}
With the particular solution of $S$... \begin{equation} (x, y, z) = (1, 1, 0) \end{equation}
You can count them up and you'd find: \begin{equation} (x, y, z) = \{(1 - 12k, 1+ 7k, k)|k \in \mathbb{R}\} \end{equation}
And your original system of equations $S$ is solved.
Now I've got one question: how do you find such a particular solution to a non-homogeneous system of equations. How do you find $(1, 1, 0)$ in this case?
Another example:
How do I find one particular solution to this non homogeneous system? \begin{cases} x_1 + x_2 +x_3 =4\\ 2x_1 + 5x_2 - 2x_3 = 3 \end{cases}
-
Give $z$ a particular value, then solve the resulting $2 \times 2$ system. – David Mitra Dec 18 '11 at 15:26
## 1 Answer
Just set $z=0$, say. With a bit of luck, you'll be able to solve the resulting system: $$\eqalign{ 3x+5y&=8\cr x+2y&=3 }$$
The solution of the above system is $y=1 , x=1$; so, a solution to the original equation is $(1, 1 , 0)$.
For your second question, do a similar thing. Set $x_2=0$. Then you can conclude $x_1=11/4$ and $x_3=5/4$.
-
Some times, you won't be lucky, in which case you could select a different value, perhaps for a different variable. Of course, you could just solve the system using the usual augmented matrix techniques, then pick one solution. – David Mitra Dec 18 '11 at 15:51
Yes my particular solution is wrong: I accidentally wrote 4 instead of 5 with the $y$-value of the first equation. – henriv Dec 18 '11 at 15:55
@Ief2 I edited my answer to correspond to the correct system of equations. – David Mitra Dec 18 '11 at 15:58
Thank you, I guess I forgot that having a free variable means that you can actually choose one freely :-) – henriv Dec 18 '11 at 16:03
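A short numerical sketch of the recipe in the answer (freeze the free variable at $0$, solve the remaining square system, then add any multiple of the homogeneous solution), using the corrected system $S$ from the question:

```python
import numpy as np

A = np.array([[3.0, 5.0, 1.0],
              [1.0, 2.0, -2.0]])
b = np.array([8.0, 3.0])

# set z = 0 and solve the remaining 2x2 system for (x, y)
xy = np.linalg.solve(A[:, :2], b)
particular = np.append(xy, 0.0)          # -> [1., 1., 0.]

# the homogeneous solution found in the question
null_dir = np.array([-12.0, 7.0, 1.0])

print(particular)
print(A @ particular)                    # [8., 3.]
print(A @ (particular + 4 * null_dir))   # still [8., 3.] for any multiple of null_dir
```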
http://physics.stackexchange.com/questions/tagged/lagrangian-formalism?sort=unanswered&pagesize=50 | # Tagged Questions
For questions involving the Lagrangian formulation of a dynamical system. Namely, the application of an action principle to a suitably chosen Lagrangian or Lagrangian Density in order to obtain the equations of motion of the system.
1answer
295 views
### About Turbulence modeling
There is a paper titled "Lagrangian/Hamiltonian formalism for description of Navier-Stokes fluids" in PRL. After reading the paper, the question arises how far can we investigate turbulence with this ...
1answer
262 views
### Lagrangian density for a Piano String
So I'm trying to do this problem where I'm given the Lagrangian density for a piano string which can vibrate both transversely and longitudinally. $\eta(x,t)$ is the transverse displacement and ...
1answer
66 views
### Determinant for a coupled fluctuation Lagrangian
Lets consider a bosonic physical system in variables $t, x$ and $y(x)$ ($x$ dependent) with a classical Lagrangian $L$. To first order in fluctuations $x \to x+\xi_1$ and $y \to y+\xi_2$ the ...
1answer
75 views
### Higher order covariant Lagrangian
I'm in search of examples of Lagrangian, which are at least second order in the derivatives and are covariant, preferable for field theories. Up to now I could only find first-order (such at ...
1answer
141 views
### Quantum tunneling in Field theory with Time dependent potential
What should be the limits of integration for euclidean action $S(\phi)$ in 3d and 4d? This action is negatively exponentiated to calculate the decay rate. I suspect that it is variable limit problem. ...
0answers
141 views
### Lagrangian for Goldstone mode + topological excitation
The XY-model Hamiltonian is the following, $${\cal H}~=~-J\sum_{\langle i,j\rangle} \cos (\theta_i -\theta_j).$$ The Goldstone mode corresponds to term $(\nabla \theta)^2$ in the effective ...
0answers
121 views
### Orbit through L4 and L5
I was reading the Wikipedia article on Lagrangian points and doing the requisite wiki walk through the various quasi-satellites of Earth when a question occurred to me: Could there be a stable or ...
0answers
294 views
### General equation of motion for elementary particles
Elementary particles can be grouped into spin-classes and described by specific equations, see below: Is there a general Lagrangian density from which all these equations can be derived? A ...
0answers
145 views
### Normal modes of oscillation: how to find them
Are normal modes the eigenvectors of the matrix $(\omega ^2 T- V)$ where $T$ is the matrix of kinetic energy and $V$ is the matrix of potential energy? Is it the only way to express them? How can I ...
0answers
40 views
### Lagrangians for non-local equations of motion
Say I have a multicomponent field $X_a(x,t)$ such that I know it Fourier modes satisfy the following equation of motion, \$(\delta_{ab} \partial_t + \Omega_{ab}(t))X_b(k,t) = e^t \int \frac{d^3p ...
0answers
104 views
### Deriving torque from Euler-Lagrange equation
How could you derive an equation for the torque on a rotating (but not translating) rigid body from the Euler-Lagrange equation? As far as I know from my first class in Classical Mechanics, there is ...
0answers
52 views
### relevant 4-dimensional theory with interacting vector field
A simple langragian that gives the simplest interaction is $\mathcal{L}=(\partial\phi)^2+(m\phi)^2$ where $m$ is some constant. Does anyone know of theory in four dimensions which is physically ...
0answers
369 views
### What are the forces of constraint if there are multiple equivalent constraints?
Suppose a large (rigid) block is sitting on top of two smaller blocks of equal height $1$, both of which rest on the ground. We wish to find the position of the block (easy) and the forces of ...
0answers
232 views
### How to find angular velocity of a point inner a circumference
Let's consider a cicumference that have the center in the origin of axes and rotates around x-axes. Let's stick a bar in a point $A$ of this circumference and at the end of the bar let's stick a mass ...
0answers
72 views
### Describing the movement of the object in a particular situation in Lagrangian way
Suppose there is a object M, (sliding motion) moving by the initial speed $v$ and the initial location $x_0$. Otherwise noted, friction is assumed to be nonexistent. It then meets a circular mold ...
0answers
153 views
### Comparing Lagrangian in Special Relativity vs General Relativity for a weak gravitational field
This is a sequel to this question. What is the difference between the Lagrangian in SR and in GR for a weak gravitational field in the non-relativistic case? What is the reason for this difference?
http://math.stackexchange.com/questions/97310/queries-on-proof-that-every-pid-is-a-factorisation-domain | # Queries on proof that every PID is a factorisation domain
I'm reading a proof from C. Musili's Rings and Modules that every PID is a factorisation domain.
The author defines a factorisation domain as a commutative integral domain $R$ with a unit such that every non-zero $x \in R$ can be written as a unit times a finite product of irreducible elements. I will write down an outline of the proof here and my queries at the end. Those points that I am not sure about I will put in bold.
Let $R$ be a PID, and $\Omega$ the set of all non-zero elements of $R$ that cannot be written as a product of irreducible elements in $R$. We want to show that $\Omega = \emptyset$. So for a contradiction suppose that $\Omega \neq \emptyset$. Consider the non-empty family of principal ideals
$$\mathcal{F} = \{(x) \subseteq R : x \in \Omega\}$$
that is also a poset with respect to set inclusion. Now given any chain in $\mathcal{F}$ we can check that it has an upper bound in $\mathcal{F}$ so that by Zorn's Lemma $\mathcal{F}$ has a maximal element, say $(a)$.
Now $a$ cannot be a unit or irreducible since $a \in \Omega$. So write $a = bc$ for non-units $b$ and $c$. Now $(a) \subsetneqq (b)$, for otherwise $a$ and $b$ would be associates, making $c$ a unit, a contradiction. Similarly $(a) \subsetneqq (c)$. Since $(a)$ is maximal in $\mathcal{F}$, this means that $b$ and $c$ can be factored into irreducibles, so that $a \notin \Omega$, a contradiction. Hence $\Omega = \emptyset$.
(1) For the first sentence in bold, should $\Omega$ not be the set of all non-zero elements in $R$ that cannot be written as a unit times an infinite number of irreducibles?
(2) If $\Omega$ is like what I have written above, then $a$ cannot be a unit for then
$a = a \cdot 1 =$ a unit $\times$ an irreducible element.
However if $\Omega$ is as the author has defined it to be, why must $a$ not be unit?
(3) If $a$ is not irreducible, why can we always write $a = bc$ for non-units $b$ and $c$? Does such a decomposition always exist?
Thanks.
-
The product of an infinite number of elements is not well-defined in general. – Qiaochu Yuan Jan 8 '12 at 6:30
@QiaochuYuan I don't understand what you mean, I don't see anywhere in the proof the use of the finiteness condition. – BenjaLim Jan 8 '12 at 6:34
Dear Benjamin: How do you define the product of "an infinite number of irreducibles"? (I'm just following on Qiaochu's comment.) – Pierre-Yves Gaillard Jan 8 '12 at 6:50
"There exists an $x \in R$ such that all finite products of irreducibles $p_1p_2\dotsb p_n \neq x$"? – kahen Jan 8 '12 at 6:58
Because we're using a topology to define such an infinite product. We're using more than the ring structure. – Pierre-Yves Gaillard Jan 8 '12 at 7:05
show 10 more comments
## 1 Answer
For (1), the set $\Omega$ should consist of all non-zero $x \in R$ such that $x$ cannot be written as a unit times a finite number of irreducibles. As Qiaochu pointed out, an infinite product of elements is undefined (so we are using the finiteness of the number of terms in assuming that the product is well-defined).
(2) If $a$ were irreducible, it would be a product of a unit times a finite number of irreducibles, namely 1 times the single irreducible $a$.
(3) This is basically the definition of an element being irreducible.
-
For (3), I am sorry I typed the wrong thing in. It has now been changed to (2), and I am wondering why $a$ cannot be a unit (taking the author's definition of $\Omega$). – BenjaLim Jan 8 '12 at 7:00
Dear @BenjaminLim: The empty product is equal to $1$. – Pierre-Yves Gaillard Jan 8 '12 at 7:10
@Pierre-YvesGaillard Sorry, I don't get what you mean, you are referring to? – BenjaLim Jan 8 '12 at 7:14
@BenjaminLim: The author wrote "a product of irreducible elements". I think she/he should have written "a unit times a product of irreducible elements", to cover the case where our element $a$ is a unit and $a\neq1$. – Pierre-Yves Gaillard Jan 8 '12 at 7:22
I think I get it now, $a$ is irreducible iff we cannot write it as the product of two non-units. So (3) above is settled too. Thanks! – BenjaLim Jan 8 '12 at 7:47
http://math.stackexchange.com/questions/189250/closed-form-representation-of-an-irrational-number | # Closed form representation of an irrational number
Can an arbitrary non-terminating and non-repeating decimal be represented in any other way? For example if I construct such a number like 0.1 01 001 0001 ... (which is irrational by definition), can it be represented in a closed form using algebraic operators? Can it have any other representation for that matter?
-
Given that there are uncountably infinite real numbers, and only countably infinite closed form expressions, you can say that "most" real numbers do not have a closed form expression. – Thomas Andrews Aug 31 '12 at 13:30
"Can it have any other representation for that matter?" - you could represent it as a simple continued fraction, or as an Engel/Pierce expansion, for instance... – J. M. Aug 31 '12 at 14:38
## 3 Answers
In general, no, since, for one thing, there's an uncountable infinity of such decimals, and only a countable infinity of closed forms (under any reasonable definition).
-
Some irrational numbers can be expressed in a closed form using algebraic operations; $\sqrt7$ is a very simple example. Some can be expressed in other ways, like $\pi$ for which a multitude of formulas is known. Most real numbers however cannot be expressed (using a finite amount of information, but that is implicit in "expressing") at all, since there are just too many of them.
-
Are there any references on the impossibility of representing all irrational numbers as products of algebraic or similar operations? – Phonon Apr 6 at 22:34
Note that $0.1 = \frac{1}{10}$, $0.001 = \frac{1}{10^3}$, $0.000001 = \frac{1}{10^6}$, so a reasonable guess for the $n$-th term is $10^{-n(n+1)/2}$. The sum representing the irrational number then becomes $$0.1010010001\ldots = \sum_{k=1}^\infty \frac{1}{10^{\frac{k(k+1)}{2}}} = \left.\frac{1}{2 q^{1/4}} \theta_2\left(0, q\right)-1\right|_{q=\frac{1}{\sqrt{10}}}$$ where $\theta_2(u,q) =2 q^{1/4} \sum_{n=0}^\infty q^{n(n+1)} \cos((2n+1)u)$ is the elliptic theta function.
-
what is $\theta_2$? – ajay Aug 31 '12 at 13:05
It's a theta function, q.v. – Gerry Myerson Aug 31 '12 at 13:30
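A quick numerical check of the series (and hence of the theta-function expression above) with Python's Decimal arithmetic; ten terms are plenty to reproduce the pattern of the digits:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

# partial sums of sum_{k>=1} 10^(-k(k+1)/2)
s = Decimal(0)
for k in range(1, 11):
    s += Decimal(10) ** (-k * (k + 1) // 2)

print(s)   # 0.101001000100001000001...
```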
http://motls.blogspot.com/2012/06/george-mussers-occult-musings-on-erik.html?m=0 | The Reference Frame
Friday, June 15, 2012
George Musser's occult musings on Erik Verlinde's entropic gravity
Commenting technicality: On Friday, 2:20 pm Pilsner Summer Time, the slow Blogger.com comments have been switched to DISQUS 2012 comments. Try them. Right now, TRF has 3 independent comment systems. The Blogger.com comments should be abolished sometime on Saturday when the import of these comments to DISQUS 2012 is completed (they may still be reached from mobile templates). Before October, the fast Echo comments should undergo the same fate and TRF will probably have one commenting system only, DISQUS 2012, that you have probably not tried yet.
Fun pictures: Here I collected all 169 pictures plus thumbnails that have been attached to Echo comments so far. Do you remember in what context? ;-) I plan to preserve them during the DISQUS transition, too.
Among the editors of Scientific American whose names I can recognize, George Musser is the most reasonable one.
The relevance of this babe will be clarified below.
Nevertheless, I still find his reasoning and approach to questions to be largely incompatible with the scientific discourse. He has just started a new SciAm blog named Critical Opalescence and dedicated the first article to Erik Verlinde's entropic gravity and related (and unrelated) efforts to transform everything we think about gravity and cosmology:
Is Dark Matter a Glimpse of a Deeper Level of Reality?
Just to remind you, Erik Verlinde conjectured two years ago that the gravitational force results from the desire of the system to reduce its entropy. However, gravity cannot be entropic because entropic forces inevitably lead to irreversible behavior (while the distance between the Earth and the Sun oscillates back and forth) and the interference patterns would be broken if there were very many microstates whose number depends on the distance, a point that was independently made by Archil Kobakhidze of Melbourne.
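To put a rough number on the interference objection: a back-of-the-envelope sketch, taking Verlinde's postulated entropy change $\Delta S = 2\pi k_B\,(mc/\hbar)\,\Delta x$ at face value and assuming a neutron displaced by a fraction of a millimetre (the beam separation used here is an illustrative guess, not a quoted experimental figure):

```python
import math

hbar = 1.054571817e-34    # J*s
c    = 2.99792458e8       # m/s
m_n  = 1.67492749804e-27  # kg, neutron mass
dx   = 1.0e-4             # m, assumed beam separation in a neutron interferometer

dS_over_kB = 2 * math.pi * m_n * c * dx / hbar
print(f"Delta S / k_B ~ {dS_over_kB:.2e}")   # ~3e12

# if the two paths really differed by this enormous amount of entropy,
# i.e. exp(~3e12) microstates, the relative phase would be scrambled
# and no interference pattern could survive
```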
Erik Verlinde has said many other things in recent years. Many of them seem unrelated to the (wrong) claim about gravity's being an entropic force. He wants to return to some (medieval) steady state cosmology, reject the existence of dark matter as well as dark energy, but there is no paper about these topics that would make sense and that would offer an alternative explanation of the relevant cosmological observations.
But I want to discuss Musser's article – and why I consider his general framework to evaluate science pretty much unsatisfactory. He's a fine guy, don't get me wrong; I just think that he's not trying to solve problems in the scientist's way. We hear that two years ago, he exchanged lots of e-mails about Verlinde's divine inspirations. How did he feel about it?
I don’t think I’ve ever been so flummoxed by physicists’ reactions to a paper. Mathematically it could hardly have been simpler—the level of middle-school algebra for the most part. Logically and physically, it was a head-hurter. I couldn’t decide whether it was profound or trite.
In these sentences, you may see that Musser is trying to reach conclusions by looking at other people's reactions. He has never been similarly "flummoxed" because he isn't terribly experienced, not even in his "sociological" approach to science. What we're talking about in this context is a physicist who has done nontrivial contributions to physics in his life but at some moment, he loses his mind and becomes obsessed with ideas that demonstrably don't make any sense. And maybe, some time later, he realizes why these ideas are preposterous but because he has already received tens of millions of dollars for that idea, he doesn't find enough integrity to admit that the ideas were wrong and worthless from the beginning.
For the former reason, namely Verlinde's previous contributions, many physicists try to be even more polite than they are when they talk about other pseudoscientists; still, the latter reason makes it clear that the fair treatment of this proposal should match the treatment of any other pseudoscientist who claims to have superseded Einstein and others. This tension between the emperor's nice clothes from the past and the emperor's new kind of clothes (the naked ones) is making the physicists react in bizarre way and journalists may be flummoxed.
But it's surely not the first time when the physicists had to deal with such a transition of their colleague. Gerard 't Hooft was arguably the Ayatollah of theoretical physics some decades ago, his contributions have been priceless, and he's still incredibly bright. But at some moment, he jumped on the research of things that didn't make sense – I am specifically talking about the totally meaningless attempts to reformulate quantum mechanics as deterministic hydrodynamics – and if he's dedicating his intellectual life to this stuff for a decade, one should perhaps notice that the current 't Hooft is someone else than the previous one. Gerard 't Hooft remains a great name but that can't prevent us from seeing that most of his writings in the recent 10 years have been pure BS.
I could give you a dozen of other examples although none of them would be as prominent as that of Gerard 't Hooft. So this is a situation that theoretical physicists know rather well; Musser was only flummoxed because he is an outsider. But he continues:
The theorists we consulted said they couldn’t follow it, which we took as a polite way of saying that their colleague had gone off the deep end. Some physics bloggers came out and called Verlinde a crackpot.
For those who know Verlinde, that label hardly fits.
The last sentence above suggests that the claim that Verlinde's work on entropic gravity is worthless crap is somewhat correlated with the question whether one knows Verlinde in person or not. But in science, it's not correlated and it mustn't be correlated. If a theory disagrees with the experiment, then it's wrong. This simple statement is the key to science. It doesn't matter how beautiful the guess is, how smart the author is, what his name is – if it disagrees with the observations, it's just wrong. Go to 0:40-1:00 to hear these basic principles of science in the audio form.
I know Erik Verlinde rather well, in person, and so do most of the people who have politely suggested that he has gone off the deep end. And I am actively aware of lots of his nontrivial previous work (having co-discovered some of it), and so are the other folks. But this personal familiarity can't have any impact on our ability to realize that the paper doesn't agree with the experimentally known physics of gravity. Science is or must be meritocracy, not nepotism. Maybe a better term for the nepotist science that Musser implicitly suggests could be "crony science".
Let me accuse George Musser of misunderstanding this basic point by demagogically suggesting that the criticism of Verlinde's claims arises because people don't know the author. Quite on the contrary, it's mostly the people who know him well who realize that the paper is bunk; many of the people who are true outsiders are ready to be impressed by the paper. Musser's suggestions are upside down.
Also, Musser says:
He is a brilliant theorist, and the amount of discussion his paper provoked suggested that most of his colleagues saw something in it.
This is just a bizarre way of arguing. The first sentence is an ad hominem appraisal that simply isn't tolerable in science. One may be used to claiming that someone is brilliant but if this person looks at the brilliant person's particular paper that self-evidently fails to be brilliant – it's actually stupid – he must adjust his opinions about the paper according to the most complete observations.
Concerning the claim that "most colleagues saw something in it", it's a bizarre claim, too. First, I don't know how Musser decided that the people who see something in the paper represent a "majority". I accuse Musser of making this "fact" up – it's the kind of activism-journalism, making claims up that are probably wrong but that can't be easily verified, in order to distort the readers' opinions in a specific direction – and perhaps even push other people towards an agreement with this would-be majority.
I am not aware of any truly serious theorists who have written followup papers on that proposal – and only two or three "marginal" examples. There's no evidence that the proposal works – and no evidence that competent people think that it works.
Musser tells us that he met Verlinde, Verlinde has doubled his bets, and the attitude of experts hasn't changed.
One told me: “There are a lot of ideas he’s bringing together in an interesting way, but it’s a little hard for us to decipher, so I’m withholding judgment.” All he has really done, though, is take a general sentiment among string theorists and follow it to its logical conclusion.
The quote is very, very polite, indeed, although the point of the quote isn't hard to decipher. I am very doubtful whether similarly excessively polite appraisals are constructive. As far as I can say, the modern world is drowning in kilotons of superstitions and bad science that became widespread partly because certain people whose job was partially to separate the weeds from the crap have been polite and failed in this job of theirs.
The last sentence – one by Musser – is simply incorrect. There is nothing in string theory that would indicate that gravity is an entropic force. Quite on the contrary, the entropy of various configurations – such as a pair of nearby cold heavy orbiting neutron stars – may be explicitly calculated in string theory and the entropy (or at least the contribution that would depend on the distance) is zero.
Holography doesn't imply that the entropy of gravitating objects depends on the distance, not even vaguely or spiritually.
The general holographic insights imply that the event horizons (and therefore black holes) carry the Bekenstein-Hawking entropy; this fact may be applied to distant cosmic horizons as well so the bulk dynamics may be encoded by degrees of freedom on the surface. But none of these things suggests that a pair of neutron stars carries a huge entropy that would depend on the distance. If we use a holographic description of the system associated with a distant holographic screen, the maximum entropy or the number of degrees of freedom will depend on the geometry of the distant screen, not on internal details such as the distance between the neutron stars.
And indeed, one may also present proofs that are independent of string theory and that imply that a distance-dependent entropy of such a binary star would lead to inconsistencies with the basic observed properties of gravity. To summarize, the claim that the entropic gravity follows from – or has anything to do with – string theory is simply a lie. No such relationship exists.
Musser rightfully says that it's hard to reconcile GR and QM; some intuition has to be modified; spacetime is probably emergent, and so on. However, saying that "spacetime is emergent" is something totally different than saying that "a pair of neutron stars carries a huge distance-dependent entropy" (or that "there is no dark matter", among other things). There are a lot of processes that take place in the vacuum (virtual particles appearing and disappearing – but one may use different descriptions of the same thing). The vacuum is "composite" and has a complicated structure. It knows about everything that can be created in it. But none of these insights changes the fact that the entropy of the vacuum has to be strictly zero. The vacuum state must be a unique quantum mechanical state in the Hilbert space. If the entropy were nonzero, the entropy density/flux four-vector would pick a privileged reference frame in the spacetime. That would conflict with the special theory of relativity.
One may similarly prove that one can't get huge entropy differences for neutron stars at different distances. If the entropy were much higher for one distance than for another, it would only be possible to increase (or decrease) the distance; the opposite motion would violate the second law of thermodynamics. This is the simplest way to see that the distance-dependent entropy is just wrong. It's not deep or mysterious or a topic for discussions that should last for decades, it's a trivial question that can and should be answered in a few seconds. Verlinde's claims are just wrong regardless of his name. A competent physicist must be able to see this.
Needless to say, Musser – and Verlinde himself – doesn't actually discuss any physical experiments or thoughts experiments that are relevant for the question whether gravity could be an entropic force. Of course that the conclusion from many of them is No, it cannot.
They don't discuss irreversibility and any other thermal phenomena even though the conjectured relevance of thermodynamics (due to entropy, a key concept of thermodynamics) is what this idea claims to be all about; they don't discuss neutron interference in gravitational fields or any other inherently quantum phenomena even though Verlinde's claims are presented as ramifications of holography and holography vitally depends on quantum mechanics. They have an agenda, to promote the idea that Verlinde's ideas are important and one must at least pay lip service to them, if not accept them. But they're wrong and they're not important and serious physicists no longer waste a minute with thinking about what Erik Verlinde may have wanted to say in 2010. It was just wrong.
Erik Verlinde has also said many weird things about cosmology. Inflation is bad, much like dark energy and dark matter; steady state cosmology and some unspecified version of MOND that is mysteriously connected with the entropic gravity – well, as far as I can see, only by the person who emits both kinds of fog – are supposed to be cherished. But there isn't even a paper on these cosmological things – not even a manifestly wrong paper such as the 2010 paper on entropic gravity – so a scientist simply can't say anything here.
Musser also exposes us some psychology of the dark matter denial.
They have never detected the material directly, though, and for something that is supposed to be so overwhelmingly dominant, dark matter has a puzzlingly subtle effect. The anomalous motions occur only in the unfashionable outskirts of galaxies. Stars and gas clouds out there move faster than they should, but don’t do anything truly wacky—it is as if the gravitational field of the visible galaxy were simply being amplified.
The first sentence is just bullshit. Dark matter's strength of (non-gravitational) interactions doesn't have to be fundamentally any weaker than the same quantity for neutrinos, particles whose reality is very clear to us. It's just normal that in physics, different types of objects display very different degrees of visibility, by many – often dozens of – orders of magnitude. Dark matter must be heavier than neutrinos which is why it's harder to produce it in the lab today but once you produce it, there's no reason to think that its interactions should be dramatically weaker than those of neutrinos.
But even if the interactions had to be even weaker for a theory of dark matter to be compatible with all the data, it wouldn't mean any problem. Quantities in fundamental physics simply span many orders of magnitude. There's absolutely nothing wrong about the situation in which the dominant component of the matter is very weakly interacting. Any feeling that it's wrong is just a layman's feeling or prejudice, something that science simply can't pay attention to. The dark matter theory can't be ruled out by similar pseudo-arguments and the detailed theories aren't even unnatural in the technical sense so there's just no justification for complaints here.
The whining that the anomalous effects only occur in the outskirts of galaxies is irrational, too. This is an observational fact – Newton's laws without extra matter work OK for the Solar System and slightly longer distances which are still much shorter than the galactic radius but they start to break down at the galactic scale – and dark matter, MOND, or any other theory that would try to explain these facts simply has to agree. Much like dark matter, MOND also shows its muscles in the outskirts of the galaxy – when modifying the forces between objects whose distance is comparable to the galactic radius. You may declare this property "unnatural" but it's nevertheless a property that has been observed in Nature so the right adjective is surely "natural". It should better be natural in a correct theory of Nature, too. And good enough models of dark matter indeed make all these patterns technically natural!
One must also object to Musser's claim that it's mysterious why the matter in the outskirts isn't doing weirder things than just those that you expect from amplified gravity. There's nothing mysterious in physics (physics including dark matter or physics that is independent of dark matter) about this fact. The reason behind this fact is that electromagnetism and gravity are known to be the only two long-range forces in Nature and electromagnetism drops down more quickly for very long distances because the charge is neutralized and the residual forces follow a more quickly decreasing power law. So it's guaranteed that the very long-distance effects of any new object or any new term in the laws must be explainable by a (modified) gravitational field. Musser's attempt to use this well-understood fact against dark matter is totally illogical and wrong.
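To spell out the well-understood fact just mentioned with a formula (an illustration added here, not in the original post): a localized source whose net charge has been neutralized produces a far field that falls off at least one power of $r$ faster than the Coulomb field,

$$E_{\rm monopole} \sim \frac{1}{4\pi\varepsilon_0}\frac{q}{r^2}, \qquad E_{\rm dipole} \sim \frac{1}{4\pi\varepsilon_0}\frac{p}{r^3},$$

so once the positive and negative charges in stars and gas clouds cancel, only the rapidly decaying multipole fields survive, while gravity has no negative masses available for such a cancellation and its $1/r^2$ field remains the only long-range player at galactic distances.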
Musser's description is a mixture of layman's prejudices, half-truths, and downright lies. It's just nonsense. Dark matter doesn't have to be right but this kind of trash talk is nowhere close to being a piece of evidence that dark matter is wrong even if it is wrong – and no evidence that MOND is right even if it is right. Let us discuss one more paragraph about these irrational arguments, a comparison of dark matter with a gorilla:
Consequently, some astronomers and physicists suspect there may be no dark matter after all. If you notice the floorboards in your house are sagging, as if there is too much weight on them, you might conclude there is an 800-pound gorilla in the room with you. You see no gorilla, so it must be invisible. You hear no gorilla, so it must be silent. You smell no gorilla, so it must be odorless. After a while, the gorilla seems so improbably stealthy that you begin to think there must be some other explanation for the sagging floorboards—the house has settled, say. Likewise, perhaps the laws of gravity and motion which led astronomers to deduce dark matter are wrong. “I think dark matter will be a sign of another type of physics,” Verlinde said.
The fairy-tale about the gorilla is surely amusing but its application to dark matter is just wrong. Science actually explains perfectly naturally why dark matter is mute, odorless, tasteless, and invisible, among other things. Dark matter contains no sugar so it's not sweet; no salt, so it's not salty; no other compounds, so it's not bitter; no methane or ammonia so it's not stinking. Its vibrations occur at frequencies much lower than what the human ear may hear, so dark matter is silent. And so on.
Musser and Verlinde may have managed to prove that our houses are not filled by tons of smelly gorillas jumping around. Such a proof is a great achievement but it is not equivalent to the proof that there's no dark matter. Dark matter doesn't need to be smelly because metabolism isn't necessary for its "survival"; it doesn't have to exchange signals about bananas with other clouds of dark matter because it doesn't depend on eating the bananas etc. So there's really no sensible reason to a priori assume that dark matter is a loud and smelly exhibitionist. Musser's and Verlinde's "arguments" rely on the assumption that everything that is natural in the Universe must look like a gorilla but it simply ain't the case. I think that even many untrained smart kids in the kindergarten know many objects allowed or actively predicted by the laws of physics that are very different from a gorilla.
After a few introductory words about MOND, i.e. Modified Newtonian Dynamics, we learn
Astoundingly, Verlinde even derived the five-to-one ratio. “I started seeing this as a manifestation of this larger phase space,” he said.
Except that it wasn't possible to write a paper that would make it through the arXiv anti-crackpot filters that would present this five-to-one argument. At this moment, it's just a commercial without any accessible justification and chances are overwhelming that this status will never change.
There isn't any large phase space (whose size would be distance-dependent) associated with two non-black-hole objects that orbit each other. But even if you forget about the question whether the claim about such a big phase space could be right, there's something disturbing about the very words chosen in this sentence by Verlinde. Since the mid 1920s, we've known quantum mechanics that superseded phase spaces by Hilbert spaces. Verlinde deliberately avoids quantum mechanics. Of course, the fact that we talk about Hilbert spaces and not phase spaces is totally essential for anything that is related to holography or any other principles he claims to build upon, so it's really indefensible to neglect it if you claim to be finding a deeper meaning of holography.
I am afraid that the very fact that he talks about phase spaces is another piece of evidence that his work has never been addressed to the mature physicists per se and that earning tens of millions of dollars from idiots in various laymen's agencies for wrong but ambitious high-school-level physics pseudoscience has been his plan from the beginning.
Now, after mentioning Sean Carroll's superficial criticism of MOND, Musser promotes MOND by these sentences, among others:
I’m inclined to agree, but one thing gives me pause. MOND manages to account for a wide range of anomalous galactic motions with one simple formula. Even if MOND doesn’t overturn the laws of physics, it has shown that dark matter behaves in a simple way.
This is another untrue commercial. What MOND actually describes in a satisfactory way is just the dependence of a quantity like a velocity on two parameters or so – imagine a galaxy size and the radial distance. The ranges that have been observationally tested aren't too wide so one may fit the dependence on the two parameters by two constants encoding the trend – two exponents in a power law in two variables, if you wish. And both of these new parameters (exponents) had to be adjusted in MOND to give you the right theory. If the dependence or exponents had been different, one could design "another MOND" that would do an equally good job. MOND is just fitting some data – in a way that is almost guaranteed to be possible – while promoting a different paradigm.
I love patterns, even heuristically explained ones, but there's simply no nontrivial pattern that would be more predictive than the input data that had to be adjusted here and that could therefore be used as evidence for MOND. Claims to the contrary are just marketing.
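For readers who have never seen the "simple formula" written down, here is the standard MOND prescription (added for reference; the original text does not spell it out): Newtonian gravity is kept at large accelerations and modified below a critical acceleration $a_0$,

$$\mu\!\left(\frac{a}{a_0}\right) a = a_N = \frac{GM}{r^2}, \qquad \mu(x)\to 1 \ \ (x\gg 1), \qquad \mu(x)\to x \ \ (x\ll 1).$$

In the deep-MOND regime $a \ll a_0$ this becomes $a^2/a_0 = GM/r^2$, and with $a = v^2/r$ for a circular orbit one gets $v^4 = G M a_0$: a flat rotation curve whose asymptotic speed is set by the total baryonic mass and the single fitted constant $a_0 \approx 1.2\times 10^{-10}\ \mathrm{m\,s^{-2}}$.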
Let me omit a paragraph on the steady state cosmology that Verlinde also promotes – the justifications behind his claims are completely unknown (in literature and/or talks) and the final message without any beef just looks too stupid to deserve more than one sentence on TRF. So let's focus on Musser's final paragraph:
The grander his claims become, the less plausible they seem. Still, Verlinde has captured theorists’ sense that cosmological mysteries signal a new era of physics.
I don't think that the latter claim is right, either. In 1998 or so, cosmology entered a new data-laden era in which it became another high-precision science. That occurred despite the fact that the discipline was so close to philosophy and theology just decades earlier. Dark energy was observed; a theory with dark matter and dark energy explained a huge amount of very accurate observations including WMAP etc.
So the trend in cosmology of the recent decade has been exactly the opposite one than Musser suggests; it was a migration away from mysterious babbling towards quantitative theories with lots of bolts.
The smallness of the dark energy remains a bit mysterious but it doesn't really contradict our models, whether you find the anthropic principle legitimate or whether you believe that there's a more "technical" explanation of the vacuum selection. To summarize, I don't see any evidence whatsoever that there are "cosmological mysteries that signal a new era of physics". Unlike other broad morals sometimes offered to summarize the scientific research, this one is simply not supported by anything that has happened in cosmology – theoretical or experimental – and in quantum gravity or string theory.
The impulse to explain dark matter and dark energy as signatures of a deeper reality, rather than a bolt-on to current theories, arises not only in string theory but also in alternatives such as loop quantum gravity and causal set theory.
Another lie. String theory gives us absolutely no reason to think that dark matter should be more than a "bolt-on" to current theories; quite on the contrary, string theory models overwhelm us with dark matter candidates (LSPs or axions in the big stringy axiverse, to mention the two dominant bolts) and they make effective particle physics theories without any dark matter pretty unnatural. With some disclaimers, the same thing could be said about dark energy, too. Musser's very words "bolt-on" are clearly chosen to sound derogatory (and "not creative") but there exists absolutely no scientific justification for such derogatory remarks.
What loop quantum gravity and causal set theory "say" is completely irrelevant for these scientific questions because these theories are nowhere near the ability to describe related phenomena such as cosmology, certainly not in a deeper and more complete and accurate way than the previous theories such as the classical general relativity.
And if Verlinde is wrong and spacetime really is a root-level feature of our world, what other intuition will have to give way? What other thing that we thought we knew for sure is wrong?
The combination of two questions in the first part of the first sentence is just totally unfair. The claim that "Verlinde is wrong" has nothing to do with the claim that "spacetime is a root-level feature of our world" and they can't be mixed in this way. The spacetime is almost certainly emergent, at least to some extent. But Erik Verlinde has nothing to do with this insight; he didn't make it and he didn't derive it, surely not as the first physicist.
When we say that the spacetime is emergent or doomed, we're certainly not saying that anything that Verlinde has said since 2010 is right. These assertions have nothing to do with each other. If you attributed the fuzzy insight that "spacetime is doomed" to Maldacena or to Witten or to Susskind, you would be inaccurate but your comment would have a true core; if you credit Erik Verlinde with these things, it's just bullshit. In the same way, he hasn't invented the idea that we may try to look for widely believed intuitions that are wrong; science has been doing it from the very beginning, starting with Galileo (if I overlook his semi-scientific predecessors).
So even though Musser's article at least conveys the fact that true experts think that the paper is bunk, it is wrong at so many levels and it is such a flagrant example of "journalistic activism" that I am left utterly disappointed. The quality of science journalism has deteriorated dramatically – and it's much worse for most of Musser's colleagues.
And that's the memo.
Posted by Luboš Motl
Other texts on similar topics: alternative physics, science and society, stringy quantum gravity
snail feedback (27)
reader PlatoHagel said...
As I went through the information about Disqus they give descriptive examples of words you can select. A default? I saw the correlation to their example and identified it toward what was appearing here, as you say,"due to abuse, your message didn't appear."
So if it is a bug(natural default) we will see if it disappears. It is under "moderation selections entitled Blacklist."
How did you get your comments to appear on the side of your blog?
reader Luboš Motl said...
A disqus guy said it was a bug to be eliminated on the blog.disqus.com recent blog about the upgrade. The recent comments widget is installed via Tools in the Dashboard, go to
http://stuxnet-lumo.disqus.com/admin/tools/
where you replace stuxnet-lumo with your shortname.
reader hajoucha said...
Thank you for this update. Disqus seems to work quite well. Also there is an extra bonus, since the side panel "recent disqus comments" now shows which comment is under which blogpost as well (unlike "recent echo comments" panel).
reader hajoucha said...
just a test
reader Luboš Motl said...
Dear hajoucha, it works but I won't put your mailinator e-mail address to a white list because it's temporary and may be later used by others, can't it? So your comments will be waiting for moderation. If you invent an e-mail address that can't be guessed too easily, it may be a different story...
reader Luboš Motl said...
Good, hajoucha, but so far DISQUS has only replaced the built-in Blogger.com comments whose recent comments widget had the article names, too. See
http://motls.blogspot.com/2012/05/recent-slow-bloggercom-comments.html
reader Luboš Motl said...
test depth 5
reader Luboš Motl said...
test depth 6
reader George Christodoulides said...
in the beginning i could log on the google account and post but later it would not let me connect at all from the google account and that was the only twitter account i have since i don't use it.
reader Luboš Motl said...
Testing new DISQUS comments...
I suppose that $\LaTeX$ doesn't work.
reader Anonymous rat said...
Testing anonymous DISQUS 2012 comments.
reader Nobody at TRF said...
Testing anonymous comments.
reader Luboš Motl said...
Interesting. Here it uses the same dark background color and the same totally white foreground color as the blog itself - colors that have been optimized for maximum visibility. Maybe your monitor brightness and/or contrast is set to be wrong?
Also, press ctrl/+, perhaps several times, to increase the size of everything - magnify not only fonts. I don't know why white-on-black should be less readable than black-on-white, it's actually more readable for me, but the magnification should help.
reader PlatoHagel said...
Test.....I kinda wish my link on side bar would update....site is still lead by science even in face of subjective experience context information. Oh, and thanks of heads up on the use of Disqus... it is being implemented as well.
reader Guest said...
testing
reader Dilaton said...
Cool, now even I can make a slow comment :-)
reader George Christodoulides said...
the google+ account works on this one. i don't have to post as FUNPLAY
reader Luboš Motl said...
Haha, you could have always written your actual name instead of FUNPLAY, couldn't you?
Just to be sure, it's still possible to post under anonymous nicknames here. I won't use, remember, or transmit any e-mail addresses someone fills to DISQUS and I will be able to see unless it's agreed upon by the user.
reader Shannon said...
testing with ipad... Still impossible to download a picture for my avatar :-(
reader James Mayeau said...
Is this where we do the bitching, aiming scorn at things which the Boss has no control over?
reader Guest said...
test with avatar
reader James Mayeau said...
Oh my goodness! That's my real name.
I feel encumbered by a new sense of responsibility for the tone and content of my opinion.
Cats out of the bag.
reader Luboš Motl said...
You may still change it. I will forget it and the number of people who saw it is comparable to two! Or you may create a new account where you fill a different nick as a name, can't you? I would find it a bit unfortunate if your real name were blunting your tiger teeth. Note that Shannon became just Shannon - you wrote it so it showed up, right? ;-)
reader James Mayeau said...
It not like Mooney has a bounty on my head. Anyhow, I figure they'd come for Anthony Watts way before it came to that. ;)
reader PlatoHagel said...
I am not sure Lubos but I think all your gmail respondents are set to blacklist?
reader Luboš Motl said...
Why do you think so? If you mean the message "due to abuse, your message didn't appear", it's a bug of DISQUS 2012 that will be fixed soon. No posters or contacts with a human face are on my blacklist.
http://www.physicsforums.com/showthread.php?t=159293 | Physics Forums
Mentor
## Moment of Inertia: Solid Sphere
Hi,
So as not to lead anybody astray, I have decided to post a correction to my ill-fated attempt in this thread:
http://www.physicsforums.com/showthread.php?t=158832
to derive the moment of inertia of a solid sphere of uniform density and radius R.
$$dI = r_{\perp}^{2} dm = r_{\perp}^{2} \rho dV$$
where $r_{\perp}$ is the perpendicular distance to a point at r from the axis of rotation. Therefore, $r_{\perp} = |\mathbf{r}|\sin{\theta}$. Integrating over the volume,
$$I = \int \! \! \! \int \! \! \! \int_V \rho r_{\perp}^2 \,dV = \rho \int_0^{2\pi} \int_0^{\pi} \int_0^R (r \sin{\theta})^2 r^2 \sin{\theta} \,dr\,d \theta\, d\phi$$
$$= 2\pi \rho \int_0^{\pi} \sin^3{\theta}\, d\theta\, \int_0^R r^4\,dr$$
Make the substitution u = cos$\theta$.
$$= 2\pi \rho \frac{R^5}{5} \int_{-1}^{1} (1 - u^2)\,du = 2\pi \rho \frac{R^5}{5} \left(1 - (-1) - \left[\frac{u^3}{3}\right]_{-1}^1 \right)$$
$$= 2\pi\rho \frac{R^5}{5} \left(2 - \frac{2}{3}\right) = \underbrace{\frac{4}{3} \pi R^3 \rho}_{M} \left(\frac{2}{5}R^2\right)$$
$$= \frac{2}{5} MR^2$$
where M is the total mass of the sphere.
Please let me know if there is anything wrong with this derivation.
Thanks,
Cepheid
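(A quick numerical cross-check, added here and not part of the original thread: the short Python script below estimates the moment of inertia about the $z$-axis by Monte Carlo sampling of a uniform ball, and should print a value close to the exact $\frac{2}{5}MR^2 = 0.4$ for $M = R = 1$.)

```python
# Monte Carlo sanity check of I = (2/5) M R^2 for a uniform solid sphere.
# Illustrative addition, not part of the thread.
import random

def estimate_inertia(R=1.0, M=1.0, n_samples=400_000, seed=0):
    rng = random.Random(seed)
    accepted = 0
    sum_r_perp_sq = 0.0
    while accepted < n_samples:
        # sample uniformly in the bounding cube [-R, R]^3, keep points inside the ball
        x = rng.uniform(-R, R)
        y = rng.uniform(-R, R)
        z = rng.uniform(-R, R)
        if x*x + y*y + z*z <= R*R:
            accepted += 1
            sum_r_perp_sq += x*x + y*y   # squared distance from the rotation (z) axis
    # for uniform density, I = integral of r_perp^2 dm = M * <r_perp^2>
    return M * sum_r_perp_sq / n_samples

print(estimate_inertia())   # ~ 0.400 (Monte Carlo noise of order 1e-3)
print(0.4 * 1.0 * 1.0**2)   # exact 2/5 M R^2
```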
Interesting derivation, I never thought about doing it this way. I use cylindrical coordinates; it eliminates the $\sin(\theta)$ factor, but introduces a $\sqrt{R^2-r^2}$ in one of the limits of integration for the z variable. Anyway, it seems to look fine to me.
You got the right answer, didn't you?
I actually worked the same problem the same way just a couple weeks ago. So it looks good to me
Quote by arunma Interesting derivation, I never thought about doing it this way. I use cylindrical coordinates; it eliminates the $\sin(\theta)$ factor, but introduces a $\sqrt{R^2-r^2}$ in one of the limits of integration for the z variable. Anyway, it seems to look fine to me.
Can you post the derivation of it in cylindrical coordinates?
Cotufa is doing homework on "moment of inertia" of a uniform solid sphere and a uniform solid cylinder, and needs to solve it in both spherical and cylindrical coordinate systems. It won't help cotufa learn anything by looking at arunma's derivation. I recommend not posting that.
It is a very nice derivation but there is a small error in the limits of integration after substitution. The limits should be 1 to -1 (not -1 to 1). The answer will be the same (but would in fact be different if you used the other limits).
Quote by Bella3 It is a very nice derivation but there is a small error in the limits of integration after substitution. The limits should be 1 to -1 (not -1 to 1). The answer will be the same (but would in fact be different if you used the other limits).
Actually there isn't any error in the limits of integration. If you pay attention you will notice that:
$$\int_0^{\pi} \sin^3(\theta)\,d\theta = -\int^{-1}_1 (1-u^2)\,du = \int^{1}_{-1} (1-u^2)\,du.$$
It's all right.
Mentor Exactly. When I did the substitution u = cos(theta), I got du = -sin(theta)d(theta), and I absorbed that negative sign by flipping the limits of integration.
hi, erm can you explain how you get your integral terms in the triple integral step? i understand why you use (rsinθ )^2, which is your r^2(perpendicular) term, but why is there a r^2sinθ after that?
Mentor
Quote by quietrain hi, erm can you explain how you get your integral terms in the triple integral step? i understand why you use (rsinθ )^2, which is your r^2(perpendicular) term, but why is there a r^2sinθ after that?
A volume element in spherical coordinates is given by:
$$dV = r^2 \sin{\theta} \,dr\,d \theta\, d\phi$$
You can think of the volume element as an infinitesimal cube. One dimension of the cube, the length in the radial direction, is given by dr. The length in the azimuthal direction (the arc swept out by $d\phi$) is given by $r \sin \theta d\phi$. The third dimension of the cube (the arc swept out by $d\theta$) is given by $r d\theta$.
The following diagram, which I found just by Googling, will aid greatly in understanding *why* this is so. In particular, it explains where the sine factor comes from (it involves a projection onto the xy-plane). Be mindful, however, that whoever made this diagram uses $\rho$ instead of r, and his definitions of $\theta$ and $\phi$ are *switched* compared to mine.
http://www.spsu.edu/math/Dillon/Volu...oordinates.htm
oh now it all makes sense.. no wonder my lecturer said we had to take it for granted that for the projection on a xy plane of a circle, we had to change the dA to rdrd(θ) . i was wondering where the r came from. so when integrating for the area of a circle, i am actually integrating 1 dA = integrate 1 rdrd(θ) which is actually the radius times the arc length (rdθ term) where 0<θ<2pi so its like chopping the circle into tiny rectangular pieces and integrate over the entire circle so the integrand we are using is actually the radius(length) X arc length (width) is it?
Recognitions: Gold Member, Homework Help, Science Advisor
Quote by quietrain oh now it all makes sense.. no wonder my lecturer said we had to take it for granted that for the projection on a xy plane of a circle, we had to change the dA to rdrd(θ) . i was wondering where the r came from.
bad lecturer!
so its like chopping the circle into tiny rectangular pieces and integrate over the entire circle
Very good!
An even more precise way of looking at it would be to use the difference between two sectors of the circle, both with opening $d\theta$, but radii r and (r+dr).
Thus, we get:
$$dA=\frac{1}{2}(r+dr)^{2}d\theta-\frac{1}{2}r^{2}d\theta=(rdr+\frac{dr^{2}}{2})d\theta$$
But since the non-linear term in dr vanishes relative to the dr term as dr goes to 0, it follows that the appropriate area element is $r\,dr\,d\theta$.
so the AREA ELEMENT we are using is actually the radius(length) X arc length (width) is it?
Due to the linearization argument above, yes.
Mentor
Quote by quietrain oh now it all makes sense.. no wonder my lecturer said we had to take it for granted that for the projection on a xy plane of a circle, we had to change the dA to rdrd(θ) .
If I understand you right, you are now talking about a 2D case in which you are using polar coordinates (r,θ) to represent a circle in the plane.
Quote by quietrain i was wondering where the r came from.
Well now you know that, simply from definition of angular measure (in the radian system), rdθ is the arc swept out by angle dθ at radius r. Also you know that dA has to have units of area. r has units of length, and θ is dimensionless, so drdθ would not have had the right units.
Quote by quietrain so when integrating for the area of a circle, i am actually integrating 1 dA = integrate 1 rdrd(θ) which is actually the radius times the arc length (rdθ term) where 0<θ<2pi
Yes, the area of a circle of radius R can be calculated using polar coordinates in exactly the way you have described:
$$A = \int_A dA = \int_0^{2\pi} \int_0^R r\,dr\,d\theta = 2\pi \int_0^R r \, dr = 2\pi \cdot \frac{1}{2}R^2 = \pi R^2$$
Quote by quietrain so its like chopping the circle into tiny rectangular pieces and integrate over the entire circle
Essentially, yes.
Quote by quietrain so the integrand we are using is actually the radius(length) X arc length (width) is it?
Yes, where these are the length and width (respectively) of the infinitesimal area element dA.
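(Another small addition for anyone who wants to confirm the two measure elements discussed above symbolically; this assumes the sympy library is available.)

```python
# Symbolic check that the polar area element r dr dθ and the spherical volume
# element r^2 sin(θ) dr dθ dφ reproduce the familiar area and volume formulas.
import sympy as sp

r, theta, phi, R = sp.symbols('r theta phi R', positive=True)

area = sp.integrate(sp.integrate(r, (r, 0, R)), (theta, 0, 2*sp.pi))
volume = sp.integrate(
    sp.integrate(sp.integrate(r**2 * sp.sin(theta), (r, 0, R)), (theta, 0, sp.pi)),
    (phi, 0, 2*sp.pi))

print(area)    # pi*R**2
print(volume)  # 4*pi*R**3/3
```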
thanks everyone! oh btw, just a side question: when we talk about the double integral over a rectangle in the xy-plane, say bounded by the x-axis, y-axis, x=2, y=1, then when we do a double integral of any variable say X, we take the integral limits of dx to be 0 to 2, and for dy from 0 to 1. but why do we sometimes have to convert the limits in terms of x? for example, we integrate dx from 0 to 1, BUT integrate dy from 0 to 5x-6, where the y limits become expressed in x terms. so how do we know when to change the limits and when to just use the number? thanks. sry if its confusing
Mentor
Quote by quietrain so how do we know when to change the limits and when to just use the number? thanks. sry if its confusing
Well, I mean, if y is a well-defined function of x, then the boundaries of the region can be expressed entirely in terms of x, and the integral reduced to a single (one-dimensional) integral.
Quote by cepheid A volume element in spherical coordinates is given by: $$dV = r^2 \sin{\theta} \,dr\,d \theta\, d\phi$$
How come when you use this we get:
$$I = \int r^2dm = \rho\int^{2\pi}_{0}d\phi\int^{\pi}_{0}sin{\theta}d\theta\int^{R}_{0}r^4 dr = \frac{3M}{4\pi R^3}*2\pi*2*\frac{R^5}{5} = \frac{3}{5}MR^2$$
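(Note added here, not part of the thread: the two results answer different questions. $\int r^2\,dm$ weights each mass element by its squared distance from the center, while the moment of inertia about the $z$-axis uses the perpendicular distance $r_\perp$ from that axis. By the symmetry of the sphere,

$$\int r^2\,dm = \int (x^2+y^2+z^2)\,dm = \frac{3}{2}\int (x^2+y^2)\,dm = \frac{3}{2} I_z,$$

so $\frac{3}{5}MR^2 = \frac{3}{2}\cdot\frac{2}{5}MR^2$, which is consistent with the earlier derivation rather than in conflict with it.)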
http://mathoverflow.net/questions/28972?sort=newest | ## colimits of spectral sequences
I'm looking for some references about colimits of spectral sequences.
More precisely: let $X : I \longrightarrow \cal{C}$ be a functor from a filtered category $I$ to the category of double cochain complexes of an abelian category $\cal{C}$, in which filtered colimits exist and commute with cohomology.
Let $E_2(X_i)$ be the second page of the first-filtration spectral sequence associated to $X_i$. Assuming that the $X_i$ are right-half-plane double complexes, it weakly converges to $H^*(\mbox{Tot}^\prod X_i)$ for all $i$ (Weibel, "An introduction to homological algebra", page 142):
$$E_2(X_i) \Longrightarrow H^*(\mbox{Tot}^\prod X_i)\ ,$$
where $\mbox{Tot}^\prod$ is the total product complex,
$$(\mbox{Tot}^\prod X)^n = \prod_{p+q=n} X^{pq} \ .$$
For the same reason:
$$E_2(\underset{i}{\lim_\longrightarrow} X_i) \Longrightarrow H^*(\mbox{Tot}^\prod \underset{i}{\lim_\longrightarrow} X_i )\ .$$
Then, because of the exactness of $\displaystyle \lim_\longrightarrow$, we have
$$\underset{i}{\lim_\longrightarrow} E_2 (X_i) = E_2(\underset{i}{\lim_\longrightarrow} X_i) \ .$$
Then my question is: under which conditions can I assure that I have a comparison theorem like
$$\underset{i}{\lim_\longrightarrow} H^* (\mbox{Tot}^\prod X_i) = H^*(\mbox{Tot}^\prod \underset{i}{\lim_\longrightarrow} X_i) \quad \mbox{?}$$
Any hints or references will be appreciated.
-
Perhaps someone with editing power could fix the broken LaTeX? – JBL Jul 26 2010 at 18:45
Hmm. There seems to be a discrepancy between the way preview handles \varinjlim and the way it is rendered in posts, so I did a workaround. I'll look for the tex bug thread on meta. – S. Carnahan♦ Jul 26 2010 at 23:04
## 1 Answer
This is a nice question, so I'm not sure why it was never answered- maybe my answer is wrong and this question is harder than I thought? In any event here's my attempt:
Under fairly mild hypotheses on your spectral sequences (i.e. that they converge in the sense of Weibel 5.2.11), we have a comparison theorem which says that if a map of convergent spectral sequences is an isomorphism for some $r$, then it induces an isomorphism on the abutments. In particular, in this case I think that you definitely have a map of spectral sequences $\text{colim } E(X_i) \rightarrow E(\text{colim } X_i)$, and since it's an isomorphism at the $E_2$ page then it's an isomorphism from then on, so the theorem applies (at least in the case when the double complexes in question are, say, right half-plane or something). The comparison theorem is in Weibel, 5.2.12.
-
Thanks, Dylan. Later on, I found the answer I needed: it is the "colimit lemma" of Mitchell in "Hypercohomology spectra and Thomason's descent theorem." – Agusti Roig Feb 22 2011 at 1:02
http://mathoverflow.net/questions/6379?sort=oldest | ## What is an integrable system
What is an integrable system, and what is the significance of such systems? (Maybe it is easier to explain what is a non-integrable system.) In particular, is there a dichotomy between "integrable" and "chaotic"? (There is an interesting wikipedia article but I don't find it completely satisfying.)
Update (Dec 2010): Thanks for the many excellent answers. I came across another quote from Nigel Hitchin: "Integrability of a system of differential equations should manifest itself through some generally recognizable features:
• the existence of many conserved quantities
• the presence of algebraic geometry
• the ability to give explicit solutions.
These guidelines should be interpreted in a very broad sense."
(If there are some aspects mentioned by Hitchin not addressed by the current answers, additions are welcome...)
-
Very good answers! I'd love to see more angles to this important issue, which is why a little bounty is offered. – Gil Kalai Dec 4 2009 at 9:57
Excellent question, I think. But I'm stuck before we get to the "integrable" part. What is a "system"? I'd be glad if someone addressed this in their answer. – Tom Leinster Dec 4 2009 at 12:39
I believe that 'system' is in the same sense as 'dynamical system', which probably comes from 'system of differential equations'. – José Figueroa-O'Farrill Dec 4 2009 at 16:57
Thanks, José, but that doesn't really answer the question. People use "dynamical system" in a variety of ways. E.g. the wikipedia article en.wikipedia.org/wiki/…) gives the general definition as a partial action of a monoid on a set. An article by Adler in the Bulletin of the AMS defines it as a compact metric space with a continuous endomorphism. But I don't think that either of those definitions is what the answers below are referring to. Perhaps I should ask this as a separate question – Tom Leinster Dec 5 2009 at 18:32
The book by Hitchin, Segal, Ward and Woodhouse begins with this nice quote: "Integrable systems, what are they? It's not easy to answer precisely. The question can occupy a whole book (Zakharov 1991), or be dismissed as Louis Armstrong is reputed to have done once when asked what jazz was---'If you gotta ask, you'll never know!'" – HW Dec 27 2009 at 19:26
## 11 Answers
This is, of course, a very good question. I should preface with the disclaimer that despite having worked on some aspects of integrability, I do not consider myself an expert. However I have thought about this question on and (mostly) off.
I will restrict myself to integrability in classical (i.e., hamiltonian) mechanics, since quantum integrability has to my mind a very different flavour.
The standard definition, which you can find in the wikipedia article you linked to, is that of Liouville. Given a Poisson manifold $P$ parametrising the states of a mechanical system, a hamiltonian function $H \in C^\infty(P)$ defines a vector field $\lbrace H,-\rbrace$, whose flows are the classical trajectories of the system. A function $f \in C^\infty(P)$ which Poisson-commutes with $H$ is constant along the classical trajectories and hence is called a conserved quantity. The Jacobi identity for the Poisson bracket says that if $f,g \in C^\infty(P)$ are conserved quantities so is their Poisson bracket $\lbrace f,g\rbrace$. Two conserved quantities are said to be in involution if they Poisson-commute. The system is said to be classically integrable if it admits "as many as possible" independent conserved quantities $f_1,f_2,\dots$ in involution. Independence means that the set of points of $P$ where their derivatives $df_1,df_2,\dots$ are linearly independent is dense.
I'm being purposefully vague above. If $P$ is a finite-dimensional and symplectic, hence of even dimension $2n$, then "as many as possible" means $n$. (One can include $H$ among the conserved quantities.) However there are interesting infinite-dimensional examples (e.g., KdV hierarchy and its cousins) where $P$ is only Poisson and "as many as possible" means in practice an infinite number of conserved quantities. Also it is not strictly necessary for the conserved quantities to be in involution, but one can allow the Lie subalgebra of $C^\infty(P)$ they span to be solvable but nonabelian.
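To make the preceding definition concrete, here is the standard textbook example (an illustration added here, not part of the original answer). Take $P = \mathbb{R}^{2n}$ with canonical coordinates $(q_i,p_i)$ and the $n$-dimensional harmonic oscillator

$$H = \sum_{i=1}^{n} \tfrac{1}{2}\left(p_i^2 + \omega_i^2 q_i^2\right), \qquad f_i = \tfrac{1}{2}\left(p_i^2 + \omega_i^2 q_i^2\right).$$

Each $f_i$ depends only on the pair $(q_i,p_i)$, so with the canonical bracket $\{f,g\} = \sum_k \left(\frac{\partial f}{\partial q_k}\frac{\partial g}{\partial p_k} - \frac{\partial f}{\partial p_k}\frac{\partial g}{\partial q_k}\right)$ one finds $\{f_i,f_j\} = 0$ for all $i,j$, and $\{f_i,H\} = 0$ since $H = \sum_i f_i$. The $f_i$ are independent on a dense set, so the system is Liouville integrable; the joint level sets $\{f_i = c_i > 0\}$ are the invariant $n$-tori on which the motion is quasi-periodic.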
Now the reason that integrability seems to be such a slippery notion is that one can argue that "locally" any reasonable hamiltonian system is integrable in this sense. The hallmark of integrability, according to the practitioners anyway, seems to be coordinate-dependent. I mean this in the sense that $P$ is not usually given abstractly as a manifold, but comes with a given coordinate chart. Integrability then requires the conserved quantities to be written as local expressions (e.g., differential polynomials,...) of the given coordinates.
-
I don't think that one could say that there is a dichotomy between integrable and chaotic systems. There is certainly a huge chunk in the middle. By a chaotic system we often mean a system where trajectories of points deviate exponentially with time; a canonical example is the Arnold (or Anosov) cat map. In this case a generic trajectory is of course everywhere dense in the phase space. This is related to ergodicity (in the case when there is a measure preserved by the system). But of course not every ergodic system is chaotic. There are different degrees of chaos: mixing, strong mixing, etc.
On the contrary, for an integrable system the motion of every trajectory is quasi-periodic; it stays forever on a half-dimensional torus. Such systems are rare. A little perturbation of such a system is not integrable anymore. KAM theory describes the residue of integrability of the perturbation, while Arnol'd diffusion is about trajectories that don't move quasi-periodically anymore.
There is one amazing example due to Moser that shows how the cat map can "happen" on a degenerate level of an integrable system; see page 6 in
http://arxiv.org/PS_cache/arxiv/pdf/0810/0810.5713v1.pdf
-
I agree with this answer. In fact there is a memoir of the AMS by Markus and Meyer which shows that a generic Hamiltonian system is neither integrable nor ergodic, see books.google.fr/… – Thomas Sauvaget Nov 21 2009 at 18:30
I have found an article "Quantum signatures of an integrable system with a chaotic scattering map" here:
http://www.iop.org/EJ/abstract/0305-4470/28/6/008
So apparently some integrable systems can have chaotic scattering maps.
-
This is soft -- but I think of an integrable system as one whose dynamics are dominated by algebra. For finite dimensional integrable systems, the symmetries (related to conserved quantities by Noether's theorem) force the trajectories to live on half-dimensional tori. For infinite dimensional integrable systems, where the flow on the scattering data is isospectral, the symmetries force solutions to be n-soliton solutions plus dispersive modes.
There is a blog post of Terry Tao's (apologies for not having the link) which talks about how algebra is the right tool to understand structure while analysis is the right tool to understand randomness. The claim is that one mark of an good problem is the presence of an interesting relationship between structure and randomness and hence the requirement that both algebra and analysis be used -- to some degree -- in order to get a good answer to the problem. The soliton resolution conjecture is by this standard a good problem because the asymptotic n-soliton solutions are fundamentally algebraic while the dispersive modes are fundamentally analytic objects.
I agree with Dmitri that there isn't a dichotomy. The symmetries can have a large or small role in the dynamics as can the ergodicity.
-
I'll give a bit of a physics definition. (Reference is "A Brief Introduction to Classical, Statistical and Quantum Mechanics" by Bühler.)
"A mechanical system is called integrable if we can reduce its solution to a sequence of quadratures."
So, literally, an integrable system (in this view) is one that can be solved by a sequence of integrals (which may not be explicitly solvable in elementary functions, of course). To connect to other answers, this should only work out when there are enough symmetries for us to write down and integrate.
-
The above answers deal mostly with finite-dimensional systems. As for the (systems of) PDEs, you typically need the Lax pair or a zero curvature representation (see e.g. the Takhtajan--Faddeev book mentioned in the wikipedia entry you linked to for the definition of the latter) or something else like that. To the best of my knowledge, the complete understanding of what is an integrable system for the case of three (3D) or more independent variables is still missing. In particular, for the 3D case the overwhelming majority of examples are generalizations of the systems with two independent variables. These generalizations are constructed using the so-called central extension procedure (e.g. the KP equation is related to KdV in this way). As for the reading suggestions, in addition to the Takhtajan--Faddeev book cited above, you can look e.g. into a fairly recent book Introduction to classical integrable systems by Babelon, Bernard and Talon, and into the book Multi-Hamiltonian theory of dynamical systems by Maciej Blaszak which covers the central extension stuff in a pretty straightforward fashion. Both books have extensive bibliographies with further references to look into.
Now, as for classification and identification of (new) integrable systems of PDEs, at least in two independent variables, it turns out that the (infinitesimal higher) symmetries play an important role here. A recent collective monograph Integrability, edited by A.V. Mikhailov and published by Springer in 2009, could be a good starting point in this direction. See also another recent book Algebraic theory of differential equations edited by MacCallum and Mikhailov and published by Cambridge University Press. For a general introduction to the subject of symmetries of (systems of) PDEs, I can recommend the book Applications of Lie groups to differential equations by Peter Olver.
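For readers meeting the term for the first time, here is the finite-dimensional prototype of a Lax pair (added for orientation; in the PDE setting the matrices are replaced by differential operators): a pair of matrices $L(t)$, $M(t)$ such that the equations of motion take the form

$$\frac{dL}{dt} = [M, L] := ML - LM.$$

This implies $L(t) = g(t)\,L(0)\,g(t)^{-1}$ for a suitable invertible $g(t)$, so the spectrum of $L$ is conserved and the traces $\operatorname{tr} L^k$ provide a family of constants of motion.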
-
Your "3D" is better known as 1+2 (one time variable, two space variables). This is an important distinction both in the Lax pair formalism (time variable is preferred) and in zero curvature representation approach (applies primarily to 1+1). – Victor Protsak Jun 30 2010 at 6:33
I'll take off from the questioner's suggestion that maybe it's better to say what a NON-integrable system is.
The Newtonian planar three body problem, for most masses, has been proven to be non-integrable.
Before Poincare, there seemed to be a kind of general hope in the air that every autonomous Hamiltonian system was integrable. One of Poincare's big claims to fame, proved within his Les Methodes Nouvelles de Mecanique Celeste, was that the planar three-body problem is not completely integrable. It is the dynamical systems equivalent to Galois' work on quintics. Specifically, Poincare proved that besides the energy, angular momentum and linear momentum there are no other ANALYTIC functions on phase space which Poisson commute with the energy. (To be more careful: any 'other' such function is a function of energy, angular momentum, and linear momentum. And his proof, or its extensions, only holds in the parameter region where one of the masses dominates the other two. It is still possible that for very special masses and angular momenta/energies the system is integrable. No one believes this.) As best I can tell, existence of additional smooth integrals (with fractal-like level sets) is still open, at least in most cases.
Poincare's impossibility proof is based on his discovery of what is nowadays called a "homoclinic tangle" embedded within the restricted three body problem, viewed in a rotating frame. In this tangle, the unstable and stable manifolds of some point (an orbit in the non-rotating inertial frame) cross each other infinitely often, these crossing points having the point in their closure.
Roughly speaking, an additional integral would have to be constant along this complicated set. Now use the fact that if the zeros of an analytic function have an accumulation point then that function is zero to conclude that the function is zero.
Before Poincare (and I suppose since) mathematicians and in particular astronomers spent much energy searching for sequences of changes of variables which made the system "more and more integrable". Poincare realized the series defining their transformations were divergent -- hence his interest in divergent series.
This divergence problem is the "small denominators problem", and getting around it by putting number theoretic conditions on the frequencies appearing is at the heart of the KAM theorem.
-
The simple answer is that a 2n dimensional Hamiltonian system of ODE is integrable if it has n (functionally) independent constants of the motion that are "in involution". (Functionally independent means none of them can be written as a function of the others. And "in involution" means that their Poisson brackets all vanish---a somewhat technical condition I won't define carefully (* but see below), but instead refer you to: http://en.wikipedia.org/wiki/Poisson_bracket). The simplest and the motivating example is the n-dimensional Harmonic Oscillator. What makes integrable systems remarkable and interesting is that one can find so-called "action angle variables" for them, in terms of which the time-evolution of any orbit becomes transparent.
For a more detailed and modern discussion you may find an expository article I wrote in the Bulletin of the AMS useful. It is called "On the Symmetries of Solitons", and you can download it as pdf here:
http://www.ams.org/journals/bull/1997-34-04/S0273-0979-97-00732-5/
It is primarily about the infinite dimensional theory of integrable systems, like SGE (the Sine-Gordon Equation), KdV (Korteweg deVries) , and NLS (non-linear Schrodinger equation), but it starts out with an exposition of the classic finite dimensional theory.
• Here is a little bit about what the Poisson bracket of two functions is that explains its meaning and why two functions with vanishing Poisson bracket are said to "Poisson commute". Recall that in Hamiltonian mechanics there is a natural non-degenerate two-form $\omega = \sum_i dp_i \wedge dq_i$. This defines (by contraction with $\omega$) a bijective correspondence between vector fields and differential 1-forms. OK then---given two functions f and g, let F and G be the vector fields corresponding to the 1-forms df and dg. Then the Poisson bracket of f and g is the function h such that dh corresponds to the vector field [F,G], the usual commutator bracket of the vector fields F and G. Thus two functions Poisson commute iff the vector fields corresponding to their differentials commute, i.e., iff the flows defined by these vector fields commute. So if a Hamiltonian vector field (on a compact 2n-dimensional symplectic manifold M) is integrable, then it belongs to an n-dimensional family of commuting vector fields that generate a torus action on M. And this is where the action-angle variables come from: the level surfaces of the action variables are the torus orbits and the angle variables are the angle coordinates for the n-circles whose product gives a torus orbit.
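As a concrete illustration of the action-angle variables mentioned above (again an addition of mine, in one common sign convention): for a single harmonic oscillator $H = \tfrac{1}{2}(p^2 + \omega^2 q^2)$, the substitution

$$q = \sqrt{2I/\omega}\,\sin\theta, \qquad p = \sqrt{2I\omega}\,\cos\theta$$

is canonical and gives $H = \omega I$, hence $\dot I = -\partial H/\partial\theta = 0$ and $\dot\theta = \partial H/\partial I = \omega$. The level sets of the action $I$ are circles (one-dimensional tori), and the time evolution is a rotation along them at constant speed, which is exactly the transparent dynamics referred to above.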
-
Since it hasn't been mentioned yet, a short addition to José Figueroa-O'Farrill's answer. I will only talk about the finite dimensional case. So let's assume that $\dim(P) = 2n$. Then the Hamiltonian flow is integrable if there exist $n$ functions $f_1, \dots, f_n$ which are in involution with respect to the Poisson structure.
Now, the cool thing is that there exist action-angle coordinates. This means we can conjugate our possibly complicated dynamics to the simple dynamics $$\partial_t I_j = 0,\quad \partial_t \theta_j = \omega_j(I),\quad j=1,\dots,n,$$ which is something we can all solve, since the frequencies $\omega_j(I)$ are constant in time and the angles just advance linearly. Note: we will have $I_j = f_j(\text{orbit})$, which is time independent.
As a possible application, KAM theory is usually formulated for systems in action-angle coordinates. This in turn implies that integrable systems are stable under small perturbations (in a subtle measure theoretic sense). But I think this is what is meant by "integrable $\neq$ chaos". We have a great form of perturbation theory for integrable systems.
-
After reading several books and articles about integrable systems, and after several years of work in the field, I consider particularly meaningful the following quotation from Frederic Helein's book 'Constant mean curvature surfaces, harmonic maps and integrable systems', Lectures in Mathematics, ETH Zurich, Birkhauser Basel (2001):
"...working on completely integrable systems is based on a contemplation of some very exceptional equations which hide a Platonic structure: although these equations do not look trivial a priori, we shall discover that they are elementary, once we understand how they are encoded in the language of symplectic geometry, Lie groups and algebraic geometry. It will turn out that this contemplation is fruitful and lead to many results"
-
I would like to add one more example of integrability which refers to Hopf algebras and is probably the easiest to formulate. It naturally arises in spin chain physics, but can be treated abstractly as well. Consider a (semi-simple) Lie algebra $\mathfrak{g}$, its universal enveloping algebra $\mathfrak{h}=U(\mathfrak{g})$ and its Hopf double. The latter has a coproduct homomorphism $\Delta: \mathfrak{g}\to \mathfrak{g}\otimes\mathfrak{g}$. Now let us consider an operator $\mathfrak{R}$ (the so-called R-matrix) as the following mapping $\mathfrak{R}:\mathfrak{h}\otimes\mathfrak{h}\to\mathfrak{h}\otimes\mathfrak{h}$, meaning that $\mathfrak{R}$ is some tensor product of polynomials of elements from $\mathfrak{g}$. The integrability condition reads
$[\Delta,\mathfrak{R}]=0,$
viz. the coproduct should commute with the R-matrix. It is now a matter of several lines of simple calculations to show that the Yang-Baxter equation on the R-matrix, which is frequently referred to as the necessary condition for integrability, follows [see, e.g. Kassel, "Quantum Groups"].
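For reference (an addition, since the text only alludes to it), the Yang-Baxter equation in the standard "leg" notation reads

$$\mathfrak{R}_{12}\,\mathfrak{R}_{13}\,\mathfrak{R}_{23} = \mathfrak{R}_{23}\,\mathfrak{R}_{13}\,\mathfrak{R}_{12},$$

where $\mathfrak{R}_{ij}$ acts as $\mathfrak{R}$ on the $i$-th and $j$-th tensor factors of $\mathfrak{h}\otimes\mathfrak{h}\otimes\mathfrak{h}$ and as the identity on the remaining factor.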
-
http://mathoverflow.net/questions/77490?sort=votes | ## Homological computations
Suppose I have a group acting on some Hadamard manifold, and I want to understand as much as possible about the (co)homology of the quotient. In my case I can find a fundamental domain for the action and the side-pairing transformations. Is there some canned piece of software which will compute the homology groups (I know how to do this in principle, but would rather not actually do all the work) and the cohomology ring structure (in my ignorance, I don't know how to do this even in principle, so references would certainly be appreciated)?
**EDIT**
To elaborate slightly: think of the projective model of $\mathbb{H}^n,$ and a group (discrete, but not known a priori to act without fixed points), and construct the Dirichlet domain, which is a convex polytope, for which we have the side-pairing transformations. Now, clearly by triangulating the fundamental domain enough we get a simplicial decomposition of the quotient, but since all this is happening in high dimensions (five or higher), this is already a pain to program....
-
+1: I really hope this gets good answers. In this day and age we really should be able to get computers to do some of this work for us – David White Oct 7 2011 at 20:32
Can you give us the description of the fundamental domain and the glueing rule on the boundary? – Mariano Suárez-Alvarez Oct 7 2011 at 21:56
The website computop.org is a good source for computational topology software; maybe you'll find something there? – jc Oct 8 2011 at 4:06
@Mariano and @Vel: see the Edit for some more info... – Igor Rivin Oct 8 2011 at 9:55
@jc: thanks, will look! – Igor Rivin Oct 8 2011 at 9:57
## 2 Answers
What is the fundamental domain of the action? In case you can easily create a cubical or simplicial decomposition of this region, there is tons of software out there to help you with the grunt-work.
I would recommend the Computational Homology Project (CHomP) which is run by Konstantin Mischaikow's group at Rutgers. I believe he co-wrote the book on efficient homology computation. Here is a link to the CHomP website and some documentation, I think you will want to use the program called "homsimpl" in case of a simplicial decomposition and "homcubes" in the (highly!?) unlikely case that your fundamental domain can be represented as a union of axis-parallel cubes with integer vertices.
It's been a while since I have personally used this code, but I think it is dimension-independent. I know for a fact that it is written in C++, so you will need a compiler such as gcc to build it from source. That being said, binaries are available here for most non-obscure operating systems.
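In case it helps to see in principle what such programs compute, here is a minimal sketch I am adding (it is not part of CHomP, and it works with rational coefficients only, so it cannot detect torsion): Betti numbers read off from the ranks of the simplicial boundary matrices.

```python
# Betti numbers over Q from boundary-matrix ranks: b_k = dim C_k - rank d_k - rank d_{k+1}.
# Minimal illustration only; field coefficients, so torsion is invisible.
import numpy as np

def boundary_matrix(k_simplices, km1_simplices):
    """Matrix of the boundary map from k-chains to (k-1)-chains; simplices are sorted tuples."""
    index = {s: i for i, s in enumerate(km1_simplices)}
    D = np.zeros((len(km1_simplices), len(k_simplices)))
    for j, s in enumerate(k_simplices):
        for i in range(len(s)):
            face = s[:i] + s[i+1:]
            D[index[face], j] = (-1) ** i
    return D

def betti_numbers(simplices_by_dim):
    ranks = [np.linalg.matrix_rank(boundary_matrix(simplices_by_dim[k], simplices_by_dim[k - 1]))
             for k in range(1, len(simplices_by_dim))] + [0]
    betti = []
    for k, simplices in enumerate(simplices_by_dim):
        rank_dk = ranks[k - 1] if k >= 1 else 0
        rank_dk1 = ranks[k]
        betti.append(len(simplices) - rank_dk - rank_dk1)
    return betti

# Hollow triangle = a circle: one connected component, one 1-dimensional hole.
circle = [[(0,), (1,), (2,)], [(0, 1), (0, 2), (1, 2)]]
print(betti_numbers(circle))   # [1, 1]
```

A real tool such as homsimpl works over the integers (so it also finds torsion) and uses far better data structures and reduction algorithms, but the boundary-matrix picture above is the underlying principle.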
As far as the cup product computation is concerned, I suspect that you will run into some difficulties. For cubical complexes, there are results (pdf) of Kaczynski (also a co-author of the Computational Homology book) which give pseudo-code (see page 6) but I must confess that I have been unable to find a software implementation. For simplices the situation is considerably harder because in general the product of two simplices is not a simplex, and so there are technical difficulties in constructing the Kunneth map. It is difficult to see how software will bypass this issue for arbitrary complexes, but I would love to test-drive an implementation if one exists.
All the best with your computations!
-
Thanks! It is clear that in my case one can get the simplicial decomposition, but this requires more work (see edit). – Igor Rivin Oct 8 2011 at 9:55
I'm not sure how much this will help you, but one person who seems pretty active in creating software to compute homology and cohomology is Robert Ghrist. You can find his website here and an MO post where I discuss his work here.
I'm also including some links below which I've used in the past to do some cohomology computations. These computations used spectral sequences, so this might not help you as much as Ghrist's work, but I figured it could not hurt to include these links. I'm sorry if this turns out not to help:
Bob Bruner's code and tables
Christian Nassau's website for more modern approaches to the same types of computations
-
@David: thanks! I am familiar with some of Rob's work; I don't think he has anything directly relevant, but I will certainly talk to him... – Igor Rivin Oct 8 2011 at 9:56
http://math.stackexchange.com/questions/99680/domains-of-continuity/100374 | Domains of continuity
I was playing around with the definition of uniform continuity, and realized that a nice application of it is the possibility to extend functions.
For example, suppose we are given a uniformly continuous function $f:\mathbb{Q}\to\mathbb{R}$.
By uniform continuity, it is easy to see that such a function extends (uniquely of course) to a continuous function $f:\mathbb{R}\to\mathbb{R}$.
If we drop the uniform continuity assumption and demand only that $f$ to be continuous, this is no longer true, as easily demonstrated by $f(x) = \frac{1}{x-\pi}$ which is continuous on $\mathbb{Q}$ but cannot be extended to a continuous function on all of $\mathbb{R}$.
Which brings me to my question: Is there a nice description of the sets $A\subseteq \mathbb{R}$ which have the following property: there is a function $f:A\to \mathbb{R}$ which is continuous, but for any $x \notin A$, $f$ cannot be extended to a continuous function on $A\cup \{x\}$ ?
Certainly open sets have this property, because if $A$ is open and $B$ is its complement, then we may define $f:A\to \mathbb{R}$ by $f(x) = \frac{1}{dist(x,B)}$.
Conversely, are all such sets open?
Edit: per Robert Israel's nice examples, it appears that not all such sets are open. I still wonder if there is a nice description of these sets?
-
Not all open sets, just the dense ones. If $x \notin \overline{A}$ you can extend $f$ to a continuous function on $A \cup \{x\}$ by choosing $f(x)$ arbitrarily. – Robert Israel Jan 16 '12 at 22:16
Right, my bad... – anonymous Jan 16 '12 at 22:22
I'd be interested to know whether the rationals, or the irrationals, have this property. – Nate Eldredge Jan 17 '12 at 1:19
The irrationals do have the property. Let $r_j, j \in \mathbb N$ be an enumeration of the rationals, and define $f$ on the irrationals by $f(x) = \sum_{j=1}^\infty 2^{-j} \sin(1/(x - r_j))$. This has the desired property. – Robert Israel Jan 17 '12 at 8:42
The rationals, on the other hand, do not have the property. Suppose $f$ is a continuous function on the rationals, and let $r_j$ be an enumeration of the rationals. It's easy to construct a sequence of closed intervals $I_j$ such that $r_j \notin I_j$, $I_{j+1} \subset \text{Int}(I_j)$, $\text{length}(I_j) < 1/j$ and $|f(x) - f(y)|< 1/j$ for all rationals $x,y \in I_j$. The intersection of the $I_j$ consists of an irrational number to which $f$ can be extended continuously. – Robert Israel Jan 17 '12 at 22:59
2 Answers
The sets with the property are exactly the dense $G_\delta$'s.
1) Suppose $A \subset \mathbb R$, and $f$ a function defined on $A$. Let $G$ be the set of points $x$ such that either $x$ is not a limit point of $A$ or $\lim_{t \to x} f(t)$ exists. Then $G = \bigcap_{n \in {\mathbb N}} U_n$ where $$U_n = \{x \in {\mathbb R}: \text{ there is }\delta > 0 \text{ such that for all } y,z \in A \cap (x-\delta, x+\delta),\ |f(y) - f(z)| < 1/n\}$$ Note that $U_n$ is open, so $G$ is a $G_\delta$ set. If $A$ has the property, we must have $A = G$, because $f$ can be extended continuously to any member of $G$ not in $A$; in particular, $A$ must be a $G_\delta$. As previously noted, $A$ must be dense.
2) If $A$ is a dense $G_\delta$, write $A^c = \bigcup_{n} E_n$ where $E_n$ is closed and nowhere dense. Define $$f(x) = \sum_{n} 3^{-n} \sin(1/\text{dist}(x,E_n))$$. Since each summand is continuous on $A$ and the series converges uniformly there, $f$ is continuous on $A$. On the other hand, if $x \notin A$, say $x \in E_n \backslash \bigcup_{m < n} E_m$, there are points $y,z \in A$ arbitrarily close to $x$ with $|f(y) - f(z)| > 3^{-n-1}$.
-
This generalizes easily to ${\mathbb R}^n$. How far can it be pushed? What about complete metric spaces? – Robert Israel Jan 20 '12 at 2:38
No, they are not all open. For example, if $A^c = \{b_i: i \in {\mathbb N}\}$ is a discrete set (i.e. all points isolated), then $A$ has this property: let $g(x) = \sin(\pi/x)$ for $0 <|x|<1$, $0$ for $|x|>1$, and take $f(x) = \sum_{i=1}^\infty 2^{-i} g((x - b_i)/r_i)$ where $r_i$ are chosen so that the intervals $[b_i - r_i, b_i + r_i]$ are disjoint. But $A$ will not be open if the sequence $b_i$ has a limit point.
-
Thanks. So my question remains - is there a nice description of these sets? – anonymous Jan 16 '12 at 22:37
http://physics.stackexchange.com/questions/24912/can-the-apparent-equal-size-of-sun-and-moon-be-explained-or-is-this-a-coincidenc/24913 | # Can the apparent equal size of sun and moon be explained or is this a coincidence?
Is there a possible explanation for the apparent equal size of sun and moon or is this a coincidence?
(An explanation can involve something like tide-lock effects or the anthropic principle.)
-
This is a very good question, but I can't think of a single semi-reasonable argument toward it, so all you'll get is religious or pseudo-scientific arguments. But that doesn't make, by itself, the question any less good, mind you. – lurscher Oct 7 '12 at 3:26
## 6 Answers
It just happens to be a coincidence.
The current popular theory for how the Moon formed was a glancing impact on the Earth, late in the planet building process, by a Mars-sized object. This caused the break-up of the impactor, and debris from both the impactor and the proto-Earth was flung into orbit to later coalesce into the Moon. So the Moon's size just happens to be random.
Plus the Moon was formed closer to the Earth and due to tidal interactions is slowly drifting away. Over time (astronomical time, millions and millions of years) it will appear smaller and smaller in the sky. It will still always be roughly the size of the Sun but total solar eclipses will become rarer and rarer (they will be more and more annular or partial). Likewise in the past, it was larger and total eclipses were both longer and more common.
-
Both the moon's and the earth's orbits are eccentric, and so the ratio between the sun's and moon's apparent diameters varies over the year. When the moon is at perigee and the earth at aphelion, the moon appears larger relative to the sun than when the moon is at apogee and the earth at perihelion.
However, the eccentricities of these orbits are low, and the moon always seems "about the same" as the sun. This is a coincidence, both that its size is what it is, and that we're here to observe it. The moon is receding and will continue to recede from the earth. Eventually, the moon will appear smaller than the sun's disk and won't be able to completely eclipse it anymore.
It seems like this question has its roots in intelligent design (I've heard this argument made in favor of ID before). Were the moon designed to be the same size as the sun's disk, you would think that the earth's and moon's orbits wouldn't be eccentric, and that the moon wouldn't gradually recede from the earth, in order to preserve that symmetry.
-
I didn't know that this question has been used by intelligent designers and I want to preemptively distance myself from them. I have heard about the moon being viewed as possibly conducive to the evolution of life, so my thoughts were more along the lines of the best tidal effects and temperature corridors for life making the apparent same size likely. – Phira Jun 1 '11 at 20:00
ah, I wasn't making any implications towards you. The moon was of course very important in the development of life, but I don't think its apparent diameter would have contributed to that. – Carson Myers Jun 1 '11 at 20:01
Short Answer:
Yes, similar apparent size equates to similar tidal forces from astronomical bodies. The fact that the moon exerts similar tidal forces as the sun is correlated with the existence of life on Earth.
Details:
I will write a short list of parameters. These are attributes of a planet that may reasonably be necessary for life to evolve. Let me be clear that these are not understood well, but that's the job of exo-biology among many other sciences.
• Sufficient time, more than 1 billion years for life to evolve (and no moon-sized impacts during this time)
• A rotation that gives a "short" day length
Now, we must also consult the numerical evidence. The day length today is 24 hours by definition. Around 3 billion years ago, the day length was closer to 17 hours, give or take an hour (I've referenced a paper that says this in a prior answer, so I might be able to find it later). The reason the day became longer was due to the Earth throwing the moon further out, as well as the interaction with the sun.
Let me introduce another constraint. No planet is formed from the proto-planetary disk with a rotation anywhere close to 2 hours. This value of 2 hours is special in terms of universal constants because it represents (roughly) the shortest day length at which a body may hold itself together through self-gravitation. The argument can be made looking at the divergence of the gravitational field compared to that of the false gravitational field from acceleration. The divergence of the gravitational field reduces to a simple term containing only density. The false field from acceleration can be found from the rotational potential $d/dr 1/2 \omega^2 r^2 = \omega^2 r$, and the divergence of this goes to a constant $\omega^2$. Equate the two divergences, assume Earth density, rearrange for period, and you find a value like 48 minutes. This is the period at which the average surface gravity is zero. Of course this is absurd because that would imply negative gravity regions, which is impossible for large bodies. Practically, you set a better limit by asking for the rotation at which the equator on a sphere would have zero gravity, and this would be close to two hours. This is wrong because the planet deforms, but that is a very mathematically challenging complication. In reality though, you would never approach the 2 hour spin in the hectic planetary formation disk.
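A quick numerical sanity check of those two figures (an editorial sketch of the arithmetic as I read it, assuming a rigid uniform sphere with Earth's mean density; real planets deform, so this is order-of-magnitude only):

```python
# Editorial sketch: reproduce the "~48 minutes" and "close to two hours"
# spin limits quoted above for a rigid uniform sphere of Earth's mean density.
import math

G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
rho = 5514.0      # Earth's mean density, kg/m^3

# Equating the divergences of the two fields: omega^2 ~ 4*pi*G*rho
T_divergence = 2 * math.pi / math.sqrt(4 * math.pi * G * rho)

# Zero net gravity at the equator: omega^2 * R = (4/3)*pi*G*rho * R
T_equator = 2 * math.pi / math.sqrt(4.0 / 3.0 * math.pi * G * rho)

print(f"{T_divergence / 60:.0f} minutes")   # ~49 min, the "48 minutes" ballpark
print(f"{T_equator / 3600:.1f} hours")      # ~1.4 h, i.e. "close to two hours"
```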
What I'm getting to is that you extrapolate from two different directions. One, our predicted spin of proto-Earth, and the speeds we would reasonably expect from an early planet, working back from the limits imposed by gravitational divergence. You probably get something close to 12 hours on the Earth historical limit, and something close to 6 hours with the physics limit.
My lengthy wording here is to answer "why not faster?" The answer is because of physical limits. It is difficult to envision a planet starting with a rotation much faster than the Earth, and certainly not by an order of magnitude.
The question of "why not slower?" is easier to answer from a biology standpoint. Complex life on Earth has always been challenged to cope with the varying day/night temperature differences, as shown in the different (cold-blooded vs warm-blooded) approaches to the problem. Any longer day would make this challenge all the more difficult for life, and could prevent its emergence altogether.
Now let's use math. An astronomical body has a certain diameter $\theta R$, the angle we see it occupy times its distance from us. The density of astronomical bodies varies, but not by an order of magnitude. In the case of the sun and the moon it's about a factor of two. To keep the answer short, we'll call it even, if we take the cubed root it won't matter much. So now we can approximate the relative mass.
$$M \propto \theta^3 R^3$$
The force of gravity is a $1/r^2$ force, this tells us that the force of the sun on Earth is greater than the force of the moon on Earth. This is correct. The tidal force, however, varies by $1/r^3$. This tells us that the tidal force from the moon on the Earth is about the same as the force of the sun on the Earth. This is also correct.
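A back-of-the-envelope check of that claim with standard handbook numbers (editorial illustration): since mass scales as density times $(\theta R)^3$, the tidal factor $M/R^3$ is proportional to $\rho\,\theta^3$, so only density and apparent size enter.

```python
# Editorial check: tidal strength scales as M/r^3, equivalently as rho*theta^3.
# Standard handbook values; angular diameters in degrees.
import math

#        mass (kg)   distance (m)  density (kg/m^3)  apparent diameter (deg)
sun  = (1.989e30,    1.496e11,     1408.0,           0.533)
moon = (7.342e22,    3.844e8,      3344.0,           0.518)

def tidal(body):            # M / r^3
    M, r, _, _ = body
    return M / r**3

def density_angle(body):    # rho * theta^3, proportional to M / r^3
    _, _, rho, deg = body
    return rho * math.radians(deg)**3

print(tidal(moon) / tidal(sun))                    # ~2.2: lunar tides a bit over twice solar
print(density_angle(moon) / density_angle(sun))    # ~2.2 again, from density and angular size alone
```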
Whether the sun is closer or further is irrelevant with this simplification. The only parameter that matters to tidal forces is the distance to diameter ratio. Now we can start asking:
Why not a larger (apparent) moon - The empirical arguments here clearly point to the fact that a larger moon in terms of appearance would result in a longer day length, and this would not be as good for the evolution of life. You can't solve this problem by starting out with a faster rotation at formation because gravity isn't strong enough. It turns out the moon slows down the Earth at about the same rate as the sun, with other anthropic factors wanting to see a larger moon, this is the point of diminishing marginal returns for getting the shorter day. It would be unhelpful for the moon to appear much smaller because Earth would still only rotate a few hours faster since the sun slows the Earth down anyway.
Why not a smaller (apparent) moon - Not my area, but probably asteroid protection. With asteroids as major a threat as they are, life probably needs a moon as large as possible, and it looks like that's what it got.
-
i like this idea – lurscher Oct 8 '12 at 3:48
That's reasonable for approximately the same size, but how do we get to almost the same? Does the closeness in the tidal forces from the sun and the moon result in a simpler tidal pattern? In other words, the lunar and solar tides appear to be a single tide that shifts through the day. A more complex pattern of two tides could be too difficult for primitive organisms to develop predictive mechanisms, and so would not be able to develop internal clocks to cope with migration on the beach, and so would wash away. – jcohen79 Oct 8 '12 at 3:57
@jcohen79 Maybe, practically anything is possible when talking about life, but I don't find it compelling at this time. Tides are somewhat chaotic and the alignment of the solar/lunar forces change throughout the month, although I can't even begin to address why not 2 moons? Many people argue tides are necessary for life, but frankly I think life would be fine with the regular tidal action of the sun. In my personal experience, I don't see why most organisms would care. You get storm surges and tsunamis either way. – AlanSE Oct 8 '12 at 4:07
@AlanSE As you point out, a moon should help with clearing asteroids. So a solar tide only is not an option. I think tides are compelling, because for evolution of complex features you would need an environment that is sheltered from the random events like storm surges, because randomness reduces the benefit of very small differences between individuals, and thus would slow evolution. A sheltered location would benefit from tides to remove waste and exchange nutrients and new genetic variations. The minimum number of tides per day would result in greater distance and volume of exchange. – jcohen79 Oct 8 '12 at 4:33
The real value of the moon vis-a-vis life is that its large mass stabilizes the earth's obliquity (the 23.5deg tilt to the sun that stabilizes climate). Mars on the other hand, has effectively no moon, and has a chaotic obliquity, which can wreak havoc on long-term climate.
-
Interesting subject but I'm not sure this addresses the question at all. – Robert Cartaino♦ Jun 1 '11 at 21:20
the answer to the question is easy: accident – Jeremy Jun 1 '11 at 21:28
I've heard this before but I don't understand how a planet's axis of rotation can change much while conserving angular momentum. – Hugh Allen Jul 3 '12 at 5:11
@HughAllen True on both accounts. This would be a good question in itself. There exists transfer of angular momentum in the solar system's long term dynamics. All I can say is that it must be a considerably high order effect, similar to tidal forces. – AlanSE Oct 7 '12 at 19:03
Havoc on long term climate can be good for life. We have had major changes on Earth due to continental drift and ocean currents (snowball earth, forests at the south pole, ice ages) and it just gives evolution something to work at. – Martin Beckett Oct 7 '12 at 23:33
Of course the easy answer is "Purely a coincidence" but for those that think there is no such thing, the situation does lend itself to some fascinating speculation. In this case, let us rephrase the question, "With the Moon slowly and steadily orbiting farther and farther away from the Earth [It's picking up angular momentum from Earth's rotation], why is it that sentient, arguably civilized creatures happened to evolve at just the precise geologic moment that the two most prominent objects in the sky are approximately the same angular diameter?"
Without venturing too far into "2001: A Space Odyssey" territory, it's not out of the realm of possibility to wonder if the spectacular phenomenon of total eclipses might have nudged a fledgling creature in the direction of more sophisticated thought processes or a primitive tribe into considering more carefully the passage of time and patterns of nature, a critical component in first developing agriculture.
I can't imagine any kind of anthropological proof more convincing than noting the coincidental timing as above, so I think the theory will always be nothing more than a speculative flight of fancy. You never know, though. ;)
-
There would be even more total solar eclipses if the Moon were closer, and therefore appeared larger in the sky than the Sun does. The coincidental match does mean that the corona is more clearly visible during an eclipse, but with a closer or larger Moon part of the corona would still be visible near the beginning and end of an eclipse. – Keith Thompson Feb 18 '12 at 2:14
Don't take this too seriously, or anything but a poetical divagation.
Maybe the presence of a corona in a solar eclipse and the existence of an almost perfect sun-light-day and moon-darkness-night symmetry was a trigger of abstract thinking in the primitive mammals that were ready to benefit from such a stimulus, and the first suggestion of the existence of profound symmetries in our universe.
According to the above, even if the symmetry is just an illusion, it was the trigger for primate brains to conceive religious abstract thought. Religious thinking is maybe so deeply ingrained in our genomes precisely because it provided the right mixture of social cohesiveness and the roots of abstract thinking needed to survive long enough to establish agricultural societies.
-
http://mathoverflow.net/questions/42429/weak-metric-space | ## weak metric space
In the definition of a metric space, replace the triangle inequality by the weaker inequality
d (x, z) ≤ C max {d (x, y), d (y, z)},
where C is a positive constant (depending on the "metric", but not on the points x, y, z). Have structures like this ever been studied?
One can associate a more or less natural topology to this "metric", calling a set open if every point belonging to the set has a ball of positive radius, centered at this point and contained in the set. But I cannot say much on this topology. For instance, it is not obvious (and may be not true) that a ball is an open set. Neither could I prove that this topology is Hausdorff.
Any information, reference, etc. would be welcome.
-
When $C=1$ I think this might hold for $p$-adic numbers... in which case the condition is stronger than the usual metric space condition, and makes $p$-adic analysis a bit easier. – Dylan Wilson Oct 16 2010 at 23:51
Yes, for C=1, this is just the so-called ultrametric en.wikipedia.org/wiki/Ultrametric_space Though why it is called (or should be called) ultrametric is not fully clear to me. – S. Sra Oct 17 2010 at 8:30
## 1 Answer
Yes, they were introduced in valuation theory by Emil Artin and remain present in many contemporary treatments, including mine: see
http://math.uga.edu/~pete/8410Chapter1.pdf
especially Section 1.2 and
http://math.uga.edu/~pete/8410Chapter2.pdf
[Added: as RW has justly pointed out, my answer here makes sense in the context of valuation theory only. The procedure that I give from passing from a "weak metric" to a metric is not going to work in general, I think, but only in the presence of some additional algebraic structure. If I were the OP, I might not choose this as the accepted answer, or at least not yet.]
The basic idea here is to consider two such guys equivalent if one can be obtained from the other by the operation of raising $d$ to some positive real number power $\alpha$. In such a way, one can make the constant $C \leq 2$ in which case one gets an actual metric. The topology one gets in this way is easily seen to be independent of the choice of $\alpha$.
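Spelled out a little (editorial paraphrase; as the comments note, the last step leans on the algebraic structure and does not carry over to a bare "weak metric"): if $d(x,z) \le C \max\{d(x,y), d(y,z)\}$, then for any $\alpha > 0$,

$$d(x,z)^{\alpha} \le C^{\alpha} \max\{d(x,y)^{\alpha},\, d(y,z)^{\alpha}\},$$

so passing from $d$ to $d^{\alpha}$ replaces $C$ by $C^{\alpha}$, and choosing $\alpha = \log 2 / \log C$ (when $C > 2$) brings the constant down to $2$. In the valuation-theoretic setting one then applies Artin's classical lemma: an absolute value satisfying $|x+y| \le 2\max\{|x|, |y|\}$ already satisfies the ordinary triangle inequality.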
As it happens, when writing up these notes for a course I taught last spring I also thought a little bit about trying to define "balls" with respect to such a weak metric (i.e., without first renormalizing). I didn't get anywhere with this either.
Finally, I should say that in valuation theory at least, these "weak metrics" (in the valuation theoretic context I called them "Artin absolute values" and then after the course was over decided to change to the terminology to just "absolute values", while retaining the word "norm" for such a guy which was actually a metric) come up as a useful tool but are not really studied on their own or in any deep way: I have yet to see a text on valuation theory where weak norms appear after page 20 or so. Whether they may have wider applicability in some other context, I couldn't say...
-
The arguments you refer to (that for $C\le 2$ one gets an actual metric) heavily depend on an additional algebraic structure. I don't see how they would work in the general case. – R W Oct 17 2010 at 1:14
@RW: That may well be. I have only encountered the phenomenon in the setting of valuation theory, and I don't mean to imply that what I say works in the general case. Perhaps this should not be the accepted answer... – Pete L. Clark Oct 17 2010 at 2:56
http://www.physicsforums.com/showthread.php?p=3456487 | Physics Forums
## Relativistic Momentum Help With Equation Reduction
Hi, so basically I have been looking at this and working my way through the maths for myself. However, I have hit this point and can't get past it:
\begin{align}
u = \frac{v - u}{1-\frac{uv}{c^2}}
\end{align}
Which should be able to become:
\begin{align}
u = \frac{c^2}{v(1-\sqrt{1-\frac{v^2}{c^2}})}
\end{align}
I have tried and tried but can't seem to get it to work. Can anyone help me out on this one please?!
Thanks
James
Recognitions:
Gold Member
Homework Help
Quote by jimbobian Hi, so basically have been looking at this and working my way through the maths for myself. However I have hit this point and can't get past it: \begin{align} u = \frac{v - u}{1-\frac{uv}{c^2}} \end{align} Which should be able to become: \begin{align} u = \frac{c^2}{v(1-\sqrt{1-\frac{v^2}{c^2}})} \end{align} I have tried and tried but can't seem to get it to work. Can anyone help me out on this one please?! Thanks James
That wiki entry is not being clear with its order of operations!
What you should be getting is:
$$u = \frac{c^2}{v}(1-\sqrt{1-\frac{v^2}{c^2}})$$
Quote by G01 That wiki entry is not being clear with it's order of operations! What you should be getting is: $$u = \frac{c^2}{v}(1-\sqrt{1-\frac{v^2}{c^2}})$$
Hi G01, thanks for your reply. Have tried aiming for that equation instead and still can't get there. I have tried all sorts of different approaches, such as getting it in the form of a quadratic or dividing the top and bottom of the original fraction by c^2, but I just end up nowhere. Could you perhaps give me a hand in the right direction, maybe the first step or two - but don't make it too easy for me!
Mentor
Show us exactly what you've tried, and where you get stuck, and tell us why you're stuck.
Ok, have shut my computer down now and am replying on my phone. I will post my attempts tomorrow. Thanks, James
Well, brute force use of the quadratic formula leads directly to the desired answer.
Haha, went back over my general quadratic attempt after PAllen's suggestion and realised that I had made a rather fundamental cross multiplication error! Fixed that and got to the desired equation. Lovely! Last question, the equation that I get is: \begin{align} u = \frac{c^2}{v}(1\pm\sqrt{1-\frac{v^2}{c^2}}) \end{align} I assume that we choose the negative version of the equation, because the positive version would yield a value for u which is greater than c? Cheers
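(Editorial note, spelling out the quadratic route for completeness: clearing the denominator in the original relation gives

$$u\left(1 - \frac{uv}{c^2}\right) = v - u \quad\Longrightarrow\quad \frac{v}{c^2}\,u^2 - 2u + v = 0,$$

and the quadratic formula then yields

$$u = \frac{c^2}{v}\left(1 \pm \sqrt{1 - \frac{v^2}{c^2}}\,\right).$$

Since the plus sign gives u ≥ c²/v ≥ c, only the minus sign corresponds to a physical speed, confirming the choice above.)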
http://physics.stackexchange.com/questions/32422/qm-without-complex-numbers/32468 | # QM without complex numbers
I am trying to understand how complex numbers made their way into QM. Can we have a theory of the same physics without complex numbers? If so, is the theory using complex numbers easier?
-
– Luboš Motl Jul 20 '12 at 5:14
– Luboš Motl Jul 20 '12 at 9:43
I suggest that before discussing further, you agree on a meaning of fundamental. I personally see no reason why "useful, easy to handle" would have anything to do with it and I don't even think it has something to do with how powerful they (the complex numbers) might be. Furthermore, I also don't see the relevance of the possibility of formulating them in a more abstract framework to this discussion. – Nick Kidman Jul 20 '12 at 13:52
There are no number in Nature at all... – Kostya Jul 20 '12 at 16:43
@Dushya ... the reference "fundamental" here for math is the fact that $\mathbb{C}$ is a field extension of $\mathbb{R}$, and not the other way around. There is nothing more to be said about this. – Chris Gerig Jul 20 '12 at 17:40
## 9 Answers
The nature of complex numbers in QM turned up in a recent discussion, and I got called a stupid hack for questioning their relevance. Mainly for therapeutic reasons, I wrote up my take on the issue:
# On the Role of Complex Numbers in Quantum Mechanics
## Motivation
It has been claimed that one of the defining characteristics that separate the quantum world from the classical one is the use of complex numbers. It's dogma, and there's some truth to it, but it's not the whole story:
While complex numbers necessarily turn up as first-class citizen of the quantum world, I'll argue that our old friend the reals shouldn't be underestimated.
## A bird's eye view of quantum mechanics
In the algebraic formulation, we have a set of observables of a quantum system that comes with the structure of a real vector space. The states of our system can be realized as normalized positive (thus necessarily real) linear functionals on that space.
In the wave-function formulation, the Schrödinger equation is manifestly complex and acts on complex-valued functions. However, it is written in terms of ordinary partial derivatives of real variables and separates into two coupled real equations - the continuity equation for the probability amplitude and a Hamilton-Jacobi-type equation for the phase angle.
The manifestly real model of 2-state quantum systems is well known.
## Complex and Real Algebraic Formulation
Let's take a look at how we end up with complex numbers in the algebraic formulation:
We complexify the space of observables and make it into a $C^*$-algebra. We then go ahead and represent it by linear operators on a complex Hilbert space (GNS construction).
Pure states end up as complex rays, mixed ones as density operators.
However, that's not the only way to do it:
We can let the real space be real and endow it with the structure of a Lie-Jordan-Algebra. We then go ahead and represent it by linear operators on a real Hilbert space (Hilbert-Schmidt construction).
Both pure and mixed states will end up as real rays. While the pure ones are necessarily unique, the mixed ones in general are not.
## The Reason for Complexity
Even in manifestly real formulations, the complex structure is still there, but in disguise:
There's a 2-out-of-3 property connecting the unitary group $U(n)$ with the orthogonal group $O(2n)$, the symplectic group $Sp(2n,\mathbb R)$ and the complex general linear group $GL(n,\mathbb C)$: If two of the last three are present and compatible, you'll get the third one for free.
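Concretely (editorial gloss): viewing $\mathbb C^n$ as $\mathbb R^{2n}$ and splitting the Hermitian inner product into real and imaginary parts, $\langle u, v\rangle = g(u,v) + i\,\omega(u,v)$, gives a symmetric form $g$ (the Euclidean metric), an antisymmetric form $\omega$ (the standard symplectic form) and the complex structure $J$ (multiplication by $i$), related up to sign conventions by $\omega(u,v) = g(Ju,v)$. Preserving any two of them forces preservation of the third, which is the statement $U(n) = O(2n)\cap Sp(2n,\mathbb R) = O(2n)\cap GL(n,\mathbb C) = Sp(2n,\mathbb R)\cap GL(n,\mathbb C)$.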
An example for this is the Lie-bracket and Jordan product: Together with a compatibility condition, these are enough to reconstruct the associative product of the $C^*$-algebra.
Another instance of this is the Kähler structure of the projective complex Hilbert space taken as a real manifold, which is what you end up with when you remove the gauge freedom from your representation of pure states:
It comes with a symplectic product which specifies the dynamics via Hamiltonian vector fields, and a Riemannian metric that gives you probabilities. Make them compatible and you'll get an implicitly-defined almost-complex structure.
Quantum mechanics is unitary, with the symplectic structure being responsible for the dynamics, the orthogonal structure being responsible for probabilities and the complex structure connecting these two. It can be realized on both real and complex spaces in reasonably natural ways, but all structure is necessarily present, even if not manifestly so.
## Conclusion
Is the preference for complex spaces just a historical accident? Not really. The complex formulation is a simplification as structure gets pushed down into the scalars of our theory, and there's a certain elegance to unifying two real structures into a single complex one.
On the other hand, one could argue that it doesn't make sense to mix structures responsible for distinct features of our theory (dynamics and probabilities), or that introducing un-observables to our algebra is a design smell as preferably we should only use interior operations.
While we'll probably keep doing quantum mechanics in terms of complex realizations, one should keep in mind that the theory can be made manifestly real. This fact shouldn't really surprise anyone who has taken the bird's eye view instead of just looking through the blinders of specific formalisms.
-
If you don't like complex numbers, you can use pairs of real numbers (x,y). You can "add" two pairs by (x,y)+(z,w) = (x+z,y+w), and you can "multiply" two pairs by (x,y) * (z,w) = (xz-yw, xw+yz). (If you don't think that multiplication should work that way, you can call this operation "shmultiplication" instead.)
Now you can do anything in quantum mechanics. Wavefunctions are represented by vectors where each entry is a pair of real numbers. (Or you can say that wavefunctions are represented by a pair of real vectors.) Operators are represented by matrices where each entry is a pair of real numbers, or alternatively operators are represented by a pair of real matrices. Shmultiplication is used in many formulas. Etc. Etc.
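A minimal sketch in Python, just to make the bookkeeping concrete (editorial illustration; the function names and the tiny example are made up):

```python
# Arithmetic on pairs of reals ("shmultiplication"), compared with built-in complex numbers.
def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    x, y = a
    z, w = b
    return (x*z - y*w, x*w + y*z)

i = (0.0, 1.0)
print(mul(i, i))            # (-1.0, 0.0): the pair playing the role of i squares to "minus one"

a, b = (1.0, 2.0), (3.0, -4.0)
print(mul(a, b))            # (11.0, 2.0)
print((1+2j) * (3-4j))      # (11+2j) -- same numbers, different notation
```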
I'm sure you see that these are exactly the same as complex numbers. (see Lubos's comment: "a contrived machinery that imitates complex numbers") They are "complex numbers for people who have philosophical problems with complex numbers". But it would make more sense to get over those philosophical problems. :-)
-
+1 on schmultiplication – Emilio Pisanty Jul 20 '12 at 13:31
But doesn't just change his question to "QM without shmultiplication"? – Alfred Centauri Jul 20 '12 at 14:00
I do like complex numbers a lot. They are extremely useful and convenient, in connection to the fundamental theorem of algebra, for example, or when working with waves. I'm just trying to understand. – Frank Jul 20 '12 at 15:30
Alfred - yes. That would be the point. I was wondering if there could be, I don't know, a matrix formulation of the same physics that would use another tool (matrices) than complex numbers. Again, I have no problem with complex numbers and I love them. – Frank Jul 20 '12 at 15:53
also note that you can model QM on a space of states on a sphere in $\mathbb{C}^n$ with radius $|x|^2+|y|^2+...=1$. These spheres have dimension $2n$ for the reals. – kηives Jul 20 '12 at 16:53
The complex numbers in quantum mechanics are mostly a fake. They can be replaced everywhere by real numbers, but you need to have two wavefunctions to encode the real and imaginary parts. The reason is just because the eigenvalues of the time evolution operator $e^{iHt}$ are complex, so the real and imaginary parts are degenerate pairs which mix by rotation, and you can relabel them using i.
The reason you know i is fake is that not every physical symmetry respects the complex structure. Time reversal changes the sign of "i". The operation of time reversal does this because it is reversing the sense in which the real and imaginary parts of the eigenvectors rotate into each other, but without reversing the sign of energy (since a time reversed state has the same energy, not negative of the energy).
This property means that the "i" you see in quantum mechanics can be thought of as shorthand for the matrix (0,1;-1,0), which is algebraically equivalent, and then you can use real and imaginary part wavefunctions. Then time reversal is simple to understand--- it's an orthogonal transformation that takes i to -i, so it doesn't commute with i.
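A two-minute check of that shorthand (editorial illustration, not code from the answer):

```python
# The 2x2 matrix standing in for "i", and complex conjugation as a real matrix.
import numpy as np

J = np.array([[0., 1.], [-1., 0.]])   # the matrix standing in for i
K = np.diag([1., -1.])                # the matrix standing in for complex conjugation

print(np.allclose(J @ J, -np.eye(2)))   # True: J^2 = -1
print(np.allclose(K @ J @ K, -J))       # True: conjugation flips the sign of "i"

# a + b*i  <->  a*I + b*J is multiplicative, so the pair (I, J) really does copy C:
def M(z):
    return z.real * np.eye(2) + z.imag * J

z, w = 1 + 2j, 3 - 4j
print(np.allclose(M(z) @ M(w), M(z * w)))   # True
```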
The proper way to ask "why i" is to ask why the i operator, considered as a matrix, commutes with all physical observables. In other words, why are states doubled in quantum mechanics in indistinguishable pairs. The reason we can use it as a c-number imaginary unit is because it has this property. By construction, i commutes with H, but the question is why it must commute with everything else.
One way to understand this is to consider two finite dimensional systems with isolated Hamiltonians $H_1$ and $H_2$, with an interaction Hamiltonian $f(t)H_i$. These must interact in such a way that if you freeze the interaction at any one time, so that $f(t)$ rises to a constant and stays there, the result is going to be a meaningful quantum system, with nonzero energy. If there is any point where $H_i(t)$ doesn't commute with the i operator, there will be energy states which cannot rotate in time, because they have no partner of the same energy to rotate into. Such states must be necessarily of zero energy. The only zero energy state is the vacuum, so this is not possible.
You conclude that any mixing through an interaction hamiltonian between two quantum systems must respect the i structure, so entangling two systems to do a measurement on one will equally entangle with the two state which together make the complex state.
It is possible to truncate quantum mechanics (at least for sure in a pure bosonic theory with a real Hamiltonian, that is, PT symmetric) so that the ground state (and only the ground state) has exactly zero energy, and doesn't have a partner. For a bosonic system, the ground state wavefunction is real and positive, and if it has energy zero, it will never need the imaginary partner to mix with. Such a truncation happens naturally in the analytic continuation of SUSY QM systems with unbroken SUSY.
-
Complex numbers "show up" in many areas such as, for example, AC analysis in electrical engineering and Fourier analysis of real functions.
The complex exponential, $e^{st},\ s = \sigma + i\omega$ shows up in differential equations, Laplace transforms etc.
Actually, it just shouldn't be all that surprising that complex numbers are used in QM; they're ubiquitous in other areas of physics and engineering.
And yes, using complex numbers makes many problems far easier to solve and to understand.
I particularly enjoyed this book (written by an EE) which gives many enlightening examples of using complex numbers to greatly simplify problems.
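For instance (an editorial toy example with made-up component values), the steady-state response of a series RC circuit driven at angular frequency $\omega$ drops out of one complex division:

```python
# Editorial toy example: phasor analysis of a series RC circuit driven by V0*cos(w*t).
import cmath

R, C = 1e3, 1e-6                    # ohms, farads (made-up values)
w, V0 = 2 * cmath.pi * 500, 5.0     # rad/s, volts

Z = R + 1 / (1j * w * C)            # complex impedance of the series combination
I = V0 / Z                          # phasor current
# actual current: i(t) = |I| * cos(w*t + phase)
print(abs(I), cmath.phase(I))       # ~4.8 mA amplitude, current leads by ~0.31 rad
```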
-
I guess I'm wondering if those complex numbers are "intrinsic" or just an arbitrary computing device that happens to be effective. – Frank Jul 20 '12 at 2:56
@Frank: you could ask the same thing about the real numbers. Who ever measured anything to be precisely $\sqrt 2$ meters, anyhow? – Niel de Beaudrap Jul 20 '12 at 4:29
What does it mean though that complex numbers "appear" in AC circuit analysis? The essence of AC is a sinusoidal driving components. You could say the nature of these components come from geometry factors, made electrical from a dot product in generators. Once we have sinusoidal variables interacting in an electrical circuit, we know the utility of complex numbers. That, in turn, comes from the equations. What does that all mean though? – AlanSE Jul 20 '12 at 13:36
It means that if the sources in the circuit are all of the form $e^{st}$, the voltages and currents in the circuit will be of that form. This follows from the nature of the differential equations that represent the circuit. The fact that we choose to set $s = j\omega$ for AC analysis and then select only the real part of the solutions as a "reality" constraint doesn't change the mathematical fact that the differential equations describing the circuit have complex exponential solutions. – Alfred Centauri Jul 20 '12 at 13:49
Alan - it probably means nothing. It happens to be a tool that so far works pretty well. – Frank Jul 20 '12 at 15:55
Frank, I would suggest buying or borrowing a copy of Richard Feynman's QED: The Strange Theory of Light and Matter. Or, you can just go directly to the online New Zealand video version of the lectures that gave rise to the book.
In QED you will see how Feynman dispenses with complex numbers entirely, and instead describes the wave functions of photons (light particles) as nothing more than clock-like dials that rotate as they move through space. In a book-version footnote he mentions in passing "oh by the way, complex numbers are really good for representing the situation of dials that rotate as they move through space," but he intentionally avoids making the exact equivalence that is tacit or at least implied in many textbooks. Feynman is quite clear on one point: It's the rotation-of-phase as you move through space that is the more fundamental physical concept for describing quantum mechanics, not the complex numbers themselves.[1]
I should be quick to point out that Feynman was not disrespecting the remarkable usefulness of complex numbers for describing physical phenomena. Far from it! He was fascinated, for example, by the complex-plane equation known as Euler's Identity, $e^{i\pi} = -1$ (or, equivalently, $e^{i\pi} + 1 = 0$), and considered it one of the most profound equations in all of mathematics.
It's just that Feynman in QED wanted to emphasize the remarkable conceptual simplicity of some of the most fundamental concepts of modern physics. In QED for example, he goes on to use his little clock dials to show how in principle his entire method for predicting the behavior of electrodynamic fields and systems could be done using such moving dials.
That's not practical of course, but that was never Feynman's point in the first place. His message in QED was more akin to this: Hold on tight to simplicity when simplicity is available! Always build up the more complicated things from that simplicity, rather than replacing simplicity with complexity. That way, when you see something horribly and seemingly unsolvable, that little voice can kick in and say "I know that the simple principle I learned still has to be in this mess, somewhere! So all I have to do is find it, and all of this showy snowy blowy razzamatazz will disappear!"
[1] Ironically, since physical dials have a particularly simple form of circular symmetry in which all dial positions (phases) are absolutely identical in all properties, you could argue that such dials provide a more accurate way to represent quantum phase than complex numbers. That's because as with the dials, a quantum phase in a real system seems to have absolutely nothing at all unique about it -- one "dial position" is as good as any other one, just as long as all of the phases maintain the same positions relative to each other. In contrast, if you use a complex number to represent a quantum phase, there is a subtle structural asymmetry that shows up if you do certain operations such as squaring the number (phase). If you do that to a complex number, then for example the clock position represented by $1$ (call it 3pm) stays at $1$, while in contrast the clock position represented by $-1$ (9pm) turns into a $1$ (3pm). This is no big deal in a properly set up equation, but that curious small asymmetry is definitely not part of the physically detectable quantum phase. So in that sense, representing such a phase by using a complex number adds a small bit of mathematical "noise" that is not in the physical system.
-
Yes, we can have a theory of the same physics without complex numbers (without using pairs of real functions instead of complex functions), at least in some of the most important general quantum theories. For example, Schrödinger (Nature (London) 169, 538 (1952)) noted that one can make a scalar wavefunction real by a gauge transform. Furthermore, surprisingly, the Dirac equation in electromagnetic field is generally equivalent to a fourth-order partial differential equation for just one complex component, which component can also be made real by a gauge transform (http://akhmeteli.org/wp-content/uploads/2011/08/JMAPAQ528082303_1.pdf (an article published in the Journal of Mathematical Physics) or http://arxiv.org/abs/1008.4828 ).
-
I am not very well versed in the history, but I believe that people doing classical wave physics had long since noted the close correspondence between the many $\sin \theta$s and $\cos \theta$s flying around their equations and the behavior of $e^{i \theta}$. In fact most wave-related calculations can be done with less hassle in the exponential form.
Then in the early history of quantum mechanics we find things described in terms of de Broglie's matter waves.
And it works which is really the final word on the matter.
Finally, all the math involving complex numbers can be decomposed into compound operations on real numbers, so you can obviously re-formulate the theory in those terms, but there is no reason to think that you will gain anything in terms of ease or insight.
-
Can a complex infinite dimensional Hilbert space be written as a real Hilbert space with complex structure ? It seems plausible that it can be done but could there be any problems due to infinite dimensionality ? – user10001 Jul 20 '12 at 2:45
The underlying field you choose, $\mathbb{C}$ or $\mathbb{R}$, for your vector space probably has nothing to do with its dimensionality. – Frank Jul 20 '12 at 2:52
@dushya: There are no problems due to infinite dimensionality, the space is separable and can be approximated by finite dimensional subspaces. – Ron Maimon Jul 20 '12 at 18:37
Thanks Ron. I had only some vague idea that it should be the case. – user10001 Jul 20 '12 at 18:57
Just to put complex numbers in context, A.A. Albert edited "Studies in Modern Algebra" - from the Mathematical Assn of America. C is one of the Normed Division Algebras - of which there are only four: R,C,H and O. One can do a search for "composition algebras" - of which C is one.
-
Update: This answer has been superseded by my second one. I'll leave it as-is for now as it is more concrete in some places. If a moderator thinks it should be deleted, feel free to do so.
I do not know of any simple answer to your question - any simple answer I have encountered so far wasn't really convincing.
Take the Schrödinger equation, which does contain the imaginary unit explicitly. However, if you write the wave function in polar form, you'll arrive at a (mostly) equivalent system of two real equations: The continuity equation together with another one that looks remarkably like a Hamilton-Jacobi equation.
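To spell that out (editorial sketch of the standard polar substitution): writing $\psi = \sqrt{\rho}\,e^{iS/\hbar}$ and separating the Schrödinger equation into its real and imaginary parts gives

$$\frac{\partial \rho}{\partial t} + \nabla\cdot\left(\rho\,\frac{\nabla S}{m}\right) = 0, \qquad \frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}} = 0,$$

a continuity equation for $\rho = |\psi|^2$ and a Hamilton–Jacobi equation with an extra quantum-potential term.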
Then there's the argument that the commutator of two observables is anti-hermitian. However, the observables form a real Lie-algebra with bracket $-i[\cdot,\cdot]$, which Dirac calls the quantum Poisson bracket.
All expectation values are of course real, and any state $\psi$ can be characterized by the real-valued function $$P_\psi(·) = |\langle \psi,·\rangle|^2$$
For example, the qubit does have a real description, but I do not know if this can be generalized to other quantum systems.
I used to believe that we need complex Hilbert spaces to get a unique characterization of operators in your observable algebra by their expectation values.
In particular, $$\langle\psi,A\psi\rangle = \langle\psi,B\psi\rangle \;\;\forall\psi \Rightarrow A=B$$ only holds for complex vector spaces.
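A concrete instance of the failure on a real space (editorial example): on $\mathbb R^2$ the antisymmetric operator $A = \left(\begin{smallmatrix}0 & 1\\ -1 & 0\end{smallmatrix}\right)$ satisfies $\langle\psi, A\psi\rangle = 0$ for every real $\psi$, so $A$ and the zero operator have identical expectation values although $A \neq 0$; on a complex space the polarization identity reconstructs every matrix element $\langle\varphi, A\psi\rangle$ from the diagonal values, so this cannot happen.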
Of course, you then impose the additional restriction that expectation values should be real and thus end up with self-adjoint operators.
For real vector spaces, the latter automatically holds. However, if you impose the former condition, you end up with self-adjoint operators as well; if your conditions are real expectation values and a unique representation of observables, there's no need to prefer complex over real spaces.
The most convincing argument I've heard so far is that linear superposition of quantum states doesn't only depend on the quotient of the absolute values of the coefficients $|α|/|β|$, but also their phase difference $\arg(α) - \arg(β)$.
Update: There's another geometric argument which I came across recently and find reasonably convincing: The description of quantum states as vectors in a Hilbert space is redundant - we need to go to the projective space to get rid of this gauge freedom. The real and imaginary parts of the hermitian product induce a metric and a symplectic structure on the projective space - in fact, projective complex Hilbert spaces are Kähler manifolds. While the metric structure is responsible for probabilities, the symplectic one provides the dynamics via Hamilton's equations. Because of the 2-out-of-3 property, requiring the metric and symplectic structures to be compatible will get us an almost-complex structure for free.
-
You don't need polar form, just take the real and imaginary parts. – Ron Maimon Jul 20 '12 at 10:00
The most convincing argument I've heard so far is that since there are "waves" in QM, the complex-number formulation happens to be convenient and efficient. – Frank Jul 20 '12 at 15:56
http://mathhelpforum.com/new-users/207778-new-user-need-help-trigonometry-equation-physics-tan-x-b-sin-x.html | # Thread:
1. ## New user, need help with a trigonometry equation for physics Tan x = A + B sin x
Hi, I am a physics teacher and am working on a general solution to the problem of the swing carousel ride. Most physics students learn to solve the problem of finding the angle of the swings given the tangential speed, or vice versa. This is easy.
A harder task is to find the angle as a function of the angular velocity and the radius of the platform from which the swings hang. Looking only for the steady-state solution, I decided I would have to work from the non-inertial rotating frame of reference and use the centrifugal pseudoforce. This makes it easier to get an equation for the angle that depends only on the constant parameters: angular speed, platform radius, chain length, and gravity g.
the problem is I don't know how to solve for the angle. I could get a numerical solution but that is not what I want.
The equation boils down to
Tan x = A + B sin x
where A and B are constants that depend on the parameters. What should I do to solve for x?
Thanks for any help!
Eric
2. ## Re: New user, need help with a trigonometry equation for physics Tan x = A + B sin x
Hi Eric!
You can rewrite tan x = A + B sin x as:
${\sin x \over \pm \sqrt{1-\sin^2 x}} = A + B \sin x$
Substitute y=sin x, and you get:
${y \over \pm \sqrt{1-y^2}} = A + B y$
which yields:
$y^2 = (A + B y)^2 (1 - y^2)$
This is a 4th order polynomial, which can be solved as explained for instance here: Quartic function - Wikipedia, the free encyclopedia
This may be more involved than you'd like, although I have to say that the article makes it appear more difficult than it actually is.
Afterward, you get the answer with $x = \arcsin y$.
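A quick numerical cross-check of this route (editorial sketch with made-up values of $A$ and $B$ — for the carousel they would presumably be $A=\omega^2 R/g$ and $B=\omega^2 L/g$ from the force balance, but plug in your own numbers):

```python
# Numerical sketch of the quartic route for tan(x) = A + B*sin(x).
import numpy as np

A, B = 0.9, 0.6   # example values only

# Build y^2 - (A + B*y)^2 * (1 - y^2) as a polynomial in y = sin(x),
# letting the library do the expansion to avoid algebra slips.
y = np.polynomial.Polynomial([0.0, 1.0])
quartic = y**2 - (A + B*y)**2 * (1 - y**2)

candidates = [r.real for r in quartic.roots() if abs(r.imag) < 1e-9 and abs(r.real) <= 1]

# Squaring introduced spurious roots, so keep only those satisfying the original
# equation (taking x = arcsin(y) in (-pi/2, pi/2), where cos x > 0).
for yv in candidates:
    x = np.arcsin(yv)
    if np.isclose(np.tan(x), A + B*np.sin(x)):
        print("x =", x, "rad  =", np.degrees(x), "deg")   # ~0.95 rad (~54 deg) for these A, B
```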
Is this what you had in mind, or were you hoping for an easier solution?
3. ## Re: New user, need help with a trigonometry equation for physics Tan x = A + B sin x
Post trig problems in the trig forum, please. The lobby is for introductions only.
4. ## Re: New user, need help with a trigonometry equation for physics Tan x = A + B sin x
Thanks very much for the idea. I should have thought of that with the tan function but I would have had to ask anyway for help with the next step. I will have a look at the Quartic function - Wikipedia, the free encyclopedia suggestion.
Cheers,
Eric
5. ## Re: New user, need help with a trigonometry equation for physics Tan x = A + B sin x
You're welcome!
Till next time (probably in a different sub forum ;-)).
http://stochastix.wordpress.com/2008/11/22/building-a-polynomial-from-its-roots/ | Rod Carvalho
Building a polynomial from its roots
Suppose we are given a set of $n \geq 2$ distinct real numbers $\mathcal{R} = \{r_1, r_2, \ldots, r_n\}$, and we build a monic univariate polynomial (over field $\mathbb{R}$) of degree $n$ whose $n$ distinct roots are the elements of set $\mathcal{R}$
$p_n(x) = \displaystyle\prod_{i=1}^n (x - r_i) = \displaystyle\sum_{m=0}^n a_{m,n} x^m$,
where $a_{m,n} \in \mathbb{R}$ is the coefficient of monomial $x^m$. The second subscript in $a_{m,n}$ tells us that the coefficient corresponds to a polynomial whose degree is $n$.
Problem: for $n \geq 2$, what are the values of the coefficients $\{a_{m,n}\}_{m=0}^{n}$ in terms of the roots $\{r_i\}_{i=1}^n$?
At first glance, this problem seems elementary. However, I had never thought of it and I found it quite interesting. I prefer to think of examples before attempting to see the “big picture”, so let us consider the simplest cases.
_____
Case $n=1$
The monic polynomial of degree $1$ whose root is $r_1$ is
$p_1(x) = x - r_1$
and the coefficients are
$\left[\begin{array}{c}a_{0,1}\\a_{1,1}\end{array}\right] = \left[\begin{array}{c} -r_1\\1\end{array}\right]$.
_____
Case $n=2$
The monic polynomial of degree $2$ whose roots are $r_1, r_2$ is
$\begin{array}{rl} p_2(x) &= (x - r_1) (x - r_2)\\ &= x^2 - (r_1 + r_2) x + r_1 r_2\end{array}$
and the coefficients are
$\left[\begin{array}{c}a_{0,2}\\a_{1,2}\\a_{2,2}\end{array}\right] = \left[\begin{array}{c}r_1 r_2\\-(r_1+r_2)\\1\end{array}\right]$.
_____
Case $n=3$
The monic polynomial of degree $3$ whose roots are $r_1, r_2, r_3$ is
$\begin{array}{rl} p_3(x) &= (x - r_1) (x - r_2) (x - r_3)\\ &= x^3 - (r_1+r_2 + r_3) x^2 + (r_1 r_2 + r_1 r_3 + r_2 r_3) x - r_1 r_2 r_3\\\end{array}$
and the coefficients are
$\left[\begin{array}{c}a_{0,3}\\a_{1,3}\\a_{2,3}\\a_{3,3}\end{array}\right] = \left[\begin{array}{c} - r_1 r_2 r_3\\ r_1 r_2 + r_1 r_3 + r_2 r_3\\ -(r_1+r_2 + r_3)\\1\end{array}\right]$.
_____
The coefficients $\{a_{m,n}\}_{m=0}^{n}$ can be computed in a recursive fashion. Consider the monic polynomials
$p_{k}(x) = \displaystyle\prod_{i=1}^{k} (x - r_i)$
and
$p_{k+1}(x) = \displaystyle\prod_{i=1}^{k+1} (x - r_i) = (x - r_{k+1}) \displaystyle\prod_{i=1}^k (x - r_i)$,
then $p_{k+1}(x) = (x - r_{k+1})\, p_k(x)$, and thus
$\begin{array}{rl} p_{k+1}(x) &= (x - r_{k+1})\displaystyle\sum_{m=0}^k a_{m,k} x^m\\ &= \displaystyle\sum_{m=0}^k a_{m,k} x^{m+1} - \displaystyle\sum_{m=0}^k r_{k+1} a_{m,k} x^m.\\\end{array}$
Note that
$\begin{array}{rl} \displaystyle\sum_{m=0}^k a_{m,k} x^{m+1} &= \displaystyle\sum_{m=1}^k a_{m-1,k} x^m + a_{k,k} x^{k+1}\\ \displaystyle\sum_{m=0}^{k} r_{k+1} a_{m,k} x^m &= r_{k+1} a_{0,k} + \displaystyle\sum_{m=1}^k r_{k+1} a_{m,k} x^m\end{array}$
and therefore
$p_{k+1}(x) = a_{k,k} x^{k+1} + \displaystyle\sum_{m=1}^k \left(a_{m-1,k} - r_{k+1} a_{m,k}\right) x^m - r_{k+1} a_{0,k}$.
We can write $p_{k+1}(x)$ in the expanded form in terms of the coefficients $\{a_{m,k+1}\}_{m=0}^{k+1}$
$p_{k+1}(x) = a_{k+1,k+1} x^{k+1} + \displaystyle\sum_{m=1}^k a_{m,k+1} x^m + a_{0,k+1}$
and therefore we can write the coefficients $\{a_{m,k+1}\}_{m=0}^{k+1}$ in terms of the coefficients $\{a_{m,k}\}_{m=0}^{k}$, as follows
$a_{m,k+1} = \displaystyle\left\{\begin{array}{rl} - r_{k+1} a_{0,k} & \text{if} \quad{} m = 0\\ a_{m-1,k} - r_{k+1} a_{m,k} & \text{if} \quad{} 1 \leq m \leq k\\ a_{k,k} & \text{if} \quad{} m=k+1\\ \end{array}\right.$
Hence, starting with the zero-degree monic polynomial $p_0(x)$, whose only coefficient is $a_{0,0} = 1$, we can build $p_1(x)$, the monic polynomial of degree $1$ whose root is $r_1$. From $p_1(x)$, and given a second root $r_2$, we can build $p_2(x)$, the monic polynomial of degree $2$ whose roots are $r_1, r_2$. Iterating successively, we can build $p_n(x)$, the monic polynomial of degree $n$ whose roots are $r_1,r_2, \ldots, r_n$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 55, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9183846712112427, "perplexity_flag": "head"} |
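Here is a short computational sketch of this recursion (my addition, not part of the original post; the function name and the list layout, with coefficients stored as $[a_{0,k}, a_{1,k}, \ldots, a_{k,k}]$, are my own choices):

```python
def poly_from_roots(roots):
    coeffs = [1.0]                     # p_0(x) = 1, i.e. a_{0,0} = 1
    for r in roots:
        k = len(coeffs) - 1            # current degree
        new = [0.0] * (k + 2)
        new[0] = -r * coeffs[0]                      # a_{0,k+1} = -r_{k+1} a_{0,k}
        for m in range(1, k + 1):
            new[m] = coeffs[m - 1] - r * coeffs[m]   # a_{m,k+1} = a_{m-1,k} - r_{k+1} a_{m,k}
        new[k + 1] = coeffs[k]                       # a_{k+1,k+1} = a_{k,k} = 1
        coeffs = new
    return coeffs

# Example: roots 1, 2, 3 give p_3(x) = x^3 - 6x^2 + 11x - 6.
print(poly_from_roots([1, 2, 3]))      # [-6.0, 11.0, -6.0, 1.0]
```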
http://math.stackexchange.com/questions/195208/numerical-solution-of-fractional-integro-diffrential-equ-using-collocation-meth | # Numerical solution of fractional integro-differential equations using the collocation method?
The problem comes from "Numerical solution of fractional integro-differential equations by collocation method", E.A. Rawashdeh, Department of Mathematics, Yarmouk University, Irbid 21110, Jordan.
$D^q y(t)=p(t)\,y(t)+f(t)+\int_{0}^{1}K(t,s)\,y(s)\,ds, \qquad t\in I=[0,1]$
I want to write Maple code to check whether the results in the given article are valid, but I do not have any idea about the collocation method!
Any references on the collocation method are welcome!
-
This question may be best posted on scicomp.stackexchange.com. It is more geared towards numerical methods for scientific computing. – Paul Sep 14 '12 at 19:16
## 1 Answer
See first how the collocation method is applied to simpler problems and then advance to your problem. See for example here, where the method is applied to find the solution of an ODE with boundary conditions. Here is a paper on the collocation method for solving integral equations.
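Not from this thread or the linked references, but it may help to see the bare idea on a much simpler problem. The sketch below applies polynomial collocation to the boundary-value problem $y''+y=0$, $y(0)=0$, $y(1)=1$ (exact solution $\sin x/\sin 1$): expand the unknown in a finite basis and force the equation to hold at a few collocation points. The basis, node placement, and names are my own choices.

```python
import numpy as np

N = 6                                      # number of basis functions x, x^2, ..., x^N
nodes = np.linspace(0.1, 0.9, N - 1)       # interior collocation points

A = np.zeros((N, N))
b = np.zeros(N)

# Ansatz y(x) = sum_{k=1}^{N} c_k x^k, so y(0) = 0 automatically.
# Rows 0..N-2: enforce y'' + y = 0 at the collocation points.
for i, x in enumerate(nodes):
    for k in range(1, N + 1):
        A[i, k - 1] = k * (k - 1) * x**(k - 2) + x**k
# Last row: enforce the boundary condition y(1) = 1.
A[N - 1, :] = 1.0
b[N - 1] = 1.0

c = np.linalg.solve(A, b)

xs = np.linspace(0.0, 1.0, 11)
approx = sum(c[k - 1] * xs**k for k in range(1, N + 1))
print(np.max(np.abs(approx - np.sin(xs) / np.sin(1.0))))   # small approximation error
```

The methods in the cited paper work the same way in spirit, but with fractional derivatives and an integral term in place of $y''$.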
-
Great, thanks @Mhenni Benghorbal, it really helped me understand the collocation method. – Mohammad Rafiee Sep 16 '12 at 18:57
http://mathoverflow.net/questions/106705/2d-problems-which-are-easier-to-solve-in-3d/106711 | ## 2D Problems Which are Easier to Solve in 3D
It sometimes happens that 1D problems are easier to solve by somehow adding a dimension. For example, we convert linear differential equations for a real unknown to a complex unknown (to use complex exponentials), or we compute a power series' radius of convergence by thinking in the complex plane (or use complex analytic properties in path integrals), or we evaluate $\int^\infty_{-\infty} e^{-x^2}\ dx$ by squaring it...
So, are any 2D problems easier to solve in even higher dimensions? I can't think of any.
-
The dynamics of a billiard in certain subsets of the plane are easier to analyze by looking at 2D surfaces which embed in 3D space, if that counts. – Alex Becker Sep 9 at 3:27
There are many, but this is a community wiki question, voting to close until it is so labeled. – Igor Rivin Sep 9 at 4:26
There are some geometry problems such as Desargues implies Pappus and common cotangents of three pairs of circles have intersections which are collinear. But do you want geometric examples? Gerhard "Ask Me About System Design" Paseman, 2012.09.08 – Gerhard Paseman Sep 9 at 5:10
This should be community wiki – DamienC Sep 9 at 6:11
1. Ehrenpreis conjecture (solved by Kahn and Markovic) is easier in 3d than in 2d. 2. Quasi-isometric rigidity of uniform lattices in O(n,1) is easier for $n=3$ than for $n=2$. 3. Classification of buildings is easier in rank 3 than in rank 2 (where it is still unknown). 4. Closely related to your question example: Poincaré conjecture (in all its forms) is easier for $n>4$ than for $n=4$ and $n=3$. However, your question should be CW, so voting to close until then. – Misha Sep 9 at 23:04
## 12 Answers
Of course, there is one such problem! This is the Cauchy problem for the wave equation $$\frac{\partial^2u}{\partial t^2}=\Delta u,\qquad u(x,0)=f(x),\quad \frac{\partial u}{\partial t}(x,0)=g(x),$$ where $x\in{\mathbb R}^d$. To solve it, it is enough to know the case where $f\equiv0$.
If $d=3$, this problem is solved by using spherical means. We obtain $$u(x,t)=tM_{t,x}[g],$$ where $M_{t,x}$ denotes the mean over the sphere of radius $t$ and center $x$.
The two-dimensional case is way more complicated. The formula can only be found by considering that a $2$D-solution is a special case of a $3$D-solution. Then the solution involves a complicated integral over the disk $D(x;t)$ instead of the circle. This is why the Huyghens principle holds true in $3$ space dimensions but not in $2$ space dimensions.
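For reference (my addition, following the standard method-of-descent computation, e.g. Evans's PDE book, Section 2.4): with $f\equiv0$ the two-dimensional solution is
$$u(x,t)=\frac{1}{2\pi}\int_{D(x;t)}\frac{g(y)}{\sqrt{t^2-|y-x|^2}}\,dy,$$
so the value at time $t$ depends on the data over the whole disk $D(x;t)$ rather than only its boundary circle; this is precisely the failure of Huygens' principle in two space dimensions.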
-
There's a famous problem posed by Erdos that has an easy 3-D solution, but a very difficult 2-D solution. The problem is to prove the following: Given a decomposition of an n-cube into finitely many n-cubes $Q_1, ... Q_k$ ($k>1$), prove that there exist two distinct cubes $Q_i, Q_{i'}$, of equal size.
The above statement is certainly true for $n=3$ (this is a simple exercise), but it is in fact untrue for $n=2$. I think this is known as the "squared square" problem, and you can read more about it here. The first counterexample to the two-dimensional statement is due to Sprague.
-
Desargues' Theorem is a statement about triangles in the plane that is easier to prove using solid geometry.
-
This is essentially the idea behind level set methods (cf http://en.wikipedia.org/wiki/Level_set_method ).
There are several situations where one needs to study the behavior of a dynamic surface with complicated topology. In fire simulation, one needs to track the motion of an air/fuel interface which is often not connected. In image segmentation, one needs to move the boundary between inner/outer regions until a steady state is reached. In fluid mechanics, breaking waves detach from the main body of water.
Solving differential equations on surfaces with complicated topology is difficult numerically. It turns out that when one represents these surfaces as level sets of a higher dimensional function these connectivity problems disappear and higher quality simulations are possible.
-
I'd say level set methods are not really adding a dimension; they trade a parameter for an extra "spatial" dimension. I admit, though, that I'm not very familiar with them and my goal is not so clear from my question. – bobuhito Sep 9 at 15:58
A not-so-serious answer; hopefully what it lacks in depth it makes up for by being elementary.
Suppose we forget Pythagoras's theorem and define a binary operation on positive reals by sending $(a, b)$ to the length of the hypotenuse of the right-angled triangle with side lengths $a, b$ forming the right angle.
The associativity of this operation is trivial in three dimensions but not so in two.
I came across this here: D. Bell, "Associative Binary Operations and the Pythagorean Theorem", The Mathematical Intelligencer, Vol. 33, No. 1 (2011), 92-95, DOI: 10.1007/s00283-010-9171-6 http://www.springerlink.com/content/r8t12847357j1ln7/
Apparently it is also mentioned here: L. Berrone, "The Associativity of the Pythagorean Law", The American Mathematical Monthly, Vol. 116, No. 10, Dec., 2009 http://www.jstor.org/discover/10.2307/40391255?uid=3738232&uid=2129&uid=2&uid=70&uid=4&sid=21101035566283
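A quick numerical illustration of the claim (my addition, not from the cited papers): both groupings of the "hypotenuse" operation give the length of the space diagonal sqrt(a^2 + b^2 + c^2).

```python
import math, random

def h(a, b):                      # the "hypotenuse" operation
    return math.hypot(a, b)

for _ in range(5):
    a, b, c = (random.uniform(0.1, 10.0) for _ in range(3))
    assert abs(h(h(a, b), c) - h(a, h(b, c))) < 1e-12
    assert abs(h(h(a, b), c) - math.sqrt(a*a + b*b + c*c)) < 1e-12
print("associativity holds numerically: both groupings equal the space diagonal")
```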
-
A striking example:
Consider arrangements of disks in the plane so that no two disks overlap (except on their boundaries) and their complement is a disjoint union of triangles (if we include a point at infinity). You can imagine trying to build a particular finite triangulated planar graph by placing different sized coins on the table.
Here's the theorem: Any such graph may be obtained. Further, the representation is unique up to Möbius (and anti-Möbius) transformations of the plane.
The proof of uniqueness is the striking bit. You think of the plane as the boundary of hyperbolic upper half space! Fill in each triangle of the original disks with a new disk tangent to them, and extend all the disks to half-balls. We view the surface of each ball as a plane in hyperbolic space, and consider the group of reflections across them. We then apply the Mostow rigidity theorem to the quotient manifold, and obtain the result.
This observation is due to Thurston. See http://en.wikipedia.org/wiki/Circle_packing_theorem
-
Erm, not exactly. The manifold you get this way is not finite volume, so Mostow does not apply... – Igor Rivin Sep 10 at 0:07
@Igor Rivin My reference is library.msri.org/books/gt3m chapter 13, Corollary 13.6.2. Have I made a mistake? – John Wiltshire-Gordon Sep 10 at 14:27
@J W-G: you have committed the sin of omission. The Thurston trick is to consider the graph TOGETHER with the dual graph. Then, the polyhedron you are constructing is right-angled, and compact, so you can use Mostow (but this is a bit of overkill - a completely elementary argument for a much more general [in the polyhedral world] statement is in my paper Euclidean Structures on simplicial surfaces and hyperbolic volume [Annals, 1994]) – Igor Rivin Sep 10 at 15:52
Can you cover a planar disk of diameter 100 with 99 rectangles (possibly intersecting) of size $100\times 1$?
-
Can I cut one of the rectangles into a w by 1 rectangle and a (100 - w) by 1 rectangle, with 33 < w < 67? Gerhard "Not Ready For Three Dimensions" Paseman, 2012.09.11 – Gerhard Paseman Sep 11 at 16:46
If you could do that, the solution would be easy. – Dror Bar-Natan Sep 11 at 18:32
Given two disjoint disks of different radii, find the intersection of their common external tangents. For lack of a better name, call this the h-center of the pair (h- for homothety?).
Problem: Given three mutually disjoint disks, the h-centers of the three pairs are collinear.
The nicest solution involves adding one dimension and inflating the disks to balls (with centers in the original plane $\Pi$). The pairs of tangents become full-fledged cones with vertices in $\Pi$, and the proof is obtained by studying a plane tangent to all three balls. It is tangent to all three cones, so it contains their three vertices, but it also intersects $\Pi$ on a straight line :)
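This is Monge's theorem, and the collinearity is easy to check numerically. A small sketch (my addition; the formula for the external homothety center, $(r_1c_2 - r_2c_1)/(r_1 - r_2)$ for circles $(c_1,r_1)$ and $(c_2,r_2)$ with $r_1\neq r_2$, is standard, and the collinearity is an algebraic identity, so random circles suffice for the check):

```python
import numpy as np

def h_center(c1, r1, c2, r2):
    # External homothety center of circles (c1, r1) and (c2, r2), assuming r1 != r2.
    return (r1 * c2 - r2 * c1) / (r1 - r2)

rng = np.random.default_rng(0)
c = rng.uniform(-10, 10, size=(3, 2))    # three centers in the plane
r = np.array([1.0, 2.0, 4.0])            # three distinct radii

p12 = h_center(c[0], r[0], c[1], r[1])
p13 = h_center(c[0], r[0], c[2], r[2])
p23 = h_center(c[1], r[1], c[2], r[2])

# Collinearity: the 2D cross product of the difference vectors vanishes.
u, v = p13 - p12, p23 - p12
print(u[0] * v[1] - u[1] * v[0])   # ~0, up to rounding
```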
-
In certain cases of a small disk between two larger ones you would not be able to have a plane tangent to all three spheres, but it is still an amazing proof. – Aaron Meyerowitz Apr 27 at 21:00
Four cars drive in the Sahara desert (an infinite plane) at constant and generic velocities and directions. It is known that car A at some point in time meets car B (though let's pretend they drive through each other without crashing). The exact same thing is also known for the pairs AC, AD, BC, and BD. Is it also true for the pair CD?
-
When one generalizes to more than four cars, an interesting sort of collapse occurs. A. Bogomolny has some analysis (and spoilers) of this four traveller problem on his cut-the-knot.org website. Gerhard "Ask Me About System Design" Paseman, 2012.09.11 – Gerhard Paseman Sep 11 at 16:34
Voronoi diagrams in the plane can be described as the lower envelopes of wave-front surfaces in 3D. I'm not sure if this makes them 'easier', but it's a useful way of thinking about them.
-
[Not exactly solving a problem, but explaining it] This excellent movie uses a similar idea.
-
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9299494028091431, "perplexity_flag": "middle"} |
http://mathematica.stackexchange.com/questions/10578/how-to-output-an-expression-as-a-dot-product?answertab=active | # How to output an expression as a Dot[] product
How to force Mathematica to output an expression such as
`a1 u1 + a2 u2 + a3 u3`
as a `Dot` product like this one:
````{a1,a2,a3}.{u1,u2,u3}
````
or A.U
-
Didn't you just accept an answer a couple of hours ago ? – b.gatessucks Sep 13 '12 at 15:43
@J.M May be he wants to visualize them in traditional form. SergeyFomin are you asking about the display formatting? Then rephrase your question. – PlatoManiac Sep 13 '12 at 15:48
Many thanks for the coefficient extraction you provided in my first question, but now I am asking how to force Mathematica to present my expression in vector form. – SergeyFomin Sep 13 '12 at 16:00
This question seems to need additional assumptions and constraints, because it has no unique answer: $a_1u_1 + a_2u_2 + a_3u_3$ = $(a_1,a_2,a_3)\cdot(u_1,u_2,u_3)$ = $(u_1,a_2,a_3)\cdot(a_1,u_2,u_3)$ etc. = $(a_2,a_3,a_1)\cdot(u_2,u_3,u_1)$ etc. = $(1,1,1)\cdot(a_1u_1 a_2u_2,a_3u_3)$ etc. – whuber Sep 13 '12 at 17:01
## 2 Answers
I'm going to answer this in the spirit of the question, making a few reasonable assumptions:
• that the two underlying vectors in $x = \sum_i a_i u_i$ are $\mathbf{a}=\{a_1,...,a_n\}$ and $\mathbf{u}=\{u_1,...,u_n\}$, thus giving $x = \mathbf{a}\cdot\mathbf{u}$. If not, there are several possibilities as in whuber's comment.
• The corresponding elements of the vectors $\mathbf{a}$ and $\mathbf{u}$ are ordered identically for all elements. In other words, for some ordering function $f$, $f(a_i,u_i)$ is the same for all $a_i$ and $u_i$. This is so that we aren't affected by the `Orderless` attribute of `Times` (in other words, don't try this for something like $\mathbf{a}=\{b, p, z\}$ and $\mathbf{u}=\{e,g,l\}$).
• There are no numerals involved (i.e. this is purely symbolic) and the primary intent is to be able to display the vectors in the desired form.
With the above, the following is a very simple way to achieve the output with a few replacements:
````expr = a1 u1 + a2 u2 + a3 u3;
expr /. Times -> List /. List -> CenterDot /. Plus -> List
(* {a1, a2, a3}·{u1, u2, u3} *)
````
-
As an added note, it is very instructive to break the replacements into a sequence of replacements via `% /. ...` to see what it is they're doing. (+1, btw ... and enough with the edits!) – rcollyer Sep 13 '12 at 18:23
@rcollyer I agree... I'll leave that to the reader — it's informative to do the replacements one at a time to see how it works, and it must be done in that order. – rm -rf♦ Sep 13 '12 at 18:24
I forgot to add "left as an exercise for the reader," as that's why I left the note. Of course, the first one is the tricky one ... – rcollyer Sep 13 '12 at 18:28
You took my expression literally; your way doesn't work for a more complex case, for example expr = (a1 + a2) u1 + a2 u2 + a3 u3; – SergeyFomin Sep 13 '12 at 18:53
Well, then you, as the question asker, should think of all complicated cases and present them in the question instead of making us guess what you might or might not have in mind... For this example, you can easily change `a1 + a2` to, say, `a4` and then use the above. In the end, change `a4` back to `a1 + a2` – rm -rf♦ Sep 13 '12 at 18:56
````expr = a1 u1 + a2 u2 + a3 u3;
HoldForm[#1.#2] & @@ Transpose[Apply[List, expr, {0, 1}]]
(* {a1,a2,a3}.{u1,u2,u3} *)
```` | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9383078813552856, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/1114/whats-a-groupoid-whats-a-good-example-of-a-groupoid/1161 | ## What’s a groupoid? What’s a good example of a groupoid?
Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?
-
## 21 Answers
I'm surprised this example hasn't been mentioned already:
The 3x3x3 Rubik's cube forms a group. The 15-puzzle forms a groupoid.
The reason is that any move that can be applied to a Rubik's cube can be applied at any time, regardless of the current state of the cube.
This is not true of the 15-puzzle. The legal moves available to you depend on where the hole is. So you can only compose move B after move A if A leaves the puzzle in a state where move B can be applied. This is what characterises a groupoid.
There's more to be found here.
-
Very very nice: but then could you find a physical example like those two but for a quasigroup? (For help, Def 1a: a ternary relation where any two elements impose the third; Def 1b: a bordered Latin square; also Def 1c at en.wikipedia.org/wiki/Quasigroup.) – Jérôme JEAN-CHARLES Oct 3 2010 at 0:21
Another answer is that a groupoid is a space which has no homotopy groups in dimension ≥ 2. (Analogously a set is a space which has no homotopy groups in dimension ≥ 1.) They arise from taking (homotopy) orbits of group actions on sets, as well as from categories (by discarding the noninvertible morphisms and then taking the nerve). People care about them because they retain useful homotopical information, analogous to the relationship between Hom and Ext in homological algebra, and also because they're a lot easier to work with than general spaces.
-
Personally, the reason I'm interested in groupoids is something called groupoid cardinality and some other related ideas (the link contains a lot of other links). A motivating idea here is that certain sets X of algebraic objects have the property that $\sum_{x \in X} 1$ is ugly but $\sum_{x \in X} 1/|\mathrm{Aut}(x)|$ is much nicer, and so we should think of this as the "true" cardinality of the set (which is actually a groupoid).
Interesting combinatorial stuff happens when you take this philosophy seriously: for example, the cardinality of the groupoid of finite sets and bijections between them is e. Why is this interesting? It suggests that one reason exponential generating functions are important is that the n! in the denominator is an indication that what you're really working with is some kind of structure defined over the groupoid of finite sets and bijections. And indeed, there's an approach to combinatorics called species theory that defines a combinatorial species, such as "binary trees," as a functor from this groupoid to itself. From this information one can extract a generating function, but the really important point here is that constructions such as the sum of generating functions are seen to be "decategorifications" of more fundamental combinatorial constructions, so one can avoid the machinery of working with generating functions by working directly with species instead. A good reference here is Bergeron, Labelle, and Leroux.
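A tiny numerical illustration of that cardinality claim (my addition): the groupoid of finite sets and bijections has one isomorphism class for each $n$, with automorphism group $S_n$ of order $n!$, so its groupoid cardinality is $\sum_n 1/n! = e$.

```python
import math

cardinality = sum(1 / math.factorial(n) for n in range(20))
print(cardinality, math.e)     # both ~2.718281828...
```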
-
As other people have mentioned, a groupoid can be defined as a category in which every map is invertible. A groupoid with only one object is exactly a group, and a groupoid in which there are no maps between distinct objects is simply a family of groups.
But there's another class of examples, orthogonal to these ones. Namely: any equivalence relation is a groupoid. In fact, an equivalence relation is exactly a groupoid in which for each object a and object b, there is at most one map from a to b. Concretely, the objects of the groupoid are the elements of the set on which the equivalence relation is defined, and there is a map from a to b iff a is equivalent to b.
(A (small) category with the property that for any objects a and b, there is at most one map from a to b, is the same thing as a preordered set — that is, a set equipped with a reflexive transitive relation, usually denoted ≤.)
-
Right. If you want to think of groupoids as generalizations of equivalence relations, then a groupoid is a structure that tells you when two objects are "the same," but where two objects can be "the same" in more than one way. – Qiaochu Yuan Oct 20 2009 at 15:54
A groupoid is a generalization of a group. The easiest definition, IMO, is as a category in which all arrows are isomorphisms. So a group is just a groupoid with one object and arrows the elements of the group.
The best example is the fundamental groupoid of a topological space. Build a groupoid by taking the objects to be the points in the space and the arrows from point x to point y to be equivalence classes of paths from x to y. This generalizes the idea of the fundamental group.
They are useful and Ronald Brown has a whole project of building higher dimensional group theory using them. The great thing about the fundamental groupoid is that there is a version of Van Kampen that gives the fundamental group of the circle (without using covering space theory as is the standard way to do it using only the fundamental group).
ETA: That link might not be working. Google Ronald Brown's Topology and Groupoids book for a good introduction and motivation.
-
it is THE most important example! – Sean Tilson Apr 3 2010 at 0:28
In addition to the answers already given: Alan Weinstein wrote a nice article for the Notices of the AMS which tries to give some motivating examples:
It seems that in certain situations where taking the quotient by a group action "destroys too much information", working directly with an associated groupoid is more useful. Several of the motivating examples in NCG a la Connes (et al) also seem to fit into this point of view.
-
Weinstein's article has the nice feature that the example he gives is related to dynamics (and, more specifically, tiling your bathroom floor). – Emily Peters Oct 19 2009 at 1:27
Penrose tilings are beautiful objects, with a lot of symmetry... but their symmetry group is trivial!
So there's a discrepancy somewhere. The answer is: "groupoids"! The topological groupoid of symmetries of a Penrose tiling is non-trivial, and contains all the information that your intuition might call "symmetry".
The reason it's a groupoid and not a group is the following. Given a Penrose tiling, there are many different tilings that are locally indistinguishable from your original tiling. These are the objects of your groupoid. It becomes a topological groupoid under the topology of "uniform convergence in any bounded domain". The arrows are given by isomorphisms between a given tiling (=object) and a translated or rotated version of itself.
-
Andre, is there a reference in which what you just described is explained in more detail? Thanks in advance. – Willie Wong May 9 2010 at 17:09
Weinstein's "Groupoids: Unifying Internal and External Symmetry" is about roughly this topic: ams.org/notices/199607/weinstein.pdf – Qiaochu Yuan May 10 2010 at 4:26
Thanks Qiaochu! The article is a bit typo heavy, but still a very nice read. And so Andre's third paragraph makes sense to me now. – Willie Wong May 11 2010 at 16:29
Beyond all the categorical and bundle-like examples already given, you can easily understand groupoids as generalizations of groups in a purely geometrical sense.
If you think of groups as the sets of symmetries of certain geometrical objects, then groupoids are local symmetries of geometrical objects. My favorite example of this consists of taking a manifold M and defining a groupoid G as the set of all the local diffeomorphisms f : U -> V where U and V are open sets of M, with multiplication given by composition of maps (whenever it makes sense).
-
I never think about groupoids in any technical sense, but my favorite easy example of one can be built out of a separable field extension K/k. It is the category whose points are the subfields of the algebraic closure of k which are k-isomorphic to K. The morphisms between two objects are just the k-isomorphisms between the respective subfields. It's some kind of "Galois groupoid" and it's a group if and only if K/k is a Galois field extension.
-
You say "the" algebraic closure, but forget that there is no god given algebraic closure. The algebraic closures of a given field also form a groupoid, where all objects are isomophic. – André Henriques May 9 2010 at 11:07
More general than either of our examples: let K/k be a field extension and make the category whose objects are abstract extensions of k which are k-isomorphic to K. Morphisms are isomorphisms between these fields. But one could do this with any algebraic structure. The thing I like about restricting to those fields contained in a single algebraic structure is that the silly connection with Galois theory can be made. – Joel Dodge May 9 2010 at 22:17
Any vector bundle is a groupoid: you can add and subtract vectors only if they are in the same fibre. Similarly, if you take a vector bundle E → M (or some other fibre bundle) then consider the automorphism bundle Aut(E) → M where a point in Aut(E) above p ∈ M is an automorphism of Ep. This is a groupoid since these automorphisms can only be composed if they lie in the same fibre.
These are discrete groupoids in the sense that there are no morphisms between distinct objects (aka points in the base space). However, they are not discrete topologically as they clearly have topologies! (Which, incidentally, shows that you should be careful when using the statement about groupoids having no homotopy above degree 2: this is a statement about groupoids in Set but groupoids exist enriched in other categories where they can have lots of interesting homotopy). To get a more general groupoid, you can consider the bundle Iso(E) → M × M where a point in Iso(E) above (p,q) ∈ M × M is an isomorphism from Ep to Eq.
One reason for liking groupoids is that they allow you to talk about quotients by group actions without actually having to take the quotient. That's useful because some categories don't have quotients - such as the category of smooth manifolds. So when you have a group G acting on a manifold M you can try to take the quotient M/G but that is quite often not a manifold. So instead you can take the groupoid with objects M and morphisms G × M, where a morphism (g,m) has source m and target gm. Even when the quotient exists, or when you've extended the category to include quotients, this can be much better behaved than the corresponding quotient.
-
While the categorical definition of groupoid is the most concise, you can also think of a groupoid as being like a group, except where multiplication is only partially defined, rather than being defined for any pair of elements. Here are a few of my favorite examples:
• Given a vector bundle E, the general linear groupoid GL(E) is the groupoid of linear isomorphisms between fibers. Given a map from Ex -> Ey, and another from Ey' -> Ez, we can only compose them if y=y'. When E is just a vector space over a single point, then this is the usual general linear group. In differential geometry, this gives a very natural way to think about frames and G-structures on a differentiable manifold M: just look at the general linear groupoid GL(TM). G-structures can be understood as subgroupoids: for example, a Riemannian structure corresponds to the orthogonal subgroupoid O(TM), consisting of elements of GL(TM) which are also isometries.
• More generally, given a principal G-bundle, the gauge groupoid consists of G-equivariant maps between fibers. This is useful for talking about connections, holonomy, etc., without having to fix a particular gauge.
• Given a directed graph, one can construct the free groupoid generated by the edges. As a special case, the free group on n elements is generated by the graph with one vertex and n self-loops. (There is also a forgetful functor from groupoids to graphs, which is adjoint to the free functor.)
The first two examples happen to be Lie groupoids, and they have corresponding Lie algebroids, which generalizes the relationship between Lie groups and Lie algebras. Whereas a Lie algebra is a vector space with a bracket between elements, a Lie algebroid is a vector bundle with a bracket between sections (as well as an additional structure called the anchor map). For example, if Q is a principal G-bundle, then the gauge groupoid is (Q x Q)/G, while the corresponding gauge algebroid is TQ/G. This comes in handy in geometric mechanics, particularly in reduction theory. If we have a Lagrangian L: TQ -> R, which is invariant with respect to the action of a Lie group G, then it is useful to look at the reduced Lagrangian ℓ: TQ/G -> R. There are some subtleties arising from the fact that TQ/G is not a tangent bundle, but it is still a Lie algebroid, so this has motivated the study of mechanics on Lie algebroids.
-
I (mildly) disagree with David Brown's assertion that a set is an example of a groupoid. Given any set, you can put a groupoid structure on it, even "canonically", but not uniquely canonically. (By way of analogy, you wouldn't say that a set is an example of a topological space, would you?) Thus if I give you a set and tell you the definition of a groupoid, you will probably be able to use that set to define a groupoid, but you might not come up with groupoid that David has in mind.
I want to use this as a jumping off point for my answer: one of the neat things about groupoids is that a lot of times you start with a set $X$, you take some kind of "quotient" of it, and then you are apparently left with a set $Y$ but in a way which feels unpleasant: you feel like there is a loss of information. A lot times, there is a natural groupoid structure on $X$, which has the following features:
(i) It is equivalent to or implied by some other kind of structure you are considering on $X$, so it is not evidently profitable to think of $X$ as a groupoid.
(ii) Passage to the quotient set $Y$ loses some of the evident structure.
(iii) However, if you think of $X$ as a groupoid, then the quotient $Y$ is also a groupoid, and this extra structure is exactly the structure that you were sad to have lost.
Example: Let $G$ be a group and $X$ be a set with an action of $G$. Let $Y = G \backslash X$ be the orbit space. In the passage from $X$ to $Y$ we have apparently "used up" the $G$-structure, but this is not so good: for applications we would like to know the stabilizers of the points of $X$; up to conjugacy, these only depend upon the corresponding point in $Y$ but in passage to $Y$ we seem to have lost that information, which is however very important for "mass formulas" as in Qiaochu's response.
Remedy: realize that any $G$-set is canonically a groupoid: the set of morphisms from $x$ to $x'$ is exactly the set of $g$ in $G$ such that $gx = x'$. Then we can take the quotient of this groupoid by the $G$-action [this can be done generally; in this case it is sufficiently evident what this means that I don't think it will be helpful to say any more about it], so that $X/G$ still has a groupoid structure, in which no two distinct objects have any morphisms between them but that the automorphism group of any single object is isomorphic to the isotropy group of any representative.
See for instance
http://www.mth.kcl.ac.uk/~noohi/papers/WhatIsTopSt.pdf
for a bit more on this perspective.
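To make the "Remedy" above concrete, here is a toy computation (my addition; the action and all names are invented for illustration): $G=\mathbb{Z}/2$ acts on a three-element set by swapping two points, and in the resulting action groupoid the automorphism group of each object is exactly the stabilizer, so the quotient keeps the isotropy information that the plain orbit set forgets.

```python
# G = Z/2 acting on X = {0, 1, 2} by swapping 0 and 1 and fixing 2.
X = [0, 1, 2]
G = [lambda x: x, lambda x: {0: 1, 1: 0, 2: 2}[x]]

def morphisms(x, y):
    """Morphisms x -> y in the action groupoid: group elements g with g(x) = y."""
    return [g for g in G if g(x) == y]

for rep, orbit in [(0, {0, 1}), (2, {2})]:
    aut = morphisms(rep, rep)          # automorphisms of rep = its stabilizer in G
    print(sorted(orbit), "stabilizer size:", len(aut))

# Groupoid cardinality of the quotient: 1/1 + 1/2 = 3/2 = |X| / |G|.
```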
-
A set is an example of a groupoid, and I care about groupoids as a generalization of sets as opposed to groups. My most fundamental tool is Yoneda's lemma, which says that one can think of a category C as being embedded in the category C-hat of presheaves (C-hat := Hom(C,Sets)); this is a really useful way to think for instance about the category of Schemes (which is special because instead of presheaves you actually get sheaves). Similarly, if you want to think about things like algebraic groups, it is extremely useful to consider Hom(Schemes,Grps) instead.
A stack is a generalization of the notion of a scheme, which one would like to think of as a functor from schemes to groupoids; this doesn't quite work (one only gets a `pseudofunctor') and the notion of a stack is a gadget that makes this work. Just as with algebraic groups, sometime you want the target of the functor that your geometric object represents to have the type of extra structure that your geometric object has; groupoids come up for me in moduli theory, where they keep track of extra automorphisms, or when trying to construct quotients by group actions, where you can keep track of stabilizers.
So not a very precise answer, but also not a technical one, so maybe it will be useful.
-
Let me expand a bit on what Dave said.
The Yoneda lemma tells us that given an object `X` of a category `C`, the (covariant, contravariant, whatever) functor `h_X : C -> Set`, which sends an object `Y` to the set `Hom(Y,X)`, can be thought of as the "same" as the object X. There are many situations in which we are interested in a functor `F : C -> Set`, and we might like to know whether `F` is isomorphic to `h_M` for some object `M`, because that reduces the study of `F` to the study of a single object `M`. In such a case we say that `F` is represented by `M`. The letter `M` here, suggestively, stands for "moduli".
Example: Given a group `G`, the functor `BG' : Top -> Set` is the functor which sends a topological space `X` to the set of isomorphism classes of principal `G`-bundles over `X`. (You can also do the analogous thing for schemes.)
Example: The functor `M_g' : Sch -> Set` is the functor which sends a scheme `X` to the set of isomorphism classes of flat families of genus `g` curves over `X`.
In both of the above examples, there is no object `M` for which `h_M` is isomorphic to the functor. So this is perhaps not so nice. But, without getting into too many details, there is a natural "fix", namely we can instead consider the functor `BG : Top -> Groupoid` (resp. `M_g : Sch -> Groupoid`) which sends a topological space (resp. a scheme) to the groupoid of `G`-bundles (resp. flat families of genus `g` curves). This groupoid has objects `G`-bundles and morphisms isomorphisms of `G`-bundles (resp. the obvious analogous thing). The original set-valued functor is just the composition of this functor with the functor `Groupoid` to `Set` which takes a groupoid and returns the set of isomorphism classes of objects in the groupoid.
Anyway, despite the fact that the set-valued functors are not so "geometric", since they are not represented by a "geometric" object (topological space and scheme, respectively), the groupoid-valued functors are more "geometric". In the case of `M_g`, the "geometric" structure we get is that of a "Deligne-Mumford stack", which essentially means that we can for practical purposes pretend that it is represented by a scheme with only some slightly "weird" properties. In the case of `BG` (the topological one) you can take a "geometric realization" and recover the classifying space `BG` that we know and love.
Another very important reason for studying groupoids and another very important class of groupoids comes from, as others have already mentioned, group actions. When a group acts on a manifold or a variety, the naive quotient may be badly behaved, for example it may no longer be a manifold (e.g. it might not be smooth, or it might not be Hausdorff) or respectively a variety (or it may not even be clear how to take the quotient at all!), which makes it harder to study geometric properties of the alleged "quotient". However, the groupoid viewpoint allows us to get a better handle on the quotient and its geometry. More precisely, if `G` is a group acting on a space (manifold, scheme, variety, whatever) `X`, then the "correct" quotient is actually the functor `X/G : C -> Groupoid` (where `C` is the category of manifolds, schemes, whatever) which sends an object `Y` to the groupoid of pairs (`G`-bundles `E` over `Y`, `G`-equivariant morphism from the total space of `E` to `X`). The functor `BG` is a special case of this; it's `pt/G`.
There's some further discussion on this sort of stuff at the nLab:
http://ncatlab.org/nlab/show/moduli+space
http://ncatlab.org/nlab/show/classifying+space
-
Contra dance (or square dance) gives us a nice example of a groupoid. The objects are the formations (i.e. the positions of the dancers) and the morphisms are the calls up to homotopy. A choreography (or a dance, if you wish) is a set of composable calls whose product is a morphism between two specific objects.
-
To follow on from what Qiaochu said, one of the interesting things about groupoids is their cardinality. Whereas the cardinality of a set is a natural number, the cardinality of a groupoid is a positive rational. This gives us a combinatorial way to inject "numbers" into an abstract system.
For example, a way to think of matrices of natural numbers is just taking spans of finite sets, A <- S -> B. The "numbers" come from counting the paths from A, through S, to B. Composition by pullback then just amounts to matrix multiplication. Incidentally, this is one of the nicest ways to think about commutative bialgebras, but that's another story (see Stephen Lack - "Composing PROPs" if you're interested).
However, if you take spans of finite groupoids instead, you get computation with matrices of positive rational numbers. If you take spans of "nice" infinite groupoids, you get positive real numbers. John Baez and co. have a nice paper, called Higher-Dimensional Algebra VII: Groupoidification, that works a lot of this out an applies it to quantum physics. It's one of the things that convinced me that groupoids were pretty cool gadgets.
-
A groupoid is a category where every morphism is invertible. If such a category has one object, then it is a group. Unfortunately I don't know why people are so interested in them, so perhaps this is not helpful.
An example is the fundamental groupoid of path classes in a topological space; the objects are points and morphisms p->q are homotopy classes of curves from p to q. Composition is defined by placing together two paths so this is not a group.
-
Sorry,Josh's post hadn't appeared when I wrote this; it covers pretty much what I said and more. – Akhil Mathew Oct 19 2009 at 1:16
but he didn't mention your group comment, which he should have. So they are groups with many objects. – Sean Tilson Apr 3 2010 at 0:30
I also unfortunately don't really understand why people care so much about them, although I should probably go back and read old TWFs.
Sort of a combinatorial example of the fundamental groupoid is the category assigned to a graph where the objects are vertices and the morphisms are directed paths, and v->w->v = id_v. (If this makes sense.) I believe that this category is nice because (IIRC) you can read off the definition of graph homomorphism from it.
ETA: Okay, a quick Baez-review gives the following, which isn't strictly speaking what you asked about but which helps me understand how groupoids are intrinsically special and not just occasionally an improvement that encodes more data than groups.
If you move from categories to n-categories, you can define n-groupoids, although this is subtle. Now, just as the fundamental groupoid is a more natural construction than the fundamental group, n-groupoids capture more information about homotopy than do "monoidal n-groupoids." But furthermore, all the constructions of homotopy are in some way reversible (if I'm following Baez correctly) -- not only is homotopy equivalence an equivalence relation, but even on higher levels this is true -- e.g., the fundamental groupoid functor has an adjoint, which is essentially the classifying space construction, so actually n-categories capture everything about homotopy! And in fact, Baez says that if you think about ω-categories (which are a limit of n-categories, essentially, I guess?) then the homotopy category of ω-groupoids is equivalent to the homotopy category.
-
This brings up something else I'd like to know, namely what is a graph homomorphism? – Emily Peters Oct 19 2009 at 1:15
A graph homomorphism from G to H is pretty much exactly what it "should" be, categorically -- it's a function from V(G) to V(H) that preserves adjacency. It simultaneously generalizes the notions of subgraph and of graph coloring, which is nice, but it's kind of nasty to work with in practice. – Harrison Brown Oct 19 2009 at 1:29
As above, a groupoid is a category where every morphism is an isomorphism, and generalizes groups. As for why to get excited about them, they're useful in classification of things. Like, say you want to understand vector bundles on a space. One method of doing so is constructing the "stack" of vector bundles on that space, which, to each open set (actually, a bit more generally, but thinking concretely here rather than going to Grothendieck topologies) associates the groupoid consisting of vector bundles on that open set along with isomorphisms, so that the set of vector bundles is the set of isomorphism classes in this groupoid. The stack made up this way has the property that any family of bundles (or whatever, groupoids work for many things) over some space T is equivalent to giving a morphism from T to the stack.
-
Lots of things are groupoids, but many are not groups. There is a theory of groupoids, and if you don't acknowledge groupoids, they won't let you use their theory =)
The fundamental group(oid) example is really good. What happens if you want to do Van Kampen's theorem on a pair of sets whose intersection is not connected? There's an answer but you have to use fundamental groupoids.
Also, categories are sometimes useful for isolating data. For example, you can replace the usual notion of a local system with a "representation of the fundamental groupoid" and doing so lets you think of a local system the way you already wanted to in your heart: as a collection of operators on fibers coming from "going around the bad points".
Here's a stupid riddle for everyone: what's another word for a "monoidoid"?
-
Isn't a monoidoid a category? – Mariano Suárez-Alvarez Jan 28 2010 at 5:33
Yes, and thus a monoidal category is a.... – Pete L. Clark Jan 28 2010 at 5:38
One-object monoidoidoid? – Qiaochu Yuan Jan 28 2010 at 5:54
Mariano Suárez-Alvarez is correct! – Sean Rostami Jan 28 2010 at 19:35
A groupoid is a category that has the essential features of a group: namely functors are associative, there is an identity transformation on its objects and an inverse. A group is a groupoid with a single object in it. Any group is therefore a groupoid. The best introduction there is to the study of groupoids is Ronald Brown's classic Topology And Groupoids, which should be required reading for all mathematicians.
-
This answer is somewhere between wrong and completely incoherent. Functors do not show up in the definition of a groupoid (and the composition of functors is always associative). That there is an identity morphism from every object to itself is part of the definition of any category (as is the associativity of the composition). A groupoid does not have "an inverse". (Moreover, no examples are given, which was asked for.) I would recommend deleting this answer. – Pete L. Clark Nov 28 2010 at 11:25
I want to retract part of my previous comment. There are several different formalizations of the notion of a groupoid, including one in which the morphisms in the categorical approach become elements of a set. In this setup, there is a global inversion map. Even in the (now more standard) categorical setup, it makes sense to speak of "inversion" as an operation on the class of morphisms of the groupoid. (I stand by my recommendation that the answer be deleted.) – Pete L. Clark Dec 16 2010 at 23:23
I wouldn't call this incoherent, just terminologically muddled. Read "morphism" in the place of "functor" and "transformation", and understand that "there is... an inverse" means exactly what it should. This answer no doubt comes from a place of actually knowing what a groupoid is; it just fails to clearly communicate that. – Sridhar Ramesh Aug 22 at 22:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9457392692565918, "perplexity_flag": "head"} |
http://mathhelpforum.com/trigonometry/66089-finding-trig-values.html | # Thread:
1. ## [SOLVED] Finding Trig Values
Question is:
If A is an acute angle and cos A = 4/5, find the values of:
a) sin 2A
b) sin 3A
c) tan 3A
I sort of have an understanding of this question, but I don't really get it, especially part b) and part c). With a solution, can someone also leave an explanation of what they did, so I can try to understand it? Thanks.
2. Originally Posted by iMan_07
Question is:
I sort of have an understanding of this question, but I don't really get it, especially part b) and part c). With a solution, can someone also leave an explanation of what they did, so I can try to understand it? Thanks.
You know from basic trigonometry that if you consider a right-angled triangle, then $cos(A) = \frac{a}{h}$, where A is the angle you are considering, a is the length of the adjacent side, and h is the length of the hypotenuse.
In this case, our triangle has adjacent side length 4, and hypotenuse of 5. Perhaps it would help if you sketched it!
Using Pythagoras' theorem, you can work out the length of the side opposite the angle.
$o^2 = h^2 - a^2$
$= 5^2-4^2$
$= 25-16 = 9$
$\therefore o = \sqrt{9} = 3$
$o = 3$
So it's a 3,4,5 triangle! Again, from basic trigonometry, you can calculate $sinA$:
$sin(A) = \frac{o}{h} = \frac{3}{5}$.
So now you know sinA and cosA.
a) You need $sin2A$. Remember $sin2A = 2sinAcosA$. You know both!
b) You need $sin3A$. Remember that $sin(3A) = sin(2A+A) = sin2AcosA+cos2AsinA$. And remember that $cos(2A) = (cosA)^2 - (sinA)^2$.
c) You need $tan(3A)$. Remember $tan(3A) = \frac{sin(3A)}{cos(3A)}$. You worked out $sin(3A)$ before, you just need $cos(3A)$. Use the same concept of:
$cos(3A) = cos(2A+A) = cos2AcosA-sin2AsinA$ and $cos(2A) = (cosA)^2-(sinA)^2$
Good luck.
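If you want to double-check the arithmetic, here is a quick verification (my addition, not part of the original thread):

```python
import math
from fractions import Fraction

# cos A = 4/5 with A acute, so sin A = 3/5; apply the identities above exactly.
sinA, cosA = Fraction(3, 5), Fraction(4, 5)

sin2A = 2 * sinA * cosA                 # 24/25
cos2A = cosA**2 - sinA**2               # 7/25
sin3A = sin2A * cosA + cos2A * sinA     # 117/125
cos3A = cos2A * cosA - sin2A * sinA     # -44/125
tan3A = sin3A / cos3A                   # -117/44
print(sin2A, sin3A, tan3A)

# Floating-point cross-check:
A = math.acos(4 / 5)
assert abs(math.sin(2 * A) - float(sin2A)) < 1e-12
assert abs(math.sin(3 * A) - float(sin3A)) < 1e-12
assert abs(math.tan(3 * A) - float(tan3A)) < 1e-12
```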
3. Thank you so much, wonderful explanation.
http://www.physicsforums.com/showthread.php?p=4228512 | Physics Forums
## probability and quantum possibilities
So this might be a too simplistic question on many accounts.
My pchem professor said to us that in QM, anything that can happen will. And it's a matter of probability, right?
I guess I'm just curious what the scales are for something like, say, walking through a wall (the go-to example for a lot of popular science books on QM)? Like, 1 in a billion or what?
Quote by NanaToru My pchem professor said to us that in QM, anything that can happen will. And it's a matter of probability, right?
Not quite. It says that anything that can happen, well, can happen. (It is not guaranteed to happen) Yes it is a matter of probability. But don't take this to mean that all of reality and life is probability. Even if it is you don't live your life in fear that every particle in your body is going to quantum tunnel in random directions at the same time.
I guess I'm just curious what the scales are for something like, say, walking through a wall (the go-to example for a lot of popular science books on QM)? Like, 1 in a billion or what?
1 in a billion^(10^23). Actually I don't know the right number, and I doubt anyone actually does, but I guarantee it to be so large it is effectively incomprehensible.
The likelihood of that happening is so low that the universe is far too young for that to be an outcome. At least, most likely. You never know, maybe something like that has happened.
Quote by Drakkith 1 in a billion^(10^23). Actually I don't know the right number, and I doubt anyone actually does, but I guarantee it to be so large it is effectively incomprehensible.
That number probably does the odds some justice. I remember calculating the probability of jumping and tunneling all the way to Jupiter, and it was like 1 in $e^{10^{10^6}}$ or something. I don't even remember now.
It should be noted that anything that can happen will happen with enough time. Even the probability above says that if the universe lasts long enough, a tunneling event of that magnitude should likely happen.
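To put a very crude number on this kind of claim (my own toy estimate, not from the thread): treat a whole person as a single quantum particle and use the one-dimensional WKB transmission factor exp(-2κL) with κ = sqrt(2mΔE)/ħ. That is a wildly unrealistic model of a person and a wall, and every parameter below is an invented placeholder, but it shows how the exponent reaches the "incomprehensible" range.

```python
import math

hbar = 1.054571817e-34   # J·s
m = 70.0                 # kg: a person treated as one particle (unrealistic!)
dE = 1.6e-19             # J:  ~1 eV barrier height, purely illustrative
L = 0.1                  # m:  wall thickness

kappa = math.sqrt(2 * m * dE) / hbar
log10_T = -2 * kappa * L / math.log(10)   # log10 of the WKB transmission probability
print(f"T ~ 10^({log10_T:.2e})")          # roughly 10^(-4e24): never going to happen
```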
Quote by soothsayer That number probably does the odds some justice. I remember calculating the probability of jumping and tunneling all the way to Jupiter, and it was like 1 in $e^{10^{10^6}}$ or something. I don't even remember now. It should be noted that anything that can happen will happen with enough time. Even the probability above says that if the universe lasts long enough, a tunneling event of that magnitude should likely happen.
My god that's an enormous number.
It may have been smaller, I can't remember now XD It was definitely e^10 to a really big power, but it may have been closer to 100 than one million. At that point though, what's the difference, really? It's not gonna happen, lol.
http://physics.stackexchange.com/questions/27234/what-proof-techniques-have-failed-for-solving-the-sic-povm-problem-and-what-new?answertab=oldest | # What proof techniques have failed for solving the SIC-POVM problem and what new insights have been gleaned from them?
The SIC-POVM problem is remarkably easy to state given that it has not yet been solved. It goes like this. With $\dim(\mathcal H) = d$, find states $|\psi_k\rangle\in\mathcal H$, $k=1,\ldots,d^2$ such that $|\langle \psi_k|\psi_j\rangle|^2=\frac{1}{d+1}$ for all $k\neq j$.
The state of the art on the solution I believe is here: http://arxiv.org/abs/0910.5784. Various constructive conjectures have been given but what existence proofs have been tried and why have they failed? What insight has been distilled from these attempts?
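For concreteness (my addition, not part of the original question): in $d=2$ a SIC is known in closed form; the four states' Bloch vectors form a regular tetrahedron, and the defining condition is easy to verify numerically. The construction below is the standard one, with my own variable names.

```python
import numpy as np
from itertools import combinations

# Bloch vectors at the vertices of a regular tetrahedron.
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

def state(n):
    """Qubit state with Bloch vector n = (sin t cos p, sin t sin p, cos t)."""
    theta = np.arccos(n[2])
    phi = np.arctan2(n[1], n[0])
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

d = 2
states = [state(n) for n in bloch]
for j, k in combinations(range(d * d), 2):
    print(j, k, abs(np.vdot(states[j], states[k])) ** 2)   # each pair gives 1/(d+1) = 1/3
```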
-
– Alex 'qubeat' Oct 30 '11 at 15:16
Thanks for the link Alex. But, again, it lists numerical results and connections to other conjectures. My question is why, for example, does induction on $d$ not work? Is it possible to prove an inductive proof is impossible? – Chris Ferrie Oct 30 '11 at 16:27
The constructions of SICs for consecutive $d$ are too different to hope for induction; e.g., see Table I in the e-print you cited: for $d=3$ there are infinitely many SICs, but for other $d$ only finitely many (and the number of SICs behaves rather unpredictably). – Alex 'qubeat' Oct 30 '11 at 21:08
## 1 Answer
Hey Chris, for more analytic arguments about SICs you may want to check out http://arxiv.org/abs/1001.0004 .
I got interested in this problem at some point and talked to Steve. He warned me off, describing the SIC-POVM problem as a "heartbreaker" because every approach you take seems super promising but then inevitably fizzles out without really giving you a great insight as to why.
-
Thanks Seth! That's definitely some useful information. Side note: the thicker papers always seem to find their way to the bottom of the reading pile. I'm going to blame gravity. – Chris Ferrie Nov 7 '11 at 12:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9350425601005554, "perplexity_flag": "middle"} |
http://mathhelpforum.com/advanced-algebra/153955-distance-metric-proof.html | # Thread:
1. ## Distance Metric Proof
Say I have two lists, List1 and List2, containing elements such as words. Some words are common to both List1 and List2. I want to create a distance metric that tells me how far apart the two lists are based on a similarity "score". The similarity score and distance metric are as follows:
Similarity score: Intersection(List1,List2) / Union(List1,List2)
Distance = 1 - Similarity Score
In other words, the similarity score is just the percentage overlap between the two lists and the distance is 0 when the two lists are the same. Say I generalize this to n lists and I calculate the distances between lists (a symmetric matrix of distances). My question is, is this distance formula valid? In other words, does it satisfy the triangle inequality? How do I check this?
A first attempt if all list sizes are equal:
Let N = size of the list, x, y & z = pairwise overlaps between three lists. You must have that x >= y+z-N. Distance between lists that give you x is 1-x/(2N-x). With some algebra you can conclude that the triangle inequality is satisfied for all valid x, y & z.
Is this true for arbitrary list sizes?
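A brute-force numerical check of the triangle inequality is easy to set up (a sketch in Python, modelling the lists as sets over a small universe; names and sizes here are just illustrative):

```python
import random

def jaccard_distance(A, B):
    if not A and not B:
        return 0.0
    return 1 - len(A & B) / len(A | B)

random.seed(0)
universe = list(range(30))
for _ in range(10000):
    A, B, C = (set(random.sample(universe, random.randint(0, 30))) for _ in range(3))
    # triangle inequality: d(A,C) <= d(A,B) + d(B,C)
    assert jaccard_distance(A, C) <= jaccard_distance(A, B) + jaccard_distance(B, C) + 1e-12
print("no violations found")
```

A search like this can only fail to find a counterexample, of course; the reply below points toward the standard proof via the symmetric difference.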
2. Hello, please check this out Symmetric difference - Wikipedia, the free encyclopedia. At the bottom it talks about symmetric difference being somewhat of a metric. Here is how it relates to your distance metric. Let $A$ and $B$ be your two lists (sets). Your distance function is:
$\displaystyle d(A,B) = 1 - \frac{|A \cap B|}{|A \cup B|} = \frac{|A \cup B| - |A \cap B|}{|A \cup B| } = \frac{|(A\setminus B) \cup (B \setminus A)|}{|A \cup B| } = \frac{|A \Delta B|}{|A \cup B|}$
where $\Delta$ is the symmetric difference. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9253178238868713, "perplexity_flag": "middle"} |
http://mathhelpforum.com/differential-geometry/152737-complex-integral-2-problems.html | # Thread:
1. ## Complex integral, 2 problems
1)I must calculate $\int _{|z|=1} \frac{\cos (e^{-z})}{z^2} dz$. I'm not sure if I should see it as the real part of $\int _{|z|=1} \frac{e^ {i (e^{-z})}}{z^2} dz$.
Anyway the problem is obviously when $z=0$.
I get an infinite residue: $Res(f,z=0)=\lim _{z \to 0} \frac{\cos (e^{-z})}{z}=+\infty$.
2)I'd like to get an example of a real integral where when I use complex analysis methods and I choose a contour of integration, I fall over a singularity on the real axis.
Thanks in advance.
2. Notice that
$f(z)=\cos(e^z) \implies f(0)=\cos(1)$
so f does not have a zero at zero.
So to calculate the residue at 0 we have
$g(z)=\frac{\cos(e^z) }{z^2}$
So the residue is
$\displaystyle \lim_{z \to 0}\frac{d}{dz}z^2g(z)=\lim_{z\to 0}\frac{d}{dz}\cos(e^{z})=-\sin(1)$
3. Originally Posted by arbolis
2)I'd like to get an example of a real integral where when I use complex analysis methods and I choose a contour of integration, I fall over a singularity on the real axis.
Thanks in advance.
Try $\displaystyle \int_{-\infty}^\infty \frac{\sin x}{x}dx$ by integrating the complex function $\displaystyle f(z)=\frac{e^{iz}}{z}$.
4. I have noticed that you have been calculating a lot of residues, so this may help.
Suppose you have a complex function $f(z)$ that is holomorphic except for a finite number of poles. Now if $a$ is a pole of order k then $f(z)$ has a Laurent expansion at a that looks like
$\displaystyle f(z)=\sum_{n=-k}^{\infty}a_{n}(z-a)^{n}$
Then
$g(z)=(z-a)^{k}f(z)=\sum_{n=-k}^{\infty}a_{n}(z-a)^{n+k}$
The residue is the coefficient $a_{-1}$; to recover it from $g(z)$ we take $k-1$ derivatives to get
$\displaystyle \frac{d^{k-1}}{dz^{k-1}}g(z)=\sum_{n=-1}^{\infty}\frac{(n+k)!}{(n+1)!}\,a_{n}(z-a)^{n+1}$
This gives that
$\displaystyle \lim_{z \to a}\frac{d^{k-1}}{dz^{k-1}}g(z)=(k-1)!\,a_{-1}$
This gives that
$\text{Res}(f,z=a)=\lim_{z\to a}\frac{1}{(k-1)!}\frac{d^{k-1}}{dz^{k-1}}(z-a)^{k}f(z)$
5. Thanks chip, I'll try it.
Thanks TheEmptySet, that was very useful. However the function f(z) should be $\cos (e^{-z})$ but it doesn't change the fact that $f(0) \neq 0$. Also, how did you know that g(z) had a pole of order 2? Is it because it gave me infinity when I thought it was of order 1?
Correct me if I'm wrong in the following: If a function f has a singularity at $z_0$ and if $\lim _{z \to z_0} (z-z_0) f(z) =\pm \infty$ then $z_0$ is either a pole of order greater than 1 or an essential singularity. On the other hand, if $\lim _{z \to z_0} (z-z_0) f(z) =0$ then f has a removable singularity at $z_0$. While if $\lim _{z \to z_0} (z-z_0) f(z) =k$ with $k \in \mathbb{C}$ and $k \neq 0$, then f has a pole of order 1 at $z_0$.
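A quick numerical check of the original contour integral is also possible (a Python sketch assuming numpy; it should land close to $2\pi i\sin(1)$, which is what the Laurent expansion $\cos(e^{-z})=\cos 1+\sin(1)\,z+O(z^2)$ predicts for $\cos(e^{-z})/z^2$):

```python
import numpy as np

t = np.linspace(0.0, 2*np.pi, 400001)
z = np.exp(1j*t)                      # parametrize the circle |z| = 1
f = np.cos(np.exp(-z)) / z**2
integral = np.trapz(f * 1j * z, t)    # dz = i z dt
print(integral)
print(2j*np.pi*np.sin(1))             # ≈ 5.287 i
```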
6. For this problem we had a function of the form
$\displaystyle \frac{\cos(e^{-z})}{z^2}$ the numerator does not have a zero at z=0 and is analytic so the constant term of its power series must be non zero and the denominator has a zero or order 2. The first term in its Laurent series must look like $\frac{a_0}{z^2}$So it has a pole of order 2. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 33, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481387734413147, "perplexity_flag": "head"} |
http://conservapedia.com/Fundamental_theorem_of_calculus | # Fundamental theorem of calculus
This article/section deals with mathematical concepts appropriate for a student in late high school or early university.
## The Fundamental Theorem of Calculus
The Fundamental Theorem of Calculus, first proven by James Gregory, is the rather remarkable result that the two fundamental operations of calculus are just inverses of each other. Those two operations are performed on functions from the real numbers to the real numbers, and are most easily visualized when the functions are expressed in terms of graphs. The operations are:
• Differentiation -- find the slope of a function's graph at a given point.
• Integration -- find the area under a graph between two given limits.
## The Theorem
There are two parts to the Fundamental Theorem of Calculus[1]
### Part 1
The first can be written as: Let the function f be a continuous function defined on a closed interval [a,b]. Define F(x) as:
$F(x) = \int_a^x f(t)dt$
It follows that:

$F'(x) = f(x)$
The first part states that the function F defined by this integral is an antiderivative of f; that is, the derivative of F is f. In other words, differentiation and integration are inverse operations.
### Part 2
If:
f(x) = F'(x)
Then:
$\int_a^b f(x)dx = F(b) - F(a)$
The second part begins with what we know from part 1.
It then states that the definite integral of the function f from a to b is equal to F evaluated at b minus F evaluated at a. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8834731578826904, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/314290/differentiability-of-a-function-at-a-point | # Differentiability of a function at a point
My high-school calculus teacher has asserted that a function $f(x)$ can only fail to be differentiable at a point $x=a$ if one of the following is true:
• The function is discontinuous at $x=a$: $\lim_{x\to a}f(x) \ne f(a)$
• The function has a cusp or vertical tangent at $x=a$: $\lim_{x\to a}\left|{{f(x)-f(a)}\over{x-a}}\right| = \infty$
• The function has a corner at $x=a$: $\lim_{x\to a^+} {{f(x)-f(a)}\over{x-a}} \ne \lim_{x\to a^-} {{f(x)-f(a)}\over{x-a}}$
While in most cases that is probably correct, I find it somewhat hard to swallow that it is that way for all functions. Specifically, the function
$$f(x)=\begin{cases}x\sin \ln x^2, & x\ne0 \\ 0, & x=0\end{cases}$$
is most definitely not differentiable at $x=0$, but it also doesn't appear to satisfy any of the properties listed above.
The derivative $\frac{\mathrm{d} }{\mathrm{d} x}f(x)$ of the function for $x\ne0$ appears to be $\sin{{\ln x^2}}+2\cos{{\ln x^2}}$, which doesn't show any signs of increasing without bounds as $x\to0$ or suddenly changing at $x=0$, and $f$ is most definitely continuous at that point.
So, the Question is: What's up with $f$? Does it actually fall into one of the cases above, or are they only good as a rough guide for some sorts of functions?
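One way to see what goes wrong at $x=0$ is to look at the difference quotient directly; along $x=e^{-k}$ it equals $\sin(-2k)$, which never settles down. A short numerical sketch (assuming numpy):

```python
import numpy as np

f = lambda x: x * np.sin(np.log(x**2))   # the x != 0 branch; f(0) = 0 by definition

for k in range(1, 9):
    x = np.exp(-k)
    print(k, f(x) / x)    # equals sin(-2k): oscillates between -1 and 1, so no limit
```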
-
I think your derivative is incorrect. – Daryl Feb 25 at 21:50
I added a more elaborate example to my answer. There may or may not be a way to accomplish the same thing with a single formula. I have errands to run and will fiddle with this later. Be nice to your teacher. – Will Jagy Feb 25 at 22:17
Not by the way, it appears your derivative is correct and you have my second example in a single formula. Good. I would have put in $|x|$ instead of $x^2.$ – Will Jagy Feb 25 at 22:23
@WillJagy That is actually what I did at first when I was playing around with it, but then I figured out that I can put $x^2$ in with nearly no extra complication to the derivative, and that way I don't have to use a function that does have a corner at $x=0$. – AJMansfield Feb 25 at 22:26
## 1 Answer
Your teacher gave a rough guide. The standard example with oscillation, the next item on a fuller list, is $$f(x) = x \sin \left( \frac{1}{x} \right)$$ which is continuous at $x=0,$ with the proviso that $f(0) = 0.$
I think you will find that your teacher was aware of this and did not want to muddy the waters.
I do not know what you mean by the word cusp.
EEDDIIIITTTTT: There may or may not be a way to build this next one with a single formula: take any function you like for $1 \leq x \leq 2,$ a polynomial should be possible, such that the graph $y = f(x)$ is tangent to the line $y=x$ at both $x=1,2$ and is tangent to $y = -x$ at $x = 3/2.$ Next, put in a half size version for $1/2 \leq x \leq 1,$ with the result that we have a differentiable function on the larger domain owing to the tangency at $x=1.$ Do the same for $1/4 \leq x \leq 1/2.$ And so on forever. Then make $f(0) = 0,$ and $f(-x) = - f(x).$ This results in a function of bounded derivative and $|f(x)| \leq |x|$ for $x \neq 0,$ but no derivative at the origin.
-
I actually looked at that one first, but the derivative actually diverges to infinity. – AJMansfield Feb 25 at 21:54
Plus, the function, as you have it written, is not actually continuous at $x=0$. It needs the zero case defined. – AJMansfield Feb 25 at 21:55
@AJMansfield, in that case I do not know what you mean by cusp. Which is alright. – Will Jagy Feb 25 at 21:56
@AJMansfield It is standard to define $f(c)$ as $\lim\limits_{x\to c} f(x)$ where the limit exists and the equation defining $f$ is not valid at $c$, so this is usually omitted. Also, note that failing to do so does not lead the function to be discontinuous at $c$, but rather undefined, so it is not a function. Furthermore the slope of this function does not diverge to infinity at $0$, it oscillates between $-1$ and $1$. – Alex Becker Feb 25 at 21:58
1
@AJMansfield The value of the derivative at other points is irrelevant. The difference quotient at $0$ is $$\frac{x\sin(1/x)-0}{x-0}=\sin(1/x)$$ which oscillates between $-1$ and $1$, rather than diverging to infinity. – Alex Becker Feb 25 at 22:02
show 1 more comment | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9617912769317627, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/120494/subgradient-of-minimum-eigenvalue | ## Subgradient of Minimum Eigenvalue
Consider three $N \times N$ hermitian matrices $A_0,A_1,A_2$. Consider the function \begin{align} f(t_1,t_2)=\lambda_{min}(A_0+t_1A_1+t_2A_2) \end{align} where $\lambda_{min}$ denotes the minimum eigen-value. $f(t_1,t_2)$ is clearly a concave function. How do we find the sub-gradient of it?
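Since $f$ is a pointwise minimum of functions that are affine in $(t_1,t_2)$, one standard recipe is to take a unit eigenvector $v$ for $\lambda_{min}$ at the point of interest and use $(v^*A_1v,\; v^*A_2v)$ as a supergradient. A numerical sketch checking the defining inequality (assuming numpy; the matrices and evaluation points are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
def rand_herm(n):
    M = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A0, A1, A2 = (rand_herm(5) for _ in range(3))

def f(t1, t2):
    return np.linalg.eigvalsh(A0 + t1*A1 + t2*A2)[0]    # smallest eigenvalue

t = np.array([0.3, -0.7])
w, V = np.linalg.eigh(A0 + t[0]*A1 + t[1]*A2)
v = V[:, 0]                                              # unit eigenvector for lambda_min
g = np.real(np.array([v.conj() @ A1 @ v, v.conj() @ A2 @ v]))

# concavity of f: f(s) <= f(t) + g . (s - t) for every s
for _ in range(1000):
    s = 3 * rng.normal(size=2)
    assert f(s[0], s[1]) <= f(t[0], t[1]) + g @ (s - t) + 1e-9
print("supergradient inequality holds at all sampled points")
```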
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6843355894088745, "perplexity_flag": "middle"} |
http://mathhelpforum.com/advanced-algebra/209643-polynomial-proof-nth-degree-poly-has-n-roots.html | 1Thanks
• 1 Post By Drexel28
# Thread:
1. ## Polynomial proof that nth degree poly has n roots
Where can I please find the proof of the fundamental theorem of algebra, which says that a polynomial of degree n has n real/complex roots? Is that covered in most algebra texts?
I have read some of the solution methods that use factorization and inspection to find roots of, say, n = 3, 4 polynomials, and also partial differentiation. Are there others?
2. ## Re: Polynomial proof that nth degree poly has n roots
There are tons and tons of proofs of this fact. Probably the quickest I know is that the non-constant polynomial $p$ is a holomorphic map $\mathbb{C}\to\mathbb{C}$ and by setting $p(\infty)=\infty$ induces a holomorphic map between the Riemann sphere and itself $S^2\to S^2$. But, it is a common fact that a holomorphic map between Riemann surfaces where the domain is compact, must be surjective, and so $p$ is surjective. But, since $p(\infty)=\infty$ we see that there must exist some $z\in\mathbb{C}$ such that $p(z)=0$. Thus, you have that $p$ has one zero, and the fact that it has $n$ comes from basic algebra.
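For what the statement says in practice, a numerical illustration is easy (a Python sketch assuming numpy; the degree-5 polynomial is an arbitrary example):

```python
import numpy as np

coeffs = [1, 0, 0, 0, -3, 1]                        # x^5 - 3x + 1
roots = np.roots(coeffs)
print(len(roots))                                    # 5 roots, counted with multiplicity
print(np.allclose(np.polyval(coeffs, roots), 0))     # True, up to round-off
```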
3. ## Re: Polynomial proof that nth degree poly has n roots
Thanks very much for your beautiful and elegant proof with the wonderful Gaussian Sphere!!! So Galois theory is one of the approaches to this problem...
(So sorry for the delay in replying) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9622072577476501, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/198410/what-is-a-holomorphic-vector-field | # What is a Holomorphic Vector Field?
On a smooth manifold $M$, a smooth vector field is an element of $\Gamma(M, TM)$ which is the space of all smooth sections of the bundle $TM \to M$.
If $M$ is a complex manifold, then we have the holomorphic tangent space $T^{1,0}M$. We can form the space $\Gamma(M, T^{1,0}M)$ of smooth sections, but locally, an element can be written as $$f_1\frac{\partial}{\partial z^1} + \dots + f_n\frac{\partial}{\partial z^n}$$ where $n = \mathrm{dim}_{\mathbb{C}}M$ and the functions $f_1, \dots, f_n$ are smooth complex-valued functions, they are not necessarily holomorphic. This makes me think that these vector fields shouldn't be called holomorphic, but maybe I'm wrong.
What is the definition of a holomorphic vector field on a complex manifold?
Any additional resources dealing with such vector fields would also be appreciated.
-
1
"$f_1\frac{\partial}{\partial z^1} + \dots + f_n\frac{\partial}{\partial z^n}$ where $n = \mathrm{dim}_{\mathbb{C}}M$ and the functions $f_1, \dots, f_n$ are smooth complex-valued functions" Why don't you just replace "smooth" by "holomorphic"? – Makoto Kato Sep 18 '12 at 8:15
I'm not sure whether this is what is meant, and if it is, what the correct global description is (i.e. sections of the bundle $T^{1,0}M \to M$ such that...). – Michael Albanese Sep 18 '12 at 8:19
1
The holomorphic tangent bundle has the canonical complex structure. Its holomorphic sections are holomorphic vector fields. The local representation of a holomorphic vector field is of the above form. – Makoto Kato Sep 18 '12 at 8:30
2
We can define holomorphic sections of any holomorphic vector bundle in the same way as we define holomorphic functions: if $E \to X$ is a bundle, then $\overline \partial$ acts on $E$ (define it locally as usual and observe that the resulting operator glues b/c the transition functions are holomorphic). Then a holomorphic section $\sigma$ of $E$ is a smooth section such that $\overline \partial \sigma = 0$. In particular, this entails that your local functions $f_j$ are holomorphic. – Gunnar Magnusson Sep 18 '12 at 8:32
Would either/both of you be willing to write your comment in an answer so that I may accept it? – Michael Albanese Sep 18 '12 at 8:35
## 1 Answer
We can define holomorphic sections of any holomorphic vector bundle in the same way as we define holomorphic functions. Let $X$ be a complex manifold and let $E \to X$ be a holomorphic vector bundle over $X$. We can extend the $\overline\partial$ to act on sections of $E$: Let $E_U \to U \times \mathbb C^r$ be a local trivialization and $(e_1, \dots, e_r)$ be a local holomorphic frame of $E$. If $\sigma = \sum_j s_j e_j$ is a section of $E$ over $U$, then we set $$\overline\partial \sigma := \sum_j \overline \partial s_j \otimes e_j.$$ If $E_V \to V \times \mathbb C^r$ is another trivialization, then we write $g(z,\lambda) = (z, g(z) \lambda)$ for the induced transition function. These are holomorphic, so $g(z)$ is a $r \times r$ matrix of holomorphic functions. If we write $\sigma_U$ and $\sigma_V$ for the representations of the section $\sigma$ in the frames over $U$ and $V$, then $\sigma_U = g \sigma_V$. It follows that $$\overline \partial \sigma_U = g \overline \partial \sigma_V$$ because $g$ is holomorphic, so the $\overline \partial$ operator glues to define an operator on the space of sections of $E$.
We now define holomorphic sections of $E$ to be smooth sections $\sigma$ such that $\overline \partial \sigma = 0$. If we pick a local holomorphic frame $(e_1, \dots, e_r)$ and write $\sigma = \sum_j s_j e_j$ as before, then this entails that $\sigma$ is holomorphic if and only if all the functions $s_j$ are holomorphic.
We could of course have defined holomorphic sections as being those sections that satisfy that the "coordinate functions" $s_j$ are holomorphic in any local holomorphic frame. Since the transition functions of $E$ are holomorphic, this is well defined. This is basically the same as what we did here.
Since you ask for additional resources for dealing with holomorphic tangent fields specifially, I encourage you to have a look at the Bochner--Weitzenböck formulas you asked about on MO the other day. These are often used to show that there are no non-zero holomorphic vector fields on a manifold (a fun exercise is to prove this by using the Kähler--Einstein metric on a projective manifold with ample canonical bundle -- try Ballmann or Zheng's books if you need help on this).
-
Thanks for your help yet again. – Michael Albanese Sep 18 '12 at 13:59
1
"We are here to help each other get through this thing, whatever it is." – Gunnar Magnusson Sep 18 '12 at 14:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9099313616752625, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/74561?sort=newest | ## Is a solution of a linear system of semidefinite matrices a convex combination of rank 1 solutions?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The cone of symmetric positive semidefinite $n\times n$ matrices is the convex hull of rank $1$ matrices. That is, every symmetric positive semidefinite matrix is a convex combination of rank 1 matrices.
Does this property generalize to solutions of linear systems of semidefinite matrices?
Let me be precise. Fix $k$ symmetric $n\times n$ matrices $A_1,\ldots, A_k$. Consider the system of linear equations $\langle A_1,X\rangle=\cdots = \langle A_k, X\rangle = 0$, which you want to solve for a symmetric semidefinite $n\times n$ matrix $X\succeq 0$. Here, the inner product of two matrices is $\langle(a_{ij}),(b_{ij})\rangle=\sum_{i,j = 1}^n a_{ij}b_{ij}$.
The set of solutions $X$ forms a closed convex subcone $C$ of the cone of semidefinite $n\times n$ matrices. Is $C$ the convex hull of its rank 1 matrices? Namely, is every solution a convex combination of rank 1 solutions?
-
## 2 Answers
No. Try $n=k=2$ with $A_1 = \pmatrix{1 & 0\cr 0 & -1\cr}$ and $A_2 = \pmatrix{1 & 1\cr 1 & -1\cr}$. The only symmetric matrices $X$ with $(A_1,X) = (A_2,X) = 0$ are multiples of $I$, so there are no rank 1 solutions.
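The counterexample is easy to verify by hand or numerically (a sketch assuming numpy): writing a symmetric $X$ with entries $a, b, b, c$, the constraints read $a-c=0$ and $a+2b-c=0$, which force $b=0$ and $a=c$.

```python
import numpy as np

A1 = np.array([[1., 0.], [0., -1.]])
A2 = np.array([[1., 1.], [1., -1.]])

X = np.eye(2)                                  # the only solutions are multiples of I
print(np.sum(A1 * X), np.sum(A2 * X))          # 0.0 0.0  (both constraints hold)
print(np.linalg.matrix_rank(X))                # 2 -- so no rank-1 solutions exist
```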
-
The answer seems obviously "no", since a system of equations need not have any rank one solutions (for example, the solution set can be the line $x I_n,$ where $I_n$ is the identity matrix.)
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.90770423412323, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/78573/classical-analogue-of-the-stone-von-neumann-theorem/79202 | ## Classical analogue of the Stone-von Neumann Theorem?
Let $U_s$, $V_t$ be a pair of continuous $n$-parameter groups ($n < \infty$) of unitary operators on a complex Hilbert space $\mathcal{H}$. The Stone-von Neumann Theorem establishes that any such pair forming an irreducible representation of the Weyl relations,
$U_sV_t = e^{is\cdot t}V_tU_s$
is unitarily equivalent to the Schrödinger representation, and hence that all such representations are unitarily equivalent. (Note: the Weyl relations in this context are equivalent to the canonical commutation relations (CCRs) $[Q,P]\psi=i\psi$ for all $\psi$ in the common dense domain of $Q$ and $P$, where $Q$ and $P$ are the generators of $V$ and $U$.)
Question: Is there a known analogue of this result in the context of classical Hamiltonian mechanics?
I don't know of a classical analogue of the Weyl relations. But there is a classical analogue of the CCRs, which is the Poisson bracket $\{q,p\}=1$. So, here's how I imagine a classical analogue of the Stone-von Neumann theorem might look (just a rough attempt, really!).
Let $\mathcal{M}$ be a smooth $2n$-dimensional manifold and $\omega$ a symplectic form on $\mathcal{M}$. Let $\xi = (q,p)$ be any global coordinate system on $\mathcal{M}$, and let $Q:\mathcal{M}\rightarrow\mathbb{R}$ and $P:\mathcal{M}\rightarrow\mathbb{R}$ be the projections onto $q$ and $p$, respectively. Then (conjecture): all such pairs ($Q$, $P)$ satisfying,
$\{Q,P\}=1$
where $\{\cdot,\cdot\}$ is the Poisson bracket associated with $(\mathcal{M}, \omega)$, are related by a single canonical transformation.
Does this seem like a reasonable way to formulate the classical analogue? Is the status of this conjecture obvious? Your thoughts are appreciated!
-
1
Let n=1. What you get is the famous Jacobian conjecture... In the classical limit, representations correspond to symplectic leaves. $\mathbb{R}^{2n}$ is symplectic, so there is only one symplectic leaf, in accordance with Stone–von Neumann... – Alexander Chervov Oct 19 2011 at 13:25
## 1 Answer
Expanding on Chervov's comment: the Jacobian conjecture for two variables conjectures that if a polynomial map $(x,y) \to (X,Y)$ has for its Jacobian $\partial(X,Y)/\partial(x,y)$ a nonzero constant, then this polynomial map has a polynomial inverse. (The chain rule, plus the fact that over ${\mathbb C}$ polynomials have zeros, yields the truth of the converse: if the map has polynomial inverse, then its Jacobian is a nonzero constant.) If $(x,y) = (q,p), (X,Y) = (Q,P)$ then ${Q, P }$ is the Jacobian of this transformation. So: an affirmative answer for the Jacobian conjecture would precisely yield a 'yes' to your 'reasonable formulation' in the polynomial category when $n =1$.
Going back to Stone-von-Neumann and the Weyl relations you wrote down suggests adding the hypothesis that the flows of the Hamiltonian vector fields for both $Q$ and $P$ are complete. With that addition, you have a chance of a 'yes' answer when $n=2$, perhaps in the smooth category.
Note: your hypotheses imply that your manifold is ${\mathbb R}^{2n}$.
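A concrete instance of the polynomial case (an illustrative example, not from the original discussion): the shear $Q=q+p^2$, $P=p$ satisfies

$$\{Q,P\}=\frac{\partial Q}{\partial q}\frac{\partial P}{\partial p}-\frac{\partial Q}{\partial p}\frac{\partial P}{\partial q}=1\cdot1-2p\cdot0=1,$$

and has the polynomial inverse $q=Q-P^2$, $p=P$, so it is exactly the kind of globally invertible canonical transformation of $\mathbb{R}^2$ that the Jacobian conjecture is about.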
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8939915895462036, "perplexity_flag": "head"} |
http://mathhelpforum.com/calculus/3007-integrals-first-substitution-then-parts.html | # Thread:
1. ## Integrals - first by substitution then by parts
we're given the integral 2cos(ln(x))dx
and told to first use substitution then integration by parts, could someone kindly point me in the right direction
2. Originally Posted by dsspence
we're given the integral 2cos(ln(x))dx
and told to first use substitution then integration by parts, could someone kindly point me in the right direction
Right direction?
Okay, go to the Calculus thread in this forum, look for the posting by nirva, "Some calc questions". Open that. The second problem there is almost the same as yours, except that yours has "2" before the cos(ln(x)).
If you know what to do with that "2", and you could follow the solution there of that second problem, then you should be able to get your integral.
3. Originally Posted by dsspence
we're given the integral 2cos(ln(x))dx
and told to first use substitution then integration by parts, could someone kindly point me in the right direction
You have,
$\int \cos (\ln x)\, dx=\int \frac{x\cos (\ln x)}{x}\,dx$
Use the substitution $u=\ln x$; then $u'=1/x$ and $x=e^u$.
Thus,
$\int e^u \cos u \frac{du}{dx} dx=\int e^u \cos u du$
----
I am going to stop here because I realized that ticbol made a post before me and already answered this question. Do as he says. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9334362745285034, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/20824/vector-bundles-on-the-moduli-stack-of-elliptic-curves | ## Vector Bundles on the Moduli Stack of Elliptic Curves
As is well known, there is classification of line bundles on the moduli stack of elliptic curves over a nearly arbitrary base scheme in the paper *The Picard group of $M_{1,1}$* by Fulton and Olsson: every line bundle is isomorphic to a tensor power of the line bundle of differentials $\omega$ and $\omega^{12}$ is trivial.
I am now interested in a similar classification scheme for higher dimensional vector bundles (i.e. etale-locally free quasi-coherent sheaves of finite rank). I am especially interested in the prime 3, so you may assume, 2 is inverted, or even that we work over $\mathbb{Z}_{(3)}$. I found really very little in the literature on these questions. I know only of two strategies to approach the topic:
1) I think I can prove that every vector bundle $E$ on the moduli stack over $\mathbb{Z}$ localized at $p$ for $p>2$ is an extension of the form $0\to L \to E \to F$ where $L$ is a line bundle and $F$ a vector bundle of one dimension smaller than $E$ (this may hold also for $p=2$, but I haven't checked). In a paper of Tilman Bauer (Computation of the homotopy of the spectrum tmf) Ext groups of the so-called Weierstraß Hopf algebroid are computed, which should amount to a computation of the Ext groups of the line bundles on the moduli stack of elliptic curves if one inverts $\Delta$. It follows then that every vector bundle on the moduli stack of elliptic curves over $\mathbb{Z}_{(p)}$ is isomorphic to a sum of line bundles for $p>3$, if I have not made a mistake. But for $p=3$, there are many non-trivial Ext groups and I did not manage to see which of the occurring vector bundles are isomorphic.
2) One can try to find explicit examples of non-trivial higher-dimensional vector bundles. A candidate was suggested to me by M. Rapoport: for every elliptic curve $E$ over a base scheme $S$ we have a universal extension of $E$ by a vector bundle. Take the Lie algebra of this extension and we get a canonical vector bundle over $S$. As explained in the book Universal Extensions and One Dimensional Crystalline Cohomology by Mazur and Messing, this is isomorphic to the de Rham cohomology of $E$. This vector bundle is an extension of $\omega$ and $\omega^{-1}$ and lies in a non-trivial Ext group. But I don't know how to show that this bundle is non-trivial.
I should add that I am more a topologist than an algebraic geometer and stand not really on firm ground in this topic. I would be thankful for any comment on the two strategies or anything else concerning a possible classification scheme.
-
How did you prove that every vector bundle is an extension? – Tyler Lawson Apr 9 2010 at 12:55
Since the coarse space is affine and the automorphism groups of the geometric points have order a power of 2 times a power of 3, and the Ext-group (or $\omega^{-1}$ by $\omega$, to be precise) is identified with degree-1 coherent cohomology (of $\omega^2$), its nontriviality is also not visible over $\mathbf{Z}[1/6]$. So can you also briefly indicate how you know it is nontrivial? – BCnrd Apr 9 2010 at 14:49
@Brian: The computation of the Ext-groups of tensor powers of $\omega$ on this moduli stack is written up in the Bauer paper that was linked to under part (1). – Tyler Lawson Apr 9 2010 at 15:07
3
You can also deduce that $Ext^1(O, \omega^2)$ is non-trivial from the fact that there are non-trivial "mod p" modular forms of weight 2 when p=2 or 3; these are computed (for instance) in Deligne's note on "formulas, after Tate" in Antwerp 4. – Charles Rezk Apr 9 2010 at 15:33
6
The fact that vector bundles on $\mathcal M_{1,1}$ split as sums of line bundles in characteristic larger than 3 can also be seen by the classical description of $\mathcal M_{1,1}$ as an open substack of the weighted projective stack $\mathbb P(4,6)$, coming from the Weierstrass form. Any locally free sheaf on $\mathcal M_{1,1}$ extends to a reflexive sheaf on $\mathbb P(4,6)$, which is locally free, because $\mathbb P(4,6)$ is regular of dimension 1. It is not hard to prove that any locally free sheaf on a weighted projective stack $\mathbb P(m,n)$ splits as a direct sum of line bundles. – Angelo Apr 9 2010 at 17:34
show 3 more comments | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.931396484375, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/23936/numerical-approximation-of-an-integral | # Numerical approximation of an integral
I read a problem to determine the integral $\int_1^{100}x^xdx$ with error at most 5% from the book "Which way did the bicycle go". I was a bit disappointed to read the solution which used computer or calculator. I was wondering whether there is a solution to the problem which does not use computers or calculators. In particular, is there way to prove that the solution given in the book has a mistake because it claims that $$\frac{99^{99}-1}{1+\ln 99}+\frac{100^{100}-99^{99}}{1+\ln 100}\leq \int_1^{100}x^xdx$$ gives a bound $1.78408\cdot 10^{199}\leq \int_1^{100}x^xdx$ but I think the LHS should be $1.78407\cdot 10^{199}\leq \int_1^{100}x^xdx$? I checked this by Sage and Wolfram Alpha but I was unable to do it by pen and paper.
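The sixth significant digit can be settled with a few lines of arbitrary-precision arithmetic (a sketch using mpmath, assuming it is available; the result can be compared with the decimal expansion quoted in the comments below):

```python
from mpmath import mp, power, log

mp.dps = 30
lhs = (power(99, 99) - 1)/(1 + log(99)) + (power(100, 100) - power(99, 99))/(1 + log(100))
print(lhs)   # truncating the output gives 1.78407...e199, rounding gives 1.78408...e199
```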
-
My integration routine gives $1.78464\times 10^{199}$ for the right hand side. – Fabian Feb 27 '11 at 9:39
## 2 Answers
$x^x$ grows really fast. Notice $$n^n>\frac{1}{n}\sum_{i=1}^{n-1} i^i.$$
In short $100^{100}$ is a lot bigger than $99^{99}$, so $$\frac{99^{99}-1}{1+\ln99}+\frac{100^{100}-99^{99}}{1+\ln100}\sim \frac{100^{100}}{1+\ln100}=10^{99}\frac{10}{1+\ln100}$$
By calculator $$\frac{10}{1+\ln100}=1.78407,$$ (notice the same as above) but we can get basically that by hand. First show $4<\ln100<5$. Then $$1.6=\frac{10}{6}\leq \frac{10}{1+\ln100}\leq \frac{10}{5}=2.$$
But let's do better. Use the fact that $$\sum_{i=1}^n\frac{1}{i}=\ln n +\gamma +E$$ where $|E|\leq \frac{1}{2n}$, and $n\geq 4$.
Then $$\ln(100)=1+\frac{1}{2}+\frac{1}{3}+\cdots +\frac{1}{100}-0.577+E$$ and diligently adding this together (under 5 minutes) gives $\ln(100)\sim 4.6$. From here we get the coefficient $\frac{25}{14}$ with an error bounded by $\frac{2}{100}$ or 2%. (We gained error because I cut off 4.61 to 4.6)
-
$100^{100}$ is just over 270 (or just under $100e$) times $99^{99}$. Does that count as "a lot" here? – Henry Feb 27 '11 at 2:00
@Henry: absolutely. The "exact" value of the LHS is $1.784078998771057... 10^{199}$. – Eelvex Feb 27 '11 at 2:03
@Henry: I said what I meant by a lot. The sum only depends on the last piece to get an error under $\frac{1}{n}$. In other words, only the last interval in the integral matters when we want an error of 5%. So we can get rid of 99% of the length of the interval of integration and have an error of less than 1%. That is a lot in my mind. – Eric♦ Feb 27 '11 at 2:04
Because your integrand grows so fast, the whole integral is dominated by the region $x\approx 100$. We can write $x^x = \exp[ x \ln(x)]$ and then expand $x \ln(x) = 100 \ln(100) + [1+ \ln(100)] (x- 100) + \cdots$ around $x = 100$ (note that it is important to expand inside the exponent). The integral can therefore be estimated as $$\int_1^{100} dx \, x^x \approx 100^{100} \int_{-\infty}^{100} dx\, e^{[1+ \ln(100)] (x- 100) } = \frac{100^{100}}{1 + \ln (100)}.$$ Numerics shows that this result is off by $3\times 10^{-4}$.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9482322335243225, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/150039/combinatorics-number-of-options-to-set-a-a-b-ordered-pair-under-terms/150044 | # combinatorics: number of options to set a (a,b) ordered pair under terms
We have to find the number of ordered pairs $(a,b)$ subject to the conditions $a \subseteq b \subseteq\{1, 2,\ldots, n\}$; that is, both are subsets of $\{1, 2,\ldots, n\}$ and $a\subseteq b$.
I was thinking of handling the $b$ coordinate first and, through that, the $a$ coordinate. But what is the number of options for $b$?
Thanks.
-
1
The number of options for $b$ is $2^n,$ the number of subsets of $\{1,\ldots,n\}.$ – Giuseppe May 26 '12 at 13:29
ordered-fields? – Phira May 26 '12 at 14:26
## 3 Answers
Well, considering each element $1 \le m \le n$, we have ($m \not\in B$) or ($m \in B$ but $m \not\in A$) or ($m \in A$), hence there are three choices for each element.
So the answer is $3^n$.
Your approach also works; it's algebraic. Let $U = \{1, 2, \ldots, n\}$, \begin{align*} \sum_{B \subseteq U} \sum_{A \subseteq B} 1 &= \sum_{B \subseteq U} 2^{|B|} \\ &= \sum_{k} \sum_{\scriptstyle B \subseteq U \atop \scriptstyle |B| = k} 2^k \\ &= \sum_{k} \binom n k 2^k \\ &= (2+1)^n \\ &= 3^n \end{align*}
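A brute-force check of the count for small $n$ (a Python sketch; it simply enumerates the pairs counted above):

```python
from itertools import combinations

def count_pairs(n):
    elems = range(1, n + 1)
    subsets = [set(c) for k in range(n + 1) for c in combinations(elems, k)]
    return sum(1 for B in subsets for A in subsets if A <= B)

for n in range(1, 6):
    print(n, count_pairs(n), 3**n)   # the two columns agree
```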
-
The number of options for your pairs $(a,b)$ should be $\Sigma_{k=0}^n 2^k\binom{n}{k}$
-
Each element is either in a and b, in b but not in a, or in neither. edit: didn't see the first part of Frank's answer. It's perfect.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8476875424385071, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/77009/mathematicians-failing-to-solve-problems-despite-having-all-methods-required/77015 | Mathematicians failing to solve problems despite having all methods required [closed]
On this wikipedia page, there is the following quote by Anil Nerode:
Being attached to a speculation is not a good guide to research planning. One should always try both directions of every problem. Prejudice has caused famous mathematicians to fail to solve famous problems whose solution was opposite to their expectations, even though they had developed all the methods required.
What are some good examples of this?
-
5
If nothing else, this should have a big-list tag. I really don't know if this question is suited to MO. – David Roberts Oct 3 2011 at 4:37
2
I think this question is very broad. I mean this happens all the time in mathematics. Perhaps the question should be made more specific. – Martin Brandenburg Oct 3 2011 at 7:20
It is interesting that Gerry Myerson's answer and John Stillwell's are morally related. Non-Euclidean geometries were at one point an impetus to model theory. – Lunasaurus Rex Oct 3 2011 at 9:16
4 Answers
I suppose plenty of people could have developed non-Euclidean geometry, had they not been so intent on proving that no such thing existed.
-
1
As Girolamo Saccheri, the author of "Euclides ab omni naevo vindicatus". – Giuseppe Oct 3 2011 at 6:26
Gödel's failure to discover the unsolvability of the decision problems for predicate logic and Peano arithmetic may be an example. Gödel had all the necessary tools: arithmetization, diagonalization, and an equation calculus for defining computable functions.
However, as he admitted later, he was misled by his incompleteness proof into thinking that there could not be an absolute definition of computable function -- he expected one could always form new computable functions by diagonalization. It was only when Turing came up with the definition via Turing machines that Gödel realized he was mistaken.
-
Bott-Tu has a comment in the introduction about Poincaré not discovering the computability of de Rham cohomology through combinatorial data associated to a finite good cover. I might as well give you the original:
"To digress for a moment, it is difficult not to speculate what kept poincaré from discovering this argument forty years earlier. One has the feeling that he already knew every step along the way. After all, the homotopy invariance of the de Rham theory for $\mathbb{R}^n$ is known as the Poincaré lemma! Nevertheless, he veered sharply from this point of view, thinking predominantly in terms of triangulations, and so he in fact was never able to prove either the computability of de Rham or the invariance of the combinatorial definition. Quite possibly the explanation is that the whole $C^\infty$ point of view and, in particular, the partitions of unity were alien to him and his contemporaries, steeped as they were in real or complex analytic questions."
EDIT: I guess this is more of a failure to observe the fact, rather than it being opposite to expectation. Nevertheless, it's interesting and perhaps close enough.
-
The history of the first version of Poincaré's essay, submitted to the competition sponsored by King Oscar II of Sweden, could be representative of this situation, but at the same time of the capacity to overcome previous errors.
The problem of the stability of a planetary system was central from the dawn of Newtonian mechanics. A father of analytical mechanics such as Dirichlet believed he had proved the stability of the n-body problem, but he died suddenly before writing it down.
King Oscar's prize was aimed at obtaining such a proof of stability, and in fact in the first version of his essay Poincaré claimed the stability of the restricted 3-body problem. This essay won the prize, but just after the publication of the paper in the Acta he realized there was a serious error, due to the presence of homoclinic orbits. Consequently the published issues were recalled and a second version of the essay was printed.
The existence of a first version of Poincaré's essay was discovered only in 1994, by June Barrow-Green. The second version is known as the starting point of the qualitative geometric methods in mechanics.
For more information a possible source is Diacu, F., The solution of the n-body problem, Math. Intelligencer 18 (3) 66-70, 1996.
-
1
Were there any useful ideas in the first version that were omitted from the second? (It is an unfortunate habit of modern mathematicians to, upon notification of a mistake in a paper, remove the whole paper along with its correct and useful parts.) – darij grinberg Oct 3 2011 at 18:07
Dear Darij Grinberg I have edited the answer to complete the information, I hope to have been more satisfactory. – Giuseppe Oct 3 2011 at 20:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.963125467300415, "perplexity_flag": "middle"} |
http://unapologetic.wordpress.com/2008/04/24/infinite-series/?like=1&source=post_flair&_wpnonce=a832ba55c5 | The Unapologetic Mathematician
Infinite Series
And now we come to one of the most hated parts of second-semester calculus: infinite series. An infinite series is just the sum of a (countably) infinite number of terms, and we usually collect those terms together as the image of a sequence. That is, given a sequence $a_k$ of real numbers, we define the sequence of “partial sums”:
$\displaystyle s_n=\sum\limits_{k=0}^na_k$
and then define the sum of the series as the limit of this sequence:
$\displaystyle\sum\limits_{k=0}^\infty a_k=\lim\limits_{n\rightarrow\infty}s_n$
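For instance, with the (purely illustrative) choice $a_k=2^{-k}$ the partial sums have a closed form:

$\displaystyle s_n=\sum\limits_{k=0}^{n}2^{-k}=2-2^{-n},\qquad\text{so}\qquad\sum\limits_{k=0}^{\infty}2^{-k}=\lim\limits_{n\rightarrow\infty}s_n=2.$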
Notice, though, that we’ve seen a way to get finite sums before: using step functions as integrators. So let’s use the step function $\lfloor x\rfloor$, which is defined for any real number $x$ as the largest integer less than or equal to $x$.
This function has jumps of unit size at each integer, and is continuous from the right at the jumps. Further, over any finite interval, its total variation is finite. Thus if $f$ is any function continuous from the left at every integer it will be integrable with respect to $\lfloor x\rfloor$ over any finite interval. Further, we can easily see
$\displaystyle\int\limits_a^bf(x)d\lfloor x\rfloor=\sum\limits_{\substack{k\in\mathbb{Z}\\a<k\leq b}}f(k)$
Now given any sequence $a_n$ we can define a function $f$ by setting $f(x)=a_{\lceil x\rceil}$ for any $x>-1$. That is, we round each number up to the nearest integer $n$ and then give the value $a_n$. This gives us a step function with the value $a_n$ on the subinterval $\left(n-1,n\right]$, which we see is continuous from the left at each jump. Thus we can always define the integral
$\displaystyle\int\limits_{-\frac{1}{2}}^bf(x)d\lfloor x\rfloor=\sum\limits_{k=0}^{\lfloor b\rfloor}a_k=s_{\lfloor b\rfloor}$
Then as we let $b$ go to infinity, $\lfloor b\rfloor$ goes to infinity with it. Thus the sum of the series is the same as the improper integral.
So this shows that any infinite series can be thought of as a Riemann-Stieltjes integral of an appropriate function. Of course, in many cases the terms $a_k$ of the sequence are already given as values $f(k)$ of some function, and in that case we can just use that function instead of this step-function we’ve cobbled together.
Posted by John Armstrong | Analysis, Calculus
About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9164935946464539, "perplexity_flag": "head"} |
http://mathhelpforum.com/pre-calculus/6694-3-questions-2-range-1-logarithm-2.html | Thread:
1. Re:
Ok great thanks
2. Re:
What about the other problem on logs do you guys have any clue on that one?
3. Originally Posted by qbkr21
3. When Log base b of A=2 and Log base b of D=5
What is: Log base b of (a+d)
$log_bA = 2$
$log_bD = 5$
I presume you want to know: $log_b(A+D)$? (Yes, case is important. A is not necessarily the same as a.)
I couldn't tell you. With the given information I could tell you what $log_b(AD)$ is (its 2 + 5 = 7).
Let me show you why this is so hard.
$log_bA = 2$ means that $b^2 = A$. Similarly $log_bD = 5$ means $b^5 = D$. So
$A + D = b^2 + b^5$
But this is NOT a simple power of b. There IS an exponent c such that $b^c = b^2 + b^5$, but this is not an easy equation to solve for a general value of b, if it's even possible to do generally. (However if we know the value of b we can numerically estimate it.)
If it still seems simple, try to find c for $2^c = 2^2 + 2^5 = 4 + 32 = 36$. So $c = \log_2 36 \approx 5.169925001$. But if b = 3, then $c = \log_3 252 \approx 5.033103256$.
-Dan
4. I think he wants to know,
$\log_b (A\cdot D)$
Not, $\log_b(A+B)$
It seems that either he copied it wrong or his teacher posed an unfair problem.
5. Re:
Dan you are right I did A time D and got 15 and that was wrong on the test. Maybe I am in way over my head, I might should have just added them. But when you expand logs through mulitiplication you add them, this is what I thought but as stated she marked it wrong. My teacher obviously wanted something totally different. Thanks Guys. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9738979339599609, "perplexity_flag": "middle"} |
http://freelance-quantum-gravity.blogspot.com/2009_10_01_archive.html | # DiY quantum gravity
An independent viewpoint about quantum gravity.
## Saturday, October 31, 2009
### The dark side of the landscape
In the previous post I had presented the multiverse in a way that made it look almost innocuous. As I have said a few times in this blog, I had heard about how the landscape (the existence of a large number of vacua) in string theory made it unable to make predictions.
Despite that, the actual articles I had read didn't give me that impression, so I suspected that I was missing something; the problem is that I didn't know what. Reading a recent entry in Lubos's blog titled A small Hodge three-generation Calabi-Yau, I faced that problem of missing information again. So I reread the KKLT paper once more and searched the bibliography for something that would give me a clue.
At last I was led to the correct paper, The statistics of string/M theory vacua by Michael R. Douglas (he is not the actor, of course). The abstract of the paper says it all:
We discuss systematic approaches to the classification of string/M theory vacua, and physical questions this might help us resolve. To this end, we initiate the study of ensembles of effective Lagrangians, which can be used to precisely study the predictive power of string theory, and in simple examples can lead to universality results. Using these ideas, we outline an approach to estimating the number of vacua of string/M theory which can realize the Standard Model.
I still haven't finished reading the paper, but the picture is clear. Yes, one can have a chaotic/eternal inflation scenario that creates an infinity of universes, or one can go from one to another through some kind of CDL (Coleman-De Luccia) or Hawking instanton among de Sitter vacua, or whatever mechanism creates a universe for any of the vacua available in string theory. And yes, every new universe would have a smaller cosmological constant than the previous one. In that way one gets a universe with the small cosmological constant (cc) observed in ours. The anthropic principle (or ideology, as Lubos prefers to call it) says that in universes with a large cc there are no observers, so it is not that bizarre that we observe such a small cc, despite the fact that theories with broken supersymmetry would naturally have a big one to begin with.
The real problem is that in that paper it is argued that even with the restrictions of a small cc and the observed gauge content (the standard model one), one still has a large number of solutions with the values of the coupling constants, the masses of the particles, etc. in the observed range of the standard model. I made a quick search in the paper to see if this is where the famous $10^{500}$ appeared, but I couldn't find it (the search feature of Acrobat doesn't seem to work with math expressions); however, numbers in the range $10^{100}\sim 10^{400}$ do appear in the text, so it is in the right order of magnitude. I intend to read this paper soon, as well as another by Kallosh and Linde, Landscape, the scale of SUSY breaking, and inflation.
It is not that I like the idea of the landscape; I don't, and that's why I hadn't found these papers sooner and had searched other lines of investigation, such as the ones mentioned in this blog. But since cosmology seems to be such a hot topic nowadays, mainly because of the large amount of data available, I think it is a good idea to know this kind of thing in some detail.
As I had said previously in other blog entries, I was aware that there were some concrete approaches that tried to disprove the landscape, understood in the sense presented here, that is, too many vacua compatible with the standard model, not just too many vacua compatible with a small cosmological constant. Some of those papers are The String Landscape and the Swampland, discussed by Lubos here and also discussed by Distler in his entry YOU CAN'T ALWAYS GET WHAT YOU WANT. More entries in Lubos's blog discussing papers against the (SM) landscape are Ooguri and Vafa's swampland conjectures. He also has a paper with C. Vafa and Nima Arkani-Hamed titled The String Landscape, Black Holes and Gravity as the Weakest Force. I think I have seen a blog entry of his about that paper, but it doesn't appear in the trackbacks for some reason.
Well, I leave this entry as a loosely discussed bibliography of the real problems of the landscape ideology. As I said, there are possibly good reasons to expect a good "vacuum selection method", as M. Douglas calls it, in which case one wouldn't need to care too much about the landscape. Possibly the LHC could give a clue about it. It is good to know that beams are beginning to circulate in it again, at least partially, and that very soon, if everything goes OK, it will be giving data.
Posted by Javier
## Saturday, October 17, 2009
### Universe or multiverse?
Recently there has been a spike of comments in the blogosphere about the multiverse, partially because of a new article by Linde and Vanchurin titled How many universes are in the multiverse?
But the battle against the multiverse and its buddies, the anthropic principle and the string landscape, is not new at all. Peter Woit is a champion of that cause. I have always considered P.W. innocuous, and a source of information about string theory, even if he doesn't like it. Since he is a mathematician, or at most a mathematical physicist, it is not a lack of respect for his position at a university not to take seriously his objections to a branch of physics that he mostly doesn't understand.
But recently I am beginning to think that he may in fact be causing some damage. The problem I see is that he is so intent on criticising string theory that he only looks for the parts that suit his purposes, without worrying about understanding the whole picture. Worse still, he can make people believe that his biased view is the whole view. And that's very bad, because it gives a very wrong perspective on what is being done in string theory, and in cosmology and high energy physics in general.
In particular, what he says about the multiverse, the string landscape and similar topics is totally misleading. I am not saying that these are not controversial areas, only that what Woit says about them is not representative. To begin with, one may realize that the existence of multiverses is, in some cases (inflation), mostly a consequence of already established physics, with the only assumption being some special features of the potential of the inflaton. Also, if one trusts string theory, multiverses would arise as a consequence of a saltatory cosmological constant. In fact the two scenarios are similar in spirit, although very different in the details.
If a reader of this blog wants to get a much better idea, I would recommend reading the book presented at the beginning of this entry. It is edited by Bernard Carr, who also contributes an introductory chapter and a thematic one, "the anthropic principle revisited". The first chapter gives an overview of the rest of the book and explains the many meanings of the term "multiverse" that are treated.
The book itself is based on a series of conferences partially supported by the Templeton Foundation. The list of participants includes very prominent physicists such as S. Weinberg, S. Hawking, L. Susskind, A. Linde, P. Davies, R. Kallosh and a long etcetera. The list of topics is also very broad, covering many of the variants of the multiverse idea and why it arises in present-day physics research.
I have only read a few (six) of the articles so far and I am alternating them with other articles about inflation in string theory, supersymmetry breaking and general literature about the cosmological constant. My idea is that there are probably better alternatives to explain the apparent existence of an accelerating universe (and in general, possibly, some fine-tuning problems), but that it would be stupid not to read (at least a part of) what appears in that book if one is concerned about it.
Posted by Javier
Labels: cosmology, string theory
http://physics.stackexchange.com/questions/tagged/rotational-dynamics+aerodynamics | # Tagged Questions
### Torque required to spin a disk along its diameter
How would I calculate (or simulate) this? I am only interested in the aerodynamic drag caused by the surface moving, not any other forces. As far as I know, the only variables needed are the drag ...
### Calculation for force generated by a rotating rectangular blade
When trying to calculate the lift force generated by a simple rectangular blade, I've found the following equation: $$F = \omega^2 L^2 l\rho\sin^2\phi$$ in which $\omega$ is the angular velocity, $L$ ...
### Is this simulation following real physics?
I am trying to simulate a game in Box2D(Physics engine). The game that I am trying to simulate is very simple and can be found here: http://www.makaimedia.com/#/speartoss What I want to know is that, ...
http://jdh.hamkins.org/every-model-embeds-into-own-constructible-universe/ | # Every countable model of set theory embeds into its own constructible universe
Posted on July 5, 2012 by Joel David Hamkins
• J. D. Hamkins, “Every countable model of set theory embeds into its own constructible universe,” , pp. 1-26. (under review)
````@ARTICLE{Hamkins:EveryCountableModelOfSetTheoryEmbedsIntoItsOwnL,
author = {Joel David Hamkins},
title = {Every countable model of set theory embeds into its own constructible universe},
journal = {},
year = {},
volume = {},
number = {},
pages = {1--26},
month = {},
note = {under review},
abstract = {},
keywords = {},
source = {},
eprint = {1207.0963},
url = {http://arxiv.org/abs/1207.0963},
}````
In this article, I prove that every countable model of set theory $\langle M,{\in^M}\rangle$, including every well-founded model, is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$. Another way to say this is that there is an embedding
$$j:\langle M,{\in^M}\rangle\to \langle L^M,{\in^M}\rangle$$
that is elementary for quantifier-free assertions in the language of set theory.
Main Theorem 1. Every countable model of set theory $\langle M,{\in^M}\rangle$ is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$.
The proof uses universal digraph combinatorics, including an acyclic version of the countable random digraph, which I call the countable random $\mathbb{Q}$-graded digraph, and higher analogues arising as uncountable Fraisse limits, leading eventually to what I call the hypnagogic digraph, a set-homogeneous, class-universal, surreal-numbers-graded acyclic class digraph, which is closely connected with the surreal numbers. The proof shows that $\langle L^M,{\in^M}\rangle$ contains a submodel that is a universal acyclic digraph of rank $\text{Ord}^M$, and so in fact this model is universal for all countable acyclic binary relations of this rank. When $M$ is ill-founded, this includes all acyclic binary relations. The method of proof also establishes the following, thereby answering a question posed by Ewan Delanoy.
Main Theorem 2. The countable models of set theory are linearly pre-ordered by embeddability: for any two countable models of set theory $\langle M,{\in^M}\rangle$ and $\langle N,{\in^N}\rangle$, either $M$ is isomorphic to a submodel of $N$ or conversely. Indeed, the countable models of set theory are pre-well-ordered by embeddability in order type exactly $\omega_1+1$.
The proof shows that the embeddability relation on the models of set theory conforms with their ordinal heights, in that any two models with the same ordinals are bi-embeddable; any shorter model embeds into any taller model; and the ill-founded models are all bi-embeddable and universal.
The proof method arises most easily in finite set theory, showing that the nonstandard hereditarily finite sets $\text{HF}^M$ coded in any nonstandard model $M$ of PA or even of $I\Delta_0$ are similarly universal for all acyclic binary relations. This strengthens a classical theorem of Ressayre, while simplifying the proof, replacing a partial saturation and resplendency argument with a soft appeal to graph universality.
Main Theorem 3. If $M$ is any nonstandard model of PA, then every countable model of set theory is isomorphic to a submodel of the hereditarily finite sets $\langle \text{HF}^M,{\in^M}\rangle$ of $M$. Indeed, $\langle\text{HF}^M,{\in^M}\rangle$ is universal for all countable acyclic binary relations.
In particular, every countable model of ZFC and even of ZFC plus large cardinals arises as a submodel of $\langle\text{HF}^M,{\in^M}\rangle$. Thus, inside any nonstandard model of finite set theory, we may cast out some of the finite sets and thereby arrive at a copy of any desired model of infinite set theory, having infinite sets, uncountable sets or even large cardinals of whatever type we like.
The proof, in brief: for every countable acyclic digraph, consider the partial order induced by the edge relation, and extend this order to a total order, which may be embedded in the rational order $\mathbb{Q}$. Thus, every countable acyclic digraph admits a $\mathbb{Q}$-grading, an assignment of rational numbers to nodes such that all edges point upwards. Next, one can build a countable homogeneous, universal, existentially closed $\mathbb{Q}$-graded digraph, simply by starting with nothing, and then adding finitely many nodes at each stage, so as to realize the finite pattern property. The result is a computable presentation of what I call the countable random $\mathbb{Q}$-graded digraph $\Gamma$. If $M$ is any nonstandard model of finite set theory, then we may run this computable construction inside $M$ for a nonstandard number of steps. The standard part of this nonstandard finite graph includes a copy of $\Gamma$. Furthermore, since $M$ thinks it is finite and acyclic, it can perform a modified Mostowski collapse to realize the graph in the hereditarily finite sets of $M$. By looking at the sets corresponding to the nodes in the copy of $\Gamma$, we find a submodel of $M$ that is isomorphic to $\Gamma$, which is universal for all countable acyclic binary relations. So every countable model of ZFC is isomorphic to a submodel of $M$.
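To make the graded-digraph idea a bit more concrete, here is a small Python sketch (an illustration of the grading only, not the construction from the paper, which adds nodes stage by stage so as to realize the finite pattern property): every node receives a rational grade and edges are inserted at random so that they always point upward, which makes the digraph acyclic by construction.

```python
import random
from fractions import Fraction

def random_q_graded_digraph(n, seed=0):
    """A finite random Q-graded digraph: each node gets a rational grade and
    each pair of nodes with distinct grades is joined, with probability 1/2,
    by an edge pointing from the lower grade to the higher one.  All edges
    increase the grade, so the digraph is automatically acyclic."""
    rng = random.Random(seed)
    grades = [Fraction(rng.randint(0, 10**6), 10**6) for _ in range(n)]
    edges = {(i, j) for i in range(n) for j in range(n)
             if grades[i] < grades[j] and rng.random() < 0.5}
    return grades, edges

grades, edges = random_q_graded_digraph(8)
print(len(edges), "edges; all point upward:",
      all(grades[i] < grades[j] for (i, j) in edges))
```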
The article closes with a number of questions, which I record here (and which I have also asked on mathoverflow: Can there be an embedding $j:V\to L$ from the set-theoretic universe $V$ to the constructible universe $L$, when $V\neq L$?) Although the main theorem shows that every countable model of set theory embeds into its own constructible universe $$j:M\to L^M,$$ this embedding $j$ is constructed completely externally to $M$ and there is little reason to expect that $j$ could be a class in $M$ or otherwise amenable to $M$. To what extent can we prove or refute the possibility that $j$ is a class in $M$? This amounts to considering the matter internally as a question about $V$. Surely it would seem strange to have a class embedding $j:V\to L$ when $V\neq L$, even if it is elementary only for quantifier-free assertions, since such an embedding is totally unlike the sorts of embeddings that one usually encounters in set theory. Nevertheless, I am at a loss to refute the hypothesis, and the possibility that there might be such an embedding is intriguing, if not tantalizing, for one imagines all kinds of constructions that pull structure from $L$ back into $V$.
Question 1. Can there be an embedding $j:V\to L$ when $V\neq L$?
By embedding, I mean an isomorphism from $\langle V,{\in}\rangle$ to its range in $\langle L,{\in}\rangle$, which is the same as a quantifier-free-elementary map $j:V\to L$. The question is most naturally formalized in Gödel-Bernays set theory, asking whether there can be a GB-class $j$ forming such an embedding. If one wants $j:V\to L$ to be a definable class, then this of course implies $V=\text{HOD}$, since the definable $L$-order can be pulled back to $V$, via $x\leq y\iff j(x)\leq_L j(y)$. More generally, if $j$ is merely a class in Gödel-Bernays set theory, then the existence of an embedding $j:V\to L$ implies global choice, since from the class $j$ we can pull back the $L$-order. For these reasons, we cannot expect every model of ZFC or of GB to have such embeddings. Can they be added generically? Do they have some large cardinal strength? Are they outright refutable?
If they are not outright refutable, then it would seem natural that these questions might involve large cardinals; perhaps $0^\sharp$ is relevant. But I am unsure which way the answers will go. The existence of large cardinals provides extra strength, but may at the same time make it harder to have the embedding, since it pushes $V$ further away from $L$. For example, it is conceivable that the existence of $0^\sharp$ will enable one to construct the embedding, using the Silver indiscernibles to find a universal submodel of $L$; but it is also conceivable that the non-existence of $0^\sharp$, because of covering and the corresponding essential closeness of $V$ to $L$, may make it easier for such a $j$ to exist. Or perhaps it is simply refutable in any case. The first-order analogue of the question is:
Question 2. Does every set $A$ admit an embedding $j:\langle A,{\in}\rangle \to \langle L,{\in}\rangle$? If not, which sets do admit such embeddings?
The main theorem shows that every countable set $A$ embeds into $L$. What about uncountable sets? Let us make the question extremely concrete:
Question 3. Does $\langle V_{\omega+1},{\in}\rangle$ embed into $\langle L,{\in}\rangle$? How about $\langle P(\omega),{\in}\rangle$ or $\langle\text{HC},{\in}\rangle$?
It is also natural to inquire about the nature of $j:M\to L^M$ even when it is not a class in $M$. For example, can one find such an embedding for which $j(\alpha)$ is an ordinal whenever $\alpha$ is an ordinal? The embedding arising in the proof of the main theorem definitely does not have this feature.
Question 4. Does every countable model $\langle M,{\in^M}\rangle$ of set theory admit an embedding $j:M\to L^M$ that takes ordinals to ordinals?
Probably one can arrange this simply by being a bit more careful with the modified Mostowski procedure in the proof of the main theorem. And if this is correct, then numerous further questions immediately come to mind, concerning the extent to which we can ensure more attractive features for the embeddings $j$ that arise in the main theorems. This will be particularly interesting in the case of well-founded models, as well as in the case of $j:V\to L$, as in question 1, if that should be possible.
Question 5. Can there be a nontrivial embedding $j:V\to L$ that takes ordinals to ordinals?
Finally, I inquire about the extent to which the main theorems of the article can be extended from the countable models of set theory to the $\omega_1$-like models:
Question 6. Does every $\omega_1$-like model of set theory $\langle M,{\in^M}\rangle$ admit an embedding $j:M\to L^M$ into its own constructible universe? Are the $\omega_1$-like models of set theory linearly pre-ordered by embeddability?
This entry was posted in Publications and tagged countable random digraph, Fraisse limit, hypnagogic digraph, Peano Arithmetic by Joel David Hamkins.
## 11 thoughts on “Every countable model of set theory embeds into its own constructible universe”
1. Pingback: Every countable model of set theory embeds into its own constructible universe, Fields Institute, Toronto, August 2012 | Joel David Hamkins
2. Thomas Benjamin on August 18, 2012 at 3:38 am said:
Considering Conway’s system ‘No’ and Philip Ehrlich’s assertion that ‘No’ is an “absolute arithmetic continuum” containing “all numbers great and small”, could ‘No’ act as a foundation (so to speak) of the naturalistic account of forcing?
• Joel David Hamkins on August 18, 2012 at 8:10 am said:
Thomas, I don't quite follow your suggestion. I believe that Ehrlich is using the word "absolute" in a way that recalls Cantor's term of "absolute infinity", which is often understood to refer to the class of all ordinals. But the truth remains that different models of set theory have different non-isomorphic surreals, and so No is not absolute in that sense. For example, forcing creates new surreal numbers. Indeed, for models V and W of ZFC, I claim that V is isomorphic to W if and only if No^V is isomorphic to No^W, and in this sense, the surreals of a model of ZFC capture the whole model. This is because the surreal numbers correspond to the transfinite binary sequences, which in turn correspond to sets of ordinals, and two models of ZFC set theory with the same ordinals and the same sets of ordinals are the same.
3. Thomas Benjamin on August 22, 2012 at 3:13 am said:
I will quote Ehrlich (quote found on page 240) from his paper “The Absolute Arithmetic Continuum and Its Peircean Counterpart”, “No not only exhibits [the rest of the sentence is italicized for emphasis] all possible types of algebraic and set-theoretically [the word 'set' is underlined for further emphasis] defined order -theoretic gradations consistent with its structure as an ordered field, it is to within isomorphism the unique structure that does.” I leave it to you to make of that what you will. Another (at least to me) telling remark of Ehrlich’s can be found in another article “Universally extending Arithmetic Continua” where he entitles the section in which he defines the term “absolutely homogeneous universal” for a model ‘A’ “Maximal Models”. I would be interested in seeing how forcing creates new surreal numbers since Conway ‘constructs’ No from the notation {L|R} which seems to me at least independent of any axiomatization of set theory
• Joel David Hamkins on August 22, 2012 at 9:15 am said:
Forcing can create new real numbers, and therefore also new surreal numbers. So different models of set theory can have different surreal numbers. (This is clear already from the fact that different models of set theory can have different ordinals.) The argument I gave in my comment above shows more: different models of ZFC necessarily have different No’s, since the surreal numbers code up the entire universe in which they are constructed. So the surreal numbers No are not absolute in the sense of being-the-same-in-all-models. What Ehrlich is talking about is the fact that we can prove in ZFC that No is the unique homogeneous and universal class linear order. This can be viewed in analogy with Cantor’s proof that the rational line $\mathbb{Q}$ is the unique homogeneous universal countable linear order.
• Thomas Benjamin on August 23, 2012 at 2:38 am said:
Would the fact that in ZFC (though Ehrlich speaks of No as the absolute arithmetic continuum “modulo NBG” or by extension, modulo Ackermann Set Theory) No is the unique homogeneous and universal class linear order be sufficient to use it as the basis for generating, say, Cohen and/or Random reals to affix to models of set theory, therefore liberating forcing from countable, ‘toy’ models (since at least as I understand it, forcing needed to use countable models so that the extra sets of natural numbers not contained in the model in question could act as generic sets)? In fact does No contain all possible interpretations of No relative to models of ZFC?
• Joel David Hamkins on August 23, 2012 at 8:34 am said:
The reference to NGB is due to two facts: first, the class universality claim is explicitly a second-order claim, which is not expressible except case-by-case in the first-order language; and second, one proves that No is universal for class linear orders in NGB, and the proof I know uses the global choice principle (which goes beyond ZFC) in order to guide the back-and-forth argument to find the embedding of a given class order into No. I don't really see how to use No in the way that you suggest. The No of one model will not generally be universal for the linear orders that exist in another model of set theory.
4. Pingback: The countable models of ZFC, up to isomorphism, are linearly pre-ordered by the submodel relation; indeed, every countable model of ZFC, including every transitive model, is isomorphic to a submodel of its own $L$, New York, 2012 | Joel David Hamkins
5. Pingback: Every countable model of set theory is isomorphic to a submodel of its own constructible universe, Barcelona, December, 2012 | Joel David Hamkins
6. Pingback: A multiverse perspective on the axiom of constructiblity | Joel David Hamkins
7. Pingback: The countable models of set theory are linearly pre-ordered by embeddability, Rutgers, November 2012 | Joel David Hamkins
http://math.stackexchange.com/questions/247042/how-to-prove-the-product-formula-for-higher-homotopy-groups | # How to prove the product formula for higher homotopy groups?
When I reviewed Hatcher's proof of the fact $$\pi_{n}(\prod X_{\alpha})=\prod_{\alpha}\pi_{n}(X_{\alpha})$$ I found I cannot really follow it. He wrote "A map $f:Y\rightarrow \prod X_{\alpha}$ is the same thing as a collection of maps $f_{\alpha}:Y\rightarrow X_{\alpha}$. Taking $Y$ to be $\mathbb{S}^{n}$ and $\mathbb{S}^{n}\times I$ gives the result."
I am confused because the result is intuitive and (probably) trivial, but I do not see how his first line and second line are connected. Presumably the second line means something induced from the earlier proposition on covering maps, but still I do not see how to use this to prove the above statement.
Forget about the hint and try to prove it directly. When you are done you will notice that you have followed the hint. – Mariano Suárez-Alvarez♦ Nov 29 '12 at 5:18
If the index is finite, then a naive argument on projection into coordinates might suffice. But if the index is uncountable, then I believe I am on the wrong track. Imagine I need to prove this for $\mathbb{S}^{2}$ and $X=\mathbb{R}\times \mathbb{R}$ graphically; I still could not really follow the hint. Sorry for being slow. – user32240 Nov 29 '12 at 5:29
Why do you say «might»? Have you actually tried to do it? – Mariano Suárez-Alvarez♦ Nov 29 '12 at 15:02
okay. let me construct a proof and you may criticize on it later. – user32240 Nov 29 '12 at 16:03
## 1 Answer
For any map $f:\mathbb{S}^{n}\rightarrow \prod X_{\alpha}$, composition with the projections gives us maps $f_{\alpha}:\mathbb{S}^{n}\rightarrow X_{\alpha}$. So we have a map $$F:\pi_{n}\left(\prod X_{\alpha}\right)\rightarrow \prod \pi_{n}(X_{\alpha})$$
On the other hand, given a family of maps $f_{\alpha}:\mathbb{S}^{n}\rightarrow X_{\alpha}$, by the hint in the comments I can form a map $f':\mathbb{S}^{n}\rightarrow \prod X_{\alpha}$ such that $f'(x)_{\alpha}=f_{\alpha}(x)$. So we have an inverse map $$G:\prod \pi_{n}(X_{\alpha})\rightarrow \pi_{n}\left(\prod X_{\alpha}\right)$$ Applying the same two constructions to homotopies $\mathbb{S}^{n}\times I\rightarrow \prod X_{\alpha}$ shows that $F$ and $G$ are well defined on homotopy classes. We have $FG=1$ and $GF=1$, so the two groups are isomorphic.
Alternatively, we can prove directly that $F$ is surjective and injective. Given $\prod P_{\alpha}\in \prod \pi_{n}(X_{\alpha})$, the map $G$ sends it to an element of $\pi_{n}\left(\prod X_{\alpha}\right)$, and since $FG=1$ we have $FG(\prod P_{\alpha})=\prod P_{\alpha}$. This shows $F$ is surjective.
Injectivity is also clear: if $p\in \pi_{n}\left(\prod X_{\alpha}\right)$ satisfies $F(p)=0$, then each component $p_{\alpha}$ admits a null homotopy $I\times \mathbb{S}^{n}\rightarrow X_{\alpha}$ that restricts to $p_{\alpha}$ on $\{0\}\times \mathbb{S}^{n}$ and to the constant map on $\{1\}\times \mathbb{S}^{n}$; these combine into a null homotopy $I\times \mathbb{S}^{n}\rightarrow \prod X_{\alpha}$ of $p$ itself. This proves the above correspondence is one-to-one and onto. The fact that $F$ and $G$ are group homomorphisms follows directly from the definitions.
"I do not know how to form a map..." How about the map $x \mapsto (f_\alpha(x))$? (One can in fact define the product of things as the universal gadget where maps into it are in natural bijection with the collection of maps into each of its factors...) – Dylan Wilson Nov 29 '12 at 18:10
Your argument is not complete. If you really want input on your proof, you should write it out in detail. – Mariano Suárez-Alvarez♦ Nov 29 '12 at 18:31
Let me write in my detail then. – user32240 Nov 29 '12 at 21:23
updated.hopefully it is right this time. – user32240 Nov 29 '12 at 22:07
@MarianoSuárez-Alvarez: Sorry to bother you - can you check my proof? – user32240 Nov 30 '12 at 3:29
http://mathoverflow.net/questions/99458?sort=oldest | ## Is there a notion of “ribbon 2-category”?
Is there some notion of ribbon 2-category, which would allow for, say, talking about the Seifert surface of links (which is a 1-morphism in some ribbon category) as a 2-morphism in the category?
Thank you! (I'm sorry this question is so vague.)
@Jakob: My guess is that it is possible to make up such a definition. Do I understand correctly that your question is whether such a definition has already been made by someone? Or would you like someone to try to come up with such a definition here? The problem with the latter is that such a definition is bound to be very long and to involve multiple coherence diagrams. So I don't expect it to fit into an MO answer... – André Henriques Jun 13 at 15:43
If your 1-morphisms are framed tangles (e.g. links) in the 3-ball, then your 2-morphisms should be framed surfaces in the 4-ball, rather than Seifert surfaces, which live in the 3-ball. – Kevin Walker Jun 13 at 18:07
## 2 Answers
A ribbon 1-category is a 3-category which has only one 0-morphism, has only one 1-morphism, and is (strict) "3-pivotal", where "$n$-pivotal" is the property that would be called "pivotal" if $n=2$. (I don't know if there is a standard term for this. Candidates are "n-pivotal", "has strong duality", "disk-like", "is an $[S]O(n)$ homotopy fixed point of ...".)
So one can analogously define a ribbon 2-category to be a 4-pivotal 4-category with only one 0-morphism and one 1-morphism. I think everyone would agree with some version of this statement.
The above begs the question of how best to define an $n$-pivotal $n$-category. (Again, this is not standard terminology, and so far as I know no standard term for this notion has been established.) I'm aware of three approaches.
(1) Imitate the approach traditionally used for $n\le 3$; i.e. explicitly write out all the coherence diagrams. As $n$ grows large (e.g. $n\ge 4$), this quickly becomes unwieldy.
(2) Jacob Lurie's approach. First define $n$-categories with a weaker notion of duality. This weak duality allows one to define a homotopy $O(n)$ action. Now define an $n$-pivotal $n$-category to be a homotopy fixed point of this action. This is the approach described in André's answer. (I'm not an expert on this approach, so please let me know if I've misdescribed it.)
(3) "Disk-like" $n$-category approach (Section 6 of this paper). Define an $n$-category to be a collection of functors on $k$-balls and homeomorphisms, for $k\le n$. The pivotal structure comes from the actions of Homeo($B^k$).
The advantage of approach #3 is that it is easy to verify for examples which are topological in origin (like bordism $n$-categories, $n$-categories built out of mapping spaces, $n$-categories built out of embedded cell complexes, mod relations). The disadvantage is that it doesn't specify any generators or relations. If you have some algebraic or combinatorial gadgets (like representations of a quantum group) and you are wondering whether they generate an $n$-pivotal $n$-category, you would like a (finite) list of relations to check. Approach #3 does not give you this, but approach #2 (or #1, if it exists) does. More specifically, if you write down in detail just what "$O(n)$ homotopy fixed point" means, you will end up with a finite list of generators and relations. (That's in theory; I haven't seen it carried out in practice.)
Quibbles with and corrections to the above are welcome.
How clear is it that ribbon is the analogue of pivotal and not of spherical? – Noah Snyder Jun 14 at 16:35
How clear is it? Probably not very. I think pivotal is a more natural condition than spherical (pivotal is local, spherical is not). Also, for an n-category with trivial 0- and 1-morphisms, n-pivotal implies n-spherical (I think). By general position, any isotopy of diagrams which takes place in the n-sphere has support in an n-ball (up to 2nd order isotopy). – Kevin Walker Jun 14 at 18:06
Here's a cryptic answer.
[and I hope that I now got it right -- if someone sees some more mistakes, please edit my answer]
By work of Lurie, there's an $O(n+1)$ action on the collection of all $n$-categories with duals.
A ribbon category is 3-category with one object and one 1-morphism, that is equipped with the extra structure of an $SO(3)$-homotopy fixed point.
A ribbon 2-category is 4-category with one object and one 1-morphism, that is equipped with the extra structure of an $SO(4)$-homotopy fixed point.
I'm confused about the relationship between the O(n) action on the space of all dualizable objects in a particular n-category, vs. the O(n) action on the space of all dualizable n-categories... That said, your answer doesn't seem to fit my intuition, just thinking about what topological structures Reshetikhin-Turaev depends on. – Noah Snyder Jun 13 at 16:37
Should the last paragraph start "A ribbon 2-category is a 4-category ..."? – Bruce Westbury Jun 13 at 17:05
Oic, maybe it's just that the dimensionality is funny. Let's think about pivotal fusion categories (which is the 2-dimensional analogue). Those are 2-categories (with one object), and are dualizable objects in a 3-category (of 2-categories, but you need to be careful that the morphisms are Morita enough). Hence you have an O(3) action. But you're not claiming that they're O(3) fixed points, rather that they're O(2) fixed points. This is almost right, they're SO(2) fixed points. So maybe you mean SO(n) everywhere instead of O(n)? – Noah Snyder Jun 13 at 17:16
The principal article on ribbon category theory is (or was?) sciencedirect.com/science/article/pii/… – Buschi Sergio Jun 13 at 18:33
I tried to take the comments into account in order to make my answer less false. I'm still largely guessing though... – André Henriques Jun 13 at 19:55
http://motls.blogspot.com/2012/03/mass-spectrum-of-all-objects-in.html?m=0?m=1 | The Reference Frame
Monday, March 26, 2012
Mass spectrum of all objects in the Universe
Whenever it makes sense to define the Hamiltonian, i.e. the operator of total energy, this Hamiltonian knows everything about the rules according to which all physical systems and any physical systems evolve in time. That's because the evolution of observables (quantities that may be measured and that are always associated with linear operators) is given by the Heisenberg equations of motion\[
i\hbar\, \ddfrac{\hat L(t)}{t} = [\hat L(t), \hat H]
\] Equivalently, one may keep all the operators constant and evolve the state vector $\ket\psi$ according to Schrödinger's equation\[
i\hbar \frac{\dd}{\dd t} \ket \psi = \hat H \ket \psi
\] This fact makes the Hamiltonian $H$ very important. Because we may always redefine the basis of the Hilbert space by a unitary transformation i.e. diagonalize the Hamiltonian, the spectrum of the Hamiltonian knows "everything" about the laws of physics.
You may be surprised by this assertion. Don't we also need to know the "shape" of the objects or the "shape" of their wave functions? It depends what you want to know. Everything that is physical about them is actually encoded in the Hamiltonian. Of course, to prove that the Hamiltonian is this powerful, we also have to study the action of the Hamiltonian on multiparticle states.
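Here is a tiny numerical illustration of the equivalence of the two pictures (a sketch with an arbitrary two-level Hamiltonian and $\hbar=1$, nothing specific to any real system): evolving the operator in the Heisenberg picture and evolving the state in the Schrödinger picture give identical expectation values.

```python
import numpy as np

# Arbitrary two-level system in units with hbar = 1.
H    = np.array([[0, 1], [1, 0]], dtype=complex)     # Hamiltonian (sigma_x)
L0   = np.array([[1, 0], [0, -1]], dtype=complex)    # observable at t = 0 (sigma_z)
psi0 = np.array([1, 0], dtype=complex)                # initial state
t    = 0.7

# Time-evolution operator U = exp(-i H t), built from the eigendecomposition of H.
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

# Schrodinger picture: evolve the state, keep the operator fixed.
psi_t = U @ psi0
schrodinger = np.vdot(psi_t, L0 @ psi_t).real

# Heisenberg picture: evolve the operator, keep the state fixed.
L_t = U.conj().T @ L0 @ U
heisenberg = np.vdot(psi0, L_t @ psi0).real

print(schrodinger, heisenberg)   # the two numbers coincide
```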
Locality of the Hamiltonian is what makes positions useful
But a point of mine is that e.g. the usefulness of the notion of "positions of objects" isn't an additional structure that has to be added to physics. Instead, "positions of objects" are useful and (partly) well-defined quantities because of a property of the Hamiltonian, namely its locality. In quantum field theory, this statement is exactly true; in non-relativistic quantum theories, this statement is either true for different but related reasons, or it's true because these theories are limits of a quantum field theory. The Hamiltonian may be written as an integral of some energy density,\[
\hat H = \int\dd^3 x~\hat T_{00}(x,y,z).
From this moment, I will assume that the reader has reconciled himself or herself with the fact that ours is a quantum world so all observables – when analyzed accurately enough – are operators and I don't have to add the redundant hats above them. Because the Hamiltonian $H$ above is the integral over space and the energy density $T_{00}(x,y,z)$ commutes with all operators associated with totally different points or regions $(x',y',z')$, it follows that it is often possible to associate positions to objects, either exactly or approximately, and the evolution at distant enough points will be independent.
I didn't have to add the notion of a "position" independently; it's included in the Hamiltonian. On the other hand, if the Hamiltonian were not local, not even approximately, it would probably be useless to talk about positions of objects even if the parameterization of operators according to the positions were possible.
Energy and momentum: relativity has its say
Another important pillar of modern physics is the special theory of relativity. (The general theory of relativity is also important but it makes any description in terms of a "Hamiltonian" more subtle and I will avoid these topics in this particular blog entry.) Independently of special relativity, Emmy Noether figured out that the energy conservation is associated with the translational symmetry of the laws of physics in the temporal dimension. After all, this fact is related to the role that the Hamiltonian plays in the dynamical equations from the beginning of this text.
However, Einstein realized in his 1905 special relativity that the properties of time and space have to be interlinked; they're parts of the spacetime. Because the energy is linked to translations in time, this energy has to be clumped with the operator linked to translations in space. And that's, of course, the momentum. Relativity therefore implies that the energy itself is the temporal component $P^0$ of the energy-momentum vector $P^\mu$. Whenever the energy is conserved and the Lorentz invariance underlying special relativity is unbroken, the momentum $P^i$ has to be conserved as well.
Moreover, special relativity says that for any transformation $U\in SO(3,1)$ and for every state $\ket\psi$, there has to exist the state $U \ket\psi$ whose momentum and energy is transformed correspondingly. So if you start with the state whose\[
P^\mu = (E,0,0,0)
\] i.e. whose momentum vanishes, it is always possible to "boost" this state so that the momentum will be arbitrary and\[
P^\mu P_\mu = E^2 - \abs{\vec p}^2
\] will be equal to the original value, $E^2$. It's therefore a characteristic value associated with the object, one that is independent of boosts, and it's known as $m_0^2$, the squared rest mass. I will also assume that the reader is a mature physicist who can work in units with $c=1$.
It's important to notice that if you can have a state with the energy $m_0$, you may also easily get states with an arbitrary higher energy, just by boosting the original state. The energy will increase to\[
E = m_0\gamma = \frac{m_0}{\sqrt{1-v^2/c^2}}
\] i.e. it will be enhanced by the relativistic Lorentz factor. So even if the list of elementary particles is "discrete" if not finite (at least at low energies), the spectrum of the Hamiltonian is inevitably continuous. However, you may focus on states whose total spatial momentum vanishes and those can have a discrete spectrum. Alternatively, you may try to look for possible eigenvalues of the operator $P^\mu P_\mu$ in our Universe. This could be discrete.
Well, even this is not discrete because you may have systems composed of decoupled particles or objects and by adjusting their relative speed, you may continuously change the "invariant mass" as well. But note that this continuous spectrum may only be obtained for states that are not bound. For bound states, the spectrum of \[
M^2 = P^\mu P_\mu
\] may be discrete and it actually is discrete in realistic situations, i.e. in the real world of particle physics. Let's look at it.
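Here is a short numerical sketch of these statements (my own, in units with $c=1$): a single particle's invariant mass is unchanged by a boost, while the invariant mass of a pair of free, unbound particles changes continuously with their relative velocity.

```python
import numpy as np

def four_momentum(m, v):
    """Four-momentum (E, px, py, pz) of a particle of rest mass m moving with
    velocity v along the x axis, in units with c = 1."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return np.array([gamma * m, gamma * m * v, 0.0, 0.0])

def invariant_mass(p):
    E, px, py, pz = p
    return np.sqrt(E**2 - px**2 - py**2 - pz**2)

# One particle: M equals the rest mass no matter how much it is boosted.
print(invariant_mass(four_momentum(1.0, 0.0)), invariant_mass(four_momentum(1.0, 0.9)))

# Two free particles: the invariant mass of the pair varies continuously
# with the relative velocity, so unbound systems have a continuous spectrum.
for v in (0.0, 0.3, 0.6, 0.9):
    total = four_momentum(1.0, 0.0) + four_momentum(1.0, v)
    print(v, invariant_mass(total))
```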
Elementary particles
The most natural oversimplified way to describe the world is to say that it's composed out of elementary particles. To discuss the spectrum of $M^2$, and I will mostly talk about the spectrum of \[
M \equiv \sqrt{M^2},
\] it seems as though we should only list the masses of elementary particles. Let's begin with the lightest types of elementary particles we know.
Tachyons and massless particles
Well, the lightest particles are the massless ones. If $M^2$ were negative, we would have tachyons and they would make the vacuum unstable. It seems that tachyons have to be scalars in a consistent theory of quantum gravity but any hypothetical Universe with a tachyon in the spectrum (i.e. our Universe "before" the electroweak symmetry breaking) will roll to a nearby minimum around which tachyons don't exist. Note that the masses are related to the second derivative of the potential as a function of the fields expanded around the vacuum expectation values in the relevant vacuum.
So the lowest possible eigenvalue of $M$ is $M=0$, the massless particles. In our world, only some gauge bosons are massless. Photons are massless because the electromagnetic $U(1)$ gauge symmetry is unbroken; gravitons are massless because the diffeomorphism symmetry underlying the general theory of relativity (which we treat as a theory of spin-2 fields on a Minkowski background in this text) is unbroken as well. The masslessness of these particles is related not only to the unbroken symmetries; it also has physical consequences.
The main physical consequence is the infinite range of the electromagnetic and gravitational forces. The forces really drop as $F\sim 1/r^2$ in both cases if we talk about forces in the Newtonian sense. This decreases rather slowly and the total cross sections may easily be infinite. On the other hand, short-range forces have to be mediated by massive particles. They decrease faster at long distances, with an exponential factor, and the total cross section is always finite.
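A back-of-the-envelope sketch of this difference (my own illustration, with the coupling constants set to one and an arbitrary mediator mass; in units with $\hbar=c=1$ the mediator mass is the inverse of the range):

```python
import numpy as np

m = 2.0                                  # assumed mediator mass, i.e. inverse range
for r in (0.5, 1.0, 2.0, 5.0):
    massless = 1.0 / r**2                # Coulomb/Newton-like long-range force
    massive  = np.exp(-m * r) / r**2     # leading Yukawa-suppressed behaviour
    print(f"r = {r:3.1f}   massless mediator: {massless:8.4f}   "
          f"massive mediator: {massive:.6f}")
```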
There's one more particle that is "exactly massless", the gluon, the particle that mediates the force between colorful quarks. However, the gluon is charged itself, so it interacts with other gluons just like quarks do. The force is confining and it confines not only quarks but also gluons because they carry some (bi)color. For this reason, gluons can't be isolated and if we only talk about the mass spectrum of objects that can exist in isolation, the proper Hilbert space, there won't be any gluons in it, massless or otherwise.
Leptons
We have discussed the $M=0$ particles. Let's begin with positive masses. The lightest positive masses are carried by neutrinos. A few decades ago, people were considering the possibility that the neutrinos were exactly massless. However, for more than a decade, we've known from "neutrino oscillations" that the masses can't be zero. They're small but positive. Because the neutrino oscillations – operating at very long distances – are the only method we have at this moment to measure the small masses, we actually only know the mass differences.
More precisely, we only know the differences between the different eigenvalues of $M^2$. The differences are\[
\eq{
M_2^2 - M_1^2 &= \Delta M_{21}^2 =\\
&= \Delta M_\text{solar}^2 \approx (7.6\pm 0.2) \times 10^{-5}\eV^2\\
M_3^2 - M_1^2 &= \Delta M_{31}^2 \approx \Delta M_{32}^2 \approx\\
&\approx \Delta M_\text{atmos}^2 \approx (2.43\pm 0.13)\times 10^{-3}\eV^2
}
\] The neutrino mass matrix hides three eigenvalues of $M^2$ because there are three generations. However, we only know two differences between them but not the overall additive shift for all of them. Moreover, the heaviest "third" eigenvalue is much larger than the other two, so the difference between the "third" and either of the lower two eigenvalues is almost the same. The differences between the squared masses are the squares of a "few millielectronvolts".
Of course, you might imagine that all the neutrino masses are actually much heavier than that, e.g. close to an electronvolt; in that case, these eigenvalues would be nearly identical. That's unlikely. More likely, the lightest mass $M_1$ is very close to zero, either comparable to or perhaps much smaller than the square root of $\Delta M_{21}^2$.
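To see how the unknown overall shift works in practice, here is a small Python sketch (using the central values quoted above and assuming, for illustration only, a normal ordering and a few hand-picked values of the lightest mass):

```python
import numpy as np

dm21_sq = 7.6e-5     # "solar" squared-mass difference, eV^2
dm31_sq = 2.43e-3    # "atmospheric" squared-mass difference, eV^2 (normal ordering assumed)

for m1 in (0.0, 0.01, 0.05):    # lightest mass in eV: a free parameter, not measured
    m2 = np.sqrt(m1**2 + dm21_sq)
    m3 = np.sqrt(m1**2 + dm31_sq)
    print(f"m1 = {m1:.3f} eV  ->  m2 = {m2:.4f} eV,  m3 = {m3:.4f} eV,  "
          f"sum = {m1 + m2 + m3:.4f} eV")
```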
Some weeks ago, I discussed the PMNS mixing matrix.
Neutrinos are not too important for our current lives because their interactions with the normal matter are almost negligible. The charged leptons are much more important. The masses in multiples of an electronvolt are\[
\eq{
m_e &= 511~\keV\\
m_\mu &= 105.7~\MeV\\
m_\tau &= 1.777~\GeV
}
\] The electron $e$ is the lightest charged particle in the Universe which is why it's the most important charged particle for life, electrical engineering, and for all materials we know in general. Its heavier cousins, the muon and the tau, are less important because they're unstable. But even if they were not decaying, their high mass would make them less important.
Note that we have gone from millielectronvolts to gigaelectronvolts, around twelve orders of magnitude. There's quite some hierarchy.
Quarks
I've already discussed the confinement in the context of massless gluons. Quarks are the most famous colorful particles in particle physics and they're confined as well. You can't isolate them; it's like trying to isolate just the North pole of a bar magnet or just one endpoint of a piece of rope.
Still, at short enough distances, the glue that confines them is weak enough and quarks look like free particles with a mass. We are discussing the mass spectrum. The quarks have the following masses (in their case, it becomes really important to discuss the renormalization scales and schemes but I won't go into that):\[
\eq{
m_u &= 2\sim 3~\MeV\\
m_c &= 1.3~\GeV\\
m_t &= 173~\GeV\\
m_d &= 4\sim 6~\MeV\\
m_s &= 80\sim 130~\MeV\\
m_b &= 4\sim 5~\GeV
}
\] You see that the accuracy is poorer here. It's partly because the strongly interacting objects are messier; some of the uncertainty above may be fixed and is related to the need to define the renormalization scales and schemes. One could make the figures more accurate.
If you try to "construct" a proton from two up-quarks and one down-quark, you will only get something like $10~\MeV$. So why is the proton mass equal to $938~\MeV$? Where are the remaining 99 percent? The answer is that 99 percent of the mass of the proton (and more than 98 percent of the mass of the neutron) is actually carried by something other than the rest mass of the three "valence" quarks. A part of it comes from the relativistic increase of the mass from the motion – the quarks are moving inside the protons and neutrons at speeds that are comparable to the speed of light, so the relativistic increase matters. However, most of the mass comes from the "glue", the "potential energy" between the valence quarks, and the masses of gluons and additional quark-gluon pairs that are really included inside the protons and neutrons as well. These particles have a complicated internal structure.
In this case, the mass of the bound state is dramatically different from the sum of the pieces, the three quarks' masses. Because the naive "the whole is a sum of pieces" totally fails here, we say that the system is "strongly coupled". The strong nuclear force carries the name "strong" as well and it is called in this way because it is an example of a strongly coupled interaction. Note that the adjective "strong" is used both in a general way (a force so strong that the whole isn't the sum of pieces at all) and a particular way (the force that attracts quarks and gluons i.e. colorful elementary particles).
Nuclei
Protons and neutrons are therefore complicated animals if you want to "model them" out of the most elementary building blocks we know in Nature. The nuclei are a bit easier because in some sense, they may be thought of as composites of protons and neutrons (although the really accurate description has to involve quarks and gluons as well). The "residual" interactions between protons and neutrons may still be "strong" but they're much weaker than the forces between quarks. More importantly, and it is related, they're not confining; the residual force between protons and neutrons is a short-range force and certainly doesn't prevent protons and neutrons from existing in isolation.
One may create nice bound states of protons and neutrons, the nuclei. Some of their properties may be pretty nicely understood by approximate models. For example, large enough nuclei behave as droplets of a "liquid" – a nuclear liquid much much denser than water. This liquid has a "volume density", some "surface tension", punishment for the asymmetry between neutrons and protons, and the electrostatic repulsion energy for the protons (which is what favors many neutrons in the nuclei over protons).
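The liquid-drop picture can be written down as one line per term, the semi-empirical mass formula. The sketch below (with one common choice of fitted coefficients; the exact numbers differ between references) reproduces the familiar ~8.8 MeV of binding energy per nucleon for iron-56:

```python
def semf_binding_energy(Z, A):
    """Liquid-drop (semi-empirical) binding energy in MeV for a nucleus with
    Z protons and A nucleons, using one common set of fitted coefficients."""
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        pairing = +a_p / A**0.5
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -a_p / A**0.5
    else:
        pairing = 0.0
    return (a_v * A                           # volume term of the nuclear liquid
            - a_s * A**(2 / 3)                # surface tension
            - a_c * Z * (Z - 1) / A**(1 / 3)  # electrostatic repulsion of protons
            - a_a * (A - 2 * Z)**2 / A        # neutron-proton asymmetry penalty
            + pairing)

print(semf_binding_energy(26, 56) / 56)       # ~8.8 MeV per nucleon for Fe-56
```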
Of course, such a "liquid" model can't be and isn't perfect. Some nuclei are much more stable than others because the neutrons and protons fully occupy "shells" that are analogous to the atoms – but with energies that are a million times higher, at order many $\MeV$'s, a fraction of a percent of the rest mass of the proton or the neutron.
The rest mass of a proton or a neutron is almost $1~\GeV$. However, if you combine them into nuclei and fuse the nuclei or make them decay, the total mass/energy carried by the system is nearly additive, up to defects of order those several $\MeV$'s, as I mentioned. Those megaelectronvolts were constructively used both in Hiroshima and Fukushima. By the way, only 1 of the previous 50+ Japanese reactors is running at this moment. The psychologically induced financial effects of the Fukushima non-events are clearly dramatic.
When we talk about nuclear technologies, Iran is developing 20-80 percent enriched uranium, building multi-billion-dollar fortified caves for this research, and risking the health of its economy in the wake of sanctions as well as hundreds of thousands of lives of its citizens who might die in a possibly looming Israeli/U.S. attack, in order to bring some isotopes to five doctors in Tehran who say that 20-80 percent enriched uranium could perhaps be useful to improve some treatment and save a few people in a hospital, although they're not sure how it could be useful yet. At least, that's the explanation of their enrichment activities accepted by those observers who take the Iranian authorities seriously.
Bound states: the Hydrogen atom
While the strong force between quarks is really strong, there exists a weaker interaction, electromagnetism. The electromagnetic bound states only subtract a small interaction mass/energy from the mass/energy of the constituents.
Take the Hydrogen atom. An electron whose rest mass is $511~\keV$ when in isolation is orbiting around a proton whose mass is $938~\MeV$. And the resulting atom is only $13.6~\eV$ lighter than the sum of the masses of the two particles in the system. That's a very tiny binding energy. Why is it so tiny? Why it's not comparable to the rest masses of both particles?
It's because the electrostatic force responsible for the binding of the two particles is a rather weak interaction, one described by the dimensionless fine-structure constant\[
\alpha \sim \frac{1}{137.03604\dots}.
\] In fact, this small number is squared once again and the small constant of order $10^{-4}$ has to multiply the rest mass of the electron, the lighter particle in the Hydrogen atom, to get a good estimate of the binding energy of the Hydrogen atom. Exactly because the fine-structure constant is so much smaller than one, it's possible to imagine that electromagnetically bound objects are "almost the sums of their parts". That's why it's always helpful, whenever you study electromagnetism, to begin with the idea of "free objects" and the interactions are added as a small perturbation that affects their motion. This strategy is much less useful for strongly coupled interactions such as the strong nuclear force.
By the way, the smallness of the fine-structure constant is also the reason why we can use non-relativistic physics to describe the Hydrogen atom and, similarly, other atoms. One may show that the velocity of the electron divided by the speed of light $v/c$ in the Hydrogen atom – and similarly in other atoms – is of order the fine-structure constant. The relativistic corrections start at $v^2/c^2$ so they are suppressed at least by four orders of magnitude relatively to the main non-relativistic result.
Again, this non-relativistic strategy can't be accurately used for the bound states of quarks: the speed of the quarks in the nucleons is comparable to the speed of light because the relevant "strong fine-structure constant" is much closer to one than the electromagnetic one.
Other atoms and molecules
One may discuss the bound states of more complicated nuclei, which naturally attract a larger number of electrons to become neutral. And ions. Some of the interaction energies scale with $Z$, the number of protons in the nucleus, in various approximate ways. But the calculation can't be done as exactly as it can be done for the non-relativistic Hydrogen. Already three mutually interacting charged objects, e.g. a nucleus and two electrons, lead to a mathematical problem that can't be analytically solved in terms of elementary (and even "not quite elementary") functions.
One may say that it is the electron-electron interactions in the atoms that make the problem complicated. If there were only nucleus-electron interactions, we would simply be filling the Hydrogen states with many electrons. That's roughly how the periodic table of the elements emerges. However, the electron-electron interactions distort the spectrum, change the numbers, and shift even the qualitative picture and the scalings with $Z$, so you shouldn't believe every detail extracted from the model in which the electron-electron interactions are neglected.
Still, the binding or ionization energies in the atoms are of order $1~\eV$.
Molecules
When you go to molecules, it's useful for a qualitative understanding of what happens to appreciate that the nuclei are much heavier than the electrons. In the so-called Born-Oppenheimer approximation, their kinetic energy is negligible and they sit at fixed points because the uncertainty principle isn't too constraining for heavy particles. The electrons are moving in the external potential of these classical nuclei. You get some spectrum for the electrons.
The discrete energy levels calculated from the quantum multi-electron problems depend on the positions of the nuclei. You may optimize the positions of the nuclei so that the total electronic energy plus the electrostatic repulsion energy from the nuclei is minimized. In this way, you get a good elementary picture of the molecules, with a focus on the electronic transitions that still change the energy by an electronvolt or so. That's what chemistry is all about.
However, we have neglected the motion of the nuclei and we may restore it now. The nuclei may vibrate around the optimum positions and, in some approximation, they behave as quantum harmonic oscillators. These vibrations add a "vibrational spectrum" to the molecules. The splittings, i.e. the energy differences, are much smaller than one electronvolt and they effectively split the electronic spectral lines into bands. The rotations of the molecules (those which are not spherically symmetric) add even smaller energy differences, the "rotational spectrum". There are also mixed rotational-vibrational terms in the total energy.
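A rough way to organize these scales – the standard Born-Oppenheimer estimates, with $m_e/M_{\rm nucl}$ denoting the electron-to-nucleus mass ratio, typically of order $10^{-4}$ – is\[
E_{\rm electronic} \sim 1~\eV, \qquad E_{\rm vibrational} \sim \sqrt{\frac{m_e}{M_{\rm nucl}}}\, E_{\rm electronic} \sim 0.01~\eV, \qquad E_{\rm rotational} \sim \frac{m_e}{M_{\rm nucl}}\, E_{\rm electronic} \sim 10^{-4}~\eV,
\] which is why the vibrational and rotational lines appear as ever finer sub-bands of the electronic ones.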
So the spectrum of $M$ has many possible values. You may start from the total mass of the elementary particles. The interaction energies involving the electrons change the total mass by a fixed amount of order an electronvolt. The relative motion of the bound nuclei splits those energy levels by much less than an electronvolt, and the rotational splittings are even smaller.
Solids, crystals, metals
I won't discuss gases because they're composed of largely non-interacting, free molecules. So the spectrum of $M$ becomes continuous as the relative speeds between the unbound molecules may be continuously adjusted. Liquids are similarly continuous but the molecules are very close to each other which prevents us from saying that they're independent and non-interacting. So the calculations of properties are much harder than they are for gases but it's still true that the spectrum is messy and kind of continuous.
Even materials such as glass which we would call "solid" may be viewed as some "very slow liquids". The true solids have to be crystals – think about diamonds and/or metals; the latter tend to be conductors (both of electricity and heat). Because a crystal may be viewed as a "single very large molecule", you may also see that it has some vibrational spectrum, just like other molecules. Because a crystal is large, you may organize the spectrum in a simplified way and find out that there may be "phonons" (quasiparticles of sound) propagating through the crystal. Even the electrons are shared by all the atoms of the crystal and because crystals are uniform, the motion of the electrons through crystals resembles the motion through empty space, at least many aspects of it do, much like for the phonons and other quasiparticles.
One could discuss specific features of many types of materials etc. but this blog entry is meant to be just a review of the big picture so let me stop with the material science and squalid state physics immediately after I began. ;-)
Heavier particles
I've jumped into nuclei, atoms, molecules, and materials right after quarks. However, I haven't completed the list of elementary particles yet. What I skipped so far were the W-bosons, the Z-boson, and the Higgs boson. Their masses are about 80, 91, and 125 GeV, respectively. Just four months ago, we couldn't have said what the mass of the God particle was, although sensible people's guesses were not far from 125 GeV.
There may be many other additional particles such as superpartners whose masses should be comparable to $1~\TeV$ within an order of magnitude or so. Nature may be unified etc. so there can be heavier particles linked to grand unification or other types of physics. Their masses may be arbitrary positive numbers up to $10^{18}~\GeV$ or so, the reduced Planck scale. As you're approaching higher energies, it becomes harder to produce the particles. Probably all those heavy particles are unstable so you have to create them artificially if you need them. If some of them are stable, e.g. magnetic monopoles, the cosmological evolution guaranteed that their density around the Earth is extremely low.
I haven't said one thing. If a particle is unstable, its mass is really complex, \[
M \approx M_\text{real} - \frac{i\Gamma}{2}
\] where $\Gamma$ is the inverse lifetime. It's extremely natural and important to complexify the energies and momenta and ask what happens for complex values. Of course, normalizable states in the proper Hilbert space can only have real values of the mass/energy.
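To spell out why $\Gamma$ is the inverse lifetime (the standard textbook argument): in units with $\hbar = 1$, the amplitude of the unstable state evolves as $e^{-iMt}$, so the survival probability is\[
\left| e^{-iMt} \right|^2 = e^{-\Gamma t},
\] i.e. the state decays exponentially with lifetime $\tau = 1/\Gamma$.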
Strings and black holes
If you want to know the part of the spectrum of localized momentum-less objects whose mass is comparable to $10^{18}~\GeV$, you enter the realm in which quantum field theory with its independent point-like particle species breaks down. You really have to switch to a more complete theory according to which even the most elementary particles refuse to be point-like. (I recommend all physicists to avoid the verb "fail" and use "refuse" instead. The verb "fail" creates the incorrect impression that there is something wrong e.g. with objects' not being point-like.)
String theory is the only known – and quite certainly, the only mathematically possible – consistent theory of quantum gravity (a theory agreeing with the postulates of quantum mechanics, local Lorentz invariance, as well as the equivalence principle for the gravitational force). But we still use "string theory" and "quantum gravity" as slightly inequivalent terms. "Quantum gravity" is supposed to be all about the statements whose validity doesn't "obviously" depend on the probably inevitable stringy character of quantum gravity. (As our understanding of quantum gravity in string theory and the inevitability of some stringy features of gravitational systems improves, the boundary between "quantum gravity" and "string theory" gets fuzzier and the two concepts are gradually morphing into synonyms.)
Quantum gravity in its general form has consequences for the spectrum of $M$, too. If you study "tightly bound, elementary-looking" states whose mass $M$ exceeds the Planck mass, comparable to $10^{19}~\GeV$, quantum gravity tells you what these states have to be. There can of course be large objects and neutron stars but we know that they're not elementary.
On the other hand, black holes did look and still do look kind of elementary. They're not composed of anything "simpler" that would look more elementary. And even if they are, the interactions keeping these building blocks tightly bound are probably textbook examples of the "strongly coupled interactions".
Quantum gravity implies that "almost all" the mass eigenstates whose mass exceeds the Planck scale are black holes. Classically, the mass of a black hole may be any real number; it may be continuously adjusted. Quantum mechanically, black holes carry a large but finite entropy comparable to the area of the event horizons in some natural units so the number of microstates is\[
N \approx \exp (S_{\rm BH}) = \exp \left( \frac{A}{4 L_{\rm Planck}^2} \right)
\] where "BH" stands both for "black hole" as well as "Bekenstein-Hawking" who co-discovered the formula. Because the entropy is finite, there must exist an exponentially large but finite number of microstates that are "similar" to a given classical black hole solution. It implies that even the mass spectrum has to be kind of discrete, although exponentially dense. It's so dense that the discreteness becomes unmeasurable for "really macroscopic" black holes whose mass has to be well above the Planck mass (otherwise the quantum corrections to the general relativity would still be huge, relatively speaking).
Now, "string/M-theory" in the narrow sense is everything that consistently interpolates between the spectrum of light elementary particles (much lighter than the Planck scale) and the black-hole spectrum in the opposite extreme limit. When you analyze all the consistent paths that physics may choose in between, you are mapping the "landscape of string theory".
In the landscape, the theories contain strings and branes (sometimes approximated well enough by points) that propagate in a higher-dimensional spacetime. It is only useful to say that the theory contains strings and branes if they're (relatively) weakly coupled at least in some description. Strings and branes may carry internal excitations that change the properties of the "ground state string" or "ground state brane" in a similar way as phonons change the properties of a crystal (or vibrational modes change a molecule). However, the changes to the mass arising from an added vibration to a string are comparable to the Planck mass: they are huge.
In perturbative string theory, which is a good enough description for a non-negligible part of the landscape, at least morally speaking, the spectrum of the internal excitations of the strings may be reduced to an infinite collection of harmonic oscillators and is exactly solvable. If you consider branes or strings at strong coupling, it becomes harder (or impossible) to analytically solve the problem and find the spectrum.
Many different kinds of physical phenomena contribute to the energy that influences the mass spectrum we discuss. For example, extra dimensions add reasonable multiples of $1/R$ – in units where also $\hbar=1$, as used by the "really mature" physicists – where $R$ is the typical length/size of the compactified dimensions (and/or some curvature radius; for warped geometries, the estimates are often harder, anyway). At energy scales well below the "Kaluza-Klein scale", we may neglect the extra dimensions and describe everything in terms of a four-dimensional quantum theory.
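To attach an illustrative number – mine, not the text's – a compact dimension of radius $R$ contributes Kaluza-Klein masses of roughly $n/R$; restoring units via $\hbar c \approx 197~\MeV\cdot{\rm fm}$, a radius of $R = 10^{-18}~{\rm m}$ would correspond to\[
\frac{1}{R} \approx \frac{197~\MeV \times 10^{-15}~{\rm m}}{10^{-18}~{\rm m}} \approx 200~\GeV,
\] i.e. a Kaluza-Klein scale in the energy range probed by the LHC.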
Because the string scale is typically higher (higher energies) than the Kaluza-Klein scale (the extra dimensions should be larger than the strings, otherwise they don't really behave as "ordinary geometry"), and because strings may be approximated by pointlike particles below the string scale, you see that below the Kaluza-Klein scale $1/R$, the description in terms of a four-dimensional quantum field theory (a theory based on point-like particles) will be a good approximation. Even in this regime, in which you have forgotten about the stringy character of matter as well as the extra dimensions, you find lots of new hierarchies, scales, and physical phenomena that influence the spectrum.
I have gotten into complicated waters and because I don't want too many overtaught readers to throw up, let me stop at this point. If I didn't stop, I would have to tell you that the spectrum of isolated objects is really just the 2-point-function part of the "full dynamical information" which also includes the $n$-point functions or scattering amplitudes (kind of equivalent to the detailed information in the Hamiltonian about all the multi-particle states, including the unbound ones). And that could get really messy. ;-)
This was long enough a text for me to postpone proofreading, too. Apologies for the enhanced expectation value of the number of typos above.
Posted by Luboš Motl
http://mathoverflow.net/questions/93124/automorphisms-of-mathbbc

## Automorphisms of $\mathbb{C}$
Is it true that $G_{\mathbb{Q}}$, the absolute Galois group of $\mathbb{Q}$, is a subgroup of $Aut(\mathbb{C})$ ?
Or a simpler question: can any automorphism of $\overline{\mathbb{Q}}$ be extended to an automorphism of $\mathbb{C}$?
Yes, any automorphism of $\overline{\mathbb Q}$ can be extended to an automorphism of $\mathbb C$, hence $\mathrm{Aut}(\overline{\mathbb Q})$ is a quotient of $\mathrm{Aut}(\mathbb C)$. This holds for any extension pair of algebraically closed fields. – Emil Jeřábek Apr 4 2012 at 16:22
See e.g. en.wikipedia.org/wiki/… . – Emil Jeřábek Apr 4 2012 at 16:58
That's in essence Theorem V.2.8 in Lang's algebra book. "In essence", because you first have to choose a transcendence base $T$ of $\mathbb{C}|\bar{\mathbb{Q}}$, extend your automorphism $f$ to $\bar{\mathbb{Q}}(T)$ by $f|T = id_T$ and then you can apply the cited theorem. – Ralph Apr 4 2012 at 17:08
The natural map is from $Aut(\mathbb C)$ to $G_{\mathbb Q}$ (given by restriction), not the other way around. As has now been noted in several comments and answers, this map is surjective. On the other hand, this does not answer your first question, as to whether $G_{\mathbb Q}$ is a subgroup of $Aut(\mathbb C)$. In fact, you probably don't actually care about this question; the surjectivity is what you seemed interested in. Nevertheless, I imagine that the answer is no, in the sense that the surjection $Aut(\mathbb C) \to G_{\mathbb Q}$ presumably doesn't split. Regards, – Emerton Apr 5 2012 at 2:57
@Bird: Why is $\mathrm{Aut}(\overline{\mathbb Q})$ a subgroup of $\mathrm{Aut}(\mathbb C)$ as a group? That’s the question no one here was able to answer so far. – Emil Jeřábek Apr 7 2012 at 10:11
## 2 Answers
There is the more general fact that any automorphism of any subfield of $\mathbb{C}$ can be extended to an automorphism of $\mathbb{C}$. For a proof, see the paper Automorphisms of the Complex Numbers by Paul Yale of Pomona College. Here is a JSTOR link. In general, if $k$ is an arbitrary (EDIT: algebraically closed) field, my guess would be that Yale's argument could easily be extended to show that any automorphism of a subfield $h\subset k$ can be extended to an automorphism of $k$.
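A compressed version of the standard extension argument, summarizing the comments under the question rather than Yale's paper: given an automorphism $f$ of a subfield $h$ of an algebraically closed field $k$, choose a transcendence basis $T$ of $k$ over $h$. Then $f$ extends to $h(T)$ by acting as the identity on $T$, and since $k$ is an algebraic closure of $h(T)$, the isomorphism extension theorem (e.g. Theorem V.2.8 in Lang's Algebra) extends this automorphism of $h(T)$ to an automorphism of $k$.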
It certainly does not work for arbitrary fields. For example, let $h=\mathbb Q(\sqrt 2)$, $k=\mathbb Q(\sqrt[4]2)$, and $f$ the automorphism of $h$ such that $f(\sqrt2)=-\sqrt2$. It works when $k$ is algebraically closed, as explained in the comments above. – Emil Jeřábek Apr 4 2012 at 22:16
For arbitrary fields k this isn't true... for instance, take h to be any nontrivial real algebraic extension of the rationals, and k to be the reals. (There are also counterexamples if k is taken to be the p-adic numbers, for any prime p.) I think it does extend to any algebraically closed k, though. – zeb Apr 4 2012 at 22:20
Your question depends on the axiom of choice. As noted in other comments, if you assume AC, then the complex numbers have crazy automorphisms, and any automorphism of any subfield of $\mathbb{C}$ can be extended to an automorphism of all of $\mathbb{C}$.
However! If you do not assume the axiom of choice, then it is consistent to say that the only automorphisms of the complex numbers are the identity and conjugation (this is consistent with ZF and implied by additional axioms such as the axiom of determinacy [which implies among other things that all sets are measurable]). Note that this is inconsistent with AC, but it is not the negation of it.
In this case (without AC), the only automorphisms that can be extended to automorphisms of $\mathbb{C}$ are the identity and complex conjugation!
So with the axiom of choice, the automorphisms are really crazy and any automorphism of a subfield can be extended to an automorphism of all of $\mathbb{C}$. But without the axiom of choice (and WITH some other consistent axioms), the automorphisms are all really boring!
Interesting stuff.
-Pat Devlin
Axiom of determinacy, whose consistency requires some pretty large cardinals, is an overkill. $|\mathrm{Aut}(\mathbb C)|=2$ is in fact relatively consistent with ZF (with no large cardinal assumptions), since it follows from ZF + DC + "every set of reals has the property of Baire". – Emil Jeřábek Apr 6 2012 at 11:09