url stringlengths 17–172 | text stringlengths 44–1.14M | metadata stringlengths 820–832 |
---|---|---|
http://mathoverflow.net/questions/77728?sort=oldest | ## Can local duality for elliptic curves be proven with “big rings”?
From Exercise 5.14, Ch. V of Silverman's "Advanced Topics in the Arithmetic of Elliptic Curves", I learned that the local duality for elliptic curves over $p$-adic fields can be proven for Tate curves by a relatively easy argument in Galois cohomology. Essentially, when the elliptic curve is $E_q = G_m / q^Z$ over a $p$-adic field $K$, one can find various long exact sequences connecting the Galois cohomology $H^1(K, E_q(\bar K))$ to the cohomology of $G_m$, $Q/Z$, etc., which are well-known by class field theory.
Without being an expert in $p$-adic cohomology, Hodge theory, etc., I know that by passing to a big ring ($B_{dR}$ will work), one can find a pair of periods for an elliptic curve over $K$ with good reduction. There might not be any interesting nontrivial uniformization of such elliptic curves, but the periods carry the information instead.
So can one exhibit (some of) the duality between $H^1(K, E(\bar K))$ and $Hom(E(K), Q/Z)$ when $E$ has good reduction, by using a big ring like $B_{dR}$?
When I see period rings, they are always used as linear algebraic gadgets. But since $B_{dR}$ is a $K$-algebra with Galois action, might someone consider $H^1(K, E(B_{dR}))$? In other words, rather than taking a linear algebraic gadget over $\bar K$, and tensoring up to $B_{dR}$, might one study a variety over $\bar K$ and base change to $B_{dR}$ or at least take $B_{dR}$-points? Might $H^1(K, E(B_{dR}))$ pick up the Weil-Chatelet group in the spirit of my first question?
Any references including $B_{dR}$-points of varieties would be greatly appreciated, as well as answers to the questions.
-
5
The tag "fontaine" is interesting. Long ago, when you asked a graduate student at Orsay what he was doing, the invariable answer was "des fontaineries". – Chandan Singh Dalawat Oct 11 2011 at 2:43
1
Using the Kummer exact sequence, it seems that $H^1(K,E(B_{dR}))$ (or at least the $n$-torsion part) is the same as $H^1(K,E(\overline{K}))$. – François Brunault Oct 11 2011 at 7:46
Thinking over it again, I'm not so sure of my comment above, because it's not so clear that the multiplication-by-$n$ map is surjective on $E(B_{dR})$ (is it ?) – François Brunault Oct 11 2011 at 17:20
@Chandan Singh Dalawat: Funny! Et ceux du Collège de France répondaient des "conneries", c'est ça? (sorry but this joke only works in French!). – Nicolas B. Sep 19 at 19:49
## 1 Answer
If I've understood correctly what you want, this is in the Bloch–Kato article in volume 1 of the Grothendieck Festschrift. The local duality result (I think) you are talking about is at the top of page 353 and the generalization is Proposition 3.8. Basically, you can show that if you have an abelian variety $A$ over $K$ with Tate module $T$, then the image of the Kummer map $$A(K)\rightarrow H^1(K,T)$$ is the Bloch–Kato Selmer group $H^1_f(K,T)$ (which is defined using $B_{\text{cris}}$; basically a derived functor of tensoring with $B_\text{cris}$ and taking Galois invariants). For a general $p$-adic Galois representation on a finite free $O_K$-module $T$, one can still define $H^1_f(K,T)$. One shows that under the usual local Tate duality $$H^1(K,T)\times H^1(K,T^\vee\otimes\mathbf{Q}_p/\mathbf{Z}_p)\rightarrow H^2(K,\mathbf{Q}_p/\mathbf{Z}_p(1))\cong\mathbf{Q}_p/\mathbf{Z}_p$$ the annihilator of $H^1_f(K,T)$ is $H^1_f(K,T^\vee\otimes\mathbf{Q}_p/\mathbf{Z}_p)$ (and vice versa) (where $T^\vee$ denotes the Tate dual). That the latter thing is the $H^1(K,E(\overline{K}))$ that you want it to be is equation (3.3) of the Bloch–Kato paper.
For example, when trying to generalize some construction using Kummer theory for elliptic curves to the case of higher weight modular forms, people use $H^1_f(K,T)$. Of course, the Bloch–Kato Selmer groups pretty much permeate a lot of things right now.
-
This looks very promising to me -- I'll check out the Grothendieck Festschrift asap. Thanks! – Marty Oct 11 2011 at 14:50
Since I can't seem to find the Grothendieck Festschrift today, I'm looking through the very nice notes of Joel Bellaiche on the Bloch-Kato conjecture. He treats the Bloch-Kato Selmer group in Section 2, including the theorem you mention. I'll keep on reading now... – Marty Oct 11 2011 at 22:38
It seems that this provides a very good answer to my question, "...can one exhibit (some of) the duality...", with emphasis on "some of". What I find troublesome is that again, one passes to a linear algebraic gadget -- the Tate module here. In so doing, the duality doesn't capture all of $A(K)$, but only $A(K) \otimes_{Z_p} Q_p$. This effectively kills most of the information about $A(Z / pZ)$, as does any linear algebraic gadget with $Q_p$-vector spaces. So I guess I still fantasize about using $B_{dR}$ in a nonlinear way, like considering $A(B_{dR})$ rather than a Tate module. – Marty Oct 11 2011 at 22:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9128071665763855, "perplexity_flag": "head"} |
http://physics.stackexchange.com/questions/17800/how-to-implement-heuns-method-to-solve-a-2nd-order-ode?answertab=active | # How to implement Heun's method to solve a 2nd order ODE? [closed]
I am trying to simulate a soft body object by having a collection of points connected by springs. I also want to have the object bounce on a plane. I have been able to implement Euler's method to make this simulation work, but it is not satisfactory because when I increase the spring constants it becomes unstable. Ideally I would like to implement the Runge-Kutta 4th order method for this simulation, but for now I just want to implement Heun's method, which is the Runge-Kutta 2nd order method.
From what I understand, this is what you have to do:
1. Compute the acceleration of every vertex.
2. Using Euler's method, compute the new velocities and positions of each vertex after one time interval, and store these new vertices in a separate data structure from the original vertices.
3. Compute the acceleration of every vertex in their new positions.
4. Compute the average of the 2 accelerations you computed.
5. Using this average acceleration, use Euler's method again to compute the new velocities and positions of each vertex after one time interval, starting from the original vertex velocities and positions.
This almost works, but when the object bounces off the plane, it only bounces half as high each time, essentially losing energy. If I just use Euler's method, it does not lose energy.
I factor in the collision and gravity forces when I compute the acceleration. The way I calculate the collision force from the ground is that if a vertex is below the ground, I pretend there is a spring with an equilibrium distance of 0 between the ground and the vertex. Is there something in my algorithm that is incorrect?
Thanks in advance.
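For reference, here is a minimal sketch (not from the original post) of the predictor-corrector scheme in steps 1-5 above, assuming a user-supplied `acceleration(x, v)` routine that already includes the spring, gravity and ground forces; the position update averages the two velocity slopes, which is the standard Heun form for second-order systems.

```python
import numpy as np

def heun_step(x, v, h, acceleration):
    """One Heun (RK2) step for a particle system.

    x, v         : numpy arrays of positions and velocities
    h            : time step
    acceleration : callable returning the acceleration array for (x, v)
    """
    # Step 1: accelerations at the current state
    a0 = acceleration(x, v)
    # Step 2: Euler predictor for the state one time step ahead
    x_pred = x + h * v
    v_pred = v + h * a0
    # Step 3: accelerations at the predicted state
    a1 = acceleration(x_pred, v_pred)
    # Steps 4-5: advance from the ORIGINAL state using the averaged slopes
    v_new = v + h * 0.5 * (a0 + a1)
    x_new = x + h * 0.5 * (v + v_pred)
    return x_new, v_new
```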
-
There are better ODE-solving algorithms than RK when stiffness is a concern. My hands-down favorite, if the system is linear, as it sounds like yours is, is matrix-exponent. – Mike Dunlavey Dec 4 '11 at 14:50
"This almost works, but when the object bounces off the plane, it only bounces half as high each time, essentially losing energy." One possible cause of that is transfer of kinetic energy from overall body motion to internal motion. – mmc Dec 4 '11 at 19:02
@mmc: if this were the answer, it would be the same in Euler. – Ron Maimon Dec 7 '11 at 7:07
@RonMaimon Maybe he is having gross energy non-conservation when using Euler, but you are probably right. – mmc Dec 7 '11 at 11:51
1
@Thomas Ryabin: Maybe the question would fit better on scicomp.SE? Don't forget to provide crosspost links if you do this (without the help of moderators), i.e. crossposting (as opposed to migration). Alternatively, you can flag the moderators for migration. – Qmechanic♦ Mar 4 '12 at 16:15
## closed as off topic by Qmechanic♦ Dec 30 '12 at 7:46
## 2 Answers
If you have the accelerations as `a=f(t,x,v)` then starting from `t_1`, `x_1`, `v_1` and taking a time step `h`, do the following:
1. `K0 = f(t_1, x_1, v_1)`
2. `K1 = f(t_1+h, x_1+h*v_1, v_1+h*K0)`
3. `t_2 = t_1 + h`
4. `v_2 = v_1 + h*(K0/2+K1/2)`
5. `x_2 = x_1 + h*(v_1+h*(K0/3+K1/6))`
where `x_2` and `v_2` are the positions and velocities after the step `h`.
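A direct transcription of steps 1-5 into code might look like the following sketch; `f` is assumed to be whatever routine returns the accelerations (spring, gravity and ground forces divided by mass).

```python
def step(f, t1, x1, v1, h):
    """Advance (t, x, v) by one step h using the update rules above."""
    K0 = f(t1, x1, v1)
    K1 = f(t1 + h, x1 + h * v1, v1 + h * K0)
    t2 = t1 + h
    v2 = v1 + h * (K0 / 2 + K1 / 2)
    x2 = x1 + h * (v1 + h * (K0 / 3 + K1 / 6))
    return t2, x2, v2

# Example: a unit-mass, unit-stiffness oscillator, a = -x
t, x, v = 0.0, 1.0, 0.0
for _ in range(1000):
    t, x, v = step(lambda t, x, v: -x, t, x, v, 0.01)
```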
-
Where did the "K0/3 + K1/6" in step 5 come from? – Thomas Dec 5 '11 at 19:03
If you assume a linear variation of acceleration with respect to time it comes out like that. The standard method has a constant acceleration `x_1+h*(v_1+h*K0/2)` term only and it is way too unstable. – ja72 Dec 5 '11 at 22:58
@Ja72: It is only "unstable" in the technical sense of stiffness, which happens when you take time scales which are larger than the largest inverse frequency in the problem. This is never the case in discrete particle/spring simulations. He could use the obvious Euler algorithm, or any of the simplest correctors and the answer should be grossly the same. It is pointless to make a better algorithm for this type of thing. – Ron Maimon Jan 4 '12 at 15:03
Your description of the algorithm is correct, and this is the best quick-and-dirty algorithm for day-to-day simulations. But you have to specify far more detail if you want a certain answer.
One possible reason is that the force near the ground is too sharply varying. You can replace the ground-particle force with just reflecting boundary conditions, so that whenever the y-coordinate of any particle is less than zero, you reverse its y-velocity (instantly, without any Heun back and forth, just sweep over the particles at the beginning of each time step, and if y<0, $v_y=-v_y$). Then you don't need to have any force from the ground at all. But you will get sound losses in the material during the bounce.
If you want a good diagnostic, add up the total energy (potential + kinetic) and plot it with time. It is difficult to say more, because you can't debug code without seeing it.
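A sketch of the two suggestions above (reflecting boundary and an energy diagnostic), assuming numpy arrays and made-up names for the spring list and constants:

```python
import numpy as np

def reflect_ground(pos, vel):
    # Sweep before each time step: any particle below the plane (y < 0)
    # gets its y-velocity reversed; no ground force is needed.
    below = pos[:, 1] < 0.0
    vel[below, 1] = -vel[below, 1]

def total_energy(pos, vel, mass, springs, k, rest, g=9.81):
    # Kinetic + gravitational + elastic energy; plot this against time
    # to see where the simulation gains or loses energy.
    kinetic = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))
    gravity = np.sum(mass * g * pos[:, 1])
    elastic = 0.0
    for i, j in springs:                 # springs = list of (i, j) index pairs
        d = np.linalg.norm(pos[i] - pos[j])
        elastic += 0.5 * k * (d - rest)**2
    return kinetic + gravity + elastic
```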
-
The way I am calculating the force from the ground is if a vertex is below the ground, then I pretend there is a spring with an equilibrium distance of 0 between the ground and the vertex. I'll try putting the collision calculations outside of the Heun algorithm. – Thomas Dec 4 '11 at 17:26
The half spring should work if the spring constant is not too big. – Ron Maimon Dec 4 '11 at 17:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9301390647888184, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/77558/area-of-distance-sphere-in-manifold-with-ricci-ge-0 | Area of distance sphere in manifold with Ricci $\ge 0$.
Let $M$ be an open complete manifold with Ricci curvature $\ge 0$. By a theorem of Calabi and Yau, the volume growth of $M$ is at least linear. I am wondering whether the following statement is true:
Let $p$ be any fixed point in $M$ and $B(p, r)$ be the distance ball of radius $r$ in $M$. Then for any given $R>0$, there exists a constant $c=c(p,R)>0$ such that $Area(\partial B(p, r))\ge c(p,R)$ for any $r>R$.
-
1
Do you mean for $r$ sufficiently large? – Agol Oct 8 2011 at 21:52
Oh, yes. Thanks Agol, I've corrected my statement – unknown (google) Oct 8 2011 at 22:42
1
Don't have time to work out the details, but can't you use the Calabi-Yau lower bound on, say, $V(B(p, 4r))$ and the lower bound on Ricci to infer an isoperimetric inequality on all domains contained in $B(p, 2r)$. That and the Calabi-Yau lower bound on $V(B(p,r))$ implies a lower bound on the area of $\partial B(p,r))$. Not sure how the lower bound depends on $r$ though. – Deane Yang Oct 9 2011 at 0:15
1
Some evidence in favour: the analogue for horospheres (rather than distance spheres) is true. Specifically, for any Busemann function $b$ on $M$, the area of $b^{-1}(r)$ is eventually nondecreasing in $r$. See Lemma 20 in C. Sormani, "Busemann functions on manifolds with lower bounds on Ricci curvature and minimal volume growth," JDG 1998. (I don't think the theorem's hypothesis of exactly-linear volume growth is needed for this part of the conclusion.) – macbeth Dec 19 2011 at 1:15
1 Answer
This is clearly false, just consider the cylinder
$$R_t \times S_{\theta}$$
with the product metric
$$g_\alpha=dt^2+\alpha^2 d\theta^2.$$
This is a flat metric so $Ric_{g_\alpha} = 0$. On the other hand, for $r\gg\alpha$, it is easy to see $Area(\partial B_r)<8\pi \alpha$. Since $\alpha$ is arbitrary there is no uniform lower bound.
Maybe you need a uniform lower bound on the injectivity radius? (I'm not an expert on comparison geometry so don't know off the top of my head if this would suffice) [Edit: Or maybe this can only happen if the metric splits off an isometric euclidean factor].
[As an aside, I can't seem to get math blackboard fonts to work; does anyone else have a problem with this?]
-
1
The constant $c=c(p,R)$ has to depend on the manifold of course. – unknown (google) Oct 9 2011 at 1:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9016431570053101, "perplexity_flag": "head"} |
http://mathhelpforum.com/advanced-algebra/204024-solving-equations-groups-print.html | # Solving equations in groups
• September 24th 2012, 08:24 PM
jzellt
Solving equations in groups
I will denote x-inverse as x' and the identity as e.
I'm asked to solve the two equations below simultaneously:
(1) ax^2 = b and (2) x^3 = e
Here's what i did:
ax^3 = bx (by multiplying (1) on the right by x)
Now, ae = bx (since x^3 = e)
b'a = x (by multiplying b' on the left)
BUT, if we plug in the above x into (1) we get a(b'a)(b'a) = b which isn't true..????
I've tried manipulating both (1) and (2) in many different ways but always come up with x = b'a.
What have I done wrong? Can someone show how to do this correctly please. Thanks a lot!
• September 24th 2012, 09:03 PM
johnsomeone
Re: Solving equations in groups
What does it mean to "solve an equation (or set of equations) for x"? It means, that, assuming the equation(s) hold, you've found out what x must be.
What you've proved is that, **IF** 1 & 2 hold for some x, THEN it must be that x = b'a.
If x = b'a doesn't actually make 1 & 2 hold in your group, then it means that the equations have no solution. It doesn't mean that you made a mistake. You didn't.
Ex: Suppose this was a set of equations in a finite group where 3 does not divide the order of the group, and that a is not equal to b. Could this set of equations have a solution?
No. Since Equation 2 requires that ord(x) divides 3, thus ord(x) is either 3 or 1. But it can't be 3, since that doesn't divide the group order (Lagrange's Thm with <x>).
Thus ord(x) = 1, so x = e. But then ax^2 = b says a(e) = b, so a = b. But that's a contradiction, since a not equal to b by premise.
Thus these equations have no solution in a finite group where 3 does not divide the order of the group, and where a is not equal to b.
Some equations simply don't have solutions - that's true even with the "ordinary" real numbers.
• September 24th 2012, 09:05 PM
FernandoRevilla
Re: Solving equations in groups
Quote:
Originally Posted by jzellt
What have I done wrong? Can someone show how to do this correctly please. Thanks a lot!
You have correctly proved that if $x$ is a solution of the system, then necessarily $x=b^{-1}a.$ But this is not sufficient, for example choose the group $(\mathbb{R}-\{0\},\cdot)$ , $a=1$ and $b=-1.$ Then, $\begin{Bmatrix} ax^2=b\\x^3=e\end{Bmatrix} \Leftrightarrow \begin{Bmatrix} x^2=-1\\x^3=1\end{Bmatrix}$ , and the system has no solution.
Edited: Sorry, I didn't see johnsomeone's post.
• September 25th 2012, 09:49 AM
Deveno
Re: Solving equations in groups
for completeness' sake, let's look at a group G where the two equations DO actually have a solution:
G = $S_3$, a = (1 2), b = (2 3), x = (1 3 2).
then $x^2 = (1\ 3\ 2)(1\ 3\ 2) = (1\ 2\ 3)$, and $ax^2 = (1\ 2)(1\ 2\ 3) = (2\ 3) = b$, and $x^3 = e$.
now let's look at $b^{-1}a$:
$b^{-1}a = (2\ 3)^{-1}(1\ 2) = (2\ 3)(1\ 2) = (1\ 3\ 2) = x$.
another example: G = $\mathbb{Z}_6$ under addition mod 6:
let a = 5, b = 3, x = 2.
then a + (x + x) = 5 + 2 + 2 = 9 = 3 (mod 6) = b, and x + x + x = 2 + 2 + 2 = 6 = 0 (mod 6).
and -b + a = (-3) + 5 = 2 = x (we get the same answer if we write -b = 3: 3 + 5 = 8 = 2 (mod 6)).
now, for each of these groups, let's COMPUTE:
$ab^{-1}ab^{-1}a$.
in $S_3$, this is:
(1 2)(2 3)(1 2)(2 3)(1 2) =
(1 2)[(2 3)(1 2)][(2 3)(1 2)] =
(1 2)(1 3 2)(1 3 2) =
(1 2)(1 2 3) = (2 3) = b.
in $\mathbb{Z}_6$:
a - b + a - b + a =
5 + 3 + 5 + 3 + 5 =
5 + 8 + 8 = 5 + 2 + 2 (since 8 = 2 (mod 6))
= 5 + 4 = 9 = 3 (mod 6) = b.
note that whether or not G is abelian did not affect our solution (although "the math is easier" in an abelian group).
in other words there is no reason for you to say that $ab^{-1}ab^{-1}a \neq b$. it might.
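As a quick machine check of the $S_3$ computation above (not part of the original post), with permutations stored as image tuples and products read right-to-left:

```python
# Permutations of {1,2,3} as image tuples: p[i-1] is the image of i.
def compose(p, q):
    # (p*q)(i) = p(q(i)): the right factor acts first, as in (2 3)(1 2).
    return tuple(p[q[i] - 1] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, image in enumerate(p, start=1):
        inv[image - 1] = i
    return tuple(inv)

e = (1, 2, 3)
a = (2, 1, 3)                      # the transposition (1 2)
b = (1, 3, 2)                      # the transposition (2 3)
x = compose(inverse(b), a)         # b^{-1} a

assert compose(a, compose(x, x)) == b   # a x^2 = b
assert compose(x, compose(x, x)) == e   # x^3 = e
print(x)                           # (3, 1, 2), i.e. the 3-cycle (1 3 2)
```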
All times are GMT -8. The time now is 11:06 AM. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9346694350242615, "perplexity_flag": "middle"} |
http://mathoverflow.net/revisions/54020/list | ## Return to Answer
5 deleted 3 characters in body
That link reveals that a certain heuristic but extremely plausible asymptotic formula is highly accurate at least up to $10^{10}$. That information tells us what a graph over a larger range would look like. So one knows certain patterns are there even if the graph is not drawn. (Maybe someone has drawn it, I don't know).
Roughly, the expected number of pairs is about $0.66\frac{n}{{\log}^2(n)}$ for a large number which is twice a prime. Multiply that by $\prod_{p|n}\frac{p-1}{p-2}$ to get the estimate for any even n (the product over odd prime divisors of $n$).
This explains these patterns (do you see others?): There should be a lower half very roughly hitting at $10^6$ from 3460 to 6700 and then an upper half from 6920 to 13400 about twice the lower part. The lower part is for 2 or 4 mod 6 and the upper for multiples of 6. Numbers which are or are not multiples of 5 and/or 7 should create 4 bands in each half (I can't see much beyond that). Those patterns (including with more primes taken into account) continue very faithfully as far as calculated.
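As a rough check of this heuristic, one can compare it against actual Goldbach partition counts for moderate $n$; a small SymPy-based sketch, added for illustration only:

```python
from math import log
from sympy import isprime, primerange, primefactors

def goldbach_count(n):
    # number of primes p <= n/2 with n - p also prime
    return sum(1 for p in primerange(2, n // 2 + 1) if isprime(n - p))

def heuristic(n):
    # 0.66 * n / log(n)^2, times prod (p-1)/(p-2) over odd prime divisors of n
    est = 0.66 * n / log(n) ** 2
    for p in primefactors(n):
        if p > 2:
            est *= (p - 1) / (p - 2)
    return est

for n in (100000, 100002, 100006):
    print(n, goldbach_count(n), round(heuristic(n)))
```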
added information
The name "Goldbach's Comet" is probably not useful in finding extended computations. "Goldbach partitions" might do better. As far as I can tell the state of the art is
MR1850627 (2002g:11142) Richstein, Jörg . Computing the number of Goldbach partitions up to $5\cdot 10^8$. Algorithmic number theory (Leiden, 2000), 475--490, Lecture Notes in Comput. Sci., 1838, Springer, Berlin, 2000.
That paper does include a graph for the range $500500000 \le n \le 500660160$. The author no doubt has the data and could generate other graphs, but may not see any reason to. There are a number of observations in that paper about patterns. You might be better served by looking at color coded plots over short ranges (say according to congruence class mod 30 or 210). For example a plot over a modest range colored mod 6 shows that the top is all of the $0 \mod 6$ and nothing else (of course) BUT it also shows that the very bottom boundary is heavily in favor of $2 \mod 6$ leading me to pose this question.
4 added 354 characters in body; deleted 1 characters in body
That link reveals that a certain heuristic but extremely plausible asymptotic formula is highly accurate at least up to $10^{10}$. That information tells us what a graph over a larger range would look like. So one knows certain patterns are there even if the graph is not drawn. (Maybe someone has drawn it, I don't know).
Roughly, the expected number of pairs is about $0.66\frac{n}{{\log}^2(n)}$ for a large number which is twice a prime. Multiply that by $\prod_{p|n}\frac{p-1}{p-2}$ to get the estimate for any even n (the product over odd prime divisors of $n$).
This explains these patterns (do you see others?): There should be a lower half very roughly hitting at $10^6$ from 3460 to 6700 and then an upper half from 6920 to 13400 about twice the lower part. The lower part is for 2 or 4 mod 6 and the upper for multiples of 6. Numbers which are or are not multiples of 5 and/or 7 should create 4 bands in each half (I can't see much beyond that). Those patterns (including with more primes taken into account) continue very faithfully as far as calculated.
added information
The name "Goldbach's Comet" is probably not useful in finding extended computations. "Goldbach partitions" might do better. As far as I can tell the state of the art is
MR1850627 (2002g:11142) Richstein, Jörg . Computing the number of Goldbach partitions up to $5\cdot 10^8$. Algorithmic number theory (Leiden, 2000), 475--490, Lecture Notes in Comput. Sci., 1838, Springer, Berlin, 2000.
That paper does include a graph for the range $500500000 \le n \le 500660160$. The author no doubt has the data and could generate other graphs, but may not see any reason to. There are a number of observations in that paper about patterns. You might be better served by looking at color coded plots over short ranges (say according to congruence class mod 30 or 210). For example a plot over a modest range colored mod 6 shows that the top is all of the $0 \mod 6$ and nothing else (of course) BUT it also shows that the very bottom boundary is heavily in favor of $2 \mod 6$ leading me to pose this question.
3 added 802 characters in body
That link reveals that a certain heuristic but extremely plausible asymptotic formula is highly accurate at least up to $10^{10}$. That information tells us what a graph over a larger range would look like. So one knows certain patterns are there even if the graph is not drawn. (Maybe someone has drawn it, I don't know).
Roughly, the expected number of pairs is about $0.66\frac{n}{{\log}^2(n)}$ for a large number which is twice a prime. Multiply that by $\prod_{p|n}\frac{p-1}{p-2}$ to get the estimate for any even n (the product over odd prime divisors of $n$).
This explains these patterns (do you see others?): There should be a lower half very roughly hitting at $10^6$ from 3460 to 6700 and then an upper half from 6920 to 13400 about twice the lower part. The lower part is for 2 or 4 mod 6 and the upper for multiples of 6. Numbers which are or are not multiples of 5 and/or 7 should create 4 bands in each half (I can't see much beyond that). Those patterns (including with more primes taken into account) continue very faithfully as far as calculated.
added information
The name "Goldbach's Comet" is probably not useful in finding extended computations. "Goldbach partitions" might do better. As far as I can tell the state of the art is
MR1850627 (2002g:11142) Richstein, Jörg . Computing the number of Goldbach partitions up to $5\cdot 10^8$. Algorithmic number theory (Leiden, 2000), 475--490, Lecture Notes in Comput. Sci., 1838, Springer, Berlin, 2000.
That paper does include a graph for the range $500500000 \le n \le 500660160$. The author no doubt has the data and could generate other graphs, but may not see any reason to. There are a number of observations in that paper about patterns. You might be better served by looking at color coded plots over short ranges (say according to congruence class mod 30 or 210).
2 added 5 characters in body
That link reveals that a certain heuristic but extremely plausible asymptotic formula is highly accurate at least up to $10^{10}$. That information tells us what a graph over a larger range would look like. So one knows certain patterns are there even if the graph is not drawn. (Maybe someone has drawn it, I don't know).
Roughly, the expected number of pairs is about $0.66\frac{n}{{\log}^2(n)}$ for a large number which is twice a prime. Multiply that by $\prod_{p|n}\frac{p-1}{p-2}$ to get the estimate for any even n (the product over odd prime divisors of $n$).
This explains these patterns (do you see others?): There should be a lower half very roughly hitting at $10^6$ from 3460 to 5540 6700 and then an upper half from 6920 to 11800 13400 about twice the lower part. The lower part is for 2 or 4 mod 6 and the upper for multiples of 6. Numbers which are or are not multiples of 5 and/or 7 should create 4 bands in each half (I can't see much beyond that). Those patterns (including with more primes taken into account) continue very faithfully as far as calculated.
1
That link reveals that a certain heuristic but extremely plausible asymptotic formula is highly accurate at least up to $10^{10}$. That information tells us what a graph over a larger range would look like. So one knows certain patterns are there even if the graph is not drawn. (Maybe someone has drawn it, I don't know).
Roughly, the expected number of pairs is about $0.66\frac{n}{{\log}^2(n)}$ for a large number which is twice a prime. Multiply that by $\prod_{p|n}\frac{p-1}{p-2}$ to get the estimate for any even n (the product over odd prime divisors of $n$).
This explains these patterns (do you see others?): There should be a lower half roughly hitting at $10^6$ from 3460 to 5540 and then an upper half from 6920 to 11800 about twice the lower part. The lower part is for 2 or 4 mod 6 and the upper for mutiples of 6. Numbers which are or are not multiples of 5 and/or 7 should create 4 bands in each half (I can't see much beyond that). Those patterns (including with more primes taken into account)continue very faithfully as far as calculated. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9490231275558472, "perplexity_flag": "head"} |
http://mathhelpforum.com/algebra/107060-lost-some-basic-skills-working-square-root.html | # Thread:
1. ## Lost some basic skills, working with a square root
Okay, so I'm doing a TON of identities in trig, and have come to a point where I'm working on problems such as:
$\sqrt(1+cos(30))/2$
cos of 30 is $\sqrt3/2$
I've forgotten what it's called but I need a step by step guide to simplifying this! (it simplifies down to an exact answer of $\sqrt(2+\sqrt3)/2$)
Edit: Yes, I've done college algebra (and aced it) but I haven't had to simplify problems like this in about 6 or 7 months, so I've lost the skill!
2. if Cos(30) is $\frac{\sqrt{3}}{2}$ then the numerator is $\sqrt{1+\frac{\sqrt{3}}{2}}$
The 1 can be written as 2/2 and then you can factor out 1/2.
The answer you give seems to be wrong... it should have a 1/2 in the numerator under the root.
3. Originally Posted by Wolvenmoon
Okay, so I'm doing a TON of identities in trig, and have come to a point where I'm working on problems such as:
$\sqrt(1+cos(30))/2$
cos of 30 is $\sqrt3/2$
I've forgotten what it's called but I need a step by step guide to simplifying this! (it simplifies down to an exact answer of $\sqrt(2+\sqrt3)/2$)
Edit: Yes, I've done college algebra (and aced it) but I haven't had to simplify problems like this in about 6 or 7 months, so I've lost the skill!
$\sqrt {\frac{1 + \frac{\sqrt{3}}{2}}{2}} = \sqrt{ \frac{ \frac{2 + \sqrt{3}}{2}}{2}} = \sqrt{ \frac{2 + \sqrt{3}}{4}}$ $= \frac{ \sqrt{2 + \sqrt{3}}}{\sqrt{4}} = \frac{\sqrt{2 + \sqrt{3}}}{2}$
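The chain of equalities above can also be checked with a CAS; a small SymPy sketch, added purely for illustration:

```python
from sympy import sqrt, cos, pi, sqrtdenest

lhs = sqrt((1 + sqrt(3) / 2) / 2)       # sqrt((1 + cos 30deg) / 2)
rhs = sqrt(2 + sqrt(3)) / 2

print((lhs - rhs).evalf())              # 0 to numerical precision
print((lhs - cos(pi / 12)).evalf())     # 0: this is cos 15deg, the half-angle value
print(sqrtdenest(rhs))                  # sqrt(2)/4 + sqrt(6)/4, a denested form
```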
4. Okay, so the problem is 'find the exact value of cos 15 degrees using the half angle identity for cosine'
The example gets down to (I think I typed it wrong) a point where it is:
The square root of (1 plus the square root of three over two) all over the square root of two.
The final answer is the square root (two plus the square root of three) all over two.
That clear it up any?
5. Originally Posted by Wolvenmoon
Okay, so the problem is 'find the exact value of cos 15 degrees using the half angle identity for cosine'
The example gets down to (I think I typed it wrong) a point where it is:
The square root of (1 plus the square root of three over two) all over the square root of two.
The final answer is the square root (two plus the square root of three) all over two.
That clear it up any?
The answer you gave is correct. You should note that the way you wrote the expression in your first post leaves room for confusion as to what the square root is over (it seems that you wanted to simplify $\frac{\sqrt{1 + cos(30)}}{2}$...)
6. Right, it's an example problem from my book. I need to know the algebra behind it as they don't show it step by step by step.
7. Attached is an image with exactly where my problem is highlighted. All the in book examples look the same way.
Okay, so I can't make the gimp highlight stuff. But I can't afford photoshop.
8. Originally Posted by Defunkt
$\sqrt {\frac{1 + \frac{\sqrt{3}}{2}}{2}} = \sqrt{ \frac{ \frac{2 + \sqrt{3}}{2}}{2}} = \sqrt{ \frac{2 + \sqrt{3}}{4}}$ $= \frac{ \sqrt{2 + \sqrt{3}}}{\sqrt{4}} = \frac{\sqrt{2 + \sqrt{3}}}{2}$
step 2 to step 3 is where I'm having an issue. Could you elaborate on it for me?
Edit: Ugh...how is that 1 becoming a 2?
Edit 2: Oh..I see. It was being multiplied by (2/1)/2, which is 1, right?
9. Originally Posted by Wolvenmoon
step 2 to step 3 is where I'm having an issue. Could you elaborate on it for me?
Edit: Ugh...how is that 1 becoming a 2?
Edit 2: Oh..I see. It was being multiplied by (2/1)/2, which is 1, right?
Correct.
$\sqrt \frac {1 + \frac{\sqrt{3}}{2}}{2} = \sqrt \frac{\frac{2}{2} + \frac{ \sqrt{3}}{2}}{2} = \sqrt{\frac{ \frac{2 + \sqrt{3}}{2}}{2}}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9607636332511902, "perplexity_flag": "middle"} |
http://mathoverflow.net/revisions/72931/list | ## Return to Question
2 added 590 characters in body; deleted 14 characters in body
The first definition of the category of Poisson algebras that comes to mind is that a morphism between Poisson algebras is an algebra homomorphism that is also a Lie algebra homomorphism with respect to the Poisson bracket. This definition does not seem to be easily compatible with how people actually use Poisson algebras (in particular rings of functions on Poisson manifolds):
• A Poisson-Lie group is not a group object in the opposite of the category of Poisson algebras because inversion negates the Poisson bracket.
• The standard choice of bracket on the tensor product of two Poisson algebras is not a categorical coproduct (if I have the correct general definition: it's defined by the requirements that it restricts to the given brackets on two Poisson algebras $A, B$ and that every element of $A$ Poisson-commutes with every element of $B$).
This suggests to me that if we used a different choice of morphisms, we might get actual group objects and an actual categorical coproduct. So are there any nice choices that do this?
I read somewhere on MO that the correct definition of a morphism between Poisson manifolds is a Lagrangian submanifold of their product. How does this generalize to Poisson algebras? Does it fix the two issues above? (I'm a little more pessimistic about the second issue, so if there's a different general principle that leads to the standard choice, I would be interested in hearing about that as well.)
Edit: The discussion in my previous question about Poisson-Lie groups seems relevant, and perhaps it shows that the above point of view is misguided. Any Poisson algebra $A$ admits an "opposite" $A^{op}$ given by negating the Poisson bracket, and then inversion in a Poisson-Lie group is a "contravariant morphism" rather than a morphism. This suggests to me that it might make more sense to look for a bicategory of Poisson algebras similar to the bimodule bicategory.
1
# What reasonable choices of morphisms are there for the category of Poisson algebras?
The first definition of the category of Poisson algebras that comes to mind is that a morphism between Poisson algebras is an algebra homomorphism that is also a Lie algebra homomorphism with respect to the Poisson bracket. This definition does not seem to be easily compatible with how people actually use Poisson algebras (in particular rings of functions on Poisson manifolds):
• A Poisson-Lie group is not a group object in the opposite of the category of Poisson algebras because inversion negates the Poisson bracket.
• The standard choice of bracket on the tensor product of two Poisson algebras is not a categorical coproduct (if I have the correct general definition: it's defined by the requirements that it restricts to the given brackets on two Poisson algebras $A, B$ and that every element of $A$ Poisson-commutes with every element of $B$).
This suggests to me that if we used a different choice of morphisms, we might get actual group objects and an actual categorical coproduct. So are there any nice choices that do this?
I read somewhere on MO that the correct definition of a morphism between Poisson manifolds is a Lagrangian submanifold of their product. How does this generalize to Poisson algebras? Does it fix the two issues above? (I'm a little more pessimistic about the second issue, so if there's a different general principle that leads to the standard choice, I would be interested in hearing about that as well.) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9133360385894775, "perplexity_flag": "head"} |
http://en.wikipedia.org/wiki/Splitting_field | # Splitting field
In abstract algebra, a splitting field of a polynomial with coefficients in a field is a smallest field extension of that field over which the polynomial splits or decomposes into linear factors.
## Definition
A splitting field of a polynomial p(X) over a field K is a field extension L of K over which p factors into linear factors
$p(X) = \prod_{i=1}^{\deg(p)} (X - a_i) \in L[X]$
and such that the coefficients $a_i$ generate L over K. The extension L is then an extension of minimal degree over K in which p splits. It can be shown that such splitting fields exist and are unique up to isomorphism. The amount of freedom in that isomorphism is known to be the Galois group of p (if we assume it is separable).
## Facts
An extension L which is a splitting field for a set of polynomials p(X) over K is called a normal extension of K.
Given an algebraically closed field A containing K, there is a unique splitting field L of p between K and A, generated by the roots of p. If K is a subfield of the complex numbers, the existence is immediate. On the other hand, the existence of algebraic closures in general is usually proved by 'passing to the limit' from the splitting field result, which therefore requires an independent proof to avoid circular reasoning.
Given a separable extension K′ of K, a Galois closure L of K′ is a type of splitting field, and also a Galois extension of K containing K′ that is minimal, in an obvious sense. Such a Galois closure should contain a splitting field for all the polynomials p over K that are minimal polynomials over K of elements a of K′.
## Constructing splitting fields
### Motivation
Finding roots of polynomials has been an important problem since the time of the ancient Greeks. Some polynomials, however, have no roots in their field of coefficients, such as $X^2+1$ over $\mathbf{R}$, the real numbers. By constructing the splitting field for such a polynomial one can find the roots of the polynomial in the new field.
### The Construction
Let F be a field and p(X) be a polynomial in the polynomial ring F[X] of degree n. The general process for constructing K, the splitting field of p(X) over F, is to construct a sequence of fields $F=K_0, K_1, \ldots K_{r-1}, K_r=K$ such that $K_i$ is an extension of $K_{i-1}$ containing a new root of p(X). Since p(X) has at most n roots the construction will require at most n extensions. The steps for constructing $K_i$ are given as follows:
• Factorize p(X) over $K_i$ into irreducible factors $f_1(X)f_2(X) \cdots f_k(X)$.
• Choose any nonlinear irreducible factor $f(X) = f_i(X)$.
• Construct the field extension $K_{i+1}$ of $K_i$ as the quotient ring $K_{i+1} = K_i[X]/(f(X))$ where $(f(X))$ denotes the ideal in $K_i[X]$ generated by $f(X)$.
• Repeat the process for $K_{i+1}$ until p(X) completely factors.
The irreducible factor $f_i$ used in the quotient construction may be chosen arbitrarily. Although different choices of factors may lead to different subfield sequences the resulting splitting fields will be isomorphic.
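To illustrate one pass of this construction, here is a short sketch (an editorial illustration, assuming SymPy's `extension` option to `factor` behaves as documented):

```python
from sympy import symbols, factor, root

X = symbols('X')
p = X**3 - 2

# K_0 = Q: p is irreducible there, so adjoin a root to form K_1 = Q(2**(1/3)).
alpha = root(2, 3)
print(factor(p, extension=alpha))
# (X - 2**(1/3)) * (X**2 + 2**(1/3)*X + 2**(2/3))
# The quadratic factor is still irreducible over K_1 (its roots are complex),
# so a second iteration adjoins one of them, giving the degree-6 splitting field.
```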
Since f(X) is irreducible, (f(X)) is a maximal ideal and hence $K_i[X]/(f(X))$ is, in fact, a field. Moreover, if we let $\pi : K_i[X] \to K_i[X]/(f(X))$ be the natural projection of the ring onto its quotient then
$f(\pi(X)) = \pi(f(X)) = f(X)\ \bmod\ f(X) = 0$
so π(X) is a root of f(X) and of p(X).
The degree of a single extension $[K_{i+1} : K_i]$ is equal to the degree of the irreducible factor f(X). The degree of the extension [K : F] is given by $[K_r : K_{r-1}] \cdots [K_2 : K_1][K_1 : F]$ and is at most n!.
### The Field $K_i[X]/(f(X))$
As mentioned above, the quotient ring $K_{i+1} = K_i[X]/(f(X))$ is a field when f(X) is irreducible. Its elements are of the form
$c_{n-1}\alpha^{n-1} + c_{n-2}\alpha^{n-2} + \cdots + c_1\alpha + c_0$
where the $c_j$ are in $K_i$ and α = π(X). (If one considers $K_{i+1}$ as a vector space over $K_i$ then the powers $\alpha^j$ for 0 ≤ j ≤ n−1 form a basis.)
The elements of $K_{i+1}$ can be considered as polynomials in α of degree less than n. Addition in $K_{i+1}$ is given by the rules for polynomial addition and multiplication is given by polynomial multiplication modulo f(X). That is, for g(α) and h(α) in $K_{i+1}$ the product g(α)h(α) = r(α) where r(X) is the remainder of g(X)h(X) divided by f(X) in $K_i[X]$.
The remainder r(X) can be computed through long division of polynomials, however there is also a straightforward reduction rule that can be used to compute r(α) = g(α)h(α) directly. First let
$f(X) = X^n + b_{n-1} X^{n-1} + \cdots + b_1 X + b_0.$
The polynomial is over a field so one can take f(X) to be monic without loss of generality. Now α is a root of f(X), so
$\alpha^n = -(b_{n-1} \alpha^{n-1} + \cdots + b_1 \alpha + b_0).$
If the product g(α)h(α) has a term $\alpha^m$ with m ≥ n it can be reduced as follows:
$\alpha^n\alpha^{m-n} = -\left(b_{n-1} \alpha^{n-1} + \cdots + b_1 \alpha + b_0 \right) \alpha^{m-n} = -\left (b_{n-1} \alpha^{m-1} + \cdots + b_1 \alpha^{m-n+1} + b_0 \alpha^{m-n+1} \right )$.
As an example of the reduction rule, take $K_i = \mathbf{Q}$, so that $K_i[X] = \mathbf{Q}[X]$ is the ring of polynomials with rational coefficients, and take $f(X) = X^7 - 2$. Let $g(\alpha) = \alpha^5 + \alpha^2$ and $h(\alpha) = \alpha^3 + 1$ be two elements of $\mathbf{Q}[X]/(X^7 - 2)$. The reduction rule given by f(X) is $\alpha^7 = 2$ so
$g(\alpha) h(\alpha) = \left(\alpha^5 + \alpha^2\right) \left(\alpha^3 + 1\right) = \alpha^8 + 2 \alpha^5 + \alpha^2 = \left(\alpha^7\right) \alpha + 2\alpha^5 + \alpha^2 = 2 \alpha^5 + \alpha^2 + 2\alpha.$
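The same computation can be scripted directly from the reduction rule, representing elements of $\mathbf{Q}[X]/(X^7 - 2)$ by their coefficient lists (a sketch for illustration):

```python
def multiply_mod(g, h, n=7, c=2):
    """Multiply two elements of Q[X]/(X^n - c), given as coefficient lists
    [a0, a1, ..., a_{n-1}] in alpha, using the rule alpha**n = c."""
    prod = [0] * (2 * n - 1)
    for i, gi in enumerate(g):
        for j, hj in enumerate(h):
            prod[i + j] += gi * hj
    # Reduce: a term of degree m >= n contributes c times to degree m - n.
    for m in range(2 * n - 2, n - 1, -1):
        prod[m - n] += c * prod[m]
        prod[m] = 0
    return prod[:n]

g = [0, 0, 1, 0, 0, 1, 0]      # alpha**5 + alpha**2
h = [1, 0, 0, 1, 0, 0, 0]      # alpha**3 + 1
print(multiply_mod(g, h))      # [0, 2, 1, 0, 0, 2, 0] = 2*alpha**5 + alpha**2 + 2*alpha
```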
## Examples
### The complex numbers
Consider the polynomial ring R[x], and the irreducible polynomial $x^2 + 1$. The quotient ring $R[x] / (x^2 + 1)$ is given by the congruence $x^2 \equiv -1$. As a result, the elements (or equivalence classes) of $R[x] / (x^2 + 1)$ are of the form a + bx where a and b belong to R. To see this, note that since $x^2 \equiv -1$ it follows that $x^3 \equiv -x$, $x^4 \equiv 1$, $x^5 \equiv x$, etc.; and so, for example $p + qx + rx^2 + sx^3 \equiv p + qx + r\cdot(-1) + s\cdot(-x) = (p - r) + (q - s)\cdot x$.
The addition and multiplication operations are given by firstly using ordinary polynomial addition and multiplication, but then reducing modulo $x^2 + 1$, i.e. using the fact that $x^2 \equiv -1$, $x^3 \equiv -x$, $x^4 \equiv 1$, $x^5 \equiv x$, etc. Thus:
$(a_1 + b_1x) + (a_2 + b_2x) = (a_1 + a_2) + (b_1 + b_2)x,$
$(a_1 + b_1x)(a_2 + b_2x) = a_1a_2 + (a_1b_2 + b_1a_2)x + (b_1b_2)x^2 \equiv (a_1a_2 - b_1b_2) + (a_1b_2 + b_1a_2)x \, .$
If we identify a + bx with (a,b) then we see that addition and multiplication are given by
$(a_1,b_1) + (a_2,b_2) = (a_1 + a_2,b_1 + b_2),$
$(a_1,b_1)\cdot (a_2,b_2) = (a_1a_2 - b_1b_2,a_1b_2 + b_1a_2).$
We claim that, as a field, the quotient $R[x] / (x^2 + 1)$ is isomorphic to the complex numbers, C. A general complex number is of the form a + ib, where a and b are real numbers and $i^2 = -1$. Addition and multiplication are given by
$(a_1 + ib_1) + (a_2 + ib_2) = (a_1 + a_2) + i(b_1 + b_2),$
$(a_1 + ib_1) \cdot (a_2 + ib_2) = (a_1a_2 - b_1b_2) + i(a_1b_2 + a_2b_1).$
If we identify a + ib with (a,b) then we see that addition and multiplication are given by
$(a_1,b_1) + (a_2,b_2) = (a_1 + a_2,b_1 + b_2),$
$(a_1,b_1)\cdot (a_2,b_2) = (a_1a_2 - b_1b_2,a_1b_2 + b_1a_2) \, .$
The previous calculations show that addition and multiplication behave the same way in $R[x] / (x^2 + 1)$ and C. In fact, we see that the map between $R[x]/(x^2 + 1)$ and C given by a + bx → a + ib is a homomorphism with respect to addition and multiplication. It is also obvious that the map a + bx → a + ib is both injective and surjective; meaning that a + bx → a + ib is a bijective homomorphism, i.e. an isomorphism. It follows that, as claimed: $R[x] / (x^2 + 1)$ ≅ C.
### Cubic example
Let K be the rational number field Q and
$p(X) = X^3 - 2$.
Each root of p equals $\sqrt[3]{2}$ times a cube root of unity. Therefore, if we denote the cube roots of unity by
$\omega_1 = 1 \,$,
$\omega_2 = - \frac{1} {2} + \frac {\sqrt{3}} {2} i,$
$\omega_3 = - \frac{1} {2} - \frac {\sqrt{3}} {2} i.$
any field containing two distinct roots of p will contain the quotient between two distinct cube roots of unity. Such a quotient is a primitive cube root of unity (either $\omega_2$ or $\omega_3=1/\omega_2$). It follows that a splitting field L of p will contain $\omega_2$, as well as the real cube root of 2; conversely, any extension of Q containing these elements contains all the roots of p. Thus
${L=\mathbf{Q}(\sqrt[3]{2},\omega_2)=\{a+b \omega_2+c\sqrt[3]{2} +d \sqrt[3]{2} \omega_2+ e \sqrt[3]{2^2} + f \sqrt[3]{2^2} \omega_2 \,|\,a,b,c,d,e,f\in\mathbf{Q} \}}$
### Other examples
• The splitting field of $x^2 + 1$ over $\mathbf{F}_7$ is $\mathbf{F}_{49}$; the polynomial has no roots in $\mathbf{F}_7$, i.e., −1 is not a square there, because 7 is not equivalent to 1 (mod 4).[1]
• The splitting field of $x^2 - 1$ over $\mathbf{F}_7$ is $\mathbf{F}_7$ since $x^2 - 1 = (x + 1)(x - 1)$ already factors into linear factors.
• We calculate the splitting field of $f(x) = x^3 + x + 1$ over $\mathbf{F}_2$. It is easy to verify that f(x) has no roots in $\mathbf{F}_2$, hence f(x) is irreducible in $\mathbf{F}_2[x]$. Put r = x + (f(x)) in $\mathbf{F}_2[x]/(f(x))$ so $\mathbf{F}_2(r)$ is a field and $x^3 + x + 1 = (x + r)(x^2 + ax + b)$ in $\mathbf{F}_2(r)[x]$. Note that we can write + for − since the characteristic is two. Comparison of coefficients shows that a = r and $b = 1 + r^2$. The elements of $\mathbf{F}_2(r)$ can be listed as $c + dr + er^2$, where c, d, e are in $\mathbf{F}_2$. There are eight elements: 0, 1, r, 1 + r, $r^2$, $1 + r^2$, $r + r^2$ and $1 + r + r^2$. Substituting these in $x^2 + rx + 1 + r^2$ we reach $(r^2)^2 + r(r^2) + 1 + r^2 = r^4 + r^3 + 1 + r^2 = 0$, since $r^3 = r + 1$ and $r^4 = r^2 + r$. Hence $x^2 + ax + b$ factors into linear factors in $\mathbf{F}_2(r)[x]$ and $E = \mathbf{F}_2(r)$ is a splitting field of $x^3 + x + 1$ over $\mathbf{F}_2$.
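A brute-force check of this last example, with elements of $\mathbf{F}_2(r)$ stored as coefficient triples (an illustrative sketch, not from the article):

```python
# Elements of F_2(r) = F_2[x]/(x^3 + x + 1) as triples (c, d, e) = c + d*r + e*r^2.
def mul(u, v):
    # Polynomial product in r with coefficients mod 2, then reduce with r^3 = r + 1.
    prod = [0] * 5
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            prod[i + j] ^= ui & vj
    for m in (4, 3):               # r^m = r^(m-2) + r^(m-3)
        if prod[m]:
            prod[m - 2] ^= 1
            prod[m - 3] ^= 1
            prod[m] = 0
    return tuple(prod[:3])

def f(x):
    # Evaluate x^3 + x + 1 with x in F_2(r).
    x3 = mul(mul(x, x), x)
    return tuple((a + b + c) % 2 for a, b, c in zip(x3, x, (1, 0, 0)))

elements = [(c, d, e) for c in (0, 1) for d in (0, 1) for e in (0, 1)]
print([x for x in elements if f(x) == (0, 0, 0)])
# three roots (r, r^2 and r + r^2), so x^3 + x + 1 splits over F_2(r)
```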
## References
1. Instead of applying this characterization of odd prime moduli for which −1 is a square, one could just check that the set of squares in $\mathbf{F}_7$ is the set of classes of 0, 1, 4, and 2, which does not include the class of −1 ≡ 6.
• Dummit, David S., and Foote, Richard M. (1999). Abstract Algebra (2nd ed.). New York: John Wiley & Sons, Inc. ISBN 0-471-36857-1.
• Hazewinkel, Michiel, ed. (2001), "Splitting field of a polynomial", , Springer, ISBN 978-1-55608-010-4 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 27, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9111517071723938, "perplexity_flag": "middle"} |
http://mathhelpforum.com/number-theory/122809-rsa-algorithm-why-impossible-useful-rsa-code.html | # Thread:
1. ## RSA algorithm why is this impossible for a useful RSA code?
e = 3
n = 10
M = 7
phi(n) = (5-1) * (2-1) = 4
ed = 1 mod 4
3d = 1 mod 4
d = 3
why is this impossible for a useful RSA code?
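A quick check (not part of the original post): the key pair above does round-trip the message mechanically, it is just far too small to be useful.

```python
n, e, d, M = 10, 3, 3, 7

C = pow(M, e, n)     # encrypt: 7**3 = 343 = 3 (mod 10)
P = pow(C, d, n)     # decrypt: 3**3 = 27  = 7 (mod 10)
print(C, P)          # 3 7 -> the message is recovered

# With only 10 residues, encryption is just a fixed permutation of {0,...,9},
# i.e. a substitution cipher, and n = 10 factors instantly.
print([pow(m, e, n) for m in range(n)])
```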
2. I don't know what you mean, it is a correct key generation mathematically speaking. Now, you can see that $e = d$, which is not great cryptographically speaking. And your semiprime is even (instant factorization). And your values are very small.
So what do you mean by "impossible" ?
3. Originally Posted by adam_leeds
e = 3
n = 10
M = 7
phi(n) = (5-1) * (2-1) = 4
ed = 1 mod 4
3d = 1 mod 4
d = 3
why is this impossible for a useful RSA code?
It is a valid RSA setup.
It is not very useful unless your alphabet only has 10 characters.
Not impossible.
But, not very useful.
Your alphabet may have 26 or 67 or 360 alphabetic characters, but when it is decoded you are going to have only 10 characters {n: 0,1,2,3,4,5,6,7,8,9 }
If you use the above to encode a message, you will use 1 character of plain_text to generate 1 character of cipher_text.
After the tenth character you are going to get a 1 to 1 relationship. It becomes a simple substitution cipher.
Typically, you would want to encode a group of characters at a time.
Suppose you have a group of plain_text (say 12 characters of yours which are ALL ones: 111111111111) and generate a cipher_text.
If you change a single bit in the plain_text (111111111121) you want to generate a completely different cipher_text that has NO similarity to the other cipher_text.
4. Originally Posted by Bacterius
I don't know what you mean, it is a correct key generation mathematically speaking. Now, you can see that $e = d$, which is not great cryptographically speaking. And your semiprime is even (instant factorization). And your values are very small.
So what do you mean by "impossible" ?
not sure thats what my question says, and i dont understand it. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.929821252822876, "perplexity_flag": "middle"} |
http://nrich.maths.org/1947/index?nomenu=1 | $ABCD$ is a square of side 1 unit. Arcs of circles with centres at $A, B, C, D$ are drawn in. Prove that the area of the central region bounded by the four arcs is: $(1 + \pi/3 - \sqrt{3})$ square units. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9578337669372559, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/204494/quadratic-equations/205423 | Quadratic equations
Does anyone know how to find integer solutions of the quadratic equation
$$y^2+y+z=f$$
where $z$ is a fixed odd prime or $1$ and $f$ is a fixed odd prime greater than $3$?
This problem arose from the Diophantine equation $A+B=C$ where $A,B,C$ are natural numbers with no common factor.
-
3
Hmmm...trying to tackle a particular case of the ABC Conjecture? Anyway, your accept rate would make Oesterlé-Masser cry. – DonAntonio Sep 29 '12 at 18:56
The quantification is not clear. Are $z$ and $f$ previously-given parameters, or what? – Lubin Sep 29 '12 at 19:08
@Lubin. Only one parameter is given. In short, the question asks to find the pair of primes such that $f-z=y(y+1)$. We know the difference of two primes can be expressed as the difference of two squares. So the above is saying there are infinitely many pairs of primes whose difference is expressed as above. – Vassilis Parassidis Sep 29 '12 at 19:20
The difference of two primes always has two factors, which are the following: $2(2x-2y+1)$ or $4(x-y)$, which are obtained from the difference of their squares. – Vassilis Parassidis Sep 29 '12 at 22:23
1
"We know the difference of two primes can be expressed as the difference of two squares." 11 and 5 are prime, and their difference is 6. How do you propose to express 6 as a difference of two squares? – Gerry Myerson Oct 2 '12 at 0:43
2 Answers
$$y^2 + y + z = f$$ $$y^2 + y + z - f = 0$$ $$ay^2 + by + c = 0$$ where $a=1$, $b=1$, and $c=z-f$. So $$y = \frac{-b\pm\sqrt{b^2-4ac}}{2a} = \frac{-1\pm\sqrt{1-4(z-f)}}{2}$$ In its method of solution, it's no different from any other quadratic equation.
Whether the solutions are integers depends of course on $z$ and $f$.
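In particular, integer solutions exist exactly when $1-4(z-f) = 4(f-z)+1$ is a perfect square; a small search sketch (assuming, as the comments suggest, that $z$ ranges over 1 and the odd primes below $f$):

```python
from math import isqrt
from sympy import primerange

def solutions(f):
    """All pairs (y, z) with y >= 0, z = 1 or an odd prime, and y**2 + y + z = f."""
    out = []
    for z in [1] + list(primerange(3, f)):
        disc = 1 - 4 * (z - f)          # = 4*(f - z) + 1
        s = isqrt(disc)
        if s * s == disc:               # perfect square => y = (-1 + s)/2
            out.append(((s - 1) // 2, z))
    return out

print(solutions(17))    # [(3, 5), (2, 11)]: 9 + 3 + 5 = 17 and 4 + 2 + 11 = 17
```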
-
My guess is that what OP is asking is, given a prime $f$, how do you find $y$ and prime $z$ such that $y^2+y+z=f$. – Gerry Myerson Oct 2 '12 at 6:34
1
If that's what he meant, he certainly wasn't clear about it. – Michael Hardy Oct 2 '12 at 16:01
Why don't you express the equation in the form $y^2+y+(z-f)=0$ and use the discriminant $b^2-4ac$, where $a=1$, $b=1$ and $c= z-f$? $c$ could be the difference of two primes, or check the sequence. Find the possible values of $y$.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9416629672050476, "perplexity_flag": "head"} |
http://mathhelpforum.com/calculus/79189-integral-expression-bessel-functions.html | # Thread:
1. ## integral expression for bessel functions...
Can someone give me a hand with this problem? I can't figure out how to integrate things once I have y, y' and y'' plugged into the Bessel equation.
problem in a .png linked below
I only need help with the integration on part A. Thanks in advance!
-Canopy
2. Originally Posted by canopy
Can someone give me a hand with this problem? I can't figure out how to integrate things once I have y, y' and y'' plugged into the Bessel equation.
problem in a .png linked below
I only need help with the integration on part A. Thanks in advance!
-Canopy
$y = \int_0^{\pi/2} \cos( x \sin \theta) d \theta$
$y' = \int_0^{\pi/2} - \sin \theta \sin( x \sin \theta) d \theta$
$y'' = \int_0^{\pi/2} - \sin^2 \theta \cos( x \sin \theta) d \theta$
so
$x^2y'' + x y' + x^2 y =$ $\int_0^{\pi/2} - x^2\sin^2 \theta \cos( x \sin \theta) - x \sin \theta \sin( x \sin \theta) + x^2 \cos( x \sin \theta) d \theta$
$=\int_0^{\pi/2} x^2(1-\sin^2 \theta) \cos( x \sin \theta) - x \sin \theta \sin( x \sin \theta) d \theta$
$=\int_0^{\pi/2} x^2\cos^2 \theta \cos( x \sin \theta) - x \sin \theta \sin( x \sin \theta) d \theta$
$= \int_0^{\pi/2} d\left( x \cos \theta \sin( x \sin \theta) \right) = \left. x \cos \theta \sin( x \sin \theta) \right|_0^{\pi/2} = 0$
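As a numerical sanity check (my own addition, not part of the thread): the integral in part A is the classical representation $y(x)=\tfrac{\pi}{2}J_0(x)$, so the computation above is just verifying that $J_0$ satisfies Bessel's equation of order zero. A quick comparison, assuming scipy is available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def y_int(x):
    # y(x) = integral_0^{pi/2} cos(x sin(theta)) dtheta = (pi/2) * J_0(x)
    return quad(lambda t: np.cos(x * np.sin(t)), 0, np.pi / 2)[0]

for x in [0.5, 1.0, 3.0, 7.5]:
    print(x, y_int(x), (np.pi / 2) * j0(x))
```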
http://complexzeta.wordpress.com/2007/08/13/tschirnhaus-transformations/
An Idelic Life
Algebraic number theory and anything else I feel like telling the world about
# Tschirnhaus Transformations
Monday, August 13, 2007 in Galois theory
This is based on a talk by Zinovy Reichstein from the PIMS Algebra Summer School in Edmonton.
The motivation comes from looking at ways to simplify polynomials. For example, if we start with a quadratic equation $x^2+ax+b$, we can remove the linear term by setting $y=x+\frac{a}{2}$; our equation then becomes $y^2+b'$.
We can do something similar with any degree polynomial. Consider the polynomial $x^n+a_1x^{n-1}+a_2x^{n-2}+\cdots+a_n$. We may make the substitution $y=x+\frac{a_1}{n}$ to remove the $(n-1)$-degree term.
We can also make the coefficients of the linear and constant terms equal with the substitution $z=\frac{b_n}{b_{n-1}}y$ (where the $b$'s are the coefficients of the polynomial expressed in terms of $y$).
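For concreteness, here is a small sympy sketch (my own illustration, not part of the talk) of the first substitution in degree 3: setting $y=x+\frac{a_1}{3}$, i.e. $x=y-\frac{a_1}{3}$, kills the quadratic term.

```python
import sympy as sp

x, y, a1, a2, a3 = sp.symbols('x y a1 a2 a3')

f = x**3 + a1*x**2 + a2*x + a3          # a general monic cubic
g = sp.expand(f.subs(x, y - a1/3))      # Tschirnhaus substitution y = x + a1/3

print(sp.collect(g, y))                 # depressed cubic in y
print(g.coeff(y, 2))                    # coefficient of y**2 is 0
```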
Enough for motivation. Suppose $f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ is a polynomial, and let $K=\mathbb{C}(a_1,\ldots,a_n)$. (So, in particular, $a_1,\ldots,a_n$ form a transcendence basis for $K$ over $\mathbb{C}$.) Let $L=K[x]/(f(x))$. A Tschirnhaus transformation is an element $y\in L$ so that $L=K(y)$.
Applying a Tschirnhaus transformation to a polynomial of degree $n$ gives us another polynomial of degree $n$ with different coefficients. They also allow us to simplify polynomial expressions in various senses. We will use the following two criteria of simplification:
1) A simplification involves making as many coefficients as possible 0.
2) A simplification involves making the transcendence degree of $\mathbb{C}(b_1,\ldots,b_n)$ over $\mathbb{C}$ as small as possible. (For a polynomial of degree $n$, we will write $d(n)$ for this number.)
Suppose $n=5$. Hermite showed that it is possible to make $b_1=b_3=0$ and $b_4=b_5$. Therefore $d(5)\le 2$. Klein showed that $d(5)$ is in fact equal to 2.
Now suppose $n=6$. Joubert showed that again we can make $b_1=b_3=0$ and $b_5=b_6$. Therefore $d(6)\le 3$.
It is unknown whether we can make $b_1=b_3=0$ when $n=7$. However, it is known that we cannot do so if $n$ is of the form $n=3^r+3^s$ for $r>s\ge 0$ or $n=3^r$ for $r\ge 0$. It is also known (Buhler and Reichstein) that $d(n)\ge\left\lfloor\frac{n}{2}\right\rfloor$.
http://mathoverflow.net/questions/92344/unconditional-bases-for-sum-infty-n1-oplus-elln-2-ell-1
## unconditional bases for $(\sum^\infty_{n=1} \oplus \ell^n_2 )_{\ell_1}$ [closed]
are there unconditional bases for $(\sum^\infty_{n=1} \oplus \ell^n_2 )_{\ell_1}$ ?
-
Actually, if I understand it correctly, the notation is fine. It just seems to me that the answer should be "yes, in the obvious way". There is an obvious candidate for an unconditional basis, have you tried to show that it works? – Yemon Choi Mar 27 2012 at 7:00
2
If the space is $\lbrace (x_{n,m})_{m\le n}: \sum_n \left( \sum_m |x_{n,m}|^2\right)^{1/2} <\infty \rbrace$ then the "unit matrices" $e_{n,m}$ (all entries $0$ except at the $(n,m)$-th spot) form an unconditional basis. – Jochen Wengenroth Mar 27 2012 at 7:16
is there more than one [up to equivalence] normalized unconditional basis ? – Rafael Mar 27 2012 at 17:48
The $\ell_1$ sum of infinite dimensional Hilbert spaces has a unique (up to permutations and equivalence, of course) semi normalized unconditional basis; see Bourgain, J.(B-VUB); Casazza, P. G.(1-MO); Lindenstrauss, J.(IL-HEBR); Tzafriri, L.(IL-HEBR) Banach spaces with a unique unconditional basis, up to permutation. Mem. Amer. Math. Soc. 54 (1985), no. 322, iv+111 pp. 46B15 – Bill Johnson Apr 1 2012 at 12:14
http://math.stackexchange.com/questions/121562/confusion-about-partial-differentiation-and-cauchy-riemann-equations
# Confusion about partial differentiation and Cauchy-Riemann equations
In complex analysis class (using Stein's Complex Analysis), we learned about the derivation of the Cauchy-Riemann equations, and that made sense. We take a holomorphic $f : O \rightarrow \mathbb{C}$, where $O \subset \mathbb{C}$ is open, and split it as $f(x + iy) = u(x, y) + i v(x, y)$. Then after some computation we arrive at $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$ and $\frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$. However, I haven't learned about multivariable calculus so I'm new to partial differentiation. The above makes sense to me, but in an exercise they ask us to prove that, in polar form, these equations take the form $\frac{\partial u}{\partial r} = \frac{1}{r} \frac{\partial v}{\partial \theta}$ and $\frac{1}{r} \frac{\partial u}{\partial \theta} = - \frac{\partial v}{\partial r}$. Now I'm confused. Do they mean the same $u$ and $v$ we had been using before? Maybe they similarly define $f(re^{i\theta}) = u(r, \theta) e^{i v(r, \theta)}$ and want me to derive the equations for that? The first option doesn't make any sense to me and I didn't succeed at attempting the second. I think I need a pretty thorough clarification on this issue, can anyone help?
-
## 2 Answers
Since you're not familiar with multivariable calc, let me try to clarify:
If we consider points in $\mathbb{R}^2$ using polar coordinates, we have, as anon said, $x=r\cos\theta$ and $y=r\sin\theta$. This means $$u(x,y)=u(x(r,\theta),y(r,\theta))=u(r\cos\theta,r\sin\theta).$$ Using the chain rule, we get $$\frac{\partial u}{\partial r}=\frac{\partial u}{\partial x}(x(r,\theta),y(r,\theta))\cdot\frac{\partial x}{\partial r}(r,\theta)+\frac{\partial u}{\partial y}(x(r,\theta),y(r,\theta))\cdot\frac{\partial y}{\partial r}(r,\theta)$$ and $$\frac{\partial u}{\partial \theta}=\frac{\partial u}{\partial x}(x(r,\theta),y(r,\theta))\cdot\frac{\partial x}{\partial \theta}(r,\theta)+\frac{\partial u}{\partial y}(x(r,\theta),y(r,\theta))\cdot\frac{\partial y}{\partial \theta}(r,\theta).$$
As we can find $\frac{\partial x}{\partial r}, \frac{\partial y}{\partial r}, \frac{\partial x}{\partial \theta}$ and $\frac{\partial y}{\partial \theta}$ explicitly in terms of $r$ and $\theta$, then we get a system of equations that lets us solve for $\frac{\partial u}{\partial x}$ and $\frac{\partial u}{\partial y}$ in terms of $r$, $\theta$, $\frac{\partial u}{\partial r}$, and $\frac{\partial u}{\partial \theta}$. From here it is just a matter of plugging the alternate expressions for $u_x$ and $u_y$ back into the Cauchy-Riemann equations that you know and love and simplifying.
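To spell out the remaining algebra (this completion is mine, following the answer's setup): since $\frac{\partial x}{\partial r}=\cos\theta$, $\frac{\partial y}{\partial r}=\sin\theta$, $\frac{\partial x}{\partial \theta}=-r\sin\theta$ and $\frac{\partial y}{\partial \theta}=r\cos\theta$, the chain rule gives $$u_r = u_x\cos\theta + u_y\sin\theta,\qquad u_\theta = -u_x\,r\sin\theta + u_y\,r\cos\theta,$$ and the same formulas with $v$ in place of $u$. Substituting the rectangular equations $u_x=v_y$ and $u_y=-v_x$ yields $$v_\theta = -v_x\,r\sin\theta + v_y\,r\cos\theta = r\,(u_y\sin\theta + u_x\cos\theta) = r\,u_r,$$ $$v_r = v_x\cos\theta + v_y\sin\theta = -u_y\cos\theta + u_x\sin\theta = -\tfrac{1}{r}\,u_\theta,$$ which are exactly the polar equations stated in the exercise.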
-
No, $u$ and $v$ are still the real and imaginary parts of $f$, but $\mathbb{C}$ (as the domain, not codomain) is now taken to be in polar form. Explicitly, we have $x=r\cos\theta$ and $y=r\sin\theta$; substitute these expressions into $u(x,y)$ and $v(x,y)$, then perform partial differentiation with respect to $r$ and $\theta$ with chain rule.
-
http://mathhelpforum.com/algebra/107840-simplify.html | # Thread:
1. ## Simplify
This Question came up in a test i did today
(X+3)(X+2)(X+1)
What i did was multiply out the first two brackets
X² + 5x + 6
then i tried
(X²+5x+6)(X+1)
X³ + 5x then I get confused since I don't know what to multiply 6 by
X² + 5x² I can also work this bit out but I have no idea what's next.
So X³+X²+5x²+5x is what I got so far which can be simplified to X³+5X to the power of 4 + 5x
2. (X² + 5X + 6)(X + 1)
= X³ + X² + 5X² + 5X + 6X + 6
= X³ + 6X² + 11X + 6
Multiply 6 by x and 1 just like the rest of them.
r2
3. Originally Posted by ZeroPunk
This Question came up in a test i did today
(X+3)(X+2)(X+1)
What i did was multiply out the first two brackets
X² + 5x + 6
then i tried
(X²+5x+6)(X+1)
$(x^2+5x+6)(x+1)$
$x^2\times x+5x\times x+6\times x+x^2\times 1+5x\times 1+6\times 1$
$x^3+5x^2+6x+x^2+5x+6$
group like terms
$x^3+6x^2+11x+6$
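(A one-line sympy check, added here only as a convenience; it is not part of the original thread.)

```python
import sympy as sp

x = sp.symbols('x')
print(sp.expand((x + 3) * (x + 2) * (x + 1)))   # x**3 + 6*x**2 + 11*x + 6
```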
http://mathoverflow.net/questions/122539/the-unreasonable-effectiveness-of-pade-approximation/122542
## The unreasonable effectiveness of Pade approximation
I am trying to get an intuitive feel for why the Pade approximation works so well. Given a truncated Taylor/Maclaurin series it "extrapolates" it beyond the radius of convergence. But what I can't grasp is how it manages to approximate the original function better than the series itself does, having only "seen" the information present in the series and without having access to the original function. My naive feeling is that if you start with the Taylor series, you can't do better than it in terms of approximation error, only in terms of, say, stability or computation time. But obviously the Pade approximation does do better. So - what explains its "unreasonable effectiveness"?
UPDATE: Here is a graph from p.5 of Pade Approximants, 2nd ed., that illustrates the phenomenon that puzzles me.
-
What do you mean "better" approximation? Do you mean pointwise, $L^1, L^2$ etc ? – John Mangual Feb 21 at 18:21
13
Something to keep in mind is that the rate at which the error of the Taylor series partial sums shrink is closely related to the distance to the nearest pole/singularity from the evaluation point. If the nearest singularity is a pole, then factoring it out improves the convergence rate, since other singularities are (generically) further away. This is essentially what the Padé approximation does. It "guesses" where the nearest poles are and factors them out before proceeding with a Taylor expansion. – Igor Khavkine Feb 21 at 18:41
2
And indeed, Padé might not be very good if the closest singularity is a branch point rather than a pole. – Robert Israel Feb 21 at 21:20
## 1 Answer
Walter Van Assche gives a modern account of Pade approximation; variants of these approximants were used in the proof that $e$ is transcendental.
In the case that $f(z)$ is the Cauchy transform of a compactly supported measure $\mu(x)$,
$f(z) = \int \frac{1}{z-x} d\mu(x)$
Then $P_n(x)$ is an orthogonal polynomial with respect to $\mu(x)$ while
$Q_{n-1}(z) = \int_a^b \frac{P_n(z) - P_n(x)}{z-x} \, d\mu(x)$
Then we approximate $f(z)$ as a rational function
$P_n(z) f(z) - Q_{n-1}(z) = \int_a^b \frac{P_n(x)}{z-x}\, d\mu(x)$
Then there exists a function $r(z)$ such that the pointwise convergence of the Pade approximants is exponential as we move along the diagonal.
$\lim_{n \to \infty} \left| f(z) - \frac{Q_{n-1}(z)}{ P_n(z)}\right|^{1/n} = \frac{1}{r^2}$
ORIGINAL ANSWER
The coefficients of the series $a_1, a_2, a_3, \dots$ are an infinite amount of data. The radius of convergence is a property of this sequence $1/R = \limsup |a_n|^{1/n}$ using the root test.
• The Pade approximation is defined even outside the radius of convergence of the Taylor series
• The Pade $[m/n]_f(x)$ and Taylor approximations $[(m+n)/0]_f(x)$ agree up to order $O(x^{m+n+1})$.
A paper by Hubert S. Wall, dating back to 1929, relates Pade approximants the Stieltjes moment problem and continued fractions.
EDIT: My guess is Pade approximant is not always better than Taylor series.
As a counter example let's find $[0/1]_{z+1}$:
$z + 1 \approx \frac{1}{1-z} \mod z^2$
The Taylor series is exact, while the $[0/1]$ approximant is a different function entirely (it even blows up at $z=1$).
In your example, $\sqrt{\frac{1+\frac{z}{2} }{1+2z}} \to \frac{1}{2}$ for large $z$, and the $[1/1]$ approximant tends to a nearby constant, making it a good global fit. The $[0/2]$ or $[2/0]$ approximants will have different global behavior. I was unable to find any precise comparison, overall.
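To make this concrete for the function in the question, here is a small sympy sketch (my own, with the $[1/1]$ matching conditions written out by hand) comparing the approximant with the function and the truncated Taylor polynomial $T_2$:

```python
import sympy as sp

z = sp.symbols('z')
f = sp.sqrt((1 + z/2) / (1 + 2*z))

T2 = sp.series(f, z, 0, 3).removeO()            # 1 - 3z/4 + 39z^2/32
c0, c1, c2 = [T2.coeff(z, k) for k in range(3)]

# [1/1] Pade (p0 + p1 z)/(1 + q1 z), matched through z^2
q1 = -c2 / c1
p0, p1 = c0, c1 + c0 * q1
pade11 = (p0 + p1 * z) / (1 + q1 * z)

for zz in [0.5, 1, 2, 5]:
    print(zz, float(f.subs(z, zz)), float(pade11.subs(z, zz)), float(T2.subs(z, zz)))
```

Both the function and the $[1/1]$ approximant stay bounded as $z$ grows (the approximant tends to $p_1/q_1$), while $T_2$ blows up; that is the sense in which the rational form captures global behavior that the truncated series cannot.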
Continued fractions are known to be "best approximations" in a certain sense.
A best rational approximation to a real number x is a rational number d/n, d > 0, that is closer to x than any approximation with a smaller denominator.
Wikipedia's example is to approximate
$0.84375 = \cfrac{1}{1 + \cfrac{1}{5 + \cfrac{1}{2 + \cfrac{1}{2}}}} = [0;1,5,2,2]$
We can stop in the middle of this process to get the closest fraction given an upper bound on the denominator.
$0.84375 \approx 1,\frac{5}{6}, \frac{ 11}{13 } , \frac{27}{32}$
The Farey fractions up to denominator 6 are (with $\frac{27}{32}$ inserted in bold to show where it falls):
$$0,\frac{1}{6}, \frac{1}{5},\frac{1}{4},\frac{1}{3},\frac{2}{5},\frac{1}{2},\frac{3}{5},\frac{2}{3},\frac{3}{4},\frac{4}{5},\frac{5}{6},\mathbf{\frac{27}{32}},1$$
Pade approximants are best approximations of functions and can be calculated used a kind of continued fraction.
You get an approximation $\frac{p(x)}{q(x)}$ for given degrees $m = \deg p, n = \deg q$. You are trying to find a polynomial greatest common divisor between your Taylor series and a monomial,
$\gcd(T_{m+n}(x), x^{m+n+1} )$
You can do this via the Euclidean algorithm, doing polynomial long division and taking the remainder at each step:
$\frac{p(x)}{q(x)} \equiv T_{m+n}(x) \mod x^{m+n+1}$
Here is the table for $e^z$ from Wikipedia: (also http://mathoverflow.net/questions/41226/pade-approximant-to-exponential-function)
$\begin{array}{c||c|c|c} & 0 & 1 & 2 \\ \hline \hline 0 & \frac{1}{1} & \frac{1}{1-z} & \frac{1 }{1 - z + \frac{1}{2}z^2 } \\ \hline 1 & \frac{1+z}{1} & \frac{1+ \frac{1}{2} z}{ 1- \frac{1}{2} z} & \frac{1 + \frac{1}{3}z }{ 1 - \frac{2}{3}z + \frac{1}{6}z^2 } \\ \hline 2 & \frac{1 + z + \frac{1}{2}z^2 }{1 } & \frac{ 1 + \frac{2}{3}z + \frac{1}{6}z^2 }{1 - \frac{1}{3}z } & \frac{ 1+ \frac{1}{2} z + \frac{1}{12} z^2}{ 1- \frac{1}{2} z + \frac{1}{12} z^2 } \end{array}$
Let's try to work out the steps for m=2, n=2 (not in Wikipedia). This involves the GCD of the 4th Taylor polynomial $1 + z + \frac{1}{2}z^2 + \frac{1}{6}z^3 + \frac{1}{24}z^4$ and $z^{5}$.
$\frac{1}{24} z^5 = (z-4)\left(1 + z + \frac{1}{2}z^2 + \frac{1}{6}z^3 + \frac{1}{24}z^4\right) + \left(4 + 3z + z^2 + \frac{1}{6}z^3 \right)$
$1 + z + \frac{1}{2}z^2 + \frac{1}{6}z^3 + \frac{1}{24}z^4 = \left( \frac{1}{4}z - \frac{1}{2} \right)\left( 4 + 3z + z^2 + \frac{1}{6}z^3 \right) + \left( 3 + \frac{3}{2}z + \frac{1}{4}z^2 \right)$
$4 + 3z + z^2 + \frac{1}{6}z^3 = \frac{2}{3}z \left( 3 + \frac{3}{2}z + \frac{1}{4}z^2 \right) + (z+4)$
$3 + \frac{3}{2}z + \frac{1}{4}z^2 = \left( \frac{1}{4}z + \frac{1}{2}\right)(z+4) + 1$
Using the 1st two long divisions, we get the (2,2) Pade approximant.
$1 + z + \frac{1}{2}z^2 + \frac{1}{6}z^3 + \frac{1}{24}z^4 \approx \frac{ 3 + \frac{3}{2}z + \frac{1}{4}z^2 }{(z-4) \left( \frac{1}{4}z - \frac{1}{2} \right)+1 } = \frac{ 1+ \frac{1}{2} z + \frac{1}{12} z^2}{ 1- \frac{1}{2} z + \frac{1}{12} z^2 }$
Alternatively compare coefficients of your rational approximation and polynomial $a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 \approx \frac{p_0 + p_1 x + p_2 x^2}{q_0 + q_1 x + q_2 x^2 }$ (normalizing $q_0 = 1$). Then you can solve the system of equations: \begin{eqnarray*} a_0 &=& p_0 \\ a_1 + a_0 q_1 &=& p_1 \\ a_2 + a_1 q_1 + a_0 q_2 &=& p_2 \\ a_3 + a_2 q_1 + a_1 q_2 &=& 0 \\ a_4 + a_3 q_1 + a_2 q_2 &=& 0 \end{eqnarray*}
Cramer rule gives you the correct fraction at the end:
$\frac{ \left|\begin{array}{ccc} a_1 & a_2 & a_3 \\ a_2 & a_3 & a_4 \\ a_0 x^2 & a_0 x + a_1 x^2 & a_0 + a_1 x + a_2 x^2 \end{array} \right|} { \left|\begin{array}{ccc} a_1 & a_2 & a_3 \\ a_2 & a_3 & a_4 \\ x^2 & x & 1 \end{array} \right|} = \frac{ \left|\begin{array}{ccc} 1 & 1/2 & 1/6 \\ 1/2 & 1/6 & 1/24 \\ x^2 & x + x^2 & 1 + x + \frac{1}{2} x^2 \end{array} \right|} { \left|\begin{array}{ccc} 1 & 1/2 & 1/6 \\ 1/2 & 1/6 & 1/24 \\ x^2 & x & 1 \end{array} \right|}$
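As a cross-check of the $[2/2]$ entry in the table, the linear system above (with $q_0=1$) can be solved numerically; this numpy sketch is my own addition, not part of the original answer.

```python
import numpy as np
from math import factorial

a = [1 / factorial(k) for k in range(5)]      # Taylor coefficients of exp(z)

# The k = 3, 4 equations give a 2x2 system for (q1, q2), with q0 = 1:
M = np.array([[a[2], a[1]],
              [a[3], a[2]]])
q1, q2 = np.linalg.solve(M, -np.array([a[3], a[4]]))

# The k = 0, 1, 2 equations then give the numerator:
p0 = a[0]
p1 = a[1] + a[0] * q1
p2 = a[2] + a[1] * q1 + a[0] * q2
print(p0, p1, p2, q1, q2)   # 1, 1/2, 1/12, -1/2, 1/12
```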
-
But doesn't the Pade approximant use only a finite section of the series? – Felix Goldberg Feb 21 at 15:34
4
I don’t see how this answers the question. My reading of it is as follows: since the degree $m/n$ Padé approximant is uniquely determined by $T_{m+n}$, how can it possibly give a better approximation of the original function than $T_{m+n}$? And as far as I can see, the answer is that in general it does not. For example, if the function is a degree $m+n$ polynomial and $n>0$, the truncated Taylor series is exact, whereas the Padé approximant is not. – Emil Jeřábek Feb 21 at 16:35
I am explaining the "unreasonable effectiveness" of the Pade approximation by comparing it to continued fractions. – John Mangual Feb 21 at 18:20
1
John, the point is that if $f$ is a degree $m+n$ polynomial, then $T_{m+n}=f$, whereas $[m/n]_f$ is a different function (if it were a polynomial, it would have degree $m-n< m+n$). Thus, any way of comparing approximations with the property that a function is better approximated by itself than by any other function will do. I guess what I am trying to point out is that the actual meaning of the vague slogan “Padé approximations are best approximations” is different than what I think the OP thinks. The meaning is that $[m/n]_f$ is the best approximation (whatever that means) among rational ... – Emil Jeřábek Feb 21 at 21:05
3
... functions with degree $m$ and $n$ polynomials in the numerator and denominator (respectively). In contrast, $T_{m+n}$ is the best approximation by a degree $m+n$ polynomial. There is no general reason why one should be better than the other, it all depends on $f$, $m$, $n$, and the chosen method of comparing approximations. – Emil Jeřábek Feb 21 at 21:09
http://mathhelpforum.com/advanced-applied-math/96332-proof-problem.html
# Thread:
1. ## Proof problem
Hi everyone, I have trouble proving this problem:
the question is:
Show that if a zero-sum game has a (pure) Nash equilibrium $(i^*, j^*)$, then the von Neumann value of the game is $a_{i^*j^*}$.
Hint. Show that $\max_P \min_j E_j(P) \ge a_{i^*j^*}$ and that $\min_Q \max_i F_i(Q) \le a_{i^*j^*}$, and then use the Minimax Theorem.
Now I understand that a pure Nash equilibrium means:
If the first player plays strategy i and the second player plays strategy j, then (i, j) is a Nash equilibrium if every other entry in the same column is less than the entry at (i, j), and every other entry in the same row is bigger than it; in other words, neither player can improve their outcome by changing strategy.
The minimax theorem determines the von Neumann value, which is basically the value of the game. The theorem says: for the payoff matrix A of a zero-sum game, there exist optimal mixed strategies P and Q for the row and column player respectively, such that
$\max_P \min_j E_j(P) = \min_Q \max_i F_i(Q)$ for $0 \le j \le n$ and $0 \le i \le m$, and P and Q form a mixed Nash equilibrium.
$E_j(P) = \sum_{i=0}^{m} p_i\, a_{ij}$
$F_i(Q) = \sum_{j=0}^{n} q_j\, a_{ij}$
I don't know how to show that $\max_P \min_j E_j(P) \ge a_{i^*j^*}$. I know that I can pick any $u$ with $u \le E_j(P)$, but I can't see how that would help. Should I take another approach?
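A sketch of the hint, in case it helps (my own write-up, using the notation above): since $(i^*,j^*)$ is a pure Nash equilibrium, $a_{ij^*} \le a_{i^*j^*} \le a_{i^*j}$ for all $i$ and $j$. Taking $P$ to be the pure strategy concentrated on row $i^*$ gives $\min_j E_j(P) = \min_j a_{i^*j} = a_{i^*j^*}$, hence $\max_P \min_j E_j(P) \ge a_{i^*j^*}$. Symmetrically, taking $Q$ concentrated on column $j^*$ gives $\max_i F_i(Q) = \max_i a_{ij^*} = a_{i^*j^*}$, hence $\min_Q \max_i F_i(Q) \le a_{i^*j^*}$. The minimax theorem says the two quantities are equal, so both equal $a_{i^*j^*}$, which is therefore the von Neumann value.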
http://mathhelpforum.com/calculus/37714-double-integration-limits.html
# Thread:
1. ## Double integration: limits
I am given the following probability density function:
$f (x,y) = \frac{2x + 2 - y}{4}$ for 0<x<1 and 0<y<2
$f (x,y) = 0$ otherwise
I am to find P[X + Y > 1].
My attempts at the double integration failed.
The answer begins:
$\int_0^1 \int_{1-x}^2 \frac{2x + 2 - y}{4} dydx$
How did they pick which integral was inner?
How did they choose 1-x as the lower limit on the inner integral?
2. Here is a graph of the region
They solved x+y=1 for y to get y=1-x
3. Originally Posted by Boris B
I am given the following probability density function:
$f (x,y) = \frac{2x + 2 - y}{4}$ for 0<x<1 and 0<y<2
$f (x,y) = 0$ otherwise
I am to find P[X + Y > 1].
My attempts at the double integration failed.
The answer begins:
$\int_0^1 \int_{1-x}^2 \frac{2x + 2 - y}{4} dydx$
How did they pick which integral was inner?
How did they choose 1-x as the lower limit on the inner integral?
Draw the line x + y = 1 inside the rectangle defined by 0 < x < 1 and 0 < y < 2. This line divides the rectangle into two regions: The region above the line (upper region) and the region below the line (lower region). Note that in the upper region x + y > 1. So you need to set up a double integral of f(x, y) over the upper region .......
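If it helps, the resulting integral can be checked symbolically; the numerical value, $17/24$, is my own computation rather than something stated in the thread.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = (2*x + 2 - y) / 4

print(sp.integrate(f, (y, 0, 2), (x, 0, 1)))      # 1: the density integrates to 1
print(sp.integrate(f, (y, 1 - x, 2), (x, 0, 1)))  # 17/24 = P[X + Y > 1]
```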
http://mathoverflow.net/questions/81842?sort=oldest
## Heat equation bounds
I am interested in the following damped heat equation on $\mathbf{R}$, $u_t = u_{xx} - 1_{x \in [-1,1]} u$ with initial data $u(0,x) = \delta(x-x_0)$ for some $x_0 \in \mathbf{R}$.
In particular I am interested in obtaining non-trivial bounds on $u(t,0)$. Of course the heat kernel gives a trivial bound on $u(t,0)$ but I am struggling to obtain anything stronger.
Perhaps the equation has a closed form solution from which it is easy to read such information off?
Added later: Of course appropriate growth conditions at infinity are assumed to ensure a unique solution.
Correction: The indicator function is a function of the $x$ variable only.
-
Is the coefficient on $u$ the indicator function on the interval $[-1,1]$? There's some formatting oddity there, I think. – Christopher A. Wong Nov 25 2011 at 4:00
1
is that $1|_{[-1,1]}$ as a function of $t$ or as a function of $x$? – Pietro Majer Nov 25 2011 at 10:27
Am I right to rephrase this probabilistically as Brownian motion killed at rate 1 when inside $[-1,1]$? Also, are you mostly interested in bounding when $x_0$ is fixed and $t\to \infty$? – Ori Gurel-Gurevich Nov 27 2011 at 6:00
@Ori: Yes, that seems right. As the question concerns the bound at $x=0$ then it reduces to computing the Laplace transform of the time that a Brownian bridge spends in [-1,1]. – George Lowther Nov 27 2011 at 22:47
There are explicit formulas for one-sided intervals such as $(-\infty,1]$, I'm not sure about bounded intervals though. – George Lowther Nov 27 2011 at 22:55
## 1 Answer
Let us first assume that the damping coefficient $1_{[-1,1]}$ is replaced by a function of $t$ alone; here I consider a generic function $f(t)$. Let us consider the Dirichlet problem on a bounded domain $D$
$$\Delta\phi_n+\lambda_n\phi_n=0 \qquad \phi=0\ on\ \partial D$$
You can write down the exact solution to your equation as
$$u(t,x,y)=\sum_n a_n(t)\phi_n(x)\phi_n(y)$$
so that $u(0,x,y)=\delta(x-y)$ implies $a_n(0)=1$. By a direct substitution you get the equations to be solved
$$\dot a_n+\lambda_na_n(t)+f(t)a_n(t)=0$$
that admits the solution
$$a_n(t)=e^{-\lambda_n t-\int_0^t dt'f(t')}.$$
In this way you should be able to get a better bound on the solution.
Now, let us turn to the actual case where $1_{[-1,1]}$ is a function of $x$. The problem reduces to that of a Schroedinger equation with a rectangular potential barrier. Let us search for eigenfunctions of the problem
$$\partial^2\phi_E(x)-1_{[-1,1]}\phi_E(x)=-E\phi_E(x)$$
We expect a continuous spectrum in this case and will get
$$\phi^L_E(x)=A_1e^{ik_0x}+A_2e^{-ik_0x}\qquad x<-1$$ $$\phi^C_E(x)=B_1e^{ik_1x}+B_2e^{-ik_1x}\qquad x\in [-1,1]$$ $$\phi^R_E(x)=C_1e^{ik_0x}+C_2e^{-ik_0x}\qquad x>1$$
where $k_1=\sqrt{E-1}$ for $x\in [-1,1]$ and $k_0=\sqrt{E}$ otherwise. You now impose continuity of the eigenfunctions and their derivatives to get the coefficients. The final solution will take the integral form
$$u(t,x,x_0)=\int_C dEe^{-Et}\phi_E(x)\phi_E(x_0)$$
with a properly chosen contour $C$. Please note also that $1_{[-1,1]}=\theta(x+1)-\theta(x-1)$, where $\theta(x)$ is the Heaviside function.
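Not part of the answer, but a crude way to see the effect numerically is an explicit finite-difference run on a truncated domain, with the delta initial condition approximated on the grid; the damped value $u(t,0)$ can then be compared with the free heat kernel. A minimal sketch (explicit Euler, so the time step must satisfy $dt \le dx^2/2$):

```python
import numpy as np

L, N = 20.0, 2001
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
dt = 0.4 * dx**2
x0, T = 3.0, 2.0

u = np.zeros(N)
u[np.argmin(np.abs(x - x0))] = 1.0 / dx            # discrete delta at x0
damp = ((x >= -1) & (x <= 1)).astype(float)        # indicator of [-1, 1]

steps = int(T / dt)
for _ in range(steps):
    lap = np.zeros(N)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u += dt * (lap - damp * u)

t = steps * dt
free = np.exp(-x0**2 / (4 * t)) / np.sqrt(4 * np.pi * t)   # undamped kernel at x = 0
print(u[np.argmin(np.abs(x))], free)               # damped value comes out smaller
```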
-
I apologize for the confusion but I intended the indicator function to be a function of $x$. The original post has been updated to reflect this. Of course, as you pointed out, if it is a function of $t$ alone then one can solve via an integrating factor. – Matt Cooper Nov 27 2011 at 4:04
http://liuxiaochuan.wordpress.com/2009/02/11/%E5%80%A1%E8%AE%AE/
# Xiaochuan Liu's Weblog
mathematics and other aspects of life
## A Proposal
(See my earlier posts: Experience One, Two, Three; To You Who Study Mathematics; Where Do Chinese Students Fall Short; Oh, University; and so on)
Tao's experience in mathematical research and his advice for young people
Let X be a topological space. Show that X is compact if and only if every net has a convergent subnet. (Hint: equate both properties of X with the finite intersection property, and review the proof of Theorem 1). Similarly, show that a subset E of X is relatively compact if and only if every net in E has a subnet that converges in X. (Note that as not every compact space is sequentially compact, this exercise shows that we cannot enforce injectivity of $\phi$ in the definition of a subnet.)
Proof: We only prove the first conclusion.
"$\Rightarrow$": Suppose for contradiction that the net $\{x_\alpha\}_{\alpha\in A}$ has no convergent subnet. Then an easy argument shows that for any $x_0\in X$ there exist an open neighborhood $V_{x_0}$ of $x_0$ and some $\alpha\in A$ such that for all $\beta \geq \alpha$, $x_\beta\notin V_{x_0}$. All the $V_x$ form an open cover of the space X, so finitely many of them already cover X; for each of these there is a corresponding $\alpha_x$. Choosing $\alpha^*$ larger than each of these $\alpha_x$, we reach a contradiction: for $\beta \geq \alpha^*$ the point $x_\beta$ lies in none of the sets of this finite cover.
"$\Leftarrow$": Suppose for contradiction that $(F_\alpha)_{\alpha \in A}$ is a collection of closed subsets of X such that any finite subcollection has non-empty intersection, but the intersection of the entire collection is empty. For each finite subcollection choose an element $x_{\alpha_1,\cdots,\alpha_k}\in \cap_{i=1}^k F_{\alpha_i}$. Define $(\alpha_1,\cdots,\alpha_k)\leq (\alpha_1',\cdots,\alpha_l')$ iff $\cap_{j=1}^l F_{\alpha_j'}\subseteq \cap_{i=1}^k F_{\alpha_i}$; this makes the chosen points into a net. Observe that this net has no convergent subnet, for otherwise there would be some x lying in all the $F_\alpha$, a contradiction.
The original organizers of the Bourbaki school were themselves a group of young people; as for what I mean by bringing them up, I think the reader understands better than I do.
### 6 Comments on "A Proposal"
1. percy li Says:
2009/02/21 at 1:21 pm
I am a Chinese exchange student in Washington, D.C. I go to Terry's blog a lot, but the math there was too abstract for me. I once mentioned one of the typos in his book "Solving Mathematical Problems" and he really replied to me. He was a great teacher. I would love to join the group if I were 5 years older…
2. liuxiaochuan Says:
2009/02/21 at 6:15 pm
Dear percy li:
The way that Professor Tao helps maths learners all over the world is, in my opinion, something like what Euler did centuries ago. But Chinese students seem less engaged.
As to your questions, I suggest you just try learning from the posts in which you are most interested. From my own experience, though it is quite difficult from the beginning, you learn much more during the process. Also, you will be more confident after some time. Anyway, you can e-mail me at any time.
3. percy li Says:
2009/02/23 at 8:06 am
I am just in high school. I take calculus at our high school and I am reading his book on analysis (an undergraduate course). His book is far easier to read than the books I brought in China, but actually I really learned a lot.
4. purplelightning Says:
2009/04/19 at 8:56 am
I will try to go further in math study though I am still a freshman in college. And I hope I will have the opportunity to join the team years later.
5. nouoo Says:
2009/04/19 at 3:22 pm
I am a student teaching myself mathematics; my undergraduate degree is in physics. Although my foundations are not strong, I hope I can join you.
6. le Xi Says:
2010/03/14 at 6:29 am
I also want to join you to improve mathematics.
http://crypto.stackexchange.com/tags/one-way-function/hot
# Tag Info
## Hot answers tagged one-way-function
9
### Are there hash algorithms with variable length output?
Sure. If you want a $b$-bit hash of the message $m$, then use the first $b$ bits of AES-CTR(SHA256($m$)). That'll do the trick. In other words, compute SHA256($m$) and treat the resulting 256-bit string as a 256-bit AES key. Next, use AES in counter mode (with this key) to generate an unending stream of pseudorandom bits. Take the first $b$ bits from ...
8
### What is a hard-core predicate?
In the Lecture Notes on Cryptography of Goldwasser and Bellare, one can find (page 35) the following text: Recall that f(x) does not necessarily hide everything about x even if f is a one-way function. E.g. if f is the RSA function then it preserves the Jacobi symbol of x, and if f is the discrete logarithm function EXP then it is easy to compute the ...
7
### Are there hash algorithms with variable length output?
In general, each combination of a (secure) hash function for input with a (deterministic) pseudo random number generator for output will work here - one "state of the art" example is the one given by D.W. (using AES-CTR as PRNG and SHA-256 as hash). Another way is similar to what PBKDF-2 does to have output with the right length: hash the input (or a hash ...
7
### Is there a hash algorithm that is slow to calculate but relatively fast to check?
It sounds like you're looking for a proof-of-work system. One way to implement such a system would be, given a message $m$, to ask for a suffix $s$ such that the hash $H(m \operatorname{\|} s)$, where $H$ is some standard cryptographic hash function and $\|$ denotes concatenation, begins with a specific prefix (e.g. $n$ zero bits). Of course, the execution ...
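A toy version of such a proof-of-work scheme, just to illustrate the asymmetry (this sketch is mine, not from the answer):

```python
import hashlib
from itertools import count

def proof_of_work(message: bytes, difficulty_bits: int) -> bytes:
    """Find a suffix s so that SHA-256(message || s) starts with
    difficulty_bits zero bits: slow to find, one hash call to verify."""
    target = 1 << (256 - difficulty_bits)
    for n in count():
        s = str(n).encode()
        if int.from_bytes(hashlib.sha256(message + s).digest(), "big") < target:
            return s

s = proof_of_work(b"hello", 20)                     # roughly 2**20 hashes to find
print(s, hashlib.sha256(b"hello" + s).hexdigest())  # digest starts with ~5 hex zeros
```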
6
### Is there a hash algorithm that is slow to calculate but relatively fast to check?
The wide class of NP problems meets your general question, almost to the exact definition. The summary is that (we conjecture that) there are problems that cannot be solved in as little as polynomial time but have solutions that can be verified to be correct in just polynomial time, so solving for the answer takes much longer than verifying the answer is ...
5
### Are there hash algorithms with variable length output?
As D.W. notes, you can use the output of any conventional hash function to key a stream cipher (or a block cipher in a streaming mode like CTR), and then take the output of the cipher as your digest. However, there has been a trend in modern hash function design to support arbitrary-length output directly, without the need for additional layers. For ...
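One concrete modern example of built-in variable-length output is the SHA-3 family's SHAKE extendable-output functions; a stdlib sketch (note that shorter digests are prefixes of longer ones here, which may or may not be what you want):

```python
import hashlib

msg = b"the quick brown fox"
print(hashlib.shake_256(msg).hexdigest(16))   # 16-byte digest
print(hashlib.shake_256(msg).hexdigest(64))   # 64-byte digest
```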
5
### What other one-way functions are used in cryptosystems?
Many lattice schemes are based on the shortest vector problem and it's variants. Elliptic curve crypto systems are based on something akin to discrete logarithms but it is different in its details. Some authentication schemes like HB are based on learning parity with noise and systems are based on the more general learning with errors. Subset sum was ...
5
### What other one-way functions are used in cryptosystems?
The Merkle–Hellman knapsack cryptosystem was based on a variation of the subset sum problem. (It was broken by Adi Shamir a few years after it was developed.) Given a set of numbers $A$ and a number b, find a subset of $A$, which sums to b. The cryptosystem relies on the fact that in this form of the subset sum problem if the set $A$ is ...
3
### Lamport signature: How many signatures are need to forge a signature?
Each additional signature halves the security level. A security level of about 64 bits can be broken by a determined attacker, and a level of 32 bits can be trivially broken on a single home computer. So if you use 256 pairs, which is a reasonable level, since it offers 256 bit security against second-preimage attacks, and 128 bits against collisions, ...
3
### Can one build a one-way function from AES?
As Mike asked, it's not clear if you're asking about onewayness, or collision resistance (as you call the function a 'cryptographic compression function'). Assuming you're asking about onewayness, well, given a single 128 bit value $h(M)$, we obviously cannot uniquely deduce the 1408 bit value $M$. However (hint), let us assume that we can ask for the ...
3
### What other one-way functions are used in cryptosystems?
I will just like to contribute in light of what has been told above. There are few cryptosystems (just signature schemes as far as my knowledge goes) that are based on the hardness of solving a system of multi-variate polynomial. Solving a system of multi-variate polynomial is proved to be $\mathsf{NP}$-hard and just like the "hard" problems on lattices, ...
2
### Is the AES Key Schedule weak?
Now, the AES Key Schedule may be weak (it certainly does appear to be the weakest part of AES), however I don't believe invertability is really the problem. If someone has a way to get some information on the last-round subkey for an N round cipher, well, he has some information on the original key if the key schedule is invertible; if he determined $k$ ...
2
### What is a hard-core predicate?
I'm not sure how much simpler than Wikipedia one can explain this. Assume you have a function $f$ which gets some input $x$, and produces from it some output $y = f(x)$. As one example, consider the function $f(x) = x·x$. Assume that $x$ is a non-zero real (or rational, or integer) number. Then $y = f(x) = x·x$ is still a non-zero number, and ...
2
### Hash collision resistance requirements for Lamport signatures
Yes, it makes sense to truncate the hash to 128 bits. The security proof actually says that if finding a preimage for F requires effort 2^n, then breaking the Lamport signature scheme with G having k-bit digests requires effort (2^n)/(2k). So strictly speaking, with F truncated to 128 bits and G having 256 bits (2k=512=2^9), you will have 128-9=119 bits of ...
1
### Weak One Way Function
Well, if "weak one way" means that you shouldn't be able to consistently find preimages, that is, inputs that generate a specific output, and $mult(a, b)$ is defined as the integer multiplication $a \times b$, then $mult$ would not meet that definition. For an arbitrary output $C$, we can set $a = C$ and $b = 1$; hence $mult(a, b) = C$.
1
### Are there hash algorithms with variable length output?
Yes, there are hash-like algorithms that are able to produce variable-length outputs without any extra efforts. This is something "sponge functions" do. One such sponge construction is KeccaK which is one of five finalists in the SHA-3 competition.
1
### Are there hash algorithms with variable length output?
I'm curious about your purpose. Generally the primary operation involving a message digest is ultimately to compare two digest values. Hashing passwords allows comparing the digest values instead of carrying the super secret password around the systems. Hashing messages allows the transmitter and sender to verify the data was correctly received without ...
1
### Lamport signature: How many signatures are need to forge a signature?
What do you mean by forge? If you are asking about (the common) existential forgery, then two message, signature pairs are enough, given that the messages differ in at least two bits. As an example consider that you have the signatures for $m_1 = 1111$ and $m_2 = 1100$. Considering the preimages you now have, you can forge signatures for $m_3=1101$ and ...
http://math.stackexchange.com/questions/41628/examples-of-faithfully-flat-modules/42499
# examples of faithfully flat modules
I'm studying some results about flatness and faithful flatness and I'd like to keep in my mind some examples about faithfully flat modules. In general, free modules are the typical example. Another (unusual) example of faithfully flat module is the "Zariski Covering" (Let $R$ be a ring and let $R_{f_i}$ be a localization $\forall i$. Then $S:=\bigoplus_{i=1}^n R_{f_i}$ is called Zariski covering).
Do you have any other example?
-
4
Your definition of Zariski covering is missing the important condition that the $f_i$ generate the unit ideal. Regards, – Matt E May 27 '11 at 12:17
1
Indeed, if the $f_i$ do not generate the unit ideal, there exists a maximal ideal $m$ containing all the $f_i$. In this case, the extension of $m$ to $S$ is all of $S$ and $S$ cannot be a faithfully flat $R$-module. – Amitesh Datta Jun 1 '11 at 2:53
## 3 Answers
Formal properties
The tensor product of two faithfully flat modules is faithfully flat.
If $M$ is a faithfully flat module over the faithfully flat $A$-algebra $B$, then $M$ is faithfully flat over $A$ too.
An arbitrary direct sum of flat modules is faithfully flat as soon as at least one summand is.
(But the converse is false: see caveat below )
Algebras
An $A$-algebra $B$ is faithfully flat if and only if it is flat and every prime ideal of $A$ is contracted from $B$, i.e. $Spec (B) \to Spec(A)$ is surjective.
If $A\to B$ is a local morphism between local rings, then $B$ is flat over $A$ iff it is faithfully flat over $A$.
Caveat fidelis flatificator
a) Projective modules are flat, but needn't be faithfully flat. For example $A=\mathbb Z/6=(2)\oplus (3)$ shows that the ideal $(2)\subset A$ is projective, but is not faithfully flat because $(2)\otimes_A \mathbb Z/2=0$
b) A ring of fractions $S^{-1}A$ is always flat over $A$ and never faithfully flat [unless you only invert invertible elements, in which case $S^{-1}A=A$].
c) The $\mathbb Z$-module $\oplus_{{{\frak p}}\in Spec \mathbb Z} \mathbb Z_{{\frak p}}$ is faithfully flat over $\mathbb Z$ . All summands are flat, however none is faithfully flat.
-
Thanks everyone. – user11428 May 31 '11 at 9:36
The fact, "$M$ flat over $A$ implies that the base change $M \otimes_A B$ is flat over $B$" would fit into the "Formal properties" section nicely, I think. – Dylan Moreland Jun 20 '12 at 19:18
If $A$ is a noetherian ring, $I\subseteq A$ is an ideal, and $\widehat{A}$ is the $I$-adic completion of $A$, if $I$ is contained in the Jacobson radical of $A$ (for example, if $A$ is local) then $A\to \widehat{A}$ is faithfully flat.
-
Let $f:A\to B$ be a flat ring homomorphism. If $q$ is a prime ideal of $B$, then $p=f^{-1}(q)$ is a prime ideal of $A$. Furthermore, we have an induced homomorphism of rings $\overline{f}:A_p\to B_q$ such that the composition $A\to B\to B_q$ (the first arrow is $f$ and the second arrow is the localization homomorphism $B\to B_q$) is equal to $A\to A_p\to B_q$ (the first arrow is the localization homomorphism $A\to A_p$ and the second arrow is $\overline{f})$. (Of course, this follows from the universal property of localization; every element of $A$ not in $p$ is mapped to an element of $B$ not in $q$.)
In this situation, $\overline{f}:A_p\to B_q$ is a faithfully flat ring homomorphism. (Proof: We use the transitivity of flatness. In particular, $B_q$ is a local ring of $B_p$ and is therefore flat over $B_p$ and $B_p$ is flat over $A_p$ since $B_p\cong B\otimes_{A} A_p$ and flatness is preserved under base change. Faithful flatness follows since $pA_p$, the unique maximal ideal of $A_p$, is mapped into $qB_q$, a proper (in fact, maximal) ideal of $B_q$, under $\overline{f}$.)
In particular, the induced map on spectra $\overline{f}^{*}:\text{Spec}(B_q)\to \text{Spec}(A_p)$ is surjective.
-
http://www.physicsforums.com/showthread.php?p=2877517
Physics Forums
## Solving for a Surjective Matrix
I saw this in a book as a Proposition but I think it's an error:
Assume that the (n-by-k) matrix, $$A$$, is surjective as a mapping,
$$A:\mathbb{R}^{k}\rightarrow \mathbb{R}^{n}$$.
For any $$y \in \mathbb{R}^{n}$$, consider the optimization problem
$$\min_{x \in \mathbb{R}^{k}} \|x\|^2$$
such that $$Ax = y$$.
Then, the following hold:
(i) The transpose of $$A$$, call it $$A^{T}$$ is injective.
(ii) The matrix $$A^{T}A$$ is invertible.
(iii) etc etc etc....
I have a problem with point (ii), take as an example the (2-by-3) surjective matrix
$$A = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \end{pmatrix}$$
$$A^{T}A$$ in this case is not invertible.
Can anyone confirm that part (ii) of this Proposition is indeed incorrect ?
I would agree with you that part (ii) of the proposition is incorrect (unless matrices are not acting on vectors from the left). If you look at bijection part http://en.wikipedia.org/wiki/Bijecti...and_surjection it reads: "If g o f is a bijection, then it can only be concluded that f is injective and g is surjective." Working right to left with matrices and composition of functions says if A^{T}A was invertible (i.e. a bijection) then A would be injective and A^{T} would be surjective. Thus something is wrong! P.S. I didn't see the bit where it clearly said the matrices were acting from the left so I would say that it is definitely wrong.
Quote by lauratyso11n Can anyone confirm that part (ii) of this Proposition is indeed incorrect ?
Yes, you are right. (i) is true but (ii) is false. But (ii) is true if A^TA is replaced by AA^T, so maybe it's a typo? I don't understand what "the optimization problem" has to do with this, or are the other parts of the proposition about this?
## Solving for a Surjective Matrix
Quote by Landau Yes, you are right. (i) is true but (ii) is false. But (ii) is true if A^TA is replaced by AA^T, so maybe it's a typo? I don't understand what "the optimization problem" has to do with this, or are the other parts of the proposition about this?
The full Proposition is as follows:
Assume that the (n-by-k) matrix, $$A$$, is surjective as a mapping,
$$A:\mathbb{R}^{k}\rightarrow \mathbb{R}^{n}$$.
For any $$y \in \mathbb{R}^{n}$$, consider the optimization problem
$$\min_{x \in \mathbb{R}^{k}} \|x\|^2$$
such that $$Ax = y$$.
Then, the following hold:
(i) The transpose of $$A$$, call it $$A^{T}$$ is injective.
(ii) The matrix $$A^{T}A$$ is invertible.
(iii) The unique optimal solution of the minimum norm problem is given by
$$(A^TA)^{-1}A^Ty$$
I have a problem with point (ii), take as an example the (2-by-3) surjective matrix
$$A = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \end{pmatrix}$$
$$A^{T}A$$ in this case is not invertible.
Quote by Landau Yes, you are right. (i) is true but (ii) is false. But (ii) is true if A^TA is replaced by AA^T, so maybe it's a typo?
I don't think it's a typo as he uses this result to carry some other analysis further. The crux of it is:
$$\sigma\lambda = \alpha - \bf{r}$$
where $$\sigma \in \mathbb{R}^{n\times k}$$ is surjective, and $$\lambda \in \mathbb{R}^{k}$$, $$\alpha , \bf{r} \in \mathbb{R}^{n}$$.
How would you solve for $$\lambda$$? Isn't it critical that the 'typo' has to be correct to be able to solve for this?
The author's solution as you might expect is $$\lambda = \left(\sigma^{T}\sigma\right)^{-1}\sigma^{T}\left[\alpha - \bf{r}\right]$$.
So it's either a HUGE mistake on his part or I'm missing something. The author is actually quite insightful, and this error would be quite out of character for him.
BTW, thanks for making the effort to look at the problem. Much appreciated.
Looking at dimension counting with your example, AtA is a 3x3 matrix, which means (AtA)-1 A wouldn't make sense even if the inverse was defined because the sizes of the matrices don't match up. On the other hand A At and A are compatible matrices which suggests that he just put the transpose on the wrong one and carried the error through.
Quote by lauratyso11n The author's solution as you might expect is $$\lambda = \left(\sigma^{T}\sigma\right)^{-1}\sigma^{T}[\alpha - \bf{r}]$$.
This is correct provided that $\sigma^{T}\sigma$ is invertible, i.e. provided that $\left(\sigma^{T}\sigma\right)^{-1}$ makes sense.
The least square solution of Ax=y satisfies the normal equation:
$$A^TAx=A^Ty.$$
This solution is unique if and only if $A^TA$ is invertible. In this case, it is given by
$$x=(A^TA)^{-1}A^Ty$$, just like the author asserts.
But $A^TA$ is not necessarily invertible, contrary to the proposition. Note that $A^TA:\mathbb{R}^k\to\mathbb{R}^k$ is bijective if and only if its rank equals k. But since $A^TA$ and A always have equal rank, this happens if and only if A has rank k. Since $A:\mathbb{R}^k\to\mathbb{R}^n$ is assumed to be surjective, it has rank n and we must have $k\geq n$. So in fact, $A^TA$ is invertible if and only if k=n! Indeed, in your counter-example k and n are not equal.
Quote by Office_Shredder which means (AtA)-1 A wouldn't make sense even if the inverse was defined because the sizes of the matrices don't match up.
This is not the expression the author uses; it is (AtA)-1 At.
Perhaps you could post a link to the book?
Quote by Landau This is not the expression the author uses; it is (AtA)-1 At.
I assumed the * was just referring to matrix multiplication
I assumed * means adjoint, so (in this real case) the transpose. This is also what lauratyso11n writes in post #5 (where A is called sigma).
Quote by Office_Shredder I assumed the * was just referring to matrix multiplication
Quote by Landau I assumed * means adjoint, so (in this real case) the transpose. This is also what lauratyso11n writes in post #5 (where A is called sigma).
SORRYYYYYY, the $$A^*$$ is actually an $$A^T$$. I've corrected it.
The correct statement is that if $$A$$ is surjective then $$A^T$$ is injective and $$AA^T$$ is invertible. The formula for the optimal $$x$$ is $$\hat{x}=A^T(AA^T)^{-1}y$$
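A quick numpy illustration of the corrected statement, using the $2\times 3$ example from earlier in the thread (this sketch is my own addition):

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 1., 0.]])                 # 2x3, surjective onto R^2
y = np.array([3., 5.])

print(np.linalg.matrix_rank(A.T @ A))        # 2 < 3: A^T A is singular
print(np.linalg.det(A @ A.T))                # 1.0:   A A^T is invertible

x_hat = A.T @ np.linalg.solve(A @ A.T, y)    # minimum-norm solution of A x = y
print(x_hat)                                 # [3. 5. 0.]
print(np.allclose(x_hat, np.linalg.pinv(A) @ y))   # the pseudoinverse agrees: True
```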
Quote by lauratyso11n The correct statement is that if $$A$$ is surjective then $$A^T$$ is injective and $$AA^T$$ is invertible.
So it was a typo :)
Quote by Landau Yes, you are right. (i) is true but (ii) is false. But (ii) is true if A^TA is replaced by AA^T, so maybe it's a typo?
http://mathhelpforum.com/calculus/94371-maximum-area-triangle-print.html

# Maximum area of a Triangle
• July 4th 2009, 07:08 AM
JoAdams5000
Maximum area of a Triangle
Suppose a triangle has perimeter 2. Let x and y be the lengths of two of its sides and A its area. Show that A^2 = (x + y − 1)(1 − x)(1 − y). Use this formula to find the maximum area of a triangle with perimeter 2.
Hint: You will need to solve a maximization problem over a closed, bounded set D. To determine D, use the triangle inequality.
Kind of lost with where to begin. Any help and/or suggestions would be great! Thanks
• July 4th 2009, 07:20 AM
malaygoel
Quote:
Originally Posted by JoAdams5000
Suppose a triangle has perimeter 2. Let x and y be the lengths of two of its sides and A its area. Show that A^2 = (x + y − 1)(1 − x)(1 − y). Use this formula to find the maximum area of a triangle with perimeter 2.
Hint: You will need to solve a maximization problem over a closed, bounded set D. To determine D, use the triangle inequality.
Kind of lost with where to begin. Any help and/or suggestions would be great! Thanks
Use Heron's formula to determine $A^2$
• July 4th 2009, 07:22 AM
malaygoel
Quote:
Originally Posted by JoAdams5000
Suppose a triangle has perimeter 2. Let x and y be the lengths of two of its sides and A its area. Show that A^2 = (x + y − 1)(1 − x)(1 − y). Use this formula to find the maximum area of a triangle with perimeter 2.
Hint: You will need to solve a maximization problem over a closed, bounded set D. To determine D, use the triangle inequality.
Kind of lost with where to begin. Any help and/or suggestions would be great! Thanks
and to find the maximum value of $A^2$, you can use the AM-GM inequality.
• July 4th 2009, 10:01 PM
simplependulum
Use Heron's formula :
$s = 2/2 = 1$ and the other side is
$2 - x - y$
Therefore ,
$A = \sqrt{(s)(s-x)(s-y)[s-(2-x-y)]}$
$A^2 = (1)(1-x)(1-y)(1-2+x+y) = (1-x)(1-y)(x+y-1)$
Consider
$\frac{ (1-x) + (1-y) + (x+y-1) }{3} \geq [(1-x)(1-y)(x+y-1)]^{\frac{1}{3}}$
Since the numerator on the left is $(1-x)+(1-y)+(x+y-1)=1$, this gives
$(\frac{1}{3})^3 \geq A^2$
$A \leq \sqrt{\frac{1}{27}} = \frac{ \sqrt{3} }{9}$
so the maximum value of A is $\frac{ \sqrt{3} }{9}$
The equality holds when
$(1-x) = (1-y) = (x+y-1)$
it gives $x=y= \frac{2}{3}$ and the other side is also $\frac{2}{3}$ . It is an equilateral $\Delta$ .
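A quick brute-force sanity check of this maximum (a sketch in Python; by the triangle inequality the feasible region is x + y > 1 with x < 1 and y < 1, and points outside it simply give a non-positive value of A^2 here):

```python
import numpy as np

best = (0.0, None)
for x in np.linspace(0.01, 0.99, 400):
    for y in np.linspace(0.01, 0.99, 400):
        A2 = (x + y - 1) * (1 - x) * (1 - y)
        if A2 > best[0]:
            best = (A2, (round(x, 3), round(y, 3)))

print(np.sqrt(best[0]), best[1])   # ~0.1925 = sqrt(3)/9, attained near x = y = 2/3
```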
http://physics.stackexchange.com/questions/1019/common-false-beliefs-in-physics/1935

# Common false beliefs in Physics [closed]
Well, in mathematics there are some things which appear true but aren't. Naive students often get fooled by these results.
Let me consider a very simple example. As a child one learns the formula $$(a+b)^{2} =a^{2}+ 2 \cdot a \cdot b + b^{2}$$ But as one matures, one is tempted to apply the same formula to matrices. That is, given any two $n \times n$ square matrices, one believes that this result is true: $$(A+B)^{2} = A^{2} + 2 \cdot A \cdot B +B^{2}$$ But in general this is false, since matrices aren't necessarily commutative.
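For the record, the matrix example is easy to check numerically (a sketch in NumPy with an arbitrary pair of non-commuting matrices):

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 0], [1, 0]])   # AB != BA

lhs = (A + B) @ (A + B)                                     # (A+B)^2
print(np.array_equal(lhs, A @ A + 2 * (A @ B) + B @ B))     # False
print(np.array_equal(lhs, A @ A + A @ B + B @ A + B @ B))   # True: the correct expansion
```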
I would like to know whether there are any such things happening with physics students as well. My motivation came from the following MO thread, which many of you might want to take a look at:
-
4
Community wiki? – Marek Nov 17 '10 at 23:10
1
@MArek: i didnt find the option. if anyone can do it they are welcome – Chandrasekhar Nov 17 '10 at 23:15
2
@Chandru: AFAIK StackExchange recently changed its rules about this matter so that only moderators can make a question community wiki (the rationale being that the CW option is being misused at StackOverflow). – Marek Nov 17 '10 at 23:26
22
While i perceive the question interesting, I think the example with matrices is rather 'a common silly mistake' than 'a common false belief'. – Piotr Migdal Nov 17 '10 at 23:48
6
It seems a bit odd to be accepting a single answer to a soft question... – David Zaslavsky♦ Dec 1 '10 at 0:21
## locked by David Zaslavsky♦Sep 30 '12 at 20:06
This question exists because it has historical significance, but it is not considered a good, on-topic question for this site, so please do not use it as evidence that you can ask similar questions here. This question and its answers are frozen and cannot be changed. More info: FAQ.
## closed as not constructive by David Zaslavsky♦Sep 30 '12 at 20:05
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, see the FAQ for guidance.
## 49 Answers
Amazingly, Wikipedia has an article titled "List of common misconceptions". There is a (short) section dedicated to Physics, which mentions:
• The role of the Coriolis effect in bathtubs and sink drains
• The role of angular momentum in bicycle stability
• The "equal time" fallacy in explaining the lift developed by an airfoil
• Glass isn't actually a high viscosity fluid
• Composition of air
• "Lightning never strikes twice"
The Astronomy section has some good ones too:
• When a star collapses into a black hole, its gravitational pull does not actually increase.
• Meteorites are not actually hot when they land; usually they are cold. (I would add: the heating of meteors is more due to the compression of the air in front of them than to 'friction with the air' as commonly believed.)
Some that I would add:
• "Once something is in orbit it is free from Earth's gravity." Even educated people get tripped up on this one; the internet is rife with people suggesting we just "nudge" the International Space Station into lunar orbit. At a much more basic level of misunderstanding, there is the idea that astronauts are "weightless" because they are far away from the earth.
• "There is a high tide on the opposite side of the earth from the moon/sun because the earth 'shields' the ocean from the gravitational pull."
-
2
Especially the equal time fallacy. Nobody believes you when you tell them it's not true. – dan_waterworth Dec 15 '10 at 7:24
1
Just to clarify the 6 physics examples are things that aren't true (which people believe) - the 2 astronomy ones are things that are true (but people don't believe) – Martin Beckett Mar 19 '11 at 5:51
A mistake that I often come across and that is so easy to make: people somehow have a visceral belief that heavy objects fall faster than light ones. Setting aside, of course, problems of air resistance, this is obviously false, but it seems to be so counterintuitive, and I think it is somehow tied to our intuitive understanding of mass as inertia: higher mass means higher inertia. People understand this intuitively, as it takes more force to push a fat guy than a thin one. But they don't see that gravity is a force proportional to mass, so that more inertia is paralleled by more gravity as well. The result is the same gravitational acceleration for light and heavy bodies.
I've even noticed the mistake being made by professional physicists in colloquial conversations.
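The cancellation can be made explicit in one line (standard notation, with $M$ the Earth's mass and $m$ the mass of the falling body):
$$m\,a=\frac{GMm}{r^{2}}\quad\Longrightarrow\quad a=\frac{GM}{r^{2}},$$
so the mass of the falling body drops out, precisely because the same $m$ measures both its inertia and the gravitational pull on it.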
EDIT: I just found this article today, shedding a new light on why misconceptions in physics or science in general are so common and so hard to get rid of.
-
9
+1 for "setting aside air resistance"... it always bugs me when people point out how things fall at the same rate, while neglecting to mention that this assumes air resistance is negligible. Leaving out this clarification I think leads to more confusion, as all someone has to do is drop a rock next to a feather to (incorrectly) conclude that Galileo was wrong. – Tim Goodman Nov 18 '10 at 3:22
8
If you are assuming a fixed earth, then you are correct. But if you let the earth move, then the Earth moves more rapidly when you drop a bigger mass than when you drop a smaller one. In that sense, bigger masses drop faster. This has nothing to do with the equivalence principle of course. – Vagelford Nov 18 '10 at 17:49
7
Seems important to mention here Galileo's famous gedankenexperiment. If you follow the line of thought that heavy objects fall faster, and go split a heavy falling object in two, large one and big one, both should be falling slower than the original. The upper part would be pulling lower part up, wanting to go even slower, hence a contradiction. – Pavel Radzivilovsky Dec 1 '10 at 9:44
2
There is also the complication of buoyancy - a dense object would be heavier at the same mass, and thus fall faster even if air was friction-less. – romkyns Jan 12 '11 at 1:22
1
@Pavel You're ignoring tidal effects. Splitting something in two doesn't really change anything, because the two halves, being in close proximity, experience different tidal forces than those between two objects farther apart. – ErikE Jan 21 '11 at 18:21
"Summer is when the Earth is closest to the sun, and winter is when it's furthest away."
It's true that the Earth's orbit is slightly elliptical, but the effect of this, as far as seasons, is very small. For one thing, this wouldn't explain why the sun rises and sets at different times in different seasons, and if this were true, the whole planet would have summer at the same time.
The seasons are actually caused by the tilt of the Earth relative to its orbit around the Sun.
-
I would say that for most people, the quadratic scaling of kinetic energy with speed is somewhat of a mystery.
People don't understand that if you go twice as fast, a car accident is actually four times as energetic; hence the high number of reckless drivers and deadly accidents.
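A one-line illustration with made-up but typical numbers: since $E_k=\tfrac12 mv^2$,
$$\frac{E_k(100\ \mathrm{km/h})}{E_k(50\ \mathrm{km/h})}=\left(\frac{100}{50}\right)^{2}=4,$$
so doubling the speed quadruples the energy that has to be dissipated in the crash (and roughly quadruples the braking distance too).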
-
Classical mechanics is boring and mostly solved
...especially in case of fluid dynamics (-;
-
2
I never met a person who would think this. If you don't know anything about mechanics then you obviously can't think it is boring. And if you do know it, just double pendulum or three-body problem should convince you that it's far from simple. I wouldn't call fluid dynamics classical mechanics though. While it is classical, it is certainly not mechanics but rather a field theory. And that is the main reason it is hard. – Marek Dec 3 '10 at 12:46
7
I would say most physics undergrads think this is so. This is closely related to "anything that can be solved analytically, already has" – Pete Dec 3 '10 at 16:01
I will give some meta-false beliefs: these are beliefs held by the general public, which happen to be true, which are hyper-corrected by many physicists with bogus corrections based on the urge to appear smart:
### Electrons move slowly down a wire
• The belief: the electrons move lightning fast down a wire.
• the hypercorrection: in the completely obsolete Drude model, electrons move slowly. In this model, you imagine the current is carried by a classical gas of electrons, and you divide the total current by the density of all electronic charge to get the drift velocity. This predicts a completely bogus drift velocity of a few cm/s, which is total nonsense, because only electrons near the Fermi surface contribute to the conductivity. Nevertheless, you see this hypercorrection repeated endlessly (it appears here too).
• The best answer: the electronic wavefunctions are spread out in a metal. The correct notion of electron velocity is the Fermi velocity, which is enormous typically, because the wavelength is about 1 atomic radius. While it isn't the same as the speed of electricity going down the wire (which is the speed of the field perturbations, some significant fraction of the speed of light), it is enormously high. Impurities which can scatter electrons will alter this speed, but not as much as the naive hypercorrection says.
### The atom is mostly empty space
• The belief: the atom is full of stuff, that's why stuff is hard when you push against it.
• The hypercorrection: In the totally obsolete Rutherford-Bohr model, the atom is mostly empty space, the tiny pointlike electron orbiting a nucleus which contains most of the mass.
• The best answer: But it is the electrons' wavefunction which tells you whether something is empty space or not. A region filled with electronic wavefunction feels hard to the touch, because two electrons can't be compressed into the same space without squeezing their wavefunction to have very high spatial variations, by the exclusion principle. Atoms are full of electronic wavefunction, and are therefore not empty space, at least not by any reasonable definition.
### There is nothing mystical about measurement in quantum mechanics
• The belief: the measurement problem in the standard quantum mechanics suggests that consciousness is somehow involved in measurements
• hypercorrection: decoherence explains all that! Quantum mechanics is no different than determinism as far as enlightenment values are concerned.
• The best answer: Decoherence tells you why you don't have interference between classically different worlds, or histories, and it's an important part of the story if quantum mechanics is exact. But it does not tell you why you "perceive" one consistent history as a world. You need a dictionary between physics and perception. Since this dictionary is fundamental, and weird, and philosophical, it is important to explain that this is not an output of physics, but an input, which links mathematical theory with explicit sense-data.
### There is no such thing as a centrifugal force
• The belief: when things are rotating, they are pushed out by a centrifugal force.
• The hypercorrection: There is no centrifugal force. There is a centripetal force (centripetal was a made up word to replace centrifugal) which pulls you in.
• The best answer: This is obviously true from the point of view of the inertial frame, there is no centrifugal force, but if you are looking at it from the point of view of the rotating object, then there is. It all depends on your choice of reference frame.
### The big bang happened everywhere at once
• The belief: the big bang means that the universe started at some definite point and got bigger from there.
• The hypercorrection: the big bang happened everywhere at once, and it is just wrong to think of it as happening at some single point in an extended model of space-time. If the universe is open, the big-bang was infinite in extent.
• The best answer: There are three important caveats: 1. In FRW models, the bang-point is a singularity, so its outside space and time, and it is impossible to determine if it is "really" a single point or "really" everywhere at once, so it's just a meaningless question. 2. In the Newtonian "big bang" model, where you imagine the universe is now filled with particles which have a speed away from you linearly proportional to the distance from you, everything does come out from a single point! All the newtonian world lines converge on your current position. That's true even though the universe is spatially homogenous (the reason its not a paradox is that Galilean boosts are nontrivially mixed with translations). 3. the best picture, in my view, is the holographic picture, where you are surrounded by a horizon which was smaller in the past. This view is similar to the Newtonian big bang, in that everything came from a small region bounded by a dS cosmological horizon. This is mathematically equivalent to everything else, except throwing away the stuff outside the horizon that can't be observed.
I would like to admit that I was a little flabbergasted when a lay-person told me that everything in a Newtonian big-bang comes from a single place. That was completely counterintuitive.
-
1
You should be teaching, Ron. – Mike Dunlavey Nov 2 '11 at 2:36
1
Nice answer, but think of looking at your "best answer" concerning quantum decoherence - it could be clearer. – CHM May 14 '12 at 22:43
Quantum mechanics is way too strange so it can't possibly be a correct description of the real world. Right? I think nothing else needs to be said about this.
Or maybe on a second thought, some more concrete beliefs and their solution are in order:
1. The physical world has to be deterministic (it doesn't).
2. Every possible question that can occur to you must have a precise answer by measurement (we observe only what we can, not what we want to).
3. The collapse of the wavefunction is in contradiction with finite speed of light (no information is being transmitted).
-
4
@Noldorin: Bohr? Really? You wouldn't find a more stringent advocate of quantum mechanics than Bohr (who I think can rightly be called its father). Einstein on the other is a different story. He wasn't able to let go of his belief that physics must be complete and answer everything we want to know. But still, this led to nice results like EPR paradox. So his inquisitive mind arrived at interesting physics even though his prejudices didn't let him accept it :-) – Marek Nov 17 '10 at 23:33
1
@Marek: That's why I put the question mark, silly. :P Who can blame Einstein in any case? I certainly don't. To call him prejudiced is not only arrogant but hugely ironic! I do not wish to continue with this debate, thank you. – Noldorin Nov 17 '10 at 23:39
3
@Noldorin: why would it be arrogant? Just consider that Einstein himself called the inclusion of cosmological constant his biggest mistake, so he himself admitted that he was too prejudiced about stationary universe and unwilling to admit its expansion (but he eventually let go because of Hubble's experimental evidence). Even physicists (especially older ones) can be prejudiced. I am not saying this with any contempt, Einstein was one of the best minds of the human kind. Just that everyone has some prejudice or other. Although in this case it's quite ironic because he helped to create QM :-) – Marek Nov 17 '10 at 23:47
2
@Marek: I think it is not only about lack of experience, but also about teaching QM in wrong way (how often do you hear "No-one knows if electric field really exists or only is our tool to describe how electron moves"? or "If someone says s(he) understand classical probability, s(he) must be lying!") – Piotr Migdal Nov 18 '10 at 0:23
2
@Marek: Yes, it is ironic, as he was one of the great figures in very early quantum mechanics. Still, I don't think that even this day we can say he's wrong! There's nothing to stop some other more fundamental theory superseding quantum mechanics and proving Einstein right. In any case, fair enough, I just suggest you keep a slightly more open mind. :) (I know too many close-minded physicists.) – Noldorin Nov 19 '10 at 21:39
A couple of space and sci-fi derived misconceptions:
• That an orbiting satellite needs propulsion, and that orbiting is different from free fall.
• You can actually see a laser beam in free space! I've seen this in my experimental physics class a few years back, before keychain lasers were common.
-
2
And the misconception that orbiting satellites and other space vessels make cool whooshing sounds as they go by at 1/1000th of their actual speed by the pretend-stationary camera. – romkyns Jan 12 '11 at 1:22
4
Seeing a laser beam in free space? Aren't all photons supposed to go straight in free space instead of some of them changing path to land in my eyes? – Kim Kim Mar 17 '11 at 16:21
1
@KimKim I didn't express myself properly: I've seen people believing that. – Sklivvz♦ Sep 23 '11 at 12:40
That if an object is moving, there must be a force propelling it in that direction. Students very commonly think that forces cause objects to have velocity, rather than the fact that forces cause objects to change velocity.
-
2
Related to this, it really seems to confuse a lot of intro-level physics students when acceleration and velocity vectors are pointed in opposite directions. One of my standard quizzes when I taught freshman physics was to toss a ball straight up and down in the air, and then hand out a position vs. time plot and ask them to sketch the acceleration and velocity vectors at several key points. (neglecting air resistance). Always gets an interesting mix of answers. – Tim Goodman Feb 21 '11 at 17:24
Some I have heard:
• Heavier objects fall faster (this is just plain wrong.) However, bigger and smaller objects in a typical Earth environment would fall at different rates due to air resistance, but actual mass has no influence.
• Two cars colliding at 60 mph is the same as one car colliding into a wall at 120 mph. I think MythBusters did something on this.
• Electrons travel really fast around a circuit: actually, they drift really slowly, but it is similar to Newton's cradle in that a small movement in one ball can transfer the energy to the last one almost instantaneously (a rough numerical estimate of the drift speed is sketched just after this list).
• The laws of thermodynamics have been broken by some guy in a garage with some magnets. Well, no, they still seem to be intact, and a lot of modern science depends on them!
• That an oscilloscope trace travels faster than the speed of light on expensive high speed analog scopes (~1-2GHz.) This is not quite true: although the beam may sweep the surface of the CRT faster than c (due to the relatively small movement at the neck of the CRT), the trace cannot communicate information faster than c.
• More related to chemistry, but the fact that water can have "memory" and all that homeopathic nonsense that con arti^H^H^H^H^H^H^H^H homeopaths spurt out.
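To put a number on the electron-speed bullet above, here is the standard drift-velocity estimate $v_d = I/(nqA)$ (a sketch; the 1 A current and 1 mm² copper wire are made-up but typical values):

```python
I = 1.0        # current, ampere (assumed)
A = 1e-6       # wire cross-section, m^2 (1 mm^2, assumed)
n = 8.5e28     # conduction electrons per m^3 in copper
q = 1.602e-19  # elementary charge, coulomb

v_d = I / (n * q * A)
print(v_d)     # ~7e-5 m/s, i.e. a fraction of a millimetre per second
```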
-
3
It's important to make the distinction about the speed of electrons, but electrons traveling slowly is only half the story. Electrons have a slow drift velocity, but they have a high thermal velocity in most situations. – Mark Eichenlaub Dec 3 '10 at 9:19
2
@Mark Eichenlaub: you can say that they move fast, but they don't travel fast. It's probably a language-related thing, but traveling implies overall displacement. – Sklivvz♦ Dec 3 '10 at 21:11
1
Wait, what's wrong with the colliding cars? Shouldn't it be the same in all frames of reference? It certainly is the same in the case of an elastic collision. – Greg Graviton Dec 4 '10 at 17:49
5
Ah, because the energy is used to deform both cars. So, that would be like a car with 120 mph against another car instead of a wall. Would it? Kinetic energy is quadratic in velocity, but something weird happens when you switch the frame of reference. – Greg Graviton Dec 5 '10 at 13:09
1
@Greg: here's an analysis of the crash that might interest you. Also see this question. – David Zaslavsky♦ Dec 15 '10 at 0:06
The misconceptions about special relativity and quantum mechanics are quite well-known. A lot of the posts above discuss them in detail. So rather than doing that I'll list some misconceptions from general(say high school) physics:
1. When a body rests on a surface the upward contact force acting on it is reaction to its weight. This is obviously wrong as action and reaction act on different bodies.
2. There's a lot of misconceptions about non-inertial(pseudo) force. My physics teacher once said that non-inertial forces arise only when the body is in contact with an accelerating frame.
3. Nothing can move faster than light. Of course it's false unless you add the phrase "in vacuum". The Cherenkov radiation happens when some charged particle moves in a medium with speed greater than the speed of light in that medium.
4. Friction always has to act in the opposite direction of overall motion. Actually friction provides the necessary force for rolling without which no vehicle would ever run. The correct formulation is friction opposes the instantaneous motion of the point of contact.
5. Light always travel in straight lines. Even without gravitational bending if we simply have a medium with a variable refractive index light will follow a curve through it. It's a nice application of Snell's law.
6. Newton's second law provides a definition of force. It's a very widespread misconception unfortunately even among professional physics students. This strips Newton's second law of any physical content and forces(pun intended) it to become a tautology. Of course the actual content of the law is that the force is given by some other law(say gravitational or em) and it equals ma. For a persuasive discussion on this see the first volume of Feynman's Lectures on Physics. (I am very sorry that I forgot the chapter or page number).
7. Newton's first law is derivable from the second law. The proof goes as follows: F=ma. If F=0 then a=0 since $m \neq 0$, QED. The problem is that without the first law there is no notion of an inertial frame and the laws become pointless.
8. In special relativity the hypothesis of the constancy of the speed of light in vacuum ($c$) with respect to all observers is redundant because it can be derived from the principle of relativity. Of course c may vary without contradicting the principle of relativity. In fact, in Newtonian mechanics c is observer dependent, yet it respects the principle of relativity. The constancy-of-c hypothesis gives the Lorentz transformation, whereas in Newtonian mechanics we have the Galilean transformation. If you are still not convinced then look at this formulation of special relativity without the second hypothesis. Google doubly special relativity.
-
1
Also, I don't think anyone believes 5, 6 and 7 are arguable at best, and 8 is only stated by people who believe Maxwell's equations, so that the principle of relativity plus the validity of Maxwell's equations implies the constancy of the speed of light. – Ron Maimon Aug 16 '11 at 16:51
If you somehow manage to BREAK a law of physics, the universe will vanish!
-
9
Fortunately if you do it will be replaced by a backup copy – Martin Beckett Mar 19 '11 at 5:59
As a tutor, I frequently have conversations like this:
"So we worked out that when I toss my pen up at 2m/s, it will go 20cm high. How high will it go if I toss it up at 4m/s?"
"40 cm."
"Well, okay, let's check by working through that equation again..."
[We find out that the answer is 80 cm]
"So when I throw it up twice as fast, it goes four times as high, because it takes twice as long to get to the top, but is also going twice as fast."
"Okay."
"Now what if I throw it up three times as fast? How many times as high will it go?"
"Six."
This isn't a misconception about kinetic energy, so much as a lack of comprehension about what scaling is. When students are missing this concept, almost all of physics is more difficult to discuss.
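For reference, the heights in this dialogue come straight from the constant-gravity formula $h=v^{2}/2g$; a quick sketch (air resistance ignored, $g\approx9.8\ \mathrm{m/s^2}$):

```python
g = 9.8
for v in (2.0, 4.0, 6.0):      # launch speeds in m/s
    h = v**2 / (2 * g)         # peak height in metres
    print(v, round(h, 2))      # 2 -> 0.2, 4 -> 0.82, 6 -> 1.84  (1x, 4x, 9x)
```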
-
6
@Mark No, not really. They "know" the term is squared. They can instantly recite the formula KE = 1/2mv^2. But they don't really get what that means. They think that the only way to answer questions about what would happen if we throw the thing twice as fast is to plug in some numbers to the formula. The idea of looking at the exponents in formulas and gaining physical insight from that is alien to them. It's natural to you because you know basic physics and math quite well, but to students it is a weird and unusual trick. It's easy to forget how little you knew a long time ago. – Mark Eichenlaub Nov 19 '10 at 22:25
2
Yes - exactly. Building intuition about math is a slow process, but it's interesting to watch a student progress throughout the course of a year. – Mark Eichenlaub Nov 19 '10 at 22:46
You need something more to get something more
This is about emergent macroscopic properties of microscopic laws. Some people can't understand that statistics is powerful enough to make a seemingly random heap of molecules suddenly show macroscopic properties like being solid or being magnetic, and they think some hand of god is required to make this happen.
The best illustration of a contradiction to this principle is living organisms. They consist of nothing more than a few physical laws all the way down and a large number of molecules. All that was needed was statistics and natural selection.
I will agree though that it is quite amazing all of our nature can spontaneously emerge just from particles given enough space and time.
-
1
@Ron: I am not saying it's obvious, only that it is possible (i.e. you don't need God to create life). And it is certainly the only scientific explanation we have, so what needs to be done is "only" better quantification of relevant processes. – Marek Aug 16 '11 at 17:15
The most common misconceptions are about gravity:
(1) Gravity turns off at space-shuttle orbit distance because the astronauts are weightless
Gravity is at roughly 90% of its surface strength at typical shuttle altitudes (see the quick estimate below). The astronauts are weightless because the shuttle is in free-fall (orbit). If there were no gravity, the shuttle could not orbit.
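A quick estimate of that falloff from the inverse-square law (a sketch; 400 km is a typical shuttle/ISS altitude, with geostationary altitude shown for contrast):

```python
R = 6371e3                   # Earth's mean radius, m
for h in (400e3, 36000e3):   # low orbit vs geostationary altitude, m
    ratio = (R / (R + h))**2
    print(h / 1e3, round(ratio, 2))   # ~0.89 at 400 km, ~0.02 at 36,000 km
```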
(2) Gravity is generated by the spinning Earth. If the Earth stopped spinning, gravity would turn off.
Gravity is generated by virtue of the Earth's mass and the mass of the object; the two exert a mutual pull. There are some smaller effects associated with the spinning Earth (e.g. the Coriolis effect), but gravity would still work fine if the Earth stopped spinning.
There is another great misconception about the force of impact between a truck and a small car in a collision:
(3) The truck exerts a greater force of impact on the car, than the car on the truck
While the damage can certainly be unequal, the forces are equal, by Newton's third law.
-
3
(3) is not really a misconception if force is understood in a colloquial way (normal people don't really use words like force and work in the physical sense). In particular, truck will have a lot more momentum so the situation is certainly asymmetric. But otherwise I like these. +1 – Marek Dec 3 '10 at 16:25
If you are riding a bicycle and you turn the front wheel to the left then the bicycle will steer to the left.
-
4
– Pavel Radzivilovsky Dec 6 '10 at 10:45
I hear from time to time people highly educated and skilled in Physics (unlike those believing that heavy objects fall faster than light ones) making the following claim:
... quantum mechanics indicates that certain physical quantities can take only a countable set of discrete values. Consequently, many current approaches to foundational questions in physics and cosmology advocate novel discrete or 'digital' pictures of nature.
("Is Reality Digital or Analog" essay contest at FQXi)
The discrete spectra of some quantum observables do not imply/suggest that nature, in particular spacetime, is fundamentally discrete. The spectrum of a continuous operator acting on Hilbert spaces [which is a topological (vector) space, hence is continuous], often has a discrete part. This has nothing to do with spacetime being discrete. If it will (eventually) turn out to be discrete, it will be for other reasons.
-
Historically the concept of absolute velocity was commonly believed until the time of Galileo Galilei in the early 17th century. As a naive child, before studying basic physics, it is surprisingly easy to believe in this even these days!
The idea of absolute velocity states that all velocities are fixed with respect to an absolute frame of reference. Galileo showed that velocity is relative to your frame of reference, a principle known as Galilean relativity. This was later fully quantified by Isaac Newton, who also proposed that acceleration is invariant with respect to inertial frames.
-
Why do I have to learn this law when they change it every few years?
This has to do with the fact that some (actually, a lot of) people believe that progress in physics is made in the form of revolutions and (in particular) that one day we might find laws that will contradict everything we knew up till then.
Well, if one looks closely at the history of physics, it should become apparent that progress was always just evolutionary. Even when some idea needed a revolution in the way people think (as with SR and QM), it always turned out to be just a generalization of our previous ideas (so both SR and QM have nice classical limits which coincide with Newtonian mechanics).
Barring the useless philosophical views (like we might live in Matrix or we don't know whether the sun will rise tomorrow for sure) it's pretty certain that our universe is a comprehensible place and our theories are just better and better approximations to the reality. So it will always be useful to learn Newtonian mechanics, even one million years from now.
-
"Burning coal for heating is more efficient than electrical due to thermodynamical losses at coal power plant"
This is false, even though it is true that converting electrical power to heat would be even worse. The correct way to heat with a given amount of heat source is this:
• burn it at high temperature
• extract work with a heat engine running between $T_{high}$ and the environment
• use the work to power an air conditioner to bring heat from $T_{environment}$ into $T_{room}$.
This gives efficiency more than 1 (more heat brought into room than heat produced by burning), and net cooling of the surrounding environment.
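A rough numerical version of this argument (a sketch with made-up but plausible temperatures, ideal Carnot devices and no other losses assumed):

```python
T_high, T_env, T_room = 900.0, 275.0, 295.0   # kelvin: flame, outdoors, room

eta_engine = 1 - T_env / T_high               # ideal heat-engine efficiency
cop_pump = T_room / (T_room - T_env)          # ideal heat-pump coefficient of performance

heat_delivered_per_unit_fuel_heat = eta_engine * cop_pump
print(round(heat_delivered_per_unit_fuel_heat, 1))   # ~10, i.e. far more than 1
```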
-
I think one common false belief is that a light mill rotates because photons deposit more momentum on the shiny side (where they are reflected) than on the black side (where they are absorbed).
I find it quite astonishing to see that many people think so, despite the fact that a light mill spins in the opposite direction to the one predicted by that explanation.
-
2
It depends on the vacuum you have. When there isn't a high enough vacuum, the black side is heated more than the shiny side and because of convection of air and some edge effect there is motion from the black side to the shiny side. When there is a high enough vacuum, then the radiation pressure is what moves the mill. So it depends on the light mill. – Vagelford Nov 30 '10 at 23:04
3
Someone should invent a light mill where the pressure inside can be changed enough to observe it working either way. It would be fun to show that to students and non-scientists and ask them to explain. – DarenW Mar 5 '11 at 21:52
Here's another list of false beliefs. These are held by science popularizers. Whether they actually believe these beliefs, or just utter them for the purpose of getting more viewers, is an unanswerable question:
### The curved space near massive object can be pictured as a deformed rubber sheet
This one is due to Einstein, unfortunately. You put balls on a rubber sheet, and you see that they roll towards each other. The reason this is a terrible explanation is because you have the Earth's gravity doing the pulling, not the curved space. The actual geodesics on a curved space like the rubber sheet are repelled by the central mass. The reason things attract in relativity is because of the time-dilation factor, and this is the dominant effect. It is just as easy to explain things correctly, in terms of time slowing down near a massive object, and world-lines trying to maximize their proper time with given fixed endpoints, but popularizers never do this.
### A variable speed of light can replace inflation.
This appeared in a recent popular show, and it is based on the following bogus idea: if light moved faster at early times, then all the universe could have been in communication! The reason this is false is because no matter how the speed of light is imagined to vary, one can recoordinatize space-time in terms of the intersections of light cones, and unless these lightcones split instead of merge, you get the same communication paradox--- new regions coming into causal contact are coming into causal contact for the first time.
### Mesons and Baryons are made of quarks like atoms are made of protons, neutrons and electrons.
This is insidious, because its true for heavy mesons. But it's much more false than true for pions and protons and all the excitations at lower than 1GeV, because of the vacuum condensates. There is no reasonable model of light pions which does not take into account their Goldstone nature. This type of explanation also leaves out Nambu and Skyrme, both of whom were unjustly ignored for too long.
### String theory is a theory of strings
This picture is not good for someone who doesn't already have a sense of string theory, because if you start out making home-made models of relativistic strings, you will never get anything like the correct string theory. The strings you naively picture would not have the special light-cone interactions that strings do in the Mandelstam picture, and they would not obey Dolen Horn Schmidt duality. They would just be conglomerations of point particles held together by rubber bands. They would have the wrong spectrum, and they would be full of ghosts.
The only proper way to say what strings are is to say right off the bat that they are S-matrix states, and that they are designed to be an S-matrix theory with linear Regge trajectories. They have a string picture, but the constraint that exchanging strings in the S-channel is dual to exchanging them in the T-channel is all-important, just as it was all-important historically. Without this, even with the Nambu action, you are at a loss for how to incorporate interactions. It is not obvious that the interactions are by topology unless you know Dolan Horn Schmidt.
It is also important for realizing that string interactions are somewhat holistic (that they become local on the light cone is the surprise, not the other way around). You add them order by order in perturbation theory by demanding unitarity, not by asking what happens when two strings collide in the usual sense. These "strings" are strange new things born of 1960s Chew-isms, and their closest cousins are flux lines in gauge theory, or fishnet Feynman diagrams, not a collection of point masses held together by spring-like forces.
This is also insidious, because Chew, Mandelstam, Dolen, Scherk, and all that generation developed the greatest physical theory the world will ever see, and their reward was: "You're fired". (in Scherk's case, "You're crazy"). Then they were heckled for thirty years, while their work was appropriated by a new generation, who described them as the deluded misguided Chew-ites who discovered something great by accident.
### There is more than a snowball's chance in hell for large extra dimensions
The idea that there are large extra dimensions was very popular in 2000, but it's completely preposterous. Large extra dimensions bring the planck mass down to about a TeV, giving neutrinos generic majorana masses which are in the KeV-MeV range, so you need to fine tune. They lead to essentially instantaneous proton decay, and huge CP violations in strong interactions, so you need to fine tune some more. To avoid proton decay, there is a clever mechanism due to Arkani-Hamed and Schmalz which puts the quarks and leptons in different places in the extra dimensions. This idea is appealing only at a superficial first glance, because it requires that the SU(2) and U(1) of the standard model be extended in the extra dimensions, which affects their running immediately. The theory predicts unambiguously and model-independently that proton decay suppression requires huge electroweak running at around a TeV. That's a signal you haven't seen a hint of at 100GeV collisions. Come on. In addition, how do you stabilize large dimensions? It's the same fine-tuning as before, so the number of problems has gone up.
A low Planck scale would completely demolish the predictivity of string theory. You can squeeze a lot of stuff into large dimensions. In my opinion, it is this brand of string theory that the critics correctly criticize as fundamentally non-predictive.
-
One widespread belief (I think due to popular books such as Hawking's) is that GTR can never ever, ever be quantized and you always obtain infinities and blah blah. Well, it can, in many situations and in many theories. What is actually meant is that GTR is not a renormalizable quantum field theory in the naive way. But this specification is never explicitly pointed out so people get a false impression that quantum gravity is something completely out of the realms of current physics. Well, surprise, surprise, it's not. We can quantize many gravitational effects (such as waves), we understand that black holes have entropy, we understand they produce Hawking radiation and eventually disappear, etc. And assuming that string theory is correct we can predict whole deal more about it.
-
Some sport instructors would tell you:
Running on a treadmill is easier because you only have to jump, while on the street you also push forward
I suggested then that running in a train should be the easiest, by this line of thought. However, it is true that starting to run (that is, accelerating) is indeed easier on a treadmill. Other real differences: air resistance (unless there happens to be a tailwind matching your pace), the randomly changing slope, and the lack of an air conditioner outdoors. That pretty much summarizes the difference.
-
As an instructor, I have great difficulty teaching Newton's 3rd law: "For every action there's an opposite and equal reaction." This is very basic and very old physics but it's hard to teach. A typical example of the false reaction force answer is the following:
I hold an apple in my hand. The earth pulls down on the apple with a gravitational force. What is the reaction force to this?
The answer most of them will give is that the reaction force is "my hand pushing up on the apple." Arggghhhh! Of course the reaction force is "the apple pulling up on the earth."
Students fail to realize that the opposite and equal reactions have to be between the same pair of objects. That is, forces arise as pairs. I wish they'd just rename the law so that it makes it more clear that the reaction force has to operate between the same pair of objects.
I demonstrate the law by holding a long spring in my hands and telling them that forces are like this spring. When it applies a force on one end it applies a force on the other (assume massless spring). Next quarter I'm going to try some more extreme measures on this, clearly I'm failing.
-
While the other answers are absolutely correct, they are very subtle in the way they appear. However one very large false-belief in electrical science is that current is taken to be "flowing" from the + terminal to the - terminal in DC current.
While it doesn't really matter which way you choose, since the electrical signal propagates at close to light speed, the electrons are actually moving from the - terminal to the + terminal (do not get me wrong, the electrons are NOT moving anywhere near light speed while drifting; their drift speed is typically well below a millimetre per second). Hence the flow of electrons actually goes from - to +.
And it would be interesting to note that not a soul in the professional area of electrical science considers the current from - to +, as it would cause inconsistency with his/her colleagues.
-
2
If current always flows as electrons from - to +, then please explain how the Hall coefficient can be positive in many materials, such as p-type semiconductors. – Keenan Pepper Nov 18 '10 at 0:02
4
The best explanation I heard for this was when they came up with the diagrams they had no idea what an electron was. So they just picked a direction. Turns out that they had it backward and what's actually flowing from + to - is the absence of electrons (holes). – jcollum Nov 27 '10 at 18:25
"Long hair grows slower" is due to biological effects
In fact, this is a purely mathematical phenomenon. The bigger the average length, the larger the decrease caused by each hair that falls out. This leads to a differential equation $$\frac{dL}{dt} = K - \alpha L$$ whose solutions decay exponentially to an equilibrium at $$K/\alpha$$.
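For completeness, the explicit solution of that equation (standard separation of variables, with $L_0$ the initial average length):
$$L(t)=\frac{K}{\alpha}+\left(L_0-\frac{K}{\alpha}\right)e^{-\alpha t},$$
which indeed relaxes exponentially towards the equilibrium length $K/\alpha$, whatever the starting length.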
-
2
@Pavel: sigh... are you intentionally misinterpreting my statements? I never said single hair is important. Just that your average doesn't account of single hairs (in particular, the long ones) and that the average is not important for this question. In reality what's important is that there is enough long hairs. The girl couldn't possibly distinguish whether she had 10000 30cm hairs or 5000 20cm hairs and 5000 40cm hairs. Your average would come out the same but obviously in the second case her hair would seem 10cm songer. Your model doesn't account for this at all and therefore it's useless. – Marek Dec 3 '10 at 9:18
2
@Pavel: sure, that's what I am saying. But there are many such characteristics and you used just one: the average. And this is the most crudest one and most irrelevant too. – Marek Dec 5 '10 at 12:38
the belief that cartoon-universe rules apply to falling objects:
as embodied by the statement that, "if you are trapped in a falling elevator, you can avoid destruction by jumping at the last moment, so that when the elevator hits, you are in the air, and only fall the last inch or two."
-
It really bugs me. I've even heard from some quantum mechanics lecturers that they think quantum entanglement implies faster than light communication!
-
2
Quantum theory is not non-local (and I am not really sure what you mean by that intergalactic invasions, @Pavel)! This could be put as another misconception stemming from EPR paradox. What EPR (or rather Bell's inequalities) say is that quantum theory is either non-local or incomplete (in a sense of absence of hidden parameters). We have good reasons to think it is local (e.g. QFT has to obey locality if it is to make any sense) so what these experiments actually concluded is that there are no hidden variables (i.e. QM is not just statistics). – Marek Dec 5 '10 at 12:34
2
@Pavel: okay, done. It's interesting but it's still just the good old correlation of entangled spins, nothing more. Calling it non-local (or second-best, or whatever) is just confusing and precisely the reason why people think there is superluminal communication going on. – Marek Dec 5 '10 at 17:55
The concept that quantum mechanics undermines determinism. The Schrodinger wave equation evolution is completely deterministic. The results of measurements are probabilistic, but this does not mean that the various superposed states do not have causes. This is not the same thing as a hidden variable theory. The probabilities are deterministic. 't Hooft has some interesting ideas on a determinism underlying QM (not the same thing as saying the wave equation is deterministic). I am not arguing that QM is in all senses deterministic, but it isn't completely non-deterministic either.
-
http://mathhelpforum.com/discrete-math/144268-infinite-collections.html

1. ## Infinite collections???
Does there exist an infinite collection of sets such that the intersection of every two sets in the collection is nonempty, but the intersection of every three distinct sets in the collection is empty???
Could you also explain what sets are???
Thanks
2. Originally Posted by dch
Could you also explain what sets are???
A very good question, and one I suspect people don't ask often enough! Unfortunately, the answer is overly complicated (it is based on a set of axioms which are not nice at all, called `ZFC'). However, wikipedia seems to cover the topic well, so I would recommend reading that page then trying your question. If you still can't crack it, ask it again!
Anyway, a basic notion of sets is a collection of objects, called elements. These elements are normally numbers (because this is maths). To show that the collection of elements is a set we enclose it in curly brackets, for example $S=\{1, 2, 3, 4, 5\}$ is a set. An example of an element of $S$ would be 3, and 4 would be another one. 6, however, is not an element.
Sets cannot have more than one copy of an element. So, for example, $\{1,2,1\} = \{1,2\}$.
Now, there are a few operations defined on sets. The main ones are union and intersection.
$\{A\} \cup \{B\} = \{A, B\}$ then cancel any multiple elements. So, for example, $\{1, 2, 3, 4, 5\} \cup \{1, 4, 6, 7\} = \{1, 2, 3, 4, 5, 1, 4, 6, 7\} = \{1, 2, 3, 4, 5, 6, 7\}$.
Intersection gives you the subset of elements contained in both of the sets. So, for example, $\{1, 2, 3, 4, 5\} \cap \{1, 4, 6, 7\} = \{1, 4\}$ as the elements 1 and 4 are the only elements in both of the sets.
Does that make sense?
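If it helps to experiment, Python's built-in sets mirror these operations directly. The snippet below is only a sketch: it also tries one natural candidate family for the original question, $A_i=\{\,\{i,j\} : j\neq i\,\}$, truncated to finitely many indices so it can be checked by brute force (whether this family really answers the question in general is left for you to verify):

```python
from itertools import combinations

S = {1, 2, 3, 4, 5}
T = {1, 4, 6, 7}
print(S | T)   # union:        {1, 2, 3, 4, 5, 6, 7}
print(S & T)   # intersection: {1, 4}

# Candidate family for the original question: A_i = { {i, j} : j != i }.
N = 8
A = [{frozenset({i, j}) for j in range(N) if j != i} for i in range(N)]

print(all(A[i] & A[j] for i, j in combinations(range(N), 2)))                   # pairwise nonempty
print(all(not (A[i] & A[j] & A[k]) for i, j, k in combinations(range(N), 3)))   # triplewise empty
```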
3. Originally Posted by Swlabr
elements are normally numbers (because this is maths).
And mathematics includes set theory (and, in a certain sense, vice versa). There is no requirement that elements normally be numbers. Ordinary mathematics (which includes such subjects as analysis, topology, abstract algebra, et. al) has many kinds of objects that are not numbers, and such objects (if they are not proper classes) are themselves elements of certain sets.
4. Originally Posted by dch
Could you also explain what sets are???
There are at least four different answers:
(1) Naive, intuitive, everyday mathematical sense of the word 'set': Sets are collections (or "classes") of objects. The objects collected into a set are the elements of the set.
(2) The notion of 'set' is taken as an undefined basic concept.
(3) In certain formal theories, we may define the predicate 'is a set' in certain ways. For example, one may find in a theory such as Z set theory (without urelements) that a reasonable definition of 'is a set' is:
x is a set if and only if x is an element.
The purpose of that definition is to distinguish sets from proper classes. That is, a set is an object that is a member of some class, while a proper class is not a member of any class.
In such a theory, all objects are sets, since every object x is a member of {x}.
(4) In certain formal theories, we take the predicate 'is a set' as a primitive predicate. For example, in Bernays class theory, 'is a set' is primitive.
/
Generally, we might distinguish objects in this way:
urelement
class
set
proper class
An urelement is an object that has no members (except for the empty set which is the only non-urelement having no members).
A class is an object that is either the empty set or has members.
A set is a class that is a member of some class.
A proper class is a class that is not a member of any class.
5. Originally Posted by MoeBlee
And mathematics includes set theory (and, in a certain sense, vice versa). There is no requirement that elements normally be numbers. Ordinary mathematics (which includes such subjects as analysis, topology, abstract algebra, et. al) has many kinds of objects that are not numbers, and such objects (if they are not proper classes) are themselves elements of certain sets.
Well, sure, but I can never quite understand why lecturers always have contrived examples such as {cat, dog, house}. Why don't they just start with numbers, then progress to, for example, permutations or symmetries or whatever. Just using numbers is sufficient for a first course.
6. Originally Posted by Swlabr
Well, sure, but I can never quite understand why lecturers always have contrived examples such as {cat, dog, house}. Why don't they just start with numbers, then progress to, for example, permutations or symmetries or whatever. Just using numbers is sufficient for a first course.
I basically agree with you in that sense. But I guess they use concrete objects as examples in order to get across the more general, naive, intuitive, everyday sense of the word 'set'. But, as you've alluded, in ordinary abstract set theory (without urelements) the objects are all abstract, and for any finite example (even in predicate logic), using natural numbers (and sets built from them) is sufficient (any countable model can just as well be emulated with natural numbers). (Of course, even more rigorous is to refrain from using natural numbers until they have themselves been constructed in the theory).
http://mathoverflow.net/questions/16621/what-do-you-lose-when-passing-to-the-motive

## What do you lose when passing to the motive?
I contemplated about what information about a scheme we lose when passing to its motive. I came up with the following examples:
1. The projective bundle of a vector bundle does only depend on the rank of the vector bundle whatever it is twisted (this follows from the Mayer-Vietoris exact triangle).
2. The motive of the blow-up of a $\mathbf{P}^2$ in a point equals the motive of the quadric surface.
Are there further big classes of such phenomena? (Is there a class which 2. fits into?)
And conversely, what can we recover about the variety from its motive?
-
1
You mean, passing from a projective variety over a field $k$ to the category of pure motives over $k$ ? – Regenbogen Feb 27 2010 at 18:48
Yes. The category of motives in the sense of Voevodsky. – norondion Feb 27 2010 at 19:22
## 3 Answers
Both examples you consider have the property that the additive structures of the Chow groups are the same, but the multiplicative structures are different. In the first case multiplication depends on the Chern classes of the bundle, and in the second case the intersection forms on $CH^1$ are $x^2 - y^2$ and $2uv$, which are different integrally. So if we capture the multiplicative structure of the motive (which is $\Delta: M(X) \to M(X) \otimes M(X)$, I think) we'll be able to do better.
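To spell out the second example a little (a sketch; here $H$ denotes the pullback of a line and $E$ the exceptional divisor on the blow-up, while $u,v$ are the classes of the two rulings of the quadric surface $\mathbf{P}^1\times\mathbf{P}^1$):
$$CH^1(\mathrm{Bl}_p\mathbf{P}^2)=\mathbf{Z}H\oplus\mathbf{Z}E,\qquad H^2=1,\ E^2=-1,\ H\cdot E=0,$$
$$CH^1(\mathbf{P}^1\times\mathbf{P}^1)=\mathbf{Z}u\oplus\mathbf{Z}v,\qquad u^2=v^2=0,\ u\cdot v=1,$$
so the two intersection forms are $x^2-y^2$ and $2uv$; the first represents $1$ while the second takes only even values, hence they are not equivalent over $\mathbf{Z}$ even though the underlying additive groups agree.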
For your example 2, both varieties are cellular (glued from affine spaces), and the numbers of cells in each dimension are the same. Such varieties always have isomorphic (Tate) motives.
In general, I think there's no better answer to the question "What can we recover about the variety X from its motive?" rather than the trivial one: "We can recover all the reasonable cohomology theories evaluated on X". I am very curious what other people will say, though.
As an aside note, I remember reading somewhere that it is expected that the integral motive of a quadric determines the quadratic form itself.
-
Here the word "motive" will stand for Grothendieck pure motives modulo rational equivalence. Your point 1. is also true for Grassmann bundles. More precisely the following result holds :
*Let $E\longrightarrow X$ be a vector bundle of rank $n$, $k\leq n$ and $Gr_k(E)\longrightarrow X$ the associated Grassmann bundle. Then $M(Gr_k(E))\simeq \coprod_{\lambda}M(X)[k(n-k)-\lambda]$, where $\lambda$ runs through all partitions $\lambda=(\lambda_1,...,\lambda_k)$ satisfying $n-k\geq \lambda_1\geq...\geq \lambda_k\geq 0$.*
You can prove it in the same fashion as the projective bundle theorem, as an application of a Yoneda-type lemma for Chow groups.
We now know many things about the motives of quadrics. For example if a quadratic form $q$ is isotropic, the motive of the associated quadric $Q$ has a decomposition $\mathbb{Z} \oplus M(Q_1) \oplus \mathbb{Z}[\dim(Q)]$, where $Q_1$ is a quadric of dimension $\dim(Q)-2$ associated to a quadratic form $q_1$ Witt equivalent to $q$. Using it inductively you get the motivic decomposition of split quadrics, and for example if $\dim(q)$ is odd and $q$ is split the motive of $Q$ is $\mathbb{Z}\oplus \mathbb{Z}[1]\oplus ... \oplus \mathbb{Z}[\dim(Q)]$. Another very important result is the Rost nilpotence theorem, which asserts that the kernel of the change of field functor on Chow groups of quadrics consists of nilpotents. This result is very fruitful because it implies that the study of the motive of quadrics can be done over a field which splits the quadric, working with rational cycles instead of cycles over the base field. Even though these motivic results give severe restrictions on the higher Witt indices of quadrics and have very important applications, the motive does not contain "everything" about the associated quadratic forms (even in terms of higher Witt indices).
Another interesting class of varieties for motivic computations is that of cellular spaces, i.e. schemes $X$ endowed with a filtration by closed subschemes $\emptyset \subset X_0\subset ... \subset X_n= X$ and affine bundles $X_i\setminus X_{i-1}\rightarrow Y_i$. In this situation the motive of $X$ is isomorphic to the direct sum of (shifts of) the motives of the $Y_i$. For example the filtration of $\mathbb{P}^n$ given by $X_i=\mathbb{P}^i$, with the affine bundles given by the structural morphisms of the $\mathbb{A}^{i}$, implies the motivic decomposition $M(\mathbb{P}^n)=\mathbb{Z}\oplus ... \oplus \mathbb{Z}[n]$, and as you can see this is the same motive as that of odd dimensional split quadrics, so you certainly lose information.
The situation is much more complicated when replacing quadratic forms by projective homogeneous varieties, but still, under some assumptions, you can recover some results such as the Rost nilpotence theorem, and we now begin to have a good description of their motives. Under these assumptions the motive of a projective homogeneous variety encodes information about the underlying variety, such as the canonical dimension, an example being the computation of the canonical dimension of generalized Severi-Brauer varieties. Some work has also been done to link motives in this case with the higher Tits indices of the underlying algebraic groups.
Just to cite a few mathematicians to whom we owe these great results: V. Chernousov, N. Karpenko, A. Merkurjev, V. Petrov, M. Rost, N. Semenov, A. Vishik, K. Zainoulline and probably many others that I forgot to mention.
edit: to add more precision to the nice answers of Mr. Chandan Singh Dalawat and Mr. Evgeny Shinder, motives of (usual) Severi-Brauer varieties of split algebras are indeed the same as those of projective spaces and split quadrics (in odd dimension), but it is obvious that over the base field they are not necessarily isomorphic, since the Severi-Brauer variety is totally split as soon as there is a rational point, whereas an isotropic quadratic form is not completely split.
-
I really don't know what motives are, but perhaps any smooth projective conic over $\mathbb{Q}$ has the same motive as $\mathbb{P}_1$. So you lose information about the places where the conic had bad reduction. The same applies to twisted forms of $\mathbb{P}_n$.
Also, perhaps any torsor under an abelian variety $A$ has the same motive as $A$, so you lose information about whether the torsor has a point or not.
Feel free to criticise or to correct me.
-
3
Over Q Severi-Brauer varieties and quadrics all have the same motives indeed, but integrally they are different. – Evgeny Shinder Feb 28 2010 at 5:13
3
The point about torsors over abelian varieties is an interesting one. For example, it makes it hard to see how one might study Sha of an elliptic curve over $\mathbb Q$ (say) from a Langlandsish point of view, because it's not clear what you might do motivically with an element of Sha (with the hope that whatever you did could then be interpreted automorphicly) which would distinguish it from the elliptic curve itself. – Emerton Feb 28 2010 at 6:32
3
In a slightly different context: in characteristic 0, non isomorphic torsors under (possibly distinct) abelian varieties have distinct classes in the Grothendieck ring $K_0(Var_K)$ (which has a specialization map to the $K_0$ of Chow motives). This is a consequence of the following theorem of Larsen & Lunts and of Bittner: over a field $K$ of characteristic $0$, if two projective smooth and geometrically connected varieties $X, Y$ have the same class in $K_0(Var_K)$, then they are stably birational. B. Poonen used this result to construct 0-divisors in $K_0(Var_{\mathbb Q})$. – Qing Liu Feb 28 2010 at 16:28
2
@Emerton: From a certain perspective, I think this is not surprising. You could say that motives, being a cohomology theory, are about maps from varieties X into some fixed generalized variety H, which would be some kind of generalization of an Eilenberg-Mac Lane space. But rational points of X are about maps into X. In other words, rational points are about the functor represented by X, whereas the motive of X is about the functor corepresented by X. I think that typically it's hard to go from information about one to the other. – James Borger Mar 1 2010 at 0:21
2
I'm not sure how confident I am in that point of view yet. But it is similar to the situation in lambda-algebraic geometry, where the Witt vector space of a variety X is accessible through the functor it corepresents, and the dual construction, the arithmetic jet space functor, is accessible through the functor it represents. And indeed, the Witt functor tells you about the motive, by de Rham-Witt theory, and the arithmetic jet space functor tells you about the rational points, following Buium. – James Borger Mar 1 2010 at 0:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9113039970397949, "perplexity_flag": "head"} |
http://mathhelpforum.com/pre-calculus/213044-epsilon-2.html | 1Thanks
# Thread:
1. ## Re: Epsilon
Originally Posted by Petrus
Definition 7 tells me what I'm doing, I don't know, but the problem says I shall use Definition 7.
I agree with the definition, of course. As I described in post #11, the inequality from the definition is translated into two inequalities. One of them holds for all x > 0, and the other only starting from some N > 0. You need to solve the second inequality to find N.
2. ## Re: Epsilon
So I've got one question, if I'm correct: you will have 2 cases because of the absolute value, f(x) > 0 and f(x) < 0. What I want to say is that in case 2 I will solve it your way, and case 1 is what I just did. Then I'm pretty much confused about what to do with these two cases.
3. ## Re: Epsilon
The problem says to find N from the definition of the limit. It must be true that $\frac{\sqrt{4x^2+1}}{x+1}<2.5$ for $x > N$ and $\frac{\sqrt{4x^2+1}}{x+1}>1.5$ for $x > N$. The graph shows that $\frac{\sqrt{4x^2+1}}{x+1}<2.5$ for all $x > 0$. Solving the second inequality, you find $N > 0$ such that $\frac{\sqrt{4x^2+1}}{x+1}>1.5$ for $x > N$. So, if $x > \max(0,N)$, we have both $x > 0$ and $x > N$ and therefore both inequalities are true. But $\max(0,N) = N$, so both of those inequalities are true for $x > N$.
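For reference, that second inequality can be solved as follows (a worked sketch, using only that $x>0$, so $x+1>0$ and squaring preserves the inequality): $\frac{\sqrt{4x^2+1}}{x+1}>1.5$ is equivalent to $2\sqrt{4x^2+1}>3(x+1)$, i.e. to $4(4x^2+1)>9(x+1)^2$, i.e. to $7x^2-18x-5>0$. The roots of $7x^2-18x-5$ are $\frac{9+2\sqrt{29}}{7}\approx 2.824$ and $\frac{9-2\sqrt{29}}{7}\approx -0.253$, so for positive $x$ the inequality holds exactly when $x>\frac{9+2\sqrt{29}}{7}\approx 2.824$. Any $N\geq 2.83$, for example $N=3$, therefore works.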
4. ## Re: Epsilon
Originally Posted by emakarov
That's not what I get. The inequality is $2\sqrt{4x^2+1}>3(x+1)$, or $7x^2-18x-5>0$.
Edit: I miscalculated, ignore this.
5. ## Re: Epsilon
Hello Emakarov!
If this would be on exam would this be a good answer(i will skip the math part):
By the graph or calculate we can se to right side of x intercept (0) we can se that it will be <0 so we only will get positive x intercept if we use x<0. So i calculate the x intercept when -(f(x)-L)<epsilon (0.5) and get x intercept as 2.82 and 0.2528. then i can set like 3 in function and look if its lower then epsilon (0.5) and it is.
6. ## Re: Epsilon
Originally Posted by Petrus
If this would be on exam would this be a good answer(i will skip the math part):
By the graph or calculate we can se to right side of x intercept (0) we can se that it will be <0 so we only will get positive x intercept if we use x<0. So i calculate the x intercept when -(f(x)-L)<epsilon (0.5) and get x intercept as 2.82 and 0.2528. then i can set like 3 in function and look if its lower then epsilon (0.5) and it is.
Sorry, this is very hard to read.
"we can se that it will be <0": what will be < 0?
"we only will get positive x intercept": why are you talking about x-intercepts? And x-intercepts of what? The x-intercepts of the original function
$f(x)=\frac{\sqrt{4x^2+1}}{x+1}$ (*)
do not arise in this problem at all.
"the x intercept when -(f(x)-L)<epsilon": you can't talk about the x-intercept "when" an equation holds. The concept of an x-intercept is only applicable to a function, not to an inequality. An inequality has solutions, which is a possibly infinite set of real numbers. Sometimes this set can be expressed using several inequalities of the form x > ... or x < ... .
"and get x intercept as 2.82 and 0.2528": the second value should be negative, but it is not important here.
"then i can set like 3 in function and look if its lower then epsilon (0.5) and it is": "like" is not appropriate in mathematical text. Which function: f(x) from (*) above or |f(x) - 2|? How can you check whether |f(x) - 2| < 0.5 for all x > 3, i.e., for infinitely many x? And why would you need to check this if you have just solved this inequality?
7. ## Re: Epsilon
Originally Posted by emakarov
Sorry, this is very hard to read.
"we can se that it will be <0": what will be < 0?
"we only will get positive x intercept": why are you talking about x-intercepts? And x-intercepts of what? The x-intercepts of the original function
$f(x)=\frac{\sqrt{4x^2+1}}{x+1}$ (*)
do not arise in this problem at all.
"the x intercept when -(f(x)-L)<epsilon": you can't talk about the x-intercept "when" an equation holds. The concept of an x-intercept is only applicable to a function, not to an inequality. An inequality has solutions, which is a possibly infinite set of real numbers. Sometimes this set can be expressed using several inequalities of the form x > ... or x < ... .
"and get x intercept as 2.82 and 0.2528": the second value should be negative, but it is not important here.
"then i can set like 3 in function and look if its lower then epsilon (0.5) and it is": "like" is not appropriate in mathematical text. Which function: f(x) from (*) above or |f(x) - 2|? How can you check whether |f(x) - 2| < 0.5 for all x > 3, i.e., for infinitely many x? And why would you need to check this if you have just solved this inequality?
I'm going to try to do my best to explain now; I'm not a good explainer. :/
When I say <0, I mean that when solving the absolute value inequality we will have a positive value when we use -(f(x)-L).
To be honest, I don't know how to describe this with words. I basically understand this, but not really well, and I need to read more about it.
8. ## Re: Epsilon
Originally Posted by Petrus
indeed that but there is a problem.. im kinda never on pc anymore.. i use my smartphone if i get stuck :S (im mostly in school without any pc)
You may have noticed how few of us are now replying to your post.
I will not as long as you are so discourteous as to post unreadable images.
If I were you, I would get access to a PC or a tablet computer.
I have a 10in tablet that is a fully functional cell phone.
It is easy to use on this site.
You could use Tapatalk on your phone.
OR you could learn to use the camera correctly.
It appears that you are just too lazy to do that. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9325979351997375, "perplexity_flag": "middle"} |
http://mathoverflow.net/revisions/9961/list | ## Return to Question
5 deleted 1 characters in body; edited title
# Colimits of schemes
This is related to another question.
I've found many remarks that the category of schemes is not cocomplete. The category of locally ringed spaces is cocomplete, and in some special cases this turns out to be the colimit of schemes, but in other cases not (which is, of course, no evidence that the colimit does not exist). However, I want to understand in detail a counterexample where the colimit does not exist, but I hardly found one. In FGA explained I've found the reference that Example 3.4.1 in Hartshorne, Appendix B is a smooth proper scheme over $\mathbb{C}$ with a free $\mathbb{Z}/2$-action, but the quotient does not exist (without proof). To be honest, this is too complicated to me. Are there easy examples? You won't help me just giving the example, because there are lots of them, but the hard part is to prove that the colimit really does not exist.
1
# colimits of schemes
this is related to another question.
I've found many remarks that the category of schemes is not cocomplete. the category of locally ringed spaces is cocomplete, and in some special cases this turns out to be the colimit of schemes, but in other cases not (which is, of course, no evidence that the colimit does not exist). however, I want to understand in detail a counterexample where the colimit does not exist, but I hardly found one. in FGA explained I've found the reference, that example 3.4.1 in hartshorne, appendix B is a smooth proper scheme over $\mathbb{C}$ with a free $\mathbb{Z}/2$-action, but the quotient does not exist (without proof). to be honest, this is too complicated to me. are there easy examples? you won't help me just giving the example, because there are lots of them, but the hard part is to prove that the colimit really does not exist. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9622792601585388, "perplexity_flag": "head"} |
http://mathhelpforum.com/advanced-algebra/144977-how-determine-sigma-sgn-sigma-matrices.html | # Thread:
1. ## how to determine sigma and sgn(sigma) for matrices
As the topic says, I don't know how to determine $sgn(\sigma)$ or $\sigma$ for any n x m matrix
I know this section falls under determinants but I am kind of having a hard time finding online resources for this section
2. I thought I knew matrices pretty well, but you will have to tell me what "$\sigma$" represents. At a guess, I might take it to mean the permutations used in defining the determinant of a matrix:
The determinant of an n by n matrix can be defined as $\sum_{\sigma} sgn(\sigma)\, a_{1, \sigma(1)}a_{2,\sigma(2)}\cdots a_{n, \sigma(n)}$.
where $\sigma$ is a permutation of {1, 2, 3, ..., n}, $sgn(\sigma)$ is 1 if $\sigma$ is an even permutation, -1 if it is an odd permutation, and the sum is taken over all such permutations.
But that doesn't make sense, nor does your "I know this section falls under determinants", because an "n by m matrix" only has a determinant if m= n.
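To make that sum-over-permutations formula concrete, here is a small Python sketch (my own illustration, not from the thread), with the sign computed by counting inversions:

```python
from itertools import permutations

def sgn(perm):
    # sign of a permutation given as a tuple of 0-based values:
    # (-1) to the number of inversions, i.e. pairs i < j with perm[i] > perm[j]
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(a):
    # determinant of a square matrix (list of rows) via the permutation-sum formula
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        term = sgn(perm)
        for i in range(n):
            term *= a[i][perm[i]]
        total += term
    return total

print(sgn((2, 1, 0)))                            # -1 (three inversions)
print(det([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))    # 25
```

The second printout agrees with cofactor expansion, which is a quick way to sanity-check the permutation sum.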
3. Sorry, my question was asked rather incorrectly.
In one of the examples I took down in class, our professor asked us to determine $sgn(\sigma)$
where e.g. $\sigma = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix}$, a 2x3 matrix
I don't know how to solve this or what significance this has for matrices.
I'm sorry if my English isn't good.
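For what it's worth, the usual way to read that example (a sketch of the standard convention, which may or may not be what the professor intended): the two-row array is two-line notation for the permutation $\sigma$ with $\sigma(1)=3$, $\sigma(2)=2$, $\sigma(3)=1$, rather than a matrix in the linear-algebra sense. Its inversions, the pairs $i<j$ with $\sigma(i)>\sigma(j)$, are $(1,2)$, $(1,3)$ and $(2,3)$; there are three of them, so $sgn(\sigma)=(-1)^3=-1$. Equivalently, $\sigma$ is the single transposition swapping 1 and 3, hence an odd permutation.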
http://terrytao.wordpress.com/tag/margin-of-error/ | What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Tag Archive
You are currently browsing the tag archive for the ‘margin of error’ tag.
## Small samples, and the margin of error
10 October, 2008 in expository, math.ST, non-technical | Tags: margin of error, polls, randomness, sample size | by Terence Tao | 31 comments
The U.S. presidential election is now only a few weeks away. The politics of this election are of course interesting and important, but I do not want to discuss these topics here (there is not exactly a shortage of other venues for such a discussion), and would request that readers refrain from doing so in the comments to this post. However, I thought it would be apropos to talk about some of the basic mathematics underlying electoral polling, and specifically to explain the fact, which can be highly unintuitive to those not well versed in statistics, that polls can be accurate even when sampling only a tiny fraction of the entire population.
Take for instance a nationwide poll of U.S. voters on which presidential candidate they intend to vote for. A typical poll will ask a number $n$ of randomly selected voters for their opinion; a typical value here is $n = 1000$. In contrast, the total voting-eligible population of the U.S. – let’s call this set $X$ – is about 200 million. (The actual turnout in the election is likely to be closer to 100 million, but let’s ignore this fact for the sake of discussion.) Thus, such a poll would sample about 0.0005% of the total population $X$ – an incredibly tiny fraction. Nevertheless, the margin of error (at the 95% confidence level) for such a poll, if conducted under idealised conditions (see below), is about 3%. In other words, if we let $p$ denote the proportion of the entire population $X$ that will vote for a given candidate $A$, and let $\overline{p}$ denote the proportion of the polled voters that will vote for $A$, then the event $\overline{p}-0.03 \leq p \leq \overline{p}+0.03$ will occur with probability at least 0.95. Thus, for instance (and oversimplifying a little – see below), if the poll reports that 55% of respondents would vote for A, then the true percentage of the electorate that would vote for A has at least a 95% chance of lying between 52% and 58%. Larger polls will of course give a smaller margin of error; for instance the margin of error for an (idealised) poll of 2,000 voters is about 2%.
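To see where those percentages come from concretely, here is a minimal sketch (my own illustration, using the usual normal-approximation bound $1.96\sqrt{p(1-p)/n}$ at the worst case $p=1/2$; the appendix mentioned below proves a weaker bound by a more elementary route):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for a proportion estimated from n independent samples,
    # via the normal approximation; p = 0.5 is the worst case.
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 2000):
    print(n, round(margin_of_error(n), 4))
# prints 1000 0.031 (about 3%) and 2000 0.0219 (about 2%)
```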
I’ll give a rigorous proof of a weaker version of the above statement (giving a margin of error of about 7%, rather than 3%) in an appendix at the end of this post. But the main point of my post here is a little different, namely to address the common misconception that the accuracy of a poll is a function of the relative sample size rather than the absolute sample size, which would suggest that a poll involving only 0.0005% of the population could not possibly have a margin of error as low as 3%. I also want to point out some limitations of the mathematical analysis; depending on the methodology and the context, some polls involving 1000 respondents may have a much higher margin of error than the idealised rate of 3%.
Read the rest of this entry »
http://mathoverflow.net/questions/93136/enumerative-meaning-of-natural-q-catalan-numbers | ## enumerative meaning of natural q-Catalan numbers
Define $[n]=(1-q^n)/(1-q)$ and $[n]!=[1][2][3] \cdots [n]$, so that $[2n]!/[n]![n+1]!$ is a polynomial in $q$ (the most algebraically natural $q$-analogue of the Catalan numbers); what enumerative interpretation(s) does it have, vis-a-vis the standard members of the "Catalan zoo"? I would be especially interested in answers pertaining directly to triangulations of polygons, without any intervening bijections.
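For concreteness, the first two nontrivial cases, computed directly from this definition (a quick check by hand): $\frac{[4]!}{[2]!\,[3]!}=\frac{[4]}{[2]}=\frac{1+q+q^2+q^3}{1+q}=1+q^2$ and $\frac{[6]!}{[3]!\,[4]!}=\frac{[5][6]}{[2][3]}=1+q^2+q^3+q^4+q^6$, which specialize to the Catalan numbers $2$ and $5$ at $q=1$.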
-
## 5 Answers
These $q$-Catalan numbers have been studied mainly as the generating function of statistics on Dyck paths, such as the Major index. They are also a special case of the q,t-Catalan numbers of Garsia and Haiman, related by $$\frac{1}{[n+1]_q}\left[{2n\atop n}\right]_q=q^{\binom{n}{2}}C_n(q,1/q),$$ so they can be related to the bounce and area statistics as well as some statistics on parking functions which have been investigated in the context of these general polynomials.
As far as statistics on triangulations, I believe there is some interest from the folks who work on cyclic sieving phenomena. In fact, the triple $$(\mathcal T_{n+2},C_{n+2},\frac{1}{[n+1]_q}\left[{2n\atop n}\right]_q)$$ exhibits a cyclic sieving phenomenon, in the sense that plugging in primitive $d$-roots of unity in the $q$-Catalan numbers counts triangulations with $d$-fold symmetry. The main obstacle in finding a combinatorial proof of this fact (originally due to Reiner-Stanton-White) is precisely finding a weight on triangulations on $n+2$-gons so that $$\sum _{T\in \mathcal T _{n+2}}w(T)=\frac{1}{[n+1]_q}\left[{2n\atop n}\right]_q$$ and $w$ is natural in the sense that it is well behaved with respect to rotations. See these slides of Sagan-Roichman for more information on this. In particular, I don't think there is a very satisfactory answer to your question known at the moment (but I hope I'm wrong).
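For instance, in the smallest nontrivial case $n=2$ (a check one can do by hand): the polynomial is $1+q^2$, and its values at the fourth roots of unity $1,i,-1,-i$ are $2,0,2,0$, matching the fact that the identity and the half-turn of the square fix both of its triangulations (each diagonal is mapped to itself), while a quarter-turn swaps the two diagonals and fixes neither.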
-
You can find one answer in Enumerative Combinatorics II by R. Stanley; see Exercise 34 (b) in Chapter 6. This would need to be translated to triangulations of polygons.
-
Another less known interpretation of MacMahon's $q$-Catalan numbers is
$$\sum_{\pi \in \mathcal{S}_n(231)} q^{\operatorname{maj}(\pi) + \operatorname{imaj}(\pi)} = \frac{1}{[n+1]_q} \begin{bmatrix} 2n \newline n \end{bmatrix}_q,$$ where $\mathcal{S}_n(231)$ is the set of all permutations of length $n$ avoiding the pattern $231$, and where $\operatorname{maj}(\pi)$ is the major index, and where $\operatorname{imaj}(\pi) := \operatorname{maj}(\pi^{-1})$ is the inverse major index (see http://arxiv.org/abs/0803.3706).
I would actually be very surprised if there were a known direct statistic on triangulations giving MacMahon's $q$-Catalan numbers. I agree with Gjergji that the cyclic sieving interpretation seems to be the best known combinatorial interpretation directly in terms of triangulations.
-
To build on Bruce's answer, if we unwind the standard bijection between Dyck paths and triangulations, we get the following:
We'll write a chord as $(ij)$, meaning that it joins vertices $i$ and $j$. For every chord $(i,j)$, there are two triangles $(ijk)$ and $(ji \ell)$ containing $(ij)$. If $i < k < j < \ell$, we'll say that this chord is up-flippable; if $k < i < \ell < j$, we will say that the chord is down-flippable. As you probably know, the Tamari lattice is the poset structure on triangulations generated by up-flips.
The valleys of a Dyck path are in bijection with the down-flippable chords. If I have not made any errors, given a down-flippable chord $(ij)$, the index of that valley is `$$2 \# \{ (abc) \in T : a<b \leq i<j \leq c\} + \# \{ (abc) \in T : a\leq i < j \leq b< c \}$$` where $T$ is the set of triangles in the triangulation.
So the final answer is to sum the above quantity over all down-flippable $(ij)$.
-
David, I don't think this works for n=2; your statistic is trivially 0 on both triangulations of the 4-gon, but the q-polynomial we're after is q^0+q^2, not 2q^0. Right? – James Propp Apr 16 2012 at 7:43
Sorry, try it now. – David Speyer Apr 16 2012 at 16:00
For $n=2$, the triangulation with chord (13) is supposed to give 0, because the chord is upflippable; the triangulation with (24) is supposed to give 2 because of 124. – David Speyer Apr 16 2012 at 16:03
See my random speculations below. It never hurts to look in the OEIS, and it would appear that the coefficient of $q^k$ in $\frac{1}{[n+1]}\left[{2n\atop n}\right]_q$ is the number of Dyck words of length $2n$ having major index $k$. I have not had time to look into translating that into triangulations.
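To make that concrete, here is a small self-contained check (my own sketch; I take the statistic to be the sum of the positions of the valleys, i.e. of the $DU$ factors, of the Dyck word, which reproduces these polynomials in the small cases below):

```python
from functools import reduce
from itertools import product
from sympy import symbols, expand, cancel

q = symbols('q')

def qint(k):
    # [k] = 1 + q + ... + q^(k-1)
    return sum(q**i for i in range(k))

def qfact(n):
    # [n]! = [1][2]...[n]
    return reduce(lambda a, b: a * b, [qint(i) for i in range(1, n + 1)], 1)

def q_catalan(n):
    # [2n]! / ([n]! [n+1]!), simplified to a polynomial in q
    return expand(cancel(qfact(2 * n) / (qfact(n) * qfact(n + 1))))

def dyck_words(n):
    # words in U, D of length 2n whose prefixes never dip below height 0 and which end at height 0
    for w in product('UD', repeat=2 * n):
        height, ok = 0, True
        for c in w:
            height += 1 if c == 'U' else -1
            if height < 0:
                ok = False
                break
        if ok and height == 0:
            yield w

def valley_sum(w):
    # sum of the 1-based positions i at which a D step is immediately followed by a U step
    return sum(i + 1 for i in range(len(w) - 1) if w[i] == 'D' and w[i + 1] == 'U')

for n in range(1, 6):
    statistic = expand(sum(q ** valley_sum(w) for w in dyck_words(n)))
    assert expand(statistic - q_catalan(n)) == 0
print("valley statistic matches [2n]!/([n]![n+1]!) for n = 1..5")
```

For $n=2$ this gives $1+q^2$, coming from $UUDD$ (no valley) and $UDUD$ (a valley at position 2).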
OLDER
We do know that the $q$-binomial coefficient $$\left[{m\atop k}\right]_q=\frac{[m]!}{[k]![m-k]!}$$ is a polynomial (with non-negative integer coefficients) equal to $\binom{m}{k}$ when $q=1$ and that the coefficient of $q^d$ is the number of ordered lists of type $0^k1^{m-k}$ having $d$ inversions. Equivalently, the number of lattice paths from $(0,0)$ to $(k,m-k)$ which enclose a region of area $d$ (with the $x$-axis and $x=k$).
Each of the lattice paths counted by $\binom{2n}{n}$ has an excedence $0 \le e \le n$ which is the number of horizontal edges above the diagonal. Each value of $e$ occurs $\frac{1}{n+1}\binom{2n}{n}$ times and the Catalan numbers count the paths of excedence $0.$
These $q$-Catalan polynomials $\frac{1}{[n+1]}\left[{2n\atop n}\right]_q$ again have non-negative integer coefficients and for $q=1$ equal the Catalan numbers, which count the excedence $0$ lattice paths from $(0,0)$ to $(n,n)$. One could dream that there is an interpretation of the coefficient of $q^d$ but I certainly can't find one. A recurrence relation might be helpful in back constructing an interpretation.
The obvious greedy guess fails. For that one would have a recurrence $C_q(n+1)=\sum q^{(k+1)(n-k)}C_q(k)C_q(n-k).$ Those polynomials do not have a strong claim on being a natural q-analogue.
-
The statistic on Dyck paths is actually "well known", see for example the paper by Christian Stump already mentioned. (I believe it also appears in papers of Christian Krattenthaler.) – Martin Rubey Apr 19 2012 at 4:55
http://www.sciforums.com/showthread.php?111942-The-Problem-of-Time-leads-to-a-Problem-of-Energy-for-the-Universe | • Forum
Thread:
1. The Problem of Time leads to a Problem of Energy for the Universe
Note: the problem of time in physics refers to the timelessness of relativity. In this post, I will show how, if relativity is taken seriously, we have a problem concerning energy which is a direct consequence of Noether's Theorem. I offer two possible solutions.
Energy is related to Time and Space
Bernoulli's equation is a representation of the law of "the conservation of energy" which is related by Noether's Theorem to the geometry of time - this basically means that it does not matter when you might conduct an experiment, there is a symmetry of a systems action which should imply a conservation law each time.
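For readers who want the textbook computation behind this (a standard derivation, stated here as a sketch and not specific to this thread): for a Lagrangian $L(q,\dot{q},t)$ with the Euler-Lagrange equation $\frac{d}{dt}\frac{\partial L}{\partial \dot{q}}=\frac{\partial L}{\partial q}$, define the energy $E=\dot{q}\frac{\partial L}{\partial \dot{q}}-L$. Then $\frac{dE}{dt}=\ddot{q}\frac{\partial L}{\partial \dot{q}}+\dot{q}\frac{d}{dt}\frac{\partial L}{\partial \dot{q}}-\frac{\partial L}{\partial q}\dot{q}-\frac{\partial L}{\partial \dot{q}}\ddot{q}-\frac{\partial L}{\partial t}=-\frac{\partial L}{\partial t}$, so if the Lagrangian has no explicit time dependence (time-translation symmetry) the energy is conserved, and if it does, conservation fails by exactly that term.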
No Time Must Imply no Energy
And so for now, we must understand energy in the context of time. The absence of energy, therefore, would imply the absence of time and vice versa; but why is this important?
Well, I have a proposal to make. Because Einstein's equations generate a motion in time that is a symmetry of the theory and thus not a true time evolution at all, we seem to be left with a timeless model. The universe would then be timeless.
Yet, if this is true and the universe is truly timeless, then surely this would mean that energy is absent from our universe as well?
The counterintuitive facts just keep on trucking from the soil of Relativity, but this is one fact I must state. The inability to find a time evolution for the universe would result in a faulty premise concerning whether it has an energy.
Fred Alan Wolf asked the question in his book Parallel Universes
''How can the universe have an energy?''
He further makes his point clear by saying that for the universe to have a defined energy someone would need to be sitting outside the universe to actually observe the energy. There is a way out of his problem and the paradox of timelessness and energy which I proposed above.
The whole universe can only be observed in two possible ways: that is, by someone either sitting outside the universe, or by someone who is sitting in the infinite future. Usually both examples are considered impractical [1] because they seem to appeal to unphysical concepts.
I however, can see merit in the idea that something in our future has defined a total energy for the universe and by doing so, we may be able to recover the present moment; though it will not let us create the past and future, no amount of nip and tuck will rectify the problem that the past and future are simply illusions of the mind.
Enter the Transactional Interpretation (TI). In a separate post I mentioned how the TI could allow for signals to be sent back from the future to our past and shape the past as we know it. This is not wild speculation but is a corner stone understanding of physics in general, through what is called Wheeler's Delayed Choice Experiment. In this experiment, which for the sake of getting to the point quickly will be oversimplified, the act of making an observation on a system which may have traversed multiple paths due to the wave function will collapse it onto a single path - but what we are really doing when we observe the system is we are effectively creating a defined past for that object (mind I am using past as a calculational tool).
So a particle might travel the universe, take every possible path, arrive here on earth to be observed by a scientist to send quantum information backwards (the negative time wave solution of the TI) to the past history of the particle and define attributes which were but a smear of possibilities... Now, before we lose track, I will quickly get to the point. This is perhaps what is happening in our universe and why thinking about an observer in the infinite future is important. You can't have an observer sit outside space; nothing can exist outside of the universe, not even an observer.
Well, it turns out you don't even need an infinite future. To solve this, you need a boundary, or rather a symmetry in time. The very last instant of the universe's existence will be where an observer would need to sit to view all the energy of the universe. By doing so, they would define whether the universe began with an energy or not. Who is this observer? Is it a form of intelligence? I don't know; all I know is that there is a problem if timelessness exists, namely that energy automatically ceases to exist, yet one way to solve this is by saying that something intelligent is located in the very last instant of the universe which is sending signals back in time in the form of quantum waves (the kind you find in the transactional interpretation) so that the early universe could have some kind of defined volume of energy and perhaps squeezing in the present time.
I am not completely against the idea of a present time existing outside in the universe defining objects, I just think it should be noted that the past and future certainly do not.
There could be something more sinister to realize perhaps, that maybe the universe is not a conserved case of energy. This statement however just seems too hard to believe ... or does it? The universe is now receding faster than light, which seems to indicate that our universe is using energy at a faster rate. In doing so, it might be conjectured that on the crux of things, the universe is not conserving energy like a ground state atom and thus will quantum leap sometime in the future. Odd to think of a universe quantum leaping, but this has been the literature in quantum cosmology.
Usually when we talk about a system not conserving its energy, we talk about the system not having a symmetry. A symmetry would let a Lagrangian density be $\delta L = 0$. That is a conserved energy from symmetry, but if you add something into the equation that breaks this symmetry then you no longer have a conserved quantity. So maybe, just maybe Noether's Theorem is not applicable to the universe because it does not retain the symmetry allowed to express the system as a conserved quantity.
What do you think, can you think of any other solutions to this problem I raise today?
2. Oops sorry, my citation
[1] http://xxx.lanl.gov/PS_cache/gr-qc/p.../9811053v2.pdf
3. This thread should be posted in the Physics forum, or in Alternative Theories or Pseudoscience. It is not appropriate for General Science and Technology.
4. Moderator note:
Thread moved
5. Originally Posted by Mister
Bernoulli's equation is a representation of the law of "the conservation of energy" which is related by Noether's Theorem to the geometry of time - this basically means that it does not matter when you might conduct an experiment, there is a symmetry of a systems action which should imply a conservation law each time.
What has Bernoulli's equation got to do with the rest of your post?
No Time Must Imply no Energy
And so for now, we must understand energy in the context of time. The absence of energy, therefore, would imply the absence of time and vice versa; but why is this important?
You haven't established why an absence of energy would imply an absence of time.
Well, I have a proposal to make. Because Einstein's equations generate a motion in time that is a symmetry of the theory and thus not a true time evolution at all, we seem to be left with a timeless model. The universe would then be timeless.
Einstein's equations include a time evolution. What are you talking about? Time is one of the coordinates in Einstein's equations.
Obviously, our universe is not timeless, so Einstein's equations would be useless if they did not include time evolution.
The counterintuitive facts just keep on trucking from the soil of Relativity, but this is one fact I must state.
"Counterintuitive" is not the same as "wrong". That's a mistake that many cranks make.
Fred Alan Wolf asked the question in his book Parallel Universes
Isn't he a crank?
''How can the universe have an energy?''
He further makes his point clear by saying that for the universe to have a defined energy someone would need to be sitting outside the universe to actually observe the energy.
Did he explain why?
I however, can see merit in the idea that something in our future has defined a total energy for the universe and by doing so, we may be able to recover the present moment; though it will not let us create the past and future, no amount of nip and tuck will rectify the problem that the past and future are simply illusions of the mind.
I'm not sure I understand what you're saying here.
Are you saying that the past of the universe is determined by the future, and not vice-versa?
Also, you seem to be clearly saying that in any case "past" and "future" don't really exist. That leaves only "the present", whatever that is.
How and why does the mind create these illusions of past and future? What of causation? Where does it leave time if there is only now and no before or after?
Enter the Transactional Interpretation (TI). In a separate post I mentioned how the TI could allow for signals to be sent back from the future to our past and shape the past as we know it.
But this would all be only in the mind, since past and future exist only there. Right?
This is not wild speculation but is a corner stone understanding of physics in general, through what is called Wheeler's Delayed Choice Experiment. In this experiment, which for the sake of getting to the point quickly will be oversimplified, the act of making an observation on a system which may have traversed multiple paths due to the wave function will collapse it onto a single path - but what we are really doing when we observe the system is we are effectively creating a defined past for that object (mind I am using past as a calculational tool).
And this tells us about the Transactional Interpretation, does it? What is the Transactional Interpretation, exactly?
The material you have presented above is not a "corner stone" of physics that I am familiar with. Why is it not included in all the standard textbooks?
Well, it turns out you don't even need an infinite future. To solve this, you need a boundary, or rather a symmetry in time. The very last instant of the universe's existence will be where an observer would need to sit to view all the energy of the universe.
If the universe is infinite in time, there is no last instant.
I am not completely against the idea of a present time existing outside in the universe defining objects, I just think it should be noted that the past and future certainly do not.
Does that mean your mind is all that exists?
There could be something more sinister to realize perhaps, that maybe the universe is not a conserved case of energy. This statement however just seems too hard to believe ... or does it? The universe is now receding faster than light, which seems to indicate that our universe is using energy at a faster rate.
My house isn't receding faster than light. And it is part of the universe.
Also, please explain how the universe uses energy in the sense you're talking about.
In doing so, it might be conjectured that on the crux of things, the universe is not conserving energy like a ground state atom and thus will quantum leap sometime in the future. Odd to think of a universe quantum leaping, but this has been the literature in quantum cosmology.
What will it leap to?
Usually when we talk about a system not conserving its energy, we talk about the system not having a symmetry. A symmetry would let a Lagrangian density be $\delta L = 0$.
What is a Lagrangian density? Please explain.
That is a conserved energy from symmetry, but if you add something into the equation that breaks this symmetry then you no longer have a conserved quantity. So maybe, just maybe Noether's Theorem is not applicable to the universe because it does not retain the symmetry allowed to express the system as a conserved quantity.
What do you think, can you think of any other solutions to this problem I raise today?
I think your idea that past and future are all in the mind fails at the most basic test of common sense.
6. ''What has Bernoulli's equation got to do with the rest of your post?''
You will need to excuse that little snippet mentioning Bernoulli's equation as it was extracted from the book I am writing, what was important about the snippet however was the mention of conservation and Noether's Theorem.
''You haven't established why an absence of energy would imply an absence of time.''
That is simply a title, the reason becomes obvious later when explained.
''Einstein's equations include a time evolution. What are you talking about? Time is one of the coordinates in Einstein's equations.
Obviously, our universe is not timeless, so Einstein's equations would be useless if they did not include time evolution.''
That's OK James, I will assume you don't know about timelessness, or how it enters Einstein's equations. I was actually discussing this in the ''what is time'' thread. I had been elaborating on the fact that timelessness exists in relativity, among other concepts, to which Dinosaur later made a post:
''First: If you need a definition for before & after, you are not able to understand time & can forget about this thread.
Einstein once wrote something like the following about time, which I think is very succinct and pretty much describes it.
When an individual ponders his experiences, he can order the events in his life using the criteria of before & after. He can assign a number to each event in such a way that events assigned a lower number occurred before events assigned a higher number.
It is convenient to use a device called a clock to provide a consistent set of numbers for use in ordering events.
In describing the laws of physics using the language of mathematics, it is convenient (if not necessary) to use a continuous variable called time. This variable similarly orders events based on the criteria of before and after.
There is little (if anything) more that can be said relating to time.''
He further addresses some issues of time in relativity:
''BTW: Einstein's remark about past, present, & future was not intended to be taken at face value.
That remark was made in the context of a discussion of World Lines, which model the laws of physics as static geometry.
Most people do not realize that the mathematics of both Special & general relativity treat physics as geometry. A moving particle is viewed as an unchanging curve. The actions of a person is viewed as a collection of related world lines.
Since the model is geometric, there is no motion. There is no past, present or future. Consideration of motion, past, present, & future require translating the geometric model into concepts compatible with the world of our ordinary senses.
Special relativity is a flat space model (plane geometry), while General Relativity is a curved space model (spherical or other non Euclidean geometry). ''
No doubt he has articulated this better than me, I was actually quite impressed by his post. It is true, that timelessness comes out of the theory, just in much the same sense I described it but articulated less well. I even bolded a particular part to show that evolution is to fit our senses, not the physical objective world.
Evolution, true evolution at that, does not exist in GR. Motion arises as a symmetry of the theory, not a true time evolution. It is akin to having diffeomorphism invariance on your theory, which allows you to shuffle spacetime coordinates without changing the overall structure of your theory.
I hope this explains it more fully.
''"Counterintuitive" is not the same as "wrong". That's a mistake that many cranks make.''
I agree. Yet, I state that counterintuitive is perhaps an indication of a breakdown of understanding... using that to your advantage, we search for other possible solutions.
''Isn't he a crank?''
He revolutionized spin. He is very influential. If he is a crank, that is in the eye of the beholder.
''Did he explain why''
No, but he indirectly meant it involved the theory of Observation. It is true that even a particle can be an observer, but a particle cannot measure a universe. So instead, we use the analogy of a human observer. The reason why the universe cannot have an energy is because no one has measured it.
Believe it or not, this energy problem of the universe can be solved by believing everything was predetermined at the Big Bang, possibly by the Bohmian Interpretation of QM using Pilot Waves. But neglect any deterministic universe and you run back into the same problem.
''I'm not sure I understand what you're saying here.
Are you saying that the past of the universe is determined by the future, and not vice-versa?
Also, you seem to be clearly saying that in any case "past" and "future" don't really exist. That leaves only "the present", whatever that is.
How and why does the mind create these illusions of past and future? What of causation? Where does it leave time if there is only now and no before or after?''
A lot of questions in there, I fear by the time I have finished this post I will need to log in again
Yes, the future will determine the past of the universe. Note, we cannot think of past and future in the normal sense. You can think of them also existing within the present frame. One way to tackle the difference between two present frames, I have determined, is by involving Einstein's ''Relativity of Simultaneity.'' In effect, one person viewing an event in time can be different from another's perspective. The key clue however is that both observers exist in the present. Think of the future then as a massive time delay between events happening here in our frame of reference; both are true and exist within the present time.
But yes, the future will determine the past if it adheres to Wheeler's Delayed Choice Experiment. Right... how does the mind create the past and future? I have decided from my own studies that some illusory concept of a past and future can only exist when memory is present, and not just memory, but a device capable of ''knowing'' that memory. We are such devices. One moment which passes is stored as memory and is not forgotten, so one way the mind is able to make sense of such a phenomenon is by the illusion itself of the past. We have pasts because we have memory of events, in a shorter, simpler way of explaining it.
Causation is a [physical] phenomenon which has some serious problems if one takes relativity seriously; see above concerning my post and Dinosaur's. Finally, where does it leave time if there is no before or after? There is still the now, and there will always be the now, in the form of the present moment, or present sphere as I call it.
''But this would all be only in the mind, since past and future exist only there. Right?''
Yes. The idea of a past and future is in one's mind, we have not yet articulated a way to talk about how quantum information in the form of psi-waves can stretch through the present moments of existence. I haven't yet been able to articulate it without involving past or future as a way to express the dynamics. I am sure there is a unique way to view it and explain it.
''And this tells us about the Transactional Interpretation, does it? What is the Transactional Interpretation, exactly?
The material you have presented above is not a "corner stone" of physics that I am familiar with. Why is it not included in all the standard textbooks?''
The way the transactional interpretation treats the wave function is the idea that a positive time wave and a negative time wave are able to move from the past to the future and from the future to the past. The wave moving forward in time is the retarded wave function and the wave moving back is the advanced wave function; and as you may guess, the waves are solutions carrying different quantum information packets which, upon taking the absolute square amplitude, will define real existing things.
The emitter could be an electron, radiating a photon, which is caused by producing a field. The field is time-symmetric under the Wheeler-Feynman description, which, as John Cramer describes it, is a ”time-symmetric combination of a retarded field which propagates into the future and an advanced field which propagates into the past.”
He considers a net field which consists of a retarded plane wave form F1
$F_1 = e^{i(kr - \omega t)}$
for $t \geq t_1$. Here, $t_1$ is the instant of emission. The advanced solution $G_1$ is simply
$G_1 = e^{-i(kr - \omega t)}$
for $t \leq t_1$. The idea is that the absorbing electron responds to the incident retarded field $F_1$ in such a way that it will gain energy, recoil, and produce a new retarded field $F_2=-F_1$ which exactly cancels the incident field $F_1$. The net field after such a transaction is zero.
$F_{net} = (F_1 + F_2) = 0$
Applying this to a universe can be beneficial. It can help explain how the early universe came into existence, because the future implied it through probability. The future of our universe can shape the early universe in such a way that it can define parts of the universe which are smeared by possibilities and out of which only one true history can survive. So there is the chance that the wave function in our universe is sending information back to points in our universes history where the early universe is just being formed.
''If the universe is infinite in time, there is no last instant.''
James, you quoted me saying ''Well, it turns out you don't even need an infinite future.''
So I am very much aware of this. You just haven't read it right.
''Does that mean your mind is all that exists?''
No. I believe the mind is a subsystem of the system we call the universe.
''My house isn't receding faster than light. And it is part of the universe.
Also, please explain how the universe uses energy in the sense you're talking about.''
The observable universe has been measured to be expanding faster than light. It is not matter moving at superluminal speeds, but the space in between. As for what this has to do with the energy of the universe, the current understanding is that as the universe expands, more energy is released into the vacuum. A faster expansion would mean that energy is being released more rapidly.
''What will it leap to?''
I don't know. A safe, articulate way of stating this would be simply that it quantum leaps into a new configuration, but what that configuration is, is so far, unclear.
''What is a Lagrangian density? Please explain.''
I will need to use some math here.
We may have a Lagrangian
$L = \int dx [\frac{\dot{\phi}^2}{2} - \frac{\dot{\phi_{x}}^2}{2}]$
This would be the canonical momentum in respect to $\phi$. An example of breaking the symmetry is if you had some potential term in there, and usually a simple potential may have the form
$\frac{M^2\phi^2}{2}$
Interested in the conservation, a simple Noether Theorem would include a transformation with a small parameter, or perturbation. A conserved solution would be
$q_i \rightarrow \phi_i + \epsilon f_i (q)$
The way this transforms is
$\sum_i P_i f_i (q)$
The epsilon disappears (it is such a small quantity) and you are left with a conserved quantity
$\delta L = 0$
If it was a field momentum, $\pi$ then $\sum_i$ is replaced with $\int dx$ and the momentum is replaced with $\pi(x)$ yielding
$\int dx \pi(x) = C$
where C is some constant.
''I think your idea that past and future are all in the mind fails at the most basic test of common sense.''
Just like I said, it is counterintuitive. But it is one we must take seriously, I argue.
7. Mister,
I'll get back to the rest later, but right now I've only got time to ask some questions about the maths at the end of your last post.
Originally Posted by Mister
$L = \int dx [\frac{\dot{\phi}^2}{2} - \frac{\dot{\phi_{x}}^2}{2}]$
What is $\phi$ and what is $x$?
This would be the canonical momentum in respect to $\phi$.
What is? $L$, or $x$ or the whole integrand, or what? Which one is the canonical momentum?
An example of breaking the symmetry is if you had some potential term in there, and usually a simple potential may have the form
$\frac{M^2\phi^2}{2}$
What's $M$? And what would I do with this term? Add it to the integrand of the Lagrangian?
What symmetry would it break?
Interested in the conservation, a simple Noether Theorem would include a transformation with a small parameter, or perturbation. A conserved solution would be
$q_i \rightarrow \phi_i + \epsilon f_i (q)$
What is $q_i$? What is $f_i(q)$? Is the $q$ in the $f_i$ the same as $q_i$?
What equation is this a solution to?
What is Noether's theorem, by the way?
The way this transforms is
$\sum_i P_i f_i (q)$
Sorry. I don't understand what is transforming to what, or why it needs to transform. Please explain.
Also, you have only given an expression for something here. I don't see any explanation of a transform. What is $P_i$?
The epsilon disappears (it is such a small quantity) and you are left with a conserved quantity
$\delta L = 0$
Can you fill in the intermediate steps for me, please? I don't see how this follows.
If it was a field momentum, $\pi$ then $\sum_i$ is replaced with $\int dx$ and the momentum is replaced with $\pi(x)$ yielding
$\int dx \pi(x) = C$
where C is some constant.
What is a field momentum?
And I'm not sure what you're saying. Which sum becomes an integral? Which thing becomes a field momentum?
And none of this seems to answer my original question: what is a Lagrangian density?
8. The Lagrangian density is a function of the fields and, in my case, the one space derivative (which could have included more dimensions and time), where the integral is over the space of the Lagrangian. To increase your dimensions, note that you must change to $\int d^3 x$ for three space dimensions.
That is to answer what the Lagrangian density is; I did answer it above in the math, but I probably could have been clearer.
to the rest of your questions... $\phi$ is a field. $x$ is the space dimension. $L$ is the Lagrangian, ''L'' for short. $M$ is the mass.
''What symmetry would it break?''
It would break the symmetry when performing a transformation of the type I gave. A symmetry in this language then comes in the form of $(\phi \rightarrow \phi + \epsilon)$. If it is not a conserved quantity, then it is not a symmetry of the theory.
$q_i$ is velocity.
You may come across some quadratic form of the velocity in a Lagrangian, such as
$\mathcal{L} = \sum_i \frac{M_i}{2} \dot{q}_{i}^{2} + U(q)$
Where $U(q)$ is the potential. This equation is equal to
$= T - U$
so then it follows that $(T+U)$ is a conserved quantity.
...Yes, $q$ is the same as the $q_i$, except with our summation. $f$ here is just a number, which can be set equal to $1$. $\epsilon$ is $\delta x = \epsilon$.
So, we may see that
$\sum_i P_i f_i (q)$ is a conserved quantity because it is unchanged under these rules; thus it satisfies the conservation of $\delta L = 0$. The $L$ again is just the Lagrangian.
$P_i$ is just the momentum.
Now, the field momentum is different to ordinary momentum and is given as $\pi = \frac{\partial \mathcal{L}}{\partial \dot{\phi}}$. It isn't Lorentz invariant. But you can replace $P$ ordinary momentum for the field momentum $\pi$.
9. Oh, sorry, you also asked what field momentum is exactly. When a charge interacts with an electric field, the field makes the charge move. That means that the field gives the particle momentum. This is field momentum. You might deal with a Lagrangian of the kind for charges in a four-vector electromagnetic field as
$\mathcal{L} = \frac{M\dot{x}^2}{2} + \bar{e}\vec{A} \cdot v$
for instance. Thus
$\frac{\partial \mathcal{L}}{\partial \dot{x}} = M\dot{x} + \bar{e}\vec{A}_x$
10. Originally Posted by Mister
''What is a Lagrangian density? Please explain.''
I will need to use some math here.
We may have a Lagrangian
$L = \int dx [\frac{\dot{\phi}^2}{2} - \frac{\dot{\phi_{x}}^2}{2}]$
This would be the canonical momentum in respect to $\phi$.
No, it wouldn't. The canonical momentum wrt a particular field has a very specific meaning. You just wrote down a Lagrangian, you even call it so yourself!
Are you picking bits of text from a source you don't understand again?
Originally Posted by Mister
An example of breaking the symmetry is if you had some potential term in there
No, putting in mass doesn't necessarily break a symmetry. It depends what the symmetry is and how you're putting the mass in.
It's little comments like that which hint at the fact you're trying to reword a particular document you're picking from, rather than explaining things in your own words.
Originally Posted by Mister
Interested in the conservation, a simple Noether Theorem would include a transformation with a small parameter, or perturbation. A conserved solution would be
$q_i \rightarrow \phi_i + \epsilon f_i (q)$
That isn't a 'conserved solution' at all. It's a small perturbation.
Originally Posted by Mister
The way this transforms is
$\sum_i P_i f_i (q)$
This is meaningless since you don't define what the first term in the sum is, nor do you say what it is produced from. Different Lagrangians will be altered in different ways by perturbations.
Originally Posted by Mister
The epsilon disappears (it is such a small quantity) and you are left with a conserved quantity
$\delta L = 0$
Firstly you've skipped out or mangled pretty much the entire derivation. Secondly the epsilon term doesn't disappear because it is small. If you just did that you'd not be perturbing the Lagrangian at all! Terms quadratic in epsilon are discarded. The terms linear in epsilon are kept and are required to vanish for non-zero epsilon. It's from that that you derive the Euler-Lagrange equations.
Originally Posted by Mister
If it was a field momentum, $\pi$ then $\sum_i$ is replaced with $\int dx$ and the momentum is replaced with $\pi(x)$ yielding
$\int dx \pi(x) = C$
Again, you've skipped a load of stuff because you haven't even defined a momentum or shown anything about its conservation.
This is what happens when you don't understand the material you're parroting and try to cut it down so it isn't blindingly obvious to everyone that you're just copy/editing something. Since you don't understand the material you don't know which bits to remove so you just try your luck and hope no one notices. It's dishonest and it's very very obvious to those of us who understand Lagrangians.
Originally Posted by Mister
That is to answer what the Lagrangian density is; I did answer it above in the math, but I probably could have been clearer.
You didn't answer it and you definitely could have been clearer. By being less wrong.
Originally Posted by Mister
to the rest of your questions... $\phi$ is a field. $x$ is the space dimension.
Spatial coordinate. There's a difference.
Originally Posted by Mister
It would break the symmetry when performing a transformation of the type I gave. A symmetry in this language then comes in the form of $(\phi \rightarrow \phi + \epsilon)$. If it is not a conserved quantity, then it is not a symmetry of the theory.
No, a symmetry is something else which incorporates alterations to the fields. What you've given there is an alteration to the field of a type not used in the actual derivation of Noether's theorem or the E-L equations.
Originally Posted by Mister
$q_i$ is velocity.
No, it isn't. $q_{i}$ is taken to be the position of an object while the velocity is then $\dot{q}_{i}$. If you were to interpret $q_{i}$ as the velocity then M wouldn't be the mass. That really is a massive mistake because the simplest example of the E-L formalism is to derive Newtonian mechanics from a simple Newtonian Lagrangian, i.e. the basic kinetic and potential energies of an object.
Originally Posted by Mister
You may come across some quadratic form of the velocity in a Lagrangian, such as
$\mathcal{L} = \sum_i \frac{M_i}{2} \dot{q}_{i}^{2} + U(q)$
Where $U(q)$ is the potential.
Now you use it as velocity.
Originally Posted by Mister
This equation is equal to
$= T - U$
Obviously it isn't because you just changed signs on the potential.
Originally Posted by Mister
so then it follows that $(T+U)$ is a conserved quantity.
No, it doesn't. The Lagrangian can be written schematically as T-U. The total energy is T+U. The reason T+U is conserved follows from a particular property of the Lagrangian which involves something you haven't made any mention of.
Can you tell me what that is and how you go about showing it?
You aren't explaining anything, you're just throwing out isolated statements which, when they aren't riddled with mistakes, are unsupported and unexplained.
Originally Posted by Mister
...Yes, $q$ is the same as the $q_i$, except with our summation.
No, the i index refers to the fact the equation is talking about multiple objects. If you're considering a single object then you drop the index. Again, something anyone with working familiarity with this stuff would know.
Originally Posted by Mister
$f$ here is just a number, which can be set equal to $1$. $\epsilon$ is $\delta x = \epsilon$.
Which doesn't appear anywhere in that post and doesn't line up properly with anything in your previous posts. Your habit of picking bits and pieces from source is showing again.
Really, it's obvious you didn't spontaneously write that in your own words because it has absolutely no coherent flow. You refer back to things you haven't said and make conclusions based on other things you haven't said. You include conclusions of arguments you don't provide. If you really understood this stuff then you'd be able to write something more coherent. Remember how you said you think you're good at explaining stuff? This is evidence that isn't entirely true. Of course to someone utterly illiterate when it comes to physics, such as wlminex, you might be doing good 'OOB thinking' (out of box thinking) or whatever but clearly you aren't. You're trying to pass off as your own explanation material you haven't written yourself but are lifting from somewhere in bits and pieces. I'm in no doubt you'll deny it but the complete lack of coherency and random snippet nature of your 'explanation' makes it obvious.
Originally Posted by Mister
Now, the field momentum is different to ordinary momentum and is given as $\pi = \frac{\partial \mathcal{L}}{\partial \dot{\phi}}$. It isn't Lorentz invariant. But you can replace $P$ ordinary momentum for the field momentum $\pi$.
The definition of canonical momentum for a variable q in Lagrangian density $\mathcal{L}$ is $\frac{\partial \mathcal{L}}{\partial \dot{q}}$. If the Lagrangian is the standard Newtonian one $\mathcal{L} = \frac{1}{2}m\dot{q}^{2} - U(q)$ then we have $p = m\dot{q}$, which is the normal momentum.
As for Lorentz invariance, of course momentum isn't Lorentz invariant, even in special relativity, because it's a Lorentz vector; it has a free index. What Lorentz invariance pertains to is the Lagrangian itself, whether it transforms as a scalar or not. Obviously the Newtonian Lagrangian doesn't, so it won't give rise to a Lorentz invariant theory. Again, you're not even getting the concepts right here, you don't know what certain statements are referring to.
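That definition is easy to check mechanically. Here is a minimal sympy sketch for the standard Newtonian Lagrangian (the use of sympy and the symbol names are illustrative choices, not anything taken from the posts above):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

# Illustrative check of the canonical momentum for L = (1/2) m qdot^2 - U(q).
t, m, qdot = sp.symbols('t m qdot', real=True)
q = sp.Function('q')(t)
U = sp.Function('U')

L = m * sp.diff(q, t)**2 / 2 - U(q)

# Canonical momentum p = dL/d(qdot): substitute a plain symbol for the velocity first.
p = sp.diff(L.subs(sp.diff(q, t), qdot), qdot)
print(p)                     # m*qdot, i.e. the ordinary momentum

# Euler-Lagrange equation for this Lagrangian: m*q'' = -dU/dq (Newton's second law).
print(euler_equations(L, q, t))
```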
Originally Posted by Mister
Oh, sorry, you also asked what field momentum is exactly. When a charge interacts with an electric field, the field makes the charge move. That means that the field gives the particle momentum. This is field momentum. You might deal with a langrangian of the kind of charges in a four vector electromagnetic field as
$\mathcal{L} = \frac{M\dot{x}^2}{2} + \bar{e}\vec{A} \cdot v$
for instance. Thus
$\frac{\partial \mathcal{L}}{\partial \dot{x}} = M\dot{x} + \bar{e}\vec{A}_x$
You've screwed it up again. Firstly you've mixed notation, as you should have replaced v with $\dot{x}$ in the first expression.
Secondly you got the meaning of field momentum wrong. Given a Lagrangian density $\mathcal{L}$ with a field $q(x)$ the field momentum will be defined by the equation I just gave. The EM field $A_{\mu}$ has Lagrangian $F^{\mu\nu}F_{\mu\nu}$ and its momentum would be computed from that. What you've said is that the field momentum of the EM field is what the particle's momentum is. No, the particle has momentum defined from its kinetic term in the Lagrangian via the equation I gave and the EM field has its own momentum defined in the same way on its kinetic term.
The equation you give, $\mathcal{L} = \frac{M\dot{x}^2}{2} + \bar{e}\vec{A} \cdot v$, yields the electron's momentum, but it's not a field momentum because in that formulation the electron isn't a field, it's just a point particle. When you do quantum electrodynamics the Dirac spinor field for the electron then has field momentum.
I don't know how many times I've said this in just this post alone, but this constant stream of mistakes shows over and over you aren't giving your own understanding, you're just spewing out what snippets you either half remember or possibly even directly copy from sources. Every time you make any attempt to put things into your own words you screw it up and when you give mathematical explanations you just pull bits and pieces from sources but since you don't understand it you don't know which bits to include and which to leave out.
This stuff is 2 or 3 years before university students get to the Dirac equation, which just highlights how huge the gaps in your understanding are and how laughable your claim to understand the Dirac equation in a decent manner is. Likewise with your belief you could perhaps walk the first year of a university course.
Seriously, find something else to do rather than lie to people online.
11. Alphanumeric, shut up, I AM NOT LYING TO ANYONE. This is how I have learned it.
For citation
http://www.youtube.com/watch?v=vvCeOncgSoA
Everything I have said corresponds exactly to how I have learned to speak about it from my favourite scientist, Mr. Susskind. Go watch it and you will see every rebuttal you make must be against his teaching!
12. (Sub-i I should have said, by the way instead of summation when speaking about q_i)
13. I knew it wouldn't have been long before you stalked me to this post, like you stalk every post of mine. You know the irony: the mods in the back room were suggesting I was stalking you in that thread about moderation. I laughed and thought, wtf are they talking about? I was talking in that thread long before you even showed face! If anything, it's completely the other way round!
14. I just love how you came in blazing as well with ''all sorts'' of accusations... My favourite was this one..
''Since you don't understand the material you don't know which bits to remove so you just try your luck and hope no one notices.''
Yeah... Completely tore apart Susskind's lecture, haven't I? lol
If anything, the relevant snippets of information here are as full as he explained it. You really need to sort this attitude out, AlphaNumeric, and stop stalking me, because I am not the boy you knew five years ago. I do understand this stuff as it is taught to me. As I teach myself. It's the way you came in saying stuff like
''This is what happens when you don't understand the material you're parroting and try to cut it down so it isn't blindingly obvious to everyone that you're just copy/editing something.''
and
''I'm in no doubt you'll deny it but the complete lack of coherency and random snippet nature of your 'explanation' makes it obvious.''
Deny? I have hard visible proof I haven't taken snippets from anywhere and pasted them all together. If anything, I have taken the relevant information in its full form when Susskind speaks about it, nothing to do with skipping bits of information and then hoping no one notices.
You're looking quite the fool aren't you?
And this...
''You're trying to pass off as your own explanation material you haven't written yourself but are lifting from somewhere in bits and pieces.''
Wtf do you mean ''pass it off'' as my own????
When did my name become Noether? Was the Lagrangian named after me? Where do you think up your BS?
15. Anyway, moving away from the distraction named AN, I have so far come up with three solutions to the problem. Another one, a fourth solution to the problem of time implying a problem of energy for the universe, has struck my mind.
There is a concept in physics called the zero-energy universe [1]. If indeed there is zero time in the universe, the lack of a global time, where there is only timelessness, then the zero time could also tie in with the concept of a zero-energy universe.
It is not so bizarre now to think of a universe with no energy. Especially in reference to the work I have provided so far.
[1] http://arxiv.org/abs/gr-qc/0605063
16. Originally Posted by Mister
Anyway, moving away from the distraction named AN, I have so far come up with three solutions to the problem. Another one, a fourth solution to the problem of time implying a problem of energy for the universe, has struck my mind.
There is a concept in physics called the zero-energy universe [1]. If indeed there is zero time in the universe, the lack of a global time, where there is only timelessness, then the zero time could also tie in with the concept of a zero-energy universe.
It is not so bizarre now to think of a universe with no energy. Especially in reference to the work I have provided so far.
[1] http://arxiv.org/abs/gr-qc/0605063
Zero energy means when you add all of the energy together with the negative energy.. every action has an equal, and opposite reaction. You get zero as the answer. But it still allows for every point in space time to contain energy. It's like breathing, you breathe in to breathe out.
17. Originally Posted by Pincho Paxton
Zero energy means when you add all of the energy together with the negative energy.. every action has an equal, and opposite reaction. You get zero as the answer. But it still allows for every point in space time to contain energy.
Yes, that would apply locally. But the wave function of the universe, which is governed by the WDW equation, is not a local wave function; it is a global wave function, meaning that the vanishing time derivative must be in some sense a global time. If this is the case, then if there are any problems with understanding energy in a universe, it must be thought of as a global situation. So when you think of the global energy content of the universe, you tend to ask the same problematic questions raised in the OP.
So a zero-energy universe turns out to be very appropriate for a zero-time universe.
18. Originally Posted by Mister
Yes, that would apply locally. But the wave function of the universe, which is governed by the WDW equation, is not a local wave function; it is a global wave function, meaning that the vanishing time derivative must be in some sense a global time. If this is the case, then if there are any problems with understanding energy in a universe, it must be thought of as a global situation. So when you think of the global energy content of the universe, you tend to ask the same problematic questions raised in the OP.
So a zero-energy universe turns out to be very appropriate for a zero-time universe.
So the global version is an average of the local version. An apple is a bunch of atoms. We break things down.
19. Originally Posted by Pincho Paxton
So the global version is an average of the local version. An apple is a bunch of atoms. We break things down.
The ''Global Version'' takes into consideration the entire, or total energy content. The local version would simply care about individual systems which we obviously can measure and define with energy. Hence why in the OP the problem of a universe with no energy first came from my reading of Doctor Wolf who indirectly explained that the observer effect will define the energy properties of systems, but if no one has observed the universe, how can it have an energy?
One might even appeal to God in my finite universe example, where some intelligence is located in the future and has defined the total energy content of the universe. That seems practical since we can't ever go outside of spacetime to attentively watch the universe and measure any energy.
20. Originally Posted by Mister
The ''Global Version'' takes into consideration the entire, or total energy content. The local version would simply care about individual systems which we obviously can measure and define with energy. Hence why in the OP the problem of a universe with no energy first came from my reading of Doctor Wolf who indirectly explained that the observer effect will define the energy properties of systems, but if no one has observed the universe, how can it have an energy?
One might even appeal to God in my finite universe example, where some intelligence is located in the future and has defined the total energy content of the universe. That seems practical since we can't ever go outside of spacetime to attentively watch the universe and measure any energy.
If an action has an equal reaction you have your answer both locally, and globally. There is no difference. The sum is zero.
http://math.stackexchange.com/questions/175452/how-to-simplify-an-expression-like-this-x2x-2-21-2/175459
# How to simplify an expression like this: $(x^2+x^{-2}-2)^{1/2}$
Sorry, I am not sure how to do the maths mark-up on this site but hopefully the question will make sense. I should know how to do this, but I have got myself stuck! Can anyone help?
$(x^2+x^{-2}-2)^{1/2}$
-
## 4 Answers
$$\left(x-\frac{1}{x}\right)^2= \dots?$$
-
Thanks! Took me a moment to figure out what you were on about, but I do see how to simplify it now. So the smallest value I get factorising to get that and then substituting it in should be x-(1/x)... Assuming I understood you correctly, that is... – Magpie Jul 26 '12 at 16:55
+1 for the question mark. – dot dot Jul 26 '12 at 17:03
@Magpie, I'm not sure what you mean by "the smalles value I get factorising...". The fact is your expression $\,x^2-2+x^{-2}\,$ has the expected form for the well-known squared binomial expression: (term 1 squared) + (term 2 squared) $\,\pm\,$ (twice term 1 times term 2) = (term 1 $\,\pm\,$ term 2)^2...practice, that's all. – DonAntonio Jul 26 '12 at 18:36
@DonAntonio Your answer is contradictory to jasoncube's. The expression is factored appropriately, however the initial exponent of $1/2$ should cancel out the exponent of $2$. Am I misunderstanding something? – user26649 Jul 27 '12 at 12:16
@FarhadYusufali, I think you are. I can't see how my answer is "contradictory to jasoncube's". IMO, both are accurate – DonAntonio Jul 27 '12 at 12:50
$$\sqrt{x^2 + x^{-2} - 2} = \sqrt{x^2 – 2(x)(x^{-1})+ (x^{-1})^2} =\sqrt{(x – x^{-1})^2} = |x – x^{-1}|$$
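A quick numerical spot-check of that identity (the sample points below are arbitrary):

```python
import math

# Spot-check sqrt(x^2 + x^-2 - 2) == |x - 1/x| at a few arbitrary sample points.
for x in (0.3, 0.5, 2.0, 7.5):
    lhs = math.sqrt(x**2 + x**-2 - 2)
    rhs = abs(x - 1 / x)
    print(x, abs(lhs - rhs) < 1e-12)   # True for each sample
```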
-
What is $x^{-2}$ equal to? Hint: can you write it as a fraction? If so, I would then look at adding and subtracting fractions and go from there.
-
For $x\ne 0$, $x^2+x^{-2}-2=(x^4-2x^2+1)/x^2=(x^2-1)^2/x^2$. So taking square roots of both sides we get on the right side $|(x^2-1)/x|=|x-1/x|$.
-
http://mathoverflow.net/questions/7824/factorization-of-elements-vs-of-ideals-and-is-being-a-ufd-equivalent-to-any-pro
## Factorization of elements vs. of ideals, and is being a UFD equivalent to any property which can be stated entirely without reference to ring elements?
Why exactly is the unique factorization of elements into irreducibles a natural thing to look for? Of course, it's true in $\mathbb{Z}$ and we'd like to see where else it is true; also, regardless of whether something is natural or not, studying it extends our knowledge of mathematics, which is always good. But the unique factorization of elements - being specifically a question of elements - seems completely counter to the category theory philosophy of characterizing structure via the maps between objects rather than their elements. Indeed, I feel like unique factorization of ideals into prime ideals is less a generalization of unique factorization of elements into irreducibles than the latter is a messier, unnatural special case of the former, a "purer" question (ideals, being the kernels of maps between rings, I feel meet my criteria for being a category-theoretically acceptable thing to look at). Certainly, the common theme in algebra (and most of mathematics) is to look at the decomposition of structures into simpler structures - but quite rarely at actual elements.
Now, for nice cases like rings of integers in number fields, we can characterize being a UFD in terms of the class group and other nice structures and not have to mess around with ring elements, but looking at the Wikipedia page on UFDs and the alternative characterizations they list for general rings, they all appear to depend on ring elements in some way (the link to "divisor theory" is broken, and I don't know what that is, so if someone could explain it and/or point me to some resources for it, it'd be much appreciated).
Sorry about the rambling question, but I was wondering if anyone had any thoughts or comments? Is "being a UFD" equivalent to any property which can be stated entirely without reference to ring elements? Should we care whether it is or not?
EDIT: Here's a more straightforward way of saying what I was trying to get at: The structure theorem for f.g. modules over a PID, the Artin-Wedderburn theorem, the Jordan-Holder theorem - these are structural decompositions. Unique factorization of elements is not, because elements are not a structure. My feeling is that this makes it a fundamentally less natural question, and I ask whether being a UFD can be characterized in purely structural terms, which would redeem the concept somewhat, I think.
-
You can view elements in a ring R as category-theoretic objects: they are maps of sets from the one-point set to the underlying set of R, and even as maps of abelian groups from the integers to (R, +). – Alberto García-Raboso Dec 5 2009 at 1:59
True, we can create structures in bijection with R, and put R's ring structure back on them (though my understanding is that we have to explicitly tell Hom_Sets({*},R) and Hom_Ab(Z,R) to have R's ring structure), which I would agree is a somewhat better situation because now we are using arrows instead of "elements". However, I don't think that just because we can do this without having to explicitly reference R's underlying set fundamentally changes the issue - – Zev Chonoles Dec 5 2009 at 2:42
now we effectively have a relabeling of R, and the question of whether a map - an element of the Hom-set - factors uniquely as a "product" (again, having to give the maps R's ring structure) of "irreducible" maps is still much more ungainly, in my opinion, than whether ideals factor as a product of prime ideals, which is in the proud tradition of other structural decomposition questions such as whether a ring is semi-simple. – Zev Chonoles Dec 5 2009 at 2:43
Another way of viewing elements of R category-theoretically is to use the identification of R with End_{R-mod}(R). This way multiplication in R agrees with category-theoretic composition. – Alison Miller Dec 5 2009 at 2:48
About divisor theory: a Google search turns up the PlanetMath page planetmath.org/encyclopedia/DivisorTheory.html and a book by Harold M. Edwards titled "Divisor Theory" books.google.com/books?id=TKRmT5L4CNcC – Alberto García-Raboso Dec 5 2009 at 2:52
## 4 Answers
This is sort of an anti-answer, but: my instinct is that ZC is taking the categorical perspective too far.
To start philosophically, I think it is quite appropriate to, when given a mathematical structure like a topological space or a ring -- i.e., a set with additional structure -- refrain from inquiring as to exactly what sort of object any element of the structure is. There is a famous essay "What numbers could not be" by Paul Benacerraf, in which he pokes fun at this idea by imagining two children who have been taught about the natural numbers by two different "militant logicists". Their education proceeds well until one day they get into an argument as to whether 3 is an element of 17. (The writing is very nice here and unusually witty for an essay on mathematical philosophy: the names of the children are Ernie and Johnny, an allusion to Zermelo and von Neumann, who had rival definitions of ordinal numbers.) The point of course is that it's a silly question, and a mathematically useless one: it won't help you to understand the structure of the natural numbers any better.
On the other hand, to deny that a set is an essential part of certain (indeed, many) mathematical structures seems to be carrying things too far. As far as I know, it is not one of the goals of category theory to eliminate sets (though one occasionally hears vague mutterings in this direction, I have never seen an explanation of this or, more critically, of the need for this).
Coming back to rings, it seems to me that very few properties of rings can be expressed without elements. You also seem to implicitly suggest that it is "more structural" to think about things in terms of ideals than elements. Can you explain this? It would seem that speaking of ideals involves more set theoretic machinery than speaking about elements: this is certainly true in model theory in the language of rings.
It seems wrong to say that unique factorization of ideals into primes is a "generalization" of unique factorization of elements, since neither property implies the other.
Finally a positive remark: it sounds like you might like the characterization of UFDs as Krull domains with trivial divisor class group.
-
I think your positive remark is the perfect answer in the spirit of the question. I must confess I don't know what the definition of the divisor class group of a Krull domain is; I assume that it has to be analogous to the class group of a Dedekind domain, but I'd like to see if there is a short explanation of its definition. – Guillermo Mantilla Dec 5 2009 at 6:03
All right, then: see math.uga.edu/~pete/classgroup.pdf (It is one of the two ways of generalizing the class group of a Dedekind domain. The other is via the Picard group.) – Pete L. Clark Dec 5 2009 at 6:12
At least for commutative rings, we can study them entirely without looking directly at the elements of the ring by looking at the category of modules over the ring. There's a nice result about how how we can recover R from R-mod up to isomorphism. – Harry Gindi Dec 5 2009 at 6:36
I really appreciate your insightful and constructive criticism, and of course your provided characterization of UFDs - I'm not at a level where I can understand it (I'm still working through the first half of both Atiyah-Macdonald and Marcus's Number Fields), but it's precisely the kind of answer I was hoping for. People have often told me that ideals factoring into prime ideals is "the correct generalization" of unique factorization of elements, though now that I see that in general neither implies the other, I suppose they probably meant that only for rings of integers in number fields. – Zev Chonoles Dec 5 2009 at 6:57
For ideals being more "structural", I was thinking along the lines of ideals being modules over their ring, as well as being kernels of homomorpisms, which allows us to specify them with arrows instead of having to name a set. Also, though I've never done any model theory or universal algebra, from what I'd heard they prefer structures which can in one way or another be characterized completely by equations and do not require axioms for the existence of identities, negatives, etc. – Zev Chonoles Dec 5 2009 at 6:59
First of all, the ubiquity of category theory in algebra is fairly recent, at least given how long people have been working on algebra (not even including elementary number theory). Much of algebraic number theory was developed in the mid-19th century in attempts to prove Fermat's Last Theorem. Since category theory would not show up for another century, mathematicians like Kummer and Dedekind had little reason to think in those terms. The notion of class group showed up as an obstruction to Kummer's attempted proof of Fermat's Last Theorem, which assumed that all the cyclotomic fields `$\mathbb{Q}(\zeta_p)$` had class number 1 (or at least prime to $p$). It's hard to see even what form Kummer's arguments would take if phrased in the language of factorization of ideals. I think that when flaws in Kummer's arguments were exposed, mathematicians realized that factorization of ideals behaves much better than factorization of elements. But for a mid-19th century mathematician, it must have felt a lot more natural to try to factor elements than ideals; they only studied the latter because the former usually fails. Now we understand that being a UFD (class number 1) is simply the nicest case, and the class group in general is the obstruction.
-
My understanding is that Kummer was much more concerned about higher-order reciprocity laws than he was about Fermat's Last Theorem. – Qiaochu Yuan Dec 5 2009 at 2:49
You seem to be suggesting that Kummer implicitly assumed that Z[zeta_p] was a UFD. This is not correct. It was Kummer who (brilliantly) invented the theory of "ideal numbers", and who proved unique factorization (the formulation as we learn it now was developed by Dedekind, but the arguments are essentially due to Kummer). – Lavender Honey Dec 5 2009 at 5:11
Indeed. It was Kummer who pointed out the flaw in Lame's attempted proof, which assumed the cyclotomic integers where a UFD – Ben Webster♦ Dec 5 2009 at 5:17
This is related to ideas I'm currently exploring on my blog, so I'd also appreciate any input on this question! I hope nobody will mind if I ramble a little.
Here's an alternate way of phrasing unique factorization. Given an integral domain consider the divisibility relation on non-units, and quotient out by units. In this way one obtains a poset. An integral domain is a UFD if and only if this poset is a product of chains, one for each irreducible element. The reason I want to phrase unique factorization in this way is that it is clearer what analogous definitions are in other algebraic situations: for example, analogous to being a product of chains in the category of posets might be being a product of cyclic groups in the category of groups. Are products of cyclic groups unnatural to look at? Well, that depends on what you mean, but I would say that they're just a simple case where it's easy to understand what's going on.
Edit: I'm also not convinced there's a hard line between "structural" and "non-structural" questions. For example, a more concise way to say what I said above is that unique factorization is essentially a question about the structure of the multiplicative monoid of non-units (up to the group of units).
-
Fair enough; I agree the "structural" distinction is somewhat ambiguous. I'd like to say that it is "depends only on the isomorphism class of the object", but certainly being a UFD satisfies this. I don't know much about it, but I believe model theory may be able to capture what I'm saying - something to do with commutative rings being able to be defined purely equationally? (not sure if that's the right terminology) – Zev Chonoles Dec 5 2009 at 4:45
Also, your idea is quite intriguing; would I be correct in saying we could use the principal ideals of the integral domain ordered by inclusion, instead of using "elements up to units" and divisibility? – Zev Chonoles Dec 5 2009 at 4:46
Direct sum! The direct product is much larger. – Qiaochu Yuan Dec 5 2009 at 19:07
@ZC: Yes, except you want only nonzero ideals. For a domain R, the quotient R - {0} by R^{\times} is isomorphic to the ("less elementy") set of principal ideals, and the isomorphism respects both the multiplication and the ordering, giving us the structure of a partially ordered monoid. This is a nice way to generalize Z being a UFD: in my undergraduate number theory courses I like to give as a problem that Z^+ as a monoid is isomorphic to the direct sum of copies of (N,+) indexed by the primes. – Pete L. Clark Dec 5 2009 at 21:35
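The monoid isomorphism mentioned in the last comment can be illustrated computationally; a minimal sketch (sympy's factorint is only used to produce the prime-exponent vectors, and the integers 84 and 90 are arbitrary):

```python
from sympy import factorint

# Z^+ under multiplication as a direct sum of copies of (N, +) indexed by the primes:
# each positive integer corresponds to its vector of prime exponents, and multiplying
# integers corresponds to adding those vectors componentwise.
m, n = 84, 90
em, en = factorint(m), factorint(n)          # {2: 2, 3: 1, 7: 1} and {2: 1, 3: 2, 5: 1}
summed = {p: em.get(p, 0) + en.get(p, 0) for p in set(em) | set(en)}
print(summed == factorint(m * n))            # True -- unique factorization in Z
```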
Not an answer, actually it is more of a question.
Since we are talking about UFDs, I'll assume that my rings are domains. Let $R$ be a commutative domain with $1$. Notice that the notion of principal ideal can be defined without talking about the elements in $R$.
Is the following true?
$R$ is a UFD if and only if $R$ satisfies ACC for principal ideals and every prime ideal $P$ with $ht(P)=1$ is principal.
If the answer to the above is yes, this gives a characterization of UFDs that does not talk about elements.
-
I'm not sure about the characterization that you suggest (although I probably should know...) I do know that there is a similar famous characterization due to Kaplansky: a domain is a UFD iff each nonzero prime ideal contains a principal prime ideal. I disagree though that my characterization (and yours, if it's true) "does not talk about elements": what does it mean for an ideal to be principal?? – Pete L. Clark Dec 5 2009 at 5:20
@pete: $J \leq R$ is principal iff it is $R$-isomorphic to $R$. (That's why I insisted on domains.) I am also suspicious of my "characterization", but I remember thinking about this a couple of years ago, and I could not come up with an example of a non-UFD domain with ACC on principal ideals and such that primes of height one are principal. I'm sure that this can be found somewhere in Matsumura. – Guillermo Mantilla Dec 5 2009 at 5:33
@GM: Oh, so we're allowed to talk about modules? I still don't understand the rules of the game. By the way, let R be a commutative domain with WHAT? – Pete L. Clark Dec 5 2009 at 5:59
@pete: Sorry, I did not see the rules you mention about not talking about modules in commutative algebra questions about rings (it's going to be hard to give answers with that rule in hand, but I'll keep it in mind). When you ask "commutative domain with WHAT?", is it to make the point that I'm using elements of the ring in my answer? Just in case that was not the reason, what I mean is a commutative ring with unity. – Guillermo Mantilla Dec 5 2009 at 6:19
Yes, it was a rhetorical question. :) – Pete L. Clark Dec 5 2009 at 16:31
http://mathoverflow.net/questions/61498/units-in-quaternionic-algebras/61512
## Units in quaternionic algebras
Let $H$ be a quaternionic algebra over ${\bf Q}$, and let $R$ denote a maximal ${\bf Z}$-order in $H$. Is there a theorem on the structure of the units in $R$ analogous to the Dirichlet unit theorem? Is there an analogous theorem for the $S$-units?
-
The unit group will be, in general, non-commutative. So what definition do you suggest for the rank? Or are you just passing to the abelianization? – Pete L. Clark Apr 13 2011 at 3:32
Chapter V, section 1 of Vigneras? – Junkie Apr 13 2011 at 4:00
@akula, if you yourself are not sure what you meant... – Mariano Suárez-Alvarez Apr 13 2011 at 5:42
Another reference is Hull (1939): "On the Units of Indefinite Quaternion Algebras" jstor.org/stable/2371505 This is also Miyake's book Chapter 5 (Modular Forms) that does some parts as Fuchsian groups springerlink.com/content/v8r7212w353331r4 – Junkie Apr 14 2011 at 10:30
## 1 Answer
If $H$ is definite, then the group of units of $H$ is finite. If $H$ is indefinite, then the group of units is a pretty chunky group; it embeds as a cocompact discrete subgroup of $SL(2, R)$, and the rank of its abelianization (which, if I remember correctly, can be interpreted geometrically as twice the genus of an associated modular curve) can be arbitrarily large.
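To make the definite case concrete, here is a small brute-force count of the units of one standard maximal order, the Hurwitz order in the rational Hamilton quaternions (an illustrative choice of mine, not something the answer singles out):

```python
from fractions import Fraction
from itertools import product

# Units of the Hurwitz order: quaternions a + bi + cj + dk with all coordinates in Z,
# or all in Z + 1/2, and reduced norm a^2 + b^2 + c^2 + d^2 equal to 1.
coords = [Fraction(n, 2) for n in range(-2, 3)]   # -1, -1/2, 0, 1/2, 1 (norm 1 forces |coord| <= 1)

units = []
for a, b, c, d in product(coords, repeat=4):
    all_int = all(x.denominator == 1 for x in (a, b, c, d))
    all_half = all(x.denominator == 2 for x in (a, b, c, d))
    if (all_int or all_half) and a*a + b*b + c*c + d*d == 1:
        units.append((a, b, c, d))

print(len(units))   # 24 -- a finite group (the binary tetrahedral group), as the answer says
```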
All of this follows from general theorems on arithmetic subgroups of algebraic groups; this is a classical theory going back to Borel, Harish-Chandra and Ono in the 60s, and there is a nice summary in Gross's paper "Algebraic modular forms".
EDIT: You asked about S-units. Let S be a set of finite primes. If $H$ is definite, then the group of S-units in H will be a discrete cocompact subgroup of
$\prod_{p \in S} (H \otimes \mathbb{Q}_p)^\times$.
If every place in $S$ is ramified in $H$, then the kernels of the reduced norm maps $(H \otimes \mathbb{Q}_p)^\times \to \mathbb{Q}_p^\times$ are compact, so this group above maps with finite kernel to a discrete subgroup of $\prod_{p \in S} \mathbb{Q}_p^\times$ and hence its abelianization has rank equal to the size of S. This argument can be pushed substantially further: you can show that if H is a quaternion algebra over a totally real field F which is totally definite (i.e. definite at every infinite place of F), then the map from S-units of H to S-units of F has finite kernel and cokernel, so the abelianization of the S-units of H has rank $|S| + [F : \mathbb{Q}] - 1$.
My favourite case, though, is when $F = \mathbb{Q}$, $H$ is definite, and $S$ consists of a single finite place $p$ which is not ramified in $H$. Then the S-units of $H$ embed into $GL(2, \mathbb{Q}_p)$ as a discrete co-compact subgroup $\Gamma$, and the space of continuous functions on the quotient $GL(2, \mathbb{Q}_p) / \Gamma$ gives a representation of $GL(2, \mathbb{Q}_p)$ with many fascinating properties (it is an example of one of Matt Emerton's completed cohomology spaces).
-
+1 Nice Answer! – Marc Palm Apr 13 2011 at 7:42
@David Loeffler: Thanks. The finiteness of the unit group in the definite case must be easy -- following just by looking at the norm form. Your explanation of the indefinite case really explains what is going on. Do you have anything to add about S-units? – akula Apr 13 2011 at 14:28
Should it be twice the genus? I'm not sure, but that seems right to me, since the abelianization of $\pi_1$ of a genus $g$ curve has rank $2g$. – David Speyer Apr 13 2011 at 17:29
@akula: I've added some remarks on S-units. @David Speyer: Yes, I am sure you are right. – David Loeffler Apr 14 2011 at 7:45
@David Loeffler: Could you expand a bit on your "favourite case"? Does this description give the rank of the abelianization of the unit group in this case? – akula Apr 23 2011 at 19:08
http://mathoverflow.net/questions/23515?sort=oldest
## Cofinal inclusions of Waldhausen categories
Let $\mathcal{C}$ be a Waldhausen category. Suppose that $\mathcal{B}$ is a subcategory of $\mathcal{C}$, and that $\mathcal{B}$ is closed under extensions. If $\mathcal{B}$ is strictly cofinal in $\mathcal{C}$ (in the sense that given any $C\in \mathcal{C}$ there exists a $B\in \mathcal{B}$ such that $C\amalg B\in \mathcal{B}$), can we say anything about $K(\mathcal{B}) \rightarrow K(\mathcal{C})$?
In Waldhausen's paper "Algebraic K-theory of spaces" Waldhausen claims that the inclusion $\mathcal{B}\rightarrow \mathcal{C}$ induces a weak equivalence $wS_\bullet \mathcal{B}\rightarrow wS_\bullet\mathcal{C}$ (and thus an equivalence on K-theories), but I'm not sure that this is right, as $\mathcal{B}$ does not need to be a full subcategory. In particular, if there are objects $C,C'$ which are in $\mathcal{B}$ but are not isomorphic in $\mathcal{B}$ they may well be isomorphic (or at least weakly equivalent) in $\mathcal{C}$.
Consider the following example. Let $\mathcal{C}$ be the category of pairs of pointed finite sets, whose morphisms $(A,B)\rightarrow (A',B')$ are pointed maps $A\vee B\rightarrow A'\vee B'$, and let $\mathcal{B}$ be the category of pairs of pointed finite sets whose morphisms $(A,B)\rightarrow (A',B')$ are pairs of pointed maps $A\rightarrow A'$ and $B\rightarrow B'$. We make $\mathcal{C}$ a Waldhausen category by defining the weak equivalences to be the isomorphisms, and the cofibrations to be the injective maps. $\mathcal{B}$ is clearly cofinal in $\mathcal{C}$, but $K_0(\mathcal{B}) = \mathbf{Z}\times \mathbf{Z}$, while $K_0(\mathcal{C}) = \mathbf{Z}$. Going even further, the Barratt-Priddy-Quillen theorem should tell us that $K(\mathcal{B}) = QS^0\times QS^0$, while $K(\mathcal{C}) = QS^0$.
If we add the condition that $\mathcal{B}$ needs to be a full subcategory of $\mathcal{C}$, then I believe that Waldhausen's paper is correct. But even without that, it is possible to say anything about the map $K(\mathcal{B})\rightarrow K(\mathcal{C})$?
-
Probably any exact functor is a subcategory up to K-equivalence, so it is probably not possible to say anything. – Ben Wieland May 12 2010 at 19:21
## 2 Answers
If I recall correctly, Waldhausen states this theorem in the context of a "subcategory with weak equivalences and cofibrations" of C. This is a somewhat stronger condition than having an exact inclusion functor from B to C; it stipulates that a map in B is a cofibration if it is a cofibration in C with cofiber in B, and a weak equivalence in B if it is a weak equivalence in C. In particular, the condition excludes the kind of example you're thinking about.
In general, given an exact functor $B \to C$ you can study the cofiber $K(B) \to K(C)$ as the $K$-theory of an explicit simplicial Waldhausen category (consisting of sequences $C_0 \to C_1 \to \ldots C_k$ where the cofibers $C_{i+1} / C_i$ are in B).
-
I'm pretty sure that in my example B is a subcategory with weak equivalences and cofibrations. A map in B is a cofibration if it is a pair of injective maps... which when you take the unions of source and target are still injective, so the image of this under inclusion is still a cofibration. Analogously for a weak equivalence. And the cofiber of a map is just the coimage, which will be in B because you can take it componentwise. (The example was constructed to have exactly this property, so if I'm making an obvious mistake please point it out.) – Inna May 5 2010 at 3:58
Yes, you're right; I'm sorry, I was mistaken in my assertion above. Presuming then that your counterexample stands, this raises two interesting questions: 1) Can you construct an example like this where the categories (or at least C) have factorization? 2) What does Waldhausen's cofiber theorem say in this setting? – Andrew May 5 2010 at 18:12
This might not answer your question exactly, but hopefully it might nevertheless shed some light on the situation. This paper,
Sagave, Steffen On the algebraic K-theory of model categories. J. Pure Appl. Algebra 190 (2004), no. 1-3, 329--340
explores the relation between Quillen model categories and Waldhausen categories. Obviously the two notions are very close. Basically, in a model category, the full subcategory of finite or homotopically finite objects forms a Waldhausen category, and Quillen equivalences induce K-theory equivalences. One of the ideas of the paper is that when a Waldhausen category comes from a model category then you get a bit of extra structure on it (coming from the fact that you can do cofibrant replacements), and this extra structure allows you to prove under various mild assumptions that a functor of Waldhausen categories is a K-theory equivalence.
In particular, look at the Approximation Theorem (Thm 2.8), which gives a nice criterion for an exact functor to be a K-theory equivalence.
-
http://math.stackexchange.com/questions/176260/convergence-of-the-sequence-f-1-leftx-right-sqrtx-and-f-n1-leftx
Convergence of the sequence $f_{1}\left(x\right)=\sqrt{x}$ and $f_{n+1}\left(x\right)=\sqrt{x+f_{n}\left(x\right)}$
Let $\left\{ f_{n}\right\}$ denote the set of functions on $[0,\infty)$ given by $f_{1}\left(x\right)=\sqrt{x}$ and $f_{n+1}\left(x\right)=\sqrt{x+f_{n}\left(x\right)}$ for $n\ge1$. Prove that this sequence is convergent and find the limit function.
We can easily show that this sequence is nondecreasing. Originally, I was trying to apply the fact that “every bounded monotonic sequence must converge” but then it hit me this is true for $\mathbb{R}^{n}$. Does this fact still apply on $C[0,\infty)$, the set of continuous functions on $[0,\infty)$? If yes, what bound would we use?
-
Apply the Theorem pointwise. That is, fix $x$ and consider the sequence of numbers $(f_n(x))$. The bound will depend on $x$... – David Mitra Jul 28 '12 at 19:26
What bound though? I couldn't figure out one. – Galois Jul 28 '12 at 19:27
Surely you could guess the limit $g(x)$ of $(f_n(x))_n$, if there is such a limit, for each $x$. Then check that $f_n(x)\leqslant g(x)$ for every $n$... – Did Jul 28 '12 at 19:37
I think $1+2\sqrt x$ will do: $\sqrt{x+f_n(x)}\le\sqrt{x+1+2\sqrt x}=\sqrt{(1+\sqrt x)^2}$. – David Mitra Jul 28 '12 at 19:43
@David: Honestly, why not use $g(x)$? – Did Jul 28 '12 at 20:06
4 Answers
Clearly, for each $x \ge 0$ the sequence $\{f_n(x)\}_n$ is increasing.
Let $$g:[0,\infty) \to \mathbb{R},\ g(x)= \left\{ \begin{array}{cc} 0 & \text{ if } x=0\cr \frac{1+\sqrt{1+4x}}{2} & \text{ if } x>0 \end{array} \right..$$
I claim that $f_n(x) \le g(x)$ for every $x \ge 0$. Indeed $f_n(0)=0=g(0)$ for every $n$, and $f_1(x) \le g(x)$ for every $x > 0$. If I suppose that $f_n(x) \le g(x)$ for $n \ge 1$ and $x > 0$, then $$f_{n+1}^2(x)-g^2(x)=x+f_n(x)-\frac{1+2x+\sqrt{1+4x}}{2}=f_n(x)-g(x)\le 0,$$ i.e. $f_{n+1}(x)\le g(x)$. Hence $f_n(x)\le g(x)$ for every $n \ge 1$ and $x > 0$.
Thus for each $x \ge 0$, the sequence $\{f_n(x)\}_n$ is convergent (being increasing and bounded from above), and its (pointwise) limit $f: [0,\infty) \to \mathbb{R}, x \mapsto f(x)$ satisfies $f(x)=\sqrt{x+f(x)}$, i.e. $f(x)=g(x)$ for every $x \ge 0$.
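A quick numerical illustration of this pointwise convergence (the sample points and the iteration count are arbitrary choices):

```python
import math

def f(n, x):
    """n-th function of the sequence: f_1(x) = sqrt(x), f_{k+1}(x) = sqrt(x + f_k(x))."""
    val = math.sqrt(x)
    for _ in range(n - 1):
        val = math.sqrt(x + val)
    return val

g = lambda x: (1 + math.sqrt(1 + 4 * x)) / 2   # the claimed pointwise limit for x > 0

for x in (0.01, 1.0, 10.0):
    print(x, f(30, x), g(x))   # the iterates settle close to g(x)
```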
-
How did you find $g(x)$? – Galois Jul 28 '12 at 22:55
For $x>0, g(x)$ is the positive solution of $y=\sqrt{x+y}$. – Mercy Jul 29 '12 at 11:56
You know that the sequence is increasing. Now notice that if $f(x)$ is the positive solution of $y^2=x+y$, we have $f_n(x)<f(x) \ \forall n \in \mathbb{N}$. In fact, notice that this holds for $n=1$ and, assuming it for $n$, we have $$f^{2}_{n+1}(x) = x +f_n(x) < x+f(x) = f^{2}(x).$$ Then $\lim_n f_n(x)$ exists and, taking the limit in $f^{2}_{n+1}(x) = x + f_n(x)$, we find that $\lim_n f_n(x)= f(x)$.
-
In a sense, the answer is "yes". You can apply the Monotone Convergence Theorem pointwise. That is, for a fixed value of $x$, show that the sequence of nonnegative numbers $\bigl(f_n(x)\bigr)$ is bounded above and increasing. Then of course for each $x$, the sequence $\bigl(f_n(x)\bigr)$ converges to some $f(x)$. (You can find $f(x)$ explicitly by taking the limits of both sides of your defining relation for $f_n$.)
Towards showing that $\bigl(f_n(x)\bigr)$ is indeed bounded above, try using the bound $1+2\sqrt x$.
Perhaps better, as suggested by Did, would be to formally find the limit function first, and then show that this serves as an upper bound.
-
Hints:
• For every $x\gt0$, the function $u_x:t\mapsto \sqrt{x+t}$ is continuous hence every convergent sequence defined by $x_{n+1}=u_x(x_n)$ for every $n\geqslant0$ has limit $z_x$ such that $u_x(z_x)=z_x$. Here, $z_x=\frac12(1+\sqrt{1+4x})$.
• For every $x\gt0$, $z_x-u_x(t)=c_x(t)\cdot(z_x-t)$, with $c_x(t)=1/(z_x+t)$ hence $0\lt c_x(t)\leqslant 1/z_x$.
• For every $x\gt0$, applying the preceding remark to any sequence $(x_n)_n$ such that $x_0\leqslant z_x$ and $x_{n+1}=u_x(x_n)$ for every $n\geqslant0$ shows that $z_x-z_x^{-n}\cdot(z_x-x_0)\leqslant x_n\leqslant z_x$ for every $n\geqslant0$.
• For every $x\gt0$, $z_x\gt1$.
Conclusion:
• For every $x\gt0$, $f_n(x)\to z_x$. On the other hand, $f_n(0)=0$ for every $n\geqslant0$ hence $f_n(0)\to0\ne z_0$ and the limit is discontinuous at $x=0^+$.
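As a quick numerical sanity check of the limit identified in the answers above (an editorial addition, not part of the original thread; the function names below are only for illustration), iterating the recursion and comparing with $z_x=\frac12(1+\sqrt{1+4x})$ gives agreement to within double-precision rounding:

```python
import math

def f(x, n):
    # n-th term of the recursion: f_1(x) = sqrt(x), f_{k+1}(x) = sqrt(x + f_k(x))
    val = math.sqrt(x)
    for _ in range(n - 1):
        val = math.sqrt(x + val)
    return val

for x in [0.5, 1.0, 4.0, 100.0]:
    z = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * x))  # claimed pointwise limit for x > 0
    print(x, f(x, 200), z)
# f(0, n) = 0 for every n, which is the discontinuity at x = 0 noted above
```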
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 88, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9389284253120422, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/144641/variational-formulation-for-bilaplacian-problem | # Variational formulation for bilaplacian problem
I am trying to derive a variational formulation for the following problem $$\left\{ \begin{array}{ll} \Delta^2u=f, & \Omega \\ \Delta u+\rho \partial_{\nu}u=0, & \partial \Omega \end{array}\right.$$
where $\rho>0$ is constant. I intend to show that the right functional setting is $H^2(\Omega)\cap H^1_0(\Omega)$ and to prove that the resulting problem is well posed.
I am confused as to how to establish the right functional setting, so for a start I choose $C_0^2(\Omega)$ as a space of test functions (functions in $C^2(\Omega)$ compactly supported in $\Omega$) so that the boundary condition makes sense.
Multiplying the equation by $v\in C_0^2(\Omega)$ and integrating over $\Omega$ we obtain $$\int_{\Omega} \Delta^2u\cdot v\,dx=\int_{\Omega} fv\,dx$$
and now integrating by parts (Green's formula) twice on the left hand side we obtain
\begin{eqnarray*} \int_{\Omega} \Delta^2u\cdot v\,dx &=& \int_{\Omega} \operatorname{div}(\nabla \Delta u)\, v\,dx \stackrel{Green}{=} \int_{\partial \Omega} \partial_{\nu}(\Delta u)v\,d\sigma- \int_{\Omega} \nabla \Delta u\cdot \nabla v \,dx \\ &\stackrel{Green}{=}& \int_{\partial \Omega} \partial_{\nu}(\Delta u)v\,d\sigma - \int_{\partial \Omega} \Delta u\cdot \partial_{\nu}v\,d\sigma + \int_{\Omega} \Delta u\cdot \Delta v\,dx \\ &=& \int_{\partial \Omega} \partial_{\nu}(\Delta u)v\,d\sigma + \int_{\partial \Omega} \rho \partial_{\nu}u \cdot \partial_{\nu}v\,d\sigma + \int_{\Omega} \Delta u\cdot \Delta v\,dx \end{eqnarray*}
where in the last equality I use the boundary condition. Now, enlarging the space of test functions by taking the closure of $C_0^2(\Omega)$ in $H^2(\Omega)$, namely $H_0^2(\Omega)\subset H_0^1(\Omega)\cap H^2(\Omega)$ the first integral vanishes (v has zero trace) so our variational formulation is $$\int_{\partial \Omega} \rho \partial_{\nu}u \cdot \partial_{\nu}v\,d\sigma + \int_{\Omega} \Delta u\cdot \Delta v\,dx=\int_{\Omega} fv\,dx$$
How can I rigorously conclude that I need to take the whole $H_0^1(\Omega)\cap H^2(\Omega)$ as my space of test functions?
In order to prove that the problem is well posed I intend to use the Lax-Milgram theorem as usual, but I am confused as to how to tackle the integral over $\partial \Omega$. I define a bilinear form $$B(u,v)=\int_{\partial \Omega} \rho \partial_{\nu}u \cdot \partial_{\nu}v\,d\sigma + \int_{\Omega} \Delta u\cdot \Delta v\,dx$$ and I want to check continuity and coercivity. For the first one I have $$|B(u,v)|\leq \int_{\partial \Omega} \rho |\partial_{\nu}u \cdot \partial_{\nu}v|\,d\sigma + \int_{\Omega} |\Delta u\cdot \Delta v|\,dx$$
For the second integral we have $$\int_{\Omega} |\Delta u\cdot \Delta v|\,dx\leq ||\Delta u||_0||\Delta v||_0\leq ||u||_{H^2(\Omega)} ||v||_{H^2(\Omega)}$$ but what about the first one?
Thanks in advance for any insight.
-
## 1 Answer
See http://mathoverflow.net/questions/96870 for an explanation.
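In brief (an editorial sketch, assuming $\Omega$ is a bounded Lipschitz domain so that the trace theorem applies): for $u\in H^2(\Omega)$ the normal derivative satisfies $\lVert \partial_\nu u\rVert_{L^2(\partial\Omega)}\le C\lVert u\rVert_{H^2(\Omega)}$, so by the Cauchy-Schwarz inequality
$$\int_{\partial \Omega} \rho\,\lvert\partial_{\nu}u \cdot \partial_{\nu}v\rvert\,d\sigma \le \rho\,C^2\,\lVert u\rVert_{H^2(\Omega)}\lVert v\rVert_{H^2(\Omega)},$$
which together with the estimate for the volume term gives continuity of $B$ on $H^2(\Omega)\cap H^1_0(\Omega)$.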
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8371846079826355, "perplexity_flag": "head"} |
http://mathematica.stackexchange.com/tags/linear-algebra/hot?filter=year | # Tag Info
## Hot answers tagged linear-algebra
21
### Computing polynomial eigenvalues in Mathematica
(I've been waiting for somebody to ask this question for months... :D ) Here's the Mathematica implementation of the Frobenius companion matrix approach discussed by Jim Wilkinson in his venerable book (for completeness and complete analogy with built-in functions, I provide these three): PolynomialEigenvalues[matCof : {__?MatrixQ}] := Module[{p = ...
17
### Eigenvalues and Determinant of a large matrix
It's just a matter of the difficulty inherent in the numerical computation of determinants. Here's what Cleve Moler has to say about determinants and characteristic polynomials in chapter 10 of his book on numerical computing: Like the determinant itself, the characteristic polynomial is useful in theoretical considerations and hand calculations, but ...
16
### Mathematica for linear algebra course?
Mathematica offers a pretty complete set of functionality for linear algebra, and it has improved in recent versions. For example, since version 5, Mathematica has offered the generalised Schur decomposition (also known as the QZ decomposition). This certainly wasn't available in earlier versions. It handles sparse matrices and many other wrinkles. And if ...
15
### Space-efficient null space of sparse array
Here is the method I outlined. I'll illustrate on a small example where we split matrix into top and bottom halves. In[794]:= SeedRandom[1111]; halfsize = 3; mat = RandomInteger[{-4, 4}, {2*halfsize, 10}] Out[796]= {{-3, -1, 3, -3, 3, 3, 3, 3, 4, 2}, {3, 3, -3, 0, 0, 1, -2, -4, 0, -1}, {-3, 4, 3, 0, -2, 4, 3, -2, -2, -2}, {2, 2, 4, 0, -4, 4, -1, -4, ...
15
### Is there a clean way to extract the subspaces invariant under a list of matrices?
This answer is almost entirely about mathematics and algorithms, not Mathematica implementation. I'm not sure whether such answers are welcome on this site; I hope I haven't offended. This is a very important problem in computational algebra, but is usually stated in a more sophisticated way. The usual way that one thinks about it is to consider the ...
14
### What is the fastest way to find an integer-valued row echelon form for a matrix with integer entries?
Actually what you want is HermiteDecomposition. It is the integer ring form of RowReduce; that latter, while working largely over the integers for the forward elimination, actually is an echelon form over the rational field. As for efficiency, you'll have to experiment to see if it meets your needs. If not, feel free to post or send examples so I'll have ...
14
### Why does my matrix lose rank?
The rank of a matrix is typically determined by performing a Gaussian elimination and is given by the number of non-zero rows. In your second case, the large number $5.4\times 10^{12}$, when eventually used as a pivot, gives a badly conditioned matrix (myM2 is the second matrix in your question): RowReduce[myM2] RowReduce::luc: Result for RowReduce of ...
13
### How do you decompose a polynomial matrix into its matrix coefficients?
Here is a very simple way to do it: Table[1/i! D[M, {a, i}] /. a -> 0, {i, 0, 3}] (* ==> {{{15, 0}, {0, 2}}, {{0, 1}, {1, 0}}, {{1, 5}, {-5, 0}}, {{0, 0}, {0, 0}}} *) This works even if the entries are not polynomials. If they are, you can replace the arbitrary maximum 3 in the Table index by the degree of the polynomial: Max[Exponent[M, a]] ...
11
### Is there any way to obtain an approximate inverse for very large sparse matrices?
If your matrix is diagonally dominant (in the example it is) then you can do as follows. Start with a diagonal matrix comprised of the reciprocals of the diagonal of your original matrix. Find the residual and use that to form a correction. iterate as long as needed. Here is your example, scaled down. ClearAll[n, s, f, aInv] n = 1000; s = ...
11
### Correcting a correlation matrix to be positive semidefinite
Here is simple (unweighted) Mma version of the Matlab implementation of Covariance Bending. ClearAll[covBending]; covBending[mat_, tol_: 1/10000]:=If[PositiveDefiniteMatrixQ[mat], mat, NestWhile[(Eigensystem[#][[2]].DiagonalMatrix[ Max[#, tol] & /@(Eigensystem[#][[1]])].Transpose[ Eigensystem[#][[2]]]) &, N@mat, Min[Eigensystem[#][[1]]] < ...
11
### Mathematica won't give eigenvectors but Wolfram Alpha will? What am I doing wrong?
Use Eigenvalues[mat, Cubics -> True] Eigenvectors[mat, Cubics -> True] sometimes Quartics -> True can be needed. or ToRadicals @ Eigenvalues[ mat] ToRadicals @ Eigenvectors[ mat] In general one cannot find roots of (higher order) polynomials in terms of radicals. The reason that Mathematica allows this option is that in general it is ...
10
### How to extract and compute on the diagonal entities of a sparse matrix very fast?
You can create your new diagonal matrix V in a single step as: V = DiagonalMatrix@SparseArray[1/Normal[Diagonal[A]]]; On my machine, this takes 0.05 seconds, compared to 9 seconds for your code above (excluding time taken to construct A). You can verify that they're both the same: DiagonalMatrix[SparseArray[B]] == ...
10
### Finding the characteristic polynomial of a matrix modulus n
There is no need for the Modulus option in CharacteristicPolynomial, since PolynomialMod serves that purpose. Assume we have a matrix m e.g. : m = RandomInteger[10, {5, 5}] m // MatrixForm {{10, 1, 4, 10, 9}, {1, 9, 6, 1, 5}, {9, 7, 9, 1, 0}, {1, 10, 8, 0, 4}, {4, 0, 4, 7, 10}} then CharacteristicPolynomial[m, x] 2310 - 4008 x + 1739 x^2 - ...
10
### Computing polynomial eigenvalues in Mathematica
As I've already mentioned in my previous answer, the method of using the QZ algorithm for matrix pencils on the Frobenius companion linearization of the polynomial eigenproblem is not always the most efficient approach. To illustrate this, I'll outline a general method for solving a hyperbolic quadratic eigenvalue problem, which is known to have all its ...
10
### Mathematica for linear algebra course?
In years past I've taught a standard-content sophomore-level (in U.S.) linear algebra course where students used Mathematica. I know, then, that if you're interested, or if it's a requirement, you can build everything up from the simplest functions for manipulating matrices or you can directly use powerful built-in Mathematica functions (or a combination of ...
9
### Find Determinant/or Row Reduce parameter dependent matrix
Not knowing anything else about the matrix, I can only suggest another alternative to the determinant (which sounds like it's prohibitively time-consuming). If m is your matrix, try to find a root (or minimum) of Min@Diagonal@SingularValueDecomposition[m][[2]] instead. Unfortunately, SingularValueDecomposition is also very time consuming, but as I ...
9
### How do you decompose a polynomial matrix into its matrix coefficients?
Here's one quick way for polynomial matrices: polyMat = {{15 + a^2, a + 5 a^2}, {a - 5 a^2, 2}}; Transpose[PadRight[CoefficientList[polyMat, a]], {2, 3, 1}] {{{15, 0}, {0, 2}}, {{0, 1}, {1, 0}}, {{1, 5}, {-5, 0}}} Alternatively (as Jens hints), you can do Flatten[PadRight[CoefficientList[polyMat, a]], {3}]. You can check that the matrix polynomial is ...
9
### Memory Leak in RowReduce?
There might be a memory leak. More likely is that memory was used at intermediate steps and, while (I hope) freed by Mathematica, was not returned to the OS. As for why massive memory will be consumed at all, I got into a debug version of the Mathematica kernel and had a look. By row 850 or so, in the forward elimination step, I see numbers going around in ...
9
### Toggle visibility of elements in a plot
Manipulate is probably the easiest for this specific case but here is an alternative: DynamicModule[{select = {1, 2, 3}}, Column[{ CheckboxBar[ Dynamic[select], {1 -> "g(x)", 2 -> "y=x", 3 -> "f(x)"}], Dynamic@Plot[Evaluate@{0.5 x + 1, 2 x - 2, x}[[select]], {x, -1, 5}, PlotRange -> {-1, 5}, AspectRatio -> 1, ...
9
### Compiling LinearSolve[] or creating a compilable procedural version of it
Rob Knapp has some excellent publicly available notebooks for some of the things you might want. Here are three in particular that may be useful. http://www.mathematica-journal.com/issue/v7i4/features/knapp/ http://library.wolfram.com/infocenter/Conferences/288/ http://library.wolfram.com/infocenter/Conferences/7968/ Among other things they give explicit ...
9
### How to get the determinant and inverse of a large sparse symmetric matrix?
The determinant computation is a matter of memory use in terms of how much we want to store for subdeterminants of a Laplace expansion. Mathematica simply refuses to go that route after 11x11. YOu can do your own as below. myDet[mat_] /; Length[mat] <= 4 := Det[mat] myDet[mat_] := myDet[mat] = Sum[mat[[1, j]]*myDet[Drop[mat, {1}, {j}]], {j, ...
9
### Exploiting self-adjointness when changing basis
As $P$ is explicitly constructed from eigenvectors of a self-adjoint matrix, it is unitary, i.e $P P^\dagger = I\qquad$ where the $\dagger$ is the conjugate transpose (or Hermitian conjugate, if you prefer). So, calculating the inverse is simply ConjugateTranspose[P] which is much faster than calculating it using Inverse. That said, you have to ensure that ...
8
### What's the most “functional” way to do Cholesky decomposition?
For reference there is a built-in function CholeskyDecomposition. For improving your existing code Array may be a minor subjective improvement: HalfFunctionalCholesky2[matrin_List?PositiveDefiniteMatrixQ] := Module[{dimens, uu}, dimens = Length[matrin]; uu = ConstantArray[0, {dimens, dimens}]; Array[(uu[[#]] = makerow[matrin, #, uu, dimens]) ...
8
### Correcting a correlation matrix to be positive semidefinite
Generally, the reason why matrices that were supposed to be positive semi-definite but are not, is because the constraint of working in a finite precision world often introduces a wee bit of perturbation in the lowest eigenvalues of the matrix, making it either negative or complex. These errors are generally of the order of machine precision, but is enough ...
8
### Mathematica for linear algebra course?
Having taught linear algebra using both Mathematica and Matlab, I concur with what others have said that the Mathematica's features for linear algebra include all one might need for a course in undergraduate linear algebra. Since symbolic computation is also fully integrated into Mathematica, it might be better in some ways. For example, we can solve ...
8
### How can I compute the representation matrices of a point group under given basis functions?
You were almost there. Just add the following to your code: iC3v = Inverse /@ C3v; sa = SolveAlways[Flatten@ Table[basis[[i]][iC3v[[k]].{x, y}] == Sum[basis[[j]][{x, y}] d[k, j, i], {j, 3}], {i, 3}, {k, 6}], {x, y}]; MatrixForm /@ Table[d[k, i, j], {k, 6}, {i, 3}, {j, 3}] /. sa And you get your expected result: \$\left( ...
8
### Toggle visibility of elements in a plot
One way to do this is to use Opacity to hide a graph and empty label "" to hide a label: Manipulate[ Plot[{0.5 x + 1, x, 2 x - 2}, {x, -1, 5}, PlotRange -> {-1, 5}, AspectRatio -> 1, PlotStyle -> {Opacity[a], Opacity[b], Opacity[c]}, Epilog -> { Text[If[a == 1, "f(x)", ""], {4.5, 2.7}], Text[If[b == 1, "y=x", ""], {4.5, ...
8
### Eigenvalue / Eigenvector Calculation
For speeding up Mathematica code, a little analysis of a problem often goes a long way. Analysis Writing $\mathbb{G}$ for the diagonal matrix of vertex degrees and $\mathbb{A}$ for the adjacency matrix, this question seeks eigenvectors of $\mathbb{1} - \mathbb{G}^{-1/2} \mathbb{A} \mathbb{G}^{-1/2}$. That is, it looks for solutions $(\mathbf{z}, \lambda)$ ...
8
### How to solve an eigensystem faster?
This is too long for a comment and honestly, to give a real answer, there is more information required in your question. Isn't it possible, that you give a working example, so that we see what takes long and how you implemented it? If you are calling Eigensystem for many different input values which are know, there is still some place for speed-up. Since ...
8
### Dual-Grid Graph Paper With Mathematica?
A possible starting point: With[{θ = π/3}, DeleteCases[ ParametricPlot[{{x, y}, RotationTransform[-θ][Sqrt[2] {x, y}]}, {x, -5, 5}, {y, -5, 5}, PlotRange -> {{-5, 5}, {-5, 5}}], _Polygon, ∞]]
Only top voted, non community-wiki answers of a minimum length are eligible | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8514952659606934, "perplexity_flag": "middle"} |
http://www.askmehelpdesk.com/mathematics/what-probability-if-5-fair-6-sided-dice-rolled-product-4-a-677902-2.html
# what is the probability that if 5 fair 6-sided dice are rolled the product is 4
Asked Jul 2, 2012, 07:55 AM — 17 Answers
what is the probability that if 5 fair 6-sided dice are rolled the product of the numbers is 4
Thread Summary
17 Answers
ebaines Posts: 10,033, Reputation: 5529 Expert #11 Jul 2, 2012, 08:53 AM
You're missing quite a few combinations for the 1,1,1,2,2 set: such as 1,1,2,1,2. Don't forget that the 2's don't have to be next to each other.
Yes, there is a formula for calculating this: for the 1,1,1,2,2 set you can calculate the number of ways that the two 2's can be arranged in a set of 5 using combinations C(5,2), where
$$C(a,b)= \frac {a!}{b!(a-b)!}$$
Try that and tell us what you get. You should also try calculating the number of ways that the three 1's can be arranged in a set of 5 - the answer should be the same! I would also suggest that you try writing out all the combinations by hand (don't forget to include all the cases where the 2's aren't together) and see if it all agrees.
833student Posts: 9, Reputation: 1 New Member #12 Jul 2, 2012, 09:01 AM
Quote:
Originally Posted by ebaines You're missing quite a few combinations for the 1,1,1,2,2 set: such as 1,1,2,1,2. Don't forget that the 2's don't have to be next to each other. Yes, there is a formula for calculating this: for the 1,1,1,2,2 set you can calculate the number of ways that the two 2's can be arranged in a set of 5 using combinations C(5,2), where $C(a,b)= \frac {a!}{b!(a-b)!}$ Try that and tell us what you get. You should also try calculating the number of ways that the three 1's can be arranged in a set of 5 - the answer should be the same! I would also suggest that you try writing out all the combinations by hand (don't forget to include all the cases where the 2's aren't together) and see if it all agrees.
so would it be 5C2= 10 for the 2's and 5C1= 5 for the 1's and 5C4=5 for the 4's and then add together?
ebaines Posts: 10,033, Reputation: 5529 Expert #13 Jul 2, 2012, 09:06 AM
Getting close. Both 5C2 and 5C3 equal 10, and each gives the number of ways that 1,1,1,2,2 can be arranged - so you only need one of them. And either 5C1 or 5C4 = 5 gives the number of ways that 4,1,1,1,1 can be arranged. So all you need is to add together 5C2 + 5C1. Again - I suggest you write out all combinations to see if it's correct.
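(Editorial aside, not part of the original thread: the count of 15 is easy to confirm by brute force; the snippet below just enumerates all rolls.)

```python
from itertools import product
from math import prod

rolls = product(range(1, 7), repeat=5)          # all 6^5 = 7776 outcomes for five dice
hits = sum(1 for r in rolls if prod(r) == 4)    # rolls whose product is 4
print(hits, hits / 6**5)                        # prints 15 and the probability 15/7776
```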
833student Posts: 9, Reputation: 1 New Member #14 Jul 2, 2012, 11:47 AM
So the answer would be 15. What if I changed it to be 4 fair dice with a product of 12? The numbers involved are
3*2*2*1
3*4*1*1
2*6*1*1
would the combination formula be as follows?
4C3+4C2=10?
ebaines Posts: 10,033, Reputation: 5529 Expert #15 Jul 2, 2012, 11:59 AM
No. The problem is that once you try to address how three different outcomes can be arranged it's a bit more complicated. For example: the set 3, 2, 2, 1 can be arranged as follows:
3,2,2,1
3,2,1,2
3,1,2,2
2,3,2,1
2,3,1,2
2,2,3,1
2,2,1,3
2,1,3,2
2,1,2,3
1,2,2,3
1,2,3,2
1,3,2,2
That's 12 possible ways, which is not the same as 4C2 or 4C3. If you repeat this for sets 3,4,1,1 and 2,6,1,1 you'll see that each of these has 12 possible arrangements. So that's a total of 36 possible "winning" combinations of rolls out of 6^4.
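(Editorial aside: the same brute-force check confirms the count of 36 for four dice with product 12.)

```python
from itertools import product
from math import prod

hits = sum(1 for r in product(range(1, 7), repeat=4) if prod(r) == 12)
print(hits, 6**4)   # prints 36 and 1296
```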
833student Posts: 9, Reputation: 1 New Member #16 Jul 2, 2012, 12:20 PM
so what would be an equation to use instead of writing everything out?
ebaines Posts: 10,033, Reputation: 5529 Expert #17 Jul 2, 2012, 12:32 PM
There's no single equation. But there are a couple of different ways to think it through.
Given a set (a, b, c, c) where one element is repeated (the 'c') and the others occur once:
1. Consider what happens if the sequence starts with 'a' - that leaves one b and two c's for the last three positions, so there are 3C1 = 3 ways the b and c's could be arranged.
2. Same thing if the sequence starts with 'b' - there are 3 ways that the remaining a and 2 c's can be arranged.
3. Suppose the series starts with a 'c' - that leaves one a, one b and one c. There are 3! = 6 ways they can be arranged.
So the total is 3 + 3 + 6 = 12.
Another way to approach this is to consider that these 4 could be arranged with the a in one of 4 places (either first, second, third or fourth in sequence), which leaves any one of three places that the b could be. The c's then just fill in whatever is left. So the total number of ways to arrange (a,b,c,c) is 4 x 3 = 12. Or alternatively you could think of the two c's as distinct items - let's call them c1 and c2. The number of ways that a, b, c1, and c2 can be arranged is 4! = 24. Now since in fact c1=c2 you have to divide by 2! to take into account the duplicate c's, so 24/2 = 12. This same approach can be used if there are more c's - for example the number of ways that (a, b, c, c, c) can be arranged is 5!/3! = 20. It's also a handy technique if there are multiple a's and/or b's as well - for example the set (a,a,b,b,b,c,c,c,c) can be arranged in 9!/(2! x 3! x 4!) = 1260 ways.
As you can see there are various ways to approach these types of problems.
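(Editorial aside: the multinomial counts described above can be checked directly, for instance in Python.)

```python
from math import factorial
from itertools import permutations

print(factorial(4) // factorial(2))                                  # arrangements of (a,b,c,c): 12
print(len(set(permutations("abcc"))))                                # same count by explicit listing
print(factorial(9) // (factorial(2) * factorial(3) * factorial(4)))  # (a,a,b,b,b,c,c,c,c): 1260
```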
833student Posts: 9, Reputation: 1 New Member #18 Jul 2, 2012, 12:47 PM
Ok! thanks so much for all your help
View more Mathematics questions Search | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9318602681159973, "perplexity_flag": "middle"} |
http://mathhelpforum.com/number-theory/151779-number-1-2-b-1-2-always-irrational-following-conditions.html | # Thread:
1. ## Is the number a^(1/2) + b^(1/2) always irrational with the following conditions?
Is the following number (with the added conditions taken into account) always irrational? And how would one go about proving this? Here's the number and the conditions:
Is the number $\sqrt{a} + \sqrt{b}$ always irrational? Given these conditions:
$a \neq b$
$a = 1,2,3.... \;\;\; and \;\;\;b=1,2,3,....$
$a \neq x^m$ for some numbers $x=1,2,3....$ and $m=1,2,3...$
$b \neq y^n$ for some numbers $y=1,2,3....$ and $n=1,2,3..$
There, I think I've covered all the proper conditions. But if there is one I missed, I'm sure you catch the drift of the exact question I am asking, and can probably guess whatever the missing condition is (not that there is a missing condition, just considering the possibility). Anyway, there's the question. Also, I understand the possibility that there is a simple example that shows the number $\sqrt{a} + \sqrt{b}$ to be rational in some obvious case, and if there is such a case, then this question is solved simply. And if your answer is yes, I'd be interested in seeing a proof, or at least the "gist" of the proof. Thanks in advance
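(Editorial note, not from the original thread: for reference, the standard argument runs as follows. Suppose $\sqrt{a}+\sqrt{b}=r$ were rational; since $a \neq b$ we have $r \neq 0$, and
$$\sqrt{a}-\sqrt{b}=\frac{a-b}{\sqrt{a}+\sqrt{b}}=\frac{a-b}{r}\in\mathbb{Q}, \qquad\text{so}\qquad \sqrt{a}=\frac{1}{2}\left(r+\frac{a-b}{r}\right)\in\mathbb{Q}.$$
A rational square root of a positive integer is itself an integer, so $a$ would be a perfect square, contradicting the condition $a \neq x^m$.)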
2. Originally Posted by mfetch22
Is the following number (with the added conditions taken into account) always irrational? And how would one go about proving this? Heres the number and the conditions:
Is the number $\sqrt{a} + \sqrt{b}$ always irrational? Given these conditions:
$a \neq b$
$a = 1,2,3.... \;\;\; and \;\;\;b=1,2,3,....$
$a \neq x^m$ for some numbers $x=1,2,3....$ and $m=1,2,3...$
$b \neq y^n$ for some numbers $y=1,2,3....$ and $n=1,2,3..$
There, I think I've covered all the proper conditions. But if there is one I missed, I'm sure you catch the drift of the exact question I am asking, and can probably guess whatever the missing condition is, not that there is a missing condition, just considering the possibility. Anyway, theres the question. Also, I understand the possibility that there is a simple example that shows the number $\sqrt{a} + \sqrt{b}$ to be rational in some obvious case, and if there is such a case, then this question is solved simply. And if your answer is yes, I"d be interested in seeing a proof, or atleast the "gist" of the proof. Thanks in advance
Isn't this covered here?
http://www.mathhelpforum.com/math-he...tml#post540098
(post #3) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9533565044403076, "perplexity_flag": "head"} |
http://xianblog.wordpress.com/2010/06/05/on-particle-learning/ | # Xi'an's Og
an attempt at bloggin, from scratch…
## On particle learning
In connection with the Valencia 9 meeting that started yesterday, and with Hedie‘s talk there, we have posted on arXiv a set of comments on particle learning. The arXiv paper contains several discussions but they mostly focus on the inevitable degeneracy that accompanies particle systems. When Lopes et al. state that $p(Z^t|y^t)$ is not of interest as the filtered, low dimensional $p(Z_t|y^t)$ is sufficient for inference at time t, they seem to implicitly imply that the restriction of the simulation focus to a low dimensional vector is a way to avoid the degeneracy inherent to all particle filters. The particle learning algorithm therefore relies on an approximation of $p(Z^t|y^t)$ and the fact that this approximation quickly degenerates as t increases means that this approximation impacts the approximation of $p(Z_t|y^t)$. We show that, unless the size of the particle population exponentially increases with t, the sample of $Z_t$‘s will not be distributed as an iid sample from $p(Z_t|y^t)$.
The graph above is an illustration of the degeneracy in the setup of a Poisson mixture with five components and 10,000 observations. The boxplots represent the variation of the evidence approximations based on a particle learning sample and Lopes et al. approximation, on a particle learning sample and Chib’s (1995) approximation, and on an MCMC sample and Chib’s (1995) approximation, for 250 replications. The differences are therefore quite severe when considering this number of observations. (I put the R code on my website for anyone who wants to check if I programmed things wrong.) There is no clear solution to the degeneracy problem, in my opinion, because the increase in the particle size overcoming degeneracy must be particularly high… We will be discussing that this morning.
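(Editorial illustration, not the author's R code, which is available on his website: the path-degeneracy phenomenon discussed above can be reproduced with a toy resampling experiment; all names below are made up for the example.)

```python
import random

def surviving_ancestors(n_particles=200, n_steps=500, seed=0):
    # Toy filter: at each step, particles receive random weights and are
    # multinomially resampled; we track how many distinct time-1 ancestors
    # still have surviving descendants. This count can only decrease with t.
    rng = random.Random(seed)
    ancestors = list(range(n_particles))
    history = []
    for _ in range(n_steps):
        weights = [rng.random() for _ in range(n_particles)]
        ancestors = rng.choices(ancestors, weights=weights, k=n_particles)
        history.append(len(set(ancestors)))
    return history

h = surviving_ancestors()
print(h[0], h[49], h[-1])   # the ancestral diversity collapses as t grows
```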
This entry was posted on June 5, 2010 at 12:32 am and is filed under R, Statistics, University life with tags degeneracy, particle learning, Valencia 9.
### 2 Responses to “On particle learning”
1. [...] rejoinder to the discussion of his Valencia 9 paper has been posted. Since the discussion involved several points made by members of the CREST statistics lab (and covered the mixture paper as much as the Valencia [...]
2. [...] major change is the elimination of reversible jump from the mixture chapter (to be replaced with Chib’s approximation) and from the time-series chapter (to be simplified into a birth-and-death process). Going back to [...]
Cancel | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.935932993888855, "perplexity_flag": "middle"} |
http://www.citizendia.org/Homomorphism Not to be confused with homeomorphism.
In abstract algebra, a homomorphism is a structure-preserving map between two algebraic structures (such as groups, rings, or vector spaces). The word homomorphism comes from the Greek language: homos meaning "same" and morphe meaning "shape". Note the similar root word "homoios," meaning "similar," which is found in another mathematical concept, namely homeomorphisms.
## Informal discussion
Because abstract algebra studies sets with operations that generate interesting structure or properties on the set, the most interesting functions are those which preserve the operations. These functions are known as homomorphisms.
For example, consider the natural numbers with addition as the operation. A function which preserves addition should have this property: f(a + b) = f(a) + f(b). For example, f(x) = 3x is one such homomorphism, since f(a + b) = 3(a + b) = 3a + 3b = f(a) + f(b). Note that this homomorphism maps the natural numbers back into themselves.
Homomorphisms do not have to map between sets which have the same operations. For example, operation-preserving functions exist between the set of real numbers with addition and the set of positive real numbers with multiplication. A function which preserves operation should have this property: f(a + b) = f(a) * f(b), since addition is the operation in the first set and multiplication is the operation in the second. Given the laws of exponents, f(x) = e^x satisfies this condition: 2 + 3 = 5 translates into e^2 * e^3 = e^5.
A particularly important property of homomorphisms is that if an identity element is present, it is always preserved, that is, mapped to the identity. Note in the first example f(0) = 0, and 0 is the additive identity. In the second example, f(0) = 1, since 0 is the additive identity, and 1 is the multiplicative identity.
If we are considering multiple operations on a set, then all operations must be preserved for a function to be considered as a homomorphism. Even though the set may be the same, the same function might be a homomorphism, say, in group theory (sets with a single operation) but not in ring theory (sets with two related operations), because it fails to preserve the additional operation that ring theory considers.
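(A quick numerical illustration of the two informal examples above; this code block is an editorial addition and not part of the original article.)

```python
import math

samples = [(1.5, 2.25), (-3.0, 0.5), (4.0, -1.0)]
for a, b in samples:
    # f(x) = 3x preserves addition: f(a + b) = f(a) + f(b)
    assert 3 * (a + b) == 3 * a + 3 * b
    # exp maps (R, +) to (R_{>0}, *): exp(a + b) = exp(a) * exp(b)
    assert math.isclose(math.exp(a + b), math.exp(a) * math.exp(b))
assert math.exp(0) == 1.0   # the additive identity is sent to the multiplicative identity
print("homomorphism properties hold on the sample values")
```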
## Formal definition
A homomorphism is a map from one algebraic structure to another of the same type that preserves all the relevant structure; i.e. properties like identity elements, inverse elements, and binary operations.
N.B. Some authors use the word homomorphism in a larger context than that of algebra. Some take it to mean any kind of structure preserving map (such as continuous maps in topology), or even a more abstract kind of map—what we term a morphism—used in category theory. This article only treats the algebraic context. For more general usage see the morphism article.
For example; if one considers two sets X and Y with a single binary operation defined on them (an algebraic structure known as a magma), a homomorphism is a map $\phi: X \rightarrow Y$ such that
$\phi(u \cdot v) = \phi(u) \circ \phi(v)$
where $\cdot$ is the operation on X and $\circ$ is the operation on Y.
Each type of algebraic structure has its own type of homomorphism. For specific definitions see:
• group homomorphism
• ring homomorphism
• module homomorphism
• linear map (homomorphism of vector spaces)
• algebra homomorphism
The notion of a homomorphism can be given a formal definition in the context of universal algebra, a field which studies ideas common to all algebraic structures. In this setting, a homomorphism $\phi: A \rightarrow B$ is a map between two algebraic structures of the same type such that
$\phi(f_A(x_1, \ldots, x_n)) = f_B(\phi(x_1), \ldots, \phi(x_n))\,$
for each n-ary operation f and for all $x_i$ in A.
## Types of homomorphisms
• An isomorphism is a bijective homomorphism. Two objects are said to be isomorphic if there is an isomorphism between them. Isomorphic objects are completely indistinguishable as far as the structure in question is concerned.
• An epimorphism is a surjective homomorphism.
• A monomorphism (also sometimes called an extension) is an injective homomorphism.
• A homomorphism from an object to itself is called an endomorphism.
• An endomorphism which is also an isomorphism is called an automorphism.
The above terms are used in an analogous fashion in category theory; however, the definitions in category theory are more subtle; see the article on morphism for more details.
Note that in the larger context of structure preserving maps, it is generally insufficient to define an isomorphism as a bijective morphism. One must also require that the inverse is a morphism of the same type. In the algebraic setting (at least within the context of universal algebra) this extra condition is automatically satisfied.
[Figure caption] Relationships between different kinds of homomorphisms: H = set of Homomorphisms, M = set of Monomorphisms, P = set of ePimorphisms, S = set of iSomorphisms, N = set of eNdomorphisms, A = set of Automorphisms. Notice that: M ∩ P = S, S ∩ N = A, P ∩ N = A, M ∩ N \ A contains only infinite homomorphisms, and P ∩ N \ A is empty.
## Kernel of a homomorphism
Main article: Kernel (algebra)
Any homomorphism f : X → Y defines an equivalence relation ~ on X by a ~ b iff f(a) = f(b). The relation ~ is called the kernel of f. It is a congruence relation on X. The quotient set X/~ can then be given an object-structure in a natural way, i.e. [x] * [y] = [x * y]. In that case the image of X in Y under the homomorphism f is necessarily isomorphic to X/~; this fact is one of the isomorphism theorems. Note in some cases (e.g. groups or rings), a single equivalence class K suffices to specify the structure of the quotient; so we can write it X/K. (X/K is usually read as "X mod K".) Also in these cases, it is K, rather than ~, that is called the kernel of f (cf. normal subgroup, ideal).
## Homomorphisms and e-free homomorphisms in formal language theory
Homomorphisms are also used in the study of formal languages. [1] Given alphabets $\Sigma_1$ and $\Sigma_2$, a function h : $\Sigma_1^*$ → $\Sigma_2^*$ such that h(uv) = h(u)h(v) for all u and v in $\Sigma_1^*$ is called a homomorphism on $\Sigma_1^*$. [2] Let e denote the empty word. If h is a homomorphism on $\Sigma_1^*$ and $h(x) \ne e$ for all $x \ne e$ in $\Sigma_1^*$, then h is called an e-free homomorphism.
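(A concrete illustration, added editorially: a homomorphism on $\Sigma_1^*$ is determined by the images of the individual letters, and the defining property h(uv) = h(u)h(v) then holds automatically. The alphabet and letter images below are made up for the example.)

```python
# Images of the single letters of the alphabet {a, b, c}; h(c) is the empty
# word, so this particular h is a homomorphism but not an e-free one.
letter_images = {"a": "01", "b": "1", "c": ""}

def h(word: str) -> str:
    return "".join(letter_images[ch] for ch in word)

u, v = "abc", "cab"
assert h(u + v) == h(u) + h(v)   # h(uv) = h(u)h(v)
print(h("abc"))                  # prints "011"
```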
## See also
• morphism
• graph homomorphism
• continuous function
• homeomorphism
• diffeomorphism
• Homomorphic secret sharing - A simplistic decentralized voting protocol.
## References
1. ^ Seymour Ginsburg, Algebraic and automata theoretic properties of formal languages, North-Holland, 1975, ISBN 0 7204 2506 9.
2. ^ In homomorphisms on formal languages, the * operation is the Kleene star operation. The $\cdot$ and $\circ$ are both concatenation, commonly denoted by juxtaposition. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9211450815200806, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/6927/list | ## Return to Answer
Let me say first that Engelbrekt's last point is not true - in fact a Dedekind zeta function is always a product of Artin L-functions. It is the structure of the Galois closure which is relevant here. Let me give a nice example which is indicative of the general case. Let $p(x) \in \mathbb{Z}[x]$ be an irreducible cubic, and let $\alpha$ be a root of $p$. Then $K=\mathbb{Q}(\alpha)$ has trivial automorphism group, and its Galois closure (say $L/\mathbb{Q}$) is an S3-extension. The group S3 has three irreducible representations: the trivial representation, the "sign representation" $\chi$ which is also one-dimensional, and an irreducible two-dimensional representation which we will call $\rho$. Then we have the relations $\zeta_K(s)=\zeta_{\mathbb{Q}}(s)L(s,\rho)$ and $\zeta_L(s)=\zeta_{\mathbb{Q}}(s)L(s,\chi)L(s,\rho)^2$. The proofs of these facts are part of the formalism of Artin L-functions.
Generally, the distinction is really a matter of history. Certain objects were named zeta functions - Hasse-Weil, Dedekind - while Dirichlet chose the letter "L" for the functions he made out of characters. However, one feature is that "zeta" functions tend to have poles, and they often "factor" into L-functions. These vagaries are made more precise in various places, for example Iwaniec-Kowalski Ch. 5 and some survey articles on the "Selberg class" of Dirichlet series. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9453771114349365, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/225621/a-nonlinear-version-of-the-riesz-isomorphism | # A nonlinear version of the Riesz isomorphism
The present question regards the proof of the following theorem which is found in Adams' Sobolev spaces, §2.30 - 2.33. Here $(\Omega, \mathcal{M}, \mu)$ denotes some arbitrary measure space and $L^p$ always stands for $L^p(\Omega)$.
Riesz Representation Theorem Let $1<p<\infty$ and let $p'=p/(p-1)$. For any $v \in L^{p'}$ denote $L_v$ to be the linear functional on $L^p$ defined by the following equation: $$\langle L_v, w\rangle =\int_{\Omega} vw\, d\mu.$$ Then the mapping $v\mapsto L_v$ is an isometric isomorphism of $L^{p'}$ onto $[L^p]^\star$.
The most interesting thing to prove is that this mapping $L$ is surjective, which as far as I know is usually done by means of the Radon-Nikodym's theorem of measure theory. On the contrary, Adams' book employs uniform convexity of $L^p$ space and two of the four Clarkson's inequalities, precisely: $$\tag{38} \forall u, v \in L^p, 2\le p <\infty,\quad \left\lVert \frac{u+v}{2}\right\rVert_p^{p'}+\left\lVert \frac{u-v}{2}\right\rVert_p^{p'}\ge\left( \frac{1}{2}\lVert u \rVert_p^p+\frac{1}{2}\lVert v\rVert_p^p\right)^{p'-1},$$ $$\tag{40} \forall u, v \in L^p, 1< p \le 2,\quad \left\lVert \frac{u+v}{2}\right\rVert_p^{p}+\left\lVert \frac{u-v}{2}\right\rVert_p^{p}\ge \frac{1}{2}\lVert u \rVert_p^p+\frac{1}{2}\lVert v\rVert_p^p.$$ His proof follows those steps:
1. Because of uniform convexity, there exists a duality mapping $$\left[ F\in \left( L^p\right)^\star \right] \to \left[\text{the unique}\ w\in L^p\ \text{s.t.}\ \lVert w\rVert_p=\lVert F \rVert_\star\ \text{and}\ \langle F, w\rangle=\lVert F\rVert_\star^2\right].$$
2. The duality mapping has an explicitly known left inverse $$\left[ L_v\in (L^p)^\star\ \text{where}\ v=\frac{\lvert w\rvert^{p-1}\text{signum}(w)}{\left(\int \lvert w\rvert^p\,d\mu\right)^{\frac{p-2}{p}}}\right] \leftarrow \left[ w\in L^p\right].$$
3. Because of inequalities (38) and (40), the duality mapping is injective.
This means that the duality mapping is bijective and so, in particular, that every bounded linear functional $T$ on $L^p$ is of the form $L_v$, which is what Adams wanted to prove.
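As a sanity check on the explicit formula in step 2 (an added numerical sketch in the finite-dimensional case $\ell^p_n$, not part of Adams' argument), one can verify that the candidate $v$ indeed satisfies $\lVert v\rVert_{p'}=\lVert w\rVert_p$ and $\langle L_v,w\rangle=\lVert w\rVert_p^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3.5                          # any exponent with 1 < p < infinity
q = p / (p - 1)                  # the conjugate exponent p'
w = rng.standard_normal(7)       # a vector in l^p_7, standing in for w in L^p

norm_w = np.sum(np.abs(w) ** p) ** (1 / p)
# v = |w|^{p-1} signum(w) / (integral of |w|^p)^{(p-2)/p}, as in step 2
v = np.abs(w) ** (p - 1) * np.sign(w) / norm_w ** (p - 2)

norm_v = np.sum(np.abs(v) ** q) ** (1 / q)
pairing = float(np.dot(v, w))    # <L_v, w> for the counting measure

assert np.isclose(norm_v, norm_w)          # ||v||_{p'} = ||w||_p
assert np.isclose(pairing, norm_w ** 2)    # <L_v, w> = ||w||_p^2
print(norm_w, norm_v, pairing)
```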
However, it seems to me that he actually proved much more than that: namely, this proof introduces the duality mapping, which is (as far as I can understand) a generalization of the Riesz isomorphism between a Hilbert space and its dual. Also, it looks like this mapping only depends on some easily generalizable properties of $L^p$ such as uniform convexity. So:
Question How much is known about the duality mapping in an abstract Banach space? What are the minimal hypotheses that guarantee its existence? In which spaces does it have an explicit analytic expression?
Thank you for reading.
-
A small remark: If you look at von Neumann's proof of the Radon-Nikodym theorem (via the $L^2$-duality), as in Rudin's Real and complex analysis, chapter 6, for example, step 2 of your outline and Rudin's argument (compare with 6.10 and 6.17) are actually very close to each other. – commenter Oct 31 '12 at 5:18
@commenter: Thank you very much! I'm very much interested in those wormholes between measure theory and functional analysis. – Giuseppe Negro Oct 31 '12 at 10:32
## 1 Answer
For every normed space $X$ there is a duality with its dual via $$\langle\cdot,\cdot\rangle: X\times X^*\to\mathbb{C}:(x,f)\mapsto f(x).$$ This duality is bilinear, and what is more $$\Vert x\Vert=\sup\{|\langle x, f\rangle|: f\in X^*,\;\Vert f\Vert\leq 1\},\qquad \Vert f\Vert=\sup\{|\langle x, f\rangle|: x\in X,\;\Vert x\Vert\leq 1\}.$$ Thus a good duality always exists; the question is the explicit description of $X^*$. The problem of completely describing the duals of arbitrary normed spaces seems hopeless, but for the most common spaces explicit descriptions are known. Here are some of them.
Let $(\Omega,\Sigma,\mu)$ be a measure space ($\sigma$-finite in the case $p=1$); then $$L_p(\Omega,\Sigma,\mu)^*= \begin{cases} L_{p/(p-1)}(\Omega,\Sigma,\mu)\quad&\text{ if }\quad p\in(1,+\infty)\\ L_{\infty}(\Omega,\Sigma,\mu)\quad&\text{ if }\quad p=1\\ \mathrm{ba}(\Omega,\Sigma,\mu)\quad&\text{ if }\quad p=\infty \end{cases}$$ where $\mathrm{ba}(\Omega,\Sigma,\mu)$ is the space of bounded finitely additive signed measures on $\Sigma$ that vanish on sets of $\mu$-measure zero.
Let $\Omega$ be a normal topological space; then $$C_b(\Omega)^*=\mathrm{rba}(\Omega)$$ where $C_b(\Omega)$ is the space of bounded continuous functions and $\mathrm{rba}(\Omega)$ is the space of regular bounded finitely additive complex-valued measures on the algebra generated by the closed sets.
Let $\Omega$ be a locally compact Hausdorff space; then $$C_0(\Omega)^*=\mathrm{rca}(\Omega)$$ where $C_0(\Omega)$ is the space of continuous functions vanishing at infinity and $\mathrm{rca}(\Omega)$ is the space of regular bounded $\sigma$-additive complex-valued Borel measures on $\Omega$.
Proofs of these results can be found in Linear Operators, Part I: General Theory by N. Dunford and J. T. Schwartz.
There is a non-commutative analogue of the $L_p$ duality. Let $H$, $K$ be Hilbert spaces; then $$S_p(H,K)^*= \begin{cases} S_{p/(p-1)}(K,H)\quad&\text{ if }\quad p\in(1,+\infty)\\ \mathcal{B}(K,H)\quad&\text{ if }\quad p=1\\ S_{1}(K,H)\quad&\text{ if }\quad p=\infty \end{cases}$$ where $S_p(H,K)$ is the Schatten $p$-class of operators (with $S_\infty$ understood as the compact operators, whose dual is the trace class).
Feel free to add some other dualities here.
-
Nice and interesting answer, however this is not exactly what I was looking for. Unfortunately my question was too long and unnecessarily complicated to be clear. What I am interested in is the mapping $$[w \in L^p]\to\left[ L_v\in (L^p)^\star,\ v=\frac{\lvert w \rvert^{p-1}\text{signum}(w)}{\left( \int\lvert w\rvert^p\, dx\right)^{\frac{p-2}{p}}}\right]$$ – Giuseppe Negro Oct 31 '12 at 14:30
Ok, I can provide dualization mappings for you in each case. Will it be enough for you? – Norbert Oct 31 '12 at 14:31
... which is a kind of nonlinear version of the Riesz isomorphism of a Hilbert space and its dual. I was wondering: what property should we require on a Banach space $X$ so that it possesses such a mapping $X\to X^\star$ and in which cases is this mapping explicitly computable. – Giuseppe Negro Oct 31 '12 at 14:35
Also, sorry for not being clear. – Giuseppe Negro Oct 31 '12 at 14:36
@GiuseppeNegro You are asking too much, and your question is too vague. For example, there are spaces for which there is no explicit description of the dual (for example $\ell_\infty$); non-existence here is strongly correlated with the axiom of choice. Another reason is that we don't have a rigorous definition of a "computable" dual space. For example, is the description of $(L_\infty)^*$ as $\mathrm{ba}$ computable? At first sight yes, but in fact no one can write down such a finitely additive measure explicitly. – Norbert Oct 31 '12 at 14:47
show 1 more comment | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950071394443512, "perplexity_flag": "head"} |
http://www.reference.com/browse/gluon | Definitions
# gluon
[gloo-on] /ˈgluɒn/
gluon, an elementary particle that mediates, or carries, the strong, or nuclear, force. In quantum chromodynamics (QCD), the quantum field theory of strong interactions, the interaction of quarks (to form protons, neutrons, and other elementary particles) is described in terms of gluons—so called because they "glue" the quarks together. Gluons are massless, travel at the speed of light, and possess a property called color. Analogous to electric charge in charged particles, color is of three varieties, arbitrarily designated as red, blue, and yellow, and—analogous to positive and negative charges—three anticolor varieties. Quarks change their color as they emit and absorb gluons, and the exchange of gluons maintains proper quark color balance.
Unlike other forces, the force between quarks increases as the distance between the quarks increases. Up to distances about the diameter of a proton, quarks behave as if they were free of one another, a condition called asymptotic freedom. As the quarks move farther apart, the gluons that move between them utilize the energy that they draw from the quark's motion to create more gluons—the larger the number of gluons exchanged among quarks, the stronger the binding force. The gluons thus appear to lock the quarks inside the elementary particles, a condition called confinement. Gluons can also bind with one another to form composite particles called glueballs.
According to QCD, only colorless objects may exist in isolation. Therefore, individual gluons and individual quarks cannot exist in nature, and only indirect evidence of their existence can be detected. In 1979, compelling evidence was found when quarks were shown to emit gluons during studies of particle collisions at the German national high-energy physics laboratory in Hamburg.
See J. Diasdebens and S. Costa Ramos, ed., The Physics of Quark-Gluon Plasma (1988); F. J. Yndurain, The Theory of Quark and Gluon Interactions (1993).
Gluons (from glue and the suffix -on) are elementary particles that cause quarks to interact, and are indirectly responsible for the binding of protons and neutrons together in atomic nuclei.
In technical terms, they are vector gauge bosons that mediate strong color charge interactions of quarks in quantum chromodynamics (QCD). Unlike the neutral photon of quantum electrodynamics (QED), gluons themselves participate in strong interactions. The gluon has the ability to do this as it carries the color charge and so interacts with itself, making QCD significantly harder to analyze than QED.
## Properties
The gluon is a vector boson; like the photon, it has a spin of 1. While massive spin-1 particles have three polarization states, massless gauge bosons like the gluon have only two polarization states because gauge invariance requires the polarization to be transverse. In quantum field theory, unbroken gauge invariance requires that gauge bosons have zero mass (experiment limits the gluon's mass to less than a few MeV). The gluon has negative intrinsic parity and zero isospin. It is its own antiparticle.
## Numerology of gluons
Unlike the single photon of QED or the three W and Z bosons of the weak interaction, there are eight independent types of gluon in QCD.
This may be difficult to understand intuitively. Quarks carry three types of color charge; antiquarks carry three types of anticolor. Gluons may be thought of as carrying both color and anticolor, but to correctly understand how they are combined, it is necessary to consider the mathematics of color charge in more detail.
### Color charge and superposition
In quantum mechanics, the states of particles may be added according to the principle of superposition; that is, they may be in a "combined state" with a probability, if some particular quantity is measured, of giving several different outcomes. A relevant illustration in the case at hand would be a gluon with a color state described by:
$(r\bar{b}+b\bar{r})/\sqrt{2}$
This is read as "red-antiblue plus blue-antired." (The factor of the square root of two is required for normalization, a detail which is not crucial to understand in this discussion.) If one were somehow able to make a direct measurement of the color of a gluon in this state, there would be a 50% chance of it having red-antiblue color charge and a 50% chance of blue-antired color charge.
### Color singlet states
It is often said that the stable strongly-interacting particles observed in nature are "colorless," but more precisely they are in a "color singlet" state, which is mathematically analogous to a spin singlet state. Such states can interact with other color singlets, but not with colored states. If a color-singlet gluon existed, it could be exchanged between color singlets and would mediate a long-range strong interaction; since no such long-range interaction is observed, gluons in the singlet state do not exist.
The color singlet state is:
$(r\bar{r}+b\bar{b}+g\bar{g})/\sqrt{3}$
In words, if one could measure the color of the state, there would be equal probabilities of it being red-antired, blue-antiblue, or green-antigreen.
### Eight gluon colors
There are eight remaining independent color states, which correspond to the "eight types" or "eight colors" of gluons. Because states can be mixed together as discussed above, there are many ways of presenting these states, which are known as the "color octet." One commonly used list is:
$(r\bar{b}+b\bar{r})/\sqrt{2}$ $-i(r\bar{b}-b\bar{r})/\sqrt{2}$
$(r\bar{g}+g\bar{r})/\sqrt{2}$ $-i(r\bar{g}-g\bar{r})/\sqrt{2}$
$(b\bar{g}+g\bar{b})/\sqrt{2}$ $-i(b\bar{g}-g\bar{b})/\sqrt{2}$
$(r\bar{r}-b\bar{b})/\sqrt{2}$ $(r\bar{r}+b\bar{b}-2g\bar{g})/\sqrt{6}$
These are equivalent to the Gell-Mann matrices; the translation between the two is that red-antired is the upper-left matrix entry, red-antiblue is the left middle entry, blue-antigreen is the bottom middle entry, and so on. The critical feature of these particular eight states is that they are linearly independent, and also independent of the singlet state; there is no way to add any combination of states to produce any other. (It is also impossible to add them to make $r\bar{r}$, $g\bar{g}$, or $b\bar{b}$; if it were, then the forbidden singlet state could also be made.) There are many other possible choices, but all are mathematically equivalent, at least equally complex, and give the same physical results.
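As a quick numerical illustration of the linear-independence claim (an added sketch in Python using the $(r,b,g)$ labelling above, not part of the original article), one can build the eight listed combinations as vectors in the nine-dimensional color-anticolor space and check that, together with the singlet, they span all nine dimensions, with the octet orthogonal to the singlet:

```python
import itertools
import numpy as np

colors = ["r", "b", "g"]
index = {c + "~" + a: i for i, (c, a) in enumerate(itertools.product(colors, colors))}

def state(*terms):
    """Build a normalised vector from (coefficient, 'color~anticolor') pairs."""
    v = np.zeros(9, dtype=complex)
    for coeff, name in terms:
        v[index[name]] = coeff
    return v / np.linalg.norm(v)

octet = [
    state((1, "r~b"), (1, "b~r")),   state((-1j, "r~b"), (1j, "b~r")),
    state((1, "r~g"), (1, "g~r")),   state((-1j, "r~g"), (1j, "g~r")),
    state((1, "b~g"), (1, "g~b")),   state((-1j, "b~g"), (1j, "g~b")),
    state((1, "r~r"), (-1, "b~b")),  state((1, "r~r"), (1, "b~b"), (-2, "g~g")),
]
singlet = state((1, "r~r"), (1, "b~b"), (1, "g~g"))

M = np.array(octet + [singlet])
print(np.linalg.matrix_rank(M))                # 9: all nine states are independent
print(np.allclose(M[:8] @ singlet.conj(), 0))  # True: octet orthogonal to the singlet
```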
### Group theory details
Technically, QCD is a gauge theory with SU(3) gauge symmetry. Quarks are introduced as spinor fields in $N_f$ flavours, each in the fundamental representation (triplet, denoted 3) of the colour gauge group, SU(3). The gluons are vector fields in the adjoint representation (octets, denoted 8) of colour SU(3). For a general gauge group, the number of force-carriers (like photons or gluons) is always equal to the dimension of the adjoint representation. For the simple case of SU(N), the dimension of this representation is $N^2-1$.
In terms of group theory, the assertion that there are no color singlet gluons is simply the statement that quantum chromodynamics has an SU(3) rather than a U(3) symmetry. There is no known a priori reason for one group to be preferred over the other, but as discussed above, the experimental evidence supports SU(3).
## Confinement
Since gluons themselves carry color charge (again, unlike the photon which is electrically neutral), they participate in strong interactions. These gluon-gluon interactions constrain color fields to string-like objects called "flux tubes", which exert constant force when stretched. Due to this force, quarks are confined within composite particles called hadrons. This effectively limits the range of the strong interaction to $10^{-15}$ meters, roughly the size of an atomic nucleus. (Beyond a certain distance, the energy of the flux tube binding two quarks increases linearly. At a large enough distance, it becomes energetically more favorable to pull a quark-antiquark pair out of the vacuum rather than increase the length of the flux tube.)
Gluons also share this property of being confined within hadrons. One consequence is that gluons are not directly involved in the nuclear forces between hadrons. The force mediators for these are other hadrons called mesons.
Although in the normal phase of QCD single gluons may not travel freely, it is predicted that there exist hadrons which are formed entirely of gluons, called glueballs. There are also conjectures about other exotic hadrons in which real gluons (as opposed to virtual ones found in ordinary hadrons) would be primary constituents. Beyond the normal phase of QCD (at extreme temperatures and pressures), quark gluon plasma forms. In such a plasma there are no hadrons; quarks and gluons become free particles.
## Experimental observations
The first direct experimental evidence of gluons was found in 1979 when three-jet events were observed at the electron-positron collider called PETRA at DESY in Hamburg.
Experimentally, confinement is verified by the failure of free quark searches. Neither free quarks nor free gluons have ever been observed. Although there have been hints of exotic hadrons, no glueball has been observed either. Quark-gluon plasma has been found recently at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL).
## References and external links
• Griffiths, David J. (1987). Introduction to Elementary Particles. Wiley, John & Sons, Inc. ISBN 0-471-60386-4.
• Kaufmann(ed), Scientific American: Particles & Fields (special edition), 1980
• Summary tables in the "Review of particle physics"
• DESY glossary
• Logbook of gluon discovery
• Why are there eight gluons and not nine? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 13, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9198263883590698, "perplexity_flag": "middle"} |
http://www.emathematics.net/sistres.php?ejercicio=simple | User:
Simultaneous equations
Systems of 3 variable Equations
Systems of equations in three variables are systems of planes. As with systems of linear equations in two variables, a system of planes can have no solution, exactly one solution, or infinitely many solutions.
To solve such a system, write the given system in matrix form: write all the information contained in the system in a single augmented matrix and choose one of these methods to solve the system:
• Gaussian Elimination: Take an augmented matrix for the system and carry it by means of elementary row operations to an equivalent augmented matrix from which the solutions of the system are easily obtained. In particular, we bring the augmented matrix to Row-Echelon Form. At that point, the solutions of the system are easily obtained.
• Matrix inverse method: Write the given system as a single matrix equation AX=B. Solve the equation obtained for the matrix variable X by left-multiplying both sides of the above matrix equation (AX=B) by the inverse of the matrix A. Then X = A^(-1)B, and you have the solution.
• Cramer's Rule: To use this method, you have only to follow the steps:
1. Write the coefficient matrix of the system (call this matrix A); if it is square, you may continue, otherwise Cramer's rule is not applicable here.
2. Compute the determinant of the coefficient matrix, |A|; if |A| is not zero you may continue, otherwise Cramer's rule is not applicable here.
3. Suppose the first variable of the system is x. Then write the matrix Ax as follows: substitute the column of numbers to the right of the equal signs instead of the first (from the left) column of A. Now compute the determinant of Ax, that is |Ax|.
4. The value of x in the solution is now |Ax| / |A|.
5. Repeat steps 3, and 4 with the remaining variables. In each case substitute the column of numbers instead of the column of A that corresponds to the variable you are using. If the variables are x, y, and z, then the values will now be:
x = |Ax| / |A|, y = |Ay| / |A|, z = |Az| / |A|
And the solution is
(x, y, z) = ( |Ax| / |A| , |Ay| / |A| , |Az| / |A| )
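As a worked illustration of Cramer's rule as described above (an added sketch in Python using NumPy's determinant routine, not part of the original page), applied to the exercise given below:

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (square A with |A| != 0)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("Cramer's rule does not apply: |A| = 0")
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                 # replace the i-th column by the constants
        x[i] = np.linalg.det(A_i) / det_A
    return x

# The system of the exercise below: 5x+3y-5z=3, -x-y+3z=1, 4x+y-z=4
A = [[5, 3, -5], [-1, -1, 3], [4, 1, -1]]
b = [3, 1, 4]
print(cramer(A, b))                   # -> [1. 1. 1.], i.e. x = y = z = 1
print(np.linalg.solve(A, b))          # cross-check with the library solver
```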
Solve:
$\left\{\begin{array}{l} 5x+3y-5z=3\\ -x-y+3z=1\\ 4x+y-z=4 \end{array}\right.$ x = y = z = | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.836574137210846, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/98609?sort=votes | ## Algorithmically unsolvable problems in topology
This question is inspired by a paper by B. Poonen that appeared on the arxiv some time ago: http://arxiv.org/abs/1204.0299. The paper gives a sample of algorithmically unsolvable problems from various areas of mathematics.
The topology part, however, contains only two such problems: the homeomorphism problem for 4-manifolds, which was shown to be undecidable by Markov in 1958, and the problem of recognizing $S^n,n\geq 5$ up to homeomorphism. The undecidability in both cases basically boils down to the undecidability of the group isomorphism problem.
Note that both the above problems become decidable if one restricts one's attention to simply-connected PL-manifolds. This follows in the first case from the fact that simply-connected PL 4-manifolds are determined up to homeomorphism by the integral cohomology and in the second case from the generalized Poincare conjecture.
This makes one wonder what happens if one imposes some natural topological restrictions like simple connectedness. So I would like to ask if the following problems are decidable for simply-connected finite simplicial complexes, maybe under some further restrictions (e.g. for those simplicial complexes that are homeomorphic to smooth or PL-manifolds):
• the homeomorphism problem
• the homotopy equivalence problem
• the rational (or mod a prime $p$) homotopy equivalence problem
Personally, I do not hold out much hope that any of these turns out to be algorithmically decidable. For instance, the rational homotopy type of a space $X$ can be seen as an infinite collection of maps $H^{\otimes n}\to H$ of degrees $2-n$ (where `$H=H^*(X,\mathbb{Q})$`) subject to some condition, up to an equivalence relation, and it looks plausible that all the components in this collection matter. However, it is not completely clear to me how to prove this.
-
I changed the ag tag to at (hopefully this was what was intended). – Mark Grant Jun 2 at 15:37
You might also be interested in "Complexity in rational homotopy" by Lechuga and Murillo. They show that various problems in rational homotopy are NP-hard. – Mark Grant Jun 2 at 17:17
To anticipate Tim Perutz' more substantial comment below, the simply-connected homeomorphism problem for PL manifolds (and more) is done in the Nabutovsky-Weinberger paper arxiv.org/abs/math/9707232. – Sergey Melikhov Jun 2 at 18:20
Mark -- thanks, I did mean at, not ag. – algori Jun 3 at 16:50
## 4 Answers
Yesterday we submitted on http://arxiv.org/abs/1302.2370 a paper "Extendability of continuous maps is undecidable" claiming that the following problem is undecidable: Given simplicial complexes $X$ and $Y$ and a simplicial map $f:A\to Y$ from a subcomplex $A$ of $X$, decide whether there is a continuous extension $|X|\to |Y|$ of $|f|$. Substantial features:
• the undecidability is shown by a reduction from Hilbert's 10th problem, that is, the solvability of Diophantine polynomial equations
• the undecidability holds even if we require the considered spaces to be $k$-connected for arbitrary $k\ge 1$
Notably, once the dimension of $X$ is less than $2k$ where $k$ is the connectivity of $Y$ (stable range), the problem becomes solvable (in polynomial time when $k$ is fixed), see http://arxiv.org/abs/1211.3093.
-
Thanks, Marek, this looks interesting. – algori Feb 22 at 13:53
It seems pretty clear to me that if you fix $n$ and look at finite simply connected n-dimensional simplicial complexes then the (rational) homotopy equivalence problem is decidable. It's pretty clear that construction of (rational) Postnikov towers is algorithmic. Comparing two Postnikov towers is a sequence of obstruction problems, each decidable. And you don't need to compare full Postnikov towers, it's enough to compare up to height $n$ (the rest are determined automatically).
-
Not only for rational spaces, in general Postnikov towers of simply connected finite CW-complexes are algorithmic. – Fernando Muro Jun 2 at 0:22
@Fernando Muro, yes, that's why I put rational in parentheses to include both cases. Sorry if that was not clear. – Vitali Kapovitch Jun 2 at 1:34
Vitali -- I think you are right. Thanks. – algori Jun 2 at 4:22
This method is described by Nabutovsky and Weinberger in "Algorithmic aspects of homeomorphism problems", arxiv.org/abs/math/9707232. They point out a subtlety, though: one wants to compare the $(n+1)$st $k$-invariants, not on the nose, but up to the action of the homotopy self-equivalences of the $n$th stage. That involves deciding whether two vectors in some tensor representation of an arithmetic group lie in the same orbit. This is algorithmically decidable, but not obviously so. – Tim Perutz Jun 2 at 13:58
Tim -- thanks for the reference, that looks like what I was after. – algori Jun 3 at 16:49
From "Hardness of embedding simplicial complexes in $\mathbb{R}^d$," by Matoušek, Tancer, Wagner, 2009 (PDF download link):
According to a celebrated result of Novikov ([VKF74]; also see, e.g., [Nab95] for an exposition), the following problem is algorithmically unsolvable: Given a $d$-dimensional simplicial complex, $d \ge 5$, decide whether it is homeomorphic to $\mathbb{S}^d$, the $d$-dimensional sphere.
• [VKF74] I.A. Volodin, V.E. Kuznetsov, and A.T. Fomenko. The problem of discriminating algorithmically the standard three-dimensional sphere. Usp. Mat. Nauk, 29(5):71–168, 1974. In Russian. English translation: Russ. Math. Surv. 29,5:71–172 (1974).
• [Nab95] A. Nabutovsky. Einstein structures: Existence versus uniqueness. Geom. Funct. Anal., 5(1):76–91, 1995.
In their paper, they prove that deciding whether or not a finite simplicial complex $K$ of dimension at most $k$ can be (piecewise linearly) embedded into $\mathbb{R}^d$ is NP-hard, for all $k, d$ with $d \ge 4$ and $d \ge k \ge (2d-2)/3$.
-
Oh, I see now that this is mentioned by the OP. Consider this, then, merely adding detail. – Joseph O'Rourke Jun 1 at 23:53
@Joseph, note that Novikov's theorem does not assume simply connectednes and uses that fact very strongly. – Vitali Kapovitch Jun 1 at 23:55
@Vitali: I did not realize that aspect. Thank you for clarifying! – Joseph O'Rourke Jun 1 at 23:58
The theorem of S.P. Novikov actually asserts that one cannot distinguish homology spheres from the standard sphere if $n\geq 5$. There is a generalization to this due to Nabutovsky--Weinberger asserting that S.P. Novikov's theorem remains valid in the presence of a lower bound on simplicial volume. – Malte Jun 2 at 22:32
I'm not sure whether these problems count as being topological: they're about topologically defined classes of groups:
Determining whether a presentation of a certain kind of knot group in fact presents one from a more restrictive class of knot groups tends to be algorithmically unsolvable. I'm talking about the results in the paper Unsolvable problems about higher-dimensional knots and related groups by Francisco González Acuña, Cameron Gordon and Jonathan Simon:
Consider the classes of groups $\mathcal K_0 \subset \mathcal K_1 \subset \mathcal K_2 \subset \mathcal K_3 \subset \mathcal S \subset \mathcal M \subset \mathcal G$, where $\mathcal K_n$ is the class of fundamental groups of complements of $n$-spheres in $S^{n+2}$ (it is known that $\mathcal K_n = \mathcal K_3$ for $n\ge 3$); $\mathcal S$ (resp. $\mathcal M$, $\mathcal G$) is the class of fundamental groups of complements of orientable, closed surfaces in $S^4$ (resp. in a 1-connected 4-manifold, in a 4-manifold). (It is known that $\mathcal G$ is in fact the class of all finitely presented groups and that all the inclusions are strict.)
Their main theorem says that if $\mathcal B \subsetneq \mathcal A$ are two of the above classes of groups and $\mathcal K_3 \subseteq \mathcal A$, then there does not exist an algorithm that can decide, given a finite presentation of a group in $\mathcal A$ whether or not the group is in $\mathcal B$. And they conjecture this is true assuming only $\mathcal K_2 \subset \mathcal A$.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9141175746917725, "perplexity_flag": "middle"} |
http://mathhelpforum.com/statistics/4812-mutually-exclusive-events-need-help-sure.html | # Thread:
1. ## Mutually exclusive events? need help for sure
P(E_1)=0.25, P(E_2)=0.75, P(F|E_1)=0.05, P(F|E_2)=0.12. Find P(E_2|F). OK, I'm looking for an area or space between both. As for the textbook, no good example is close to this one. OK, so 0.25*0.75=.1875, and 0.05*0.12=.006. Now the hard part: what do I do next? I would like to say 0.188 is the answer, but I'm not sure. The list of possible answers is:
1. 0.188
2. 0.878
3. 0.060
4. 0.370
Thanks for any help given.
2. What do you want to find, $P(E_{2}|F)?.$
3. ## opps so sorry about that
it was P(E_2|F)=___?
4. Hello, kwtolley!
I solved it with Bayes' Theorem and a lot of algebra . . .
. . always a dependable method.
$P(E_1) = 0.25,\;\; P(E_2) = 0.75,\;\; P(F|E_1)$ $= 0.05,\;\;P(F|E_2) = 0.12$
Find $P(E_2|F)$
$1)\;0.188\qquad2)\;0.878\qquad3)\; 0.060\qquad4)\;0.370$
Bayes' Theorem: . $P(E_2|F) \;= \;\frac{P(E_2\,\land\,F)}{P(F)}$
We are given: . $P(F|E_1) = 0.05$ . . . This means: . $\frac{P(F\,\land\, E_1)}{P(E_1)} \:=\:0.05$
. . Since $P(E_1) = 0.25:\;\;\frac{P(F\,\land\,E_1)}{0.25} \:=$ $\:0.05\quad\Rightarrow\quad P(F\,\land\,E_1)\:=\:0.0125<br />$
We are given: . $P(F|E_2) = 0.12$ . . . This means: . $\frac{P(F\,\land\,E_2)}{P(E_2)} = 0.12$
. . Since $P(E_2) = 0.75:\;\;\frac{P(F\,\land\,E_2)}{0.75} \:=$ $\:0.12\quad\Rightarrow\quad P(F\,\land\,E_2) = 0.09$
Hence: . $P(F)\:=\:P(F\,\land\,E_1) + P(F\,\land\,E_2) \;= \;0.0125 + 0.09 \;= \;0.1025$
Therefore: . $P(E_2|F) \;=\;\frac{P(E_2\,\land\,F)}{P(F)}\;=\;\frac{0.09} {0.1025} \;=\;0.87804878$ . . . answer (2)
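The arithmetic above is easy to confirm with a few lines of Python (an added check, not part of the original thread):

```python
# P(F) by the law of total probability, then the conditional probability
P_E1, P_E2 = 0.25, 0.75
P_F_given_E1, P_F_given_E2 = 0.05, 0.12

P_F = P_E1 * P_F_given_E1 + P_E2 * P_F_given_E2   # 0.0125 + 0.09 = 0.1025
P_E2_given_F = P_E2 * P_F_given_E2 / P_F          # 0.09 / 0.1025 = 0.87804878...
print(P_F, P_E2_given_F)
```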
5. ## omg
Man that was a lot to set up for such a small problem if you ask me. I see the steps now. I got it, I will plug this in for the rest of the problems I have to do. thanks again soooo much for showing me the way. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9205806255340576, "perplexity_flag": "middle"} |
http://mathhelpforum.com/advanced-algebra/193513-hilbert-symbol-when-k-p-adic-numbers.html | # Thread:
1. ## Hilbert symbol when K = the p-adic numbers
How can I show that the Hilbert symbol is bimultiplicative when the local field is the p-adic numbers? Everything I can find just sort of asserts bimultiplicativity without much proof, so I'm guessing it's pretty straightforward, but I haven't done much work with the p-adics so I'm a little unclear.
Moreover, for what primes p is it the case that there exists an element z of the p-adics such that (-1, z) = -1? That is, the Hilbert symbol applied to -1 and z evaluates to -1.
2. ## Re: Hilbert symbol when K = the p-adic numbers
Originally Posted by idontknowanything
How can I show that the Hilbert symbol is bimultiplicative when the local field is the p-adic numbers? Everything I can find just sort of asserts bimultiplicativity without much proof, so I'm guessing it's pretty straightforward, but I haven't done much work with the p-adics so I'm a little unclear.
Moreover, for what primes p is it the case that there exists an element z of the p-adics such that (-1, z) = -1? That is, the Hilbert symbol applied to -1 and z evaluates to -1.
The Wikipedia page on the Hilbert symbol states that the proof of bimultiplicativity is not straight forward, and requires the use of local class field theory.
3. ## Re: Hilbert symbol when K = the p-adic numbers
Originally Posted by Opalg
The Wikipedia page on the Hilbert symbol states that the proof of bimultiplicativity is not straight forward, and requires the use of local class field theory.
I realize this, but every textbook I can get my hands on just asserts it without even half-attempting a proof, making me think it's still fairly simple.
4. ## Re: Hilbert symbol when K = the p-adic numbers
Serre's A Course in Arithmetic, pp. 19 - 21, contains a proof of the bilinearity of the Hilbert symbol when the local field is the p-adic numbers. The basic idea is to express the Hilbert symbol in terms of Legendre symbols and then use the multiplicative properties of the latter symbols.
I think that the cited Wikipedia article is, shall we say, misleading. It gives a definition of the Hilbert symbol for any local field, but that definition is given in most sources only for the p-adic numbers. The definition of the Hilbert symbol for an arbitrary local field is more complex (see, for example, Milne's notes on Class Field Theory: http://www.jmilne.org/math/CourseNotes/CFT310.pdf, pp. 88 and following). The proof of bilinearity for this more general definition is what apparently requires class field theory.
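To complement the reference above, here is a small computational sanity check of bimultiplicativity based on the Legendre-symbol formula for odd $p$ that Serre uses (an added sketch; the $p=2$ case needs Serre's separate formula with $\epsilon$ and $\omega$ and is not handled here):

```python
from random import randrange

def legendre(u, p):
    """Legendre symbol (u/p) for an odd prime p and u prime to p."""
    s = pow(u % p, (p - 1) // 2, p)
    return -1 if s == p - 1 else s            # s is either 1 or p-1

def hilbert_odd(a, b, p):
    """(a,b)_p for an odd prime p: write a = p^alpha * u, b = p^beta * v with
    u, v prime to p; then (a,b)_p = (-1)^(alpha*beta*eps(p)) (u/p)^beta (v/p)^alpha,
    where eps(p) = (p-1)/2 mod 2 (Serre, A Course in Arithmetic, Ch. III)."""
    alpha, u = 0, a
    while u % p == 0:
        u, alpha = u // p, alpha + 1
    beta, v = 0, b
    while v % p == 0:
        v, beta = v // p, beta + 1
    eps = (p - 1) // 2
    return (-1) ** (alpha * beta * eps) * legendre(u, p) ** beta * legendre(v, p) ** alpha

# spot-check (a, bc)_p = (a,b)_p (a,c)_p on random positive integers
p = 7
for _ in range(1000):
    a, b, c = (randrange(1, 500) for _ in range(3))
    assert hilbert_odd(a, b * c, p) == hilbert_odd(a, b, p) * hilbert_odd(a, c, p)
print("bimultiplicativity holds on all sampled triples")
```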
5. ## Re: Hilbert symbol when K = the p-adic numbers
Can it be proved using the fact that $\left(\frac{a,b}{F}\right)\otimes\left(\frac{a,c}{ F}\right)=\left(\frac{a,bc}{F}\right)\otimes M_2(F)$ (where $\left(\frac{a,b}{F}\right)$ is a quaternion algebra over $F$)?
EDIT: This takes care of the cases where:
$(a,b) = (a,c) = 1$
$(a,b) \neq (a,c)$
since $(a,b)=1$ iff $\left(\frac{a,b}{F}\right)$ is split.
However, the case $(a,b) = (a,c) = -1$ would require proving that the tensor product of two non-trivial quaternion algebras in $\mathrm{Br}(\mathbb{Q}_p)$ is a matrix algebra, which seems non-trivial. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.900707483291626, "perplexity_flag": "head"} |
http://physics.stackexchange.com/questions/55778/renormalization-group-and-ising-with-d-1-and-d-1 | # Renormalization Group and Ising with d=1 and D=1
I have a question about the results of the RG on the Ising model. I know it's possible to obtain two pairs of relations
1. $K'(K)$, $q(K')$
2. $K(K')$, $q(K)$
between the coupling constants. My problem arises when I try to draw the flow diagram of the coupling constants. I know that this model does not allow a phase transition except in the trivial case $T=0$, but if we iterate relation (1) or (2) we increase or decrease the coupling constant. In one case I obtain $$K=0,T=\infty>>>>\cdots>>>>K=\infty,T=0$$ but in the other?
Boolean Ising model with $d=1$ (dimension of the lattice) and $D=1$ (dimension of the vector space of the spins on the lattice). The energy with zero external field is $$H=-J\sum_{<ij>}S_iS_j$$ (note that there is some overcounting). Then the partition function can be put in the following form: $$Z=\sum_{\{S\}}\prod_ie^{KS_iS_{i+1}}$$ With a partial summation over the even spins it becomes $$Z'=\sum_{S}\prod_i\left(e^{K(S_i+S_{i+1})}+e^{-K(S_i+S_{i+1})}\right)=\sum_{S}\prod_if(K)e^{K'S_iS_{i+1}},$$ where in the last step I used the scaling property
$$Z(N,K)=f(K)^{N/2}Z(N/2,K').$$ The relations for $f(K)$ and $K'(K)$ are: $$f(K)=2\cosh^{1/2}2K,$$ $$K'=\frac{1}{2}\log\cosh{2K}.$$ The extensivity of free energy states $-\beta F=\log Z=Nq(K)=\frac{N}{2}\log f(K)+\frac{N}{2}q(K')$.
-
## 1 Answer
I would look through "Elements of Phase Transitions and Critical Phenomena" by Nishimori and Ortiz at your library if possible. The 3rd chapter goes over real-space renormalization using decimation (renormalizing over the even-numbered spins).
It basically rewrites the partition function for K' and obtains recursion relations between K' and K. One then looks for the fixed points in these recursion relations and these points should be an unstable critical point and two trivial fixed points related to the ordered and disordered state.
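To see which direction the coupling actually flows under coarse-graining, one can simply iterate the decimation recursion $K' = \frac{1}{2}\log\cosh 2K$ from the question (an added numerical sketch, not taken from the book):

```python
import math

def decimate(K):
    # the recursion K' = (1/2) log cosh(2K) obtained by summing out every second spin
    return 0.5 * math.log(math.cosh(2.0 * K))

for K0 in (0.5, 1.0, 2.0, 5.0):
    K, trajectory = K0, [K0]
    for _ in range(8):
        K = decimate(K)
        trajectory.append(K)
    print([round(k, 4) for k in trajectory])

# Every trajectory decreases monotonically towards K = 0 (T = infinity): under
# repeated decimation K = 0 is the attracting trivial fixed point and K = infinity
# (T = 0) is the unstable one.  The inverse relation K(K') traces the same flow
# backwards (towards large K); it undoes the coarse-graining rather than defining
# a second, independent physical flow.
```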
-
Thanks, i understand, but my question was: if i write the recursion relation $K'(K)$ and it inverse $K(K')$, one tends to decrease the coupling constant, and the other to increase. how to recognize what gives the correct flow of the coupling constant? – ivax Mar 5 at 8:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9010468125343323, "perplexity_flag": "head"} |
http://www.physicsforums.com/showthread.php?p=1828423 | Physics Forums
## Is this inequality true and provable?
1. The problem statement, all variables and given/known data
My question is whether the following inequality can be proven.
2. Relevant equations
[tex]
\left|\int_a^bg\left(x\right)dx-\int_a^bh\left(x\right)dx\right|\leq\int_a^b\left|g\left(x\right)-h\left(x\right)\right|dx
[/tex]
3. The attempt at a solution
I tried to write down the inequality in terms of its primitives, where $$G\left(x\right)$$ is the primitive of $$g\left(x\right)$$ and $$H\left(x\right)$$ is the primitive of $$h\left(x\right)$$. The inequality then becomes:
[tex]
\left|G\left(b\right)-G\left(a\right)-H\left(b\right)+H\left(a\right)\right|\leq\left|G\left(b\right)-H\left(b\right)\right|-\left|G\left(a\right)-H\left(a\right)\right|
[/tex]
But what next, or are there other means of getting a proof?
Assuming $$a \leq b$$ and f is continuous on the interval [a,b], we have $$\left|\int_a^bf\left(x\right)dx \right| \leq \int_a^b\left|f(x)\right|dx,$$ which follows from the facts that $$f(x) \leq \left|f(x)\right|$$ and $$-f(x) \leq \left|f(x)\right|$$ and that if f, g are both continuous on the interval [a,b] with $$f(x) \leq g(x)$$ for all x in the interval, then $$\int_a^b f(x)dx \leq \int_a^b g(x)dx.$$ Rearranging and using the first inequality should give you the desired inequality.
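Spelling the two quoted facts out as a single chain (an added remark): from $$-\left|g(x)-h(x)\right| \leq g(x)-h(x) \leq \left|g(x)-h(x)\right|$$ and monotonicity of the integral one gets $$-\int_a^b\left|g(x)-h(x)\right|dx \;\leq\; \int_a^b\left(g(x)-h(x)\right)dx \;\leq\; \int_a^b\left|g(x)-h(x)\right|dx,$$ and the two outer bounds together say exactly that $$\left|\int_a^b\left(g(x)-h(x)\right)dx\right| \leq \int_a^b\left|g(x)-h(x)\right|dx,$$ which is the desired inequality once the left-hand integral is split by linearity.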
Oh, I see it now, it is indeed not that difficult. $$\left|\int_a^bg\left(x\right)dx-\int_a^bh\left(x\right)dx\right|\leq\int_a^b\left|g\left(x\right)-h\left(x\right)\right|dx$$ If we rearrange: $$\left|\int_a^b\left(g\left(x\right)-h\left(x\right)\right)dx\right|\leq\int_a^b\left|g\left(x\right)-h\left(x\right)\right|dx$$ Substituting $$f\left(x\right)=g\left(x\right)-h\left(x\right)$$ and using the first formula of snipez90, we get the proof. Thanks!
| | Calculus | 3 | | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.822563886642456, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/13444?sort=votes | ## Prime divisors of numbers 2^n + 3
I'm interested in the following problem: do there exist infinitely many prime numbers $p$ such that $p^2|2^{n}+3$ for some natural number $n$?
Some motivation:
If we replace the function $2^n + 3$ with $f(n)$, where $f \in \mathbb{Z}[x]$ is non-constant, then this is true (it follows from Hensel's lemma). So it's rather natural to try proving this for some other, non-polynomial functions, and $2^n + 3$ is an easy example of such a function. There is also another good reason: the sequence $a_n = 2^{n} + 3$ satisfies the recurrence relation $a_{n+2} = 3a_{n+1} - 2a_{n}$. And, for example, this problem is true for the Fibonacci sequence. So for Fibonacci it's easier, even though the closed form of the Fibonacci numbers is more complicated. But I think the reason for this is that the Fibonacci numbers satisfy some "good" identities which other sequences don't have to share.
Now some remarks:
It's an easy exercise to prove that there are infinitely many primes $p$ such that $p|2^{n}+3$. Also, if we try "correcting" $n$ to work also for $p^2$ and we try $m=n+k(p-1)$, we see that it is possible unless $p$ is a Wieferich prime, i.e. satisfies $p^2|2^{p-1}-1$. And this gives us nothing, as we don't know much about Wieferich primes...
This method can of course be generalized in the following way: if $p|2^{n}+3$ and the order of $2$ mod $p^2$ is greater than the order of $2$ mod $p$, then we can find $m$ such that $p^2|2^{m}+3$. But I don't really think that it helps.
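To see the phenomenon numerically, here is a small brute-force search I am adding (not part of the question): for each small odd prime $p$ it looks for an exponent $n$ with $2^n \equiv -3 \pmod{p^2}$; for instance it finds $5^2 \mid 2^{17}+3$.

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def witness(p):
    """Least n with p^2 | 2^n + 3, or None if 2^n never hits -3 mod p^2.
    The powers of 2 mod p^2 are periodic with period dividing p*(p-1),
    so checking n below that bound is enough."""
    mod = p * p
    target = (-3) % mod
    x = 1
    for n in range(p * (p - 1)):
        if x == target:
            return n
        x = (2 * x) % mod
    return None

for p in range(3, 60, 2):
    if is_prime(p):
        n = witness(p)
        if n is not None:
            print(p, n, (2 ** n + 3) % (p * p) == 0)
```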
I'm interested in some information about this problem (especially if it's open or not) and also related problems. We can ask a general question: for which functions $f$ we know that this is true?
Edit: Sorry for the confusion with $k$, deleted.
-
In the first sentence, where does $k$ come in? – Theo Johnson-Freyd Jan 30 2010 at 2:42
I guess 2 is expected to be to be a primitive root mod p for infinitely many p, and if you're a primitive root mod p then you're highly likely to be a primitive root mod p^2, so I am guessing that the question "are there infinitely many primes p for which p^2 divides 2^n+3 for some n" is open but that the answer is conjectured to be "yes". – Kevin Buzzard Jan 30 2010 at 7:35
I wanted to ask about $p^k|2^n+3$ but then I changed my mind, as I think $k=2$ is difficult enough ;) (and the general case could be easy if we had solved $k=2$). I have corrected the post, thanks for pointing it out. – tomek-kobos Jan 30 2010 at 9:38
Kevin: I also think that this is probably open. But about your argument: the condition $2^n$ gives all values mod $p^2$ is for sure much stronger than condition $2^n$ gives some certain value $a$ mod $p^2$. For example, there are infinitely many $p$'s for which $2^n \equiv \pm 1 \pmod{p^2}$ does have a solution. So, who knows, maybe it can be also proved for $a=-3$? Are there any odd $a$ besides $\pm 1$ for which we know that it holds? And of course the last question in the post remains: what about functions $f(n)$ for which we know that the problem holds? – tomek-kobos Jan 30 2010 at 10:14
## 2 Answers
(Edited as the comments below suggest)
The ABC conjecture seemed to me like it would play a role, however it comes up a little short:
"Are there infinitely many primes $p$ so that for each $p$ there is some integer $n$ with $p^2 \mid 2^n + 3$?"
If the ABC conjecture is true, then the answer to this question is almost "no", but still there is a problem at the end of the argument.
The ABC conjecture states that for any $\epsilon > 0$ there is a constant $K_\epsilon$ so that for any co-prime triple $A < B < C$ with $A+B = C$ we have $$C \le K_\epsilon\prod_{p|ABC}p^{1 + \epsilon}.$$
So, if there is such an infinite collection of primes, then for the corresponding infinitely many $n$ where this is true we have $2^n + 3 = p^2C$, and then $$p^2C \le K_\epsilon(6Cp)^{1+\epsilon}.$$
(Edited: The following sentence is incorrect "But this will clearly run into problems for sufficiently large $p.$" But I wanted to leave it so Kevin's comment makes sense.)
Note that since $C = C(p)$ is a function of $p$, the $C^\epsilon$ term (when $C$ is square-free, or nearly square-free) may still allow this inequality to hold.
-
"But this will clearly run into problems for sufficiently large p". But C=C(p) right? So if e.g. C(p) is a prime that's >> p then we have no contradiction, right? Or did I slip up? – Kevin Buzzard Jan 30 2010 at 7:28
ah, that's true...the power of epsilon may just save the day if $C$ is square-free. I guess over function fields where there is no epsilon in the ABC theorem an analogous statement would be false. – Ben Weiss Jan 30 2010 at 7:33
Ben: I think it's highly likely that there are infinitely many such primes, because 2 is surely a primitive root mod p^2 for infinitely many primes and that's even stronger than what the OP is asking. – Kevin Buzzard Jan 30 2010 at 7:38
For me you should leave it, but personally I think it's impossible that this problem is not true (because of Kevin's argument for example). – tomek-kobos Jan 30 2010 at 9:58
Ben: my honest advice is that you should leave it mostly as it is but (assuming you now believe that it doesn't quite work) point out where the problem is. Your post carries information! It says "ABC might be relevent: let's try it; here's what happens". Just edit it a bit to explain that you no longer think it solves the problem. I've seen this sort of behaviour lots of times. – Kevin Buzzard Jan 30 2010 at 14:42
This is part of the question I asked in 5191. From my observations, primes fall into 3 types: (a) primes that don't divide any number of the form 2^x + c; (b) primes that divide one, but only at an x where a smaller prime has already divided it; and (c) primes that divide one at a new x, usually the larger primes. Once a prime divides the form, in this case 2^x + 3, it will divide it infinitely often and periodically, so I believe even the statement with p^k|2^x + c will be true. All primes with period p-1 (totient = p-1) will always divide any such form. The case of a Wieferich prime is special: every integer divides some Mersenne number, so there is no problem with any p^k dividing a Mersenne number; but tying the order of the Mersenne number 2^k - 1 to the prime k could just be one of those small-number properties that make the 2 known Wieferich primes what they are.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9521484971046448, "perplexity_flag": "head"} |
http://anhngq.wordpress.com/2009/11/10/a-trivial-identity-of-probability-measures/ | # Ngô Quốc Anh
## November 10, 2009
### A trivial identity of probability measures
Filed under: Small Exercises, Analysis 6 (MA5205) — Ngô Quốc Anh @ 14:36
Let us consider a probability space $(X,\mathcal B,\mu)$, i.e., $(X,\mathcal B,\mu)$ is a measurable space together with $\mu(X)=1$. We assume further that $A, B \in \mathcal B$ are such that $\mu(A)=\mu(B)=1$. Then we conclude that $\mu(A \cap B)=1$.
Indeed, since $A \subset A \cup B \subset X$, monotonicity gives $1=\mu(A)\leq\mu(A\cup B)\leq\mu(X)=1$, so $\mu(A\cup B)=1$. We write $A \cup B$ in the following way
$A\cup B = (A\backslash B) \,\cup\, (A \cap B) \,\cup\, (B\backslash A)$ (a disjoint union).
We then see that $\mu(A\backslash B)=0$ since $A\backslash B \subset X\backslash B$. Similarly, $\mu(B\backslash A)=0$. Hence, $\mu(A \cap B)=1$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 14, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8546850681304932, "perplexity_flag": "middle"} |
http://mathhelpforum.com/advanced-algebra/177142-question-about-separability.html | # Thread:
1. ## Question about separability
I have this kind of exercise:
Let $f \in F[X]$ be an irreducible polynomial, where $F$ has characteristic $p > 0$. Express $f(X) = g(X^{p^m})$, where $m \in \mathbb{N}$ is as large as possible. Show that $g$ is irreducible and separable.
I know how to show that g is irreducible, but how I show that g is separable? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9476692080497742, "perplexity_flag": "head"} |
http://crypto.stackexchange.com/questions/1534/families-of-public-private-keys-in-elliptic-curve-cryptography | # Families of public/private keys in elliptic curve cryptography
I'm looking for a related key scheme for elliptic curve cryptography. The basic idea would be that there would be a master public key and a master private key. From the master public key, you could generate a series of public keys. From the master private key, you could generate the corresponding series of private keys.
Ideally, without the master public key, even given a number of public keys in the series, it would be impractical to determine any other public keys in the series or the master public key.
I've been told that it's possible, and have just the following description of how it would be implemented:
S = Random number
RootPrivateKey = Random number
RootPublicKey = RootPrivateKey*point
Master Private Key = RootPrivateKey, S
Master Public Key = RootPublicKey, S
PrivateKey(n) = RootPrivateKey + Hash(n|S)
PublicKey(n) = RootPublicKey + Hash(n|S)*point
I understand the first five lines, but I am unsure about the implementation details in the last two lines. Is the `+` supposed to be addition? If so, is it modular? In the public key, is `+` adding the two EC points? If anyone knows of any example code, ideally using OpenSSL, that would be the most helpful.
The typical use case for this kind of thing is creating deterministic wallets for crypto-currencies like Bitcoin.
-
## 1 Answer
I would assume that all the operations are to be done in the elliptic curve group (viewed as a module over $\mathbb Z/k\mathbb Z$, where $k$ is the order of the group), so that addition is the group operation and multiplication is elliptic curve point multiplication.
That is to say, assume we have an elliptic curve $E$, equipped with the point addition operator $+$ so that $(E,+)$ is a group. We can define a scalar multiplication operation on $E$ simply as $$n \cdot P = \underbrace{P+P+\dotsb+P}_{n \text{ times}},$$ where $n$ is an integer and $P$ is a point in $E$. Let $G$ be a point in $E$ which generates a cyclic subgroup of $E$ of order $k$: that is, $k \cdot G = 0_E$ (and $0 < j < k \implies j \cdot G \ne 0_E$), where $0_E$ is the group identity of $(E,+)$. All these are public parameters shared by all users of the group.
In ordinary ECDSA, we would choose a random private key $x \in \mathbb Z/k\mathbb Z$ (that is, a random integer such that $0 \le x < k$) and compute the public key $Q = x \cdot G$.
However, if we also had a function $H$ mapping integers to (pseudo)random values in $\mathbb Z/k\mathbb Z$, we could define a sequence of private keys and corresponding public keys as $$x_i = x + H(i)$$ $$Q_i = x_i \cdot G = Q + H(i) \cdot G$$ where the $+$ on the first line stands for ordinary addition (modulo $k$) and the $+$ on the second line is elliptic curve point addition. (A consequence of the way we defined scalar multiplication above is that addition distributes over it: $(a + b) \cdot P = a \cdot P + b \cdot P$.)
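To make the two kinds of addition concrete, here is a minimal, self-contained Python sketch over a tiny textbook curve ($y^2 = x^3 + 2x + 2$ over $\mathbb F_{17}$, generator $(5,1)$ of order $19$ — far too small to be secure, chosen only so the arithmetic is easy to follow). The helper names (`point_add`, `scalar_mult`, `H`), the SHA-256 hash-to-scalar, and the salt value are illustrative assumptions, not OpenSSL calls; the point is simply to check that $x_i = x + H(i) \bmod k$ and $Q_i = Q + H(i) \cdot G$ describe the same key pair.

```python
import hashlib

# Toy curve y^2 = x^3 + a*x + b over F_p (textbook example; NOT secure).
p, a, b = 17, 2, 2
G = (5, 1)          # generator
k = 19              # order of the group generated by G
INF = None          # point at infinity

def point_add(P, Q):
    """Group law on the toy curve (affine coordinates)."""
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF                                  # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p   # modular inverse (Python 3.8+)
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(n, P):
    """n * P by double-and-add."""
    R = INF
    while n:
        if n & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        n >>= 1
    return R

def H(i, S):
    """Hash-to-scalar stand-in: H(i | S) reduced mod the group order."""
    data = f"{i}|{S}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % k

# Master key pair.
S = "some-shared-secret-salt"
x = 7                                   # root private key, 0 < x < k
Q = scalar_mult(x, G)                   # root public key

# Derived key pairs: x_i = x + H(i|S) mod k,  Q_i = Q + H(i|S) * G.
for i in range(1, 4):
    x_i = (x + H(i, S)) % k                          # needs the private side only
    Q_i = point_add(Q, scalar_mult(H(i, S), G))      # needs the public side only
    assert scalar_mult(x_i, G) == Q_i                # both derivations agree
    print(i, x_i, Q_i)
```

The assertion passing for every $i$ is exactly the distributivity identity $(x + H(i)) \cdot G = x \cdot G + H(i) \cdot G$ from above; with a real library you would replace the toy group law by the library's own point addition and scalar multiplication on a standard curve.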
There are various ways in which we could define $H$. For example, we could use a hash function as in your example, possibly with a salt $S$, or we could perhaps pick a block cipher $B$ and a random key $K$, and let $H(i) = B_K(i)$. In fact, assuming that $H$ and $Q$ are indeed truly public, I'm not even sure that there's any reason not to just use $H(i)=i$. (Indeed, I can't even prove off the top of my head that this scheme is secure for any $H$.) But the salted hash certainly seems as good as any other choice for $H$.
Edit: I just noticed that you wrote:
"Ideally, without the master public key, even given a number of public keys in the series, it would be impractical to determine any other public keys in the series or the master public key."
I'm not sure if you wanted that to be impractical with or without knowledge of $H$ and of the indices $i$ corresponding to the known public keys $Q_i$.
If with, we have a problem: anyone who knows $H$, $i$ and $Q_i$ can calculate $Q = Q_i + (k-H(i)) \cdot G$.
If $i$ and/or $H$ can be treated as secret, things get trickier. Obviously, $H(i)=i$ is a bad choice in this case, since anyone who knows $Q_i$ could then calculate $Q_{i+j} = Q_i+j \cdot G$ for any $j$, but using a hash or a block cipher for $H$ would eliminate that simple weakness.
Also, regardless of $H$, someone who knows $Q_i$ and $Q_j$ can calculate $Q_i - Q_j = (H(i)-H(j)) \cdot G$; but I'm not sure what, if any, good that would do them, assuming that $H(i) - H(j)$ is not likely to equal $H(m)$ for any valid index $m$.
Anyway, it would be helpful to know just what parties are supposed to be involved in this scheme and what they're supposed to know. In my original answer, I'd been implicitly assuming that the public keys were, well, public and known to everybody. If that's not supposed to be the case, that could complicate things a lot.
-
2
Note that here from one private sub-key you can easily get back to the master private key, though this is not that easy for the public keys. Not sure if this might be a problem. – Paŭlo Ebermann♦ Dec 25 '11 at 11:37
The public keys are public, but they're lost in a sea of public keys. The idea is that someone should not be able to tell, given some of your public keys, whether a given public key is one of yours or not. But using a secure hash function and keeping S secret should do that. – David Schwartz Dec 25 '11 at 18:46
http://terrytao.wordpress.com/2009/09/ | What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Monthly Archive
You are currently browsing the monthly archive for September 2009.
## Two quick updates
30 September, 2009 in non-technical, update | Tags: google wave, mathematical blogging | by Terence Tao | 14 comments
1. In a previous post, I noted John Baez’s thread discussing his incipient article for the Notices of the AMS, entitled “What do mathematicians need to know about blogging?”. John has now completed an initial draft of his article and is welcoming comments on it here. [Update, Oct 2: the article has now been submitted, incorporating much of the feedback.]
2. In another previous post, I talked about the forthcoming Google Wave platform being developed currently by Google, and its potential usefulness for online mathematical collaborative projects, such as the polymath projects. My brother, who is one of the developers for this project, has just informed me that there are now a limited number of invites available to others who would like to develop specific Wave extensions or other projects (see for instance his own blog post, aimed at the GNOME community). As I understand it, the Wave platform is not yet ready for general use, so these invites would be intended for technical developers (or preferably, a group of developers) who would be working on specific projects. (For instance, I understand that there is already a preliminary extension for encoding LaTeX in a Wave, but it could be developed further.) If any readers are interested, one can request an invite directly from the Google Wave page, or I can forward requests to my brother. [At some point, I may ask for help in trying to build a Wave platform for the next generation of Polymath projects, but this will probably not occur for several months yet, due to a large number of other things on my plate (including existing Polymath projects).]
## The prime number theorem in arithmetic progressions, and dueling conspiracies
24 September, 2009 in expository, math.NT | Tags: Dirichlet characters, prime number theorem, Siegel's theorem | by Terence Tao | 26 comments
A fundamental problem in analytic number theory is to understand the distribution of the prime numbers ${\{2,3,5,\ldots\}}$. For technical reasons, it is convenient not to study the primes directly, but a proxy for the primes known as the von Mangoldt function ${\Lambda: {\mathbb N} \rightarrow {\mathbb R}}$, defined by setting ${\Lambda(n)}$ to equal ${\log p}$ when ${n}$ is a prime ${p}$ (or a power of that prime) and zero otherwise. The basic reason why the von Mangoldt function is useful is that it encodes the fundamental theorem of arithmetic (which in turn can be viewed as the defining property of the primes) very neatly via the identity
$\displaystyle \log n = \sum_{d|n} \Lambda(d) \ \ \ \ \ (1)$
for every natural number ${n}$.
The most important result in this subject is the prime number theorem, which asserts that the number of prime numbers less than a large number ${x}$ is equal to ${(1+o(1)) \frac{x}{\log x}}$:
$\displaystyle \sum_{p \leq x} 1 = (1+o(1)) \frac{x}{\log x}.$
Here, of course, ${o(1)}$ denotes a quantity that goes to zero as ${x \rightarrow \infty}$.
It is not hard to see (e.g. by summation by parts) that this is equivalent to the asymptotic
$\displaystyle \sum_{n \leq x} \Lambda(n) = (1+o(1)) x \ \ \ \ \ (2)$
for the von Mangoldt function (the key point being that the squares, cubes, etc. of primes give a negligible contribution, so ${\sum_{n \leq x} \Lambda(n)}$ is essentially the same quantity as ${\sum_{p \leq x} \log p}$). Understanding the nature of the ${o(1)}$ term is a very important problem, with the conjectured optimal decay rate of ${O(\sqrt{x} \log x)}$ being equivalent to the Riemann hypothesis, but this will not be our concern here.
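(A quick numerical aside, not part of the original post: identity (1) and the asymptotic (2) are easy to sanity-check for small ${x}$ with a brute-force implementation of the von Mangoldt function — of course this illustrates the statements rather than proving anything.)

```python
import math

def mangoldt(n):
    """Lambda(n) = log p if n is a power of a prime p, and 0 otherwise (brute force)."""
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:                  # p is the smallest prime factor of n
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return math.log(n)                  # no factor up to sqrt(n): n itself is prime

# Identity (1): log n = sum over divisors d of n of Lambda(d), checked for n up to 200.
for n in range(1, 201):
    lhs = math.log(n)
    rhs = sum(mangoldt(d) for d in range(1, n + 1) if n % d == 0)
    assert abs(lhs - rhs) < 1e-9

# Asymptotic (2): psi(x) = sum_{n <= x} Lambda(n) should be roughly x.
for x in (10**3, 10**4, 10**5):
    psi = sum(mangoldt(n) for n in range(1, x + 1))
    print(x, psi / x)                   # ratios close to 1, slowly improving
```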
The prime number theorem has several important generalisations (for instance, there are analogues for other number fields such as the Chebotarev density theorem). One of the more elementary such generalisations is the prime number theorem in arithmetic progressions, which asserts that for fixed ${a}$ and ${q}$ with ${a}$ coprime to ${q}$ (thus ${(a,q)=1}$), the number of primes less than ${x}$ equal to ${a}$ mod ${q}$ is equal to ${(1+o_q(1)) \frac{1}{\phi(q)} \frac{x}{\log x}}$, where ${\phi(q) := \# \{ 1 \leq a \leq q: (a,q)=1 \}}$ is the Euler totient function:
$\displaystyle \sum_{p \leq x: p = a \hbox{ mod } q} 1 = (1+o_q(1)) \frac{1}{\phi(q)} \frac{x}{\log x}.$
(Of course, if ${a}$ is not coprime to ${q}$, the number of primes less than ${x}$ equal to ${a}$ mod ${q}$ is ${O(1)}$. The subscript ${q}$ in the ${o()}$ and ${O()}$ notation denotes that the implied constants in that notation is allowed to depend on ${q}$.) This is a more quantitative version of Dirichlet’s theorem, which asserts the weaker statement that the number of primes equal to ${a}$ mod ${q}$ is infinite. This theorem is important in many applications in analytic number theory, for instance in Vinogradov’s theorem that every sufficiently large odd number is the sum of three odd primes. (Imagine for instance if almost all of the primes were clustered in the residue class ${2}$ mod ${3}$, rather than ${1}$ mod ${3}$. Then almost all sums of three odd primes would be divisible by ${3}$, leaving dangerously few sums left to cover the remaining two residue classes. Similarly for other moduli than ${3}$. This does not fully rule out the possibility that Vinogradov’s theorem could still be true, but it does indicate why the prime number theorem in arithmetic progressions is a relevant tool in the proof of that theorem.)
As before, one can rewrite the prime number theorem in arithmetic progressions in terms of the von Mangoldt function as the equivalent form
$\displaystyle \sum_{n \leq x: n = a \hbox{ mod } q} \Lambda(n) = (1+o_q(1)) \frac{1}{\phi(q)} x.$
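(Another illustrative aside: one can watch the prime number theorem in arithmetic progressions emerge numerically by summing ${\log p}$ over the primes ${p \leq x}$ in each residue class mod ${q}$ — using primes rather than prime powers, which changes the sums only negligibly, as noted above — and comparing with the predicted ${x/\phi(q)}$. The sieve below is a crude sketch, with ${q=4}$ chosen arbitrarily.)

```python
import math

def primes_up_to(x):
    """Simple sieve of Eratosthenes returning all primes <= x."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, x + 1) if sieve[p]]

q, x = 4, 10**6
phi_q = sum(1 for a in range(1, q + 1) if math.gcd(a, q) == 1)   # Euler phi(q)

# theta(x; q, a) = sum of log p over primes p <= x with p = a (mod q);
# the theorem predicts about x / phi(q) for each class a coprime to q.
theta = {a: 0.0 for a in range(q)}
for p in primes_up_to(x):
    theta[p % q] += math.log(p)

for a in range(q):
    print(a, theta[a] / (x / phi_q))    # roughly 1 for a = 1, 3; roughly 0 for a = 0, 2
```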
Philosophically, one of the main reasons why it is so hard to control the distribution of the primes is that we do not currently have too many tools with which one can rule out “conspiracies” between the primes, in which the primes (or the von Mangoldt function) decide to correlate with some structured object (and in particular, with a totally multiplicative function) which then visibly distorts the distribution of the primes. For instance, one could imagine a scenario in which the probability that a randomly chosen large integer ${n}$ is prime is not asymptotic to ${\frac{1}{\log n}}$ (as is given by the prime number theorem), but instead to fluctuate depending on the phase of the complex number ${n^{it}}$ for some fixed real number ${t}$, thus for instance the probability might be significantly less than ${1/\log n}$ when ${t \log n}$ is close to an integer, and significantly more than ${1/\log n}$ when ${t \log n}$ is close to a half-integer. This would contradict the prime number theorem, and so this scenario would have to be somehow eradicated in the course of proving that theorem. In the language of Dirichlet series, this conspiracy is more commonly known as a zero of the Riemann zeta function at ${1+it}$.
In the above scenario, the primality of a large integer ${n}$ was somehow sensitive to asymptotic or “Archimedean” information about ${n}$, namely the approximate value of its logarithm. In modern terminology, this information reflects the local behaviour of ${n}$ at the infinite place ${\infty}$. There are also potential conspiracies in which the primality of ${n}$ is sensitive to the local behaviour of ${n}$ at finite places, and in particular to the residue class of ${n}$ mod ${q}$ for some fixed modulus ${q}$. For instance, given a Dirichlet character ${\chi: {\mathbb Z} \rightarrow {\mathbb C}}$ of modulus ${q}$, i.e. a completely multiplicative function on the integers which is periodic of period ${q}$ (and vanishes on those integers not coprime to ${q}$), one could imagine a scenario in which the probability that a randomly chosen large integer ${n}$ is prime is large when ${\chi(n)}$ is close to ${+1}$, and small when ${\chi(n)}$ is close to ${-1}$, which would contradict the prime number theorem in arithmetic progressions. (Note the similarity between this scenario at ${q}$ and the previous scenario at ${\infty}$; in particular, observe that the functions ${n \rightarrow \chi(n)}$ and ${n \rightarrow n^{it}}$ are both totally multiplicative.) In the language of Dirichlet series, this conspiracy is more commonly known as a zero of the ${L}$-function of ${\chi}$ at ${1}$.
An especially difficult scenario to eliminate is that of real characters, such as the Kronecker symbol ${\chi(n) = \left( \frac{n}{q} \right)}$, in which numbers ${n}$ which are quadratic nonresidues mod ${q}$ are very likely to be prime, and quadratic residues mod ${q}$ are unlikely to be prime. Indeed, there is a scenario of this form – the Siegel zero scenario – which we are still not able to eradicate (without assuming powerful conjectures such as GRH), though fortunately Siegel zeroes are not quite strong enough to destroy the prime number theorem in arithmetic progressions.
It is difficult to prove that no conspiracy between the primes exists. However, it is not entirely impossible, because we have been able to exploit two important phenomena. The first is that there is often an “all or nothing dichotomy” (somewhat resembling the zero-one laws in probability) regarding conspiracies: in the asymptotic limit, the primes can either conspire totally (or more precisely, anti-conspire totally) with a multiplicative function, or fail to conspire at all, but there is no middle ground. (In the language of Dirichlet series, this is reflected in the fact that zeroes of a meromorphic function can have order ${1}$, or order ${0}$ (i.e. are not zeroes after all), but cannot have an intermediate order between ${0}$ and ${1}$.) As a corollary of this fact, the prime numbers cannot conspire with two distinct multiplicative functions at once (by having a partial correlation with one and another partial correlation with another); thus one can use the existence of one conspiracy to exclude all the others. In other words, there is at most one conspiracy that can significantly distort the distribution of the primes. Unfortunately, this argument is ineffective, because it doesn’t give any control at all on what that conspiracy is, or even if it exists in the first place!
But now one can use the second important phenomenon, which is that because of symmetries, one type of conspiracy can lead to another. For instance, because the von Mangoldt function is real-valued rather than complex-valued, we have conjugation symmetry; if the primes correlate with, say, ${n^{it}}$, then they must also correlate with ${n^{-it}}$. (In the language of Dirichlet series, this reflects the fact that the zeta function and ${L}$-functions enjoy symmetries with respect to reflection across the real axis (i.e. complex conjugation).) Combining this observation with the all-or-nothing dichotomy, we conclude that the primes cannot correlate with ${n^{it}}$ for any non-zero ${t}$, which in fact leads directly to the prime number theorem (2), as we shall discuss below. Similarly, if the primes correlated with a Dirichlet character ${\chi(n)}$, then they would also correlate with the conjugate ${\overline{\chi}(n)}$, which also is inconsistent with the all-or-nothing dichotomy, except in the exceptional case when ${\chi}$ is real – which essentially means that ${\chi}$ is a quadratic character. In this one case (which is the only scenario which comes close to threatening the truth of the prime number theorem in arithmetic progressions), the above tricks fail and one has to instead exploit the algebraic number theory properties of these characters instead, which has so far led to weaker results than in the non-real case.
As mentioned previously in passing, these phenomena are usually presented using the language of Dirichlet series and complex analysis. This is a very slick and powerful way to do things, but I would like here to present the elementary approach to the same topics, which is slightly weaker but which I find to also be very instructive. (However, I will not be too dogmatic about keeping things elementary, if this comes at the expense of obscuring the key ideas; in particular, I will rely on multiplicative Fourier analysis (both at ${\infty}$ and at finite places) as a substitute for complex analysis in order to expedite various parts of the argument. Also, the emphasis here will be more on heuristics and intuition than on rigour.)
The material here is closely related to the theory of pretentious characters developed by Granville and Soundararajan, as well as an earlier paper of Granville on elementary proofs of the prime number theorem in arithmetic progressions.
## A speech for the American Academy of Arts and Sciences
17 September, 2009 in non-technical, opinion, talk, travel | Tags: american academy of arts and sciences | by Terence Tao | 56 comments
Next month, I am scheduled to give a short speech (three to five minutes in length) at the annual induction ceremony of the American Academy of Arts and Sciences in Boston. This is a bit different from the usual scientific talks that I am used to giving; there are no projectors, blackboards, or other visual aids available, and the audience of Academy members is split evenly between the humanities and the sciences (as well as people in industry and politics), so this will be an interesting new experience for me. (The last time I gave a speech was in 1985.)
My chosen topic is on the future impact of internet-based technologies on academia (somewhat similar in theme to my recent talk on this topic). I have a draft text below the fold, though it is currently too long and my actual speech is likely to be a significantly abridged version of the one below [Update, Oct 12: The abridged speech is now at the bottom of the post.] In the spirit of the theme of the talk, I would of course welcome any comments and suggestions.
For comparison, the talks from last year’s ceremony, by Jim Simons, Peter Kim, Susan Athey, Earl Lewis, and Indra Nooyi, can be found here. Jim’s chosen topic, incidentally, was what mathematics is, and why mathematicians do it.
[Update, Nov 3: Video of the various talks by myself and the other speakers (Emmylou Harris, James Earl Jones, Elizabeth Nabel, Ronald Marc George, and Edward Villela) is now available on the Academy web site here.]
## Structure and randomness in the prime numbers (IMO Festschrift submission)
14 September, 2009 in math.NT, paper | Tags: international mathematical olympiad | by Terence Tao | 9 comments
A few months ago, I gave a talk at the IMO in Bremen on “Structure and randomness in the prime numbers”. I have now converted the slides from that talk into a more traditional paper (7 pages in length), for submission to a Festschrift for the Bremen Olympiad. The content is much the same as the slides, but some references have been added.
## Two more Clay-Mahler lectures
6 September, 2009 in math.AP, math.MG, math.NT, talk, travel | Tags: Clay-Mahler lectures | by Terence Tao | 8 comments
I am posting the last two talks in my Clay-Mahler lecture series here:
• “Structure and randomness in the prime numbers“. This public lecture is slightly updated from a previous talk of the same name given last year, but is largely the same material.
• “Perelman’s proof of the Poincaré conjecture“. Here I try (perhaps ambitiously) to give an overview of Perelman’s proof of the Poincaré conjecture into an hour-length talk for a general mathematical audience. It is a little unpolished, as I have not given any version of this talk before, but I hope to update it a bit in the future.
[Update, Sep 14: Poincaré conjecture slides revised.]
[Update, Sep 18: Prime slides revised also.]
## The cosmic distance ladder
3 September, 2009 in math.GM, non-technical, talk, travel | Tags: Clay-Mahler lectures, cosmic distance ladder | by Terence Tao | 18 comments
I am uploading another of my Clay-Mahler lectures here, namely my public talk on the cosmic distance ladder (4.3MB, PDF). These slides are based on my previous talks of the same name, but I have updated and reorganised the graphics significantly as I was not fully satisfied with the previous arrangement.
[Update, Sep 4: slides updated. The Powerpoint version of the slides (8MB) are available here.]
[Update, Oct 26: slides updated again.]
http://en.wikipedia.org/wiki/Quantum_mechanics | # Quantum mechanics
For a generally accessible and less technical introduction to the topic, see Introduction to quantum mechanics.
Quantum mechanics (QM – also known as quantum physics, or quantum theory) is a branch of physics which deals with physical phenomena at microscopic scales, where the action is on the order of the Planck constant. Quantum mechanics departs from classical mechanics primarily at the quantum realm of atomic and subatomic length scales. Quantum mechanics provides a mathematical description of much of the dual particle-like and wave-like behavior and interactions of energy and matter.
In advanced topics of quantum mechanics, some of these behaviors are macroscopic and emerge at only extreme (i.e., very low or very high) energies or temperatures.[citation needed] The name quantum mechanics derives from the observation that some physical quantities can change only in discrete amounts (Latin quanta), and not in a continuous (cf. analog) way. For example, the angular momentum of an electron bound to an atom or molecule is quantized.[1] In the context of quantum mechanics, the wave–particle duality of energy and matter and the uncertainty principle provide a unified view of the behavior of photons, electrons, and other atomic-scale objects.
The mathematical formulations of quantum mechanics are abstract. A mathematical function known as the wavefunction provides information about the probability amplitude of position, momentum, and other physical properties of a particle. Mathematical manipulations of the wavefunction usually involve the bra-ket notation, which requires an understanding of complex numbers and linear functionals. The wavefunction treats the object as a quantum harmonic oscillator, and the mathematics is akin to that describing acoustic resonance. Many of the results of quantum mechanics are not easily visualized in terms of classical mechanics—for instance, the ground state in a quantum mechanical model is a non-zero energy state that is the lowest permitted energy state of a system, as opposed to a more "traditional" system that is thought of as simply being at rest, with zero kinetic energy. Instead of a traditional static, unchanging zero state, quantum mechanics allows for far more dynamic, chaotic possibilities, according to John Wheeler.
The earliest versions of quantum mechanics were formulated in the first decade of the 20th century. At around the same time, the atomic theory and the corpuscular theory of light (as updated by Einstein) first came to be widely accepted as scientific fact; these latter theories can be viewed as quantum theories of matter and electromagnetic radiation, respectively. Early quantum theory was significantly reformulated in the mid-1920s by Werner Heisenberg, Max Born and Pascual Jordan, who created matrix mechanics; Louis de Broglie and Erwin Schrödinger (Wave Mechanics); and Wolfgang Pauli and Satyendra Nath Bose (statistics of subatomic particles). And the Copenhagen interpretation of Niels Bohr became widely accepted. By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann,[2] with a greater emphasis placed on measurement in quantum mechanics, the statistical nature of our knowledge of reality, and philosophical speculation about the role of the observer. Quantum mechanics has since branched out into almost every aspect of 20th century physics and other disciplines, such as quantum chemistry, quantum electronics, quantum optics, and quantum information science. Much 19th century physics has been re-evaluated as the "classical limit" of quantum mechanics, and its more advanced developments in terms of quantum field theory, string theory, and speculative quantum gravity theories.
## History
Main article: History of quantum mechanics
Scientific inquiry into the wave nature of light goes back to the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[3] In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper entitled "On the nature of light and colours". This experiment played a major role in the general acceptance of the wave theory of light.
These studies were followed by Michael Faraday's 1838 discovery of cathode rays, the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[4] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or "energy elements") precisely matched the observed patterns of black-body radiation.
In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation, known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies, and underestimated the radiance at low frequencies. Later, Max Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics.
Among the first to study quantum phenomena in nature were Arthur Compton, C.V. Raman, Pieter Zeeman, each of whom has a quantum effect named after him. Robert A. Millikan studied the Photoelectric effect experimentally and Albert Einstein developed a theory for it. At the same time Niels Bohr developed his theory of the atomic structure which was later confirmed by the experiments of Henry Moseley. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld.[5] This phase is known as Old quantum theory.
According to Planck, each energy element E is proportional to its frequency ν:
$E = h \nu$
where h is Planck's constant. Planck (cautiously) insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[6] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizeable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material.
The 1927 Solvay Conference in Brussels.
The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert, Wilhelm Wien, Satyendra Nath Bose, Arnold Sommerfeld and others. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the "Old Quantum Theory". Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.
The other exemplar that led to quantum mechanics was the study of electromagnetic waves, such as visible and ultraviolet light. When it was found in 1900 by Max Planck that the energy of waves could be described as consisting of small packets or "quanta", Albert Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon) with a discrete quantum of energy that was dependent on its frequency.[7] As a matter of fact, Einstein was able to use the photon theory of light to explain the photoelectric effect, for which he won the Nobel Prize in 1921. This led to a theory of unity between subatomic particles and electromagnetic waves, called wave–particle duality, in which particles and waves were neither one nor the other, but had certain properties of both.
While quantum mechanics traditionally described the world of the very small, it is also needed to explain certain recently investigated macroscopic systems such as superconductors, superfluids, and larger organic molecules.[8]
The word quantum derives from the Latin, meaning "how great" or "how much".[9] In quantum mechanics, it refers to a discrete unit that quantum theory assigns to certain physical quantities, such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and sub-atomic systems which is today called quantum mechanics. It is the underlying mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[10] Some fundamental aspects of the theory are still actively studied.[11]
Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. In addition, if classical mechanics truly governed the workings of an atom, electrons would really 'orbit' the nucleus. Since bodies in circular motion accelerate, the electrons would continuously emit electromagnetic radiation, lose energy, and spiral into the nucleus. This clearly contradicts the existence of stable atoms. However, in the natural world electrons normally remain in an uncertain, non-deterministic, "smeared", probabilistic wave–particle wavefunction orbital path around (or through) the nucleus, defying classical mechanics and electromagnetism.[12]
Quantum mechanics was initially developed to provide a better explanation and description of the atom, especially the differences in the spectra of light emitted by different isotopes of the same element, as well as subatomic particles. In short, the quantum-mechanical atomic model has succeeded spectacularly in the realm where classical mechanics and electromagnetism falter.
Broadly speaking, quantum mechanics incorporates four classes of phenomena for which classical physics cannot account: the quantization of certain physical properties, wave–particle duality, the uncertainty principle, and quantum entanglement.
## Mathematical formulations
Main article: Mathematical formulations of quantum mechanics
See also: Quantum logic
In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac[13] David Hilbert,[14] John von Neumann,[15] and Hermann Weyl[16] the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors"). Formally, these reside in a complex separable Hilbert space - variously called the "state space" or the "associated Hilbert space" of the system - that is well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system - for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a maximally Hermitian (precisely: by a self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues.
In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function, also referred to as state vector in a complex vector space.[17] This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, to arbitrary accuracy. For instance, electrons may be considered (to a certain probability) to be located somewhere within a given region of space, but with their exact positions unknown. Contours of constant probability, often referred to as "clouds", may be drawn around the nucleus of an atom to conceptualize where the electron might be located with the most probability. Heisenberg's uncertainty principle quantifies the inability to precisely locate the particle given its conjugate momentum.[18]
According to one interpretation, as the result of a measurement the wave function containing the probability information for a system collapses from a given initial state to a particular eigenstate. The possible results of a measurement are the eigenvalues of the operator representing the observable — which explains the choice of Hermitian operators, for which all the eigenvalues are real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute.
The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr-Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wavefunction collapse" (see, for example, the relative state interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wavefunctions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.[19]
Generally, quantum mechanics does not assign definite values. Instead, it makes a prediction using a probability distribution; that is, it describes the probability of obtaining the possible outcomes from measuring an observable. Often these results are skewed by many causes, such as dense probability clouds. Probability clouds are approximate (but better than the Bohr model), whereby the electron's location is given by a probability function derived from the wave function: the probability of finding the electron at a given point is the squared modulus of the complex amplitude there.[20][21] Naturally, these probabilities will depend on the quantum state at the "instant" of the measurement. Hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable ("eigen" can be translated from German as meaning "inherent" or "characteristic").[22]
In the everyday world, it is natural and intuitive to think of everything (every observable) as being in an eigenstate. Everything appears to have a definite position, a definite momentum, a definite energy, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values of a particle's position and momentum (since they are conjugate pairs) or its energy and time (since they too are conjugate pairs); rather, it provides only a range of probabilities in which that particle might be given its momentum and momentum probability. Therefore, it is helpful to use different words to describe states having uncertain values and states having definite values (eigenstates). Usually, a system will not be in an eigenstate of the observable (particle) we are interested in. However, if one measures the observable, the wavefunction will instantaneously be an eigenstate (or "generalized" eigenstate) of that observable. This process is known as wavefunction collapse, a controversial and much-debated process[23] that involves expanding the system under study to include the measurement device. If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of the wavefunction collapsing into each of the possible eigenstates. For example, the free particle in the previous example will usually have a wavefunction that is a wave packet centered around some mean position x0 (neither an eigenstate of position nor of momentum). When one measures the position of the particle, it is impossible to predict with certainty the result.[19] It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x.[24]
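As a purely illustrative aside (not part of the article text), the Born rule and the role of eigenstates described above can be made concrete for the smallest possible quantum system, a single spin-1/2, in a few lines of NumPy: the observable is a Hermitian matrix, the possible measurement results are its eigenvalues, and the outcome probabilities are the squared moduli of the state's amplitudes in the eigenbasis. The particular state and observable below are arbitrary choices.

```python
import numpy as np

# Observable: spin along x (Pauli sigma_x), a Hermitian 2x2 matrix.
sigma_x = np.array([[0, 1],
                    [1, 0]], dtype=complex)

# A normalized state vector |psi> (amplitudes chosen arbitrarily).
psi = np.array([3, 4j], dtype=complex)
psi = psi / np.linalg.norm(psi)

# Spectral decomposition: the eigenvalues are the possible measurement results.
eigvals, eigvecs = np.linalg.eigh(sigma_x)       # eigvecs[:, k] is the k-th eigenstate

# Born rule: P(outcome k) = |<e_k|psi>|^2.
amplitudes = eigvecs.conj().T @ psi
probs = np.abs(amplitudes) ** 2
print(dict(zip(eigvals.round(6), probs.round(6))))   # probabilities sum to 1

# Expectation value two ways: sum_k lambda_k P(k)  and  <psi|A|psi>.
print(float(eigvals @ probs), float(np.real(psi.conj() @ sigma_x @ psi)))
```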
The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution. The time evolution of wave functions is deterministic in the sense that - given a wavefunction at an initial time - it makes a definite prediction of what the wavefunction will be at any later time.[25]
During a measurement, on the other hand, the change of the initial wavefunction into another, later wavefunction is not deterministic, it is unpredictable (i.e. random). A time-evolution simulation can be seen here.[26][27]
Wave functions change as time progresses. The Schrödinger equation describes how wavefunctions change in time, playing a role similar to Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain with time. This also has the effect of turning a position eigenstate (which can be thought of as an infinitely sharp wave packet) into a broadened wave packet that no longer represents a (definite, certain) position eigenstate.[28]
Fig. 1: Probability densities corresponding to the wavefunctions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momenta (increasing across from left to right: s, p, d, ...). Brighter areas correspond to higher probability density in a position measurement. Such wavefunctions are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics, and are indeed modes of oscillation as well, possessing a sharp energy and, thus, a definite frequency. The angular momentum and energy are quantized, and take only discrete values like those shown (as is the case for resonant frequencies in acoustics)
Some wave functions produce probability distributions that are constant, or independent of time - such as when in a stationary state of constant energy, time vanishes in the absolute square of the wave function. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wavefunction surrounding the nucleus (Fig. 1) (note, however, that only the lowest angular momentum states, labeled s, are spherically symmetric).[29]
The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. Whereas the absolute value of the probability amplitude encodes information about probabilities, its phase encodes information about the interference between quantum states. This gives rise to the "wave-like" behavior of quantum states. As it turns out, analytic solutions of the Schrödinger equation are available for only a very small number of relatively simple model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the hydrogen molecular ion, and the hydrogen atom are the most important representatives. Even the helium atom - which contains just one more electron than does the hydrogen atom - has defied all attempts at a fully analytic treatment.
There exist several techniques for generating approximate solutions, however. In the important method known as perturbation theory, one uses the analytic result for a simple quantum mechanical model to generate a result for a more complicated model that is related to the simpler model by (for one example) the addition of a weak potential energy. Another method is the "semi-classical equation of motion" approach, which applies to systems for which quantum mechanics produces only weak (small) deviations from classical behavior. These deviations can then be computed based on the classical motion. This approach is particularly important in the field of quantum chaos.
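As another illustrative aside (an editorial sketch rather than article content), even a crude numerical approximation — discretizing the one-dimensional time-independent Schrödinger equation on a grid and diagonalizing the resulting Hermitian matrix — reproduces the known harmonic-oscillator spectrum $E_n = (n + 1/2)\hbar\omega$ in units where $\hbar = m = \omega = 1$; the grid size and box length below are arbitrary choices.

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + V(x) on a uniform grid (hbar = m = 1).
N, L = 1000, 20.0                       # grid points, box extent [-L/2, L/2]
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V = 0.5 * x**2                          # harmonic oscillator potential, omega = 1

# Second-derivative operator by central finite differences (hard walls at the box edges).
main = np.full(N, -2.0)
off = np.ones(N - 1)
laplacian = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

H = -0.5 * laplacian + np.diag(V)       # Hermitian Hamiltonian matrix

energies = np.linalg.eigvalsh(H)[:5]
print(energies)                         # close to the analytic values 0.5, 1.5, 2.5, 3.5, 4.5
```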
## Mathematically equivalent formulations of quantum mechanics
There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the "transformation theory" proposed by the late Cambridge theoretical physicist Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics—matrix mechanics (invented by Werner Heisenberg)[30] and wave mechanics (invented by Erwin Schrödinger).[31]
Especially since Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born in the development of QM has become somewhat confused and overlooked. A 2005 biography of Born details his role as the creator of the matrix formulation of quantum mechanics. This fact was recognized in a paper that Heisenberg himself published in 1940 honoring Max Planck.[32] In the matrix formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).[33] An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible histories between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.
## Interactions with other scientific theories
The rules of quantum mechanics are fundamental. They assert that the state space of a system is a Hilbert space, and that observables of that system are Hermitian operators acting on that space—although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical mechanics when a system moves to higher energies or—equivalently—larger quantum numbers, i.e. whereas a single particle exhibits a degree of randomness, in systems incorporating millions of particles averaging takes over and, at the high energy limit, the statistical probability of random behaviour approaches zero. In other words, classical mechanics is simply a quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit. One can even start from an established classical model of a particular system, then attempt to guess the underlying quantum model that would give rise to the classical model in the correspondence limit.
Unsolved problem in physics: In the correspondence limit of quantum mechanics, is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the "superposition of states" and "wavefunction collapse", give rise to the reality we perceive?
When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.
Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical $\scriptstyle -e^2/(4 \pi\ \epsilon_{_0}\ r)$ Coulomb potential. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.
Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. These three men shared the Nobel Prize in Physics in 1979 for this work.[34]
It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable, and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity (the most accurate theory of gravity currently known) and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity.
Classical mechanics has also been extended into the complex domain, with complex classical mechanics exhibiting behaviors similar to quantum mechanics.[35]
### Quantum mechanics and classical physics
Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy[citation needed]. According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles)[citation needed]. The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[36] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.
Quantum coherence is an essential difference between classical and quantum theories, and is illustrated by the Einstein-Podolsky-Rosen paradox[citation needed]. Quantum interference involves adding together probability amplitudes, whereas classical "waves" involve adding together intensities. For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena that are characteristic of quantum systems.[37] Quantum coherence is not typically evident at macroscopic scales - although an exception to this rule can occur at extremely low temperatures (i.e. approaching absolute zero), when quantum behavior can manifest itself on more macroscopic scales (see macroscopic quantum phenomena, Bose-Einstein condensate, and Quantum machine)[citation needed]. This is in accordance with the following observations:
• Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (which consists of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[38]
• While the seemingly "exotic" behavior of matter posited by quantum mechanics and relativity theory become more apparent when dealing with particles of extremely small size or velocities approaching the speed of light, the laws of classical Newtonian physics remain accurate in predicting the behavior of the vast majority of "large" objects (on the order of the size of large molecules or bigger) at velocities much smaller than the velocity of light.[39]
### Relativity and quantum mechanics
Main article: Relativistic quantum mechanics
Even with the defining postulates of both Einstein's theory of general relativity and quantum theory being indisputably supported by rigorous and repeated empirical evidence and while they do not directly contradict each other theoretically (at least with regard to their primary claims), they have proven extremely difficult to incorporate into one consistent, cohesive model.[40]
Einstein himself is well known for rejecting some of the claims of quantum mechanics. While clearly contributing to the field, he did not accept many of the more "philosophical consequences and interpretations" of quantum mechanics, such as the lack of deterministic causality. He is famously quoted as saying, in response to this aspect, "My God does not play with dice". He also had difficulty with the assertion that a single subatomic particle can occupy numerous areas of space at one time. However, he was also the first to notice some of the apparently exotic consequences of entanglement, and used them to formulate the Einstein-Podolsky-Rosen paradox in the hope of showing that quantum mechanics had unacceptable implications. This was 1935, but in 1964 it was shown by John Bell (see Bell inequality) that - although Einstein was correct in identifying seemingly paradoxical implications of quantum mechanical nonlocality - these implications could be experimentally tested. Alain Aspect's initial experiments in 1982, and many subsequent experiments since, have definitively verified quantum entanglement.
According to the paper of J. Bell and the Copenhagen interpretation (the common interpretation of quantum mechanics by physicists since 1927), and contrary to Einstein's ideas, quantum mechanics cannot be both a "realistic" theory and a "local" theory.
The Einstein-Podolsky-Rosen paradox shows in any case that there exist experiments by which one can measure the state of one particle and instantaneously change the state of its entangled partner - although the two particles can be an arbitrary distance apart. However, this effect does not violate causality, since no transfer of information happens. Quantum entanglement forms the basis of quantum cryptography, which is used in high-security commercial applications in banking and government.
Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th and 21st century physics. Many prominent physicists, including Stephen Hawking, have labored for many years in the attempt to discover a theory underlying everything. This TOE would combine not only the different models of subatomic physics, but also derive the four fundamental forces of nature - the strong force, electromagnetism, the weak force, and gravity - from a single force or phenomenon. While Stephen Hawking was initially a believer in the Theory of Everything, after considering Gödel's Incompleteness Theorem, he has concluded that one is not obtainable, and has stated so publicly in his lecture "Gödel and the End of Physics" (2002).[41]
### Attempts at a unified field theory
Main article: Grand unified theory
The quest to unify the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory,[42][unreliable source](blog) has been successfully merged with the weak nuclear force into the electroweak force and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around $10^{14}$ GeV the three aforementioned forces are fused into a single unified field.[43] Beyond this "grand unification," it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly $10^{19}$ GeV. However — and while special relativity is parsimoniously incorporated into quantum electrodynamics — the expanded general relativity, currently the best theory describing the gravitation force, has not been fully incorporated into quantum theory. One of the leading authorities continuing the search for a coherent TOE is Edward Witten, a theoretical physicist who formulated the groundbreaking M-theory, which is an attempt at describing the supersymmetry-based string theory. M-theory posits that our apparent 4-dimensional spacetime is, in reality, actually an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 of the spatial dimensions are - at lower energies - completely "compactified" (or infinitely curved) and not readily amenable to measurement or probing.
Another popular theory is loop quantum gravity (LQG), a theory that describes the quantum properties of gravity. It is also a theory of quantum space and quantum time, because, as discovered with general relativity, the geometry of spacetime is a manifestation of gravity. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. The main output of the theory is a physical picture of space in which space is granular. The granularity is a direct consequence of the quantization. It is of the same nature as the granularity of the photons in the quantum theory of electromagnetism or the discrete energy levels of atoms. But here it is space itself which is discrete. More precisely, space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 1.616×10^−35 m. According to the theory, there is no meaning to length shorter than this (cf. Planck scale energy). Therefore LQG predicts that not just matter, but also space itself, has an atomic structure. Loop quantum gravity was first proposed by Carlo Rovelli.
## Philosophical implications
Main article: Interpretations of quantum mechanics
Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions took decades to be appreciated by society and many leading scientists. Indeed, the renowned physicist Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[44]
The Copenhagen interpretation - due largely to the Danish theoretical physicist Niels Bohr - remains the quantum mechanical formalism that is currently most widely accepted amongst physicists, some 75 years after its enunciation. According to this interpretation, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but instead must be considered a final renunciation of the classical idea of "causality". It is also believed therein that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations.
Albert Einstein, himself one of the founders of quantum theory, disliked this loss of determinism in measurement. Einstein held that there should be a local hidden variable theory underlying quantum mechanics and, consequently, that the present theory was incomplete. He produced a series of objections to the theory, the most famous of which has become known as the Einstein-Podolsky-Rosen paradox. John Bell showed that this "EPR" paradox led to experimentally testable differences between quantum mechanics and local realistic theories. Experiments have been performed confirming the accuracy of quantum mechanics, thereby demonstrating that the physical world cannot be described by any local realistic theory.[45] The Bohr-Einstein debates provide a vibrant critique of the Copenhagen Interpretation from an epistemological point of view.
The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[46] This is not accomplished by introducing some "new axiom" to quantum mechanics, but on the contrary, by removing the axiom of the collapse of the wave packet. All of the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical - not just formally mathematical, as in other interpretations - quantum superposition. Such a superposition of consistent state combinations of different systems is called an entangled state. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can observe only the universe (i.e., the consistent state contribution to the aforementioned superposition) that we, as observers, inhabit. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, these "parallel universes" will never be accessible to us. The inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away at the speed of light towards the other end of the universe. In order to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was originally measured. Not only is this completely impractical, but even if one could theoretically do this, it would destroy any evidence that the original measurement took place (to include the physicist's memory).[citation needed] In light of these Bell tests, Cramer (1986) formulated his Transactional interpretation.[47] Relational quantum mechanics appeared in the late 1990s as the modern derivative of the Copenhagen Interpretation.
## Applications
Quantum mechanics had enormous[48] success in explaining many of the features of our world. Quantum mechanics is often the only tool available that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism).
Quantum mechanics is also critically important for understanding how individual atoms combine covalently to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Relativistic quantum mechanics can, in principle, mathematically describe most of chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and the magnitudes of the energies involved.[49] Furthermore, most of the calculations performed in modern computational chemistry rely on quantum mechanics.
A working mechanism of a resonant tunneling diode device, based on the phenomenon of quantum tunneling through potential barriers
A great many modern technological inventions operate at a scale where quantum effects are significant. Examples include the laser, the transistor (and thus the microchip), the electron microscope, and magnetic resonance imaging (MRI). The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronics systems and devices.
Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information. A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances.
Quantum tunneling is vital to the operation of many devices - even in the simple light switch, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells.
While quantum mechanics primarily applies to the atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale - superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[50] Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this basic fundamental process of the plant kingdom.[51] Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers.
## Examples
### Free particle
For example, consider a free particle. In quantum mechanics, there is wave-particle duality, so the properties of the particle can be described as the properties of a wave. Therefore, its quantum state can be represented as a wave of arbitrary shape and extending over space as a wave function. The position and momentum of the particle are observables. The Uncertainty Principle states that both the position and the momentum cannot simultaneously be measured with complete precision. However, one can measure the position (alone) of a moving free particle, creating an eigenstate of position with a wavefunction that is very large (a Dirac delta) at a particular position x, and zero everywhere else. If one performs a position measurement on such a wavefunction, the resultant x will be obtained with 100% probability (i.e., with full certainty, or complete precision). This is called an eigenstate of position—or, stated in mathematical terms, a generalized position eigenstate (eigendistribution). If the particle is in an eigenstate of position, then its momentum is completely unknown. On the other hand, if the particle is in an eigenstate of momentum, then its position is completely unknown.[52] In an eigenstate of momentum having a plane wave form, it can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate.[53]
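As a quick numerical illustration of the $\lambda = h/p$ relation just quoted (this snippet is not part of the original article, and the chosen electron speed is an arbitrary, non-relativistic example):

```python
import math

h = 6.626e-34      # Planck's constant, J*s
m_e = 9.109e-31    # electron mass, kg
v = 1.0e6          # an arbitrary non-relativistic electron speed, m/s

p = m_e * v             # momentum of the plane-wave (momentum) eigenstate
wavelength = h / p      # de Broglie wavelength, lambda = h/p
print(wavelength)       # ~7.3e-10 m, i.e. a few angstroms
```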
3D confined electron wave functions for each eigenstate in a Quantum Dot. Here, rectangular and triangular-shaped quantum dots are shown. Energy states in rectangular dots are more ‘s-type’ and ‘p-type’. However, in a triangular dot, the wave functions are mixed due to confinement symmetry.
### Step potential
Scattering at a finite potential step of height V0, shown in green. The amplitudes and direction of left- and right-moving waves are indicated. Yellow is the incident wave, blue are reflected and transmitted waves, red does not occur. E > V0 for this figure.
The potential in this case is given by:
$V(x)= \begin{cases} 0, & x < 0, \\ V_0, & x \ge 0. \end{cases}$
The solutions are superpositions of left- and right-moving waves:
$\psi_1(x)= \frac{1}{\sqrt{k_1}} \left(A_\rightarrow e^{i k_1 x} + A_\leftarrow e^{-ik_1x}\right)\quad x<0$,
$\psi_2(x)= \frac{1}{\sqrt{k_2}} \left(B_\rightarrow e^{i k_2 x} + B_\leftarrow e^{-ik_2x}\right)\quad x>0$
where the wave vectors are related to the energy via
$k_1=\sqrt{2m E/\hbar^2}$, and
$k_2=\sqrt{2m (E-V_0)/\hbar^2}$
and the coefficients A and B are determined from the boundary conditions and by imposing a continuous derivative on the solution.
Each term of the solution can be interpreted as an incident, reflected, or transmitted component of the wave, allowing the calculation of transmission and reflection coefficients. In contrast to classical mechanics, incident particles with energies higher than the size of the potential step are still partially reflected.
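A minimal numerical sketch of this calculation (not part of the original article; it uses natural units $\hbar = m = 1$ and illustrative values of $E$ and $V_0$) makes the partial reflection explicit:

```python
import math

hbar = 1.0
m = 1.0          # natural units, purely for illustration
V0 = 1.0         # height of the potential step
E = 2.0          # incident energy, with E > V0 as in the figure

k1 = math.sqrt(2 * m * E) / hbar
k2 = math.sqrt(2 * m * (E - V0)) / hbar

# Matching psi and its derivative at x = 0 for a wave incident from the left:
r = (k1 - k2) / (k1 + k2)            # reflection amplitude
R = r**2                             # reflection coefficient
T = 4 * k1 * k2 / (k1 + k2)**2       # transmission coefficient
print(R, T, R + T)                   # R is nonzero even though E > V0, and R + T = 1
```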
### Rectangular potential barrier
Main article: Rectangular potential barrier
This is a model for the quantum tunneling effect, which has important applications to modern devices such as flash memory and the scanning tunneling microscope.
### Particle in a box
1-dimensional potential energy box (or infinite potential well)
Main article: Particle in a box
The particle in a one-dimensional potential energy box is the simplest example where constraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and infinite potential energy everywhere outside that region. For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation can be written as:[54]
$- \frac {\hbar ^2}{2m} \frac {d ^2 \psi}{dx^2} = E \psi.$
Writing the differential operator
$\hat{p}_x = -i\hbar\frac{d}{dx}$
the previous equation can be seen to be evocative of the classic kinetic energy analogue
$\frac{1}{2m} \hat{p}_x^2 = E$
with $E$ as the energy for the state $\psi$, which in this case coincides with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are:
$\psi(x) = A e^{ikx} + B e ^{-ikx} \qquad\qquad E = \frac{\hbar^2 k^2}{2m}$
or, from Euler's formula,
$\psi(x) = C \sin kx + D \cos kx.\!$
The presence of the walls of the box determines the values of C, D, and k. At each wall (x = 0 and x = L), ψ = 0. Thus when x = 0,
$\psi(0) = 0 = C\sin 0 + D\cos 0 = D\!$
and so D = 0. When x = L,
$\psi(L) = 0 = C\sin kL.\!$
C cannot be zero, since this would conflict with the Born interpretation. Therefore, sin kL = 0, and so it must be that kL is an integer multiple of π. And additionally,
$k = \frac{n\pi}{L}\qquad\qquad n=1,2,3,\ldots.$
The quantization of energy levels follows from this constraint on k, since
$E = \frac{\hbar^2 \pi^2 n^2}{2mL^2} = \frac{n^2h^2}{8mL^2}.$
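For concreteness (this example is not in the original article; the electron mass and the 1 nm box width are arbitrary illustrative choices), the first few energy levels follow directly from this formula:

```python
h = 6.626e-34       # Planck's constant, J*s
m = 9.109e-31       # electron mass, kg
L = 1.0e-9          # box width of 1 nm
eV = 1.602e-19      # joules per electron-volt

for n in (1, 2, 3):
    E_n = n**2 * h**2 / (8 * m * L**2)       # E_n = n^2 h^2 / (8 m L^2)
    print(n, round(E_n / eV, 3), "eV")       # roughly 0.376, 1.505, 3.386 eV
```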
### Finite potential well
Main article: Finite potential well
This is the generalization of the infinite potential well problem to potential wells of finite depth.
### Harmonic oscillator
Main article: Quantum harmonic oscillator
Some trajectories of a harmonic oscillator (i.e. a ball attached to a spring) in classical mechanics (A-B) and quantum mechanics (C-H). In quantum mechanics, the position of the ball is represented by a wave (called the wavefunction), with the real part shown in blue and the imaginary part shown in red. Some of the trajectories (such as C,D,E,and F) are standing waves (or "stationary states"). Each standing-wave frequency is proportional to a possible energy level of the oscillator. This "energy quantization" does not occur in classical physics, where the oscillator can have any energy.
As in the classical case, the potential for the quantum harmonic oscillator is given by:
$V(x)=\frac{1}{2}m\omega^2x^2$
This problem can be solved either by solving the Schrödinger equation directly, which is not trivial, or by using the more elegant "ladder method", first proposed by Paul Dirac. The eigenstates are given by:
$\psi_n(x) = \sqrt{\frac{1}{2^n\,n!}} \cdot \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} \cdot e^{ - \frac{m\omega x^2}{2 \hbar}} \cdot H_n\left(\sqrt{\frac{m\omega}{\hbar}} x \right), \qquad n = 0,1,2,\ldots.$
where Hn are the Hermite polynomials:
$H_n(x)=(-1)^n e^{x^2}\frac{d^n}{dx^n}\left(e^{-x^2}\right)$
and the corresponding energy levels are
$E_n = \hbar \omega \left(n + {1\over 2}\right)$.
This is another example which illustrates the quantization of energy for bound states.
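These eigenstates and energies are easy to check numerically. The following sketch (not part of the original article; it uses natural units $m=\omega=\hbar=1$ and SciPy's physicists' Hermite polynomials) verifies that the first few $\psi_n$ are normalized and lists their energies:

```python
import math
import numpy as np
from scipy.special import eval_hermite   # physicists' Hermite polynomials H_n

m = omega = hbar = 1.0                   # natural units, purely for illustration

def psi(n, x):
    """Harmonic-oscillator eigenstate psi_n evaluated on an array x."""
    prefactor = (m * omega / (math.pi * hbar)) ** 0.25 / math.sqrt(2.0**n * math.factorial(n))
    xi = np.sqrt(m * omega / hbar) * x
    return prefactor * np.exp(-xi**2 / 2.0) * eval_hermite(n, xi)

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
for n in range(3):
    norm = np.sum(psi(n, x)**2) * dx     # should be very close to 1
    E_n = hbar * omega * (n + 0.5)       # E_n = hbar*omega*(n + 1/2)
    print(n, round(norm, 6), E_n)
```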
## Notes
1. The angular momentum of an unbound electron, in contrast, is not quantized.
2. van Hove, Leon (1958). "Von Neumann's contributions to quantum mechanics" (PDF). Bulletin of the American Mathematical Society 64: Part2:95–99.
3. Mehra, J.; Rechenberg, H. (1982). The historical development of quantum theory. New York: Springer-Verlag. ISBN 0387906428.
4. Kuhn, T. S. (1978). Black-body theory and the quantum discontinuity 1894-1912. Oxford: Clarendon Press. ISBN 0195023838.
5. Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" [On a heuristic point of view concerning the production and transformation of light]. Annalen der Physik 17 (6): 132–148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607. Reprinted in The collected papers of Albert Einstein, John Stachel, editor, Princeton University Press, 1989, Vol. 2, pp. 149-166, in German; see also Einstein's early work on the quantum hypothesis, ibid. pp. 134-148.
6. "Quantum interference of large organic molecules". Nature.com. Retrieved April 20, 2013.
7. "Quantum - Definition and More from the Free Merriam-Webster Dictionary". Merriam-webster.com. Retrieved 2012-08-18.
8. Oocities.com at the Wayback Machine (archived October 26, 2009)
9. P.A.M. Dirac, The Principles of Quantum Mechanics, Clarendon Press, Oxford, 1930.
10. D. Hilbert Lectures on Quantum Theory, 1915-1927
11. J. von Neumann, Mathematische Grundlagen der Quantenmechanik, Springer, Berlin, 1932 (English translation: Mathematical Foundations of Quantum Mechanics, Princeton University Press, 1955).
12. H. Weyl, "The Theory of Groups and Quantum Mechanics", 1931 (original title: "Gruppentheorie und Quantenmechanik").
13. Greiner, Walter; Müller, Berndt (1994). Quantum Mechanics Symmetries, Second edition. Springer-Verlag. p. 52. ISBN 3-540-58080-8. , Chapter 1, p. 52
14. "Heisenberg - Quantum Mechanics, 1925-1927: The Uncertainty Relations". Aip.org. Retrieved 2012-08-18.
15. Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics, Second edition. Jones and Bartlett Publishers, Inc. p. 215. ISBN 0-7637-2470-X, Chapter 8, p. 215
16. "[Abstract] Visualization of Uncertain Particle Movement". Actapress.com. Retrieved 2012-08-18.
17. Hirshleifer, Jack (2001). The Dark Side of the Force: Economic Foundations of Conflict Theory. Cambridge University Press. p. 265. ISBN 0-521-80412-4.
18. "Topics: Wave-Function Collapse". Phy.olemiss.edu. 2012-07-27. Retrieved 2012-08-18.
19. "Collapse of the wave-function". Farside.ph.utexas.edu. Retrieved 2012-08-18.
20. "Determinism and Naive Realism : philosophy". Reddit.com. 2009-06-01. Retrieved 2012-08-18.
21. Michael Trott. "Time-Evolution of a Wavepacket in a Square Well — Wolfram Demonstrations Project". Demonstrations.wolfram.com. Retrieved 2010-10-15.
22. Michael Trott. "Time Evolution of a Wavepacket In a Square Well". Demonstrations.wolfram.com. Retrieved 2010-10-15.
23. Mathews, Piravonu Mathews; Venkatesan, K. (1976). A Textbook of Quantum Mechanics. Tata McGraw-Hill. p. 36. ISBN 0-07-096510-2. , Chapter 2, p. 36
24. "Wave Functions and the Schrödinger Equation" (PDF). Retrieved 2010-10-15. []
25. "Quantum Physics: Werner Heisenberg Uncertainty Principle of Quantum Mechanics. Werner Heisenberg Biography". Spaceandmotion.com. 1976-02-01. Retrieved 2012-08-18.
26. Nancy Thorndike Greenspan, "The End of the Certain World: The Life and Science of Max Born" (Basic Books, 2005), pp. 124-8 and 285-6.
27. "The Nobel Prize in Physics 1979". Nobel Foundation. Retrieved 2010-02-16.
28. Carl M. Bender, Daniel W. Hook, Karta Kooner (2009-12-31). "Complex Elliptic Pendulum". arXiv:1001.0131 [hep-th].
29. "Quantum mechanics course iwhatisquantummechanics". Scribd.com. 2008-09-14. Retrieved 2012-08-18.
30. "Between classical and quantum�" (PDF). Retrieved 2012-08-19.
31. "Atomic Properties". Academic.brooklyn.cuny.edu. Retrieved 2012-08-18.
32. "There is as yet no logically consistent and complete relativistic quantum field theory.", p. 4. — V. B. Berestetskii, E. M. Lifshitz, L P Pitaevskii (1971). J. B. Sykes, J. S. Bell (translators). Relativistic Quantum Theory 4, part I. Course of Theoretical Physics (Landau and Lifshitz) ISBN 0-08-016025-5
33. "Life on the lattice: The most accurate theory we have". Latticeqcd.blogspot.com. 2005-06-03. Retrieved 2010-10-15.
34. Parker, B. (1993). Overcoming some of the problems. pp. 259–279.
35. The Character of Physical Law (1965) Ch. 6; also quoted in The New Quantum Universe (2003), by Tony Hey and Patrick Walters
36. "Action at a Distance in Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. 2007-01-26. Retrieved 2012-08-18.
37. "Everett's Relative-State Formulation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. Retrieved 2012-08-18.
38. The Transactional Interpretation of Quantum Mechanics by John Cramer. Reviews of Modern Physics 58, 647-688, July (1986)
39. See, for example, the Feynman Lectures on Physics for some of the technological applications which use quantum mechanics, e.g., transistors (vol III, pp. 14-11 ff), integrated circuits, which are follow-on technology in solid-state physics (vol II, pp. 8-6), and lasers (vol III, pp. 9-13).
40. Introduction to Quantum Mechanics with Applications to Chemistry - Linus Pauling, E. Bright Wilson. Books.google.com. 1985-03-01. ISBN 9780486648712. Retrieved 2012-08-18.
41. Anderson, Mark (2009-01-13). "Is Quantum Mechanics Controlling Your Thoughts? | Subatomic Particles". DISCOVER Magazine. Retrieved 2012-08-18.
42. "Quantum mechanics boosts photosynthesis". physicsworld.com. Retrieved 2010-10-23.
43. Davies, P. C. W.; Betts, David S. (1984). Quantum Mechanics, Second edition. Chapman and Hall. p. 79. ISBN 0-7487-4446-0. , Chapter 6, p. 79
44. Baofu, Peter (2007-12-31). The Future of Complexity: Conceiving a Better Way to Understand Order and Chaos. Books.google.com. ISBN 9789812708991. Retrieved 2012-08-18.
## References
The following titles, all by working physicists, attempt to communicate quantum theory to lay people, using a minimum of technical apparatus.
• Malin, Shimon (2012). Nature Loves to Hide: Quantum Physics and the Nature of Reality, a Western Perspective (Revised ed.). World Scientific. ISBN 978-981-4324-57-1.
• Chester, Marvin (1987) Primer of Quantum Mechanics. John Wiley. ISBN 0-486-42878-8
• Richard Feynman, 1985. QED: The Strange Theory of Light and Matter, Princeton University Press. ISBN 0-691-08388-6. Four elementary lectures on quantum electrodynamics and quantum field theory, yet containing many insights for the expert.
• Ghirardi, GianCarlo, 2004. Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra-ket notation can be passed over on a first reading.
• N. David Mermin, 1990, "Spooky actions at a distance: mysteries of the QT" in his Boojums all the way through. Cambridge University Press: 110-76.
• Victor Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpts. 5-8. Includes cosmological and philosophical considerations.
More technical:
• Bryce DeWitt, R. Neill Graham, eds., 1973. The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press. ISBN 0-691-08131-X
• Dirac, P. A. M. (1930). The Principles of Quantum Mechanics. ISBN 0-19-852011-5. The beginning chapters make up a very clear and comprehensible introduction.
• Hugh Everett, 1957, "Relative State Formulation of Quantum Mechanics," Reviews of Modern Physics 29: 454-62.
• Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (1965). The Feynman Lectures on Physics, Vols. 1–3. Addison-Wesley. ISBN 0-7382-0008-5.
• Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 0-13-111892-7. OCLC 40251748. A standard undergraduate text.
• Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw Hill.
• Hagen Kleinert, 2004. Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. Singapore: World Scientific. Draft of 4th edition.
• Gunther Ludwig, 1968. Wave Mechanics. London: Pergamon Press. ISBN 0-08-203204-1
• George Mackey (2004). The mathematical foundations of quantum mechanics. Dover Publications. ISBN 0-486-43517-2.
• Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G. M. Temmer. North Holland, John Wiley & Sons. Cf. chpt. IV, section III.
• Omnès, Roland (1999). Understanding Quantum Mechanics. Princeton University Press. ISBN 0-691-00435-8. OCLC 39849482.
• Scerri, Eric R., 2006. The Periodic Table: Its Story and Its Significance. Oxford University Press. Considers the extent to which chemistry and the periodic system have been reduced to quantum mechanics. ISBN 0-19-530573-6
• Transnational College of Lex (1996). What is Quantum Mechanics? A Physics Adventure. Language Research Foundation, Boston. ISBN 0-9643504-1-6. OCLC 34661512.
• von Neumann, John (1955). Mathematical Foundations of Quantum Mechanics. Princeton University Press. ISBN 0-691-02893-1.
• Hermann Weyl, 1950. The Theory of Groups and Quantum Mechanics, Dover Publications.
• D. Greenberger, K. Hentschel, F. Weinert, eds., 2009. Compendium of quantum physics, Concepts, experiments, history and philosophy, Springer-Verlag, Berlin, Heidelberg.
## Further reading
• Bernstein, Jeremy (2009). Quantum Leaps. Cambridge, Massachusetts: Belknap Press of Harvard University Press. ISBN 978-0-674-03541-6.
• Müller-Kirsten, H. J. W. (2012). Introduction to Quantum Mechanics: Schrödinger Equation and Path Integral (2nd ed.). World Scientific. ISBN 978-981-4397-74-2.
• Bohm, David (1989). Quantum Theory. Dover Publications. ISBN 0-486-65969-0.
• Eisberg, Robert; Resnick, Robert (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). Wiley. ISBN 0-471-87373-X.
• Liboff, Richard L. (2002). Introductory Quantum Mechanics. Addison-Wesley. ISBN 0-8053-8714-5.
• Merzbacher, Eugen (1998). Quantum Mechanics. Wiley, John & Sons, Inc. ISBN 0-471-88702-1.
• Sakurai, J. J. (1994). Modern Quantum Mechanics. Addison Wesley. ISBN 0-201-53929-2.
• Shankar, R. (1994). Principles of Quantum Mechanics. Springer. ISBN 0-306-44790-8.
• Cox, Brian; Forshaw, Jeff (2011). The Quantum Universe: Everything That Can Happen Does Happen. Allen Lane. ISBN 1-84614-432-9.
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevLett.104.120601

# Synopsis: In thin air
#### Thermodynamically Admissible 13 Moment Equations from the Boltzmann Equation
Hans Christian Öttinger
Published March 22, 2010
In a sufficiently rarefied gas, the tools of standard hydrodynamics, namely, the Navier-Stokes-Fourier equations (based on Newton’s second law applied to fluid motion), no longer apply. On the other hand, tracking the statistics of individual molecules, as is done in Boltzmann’s kinetic equation approach, is computationally prohibitive.
Writing in Physical Review Letters, Hans Christian Öttinger at the ETH in Zurich, Switzerland, strengthens a description of rarefied gas flow that is intermediate between these two extremes. Previous work showed that Boltzmann’s kinetic equation can yield a set of $13$ moment equations that describe the momentum of the gas and heat flux within it. In his work, Öttinger looks to the laws of nonequilibrium thermodynamics to provide the necessary constraints on the equations. For example, entropy is conserved in reversible processes and standard hydrodynamics is recovered in the limiting case.
The results could advance efforts—particularly those based on numerical calculations—to describe the kinetics of nonequilibrium gases. These calculations are important for understanding gas flow in the smallest of channels—as in microfluidics—and aerodynamics of satellites and space stations in the outer limits of our atmosphere. – Jessica Thomas
http://programmingpraxis.com/2012/01/13/

# Programming Praxis
A collection of etudes, updated weekly, for the education and enjoyment of the savvy programmer
## Excel’s XIRR Function
### January 13, 2012
We studied numerical integration in a previous exercise. In today’s exercise we will look at the inverse operation of numerically calculating a derivative.
The function that interests us in today's exercise is the XIRR function from Excel, which computes the internal rate of return of a series of cash flows that are not necessarily periodic. The XIRR function calculates the value of $x$ that makes the following equation go to 0, where $p_i$ is the $i$th cash flow, $d_i$ is the date of the $i$th cash flow, and $d_0$ is the date of the first cash flow:
$\sum_i \frac{p_i}{(1+x)^{(d_i-d_0)/365}}$
The method used to estimate $x$ was devised by Sir Isaac Newton about three hundred years ago. If $x_n$ is an approximation to a root of a function $f$, then a better approximation $x_{n+1}$ is given by
$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$
where $f'(x_n)$ is the derivative of $f$ at $x_n$. Mathematically, the derivative of a function at a given point is the slope of the tangent line at that point. Arithmetically, we calculate the slope of the tangent line by knowing the value of the function at a point $x$ and a nearby point $x+\epsilon$, then using the equation
$\frac{f(x+\epsilon) - f(x)}{(x+\epsilon)-x}$
to determine the slope of the line. Thus, to find x, pick an initial guess (0.1 or 10% works well for most interest calculations) and iterate until the difference between two successive values is close enough. For example, with payments of -10000, 2750, 4250, 3250, and 2750 on dates 1 Jan 2008, 1 March 2008, 30 October 2008, 15 February 2009, and 1 April 2009, the internal rate of return is 37.3%.
Your task is to write a function that mimics Excel’s XIRR function. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
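One possible sketch of such a function in Python (this is not the exercise's suggested solution; the function name and tolerances are illustrative choices) follows the recipe above: Act/365 discounting, a forward-difference estimate of the derivative, and Newton iteration starting from 0.1:

```python
from datetime import date

def xirr(payments, dates, guess=0.1, eps=1e-6, tol=1e-9, max_iter=100):
    """Internal rate of return for irregularly spaced cash flows via Newton's method."""
    d0 = dates[0]
    def f(x):
        return sum(p / (1.0 + x) ** ((d - d0).days / 365.0)
                   for p, d in zip(payments, dates))
    x = guess
    for _ in range(max_iter):
        slope = (f(x + eps) - f(x)) / eps      # finite-difference derivative
        x_next = x - f(x) / slope              # Newton step
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

payments = [-10000, 2750, 4250, 3250, 2750]
dates = [date(2008, 1, 1), date(2008, 3, 1), date(2008, 10, 30),
         date(2009, 2, 15), date(2009, 4, 1)]
print(xirr(payments, dates))    # about 0.373, i.e. the 37.3% quoted above
```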
http://math.stackexchange.com/questions/284002/let-g-v-x-cup-y-e-be-a-bipartite-graph-show-that-g-has-a-matching-which-mat

# Let $G=(V=X \cup Y,E)$ be a bipartite graph. Show that $G$ has a matching which matches every vertex of $X$.
I have a similar question to this on my test tomorrow. Any help with this question would be appreciated.
Let $G=(V=X \cup Y,E)$ be a bipartite graph. Suppose that the degree of each vertex satisfies $d(v)\geq 1$. Assume also that for each edge $xy$ with $x\in X$, we have $d(x)\geq d(y)$. Show that $G$ has a matching which matches every vertex of $X$.
Hint. It is enough to show that Hall's condition holds on the $X$-side, since if a matching has an unmatched $X$-vertex, we can then use our algorithmic proof of Hall's Theorem to make it larger (saturating one extra vertex of $X$).
## 1 Answer
We want to show that for every $S \subseteq X$, $\lvert N(S) \rvert \geq \lvert S \rvert$, where $N(S)$ denotes the neighborhood of $S$.
Let $S \subseteq X$ be a minimal counterexample. This means that for every $x \in S$, $S \setminus \{x\}$ satisfies Hall's condition. Choose $x \in S$ and let $S' = S \setminus \{x\}$.
First, we may assume that $N(x) \subseteq N(S')$. If not, let $y$ be a neighbor of $x$ not in $N(S')$. Then, by hypothesis, we may match all of the elements of $S'$ to elements of $N(S')$ and may also match $x$ to $y$. Furthermore, we may assume that $\lvert N(S') \rvert = \lvert S' \rvert$: otherwise, we have $$\lvert N(S) \rvert \geq \lvert N(S') \rvert \geq \lvert S' \rvert + 1 = \lvert S \rvert.$$ Now we match every element $z$ of $S'$ to an element $y$ of $N(S')$. By our hypothesis that $d(z) \geq d(y)$, we have $$\sum_{y \in N(S')} d(y) \leq \sum_{z \in S'} d(z) \leq \bigl\lvert E\bigl(S', N(S')\bigr) \bigr\rvert \leq \sum_{y \in N(S')} d(y),$$ that is, $$\sum_{y \in N(S')} d(y) = \bigl\lvert E\bigl(S', N(S')\bigr) \bigr\rvert.$$ This means that all of the neighbors of the elements of $N(S')$ are in $S'$. But $d(x) \geq 1$ and $N(x) \subseteq N(S')$, so the vertex $x \notin S'$ is adjacent to some element of $N(S')$, a contradiction. Hence, $S$ cannot be a counterexample, and $X$ satisfies Hall's condition.
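As a small aside (not part of the original answer), here is a brute-force Python sketch that checks Hall's condition directly on a toy bipartite graph satisfying the degree hypothesis; the graph and the function name are made up for illustration:

```python
from itertools import combinations

def hall_condition_holds(adj, X):
    """Check |N(S)| >= |S| for every nonempty subset S of X (brute force)."""
    for r in range(1, len(X) + 1):
        for S in combinations(X, r):
            neighborhood = set().union(*(adj[x] for x in S))
            if len(neighborhood) < len(S):
                return False
    return True

# X = {0, 1, 2}, Y = {'a', 'b', 'c'}; every vertex has degree >= 1 and
# d(x) >= d(y) holds for each edge xy with x in X (all degrees equal 2 here).
adj = {0: {'a', 'b'}, 1: {'b', 'c'}, 2: {'a', 'c'}}
print(hall_condition_holds(adj, [0, 1, 2]))   # True, so a matching saturating X exists
```

For anything beyond a toy example one would of course run an augmenting-path matching algorithm rather than enumerating all subsets.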
http://mathoverflow.net/questions/5001/notions-of-degree-for-maps-sn-to-sn/5007

## Notions of degree for maps $S^n \to S^n$?
In algebraic topology, we define the degree of a map $f: S^n \to S^n$ as the integer to which the induced map $f_*$ on the $n$-th homology group $H_n(S^n)\cong\mathbb{Z}$ sends $1$.
In differential topology, we have a different (same?) notion of degree for $f$. You take a regular value $b \in S^n$, consider $f^{-1} (b)$ (which is finite by the inverse function theorem and some compactness argument), and take the difference between the number of points in the preimage where the Jacobian of $f$ is positive and the number of points in the preimage where the Jacobian of $f$ is negative.
Geometrically, I can see that they are the same, but I couldn't convince myself rigorously. In Prop 2.30 of Hatcher, he mentions that the degree of $f$ is the sum of the local degrees of $f$ at each preimage point, and local degrees are either $\pm 1$. (Local degree is defined in the middle of page 136 in Hatcher.)
So, the final question is, must the sign of the local degree of $x \in f^{-1}(b)$ the same as the sign of the Jacobian of $f$ at $x$?
There's technical details -- the jacobian needs to be non-degenerate for the formula to hold. But yes, they're the same. This is covered in Milnor's "topology from a differentiable viewpoint", Guillemin and Pollack's "differential topology", and it's also in Bredon's "geometry and topology". This construction can be viewed as something of a first step in the Pontriagin construction, covered in more detail in the Milnor text. – Ryan Budney Nov 11 2009 at 9:17
@Ryan: why didn't you make this an answer? – Andrew Stacey Nov 11 2009 at 12:39
The comment feature somehow seems more humble. I guess I'm getting unnaturally attached to it. – Ryan Budney Nov 11 2009 at 19:13
## 3 Answers
I think what you need is the following lemma (usually called the "Stack of records" lemma):
Consider a smooth proper map of manifolds of the same dimension $f \colon M \to N$ and let $y \in N$ be a regular value of $f$.
Then there exists a neighbourhood $V \subset N$ of $y$ such that $f^{-1}(V) = \cup_{i=1}^n U_i$ with $U_i \cap U_j = \emptyset$ for $i \neq j$ and $f|_{U_i} \colon U_i \to V$ is a diffeomorphism for all $i$.
Now from this you can just sum up $\pm 1$ according to orientation on each $U_i$ to get the local degree of $f$ at $y$, and this works for both definitions of degree.
I think so: it looks like the local degree according to Hatcher's definition measures whether $f$ preserves orientation or reverses it on the neighborhood of $x$. On page 233 he begins discussion of orientation using excision classes: an orientation for a neighborhood in an $n$-manifold at a point $x$ is just a choice of generator of $H_n(\mathbb R^n, \mathbb R^n-x)$, and a small neighborhood $U$ about $x$ is homeomorphic to $\mathbb R^n$. In his degree-counting, he takes a neighborhood $U$ of $x$ which is disjoint from the other preimages of $f(x)$ and looks at the sign of the map $H_n(U, U-\{x\})\rightarrow H_n(f(U), f(U)-\{y\})$.
The sign of the Jacobian of $f$ should also tell you whether $f$ is locally orientation-preserving or reversing at at $x$.
I think you can find a proof that the differentiable topological degree is the (co)homological degree in the book by Bott and Tu (Differential Forms in Algebraic Topology). But there, instead of homology, they describe cohomology first. Then you need to translate everything to the homological setting (via the de Rham isomorphism).
http://physics.stackexchange.com/questions/9836/how-many-boats-does-it-take-to-find-an-acoustic-buoy-by-doppler-shift?answertab=oldest

# How many boats does it take to find an acoustic buoy by Doppler shift?
Inspired by this question on the Doppler shift, suppose there is a buoy somewhere on the surface of the ocean emitting a pure frequency.
You get to place some boats wherever you want on the surface of the ocean, moving whatever direction you want with whatever speed you want.
The boats listen to the pure frequency, which will in general be Doppler shifted due to the relative motion between buoy and boat. If you do not know ahead of time the frequency the buoy is emitting, is it possible to deduce the buoy's location simply by observing the frequency measured by the boats? What is the minimum number of boats necessary, how should they be positioned, and what velocities should they have?
This is a toy problem with the mathematics of the Doppler effect, so let's leave out attenuation of sound over distance and assume the surface of the ocean is a plane. Also, if the boats could continuously monitor the Doppler shift, they could collect extra information based on their own changes in position and velocity, so imagine locating the buoy based solely on a single frequency measurement from each boat.
Yes, the buoy is fixed. I think you can deal with your qualms about pure frequency if you give it a little effort. Obviously it makes no real difference from a practical point of view. – Mark Eichenlaub May 14 '11 at 15:35
unlike the "pure energy" misnomer, "pure frequency" actually refers to a mathematical concept, albeit strictly unphysical, still is applicable in many approximations – lurscher May 16 '11 at 22:27
What is wrong with a pure frequency? This is perfectly well-defined ($\exists T\neq 0\colon p(\mathbf{x},t)=p(\mathbf{x},t+nT)\ \forall n\in\mathbb{Z},\mathbf{x},t$) and actually quite easy to achieve experimentally. – leftaroundabout Jun 23 '11 at 11:12
## 3 Answers
Lemme make sure I have the assumptions correct:
1) The buoy is at rest in a known inertial frame in a known plane; the boats are in the same plane.
2) Each boat may make a "single measurement" of the frequency. I will assume this means that the time of measurement is long enough that the boat can resolve the frequency to arbitrary precision, but the boat's velocity is constant over the measurement time and the change in the boat's position (relative to the buoy) is negligible. I will assume the angle of origin of the signal cannot be determined, only the frequency. The velocity and positions of the boats are known.
A three-boat solution:
First boat: at rest in the inertial frame. Measures the frequency.
Second boat: nonzero velocity in the inertial frame.
From the Doppler shift measured by the second boat, you can determine the absolute value of the angle of the buoy with respect to the velocity vector of the second boat (but can't distinguish "right" from "left").
Third boat: nonzero velocity, nonparallel to the second boat. Different location than the second boat.
From the Doppler shift measured by the third boat, you can determine the absolute value of the angle of the buoy with respect to the velocity vector of the third boat.
Draw lines from the 2nd and 3rd boats along all the two possible angles from each. Where they intersect is the location of the buoy. That's the "straightforward" solution. Is it possible to do it with two boats if you get tricky?
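A numerical sketch of this three-boat solution (not part of the original answer; the positions, the 10 m/s boat speeds, the 440 Hz source, and the 1500 m/s sound speed are all made-up illustrative values) recovers the angle from each shift and intersects the candidate rays:

```python
import numpy as np

C = 1500.0    # assumed speed of sound in water, m/s

def heard(f0, pos, vel, buoy):
    """Frequency heard by a moving boat from a stationary buoy (moving-observer Doppler)."""
    u = (buoy - pos) / np.linalg.norm(buoy - pos)    # unit vector from boat to buoy
    return f0 * (1.0 + vel @ u / C)

# Forward simulation: the buoy position and f0 are "unknown" to the boats.
buoy_true = np.array([2000.0, 3000.0])
f0_true = 440.0
boats = [(np.array([500.0, -500.0]), np.array([0.0, 0.0])),    # first boat: at rest
         (np.array([0.0, 0.0]),      np.array([10.0, 0.0])),   # second boat
         (np.array([4000.0, 0.0]),   np.array([0.0, 10.0]))]   # third boat
meas = [heard(f0_true, p, v, buoy_true) for p, v in boats]

# Inversion: the resting boat hears the true frequency directly.
f0 = meas[0]
rays = []
for (p, v), f in zip(boats[1:], meas[1:]):
    cos_a = C * (f / f0 - 1.0) / np.linalg.norm(v)   # |angle| between velocity and buoy direction
    a = np.arccos(np.clip(cos_a, -1.0, 1.0))
    phi = np.arctan2(v[1], v[0])
    rays.append([(p, np.array([np.cos(phi + s * a), np.sin(phi + s * a)])) for s in (1, -1)])

for p1, d1 in rays[0]:
    for p2, d2 in rays[1]:
        A = np.column_stack([d1, -d2])
        if abs(np.linalg.det(A)) < 1e-9:
            continue                                  # parallel candidate rays
        t, s = np.linalg.solve(A, p2 - p1)
        if t > 0 and s > 0:                           # intersection ahead of both boats
            print(p1 + t * d1)                        # ~ [2000. 3000.]
```

In degenerate geometries more than one ray pair can intersect ahead of both boats, which is exactly the ambiguity discussed in the comments below.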
This looks right to me. With only two boats, you get just a single number - the frequency difference between the boats. A single number shouldn't be able to give you a two-dimensional vector, so three ought to be the minimum. – Mark Eichenlaub May 17 '11 at 11:45
I'm pretty sure that for any particular geometry you propose I can find a place to put the buoy that develops at least one ambiguity. For instance let the moving boats diverge along the x and y axis. There are places behind each moving boat where I can place the buoy and get two (or even four) solutions. This will work in most cases, but not every case. – dmckee♦ May 17 '11 at 14:56
dmckee: I don't understand your objection, but perhaps that's because I'm not envisioning the same case you are (I don't know what it means for boats to "diverge"). Could you give a specific example, such as the x and y coordinates of the two boats, their velocity vectors, and the location of the buoy? – Anonymous Coward May 17 '11 at 17:41
@Anon: Boats at $(1,0)$ and $(0,1)$ each moving in the direction of increasing coordinate. If I put the buoy at $(0,0.057)$ the boat at $(0,1)$ knows it is directly behind, and the boat at $(1,0)$ gets an angle of $\pm 1/10$ radian from the rear, making for two solutions: $\{(0,0.57), (0,-0.57)\}$. I think I can do that for any geometry you choose for your two moving boats. If I put the buoy at $(\pm 0.057, \pm 0.057)$ you end up with a four-fold ambiguity. – dmckee♦ May 21 '11 at 17:47
'Course, the real world solution to that issue is to pick a geometry where all the ambiguous solutions are either within buoy-sighting distance of at least one boat or all within buoy-sighting distance of each other. And this may be possible. – dmckee♦ May 21 '11 at 18:11
Really idealized approach.
Assuming the buoy is on the surface.
Three boats. They move at the same non-zero velocity much less than the speed of sound (but enough for the instruments to read a change) perpendicular to the distance between them. Their separation is small compared to the length scales of the problem, but nearly twice the "spotting something on the surface" distance. At a given time they take a reading. Each boat gets an angle between its velocity and the buoy, leading to a left-right ambiguity for each boat as well as the range ambiguity. Further, while the pairs should be able to resolve the range issue, there remains the possibility of a several-solution ambiguity for each pair. The third boat should resolve them.
Pathological case: the buoy lies on the line between the boats when they take their reading.
Solution to the above: Turn the formation through a significant angle and try again. Or just wait a while. Or if we insist on only one reading, have the center boat lead or trail the others by a significant distance.
Problems with this: The big one seems to be precision. Each boat does not get a perfectly defined opening angle. They get a rough value. So in principle resolving the ambiguities could depend on the spacing of the boats and the relative location of the buoy. But I'm not going to do any math tonight.
To make it harder: Allow that the buoy might be sunken. Now the pairwise ambiguities are the intersections of cones instead of discrete points.
So the three boats are in a line, right? With a single reading, a single boat gets no information because the boats don't know the true frequency of the buoy. – Mark Eichenlaub May 14 '11 at 19:15
Ah...I missed that. We can add a boat riding at anchor to solve that. – dmckee♦ May 14 '11 at 21:33
If all the boats reverse direction and follow their path back, the true frequency can be read as the mean of forward/back at any point of the path. – Georg May 15 '11 at 11:15
@Georg I agree, but the problem statement was for each boat to make a single reading. It's a bit contrived, to be sure. I just thought someone might have fun working it out within those constraints. – Mark Eichenlaub May 15 '11 at 11:54
Ahh, I did not see that single reading. Of course that makes rubbish from my answer. – Georg May 15 '11 at 12:25
Three boats is the minimum. And I don't think even that is enough in the general case, if one of them is not moving (i.e., dedicated to directly solving the buoy frequency).
Using just two boats with Doppler lines will result in two lines of position for each boat. The intersection of these lines from two boats will result in 4 possible positions. You need a third boat to resolve the ambiguity. Even using a third boat, there are special cases where its LOPs would intersect 2 of the intersections of LOPs from the first two boats; but it would not be likely.
I would use 3 boats, which have traveled some distance along rays separated by 120 degrees and radiating from the same point. Doppler shifts could be used to qualitatively isolate the buoy to the 1/3 of the ocean directly away from the boat with the lowest received frequency. The actual buoy frequency could also be bracketed to a very narrow range. From there it is fairly simple to solve for the actual frequency. Proceed from there to draw the LOPs from each boat. The two boats whose paths bound the wedge of water containing the buoy would have LOPs defining 1 possible location for the buoy.
I disagree that 3 boats is not sufficient for the general case. I think we agree that 2 boats (with a 3rd to measure frequency) will give two "lines of position" extending out each boat. But these lines will only intersect at one point, not 4. They are not infinite lines, but only "half-infinite" (with one end at the boat of origin, the other at infinity). – Anonymous Coward May 16 '11 at 21:45
@Anon: Only one intersection is the best case---indeed in most cases if you choose your geometry properly. But you have to find a geometry where that is true for every possible buoy position. – dmckee♦ May 21 '11 at 17:50
http://unapologetic.wordpress.com/2007/11/16/continuity-redux/?like=1&source=post_flair&_wpnonce=9a71b83933

# The Unapologetic Mathematician
## Continuity redux
So now we have two new ways to talk about topologies: neighborhoods, and closure operators. We can turn around and talk about continuity directly in our new languages, rather than translating them into the open set definition we started with.
First let’s tackle neighborhoods. Remember that a continuous function $f$ from a topological space $(X,\tau_X)$ to $(Y,\tau_Y)$ is one which pulls back open sets. That is, to every open set $V\in\tau_Y$ there is an open set $f^{-1}(V)\in\tau_X$ which $f$ sends into $V$. But in the neighborhood definition we don’t have open sets at the beginning; we just have neighborhoods of points.
What we do is notice that a neighborhood $N$ of a point $y$ is a set which contains an open set $V$ containing $y$. In particular we can consider neighborhoods of a point $f(x)$. Then the preimage $f^{-1}(V)$ is an open set containing $x$, which is a neighborhood! So, given a neighborhood $V$ of $f(x)$ there is a neighborhood $U$ of $x$ so that $f(U)\subseteq V$. This is an implication of the definition of continuity written in the language of neighborhoods, and it turns out that we can turn around and derive our definition of continuity from this condition.
To this end, we consider sets $X$ and $Y$ with neighborhood systems $\mathcal{N}_X$ and $\mathcal{N}_Y$, respectively. We will say that a function $f:X\rightarrow Y$ is continuous at $x$ if for every neighborhood $V\in\mathcal{N}_Y(f(x))$ there is a neighborhood $U\in\mathcal{N}_X(x)$ so that $f(U)\subseteq V$, and that $f$ is continuous if it is continuous at each point in $X$.
Now, let $V$ be an open set in $Y$. That is, a set which is a neighborhood of each of its points. We must now show that $f^{-1}(V)$ is a neighborhood of each of its points. So consider such a point $x\in f^{-1}(V)$, and its image $f(x)\in V$. Since we are assuming that $V$ is a neighborhood of $f(x)$, there must be a neighborhood $U$ of $x$ so that $f(U)\subseteq V$. But then $U\subseteq f^{-1}(V)$, and since the neighborhoods of $x$ form a filter this means $f^{-1}(V)$ is a neighborhood of $x$ as well. Thus the preimage of an open set is open.
In particular, we can consider a set $T\subseteq Y$ and its interior $\mathrm{int}_Y(T)$, which is an open set contained in $T$. And so its preimage $f^{-1}(\mathrm{int}_Y(T))$ is an open set contained in $f^{-1}(T)$. Thus we see that $f^{-1}(\mathrm{int}_Y(T))\subseteq\mathrm{int}_X(f^{-1}(T))$. Finally, we can dualize this property to see that $f(\mathrm{Cl}_X(S))\subseteq\mathrm{Cl}_Y(f(S))$. That is, the image of the closure of $S$ is contained in the closure of the image of $S$ for all subsets $S\subseteq X$. Let’s now take this as our definition of continuity, and derive the original definition from it.
Well, first let’s just dualize this condition to get back to say that $f^{-1}(\mathrm{int}_Y(T))\subseteq\mathrm{int}_X(f^{-1}(T))$ for all sets $T\subseteq Y$. Now any open set $V$ is its own interior, so $f^{-1}(V)\subseteq\mathrm{int}_X(f^{-1}(V))$. But $\mathrm{int}_X(f^{-1}(V))\subseteq f^{-1}(V)$ by the definition of the interior. And so $f^{-1}(V)$ is its own interior, and is thus open.
Posted by John Armstrong | Point-Set Topology, Topology
http://mathhelpforum.com/trigonometry/139025-pythagorean-theorem-triangles-question.html

# Thread:
1. ## Pythagorean theorem and triangles Question
I need help with c) and d) and also e).
For c) they are asking for $\cos 2\theta$; however, I don't know what they did to get $1 - 2y^2$.
d) Can you please explain to me what we do with the (pi - θ)?
e) Cos is adjacent over hypotenuse. However what do we do with the inverse of cos and y?
2. Originally Posted by florx
I need help with c) and d) and also e).
For c) they are asking for 2θ however I don't know what they did to get 1 - 2y^2.
d) Can you please explain to me what we do with the (pi - θ)?
e) Cos is adjacent over hypotenuse. However what do we do with the inverse of cos and y?
c) $\cos 2\theta = \cos^2 \theta - \sin^2 \theta = 1 - 2\sin^2 \theta$, and since the triangle gives $\sin \theta = y$, this is $1 - 2y^2$.
d) $\sin (\pi - \theta) = \sin \theta$
"Why?" you ask. Imagine the unit circle if you will. Sine is positive in the first and second quadrant (between 0 and $\pi$.) Now $\theta < 90^{\cdot}$ so $\sin (\pi - \theta)$ lies in the second quadrant. By symmetry, it has the value of $\sin \theta$. Draw it out to see for yourself.
e) This one is trickier. Split it up into two parts. Let $\sin^2 (\arccos y) = \sin^2 u$ where $u = \arccos y$
Now $y = \cos u$. Looking at your triangle the only angle that satisfies this equation is $u = \frac{\pi}{2} - \theta$
So we now have $\sin^2 (\frac{\pi}{2} - \theta)$. Look at your unit circle again and you'll realise that $\sin (\frac{\pi}{2} - \theta)$ is the same thing as $\cos \theta$, so the expression equals $\cos^2 \theta = 1 - \sin^2 \theta = 1 - y^2$.
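A quick numerical spot-check of parts c), d) and e) (not from the original thread; it assumes, as the triangle in this problem gives, that $\sin\theta = y$):

```python
import math

y = 0.6                      # any value with 0 < y < 1 works
theta = math.asin(y)         # from the triangle: sin(theta) = y

print(math.cos(2 * theta), 1 - 2 * y**2)       # c)
print(math.sin(math.pi - theta), y)            # d)
print(math.sin(math.acos(y))**2, 1 - y**2)     # e)
```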
Thank you so much for your input. I understand what you said about the $\sin(\pi-\theta)$. However I still don't quite understand how to get the answer.
Taken from the back of the book,
c) 1 - 2y^2
d) y
e) 1 - y^2
Thanks again for trying to help me and I hope you can help me through this and solve this problem.
4. Originally Posted by florx
Thank you so much for your input. I understand what you said about the sin(pi-θ). However I still don't quite understand how to get the answer.
Taken from the back of the book,
c) 1 - 2y^2
d) y
e) 1 - y^2
Thanks again for trying to help me and I hope you can help me through this and solve this problem.
You know that $\sin(\pi - \theta) = \sin\theta$, right?
so, if you look at the triangle,
$\sin\theta = \frac{\text{opposite}}{\text{hypotenuse}}= \frac{y}{1} = y$
Go through this link if you still have confusions:
Trigonometric functions - Wikipedia, the free encyclopedia
http://math.stackexchange.com/questions/115689/can-the-cardinality-of-continuum-exceed-all-aleph-numbers-in-zf/115701

# Can the cardinality of continuum exceed all aleph numbers in ZF?
More precisely, is either of the following two statements consistent with ZF:
1. $2^{\aleph_0}\geq\aleph_{\alpha}$ for every ordinal number $\alpha$,
2. $2^{\aleph_0}\leq\aleph_{\alpha}\implies 2^{\aleph_0}=\aleph_{\alpha}$ for every ordinal number $\alpha$?
I'm asking mainly out of curiosity.
-
If you use `1.` instead of `(1)` the software will parse this as an ordered list automatically... – Asaf Karagila Mar 2 '12 at 16:11
@AsafKaragila: Thanks, I didn't know that. – Dejan Govc Mar 2 '12 at 16:32
## 3 Answers
Both are inconsistent with ZFC, and the first is inconsistent with ZF as well.
The axiom of infinity tells us that $\mathbb N$, the collection of natural numbers (or finite ordinals) is a set. The axiom of power set tells us therefore that every set has a power set, in particular $\mathbb N$.
We know that the size of $P(\mathbb N)=\mathbb R$ is $2^{\aleph_0}$, however this can be $\aleph_1$ or $\aleph_2$ or even higher. Without the axiom of choice it might not even be an $\aleph$ number.
So we have that $\mathbb R$ is a set, therefore so is $\mathbb R\times\mathbb R$. It therefore has as power set, from which we can take all the subsets of $\mathbb R\times\mathbb R$ which are order relations on some subset of $\mathbb R$, and we can take all those which are well ordered.
Each is order isomorphic to a unique ordinal, so mapping every relation $R$ from this collection to its ordinal is a function defined by a formula (possibly with parameters) whose domain is a set. By the axiom of replacement the image is a set of ordinals, and since isomorphism goes "both ways" this image is exactly $\{\beta\in\mathrm{Ord}\mid\ \exists f\colon\beta\to\mathbb R\text{ injective}\}$.
Since $\mathbb R$ is a set, there can only be set many ordinals with this property. The least ordinal above them is called the Hartogs number of $\mathbb R$ and it cannot be injected into $\mathbb R$; denote it by $\aleph(\mathbb R)$. We therefore have $\aleph(\mathbb R)\nleq|\mathbb R|$.
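In one formula (this is just a restatement of the preceding paragraph, not an extra claim): the Hartogs number of a set $X$ is
$\aleph(X) = \min\{\alpha \in \mathrm{Ord} \mid \text{there is no injection } \alpha \hookrightarrow X\},$
which exists because the ordinals that do inject into $X$ form a set, and which by minimality does not inject into $X$ itself.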
As for the second question, if we assume the axiom of choice then the previous argument shows that for some $\aleph_\alpha$ we have that $2^{\aleph_0}<\aleph_\alpha$, and therefore for all $\beta>\alpha$. The second assertion implies, if so, that $\aleph_\alpha=\aleph_\beta=2^{\aleph_0}$ for almost all ordinals.
However without the axiom of choice it is consistent to have that for every ordinal (finite or not) we have either $\alpha=0$ and then $\aleph_0<2^{\aleph_0}$ and otherwise $\aleph_\alpha\neq 2^{\aleph_0}$. This would make the assumption in the implication of the second assertion false and the assertion itself vacuously true.
-
@Austin: Thanks! – Asaf Karagila Mar 2 '12 at 15:36
In the "Each is order isomorphic" paragraph, I think you're arguing the wrong direction. What you need to say here is that if there is an injection from $\beta$ into $\mathbb R$, then $\beta$ is in the image of such-and-such function from $\mathbb R\times \mathbb R$ to $\mathbf{ON}$. – Henning Makholm Mar 2 '12 at 15:55
Oops -- I meant to write "from $\mathscr P(\mathbb R\times \mathbb R)$ to $\mathbf{ON}$". My point is that you appear to have swapped premise from conclusion in that paragraph -- you want to argue that every $\beta\preceq \mathbb R$ is the image of an $R$, not that every $R$ produces a $\beta\preceq \mathbb R$ (though both directions are true). – Henning Makholm Mar 2 '12 at 16:02
@Henning: I'm talking about $R\subseteq\mathbb R\times\mathbb R$ which is a well order of a subset of $\mathbb R$. I've edited that part to something which I hope is clearer. – Asaf Karagila Mar 2 '12 at 16:04
Thank you, answer accepted. – Dejan Govc Mar 3 '12 at 8:47
I am reading your statement "$2^{\aleph_0} \geq \aleph_\alpha$" to say that there is an injection from $\omega_\alpha$ into $\mathbb{R}$. (And similarly for "$2^{\aleph_0} \leq \aleph_\alpha$".)
(1) is not consistent, since given any set $X$, there are only set-many well-orderings of subsets of $X$. (Without Choice, define $WO(X) = \{ R : R\text{ is a well-ordering of a subset of }X \}$. This is clearly a set by Power Set, Pairing and Separation). Using Replacement the family of order-types of elements of $WO(X)$ is also a set. From this it follows that for every set $X$ there must be an $\alpha$ such that "$\aleph_\alpha \leq | X |$" does not hold.
(2) would only be consistent vacuously: where there is no well-ordering of the reals and so the statement $2^{\aleph_0} \leq \aleph_\alpha$ never holds. (If there is an injection from $\mathbb{R}$ into $\omega_\alpha$, then there is also an injection from $\mathbb{R}$ into $\omega_{\alpha+1}$, and clearly $\aleph_\alpha \neq \aleph_{\alpha+1}$.)
-
Thanks, this is a very concise answer. Much appreciated. – Dejan Govc Mar 3 '12 at 8:45
Being careful about quantifiers, the answer to your first question is yes, and to your second one is no. More precisely, for any ordinal $\alpha$ there is a forcing extension of the universe where (not just ZF but even ZFC holds and) $2^{\aleph_0} \ge\aleph_\alpha$. A simple modification of Cohen's original argument works. What Cohen proved is that one can "add lots of (Cohen) reals" to the universe without changing any cardinals, meaning that if an ordinal $\alpha$ was a cardinal, then it is a cardinal after adding all those reals. So, if you add at least $\aleph_\alpha$ reals, in the forcing extension the continuum has size at least $\aleph_\alpha$.
However, equality is not always possible, at least under some choice. There is a basic restriction: Koenig proved that $\kappa^{cf(\kappa)}>\kappa$ for any infinite cardinal $\kappa$. Since $(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}$, we conclude that $2^{\aleph_0}$ cannot be a cardinal of cofinality $\omega$.
Solovay proved (right after Cohen's result) that this is basically the only objection: Cohen's construction mentioned above not only "preserves cardinals" but in fact preserves all cofinalities. Solovay showed that if we start in Goedel's L, or more generally in a model of the GCH, then adding $\aleph_\alpha$ Cohen reals gives a model where $2^{\aleph_0}$ is precisely $\aleph_\alpha$, provided that $\aleph_\alpha$ did not have countable cofinality to begin with.
For example, if one adds $\aleph_\omega$ Cohen reals, then somehow one has actually added $\aleph_{\omega+1}$ reals.
If the original model is not a model of GCH, then there is an obvious additional restriction, given by monotonicity of cardinal exponentiation: If $2^\kappa >\lambda > \kappa$, then of course we cannot preserve all cardinals and make $2^{\aleph_0}=\lambda$, regardless of the cofinality of $\lambda$. But this situation does not occur if GCH holds.
I see that you meant literally to have the quantifiers the way you wrote them. The answer to the first question is now no: Hartogs proved (in ZF) that for any set $X$ there is an upper bound on the ordinals that can inject into $X$. This is easy to see: If $\alpha$ injects into $X$, then there is a subset $Y$ of $X$ and a binary relation $R$ on that subset such that $(Y,R)$ is isomorphic to $(\alpha,\in)$. But then the collection of $\alpha$ that inject into $X$ is a set (the image of the collection of such pairs $(Y,R)$).
As for the second question, the answer is no if the reals are well-orderable, because they are always larger than $\aleph_0$, since Cantor theorem does not use choice. The answer is yes (vacuously) if the reals cannot be well-ordered, and this is consistent. (Cohen's proof that choice is independent of ZF actually gives models where the reals are not well-orderable.)
-
He asked about ZF, not ZFC. The second answer is "no if and only if there is a well ordering of the continuum." – Asaf Karagila Mar 2 '12 at 15:06
The question said "either of the two following statements", so $\forall \alpha\in\mathbf{ON}: 2^{\aleph_0}\ge\aleph_\alpha$ must count as one statement, and this statement is false under ZF, per Hartogs' construction. – Henning Makholm Mar 2 '12 at 15:49
@Bruce: Thanks, even though this is not what I was asking, it still does contain useful information. – Dejan Govc Mar 2 '12 at 16:37
http://topologicalmusings.wordpress.com/2009/07/09/a-relation-is-an-equivalence-iff-it-is-reflexive-and-euclidean/

Todd and Vishal’s blog
Topological Musings
A relation is an equivalence iff it is reflexive and euclidean
July 9, 2009 in Elementary Math Problem Solving | Tags: equivalence relation, euclidean, reflexive
High-school students and undergraduates are (almost) always taught the following definition of an equivalence relation.
A binary relation $R$ on a set $A$ is an equivalence iff it satisfies
• the reflexive property: for all $a$ in $A$, $a R a$,
• the symmetric property: for all $a, b$ in $A$, if $a R b$, then $b R a$, and
• the transitive property: for all $a, b, c$ in $A$, if $a R b$ and $b R c$, then $a R c$.
However, there is another formulation of an equivalence relation that one usually doesn’t hear about, as far as I know. And, it is the following one.
A binary relation $R$ on a set $A$ is an equivalence iff it satisfies
• the reflexive property: for all $a$ in $A$, $a R a$, and
• the euclidean property: for all $a, b, c$ in $A$, if $a R b$ and $a R c$, then $b R c$.
Exercise: Show that a binary relation $R$ on a set $A$ is reflexive, symmetric and transitive iff it is reflexive and euclidean.
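For readers who enjoy checking such statements mechanically, here is a small brute-force verification of the exercise on a three-element set. It is only a finite sanity check, not a proof, and the helper names and the choice of $A=\{0,1,2\}$ are my own.

```python
# Brute-force check on A = {0, 1, 2}: a relation is reflexive and euclidean
# iff it is reflexive, symmetric and transitive. Relations are encoded as
# sets of ordered pairs; we simply enumerate all 2^9 relations.
from itertools import product

A = [0, 1, 2]
pairs = [(a, b) for a in A for b in A]

def reflexive(R):
    return all((a, a) in R for a in A)

def symmetric(R):
    return all((b, a) in R for (a, b) in R)

def transitive(R):
    return all((a, c) in R for (a, b) in R for (b2, c) in R if b == b2)

def euclidean(R):
    return all((b, c) in R for (a, b) in R for (a2, c) in R if a == a2)

for bits in product([0, 1], repeat=len(pairs)):
    R = {p for p, bit in zip(pairs, bits) if bit}
    equivalence = reflexive(R) and symmetric(R) and transitive(R)
    assert equivalence == (reflexive(R) and euclidean(R))

print("reflexive + euclidean coincides with equivalence on a 3-element set")
```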
11 comments
On why it’s called “euclidean”, see this page at the recently created nLab. The commentary and history given there (probably due to Toby Bartels; I didn’t check) is interesting.
The nLab is a wiki on general mathematics, so far written mostly by regular customers at the n-Category Café. Anyone is welcome to contribute however. Many of the articles are written by people with strong category-theoretic inclinations, which could be taken either as invitation or warning I guess but there is both a rich treasure trove already in place and infinite scope for expansion.
July 9, 2009 at 2:44 pm
watchmath
This is the first time I see this equivalence formulation. Do you know a case where it is easier to verify an equivalence relation by using this equivalent definition?
I hope you don’t mind that I do the exercise
$(\Rightarrow)$ Suppose $aRb$ and $aRc$. By the symmetric property, $bRa$. By the transitive property applied to $bRa$ and $aRc$ we have $bRc$. So it is Euclidean.
$(\Leftarrow)$ Suppose $aRb$. Since it is reflexive, then $aRc$ for $c=a$. By the Euclidean property we have $bRc$, i.e., $bRa$. Now suppose $aRb$ and $bRc$. Since it is symmetric then $bRa$. By applying the Euclidean property to $bRa$ and $bRc$ we have $aRc$.
Todd,
Thanks for that piece of info! I think there is something not quite right about the use of tags on nLab. I mean, a search for “equivalence left euclidean” on Google returns the nLab page you mentioned as the first search result, but the same page is only seen on the fourth page of search results for “equivalence euclidean”, which actually returns the blog post I just wrote as the first search result! Would you look into this (anomaly)? Perhaps introduce appropriate tags in the abovementioned nLab page.
watchmath,
If you click on the nLab page that Todd provided a link to, you may find an answer there. For example, using the alternative formulation, proving congruence (on the set of triangles in the Euclidean plane) is an equivalence relation is easy and also intuitive. However, I came across the second formulation via a study of accessibility relation in introductory modal logic.
This is kind of like the alternative conditions used to define a subgroup; a subset S of a group G is a subgroup if e is in S, and for any $x,y \in S$, we have $xy^{-1} \in S$, and the proof is analogous.
In Maclane and Birkhoff’s Algebra the following exercise for relations appears:
“A relation $R$ on a set $X$ to itself is called ‘circular’ if $x R y$ and $y R z$ imply $z R x$. Show that a relation is reflexive and circular if and only if it is reflexive, symmetric, and transitive.” [Edited LaTeX code.]
October 31, 2011 at 4:29 am
Luqing Ye
Answer:
$\Leftarrow$: $x\mathcal{R} y$ and $y\mathcal{R} z$ mean that $x\mathcal{R} z$, and this means that $z\mathcal{R} x$.
$\Rightarrow$: First prove symmetry: $x\mathcal{R} x$ and $x\mathcal{R} y$ mean that $y\mathcal{R} x$.
Then transitivity: $x\mathcal{R} y$ and $y\mathcal{R} z$ mean that $z\mathcal{R} x$. From the symmetry we know that $x\mathcal{R} z$. $\Box$
July 12, 2009 at 4:47 am
Aaron F.
Oooh, cool! Most of the math on this blog is way over my head, but I have a special place in my heart for binary relations, and another special place in my heart for alternative formulations of familiar things. So this may be my favorite Topological Musings post ever.
Aaron, I am glad you liked this post. My favorite posts are the ones written by Todd, by the way!
I’ve only begun following math blogs and studying category theory; this post on Euclidean relations was a pleasant find. I hope I can use it someday on an algebra exam! Hopefully following your blog will help me develop enough to follow some journals…
October 31, 2011 at 10:45 am
Luqing Ye
I think the Euclidean property is very intuitive as long as you regard equivalence property as a double-head arrow connecting two points.
October 31, 2011 at 10:48 am
Luqing Ye
errata:equivalence property $\rightarrow$ equivalence relation .
http://mathoverflow.net/questions/607?sort=votes

## Non-conjugate words with the same trace
Let n>=2, p a large prime, G = SL_n(Z/pZ).
If n=2, there are words that, while not conjugate in the free group, do have identical trace in G. For example, tr(g h^2 g^2 h)= tr(g^2 h^2 g h) for all g, h in SL_2(Z/pZ).
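(A quick numerical sanity check of this identity: the sketch below samples random pairs in SL_2(Z/pZ) for the arbitrarily chosen prime p = 101. It is only a test, not a proof, and all helper names are mine.)

```python
# Test tr(g h^2 g^2 h) = tr(g^2 h^2 g h) for random g, h in SL_2(Z/pZ), p = 101.
import random

p = 101

def mul(A, B):
    """Multiply two 2x2 matrices over Z/pZ."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2)]
            for i in range(2)]

def word(letters):
    """Evaluate a word given as a list of 2x2 matrices."""
    M = [[1, 0], [0, 1]]
    for X in letters:
        M = mul(M, X)
    return M

def trace(A):
    return (A[0][0] + A[1][1]) % p

def random_sl2():
    # pick a != 0 and b, c freely, then solve a*d - b*c = 1 (mod p) for d
    a = random.randrange(1, p)
    b, c = random.randrange(p), random.randrange(p)
    d = (1 + b * c) * pow(a, p - 2, p) % p
    return [[a, b], [c, d]]

for _ in range(1000):
    g, h = random_sl2(), random_sl2()
    assert trace(word([g, h, h, g, g, h])) == trace(word([g, g, h, h, g, h]))
print("tr(g h^2 g^2 h) == tr(g^2 h^2 g h) on all samples")
```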
Question: does the same happen for n>=3?
Could it even be possible that there are words w_1, w_2 that are not conjugate even in G=SL_n(Z/pZ), yet always have the same trace:
tr(w_1(g_1,g_2,...,g_k)) = tr(w_2(g_1,g_2,...,g_k)) for all g_1,...,g_k in SL_n(Z/pZ)?
This would be extremely helpful.
Actually, I just need a weaker statement:
Wild guess.- Let g, h be elements of SL_n(K), n>2. There is a constant k (which may depend on n but not on K) such that there are two elements a, b of the ball ({g,h,g^{-1},h^{-1},e})^k for which (1) tr(a)=tr(b) and (2) a is not conjugate to b (for g and h generic).
Does anybody have a clue as to whether this is or isn't true?
-
It would be easy to check this in Magma for small values of n and p. – David Zureick-Brown♦ Oct 15 2009 at 17:08
You mean for words up to a given length? – H A Helfgott Oct 16 2009 at 10:38
## 5 Answers
I don't think you asked the question you intended. Even though this question is old, I'll say a bit:
For any fixed $n$ and $p$, there are a finite number of homomorphisms $F_2 \rightarrow SL_n(Z/pZ)$.
Hence, there are only a finite number of possibilities for the trace of a group element as a function of the representation. Therefore, there are many duplicates.
I think a better question is to ask whether there are pairs $(g_1, g_2)$ of non-conjugate elements of $F_2$ such that for all primes $p$ and all homomorphisms $F_2 \rightarrow SL_n(Z/pZ)$ the traces are equal. In this form, you can pass to a limit $p \rightarrow \infty$ and conclude that if nonconjugate elements can be distinguished by traces for an infinite sequence of $SL_n(Z/pZ)$, they could be distinguished by a homomorphism to $SL_n(\mathbb C)$. Conversely, if there are no trace identities in $SL_n(\mathbb C)$, then there are no trace identities true in all $SL_n(Z/pZ)$, by finding $Z/pZ$ quotients of the ring of coefficients.
Martin Kassabov has been interested in the latter question, and in discussions he and I have had, we've both come to the opinion that there are probably no trace identities for $SL_n(C)$ when $n > 2$, but it's not easy to find a proof. One possible strategy is to first characterize all trace identities in $SL_2$, and then construct representations in $SL_3$ where they break down. This is interesting to me in any case because of its meaning in 2 and 3-manifold topology -- the trace identities give collections of distinct elements in $\pi_1$ that are forced to have equal length in any hyperbolic structure.
A weaker question is whether the characteristic polynomials for representations in $SL_n$ can distinguish conjugacy classes in $F_2$.
-
It seems like this paper might have interesting things to say about trace identities in SL_n(C) arxiv.org/abs/math.RT/9806016 – Peter Samuelson Sep 12 2010 at 16:55
Yes, I did mean n was fixed and p goes to infinity. – H A Helfgott Sep 14 2010 at 1:49
Since James seems to have found the group theory lemma we need, I'm going to rewrite this post to be clearer about what it proves.
Theorem: If $g$ and $h$ are two non-conjugate elements of the free group $F_k$, then there is some $n$, some $p$, and a map $r\colon F_k \to GL_n(\mathbb{Z}/p)$ such that $\operatorname{Tr} r(g) \neq \operatorname{Tr} r(h)$.
Proof: By James's post, there is a finite group $G$ and a map $a\colon F_k \to G$ so that $a(g)$ and $a(h)$ are not conjugate. As characters distinguish conjugacy classes, there is a cyclotomic field $K=\mathbb{Q}(e^{2 \pi i/q})$ and a representation $b\colon G \to GL_n(K)$ such that $\operatorname{Tr} b(a(g)) \neq \operatorname{Tr} b(a(h))$. Let $p$ be a prime which is (1) large enough not to appear in the denominators of any of the matrices occurring in $b(G)$, (2) congruent to $1$ modulo $q$, and (3) large enough not to divide $\operatorname{Tr} b(a(g)) - \operatorname{Tr} b(a(h))$. By condition (2), there is a prime $P$ of $K$, living over $p$, such that $O(K)/P = \mathbb{Z}/p$. By condition (1), we can reduce the entries of the matrices of $b(G)$ modulo $p$. This gives us a map $c \colon G \to GL_n(\mathbb{Z}/p)$ which is morally the composite of $G \to GL_n(K) \to GL_n(O(K)/P)$. (The second map of this sequence does not actually exist.)
Composing $a$ and $c$ gives the map $r$. Condition (3) insures that $\operatorname{Tr} r(g) \neq \operatorname{Tr} r(h)$. QED
Note that I have not solved the following more difficult question: given $n$, do there exist any pair of nonconjugate elements $g$ and $h$ in $F_k$ so that $\operatorname{Tr} r(g) = \operatorname{Tr} r(h)$ for all maps $r\colon F_k \to GL_n(\mathbb{Z}/p)$.
-
Restatement: Does the canonical map from F_k to its profinite completion merge any conjugacy classes? – S. Carnahan♦ Oct 16 2009 at 2:42
I'm not sure I follow the reduction to the assertion about non-conjugate elements in free groups. However, it is true that, if two members of a free group are not conjugate, then there is a finite quotient in which their images are not conjugate. This property is known as conjugacy separability, and it is well known that free groups are conjugacy separable. There are a number of ways to prove this. One can, for example, reduce to finitely generated nilpotent groups via the lower central series in the free group (which is residually a torsion free nilpotent groups) and then invoke the result of Formanek/Remeslennikov that f.g. torsion-free nilpotent groups are conjugacy separable. Alternatively, free products of conjugacy separable groups are conjugacy separable (Remeslennikov, 1971), and infinite cyclic groups obviously are.
-
I don't think Critch's reply above answers Harald's question; it seems to presume that the map F_2 -> SL_n(Z/pZ) factors through a chosen inclusion of SL_2(Z/pZ), while Harald wants pairs of elements in F_2 (or, more generally, F_k) which have the same trace after applying ANY homomorphism F_k -> SL_n(Z/pZ).
EDITED to comment on David's answer below:
"Also, if there is any finite quotient G of F_k, in which the two elements stay nonconjugate, then there is some representation of G in which they have different traces."
Yeah, but not an n-dimensional representation.
-
Right. What I meant to say was that the question for all SL_n(Z/p) is equivalent to the question for all finite groups. – David Speyer Oct 16 2009 at 5:32
Exactly. I want two words w(g,h), w'(g,h) (say) such that tr(w(g,h))=tr(w'(g,h)) for all g, h in SL_n(Z/pZ), not just for some g,h in SL_n(Z/pZ) (such as, say, g and h in a copy of SL_2(Z/pZ) contained in SL_n(Z/pZ)). – H A Helfgott Oct 16 2009 at 10:38
Yes, if I understand your question correctly. Consider `SL_2(Z/pZ)` included in `SL_n(Z/pZ)` as those elements with a 2x2 block in the top left, 1's on the remaining diagonal, and 0's elsewhere. Also consider the free group `F_2` on `g_1, g_2` included into the free group `F_n` on `g_1, g_2, ..., g_n`.
Two elements of `F_2` are conjugate in `F_n` iff they are conjugate in `F_2` (since if you conjugate by a word involving some `g_i, i>2`, then you land outside `F_2`). So whatever pairs of elements work in your `n=2` example also work for n arbitrary: they will have the same trace in `SL_n(Z/pZ)`, namely [their trace in `SL_2(Z/pZ)`] + [n-2], the extra diagonal elements we added, but the words expressing them won't be conjugate in `F_n` because they weren't in `F_2`.
-
I thought he intended to require that the two traces be equal for all maps from F_g to SL_n(Z/p). That matches his example that Tr(g*h*g^2*h^2) = Tr(h^2*g^2*h*g) in SL_2. – David Speyer Oct 16 2009 at 2:05
http://physics.stackexchange.com/questions/tagged/gravity?sort=unanswered&pagesize=30

# Tagged Questions
Gravity is an attractive force that affects and is effected by all mass and - in general relativity - energy, pressure and stress. Prefer newtonian-gravity or general-relativity if sensible.
2answers
459 views
### Ski Jumper's vertical velocity after 246.5m record?
What would be the vertical velocity of this ski jumper (ski flyer), after he first touches down, after he breaks the record with a 246.5m jump? What g force would he experience as he slows down? ...
2answers
73 views
### What is a “gravitational cell”?
I am not a physicist, and I don't understand the details of electromagnetism. Anyhow, I was looking for how the batteries work in Google. So, I came across this article: "How batteries work: A ...
2answers
137 views
### General Relativity & Kepler's law
According to Kepler's law of planetary motion, the earth revolves around the sun in an elliptical path with sun at one of its focus. However, according to general theory of relativity, the earth ...
1answer
601 views
### String theory and trace anomaly in semiclassical gravity?
what does string theory have to say about the trace anomaly in the expectation value of the stress energy tensor of massless quantum fields on a curved background and its interpretation as the ...
1answer
384 views
### Gravitational attraction of triangles
Suppose I have two triangles relatively close together (so they probably shouldn't really be treated as point masses). I want to calculate the gravitational force (and potentially torque?) generated ...
1answer
59 views
### Physics of a cold and hot top
Imagine two tops made up of exactly one thousand atoms. One is kept at 4 degrees Kelvin, the other at room temperature. 1. Would they weigh the same given an arbitrarily precise scale in the Earth's ...
1answer
65 views
### Understanding bending light beam perpendicular to motion
I'm just reading a book about gravity. An example it gives is a spaceship accelerating. A beam of light travelling at right angles to the direction of movement of the spaceship enters it via a small ...
1answer
13 views
### Can we build a synthetic event horizon?
If we imagine ourselves to be a civilization capable of manipulating very heavy masses in arbitrary spatial and momentum configurations (because we have access to large amounts of motive force, for ...
1answer
79 views
### Initial position and velocity of rocket to escape earth's gravity
I'm trying to numerically simulate a spacecraft trajectory between earth and mars. I already wrote the solar system model where the sun is at the origin of the x,y,z plane Earth and Mars are orbiting ...
1answer
199 views
### Special relativity paradox and gravitation/acceleration equivalence
One of the features of the black hole complementarity is the following : According to an external observer, the infinite time dilation at the horizon itself makes it appear as if it takes an ...
1answer
79 views
### Energy needed to lift and bring down an object
A mass of 0.5 Kg needs to be moved from point A to another point (B) which is 1 meters above point A. The time for this movement should be 0.2 seconds, then the mass is kept at position B for another ...
1answer
58 views
### Geodetic model for numerical weather prediction
What is the implicit geodetic model used by common numerical weather prediction products? WGS84? NAVD88? For example, NOAA's Rapid Refresh (RAP), North America Mesoscale (NAM), and Global Forecast ...
1answer
102 views
### Do all black holes spin in the same direction?
My question is as stated above, do all black holes spin the same direction? To my knowledge, the spin in the direction of the spin of the matter that created them. Another similar question was asked ...
1answer
72 views
### The effects of heat on gravitational fields
In boiling soapy water, globs of soap coalesce as the temperature increases to boiling. Does this mean that temperature increases the gravitational pull of bodies?
0answers
86 views
### Equation of state of cosmic strings and branes
I'm sure these are basic ideas covered in string cosmology or advanced GR, but I've done very little string theory, so I hope you will forgive some elementary questions. I'm just trying to fit some ...
0answers
91 views
### Positivity of Total Gravitational Energy in GR
I read the following statement in the introduction to an article: Over the last 30 years, one of the greatest achievements in classical general relativity has certainly been the proof of the ...
0answers
156 views
### Penrose Conformal diagram for flat 2-dim Lorentz space-time
I have the following metric $$ds^2 ~=~ Tdv^2 + 2dTdv,$$ defined for $$(v,T)~\in~ S^1\times \mathbb{R},$$ e.g. $v$ is periodic. This is the according Penrose diagram: Question 1) Is the ...
0answers
148 views
### Why Brown York stress tensor == dual field theory's energy momentum tensor?
From the AdS/CFT dictionary, how to argue that the Brown York stress tensor for a gravity system near the boundary is exactly the same as the energy momentum tensor of the dual field theory? In the ...
0answers
55 views
### Gravitational effects and metric spaces
Could somebody please explain something regarding the Nordstrom metric? In particular, I am referring to the last part of question 3 on this sheet -- about the freely falling massive bodies. My ...
0answers
78 views
### Materials with different gravitomagnetic permeability?
If you start with general relativity, and assume small perturbations around a nearly flat metric, it is possible to obtain linearized equations of gravity that look a lot like Maxwell's equations, ...
0answers
177 views
### Alcubierre warp bubble effect on gravity and space
Hello I read this question: Faster-than-light communication using Alcubierre warp drive metric around a single qubit? and I had this question: What kind of impact would an alcubierre warp bubble have ...
0answers
237 views
### How does lifting an object effect its entropy
I have figured out that: When photons leak out from a container, the entropy of the photon collection increases, because each photon has a different escape time. Photons that have leaked out from a ...
0answers
86 views
### Finite or ∞ set of masses & ∃ gravity center?
Any finite & non empty set of masses has a computable center of gravity: $\vec{OG} = \frac{\sum_i m_i \vec{OM}_i}{\sum_i m_i}$ . Does the contrapositive permits to conclude that a mass system ...
0answers
135 views
### Early stages of a computational model for object movement charting
We would like to build a computational model capable of accurately predicting the position of any object inside a chamber at any given time. Inside the model we would have a number of smaller ...
0answers
49 views
### Dirichlet's work on gravity in non-Euclidean space?
In the book The Norton History of Astronomy and Cosmology by the late John North I have found the following statement (page 514): "The German mathematician Lejeune Dirichlet studied the law of ...
0answers
159 views
### include the stretch of the spring own weight in potential energy for spring pendulum?
we are given a problem with spring with its own mass $m$. I am confused how to set up the PE term in the Lagrangian. Assume the spring has length of $L_{0}$ when it is laying on a table horizontally. ...
0answers
101 views
### What results from particle collision would ensure the existence of the graviton?
I understand that particles are smashed together to try to enable us to detect some sort of graviton presence but we can't actually detect a graviton due to the fact that it 'exists' in some extra ...
0answers
232 views
### Why does a coin falls faster when it's flipping as well?
From my experiments with measuring how fast a coin falls, I have consistently measured a faster falling rate for a coin that flips as it falls. As an example, a coin dropping on its edge from height ...
0answers
137 views
### Calculation of a Gravity Resonance Keyhole
Can anyone describe the mathematics behind the calculation of a resonance keyhole (for a two-body model)? It seems like the size and position of the keyhole should be a function only of mass and ...
0answers
19 views
### The gravitational random walk
When we shoot a single photon out into space, the chance that it will eventually return to our vicinity from a different direction is vanishingly small, even though spatial curvature exists due to the ...
0answers
37 views
### When spacetime expands to the point where galaxy clusters are not observable, will there by any interaction?
It's my understanding that in a few billion years, clusters of galaxies won't be able to directly observe one another due to the expansion of spacetime overcoming gravity between those clusters. ...
0answers
60 views
### What is the critical mass of a planet to have an atmosphere like Earth's?
Small planets/orbits like Moon cannot have atmosphere because of their masses. They don't have enough gravity to hold an atmosphere. Then what is the critical mass that makes enough gravity to keep an ...
0answers
57 views
### Thermal gravitational radiation and its detection
To my poor knowledge on the topic, the gravitational waves that are most likely to be detected by LIGO or other experiments do not have thermal spectrum. But I'm not certain. I know that Hawking's ...
0answers
44 views
### What other factors effect the ability of a propigating field of pressure in a gaseous medium to predict gravitational collapse?
What other factors effect the ability of a propigating field of pressure in a gaseous medium to predict gravitational collapse? Will the only factor influencing the Speed of Sound in this medium be ...
0answers
77 views
### How complete is our understanding of general-relativistic solutions for extremal black holes?
Putting aside quantum mechanics (or at least putting aside the question of fermions), is our knowledge of extremal General-Relativity solutions good enough that we would be able to rule out a ...
0answers
183 views
### How to model an accelerometer measurements on a car wheel?
I am working on kinematically modelling an accelerometer on a car wheel. When working on the initial conditions, I am confused whether or not I should use the gravitational acceleration since there ...
0answers
256 views
### Quantization of Gravitational Field: Quantization conditions
I'm begining to study Quantization of field with the second quantization formalism. I've studied phononic field, electromagnetic field in the vacuum and a generic relativistical scalar field. I ...
0answers
46 views
### Flung out of the galaxy
I watched a video by Dr. Michio Kaku, in which he states a theory about Dark Matter. This theory says that Dark Matter could be just ordinary matter from another parallel universe, which would be ...
0answers
45 views
### Water Stream from a Horizontal Surface
If water was projected from a flat surface where gravity was equal all over the surface. What would happen when the water fell in on itself? The water is in a continuous stream and is perfectly ...
0answers
105 views
### Did force of gravity cause macroevolution?
Did big bang create gravity? What role gravity is assumed to have played in the formation (starting from the big bang) of large structures of our universe and what other important physical mechanisms ...
0answers
66 views
### How to calculate the mass of the Cygnus X-1 black hole?
I have received a question about how to calculate the mass of Cygnus x-1 (black star). Since we are able to find the the mass through this Wikipedia page I know that we can find the mass through ...
0answers
135 views
### What is the pressure in a vertical pipe that has moving fluid through it?
I have a system like this When the valve is closed, pressure along any point along the thin tube may be found thru pgh. 1. What about when the valve is opened? 2. Does the top of the jar need to be ...
0answers
71 views
### Where to go to minimize tidal forces?
Suppose you design an experiment where you need to minimize the effects of tidal forces. Where would you go? There are a few possibilities, and the choice depends on how much effort you are willing to ...
0answers
66 views
### If one were to move withing a tesseract, would the direction of gravity be different within each cell
If you unfold a tesseract into 3D space you get a cross shape (basically). Animated, it looks like the bottom most cube becomes inverted from it's 4D orientation. Would that mean that the pull of ...
0answers
50 views
### Concave Sphere Habitat characteristics
I'm writing a short story set in an artificial planet-sized sphere with an ecosystem in its inner surface, whose "gravity" is created through spinning. Energy sources aside, what other interesting ...
http://www.physicsforums.com/showthread.php?t=548513

Physics Forums
## Induced current in conductor moving circularly in constant B-field
1. The problem statement, all variables and given/known data
A light bulb with resistance R is attached to a metal rod which is rotating around the point O in the figure. The metal rod is in contact with an electrical conductor which is a part of a circle with radius d. The metal rod and the circular electrical conductor form a closed circuit. The rod now rotates with angular velocity $\omega$ through the constant magnetic field pointing out of the paper.
a)
Find an expression for the induced current through the light bulb, expressed in terms of $\omega$, d, B and R.
2. Relevant equations
IR=vBr
where v is the tangential speed of the rod perpendicular to the B-field (every speed is perpendicular to the B-field, since we are looking at a plane) and r is the length of the rod moving at this speed.
I=$\frac{\omega Br^{2}}{R}$
v substituted for $\omega r$
3. The attempt at a solution
Since every part of the rod is moving with a different linear speed, we should integrate the RHS from 0 to d with respect to r, and that should be it, right?
I get:
I=$\int^{d}_{0}\frac{\omega Br^{2}}{R}dr$
I=$\frac{d^{3}B\omega}{3R}$
But when I look up the solution it says:
I=$\frac{Bd^{2}\omega}{2R}$
so who's right?
Edit: Problem solved!
Maybe I should add that RI=vBr is derived from Faraday's law of induction, stating that the induced EMF is equal to the closed path integral of E+v X B with respect to l (the path of the circuit), and Ohm's law stating that the EMF is equal to RI when looking at the entire circuit. I only integrate over the rod since this is the only thing moving relative to the B-field. The cross product in Faraday's law reduces to the magnitudes of v and B multiplied, since they are always perpendicular to each other in this problem and since I only need to find the magnitude of the EMF.
Never mind, I solved it! After reading my last post over, I realized that I should use Faraday's law of induction as the more general law, rather than IR=vBl, which is a solution to Faraday's law in a particular situation. I then obtained the same answer as in the solutions sheet.
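For completeness, here is the integral that this approach gives (a sketch of the standard motional-EMF computation, written out as a check against the solution sheet): each element $dr$ of the rod at distance $r$ from O moves with speed $v = \omega r$ and contributes $d\mathcal{E} = vB\,dr = \omega B r\,dr$, so
$\mathcal{E} = \int^{d}_{0} \omega B r\, dr = \frac{B \omega d^{2}}{2}$, and hence $I = \frac{\mathcal{E}}{R} = \frac{B \omega d^{2}}{2R}$.
The earlier attempt integrated $\frac{\omega B r^{2}}{R}$ instead of the EMF contributions, which is where the extra factor of $r$ (and the resulting $d^{3}/3$) came from.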
http://mathoverflow.net/questions/tagged/computer-science

## Tagged Questions
1answer
140 views
### What structure has been found for functions with this relationship.
Given $f$ and $g$ $\forall x y. f(x) = f(y) \Longrightarrow f(g(x)) = f(g(y))$ Or equivalently $ker\ f \subseteq ker\ (f \circ g)$. Note: if $f$ is injective then this holds fo …
0answers
92 views
### An interesting version of the problem “balls into bins”
Consider n people, each has k identical balls. Each people choose k different bins from m bins, constrained by the condition that there are no two people choose exactly the same k …
0answers
178 views
### Splay trees and Thompson’s group $F$
( I apologize for only indicating some easy to find references, but new users are not allowed to link more than five). This is very speculative, but: Question: Is there a reformul …
2answers
109 views
### Smallest base to reach partial recursive functions as a closure of unbound search
It is customary to define the class of partial recursive functions by taking the set of primitive recursive functions $PR$ and taking closure over unbound search operation. Do we …
0answers
63 views
### Schönhage’s SMM with only one instruction
It is possible to implement $\lambda$-calculus in Schönhage's storage modification machine using an infinite set of nodes and one single program consisted exclusively of (about hun …
6answers
891 views
### Giving $Top(X,Y)$ an appropriate topology
I am not sure if its OK to ask this question here. Let $Top$ be the category of topological spaces. Let $X,Y$ be objects in $Top$. Let $F:\mathbb{I}\rightarrow Top(X,Y)$ be a fu …
2answers
383 views
### Turing-complete primitive blind automata
Let $N$ be the set of natural numbers, $S$ be the set of finite binary sequences, and $Q = [N \rightarrow N] \times [N \rightarrow N],$ where $[N \rightarrow N]$ is the set of al …
1answer
100 views
### Distance between vertices in a vertex transitive graphs. [closed]
Can anybody help me in finding out the distances between vertices in a vertex transitive graphs. Is there any specific formula to calculate distance between vertices in this graph. …
0answers
107 views
### Hypothesis: interaction-based model for maximum consistent theories
We are looking for counter-examples to the following Hypothesis. In interaction calculus \$\langle \varnothing\ |\ \Gamma(M, x) \cup \Gamma(N, x)\rangle \downarrow \langle \varnoth …
0answers
74 views
### Is it possible to implement η-reduction in interaction nets?
There are several ways to encode λ-terms in interaction nets; for instance, using the original optimal algorithm by Lamping, or compiling λ-calculus into interaction combinators. H …
1answer
160 views
### Deriving the fundamental equation (with regards to computer vision)
I'm having a hard time understanding how a few equations are being derived. So the fundamental equation is an equation that relates corresponding points in stereo images. Anyway, t …
1answer
118 views
### Grzegorczyk-hierarchy, growth-rate and functions with finite image
Grzegorczyk-hierarchy divides primitive recursive functions in distinct classes with respect to their growth-rate. It seems that the higher we go the hierarchy, the more tools we h …
2answers
534 views
### How to draw Archimedean-Galileo spiral?
It is known that some plane curves can be drawn with a tool. For instance, I heard at a web site that Archimedes created his spiral in the third century B.C. by fooling around with …
2answers
170 views
### Rigorous numerics for maxima and minima (one variable)
Let $f:\mathbb{R}_0^+\to \mathbb{R}$ be defined by some combination of the four basic operations and square roots. (The argument of square-roots is assumed is to be non-negative, a …
1answer
107 views
### Reducing the error of Algorithms by assigning variables formulas instead of values
Let me first give the intuition for my question: Suppose that you want to use a ruler to mark $n$ points in a line on a page, with 1 cm distance between neighbor points. There are …
http://www.physicsforums.com/showthread.php?t=124550

Physics Forums
## Millennium Problems
This mainly goes out to the professional mathematicians, but what would be your assessment of the Millennium Problems?
In the sense of which would be the most difficult, which might be solved first, the current status of the problems within the community itself.
Not necessarily all of them, maybe the one or two that pertain to your area.
I ask because I was listening to an algebraist and topologist at my university talk about the Poincaré conjecture and found the discussion fascinating.
In essence I'm looking for a discussion on the problems, focusing on their current status and opinions of mathematicians in the relevant fields of what their future will be.
Have you read the book "The Millennium Problems" by Keith Devlin? It does a great job of summarizing the problems for the non-expert. From what I remember, Devlin claims that the P vs. NP problem is the "easiest" by which he means that it is the most likely to be solved by a non-professional, however unlikely. As for the most difficult, he spends a lot of time on the Hodge Conjecture explaining to the reader that they will basically never understand the details, though he does make an attempt. If you like this book, I would like to recommend "Prime Obsession" by John Derbyshire also. It is an in depth look at the history and mathematics behind the Riemann Hypothesis.
Read it a while ago, it's good, but I'd prefer a more in-depth look at the problems. Unfortunately I can't seem to find any literature dealing with the problems and their place in the community. (Of course there is no problem finding literature concerning the problems themselves.)
I seem to remember reading somewhere (possibly in the book itself) that Devlin's "The Millennium Problems" book was to be the forerunner of a larger work describing the problems at a more advanced level. However it looks like the later work never materialised.
Quote by chronon I seem to remember reading somewhere (possibly in the book itself) that Devlin's "The Millennium Problems" book was to be the forerunner of a larger work describing the problems at a more advanced level. However it looks like the later work never materialised.
That's true, it mentions it in the foreword. I went looking for the larger tome but never found it. Shame, because it would have made for a great read.
The actual problem descriptions themselves are worth sitting down with and reading, very reminiscent of the Hilbert problems in how they are stated although you can see that it's a different generation.
Good for a contrast between 19th and 20th century mathematics, or at least I found so.
Might as well update this. The Poincaré conjecture may well be solved: News Article.
Quote by BSMSMSTMSPHD If you like this book, I would like to recommend "Prime Obsession" by John Derbyshire also. It is an in depth look at the history and mathematics behind the Riemann Hypothesis.
More accurately it's an in depth look at the history and a simplistic look at a tiny part of the mathematics. You couldn't expect much more given the target audience. I found it a nice read though. Edward's Riemann Zeta Function text is one of the more accessible introductions to the mathematics, and follows the historical development well, explaining Riemann's paper (there's a must read translation in the appendix).
A nice survey article:
http://www.ams.org/notices/200303/fea-conrey-web.pdf
My money goes on the Riemann Hypothesis, not because I understand it but because it made it to the top of the list, carrying over from the 19th century list into the 20th.
From what I remember, Devlin claims that the P vs. NP problem is the "easiest" by which he means that it is the most likely to be solved by a non-professional, however unlikely.
From what I understand, the latest research suggests a Cantorian style revolution is needed before P=?=NP can even be touched. The mathematics that we have simply isn't sophisticated enough to get near it.
The Riemann Hypothesis could be easy for a physicist to solve if the Hilbert-Polya operator is constructible... that is, if it is equivalent to a Hamiltonian with $$\zeta(1/2+iH)|n\rangle=0$$ $$H|n\rangle=E_n |n\rangle$$
http://arxiv.org/ftp/math/papers/0607/0607095.pdf A very curious paper on RH Thermodynamics and Chebyshev explicit formula...
Quote by Dominic Mulligan From what I understand, the latest research suggests a Cantorian style revolution is needed before P=?=NP can even be touched. The mathematics that we have simply isn't sophisticated enough to get near it.
I think they call it the "easiest" problem because it's probably the easiest to understand and can be tackled by nearly anyone. This doesn't mean that the solution is easy, which it certainly isn't, but I think I could describe the problem in a short post such that anyone at all would understand.
http://math.stackexchange.com/questions/tagged/distribution-theory+probability-theory

# Tagged Questions
1answer
56 views
### expectation of a function of two dependent random variables
I have a formula that I believe it's right but I do not know how to prove it. Could you please give me some arguments or references to show that. Let $\{X_t,t\geq0\}$ be a gamma process. Let $\tau_M$ ...
1answer
63 views
### Tail bound for hypergeometric distribution
I am looking for a reference (book) for the tail bound for the Hypergeometric distribution. I know there is a nice paper by Skala (2009) but its unpublished. I am looking for a book which would be a ...
2answers
164 views
### How can I calculate the CDF of this random variable?
$X_1$, $X_2$, $X_3$ are random variables distributed following non-identically independent exponential distribution. The PDF $X_i$, $f_{X_i}(x)=\frac{1}{\Omega_i}\exp(\frac{x}{\Omega_i})$, ...
2answers
92 views
### How can I prove this inequation $\Pr\{X+Y<t\} \le \Pr\{X<t\} \Pr\{Y<t\}$
Could you please help me to prove the inequality probability as follows: $\Pr\{X+Y<t\} \le \Pr\{X<t\} \Pr\{Y<t\}$ where $X$ and $Y$ are non-negative independent random variables with common ...
http://physics.stackexchange.com/questions/37912/what-is-the-easiest-way-to-stop-a-star/37962

# What is the easiest way to stop a star?
On long enough cosmological time scales, hydrogen and helium nuclei will become scarce in the Universe. It seems to me that any advanced civilisations that might exist in that epoch would have the motivation to try and prevent the stars from using them up, in order to burn the fuel more slowly and extract a greater proportion of the energy as usable work.
One might say that, on time scales measured in trillions of years, the stars are an unsustainable use of the universe's fuel. This question is about whether such civilisations would have the means to do something about it. My questions are:
1. What would be the most energy-efficient way (using known physics) to blow apart a star or otherwise prevent or greatly slow the rate at which it performs fusion? We're assuming this civilisation has access to vast amounts of energy but doesn't want to waste it unnecessarily, since the aim is to access energy from the hydrogen the star would have burned. In order for this to be worthwhile, the energy gained from doing this would have to be substantially more than the energy the process takes.
2. What would be the astronomical signature of such an activity? If it was happening in a distant galaxy, would we be able to detect it from Earth?
-
I realise this question is speculative, but hopefully it has enough of a basis in "real physics" to be on topic. – Nathaniel Sep 21 '12 at 9:02
Throw a wet towel over it. – Killercam Sep 21 '12 at 9:22
I don't understand what you mean by "I'm concerned that the stars are using up hydrogen nuclei at an unsustainable rate.". – user12345 Sep 21 '12 at 10:28
@user16307 it was kind of a joke (but only kind of). Over very very very long time scales, the stars use up H nuclei by turning them into thermal radiation. If you were a very advanced galactic-scale civilisation, you might get concerned about that after a few tens of billions of years, because you'd realise that your civilisation could live for a lot longer if that H wasn't being used up so inefficiently. – Nathaniel Sep 21 '12 at 11:53
## 5 Answers
Burning (and fusion) is "unsustainable" by definition because it means to convert an increasing amount of fuel to "energy" plus "waste products" and at some moment, there is no fuel left.
I am not sure whether the word "unsustainable" was used as a joke, a parody of the same nonsensical adjective that is so popular with the low-brow media these days, but I have surely laughed (because it almost sounds like you are proposing to extinguish the Sun to be truly environment-friendly). The thermonuclear reaction in the Sun has been "sustained" for 4.7 billion years and about 7.5 billion years are left before the Sun goes red giant. That's over 10 billion years – many other processes are much less sustainable than that. More importantly, there is nothing wrong about processes' and activities' being "unsustainable". All the processes in the real world are unsustainable and the most pleasant ones are the least sustainable, too.
But back to your specific project.
When it comes to energy, it is possible to blow a star apart without spending energy that exceeds the actual thermonuclear energy stored in the star. Just make a simple calculation for the Sun. Try to divide it into two semisuns, each with a mass of $10^{30}$ kilograms. The current distance between the two semisuns is about $700,000$ kilometers, the radius of the Sun. You want to separate them to a distance where the potential energy is small, comparable to that at infinity.
It means that you must "liberate" the semisuns from a potential well. The gravitational potential energy you need to spend is $$E = \frac{G\cdot M\cdot M}{R} = \frac{6.67\times 10^{-11}\times 10^{60}}{700,000,000} = 10^{41}\,{\rm Joules}$$ That's equivalent to about $10^{24}$ kilograms of matter (roughly fifteen lunar masses, or a sixth of the Earth's mass) completely converted to energy via $E=mc^2$, or to the thermonuclear energy from fusing a few dozen Earth masses of hydrogen (fusion releases only about 0.7% of the rest mass).
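A minimal back-of-envelope sketch of the same numbers (an illustration, not part of the original answer); the constants are rounded and the 0.7% figure is the standard mass fraction released by hydrogen fusion.

```python
# Rough reproduction of the order-of-magnitude estimate above.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_half = 1.0e30      # kg, each "semisun"
R_sun = 7.0e8        # m
c = 3.0e8            # m/s
M_earth = 5.97e24    # kg

E_separate = G * M_half**2 / R_sun              # ~1e41 J
mass_equivalent = E_separate / c**2             # ~1e24 kg of rest mass
hydrogen_to_fuse = E_separate / (0.007 * c**2)  # fusion frees ~0.7% of mc^2

print(f"E to separate the halves : {E_separate:.2e} J")
print(f"mass-energy equivalent   : {mass_equivalent:.2e} kg")
print(f"hydrogen to fuse         : {hydrogen_to_fuse:.2e} kg "
      f"(~{hydrogen_to_fuse / M_earth:.0f} Earth masses)")
```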
You may force the Sun to do something like the "red giant" transition prematurely and save some hydrogen that is unburned. To do so, you would have to spend an amount of energy comparable to fusing a few dozen Earth masses of hydrogen.
But of course, the counting of the energy which was "favorable" isn't the only problem. To actually tear the Sun apart, you would have to send an object inside the Sun that would survive the rather extreme conditions over there, including 15 million degrees Celsius and a central pressure of a few hundred billion atmospheres. Needless to say, no solid can survive these conditions: any object based on atoms we know will inevitably become a plasma. A closely related fact is that ordinary matter based on nuclei and electrons doesn't allow for any "higher-pressure" explosion than the thermonuclear one, so there's nothing "stronger" that could be sent to the Sun as an explosive to counteract the huge pressure inside the star.
One must get used to the fact that plasma is what becomes of anything that tries to "intervene" in the Sun – any intruder would be quickly devoured and the Sun would restore its balance. The only possible loophole is that the amount of this stuff is large. So you may think about colliding two stars, which could perhaps tear them apart and stop the fusion. This isn't easy. The energy needed to substantially change the trajectory of another star is very, very large, unless one is lucky and the stars are already on course to "nearly collide", which is extremely unlikely.
Physics will not allow you to do such things. You would need a form of matter that is more extreme than the plasma in the Sun, e.g. the neutron matter, but this probably can't be much lighter (and easier to prepare, e.g. when it comes to energy) than the star itself. A black hole could only drill a hole (when fast enough) or consume the Sun (which you don't want).
However, if you allow the Sun to be eaten by a black hole, you will actually get a more efficient and more sustainable source of energy. Well, too sustainable. ;-) A black hole with a mass comparable to the solar mass would have a radius of about 3 kilometers. It would emit Hawking radiation only as an occasional photon with a wavelength of kilometers, and it would only evaporate after $10^{67}$ years or so. It would be so sustainable that no one could possibly observe the energy it is emitting. However, the black hole would ultimately emit all the energy $E=mc^2$ stored in the mass.
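For readers who want to check these figures, here is a rough sketch using the textbook formulas $r_s = 2GM/c^2$ and $t_{\rm evap} \approx 5120\,\pi\,G^2 M^3/(\hbar c^4)$; the result is only an order-of-magnitude estimate.

```python
# Schwarzschild radius and Hawking evaporation time of a solar-mass black hole.
import math

G = 6.674e-11
c = 2.998e8
hbar = 1.055e-34
M_sun = 1.989e30
year = 3.156e7  # seconds

r_s = 2 * G * M_sun / c**2
t_evap = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)

print(f"Schwarzschild radius: {r_s / 1e3:.1f} km")         # ~3 km
print(f"evaporation time    : {t_evap / year:.1e} years")  # ~1e67 years
```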
If there are powerful civilizations ready to do some "helioengineering", they surely don't suffer from naive and primitive misconceptions about the world such as the word "sustainable" and many other words that are so popular in the mentally retarded movement known as "environmentalism". These civilizations may do many things artificially but they surely realize that the thermonuclear reaction in the stars is a highly efficient and useful way to get the energy from the hydrogen fuel. Even some of us realize that almost all the useful energy that allowed the Earth to evolve and create life and other things came from the Sun.
The Sun may become unsustainable in 7.5 billion years but according to everything we know about Nature, it's the optimum device to provide large enough civilizations – whole planets – with energy.
-
Sorry Lubos, But SE allows only a single upvote..! BTW, I like that novel man..! – Ϛѓăʑɏ βµԂԃϔ Sep 21 '12 at 11:49
Thanks for the answer :) I did indeed use "sustainable" as a kind of joke, although I do think the word has its place. I realise that the universe's H supply will inevitably run out, but maybe it would last a lot longer if it was fused in a controlled way rather than burnt in stars. We're talking "sustainable" on a 10 billion year time scale but not a 10,000 billion year one. A star doesn't seem to me like a very efficient way to convert H into energy, unless you build a Dyson sphere around it, but even then there would surely be conversion losses in extracting usable work. – Nathaniel Sep 21 '12 at 12:06
Thanks, Crazy Buddy. ;-) Nathaniel, I see. When you worry about the solar radiation lost to wrong directions, it will still be cheaper to surround the Sun with solar panels. ;-) Note that the Sun's size is of order a million of kilometers "only". And one thing I didn't mention yet: if you care about the Hydrogen, why don't you just fly somewhere and grab it from nebulae or interplanetary gas etc.? The Universe has no shortage of fuel. It has a much more serious shortage of good ideas and "things at the right place". – Luboš Motl Sep 21 '12 at 12:17
Right. It strikes me as much more energetically efficient to actually gather hydrogen from nearby astrophysical sources and bring it back to the sun for fuel. Coupled with a suitable Dyson sphere, you would have amongst the more efficient (in the Carnot sense) stellar factories that you could think of. Fusion on the human scale will never be as efficient as what goes on in the sun. – Columbia Sep 21 '12 at 13:26
The most efficient way to save hydrogen for future use by a very advanced civilization is not to try to stop current stars from burning up their hydrogen, but rather to make sure that the star generation rate in the galaxy drops to zero. Basically, give up on current stars as a lost cause and just prevent new stars from forming. This works because the amount of hydrogen in gas clouds in the galaxy is orders of magnitude higher than the amount inside stars.
To do this, the civilization would need to monitor all of the gas clouds in space so they can notice clouds that are getting close to the stage of creating a proto-star. If they can explode a sufficiently energetic bomb near where the expected star would be born, they would be able to increase the pressure and prevent the collapse into a star for a significant period of time. I don't have any calculations of the energy required, but it must be very significantly less than the energy needed to take a significant amount of hydrogen out of the gravitational well of a star that has already started burning hydrogen.
They would have to be very careful that the explosions they create are not too energetic since a bomb that is too big could trigger more star formations if the expanding gas cloud runs into other stationary clouds. This whole scheme would require a lot of monitoring and simulations of all the gas clouds in the galaxy, but I don't think it is impossible from a physics point of view.
Another difficulty would be to monitor stars and to predict supernova explosions, since they can also trigger nearby gas clouds to collapse and generate new stars. They would have to either gently move the gas clouds out of the way or come up with some way to prevent the supernova explosion.
The astronomical signature would be a galaxy with no star formation and no supernova explosions over an extended period of time.
-
+1 for thinking out of the box – Vorac Sep 28 '12 at 9:47
Interesting idea - and the SFR is a good place to start. I doubt, though, that adding shocks to giant molecular clouds will do anything but trigger star formation, but maybe you can direct it toward smaller stars. As for the amount of energy, keep in mind the standard calculation that 30 million years' worth of the Sun's luminosity about equals its gravitational potential energy with respect to all particles being at infinity. That's not quite supernova level, but it's still rather large. – Chris White Oct 5 '12 at 7:57
It would obviously have to be a very advanced civilization that could simulate the effects of their preemptive explosions very accurately to make sure they are big enough to prevent star formation and yet not too big to trigger more star formation... Yeah, probably impossible... – FrankH Oct 5 '12 at 10:16
What would be the most energy-efficient way (using known physics) to blow apart a star or otherwise prevent or greatly slow the rate at which it performs fusion?
Spin it until fusion stops. Do this using the sun's own energy.
To accomplish this, I will have to ask you to envision something like a Dyson Sphere, but the primary function of the matter encircling the star will be mirrors. I will be making statements about the force balances and basic physics, how this could be actually done in practice is out of the scope of this answer.
I propose that mirrors would be placed at some distance from the sun, and the focal length of these mirrors would be equal to this distance. The idea will be to reflect the sun's light back onto the sun in a way that makes it spin faster. We would like to redirect all the light nearly tangentially to the edge of the sun, but we can't do this because of entropic limits. Remember, nothing can focus the sun's rays to heat something hotter than the surface of the sun. This is why I select the parameter of $R=f$.
Focal length equal to distance, per Wikipedia
With this type of mirror we could focus the sun directly back onto the sun. We will assume sufficient distance from the sun to treat it as a simple circle. To make the sun spin faster, we will redirect the image right of the center, so the image will have a center a distance of $d$ to the right of the object. In order to get the optimal location to direct the reflection, we will
• Integrate the distance from the axis of rotation from the lower bound of the image circle to the upper bound of the object circle
• Integrate this value between the two intersections of the circles
• Find the greatest value of this moment integral over all valid values of $d$
I actually did that calculation. I obtained $d=0.836 R$. To summarize, the proposal is to refocus the light back onto the sun so that it looks like a Venn diagram.
By doing this, we trash a lot of the radiation. I calculate the average radius of interaction to be $0.418 R$, but if we dilute that number by the number of photons lost, we will get a multiplier of $0.202 R$ to convert the photon's momentum to the average torque exerted. I believe this is a fundamental, entropic limit, and if I'm wrong, it's at least conservative.
At this point we're still not finished. That's because this mirror isn't balanced. If it always reflected light in this way, it would not have a stable orbit since the gravitational force can only act radially and there is a tangential component to the photon's force on the mirror. One could compensate for this quite simply by holding a flat mirror at a $45^{\circ}$ angle to the sun's radiation, directing them in the other tangential direction. The torque from that balancing mirror, however, would depend on the distance from the sun. Because of that, we can't numerically correct for it here. If the distance of this satellite from the sun was large compared to the sun's radius then the loss from this balancing mirror would be negligible.
For simplicity I'll assume all the radiation from the sun is photons (it's only about 98%). The power output of the sun is:
$$P = 3.846 \times 10^{26} W$$
The photonic momentum (which is totally isotropic normally) can be found from $E=pc$ applied to the above power output. This gives the total momentum of photons emitted per unit time, or in other words, the isotropic photonic force.
$$F = P/c = 1.282 \times 10^{18} N$$
The moment of inertia of the sun can be found from the formula for the moment of inertia of a solid ball.
$$m = 1.989 \times 10^{30} kg$$
$$I = \frac{2}{5} m R^2 = 3.848 \times 10^{47} m^2 kg$$
Using the methods I've laid out here, I can calculate the torque. This is assuming that all photons from the sun are used as efficiently as possible.
$$\tau = F \bar{R} = (1.282 \times 10^{18} N) ( 0.202 R) = 1.801 \times 10^{26} N m$$
How fast would it need to spin in order to stop fusion, and then how much more to break it apart? This is a difficult question to answer. However, one thing we can say is that if you have enough energy to completely dissociate everything in the sun gravitationally, you have enough energy to do both of the tasks of stopping fusion and breaking it apart less spectacularly. The energy to fully spread out the sun's mass over all space can be calculated. This is similar to a recent question; in short, the full dissociation energy is half of the potential integrated over the entire volume. If I've done this right, the full dissociation energy is:
$$E_{diss} = \frac{3 G M^2}{8 R} = 1.423 \times 10^{41} J$$
It is also difficult to estimate the time needed for the available torque to dissociate the sun. But let's do a limit case where the sun doesn't deform due to the increased rotation. In that case we can seek an angular velocity whose rotational kinetic energy is equivalent to the above energy of dissociation, then ask how long it would take the available torque to accelerate it to that point.
$$E_{diss} = \frac{1}{2} I \omega^2$$
$$\omega = 0.00086 \frac{rad}{s}$$
This seems small, but consider that this would be the state of rotating once every 2 hours. Now, how long would the given torque take to get it to this state?
$$I \omega = \Delta t \tau$$
$$\Delta t = 58.23 \text{ billion years}$$
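A short script (an illustration, not the answerer's own code) that reproduces the order-of-magnitude chain above; the $0.202\,R$ lever arm is taken from the text as given.

```python
# Spin-up estimate: photon force, torque, dissociation energy, time.
import math

c = 2.998e8
P = 3.846e26          # W, solar luminosity
M = 1.989e30          # kg
R = 6.96e8            # m
G = 6.674e-11
year = 3.156e7        # s

F = P / c                          # total photon momentum flux, ~1.3e18 N
tau = F * 0.202 * R                # torque with the diluted lever arm
I = 0.4 * M * R**2                 # uniform-sphere moment of inertia
E_diss = 3 * G * M**2 / (8 * R)    # full gravitational dissociation energy
omega = math.sqrt(2 * E_diss / I)  # rigid-body spin with that kinetic energy
dt = I * omega / tau               # time to reach omega at constant torque

print(f"torque       : {tau:.2e} N m")
print(f"E_diss       : {E_diss:.2e} J")
print(f"omega        : {omega:.2e} rad/s (period {2 * math.pi / omega / 3600:.1f} h)")
print(f"spin-up time : {dt / year / 1e9:.0f} billion years")
```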
Now, if someone started spinning the sun with the sun's own power, eventually the fusion would stop, but the radiation wouldn't stop right away. In order to see if the stored energy is sufficient to break the sun apart, we'll consider the thermal energy stored in the core alone. The core's temperature is about 15,000,000 kelvin. This region goes out to around 0.25 solar radii. The average kinetic energy of a nucleus in the sun's core would then come from $E_k = 3/2 k T$, coming out to
$$E_k = 3.106 \times 10^{-16} J$$
The average molecular mass in the sun is about 1.67 amu (http://web.njit.edu/~gary/321/Lecture7.html). I can use this to find the number of nuclei in the core of the sun. I can then combine that with the previous value for the energy per nucleus to find the total stored kinetic energy in the sun.
$$E = E_k N = E_k m (0.25)^3 / (1.67 amu) = E_k (1.12 \times 10^{55} \text{particles} ) = 3.481 \times 10^{39} J$$
This is the total thermal energy stored in the sun's core. Roughly. Divide this by the normal power output to get the time, in seconds, for which it could sustain the sun's luminosity.
$$E / P = 9.05 \times 10^{12} s \approx 290,000 \text{ years}$$

We conclude that the stored energy of the sun is insufficient to fully break it apart: it falls short of the full dissociation energy by a factor of roughly forty, and it could drive the mirror scheme for only a tiny fraction of the required spin-up time. The described method would still be viable for stopping the fusion reaction.
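And the corresponding sketch for the stored-thermal-energy estimate; the $(0.25)^3$ core mass fraction and the 1.67 amu mean molecular mass are the rough assumptions used in the text.

```python
# Thermal energy stored in the core and the time it could sustain the luminosity.
k_B = 1.381e-23       # J/K
amu = 1.661e-27       # kg
M = 1.989e30          # kg
P = 3.846e26          # W
T_core = 1.5e7        # K
year = 3.156e7        # s

E_k = 1.5 * k_B * T_core        # mean kinetic energy per particle
N = M * 0.25**3 / (1.67 * amu)  # crude particle count in the core
E_thermal = E_k * N

print(f"E per particle: {E_k:.2e} J")
print(f"N particles   : {N:.2e}")
print(f"E_thermal     : {E_thermal:.2e} J")
print(f"E_thermal / P : {E_thermal / P:.2e} s (~{E_thermal / P / year:,.0f} years)")
```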
-
58 billion years? The sun only has about 5 billion years left before it goes off the Main Sequence and becomes a red giant. So it appears your scheme will only work after it has already exhausted all of its hydrogen fuel. Still +1 for all the hard work! – FrankH Sep 23 '12 at 11:21
@FrankH Exactly, I only just now realized how stupid that sounded. I feel good about the numbers, so maybe we should conclude instead that it's not a great method and an advanced civilization would probably seek improvements. For instance, deflect the light at a large radius and use ion drives to push against the spin of the sun. I wanted to cover the obvious "passive" approach here, for better or worse. – AlanSE Sep 23 '12 at 13:08
The only way to capture all or most of the radiated energy of a star is to use a Dyson sphere.
I wouldn't call it a realistic option from an engineering standpoint, but it is possible in principle.
-
Hardly the easiest way, and my background in physics doesn't extend beyond High School ...
Anyway I'd start with the assumption that any star is basically a ball of hydrogen having a surface of plasma. It may, therefore, be possible to calculate the cavitation frequency of that ball. The practical aspect may be an engineering challenge though ... generating the necessary amount of power to penetrate the plasma, constructing rugged delivery device/s, moving the devices into position ...
-
http://mathoverflow.net/revisions/5250/list | ## Return to Answer
4 typesetting
In short, I'd tell your friend: "If you believe a ring can be understood geometrically as functions on a its spectrum, then modules help you by providing more functions with which to measure and characterize these spectra.its spectrum."
Elements of a module over a ring $R$ are like generalized functions on its spectrum. $Spec(R)$. We can talk about the support of a module element, or its vanishing set. More concretely, think of how global sections of a line bundle can act as functions you can use to define map into projective space.
When you glue together a module on open sets of a spectrum or a scheme, you get to glue using maps which are module isomorphisms, which are more flexible than the ring isomorphisms required to glue together a scheme. Borrowing intuition from smooth manifold land, the twist in the Moebius band (as a line bundle on the circle) is formed by gluing a copy of the reals to itself via multiplication by $-1$, a module map, not a ring map. This allows us to think of functions like $\cos(\theta/2)$ as being globally defined: as a map to the Moebius band.
In the same vein, when you have a representation $V$ of a group $G$, each element $v\in V$ gives you a nice evaluation map from $G$ into $V$, so lurking everywhere we've got these morphisms from our object of interest into a known object, which are nicely related to each other via the group laws. A fortiori, this certainly doesn't capture the full utility of group representations, but a priori I think it's a decent justification.
3 wording
In short, I'd tell your friend: "If you believe rings a ring can be understood geometrically as functions on a spectrum, then modules help you by providing more functions with which to measure and characterize these spectra."
Elements of a module over a ring are like generalized functions on its spectrum. We can talk about the support of a module element, or its vanishing set. More concretely, think of how global sections of a line bundle can act as functions you can use to define map into projective space.
When you glue together a module on open sets of a spectrum or a scheme, you get to glue using maps which are module isomorphisms, which are more flexible than the ring isomorphisms required to glue together a scheme. Borrowing intuition from smooth manifold land, the twist in the Moebius band (as a line bundle on the circle) is formed by gluing a copy of the reals to itself via multiplication by -1, $-1$, a module map, not a ring map, which . This allows us to think of functions like $\cos(\theta/2)$ as being globally defined: as a map to the Moebius band.
In the same vein, when you have a representation $V$ of a group $G$, each element $v\in V$ gives you a nice evaluation map from $G$ into $V$, so lurking everywhere we've got these morphisms from our object of interest into a known object, which are nicely related to each other via the group laws. A fortiori, this certainly doesn't capture the full utility of group representations, but a priori I think it's a decent justification.
2 typesetting
In short, I'd tell your friend: "If you believe rings can be understood geometrically as functions on a spectrum, then modules help you by providing more functions with which to measure and characterize these spectra."

Elements of a module over a ring are like generalized functions on its spectrum. We can talk about the support of a module element, or its vanishing set. More concretely, think of how global sections of a line bundle can act as functions you can use to define a map into projective space.

When you glue together a module on open sets of a spectrum or a scheme, you get to glue using maps which are module isomorphisms, which are more flexible than the ring isomorphisms required to glue together a scheme. Borrowing intuition from smooth manifold land, the twist in the Moebius band (as a line bundle on the circle) is formed by gluing a copy of the reals to itself via multiplication by -1, a module map, not a ring map, which allows us to think of functions like $\cos(\theta/2)$ as being globally defined: as a map to the Moebius band.

In the same vein, when you have a representation $V$ of a group $G$, each element $v\in V$ gives you a nice evaluation map from $G$ into $V$, so lurking everywhere we've got these morphisms from our object of interest into a known object, which are nicely related to each other via the group laws. A fortiori, this certainly doesn't capture the full utility of group representations, but a priori I think it's a decent justification.
1
I think geometry provides a good reason: elements of a module over a ring are like generalized functions on its spectrum. We can talk about the support of a module element, or its vanishing set. More concretely, think of how global sections of a line bundle can act as functions mapping your scheme into projective space.
When you glue together a module on open sets of a spectrum or a scheme, you get to glue using maps which are module isomorphisms, which are which are more flexible than the ring isomorphisms required to glue together a scheme. Borrowing intuition from smooth manifold land, the twist in the Moebius band (as a line bundle on the circle) is formed by gluing a copy of the reals to itself via multiplication by -1, a module map, not a ring map, which allows to think of functions like $\cos(\theta/2)$ as being globally defined: as a map to the Moebius band.
Similarly, when you have a representation V of a group G, each element v in V gives you a nice evaluation map from G into V, so again, we're getting morphisms from an object into a known object.
http://torus.math.uiuc.edu/cal/math/cal?year=2012&month=08&day=07&interval=year®exp=Group+Theory+Seminar&use=Find | Seminar Calendar
for Group Theory Seminar events the year of Tuesday, August 7, 2012.
Questions regarding events or the calendar should be directed to Tori Corkery.
``` July 2012 August 2012 September 2012
Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7 1 2 3 4 1
8 9 10 11 12 13 14 5 6 7 8 9 10 11 2 3 4 5 6 7 8
15 16 17 18 19 20 21 12 13 14 15 16 17 18 9 10 11 12 13 14 15
22 23 24 25 26 27 28 19 20 21 22 23 24 25 16 17 18 19 20 21 22
29 30 31 26 27 28 29 30 31 23 24 25 26 27 28 29
30
```
Thursday, January 19, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, January 19, 2012
Submitted by kapovich.
Organizational meeting
Thursday, January 26, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, January 26, 2012
Submitted by kapovich.
Robert Craggs (UIUC Math)On doubled 3-manifolds and minimal handle presentations for 4--manifoldsAbstract: We study turning algebraic handle cancellation of certain 2-handle presentations for 4-manifolds of the form $M_* \times [-1,1]$ into geometric handle cancellations. Algebraic here refers to extended Nielsen invariants on group presentations, We show how the cancellation problems leads to obstruction problems involving framed surgery on 3-manifolds. We will report on efforts to calculate some surgery obstructions
Thursday, February 2, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, February 2, 2012
Submitted by kapovich.
Albert Fisher (University of Sao Paulo)A flow cross-section for Moeckel's theorem on continued fractionsAbstract: We construct a cross-section to the principal congruence modular flow which is represented as a skew product transformation over the natural extension of the Gauss map. This leads to a new proof of Moeckel's theorem on rational approximants. For an irrational number $x$ in the unit interval with continued fraction expansion $[n_0 n_1...]$, let $p_k/q_k=[n_0 n_1..n_k]$ be the rational approximants for $x$. Writing these in lowest terms, they can be of three types: $\frac{O}{E}$, $\frac{E}{O}$, or $\frac{O}{O}$, where $O$ stands for odd and $E$ for even. Moeckel's theorem states that the frequency of each of these exists almost surely. What is unusual in the proof is that this does not follow directly from the ergodic theorem applied to an observable on the Gauss map (the shift on continued fractions): one must first enlarge the space. Moeckel's approach makes use of the geodesic flow on a three-fold cover of the modular surface, together with a geometric argument for counting the time that geodesics spend in cusps. Ergodicity of the flow is automatic (via the Hopf argument) but the counting is somewhat involved. Later Jager and Liardet found a second purely ergodic theoretic proof, constructing a skew product over the Gauss map. There the counting is direct, but the proof of ergodicity is more difficult. Our proof unifies the two earlier arguments, inheriting these strong points of each.
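As a numerical illustration of the statement (an editorial sketch, not part of the abstract), the empirical parity frequencies can be estimated with the standard convergent recurrences $p_k = n_k p_{k-1} + p_{k-2}$, $q_k = n_k q_{k-1} + q_{k-2}$; floating-point precision limits the usable depth, so the figures are only indicative.

```python
# Estimate the frequencies of the parity types O/E, E/O, O/O among
# continued fraction convergents p_k/q_k of pseudo-random reals in (0,1).
import random
from collections import Counter

def parity_types(x, depth=15):
    p2, p1 = 0, 1          # p_{-2}, p_{-1}
    q2, q1 = 1, 0          # q_{-2}, q_{-1}
    out = []
    for _ in range(depth):
        n = int(x)
        p2, p1 = p1, n * p1 + p2
        q2, q1 = q1, n * q1 + q2
        out.append(("O" if p1 % 2 else "E") + "/" + ("O" if q1 % 2 else "E"))
        frac = x - n
        if frac < 1e-9:
            break
        x = 1.0 / frac
    return out

random.seed(0)
counts = Counter()
for _ in range(5000):
    counts.update(parity_types(random.random()))
total = sum(counts.values())
for t in sorted(counts):
    print(t, round(counts[t] / total, 3))
```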
Thursday, February 9, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, February 9, 2012
Submitted by kapovich.
Catherine Pfaff (Rutgers - Newark)Constructing and Classifying Fully Irreducible Outer Automorphisms of Free GroupsAbstract: The main theorem of my thesis emulates, in the context of $Out(F_r)$ theory, a mapping class group theorem (by H. Masur and J. Smillie) that determines precisely which index lists arise from pseudo-Anosov mapping classes. Since the ideal Whitehead graph gives a finer invariant in the analogous setting of a fully irreducible $\phi \in Out(F_r)$, we instead focus on determining which of the 21 connected 5-vertex graphs are ideal Whitehead graphs of ageometric, fully irreducible $\phi \in Out(F_3)$. Our main theorem accomplishes this. The methods we use for constructing fully irreducible $\phi\in Out(F_r)$, as well as our identification and decomposition techniques, can be used to extend our main theorem, as they are valid in any rank. Our methods of proof rely primarily on Bestvina-Feighn-Handel train track theory and the theory of attracting laminations.
Thursday, February 16, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, February 16, 2012
Submitted by kapovich.
Richard Brown (Johns Hopkins University)The dynamics of mapping class actions on the character varieties of surfacesAbstract: We construct an algebraic model of the special linear character variety of a compact surface in a way which facilitates the study of the action of the mapping class group of the surface on the affine set. We then present some early results of this study, and discuss some intended directions of further study.
Thursday, February 23, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, February 23, 2012
Submitted by kapovich.
Nathan Dunfield (UIUC Math)Integer homology 3-spheres with large injectivity radiusAbstract: Conjecturally, the amount of torsion in the first homology group of a hyperbolic 3-manifold must grow rapidly in any exhaustive tower of covers (see Bergeron-Venkatesh and F. Calegari-Venkatesh). In contrast, the first betti number can stay constant (and zero) in such covers. Here "exhaustive" means that the injectivity radius of the covers goes to infinity. In this talk, I will explain how to construct hyperbolic 3-manifolds with trivial first homology where the injectivity radius is big almost everywhere by using ideas from Kleinian groups. I will then relate this to the recent work of Abert, Bergeron, Biringer, et. al. In particular, these examples show a differing approximation behavior for L^2 torsion as compared to L^2 betti numbers. This is joint work with Jeff Brock.
Thursday, March 1, 2012
Group Theory Seminar
1:00 pm in 347 Altgeld Hall, Thursday, March 1, 2012
Submitted by dsrobins.
Derek Robinson (Department of Mathematics, University of Illinois at Urbana-Champaign)Groups with few isomorphism types of derived subgroup.Abstract: A derived subgroup in a group G is the derived (or commutator) subgroup of some subgroup of G. Recently there has been interest in trying to understand the significance of the set of derived subgroups within the lattice of all subgroups of G. In particular one can ask about the effect on the group structure of imposing restrictions on the set of derived subgroups. In this talk we will describe recent work on groups in which there are at most two isomorphism types of derived subgroup. While this may sound like a very special class of groups, it contains groups of many diverse types. We will describe some of these types of group and show how their construction involves some interesting number theoretic problems.
Thursday, March 29, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, March 29, 2012
Submitted by kapovich.
Matt Clay (Alleheny College)Relative twisting in Outer spaceAbstract: The Culler-Vogtmann Outer space is the space of marked metric graphs of a fixed rank. It plays a similar role in the theory of the group of outer automorphisms of a free group as the Teichmueller space of a surface plays for the mapping class group of the surface. I will discuss a tool for providing a lower bound on the distance between points in the Outer space with the Lipschitz metric that is akin to annular projection in surfaces. This is joint work with Alexandra Pettet.
Thursday, April 12, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, April 12, 2012
Submitted by kapovich.
Hao Liang (University of Illinois at Chicago)Centralizers of finite subgroups of the mapping class group and almost fixed points in the curve complexAbstract: Let S be an orientable surface of finite type, MCG(S) the mapping class group of S, C(S) the curve complex of S and H a finite subgroup of MCG(S). By the hyperbolicity of C(S), there exist points in C(S) whose H-orbit has diameter at most $6\delta$; we call such points H-almost fixed points. We prove that there exists a constant K depending only on S so that if the diameter of the set of H-almost fixed points is greater than K, then the centralizer of H in MCG(S) is infinite. I will start by explaining the proof of the analogous statement for hyperbolic groups, then I will explain the extra ingredients needed for the case of mapping class groups.
Thursday, April 26, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, April 26, 2012
Submitted by kapovich.
Pekka Pankka (University of Helsinki)From Picard's theorem to quasiregular ellipticityAbstract: In the quasiconformal geometry of Riemannian manifolds the classical Picard theorem from complex analysis turns into an existence question for non-constant quasiregular mappings from Euclidean spaces into Riemannian manifolds. In this talk, I will discuss the role of the fundamental group in these questions and a class of metrics, introduced by Semmes, that connect these quesiregular ellipticity questions to questions on quasiconformal geometry of decomposition spaces. This talk is based on joint works with Kai Rajala and Jang-Mei Wu.
Thursday, August 30, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, August 30, 2012
Submitted by kapovich.
Organizational meeting
Thursday, October 18, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, October 18, 2012
Submitted by kapovich.
Alex Furman (University of Illinois at Chicago)Classifying lattice envelopes for (many) countable groupsAbstract: Let $\Gamma$ be a given countable group. What locally compact groups $G$ contain a lattice (not necessarily uniform) isomorphic to $\Gamma$ ? In a joint work with Uri Bader and Roman Sauer we answer this question for a large class of groups including Gromov hyperbolic groups and many linear groups. The proofs use a range of facts including: recent work of Breuillard-Gelander on Tits alternative, works of Margulis on arithmeticity of lattices in semi-simple Lie groups, and a number of quasi-isometric rigidity results.
Thursday, October 25, 2012
Group Theory Seminar
1:00 pm Thursday, October 25, 2012
Submitted by kapovich.
No seminar today, because of the departmental retiree's luncheon
Thursday, November 1, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, November 1, 2012
Submitted by kapovich.
Yael Algom-Kfir (Yale University)Small dilatation automorphisms of the free group and their mapping toriAbstract: We consider elements of Out(F_n) that can be represented by a self map of a graph which has the property that a high enough iterate of the map sends every edge over any other edge of the graph. Furthermore, we assume that positive iterates of the map send edges to immersed paths in the graph. These maps are called irreducible train-track maps. To each such automorphism $\phi$ one can attach a real number $\lambda > 1$ called the dilatation of $\phi$. For every n, the set of real numbers realized as dilatations of elements in Out(F_n) is a discrete set; however, letting n vary we can get dilatations arbitrarily close to 1. For a fixed n, the smallest dilatation of an element in Out(F_n) is on the order of $2^{1/n}$. We define an element to be P-small if its dilatation is smaller than $P^{1/n}$ (there are infinitely many such automorphisms). We prove that for a given P, there exist finitely many 2-complexes so that the mapping torus of any P-small automorphism is obtained by surgery from one of these 2-complexes. This is a direct analog of a theorem of Farb-Leininger-Margalit in the case of Mod(S) for a closed surface S. We also show that the fundamental group of such a mapping torus has a presentation with a uniformly bounded number of generators and relations. This is joint work with Kasra Rafi.
Thursday, November 8, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, November 8, 2012
Submitted by kapovich.
Paul Schupp (UIUC Math)Multi-pass Automata and Group Word ProblemsAbstract: After reviewing some well-known connections between group theory and formal language theory, I will address a question of Bob Gilman: Is there a "reasonable" class of formal languages which are more general than context-free languages, but much more restricted than linear bounded automata, which tells us something about group word problems? It seems that the class of "multi-pass" languages is interesting from this point of view. Although starting from automata, we will discuss some mapping tori and some flat manifolds. This is joint work with Tullio Ceccherini-Silberstein, Michel Coornaert and Francesca Fiorenzi.
Thursday, November 15, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, November 15, 2012
Submitted by kapovich.
Anton Lukyanenko (UIUC Math)Geodesic coding on the complex hyperbolic modular surfaceAbstract: Continued fractions have been used to study the behavior of geodesics in the modular surface $H^2/SL(2; Z)$. Is a similar approach available for other quotients of symmetric spaces? We study the notion of a continued fraction on the Heisenberg group, a step-2 nilpotent group that serves as the boundary of the complex hyperbolic plane $CH^2$, and its connection to geodesics in the modular surface $CH^2/SU(2, 1; Z[i])$. Joint work with Joseph Vandehey.
Thursday, December 6, 2012
Group Theory Seminar
1:00 pm in Altgeld Hall 347, Thursday, December 6, 2012
Submitted by kapovich.
Brian Ray (UIUC Math)General nonexistence of finite strongly relatively rigid sets in Culler-Vogtmann Outer Space.Abstract: Given a subset $\Sigma$ of a finitely generated free group, we say that $\Sigma$ is (strongly) spectrally rigid if whenever $T, T'$ are trees in (the closure of) Culler-Vogtmann Outer Space for which $\| g \|_T = \| g \|_{T'}$ for every $g \in \Sigma$, then $T = T'$. Similarly, we say that $\Sigma$ is (strongly) relatively rigid at $T$ if given a tree $T'$ in (the closure of) C-V Outer Space for which $\| g \|_T = \| g \|_{T'}$ for every $g \in \Sigma$, then $T = T'$. It is well known that no finite spectrally rigid set exists. Recently, Carette, Francaviglia, Kapovich, and Martino proved that every $T$ in C-V Outer Space admits a finite relatively rigid set. We show the existence of a family of trees on the boundary of C-V Outer Space for which no finite strongly relatively rigid set exists. Time permitting, we will discuss how one can promote the result of CFKM and show that every tree in C-V Outer Space admits a finite \emph{strongly} relatively rigid set.
http://psychology.wikia.com/wiki/Income_inequality_metrics?oldid=142357 | # Income inequality metrics
The concept of inequality is distinct from that of poverty[1] and fairness. Income inequality metrics or income distribution metrics are used by social scientists to measure the distribution of income, and economic inequality among the participants in a particular economy, such as that of a specific country or of the world in general. While different theories may try to explain how income inequality comes about, income inequality metrics simply provide a system of measurement used to determine the dispersion of incomes.
Income distribution has always been a central concern of economic theory and economic policy. Classical economists such as Adam Smith, Thomas Malthus and David Ricardo were mainly concerned with factor income distribution, that is, the distribution of income between the main factors of production, land, labour and capital. It is often related to wealth distribution although separate factors influence wealth inequality.
Modern economists have also addressed this issue, but have been more concerned with the distribution of income across individuals and households. Important theoretical and policy concerns include the relationship between income inequality and economic growth. The article Economic inequality discusses the social and policy aspects of income distribution questions.
## Defining income
All of the metrics described below are applicable to evaluating the distributional inequality of various kinds of resources. Here the focus is on income as a resource. As there are various forms of "income", the investigated kind of income has to be clearly described.
One form of income is the total amount of goods and services that a person receives, and thus there is not necessarily money or cash involved. If a subsistence farmer in Uganda grows his own grain it will count as income. Services like public health and education are also counted in. Often expenditure or consumption (which is the same in an economic sense) is used to measure income. The World Bank uses the so-called "living standard measurement surveys"[2] to measure income. These consist of questionnaires with more than 200 questions. Surveys have been completed in most developing countries.
Applied to the analysis of income inequality within countries, "income" often stands for the taxed income per individual or per household. Here income inequality measures also can be used to compare the income distributions before and after taxation in order to measure the effects of progressive tax rates.
## Properties of inequality metrics
In the economic literature on inequality four properties are generally postulated that any measure of inequality should satisfy:
### Anonymity
This assumption states that an inequality metric does not depend on the "labeling" of individuals in an economy and all that matters is the distribution of income. For example, in an economy composed of two people, Mr. Smith and Mrs. Jones, where one of them has 60% of the income and the other 40%, the inequality metric should be the same whether it is Mr. Smith or Mrs. Jones who has the 40% share. This property distinguishes the concept of inequality from that of fairness where who owns a particular level of income and how it has been acquired is of central importance. An inequality metric is a statement simply about how income is distributed, not about who the particular people in the economy are or what kind of income they "deserve".
### Scale independence
This property says that richer economies should not be automatically considered more unequal by construction. In other words, if every person's income in an economy is doubled (or multiplied by any positive constant) then the overall metric of inequality should not change. Of course the same thing applies to poorer economies. The inequality income metric should be independent of the aggregate level of income.
### Population independence
Similarly, the income inequality metric should not depend on whether an economy has a large or small population. An economy with only a few people should not be automatically judged by the metric as being more equal than a large economy with lots of people. This means that the metric should be independent of the level of population.
### Transfer principle
The Pigou–Dalton, or transfer, principle is the assumption that makes an inequality metric actually a measure of inequality. In its weak form it says that if some income is transferred from a rich person to a poor person, while still preserving the order of income ranks, then the measured inequality should not increase. In its strong form, the measured level of inequality should decrease.
## Common income inequality metrics
Among the most common metrics used to measure inequality are the Gini index (also known as Gini coefficient), the Theil index, and the Hoover index. They have all four properties described above.
An additional property of an inequality metric that may be desirable from an empirical point of view is that of 'decomposability'. This means that if a particular economy is broken down into sub-regions, and an inequality metric is computed for each sub region separately, then the measure of inequality for the economy as a whole should be a weighted average of the regional inequalities (in a weaker form, it means that it should be an explicit function of sub-regional inequalities, though not necessarily linear). Of the above indexes, only the Theil index has this property.
Because these income inequality metrics are summary statistics that seek to aggregate an entire distribution of incomes into a single index, the information on the measured inequality is reduced. This information reduction of course is the goal of computing inequality measures, as it reduces complexity.
A weaker reduction of complexity is achieved if income distributions are described by shares of total income. Rather than to indicate a single measure, the society under investigation is split into segments, e.g. into quintiles (or any other percentage of population). Usually each segment contains the same share of income earners. In case of an unequal income distribution, the shares of income available in each segment are different. In many cases the inequality indices mentioned above are computed from such segment data without evaluating the inequalities within the segments. The higher the number of segments (e.g. deciles instead of quintiles), the closer the measured inequality of distribution gets to the real inequality. (If the inequality within the segments is known, the total inequality can be determined by those inequality metrics which have the property of being "decomposable".)

Quintile measures of inequality satisfy the transfer principle only in its weak form because any changes in income distribution outside the relevant quintiles are not picked up by these measures; only the distribution of income between the very rich and the very poor matters, while inequality in the middle plays no role.
Details of the three inequality measures are described in the respective Wikipedia articles. The following subsections cover them only briefly.
### Gini index
Main article: Gini coefficient
The range of the Gini index is between 0 and 1 (0% and 100%), where 0 indicates perfect equality and 1 (100%) indicates maximum inequality.
The Gini index is the most frequently used inequality index. The reason for its popularity is that it is easy to understand how to compute the Gini index as a ratio of two areas in Lorenz curve diagrams. As a disadvantage, the Gini index only maps a number to the properties of a diagram, but the diagram itself is not based on any model of a distribution process. The "meaning" of the Gini index only can be understood empirically. Additionally the Gini does not capture where in the distribution the inequality occurs. As a result two very different distributions of income can have the same Gini index.
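A minimal computational sketch (not taken from the article) of the Gini index for a list of incomes, using the standard mean-absolute-difference formulation $G = \sum_i \sum_j |x_i - x_j| / (2 n^2 \bar{x})$:

```python
# Gini index of a sample of incomes (0 = perfect equality).
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    mean = sum(xs) / n
    # O(n log n) rearrangement of the double sum over |x_i - x_j|.
    total = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return total / (n * n * mean)

print(gini([10, 10, 10, 10]))   # 0.0   -> perfect equality
print(gini([0, 0, 0, 100]))     # 0.75  -> maximal for n = 4 with this formula
print(gini([20, 30, 50, 100]))  # an intermediate value
```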
### Hoover index
Main article: Hoover index
The Hoover index is the simplest of all inequality measures to calculate: It is the proportion of all income which would have to be redistributed to achieve a state of perfect equality.
In a perfectly equal world, no resources would need to be redistributed to achieve equal distribution: a Hoover index of 0. In a world in which all income was received by just one family, almost 100% of that income would need to be redistributed (i.e., taken and given to other families) in order to achieve equality. The Hoover index then ranges between 0 and 1 (0% and 100%), where 0 indicates perfect equality and 1 (100%) indicates maximum inequality.
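A corresponding sketch (again not from the article): the Hoover index is half the relative mean deviation, i.e. the share of total income that would have to be moved to equalize all incomes.

```python
# Hoover (Robin Hood) index of a sample of incomes.
def hoover(incomes):
    total = sum(incomes)
    mean = total / len(incomes)
    return sum(abs(x - mean) for x in incomes) / (2 * total)

print(hoover([10, 10, 10, 10]))   # 0.0   -> nothing needs to be redistributed
print(hoover([0, 0, 0, 100]))     # 0.75  -> 75% of income would have to move
print(hoover([20, 30, 50, 100]))  # an intermediate value
```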
### Theil index
Main article: Theil index
A Theil index of 0 indicates perfect equality. A Theil index of 1 indicates that the distributional entropy of the system under investigation is almost the same as that of a system with an 82:18 distribution.[3] This is slightly more unequal than the inequality in a system to which the "80:20 Pareto principle" applies.[4] The Theil index can be transformed into an Atkinson index, which has a range between 0 and 1 (0% and 100%), where 0 indicates perfect equality and 1 (100%) indicates maximum inequality.
The Theil index is an entropy measure. As for any resource distribution and with reference to information theory, "maximum entropy" occurs once income earners cannot be distinguished by their resources, i.e. when there is perfect equality. In real societies people can be distinguished by their different resources, with the resources being incomes. The more "distinguishable" they are, the lower is the "actual entropy" of a system consisting of income and income earners. Also based on information theory, the gap between these two entropies can be called "redundancy".[5] It behaves like a negative entropy.
The term "Theil entropy" has also been used for the Theil index, which caused confusion. As an example, Amartya Sen commented on the Theil index, "given the association of doom with entropy in the context of thermodynamics, it may take a little time to get used to entropy as a good thing."[6] It is important to understand that an increasing Theil index does not indicate increasing entropy; instead it indicates increasing redundancy (decreasing entropy).
High inequality yields high Theil redundancies. High redundancy means low entropy. But this does not necessarily imply that a very high inequality is "good", because very low entropies also can lead to explosive compensation processes. Neither does using the Theil index necessarily imply that a very low inequality (low redundancy, high entropy) is "good", because high entropy is associated with slow, weak and inefficient resource allocation processes.
There are three variants of the Theil index. When applied to income distributions, the first Theil index relates to systems within which incomes are stochastically distributed to income earners, whereas the second Theil index relates to systems within which income earners are stochastically distributed to incomes.
A third "symmetrized" Theil index is the arithmetic average of the two previous indices. Interestingly, the formula of the third Theil index has some similarity with the Hoover index (as explained in the related articles). As in case of the Hoover index, the symmetrized Theil index does not change when swapping the incomes with the income earners. How to generate that third Theil index by means of a spreadsheet computation directly from distribution data is shown below.
An important property of the Theil index which makes its application popular is its decomposability into the between-group and within-group component. For example, the Theil index of overall income inequality can be decomposed in the between-region and within region components of inequality, while the relative share attributable to the between-region component suggests the relative importance of spatial dimension of income inequality.[7]
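A minimal sketch (not the spreadsheet computation referred to elsewhere in the article) of the first and second Theil indices and their symmetrized average for strictly positive incomes:

```python
# First Theil index T_T, second Theil index T_L (mean log deviation),
# and their arithmetic average (the "symmetrized" index described above).
import math

def theil_indices(incomes):
    n = len(incomes)
    mu = sum(incomes) / n
    t_t = sum((x / mu) * math.log(x / mu) for x in incomes) / n
    t_l = sum(math.log(mu / x) for x in incomes) / n
    return t_t, t_l, (t_t + t_l) / 2

print(theil_indices([10, 10, 10, 10]))   # (0.0, 0.0, 0.0) -> perfect equality
print(theil_indices([20, 30, 50, 100]))  # mildly unequal sample
print(theil_indices([1, 1, 1, 197]))     # highly unequal sample
```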
#### Comparison of the Theil index and the Hoover index
The Theil index indicates the distributional redundancy of a system, within which incomes are assigned to income earners in a stochastic process. In comparison, the Hoover index indicates the minimum size of the income share of a society, which would have to be redistributed in order to reach maximum entropy. Not to exceed that minimum size would require a perfectly planned redistribution. Therefore the Hoover index is the "non-stochastic" counterpart to the "stochastic" Theil index.
Applying the Theil index to allocation processes in the real world does not imply that these processes are stochastic: the Theil yields the distance between an ordered resource distribution in an observed system to the final stage of stochastic resource distribution in a closed system. Similarly, applying the Hoover index does not imply that allocation processes occur in a perfectly planned economy: the Hoover index yields the distance between the resource distribution in an observed system to the final stage of a planned "equalization" of resource distribution. For both indices, such an equalization only serves as a reference, not as a goal.
For a given distribution the Theil index can be larger than the Hoover index or smaller than the Hoover index:
• For high inequalities the Theil index is larger than the Hoover index.
This means that, for achieving equilibrium (maximum entropy) in a closed system, more resources would have to be reallocated than in the case of a planned and optimized reallocation process, where only the necessary minimum share of resources would have to be reallocated. For an open system, the export of entropy (import of redundancy) would make it possible to maintain the distribution dynamics driven by high inequality.
• For low inequalities the Theil index is smaller than the Hoover index.
Here, on the path to reaching equilibrium, a planned and optimized reallocation of resources would contribute more to the dynamics of redistribution than stochastic redistribution. This also is intuitively understandable, as low inequalities also weaken the urge to redistribute resources. People in such a system may tolerate or even foster an increase in inequality. As this would be an increase of redundancy (a decrease of entropy), redundancy would have to be imported into (entropy would have to be exported from) the society. In that case the society needs to be an open system.
In order to increase the redundancy in the distribution category of a society as a closed system, entropy needs to be exported from the subsystem operating in that economic category to other subsystems with other entropy categories in the society. For example, social entropy may increase. However, in the real world, societies are open systems, but the openness is restricted by the entropy exchange capabilities of the interfaces between the society and the environment of that society. For societies with a resource distribution which entropywise is similar to the resource distribution of a reference society with a 73:27 split (73% of the resources belong to 27% of the population and vice versa),[8] the point where the Hoover index and the Theil index are equal is at a value of around 46% (0.46) for both indices.
## Ratios
Another common class of metrics is to take the ratio of the income of two different groups, generally "higher over lower". This compares two parts of the income distribution, rather than the distribution as a whole; equality between these parts corresponds to 1:1, while the more unequal the parts, the greater the ratio. These statistics are easy to interpret and communicate, because they are relative (this population earns twice as much as this population), but, since they do not fall on an absolute scale, do not provide an absolute measure of inequality.
### Ratio of percentiles
It is particularly common to compare a given percentile to the median, as in the chart at right; compare the seven-number summary, which summarizes a distribution by certain percentiles. While such ratios do not represent the overall level of inequality in the population as a whole, they provide measures of the shape of the income distribution. For example, the attached graph shows that in the period 1967–2003, the US income ratio between the median and the 10th and 20th percentiles did not change significantly, while the ratio between the median and the 80th, 90th, and 95th percentiles increased. This reflects that the increase in the Gini coefficient of the US in this time period is due to gains by upper income earners (relative to the median), rather than to losses by lower income earners (relative to the median).
### Share of income
A related class of ratios is "income share" – what percentage of national income a subpopulation accounts for. Taking the ratio of income share to subpopulation size corresponds to a ratio of mean subpopulation income relative to mean income. Because income distribution is generally positively skewed, the mean is higher than the median, so ratios to the mean are lower than ratios to the median. This is particularly used to measure the fraction of income accruing to top earners – top 10%, 1%, .1%, .01% (1 in 10, in 100, in 1,000, in 10,000), and also "top 100" earners or the like; in the US the top 400 earners is .0002% of earners (2 in 1,000,000) – to study concentration of income – wealth condensation, or rather income condensation.[11] For example, in the chart at right, US income share of top earners was approximately constant from the mid 1950s to the mid 1980s, then increased from the mid-1980s through the 2000s; this increased inequality was reflected in the Gini coefficient.
For example, in 2007 the top decile (10%) of US earners accounted for 49.7% of total wages ($4.97 \approx 5$ times fraction under equality), and the top 0.01% of US earners accounted for 6% of total wages (600 times fraction under equality).[12]
## Spreadsheet computations
The Gini coefficient, the Hoover index and the Theil index as well as the related welfare functions[13] can be computed together in a spreadsheet.[14] The welfare functions serve as alternatives to the median income.
| Group | Members per group (A) | Income per group (E) | Income per individual (Ē) | Relative deviation (D) | Accumulated income (K) | Gini | Hoover | Theil |
|---|---|---|---|---|---|---|---|---|
| 1 | A1 | E1 | Ē1 = E1/A1 | D1 = E1/ΣE - A1/ΣA | K1 = E1 | G1 = (2 * K1 - E1) * A1 | H1 = abs(D1) | T1 = ln(Ē1) * D1 |
| 2 | A2 | E2 | Ē2 = E2/A2 | D2 = E2/ΣE - A2/ΣA | K2 = E2 + K1 | G2 = (2 * K2 - E2) * A2 | H2 = abs(D2) | T2 = ln(Ē2) * D2 |
| 3 | A3 | E3 | Ē3 = E3/A3 | D3 = E3/ΣE - A3/ΣA | K3 = E3 + K2 | G3 = (2 * K3 - E3) * A3 | H3 = abs(D3) | T3 = ln(Ē3) * D3 |
| 4 | A4 | E4 | Ē4 = E4/A4 | D4 = E4/ΣE - A4/ΣA | K4 = E4 + K3 | G4 = (2 * K4 - E4) * A4 | H4 = abs(D4) | T4 = ln(Ē4) * D4 |
| Totals | ΣA | ΣE | Ē = ΣE/ΣA | | | ΣG | ΣH | ΣT |
| Inequality measures | | | | | | Gini = 1 - ΣG/ΣA/ΣE | Hoover = ΣH / 2 | Theil = ΣT / 2 |
| Welfare functions | | | | | | WG = Ē * (1 - Gini) | WH = Ē * (1 - Hoover) | WT = Ē * (1 - Theil) |
In the spreadsheet, fields with a yellow background (the columns A and E) are used for data input. From these data, the inequality measures as well as the related welfare functions are computed and displayed in fields with a green background.
In the example given here, "Theil index" stands for the arithmetic mean of a Theil index computed for the distribution of income within a society to the individuals (or households) in that society and a Theil index computed for the distribution of the individuals (or households) in the society to the income of that society. The difference between the Theil index and the Hoover index is the weighting of the relative deviation D. For the Hoover index the relative deviation D per group is weighted with its own sign. For the Theil index the relative deviation D per group is weighted with the information size provided by the income per individual in that group.
For the computation the society usually is divided into income groups. Often there are four or five groups consisting of a similar number of individuals in each group. In other cases the groups are created based on income ranges, which leads to different numbers of individuals in the different groups. The table above shows a computation of inequality indices for four groups. For each group the number of individuals (or households) per group A and the total income in that group E is specified.
The parameter pairs A and E need to be sorted for the computation of the Gini coefficient. (For the Theil index and the Hoover index no sorting is required.) A and E have to be sorted so that the values in the column "Income per individual" are lined up in ascending order.
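The column formulas above can be checked with a short script. The following sketch follows the table exactly; the four-group data A and E are made-up example values used only for illustration.

```python
import math

# Made-up example data: four groups, A = members per group, E = income per group.
A = [25, 25, 25, 25]
E = [5, 10, 20, 65]

# Sort groups by income per individual (required only for the Gini column).
groups = sorted(zip(A, E), key=lambda ae: ae[1] / ae[0])

sum_A = sum(a for a, _ in groups)
sum_E = sum(e for _, e in groups)

K = 0.0                        # accumulated income
sum_G = sum_H = sum_T = 0.0
for a, e in groups:
    e_bar = e / a              # income per individual in the group
    D = e / sum_E - a / sum_A  # relative deviation
    K += e
    sum_G += (2 * K - e) * a
    sum_H += abs(D)
    sum_T += math.log(e_bar) * D

gini = 1 - sum_G / (sum_A * sum_E)
hoover = sum_H / 2
theil_sym = sum_T / 2          # symmetrized Theil index, as in the table

e_bar_total = sum_E / sum_A
print("Gini  :", gini, " welfare:", e_bar_total * (1 - gini))
print("Hoover:", hoover, " welfare:", e_bar_total * (1 - hoover))
print("Theil :", theil_sym, " welfare:", e_bar_total * (1 - theil_sym))
```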
## Proper use of income inequality metrics
1. When using income metrics, it has to be made clear how income should be defined. Should it include capital gains, imputed house rents from home ownership, and gifts? If these income sources or alleged income sources (in the case of "imputed rent") are ignored (as they often are), how might this bias the analysis? How should non-paid work (such as parental childcare or doing one's own cooking instead of hiring a chef for every meal) be handled? Wealth or consumption may be more appropriate measures in some situations. Broader quality of life metrics might be useful.
2. The comparison of inequality measures requires that the segmentation of compared groups (societies etc.) into quintiles should be similar.
3. Distinguish properly, whether the basic unit of measurement is households or individuals. The Gini value for households is always lower than for individuals because of income pooling and intra-family transfers. And households have a varying amount of members. The metrics will be influenced either upward or downward depending on which unit of measurement is used.
4. Consider life cycle effects. In most Western societies, an individual tends to start life with little or no income, gradually increase income till about age 50, after which incomes will decline, eventually becoming negative. This affects the conclusions which can be drawn from a measured inequality. It has been estimated (by A.S. Blinder in The Decomposition of Inequality, MIT press) that 30% of measured income inequality is due to the inequality an individual experiences as they go through the various stages of life.
5. Clarify whether real or nominal income distributions should be used. What effect will inflation have on absolute measures? Do some groups (e.g., pensioners) feel the effect of inflation more than others?
6. When drawing conclusions from inequality measurements, consider how we should allocate the benefits of government spending. How does the existence of a social security safety net influence the definition of absolute measures of poverty? Do government programs support some income groups more than others?
7. Inequality metrics measure inequality. They do not measure possible causes of income inequality. Some alleged causes include: life cycle effects (age), inherited characteristics (IQ, talent), willingness to take chances (risk aversion), the leisure/industriousness choice, inherited wealth, economic circumstances, education and training, discrimination, and market imperfections.
Keeping these points in mind helps to understand the problems caused by the improper use of inequality measures. However, they do not render inequality coefficients invalid. If inequality measures are computed in a well explained and consistent way, they can provide a good tool for quantitative comparisons of inequalities.
## Inequality, growth, and progress
Evidence from a broad panel of recent academic studies shows that there is a nonlinear relation between income inequality and the rate of growth and investment. Very high inequality slows growth; moderate inequality encourages growth. Studies differ on the effect of very low inequality.
Robert J. Barro, Harvard University found in his study "Inequality and Growth in a Panel of Countries" that higher inequality tends to retard growth in poor countries and encourage growth in well-developed regions.[15]
In their study for the World Institute for Development Economics Research, Giovanni Andrea Cornia and Julius Court (2001) reach slightly different conclusions.[16] The authors therefore recommend pursuing moderation also in the distribution of wealth and, particularly, avoiding the extremes. Both very high egalitarianism and very high inequality cause slow growth. Considering the inequalities in economically well developed countries, public policy should target an ‘efficient inequality range’. The authors claim that such an efficiency range roughly lies between the values of the Gini coefficients of 25 (the inequality value of a typical Northern European country) and 40 (that of countries such as the USA, France, Germany and the UK).
Another researcher (W.Kitterer[17]) has shown that in perfect markets inequality does not influence growth.
The precise shape of the inequality-growth curve obviously varies across countries depending upon their resource endowment, history, remaining levels of absolute poverty and available stock of social programs, as well as on the distribution of physical and human capital.
http://gilkalai.wordpress.com/2008/11/04/a-diameter-problem-6-abstract-objective-functions/?like=1&source=post_flair&_wpnonce=0619346065
Gil Kalai's blog
## A Diameter Problem (6): Abstract Objective Functions
Posted on November 4, 2008 by Gil Kalai
George Dantzig and Leonid Khachiyan
In this part we will not progress on the diameter problem that we discussed in the earlier posts but will rather describe a closely related problem for directed graphs associated with ordered families of sets. The role models for these directed graphs are the directed graphs of polytopes where the direction of the edges is described by a linear objective function.
### 7. Linear programming and the simplex algorithm.
Our diameter problem for families of sets was based on a mathematical abstraction (and a generalization) of the Hirsch Conjecture which asserts that the diameter of the graph $G(P)$ of a $d$-polytope $P$ with $n$ facets is at most $n-d$. Hirsch, in fact, made the conjecture also for graphs of unbounded polyhedra – namely the intersection of $n$ closed halfspaces in $R^d$. But in the unbounded case, Klee and Walkup found a counterexample with diameter $n-d+\lfloor d/5\rfloor$. The abstract problem we considered extends also to the unbounded case and $n-d+\lfloor d/5\rfloor$ is the best known lower bound for the abstract case as well. It is not known if there is a polynomial (in terms of $d$ and $n$) upper bound for the diameter of graphs of $d$-polytopes with $n$ facets.
Hirsch’s conjecture was motivated by the simplex algorithm for linear programming. Let us talk a little more about it: Linear programming is the problem of maximizing a linear objective function $\phi(x)=b_1x_1+ b_2 x_2 \dots +b_dx_d$ subject to a system of n linear inequalities in the variables $x_1,x_2,\dots,x_d$.
$a_{11}x_1 + a_{12}x_2 + \dots + a_{1d}x_d \le c_1$,
$a_{21}x_1 + a_{22}x_2 + \dots + a_{2d}x_d \le c_2$,
…
$a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nd}x_d \le c_n$.
The set of solutions to the system of inequalities is a convex polyhedron. (If it is bounded it is a polytope.) A linear objective function makes a graph of a polytope (or a polyhedron) into a digraph (directed graph). If you like graphs you would love digraphs, and if you like graphs of polytopes, you would like the digraphs associated with them.
The geometric description of Dantzig’s simplex algorithm is as follows: the system of inequalities describes a convex d-dimensional polyhedron $P$. (This polyhedron is called the feasible polyhedron.) The maximum of $\phi$ is attained at a face $F$ of $P$. We start with an initial vertex (extreme point) $v$ of the polyhedron and look at its neighbors in $G(P)$. Unless $v \in F$ there is a neighbor $u$ of $v$ that satisfies $\phi(u) > \phi (v)$. When you find such a vertex move from $v$ to $u$ and repeat!
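To make the geometric description concrete, here is a small illustrative sketch, not an actual simplex implementation: it brute-forces the vertices of a toy 2-dimensional feasible polygon, builds the graph $G(P)$, and walks along improving edges until no neighbor has a larger objective value. The particular LP and the "first improving neighbor" pivot rule are my own choices for illustration.

```python
import itertools
import numpy as np

# Toy LP: maximize x + 2y subject to
#   -x <= 0, -y <= 0, x + y <= 4, x <= 3, y <= 3.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
c = np.array([0.0, 0.0, 4.0, 3.0, 3.0])
obj = np.array([1.0, 2.0])
EPS = 1e-9

# Enumerate vertices: intersect every pair of constraint lines, keep feasible points.
vertices = []   # list of (point, frozenset of tight constraint indices)
for i, j in itertools.combinations(range(len(c)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < EPS:
        continue                          # parallel constraints, no vertex
    x = np.linalg.solve(M, c[[i, j]])
    if np.all(A @ x <= c + EPS):          # feasibility check
        tight = frozenset(k for k in range(len(c)) if abs(A[k] @ x - c[k]) < EPS)
        vertices.append((x, tight))

# In this 2-dimensional, non-degenerate example two vertices are adjacent
# exactly when they share one tight constraint (d - 1 = 1).
def neighbors(v_idx):
    _, tight = vertices[v_idx]
    return [w for w, (_, t) in enumerate(vertices)
            if w != v_idx and len(tight & t) == 1]

# Geometric simplex walk: start at (0, 0) and keep moving to a better neighbor.
current = next(k for k, (p, _) in enumerate(vertices) if np.allclose(p, [0, 0]))
while True:
    better = [w for w in neighbors(current)
              if obj @ vertices[w][0] > obj @ vertices[current][0] + EPS]
    if not better:
        break
    current = better[0]                   # naive pivot rule: first improving neighbor
print("optimal vertex:", vertices[current][0], "value:", obj @ vertices[current][0])
```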
### 8. Abstract objective functions and unique sink orientation.
Let $P$ be a simple d-polytope and let $\phi$ be a linear objective function which is not constant on any edge of the polytope. Remember, the graph of $P$, $G(P)$ is a $d$-regular graph. We can now direct every edge $u,v$ from $u$ to $v$ if $\phi (v) > \phi (u)$. Here are two important properties of this digraph.
(AC) It is acyclic! (no cycles)
(US’) It has a unique SINK, namely a unique vertex such that all edges containing it are directed towards it.
The unique sink property is in fact the property that enables the simplex algorithm to work!
When we consider a face of the polytope $F$ and its own graph $G(F)$ then again our linear objective function induces an orientation of the edges of $G(F)$ which is acyclic and also has the unique sink property. Every subgraph of an acyclic graph is acyclic. But having the unique sink property for a graph does not imply it for a subgraph. We can now describe the general unique sink properties of digraphs of polytopes:
(US) For every face F of the polytope, the directed graph induced on the vertices of $F$ has a unique sink.
A unique sink acyclic orientation of the graph of a polytope is an orientation of the edges of the graph which satisfies properties (AC) and (US).
An abstract objective function of a $d$-polytope is an ordering $<$ of the vertices of the polytope such that the directed graph obtained by directing an edge from $u$ to $v$ if $u<v$ is a unique sink acyclic orientation. (Of course, coming from an ordering the orientation is automatically acyclic.)
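As a sanity check of (AC) and (US), the following sketch (my own illustration) orients the graph of the 3-dimensional cube by a generic linear objective and verifies that every face has a unique sink; acyclicity is automatic here because the orientation comes from an ordering of the vertices.

```python
import itertools

d = 3
vertices = list(itertools.product((0, 1), repeat=d))
weights = (1.0, 2.1, 4.3)                      # generic: no two vertices share a value
phi = lambda v: sum(w * x for w, x in zip(weights, v))

# Faces of the cube: fix the coordinates in a subset S to prescribed 0/1 values.
def face_vertices(fixed):                      # fixed: dict coordinate -> 0/1
    return [v for v in vertices if all(v[i] == b for i, b in fixed.items())]

def has_unique_sink(face):
    sinks = 0
    for v in face:
        # neighbors inside the face differ from v in exactly one free coordinate
        nbrs = [u for u in face if sum(a != b for a, b in zip(u, v)) == 1]
        if all(phi(u) < phi(v) for u in nbrs):  # v is a sink of the induced digraph
            sinks += 1
    return sinks == 1

for k in range(d + 1):                          # faces of dimension d - k
    for S in itertools.combinations(range(d), k):
        for vals in itertools.product((0, 1), repeat=k):
            face = face_vertices(dict(zip(S, vals)))
            assert has_unique_sink(face), (S, vals)
print("every face of the 3-cube has a unique sink under this objective")
```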
### 9. Questions and answers regarding the simplex algorithm.
Q: What is a polyhedron, is it just a fancy name for a polytope?
A: A polyhedron is the intersection of closed half spaces in $R^d$. A bounded polyhedron is a polytope.
Q: How do you find the initial feasible vertex v?
A: Ohh, good point. Usually you need a first stage of the algorithm to reach a feasible vertex. This is sometimes referred to as Phase 1 of the algorithm, and moving from a feasible vertex to the optimal one is called Phase 2. But you can transform every LP problem to another one in which the origin is a vertex of the feasible polyhedron, so for the purpose of studying the worst-case behavior of the simplex algorithm it is enough to study Phase 2.
Q: How do you choose to which neighbor to move?
A: Ahh, this is a good question. Often, there are many ways to do it and a rule for making the choice is called a pivot rule. Which pivot rule to take, for theoretical purposes as well as practical purposes is important.
Q: Is the simplex algorithm a polynomial time algorithm?
A: We do not know any pivot rule that leads to a polynomial algorithm in the sense that the number of pivot steps is bounded above by a polynomial function of $d$ and $n$.
Q: Is there a polynomial algorithm for LP?
A: Yes, Khachiyan proved in 1979 that the Nemirovski-Shor ellipsoid algorithm is a polynomial time algorithm for LP.
Q: Didn’t you neglect to mention some important things?
A: Quite a few. In particular, I ignored issues of degeneracy, for example if the feasible polyhedron is not simple.
Before we go on to describe even more abstract objective functions, let me recall section 2 about the connection between the abstract combinatorial graphs based on families of sets and the graphs of polytopes. If this is already fresh in your memory you can safely skip it.
### 2. The connection of our abstract setting with the Hirsch Conjecture, recalled
The Hirsch Conjecture asserts that the diameter of the graph G(P) of a d-polytope P with n facets is at most n-d. Not even a polynomial upper bound for the diameter in terms of d and n is known. Finding good upper bounds for the diameter of graphs of d-polytopes is one of the central open problems in the study of convex polytopes. If d is fixed then a linear bound in n is known, and the best bound in terms of d and n is $n^{\log d+1}$. We will come back to these results later.
One basic fact to remember is that for every d-polytope P, G(P) is a connected graph. As a matter of fact, a theorem of Balinski asserts that G(P) is d-connected.
The combinatorial diameter problem I mentioned in an earlier post (and which is repeated below) is closely related. Let me now explain the connection.
Let P be a simple d-polytope. Suppose that P is determined by n inequalities, and that each inequality describes a facet of P. Now we can define a family $\cal F$ of subsets of {1,2,…,n} as follows. Let $E_1,E_2,\dots,E_n$ be the n inequalities defining the polytope P, and let $F_1,F_2,\dots, F_n$ be the n corresponding facets. Every vertex v of P belongs to precisely d facets (this is equivalent to P being a simple polytope). Let $S_v$ be the indices of the facets containing v, or, equivalently, the indices of the inequalities which are satisfied as equalities at v. Now, let $\cal F$ be the family of all sets $S_v$ for all vertices of the polytope P.
The following observations are easy.
(1) Two vertices v and w of P are adjacent in the graph of P if and only if $|S_v \cap S_w|=d-1$. Therefore, $G(P)=G({\cal F})$.
(2) If A is a set of indices, then the vertices v of P such that $A \subset S_v$ are precisely the set of vertices of a lower dimensional face of P. This face is described by all the vertices of P which satisfy all the inequalities indexed by $i \in A$, or equivalently all vertices in P which belong to the intersection of the facets $F_i$ for $i \in A$.
Therefore, for every $A \subset N$, if ${\cal F}[A]$ is not empty the graph $G({\cal F}[A])$ is connected – this graph is just the graph of some lower dimensional polytope. This was the main assumption in our abstract problem.
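The observations above are easy to test on a concrete example. The sketch below (my own illustration) builds the family $\cal F$ for the 3-dimensional cube, recording each vertex by the set of facets containing it, and checks observation (1) together with one instance of observation (2).

```python
import itertools

d = 3
vertices = list(itertools.product((0, 1), repeat=d))

# The cube 0 <= x_i <= 1 has 2d facets: {x_i = 0} and {x_i = 1}.  Index them 0..2d-1.
def S(v):
    """Indices of the facets containing the vertex v (d of them, one per coordinate)."""
    return frozenset(2 * i + v[i] for i in range(d))

family = {v: S(v) for v in vertices}

# Observation (1): u, v adjacent on the cube  <=>  |S_u ∩ S_v| = d - 1.
for u, v in itertools.combinations(vertices, 2):
    adjacent_on_cube = sum(a != b for a, b in zip(u, v)) == 1
    assert adjacent_on_cube == (len(family[u] & family[v]) == d - 1)

# Observation (2), one instance: the sets containing facet index 0 (the facet x_1 = 0)
# are exactly the vertices of a (d-1)-dimensional face, here a square.
face = [v for v in vertices if 0 in family[v]]
print(len(face), "vertices lie on the facet x_1 = 0")   # prints 4
```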
### 10. Even more abstract objective functions.
We will not discuss actual pivot rules for linear programming in this thread of posts. This is an interesting topic that we may discuss separately. Linear objective functions transform the graph of the polytope into a directed graph. We replaced graphs of polytopes by very abstract and general graphs associated to families of sets. What about digraphs of polytopes?
Let ${\cal F} = (S_1,S_2,\dots, S_t)$ be an ordered family of $d$-subsets of {1,2,…,n}. Define a digraph or a directed graph $G({\cal F})$ as a digraph whose vertex set is $\cal F$ and which has a directed edge from $S_i$ to $S_j$ if $|S_i \cap S_j|=d-1$.
Let ${\cal F}_r = (S_r,S_{r+1},\dots, S_t)$.
Make the following assumption:
(*) For every $r$, $G({\cal F}_r)$ is connected. Moreover, for every $r$ and every subset $A \subset \{1,2,\dots, n\}$ the graph $G({\cal F}_r[A])$ is connected. (In words, the graph which corresponds to all sets in the family that come after $S_r$ and contain $A$ is connected.)
Now we can define a directed graph by orienting an edge $(S_i ,$ $S_j)$ from $S_i$ to $S_j$ if $i<j$.
Starting from a simple d-polytope $P$ with $n$ facets, we associated to $P$ a family of sets that correspond to the vertices of $P$. When we have an objective function $\phi$, we can order the vertices of $P$ and thus obtain an ordered family that satisfies the assumption (*). If we start with a simple $d$ polytope with $n$ facets and order the sets which correspond to its vertices according to an abstract objective function we get a more general class of ordered families satisfying (*).
### 11. Shellability again
Remember the notion of shellability in Kimmo Eriksson's poem? Let $K$ be the ideal or simplicial complex spanned by the family $\cal F$. So $K = \{ R \subset \{1,2,\dots, n\}: R \subset S_i$ for some $i, 1 \le i \le t \}$. To say that the ordering of $\cal F$ is an abstract objective function is equivalent to the statement that $K$ is shellable and the ordering $S_1,S_2,\dots, S_t$ is a shelling order on $K$.
One important consequence of this observation is that not every family of sets satisfying our connectivity conditions can be ordered as to satisfy our new connectivity relation (*).
### 12. Short directed path?
Here is the directed version of our diameter problem. Given an ordered family of sets satisfying our condition (*), we can always have a directed path from every $S_r$ to $S_t$. Can we always guarantee a path of length $n$? A path of length bounded by some $n^c$ for some $c>0$?
http://nrich.maths.org/319/solution
# Good Approximations
##### Stage: 5 Challenge Level:
In this example we see continued fractions used to give rational approximations to irrational numbers.
The following solution was done by Ling Xiang Ning, Raffles Institution, Singapore.
Using the quadratic formula to solve the equation
$x^2 = 7x + 1$
$x^2 - 7x - 1 = 0$
I find that $x$ is $(7 \pm \sqrt{53} )/2$. The positive solution is approximately $7.140054945$
This equation is equivalent to $x = 7 + 1/x$ and hence to the sequence of continued fractions mentioned in the problem. These continued fractions give better and better approximations to the positive root of the quadratic equation and I shall do them one by one.
$7+\frac{1}{7} = \frac{50}{7} = 7.142857142$
$7+\frac{1}{7+\frac{1}{7}} = \frac{357}{50} = 7.14$
$7+\frac{1}{7+\frac{1}{7+\frac{1}{7}}} = \frac{2549}{357} = 7.140056022$
$7+\frac{1}{7+\frac{1}{7+\frac{1}{7 + \frac{1}{7}}}} = \frac{18200}{2549} = 7.140054923$
To find a rational approximation to $\sqrt{53}$ we take, as above, $$\frac{7 + \sqrt{53}}{2} \approx \frac{2549}{357}$$
which gives $$\sqrt{53} \approx 2 (\frac{2549}{357}) - 7 \approx \frac{2599}{357}.$$
Similarly, using the equation $x^2 = 5x + 1$, which has solutions $\frac{5\pm \sqrt{29}}{2}$, we can find a rational approximation to $\sqrt{29}$. The positive root is approximately $5.192582404$. The sequence of continued fractions is:
$5+\frac{1}{5} = \frac{26}{5} = 5.2$
$5+\frac{1}{5+\frac{1}{5}} = \frac{135}{26} = 5.192307692$
$5+\frac{1}{5+\frac{1}{5+\frac{1}{5}}} = \frac{701}{135} = 5.192592592$
$5+\frac{1}{5+\frac{1}{5+\frac{1}{5+\frac{1}{5}}}} = \frac{3640}{701} = 5.192582025$
As you can see, the sequence of continued fractions gives better and better approximations to the positive root of the quadratic equation. Using $\frac{5 + \sqrt{29}}{2} \approx \frac{3640}{701}$ gives $\frac{3775}{701}$ as a rational approximation to $\sqrt{29}$.
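For readers who want to reproduce the convergents, here is a short script (my own addition) that iterates $x \mapsto a + 1/x$ with exact rational arithmetic and prints the resulting approximations to $\sqrt{53}$ and $\sqrt{29}$.

```python
from fractions import Fraction
import math

def convergents(a, steps):
    """Convergents of the continued fraction a + 1/(a + 1/(a + ...)), starting from a itself."""
    x = Fraction(a)
    out = [x]
    for _ in range(steps):
        x = a + 1 / x
        out.append(x)
    return out

for a, n in ((7, 53), (5, 29)):
    cs = convergents(a, 4)
    root = (a + math.sqrt(a * a + 4)) / 2        # positive root of x^2 = a*x + 1
    for c in cs:
        print(f"{c} = {float(c):.9f}   (target {root:.9f})")
    # Since x = (a + sqrt(a^2 + 4))/2, we get sqrt(a^2 + 4) ~ 2*c - a.
    approx = 2 * cs[-1] - a
    print(f"sqrt({n}) ~ {approx} = {float(approx):.9f} vs {math.sqrt(n):.9f}\n")
```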
http://math.stackexchange.com/questions/169879/circular-random-walk
# Circular random walk
Suppose we have a circumference divided into N arcs of the same length. A particle can move on the circumference by jumping from an arc to an adjacent one, with probability $P_{k \to k-1}=P_{k\to k+1}=\frac{1}{2}$. I have some difficulty finding the probability that the particle has completed at least one full turn of the circumference after $M$ jumps, with $M\gt N$. Thanks
-
Are you asking about the first time when the particle visited at least once every arc? – Did Jul 12 '12 at 13:41
@did: yes the interpretation is correct – Riccardo.Alestra Jul 12 '12 at 15:54
## 1 Answer
The time needed to visit at least once each vertex of a graph is called the cover time of the graph. Let $C_n$ denote the cover time of the discrete circle of size $n$. Then $\mathrm E(C_n)=\frac12n(n-1)$. The expression of the distribution of $C_n$ for any fixed $n$ is cumbersome but a result due to Jean-Pierre Imhof in 1985 describes the asymptotics as follows: when $n\to\infty$, $n^{-2}C_n\to C$ in distribution, where the distribution of $C$ has density $$\sqrt{\frac{8}{\pi t^3}}\cdot\sum_{k=1}^{+\infty}(-1)^{k-1}k^2\mathrm e^{-k^2/(2t)}\cdot[t\gt0].$$ A different, equivalent, expression of the limit distribution is in Amine Shihi's PhD thesis.
A related (and somewhat counterintuitive) result might be worth mentioning: for every $n\geqslant2$, the last vertex visited, that is, the position of the particle at time $C_n$ is uniformly distributed on the $n-1$ vertices which are not the starting one.
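A quick Monte Carlo check of both claims (my own addition, not part of the original answer): it estimates $\mathrm E(C_n)$ and tabulates the last vertex visited for the cycle of size $n=10$.

```python
import random
from collections import Counter

def cover_time_and_last(n):
    """Simple random walk on the n-cycle started at 0: return (cover time, last new vertex)."""
    visited = {0}
    pos, steps, last = 0, 0, 0
    while len(visited) < n:
        pos = (pos + random.choice((-1, 1))) % n
        steps += 1
        if pos not in visited:
            visited.add(pos)
            last = pos
    return steps, last

n, trials = 10, 20000
results = [cover_time_and_last(n) for _ in range(trials)]
mean_cover = sum(c for c, _ in results) / trials
print("empirical E[C_n]:", mean_cover, " theory n(n-1)/2 =", n * (n - 1) / 2)
print("distribution of the last vertex visited (should be roughly uniform on 1..n-1):")
print(sorted(Counter(last for _, last in results).items()))
```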
http://mathoverflow.net/questions/118169/is-there-a-relationship-between-entropy-of-a-fininte-distrete-probability-distrib
## Is there a relationship between the entropy of a finite discrete probability distribution and the square sum of the values of the probability mass function of that distribution?
Sorry for the long title. What I mean is the following: for two vectors $(a_1,\ldots,a_n)$ and $(b_1,\ldots,b_n)$ with the property $a_i,b_i \geq 0$ and $\sum a_i =\sum b_i =1$, does $-\sum a_i\log(a_i) > -\sum b_i\log(b_i)$ imply $\sum a_i^2 < \sum b_i^2$, or something similar?
-
Certainly not as you ask it. If h(a)>h(b) implied |a|_2<|b|_2, then h(b)>h(a) would imply |b|_2<|a|_2 and you would get h(a)>h(b) if and only if |a|_2<|b|_2. This would mean that the entropy and the L2 norm are measuring the same thing (they're not). – Anthony Quas Jan 6 at 4:07
@Anthony I am not sure if it can be true under this specific condition. – gstar2002 Jan 6 at 10:42
and I think for the case n = 2, it is true. – gstar2002 Jan 6 at 21:51
## 1 Answer
Of course, in general it is not true that inequality between entropies implies the same inequality between $\ell^2$ norms. Although this is true for $n=2$, it fails already for $n=3$ (as the entropy $H(p_1,p_2,p_3)$ is obviously not constant on the level curve determined by conditions $\sum p_i=1$ and $\sum p_i^2=Const$).
Nonetheless, there is a deep link between these two quantities. In order to explain it, it is better to somewhat change the viewpoint. Namely, given two probability distributions $P=(p_1,\dots,p_n)$ and $Q=(q_1,\dots,q_n)$ (for simplicity I assume that all $p_i,q_i$ are strictly positive), the corresponding Kullback-Leibler deviation is defined as $$I(Q|P) = \sum p_i \log \frac{p_i}{q_i} \;,$$ and the so-called information energy as $$\chi^2(Q,P) = \sum \biggl(\frac{p_i}{q_i}-1\biggr)^2 q_i \;.$$ Both these quantities measure "closedness" of $P$ and $Q$ (although they are not distances) and are monotone invariants of the pair $(Q,P)$ (they do not increase under quotient maps). If $Q=U_n$ is the uniform distribution, then these quantities are, up to linear rescaling, precisely the entropy and the $\ell^2$-norm of $P$, respectively, as $$I(U_n|P) = \log n - H(P)$$ and $$\chi^2(U_n,P) = n \sum p_i^2 - 1 \;.$$
Now, the link between $I$ and $\chi^2$ is provided by the fact that in naturally defined infinitesimal limits the Kullback-Leibler deviation $I$ and the information energy $\chi^2$ coincide and produce the Fisher information metric. One can read more about this in the corresponding wiki articles, in the old book Statistical decision rules and optimal inference by Čencov (AMS, 1982), or in more recent publications on information geometry.
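A short numerical illustration of the two identities above, together with a concrete $n=3$ pair showing that the implication asked about in the question fails; the specific vectors are my own example, not taken from the answer.

```python
import math

def H(p):        # Shannon entropy (natural log)
    return -sum(x * math.log(x) for x in p if x > 0)

def sumsq(p):    # sum of squares of the probabilities
    return sum(x * x for x in p)

def KL(p, q):    # Kullback-Leibler deviation, sum p_i log(p_i/q_i)
    return sum(x * math.log(x / y) for x, y in zip(p, q) if x > 0)

def chi2(p, q):  # information energy, sum (p_i/q_i - 1)^2 q_i
    return sum((x / y - 1) ** 2 * y for x, y in zip(p, q))

n = 3
u = [1 / n] * n
p = [0.2, 0.3, 0.5]
print(KL(p, u), "vs", math.log(n) - H(p))      # I(U_n|P) = log n - H(P)
print(chi2(p, u), "vs", n * sumsq(p) - 1)      # chi^2(U_n,P) = n*sum p_i^2 - 1

# A pair showing that larger entropy does NOT force a smaller sum of squares (n = 3):
a = [0.49, 0.49, 0.02]
b = [0.68, 0.16, 0.16]
print("H(a), H(b):      ", H(a), H(b))         # H(b) > H(a)
print("sum a^2, sum b^2:", sumsq(a), sumsq(b)) # yet sum b^2 > sum a^2
```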
-
@ R W, first off, thanks a lot for the very informative reply! But I still have one more question, what do you mean by "naturally defined infinitesimal limits "? do you mean n go to infinite? – gstar2002 Jan 15 at 22:11
No - infinitesimal here means that time goes to 0. The number of points remains fixed, but instead of a single distribution on $n$ points one looks at a family of distributions parametrized, say, by the interval $[0,1]$ (in other words, about a path in the space of distributions). – R W Jan 16 at 0:32
http://www.physicsforums.com/showpost.php?p=2947325&postcount=2
Mentor
Quote by bob1182006:
1. The problem statement, all variables and given/known data
Find the roots of: $$x^5-1=0$$
2. Relevant equations
Polynomial long division.
3. The attempt at a solution
$$x^5-1 = (x-1)(x^4+x^3+x^2+x+1) = 0$$
$$x^4+x^3+x^2+x+1 = (x^2+1)^2+x^3+x-x^2$$
$$(x^2+1)^2+x^3+x-x^2 = (x^2+1)^2+x(x^2+1)-x^2$$
Stuck at this point, I just can't seem to factor out something useful. I know all of the roots are complex but I need to be able to solve the problem without a computer.
Four of the roots of x^5 - 1 = 0 are complex and one is real (x = 1). The complex roots are located around the unit circle at 72 deg, 144 deg, 216 deg, and 288 deg. These can be represented in rectangular form, with the first one being cos(72 deg) + i sin(72 deg). The others can be represented similarly. I don't know if there's going to be a way to factor your fourth-degree factor.
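A quick numerical check of the above, added for illustration; the numeric root finder and the cos/sin form should agree.

```python
import numpy as np

# The five fifth roots of unity, from the polynomial x^5 - 1.
roots = np.roots([1, 0, 0, 0, 0, -1])
for r in sorted(roots, key=lambda z: np.angle(z) % (2 * np.pi)):
    deg = np.degrees(np.angle(r)) % 360
    print(np.round(r, 6), "at", round(float(deg), 1), "degrees")

# The same points written as cos(72k deg) + i sin(72k deg), k = 0..4:
for k in range(5):
    t = np.radians(72 * k)
    print(k, round(np.cos(t), 6) + 1j * round(np.sin(t), 6))
```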
http://mathhelpforum.com/advanced-statistics/28341-probability-distribution.html
1. ## Probability distribution
A rental agency, which leases heavy equipment by the day, has found that one expensive piece of equipment is leased, on the average only one day in five. If rental on one day is independent on rental on any other day, find the probability distribution of Y, the number of days between a pair of rentals.
Does this mean that the probability of each sample point is 1/5? I am totally lost on this one, and on how to solve it.
2. Originally Posted by somestudent2
A rental agency, which leases heavy equipment by the day, has found that one expensive piece of equipment is leased, on the average only one day in five. If rental on one day is independent on rental on any other day, find the probability distribution of Y, the number of days between a pair of rentals.
From the wording of the question, it is reasonable to assume that the probability can be modeled by a geometric distribution with $p = \frac {1}{5}$. That is, assuming that a piece of heavy equipment is leased on some day, the events, Y, are how many days pass until the next lease: $Y = 0,1,2,3, \cdots$.
Thus: $P(Y = n) = \frac{{4^n }}{{5^{n + 1} }}$.
3. Hi, thanks for your quick response. However I do not completely understand the solution since we haven't covered geometric distribution yet. Is there any way to solve this problem just using the basics of probability distribution? Also by your formula we get p(0)=1/5; p(1)=4/25; p(2)=16/125 etc but If n is the number of days between two rentals, shouldn't the probability increase as n increases (just common sense)?
thanks
4. Originally Posted by somestudent2: we get p(0)=1/5; p(1)=4/25; p(2)=16/125 etc but if n is the number of days between two rentals, shouldn't the probability increase as n increases?
That is counterintuitive.
Intuitively it should decrease. Don’t you think?
If not, why do you think otherwise?
5. Maybe then I just don't understand the problem. I thought of it as follows:
Let 1 represent a rent and 0 not a rent, the probability of 1 is 1/5, the prob of 0 is 4/5
110000 (means the rents are in two consecutive days) here n=0 and the probability must be the lowest?
100001 (means 2 rents 4 days apart ) here is n=4 and the probability must be 1?
since we are given that there is 1 rent in 5 days?
I am totally lost now.
6. Originally Posted by somestudent2
A rental agency, which leases heavy equipment by the day, has found that one expensive piece of equipment is leased, on the average only one day in five.
The operative phrase here is “on the average”. That does not mean that there is a rental every five days. In fact, the company could have a run of 15 days with no rentals. But experience has told them that in a 100 day period they can expect to have on average 20 rentals.
7. Thanks for your help Plato, I finally get it; in fact I was misinterpreting the given information in the problem.
I see how you arrived at this formula:
Since the events are independent we can multiply them to get the intersection.
Taking 4/5 for each of the n days with no rental, times 1/5 for the day with a rental,
it indeed comes out to be 4^n/5^(n+1).
Thanks again.
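For completeness, a small simulation (my own addition) that checks the geometric pmf P(Y=n) = 4^n/5^(n+1) against simulated day-by-day rentals:

```python
import random
from collections import Counter

p = 1 / 5          # probability that the equipment is rented on any given day
trials = 100_000
gaps = []
for _ in range(trials):
    # Y = number of non-rental days between one rental and the next.
    y = 0
    while random.random() >= p:
        y += 1
    gaps.append(y)

counts = Counter(gaps)
print(" n   simulated   4^n / 5^(n+1)")
for n in range(6):
    print(f"{n:2d}   {counts[n] / trials:.4f}      {4**n / 5**(n + 1):.4f}")
```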
http://crypto.stackexchange.com/questions/tagged/zero-knowledge-proofs+prime-numbers
# Tagged Questions
### How to construct a zero-knowledge proof of a number of the form $n=p^a q^b$
Let $n = p^a q^b$ where p and q are distinct primes and a and b are positive integers. How to construct a zero knowledge proof that n is of such form? This is actually a homework problem with a ...
http://mathoverflow.net/revisions/58010/list
## Return to Question
2 typo-b-gone
In Mitchell's book "Theory of Categories", Corollary I.16.8 (page 24) states that the following holds in any exact category:
Let $$0 \to A \to B \to C \to 0$$ $$0 \to B' \to B \to B'' \to 0$$ be short exact sequences. Then $B' \to B \to C$ is epi iff $A \to B \to B''$ is epi.
It seems to me that Mitchell's proof requires the existence of pushouts and pullbacks. Therefore I wonder if the corollary actually is true for any exact category. Can someone confirm this corollary?
The reason why I think Mitchell's proof requires pushouts and pullbacks is as follows: In a first step the two short exact sequences are embedded crosswise into a commutative diagram with three short exact columns and three short exact rows. But according to Proposition I.16.5 such a diagram only exists if some of its squares are a pushout or a pullback.
http://math.stackexchange.com/questions/165097/does-the-specification-of-a-general-sequence-require-the-axiom-of-choice/165103
# Does the specification of a general sequence require the Axiom of Choice?
Many results in elementary analysis require some form of the Axiom of Choice (often weaker forms, such as countable or dependent). My question is a bit more specific, regarding sequences.
For example, consider a standard proof of the boundedness theorem which states that a function continuous on a closed interval $I$ is bounded on that interval. In the first step of the proof, one specifies a sequence as follows:
Suppose for contradiction that $f$ is unbounded. Then for every $n\in\mathbb{N}$ there exists $x_n$ such that $f(x_n) > n$. This specifies a sequence $(x_n)$.
I'm not sure if the above example requires choice. To me, it certainly feels like it does. More specifically, I think that we are specifying a sequence of sets $$A(n) = \left\{x\in I\mid f(x) > n\right\}$$ and claiming the existence of a choice function $g$ such that $g(n) \in A(n)$ so that this example specifically requires the axiom of countable choice. Please clarify whether my reasoning is correct. More specifically, does the construction of any general sequence (such as one defined as above, or perhaps recursively) then require some form of choice? Thanks for any help.
-
I think it's probably equivalent to countable choice. – tomasz Jun 30 '12 at 23:33
Or a weak form of it. (The existence of the sequence, not boundedness.) – tomasz Jun 30 '12 at 23:44
## 1 Answer
Actually, if $f$ is continuous from a closed interval (say $[0,1]$) into $\mathbb R$ then you do not need the axiom of choice to prove it is bounded.
1. First observe that closed and bounded intervals are still compact even without the axiom of choice. The proof of this is quite nice and simple, let $\mathcal B$ be an arbitrary open cover of $[0,1]$, simply consider $x=\sup\{y\in[0,1]\mid [0,y]\text{ has a finite subcover in }\mathcal B\}$, deduce that $[0,x]$ is finitely covered as well, and then argue that we have to have $x=1$ (by the same reason).
2. We can deduce from the above that a subset of $\mathbb R$ is compact if and only if it is closed and bounded. If it is compact it cannot be unbounded, and it has to be closed since $\mathbb R$ is a Hausdorff space; on the other hand, if it is closed and bounded it is a subset of a closed interval, therefore closed in a compact space and thus compact.
3. It is still true that if $f$ is a continuous function from a compact set into a metric space then its image is compact. To see this simply note that every open cover of the image can be translated into an open cover of the compact domain, therefore we can take a finite subcover, and this translate to a finite subcover of the image.
Therefore the image of a continuous image of a closed interval is compact and the image attains minimal and maximal values (since the image is a closed set).
Also note that $A(n)$ as you specified it is simply the intersection of $I$ with an open set which is the preimage of $(n,\infty)$. Choosing from open sets is doable without the axiom of choice [3].
In general, when we simply produce one sequence we can sometimes avoid choice if we have a method of calculating the next element in a uniform way (induction is not a uniform way!). If we simply "take another element" then we end up using choice, but we can sometimes avoid these things (for example, instead of arbitrary $\delta$ take $\frac1k$ for the least $k$ fitting). It may even be possible, when needed just one sequence, to use the rational numbers. Those are countable and in particular well-orderable and we can choose from those as much as we want.
However sometimes we want to argue that non-trivial sequences exist, and for this we indeed have to have some choice. For example the proof that "$x_n\to a$ implies $f(x_n)\to f(a)$" implies $f\colon A\to\mathbb R$ is continuous at $a$ may break, because we need to argue for all sequences and not produce just one.
It may be the case that $A$ itself is Dedekind-finite, and every sequence has only finitely many terms (at least one of those repeating, of course) so in the above case $x_n\to a$ implies that almost always $x_n=a$, but we can make sure that $f$ is not continuous at $a$. Indeed in such $A$ there are only finitely many rational numbers, and pulling the trick of choosing rationals no longer works.
-
I did. :) Sorry if I came out as overly picky, I was genuinely confused at first. – tomasz Jun 30 '12 at 23:56
A different way of looking at (I think) the same argument is that you can often get away with simply deciding that the numbers you chose must be rational. It is fairly rare that you have more than one choice in each step without having an entire interval to choose from. And since the rationals are countable (and so in particular well-orderable), you can just fix a global well-ordering of them and decide to always pick the first rational that satisfies the condition. – Henning Makholm Jul 1 '12 at 0:10
@Henning: Indeed this breaks down only when arbitrary sequences needs to be considered, and then rationals cannot cover the whole thing. This is the essence of the counterexample with the continuity. – Asaf Karagila Jul 1 '12 at 0:12
Thank you for the detailed answer Asaf. A few follow-up questions. 1. So does the original argument given in fact require choice? 2. You pointed out that $A(n)$ as I specified is a union of an open pre-image and a closed interval and that you can choose from open sets without choice. How exactly does this translate into a construction of a sequence in this case? (I guess I am asking for an explicit construction of the sequence) – EuYu Jul 1 '12 at 4:12
@EuYu: First observe that every sequence that can be specified without choice can be simply said to exist without it. The original argument "This specifies a sequence" has a bit of choice in flavour, but adding "we choose from open sets in an interval, so no choice is needed" can be added to remove choice. As for the construction see the third link [part 1 and the addendum talk on choices], and remember that an open set in $[0,1]$ is an open set in $\mathbb R$ intersected with the interval, so it is either empty or it contains an interval so the argument holds. – Asaf Karagila Jul 1 '12 at 6:36
http://mathoverflow.net/questions/32667/numerical-mathemtics-how-to-solve-hexagonal-central-differences
## [Numerical Mathematics] How to solve hexagonal central differences
I want to simulate a 2d linear wave equation on a circle ($\displaystyle\frac{\partial^2 z(x,y,t)}{\partial t^2}=v^2\cdot\left(\displaystyle\frac{\partial^2 z(x,y,t)}{\partial x^2}+\displaystyle\frac{\partial^2 z(x,y,t)}{\partial y^2}\right)$).
To have a more significant result, I decided to use a hexagonal pattern (each point has 6 closest points at equidistant distance) as shown here: http://upload.wikimedia.org/wikipedia/en/8/81/Uniform_polyhedron-63-t0.png where the white dots are discrete checkpoints which describe the actual value of the wave at a certain time.
To solve the problem, I want to use central differences to calculate a new situation out of the previous 2 (in time). How can I convert the central differences (which use $x,y$ values; however you rotate the situation, there is at most one dimension that fits) into the checkpoints based on the hexagonal structure?
I suppose I have to interpolate the points of a square structure out of the hexagonal points, or are there better/faster ways?
-
## 1 Answer
Write `$X_{B,b} = \{\alpha \in \mathbb{Z}^B : \sum_j \alpha_j = b\}$`. Now using the convention $0^0 \equiv 1$, define the matrix $W_{\alpha, \alpha'} := \alpha^{\alpha'}$. For arbitrary $f:X_{B,b} \rightarrow \mathbb{K}$ we can write $f_\alpha \equiv \sum_{\alpha'} c_{\alpha'} \alpha^{\alpha'}$, from which it follows that $c = W^{-1}f$. The facts that this procedure is well-defined, and that $W$ possesses an inverse, follow from a result in multivariate interpolation assuring us that the Lagrange interpolation problem on $X_{B,b}$ is "poised".
Actually, although generic discrete point sets admit a specific multivariate Lagrange interpolation protocol that satisfies many desirable properties, only $X_{B,b}$ does it so beautifully. As a result, we obtain a Lagrange interpolation: `$f_{\mathfrak{I}}(x) := \sum_\alpha (W^{-1}f)_\alpha x^\alpha$` which satisfies `$f_{\mathfrak{I}}(\alpha) = f(\alpha)$`.
You can use this to define differencing schemes on a triangular (or hexagonal by suitable dual hand-waving) grid by considering $B = 3$. An example of the interpolation is shown.
Define `$d_{\mathfrak{I}} f := d(f_{\mathfrak{I}})|_{X_{B,b}}$`. Note (e.g.) that
`$\partial_j f_{\mathfrak{I}} = \sum_\alpha (W^{-1} f)_\alpha \partial_j x^\alpha = \sum_\alpha (W^{-1} f)_\alpha \frac{\alpha_j}{x_j} \partial_j x^\alpha$`
(suitably interpreted) is easy to compute in silico. Explicitly, set
`$\left(W_{(\partial_j)}\right)_{\alpha, \alpha'} := \frac{\alpha'_j}{\alpha_j} \alpha^{\alpha'}, \quad \left(\mathcal{W}_{(\partial_j)}\right)_{x, \alpha'} := \frac{\alpha'_j}{x_j} x^{\alpha'}.$`
Then
$\partial_j f_{\mathfrak{I}} = \mathcal{W}_{(\partial_j)} W^{-1} f, \quad \partial_j f \equiv W_{(\partial_j)} W^{-1} f.$
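To see the construction in action, here is a small numerical sketch (my own addition). It assumes, as seems intended, that $X_{B,b}$ consists of non-negative integer vectors; it builds $W$ for $B=3$, $b=2$, checks numerically that it is invertible, and verifies that the resulting interpolant reproduces a sample grid function on the nodes.

```python
import itertools
import numpy as np

B, b = 3, 2
# X_{B,b}: non-negative integer vectors of length B summing to b (assumed reading).
X = [a for a in itertools.product(range(b + 1), repeat=B) if sum(a) == b]

def power(alpha, beta):
    """alpha^beta = prod_j alpha_j^beta_j with the convention 0^0 = 1."""
    return np.prod([1.0 if (aj == 0 and bj == 0) else float(aj) ** bj
                    for aj, bj in zip(alpha, beta)])

W = np.array([[power(a, ap) for ap in X] for a in X])
print("condition number of W:", np.linalg.cond(W))   # finite here, so W is invertible

# Interpolate a sample grid function f on X and check f_I(alpha) = f(alpha).
f = np.array([1.0 + a[0] - 2 * a[1] + 0.5 * a[2] ** 2 for a in X])
c = np.linalg.solve(W, f)

def f_interp(x):
    return sum(cj * np.prod([x[j] ** ap[j] for j in range(B)]) for cj, ap in zip(c, X))

for a, fa in zip(X, f):
    assert abs(f_interp(np.array(a, dtype=float)) - fa) < 1e-8
print("interpolation reproduces f on X_{B,b}")
```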
-
BTW, all this can be done with nice periodic (permutohedral) boundary conditions. I'll also mention that since with six neighbors as you say, you're really on the triangular lattice (though you can nevertheless adapt stuff to the hexagonal graph) it may be simpler just to tilt an axis at 60 degrees. This is often done in 2D lattice (gas or Boltzmann) simulations. – Steve Huntsman Jul 20 2010 at 19:18
http://mathoverflow.net/questions/66727/bound-of-polynomial-on-product-space-in-terms-of-values-on-the-diagonal
## Bound of polynomial on product space in terms of values on the diagonal
We work in the multivariate case so that $x$ stands for $(x_1,\ldots, x_n)$. Let $q(x,y)$ be a symmetric matrix representation of a homogeneous polynomial $f(x)$ of degree $d$. Explicitly,
1. ($q$ is a bilinear form in the monomials of degree $d$.) That is, $q(x,y)$ is homogeneous of degree $d$ in $x$ and also of degree $d$ in $y$
2. ($f$ is the quadratic form associated to the bilinear form $q$.) That is, $q(x,x)=f(x)$
3. ($q$ is symmetric.) That is, $q(x,y)=q(y,x)$.
If we write $q(x,y)=\sum_{|\alpha|=|\beta|=d} a_{\alpha\beta}x^\alpha y^\beta$, then condition 3 means that $a_{\alpha\beta}=a_{\beta\alpha}$.
Suppose that $f$ is positive away from the origin. Is there an estimate of $|q|$ on the product of unit spheres $S^{n-1}\times S^{n-1}$ in terms of quantities involving $f$? For instance the supremum and infimum of $f$ on the unit sphere $S^{n-1}$.
-
Alas, no. Take $q(x,y)=(x_1y_2-x_2y_1)^2$. Then $f=0$. – fedja Jun 2 2011 at 19:06
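To see fedja's point numerically, here is a quick illustrative check (not from the thread): $q(x,y)=(x_1y_2-x_2y_1)^2$ vanishes identically on the diagonal, so $f\equiv 0$, yet $q$ attains $1$ on $S^1\times S^1$, e.g. at $x=(1,0)$, $y=(0,1)$.

```python
import numpy as np

def q(x, y):
    return (x[0]*y[1] - x[1]*y[0])**2

ts = np.linspace(0, 2*np.pi, 400)
circle = np.stack([np.cos(ts), np.sin(ts)], axis=1)

diag_max = max(q(x, x) for x in circle)                            # f = q(x,x) on S^1
off_max = max(q(x, y) for x in circle[::10] for y in circle[::10]) # q on S^1 x S^1

print(f"max q(x,x) on S^1       : {diag_max:.2e}")  # 0: f vanishes identically
print(f"max q(x,y) on S^1 x S^1 : {off_max:.2f}")   # close to 1
```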
http://mathoverflow.net/revisions/106544/list | Return to Question
4 added 17 characters in body
Given a polynomial $p(x_1,x_2,\ldots,x_d)$ in $d$ variables, with maximum degree $k$, what is the maximum number of components of $\mathbb{R}^d$ minus $p(\ldots)=0$? In other words, into how many pieces can an implicit polynomial equation partition $\mathbb{R}^d$?
For example, the following three equations partition $\mathbb{R}^2$ or $\mathbb{R}^3$ into $3$, $4$, and $2$ pieces respectively (I think!):
$$x^3 y^2+x^3 -3 x^2 y -y^2 +4 x y+x=0$$
$$x^6 y^8+x^3+4 x y-y=0$$
$$x^4+3 \left(x^2+y^4+z\right)- \left(x^2+y^2+z^2\right)^2+y^3+z^5 + 2 xy=3$$
Of course the answer is $k+1$ in $\mathbb{R}^1$. I suspect this is well known for $\mathbb{R}^d$; if so, I would appreciate a pointer. Thanks!
Update. Greg Martin's idea (from the comments), using the 5th Chebyshev polynomial of the first kind:
As Aaron Meyerowitz points out, here the degree $k=10$, and the plane is partitioned into $28$ pieces. But using Pietro Majer's line-arrangement idea leads to (now corrected:) $56$ pieces for a degree $10$ polynomial.
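An arithmetic aside on that count (not part of the revision history): if the degree-$10$ polynomial is taken to be a product of $10$ linear forms in general position, its zero set is an arrangement of $10$ lines, and the standard region count for $n$ lines in general position gives
$$1 + n + \binom{n}{2}\Big|_{n=10} = 1 + 10 + 45 = 56,$$
matching the corrected figure.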
3 added 202 characters in body
Given a polynomial $p(x_1,x_2,\ldots,x_d)$ in $d$ variables, with maximum degree $k$, what is the maximum number of components of $\mathbb{R}^d$ minus $p(\ldots)=0$? In other words, into how many pieces can an implicit polynomial equation partition $\mathbb{R}^d$?
For example, the following three equations partition $\mathbb{R}^2$ or $\mathbb{R}^3$ into $3$, $4$, and $2$ pieces respectively (I think!):
$$x^3 y^2+x^3 -3 x^2 y -y^2 +4 x y+x=0$$
$$x^6 y^8+x^3+4 x y-y=0$$
$$x^4+3 \left(x^2+y^4+z\right)- \left(x^2+y^2+z^2\right)^2+y^3+z^5 + 2 xy=3$$
Of course the answer is $k+1$ in $\mathbb{R}^1$. I suspect this is well known for $\mathbb{R}^d$; if so, I would appreciate a pointer. Thanks!
Update. Greg Martin's idea (from the comments), using the 5th Chebyshev polynomial of the first kind:
As Aaron Meyerowitz points out, here the degree $k=10$, and the plane is partitioned into $28$ pieces. But using Pietro Majer's line-arrangement idea leads to $58$ pieces for a degree $10$ polynomial.
2 added 253 characters in body
Given a polynomial $p(x_1,x_2,\ldots,x_d)$ in $d$ variables, with maximum degree $k$, what is the maximum number of components of $\mathbb{R}^d$ minus $p(\ldots)=0$? In other words, into how many pieces can an implicit polynomial equation partition $\mathbb{R}^d$?
For example, the following three equations partition $\mathbb{R}^2$ or $\mathbb{R}^3$ into $3$, $4$, and $2$ pieces respectively (I think!):
$$x^3 y^2+x^3 -3 x^2 y -y^2 +4 x y+x=0$$
$$x^6 y^8+x^3+4 x y-y=0$$
$$x^4+3 \left(x^2+y^4+z\right)- \left(x^2+y^2+z^2\right)^2+y^3+z^5 + 2 xy=3$$
Of course the answer is $k+1$ in $\mathbb{R}^1$. I suspect this is well known for $\mathbb{R}^d$; if so, I would appreciate a pointer. Thanks!
Update. Greg Martin's idea (from the comments), using the 5th Chebyshev polynomial of the first kind:
1
Partitions of $\mathbb{R}^d$ by implicit polynomial equations
Given a polynomial $p(x_1,x_2,\ldots,x_d)$ in $d$ variables, with maximum degree $k$, what is the maximum number of components of $\mathbb{R}^d$ minus $p(\ldots)=0$? In other words, into how many pieces can an implicit polynomial equation partition $\mathbb{R}^d$?
For example, the following three equations partition $\mathbb{R}^2$ or $\mathbb{R}^3$ into $3$, $4$, and $2$ pieces respectively (I think!):
$$x^3 y^2+x^3 -3 x^2 y -y^2 +4 x y+x=0$$
$$x^6 y^8+x^3+4 x y-y=0$$
$$x^4+3 \left(x^2+y^4+z\right)- \left(x^2+y^2+z^2\right)^2+y^3+z^5 + 2 xy=3$$
Of course the answer is $k+1$ in $\mathbb{R}^1$. I suspect this is well known for $\mathbb{R}^d$; if so, I would appreciate a pointer. Thanks!
http://mathhelpforum.com/discrete-math/90716-number-ways-7-distinct-objects-can-put.html | # Thread:
1. ## Number of ways in which 7 distinct objects can be put..?
Find the number of ways in which 7 distinct objects can be put in three identical boxes so that no box remains empty.
2. This is similar to a problem i had in an old course where you had to find integers x,y,z s.t. x + y + z = some +ve integer.
So i think this one can be reworded as the 3 boxes being x,y and z. And n being 7 distinct objects.
Since you want +ve integers what to do is set x' = x-1 etc and then you get x' + y' + z' = 4.
This is done by using the binomial expression $\binom{n+k-1}{k-1}$ where k is the number of 'boxes'
So we get $\binom{4+3-1}{3-1} = \binom{6}{2} = 15$.
3. Originally Posted by Deadstar
This is similar to a problem i had in an old course where you had to find integers w,x,y,z s.t. w + x + y + z = some +ve integer.
So i think this one can be reworded as the 4 boxes being w,x,y and z. And n being 7 distinct objects.
Since you want +ve integers what to do is set w' = w-1 etc and then you get w + x + y + z = 5.
This is done by using the binomial expression $\binom{n+k-1}{k-1}$ where k is the number of 'boxes'
So we get $\binom{5+4-1}{4-1} = \binom{8}{3} = 56$.
its $3$ boxes.
4. Ah grim. Fixed!
5. Actually my first post was totally full of fail thanks for noticing that boxes thing...
6. The answer is 322. Are we missing something?
7. Yeah i think my way was if there was 7 identical objects... 15 is waaaay to low.
But how did you get 322? I just did a 'brute force' method and came up about 99 but didnt really check it.
Since each box must contain 1 object it can be simplified down to how many ways can 4 objects be distributed between 3 boxes... This questions bugging me now...
EDIT: Is the answer just $3^4 = 81$?
8. Well,this is how it goes.
Since the boxes are identical the problem reduces to the fact $7$ distinct objects have to be divided(not distributed) into $3$ groups as arrangement among the boxes are immaterial since the boxes are identical
The groupings can be done as follows: $(1,3,3),(1,2,4),(2,2,3),(1,1,5)$
This can be done in following ways:
= $\frac{7!}{1!3!3!}\frac{1}{2!}+\frac{7!}{1!2!4!}+\frac{7!}{2!2!3!}\frac{1}{2!}+\frac{7!}{1!1!5!}\frac{1}{2!}$
$=70+105+105+21$
$=301$
Either the answer is 301 or I have missed a grouping which is not likely
9. Originally Posted by pankaj
Well,this is how it goes.
Since the boxes are identical the problem reduces to the fact $7$ distinct objects have to be divided(not distributed) into $3$ groups as arrangement among the boxes are immaterial since the boxes are identical
The groupings can be done as follows: $(1,3,3),(1,2,4),(2,2,3),(1,1,5)$
This can be done in following ways:
= $\frac{7!}{1!3!3!}\frac{1}{2!}+\frac{7!}{1!2!4!}+\frac{7!}{2!2!3!}\frac{1}{2!}+\frac{7!}{1!1!5!}\frac{1}{2!}$
$=70+105+105+21$
$=301$
Either the answer is 301 or I have missed a grouping which is not likely
What about $(1,4,2)$? Or $(1,5,1)$? Also why did you divide by $2!$?
10. 301 is correct.
The number is also $S(7,3)$, where S is the Stirling Number of the 2nd Kind.
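As a quick cross-check (not from the thread), the Stirling numbers of the second kind satisfy $S(n,k)=k\,S(n-1,k)+S(n-1,k-1)$, and the recurrence indeed gives $S(7,3)=301$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(7, 3))  # 301: partitions of 7 distinct objects into 3 nonempty unlabelled groups
```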
11. Originally Posted by Sampras
What about $(1,4,2)$? Or $(1,5,1)$? Also why did you divide by $2!$?
(1,2,4) is same as (1,4,2) since boxes are identical.There is no such thing as the first box or the second or the third .
If 2 groups contain same number of objects then it is required that we divide by 2!.
For example if {a,b,c,d,e,f,g} are the 7 obects.One grouping containing (2,2,3) objects can be described as follows.
{a,b},{c,d},{e,f,g} and another can be {c,d},{a,b},{e,f,g}.
Now aren't these groupings the same?
http://math.stackexchange.com/questions/251567/parametric-curve-on-cylinder-surface/251900 | # Parametric curve on cylinder surface
Let $r(t)=(x(t),y(t),z(t)),t\geq0$ be a parametric curve such that $r(0)$ lies on the cylinder surface $x^2+2y^2=C$. Let the tangent vector of $r$ be $r'(t)=\left( 2y(t)(z(t)-1), -x(t)(z(t)-1), x(t)y(t)\right)$. Would you help me to show that:
(a) The curve always lies on the cylinder surface $x^2+2y^2=C$.
(b) The curve $r(t)$ is periodic (we can find $T_0\neq0$ such that $r(T_0)=r(0)$). If we make $C$ smaller, then the parametric curve $r(t)$ gets closer to the origin (we can find a neighborhood of the origin that contains this parametric curve).
My effort:
(a) Let $V(x,y,z)=x^2+2y^2$. If $V(x,y,z)=C$ then $\frac{d}{dt} V(x,y,z)=0$. Since $\frac{d}{dt} V(x,y,z)=(2x,4y,0)\cdot (x'(t),y'(t),z'(t))=2x(2y(z-1))+4y(-x(z-1))=0$, the tangent vector $r'(t)$ is perpendicular to the normal of the cylinder surface. Hence the tangent vector must lie in the tangent plane of the cylinder, so $r(t)$ must lie on the cylinder surface.
(b) From $z'=xy$, I analyze the sign of $z'$ (in the 1st quadrant $z'>0$, so the $z$ component of $r(t)$ is increasing, etc.) and conclude that $r(t)$ never becomes unbounded when it moves to another octant (but I can't guarantee that $r(t)$ crosses into another octant). I also consider the cases $(x=0, y>0)$, $(x=0, y<0, z>1)$, $(x>0, y=0, z>1)$ and so on, and draw the vector $r'(t)$.
Thank you so much of your help.
-
## 1 Answer
We can reparameterize $S=\{(\sqrt{C}\cos u,\frac{\sqrt{C}}{\sqrt{2}}\sin u, v): u,v\in \mathbb{R}\}$ since $(\sqrt{C}\cos u)^2+2\left(\frac{\sqrt{C}}{\sqrt{2}}\sin u\right)^2=C$. Let $r(t)= (x(t),y(t),z(t))$ and $r(0)=(x_0,y_0,z_0)$. Define $V(x,y,z)=x^2+2y^2$. Since $V(x,y,z)=C$, we have $\frac{dV}{dt}=0$. But by the chain rule $0=\frac{dV}{dt}=\nabla{V}\cdot(x',y',z')$, so the tangent vector of the parametrized curve at a point of $S$ is always perpendicular to $\nabla{V}$. Since $r(0)$ lies in $S$ and $\nabla{V}$ is perpendicular to the tangent plane of $S$ at $r(0)$, the vector $r'(0)$ lies in the tangent plane of $S$ at $r(0)$. By this argument, we can conclude that $r(t)$ must be on $S$. Since $S=\{(\sqrt{C}\cos u,\frac{\sqrt{C}}{\sqrt{2}}\sin u, v): u,v\in \mathbb{R}\}$, we get $x(t)=\sqrt{C}\cos (t-t_0)$ and $y(t)=\frac{\sqrt{C}}{\sqrt{2}}\sin (t-t_0)$ with $t_0$ satisfying $x_0=\sqrt{C}\cos t_0$ and $y_0=-\frac{\sqrt{C}}{\sqrt{2}}\sin t_0$. Since $z'=xy$, we get $z'(t)=\frac{C}{2\sqrt{2}}\sin(2t-t_0)$, hence $z(t)=-\frac{C}{4\sqrt{2}}\cos(2t-t_0)$. Since $r(2\pi)=(\sqrt{C}\cos (2\pi-t_0),\frac{\sqrt{C}}{\sqrt{2}}\sin (2\pi-t_0),-\frac{C}{4\sqrt{2}}\cos(2\pi-t_0))=(x_0,y_0,z_0)=r(0)$, the curve $r(t)$ is periodic.
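A numerical sanity check of part (a) — an illustrative sketch, not part of the answer above: integrating $r'=(2y(z-1),\,-x(z-1),\,xy)$ with a hand-rolled RK4 step shows $x^2+2y^2$ staying essentially constant along the computed trajectory, exactly as the gradient argument predicts; the last lines also report how closely the orbit returns to its starting point.

```python
import numpy as np

def field(r):
    x, y, z = r
    return np.array([2*y*(z - 1), -x*(z - 1), x*y])

def rk4_step(r, h):
    k1 = field(r)
    k2 = field(r + h/2*k1)
    k3 = field(r + h/2*k2)
    k4 = field(r + h*k3)
    return r + h/6*(k1 + 2*k2 + 2*k3 + k4)

r0 = np.array([1.0, 1.0, 0.0])          # starts on x^2 + 2y^2 = 3
h, steps = 1e-3, 30000
r, traj = r0.copy(), [r0.copy()]
for _ in range(steps):
    r = rk4_step(r, h)
    traj.append(r.copy())
traj = np.array(traj)

V = traj[:, 0]**2 + 2*traj[:, 1]**2
print("max drift of x^2 + 2y^2 :", np.abs(V - V[0]).max())  # tiny: the curve stays on the cylinder

dists = np.linalg.norm(traj[1000:] - r0, axis=1)
print("closest return to r(0)  :", dists.min())             # small if the orbit is closed
```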
-
http://crypto.stackexchange.com/questions/2414/is-this-authenticated-one-way-communication-protocol-secure?answertab=active | # Is this authenticated one-way communication protocol secure?
I am looking to see if this one-way communication protocol is secure. Assume Alice wants to send Bob a message (and doesn't need Bob to reply in the same session/channel - think email). Bob knows Alice's public key $A_{public}$, and Alice knows Bob's public key $B_{public}$. Assume ideal cryptographic primitives for the hash function (ideal one-way function) and symmetric cipher (ideal random permutation) and that the bit sizes used are large enough to ensure computational security (say RSA-2048, a 256-bit symmetric key, 256-bit hash function output and 256-bit nonce)
## Protocol
### Field #1
Alice randomly generates:
• a key $K$ for an agreed upon symmetric cipher
• other material needed for the cipher (IV, etc..)
• a nonce $N$
Alice encrypts all of the above using $B_{public}$ (with random padding).
### Field #2
Then she computes $\textrm{HMAC}(N, A_{public})~~$ (where $N$ is used as a key), signs it with her private key (with random padding)
### Field #3
Finally, Alice calculates her message's hash. Then she encrypts it along with the message itself, using the symmetric key $K$.
She then sends to Bob, in this order:
• the first field (the one encrypted with Bob's public key)
• the second field (the one signed with her private key)
• the third field (the one encrypted with the symmetric key)
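For concreteness, the three fields could be assembled along the following lines (an illustrative sketch of the description above, not a vetted implementation). `hmac`, `hashlib` and `os.urandom` are Python standard library; `rsa_encrypt`, `rsa_sign` and `sym_encrypt` are hypothetical placeholders standing in for whatever padded RSA encryption, padded RSA signature and symmetric primitives are actually chosen — they are not defined here.

```python
import hashlib
import hmac
import os

def build_message(A_private, A_public_bytes, B_public, plaintext: bytes):
    K = os.urandom(32)    # symmetric key for the agreed cipher
    iv = os.urandom(16)   # other material needed for the cipher
    N = os.urandom(32)    # nonce

    # Field #1: (K, iv, N) encrypted to Bob with randomized padding.
    field1 = rsa_encrypt(B_public, K + iv + N)            # hypothetical placeholder

    # Field #2: HMAC keyed with the nonce over Alice's public key, then signed.
    tag = hmac.new(N, A_public_bytes, hashlib.sha256).digest()
    field2 = rsa_sign(A_private, tag)                     # hypothetical placeholder

    # Field #3: message hash plus message, under the symmetric key K.
    digest = hashlib.sha256(plaintext).digest()
    field3 = sym_encrypt(K, iv, digest + plaintext)       # hypothetical placeholder

    return field1, field2, field3
```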
## Proof that Bob can decrypt it
Bob can obviously decrypt Field #1. He can then iterate through his list of known public keys (assuming he doesn't have millions of them, it will be efficient enough), undo the signature, and verify Field #1 using the nonce $N$ against the HMAC. Bob can then know Alice did send both fields, and as he knows the symmetric key, he may decrypt the message and verify its integrity against the message hash.
Bonus: this also offers some protection against accidental corruption. If Field #1 in its encrypted form is corrupted while being transmitted, the nonce $N$ will be corrupted with overwhelming probability which will prevent Bob from authenticating Alice. If Field #2 in its signed form is corrupted, the HMAC will be garbled, with the same consequences as before. And if Field #3 is corrupted, the message hash will detect it.
Of course if either Field #1 or Field #2 are damaged, then Bob will not know that Alice sent the message and so cannot ask for a retransmission unless he asks all his contacts (which isn't very practical), but there are probably methods to fix that.
### Expectations:
Define "attacker" as an entity which has no knowledge of Alice's private key or Bob's private key.
1. an attacker should not be able to impersonate Alice
2. an attacker should not be able to read nor modify the message
3. an attacker should not be able to deduce the sender (Alice) nor the recipient (Bob)
### Analysis:
1. This is just a draft but my initial analysis of the protocol shows that it mostly relies on the nonce $N$ - if the HMAC signed by Alice can be verified, then it implies that Field #1 was indeed created by Alice and the rest follows. Note that if the nonce $N$ was not placed inside Field #1 but instead in Field #2 for instance, then an attacker could trivially forge Field #1 while leaving Field #2 intact, which would allow him to impersonate Alice. So the protocol depends on the link between Field #1 and Field #2.
2. This easily follows from 1, since the attacker cannot impersonate Alice then he cannot forge neither Field #1 nor Field #2 which implies that he cannot forge (nor read) Field #3 either, as the symmetric key is in Field #1. If a streamed mode of operation is used for the cipher, he could attempt to tamper with some message bits, but this would be detected by the message hash.
3. This is more interesting. Field #1, even as plaintext, only contains random data, so it is of no use to an attacker on its own. Field #2 could in theory be decrypted by an attacker by looking up some LDAP table and checking every public key he can get his hands on, however without knowledge of the nonce $N$ (which cannot be obtained by the attacker) the HMAC would reveal no information. So the attacker has no way of establishing which public key is in fact the right one, as they all result in random data from the attacker's point of view. And Field #3 is of course of no use without the symmetric key. So an attacker cannot deduce neither the sender nor the recipient using only the data sent over the channel. Note that again, if the nonce $N$ wasn't in Field #1 but in Field #2, then the attacker could fairly easily identify Alice (but not Bob).
Finally, since the whole protocol is almost completely randomized, replay/related-key attacks are not applicable, and the attacker will not be able to reuse any past communications to his advantage. In fact assuming different, independent messages are sent each time, the only quantity which is always sent to Bob is Alice's public key, and it is masked by the HMAC (with the random nonce). Of course the drawback of all this is that the random number generator better be good as it will almost certainly be the weakest link in the chain here (but isn't that always the case?).
Is this secure? Are the security expectations met with the given conditions? Does anybody see anything I might have overlooked here, or perhaps a way to simplify the protocol while maintaining its security by sending less stuff? For instance a way for Bob to easily figure out who is sending him the message without him having to iterate over his contact list, while leaking no information to the attacker (I don't see how this is possible, as the only way for Alice to prove her identity is to sign a verifiable quantity, and this quantity must only be verifiable by Bob to meet expectation 3)
As far as I can see this scheme is algorithmically secure and provides external anonymity for both parties, authenticity for both parties (Alice knows only Bob will be able to read the message, and Bob knows Alice sent it), and integrity. But I am aware cryptography is not a trivial matter which is why I'm submitting the protocol here to see if I missed any obvious weaknesses.
Thanks and all feedback is welcome. This is all theoretical by the way, but comments on software bugs that are likely to occur and their implications are welcome too (for instance, can a software bug permanently compromise Alice and/or Bob, or just the communication where the bug occurred?)
PS: I wasn't sure whether this should go in Security SE or Cryptography SE. I opted for Cryptography since it's more about algorithms than implementation but I apologize in advance if I was wrong.
-
crypto.SE is clearly the better choice – CodesInChaos Apr 21 '12 at 8:46
## 1 Answer
I think you are right, the non-identifiability of Alice and Bob by an attacker is the problematic point. For this to work, you need some requirements on your public-key encryption and signature scheme(s):
• An encrypted message (i.e. your field 1) does not allow any clues on the public key used to encrypt it.
• A signature (i.e. your field 2) does not allow any clues on the public key to be used for verification, as long as the message itself is not known, but allows retrieving the signed message when the public key is known ("undoing the signature").
I'm not sure if these are valid for RSA. The second can only be valid for "sign-by-encrypting-with-private-key" schemes like RSA (e.g. it will not be valid for DSA), and I'm not even sure if it is valid for RSA.
I think that these properties are normally not what public-key schemes are designed for.
(But I might be wrong here, if so I hope someone will correct me.)
In general, your split into the fields #1, #2 and #3 seems a bit complicated. Is there any reason that Alice doesn't use the usual scheme?
• Encrypt($B_{public}$, $K$)
• Encrypt($K$, message || signature($A_{private}$, hash(message)))
-
But doesn't this require the whole message to be known before Bob can authenticate Alice, though? Or can the signature and the message be concatenated the other way? If so, it does look considerably simpler (although the same amount of data is still sent since public key encryption/signing requires padding anyway). It essentially removes the nonce from the protocol. – Thomas Apr 21 '12 at 13:01
@Thomas: Yes, in this scheme the signature can only be checked after the message did arrive and is decrypted. But this is normally not a problem for use cases like email ... and in your protocol, Bob still needs to verify the integrity of the whole message after decrypting it. – Paŭlo Ebermann♦ Apr 21 '12 at 13:07
But what if an attacker decides to send me a large message (not necessarily email), I am forced to accept receiving the totality of it until I can deny it. This would constitute a basis for a denial of service attack, wouldn't it? – Thomas Apr 21 '12 at 13:15
If an attacker captures a valid message and changes part #3 to be really large, you have the same problem. But yes, you are right that one could implement some counter-measures here, like encrypting the signature of a nonce first, then the actual message (authenticated with a MAC) (and immediately, the protocol gets a bit closer to your one, and less simple). – Paŭlo Ebermann♦ Apr 21 '12 at 13:24
True. A solution would be to put the message length next to the message hash. But the attacker could still mess with it and randomly change it to something probably much larger, so it could have its own hash too (which solves the problem). But I agree, it does get more complicated quickly. Though I want the protocol to be as robust as possible though, as I won't only use it for email (that's what I had in mind initially) so it makes sense to plan it properly. – Thomas Apr 21 '12 at 13:29
http://en.wikipedia.org/wiki/Ideal_norm | # Ideal norm
In commutative algebra, the norm of an ideal is a generalization of a norm of an element in the field extension. It is particularly important in number theory since it measures the size of an ideal of a complicated number ring in terms of an ideal in a less complicated ring. When the less complicated number ring is taken to be the ring of integers, Z, then the norm of a nonzero ideal I of a number ring R is simply the size of the finite quotient ring R/I.
## Relative norm
Let A be a Dedekind domain with the field of fractions K and B be the integral closure of A in a finite separable extension L of K. (In particular, B is Dedekind then.) Let $\operatorname{Id}(A)$ and $\operatorname{Id}(B)$ be the ideal groups of A and B, respectively (i.e., the sets of fractional ideals.) Following (Serre 1979), the norm map
$N_{B/A}: \operatorname{Id}(B) \to \operatorname{Id}(A)$
is a homomorphism given by
$N_{B/A}(\mathfrak q) = \mathfrak{p}^{[B/\mathfrak q : A/\mathfrak p]}, \mathfrak q \in \operatorname{Spec} B, \mathfrak q | \mathfrak p.$
If $L, K$ are local fields, $N_{B/A}(\mathfrak{b})$ is defined to be a fractional ideal generated by the set $\{ N_{L/K}(x) | x \in \mathfrak{b} \}.$ This definition is equivalent to the above and is given in (Iwasawa 1986).
For $\mathfrak a \in \operatorname{Id}(A)$, one has $N_{B/A}(\mathfrak a B) = \mathfrak a^n$ where $n = [L : K]$. The definition is thus also compatible with norm of an element: $N_{B/A}(xB) = N_{L/K}(x)A.$[1]
Let $L/K$ be a finite Galois extension of number fields with rings of integer $\mathcal{O}_K\subset \mathcal{O}_L$. Then the preceding applies with $A = \mathcal{O}_K, B = \mathcal{O}_L$ and one has
$N_{L/K}(I)=\mathcal{O}_K \cap\prod_{\sigma \in G}^{} \sigma (I)\,$
which is an ideal of $\mathcal{O}_K$. The norm of a principal ideal generated by α is the ideal generated by the field norm of α.
The norm map is defined from the set of ideals of $\mathcal{O}_L$ to the set of ideals of $\mathcal{O}_K$. It is reasonable to use integers as the range for $N_{L/\mathbf{Q}}\,$ since Z has trivial ideal class group. This idea does not work in general since the class group may not be trivial.
## Absolute norm
Let $L$ be a number field with ring of integers $\mathcal{O}_L$, and $\alpha$ a nonzero ideal of $\mathcal{O}_L$. Then the norm of $\alpha$ is defined to be
$N(\alpha) =\left [ \mathcal{O}_L: \alpha\right ]=|\mathcal{O}_L/\alpha|.\,$
By convention, the norm of the zero ideal is taken to be zero.
If $\alpha$ is a principal ideal with $\alpha=(a)$, then $N(\alpha)=|N(a)|$. For proof, cf. Marcus, theorem 22c, pp65ff.
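For instance, in $\mathbf{Z}[i]$ the principal ideal $(2+i)$ has $N((2+i))=|N(2+i)|=2^2+1^2=5$, and $\mathbf{Z}[i]/(2+i)$ indeed has exactly five elements. A minimal script using nothing beyond integer arithmetic can confirm the count:

```python
def divisible_by_2_plus_i(a, b):
    """Is the Gaussian integer a + b*i divisible by 2 + i?
    (a+bi)/(2+i) = (a+bi)(2-i)/5, so both parts of (a+bi)(2-i) must be multiples of 5."""
    return (2*a + b) % 5 == 0 and (2*b - a) % 5 == 0

# Every a+bi is congruent mod (2+i) to the ordinary integer (a + 3b) mod 5, since i = -2 = 3 there:
assert all(divisible_by_2_plus_i(a - ((a + 3*b) % 5), b)
           for a in range(-10, 10) for b in range(-10, 10))
# ...and the five residues 0, 1, 2, 3, 4 are pairwise inequivalent mod (2+i):
assert all(not divisible_by_2_plus_i(r - s, 0) for r in range(5) for s in range(5) if r != s)
print("Z[i]/(2+i) has exactly 5 elements, matching |N(2+i)| = 2^2 + 1^2 = 5")
```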
The norm is also completely multiplicative in that if $\alpha$ and $\beta$ are ideals of $\mathcal{O}_L$, then $N(\alpha\cdot\beta)=N(\alpha)N(\beta)$. For proof, cf. Marcus, theorem 22a, pp65ff.
The norm of an ideal $\alpha$ can be used to bound the norm of some nonzero element $x\in \alpha$ by the inequality
$|N(x)|\leq \left ( \frac{2}{\pi}\right ) ^ {r_2} \sqrt{|\Delta_L|}N(\alpha)$
where $\Delta_L$ is the discriminant of $L$ and $r_2$ is the number of pairs of complex embeddings of $L$ into $\mathbf{C}$.
## References
1. Serre, 1.5, Proposition 14.
• Iwasawa, Kenkichi (1986), Local class field theory, Oxford Science Publications, New York: The Clarendon Press, Oxford University Press, pp. viii+155, ISBN 0-19-504030-9, MR 863740 (88b:11080)
• Marcus, Daniel A. (1977), Number fields, New York: Springer-Verlag, pp. viii+279, ISBN 0-387-90279-1, MR 0457396 (56 #15601)
• Serre, Jean-Pierre (1979), Local fields, Graduate Texts in Mathematics 67, New York: Springer-Verlag, pp. viii+241, ISBN 0-387-90424-7, MR 554237 (82e:12016)
http://math.stackexchange.com/questions/218791/an-intuitive-vision-of-fiber-bundles?answertab=votes | # An intuitive vision of fiber bundles
In my mind it is clear the formal definition of a fiber bundle but I can not have a geometric image of it. Roughly speaking, given three topological spaces $X, B, F$ with a continuous surjection $\pi: X\rightarrow B$, we "attach" to every point $b$ of $B$ a closed set $\pi^{-1}(b)$ such that it is homeomorphic to $F$ and so $X$ results a disjoint union of closed sets and each of them is homeomorphic to $F$. We also ask that this collection of closed subset of $X$ varies with continuity depending on $b\in B$, but I don't understand why this request is formalized using the conditions of local triviality.
-
2
A fiber bundle looks "locally" like a product. In some ways, it is to topological spaces what the "semi-direct product" is to groups. – Thomas Andrews Oct 22 '12 at 16:01
1
In particular, it is more than just $\pi^{-1}(b)$ homeomorphic to $F$. For example, the disjoint union of copies of $F$, one for each $b\in B$, is not a fiber bundle, even though there is a function that satisfies $\pi^{-1}(b)\cong F$ which is continuous. – Thomas Andrews Oct 22 '12 at 16:05
4
The (open) Möbius strip is a nice fiber bundle to use as an example. It is a nontrivial bundle whose base space is the circle and whose fiber is the real line. You make it from a strip of paper (think of it as $[0,1]\times\mathbb{R}$), which is a trivial bundle. Now identify $(0,x)$ with $(1,-x)$, and you have the Möbius strip. Understanding all the concepts for this simple example is a good start. – Harald Hanche-Olsen Oct 22 '12 at 16:15
2
If I understand OP correctly, he is saying that he understands the definition but doesn't find it motivated by the geometry. His intuition is that the right geometric definition should be "all the fibers look alike, and they vary continuously," and so the issue with making the correct definition is figuring out how to formalize "they vary continuously." He doesn't see how that "the fibers vary continuously" is made formal by "looks locally like a product." He wants to be geometrically convinced that these are the same. – countinghaus Oct 22 '12 at 17:05
2
(continued) I tried to type something up, but don't really know what his picture is. Certainly it's clear that things which "look locally like products" have "continuously varying fibers." For the other direction, "pick a small enough neighborhood that the fibers are so close that you can bend them to look like a product." – countinghaus Oct 22 '12 at 17:06
## 1 Answer
One example: A branched cover is a fiber bundle, where the fiber is a set of points.
http://www.math.cornell.edu/~hatcher/AT/ATch1.pdf
See the section "Representing Covering Spaces by Permutations", p. 68. You can build a branched cover by covering your space with open sets $X \subset \bigcup U_i$, and your bundle will be covered by $U_i \times \{ 1, 2, \dots, n\}$. Then you need to consider what happens on $U_i \cap U_j$. The transition function will be a bijective map $$\{ 1, 2, \dots, n \} \to \{ 1, 2, \dots , n \}$$ which is a permutation.
Therefore, branched covers can be thought of as fiber bundles over spaces where the fibers are finite sets.
By covering your space with open sets $X \subset \bigcup U_i$, taking direct products with your fiber, $U_i \times V$, and considering what happens over intersections $(U_i \cap U_j) \times V$, you can "patchwork" a fiber bundle together.
This construction is very general. If your fiber $V$ is a vector space, your transition maps are invertible linear maps $V \to V$ which take values in the general linear group $GL(V)$. So the set of vector bundles can be thought of as the space of representations of the fundamental group $\pi_1(X) \to GL(V)$.
Then you can ask your transition functions be holomorphic and then it's a holomorphic vector bundle. Or you can ask your transition functions be continuous or have 5 derivatives or "reasonable" restriction (i.e. consistent with vector bundle axioms).
Alternatively, you can study a vector bundle by looking at its sections, which are maps from the base space into the vector space. In high school and college, we deal mostly with the trivial bundle $\mathbb{R}^2$, where the base space is $\mathbb{R}$ and the fibers are $\mathbb{R}$. Then we look at sections $$\{ (x, f(x)) \} \subset \mathbb{R}^2$$ In this way we can consider trivial bundles over the circle $S^1 \times \mathbb{R}$ and consider only those sections which are square-integrable, i.e. $$\int_{S^1} |f(x)|^2 dx < \infty$$ Morally, this vector bundle is still the cylinder, but we are ruling out certain sections.
Question: What is the analogue of Fourier analysis for the Mobius band in this picture?
The sphere can be thought of as a fiber bundle over the line. Indeed, the fiber $$\{ x^2 + y^2 + z^2 = 1 \} \cap \{ z = k\} = \{ (x,y,k): x^2 +y^2 = 1- k^2 \}$$ is a circle except at the endpoints $k = \pm 1$, where the fiber degenerates to points. Also the torus is a fiber bundle $S^1 \times S^1$.
-
4
Branched cover is not a fiber bundle. – Grigory M Nov 5 '12 at 17:06
They most certainly are fiber bundles. Remember, these examples are supposed to give intuition. – john mangual Nov 6 '12 at 15:58
Branched covering is a fiber bundle only when it is, well, not branched. Of course, (ordinary) covering is a fiber bundle (and Hatcher writes about coverings, not branched coverings). – Grigory M Nov 6 '12 at 16:01
2
Oh, and the sphere is not a fiber bundle over a line! Fiber bundle can't have different fibers over different points -- that's the main point of the definition, actually. – Grigory M Nov 6 '12 at 16:06
1
A more focused answer would be more helpful, I think. Bringing in branched coverings at the point the OP is is also not a great idea! – Mariano Suárez-Alvarez♦ Nov 6 '12 at 16:20
http://math.stackexchange.com/questions/206414/faces-of-a-non-planar-graph | “faces” of a non-planar graph
Good afternoon,
I have a question concerning concepts in graph theory. Graph theory is a field quite strange to my knowledge, so my question is maybe stupid.
For a planar graph, we can define its faces as follows : we delete all its edges and its vertices from the plane. Then the remaining part of the plane is a collection of pieces (connected components). Each such piece is called a face.
So how to define faces of a non-planar graph? I would like to know if we can define it from the intuitive figure of a graph.
Thanks in advance,
Duc Anh
EDIT : I would like to explain more where my question comes from. In this slide http://www.aimath.org/~hogben/Goins.pdf, the author writes $K_{3,3}$ has 6 vertices, 9 edges and 3 faces, so I wonder how these faces are defined? Or we only can define them after embedding the graph into some surface of genus $g$?
-
1
The author of these slides seems to have made some mistake; the set $F$ is clearly nowhere defined, and I have no idea where the values for $f$ come from. – Marc van Leeuwen Oct 3 '12 at 13:09
3 Answers
There is no reasonable definition of faces for an arbitrary graph. Even for a planar graph the definition of faces assumes that in addition to the (abstract) graph a concrete embedding in the plane is given. For instance take a graph constructed starting from two vertices $A,B$ by linking them with simple paths of lengths $1,3,2,3$ (introducing the necessary $5$ intermediate vertices). This graph is certainly planar, but does it contain a triangular face? That depends on how those paths are embedded in the plane (the order given suggests an embedding that does not give rise to a triangular face, but there does exist an embedding of this graph with a triangular face). For more general graphs you could arrange that any cycle in the graph defines a face for some embedding in a sufficiently complicated surface.
-
Thank you. But I don't understand your example well. I should have assumed that a graph is an ordered pair $(V,E)$ with $V$ a set and $E$ a binary irreflexive relation on $V$: for two arbitrary vertices, there is at most one edge joining these vertices. Moreover, if we cannot define a face for a planar graph, how could we understand Euler's theorem $v - e + f = 2$? – Đức Anh Oct 3 '12 at 12:51
1
$V=\{A,B,0,1,2,3,4\}$, $E=\{\{A,B\},\{A,0\},\{0,1\},\{1,B\},\{A,2\},\{2,B\},\{A,3\},\{3,4\},\{4,B\}\}$. As for the theorem of Euler, it is about embedded planar graphs. A graph is planar iff it admits an embedding, but the graph itself does not choose one. – Marc van Leeuwen Oct 3 '12 at 13:02
Thank you very much. I think I should read more carefully these notions of graph theory. – Đức Anh Oct 3 '12 at 13:15
Maybe this is something you want: Any graph $G$ can be embedded in a compact surface $S_g$ of genus $g$, for some $g$. The minimum of such $g$ is called the genus of $G$. Then faces of a non-planar graph may be defined as the regions of the embedding of $G$ in the $S_g$.
-
– umar Oct 3 '12 at 9:24
@Paul : Thank you. In fact, I know this fact. I just wonder if we can determine the number of faces of a graph from its intuitive figure on a paper. I have editted my question, you can see the link there. – Đức Anh Oct 3 '12 at 12:53
@umar: thank you. I will try to read the text. It seems to be useful. – Đức Anh Oct 3 '12 at 12:54
In general, the answer to your question is no---from an abstract representation $G=(V,E)$ of a graph, you cannot immediately get any information about its faces in an embedding into a surface. All is not lost, though: You might want to look at rotations. In simple terms, if for each vertex $v$ of $G$ you define an order in which all of the edges adjacent to $v$ would be visited in a rotation about $v$, these orders will determine an embedding of $G$ in some orientable surface: selecting different orders for rotations will give you different embeddings.
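To make the rotation-system picture concrete, here is a small illustrative sketch (not part of the answer), using the usual face-tracing convention that the dart $(u,v)$ is followed by $(v,w)$, where $w$ is the successor of $u$ in the cyclic order at $v$. For $K_4$ it runs over all rotation systems and reads the genus off Euler's formula $V-E+F=2-2g$:

```python
from itertools import permutations, product

def count_faces(rotation):
    """Number of face orbits of the rotation system {vertex: cyclic tuple of neighbours}."""
    darts = [(u, v) for u in rotation for v in rotation[u]]
    def nxt(d):
        u, v = d
        ring = rotation[v]
        return (v, ring[(ring.index(u) + 1) % len(ring)])
    seen, nfaces = set(), 0
    for d in darts:
        if d in seen:
            continue
        nfaces += 1
        while d not in seen:
            seen.add(d)
            d = nxt(d)
    return nfaces

verts = range(4)                                   # K4
nbrs = {v: [u for u in verts if u != v] for v in verts}
V, E = 4, 6
genera = set()
# All rotation systems: fix the first neighbour at each vertex to factor out cyclic shifts.
for orders in product(*[[(nbrs[v][0],) + p for p in permutations(nbrs[v][1:])] for v in verts]):
    F = count_faces(dict(zip(verts, orders)))
    genera.add((2 - (V - E + F)) // 2)             # V - E + F = 2 - 2g
print(genera)  # expected {0, 1}: K4 has spherical and toroidal embeddings
```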
-
Thank you very much. Your answer is very clear for me. – Đức Anh Oct 3 '12 at 16:42
http://math.stackexchange.com/questions/201276/sigma-question-is-it-legal-to-write-something-like-this/201287 | # Sigma question: is it legal to write something like this?
Is this mathematical syntax correct?
$$\sum_{n+1}^m\sin(n-2)$$
As you see, the starting value is $n+1$ instead of being just purely one variable.
-
5
It may even be legal, depends on the judge. But it is a really bad idea to use $n$ as the (presumed) summation index, and also as a component of one of the ends. If I am reading your intent correctly, I would write something like $\displaystyle\sum_{i=n+1}^m \sin(i-2)$. – André Nicolas Sep 23 '12 at 18:31
## 3 Answers
You have
$$\sum_{n+1}^m\sin(n-2)$$
What is the running index here? Apparently $\,n\,$ , but from what number does it begin running? Perhaps it should be $\,n=1\,$ in the summatory's lower limit?
As it stands, the expression makes not much sense.
-
What I mean is for the sum to start at $n+1$. – user1561559 Sep 23 '12 at 18:29
@user1561559 yes sure. But the summand is always parametrized. In your case it must be $n$, as otherwise it doesn't make sense. If $m>n+1$, you can define such a sum but I am not sure if it makes sense... – Seyhmus Güngören Sep 23 '12 at 18:31
3
@user1561559 Then you probably want $$\sum_{i=n+1}^m\sin(i-2)$$ unless the expression you are going for is equal to $(m-n-1)\sin(n-2)$. – Alex Becker Sep 23 '12 at 18:33
If there's any doubt about what the index of summation is, then specify it explicitly. If you write about the sum of terms called $\sin(n-2)$, then commonplace conventions make the reader think $n$ goes from something to something. But you've used $n$ as one of the bounds, meaning $n$ stays put while some other variable goes from $n+1$ to $m$, and what that other variable, the index, is called (is it $i$? is it $k$?) you don't say. If you write $$\sum_{k=n+1}^m \sin(n-2),$$ then that's $$\sin(n-2)+\sin(n-2)+\sin(n-2)+\cdots+\sin(n-2)$$ and all terms are identical, and there are $m-n$ of them, so the sum is $(m-n)\sin(n-2)$. If you meant anything other than that, then don't use this notation.
-
IMO, the index of summation should be specified explicitly even when there can't really be any doubt – really, it's just one symbol! The range over which that variable is summed, which may be rather more awkward to write, may be well be left out if it's clear from the context like in $\langle a, b \rangle = \sum_i a_i\!\cdot\!b_i$. (Which, of course, you might write simply $a_ib_i$, following Einstein convention...) – leftaroundabout Sep 23 '12 at 21:58
I would say that your notation is not good. The reason is that it isn't clear what the index of summation is. From how it is written it looks like $m$ and $n$ might both be constants. But then you only have the variable $n$ after the summation sign, so one would think that $n$ is what is "changing" in the summation. But if you want the sum to start at $n+1$, then you should write something like (as mentioned in the comments and the other answer): $$\sum_{i = n+1}^m \sin(i-2).$$ What this means is the sum $$\sin(n+1-2) + \sin(n+2-2) + \dots +\sin(m-1-2) + \sin(m-2).$$ You could IMO get away with writing this same sum as $$\sum_{n+1}^m \sin(i-2).$$
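A quick numerical contrast of the two readings, taking $n=3$ and $m=7$ for illustration:

```python
import math

n, m = 3, 7
print(sum(math.sin(n - 2) for k in range(n + 1, m + 1)))  # constant summand: (m-n)*sin(n-2)
print((m - n) * math.sin(n - 2))                          # same value
print(sum(math.sin(i - 2) for i in range(n + 1, m + 1)))  # the sum with a genuine running index i
```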
-
http://math.stackexchange.com/questions/291921/proving-that-two-equations-are-the-same | # Proving that two equations are the same
Where am I doing it wrong? Both of them should be simplified to $A \cup (B \cap C)$
1$$A \cup (B \cap (A \cup C) ) = A \cup (A \cup B^c)^c) \cap (A\cup C)$$ 2$$A \cup ((B \cap A)\cup(B \cap C)) = A \cup (A^c \cup B)\cap (A\cup C)$$ 3$$(A \cup (B \cap A )) \cup (A \cup (B \cap C)) = ((A \cup A^c )\cup B)\cap (A\cup C)$$ 4$$A \cup (B \cap C) = B\cap (A\cup C)$$
-
1
The normal argument is element chasing. That is assume $x$ is a member of the set on the left hand side and prove it is an element of the right hand side. Do the same assuming that $x$ is an element of the right hand side. If both hold the sides must be equal. – user45150 Feb 1 at 5:16
## 1 Answer
Never mind, I solved it now. I was applying De Morgan's law in the wrong way: $A∪((B∩A)∪(B∩C))=A∪(A^c∩B)∩(A∪C)$
-
http://math.stackexchange.com/questions/142041/how-to-prove-q-wedge-p-vee-q-wedge-p-q/142043 | # How to prove $((Q \wedge ¬P) \vee (Q \wedge P)) = Q$
I cannot see any steps to this problem! Surely the answer is obvious? Is there a particular law which is used to make this statement?
$$((Q \wedge ¬P) \vee (Q \wedge P)) = Q$$
-
Shouldn't it be $$((Q \wedge ¬P) \vee (Q \wedge P)) = Q$$? – Peter Tamaroff May 7 '12 at 3:03
2
It's hard to say, because you haven't said what steps are allowed. For example, if you already know that $\vee$ distributes over $\wedge$, the thing is very simple: you factor out the $Q$ from the disjuncts, obtaining $(Q\wedge(P\vee\lnot P))=Q$, etc. – MJD May 7 '12 at 3:04
1
MarkDominus yes you are right, I can see the answer now. $$(Q \wedge (P \vee ¬P)) = Q$$ (Distributive Law) $$(Q \wedge T) = Q$$ (Excluded Middle) $$Q = Q$$ (Identity) – Danny Rancher May 7 '12 at 3:06
## 2 Answers
HINT: $$(Q\land\lnot P)\lor(Q\land P)\equiv Q\land(\lnot P\lor P)$$ by distributivity. Or of course you can simply use a truth table, if you're allowed to do so in this problem.
-
@Mark: I'm sure that I didn't. Thanks for the quick catch. – Brian M. Scott May 7 '12 at 3:07
The shape of this makes me think of De Morgan's laws. You could start with $Q=Q \vee \text{False}$. But you could just do a truth table.
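For completeness, a brute-force truth-table check along the lines both answers suggest:

```python
from itertools import product

# (Q and not P) or (Q and P) agrees with Q for every truth assignment.
assert all(((Q and not P) or (Q and P)) == Q
           for P, Q in product([False, True], repeat=2))
print("equivalent for all truth assignments")
```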
-