url (stringlengths 17–172) | text (stringlengths 44–1.14M) | metadata (stringlengths 820–832)
---|---|---
http://physics.stackexchange.com/questions/10239/can-the-entropy-of-a-subsystem-exceed-the-maximum-entropy-of-the-system-in-quant | # Can the entropy of a subsystem exceed the maximum entropy of the system in quantum mechanics?
Quantum mechanics has a peculiar feature, entanglement entropy, allowing the total entropy of a system to be less than the sum of the entropies of the individual subsystems comprising it. Can the entropy of a subsystem exceed the maximum entropy of the system in quantum mechanics?
What I have in mind is eternal inflation. The de Sitter radius is only a few orders of magnitude larger than the Planck length. If the maximum entropy is given by the area of the boundary of the causal patch, the maximum entropy can't be all that large. Suppose a bubble nucleation of the metastable vacuum into another phase with an exponentially tiny cosmological constant happens. After reheating inside the bubble, the entropy of the bubble increases significantly until it exceeds the maximum entropy of the causal patch.
If this is described by entanglement entropy within the bubble itself, when restricted to a subsystem of the bubble, we get a mixed state. In other words, the number of many worlds increases exponentially until it exceeds the exponential of the maximum causal patch entropy. Obviously, the causal patch itself can't possibly have that many many-worlds. So, what is the best way of interpreting these many-worlds for this example?
Thanks a lot!
-
## 1 Answer
Take state $|\psi\rangle=(|00\rangle+|11\rangle)/\sqrt{2}$. It is a pure state, so its (von Neumann) entropy is 0. But both of its one-particle states have entropy equal to 1 bit, as they are completely mixed states of dimension two.
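Not part of the original answer, but a quick numerical illustration of exactly this point (a sketch assuming NumPy is available): the Bell state is pure (entropy 0), while its one-qubit reduced density matrix has entropy 1 bit.

```python
import numpy as np

# Bell state |psi> = (|00> + |11>)/sqrt(2) as a vector in C^4
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())              # density matrix of the full (pure) state

def entropy_bits(matrix):
    """von Neumann entropy in bits, ignoring (numerically) zero eigenvalues."""
    eigs = np.linalg.eigvalsh(matrix)
    return float(-sum(p * np.log2(p) for p in eigs if p > 1e-12))

# Partial trace over the second qubit -> reduced state of the first qubit
rho_A = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

print(entropy_bits(rho))    # ~0.0  (the whole system is pure)
print(entropy_bits(rho_A))  # ~1.0  (the subsystem is maximally mixed)
```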
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9249006509780884, "perplexity_flag": "head"} |
http://mathhelpforum.com/calculus/62945-volume-average-value.html | # Thread:
1. ## Volume and Average Value
Find the volume of the solid created by rotating the region bounded by the curves below around the x-axis.
$x=1, x=9, y=0, y= \frac{9}{\sqrt{x+5}}$
So far I have my work is as follows
$V=\pi \int_1^9 \frac{9}{\sqrt{x+5}^2}dx$
$V=\pi \int_1^9 \frac{9}{x+5}dx$
The answer is $81\pi \ln \frac{7}{3}$
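Aside (not from the thread): the stated answer is easy to sanity-check numerically with the correctly squared integrand, e.g. using SciPy's `quad` (assumed available); the reply below explains where the 81 comes from.

```python
from math import pi, log
from scipy.integrate import quad

# Disk method: V = pi * integral of (9/sqrt(x+5))^2 from 1 to 9
V, _ = quad(lambda x: pi * (9 / (x + 5) ** 0.5) ** 2, 1, 9)

print(V)                     # ~215.6
print(81 * pi * log(7 / 3))  # ~215.6, matching 81*pi*ln(7/3)
```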
2. Your setup is almost right. You need to square the whole expression, meaning the 9 as well. That's where the 81 comes from. Bring the constants out and you have $81 \pi \int_{1}^{9} \frac{1}{x+5}dx$. The integral of 1/(x+5) is ln(x+5) and then all you do is plug in your bounds. The last part uses an identity that ln(a)-ln(b) = ln(a/b) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9578962326049805, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/52872/calculating-glass-thermal-conductivity/52886 | # Calculating (Glass) thermal conductivity
I read that glass has a thermal conductivity of 0.8-1 W/mK. Given that my window thickness is around $5\ mm$, I would then calculate my heat loss per area as $160-200\ Wm^{-2}K^{-1}$.
Yet standard glass has a U value of $5.6\ Wm^{-2}K^{-1}$.
What is going on here? I'm missing something huge...
-
Interesting that someone added a homework tag. I'm working on understanding calculations for my custom ventilation and heat transfer system in my house. I guess it really is home-work - in the truest sense of the word. – Stephen Feb 8 at 0:00
## 1 Answer
I guess window glass' temperature is somewhere between the temperature of the air inside and the air outside (far from the window). Air has low thermal conductivity. So you take into account the thickness of your window glass, but do not take into account the thickness of air layers near the glass. These layers' temperatures differ from the temperatures of air far from the window (inside and outside, respectively).
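To make this quantitative (my own rough numbers, not from the thread): standard building-physics U-value calculations add surface (air-film) resistances on each side of the pane, which is what brings the bare-glass figure of ~200 down to roughly the quoted single-glazing value.

```python
# Sketch with assumed, ISO-6946-style standard surface resistances (m^2 K / W)
k_glass = 1.0            # W/(m K), thermal conductivity of glass (assumed)
t = 0.005                # m, pane thickness
R_glass = t / k_glass    # 0.005 m^2 K / W  ->  200 W/(m^2 K) for the glass alone
R_si, R_se = 0.13, 0.04  # inside / outside air-film resistances (assumed typical values)

U = 1 / (R_si + R_glass + R_se)
print(round(1 / R_glass), round(U, 1))   # 200 vs ~5.7 W/(m^2 K)
```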
-
– Stephen Feb 4 at 0:23
So I agree with what they say, but that does not mean that my answer is wrong. If there were a medium with high thermal conductivity at both sides of the glass (case B), rather than air (case A), you would get those $L=200\ W\,m^{-2}K^{-1}$ heat losses. However, thermal conductivity of air is very low, and air convection's effect is limited, so the difference of temperatures at the outside and inside surfaces of the window glass is much lower in case A than in case B. – akhmeteli Feb 4 at 1:41
I could rephrase my explanation as follows. If you measure the actual difference of temperatures at the outside and inside surfaces of the window glass $\triangle T_a$ and multiply it by $L$ from my previous comment, you would get the actual heat losses. However, these losses are much lower than what you obtain by multiplying $L$ by $\triangle T_c$, where $\triangle T_c$ is the difference of temperatures in the center of your room and outside far from your window, whereas I guess they use $\triangle T_c$ to calculate the U-value, as $\triangle T_c$ is what matters to us. – akhmeteli Feb 4 at 1:50
Another thing. Are you talking about single glazing or double glazing? – akhmeteli Feb 4 at 2:17
My apologies - I said "standard" where I meant "single". I really am interested in this theory, but I'd like a reference to something to indicate that a 30x factor can be explained by air's (s)low conduction/convection... – Stephen Feb 5 at 3:11
show 2 more comments | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9556053280830383, "perplexity_flag": "middle"} |
http://mathhelpforum.com/number-theory/88206-divisibility-proof-random-integer-subset.html | # Thread:
1. ## Divisibility proof in random integer subset
Can anyone help me getting started on this one? I figure it has something to do with the pigeonhole principle, but I don't know where to start.
2. Originally Posted by knarQ
Can anyone help me getting started on this one? I figure it has something to do with the pigeonhole principle, but I don't know where to start.
We can prove a more general result. If $A$ is a subset with $n+1$ elements from $\{1,2,3,...,2n\}$ then there exist $a,b\in A$ with $a|b$. Each element in $\{1,2,...,2n\}$ can be written as $q\cdot 2^m$ where $q$ is odd and $m\geq 0$. Let these odd parts $q$ be our pigeonholes. We see that there are $n$ pigeonholes because there are $n$ odd numbers among $1,2,...,2n$. Thus, two numbers end up in the same pigeonhole. So $a=q\cdot 2^{m_1}, b=q\cdot 2^{m_2}$, and now it is clear that $a|b$ if $m_1<m_2$.
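Not part of the thread, but the statement is easy to spot-check by brute force for small $n$ (a Python sketch; the search over subsets is exponential, so keep $n$ small).

```python
from itertools import combinations

def has_divisible_pair(subset):
    return any(a % b == 0 or b % a == 0 for a, b in combinations(subset, 2))

# Every (n+1)-element subset of {1, ..., 2n} contains a divisible pair
for n in range(1, 8):
    assert all(has_divisible_pair(s)
               for s in combinations(range(1, 2 * n + 1), n + 1))
print("verified for n = 1..7")
```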
3. It took me a while to get this but it's a really nice proof. I was thinking in the lines of odd numbers and multiples of odd numbers but i couldn't get it down on paper, so thank you very much!
Copyright © 2005-2013 Math Help Forum. All rights reserved. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9708643555641174, "perplexity_flag": "head"} |
http://mathhelpforum.com/calculus/211941-trig-identity-ln-derivatives-driving-me-nuts.html | 3Thanks
• 1 Post By Prove It
• 1 Post By Prove It
• 1 Post By tom@ballooncalculus
# Thread:
1. ## Trig identity and ln derivatives driving me nuts!
I'm working my way through all the Khan Academy topics in the practice section and I've come as far as 'special derivatives' and 'chain rule'. The special derivatives consist of identifying the derivatives of sin, cos, tan, ln and e. I have memorized the derivative of these functions but I do not understand why they work (cannot find a good proof).
In the chain rule and beyond, almost every problem has these special derivatives in them and I am finding it really hard to find good information containing proofs for these.
For example I memorized that the derivative of ln is 1/x and also I memorized that the derivative of tan(x) is sec^2(x). Then I got a problem which asked me to differentiate ln(tan(x)) which somehow started going in the direction of 1/tan(x) and I'm clueless.
Please help me find some good stuff on this as I've been stuck here for a long time and I was progressing extremely fast in math before this!
2. ## Re: Trig identity and ln derivatives driving me nuts!
Originally Posted by Paze
I'm working my way through all the Khan Academy topics in the practice section and I've come as far as 'special derivatives' and 'chain rule'. The special derivatives consist of identifying the derivatives of sin, cos, tan, ln and e. I have memorized the derivative of these functions but I do not understand why they work (cannot find a good proof).
In the chain rule and beyond, almost every problem has these special derivatives in them and I am finding it really hard to find good information containing proofs for these.
For example I memorized that the derivative of ln is 1/x and also I memorized that the derivative of tan(x) is sec^2(x). Then I got a problem which asked me to differentiate ln(tan(x)) which somehow started going in the direction of 1/tan(x) and I'm clueless.
Please help me find some good stuff on this as I've been stuck here for a long time and I was progressing extremely fast in math before this!
The proof that $\displaystyle \begin{align*} \frac{d}{dx}\left[ \sin{(x)} \right] = \cos{(x)} \end{align*}$ requires knowing the standard limits $\displaystyle \begin{align*} \lim_{h \to 0}\frac{\sin{(h)}}{h} = 1 \end{align*}$ and $\displaystyle \begin{align*} \lim_{h \to 0}\frac{\cos{(h)} - 1}{h} = 0 \end{align*}$. Proof:
Let $\displaystyle \begin{align*} f(x) = \sin{(x)} \end{align*}$, then $\displaystyle \begin{align*} f(x + h) = \sin{(x + h)} = \sin{(x)}\cos{(h)} + \cos{(x)}\sin{(h)} \end{align*}$, and so
$\displaystyle \begin{align*} f'(x) &= \lim_{h \to 0}\frac{f(x + h) - f(x)}{h} \\ &= \lim_{h \to 0}\frac{\sin{(x)}\cos{(h)} + \cos{(x)}\sin{(h)} - \sin{(x)}}{h} \\ &= \lim_{h \to 0}\frac{\sin{(x)}\cos{(h)} - \sin{(x)}}{h} + \lim_{h \to 0}\frac{\cos{(x)}\sin{(h)}}{h} \\ &= \sin{(x)}\lim_{h \to 0}\left[ \frac{\cos{(h)} - 1}{h} \right] + \cos{(x)} \lim_{h \to 0}\frac{\sin{(h)}}{h} \\ &= \sin{(x)} \cdot 0 + \cos{(x)} \cdot 1 \\ &= \cos{(x)} \end{align*}$
Q.E.D.
Proof that $\displaystyle \begin{align*} \frac{d}{dx} \left[ \cos{(x)} \right] = -\sin{(x)} \end{align*}$:
$\displaystyle \begin{align*} y &= \cos{(x)} \\ &= \sin{\left( \frac{\pi}{2} - x \right)} \end{align*}$
Now let $\displaystyle \begin{align*} u = \frac{\pi}{2} - x \end{align*}$ so that $\displaystyle \begin{align*} y = \sin{(u)} \end{align*}$.
Then $\displaystyle \begin{align*} \frac{du}{dx} = -1 \end{align*}$ and
$\displaystyle \begin{align*} \frac{dy}{du} &= \cos{(u)} \\ &= \cos{\left( \frac{\pi}{2} - x \right)} \\ &= \sin{(x)} \end{align*}$
And thus $\displaystyle \begin{align*} \frac{dy}{dx} = -1 \cdot \sin{(x)} = -\sin{(x)} \end{align*}$.
Q.E.D.
Proof that $\displaystyle \begin{align*} \frac{d}{dx} \left[ \tan{(x)} \right] = \sec^2{(x)} \end{align*}$:
$\displaystyle \begin{align*} y &= \tan{(x)} \\ &= \frac{\sin{(x)}}{\cos{(x)}} \\ \frac{dy}{dx} &= \frac{\cos{(x)}\frac{d}{dx}\left[ \sin{(x)} \right] - \sin{(x)} \frac{d}{dx} \left[ \cos{(x)} \right]}{\left[ \cos{(x)} \right]^2} \\ &= \frac{\cos{(x)}\cos{(x)} - \sin{(x)}\left[ -\sin{(x)} \right] }{\cos^2{(x)}} \\ &= \frac{\cos^2{(x)} + \sin^2{(x)}}{\cos^2{(x)}} \\ &= \frac{1}{\cos^2{(x)}} \\ &= \sec^2{(x)} \end{align*}$
We have $\displaystyle \begin{align*} \frac{d}{dx}\left( e^x \right) = e^x \end{align*}$ by definition. There's not much of a proof we can do here, except to set up $\displaystyle \begin{align*} f(x) = a^x \end{align*}$, get a limit expression for $\displaystyle \begin{align*} f'(x) \end{align*}$, and use this to evaluate the value of $\displaystyle \begin{align*} a \end{align*}$ so that $\displaystyle \begin{align*} f(x) = f'(x) \end{align*}$.
The proof that $\displaystyle \begin{align*} \frac{d}{dx} \left[ \ln{(x)} \right] = \frac{1}{x} \end{align*}$ requires knowing that $\displaystyle \begin{align*} \frac{dy}{dx} = \frac{1}{\frac{dx}{dy}} \end{align*}$.
$\displaystyle \begin{align*} y &= \ln{(x)} \\ x &= e^y \\ \frac{dx}{dy} &= e^y \\ \frac{1}{\frac{dx}{dy}} &= \frac{1}{e^y} \\ \frac{dy}{dx} &= \frac{1}{x} \end{align*}$
Q.E.D.
All derivatives of combinations of these special functions can be evaluated using your standard sum, difference, product, quotient and chain rules.
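If it helps, the "special derivatives" above (and the ln(tan(x)) example from the question) can also be checked mechanically, e.g. with SymPy (assumed installed); this is a verification sketch, not a proof.

```python
import sympy as sp

x = sp.symbols('x')

print(sp.diff(sp.sin(x), x))                               # cos(x)
print(sp.diff(sp.cos(x), x))                               # -sin(x)
print(sp.simplify(sp.diff(sp.tan(x), x) - sp.sec(x)**2))   # 0
print(sp.diff(sp.log(x), x))                               # 1/x
print(sp.diff(sp.exp(x), x))                               # exp(x)

# The question's example: chain rule on ln(tan(x)) gives sec^2(x)/tan(x)
d = sp.diff(sp.log(sp.tan(x)), x)
print(sp.simplify(d - sp.sec(x)**2 / sp.tan(x)))           # 0
```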
3. ## Re: Trig identity and ln derivatives driving me nuts!
Wow, thanks a lot. I'm going to go through these when I wake up tomorrow!
Again, thanks a lot!
4. ## Re: Trig identity and ln derivatives driving me nuts!
After looking over these I am definitely starting to see a pattern, however I do not possess proper understanding for the limits which you referred to in the start.
Am I thinking in the right direction when I visualize sin as the y value and cos as the x value on a unit circle? Thank you.
5. ## Re: Trig identity and ln derivatives driving me nuts!
The proof that $\displaystyle \begin{align*} \lim_{h \to 0}\frac{\sin{(h)}}{h} = 1 \end{align*}$ comes from the definitions of the trigonometric functions on the unit circle.
If we call the angle made with the radius and the positive x axis $\displaystyle \begin{align*} \theta \end{align*}$, then the red length is $\displaystyle \begin{align*} \sin{(\theta)} \end{align*}$, the green length is $\displaystyle \begin{align*} \cos{(\theta)} \end{align*}$ and the blue length is $\displaystyle \begin{align*} \tan{(\theta)} \end{align*}$.
The area of the smaller triangle is $\displaystyle \begin{align*} \frac{\sin{(\theta)}\cos{(\theta)}}{2} \end{align*}$, the area of the circular sector is $\displaystyle \begin{align*} \frac{\theta}{2} \end{align*}$, and the area of the larger triangle is $\displaystyle \begin{align*} \frac{\tan{(\theta)}}{2} = \frac{\sin{(\theta)}}{2\cos{(\theta)}} \end{align*}$, and it should be obvious that
$\displaystyle \begin{align*} \frac{\sin{(\theta)}\cos{(\theta)}}{2} \leq \frac{\theta}{2} &\leq \frac{\sin{(\theta)}}{2\cos{(\theta)}} \\ \sin{(\theta)}\cos{(\theta)} \leq \theta &\leq \frac{\sin{(\theta)}}{\cos{(\theta)}} \\ \cos{(\theta)} \leq \frac{\theta}{\sin{(\theta)}} &\leq \frac{1}{\cos{(\theta)}} \textrm{ we can do this because in the first quadrant }\sin{(\theta)} > 0 \\ \frac{1}{\cos{(\theta)}} \geq \frac{\sin{(\theta)}}{\theta} &\geq \cos{(\theta)} \textrm{ note the change of inequality sign from inversion} \\ \cos{(\theta)} \leq \frac{\sin{(\theta)}}{\theta} &\leq \frac{1}{\cos{(\theta)}} \end{align*}$
And now since $\displaystyle \begin{align*} \cos{(\theta)} \to 1 \end{align*}$ as $\displaystyle \begin{align*} \theta \to 0 \end{align*}$, so does $\displaystyle \begin{align*} \frac{1}{\cos{(\theta)}} \end{align*}$, and therefore, so does $\displaystyle \begin{align*} \frac{\sin{(\theta)}}{\theta} \end{align*}$, since it's sandwiched between them.
Technically, this only proves the right hand limit, where your angles are positive, but the proof of the left hand limit is almost identical, you just have to take more care with the negatives.
Q.E.D.
As for the proof that $\displaystyle \begin{align*} \lim_{h \to 0}\frac{\cos{(h)} - 1}{h} = 0 \end{align*}$, we do some algebraic manipulation.
$\displaystyle \begin{align*} \lim_{h \to 0}\frac{\cos{(h)} - 1}{h} &= \lim_{h \to 0}\frac{\left[ \cos{(h)} - 1 \right] \left[ \cos{(h)} + 1 \right]}{h \left[ \cos{(h)} + 1 \right] } \\ &= \lim_{h \to 0}\frac{\cos^2{(h)} - 1 }{h \left[ \cos{(h)} + 1 \right] } \\ &= \lim_{h \to 0}\frac{1 - \sin^2{(h)} - 1}{h \left[ \cos{(h)} + 1 \right]} \\ &= \lim_{h \to 0}\frac{-\sin^2{(h)}}{h \left[ \cos{(h)} + 1 \right]} \\ &= - \lim_{h \to 0}\frac{\sin{(h)}}{h} \lim_{h \to 0} \frac{\sin{(h)}}{\cos{(h)} + 1 } \\ &= -1 \cdot \frac{\sin{(0)}}{\cos{(0)} + 1} \\ &= -\frac{0}{2} \\ &= 0 \end{align*}$
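A quick numerical sanity check of the two limits used above (not from the thread; plain Python):

```python
from math import sin, cos

for h in (0.1, 0.01, 0.001, 1e-6):
    print(h, sin(h) / h, (cos(h) - 1) / h)
# sin(h)/h -> 1 and (cos(h) - 1)/h -> 0 as h -> 0
```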
6. ## Re: Trig identity and ln derivatives driving me nuts!
Originally Posted by Paze
The special derivatives consist of identifying the derivatives of sin, cos, tan, ln and e. I have memorized the derivative of these functions but I do not understand why they work (cannot find a good proof).
Nothing wrong with seeking proofs to understand some of these from (relatively speaking) first principles. But notice that most differentiation and integration exercises take them...
... and the various rules for combining them, especially...
... which is how I like to think of the chain rule, and...
... the product rule, as givens.
I.e., assume they work... then, what happens?
(key in spoiler)
Spoiler:
... is the chain rule. Straight continuous lines differentiate downwards (integrate up) with respect to the main variable (in this case x), and the straight dashed line similarly but with respect to the dashed balloon expression (the inner function of the composite which is subject to the chain rule).
The general drift is...
E.g. Prove It gave you the derivative of sine from first principles, but the rest from the rules (which do also have proofs... e.g. in wikipedia).
Originally Posted by Paze
For example I memorized that the derivative of ln is 1/x and also I memorized that the derivative of tan(x) is sec^2(x). Then I got a problem which asked me to derive ln*tan(x) which somehow started going in the direction of 1/tan(x) and I'm clueless.
Yes, so we have tan as the inner function of a composite, and ln as the outer. So we know to apply the chain rule...
... and we can see the outer function differentiated with respect to the inner function is one over...
... and how the rule plays out...
I hope that helps, or doesn't confuse.
_________________________________________
Don't integrate - balloontegrate!
Balloon Calculus; standard integrals, derivatives and methods
Balloon Calculus Drawing with LaTeX and Asymptote!
7. ## Re: Trig identity and ln derivatives driving me nuts!
WOW! I am receiving so much info here! These answers are unbelievably precise and descriptive!
I can't thank you guys enough for your efforts. This is simply amazing.
Ignore past this point (unless you notice fallacies) as I will be bookmarking the thread and I intend to utilize this space for notes:
• More info on why sin(theta)/theta = 1 in the inequality proof can be found by researching the squeeze theorem. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 38, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461675882339478, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/101293/how-to-prove-this-binomial-coefficient-identity/101296 | # How to prove this binomial coefficient identity? [duplicate]
Possible Duplicate:
Proving ${{n} \choose {r}}={{n-1} \choose {r-1}}+{{n-1} \choose r}$ when $1\leq r\leq n$
I have a dilemma here, how can we show that
Show that $$\binom{n}{r} = \binom{n-1}{r-1} + \binom{n-1}{r}, 1 \leq r \leq n.$$
I tried solving the right side by substituting everything into the combination's formula but everything gets complicated.. thanks
PS: I tried substituting real valued numbers, and it works, but it should be proof by means of mathematical manipulation.
-
tnx. kannapan sampth – vvavepacket Jan 22 '12 at 13:56
## marked as duplicate by mixedmath♦, Srivatsan, pedja, Davide Giraudo, David Mitra Jan 22 '12 at 14:01
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
## 1 Answer
I'm sure this question has been asked before, but I can't find it.
I think the easiest way to prove this identity is with a combinatorial proof. So we count the same thing in 2 ways. Suppose we have $n$ objects and we are choosing $r$ of them. The left side is that straight out.
Suppose we designate one particular element (it doesn't matter which one, but call it X). Then when choosing $r$ of the $n$ elements, we either have X or we don't. If we do, then we choose $r-1$ of the remaining $n-1$ elements. If we don't, then we choose $r$ of the remaining $n-1$ elements.
Adding these together, we see that the identity follows.
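For what it's worth, the identity is also trivial to verify numerically over a range of values (a sketch using Python's `math.comb`):

```python
from math import comb

assert all(comb(n, r) == comb(n - 1, r - 1) + comb(n - 1, r)
           for n in range(1, 50) for r in range(1, n + 1))
print("Pascal's rule holds for all 1 <= r <= n < 50")
```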
-
Oh - he found the duplicate. Vote to close! (But I keep my answer for posterity) – mixedmath♦ Jan 22 '12 at 13:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9283939003944397, "perplexity_flag": "middle"} |
http://mathhelpforum.com/calculus/111114-famous-2-4-4-2-problem-print.html | # The famous 2^4 = 4^2 problem
• October 28th 2009, 07:45 PM
Some1Godlier
The famous 2^4 = 4^2 problem
2^4 = 4^2 (X^Y=Y^X) I have to prove that (2,4) and (4,2) are the only pairs of distinct positive whole numbers having that property. I would also like to change the function into parametric but am quite lost. I have found some information concerning this topic but as of now it seems like gibberish (Can anyone help explain??):
First take the log of both sides:
log(X^Y) = log(Y^X)
and simplify:
Y*log(X) = X*log(Y)
and then divide by X*Y:
log(X)/X = log(Y)/Y.
Now you should consider the function
f(x) = log(x)/x.
Clearly, we have a solution to the last equation if and only if
f(X) = f(Y).
Well, this happens when X = Y, but does it happen elsewhere? If we graph
y = f(x)
we will find that f increases from y = -infinity at x = 0 to y = 1/e
at x = e (that's e = 2.71828... whether you used the common log or the
natural log or the log to any other base), and then f decreases from
y = 1/e at x = e to y = 0 at x = infinity.
Well, if X and Y are different values and
f(X) = f(Y),
then that means that there is a horizontal line which passes through
our function at two points (namely X and Y). Look at the function,
and you'll find that the smaller value is somewhere between 1 and e,
and the larger value is bigger than e. Also, the closer the smaller
value is to e, the closer the larger value is to e. The closer the
smaller value is to 1, the bigger the larger value is.
So what you find is that if X <= 1, then the only solution is Y = X.
Similarly, if X = e, then the only solution is Y = X. But if
1 < X < e,
then there are exactly two solutions for Y, one of which is Y = X, and
the other is some number bigger than e. Similarly, if
e < X,
then there are exactly two solutions for Y, one of which is Y = X, and
the other is some number between 1 and e.
But can you write out a formula for the smaller value in terms of the
bigger value, or vice-versa? Well, not using any closed-form
function. But you can use numerical methods to find approximate
solutions for any X value.
Here is another thing you might be interested in. The solutions you gave (2, 4) and (4, 2)
are in integers. In fact, these solutions and X = Y are the only
solutions in positive integers. And the only integer solutions are X
= Y and (2, 4), (4, 2), (-2,-4), (-4, -2).
Proving that is as follows: First suppose that X and Y are positive.
By switching the order of X and Y, we may assume that Y >= X. Now
divide both sides of the equation by X^X.
X^(Y-X) = (Y/X)^X.
Since the left side is clearly an integer, the right side has to be an
integer. But if you raise a non-integer rational number to an integer
power, then you don't get an integer. So that means that
k = Y/X
must be an integer (bigger than 0). Now we re-write our equation as
X^(kX - X) = k^X.
We take the positive real X'th root of both sides of the equation and get
X^(k-1) = k.
Now if X >= 2, then:
(a) k = 1 always works (and means X = Y)
(b) k = 2 implies X = X^(2-1) = 2 (and gives your solutions)
(c) k = 3 implies X^(k-1) > k
(d) by induction on k, k >= 3 implies
X^(k-1) = X*X^(k-2) > X(k-1) >= 2k-2 > k,
so there are only the solutions already mentioned when X >= 2.
But X = 1 implies Y = 1. And X = 0 implies Y = 0. And if X is
negative, but Y is positive, then Y^X is positive, so X^Y is positive,
which means that Y is even and
X^Y = (-X)^Y = Y^X, so (-X)^Y * Y^(-X) = 1,
but then (-X)^Y and Y^(-X) both have to be 1, so X is -1 and Y = 1.
Finally, if X and Y are both negative, then we raise both sides to the
-1 power and get
X^(-Y) = Y^(-X)
and then if X is odd and Y is even or vice-versa, then the signs don't
match, but if X and Y are both odd, then we multiply both sides of the
equation by -1 to get
(-X)^(-Y) = (-Y)^(-X).
If both X and Y are even, then we don't need to multiply, and we still
get the same equation. So (-X, -Y) is a solution in positive integers.
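Editorial aside (not part of the original post): both claims above — that (2,4)/(4,2) are the only integer solutions with X ≠ Y, and that the non-integer partner of any X in (1, e) can be found numerically — are easy to illustrate in a few lines of Python.

```python
from math import log, e

# Brute-force check over a finite range (an illustration, not a proof)
print([(x, y) for x in range(1, 50) for y in range(1, 50)
       if x != y and x ** y == y ** x])          # [(2, 4), (4, 2)]

def partner(x, hi=1e9):
    """For 1 < x < e, find Y > e with log(Y)/Y = log(x)/x by bisection."""
    target = log(x) / x
    lo = e                       # log(Y)/Y is decreasing for Y > e
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if log(mid) / mid > target else (lo, mid)
    return (lo + hi) / 2

print(partner(2.0))              # ~4.0
print(partner(1.5))              # ~7.41
```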
• October 28th 2009, 08:06 PM
tonio
Quote:
Originally Posted by Some1Godlier
2^4 = 4^2 (X^Y=Y^X) I have to prove that (2,4) and (4,2) are the only pairs of distinct positive whole numbers having that property. [... full post quoted above ...]
Wow! That was a huge message, and without LaTex I doubt many people will be willing to read it all.
Anyway, you already reached the conclusion that it'd be wise to study the function $f(x)=\frac{\ln x}{x}$...and indeed, it is wise: check that this function has a maximum at $x=e$ and thus it takes every value in the interval $(f(1),f(e))$ exactly twice as $x$ ranges over $(1,\infty)$.
Now check that the only integers s.t. $f(x)=f(y)$ are 2,4.
Tonio
• October 28th 2009, 08:10 PM
mr fantastic
Of related interest .....
A thread of related interest: http://www.mathhelpforum.com/math-he...stion-3-a.html
• October 28th 2009, 08:24 PM
Some1Godlier
Quote:
Originally Posted by tonio
check that this function has a maximum at $x=e$ and thus it takes every value in the interval $(f(1),f(e))$ exactly twice as $x$ ranges over $(1,\infty)$.
Now check that the only integers s.t. $f(x)=f(y)$ are 2,4.
Tonio
I am quite confused. What do you mean by twice the values? Can you go just a little more in depth. I'm slowly grasping but I need a bigger push. Thanks for the help
• October 28th 2009, 08:26 PM
Some1Godlier
Also, what is meant by F(x) = F(Y)?? What is meant by F(y)?? Is that just Ln(y)/y??
All times are GMT -8. The time now is 01:13 AM. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331707954406738, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/5297/colimit-of-topological-groups-again/5324 | # Colimit of topological groups (again)
In Direct limit, Martin rightly pointed out that my naive construction (now deleted) of the colimit (direct limit) of topological abelian groups was wrong. He shows how to do it properly (at least the coproduct) here.
Since then, I've been lurking some of the literature about the subject and this problem of the colimit of topological groups (about which I had previously no idea) seems at the same time classical and topical. For instance, this 1998 paper points out that the Encyclopedic Dictionary of Mathematics, second edition, MIT (1987), article 210, has made the same mistake that I did, stating that the direct limit of topological groups, with the inductive limit of topologies (my naive attempt), always has a continuous multiplication.
The authors of this paper show a counter-example (example 1.2, page 553) and here is my question: I must be absolutely dumb, but I don't understand it. Could anyone help me?
For those who don't have access to the paper, here is the example.
Let $G_n = \mathbb{Q} \times \mathbb{R}^n$ with the usual topology. Imbed $G_n \hookrightarrow G_{n+1}$ as $x \mapsto (x,0)$. Then, as a plain abelian group, $G = \varinjlim_{n} G_n= \mathbb{Q} \times \prod'\mathbb{R}$, where $\prod'\mathbb{R}$ denotes the weak or restricted product, which is the way guys in this area call the direct sum; that is, elements of $\prod'\mathbb{R}$ are infinite tuples $( x_1, \dots , x_n, \dots )$, in which all of its components $x_n \in \mathbb{R}$ are zero, except a finite number of them.
The inductive limit topology is the finest one that makes all the inclusions $G_n \hookrightarrow G$ continuous. That is, $U \subset G$ is open if and only if $U\cap G_n$ is open for all $n$.
Let's see how they show that, with this inductive topology, the "product" (in fact, addition) $\mu : G \times G \longrightarrow G$ is not continuous ($\mu$ is the operation induced as an honest colimit of groups -no topologies- by the operations $\mu_n : G_n \times G_n \longrightarrow G_n$, which I assume are the usual additions of those linear spaces). In plain English:
$$(x_0, x_1, \dots , x_n , \dots ) + (y_0, y_1, \dots , y_n , \dots ) = (x_0+y_0, x_1 + y_1 , \dots , x_n + y_n , \dots ) \ .$$
So, in this situation it's enough to produce an open neighbourhood $U \subset G$ of the neutral element $e \in G$ such that $V^2$ is not in $U$, for any open neighbourhood $V$ of $e$ -where, I assume, $V^2$ means $V + V$.
Ok, here is the guy that is supposed to ruin (and sure it does) my naive attempt:
$$U = \left\{ x = (x_0, x_1, \dots , x_n , \dots ) \quad \vert \quad \vert x_j\vert < \vert \cos (jx_0) \vert \ , 1 \leq j \right\} \ .$$
This guy is an open set of $G$ because $x_0$ being a rational number guarantees $\cos (jx_0) \neq 0$ for all $j$. Assume there is an open neighbourhood $V$ of $e$ such that $V^2 \subset U$. Then, $V \cap G_j$ contains an open interval $(-\varepsilon_j , \varepsilon_j)$ in $\mathbb{R}$ with $\epsilon_j > 0$ such that
$$(-\varepsilon_0 , \varepsilon_0) \times (-\varepsilon_j , \varepsilon_j) \subset \left\{ (x_0 , x_j) \in \mathbb{Q} \times \mathbb{R} \quad \vert \quad \vert x_j\vert < \vert \cos (jx_0) \vert \right\} \ .$$
And here come the two final sentences of the example I don't understand: This is impossible if $2j\varepsilon_0 > \pi$. A contradiction.
Any hints or remarks (even humiliating ones) will be welcome.
-
1
I'm confused: what is $\epsilon_0$? The statement in the next to last sentence must be missing a quantifier (both here and in the original paper): there exists $\epsilon_j>0$ such that (statement involving undefined variable $\epsilon_0$) is true. And then we get a contradiction when the undefined variable is big? – Dan Ramras Sep 23 '10 at 19:25
1
Well, I think they should have said something like: "if $V\cap G_n$ is an open neighborhood of the neutral element in $G_n$, then there is an open "cube" $(-\varepsilon_0, \varepsilon_0) \times(-\varepsilon_1, \varepsilon_1) \times \dots \times (-\varepsilon_n, \varepsilon_n) \subset V$". The problem is (see my reply to William's answer) that we are not using anywhere the condition $V^2 \subset U$. So this same "cube" should exist for $U$ too. – Agustí Roig Sep 24 '10 at 3:14
Has anyone looked in the Encyclopedic Dictionary of Mathematics, second edition, MIT (1987), article 210 (as referenced above) to see if they give a reference for the general result? If we're perplexed by the supposed counterexample, it might be worth figuring out why it was claimed that no such counterexample could exist... Google books doesn't seem to have the necessary pages. Actually, it's in our library, so I'll look today. – Dan Ramras Sep 24 '10 at 16:25
2
I have to say that the result proven by Tatsuuma-Shimomura-Hirai in the paper Agusti linked to sounds like it could be the best possible: they prove that the inductive limit topology for a (countable) system of locally compact (Hausdorff) groups gives rise to a topological group (Thm 2.7 of their paper). The reason this sounds right to me is Lemma 5.5 in Milnor and Stasheff's Characteristic Classes, which says that the direct limit topology (for sequences of spaces) commutes with Cartesian products, roughly speaking, when the spaces involved are locally compact Hausdorff. – Dan Ramras Sep 24 '10 at 16:35
2
I finally got around to looking in the Encyclopedic Dictionary of Mathematics. The result (colimits of topological groups are topological groups) is stated there without any reference, and without any explanation. Not very helpful... – Dan Ramras Sep 27 '10 at 0:48
show 9 more comments
## 3 Answers
This means that $j\epsilon_0 > \frac{\pi}{2}$. Hence $-j\epsilon_0 < -\frac{\pi}{2}$. By the density of rationals, this tells you that there exists a sequence $q_1, q_2,\dots$ of rational numbers in $(-\epsilon_0,\epsilon_0)$ such that $jq_n\rightarrow \frac{\pi}{2}$. Hence $|\cos(jq_n)|\rightarrow 0$. Combining this with $|x_j| < |\cos(jq_n)|$ for all $n$ gives the contradiction.
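A small numerical illustration of this step (mine, not from the paper or the answer): once $j\epsilon_0 > \pi/2$, a rational $q \in (-\epsilon_0, \epsilon_0)$ can make $|\cos(jq)|$ smaller than any prescribed $\epsilon_j$.

```python
from fractions import Fraction
from math import pi, cos

j, eps0, eps_j = 2, 1.0, 1e-6          # here 2*j*eps0 = 4 > pi
digits = 12
q = Fraction(round(pi / (2 * j) * 10 ** digits), 10 ** digits)  # rational, close to pi/(2j)

assert abs(q) < eps0
print(abs(cos(j * float(q))))          # ~1e-12, far below eps_j
# So (x_0, x_j) = (q, eps_j / 2) lies in the candidate "cube" but violates |x_j| < |cos(j x_0)|.
```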
-
1
I thought about this possibility, but I discarded it because, if this reasoning was correct, then it would apply also to $U$, not just $V$: notice that, in the case of $U$, you already have the possibility of constructing such a sequence $q_n$. That is, if $V$ is not open, $U$ is not open too, so the whole example is wrong. – Agustí Roig Sep 24 '10 at 3:00
This does not prove that $U$ is not open, since William uses all $n$ above. – Martin Brandenburg Sep 26 '10 at 10:35
I would have added a comment, but I fear the answer would overlap the allowed character limit.
Anyway, maybe I'm missing something, but I think I see a clear contradiction. Here we go...
Let $S_j = \{(x_0, x_j)\in\mathbb{Q}\times\mathbb{R}: |x_j| < |\cos(jx_0)|\}$ Let $V_j = (-\epsilon_0,\epsilon_0)\times(-\epsilon_j,\epsilon_j)$. The claim is that for all $j$, we have $V_j\subset S_j$. Fix $j$ and assume this inclusion holds. On the other hand, if $2j\epsilon_0 > \pi$, there is a sequence $q_1,q_2,\dots$ of rationals in $(-\epsilon_0,\epsilon_0)$, such that $jq_n\rightarrow \frac{\pi}{2}$. Let $x_j\in (-\epsilon_j, \epsilon_j)$ not equal to zero. Since the inclusion $V_j\subset S_j$ holds, we have $|x_j| < |\cos(jq_n)|$ for all $n$. Clearly impossible, since $x_j > 0$.
-
1
I believe you, William (except for this last "since $x_j > 0$" -I presume you meant $\varepsilon_j > 0$-, but this doesn't matter). The point, as Pierre-Yves Gaillard says, is: this same proof shows that also $U$ is not open. So the conclusion, again, is that the whole example is wrong. – Agustí Roig Sep 24 '10 at 5:09
This is a new version of the answer. The comments refer to previous versions. Martin Brandenburg's comment was especially helpful. A comment of Dan Ramras, somewhere else in this thread, drew my attention to Lemma 5.5 page 64 in Milnor and Stasheff's Characteristic Classes. Thank you also to Agusti Roig for his wonderful question.
Let me state the Lemma of Milnor and Stasheff mentioned above:
(1) Let $A_1\subset A_2\subset\cdots$ and $B_1\subset B_2\subset\cdots$ be sequences of locally compact spaces with inductive limits $A$ and $B$ respectively. Then the product topology on $A\times B$ coincides with the inductive limit topology which is associated with the sequence $A_1\times B_1\subset A_2\times B_1\subset\cdots$.
The proof (which can be found here) shows in fact that the following more technical statement also holds:
(2) Let $A_1\subset A_2\subset\cdots$ and $B_1\subset B_2\subset\cdots$ be as above. For each $i$ let $C_i\subset A_i$ and $D_i\subset B_i$ be subspaces. Assume $C_i\subset C_{i+1}$ and $D_i\subset D_{i+1}$ for all $i$, and call $C$ and $D$ the respective inductive limits. Then the product topology on $C\times D$ coincides with the inductive limit topology which is associated with the sequence $C_1\times D_1\subset C_2\times D_2\subset\cdots$.
In particular, if the $C_i$ are topological groups, then so is $C$ (assuming, of course, that the topological group structure on $C_i$ is induced by that of $C_{i+1}$). This seems to indicate that the alleged counterexample of Tatsuuma et al. is not really a counterexample.
I know no example of an inductive limit of topological groups which is not a topological group. (I think that such examples do exist. It would be interesting to know if $(\mathbb R^\infty)^\infty$ is a topological group.)
Let's prove (2).
Let $W$ be a subspace of $C\times D$ which is open in the inductive limit topology. For each $i$ let $W_i$ be an open subset of $A_i\times B_i$ which has the same intersection with $C_i\times D_i$ as $W$. Let $(c,d)$ be in $W\cap(C_i\times D_i)$. There is a compact neighborhood $K_i$ of $c$ in $A_i$, and a compact neighborhood $L_i$ of $d$ in $B_i$, such that $K_i\times L_i\subset W_i$. There is also a compact neighborhood $K_{i+1}$ of $K_i$ in $A_{i+1}$, and a compact neighborhood $L_{i+1}$ of $L_i$ in $B_{i+1}$, such that $K_{i+1}\times L_{i+1}\subset W_{i+1}$. Then $K_{i+1}\cap C_{i+1}$ is a neighborhood of $K_i\cap C_i$ in $C_{i+1}$, and $L_{i+1}\cap D_{i+1}$ is a neighborhood of $L_i\cap D_i$ in $D_{i+1}$. The union $U$ of the $K_i\cap C_i$ is open in $C$, and the union $V$ of the $L_i\cap D_i$ is open in $D$, the spaces $C$ and $D$ being equipped with the inductive limit topology. Moreover we have $(c,d)\in U\times V\subset W$.
-
If $x_0$ is allowed to be real, then I think I need a bit of convincing for why $U'$ is open. – Dylan Wilson Sep 26 '10 at 20:23
1
@Edit3: I want to see a proof that $A_{\epsilon}$ is open. Even if you replace your intervalls $I(\epsilon)$ with open ones, this is not clear to me. – Martin Brandenburg Sep 29 '10 at 8:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 141, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9539064764976501, "perplexity_flag": "head"} |
http://unapologetic.wordpress.com/2007/06/02/hom-functors/?like=1&source=post_flair&_wpnonce=bbf4ba6a50 | # The Unapologetic Mathematician
## Hom functors
Every locally small category $\mathcal{C}$ comes with a very interesting functor indeed. Given objects $A'$ and $A$ we can find the set $\hom_\mathcal{C}(A',A)$ of morphisms from $A'$ to $A$. I say that this is a functor $\hom:\mathcal{C}^{\rm op}\times\mathcal{C}\rightarrow\mathbf{Set}$.
We’ve given how $\hom$ behaves on pairs of objects. Now if we have morphisms $f':B'\rightarrow A'$ and $f:A\rightarrow B$ in $\mathcal{C}$ we need to construct a function $\hom_\mathcal{C}(f',f):\hom_\mathcal{C}(A',A)\rightarrow\hom_\mathcal{C}(B',B)$. Notice that the direction of the arrow in the first slot gets reversed, since we want the functor to be contravariant in that place. So, given a morphism $m\in\hom_\mathcal{C}(A',A)$ we define $\hom_\mathcal{C}(f',f)(m)=f\circ m\circ f'$.
Clearly if we pick a pair of identity morphisms, $\hom_\mathcal{C}(1_{A'},1_A)$ is the identity function on $\hom_\mathcal{C}(A',A)$. Also, if we take $f':B'\rightarrow A'$, $f:A\rightarrow B$, $g':C'\rightarrow B'$, and $g:B\rightarrow C$ in $\mathcal{C}$, then we can check
$\left[\hom_\mathcal{C}(g',g)\circ\hom_\mathcal{C}(f',f)\right](m)=\hom_\mathcal{C}(g',g)(f\circ m\circ f')=$
$g\circ f\circ m\circ f'\circ g'=\hom_\mathcal{C}(f'\circ g',g\circ f)(m)$
Again, notice that the order of composition gets swapped in the first slot. Thus $\hom_\mathcal{C}$ is a functor, contravariant in the first slot and covariant in the second.
Not only do hom-functors exist for all locally small categories, but we’ll see that they have all sorts of special properties that will come in handy.
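Purely as an illustration (not from the post): in $\mathbf{Set}$ one can literally write the action on morphisms as function composition. Here is a toy Python sketch where the morphisms are ordinary functions.

```python
# hom(f_prime, f) sends m : A' -> A to f . m . f_prime : B' -> B
def hom(f_prime, f):
    return lambda m: (lambda x: f(m(f_prime(x))))

# Toy morphisms: B' --f'--> A' --m--> A --f--> B  (all "objects" are just numbers here)
f_prime = lambda x: x + 1      # B' -> A'
m       = lambda x: 2 * x      # A' -> A
f       = lambda x: x - 3      # A  -> B

print(hom(f_prime, f)(m)(10))  # f(m(f_prime(10))) = 2*11 - 3 = 19
```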
Posted by John Armstrong | Category theory
## 2 Comments »
1. [...] Now that we have a handle on hom functors, we can use them to define other [...]
Pingback by | June 4, 2007 | Reply
2. [...] when we have extra commuting actions on . The complication is connected to the fact that the hom functor is contravariant in its first argument, which if you don’t know much about category theory [...]
Pingback by | November 3, 2010 | Reply
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 24, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9073842763900757, "perplexity_flag": "head"} |
http://crypto.stackexchange.com/tags/basic-knowledge/hot?filter=all | # Tag Info
## Hot answers tagged basic-knowledge
8
### Why use an Initialization Vector (IV)?
Well, the exact reason for an IV varies a bit between different modes that use IV. At a high level, what the IV does is act as a randomizer, so that each encrypted message appears to be encrypted to a random pattern, even if those messages are similar. In general, IVs disguise when you encrypt the same message twice (and more generally, when two messages ...
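A toy illustration of that last point (mine, not from the answer, and emphatically not real cryptography): with a deterministic scheme, equal plaintexts produce equal ciphertexts; prepending a fresh random IV hides the repetition.

```python
import os, hashlib

def toy_encrypt(key: bytes, msg: bytes, iv: bytes = b"") -> bytes:
    """XOR the message with SHA-256(key || iv); works for msg up to 32 bytes."""
    stream = hashlib.sha256(key + iv).digest()
    return iv + bytes(m ^ s for m, s in zip(msg, stream))

key = os.urandom(16)
msg = b"attack at dawn"

# No IV: encrypting the same message twice is visibly identical
print(toy_encrypt(key, msg) == toy_encrypt(key, msg))    # True -> leaks repetition

# Fresh random IV each time: the ciphertexts differ
c1 = toy_encrypt(key, msg, os.urandom(16))
c2 = toy_encrypt(key, msg, os.urandom(16))
print(c1 == c2)                                          # False (with overwhelming probability)
```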
7
### Is it generally possible to employ brute force methods when the encryption scheme is not known? Why or why not?
This is called ciphertext-only cryptanalysis*, and it's pretty difficult unless the cipher is quite weak. Therefore, the first priority for a cryptanalyst in such a situation is usually to try to find more information about the algorithm. Fortunately (for the cryptanalyst), as Kerckhoff's principle suggests, there are often ways to find out how the ...
7
### Why use an Initialization Vector (IV)?
Many cryptographic algorithms are expressed as iterative algorithms. E.g., when encrypting a message with a block cipher in CBC mode, each message "block" is first XORed with the previous encrypted block, and the result of the XOR is then encrypted. The first block has no "previous block" hence we must supply a conventional alternate "zero-th block" which we ...
4
### Which categories of cipher are semantically secure under a chosen-plaintext attack?
Encryption using a block cypher such as AES by passing plaintext blocks directly to the encryption function is known as Electronic Code Book mode (ECB) and is not CPA secure as (as you say in your question) it is entirely deterministic and two identical plaintext blocks will result in two identical ciphertext blocks. To prevent this an initialisation ...
2
### Is it generally possible to employ brute force methods when the encryption scheme is not known? Why or why not?
If you can't tell what function was applied to create the cipher text, your search space is as many bits as the message. It's the perfect secrecy achieved with a one time pad. (x + y = z, given z what are x and y?) During an exhaustive search the attacker could find as many messages as they were willing to compute, but they will never know which one was ...
1
### Which categories of cipher are semantically secure under a chosen-plaintext attack?
To be secure against a chosen-plaintext attack, an encryption scheme must be non-deterministic — that is, its output must include a random element, so that e.g. encrypting the same plaintext twice will result in two different ciphertexts. Indeed, if that was not the case, an attacker could easily win the IND-CPA game just by using the encryption ...
1
### Decimal to binary question [closed]
In base 10 we write for example $133$ when we mean $$133 = 1 * 10^2 + 3*10^1 + 3*10^0.$$ If we want to write $49$ in base $2$ then note first that: $$49 = 1*2^5 + 1*2^4 + 0*2^3 + 0*2^2 + 0*2^1 + 1*2^0.$$ Because of this $49$ is $110001$. Now obviously, you "don't know this", but I wanted to write it down so that you can see what happens as you divide by ...
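The truncated sentence presumably continues with repeated division by 2; here is that procedure spelled out as a short sketch (my code, not from the answer).

```python
def to_binary(n):
    """Repeatedly divide by 2, collecting remainders (least significant bit first)."""
    bits = []
    while n > 0:
        n, r = divmod(n, 2)
        bits.append(str(r))
    return "".join(reversed(bits)) or "0"

print(to_binary(49))     # 110001
print(format(49, "b"))   # built-in cross-check: 110001
```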
Only top voted, non community-wiki answers of a minimum length are eligible | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9407749772071838, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/261694/working-out-digits-of-pi | # Working out digits of Pi. [closed]
I have always wondered how the digits of π are calculated. How do they do it?
Thanks.
-
5
Perhaps you should ask a question. – alex.jordan Dec 18 '12 at 21:05
1
– Daryl Dec 18 '12 at 21:05
3
– Hagen von Eitzen Dec 18 '12 at 21:09
1
:(..................... – fosho Dec 18 '12 at 21:11
– Arkamis Dec 18 '12 at 21:16
## closed as not a real question by Arkamis, Asaf Karagila, Andres Caicedo, John Wordsworth, Micah Dec 19 '12 at 1:01
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, see the FAQ.
## 4 Answers
The Chudnovsky algorithm, which just uses the very rapidly converging series $$\frac{1}{\pi} = 12 \sum^\infty_{k=0} \frac{(-1)^k (6k)! (13591409 + 545140134k)}{(3k)!(k!)^3 640320^{3k + 3/2}},$$ was used by the Chudnovsky brothers, who are some of the points on your graph.
It is also the algorithm used by at least one arbitrary precision numerical library, mpmath, to compute arbitrarily many digits of $\pi$. Here is the relevant part of the mpmath source code discussing why this series is used, and giving a bit more detail on how it is implemented (and if you want, you can look right below that to see exactly how it is implemented). It actually uses a method called binary splitting to evaluate the series faster.
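For completeness, using mpmath from Python is a one-liner; `mp.dps` sets the working precision in decimal places.

```python
from mpmath import mp

mp.dps = 60          # work to 60 decimal places
print(mp.pi)         # 3.14159265358979323846...
```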
-
One thing I'm not sure about: the mpmath source says that each term adds roughly 14 digits, but it's not clear if it's 14 more digits of $\pi$ or 14 more digits of $\frac{1}{\pi}$. – asmeurer Dec 18 '12 at 21:24
I'm not sure, but I would go with $\frac{1}{\pi}$ since the terms of the series are being implicitly referred to. – 000 Dec 18 '12 at 21:55
I don't know how we do it nowadays, but there is a formula due to David Bailey, Peter Borwein, and Simon Plouffe in 1995 which can be used to calculate the $n$-th digit of $\pi$ in base 16:
$\pi=\sum_{i=0}^{\infty}\frac{1}{16^i}\left(\frac{4}{8i+1}-\frac{2}{8i+4}-\frac{1}{8i+5}-\frac{1}{8i+6}\right)$
What I find interesting is that we don't need to know the previous digits to find the $n$-th digit of $\pi$. Nowadays there are a lot of variations of that formula, but it was a surprise when this formula was discovered, because until that time mathematicians thought it was impossible to find the $n$-th digit of $\pi$ without knowing the previous ones.
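To see the formula converge, here is a small exact-arithmetic sketch (mine) that simply sums the series; the real novelty — extracting the $n$-th hex digit via modular exponentiation without computing the earlier ones — is not shown here.

```python
from fractions import Fraction
import math

def bbp_partial_sum(terms):
    s = Fraction(0)
    for i in range(terms):
        s += Fraction(1, 16 ** i) * (Fraction(4, 8 * i + 1) - Fraction(2, 8 * i + 4)
                                     - Fraction(1, 8 * i + 5) - Fraction(1, 8 * i + 6))
    return s

print(float(bbp_partial_sum(12)))   # 3.141592653589793
print(math.pi)                      # matches to double precision
```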
-
From one of my favorite mathematicians,
$$\frac{1}{\pi}=\frac{2\sqrt{2}}{9801}\sum_{n \ge 0} \frac{(4n)!\,(1103+26390n)}{(n!)^4\,396^{4n}}.$$
Looking here, we see many formulae. Some of particular interest are:
$$\pi=\sum_{k \ge 1}\frac{3^k-1}{4^k}\zeta(k+1),\quad \zeta(s)=\sum_{k\ge 1}k^{-s}.$$
$$\frac{1}{6}\pi^2=\sum_{k \ge 1}\frac{1}{k^2} \quad \text{via the Basel problem}.$$
$$\pi =\frac{3}{4}\sqrt{3}+24\int_0^{\frac{1}{4}}\sqrt{x-x^2}dx.$$
$$\frac{\pi}{5\sqrt{\phi+2}}=\frac{1}{2}\sum_{k \ge 0}\frac{(k!)^2}{\phi^{2k+1}(2k+1)!},\quad \text{where } \phi \text{ is the golden ratio.}$$
$$\pi=\frac{22}{7}-\int_{0}^{1}\frac{x^4(1-x)^4}{1+x^2}dx.$$
$$\pi=4\sum_{1 \le j \le n}\frac{(-1)^{j+1}}{2j-1}+\frac{(-1)^n}{(2n-1)!}\sum_{i \ge 0}\frac{1}{16^i}\left( \frac{8}{(8i+1)_{2n}}-\frac{4}{(8i+3)_{2n}}-\frac{4}{(8i+4)_{2n}}-\frac{2}{(8i+5)_{2n}}+\frac{1}{(8i+7)_{2n}}+\frac{1}{(8i+8)_{2n}} \right),\quad \text{where } n \in \mathbb{Z}_{>0}, \text{ and } (x)_{n} \text{ represents the Pochhammer symbol.}$$
$$\pi = 2\left[\prod_{n \ge 0}\left(1+\frac{\sin\left(\frac{1}{2}p_n\right)}{p_n}\right) \right]^{-1},\quad p_n \text{ is the } n\text{th} \text{ prime number}.$$
$$\frac{2}{\pi}=\sqrt{\frac{1}{2}}\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}}}\sqrt{\frac{1}{2}+ \frac{1}{2}\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}}}}\cdots$$
$$\frac{\pi}{2}=\prod_{n \ge 1}\frac{(2n)^2}{(2n-1)(2n+1)}.$$
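Two of these are easy to sanity-check numerically with mpmath (a quick check of my own, not part of the original list):

```python
from mpmath import mp, mpf, quad, sqrt, pi

mp.dps = 30

# pi = 22/7 - integral_0^1 x^4 (1-x)^4 / (1+x^2) dx
print(mpf(22) / 7 - quad(lambda x: x**4 * (1 - x)**4 / (1 + x**2), [0, 1]))
print(+pi)

# the n = 0 term of Ramanujan's series alone already approximates 1/pi very well
term0 = 2 * sqrt(2) / 9801 * 1103
print(1 / term0)   # agrees with pi to about seven digits
```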
-
Which mathematician? – asmeurer Dec 18 '12 at 23:18
@asmeurer Ramanujan. :-) – 000 Dec 19 '12 at 0:23
Here is one simple way to compute digits of $\pi$:
Recall that $\tan(\pi/4) = 1$.
Thus, we have $\arctan(1) = \pi/4$.
And, in particular, $4 \cdot \arctan(1) = \pi$.
You can now take the Taylor series expansion for $4 \cdot \arctan(x)$ and consider what happens when you evaluate it at $1$. Since the corresponding series (see, e.g., here) is an alternating series decreasing in absolute value, you can evaluate however many terms you want, and then take an evaluation of the next term as a (somewhat crude) bound for your error.
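A tiny Python illustration of this (my own, with an arbitrary choice of term counts) makes the slow convergence and the error bound concrete:

```python
def leibniz_pi(n_terms):
    # 4*arctan(1) via the alternating series 4*(1 - 1/3 + 1/5 - 1/7 + ...)
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

for n in (10, 100, 700, 10**6):
    print(n, leibniz_pi(n))
# the error after n terms is bounded by the first omitted term, 4/(2n+1),
# which is why roughly 700 terms are needed before "3.14..." is guaranteed
```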
-
2
This is an insanely inefficient way to calculate $\pi$. Even if you have a calculator and only want to calculate 10 digits or so, it will take you forever. – asmeurer Dec 18 '12 at 21:21
2
I make no claims as to its efficiency; indeed, it takes about $700$ terms before you can accurately claim that the expansion begins $3.14\ldots$ I mention it strictly for its simplicity. – B.D Dec 18 '12 at 21:26
I took the liberty of editing your answer to give the series explicitly, and to point out how slow it is. – asmeurer Dec 18 '12 at 21:29
Better just to leave a comment instead of "taking the liberty" to edit my answer. Thanks. – B.D Dec 20 '12 at 12:00
– asmeurer Dec 20 '12 at 19:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9287489056587219, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/13006/lorentz-invariance-of-a-frequency-and-wavelength-dependent-dielectric-tensor?answertab=active | # Lorentz invariance of a frequency- and wavelength- dependent dielectric tensor
Suppose we have a material described by a dielectric tensor $\bar{\epsilon}$. In frequency domain, this tensor depends on the wave frequency $\omega$ and the wave vector $\vec{k}$.
Clearly not all $\bar{\epsilon}=f(\omega,\vec{k})$ are physically possible. Without considering the physics behind any particular $\bar{\epsilon}=f(\omega,\vec{k})$, is it possible to find general conditions that all $\bar{\epsilon}=f(\omega,\vec{k})$ must obey? I am mostly interested in Lorentz invariance: surely the most general description of any material must be parametrized by the material's velocity according to the observer ($\vec{v}_{mat}$), and then there must be some sort of symmetry between $\omega$ and $\vec{k}$ in $\bar{\epsilon}=f(\omega,\vec{k},\vec{v}_{mat})$ because the Lorentz transformation mixes up distances ($1/k$) and time intervals ($1/\omega$).
-
– Ron Maimon Aug 31 '11 at 5:21
## 1 Answer
You get constraints on the possible form of the dielectric since it is a "response function". The polarization $P$ in an applied field $E$ is given by $P(x,t) = \int \epsilon(x-x',t-t')E(x',t')dx'dt'$ up to various conventions. We must have $\epsilon(x,t) = 0$ when $t<0$, since an electric field can't change what the polarization was before the electric field was turned on. In Fourier space this translates to $\epsilon(\omega)$ being analytic in the upper half complex $\omega$ plane. This in turn leads to the Kramers-Kronig relations, which you can look up on Wikipedia under that name.
(I suppose we can get an even stricter condition since $\epsilon(x,t)$ must be zero outside of the forward light cone, $t > |\vec{x}|$.)
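As a concrete (if rough) illustration of the Kramers-Kronig constraint, here is a numerical check of my own, not from the answer: for a causal Lorentz-oscillator susceptibility, the real part at one frequency is recovered from the imaginary part alone via the principal-value integral (the parameters and grid are arbitrary choices):

```python
import numpy as np

w0, gamma = 1.0, 0.2
chi = lambda w: w0**2 / (w0**2 - w**2 - 1j * gamma * w)   # causal damped-oscillator response

w = 1.7                          # test frequency (chosen so the pole falls between grid points)
h = 1e-3
wp = np.arange(h / 2, 300.0, h)  # omega' grid; midpoint rule approximates the principal value
integrand = wp * chi(wp).imag / (wp**2 - w**2)
kk_real = (2 / np.pi) * np.sum(integrand) * h

print(kk_real, chi(w).real)      # the Kramers-Kronig reconstruction matches Re(chi) closely
```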
As for seeing how the dielectric changes under a Lorentz transformation - this should be a relatively straightforward task. Write down Maxwell's equations with everything expressed in terms of the fields and the dielectric permittivity and perform a Lorentz transformation. This should take you back to the same form of equations but with a different dielectric.
-
1
It may be straightforward, but it's done nowhere. You get a four-index response tensor, and you need to give a symmetry decomposition, and it's annoying (but not hard). This would answer a bunch of upvoted problems on this site. – Ron Maimon Dec 30 '11 at 13:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9222396612167358, "perplexity_flag": "head"} |
http://physics.stackexchange.com/questions/20237/are-black-hole-event-horizons-filled-with-black-holes?answertab=active | # Are black hole event horizons filled with black holes? [closed]
An observer hovering close to an event horizon will observe huge energies, like blue-shifted radiation falling in, or Hawking radiation going out. So does the observer observe that black holes are created when high-energy particles collide, and that these black holes then absorb energy at a fast rate?
-
Currently, your question isn't that clear... If you clarify it, ping me in the comments and I'd be happy to reopen it :) – Manishearth♦ Dec 23 '12 at 8:50
## closed as not a real question by Sklivvz♦, Manishearth♦Dec 23 '12 at 8:50
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, see the FAQ.
## 1 Answer
First of all, Hawking radiation is a bold speculation. It has never been observed and its theoretical background is debatable, since so far we do not have a quantum theory of gravity, which appears to be needed to speculate precisely about such phenomena. However, most physicists agree that its existence is quite possible.
Even if it exists, it is predicted to be really, really weak for any black hole of reasonable size. So, there will be no new black holes formed near the horizon because of that.
When it comes to very small black holes (e.g. the size of a proton), current theories are helpless and we cannot make any predictions. Therefore the answer is: they may or they may not.
Now, any other known radiation is too weak to produce black holes near the horizon in practical astrophysical situations. This is because the energy concentration required to produce a horizon is immensely immense :). Remember $E=mc^2$ - you need a lot of matter to form a black hole, and still $c^2$ is sooo huge. I am working on black hole formation in my thesis, and radiation can of course produce it - but simulations suggest that the energy concentration has to be unreasonably strong.
-
Well, I think that an observer hovering at event horizon will consider just about any energy to be nearly infinite energy! The blue shift of falling energy is the reason for that, or even better reason: the red shift of the observer's energy. I assumed everybody knows this :) – kartsa Jan 31 '12 at 11:58
– Terminus Feb 1 '12 at 12:13
Hovering at the event horizon is an infinitely hard thing to do, and when you are doing that, your energy is zero. That's what I think! .... If Kruskal-Szekeres coordinates tell a different story, then that's a problem of Kruskal-Szekeres coordinates. – kartsa Feb 2 '12 at 8:34
http://math.stackexchange.com/questions/209971/how-do-you-prove-that-no-matter-whether-pa-1-or-0-a-is-independent-from-b | # How do you prove that no matter whether P(A)=1 or 0, A is independent from B
Of course we are assuming that A and B are independent events. I know how to show that if P(A)=1 then P(B)=P(AB), but how do we show that if P(A)=0?
-
Hey there Kyle. You have asked 6 questions now, but you haven't accepted any of the answers given. Please review the answers to your other questions and accept some of the answers. – Thomas Oct 9 '12 at 17:31
1
According to your title, you're trying to show that $A$ and $B$ are independent. According to your first sentence, you're assuming that $A$ and $B$ are independent. This should be an easy proof then. – Chris Eagle Oct 9 '12 at 17:31
I just realized that Thomas, thanks for the heads up, I'm going to start doing that now. I'm pretty new to using this. – Kyle Oct 9 '12 at 17:37
## 1 Answer
Kyle, from your title, it seems you are asking, if $P(A) = 0$, how can we prove that $A$ and $B$ are independent events? The condition that must hold for two events, $A$ and $B$, to be independent is
$$P(AB) = P(A)P(B)$$
So, if you want to prove $A$ and $B$ are independent, you need to show this. In this case, if $P(A) = 0$, what is the right hand side? And, since $P(AB)$ means the probability of $A$ and $B$ both happening, what do you think $P(AB)$ is when the probability of just $A$ happening is $0$?
Note, in your question, you actually ask something totally different. You assume $A$ and $B$ are independent and you want to prove $P(B) = P(AB)$. This is a vastly different question.
-
This is not the test of independence. The test of independence is that $P(A)$ is the same regardless of whether $B$ happens. It implies that $P(AB)=P(A)P(B)$, but that is not what you asked. – Ross Millikan Oct 9 '12 at 17:37
My appologies everyone, I misunderstood the problem a little and then asked for help the completely wrong way. Thanks for your tries. – Kyle Oct 9 '12 at 17:45
1
– Henning Makholm Oct 9 '12 at 18:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9597015976905823, "perplexity_flag": "head"} |
http://mathhelpforum.com/calculus/89996-proof-limit-n.html | # Thread:
1. ## proof of a limit as n→∞
A friend showed me this and neither of us can work out a proof.
$\lim_{n\to\infty}\frac{\sqrt{n+2}-\sqrt{n+1}}{\sqrt{n+1}-\sqrt{n}}$
I know the limit is 1 but I'm just looking for a method to prove it. So not putting in high values and seeing what it tends to and doing it by inspection probably wouldn't be ideal. I've tried multiplying the numerator and denominator by several values such as $\frac{\sqrt{n+1}}{\sqrt{n+1}}$ and hit dead ends with all of them so I'm assuming there's a different approach I'm not seeing. Thanks for any help in advance =)
2. by rationalizing you get that $\frac{\sqrt{n+2}-\sqrt{n+1}}{\sqrt{n+1}-\sqrt{n}}=\frac{\sqrt{n+1}+\sqrt{n}}{\sqrt{n+2}+\sqrt{n+1}},$ then just divide top & bottom by $\sqrt n$ to get the limit.
3. thanks, would you mind explaining how you rationalized this please?
4. $\frac{\sqrt{n+2}-\sqrt{n+1}}{\sqrt{n+1}-\sqrt{n}}=\frac{{\color{blue}(\sqrt{n+2}+\sqrt{n+1})}(\sqrt{n+2}-\sqrt{n+1}){\color{red}(\sqrt{n+1}+\sqrt{n})}}{{\color{blue}(\sqrt{n+2}+\sqrt{n+1})}(\sqrt{n+1}-\sqrt{n}){\color{red}(\sqrt{n+1}+\sqrt{n})}}=\frac{\sqrt{n+1}+\sqrt{n}}{\sqrt{n+2}+\sqrt{n+1}}$
EDIT: Trying to figure out how colors work in Latex so I can make it more clear. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9488155841827393, "perplexity_flag": "head"} |
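For what it's worth, a computer algebra system confirms the value discussed in this thread (a quick check of my own using sympy):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
expr = (sp.sqrt(n + 2) - sp.sqrt(n + 1)) / (sp.sqrt(n + 1) - sp.sqrt(n))
print(sp.limit(expr, n, sp.oo))   # 1
```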
http://mathoverflow.net/questions/107653/prove-log-of-eigenvalues-are-dense-in-r | ## Prove log of eigenvalues are dense in R?
Suppose you have the set of all possible $n \times n$ square adjacency matrices where $n\in\{1,2,3,4,\dots\}$. For each matrix, compute the logarithm of the largest eigenvalue. Is it true that the set of logarithms you obtain is dense in $\mathbb{R}$? How do you begin to prove/disprove this?
-
adjacency matrices means entries are either 0 or 1? – 36min Sep 20 at 5:33
1
What is an adjacency matrix to you? Does it have to have only entries of $0$ or $1$? Does it have to have zeroes along the diagonal? Does it have to be symmetric? – Qiaochu Yuan Sep 20 at 5:52
Yes, it only has to have entries of $0$ or $1$. – Ivy Sep 20 at 7:50
1
See also this question: mathoverflow.net/questions/23989/… – Felix Goldberg Sep 20 at 8:54
## 2 Answers
I think you mean dense in $[0,\infty)$, since the spectral radius of a nonnegative integer matrix must be at least 1 (the product of all nonzero eigenvalues must be a nonzero integer). You are effectively asking whether Perron numbers are dense in $[1,\infty)$, and this is easy to see. For example, let $A_n$ be the companion matrix of $x^n-x-1$ and $\lambda_n$ be its spectral radius. It's easy to check that $\log \lambda_n\to 0$, and that $k \log \lambda_n =\log \lambda_n^k$ is the logarithm of the spectral radius of $A_n^k$, so these numbers, as $n, k=1,2,3,\dots$, are dense. Finally, one can recode the nonnegative integer matrix $A_n^k$ to a larger adjacency matrix with the same spectral radius using the standard idea called "higher block presentation" from symbolic dynamics (this is described in my book with Marcus called "An Introduction to Symbolic Dynamics and Coding").
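A quick numerical illustration of this argument (my own sketch; it only uses numpy's root finder) shows the logs of these spectral radii shrinking toward 0, so their integer multiples sweep out a dense set of values:

```python
import numpy as np

def log_perron(n):
    """log of the largest root of x^n - x - 1, i.e. of the spectral radius of its 0/1 companion matrix."""
    coeffs = [1] + [0] * (n - 2) + [-1, -1]     # coefficients of x^n - x - 1
    return float(np.log(np.max(np.abs(np.roots(coeffs)))))

for n in (2, 5, 10, 20, 40):
    print(n, log_perron(n))       # tends to 0; k * log_perron(n) realizes a dense set in [0, oo)
```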
-
I think you mean "I think you mean dense in $[1, \infty )$", right? ;) – Qfwfq Sep 20 at 8:45
1
I think $[0,\infty)$ is right because the OP asked about the logarithms of the largest eigenvalues. – Andreas Blass Sep 20 at 13:20
Oh yes, sure ! – Qfwfq Sep 20 at 17:15
In addition to Doug's nice answer above: it is probably even easier to show that the set of simple Parry numbers is dense in $(1,\infty)$. More precisely, let $\beta>1$ and let $(d_n)_{n=1}^\infty$ be the greedy $\beta$-expansion of 1, i.e., $$1=\sum_{n=1}^\infty d_n\beta^{-n},$$ where $d_1=\lfloor \beta\rfloor, d_2=\lfloor\beta\ \text{frac}(\beta) \rfloor, d_3=\lfloor \beta\ \text{frac}(\beta\ \text{frac}(\beta))\rfloor$, etc. (Here $\lfloor\cdot\rfloor$ stands for the integer part and frac$(\cdot)$ for the fractional part.)
A number $\beta$ is called a simple Parry number (also known as a simple $\beta$-number) if $(d_n(\beta))_1^\infty$ has only a finite number of nonzero terms (i.e., ends with $0^\infty$). It is known that any Parry number is a Perron number; also, it is obvious that the Parry numbers are dense, since for any $\beta$ with an infinite $(d_n(\beta))_1^\infty$ we can truncate this sequence at any term and get a $d_n(\beta')$ for some simple Parry number $\beta'$. Since $(d_n(\beta))_1^\infty$ and $d_n(\beta')_1^\infty$ are close (in the topology of coordinate-wise convergence), so are $\beta$ and $\beta'$.
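The greedy expansion above is easy to play with numerically; here is a small sketch of my own (floating point, so only the first several digits are trustworthy), which also illustrates the truncation argument:

```python
def greedy_digits(beta, n=12):
    """First n digits d_1, ..., d_n of the greedy beta-expansion of 1."""
    digits, r = [], 1.0
    for _ in range(n):
        x = beta * r
        d = int(x)        # d_k = floor(beta * r_{k-1})
        digits.append(d)
        r = x - d         # r_k = frac(beta * r_{k-1})
    return digits

print(greedy_digits(2.7))   # truncating this sequence after any term and padding with zeros
                            # gives the expansion of a nearby simple Parry number
```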
For more details and some references you may read the first couple of pages of this paper, for instance.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9316163659095764, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/174512/is-mathbbz-sqrt2-sqrt3-flat-over-mathbbz-sqrt2 | Is $\mathbb{Z}[\sqrt{2},\sqrt{3}]$ flat over $\mathbb{Z}[\sqrt{2}]$?
Is $\mathbb{Z}[\sqrt{2},\sqrt{3}]$ flat over $\mathbb{Z}[\sqrt{2}]$? The definitions don't seem to help. An idea of how to look at such problems would be helpful.
-
4
Dear rola, It is pretty obviously free, and free modules are flat. Regards, – Matt E Jul 24 '12 at 5:31
@MattE That's a lot better — would you mind posting that as an answer? I can incorporate it into what I wrote if you don't have the time, but it wouldn't be as good. I somehow convinced myself that the module structure would be weird, but of course it isn't. Cheers, – Dylan Moreland Jul 24 '12 at 5:38
Dear Dylan, Done. Cheers, – Matt E Jul 24 '12 at 5:40
@rola Note that there is still something to do here: take what you believe is a basis for $\mathbb Z[\sqrt2, \sqrt3]$ over $\mathbb Z[\sqrt2]$ and prove that it is one. – Dylan Moreland Jul 24 '12 at 5:46
2 Answers
$\mathbb Z[\sqrt{2},\sqrt{3}]$ is freely generated as a $\mathbb Z[\sqrt{2}]$-module (exercise). Free modules are flat. QED
-
Thanks a lot Matt for taking time to answer. – rola Jul 24 '12 at 6:09
I have a way of deciding this, although I don't like it very much.
The ring $\mathbb Z[\sqrt2]$ is a Dedekind domain — it's the ring of integers of $\mathbb Q(\sqrt2)$. A module over a Dedekind domain is flat if and only if it is torsion-free. Why? Well, flatness can be checked at each prime, each localization of a Dedekind domain at a prime is a PID, and the result is true for PIDs.
-
Unlike you, I like this answer: +1 ! – Georges Elencwajg Jul 24 '12 at 8:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9590096473693848, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/142601/generative-vs-discriminative-machine-learning | # generative vs discriminative machine learning
Could someone provide the differences between these two techniques? Also, I would very much appreciate it if you could give an example of an ML method that falls into the discriminative or the generative category. For instance, is the Perceptron discriminative? K-Means?
-
## 1 Answer
Let $X$ be your observed data and let $Y$ be their unobserved/hidden properties. In a ML setting, $Y$ usually holds the categories of $X$.
A generative model models their joint distribution, $P(X,Y)$.
A discriminative model models the posterior probability of the categories, $P(Y|X)$.
Depending on what you want to do, you choose between generative versus discriminative modeling. For example, if you are interested in doing classification, a discriminative model would be your choice because you are interested in $P(Y|X)$. However, you can use a generative model, $P(X,Y)$, for classification, too.
A generative model allows you to generate synthetic data ($X$) using the joint but you cannot do this with a discriminative model.
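A small scikit-learn sketch (my own illustration; the dataset and models are arbitrary choices) makes the contrast concrete: Gaussian naive Bayes fits $P(Y)$ and $P(X|Y)$, i.e. the joint, so it can both classify and generate synthetic $X$, while logistic regression fits $P(Y|X)$ directly and can only classify:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=300, centers=2, random_state=0)

gen = GaussianNB().fit(X, y)              # generative: models P(X, Y) = P(Y) P(X | Y)
disc = LogisticRegression().fit(X, y)     # discriminative: models P(Y | X)
print(gen.score(X, y), disc.score(X, y))  # both yield P(Y | X) for classification

# sample synthetic X for class 0 from a class-conditional Gaussian
# (the same per-class means/variances that GaussianNB estimates)
rng = np.random.default_rng(0)
mu, sd = X[y == 0].mean(axis=0), X[y == 0].std(axis=0)
print(rng.normal(mu, sd, size=(5, 2)))    # not possible with the discriminative model alone
```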
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.924628496170044, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/131585/i-want-to-show-that-fx-x-f1-where-fr-to-r-is-additive | # I want to show that $f(x)=x.f(1)$ where $f:R\to R$ is additive. [duplicate]
Possible Duplicate:
Proving that an additive function $f$ is continuous if it is continuous at a single point
Solution(s) to $f(x + y) = f(x) + f(y)$ (and miscellaneous questions…)
I know that if $f$ is continuous at one point then it is continuous at every point. From this I want to show that $f(x)=xf(1).$ Can anybody help me prove this?
-
4
Start with integer $x$. Then try rational $x$. – Hurkyl Apr 14 '12 at 4:13
1
The magic words are "Cauchy functional equation". See here and here. – Arturo Magidin Apr 14 '12 at 4:16
## marked as duplicate by Arturo Magidin, Martin Sleziak, t.b., Leonid Kovalev, Jennifer Dylan Aug 17 '12 at 19:40
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
## 1 Answer
HINTS:
1. Look at $0$ first: $f(0)=f(0+0)=f(0)+f(0)$, so $f(0)=0=0\cdot f(1)$.
2. Use induction to prove that $f(n)=nf(1)$ for every positive integer $n$, and use $f(0)=0$ to show that $f(n)=nf(1)$ for every negative integer as well.
3. $f(1)=f\left(\frac12+\frac12\right)=f\left(\frac13+\frac13+\frac13\right)=\dots\;$.
4. Once you’ve got it for $f\left(\frac1n\right)$, use the idea of (2) to get it for all rationals.
5. Then use continuity at a point.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9275051355361938, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/24162/where-is-the-true-higgs-if-the-lhc-125-gev-signal-is-rather-a-higher-dimension/24197 | # Where is the “true” higgs if the LHC 125 GeV signal is rather a higher dimensional radion than a SM higgs?
In this article, Lumo introduces and explains the idea (presented by the original authors in this paper) that the LHC signal at about 125 GeV could alternatively be interpreted as a higher dimensional radion. Such a higher dimensional radion would better fit to the branching ratios observed at the LHC so far (at the present state of data accumulation) than a SM higgs.
While reading Lumo's article, I got very curious about the following:
a) Where would the higgs hide, if this model is true and the 125 GeV signal is rather a radion than a higgs?
b) Would a higgs still be needed in this case for EWSB?
or
c) Could the radion itself play the role of the higgs?
"Acknowledgment": Lumo has started to think about these questions at the end of the article too, so we are both very curious about the answers to these issues ...
-
Since this question could still be open (again ...) and people who work on this are maybe found at theoretical physics SE, I thought about asking there too. But I'm not 100 % sure if they would like it ... – Dilaton Apr 21 '12 at 14:07
## 1 Answer
This is not exactly my area of expertise, so you can probably get a better answer from someone else (perhaps Lubos). But based on a quick overview of the relevant papers, the presence of a Randall-Sundrum radion as originally proposed would not eliminate the need for a Higgs boson. In order to allow gauge invariance, the Higgs mechanism basically requires a ring-shaped potential minimum, which means you need a field with at least two degrees of freedom. The first RS paper mostly only works with the case of one extra dimension, so you would need to generalize the model to additional dimensions, and given that these extra dimensions are supposed to be individually periodic, I'm not seeing how you could get the sort of structure required to produce a Higgs mechanism out of it. Of course, a lot of people have done work that builds on Randall's and Sundrum's papers, so perhaps someone has determined some way to do it, but it seems unlikely to me. (In fact, in the original paper, around equations 17 and 18 they talk about a fundamental Higgs field which is separate from the radion field, so evidently the authors themselves did not consider the radion as a stand-in for the Higgs.)
The LHC experiments have searched the entire allowed mass range for the standard model Higgs, from the lower limit set by LEP to the upper limit set by unitarity bounds, and everything except this region around $125\text{ GeV}$ is excluded at the 95% confidence level. So if this bump turns out not to be the Higgs boson, the standard model Higgs is ruled out and we would have to start looking at rather more exotic models which predict Higgs masses in excess of $600\text{ GeV}$. I don't know of any particular model of this sort which has generated much interest among particle physicists.
And another thought: even just based on the data presented in the paper by Cheung and Yuan, the branching ratio excesses measured by CMS have huge uncertainties. It seems pretty premature to me to dismiss the identification of the observed excesses with the Higgs boson, since with more data, the numbers could easily converge to the SM Higgs expectations.
-
– anna v Apr 22 '12 at 4:03
Sure, they can coexist, but that paper doesn't say anything about the radion and the Higgs being the same particle. (Though it does discuss mixing, which is kind of in that direction.) – David Zaslavsky♦ Apr 22 '12 at 4:11
Hi David, thanks for this nice answer; what you say seems very reasonable to me. Even though other people may have additional things to say, I like it +1. – Dilaton Apr 22 '12 at 9:04
Of course I did not mean to dismiss the identification of the 125 GeV excess as a higgs... When I asked Prof. Strassler what he thinks about the idea of this alternative interpretation, I probably made him a little bit angry ... :-/ He clearly stated that one should not yet think or ask about alternative interpretations at the present state of things and that such a radion is only one in a zillion of other possibilities and all such papers should be ignored as noise ... – Dilaton Apr 22 '12 at 9:11
http://motls.blogspot.com/2012/06/on-importance-of-conformal-field.html?m=0 | # The Reference Frame
## Saturday, June 30, 2012
### On the importance of conformal field theories
Scale-invariant theories may look like a too special, measure-zero subset of all quantum field theories. But in the scheme of the world, they play a very important role which is why you should dedicate more than a measure-zero fraction of your thinking life to them.
In this text, I will review some of their basic properties, virtues, and applications.
When we look at Nature naively and superficially, its laws look scale-invariant. One may study a hamburger of a given weight. However, it seems that you may also triple the hamburger's weight and the physical phenomena will be mathematically isomorphic. Just some quantities have to be tripled.
(Sean Carroll has tried to revive the old idea of David Gross to sell particles to corporations in order to get funding for science; search for David Rockefeller quark in that paper. The Higgs boson could be sold to McDonald's, yielding a McDonald's boson. However, anti-corporate activist Sean Carroll has failed to notice that McDonald's actually deserves to own the particle for free because much like the God particle, McDonald's is what gives its consumers their mass.)
The scale invariance seems to work
For example, if you triple the radii of all planets, the masses will increase 27-fold if you keep the density fixed. To keep the apparent sizes constant, you must also triple the distances. The forces between these planets will increase by the factor of $27\times 27 / 3^2 = 81$, accelerations by the factor of $27/3^2=3$. If you realize that the centrifugal acceleration is $r\omega^2$ and $r$ was tripled, you may actually get the same angular frequency $\omega$.
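The arithmetic in this scaling argument is easy to check explicitly (a trivial snippet of my own, in units where $G=1$):

```python
m, r = 1.0, 1.0
M, R = 27 * m, 3 * r                      # tripled radius at fixed density, tripled distances

print((M * M / R**2) / (m * m / r**2))    # force ratio: 81
print((M / R**2) / (m / r**2))            # acceleration ratio: 3
print(((M / R**2) / (m / r**2) / 3)**0.5) # omega ratio from a = r*omega^2 with r tripled: 1
```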
With different assumptions than a constant density and the prevailing gravitational force, you might be forced to scale $\omega$ and times in a different way, and so forth.
But it's broken
However, when you look at the world more carefully and you uncover its microscopic and fundamental building blocks, and maybe even earlier than that, you will notice that no such scale invariance actually exists. A 27-times-heavier planet contains 27-times-more atoms; this integer is different so the two planets are definitely not mathematically isomorphic.
And atoms can't be expanded. Atoms of a given kind always have the same mass. They have the same radius (in the ground state). As the Universe expands, the atoms' size remains the same so the number of atoms that may "fit" into some region between galaxies is genuinely increasing. The atoms emit spectral lines with the same frequency and wavelength. After all, that's why we may define one second as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom. Even well-behaved tiny animated GIFs blink once in a second and no healthy computer should ever change the frequency. ;-)
And even the Gulliver is a myth. More precisely, and the experts in literature could correct me, the Gulliver is a normal man but the Lilliputians are a myth. ;-) You can't just scale the size of organisms. Such a change of the size has consequences. Too small or too large copies of a mammal would have too weak bones to support the body, they would face too strong a resistance of the air, they couldn't effectively preserve their body temperature different from the environment's temperature, and so on, and so on.
LUMOback is telling you when you slouch and when you're lazy. And these folks are able to use the LUMO trademark to collect tens of thousands of dollars for their project. ;-) Via Christian E.
The period of some atomic radiation is constant and may be measured very accurately which is why it's a convenient benchmark powering the atomic clocks – the cornerstone of our definition of a unit of time. However, it's not necessarily the most fundamental quantity with the units of time. In particle physics, the de Broglie wave associated with an important particle at rest yields a "somewhat" more fundamental frequency than the caesium atom. In quantum gravity, the Planck time may be the most natural unit of time.
Scale invariance in classical and quantum field theories
How do we find out that some laws of physics are scale-invariant? Well, there won't be any preferred length scale in the phenomena predicted by a theory if the fundamental equations don't have a preferred length scale. They must be free of explicit finite parameters with units of length; but they must also be free of parameters whose powers could be multiplied to obtain a finite result with the units of length.
For example, the Lagrangian density of Maxwell's electromagnetism is\[
\mathcal{L}_{\rm Maxwell} = -\frac 14 F_{\mu\nu}F^{\mu\nu}.
\] The sign is determined by some physical constraints: the energy must be bounded from below. The factor of $1/4$ is a convention. It is actually more convenient than if the factor were $1$ but the conceptual difference is really small. What's important is that there are no dimensionful parameters. If you wrote electromagnetism with such parameters, e.g. with $\epsilon_0$ and $c$, you could get rid of them by a choice of units and rescaling of the fundamental fields. And whenever it's possible to get rid of the parameters in this way, the theory is scale-invariant.
When we deal with a scale-invariant theory, it doesn't mean that all objects are dimensionless. Quite on the contrary: most of the quantities are dimensionful. The scale invariance is actually needed if you want to be able to assign the units to quantities in a fully consistent way. When you have a theory with a characteristic length scale, i.e. a non-scale-invariant theory, you may express all distances in the units of the fundamental length unit (e.g. Planck length). All distances effectively become dimensionless and the dimensional analysis tells you nothing.
However, in a scale-invariant theory, you may assign the lengths and spatial coordinates the units of length, e.g. one meter. The partial derivatives $\partial/\partial x$ will have the units of the inverse length. Because $S$ must be dimensionless in a quantum theory – $iS$ appears in the exponent in Feynman's path integral – it follows that the Lagrangian density has to have units of ${\rm length}^{-4}$. That's because $S=\int\mathrm{d}^4 x \,\mathcal{L}$. I have implicitly switched to a quantum field theory now and set $\hbar=c=1$ which still prevents us from making lengths or times (or, inversely, momenta and energy) dimensionless.
In the electromagnetic case, the units of $\mathcal{L}$ are ${\rm length}^{-4}={\rm mass}^4$ which means that $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ has to have the units of ${\rm mass}^2$. And because $\partial_\mu$ has the units of ${\rm mass}$ i.e. ${\rm length}^{-1}$, we see that the same thing holds for $A_\mu$. In this way, all degrees of freedom may be assigned unambiguous units of ${\rm mass}^\Delta$ where $\Delta$ is the so-called dimension of the degree of freedom (a particularly widespread concept for quantum fields).
In classical field theory, the dimensions would always be rational – most typically, $\Delta$ would be integer or, slightly less often, a half-integer. However, in quantum field theory, the dimension of operators often likes to be an irrational number. Whenever perturbation theory around a classical limit works, these irrational numbers may be written as the classical dimensions plus "quantum corrections to the dimension", also known as the anomalous dimensions. The leading contributions to such anomalous dimensions in QED are often proportional to the fine-structure constant, $\alpha\approx 1/137.036$. This correction to the dimension may be calculated from some appropriate Feynman diagrams.
At any rate, all fields – including composite fields – may be assigned some well-defined dimensions $\Delta$. If distances triple, the fields must be rescaled by $3^{-\Delta}$.
Now, how many scale-invariant theories are there? Do we know some of them? Well, among the renormalizable theories such as the Standard Model, there is always a "natural" cousin or starting point, a quantum field theory that is classically scale-invariant. The actual full-fledged theory with mass scales is obtained by adding some mass terms and similar terms to the Lagrangian. It's easy to be explicit what we mean. The classically scale-invariant theory is simply obtained by keeping the kinetic terms – the terms with the derivatives – only and erasing the mass terms as well as all other terms with coefficients with units of a positive power of length.
It means that to get a scale-invariant theory, you keep $F_{\mu\nu}F^{\mu\nu}$, $\partial_\mu \phi\cdot \partial^\mu\phi$, $\bar\psi\gamma^\mu \partial_\mu \psi$ etc. but you erase all other terms, especially $m^2 A_\mu A^\mu$, $m^2\phi^2$, $m\bar\psi\psi$, and so on. Is there a justification why we can neglect those terms? Yes. We're effectively sending $m\to 0$ i.e. we're assuming that $m$ is negligible. Can $m$ be negligible? It depends whom you compare it with. It's negligible relatively to much greater masses/energies. That's why the mass terms and similar terms may be neglected (when we study processes) at very high energies i.e. very short distances. That's where the kinetic terms are much more important than the mass terms.
I said that by omitting the mass terms, we only get "classically" scale-invariant theories. What does "classically" mean here? Well, such theories aren't necessarily scale-invariant at the quantum level. The mechanism that breaks the scale invariance of classically invariant theories is known as the dimensional transmutation. It has a similar origin as the anomalous dimensions mentioned above. Roughly speaking, the Lagrangian density of QCD, $-{\rm Tr}(F_{\mu\nu} F^{\mu\nu})/2g^2$, no longer has the units of ${\rm mass}^4$ but slightly different units, so a dimensionful parameter $M^{\delta\Delta}$ balancing the anomalous dimension has to be added in front of it. In this way, the previously dimensionless coefficient $1/2g^2$ that defined mathematically inequivalent theories is transformed into a dimensionful parameter $M$ – which is the QCD scale in the QCD case – and the rescaling of the coefficient may be emulated by a change of the energy scale. So different values of the coefficient are mathematically equivalent after the dimensional transmutation because the modified physics may be "undone" by a change of the energy scale of the processes.
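A one-loop caricature of this dimensional transmutation can be written in a few lines (my own sketch for pure $SU(3)$ Yang-Mills, using the standard one-loop coefficient $b_0=11$; the numbers are illustrative, not a real QCD fit):

```python
import numpy as np

b0 = 11.0                                  # one-loop coefficient for pure SU(3), no quarks

def Lambda(g, mu):
    # one loop: 1/g^2(mu) = (b0 / 8 pi^2) log(mu / Lambda)  =>  Lambda = mu exp(-8 pi^2 / (b0 g^2))
    return mu * np.exp(-8 * np.pi**2 / (b0 * g**2))

for g in (1.0, 1.5, 2.0):
    print(g, Lambda(g, mu=1000.0))         # changing the dimensionless g just shifts the scale Lambda
```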
Ability of fixed points to be unique up to a few parameters
In the previous paragraphs I discussed a method to obtain a scale-invariant quantum field theory by erasing all the mass terms in an arbitrary quantum field theory. This procedure is actually more fundamental than how it may look. The scale-invariant theory is a legitimate fundamental starting point to obtain almost any important quantum field theory we know.
The reason is that the ultimate short-distance limit of a generic consistent quantum field theory has to be scale-invariant. When we study energies higher than all typical energy scales in a quantum field theory, all these energy scales associated with the given scale-non-invariant theory may be neglected and we're bound to get a scale-invariant theory in the limit. Such a limiting short-distance theory is known as the UV fixed point. The adjective UV, or ultraviolet, means that we talk about short-distance physics. The term "fixed point" means that it is a scale-invariant theory: it is invariant (fixed) under the scaling of the distances (deriving longer-distance physics from short-distance physics) which is the basic operation we do in the so-called Renormalization Group, a modern framework to classify quantum field theories and their deviations from scale invariance in particular.
A funny thing is that scale-invariant theories are very restricted. For example, the $\mathcal{N}=4$ gauge theory is unique for a chosen gauge group (up to the coupling constant and the $\theta$-angle). Its six-dimensional ancestor, the (2,0) superconformal field theory, is also scale-invariant and it is completely unique for chosen discrete data.
In other cases, the space of possible scale-invariant field theories is described by a small number of parameters. Even when we study the deformations of these theories that don't break the renormalizability, we only add a relatively small number of parameters. The condition that an arbitrary quantum field theory is renormalizable (and consistent up to arbitrarily high energy scales) is pretty much equivalent to the claim that one may derive its ultimate short-distance limit which is a UV fixed point.
It's this existence of the scale-invariant short-distance limit that makes our quantum field theories such as the Standard Model predictive. We may prove that there's only a finite number of parameters that don't spoil the renormalizability i.e. the ability to extrapolate the theory up to arbitrarily short distances. And when we extrapolate the theory to vanishingly short distances, we inevitably obtain one of the rare scale-invariant theories which only depend on a small number of parameters (and some discrete data).
So the scale-invariant theories aren't just an interesting subclass of quantum field theories that differs from the rest; for each consistent scale-non-invariant quantum field theory, there exists an important scale-invariant theory, namely the short-distance limit of it. There is another scale-invariant theory for each quantum field theory, namely its ultimate long-distance limit. Both of these limits may be – and often are – non-interacting theories because the coupling constants of QCD-like theories may slowly diminish. Also, the long-distance limit, the infrared fixed point, is often a theory with no degrees of freedom: it is "empty". For example, QCD predicts the so-called "mass gap" – the claim that all of its particles that may exist in isolation have a finite mass (the mass can't be made zero or arbitrarily small). So if you only study particle modes that survive below a certain very low energy, you get none.
No doubts about that: scale-invariant theories are very important for a proper understanding of all quantum field theories. They also play a key role in string theory, at least for two very different reasons – perturbative string theory and the AdS/CFT correspondence. These roles will be discussed in the rest of this blog entry.
Scale-invariant vs conformal
Before I jump on these stringy issues, let me spend a few minutes with a subtle difference between two adjectives, "scale-invariant" and "conformal". A scale-invariant theory is one that has the following virtue: for every allowed object, process, or history (one that obeys the equations of motion or one that is sufficiently likely according to some probabilistic rules), it also allows objects, processes, and histories that differ from the original one by a scaling (or these processes are equally likely).
A conformal i.e. angle-preserving map. In two Euclidean dimensions, it's equivalently a (locally) holomorphic function of a complex variable. Note that all the intersections have 90-degree internal angles.
The adjective "conformal" is a priori more constraining: for given objects, processes, and histories, a conformal theory must also guarantee that all processes, objects, and histories that differ from the original ones by any (possibly nonlinear) transformation of space(time) that preserves the angles (between lines, measured locally) – any conformal map – are also allowed. Conformality is a stronger condition because scaling is one of the transformations that clearly preserve angles but there are many more transformations like that.
While conformality is a priori a stronger condition than scale invariance, it turns out – and one can prove – that given some very mild additional assumptions, every scale invariant theory is automatically conformally invariant. I won't be proving it here but it's intuitively plausible if you look what the theory implies for small regions of spacetime. In a small region of spacetime (e.g. in the small squares in the picture above), every conformal transformation reduces to a composition of a rotation (or Lorentz transformation) and a scaling. So if these transformations are symmetries of the theory and if the theory is local, it must also allow you to perform transformations that look "locally allowed" (locally, they are rotations combined with scaling) – it must allow conformal symmetries.
Now, what is the group of conformal transformations in a $d$-dimensional space? Start with the Euclidean one. The group of rotations is $SO(d)$. What about the conformal group, the group of transformations preserving the angles? One may show that transformations such as $r\to 1/r$, the inversion, preserve the angles. A conceptual way to see the whole group is the stereographic projection:
You may identify points in a $d$-dimensional flat space with points on a $d$-dimensional sphere – the boundary of a $(d+1)$-dimensional ball – by the stereographic projection above. A funny thing you may easily prove is that this map preserves the angles. For example, if you project the Earth's surface from the North Pole to the plane tangent to the South Pole, you will get a map of the continents that will locally preserve all the angles (but not the areas!).
It follows that all the isometries of the sphere, $SO(d+1)$, must generate some conformal maps of the plane. In fact, one may do the same thing with a projection of the plane from a hyperboloid or Lobachevsky plane – the sphere continued to a different signature. For this reason, the conformal group must contain both $SO(d+1)$ and $SO(d,1)$ as subgroups: it must be at least $SO(d+1,1)$. One may show that there are no other transformations, too.
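The angle-preserving property of the stereographic projection is easy to verify numerically (a spot check of my own; the point and tangent vectors are arbitrary):

```python
import numpy as np

def to_sphere(p):
    """Inverse stereographic projection: plane point (x, y) -> unit sphere, projecting from the north pole."""
    x, y = p
    d = 1 + x**2 + y**2
    return np.array([2 * x, 2 * y, x**2 + y**2 - 1]) / d

def angle(u, v):
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

p = np.array([0.7, -1.2])
u, v = np.array([1.0, 0.3]), np.array([-0.2, 1.0])
eps = 1e-6
U = (to_sphere(p + eps * u) - to_sphere(p)) / eps   # images of the tangent vectors on the sphere
V = (to_sphere(p + eps * v) - to_sphere(p)) / eps
print(angle(u, v), angle(U, V))                     # the two angles agree to ~1e-6
```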
To get the conformal group, you write the rotational group as $SO(m,n)$ and add one to each of the two numbers to get $SO(m+1,n+1)$. Because the Minkowski space is a continuation of the sphere, a continuation of the procedures above proves that the same thing holds for the conformal group of a $d$-dimensional spacetime. While the Lorentz group is $SO(d-1,1)$, the conformal group is $SO(d,2)$. Yes, it has two time dimensions. This group $SO(d,2)$ contains not only the Lorentz group but also the translations, scaling, and some extra transformations known as the special conformal transformations.
Role of conformal field theories in AdS/CFT
The group $SO(d,2)$ is the conformal group of a $d$-dimensional Minkowski space. However, it's also the Lorentz symmetry of a $d+2$-dimensional spacetime with two timelike dimensions. In fact, we don't need the whole $(d+2)$-dimensional spacetime. It's enough to consider all of its points with a fixed value of\[
x_\mu x^\mu = \pm R^2,\quad x^\mu \in \mathbb{R}^{d+2}.
\] For a properly chosen sign on the right hand side (correlated with the convention for your signature), this is a "hyperboloid" with the signature of the induced metric that has $d$ spatial dimensions and $1$ time dimension. This hyperboloid is nothing else than the anti de Sitter (AdS) space, namely the $(d+1)$-dimensional one.
So for every conformal transformation on the $d$-dimensional flat space, there is an isometry of the $AdS_{d+1}$ anti de Sitter space. If we start with a conformal theory in $d$ dimensions, there could also be a theory on $AdS_{d+1}$ that is invariant under the isometries of this anti de Sitter space. Juan Maldacena was famously able to realize this thing – during his research of thermodynamics of black holes and black branes in string theory – and find strong evidence in favor of his correspondence.
For every (non-gravitational, renormalizable, healthy) conformal (quantum) field theory in a flat space, there exists a consistent quantum gravitational (and therefore non-scale-invariant) theory (i.e. a vacuum of string/M-theory) living in a curved spacetime with one extra dimension, the $(d+1)$-dimensional anti de Sitter space, and vice versa. I had decided in advance to keep this section short. Although the AdS/CFT is arguably the most important development in theoretical physics of the last 15 years, it has already been given enough space here and I realize that if I started to explain various aspects of this map, we could easily end up with a document that has 261 pages, much like the AdS/CFT Bible by OGOMA, not to be confused with the author of a footnote in a paper about the curvature of the constitutional space. His name was OBAMA. ;-)
The AdS/CFT correspondence is important because it makes the holography in quantum gravity manifest, at least for the AdS spacetimes. Also, it allows us to define previously vague and mysterious theories of quantum gravity in terms of seemingly less mysterious non-gravitational (and conformal) quantum field theories. Well, it also allows us to study complex phenomena in hot environments predicted by conformal field theories in terms of more penetrable – and essentially classical – general relativity in a higher-dimensional space. Complex phenomena including low-energy physics heroes such as the quark-gluon plasma, superconductors, Fermi liquids, non-Fermi liquids, and Navier-Stokes fluids may be studied in terms of simple black holes in a higher-dimensional curved spacetime.
Role of 2D conformal field theories in perturbative string theory
Because of the chronology of the history of physics, it would have been logical to discuss the role of conformal field theories in perturbative string theory before the AdS/CFT. I chose the ordering I chose because the AdS/CFT is closer to the "more field-theoretical" aspects of string theory than the world sheet CFTs underlying perturbative string theory – something that is as intrinsically stringy as you can get. That's why the discussion of two-dimensional CFTs was finally localized in the last section of this blog entry.
The textbooks of string theory typically tell you that strings generalize point-like particles. While point-like particles have one-dimensional world lines in the spacetime, strings analogously leave two-dimensional world sheets as the histories of their motion in spacetime.
If you want point-like particles to interact with each other, you need "vertices" of Feynman diagrams; you need points on the world lines from where more than two external world lines depart. Such a vertex is a singular point and this singularity on the world lines – the vertices in the Feynman diagrams themselves – are the ultimate cause of the violent short-distance behavior of point-like-particle-based quantum field theories, especially if there are too many vertices in a diagram and if they have too many external lines.
An idea to defeat this sick short-distance behavior is to have higher-dimensional extended elementary building blocks. In that case, you may construct "smooth versions" of the Feynman diagrams on the left – but the "smooth versions" have the property that they're locally made of the same smooth higher-dimensional surface. The pants diagram on the right side of the picture – the two-dimensional world sheet depicted by this illustration – has no singular points.
However, there's a catch: if the fundamental building blocks have more than one spatial dimension, i.e. world volumes of more than two dimensions, the internal theory describing the world volumes themselves is a higher-dimensional theory, and to describe interesting interacting higher-dimensional theories like that, you will need "something like quantum field theory" which will have a potentially pathological behavior that is analogous to the behavior of quantum field theories in ordinary higher-dimensional spacetimes.
Two dimensions of the world sheet is the unique compromise for which you may tame the short-distance behavior in the spacetime – because you don't need "singular" Feynman vertices and the histories are made of the same "smooth world sheet" everywhere; but the world sheet theory itself is under control, too, because it has a low enough dimension.
In fact, an even more accurate explanation of the uniqueness of two-dimensional world sheets is that you may get rid of the world volume gravity in that case. Imagine that you embed your higher-dimensional object into a spacetime by specifying coordinates $X^\mu(\xi^i)$ where $\xi^i$ [pronounce: "xi"] are $d$ coordinates parameterizing the world line, world sheet, or world volume. With such functions, you may always calculate the induced metric on the world volume\[
h_{ij} = g_{\mu\nu} \partial_i X^\mu \partial_j X^\nu.
\] If you also calculate \[
ds^2 = h_{ij} d\xi^i d\xi^j
\] using the induced metric $h_{ij}$ for some small interval on the world sheet $d\xi^i$, you will get the same result as if you calculate it using the original spacetime metric $g_{\mu\nu}$ with the corresponding changes of the coordinates $dX^\mu = \partial_i X^\mu\cdot d\xi^i$. It's kind of a trivial statement; you may either add the partial derivatives while calculating $dX^\mu$ from $d\xi^i$ or while calculating $h_{ij}$ from $g_{\mu\nu}$.
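For a concrete feel for this pullback, here is a tiny sympy computation of my own: embed a round two-sphere "world sheet" in flat $\mathbb{R}^3$ and read off the induced metric:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
X = sp.Matrix([sp.sin(theta) * sp.cos(phi),      # X^mu(xi) for the unit sphere
               sp.sin(theta) * sp.sin(phi),
               sp.cos(theta)])
g = sp.eye(3)                                    # flat target-space metric g_{mu nu}
J = X.jacobian([theta, phi])                     # partial_i X^mu
h = sp.simplify(J.T * g * J)                     # h_{ij} = g_{mu nu} d_i X^mu d_j X^nu
print(h)                                         # Matrix([[1, 0], [0, sin(theta)**2]])
```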
So even if you decide that the induced metric isn't a "fundamental degree of freedom" on the world volume, it's still there and if the shape of the string or brane is dynamical, this induced metric field is dynamical, too. You deal with a theory that has a dynamical geometry – in this sense, it is a theory of gravity. However, something really cool happens for two-dimensional world sheets: you may always reparametrize $\xi^i$ by a change of the world sheet coordinates – given by two functions $\xi^{\prime i} (\xi^j)$ – and bring the induced metric to the form\[
h_{ij} = e^{2\varphi(\xi^k)} \delta_{ij}
\] where $\delta$ is the flat Euclidean metric; you may use $\eta_{ij}$ in the case of the Minkowski signature, too. So up to a local rescaling, the metric is flat! It's easy to see why you can do such a thing. The two-dimensional metric tensor only has three independent components, $h_{11},h_{12},h_{22}$ and the diffeomorphism depends on two functions so it allows you to remove two of the three components of the metric. The remaining one may be written as a scalar multiplying a chosen (predecided) metric such as the flat one. At least locally, i.e. for neighborhoods that are topologically disks, it is possible.
And if the world sheet theory is conformally invariant, it doesn't depend on the local rescaling. It doesn't depend on $\varphi$, either. So every world sheet is conformally flat which means that as far as physics goes, it's flat! At least locally. There are obviously no problems with the quantization of gravity if the "spacetime", in this case the world sheet, is flat.
Many things simplify. For example, the Nambu-Goto action, generalizing the length of the world line of a point-like particle, is the proper area of the world sheet which is an integral of the square root of the determinant of the induced metric. This looks horribly non-polynomial etc. You may be scared of the idea to develop Feynman rules for such a theory. However, if you exploit the possibility to choose an auxiliary metric and make it flat by an appropriate diffeomorphism, the action actually reduces to a kindergarten quadratic Klein-Gordon action for the scalars $X^\mu(\xi^i)$ propagating on the two-dimensional world sheet!
That's really cool because the resulting modes of the string inevitably contain things like the graviton polarizations in the spacetime. So you may describe gravity in terms of things that are as simple and as controllable as free Klein-Gordon fields in a different space, the two-dimensional world sheet.
I said that the conformal symmetry for a $d$-dimensional space is $SO(d+1,1)$ and/or $SO(d,2)$, depending on the signature. But in two dimensions, this group is actually enhanced to something larger: the group of all angle-preserving transformations is actually infinite-dimensional (as long as you don't care whether the map is one-to-one globally and you only demand that it acts nicely on an open set of the disk topology). In the Euclidean signature, they're nothing else than all the holomorphic functions of a complex variable; in the case of the Lorentzian signature, one may redefine $\xi^+$ to any function of it and similarly for $\xi^-$, the other lightlike coordinate. The variables $\xi^\pm$ are the continuations of $z,\bar z$, the complex variable and its conjugate. It's not shocking that we may redefine the lightlike coordinates by any functions and independently: the angles in a two-dimensional Minkowski space are given as soon as you announce what the two null directions are; but the scaling of each of them is inconsequential for the (Lorentzian) angles i.e. rapidities.
The conformal symmetry plays an important role for the consistency and finiteness of string theory, at least in the (manifestly Lorentz-)covariant description of perturbative string theory. States of a single string are uniquely mapped to local operators on the world sheet (that's true even for higher-dimensional CFTs); the OPEs (operator product expansions) encode the couplings between triplets of string states; the loop diagrams are obtained as integrals over the possible shapes of the world sheet of a given topology (genus) and these shapes reduce to finite-dimensional conformal equivalence classes. So all the loop diagrams may be expressed as finite-dimensional integrals over manifolds – spaces of possible shapes of the world sheet – which are under control.
Again, this article doesn't want to go too deeply to either of these topics because I don't want to reproduce Joe Polchinski's book on string theory here. Instead, this blog entry was meant to provide you with a big-picture framework answering the question "Why do the conformal field theories get so much attention?".
I feel that this kind of question is often asked and there aren't too many answers to such questions in the standard education process etc.
And that's why I wrote this memo.
Posted by Luboš Motl
Other texts on similar topics: philosophy of science, string vacua and phenomenology, stringy quantum gravity
#### snail feedback (11)
reader Dilaton said...
Ha, this nice article gives me a lot to consider, since I and my nice office mate are writing a paper together about the importance of scale invariance in turbulence parameterizations applied in fluid simulations :-D
I mean the very nice former astrophysicist colleague who lent me the "Elegant Universe", and not the other nice colleague who lent me "Vom Urknall zum Durchknall", which I gave back without reading it :-P ;-)
I now have to figure out a bit what all this (at least the first 2/3 of it :-D ...) means applied to turbulence ...
For example would dimensional transmutations have something to do with transitions between different 2D/3D inertial ranges for example when considering spectra of turbulent kinetic energy...?
I'll certainly have to reconsider this nice article after I (hopefully) have managed to understand my favorite paper about renormalization flow analysis of turbulence well enough (such that I can explain the main ideas to colleagues and convince them that it is something nice) :-)
reader Dilaton said...
Ha ha, seems that the last section has nicely expanded since my coffee break ... :-D
reader Shannon said...
Nice analogy between McDonald and the God particle :-D. The LumoBack belt is kind of funny; at the beginning I thought it would give you an electrical discharge each time you slouch, and a deadly one each time you pronounce the word "quantum loop theory".
reader Luboš Motl said...
I sincerely hope that the amount of useful information in this section - and others - isn't scale-invariant. ;-)
reader Dilaton said...
Never mind, I'm a kitty so I have seven lives ... ;-P
reader Dilaton said...
Dear Lumo,
if you try to put just one bit of useful information into the last or another section it will collapse into a black hole ;-)
Maybe I should take cover ... :-P ?
reader Ghandi said...
Are there field theories where the couplings just oscillate as you go to high energies, rather than asymptoting to a constant (or diverging)?
reader Luboš Motl said...
Good question and essentially yes! I was cheating when I implied that it had to converge. There are theories such as gauge theories with duality cascades which change their behavior infinitely many times as you change the scale. Or at least many times. See Klebanov-Strassler and follow-ups
http://arxiv.org/abs/hep-th/0007191
reader Jeff said...
Dear Lubos,
It was recently shown that scale does not imply conformal invariance in 4d QFTs, leading to RG flows that exhibit recurrent behavior. See arXiv:1206.2921.
Cheers.
reader Luboš Motl said...
Thanks, Jeff. It's nice that the author of such a relevant paper learns about such blog entries. (BTW you're on the white list now.)
reader Jeff said...
Thanks,
I should point out however that there is a controversy about the existence of non-conformal scale-invariant solutions as discussed in arXiv:1204.5221.
Best.
http://mathoverflow.net/questions/70647?sort=votes

## Approximate primitive roots mod p
Artin conjectured that if $a$ is an integer which is not a square and not $-1$ then $a$ is a primitive root for infinitely many primes. This conjecture has not been resolved, but partial results are known: Heath-Brown showed that there are at most two prime numbers $a$ for which the conjecture fails.
I'd like to know if a different kind of partial result is known. Let $I(p)$ denote the index of the subgroup of $(\mathbf{Z}/p\mathbf{Z})^{\times}$ generated by 2. Thus $I(p)=1$ if and only if 2 is a primitive root mod $p$. Can one show that there is an infinite sequence of primes in which $I$ remains bounded?
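(A small addition of mine, not part of the original question: $I(p)$ is easy to tabulate for small primes by brute force, which may help readers get a feel for how often the index stays small. The naive order computation below is fine for small $p$; one would use something smarter for large primes.)

```python
def order(a, p):
    """Multiplicative order of a modulo a prime p (naive loop, fine for small p)."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

def I(p):
    """Index in (Z/pZ)* of the subgroup generated by 2, for an odd prime p."""
    return (p - 1) // order(2, p)

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]:
    print(p, I(p))   # I(p) == 1 exactly when 2 is a primitive root mod p
```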
-
I'm assuming you don't want results conditional on GRH, since you classify Artin's conjecture as unresolved? Adam Felix has some nice results on the distribution of I(p) that amply imply your desired result, but the ones I know of are conditional on GRH. – Greg Martin Jul 18 2011 at 19:19
Yes, I don't want to assume GRH. What I'm asking for follows from Artin's conjecture (since then you have an infinite sequence of primes on which I is 1), and thus from GRH. – Anonymous Jul 21 2011 at 15:24
## 2 Answers
A result of Erdos and Murty asserts that if $\epsilon(p)$ is any decreasing function tending to zero, then $I(p) \leq p^{1/2-\epsilon(p)}$ for almost all primes $p$ (i.e., all but $o(\pi(x))$ primes $p \leq x$).
Kurlberg and Pomerance (see Lemma 20 in the paper mentioned below) show that for a positive proportion of primes $p$, one has the stronger bound $I(p) \leq p^{0.323}$. This follows from a result of Baker and Harman on shifted primes with large prime factors.
The Erdos--Murty paper is #77 at
http://www.mast.queensu.ca/~murty/index2.html
and the Kurlberg--Pomerance paper is
http://www.math.dartmouth.edu/~carlp/PDF/par13.pdf
See also Theorem 23 of this paper (which is conditional on GRH).
-
Given that the Kurlberg--Pomerance paper is fairly recent, I'm assuming the result you mention from it is the best known, or at least close to it. That means the answer to my original question is "no," we can't prove the existence of an infinite sequence in which I is bounded. – Anonymous Jul 21 2011 at 15:23
I think that's right. The proof of Lemma 20 in that paper shows that you could improve $0.323$ to $\epsilon$ if you knew that there were infinitely many shifted primes $p-1$ with prime factors $> p^{1-\epsilon}$. Of course we think that $p-1$ is infinitely often twice a prime, which is much stronger and which would give the boundedness you originally asked for -- but this still seems hopeless. (However, progress towards this sort of conjecture plays a key role in the Heath--Brown proof you mentioned, and in the earlier work of Gupta and Murty.) – Anonymous Jul 21 2011 at 21:13
Here is something that is much weaker than what you are asking. The proof is elementary (but not entirely trivial). For every $\epsilon>0$, the series $$\sum_p \frac{I(p)^\epsilon}{p^{1+\epsilon}}$$ converges. For example, this implies that for every $N>0$, the set of primes $p$ satisfying $$I(p)>\frac{p}{(\log\log p)^N}$$ has (analytic) density zero.
-
@Joe, That's a nice result. Please give a pointer to the proof. – Victor Miller Jul 19 2011 at 19:04
@Victor: It's in Murty, Rosen, Silverman, Variations on a theme of Romanoff, Inter. J. Math. 7 (1996), 373--391. The results are phrased in terms of the order $f(p)$ of $a$ in $(\mathbb{Z}/p\mathbb{Z})^*$, instead of the index of the group generated by $a$, which gives the cleaner looking $\sum 1/pf_a(p)^\epsilon$. We handle more generally the image of finitely generated subgroups of a number field $K^*$ in the residue fields, and also the analog for fg subgroups of abelian varieties. – Joe Silverman Jul 19 2011 at 22:57
http://mathoverflow.net/questions/17854/do-you-need-to-say-what-left-unique-and-right-unique-means

## Do you need to say what left-unique and right-unique means?
I am talking about a relation that is what Wikipedia describes as left-unique and right-unique. I never heard these terms before, but I have heard of the alternatives (injective and functional). The question is, which terminology do you recommend? Should I include short definitions? (The context is a text in the area of formal methods. I'm not sure if this helps.)
These are some trade-offs that I see:
• I think that left-unique and right-unique are not widely known, but I'm not sure at all.
• functional is overloaded
• injective sounds too fancy (subjective, of course)
• left-unique and right-unique are symmetric (good, of course)
Edit: It seems the question is unclear. Here are more details. I describe sets X and Y and then say:
1. now we must find an injective and functional relation between sets X and Y such that...
2. now we must find a left-unique and right unique relation between sets X and Y...
Which one do you recommend? What other information would you add? The relation does not have to be total. For example, various different ranges correspond to different 'feasible' relations. Technically I should not need to say that the relation does not have to be total, but will many people assume that it has to be total if I don't say it?
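(For what it's worth, here is a throwaway illustration I am adding, not something from the original post: on a finite relation given as a set of pairs, the two properties under discussion are one-line checks, stated in the injective/functional terminology.)

```python
def is_functional(R):
    """Right-unique: each left element is related to at most one right element."""
    lefts = [x for x, _ in R]
    return len(lefts) == len(set(lefts))

def is_injective(R):
    """Left-unique: each right element is related to at most one left element."""
    rights = [y for _, y in R]
    return len(rights) == len(set(rights))

# injective and functional, but not total on X = {1, 2, 3}
R = {(1, 'a'), (2, 'b')}
print(is_functional(R), is_injective(R))   # True True
```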
-
I certainly did not know the terms left-unique and right-unique, and moreover when I tried to guess what they meant, I ended up with the opposite meanings. Left-unique, I reasoned, must mean that a pair in the relation is uniquely determined by its left member, i.e., functional. But that is the definition of right-unique. Go figure – Harald Hanche-Olsen Mar 11 2010 at 13:28
People in formal methods know the standard usages, which are "injective" and "functional". If you're worried about "functional" being taken to mean "higher-order function", then use the phrase "functional relation", as in "$R$ is a functional relation". – Neel Krishnaswami Mar 11 2010 at 14:21
@Harald: I guessed the same way as you did. @Neel: Thanks. At the moment I'm inclined to say we must find an injective and functional relation between X and Y, and not define injective/functional. – rgrig Mar 11 2010 at 17:43
## 1 Answer
Injective and functional are completely standard in this case. This is what you should use. The term "functional" is not overloaded, when you are using it to say that something is a function. Being functional means exactly that the relation is a function.
A relation that is injective and functional is precisely an injective function on its domain. It is a bijection of its domain with its range.
If you don't want to think of the relation as a function, then you can also describe it as a one-to-one correspondence of its domain with its range.
(And I don't think any of these terms I suggest would need to be defined, since their meaning is fairly universally known. This would definitely not be true of left-unique and right-unique.)
-
Joel, I think most people take R is a function from X to Y to mean that for each $x\in X$ there is exactly one $y\in Y$ such that $xRy$; similarly, I think most people take R is a bijection between X and Y to mean that R and its inverse are both functions. Also, I think most people use one-to-one correspondence as a synonym for bijection. That is not what I want to say. What I want to say is that for each $x\in X$ there is at most one $y\in Y$ such that $xRy$ and vice-versa. (This is what the left-unique and right-unique definitions that I pointed to say.) – rgrig Mar 11 2010 at 17:38
Well, I never said R should be a function from X to Y, or a bijection between X and Y, but rather, that it is a function on its domain, or a bijection of its domain with its range. The domain of a relation R is the set of x for which there is y with xRy, and the range is the corresponding set of y. These may not be X and Y, respectively, and this should resolve the confusion. It is completely correct to say that a relation is functional if and only if it is a function from its domain to its range, and this is why the word functional is used. – Joel David Hamkins Mar 11 2010 at 18:45
In particular, what I mean to say is that I stand by my answer. – Joel David Hamkins Mar 11 2010 at 18:55
@Joel: I agree. I never said you were wrong. I am just pointing out that neither 'bijection' nor 'one-to-one correspondence' mean the same as (left-unique and right-unique), so they aren't what I need to say. (I did up-vote your answer :), just so you know.) – rgrig Mar 11 2010 at 19:27
Rgrig, but if you say "bijection of its domain with its range" or "one-to-one correspondence of its domain with its range", as I suggested, then it IS what you mean to say. – Joel David Hamkins Mar 11 2010 at 19:46
http://crypto.stackexchange.com/questions/3694/verify-contents-but-not-order/3722

# verify contents, but not order
Is there an algorithm that can be used to verify the contents of a cyphertext, but not the order of the elements?
I am thinking of a deck of cards that is shuffled: it must be verifiable that the deck contains all the cards, but without revealing what order they are in.
Any ideas how to approach this problem?
-
Your question is a bit underspecified. Please add some detail. Perhaps you can simply sort the parts before hashing, perhaps you need some sort of homomorphic encryption, and perhaps it's impossible. – CodesInChaos Aug 31 '12 at 21:43
I think he is basically asking for a zero knowledge proof that some ciphertext contains a permutation of some known set. – Maeher Aug 31 '12 at 21:57
If Maeher is right, search for "Mental Poker" – CodesInChaos Aug 31 '12 at 23:54
The problem is underspecified, but as asked, I'd answer this: Use an encryption scheme that maps each card to a number from 1 to 52. Ensure that all numbers from 1 to 52 are present, each once. Since you can't decrypt any of them, you have no idea what their order is. But since each of the 52 ciphertexts appears once, each of the 52 plaintexts must too. – David Schwartz Sep 1 '12 at 9:41
## 3 Answers
As Maeher & CodesInChaos noted, you're going to want to look into zero-knowledge protocols. Matt Green at Hopkins wrote an easy-to-understand blog article that gives you one such example w.r.t. mental poker: "Poker is hard". Claude Crépeau wrote a fair amount in the 80s and 90s about ZK/mental poker; here is a link to one paper: "A zero-knowledge Poker protocol that achieves confidentiality of the..."
Simari has a good primer on ZK here: "A Primer on Zero Knowledge Protocols".
-
Great links, I am reading through them now. Although, I was hoping for an answer that was more efficient than mental poker, possibly with some other caveats that I don't mind too much about. – Billy Moon Sep 5 '12 at 17:34
Well, the basic idea is to create or use an encryption scheme where, without knowing exactly what each encrypted thing is, you can count and identify unique records.
Take your playing cards. First off, without looking at any card's face, you can easily count them and verify that they at least came from the same style of deck. This is because each individual card is countable and identifiable as belonging to a deck of a particular style. In terms of data, you would get the data as elements of a list, delimited or otherwise divisible into individual elements, and those elements would have some sort of header you can use to identify the items as belonging to some unique set. However, you could be holding a Pinochle deck with 4 Jokers added, and not a Poker deck, and not know the difference. To be able to tell the difference, you have to know something about the values, without knowing the values.
There are two basic types of encryption that would allow you to accomplish this. One is a one-way hash; the values are translated, theoretically irreversibly, into a form that can be compared for equality but nothing else. If each hash is unique, you can be confident that the makeup of the deck is close enough to a Poker deck that it can be used as one. The other is a "block cipher"; the same key, plugged into the same algorithm, is used to transform each iterative piece of the plaintext into the ciphertext. Theoretically, without knowing the key, you can't discern the plaintext, but assuming that each card's value is encrypted in this way, each ciphertext should be unique; duplicate ciphertexts indicate duplicate plaintexts and thus duplicate cards. The actual ciphering scheme is irrelevant - you could use a Caesar cipher or AES - what's important is that each card's value is independently encrypted in exactly the same way, using no other information than one card's value, the key and the encryption scheme.
EDIT: The last thing you would have to prove is that, with all cards being unique, each one is a card you would expect to find in a poker deck (that is, there is no Zero of Spades). This is the tricky part, and IMHO you can't do it in a Zero-Knowledge way. You would have to know something about how the ciphertexts were encrypted in order to identify ciphertexts that were valid and ones that weren't.
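Below is a minimal sketch (my addition, not part of the answer) of the "compare digests as a multiset" idea. Two caveats are worth stressing: a per-deck secret is needed, because with a plain unsalted hash anyone could hash the 52 known card names and read off the order; and since that secret is also needed for verification, only its holder can run this check. Making the check publicly verifiable without leaking the order is exactly what the zero-knowledge / mental-poker protocols mentioned in the other answers provide.

```python
import hashlib

CARDS = [f"{rank}{suit}" for suit in "SHDC"
         for rank in ["A", "2", "3", "4", "5", "6", "7",
                      "8", "9", "10", "J", "Q", "K"]]

def card_digest(card, deck_secret):
    # Per-deck secret: without it, anyone could hash all 52 names and
    # recover the order, so the digests would hide nothing.
    return hashlib.sha256(deck_secret + card.encode()).hexdigest()

def is_full_deck(digests, deck_secret):
    """True iff the digests are exactly one commitment per card, order ignored."""
    expected = {card_digest(c, deck_secret) for c in CARDS}
    return len(digests) == 52 and set(digests) == expected

secret = b"per-game secret known only to the dealer"
shuffled = [card_digest(c, secret) for c in reversed(CARDS)]  # any order works
print(is_full_deck(shuffled, secret))   # True
```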
-
You need to also verify that the contents are in a certain subset of values, e.g. 0 to 51. Else some ciphertexts might not represent valid values. – CodesInChaos Sep 5 '12 at 10:56
This is close to what I am looking for - but there is a verification aspect to it also as mentioned by @CodesInChaos. – Billy Moon Sep 5 '12 at 17:26
You would have to know whether a ciphertext is valid or not without knowing that a particular ciphertext represents a particular plaintext. If you used hashes, you could store known valid hashes (without storing plaintexts in parallel) and verify that every hash of the ciphered playing card values exists in the table of valid hashes. If they're encrypted, I can't think of a way to verify the ciphertext represents a valid value without knowing the secret information used to encrypt them (and thus knowing how the values were ciphered to determine the range of valid ciphertext values). – KeithS Sep 5 '12 at 17:39
It is not clear what you are asking.
Maybe you are asking the following. You have a set of 52 ciphertexts, each of which is allegedly the encryption of a different card, and you want to check that these are an encryption of some permutation of the 52 possible cards (i.e., no card appears twice, every card appears at least once) -- in particular, as Maeher suggests, you want a zero-knowledge proof of this fact.
Is that what you are looking for? If yes, a standard solution would be to use a mixnet to randomly reshuffle-and-decrypt all of the ciphertexts, via standard techniques. Then (a) using zero-knowledge proofs, anyone can check that the mixnet was done correctly, and (b) by looking at the decryptions, anyone can check that what you have is a permutation of the 52 possible cards in a deck. Another approach, as CodesInChaos suggests, is to use standard "Mental Poker" protocols -- search for it, you'll find some research papers on the topic.
-
When zero-knowledge proofs can be used, why bother with a mixnet? $\:$ – Ricky Demer Sep 1 '12 at 8:41
Either approach will work. Mixnets might be more efficient. Mixnets also might be easier to understand, depending upon what you find more intuitive. Or, they might not. It's just one more approach you could consider. (I don't immediately see any reason to think that one will be more of a "bother" than the other, or any reason to think that using zero-knowledge proofs would be inherently superior to mixnets, or vice versa.) And btw, many mixnet schemes do use zero-knowledge proofs internally. – D.W. Sep 1 '12 at 18:22
http://math.stackexchange.com/questions/168617/to-show-f-is-continuous

# To show $f$ is continuous
Let $f:[0,1]\rightarrow \mathbb{R}$ be such that for every sequence $x_n\in [0,1]$, whenever both $x_n$ and $f(x_n)$ converge, we have $$\lim_{n\rightarrow\infty} f(x_n)=f(\lim_{n\rightarrow\infty}x_n).$$ We need to prove $f$ is continuous.
Well, I take $x_n$ and $y_n$ in $[0,1]$ such that $|x_n-y_n|\rightarrow 0$ and the given condition holds. Now it is enough to show $|f(x_n)-f(y_n)|\rightarrow 0$.
I construct a new sequence $$z_1=x_1$$ $$z_2=y_1$$ $$\dots$$ $$z_{2n-1}=x_n$$ and $$z_{2n}=y_n$$
We see that the subsequences of $f(z_n)$ converge, so they must converge to the same limit. Am I on the right path? Please help.
-
Consider the function $f$ on $[0,1]$ defined by $f(0)=0$ and $f(x)=1/x$ for all $x \in (0,1]$. Then it satisfies the hypothesis but is not continuous on $[0,1]$. So the claim is false. The claim is false because the antecedent requires both $(x_n)$ and $(f(x_n))$ to be convergent. And for all $(x_n)$ that converges to $0, (f(x_n))$ is not convergent and hence the antecedent is false. Therefore the implication stands true, yet the function is not continuous on $[0,1]$! – Kasun Fernando Jul 9 '12 at 14:07
## 1 Answer
I will prove a different claim because I have pointed out that what is mentioned here is wrong by a counter-example.
Let $f:[0,1]→\mathbb{R}$ be such that for every sequence $x_n∈[0,1]$, whenever $(x_n)$ converges, we have $\lim\limits_{n→∞}f(x_n)=f \left(\lim\limits_{n→∞}x_n \right)$; then $f$ is continuous on $[0,1]$.
I think the best way is to use proof by contradiction. Assume $f$ is not continuous at $c \in [0,1]$ then there exist $\epsilon_{0} > 0$ such that for all $n \in \mathbb{N}$ there exist $x_{n} \in (c-1/n,c+1/n) \cap [0,1]$ such that $|f(x_n)-f(c)| \geq \epsilon_{0}>0$
Obviously $( x_n )$ converges to $c$ but $(f(x_n))$ does not converge to $f(c)$ ( Note that all the terms of $(f(x_n))$ are a positive distance away from $f(c)$ ) which is a contradiction with the given property of the function.
Since our choice of $c$ was arbitrary, we have that $f$ is continuous on $[0,1]$
-
are you assuming that $f(x_n)$ converges? why not $f(x_n)\rightarrow\infty$? – Taxi Driver Jul 9 '12 at 13:39
Who cares about $f(x_n)$, as long as it does not converge to $f(x)$? – Siminore Jul 9 '12 at 13:41
I am wondering where he is using the fact given about $x_n$ and $f(x_n)$ converges together. – Taxi Driver Jul 9 '12 at 13:42
Consider the function $f$ on $[0,1]$ defined by $f(0)=0$ and $f(x)=1/x$ for all non-zero $x$. Then it satisfies the hypothesis but is not continuous on $[0,1]$. So the claim is false. What I have proven is the sequential criterion for continuity! – Kasun Fernando Jul 9 '12 at 14:00
http://math.stackexchange.com/questions/229204/show-the-sample-mean-is-a-sufficient-estimator-of-theta-if-the-population-is

# Show the sample mean is a sufficient estimator of $\theta$ if the population is exponentially distributed with parameter $\theta$.
I used the Fisher-Neyman factorization theorem for this problem.
If $X$ is exponentially distributed with rate $\theta$, its density is $f(x)=\theta e^{-x \theta}$ for $x>0$.
If we have a random sample $X_1,\dots,X_n$, and $\bar{X}$ is the sample mean, their joint density is:
$$f(x_1,\dots,x_n)=\theta^ne^{-n\bar{x}\theta}$$ Then if we take $h(x_1,\dots,x_n)=1$ and $g_\theta(x_1,\dots,x_n)=\theta^ne^{-n\bar{x}\theta}$, by the Fisher-Neyman factorization theorem, $\bar{X}$ is a sufficient estimator of $\theta$.
I have two questions about this result. The first is that the Wikipedia article uses the sum of $X_1,\dots,X_n$ as an example of a sufficient statistic of $\theta$. Does this mean both of these estimators are sufficient, but not necessarily consistent? Also, by choosing $h(x_1,\dots,x_n)=1$ as in the article, I felt a little like I was cheating. What would be a situation in which I can take $h$ as above and still wind up with an insufficient statistic?
-
http://math.stackexchange.com/questions/42756/how-to-check-if-circles-intersect-each-other/42771

# How to check if circles intersect each other?
I'm trying to do trilateration where I know the coordinates of three known points and the estimates of the radii - not guaranteed to be really precise. My question is, how can I check if the circles actually intersect each other? Does the checking step mentioned in this tutorial make sense, considering the estimate values?
-
Two circles intersect each other if the distance of the middle points is smaller than the sum of their radii. Does this help? – Listing Jun 2 '11 at 12:30
Yes, I know. The problem is I don't know the exact value of the radii because I estimate them based on other parameters (e.g. Bluetooth signal strength values). So, does it somehow still make sense to check if they actually intersect, or just go with the rest of the calculation? – printemps Jun 2 '11 at 12:44
Sure it does but of course you will get an error in the end that depends on the error in the calculation of the radii. – Listing Jun 2 '11 at 12:47
Thanks! I'll get back to you if there's any further issues. – printemps Jun 2 '11 at 12:55
## 1 Answer
If by the checking step you mean the first three lines in the tutorial, it does make sense. It sounds like your center points are known well, even if the distances are not. For two circles, each with a range of radii, you can do two checks: one with the minimum radii and one with the maximum. For the most part, you will then know if they intersect none, some, or all of the time the radii are in that range. There are pathological cases where the intersection will disappear in the middle. One example would be center $(0,0)$, distance $(\frac{5}{8},\frac{7}{8})$ and center $(1,0)$ distance $(\frac{1}{2},\frac{7}{4})$. If the error in your distance estimates is small, this is unlikely.
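A small sketch of these checks in code (my own addition; the helper names are arbitrary). The first function is the exact condition for two circles, viewed as curves, to share at least one point; the second only samples the extreme radii from each range, so, as noted above, it is a crude screen that can misjudge the pathological in-between cases.

```python
from math import hypot

def circles_intersect(c1, r1, c2, r2):
    """True if the two circles (as curves) have at least one common point."""
    d = hypot(c1[0] - c2[0], c1[1] - c2[1])
    return abs(r1 - r2) <= d <= r1 + r2

def may_intersect(c1, r1_range, c2, r2_range):
    """Crude screen over radius uncertainty: test only the extreme radii."""
    return any(circles_intersect(c1, r1, c2, r2)
               for r1 in r1_range for r2 in r2_range)

# the example from the answer: centers (0,0) and (1,0)
print(may_intersect((0, 0), (0.625, 0.875), (1, 0), (0.5, 1.75)))   # True
```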
-
Thanks! That will do the trick. – printemps Jun 3 '11 at 7:42
http://math.stackexchange.com/questions/52778/divisibility-rules-and-congruences

# Divisibility rules and congruences
Sorry if the question is old, but I wasn't able to figure out the answer yet. I know that there are a lot of divisibility rules, i.e. sum of digits, alternating plus and minus digits, etc., but how can someone derive those rules for any number $n$, let's say? I know it could be done using congruences, but how?
Thank you !
-
– leo Jul 21 '11 at 0:37
The only trouble is that it is in Spanish. – leo Jul 21 '11 at 0:39
...construct test of divisibility by any number greater than $10$ except multiples of 2 and 5. – leo Jul 21 '11 at 0:45
Many of these rules are artifacts of a number's base representation in a particular base. – ncmathsadist Jul 21 '11 at 0:46
@draks Here is. Since the link can be subject to future changes this information can be useful. The article is: Reglas de divisibilidad, Vol. 7, No. 1, 2006 of this. Notice that it is in Spanish. – leo May 9 '12 at 2:51
## 3 Answers
One needn't memorize motley exotic divisibility tests. There is a universal test that is simpler and much easier recalled, viz. evaluate a radix polynomial in nested Horner form, using modular arithmetic. For example, consider evaluating a $3$ digit radix $10$ number modulo $7$. In Horner form $\rm\ d_2\ d_1\ d_0 \$ is $\rm\: (d_2\cdot 10 + d_1)\ 10 + d_0\ \equiv\ (d_2\cdot 3 + d_1)\ 3 + d_0\ (mod\ 7)\$ since $\rm\ 10\equiv 3\ (mod\ 7)\:.\:$ So we compute the remainder $\rm\ (mod\ 7)\$ as follows. Start with the leading digit then repeatedly apply the operation: multiply by $3$ then add the next digit, doing all of the arithmetic $\rm\:(mod\ 7)\:.\:$
For example, let's use this algorithm to reduce $\rm\ 43211\ \:(mod\ 7)\:.\:$ The algorithm consists of repeatedly replacing the first two leading digits $\rm\ d_n\ d_{n-1}\$ by $\rm\ d_n\cdot 3 + d_{n-1}\:\ (mod\ 7),\:$ namely
$\rm\qquad\phantom{\equiv} \color{#C00}{4\ 3}\ 2\ 1\ 1$
$\rm\qquad\equiv\phantom{4} \color{green}{1\ 2}\ 1\ 1\quad$ by $\rm\quad \color{#C00}4\cdot 3 + \color{#C00}3\ \equiv\ \color{green}1$
$\rm\qquad\equiv\phantom{4\ 3} \color{royalblue}{5\ 1}\ 1\quad$ by $\rm\quad \color{green}1\cdot 3 + \color{green}2\ \equiv\ \color{royalblue}5$
$\rm\qquad\equiv\phantom{4\ 3\ 5} \color{brown}{2\ 1}\quad$ by $\rm\quad \color{royalblue}5\cdot 3 + \color{royalblue}1\ \equiv\ \color{brown}2$
$\rm\qquad\equiv\phantom{4\ 3\ 5\ 2} 0\quad$ by $\rm\quad \color{brown}2\cdot 3 + \color{brown}1\ \equiv\ 0$
Hence $\rm\ 43211\equiv 0\:\ (mod\ 7)\:,\:$ indeed $\rm\ 43211 = 7\cdot 6173\:.\:$ Generally the modular arithmetic is simpler if one uses a balanced system of representatives, e.g. $\rm\: \pm\{0,1,2,3\}\ \:(mod\ 7)\:.$ Notice that for modulus $11$ or $9\:$ the above method reduces to the well-known divisibility tests by $11$ or $9\:$ (a.k.a. "casting out nines" for modulus $9\:$).
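(My addition, not part of the answer:) the nested Horner evaluation above is literally a three-line program, reducing mod $n$ at every step so the intermediate numbers stay small:

```python
def remainder(digits, base, n):
    """Evaluate the digit list in Horner form, reducing mod n at every step."""
    r = 0
    for d in digits:
        r = (r * base + d) % n
    return r

print(remainder([4, 3, 2, 1, 1], 10, 7))   # 0, so 7 divides 43211
print(43211 % 7)                           # 0, the same check done directly
```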
-
I really appreciate you using hand-picked muted colors here. – mixedmath♦ Jun 21 '12 at 22:59
A positive integer written as $x = d_k \ldots d_1 d_0$ (in base 10) is really $\sum_{j=0}^k d_j 10^j$. Suppose $10^j \equiv m_j \mod n$. Then $x$ is divisible by $n$ if and only if $\sum_{j=0}^k d_j m_j$ is divisible by $n$. Assuming $n$ and 10 are coprime, $10^j$ is periodic mod $n$, the minimal period being a divisor of $\varphi(n)$.
For example, in the case $n=7$, we have $m_0 = 1$, $m_1 = 3$, $m_2 = 2$, $m_3 = -1$, $m_4 = -3$, $m_5 = -2$, and then it repeats. So $x$ is divisible by 7 if and only if $(d_0 - d_3 + d_6 - d_9 + \ldots) + 3 (d_1 - d_4 + d_7 - d_{10} + \ldots) + 2 (d_2 - d_5 + d_8 - d_{11} + \ldots)$ is.
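(Also my addition:) the weights $m_j = 10^j \bmod n$, and their balanced representatives, take one line each to generate, which makes it easy to read off a rule of this kind for any $n$ coprime to 10:

```python
def weights(n, count=12, balanced=True):
    """First `count` values of 10^j mod n, optionally as balanced residues."""
    ms = [pow(10, j, n) for j in range(count)]
    if balanced:
        ms = [m - n if m > n // 2 else m for m in ms]
    return ms

print(weights(7))    # [1, 3, 2, -1, -3, -2, 1, 3, 2, -1, -3, -2]
```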
-
– Gone Jul 21 '11 at 2:12
Here's one example... maybe it will help: let's show that a number is divisible by 3 iff the sum of its digits is divisible by 3. Write your number in expanded notation [using Robert Israel's notation]:
$N=\displaystyle\sum_{j=0}^n d_j10^j, d_j\in\{0,1,\dotsc,9\}$ (assume $d_n\neq 0$)
We want to know when this number is divisible by 3; said equivalently, when this number is congruent to $0\pmod{3}$. I claim it's when the sum of the digits is divisible by 3. To show this, take our number $N\pmod{3}$:
$\displaystyle\sum_{j=0}^n d_j10^j\equiv \displaystyle\sum_{j=0}^n d_j1^j =\displaystyle\sum_{j=0}^n d_j\pmod{3}\text{ and we see that}$
When $\displaystyle\sum_{j=0}^n d_j\equiv0\pmod{3},~\displaystyle\sum_{j=0}^nd_j10^j$ is divisible by 3.
-
http://pediaview.com/openpedia/Consistent_estimator

# Consistent estimator
{T1, T2, T3, …} is a sequence of estimators for parameter θ0, the true value of which is 4. This sequence is consistent: the estimators are getting more and more concentrated near the true value θ0; at the same time, these estimators are biased. The limiting distribution of the sequence is a degenerate random variable which equals θ0 with probability 1.
In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter θ0—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to θ0. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to θ0 converges to one.
In practice one constructs an estimator as a function of an available sample of size n, and then imagines being able to keep collecting data and expanding the sample ad infinitum. In this way one would obtain a sequence of estimates indexed by n, and consistency is a property of what occurs as the sample size “grows to infinity”. If the sequence of estimates can be mathematically shown to converge in probability to the true value θ0, it is called a consistent estimator; otherwise the estimator is said to be inconsistent.
Consistency as defined here is sometimes referred to as weak consistency. When we replace convergence in probability with almost sure convergence, then the estimator is said to be strongly consistent.
## Definition
Loosely speaking, an estimator Tn of parameter θ is said to be consistent, if it converges in probability to the true value of the parameter:[1]
$\underset{n\to\infty}{\operatorname{plim}}\;T_n = \theta.$
A more rigorous definition takes into account the fact that θ is actually unknown, and thus the convergence in probability must take place for every possible value of this parameter. Suppose {pθ: θ ∈ Θ} is a family of distributions (the parametric model), and Xθ = {X1, X2, … : Xi ~ pθ} is an infinite sample from the distribution pθ. Let { Tn(Xθ) } be a sequence of estimators for some parameter g(θ). Usually Tn will be based on the first n observations of a sample. Then this sequence {Tn} is said to be (weakly) consistent if [2]
$\underset{n\to\infty}{\operatorname{plim}}\;T_n(X^{\theta}) = g(\theta),\ \ \text{for all}\ \theta\in\Theta.$
This definition uses g(θ) instead of simply θ, because often one is interested in estimating a certain function or a sub-vector of the underlying parameter. In the next example we estimate the location parameter of the model, but not the scale:
## Examples
### Sample mean of a normal random variable
Suppose one has a sequence of observations {X1, X2, …} from a normal N(μ, σ2) distribution. To estimate μ based on the first n observations, one can use the sample mean: Tn = (X1 + … + Xn)/n. This defines a sequence of estimators, indexed by the sample size n.
From the properties of the normal distribution, we know the sampling distribution of this statistic: Tn is itself normally distributed, with mean μ and variance σ2/n. Equivalently, $\scriptstyle (T_n-\mu)/(\sigma/\sqrt{n})$ has a standard normal distribution. Then
$\Pr\!\left[\,|T_n-\mu|\geq\varepsilon\,\right] = \Pr\!\left[ \frac{\sqrt{n}\,\big|T_n-\mu\big|}{\sigma} \geq \sqrt{n}\varepsilon/\sigma \right] = 2\left(1-\Phi\left(\frac{\sqrt{n}\,\varepsilon}{\sigma}\right)\right) \to 0$
as n tends to infinity, for any fixed ε > 0. Therefore, the sequence Tn of sample means is consistent for the population mean μ.
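To see this convergence numerically, here is a small Monte Carlo illustration (an addition of mine, not part of the original article; μ, σ, ε and the sample sizes are arbitrary demo values): for each n it estimates the probability that the sample mean lands within ε of μ, which should climb towards 1.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, eps = 4.0, 1.0, 0.1     # arbitrary demo parameters
trials = 1000

for n in [10, 100, 1000, 10000]:
    # sample means of n observations, repeated over many independent trials
    means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
    prob_close = np.mean(np.abs(means - mu) < eps)
    print(f"n={n:6d}   P(|T_n - mu| < {eps}) ~ {prob_close:.3f}")
```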
## Establishing consistency
The notion of asymptotic consistency is very close, almost synonymous to the notion of convergence in probability. As such, any theorem, lemma, or property which establishes convergence in probability may be used to prove the consistency. Many such tools exist:
• In order to demonstrate consistency directly from the definition one can use the inequality [3]
$\Pr\!\big[h(T_n-\theta)\geq\varepsilon\big] \leq \frac{\operatorname{E}\big[h(T_n-\theta)\big]}{\varepsilon},$
the most common choice for function h being either the absolute value (in which case it is known as Markov inequality), or the quadratic function (respectively Chebychev's inequality).
• Another useful result is the continuous mapping theorem: if Tn is consistent for θ and g(·) is a real-valued function continuous at point θ, then g(Tn) will be consistent for g(θ):[4]
$T_n\ \xrightarrow{p}\ \theta\ \quad\Rightarrow\quad g(T_n)\ \xrightarrow{p}\ g(\theta)$
• Slutsky’s theorem can be used to combine several different estimators, or an estimator with a non-random convergent sequence. If $T_n \xrightarrow{p} \alpha$ and $S_n \xrightarrow{p} \beta$, then [5]
$\begin{align} & T_n + S_n \ \xrightarrow{p}\ \alpha+\beta, \\ & T_n S_n \ \xrightarrow{p}\ \alpha \beta, \\ & T_n / S_n \ \xrightarrow{p}\ \alpha/\beta, \text{ provided that }\beta\neq0 \end{align}$
• If estimator Tn is given by an explicit formula, then most likely the formula will employ sums of random variables, and then the law of large numbers can be used: for a sequence {Xn} of random variables and under suitable conditions,
$\frac{1}{n}\sum_{i=1}^n g(X_i) \ \xrightarrow{p}\ \operatorname{E}[\,g(X)\,]$
• If estimator Tn is defined implicitly, for example as a value that maximizes certain objective function (see extremum estimator), then a more complicated argument involving stochastic equicontinuity has to be used.[6]
## Bias versus consistency
### Unbiased but not consistent
An estimator can be unbiased but not consistent. For example, for an iid sample {x1, ..., xn} one can use T(X) = x1 as the estimator of the mean E[x]. Note that here the sampling distribution of T is the same as the underlying distribution (for any n, as it ignores all points but the first), so E[T(X)] = E[x] and it is unbiased, but it does not converge to any value.
However, if a sequence of estimators is unbiased and converges to a value, then it is consistent, as it must converge to the correct value.
### Biased but consistent
Alternatively, an estimator can be biased but consistent. For example if the mean is estimated by ${1 \over n} \sum x_i + {1 \over n}$ it is biased, but as $n \rightarrow \infty$, it approaches the correct value, and so it is consistent.
Important examples include the sample variance and sample standard deviation. Without Bessel's correction (using the sample size n instead of the degrees of freedom n − 1), these are both negatively biased but consistent estimators. With the correction, the unbiased sample variance is unbiased, while the corrected sample standard deviation is still biased, but less so, and both are still consistent: the correction factor converges to 1 as sample size grows.
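A quick numerical check of the biased-but-consistent estimator ${1 \over n} \sum x_i + {1 \over n}$ mentioned above (again my addition; the normal distribution and the constants are arbitrary): its bias is exactly 1/n and visibly shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, trials = 4.0, 10000

for n in [5, 50, 500]:
    x = rng.normal(mu, 1.0, size=(trials, n))
    t = x.mean(axis=1) + 1.0 / n     # the biased but consistent estimator
    print(f"n={n:4d}   average bias ~ {t.mean() - mu:+.4f}   (exact bias 1/n = {1/n:.4f})")
```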
## See also
• Fisher consistency — alternative, although rarely used concept of consistency for the estimators
• Statistical hypothesis testing
## References
• Amemiya, Takeshi (1985). Advanced Econometrics. Harvard University Press. ISBN 0-674-00560-0.
• Lehmann, E. L.; Casella, G. (1998). Theory of Point Estimation (2nd ed.). Springer. ISBN 0-387-98502-6.
• Newey, W.; McFadden, D. (1994). Large sample estimation and hypothesis testing. In "Handbook of Econometrics", Vol. 4, Ch. 36. Elsevier Science. ISBN 0-444-88766-0.
• Nikulin, M. S. (2001), "Consistent estimator", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
## Source

This article is adapted from the Wikipedia article "Consistent estimator", available under the Creative Commons Attribution-ShareAlike 3.0 Unported License:
http://en.wikipedia.org/w/index.php?title=Consistent_estimator
http://mathoverflow.net/questions/77527?sort=newest

## Balls in spaces of operators
I am interested in some geometrical aspects of spaces $L(E)$, of bounded operators on a given Banach space $E$. I am unable to estimate if my problem deserves to be asked at MO, but let me try.
Is there an infinite-dimensional Banach space (non-separable preferably) $E$ such that for some non-zero
$T\in L(E)$
the set
$$\{S\in L(E)\colon \|S-T\|=\|S+T\|\}$$
contains an open ball? In fact, I am more interested in the negation:
Is there a Banach space such that for none non-zero $T\in L(E)$ this can happen?
I cannot (dis)prove it even if $E$ is a Hilbert space.
-
Of course, you mean $T\ne0$. – Denis Serre Oct 8 2011 at 12:45
Yes, you're right. Corrected. – Sellapan Nathan Oct 8 2011 at 13:50
## 2 Answers
In what follows I show that such an operator exists if $E$ can be written (isometrically) as the $\ell_\infty$-direct sum of two (nonzero) subspaces (I have not tried the Hilbert space case, but I started writing my answer before the edits were made to the question.)
Let $E = X\oplus_\infty Y$, where $X$ and $Y$ are nonzero (infinite dimensional, if you like). Each $V\in L(E)$ satisfies $\Vert V \Vert = \max ( \Vert P_X V \Vert, \Vert P_Y V\Vert )$, where $P_X$ and $P_Y$ denote the projections onto the complemented subspaces $X$ and $Y$.
Let $T= P_X$ and $S=3P_Y$, so that $\Vert T-S\Vert =3=\Vert T+S\Vert$. To construct the desired example, we show that if $\Vert R-S\Vert <1$, then $\Vert T-R\Vert = \Vert T+R\Vert$. So take such $R$ and note that then $\Vert P_YR \Vert >2$ and $\Vert P_XR\Vert<1$. It follows that $$\Vert T-R\Vert = \max (\Vert P_X(T-R) \Vert ,\Vert P_Y(T-R)\Vert ) = \max (\Vert T-P_XR \Vert ,\Vert P_YR\Vert ) = \Vert P_Y R\Vert$$ (since $\Vert T-P_XR \Vert \leq \Vert T\Vert + \Vert P_XR \Vert < 2$ and $\Vert P_Y R\Vert >2$).
Similarly, we conclude that $\Vert T+R\Vert = \Vert P_Y R\Vert$, hence $\Vert T+R\Vert = \Vert T-R\Vert$.
Edit: Note that since each $U\in L(X\oplus_1 Y)$ satisfies $\Vert U\Vert = \max (\Vert UP_X\Vert , \Vert UP_Y\Vert )$, a similar construction gives an example of such a ball for spaces isometrically isomorphic to $X\oplus_1 Y$ for nonzero $X$ and $Y$.
-
Perhaps if we could prove that this set is a vector space for some Banach space $E$, then we would be done (as it is always a proper subspace). – Sellapan Nathan Oct 8 2011 at 19:24
For a given $\varepsilon>0$ the set never contains the operator $\varepsilon T$ unless $T=0$.
-
Right, but I am not assuming that the ball must be centered at $T$. – Sellapan Nathan Oct 8 2011 at 14:02
http://mathoverflow.net/questions/43646?sort=oldest

## Variants of the fixed point theorem
Let $E$ be a dual Banach space and $C$ a nonempty convex weak* compact subset of $E$. Let $G$ be a group of weak* continuous linear isometries on $E$. Suppose that $g(C)\subset C$ for all $g\in G$.
A fixed point for $G$ is an element $x$ of $C$ such that $g(x)=x$ for any $g\in G$.
What conditions on $G$ assure the existence of a fixed point for $G$?
The only condition which I know is noncontracting (=distal), see Fixed point theory, Granas/Dugundji, page 173. I need other conditions.
-
## 4 Answers
Kakutani's fixed point theorem says that it is enough for $G$ to be equicontinuous on $C$. Now equicontinuity in the weak$^*$ topology might be too restrictive, but the preadjoints of the elements of $G$ are equicontinuous in the norm topology and this can sometimes (always?) be used to find a fixed point of $G$ in $C$. This is what Rudin does in his book Functional Analysis to prove the existence of Haar measure on a compact group.
-
I think if $G$ is amenable, you always get a fixed point. Check the definition of amenability in Wikipedia: http://en.wikipedia.org/wiki/Amenable_group [Even though the definition is given for discrete groups, it generalizes to second countable, locally compact groups: see Bob Zimmer's book "Semisimple Groups and Ergodic Theory"].
-
Finally, Bourbaki's "Topological Vector Spaces" seems to answer the question completely if $C$ is of denumerable type. No condition is needed. Such a group has a fixed point!
-
Is this an answer or a comment to your original question? – Yemon Choi Oct 26 2010 at 19:13
This is an answer to the initial question in the case where $C$ is second countable for the topology of the norm. – BigBill Oct 26 2010 at 19:56
Let X be the predual of E. If the dual of every separable subspace of X is separable, then C contains a point that is fixed by EVERY weak* continuous affine isometry of C into C. This is Theorem 2 in the following paper: http://math.gmu.edu/~tlim/pams81.pdf -TCL
-
Thank you very much. It is a surprising result. Actually, I am interested in the case where $E$ is not separable (unfortunately for me). What is known about my question? – BigBill Nov 8 2010 at 18:08
If $G$ is commutative and $E$ is uniformly convex (or uniformly smooth), then $G$ has a common fixed point. Other weaker conditions like weak* normal structure on $C$ will also suffice. – TCL Nov 10 2010 at 21:00
Thank you. A last question: What do you think about the case where $E$ is the space $B(\ell_\infty)$ of bounded linear operators on $\ell_\infty$ (its predual is the space $\ell_\infty\hat{\otimes}\ell_1$ where $\hat{\otimes}$ denotes the projective tensor product) and a noncommutative discrete group $G$? Same question with $E=B_{w*}(\ell_\infty)$ the space of weak* continuous bounded operators on $\ell_\infty$. – BigBill Nov 11 2010 at 22:55
I don't know the answer to your last question. I will look into it. BTW commutativity is not needed in my previous comment, the group of isometries is left-reversible semigroup, so the theorem in math.gmu.edu/~tlim/pams74.pdf applies. – TCL Nov 13 2010 at 15:31
http://mathoverflow.net/questions/65323?sort=newest | ## flatness of coherent analytic sheaf
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I have come across a problem like this: given a short exact sequence $0\rightarrow E_1\rightarrow E_2\rightarrow E_3\rightarrow 0$, where the $E_i$, $i=1,2,3$, are coherent sheaves over a compact complex manifold $X$. Let $L$ be a holomorphic line bundle over $X$ and let $\mathcal{O}_X(L)$ be the associated coherent analytic sheaf. Can we get $0\rightarrow E_1\otimes\mathcal{O}_X(L) \rightarrow E_2\otimes\mathcal{O}_X(L) \rightarrow E_3\otimes\mathcal{O}_X(L) \rightarrow 0$? Then, furthermore, for any other coherent analytic sheaf $S$, can we get $0\rightarrow E_1\otimes S \rightarrow E_2\otimes S \rightarrow E_3\otimes S \rightarrow 0$?
-
To check that locally free coherent sheaves are flat (as answered by Ottem), one can pass to stalks. – shenghao May 18 2011 at 13:38
Thank you very much! And how does one determine the flatness of a given coherent sheaf in general; does there exist some kind of obstruction? – HKSHLZW May 19 2011 at 6:13
Similarly, a coherent sheaf $F$ on a complex manifold $X$ is flat if and only if for every $x\in X,$ the stalk $F_x$ is a flat $O_x$-module. As $O_x$ is Noetherian local (cf. Gunning's books), flat=free. And $F_x$ free over $O_x$ implies that $F$ is locally free (see Hartshorne II ex. 5.7 for an algebraic counterpart, and mimic its proof). So a coherent sheaf $F$ is flat if and only if it's locally free (I could be wrong though...). – shenghao May 19 2011 at 13:37
## 1 Answer
Yes, the sequence $0\rightarrow E_1\otimes\mathcal{O}_X(L) \rightarrow E_2\otimes\mathcal{O}_X(L) \rightarrow E_3\otimes\mathcal{O}_X(L) \rightarrow 0\,\,$ is certainly exact since $L$ is locally free (hence flat).
For the second question, the answer is negative in general. Take $0 \to I_Y \to O_X \to O_Y \to 0$ and $S=O_Y$, where $X=\mathbb{A}^1$ and $Y=pt$.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9134814739227295, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/17209?sort=newest | ## Consequences of the Riemann hypothesis
I assume a number of results have been proven conditionally on the Riemann hypothesis, of course in number theory and maybe in other fields. What are the most relevant you know?
It would also be nice to include consequences of the generalized Riemann hypothesis (but specify which one is assumed).
-
I suggest you make it a community wiki – vonjd Mar 5 2010 at 20:18
Done. I was not sure if that was the case, as it may turn out that some answer does a better survey of the current situation than the others, and so it should get the accepted answers points. So my problem really is: should every big-list be community wiki? – Andrea Ferretti Mar 5 2010 at 20:22
I was going to suggest Ravi Ramakrishna's results on infinitely ramified Galois representations (MR1765710). Initially, these representations were only known to be crystalline assuming the GRH, but apparently GRH is no longer needed due to more recent work of Ramakrishna and Khare (MR2004459). – Dan Ramras Mar 5 2010 at 20:24
## 6 Answers
I gave a talk on this topic a few months ago, so I assembled a list then which could be appreciated by a general mathematical audience. I'll reproduce it here.
Let's start with three applications of RH for the Riemann zeta-function only.
a) Sharp estimates on the remainder term in the prime number theorem: $\pi(x) = {\text{Li}}(x) + O(\sqrt{x}\log x)$, where ${\text{Li}}(x)$ is the logarithmic integral (the integral from 2 to $x$ of $1/\log t$).
b) Comparing $\pi(x)$ and ${\text{Li}}(x)$. All the numerical data shows $\pi(x)$ < ${\text{Li}}(x)$, and Gauss thought this was always true, but in 1914 Littlewood used the Riemann hypothesis to show the inequality reverses infinitely often. In 1933, Skewes used RH to show the inequality reverses for some $x$ below $10^{10^{10^{34}}}$. In 1955 Skewes unconditionally (no need for RH) showed the inequality reverses for some $x$ below $10^{10^{10^{963}}}$. Maybe this was the first example where something was proved first assuming RH and later proved unconditionally.
c) Gaps between primes. In 1919, Cramer showed RH implies $p_{k+1} - p_k = O(\sqrt{p_k}\log p_k)$, where $p_k$ is the $k$th prime. (A conjecture of Legendre is that there's always a prime between $n^2$ and $(n+1)^2$ -- in fact there should be a lot of them -- and this would imply $p_{k+1} - p_k = O(\sqrt{p_k})$. This is better than Cramer's result, so it lies deeper than a consequence of RH. Cramer also conjectured that the gap is really $O((\log p_k)^2)$.)
Now let's move on to applications involving more zeta and $L$-functions than just the Riemann zeta-function. Note that typically we will need to assume GRH for infinitely many such functions to say anything.
d) Chebyshev's conjecture. In 1853, Chebyshev tabulated the primes which are $1 \bmod 4$ and $3 \bmod 4$ and noticed there are always at least as many $3 \bmod 4$ primes up to $x$ as $1 \bmod 4$ primes. He conjectured this was always true and also gave an analytic sense in which there are more $3 \bmod 4$ primes: $$\lim_{x \rightarrow 1^{-}} \sum_{p \not= 2} (-1)^{(p+1)/2}x^p = \infty.$$ Here the sum runs over odd primes $p$. In 1917, Hardy-Littlewood and Landau (independently) showed this second conjecture of Chebyshev's is equivalent to GRH for the $L$-function of the nontrivial character mod 4. (In 1994, Rubinstein and Sarnak used simplicity and linear independence hypotheses on zeros of $L$-functions to say something about Chebyshev's first conjecture, but as the posted question asked only about consequences of RH and GRH, I leave the matter there and move on.)
e) The Goldbach conjecture (1742). The "even" version says all even integers $n \geq 4$ are a sum of 2 primes, while the "odd" version says all odd integers $n \geq 7$ are a sum of 3 primes. For most mathematicians, the Goldbach conjecture is understood to mean the even version, and obviously the even version implies the odd version. There has been progress on the odd version if we assume GRH. In 1923, assuming all Dirichlet $L$-functions are nonzero in a right half-plane ${\text{Re}}(s) \geq 3/4 - \varepsilon$, where $\varepsilon$ is fixed (independent of the $L$-function), Hardy and Littlewood showed the odd Goldbach conjecture is true for all sufficiently large odd $n$. In 1937, Vinogradov proved the same result unconditionally, so he was able to remove GRH as a hypothesis. In 1997, Deshouillers, Effinger, te Riele, and Zinoviev showed the odd Goldbach conjecture is true for all odd $n \geq 7$ assuming GRH. That is, the odd Goldbach conjecture is completely settled if GRH is true.
f) Polynomial-time primality tests. By results of Ankeny (1952) and Montgomery (1971), if GRH is true for all Dirichlet $L$-functions then the first nonmember of a proper subgroup of any unit group $({\mathbf Z}/m{\mathbf Z})^\times$ is $O((\log m)^2)$, where the $O$-constant is independent of $m$. In 1985, Bach showed under GRH that you take the constant to be 2. That is, each proper subgroup of $({\mathbf Z}/m{\mathbf Z})^\times$ does not contain some integer from 1 to $2(\log m)^2$. Put differently, if a subgroup contains all positive integers below $2(\log m)^2$ then the subgroup is the whole unit group mod $m$. (If instead we knew all Dirichlet $L$-functions have no nontrivial zeros on ${\text{Re}}(s) > 1 - \varepsilon$ then the first nonmember of any proper subgroup is $O((\log m)^{1/\varepsilon})$. Set $\varepsilon = 1/2$ to get the previous result I stated using GRH.) In 1976, Gary Miller used such results to show on GRH for all Dirichlet $L$-functions that there is a polynomial-time primality test. (It involved deciding if a subgroup of units is proper or not.) Shortly afterwards Solovay and Strassen described a different test along these lines using Jacobi symbols which only involved subgroups containing $-1$, so their test would "only" need GRH for Dirichlet $L$-functions of even characters in order to be a polynomial-time primality test. (Solovay and Strassen described their test only as a probabilistic test.)
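To make (f) concrete, here is a minimal sketch (my own addition, not part of the original answer) of Miller's GRH-conditional test for 64-bit integers: under GRH it suffices to check every base $a \le 2(\log n)^2$, the Bach bound quoted above. Function names are mine.

```
#include <cstdint>
#include <cmath>

// Modular arithmetic via unsigned __int128 (a GCC/Clang extension).
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;
}
static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1) r = mulmod(r, b, m);
        b = mulmod(b, b, m);
        e >>= 1;
    }
    return r;
}

// True if base 'a' proves the odd number n composite (a Miller witness).
static bool isWitness(uint64_t a, uint64_t n) {
    uint64_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; ++s; }
    uint64_t x = powmod(a, d, n);
    if (x == 1 || x == n - 1) return false;
    for (int i = 1; i < s; ++i) {
        x = mulmod(x, x, n);
        if (x == n - 1) return false;
    }
    return true;
}

// Miller's 1976 test: polynomial time, but the correctness of stopping
// at 2*(ln n)^2 depends on GRH (via Bach's bound mentioned above).
bool isPrimeAssumingGRH(uint64_t n) {
    if (n < 2) return false;
    if (n % 2 == 0) return n == 2;
    uint64_t bound = (uint64_t)(2.0 * std::log((double)n) * std::log((double)n));
    for (uint64_t a = 2; a <= bound && a < n; ++a)
        if (isWitness(a, n)) return false;
    return true;
}
```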
In 2002 Agrawal, Kayal, and Saxena gave an unconditional polynomial-time primality test. This is a nice example showing how GRH guides mathematicians in the direction of what should be true and then you hope to find a proof of those results by other (unconditional) methods.
g) Euclidean rings of integers. In 1973, Weinberger showed that if GRH is true for Dedekind zeta-functions then any number field with an infinite unit group (so ignoring the rationals and imaginary quadratic fields) is Euclidean if it has class number 1. As a special case, in concrete terms, if $d$ is a positive integer which is not a perfect square then the ring ${\mathbf Z}[\sqrt{d}]$ is a unique factorization domain only if it is Euclidean. There has been progress in the direction of unconditional proofs that class number 1 implies Euclidean by Ram Murty and others, but as a striking special case let's consider ${\mathbf Z}[\sqrt{14}]$. It has class number 1 (which must have been known to Gauss in the early 19th century, in the language of quadratic forms), so it should be Euclidean. This particular real quadratic ring was first proved to be Euclidean only in 2004 (by M. Harper). So this is a ring which was known to have unique factorization for over 100 years before it was proved to be Euclidean.
h) Artin's primitive root conjecture. In 1927, Artin conjectured that any integer $a$ which is not $\pm 1$ or a perfect square is a generator of $({\mathbf Z}/p{\mathbf Z})^\times$ for infinitely many $p$, in fact for a positive proportion of such $p$. As a special case, taking $a = 10$, this says for primes $p$ the unit fraction $1/p$ has decimal period $p-1$ for a positive proportion of $p$. (For any prime $p$, the decimal period for $1/p$ is a factor of $p-1$, so this special case is saying the largest possible choice is realized infinitely often in a precise sense; a weaker version of this special case goes back to Gauss.) In 1967, Hooley showed Artin's conjecture follows from GRH. In 1984, R. Murty and Gupta showed unconditionally that the conjecture is true for infinitely many $a$, but the proof couldn't pin down a specific $a$ for which it is true, and in 1986 Heath-Brown showed the conjecture is true for all prime values of $a$ with at most two exceptions (and surely there are no exceptions). No definite $a$ is known for which Artin's conjecture is unconditionally true.
i) First prime in an arithmetic progression. If $\gcd(a,m) = 1$ then there are infinitely many primes $p \equiv a \bmod m$. When does the first one appear, as a function of $m$? In 1934, assuming GRH Chowla showed the first prime $p \equiv a \bmod m$ is $O(m^2(\log m)^2)$. In 1944, Linnik unconditionally showed the bound is $O(m^L)$ for some universal exponent $L$. The latest unconditional choice for $L$ (Xylouris, 2009) is $L = 5.2$.
j) Gauss' class number problem. Gauss (1801) conjectured in the language of quadratic forms that there are only finitely many imaginary quadratic fields with class number 1. (He actually conjectured more precisely that the 9 known examples are the only ones, but for what I want to say the weaker finiteness statement is simpler.) In 1913, Gronwall showed this is true if the $L$-functions of all imaginary quadratic Dirichlet characters have no zeros in some common strip $1- \varepsilon < {\text{Re}}(s) < 1$. That is weaker than GRH (we only care about $L$-functions of a restricted collection of characters), but it is still an unproved condition. In 1933, Deuring and Mordell showed Gauss' conjecture is true if the ordinary RH (for Riemann zeta-function) is false, and then in 1934 Heilbronn showed Gauss' conjecture is true if GRH is false for some Dirichlet $L$-function of an imaginary quadratic character. Since Gronwall proved Gauss' conjecture is true when GRH is true for the Riemann zeta-function and the Dirichlet $L$-functions of all imaginary quadratic Dirichlet characters and Deuring--Mordell--Heilbronn proved Gauss' conjecture is true when GRH is false for at least one of those functions, Gauss' conjecture is true by baby logic. In 1935, Siegel proved Gauss' conjecture is true unconditionally, and in the 1950s and 1960s Baker, Heegner, and Stark gave separate unconditional proofs of Gauss' precise "only 9" conjecture.
k) Missing values of a quadratic form. Lagrange (1772) showed every positive integer is a sum of four squares. However, not every integer is a sum of three squares: $x^2 + y^2 + z^2$ misses all $n \equiv 7 \bmod 8$. Legendre (1798) showed a positive integer is a sum of three squares iff it is not of the form $4^a(8k+7)$. This can be phrased as a local-global problem: $x^2 + y^2 + z^2 = n$ is solvable in integers iff the congruence $x^2 + y^2 + z^2 \equiv n \bmod m$ is solvable for all $m$. More generally, the same local-global phenomenon applies to the three-variable quadratic form $x^2 + y^2 + cz^2$ for all integers $c$ from 2 to 10 except $c = 7$ and $c = 10$. What happens for these two special values? Ramanujan looked at $c = 10$. He found 16 values of $n$ for which there is local solvability (that is, we can solve $x^2 + y^2 + 10z^2 \equiv n \bmod m$ for all $m$) but not global solvability (no integral solution for $x^2 + y^2 + 10z^2 = n$). Two additional values of $n$ were found later, and in 1990 Duke and Schulze-Pillot showed that local solvability implies global solvability except for (ineffectively) finitely many positive integers $n$. In 1997, Ono and Soundarajan showed that, under GRH, the 18 known exceptions are the only ones.
l) Euler's convenient numbers. Euler called an integer $n \geq 1$ convenient if any odd integer greater than 1 that has a unique representation as $x^2 + ny^2$ in positive integers $x$ and $y$, and which moreover has $(x,ny) = 1$, is a prime number. (These numbers were convenient for Euler to use to prove certain numbers that were large in his day, like $67579 = 229^2 + 2\cdot 87^2$, are prime.) Euler found 65 convenient numbers below 10000 (the last one being 1848). In 1934, Chowla showed there are finitely many convenient numbers. In 1973, Weinberger showed there is at most one convenient number not in Euler's list, and if the $L$-functions of all quadratic Dirichlet characters satisfy GRH then Euler's list is complete. What he needed from GRH is the lack of any real zeros in the interval $(53/54,1)$.
-
Maybe I'm being a bit dense here, but in (e), how does odd Goldbach imply even Goldbach? I thought the implication went the other way: given a representation of an even number as a sum of two primes, you can add three to get a representation of an odd number as a sum of three primes. – Michael Lugo Mar 5 2010 at 22:52
Oops! I meant to say the even version implies the odd version (namely subtract 3 from any odd number at least 7). That's now fixed. – KConrad Mar 5 2010 at 22:57
Regarding i), this doesn't seem to be the strongest expected result. If we had O(n^{1+e}) for every e, then there would exist groups of size O(n^{2+e}) with n-dimensional irreducible representations for every n: see mathoverflow.net/questions/530/… . – Qiaochu Yuan Mar 5 2010 at 23:14
I'm now sorry I made the question community wiki. I think your answer is very interesting and surely involved a lot of work, and should deserve points – Andrea Ferretti Mar 6 2010 at 1:08
Keith, your answer is now referred to in wikipedia's Riemann hypothesis article. – Rob Harron Oct 24 2010 at 2:03
No. See the Sci. Am. column in the May 2012 issue. And I thank KConrad for point e).
-
This is clearly meant to be a comment, but where? Comments shouldn't be added as answers. – David Roberts Aug 27 at 7:10
Perhaps as a reply to Sunni under Igor Pak's answer? Not sure though either. – quid Aug 27 at 12:12
Many class group computations are sped up tremendously by assuming the GRH. As I understand it this is done by computing upper bounds on the discriminants of potential abelian extensions. See this survey by Odlyzko for more details
http://archive.numdam.org/ARCHIVE/JTNB/JTNB_1990__2_1/JTNB_1990__2_1_119_0/JTNB_1990__2_1_119_0.pdf
This is built into SAGE.
````sage: J=JonesDatabase()
sage: NFs=J.unramified_outside([2,3])
sage: time RHCNs = [K.class_number(proof=False) for K in NFs]
CPU times: user 7.05 s, sys: 0.07 s, total: 7.13 s
Wall time: 7.15 s
sage: time CNs = [K.class_number() for K in NFs]
CPU times: user 20.19 s, sys: 0.24 s, total: 20.43 s
Wall time: 20.96 s
````
-
(Jeffrey C. Lagarias) The following is equivalent to RH. Let $H_n = \sum\limits_{j=1}^n \frac{1}{j}$ be the $n$-th harmonic number. For each $n \ge 1$ $$\sum\limits_{d|n} d \le H_n + \exp (H_n) \log (H_n),$$ with equality only for $n = 1.$ (An Elementary Problem Equivalent to the Riemann Hypothesis. See also OEIS A057641.)
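A quick numerical illustration of the criterion (this snippet is an addition, not part of the original answer): compute $\sigma(n)$ with a naive divisor sum and check the inequality for small $n$; assuming RH, no violation should ever appear, with equality only at $n=1$.

```
#include <cmath>
#include <cstdio>

// Check sigma(n) <= H_n + exp(H_n)*log(H_n) for n = 1..100000.
int main() {
    const int N = 100000;
    double H = 0.0;                          // running harmonic number H_n
    for (int n = 1; n <= N; ++n) {
        H += 1.0 / n;
        long long sigma = 0;                 // sum of divisors of n
        for (int d = 1; (long long)d * d <= n; ++d) {
            if (n % d == 0) {
                sigma += d;
                if (d != n / d) sigma += n / d;
            }
        }
        if ((double)sigma > H + std::exp(H) * std::log(H))
            std::printf("inequality fails at n = %d\n", n);
    }
    std::printf("checked n <= %d\n", N);
    return 0;
}
```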
-
As for GRH, the prettiest one I know is this complete solution of the odd Goldbach conjecture (that every odd number greater than 5 is a sum of 3 primes).
-
The best result on the Goldbach conjecture I know is $1+2$ by J. R. Chen. A great many researchers have claimed that they proved $1+1$. Has the Goldbach conjecture really been proved? – Sunni Mar 6 2010 at 17:48
From Wikipedia's page on the consequences of the Riemann hypothesis
"Riemann's explicit formula for the number of primes less than a given number in terms of a sum over the zeros of the Riemann zeta function says that the magnitude of the oscillations of primes around their expected position is controlled by the real parts of the zeros of the zeta function. [...] the Riemann hypothesis is equivalent to the "best possible" bound for the error of the prime number theorem."
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 116, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9296018481254578, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/tagged/sage | ## Tagged Questions
2answers
128 views
### Ihara zeta function (graph theory) coefficients using a line graph
I'VE COMPLETELY REVISED MY QUESTION I wish to take a simple undirected graph (i.e. the complete graph K_4) Arbitrarily direct said graph, and then create a line graph from the d …
3answers
533 views
### What CASes say about the analytic rank of rank 8 elliptic curve ‘457532830151317a1’
For the rank $8$ elliptic curve with a-invariants $(0, 0, 1, -23737, 960366)$ sage 5.3 reports analytic rank $4$ in about 2.4 hours. Almost sure this a bug, so I am interested wha … | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7936064600944519, "perplexity_flag": "middle"} |
http://hbfs.wordpress.com/2008/08/26/optimizing-boolean-expressions-for-speed/ | # Harder, Better, Faster, Stronger
Explorations in better, faster, stronger code.
## Optimizing boolean expressions for speed.
Minimizing boolean expressions is of great pragmatic importance. In hardware, it is used to reduce the number of transistors in microprocessors. In software, it is used to speed up computations. If you have a complex expression you want to minimize and look up a textbook on discrete mathematics, you will usually find a list of boolean laws and identities that you are supposed to use to minimize functions.
Textbooks usually list the following boolean laws and identities:
• They list and explain basic operators: and $\wedge$, or $\vee$, exclusive or $\oplus$, negation $\neg$, implication $\rightarrow$, true $t$, false $f$, etc.
• Double negation: $\neg\neg a=a$
• Identity laws: $a \vee f=a$, $a \wedge t=a$
• Idempotent laws: $a \vee a = a$, $a \wedge a=a$
• Dominance laws: $a \vee t=t$, $a \wedge f=f$
• Commutative laws: $a \vee b = b \vee a$, $a \wedge b=b \wedge a$
• Associative laws: $a\vee(b\vee c)=(a \vee b)\vee c$, $a \wedge(b\wedge c)=(a \wedge b)\wedge c$
• Distributive laws: $a\wedge(b \vee c)=(a \wedge b)\vee(a\wedge c)$, $a\vee(b \wedge c)=(a\vee b)\wedge(a\vee c)$
• De Morgan’s laws: $\neg(a \vee b)=\neg a \wedge \neg b$, $\neg(a \wedge b)=\neg a \vee \neg b$.
• Sub-expressions elimination. You can rewrite $x=(c \wedge(a \vee b)) \oplus (d \wedge(a \vee b))$ as $t=a \vee b$, $x=(c \wedge t)\oplus(d \wedge t)$
• And other identities you will discover.
Using these laws, you are supposed to massage your expression until you discover a shorter one that computes the same truth values as the original expression. Or you can use a truth table and see what expressions are equivalent to some of the sub-expressions and how you can combine them to obtain an equivalent but shorter boolean expression. All you need is a pen, a piece of paper, coffee, and a lot of patience. While I make this sound like a tedious process, you will rapidly get the knack of it and you will find minimizing expressions of a few variables quite easy.
Let us take an example. You may happen to write something like (I do, once in a while):
```if ( current->next==0
|| (current->next!=0 && SomeThing()) )
{
...
}
else ...
```
The conditional statement is equivalent to $\neg a \vee (a \wedge b)$, where $a$ is current->next!=0, and $b$ is SomeThing(). Using the truth table approach (a favorite of textbooks), you write:
| $a$ | $b$ | $\neg a$ | $a \wedge b$ | $\neg a \vee (a \wedge b)$ | $\neg a \vee b$ |
|-----|-----|----------|--------------|----------------------------|-----------------|
| 0 | 0 | 1 | 0 | 1 | 1 |
| 0 | 1 | 1 | 0 | 1 | 1 |
| 1 | 0 | 0 | 0 | 0 | 0 |
| 1 | 1 | 0 | 1 | 1 | 1 |
You eventually find the last column by trying numerous combinations (OK, in this case it's not all that difficult, but you get the point) and discover that $\neg a \vee (a \wedge b) \equiv \neg a \vee b$. So you rewrite the above code as:
```if ( current->next==0 || SomeThing() )
{
...
}
else ...
```
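If you would rather not stare at the truth table, a tiny brute-force check settles the equivalence by enumerating all assignments. This snippet is an illustration added here, not part of the original post:

```
#include <cstdio>

// Verify that !a || (a && b) is equivalent to !a || b
// by enumerating all 4 assignments of (a, b).
int main() {
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b) {
            bool original   = !a || (a && b);
            bool simplified = !a || b;
            if (original != simplified) {
                std::printf("differ at a=%d b=%d\n", a, b);
                return 1;
            }
        }
    std::printf("expressions are equivalent\n");
    return 0;
}
```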
The textbook approach to minimization, however, doesn’t tell the whole story: it considers that all operators, $\vee$, $\wedge$, $\neg$, $\oplus$, etc. have an equal cost; namely zero. However, in real life, evaluating a boolean expression does incur a time cost. First, each variable involved in the expression must be assigned a truth value, which may be a constant but may also be the result of a computationally expensive function. Second, the expression itself must be evaluated, and the cost of the evaluation is not null. It is proportional to the number of operators involved.
Using boolean simplification and common sub-expression elimination will help a lot to reduce the computational cost of the evaluation by not evaluating costly sub-expressions more than once, but we can go further. Let us note that it is not always necessary to evaluate the entire expression to ascertain its truth value. For example, the expression $a \vee (b \wedge c)$ is true if $a$ is true, regardless of the value of $(b \wedge c)$, so it is safe to abort the evaluation of the expression whenever $a$ is true. If $a$ is false, then we proceed to the evaluation of $(b \wedge c)$. Similarly, the expression $a \wedge (b \vee c)$ is false whenever $a$ is false, regardless of the value of $(b \vee c)$. The sub-expression $(b \vee c)$ is evaluated if, and only if, $a$ is true.
Some (most?) programming languages make use of this observation to speed up the evaluation of conditional expressions. In Simula, for example, the short-cut evaluation is explicitly controlled by the programmer via the logical operators "and then" and "or else". In Turbo Pascal, compiler switches, {\$b+} and {\$b-}, would instruct the compiler to generate code for complete or truncated expression evaluation. C and C++ use short-cut evaluation in all cases.
All C/C++ programmers use short-cut boolean expression evaluation to conditionally enable parts of expressions. Indeed, it is not uncommon to see code that looks like
```if (p && p->count)
{
// do stuff with p because
// p is not null
}
else
{
// do something else
}
```
The first part of the expression prevents the second part p->count from being evaluated if p is null. However, I think not all programmers explicitly use this feature to speed up tests by carefully resequencing/reorganizing the conditional expressions. Consider the following piece of code:
```if ( really_slow_test(with,plenty,of,parameters)
&& slower_test(with,some,parameters)
&& fast_test // with no parameters
)
{
...then code...
}
```
This code first evaluates an expensive function and then, on success, proceeds to evaluate the remainder of the expression. But even if the first test fails and the evaluation is short-cut, there's a significant performance penalty because the fat really_slow_test(...) is evaluated. While retaining program correctness, one can rearrange the expression as follows:
```if ( fast_test
&& slower_test(with,some,parameters)
&& really_slow_test(with,plenty,of,parameters) )
{
...then code...
}
```
Now, the code is much faster because fast_test costs very little compared to the other two parts of the expression, and we proceed to evaluate really_slow_test(...) only if the two previous tests succeed. Ideally, the expressions should lend themselves to such reordering. In practice, there might be dependencies in the tests that force a natural ordering, regardless of which test is expensive or not. Think of our previous example with p and p->count; clearly, inverting the expression is not going to work. In many other cases, however, one can massage the expression to simplify it and to move the easiest, fastest, most likely to fail (for &&) or to succeed (for ||) tests to the front, allowing the expression's value to be ascertained as fast as possible while still being correct.
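The intuition behind such reordering can be quantified: for an &&-chain evaluated left to right, test $i$ runs only if all earlier tests passed, so the expected cost is $c_1 + p_1(c_2 + p_2(c_3 + \ldots))$, where $c_i$ is the cost of test $i$ and $p_i$ its probability of being true. A small sketch, with made-up costs and probabilities (this is an added illustration, not part of the original post):

```
#include <cstdio>

// Expected cost of short-circuit evaluation of an &&-chain, left to right:
// test i runs only if all previous tests were true.
double expectedCost(const double cost[], const double pTrue[], int n) {
    double total = 0.0, reach = 1.0;   // reach = probability we get to test i
    for (int i = 0; i < n; ++i) {
        total += reach * cost[i];
        reach *= pTrue[i];
    }
    return total;
}

int main() {
    // Hypothetical costs (microseconds) and probabilities of passing.
    double slowFirst[] = { 500.0, 50.0, 1.0 };   // really_slow, slower, fast
    double fastFirst[] = { 1.0, 50.0, 500.0 };   // same tests, cheapest first
    double pTrue[]     = { 0.5, 0.5, 0.5 };
    std::printf("slow test first : %.1f us\n", expectedCost(slowFirst, pTrue, 3));
    std::printf("fast test first : %.1f us\n", expectedCost(fastFirst, pTrue, 3));
    return 0;
}
```

With these hypothetical numbers, putting the cheap test first cuts the expected cost by more than a factor of three.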
The use of fail- or succeed-fast testing is, I think, a good, simple practice that does not endanger code legibility and that does not lean too far on the side of premature optimization (and all its evils), yet yields interesting performance gains whenever the conditional expressions aren't completely trivial.
This entry was posted on Tuesday, August 26th, 2008 at 22:59 pm and is filed under algorithms, C, programming. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.
### 6 Responses to Optimizing boolean expressions for speed.
1. Stephen says:
Another thing to consider is branching cost which easily runs into 10s of cycles.
On the assembly level if ( a && b && c ) { block } translates to
```if( a ) {
if (b) {
if( c) {
block } } }
```
If your tests are simple the penalties are proportionately larger. e.g.
```int a,b;
if( a == 0 && b == 0 ) // two branches
// can be better written as
if( (a | b) == 0 ) // one branch
```
2. Steven Pigeon says:
You’re right. Branchings ARE expensive. However, if cascades aren’t a frequent programming style, and they might not help legibility all that much.
The second trick you present is valid when all variables are already assigned a truth value, or have unit cost of initialisation. && translates to &, || to |, and using De Morgan's laws as you did allows one to transform ((a==0)&&(b==0))==1 into ((a!=0)||(b!=0))==0, which is, indeed, (a|b)==0. I was going to talk about this in Part II, thanks for the spoilers ;)
If you do not have exactly 0 or 1 as your variables' values, you can still demote them to zero anyway. Suppose you have ((a==3)&&(b==c))==1; you can rewrite this as (((a-3)==0)&&((b-c)==0))==1, which, applying De Morgan's law again, you transform to ((a-3)|(b-c)) == 0. All low-cost arithmetic, no extra jump due to short-cut evaluation.
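A quick sanity check of that last identity, added here for illustration (the value ranges are arbitrary):

```
#include <cstdio>

// Verify that (a==3 && b==c) is the same test as ((a-3) | (b-c)) == 0:
// a bitwise OR is zero exactly when both operands are zero.
int main() {
    for (int a = 0; a < 8; ++a)
        for (int b = -4; b < 4; ++b)
            for (int c = -4; c < 4; ++c) {
                bool branchy    = (a == 3) && (b == c);
                bool branchless = ((a - 3) | (b - c)) == 0;
                if (branchy != branchless) {
                    std::printf("mismatch a=%d b=%d c=%d\n", a, b, c);
                    return 1;
                }
            }
    std::printf("identical on the tested range\n");
    return 0;
}
```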
3. mohan says:
Can you please provide alternate boolean minimization techniques, other than the K-map and tabular methods?
4. Steven Pigeon says:
There are other techniques, but they are ultimately all very complex (in the computational complexity sense). It is an NP-complete problem; see The Complexity of Boolean Functions by Ingo Wegener.
A few videos of courses on the subject:
There are others linked in the "suggestions."
5. ticktock says:
Let me describe a "brute force" technique for boolean optimization I came up with. The concept is to calculate term evaluation speed and probability during runtime and adjust the order of evaluation dynamically. I'll apply it to your example:
```if ( fast_test
&& slower_test(with,some,parameters)
&& really_slow_test(with,plenty,of,parameters) )
{
...then code...
}
```
Here is the rewrite:
```// global
OptimizeExpression exp = new OptimizeANDExpression();
// condition evaluator
while(!exp.Evaluated())
{
switch(exp.Term)
{
case 0: exp.Term.val( fast_test );
break;
case 1: exp.Term.val( slower_test(bla, bla) );
break;
case 2: exp.Term.val( really_slow_test(bla, bla ,bla) );
break;
default: exp.Evaluated( true );
}
exp.NextTerm();
}
if( exp.isTrue() ) {
// .. then code ..
}
```
The cool stuff happens inside OptimizeExpression. The first iteration of the while loop constructs all the terms and captures the initial term timing. The start/end times of each expression are captured and sorted using some appropriate ranking ie. time/probability with a running average. The next iteration of the while loop will adjust the order of evaluation using the case statement assigned to each term. This executes the fastest terms with the highest probability of being false for AND expressions and true for OR expression. Considerations for using this are: 1) need to evaluate ALL terms in the expression to gather the statistics of unknown/changing execution times and probabilities 2) this expression has to be something you do a lot, ie. looping through this expression hundreds of times with different data. 3) the “while switch” construct should replace any “if” without scoping issues. 4) the term evaluation needs to be expensive to compensate for the overhead of this mechanism (although it has little impact I can see)
• Steven Pigeon says:
I would think it would be most useful when you gather the run-time behavior, then use the collected data to hard code the final result: like a profile-guided optimization. With gcc you can instrument the code to gather this data and the compiler is able to use it to generate better code.
Did you actually look at the generated code / measure the overhead?
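For readers who want to try the profile-guided route with gcc: the usual workflow is to compile with -fprofile-generate, run a representative workload, then recompile with -fprofile-use. A cruder manual alternative, sketched below with made-up predicate names (this is an added illustration), is to hard-code the observed branch bias with the GCC/Clang builtin __builtin_expect, which only hints the optimizer and never changes the value of the expression:

```
#include <cstdio>

// Stand-ins for a cheap and an expensive predicate (names are made up).
static bool cheap_check()     { return false; }
static bool expensive_check() { return true;  }

// __builtin_expect(expr, expected) is a GCC/Clang hint; it affects only
// the generated branch layout, not the result.
static bool combined_test() {
    if (__builtin_expect(!cheap_check(), 1))
        return false;                  // the common, fast exit
    return expensive_check();          // rarely reached
}

int main() {
    std::printf("%d\n", combined_test());
    return 0;
}
```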
Cancel | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 50, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9015762209892273, "perplexity_flag": "middle"} |
http://stats.stackexchange.com/questions/tagged/pdf?sort=unanswered&pagesize=15 | # Tagged Questions
PDF stands for Probability Density Function (as compared to CDF for Cumulative Distribution Function). The PDF of a variable gives the likelihood for each value of a continuous variable. Use this tag also for PMF (Probability Mass Function) the analog for discrete variables.
2answers
171 views
### Non-fair die - College Probability
How many times must you roll a non-fair die to be at least 84% sure that the sample probability will be within 3% from the actual probability. Since the die is not-fair, we do not know p. My question ...
2answers
67 views
### Compute cdf and quantile of a specific distribution
I want to calculate a quantile of a specific distribution. Therefore I need the cdf. My distribution is a standardized Student's-t distribution, this can be written as \begin{align*} f(l|\nu) =(\pi ...
1answer
349 views
### kurtosis and skewness - descriptive statistics
I would like to describe the "peakedness" and tail "heaviness" of several skewed probability density functions. The features I want to describe, would they be called "kurtosis"? I've only seen the ...
1answer
89 views
### Kernel density estimation on asymmetric distributions
Let $\{s_1,\ldots,s_N\}$ be a set of samples drawn from an unknown (but certainly asymmetric) probability distribution. I would like to find the probability distribution by using the KDE approach: ...
1answer
218 views
### Сonfidence interval of histogram probability density function estimator
There is a circle of radius $R$ and a sequence of points within it. I'm going to estimate a PDF of appearing at elementary area at the distance $D$ from the centre of the circle using a realization of ...
1answer
43 views
### Confidence bounds for PDF
I build confidence bounds for estimating PDF of the empirical sample using bootstrapping: ...
1answer
49 views
### Calculating Hellinger Divergence from Results of Kernel Density Estimates in Matlab
Using the ksdensity function in matlab returns a density estimation in the form of 2 vectors f and xi. Where f are the density values and xi the corresponding points for the density values. How do I ...
0answers
217 views
### Marginal distribution of the diagonal of an inverse Wishart distributed matrix
Suppose $X\sim InvWishart(\nu, \Sigma_0)$. I'm interested in the marginal distribution of the diagonal elements $diag(X) = (x_{11}, \dots, x_{pp})$. There are a few simple results on the distribution ...
0answers
81 views
### Joint distribution of two distances
Suppose there are three points in 3D space, each with coordinates $A_i=(X_i,Y_i,Z_i)\leadsto \mathcal{N}(\mu_i,\tau^2\mathbb{I}_3)$. We compute the distance between the three points, e.g. \$D_{ij} = ...
0answers
42 views
### What is the relationship between two points on probability density function?
The Wikipedia entry for Probability Density Function states that the PDF "describes the relative likelihood for this random variable to take on a given value." Two questions: Does that mean that ...
0answers
170 views
### Goodness-of-fit test without analytical PDF and CDF
I have closed form moment-generating function and characteristic function of a distribution, which describes waiting time of a continuous univariate random process. However, I cannot analytically ...
0answers
18 views
### Kernel density estimator that doesn't collapse in the tails
I have iid datapoints $x_1, \dots, x_n$, generated by an unknown density $f(x)$. So far I have approximated $f(x)$ with a normal $N(\hat{\mu}, \hat{\sigma}^2 )$, where $\hat{\mu}$ and $\hat{\sigma}^2$ ...
0answers
50 views
### Confusion related to histogram density estimation
I have some confusion related to how the density is estimated from the histogram. I have attached the screenshot of the paper as well. Any insights I didn't get why you divide it into cubes and why ...
0answers
100 views
### Kullback-Leibler vs Hellinger Distance
I am working on this problem in which I have a dataset of n-dimensional examples that come from different and unknown distributions. Given a new sample, I wish to find k examples from the dataset that ...
0answers
56 views
### Distribution/expected length of the shortest path in infinite random geometric graphs
Consider an infinite random geometric graph $G(\rho,d)$ in which vertices are uniformly and independently scattered over the 2D plane with density $\rho$ and edges connect the vertices that are closer ...
0answers
63 views
### How to explain what density is and the interpretation of the curve's height to non-statisticians
I tend to use histograms of continuous variables adding estimated density curves in order to compare several charts easily. However, I find difficulties when I try to explain what density is and the ...
0answers
36 views
### How to improve estimation of a deconvolved density
I have the following problem: Y = X + e, with Y = total reaction time (noisy signal), X = selection time (signal), e = discrimination time (noise). I am interested in the distribution for X and ...
0answers
64 views
### Density related to sparseness measure
Are there any multi-variate continuous distributions whose probability distribution functions give high values for sparse vectors and low values for dense vectors, i. e. indicating the sparseness of ...
0answers
16 views
### Comparing the accuracy of nested distributions: is the larger model always better?
I have a random variable $X$ with density function $f(x|\theta)$. I could approximate $f(x)$ using one of two density functions: $g(x|\phi)$ or $s(x|\psi)$. Suppose that $g(x|\phi)$ is nested inside ...
0answers
33 views
### Does the EM algorithm for mixtures still address the missing data issue?
There is a PDF $p(D| \theta)=p(X,Z| \theta)$ with observed values $X$ but also some missing or incomplete values $Z$ (for eg. resulting from censoring). The expectation-maximization (EM) algorithm is ...
0answers
71 views
### Interpretation of Conditional Density Plots
I would like to know how to correctly interpret conditional density plots. I have inserted two below that I created in R with cdplot. For example, is the ...
0answers
63 views
### Statistical test for two-factor heteregeneously distributed samples
Data I have a sample size of 50 in group A and 50 in group B where groups A and B are unmatched. Each sample in group A and group B has two frequencies associated with it, which I'll call $x$ and ...
0answers
41 views
### Dirichlet density with just one x value and one alpha parameter
Does it make any sense to apply the Dirichlet density function to only one x value and therefore one alpha parameter? (this would be the result of some bin merging) I'm asking because it appears ...
0answers
343 views
### What is the physical meaning of the probability density function and cumulative distribution function?
I have started research in Electronic Engineering, where PDF & CDF take a core part in most of the applications. I have studied books on probability where they have discussed the PDF & CDF ...
0answers
151 views
### Probability density function (pdf) of normal sample variance ($S^2$)
I need to know the formula for the pdf of $S^2$. I know this: $$\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1} \>,$$ but I want to state the correct formula for the pdf of $S^2$, not ...
0answers
152 views
### Modeling membership function given some survey data or empirical distribution
For example, I have a set of numbers (say 0 to 10) that are presented to 100 subjects. Each subject is asked whether the number is a small or a large number. The results are that 100 people think ...
0answers
21 views
### Estimation of conditional CDF vs PDF, practical differences
I'm trying to figure out where and when one would opt to work with the conditional cdf $F(y|X=x)$ rather than the pdf $f(y|X=x)$. I am thinking of $y$ as being a real valued response, and $x$ as being ...
0answers
27 views
### Joint PDF change of variables
I now understand how to conduct a change of variables for a marginal PDF. Now, given two functions that define parameter's spatially: $C_A(x)$ and $C_B(x)$, is it possible to construct the Joint PDF, ...
0answers
45 views
### Estimating probability density in Parzen windows
I came across an interesting paper about stability measure which can be used as evaluation metric for continuous data discretization. The stability measure is constructed from a series of estimated ...
0answers
34 views
### Flexible multivariate parametric density
Suppose I have observed a vector-valued data point $y_{obs}$ from a statistical model: $$y \sim f(\theta)$$ where $\theta$ are the unknown model parameters. I would like to estimate $\theta$, but ...
0answers
130 views
### Total area under any probability density function
What's the name of the theorem that tells us that the total area under any probability density function, discrete or continuous, equals 1? My stats book actually defines a PDF by requiring that ...
0answers
64 views
### Working with an arbitrary number of sample moments
The $n^{th}$ moment of a distribution can be estimated from a vector of samples $(x_1,x_2,...x_k)$ by: $$\sum_{i=1}^{k} x_i^n$$ Now, let's say I've calculated the first $m$ moments for my ...
0answers
187 views
### Some questions about confidence intervals, quantiles and cdf/pdf
My question is related to : Maximum likelihood estimation and the n-th order statistic estimation-and-the-n-th-order-statistic. I will try to answer the questions, and I would like to ask you guys to ...
0answers
81 views
### PDF Manipulation for Bayesian analysis
This post pertains to Bayesian pdf manipulation. Firstly, assuming a prior probability specified as Gamma distribution such that $\alpha = \mu_{0}^{2}/\sigma_{0}^{2}$ and \$\beta = ...
0answers
85 views
### Compare density estimate to true pdf
I'm trying to compare various density estimation methods. My dataset $D$ is generated from a fixed mixture of Gaussians (which allows me to estimate the true pdf $p(x)$). Then, I compute the estimated ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9087434411048889, "perplexity_flag": "middle"} |
http://mathhelpforum.com/discrete-math/203679-supremums-infimums.html | # Thread:
1. ## Supremums and Infimums
If possible find the supremum and infimum of the following sets.
a) (0, infinity)
b) {1/n: n is an element of the natural numbers}
c) rational numbers AND [sqrt(2), sqrt(3)]
d) {x is an element of the real numbers: x+2>x^2}
e) {1/n-1/m: n,m are elements of the natural numbers}
My answers so far:
a) supremum: none; infimum: 0
b) supremum: none; infimum: 1
c) supremum: none; infimum: none
d) supremum: none; infimum: 0
e) supremum: none; infimum: 0
I haven't had much practice with these, and I would appreciate any feedback!
2. ## Re: Supremums and Infimums
Originally Posted by lovesmath
If possible find the supremum and infimum of the following sets.
a) (0, infinity)
b) {1/n: n is an element of the natural numbers}
c) rational numbers AND [sqrt(2), sqrt(3)]
d) {x is an element of the real numbers: x+2>x^2}
e) {1/n-1/m: n,m are elements of the natural numbers}
My answers so far:
a) supremum: none; infimum: 0
b) supremum: none; infimum: 1
c) supremum: none; infimum: none
d) supremum: none; infimum: 0
e) supremum: none; infimum: 0
I haven't had much practice with these, and I would appreciate any feedback!
I agree with your answers to a, b and c.
For d) note that you have $\displaystyle \begin{align*} x + 2 > x^2 \end{align*}$, which means
$\displaystyle \begin{align*} x^2 &< x + 2 \\ x^2 - x - 2 &< 0 \\ x^2 - x + \left(-\frac{1}{2}\right)^2 - \left(-\frac{1}{2}\right)^2 - 2 &< 0 \\ \left(x - \frac{1}{2}\right)^2 - \frac{9}{4} &< 0 \\ \left(x - \frac{1}{2}\right)^2 &< \frac{9}{4} \\ \left| x - \frac{1}{2}\right| &< \frac{3}{2} \\ -\frac{3}{2} < x - \frac{1}{2} &< \frac{3}{2} \\ -1 < x &< 2 \end{align*}$
For e) I would argue that there is no infimum, because if $\displaystyle \begin{align*} n > m \end{align*}$ then $\displaystyle \begin{align*} \frac{1}{n} < \frac{1}{m} \end{align*}$ and therefore $\displaystyle \begin{align*} \frac{1}{n} - \frac{1}{m} < 0 \end{align*}$.
3. ## Re: Supremums and Infimums
Originally Posted by Prove It
For e) I would argue that there is no infimum, because if $\displaystyle \begin{align*} n > m \end{align*}$ then $\displaystyle \begin{align*} \frac{1}{n} < \frac{1}{m} \end{align*}$ and therefore $\displaystyle \begin{align*} \frac{1}{n} - \frac{1}{m} < 0 \end{align*}$.
Why does this mean that there is no infimum?
4. ## Re: Supremums and Infimums
Hey lovesmath.
Just a question regarding the infininum: does this refer strictly to the greatest lower bound?
The reason I ask is that if you have an open set and one endpoint, then if that end point is the lower point it means that the infinum won't actually exist you will never actually have a greatest lower bound.
You can think of it with say (0,infinity) where you can't have zero (since it's not included) but if you pick any value (say 0.0001) then you can always show there is a lower value (say 0.0000001) but you repeat this forever and you never actually get a fixed value.
5. ## Re: Supremums and Infimums
Originally Posted by chiro
Just a question regarding the infininum: does this refer strictly to the greatest lower bound?
The reason I ask is that if you have an open set and one endpoint, then if that end point is the lower point it means that the infinum won't actually exist you will never actually have a greatest lower bound.
You can think of it with say (0,infinity) where you can't have zero (since it's not included) but if you pick any value (say 0.0001) then you can always show there is a lower value (say 0.0000001) but you repeat this forever and you never actually get a fixed value.
It is not an "infininum" or an "infinum" but "infimum." A lower bound, including the greatest lower bound, does not have to belong to the set it bounds. If a set has a lower bound, then it has the greatest lower bound — this follows from one of the axioms of real numbers.
6. ## Re: Supremums and Infimums
An infimum requires an inequality of x >= a where a is your so called infimum.
If you are dealing with the real numbers and you have an open end-point, then the infimum doesn't exist.
7. ## Re: Supremums and Infimums
Originally Posted by chiro
If you are dealing with the real numbers and you have an open end-point, then the infimum doesn't exist.
Could you write your claim more precisely, using symbols and formulas instead of words?
8. ## Re: Supremums and Infimums
http://www.math.ualberta.ca/~bowman/m117/m117.pdf Page 30 at the top.
9. ## Re: Supremums and Infimums
Originally Posted by chiro
http://www.math.ualberta.ca/~bowman/m117/m117.pdf Page 30 at the top.
The only two claims I see in the top half of p. 30 are:
(1) A finite set always has a maximum element;
(2) [0, 1] has maximum element 1, but [0, 1) has no maximum element.
These claims are about a maximum element, not a supremum (or infimum). They are not the same.
The rest of the top half of p. 30 consists of definitions, not claims.
10. ## Re: Supremums and Infimums
The infimum is a definition.
On Page 30 it gave a definition for the infimum. Do you agree with it or not?
11. ## Re: Supremums and Infimums
You are saying strange things. In post #7 I asked you to state the following claim more precisely.
Originally Posted by chiro
If you are dealing with the real numbers and you have an open end-point, then the infimum doesn't exist.
You also said the following earlier.
Originally Posted by chiro
The reason I ask is that if you have an open set and one endpoint, then if that end point is the lower point it means that the infinum won't actually exist you will never actually have a greatest lower bound.
You can think of it with say (0,infinity) where you can't have zero (since it's not included) but if you pick any value (say 0.0001) then you can always show there is a lower value (say 0.0000001) but you repeat this forever and you never actually get a fixed value.
It seems that you are saying in these two quotes that $\inf\{x\in\mathbb{R}\mid 0 < x\}$ does not exist, which is incorrect. Therefore I asked you to clarify what your claim is, which you have not done. Instead, you referred me to a definition of infimum, which is indeed standard.
12. ## Re: Supremums and Infimums
So in this example where you have a set corresponding to all x, where x > 0, where x is any real number that even though there is no actual greatest lower bound, the infimum of this set is 0. Is this what you are saying?
13. ## Re: Supremums and Infimums
Originally Posted by chiro
So in this example where you have a set corresponding to all x, where x > 0, where x is any real number that even though there is no actual greatest lower bound, the infimum of this set is 0. Is this what you are saying?
The greatest lower bound and the infimum are the same thing, so one cannot exist without the other. For the set {x | 0 < x} both exist and are 0. The minimum of this set, on the other hand, does not exist because the infimum does not belong to the set.
14. ## Re: Supremums and Infimums
Well this is a little silly IMO if the infimum doesn't actually belong to the set (i.e. it's not an element of the set) in general.
It's not you: I've looked at what they consider this so called infimum to be but the idea of having a greatest lower bound where that element doesn't exist seems a misnomer.
I'd see it as more of an actual limit as opposed to an actual fixed bound and it's a little silly, but that's just me.
15. ## Re: Supremums and Infimums
Originally Posted by chiro
It's not you: I've looked at what they consider this so called infimum to be but the idea of having a greatest lower bound where that element doesn't exist seems a misnomer.
This is precisely the difference between the infimum and the minimum: minimum is the infimum that belongs to the set.
The name "greatest lower bound" does not suggest that it should be an element of the set. After all, a lower bound is just a number that bounds a set from below. For example, -5 is a lower bound of (0, ∞). The greatest lower bound is just the greatest of those bounds, i.e., the maximum of the set of lower bounds. It is not obvious that it exists because not every set, even bounded from above, has a maximum. Indeed, in rational numbers the greatest lower bound of the set $(\sqrt{2},\infty)$ does not exist. It is the completeness property of real numbers that guarantees that this maximum of lower bounds exists.
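To spell out the standard argument behind the $(0,\infty)$ example discussed in this thread (this derivation is an added illustration):

```latex
% Claim: inf (0, infinity) = 0, although min (0, infinity) does not exist.
% (1) 0 is a lower bound: every x in (0, infinity) satisfies x > 0 >= 0.
% (2) Nothing larger is a lower bound: if b > 0, then b/2 lies in
%     (0, infinity) and b/2 < b, so b does not bound the set from below.
% Hence 0 is the maximum of the set of lower bounds, i.e. the infimum;
% it is not an element of the set, so there is no minimum.
\[
  \inf\,(0,\infty) = 0, \qquad \min\,(0,\infty)\ \text{does not exist.}
\]
```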
As an answer to the original question, the supremum of $\{1/n-1/m\mid n,m\in\mathbb{N}\}$ is 1 and the infimum is -1. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9226995706558228, "perplexity_flag": "middle"} |
http://mathhelpforum.com/pre-calculus/165855-logs-what-b.html | # Thread:
1. ## Logs: What is b/a?
This isn't a homework question. I just want to know how to do this out of curiosity. It looks like something my teacher would put on a quiz, lol.
http://img228.imageshack.us/img228/1799/untitled8av.png
2. You need to find $\displaystyle\frac{b}{a}$ such that $\displaystyle\log_9a = \log_{12}b = \log_{16}(a+b)$
Now looking at the first part of the equation $\displaystyle\log_9a = \log_{12}b$
For these to be equal $\displaystyle a = 1$ and $b = 1$, are there any other values that work?
But does this hold for the third part of the equation?
3. . | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9564365148544312, "perplexity_flag": "head"} |
http://mathoverflow.net/revisions/100405/list | ## Return to Answer
2 I added some refrences
I think nobody pointed this problem, if it is repeated, please say me to delete it. This problem killed me for three weeks, when I was a young student in high school. So, I want to recall it again.
$Problem:$ Find all right triangles with rational sides, where the area of these triangles are integer?
I think it is still open problem and if somebody can solve it, I will give 100\$ as a small award.
After I searched, I found these two interesting sources. I hope it will be helpful.
1) N.Koblitz, Introduction to elliptic curves and modular forms, volume 97 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1993.
2) Washington, Lawrence C., Elliptic Curves : Number Theory and Cryptography, CRC Press Series On Discrete Mathematics and Its Applications
1 [made Community Wiki]
I think nobody pointed this problem, if it is repeated, please say me to delete it. This problem killed me for three weeks, when I was a young student in high school. So, I want to recall it again.
$Problem:$ Find all right triangles with rational sides, where the area of these triangles are integer?
I think it is still open problem and if somebody can solve it, I will give 100\$ as a small award. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9533961415290833, "perplexity_flag": "middle"} |
http://mathhelpforum.com/calculus/167502-telescopic-series-2.html | # Thread:
1. Although the problem has been completely solved I'd like to comment that in general it can be proved that if
$a_k=\varphi(k+q)-\varphi(k),\; (q\in \mathbb{N}^*)\;\textrm{and}\; L=\lim_{k \to{+}\infty} \varphi(k)\in \mathbb{R}$
then,
$S=\displaystyle\sum_{k=1}^{+\infty}a_k=qL-\varphi(1)- \varphi (2)-\ldots - \varphi (q)$
In our case, the series can be immediately written in the form:
$S=-\displaystyle\sum_{k=1}^{+\infty}\left(\dfrac{1/2}{k+2}-\dfrac{1/2}{k}\right)$
that is, $q=2,\;\varphi(k)=\dfrac{1}{2k}$. Then:
$S=-(2\cdot 0-\varphi(1)-\varphi(2))=\varphi(1)+\varphi(2)=\dfrac{1}{2}+\dfrac{1}{4}=\dfrac{3}{4}$
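As a quick numerical sanity check of the value $\dfrac{3}{4}$ (a small addition; the partial-sum cutoff is an arbitrary choice):

```python
# Partial sums of a_k = 1/(2k) - 1/(2(k+2)) should approach 3/4.
S = 0.0
for k in range(1, 100001):
    S += 1.0 / (2 * k) - 1.0 / (2 * (k + 2))
print(S)  # ~0.749995, since the tail still contributes about 1/(2(N+1)) + 1/(2(N+2))
```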
Fernando Revilla | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9624747633934021, "perplexity_flag": "middle"} |
http://mathhelpforum.com/advanced-math-topics/126128-finding-error-area-mandelbrot-set.html | # Thread:
1. ## Finding the error in area of mandelbrot set.
I'm not really sure where to post this...
Basically, I've used Maple to plot the Mandelbrot set.
The way it's done is that I have a 10000x10000 array that is filled with values between 0 and 1, where each value represents the colour of a pixel on a grid. 0 stands for a black pixel and is part of the set, 1 is white, and all other entries are a shade of grey depending on how many iterations they take to diverge.
To find the area I simply counted the number of 0's in the array (it was only different from the actual value by 0.002%). How would I go about finding the error in this? As in, I'm restricted by the number of iterations I can use, so some pixels that are black may NOT be black if I had a powerful enough computer so that I could increase the max number of iterations.
The value calculated by pixel counting (by using a massively more powerful computer than mine) is 1.50659177 $\pm$ 0.00000008.
The value I calculated was 1.506628382.
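For what it's worth, here is a minimal sketch of the pixel-counting estimate described above (my own illustration, not the original Maple code; the grid size, sampling window and iteration cap are assumptions):

```python
import numpy as np

# Sample the window [-2, 0.5] x [-1.25, 1.25], which contains the Mandelbrot set.
N, max_iter = 2000, 500
x = np.linspace(-2.0, 0.5, N)
y = np.linspace(-1.25, 1.25, N)
c = x[None, :] + 1j * y[:, None]
z = np.zeros_like(c)
inside = np.ones(c.shape, dtype=bool)      # "black" pixels: not yet escaped
for _ in range(max_iter):
    z[inside] = z[inside] ** 2 + c[inside]
    inside &= np.abs(z) <= 2.0
cell = (2.5 / N) ** 2                      # approximate area represented by one pixel
print(inside.sum() * cell)                 # roughly 1.50..., biased slightly upward
```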
2. The inaccuracies obviously occur near the boundary of the set. Can you modify your program so that it recognises when a black pixel has a non-black neighbour? There will presumably be comparatively few of these, and you could then increase the number of iterations for those pixels. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9566947221755981, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/67669/how-to-prove-two-equations-in-linear-algebra?answertab=oldest | # How to prove two equations in linear algebra
Given the following definition: for a real-valued function $f$ of a matrix $A$, the gradient $\nabla_A f(A)$ is the matrix whose $(i,j)$ entry is $\frac{\partial f}{\partial A_{ij}}$.

How to prove these two equations?

$\nabla_A \textrm{tr}(ABA^TC) = C^TAB^T + CAB$

and

$\nabla_A |A| = |A|(A^{-1})^T$
PS:

Actually, there are two identities proved just before these two in the notes (I have no problem with those two); maybe they serve as hints for solving the latter two.

I encountered this problem here (section 2.1, about pages 8-9).
-
## 2 Answers
By writing it out in index notation (personal preference), the first equation is simple application of the product rule (see Einstein notation)
\begin{align} \nabla_{A_{ij}} \delta_{kl} A_{km}B_{mn} A^T_{np} C_{pl} &= \nabla_{A_{ij}} A_{km}B_{mn} A_{pn} C_{pk} \\ &= (\nabla_{A_{ij}} A_{km})B_{mn} A_{pn} C_{pk} + (\nabla_{A_{ij}}A_{pn}) A_{km}B_{mn}C_{pk} \\ &= (\delta_{ik}\delta_{jm})B_{mn} A_{pn} C_{pk} + (\delta_{ip}\delta_{jn}) A_{km}B_{mn} C_{pk} \\ &= B_{jn} A_{pn} C_{pi} + A_{km} B_{mj} C_{ik} \\ &= C^T\cdot A\cdot B^T + C\cdot A \cdot B \end{align}
as for the second equation, I've only seen it derived by rewriting $|A|$ in terms of its eigenvalues and doing some tricks or Jacobi's formula. I don't think the two preceding equations give you much to work with here.
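As a numerical sanity check of the first identity (my addition, assuming NumPy), one can compare a central-difference gradient of $f(A)=\textrm{tr}(ABA^TC)$ with $C^TAB^T+CAB$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C = rng.standard_normal((3, n, n))

f = lambda M: np.trace(M @ B @ M.T @ C)

h = 1e-6
num = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = h
        num[i, j] = (f(A + E) - f(A - E)) / (2 * h)   # central difference in A_ij

exact = C.T @ A @ B.T + C @ A @ B
print(np.allclose(num, exact, atol=1e-4))  # True
```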
-
Let $X=Y=A$. We have $(\ast): \textrm{tr} XBY^TC = \textrm{tr} CXBY^T = \textrm{tr} (CXBY^T)^T = \textrm{tr} Y(CXB)^T$. So $$\begin{eqnarray*} \nabla_A \textrm{tr} ABA^TC &=& \nabla_X \textrm{tr} XBY^TC + \nabla_Y \textrm{tr} XBY^TC \quad\textrm{(by chain rule)}\\ &=& \nabla_X \textrm{tr} X(BY^TC) + \nabla_Y \textrm{tr} Y(CXB)^T\quad(\textrm{by }(\ast))\\ &=& (BY^TC)^T + CXB = C^TAB^T + CAB \end{eqnarray*}$$ and we get the first result. The second result is more straightforward. Recall that for any fixed $i$, by Laplace expansion, we have $\det A=\sum_j (-1)^{i+j}A_{ij}M_{ij}(A)$, where $M_{ij}(A)$ denotes the $(i,j)$-minor of $A$. Since the computation of $M_{ij}(A)$ does not involve $A_{ij}$, we have $\frac\partial{\partial A_{ij}}\det A=(-1)^{i+j}M_{ij}(A)=C_{ji}(A)$, where $C_{kl}(A)$ denotes the $(k,l)$-cofactor of $A$. Hence $\nabla_A\det(A)=\textrm{adj}(A)^T=(\det A)(A^{-1})^T$.
-
thx, but can you explain what `chain rule` is? I think that is the obstacle for me to understand. – xiaohan2012 Sep 26 '11 at 16:05
1
Here chain rule means that for the function $g(X,Y)=\textrm{tr}XBY^TC$, we have $\frac{\partial g}{\partial a_{ij}} = \frac{\partial g}{\partial x_{ij}}\frac{dx_{ij}}{\partial a_{ij}} + \frac{\partial g}{\partial y_{ij}}\frac{dy_{ij}}{\partial a_{ij}}$. As $a_{ij}=x_{ij}=y_{ij}$, we get $\frac{\partial g}{\partial a_{ij}} = \frac{\partial g}{\partial x_{ij}} + \frac{\partial g}{\partial y_{ij}}$. This holds for each pair of $(i,j)$. Hence $\nabla_Ag=\nabla_Xg+\nabla_Yg$. – user1551 Sep 26 '11 at 18:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8374432921409607, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/96069/status-of-pl-topology/96164 | ## Status of PL topology
I posted this question on math stackexchange but received no answers. Since I know there are more people knowledgeable in geometric and piecewise-linear (PL) topology here, I'm reposting the question. I'd really want to know the state of the question, since I'm self-studying the material for pleasure and I don't have anyone to talk about it. Please feel free to close this post if you think the topic is not appropriate for this site.
I'm starting to learn about geometric topology and manifold theory. I know that there are three big important categories of manifolds: topological, smooth and PL. But I'm seeing that while topological and smooth manifolds are widely studied and there are tons of books about them, PL topology seems to be much less popular nowadays. Moreover, I saw in some place the assertion that PL topology is nowadays not nearly as useful as it used to be to study topological and smooth manifolds, due to new techniques developed in those categories, but I haven't seen it carefully explained.
My first question is: is this feeling about PL topology correct? If it is so, why is this? (If it is because of new techniques, I would like to know what these techniques are.)
My second question is: if I'm primarily interested in topological and smooth manifolds, is it worth to learn PL topology?
Also I would like to know some important open problems in the area, what problems mathematicians in this field are working on nowadays (if it is still an active field of research), and some recommended references (textbooks) for a beginner. I've seen that the most cited books in the area are from the '60s or '70s. Is there any more modern textbook on the subject?
Thanks in advance.
-
3
math.stackexchange.com/questions/70634/… addresses some of these questions. – Daniel Moskovich May 5 2012 at 15:06
3
I like the unnumbered questions in the end, but otherwise the question looks somewhat rhetorical and seems to call for a heated debate. If I'm primarily interested in programming, is it worth to learn mathematics? I heard that math is not nearly as useful as it used to be in computer science, due to new techniques developed in that subject. Pathetic, isn't it? And those books cited by mathematicians, some of them are so old! – Sergey Melikhov May 5 2012 at 16:30
4
@Daniel: Thanks very much! @Sergei: I get your point, but I think that it's not the same case as your analogy. Maybe I should put the question this way: is or is not PL topology an integral part of the education of every geometric topologist today? And concerning the books, we all know that subjects in mathematics change, and some great textbooks of the past are not well suited to the present status of the area, because of changes of emphasis or the discovery of new techniques that make life easier. So I'm asking about "newer" books to know if there are references more suited to present-day PL topology. – Carlos Sáez May 5 2012 at 16:47
1
Even if you only care about smooth manifolds, I think it's worth having some familiarity with the language and basic ideas: some important isotopy/embedding theorems (e.g. Hudson) have written proofs in the literature only for PL manifolds but also hold in the smooth case. If you want to tweak these proofs maybe it's useful to speak the language. – Jonny Evans May 5 2012 at 21:15
4
PL topology is nowadays old-fashioned because of its difficulty, as often happens in math. Nevertheless, it's not uncommon that after decades a smart guy comes along with striking new discoveries and brings it back to the mainstream. I hope this happens to PL topology! – Fernando Muro Nov 7 at 7:00
## 6 Answers
Maybe I should put the question this way: is or is not PL topology an integral part of the education of every geometric topologist today?
According to a recent poll by the Central Planning Committee for Universal Education Standards, some geometric topologists don't have a clue about regular neighborhoods, while others haven't heard of multijet transversality; but they all tend to be equally excited when it comes to Hilbert cube manifolds.
some recommended references (textbooks) for a beginner
Rourke-Sanderson, Zeeman, Stallings, Hudson,
L. C. Glaser, Geometrical combinatorial topology (2 volumes)
Is there any more modern textbook on the subject?
Not really (as far as I know), but some more recent books related to PL topology include:
Turaev, Quantum invariants of knots and 3-manifolds (chapters on the shadow world)
Kozlov, Combinatorial algebraic topology (chapters on discrete Morse theory, lexicographic shellability, etc.)
Matveev, Algorithmic topology and classification of 3-manifolds
2D homotopy and combinatorial group theory
Daverman-Venema, Embeddings in manifolds (about a third of the book is on PL embedding theory)
Benedetti-Petronio, Branched standard spines of 3-manifolds
Buchstaber-Panov, Torus actions and their applications in topology and combinatorics
Buoncristiano, Rourke, and Sanderson, A geometric approach to homology theory (includes the PL transversality theorem)
The Hauptvermutung book
Buoncristiano, Fragments of geometric topology from the sixties
Also I would like to know some important open problems in the area, in what problems are mathematicians working in this field nowadays
I'll mention two problems.
1) Alexander's 80-year-old problem of whether any two triangulations of a polyhedron have a common iterated-stellar subdivision. They are known to be related by a sequence of stellar subdivisions and inverse operations (Alexander), and to have a common subdivision (Whitehead). However, the notion of an arbitrary subdivision is an affine, and not a purely combinatorial, notion. It would be great if one could show at least that for some family of subdivisions definable in purely combinatorial terms (e.g. replacing a simplex by a simplicially collapsible or constructible ball), common subdivisions exist. See also remarks on the Alexander problem by Lickorish and by Mnev, including the story of how this problem was thought to have been solved via algebraic geometry in the 90s.
2) MacPherson's program to develop a purely combinatorial approach to smooth manifold topology, as attempted by Biss and refuted by Mnev.
-
Thanks for the answer, specially the great list of references. – Carlos Sáez May 5 2012 at 21:31
I'd like to address another aspect of your questions. My feeling is that PL topology, or smooth topology, are foundational subjects to the low dimensional topologist, in the sense that set theory is a foundational subject to most mathematicians. A large proportion of low dimensional topologists use the foundational theorems in PL topology as black boxes, certainly without understanding or having read the proofs, and in fact they can do good mathematics that way. In the smooth category, the situation is even worse- I'm sure that there are very few people in the world who understand the proof of Kirby's Theorem, which is a difficult result, but it gets used all over low dimensional topology as a black box. Indeed, the fact that a diffeomorphism of $S^2$ extends to the $3$--ball is fundamental, under the hood everywhere, and highly non-trivial.
So you can be a manufacturer, or you can be a consumer. As a consumer, maybe you don't need to know PL topology beyond the basics that you need in order to understand simplicial homology and other basic constructions. A more sophisticated consumer might need more- I don't for example know a concrete smooth construction of linking pairings (the PL construction is in Schubert)- and in general, cell complexes allow you to work explicitly and concretely. PL proofs, if you read and care about proofs of fundamental results, tend to be shorter and easier than smooth proofs, which is not surprising because a priori there is so much less structure which has to be carried around. This was indeed why Poincaré first considered triangulated manifolds: because of the technical facility which they afforded him. As a counter-point, I should point out Smale's comment in the introduction to his 1963 paper A survey of some recent developments in differential topology (which I recommend that you read, as it discusses your question):
It has turned out that the main theorems in differential topology did not depend on developments in combinatorial topology. In fact, the contrary is the case; the main theorems in differential topology inspired corresponding ones in combinatorial topology, or else have no combinatorial counterpart as yet...
Another aspect, which is not to be sneezed at in today's world, is that PL manifolds are better suited to computers. This is indeed the focus of Matveev's book on "algorithmic topology".
Finally, as a PL question, I nominate:
Open problem: Construct a discrete $3$-dimensional Chern-Simons theory, compatible with gauge symmetry, replacing the path integrals of the smooth picture (which are not mathematically well-defined) with finite dimensional integrals.
-
1
We should ask Smale how he would prove, at the time of writing the quote, that fibers of generic smooth maps are homotopy equivalent to CW-complexes. The only proof known to the MO community (mathoverflow.net/questions/94404) is based on the (trivial!) combinatorial counterpart of this statement, and didn't appear until Thom's conjecture on triangulation of smooth maps was proved by Andrei Verona in 1984. – Sergey Melikhov May 8 2012 at 2:50
@Sergey: That's a good point, but I think that it's fair to suppose that he would not have considered that to be one of the main theorems of differential topology. Indeed, I would argue that cell complexes are part of the PL world, and that the smooth world is more about handles. – Daniel Moskovich May 8 2012 at 13:51
2
Daniel: then let's replace "are homotopy equivalent to CW-complexes" by "have finitely generated cohomology". (The domain and the range are closed smooth manifolds.) – Sergey Melikhov May 8 2012 at 22:48
3
As a counterpoint to the Open Problem at the end, since Poincare himself many folks have tried to find a combinatorial proof that every closed simply connected 3-manifold is homeomorphic to the 3-sphere. They may still be trying... – Lee Mosher May 9 2012 at 4:06
Some points I didn't see mentioned above: the basic results of geometric topology (tubular neighborhood theorem, transversality, etc.) have easy smooth proofs, somewhat technical PL proofs, and difficult (Kirby-Siebenmann + surgery theory) TOP proofs. Historically TOP came after the development of Smooth and PL, but in the end, the formalism in high dimensions was entirely encoded in the algebraic topology of the classifying spaces $B\mathrm{Diff}=B\mathrm{O}$, $B\mathrm{PL}$, $B\mathrm{TOP}$. The bottom line is that many high dimensional problems can be "reduced" to the algebraic topology of these classifying spaces, and so it isn't that PL isn't interesting, just that it can be treated (say in surgery theory, or smoothing theory) on an equal footing with the other two, as a black box, without really knowing anything specific about the nuts and bolts of PL topology (just as you can understand most smooth topology without knowing a careful proof of the implicit function theorem).
Following the success of high dimensional topology, the focus in geometric topology shifted to low dimensions starting in the early 1980s, and as Dylan comments there is no difference between PL and Diff in low dimensions, so that the more familiar smooth methods suffice, and more recently trained topologists have no reason to study PL methods if their focus is on low dimensions.
As a topology student, it is probably good for you to have some familiarity with the surgery exact sequence, $$\mathcal{S}_{PL}(X)\to [X,G/PL]\to L(\pi_1(X))$$ and its counterparts with PL replaced by Diff or TOP (i.e. what the objects and maps are in this sequence). Knowing the early big successes in your area will give you a better appreciation of what is happening in it now.
-
2
Paul, your opinions on easy and hard proofs of the "etc." results, and on the needs of younger topologists, are perfect examples of the long expected heated debate... I'll leave it there, but how about the presumably less controversial issue of whether basic results of smooth topology can be proved at all (easy or hard) without using PL topology? In particular, that fibers of generic smooth maps between smooth manifolds are "homotopy equivalent to CW-complexes"? (This was a recent MO question, mathoverflow.net/questions/94404) – Sergey Melikhov May 7 2012 at 2:32
2
Even the non-"etc." examples are not so plain. The PL analogue of Sard's theorem is certainly easier. It says that if you take any point $p$ in the interior $U$ of a top-dimensional simplex in the range of a simplicial map $f$, then $f^{-1}(U)$ is PL homeomorphic to $U\times f^{-1}(p)$, and this is trivial to prove (as opposed to the full PL transversality). The existence of a regular neighborhood is a tautology (with definitions as in the Rourke-Sanderson book), but that of a tubular neighborhood needs proof; like other smooth proofs and unlike PL proofs, it depends on years of Calculus... – Sergey Melikhov May 7 2012 at 3:07
3
Let us not forget that smooth codimension $k$ embeddings have normal bundles with structure group $\text{O}(k)$, whereas this analogy does not hold in the PL-case (one has to use block bundles instead). – John Klein May 7 2012 at 17:52
John: there is a view (expressed already in the Rourke-Sanderson Annals paper series) that it is block bundles that are the right notion of a bundle in the PL category. For instance I wonder how you would do something beyond definitions with PL bundles, like Euler or Stiefel-Whitney classes or the umkehr map, without using the theory of block bundles. For block bundles these things are done in the Bouncrisiano-Rourke-Sandrson book. – Sergey Melikhov May 8 2012 at 1:40
1
Sergey: if one wants to understand automorphisms of PL manifolds, one cannot dispense with $PL(k)$. On the other hand, there's a sense in what you've claimed: notice $O(k+1)/O(k) = S^k$, and as $k$ varies, this gives the sphere spectrum (this is responsible for our understanding of the the Euler class). However, the spectrum associated with $PL(k+1)/PL(k)$ is very complicated: it's Waldhausen's $A(∗)$! On the other hand, the spectrum associated with $\widetilde{PL}(k+1)/\widetilde{PL}(k)$ is much easier: by Haefliger it's the sphere spectrum. – John Klein May 9 2012 at 13:20
Disclaimer: What follows is probably a bit off-topic for this site, but no more than the original questions, numbered one and two. In fact I suspect that this answer attempts to address just what the OP really wanted to ask ("isn't PL topology useless?") by posting those two lightly euphemistic questions. If there was an active meta thread for closing this question, I'd rather put this answer there.
Some topologists, perhaps the majority, tend to think that smooth and topological manifolds are "present in nature" and are the genuine objects of study in geometric topology, while PL topology is a somewhat artificial, unnatural construct, and matters just as long as it is helpful for the "real" topology. I've heard this opinion stated explicitly once, and I see a lot of this kind of attitude in this thread. In fact I think this philosophy/intuition is sufficiently familiar to nearly everyone that I don't need to elaborate on it. Moreover, I suspect that a lot of people are not even aware that it is not the only possible religion for a topologist, or else they would be more considerate to the heretics in stating their strong opinions.
I'd like to discuss one other philosophy/intuition then, according to which both smooth and topological manifolds are obviously artificial, highly deficient models for what could be "present in nature", whereas the PL world is much "closer to the reality". I don't consider myself a practitioner of this or any other religion; what follows should be regarded as said by a fictional character, not by the author.
1) As is well-known, the predisposition to seeing continuous and smooth as more natural than discrete is historical, following centuries of preoccupation with derivatives and (later) limits. Quantum physics and computer science may be changing the tide, but they don't usually compete with Calculus in a mathematician's education, at least not in the initial years.
Here is a simple test. When you fold a sheet of paper, what is the intuitive model in your imagination: is it a smooth surface (when you look with a loupe at the fold), a cusp-like singularity (generic smooth singularity), or an angle-like singularity (PL singularity)? No matter what your subconscious preference is, I bet you didn't base it on considerations of individual photons detected by the eye. But you could have based it on your previous experience with abstract models of surfaces, which is not independent of the historically biased education. (Just for fun, I wonder if your intuitive model would change if the paper sheet is folded a second time so as to make a corner - which is unstable as a singularity of a smooth map $\Bbb R^2\to\Bbb R^2$, but has a stable singularity in the link.)
2) On a molecular scale, the sheet of paper of course doesn't fit the model of a smooth surface, and although it is arguably not "discrete" or "PL" on a subatomic scale, the smooth surface model isn't restored either. Similarly, as is well-known, Maxwell equations and general relativity (which I guess are among the best reasons to study smooth topology) don't work at very small scales. The problem is that this "imperfection" of matter doesn't usually shake one's belief in "perfect" physical space. But it is perfectly consistent with modern physics (for those who don't know) that physical space is kind of discrete at a sub-Planck scale, as in loop quantum gravity (which is somewhat reminiscent of PL topology!). It is also consistent with the present day knowledge, and indeed derivable in variants of the competing string theory, that a finite volume of physical space can only contain a finite amount of information, as with the holographic principle. (In fact I didn't see much discussion of possible alternatives to this principle, many physicists appear to take it for granted.) I'm getting on a slippery slope, but finite information does not sound like it could be compatible with limits that occur in derivatives (which returns us to MacPherson's program on combinatorial differential manifolds) and especially with Casson handles that occur in topological manifolds.
The fictional character is now saying that his religion teaches him to avoid concepts based on inherently infinitary constructions, because they are likely to be unnatural, in the sense of the physical nature which might simply have no room for them (and even the question of whether it does is not obviously meaningful!). Ironically, this is quite in line with Poincare's philosophical writings, where he argued at length that the principle of mathematical induction is not an empirical fact.
3) The fictional character goes on to say that this is not just the crazy metaphysics that displays the warning, but also Grothendieck with his "tame topology" which inspired a whole area in logic (initiated by van den Dries' book Tame topology and o-minimal structures). Here is a short quote from Grothendieck:
It is this [inertia of mind] which explains why the rigid framework of general topology is patiently dragged along by generation after generation of topologists for whom "wildness" is a fatal necessity, rooted in the nature of things.
My approach toward possible foundations for a tame topology has been an axiomatic one. Rather than declaring [what] the desired “tame spaces” are ... I preferred to work on extracting which exactly, among the geometrical properties of the semianalytic sets in a space $\Bbb R^n$, make it possible to use these as local "models" for a notion of "tame space" (here semianalytic), and what (hopefully!) makes this notion flexible enough to use it effectively as the fundamental notion for a “tame topology” which would express with ease the topological intuition of shapes.
Grothendieck dismisses from the start PL and smooth topology as possible forms of tame topology, because
(i) they're "not stable under the most obvious topological operations, such as contraction-glueing operations", and
(ii) they're not closed under constructions such as mapping spaces, "which oblige one to leave the paradise of finite dimensional spaces".
I'm not familiar with "contraction-glueing operations", nor is Google. Perhaps someone fluent in French could explain what (i) is supposed to mean? My first guess would be that this could refer to mapping cylinder, mapping cone or other forms of homotopy colimit, but PL topology is closed under those (finite homotopy colimits).
Edit: Indeed, it is clear from the preceding pages that by "gluing" Grothendieck means the adjunction space, which he also calls "amalgamated sum". In particular, he says:
It was also clear that the contexts of the most rigid structures which existed then, such as the "piece-wise linear" context were equally inadequate – one common disadvantage consisting in the fact that they do not make it possible, given a pair $(U,S)$ of a "space" $U$ and a closed subspace $S$, and a glueing map $f: S\to T$, to build the corresponding amalgamated sum.
There is, of course, no problem with forming adjunction spaces in the PL context. Perhaps Grothendieck was just not aware of pseudo-radial projection or something. End of edit
As to (ii), there now exists some kind of an infinite-dimensional extension of PL topology, which includes mapping spaces and infinite homotopy colimits up to homotopy equivalence (and hopefully up to uniform homotopy equivalence, which would be more appropriate in that setup). Besides, there are, of course, Kan sets, which are closed under Hom, but they arguably don't belong to tame topology in any reasonable sense because they quickly get uncountable (in every dimension, in particular, there are uncountably many vertices) and even of larger cardinality.
In any case, logicians, who tried to set up Grothendieck's aspiration in a rigorous framework of definability (see Wilkie's survey), do now have the "o-minimal triangulability and Hauptvermutung" theorem, saying roughly that tame topology (as they understood it) is the same as PL topology. Still more roughly (perhaps too roughly), it could be restated as "topology without infinite constructions is the same as PL topology".
Even if smooth topology will some day be reformulated in purely combinatorial terms, it is highly unlikely that it can be characterized by purely logical constraints. From this viewpoint, smooth topology is primarily justified by its role in applied math and natural sciences, but is no less and no more fundamental than symplectic topology or topology of hyperbolic manifolds.
-
One thing meant by "gluing", I suppose, is that extra information (collars) is needed to specify the result of gluing smooth manifolds along a boundary. But anyway, my fictional character argues for smooth topology: Only natural numbers are "real". Calculus is "real", because it provides a low algorithmic complexity setting to answer natural number problems. Smooth manifolds are real because they are spaces on which calculus can be performed. PL manifolds make sense as discrete models for smooth manifolds; or else you need to argue that they have "independent" existence. – Daniel Moskovich May 9 2012 at 7:32
... (cont.) So there is a lot of interesting structure (geometry, flows, analytic structure...) which you can impose on a smooth manifold. A manifold is a world to live in: clearly, a lot of dynamics can take place in smooth manifolds. But there seems to be no life on a PL manifold- it's a barren, cubist wasteland. Thus quoth my fictional character. IRL, I don't know, and I'm happy that there are adherents of both "religions" (and others besides) amongst topologists. – Daniel Moskovich May 9 2012 at 7:43
Daniel, thank you for feedback. (I'm still puzzled by Grothendieck's contraction-gluing; the result of gluing two PL manifolds along a PL homeomorphism of their boundaries doesn't need any extra information.) There are, of course, things like geometric structures on cell complexes (popular in geometric group theory, see e.g. the Bridson-Haefliger book), harmonic functions on simplicial complexes (see R.Forman's 1989 paper in Topology), combinatorial Gauss-Bonnet formula (see Yu Yan-Lin's 1983 paper in Topology), connections and parallel transport on PL manifolds (see M.A.Penna's 1978 paper... – Sergey Melikhov May 9 2012 at 9:42
...in Pacific J Math, and also arxiv.org/abs/math/0604173) and of course PL de Rham theory (see D. Lemann's 1977 paper in Asterisque, R.G.Swan's 1975 paper in Topology, Bousfield-Gugenheim 1976 AMS memoir). Of course, discrete analysis is motivated by the smooth case (Smale should have been more specific!) so no wonder it lags behind. The problem with PL topology is I think that it has indeed been largely deserted since 1970s and as a consequence is now underdeveloped and barely taught to students. I'm not sure that there's any internal reason for that, it could be entirely cultural. – Sergey Melikhov May 9 2012 at 10:05
PL topology is popular in quantum topology, where some invariants (e.g. Turaev-Viro) are defined by fixing a triangulation and then checking invariance under some standard moves.
-
10
It's worth commenting (for those that don't know) that PL topology is the same as smooth topology in low dimensions (up to 6). – Dylan Thurston May 6 2012 at 2:57
2
...which is a highly nontrivial fact (particularly Cerf's theorem, implying that smooth structures are unique on PL 4-manifolds and exist on PL 5-manifolds) better stated as "PL topology includes smooth topology in low dimensions" because PL topology is not just about PL manifolds but also about polyhedra (not to mention PL maps). Even that is not quite accurate, because families of low-dimensional smooth structures don't boil down to those of PL structures (see mathoverflow.net/questions/7892), so no wonder that Haefliger's smooth knots of $S^3$ in $S^6$ are trivial as PL knots. – Sergey Melikhov May 8 2012 at 4:05
... Bringing in the morphisms, PL maps include (by another highly nontrivial result) generic smooth maps, as well as smooth maps that belong to generic families, and the inclusion is strict (for maps between manifolds) starting from very low dimensions (2 to 1). Arbitrary smooth maps easily have arbitrary compact metric spaces as point-inverses, so I'm not sure if they belong to smooth topology. For sure, mapping cylinders of (very low-dimensional) generic smooth maps aren't smooth manifolds, but one can't deny their place in PL topology. – Sergey Melikhov May 8 2012 at 4:36
On a smooth manifold we have Ricci flow. What is the analogue for a PL manifold?
-
You should ask this as a separate question, rather than adding here. – arsmath Nov 7 at 13:10
2
@arsmath: I had the same initial reaction until I realized that the OP is also asking for a list of open problems in PL topology. Defining a combinatorial analogue of Ricci flow (say, in dimension 3) is a well-known open problem. If such a flow exists, it could lead to a more constructive proof of, say, the Poincaré conjecture. – Misha Nov 7 at 13:56
Indeed, for further related discussion see Bruce Westbury's own question mathoverflow.net/questions/65691/… – jc Nov 7 at 15:22
2
It would certainly have helped if Bruce had added a link to his own question, and/or described a little context. – S. Carnahan♦ Nov 8 at 8:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9534308910369873, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/115172/modular-arithmetic-and-congruences | # Modular Arithmetic and Congruences
I understand that $a \equiv b \pmod n$ means when you divide each $a$ and $b$ by $n$ you get the same remainder. But why do people say: "$a$ divided by $n$ gives you remainder $b$"?
They say that in the first 30 seconds of this video lecture http://www.youtube.com/watch?v=QXIlkq06Ct0&feature=youtube_gdata_player
Example
$12 \equiv 17 \pmod 5$
$12$ divided by $5$ has remainder $2$.

$17$ divided by $5$ has remainder $2$.

Neither satisfies "$a$ divided by $n$ gives you remainder $b$" (with $a=12$, $b=17$ or vice versa), so why do people sometimes say this?
-
2
You are running into two distinct uses of "mod". As a relation, it means your first definition. As an operator, $a\bmod n = b$ means the latter. – Arturo Magidin Mar 1 '12 at 5:49
The video has an equation x ≡ b (mod n) and they say x is a number that when divided by n gives you a remainder of b? Is this wrong? – rubixibuc Mar 1 '12 at 6:00
If $0\leq b \lt n$, then they are correct; they are not giving you the definition of $a\equiv b\pmod{n}$, they are telling you something else about the numbers in question. – Arturo Magidin Mar 1 '12 at 16:11
## 2 Answers
12 is the same as 2 $\bmod{5}$ and 17 is the same as 2 $\bmod{5}$
Let's say you have $a\equiv b\bmod{n}$. Then the numbers you are working with are basically from the set $\{0,1,2,3,\ldots,n-1\}$: the number $n\equiv 0\pmod n$, $n+1\equiv 1\pmod n$, $n+2\equiv 2\pmod n$, etc.
If two numbers $a$ and $b$ are related by $a\equiv b\bmod{n}$, then $(a-b)=nc$ for some $c\in \mathbb{Z}$, that is, $(a-b)$ is a multiple of $n$. So in your case above, $2\equiv2+5\equiv2+10\equiv2+15\equiv \cdots \bmod{5}$. So 2 is the same as 12, which is the same as 17 $\bmod{5}$.
When you have $a\equiv b \bmod{n}$, with $a>b$ and $b\in\{0,1,\cdots ,n-1 \}$, then in fact $a$ divided by $n$ has remainder $b$; this is the case in the video. When this is not the case, it causes confusion, as in your example. It would make sense to write $17\equiv 2 \bmod{5}$, however.
-
`\pmod{n}` produces $\pmod{n}$; it's the correct $\LaTeX$ code to use for the parenthetical one; for the binary operator, use `\bmod`. – Arturo Magidin Mar 1 '12 at 5:53
@ArturoMagidin, I am just learning latex, so thanks for the comment. What is the code for writing latex in the fancy font? – Edison Mar 1 '12 at 5:55
Why do people say a divided by n gives you a remainder of b though? – rubixibuc Mar 1 '12 at 5:58
@rubixibuc in the video, the b value is less than or equal to the n value, this is not the case with your example. – Edison Mar 1 '12 at 6:04
I assumed it was a rule, so it's just an exception in this case? – rubixibuc Mar 1 '12 at 6:05
$\rm\: a\equiv b\pmod n\iff n\ |\ a-b \iff a-b\: =\: nk\:$ for some $\rm\:k\in \mathbb Z$.
$\rm\: a\ mod\ n\: =\: b\iff a\equiv b\pmod n\:$ and $\rm\:0\le b < n.\:$ Therefore $\rm\:a\ mod\ n\:$ is the remainder upon dividing $\rm\:a\:$ by $\rm\:n,\:$ i.e. the least nonnegative element of the equivalence class $\rm\:a + n\:\mathbb Z\:$ of all integers congruent to $\rm\:a,\:$ modulo $\rm\:n,\:$ i.e. the unique element equivalent to $\rm\:a\:$ from the complete system of representatives $\rm\:\{0,1,2,\ldots,n-1\}$ of congruence classes modulo $\rm\:n.\:$ This is the use in the video.
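To see the two usages side by side in a language with a remainder operator (an added illustration, not part of the answer):

```python
print(17 % 5)         # 2  -- the operator sense: "17 mod 5" is the remainder 2
print(12 % 5)         # 2
print((17 - 12) % 5)  # 0  -- so 17 ≡ 12 (mod 5), the congruence-relation sense
```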
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359458684921265, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/117567?sort=votes | ## Singular values of the sum of A and A^T
Dear all,
As part of my research, I need to obtain a lower bound on the smallest singular value $\sigma_{n}(A+A^{T})$ for a stochastic $A$ (as a function of the singular values of $A$); note that $A$ is generally not Hermitian.
I am aware of the upper bounds (due to Weyl and Fan) and of the fact that for general $\sigma_{i}(A+B)$ no lower bound is known. Do you see a way?
Thank you.
Edit: A can be considered a power of a lazy row stochastic matrix. I.e., $A=P^k$ for some strongly diagonally dominant row stochastic $P$.
-
1
See my answer mathoverflow.net/questions/97746 to a similar MO question. However, the assumption that $A$ is stochastic might change the answer. – Denis Serre Dec 30 at 7:29
I came to a relaxation of my problem, wherein A can be a power of a lazy row stochastic matrix. I.e., A=P^k for some strongly diagonally dominant row stochastic P. Hence P is positive definite; however, A is generally not. Do you see a way in which this simplifies the problem? – Daniel86 Jan 3 at 6:22
## 1 Answer
It does not seem like any obvious bound on $\sigma_n(A+A^T)$ is possible in terms of the singular values of $A$. Indeed, consider $$A = \left( \matrix{ 1/3 & 1/2 & 1/6 \cr 1/6 & 1/3 & 1/2 \cr 1/2 & 1/6 & 1/3} \right).$$ Clearly, $A$ is stochastic and a computation reveals that it is nonsingular so that all of its singular values are positive. On the other hand, $A+A^T$ is a multiple of the all-ones matrix, so $\sigma_3(A+A^T)=0$.
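The example is easy to confirm numerically (my addition, assuming NumPy):

```python
import numpy as np

A = np.array([[1/3, 1/2, 1/6],
              [1/6, 1/3, 1/2],
              [1/2, 1/6, 1/3]])
print(np.linalg.svd(A, compute_uv=False))        # all three singular values positive
print(np.linalg.svd(A + A.T, compute_uv=False))  # smallest one is (numerically) zero
```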
-
Thankyou, it is a nice example, but can this situation occur where $A$ is an integer power of a positive definite matrix? I edited my original post. Thanks. – Daniel86 Jan 3 at 6:31
@Daniel86 - under standard definitions, a real positive definite matrix is automatically symmetric. When you use the words "positive definite" I am guessing you mean that $x^T A x > 0$ for any $x \in R^n$ but $A$ is not necessarily symmetric. Is my understanding correct? – alex o. Jan 4 at 21:24
@alex o. - If $P$ is diagonally dominant but not necessarily symmetric, $P+P^{T}$ is diagonally dominant and symmetric and hence positive definite. However, for $A=P^{k}$ this is not always the case. For such a case, I want to lower bound the smallest singular value of $A+A^{T}$ in terms of the singular values of $P$. – Daniel86 Jan 4 at 22:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8872968554496765, "perplexity_flag": "head"} |
http://calculus7.org/2012/01/16/generalized-birthday-problem/ | being boring
## Generalized Birthday Problem
Posted on 2012-01-16
Facebook currently allows up to 5000 friends. Assume that birthdays are uniformly distributed among 365 days. It is well-known that for a user with 23 friends there is a 50% chance of having to write ”Happy Birthday” on at least two walls in the same day.
How many facebook friends should one acquire to have a 50% chance of at least 10 birthdays falling on the same day? With 3286 friends this is guaranteed to happen, but one should expect that 50% probability will be reached with a substantially smaller number.
[Update] In 1989 Diaconis and Mosteller found an approximate solution that gives the approximate number of people required to have at least a 50% probability of at least $k$ birthdays on the same day. The formula can be found on MathWorld and its output is the OEIS sequence A050255. For comparison, Diaconis and Mosteller quote exact values found by Bruce Levin. The exact solution is the sequence A014088, which begins thus:
`1, 23, 88, 187, 313, 460, 623, 798, 985, 1181, 1385, 1596, 1813, 2035, 2263`
So, with 1181 Facebook friends there is a 50% chance of having to write ”Happy Birthday” at least ten times in the same day.
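A quick Monte Carlo check of the 1181 figure (my addition; the number of trials is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1181, 20000
hits = 0
for _ in range(trials):
    counts = np.bincount(rng.integers(0, 365, size=n), minlength=365)
    hits += counts.max() >= 10    # some day received at least 10 birthdays
print(hits / trials)              # comes out close to 0.5
```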
This entry was posted in Uncategorized and tagged Birthday Problem. Bookmark the permalink. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305523037910461, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/52333/proving-that-every-graph-is-an-induced-subgraph-of-an-r-regular-graph/52347 | ## Proving that every graph is an induced subgraph of an r-regular graph
How would you prove that every graph G is an induced subgraph of an r-regular graph where r >= D and D is the largest degree of the vertices of G?
I can picture the answer for when G itself can be turned into a D-regular graph: make a union of G with a copy of itself and then connect the vertices across the two vertex sets U (from G) and W (from the copy of G) such that u_i and w_j are connected if and only if v_i and v_j would be connected in the original graph in order to turn it (the original graph) into a D-regular graph.
However, I cannot figure out how to do it in the general case where, for instance, the order of G may be even or odd (and, thus, may not be made into an r-regular graph if r is odd as well) or for when r > D. (I am also having trouble with just the language of graph theory and how to write proofs in it, if you couldn't tell.)
-
Any $r\geq D,$ or SOME $r \geq D?$ In the latter case, any graph is an induced subgraph of a complete graph... – Igor Rivin Jan 17 2011 at 17:00
2
@Igor: I think there's some terminological confusion here - an induced subgraph of a complete graph is a complete graph... – ndkrempel Jan 17 2011 at 17:25
@ndkrempel: yes, confusion reigns. – Igor Rivin Jan 17 2011 at 17:40
Until you get 50 reputation points, you might not be able to leave comments on other people's answers; but you should be able to edit your own question, and add responses to what people have said or asked to the body of the text – Yemon Choi Jan 17 2011 at 19:53
## 7 Answers
Use induction on $r-\delta$, where $\delta=\delta(G)$ is the smallest degree of any vertex in $G$.
If $r-\delta=0$, then you are done.
If $r-\delta > 0$ then create two disjoint copies of $G$, say $G_1$ and $G_2$. For any vertex $v$ in $G$ of degree less than $r$, add an edge between the corresponding vertices $v_1$ in $G_1$, $v_2$ in $G_2$. Call the resulting graph $G'$. Then $G'$ contains $G$ as an induced subgraph, and $r-\delta(G')=r-\delta(G)-1$.
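A direct transcription of this doubling construction (my own sketch in plain Python, representing a graph as a dict of adjacency sets; vertex names of the result are nested pairs recording which copy a vertex came from):

```python
def embed_in_regular(adj, r):
    """Return an r-regular graph containing `adj` as an induced subgraph
    (assumes r is at least the maximum degree of `adj`)."""
    G = {(0, v): {(0, u) for u in nbrs} for v, nbrs in adj.items()}
    while True:
        deficient = [v for v, nbrs in G.items() if len(nbrs) < r]
        if not deficient:
            return G
        # Take two disjoint copies of G and join the two copies of each
        # deficient vertex, raising its degree by exactly one.
        H = {}
        for copy in (0, 1):
            for v, nbrs in G.items():
                H[(copy, v)] = {(copy, u) for u in nbrs}
        for v in deficient:
            H[(0, v)].add((1, v))
            H[(1, v)].add((0, v))
        G = H

# Example: the path 0-1-2 (max degree 2) embedded in a 3-regular graph on 12 vertices.
# embed_in_regular({0: {1}, 1: {0, 2}, 2: {1}}, 3)
```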
-
This is a nice recursive construction. – Derrick Stolee Jan 18 2011 at 3:49
You can construct the graph explicitly as well, although the one I describe is much larger than the one you get from the Gale-Ryser technique.
Take your input graph $G$ with maximum degree $\Delta$ and a number $r \geq \Delta$.
Create $(r+1)!$ copies of $G$. For each vertex $v_i \in V(G)$, let $d_i$ be the degree of $v_i$ in $G$. Partition the $(r+1)!$ copies of $G$ into parts of size $r-d_i+1$ (which divides $(r+1)!$). For each part, connect all copies of $v_i$ with edges. This increases the degree at each $v_i$ from $d_i$ by $r-d_i$ to $r$.
-
Here's a silly group-theoretic proof.
Fix a free group F of suitably large rank, and realise it as the fundamental group of a rose R. Label and orient G so that there is an immersion G->R. Then G corresponds to a subgroup H of F. Let n be the diameter of G, and consider the finite set S of all elements of F of length at most n+1 that are not contained in H. By Marshall Hall's Theorem, G embeds in a finite cover R' of R such that no non-trivial elements of S are contained in the subgroup corresponding to R'.
Now, R' is regular, and G is an induced subgraph. Indeed, it is a subgraph by construction, and if it were not induced then there would be two non-adjacent vertices of G joined by an arc in R'. But this corresponds to a loop in R' of length at most n+1 that does not correspond to an element of H, which we have ruled out by construction.
(In the above, I've been a little sloppy about what I mean by length. One really needs to consider all conjugates of such elements by short words, so perhaps S needs to be a little bigger. But the idea works.)
-
I suppose I can justify this answer by observing that the question asks how `you' would prove this fact. Well, this is how I would prove it. – HW Jan 17 2011 at 19:54
1
@Henry: what kind of bound do you get on the size of this cover? Also, for us ignorami: which Marshall Hall theorem are you alluding to? – Igor Rivin Jan 17 2011 at 20:00
That's nice, Henry. – Richard Kent Jan 17 2011 at 22:49
Igor, first, regarding Marshall Hall's Theorem, I'm using a topological version which you can find, for instance, in Stallings's 'Topology of finite graphs'. Regarding the size of the cover, well, I can only give a rough estimate, but here goes. Let G' be the covering space of R corresponding to H. The index is equal to something like the size of the ball of radius n+1 in this covering space. Large chunks of this covering space will be trees, so this will in turn be exponential in n. One would want to be slightly more careful to calculate the constants. – HW Jan 17 2011 at 23:40
... Of course, one big disadvantage of this construction is that you always get even valence. – HW Jan 17 2011 at 23:41
Let $k = 2 \lceil {r \over 2} \rceil$ and start with $G_k = k \cdot G$ such that we have $k$ copies of $G$ and, thus, $k$ copies of each vertex $v_i \in V(G)$. Next, partition $G_k$ into $n=|G|$ subsets $G_1,...,G_n$ such that each consists of the $k$ copies of vertex $v_i \in V(G)$. Each element in a given subset has degree $d_i \leq r$ and is adjacent to no other element in the subset, thus, we can form a $(r-d_i)$-regular subgraph amongst the vertices in a particular subset. We know this is possible because each subset has an even number of elements ($k$ was defined to be even). Performing this for all subsets $G_1,...,G_n$ will result in an r-regular graph $G_k$ of order $kn$. Finally, since all of the added edges run only between copies of the same vertex, any subset of $V(G_k)$ corresponding to one of the $k$ copies of $V(G)$ will induce the original graph $G$.
-
In that case, the answer is given by http://mathoverflow.net/questions/48702 You call the vertices of your graph red, and you want to have a collection of blue vertices, so that the degree of every red vertex $v_i$ equals $r-d_i,$ where $d_i$ is the degree of $v_i$ in your graph $G.$ The degrees of the blue vertices are unspecified. The Gale-Ryser theorem (mentioned in the question cited above) tells you that this can be done.
EDIT Here is a better way: join every vertex $v_i$ to $r - d_i$ new vertices. When we are done, we have added $K=r n - \sum_i d_i$ new vertices. All of the old vertices now have degree $r,$ so we leave them be. The new vertices all have degree $1.$ If there exists a graph on $K$ vertices of degree $r-1,$ draw the edges of that graph between the corresponding new vertices, and we are done. If there is not such a graph, that means that either $K$ has the wrong parity, or is too small, but this is easy to fix by adding a few newer vertices (it is clear that we will never need to add more than $2r$ extra vertices, the precise bound is an exercise to the reader).
-
Take two copies $G_1$ and $G_2$ of $G$ and add an edge between each vertex $v$ of $G_1$ and every vertex of $G_2$ corresponding to a non-neighbour of $v$. Then $G$ is obviously an induced subgraph; the obtained graph is $(|G|-1)$-regular and has order $2|G|$.
-
@igor: Any r >= D
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 84, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9482130408287048, "perplexity_flag": "head"} |
http://www.physicsforums.com/showthread.php?p=4020846 | Physics Forums
## Can magnetic fields/forces do work on a current-carrying wire?!
Quote by cabraham If I discuss 1001 other issues it is in response to people who raised these 1001 issues. I would rather stay on track w/ the OP question. Torque is radius x force x sine of angle. If the force is tangent to the radius, angle is 90 deg, sin 90 deg = 1. But when the poles are aligned, force is maximum, but angle is zero, as the force acts radially. Sin 0 deg = 0. There is maximum force but zero torque. Any motor text will clarify this for you. BR. Claude
You are right on this single point. I mixed up the conductor of the op pic with the pole.
From 263:
Anyway, I'll use 'me" in the future instead of "us".
284:
You seem to be the only person left not happy w/ the thesis that B does the work.
Can you please stick a little longer to your own good resolution not to talk for other people?
Quote by DaleSpam I am tired of asking for clear confirmation and getting long-winded obfuscations instead, so I will simply assume that I am understanding your position and encourage you to politely correct me if I accidentally misunderstand. E is the E field from Maxwell's equations. I don't know why you think that there is any ambiguity about what E is. It might be difficult to calculate, but what it is should be clear. Since there is no j outside the wire then E.j is 0 outside of the wire, so the only E corresponding to a non-zero E.j is the E inside the wire. So from this and your comments above it seems like you think that E.j≠P. If E.j≠P then energy is not conserved. The E and B fields have a certain amount of energy density given by (E²+B²)/2. If that energy changes then it must either have gone to EM fields elsewhere as described by $\nabla \cdot (E \times B)$ or it must have gone to matter as described by E.j. There is nowhere else for energy to go. http://farside.ph.utexas.edu/teachin...es/node89.html If matter obtains more power P from EM than the integral of E.j then there is energy in the matter which appears out of nowhere without a corresponding decrease in the EM energy. So, your position is incompatible with the conservation of energy. There is no doubt that the B field provides the torque. You are welcome to draw a diagram if you wish, but it is not under any dispute.
Of course B field provides the torque. I'm glad you have no doubt about that point. When a torque T turns the rotor through an angle of β. is the work done not equal to Tβ?
Now we turn to E.J. I don't believe that E.J is zero when E is taken as the composite value considering both R & L. A dot product of 2 vectors does not vanish because the 2 vectors do not coincide. Again, if I take E as the composite value, accounting for IR as well as L*dI/dt, E.J accounts for the power density. Integrating over volume gives power. Outside the wire, I don't believe that the E.J value vanishes.
E.J represents input power V*I, & output power plus heat loss is V*I*PF, where PF = power factor. Energy is conserved using my computation. Power source energizes stator. A portion of the power is converted to heat via I²R. A portion is energizing LI²/2. Then from LI²/2, energy is transferred to the rotor in the form of Tβ. Is all the energy accounted for? Usually the rotor winding still has some LI²/2 energy remaining. This reactive energy is returned to the input source. The next cycle delivers more energy & reactive power etc.
Most ac motors, synchronous or induction type, have a power factor less than unity. A typical value is 0.8 lagging. I see your point. You are taking the "E" in E.J as being only that "E" internal to the wire. Then E.J is what you take to be the total power inputted to the system. You then claim that E.J is the rotor heat loss plus the rotor mechanical power.
I believe that E.J is the total power, but only if "E" is the total E across L as well as R. Visualize a winding. The E field is across the entire coil & includes the voltage drop across R (IR) & L (L*dI/dt). That is the E I refer to. I now understand what you're saying.
So here is my reply. Outside the wire, when we consider total E including inductance, the dot product E.J is not necessarily zero. Please review the dot product math. E & J do not have to occupy the same physical space to have a dot product. I will confirm tonight when I get home with my ref texts. BR.
Claude
Mentor
Quote by cabraham Of course B field provides the torque. I'm glad you have no doubt about that point. When a torque T turns the rotor through an angle of β, is the work done not equal to Tβ?
See my previous responses on this topic in post 272 and 277.
Quote by cabraham Outside the wire, I don't believe that the E.J value vanishes.
Really? What is the current density outside the wire?
Quote by cabraham I believe that E.J is the total power, but only if "E" is the total E across L as well as R. Visualize a winding. The E field is across the entire coil & includes the voltage drop across R (IR) & L (L*dI/dt). That is the E I refer to. I now understand what you're saying.
You seem to be mixing up the E field of Maxwell's equations and the voltage of circuits.
Quote by cabraham Outside the wire, when we consider total E including inductance, the dot product E.J is not necessarily zero. Please review the dot product math. E & J do not have to occupy the same physical space to have a dot product. I will confirm tonight when I get home with my ref texts.
Please do so, this is flat wrong.
Quote by DaleSpam See my previous responses on this topic in post 272 and 277. Really? What is the current density outside the wire? You seem to be mixing up the E field of Maxwell's equations and the voltage of circuits. Please do so, this is flat wrong.
Current density outside wire is zero. That does not make E.J zero. You seem to be confusing dot product with line integral. The path of integration for E.J is the volume which includes the wires & the magnetic core. Two vectors parallel have a non-zero dot product. They do not have to coincide. E is non-zero outside the wire. J is non-zero inside the wire. Their dot product is non-zero.
Voltage of circuits is merely the line integral of E dot dl.
Your previous responses were that B cannot do work because it does not move which makes no sense. Then you claim E does work but don't mind that E does not move. This is too bizarre.
Claude
Mentor
Quote by cabraham Current density outside wire is zero. That does not make E.J zero.
Yes, it does. x.0=0 for any vector, x.
Quote by cabraham You seem to be confusing dot product with line integral. The path of integration for E.J is the volume which includes the wires & the magnetic core. Two vectors parallel have a non-zero dot product. They do not have to coincide. E is non-zero outside the wire. J is non-zero inside the wire. Their dot product is non-zero.
E.j is a power density at every point in space. It is, in fact, zero everywhere outside the wire. Therefore, there is zero work done on matter outside the wire (since there is no matter there). This should be obvious.
Quote by cabraham Your previous responses were that B cannot do work because it does not move which makes no sense. Then you claim E does work but don't mind that E does not move. This is too bizarre.
Work is a transfer of energy. According to the equation I posted E transfers energy and B does not.
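For reference, the energy-conservation relation being invoked throughout this exchange (Poynting's theorem, written in the units implied by the quoted field energy density (E²+B²)/2) can be stated compactly as

$$\frac{\partial}{\partial t}\left(\frac{E^2+B^2}{2}\right) + \nabla \cdot (\mathbf{E} \times \mathbf{B}) = -\,\mathbf{E} \cdot \mathbf{j},$$

so whatever power the matter absorbs (the right-hand side, integrated over volume) must be supplied either by a decrease of the local field energy or by an inflow of the Poynting flux E×B.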
Mentor
Quote by DaleSpam I.e. you could solve Maxwell's equations for j in terms of B and substitute in to the energy conservation equation to get P = E.j = f(E,B,...).
It is too bad that you aren't making this argument. It seems like a great argument.
Recognitions: Gold Member
The only approximate constants I see relevant to the E.j discussions are the supply voltage (V), the circuit resistance, the appropriate linear dimensions of the coil circuit and the B field of the magnets (not the resultant B field; this changes because there is a field component due to the current-carrying coil as well as the field component due to the magnets).
When the motor starts turning and picking up speed the following changes occur:
1. The back emf starts to increase and the resultant E starts to decrease.
2. The current starts to decrease with a corresponding decrease of j.
3. The resultant B field changes due to:
a. The field set up due to the reducing current flowing through the coil circuit.
b. The coil turning, this resulting in a non-stationary and changing B field (changes due to this effect occur also for a constant current and angular velocity).
As has been said before, it is the work done against back emf that appears as mechanical work. We could write that at a rotational speed where the resultant E has a value of E' and the current density a value of j', the power density has a value given by P = E'.j'.
{I think that the whole thing is expressed most easily by the equation I first posted: VI = EbI + I²R (VI = input power, EbI = output power and Eb = back emf). Remember that E and I change as the speed changes.}
E' can be written roughly as (V-Eb)/dl and j' as I/dA (l = length, A = area). From this, E'.j' is given by p = (VI-EbI)/volume.
In other words, what E.j stands for depends on what E and j are taken to be:
V.j represents input power
Eb.j represents mechanical output power
E'.j' represents power losses
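To make the VI = EbI + I²R bookkeeping above concrete, here is a minimal numerical sketch; the supply voltage, resistance and back-emf values are invented purely for illustration and are not taken from any post in this thread.

```python
# Power bookkeeping for a simple motor-like circuit obeying V = Eb + I*R.
# All numbers are illustrative assumptions.
V = 12.0   # supply voltage (volts)
R = 0.5    # circuit resistance (ohms)

for Eb in (0.0, 4.0, 8.0, 11.0):   # back emf at increasing speeds (volts)
    I = (V - Eb) / R               # current from V = Eb + I*R
    p_in = V * I                   # input power V*I
    p_mech = Eb * I                # mechanical output power Eb*I
    p_loss = I**2 * R              # resistive loss I^2*R
    assert abs(p_in - (p_mech + p_loss)) < 1e-9
    print(f"Eb={Eb:5.1f} V  I={I:5.1f} A  in={p_in:6.1f} W  mech={p_mech:6.1f} W  loss={p_loss:6.1f} W")
```

As the back emf rises with speed, the current and the I²R loss fall and a growing share of the input power appears as mechanical output, which is the split the post above describes.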
There's a lot of talk of time varying B fields to explain work being done. I want to show you a motor which is in fact an even simpler one than the one of the OP. Whereas the OP shows a 2-pole motor, it is also possible to make a 1-pole motor, the so-called homopolar motor. http://en.wikipedia.org/wiki/Homopolar_motor There are some lovely YouTube demos: http://www.youtube.com/watch?v=3aPQqNt15-o Now, a 2-pole motor has all the same principles of power output and energy considerations as a 1-pole. Of course reversing of the current each revolution in a 2-pole brings its own interference problems etc, as previously has been pointed out. But for a 1-pole there's no dI/dt, hence no U = L dI/dt, and as you can see from the videos it works perfectly.
Quote by DaleSpam It is too bad that you aren't making this argument. It seems like a great argument.
Dale, you still don't agree that B fields/forces do work on a loop?
If not, I'll use simple logic and simple illustrations for you to imagine. I think you're seriously missing something, and only at one point! You need to open up the horizon a bit!
You were the 1st who agreed with full confidence; now you've changed your opinion and are fighting strongly against your 1st one. Umm, I need to shake you back to your original opinion!
Mentor
Quote by Miyz You need to open up the horizon a bit!
I am sorry, but this is very funny advice coming from you. You are very closed-minded, and have shown no indication of even considering alternative viewpoints.
I am torn on the subject, as should be obvious. I personally would like to be able to say that magnetic fields do work. But the math says that they don't on point charges. So I thought that dealing with non point charges was a loophole, but vanhees71's paper closed that loophole. And looking deeper into the math shows that the power density is E.j for a continuous distribution also. So my loophole is slammed shut.
Nobody else has shown another loophole that holds up, and both you and cabraham seem to avoid the discussion of the energy conservation equation entirely.
Quote by cabraham Thanks vanhees71, for a great discussion, & for your making great contributions to this discussion, particularly you provided great insight re the math involved. One thing we all hopefully learned is that although the motor was invented in the 19th century, it is an incredibly fascinating device! Who would think that so much is involved when we turn on our fan & watch the blades spin? You really know your stuff. Also deserving mention is that E.J does involve B. Since J = I/Aw, where Aw is wire area, & NAcB = LI = LJAw where Ac is the area of the magnetic core, we have B = LJAw/NAc, or we can write J = BNAc/LAw. Hence E.J = (E.B)(NAc/LAw). If we wish to examine the work done by the power source energizing the overall system, then "E.J" is equal to "E.B" multiplied by a constant. So it is apparent that both E & B are involved. Of course anybody familiar with e/m field theory already knows that. Bit it is good to examine these questions now & then. I thank all who participated. Even my harshest critics helped me gain a better understanding & I thank them. I apologize if I came across as rude, & assure all that I sometimes get carried away. I don't have anything personal against anybody here, not even my critics. Feel free to ask for clarification. Thanks to all. Claude
These kinds of simple manipulation are not right. I would like to know about that B, because so far I know E.J is the rate at which the electromagnetic field does work on the charges per unit volume, and if B is the magnetic field acting on charges in that volume, then for the case of an electromagnetic wave shining on a hydrogen atom of a certain frequency it will never be able to ionize the atom, because in the case of light E.B=0. Electric and magnetic fields are perpendicular to each other in light, however high its frequency is.
Quote by DaleSpam I am sorry, but this is very funny advice coming from you. You are very closed-minded, and have shown no indication of even considering alternative viewpoints.
Well... That's because my knowledge about this matter is basic; you and Claude are speaking an entirely different language and I do not understand what you are all saying, although I'm trying to keep up! How could I show any indications far beyond the simplicity I've understood about this matter?
Quote by DaleSpam I am torn on the subject, as should be obvious.
I noticed.
Quote by DaleSpam I personally would like to be able to say that magnetic fields do work. But the math says that they don't on point charges. So I thought that dealing with non point charges was a loophole, but vanhees71's paper closed that loophole. And looking deeper into the math shows that the power density is E.j for a continuous distribution also. So my loophole is slammed shut. Nobody else has shown another loophole that holds up, and both you and cabraham seem to avoid the discussion of the energy conservation equation entirely.
Well, I didn't avoid that, I just don't have any clue, as I stated above. What makes sense to me the most is looking at the forces that actually are doing work. However, I speak nothing I do not understand of. I honestly don't know anything about this and it's new for me. I'm only catching up and reading your posts.
But again and again. I don't look at the energy conservation alone but... I look at the forces as well.
Because that matter is confusing as hell. I'd rather study it step by step to finally understand and discuss it. Till then I'm a spectator.
It's wise for me to admit I know nothing of this matter and just let the experts do their work instead. If I did, you'd have seen me post a lot. All my posts on E.J are nothing useful; why? As I said before, it's new for me.
Quote by andrien These kinds of simple manipulation are not right. I would like to know about that B, because so far I know E.J is the rate at which the electromagnetic field does work on the charges per unit volume, and if B is the magnetic field acting on charges in that volume, then for the case of an electromagnetic wave shining on a hydrogen atom of a certain frequency it will never be able to ionize the atom, because in the case of light E.B=0. Electric and magnetic fields are perpendicular to each other in light, however high its frequency is.
But E & B are from interacting loops. Maybe E1, E2, B1, & B2 are more appropriate.
E.J is no problem, & is consistent w/ what I've presented. You acknowledge B as producing torque, yet you deny its work contribution. For the 4th time, please draw a sketch showing the fields doing work. All work is ultimately done by the input power source. We have 2 loops. Each loop has E & B. It is the interaction that makes motors run. An e/m wave not ionizing an H atom is too simplistic for dealing w/ motors. Please draw us a pic. Thanks.
Claude
Quote by DaleSpam Yes, it does. x.0=0 for any vector, x. E.j is a power density at every point in space. It is, in fact, zero everywhere outside the wire. Therefore, there is zero work done on matter outside the wire (since there is no matter there). This should be obvious. Work is a transfer of energy. According to the equation I posted E transfers energy and B does not.
Look up dot product of 2 parallel vectors, non-zero if both are non-zero.
Your equation could be expressed in terms of J & B, so what? It doesn't prove that E is irrelevant.
But the E that transfers energy must include the inductive part, external to the wire. If the stator winding is powered from 120 V rms ac, let's say the current is 1.0 amp, the winding resistance is 1.0 ohm. The voltage drop inside the wire is 1.0 volt, & the other 119 volts appears across the inductance as well as core loss, leakage reactance, etc.
You claim that E.J inside the wire accounts for all work. But if the wire is superconducting, E is zero inside. You should reexamine that whole theory. I say with firm conviction that E.J must be based on total E, not just inside wire. We seem to have come to a stand still since you insist E is inside, I say E is both outside & inside. Until this is resolved it is pointless to argue. BR.
Claude
I think we can show that the work done is clearly zero. The general expression for work is $W = \int_{\ i}^f F \cdot ds \ \ \ \ \$ where F is the applied force and ds is the displacement from initial $i$ to final $f$ position $ds = \sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2}$ This can be decomposed such that $\ \ \ \ \ W = {\int_{\ x_i}}^{x_f} F_x \ dx + {\int_{\ y_i}}^{y_f} F_y \ dy + {\int_{\ z_i}}^{z_f} F_z \ dz$ According to the Lorentz force equation the force produced by the magnetic field is $F \ \ = \ \ q \ (v \times B) \ \ = \ \ q \ \langle v_z B_y - v_y B_z, \ \ v_x B_z - v_z B_x, \ \ v_y B_x - v_x B_y \rangle$ This can be broken down to the following equations $F_x \ = \ q(v_z B_y - v_y B_z) \ = \ q\ (\frac{\partial z}{\partial t} B_y - \frac{\partial y}{\partial t} B_z)$ $F_y \ = \ q(v_x B_z - v_z B_x) \ = \ q\ (\frac{\partial x}{\partial t} B_z - \frac{\partial z}{\partial t} B_x)$ $F_z \ = \ q(v_y B_x - v_x B_y) \ = \ q\ (\frac{\partial y}{\partial t} B_x - \frac{\partial x}{\partial t} B_y)$ Taking the integral with respect to x in the first equation gives ${\int_{\ x_i}}^{x_f} F_x \ dx \ \ = \ \ {\int_{\ x_i}}^{x_f} q(\frac{\partial z}{\partial t} B_y - \frac{\partial y}{\partial t} B_z) \ dx \ \ = \ \ 0$ And likewise for $F_y$ and $F_z$ So the work done by the magnetic field is zero (because only the direction of the velocity changes not its magnitude). However, we're assuming in using the Lorentz force equation that it applies to an instantaneously steady magnetic field. There are no parameters in the force equation for dealing with a changing magnetic field.
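As a small numerical companion to the derivation above, here is a rough sketch that integrates the magnetic part of the Lorentz force for a charge in a uniform field and accumulates F·v along the path; the charge, mass, field and step size are arbitrary illustration values.

```python
import numpy as np

# A charge moving in a uniform magnetic field B = (0, 0, 1) with E = 0.
# q, m, dt and the initial velocity are arbitrary illustration choices.
q, m, dt, steps = 1.0, 1.0, 1e-3, 20_000
B = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 0.5])

work = 0.0
for _ in range(steps):
    F = q * np.cross(v, B)   # magnetic part of the Lorentz force
    work += F @ v * dt       # accumulate F . v dt along the trajectory
    v = v + (F / m) * dt     # crude Euler step, good enough for this check

print("work accumulated by the magnetic force:", work)  # ~1e-16: zero up to roundoff
```

The force stays perpendicular to the velocity at every instant, so the accumulated F·v never grows beyond floating-point noise, which is the content of the argument above.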
Mentor
Quote by cabraham Look up dot product of 2 parallel vectors, non-zero if both are non-zero.
True, but j is 0 outside of the wire, therefore the dot product is 0 outside of the wire.
Quote by cabraham I say with firm conviction that E.J must be based on total E, not just inside wire. We seem to have come to a stand still since you insist E is inside, I say E is both outside & inside.
Weren't you going to look at a textbook last night?
A field at a location far from the wire cannot deliver power to a wire. That energy must first be transported to the location of the wire, and then it can be delivered to the wire.
It is obviously nonsense to claim that the E field a light year away from a wire can do any work on the wire now. For the same reason that is nonsense, it is also nonsense that the field a foot away or a millimeter away can do any work on the wire now. Only the fields at the location of the matter at a given time can deliver power to the matter at that time.
Another thing to consider is that work and torque share the same dimensions and measurement units: $\frac{ML^2}{T^2}$ with measurements in $N \cdot m$. In a pure numeric sense they're equivalent, but in terms of semantics work refers to force exercised over a linear displacement while torque refers to force exercised over a rotational displacement. Can one simply say that a magnetic field produces torque? In order to use it to do work you need to find a mechanical means of converting the rotary motion into linear motion?
http://physics.stackexchange.com/questions/26892/does-4d-n-3-supersymmetry-exist/26894 | # Does 4D N = 3 supersymmetry exist?
Steven Weinberg's book "The Quantum Theory of Fields", volume 3, page 46 gives the following argument against N = 3 supersymmetry:
"For global N = 4 supersymmetry there is just one supermultiplet ... This is equivalent to the global supersymmetry theory with N = 3, which has two supermultiplets: 1 supermultiplet... and the other the CPT conjugate supermultiplet... Adding the numbers of particles of each helicity in these two N = 3 supermultiplets gives the same particle content as for N = 4 global supersymmetry"
However, this doesn't directly imply (as far as I can tell) that there is no N = 3 QFT. Such a QFT would have the particle content of N = 4 super-Yang-Mills but it wouldn't have the same symmetry. Is such a QFT known? If not, is it possible to prove it doesn't exist? I guess it might be possible to examine all possible Lagrangians that would give this particle content and show none of them has N = 3 (but not N = 4) supersymmetry. However, is it possible to give a more fundamental argument, relying only on general principles such as Lorentz invariance, cluster decomposition etc. that would rule out such a model?
-
Although you do not say this explicitly, your question is about four-dimensional Poincaré supersymmetry. Certainly in three dimensions, there are $N=3$ theories. – José Figueroa-O'Farrill Sep 16 '11 at 18:45
Of course. I changed the title to make it more precise – Squark Sep 18 '11 at 18:38
@Yuji--- It's N=4 on shell. – Ron Maimon Oct 7 '11 at 5:11
## 2 Answers
Depending on what you mean by "exist", the answer to your question is Yes.
There is an $N=3$ Poincaré supersymmetry algebra, and there are field-theoretic realisations. In particular there is a four-dimensional $N=3$ supergravity theory. A good modern reference for the diverse flavours of supergravity theories is Toine Van Proeyen's Structure of Supergravity Theories.
Added
Weinberg's argument is essentially the following observation. Take a massless unitary representation of the $N=3$ Poincaré superalgebra with helicity $|\lambda|\leq 1$. This representation is not stable under CPT, so the CPT theorem says that to realise that in a supersymmetric quantum field theory, you have to add the CPT-conjugate representation. Once you do that, though, the $\oplus$ representation admits in fact an action of the $N=4$ Poincaré superalgebra.
The reason the supergravity theory exists (and is different from $N=4$ supergravity) is that the $N=3$ gravity multiplet, which is a massless helicity $|\lambda|=2$ unitary representation, is already CPT-self-conjugate.
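The multiplet counting that this argument rests on can be checked directly using the standard fact that a massless multiplet of N-extended supersymmetry with top helicity λ contains helicity λ − k/2 with multiplicity C(N, k). A small sketch (the code and variable names are mine, purely for illustration):

```python
from math import comb
from fractions import Fraction
from collections import Counter

def multiplet(N, top):
    """Helicity content of a massless N-extended multiplet with top helicity `top`."""
    return Counter({top - Fraction(k, 2): comb(N, k) for k in range(N + 1)})

def cpt_conjugate(m):
    return Counter({-h: n for h, n in m.items()})

n3 = multiplet(3, Fraction(1))          # N = 3 multiplet with helicities 1 ... -1/2
n3_full = n3 + cpt_conjugate(n3)        # add the CPT-conjugate multiplet
n4 = multiplet(4, Fraction(1))          # N = 4 multiplet, already CPT self-conjugate

# Both give one state of helicity +/-1, four of +/-1/2 and six of helicity 0
print({str(h): n for h, n in sorted(n3_full.items())})
print({str(h): n for h, n in sorted(n4.items())})
print(n3_full == n4)                    # True: identical particle content
```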
-
OK, although I think that strictly speaking supergravity is not a QFT since a consistent quantization of a gravitational theory presumably requires something else than a QFT, namely superstring theory – Squark Sep 17 '11 at 18:07
Yes, I agree. But Weinberg's argument is purely kinematical. It's a property of the unitary representation theory of the $N=3$ Poincaré superalgebra with the additional requirement of CPT invariance. – José Figueroa-O'Farrill Sep 17 '11 at 18:26
I feel that there are deep reasons that no QFT can be a theory of gravity (except in the holographic sense), but it is different subject. I still don't know the answer to my original question, namely is there an (honest, non-gravitational) QFT in 4D with N = 3 supersymmetry? – Squark Sep 18 '11 at 18:40
I'm accepting the answer since although quantum gravity is not a QFT, the existence of N=3 quantum gravity probably rules out the possibility for a no-go theorem along the lines of basic principles (since most of them aren't violated by gravity and the principle that is violated - locality - is only violated in a rather subtle way). – Squark Nov 12 '11 at 14:15
The discussion on pages 168-173 in Weinberg vol III looks to exclude rigid $N=3$ supersymmetric QFTs in 4d, at least those which are renormalisable and with a lagrangian description.
The first step is to note that, in order to identify the CPT-self-conjugate $N=4$ supermultiplet with the $N=3$ supermultiplet plus its CPT-conjugate, one must assume that all fields in both supermultiplets are valued in the adjoint representation of the gauge group. In $N=1$ language, the basic constituents in both supermultiplets are one gauge and three chiral supermultiplets, all adjoint-valued. The three chiral supermultiplets must transform as a triplet under the ${\mathfrak{su}}(3)$ part of the ${\mathfrak{u}}(3)$ R-symmetry of the $N=3$ superalgebra.
Any renormalisable lagrangian field theory in 4d that has a rigid $N \geq 2$ supersymmetry must take the form given by (27.9.33) in Weinberg. This just corresponds to the generic on-shell coupling of rigid $N=2$ vector and hyper multiplets, with renormalisable $N=2$ superpotential (27.9.29). For $N>2$, vector and hyper multiplets must both transform in the adjoint representation of the gauge group. ($N=2$ requires only that the hypermultiplet transforms in a real representation of the gauge group, i.e. a "non-chiral" representation in $N=1$ language.) Putting in this assumption, the $N>2$ case is easily deduced using Weinberg's analysis below (27.9.34). All terms except those in the last two lines of (27.9.33) assemble into precisely the $N=4$ supersymmetric Yang--Mills lagrangian. The remaining terms in the last two lines of (27.9.33) depend on a matrix $\mu$ which defines the quadratic term in the superpotential. As Weinberg argues, $N=4$ occurs only if these terms all vanish identically (e.g. if $\mu =0$). Whence $N=3$ can occur only if the terms in the last two lines of (27.9.33) are non-vanishing and $N=3$ supersymmetric on their own. This would require them to be invariant under the ${\mathfrak{u}}(3)$ R-symmetry of the $N=3$ superalgebra. However, only two of the three chiral superfields (coming from the hypermultiplet) appear in the $\mu$-dependent terms. Since the three chiral supermultiplets must transform as an ${\mathfrak{su}}(3)$ triplet under the R-symmetry, it is clearly impossible for the last two lines in (27.9.33) to be ${\mathfrak{u}}(3)$-invariant unless they vanish identically. Whence, $N>2$ implies $N=4$ in this context.
-
http://en.wikipedia.org/wiki/Karp%e2%80%93Flatt_metric | # Karp–Flatt metric
The Karp–Flatt metric is a measure of parallelization of code in parallel processor systems. This metric exists in addition to Amdahl's law and Gustafson's law as an indication of the extent to which a particular computer code is parallelized. It was proposed by Alan H. Karp and Horace P. Flatt in 1990.
## Description
Given a parallel computation exhibiting speedup $\psi$ on $p$ processors, where $p > 1$, the experimentally determined serial fraction $e$ is defined to be the Karp–Flatt metric:
$$e = \frac{\frac{1}{\psi}-\frac{1}{p}}{1-\frac{1}{p}}$$
The lower the value of $e$, the better the parallelization.
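A direct transcription of this definition as a small helper function (the sample numbers are only an illustration):

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction e for a speedup measured on p > 1 processors."""
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Example: a speedup of 6.0 measured on 8 processors
print(karp_flatt(6.0, 8))   # ~0.048, i.e. roughly 5% of the work behaves as serial
```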
## Justification
There are many ways to measure the performance of a parallel algorithm running on a parallel processor. The Karp–Flatt metric defines a metric which reveals aspects of the performance that are not easily discerned from other metrics. A pseudo-"derivation" of sorts follows from Amdahl's Law, which can be written as:
$$T(p) = T_s + \frac{T_p}{p}$$
Where:
• $T(p)$ is the total time taken for code execution in a $p$-processor system
• $T_s$ is the time taken for the serial part of the code to run
• $T_p$ is the time taken for the parallel part of the code to run in one processor
• $p$ is the number of processors
with the result obtained by substituting $p = 1$, viz. $T(1) = T_s + T_p$. If we define the serial fraction $e = \frac{T_s}{T(1)}$, then the equation can be rewritten as
$$T(p) = T(1)\, e + \frac{T(1) (1-e)}{p}$$
In terms of the speedup $\psi = \frac{T(1)}{T(p)}$:
$$\frac{1}{\psi} = e + \frac{1-e}{p}$$
Solving for the serial fraction, we get the Karp–Flatt metric as above. Note that this is not a "derivation" from Amdahl's law, as the left-hand side represents a metric rather than a mathematically derived quantity. The treatment above merely shows that the Karp–Flatt metric is consistent with Amdahl's law.
## Use
While the serial fraction e is often mentioned in computer science literature, it was rarely used as a diagnostic tool the way speedup and efficiency are. Karp and Flatt hoped to correct this by proposing this metric. This metric addresses the inadequacies of the other laws and quantities used to measure the parallelization of computer code. In particular, Amdahl's law does not take into account load balancing issues, nor does it take overhead into consideration. Using the serial fraction as a metric poses definite advantages over the others, particularly as the number of processors grows.
For a problem of fixed size, the efficiency of a parallel computation typically decreases as the number of processors increases. By using the serial fraction obtained experimentally using the Karp–Flatt metric, we can determine if the efficiency decrease is due to limited opportunities of parallelism or increases in algorithmic or architectural overhead.
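For example, one might tabulate the metric across runs at increasing processor counts; a roughly constant $e$ points to a fixed serial portion, while a steadily growing $e$ points to rising parallel overhead. (The speedup figures below are invented for illustration.)

```python
# Hypothetical measured speedups for a fixed-size problem
measurements = {2: 1.9, 4: 3.4, 8: 5.6, 16: 8.1, 32: 10.2}

for p, speedup in measurements.items():
    e = (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)
    print(f"p={p:3d}  speedup={speedup:5.2f}  efficiency={speedup / p:5.2f}  e={e:6.4f}")
```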
## References
• Karp, Alan H. & Flatt, Horace P. (1990). "Measuring Parallel Processor Performance". Communication of the ACM 33 (5): 539–543. doi:10.1145/78607.78614.
• Quinn, Michael J. (2004). Parallel Programming in C with MPI and OpenMP. Boston: McGraw-Hill. ISBN 0-07-058201-7.
http://mathoverflow.net/questions/110893/if-a-compact-kahler-manifold-m-g-has-constant-scalar-curvature-is-the-metric/110900 | ## If a compact Kahler manifold $(M,g)$ has constant scalar curvature, is the metric $g$ real analytic?
Hi to all!
Perhaps it is a silly question; if so I'll delete this post. Suppose we have a compact Kahler manifold $(M,g)$ of complex dimension $m$ with constant scalar curvature with respect to its metric $g$. My question is: does the condition of constant scalar curvature imply that the metric $g$ is automatically real analytic? When I say that the metric is real analytic I mean that in a holomorphic coordinate chart with coordinate functions
$$(z^1,\ldots, z^m)\quad \textrm{ with } z^j=x^j+iy^j \textrm{ for }1\leq j\leq m$$
the coefficients $(g_{i\bar{j}})_{1\leq i,j\leq m}$ are analytic functions w.r.t. $x^k,y^k$.
Thank you in advance!
-
## 1 Answer
It's not a silly question, but there's a standard answer, and it's a purely local result: If the Kähler metric is $C^2$ and has constant scalar curvature, then it is real-analytic with respect to the real-analytic structure that underlies the complex-analytic structure. The reason is that setting the scalar curvature equal to a constant is an elliptic equation for the potential of the metric that is an analytic function of its arguments in the local real-analytic coordinates that you define, and so the elliptic regularity results of Hopf and Morrey apply.
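To make the structure of that local equation explicit, here is a sketch (conventions differ by factors of 2 between authors; the normalization below is just one common choice and is only meant to show the shape of the equation). Writing the metric locally in terms of a Kähler potential, $g_{i\bar j} = \partial_i \partial_{\bar j}\varphi$, the scalar curvature is

$$S(g) = -\,g^{i\bar j}\,\partial_i \partial_{\bar j} \log\det\big(g_{k\bar l}\big),$$

so the constant scalar curvature condition $S(g) = c$ is a fourth-order equation in $\varphi$ whose coefficients depend real-analytically on $\varphi$ and its derivatives, and which is elliptic wherever $g$ is a genuine metric; this is exactly the setting in which the Hopf–Morrey analytic-regularity theorems apply.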
-
Thank you for the answer! My doubt is this: if I have a $C^{2}$ metric I can set the equation for constant scalar curvature on a small coordinate chart and I get an elliptic (nonlinear) equation of fourth order on the potential of the metric. By elliptic regularity I have that the solution is $C^{\infty}$, but how can I conclude that the solution of this equation is analytic? – Italo Oct 28 at 12:37
@Italo: The reason is that the equation itself is a real analytic function of its arguments, which implies that any smooth solution is real-analytic. I believe that the original version of this for a single equation for one unknown (which is true in your case) is due to E. Hopf, but this was generalized to systems in a classic work by C. B. Morrey (American Jour. of Math., v. 80, 1958). You can also consult the references in mathoverflow.net/questions/33614 (Does elliptic regularity guarantee analytic solutions?). – Robert Bryant Oct 28 at 13:10
Thank you very much for the answer and the references! – Italo Oct 28 at 13:32
http://math.stackexchange.com/questions/226412/prove-the-regular-surface-with-2-geodesics-from-p-to-q-and-negative-curvature-c | # Prove the regular surface with 2 geodesics from p to q, and negative curvature cannot be simply connected.
What ideas/formulas are required to solve this? Exercise: If a and b are two geodesics from point p to q, how do you prove that M is not simply connected? M is a regular surface in R3 and has negative Gaussian curvature K. Points p & q are two distinct points on surface M.
I know a surface is simply connected if any simple closed curve can be shrunk to a point continuously in the set. So a sphere is simply connected, and a donut is not simply connected. And I know that since the curvature is negative and K=det(Sp)=w*z, the eigenvalues w and z have opposite signs. I also have the formula K=(eg-f^2)/(EG-F^2) based on the 1st fundamental forms and derivatives for solving for E,F,G,e,f,g. Not sure what, if anything, of that is used to solve it. And the only geodesic equations I can think of off hand are
d/dt(E u' + F v')=1/2[E_u u'^2 + 2F_u u'v' +G_u v'^2 ], d/dt(Fu'+Gv')=1/2[E_v u'^2 +2F_v u'v' +G_v v'^2 ], plus the formula for the geodesic curvature, and Gauss-Bonnet integrals, but not sure how those can be applied. If they can be applied, please let me know. Maybe the fact that the normal at any point on the geodesic arc lies along the normal vector to the surface M at that point helps? What ideas and equations are required to solve the exercise?
-
You are calling $a$ and $b$ geodesics, but you are also calling $a$ and $b$ principal curvatures. It would be good to use different notations for these things. – treble Nov 1 '12 at 0:17
thanks @treble, i fixed it to say w and z for principal curvatures. maybe it is not even necessary for me to mention them at all since Will seems to have solved the exercise without them – user47735 Nov 1 '12 at 16:55
## 1 Answer
EDIT, Thursday evening: a good illustration for the bit about negative curvature is the inner circle in the "hole" in a torus of revolution, which is a "closed geodesic" and cannot be contracted continuously to a point. The closed geodesic occurs in the (inner) part of the torus where the Gauss curvature is negative, while the outer portion has positive curvature. Meanwhile, no compact surface in $\mathbb R^3$ can have negative Gauss curvature everywhere, for which Neal gave a simple answer in comments. It is, of course, possible to have noncompact surfaces with negative curvature, such as the catenoid of revolution. However, for any such surface of revolution with negative curvature, the absolute value of the curvature goes to $0$ as we get far from the central axis. So Mariano asked whether there are any infinite surfaces in $\mathbb R^3$ with Gauss curvature that is bounded away from $0,$ and the answer is no, not for a $C^2$ surface. Link to the question MARIANO QUESTION.
ORIGINAL: This is Gauss-Bonnet for polygons on an oriented surface. The familiar case should be this: the area of a geodesic triangle on the unit sphere is $\alpha + \beta + \gamma - \pi,$ where $\alpha, \beta, \gamma$ are the angles at the vertices of the triangle. Next, we define the external angles $\theta_1 = \pi - \alpha, \; \theta_2 = \pi - \beta, \; \theta_3 = \pi - \gamma,$ according to Figure 4-25 of do Carmo:
The unit sphere has curvature $K=1,$ so the integral of $K$ over a polygon is just its area. Furthermore, the boundary arcs are geodesics, with geodesic curvature $k_g = 0.$ So the statement about this area is equivalent to $$\int \int_T \; K d \sigma \; + \theta_1 + \theta_2 + \theta_3 = 2 \pi.$$
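A quick numerical check of this spherical special case (the three vertex positions below are arbitrary choices, and the code is only an illustration): compute the solid angle of a spherical triangle directly and compare it with the angle sum minus π.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# Three arbitrary points on the unit sphere
a = unit(np.array([1.0, 0.2, 0.1]))
b = unit(np.array([0.1, 1.0, 0.3]))
c = unit(np.array([0.2, 0.1, 1.0]))

# Area (solid angle) via the Van Oosterom-Strackee formula
num = abs(np.dot(a, np.cross(b, c)))
den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
area = 2.0 * np.arctan2(num, den)

def angle(p, q, r):
    """Interior angle at vertex p between the great-circle arcs toward q and r."""
    tq = unit(q - np.dot(q, p) * p)
    tr = unit(r - np.dot(r, p) * p)
    return np.arccos(np.clip(np.dot(tq, tr), -1.0, 1.0))

alpha, beta, gamma = angle(a, b, c), angle(b, c, a), angle(c, a, b)
print(area, alpha + beta + gamma - np.pi)   # the two numbers agree
```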
Compare this with the general local Gauss–Bonnet formula for a simple region $T$ bounded by piecewise-regular arcs, which reads $$\int \int_T \; K d \sigma \; + \int_{\partial T} k_g \, ds \; + \sum_i \theta_i = 2 \pi .$$
You may or may not be told this: in the simply connected hyperbolic plane of constant curvature $-1,$ the area of a triangle is $\pi - (\alpha + \beta + \gamma),$ where $\alpha, \beta, \gamma$ are the angles at the vertices of the triangle. However, because $K = -1,$ this is once again the same as $$\int \int_T \; K d \sigma \; + \theta_1 + \theta_2 + \theta_3 = 2 \pi.$$
For your exercise, you are asked about an (oriented) "diangle" $D,$ with angles $0 \leq \alpha, \beta \leq \pi.$ If an angle is equal to $0,$ that is what the author calls a "cusp," see Figure 4-26, the right half. We take the external angles as before, and get $$\int \int_D \; K d \sigma \; + \theta_1 + \theta_2 = 2 \pi,$$ $$\int \int_D \; K d \sigma \; + (\pi - \alpha) + (\pi - \beta) = 2 \pi,$$ $$\int \int_D \; K d \sigma \; = \alpha + \beta.$$ What does this mean? We have $K < 0.$ The right hand side of the equation is nonnegative. If there is any interior to the "diangle," the integral is strictly negative. The only legal possibility is that the two geodesics are identical, both vertices are cusps, the two geodesic arcs are exactly the same, once out, once back.
Otherwise, and here is where the topology comes in, the two geodesic arcs do not bound a piece of surface as I was assuming, and the loop cannot be deformed homotopically to a single point, thus the surface is not simply connected.
Well, I hope that works for you. There is quite a bit of detail that could be added about orientation. Furthermore, I do not think a closed surface in $\mathbb R^3$ can have negative Gauss curvature. I know that a closed surface in $\mathbb R^3$ cannot have constant negative Gauss curvature.
-
A closed surface in $\mathbb{R}^3$ can have negative Gauss curvature at some points (e.g. on the line $s = 0$ of the embedding $(\cos{s}\cos{t},\cos{s}\sin{t},\sin{s})$ of $T^2\hookrightarrow\mathbb{R}^3$). A closed surface must have at least one point of positive Gauss curvature (enclose the surface in a sphere of minimal radius, which must be tangent to the surface at some point; the surface has positive curvature at that point). – Neal Nov 1 '12 at 2:56
@Neal, works for me. Right, the version I recall is the farthest point from the origin. – Will Jagy Nov 1 '12 at 2:58
But I'd guess, for some reason, that a closed surface (closed in that it is a closed subset of $R^3$) can be of negative curvature. I dunno! – Mariano Suárez-Alvarez♦ Nov 1 '12 at 3:03
@MarianoSuárez-Alvarez, I do not see how, but my dissertation was in minimal surfaces, which is a special case and may be too restrictive. I'm thinking Ian Agol would know. If there is an example, it is awfully damn crinkly. – Will Jagy Nov 1 '12 at 3:23
I asked on MO for Ian to answer :-P – Mariano Suárez-Alvarez♦ Nov 1 '12 at 3:35
http://math.stackexchange.com/questions/139040/a-counter-example-for-a-set-theoretic-problem?answertab=votes | # A counter-example for a set-theoretic problem?
I have proved the below conjecture for the special cases $n\in\{0,1,2\}$. The cases $n\ge 3$ (finite and infinite) are unknown.
If the following conjecture is true, I don't expect that you will be able to prove it, because I myself failed (and I am not a moron).
But maybe it has a counterexample? Can you provide a counter-example? (maybe for the case $n=3$?) I was too obsessed with the idea that my conjecture is true, trusting my intuition too much. Now I humble down and ask you to try to find a counter-example.
Or maybe I overestimate my skill to solve this kind of problems and someone can nevertheless prove it? (Even the special case $n=3$ would be interesting.)
Definition 1 A filtrator is a pair $\left( \mathfrak{A}; \mathfrak{Z} \right)$ of a poset $\mathfrak{A}$ and its subset $\mathfrak{Z}$.
Having fixed a filtrator, we define:
Definition 2 $\mathrm{up}\,x = \left\{ Y \in \mathfrak{Z} \hspace{0.5em} | \hspace{0.5em} Y \geqslant x \right\}$ for every $x \in \mathfrak{A}$.
Definition 3 $E^{\ast} K = \left\{ L \in \mathfrak{A} \hspace{0.5em} | \hspace{0.5em} \mathrm{up}\,L \subseteq K \right\}$ (upgrading the set $K$) for every $K \in \mathscr{P} \mathfrak{Z}$.
Definition 4 A free star on a join-semilattice $\mathfrak{A}$ with least element $0$ is a set $S$ such that $0 \not\in S$ and
$\displaystyle \forall A, B \in \mathfrak{A}: \left( A \cup B \in S \Leftrightarrow A \in S \vee B \in S \right) .$
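To get a feel for this definition, here is a small brute-force sketch (entirely my own illustration) that lists every free star on the four-element join-semilattice of subsets of {1, 2}, with union as join and the empty set as least element:

```python
from itertools import chain, combinations

# Join-semilattice: all subsets of {1, 2}; join = union; least element = frozenset()
elements = [frozenset(s) for s in ([], [1], [2], [1, 2])]
bottom = frozenset()

def is_free_star(S):
    if bottom in S:
        return False
    return all(((A | B) in S) == (A in S or B in S)
               for A in elements for B in elements)

def all_subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

for candidate in all_subsets(elements):
    S = set(candidate)
    if is_free_star(S):
        print([sorted(x) for x in sorted(S, key=sorted)])   # each free star, listed by its members
```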
Definition 5 Let $\mathfrak{A}$ be a family of posets, $f \in \mathscr{P} \prod \mathfrak{A} (\prod \mathfrak{A}$ has the order of function space of posets), $i \in \mathrm{dom}\,\mathfrak{A}$, $L \in \prod \mathfrak{A}|_{\left( \mathrm{dom}\,\mathfrak{A} \right) \setminus \left\{ i \right\}}$. Then
$\displaystyle \left( \mathrm{val}\,f \right)_i L = \left\{ X \in \mathfrak{A}_i \hspace{0.5em} | \hspace{0.5em} L \cup \left\{ (i ; X) \right\} \in f \right\} .$
Definition 6 Let $\mathfrak{A}$ be a family of posets. A multidimensional funcoid (or multifuncoid for short) of the form $\mathfrak{A}$ is an $f \in \mathscr{P} \prod \mathfrak{A}$ such that:
$\left( \mathrm{val} f \right)_i L$ is a free star for every $i \in \mathrm{dom} \mathfrak{A}$, $L \in \prod \mathfrak{A}|_{\left( \mathrm{dom} \mathfrak{A} \right) \setminus \left\{ i \right\}}$ and $f$ is an upper set.
$\mathfrak{A}^n$ is a function space over a poset $\mathfrak{A}$ that is $a\le b\Leftrightarrow \forall i\in n:a_i\le b_i$ for $a,b\in\mathfrak{A}^n$.
Conjecture Let $\mho$ be a set, $\mathfrak{F}$ be the set of filters on $\mho$ ordered reverse to set theoretic inclusion, $\mathfrak{P}$ be the set of principal filters on $\mho$, let $n$ be an index set. Consider the filtrator $\left( \mathfrak{F}^n ; \mathfrak{P}^n \right)$. If $f$ is a multifuncoid of the form $\mathfrak{P}^n$, then $E^{\ast} f$ is a multifuncoid of the form $\mathfrak{F}^n$.
My solution for the cases $n\in\{0,1,2\}$
-
Would you please make an effort to think of a more substantive subject line for your post? – MJD Apr 30 '12 at 19:49
@Mark Dominus: Done. – porton Apr 30 '12 at 19:51
What does f.o. abbreviate? Never mind: from your paper I see that it must be filters. – Brian M. Scott Apr 30 '12 at 20:36
@Brian M. Scott: Oh, I removed this unconventional term. Now I say: "the set of filters on ℧ ordered reverse to set theoretic inclusion." – porton Apr 30 '12 at 20:39
By upper set do you mean a set that if it contains $y$ then it contains all $x$ such that $x\geq y$? – Apostolos Apr 30 '12 at 22:28
## 1 Answer
Oh, I unexpectedly found a really simple proof for this conjecture.
The proof is presented in this my draft article: http://www.mathematics21.org/binaries/nary.pdf
Also my blog post: http://portonmath.wordpress.com/2012/05/01/upgrading-conjecture-proved/
-
You should expand a bit on the details, so that the answer is helpful to others. After some time, you can accept this answer. – Mariano Suárez-Alvarez♦ May 2 '12 at 6:19
I should admit that I can't understand why these posts are downvoted. As downvote needs 125 reps they are from good users. It may be a policy or.. I don't know..However I hardly believe it's natural.. – CutieKrait Feb 16 at 0:43
http://mathhelpforum.com/algebra/103912-find-three-consecutive-numbers-add-up-these-numbers.html | # Thread:
1. ## Find three consecutive numbers that add up to these numbers
for example: 141+142+143 add up to 426
Now what I need to do is explain the procedure (how did I get this kind of answer etc.). Sorry I really don't know how to find the formula..
4 consecutive, even numbers that add up to 1172
5 ~ that add up to 1172
2. Originally Posted by Julian.Ignacio
for example: 141+142+143 add up to 426
Now what I need to do is explain the procedure (how did I get this kind of answer etc.). Sorry I really don't know how to find the formula..
4 consecutive, even numbers that add up to 1172
5 ~ that add up to 1172
Hi Julian,
Consecutive integers can be defined as x, x + 1, x + 2, x + 3, etc., where x is an integer.
Consecutive even integers can be defined as x, x + 2, x + 4, x + 6, etc., where x is an even integer.
Let x = 1st even number
Let x + 2 = 2nd even number
Let x + 4 = 3rd even number
Let x + 6 = 4th even number
Now add them to obtain the desired sum.
x + x + 2 + x + 4 + x + 6 = 1172
Solve for x, and then determine the 2nd, 3rd and 4th even numbers.
The second part of your question is vague. There are no 5 consecutive even integers that add up to 1172.
x + x + 2 + x + 4 + x + 6 + x + 8 =1172
In the above equation, x is not an integer.
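Carrying both computations through, as a check of the setup above: $4x + 12 = 1172$ gives $x = 290$, so the four consecutive even numbers are $290 + 292 + 294 + 296 = 1172$.

For five consecutive even numbers, $5x + 20 = 1172$ gives $x = 230.4$, which is not an integer, so no such five numbers exist.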
3. You're right, the last one IS impossible...is putting parentheses useless in this calculation? And why?
4. Addition is associative (and commutative), so the parentheses are not required. Mathematically:
$a+(b+c) = (a+b)+c = b+(a+c) = a+b+c$
You can still put them in if you wish to make the working clearer though
http://mathhelpforum.com/advanced-statistics/187706-stochastic-process-expected-absolute-deviation-print.html | # Stochastic process, expected absolute deviation
• September 10th 2011, 08:00 AM
Mondreus
Stochastic process, expected absolute deviation
$X(t)$ is a Gaussian stochastic process with mean $m_{X(t)}=0$, ACF $r_X(\tau)=\frac{1}{1+\tau^2}$
Determine the expected absolute deviation $\epsilon = E(|X(t+1)-X(t)|)$
I have no idea where to begin except maybe setting $Y(t)=X(t+1)-X(t)$, but then I have to figure out what the PDF $f_{Y(t)}(y)$ is and I'm not sure how to proceed.
• September 12th 2011, 10:22 AM
Mondreus
Re: Stochastic process, expected absolute deviation
The solution is that since X(t) is a Gaussian process, all linear combinations of X(t) are also Gaussian processes. That means that the PDF $f_{Y(t)}(y)$ is also a standard Gaussian PDF. $m_Y = 0$ since $m_X = 0$, and the variance can be determined like this:
$Var(Y(t))=E\left\{ (Y(t)-m_Y)^2\right\}=E\left\{ Y(t)^2\right\}=E\left\{ (X(t+1)-X(t))^2\right\}=E\left\{(X(t+1))^2\right\}-2E\left\{ X(t+1)X(t)\right\}+E\left\{ (X(t))^2\right\}=2r_X(0)-2r_X(1)=1$
Since $Y$ is a zero-mean Gaussian variable with variance $\sigma^2 = 1$, we get $\epsilon = E(|Y|) = \sigma\sqrt{2/\pi} = \sqrt{2/\pi} \approx 0.80$.
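A quick Monte Carlo sanity check of this value, sampling the pair $(X(t), X(t+1))$ from the bivariate Gaussian distribution implied by the ACF above (the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# Covariance of (X(t), X(t+1)) from r_X(tau) = 1/(1 + tau^2): r(0) = 1, r(1) = 1/2
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=1_000_000)
diff = samples[:, 1] - samples[:, 0]

print(np.mean(np.abs(diff)))   # ~0.798
print(np.sqrt(2 / np.pi))      # sqrt(2/pi) = 0.7979...
```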
http://math.stackexchange.com/questions/181930/can-horns-algorithm-be-generalized-for-all-logic-expressions | # Can Horn's algorithm be generalized for all logic expressions?
The following is an algorithm to find truth assignments for Horn Formulas:
Input: A Horn Formula
Output: A satisfying assignment, if one exists
• Set all variables to false
• While there is an implication that is not satisfied: Set the right hand variable of the implication to true
• If all pure negative clauses are satisfied: Return the assignment.
• Else: Return "Formula is not satisfiable"
The book Algorithms defines a Horn Formula as a formula that consists of implications (left hand side is an AND of any number of positive literals and whose right hand is a single positive literals) and pure negative clauses, consisting of an OR of any number of negative literals.
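A minimal sketch of the procedure above in Python (the clause encoding, function name and example formula are my own choices for illustration, not taken from the book):

```python
def horn_sat(implications, negative_clauses):
    """implications: list of (body, head), body a set of variables, head a single variable;
    a bare fact 'x' is encoded as (set(), 'x').
    negative_clauses: list of sets of variables, each read as an OR of their negations.
    Returns a set of variables to make true, or None if the formula is unsatisfiable."""
    true_vars = set()                        # start with every variable set to false
    changed = True
    while changed:                           # while some implication is not satisfied
        changed = False
        for body, head in implications:
            if body <= true_vars and head not in true_vars:
                true_vars.add(head)          # set the right-hand variable to true
                changed = True
    # every pure negative clause needs at least one variable that is still false
    if all(any(v not in true_vars for v in clause) for clause in negative_clauses):
        return true_vars
    return None

# (w AND y AND z) => x,   (x AND z) => w,   x,   (NOT w OR NOT x OR NOT y)
implications = [({'w', 'y', 'z'}, 'x'), ({'x', 'z'}, 'w'), (set(), 'x')]
negative_clauses = [{'w', 'x', 'y'}]
print(horn_sat(implications, negative_clauses))   # {'x'}: only x needs to be true
```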
Will the above algorithm work to identify truth assignments for expressions that are not Horn Formulas? For example, if a logic expression consists of combinations of positive literals and negative literals (and therefore violates the Horn Formula requirement for all expressions to be either implications or pure negative clauses), would this algorithm still work?
E.g.: $(v_1 \wedge \bar{v_2} \wedge v_3) \Rightarrow v_4 \wedge v_5$
The above expression is clearly not a Horn Formula: would the algorithm still be valid with these kind of expressions? If the algorithm does work on any logical expression set, then what makes Horn Formulas special? (since the generalization of the above algorithm would mean a linear time solution for all truth assignments as opposed to a linear time solution for only Horn Formulas)
-
## 1 Answer
I doubt you can generalise your algorithm for arbitrary clauses, since arbitrary satisfiability is NP-complete whereas your algorithm is linear.
To understand why the algorithm works for Horn clauses let us identify a variable assignment $M$ with the set of variables that are assigned the value $true$. Consider a set $A$ of Horn clauses and let $A = D \cup N$, where $D$ is the set of definite clauses (clauses with exactly one positive literal) and $N$ is the set of all negative clauses.
Now any set of definite clauses has a least variable assignment (in the set inclusion order) that satisfies it. In the first part of your algorithm you construct the least variable assignment $M$ for $D$.
If $M'$ is a variable assignment that satisfies $A$, then it satisfies $D$ and so $M \subseteq M'$. But since the clauses in $N$ are all negative $M$ also satisfies $N$. Thus if $A$ is satisfiable then it is satisfied by $M$. So in the second part of the algorithm you just check whether $M$ satisfies $N$.
-
I understand why the algorithm works for Horn Clauses. Could you explain why it doesn't work for arbitrary clauses? (e.g. Possibly provide an example for arbitrary clauses) – user26649 Aug 13 '12 at 17:31
The clause $v_1 \lor v_2$ for example does not have a least model. If you transform it to the implication $\lnot v_1 \to v_2$, then your algorithm will assign the value $true$ to $v_2$. Thus according to the algorithm the set $\{\lnot v_1 \to v_2, \lnot v_2\}$ is not satisfiable. But that is not correct. – Levon Haykazyan Aug 13 '12 at 18:52
http://physics.stackexchange.com/questions/317/why-cant-the-outcome-of-a-qm-measurement-be-calculated-a-priori/382 | # Why can't the outcome of a QM measurement be calculated a-priori?
Quantum Mechanics is very successful in determining the overall statistical distribution of many measurements of the same process.
On the other hand, it is completely clueless in determining the outcome of a single measurement. It can only describe it as having a "random" outcome within the predicted distribution.
Where does this randomness come from? Has physics "given up" on the existence of microscopic physical laws by saying that single measurements are not bound to a physical law?
As a side note: repeating the same measurement over and over with the same apparatus makes the successive measurements non-independent, statistically speaking. There could be a hidden "stateful" mechanism influencing the results. Has any study of fundamental QM features been performed taking this into account? What was the outcome?
Edit: since 2 out of 3 questions seem to me not to answer my original question, maybe a clarification on the question itself will improve the quality of the page :-)
The question is about why single measurements have the values they have. Out of the, say, 1,000 measurements that make a successful QM experiment, why do the single measurements happen in that particular order? Why does the wave function collapse to a specific eigenvalue and not another? It's undeniable that this collapse (or projection) happens. Is this random? What is the source of this randomness?
In other words: what is the mechanism of choice?
Edit 2: More in particular you can refer to chapter 29 of "The road to reality" by Penrose, and with special interest page 809 where the Everett interpretation is discussed - including why it is, if not wrong, quite incomplete.
-
## 10 Answers
The short answer is that we do not know why the world is this way. There might eventually be theories which explain this, rather than the current ones which simply take it as axiomatic. Maybe these future theories will relate to what we currently call the holographic principle, for example.
There is also the apparently partially related fact of the quantization of elementary phenomena, e.g. that the measured spin of an elementary particle always is measured in integer or half integer values. We also do not know why the world is this way.
If we try to unify these two, the essential statistical aspect of quantum phenomena and the quantization of the phenomena themselves, the beginnings of a new theory start to emerge. See papers by Tomasz Paterek, Borivoje Dakic, Caslav Brukner, Anton Zeilinger, and others for details.
http://arxiv.org/abs/0804.1423 and
http://www.univie.ac.at/qfp/publications3/pdffiles/Paterek_Logical%20independence%20and%20quantum%20randomness.pdf
These papers present phenomenological (preliminary) theories in which logical propositions about elementary phenomena somehow can only carry 1 or a few bits of information.
Thanks for asking this question. It was a pleasure to find these papers.
-
Thanks for your answer and the nice links, the second especially. – Sklivvz♦ Nov 7 '10 at 16:46
First of all, let me start out by pointing out to you that there have been experimental violations of Bell's inequalities. This provides damning evidence against local hidden variable models of quantum mechanics, and thus essentially proves that the random outcomes are an essential feature of quantum mechanics. If the outcomes of measurements in every basis were predetermined, we should not be able to violate Bell's inequality.
One way of seeing why this is in fact a reasonable state of affairs is to consider Schroedinger's cat. Evolution of closed quantum systems is unitary, and hence entirely deterministic. For example, in the case of the cat, at some point in time we have a state of the system which is a superposition of (atom decayed and cat dead) and (atom undecayed and cat alive) with equal amplitude for each. Thus far quantum mechanics predicts the exact state of the system.

We need to consider what happens when we open the box and look at the cat. When we do this, the system should then be in a superposition of (atom decayed, cat dead, box open, you aware cat is dead) and (atom undecayed, cat alive, box open, you aware cat is alive). Clearly as time goes on the two branches of the wave function diverge further as the consequences of whether the cat is alive or dead propagate out into the world, and as a result no interference is likely possible. Thus there are two branches of the wave function with different configurations of the world. If you believe the Everett interpretation of quantum mechanics then both branches continue to exist indefinitely.

Clearly our thinking depends on whether we have seen the cat alive or dead, so that we ourselves are in a state (seen cat dead and aware we have seen the cat dead and not seen the cat alive) or (seen cat alive and aware we have seen the cat alive and not seen the cat dead). Thus even if we exist in a superposition we are only aware of a classical outcome to the experiment. Quantum mechanics allows us to calculate the exact wavefunction which is the outcome of the experiment; however, it cannot tell us a priori which branch we will find ourselves aware of after the experiment. This isn't really a shortcoming of the mathematical framework, but rather of our inability to perceive ourselves in anything other than classical states.
-
I understand your reasoning. However, moving along with it - why do we perceive a particular classical state and not another? Where does that choice come from? – Sklivvz♦ Nov 7 '10 at 16:00
@Sklivvz Well, this all depends on interpretations, but sticking with the Everett interpretation it is that we perceive both in different branches of the wave function but each branch is unaware of the other. – Joe Fitzsimons Nov 7 '10 at 16:05
This only answers "where do all the other outcomes go?" and not "why do I perceive a particular result and not the other?", which is what the question is about. – Sklivvz♦ Nov 7 '10 at 16:13
There have been a lot of papers written on this, but they are philosophy not physics. The practical answer is that there are multiple possibilities, and by the pigeon hole principle you must get one of them. For the probabilities, etc., see Everett's thesis or more recent review papers. – Joe Fitzsimons Nov 7 '10 at 16:18
My two lepta on this mainly conceptual and semantic problem:
It seems that people have an initial position/desire: those who want/expect/believe that measurements should be predictable to the last decimal point and those who are pragmatic and accept that maybe they are not. The first want an explanation of why there exists unpredictability.
An experimentalist knows that measurements are predictable within errors, which errors can sometimes be very large. Take wave mechanics, classical. Try to predict fronts in climate, a completely classical problem. The weather report is a daily reminder how large the uncertainties are in classical problems, in principle completely deterministic. Which leads to the theory of deterministic chaos. So predictability is a concept in the head of the questioner, as far as quantum or classical measurements goes. The difference is that in classical physics we believe we know why there is unpredictability.
Has physics given up on the predictability of the throw of a dice? Taken to extremes trying to find the physics of the metastable state of the fall of the dice we come again to errors and accuracy of measurement.
Within measurement errors at the orders of magnitude we live in, nano to kilometers, quantum mechanics is very predictive, as evinced by all the marvelous ways we communicate through this board, even in achieving lasing and superconductivity. It is only when probing the very small that the theoretical unpredictability of individual measurements in QM enters. So small that "intuitions" and beliefs can become dominant over measurement and errors. And there, according to the inherent beliefs of each observer, the desire to have a classical predictability framework or the willingness to explore new concepts plays a role for a physicist, in whether he/she will obsess about this conundrum or live with it until a TOE arrives.
-
The indeterminism does not originate from Quantum Mechanics. It has a wider philosophical origin.
For example, consider the multi-world interpretation of quantum mechanics. It is a completely deterministic theory which describes unitary, reversible and predictable evolution of a quantum system or the Universe (Multiverse in terms of MWI) as a whole.
But any actual experiment will still show uncertainty. Why? According to the MWI, with each act of measurement the observer gets split into two copies, each of which experiences different results.
One can thus formulate a similar problem but without involving the quantum mechanics: what would one experience if somebody creates an exact copy of him? Will he still experience the old body or the newer one? What happens if the old body is killed?
There can be formulated several related thought experiments:
1. There is a teleportation device that scans your body, sends that information to the receiver which re-creates an exact copy of your body, and then the original body is destroyed. Would a reasonable person use such a teleporter, even if their friends have used it and say it's great?
2. Suppose the medicine of the future becomes very advanced. Now you are offered a game: your brain will be split into two parts, with one of them left in your body and the other transplanted into another body, and then both fully regenerated. The memories of both parts are completely (or mostly) restored. Now one of the resulting people is given a billion dollars while the other is sent to life imprisonment. Should you agree to such a game? What is the probability that you will find yourself as a billionaire or as a prisoner after the operation? Should you agree if someone cuts off not half of the brain but a smaller part? What about other parts of the body?
This leads to the yet unresolved philosophical questions which exist from the very ancient times when people knew nothing about quantum mechanics.
Here is a list of open philosophical problems that arise in the course of the thought experiment:
• Hard problem of consciousness (philosophical zombies)
• Problem of induction
• Qualia problem
• Ship of Theseus paradox
-
I agree on the wider implications, however the MWI does not solve the problem I presented. Inserting fictitious universes does not tell us: 1) why the split exists and when; 2) why there is a particular outcome at all within our universe; 3) what is the mechanism of choice of that outcome – Sklivvz♦ Feb 18 '11 at 13:21
The problem is with split of the observer into two similar copies, whether the reason is related to quantum mechanics or not is not significant. What is significant is that similar problems arise in other fields, including, say, biology. – Anixx Feb 18 '11 at 13:29
Yes. The split is in MWI. But the indeterminism occurs regardless of what causes the split - QM or another mechanism. – Anixx Feb 18 '11 at 15:29
"All observers will see the same result of an experiment which is as objective as reality can be." - no, this is not correct in QM. Each observer sees his own part of reality, which may be inconsistent with what other observers see. "Furthermore MWI can't be reversible, because an act of measurement (or whatever it is called in other interpretations) is clearly not time-reversible." - there is no "act of measurement" or wavefunction collapse in MWI. All of MWI indeed completely reversible in time and unitary. – Anixx Feb 19 '11 at 2:00
"This is an observable physical feature: the state of a particle is irreversibly changed by a measurement." This is only because the observer gets entangled with the particle. From a point of view of another observer the situation is completely reversible. "Finally, MWI does not explain why we only one outcome is measured and why that one." MWI explains why only one outcome is is measured. It does not explain why that one indeed. But the same problem arise already outside the QM. Re-read the answer please. – Anixx Feb 19 '11 at 2:01
The mechanism of choice in one particular instant of a quantum-mechanical experiment is unknown in all physics today - it's just that this fact for many physicists is too uncomfortable to accept or to admit.
Einstein couldn't accept it; Bohr and Feynman admitted it, though. The question leads us to the never-ending Bohr–Einstein debates.
A fundamental fact at the heart of physics is that wave-functions $\psi$ cannot be measured, only their absolute square $|\psi|^2$. Pure logic forces us to admit that statements about wave functions are not statements about a perceivable reality. Quantum theory is a theory that explains how to calculate the outcome of a measurement; it cannot tell us what happened before or during the measurement, because a wave function is an idea, an inaccessible hypothesis.
-
You might want to read about Bohmian mechanics. Bohmian mechanics is perfectly deterministic. The reason randomness appears is explained in the same way as the appearance of randomness in thermodynamic equilibrium.
Here's some further reading with links to several papers at the bottom of the page:
http://plato.stanford.edu/entries/qm-bohm/#qr
-
The price Bohmian mechanics pays to be deterministic is that it is non-local as soon as 2 particles or more are used. – Frédéric Grosshans Nov 16 '10 at 13:42
The usual framework of quantum mechanics is also non-local. See violations of Bell inequalities. The difference with Bohmian mechanics is that in the latter, the non-locality is made more explicit. – Raskolnikov Nov 16 '10 at 14:16
We will end up embracing non-locality just as we got used to quantum measurement with its probabilities. My two cents. – user346 Feb 19 '11 at 7:10
Bohmian quantum mechanics cannot be made consistent with special relativity. – Revo Aug 12 '11 at 11:55
Quantum mechanics is inherently (and was developed as) a non-deterministic (stochastic) theory. The answer to your question lies in one of the postulates of quantum mechanics (see this page for a full description):
In any measurement of the observable associated with operator $\hat{A}$, the only values that will ever be observed are the eigenvalues $a$, which satisfy the eigenvalue equation.
Since the wavefunction can be expanded in the eigenstates of the operator being measured, and the modulus squared of each expansion coefficient gives the probability of observing the corresponding eigenvalue, we see that each measurement result has an associated probability, but no single outcome is singled out in advance.
-
There's nothing wrong in your answer, but it doesn't help with any of the questions I posed. Is there anything else you wanted to add? :-) – Sklivvz♦ Nov 7 '10 at 16:06
I think it does, just implicitly. :) To clarify, quantum mechanics says only that a measurement can take one of a given (finite or infinite) set of values. There is still a physical law governing it, just there's no deterministic (billiard-balls style) law. – Noldorin Nov 7 '10 at 16:23
Which is what I say in the second paragraph of my question :-) Surely though, the indetermination predicted by QM is wrong - in the sense that single measurements do have specific values, so some physical process must be responsible for it. – Sklivvz♦ Nov 7 '10 at 16:45
Indeterminism is perfectly valid, if unsatisfying to many. Single measurements produce specific values of the measured variable. The mechanism by which this is done is usually considered to be "wavefunction collapse" (under the Copenhagen interpretation) or "branching of the universe" (under the many-worlds interpretation). – Noldorin Nov 7 '10 at 17:34
About your last paragraph: I don't see the link between "since [...] this wavefunction can be expanded on its eigenstates". – Cedric H. Nov 7 '10 at 21:42
I accidentally found a partial answer to my question on arxiv. It's still completely theoretical, but the answer can be summarized like this.
It is assumed that some form of event horizon exists on a microscopic level. This event horizon prevents some information from escaping. With these premises, a QFT-like theory can be developed by observing the event horizon. The source of the randomness is the fact that information, as seen from outside the event horizon, is incomplete - the assumption being that randomness is the opposite of information.
As the field enters the Rindler horizon for the observer R, the observer shall not get information about future configurations of φ any more, and all that the observer can expect about φ evolution beyond the horizon is a probabilistic distribution P[φ] of φ beyond the horizon. Already known information about φ acts as constraints for the distribution. I suggested that this ignorance is the origin of quantum randomness. Physics in the F wedge should reflect the ignorance of the observer in the R wedge, if information is fundamental.
-
This paper looks pretty cranky to my eyes. – j.c. Nov 13 '10 at 16:58
It is not true that this mechanism is mysterious; however, most physicists don't have the time to ponder these philosophical questions, and they just prefer to leave the random nature of quantum mechanics as an 'axiom'.
To understand how the randomness happens, first let's make a Gedanken experiment where our physical bodies (including our brains) are classically described; they all have well-defined positions and momenta, and hence any indeterminism in their evolution is purely for practical reasons, and not a matter of principle.
So in this hypothetical classical universe, teleportation star trek style is a completely legal operation; you can read all the physical microstate of any person, and write it to another place.
But in this classical universe, it is also possible to copy a person microstate: so let's walk on the consequences of such experiment.
So our experimental configuration consists of two separate rooms with big posters on the wall: one of the rooms contains a '+' and the other room contains a '-'. Now we send our test individual into the teleportation chamber, and we will disintegrate this person and create two copies of him, one in each room.
Now the question arises; if you are the test individual, what is that you are going to experience? well, the truth is that there are not that many options for our experiences after we enter the teleportation chamber:
1) we don't experience anything afterwards, because we have been disintegrated, so we are dead; our goblin soul went away and in the world there are two zombie copies of you that don't have a 'soul'
2) we experience appearing in a room with a big poster with a '+'
3) we experience appearing in a room with a big poster with a '-'
So if we discard 1) (I don't want to argue about religion with anyone here, I'll just say that 1) is preposterous) we are left with two options: 2) and 3)
So the important thing here is that, even in this classical, deterministic universe, the fact of being able to copy a conscious being means that some observers/conscious beings will experience events that are fundamentally random and intrinsically non-deterministic, even if everything else is.
You could argue that there are underlying 'physical' reasons why the 'real you' went to 2) instead of 3) or vice versa; you could say that the copy in 2) was more 'perfect' than the one in 3) and hence the real you goes there. But the truth is that these arguments are not fundamental; you are basically trying to take to heart the fact that there is a single 'you'.
Which brings me back to your question: how does all this apply to our world? After all, copying a living being is disallowed (there is even a no-cloning theorem in QM). However, this is not entirely true. The Many-Worlds interpretation of QM is basically taking the determinism of QM to the extreme; if we allow a quantum superposition to couple to a quantum conscious entity (an observer), the quantum observer will undergo a split and become entangled with the physical system he is measuring; he has physically become two separate quantum observers (mutually non-interacting copies, hence the no-cloning theorem doesn't apply), experiencing different outcomes, which individually seem random to each one of them.
-
-1 This is completely off topic – Sklivvz♦ Feb 18 '11 at 16:45
no, it absolutely is not – lurscher Feb 18 '11 at 16:46
Re-read my question, especially the second part. And please remove the downvotes (if it was you) because they should reflect what you think of the posts and not what you think of me. Evidently you thought that the OP, at least, was good enough to deserve an answer... – Sklivvz♦ Feb 18 '11 at 16:57
yes, you made a very good question, so please have the time to read the full answer. I'm just making some motivational argument before making the connection to Everett's MWI. Anixx also made a excellent contribution explaining this same idea; there is no mechanism of choice, or rather; the mechanism of choice is basically the same that is making you being you and not me and viceversa – lurscher Feb 18 '11 at 17:01
@lurscher it is not really necessary to make an exact copy, as in my example with the divided brain. In fact such operations were conducted in the 1950s or 1960s to cure some diseases: the connections between the halves of the brain (the corpus callosum) were severed and the patients seemingly got better. These operations were stopped later when it was discovered that such treatment leads to the appearance of two different personalities in one brain (associated with the right and left hemispheres respectively), each of which controlled one eye, one hand, etc. – Anixx Feb 19 '11 at 10:40
Yes - physics HAS given up on the existence of microscopic physical laws that you assume would provide the outcome. And what evidence is there that Nature adheres to the idea that P and Q must commute? Physics never established such an idea to begin with. It was always just a convenient assumption, until it turned out that Maxwell simply can not explain atoms. The world can make perfectly good mathematical sense where particles are actually oscillators with a complex phase and possible paths, rather than "mass-points" with classical deterministic behavior. You ask if there is a way to pick out one of the possibilities, but evidently Nature does not know or care. The idea that there is a 'block spacetime' where all events were determined appears to be false.
-
http://nrich.maths.org/308&part= | ### More Mods
What is the units digit for the number 123^(456) ?
### Mod 3
Prove that if a^2+b^2 is a multiple of 3 then both a and b are multiples of 3.
### Novemberish
a) A four digit number (in base 10) aabb is a perfect square. Discuss ways of systematically finding this number. (b) Prove that 11^{10}-1 is divisible by 100.
# Days and Dates
##### Stage: 4 Challenge Level:
If today is Monday we know that in 702 days time (that is in 100 weeks and 2 days time) it will be Wednesday. This is an example of "clock" or "modular" arithmetic.
What day will it be in 15 days? 26 days? 234 days?
In 2, 9, 16 and 23 days from now, it will be a Wednesday. What other numbers of days from now will be Wednesdays? Can you generalise what you have noticed?
Choose a pair of numbers and find the remainders when you divide by 7. Find the remainder when you divide the total by 7. For example
$15 \div 7 = 2$ remainder $1$
$26 \div 7 = 3$ remainder $5$
$15 + 26 = 41$
$41 \div 7 = 5$ remainder $6$
Try some more numbers. What do you notice about the remainders? Can you justify what you see?
Now find the remainder when you divide the product of 15 and 26 by 7. What happens? Can you justify what you see? Try some more numbers.
How can I use these ideas to work out what day my birthday will be on next year?
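If you want to check these remainder facts with a computer, here is a small illustrative sketch (it is not part of the original problem; the numbers are just the examples used above):

```python
# Check the "clock arithmetic" facts above with the example numbers.

def day_after(start_day, days_ahead):
    """Weekday reached days_ahead days after start_day, counting modulo 7."""
    week = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]
    return week[(week.index(start_day) + days_ahead) % 7]

print(day_after("Monday", 702))          # Wednesday, as in the opening example

a, b = 15, 26
# The remainder of a sum depends only on the remainders of the parts...
assert (a + b) % 7 == ((a % 7) + (b % 7)) % 7
# ...and the same holds for a product.
assert (a * b) % 7 == ((a % 7) * (b % 7)) % 7
print((a + b) % 7, (a * b) % 7)          # 6 and 5
```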
http://mathoverflow.net/questions/48760/decomposing-an-arbitrary-unitary-representation-of-a-connected-nilpotent-lie-grou | ## Decomposing an arbitrary unitary representation of a connected nilpotent Lie group in terms of its irreps
For a locally compact (Hausdorff) abelian group $G$ we have following theorem (see e.g. Folland):
"For every (strongly continuous) unitary representation $(\pi,\mathcal{H_{\pi}})$ of $G$, there exists a unique regular $\mathcal{H}_{\pi}$-projection-valued measure $P$ on $\hat{G}$ such that $\pi$ decomposes as:
$\pi (g)=\int_{\hat{G}}\left\langle g,\chi\right\rangle dP\left(\chi\right)$ for every $g \in G$."
To which extent is this theorem true for nilpotent Lie groups (say, connected and simply connected)? That is, do we have a canonical decomposition of a unitary representation of such a group in terms of its irreducible unireps and some sort of measure on the unitary dual?
The proof of the above theorem has two major ingredients: the identification of the spectrum of $L^1 (G)$ with $\hat{G}$ when $G$ is abelian and the spectral theory of commutative Banach algebras. It is not clear to me whether any of these ingredients has a suitable analogue in the nilpotent case. Furthermore, in this case $\hat{G}$ is not a group or even a Hausdorff space, plus one would have to integrate an operator-valued function which assumes operators acting on different Hilbert spaces as its values. Thus I am not so sure if the standard theory of projection-valued measures can be so easily applied in this case.
-
I'm only a beginner in the relevant theory, but nilpotent Lie groups are Type I, so their unitary duals are better behaved than the general case. There is a kind of operator-valued Fourier transform which applies to these situations. Hopefully someone who knows this stuff better will come along and leave an answer or a reference. – Yemon Choi Dec 9 2010 at 19:01
(Most of) Theorem 14.10.5 of Wallach, Real Reductive Groups II: Let $G$ be a locally compact separable topological group, and $\pi$ be a unitary representation of $G$. Then there exists a Borel measure $\mu$ on $\hat G$, and a direct integral of representations of $G$, $\int_{\hat G}\pi_s\ d\mu(s)$, such that $\pi$ is unitarily equivalent to $\int_{\hat G}\pi_s\ d\mu(s)$. – BR Dec 9 2010 at 21:03
This may be a misfire since I am really only familiar with the $p$-adic case, when it is not necessary to work with unitary representations... but are you familiar with Kirillov's orbit method? It allows you to identify the dual space $\widehat{G}$ of a unipotent group with the space of $G$-orbits in $\mathfrak{g}^∗$ (the dual vector space to the associated Lie algebra $\mathfrak{g}$ with the so-called coadjoint $G$-action by conjugation). – Justin Campbell Dec 9 2010 at 21:14
To add to my comment, your worries in last paragraph also apply to the real reductive group case, where the decomposition is known. So while there are difficulties, I don't think they are the ones you are anticipating. I don't know much about nilpotent groups, though. Have you looked at the representation theory of Heisenberg groups? It seems to be fairly well documented. – BR Dec 10 2010 at 0:02
BR - This looks along the lines of what I'm looking for, as long as the this decomposition is sufficiently canonical (i.e. the measure and unitary isomorphism are unique in some strong sense), I'll take a look at Wallach's book, so thanks. Justin - if you know of a good reference on the subject, that will be helpful. – Mark Schwarzmann Dec 10 2010 at 11:07
## 2 Answers
First some bad news: such a decomposition only exists for groups which are said to be of Type I (some notion coming from the theory of von Neumann algebras). There are examples of topological groups which are not of Type I and have representations which can be decomposed in two different ways (even with disjoint support).
Next the good news: luckily, every connected nilpotent Lie group is of Type I. The spectral measure you are looking for indeed does exist, and in some cases (e.g., the regular representation) it is understood explicitly. See the book by Corwin and Greenleaf or the book by Pukanszky. (It is easy to find them on MathSciNet.)
-
Brad Currey has actually obtained a pretty explicit description of the Plancherel measure for general exponential solvable Lie groups, which is used to decompose the representation into a direct integral of irreducibles. Notice that nilpotent Lie groups are part of this larger class of exponential Lie groups. He uses the technique of jump indexes and the orbit method. Here is his paper
http://mathcs.slu.edu/~currey/030401-Currey-1.pdf
Vignon S. Oussa
-
Does Currey's approach for exponential solvable Lie groups yield more information, when specialized to the case of connected nilpotent Lie groups, than the work of Pukanszky which that paper references? Or is it just "something more general"? – Yemon Choi Mar 8 2011 at 8:19
Yemon, His formula is very precise and very explicit unlike most Plancherel formulas for Nilpotent Lie groups available in the Literature. Another resource if you want would be "Representations of nilpotent Lie groups and their applications" a book written by Corwin and Greenleaf. Vignon S. Oussa – Vignon Mar 10 2011 at 3:52
Thanks Vignon - I may have a look, since I find what little I've tried to read of Pukanszky's work both difficult and opaque – Yemon Choi Mar 10 2011 at 20:22
http://scientopia.org/blogs/goodmath/tag/thermodynamics/
Second Law Silliness from Sewell
Dec 12 2011 Published by MarkCC under Bad Physics, Intelligent Design, Uncategorized
So, via Panda's Thumb, I hear that Granville Sewell is up to his old hijinks. Sewell is a classic creationist crackpot, who's known for two things.
First, he's known for chronically recycling the old "second law of thermodynamics" garbage. And second, he's known for building arguments based on "thought experiments" - where instead of doing experiments, he just makes up the experiments and the results.
The second-law crankery is really annoying. It's one of the oldest creationist pseudo-scientific schticks around, and it's such a terrible argument. It's also a sort-of pet peeve of mine, because I hate the way that people generally respond to it. It's not that the common response is wrong - but rather that the common responses focus on one error, while neglecting to point out that there are many deeper issues with it.
In case you've been hiding under a rock, the creationist argument is basically:
1. The second law of thermodynamics says that disorder always increases.
2. Evolution produces highly-ordered complexity via a natural process.
3. Therefore, evolution must be impossible, because you can't create order.
The first problem with this argument is very simple. The second law of thermodynamics does not say that disorder always increases. It's a classic example of my old maxim: the worst math is no math. The second law of thermodynamics doesn't say anything as fuzzy as "you can't create order". It's a precise, mathematical statement. The second law of thermodynamics says that in a closed system:
$$dS \ge \frac{\delta Q}{T}$$

where:

1. $S$ is the entropy in a system,
2. $Q$ is the amount of heat transferred in an interaction, and
3. $T$ is the temperature of the system.
Translated into English, that basically says that in any interaction that involves the transfer of heat, the entropy of the system cannot possibly be reduced.
Note well - there is no mention of "chaos" or "disorder" in these statements: The second law is a statement about the way that energy can be used. It basically says that when you try to use energy, some of that energy is inevitably lost in the process of using it.
Talking about "chaos", "order", "disorder" - those are all metaphors. Entropy is a difficult concept. It doesn't really have a particularly good intuitive meaning. It means something like "energy lost into forms that can't be used to do work" - but that's still a poor attempt to capture it in metaphor. The reason that people use order and disorder comes from a way of thinking about energy: if I can extract energy from burning gasoline to spin the wheels of my car, the process of spinning my wheels is very organized - it's something that I can see as a structured application of energy - or, stretching the metaphor a bit, the energy that spins the wheels is structured. On the other hand, the "waste" from burning the gas - the heating of the engine parts, the energy caught in the warmth of the exhaust - that's just random and useless. It's "chaotic".
So when a creationist says that the second law of thermodynamics says you can't create order, they're full of shit. The second law doesn't say that - not in any shape or form. You don't need to get into the whole "open system/closed system" stuff to dispute it; it simply doesn't say what they claim it says.
But let's not stop there. Even if you accept that the mathematical statement of the second law really did say that chaos always increases, that still has nothing to do with evolution. Look back at the equation. What it says is that in a closed system, in any interaction, the total entropy must increase. Even if you accept that entropy means chaos, all that it says is that in any interaction, the total entropy must increase.
It doesn't say that you can't create order. It says that the cumulative end result of any interaction must increase entropy. Want to build a house? Of course you can do it without violating the second law. But to build that house, you need to cut down trees, dig holes, lay foundations, cut wood, pour concrete, put things together. All of those things use a lot of energy. And in each minute interaction, you're expending energy in ways that increase entropy. If the creationist interpretation of the second law were true, you couldn't build a house, because building a house involves creating something structured - creating order.
Similarly, if you look at a living cell, it does a whole lot of highly ordered, highly structured things. In order to do those things, it uses energy. And in the process of using that energy, it creates entropy. In terms of order and chaos, the cell uses energy to create order, but in the process of doing so it creates wastes - waste heat, and waste chemicals. It converts high-energy structured molecules into lower-energy molecules, converting things with energetic structure to things without. Look at all of the waste that's produced by a living cell, and you'll find that it does produce a net increase in entropy. Once again, if the creationists were right, then you wouldn't need to worry about whether evolution was possible under thermodynamics - because life wouldn't be possible.
In fact, if the creationists were right, the existence of planets, stars, and galaxies wouldn't be possible - because a galaxy full of stars with planets is far less chaotic than a loose cloud of hydrogen.
Once again, we don't even need to consider the whole closed system/open system distinction, because even if we treat earth as a closed system, their arguments are wrong. Life doesn't really defy the laws of thermodynamics - it produces entropy exactly as it should.
But the creationist second-law argument is even worse than that.
The second-law argument is that the fact that DNA "encodes information", and that the amount of information "encoded" in DNA increases as a result of the evolutionary process means that evolution violates the second law.
This absolutely doesn't require bringing in any open/closed system discussions. Doing that is just a distraction which allows the creationist to sneak their real argument underneath.
The real point is: DNA is a highly structured molecule. No disagreement there. But so what? In the life of an organism, there are virtually un-countable numbers of energetic interactions, all of which result in a net increase in the amount of entropy. Why on earth would adding a bunch of links to a DNA chain completely outweigh those? In fact, changing the DNA of an organism is just another entropy increasing event. The chemical processes in the cell that create DNA strands consume energy, and use that energy to produce molecules like DNA, producing entropy along the way, just like pretty much every other chemical process in the universe.
The creationist argument relies on a bunch of sloppy handwaves: "entropy" is disorder; "you can't create order", "DNA is ordered". In fact, evolution has no problem with respect to entropy: one way of viewing evolution is that it's a process of creating ever more effective entropy-generators.
Now we can get to Sewell and his arguments, and you can see how perfectly they match what I've been talking about.
Imagine a high school science teacher renting a video showing a tornado sweeping through a town, turning houses and cars into rubble. When she attempts to show it to her students, she accidentally runs the video backward. As Ford predicts, the students laugh and say, the video is going backwards! The teacher doesn’t want to admit her mistake, so she says: “No, the video is not really going backward. It only looks like it is because it appears that the second law is being violated. And of course entropy is decreasing in this video, but tornados derive their power from the sun, and the increase in entropy on the sun is far greater than the decrease seen on this video, so there is no conflict with the second law.” “In fact,” the teacher continues, “meteorologists can explain everything that is happening in this video,” and she proceeds to give some long, detailed, hastily improvised scientific theories on how tornados, under the right conditions, really can construct houses and cars. At the end of the explanation, one student says, “I don’t want to argue with scientists, but wouldn’t it be a lot easier to explain if you ran the video the other way?”
Now imagine a professor describing the final project for students in his evolutionary biology class. “Here are two pictures,” he says.
“One is a drawing of what the Earth must have looked like soon after it formed. The other is a picture of New York City today, with tall buildings full of intelligent humans, computers, TV sets and telephones, with libraries full of science texts and novels, and jet airplanes flying overhead. Your assignment is to explain how we got from picture one to picture two, and why this did not violate the second law of thermodynamics. You should explain that 3 or 4 billion years ago a collection of atoms formed by pure chance that was able to duplicate itself, and these complex collections of atoms were able to pass their complex structures on to their descendants generation after generation, even correcting errors. Explain how, over a very long time, the accumulation of genetic accidents resulted in greater and greater information content in the DNA of these more and more complicated collections of atoms, and how eventually something called “intelligence” allowed some of these collections of atoms to design buildings and computers and TV sets, and write encyclopedias and science texts. But be sure to point out that while none of this would have been possible in an isolated system, the Earth is an open system, and entropy can decrease in an open system as long as the decreases are compensated by increases outside the system. Energy from the sun is what made all of this possible, and while the origin and evolution of life may have resulted in some small decrease in entropy here, the increase in entropy on the sun easily compensates this tiny decrease. The sun should play a central role in your essay.”
When one student turns in his essay some days later, he has written,
“A few years after picture one was taken, the sun exploded into a supernova, all humans and other animals died, their bodies decayed, and their cells decomposed into simple organic and inorganic compounds. Most of the buildings collapsed immediately into rubble, those that didn’t, crumbled eventually. Most of the computers and TV sets inside were smashed into scrap metal, even those that weren’t, gradually turned into piles of rust, most of the books in the libraries burned up, the rest rotted over time, and you can see see the result in picture two.”
The professor says, “You have switched the pictures!” “I know,” says the student. “But it was so much easier to explain that way.”
Evolution is a movie running backward, that is what makes it so different from other phenomena in our universe, and why it demands a very different sort of explanation.
This is a perfect example of both of Sewell's usual techniques.
First, the essential argument here is rubbish. It's the usual "second-law means that you can't create order", even though that's not what it says, followed by a rather shallow and pointless response to the open/closed system stuff.
And the second part is what makes Sewell Sewell. He can't actually make his own arguments. No, that's much too hard. So he creates fake people, and plays out a story using his fake people and having them make fake arguments, and then uses the people in his story to illustrate his argument. It's a technique that I haven't seen used so consistently since I read Ayn Rand in high school.
http://mathoverflow.net/questions/96041?sort=oldest | ## Analog for Tate-Shafarevich group
Is there an analog for the Tate-Shafarevich group for hyperelliptic curves?
References to such an analog would be nice if one exists.
EDIT: Referring to Noam Elkies' comment, are there any finiteness conjectures for such an analog?
-
There are Tate-Shafarevich groups for abelian varieties of any dimension. If $C$ is a curve of genus 2 or more (whether or not it is hyperelliptic), strictly speaking $C$ doesn't have a Tate-Shafarevich group, but its Jacobian $J(C)$ does, and one sometimes calls that group the "Tate-Shafarevich group of the curve $C$" by abuse of terminology. – Noam D. Elkies May 5 2012 at 3:25
Dear Eugene, Yes, it is conjectured that Sha of any abelian variety over a number field is finite. Regards, Matthew – Emerton May 5 2012 at 6:36
## 1 Answer
There are Tate-Shafarevich groups for every number field $K$ and every smooth locally algebraic group scheme $G$ over $X \setminus S$ where $X$ is the spectrum of the ring of integers in $K$ and $S$ is a finite set of places containing all infinite places. In this case, the Tate-Shafarevich "groups" (actually they are only pointed sets in general) are defined as $$Ш(G) := \ker\big(H^1(K,G) \to \prod_v H^1(K_v,G)\big)$$ where $v$ runs over all places of $K$ and $H^1$ is the non-abelian cohomology.
This definition and some analysis of the set can be found in the very interesting paper B. Mazur: On the passage from local to global in number theory, III §15.
Concerning finiteness conjectures: Of interest may be Corollary 1 in Mazur's paper which states that $Ш(G)$ is finite if the Tate-Shafarevich conjecture holds for abelian varieties over $K$, i.e. $Ш(A/K)$ is finite for each abelian variety defined over $K$ and a particular group of automorphism of $G$ is descent.
-
Actually, in a lot of cases one can still get an abelian group structure on these pointed sets. See the work of Kneser and Borovoi. For example, in "Abelian Galois Cohomology of Reductive Groups", Borovoi shows how to use the functor $H^1_{ab}$ to give an abelian group structure on $Ш(G)$ for reductive groups $G$ over fields of characteristic 0. – Dror Speiser May 5 2012 at 9:46
http://mathoverflow.net/questions/75873/extreme-points-of-transportation-polytope | ## Extreme points of transportation polytope
I'm interested in $n \times m$ joint probability tables with prescribed row and column marginals. Such tables form a convex set known as the transportation polytope. What are the extreme points of this set?
For example, for a $2 \times 2$ case of $$\begin{bmatrix} x_{11} & x_{12}\\ x_{21} & x_{22} \end{bmatrix}$$ with row constraint $x_{11} + x_{12} = 0.9$, column constraint $x_{11} + x_{21} = 0.8$, and $\sum_{i,j} x_{ij} = 1$, then there are two extreme points, $$\begin{bmatrix} 0.8 & 0.1\\ 0 & 0.1 \end{bmatrix}, \quad \begin{bmatrix} 0.7 & 0.2\\ 0.1 & 0 \end{bmatrix}.$$ And every joint table with the constraint lies in the convex hull of these two points.
Is there a general way of finding the extreme points? In other words, is there a generalization to Birkhoff–von Neumann theorem for this case?
-
The Birkhoff von-Neumann theorem shows that the output size may be factorial, so you should not expect to solve such problems efficiently (at least in the input size). Are you looking for something more tailored to this problem than general purpose algorithms/packages such as those available at ifor.math.ethz.ch/~fukuda/cdd_home and cgm.cs.mcgill.ca/~avis/C/lrs.html? – Noah Stein Sep 20 2011 at 0:09
Also, I don't see why this is tagged community wiki. – Noah Stein Sep 20 2011 at 0:09
My bad, I didn't realize that the community wiki box was checked. How do I undo it? – Memming Sep 20 2011 at 15:41
I checked the FAQ, and there's no way to revert the community wiki. :'( – Memming Sep 20 2011 at 15:51
## 2 Answers
A complete solution with references can be found in Section 8.1 of Brualdi, Combinatorial Matrix Classes, Cambridge University Press, 2006.
Here is how to make an extreme point, and all extreme points can be made in this way. Suppose `$\{r_i\}$` and `$\{c_j\}$` are the required row and column sums. Start with a zero matrix $A=(a_{ij})$. Choose $i,j$ so that $r_i,c_j>0$. Set $a_{ij}=\min(r_i,c_j)$ and subtract $\min(r_i,c_j)$ from both $r_i$ and $c_j$. Keep doing this until all the row sums or all the columns sums are zero (and it better be both of them zero or there is no such matrix).
And a characterization. For any matrix in the class you can define a bipartite graph with $m$ row-vertices and $n$ column-vertices where the edges indicate where the matrix entries are non-zero. Then the matrix is an extreme point iff the graph has no cycles.
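For illustration, here is a small Python sketch of this greedy construction (an assumed implementation, not code from Brualdi's book; it picks an eligible cell at random and saturates the smaller of the two remaining demands):

```python
import random

def extreme_point(row_sums, col_sums):
    """Greedy construction of an extreme point of the transportation polytope
    with the given row and column sums (the two totals are assumed equal)."""
    r, c = list(row_sums), list(col_sums)
    m, n = len(r), len(c)
    A = [[0.0] * n for _ in range(m)]
    while True:
        # Cells whose row and column both still need mass.
        choices = [(i, j) for i in range(m) for j in range(n)
                   if r[i] > 1e-12 and c[j] > 1e-12]
        if not choices:
            break
        i, j = random.choice(choices)
        a = min(r[i], c[j])          # saturate the smaller of the two demands
        A[i][j] = a
        r[i] -= a
        c[j] -= a
    return A

# The 2x2 example from the question: row sums (0.9, 0.1), column sums (0.8, 0.2).
# Depending on the random choices this returns one of the two extreme points
# listed there, [[0.8, 0.1], [0.0, 0.1]] or [[0.7, 0.2], [0.1, 0.0]].
print(extreme_point([0.9, 0.1], [0.8, 0.2]))
```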
-
I've been reading the book, and it has been extremely helpful. Thank you so much. – Memming Oct 15 2011 at 19:19
The answer above is partially wrong. The kind of extreme points which are obtained through the construction above, known as the northwest corner rule, can only generate a subset of extreme points (not all of them) of the polytope. To be more precise, the northwest rule can only generate those extreme points for which the graph (as described in the bottom of the answer above) is a caterpillar tree, as can be checked further here: http://www.newton.ac.uk/preprints/NI02033.pdf
-
Thanks for citing that interesting paper. However, I don't believe your comment is correct. The NWC rule specifies (for a particular permutation of the rows and columns) exactly which entry must be fixed next, but in my description any entry can be picked that has non-zero row and column requirements. This allows a lot more choices. Try row sums 2,2,2 and column sums 3,1,1,1; it is quite easy to make the tree with three branches of length two (not a caterpillar). Any feasible graph which is acyclic (i.e. a forest) can be made: start with a leaf and use induction. – Brendan McKay Apr 12 2012 at 14:36
Apologies... I misread your answer. You are right, what you describe is not the NWC but a more general approach. Many thanks for your comment, it was very useful. – mcuturi Apr 13 2012 at 1:08
http://crypto.stackexchange.com/questions/5806/security-model-for-privacy-preserving-aggregation-scheme/5810 | # Security model for privacy-preserving aggregation scheme.
Suppose that $S=(E,D)$ is an additively homomorphic encryption scheme. Now I want to design a protocol $P$ such that given inputs $x_1,x_2,..,x_n$, the adversary $A$ (who can decrypt) can only learn $\sum x_i$ and nothing else. To do that, $P$ first generates random values $r_1, r_2,..,r_n$ such that $\sum r_i = 0$. Then, $E(x_1+r_1)\times..\times E(x_n+r_n) = E(\sum x_i)$.
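For concreteness, here is a toy sketch of the masking step (my own illustration, not part of any cited scheme; plain modular arithmetic stands in for the homomorphic encryption $E$, and all values are made up):

```python
import random

M = 2**32          # toy modulus standing in for the plaintext space
x = [3, 0, 7, 5]   # the parties' private inputs x_1, ..., x_n (example values)
n = len(x)

# Random masks r_1, ..., r_n with r_1 + ... + r_n = 0 (mod M).
r = [random.randrange(M) for _ in range(n - 1)]
r.append((-sum(r)) % M)

# Each party would actually send E(x_i + r_i); here we just keep the masked values.
masked = [(xi + ri) % M for xi, ri in zip(x, r)]

# Adding the ciphertexts corresponds to adding the masked values; decryption
# then reveals only the sum, since the masks cancel.
total = sum(masked) % M
assert total == sum(x) % M
print(total)       # 15, while each individual x_i stays hidden behind its mask
```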
The trouble I have is when the adversary has some domain knowledge of the $x_i$. For example, if it knows that $x_i \geq 0$, and later decrypts $E(\sum x_i)$ to learn that $\sum x_i = 0$, it can learn that $x_i = 0$ for all $i$.
How could I formulate a security game that takes into account such attacks based on domain/auxiliary knowledge? Would the IND-CPA security model be sufficient? How about IND-CCA?
I've read the paper [1], but I don't think it addresses the domain/auxiliary knowledge.
[1] Shi et al. Privacy-preserving aggregation of Time-series Data (NDSS'11)
-
## 2 Answers
The standard approach is to break this problem into two pieces:
• What information is unavoidably leaked, merely by computing the desired function? In your case, the goal is to compute $\sum_i x_i$. This sum unavoidably leaks a little bit of information about the $x_i$'s. For instance, as you correctly state, if we somehow know that all $x_i$'s are non-negative, and if we happen to observe that $\sum_i x_i = 0$, then we can conclude that $x_1=\dots=x_n=0$. This kind of leakage is unavoidable: any approach to computing $\sum_i x_i$ must inevitably leak this information.
• What other information is leaked by the cryptographic protocol? We can ask whether the cryptographic protocol leaks more than the above (more than what is unavoidable).
Multi-party secure computation is designed to avoid any leakage of the second type: there will be none whatsoever. The only information leaked will be whatever can be deduced solely from the value of the output, i.e., whatever leakage is unavoidable and inherent in the functionality itself.
That leaves the question: just how much is unavoidably leaked, and is that much leakage acceptable? The cryptographic literature doesn't answer that question. If you think about it, it can't really, as that will depend deeply on the particular application domain. The cryptographic tools basically assume that you will think through these implications: before asking for a protocol to compute the sum $\sum_i x_i$, cryptographers assume you have thought through the implications of doing so.
So, the only good answer I can give you is: the standard approach is a bit different than you may have expected. Rather than trying to formalize something like IND-CPA (the equivalent of requiring there be no leakage whatsoever), we instead formalize things by requiring that there be no more leakage than whatever is an inevitable consequence of the value you're trying to compute.
-
This is tricky and I don't know that there is a generic way to take care of all domain/auxiliary information.
The way we typically do proofs in multi-party computations is by defining an ideal world and show that the information generated in the ideal world (usually the encrypted inputs and the outputs) could be used to simulate the real world protocol transcript (at least in the semi-honest case). If the simulator can generate the transcript, then clearly the transcript contains no additional information beyond what is already present in the info generated in the ideal world. Therefore, the protocol cannot leak additional information.
Given the protocol you've described, the ideal world would be a trusted party which cannot be corrupted and is given encrypted inputs from each party. The trusted party decrypts them all (we don't need homomorphic encryption in the ideal world), adds them up and returns the answer. Well, if the answer is $0$ as you suggest, this leaks information. But, that information is leaked in the ideal world as well as in the real, so we don't consider it a breach of the protocol.
The most natural way then to deal with this problem in the ideal world is to have the trusted party check the output for certain cases (e.g., $0$) before outputting the answer. If one of those cases is achieved, the trusted party outputs $\bot$. You can definitely implement this functionality in your real world protocol using a secure comparison of some sort. Your protocol will obviously be more complex, but such is life.
That shows how you would handle a specific case. There are definitely other cases where domain/auxiliary information combined with the answer will allow adversaries to infer additional information. I don't know of any generic way to handle all of this, and quite possibly it is impossible to handle in a generic way.
Consider the following, assume we have a generic multi-party comptuation protocol $\pi$. Furthermore, assume this protocol is proven secure for up to $n-1$ adversarial parties.
Now, we might be tempted to use this protocol to compute the sum you describe in the question. But, there is a problem. If there are $n-1$ adversarial parties, they can all set their inputs to $0$. The single honest party sets its input to the proper input, say $x_h$. The output of the computation is clearly $x_h$, so the honest party's input is leaked. This would happen in the (standard) ideal world too, though. So even though the protocol can handle up to $n-1$ adversarial parties, the specific functionality we are computing can only handle up to $n-2$ adversarial parties. This suggests that the functionality to be computed must have an impact on the adversary model we choose, and seems to suggest that a generic solution to the problem you mention might not be possible.
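A toy numerical illustration of that attack (hypothetical values; the point is that the leak comes from the functionality itself, not from any weakness of the protocol):

```python
# n-1 colluding parties choose input 0; the single honest party inputs x_h.
# Only the sum is ever revealed, yet that alone exposes x_h to the colluders.

def secure_sum(inputs):        # stand-in: the protocol reveals only the sum
    return sum(inputs)

x_h = 42                       # honest party's secret input
colluders = [0, 0, 0, 0]       # the n-1 = 4 adversarial inputs, all zero
output = secure_sum(colluders + [x_h])
assert output - sum(colluders) == x_h   # colluders recover x_h exactly
```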
So then, it seems the path forward would be to list all possible corner cases for a computation where some undesired leak occurs due to domain/auxiliary information, then adjust the computation accordingly. Then the security could be proven using standard methods. This could then potentially be extended to classes of computations, etc.
-
http://stats.stackexchange.com/questions/26170/sample-size-calculation-for-truncated-normal-distribution | # Sample size calculation for truncated normal distribution
So, I am new here.
I need to perform a sample size calculation for a clinical trial. The study sample will be selected according to a criterion on each person's height: persons within a particular height range (female, 1.6 m to 1.7 m) will be invited to participate in the trial. We know the expected sample standard deviation from a previous trial. But my concern is that the sample is not from a normal distribution. The usual power/sample size calculations need the assumption that the test statistic is normally distributed under $H_0$ and $H_1$, but here I believe we have a truncated normal distribution. So how may I modify `power.t.test`, or make some other calculation in R, to accommodate this? My colleague says to just rely on the central limit theorem and assume a normal distribution with mean 1.65 and known standard deviation, but I believe this is wrong due to the truncation. Any advice would be appreciated.
-
I would use simulation to calculate the power. Anyway, given the truncated nature of your data, the use of a t-test may be questionable (I'm assuming that you're going to carry out the t-test on your truncated variable. This is what I understand from your question). – andrea Apr 10 '12 at 9:33
I am fairly sure the CLT is not applicable here. – P Sellaz Apr 10 '12 at 10:16
Thank you @andrea. I mention the t-test because that is what I would do if the sample were normally distributed. I would like to make the calculation using a truncated distribution; I also think a uniform distribution would be reasonable here, because the sample is near the mean of the population and the range is small. Can I make a sample size calculation with a truncated normal or a uniform distribution? I would prefer a calculation to a simulation. I would also like to confirm that the central limit theorem is the wrong thing to rely on here. – Joseph King Apr 10 '12 at 10:17
Andrea, truncation (within the center of the distribution) actually makes the t-test more applicable, not less applicable: the sampling distribution of the mean will be remarkably close to normal. P Sellaz, because the distribution will likely be close to uniform (and not very skewed), the CLT gives good insight even with samples as small as $5$ or so. Joseph, your thinking is good. – whuber♦ Apr 10 '12 at 14:31
## 1 Answer
Your colleague is correct.
In the US, 1.6 to 1.7 m is near the middle of the range of adult female heights. According to Wolfram Alpha, which summarizes NHANES 2006 data, the height distribution in this range should look close to this:
This is extremely close to uniform: its mean is 1.649 m and its standard deviation is 0.0287 m (whereas a uniform distribution in this range would have a mean of 1.650 m and SD of 0.0289 m). Its skewness coefficient is only 0.054.
Accordingly, independent samples drawn from this distribution will have means that are close to normally distributed. Here, for instance, is a histogram of means of 10,000 samples of just four heights drawn (independently) from this distribution:
It is only very, very slightly non-normal (a Kolmogorov-Smirnov test rejects normality at p=0.94%, which is amazingly large given there are 10,000 data points). For the purpose of planning comparisons of mean heights among random groups of women, the normal approximation will work well. Standard power calculations ought to give good guidance.
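For readers who want to reproduce this, here is a small simulation sketch. It is written in Python/SciPy rather than the R of the question, and the untruncated population parameters (mean 1.62 m, SD 0.07 m for adult female height) are rough assumptions rather than the NHANES values quoted above:

```python
import numpy as np
from scipy import stats

# Heights truncated to [1.6, 1.7] m, drawn from an assumed N(1.62, 0.07^2)
# population (illustrative numbers only, not the NHANES fit).
mu, sigma, lo, hi = 1.62, 0.07, 1.60, 1.70
a, b = (lo - mu) / sigma, (hi - mu) / sigma
heights = stats.truncnorm(a, b, loc=mu, scale=sigma)

rng = np.random.default_rng(0)
# Means of 10,000 samples of just four heights each.
sample_means = heights.rvs(size=(10_000, 4), random_state=rng).mean(axis=1)

print(sample_means.mean(), sample_means.std())
# A histogram of sample_means is already very close to normal, which is
# why the standard t-test power formulas are adequate for this design.
```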
-
Thank you again. So I can use 0.0287 as the SD in a standard power calculation? For example, with the following R code to detect a 0.01 difference between the control and intervention groups: power.t.test(delta = 0.01, sd = 0.0287, sig.level = 0.05, power = 0.9, type = "two.sample", alternative = "two.sided"). Thank you again. – Joseph King Apr 10 '12 at 18:53
http://mathoverflow.net/questions/71574/fundamental-groups-of-compact-complex-manifolds/71576 | ## Fundamental Groups of compact Complex manifolds?
Hi, are there known restrictions on the fundamental groups of compact complex manifolds?
Can an arbitrary finitely presentable group be the fundamental group of a compact complex manifold?
Thanks
-
## 2 Answers
Every finitely presented group is the fundamental group of a compact complex manifold of dimension $3$.
This is proven in the book by Amoros, Burger, Corlette, Kotschick and Toledo Fundamental groups of compact Kahler manifolds, Corollary 1.66 p. 19.
The rough idea of proof is the following. Let $\Gamma$ be a finitely presented group, and let $Y$ be a smooth closed oriented $4$-manifold with $\pi_1(Y) \cong \Gamma$. Then by a result of Taubes one can find a complex $3$-fold with the same fundamental group by taking the twistor space $Z$ of $X=Y \sharp n \overline{\mathbb{C} \mathbb{P}^2}$ for $n$ sufficiently large.
-
One should note that, despite the title of the cited book, the manifolds constructed in Corollary 1.66 are not Kahler. Many restrictions are known on the fundamental groups of compact Kahler manifolds. – Neil Strickland Jul 29 2011 at 15:14
Just to give one more reference: there is now a new proof of this theorem that does not use the deep result of Taubes; the proof is elementary and 8 pages long:
http://arxiv.org/abs/1104.4814
-
A very cool paper... – Igor Rivin Jul 29 2011 at 15:20
http://www.cgal.org/Manual/3.3/doc_html/cgal_manual/Skin_surface_3/Chapter_main.html | # Chapter 363D Skin Surface Meshing
Nico Kruithof
## Table of Contents
36.1 Introduction
36.2 Definition of a Skin Surface
36.3 The Interface
36.4 Timings
36.5 Example Programs
36.5.1 Meshing a Skin Surface
36.5.2 Meshing and Subdividing a Skin Surface
## 36.1 Introduction
Skin surfaces, introduced by Edelsbrunner in [Ede99], have a rich and simple combinatorial and geometric structure that makes them suitable for modeling large molecules in biological computing. Meshing such surfaces is often required for further processing of their geometry, like in numerical simulation and visualization.
A skin surface is defined by a set of weighted points (input balls) and a scalar called the shrink factor. If the shrink factor is equal to one, the surface is just the boundary of the union of the input balls. For a shrink factor smaller than one, the skin surface becomes tangent continuous, due to the appearance of patches of spheres and hyperboloids connecting the balls.
This package constructs a mesh isotopic to the skin surface defined by a set of balls and a shrink factor using the algorithm described in [KV05].
An optimized algorithm is implemented for meshing the union of a set of balls.
## 36.2 Definition of a Skin Surface
Figure 36.1: Left: Convex combinations of two weighted points (the two dashed circles). Right: The skin curve of the weighted points. The smaller circles form a subset of the weighted points whose boundary is the skin curve.
This section first briefly reviews skin surfaces. For a more thorough introduction to skin surfaces, we refer to [Ede99] where they were originally introduced.
A skin surface is defined in terms of a finite set of weighted points $P$ and a shrink factor $s$, with $0 \leq s \leq 1$. A weighted point $\mathbf{p}=(p,w_p) \in \mathbb{R}^3 \times \mathbb{R}$ corresponds to a ball with center $p$ and radius $\sqrt{w_p}$. A weighted point with zero weight is called an unweighted point.
A pseudo distance between a weighted point $\mathbf{p} = (p,w_p)$ and an unweighted point $x$ is defined as
$\pi(\mathbf{p},x) = \|p-x\|^2 - w_p,$
where $\|p-x\|$ is the Euclidean distance between $p$ and $x$. The ball corresponding to a weighted point $\mathbf{p}$ is the zero set of $\pi(\mathbf{p}, \cdot)$. Note that if $w_p<0$ the radius of the ball is imaginary and the zero set is empty.
We can take convex combinations of weighted points by taking convex combinations of their distance functions. Figure 36.1 (left) shows weighted points that are obtained as convex combinations of the dashed circles. For further reading on the space of circles and spheres we refer to [Ped70].
Starting from a weighted point $\mathbf{p}=(p,w_p)$, the shrunk weighted point $\mathbf{p}^s$ is obtained by taking a convex combination with the unweighted point centered at $p$: formally, $\mathbf{p}^s = s\,\mathbf{p} + (1-s)\,\mathbf{p}'$, with $\mathbf{p}'=(p,0)$. A simple calculation shows that $\mathbf{p}^s = (p, s \cdot w_p)$. The set $P^s$ is the set obtained by shrinking every weighted point of $P$ by a factor $s$: $P^s = \{\mathbf{p}^s \mid \mathbf{p} \in P\}$. The shrunk weighted points of Figure 36.1 (left) are shown in Figure 36.1 (right).
We now define the skin surface $\mathrm{skn}^s(P)$ associated with a set of weighted points $P$. Consider the set of weighted points obtained by taking the convex hull of the input weighted points. A calculation shows that every weighted point in this convex hull lies within the union of the input balls. Next, we shrink each weighted point in the convex hull with the shrink factor $s$; hence, we multiply the radius of the corresponding (real) input circles by a factor $\sqrt{s}$. The skin surface is the boundary of the union of this set of shrunk weighted points:
$\mathrm{skn}^s(P) = \partial \bigcup \{\mathbf{p}^s \mid \mathbf{p} \in \mathrm{conv}(P)\}.$
Here $\mathrm{conv}(P) \subseteq \mathbb{R}^3 \times \mathbb{R}$ is the convex hull of the set of weighted points $P$, whereas $\partial$ denotes the boundary - in $\mathbb{R}^3$ - of the union of the corresponding set of balls.
Recall that each weighted point in the convex hull of the input weighted points is contained in the union of the input weighted points. Hence, for a shrink factor equal to one, the skin surface is the boundary of the union of the input weighted points.
By definition of a skin surface, the weights of the input balls (their squared radii) are shrunk by a factor $s$ and the skin surface wraps around the shrunk input balls. In order to make the skin surface wrap around the (unshrunk) input balls, we can first increase the weights of the input balls by multiplying them by a factor $1/s$ and then compute the skin surface.
## 36.3 The Interface
The interface to the skin surface package consists of one main function, taking a set of weighted points and a shrink factor and outputting the meshed surface. Further, it defines classes and functions used to perform the main steps of the algorithm. There are two global classes, Skin_surface_3 and Union_of_balls_3, both of which are models of the concept SkinSurface_3, and there are two functions to extract a mesh of the skin surface (union of balls) from objects of these classes. A final function takes a mesh and the Skin_surface_3 (Union_of_balls_3) object it was constructed from and refines the mesh. This section describes these classes and functions in more detail.
The main function of the skin surface package takes an iterator range of weighted points, a shrink factor and the number of subdivision steps and outputs a mesh in a CGAL::Polyhedron_3:
template <class WP_iterator, class Polyhedron_3>
void
make_skin_surface_mesh_3 ( Polyhedron_3 &p, WP_iterator begin, WP_iterator end, FT shrink_factor=.5, int nSubdiv=0, bool grow_balls = true)
where FT is the number type used by the weighted points.
To obtain more control over the algorithm, the different steps can also be performed separately. First, a Skin_surface_3 object is created from an iterator range of weighted points and a shrink factor. An optional argument is a boolean telling whether the input weighted points should be grown in such a way that the skin surface wraps around the input balls instead of the shrunk input balls.
template <class SkinSurfaceTraits_3>
Skin_surface_3 ( WP_iterator begin, WP_iterator end, FT shrink_factor, bool grow_balls = true)
The template parameter should implement the SkinSurfaceTraits_3 concept. The type WP_iterator is an iterator over weighted points as defined by SkinSurfaceTraits_3, and FT is the number type used by the weighted points.
For a shrink factor equal to one the skin surface is the boundary of the union of the input balls. In this case the algorithm used for meshing the skin surface greatly simplifies. These optimizations are implemented in the class Union_of_balls_3. The constructor for the union of balls class is similar, except for the missing shrink factor:
template <class SkinSurfaceTraits_3>
Union_of_balls_3 ( WP_iterator begin, WP_iterator end, bool grow_balls = true)
With a model of the concept SkinSurface_3 it is possible to generate a coarse mesh isotopic to the skin surface. Using the function mesh_skin_surface_3 with signature:
template <class SkinSurface_3, class Polyhedron>
void mesh_skin_surface_3 ( SkinSurface_3 skin_surface, Polyhedron &p)
The last function takes the (coarse) mesh and subdivides it in-situ by applying a given number of 1-4 split operations (each triangle is split into four sub-triangles) and moving the new vertices towards the skin surface. If the number of iterations is not specified, one subdivision step is done. The object of the SkinSurface_3 concept used to construct the coarse mesh is needed to move new points on the skin surface.
template <class SkinSurface_3, class Polyhedron >
void subdivide_skin_surface_mesh_3 ( SkinSurface_3 skinsurface, Polyhedron &p, int iterations=1)
## 36.4 Timings
The timings of the construction of the coarse mesh and the first subdivision are given in seconds and were done on a Pentium 4, 3.5 GHz, with 1 Gb of memory.
| Data set     | Number of weighted points | Coarse mesh | First subdivision step |
|--------------|---------------------------|-------------|------------------------|
| Caffeine     | 23                        | 0.2 sec.    | 0.05 sec.              |
| Gramicidin A | 318                       | 5 sec.      | 2 sec.                 |
## 36.5 Example Programs
### 36.5.1 Meshing a Skin Surface
The following example shows the construction of a coarse mesh of the skin surface using the function make_skin_surface_mesh_3. The output is a CGAL::Polyhedron.
```File: examples/Skin_surface_3/skin_surface_simple.cpp
```
```#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/make_skin_surface_mesh_3.h>
#include <list>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_3 Bare_point;
typedef CGAL::Weighted_point<Bare_point,K::RT> Weighted_point;
typedef CGAL::Polyhedron_3<K> Polyhedron;
int main() {
std::list<Weighted_point> l;
double shrinkfactor = 0.5;
l.push_front(Weighted_point(Bare_point( 1,-1,-1), 1.25));
l.push_front(Weighted_point(Bare_point( 1, 1, 1), 1.25));
l.push_front(Weighted_point(Bare_point(-1, 1,-1), 1.25));
l.push_front(Weighted_point(Bare_point(-1,-1, 1), 1.25));
Polyhedron p;
CGAL::make_skin_surface_mesh_3(p, l.begin(), l.end(), shrinkfactor);
return 0;
}
```
### 36.5.2 Meshing and Subdividing a Skin Surface
The following example shows the construction of a mesh of the skin surface by explicitly performing the different steps of the algorithm. It first constructs a Skin_surface_3 object from an iterator range of weighted points and a shrink factor. From this object, the coarse mesh isotopic to the skin surface is extracted using the function CGAL::mesh_skin_surface_3.
Next, the coarse mesh is refined to obtain a better approximation. The use of CGAL::Skin_surface_polyhedral_items_3<Skin_surface_3> in the CGAL::Polyhedron is not necessary, but gives the subdivision a significant speedup.
```File: examples/Skin_surface_3/skin_surface_subdiv.cpp
```
```#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Skin_surface_3.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/mesh_skin_surface_3.h>
#include <CGAL/subdivide_skin_surface_mesh_3.h>
#include "skin_surface_writer.h"
#include <list>
#include <fstream>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Skin_surface_traits_3<K> Traits;
typedef CGAL::Skin_surface_3<Traits> Skin_surface_3;
typedef Skin_surface_3::FT FT;
typedef Skin_surface_3::Weighted_point Weighted_point;
typedef Weighted_point::Point Bare_point;
typedef CGAL::Polyhedron_3<K,
CGAL::Skin_surface_polyhedral_items_3<Skin_surface_3> > Polyhedron;
int main() {
std::list<Weighted_point> l;
FT shrinkfactor = 0.5;
l.push_front(Weighted_point(Bare_point( 1,-1,-1), 1.25));
l.push_front(Weighted_point(Bare_point( 1, 1, 1), 1.25));
l.push_front(Weighted_point(Bare_point(-1, 1,-1), 1.25));
l.push_front(Weighted_point(Bare_point(-1,-1, 1), 1.25));
Polyhedron p;
Skin_surface_3 skin_surface(l.begin(), l.end(), shrinkfactor);
CGAL::mesh_skin_surface_3(skin_surface, p);
CGAL::subdivide_skin_surface_mesh_3(skin_surface, p);
std::ofstream out("mesh.off");
out << p;
return 0;
}
```
CGAL Open Source Project. Release 3.3.1. 25 August 2007.
http://mathoverflow.net/questions/32615/applications-of-classifying-thick-subcategories | ## Applications of classifying thick subcategories
So, relatively recently, Balmer introduced this notion of a spectrum for a tensor triangulated category and used it to prove a generalization of a classification theorem done in several areas of mathematics. Of course, the precursor to this was the work done by Devinatz, Hopkins, and Smith in classifying the thick subcategories of the stable homotopy category of (finite) spectra. They famously used this classification to prove the periodicity theorem, and I can see why it is helpful: the classification theorem reduces the periodicity theorem down to the (still nontrivial!) task of finding a single type $n$ complex with a periodic self-map. I'm sure similar uses have been found for the other classification theorems, but I am left to wonder, more generally:
What kinds of problems are made simpler with a classification theorem? What questions does it answer?
I'm looking for some general heuristics here. They should satisfy the following conditions:
1. They should apply to theorems already proven by classification theorems; examples and references would be lovely here!
2. These heuristics should come with some sense of why one would think to use the classification theorem in this way. For example, I can see how the classification theorem makes the periodicity theorem manageable to prove, but why would one think to use it in the first place?
3. This last one is more of a throw away or a bonus, but it's worth a shot: If there are any areas of mathematics, or open problems that you think are begging for a classification theorem type application then please share! It would be a useful test of the proposed heuristics if they are able to predict the solution of a problem that has not been solved...
Basically I'm looking for some intuition here. My logic being: if we know more about what kinds of questions a classification can answer then we will know more about the information contained in a classification. This, in turn, may provide clues for how to compute or construct such a classification (which is, of course, the next step in Balmer's program).
(P.S. I've tagged the areas that I know of with classification theorems. If I'm forgetting some, do remind me in the comments. Looks like there's a limit on tags :).)
-
## 2 Answers
I'm not completely sure if this is the sort of thing you are after, but the telescope conjecture (conjecture isn't a great word as it is known to be false for some categories) springs to mind as something one can (sometimes) answer using a classification theorem. The telescope conjecture holds for a compactly generated triangulated category $\mathcal{T}$ if every smashing subcategory $\mathcal{S}$ of $\mathcal{T}$, i.e. a coproduct closed triangulated subcategory whose inclusion admits a coproduct preserving right adjoint, is generated by compact objects of $\mathcal{T}$. It is obviously reasonable to attack this via a classification and this is precisely the key ingredient in the proof that the derived category of a commutative noetherian ring satisfies the telescope conjecture.
Another example, just because it is cool, is this paper by Ingalls and Thomas. They prove a classification for certain subcategories of the representation category of a quiver $Q$ of Dynkin type in terms of noncrossing partitions associated to $Q$ (this is done in the abelian setting but it extends to a classification of localizing subcategories of the derived category). They use this, for example, to give a new proof that the noncrossing partitions associated to $Q$ form a lattice.
-
Let me go through a possible answer to my own question- just do some thinking out loud here. It seems to me that a classification of thick subcategories of a (tensor-) triangulated category is a good thing to have when you have some, relatively generic, thick subcategory, $S$, at hand and you want to prove something about it. Then you can say, "Well, $S$ can only be one of a couple of subcategories and I have them all listed here. It can't be any of these because __, and if it's any of these other ones, then I can easily prove my claim."
Here's an example that I thought of, but I'm not sure if this has been done. Suppose you have some functor $F: \mathcal{T} \rightarrow \mathcal{A}$ from a triangulated category to an abelian category. If, for some reason, it turned out that the kernel of this functor was a thick subcategory then we could apply the thick subcategory theorem to ask the question "When is $Fa = 0$?" In the case that the functor at hand satisfies $Fa = 0$ only when $a = 0$, then perhaps you could run through your classification and find a single object in each nonzero thick subcategory that maps to something nontrivial in $\mathcal{A}$. This would then prove that the kernel has to be whatever thing you had left.
Actually, what's good about this use of the classification theorem is that it can be used very easily to get partial results. Maybe the thick subcategories of your triangulated category come in multiple flavours and it is easy to show that one flavour is not the kernel of the given functor, but you don't know for the other flavours. Maybe that's all you need to prove something useful regarding the functor at hand.
I'm not sure how many functors from triangulated categories to abelian categories happen to satisfy the property that their kernel is a thick subcategory... On the other hand, any (triangulated) functor between triangulated categories does satisfy this property. So maybe you could apply this technique in the following way: You have a map between two objects $A \rightarrow B$ in some category (like noetherian schemes, or finite group schemes, or topological spectra). This should give rise to (hopefully triangulated) functors between some triangulated categories related to $A$ and $B$ (like derived category of perfect complexes on a scheme, or stable module category, etc.) I'm pretty sure that, at least in the usual cases, the original map will be trivial if and only if this induced map is. To find out if the induced map is trivial, we may pull the same trick as before and check to see if the thick subcategory corresponding to the kernel must be zero.
That seems like a high-powered way to check if a map between two objects is trivial! But maybe it's useful? If anyone has seen anything like what I've described then please post a reference! I would love to know if this application has been done before (I'm sure it has, it was the first thing that came to mind...)
-
http://math.stackexchange.com/questions/29586/the-inclusion-map-from-a-manifold-to-a-product-manifold-is-c-infty/29593 | # The inclusion map from a manifold to a product manifold is $C^{\infty}$
Let $i_{q_0} : M\rightarrow M\times N$, $i_{q_0}(p) = (p, q_0)$ be a mapping between smooth manifolds. I need some hints to show that it is $C^{\infty}$.
I have so far... Let $(U,\phi)$ and $(V,\psi)$ be charts about $p$ and $i_{q_0}(p)$, and let $r^{i}$ be the $i$th coordinate function on Euclidean space. Then we need to show that $\frac{\partial (r^{i}\circ \psi \circ i_{q_0} \circ \phi^{-1})}{\partial r^{j}}$ exists and is continuous at $\phi(p)$ and that we can keep taking partial derivatives.
-
The map $i_{q_0}$ of your question is simply the inclusion map $(x^1,\dots,x^n)\to (x^1,\dots,x^n,0,\dots,0)$ in (an appropriate choice of) local coordinates where $M$ is a smooth $n$-manifold and $N$ is a smooth $m$-manifold (where $m$ is the number of zeros in $(x^1,\dots,x^n,0,\dots,0)$). Clearly, this (inclusion) map is smooth. – Amitesh Datta Feb 16 '12 at 15:06
## 2 Answers
The smooth structure on $M \times N$ is understood to be the maximal atlas which includes all charts which are products of charts from $M$ and $N$. You do not need to show that the composition of transition maps with your given function are smooth for every choice of charts; you only need to show there is at least one such chart-- this is the point of the axiom which states that overlapping charts are compatible.
-
I think I got it. Here goes...
Let $(U,\phi)$ be any chart about $p \in M$. Then for any chart $(V,\psi)$ about $q_0 \in N$, $(U \times V, \phi \times \psi)$ is a chart about $(p,q_0) \in M\times N$. And for $\phi(p) \in \mathbb{R}^{m}$, say $\phi(p) = (p^{1},\dots,p^{m})$, we have $r^{i}\circ(\phi\times\psi)\circ i_{q_0}\circ\phi^{-1}(p^{1},\dots,p^{m}) = r^{i}\big((p^{1},\dots,p^{m}),\psi(q_0)\big)$.
Then the partial derivatives are easy to compute after that.
-
Or maybe you can first show that the image set is a submanifold of MxN, as, e.g., the zero set of a smooth map, say, define the map as f-x for x in M, and use bump functions to extend it. Then your map is the inclusion map of a submanifold into a manifold, which is smooth when you use the subspace topology for M – gary May 19 '11 at 5:47
http://mathoverflow.net/questions/28879/von-dyck-groups-and-solvability/29032 | ## von dyck groups and solvability
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
A von Dyck group is a group with presentation $\langle a,b \mid a^m=b^n=(ab)^p=1 \rangle$ with $m,n,p$ natural numbers. Is it known which of these groups are solvable and which are not? Is there a reference for this? Thanks.
-
The groups with $\frac{1}{m} + \frac{1}{n} + \frac{1}{p} > 1$ are all finite, and all solvable except $\Delta(2,3,5) \cong A_5$. The groups with $\frac{1}{m} + \frac{1}{n} + \frac{1}{p} = 1$ are all infinite and solvable: their commutator subgroups are isomorphic to $\mathbb{Z}^2$. The groups with $\frac{1}{m} + \frac{1}{n} + \frac{1}{p} < 1$ are all infinite and nonsolvable: indeed, with only finitely many exceptions, each of these groups has a simple group $PSL_2(\mathbb{F}_q)$ as a quotient. – Pete L. Clark Jun 21 2010 at 2:27
Thank you for the comment. Do you know where I can find a proof for the case of 1/m + 1/n + 1/p = 1? – dave Jun 21 2010 at 3:41
These days, these groups are usually called triangle groups. You might like to look at the answers to the following question, which was very similar. mathoverflow.net/questions/22459/x-y-xp-yp-xyp-1/… – HW Jun 22 2010 at 21:27
## 1 Answer
You might try Generators and Relations for Discrete Groups by Coxeter and Moser.
Specifically for 1/m + 1/n + 1/p = 1 there are only 3 cases up to permutation, (2,3,6), (2,4,4) and (3,3,3). Map a and b to an appropriate root of unity to get a homomorphism onto C_6, C_4, or C_3, respectively. The kernel of the map is in all three cases isomorphic to Z^2.
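To spell out one explicit choice of such homomorphisms (an illustrative worked example; any other assignment of roots of unity with the right orders works just as well):

(2,3,6): $a \mapsto \zeta^3 = -1$, $b \mapsto \zeta^2$, hence $ab \mapsto \zeta^5$, where $\zeta = e^{2\pi i/6}$; the relations $a^2=b^3=(ab)^6=1$ hold and the image is all of $C_6$.

(2,4,4): $a \mapsto -1$, $b \mapsto i$, hence $ab \mapsto -i$; the relations $a^2=b^4=(ab)^4=1$ hold and the image is $C_4$.

(3,3,3): $a \mapsto \omega$, $b \mapsto \omega$, hence $ab \mapsto \omega^2$, where $\omega = e^{2\pi i/3}$; the relations $a^3=b^3=(ab)^3=1$ hold and the image is $C_3$.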
-
http://mathoverflow.net/questions/119743/families-over-semistable-locus-of-git-quotient | ## families over semistable locus of GIT quotient?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
This question is somehow a more generalized re-edit of a former question of mine:
http://mathoverflow.net/questions/119339/glueing-flat-families-of-objects-over-a-blow-up
I guess and hope that this will make the question clearer.
Suppose I have a flat family $X \rightarrow F$ of geometric objects (vector bundles, curves, etc.) and a GIT quotient $M$ that gives a coarse moduli space for the same moduli problem. Suppose further that the image of $F$ (which I will also call $F$, abusing notation, and which I suppose smooth) under the classifying map intersects the singular, strictly semi-stable locus of $M$.
Suppose now I blow up $F$ along $F\cap M$ - we can even assume $F\cap M$ is a point $p$, as long as it has codimension 2. Let me denote by $FF$ the blown-up variety and by $E$ the exceptional divisor.
In my particular case, over the exceptional divisor there is a natural family of objects $Z\rightarrow E$, and $E$ is contracted to $F\cap M$ by the classifying map (I am quite sure that this is not always the case). So one can see the blow-down of $E$ as a modular, universal, natural map. In particular, one of the objects over $E$ is isomorphic to $X_p$, the object of $F$ over the singular point.
The question is: can I expect the existence of a flat family $Y$ over $FF$ such that the restriction to $E$ is isomorphic to $Z$ and the restriction to $FF\setminus E$ is isomorphic to $X$?
-
http://topologicalmusings.wordpress.com/tag/theory/ | Todd and Vishal’s blog
Topological Musings
# Tag Archive
## Another elementary number theory problem
November 29, 2007 in Problem Corner | Tags: andreescu, elementary, mathematical, number, reflections, theory, titu | by Vishal | 11 comments
This one, by Dr. Titu Andreescu (of USAMO fame), is elementary in the sense that the solution to the problem doesn’t require anything more than arguments involving parity and congruences. I have the solution with me but I won’t post it on my blog until Jan 19, 2008, which is when the deadline for submission is. By the way, the problem (in the senior section) is from the $6^{th}$ issue of Mathematical Reflections, 2007.
Problem: Find the least odd positive integer $n$ such that for each prime $\displaystyle p, \, \frac{n^2-1}{4} + np^4 + p^8$ is divisible by at least four (distinct) primes.
## p^q + q^p is prime
November 28, 2007 in Problem Corner | Tags: elementary, Invariant, number, Oxford, prime, theory | by Vishal | 7 comments
I found this elementary number theory problem in the “Problem Drive” section of Invariant Magazine (Issue 16, 2005), published by the Student Mathematical Society of the University of Oxford. Below, I have included the solution, which is very elementary.
Problem: Find all ordered pairs of prime numbers $(p,q)$ such that $p^q + q^p$ is also a prime.
Solution: Let $E = p^q+q^p$. First, note that if $(p,q)$ is a solution, then so is $(q,p)$. Now, $p$ and $q$ can’t be both even or both odd, else $E$ will be even. Without loss of generality, assume $p = 2$ and $q$ some odd prime. So, $E = 2^q + q^2$. There are two cases to consider.
Case 1: $q = 3$.
This yields $E = 2^3 + 3^2 = 17$, which is prime. So, $(2,3)$ and, hence $(3,2)$ are solutions.
Case 2: $q > 3$.
There are two sub-cases to consider.
$1^{\circ}:$ $q = 3k+1$, where $k$ is some even integer (it must be even since $q$ is odd). Then, we have $E = 2^{3k+1} + (3k+1)^2 \equiv (-1)^k(-1) + 1 \equiv -1 + 1 \equiv 0 \pmod 3$. Hence, $3 \mid E$; so, $E$ can't be prime.
$2^{\circ}:$ $q = 3k+2$, where $k$ is some odd integer (it must be odd since $q$ is odd). Then we have $E = 2^{3k+2} + (3k+2)^2 \equiv (-1)^k(1) + 1 \equiv -1 + 1 \equiv 0 \pmod 3$. Hence, $3 \mid E$; so, again, $E$ can't be prime.
As we have exhausted all possible cases, we conclude $(2,3)$ and $(3,2)$ are the only possible solutions.
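As a quick sanity check of the argument (not a substitute for it), a brute-force search over small primes turns up no other solutions; the bound of 100 below is an arbitrary choice:

```python
from sympy import isprime, primerange

# Brute-force check: among primes p, q below an (arbitrary) bound,
# the only ordered pairs with p^q + q^p prime are (2, 3) and (3, 2).
solutions = [(p, q)
             for p in primerange(2, 100)
             for q in primerange(2, 100)
             if isprime(p**q + q**p)]
print(solutions)  # expected: [(2, 3), (3, 2)]
```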
http://www.physicsforums.com/showthread.php?t=130706 | Physics Forums
## vague question about polar coordinate basis
I have a kind of strange, vague question. We know that any vector in R^2 can be uniquely represented by unique Cartesian coordinates (x, y). If we wish to be more rigorous in our definition of "coordinates" we consider them to be the coefficients of the linear combination of the standard basis vectors $e_1$ and $e_2$.
Now, we know that every vector in R^2 can also be uniquely represented by unique polar coordinates ($r$, $\Theta$), except for the zero vector. Does this mean that we can consider those "coordinates" to be coefficients with respect to some basis vectors?
I would think no, but it seems odd that one coordinate system can be considered to have a "basis" and the other cannot.
My only linear algebra text I have is Strang, who is vague on coordinate representations in general, and barely brings up polar coordinates at all.
Mentor: I'm afraid my explanation isn't very coherent - sorry.

Usually, coordinates are associated with manifolds. Sometimes a manifold is also a vector space. In this case, coordinates are sometimes associated with a basis for the vector space, but they don't have to be.

The surface of the Earth is a 2-dimensional manifold that has latitude and longitude as one coordinate system. The surface of the Earth is not a vector space, but it can, locally, be approximated by a vector space: the surface at any point can be approximated by a 2-dimensional plane that is tangent to the surface at that point. These tangent spaces are vector spaces, and coordinate systems give rise to particular bases for these vector spaces.

In general, an n-dimensional (topological) manifold is something that, for a myopic observer, looks like a piece of R^n, i.e., any point of the manifold is contained in an open neighbourhood from which there is a continuous map (with continuous inverse) onto an open subset of R^n. The image (under any such map) of any point p of an n-dimensional manifold is an element of R^n - the coordinates of p.

R^2 is both a manifold and a vector space. (r, theta) coordinates use the manifold structure of R^2, but not the vector space structure. Cartesian coordinates are related to both the manifold and vector space structures of R^2. (r, theta) coordinates do give rise, at each point, to tangent vectors, though.
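To make that last remark concrete, here is the standard computation, written out for convenience (nothing beyond the chain rule is used). With $x = r\cos\Theta$, $y = r\sin\Theta$, the coordinate basis vectors of the tangent space at a point with $r \neq 0$ are $e_r = \partial/\partial r = (\cos\Theta, \sin\Theta)$ and $e_\Theta = \partial/\partial \Theta = (-r\sin\Theta, r\cos\Theta)$, expressed in the standard basis $e_1, e_2$. They change from point to point (and are undefined at the origin), which is exactly why $(r, \Theta)$ gives a basis of each tangent space rather than a single basis for the vector space R^2.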
That makes a lot of sense, thanks. Not sure why I didn't think of that :) LOL, now that I think about it, I remember that I was once asked to derive the radial and tangential basis vectors, as a function of time, for a rotating reference frame on a physics quiz. Somehow never made the connection! :)
http://physics.stackexchange.com/questions/53645/negative-energy-and-large-scale-spacetime-structure/53646 | # Negative energy and large-scale spacetime structure
I was reading an essay from Stephen Hawking's on the Space and Time warps and I was trying to make sense on some statements referring to the Casimir effect such as:
The energy density of empty space far away from the plates, must be zero. Otherwise it would warp space-time, and the universe wouldn't be nearly flat. So the energy density in the region between the plates, must be negative.
Could anyone tell me what's the logic about energy from a far away place from plates being zero? If that was not the case how would the space-time warp? I might lack the knowledge, but I would like to understand the reasoning or have an intuitive explanation about those statements. Thank you very much!
-
## 1 Answer
The interaction between the geometry of spacetime (how precisely it is "warped"), and energy, is a fundamental notion in general relativity. Specifically, the Einstein field equations tell us that if there is energy or momentum near some spacetime point, then the geometry nearby will bend (warp, curve, whatever you'd like to call it) in a particular way. The equation itself is $$\underbrace{G_{\mu\nu}}_{\text{geometric stuff}} = \underbrace{8\pi T_{\mu\nu}-g_{\mu\nu} \Lambda}_{\text{energy-momentum stuff}}$$ On the left hand side of these equations are quantities that tell you what the geometry of spacetime is, and on the right hand side are quantities that tell you about the energy-momentum content of spacetime (including, for example, the energy density that Hawking mentions). In particular, if the energy stuff on the right hand side is nonzero, then generically the equation tells us that there will be nontrivial geometric stuff on the left hand side, aka "warping."
So Hawking is basically alluding to the fact that general relativity tells us that the if there is a non-zero energy density in some region of spacetime, then it will not be flat, but since we know the universe to be essentially flat on large scales, the energy density in regions far from the plates must vanish, or else we would have a contradiction.
Physics Rocks.
-
I greatly appreciate your answer. You made it better with the equation (which I was not aware of) and the explanation of it. =) – Heber Sarmiento Feb 11 at 17:59
My pleasure! Glad you're interested in the coolest subject around ;) – joshphysics Feb 11 at 18:00
http://mathoverflow.net/questions/59991?sort=votes | ## Invariants on matrices
Take all the $n\times n$ matrices of 0's and 1's and define an equivalence relation as follows: two matrices are equivalent if there is a way to pass from one to the other by permuting the columns and the rows (acting by $S_n$ on the columns and on the rows).
Is there a good way to determine whether two such matrices are equivalent?
Are there any good invariants (polynomials, etc.)?
The obvious invariant is that the sum of the 1's on the rows and on the columns does not change.
-
Helpful?: mathkb.com/Uwe/Forum.aspx/research/2601/… – Alex R. Mar 29 2011 at 17:30
## 2 Answers
This is the graph isomorphism problem.
More precisely, if you have to permute the rows and the columns by the same permutation, then this is graph isomorphism (use $1$ to code "edge present" and $0$ to code "edge absent".) If you are allowed to use different permutations on rows and columns, then this is bipartie graph isomorphism, which is equivalent to graph isomorphism.
In practice, algorithms for Graph Isomorphism are pretty fast; however, it is not known whether there is a polynomial time method to test whether two graphs are isomorphic or, equivalently, whether two matrices are equal under your operations.
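As an illustration of the reduction, here is a sketch in Python with NetworkX (the encoding via a 'side' attribute, which prevents row-vertices from being matched to column-vertices, is one standard choice among several; the helper names are made up for the example):

```python
import networkx as nx
from networkx.algorithms import isomorphism

def bipartite_graph(M):
    """Encode a 0/1 matrix as a bipartite graph: one vertex per row,
    one per column, and an edge whenever the corresponding entry is 1."""
    G = nx.Graph()
    n = len(M)
    G.add_nodes_from((("r", i) for i in range(n)), side="row")
    G.add_nodes_from((("c", j) for j in range(n)), side="col")
    G.add_edges_from(
        (("r", i), ("c", j))
        for i in range(n) for j in range(n) if M[i][j]
    )
    return G

def equivalent(M1, M2):
    """True iff M2 arises from M1 by permuting rows and permuting columns."""
    gm = isomorphism.GraphMatcher(
        bipartite_graph(M1), bipartite_graph(M2),
        node_match=lambda a, b: a["side"] == b["side"])
    return gm.is_isomorphic()

A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]
print(equivalent(A, B))  # True: swap the two rows (or the two columns)
```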
-
I was just about to write that! – Per Alexandersson Mar 29 2011 at 17:31
thanx! that's the right answer – asterios gantzounis Mar 29 2011 at 17:35
The determinant would be invariant up to sign under the permutations you outlined (each row or column swap can flip its sign), so the absolute value of the determinant gives an invariant.
-
http://mathoverflow.net/questions/14921/is-the-real-jacquet-module-of-a-harish-chandra-module-still-a-harish-chandra-modu | ## Is the real Jacquet module of a Harish-Chandra module still a Harish-Chandra module?
Casselman defined the real Jacquet module of a Harish-Chandra module. If we view the Jacquet module as a module corresponding to the Levi subgroup, the question is: is it still a Harish-Chandra module? In particular, is it still admissible?
-
## 1 Answer
As I understand it, the Jacquet module for $(\mathfrak g, K)$-modules is defined so as to again be a $\mathfrak g$-module, and in fact it is a Harish-Chandra module, not for $({\mathfrak g},K)$, but rather for $(\mathfrak g,N)$ (where $N$ is the unipotent radical of the parabolic with respect to which we compute the Jacquet module). (I am probably assuming that the original $(\mathfrak g, K)$-module has an infinitesimal character here.)
I am using the definitions of this paper, in particular the discussion of section 2. This in turn refers to Ch. 4 of Wallach's book. So probably this latter reference will cover things in detail.
Added: I may have misunderstood the question (due in part to a confusion on my part about definitions; see the comments below), but perhaps the following remark is helpful:
If one takes the Jacquet module (say in the sense of the above referenced paper, which is also the sense of Wallach), say for a Borel, then it is a category $\mathcal{O}$-like object: it is a direct sum of weight spaces for a maximal Cartan in ${\mathfrak g},$ and any given weight appears only finitely many times. (See e.g. Lemma 2.3 and Prop. 2.4 in the above referenced paper; no doubt this is also in Wallach in some form; actually these results are for the geometric Jacquet functor of that paper rather than for Wallach's Jacquet module, but I think they should apply just as well to Wallach's.)
Maybe they also apply with Casselman's definition; if so, doesn't this give the desired admissibility?
-
Thank, Emerton. The definition in Wallach's book is different from Casselman's, in some sense it's the dual of Casselman's. In Casselman's definition, it becomes a (g,P)-module, and in particular a module for the corresponding Levi. – unknown (google) Feb 10 2010 at 18:17
Sorry, I didn't know that. Maybe ignore the above, then! – Emerton Feb 10 2010 at 18:25
http://mathhelpforum.com/differential-equations/173901-reduction-order.html | Thread:
1. reduction of order
show that y=1/x is a solution to
$\displaystyle x(4x-1)y'' + 2(2x-1)y' - 4y = 0$
Use reduction of order to find the second solution. Find the Wronskian and hence show the solutions are linearly independent.
I have done the first part. I think this is a Cauchy equation so I know the general solution. I obtained [LaTeX ERROR: Convert failed] but it didn't work when I subbed it in.
2. Originally Posted by poirot
show that y=1/x is a solution to
$\displaystyle x(4x-1)y'' + 2(2x-1)y' - 4y = 0$
Use reduction of order to find the second solution. Find the Wronskian and hence show the solutions are linearly independent.
I have done the first part. I think this is a Cauchy equation so I know the general solution. I obtained [LaTeX ERROR: Convert failed] but it didn't work when I subbed it in.
Well I come up with something that can't be integrated (by any method I know anyway.) I'll post what I have and hope that you (or someone else) can find my mistake.
$\displaystyle x(4x-1)y'' + 2(2x-1)y' - 4y = 0$
We know that f(x) = 1/x is a solution, so I will assume a solution y = f(x)v(x) = v/x.
This gives me
$\displaystyle y = \frac{v}{x}$
$\displaystyle y' = \frac{v'}{x} - \frac{v}{x^2}$
$\displaystyle y'' = \frac{v''}{x} - \frac{2v'}{x^2} + \frac{2v}{x^3}$
Plugging this into the original equation:
$\displaystyle (4x - 1)v'' + \left ( \frac{-2(4x - 1) + 2(2x - 1)}{x} \right ) v' + \left ( \frac{2(4x - 1) - 2(2x - 1) - 4x}{x^2} \right ) v = 0$
Upon simplifying:
$\displaystyle (4x - 1)v'' - 4xv' = 0$
Letting w = v' gives
$\displaystyle (4x - 1)w' - 4xw = 0$
Now, this is separable and leads to
$\displaystyle \int \frac{dw}{w} = \int \frac{4x~dx}{4x - 1}$
which leads to
$\displaystyle w = ke^{x}(4x - 1)^{1/4}$
However I cannot integrate this to find v(x)....
-Dan
3. Check the $-4x v'$ term - I think it should be $-4v'$ only.
4. Originally Posted by topsquark
Well I come up with something that can't be integrated (by any method I know anyway.) I'll post what I have and hope that you (or someone else) can find my mistake.
$\displaystyle x(4x-1)y'' + 2(2x-1)y' - 4y = 0$
We know that f(x) = 1/x is a solution, so I will assume a solution y = f(x)v(x) = v/x.
This gives me
$\displaystyle y = \frac{v}{x}$
$\displaystyle y' = \frac{v'}{x} - \frac{v}{x^2}$
$\displaystyle y'' = \frac{v''}{x} - \frac{2v'}{x^2} + \frac{2v}{x^3}$
Plugging this into the original equation:
$\displaystyle (4x - 1)v'' + \left ( \frac{-2(4x - 1) + 2(2x - 1)}{x} \right ) v' + \left ( \frac{2(4x - 1) - 2(2x - 1) - 4x}{x^2} \right ) v = 0$
Upon simplifying:
$\displaystyle (4x - 1)v'' - 4xv' = 0$
Letting w = v' gives
$\displaystyle (4x - 1)w' - 4xw = 0$
Now, this is separable and leads to
$\displaystyle \int \frac{dw}{w} = \int \frac{4x~dx}{4x - 1}$
which leads to
$\displaystyle w = ke^{x}(4x - 1)^{1/4}$
However I cannot integrate this to find v(x)....
-Dan
I think I worked it out. Like the person above said, you should have 4v' rather than 4xv'. Can anyone confirm that the answer is [LaTeX ERROR: Convert failed]? I wasn't sure if I needed 2 constants since you integrate twice.
5. You can re-label $e^{C}\mapsto C.$ Yes, you do need another constant. I get the following:
$v=C(2x^{2}-x)+B.$
6. Originally Posted by poirot
show that y=1/x is a solution to
$\displaystyle x(4x-1)y'' + 2(2x-1)y' - 4y = 0$
Use reduction of order to find the second solution. Find the Wronskian and hence show the solutions are linearly independent.
I have done the first part. I think this is a Cauchy equation so I know the general solution. I obtained [LaTeX ERROR: Convert failed] but it didn't work when I subbed it in.
I happened to go through reduction of order in my differential equations tutorial: http://www.mathhelpforum.com/math-he...tml#post145671
Given that $y_1(x)$ is a solution, I state that the second solution is
$\displaystyle y_2=\left[\int\left(\frac{e^{-\int P(x)\,dx}}{y_1^2(x)}\right)\,dx\right]y_1(x)$
where in our case, $P(x)=\dfrac{2(2x-1)}{x(4x-1)}=\dfrac{2}{x}-\dfrac{4}{4x-1}$ and $y_1(x)=\dfrac{1}{x}$.
I leave it for you to verify that $y_2(x)=2x-1$.
I hope this helps (in addition to the other responses you have received) !!!
7. Originally Posted by Chris L T521
I happened to go through reduction of order in my differential equations tutorial: http://www.mathhelpforum.com/math-he...tml#post145671
Given that $y_1(x)$ is a solution, I state that the second solution is
$\displaystyle y_2=\left[\int\left(\frac{e^{-\int P(x)\,dx}}{y_1^2(x)}\right)\,dx\right]y_1(x)$
where in our case, $P(x)=\dfrac{2(2x-1)}{x(4x-1)}=\dfrac{2}{x}-\dfrac{4}{4x-1}$ and $y_1(x)=\dfrac{1}{x}$.
I leave it for you to verify that $y_2(x)=2x-1$.
I hope this helps (in addition to the other responses you have received) !!!
I do agree with your answer except you need a constant
so everyone is wrong then lol?
8. Originally Posted by poirot
so everyone is wrong then lol?
No... :P
What they found was $v=C(2x^2-x)+B$. However, for the sake of finding another solution via reduction of order, I would ignore the constants. So $v(x)=C(2x^2-x)+B\sim 2x^2-x$.
But from topsquark's post, we assumed the solution was of the form $y=f(x)v(x)=\frac{v(x)}{x}$. So it now follows that your other solution is $\frac{2x^2-x}{x}=2x-1$.
I hope this clarifies things!
EDIT: The reason why we're not keeping the arbitrary constants is due to the fact that we're not finding the general solution to the ODE in question, but just finding another possible (partial) solution like $y_1=\frac{1}{x}$. Note that arbitrary constants would be necessary when you superimpose the two solutions to arrive at the general solution.
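As a quick sanity check, here is a minimal sympy sketch verifying that $y_1=\frac{1}{x}$ and $y_2=2x-1$ both satisfy the equation and computing their Wronskian:

```python
# Minimal sympy sketch: check that y1 = 1/x and y2 = 2x - 1 solve
# x(4x-1)y'' + 2(2x-1)y' - 4y = 0 and compute their Wronskian.
from sympy import symbols, diff, simplify

x = symbols('x')
y1 = 1 / x
y2 = 2 * x - 1

def lhs(y):
    return x * (4 * x - 1) * diff(y, x, 2) + 2 * (2 * x - 1) * diff(y, x) - 4 * y

print(simplify(lhs(y1)), simplify(lhs(y2)))             # 0 0
print(simplify(y1 * diff(y2, x) - y2 * diff(y1, x)))    # (4*x - 1)/x**2
```

The Wronskian $(4x-1)/x^{2}$ is not identically zero, so the two solutions are linearly independent.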
9. Originally Posted by Chris L T521
I happened to go through reduction of order in my differential equations tutorial: http://www.mathhelpforum.com/math-he...tml#post145671
Given that $y_1(x)$ is a solution, I state that the second solution is
$\displaystyle y_2=\left[\int\left(\frac{e^{-\int P(x)\,dx}}{y_1^2(x)}\right)\,dx\right]y_1(x)$
where in our case, $P(x)=\dfrac{2(2x-1)}{x(4x-1)}=\dfrac{2}{x}-\dfrac{4}{4x-1}$ and $y_1(x)=\dfrac{1}{x}$.
I leave it for you to verify that $y_2(x)=2x-1$.
I hope this helps (in addition to the other responses you have received) !!!
Can that formula be used for every type DE? Thanks
10. Originally Posted by poirot
Can that formula be used for every type DE? Thanks
This formula only works for second order DEs of the type $y^{\prime\prime}+P(x)y^{\prime} + Q(x) y = 0$. Furthermore, reduction of order only applies to second order ODEs; see: Reduction of order - Wikipedia, the free encyclopedia.
I hope this helps!
http://mathoverflow.net/questions/104172/knot-diagrams-sets-of-moves-and-equivalence-relations/104188
Knot diagrams, sets of moves and equivalence relations
Short version: Does anyone study equivalence classes generated by a given set of "moves" (in the sense of, but not limited to, Reidemeister moves) on the set of knot diagrams?
Yes, I understand that the concept of knot has a natural geometrical significance, that one usually views knot diagrams as a tool to study underlying knots, that the value of Reidemeister moves lies in how they preserve and generate the equivalence relation of isotopy. So, yes, I see why a knot theorist might reasonably have little interest, say, in looking at local moves not preserving isotopy.
So I'm asking this question in the spirit of abstraction for its own sake. But there is a precedent. A "symmetry theorist" studies groups because they capture the set of symmetries of important objects. But combinatorial group theory studies equivalence classes of strings under moves...and symmetry, if it enters the story at all, does so as a tool.
That said, it would be interesting if one could add an extra move to the Reidemeister moves that produced a coarser but computationally tractable classification.
No need to retread the ground here -- http://en.wikipedia.org/wiki/Reidemeister_move -- so, for example, I already understand that knot theorists know what happens with only Reidemeister type II and III moves. I am interested in stories like this, where one gets a finer equivalence relation than isotopy, but equally interested in sets of local moves that don't preserve isotopy and thus generate equivalence relations either coarser than, or simply incomparable with, isotopy.
-
The appropriate abstraction here seems to me to be a category presented by generators and relations. (The category relevant to knots is the tangle category: math.ucr.edu/home/baez/tangles.html) This is a very general construction: it has as special cases monoids presented by generators and relations, as well as posets... – Qiaochu Yuan Aug 7 at 3:02
There are lots of such moves. To give an example I am very familiar with, $C_k$-moves are certain moves (surgery along claspers) that generate the equivalence relation that two knots share Vassiliev invariants of order up to k-1. Another example: Lou Kauffman first observed that two knots have the same Arf invariant iff they differ by a sequence of "band-pass" moves. – Jim Conant Aug 7 at 3:13
Kawauchi's survey of knot theory book mentions several of the theorems Jim Conant alludes to. – Ryan Budney Aug 7 at 5:44
@Qiaochu Yuan So a knot or link then belongs to Hom(0,0)? But can't a "move" fail to respect the imposed directionality? Can't it wind back and forth in "time," using perhaps some but not all of the strands at a given "time" and thus escape this optic? – David Feldman Aug 7 at 8:09
David Feldman and Qiaochu: I think you're both right. The answer is 2-dimensional algebra, which does not suffer from imposed directionality. See the work of Dror Bar-Natan, on "the circuit algebra of tangles" and related 2-dimensional algebras. This is indeed what is happening- the abstraction is to certain diagrammatic algebras (over a "modular operad") in the sense Qiaochu is refering to. And I believe this (intentionally leaving what I mean by "this" slightly vague) is very much the appropriate abstraction. – Daniel Moskovich Aug 7 at 13:07
4 Answers
Very much so. There are a number of small industries centred around studying equivalence classes of knot diagrams generated by a set of moves.
1. The study of claspers. For example, $C_k$-moves are a special type of clasper surgeries. MathSciNet indicates 123 citations for Habiro's fundamental paper Claspers and finite type invariants of links, providing some coarse measure of the vitality of the topic.
2. Replacing one rational tangle in a knot diagram by another generates an equivalence relation which has been deeply studied using quandles. See e.g. J. Przytycki's introductory lectures.
3. Dehn surgery, where the surgery curve is required to belong to some specified part of a knot group or link group (in the kernel of its representation to some fixed group, for instance) generates equivalence relations on knot diagrams modulo combinatorial "twisting" moves, which have been studied by Cochran-Orr-Gerges, and (excuse the self promotion) by myself and Andrew Kricker, and by Litherland and Wallace. The techniques for studying these equivalence relations have been topological rather than combinatorial.
4. There are a number of settings in which one allows Reidemeister moves plus some crossing changes, but not others. In the theory of finite type invariants, one fixes some crossings (considers them in resolutions of "double points"), and allows crossing changes away from them. The equivalence classes are detected by the finite-type invariants whose type equals the number of "fixed" crossings. In a similar-sounding vein, a free virtual knot is a virtual knot where we allow crossing changes away from virtual crossings. They have a rich theory - see e.g. this Manturov paper.
-
The Delta move generates the equivalence relation of linking numbers for links, as proved by Matveev and by Murakami and Nakanishi.
There's the double Delta move, which Naik and Stanford showed generates S-equivalence.
A student of Freedman studied "slide equivalence" of knot diagrams. Unfortunately it wasn't published though.
-
These both delta moves are special cases of Y-clasper surgeries ($C_2$ moves). Take a clasper with one leaf ringing around each strand participating in the delta. Could you describe slide equivalence? – Daniel Moskovich Aug 7 at 15:57
@Daniel: I see, I learned about these moves in grad school I think before claspers were around, and I didn't realize they were equivalent. The number of Delta moves needed to unknot a knot is congruent to the Arf invariant. I don't have a copy of Huang's thesis, but I went to his thesis defense, and from what I can remember, you can slide arcs of the knot diagram around, treating a strand going under a crossing as two separate arcs attaching at the overstrand. But this was over 15 years ago, so I don't remember the details. – Agol Aug 7 at 16:54
Legendrian knots
Legendrian knots are smooth knots whose tangent directions are contained in a contact structure such as the standard contact structure on $\mathbb{R}^3$, $dz=y~dx$. Every knot has Legendrian representatives. Two Legendrian knots of the same topological type might not be isotopic through Legendrian knots.
The projection of a Legendrian knot in the standard contact structure to the $xz$-plane is called its front projection. Some people study Legendrian knots through diagrams showing front projections. The $y$ coordinate can be recovered from the slope, so all crossings are determined by the diagram. However, there can be no vertical tangencies, since $y$ would be undefined, and you must allow cusps. See this Notices article.
There are analogues of Reidemeister moves, so this gives a refinement of knot theory described by a set of diagram moves on front projections.
Actually, you don't have to work with cusped diagrams. You can make all cusps horizontal, and you could choose to replace the horizontal cusps with vertical tangencies. So, standard knot diagrams up to a restricted set of moves (including disallowing some isotopies where no Reidemeister move was performed, but where vertical tangencies would have been introduced or removed) are equivalent to Legendrian knots.
Suppose you study curves up to isotopy instead of the standard ambient isotopy used in knot theory. You may be disappointed: knot theory in $S^3$ becomes trivial. You are allowed to replace a piece of a diagram showing a long knot with a long unknot by shrinking the knot to a point and forgetting it. However, link theory is still nontrivial, and so is knot theory in a $3$-manifold which is not simply connected. See Rolfsen, "Localized Alexander Invariants and Isotopy of Links." Annals of Mathematics 101 (1975) 1-19.
-
One classical example of such a move is Conway mutation, which falls into the category of tangle replacement, as Qiaochu Yuan mentioned in his comment. There's a very famous pair of mutants, the Kinoshita-Terasaka and the Conway knot (see the wikipedia article).
Apparently, there's some topology behind this move: recently, using knot Floer homology, Josh Greene has shown that two alternating knots are mutants if their branched double covers are homeomorphic, and the other arrow was shown by Viro (see references in Greene's paper).
-
That link leads to an empty page. – Will Sawin Aug 8 at 0:15
Oops, sorry. I've fixed it now. Thanks. – Marco Golla Aug 8 at 6:57
http://mathhelpforum.com/math-topics/116771-kinematics-particle-moving.html
# Thread:
1. ## Kinematics of a particle moving
below.
A particle of mass 0.5kg is at rest on a horizontal table. It receives a blow of impulse 2.5Ns.
(a) Calculate the speed with which P is moving immediately after the blow
The height of the table is 0.9m and the floor is horizontal. In an initial model of the situation the table is assumed to be smooth.
(b) Calculate the horizontal distance from the edge of the table to the point where P hits the ground.
In a refinement of the model the table is assumed rough. The coefficient of friction between the table and P is 0.2.
(c) Calculate the deceleration of P.
Given that P travels 0.4m to the edge of the table,
(d) calculate the time which elapses between P receiving the blow to P hitting the floor.
2. Originally Posted by Sashikala
below.
A particle of mass 0.5kg is at rest on a horizontal table. It receives a blow of impulse 2.5Ns.
(a) Calculate the speed with which P is moving immediately after the blow
The height of the table is 0.9m and the floor is horizontal. In an initial model of the situation the table is assumed to be smooth.
(b) Calculate the horizontal distance from the edge of the table to the point where P hits the ground.
In a refinement of the model the table is assumed rough. The coefficient of friction between the table and P is 0.2.
(c) Calculate the deceleration of P.
Given that P travels 0.4m to the edge of the table,
(d) calculate the time which elapses between P receiving the blow to P hitting the floor.
(c) $F_{net} = ma = f_k = \mu mg$
magnitude of the acceleration is $a = \mu g$
(d) for the time the P slides on the ruff table ...
$\Delta x = v_0 t - \frac{1}{2}at^2$ , solve for $t$
for the time it takes for P to fall to the floor ...
$\Delta y = -\frac{1}{2}gt^2$ , solve for $t$
sum the two times.
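Putting numbers to this outline (a quick Python sketch, taking g = 9.8 m/s^2; expect small differences if your course uses 9.81):

```python
# Numerical check of parts (a)-(d); g = 9.8 m/s^2 assumed.
from math import sqrt

m, J, h, mu, d = 0.5, 2.5, 0.9, 0.2, 0.4   # mass, impulse, table height, friction coefficient, distance to edge
g = 9.8

v0 = J / m                                    # (a) speed just after the blow
t_fall = sqrt(2 * h / g)                      # time to fall 0.9 m
a = mu * g                                    # (c) deceleration on the rough table
t_slide = (v0 - sqrt(v0**2 - 2 * a * d)) / a  # smaller root of d = v0*t - a*t^2/2
print(v0)                 # (a) 5.0 m/s
print(v0 * t_fall)        # (b) ~2.14 m
print(a)                  # (c) 1.96 m/s^2
print(t_slide + t_fall)   # (d) ~0.51 s
```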
3. Thanks a lot Skeeter.
http://math.stackexchange.com/questions/87756/when-is-sinx-rational/87759
# When is $\sin(x)$ rational?
Obviously, there are some points (like $\pi,30$) but I am unsure if there are more.
How can it be proved that there are no more points, or what those points will be?
EDIT: I largely meant to ask what is supposed in the first comment. Put another way, are there numbers such that sin(x) and arcSin(x) are both rational?
-
If you look at the graph of $\sin(x)$ it goes through every rational from -1 to 1. I doubt there is a simple characterization when it is rational. – Aleks Vlasev Dec 2 '11 at 15:40
almost nowhere! – Vinicius M. Dec 2 '11 at 15:53
`like pi,30` ? what? 30 degrees? that's not rational, only algebraic – leonbloy Dec 2 '11 at 15:54
@leonbloy I presume what was meant was that $\sin 30^\circ$ is rational. – Michael Hardy Dec 2 '11 at 16:49
show 2 more comments
## 4 Answers
I assume you mean to ask: when $x$ is in whole degrees, when is $\sin(x)$ rational?
If $x$ is in whole degrees, then $x^\circ=\pi x/180\text{ radians}=\pi p/q\text{ radians}$, so we wish to find all rational multiples of $\pi$ so that $\sin(\pi p/q)$ is rational.
If $p/q\in\mathbb{Q}$, then $e^{\pm i\pi p/q}$ is an algebraic integer since $\left(e^{\pm i\pi p/q}\right)^q-(-1)^p=0$. Thus, $2\sin(\pi p/q)=-i\left(e^{i\pi p/q}-e^{-i\pi p/q}\right)$ is the difference and product of algebraic integers, and therefore an algebraic integer. However, the only rational algebraic integers are normal integers. Thus, the only values of $\sin(\pi p/q)$ which could be rational are those for which $2\sin(\pi p/q)$ is an integer, that is $\sin(\pi p/q)\in\{-1,-\frac{1}{2},0,\frac{1}{2},1\}$.
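As a quick illustration, a small sympy sketch that scans whole-degree angles from $0^\circ$ to $90^\circ$ and prints the ones whose sine it can prove rational (only $0^\circ$, $30^\circ$ and $90^\circ$ show up, consistent with the argument above):

```python
# Sketch: scan whole-degree angles 0..90 and print those whose sine
# sympy can evaluate to a rational number.
from sympy import sin, pi, Rational

for k in range(91):
    val = sin(pi * Rational(k, 180))
    if val.is_rational:        # True only when sympy proves rationality
        print(k, val)          # prints: 0 0, 30 1/2, 90 1
```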
-
You might be looking for Niven's theorem:
If $\sin(r \pi) = q$ where $r,q$ are rationals, then $q$ is $0$, $\pm 1/2$, $\pm 1$.
-
There are infinitely many points where both the sine and the cosine are rational, namely $$\left( \frac{n^2-m^2}{n^2+m^2}, \frac{2mn}{n^2+m^2} \right).$$ You can see that if $(x,y)=\text{that pair}$ then $x^2+y^2=1$, so it's on the unit circle.
Maybe a more interesting question is when do you have an angle that is a rational number of degrees (or, equivalently, a rational multiple of $\pi$ radians) for which the sine is rational. I think in that case you get only the obvious ones: $\sin 0^\circ=0$, $\sin 30^\circ=1/2$, $\sin 90^\circ=1$, and the counterparts in the other quadrants. I'm not sure right now how to prove that.
-
I attempted to show why in my answer. I learned this trick from Robert Israel back on sci.math. – robjohn♦ Dec 2 '11 at 19:17
$\sin(x)$ is locally a bijection (with the exception of maxima/minima), so it's rational whenever evaluated on $\arcsin(r)$ for rational $r$ unless this is maximum/minimum.
-
http://mathoverflow.net/questions/83127?sort=newest
## Examples of CAT(0)-groups
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
My question is the following:
Let M be a simply connected Riemannian manifold whose sectional curvatures are all nonpositive and let G be a group. Suppose that G acts in M properly discontinuous and cocompactly by isometries. Is G a CAT(0) group?
-
## 1 Answer
The answer is yes, since $M$ is a $CAT(0)$ space, and the group is quasi-isometric to it. See (for example) Jim Cannon's article in Bedford Keane Series:
J. Cannon, "The theory of negatively curved spaces and groups", in: T. Bedford, M. Keane, and C. Series (eds.), Oxford University Press, 1991.
-
By definition, a group is $CAT(0)$ if it acts isometrically, properly and co-compactly on a $CAT(0)$ space... So it is enough to say that $M$ is $CAT(0)$. See aimath.org/pggt/… – Alain Valette Dec 10 2011 at 18:33
Igor, you should say "yes by definition", but not "yes, since". – ε-δ Dec 10 2011 at 21:22
@ε-δ: I believe that is the essence of Alain's commentary... – Igor Rivin Dec 11 2011 at 10:08
To be fair, it is not a trivial fact that a simply connected Riemannian manifold whose sectional curvatures are all nonpositive is CAT(0). – Anon Dec 15 2011 at 10:01
@Anon: I believe that this is why the T is in CAT(0)... – Igor Rivin Dec 15 2011 at 10:37
http://mathoverflow.net/revisions/37570/list
## Return to Question
3 added 476 characters in body
Let $X$ be a smooth projective variety over $\mathbb{C}$. And let $L$ be a big and nef line bundle on $X$. I want to prove $L$ is semi-ample($L^m$ is basepoint-free for some $m > 0$).
The only way I know is using Kawamata basepoint-free theorem:
Theorem. Let $(X, \Delta)$ be a proper klt pair with $\Delta$ effective. Let $D$ be a nef Cartier divisor such that $aD-K_X-\Delta$ is nef and big for some $a > 0$. Then $|bD|$ has no basepoints for all $b >> 0$.
Question. What other kinds of techniques to prove semi-ampleness or basepoint-freeness of given line bundle are?
Maybe I miss some obvious method. Please don't hesitate adding answer although you think your idea on the top of your head is elementary.
Addition : In my situation, $X$ is a moduli space $\overline{M}_{0,n}$. In this case, Kodaira dimension is $-\infty$. More generally, I want to think genus 0 Kontsevich moduli space of stable maps to projective space, too. $L$ is given by a linear combination of boundary divisors. It is well-known that boundary divisors are normal crossing, and we know many curves on the space such that we can calculate intersection numbers with boundary divisors explicitely.
2 edited body
Let $X$ be a smooth projective variety over $\mathbb{C}$. And let $L$ be a big and nef line bundle on $X$. I want to prove $L$ is semi-ample($L^m$ is basepoint-free for some $m > 0$).
The only way I know is using Kawamata basepoint-free theorem:
Theorem. Let $(X, \Delta)$ be a proper klt pair with $\Delta$ effective. Let $D$ be a nef Cartier divisor such that $aD-K_X-\Delta$ is nef and big for some $a > 0$. Then $|bD|$ has no basepoints for all $b >> 0$.
Question. What other kinds of techniques to prove semi-ampleness or basepoint-freeness of given line bundle are?
Maybe I miss some obvious method. Please don't hesitate adding answer although you think your idea on the top of your head is too elementary.
1
# Technique to prove basepoint-freeness
Let $X$ be a smooth projective variety over $\mathbb{C}$. And let $L$ be a big and nef line bundle on $X$. I want to prove $L$ is semi-ample($L^m$ is basepoint-free for some $m > 0$).
The only way I know is using Kawamata basepoint-free theorem:
Theorem. Let $(X, \Delta)$ be a proper klt pair with $\Delta$ effective. Let $D$ be a nef Cartier divisor such that $aD-K_X-\Delta$ is nef and big for some $a > 0$. Then $|bD|$ has no basepoints for all $b >> 0$.
Question. What other kinds of techniques to prove semi-ampleness or basepoint-freeness of given line bundle?
Maybe I miss some obvious method. Please don't hesitate adding answer although you think your idea on the top of your head is too elementary.
http://physics.stackexchange.com/questions/8727/are-valence-electrons-located-solely-in-the-s-and-p-subshells
# Are valence electrons located solely in the s and p subshells?
Or are they in all subshells??
-
## 1 Answer
If you define valence electrons as those that belong to open shells, or as those who participate in chemical bonding, then no: The transition metal elements have open $d$-shells, and they play an important role in determining the properties of transition metal compounds.
-
http://crypto.stackexchange.com/questions/3461/existing-works-on-pre-computing-elgamal-ephermal-keys/3478
# Existing works on pre-computing ElGamal ephemeral keys
I was playing around with a problem in e-voting schemes that use additive homomorphic encryption to tally votes, namely that at the end of the day somebody (or somebodies, if the secret material has been broken up somehow) has to be trusted to decrypt the final tally.
Looking at ElGamal's additive variant, reproduced here for reference:
• $M$ is a modulo field in which all computation happens
• $g$ is a generator for $M$
• private key = $k_2$ = random number < $M$
• public key = $k_1$ = $g^{k_2}$
• $m$ is a member of the modulo field $M$
• $x$ is the plaintext
• $r$ is a random number < $M$, different for each call to E
• this is the ephemeral key, used in the second $D(...)$
• $E(k_1, x) = <g^r, m^x * k_1^r> = <c_1, c_2>$
• $D(k_2, c_1, c_2) = c_2 * ({c_1^{k_2}})^{-1}$
• $D(r, k_1, c_1, c_2) = c_2 * (k_1^r)^{-1}$
In e-voting schemes*, every vote is represented as a 1 or 0 plaintext and tallying is a simple matter of multiplying all the ciphertexts together and decrypting the result.
The following scheme eliminates the need to release the private key to decrypt the final tally:
### Prior to the "election"
• precompute a large list of random numbers $R = [R_1, ... , R_n]$ where $n$ is the number of possible voters
• let $A = [A_1, ..., A_n] = [R_1, A_1+R_2, A_2+R_3, ... , A_{n-1}+R_n]$
• publish the hashes of each member of A in order
### When encrypting "ballots"
• when choosing an $r$ for a $E(k_1, x)$, choose the first unused member of R
### After the election
• publish all the ciphertexts (the "ballots")
• publish $A_y$ where $y$ is the number of ballots cast
$A_y$ would be the ephemeral key for the tally, but wouldn't allow for the decryption of any other ballots (like the private key would). The correctness of $A_y$ is achieved by the polling authority committing to its value by publishing hashes beforehand.
In a nutshell, it's just publishing a commitment (via hashing) for all possible ephemeral keys for the final tally; then releasing the singular relevant ephemeral key. Everyone can tally, and decrypt that tally but decrypting individual "ballot" values shouldn't be possible.
I built a little proof of concept program that actually does this (using some open source implementations of additive ElGamal), and it works.
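For illustration only, here is a minimal toy sketch of the same flow (not the proof-of-concept above; tiny insecure parameters, names invented for the demo, Python 3.8+ for the modular inverse): ballots are encrypted, multiplied together, and the product is opened with the sum of the per-ballot ephemeral keys rather than the private key.

```python
# Toy additive-ElGamal tally opened with the aggregated ephemeral key A_y.
# Insecure demo parameters; a real system needs a proper group and proofs.
import random

p = 2**127 - 1                      # a prime (Mersenne); the group is Z_p^*
g, m = 3, 5                         # bases used for randomness and plaintext (demo choice)
k2 = random.randrange(2, p - 1)     # private key (never used below)
k1 = pow(g, k2, p)                  # public key

def encrypt(vote, r):
    return (pow(g, r, p), pow(m, vote, p) * pow(k1, r, p) % p)

votes = [1, 0, 1, 1, 0]
rs = [random.randrange(2, p - 1) for _ in votes]      # ephemeral keys R_i
ballots = [encrypt(v, r) for v, r in zip(votes, rs)]

c1 = c2 = 1                          # homomorphic tally: multiply componentwise
for a, b in ballots:
    c1, c2 = c1 * a % p, c2 * b % p

A_y = sum(rs) % (p - 1)              # aggregated ephemeral key for the tally
m_tally = c2 * pow(pow(k1, A_y, p), -1, p) % p
tally = next(t for t in range(len(votes) + 1) if pow(m, t, p) == m_tally)
print(tally)                         # 3
```

Opening an individual ballot would still require its own $r_i$ (or the private key), which is exactly the property the published commitments are meant to preserve.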
Of course, simply working isn't sufficient; I'm curious about the security of the scheme.
My question, are there any publications or other work related to this approach?
This seems like a pretty simple extension to the well documented additive ElGamal e-voting schemes, but my Google-Fu is failing to find anything and I'm personally unfamiliar with any such material.
(This approach [the pre-computed ephemeral key part, specifically] being broken and why, also acceptable naturally)
*Simplified greatly from actual schemes, but this is the heart of the crypto bits.
-
Obvious question: what ensures that everyone casts a vote of 0 or 1? What prevents someone from stuffing the ballot-box by casting a vote of (say) 19, or perhaps -7? – poncho Aug 6 '12 at 0:55
@poncho - outside the scope of this particular question, but in a practical system there's a zero-knowledge proof that can be constructed for ElGamal ciphertexts proving the plaintext falls in a range (in this case [0,1]). – Kevin Montrose♦ Aug 6 '12 at 0:59
– CodesInChaos Aug 7 '12 at 21:01
@CodesInChaos this is all related to the "Improvements" section, so I don't think a link would contribute much and bordered on self-promotion. I didn't even really explain ElGamal in that post. – Kevin Montrose♦ Aug 7 '12 at 21:03
## 3 Answers
It seems you want to decrypt the final value without revealing the private key. First, if someone knows the private key, they can issue a very simple non-interactive zero knowledge proof that the plaintext is a decryption of the ciphertext (the ciphertext being the accumulation of all the ballots) without revealing the actual value of the key. This is the standard way of approaching the problem. (I can add the proof if you are interested).
Second, the approach of pre-committing to the random factors for Elgamal has been examined in the literature however for completely different reasons. There is a worry that if a voting machine chooses the randomness, they could try a few values until the ciphertext comes out with a certain pattern (e.g., the 5th bit is a 0 if the vote is for Alice and a 1 if it is for Bob). Your scheme incidentally avoids this. A more thorough approach is considered in On Subliminal Channels in Encrypt-on-Cast Voting Systems .
If you insist on your approach, note that you never need to use $c_1$ at all. $c_2$ alone is a simpler primitive called a Pedersen commitment. The approach of encoding votes with Pedersen commitments, adding them up (under commitment), and then revealing the sum of the random factors is considered in Improving Helios with Everlasting Privacy Towards the Public. The difference is that voters choose the random factor and submit an encryption of it rather than using factors chosen for them. (It also is being done for totally different reasons: Pedersen commitments have a property called everlasting privacy).
In terms of the security of your approach, it is not clear who knows R and how voters get their value of R. For example, if I give voter 5 the value ($R_5$ - 1) and give voter 6 the value ($R_6$ +1), they will still add up to the correct value of $R$. However voter 5's message will be $m^x/k_1$ and voter 6's will be $m^x*k_1$. These may be sensible values for a vote: for example say that $k_1=m^2$. If voter 5 votes for 0, then this can be counted as a -1 and if voter 6 votes for 1, it can be counted as a +3. In other words, you end up with 2 votes for candidate 1 instead of one vote for each.
It may be sufficient to choose $m$ in such a way that no one knows the discrete logarithm between $k_1$ and $m$ but I'd have to think about it some more before endorsing it.
-
A problem I see is that you're not actually achieving the goal you're trying to reach: Prove to the public that the authority is honest.
By making $A_y$ public, you're only proving that you used the (secondary) key that you said you would be using. What you're not proving though, is that it actually is the correct one, the sum of the ephemeral keys.
To prove that it is, you would have to do one of two things:
1. Publicize the $R_i$, so people can see that indeed $A_y = \sum R_i$. That would of course make the whole thing moot, since the votes wouldn't be secret anymore.
2. Let voter $V_i$ know the value of $A_{i-1}$, in which case each voter could verify that ${\cal H}(A_{i-1} + R_i)$ is indeed the published hash (and the voters would have to trust that everybody checks theirs). However, this would mean that $V_i$ can find out (by using $A_{i-1}$ to decrypt the ciphertext product $\prod_{j<i} <c_{1_j}, c_{2_j}>$) what the tally was up to the point where they themselves voted. In particular, $V_2$ would know how $V_1$ voted. This is obviously undesirable as well.
I don't know if there's a way for an evil authority to chose the $A_i$ such that they tweak the result in a "desired" way and not give implausible results (like 20 million votes for candidate 1, although there were just 30 voters), but unless this is proven to be impossible, your scheme doesn't seem to make the authority's decryption verifiable.
-
I've already told you this, but for the public record, I'll play with the math and see what working out $A_y$ requires (such that the decryption is believable) when you know everything else. – Kevin Montrose♦ Aug 7 '12 at 18:39
There has been extensive research literature on this subject. If you are considering using this for real, please read my answer to a similar question first.
As far as the specific question you asked, there is a general technique here. Rather than trusting a single electoral authority with the ability to decrypt all the votes (and thus the ability to learn how everyone voted, if they cared to act contrary to the public trust), a standard defense is to designate multiple electoral authorities and design a cryptographic protocol with the property that it would require collusion of all of the electoral authorities to improperly learn how people voted.
If you are using homomorphic encryption, a standard technical tool for achieving this property is to use threshold cryptography. Some additional stuff is needed (e.g., to allow the authorities to prove they did the decryption of the tally properly), but that is orthogonal and is described in the research literature.
(If you're not using homomorphic encryption, the other standard technical tool is mixnets, which allow a group of authorities to randomly permute and decrypt the encrypted votes, in a way that renders the cleartext votes unlinkable to the ciphertexts submitted by voters.)
If you'd like to learn more about this subject, I recommend that you start with Ben Adida's presentation, Voting Cryptography Tutorial for Non-Cryptographers (slides and audio recording of his talk). Then, once you understand everything in his talk, a good next place to start would be to read some of the research papers in the literature. For instance, the original Helios paper is excellent.
-
http://physics.stackexchange.com/questions/tagged/photoelectric-effect+hamiltonian-formalism
# Tagged Questions
### An electron is subjected to an electromagnetic field using the canonical equations solve
So I was given the following vector field: $\vec{A}(t)=\{A_{0x}cos(\omega t + \phi_x), A_{0y}cos(\omega t + \phi_y), A_{0z}cos(\omega t + \phi_z)\}$ Where the amplitudes $A_{0i}$ and phase shifts ...
http://math.stackexchange.com/questions/99255/finding-the-matrix-of-a-rotation-transformation
# Finding the matrix of a rotation transformation
rotation transformation defined as some composition of rotations along x, y, z
assuming $T$ is a rotation transformation in $\mathbb{R}^{3}, v=\left(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\right), T\left(v\right)=\left(1,0,0\right)$
I need to find the matrix of $T$ with respect to the standard basis. I was trying to find a rotation $S_{\phi}$ about the $y$ axis such that $S_{\phi}\left(v\right)$ lies in the $XY$ plane. I am not sure if they meant that $T$ is a rotation about the $z$ axis, and I am not sure whether that is the right approach or how to handle this question.
-
There seems to be insufficient information; also the vectors you have given us seem to be transposes of what you intended. – user21436 Jan 15 '12 at 13:05
@KannappanSampath: I find your second remark a bit pedantic: certainly there is nothing wrong with denoting an element of $\mathbb R^3$ by a triplet of real numbers. The habit of writing them vertically helps us remember how to operate by a matrix on a vector, but I would not advocate imposing the use of distinct versions of $\mathbb R^3$ to contain column vectors and row vectors. – Marc van Leeuwen Jan 15 '12 at 13:45
## 1 Answer
Your equations are insufficient to determine the rotation $T$. To see this, consider a rotation $R$ with axis $\left<v\right>$; if $T$ is any solution then so is $T\circ R$, since compositions of rotations are again rotations. (You can also compose on the right with rotations with axis $\left<T(v)\right>$, although this gives no other alternative solutions than those found by the first method.)
The axis of any rotation that solves this problem must lie in the reflection plane (perpendicular bisector) $H$ between the vectors $v$ and $T(v)$, since any point of the axis obviously stays at equal distances from the two. Moreover any line $l$ in $H$ and passing through the origin can be the axis of a rotation solving the problem. To see this, compose the reflection in $H$ with the unique reflection fixing both $l$ and $T(v)$ (i.e., reflection in the plane spanned by $l$ and $T(v)$). This composition is a rotation, and it sends $v$ to $T(v)$.
For instance, one gets a rotation by a minimal angle by choosing $l$ to be perpendicular to both $v$ and $T(v)$: the line spanned by $(0,1,-1)$. Compute the reflection in $H$ followed by the one in the plane spanned by $(1,0,0)$ and $(0,1,-1)$ (the latter is simple: $(x,y,z)\mapsto(x,-z,-y)$) to find a concrete solution.
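A quick numerical check of this two-reflections recipe (a numpy sketch using the vectors named above):

```python
# Numpy sketch: reflect in the bisector plane H of v and T(v), then in the
# plane spanned by (1,0,0) and (0,1,-1); the composition is a rotation
# taking v to (1,0,0).
import numpy as np

v = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
w = np.array([1.0, 0.0, 0.0])                    # T(v)

def reflection(normal):
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.eye(3) - 2.0 * np.outer(n, n)

R_H = reflection(v - w)                          # reflection in H
R_P = reflection([0.0, 1.0, 1.0])                # reflection fixing (1,0,0) and (0,1,-1)
T = R_P @ R_H

print(np.round(T @ v, 10))                       # ≈ [1, 0, 0]
print(np.round(np.linalg.det(T), 10))            # 1.0, so T is a rotation, not a reflection
```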
-
http://math.stackexchange.com/questions/174452/laurent-series-of-fz-frac1zz-1z-2?answertab=votes
# Laurent series of $f(z)=\frac{1}{z(z-1)(z-2)}.$
Consider $$f(z)=\frac{1}{z(z-1)(z-2)}.$$
I want to determine the Laurent series in the point $z_0=0$ on $0<|z|<1$.
Partial decomposition yields:
$$f(z)=\frac{1}{z(z-1)(z-2)}=(1/2)\cdot (1/z) - (1/(z-1)) + (1/2)(1/(z-2)).$$
Is the general strategy now, to try to use the geometric series?
$(1/(z-1))=-(1/(1-z))=-\sum_{k=0}^\infty z^k$
$\displaystyle 1/(z-2)=-(1/2)\frac{1}{1-\frac{z}{2}}=-(1/2)\cdot\sum_{k=0}^\infty (z/2)^k$
So $f(z)=(1/2)\cdot (1/z)+\sum_{k=0}^\infty z^k-(1/4)\cdot\sum_{k=0}^\infty (z/2)^k$ (*)
Some questions:
1) What is the difference between a Laurent and a Taylor series? I don't get it. It seems you calculate them the same.
2) Why didn't we write (1/z) as a series expansion too?
3) What makes the end result (*) to be a Laurent series?
-
## 1 Answer
Partial fractions and geometric series give $$\begin{align} \frac1{(1-x)(2-x)} &=\frac1{1-x}-\frac1{2-x}\\ &=\frac1{1-x}-\frac12\frac1{1-x/2}\\ &=(1+x+x^2+x^3+x^4+\dots)\\ &-\left(\frac12+\frac14x+\frac18x^2+\frac1{16}x^3+\frac1{32}x^4+\dots\right)\\ &=\frac12+\frac34x+\frac78x^2+\frac{15}{16}x^3+\frac{31}{32}x^4+\dots \end{align}$$ Thus, the Laurent series for $\frac1{x(x-1)(x-2)}$ at $x=0$ is $$\frac1{x(x-1)(x-2)}=\frac1{2x}+\frac34+\frac78x+\frac{15}{16}x^2+\frac{31}{32}x^3+\dots$$ We could also expand the series at $x=1$. Let $y=x-1$ and then $$\begin{align} \frac1{x(x-1)(x-2)} &=\frac1{(y+1)y(y-1)}\\ &=-\frac1y\frac1{1-y^2}\\ &=-\frac1y-y-y^3-y^5-y^7-\dots\\ &=-\frac1{x-1}-(x-1)-(x-1)^3-(x-1)^5-(x-1)^7-\dots \end{align}$$
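A quick symbolic check of the expansion at $x=0$ (a sympy sketch; the printed ordering of terms may vary):

```python
# Sympy sketch: Laurent expansion of 1/(x(x-1)(x-2)) at x = 0.
from sympy import symbols, series

x = symbols('x')
print(series(1 / (x * (x - 1) * (x - 2)), x, 0, 4))
# 1/(2*x) + 3/4 + 7*x/8 + 15*x**2/16 + 31*x**3/32 + O(x**4)
```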
1. The Laurent series is much like the Taylor series except terms of negative degree are allowed.
2. We don't expand $\frac1x$ since there is no power series for $\frac1x$ at $0$ other than $\frac1x$.
3. Definition. It is a power series at a point, $x_0$ which can have both positive and negative powers of $x-x_0$.
-
That's very good robjohn, thank you so much! Question on point 3: In my case, there are no negative powers of $z-0$, doesn't this fact change something? Do we still have a Laurent series, even if all powers are positive ($k$ starts at $0$ in both cases) – Chris Jul 24 '12 at 0:41
@Chris: Yes, a Laurent series may have both positive and negative powers of $x-x_0$, but it need not have both. A Taylor series is a Laurent series, but we usually just call it a Taylor series unless it has negative powers of $x-x_0$. – robjohn♦ Jul 24 '12 at 0:44
Great, things are much clearer now. I was until now utterly confused about that! – Chris Jul 24 '12 at 0:49
http://crypto.stackexchange.com/questions/3626/can-elgamal-be-made-additively-homomorphic-and-how-could-it-be-used-for-e-voting?answertab=votes
# Can Elgamal be made additively homomorphic and how could it be used for E-voting?
Elgamal is a cryptosystem that is homomorphic over multiplication.
1. How can I convert it to an additive homomorphic cryptosystem?
2. How can I use this additive homomorphic Elgamal cryptosystem for E-voting purpose?
-
## 1 Answer
Elgamal can be made additive by encrypting $g^m$ instead of $m$ with traditional Elgamal for some generator $g$ (usually the same one used to generate the public key). This variant is sometimes called exponential Elgamal. The difficulty is decryption: running the standard decryption gives you $g^m$ and recovering $m$ requires you to solve the discrete log. As long as $m$ is small, this can be done algorithmically or with a lookup table.
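Concretely, writing $h=g^k$ for the public key, exponential Elgamal encrypts $m$ as $(g^r,\ g^m h^r)$, and multiplying two such ciphertexts componentwise gives
$$(g^{r_1},\ g^{m_1}h^{r_1})\cdot(g^{r_2},\ g^{m_2}h^{r_2})=(g^{r_1+r_2},\ g^{m_1+m_2}h^{r_1+r_2}),$$
which is a valid encryption of $m_1+m_2$ under randomness $r_1+r_2$; after a normal decryption you are left with $g^{m_1+m_2}$ and recover the sum by the small discrete log mentioned above.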
See this answer for how to build a voting scheme from it (or this paper for the full description). Exponential Elgamal is great for things like voting because after you tally up all the votes, you'll still have a number that is reasonably small.
Paillier is additively homomorphic as well, and can support a proper decryption of any sized message. Despite this, many voting schemes still use exponential Elgamal because it is faster, easier to do distributed key generation, and not patented.
-
I'm curious on your remark that exponential Elgamal is faster. Do you know of any published benchmarks or results of tests that I can get numbers from? Also, do you know how traditional Elgamal compares to the elliptic curve variant when it comes to speed? – mikeazo♦ Oct 4 '12 at 13:53
I don't know of any publications that directly compare Paillier with Elgamal in Gq or with ECC; although I haven't look too hard either. It is just accepted as folk wisdom I guess. – PulpSpy Oct 16 '12 at 19:55
Can I use Elgamal for both addition and multiplication of ciphertexts? I.e., whenever I want to multiply I compute my message $x$ as $g^x$ and whenever I want to add I compute conventional Elgamal. My plaintext would be small integers in a range of $0 \ldots 2^{32}$ or $2^{64}$ – curious Mar 22 at 11:22
Reading between the lines of what you are asking, the answer is no. Elgamal can handle both multiplication and addition, but you cannot mix the two operations on any given ciphertexts. You must decide when you encrypt to lock the ciphertext into only doing addition or only doing multiplication. The ability to do both is a "fully homomorphic" cryptosystem, of which there are some, but they are mainly theoretical and too slow to be practical. One efficient scheme, BGN, allows a single multiplication and as many additions as you want. – PulpSpy Mar 25 at 13:11
http://math.stackexchange.com/questions/136324/finding-equation-of-plane?answertab=oldest
# Finding equation of plane
I have some trouble following some examples in my textbook. In the examples, the book provides equations for planes, and I'm not sure how they are derived.
One example is:
Given:
$$\begin{bmatrix} x \\ y \\ z\end{bmatrix} = \alpha \begin{bmatrix} 1 \\ 1\\ 1 \end{bmatrix}e^{4t} + \beta \begin{bmatrix} -1 \\ -1 \\ 2 \end{bmatrix} e^{t} + \gamma \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}e^{-t}$$
Any solution for which $\gamma = 0$ will lie in the plane $x - y = 0$ for all $t$.
Another example is:
Given: $$\begin{bmatrix} x \\ y\\ z \end{bmatrix} = \beta \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}e^{t} + \gamma \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} e^{4t}$$
This defines the plane $x - 3z = 0$
It's been quite some time since I had multivariable calculus, and after fumbling around for a long time, I just don't see how these plane equations are obtained. Since the book doesn't show any intermediate steps, I would greatly appreciate any help!
-
Are you sure you have the second example correctly? If I understand what you have correctly, I would say that in the second example you get the plane $x-z=0$. (P.S. What is the textbook?) – Arturo Magidin Apr 24 '12 at 16:30
Hi. Yes, this is as it is written in the textbook (Nonlinear Ordinary Differential Equations - An Introduction for Scientists and Engineers). However, the book contains a vast amount of typos (I've never encountered a book with as many typos as this before), so it is quite possible that the text has got it wrong. – Kristian Apr 24 '12 at 16:50
The way I would interpret this for the first problem: if $\gamma=0$, then $(x,y,z)\in\mathrm{span}\bigl((1,1,1),(-1,-1,2)\bigr) = \mathrm{span}\bigl(1,1,0),(0,0,1)\bigr)$, and that span is precisely the plane $x-y=0$. Doing the same thing with the second example, I get $x-z=0$. – Arturo Magidin Apr 24 '12 at 16:52
Thanks a lot! But from where do you derive that $span((1,1,1), (-1,-1,2)) = span((1,1,0),(0,0,1))$? – Kristian Apr 24 '12 at 17:11
Row-reduce. If you add the first vector to the second, you get $(1,1,1)$ and $(0,0,3)$. Divide the second by $3$ to get $(1,1,1)$ and $(0,0,1)$. Subtract the second from the first to get $(1,1,0)$ and $(0,0,1)$. None of those (elementary row) operations changes the span. – Arturo Magidin Apr 24 '12 at 17:12
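For what it's worth, the normal of the plane spanned by the two directions is just their cross product, which confirms $x-y=0$ in the first example and gives $x-z=0$ (rather than the book's $x-3z=0$) in the second; a quick numpy check:

```python
# Numpy sketch: normal vectors of the planes spanned by the solution directions.
import numpy as np

print(np.cross([1, 1, 1], [-1, -1, 2]))   # [ 3 -3  0]  ->  plane x - y = 0
print(np.cross([1, -2, 1], [1, 1, 1]))    # [-3  0  3]  ->  plane x - z = 0
```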
http://cms.math.ca/Reunions/ete12/abs/ht
2012 CMS Summer Meeting (Réunion d'été SMC 2012)
Regina Inn and Ramada Hotels (Regina, Saskatchewan), June 2-4, 2012 www.smc.math.ca//Reunions/ete12
Homotopy Theory
Org: Kristine Bauer (Calgary) and Marcy Robertson (Western)
JULIE BERGNER, University of California, Riverside
Homotopy operads as diagrams [PDF]
Many algebraic structures on spaces can be encoded via product-preserving functors from an algebraic theory to the category of spaces. For some structures, the algebraic theory can be replaced by a simpler diagram. For example, simplicial monoids are known to be equivalent to Segal monoids, given by certain $\Delta^{op}$-diagrams. In joint work with Philip Hackney, we establish an equivalence of model categories between simplicial operads and certain $\Omega^{op}$-diagrams, where $\Omega$ is the Moerdijk-Weiss dendroidal category. Furthermore, we extend this result to a diagrammatic description of simplicial operads with a group action.
MICHAEL CHING, Amherst College
A classification of Taylor towers [PDF]
Goodwillie's homotopy calculus provides a systematic way to approximate a functor F between the categories of based spaces and/or spectra with a 'Taylor tower' of polynomial functors. The layers in this tower can be described by a sequence of spectra which play the role of the derivatives of F (at the one-point object). The goal of this talk is to describe additional structure on these derivatives that specifies the extensions in the Taylor tower. This allows us to describe the polynomial approximations as derived mapping objects for coalgebras over certain comonads. I'll connect the structure of these comonads to that of right modules over various operads. This is all joint work with Greg Arone.
MARTIN FRANKLAND, University of Illinois at Urbana-Champaign
Non-realizable 2-stage $\Pi$-algebras [PDF]
It is a classic fact that Eilenberg-MacLane spaces exist and are unique up to weak equivalence. However, one cannot always find a space with two non-zero homotopy groups and prescribed primary homotopy operations. Using work of Baues and Goerss, we will present examples of non-realizable 2-stage $\Pi$-algebras, focusing on the stable range.
PHIL HACKNEY, University of California, Riverside
Homotopy theory of props [PDF]
Props have the capability to control algebraic structures more general than those described by operads; for example, there is a prop governing Hopf algebras and a prop governing conformal field theories. We study the category consisting of all (colored, simplicial) props. We show that this category is a closed symmetric monoidal category with tensor product closely related to the Boardman-Vogt tensor product of operads. Furthermore, this category admits a Quillen model structure which restricts to the model structure for (colored) operads developed by Robertson and to the Bergner model structure for simplicial categories. (joint with Marcy Robertson)
RICK JARDINE, University of Western Ontario
Cosimplicial spaces and cocycles [PDF]
Cosimplicial spaces were introduced by Bousfield and Kan in the early 1970s as a technical device in their theory of homology completions. These objects have since become fundamental tools in much of homotopy theory, but the original theory remains rather mysterious. The point of this talk is that cosimplicial spaces are quite amenable to study with modern methods of sheaf theoretic homotopy theory and cocycle categories. Non-abelian cohomology theory has a particularly interesting and useful interpretation in this context.
BRENDA JOHNSON, Union College
Models for Taylor Polynomials of Functors [PDF]
Let ${\mathcal C}$ and ${\mathcal D}$ be simplicial model categories. Let $f:A\rightarrow B$ be a fixed morphism in ${\mathcal C}$ and ${\mathcal C}_f$ be the category whose objects are pairs of morphisms $A\rightarrow X\rightarrow B$ in ${\mathcal C}$ that factor $f$. Using a generalization of Eilenberg and Mac Lane's notion of cross effect functors (originally defined for functors of abelian categories) to functors from ${\mathcal C}_f$ to ${\mathcal D}$, we produce a tower of functors, $\dots\rightarrow \Gamma _n^f F\rightarrow \Gamma _{n-1}^fF\rightarrow \dots \rightarrow \Gamma _0^fF$, that acts like a Taylor series for the functor $F$. We compare this to the Taylor tower for $F$ produced by Tom Goodwillie's calculus of homotopy functors, and use it to better understand the roles of the initial and final objects, $A$ and $B$, in the calculus of homotopy functors. This is joint work with Kristine Bauer, Rosona Eldred, and Randy McCarthy.
KEITH JOHNSON, Dalhousie University
Homogeneous integer valued polynomials and the stable homotopy of BU [PDF]
The use of homogeneous integer valued multivariable polynomials to detect elements in the stable homotopy groups of BU originated with work of Baker, Clarke, Ray and Schwartz (Trans. AMS 316(1989)). In this talk we will demonstrate some new constructions of rings of such polynomials and study their topological consequences.
DAN LIOR, University of Illinois, Urbana
The use of labelled trees in the Goodwillie-Taylor tower of discrete modules [PDF]
A discrete module is a functor of finite pointed sets taking values in chain complexes of abelian groups. For an arbitrary discrete module F, McCarthy, Johnson and Intermont described the first homogeneous layer $D_1F$ of the Goodwillie-Taylor tower of F in terms of the cross effects of F and the multilinear parts of finitely generated free Lie algebras. I will describe a category of trees which illustrates this connection and extends it to the rest of the layers $D_nF$ of the Goodwillie-Taylor tower of F.
PARKER LOWREY, University of Western Ontario
The derived motivic Hall algebra associated to a projective variety. [PDF]
We discuss how to associate a locally geometric derived moduli stack classifying objects in the bounded derived category associated to any projective variety. This is the main ingredient needed in defining a Hall algebra for this triangulated category. It extends the work of Toën, Kontsevich, and Soibelman to the singular case and is the first step in applying Donaldson-Thomas theory (and Joyce's extensions) to these homologically unwieldy categories. The talk will contain a good deal of algebro-geometric and homotopy theoretic material.
HUGO RODRIGUEZ ORDONEZ, Universidad Autónoma de Aguascalientes
Dimensional restrictions upon counterexamples to Ganea's conjecture [PDF]
The long-standing conjecture by Ganea on the Lusternik-Schnirelmann category was disproved in the late 1990s by means of a family of counterexamples whose least-dimensional element has dimension 10. In a previous work, the authors proved that there is a 7-dimensional counterexample. In this work, we present a proof that there is no counterexample to this conjecture with dimension 6 or less. This is joint work with Don Stanley.
SIMONA PAOLI, University of Leicester
n-fold groupoids and n-types [PDF]
Most homotopy invariants of topological spaces are filtered by dimension, so it is useful to have finite dimensional approximations to homotopy theories. We describe an algebraic model for the latter, which we call n-track categories. An appropriate algebraic model of n-types is developed for this purpose, with a class of n-fold groupoids which we call n-typical. This model leads to an explicit connection between homotopy types and iterated loop spaces and exhibits other useful properties. This is joint work with David Blanc.
DORETTE PRONK, Dalhousie University
Bredon Cohomology with Local Coefficients [PDF]
Bredon [1] defined his version of equivariant cohomology with constant coefficients for spaces with an action of a discrete group $G$. This was generalized to arbitrary topological groups by Illman [2]. This definition was then extended to local coefficient systems independently by Moerdijk and Svensson [3] and by the Mukherjees [4]. Moerdijk and Svensson's approach was only applicable to discrete groups and used the cohomology of a category constructed to represent the $G$-space. The Mukherjees' approach was closer to the work by Illman. Mukherjee and Pandey [5] showed that the two definitions agree when the group $G$ is discrete.
Laura Scull and I have generalized the construction of the category given by Moerdijk and Svensson to $G$-spaces for an arbitrary topological group $G$. We will show that the resulting definition of Bredon cohomology agrees with the one given by the Mukherjees. As an application we get the Serre spectral sequence in the more general setting of a topological group $G$.
[1] G.E. Bredon, {\em Introduction to Compact Transformation Groups}, Academic Press (1972).
[2] S. Illman, Equivariant Singular Homology and Cohomology, Bull. AMS 79 (1973) pp. 188–192.
[3] I. Moerdijk, J.-A. Svensson, The equivariant Serre spectral sequence, {\em Proceedings of the AMS} 118 (1993), pp. 263–278.
[4] A. Mukherjee, G. Mukherjee, Bredon-Illman cohomology with local coefficients, {\em Quart. J. Math. Oxford} 47 (1996), pp. 199-219.
[5] Goutam Mukherjee, Neeta Pandey, Equivariant cohomology with local coefficients, {\em Proceedings of the AMS} 130 (2002), pp. 227-232.
MARCY ROBERTSON, University of Western Ontario
On Topological Triangulated Orbit Categories [PDF]
In 2005, Keller showed that the orbit category associated to the bounded derived category of a hereditary category under an autoequivalence is triangulated. As an application he proved that the cluster category is triangulated. We show that this theorem generalizes to triangulated categories with topological origin (i.e. the homotopy category of a stable model category). As an application we construct a topological triangulated category which models the cluster category. This is joint work with Andrew Salch.
DON STANLEY, University of Regina
Homotopy invariance of configuration spaces [PDF]
Given a closed manifold $M$, the configuration space of $k$ points in $M$, $F(M,k)$, is the space of $k$-tuples of distinct points in $M$. Levitt showed that if $M$ is $2$-connected then $F(M,2)$ only depends on the homotopy type of $M$. When $M$ is a smooth projective variety, Kriz constructed a model for the rational homotopy type of $F(M,k)$. In this talk we show that a variant of the Kriz model works for any sufficiently connected closed manifold, and discuss the related problem of the homotopy invariance of $F(M,3)$.
SEAN TILSON, Wayne State University
Power Operations and the Kunneth Spectral Sequence [PDF]
Power operations have been constructed and successfully utilized in the Adams and Homological Homotopy Fixed Point Spectral Sequences by Bruner and Bruner-Rognes. It was thought that such results were not specific to the spectral sequence, but rather that they arose because highly structured ring spectra are involved. In this talk, we show that while the Kunneth Spectral Sequence enjoys some nice multiplicative properties, there are no non-zero algebraic operations in $E_2$ (other than the square). Despite the negative results we are able to use old computations of Steinberger's with our current work to compute operations in the homotopy of some relative smash products.
## Sponsors
We warmly thank these sponsors for their support.
http://physics.stackexchange.com/questions/3636/why-is-stringless-supergravity-not-considered-by-many-to-be-a-candidate-theory-o | # Why is stringless supergravity not considered by many to be a candidate theory of quantum gravity?
This paper seems to show that $d=4, N=8$ supergravity is finite. Yet the paper only has three citations in spires, and I certainly haven't heard talk of a new candidate theory of gravity.
Why isn't perturbative supergravity with some supersymmetry breaking principle, coupled with the standard model considered a possible theory of the universe? Has someone checked the coupling to matter? Is that the problem?
-
+1 Didn't Hawking make this claim back in 1979? I think a key problem could be the as-yet-unspecified 'supersymmetry breaking principle'. Also one would need to understand how the Standard Model interacts with or arises from the $N=8$ SUSY gauge fields in this model. I don't know if that can be done. – user346 Jan 22 '11 at 23:05
## 3 Answers
In order to not be entirely negative, I'll answer your question first and then provide a reason or two why research on the subject is interesting for other reasons.
1. The finiteness conjecture has to do with perturbation theory. Even if true, it is still believed that in order for the theory to be non-perturbatively finite and otherwise consistent (e.g. unitary), it has to include more degrees of freedom. The most plausible scenario is that the completion is the full string theory, to which N=8 SUGRA is very closely related.
2. The theory does not have enough structure to be a realistic description of nature by itself. For example it does not have chiral fermions, or a mechanism to break the extended SUSY spontaneously. All of these things become possible when you embed the theory within string theory.
3. If you keep the theory as a QFT, but add some ingredients by hand to get more a realistic theory, the extended SUSY is broken and the magic is gone. Any finiteness conjecture is only valid for the pure (super)gravity case, and goes away as soon as you make the situation slightly less symmetric.
On the other hand, purely as a theoretical laboratory it is fascinating that a quantum field theory which contains gravity is better behaved than expected, and the theory has a close relationship to other highly symmetric quantum field theories (for example N=4 SYM). There is great excitement and hope recently that by understanding precisely why it is finite (or at least well-behaved) we'd be able to understand the structure of quantum field theories better. It is in some sense the "simplest quantum field theory" (arxiv.org/abs/0808.1446).
-
Nice, +1 point. BTW I think that Nima et al. have a somewhat variable opinion which of them is the simplest one, whether SYM or SUGRA. :-) – Luboš Motl Jan 23 '11 at 8:15
Maybe they can show they are related, to avoid the pain of deciding. – user566 Jan 23 '11 at 8:28
I have never understood the reasoning behind point 3, even though lots of physicists seem to agree on it. Given that one perturbatively finite and symmetric theory of gravity exists, how do we know that you can't add lots more particles and get a non-symmetric perturbatively finite theory of gravity, for which the Standard Model is the low-energy sector. What reasoning tells us that all perturbatively finite theories of gravity have to be as symmetric as $d=4$, $N=8$ supergravity? – Peter Shor Feb 21 '11 at 20:55
@Peter: The ultimate reason is that you calculate and find divergences. However, this is clear also a priori: there are many calculations that are expected to give UV divergences, but their coefficient is zero in N=8 following directly from the symmetry. When you make the theory more generic, there's no reason for the coefficient of those divergences to vanish. When you calculate you find that they indeed do not vanish. Personally, I find (2) to be much more troubling: finiteness is nice and all, but it is much worse if cannot get something even remotely similar to low energy physics. – user566 Feb 22 '11 at 1:27
@Peter: As for the more abstract question, we indeed do not know for sure there are no other perturbatively finite theories of gravity which are more interesting than N=8, maybe they do exist. But, given that the finiteness of N=8 is so strongly related to its unique structure, and given the many indications that non-perturbative gravity cannot be a conventional QFT, current hints indicates this is not a fruitful research direction. This is of course a personal decision, everyone has to follow their gut feeling - this is mine. – user566 Feb 22 '11 at 1:32
Dear Jerry, the $N=8$, $d=4$ "non-stringy" supergravity is
1. non-perturbatively inconsistent
2. unacceptable phenomenologically
Trying to fix either of these things leads one to string/M-theory. See
Two roads from $N=8$ sugra to string theory http://motls.blogspot.com/2008/07/two-roads-from-n8-sugra-to-string.html
The non-perturbative inconsistency may be seen in many ways: for example, the supergravity theory has $U(1)$ charges but produces no charged objects with respect to these $U(1)$'s. That's inconsistent because at least a newly formed black hole may confine these electric and magnetic fields and become charged.
The electric and magnetic charges have to be quantized in inverse units, as seen by the Dirac quantization argument. It follows that the noncompact continuous exceptional $E_{7(7)}$ symmetry has to be broken to its discrete subgroup, the U-duality group. There are many ways to choose the lattice of allowed charges. These ways are related by the original continuous symmetry. In decompactification limits, the lightest of these charges (with smallest spacing) may be interpreted as Kaluza-Klein momenta with respect to new dimensions, and one discovers the 7 compactified dimensions of M-theory. It may also be shown that the other charges inevitably have the shape of string/M-theoretical membranes and fivebranes.
There's no doubt today - and since the mid 1990s, in fact - that the supergravity theory is just a perturbative approximation to string/M-theory which is also why the supergravity community has been fully merged with the string/M-theory community. The people realize that they are working on the same theory and they are saying the same things. Ask Michael Duff.
Phenomenology
The maximal supergravity in four dimensions is left-right symmetric, and the high supersymmetry leads to too huge degenerate multiplets where spins differ by as much as $2$. The only acceptable supersymmetry is the minimal one where spins differ by $1/2$. The maximum supersymmetry implies that left-handed neutrinos couldn't exist and for each particle, there would have to be lots of very different superpartners. One couldn't get matter and gauge fields decoupled from gravity etc.
The maximum supersymmetry cannot be broken down to a smaller one by field-theoretical mechanisms - except for an explicit breaking that just destroys all the finiteness virtues of the supergravity. However, it may be broken at the stringy level, by appreciating the extra 6-7 dimensions, and compactifying them differently. The resulting models are compactifications of string/M-theory. They preserve the perturbative finiteness by the added stringy species and they also lead to realistic phenomenology with all types of matter and interactions that we know.
As Joe Polchinski said, all roads lead to string theory. In the case of attempts to overcome limitations of supergravity, the previous sentence is not a slogan but rather an accurate description of the situation.
Cheers LM
-
There has been a recent spate of interest in computing high-loop quantum gravity without strings. In my opinion this is a huge, laborious effort to capture what already lies in string theory. $N = 8$ SUGRA has an $SU(8)$ symmetry plus an $E_{7(7)}$ symmetry which acts independently. The $SU(8)$ acts upon the $133$ dimensions of $E_{7(7)}$, which for $E_{7(7)}({\bf R})$ we can think of as $133$ scalars. The $133$ real parameters of $E_{7(7)}$ decompose as ${\bf 133}~=~{\bf 28}~+~{\bf 35}~+~{\bf 35}~+~{\bf 35}$, where the $\bf 28$ is an $SO(8)$ and, combined with a ${\bf 35}$, gives an $SU(8)$. The current is composed of these scalars as $$J_\mu~=~\sum_{a=1}^{133}J_\mu^a e_a$$ with the current conservation rule $\partial^\mu J_\mu~=~0$. If we fix the $SU(8)$, this is a coset rule $E_{7(7)}/SU(8)$, which subtracts out the ${\bf 28}~+~{\bf 35}~=~{\bf 63}$ of $SU(8)$ and leaves $70$ scalars. Thus we may think of $E_{7(7)}$ as $$E_{7(7)}~=~SU(8)\times R^{70}.$$ The coset construction removes troublesome terms or provides counterterms. It is then possible to compute currents, which are Noetherian conserved, in this coset construction, which up to $7$ loops is UV finite.
The $E_{7(7)}({\bf R})$ is broken into a discrete group $E_{7(7)}({\bf Z})$ by “quantization,” which in turn contains the modular (or Möbius) group $SL(2,{\bf Z})$ of S-duality and the T-duality group $SO(6,6,{\bf Z})$. The Noether theorem operates for continuous symmetries, not for discrete ones. However, braid group or Yang-Baxter arguments may recover current conservation. Further, a more general U-duality description which meshes S- and T-dualities together may conserve current. So there are open questions with the construction which have not been answered. In an STU setting the lack of Noetherian currents should be replaced with Noetherian charges associated with qubits.
For $N~=~8$ and $d~=~4$, a paper by Green, Ooguri, and Schwarz
http://arxiv.org/PS_cache/arxiv/pdf/0704/0704.0777v1.pdf
shows that it is not possible to decouple string theory from SUGRA. There exists a set of states which makes any decoupling inconsistent. This calls into question the ability to compute the appropriate counterterms required to make a consistent SUGRA which is UV finite. This paper attempts to circumvent this problem, but it must be realized this is a $7$-loop computation of considerable complexity which recovers results that are obtained rather readily in string theory.
These efforts are not without purpose though, for the construction of Noetherian currents, which I think will correspond to Noetherian charges associated with qubits and N-partite entanglements in STU theories. However, as a replacement for string theory, reduction of gravity to a pure QFT, I doubt this will ever prove to be a complete approach.
-
http://www.physicsforums.com/showpost.php?p=2501727&postcount=19 | Thread: Twins paradox and ageing
In space/time, everything is always traveling at the speed of light. That is, if $\tau$ is the proper time elapsed while coordinate time $t$ passes, then $$(vt)^2 + (c\tau)^2 = (ct)^2.$$ It is only a question of how much of the traveling is done in the spatial direction and how much is done in the time direction.
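A quick numerical check of that relation (the numbers are only an illustration):

```python
# v = 0.6 c over t = 1: proper time tau = t * sqrt(1 - v^2/c^2) = 0.8,
# and (v t)^2 + (c tau)^2 = 0.36 + 0.64 = 1.0 = (c t)^2.
c, t = 1.0, 1.0
v = 0.6 * c
tau = t * (1 - v**2 / c**2) ** 0.5
print((v * t)**2 + (c * tau)**2, (c * t)**2)   # -> 1.0 1.0
```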
http://cs.stackexchange.com/questions/7276/smallest-string-length-to-contain-all-types-of-beads | # Smallest string length to contain all types of beads
I read this question somewhere, and could not come up with an efficient answer.
A string of some length has beads fixed on it at some given arbitrary distances from each other. There are $k$ different types of beads and $n$ beads in total on the string, and each type of bead is present at least once. We need to find one consecutive section of the string, such that:
• that section contains all of the $k$ different types of beads at least once.
• the length of this section is as small as possible, provided the first condition is met.
We are given the positions of each bead on the string, or alternatively, the distances between each pair of consecutive beads.
Of course, a simple brute force method would be to start from every bead (assume that the section starts from this bead), and go on until at least one instance of every bead type is found, while keeping track of the length. Repeat for every starting position, and find the minimum among them. This will give an $O(n^2)$ solution, where $n$ is the number of beads on the string. I think a dynamic programming approach would also probably be $O(n^2)$, but I may be wrong. Is there a faster algorithm? Space complexity has to be sub-quadratic. Thanks!
Edit: $k$ can be $O(n)$.
-
## 1 Answer
With a little care your own suggestion can be implemented in $O(kn)$, if my idea is correct.
Keep $k$ pointers, one for each colour, and a general pointer marking the possible start of the segment. At each moment each of these colour pointers keeps the next position of its colour that follows the segment pointer. One colour pointer points to the segment pointer itself; that colour pointer is updated when the segment pointer moves to the next position. Each colour pointer in total moves only $n$ positions. For each position of the segment pointer one computes the maximal distance to the colour pointers, and one takes the overall minimum of that.
Or, intuitively perhaps simpler, let the pointers look into the past, not the future. Let the colour pointers denote the distance to the respective colours last seen. In each step add the distance to the last bead to each pointer, except the one of the current colour, which is set to zero.
(edit: answer to question) If $k$ is large, on the order of $n$ as suggested, then one may keep the $k$ pointers in a max-heap. An update of a pointer costs $\log k$ at each of the $n$ steps. We may find the max (the farthest colour, hence the interval length) in constant time at each of the $n$ steps. So $n \log k$ total, plus initialization.
Now we also have to find the element/colour in the heap that we have to update. This is done by keeping an index of elements. Each time we swap two elements in the heap (a usual heap operation) we also swap the positions stored in the index. This is usually done when computing Dijkstra's algorithm with a heap: when a new edge is found some distances to vertices have to be decreased, and one needs to find them.
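A minimal sketch of the "look into the past" idea in its plain $O(nk)$ form (Python; the names are just illustrative). The heap refinement described above replaces the min over the last-seen positions by an $O(\log k)$ operation.

```python
def smallest_section(positions, colours, k):
    """positions: sorted coordinates of the beads; colours: type (0..k-1) of each bead.
    Returns the length of the shortest section containing every type, or None."""
    INF = float("inf")
    last_seen = [None] * k              # position where each colour was last seen
    best = INF
    for pos, col in zip(positions, colours):
        last_seen[col] = pos
        if all(p is not None for p in last_seen):
            # a section ending here must reach back to the oldest 'last seen' bead
            best = min(best, pos - min(last_seen))
    return best if best < INF else None

# beads at positions 0,1,3,7,8 with types 0,1,0,2,1 (k = 3): the best section
# runs from position 3 to position 8 and has length 5
print(smallest_section([0, 1, 3, 7, 8], [0, 1, 0, 2, 1], 3))   # -> 5
```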
-
Nice! Thank you! However, since $k$ can be $O(n)$, this will also be $O(n^2)$. But I'm sure it will be faster than the brute force algorithm that I suggested. Can we go faster than quadratic, even when $k$ is $O(n)$? – mayank Dec 10 '12 at 9:41
Great idea! I'm hitting a roadblock though. If I have understood your algorithm (the first one in your answer), then we will need to update some internal value of the heap. When we want to move to the next starting position, the colour pointer at the current starting position will need to be updated to the next position of that colour. This will require modifying an internal value of the max-heap. How can this be done, when the array index (position in the heap's implementation) for this colour pointer is unknown? – mayank Dec 10 '12 at 12:30
Perhaps a (balanced) binary search tree would be better - min and update would both be $O(\log{k})$? – mayank Dec 10 '12 at 14:47
Added heap details into answer. Balanced tree will not find max in log k time. It can be used as index though, see my answer, but if the colours are consecutive numbers one uses a table for storing positions. – Hendrik Jan Dec 10 '12 at 15:21
Yep, I did think of storing and updating indices, it just seemed like too much book-keeping. As for using a balanced BST, for each position of a bead, we can find the max in $O(\log k)$ (right-most node) and then find and update in $O(\log k)$. I did not understand why max will not be in log k time? But yes, your solution should work nicely. Thanks! – mayank Dec 10 '12 at 15:56
http://mathoverflow.net/revisions/115323/list | ## Return to Answer
2 deleted 427 characters in body
If you look at local rings, it is easy to construct such examples. For example, if $X,Y$ are smooth of the same dimension, then for any points $x\in X, y\in Y$ the completions of $O_{X,x}, O_{Y,y}$ are isomorphic, but of course the algebraic local rings are not necessarily isomorphic (for example, if $X,Y$ are not birational, then even their fraction fields are not isomorphic).
1
If you allow singularities, it is easy to construct such examples. For example, given any birational class of surfaces, you can find one with a rational double point of fixed type (say $A_n, D_n, E_6, E_7$ or $E_8$). But these have only one analytic type, so taking different birational classes, you can find such local rings which are not isomorphic, but their completions are. It is somewhat more subtle to do the same if you fix the birational class and in general it is impossible except in the case of rational surfaces, when in some cases you can find such non-isomorphic ones. For example, if $R$ is the local ring at the origin of 3-space, $R/(x^5+y^3+z^2)$ and $R/(x^4y+x^5+y^3+z^2)$ both are $E_8$ singularities on a rational surface, non-isomorphic, but of course their completions are isomorphic.
http://mathhelpforum.com/calculus/202156-maximizing-cuboid-volume-fixed-perimeter.html | # Thread:
1. ## Maximizing the cuboid volume with fixed perimeter
First, the 2D case is easy...
Q0: In a rectangle, we know that the sum of the dimensions (length and width) is a fixed $C$. Which values of the dimensions maximize the area?
A: One dimension is $x$, and thus the other is $C-x$. Therefore, the area is $x \cdot (C-x) = xC - x^2$. Then, setting the derivative equal to zero provides the answer ($C/2$ for both dimensions).
Okay... now I want to generalize this to the 3D case. More precisely:
Q1: In a cuboid (see en.wikipedia.org/wiki/Cuboid), we know that the sum of the dimensions (length, width, and depth) is a fixed $C$. Which values of the dimensions maximize the volume?
Finally, is there a generalization of the answer to Q1 for higher dimensions?
2. ## Re: Maximizing the cuboid volume with fixed perimeter
Originally Posted by andre.vignatti
The easiest way is to use Lagrange Multipliers. So the problem would be we want to maximize the volume
$V(x,y,z)=xyz$
subject to the constraint that
$x+y+z=C \iff x+y+z-C=0 \implies F(x,y,z)=x+y+z-C$
Now we use the Lagrange multipliers to get
$\nabla V(x,y,z)=\lambda \nabla F(x,y,z)$
This gives
$yz \vec{i}+xz\vec{j}+xy\vec{k}=\lambda\vec{i}+\lambda \vec{j}+\lambda\vec{k}$
This gives a system of three equations with four unknowns. Using the original constraint gives the nonlinear system of equations
$yz=\lambda \\ xz=\lambda \\ xy=\lambda \\ x+y+z=C$
Multiplying the first three equations by $x$, $y$, and $z$ respectively gives
$xyz=\lambda x =\lambda y =\lambda z$
or
$x=y=z= \frac{C}{3}$
You can use Lagrange multipliers in any dimension to solve this problem.
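As a quick numerical sanity check of the $x=y=z=C/3$ answer (Python; the value $C=12$ and the grid are just an illustration):

```python
# For C = 12 the claimed maximum volume is (C/3)**3 = 64.  Scan a coarse grid of
# (x, y) with z = C - x - y and confirm nothing on the grid beats it.
C = 12.0
grid = [i / 10 for i in range(1, 120)]          # 0.1, 0.2, ..., 11.9
best = max((x * y * (C - x - y), x, y) for x in grid for y in grid if C - x - y > 0)
print(best)   # -> (64.0, 4.0, 4.0), i.e. x = y = z = C/3
```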
3. ## Re: Maximizing the cuboid volume with fixed perimeter
Hummm, thank you TheEmptySet! Now I need to learn about the Lagrange multipliers :-)
http://gowers.wordpress.com/2008/12/28/how-can-one-equivalent-statement-be-stronger-than-another/ | # Gowers's Weblog
Mathematics related discussions
## How can one equivalent statement be stronger than another?
It’s been a long time since any mathematical content was posted on this blog. This is in part because I have been diverting my mathematical efforts more to the Tricki (not to mention my own research), and indeed the existence of the Tricki means that this blog will probably become less active, though I may publish some Tricki articles here as well. But there are certain quasi-philosophical questions that I want to discuss and that are better discussed here. I have already written about one of my favourites: when are two proofs essentially the same? Another that is closely related is the question of how it can be that two mathematical statements are equivalent to each other and yet one is clearly “stronger”. This phenomenon will be familiar to all mathematicians, but here is an example that illustrates it particularly well. (The example is also familiar as a good example of the phenomenon.)

Hall’s theorem, sometimes known as Hall’s marriage theorem, is the following result. Let $G$ be a bipartite graph with finite vertex sets $X$ and $Y$ of the same size. A perfect matching in $G$ is a bijection $f:X\rightarrow Y$ such that $x$ and $f(x)$ are neighbours for every $x\in X$. Given a subset $A\subset X$, the neighbourhood $\Gamma(A)$ of $A$ is the set of all vertices in $Y$ that are joined to at least one vertex in $A$. A trivial necessary condition for the existence of a perfect matching is that for every subset $A\subset X$ the neighbourhood $\Gamma(A)$ is at least as big as $A$, since it must contain $f(A)$, which has the same size as $A$. This condition is called Hall’s condition.
What is not trivial at all is that Hall’s condition is also sufficient for the existence of a perfect matching. Here’s one of the shortest proofs. Let $G$ be a bipartite graph with vertex sets $X$ and $Y$ of size $n$ and suppose by induction that the result is true for all bipartite graphs with smaller vertex sets. Now we split into two cases. If there is a proper subset $A\subset X$ with $|\Gamma(A)|=|A|$, then by induction we can find a perfect matching from $A$ to $\Gamma(A)$. Let $B$ be the complement of $A$ in $X$. Then any subset $C\subset B$ has at least $|C|$ neighbours in $Y\setminus\Gamma(A)$, or else $C\cup A$ would have fewer than $|C\cup A|$ neighbours. So we can find a matching from $B$ to $Y\setminus\Gamma(A)$ as well. Putting the two together gives a perfect matching of $X$ and $Y$.
Now suppose that $|\Gamma(A)|>|A|$ for every proper subset $A\subset X$. In that case we can pick an arbitrary $x\in X$ and let $f(x)$ be an arbitrary neighbour of $x$. Let $X'=X\setminus\{x\}$ and let $Y'=Y\setminus\{f(x)\}$. Then since we have removed just one vertex from $Y$, the restriction of $G$ to the vertex sets $X'$ and $Y'$ satisfies Hall’s condition, so by induction we have a perfect matching of $X'$ and $Y'$ and again we are done.
There can be no doubt that one of the directions of this equivalence is trivial compared with the other. And we are also tempted to say that the condition that $G$ has a perfect matching is “stronger” than Hall’s condition, since it trivially implies it. So Hall’s theorem has the flavour of obtaining a strong conclusion from a weak hypothesis. But how can this be if the hypothesis and conclusion are equivalent?
Before thinking about how to answer this question, it is perhaps a good idea to think about why exactly it seems mysterious, if indeed it does. This would be my suggestion. Normally when we say that condition P is stronger than condition Q we mean that P has consequences that Q does not have, or, more simply, that some objects satisfy Q without satisfying P. For example, the condition that a natural number is an odd prime is stronger than the condition that it is odd. But here we find ourselves wanting to say that one graph property is stronger than another even though the two properties have precisely the same consequences and pick out precisely the same set of graphs.
The solution to this little puzzle seems to be that it depends on a confusion between actual logical consequences and what we can easily perceive to be logical consequences. Or at least, that’s one suggestion for a solution, though I’m not sure I’m entirely satisfied by it. It does at least cover the case of Hall’s theorem: we could say that the existence of a perfect matching is psychologically stronger than Hall’s condition, since it trivially implies, and is not trivially implied by, Hall’s condition.
But I’d like to say something more “objective” somehow, and not tied to what people happen to find easy. To see what I might mean by this, consider another difference between the two conditions. If you were asked to check whether some bipartite graph $G$ satisfied Hall’s condition, what would you do? You could check the sizes of the neighbourhoods one by one, but there are exponentially many of them, so this would not be an appealing prospect. Alternatively, you could use a well-known polynomial-time algorithm that finds perfect matchings when they exist. This would clearly be much better. Is there some precise sense in which an efficient check that a graph $G$ satisfies Hall’s condition “has to” find a perfect matching?
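To make the second option concrete, here is a minimal sketch of one such polynomial-time method, the standard augmenting-path algorithm (Python; the representation and names are only illustrative). It constructs a perfect matching whenever one exists, so a positive answer certifies Hall's condition without examining exponentially many neighbourhoods.

```python
def has_perfect_matching(adj, n):
    """adj[x] lists the neighbours in Y of vertex x in X; both sides are 0..n-1.
    Kuhn's augmenting-path algorithm, O(n * number of edges)."""
    match_of_y = [None] * n                      # match_of_y[y] = x currently matched to y

    def try_augment(x, seen):
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                # use y if it is free, or if its current partner can be rerouted
                if match_of_y[y] is None or try_augment(match_of_y[y], seen):
                    match_of_y[y] = x
                    return True
        return False

    return all(try_augment(x, set()) for x in range(n))

print(has_perfect_matching([[0, 1], [0], [2]], 3))   # True
print(has_perfect_matching([[0], [0], [1, 2]], 3))   # False: A = {0, 1} has only one neighbour
```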
Another thought is that Hall’s theorem is false for infinite graphs. For instance, if you let $X$ and $Y$ both be $\mathbb{N}$ and join $1$ in $X$ to all of $\mathbb{N}$, and all other $n$ in $X$ to $n-1$, then Hall’s condition is satisfied but there is no perfect matching. On the other hand, the existence of a perfect matching still trivially implies Hall’s condition. So we can generalize the two conditions in a natural way and we find that one is now stronger than the other in the precise formal sense. (It seems unlikely that we can do something like this systematically for all examples where one direction of an equivalence is trivial and the other hard. But it would be interesting to come up with other equivalences where it can be done.)
Since I’m throwing out questions here, I may as well mention the fact that is rarely talked about but is surely noticed by almost all mathematicians, which is that the notion of equivalence itself is used in an informal way. Why don’t we say that Fermat’s last theorem is equivalent to the four-colour theorem? After all, here’s a proof that FLT implies 4CT (in ZFC): obviously FLT+ZFC implies ZFC, and we know that ZFC implies 4CT. And similarly in the opposite direction. An answer to this question takes us back to ideas that were discussed in some of the responses to the post on sameness of proofs: one would like to say that when mathematicians talk of one statement implying another, the implication should not be “homotopic” to a silly implication such as the one just given of 4CT from FLT. (I feel some kind of structural property should be involved and not mere length of proof, but that is just a feeling that I can’t back up in any adequate way.)
One final remark is that questions of this kind show just how incomplete a picture of mathematics was provided by some philosophers early in the twentieth century, who held that it was merely a giant collection of tautologies. The way that tautologies relate to each other is fascinating and important, so even if one believes that mathematics consists of tautologies, the “merely” is unacceptable. It should be said that this particular view of mathematics went out of fashion fairly soon after it came into fashion, so it doesn’t really need attacking, but it would be interesting to develop this particular line of attack. Indeed, I’m fairly sure that theoretical computer scientists can say something precise about how Boolean tautologies relate to each other: that could perhaps provide a good toy model for “sensible” implications and “non-obvious” directions of equivalences.
This entry was posted on December 28, 2008 at 1:22 am and is filed under Somewhat philosophical.
### 47 Responses to “How can one equivalent statement be stronger than another?”
1. Terence Tao Says:
December 28, 2008 at 7:31 am
Another reason why the “difficult” implication in Hall’s theorem is non-trivial is that it is non-functorial: there is no canonical way to obtain the perfect matching from a bipartite graph obeying Hall’s condition which respects isomorphisms (i.e. relabeling of the graph). Instead, one must make many arbitrary choices. Thus, it largely falls outside the range of “abstract nonsense”. (The fact that the theorem fails in the infinite case is perhaps a symptom of this non-functoriality; the inability to generalise the result to a more abstract setting indicates that the implication must depend sensitively on all of its hypotheses, and so cannot be replaced by overly “soft” techniques that are insensitive to one or more of these hypotheses.)
[Perhaps one could sum up the two cultures of mathematics succinctly as "mathematics which generalises" and "mathematics which does not generalise"? ]
I could imagine another way to “certify” non-triviality of this sort of implication: suppose one had an example of a “low-complexity” graph for which Hall’s condition could be verified in some “low-complexity” or “easy” manner, but for which the only perfect matchings in the graph were somehow “high-complexity”. This would be evidence that the implication is doing something non-trivial. [The Szemeredi regularity lemma, as you well know, is a good example of this - any result whose conclusion has a complexity that is tower-exponential in the inputs can't be all that trivial. For a similar reason, the Behrend example illustrates the non-triviality of the tautology that is Roth's theorem. ].
I recently encountered an example of this in my work with Vitali Bergelson and Tamar Ziegler. Let F be a finite field of characteristic p, and let $P: F^n \to {\Bbb R}/{\Bbb Z}$ be a polynomial of degree d (i.e. the d+1^th derivatives of this polynomial vanish, or equivalently that $\|e(P)\|_{U^{d+1}(F^n)}=1$). At some point we needed a $p^{th}$ root of P, i.e. a function Q such that pQ=P. We wanted Q to be a polynomial as low a degree as possible. By breaking up P into monomials, and taking roots of each monomial separately, it turns out that one can make Q of degree d+p-1, which is best possible (I wrote about this in this blog post of mine). But the operation of breaking up into monomials is computationally intensive, in particular it cannot be used in any sort of constant-time probabilistic algorithm that would locally reconstruct Q from P in time uniform in n. [If one does not care about getting the minimal degree on Q, but would be happy with some bound on the degree of Q, there is a very cheap and local algorithm to obtain Q, namely to write Q = f(P) where f is some arbitrary branch of the $p^{th}$ root function.]
In fact, it appears that no such algorithm to obtain a minimal-degree root exists, because the ergodic theory analogue of the statement fails. Namely, given an action of $F^\omega$ on a probability space X, there exists polynomials P of degree d on X (where polynomiality is defined with respect to this action) which do not have a $p^{th}$ root Q of degree d+p-1 in X, which roughly speaking means that there is no low degree root Q of P which can be created out of a finite number of shifts of P. On the other hand, the monomial argument does eventually let one build such a low degree root of Q in an extension Y of the system X; thus these roots do exist in some sufficiently abstract sense, they are just not “measurable” enough to be definable within X.
2. Terence Tao Says:
December 28, 2008 at 7:43 am
Another example in a similar spirit came up in work I did with Tim Austin on property testing and repair in hypergraphs. Thanks to the hypergraph regularity and counting lemmas, many hypergraph properties P are known to be locally testable, which roughly speaking means that if a randomly chosen medium-size subhypergraph of a hypergraph G has a very high chance of obeying P, then it is possible to modify G slightly to another hypergraph G’ which also obeys P. For some properties P (e.g. the triangle-free property for graphs), the proof of local testability actually gives an efficient (probabilistic) algorithm for locally constructing G’ in terms of G, and so P is not just locally testable, but also locally correctable (or locally repairable). But Tim and I found some hypergraph properties which were locally testable but not locally repairable, thus indicating that the local testability result is in some sense non-trivial. The simplest example is that of total order: a directed graph is said to obey P if it is the graph of a total ordering of its vertices. It is not hard to see that P is locally testable, but it turns out that one cannot repair a partially corrupted total ordering on n vertices by purely local means; one has to make a large number (linear in n) of arbitrary choices before one can even begin to set down the corrected ordering.
3. Terence Tao Says:
December 28, 2008 at 7:59 am
As regards why we don’t view FLT as implying 4CT in any interesting way, perhaps it may help to see how robust the implication is with respect to perturbations. Consider a perturbation or generalisation FLT’ of FLT which we do not currently know to be true. Would having a proof of FLT’ be likely to help us make progress on some analogous perturbation or generalisation 4CT’ of 4CT that we also do not currently know to be true? Probably not – so the implication is non-robust. In contrast, most “useful” implications in mathematics tend to have at least some robustness with respect to at least some kinds of perturbations. (Indeed, us mathematicians spend a significant fraction of our research time exploring this sort of robustness.)
4. Andrej Bauer Says:
December 28, 2008 at 9:31 am
When speaking about equivalence of statements P and Q we need to be careful about the base theory. The stronger the base theory, the fewer distinctions can be made (as was already noted in the case of base theory ZFC and the statements FLT and 4CT). A familiar example is equivalence of various formulations of the axiom of choice, where it is evident that the base theory ought to be ZF rather than ZFC.
Now in the case at hand we should first identify a base theory in which it makes sense to talk about (non)-coincidence of graphs with bipartite matchings and graphs satisfying Hall's condition. But such a theory must be very, very weak, far weaker than Peano arithmetic. The book to look in for this sort of information is Stephen Simpson’s “Subsystems of Second Order Arithmetic”. If one condition really is stronger than the other, I would expect there to be a reasonable base theory which proves only one direction of the equivalence.
5. Terence Tao Says:
December 28, 2008 at 10:23 pm
I should probably yield the floor after this comment, but: here is perhaps one proposal for defining the strength of an implication: a statement X strongly implies Y if one can use X to significantly shorten the proof of Y, or (if Y is false) one can use the falsity of Y to significantly shorten the demonstration of the falsity of X. (Thus, for instance, FLT does not strongly imply 4CT, indeed the proof of 4CT from FLT+ZFC is precisely one line longer than the proof of 4CT from ZFC.)
In practice, most theorems are not implications $X \implies Y$, but rather families of implications: “For all s, $X(s) \implies Y(s)$”. (For instance, in Hall’s theorem, s would be a bipartite graph, X(s) would be Hall’s condition applied to s, and Y(s) would be the claim that s has a perfect matching.) One can then generalise the previous definition by averaging over the types of s which are likely to come up in applications in which the implication could conceivably be useful. Since Hall’s theorem does significantly shorten the proof of existence of perfect matchings of many graphs in applications, it is a useful implication. On the other hand, the converse of Hall’s theorem rarely shortens the proof of expansion of a bipartite graph by much (indeed, since the proof of the converse is only a line or two, it can only shorten proofs by at most that amount.)
6. gowers Says:
December 28, 2008 at 11:48 pm
I agree that one of the principal reasons for being interested in equivalences is that if $P\equiv Q$ and you want to prove $R\Rightarrow S$, then it’s enough to prove that $R\Rightarrow P$ and $Q\Rightarrow S$. And if $P$ is the “weak” half of the equivalence, then it will be easier to prove $R\Rightarrow P$ than $R\Rightarrow Q$, and also easier to prove $Q\Rightarrow S$ than $P\Rightarrow S$. But fleshing out the details of what you say in the second paragraph of your last comment is not all that easy I think: it might at first sight seem to be saying little more than that the proof in one direction is quite a bit longer than the proof in the other. But I have the impression that you’re being careful to say a bit more when you talk about averaging over graphs where one might conceivably want to find a perfect matching.
It feels to me as though the main thing, however, is not so much the length of the proof as a written-out object but the number of ideas it contains (which one could think of as a different notion of length). The trivial direction of Hall’s theorem is trivial because it needs no ideas, and the non-trivial direction is non-trivial because it needs at least one idea. A question provoked by this is whether anyone knows of two equivalent statements $P$ and $Q$ where there is a short but necessarily very ingenious proof that $P\Rightarrow Q$ and a trivial but necessarily quite long proof that $Q\Rightarrow P$. It would be interesting to see what one’s impression was of the relative strengths of such a $P$ and $Q$. I would have thought that abstract algebra would be the most promising place to look for such an equivalence if there is one.
7. Daniel Moskovich Says:
December 29, 2008 at 3:50 am
This is really your first “objective” condition, but Vaughan Jones proposed a way of measuring the strength of a mathematical statement a while back… I don’t know a reference, and many people have their own versions (Dror Bar-Natan for example), but from lowest to highest it runs:
0.5) Result which hints at a theory
1) A theory
2) Existence of a solution
3) Algorithm exists to find the solution (e.g. Hall’s condition)
4) Practical (e.g. polynomial-time) algorithm exists to find the solution (e.g. the condition of being bipartite)
5) The closed-form solution itself.
Thus, despite logical equivalence between the condition and implication of the theorem, which is true for any theorem, one has here an advance from a statement of strength 3 to a statement of strength 4.
8. Terence Tao Says:
December 29, 2008 at 5:53 am
Dear Tim,
As regards your question of an equivalence which is short but ingenious in one direction, and trivial but long in another, here is a somewhat contrived example: for every integer n between 1 and 560, n is prime iff $a^{n-1} = 1 \hbox{ mod } n$ for every $1 \leq a < n$ coprime to n. The “only if” direction follows from Fermat’s little theorem, which is simple but does require at least one non-trivial trick; but the “if” direction basically requires a tedious check of hundreds of cases. (561, of course, is the first Carmichael number.)
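The cutoff at 560 is easy to confirm by machine; a quick sketch (Python; the helper names are just illustrative):

```python
from math import gcd

def fermat_condition(n):
    """a^(n-1) == 1 (mod n) for every 1 <= a < n coprime to n."""
    return all(pow(a, n - 1, n) == 1 for a in range(1, n) if gcd(a, n) == 1)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# the equivalence holds for 2 <= n <= 560 and first fails at 561 = 3 * 11 * 17
print(all(is_prime(n) == fermat_condition(n) for n in range(2, 561)))   # True
print(is_prime(561), fermat_condition(561))                             # False True
```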
9. gowers Says:
December 29, 2008 at 8:47 pm
That’s a nice example that forces me to modify the question a bit. If I examine my not-thinking-too-hard reaction to the two statements “$n$ is prime” and “$a^{n-1}=1$ mod $n$ for every $1\leq a<n$ coprime to $n$” with it understood that $n$ is an integer between 1 and 560, then I feel as though it is a stronger statement that $n$ is prime, since that one implies the second statement (via Fermat’s little theorem) while the second doesn’t really imply the first in the sensible sense because if you’re given an integer and told that it is a Carmichael number that doesn’t imply that it’s prime. So it seems to imply it in the FLT-implies-4CT sense. (I don’t rule out that there’s some test for primality that is shortened if you’re already given that your number is a Carmichael number — if that is the case then I’d have to modify my remarks somewhat.)
Now if I examine my response in more detail, I see that it comes down to something like this. I have a bit of a weakness for radical finitism, so I feel a bit doubtful about whether a proof that’s a tedious finite check can really count as trivial. (For example, is it a trivial fact that there’s no projective plane of order 10?) That feels too much like saying that Goldbach’s conjecture is trivial because God could just check it for every even integer. So I’ll admit that you’ve found something that satisfies the criterion I put forward (a long proof that requires no idea), but what I really meant was a proof that was trivial in the sense of giving a trivial explanation for why the statement was true. I should start with a weaker question: is there a proof that’s (i) explanatory (this is supposed to rule out long case checks) (ii) trivial (in that every step is completely obvious) and (iii) long? Then the second question would be whether there is such a proof for a statement that has a true converse that is not trivial but that can be proved quickly using a clever idea.
10. Gil Kalai Says:
December 29, 2008 at 9:12 pm
Hello all,
here is an example which is somewhat in the spirit of Hall’s theorem but is much simpler: Suppose that A, B, C, and D are finite and pairwise disjoint sets. Suppose that f is a bijection from A to B.
It is true that there is a bijection g from C to D if and only if there is a bijection h from A $\cup$ C to B $\cup$ D.
Now the proof that h exists given that g exists is very easy. And the proof that g exists if h exits is considerably harder.
11. Terence Tao Says:
December 29, 2008 at 10:23 pm
Hmm, perhaps the finitary proofs that come out of proof mining an infinitary proof might qualify for the revised question? For instance, suppose one managed to prove (say) Rado’s theorem by an infinitary argument (e.g. topological dynamics). General proof theory then tells us that one should be able to translate that proof into a purely finitary one by working through each step one at a time and carefully disentangling all the infinitary quantifiers. (Infinitely large -> sufficiently large, and so forth.) Such a translation is basically trivial, still captures the explanatory power of the original – but will be tediously long.
One direction of Rado’s theorem is fairly straightforward (but perhaps requires one small idea), and can be done finitarily, whereas the finitisation of a simple infinitary proof of the hard direction of Rado’s theorem is likely to be long, trivial, and technically explanatory…
12. Gil Kalai Says:
December 29, 2008 at 11:27 pm
Which Rado’s theorem?
13. gowers Says:
December 29, 2008 at 11:29 pm
I have a couple of worries about that as an example. The first is that I’m not sure that there’s any trivial, as opposed to simple, proof of Rado’s theorem, even if that proof is allowed to be infinitary. But perhaps one could cheat slightly and make everything relative to an infinitary version. I.e., one could add “and the infinitary version of the hard direction of Rado’s theorem holds” to the two finitary statements that are equivalent. But what would one’s perception of the strengths of the two statements be in this relative context? I’m not sure. I’m also slightly worried about the mixing up of metamathematics with mathematics: it might mean that the finitary proof is metatrivial rather than trivial.
The sort of thing I was looking for was more like this. I can’t instantly think of an example, though I think something will occur to me at some point, but it is a common feeling when one is trying to write up a result, that some stage of the proof is trivial even though it might look rather complicated technically (and might even appear to the uninitiated to contain ideas). Or another possibility is something I remember from lecturing the Bott periodicity theorem a long time ago. The proof involves showing that a cycle of six maps is exact, and I found some of the steps of the proofs very hard to remember, even though I was convinced that to someone more steeped in that kind of mathematics they would have seemed trivial, in the sense that each step would be pretty well the only thing one could do.
14. Gil Kalai Says:
December 30, 2008 at 7:36 am
Dear all, a few words of explanation of the example I gave. Recall that A, B, C, and D are finite and pairwise disjoint. We are given a bijection f from A to C.
Now, given a bijection g from B to D we can show the existence of a bijection h from A $\cup$ B to C $\cup$ D simply by defining h(x) = f(x) if x $\in$ A and h(x) = g(x) if x $\in$ B. This of course works also when the sets are infinite.
Now given a bijection h from A $\cup$ B to C $\cup$ D we can prove that |B|=|D|. But this, while automatic for us, is conceptually much harder: it requires the definition of the cardinality of a finite set, and basic properties of arithmetic operations. It also does not give us an explicit (or canonical) function g.
Write f* for f$^{-1}$. There is a beautiful bijective construction of g given f and h which (I think) goes back to works of Dominique Foata in bijective combinatorics. Starting from an element x in B consider the sequence h(x), f*(h(x)), h(f*(h(x))), f*(h(f*(h(x)))),… which you continue until reaching an element y in D. Then define g(x)=y.
(You can see some similarity with the alternating path algorithm for perfect matching.)
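In code, the construction Gil describes might look like the following rough sketch (not Gil's own, just an illustration), assuming the finite sets are given as Python sets and the bijections f and h as dictionaries:

```python
def build_g(f, h, B, D):
    """Construct g: B -> D from f: A -> C and h: (A u B) -> (C u D).

    Follows the alternating-path idea described above: push x forward
    along h, and as long as the result lands in C, pull it back along
    f and push it forward along h again, until an element of D appears.
    """
    f_inv = {c: a for a, c in f.items()}  # f* = f^{-1}: C -> A
    g = {}
    for x in B:
        y = h[x]
        while y not in D:   # y lies in C, so the path continues
            y = h[f_inv[y]]
        g[x] = y
    return g
```

Termination is exactly the non-trivial point: for finite, pairwise disjoint sets one checks that the path cannot cycle, so it must eventually land in D.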
15. Gil Says:
December 30, 2008 at 11:53 am | Reply
Regarding Hall’s theorem it is also interesting to examine the relations between the statements:
(1) For every subset A of X, A has at least as many neighbors as its size.
and
(2) For every subset B of Y, B has at least as many neighbors as its size.
By Hall’s theorem these two statements are equivalent. (We assumed that |X|=|Y|.) And clearly, they are equally strong. But while the implications between them are not artificial in the way some of the examples in the post are, I am not aware of a direct combinatorial implication.
There is a linear algebra context where (1) refers to a statement about the row-rank of a square matrix, (2) refers to a statement about the column-rank, and the existence of a perfect matching follows from the fact that a regular matrix has a nonzero determinant. (To see this consider an X by Y matrix with a zero entry for x and y which are non-neighbors and a generic variable entry if x and y are neighbors.) In this context the equivalence of (1) and (2) can be proven directly, but passing from the combinatorial setting to the linear algebra formulation requires some work. (This is related to Rado’s theorem.)
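To make the linear-algebra reformulation concrete, here is a small sympy sketch of the generic matrix just described, for a toy graph of my own choosing (not one from the post):

```python
import sympy as sp

# Toy bipartite graph with X = {0, 1, 2}, Y = {0, 1, 2} and edges
# (0,0), (0,1), (1,1), (2,1), (2,2).  An edge gets a fresh variable,
# a non-edge gets 0, exactly as in the construction above.
edges = {(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)}
t = {(i, j): sp.Symbol(f"t_{i}{j}") for (i, j) in edges}
M = sp.Matrix(3, 3, lambda i, j: t.get((i, j), 0))

# The graph has a perfect matching iff det(M) is not identically zero:
# every surviving monomial of the determinant is a perfect matching.
print(M.det())  # here a single monomial, t_00*t_11*t_22, so a matching exists
```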
Finally, Tim wrote
“One final remark is that questions of this kind show just how incomplete a picture of mathematics was provided by some philosophers early in the twentieth century, who held that it was merely a giant collection of tautologies. The way that tautologies relate to each other is fascinating and important, so even if one believes that mathematics consists of tautologies, the “merely” is unacceptable.”
Well, I do not know what precise statements by philosophers are referred to and also what is meant by the word “mathematics” in “a picture of mathematics.” Still one can claim that the giant collection of tautologies also contains all the finer information regarding the “fascinating and important” way tautologies relate to each other.
16. Terence Tao Says:
December 30, 2008 at 5:15 pm | Reply
Hmm… I think in order to make the adjective “trivial”, well, non-trivial, one has to assume a certain base of knowledge that one can be trivial relative to. Very little in mathematics is trivial just relative to the axioms of ZFC, but quite a bit is trivial relative to a decent graduate education in mathematics (or, in what is (very) roughly equivalent, 3000 years of progress in mathematics). For instance, there are many “trivial” proofs of a statement X that look like this:
1. Hey, X looks like a job for Zorn’s lemma / Hahn-Banach theorem / Baire category theorem / diagram chasing / Szemeredi regularity lemma / greedy algorithm / random algorithm / Cauchy-Schwarz / [insert favourite tool here].
2. Set up all the objects required for your tool.
3. Verify all the hypotheses required for your tool.
4. Apply tool.
5. Use the conclusion of the tool to conclude X.
If one accepts the statement of the tool, and the general conditions under which the tool is known to be useful, as trivial, then the whole proof often becomes trivial and requires no additional ideas – but the actual verification of the remaining “routine” steps may actually be rather lengthy. (How often does one really check every commutative square or exact sequence of a large commutative diagram?)
17. gowers Says:
December 30, 2008 at 5:19 pm | Reply
Gil, the answer to your last (implied) question is that I was referring to the logical positivists. They famously held that a statement that couldn’t be empirically verified was meaningless (to oversimplify a bit, since there were various versions of the verification principle, some less extreme than this). However, this left them with a problem, since they certainly didn’t want to condemn mathematics as meaningless, but it also can’t be empirically verified. (One might try to make a case that some of it can, but that would be a difficult one and it wouldn’t apply to the vast bulk of mathematics.) To get round this, they proposed the idea that mathematics consists of tautologies. I don’t know enough about them to know whether they came up with detailed theories about how these tautologies came to be meaningful. The tautologies idea was I think attributed to the early Wittgenstein.
Terry, perhaps this is turning into a discussion about what “trivial” means, and if it is then I need to think a bit. My instant reaction to the type of argument you talk about is that I’d use a word like “routine” rather than “trivial”. For example, the proof of the triangle-removal lemma is trivial in your sense, once you have the hint that the regularity lemma should be used, and if you are familiar with a typical use of the lemma, since once you apply it and do very standard things like removing sparse and irregular pairs, the verification is more or less instant. But I would describe that by saying that it has become routine to those who are familiar with arguments of that general type, rather than that it is trivial. It certainly feels very different from the easy direction of Hall’s theorem. (Of course, the long but trivial proof whose existence I am wondering about would probably also feel somewhat different from the easy direction of Hall’s theorem.) Routineness is obviously a relative concept, but I feel that triviality is less so: with some theorems there really does seem to be only one sensible step to take at each stage and taking sensible steps gets you from the premises to the conclusion. (The notion of “sensible” is relative to experience I suppose, but somehow it feels less dependent on it than routineness.)
18. luca Says:
December 30, 2008 at 7:45 pm | Reply
To return to the issue of the 4CT being equivalent to FLT because they are both true (while intuitively they are “independent” statements), this is an issue that has come up in theoretical computer science when comparing the strength of different assumptions/conjectures.
For example, Impagliazzo and Rudich considered in 1989 the question of whether the existence of secure public-key encryption could be equivalent to the existence of one-way functions (which in turn is equivalent, via several non-trivial theorems, to the existence of secure private-key encryption). They wanted to provide a negative answer, but the problem with even stating what a negative answer could be is that we believe that both one-way functions and public-key cryptosystems exist, in which case the statements about their existence are equivalent because they are both true.
Their approach, roughly speaking, was to observe that implication results in cryptography are usually proved via reductions that establish stronger results as well, and that the corresponding stronger result for the implication “one way functions imply public key encryption” was false (or required a proof of $P\neq NP$, depending on the model of reductions used).
This is in the same spirit of Terry’s argument to distinguish the two directions of Hall’s theorem by noting that one direction does not generalize to the infinite case (and so it is not provable via techniques that automatically generalize to the infinite case), but the other direction does.
19. Gil Kalai Says:
December 30, 2008 at 8:37 pm | Reply
Thanks a lot, Tim. I suppose regarding mathematics as a large collection of tautologies is a reasonable abstraction and not the most daring or controversial (or strange) thing about logical positivism. (Or about Wittgenstein.) Perhaps regarding math as a large collection of tautologies is like regarding chess as merely a 2-player zero sum game with complete information.
As for logical positivism, this amazing movement that for many decades dominated philosophy and intellectual life, wanted to base mathematics, science, and philosophy itself on logic, rejected metaphysics, rejected, as you said, everything not empirically based as meaningless, including “beauty”, “justice”, “ethics” and other central issues in human thinking and philosophy, (at some later time philosophers, perhaps Oxford-based, did try to draw a line between “scientific” and “meaningful”), tried to develop a logical probability-calculus that would eventually allow one to give a probability (like a truth value) to every meaningful statement, and promoted many other unbelievably ambitious and unrealistic goals. It did not really have a chance to triumph, and today even philosophy has come back to the old problems. Yet, logical positivism, through its totally unrealistic goals, achieved so much!
20. gowers Says:
December 30, 2008 at 11:52 pm | Reply
Gil, it sounds as though you have some sympathy for logical positivism, so let me confess that, despite the tone of my remarks above, so do I. In particular, some people have held that logical positivism is instantly defeated by the fact that the verification principle itself is not empirically verifiable. But as a mathematician I find that refutation as unconvincing as an argument that mathematics doesn’t work because we don’t prove that proofs are valid. And it seems to me that a weak version of the verification principle that doesn’t talk about meaning but merely says that there is an important distinction between statements that are empirically testable and statements that are not, and that this distinction coincides fairly well (if not perfectly — think of string theory) with what we would like to think of as the distinction between scientific and nonscientific statements, is correct and useful. Where I (like most people) part company with logical positivists is in their view that we should try to reduce all statements to very low-level ones, just as we can in principle do in mathematics, and thereby establish their true status. The later Wittgenstein mocks this view very nicely by asking, “Then does someone who says that the broom is in the corner really mean: the broomstick is there, and so is the brush, and the broomstick is fixed in the brush?” I regard that as a one-sentence demolition of one of the major strands of logical positivism.
Returning to 4CT and FLT (after reading Luca’s comment) I wonder if in maths the notion of implication will be quite hard to formalize, even informally, so to speak. One quite convincing argument that 4CT doesn’t imply FLT is that if you imagine a parallel earth that’s almost identical to ours, but in which Appel and Haken didn’t prove 4CT (and neither did anyone else), then you certainly don’t think, “Well, in that case Wiles probably wouldn’t have proved FLT either.” In other words, knowing 4CT doesn’t give you any advantage if you want to prove FLT. But that establishes a stronger conclusion: not only does 4CT not imply FLT, it doesn’t even help to imply it. This suggests to me that there could be a grey area. Indeed, I think there is. For example, does Ken Ribet’s work imply FLT? It certainly helped Wiles to prove it. But somehow we feel that Wiles put in a very significant extra input. But suppose, contrary to fact, that Wiles’s extra input had consisted of one ingenious but easily understood observation, after which FLT had dropped straight out. Then we would probably have said that Ribet’s work implied FLT. So it seems that this notion of implication actually gives us a spectrum that ranges from complete irrelevance all the way to trivial implication.
21. Timothy Chow Says:
December 30, 2008 at 11:53 pm | Reply
Andrej’s suggestion of looking to reverse mathematics (a la Simpson’s book) is appealing; certainly if one can define a very weak logical theory over which A implies B but B does not imply A, then we have a very satisfying formalization of “A is stronger than B.” But as I mentioned in my comment to “When are two proofs essentially the same?” it is not clear that reverse mathematics will always give us the desired answer to Tim’s question: Many people feel that Brouwer’s fixed-point theorem and Sperner’s lemma are “equivalent,” yet the former is stronger than the latter in the sense that Brouwer is not provable in RCA_0 while Sperner is. This is another illustration of the general principle that one should not confuse logical strength with psychological difficulty.
By the way, regarding the non-canonicality of the perfect matching in Hall’s theorem: There’s a very nice and not-well-known result by Blass, Gurevich, and Shelah (“On polynomial time computation over unordered structures,” J. Symbolic Logic 67 (2002), 1093–1125) that shows that the non-canonicality isn’t as bad as you might think. To state the result, let’s define the “saturation” of a bipartite graph with (equal-sized) parts A and B as follows. First partition A and B into disjoint subsets {A_i} and {B_j} so that for every i and j, every vertex in A_i has the same number of neighbors in B_j, and every vertex in B_j has the same number of neighbors in A_i. (This partition can be found by a so-called “stable coloring algorithm,” and in particular is “canonical” in a certain sense.) The “saturation” of the graph is obtained by joining *every* vertex in A_i to *every* vertex in B_j as long as there is at least one edge from A_i to B_j.
Theorem (BGS). (A, B) has a perfect matching iff its saturation does.
Now checking whether the saturation has a perfect matching still requires the usual algorithm, but the point is that “picking” an A_i or a B_j can be done canonically because the A_i and B_j are canonically defined. Thus the non-canonicality is limited to picking an arbitrary vertex from each A_i or B_j. If this sounds like a subtle distinction, let me just say that the BGS theorem is the key to proving that perfect matchability of bipartite graphs is computable in something called “choiceless polynomial time with counting,” which I won’t define here but which roughly speaking corresponds to the class of graph properties computable in polynomial time without having to assign arbitrary labels to the vertices.
A very nice open problem is whether the perfect matchability of an arbitrary (non-bipartite) graph is in choiceless polynomial time with counting.
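For concreteness, here is a rough sketch of the saturation just described; it is my own rendering, with the stable-colouring step implemented as ordinary colour refinement, which may differ in presentation from the Blass–Gurevich–Shelah paper. Vertex labels in A and B are assumed distinct, and edges are pairs (a, b) with a in A and b in B.

```python
def saturation(A, B, edges):
    """Saturate a bipartite graph (A, B, edges) as in the theorem above.

    Step 1: refine the partition of A and B until every vertex of a class
    A_i has the same number of neighbours in every class B_j, and vice versa.
    Step 2: join A_i to B_j completely whenever at least one edge runs
    between the two classes.
    """
    nbrs = {v: set() for v in set(A) | set(B)}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)

    colour = {v: (0 if v in A else 1) for v in nbrs}  # start with the two sides
    while True:
        # signature = old colour plus the multiset of neighbouring colours
        sig = {v: (colour[v], tuple(sorted(colour[w] for w in nbrs[v])))
               for v in nbrs}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        refined = {v: relabel[sig[v]] for v in nbrs}
        if len(set(refined.values())) == len(set(colour.values())):
            break  # no class split, so the partition is stable
        colour = refined

    return {(a, b) for a in A for b in B
            if any(colour[a2] == colour[a] and colour[b2] == colour[b]
                   for (a2, b2) in edges)}
```

One then runs the usual matching algorithm on the saturated graph; the theorem quoted above says that nothing is lost in doing so.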
22. Andrej Bauer Says:
December 31, 2008 at 1:32 am | Reply
I asked about reverse mathematics of Hall’s theorem on the Foundations of Mathematics mailing list. Stephen Cook explained how things stand with regards to Hall’s theorem, see http://www.cs.nyu.edu/pipermail/fom/2008-December/013256.html
Perhaps it is worth pointing out what logicians are trying to accomplish with the plethora of logical systems (Stephen Cook manages to list six systems in his short post: AC0, VTC0, TC0, VP, S-1-2, NC2): they capture various computational/proof complexity classes and reasoning principles, i.e., they are mathematical manifestations of ideas such as those appearing in this discussion, e.g., “any proof of X involves a counting argument”, or “any proof of Y not using X must be long”.
23. luca Says:
December 31, 2008 at 7:42 pm | Reply
Perhaps a way to formalize Tim’s comments on 4CT versus FLT is to think of associating a measure of “difficulty” to any proof, and then saying that “A implies B” if there is a proof of A=>B which is significantly less difficult than any (known) proof of B. (This does give a spectrum of possibilities.)
As a first approximation, one could take “difficulty” to be “length,” but it has been remarked above that there are many examples in which a lengthy proof can be simple and a short proof can be hard.
Suppose, however, that B is only known to have a routine but lengthy proof, and that someone discovers a clever and non-trivial trick giving a short proof that A=>B; it may be incorrect to say that A=>B has a “less difficult” proof than B, but certainly the short proof of A=>B shows that there is some relation between A and B. (So even the naive choice of formalizing difficulty as length does not lead to completely meaningless conclusions.)
By the way, even results in reverse mathematics can be framed this way, if one defines “difficulty” not as a quantitative measure but as the strength of the adopted proof system.
24. Questões matemático-filosóficas profundas « problemas | teoremas Says:
January 1, 2009 at 11:21 pm | Reply
[...] in his latest post (como pode um enunciado ser mais forte do que outro que lhe é equivalente?, “How can one equivalent statement be stronger than another?”, in the original). This question follows on from others classified in the category somewhat [...]
25. Benjamin Says:
January 3, 2009 at 6:06 am | Reply
Suppose I have an “elaborate” set of mathematics (perhaps with many axioms and theorems) which I’ll call P. And suppose that the following can be demonstrated from P:
1) P=>(R=>Q)
Now also suppose that the “ideas” in R are contained within the ideas in Q s.t.:
2) Q=>R – A somewhat trivial example might be that Q is simply a composite statement of (R & S).
So given system P, Q and R are equivalent. But it happens that I can derive statements with Q that I cannot derive with R. (Using my trivial case above, I can derive S from Q but I cannot derive S from R.) Hence, Q is “stronger” than R.
The possibility that Q is equivalent to R, yet stronger than R, is contingent on the fact that Q and R are embedded within a system P, at least as far as this example is concerned. Separated from that system, Q and R are simply not equivalent.
26. Carrie Jenkins Says:
January 4, 2009 at 1:26 pm | Reply
Hi Tim,
The purely psychological differences are important; I think that they are really all that’s responsible for the persistent interest in what’s now known as the Church-Fitch paradox (aka the paradox of knowability or Fitch’s paradox).
But I wanted to suggest a way of thinking about the (first) kind of more objective difference you’re looking for. The thought I have is to take seriously the temptation you feel to use modal language (“Is there some precise sense in which an efficient check that a graph satisfies Hall’s condition “has to” find a perfect matching?”) and apply an idea from the philosophy of modality.
This is the idea of varying worlds whilst ‘holding fixed’ certain facts. Normally this is done with possible worlds but to apply the idea to mathematics we’d need to use impossible worlds too. The idea would be that as we hold fixed facts which are particularly mathematically important and *vary* other mathematical facts, we get a range of non-actual worlds in which one of the two equivalent statements always implies the other but not vice versa. This could give you a sense in which the ‘stronger’ one has more ‘strength’ than the ‘weaker’ one: the ‘stronger’ one keeps its power in other salient (impossible) worlds where the mathematical facts are different. Whereas the ‘weaker’ one loses its power in some of those worlds.
27. gowers Says:
January 4, 2009 at 6:50 pm | Reply
Hi Carrie, That’s an interesting suggestion, but I’ve been unable to get it to go anywhere. Let me explain the difficulty I have. If I understand you correctly, what one would like is something like a “plausible mathematical world” that is in fact contradictory, but for which the contradiction doesn’t jump straight out at you. And one would like to devise such a world in which the trivial direction of Hall’s theorem is true (whatever that means when the world contains a contradiction, but let’s go for something like “easily seen to be true”) and the nontrivial direction is false.
My first problem is that I can devise such a world in an uninteresting way, simply by adding to ZF the statement “there exists a graph that satisfies Hall’s condition but that does not contain a perfect matching”. Such a world is plausible in the sense that the contradiction doesn’t jump out, but in order to specify it I haven’t done anything interesting. So one needs to decide what counts as an interesting construction of an impossible mathematical world.
My second problem arose from an attempt to deal with the first. Here is a possible reason for regarding the easy direction of Hall’s theorem as trivial: it’s that we can give a proof in a high-level language using little more than easy syllogisms. Such a proof would go something like this. (I think I’ve made it needlessly complicated actually, but that doesn’t affect the point I want to make.)
Bijections preserve cardinality.
The restriction of a bijection to a subset is a bijection.
Therefore, the restriction of a bijection to a subset preserves the cardinality of that subset.
A perfect matching is a bijection such that the image of any element is a neighbour of that element.
Given any subset, its neighbourhood contains the images of all its elements.
Therefore, given any subset, its neighbourhood contains the image of that subset.
By the earlier remark, the image of that subset has the same size as the subset.
Therefore, given any subset, its neighbourhood contains a set of the same size as that subset.
If a set contains another set, then the first set is at least as big as the second.
Therefore, given any subset, its neighbourhood is at least as big as the subset.
Anyhow, suppose I could carry out some analysis like that, and argue on the basis of it that the easy direction of Hall’s theorem was trivial because it all took place at a superficial level. I could then perhaps define a “plausible mathematical world” as being one where all superficial deductions were valid, or something like that. It might be necessary to restrict the statements that one was even allowed to consider, to avoid the objection that any proof can be broken down into steps that are superficial. Or one might wish to argue that “trivially implies” is not a transitive relation between statements. Again, these are problems, but they are not the main difficulty, it seems to me. The main problem is that if I could give an analysis that did justice to our idea of what counts as a trivial deduction, then it’s not clear what would be added by talk of plausible mathematical worlds: I could just define P to be stronger than Q if P trivially implies Q and Q does not trivially imply P.
Both my main difficulties can be summed up as follows: how would one get the notion of a plausible mathematical world to do any work? I’m not saying that I think it couldn’t, but just that I don’t at the moment see how it could.
28. Timothy Chow Says:
January 5, 2009 at 3:49 am | Reply
The possible-worlds point of view is, very roughly speaking, the same as what I and others have been calling the reverse-mathematics point of view. I think it is best not to think of “impossible worlds” or worlds that are in fact contradictory. Instead, think in terms of “nonstandard worlds.”
In the case of Hall’s theorem, Cook and Nguyen’s work means that we’re in good shape. Take the very weak logical theory VTC_0. In this theory we can *state* Hall’s theorem, and in the standard model of VTC_0, Hall’s theorem means what we think it means. However, only one direction of Hall’s theorem is known to be *provable* in VTC_0. The provability of the other direction in VTC_0 is an open problem, but let’s suppose that it’s not provable. This means that there is some nonstandard model of VTC_0 in which one direction of Hall’s theorem holds but the other doesn’t. In this nonstandard model, “Hall’s theorem” doesn’t quite mean what we normally think of it as meaning, because we’re interpreting the words slightly differently from usual. Nevertheless, to the extent that VTC_0 captures a natural subset of correct mathematical reasoning about finite combinatorics, this gives a precise and natural meaning to the statement that one side of Hall’s theorem is stronger than the other.
It’s worth noting that Cook and Nguyen’s logical systems are very closely related to standard computational complexity classes. Thus, “stronger” in this context means something like “harder to compute.” Of course, the informal notion of “easy” doesn’t *exactly* coincide with “easy to compute,” but low computational complexity is at least one reasonably plausible way to interpret “easy.”
29. Jaime Montuerto Says:
January 5, 2009 at 10:03 am | Reply
Dear Prof Gowers,
I have this question that is beyond me but might just be appropriate now. The FLT’s equation:
a^n + b^n = c^n and gcd(c^n - 1, a^n + b^n - 1) = 1
is similar. I mean the second equation implies the first one. Can you show, or give a counterexample of, a positive ODD factor in the second equation? What’s interesting about the second equation is that it is based on Fermat’s Little theorem. The question is: if the second equation is true then the first one is true. But we all know that the first one is true. It doesn’t automatically imply that the second one is true.
Thanks
30. Carrie Jenkins Says:
January 5, 2009 at 12:00 pm | Reply
Hi Tim,
Suppose we can specify in other terms (i.e. without making reference to what we happen to find obvious) a set of necessary and sufficient conditions for a deduction to count as ‘trivial’. I agree it’s not straightforward but let’s suppose. We then use it to specify the relevant class of worlds.
The point is that even if the new specification of the relevant worlds ends up delivering exactly the same worlds (same extension) as if you had just said ‘holding fixed the trivial deductions but varying other things’, provided you have specified it in other terms (different hyperintension) you have given an account of the sense in which one statement is stronger than the other which does not make reference to what we happen to find plausible/obvious, and could therefore reasonably be counted as objective.
Here’s an analogy. Suppose I like all and only green things. And someone is wondering if there is an objective difference between these things and all the others. Obviously a non-objective difference is that I only like some of them. But then we notice that everything I like is green, and everything else isn’t. Then we can say that there is an objective colour difference between the two groups. The factor we have identified marks an objective difference although it coincides with something non-objective (my preferences).
31. gowers Says:
January 5, 2009 at 12:28 pm | Reply
Tim, the responses to my original post have left me very curious to find out more about reverse mathematics, in which I was already interested in fact, though from a safe distance. For instance, I was recently working on a problem with a colleague, David Conlon, and at one point we managed to get ourselves unstuck in a sort of reverse-mathematics way: we needed a conclusion Q to be true, and there was a seemingly obvious way of proving it. But we couldn’t get the hypotheses P that we needed to carry out this proof. But then, by focusing on the converse direction, we observed (i) that Q did not imply P and (ii) that there was a different, and simple, set of hypotheses P’ that was equivalent to Q and that we could actually obtain. That’s not supposed to be an addition to the fascinating work on reverse mathematics, but rather an indication that the reverse point of view is, as its proponents say, of interest in “real” mathematics and not just to logicians.
Returning to your comment, I am of course much happier with the idea of nonstandard models than I am with inconsistent models, and very interested and impressed that there is such a beautifully precise answer to the question I asked about Hall’s theorem. At first, the idea of an inconsistent model seems itself to be inconsistent (after all, the existence of a model surely guarantees consistency). But I suppose there might conceivably be a way of carrying out Carrie’s suggestion. First, one would need to define some notion like an “imperfect model” for some axioms. The axioms wouldn’t be true in the model, but neither would they be blatantly false. If that could be done, it’s conceivable that it might be easier to construct such a model than to show syntactically that an implication was not trivial.
32. Joel Says:
January 5, 2009 at 10:55 pm | Reply
Hi Tim,
The following is, I think, far less interesting than the set of examples discussed above … but no-one appears to have mentioned a possible distinction between “stronger” and “strictly stronger” conditions. This problem actually appears all over the place. After all, mathematicians cannot even agree whether the notation $A \subset B$ means that $A$ is a strict subset of $B$ or whether it means the same thing as $A \subseteq B$!
For topologies I use the terms “stronger” and “weaker” in the non-strict sense, but with some misgivings. For example, one of my favourite results of elementary point-set topology is the fact that whenever a compact topology is stronger than a Hausdorff topology, then the two topologies are, in fact, equal … in which case it seems strange to use the term stronger.
I suppose that when comparing conditions, “stronger” is easier to demonstrate than “strictly stronger”, which may well depend in any case on the setting you are working in (as demonstrated above).
Joel
33. Pete Says:
January 6, 2009 at 1:14 am | Reply
Dear Gil,
with respect to your example -
I’d claim that the definition of ‘finite’ is not totally trivial. Of course it is necessary: if one takes an infinite set A, a set B of the same cardinality, a set C with one element, and D the empty set, then there exist a bijection f from A to B (by definition) and a bijection h from A union C to B union D (via well-ordering), but of course no bijection g from C to D.
So, then, here is an easy proof, using Peano and ZF (but of course not C).
Let r be an enumeration of B union D, i.e. a bijection from B union D to [k] for some integer k (I see no other definition of ‘finite’ that doesn’t involve a non-trivial proof to make sense of the definition).
Let s be an enumeration of C.
Consider the set r(h(A union C)-f(A)). This is a set of integers; let q(j) be the j-th integer of this set in increasing order (which can be defined by functions via summation of an indicator).
Let g(c)=r^{-1}(q(s(c))) for c in C.
If g fails to be well-defined, then this can only be because q fails to be well-defined, which it does not (Peano arithmetic).
It is clear that g is injective and maps into D; to see that it is surjective is again Peano arithmetic.
I think the fact that g as given here is actually a bijection requires about as much thought as the fact that the iterative procedure you give actually terminates – that is, both fail if there are inappropriate infinite quantities floating around, and for both proving that they succeed requires some idea of ‘finite’, which in turn requires Peano arithmetic to make sense. The difference is I think the proof I give is ‘obvious’ in the sense that it’s the first thing you’d try if you didn’t care for elegance and were told that the word ‘finite’ in the statement is necessary: which for me means the proof is trivial.
34. Mark Bennet Says:
January 6, 2009 at 9:23 am | Reply
Perhaps another example, which seems to me to be similar, might help thinking about this. Try Wedderburn’s Theorem that every finite skew field is commutative, which means that the two properties ‘being a finite skew field’ and ‘being a finite field’ are equivalent.
Some observations
* Being a skew field can be explicitly made part of the definition of being a field – this is not normally pointed out (or if it is, it is quickly forgotten) as one tends to learn about fields first and skew fields later. One could do something similar with Hall’s theorem defining matched graphs and paired graphs (or whatever, with fairly obvious definitions) and saying ‘every finite matched graph is a paired graph’. How natural would it be to say that a paired graph is a matched graph + another condition? – I don’t think it quite works the same.
* With Wedderburn there are some fairly obvious finite and infinite examples to hand: with Hall, the language of marriage and the way I learned graph theory both give me a psychological bias to thinking of finite cases first.
* I quite like what Benjamin posted above, which raises the question what properties beyond those explicit in the definition of Q are needed to prove R?
* Are there any natural/obvious examples of this phenomenon which don’t depend on a finiteness condition?
35. Timothy Chow Says:
January 6, 2009 at 5:19 pm | Reply
Here’s a comment that I meant to make earlier but forgot. There is a somewhat related question that I brought up for discussion on the Foundations of Mathematics (FOM) mailing list some years ago, which is what sense we can make of statements such as, “The Riemann hypothesis might be true.” Surely Yoda (of Star Wars fame) would object, “True, or true not. There is no `might’”? That is, either RH is true in all possible worlds or false in all possible worlds; it’s not that RH is true in some possible worlds and RH is false in some possible worlds.
One suggestion that came out of the FOM discussion was that work in “dynamic epistemic logic” is relevant. That is, “RH might be true” is interpreted as “We don’t know that RH is false,” and so we formalize “RH might be true” by formalizing the concept of knowledge. Note that knowledge can change over time even if mathematical facts can’t; hence the adjective “dynamic.”
I haven’t invested the effort to understand the details of dynamic epistemic logic so I’m not sure how much the theory has to offer, but it seems that it might be relevant to the current discussion too. As has already been pointed out, the term “easy to prove” seems to have strong psychological overtones, and so maybe “easy proof” can be plausibly formalized along the same lines that “knowledge” can be formalized.
36. Joel Says:
January 7, 2009 at 11:16 am | Reply
Here is another example of the original phenomenon. It is one of those well-known coffee-table exercises whose origins I do not know, and where no-one should spoil anyone else’s fun. I offer my second-year undergraduates a prize for the first correct solution each year.
Let $G$ be a semigroup. Consider the following three conditions that $G$ might satisfy:
(a) $G$ is a group;
(b) for all $g$ in $G$ there is a unique $g^*$ in $G$ such that $g g^* g = g$;
(c) for all $g$ in $G$ there is a unique $g^*$ in $G$ such that $g g^* g = g$ and $g^* g g^* = g^*$.
The main exercise is to show that (a) is equivalent to (b). The implication (a) implies (b) is essentially trivial, but (b) implies (a) is much more fun.
A superficial comparison of (b) and (c) suggests that (c) might be stronger than (b), but in fact it is strictly weaker: (c) defines an inverse semigroup.
Joel
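A brute-force check of conditions (b) and (c) on a concrete finite semigroup is easy to write down and spoils none of the fun of the proof; here is a sketch of my own, with the Cayley table assumed to be given as a dictionary.

```python
def check_b_and_c(elements, mul):
    """Test Joel's conditions (b) and (c) for a finite semigroup.

    mul[(g, h)] = g*h is the Cayley table.  This merely tests the two
    conditions on one example; it proves nothing about the equivalences.
    """
    def candidates(g, strong):
        # elements s with g s g = g (and, if strong, also s g s = s)
        return [s for s in elements
                if mul[(mul[(g, s)], g)] == g
                and (not strong or mul[(mul[(s, g)], s)] == s)]
    b = all(len(candidates(g, strong=False)) == 1 for g in elements)
    c = all(len(candidates(g, strong=True)) == 1 for g in elements)
    return b, c

# The two-element semilattice {0, 1} under ordinary multiplication
# satisfies (c) but not (b), illustrating that (c) really is weaker.
table = {(x, y): x * y for x in (0, 1) for y in (0, 1)}
print(check_b_and_c([0, 1], table))  # (False, True)
```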
37. Cahit Says:
January 9, 2009 at 10:19 am | Reply
I would like to comment on triviality and non-triviality when we compare the strength of two equivalent statements. In another blog (http://tbgloops.blogspot.com/2008/08/definetrivial.html)
triviality has been discussed along with the proof of the 4CT. The conclusion was that “Intelligence is the ability to create any non-trivial thing”. I think the reverse “Non-triviality is the ability to create any intelligent thing” is also reasonable.
38. GG Says:
January 12, 2009 at 11:15 am | Reply
There are various ways to start from a statement A and get another statement B. One can, for example, deduce B from A, negate A to get B, add quantifiers to A to get B. Here, we are studying the interaction of statements with respect to deduction. However, it may be interesting to consider another form of ‘interaction’ between statements. Specifically I was thinking about dualisation. Indeed, many statements have natural dual statements. I often found that to better understand a statement, it is revealing to consider its dual statement. Also, at least empirically, dual statements are often true.
So suppose one has two equivalent statements A and B (with respect to deduction), with A having a natural dual statement, but not B. Then one could say A is stronger than B with respect to dualisation. Indeed, A is more prone than B to “producing revealing and often true statements”. This is very vague and based on an intuition. I’ll try to find an example illustrating this.
39. Gil Kalai Says:
January 12, 2009 at 4:10 pm | Reply
Dear all, Here is another example of a somewhat different nature.
Consider a finite POSET (partially ordered set) X.
Dilworth’s theorem asserts that X can be covered by chains whose number is the size of the largest antichain in X.
A much easier theorem asserts that X can be covered by antichains whose number is the size of the largest chain in X.
(A chain is a set of elements so that every two are comparable. An antichain is a set of elements so that every two (distinct) are incomparable.)
Now, these two theorems are equivalent because of a Theorem by Lovasz which asserts that the complement of a perfect graph is also perfect.
A (finite) graph G is perfect if for every induced subgraph H of G, H can be covered by $\alpha(H)$ cliques (=complete subgraphs) where $\alpha (H)$ is the independence number of H.
Starting from a poset X we can describe two graphs: the comparability graph, where x and y are adjacent if x>y or y>x, and (its complement) the incomparability graph, where x and y are adjacent if neither x>y nor y>x. Lovasz’s theorem gives an equivalence between two theorems, one of which is easy and the other of which is harder.
In this example we are talking about the equivalence of absolute statements (and not the equivalence of properties of graphs that sometimes hold and sometimes do not, like the perfect matching example), so it is more similar to the equivalence between theorems discussed at the end of the post. But here the equivalence is “genuine” in some sense.
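The “much easier” covering by antichains is so easy that one can simply write it down: take the level sets of the height function. Here is a sketch of my own, for a finite poset given by a strict comparison predicate.

```python
def antichain_cover(elements, less_than):
    """Cover a finite poset by antichains, one per height level.

    less_than(x, y) means x < y.  The number of levels produced equals the
    length of the longest chain, which is the easy covering theorem above.
    """
    # Process elements in a linear extension: fewer elements below => earlier.
    order = sorted(elements, key=lambda x: sum(less_than(y, x) for y in elements))
    height = {}
    for x in order:
        below = [height[y] for y in order if y in height and less_than(y, x)]
        height[x] = 1 + max(below, default=0)
    levels = {}
    for x, h in height.items():
        levels.setdefault(h, []).append(x)  # comparable elements get different heights
    return list(levels.values())
```

Dilworth’s theorem, by contrast, does not seem to admit such a direct greedy description, which is one way of feeling the asymmetry Gil points out.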
40. Joel Says:
January 13, 2009 at 12:01 pm | Reply
I was also thinking of Dilworth’s theorem, but in a way closer to the Hall matching theorem setting. It is trivial that you need at least as many chains as the size of the largest antichain, but the reverse inequality is non-trivial.
For a given positive integer k, you can consider the two properties the poset might or might not have:
(a) at least k chains are needed to cover the poset;
(b) there is an antichain in the poset with at least k elements.
Dilworth’s theorem tells us that (a) and (b) are equivalent. However, the fact that (b) implies (a) is trivial, while (a) implies (b) is non-trivial.
Joel
41. Greg Marks Says:
January 14, 2009 at 8:07 pm | Reply
Perhaps the reason it “feels strange” to observe that the Four Color Theorem is implied by Fermat’s Last Theorem (or its negation) is that human beings seem predisposed to seek out causality in the external world, so when confronted with the statement “P implies Q” our instinct is to feel that P has “caused” Q to be true. I think examples like P = Fermat’s Last Theorem and Q = Four Color Theorem bring this false intuition to the fore.
42. Peter Shor Says:
January 15, 2009 at 5:34 pm | Reply
I have a little story about strength of equivalent statements. In 2003, I proved that four additivity conjectures in quantum information were equivalent. Let’s call these A, B, C, and D.¹ Four of these implications are essentially trivial, with one- or two-line proofs. These are A→B, A→C, B→D, and C→D. To get implications in the opposite direction, you have to do a moderate amount of work.
Earlier this year, Matt Hastings discovered a very clever counterexample to the additivity conjecture. Obviously, since A is the strongest of these equivalent conjectures, you would expect that the counterexample was to conjecture A. But no, the counterexample was actually to the weakest conjecture D. Why? Probably because D is the simplest of these equivalent conjectures, as well as being the weakest, so counterexamples to conjecture D are easiest to analyze.
¹ For those who want to keep track, A is known as strong superadditivity of entanglement of formation, B is additivity of entanglement of formation, C is additivity of classical capacity of a quantum channel, and D is additivity of minimum entropy output of a quantum channel. These names are irrelevant to the point of the story.
43. Jack Says:
January 27, 2010 at 2:07 am | Reply
This is so funny, I know it’s a “dead thread”, but I was thinking about this the other day and came across this post. By the way I am only an undergraduate. From my point of view it is no coinkydink that the example comes from (finite) combinatorics. The fact that the existence of a perfect matching implies Hall’s condition is trivial, the fact that Hall’s condition implies the existence of a perfect matching is nontrivial. Quite objectively (see next sentence). The former does not require induction to prove, but the latter does. Or that’s what I think it comes down to, anyway.
This is why you could say something like, Dilworth’s theorem is a special case of the perfect graph theorem in graph theory, and in turn Hall’s theorem is a special case of Dilworth’s theorem. You don’t need to induct to prove these implications, they sort of “spill out”, in not too many lines. I honestly think that if you tried to reverse these implications, you would end up using induction, and that is what makes these theorems stronger. I am particularly sure about the impossibility of getting from Dilworth’s theorem to the perfect graph theorem, in such a way.
I think this fits the sense I get that, for example, proving Dilworth’s theorem by induction is doing the donkey work, and deriving Hall’s theorem from Dilworth’s theorem is just fiddling. Fiddling fiddling fiddling.
44. Jack Says:
January 27, 2010 at 2:32 am | Reply
I seem to be making another post. I am sure my original post contains plenty of gaps, anyhow.
The “requires induction” idea is surely not rigorous, whatever that means, but on the other hand, I am convinced that it could be made so, to such a degree that rigor actually has a value.
But somehow I don’t think “ZFC” is going to be the slightest bit of help. For one thing, ZFC concerns the whole of mathematics, whereas what I am thinking about concerns only (finite) combinatorics [it probably extends to combinatorial set theory in some limited form, or whatever].
As such I have no idea how one might, even if you wanted to, come up with a new, general pan-mathematical meaning to the word “equivalent” that would be more wise.
But I think it is important that implications can “just seem wrong”. Earlier (while googling what led me to this thread) I read that Hall’s theorem implies Dilworth’s theorem and the max-flow-min-cut theorem. These implications, though logically valid, just seem wrong somehow, and seem even more wrong when viewed through the spectacles of induction.
45. Jack Says:
January 27, 2010 at 2:50 am | Reply
Back to Hall’s theorem, I can see how you could derive this theorem, without induction, from Konig’s theorem that min vertex cover equals max matching in a bipartite graph. But I am not sure how one could do the converse.
46. Basic logic — relationships between statements — converses and contrapositives « Gowers's Weblog Says:
October 5, 2011 at 11:28 am | Reply
[...] the second, while the second is not so easy to deduce from the first. I discussed this phenomenon in a blog post a few years ago, which provoked a number of very interesting [...]
47. Austin Arlitt Says:
March 18, 2012 at 1:17 am | Reply
“At first, the idea of an inconsistent model seems itself to be inconsistent (after all, the existence of a model surely guarantees consistency). But I suppose there might conceivably be a way of carrying out Carrie’s suggestion. First, one would need to define some notion like an “imperfect model” for some axioms. The axioms wouldn’t be true in the model, but neither would they be blatantly false. If that could be done, it’s conceivable that it might be easier to construct such a model than to show syntactically that an implication was not trivial.”
I might be misunderstanding your intent here, but here’s one “inconsistent” system you can construct trivially. (It is, of course, only trivial in the sense that after you know a powerful theorem it is obvious, with the theorem here being incompleteness.)
One can construct a provability predicate for ZFC. Call it F(x).
Then both ZFC + F(0=1) and ZFC + ~F(0=1) are consistent if ZFC is.
The second system trivially tacks on another statement, which happens to be true, but the first one adds a false axiom. This axiom cannot “interact” with the arithmetic component of ZFC, so no formal inconsistency arises.
I’ll just leave this thought here for posterity’s sake. After all, I stumbled onto this thread.
http://mathhelpforum.com/advanced-algebra/130683-one-prove-question.html | # Thread:
1. ## one prove question
Let A be an nxn matrix with characteristic polynomial
$f(\lambda) = \lambda^n + a_{n-1}\lambda^{n-1}+...+ a_1\lambda +a_0$
Prove that $a_0 = (-1)^n \det A$
2. Every square matrix satisfies its characteristic equation.
3. Originally Posted by kjchauhan
Every square matrix satisfies its characteristic equation.
That may be, but you haven't proven this. This kind of question calls for an equation proof, not just a statement.
I second the request for help on this.
4. Recall that $f(t)=\det (A-tI)$. Set $t=0$ on both sides! You should just get $a_0$. The factor $(-1)^n$ could come into the picture if somehow you had defined $f(t)=\det (tI-A)$ but that is certainly not standard. The answer is just $a_0$.
5. Originally Posted by kjchauhan
Every square matrix satisfies its characteristic equation.
I'm not sure how you expected him to use the Cayley-Hamilton theorem.
6. Originally Posted by Bruno J.
Recall that $f(t)=\det (A-tI)$. Set $t=0$ on both sides! You should just get $a_0$. The factor $(-1)^n$ could come into the picture if somehow you had defined $f(t)=\det (tI-A)$ but that is certainly not standard. The answer is just $a_0$.
EDIT: Another source told me that the term $f(t)=\det (tI-A)$ is equal to the characteristic polynomial, so that should help in solving the problem.
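For completeness, here is the one-line computation under the monic convention $f(\lambda)=\det(\lambda I - A)$, which is the convention matching the monic polynomial in the problem statement (a sketch; the convention $f(t)=\det (A-tI)$ quoted above differs from it by a factor of $(-1)^n$):

$$a_0 = f(0) = \det(0\cdot I - A) = \det(-A) = (-1)^n \det A.$$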
http://mathoverflow.net/revisions/56373/list | ## Return to Answer
For the symmetric group $\Sigma_n$, you can take \begin{align*} E\Sigma_n &= \{\text{injective functions } \{1,\dotsc,n\}\to\mathbb{R}^\infty \} \\ B\Sigma_n &= \{\text{subsets of size $n$ in } \mathbb{R}^\infty \} \end{align*}
Now let $G_n$ be the group of braids on $n$ strings, and let $H_n$ be the subgroup of pure braids. We have \begin{align*} BH_n &= \{\text{injective functions } \{1,\dotsc,n\}\to\mathbb{R}^2 \} \\ BG_n &= \{\text{subsets of size $n$ in } \mathbb{R}^2 \} \end{align*} These spaces have trivial homotopy groups $\pi_{k}(X)$ for $k\geq 2$, so $$EH_n=EG_n= \text{ universal cover of } BH_n = \text{ universal cover of } BG_n.$$ I think I see a proof that this space is homeomorphic to $\mathbb{R}^{2n}$, but I don't know if that is in the literature.
http://mathhelpforum.com/algebra/200996-how-simplify-algebraic-expression.html | # Thread:
1. ## How to simplify algebraic expression
$\frac{1}{(\frac{x - 3}{x - 2})^{\frac{1}{2}}} \cdot \frac{1}{2}(\frac{x - 3}{x - 2})^{-\frac{1}{2}} \cdot \frac{1}{(x - 2)^{2}}$
$\frac{1}{(\frac{x - 3}{x - 2})^{\frac{1}{2}}} \cdot \frac{1}{2}(\frac{x - 3}{x - 2})^{-\frac{1}{2}} \cdot \frac{1}{(x - 2)^{2}}$
$= \frac{1}{(\frac{x - 3}{x - 2})^{\frac{1}{2}}} \cdot \frac{1}{2}\cdot\frac{1}{(\frac{x - 3}{x - 2})^{\frac{1}{2}}} \cdot \frac{1}{(x - 2)^{2}}$
$= \frac{1}{(\frac{x - 3}{x - 2})^{\frac{1}{2}}} \cdot \frac{\frac{1}{2}}{(\frac{x - 3}{x - 2})^{\frac{1}{2}}} \cdot \frac{1}{(x - 2)^{2}}$
$= \frac{1}{(\frac{x - 3}{x - 2})^{\frac{1}{2}}} \cdot \frac{1}{2(\frac{x - 3}{x - 2})^{\frac{1}{2}}} \cdot \frac{1}{(x - 2)^{2}}$
$= \frac{1}{(\frac{x - 3}{x - 2})^{\frac{1}{2}}} \cdot \frac{1}{(\frac{x - 3}{x - 2})^{\frac{1}{2}}} \cdot \frac{1}{2(x - 2)^{2}}$
$= \frac{1}{(\frac{x - 3}{x - 2})} \cdot \frac{1}{2(x - 2)^{2}}$
$= \frac{1}{(x - 3)} \cdot \frac{1}{2(x - 2)}$
$= \frac{1}{2(x - 3)(x - 2)}$
Which step have I done incorrectly?
2. ## Re: How to simplify algebraic expression
Hello, daigo!
Your work is correct! You made a few unnecessary steps, though.
$\frac{1}{\left(\dfrac{x - 3}{x - 2}\right)^{\frac{1}{2}}} \cdot \frac{1}{2}\left(\frac{x - 3}{x - 2}\right)^{-\frac{1}{2}} \cdot \frac{1}{(x - 2)^{2}}$
$\text{We have: }\;\frac{1}{2}\cdot\underbrace{\frac{1}{\left( \dfrac{x-3}{x-2}\right)^{\frac{1}{2}}}\cdot \frac{1}{\left(\dfrac{x-3}{x-2}\right)^{\frac{1}{2}}}}_{\downarrow} \cdot\frac{1}{(x-2)^2}$
$=\;\frac{1}{2}\cdot\frac{1}{\frac{x-3}{x-2}} \cdot \frac{1}{(x-2)^2}$
$=\;\frac{1}{2(x-3)(x-2)}$
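As a quick sanity check of the algebra (not part of the reply above), sympy can be asked to confirm the simplification:

```python
import sympy as sp

x = sp.symbols('x')
expr = (1 / sp.sqrt((x - 3)/(x - 2))) * sp.Rational(1, 2) \
       * ((x - 3)/(x - 2))**sp.Rational(-1, 2) * 1/(x - 2)**2
target = 1 / (2*(x - 3)*(x - 2))
print(sp.simplify(expr - target))  # should print 0
```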
http://mathoverflow.net/questions/34695?sort=votes | ## Counting/constructing Toric Varieties
Given a torus $T$ is there way to classify all the toric varieties it gives rise to? That is, classify all toric varieties $X$ whose torus is isomorphic to $T$. Is there a way to construct these toric varieties (i.e. give equations for them)?
Remark: as has been explained in the comments and answers, this question is ill-posed. But it seems to have generated good discussion so I'm leaving the phrasing as is.
-
Yes. Any book on toric varieties will explain what a fan is, and using that combinatorial datum you can construct all of them. – Mariano Suárez-Alvarez Aug 5 2010 at 22:28
As for isomorphism classes: if you want $T$-equivariant isomorphisms, then the toric varieties are isomorphic iff they have the same fan. – Mariano Suárez-Alvarez Aug 5 2010 at 22:35
"To construct" and "to give equations" are two different questions. In particular, there exist complete toric varieties that aren't projective, so while it's possible to construct them as abstract algebraic varieties or schemes, it doesn't make sense to talk about their equations in $\mathbb{P}^n.$ – Victor Protsak Aug 6 2010 at 5:02
It depends on what you mean by classification. If you mean to concretely classify them it is no doubt hopeless. See for instance the extremely involved attempt (yet far from complete) of classifying all three-dimensional smooth and proper toric varieties in Oda: Torus embeddings and classifications. – Torsten Ekedahl Aug 6 2010 at 6:48
## 2 Answers
As far as my understanding goes the answer is no, and I will try to explain why and clarify the list of comments (I have little reputation so I cannot comment there) and give you a partial answer. I hope I do not patronise you, since you may already know part of it.
First of all, as Torsten said, it depends what you understand for classification. In this context a torus $T$ of dimension $r$ is always an algebraic variety isomorphic to $(\mathbb{C}^*)^r$ as a group. A complex algebraic variety $X$ of finite type is toric if there exists an embedding $\iota: (\mathbb{C}^\ast)^r \hookrightarrow X$, such that the image of $\iota$ is an open set whose Zariski closure is $X$ itself and the usual multiplication in $T=\iota((\mathbb{C}^\ast)^r)$ extends to $X$ (i.e. $T$ acts on $X$).
Think about all toric varieties. It is hard to find a complete classification, i.e. to be able to give the coordinate ring for each affine patch and the morphisms among them for all toric varieties.
However, when the toric varieties we consider are normal there is a structure called the fan $\Sigma$ made out of cones. All cones live in $N_\mathbb{R}\cong N\otimes \mathbb{R}$ where $N\cong \mathbb{Z}^r$ is a lattice. A cone is generated by several vectors of the lattice (like a high school cone, really) and a fan is a union of cones which mainly have to satisfy that they do not overlap unless the overlap is a face of the cone (another cone of smaller dimension). There is a concept of morphism of fans and hence we can speak of fans 'up to isomorphism' (elements of $\mathbf{SL}(n,\mathbb{Z})$). Given a lattice N, there is an associated torus $T_N=N\otimes (\mathbb{C}^*)$, isomorphic to the standard torus.
Then we have a 1:1 correspondence between separated normal toric varieties $X$ (which contain the torus $T_N$ as a subset) up to isomorphism and fans in $N_\mathbb{R}$ up to isomorphism. There are algorithms to compute the fan from the variety and the variety from the fan and they are not difficult at all. You can easily learn them in chapter seven of the Mirror Symmetry book, available for free. Given any toric variety (even non-normal ones) we can compute its fan, but computing back the variety of this fan many not give us the original variety unless the original is normal. You can check this easily by computing the fan of a $\mathbf{V}(x^2-y^3)$ (torus embedding $(t^3,t^2)$) which is the same as $\mathbb{C}^1$ but obviously they are not isomorphic (the former has a singularity at (0,0)). In fact, since there are only two non-isomorphic fans of dimension 1 (the one generated by $1\in \mathbb{Z}$ and the one generated by 1 and -1) we see that there are only three normal toric varieties of dimension 1, the projective line and the affine line, and the standard torus.
The proof of this statement is not easy and to be honest I have never seen it written down complete (and I would appreciate any reference if someone saw it) but I know more or less the reason, as it is explained in the book about to be published by Cox, Little and Schenck (partly available). This theorem is part of my first year report which is due by the end of September, so if you want me to send you a copy when it is finished send me an e-mail.
So, yes, in the case of normal varieties there is some 'classification' via combinatorics, but in the non-normal case I doubt there is one (I have never worked with non-normal toric varieties anyway).
Become a toric fan!
-
Welcome to MO! One should probably add that older books on toric varieties, such as Fulton, define a toric variety to be normal. In this case, Toric Varieties (up to $T$-equivariant isomorphism) are in bijection with fans. – David Speyer Aug 13 2010 at 12:27
Shouldn't $\mathbb{C}^*$ also appear in your list of normal toric varieties of dimension one? :) – damiano Aug 13 2010 at 17:45
Hum, I am not sure. I think according to the definition you have to take the Zariski Closure, which would make it just $\mathbb{C}^1$, right? You make me doubt... – Jesus Martinez Garcia Aug 15 2010 at 15:02
I think $T=\mathbb{C}^*$ should be on the list. $T$ is closed in the zariski top. as a sub var. of itself. Plus I think it would be weird if tori weren't toric varieties. – solbap Aug 15 2010 at 16:50
OK, fixed, the torus has to be a proper subset (I checked, very important for the proof) and the variety has to be separated (important too). – Jesus Martinez Garcia Aug 16 2010 at 16:08
This question seems based on a confusion of two different possible meanings of a "torus" in algebraic geometry. A torus could mean (but among algebraic geometers, usually doesn't) an algebraic group which is isomorphic as a real Lie group to a Cartesian product of circles. That kind of torus has a moduli space of complex algebraic structures, and in higher dimensions a larger moduli space of complex analytic structures. Or a torus could mean a Cartesian power of the non-zero complex numbers $\mathbb{C}^*$. In a natural sense, there is only one of them in each complex dimension. A toric variety is an equivariant (possibly partial) compactification of a torus in this second sense. Thus, the $n$-dimensional torus $(\mathbb{C}^*)^n$ gives rise to all $n$-dimensional toric varieties.
And, as is explained in the comments and in Fulton's book, you construct a toric variety from its fan. If you want to construct a projective toric variety by a set of equations in projective space, there is an explicit way to do that using an integer (lattice) polytope whose normal fan is the given fan; the lattice points of the polytope provide the monomials of a projective embedding. But I'm guessing that the other remark addresses your question more directly.
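For instance (again just the standard example): the unit simplex $\Delta=\operatorname{conv}\{(0,0),(1,0),(0,1)\}\subset\mathbb{R}^2$ has lattice points corresponding to the monomials $1,s,t$, and the map
$$(\mathbb{C}^*)^2\to\mathbb{P}^2,\qquad (s,t)\mapsto[1:s:t],$$
extends to the usual $\mathbb{P}^2$; the dilated simplex $2\Delta$, whose lattice points give the monomials $1,s,t,s^2,st,t^2$, instead produces the Veronese embedding of $\mathbb{P}^2$ in $\mathbb{P}^5$.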
-
Thanks for this answer. So to go off on a tangent, it seems like if you had another field $k$ of char 0 you could do a similar construction, looking at normal varieties containing a dense open $\operatorname{Spec} k[x_1^{\pm},\dots,x_n^{\pm}]$. This seems OK even for positive characteristic, but now this connection with fans doesn't quite work. Do people study such things or is it not very interesting to generalize in this way? – solbap Aug 15 2010 at 17:25
I am certainly not an expert in algebraic geometry. My understanding is, first, that there is a huge difference between having a dense torus and having a torus action with a dense orbit. The latter apparently does generalize readily to other fields besides $\mathbb{C}$, and it is interesting in any characteristic, but I don't know a whole lot about that topic. Note though that if $k$ is not algebraically closed, you would want to look at schemes over $k$, and thus implicitly the algebraic closure. – Greg Kuperberg Aug 15 2010 at 20:59
http://math.stackexchange.com/questions/180481/testing-if-a-geometric-series-converges-by-taking-limit-to-infinity?answertab=votes | # Testing if a geometric series converges by taking limit to infinity
If the limit as n approaches infinity of the terms of a series is not zero, then the series diverges. This makes intuitive sense to me: it is an infinite series, and if we keep adding terms that do not shrink to zero, the partial sums can never settle down to a finite value.
However, if the limit as n approaches infinity does equal zero, that is not enough information to tell whether the series converges or diverges.
I would think that if the limit as n approaches infinity is zero, and the terms are given by a continuous function, then the series would converge to some real number. Why is this not the case? The only counter-example I can think of is something like $a_n = (-1)^n$, but that is not a continuous function.
-
Convergence of infinite series has nothing to do with continuity; sequences are functions defined on the natural numbers, and any such function is continuous since the natural numbers form a discrete set. The fact that we often consider sequences defined by formulas like $a_n = \frac{1}{n^2}$ which happen to make sense for all real values of $n$ is a red herring, and in particular the continuity of the functions determined by such formulas is largely irrelevant. – Paul Siegel Aug 9 '12 at 1:48
## 1 Answer
$\lim_{n \rightarrow \infty} \frac{1}{n} = 0$ but $\sum_{n = 1}^\infty \frac{1}{n}$ does not converge. The associated function $\frac{1}{x}$ is continuous on $[1, \infty)$.
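A quick numerical sanity check makes the point vivid (just an illustration, not part of the argument):

```python
# Partial sums of the harmonic series grow without bound (roughly like ln n),
# even though the individual terms 1/n tend to 0.
def harmonic_partial_sum(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 10**3, 10**5):
    print(n, harmonic_partial_sum(n))
# prints approximately: 10 2.93, 1000 7.49, 100000 12.09
```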
Also, a geometric series is the series associated with a sequence $a_n = pr^n$ for some constants $p$ and $r$. A geometric series converges if and only if $|r| < 1$ (assuming $p \neq 0$).
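To spell out why (a standard computation): for $r \neq 1$ the partial sums are
$$\sum_{k=0}^{n} p r^{k} = p\,\frac{1-r^{n+1}}{1-r},$$
and $r^{n+1} \to 0$ exactly when $|r|<1$, giving the limit $\frac{p}{1-r}$. For $|r|\ge 1$ (and $p\neq 0$) the terms $pr^{n}$ do not tend to $0$, so the series diverges by the very test discussed in the question.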
-
So if $|r| > 1$ it automatically diverges? – ordinary Aug 9 '12 at 1:42
Yes. If $|r| \geq 1$ (and $p \neq 0$), then $pr^n$ does not tend to $0$ as $n \rightarrow \infty$. – William Aug 9 '12 at 1:45