http://physics.stackexchange.com/questions/5416/how-does-an-altimeter-work/5418
# How does an altimeter work? In our aerodynamics class we recently discussed the concept of static and dynamic pressure and discussed their application to aircraft instruments. However, I do not understand properly how the altimeter can work. First of all a small recap of Bernoulli's law. The total pressure is given by $p_t = \frac{1}{2} \rho V^2 + p_s$ and is constant along a streamline. Consequently, the static pressure is given by $p_s = p_t - \frac{1}{2} \rho V^2$. Now imagine a low speed windtunnel. In a reservoir the speed is so low that it can be assumed to be $V=0$ m/s. Hence here the static pressure equals the total pressure, $p_t = p_s$. Now the fluid is accelerated along a streamline, and hence the static pressure should drop according to the relation given above. Now this is where my problem is. I read that the altimeter just measures the static pressure at flight level to obtain the pressure altitude. According to the explanation given above, however, this can not be, as the static pressure in the flow will be lower than the static pressure at the same altitude at zero velocity. - ## 3 Answers The static air pressure seen by the aircraft does not change with the aircraft's velocity. Your confusion is from a common misinterpretation of Bernoulli's principle. It is not true that a fluid's pressure will decrease simply by virtue of flowing faster. After all, this violates the idea that physics should be the same in all inertial frames. Here is a simple counterexample to the typical interpretation of the Bernoulli principle. Consider a tube of infinite length and uniform diameter with some gas sitting in it. Now consider various coordinate systems with a velocity in the direction of the tube. In these different coordinate systems, the velocity of the gas will be different, but we expect the force on the walls of the tube due to the fluid's pressure to be the same in all cases. (The tube is not going to rupture simply because of a choice of coordinate system!) Instead, Bernoulli's principle says that, in a given flow (say, along a streamline), a local increase in velocity is associated with local decrease in pressure. The canonical example is fluid flow through a tube with a constriction (a venturi). Quoting from the Wikipedia article for Bernoulli's principle: Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of mechanical energy in a fluid along a streamline is the same at all points on that streamline. This requires that the sum of kinetic energy and potential energy remain constant. ... If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest. Note the emphasis on relative changes occurring on a streamline. The specific flaw in your argument is here: Now the fluid is accelerated along a streamline, and hence the static pressure should drop according to the relation given above. 
In the wind tunnel, something has to do work to accelerate the air to the wind tunnel velocity, adding energy to the flow, which violates the conservation-of-energy assumption in Bernoulli's principle. - Aircraft altimeters use the pitot-static system http://en.wikipedia.org/wiki/Pitot-static_system In particular, the pitot tube measures the pressure of air ramming into the tube which, under normal circumstances, is equal to the total pressure. The static pressure is obtained through a static port. The difference is that the static port is "capped" so the air gets in only from the sides. The air isn't rammed into these side holes (at any velocity), so these holes stay balanced with the static pressure outside. See the Wikipedia text above. - Sorry, but your answer does not help me at all. I know that the static pressure is obtained through the static ports. However, this static pressure is the static pressure IN THE FLOW, meaning it should be lower than the ambient static pressure according to Bernoulli's law. But if that is the case, how can you deduce the altitude from that? That is my problem. – Ingo Feb 18 '11 at 14:09 I'd add, just to avoid confusion, that the altimeter uses the static pressure, while the airspeed indicator uses the difference between the static and pitot pressures. – Colin K Feb 18 '11 at 20:26 The static ports on the fuselage are located in positions where that impact is minimal (if any). As an interesting side note, switching to the alternate static source (located inside the cockpit of an unpressurized airplane, and used if the outside static ports become obstructed) results in a small increase in indicated altitude, as the pressure inside the cockpit is lower than outside. - "the pressure inside the cockpit is lower than outside" - Are you sure? Why? – nibot Feb 18 '11 at 18:09 I'm not sure of the reason why, but I am sure that he's right. I fly light aircraft, and I've definitely experienced this effect. – Colin K Feb 18 '11 at 20:23 Venturi effect. Outside air accelerates as it goes around the (curved) fuselage, creating a lower-pressure zone that includes the interior of the (unpressurized) airplane. It seems a little counter-intuitive, as you'd initially expect the inside of the fuselage to be at higher pressure. However, since all the surfaces around it are curved, you end up with low pressure all the way around, and the inside air seeks to balance that, resulting in lower interior pressure. – Brian Knoblauch Feb 21 '11 at 14:40
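The numbers behind these two instruments can be sketched quickly. The following is a minimal illustration, not how a real instrument is calibrated: it assumes the International Standard Atmosphere sea-level constants and incompressible flow, and the function names and sample readings are made up for this example. The altimeter inverts the barometric formula using only the static pressure; the airspeed indicator uses the pitot-static difference.

```python
import math

# Assumed ISA sea-level constants for this sketch
P0 = 101325.0   # Pa, sea-level static pressure
T0 = 288.15     # K, sea-level temperature
L = 0.0065      # K/m, temperature lapse rate
G = 9.80665     # m/s^2
R = 287.053     # J/(kg K), specific gas constant of air
RHO0 = 1.225    # kg/m^3, sea-level density

def pressure_altitude(p_static):
    """Invert the ISA barometric formula: altitude implied by a static-pressure reading."""
    return (T0 / L) * (1.0 - (p_static / P0) ** (R * L / G))

def indicated_airspeed(p_total, p_static):
    """Incompressible pitot relation: speed from the pitot-static difference."""
    return math.sqrt(2.0 * (p_total - p_static) / RHO0)

# Example readings: static port 79.5 kPa, pitot tube 82.0 kPa
ps, pt = 79500.0, 82000.0
print(f"pressure altitude ~ {pressure_altitude(ps):.0f} m")    # roughly 2000 m
print(f"indicated airspeed ~ {indicated_airspeed(pt, ps):.0f} m/s")
```

Note that the altitude function never sees the pitot pressure, which is the point made in the comments above: a well-placed static port reads (nearly) the ambient static pressure, so the altimeter is insensitive to airspeed.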
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359822273254395, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/17189/is-there-a-good-reason-why-a2b-b2a-1-when-ab1/17192
## Is there a good reason why a^{2b} + b^{2a} <= 1 when a+b=1? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) The following problem is not from me, yet I find it a big challenge to give a nice (in contrast to 'heavy computation') proof. The motivation for me to post it lies in its concise content. If $a$ and $b$ are nonnegative real numbers such that $a+b=1$, show that $a^{2b} + b^{2a}\le 1$. - 23 Scott, I don't get it. This is pretty much exactly what I'd expect from the title. If I were to title this post, I would write "elementary inequality", but "simple" seems close. And the phrasing is polite enough. – David Speyer Mar 5 2010 at 17:10 23 That said, this question might do better on artofproblemsolving.com . I could certainly bulldoze through it if I had to but, if there is a nice answer, then it is probably more likely to be found by a Putnam fellow or IMO medalist than a professional mathematician. – David Speyer Mar 5 2010 at 17:15 14 @David: what I don't like about the problem is that the questioner clearly knows the answer, so I have no incentive to think about it myself in this forum, which I percieve to be for people who are "stuck" or "need help". There are a gazillion resources for putnam-like problems on the web, but this place seems to generally discourage it, which is something I like about it. – Kevin Buzzard Mar 5 2010 at 18:18 7 Regarding the title, I guess this is just a pet rant by now, but I wish the community norms were: 1) Always ask a question in your title and 2) try to make that question as close as possible an approximation to the full question you're asking. Unfortunately everyone uses the title more as an email subject line. – Scott Morrison♦ Mar 5 2010 at 18:36 11 Rather than argue in comments I used my editing power to change the title to an actual question. This is the most common sort of edit I make to people's comments and I think it's a good thing to just do. – Noah Snyder Apr 13 2010 at 1:58 show 7 more comments ## 8 Answers Fixed now. I spent some time looking for some clever trick but the most unimaginative way turned out to be the best. So, as I said before, the straightforward Taylor series expansion does it in no time. Assume that $a>b$. Put $t=a-b=1-2b$. Step 1: ```$$ \begin{aligned} a^{2b}&=(1-b)^{1-t}=1-b(1-t)-t(1-t)\left[\frac{1}2b^2+\frac{1+t}{3!}b^3+\frac{(1+t)(2+t)}{4!}b^4+\dots\right] \\ &\le 1-b(1-t)-t(1-t)\left[\frac{b^2}{1\cdot 2}+\frac{b^3}{2\cdot 3}+\frac{b^4}{3\cdot 4}+\dots\right] \\& =1-b(1-t)-t(1-t)\left[b\log\frac 1{a}+b-\log\frac {1}a\right] \\ &=1-b(1-t^2)+(1-b)t(1-t)\log\frac{1}a=1-b\left(1-t^2-t(1+t)\log\frac 1a\right) \end{aligned} $$``` (in the last line we rewrote $(1-b)(1-t)=(1-b)2b=b(2-2b)=b(1+t)$) Step 2. We need the inequality `$e^{ku}\ge (1+u)(1+u+\dots+u^{k-1})+\frac k{k+1}u^{k+1}$` for $u\ge 0$. For $k=1$ it is just $e^u\ge 1+u+\frac{u^2}{2}$. For $k\ge 2$, the Taylor coefficients on the left are $\frac{k^j}{j!}$ and on the right $1,2,2,\dots,2,1$ (up to the order $k$) and then $\frac{k}{k+1}$. Now it remains to note that $\frac{k^0}{0!}=1$, $\frac{k^j}{j!}\ge \frac {k^j}{j^{j-1}}\ge k\ge 2$ for $1\le j\le k$, and $\frac{k^{k+1}}{(k+1)!}\ge \frac{k}{k+1}$. Step 3: Let $u=\log\frac 1a$. We've seen in Step 1 that $a^{2b}\le 1-b(1-t\mu)$ where $\mu=u+(1+u)t$. In what follows, it'll be important that $\mu\le\frac 1a-1+\frac 1a t=1$ (we just used $\log\frac 1a\le \frac 1a-1$ here. We have $b^{2a}=b(a-t)^t$. 
Thus, to finish, it'll suffice to show that $(a-t)^t\le 1-t\mu$. Taking negative logarithm of both sides and recalling that $\frac 1a=e^u$, we get the inequality ```$$ tu+t\log(1-te^u)^{-1}\ge \log(1-t\mu)^{-1} $$``` to prove. Now, note that, according to Step 2, ```$$ \begin{aligned} &\frac{e^{uk}}k\ge \frac{(1+u)(1+u+\dots+u^{k-1})}k+\frac{u^{k+1}}{k+1} \ge\frac{(1+u)(\mu^{k-1}+\mu^{k-2}u+\dots+u^{k-1})}k+\frac{u^{k+1}}{k+1} \\ &=\frac{\mu^k-u^k}{kt}+\frac{u^{k+1}}{k+1} \end{aligned} $$``` Multiplying by $t^{k+1}$ and adding up, we get $$t\log(1-te^u)^{-1}\ge -ut+\log(1-t\mu)^{-1}$$ which is exactly what we need. The end. P.S. If somebody is still interested, the bottom line is almost trivial once the top line is known. Assume again that $a>b$, $a+b=1$. Put $t=a-b$. ```$$ \begin{aligned} &\left(\frac{a^b}{2^b}+\frac{b^a}{2^a}\right)^2=(a^{2b}+b^{2a})(2^{-2b}+2^{-2a})-\left(\frac{a^b}{2^a}-\frac{b^a}{2^b}\right)^2 \\ &\le 1+\frac 14\{ [\sqrt 2(2^{t/2}-2^{-t/2})]^2-[(1+t)^b-(1-t)^a]^2\} \end{aligned} $$``` Now it remains to note that $2^{t/2}-2^{-t/2}$ is convex on $[0,1]$, so, interpolating between the endpoints, we get $\sqrt 2(2^{t/2}-2^{-t/2})\le t$. Also, the function $x\mapsto (1+x)^b-(1-x)^a$ is convex on $[0,1]$ (the second derivative is $ab[(1-x)^{b-2}-(1+x)^{a-2}]$, which is clearly non-negative). But the derivative at $0$ is $a+b=1$, so $(1+x)^b-(1-x)^a\ge x$ on $[0,1]$. Plugging in $x=t$ finishes the story. - 9 Bravo ! – Tom Leinster Apr 13 2010 at 4:26 2 @fedja: I think your proof meets the criteria. If you are interested, you may submit your proof featuring solving a conjecture in <On Some Inequalities With Power-Exponential Functions> by Vasile Cirtoaje. – Sunni Apr 13 2010 at 13:26 3 Thanks. Well, anybody who is interested can find it here, so I do not think it makes much sense to submit it anywhere else. I'll, probably, just send a PM to Vasile. – fedja Apr 13 2010 at 14:26 2 @fedja: That is good. For me, I learned a good proof, that is all I want for posting this problem. – Sunni Apr 13 2010 at 15:22 Thanks for including $a + b = 1/2$ – Will Jagy Apr 13 2010 at 16:20 show 3 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. This is too long to be a comment. This inequality appears as conjecture 4.8 in this article here. As you probably know, V.Cirtoaje has written many books on olympiad-style inequalities, so you see my reason for not believing that a simple solution exists. Optimization problems can sometimes (or most of the time actually) require "non-elegant" analysis (whatever that means to you) so this search is a bit pointless in my opinion. If an elegant solution is found to some nontrivial optimization/estimation problem then it is very likely to appear in an olympiad/competition, and AOPS is the right place to carry such discussions. - It now becomes a problem whether I should post this sort of problem here (my interest in this problem is simple: curiosity). Yes, generally problems from olympiad-style is not welcome here. I expect that new and fresh ideas may appear here. – Sunni Mar 5 2010 at 18:26 An argument that backs this up to some extent is the fact that the maximum occurs at (0,1), (1/2,1/2) and (1,0). What's more, the place where the minimum occurs is, if my calculation is correct, the place where aloga = 1-a, which doesn't fill one with confidence that a slick solution exists. 
Even so, I don't completely rule it out. – gowers Mar 5 2010 at 18:30 5 Wait, this was actually published as a conjecture? I'm certain this inequality is provable if you let me use a computer. For example, let $f[t]=t^{2(1−t)}$. Then, for $k/10000 \leq t \leq (k+1)/10000$ , we have $f[t]+f[1−t] \leq f[(k+1)/10000]+f[1−k/10000]$ . This proves the inequality for all $t$ not in $[0,3/10000]$ , $[4747/10000, 5253/10000]$ and $[9997/10000,1]$. Local arguments near 0, $1/2$ and 1 should finish the job. This is only a hard problem if you insist on a simple, non-machine-aided answer. – David Speyer Mar 5 2010 at 18:39 2 @DS: I'm sure that's what the author meant by stating it as a "conjecture", that there is no solution similar to the simple proofs of the other inequalities mentioned in the article. I just gave it as a counterargument to miwalin insisting that we find an elegant solution here. – Gjergji Zaimi Mar 5 2010 at 18:41 3 @GZ: If I were the referee, I would insist on the author distinguishing results that don't have proofs from results that don't have elementary proofs. But in any case, nice work finding that reference! That definitely makes it seem less likely that there is an elegant approach. – David Speyer Mar 5 2010 at 18:48 Since this question has been bumped up I would like to state what I think is its natural framework: We have the inequality $f(a,b) \leq 1$ for $a+b=k$ when $k$ lies between $\frac 12$ and $1$. Otherwise, the inequality is $f(a,b) \leq \frac {k^k}{2^{k-1}}$ (with $f(a,b) = a^{2b}+b^{2a}$). This version is not just more comprehensive but it illustrates the dichotomy in where the maximum occurs (at the symmetric point $(\frac k2,\frac k2)$ or at the boundary $(k,0)$). The two cases considered above ($k=\frac 12$ and $k=1$) are precisely the transitional ones. One can also get estimates from below (usually by the constant $1$ but in a small neighbourhood around the critical interval $[\frac 12,1]$ the sharp version involves values which are given implicitly as the solution of transcendental equations). (P.S. I had already given some of this information in a comment but, since it elicited no reaction, I have taken the liberty to repeat it here despite the fact that it isn't really an answer but, hopefully, does shed some light on the problem and its solution). - I think it gives a better sense of the geometry of the problem to ask whether, with non-negative $x,y$ such that $$\frac{1}{2} \leq x + y \leq 1,$$ we can prove that $$x^{2 y} + y^{2 x} \leq 1 ?$$ I'm not entirely certain where the second level curve component, through $\left( \frac{1}{4} , \frac{1}{4}\right),$ meets the axes. My programmable calculator seems to think that, if this arc does have $\left( \frac{1}{2} , 0 \right)$ as a limit point, the arc is tangent to the $x$-axis. I see, this was pointed out in a comment on March 17 by Yaakov Baruch, one needs to click on the "show 6 more comments." I think I will leave this here anyway. - Since $e^{at}t^t$ is convex for all $a\in\mathbb R$ (a boring but straightforvard computation shows that the second derivative is $[(a+\log t+1)^2+\frac 1t]e^{at}t^t$), including the point $(x,y)$ into the family $(xt,yt)$, we see that it is enough to check the boundary curves. I've done the upper one already. Anybody wants to try the lower one? – fedja Apr 13 2010 at 4:35 Silly non-answer removed. - UPDATE: this "proof" is WRONG! We want to prove that a^(2-2a)+(1-a)^(2a)<=1 for 0<=a<=1, or for 0<=a<=1/2 because of symmetry under a -> 1-a. Set f(a)=a^(2-2a). 
Then we want to prove that f(a)<=1-f(1-a), but since trivially f(a)<=a in [0,1/2], we have f(a)<=a<=1-a<=1-f(1-a). QED - 2 That last inequality is wrong. Try a=.1: 1-a = 0.9, but 1-f(1-a)=0.0208. – Douglas Zare Mar 15 2010 at 23:09 @Douglas: ooops... Can it be fixed? (I don't see how.) – Yaakov Baruch Mar 15 2010 at 23:32 I don't think that can be salvaged, since for $a=0.1, a \gt 1-f(1-a)$. – Douglas Zare Mar 15 2010 at 23:39 Since f(a)>=a in [1/2,1], the correct inequalities are f(a) <= a >= 1-f(1-a) in [0,1/2]! My apologies for the blunder and please someone give a down vote, since I can't. (Just saw you posted the same point.) – Yaakov Baruch Mar 15 2010 at 23:45 I don't see why. Just edit the answer to make it clear it is wrong. – Andrea Ferretti Mar 15 2010 at 23:52 EDIT: wrong proof attempt - 4 Your last sentence in the second paragraph is wrong... – Gjergji Zaimi Mar 5 2010 at 17:42 Thanks Gjergji for your comment. I've corrected the mistake above. – Markus Mar 15 2010 at 16:12 1 Ehm... he is refering to the sentence starting with "By assumption...". You reversed the inequalities. – Andrea Ferretti Mar 15 2010 at 16:31 Hi Andrea! The inequality is reversed since the number is negative: $0\leq a \leq b \wedge 0\geq x \geq y \Rightarrow ax\geq bx\geq by$. – Markus Mar 15 2010 at 19:11 2 No, what you have is that you cannot compare the products $x^{2y}(x^{2y} - 1)$ and $y^{2x}(y^{2x} - 1)$, because the inequalities you have go in opposite directions. That is, both products are negative, and you cannot tell which one is bigger in absolute value from $x^{2y}\geq y^{2x}\geq 0$ and $0\geq x^{2y}-1\geq y^{2x}-1$. – Andrea Ferretti Mar 15 2010 at 19:38 show 1 more comment This type of problem can be solved by the following approach: • Maximize the function a^(2b)+b^(2a) s.t. a+b=1. • We find that the function is maximized at a=b=1/2 and takes value 1. - 4 Did you try plugging in $a=0,b=1$? – Agol May 28 at 20:07
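Following up on David Speyer's point that the inequality is easy to confirm by machine, here is a quick numerical sanity check (not a proof; the grid resolution is an arbitrary choice for this sketch). It evaluates $a^{2b}+b^{2a}$ along $a+b=1$ and reports the maximum, which sits at $1$ and is attained at $a=0$, $a=\tfrac12$ and $a=1$.

```python
import numpy as np

# Evaluate f(a) = a^(2b) + b^(2a) with b = 1 - a on a fine grid of [0, 1].
a = np.linspace(0.0, 1.0, 1_000_001)
b = 1.0 - a
f = a ** (2.0 * b) + b ** (2.0 * a)   # endpoints are fine: 0**2 = 0 and 1**0 = 1

print("maximum over the grid:", f.max())   # never exceeds 1 (up to rounding)
print("attained at a =", a[np.argmax(f)])  # first maximizer on the grid
```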
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 80, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371009469032288, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=Divergence_Theorem&oldid=6483
# Divergence Theorem ### From Math Images Revision as of 16:01, 29 June 2009 by Bjohn1 (Talk | contribs) Fountain Flux The water flowing out of a fountain demonstrates an important property of vector fields, the Divergence Theorem. Field: Calculus Created By: Brendan John # Basic Description Consider a fountain like the one pictured, particularly its top layer. The rate at which water flows out of the fountain's spout is directly related to the amount of water that flows off the top layer. Because something like water isn't easily compressed the way air is, if more water is pumped out of the spout, then more water will have to flow over the boundaries of the top layer. This is essentially what the Divergence Theorem states: the total fluid being introduced into a volume is equal to the total fluid flowing out of the boundary of the volume. # A More Mathematical Explanation Note: understanding of this explanation requires: *Some multivariable calculus The Divergence Theorem in its pure form applies to vector fields. Flowing water can be considered a vector field because at each point the water has a position and a velocity vector. Faster-moving water is represented by a larger vector in our field. The divergence of a vector field is a measurement of the expansion or contraction of the field; if more water is being introduced, then the divergence is positive. Analytically, the divergence of a field $F$ is $\nabla\cdot\mathbf{F} =\partial{F_x}/\partial{x} + \partial{F_y}/\partial{y} + \partial{F_z}/\partial{z}$, where $F_i$ is the component of $F$ in the $i$ direction. Intuitively, if $F$ has a large positive rate of change in the $x$ direction, the partial derivative with respect to $x$ will be large, increasing the total divergence. The divergence theorem requires that we sum divergence over an entire volume. If this sum is positive, then the field must indicate some net movement out of the volume through its boundary, while if this sum is negative, the field must indicate some net movement into the volume through its boundary. We use the notion of flux, the flow through a surface, to quantify this movement through the boundary, which itself is a surface. The divergence theorem is formally stated as: $\iiint\limits_V\left(\nabla\cdot\mathbf{F}\right)\,dV=\iint\limits_{\partial V} \mathbf{F}\cdot\mathbf{n}\,dS .$ The left side of this equation is the sum of the divergence over the entire volume, and the right side is the sum, over the volume's boundary, of the component of the field perpendicular to the boundary, which is the flux through the boundary. ### Example of Divergence Theorem Verification The following example verifies that, given a volume and a vector field, the Divergence Theorem holds. Consider the vector field $F = \begin{bmatrix} x^2 \\ 0\\ 0\\ \end{bmatrix}$. For a volume, we will use a cube of edge length two, with vertices at (0,0,0), (2,0,0), (0,2,0), (0,0,2), (2,2,0), (2,0,2), (0,2,2), (2,2,2). This cube has a corner at the origin and all the points it contains are in positive regions. We begin by calculating the left side of the Divergence Theorem.
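The verification the page sets up can be completed in a few lines (this worked calculation is an addition, not part of the original page text). With $\mathbf{F} = (x^2, 0, 0)$ and the cube $V = [0,2]^3$, the divergence is $\nabla\cdot\mathbf{F} = 2x$, so the left side is

$$\iiint_V \nabla\cdot\mathbf{F}\,dV = \int_0^2\int_0^2\int_0^2 2x\,dx\,dy\,dz = \left[x^2\right]_0^2\cdot 2\cdot 2 = 16.$$

For the right side, $\mathbf{F}$ has only an $x$-component, so the flux through the four faces whose normals point in the $\pm y$ and $\pm z$ directions is zero. On the face $x=0$ the integrand is $\mathbf{F}\cdot\mathbf{n} = -x^2 = 0$, and on the face $x=2$ it is $x^2 = 4$ over a face of area $4$, giving

$$\iint_{\partial V} \mathbf{F}\cdot\mathbf{n}\,dS = 4\cdot 4 = 16,$$

which matches the volume integral, as the theorem requires.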
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9126169681549072, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/6831/how-can-i-overload-autocompletion-to-work-with-full-contexts
How can I overload autocompletion to work with full contexts? I would like for the autocomplete feature to search through contexts, for example if I have a symbol named ABCMyFunction, when I type A and press "cmd + shift + k" it will complete it. Edit To be clear, I don't want to have to type the path because it's usually very long, and I don't want to have to type the function name again, even if the path itself gets auto completed. I want the following: If I have these functions: ````Very`Long`Context`For`My`Function1 Very`Long`Context`For`My`Function2 ... ```` I want to be able to type Very` and then press CMD+Shift+k, to get a dropdown menu saying exactly ````Very`Long`Context`For`My`Function1 Very`Long`Context`For`My`Function2 ... ```` - I assume you mean `cmd-k`, since `cmd-shift-k` inserts a template. – Brett Champion Jun 15 '12 at 2:06 2 Answers This is obsolete in Mathematica 9, which automatically includes contexts in completions. Undocumented function: use at your own risk, subject to change in future versions, etc.... The function you're interested in is `FE`FC`. It's been around for a while (here's a Mathematica Journal article that references it, near the end) although it has changed argument structure at least once that I'm aware of. Anyway, here's the code I currently use to a similar end as what Mike would like. (Most of this is boilerplate from the original definition; the main difference is the use of a new function `FE`names`.) ````(* Nice little hack to have command completion (cmd-k) include contexts *) Unprotect[FE`FC]; ClearAll[FE`FC] FE`FC[FE`nameString_, FE`ignoreCase_:False] /; $Notebooks:= MathLink`CallFrontEnd[FrontEnd`CompletionsListPacket[ FE`names[FE`nameString<>"*"], FE`ignoreCase], FE`NoResult] FE`names[FE`str_, FE`ignoreCase_:False] := Join[FE`shortContexts[FE`str], Names[FE`str, IgnoreCase -> FE`ignoreCase]]; FE`shortContexts[FE`patt_]:= With[{FE`brettclen = Length[StringSplit[FE`patt, "`"]]}, Union[StringJoin[ Riffle[Take[#, Min[FE`brettclen, Length[#]]], "`", {2, -1, 2}]] & /@ StringSplit[Contexts[FE`patt], "`"]] ] Protect[FE`FC]; ```` The end result is that when I use command completion, I get contexts that match in addition to symbols. This isn't quite the same as Mike's request, since it gives the contexts one at a time: since otherwise the list can get a bit overwhelming. For example, if you typed `Int` and then tried to complete to `IntegerPart`, there's a factor of ten difference: ````In[5]:= {Length[Names["Int*"]] + Length[Contexts["Int*"]], Length[Names["Int*`*"]]} Out[5]= {41, 419} ```` - wow, I have somehow missed autocomplete for the last 5 years.... thanks for this! – tkott Jun 15 '12 at 15:25 Exactly! I knew I wasn't crazy. – M.R. Jun 15 '12 at 19:52 One option is to put the context on the path: ````$ContextPath = AppendTo[$ContextPath, "A`B`C`"] ```` - I don't think this works. After appending to the \$ContextPath, if you type "CMD+Shift+k", after typing "A", you don't get a list with ABC(functions) as items to complete, as you would if you typed "MapThre"... – M.R. Jun 14 '12 at 22:58 @Mike, expanding MyF+Shift+k works just fine. If you want something else, I suggest to state that more clearly in your question. – ruebenko Jun 14 '12 at 23:00 @Mike, just that we are on the same page: adding ABCMyFunction ABCMyFunction2 give a list of items. Maybe I don't get what you want. – ruebenko Jun 14 '12 at 23:07 Sorry for the confusion! I just updated the question a bit. – M.R. Jun 14 '12 at 23:10 lang-mma
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 7, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8830065131187439, "perplexity_flag": "middle"}
http://www.nag.com/numeric/CL/nagdoc_cl23/html/S/s15adc.html
# NAG Library Function Document nag_erfc (s15adc) ## 1  Purpose nag_erfc (s15adc) returns the value of the complementary error function, $\mathrm{erfc}\,x$. ## 2  Specification #include <nag.h> #include <nags.h> double nag_erfc (double x) ## 3  Description nag_erfc (s15adc) calculates an approximate value for the complement of the error function $\mathrm{erfc}\,x = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-u^2}\,du = 1 - \mathrm{erf}\,x .$ The approximation is based on a Chebyshev expansion. ## 4  References Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications ## 5  Arguments 1: x – double Input On entry: the argument $x$ of the function. ## 6  Error Indicators and Warnings None. ## 7  Accuracy If $\delta$ and $\epsilon$ are the relative errors in the argument and result, respectively, then in principle $\left|\epsilon\right| \simeq \left|\left(2x e^{-x^2}/\sqrt{\pi}\,\mathrm{erfc}\,x\right)\delta\right|$, so that the relative error in the argument, $x$, is amplified by a factor $\left(2x e^{-x^2}\right)/\left(\sqrt{\pi}\,\mathrm{erfc}\,x\right)$ in the result. Near $x=0$ this factor behaves as $2x/\sqrt{\pi}$ and hence the accuracy is largely determined by the machine precision. Also for large negative $x$, where the factor is $\sim x e^{-x^2}/\sqrt{\pi}$, accuracy is mainly limited by machine precision. However, for large positive $x$, the factor becomes $\sim 2x^2$ and to an extent relative accuracy is necessarily lost. The absolute accuracy $E$ is given by $E \simeq \left(2x e^{-x^2}/\sqrt{\pi}\right)\delta$, so absolute accuracy is guaranteed for all $x$. ## 8  Further Comments None. ## 9  Example The following program reads values of the argument $x$ from a file, evaluates the function at each value of $x$ and prints the results. ### 9.1  Program Text Program Text (s15adce.c) ### 9.2  Program Data Program Data (s15adce.d) ### 9.3  Program Results Program Results (s15adce.r)
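The amplification factor from the Accuracy section is easy to tabulate. The sketch below uses Python's math.erfc as a stand-in for nag_erfc (an assumption made purely for illustration; it is not the NAG routine) and prints the factor $2xe^{-x^2}/(\sqrt{\pi}\,\mathrm{erfc}\,x)$ at a few arguments, showing that it is tiny for large negative $x$, about $2x/\sqrt{\pi}$ near zero, and grows like $2x^2$ for large positive $x$.

```python
import math

def amplification(x):
    """Relative-error amplification factor 2*x*exp(-x^2) / (sqrt(pi) * erfc(x))."""
    return 2.0 * x * math.exp(-x * x) / (math.sqrt(math.pi) * math.erfc(x))

for x in (-5.0, -1.0, 0.0, 0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"x = {x:6.1f}   erfc(x) = {math.erfc(x):.6e}   factor = {amplification(x):9.3f}")
# For x = 10 the factor is ~200, i.e. ~2*x^2: relative accuracy degrades for large positive x.
```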
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 19, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6159538626670837, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/69170/question-on-triangles/69173
Question on Triangles In a right triangle, the length of hypotenuse is $c$. The centers of three circles of radius $c/5$ are found at its vertices. Find the radius of the fourth circle which touches the three given circles and doesn't enclose them. - I found the radius. Now what? – The Chaz 2.0 Oct 2 '11 at 1:51 @TheChaz: Can you please write proof – Ramana Venkata Oct 2 '11 at 1:54 Why the upvote? (This is me "telling", btw) – The Chaz 2.0 Oct 2 '11 at 5:08 @TheChaz: You parenthesized remark is completely cryptic. Your question is not: I upvoted it since it seems like a reasonable question. – Michael Hardy Oct 2 '11 at 13:36 @Michael: I see. – The Chaz 2.0 Oct 2 '11 at 13:54 show 1 more comment 2 Answers The midpoint of the hypotenuse is at a distance of $c/2$ from either of the endpoints of the hypotenuse. It's not hard to show that it's also at that same distance from the vertex of the right angle. Therefore a circle centered there that is just big enough to touch the two circles of radius $c/5$ centered at the endpoints of the hypotenuse will also be just big enough to touch that third circle of radius $c/5$ centered at the vertex of the right angle. So do the arithmetic. - Hint: Show that the center of the fourth circle is equidistant to each vertex of the triangle. Hence show that the center of the fourth circle is the circumcenter of the triangle. As the circumradius of a right triangle with hypotenuse $c$ is $c/2$, the radius of the fourth circle is $c/2-c/5$. -
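To make the answers' conclusion explicit (the final simplification is added here for convenience): the circumcenter of a right triangle is the midpoint of the hypotenuse, at distance $c/2$ from each vertex, so the circle centered there that just touches the three circles of radius $c/5$ has radius

$$r = \frac{c}{2} - \frac{c}{5} = \frac{3c}{10}.$$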
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9208300709724426, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/11830/different-results-for-maximumlikelihood-depending-on-method
# Different results for MaximumLikelihood depending on method? When I used this command: ````EstimatedDistribution[data, BinomialDistribution[n, p], ParameterEstimator -> {"MaximumLikelihood", Method -> "NMaximize"}] ```` I get: `BinomialDistribution[12, 0.00842065]` with error message: ````FindRoot::cvmit: Failed to converge to the requested accuracy or precision within 100 iterations ```` versus: ````EstimatedDistribution[data, BinomialDistribution[n, p], ParameterEstimator -> {"MaximumLikelihood"}] ```` I get: `BinomialDistribution[4, 0.0252702]` What are actually the differences between the two commands? Which answer should I use? - – Searke Oct 10 '12 at 13:32 The answers you are getting strongly suggest your data consist of counts of rare events. The distribution will be approximately Poisson, which (as the limiting value of Binomial$(n,p)$ for $np$ held constant) is going to be very difficult to differentiate from Binomial distributions with large $n$ (and vanishingly small $p$). The standard errors for $n$ and $p$ ought to tell you that. So the more important issue for you is not which command to use, but whether it's wise to trust the output of any such command here. Perhaps you should be fitting a Poisson distribution from the outset... – whuber Oct 22 '12 at 15:33 ## 1 Answer It seems to be a matter of which method -- FindMaximum vs. NMaximize -- is used to maximize the likelihood function (or, more probably, the log-likelihood function). It is possible to get an analytic form of the likelihood function of the binomial distribution given some data: ````SeedRandom[1]; nSamplePoints = 10; data = RandomVariate[ BinomialDistribution[10, 0.1], nSamplePoints ]; likelihood = Simplify[ Likelihood[BinomialDistribution[n, p], data] , Assumptions -> {n >= 2}]; ```` Then you can plot this function with ContourPlot (of course $n$ is really an integer, but that's a detail for this purpose). How to ascertain which combination of $n$ and $p$ gives the maximum likelihood (the mountain top on the contour plot) is a huge topic, and there is a whole raft of documentation on function maximization in Mathematica's help. If you have the analytic form, you usually want to use the default method (which then uses the Automatic method of FindMaximum) instead of NMaximize, which doesn't have any knowledge of the gradient of the function to aid it in climbing up the hill but still does the best it can. In your case, I think the issue is even more acute since the likelihood function is so sharp given the very low value for p. (I suspect it's overwhelmingly zeros in your data.) - yes, the proportion of zeros in my data is very high, more than 90%. – yyasinta Oct 18 '12 at 10:20
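To see why the two calls can land on such different answers, it helps to profile the log-likelihood over $n$: for rare-event data the surface has a long, nearly flat ridge of $(n,p)$ pairs with $np$ roughly constant, so different optimizers can stop at very different points along it. Below is a small cross-check written in Python rather than Mathematica (the data vector is invented for illustration); it uses the fact that, for a binomial with fixed $n$, the maximizing $p$ is the sample mean divided by $n$.

```python
import numpy as np
from scipy.stats import binom

# Hypothetical rare-event data: overwhelmingly zeros, as in the question.
data = np.array([0] * 92 + [1] * 7 + [2] * 1)

def profile_loglik(n):
    """Best p for this n (p_hat = mean/n) and the log-likelihood achieved there."""
    p_hat = data.mean() / n
    return p_hat, binom.logpmf(data, n, p_hat).sum()

for n in range(2, 13):
    p_hat, ll = profile_loglik(n)
    print(f"n = {n:2d}   p_hat = {p_hat:.5f}   log-likelihood = {ll:.4f}")
# The log-likelihood barely changes beyond small n, which is why different
# maximization methods can return quite different (n, p) estimates.
```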
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8757997751235962, "perplexity_flag": "middle"}
http://www.chemeurope.com/en/encyclopedia/Pressure_head.html
# Pressure head Pressure head is a term used in fluid mechanics to represent the internal energy of a fluid due to the pressure exerted on its container. It may also be called static pressure head or simply static head (but not static head pressure). It is mathematically expressed as: $\psi = \frac{p}{\gamma} = \frac{p}{\rho \, g}$ where ψ is pressure head (length, typically in units of m); p is fluid pressure (force per unit area, often in Pa); γ is the specific weight (weight per unit volume, typically in N·m⁻³); ρ is the density of the fluid (mass per unit volume, typically in kg·m⁻³); and g is the acceleration due to gravity (rate of change of velocity, given in m·s⁻²). ## Practical uses for pressure head Fluid flow is measured with a wide variety of instruments. The venturi meter in the diagram on the right shows two columns of a measurement fluid at different heights. The height of each column of fluid is proportional to the pressure of the fluid. To demonstrate a classical measurement of pressure head, we could hypothetically replace the working fluid with another fluid having different physical properties. For example, if the original fluid was water and we replaced it with mercury at the same pressure, we would expect to see a rather different value for pressure head. In fact, the specific weight of water is 9.8 kN/m³ and the specific weight of mercury is 133 kN/m³. So, for any particular measurement of pressure head, the column of water will be about 13.6 times taller than the column of mercury would be (133/9.8 = 13.6). So if a water-column meter reads "13.6 cm H2O," then a coinciding measurement is "1.00 cm Hg." This example demonstrates why there is a bit of confusion surrounding pressure head and its relationship to pressure. Scientists frequently use columns of water (or mercury) to measure pressure, since for a given fluid, pressure head is proportional to pressure. Measuring pressure in units of "mm of mercury" or "inches of water" makes sense for instrumentation, but these raw measurements of head must frequently be converted to more convenient pressure units using the equation above, solving for pressure. In summary, pressure head is a measurement of length, which can be converted to units of pressure, as long as strict attention is paid to the density of the measurement fluid and the local value of g. ## Implications for gravitational anomalies on $\psi\,$ We would normally use pressure head calculations in areas in which g is constant. However, if the gravitational field fluctuates, we can prove that pressure head fluctuates with it. • If we consider what would happen as gravity decreases, we would expect the fluid in the venturi meter shown above to withdraw from the pipe up into the vertical columns. Pressure head is increased. • In the case of zero gravity, the pressure head approaches infinity. Fluid in the pipe may "leak out" of the top of the vertical columns (assuming p > 0). • To simulate negative gravity, we could turn the venturi meter shown above upside down. In this case gravity is negative, and we would expect the fluid in the pipe to "pour out" of the vertical columns. Pressure head is negative (assuming p > 0).
• If p < 0 and g > 0, we observe that the pressure head is also negative, and the ambient air is sucked into the columns shown in the venturi meter above. This is called a siphon, and is caused by a partial vacuum inside the vertical columns. In many venturis, the column on the left has fluid in it (ψ > 0), while only the column on the right is a siphon (ψ < 0). • If p < 0 and g < 0, we observe that the pressure head is again positive, predicting that the venturi meter shown above would look the same, only upside down. In this situation, gravity causes the working fluid to plug the siphon holes, but the fluid doesn't leak out because the ambient pressure is greater than the pressure in the pipe. • The above situations imply that the Bernoulli equation, from which we obtain static pressure head, is extremely versatile. ## See also • Derivations of Bernoulli equation • Hydraulic head, which includes a component of pressure head ## References See Engineering Toolbox article on Specific Weight See Engineering Toolbox article on Static Pressure Head
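Since everything in the article reduces to the one-line conversion ψ = p/(ρg), the water-versus-mercury comparison is easy to reproduce numerically. This is a minimal sketch; the densities and the value of g are rounded standard values assumed for illustration.

```python
# Pressure head: psi = p / (rho * g)
G = 9.81               # m/s^2
RHO_WATER = 1000.0     # kg/m^3
RHO_MERCURY = 13546.0  # kg/m^3, near room temperature

def head_m(pressure_pa, rho):
    """Height of a fluid column that balances the given pressure."""
    return pressure_pa / (rho * G)

p = 1333.0  # Pa, roughly the pressure of a 10 mm mercury column
h_water = head_m(p, RHO_WATER)
h_mercury = head_m(p, RHO_MERCURY)
print(f"water column  : {100 * h_water:.2f} cm")
print(f"mercury column: {100 * h_mercury:.2f} cm")
print(f"ratio: {h_water / h_mercury:.1f}")  # ~13.5, matching the 133/9.8 ratio in the article
```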
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9109175205230713, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/92568/perfect-matchings-in-certain-classes-of-hypergraphs
## Perfect matchings in certain classes of hypergraphs While doing research I came upon the following problem: Given an $r$-partite, $r$-uniform hypergraph $H$ (an $r$-graph: each edge contains $r$ vertices) that is $k$-regular (every vertex has degree $k$) and $n$-balanced (each partition class contains $n$ vertices), does $H$ contain a perfect matching (an independent set of edges that covers all vertices)? In the literature I've found results by Aharoni, Haxell, Alon, Rödl and others with sufficient conditions, but none seem to use these hypotheses on the graph. Any suggestions or pointers to the literature would be greatly appreciated. EDIT: Instead of asking whether $H$ contains a perfect matching, what would be sufficient conditions for $H$ to have a perfect matching? More specifically, what invariants are important in this kind of problem? So far I've seen $\delta(H)$ and $|H|$ or $|V|$. I wonder if there are sufficient conditions which do not involve hypotheses on the size of the partition classes or of the graph. - ## 1 Answer The answer is no. That is, already for $n=3$, $r = 3$ and $k = 2$ there is an $r$-partite $r$-uniform $k$-regular hypergraph that doesn't contain a perfect matching. Let $v_1,v_2,v_3$ be the first part, $u_1,u_2,u_3$ the second part and $w_1,w_2,w_3$ the third. Let the hyperedges be $$(v_1,u_1,w_1),(v_1,u_2,w_2),(v_2,u_2,w_1),(v_2,u_3,w_3),(v_3,u_3,w_3),(v_3,u_1,w_2) .$$ It is easy to verify that the resulting hypergraph is 2-regular. However, there is no perfect matching. To see this, consider $w_1$: it belongs only to the first and third edges, so a perfect matching must use one of them. If we take the first edge $(v_1,u_1,w_1)$, then $u_2$ must be covered by the second or the third edge; but the second shares $v_1$ with the first, and the third shares $w_1$. If instead we take the third edge $(v_2,u_2,w_1)$, then $v_1$ must be covered by the first or the second edge; but the first shares $w_1$ with the third, and the second shares $u_2$. Either way some vertex is left uncoverable, hence there is no perfect matching. -
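The counterexample is small enough to verify exhaustively. The sketch below (added for illustration; the edge list is copied from the answer) brute-forces every choice of three edges and confirms that no three of them are pairwise disjoint and cover all nine vertices, and that the hypergraph really is 2-regular.

```python
from collections import Counter
from itertools import combinations

# Edges of the 3-partite, 3-uniform, 2-regular hypergraph from the answer.
edges = [
    ("v1", "u1", "w1"), ("v1", "u2", "w2"), ("v2", "u2", "w1"),
    ("v2", "u3", "w3"), ("v3", "u3", "w3"), ("v3", "u1", "w2"),
]
vertices = {v for e in edges for v in e}  # 9 vertices in total

# A perfect matching here is 3 edges covering 9 distinct vertices.
perfect_matchings = [
    triple for triple in combinations(edges, 3)
    if len({v for e in triple for v in e}) == len(vertices)
]
print("perfect matchings found:", len(perfect_matchings))  # expect 0

degrees = Counter(v for e in edges for v in e)
print("2-regular:", set(degrees.values()) == {2})           # expect True
```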
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9315966367721558, "perplexity_flag": "head"}
http://www.reference.com/browse/principle-complementarity
Definitions # Copenhagen interpretation The Copenhagen interpretation is an interpretation of quantum mechanics. A key feature of quantum mechanics is that the state of every particle is described by a wavefunction, which is a mathematical representation used to calculate the probability for it to be found in a location, or state of motion. In effect, the act of measurement causes the calculated set of probabilities to "collapse" to the value defined by the measurement. This feature of the mathematical representations is known as wavefunction collapse. Early twentieth century studies of the physics of very small-scale phenomena led to the Copenhagen interpretation. The new experiments led to the discovery of phenomena that could not be predicted on the basis of classical physics, and to new empirical generalizations (theories) that described and predicted very accurately those micro-scale phenomena so recently discovered. These generalizations, these models of the real world being observed at this micro scale, could not be squared easily with the way objects are observed to behave on the macro scale of everyday life. The predictions they offered often appeared counter-intuitive to observers. Indeed, they touched off much consternation -- even in the minds of their discoverers. The Copenhagen interpretation consists of attempts to explain the experiments and their mathematical formulations in ways that do not go beyond the evidence to suggest more (or less) than is actually there. The work of relating the experiments and the abstract mathematical and theoretical formulations that constitute quantum physics to the experience that all of us share in the world of everyday life fell first to Niels Bohr and Werner Heisenberg in the course of their collaboration in Copenhagen around 1927. Bohr and Heisenberg stepped beyond the world of empirical experiments and pragmatic predictions of such phenomena as the frequencies of light emitted under various conditions. In the earlier work of Planck, Einstein and Bohr himself, discrete quantities of energy had been postulated in order to avoid paradoxes of classical physics when pushed to extremes. Bohr and Heisenberg now found a new world of energy quanta, entities that fit neither the classical ideas of particles nor the classical ideas of waves. Elementary particles behaved in ways highly regular when many similar interactions were analyzed yet, highly unpredictable when one tried to predict things like individual trajectories through a simple physical apparatus. The new theories were inspired by laboratory experiments and based on the idea that matter has both wave and particle aspects. They predict that knowledge of the position of a particle prevents us from knowing its direction and velocity, and vice-versa. Also, the very fact of detecting a small object (such as a photon or electron) passing through an apparatus by one of two paths, can change the end result of the experiment when that small entity reaches a detection screen. The results of their own burgeoning understanding disoriented Bohr and Heisenberg, and some physicists concluded that human observation of a microscopic event changes the reality of the event. The Copenhagen interpretation was a composite statement about what could and could not be legitimately stated in common language to complement the statements and predictions that could be made in the language of instrument readings and mathematical operations. 
In other words, it attempted to answer the question, "What do these amazing experimental results really mean?" ## Overview There is no definitive statement of the Copenhagen Interpretation, since it consists of the views developed by a number of scientists and philosophers in the early decades of the 20th century. Thus, there are a number of ideas that have been associated with the Copenhagen interpretation. Asher Peres remarked that very different, sometimes opposite, views are presented as the Copenhagen interpretation by different authors. ### Principles 1. A system is completely described by a wave function $\psi$, which represents an observer's knowledge of the system. (Heisenberg) 2. The description of nature is essentially probabilistic. The probability of an event is related to the square of the amplitude of the wave function related to it. (Max Born) 3. Heisenberg's uncertainty principle states the observed fact that it is not possible to know the values of all of the properties of the system at the same time; those properties that are not known with precision must be described by probabilities. 4. Complementarity principle: matter exhibits a wave-particle duality. An experiment can show the particle-like properties of matter, or wave-like properties, but not both at the same time. (Niels Bohr) 5. Measuring devices are essentially classical devices, and measure classical properties such as position and momentum. 6. The correspondence principle of Bohr and Heisenberg: the quantum mechanical description of large systems should closely approximate the classical description. ## The meaning of the wave function The Copenhagen Interpretation denies that any wave function is anything more than an abstraction, or is at least non-committal about its being a discrete entity or a discernible component of some discrete entity. There are some who say that there are objective variants of the Copenhagen Interpretation that allow for a "real" wave function, but it is questionable whether that view is really consistent with positivism and/or with some of Bohr's statements. Niels Bohr emphasized that science is concerned with predictions of the outcomes of experiments, and that any additional propositions offered are not scientific but rather metaphysical. Bohr was heavily influenced by positivism. On the other hand, Bohr and Heisenberg were not in complete agreement, and held different views at different times. Heisenberg in particular was prompted to move towards realism. Even if the wave function is not regarded as real, there is still a divide between those who treat it as definitely and entirely subjective, and those who are non-committal or agnostic about the subject. An example of the agnostic view is given by von Weizsäcker, who, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist". He suggested instead that the Copenhagen interpretation follows the principle: "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes." The subjective view, that the wave function is merely a mathematical tool for calculating the probabilities of specific experiments, is a similar approach to the Ensemble interpretation. ## The nature of collapse All versions of the Copenhagen interpretation include at least a formal or methodological version of wave function collapse, in which unobserved eigenvalues are removed from further consideration.
(In other words, Copenhagenists have never rejected collapse, even in the early days of quantum physics, in the way that adherents of the Many-worlds interpretation do.) In more prosaic terms, those who hold to the Copenhagen understanding are willing to say that a wave function involves the various probabilities that a given event will proceed to certain different outcomes. But when one or another of those more- or less-likely outcomes becomes manifest, the other probabilities cease to have any function in the real world. So if an electron passes through a double-slit apparatus, there are various probabilities for where on the detection screen that individual electron will hit. But once it has hit, there is no longer any probability whatsoever that it will hit somewhere else. Many-worlds interpretations say that an electron hits wherever there is a possibility that it might hit, and that each of these hits occurs in a separate universe. An adherent of the subjective view, that the wave function represents nothing but knowledge, would take an equally subjective view of "collapse". Some argue that the concept of collapse of a "real" wave function was introduced by John von Neumann in 1932 and was not part of the original formulation of the Copenhagen Interpretation. ## Acceptance among physicists According to a poll at a quantum mechanics workshop in 1997, the Copenhagen interpretation is the most widely accepted specific interpretation of quantum mechanics, followed by the many-worlds interpretation. Although current trends show substantial competition from alternative interpretations, throughout much of the twentieth century the Copenhagen interpretation had strong acceptance among physicists. Astrophysicist and science writer John Gribbin describes it as having fallen from primacy after the 1980s. ## Consequences The nature of the Copenhagen Interpretation is exposed by considering a number of experiments and paradoxes. 1. Schrödinger's cat: A cat is put in a box with a radioactive substance and a radiation detector (such as a Geiger counter). The half-life of the substance is the period of time in which there is a 50% chance that a particle will be emitted (and detected). The detector is activated for that period of time. If a particle is detected, a poisonous gas will be released and the cat killed. Schrödinger set this up as what he called a "ridiculous case" in which "The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts." He resisted an interpretation that would so naively accept as valid such a "blurred model" for representing reality. How can the cat be both alive and dead? The Copenhagen Interpretation: The wave function reflects our knowledge of the system. The wave function $(|\text{dead}\rangle + |\text{alive}\rangle)/\sqrt{2}$ simply means that there is a 50-50 chance that the cat is alive or dead. 2. Wigner's friend: Wigner puts his friend in with the cat. The external observer believes the system is in the state $(|\text{dead}\rangle + |\text{alive}\rangle)/\sqrt{2}$. His friend, however, is convinced that the cat is alive, i.e. for him, the cat is in the state $|\text{alive}\rangle$. How can Wigner and his friend see different wave functions? The Copenhagen Interpretation: Wigner's friend highlights the subjective nature of probability. Each observer (Wigner and his friend) has different information and therefore different wave functions.
The distinction between the "objective" nature of reality and the subjective nature of probability has led to a great deal of controversy. Cf. Bayesian versus Frequentist interpretations of probability.

3. Double-slit diffraction. Light passes through double slits and onto a screen, resulting in a diffraction pattern. Is light a particle or a wave? The Copenhagen Interpretation: Light is neither. A particular experiment can demonstrate particle (photon) or wave properties, but not both at the same time (Bohr's complementarity principle). The same experiment can in theory be performed with any physical system: electrons, protons, atoms, molecules, viruses, bacteria, cats, humans, elephants, planets, etc. In practice it has been performed for light, electrons, buckminsterfullerene, and some atoms. Due to the smallness of Planck's constant it is practically impossible to realize experiments that directly reveal the wave nature of any system bigger than a few atoms; but, in general, quantum mechanics considers all matter as possessing both particle and wave behaviors. The larger systems (like viruses, bacteria, cats, etc.) are considered as "classical" ones, but only as an approximation.

4. EPR paradox. Entangled "particles" are emitted in a single event. Conservation laws ensure that the measured spin of one particle must be the opposite of the measured spin of the other, so that if the spin of one particle is measured, the spin of the other particle is now instantaneously known. The most discomfiting aspect of this paradox is that the effect is instantaneous, so that something that happens in one galaxy could cause an instantaneous change in another galaxy. But, according to Einstein's theory of special relativity, no information-bearing signal or entity can travel at or faster than the speed of light, which is finite. Thus, it seems as if the Copenhagen interpretation is inconsistent with special relativity. The Copenhagen Interpretation: Assuming wave functions are not real, wave function collapse is interpreted subjectively. The moment one observer measures the spin of one particle, he knows the spin of the other. However, another observer cannot benefit until the results of that measurement have been relayed to him, at less than or equal to the speed of light. Copenhagenists claim that interpretations of quantum mechanics where the wave function is regarded as real have problems with EPR-type effects, since they imply that the laws of physics allow for influences to propagate at speeds greater than the speed of light. However, proponents of the Many worlds and the Transactional interpretations deny that their theories are fatally non-local. The claim that EPR effects violate the principle that information cannot travel faster than the speed of light can be avoided by noting that they cannot be used for signaling, because neither observer can control, or predetermine, what he observes, and therefore cannot manipulate what the other observer measures. Relativistic difficulties about establishing which measurement occurred first also undermine the idea that one observer is causing what the other is measuring.

## Criticisms

The completeness of quantum mechanics (thesis 1) was attacked by the Einstein-Podolsky-Rosen thought experiment, which was intended to show that quantum physics could not be a complete theory. Experimental tests of Bell's inequality using particles have supported the quantum mechanical prediction of entanglement.
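As a rough illustration of the quantum-mechanical prediction that these Bell-type experiments probe, here is a minimal numpy sketch. It is not part of the original article; the singlet state and the measurement angles below are the standard textbook choices, not anything specified above.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Singlet state (|01> - |10>)/sqrt(2) of the two entangled particles
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin measurement along a direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(theta_a, theta_b):
    """Correlation <psi| A(theta_a) (x) B(theta_b) |psi> of the two outcomes."""
    return np.real(psi.conj() @ np.kron(spin(theta_a), spin(theta_b)) @ psi)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2)
```

Any local hidden-variable model obeys $|S| \le 2$ for this CHSH combination, so the quantum value of $2\sqrt{2}$ is what the experiments cited above distinguish.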
The Copenhagen Interpretation gives special status to measurement processes without clearly defining them or explaining their peculiar effects. In his article entitled "Criticism and Counterproposals to the Copenhagen Interpretation of Quantum Theory," countering the view of Alexandrov that (in Heisenberg's paraphrase) "the wave function in configuration space characterizes the objective state of the electron," Heisenberg says:

Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory. -- Heisenberg, Physics and Philosophy, p. 137

Many physicists and philosophers have objected to the Copenhagen interpretation, both on the grounds that it is non-deterministic and that it includes an undefined measurement process that converts probability functions into non-probabilistic measurements. Einstein's comments "I, at any rate, am convinced that He (God) does not throw dice." and "Do you really think the moon isn't there if you aren't looking at it?" exemplify this. Bohr, in response, said "Einstein, don't tell God what to do".

Steven Weinberg, in "Einstein's Mistakes" (Physics Today, November 2005, page 31), said:

All this familiar story is true, but it leaves out an irony. Bohr's version of quantum mechanics was deeply flawed, but not for the reason Einstein thought. The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe. But these rules are expressed in terms of a wave function (or, more precisely, a state vector) that evolves in a perfectly deterministic way. So where do the probabilistic rules of the Copenhagen interpretation come from? Considerable progress has been made in recent years toward the resolution of the problem, which I cannot go into here. It is enough to say that neither Bohr nor Einstein had focused on the real problem with quantum mechanics. The Copenhagen rules clearly work, so they have to be accepted. But this leaves the task of explaining them by applying the deterministic equation for the evolution of the wave function, the Schrödinger equation, to observers and their apparatus.

The problem of thinking in terms of classical measurements of a quantum system becomes particularly acute in the field of quantum cosmology, where the quantum system is the universe.

## Alternatives

The Ensemble Interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right". Consciousness causes collapse is often confused with the Copenhagen interpretation. If the wave function is regarded as ontologically real, and collapse is entirely rejected, a many worlds theory results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained.
Dropping the principle that the wave function is a complete description results in a hidden variable theory. Many physicists have subscribed to the null interpretation of quantum mechanics summarized by the sentence "Shut up and calculate!". While it is sometimes attributed to Paul Dirac or Richard Feynman, it is in fact due to David Mermin. A list of alternatives can be found at Interpretation of quantum mechanics.

## Further reading

• G. Weihs et al., Phys. Rev. Lett. 81 (1998) 5039
• M. Rowe et al., Nature 409 (2001) 791
• J.A. Wheeler & W.H. Zurek (eds), Quantum Theory and Measurement, Princeton University Press 1983
• A. Petersen, Quantum Physics and the Philosophical Tradition, MIT Press 1968
• H. Margenau, The Nature of Physical Reality, McGraw-Hill 1950
• M. Chown, Forever Quantum, New Scientist No. 2595 (2007) 37
• T. Schürmann, A Single Particle Uncertainty Relation, Acta Physica Polonica B39 (2008) 587

## External links

• Copenhagen Interpretation (Stanford Encyclopedia of Philosophy)
• Physics FAQ section about Bell's inequality
• The Copenhagen Interpretation of Quantum Mechanics
• Preprint of Afshar Experiment
• This Quantum World: What is quantum mechanics trying to tell us about the nature of Nature?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9333745837211609, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/102492/diffeq-with-various-quadratic-terms
# DiffEQ with various quadratic terms I have the following DiffEQ and would like to solve it, $$2F\left( \frac{{d}^{2}}{d{z}^{2}}F\right) -{\left( \frac{d}{dz}F\right) }^{2}+2F\left( \frac{{d}^{2}}{d{y}^{2}}F\right) -{\left( \frac{d}{dy}F\right) }^{2}+2F\left( \frac{{d}^{2}}{d{x}^{2}}F\right) -{\left( \frac{d}{dx}F\right) }^{2}=0$$ Any help would be appreciated, Thanks. - You could have just said $F\,\Delta F=\frac{1}{2}\|\nabla F\;\|^2$. – anon Jan 26 '12 at 2:36 ## 1 Answer Note that constant $F$ works. Otherwise, assuming $F > 0,$ your condition is just $$\Delta \sqrt F = 0.$$ So you want some positive harmonic function $G$ and then $F = G^2.$ Similar for $F < 0, \; F = - H^2, \; \; \Delta \sqrt {-F} = 0.$ From Liouville's theorem, if you want a solution on all of $\mathbb R^3$ then $G$ and $F$ are constant. So, if you start with a nonconstant harmonic $G,$ such as linear, along a finite set of surfaces $G$ actually becomes $0$ and everything goes sideways. EDIT, Thursday: Looking again, it is not so bad when the function is $0,$ as long as we are squaring, so the gradient is also $0.$ So, a solution, and possibly all solutions, are some real constant $c$ and harmonic $W,$ then $$F = c W^2.$$ -
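A quick symbolic check of the accepted answer (not part of the original thread): take any harmonic function, here the hypothetical choice $W = x^2 + y^2 - 2z^2$, and verify with sympy that $F = cW^2$ satisfies the equation.

```python
import sympy as sp

x, y, z, c = sp.symbols('x y z c')

W = x**2 + y**2 - 2*z**2      # harmonic: its Laplacian is 2 + 2 - 4 = 0
F = c * W**2

def laplacian(f):
    return sum(sp.diff(f, v, 2) for v in (x, y, z))

grad_sq = sum(sp.diff(F, v)**2 for v in (x, y, z))
residual = 2*F*laplacian(F) - grad_sq   # left-hand side of the equation in the question

print(sp.simplify(laplacian(W)))   # 0
print(sp.simplify(residual))       # 0, so F = c*W**2 is a solution
```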
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9592902064323425, "perplexity_flag": "head"}
http://mathoverflow.net/questions/37955/confidence-interval-for-polynomial-fitting
## Confidence Interval For Polynomial Fitting

I'm programming an n-dimensional polynomial fitting function. It uses the basic concept of least squares and a design matrix. For example, for a quadratic fit on 2d data, a row in the design matrix is $X = (1, x, x^2)$, and this is the matrix formula I'm using: $(X^T X)^{-1}(X^T Y)$, which gives the coefficients of $Ax^2+Bx+C$. This is nothing special or complicated. I'm at the point, however, where I'd like to introduce confidence intervals. How do I compute the confidence intervals from this regression technique?

- Normal equations, huh? If you're going to stick with that approach, please please hook up a condition estimator as a sanity check; this is the approach most sensitive to roundoff error. QR decomposition is often a better choice. As an aside, the variances of the parameters are usually obtained from the diagonal entries of the variance-covariance matrix $\left(\mathbf{X}^T\mathbf{X}\right)^{-1}$; I would suppose any expression for the CIs would involve these as well. – J. M. Sep 7 2010 at 9:11

## 1 Answer

A simple place to start: Look at the residuals when you subtract out your putative function from your data set. A perfect fit (for a very small dataset, or a very perfectly quadratic dataset) will have residuals of zero and a correlation coefficient that you can calculate. A bad but "middling" best-fit will have residuals distributed equally between positive and negative values. A bad quadratic fit will skew positive on one side and negative on the other side, or positive in the middle and negative on the ends or vice versa. A better fit will have smaller residuals. -
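To make the comment about the variance-covariance matrix concrete, here is a hedged numpy/scipy sketch of one standard recipe: fit by least squares, estimate the residual variance, scale $(X^TX)^{-1}$ by it, and turn the resulting standard errors into $t$-based confidence intervals. The data below are made up purely for illustration and are not from the question.

```python
import numpy as np
from scipy import stats

# Synthetic data (purely illustrative): a noisy quadratic
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)
y = 1.5 * x**2 - 2.0 * x + 0.5 + rng.normal(scale=0.8, size=x.size)

# Design matrix with rows (1, x, x^2), as in the question
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # coefficients (C, B, A)

# Residual variance and parameter covariance sigma^2 * (X^T X)^{-1}
n, p = X.shape
resid = y - X @ beta
sigma2 = resid @ resid / (n - p)
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))

# 95% confidence intervals from the t distribution with n - p degrees of freedom
t = stats.t.ppf(0.975, n - p)
for name, b, s in zip("CBA", beta, se):
    print(f"{name} = {b:.3f} +/- {t * s:.3f}")
```

As the comment warns, explicitly inverting $X^TX$ is numerically delicate; the same covariance can also be obtained from a QR or SVD factorization of $X$.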
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8905931711196899, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Discrete_probability_distribution
# Probability distribution

(Redirected from Discrete probability distribution)

In probability and statistics, a probability distribution assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. Examples are found in experiments whose sample space is non-numerical, where the distribution would be a categorical distribution; experiments whose sample space is encoded by discrete random variables, where the distribution can be specified by a probability mass function; and experiments with sample spaces encoded by continuous random variables, where the distribution can be specified by a probability density function. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.

In applied probability, a probability distribution can be specified in a number of different ways, often chosen for mathematical convenience:

• by supplying a valid probability mass function or probability density function
• by supplying a valid cumulative distribution function or survival function
• by supplying a valid hazard function
• by supplying a valid characteristic function
• by supplying a rule for constructing a new random variable from other random variables whose joint probability distribution is known.

A probability distribution can either be univariate or multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector—a set of two or more random variables—taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution.

## Introduction

The probability mass function (pmf) p(S) specifies the probability distribution for the sum S of counts from two dice. For example, the figure shows that p(11) = 1/18. The pmf allows the computation of probabilities of events such as P(S > 9) = 1/12 + 1/18 + 1/36 = 1/6, and all other probabilities in the distribution.

To define probability distributions for the simplest cases, one needs to distinguish between discrete and continuous random variables. In the discrete case, one can easily assign a probability to each possible value: for example, when throwing a die, each of the six values 1 to 6 has the probability 1/6. In contrast, when a random variable takes values from a continuum, probabilities are nonzero only if they refer to finite intervals: in quality control one might demand that the probability of a "500 g" package containing between 490 g and 510 g should be no less than 98%.

The probability density function (pdf) of the normal distribution, also called Gaussian or "bell curve", the most important continuous probability distribution. As notated on the figure, the probabilities of intervals of values correspond to the area under the curve.
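The two-dice example in the introduction is easy to check directly; this small Python snippet (not part of the article) simply enumerates the 36 equally likely outcomes:

```python
from collections import Counter
from fractions import Fraction

# pmf of the sum S of two fair dice
pmf = Counter()
for a in range(1, 7):
    for b in range(1, 7):
        pmf[a + b] += Fraction(1, 36)

print(pmf[11])                                  # 1/18
print(sum(p for s, p in pmf.items() if s > 9))  # 1/6
```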
If the random variable is real-valued (or more generally, if a total order is defined for its possible values), the cumulative distribution function (CDF) gives the probability that the random variable is no larger than a given value; in the real-valued case, the CDF is the integral of the probability density function (pdf) provided that this function exists.

## Terminology

As probability theory is used in quite diverse applications, terminology is not uniform and sometimes confusing. The following terms are used for non-cumulative probability distribution functions:

• Probability mass, Probability mass function, p.m.f.: for discrete random variables.
• Categorical distribution: for discrete random variables with a finite set of values.
• Probability density, Probability density function, p.d.f.: most often reserved for continuous random variables.

The following terms are somewhat ambiguous as they can refer to non-cumulative or cumulative distributions, depending on authors' preferences:

• Probability distribution function: continuous or discrete, non-cumulative or cumulative.
• Probability function: even more ambiguous, can mean any of the above or other things.

Finally,

• Probability distribution: sometimes the same as probability distribution function, but usually refers to the more complete assignment of probabilities to all measurable subsets of outcomes, not just to specific outcomes or ranges of outcomes.

### Basic terms

• Mode: for a discrete random variable, the value with highest probability (the location at which the probability mass function has its peak); for a continuous random variable, the location at which the probability density function has its peak.
• Support: the smallest closed set whose complement has probability zero.
• Head: the range of values where the pmf or pdf is relatively high.
• Tail: the complement of the head within the support; the large set of values where the pmf or pdf is relatively low.
• Expected value or mean: the weighted average of the possible values, using their probabilities as their weights; or, the continuous analog thereof.
• Median: the value such that the set of values less than the median has a probability of one-half.
• Variance: the second moment of the pmf or pdf about the mean; an important measure of the dispersion of the distribution.
• Standard deviation: the square root of the variance, and hence another measure of dispersion.
• Symmetry: a property of some distributions in which the portion of the distribution to the left of a specific value is a mirror image of the portion to its right.
• Skewness: a measure of the extent to which a pmf or pdf "leans" to one side of its mean.

## Cumulative distribution function

Because a probability distribution Pr on the real line is determined by the probability of a scalar random variable X being in a half-open interval (-∞, x], the probability distribution is completely characterized by its cumulative distribution function:

$F(x) = \Pr \left[ X \le x \right] \qquad \text{ for all } x \in \mathbb{R}.$

## Discrete probability distribution

(Figure captions: the probability mass function of a discrete probability distribution, in which the singletons {1}, {3}, and {7} have probabilities 0.2, 0.5, and 0.3 respectively and a set containing none of these points has probability zero; the cdf of a discrete probability distribution, of a continuous probability distribution, and of a distribution which has both a continuous part and a discrete part.)
A discrete probability distribution is a probability distribution characterized by a probability mass function. Thus, the distribution of a random variable X is discrete, and X is then called a discrete random variable, if

$\sum_u \Pr(X=u) = 1$

as u runs through the set of all possible values of X. It follows that such a random variable can assume only a finite or countably infinite number of values. For the number of potential values to be countably infinite even though their probabilities sum to 1 requires that the probabilities decline to zero fast enough: for example, if $\Pr(X=n) = \tfrac{1}{2^n}$ for n = 1, 2, ..., we have the sum of probabilities 1/2 + 1/4 + 1/8 + ... = 1.

In cases more frequently considered, this set of possible values is a topologically discrete set in the sense that all its points are isolated points. But there are discrete random variables for which this countable set is dense on the real line (for example, a distribution over rational numbers).

Among the most well-known discrete probability distributions that are used for statistical modeling are the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. In addition, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices.

### Cumulative density

Equivalently to the above, a discrete random variable can be defined as a random variable whose cumulative distribution function (cdf) increases only by jump discontinuities—that is, its cdf increases only where it "jumps" to a higher value, and is constant between those jumps. The points where jumps occur are precisely the values which the random variable may take.

### Delta-function representation

Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part.

### Indicator-function representation

For a discrete random variable X, let u0, u1, ... be the values it can take with non-zero probability. Denote

$\Omega_i=\{\omega: X(\omega)=u_i\},\, i=0, 1, 2, \dots$

These are disjoint sets, and by countable additivity

$\Pr\left(\bigcup_i \Omega_i\right)=\sum_i \Pr(\Omega_i)=\sum_i\Pr(X=u_i)=1.$

It follows that the probability that X takes any value except for u0, u1, ... is zero, and thus one can write X as

$X=\sum_i u_i 1_{\Omega_i}$

except on a set of probability zero, where $1_A$ is the indicator function of A. This may serve as an alternative definition of discrete random variables.

## Continuous probability distribution

See also: Probability density function

A continuous probability distribution is a probability distribution that has a probability density function. Mathematicians also call such a distribution absolutely continuous, since its cumulative distribution function is absolutely continuous with respect to the Lebesgue measure λ. If the distribution of X is continuous, then X is called a continuous random variable. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others.
Intuitively, a continuous random variable is the one which can take a continuous range of values — as opposed to a discrete distribution, where the set of possible values for the random variable is at most countable. While for a discrete distribution an event with probability zero is impossible (e.g. rolling 3½ on a standard die is impossible, and has probability zero), this is not so in the case of a continuous random variable. For example, if one measures the width of an oak leaf, the result of 3½ cm is possible, however it has probability zero because there are uncountably many other potential values even between 3 cm and 4 cm. Each of these individual outcomes has probability zero, yet the probability that the outcome will fall into the interval (3 cm, 4 cm) is nonzero. This apparent paradox is resolved by the fact that the probability that X attains some value within an infinite set, such as an interval, cannot be found by naively adding the probabilities for individual values. Formally, each value has an infinitesimally small probability, which statistically is equivalent to zero. Formally, if X is a continuous random variable, then it has a probability density function ƒ(x), and therefore its probability of falling into a given interval, say [a, b] is given by the integral $\Pr[a\le X\le b] = \int_a^b f(x) \, dx$ In particular, the probability for X to take any single value a (that is a ≤ X ≤ a) is zero, because an integral with coinciding upper and lower limits is always equal to zero. The definition states that a continuous probability distribution must possess a density, or equivalently, its cumulative distribution function be absolutely continuous. This requirement is stronger than simple continuity of the cumulative distribution function, and there is a special class of distributions, singular distributions, which are neither continuous nor discrete nor a mixture of those. An example is given by the Cantor distribution. Such singular distributions however are never encountered in practice. Note on terminology: some authors use the term "continuous distribution" to denote the distribution with continuous cumulative distribution function. Thus, their definition includes both the (absolutely) continuous and singular distributions. By one convention, a probability distribution $\,\mu$ is called continuous if its cumulative distribution function $F(x)=\mu(-\infty,x]$ is continuous and, therefore, the probability measure of singletons $\mu\{x\}\,=\,0$ for all $\,x$. Another convention reserves the term continuous probability distribution for absolutely continuous distributions. These distributions can be characterized by a probability density function: a non-negative Lebesgue integrable function $\,f$ defined on the real numbers such that $F(x) = \mu(-\infty,x] = \int_{-\infty}^x f(t)\,dt.$ Discrete distributions and some continuous distributions (like the Cantor distribution) do not admit such a density. ## Some properties • The probability distribution of the sum of two independent random variables is the convolution of each of their distributions. • Probability distributions are not a vector space—they are not closed under linear combinations, as these do not preserve non-negativity or total integral 1—but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures). 
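A small scipy sketch of the point made above about the oak-leaf width (not part of the article): modelling the width, purely hypothetically, as a normal random variable, any single exact value has probability zero, while an interval gets the integral of the density.

```python
from scipy import stats
from scipy.integrate import quad

# Hypothetical model: leaf width ~ Normal(mean 3.5 cm, sd 0.5 cm)
leaf = stats.norm(loc=3.5, scale=0.5)

p_interval, _ = quad(leaf.pdf, 3.0, 4.0)      # integral of the pdf over (3 cm, 4 cm)
print(p_interval)                              # ~0.683
print(leaf.cdf(4.0) - leaf.cdf(3.0))           # same value via the cdf

print(quad(leaf.pdf, 3.5, 3.5)[0])             # 0.0: a single exact value has probability zero
```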
## Kolmogorov definition

Main articles: Probability space and Probability measure

In the measure-theoretic formalization of probability theory, a random variable is defined as a measurable function X from a probability space $\scriptstyle (\Omega, \mathcal{F}, \operatorname{P})$ to a measurable space $\scriptstyle (\mathcal{X},\mathcal{A})$. A probability distribution is the pushforward measure $X_*\operatorname{P} = \operatorname{P}X^{-1}$ on $\scriptstyle (\mathcal{X},\mathcal{A})$.

## Random number generation

Main article: Pseudo-random number sampling

A frequent problem in statistical simulations (the Monte Carlo method) is the generation of pseudo-random numbers that are distributed in a given way. Most algorithms are based on a pseudorandom number generator that produces numbers X that are uniformly distributed in the interval [0,1). These random variates X are then transformed via some algorithm to create a new random variate having the required probability distribution.

## Applications

The concept of the probability distribution and the random variables which they describe underlies the mathematical discipline of probability theory, and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth, traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate. As a more specific example of an application, the cache language models and other statistical language models used in natural language processing to assign probabilities to the occurrence of particular words and word sequences do so by means of probability distributions.

## Common probability distributions

Main article: List of probability distributions

The following is a list of some of the most common probability distributions, grouped by the type of process that they are related to. For a more complete list, see list of probability distributions, which groups by the nature of the outcome being considered (discrete, continuous, multivariate, etc.) Note also that all of the univariate distributions below are singly peaked; that is, it is assumed that the values cluster around a single point. In practice, actually observed quantities may cluster around multiple values. Such quantities can be modeled using a mixture distribution.

### Related to real-valued quantities that grow linearly (e.g. errors, offsets)

• Normal distribution (Gaussian distribution), for a single such quantity; the most common continuous distribution

### Related to positive real-valued quantities that grow exponentially (e.g. prices, incomes, populations)

• Log-normal distribution, for a single such quantity whose log is normally distributed
• Pareto distribution, for a single such quantity whose log is exponentially distributed; the prototypical power law distribution

### Related to real-valued quantities that are assumed to be uniformly distributed over a (possibly unknown) region

• Discrete uniform distribution, for a finite set of values (e.g.
the outcome of a fair die) • Continuous uniform distribution, for continuously distributed values ### Related to Bernoulli trials (yes/no events, with a given probability) • Basic distributions: • Bernoulli distribution, for the outcome of a single Bernoulli trial (e.g. success/failure, yes/no) • Binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed total number of independent occurrences • Negative binomial distribution, for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs • Geometric distribution, for binomial-type observations but where the quantity of interest is the number of failures before the first success; a special case of the negative binomial distribution • Related to sampling schemes over a finite population: • Hypergeometric distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, using sampling without replacement • Beta-binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, sampling using a Polya urn scheme (in some sense, the "opposite" of sampling without replacement) ### Related to categorical outcomes (events with K possible outcomes, with a given probability for each outcome) • Categorical distribution, for a single categorical outcome (e.g. yes/no/maybe in a survey); a generalization of the Bernoulli distribution • Multinomial distribution, for the number of each type of categorical outcome, given a fixed number of total outcomes; a generalization of the binomial distribution • Multivariate hypergeometric distribution, similar to the multinomial distribution, but using sampling without replacement; a generalization of the hypergeometric distribution ### Related to events in a Poisson process (events that occur independently with a given rate) • Poisson distribution, for the number of occurrences of a Poisson-type event in a given period of time • Exponential distribution, for the time before the next Poisson-type event occurs ### Useful for hypothesis testing related to normally distributed outcomes • Chi-squared distribution, the distribution of a sum of squared standard normal variables; useful e.g. for inference regarding the sample variance of normally distributed samples (see chi-squared test) • Student's t distribution, the distribution of the ratio of a standard normal variable and the square root of a scaled chi squared variable; useful for inference regarding the mean of normally distributed samples with unknown variance (see Student's t-test) • F-distribution, the distribution of the ratio of two scaled chi squared variables; useful e.g. for inferences that involve comparing variances or involving R-squared (the squared correlation coefficient) ### Useful as conjugate prior distributions in Bayesian inference Main article: Conjugate prior • Beta distribution, for a single probability (real number between 0 and 1); conjugate to the Bernoulli distribution and binomial distribution • Gamma distribution, for a non-negative scaling parameter; conjugate to the rate parameter of a Poisson distribution or exponential distribution, the precision (inverse variance) of a normal distribution, etc. 
• Dirichlet distribution, for a vector of probabilities that must sum to 1; conjugate to the categorical distribution and multinomial distribution; generalization of the beta distribution • Wishart distribution, for a symmetric non-negative definite matrix; conjugate to the inverse of the covariance matrix of a multivariate normal distribution; generalization of the gamma distribution ## References • B. S. Everitt: The Cambridge Dictionary of Statistics, Cambridge University Press, Cambridge (3rd edition, 2006). ISBN 0-521-69027-7 • Bishop: Pattern Recognition and Machine Learning, Springer, ISBN 0-387-31073-8
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8867579698562622, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/148993-unsure-what-area-relates-but-im-struggling-ignition-map.html
Thread:

1. Unsure what area this relates to but I'm struggling with an ignition map

I'm trying to write a microcontroller program to run the ignition in my car. However I'm struggling to interpret values from a table properly if the value I need falls in between the values in the table; I'm not convinced the method I'm using does a very good job. Going by the image I have attached, what would the value for advance be if the load was 67 and the RPM was 4600? thanks Dan

(Attached thumbnail: the ignition map table.)

2. Originally Posted by powermandan (question quoted above)

Linear interpolation:

1. Assume that the increase between 60 and 80, or 4000 and 5000 respectively, is proportional. That means: To reach the 67 you have to calculate: $67 = 60+(80-60) \cdot \frac{7}{80-60}$. So the increase is $\frac7{20}$ of the difference of the 2 corresponding values. Now calculate the 2 values in the 4000 and 5000 column.
2. The increase in the last row is $\frac{600}{1000} = 0.6$ of the difference between the 2 corresponding values in the last row.
3. Now calculate the increase between the newly calculated values.
4. I've attached a table with the results.

(Attached thumbnail: table with the interpolated results.)

3. Thank you very much
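For the microcontroller side, the procedure described in the answer above is ordinary bilinear interpolation. Here is a hedged Python sketch of it; the function and variable names are my own, and the table values below are made up, since the real ones are in the attached image.

```python
def bilinear(table, loads, rpms, load, rpm):
    """Bilinearly interpolate an ignition-advance table.

    table[i][j] is the advance at loads[i] and rpms[j]; the axis values must
    be ascending, and (load, rpm) is assumed to lie inside the table range.
    """
    # Find the cell that brackets the requested point
    i = min(max(k for k, v in enumerate(loads) if v <= load), len(loads) - 2)
    j = min(max(k for k, v in enumerate(rpms) if v <= rpm), len(rpms) - 2)

    # Fractional position inside the cell, e.g. (67-60)/(80-60)=0.35 and (4600-4000)/1000=0.6
    t = (load - loads[i]) / (loads[i + 1] - loads[i])
    u = (rpm - rpms[j]) / (rpms[j + 1] - rpms[j])

    # Interpolate along the RPM axis on both load rows, then between the rows
    a = table[i][j] + u * (table[i][j + 1] - table[i][j])
    b = table[i + 1][j] + u * (table[i + 1][j + 1] - table[i + 1][j])
    return a + t * (b - a)

# Hypothetical advance values, purely for illustration
loads = [60, 80]
rpms = [4000, 5000]
table = [[24.0, 26.0],
         [20.0, 22.0]]
print(bilinear(table, loads, rpms, load=67, rpm=4600))   # ~23.8 with these made-up numbers
```

The same arithmetic ports directly to fixed-point C on a microcontroller; only the two divisions need care.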
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8949414491653442, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/243897/about-simplicial-complex
# About simplicial complex Let $K$ be a simplicial complex and if $\sigma$ is a simplex of $K$. How to prove that the following sets are simplicial complexes? 1) The boundary of $K$: $\partial(K)=\{\mbox{proper faces which belong to all the simplexes of K}\}$ 2) The closure of $\sigma$: $\mathrm{Cl}(\sigma)=\{\mbox{faces of }\sigma\}$ Can you help me please? Thank you!!! - 2 What definition do you have of simplicial complexes? – Dedalus Nov 24 '12 at 19:36 I have to prove this definition: A simplicial complex K is a finite collection of simplices in some R^n satisfying: 1. If σ ∈ K, then all faces of σ belong to K. 2. If σ, τ ∈ K, then either σ ∩ τ = ∅ or σ ∩ τ is a common face of σ and τ. – lauren Nov 25 '12 at 8:27
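Not part of the original thread, but here is a sketch of part 2 using the definition quoted in the comment; part 1 works the same way once $\partial(K)$ is unwound in terms of proper faces.

For $\mathrm{Cl}(\sigma)$: (1) if $\tau \in \mathrm{Cl}(\sigma)$, then $\tau$ is spanned by a subset $S$ of the vertex set of $\sigma$; any face of $\tau$ is spanned by a subset of $S$, hence is again a face of $\sigma$ and lies in $\mathrm{Cl}(\sigma)$. (2) If $\tau_1, \tau_2 \in \mathrm{Cl}(\sigma)$ are spanned by vertex sets $S_1, S_2$, then, because the vertices of $\sigma$ are affinely independent (so barycentric coordinates are unique), $\tau_1 \cap \tau_2$ is either empty or the simplex spanned by $S_1 \cap S_2$, which is a common face of $\tau_1$ and $\tau_2$. These are exactly the two conditions in the definition of a simplicial complex.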
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8344354629516602, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/67347/base-change-and-langlands-combinatorial-exercise
## base change and Langlands' combinatorial exercise

Hi, Is it correct that Langlands' combinatorial exercise (as he terms it in his paper "Shimura varieties and the Selberg trace formula") is to establish base change identities between orbital integrals of the group $G$ over a number field and twisted orbital integrals over some unramified extension? Or am I completely wrong? I am trying to understand this part of Langlands' paper "On the zeta-functions of some simple Shimura varieties" without much success... Thanks

- I think so. My understanding is that he proves a fundamental lemma in the context of base-change. – Emerton Oct 22 2011 at 4:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9113767147064209, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/60108/occurrences-of-cohomology-in-other-disciplines-and-or-nature/98168
## Occurrences of (co)homology in other disciplines and/or nature

I am curious if the setup for (co)homology theory appears outside the realm of pure mathematics. The idea of a family of groups linked by a series of arrows such that the composition of consecutive arrows is zero seems like a fairly general notion, but I have not come across it in fields like biology, economics, etc. Are there examples of non-trivial (co)homology appearing outside of pure mathematics? I think Hatcher has a couple illustrations of homology in his textbook involving electric circuits. This is the type of thing I'm looking for, but it still feels like topology since it is about closed loops. Since the relation $d^2=0$ seems so simple to state, I would imagine this setup to be ubiquitous. Is it? And if not, why is it so special to topology and related fields?

- 3 Ghrist has written a number of papers applying homology to different applied fields. See for example his paper on Sensor Networks. – Jim Conant Mar 30 2011 at 19:30
3 This question should be Community Wiki, since it's looking for a list of examples rather than an answer to a specific question. I also feel like the question isn't very clear, since any answer has to be homological in nature so fundamentally mathematical. So the question seems somewhat conflicted. – Ryan Budney Mar 30 2011 at 19:35
1 Do you count string theory as "outside the realm of pure mathematics"? – Qfwfq Mar 30 2011 at 20:04
4 The first paragraph seems to suggest that the question is not so much about homology/topology in the large, but more about the idea of chain complexes. Is my reading correct? – Yemon Choi Mar 30 2011 at 20:27
3 Applied topology at Stanford: comptop.stanford.edu – Igor Belegradek Mar 30 2011 at 20:48

## 20 Answers

Robert Ghrist is all about applied topology: Sensor Networks, Signal Processing, and Fluid Dynamics. (homepage: http://www.math.upenn.edu/~ghrist/index.html ). For instance, we want to use the least number of sensors to cover a certain area, such that when we remove one sensor, a part of that area is undetectable. We can form a complex of these sensors and hence its nerve, and use homology to determine whether there are any gaps in the sensor-collection. I've met with him in person and he expressed confidence that this is going to be a big thing of the future. There are also applications of cohomology to Crystallography (see Howard Hiller) and Quasicrystals in physics (see Benji Fisher and David Rabson). In particular, it uses cohomology in connection with Fourier space to reformulate the language of quasicrystals/physics in terms of cohomology... Extinctions in x-ray diffraction patterns and degeneracy of electronic levels are interpreted as physical manifestations of nonzero homology classes. Another application is on fermion lattices (http://arxiv.org/abs/0804.0174v2), using homology combinatorially. We want to see how fermions can align themselves in a lattice, noting that by the Pauli Exclusion principle we cannot put a bunch of fermions next to each other. Homology is defined on the patterns of fermion-distributions.

- 3 Robert Ghrist is coming to Edinburgh for the Science Festival this year to talk about the Mathematics of holes. (Alas I will be away.) If anyone is around, it will be worth your while to attend!
– José Figueroa-O'Farrill Mar 30 2011 at 20:59
10 In general, I think that homology will play a role in the mathematics of information. We had a talk by Gunnar Carlsson recently in Edinburgh about "persistent homology" and it was quite an eye opener. See comptop.stanford.edu for instance. – José Figueroa-O'Farrill Mar 30 2011 at 21:00
Very interesting, thanks! – Noah Giansiracusa Mar 31 2011 at 14:33

Actually even schoolchildren calculate a group cocycle (without knowing that it is called this). Cohomology occurs in everyday life as soon as one learns to count:

5 + 7 = 12, 4 + 5 = 09, 2 + 8 = 10.

What is the function which sends a pair (a,b) to $0$ or $1$ depending on whether the result is greater than 9 or not? (E.g. f(5,7) = 1, f(4,5) = 0, f(2,8) = 1.) This is actually a 2-cocycle for the group $Z/nZ$ with values in $Z$. It can be checked directly or... let us look at it more conceptually. Consider the standard short exact sequence of abelian groups $0\to Z\to Z\to Z/n\to 0$ (the first map is multiplication by $n$, the second is the quotient map and will be denoted by $p$). Choose a section $s: Z/nZ \to Z$ (i.e. any map such that $ps=Id$, where $p: Z\to Z/nZ$; it is like a connection in differential geometry (this can be made precise)). Define $f(a,b)= \big(s(a)+s(b) - s(a+b)\big)/n$ (the difference lies in $nZ$, and dividing by $n$ identifies it with an element of $Z$ via the first map). Note that: a) this function $f(a,b)$ is exactly the carry function we talked about above; b) from general theory this is a 2-cocycle (it corresponds to this extension; it is like the "curvature" of a connection in differential geometry (this can be made precise)). That is all: we have explained why it is a group cocycle and what its role is. (A short computational check of the cocycle identity appears after the next answer below.) I would have liked to learn this 20 years ago when I learned group cohomology as an undergraduate, but I learned it 1 year ago, doing some engineering work in wireless communication... I am still surprised that it is not written on the first page of any textbook which deals with group cohomology; when I explain this to my friends, most did not know it either, and after learning it they shared my feeling of surprise.

- 12 While not on the first page of a textbook, this is written up in an article in the American Mathematical Monthly (Daniel C. Isaksen, A cohomological viewpoint on elementary school arithmetic, Amer. Math. Monthly (109), no. 9 (2002), p. 796--805) – Christopher Drupieski Jun 11 at 20:44
Thank you for the reference. I did not know it. – Alexander Chervov Jun 12 at 8:02
1 is it easy to generalize to larger digits? how about for multiplication? – unknown (google) Jun 18 at 16:34
@unknown good questions, sorry to say I cannot say much. About higher digits we can think about N=10^2, 10^3 and so on... the arguments will be the same. – Alexander Chervov Jun 18 at 17:54
Can you explain the things you claim "can be made precise"? I'm really curious. – 36min Nov 21 at 5:23

Quantum field theory is outside the realm of pure mathematics, makes contact with the real world and features chain complexes and cohomology. The current paradigm for gauge theories such as the standard model is based on Yang-Mills theories coupled to matter. The quantisation of nonabelian (and, depending on your choice of gauge fixing function, also abelian) Yang-Mills theories features a cohomology theory known by the moniker of BRST, after the inventors: Becchi, Rouet, Stora and, independently, Tyutin. The cleanest proofs of the renormalizability of Yang-Mills theories are cohomological in nature.
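Here is the short computational check promised in the carrying answer above. It is my own sketch, not part of any answer: it verifies that the schoolchild carry function satisfies the 2-cocycle identity for the trivial action of $Z/nZ$ on $Z$.

```python
from itertools import product

n = 10

def s(a):
    """Standard section Z/n -> Z: the representative in {0, ..., n-1}."""
    return a % n

def f(a, b):
    """Carry function: 1 when adding the digits a and b overflows past 9, else 0."""
    return (s(a) + s(b) - s(a + b)) // n

print(f(5, 7), f(4, 5), f(2, 8))   # 1 0 1, matching the examples in the answer

# 2-cocycle identity (trivial action): f(b,c) + f(a,b+c) = f(a,b) + f(a+b,c)
assert all(
    f(b, c) + f(a, b + c) == f(a, b) + f(a + b, c)
    for a, b, c in product(range(n), repeat=3)
)
print("cocycle identity holds for all digit triples")
```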
- Nice examples, thanks! – Noah Giansiracusa Mar 31 2011 at 14:33 My understanding, from conversations with Raoul Bott, is that his early work on electrical circuits and the Bott-Duffin theorem can be intepreted as exhibiting close connections between de Rham cohomology and the laws of electrical circuits, and that this is part of what led him into pure mathematics early in his career. - 4 He talked about this connection to electrical circuits in teaching graduate algebraic topology, to help motivate cohomology and give intuition for it. – Patricia Hersh May 20 2012 at 14:25 Bott gave a very nice talk (c. 1960) to electrical engineers on the subject, but I cannot find a reference right now. – Robert Bruner Jun 11 at 21:55 The mass of a classical mechanical system is an element in the (one-dimensional) second cohomology group of the Lie algebra of the Galilei group. See J. M. Souriau, Stucture des Systèmes Dynamiques, Chap. III, section (12.136). Or in english translation, search inside here for "total mass". - 1. Something resembling de Rham complex with differential-algebraic flavor appears in (variant of) control theory, see, for example, G. Conte, C.H. Moog, A.M. Perdon, Algebraic Methods for Nonlinear Control Systems, 2nd ed., Springer, 2006. But, as far as I can tell, they do not use the world "cohomology" explicitly. 2. Spencer cohomology (which is, essentially, a Lie algebra cohomology) appears as obstructions to integrability of some differential-geometric structures (G-structures) and, through it, of (some) differential equations. Potentially this opens a wide possibilities for applications, and indeed, Dimitry Leites advocates this approach in (some of) his writings. An emblematic publication which is available, unfortunately, only in Russian, is: "Application of cohomology of Lie algebras in national economy", Seminar "Globus", Independent Univ. of Moscow, Vol. 2, 2005, 82-102. The Russian original for "national economy" in the title is (a somewhat pejorative and untranslatable term) "narodnoe khozyai'stvo". - Anders Björner and László Lovász used bounds on the Betti numbers for the complement of a real subspace arrangement called the $k$-equal arrangement to give a complexity theory lower bound that agreed, up to a scalar multiple, with the previously known upper bound in: A. Björner and L. Lovász, Linear decision trees, subspace arrangements, and Mobius functions, Journal of the American Mathematical Society, Vol. 7, No. 3 (1994), 677--706. The basic question addressed in their paper (along with other questions of a similar flavor) is how many pairwise comparisons of coordinates are needed to decide if a vector in ${\bf R}^n$ has $k$ coordinates all equal to each other for fixed $k$ and $n$. They observed that this is equivalent to deciding whether the vector lies on the so-called $k$-equal arrangement or in its complement, where the $k$-equal arrangement is the subspace arrangement comprised of the ${n\choose k}$ subspaces where $k$ coordinates are set equal to each other. To this end, they gave a lower bound on the number of leaves in a linear decision tree -- a tree where one starts at the root, and each time one does a comparison of two coordinates $a_i$ and $a_j$, then one proceeds down to either the $a_i < a_j$ child or the $a_i = a_j$ child or the $a_i > a_j$ child. One reaches a leaf when no further queries are necessary to make a decision as to containment in the arrangement or its complement. 
The log base 3 of the number of leaves is a lower bound on the depth of the tree, i.e. on the number of queries needed in the worst case. To get some intuition for why this bound depends fundamentally on the Betti numbers of the complement, consider the $k=2$ case -- where the number of connected components of the complement of the subspace arrangement (which in this case is a hyperplane arrangement) is an obvious lower bound on the number of leaves in any linear decision tree. - Recently, it has been realized that quantum many-body states can be divided into short-range entangled states and long-range entangled states. The quantum phases with long-range entanglements correspond to topologically ordered phases, which, in two spatial dimensions, can be described by tensor category theory (see cond-mat/0404617). Topological order in higher dimensions may need higher categories to describe it. One can also show that the quantum phases with short-range entanglements and symmetry $G$ in any dimensions can be "classified" by Borel group cohomology theory of the symmetry group. (Those phases are called symmetry protected trivial (SPT) phases.) The quantum phases with short-range entanglements that break the symmetry are the familiar Landau symmetry breaking states, which can be described by group theory. So, to understand the symmetry breaking states, physicists have already been forced to learn group theory. It looks like, to understand the patterns of many-body entanglement that correspond to topological order and SPT order, physicists will be forced to learn tensor category theory and group cohomology theory. In modern quantum many-body physics and in modern condensed matter physics, tensor category theory and group cohomology theory will be as useful as group theory. The days when physics students need to learn tensor category theory and group cohomology theory are coming, maybe soon. - The finite element method (a numerical method for solving PDEs) has a homological interpretation: Arnold, Douglas N.; Falk, Richard S.; Winther, Ragnar, Finite element exterior calculus, homological techniques, and applications. Acta Numer. 15 (2006), 1–155. MR2269741 (2007j:58002) - The Aharonov–Bohm effect. Classically, you can't distinguish two electromagnetic potentials which are in the same cohomology class. From the quantum viewpoint, they can be distinguished, because an electron changes its phase under parallel transport defined by the connection associated to a potential. - A classical and elegant application is to the solution of Kirchhoff's theorem on electrical circuits. See: Nerode, A.; Shank, H.: An algebraic proof of Kirchhoff's network theorem. Amer. Math. Monthly, 68 (1961) 244–247 - It's my understanding that Carina Curto and Vladimir Itskov at the University of Nebraska-Lincoln apply algebraic topology (among other things) to study theoretical and applied neuroscience. - Their work is very similar in spirit to Ghrist's work alluded to in one of the other answers. – Igor Rivin May 20 2012 at 15:38 Maurice Herlihy and Nir Shavit won the 2004 Gödel Prize for topological analysis of asynchronous computation. Homology was involved. - • There's a CST.SE thread of possible interest here: http://cstheory.stackexchange.com/questions/7958/papers-on-relation-between-computational-complexity-and-algebraic-geometry-topol It mentions stuff like Geometric Complexity Theory, a far-out program for proving P!=NP with algebraic geometry.
Mentions the thing I actually first websearched for, Herlihy's work on concurrent and distributed computing using cohomology. - I am surprised no-one has mentioned Persistent Homology. - en.wikipedia.org/wiki/… – Alexander Chervov Jun 20 at 12:40 See also José Figueroa-O'Farrill's comment on the accepted answer, in which he mentions a talk by Gunnar Carlsson about persistent homology. – J W Jun 20 at 15:59 An application of cohomology to provide a geometric/topological description of charged particles in General Relativity can be found in GRAVITATION: An Introduction to Current Research, ed. Louis Witten, and references therein. Wheeler's geometrodynamics program contained a subprogram named "charge without charge", which aimed to express the electric charge in terms of geometric and/or topological properties. A wormhole allows the existence of an electromagnetic field without source - hence the name "charge without charge". The two ends of the wormholes behave as particles of opposite electric charge. And all this can be obtained as a solution to the Einstein-Maxwell equations. Roots of the approach of Misner and Wheeler can be found in the paper of Einstein and Rosen, and a series of papers of G. Y. Rainich from 1924-1925. - There are some applications of topology/cohomology to combinatorics and combinatorial geometry. One of the earliest examples is surely Lovász's proof of a bound for the chromatic number of the Kneser graph; he uses the Borsuk-Ulam theorem, which is usually proved by homological methods. A modern exposition can be found here. Another example is Tverberg's theorem with all its variants on the configuration of points in space (the best results can be found in a recent paper of Blagojevic, Matschke and Ziegler). There are many other results in convex geometry/polytope theory which use topological methods and, in particular, cohomology. - While this isn't so clear from the title of the question, the first sentence in the text of the question mentions wanting things outside of pure mathematics. – tweetie-bird Oct 25 at 14:27 Cohomology is basically a way to get information from linear maps that are neither injective nor surjective. So wherever such a linear map occurs, one can find a use for cohomology. But whether cohomology plays a significant role will depend on what kind of questions are asked. - 3 I don't really see how this answers the original question "I am curious if the setup for (co)homology theory appears outside the realm of pure mathematics". – Yemon Choi Jun 11 at 22:55 At first glance, the characterization of the structures you mention ("(co)homologies") appears to be easily interpreted in terms of modal structures (i.e. as in modal logics), or labelled transition systems, widely used in computer science. - Every cohomology group has its meaning, but the meaning of higher cohomologies is still to be explored. Physics and applications in other disciplines of science might provide motivation and inspiration for the study of such meanings. In algebra, the first cohomology usually corresponds to some kind of homomorphism-type property, for example, derivations and so on. The second cohomology corresponds to what people call square-zero extensions of the algebra and also infinitesimal deformations of the algebra. The third cohomology of associative algebras is the obstruction to the existence of finite formal deformations. If one sees such extensions of algebras or deformations of algebras in problems in other sciences, then it is the cohomology that plays the fundamental role here.
- 2 I don't really see how this answers the original question "I am curious if the setup for (co)homology theory appears outside the realm of pure mathematics". – Yemon Choi Jun 11 at 22:55
http://physics.stackexchange.com/questions/35553/why-there-is-no-accuracy-of-the-measured-value-of-g
# Why is there no accurate measured value of $G$?

With the advancement of modern technology, why is there still no accurate measured value of $G$, the gravitational constant? -

① Gravitation is too small. ② The prototype limits the accuracy of mass measurement. ③ Gravitation from unwanted sources (equipment, lab, you) cannot be screened out. – C.R. Sep 4 '12 at 0:32

## 2 Answers

It's very hard to measure the magnitude of the gravitational force between objects of well-known mass. For the mass to be well-known, as a multiple of the kilogram prototype (which is how we still define the unit of mass), they have to be rather small objects. But the gravity between small objects is too weak. It can be measured, but it has been impossible to measure the force to better accuracy than three or four significant figures. -

alfered, this is a good question with a potentially long answer, although there are good sources that discuss some of the difficulties and recent solutions in measuring G. A good one is the thesis of Joshua Schwarz, which gives a good overview in the first six pages and covers how to perform gravity measurements in free fall. One obstacle in traditional torsion-type approaches is having an accurate estimate of the anelasticity of the fiber used in the torsion experiment. As one can imagine, one must understand the physics of the material itself in order to estimate the actual force that is being applied. Hope this helps. -
http://physics.stackexchange.com/questions/tagged/feynman-diagram+duality
# Tagged Questions

### Why is there no double counting of $s$- and $t$-channels in string theory?

In string theory, for the four-particle tree-diagram exchange, why is there some mysterious crossing duality between the $s$-, $t$- and $u$-channels? Why isn't there a double counting in the Feynman ...
http://math.stackexchange.com/questions/279637/exponent-questions-algebra
# Exponent questions algebra How would I solve the following two exponent questions? (1) The first question is $$\left(\frac{x^{-2}+y^{-1}}{xy^2}\right)^{-1}$$ I got $\quad \displaystyle \frac{-xy^{-2}}{x^2+y},\;\;$but this does not seem to be correct. (2) My second question is $$\left(\frac{3}{A^{-3}B^{-2}}\right)^{-2}$$ I got $\quad\displaystyle \frac{A^6B^4}{1/9},\quad$ but my book's answer is $\quad\displaystyle \frac{1}{9A^6B^4}$ - ## 4 Answers We are not solving, we are simplifying: For the first, note that $$\left(\frac{x^{-2}+y^{-1}}{xy^2}\right)^{-1}\;=\; \frac{xy^2}{x^{-2} + y^{-1}}\;=\;\frac{xy^2}{\large\frac{1}{x^2} + \frac{1}{y}}$$ Try now to multiply numerator and denominator by $x^2y$: $$\frac{xy^2}{\left(\large\frac{1}{x^2} + \frac{1}{y}\right)}\cdot \frac{(x^2y)}{(x^2y)} \quad = \quad\frac {x\cdot x^2 \cdot y^2 \cdot y}{\left(\large\frac{x^2y}{x^2} + \frac{x^2y}{y}\right)}\quad =\quad \frac{x^3y^3}{y + x^2}$$ For the second, again, we are simplifying: $$\left(\frac{3}{A^{-3}B^{-2}}\right)^{-2} \quad = \quad \frac{3^{-2}}{A^{(-3)(-2)}B^{(-2)(-2)}}\quad = \quad \frac{1/9}{A^6B^4}\quad =\quad \frac{1}{9A^6B^4}$$ (as per my now deleted comment below) Alternatively, for the second problem, we proceed as follows: $$\left(\frac{3}{A^{-3}B^{-2}}\right)^{-2}\quad = \quad\left(\frac{A^{-3}B^{-2}}{3}\right)^2 \quad=\quad \frac{A^{-6}B^{-4}}{9} \quad = \quad \frac{1}{9A^6B^4}$$ - One quick question I have on the second problem is dont you flip the fraction when you have a negative exponent. – Fernando Martinez Jan 15 at 23:10 1 I strongly encourage the OP to check the reason for each step in this answer. If you are unsure of what rule is applied to each step, do some research and/or ask here. – Code-Guru Jan 15 at 23:11 1 @FernandoMartinez There are often more than one way to simplify these kinds of expressions. – Code-Guru Jan 15 at 23:11 Then is the way I did it initially correct or incorrect? – Fernando Martinez Jan 15 at 23:14 1 Fernando: you can flip, but you can also multiply through by the negative exponent (since the terms in the numerator and denominator are all multiplied), so each term's exponent can be multiplied by -2, which simplifies the denominator right away. In the first problem, note that at the start, the numerator had a sum, so we can't just multiply through by the exponent. – amWhy Jan 15 at 23:14 show 4 more comments You got $$\frac{-xy^{-2}}{x^2+y}$$ Here's what I got: $$\left(\frac{x^{-2}+y^{-1}}{xy^2}\right)^{-1}=\left(\frac{xy^2}{x^{-2}+y^{-1}}\right)=\left(\frac{xy^2}{\frac{1}{x^2}+\frac{1}{y}} \right)=\left(\frac{xy^2}{\frac{y+x^2}{yx^2}} \right)=xy^2\cdot\left(\frac{yx^2}{y+x^2} \right)=\frac{x^3y^3}{y+x^2}$$ - Same thing I got, so I think this is the correct answer – frogeyedpeas Jan 15 at 22:52 To Solve Problem 1. We are going to begin by seperating the fraction into two parts (both still underneath one big -1st power) This leads us to x^-3/y^2 + y^-3/x. After moving the powers to their correct locations we get: 1/(y^2*x^3) + 1(x*y^3). Now factoring out 1/(x*y^2) from both sides we get. 1/(x*y^2)*[1/(x^2) + 1/(y)] finding an LCD (least common denominator) for both fractions we can rewrite it as. (1/x*y^2)*[(y + x^2)/(x^2*y)] Placing back the factored fraction we end up with: (y + x^2)/(x^3 * y^3) and now applying the negative first power (REMEMBER FROM THE START!) (x^3*y^3)/(y + x^2) Is the correct answer. To Solve Number 2. We begin with the expression (3/(A^(-3) * B^(-2)))^(-2). First we bring up the A and B (since we can w/ negative exponents!) 
so we get (3*A^3 * B^2)^(-2). Now we can evaluate that -2nd power in two steps. First we evaluate it as a 2nd power and then a -1, both we know how to do. So after evaluating the 2 we end up with: (9*A^6*B^4)^(-1) After the -1 we end up with: 1/(9*A^6*B^4) Which is what you should get! Good Luck :) - 2 You can find some good starting points on how to format mathematics on the site here and here. This AMS reference is very useful. If you need to format more advanced things, there are many excellent references on LaTeX on the internet, including StackExchange's own TeX.SE site. – Zev Chonoles♦ Jan 15 at 23:01 Here is another way to simplify the second expression from your question: $$\left(\frac{3}{A^{-3}B^{-2}}\right)^{-2} = \left(\frac{A^{-3}B^{-2}}{3}\right)^{2} = \left(\frac{A^{-6}B^{-4}}{9}\right) = \left(\frac{1}{9A^{6}B^{4}}\right)$$. -
http://mathhelpforum.com/geometry/25677-diagonal.html
# Thread:

1. ## Diagonal

Consider a convex n-gon such that no 3 diagonals intersect at a single point. Draw all the diagonals (i.e. connect every pair of vertices by a segment).

a) How many intersections do the diagonals determine?

b) Into how many parts is the polygon divided by the diagonals?

2. Originally Posted by vivian
Consider a convex n-gon such that no 3 diagonals intersect at a single point. Draw all the diagonals (i.e. connect every pair of vertices by a segment).
a) How many intersections do the diagonals determine?
b) Into how many parts is the polygon divided by the diagonals?

Hello,

to a): from each vertex there start (n-3) diagonals. The total number of diagonals is therefore: $\frac n2 \cdot (n-3)$

Each diagonal is intersected by (n-3) diagonals. Therefore the number of intersections is: $\frac n2 \cdot (n-3)^2$

to b): I'm not quite sure what you mean by parts
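For reference, here is the standard counting argument under the same general-position assumption (no three diagonals meeting at an interior point); it differs from the count above and answers both parts.

a) Every interior intersection point is the crossing of the two diagonals of the quadrilateral formed by some 4 of the n vertices, and every choice of 4 vertices produces exactly one such crossing, so the number of intersections is
$$I=\binom{n}{4}.$$

b) Add the $\frac{n(n-3)}{2}$ diagonals one at a time: each new diagonal increases the number of parts by 1, plus 1 more for every intersection point lying on it. Starting from the single region of the undivided polygon, this gives
$$R = 1 + \frac{n(n-3)}{2} + \binom{n}{4}.$$
As a quick check, $n=4,5,6$ give $4$, $11$ and $25$ parts respectively.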
http://math.stackexchange.com/questions/180650/newtons-method-error-bounds?answertab=votes
# Newton's method - error bounds

I just have a very brief question regarding the formula for error bounds in Newton's method. Depending on where you look, this will either be written as: $$e_{n+1} \approx \frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)}e_{n}^2$$ or: $$e_{n+1} \approx -\frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)}e_{n}^2$$ In other words, the sign preceding the fraction term will differ. Why is this so? I have encountered both versions in different places, and I just wondered if it is arbitrary what you choose to use. After all, if you take the absolute value, the magnitude of the error bound will be the same, but if you leave the answer as it is, without taking the absolute value, then, naturally, one answer will be positive and another will be negative. If anyone can clear this up for me, I would greatly appreciate it! -

## 1 Answer

Let $\alpha$ be the true root. We can define the error $e_{n+1}$ in the estimate $x_{n+1}$ in three different ways.

Way $1$: $\,e_{n+1}=x_{n+1}-\alpha$. If that is the definition, then the first estimate of the error you give is the correct one.

Way $2$: $\,e_{n+1}=\alpha-x_{n+1}$. With that definition, the second estimate you give is the correct one.

Way $3$: $\,e_{n+1}=|\alpha-x_{n+1}|$. Then neither is correct; one should use $e_{n+1} \approx \left|\frac{f^{\prime \prime}(r)}{2 f^{\prime}(r)}\right|e_{n}^2$.

Although one is mostly interested in the absolute value of $\alpha-x_{n+1}$, Ways $1$ and $2$ supply more information than Way $3$, since they tell us whether $x_{n+1}$ is an overestimate or an underestimate. Each of Ways $1$, $2$, and $3$ is unfortunately in fairly common use. -

Awesome! Thanks a lot for clearing this up. I really appreciate it. – Kristian Aug 9 '12 at 17:29
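A quick numerical illustration of the Way $1$ convention (a sketch added for illustration, using $f(x)=x^2-2$ with root $r=\sqrt2$, so the predicted constant is $f''(r)/(2f'(r)) = 1/(2\sqrt2) \approx 0.354$):

```haskell
-- One Newton step for f(x) = x^2 - 2, whose positive root is sqrt 2.
newtonStep :: Double -> Double
newtonStep x = x - (x * x - 2) / (2 * x)

main :: IO ()
main = do
  let r  = sqrt 2
      xs = take 5 (iterate newtonStep 2.0)   -- x_0 = 2
      es = map (subtract r) xs               -- Way 1 errors: e_n = x_n - alpha
  putStrLn ("predicted constant f''(r)/(2 f'(r)) = " ++ show (1 / (2 * r)))
  -- The ratios e_{n+1} / e_n^2 stay positive and approach the predicted
  -- constant, matching the plus-sign version of the formula under Way 1.
  mapM_ (\(e, e') -> putStrLn ("e_{n+1} / e_n^2 = " ++ show (e' / (e * e))))
        (zip es (tail es))
```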
http://math.stackexchange.com/questions/204942/correctness-of-fermats-factorization
# Correctness of Fermat's Factorization Is this proof correct: An odd integer $n \in \mathbb{N}$ is composite iff it can be written in the form $n = x^2 - y^2, y+1 < x$ Proof: $\leftarrow$ Want: $n = ab$ Where $a$ and $b$ are odd integers (since $n$ is odd) Let $n = x^2 - y^2, x > y + 1$. Let $x = \dfrac{a+b}{2}$ and let $y= \dfrac{a-b}{2}$ where $a$ and $b$ are odd integers. Consider $n = x^2 - y^2$: $= (x+y)(x-y) \iff (\dfrac{a+b}{2} + \dfrac{a-b}{2})\cdot(\dfrac{a+b}{2} - \dfrac{a-b}{2})$ Thus we have $ab$. Now I could do similar steps backwards to prove the other direction. - It looks solid, except that you should explicitly mention where the condition that $x\gt y+1$ comes into play (hint; there's another condition on $a$ and $b$ that you haven't mentioned - actually, two more, one being $a\geq b$ since $y$ is positive...) – Steven Stadnicki Sep 30 '12 at 17:32 Is it the fact that $x \geq \lceil \sqrt n \rceil$? – CodeKingPlusPlus Sep 30 '12 at 17:49 No, that's actually moot - it's what that condition implies about $a$ and $b$. (Slightly larger hint: every number $n$ has a factorization $n=ab$; what do you need to ensure that $n$ isn't prime?) – Steven Stadnicki Sep 30 '12 at 18:55 $a \neq 1$ and $b \neq 1$ – CodeKingPlusPlus Sep 30 '12 at 20:51 ## 1 Answer The part when you say "Let $x=\frac{a+b}2$ and let $y$..." is not really clear. In the $\Leftarrow$ direction $x$ and $y$ should be considered as given, and define $a$ and $b$ using them, and show that they are integers. -
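This direction of the equivalence is also the engine behind Fermat's factorization method: starting at $x=\lceil\sqrt n\rceil$, search for an $x$ with $x^2-n$ a perfect square $y^2$, and then $n=(x-y)(x+y)$. A minimal sketch (the names are mine; note that for a prime $n$ the search only stops at the trivial representation $x=\frac{n+1}{2}$, $y=\frac{n-1}{2}$, which is exactly the excluded case $x=y+1$):

```haskell
-- Integer square root (floor) by Newton's method, for non-negative m.
isqrt :: Integer -> Integer
isqrt 0 = 0
isqrt m = go (m `div` 2 + 1)
  where
    go x = let x' = (x + m `div` x) `div` 2
           in if x' >= x then x else go x'

-- Fermat factorization of an odd n > 1: returns (a, b) with n = a * b.
fermatFactor :: Integer -> (Integer, Integer)
fermatFactor n = go x0
  where
    s  = isqrt n
    x0 = if s * s == n then s else s + 1
    go x
      | y * y == d = (x - y, x + y)   -- n = x^2 - y^2 = (x - y)(x + y)
      | otherwise  = go (x + 1)
      where
        d = x * x - n
        y = isqrt d

-- e.g. fermatFactor 2021 == (43, 47), and fermatFactor 13 == (1, 13).
```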
http://math.stackexchange.com/questions/289594/vector-span-proof?answertab=active
# Vector span proof Let $c_1,c_2,c_3,c_4,c_5$ be vectors in $R^4$ I'm trying to show that the set (call it set A) {$c_1,c_2,c_3,c_4,c_5$} spans $R^4$ if and only if the set (say, set B) of vectors {$c_1+c_2,c_2+c_3,c_3+c_4,c_4+c_5,c_5+c_1$} spans $R^4$ I tried a proof by contradiction to show that there can't exist a vector b that is formed by a linear combination of vectors from B but not from A, but that doesn't seem to be the case. Am I messing something up somewhere/is there something I could be doing differently, possibly with matrices? - ## 2 Answers Hint: $(c_1 + c_2) = (c_1) + (c_2)$ $(c_1) = \frac {1}{2} \left[ (c_1 + c_2) - (c_2+c_3)+(c_3+c_4) - (c_4+c_5) + (c_5 + c_1)\right]$ - Sorry, I'm not quite sure how that's supposed to help. Could you elaborate a bit further, please? – user1903336 Jan 29 at 8:05 2 @user1903336 Can you show that $Span B \subset Span A$ using the first equation (and it's cyclic versions)? Similarly show that $Span A \subset Span B$ using the second equation. Hence conclude that $Span A = Span B$. – Calvin Lin Jan 29 at 8:09 My fundamentals are very fuzzy, sorry. Do you mind explaining yourself a bit more? – user1903336 Jan 29 at 8:15 1 If $b \in Span B$, then $b = \sum b_i ( c_i + c_{i+1})$ (where $c_6 = c_1)$ for some coefficients $b_i$. Then, $b = \sum (b_i + b_{i-1} )c_i$ shows that $b \in Span A$. Thus, $Span B \subset Span A$. Do the second statement in a similar way. – Calvin Lin Jan 29 at 8:19 1 The hard part would be finding the equations stated in the hint. The rewrite should be obvious, by just expanding and combining terms. – Calvin Lin Jan 29 at 8:46 show 5 more comments You can actually prove something stronger, that A and B always generate the same vector space, by showing that every vector in A can be written as a linear combination of vectors in B, and every vector in B is a linear combination of vectors in A. - Can you explain why that would work/how I would go about doing that? – user1903336 Jan 29 at 8:16
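The first hint can be verified by direct expansion, and the expansion also shows why having five vectors (an odd number) matters:

$$(c_1+c_2)-(c_2+c_3)+(c_3+c_4)-(c_4+c_5)+(c_5+c_1)=2c_1,$$

so $c_1$ (and, by cyclically shifting the indices, each $c_i$) is a linear combination of the vectors in $B$; the alternating signs close up correctly only because the cycle has odd length. Together with $(c_i+c_{i+1})=(c_i)+(c_{i+1})$, this gives $\operatorname{Span} A \subset \operatorname{Span} B$ and $\operatorname{Span} B \subset \operatorname{Span} A$, so the two sets span $R^4$ (or fail to) together.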
http://mathhelpforum.com/calculus/158584-how-does-simplify-simple.html
# Thread:

1. ## How does this simplify to be this? (Simple)

I can't see how these two expressions are equal: $$\frac{ \frac{1}{2} }{3 - \left[ - (x-2) \right]} = \frac{1}{3} \left[ \frac{ \frac{1}{6} }{1 - \left[ -\frac{ (x-2) }{3} \right]} \right]$$ I can see them pulling a 3 out of the bottom, but why is the top being multiplied by 1/3? I took a factor of 3 out of the denominator, so why do I have to change the top number to do so? Can someone explain to me what I'm missing?

2. You are right; they are not equal.
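Spelling the factoring out (ordinary algebra, added here for the record), only one of the two adjustments is needed:

$$\frac{\tfrac{1}{2}}{3-\left[-(x-2)\right]}
=\frac{\tfrac{1}{2}}{3\left(1-\left[-\tfrac{x-2}{3}\right]\right)}
=\frac{1}{3}\cdot\frac{\tfrac{1}{2}}{1-\left[-\tfrac{x-2}{3}\right]}
=\frac{\tfrac{1}{6}}{1-\left[-\tfrac{x-2}{3}\right]}.$$

So either keep the $\frac13$ in front with a numerator of $\frac12$, or absorb it into the numerator to get $\frac16$; writing both the $\frac13$ and the $\frac16$, as on the right-hand side above, is off by a factor of $3$.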
http://mathoverflow.net/questions/43864/describe-subsets-of-the-integers-closed-under-the-binary-operation-axby/43908
## describe subsets of the integers closed under the binary operation Ax+By

Could one describe the subsets of the integers closed under the binary operation Ax+By where A and B are arbitrary fixed integers? That is, describe the subsets S of the integers such that if $x,y\in S$ then $Ax+By\in S$. Or just the minimal such subsets containing 1. Do I guess correctly that this question belongs to additive combinatorics? -

Are A and B fixed? – Qiaochu Yuan Oct 27 2010 at 20:17

I can't figure out what you mean: $Ax+By$, whatever it is, is not a "binary operation". So does "$A=2$, $B=1$" mean that if $x$ and $y$ are elements of your set, then $2x+y$ is? – Robin Chapman Oct 27 2010 at 20:26

@Yuan: yes, A and B are fixed. @Robin: yes, true. (Shall add this in the main post). Sorry about being unclear. – mmm Oct 27 2010 at 20:47

Your claim about "it's an arithmetical progression in the case A=B" is wrong. You can just write the terms of the minimal $S$ containing 1 as polynomials in A, and note that for any $d$ there is only a finite number of polynomials in $S$ whose degree is less than $d$. And in the case $A=2$, $B=1$ you get $S$ equal to the set of odd positive numbers, i.e. just one arithmetical progression. – Fiktor Oct 27 2010 at 21:21

@Fiktor: true, thank you!...I'd delete my answers. The question still remains, though. – mmm Oct 27 2010 at 21:43

## 7 Answers

I think the problem is pretty much solved in a series of papers by Klarner et al:

David A. Klarner and Karel Post, Some fascinating integer sequences. A collection of contributions in honour of Jack van Lint. Discrete Math. 106/107 (1992), 303–309, MR 93i:11031

D. G. Hoffman and D. A. Klarner, Sets of integers closed under affine operators—the finite basis theorem. Pacific J. Math. 83 (1979), no. 1, 135–144, MR 83e:10080

D. G. Hoffman and D. A. Klarner, Sets of integers closed under affine operators—the closure of finite sets. Pacific J. Math. 78 (1978), no. 2, 337–344, MR 80i:10075

-

1 Thanks, these references are helpful! Essentially they seem to imply that such (minimal) sets are finite unions of possibly bounded arithmetical progressions. If I could bother you further by asking whether these questions are indeed in the field of additive combinatorics? So that I know whom to ask about such questions... – mmm Oct 28 2010 at 16:15

Some trivial observations. If $A=1, B=-1$ we get subgroups of $\mathbb{Z}$. If $A=1, B=1$ we get positive cones (sets closed under positive linear combinations). If $A=k, B=0$ we get sets closed under multiplication by $k$. If $A=2, B=-1$ and $1, 2 \in S$, then $S=\mathbb{Z}$. To see this, let $n \in \mathbb{N}$. By induction we may assume that $n-2, n-1 \in S$. But then $n=2(n-1)-(n-2) \in S$. Note also that clearly $0 \in S$, and that $-n=n-2n \in S$. -

Actually if $A=2, B=-1$ or more generally $A+B=1$ the minimal set $S$ is $S=\{ 1 \}$. – Nick S Oct 27 2010 at 23:59

Yes indeed, but I am randomly assuming that in this case 2 is also in $S$. I am addressing the first part of the question. It is probably hard to characterize $S$ for arbitrary $A$ and $B$, but as a start perhaps we can give simple conditions for when $S=\mathbb{Z}$.
– Tony Huynh Oct 28 2010 at 0:06

All sets of integers are closed under this binary operation when A=1 and B=0. -

Let $(n)$ be the closed set generated by $n$. Clearly, $(0)=\{0\}$. As you probably figured out, it suffices to describe $(1)$. Indeed, $(n)=n(1)$ and any such set is a union of all $(n)$ it contains. I am too drunk right now to try to describe $(1)$ in any nontrivial way. Clearly, $(1)$ is contained in $\{ F(A,B) \}$ where $F$ ranges over the integer polynomials with positive coefficients. You can pinpoint this set further by saying that it contains $1$ and $A+B$ and is closed under substitutions. Now I have no clue how to describe sets of polynomials closed under substitutions but will have a go at it later... -

Wouldn't $(1)$ be the set of all integers of the form $F(A+B)$ where $F$ is a polynomial with positive integer coefficients? – Hany Oct 28 2010 at 7:09

No, $A^2$ is not in $(1)$ and $A(A+B)+B\in (1)$ is not in your set. – Bugs Bunny Oct 28 2010 at 8:47

Finding all the solutions is probably hard; if I am not mistaken any set containing $d\mathbb{Z}$ where $d = \gcd(A, B)$ is a solution, but this is far from optimal. If you are looking for the minimal $S$, just by looking over the general pattern, you are solving multiple higher-order recurrences at once (at each step the number of recurrences increases). You start with $x_0=1, x_1= A+B$ and at each step, given $x_0,\dots, x_{2^n}$, you try to figure out a new term $x_{??} = A x_{k}+ B x_{m}$ with $k,m \leq 2^n$. In particular the solutions to the following recurrences will always be in your set: $$x_1=1, \quad x_{n+1}= (A+B) x_n \,.$$ $$x_1=1, \; x_2= A+B, \quad x_{n+1}= A x_n+ Bx_{n-1} \,.$$ $$x_1=1, \; x_2= A+B, \quad x_{n+1}= B x_n+ Ax_{n-1} \,.$$ but also you have things like $$x_1, x_2, x_3 \in \{ 1,\, A+B,\, A+AB+B^2,\, A^2+AB+B,\, (A+B)^2 \}, \quad x_{n+1}= A x_n+ Bx_{n-2} \,.$$ $$x_1, x_2, x_3 \in \{ A+AB+B^2,\, A^2+AB+B,\, (A+B)^2 \}, \quad x_{n+1}= A x_{n-1}+ Bx_{n-2} \,.$$ $$x_1, x_2, x_3 \in \{ A+AB+B^2,\, A^2+AB+B,\, (A+B)^2 \}, \quad x_{n+1}= B x_{n-1}+ Ax_{n-2} \,.$$ $$x_1, x_2, x_3 \in \{ A+AB+B^2,\, A^2+AB+B,\, (A+B)^2 \}, \quad x_{n+1}= B x_{n}+ Ax_{n-2} \,.$$ and so on. I might be wrong, but if I am not mistaken, the question you are asking is equivalent to the following: for all possible $k$, describe recursively the general solution to all the recurrences of order $k$ of the type $x_{n+k} = A x_{n+m} + Bx_n$ and $x_{n+k} = B x_{n+m} + Ax_n$, where $x_1,\dots, x_{k-1}$ are solutions to a recurrence of this type of order at most $k-1$. -

Rather than attempt an answer, I suggest a generalization: look at the appropriate clones on the integers, or even on the natural numbers when A and B are positive. Considering the latter, it is clear that the 2-clone (set of functions in two variables closed under projections and composition) containing x + y contains any other 2-clone generated by Ax + By, where A and B are positive integers. Also, the clone containing Ax + By also contains many operations of the form Ax + Cy and Dx + By, where C is a positive multiple of B and belongs to a certain subsemigroup of the natural numbers, and where D has analogous restrictions. Once the various clones are understood, then you can plug in values for x and y to see what semigroups arise. If A and B are larger than two, you will get things that are related to, but I suspect are properly contained in, subsemigroups studied in the Frobenius postage-stamp (or coin) problem. I think the clones will be a richer class of items to study, however. -

I look forward to reading those papers of Klarner and Hoffman.
It appears that (as a special case of their results) when $\gcd(A,B)=1$ then any closed set is a finite union of arithmetic progressions ( infinite or bi-infinite) possibly augmented by a finite set of integers. I can't tell from mathscinet if they discuss the case $\gcd(A,B)>1$. If $A=B=2$ then $\{1\} \cup \{6k+4 \mid k \ge 0\}$ describes a closed set. The case $A=B=3$ is more intricate. Consider the infinite set of integers $\{1,6,21,66,201,\dots\}$ and the infinite set of (disjoint) positive integer progressions $\{36+45k,111+135k, 336+405k\dots\}$ where each element is 3 more than 3 times the previous one (A nice base 3 description is possible). I believe I can prove that together they make the smallest closed set containing $1$. None of the integers in the first set belong to any arithmetic progression in the second set. -
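For experimenting with these minimal closed sets (for instance the $A=B=2$ and $A=B=3$ examples above), a small brute-force sketch like the following is handy. The names here (`closure`, `bound`) are just illustrative; the truncation at `bound` loses nothing when $A$, $B$ and the starting elements are positive, since then $Ax+By \ge \max(x,y)$ and every element below the bound is produced from elements below the bound.

```haskell
import qualified Data.Set as Set

-- All elements of the closure of `start` under (x, y) -> a*x + b*y
-- whose absolute value is at most `bound`.
closure :: Integer -> Integer -> Integer -> [Integer] -> Set.Set Integer
closure a b bound start = go (Set.fromList start)
  where
    go s
      | Set.size s' == Set.size s = s          -- nothing new: fixed point reached
      | otherwise                 = go s'
      where
        s'  = Set.union s (Set.fromList new)
        new = [ v | x <- Set.toList s
                  , y <- Set.toList s
                  , let v = a * x + b * y
                  , abs v <= bound ]

-- e.g. Set.toAscList (closure 3 3 400 [1]) lists the small elements of the
-- smallest closed set containing 1 for A = B = 3, for comparison with the
-- description given above.
```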
http://mathhelpforum.com/geometry/1521-geometry-algebra.html
# Thread:

1. ## Help

A triangle ABC, a=BC, b=AC, c=AB, and the angle C≥60°. Show that $(a+b)\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right) \geq 4+\frac{1}{\sin (C/2)}$.

2. Why does no one answer me? Did I fail to express the question?

3. I want to help you but I do not understand your question, sorry. Can you draw it?

4. A triangle: [rough ASCII sketch of triangle ABC, apex A at the top, base vertices C and B, with side b = AC on the left, c = AB on the right and a = BC along the base]

I hope you can understand my question now. Looking forward to your help, thank you.

5. Is it isosceles? Because in the picture it is not, and in the first post it is. If it is isosceles I have found a proof.

6. Yes

7. If I understand the problem correctly the angle can be any angle, not just more than 60, and the inequality is really an equality. Let me explain:

Since $a=b$ the problem reduces to proving $2a(\frac{2}{a}+\frac{1}{c}) \geq 4+\frac{1}{\sin (C/2)}$. Thus, $4+\frac{2a}{c}\geq 4+\frac{1}{\sin (C/2)}$. Thus, (if and only if) $\frac{2a}{c}\geq \frac{1}{\sin (C/2)}$. But the two sides are always equal, thus, $\frac{2a}{c}= \frac{1}{\sin (C/2)}$. Because if an isosceles triangle has sides $a$ with angle between them $C$ and $c$ is its third side, then $2a\sin (C/2)=c$. The reason why is simple: by the law of cosines, $a^2+a^2-2a^2\cos C=c^2$, thus $2a^2(1-\cos C)=c^2$; by the half-angle identity $4a^2\sin^2(C/2)=c^2$, thus $2a\sin(C/2)=c$, thus $\frac{2a}{c}=\frac{1}{\sin(C/2)}$.

Thus your inequality is in reality always an equality, thus the limitation on the angle was not needed. Thus, your problem was tricky in that it was making me believe the angle needs to be greater than 60, which is not true.

Here are some examples which show what you are saying is an equality. A right triangle with sides $1,1,\sqrt{2}$; here the angle is 90. An equilateral triangle with sides $1,1,1$; here the angle is 60.

8. Thank you for your help. But after this question, there is another one asking me to guess whether it can still be right when a and b are not equal. I've put many sets of numbers in it, which shows it is always right. But I can't prove it. I think I have to trouble you again. Just like this: A triangle ABC, a=BC, b=AC, c=AB, and the angle C≥60°.

9. I am trying to solve your problem for all triangles but I did not find a proof yet. Are you certain it works for the cases that you tried it for? Where did you even get this problem? Any ideas which I could use in my proof?

10. Now I'm sure that when angle C≥60°, it is always the case. I've looked for this problem in many reference books, one of which says that it is true, but it doesn't offer a proof. A triangle ABC, a=BC, b=AC, c=AB, and the angle C≥60°.
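Assuming the inequality in post 1 is $(a+b)\left(\frac1a+\frac1b+\frac1c\right) \ge 4+\frac{1}{\sin(C/2)}$ (the form that post 7's reduction corresponds to when $a=b$; the general statement did not survive in the extracted text), a quick numerical spot-check over triangles given by $a$, $b$ and the included angle $C$ can be run with a few lines of Haskell. This is only an experiment, not a proof:

```haskell
-- True when the (reconstructed) inequality holds for the triangle with
-- sides a, b and included angle gammaDeg (in degrees); the third side c
-- comes from the law of cosines.
holds :: Double -> Double -> Double -> Bool
holds a b gammaDeg = lhs + 1e-9 >= rhs   -- small tolerance for the equality cases
  where
    gamma = gammaDeg * pi / 180
    c     = sqrt (a * a + b * b - 2 * a * b * cos gamma)
    lhs   = (a + b) * (1 / a + 1 / b + 1 / c)
    rhs   = 4 + 1 / sin (gamma / 2)

-- Try, e.g.:
--   and [ holds a b g | a <- [0.5,1..5], b <- [0.5,1..5], g <- [60,65..175] ]
```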
http://mathhelpforum.com/calculus/4793-sum-squares-minimized.html
# Thread:

1. ## Help me please

Hello, can anyone help me in solving the following problem please?

Show that if $x_1,...,x_n$ is a set of real numbers such that $\sum_{i=1}^{n} x_i=a$, then the quantity $\sum_{i=1}^{n} x_i^2$ is minimized when $x_1=x_2=...=x_n=a/n$.

The problem gives some hint for the solution: assume that $x_i \neq x_j$ and show that this contradicts the assumption that $x_1,...,x_n$ minimize the sum of squares. To do this, show that replacing $x_i$ and $x_j$ each by $(x_i+x_j)/2$ reduces the sum of squares while ensuring $x_1+...+x_{i-1}+(x_i+x_j)/2+x_{i+1}+...+x_{j-1}+(x_i+x_j)/2+x_{j+1}+...+x_n=a$.

Thanks.

2. Originally Posted by TFT
Hello, can anyone help me in solving the following problem please?
Show that if $x_1,...,x_n$ is a set of real numbers such that $\sum_{i=1}^{n} x_i=a$, then the quantity $\sum_{i=1}^{n} x_i^2$ is minimized when $x_1=x_2=...=x_n=a/n$.

You want to minimize $f(x_1,...,x_n)=x_1^2+x_2^2+...+x_n^2$ with constraint $g(x_1,x_2,...,x_n)=x_1+...+x_n=a$.

Use Lagrange multipliers, $\nabla f=k\nabla g$. Thus, $2x_1(1,0,...,0)+2x_2(0,1,...,0)+...+2x_n(0,0,...,1)$ $=k(1,0,...,0)+k(0,1,...,0)+...+k(0,0,...,1)$.

From here we see that $x_1=x_2=...=x_n=k/2$. Thus, $x_1=...=x_n=\frac{a}{n}$.

Take the point $(a,0,...,0)$ and note that $f(a,0,...,0)\geq f(x_1,x_2,...,x_n)$. Therefore, the point we found was the minimum!
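The hint can also be carried through directly, without Lagrange multipliers; the key identity is elementary:

$$x_i^2+x_j^2-2\left(\frac{x_i+x_j}{2}\right)^2=\frac{(x_i-x_j)^2}{2}>0 \qquad \text{whenever } x_i\neq x_j,$$

so replacing $x_i$ and $x_j$ by their average keeps the sum $\sum_{i=1}^{n} x_i=a$ unchanged but strictly decreases $\sum_{i=1}^{n} x_i^2$. Hence a minimizer cannot have two unequal coordinates, and all the $x_i$ must equal $a/n$.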
http://www.haskell.org/haskellwiki/index.php?title=Typeclassopedia&diff=prev&oldid=54423
# Typeclassopedia

### From HaskellWiki

## Revision as of 18:15, 23 October 2012

By Brent Yorgey, [email protected]

Originally published 12 March 2009 in issue 13 of the Monad.Reader. Ported to the Haskell wiki in November 2011 by Geheimdienst.

This is now the official version of the Typeclassopedia and supersedes the version published in the Monad.Reader. Please help update and extend it by editing it yourself or by leaving comments, suggestions, and questions on the talk page.

# 1 Abstract

The standard Haskell libraries feature a number of type classes with algebraic or category-theoretic underpinnings. Becoming a fluent Haskell hacker requires intimate familiarity with them all, yet acquiring this familiarity often involves combing through a mountain of tutorials, blog posts, mailing list archives, and IRC logs. The goal of this document is to serve as a starting point for the student of Haskell wishing to gain a firm grasp of its standard type classes. The essentials of each type class are introduced, with examples, commentary, and extensive references for further reading.

# 2 Introduction

Have you ever had any of the following thoughts?

• What the heck is a monoid, and how is it different from a monad?
• I finally figured out how to use Parsec with do-notation, and someone told me I should use something called `Applicative` instead. Um, what?
• Someone in the #haskell IRC channel used `(***)`, and when I asked lambdabot to tell me its type, it printed out scary gobbledygook that didn’t even fit on one line! Then someone used `fmap fmap fmap` and my brain exploded.
• When I asked how to do something I thought was really complicated, people started typing things like `zip.ap fmap.(id &&& wtf)` and the scary thing is that they worked! Anyway, I think those people must actually be robots because there’s no way anyone could come up with that in two seconds off the top of their head.

If you have, look no further! You, too, can write and understand concise, elegant, idiomatic Haskell code with the best of them.

There are two keys to an expert Haskell hacker’s wisdom:

1. Understand the types.
2. Gain a deep intuition for each type class and its relationship to other type classes, backed up by familiarity with many examples.

It’s impossible to overstate the importance of the first; the patient student of type signatures will uncover many profound secrets. Conversely, anyone ignorant of the types in their code is doomed to eternal uncertainty. “Hmm, it doesn’t compile ... maybe I’ll stick in an `fmap` here ... nope, let’s see ... maybe I need another `(.)` somewhere? ... um ...”

The second key—gaining deep intuition, backed by examples—is also important, but much more difficult to attain. A primary goal of this document is to set you on the road to gaining such intuition. However—

There is no royal road to Haskell.
—Euclid This document can only be a starting point, since good intuition comes from hard work, not from learning the right metaphor. Anyone who reads and understands all of it will still have an arduous journey ahead—but sometimes a good starting point makes a big difference. It should be noted that this is not a Haskell tutorial; it is assumed that the reader is already familiar with the basics of Haskell, including the standard `Prelude`, the type system, data types, and type classes. The type classes we will be discussing and their interrelationships: ∗ `Semigroup` can be found in the `semigroups` package, `Apply` in the `semigroupoids` package, and `Comonad` in the `comonad` package. • Solid arrows point from the general to the specific; that is, if there is an arrow from `Foo` to `Bar` it means that every `Bar` is (or should be, or can be made into) a `Foo`. • Dotted arrows indicate some other sort of relationship. • `Monad` and `ArrowApply` are equivalent. • `Semigroup`, `Apply` and `Comonad` are greyed out since they are not actually (yet?) in the standard Haskell libraries ∗. One more note before we begin. The original spelling of “type class” is with two words, as evidenced by, for example, the Haskell 98 Revised Report, early papers on type classes like Type classes in Haskell and Type classes: exploring the design space, and Hudak et al.’s history of Haskell. However, as often happens with two-word phrases that see a lot of use, it has started to show up as one word (“typeclass”) or, rarely, hyphenated (“type-class”). When wearing my prescriptivist hat, I prefer “type class”, but realize (after changing into my descriptivist hat) that there's probably not much I can do about it. We now begin with the simplest type class of all: `Functor`. # 3 Functor The `Functor` class (haddock) is the most basic and ubiquitous type class in the Haskell libraries. A simple intuition is that a `Functor` represents a “container” of some sort, along with the ability to apply a function uniformly to every element in the container. For example, a list is a container of elements, and we can apply a function to every element of a list, using `map`. As another example, a binary tree is also a container of elements, and it’s not hard to come up with a way to recursively apply a function to every element in a tree. Another intuition is that a `Functor` represents some sort of “computational context”. This intuition is generally more useful, but is more difficult to explain, precisely because it is so general. Some examples later should help to clarify the `Functor`-as-context point of view. In the end, however, a `Functor` is simply what it is defined to be; doubtless there are many examples of `Functor` instances that don’t exactly fit either of the above intuitions. The wise student will focus their attention on definitions and examples, without leaning too heavily on any particular metaphor. Intuition will come, in time, on its own. ## 3.1 Definition Here is the type class declaration for `Functor`: ```class Functor f where fmap :: (a -> b) -> f a -> f b``` `Functor` is exported by the `Prelude`, so no special imports are needed to use it. First, the `f a` and `f b` in the type signature for `fmap` tell us that `f` isn’t just a type; it is a type constructor which takes another type as a parameter. (A more precise way to say this is that the kind of `f` must be `* -> *`.) 
For example, `Maybe` is such a type constructor: `Maybe` is not a type in and of itself, but requires another type as a parameter, like `Maybe Integer`. So it would not make sense to say `instance Functor Integer`, but it could make sense to say `instance Functor Maybe`. Now look at the type of `fmap`: it takes any function from `a` to `b`, and a value of type `f a`, and outputs a value of type `f b`. From the container point of view, the intention is that `fmap` applies a function to each element of a container, without altering the structure of the container. From the context point of view, the intention is that `fmap` applies a function to a value without altering its context. Let’s look at a few specific examples. ## 3.2 Instances ∗ Recall that `[]` has two meanings in Haskell: it can either stand for the empty list, or, as here, it can represent the list type constructor (pronounced “list-of”). In other words, the type `[a]` (list-of-`a`) can also be written `[] a`. ∗ You might ask why we need a separate `map` function. Why not just do away with the current list-only `map` function, and rename `fmap` to `map` instead? Well, that’s a good question. The usual argument is that someone just learning Haskell, when using `map` incorrectly, would much rather see an error about lists than about `Functor`s. As noted before, the list constructor `[]` is a functor ∗; we can use the standard list function `map` to apply a function to each element of a list ∗. The `Maybe` type constructor is also a functor, representing a container which might hold a single element. The function `fmap g` has no effect on `Nothing` (there are no elements to which `g` can be applied), and simply applies `g` to the single element inside a `Just`. Alternatively, under the context interpretation, the list functor represents a context of nondeterministic choice; that is, a list can be thought of as representing a single value which is nondeterministically chosen from among several possibilities (the elements of the list). Likewise, the `Maybe` functor represents a context with possible failure. These instances are: ```instance Functor [] where fmap _ [] = [] fmap g (x:xs) = g x : fmap g xs -- or we could just say fmap = map   instance Functor Maybe where fmap _ Nothing = Nothing fmap g (Just a) = Just (g a)``` As an aside, in idiomatic Haskell code you will often see the letter `f` used to stand for both an arbitrary `Functor` and an arbitrary function. In this document, `f` represents only `Functor`s, and `g` or `h` always represent functions, but you should be aware of the potential confusion. In practice, what `f` stands for should always be clear from the context, by noting whether it is part of a type or part of the code. There are other `Functor` instances in the standard libraries; below are a few. Note that some of these instances are not exported by the `Prelude`; to access them, you can import `Control.Monad.Instances`. • `Either e` is an instance of `Functor`; `Either e a` represents a container which can contain either a value of type `a`, or a value of type `e` (often representing some sort of error condition). It is similar to `Maybe` in that it represents possible failure, but it can carry some extra information about the failure as well. • `((,) e)` represents a container which holds an “annotation” of type `e` along with the actual value it holds. 
It might be clearer to write it as `(e,)`, by analogy with an operator section like `(1+)`, but that syntax is not allowed in types (although it is allowed in expressions with the `TupleSections` extension enabled). However, you can certainly think of it as `(e,)`. • `((->) e)` (which can be thought of as `(e ->)`; see above), the type of functions which take a value of type `e` as a parameter, is a `Functor`. As a container, `(e -> a)` represents a (possibly infinite) set of values of `a`, indexed by values of `e`. Alternatively, and more usefully, `((->) e)` can be thought of as a context in which a value of type `e` is available to be consulted in a read-only fashion. This is also why `((->) e)` is sometimes referred to as the reader monad; more on this later. • `IO` is a `Functor`; a value of type `IO a` represents a computation producing a value of type `a` which may have I/O effects. If `m` computes the value `x` while producing some I/O effects, then `fmap g m` will compute the value `g x` while producing the same I/O effects. • Many standard types from the containers library (such as `Tree`, `Map`, and `Sequence`) are instances of `Functor`. A notable exception is `Set`, which cannot be made a `Functor` in Haskell (although it is certainly a mathematical functor) since it requires an `Ord` constraint on its elements; `fmap` must be applicable to any types `a` and `b`. However, `Set` (and other similarly restricted data types) can be made an instance of a suitable generalization of `Functor`, either by making `a` and `b` arguments to the `Functor` type class themselves, or by adding an associated constraint. Exercises 1. Implement `Functor` instances for `Either e` and `((->) e)`. 2. Implement `Functor` instances for `((,) e)` and for `Pair`, defined as `data Pair a = Pair a a` Explain their similarities and differences. 3. Implement a `Functor` instance for the type `ITree`, defined as ```data ITree a = Leaf (Int -> a) | Node [ITree a]``` 4. Give an example of a type of kind `* -> *` which cannot be made an instance of `Functor` (without using `undefined`). 5. Is this statement true or false? The composition of two `Functor`s is also a `Functor`. If false, give a counterexample; if true, prove it by exhibiting some appropriate Haskell code. ## 3.3 Laws As far as the Haskell language itself is concerned, the only requirement to be a `Functor` is an implementation of `fmap` with the proper type. Any sensible `Functor` instance, however, will also satisfy the functor laws, which are part of the definition of a mathematical functor. There are two: ```fmap id = id fmap (g . h) = (fmap g) . (fmap h)``` ∗ Technically, these laws make `f` and `fmap` together an endofunctor on Hask, the category of Haskell types (ignoring ⊥, which is a party pooper). See Wikibook: Category theory. Together, these laws ensure that `fmap g` does not change the structure of a container, only the elements. Equivalently, and more simply, they ensure that `fmap g` changes a value without altering its context ∗. The first law says that mapping the identity function over every item in a container has no effect. The second says that mapping a composition of two functions over every item in a container is the same as first mapping one function, and then mapping the other. As an example, the following code is a “valid” instance of `Functor` (it typechecks), but it violates the functor laws. Do you see why? 
```-- Evil Functor instance instance Functor [] where fmap _ [] = [] fmap g (x:xs) = g x : g x : fmap g xs``` Any Haskeller worth their salt would reject this code as a gruesome abomination. Unlike some other type classes we will encounter, a given type has at most one valid instance of `Functor`. This can be proven via the free theorem for the type of `fmap`. In fact, GHC can automatically derive `Functor` instances for many data types. A similar argument also shows that any `Functor` instance satisfying the first law (`fmap id = id`) will automatically satisfy the second law as well. Practically, this means that only the first law needs to be checked (usually by a very straightforward induction) to ensure that a `Functor` instance is valid. Exercises 1. Although it is not possible for a `Functor` instance to satisfy the first `Functor` law but not the second, the reverse is possible. Give an example of a (bogus) `Functor` instance which satisfies the second law but not the first. 2. Which laws are violated by the evil `Functor` instance for list shown above: both laws, or the first law alone? Give specific counterexamples. ## 3.4 Intuition There are two fundamental ways to think about `fmap`. The first has already been mentioned: it takes two parameters, a function and a container, and applies the function “inside” the container, producing a new container. Alternately, we can think of `fmap` as applying a function to a value in a context (without altering the context). Just like all other Haskell functions of “more than one parameter”, however, `fmap` is actually curried: it does not really take two parameters, but takes a single parameter and returns a function. For emphasis, we can write `fmap`’s type with extra parentheses: `fmap :: (a -> b) -> (f a -> f b)`. Written in this form, it is apparent that `fmap` transforms a “normal” function (`g :: a -> b`) into one which operates over containers/contexts (`fmap g :: f a -> f b`). This transformation is often referred to as a lift; `fmap` “lifts” a function from the “normal world” into the “`f` world”. ## 3.5 Further reading A good starting point for reading about the category theory behind the concept of a functor is the excellent Haskell wikibook page on category theory. # 4 Applicative A somewhat newer addition to the pantheon of standard Haskell type classes, applicative functors represent an abstraction lying in between `Functor` and `Monad` in expressivity, first described by McBride and Paterson. The title of their classic paper, Applicative Programming with Effects, gives a hint at the intended intuition behind the `Applicative` type class. It encapsulates certain sorts of “effectful” computations in a functionally pure way, and encourages an “applicative” programming style. Exactly what these things mean will be seen later. ## 4.1 Definition Recall that `Functor` allows us to lift a “normal” function to a function on computational contexts. But `fmap` doesn’t allow us to apply a function which is itself in a context to a value in a context. `Applicative` gives us just such a tool, `(<*>)`. It also provides a method, `pure`, for embedding values in a default, “effect free” context. Here is the type class declaration for `Applicative`, as defined in `Control.Applicative`: ```class Functor f => Applicative f where pure :: a -> f a (<*>) :: f (a -> b) -> f a -> f b``` Note that every `Applicative` must also be a `Functor`. 
In fact, as we will see, `fmap` can be implemented using the `Applicative` methods, so every `Applicative` is a functor whether we like it or not; the `Functor` constraint forces us to be honest. ∗ Recall that `($)` is just function application: `f $ x = f x`. As always, it’s crucial to understand the type signatures. First, consider `(<*>)`: the best way of thinking about it comes from noting that the type of `(<*>)` is similar to the type of `($)` ∗, but with everything enclosed in an `f`. In other words, `(<*>)` is just function application within a computational context. The type of `(<*>)` is also very similar to the type of `fmap`; the only difference is that the first parameter is `f (a -> b)`, a function in a context, instead of a “normal” function `(a -> b)`. `pure` takes a value of any type `a`, and returns a context/container of type `f a`. The intention is that `pure` creates some sort of “default” container or “effect free” context. In fact, the behavior of `pure` is quite constrained by the laws it should satisfy in conjunction with `(<*>)`. Usually, for a given implementation of `(<*>)` there is only one possible implementation of `pure`. (Note that previous versions of the Typeclassopedia explained `pure` in terms of a type class `Pointed`, which can still be found in the `pointed` package. However, the current consensus is that `Pointed` is not very useful after all. For a more detailed explanation, see Why not Pointed?) ## 4.2 Laws Traditionally, there are four laws that `Applicative` instances should satisfy ∗. In some sense, they are all concerned with making sure that `pure` deserves its name: • The identity law: `pure id <*> v = v` • Homomorphism: `pure f <*> pure x = pure (f x)` Intuitively, applying a non-effectful function to a non-effectful argument in an effectful context is the same as just applying the function to the argument and then injecting the result into the context with `pure`. • Interchange: `u <*> pure y = pure ($ y) <*> u` Intuitively, this says that when evaluating the application of an effectful function to a pure argument, the order in which we evaluate the function and its argument doesn't matter. • Composition: `u <*> (v <*> w) = pure (.) <*> u <*> v <*> w` This one is the trickiest law to gain intuition for. In some sense it is expressing a sort of associativity property of `(<*>)`. The reader may wish to simply convince themselves that this law is type-correct. Considered as left-to-right rewrite rules, the homomorphism, interchange, and composition laws actually constitute an algorithm for transforming any expression using `pure` and `(<*>)` into a canonical form with only a single use of `pure` at the very beginning and only left-nested occurrences of `(<*>)`. Composition allows reassociating `(<*>)`; interchange allows moving occurrences of `pure` leftwards; and homomorphism allows collapsing multiple adjacent occurrences of `pure` into one. There is also a law specifying how `Applicative` should relate to `Functor`: `fmap g x = pure g <*> x` It says that mapping a pure function `g` over a context `x` is the same as first injecting `g` into a context with `pure`, and then applying it to `x` with `(<*>)`. In other words, we can decompose `fmap` into two more atomic operations: injection into a context, and application within a context. The `Control.Applicative` module also defines `(<$>)` as a synonym for `fmap`, so the above law can also be expressed as: `g <$> x = pure g <*> x`. 
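To make the laws concrete, here is a small check of three of them at the `Maybe` instance (a sketch added for illustration; each definition should evaluate to `True`):

```haskell
import Control.Applicative

-- Identity: pure id <*> v = v.
identityLaw :: Bool
identityLaw = (pure id <*> Just "hello") == Just "hello"

-- Homomorphism: pure g <*> pure x = pure (g x).
homomorphismLaw :: Bool
homomorphismLaw = (pure (+ 1) <*> pure 2) == (pure 3 :: Maybe Int)

-- Interchange: u <*> pure y = pure ($ y) <*> u.
interchangeLaw :: Bool
interchangeLaw = (u <*> pure 10) == (pure ($ 10) <*> u)
  where
    u :: Maybe (Int -> Int)
    u = Just (* 3)
```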
## 4.3 Instances Most of the standard types which are instances of `Functor` are also instances of `Applicative`. `Maybe` can easily be made an instance of `Applicative`; writing such an instance is left as an exercise for the reader. The list type constructor `[]` can actually be made an instance of `Applicative` in two ways; essentially, it comes down to whether we want to think of lists as ordered collections of elements, or as contexts representing multiple results of a nondeterministic computation (see Wadler’s How to replace failure by a list of successes). Let’s first consider the collection point of view. Since there can only be one instance of a given type class for any particular type, one or both of the list instances of `Applicative` need to be defined for a `newtype` wrapper; as it happens, the nondeterministic computation instance is the default, and the collection instance is defined in terms of a `newtype` called `ZipList`. This instance is: ```newtype ZipList a = ZipList { getZipList :: [a] }   instance Applicative ZipList where pure = undefined -- exercise (ZipList gs) <*> (ZipList xs) = ZipList (zipWith ($) gs xs)``` To apply a list of functions to a list of inputs with `(<*>)`, we just match up the functions and inputs elementwise, and produce a list of the resulting outputs. In other words, we “zip” the lists together with function application, `($)`; hence the name `ZipList`. The other `Applicative` instance for lists, based on the nondeterministic computation point of view, is: ```instance Applicative [] where pure x = [x] gs <*> xs = [ g x | g <- gs, x <- xs ]``` Instead of applying functions to inputs pairwise, we apply each function to all the inputs in turn, and collect all the results in a list. Now we can write nondeterministic computations in a natural style. To add the numbers `3` and `4` deterministically, we can of course write `(+) 3 4`. But suppose instead of `3` we have a nondeterministic computation that might result in `2`, `3`, or `4`; then we can write `pure (+) <*> [2,3,4] <*> pure 4` or, more idiomatically, `(+) <$> [2,3,4] <*> pure 4.` There are several other `Applicative` instances as well: • `IO` is an instance of `Applicative`, and behaves exactly as you would think: to execute `m1 <*> m2`, first `m1` is executed, resulting in a function `f`, then `m2` is executed, resulting in a value `x`, and finally the value `f x` is returned as the result of executing `m1 <*> m2`. • `((,) a)` is an `Applicative`, as long as `a` is an instance of `Monoid` (section Monoid). The `a` values are accumulated in parallel with the computation. • The `Applicative` module defines the `Const` type constructor; a value of type `Const a b` simply contains an `a`. This is an instance of `Applicative` for any `Monoid a`; this instance becomes especially useful in conjunction with things like `Foldable` (section Foldable). • The `WrappedMonad` and `WrappedArrow` newtypes make any instances of `Monad` (section Monad) or `Arrow` (section Arrow) respectively into instances of `Applicative`; as we will see when we study those type classes, both are strictly more expressive than `Applicative`, in the sense that the `Applicative` methods can be implemented in terms of their methods. Exercises 1. Implement an instance of `Applicative` for `Maybe`. 2. Determine the correct definition of `pure` for the `ZipList` instance of `Applicative`—there is only one implementation that satisfies the law relating `pure` and `(<*>)`. 
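To see the difference between the two list instances in action, here is a small illustration (expected results shown in comments); it relies only on the `(<*>)` definitions given above, so it does not give away either exercise:

```
import Control.Applicative

pairwise :: [Int]
pairwise = getZipList (ZipList [(+ 1), (* 10)] <*> ZipList [3, 4])
-- [4,40]: functions and arguments are matched up elementwise

allCombinations :: [Int]
allCombinations = [(+ 1), (* 10)] <*> [3, 4]
-- [4,5,30,40]: every function is applied to every argument
```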
## 4.4 Intuition

McBride and Paterson’s paper introduces the notation $[[g \; x_1 \; x_2 \; \cdots \; x_n]]$ to denote function application in a computational context. If each $x_i$ has type $f \; t_i$ for some applicative functor $f$, and $g$ has type $t_1 \to t_2 \to \dots \to t_n \to t$, then the entire expression $[[g \; x_1 \; \cdots \; x_n]]$ has type $f \; t$. You can think of this as applying a function to multiple “effectful” arguments. In this sense, the double bracket notation is a generalization of `fmap`, which allows us to apply a function to a single argument in a context.

Why do we need `Applicative` to implement this generalization of `fmap`? Suppose we use `fmap` to apply `g` to the first parameter `x1`. Then we get something of type `f (t2 -> ... -> t)`, but now we are stuck: we can’t apply this function-in-a-context to the next argument with `fmap`. However, this is precisely what `(<*>)` allows us to do.

This suggests the proper translation of the idealized notation $[[g \; x_1 \; x_2 \; \cdots \; x_n]]$ into Haskell, namely `g <$> x1 <*> x2 <*> ... <*> xn`, recalling that `Control.Applicative` defines `(<$>)` as convenient infix shorthand for `fmap`. This is what is meant by an “applicative style”—effectful computations can still be described in terms of function application; the only difference is that we have to use the special operator `(<*>)` for application instead of simple juxtaposition.

Note that `pure` allows embedding “non-effectful” arguments in the middle of an idiomatic application, like `g <$> x1 <*> pure x2 <*> x3`, which has type `f d`, given

```
g  :: a -> b -> c -> d
x1 :: f a
x2 :: b
x3 :: f c
```

The double brackets are commonly known as “idiom brackets”, because they allow writing “idiomatic” function application, that is, function application that looks normal but has some special, non-standard meaning (determined by the particular instance of `Applicative` being used). Idiom brackets are not supported by GHC, but they are supported by the Strathclyde Haskell Enhancement, a preprocessor which (among many other things) translates idiom brackets into standard uses of `(<$>)` and `(<*>)`. This can result in much more readable code when making heavy use of `Applicative`.
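As a concrete (and purely illustrative) instance of this style, here is a small sketch; the `Person` type and its fields are hypothetical, and the point is just the pure argument embedded in the middle of an idiomatic application:

```
import Control.Applicative

-- Hypothetical record: a name, an age, and a city.
data Person = Person String Int String
  deriving Show

-- The name and city are "effectful" Maybe values, while the age is an
-- ordinary Int embedded with pure, mirroring g <$> x1 <*> pure x2 <*> x3.
mkPerson :: Maybe String -> Int -> Maybe String -> Maybe Person
mkPerson name age city = Person <$> name <*> pure age <*> city
```

For example, `mkPerson (Just "Ada") 36 (Just "London")` evaluates to `Just (Person "Ada" 36 "London")`, while replacing either `Maybe` argument with `Nothing` makes the whole result `Nothing`.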
Building on the work of McBride and Paterson, Elliott also built the TypeCompose library, which embodies the observation (among others) that `Applicative` types are closed under composition; therefore, `Applicative` instances can often be automatically derived for complex types built out of simpler ones. Although the Parsec parsing library (paper) was originally designed for use as a monad, in its most common use cases an `Applicative` instance can be used to great effect; Bryan O’Sullivan’s blog post is a good starting point. If the extra power provided by `Monad` isn’t needed, it’s usually a good idea to use `Applicative` instead. A couple other nice examples of `Applicative` in action include the ConfigFile and HSQL libraries and the formlets library. # 5 Monad It’s a safe bet that if you’re reading this, you’ve heard of monads—although it’s quite possible you’ve never heard of `Applicative` before, or `Arrow`, or even `Monoid`. Why are monads such a big deal in Haskell? There are several reasons. • Haskell does, in fact, single out monads for special attention by making them the framework in which to construct I/O operations. • Haskell also singles out monads for special attention by providing a special syntactic sugar for monadic expressions: the `do`-notation. • `Monad` has been around longer than other abstract models of computation such as `Applicative` or `Arrow`. • The more monad tutorials there are, the harder people think monads must be, and the more new monad tutorials are written by people who think they finally “get” monads (the monad tutorial fallacy). I will let you judge for yourself whether these are good reasons. In the end, despite all the hoopla, `Monad` is just another type class. Let’s take a look at its definition. ## 5.1 Definition The type class declaration for `Monad` is: ```class Monad m where return :: a -> m a (>>=) :: m a -> (a -> m b) -> m b (>>) :: m a -> m b -> m b m >> n = m >>= \_ -> n   fail :: String -> m a``` The `Monad` type class is exported by the `Prelude`, along with a few standard instances. However, many utility functions are found in `Control.Monad`, and there are also several instances (such as `((->) e)`) defined in `Control.Monad.Instances`. Let’s examine the methods in the `Monad` class one by one. The type of `return` should look familiar; it’s the same as `pure`. Indeed, `return` is `pure`, but with an unfortunate name. (Unfortunate, since someone coming from an imperative programming background might think that `return` is like the C or Java keyword of the same name, when in fact the similarities are minimal.) From a mathematical point of view, every monad is an applicative functor, but for historical reasons, the `Monad` type class declaration unfortunately does not require this. We can see that `(>>)` is a specialized version of `(>>=)`, with a default implementation given. It is only included in the type class declaration so that specific instances of `Monad` can override the default implementation of `(>>)` with a more efficient one, if desired. Also, note that although `_ >> n = n` would be a type-correct implementation of `(>>)`, it would not correspond to the intended semantics: the intention is that `m >> n` ignores the result of `m`, but not its effects. The `fail` function is an awful hack that has no place in the `Monad` class; more on this later. The only really interesting thing to look at—and what makes `Monad` strictly more powerful than `Applicative`—is `(>>=)`, which is often called bind. 
An alternative definition of `Monad` could look like: ```class Applicative m => Monad' m where (>>=) :: m a -> (a -> m b) -> m b``` We could spend a while talking about the intuition behind `(>>=)`—and we will. But first, let’s look at some examples. ## 5.2 Instances Even if you don’t understand the intuition behind the `Monad` class, you can still create instances of it by just seeing where the types lead you. You may be surprised to find that this actually gets you a long way towards understanding the intuition; at the very least, it will give you some concrete examples to play with as you read more about the `Monad` class in general. The first few examples are from the standard `Prelude`; the remaining examples are from the `transformers` package. • The simplest possible instance of `Monad` is `Identity`, which is described in Dan Piponi’s highly recommended blog post on The Trivial Monad. Despite being “trivial”, it is a great introduction to the `Monad` type class, and contains some good exercises to get your brain working. • The next simplest instance of `Monad` is `Maybe`. We already know how to write `return`/`pure` for `Maybe`. So how do we write `(>>=)`? Well, let’s think about its type. Specializing for `Maybe`, we have `(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b.` If the first argument to `(>>=)` is `Just x`, then we have something of type `a` (namely, `x`), to which we can apply the second argument—resulting in a `Maybe b`, which is exactly what we wanted. What if the first argument to `(>>=)` is `Nothing`? In that case, we don’t have anything to which we can apply the `a -> Maybe b` function, so there’s only one thing we can do: yield `Nothing`. This instance is: ```instance Monad Maybe where return = Just (Just x) >>= g = g x Nothing >>= _ = Nothing``` We can already get a bit of intuition as to what is going on here: if we build up a computation by chaining together a bunch of functions with `(>>=)`, as soon as any one of them fails, the entire computation will fail (because `Nothing >>= f` is `Nothing`, no matter what `f` is). The entire computation succeeds only if all the constituent functions individually succeed. So the `Maybe` monad models computations which may fail. • The `Monad` instance for the list constructor `[]` is similar to its `Applicative` instance; see the exercise below. • Of course, the `IO` constructor is famously a `Monad`, but its implementation is somewhat magical, and may in fact differ from compiler to compiler. It is worth emphasizing that the `IO` monad is the only monad which is magical. It allows us to build up, in an entirely pure way, values representing possibly effectful computations. The special value `main`, of type `IO ()`, is taken by the runtime and actually executed, producing actual effects. Every other monad is functionally pure, and requires no special compiler support. We often speak of monadic values as “effectful computations”, but this is because some monads allow us to write code as if it has side effects, when in fact the monad is hiding the plumbing which allows these apparent side effects to be implemented in a functionally pure way. • As mentioned earlier, `((->) e)` is known as the reader monad, since it describes computations in which a value of type `e` is available as a read-only environment. 
The `Control.Monad.Reader` module provides the `Reader e a` type, which is just a convenient `newtype` wrapper around `(e -> a)`, along with an appropriate `Monad` instance and some `Reader`-specific utility functions such as `ask` (retrieve the environment), `asks` (retrieve a function of the environment), and `local` (run a subcomputation under a different environment). • The `Control.Monad.Writer` module provides the `Writer` monad, which allows information to be collected as a computation progresses. `Writer w a` is isomorphic to `(a,w)`, where the output value `a` is carried along with an annotation or “log” of type `w`, which must be an instance of `Monoid` (see section Monoid); the special function `tell` performs logging. • The `Control.Monad.State` module provides the `State s a` type, a `newtype` wrapper around `s -> (a,s)`. Something of type `State s a` represents a stateful computation which produces an `a` but can access and modify the state of type `s` along the way. The module also provides `State`-specific utility functions such as `get` (read the current state), `gets` (read a function of the current state), `put` (overwrite the state), and `modify` (apply a function to the state). • The `Control.Monad.Cont` module provides the `Cont` monad, which represents computations in continuation-passing style. It can be used to suspend and resume computations, and to implement non-local transfers of control, co-routines, other complex control structures—all in a functionally pure way. `Cont` has been called the “mother of all monads” because of its universal properties. Exercises 1. Implement a `Monad` instance for the list constructor, `[]`. Follow the types! 2. Implement a `Monad` instance for `((->) e)`. 3. Implement `Functor` and `Monad` instances for `Free f`, defined as ```data Free f a = Var a | Node (f (Free f a))``` You may assume that `f` has a `Functor` instance. This is known as the free monad built from the functor `f`. ## 5.3 Intuition Let’s look more closely at the type of `(>>=)`. The basic intuition is that it combines two computations into one larger computation. The first argument, `m a`, is the first computation. However, it would be boring if the second argument were just an `m b`; then there would be no way for the computations to interact with one another (actually, this is exactly the situation with `Applicative`). So, the second argument to `(>>=)` has type `a -> m b`: a function of this type, given a result of the first computation, can produce a second computation to be run. In other words, `x >>= k` is a computation which runs `x`, and then uses the result(s) of `x` to decide what computation to run second, using the output of the second computation as the result of the entire computation. ∗ Actually, because Haskell allows general recursion, this is a lie: using a Haskell parsing library one can recursively construct infinite grammars, and hence `Alternative` by itself is enough to parse any context-sensitive language with a finite alphabet. Simply make an n-way choice on each symbol you care about, and "encode" the context by having a different nonterminal for every possible pairing of a context and a nonterminal from the original grammar. However, no one in their right mind would want to write a context-sensitive parser this way! Intuitively, it is this ability to use the output from previous computations to decide what computations to run next that makes `Monad` more powerful than `Applicative`. 
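As a tiny illustration of this ability, here is a sketch using the standard `Maybe` monad and `Data.Map`; the lookup tables and their contents are made up for the example:

```
import qualified Data.Map as M

-- Hypothetical example data.
managerOf :: M.Map String String
managerOf = M.fromList [("alice", "bob")]

phoneOf :: M.Map String String
phoneOf = M.fromList [("bob", "555-1234")]

-- The key for the second lookup is the *result* of the first one,
-- which is exactly the kind of dependency (<*>) alone cannot express.
managersPhone :: String -> Maybe String
managersPhone name = M.lookup name managerOf >>= \boss -> M.lookup boss phoneOf
```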
The structure of an `Applicative` computation is fixed, whereas the structure of a `Monad` computation can change based on intermediate results. This also means that parsers built using an `Applicative` interface can only parse context-free languages; in order to parse context-sensitive languages a `Monad` interface is needed.∗ To see the increased power of `Monad` from a different point of view, let’s see what happens if we try to implement `(>>=)` in terms of `fmap`, `pure`, and `(<*>)`. We are given a value `x` of type `m a`, and a function `k` of type `a -> m b`, so the only thing we can do is apply `k` to `x`. We can’t apply it directly, of course; we have to use `fmap` to lift it over the `m`. But what is the type of `fmap k`? Well, it’s `m a -> m (m b)`. So after we apply it to `x`, we are left with something of type `m (m b)`—but now we are stuck; what we really want is an `m b`, but there’s no way to get there from here. We can add `m`’s using `pure`, but we have no way to collapse multiple `m`’s into one. ∗ You might hear some people claim that that the definition in terms of `return`, `fmap`, and `join` is the “math definition” and the definition in terms of `return` and `(>>=)` is something specific to Haskell. In fact, both definitions were known in the mathematics community long before Haskell picked up monads. This ability to collapse multiple `m`’s is exactly the ability provided by the function `join :: m (m a) -> m a`, and it should come as no surprise that an alternative definition of `Monad` can be given in terms of `join`: ```class Applicative m => Monad'' m where join :: m (m a) -> m a``` In fact, the earliest definitions of monads in category theory were in terms of `return`, `fmap`, and `join` (often called η, T, and μ in the mathematical literature). Haskell uses an alternative formulation with `(>>=)` instead of `join` since it is more convenient to use ∗. However, sometimes it can be easier to think about `Monad` instances in terms of `join`, since it is a more “atomic” operation. (For example, `join` for the list monad is just `concat`.) Exercises 1. Implement `(>>=)` in terms of `fmap` (or `liftM`) and `join`. 2. Now implement `join` and `fmap` (`liftM`) in terms of `(>>=)` and `return`. ## 5.4 Utility functions The `Control.Monad` module provides a large number of convenient utility functions, all of which can be implemented in terms of the basic `Monad` operations (`return` and `(>>=)` in particular). We have already seen one of them, namely, `join`. We also mention some other noteworthy ones here; implementing these utility functions oneself is a good exercise. For a more detailed guide to these functions, with commentary and example code, see Henk-Jan van Tuyl’s tour. ∗ Still, it is unclear how this "bug" should be fixed. Making `Monad` require a `Functor` instance has some drawbacks, as mentioned in this 2011 mailing-list discussion. —Geheimdienst • `liftM :: Monad m => (a -> b) -> m a -> m b`. This should be familiar; of course, it is just `fmap`. The fact that we have both `fmap` and `liftM` is an unfortunate consequence of the fact that the `Monad` type class does not require a `Functor` instance, even though mathematically speaking, every monad is a functor. However, `fmap` and `liftM` are essentially interchangeable, since it is a bug (in a social rather than technical sense) for any type to be an instance of `Monad` without also being an instance of `Functor` ∗. 
• `ap :: Monad m => m (a -> b) -> m a -> m b` should also be familiar: it is equivalent to `(<*>)`, justifying the claim that the `Monad` interface is strictly more powerful than `Applicative`. We can make any `Monad` into an instance of `Applicative` by setting `pure = return` and `(<*>) = ap`. • `sequence :: Monad m => [m a] -> m [a]` takes a list of computations and combines them into one computation which collects a list of their results. It is again something of a historical accident that `sequence` has a `Monad` constraint, since it can actually be implemented only in terms of `Applicative`. There is an additional generalization of `sequence` to structures other than lists, which will be discussed in the section on `Traversable`. • `replicateM :: Monad m => Int -> m a -> m [a]` is simply a combination of `replicate` and `sequence`. • `when :: Monad m => Bool -> m () -> m ()` conditionally executes a computation, evaluating to its second argument if the test is `True`, and to `return ()` if the test is `False`. A collection of other sorts of monadic conditionals can be found in the `IfElse` package. • `mapM :: Monad m => (a -> m b) -> [a] -> m [b]` maps its first argument over the second, and `sequence`s the results. The `forM` function is just `mapM` with its arguments reversed; it is called `forM` since it models generalized `for` loops: the list `[a]` provides the loop indices, and the function `a -> m b` specifies the “body” of the loop for each index. • `(=<<) :: Monad m => (a -> m b) -> m a -> m b` is just `(>>=)` with its arguments reversed; sometimes this direction is more convenient since it corresponds more closely to function application. • `(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c` is sort of like function composition, but with an extra `m` on the result type of each function, and the arguments swapped. We’ll have more to say about this operation later. There is also a flipped variant, `(<=<)`. • The `guard` function is for use with instances of `MonadPlus`, which is discussed at the end of the `Monoid` section. Many of these functions also have “underscored” variants, such as `sequence_` and `mapM_`; these variants throw away the results of the computations passed to them as arguments, using them only for their side effects. Other monadic functions which are occasionally useful include `filterM`, `zipWithM`, `foldM`, and `forever`. ## 5.5 Laws There are several laws that instances of `Monad` should satisfy (see also the Monad laws wiki page). The standard presentation is: ```return a >>= k = k a m >>= return = m m >>= (\x -> k x >>= h) = (m >>= k) >>= h   fmap f xs = xs >>= return . f = liftM f xs``` The first and second laws express the fact that `return` behaves nicely: if we inject a value `a` into a monadic context with `return`, and then bind to `k`, it is the same as just applying `k` to `a` in the first place; if we bind a computation `m` to `return`, nothing changes. The third law essentially says that `(>>=)` is associative, sort of. The last law ensures that `fmap` and `liftM` are the same for types which are instances of both `Functor` and `Monad`—which, as already noted, should be every instance of `Monad`. ∗ I like to pronounce this operator “fish”, but that’s probably not the canonical pronunciation ... However, the presentation of the above laws, especially the third, is marred by the asymmetry of `(>>=)`. It’s hard to look at the laws and see what they’re really saying. 
I prefer a much more elegant version of the laws, which is formulated in terms of `(>=>)` ∗. Recall that `(>=>)` “composes” two functions of type `a -> m b` and `b -> m c`. You can think of something of type `a -> m b` (roughly) as a function from `a` to `b` which may also have some sort of effect in the context corresponding to `m`. `(>=>)` lets us compose these “effectful functions”, and we would like to know what properties `(>=>)` has. The monad laws reformulated in terms of `(>=>)` are:

```
return >=> g = g
g >=> return = g
(g >=> h) >=> k = g >=> (h >=> k)
```

∗ As fans of category theory will note, these laws say precisely that functions of type `a -> m b` are the arrows of a category with `(>=>)` as composition! Indeed, this is known as the Kleisli category of the monad `m`. It will come up again when we discuss `Arrow`s.

Ah, much better! The laws simply state that `return` is the identity of `(>=>)`, and that `(>=>)` is associative ∗. Working out the equivalence between these two formulations, given the definition `g >=> h = \x -> g x >>= h`, is left as an exercise.

There is also a formulation of the monad laws in terms of `fmap`, `return`, and `join`; for a discussion of this formulation, see the Haskell wikibook page on category theory.

## 5.6 `do` notation

Haskell’s special `do` notation supports an “imperative style” of programming by providing syntactic sugar for chains of monadic expressions. The genesis of the notation lies in realizing that something like `a >>= \x -> b >> c >>= \y -> d` can be more readably written by putting successive computations on separate lines:

```
a >>= \x ->
b >>
c >>= \y ->
d
```

This emphasizes that the overall computation consists of four computations `a`, `b`, `c`, and `d`, and that `x` is bound to the result of `a`, and `y` is bound to the result of `c` (`b`, `c`, and `d` are allowed to refer to `x`, and `d` is allowed to refer to `y` as well). From here it is not hard to imagine a nicer notation:

```
do { x <- a
   ;      b
   ; y <- c
   ;      d
   }
```

(The curly braces and semicolons may optionally be omitted; the Haskell parser uses layout to determine where they should be inserted.) This discussion should make clear that `do` notation is just syntactic sugar. In fact, `do` blocks are recursively translated into monad operations (almost) like this:

```
                  do e → e
       do { e; stmts } → e >> do { stmts }
  do { v <- e; stmts } → e >>= \v -> do { stmts }
do { let decls; stmts} → let decls in do { stmts }
```

This is not quite the whole story, since `v` might be a pattern instead of a variable. For example, one can write

```
do (x:xs) <- foo
   bar x
```

but what happens if `foo` produces an empty list? Well, remember that ugly `fail` function in the `Monad` type class declaration? That’s what happens. See section 3.14 of the Haskell Report for the full details. See also the discussion of `MonadPlus` and `MonadZero` in the section on other monoidal classes.

A final note on intuition: `do` notation plays very strongly to the “computational context” point of view rather than the “container” point of view, since the binding notation `x <- m` is suggestive of “extracting” a single `x` from `m` and doing something with it. But `m` may represent some sort of a container, such as a list or a tree; the meaning of `x <- m` is entirely dependent on the implementation of `(>>=)`. For example, if `m` is a list, `x <- m` actually means that `x` will take on each value from the list in turn.
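As a small worked example (a sketch, using a hypothetical `safeDiv`), here is the same `Maybe` computation written with `do` notation and in the desugared form produced by the translation rules above:

```
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- safeDiv a b
  y <- safeDiv x c
  return (x + y)

-- The translation rules turn calc into:
calcDesugared :: Int -> Int -> Int -> Maybe Int
calcDesugared a b c =
  safeDiv a b >>= \x ->
  safeDiv x c >>= \y ->
  return (x + y)
```

Both versions return `Nothing` as soon as either division fails, exactly as described for the `Maybe` instance of `(>>=)`.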
## 5.7 Further reading Philip Wadler was the first to propose using monads to structure functional programs. His paper is still a readable introduction to the subject. There are, of course, numerous monad tutorials of varying quality ∗. A few of the best include Cale Gibbard’s Monads as containers and Monads as computation; Jeff Newbern’s All About Monads, a comprehensive guide with lots of examples; and Dan Piponi’s You Could Have Invented Monads!, which features great exercises. If you just want to know how to use `IO`, you could consult the Introduction to IO. Even this is just a sampling; the monad tutorials timeline is a more complete list. (All these monad tutorials have prompted parodies like think of a monad ... as well as other kinds of backlash like Monads! (and Why Monad Tutorials Are All Awful) or Abstraction, intuition, and the “monad tutorial fallacy”.) Other good monad references which are not necessarily tutorials include Henk-Jan van Tuyl’s tour of the functions in `Control.Monad`, Dan Piponi’s field guide, Tim Newsham’s What’s a Monad?, and Chris Smith's excellent article Why Do Monads Matter?. There are also many blog posts which have been written on various aspects of monads; a collection of links can be found under Blog articles/Monads. One of the quirks of the `Monad` class and the Haskell type system is that it is not possible to straightforwardly declare `Monad` instances for types which require a class constraint on their data, even if they are monads from a mathematical point of view. For example, `Data.Set` requires an `Ord` constraint on its data, so it cannot be easily made an instance of `Monad`. A solution to this problem was first described by Eric Kidd, and later made into a library named rmonad by Ganesh Sittampalam and Peter Gavin. There are many good reasons for eschewing `do` notation; some have gone so far as to consider it harmful. Monads can be generalized in various ways; for an exposition of one possibility, see Robert Atkey’s paper on parameterized monads, or Dan Piponi’s Beyond Monads. For the categorically inclined, monads can be viewed as monoids (From Monoids to Monads) and also as closure operators Triples and Closure. Derek Elkins’s article in issue 13 of the Monad.Reader contains an exposition of the category-theoretic underpinnings of some of the standard `Monad` instances, such as `State` and `Cont`. Links to many more research papers related to monads can be found under Research papers/Monads and arrows. # 6 Monad transformers One would often like to be able to combine two monads into one: for example, to have stateful, nondeterministic computations (`State` + `[]`), or computations which may fail and can consult a read-only environment (`Maybe` + `Reader`), and so on. Unfortunately, monads do not compose as nicely as applicative functors (yet another reason to use `Applicative` if you don’t need the full power that `Monad` provides), but some monads can be combined in certain ways. ## 6.1 Standard monad transformers The transformers library provides a number of standard monad transformers. Each monad transformer adds a particular capability/feature/effect to any existing monad. • `IdentityT` is the identity transformer, which maps a monad to (something isomorphic to) itself. This may seem useless at first glance, but it is useful for the same reason that the `id` function is useful -- it can be passed as an argument to things which are parameterized over an arbitrary monad transformer. • `StateT` adds a read-write state. 
• `ReaderT` adds a read-only environment. • `WriterT` adds a write-only log. • `RWST` conveniently combines `ReaderT`, `WriterT`, and `StateT` into one. • `MaybeT` adds the possibility of failure. • `ErrorT` adds the possibility of failure with an arbitrary type to represent errors. • `ListT` adds non-determinism (however, see the discussion of `ListT` below). • `ContT` adds continuation handling. For example, `StateT s Maybe` is an instance of `Monad`; computations of type `StateT s Maybe a` may fail, and have access to a mutable state of type `s`. Monad transformers can be multiply stacked. One thing to keep in mind while using monad transformers is that the order of composition matters. For example, when a `StateT s Maybe a` computation fails, the state ceases being updated (indeed, it simply disappears); on the other hand, the state of a `MaybeT (State s) a` computation may continue to be modified even after the computation has failed. This may seem backwards, but it is correct. Monad transformers build composite monads “inside out”; `MaybeT (State s) a` is isomorphic to `s -> (Maybe a, s)`. (Lambdabot has an indispensable `@unmtl` command which you can use to “unpack” a monad transformer stack in this way.) Intuitively, the monads become "more fundamental" the further down in the stack you get, and the effects of a given monad "have precedence" over the effects of monads further up the stack. Of course, this is just handwaving, and if you are unsure of the proper order for some monads you wish to combine, there is no substitute for using `@unmtl` or simply trying out the various options. ## 6.2 Definition and laws All monad transformers should implement the `MonadTrans` type class, defined in `Control.Monad.Trans.Class`: ```class MonadTrans t where lift :: Monad m => m a -> t m a``` It allows arbitrary computations in the base monad `m` to be “lifted” into computations in the transformed monad `t m`. (Note that type application associates to the left, just like function application, so `t m a = (t m) a`.) `lift` must satisfy the laws ```lift . return = return lift (m >>= f) = lift m >>= (lift . f)``` which intuitively state that `lift` transforms `m a` computations into `t m a` computations in a "sensible" way, which sends the `return` and `(>>=)` of `m` to the `return` and `(>>=)` of `t m`. Exercises 1. What is the kind of `t` in the declaration of `MonadTrans`? ## 6.3 Transformer type classes and "capability" style ∗ The only problem with this scheme is the quadratic number of instances required as the number of standard monad transformers grows—but as the current set of standard monad transformers seems adequate for most common use cases, this may not be that big of a deal. There are also type classes (provided by the `mtl` package) for the operations of each transformer. For example, the `MonadState` type class provides the state-specific methods `get` and `put`, allowing you to conveniently use these methods not only with `State`, but with any monad which is an instance of `MonadState`—including `MaybeT (State s)`, `StateT s (ReaderT r IO)`, and so on. Similar type classes exist for `Reader`, `Writer`, `Cont`, `IO`, and others ∗. These type classes serve two purposes. First, they get rid of (most of) the need for explicitly using `lift`, giving a type-directed way to automatically determine the right number of calls to `lift`. Simply writing `put` will be automatically translated into `lift . put`, `lift . lift . 
put`, or something similar depending on what concrete monad stack you are using. Second, they give you more flexibility to switch between different concrete monad stacks. For example, if you are writing a state-based algorithm, don't write ```foo :: State Int Char foo = modify (*2) >> return 'x'``` but rather ```foo :: MonadState Int m => m Char foo = modify (*2) >> return 'x'``` Now, if somewhere down the line you realize you need to introduce the possibility of failure, you might switch from `State Int` to `MaybeT (State Int)`. The type of the first version of `foo` would need to be modified to reflect this change, but the second version of `foo` can still be used as-is. However, this sort of "capability-based" style (e.g. specifying that `foo` works for any monad with the "state capability") quickly runs into problems when you try to naively scale it up: for example, what if you need to maintain two independent states? A very nice framework for solving this and related problems is described by Schrijvers and Olivera (Monads, zippers and views: virtualizing the monad stack, ICFP 2011) and is implemented in the `Monatron` package. ## 6.4 Composing monads Is the composition of two monads always a monad? As hinted previously, the answer is no. For example, XXX insert example here. Since `Applicative` functors are closed under composition, the problem must lie with `join`. Indeed, suppose `m` and `n` are arbitrary monads; to make a monad out of their composition we would need to be able to implement `join :: m (n (m (n a))) -> m (n a)` but it is not clear how this could be done in general. The `join` method for `m` is no help, because the two occurrences of `m` are not next to each other (and likewise for `n`). However, one situation in which it can be done is if `n` distributes over `m`, that is, if there is a function `distrib :: n (m a) -> m (n a)` satisfying certain laws. See Jones and Duponcheel (Composing Monads). Exercises • Implement `join :: M (N (M (N a))) -> M (N a)`, given `distrib :: N (M a) -> M (N a)` and assuming `M` and `N` are instances of `Monad`. ## 6.5 Further reading Much of the monad transformer library (originally `mtl`, now split between `mtl` and `transformers`), including the `Reader`, `Writer`, `State`, and other monads, as well as the monad transformer framework itself, was inspired by Mark Jones’s classic paper Functional Programming with Overloading and Higher-Order Polymorphism. It’s still very much worth a read—and highly readable—after almost fifteen years. See Edward Kmett's mailing list message for a description of the history and relationships among monad transformer packages (`mtl`, `transformers`, `monads-fd`, `monads-tf`). There are two excellent references on monad transformers. Martin Grabmüller’s Monad Transformers Step by Step is a thorough description, with running examples, of how to use monad transformers to elegantly build up computations with various effects. Cale Gibbard’s article on how to use monad transformers is more practical, describing how to structure code using monad transformers to make writing it as painless as possible. Another good starting place for learning about monad transformers is a blog post by Dan Piponi. The `ListT` transformer from the `transformers` package comes with the caveat that `ListT m` is only a monad when `m` is commutative, that is, when `ma >>= \a -> mb >>= \b -> foo` is equivalent to `mb >>= \b -> ma >>= \a -> foo` (i.e. the order of `m`'s effects does not matter). 
For one explanation why, see Dan Piponi's blog post "Why isn't `ListT []` a monad". For more examples, as well as a design for a version of `ListT` which does not have this problem, see `ListT` done right. There is an alternative way to compose monads, using coproducts, as described by Lüth and Ghani. This method is interesting but has not (yet?) seen widespread use. # 7 MonadFix Note: `MonadFix` is included here for completeness (and because it is interesting) but seems not to be used much. Skipping this section on a first read-through is perfectly OK (and perhaps even recommended). ## 7.1 `do rec` notation The `MonadFix` class describes monads which support the special fixpoint operation `mfix :: (a -> m a) -> m a`, which allows the output of monadic computations to be defined via (effectful) recursion. This is supported in GHC by a special “recursive do” notation, enabled by the `-XDoRec` flag. Within a `do` block, one may have a nested `rec` block, like so: ```do { x <- foo  ; rec { y <- baz  ; z <- bar  ; bob }  ; w <- frob }``` Normally (if we had `do` in place of `rec` in the above example), `y` would be in scope in `bar` and `bob` but not in `baz`, and `z` would be in scope only in `bob`. With the `rec`, however, `y` and `z` are both in scope in all three of `baz`, `bar`, and `bob`. A `rec` block is analogous to a `let` block such as ```let { y = baz  ; z = bar } in bob``` because, in Haskell, every variable bound in a `let`-block is in scope throughout the entire block. (From this point of view, Haskell's normal `do` blocks are analogous to Scheme's `let*` construct.) What could such a feature be used for? One of the motivating examples given in the original paper describing `MonadFix` (see below) is encoding circuit descriptions. A line in a `do`-block such as `x <- gate y z` describes a gate whose input wires are labeled `y` and `z` and whose output wire is labeled `x`. Many (most?) useful circuits, however, involve some sort of feedback loop, making them impossible to write in a normal `do`-block (since some wire would have to be mentioned as an input before being listed as an output). Using a `rec` block solves this problem. ## 7.2 Examples and intuition Of course, not every monad supports such recursive binding. However, as mentioned above, it suffices to have an implementation of `mfix :: (a -> m a) -> m a`, satisfying a few laws. Let's try implementing `mfix` for the `Maybe` monad. That is, we want to implement a function `maybeFix :: (a -> Maybe a) -> Maybe a` ∗ Actually, `fix` is implemented slightly differently for efficiency reasons; but the given definition is equivalent and simpler for the present purpose. Let's think for a moment about the implementation ∗ of the non-monadic `fix :: (a -> a) -> a`: `fix f = f (fix f)` Inspired by `fix`, our first attempt at implementing `maybeFix` might be something like ```maybeFix :: (a -> Maybe a) -> Maybe a maybeFix f = maybeFix f >>= f``` This has the right type. However, something seems wrong: there is nothing in particular here about `Maybe`; `maybeFix` actually has the more general type `Monad m => (a -> m a) -> m a`. But didn't we just say that not all monads support `mfix`? The answer is that although this implementation of `maybeFix` has the right type, it does not have the intended semantics. 
If we think about how `(>>=)` works for the `Maybe` monad (by pattern-matching on its first argument to see whether it is `Nothing` or `Just`) we can see that this definition of `maybeFix` is completely useless: it will just recurse infinitely, trying to decide whether it is going to return `Nothing` or `Just`, without ever even so much as a glance in the direction of `f`. The trick is to simply assume that `maybeFix` will return `Just`, and get on with life! ```maybeFix :: (a -> Maybe a) -> Maybe a maybeFix f = ma where ma = f (fromJust ma)``` This says that the result of `maybeFix` is `ma`, and assuming that `ma = Just x`, it is defined (recursively) to be equal to `f x`. Why is this OK? Isn't `fromJust` almost as bad as `unsafePerformIO`? Well, usually, yes. This is just about the only situation in which it is justified! The interesting thing to note is that `maybeFix` will never crash -- although it may, of course, fail to terminate. The only way we could get a crash is if we try to evaluate `fromJust ma` when we know that `ma = Nothing`. But how could we know `ma = Nothing`? Since `ma` is defined as `f (fromJust ma)`, it must be that this expression has already been evaluated to `Nothing` -- in which case there is no reason for us to be evaluating `fromJust ma` in the first place! To see this from another point of view, we can consider three possibilities. First, if `f` outputs `Nothing` without looking at its argument, then `maybeFix f` clearly returns `Nothing`. Second, if `f` always outputs `Just x`, where `x` depends on its argument, then the recursion can proceed usefully: `fromJust ma` will be able to evaluate to `x`, thus feeding `f`'s output back to it as input. Third, if `f` tries to use its argument to decide whether to output `Just` or `Nothing`, then `maybeFix f` will not terminate: evaluating `f`'s argument requires evaluating `ma` to see whether it is `Just`, which requires evaluating `f (fromJust ma)`, which requires evaluating `ma`, ... and so on. There are also instances of `MonadFix` for lists (which works analogously to the instance for `Maybe`), for `ST`, and for `IO`. The instance for `IO` is particularly amusing: it creates a new `IORef` (with a dummy value), immediately reads its contents using `unsafeInterleaveIO` (which delays the actual reading lazily until the value is needed), uses the contents of the `IORef` to compute a new value, which it then writes back into the `IORef`. It almost seems, spookily, that `mfix` is sending a value back in time to itself through the `IORef` -- though of course what is really going on is that the reading is delayed just long enough (via `unsafeInterleaveIO`) to get the process bootstrapped. Exercises • Implement a `MonadFix` instance for `[]`. ## 7.3 Further reading For more information (such as the precise desugaring rules for `rec` blocks), see Levent Erkök and John Launchbury's 2002 Haskell workshop paper, A Recursive do for Haskell, or for full details, Levent Erkök’s thesis, Value Recursion in Monadic Computations. (Note, while reading, that `do rec` used to be called `mdo`, and `MonadFix` used to be called `MonadRec`.) # 8 Semigroup A semigroup is a set $S\$ together with a binary operation $\oplus\$ which combines elements from $S\$. The $\oplus\$ operator is required to be associative (that is, $(a \oplus b) \oplus c = a \oplus (b \oplus c)\$, for any $a,b,c\$ which are elements of $S\$). 
For example, the natural numbers under addition form a semigroup: the sum of any two natural numbers is a natural number, and $(a+b)+c = a+(b+c)$ for any natural numbers $a$, $b$, and $c$. The integers under multiplication also form a semigroup, as do the integers (or rationals, or reals) under $\max$ or $\min$, Boolean values under conjunction and disjunction, lists under concatenation, functions from a set to itself under composition ... Semigroups show up all over the place, once you know to look for them.

## 8.1 Definition

Semigroups are not (yet?) defined in the base package, but the semigroups package provides a standard definition. The definition of the `Semigroup` type class (haddock) is as follows:

```
class Semigroup a where
  (<>) :: a -> a -> a

  sconcat :: NonEmpty a -> a
  sconcat (a :| as) = go a as where
    go b (c:cs) = b <> go c cs
    go b []     = b

  times1p :: Whole n => n -> a -> a
  times1p = ...
```

The really important method is `(<>)`, representing the associative binary operation. The other two methods have default implementations in terms of `(<>)`, and are included in the type class in case some instances can give more efficient implementations than the default. `sconcat` reduces a nonempty list using `(<>)`; `times1p n` is equivalent to (but more efficient than) `sconcat . replicate n`. See the haddock documentation for more information on `sconcat` and `times1p`.

## 8.2 Laws

The only law is that `(<>)` must be associative:

`(x <> y) <> z = x <> (y <> z)`

More coming soon...

# 9 Monoid

Many semigroups have a special element $e$ for which the binary operation $\oplus$ is the identity, that is, $e \oplus x = x \oplus e = x$ for every element $x$. Such a semigroup-with-identity-element is called a monoid.

## 9.1 Definition

The definition of the `Monoid` type class (defined in `Data.Monoid`; haddock) is:

```
class Monoid a where
  mempty  :: a
  mappend :: a -> a -> a

  mconcat :: [a] -> a
  mconcat = foldr mappend mempty
```

The `mempty` value specifies the identity element of the monoid, and `mappend` is the binary operation. The default definition for `mconcat` “reduces” a list of elements by combining them all with `mappend`, using a right fold. It is only in the `Monoid` class so that specific instances have the option of providing an alternative, more efficient implementation; usually, you can safely ignore `mconcat` when creating a `Monoid` instance, since its default definition will work just fine.

The `Monoid` methods are rather unfortunately named; they are inspired by the list instance of `Monoid`, where indeed `mempty = []` and `mappend = (++)`, but this is misleading since many monoids have little to do with appending (see these Comments from OCaml Hacker Brian Hurt on the haskell-cafe mailing list).

## 9.2 Laws

Of course, every `Monoid` instance should actually be a monoid in the mathematical sense, which implies these laws:

```
mempty `mappend` x = x
x `mappend` mempty = x
(x `mappend` y) `mappend` z = x `mappend` (y `mappend` z)
```

## 9.3 Instances

There are quite a few interesting `Monoid` instances defined in `Data.Monoid`.

• `[a]` is a `Monoid`, with `mempty = []` and `mappend = (++)`. It is not hard to check that `(x ++ y) ++ z = x ++ (y ++ z)` for any lists `x`, `y`, and `z`, and that the empty list is the identity: `[] ++ x = x ++ [] = x`.

• As noted previously, we can make a monoid out of any numeric type under either addition or multiplication.
However, since we can’t have two instances for the same type, `Data.Monoid` provides two `newtype` wrappers, `Sum` and `Product`, with appropriate `Monoid` instances.

```
> getSum (mconcat . map Sum $ [1..5])
15
> getProduct (mconcat . map Product $ [1..5])
120
```

This example code is silly, of course; we could just write `sum [1..5]` and `product [1..5]`. Nevertheless, these instances are useful in more generalized settings, as we will see in the section `Foldable`.

• `Any` and `All` are `newtype` wrappers providing `Monoid` instances for `Bool` (under disjunction and conjunction, respectively).

• There are three instances for `Maybe`: a basic instance which lifts a `Monoid` instance for `a` to an instance for `Maybe a`, and two `newtype` wrappers `First` and `Last` for which `mappend` selects the first (respectively last) non-`Nothing` item.

• `Endo a` is a newtype wrapper for functions `a -> a`, which form a monoid under composition.

• There are several ways to “lift” `Monoid` instances to instances with additional structure. We have already seen that an instance for `a` can be lifted to an instance for `Maybe a`. There are also tuple instances: if `a` and `b` are instances of `Monoid`, then so is `(a,b)`, using the monoid operations for `a` and `b` in the obvious pairwise manner. Finally, if `a` is a `Monoid`, then so is the function type `e -> a` for any `e`; in particular, ``g `mappend` h`` is the function which applies both `g` and `h` to its argument and then combines the results using the underlying `Monoid` instance for `a`. This can be quite useful and elegant (see example).

• The type `Ordering = LT | EQ | GT` is a `Monoid`, defined in such a way that `mconcat (zipWith compare xs ys)` computes the lexicographic ordering of `xs` and `ys` (if `xs` and `ys` have the same length). In particular, `mempty = EQ`, and `mappend` evaluates to its leftmost non-`EQ` argument (or `EQ` if both arguments are `EQ`). This can be used together with the function instance of `Monoid` to do some clever things (example).

• There are also `Monoid` instances for several standard data structures in the containers library (haddock), including `Map`, `Set`, and `Sequence`.

`Monoid` is also used to enable several other type class instances. As noted previously, we can use `Monoid` to make `((,) e)` an instance of `Applicative`:

```
instance Monoid e => Applicative ((,) e) where
  pure x = (mempty, x)
  (u, f) <*> (v, x) = (u `mappend` v, f x)
```

`Monoid` can be similarly used to make `((,) e)` an instance of `Monad` as well; this is known as the writer monad. As we’ve already seen, `Writer` and `WriterT` are a newtype wrapper and transformer for this monad, respectively.

`Monoid` also plays a key role in the `Foldable` type class (see section Foldable).

## 9.4 Other monoidal classes: Alternative, MonadPlus, ArrowPlus

The `Alternative` type class (haddock) is for `Applicative` functors which also have a monoid structure:

```
class Applicative f => Alternative f where
  empty :: f a
  (<|>) :: f a -> f a -> f a
```

Of course, instances of `Alternative` should satisfy the monoid laws.
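To get a feel for this monoid structure before moving on, here is a brief illustration (expected results in comments) using the standard `Maybe` and list instances of `Alternative`:

```
import Control.Applicative

-- For Maybe, empty is Nothing and (<|>) keeps the first Just, so it
-- reads as "try this, and fall back to that":
firstSuccess :: Maybe Int
firstSuccess = Nothing <|> Just 3 <|> Just 4
-- Just 3

-- For lists, empty is [] and (<|>) is (++), collecting every result:
allResults :: [Int]
allResults = [1, 2] <|> [] <|> [3]
-- [1,2,3]
```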
Likewise, `MonadPlus` (haddock) is for `Monad`s with a monoid structure: ```class Monad m => MonadPlus m where mzero :: m a mplus :: m a -> m a -> m a``` The `MonadPlus` documentation states that it is intended to model monads which also support “choice and failure”; in addition to the monoid laws, instances of `MonadPlus` are expected to satisfy ```mzero >>= f = mzero v >> mzero = mzero``` which explains the sense in which `mzero` denotes failure. Since `mzero` should be the identity for `mplus`, the computation `m1 `mplus` m2` succeeds (evaluates to something other than `mzero`) if either `m1` or `m2` does; so `mplus` represents choice. The `guard` function can also be used with instances of `MonadPlus`; it requires a condition to be satisfied and fails (using `mzero`) if it is not. A simple example of a `MonadPlus` instance is `[]`, which is exactly the same as the `Monoid` instance for `[]`: the empty list represents failure, and list concatenation represents choice. In general, however, a `MonadPlus` instance for a type need not be the same as its `Monoid` instance; `Maybe` is an example of such a type. A great introduction to the `MonadPlus` type class, with interesting examples of its use, is Doug Auclair’s MonadPlus: What a Super Monad! in the Monad.Reader issue 11. There used to be a type class called `MonadZero` containing only `mzero`, representing monads with failure. The `do`-notation requires some notion of failure to deal with failing pattern matches. Unfortunately, `MonadZero` was scrapped in favor of adding the `fail` method to the `Monad` class. If we are lucky, someday `MonadZero` will be restored, and `fail` will be banished to the bit bucket where it belongs (see MonadPlus reform proposal). The idea is that any `do`-block which uses pattern matching (and hence may fail) would require a `MonadZero` constraint; otherwise, only a `Monad` constraint would be required. Finally, `ArrowZero` and `ArrowPlus` (haddock) represent `Arrow`s (see below) with a monoid structure: ```class Arrow arr => ArrowZero arr where zeroArrow :: b `arr` c   class ArrowZero arr => ArrowPlus arr where (<+>) :: (b `arr` c) -> (b `arr` c) -> (b `arr` c)``` ## 9.5 Further reading Monoids have gotten a fair bit of attention recently, ultimately due to a blog post by Brian Hurt, in which he complained about the fact that the names of many Haskell type classes (`Monoid` in particular) are taken from abstract mathematics. This resulted in a long haskell-cafe thread arguing the point and discussing monoids in general. ∗ May its name live forever. However, this was quickly followed by several blog posts about `Monoid` ∗. First, Dan Piponi wrote a great introductory post, Haskell Monoids and their Uses. This was quickly followed by Heinrich Apfelmus’s Monoids and Finger Trees, an accessible exposition of Hinze and Paterson’s classic paper on 2-3 finger trees, which makes very clever use of `Monoid` to implement an elegant and generic data structure. Dan Piponi then wrote two fascinating articles about using `Monoids` (and finger trees): Fast Incremental Regular Expressions and Beyond Regular Expressions In a similar vein, David Place’s article on improving `Data.Map` in order to compute incremental folds (see the Monad Reader issue 11) is also a good example of using `Monoid` to generalize a data structure. 
Some other interesting examples of `Monoid` use include building elegant list sorting combinators, collecting unstructured information, and a brilliant series of posts by Chung-Chieh Shan and Dylan Thurston using `Monoid`s to elegantly solve a difficult combinatorial puzzle (followed by part 2, part 3, part 4). As unlikely as it sounds, monads can actually be viewed as a sort of monoid, with `join` playing the role of the binary operation and `return` the role of the identity; see Dan Piponi’s blog post. # 10 Foldable The `Foldable` class, defined in the `Data.Foldable` module (haddock), abstracts over containers which can be “folded” into a summary value. This allows such folding operations to be written in a container-agnostic way. ## 10.1 Definition The definition of the `Foldable` type class is: ```class Foldable t where fold :: Monoid m => t m -> m foldMap :: Monoid m => (a -> m) -> t a -> m   foldr :: (a -> b -> b) -> b -> t a -> b foldl :: (a -> b -> a) -> a -> t b -> a foldr1 :: (a -> a -> a) -> t a -> a foldl1 :: (a -> a -> a) -> t a -> a``` This may look complicated, but in fact, to make a `Foldable` instance you only need to implement one method: your choice of `foldMap` or `foldr`. All the other methods have default implementations in terms of these, and are presumably included in the class in case more efficient implementations can be provided. ## 10.2 Instances and examples The type of `foldMap` should make it clear what it is supposed to do: given a way to convert the data in a container into a `Monoid` (a function `a -> m`) and a container of `a`’s (`t a`), `foldMap` provides a way to iterate over the entire contents of the container, converting all the `a`’s to `m`’s and combining all the `m`’s with `mappend`. The following code shows two examples: a simple implementation of `foldMap` for lists, and a binary tree example provided by the `Foldable` documentation. ```instance Foldable [] where foldMap g = mconcat . map g   data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)   instance Foldable Tree where foldMap f Empty = mempty foldMap f (Leaf x) = f x foldMap f (Node l k r) = foldMap f l ++ f k ++ foldMap f r where (++) = mappend``` The `foldr` function has a type similar to the `foldr` found in the `Prelude`, but more general, since the `foldr` in the `Prelude` works only on lists. The `Foldable` module also provides instances for `Maybe` and `Array`; additionally, many of the data structures found in the standard containers library (for example, `Map`, `Set`, `Tree`, and `Sequence`) provide their own `Foldable` instances. ## 10.3 Derived folds Given an instance of `Foldable`, we can write generic, container-agnostic functions such as: ```-- Compute the size of any container. containerSize :: Foldable f => f a -> Int containerSize = getSum . foldMap (const (Sum 1))   -- Compute a list of elements of a container satisfying a predicate. filterF :: Foldable f => (a -> Bool) -> f a -> [a] filterF p = foldMap (\a -> if p a then [a] else [])   -- Get a list of all the Strings in a container which include the -- letter a. aStrings :: Foldable f => f String -> [String] aStrings = filterF (elem 'a')``` The `Foldable` module also provides a large number of predefined folds, many of which are generalized versions of `Prelude` functions of the same name that only work on lists: `concat`, `concatMap`, `and`, `or`, `any`, `all`, `sum`, `product`, `maximum`(`By`), `minimum`(`By`), `elem`, `notElem`, and `find`. 
The reader may enjoy coming up with elegant implementations of these functions using `fold` or `foldMap` and appropriate `Monoid` instances.

There are also generic functions that work with `Applicative` or `Monad` instances to generate some sort of computation from each element in a container, and then perform all the side effects from those computations, discarding the results: `traverse_`, `sequenceA_`, and others. The results must be discarded because the `Foldable` class is too weak to specify what to do with them: we cannot, in general, make an arbitrary `Applicative` or `Monad` instance into a `Monoid`. If we do have an `Applicative` or `Monad` with a monoid structure—that is, an `Alternative` or a `MonadPlus`—then we can use the `asum` or `msum` functions, which can combine the results as well. Consult the `Foldable` documentation for more details on any of these functions.

Note that the `Foldable` operations always forget the structure of the container being folded. If we start with a container of type `t a` for some `Foldable t`, then `t` will never appear in the output type of any operations defined in the `Foldable` module. Many times this is exactly what we want, but sometimes we would like to be able to generically traverse a container while preserving its structure—and this is exactly what the `Traversable` class provides, which will be discussed in the next section.

## 10.4 Foldable actually isn't

TODO: write about how Foldable doesn't actually give you folds (in the technical sense of catamorphisms). It's something weaker, equivalent to a fold over a list. Often this is sufficient, but not always.

## 10.5 Further reading

The `Foldable` class had its genesis in McBride and Paterson’s paper introducing `Applicative`, although it has been fleshed out quite a bit from the form in the paper. An interesting use of `Foldable` (as well as `Traversable`) can be found in Janis Voigtländer’s paper Bidirectionalization for free!.

# 11 Traversable

## 11.1 Definition

The `Traversable` type class, defined in the `Data.Traversable` module (haddock), is:

```
class (Functor t, Foldable t) => Traversable t where
  traverse  :: Applicative f => (a -> f b) -> t a -> f (t b)
  sequenceA :: Applicative f => t (f a) -> f (t a)
  mapM      :: Monad m => (a -> m b) -> t a -> m (t b)
  sequence  :: Monad m => t (m a) -> m (t a)
```

As you can see, every `Traversable` is also a foldable functor. Like `Foldable`, there is a lot in this type class, but making instances is actually rather easy: one need only implement `traverse` or `sequenceA`; the other methods all have default implementations in terms of these functions. A good exercise is to figure out what the default implementations should be: given either `traverse` or `sequenceA`, how would you define the other three methods? (Hint for `mapM`: `Control.Applicative` exports the `WrapMonad` newtype, which makes any `Monad` into an `Applicative`. The `sequence` function can be implemented in terms of `mapM`.)

## 11.2 Intuition

The key method of the `Traversable` class, and the source of its unique power, is `sequenceA`. Consider its type:

`sequenceA :: Applicative f => t (f a) -> f (t a)`

This answers the fundamental question: when can we commute two functors? For example, can we turn a tree of lists into a list of trees? (Answer: yes, in two ways. Figuring out what they are, and why, is left as an exercise. A much more challenging question is whether a list of trees can be turned into a tree of lists.)
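As a small concrete illustration of commuting functors (my own GHCi-style sketch, not from the text; it does not give away the tree-of-lists exercise):

```
-- commuting [] past Maybe:
-- sequenceA [Just 1, Just 2, Just 3]   ==>  Just [1,2,3]
-- sequenceA [Just 1, Nothing, Just 3]  ==>  Nothing

-- commuting [] past []:
-- sequenceA [[1,2],[3,4]]              ==>  [[1,3],[1,4],[2,3],[2,4]]
```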
The ability to compose two monads depends crucially on this ability to commute functors. Intuitively, if we want to build a composed monad `M a = m (n a)` out of monads `m` and `n`, then to be able to implement `join :: M (M a) -> M a`, that is, `join :: m (n (m (n a))) -> m (n a)`, we have to be able to commute the `n` past the `m` to get `m (m (n (n a)))`, and then we can use the `join`s for `m` and `n` to produce something of type `m (n a)`. See Mark Jones’s paper for more details.

## 11.3 Instances and examples

What’s an example of a `Traversable` instance? The following code shows an example instance for the same `Tree` type used as an example in the previous `Foldable` section. It is instructive to compare this instance with a `Functor` instance for `Tree`, which is also shown.

```
data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)

instance Traversable Tree where
  traverse g Empty        = pure Empty
  traverse g (Leaf x)     = Leaf <$> g x
  traverse g (Node l x r) = Node <$> traverse g l <*> g x <*> traverse g r

instance Functor Tree where
  fmap g Empty        = Empty
  fmap g (Leaf x)     = Leaf $ g x
  fmap g (Node l x r) = Node (fmap g l) (g x) (fmap g r)
```

It should be clear that the `Traversable` and `Functor` instances for `Tree` are almost identical; the only difference is that the `Functor` instance involves normal function application, whereas the applications in the `Traversable` instance take place within an `Applicative` context, using `(<$>)` and `(<*>)`. In fact, this will be true for any type.

Any `Traversable` functor is also `Foldable`, and a `Functor`. We can see this not only from the class declaration, but by the fact that we can implement the methods of both classes given only the `Traversable` methods. A good exercise is to implement `fmap` and `foldMap` using only the `Traversable` methods; the implementations are surprisingly elegant. The `Traversable` module provides these implementations as `fmapDefault` and `foldMapDefault`.

The standard libraries provide a number of `Traversable` instances, including instances for `[]`, `Maybe`, `Map`, `Tree`, and `Sequence`. Notably, `Set` is not `Traversable`, although it is `Foldable`.

## 11.4 Further reading

The `Traversable` class also had its genesis in McBride and Paterson’s `Applicative` paper, and is described in more detail in Gibbons and Oliveira, The Essence of the Iterator Pattern, which also contains a wealth of references to related work.

# 12 Category

`Category` is a relatively recent addition to the Haskell standard libraries. It generalizes the notion of function composition to general “morphisms”.

∗ GHC 7.6.1 changed its rules regarding types and type variables. Now, any operator at the type level is treated as a type constructor rather than a type variable; prior to GHC 7.6.1 it was possible to use `(~>)` instead of ``arr``. For more information, see the discussion on the GHC-users mailing list. For a new approach to nice arrow notation that works with GHC 7.6.1, see this message and also this message from Edward Kmett, though for simplicity I haven't adopted it here.

The definition of the `Category` type class (from `Control.Category`—haddock) is shown below. For ease of reading, note that I have used an infix type variable ``arr``, in parallel with the infix function type constructor `(->)`. ∗ This syntax is not part of Haskell 2010. The second definition shown is the one used in the standard libraries. For the remainder of this document, I will use the infix type constructor ``arr`` for `Category` as well as `Arrow`.
```
class Category arr where
  id  :: a `arr` a
  (.) :: (b `arr` c) -> (a `arr` b) -> (a `arr` c)

-- The same thing, with a normal (prefix) type constructor
class Category cat where
  id  :: cat a a
  (.) :: cat b c -> cat a b -> cat a c
```

Note that an instance of `Category` should be a type constructor which takes two type arguments, that is, something of kind `* -> * -> *`. It is instructive to imagine the type constructor variable `cat` replaced by the function constructor `(->)`: indeed, in this case we recover precisely the familiar identity function `id` and function composition operator `(.)` defined in the standard `Prelude`.

Of course, the `Category` module provides exactly such an instance of `Category` for `(->)`. But it also provides one other instance, shown below, which should be familiar from the previous discussion of the `Monad` laws. `Kleisli m a b`, as defined in the `Control.Arrow` module, is just a `newtype` wrapper around `a -> m b`.

```
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

instance Monad m => Category (Kleisli m) where
  id = Kleisli return
  Kleisli g . Kleisli h = Kleisli (h >=> g)
```

The only law that `Category` instances should satisfy is that `id` and `(.)` should form a monoid—that is, `id` should be the identity of `(.)`, and `(.)` should be associative.

Finally, the `Category` module exports two additional operators: `(<<<)`, which is just a synonym for `(.)`, and `(>>>)`, which is `(.)` with its arguments reversed. (In previous versions of the libraries, these operators were defined as part of the `Arrow` class.)

## 12.1 Further reading

The name `Category` is a bit misleading, since the `Category` class cannot represent arbitrary categories, but only categories whose objects are objects of `Hask`, the category of Haskell types. For a more general treatment of categories within Haskell, see the category-extras package. For more about category theory in general, see the excellent Haskell wikibook page, Steve Awodey’s new book, Benjamin Pierce’s Basic category theory for computer scientists, or Barr and Wells’s category theory lecture notes. Benjamin Russell’s blog post is another good source of motivation and category theory links. You certainly don’t need to know any category theory to be a successful and productive Haskell programmer, but it does lend itself to much deeper appreciation of Haskell’s underlying theory.

# 13 Arrow

The `Arrow` class represents another abstraction of computation, in a similar vein to `Monad` and `Applicative`. However, unlike `Monad` and `Applicative`, whose types only reflect their output, the type of an `Arrow` computation reflects both its input and output. Arrows generalize functions: if `arr` is an instance of `Arrow`, a value of type `b `arr` c` can be thought of as a computation which takes values of type `b` as input, and produces values of type `c` as output. In the `(->)` instance of `Arrow` this is just a pure function; in general, however, an arrow may represent some sort of “effectful” computation.
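As a small sketch of such effectful computations (my own example, not from the text; the file name is hypothetical), ordinary `Category` composition is already enough to chain Kleisli arrows of type `a -> IO b`:

```
import Control.Arrow    (Kleisli(..))
import Control.Category ((>>>))

-- Read a file and count its lines, built from two Kleisli arrows.
countLines :: Kleisli IO FilePath Int
countLines = Kleisli readFile >>> Kleisli (return . length . lines)

-- runKleisli countLines "example.txt" :: IO Int
```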
## 13.1 Definition

The definition of the `Arrow` type class, from `Control.Arrow` (haddock), is:

```
class Category arr => Arrow arr where
  arr    :: (b -> c) -> (b `arr` c)
  first  :: (b `arr` c) -> ((b, d) `arr` (c, d))
  second :: (b `arr` c) -> ((d, b) `arr` (d, c))
  (***)  :: (b `arr` c) -> (b' `arr` c') -> ((b, b') `arr` (c, c'))
  (&&&)  :: (b `arr` c) -> (b `arr` c') -> (b `arr` (c, c'))
```

∗ In versions of the `base` package prior to version 4, there is no `Category` class, and the `Arrow` class includes the arrow composition operator `(>>>)`. It also includes `pure` as a synonym for `arr`, but this was removed since it conflicts with the `pure` from `Applicative`.

The first thing to note is the `Category` class constraint, which means that we get identity arrows and arrow composition for free: given two arrows `g :: b `arr` c` and `h :: c `arr` d`, we can form their composition `g >>> h :: b `arr` d` ∗.

As should be a familiar pattern by now, the only methods which must be defined when writing a new instance of `Arrow` are `arr` and `first`; the other methods have default definitions in terms of these, but are included in the `Arrow` class so that they can be overridden with more efficient implementations if desired.

## 13.2 Intuition

Let’s look at each of the arrow methods in turn. Ross Paterson’s web page on arrows has nice diagrams which can help build intuition.

• The `arr` function takes any function `b -> c` and turns it into a generalized arrow `b `arr` c`. The `arr` method justifies the claim that arrows generalize functions, since it says that we can treat any function as an arrow. It is intended that the arrow `arr g` is “pure” in the sense that it only computes `g` and has no “effects” (whatever that might mean for any particular arrow type).

• The `first` method turns any arrow from `b` to `c` into an arrow from `(b,d)` to `(c,d)`. The idea is that `first g` uses `g` to process the first element of a tuple, and lets the second element pass through unchanged. For the function instance of `Arrow`, of course, `first g (x,y) = (g x, y)`.

• The `second` function is similar to `first`, but with the elements of the tuples swapped. Indeed, it can be defined in terms of `first` using an auxiliary function `swap`, defined by `swap (x,y) = (y,x)`.

• The `(***)` operator is “parallel composition” of arrows: it takes two arrows and makes them into one arrow on tuples, which has the behavior of the first arrow on the first element of a tuple, and the behavior of the second arrow on the second element. The mnemonic is that `g *** h` is the product (hence `*`) of `g` and `h`. For the function instance of `Arrow`, we define `(g *** h) (x,y) = (g x, h y)`. The default implementation of `(***)` is in terms of `first`, `second`, and sequential arrow composition `(>>>)`. The reader may also wish to think about how to implement `first` and `second` in terms of `(***)`.

• The `(&&&)` operator is “fanout composition” of arrows: it takes two arrows `g` and `h` and makes them into a new arrow `g &&& h` which supplies its input as the input to both `g` and `h`, returning their results as a tuple. The mnemonic is that `g &&& h` performs both `g` and `h` (hence `&`) on its input. For functions, we define `(g &&& h) x = (g x, h x)`.

## 13.3 Instances

The `Arrow` library itself only provides two `Arrow` instances, both of which we have already seen: `(->)`, the normal function constructor, and `Kleisli m`, which makes functions of type `a -> m b` into `Arrow`s for any `Monad m`.
These instances are:

```
instance Arrow (->) where
  arr g = g
  first g (x,y) = (g x, y)

newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

instance Monad m => Arrow (Kleisli m) where
  arr f = Kleisli (return . f)
  first (Kleisli f) = Kleisli (\ ~(b,d) -> do c <- f b
                                              return (c,d) )
```

## 13.4 Laws

There are quite a few laws that instances of `Arrow` should satisfy ∗:

```
arr id = id
arr (h . g) = arr g >>> arr h
first (arr g) = arr (g *** id)
first (g >>> h) = first g >>> first h
first g >>> arr (id *** h) = arr (id *** h) >>> first g
first g >>> arr fst = arr fst >>> g
first (first g) >>> arr assoc = arr assoc >>> first g

assoc ((x,y),z) = (x,(y,z))
```

Note that this version of the laws is slightly different than the laws given in the first two above references, since several of the laws have now been subsumed by the `Category` laws (in particular, the requirements that `id` is the identity arrow and that `(>>>)` is associative). The laws shown here follow those in Paterson’s Programming with Arrows, which uses the `Category` class.

∗ Unless category-theory-induced insomnolence is your cup of tea.

The reader is advised not to lose too much sleep over the `Arrow` laws ∗, since it is not essential to understand them in order to program with arrows. There are also laws that `ArrowChoice`, `ArrowApply`, and `ArrowLoop` instances should satisfy; the interested reader should consult Paterson: Programming with Arrows.

## 13.5 ArrowChoice

Computations built using the `Arrow` class, like those built using the `Applicative` class, are rather inflexible: the structure of the computation is fixed at the outset, and there is no ability to choose between alternate execution paths based on intermediate results. The `ArrowChoice` class provides exactly such an ability:

```
class Arrow arr => ArrowChoice arr where
  left  :: (b `arr` c) -> (Either b d `arr` Either c d)
  right :: (b `arr` c) -> (Either d b `arr` Either d c)
  (+++) :: (b `arr` c) -> (b' `arr` c') -> (Either b b' `arr` Either c c')
  (|||) :: (b `arr` d) -> (c `arr` d) -> (Either b c `arr` d)
```

A comparison of `ArrowChoice` to `Arrow` will reveal a striking parallel between `left`, `right`, `(+++)`, `(|||)` and `first`, `second`, `(***)`, `(&&&)`, respectively. Indeed, they are dual: `first`, `second`, `(***)`, and `(&&&)` all operate on product types (tuples), and `left`, `right`, `(+++)`, and `(|||)` are the corresponding operations on sum types. In general, these operations create arrows whose inputs are tagged with `Left` or `Right`, and can choose how to act based on these tags.

• If `g` is an arrow from `b` to `c`, then `left g` is an arrow from `Either b d` to `Either c d`. On inputs tagged with `Left`, the `left g` arrow has the behavior of `g`; on inputs tagged with `Right`, it behaves as the identity.

• The `right` function, of course, is the mirror image of `left`. The arrow `right g` has the behavior of `g` on inputs tagged with `Right`.

• The `(+++)` operator performs “multiplexing”: `g +++ h` behaves as `g` on inputs tagged with `Left`, and as `h` on inputs tagged with `Right`. The tags are preserved. The `(+++)` operator is the sum (hence `+`) of two arrows, just as `(***)` is the product.

• The `(|||)` operator is “merge” or “fanin”: the arrow `g ||| h` behaves as `g` on inputs tagged with `Left`, and `h` on inputs tagged with `Right`, but the tags are discarded (hence, `g` and `h` must have the same output type). The mnemonic is that `g ||| h` performs either `g` or `h` on its input.
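For instance (a sketch of my own, not from the text), the `ArrowChoice` instance for `(->)` lets these combinators be used on ordinary functions:

```
import Control.Arrow ((>>>), (|||))

-- Tag an Int by parity, then handle the two cases with different functions.
classify :: Int -> Either Int Int
classify n = if even n then Left n else Right n

collatzStep :: Int -> Int
collatzStep = classify >>> ((`div` 2) ||| (\n -> 3 * n + 1))

-- collatzStep 6 == 3;  collatzStep 7 == 22
```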
The `ArrowChoice` class allows computations to choose among a finite number of execution paths, based on intermediate results. The possible execution paths must be known in advance, and explicitly assembled with `(+++)` or `(|||)`. However, sometimes more flexibility is needed: we would like to be able to compute an arrow from intermediate results, and use this computed arrow to continue the computation. This is the power given to us by `ArrowApply`.

## 13.6 ArrowApply

The `ArrowApply` type class is:

```
class Arrow arr => ArrowApply arr where
  app :: (b `arr` c, b) `arr` c
```

If we have computed an arrow as the output of some previous computation, then `app` allows us to apply that arrow to an input, producing its output as the output of `app`. As an exercise, the reader may wish to use `app` to implement an alternative “curried” version, `app2 :: b `arr` ((b `arr` c) `arr` c)`.

This notion of being able to compute a new computation may sound familiar: this is exactly what the monadic bind operator `(>>=)` does. It should not particularly come as a surprise that `ArrowApply` and `Monad` are exactly equivalent in expressive power. In particular, `Kleisli m` can be made an instance of `ArrowApply`, and any instance of `ArrowApply` can be made a `Monad` (via the `newtype` wrapper `ArrowMonad`). As an exercise, the reader may wish to try implementing these instances:

```
instance Monad m => ArrowApply (Kleisli m) where
  app =    -- exercise

newtype ArrowApply a => ArrowMonad a b = ArrowMonad (a () b)

instance ArrowApply a => Monad (ArrowMonad a) where
  return =               -- exercise
  (ArrowMonad a) >>= k = -- exercise
```

## 13.7 ArrowLoop

The `ArrowLoop` type class is:

```
class Arrow a => ArrowLoop a where
  loop :: a (b, d) (c, d) -> a b c

trace :: ((b,d) -> (c,d)) -> b -> c
trace f b = let (c,d) = f (b,d) in c
```

It describes arrows that can use recursion to compute results, and is used to desugar the `rec` construct in arrow notation (described below).

Taken by itself, the type of the `loop` method does not seem to tell us much. Its intention, however, is a generalization of the `trace` function which is also shown. The `d` component of the first arrow’s output is fed back in as its own input. In other words, the arrow `loop g` is obtained by recursively “fixing” the second component of the input to `g`.

It can be a bit difficult to grok what the `trace` function is doing. How can `d` appear on the left and right sides of the `let`? Well, this is Haskell’s laziness at work. There is not space here for a full explanation; the interested reader is encouraged to study the standard `fix` function, and to read Paterson’s arrow tutorial.

## 13.8 Arrow notation

Programming directly with the arrow combinators can be painful, especially when writing complex computations which need to retain simultaneous reference to a number of intermediate results. With nothing but the arrow combinators, such intermediate results must be kept in nested tuples, and it is up to the programmer to remember which intermediate results are in which components, and to swap, reassociate, and generally mangle tuples as necessary.

This problem is solved by the special arrow notation supported by GHC, similar to `do` notation for monads, that allows names to be assigned to intermediate results while building up arrow computations.
An example arrow implemented using arrow notation, taken from Paterson, is:

```
class ArrowLoop arr => ArrowCircuit arr where
  delay :: b -> (b `arr` b)

counter :: ArrowCircuit arr => Bool `arr` Int
counter = proc reset -> do
  rec output <- idA     -< if reset then 0 else next
      next   <- delay 0 -< output + 1
  idA -< output
```

This arrow is intended to represent a recursively defined counter circuit with a reset line.

There is not space here for a full explanation of arrow notation; the interested reader should consult Paterson’s paper introducing the notation, or his later tutorial which presents a simplified version.

## 13.9 Further reading

An excellent starting place for the student of arrows is the arrows web page, which contains an introduction and many references. Some key papers on arrows include Hughes’s original paper introducing arrows, Generalising monads to arrows, and Paterson’s paper on arrow notation.

Both Hughes and Paterson later wrote accessible tutorials intended for a broader audience: Paterson: Programming with Arrows and Hughes: Programming with Arrows.

Although Hughes’s goal in defining the `Arrow` class was to generalize `Monad`s, and it has been said that `Arrow` lies “between `Applicative` and `Monad`” in power, they are not directly comparable. The precise relationship remained in some confusion until analyzed by Lindley, Wadler, and Yallop, who also invented a new calculus of arrows, based on the lambda calculus, which considerably simplifies the presentation of the arrow laws (see The arrow calculus).

Some examples of `Arrow`s include Yampa, the Haskell XML Toolkit, and the functional GUI library Grapefruit.

Some extensions to arrows have been explored; for example, the `BiArrow`s of Alimarine et al., for two-way instead of one-way computation.

The Haskell wiki has links to many additional research papers relating to `Arrow`s.

# 14 Comonad

The final type class we will examine is `Comonad`. The `Comonad` class is the categorical dual of `Monad`; that is, `Comonad` is like `Monad` but with all the function arrows flipped. It is not actually in the standard Haskell libraries, but it has seen some interesting uses recently, so we include it here for completeness.

## 14.1 Definition

The `Comonad` type class, defined in the `Control.Comonad` module of the comonad library, is:

```
class Functor f => Copointed f where
  extract :: f a -> a

class Copointed w => Comonad w where
  duplicate :: w a -> w (w a)
  extend    :: (w a -> b) -> w a -> w b
```

As you can see, `extract` is the dual of `return`, `duplicate` is the dual of `join`, and `extend` is the dual of `(>>=)` (although its arguments are in a different order). The definition of `Comonad` is a bit redundant (after all, the `Monad` class does not need `join`), but this is so that a `Comonad` can be defined by `fmap`, `extract`, and either `duplicate` or `extend`. Each has a default implementation in terms of the other.
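For instance, here is a sketch of those mutual default implementations (my own rendering of what the text describes, written against the class as shown above; the actual comonad library may arrange the details differently):

```
duplicateDefault :: Comonad w => w a -> w (w a)
duplicateDefault = extend id

extendDefault :: Comonad w => (w a -> b) -> w a -> w b
extendDefault g = fmap g . duplicate
```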
A prototypical example of a `Comonad` instance is: ```-- Infinite lazy streams data Stream a = Cons a (Stream a)   instance Functor Stream where fmap g (Cons x xs) = Cons (g x) (fmap g xs)   instance Copointed Stream where extract (Cons x _) = x   -- 'duplicate' is like the list function 'tails' -- 'extend' computes a new Stream from an old, where the element -- at position n is computed as a function of everything from -- position n onwards in the old Stream instance Comonad Stream where duplicate s@(Cons x xs) = Cons s (duplicate xs) extend g s@(Cons x xs) = Cons (g s) (extend g xs) -- = fmap g (duplicate s)``` ## 14.2 Further reading Dan Piponi explains in a blog post what cellular automata have to do with comonads. In another blog post, Conal Elliott has examined a comonadic formulation of functional reactive programming. Sterling Clover’s blog post Comonads in everyday life explains the relationship between comonads and zippers, and how comonads can be used to design a menu system for a web site. Uustalu and Vene have a number of papers exploring ideas related to comonads and functional programming: # 15 Acknowledgements A special thanks to all of those who taught me about standard Haskell type classes and helped me develop good intuition for them, particularly Jules Bean (quicksilver), Derek Elkins (ddarius), Conal Elliott (conal), Cale Gibbard (Cale), David House, Dan Piponi (sigfpe), and Kevin Reid (kpreid). I also thank the many people who provided a mountain of helpful feedback and suggestions on a first draft of the Typeclassopedia: David Amos, Kevin Ballard, Reid Barton, Doug Beardsley, Joachim Breitner, Andrew Cave, David Christiansen, Gregory Collins, Mark Jason Dominus, Conal Elliott, Yitz Gale, George Giorgidze, Steven Grady, Travis Hartwell, Steve Hicks, Philip Hölzenspies, Edward Kmett, Eric Kow, Serge Le Huitouze, Felipe Lessa, Stefan Ljungstrand, Eric Macaulay, Rob MacAulay, Simon Meier, Eric Mertens, Tim Newsham, Russell O’Connor, Conrad Parker, Walt Rorie-Baety, Colin Ross, Tom Schrijvers, Aditya Siram, C. Smith, Martijn van Steenbergen, Joe Thornber, Jared Updike, Rob Vollmert, Andrew Wagner, Louis Wasserman, and Ashley Yakeley, as well as a few only known to me by their IRC nicks: b_jonas, maltem, tehgeekmeister, and ziman. I have undoubtedly omitted a few inadvertently, which in no way diminishes my gratitude. Finally, I would like to thank Wouter Swierstra for his fantastic work editing the Monad.Reader, and my wife Joyia for her patience during the process of writing the Typeclassopedia. # 16 About the author Brent Yorgey (blog, homepage) is (as of November 2011) a fourth-year Ph.D. student in the programming languages group at the University of Pennsylvania. He enjoys teaching, creating EDSLs, playing Bach fugues, musing upon category theory, and cooking tasty lambda-treats for the denizens of #haskell. # 17 Colophon The Typeclassopedia was written by Brent Yorgey and initally published in March 2009. Painstakingly converted to wiki syntax by User:Geheimdienst in November 2011, after asking Brent’s permission. If something like this tex to wiki syntax conversion ever needs to be done again, here are some vim commands that helped: • %s/\\section{\([^}]*\)}/=\1=/gc • %s/\\subsection{\([^}]*\)}/==\1==/gc • %s/^ *\\item /\r* /gc • %s/---/—/gc • %s/\\$\([^\$]*\)\\$/<math>\1\\ <\/math>/gc Appending “\ ” forces images to be rendered. 
Otherwise, Mediawiki would go back and forth between one font for short <math> tags, and another more Tex-like font for longer tags (containing more than a few characters)"" • %s/|\([^|]*\)|/<code>\1<\/code>/gc • %s/\\dots/.../gc • %s/^\\label{.*\$//gc • %s/\\emph{\([^}]*\)}/''\1''/gc • %s/\\term{\([^}]*\)}/''\1''/gc The biggest issue was taking the academic-paper-style citations and turning them into hyperlinks with an appropriate title and an appropriate target. In most cases there was an obvious thing to do (e.g. online PDFs of the cited papers or Citeseer entries). Sometimes, however, it’s less clear and you might want to check the original Typeclassopedia PDF with the original bibliography file. To get all the citations into the main text, I first tried processing the source with Tex or Lyx. This didn’t work due to missing unfindable packages, syntax errors, and my general ineptitude with Tex. I then went for the next best solution, which seemed to be extracting all instances of “\cite{something}” from the source and in that order pulling the referenced entries from the .bib file. This way you can go through the source file and sorted-references file in parallel, copying over what you need, without searching back and forth in the .bib file. I used: • egrep -o "\cite\{[^\}]*\}" ~/typeclassopedia.lhs | cut -c 6- | tr "," "\n" | tr -d "}" > /tmp/citations • for i in \$(cat /tmp/citations); do grep -A99 "\$i" ~/typeclassopedia.bib|egrep -B99 '^\}\$' -m1 ; done > ~/typeclasso-refs-sorted
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 24, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9252167344093323, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/198615/number-has-terminating-decimal-iff-its-denominator-when-in-lowest-terms-is-co
# Number has terminating decimal iff its denominator, when in lowest terms, is coprime to all primes except 2 and 5.

Show that a number has a terminating decimal expansion if and only if it is rational and, when in lowest terms, its denominator is coprime to all primes other than 2 and 5.

This is an unsolved question in my lecture notes. I can only seem to prove the converse direction for this. Would appreciate a solution for the other direction.

For the converse direction: let the number, in lowest terms, be given by $\frac{l}{m}$ where $m = 2^\alpha 5^\beta$, for some nonnegative integers $\alpha$ and $\beta$.

If $\alpha > \beta$, let $k = 5^{(\alpha-\beta)}$; if $\alpha < \beta$, let $k = 2^{(\beta-\alpha)}$; and if $\alpha = \beta$, let $k = 1$.

Then $\frac{l}{m} = \frac{kl}{k2^\alpha5^\beta}=\frac{kl}{10^q}$ where $q = \max(\alpha,\beta)$. Hence $\frac{l}{m}$ is a terminating decimal.

-

## 3 Answers

If the denominator has a prime factor other than $2$ and $5$, there is no power of $10$ that it divides. If you assume it terminates after $n$ decimals, the denominator must divide $10^n$.

- That's not enough to yield a rigorous proof - see my answer. – Gone Sep 18 '12 at 18:08

Let $n$ be the terminating decimal in question, and let $a =n\cdot10^m$, where $m$ is the number of places in the decimal expansion after the decimal point. Then $n=\frac{a}{10^m}$, the denominator of which is co-prime to any prime other than 2 or 5, no matter how many cancellations occur when simplifying.

-

This is a consequence of unique factorization, which implies uniqueness of the lowest-terms representation of fractions. Suppose the real $\rm\,r\,$ has $\rm\:k\:$ nonzero digits after the decimal point. Then multiplying it by $\rm\,10^k$ shifts the decimal point right by $\rm\,k\,$ digits, hence yields an integer, i.e. $\rm\: 10^k r = n\in \Bbb Z.\:$ Thus $\rm\: r = n/10^k\:$ so canceling common factors to reduce this fraction to lowest terms yields a fraction whose denominator divides $\rm\:10^k\! = 2^k 5^k.\:$ By unique factorization the only such divisors are $\rm\:2^i 5^j\:$ for $\rm\:i,j \le k.\:$ Also by unique factorization the lowest terms representation of a fraction is unique, so there cannot exist another equivalent fraction in lowest terms whose denominator has prime factors other than $2$ and $5$. This completes the proof.

-
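(Added for concreteness: a small worked example of both directions, not part of the original question.) The terminating decimal $0.175$ gives $$0.175 = \frac{175}{10^3} = \frac{7}{40}, \qquad 40 = 2^3\cdot 5,$$ and conversely $$\frac{7}{40} = \frac{7\cdot 5^2}{2^3\cdot 5^3} = \frac{175}{10^3} = 0.175.$$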
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9061077237129211, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/90499/how-to-compute-this-integral-by-converting-it-into-polar-coordinate?answertab=votes
# How to compute this integral by converting it into polar coordinate? The problem is to convert this integral from $dxdy$ form into $rdrd\theta$ form , please upload the graph also. I have the problem especially on the domain of the $\theta$ when convert from Cartesian coordinate into polar coordinate, thanks in advance for your help. $$\int ^2 _{-2} \int ^3 _{-3} (x^2+y^2)\,dx\,dy$$ I just want to learn how to convert this into polar coordinate and learn how to compute this in polar coordinate as a example The answer is $$\int_{-\arctan \frac{3}{2}}^{\arctan \frac{3}{2}} \int_0^{\frac{2}{\cos \theta}} r^2 r\,dr\,d\theta,$$ please explain - 6 I would suggest not using polar coordinates for this... – David Mitra Dec 11 '11 at 17:22 1 I second David's remark. This is trivial to do in $x,y$ coordinates and the region is perfectly adapted to that. – user20266 Dec 11 '11 at 17:33 I think this is course homework. If the borders are explicitly known then I wouldn't use polar coordinates either, but in this case it isn't too bad to carry the (unnecessarily tedious) calculation out with all the details. – David Heider Dec 11 '11 at 17:36 @DavidHeider - So just assume i just have a problem with this problem in a homework with many question – Victor Dec 11 '11 at 17:38 @Victor: This sounds pretty much like a homework exercise to me. But I may be mistaken. In this case I apologize. But why do you use such a poorly chosen example to learn polar coordinates? Why don't you use polar coordinates and calculate e.g. the area of an ellipsis, or a ball with a function defined in it? – David Heider Dec 11 '11 at 18:05 show 4 more comments ## 2 Answers The reason people are suggesting not doing this in polar coordinates is that although the integrand is nice in those coordinates, the region of integration is not. Converting the integral $\int ^2 _{-2} \int ^3 _{-3} (x^2+y^2)dxdy=\int \int r^2 r\;dr\;d\theta$ gives you something easy, but expressing the region of integration is tougher. I would break it into four pieces, with one being $\int_{-\arctan \frac{3}{2}}^{\arctan \frac{3}{2}} \int_0^{\frac{2}{\cos \theta}} r^2 r\;dr\;d\theta$. This represents the triangle $(0,0), (2,-3), (2,3)$ - May i ask you how do you express that into polar coordinate? – Victor Dec 11 '11 at 18:00 Use the transformations I have included in my edited article. – David Heider Dec 11 '11 at 18:07 @Victor: I don't understand what needs expressing in polar coordinates. The triangle is represented by its sides: the two rays at angles $\pm \arctan \frac{3}{2}$ and the line $r \cos \theta=2$, which is $x=2$. The $r$ integral is easy. I didn't try the $\theta$ one. – Ross Millikan Dec 11 '11 at 19:38 Check Wikipedia for the inverse transformation of polar coordinates. Note, that $x^2+y^2=r^2$, and that $rdrd\theta = dxdy$. By using the inverse transformation, you can easily find the upper and lower bound for the integral for both $r,\theta$. I would like to refer you to Wikipedia for that as it is rewarding to carry this calculation out on one's own. The inverse transformations are given by: $r=\sqrt{x^2+y^2}, \theta = \arctan\left(y/x\right)$ (Watch the signs!!!). Now simply find the maximal and minimal values of both$r,\theta$. However, I'd advise you not to use polar coordinates. The region is well-adapted, actually, it is a square. But since you want to perform this tedious calculation... 
Try to make use of the inverse transformation (Frankly, I don't know the result in polar coordinates) and try to calculate this integral in polar coordinates. Then however, calculate it in Cartesian coordinates. If you reach the same result you're done. - 1 May you show me how to compute it in polar coordinate, actually i am not a college student yet, so i would like to learn it for my own interest – Victor Dec 11 '11 at 17:36 As I said before, check Wikipedia for the inverse transformation. If you learn it for your own interest, then you should have a high level of frustration tolerance. OK, I'll edit my answer. – David Heider Dec 11 '11 at 17:56
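(For reference: my own arithmetic, added as the Cartesian check recommended above.) $$\int_{-2}^{2}\int_{-3}^{3}(x^2+y^2)\,dx\,dy=\int_{-2}^{2}\bigl(18+6y^2\bigr)\,dy=72+32=104,$$ so any correct polar-coordinate decomposition of the rectangle must also total $104$.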
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9367461800575256, "perplexity_flag": "head"}
http://en.m.wikibooks.org/wiki/Operations_Research/Transportation_and_Assignment_Problem
# Operations Research/Transportation and Assignment Problem The Transportation and Assignment problems deal with assigning sources and jobs to destinations and machines. We will discuss the transportation problem first. Suppose a company has m factories where it manufactures its product and n outlets from where the product is sold. Transporting the product from a factory to an outlet costs some money which depends on several factors and varies for each choice of factory and outlet. The total amount of the product a particular factory makes is fixed and so is the total amount a particular outlet can store. The problem is to decide how much of the product should be supplied from each factory to each outlet so that the total cost is minimum. Let us consider an example. Suppose an auto company has three plants in cities A, B and C and two major distribution centers in D and E. The capacities of the three plants during the next quarter are 1000, 1500 and 1200 cars. The quarterly demands of the two distribution centers are 2300 and 1400 cars. The transportation costs (which depend on the mileage, transport company etc) between the plants and the distribution centers is as follows: | | | | |------------|---------------|---------------| | Cost Table | Dist Center D | Dist Center E | | Plant A | 80 | 215 | | Plant B | 100 | 108 | | Plant C | 102 | 68 | Which plant should supply how many cars to which outlet so that the total cost is minimum? The problem can be formulated as a LP model: Let $x_{ij}$ be the amount of cars to be shipped from source i to destination j. Then our objective is to minimize the total cost which is $80x_{11}+215x_{12}+100x_{21}+108x_{22}+102x_{31}+68x_{32}$. The constraints are the ones imposed by the amount of cars to be transported from each plant and the amount each center can absorb. The whole model is: Minimize z = $80x_{11}+215x_{12}+100x_{21}+108x_{22}+102x_{31}+68x_{32}$ subject to, $x_{11}+x_{12}=1000$; $x_{21}+x_{22}=1500$; $x_{31}+x_{32}=1200$; $x_{11}+x_{21}+x_{31}=2300$; $x_{12}+x_{22}+x_{32}=1400$; $x_{ij}\ge 0$ and integer, i = 1,2,3, j = 1,2. The problem can now be solved using the simplex method. A convenient procedure is discussed in the next section.
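The same pattern works for any balanced transportation problem. With supplies $s_i$ at $m$ sources, demands $d_j$ at $n$ destinations and unit costs $c_{ij}$ (this general notation is introduced here for illustration; it is not used in the example above), the model is:

Minimize z = $\sum_{i=1}^{m}\sum_{j=1}^{n}c_{ij}x_{ij}$ subject to, $\sum_{j=1}^{n}x_{ij}=s_i$ for $i = 1,\dots,m$; $\sum_{i=1}^{m}x_{ij}=d_j$ for $j = 1,\dots,n$; $x_{ij}\ge 0$ and integer.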
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9421629309654236, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/95719?sort=votes
## on the difference of exponential random variables ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Assume two random variables X,Y are exponentially distributed with rates p and q respectively, and we know that the r.v. X-Y is distributed like X'-Y' where X',Y'are exponential random variables, independent among themselves and independent of X andY, with rates p and q. Does it follow that X and Y are independent? - 1 You may have better luck at stats.stackexchange.com. In particular, this looks to me like a homework question, and MO is not for homework. – David Roberts May 2 2012 at 6:06 Yes. If it's not homework, you should tell us the context... (read the FAQ) – Anthony Quas May 2 2012 at 14:03 No, it's not HW, even though it is a question about two random variables. It arose naturally in my current research project. I am constructing a sequence of random variables in a very complicated geometric way, by sone miracle I can find the one dimensional marginals and the property written above. I want to know whether the sequence is at least pairwise independent. If you do know this question is easy or doable as a Hw, can you at least point me to an appropriate lit reference? – Mensarguens May 2 2012 at 14:48 ## 1 Answer No. Consider an analogous question for the random variables $Z_1$, $Z_2$ uniformly distributed on $\lbrace0,1,2,3\rbrace$ whose sum has the probabilities $1/16, 2/16, 3/16, 4/16, 3/16, 2/16, 1/16$ of taking the values $0, 1, 2, 3, 4, 5, 6$, respectively. Must $Z_1$ and $Z_2$ be independent? No, here is a possible joint distribution: $$\frac 1{16}\begin{pmatrix}1 & 0 & 2 & 1 \\ 2 & 1 & 1 & 0 \\ 0 & 1 & 1 & 2 \\ 1 & 2 & 0 & 1\end{pmatrix}.$$ The sum of each row or column is $1/4$, and the sums of the diagonals are $1/16, 2/16, 3/16, 4/16, 3/16, 2/16, 1/16$. Using this, we can construct random variables uniformly distributed on $[0,1]$ whose sum is a uniform sum distribution, but which are not independent. Let the density of the joint distribution on the unit square be $0$, $1$, or $2$ according to the $16$th of the square and the pattern above. Using this, for any distributions which have densities greater than a positive constant on some intervals, we can write the distributions as mixtures of uniform distributions and something else, and we use the above on the uniform distributions to construct nonindependent copies whose sum has the same distribution as independent copies. Doing this for $X'$ and $-Y'$ produces exponentially distributed random variables which are not independent, but whose difference is the same as the difference of independent exponential random variables. There are some discrete distributions which do have the property that if the sum looks like the variables are independent, then they must be independent. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9192147254943848, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/152645/an-exercise-about-a-hamel-basis
# An exercise about a Hamel basis The following problem is from Golan's linear algebra book. I've been unable to make any progress. Definition: A Hamel basis is a (necessarily infinite dimensional) basis for $\mathbb{R}$ as a vector space over $\mathbb{Q}$. Problem: Let $B$ be a Hamel basis for $\mathbb{R}$ as a vector space over $\mathbb{Q}$ and fix some element $a\in\mathbb{R}$ with $a\neq 0,1$. Show there exists some $y\in B$ with $ay\notin B$. - 1 The existence of Hamel bases to all vector spaces is equivalent to the axiom of choice. The way this is written suggests that a basis for $\mathbb R$ over $\mathbb Q$ may be equivalent to the axiom of choice which is very untrue. – Asaf Karagila Jun 1 '12 at 21:31 You are right. I will fix it. – Potato Jun 1 '12 at 21:31 Thanks. I removed [axiom-of-choice] because it is completely irrelevant to the question (even before the edit). – Asaf Karagila Jun 1 '12 at 21:32 1 Is it $a\neq 0$? If $a=0$ you will never get it into $B$. – rschwieb Jun 1 '12 at 21:34 Yes. Thanks for catching my typo. – Potato Jun 1 '12 at 21:35 show 3 more comments ## 2 Answers Here is the proof of the exercise: Let $B$ be a Hamel basis. Then any real number $r$ can we written uniquely as $\Sigma_{x \in B} {r_x}x$ where the $r_x$ are rational numbers only finitely many of which are nonzero. The function $\alpha : r \to \Sigma_{x \in B} r_x$ is a linear transformation of vector spaces over the rational numbers. Now suppose that $a \ne 1$ and $ax \in B$ for all $x \in B$. Then $\alpha(ar)=\alpha(r)$ for all real numbers $r$. In particular, if $x \in B$ and if $r = x(a-1)^{-1}$ then $1 = \alpha(x) = \alpha([a-1]r) = \alpha(ar) - \alpha(r) = 0$. Contradiction! - Please visit more often, Dr. Golan :) – rschwieb Jun 2 '12 at 19:13 Thanks! Your book is fantastic, by the way. – Potato Jun 2 '12 at 19:53 I am glad you like it. In case you don't know, a third (expanded) edition was published by Springer a few months ago. – Jonathan Golan Jun 3 '12 at 1:56 Completely Revised: Let $f:\Bbb R\to\Bbb R:x\mapsto ax$, and suppose that $f[B]\subseteq B$. Since $f[B]$ is a basis for $\Bbb R$, we must have $f[B]=B$. In particular, $a^nb\in B$ for each $b\in B$ and $n\in\Bbb Z$, and it follows that $a$ must be transcendental. Define a relation $\sim$ on $B$ by $b_0\sim b_1$ iff $b_1=a^nb_0$ for some $n\in\Bbb Z$; $\sim$ is easily seen to be an equivalence relation. Let $T\subseteq B$ contain exactly one representative of each $\sim$-equivalence class. Fix $t\in T$; there are $m\in\Bbb Z^+$ and for $k=1,\dots,m$ distinct $t_k\in T$ and Laurent polynomials $p_k$ with non-zero rational coefficients such that $$\frac{t}{a+1}=\sum_{k=1}^mp_k(a)t_k\;,$$ and hence $$t-\sum_{k=1}^m(a+1)p_k(a)t_k=0\;.$$ But this implies that $m=1$, $t_1=t$, and $(a+1)p_1(a)=1$, making $a$ algebraic, which is impossible. Thus, $f[B]\nsubseteq B$. - 1 I'm a bit confused as to why $B_{\eta + 1}$ is independent. For ease, let's just focus on $B_1$. A linear combination of elements in $B_1$ looks like $p(e) + q(e)x_1$ where $p$ and $q$ are Laurent series with only finitely many positive and negative powers of $e$ appearing. Suppose this combination is $0$. If $q(e) = 0$, we're done by independence of $B_0$. Else, we have $x_1 = -\frac{p(e)}{q(e)}$. Why is this a problem? More specifically, it seems as though, for example $x_1$ could be $\frac{e}{e+1}$, which, if I'm computed correctly is not in the span of $B_0$. – Jason DeVito Jun 2 '12 at 2:10 1 @Jason: You’re right. I was definitely having a bad day. 
(In a way I’m relieved, since I didn’t expect the problem to be in error.) I’ll have another look. – Brian M. Scott Jun 2 '12 at 3:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9421201944351196, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/40655-system-linear-equations.html
# Thread: 1. ## System of linear equations... Hi, I tried to solve these by way of the reduced row echelon, and posted my answers. If anyone could check em', I'd appreciate it. x+ y + 2z = -1 x- 2y + z = -5 3x+ y+ z = 3 (For this one, I got x= 1, y=2, z= -2) Anyone agree/disagree? =================== Next, x+y+z=0 x+z=0 2x+y-2z=o x+5y+5z=0 looks to me like x=y=z=0? But when I reduced the rows, it was showing that 1=0, so could the answer be no solution? Thanks a lot 2. 1. You can always plug in your values of x, y, and z to see if they satisfy the three equations. 2. Every homogeneous system of equations has only the trivial solution (x,y,z) = (0,0,0) or infinitely many non-zero solutions in which the trivial solution is one of them (specifically when there are more unknowns than equations which isn't the case here). There are no other alternatives. When you say you got 1 = 0, did you mean that in your augmented matrix you had a row that was similar to: $\left[ \begin{array}{ccccc} 0 & 0 & 1 & | & 0 \end{array} \right]$. This would mean z = 0, giving your trivial solution. 3. Yes, exactly. I thought it was saying 1=0. Thanks.
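(For reference, substituting the proposed solution $x=1$, $y=2$, $z=-2$ into the first system, as reply 2 suggests, confirms it: $1+2+2(-2)=-1$, $1-2(2)+(-2)=-5$, and $3(1)+2+(-2)=3$.)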
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9679334163665771, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/45950/projectile-motion-with-air-resistance-and-wind?answertab=votes
# Projectile Motion with Air Resistance and Wind I am wondering how the general kinematics equations would change in the following situation. If an object were fired out of a cannon, or some sort of launcher, so that it had both an initial velocity and an initial angle, and air resistance is taken into account, what would be the equations for the x and y components of the position, velocity, and acceleration. Furthermore, I am wondering how these equations would change if there were also a wind blowing at an angle. In essence, what I would like to know is how to rewrite the kinematics equations to take into account the air resistance and moving wind and the terminal velocity. The reason I want to know this is that I am writing a program to model this behavior, but I first need to know these equations. Also, if possible, could someone provide some help on finding equations for the maximum height the projectile reaches, as well as the distance it travels before it hits the ground? I would like both of these to be values the user of the program can find if desired. Oh, and in the scenario of the wind, it can blow from any angle, which means it will affect the x and y velocities and either augment them or lessen them depending on the angle at which it blows. So I guess another request is an explanation of how to obtain the set of equations (position, velocity, acceleration) for the x direction based on whether the wind angle is helpful or hurtful, and how to obtain the set of equations for the y direction, based again on whether the wind is helpful or hurtful. I would naturally have a constraint on the wind velocity so that the object would always inevitably hit the ground, so the force of the wind in the y-direction, if it were blowing upwards, would have to be less than the force of gravity of the object, so that it still fell. Sorry, I know I'm asking a lot, it's just that I really want to understand the principles behind this. Any help at all here would be very much appreciated, but if possible, could whoever responds please try to address all of my questions, numerous though they are? Oh, one final note. As this is being written in a computer program (python 2.7.3, to be exact), I cannot perform any integration or differentiation of the functions. Instead, I will need to create a small time step, dt, and plot the points at each time step over a certain interval. The values of the radius of the object, its mass, its initial velocity and angle, the wind velocity and angle, and dt can all be entered by the user, and the values of wind angle and wind velocity are defaulted to 0, the angle is defaulted to 45 degrees, and dt is defaulted to 0.001, although these values can be changed by the user whenever they desire. Thanks in advance for any help provided! - 3 If you are seeking an answer with a high degree of accuracy, this becomes a HUGELY complex problem dependent on multiple variables including shape of the projectile, pressure/temperature of the air, and a number of other factors. How precise an answer do you need? – Mik Cox Dec 5 '12 at 0:27 A precision of maybe .01 would be fine for my purposes. Maybe even less, depending on how time-consuming the programming is. – Chris Spedden Dec 5 '12 at 0:43 And I recognize its complexity and the variables involved, as I have been working on combining them into a few unified equations for the past few days, but to no avail. Hence my post on here, where people are undoubtedly more talented at such matters than I. 
Whatever you can do will be most helpful! – Chris Spedden Dec 5 '12 at 0:46 Wouldn't the wind have an effect on on the object other than just changing the relative speed? The force of the wind acting on the object would change things, would it not? The wind can act in any direction in this scenario. To clarify, if the angle of the wind is between 0 and 90 degrees, or between 270 and 360 degrees, it will be acting in the positive x direction, thus affecting the acceleration in that direction. And the same for the y direction. Also, how would I tie in terminal velocity to this situation, so that the speed increases, then slows down as it approaches terminal velocity? – Chris Spedden Dec 5 '12 at 1:30 Also, I have seen this document before, but am unsure of something in it. When they say ax = -(D/m)v*vx, what does v*vx mean exactly? – Chris Spedden Dec 5 '12 at 1:30 ## 1 Answer As mentioned in the comments, this is an extremely complex problem if you intend to consider every possible aspect. However, for a general estimation, you can use the relatively simple methods described in this document to begin calculating the effects of air drag on projectiles. Note that in the document cited, they make the assumption that the air is not moving, and begun their derivation from $f = Dv^2$, and this $v$ was relative to the air and therefore the following equations simply used the velocity of the ball. For the more complex case where the air is moving as well, you will need to account for this change and make sure that the x and y components of the force due to drag are calculated using the relative velocity of the projectile through the now-moving air. Also worth noting is the fact that if the wind direction changes, the effective footprint of your projectile will change, thus changing $D$ and therefore the force due to drag. If you are willing to make a reasonable approximation for the average footprint of your projectile, however, this will likely yield a result that is accurate enough for your purposes. Hope this helps! - Wouldn't the wind have an effect on on the object other than just changing the relative speed? The force of the wind acting on the object would change things, would it not? The wind can act in any direction in this scenario. To clarify, if the angle of the wind is between 0 and 90 degrees, or between 270 and 360 degrees, it will be acting in the positive x direction, thus affecting the acceleration in that direction. And the same for the y direction. Also, how would I tie in terminal velocity to this situation, so that the speed increases, then slows down as it approaches terminal velocity? – Chris Spedden Dec 5 '12 at 1:12 Also, I have seen this document before, but am unsure of something in it. When they say ax = -(D/m)v*vx, what does v*vx mean exactly? – Chris Spedden Dec 5 '12 at 1:16 Consider the system in the exact same manner as you would without wind resistance, and then draw a free body diagram and look at acceleration and forces on the projectile. The only thing that air adds to the scenario is the frictional force, always in a direction opposite the projectile's movement. Thus, you should simply change the free body diagram to include this force and re-calculate the horizontal and vertical components of acceleration. As to your confusion about v*vx, v is the magnitude of the projectile's velocity relative to the air, vx is the x component of this velocity vector. – Mik Cox Dec 5 '12 at 3:38
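(Added for reference: one common way to discretize this, consistent with the answer above. Here $D$, $m$ and $dt$ are the drag constant, mass and time step from the question, $(w_x, w_y)$ is a constant wind velocity, and this is only a sketch of a simple Euler step, not the only possible scheme.)

$$v_{rel,x}=v_x-w_x,\qquad v_{rel,y}=v_y-w_y,\qquad v_{rel}=\sqrt{v_{rel,x}^2+v_{rel,y}^2}$$

$$a_x=-\frac{D}{m}\,v_{rel}\,v_{rel,x},\qquad a_y=-g-\frac{D}{m}\,v_{rel}\,v_{rel,y}$$

$$v_x \to v_x+a_x\,dt,\qquad v_y \to v_y+a_y\,dt,\qquad x \to x+v_x\,dt,\qquad y \to y+v_y\,dt$$

In this notation the product $v\,v_x$ from the cited document is the speed (a magnitude) times the $x$-component of the velocity relative to the air, which keeps the drag force quadratic in speed while always pointing opposite the motion through the air; terminal velocity appears automatically when $a_y$ reaches zero during the fall.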
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9508455395698547, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/135343-series-problem.html
# Thread:

1. ## Series Problem

Problem: Consider the following infinite series. $3 - 3x + 3x^2 - 3x^3 + 3x^4 - ...$ (a) For what values of x will the sum of the series be a finite value? (b) Find the value of the infinite series for x in the interval in part (a). Could somebody explain how this is done? I really don't understand where to even start...

2. Originally Posted by lysserloo Problem: Consider the following infinite series. $3 - 3x + 3x^2 - 3x^3 + 3x^4 - ...$ (a) For what values of x will the sum of the series be a finite value? (b) Find the value of the infinite series for x in the interval in part (a). Could somebody explain how this is done? I really don't understand where to even start... It is a geometric series.

3. I understand that, but I don't understand how to find the sum when x has to be in an interval from -1 to 1 (the answer to part a). How do I solve when x is an interval and not a set number?

4. Originally Posted by lysserloo I understand that, but I don't understand how to find the sum when x has to be in an interval from -1 to 1 (the answer to part a). How do I solve when x is an interval and not a set number? You should know then that a = 3 and r = -x. Now, what is the condition on r for an infinite geometric series to have a finite value ...?
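Following the hints in the thread, with $a = 3$ and $r = -x$ the series converges exactly when $|r| = |x| < 1$, and its sum is $a/(1-r) = 3/(1+x)$. A quick numerical check of that closed form (added here, not part of the original thread):

```python
# Numerical check (not part of the thread): partial sums of 3 - 3x + 3x^2 - ...
# approach 3/(1 + x) whenever |x| < 1.
def partial_sum(x, terms=200):
    return sum(3 * (-x) ** n for n in range(terms))

for x in (0.5, -0.25, 0.9):
    print(x, partial_sum(x), 3 / (1 + x))
```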
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.943211019039154, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/11/10/subspaces-from-irreducible-representations/?like=1&source=post_flair&_wpnonce=8cb43e0c56
# The Unapologetic Mathematician ## Subspaces from Irreducible Representations Because of Maschke’s theorem we know that every representation of a finite group $G$ can be decomposed into chunks that correspond to irreducible representations: $\displaystyle V\cong\bigoplus\limits_{i=1}^km_iV^{(i)}$ where the $V^{(i)}$ are pairwise-inequivalent irreps. But our consequences of orthonormality prove that there can be only finitely many such inequivalent irreps. So we may as well say that $k$ is the number of them and let a multiplicity $m_i$ be zero if $V^{(i)}$ doesn’t show up in $V$ at all. Now there’s one part of this setup that’s a little less than satisfying. For now, let’s say that $V$ is an irrep itself, and let $m$ be a natural number for its multiplicity. We’ve been considering the representation $\displaystyle mV=\bigoplus\limits_{i=1}^mV$ made up of the direct sum of $m$ copies of $V$. But this leaves some impression that these copies of $V$ actually exist in some sense inside the sum. In fact, though inequivalent irreps stay distinct, equivalent ones lose their separate identities in the sum. Indeed, we’ve seen that $\displaystyle\hom_G(V,mV)\cong\mathrm{Mat}_{1,m}(\mathbb{C})$ That is, we can find a copy of $V$ lying “across” all $m$ copies in the sum in all sorts of different ways. The identified copies are like the basis vectors in an $m$-dimensional vector space — they hardly account for all the vectors in the space. We need a more satisfactory way of describing this space. And it turns out that we have one: $\displaystyle mV=\bigoplus\limits_{i=1}^mV\cong V\otimes\mathbb{C}^m$ Here, the tensor product is over the base field $\mathbb{C}$, so the “extra action” by $G$ on $V$ makes this into a $G$-module as well. This actually makes sense, because as we pass from representations to their characters, we also pass from “plain” vector spaces to their dimensions, and from tensor products to regular products. Thus at the level of characters this says that adding $m$ copies of an irreducible character together gives the same result as multiplying it by $m$, which is obviously true. So since the two sides have the same characters, they contain the same number of copies of the same irreps, and so they are isomorphic as asserted. Actually, any vector space of dimension $m$ will do in the place of $\mathbb{C}^m$ here. And we have one immediately at hand: $\hom_G(V,mV)$ itself. That is, if $V$ is an irreducible representation then we have an isomorphism: $\displaystyle mV\cong V\otimes\hom_G(V,mV)$ As an example, if $V$ is any representation and $V^{(i)}$ is any irrep, then we find $\displaystyle m_iV^{(i)}\cong V^{(i)}\otimes\hom_G(V^{(i)},V)$ We can reassemble these subspaces to find $\displaystyle V\cong\bigoplus\limits_{i=1}^km_iV^{(i)}\cong\bigoplus\limits_{i=1}^kV^{(i)}\otimes\hom_G(V^{(i)},V)$ Notice that this extends our analogy between $\hom$ spaces and inner products. Indeed, if we have an orthonormal basis $\{e_i\}$ of a vector space of dimension $k$, we can decompose any vector as $\displaystyle v=\sum\limits_{i=1}^ke_i\langle e_i,v\rangle$ ## 2 Comments » 1. [...] be a representation of a finite group , with finite dimension . We can decompose into blocks — one for each irreducible representation of [...] Pingback by | November 12, 2010 | Reply 2. [...] we recall that the submodule of invariants can be written [...] 
Pingback by | November 16, 2010 | Reply
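The multiplicities $m_i=\dim\hom_G(V^{(i)},V)$ in the decomposition above can be computed concretely from characters, in exactly the inner-product spirit of the last formula. As a small illustration (added here, not from the original post), here is a Python computation for the symmetric group $S_3$, whose standard character table is assumed below; the multiplicities of the irreps in the regular representation come out equal to their dimensions, as expected.

```python
# Character-theoretic multiplicities for G = S_3 (illustrative, not from the post).
# Conjugacy classes: identity, transpositions, 3-cycles, of sizes 1, 3, 2.
class_sizes = [1, 3, 2]
order = sum(class_sizes)                      # |S_3| = 6
irreps = {
    "trivial":  [1,  1,  1],
    "sign":     [1, -1,  1],
    "standard": [2,  0, -1],
}
chi_regular = [6, 0, 0]                       # character of the regular representation

def inner(chi, psi):
    # <chi, psi> = (1/|G|) * sum over classes of |class| * chi * conj(psi)
    # (the characters here are real, so no conjugation is needed)
    return sum(s * a * b for s, a, b in zip(class_sizes, chi, psi)) / order

for name, chi in irreps.items():
    print(name, "appears with multiplicity", inner(chi_regular, chi))
```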
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 37, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9281932711601257, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/12/17/do-short-exact-sequences-of-representations-split/?like=1&_wpnonce=84af3d0b4c
# The Unapologetic Mathematician ## Do Short Exact Sequences of Representations Split? We’ve seen that the category of representations is abelian, so we have all we need to talk about exact sequences. And we know that some of the most important exact sequences are short exact sequences. We also saw that every short exact sequence of vector spaces splits. So does the same hold for representations? It turns out that no, they don’t always, and I’ll give an example to show what can happen. Consider the group $\mathbb{Z}$ of integers and the two dimensional representation $\rho:\mathbb{Z}\rightarrow\mathrm{GL}\left(\mathbb{F}^2\right)$ defined by: $\displaystyle\rho(n)=\begin{pmatrix}1&n\\ 0&1\end{pmatrix}$ Verify for yourself that this actually does define a representation of the group of integers. Now it’s straightforward to see that all these linear transformations send every vector of the form $\begin{pmatrix}x\\ 0\end{pmatrix}$ to itself. This defines a one-dimensional subspace fixed by the representation — a subrepresentation $\tau:\mathbb{Z}\rightarrow\mathrm{GL}\left(\mathbb{F}^1\right)$ defined by: $\displaystyle\tau(n)=\begin{pmatrix}1\end{pmatrix}$ Then there must be a quotient representation $\sigma=\rho/\tau$, and we can arrange them into a short exact sequence: $\mathbf{0}\rightarrow\tau\rightarrow\rho\rightarrow\sigma\rightarrow\mathbf{0}$. The question, then, is whether this is isomorphic to the split exact sequence $\mathbf{0}\rightarrow\tau\rightarrow\tau\oplus\sigma\rightarrow\sigma\rightarrow\mathbf{0}$. That is, can we find an isomorphism $\rho\cong\tau\oplus\sigma$ compatible with the the inclusion map from $\tau$ and the projection map onto $\sigma$? First off, let’s write the direct sum representation a little more explicitly. The direct sum $\tau\oplus\sigma$ acts on pairs of field elements by $\tau$ on the first and $\sigma$ on the second, with no interaction between them. That is, we can write the representation as $\displaystyle\left[\tau\oplus\sigma\right](n)=\begin{pmatrix}\tau(n)&0\\ 0&\sigma(n)\end{pmatrix}=\begin{pmatrix}1&0\\ 0&\sigma(n)\end{pmatrix}$ And we’re looking for some isomorphism so that for every $n\in\mathbb{Z}$ we get from the matrix $\rho(n)$ to the matrix $\left[\tau\oplus\sigma\right](n)$ by conjugation. Explicitly, we’ll need a matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$. But we also need to make sure that $\tau$ as a subrepresentation of $\rho$ is sent to $\tau$ as a subrepresentation of $\tau\oplus\sigma$. That is we must satisfy $\displaystyle\begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}1\\ 0\end{pmatrix}$ Thus $a=1$ and $c=0$ right off the bat! Now the intertwining condition (equivalent to the conjugation) is that $\displaystyle\begin{pmatrix}1&b\\ 0&d\end{pmatrix}\begin{pmatrix}1&n\\ 0&1\end{pmatrix}=\begin{pmatrix}1&0\\ 0&\sigma(n)\end{pmatrix}\begin{pmatrix}1&b\\ 0&d\end{pmatrix}$ $\displaystyle\begin{pmatrix}1&n+b\\ 0&d\end{pmatrix}=\begin{pmatrix}1&b\\ 0&\sigma(n)d\end{pmatrix}$ But this says that $n+b=b$ for all $n\in\mathbb{Z}$, and this is clearly impossible! So here’s an example where a short exact sequence of representations can not be split. At some point later we’ll see that in many cases we’re interested in they do split, but for now it’s good to see that they don’t always work out so nicely. ### Like this: Posted by John Armstrong | Algebra, Representation Theory ## 3 Comments » 1. 
Every time I read one of your posts a tiny bit of the algebra I once was sorta-kinda proficient at comes back (ever so slightly). Comment by | December 17, 2008 | Reply 2. That’s good! And there’s plenty more to come! Comment by | December 17, 2008 | Reply 3. [...] decomposable. Indeed, in categorical terms this is the statement that for some groups there are short exact sequences which do not split. To chase this down a little further, our work yesterday showed that even in the reducible case we [...] Pingback by | September 24, 2010 | Reply
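As a concrete check of the computation in the post (added here, not part of the original), one can let a computer algebra system carry out the intertwining condition and watch the obstruction $n+b=b$ appear directly.

```python
# Symbolic check (not from the post) of the non-splitting argument: a change of
# basis P fixing (1,0)^T would have to intertwine rho(n) with the block-diagonal
# matrix diag(1, sigma(n)), but the difference below has n in its top-right
# corner, which cannot vanish for every integer n.
import sympy as sp

n, b, d, s = sp.symbols('n b d s')            # s stands for sigma(n)
rho = sp.Matrix([[1, n], [0, 1]])             # the representation of Z
P = sp.Matrix([[1, b], [0, d]])               # candidate isomorphism fixing (1,0)^T
block = sp.Matrix([[1, 0], [0, s]])           # matrix of the direct sum of tau and sigma

obstruction = sp.simplify(P * rho - block * P)
print(obstruction)                            # top-right entry is n, so no such P exists
```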
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 31, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9174841642379761, "perplexity_flag": "head"}
http://cs.stackexchange.com/questions/9049/dfa-that-accepts-decimal-representations-of-a-natural-number-divisible-by-43
# DFA that accepts decimal representations of a natural number divisible by 43

First, I have tried to build a DFA over the alphabet $\Sigma = \{0,\dots, 9\}$ that accepts all decimal representations of natural numbers divisible by 3, which is quite easy because of the digit sum. For this I choose the states $Q = \mathbb{Z}/3\mathbb{Z}\cup\{q_0\}$ ($q_0$ to avoid the empty word), start state $q_0$, accept states $\{[0]_3\}$ and $\delta(q, w) =\begin{cases} [w]_3 &\mbox{if } q = q_0 \\ [q + w]_3 & \mbox{else } \end{cases}$ Of course, it doesn't work that way for natural numbers divisible by 43. For 43 itself, I would end in $[7]_{43}$, which wouldn't be an accepting state. Is there any way I can add something there or do you have other suggestions on how to do this? Thanks.

-

## 1 Answer

If the string read has a certain decimal value, then reading the next digit changes that value: multiply by $10$ and add that digit. The DFA keeps track of that value modulo $43$. Thus, for $q\in \{0,1,\dots,42\}$ and $a\in \{0,1,\dots,9\}$ you do $\qquad \delta(q,a) = (10\cdot q + a) \bmod 43$. Note that the DFA does not actually perform the computation; the transition is hard-coded.

-
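A direct transcription of this transition function into code looks as follows (a sketch, not part of the original answer; the extra start state $q_0$ from the question, whose only job is to reject the empty word, is omitted here).

```python
# Sketch of the DFA in the answer: the state is the value of the digits read so
# far, reduced modulo 43; a string is accepted when that state is 0.
def divisible_by_43(decimal_string):
    state = 0
    for ch in decimal_string:
        digit = ord(ch) - ord('0')            # alphabet is {0, ..., 9}
        state = (10 * state + digit) % 43     # delta(q, a) = (10*q + a) mod 43
    return state == 0

print(divisible_by_43("86"))      # True:  86 = 2 * 43
print(divisible_by_43("12345"))   # False: 12345 mod 43 = 4
```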
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.921906054019928, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/63556/list
## Return to Answer

Suppose that $A$ is an infinite definable subset of a real closed field $R$ which is a zero-divisor-free ring under operations whose graphs are definable in $R$. Then $A$ is definably isomorphic to one of $R$, $R(\sqrt{-1})$ or the ring of quaternions over $R$. This is a special case of the main result of:

Otero, Peterzil, and Pillay, On groups and rings definable in o-minimal expansions of real closed fields, Bull. London Math. Soc. 28 (1996), no. 1, 7–14.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9339471459388733, "perplexity_flag": "head"}
http://mathoverflow.net/questions/93960/cryptography-and-iterations/93991
## Cryptography and iterations ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Hi, Here is a question in cryptography which is probably naive, and a reference request. I was wondering about the following key-exchange scheme, which is a variant on Diffie-Hellman. Consider a set $X$ (finite but very large) and a map $T : X \to X$, both made public together with a point $x \in X$. Now A chooses an integer $n$ secretly and publishes $T^n(x)$, while B does the same with $m$. A and B can both compute the key $T^{n+m}(x)$, and assuming that it is difficult to find $n$ from $x$ and $T^n(x)$, then noone else can. Traditionally one picks an element $g$ in a group $G$, then A publishes $g^n$, B publishes $g^m$, and A and B both know the key $g^{nm}$. For $G$ one picks $(\mathbf{Z}/p)^\times$, or an elliptic curve over a finite field, or a braid group, or what have you. It seems that with the above variant, it is easy to produce examples: for example take $X$ to be a vector space over $\mathbf{F}_2$, and let $T$ be some map which shuffles the bits around according to your fancy. My intuition is that it is easier to make the "log-problem" difficult in this way than by choosing the right group $G$. I may be so completely wrong! Is there an obvious weakness in this scheme? For example, is it very hard to prove that, for a given map $T$, the "log-problem" is indeed difficult? It may well be that I'm only describing something standard. What is a good reference, then? (Basic searches with "cryptography and dynamics" were not satisfactory.) Thanks for reading! Pierre - It's possible that your question might be better suited for the Crypto StackExchange site: crypto.stackexchange.com – Timothy Chow Apr 13 2012 at 23:03 ## 3 Answers What makes Diffie-Hellman work is that the secret maps $x \mapsto x^n$ and $x \mapsto x^m$ commute with each other and are both easy to compute (even if $n$ and $m$ are huge), but knowing $g^n$ doesn't let you easily raise other numbers to the $n$-th power. Your scheme achieves the commutativity by making both maps powers of a given function $T$, but generically it won't make them easy enough to compute, so it doesn't offer any computational advantage for the participants compared with an attacker. Without some special structure, computing $T^n(x)$ will take about $n$ operations, since you'll have to compute each of $T^1(x)$, $T^2(x)$, etc. in turn. Breaking the scheme by computing $n$ from $T^n(x)$ will be just as fast. Diffie-Hellman would have the same problem if you had to compute $g^n$ naively (using $n$ operations), but you can use repeated squaring to handle much larger values of $n$. You can certainly use repeated squaring to compute powers of $T$, too, but the underlying set $X$ will be huge, so you'll need a more efficient way to represent powers of $T$ than just as permutations of $X$. This will depend on having some structure, for example knowing that $T$ is in some smaller group, and then the question becomes whether this structure helps break the system. In principle, I don't see why this shouldn't be secure, because Diffie-Hellman is a special case: Typically, $g$ will be chosen to have prime order $p$. If $h$ is a primitive root modulo $p$, then all exponentiation maps mod $p$ are powers of $x \mapsto x^h$, so Diffie-Hellman becomes isomorphic to your scheme with $T(x) = x^h$. Maybe I'm overlooking something, but I don't see offhand why knowing a primitive root modulo $p$ would let one break Diffie-Hellman. 
In that case, your scheme can be as secure as Diffie-Hellman. However, the security would depend delicately on how you choose $T$, and I wouldn't trust other choices without a lot of thought and cryptanalysis. - Ah! I thought something like this was going on. You are right, computing $T^n$ has no reason to be easy (I was aware that computing $g^n$ in a group was fast). In other words, requiring from $T$ that computing $T^n$ be very fast, AND that the log-problem be hard, gives serious constrains which possibly mean that we will not find anything besides examples coming from groups. Thanks! – Pierre Apr 13 2012 at 16:19 P.S. In hindsight, knowing a primitive root $h$ obviously can't help you break Diffie-Hellman. If you pick $h$ at random, then there's at least a $c/\log \log p$ chance it will be a primitive root (for some constant $c$), so you could pick a small number of candidates at random and pretend each was a primitive root, and probably you'd be right at least once. – Henry Cohn Apr 13 2012 at 17:22 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Shuffling the bits around "according to your fancy" sounds dangerous. I vaguely recall an example (probably by Knuth) where a sequence of very random-looking choices, to produce a complicated coding, ended up being absurdly easy to break. - I would be very interested in seeing this example, if anyone knows a reference. My intuition with these things needs maturing. – Pierre Apr 13 2012 at 16:56 I mean this as a comment under Adreas Blass response: I believe the example to which you are referring can be found in Knuth's Volume 2 of The Art of Computer Programming, 3rd ed. on page 5 where he talks about his '"super random" number generator' algorithm. -
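To make the point about repeated squaring concrete, here is a toy sketch (added, not from the thread) of the special case mentioned in the main answer, where $T(x) = x^h \bmod p$. Then $T^n(x) = x^{h^n \bmod (p-1)} \bmod p$ by Fermat's little theorem (for $x$ coprime to $p$), so iterating $T$ a huge number of times costs no more than two modular exponentiations, and both parties reach the same key $T^{n+m}(x)$. The parameters below are assumed toy values, far too small for real use.

```python
# Toy sketch (not from the thread) of the scheme when T(x) = x^h mod p.
def T_iter(y, n, h, p):
    exponent = pow(h, n, p - 1)        # h^n reduced mod p-1
    return pow(y, exponent, p)         # y^(h^n) mod p

p = 2**61 - 1                          # a Mersenne prime (toy-sized parameter)
h, x = 37, 5                           # assumed public base map and start point
n, m = 123456789, 987654321            # secret iteration counts of A and B

A_public = T_iter(x, n, h, p)
B_public = T_iter(x, m, h, p)
key_A = T_iter(B_public, n, h, p)      # A applies T^n to B's published value
key_B = T_iter(A_public, m, h, p)      # B applies T^m to A's published value
assert key_A == key_B                  # both hold T^(n+m)(x)
print(hex(key_A))
```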
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9561244249343872, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?s=1b98b1252b3542c389a806ca9abf3efc&p=4311210
Physics Forums History of speed of the expansion of the universe? I would like to know the history of the value of the speed of the expansion of the universe. If today it's 74.3 ± 2.1 kilometres per second per megaparsec then how much it was a billion years ago, 5 billion years ago, 10 billion years ago, and how much it was in the first million years after the Big Bang? Also when the universe was approximately 379,000 years old, when the photons started to travel freely through space. Recognitions: Homework Help That great! How were you intending to find out? In the absence of dark energy, the Hubble Constant would be inversely proportional to cosmic age. With dark energy, it initially falls that way but settles down to a constant value in the far future so the maths is more complex. There is an applet called "CosmoCalc" in a number of varieties that tells you lots of interesting parameters. This one from Ned Wright tells you redshift from lookback time: http://www.astro.ucla.edu/~wright/DlttCalc.html This one takes redshift and tells you lots of stuff including the Hubble Constant http://www.einsteins-theory-of-relat...ocalc_2010.htm They need to use the same assumptions for current values so set H0 to 70.4 and Omega_M to 0.272 in Wright's. Put in say 5 for the light travel time in Gyr and press the "Flat" button. You get 0.492 for the redshift. Now put that value in for redshift in the second applet and it will tell you everything you want to know (and more). Recognitions: Gold Member Science Advisor History of speed of the expansion of the universe? Quote by SpaceBear I would like to know the history of the value of the speed of the expansion of the universe. ... The "speed" of the expansion of distances is proportional to the size of the distance. A distance twice as big increases at twice the "speed". Expansion rate is really not any one particular "mph" or "km/s". It is a percentage growth rate of distance. The current rate distances are growing is 1/140 percent per million years. In the past the percentage growth rate was considerably larger. Here is a sample history: $${\begin{array}{|r|r|r|r|r|r|r|} \hline S=z+1&a=1/S&T (Gy)&T_{Hub}(Gy)&D (Gly)&D_{then}(Gly)&D_{hor}(Gly)&D_{par}(Gly)\\ \hline10.000&0.100000&0.6&0.8&30.825&3.082&4.663&1.585\\ \hline9.000&0.111111&0.7&1.0&29.922&3.325&5.081&1.861\\ \hline8.000&0.125000&0.8&1.2&28.856&3.607&5.583&2.227\\ \hline7.000&0.142857&0.9&1.4&27.570&3.939&6.197&2.729\\ \hline6.000&0.166667&1.2&1.8&25.976&4.329&6.964&3.449\\ \hline5.000&0.200000&1.6&2.3&23.932&4.786&7.948&4.548\\ \hline4.000&0.250000&2.2&3.2&21.181&5.295&9.247&6.373\\ \hline3.000&0.333333&3.3&4.9&17.215&5.738&11.008&9.819\\ \hline2.000&0.500000&5.9&8.1&10.915&5.458&13.361&17.878\\ \hline1.000&1.000000&13.8&14.0&0.000&0.000&15.793&46.686\\ \hline\end{array}}$$ Past epochs are labeled by how much distances and wavelengths have been elongated since that time (the "stretch" factor). The table goes from S=10 to S=1 (the present moment). You can read off the percentage growth rates from the "Hubble time" column (the fourth column). In that column 14.0 Gy corresponds to the present rate of 1/140 percent per million years. And 0.8 Gy corresponds to the rate of 1/8 percent per million years. That was the rate back in year 600 million (i.e. in year 0.6 Gy) as you can see from the table. The same calculator will easily tell you the distance growth rate at S=1090, the moment of clearing or transparency that you asked about. Around year 380,000 when the ancient CMB light originated. 
That light has been "stretched" by a factor of 1090 so you just have to put that number in the upper limit box, instead of the number 10, which I put in to make this table. If instead of a percent growth rate, what you want is a km/s speed of some benchmark distance, I would suggest a million lightyears. Most people have some mental association with that distance---having heard the distance to a neighbor galaxy like Andromeda expressed in those terms. It is on the order of a million lightyears from us and its light takes on the order of a million years to get here. When distances are growing at a rate of 1/140% per million years, then a distance of 1 million lightyears is growing at a "speed" of 3000/140 km/s. All you have to do, to convert, is multiply the percent rate 1/140 by 3000. That gives the km/s.

Recognitions: Gold Member Science Advisor

The same calculator will tell you that back at the time the CMB originated, distances were increasing by 157% per million years. So if you want a "speed" of some sample benchmark distance, say 1 million lightyears, just multiply that by 3000. 3000 x 157 is some number of kilometers per second. I would rather think of that in the equivalent form of 157% of the speed of light. Kilometers per second don't seem too meaningful in that range, an awkward unit to use. But "speed" is an awkward way to visualize distance expansion in any case. What we are talking about is geometry change, not things traveling from one place to another. Nobody gets anywhere by expansion, all stationary observers just become farther apart. And the expansion is proportional to the size of distance---you shouldn't have to pick a benchmark of a million this or that, you just make unnecessary work for yourself by doing that. For convenience I keep the calculator link in my signature (TabCosmo7). Anyone who wants coaching in its use should ask. Several people here can help, and it's basically real easy to use.
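The conversion described above, from a percentage growth rate per million years to the more familiar km/s per megaparsec, is easy to script. The snippet below is an added illustration, not part of the thread, using standard approximate values for the year, the light year and the megaparsec.

```python
# Added illustration (not from the thread): convert 1/140 percent per million
# years into km/s per megaparsec and into the "speed" of a one-million-light-year
# benchmark distance.
YEAR_S = 3.156e7          # seconds in a year
MPC_KM = 3.086e19         # kilometres in a megaparsec
LY_KM = 9.461e12          # kilometres in a light year

rate_per_year = (1 / 140) / 100 / 1e6          # fractional growth per year
H_per_second = rate_per_year / YEAR_S

print("H0  ~ %.1f km/s per Mpc" % (H_per_second * MPC_KM))          # about 70
print("1 Mly grows at ~ %.1f km/s" % (H_per_second * 1e6 * LY_KM))  # about 3000/140
```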
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9407896995544434, "perplexity_flag": "middle"}
http://nrich.maths.org/301
Find the smallest numbers $a, b$, and $c$ such that: $$a^2 = 2 b^3 = 3 c^5$$ What can you say about other solutions to this problem?
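As a hedged aside (not part of the original problem page, and it does give away a numerical answer), a brute-force search simply scans the common value $N = 3c^5$ and tests the other two conditions; one can also pin the answer down by hand by writing $N = 2^x 3^y$ and solving congruences on the exponents $x$ and $y$.

```python
# Brute-force search (not part of the problem page): find the smallest common
# value N = a^2 = 2*b^3 = 3*c^5 by scanning c and testing the other conditions.
from math import isqrt

def iroot(n, k):
    """Largest integer r with r**k <= n."""
    r = round(n ** (1.0 / k))
    while r ** k > n:
        r -= 1
    while (r + 1) ** k <= n:
        r += 1
    return r

c = 1
while True:
    N = 3 * c ** 5
    a = isqrt(N)
    b = iroot(N // 2, 3)
    if a * a == N and N % 2 == 0 and 2 * b ** 3 == N:
        print("a =", a, "b =", b, "c =", c, "common value =", N)
        break
    c += 1
```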
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9269425868988037, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/45534/tensions-and-pulleys-with-masses/45624
# Tensions And Pulleys With Masses The problem I am working on is: "A block of mass m1 = 1.80 kg and a block of mass m2 = 6.30 kg are connected by a massless string over a pulley in the shape of a solid disk having radius R = 0.250 m and mass M = 10.0 kg. The fixed, wedge-shaped ramp makes an angle of θ = 30.0° as shown in the figure. The coefficient of kinetic friction is 0.360 for both blocks." The provided diagram: Determine the acceleration of the two blocks. (Enter the magnitude of the acceleration.) Determine the tensions in the string on both sides of the pulley. What I was wondering is why there are two different tension forces acting on the pulley? Could someone give me a descriptive answer? Also, does the mass of the pulley somehow affect the tension forces? Why exactly? - ## 2 Answers Tension is a vector, so it has different directions on either side. For second question, imagine what would happen to tension if you had a pulley with the mass of the moon. - Great point. To understand questions like this it is always a good idea to think of extreme situations. – ja72 Dec 1 '12 at 14:37 Oh, yes...The pulley would definitely have more rotational inertia, and that would diminish the amount of force transmitted, right? – Mack Dec 1 '12 at 15:11 I've sort of run into a road-block trying to solve this problem. I try to solve for the acceleration, but I have no way of eliminating the unknowns from my equations; what still remains is the two tensions forces. So, I tried to figure out how they were related through torque on the pulley; and despite the fact that there is a relationship, it introduces another unknown, namely, the torque. How do I solve for the acceleration? – Mack Dec 1 '12 at 15:19 I actually figured it out. However, I have a new question: if the tension forces on either side of the pulley are different, then why is there only one acceleration for the two blocks and the particles on the outer rim of the pulley? – Mack Dec 1 '12 at 15:43 There is two accelerations, of the same magnitude, acceleration is a vector too. A pulley is an idealized solid contraption, where motion is restriced to circular. – Hobo Dec 1 '12 at 22:06 Using the principle of virtual work, if you move the blocks a distance a, the inclined block is lowered by an amount equal to $a\sin(\theta)$, meaning that it gains energy $m_2 ga\sin(\theta)$. The total moving mass is $m_1 + m_2$, so that the acceleration is the same as for a mass $m_1 + m_2$ in 1 dimension with a force $m_2 g \sin(\theta)$, so that $$a = {m_2 \over m_1 + m_2} g \sin\theta$$ This is how you solve these types of problems, it's equivalent to writing the Lagrangian, but more elementary sounding. -
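The numbers in the quoted problem can also be pushed through Newton's second law plus the rotational equation for the pulley, which is exactly where the two different tensions come from: their difference is what supplies the pulley's angular acceleration. The sketch below is an added illustration, not part of the original answers, and it assumes the usual textbook geometry for this figure (m1 on the horizontal surface, m2 on the incline and sliding down); the pulley radius cancels out of the equations.

```python
# Added illustration (assumed geometry: m1 on the horizontal surface, m2 sliding
# down the 30 degree incline, solid-disk pulley of mass M, kinetic friction mu
# on both blocks).  Equations used:
#   m1:      T1 - mu*m1*g                          = m1*a
#   m2:      m2*g*sin(th) - T2 - mu*m2*g*cos(th)   = m2*a
#   pulley:  (T2 - T1)*R = (1/2)*M*R^2*(a/R)  =>  T2 - T1 = M*a/2   (R cancels)
import math

m1, m2, M = 1.80, 6.30, 10.0
mu, th, g = 0.360, math.radians(30.0), 9.80

a = (m2 * g * math.sin(th) - mu * g * (m1 + m2 * math.cos(th))) / (m1 + m2 + M / 2)
T1 = m1 * (a + mu * g)
T2 = T1 + 0.5 * M * a

print("a  = %.2f m/s^2" % a)   # roughly 0.4 m/s^2
print("T1 = %.2f N" % T1)      # tension on the m1 side of the string
print("T2 = %.2f N" % T2)      # larger tension on the m2 side
```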
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9523338079452515, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/175160-how-find-f-x-i-suspect-error-command.html
# Thread:

1. ## How to find f(x) - I suspect an error in the command

Hello, I want to solve a question. I don't understand why my answer is false, so I think there might be an error in the problem statement. The quadratic function takes the value of 41 at x=-2 and the value of 20 at x=5. The function is minimized at x=2. - the A,B,C of y=Ax^2+Bx+C - the minimum value of this function, D The vertex has coordinates (2;D) 2=-B/2A B=-4A I then replaced B in 41=A(-2)^2 +B(-2)+C 20=A(5)^2 +B(5)+C In the end I've got f(x)= (-3/7)x^2-(12/7)x+(275/7) It can't be right because A can't be a negative number if the function has a minimum. Also in this case the vertex is not at x=2 because D=239/7 (it's higher than f(-2)=20) But when I calculate f(-2)= (-3/7)(-2)^2-(12/7)(-2)+(275/7) AND f(5)=(-3/7)5^2-(12/7)5+(275/7) It takes the value of 41 and 20. Is it me confusing everything, or is there really an error in the command?

2. Originally Posted by Viou Hello, I want to solve a question. I don't understand why my answer is false, so I think there might be an error in the problem statement. The quadratic function takes the value of 41 at x=-2 and the value of 20 at x=5. The function is minimized at x=2. - the A,B,C of y=Ax^2+Bx+C - the minimum value of this function, D The apex has coordinates (2;D) 2=-B/2A B=-4A I then replaced B in 41=A(-2)^2 +B(-2)+C 20=A(5)^2 +B(5)+C In the end I've got f(x)= (-3/7)x^2-(12/7)x+(275/7) It can't be right because A can't be a negative number if the function has a minimum. Also in this case the apex is not at x=2 because D=239/7 (it's higher than f(-2)=20) But when I calculate f(-2)= (-3/7)(-2)^2-(12/7)(-2)+(275/7) AND f(5)=(-3/7)5^2-(12/7)5+(275/7) It takes the value of 41 and 20. Is it me confusing everything, or is there really an error in the command? see this post ... http://www.mathhelpforum.com/math-he...le-175161.html

3. Do you understand that a quadratic function can be written as $y= a(x- x_0)^2+ b$ where $x_0$ is the x coordinate of the vertex (minimum point)? Knowing that the minimum occurs at x= 2 tells you that you can write the quadratic function as $y= a(x- 2)^2+ b$. Putting x= -2 and y= 41 gives 41= a(-2-2)^2+ b= 16a+ b. Putting x= 5 and y= 20 gives 20= a(5- 2)^2+ b= 9a+ b. You can solve those two equations for a and b.

4. Ok, thank you very much! It worked out nicely. And sorry for putting it in the wrong section
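For completeness, here is a short symbolic check of the approach suggested in post 3 (added here, not part of the forum thread): write the quadratic in vertex form and solve the two resulting linear equations.

```python
# Added check (not from the thread): vertex form y = a*(x - 2)**2 + b together
# with f(-2) = 41 and f(5) = 20 gives two linear equations in a and b.
import sympy as sp

a, b, x = sp.symbols('a b x')
f = a * (x - 2) ** 2 + b
sol = sp.solve([sp.Eq(f.subs(x, -2), 41), sp.Eq(f.subs(x, 5), 20)], [a, b])
print(sol)                                   # {a: 3, b: -7}
print(sp.expand(f.subs(sol)))                # 3*x**2 - 12*x + 5, minimum value D = -7
```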
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8517244458198547, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/127814-irreducible-polynomial.html
# Thread:

1. ## Irreducible polynomial

Show that $x^{10}+x^9+x^8+...+x+1$ is irreducible over $Q$, the set of all rational numbers.

2. Originally Posted by Arczi1984 Show that $x^{10}+x^9+x^8+...+x+1$ is irreducible over $Q$, the set of all rational numbers. By the rational roots theorem the only possible roots are $\pm 1$, but $P(1)=11$ and $P(-1)=1$, so the polynomial has no rational roots.

3. Thank you for the help. I think I'm just too tired, which is why I'm having trouble with such an easy task.

4. Another way to see it is that your polynomial is just $\frac{x^{11}-1}{x-1}$; and the roots of $x^{11}-1$ are the 11-th roots of unity, none of which lie on the real axis except for $x=1$.

5. Being irreducible is a much stronger property than having no roots (in the given field). Having no roots means that there are no linear factors. But for a polynomial to be irreducible it must have no nontrivial factors at all. As Bruno J. points out, the roots of the given polynomial in the complex field are the complex 11th roots of unity. These can be grouped in complex conjugate pairs to form quadratic factors $x^2 - 2\cos(2k\pi/11)x + 1$ (for k = 1,2,3,4,5) over the real field. To show that the polynomial is irreducible over the rationals, you would have to show that none of these can be grouped together to form a rational polynomial. I don't know how to do that, but presumably it must involve showing that those cosines are seriously irrational.

6. Originally Posted by Arczi1984 Show that $x^{10}+x^9+x^8+...+x+1$ is irreducible over $Q$, the set of all rational numbers. $x^{10}+x^9+x^8+...+x+1=\frac{x^{11}-1}{x-1}$ Let $t=x-1$; then $\frac{x^{11}-1}{x-1}=\frac{(t+1)^{11}-1}{t}$, and you can use the Eisenstein criterion to prove it.
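The substitution in the last post can be checked directly: the shifted polynomial $\frac{(t+1)^{11}-1}{t}$ has coefficients $\binom{11}{k}$, all of which except the leading one are divisible by the prime 11, while the constant term 11 is not divisible by $11^2$, so Eisenstein's criterion applies. A small verification script (added here, not part of the thread):

```python
# Added verification (not from the thread): ((t+1)^11 - 1)/t has coefficients
# C(11,11), C(11,10), ..., C(11,1) = 1, 11, 55, ..., 11.  Eisenstein at p = 11
# then gives irreducibility over Q, hence for the original polynomial too.
from math import comb

coeffs = [comb(11, k) for k in range(11, 0, -1)]   # leading coefficient first
print(coeffs)
print(all(c % 11 == 0 for c in coeffs[1:]))        # True: 11 divides every lower coefficient
print(coeffs[-1] % 11**2 != 0)                     # True: 11^2 does not divide the constant term
```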
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9434516429901123, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/14799/list
## Return to Question

# products and smooth/étale/unramified morphisms

Let $X$, $Y$ and $Z$ be Noetherian schemes. If $f: Y \to X$ and $g: Z \to X$ are morphisms of finite type, such that at each point of $X$, at least one of the two morphisms is smooth/étale/unramified (at all points of its inverse image), can we conclude that the induced morphism $Y \times_X Z \to X$ is smooth/étale/unramified everywhere? If not, which results can we obtain?

(In his textbook on Algebraic Geometry, Liu asks to prove that the answer is always "yes"...)

EDIT. So, indeed, the problem statement in the book is wrong...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9450762867927551, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/28329/list
## Return to Question

# diameter of a graph with random edge weights

Given a weighted directed graph $G=(V,E, w)$, suppose we generate a new graph $G'=(V,E,w')$ with the same vertices and edges, but now letting the weight of edge $(i,j)$ be an exponential random variable with mean $w_{ij}$. My question is: what is the expected diameter of $G'$?

Why I'm interested in this: I was intrigued by the observation that the expected diameter of $G'$ can be quite different from the diameter of $G$. Indeed, consider the following example: define $G$ by taking the complete graph $K_{n+1}$, picking an arbitrary vertex $a$, and assigning weight $n$ to any edge incident on $a$, and weight $1$ to every other edge. Then, the diameter of $G$ is $n$. On the other hand, the expected diameter of $G'$ is O(1) since we can expect one of the edges incident on $a$ to have small weight.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9447965025901794, "perplexity_flag": "head"}
http://www.conservapedia.com/Talk:Poincar%C3%A9_conjecture
# Talk:Poincaré conjecture ## The smooth Poincare conjecture The page states that "the h-cobordism theorem actually demonstrates that a diffeomorphism exists for n >= 5. The only open case is the four dimensional one". Perhaps I'm misunderstanding what this is supposed to mean, but I think it's false, the counterexamples being provided by the so-called exotic spheres. These are known not to exist for n=1,2,3,5,6, but there are 28 distinct smooth manifolds which are homeomorphic to the 7-sphere but not diffeomorphic to it (Milnor). For general larger n the conjecture is false, though there are a few cases (n=12 if memory serves) where it's still true. Generally the set of smooth structures on the n-sphere can be assembled into a finite abelian group. It's a tricky matter, and as noted in the article, remains open in 4 dimensions (though it's generally thought to be false). --JimR 21:03, 22 December 2009 (EST) You're absolutely right, I misstated something. I'll fix it now. JacobB 21:19, 22 December 2009 (EST) I don't know why I wrote that, I've seen a few exotic S^7s. In my defense, this was written pretty late at night (heh). JacobB 21:26, 22 December 2009 (EST) I surely understand, and great work on this page! Would you mind if I add a link to fundamental group, which looks much better than homotopy group and related pages? --JimR 21:33, 22 December 2009 (EST) It's a wiki, Jim! With your edit history, you hardly have to ask before contributing to a math article! JacobB 21:45, 22 December 2009 (EST) ## Layman's statement I made a couple tweaks to make this more accurate, and I hope I haven't compromised the accessibility too much: 1. Changed "orange" to "surface of an orange", to emphasize that S^n should not be thought of as solid. 2. Not sure what a "covering" means here, so I made it a statement about loops. But this is probably less clear. Any suggestions? 3. The page made it sound like part of the conjecture is that S^3 is simply connected, but this is an easy fact. The hard part is that it's the only simply connected thing. 4. The page made reference to "manifold in four-dimensional space". What we're really interested in is three-manifolds, and some of them don't even fit in four dimensional space, so this needs a tweak! A somewhat analogous example is the Klein bottle, a 2-dimensional manifold which doesn't fit in 3-dimensional space (without self-intersections). Thus I wrote "three dimensional space", with a link to manifold. I think my changes are not optimal, please continue to improve! --JimR 22:24, 22 December 2009 (EST) Your improvements sound great! I'm learning as I go here, which is part of the benefit in contributing. Researching more to improve more ....--Andy Schlafly 22:41, 22 December 2009 (EST) While Andy's addition of "compact" is certainly a necessary correction, since I didn't even think to add this necessary term, I feel that the term itself isn't that useful in the laymans definition. I'm going to change it to give an explanation in the same way "manifold" is explained. My explanation isn't exact, but the only possibility I've ignored is that a closed set is removed from a compact manifold, which isn't really an object of study, ever. JacobB 22:46, 22 December 2009 (EST) Great improvement, as the layman's definition should not be overloaded with jargon. I need to understand the entropy angle better on this next.--Andy Schlafly 00:06, 23 December 2009 (EST) Ricci Flow and Entropy: I think I can help explain the entropy relation here. 
It won't be too technical, since this is very high-level stuff here, but I can give the basics. There'll be some calculus: First of all, every three-dimensional manifold (three manifold) is said to "admit a smooth structure," which basically just means we can do differential calculus on it. Now, Ricci flow on a manifold is a way of deforming the manifold over time. A great way to visualize Ricci flow on two-dimensional manifolds is to imagine gas inside: for example, picture a manifold that looks like two balloons, one full and one empty, which are attached at their bases. Now, over time, gas will go from the large, inflated part (which has "positive curvature") to the deflated part (the curvature of which might vary, but will definitely be more curved than the inflated part). By the time the gas has distributed itself throughout the interior of the two balloons, the part which was originally inflated will have shrunk, and the part which had high curvature will have expanded. This is very similar to Ricci flow - the details of the manifold's geometry disappear, but the essential structure remains - two balloons connected. Of course, dissipating gas is an example of entropy, as is the disappearance of details over time. So it shouldn't surprise us that the effects of Ricci flow have a good deal in common with entropy in physics. For example, look at the heat diffusion equation $u_t = \Delta u$. This equation is precisely an expression for the change in the metric of a manifold over time in Ricci flow if we take $u = \ln g_{jj}$, where j is any number less than the dimension of the manifold and no summation is implied on that index, and g is a metric symmetric on the main diagonal (any relevant manifold can have a coordinate system which creates such a metric). Noticing this (and other similarities between the effect of Ricci flow on a manifold's metric and the effect of time on heat dispersion), Perelman formulated the concept of "Perelman entropy."

Two very important insights were necessary to turn the idea of Ricci flow into a proof of the Poincare conjecture. First, and this is the lesser of the insights, it was realized that no matter how complicated a manifold might be, the simple knowledge that a region of a three manifold could be bounded by a loop (or sphere) that could be shrunk to a point allows us to know for CERTAIN that Ricci flow on that region will either make it look like a patch of a three sphere after some finite "run time" for the flow, or "pinch" it beyond recognition (a singularity). This insight is due to Hamilton. The second of these insights, which is the result of Perelman, is that whenever the flow pinched the manifold, one could perform surgery on the manifold which would remove the pinch but leave the structure of the manifold unchanged otherwise.

I'm going to expand on this explanation for possible incorporation into the article. JacobB 00:30, 23 December 2009 (EST)

I have to ponder this further. Thanks for your explanation and I look forward to your additional insights.--Andy Schlafly 00:51, 23 December 2009 (EST)
While we can create many different flows of this sort by altering that other function, there is a diffeomorphism of the manifold for which any flow you can think of will be identical to the Ricci flow. Perelman further modified this flow to be scale invariant, and called this discovery "Perelman entropy." JacobB 01:00, 23 December 2009 (EST)
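The heat-equation analogy above can be illustrated numerically: under $u_t = \Delta u$ (here in one dimension, with a simple explicit finite-difference scheme), fine detail in the initial data decays quickly while the coarse shape survives, which is the same "details disappear, essential structure remains" behaviour described for Ricci flow. The snippet below is an added illustration, not part of the talk-page discussion, and all its parameters are arbitrary.

```python
# Added illustration (not part of the discussion): 1-D heat equation u_t = u_xx
# on a periodic interval.  A broad bump plus fine wiggles is smoothed: the
# wiggles die out quickly while the bump persists.
import math

N, dx, dt, steps = 100, 1.0, 0.2, 500        # dt/dx^2 = 0.2 < 0.5 keeps the scheme stable
u = [math.sin(2 * math.pi * i / N) + 0.3 * math.sin(2 * math.pi * 17 * i / N)
     for i in range(N)]                       # one broad bump plus fine wiggles
u0 = list(u)

for _ in range(steps):
    u = [u[i] + dt / dx ** 2 * (u[(i + 1) % N] - 2 * u[i] + u[(i - 1) % N])
         for i in range(N)]

print("initial range of u: %.2f" % (max(u0) - min(u0)))   # about 2.5
print("final range of u:   %.2f" % (max(u) - min(u)))     # about 1.3: only the broad bump is left
```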
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9639184474945068, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/35042/why-is-current-not-0-in-a-regular-resistor-battery-circuit-immediately-after-y?answertab=oldest
# Why is current not 0 in a regular resistor - battery circuit immediately after you closed a circuit? In regular open circuits with either a capacitor or inductor element, (when capacitor is uncharged) with a battery, when a switch is closed to complete the circuit the current is said to be 0 because current doesn't jump immediately. But in a circuit with just resistors, as soon as a switch is closed the current isn't 0? Example is this question from 2008 AP Physics C Exam http://apcentral.collegeboard.com/apc/public/repository/ap08_physics_c_em_frq.pdf Go to Question 2 for details. - It's not clear enough what you're having trouble with. Conceptually, yes, current jumps instantly in a battery-resistor circuit. If it is as simple as this, I don't see what you're asking. – AlanSE Aug 27 '12 at 19:29 @AlanSE, that's my problem why does it jumpy instantaneously? – jak Aug 27 '12 at 20:12 @jak, it's as simple as Ohm's Law. When the switch is instantaneously closed, the voltage across the resistor instantaneously changes from zero to some non-zero value. Thus, by Ohm's Law, the current instantaneously changes from zero to some non-zero value. – Alfred Centauri Aug 28 '12 at 1:56 ## 3 Answers In real life, the current can't jump instantaneously because there is always some finite inductance in a circuit. However, this is just a typical idealized textbook problem where the inductance is assumed identically zero, so the current can jump instantaneously according to the assumptions of the problem. Note the current also jumps in their solution for the capacative case. - This is really a footnote to user1631's answer: even in the absence of any inductance the current obviously can't change instantly because no signal can propagate faster than the speed of light. In typical circuits the increase in current propagates somewhere between $0.1c$ and $c$. - This question is working within the realm of 'circuit theory', which is an idealization useful for introductory teaching of electromagnetism. It is really a simplification of electrodynamic field theory, just a special case making useful assumptions. A lot of conceptual problems in circuit based questions come from forgetting that you are dealing with a slightly unreal situation. The answer is as above, the current does not instantaneously propogate throughout the circuit but at some finite speed $<c$. In introductory problems however this speed is much faster than you need to worry about. -
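The first two answers can be made quantitative with the standard series R-L step response $i(t) = \frac{V}{R}\left(1 - e^{-Rt/L}\right)$: even a small parasitic inductance only delays the current by a time constant $\tau = L/R$ on the order of nanoseconds, which is why the idealized circuit-theory treatment lets the current jump instantly. The numbers below are illustrative assumptions, not taken from the linked exam problem.

```python
# Illustrative numbers (assumed): a 10 V battery, 100 ohm resistor and 100 nH of
# stray wiring inductance.  i(t) = (V/R)*(1 - exp(-t/tau)) with tau = L/R reaches
# essentially its final value within a few nanoseconds.
import math

V, R, L = 10.0, 100.0, 100e-9
tau = L / R                                    # 1 ns here
for k in range(6):
    t = k * tau
    i = (V / R) * (1 - math.exp(-t / tau))
    print("t = %.1f ns   i = %.4f A" % (t * 1e9, i))
```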
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9577736258506775, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/56453?sort=oldest
## Epimorphisms have dense range in TopHausGrp?

Consider the category of Topological Groups with continuous homomorphisms. Then a continuous homomorphism $f:G\rightarrow H$ with dense range is an epimorphism. Is the converse true? If not, what about for locally compact groups?

Even for groups, without topology, this is not trivial -- Wikipedia points me to a simple proof given by Linderholm, "A Group Epimorphism is Surjective", The American Mathematical Monthly Vol. 77, No. 2 (Feb., 1970), pp. 176-177, see http://www.jstor.org/pss/2317336 It is far from obvious to me that this argument extends to the topological case (but perhaps it does).

Edit: As suggested in the comments, I really meant to ask about Hausdorff topologies.

- There is rarely need to use abbreviations in MO questions (and, in your title, calling maps with dense image surjective is strange!) – Mariano Suárez-Alvarez Feb 23 2011 at 22:17
- I don't think the abbreviations hinder one's ability to read the question, myself. Though I agree that the way the title is worded is momentarily confusing... – Yemon Choi Feb 23 2011 at 23:00
- I replaced the abbreviations - the meaning of 'cts' may not leap out at someone whose first language is not English, and 'homo' is a particularly inelegant abbreviation IMHO. – David Roberts Feb 23 2011 at 23:18
- It's not true that continuous homomorphisms with dense range are epimorphisms, unless you work in the category of Hausdorff topological groups. This is just because you can give any group the indiscrete topology, and in that context all maps have dense image. Alternatively, you can consider the inclusion $\mathbb{Q}\to\mathbb{R}$, which equalises the projection and the zero map to the non-Hausdorff group $\mathbb{R}/\mathbb{Q}$. – Neil Strickland Feb 24 2011 at 7:42
- Mariano is right. I would even say that a mathematical text should never contain any abbreviation. This rule has been observed, I think, by the best authors: Bourbaki (also in English), Grothendieck, Serre, Cartan and his seminarists,... – Georges Elencwajg Feb 24 2011 at 9:15

## 1 Answer

Google, MathSciNet and some ferreting lead me to

MR1235755 (94m:22003) Uspenskiĭ, Vladimir. The solution of the epimorphism problem for Hausdorff topological groups. Sem. Sophus Lie 3 (1993), no. 1, 69–70.

where the review indicates that the answer is negative in general, but positive for locally compact groups; this latter case was apparently treated in

MR0492044 (58 #11204) Nummela, Eric C. On epimorphisms of topological groups. Gen. Topology Appl. 9 (1978), no. 2, 155–167.

The case of compact groups had been done earlier by Poguntke:

MR0263978 (41 #8577) Poguntke, Detlev. Epimorphisms of compact groups are onto. Proc. Amer. Math. Soc. 26 (1970), 503–504.

and this apparently inspired the authors of the following paper

MR1338245 (96c:46054) Hofmann, K. H.; Neeb, K.-H. Epimorphisms of $C^*$-algebras are surjective. Arch. Math. (Basel) 65 (1995), no. 2, 134–137.

- It's interesting that the answer is "yes" for Hausdorff topological spaces, and "yes" for groups, but "no" for Hausdorff topological groups. – Greg Marks Feb 23 2011 at 23:25
- Many thanks. I guess I really should have been able to find the Nummela paper myself, given its title! – Matthew Daws Feb 24 2011 at 8:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8956884145736694, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/77089/grothendieck-group-for-projective-space-over-the-dual-numbers/77125
## Grothendieck group for projective space over the dual numbers

Fix a field $k$. For a singular variety $X$, I understand that the Grothendieck group $K^0(X)$ of vector bundles on $X$ is not necessarily isomorphic to the Grothendieck group $K_0(X)$ of coherent sheaves on $X$. I am curious to learn what is known about these two groups in one family of examples: $\mathbb P^n_{D}$, where $D$ is the dual numbers $D=k[\epsilon]/(\epsilon^2)$. References would be especially appreciated, as I know very little about K-theory.

## 2 Answers

If $X$ is a noetherian separated scheme and $X_{red}$ its reduction, we have $K_0(X)=K_0(X_{red})$: in other words $K_0$ doesn't see nilpotents. Much more generally and profoundly, Quillen has proved that for all his $K$-theory groups, $K_i(X)=K_i(X_{red})$. In your particular case you thus have (in the following $T$ is an indeterminate) $$K_0(\mathbb P^n_D)=K_0(\mathbb P^n_k)=\mathbb Z[T]/(T^{n+1})$$

As for $K^0$, a special case of a theorem of Berthelot (SGA 6, Exposé VI, Théorème 1.1, page 365) states that, for any commutative ring $A$, we have $K^0(\mathbb P^n_A)=K^0(A)[T]/(T^{n+1})$. If $A=D=k[\epsilon]$, we have $K^0(D)=\mathbb Z$, since projective modules over local rings (like $D$) are free. So here too $$K^0(\mathbb P^n_D)=\mathbb Z[T]/(T^{n+1})$$

Bibliography: Srinivas has written this nice book on $K$-theory. And as an homage to the recently sadly departed Daniel Quillen, let me refer to his groundbreaking paper "Higher algebraic $K$-theory I", published in Springer's Lecture Notes LNM 341.

- Thank you for the thoughtful answer and the helpful references. – Daniel Erman Oct 4 2011 at 17:07

The identity map from $k$ to itself factors through $D$. Thus, if $K$ represents either $K^0$ or $K_0$, $K(\mathbb P^n_{k})$ is a direct summand of $K(\mathbb P^n_{D})$. For $M$ a coherent $\mathbb P^n_{D}$-module, we have an exact sequence $$0\rightarrow \epsilon M \rightarrow M \rightarrow M/\epsilon M\rightarrow 0$$ The classes of the modules on the left and right, and therefore the class of the module in the middle, all come from the $K_0(\mathbb P^n_{k})$ direct summand. This shows that $K_0(\mathbb P^n_{D})=K_0(\mathbb P^n_{k})$. For $K^0$, you can apply Nakayama's Lemma.

- Thank you for this excellent answer. – Daniel Erman Oct 4 2011 at 17:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9247627258300781, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2009/05/08/upper-triangular-matrices-and-orthonormal-bases/?like=1&source=post_flair&_wpnonce=5b12459335
# The Unapologetic Mathematician

## Upper-Triangular Matrices and Orthonormal Bases

I just noticed in my drafts this post which I'd written last Friday never went up.

Let's say we have a real or complex vector space $V$ of finite dimension $d$ with an inner product, and let $T:V\rightarrow V$ be a linear map from $V$ to itself. Further, let $\left\{v_i\right\}_{i=1}^d$ be a basis with respect to which the matrix of $T$ is upper-triangular. It turns out that we can also find an orthonormal basis which also gives us an upper-triangular matrix. And of course, we'll use Gram-Schmidt to do it.

What it rests on is that an upper-triangular matrix means we have a nested sequence of invariant subspaces. If we define $U_k$ to be the span of $\left\{v_i\right\}_{i=1}^k$ then clearly we have a chain

$\displaystyle U_1\subseteq\dots\subseteq U_{d-1}\subseteq U_d=V$

Further, the fact that the matrix of $T$ is upper-triangular means that $T(v_i)\in U_i$. And so the whole subspace is invariant: $T(U_i)\subseteq U_i$.

Now let's apply Gram-Schmidt to the basis $\left\{v_i\right\}_{i=1}^d$ and get an orthonormal basis $\left\{e_i\right\}_{i=1}^d$. As a bonus, the span of $\left\{e_i\right\}_{i=1}^k$ is the same as the span of $\left\{v_i\right\}_{i=1}^k$, which is $U_k$. So we have exactly the same chain of invariant subspaces, and the matrix of $T$ with respect to the new orthonormal basis is still upper-triangular.

In particular, since every complex linear transformation has an upper-triangular matrix with respect to some basis, there must exist an orthonormal basis giving an upper-triangular matrix. For real transformations, of course, it's possible that there isn't any upper-triangular matrix at all. It's also worth pointing out here that there's no guarantee that we can push forward and get an orthonormal Jordan basis.

Posted by John Armstrong | Algebra, Linear Algebra

## 1 Comment »

1. [...] and see what happens as we try to diagonalize it. First, since we're working over here, we can pick an orthonormal basis that gives us an upper-triangular matrix and call the basis . Now, I assert that this matrix already is diagonal when is [...] Pingback by | August 10, 2009 | Reply
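To see the argument in coordinates (my own sketch, not part of the post): if $B$ is the matrix whose columns are the original basis vectors $v_i$ and $A$ is the upper-triangular matrix of $T$ in that basis, then Gram-Schmidt on the columns of $B$ is exactly the QR decomposition $B = QR$, and the matrix of $T$ in the orthonormal basis given by the columns of $Q$ is $Q^{\top}(BAB^{-1})Q = RAR^{-1}$, again upper-triangular. A quick NumPy check with randomly chosen $A$ and $B$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Matrix of T in some arbitrary (non-orthonormal) basis: upper-triangular by assumption.
A = np.triu(rng.standard_normal((d, d)))
# Basis vectors v_1, ..., v_d as the columns of an invertible matrix B.
B = rng.standard_normal((d, d))
# T expressed in standard coordinates.
T = B @ A @ np.linalg.inv(B)

# Gram-Schmidt on the columns of B is the QR decomposition B = QR.
Q, R = np.linalg.qr(B)

# Matrix of T in the new orthonormal basis (the columns of Q).
A_new = Q.T @ T @ Q

# The part below the diagonal vanishes up to round-off, i.e. still upper-triangular.
print(np.max(np.abs(np.tril(A_new, k=-1))))   # tiny, e.g. around 1e-14
```

Over the complex numbers this is essentially what the Schur decomposition produces directly.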
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 18, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8979453444480896, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=4201204
Physics Forums

## Eigenvalues of a complex symmetric matrix

Eigenvalues of a complex symmetric matrix which is NOT Hermitian are not always real. I want to formulate conditions under which the eigenvalues of a complex symmetric matrix (which is not Hermitian) are real.

Well, assume $A$ has real eigenvalues. Then if $\lambda,x$ are an "eigenpair" we have $x^*Ax=x^*\lambda x = \lambda x^*x$ which is real. On the other hand, $\lambda x^* x = (\lambda x)^*x = (Ax)^*x = x^*A^*x$ so that we have $x^*Ax = x^*A^*x$. Now, if $A$ is symmetric, I *think* this means it can be diagonalised (ie there is an eigenbasis) and so this argument seems like it might imply that $A=A^*$, that is, $A$ is Hermitian. However, I don't have a lot of time to think about it now, and I might be missing something important.

Do you mean if a complex symmetric matrix is diagonalizable, it will have real eigenvalues?

## Eigenvalues of a complex symmetric matrix

No, I *think* all symmetric matrices are diagonalisable (and thus have an eigenbasis) and if all eigenvalues are real, then the matrix is Hermitian. That is, I am saying that a symmetric matrix is Hermitian iff all eigenvalues are real.

Quote by Robert1986: That is, I am saying that a symmetric matrix is Hermitian iff all eigenvalues are real.

A symmetric matrix is Hermitian iff the matrix is real, so that is not a good way to characterize symmetric complex matrices. I don't think there is a simple answer to the OP's question.

Quote by AlephZero: A symmetric matrix is Hermitian iff the matrix is real, so that is not a good way to characterize symmetric complex matrices. I don't think there is a simple answer to the OP's question.

I should have been more clear. Any symmetric matrix $M$ has an eigenbasis (because any symmetric matrix is diagonalisable.) Now, if all the eigenvalues of a symmetric matrix are real, then $A^* = A$, ie, $A$ is Hermitian. However, as you pointed out, since $A^\top = A$ by assumption, this implies that $A$ is real. So, what I am saying is that there are no complex symmetric matrices with all real eigenvalues. (Unless, of course, the matrix is real.) In other words, given that $M$ is symmetric and has real eigenvalues, then it must be real.

EDIT: In fact, given any matrix $M$, if $x^*M^*x$ is real for all $x$ then $M$ is Hermitian.

Quote by Robert1986: So, what I am saying is that there are no complex symmetric matrices with all real eigenvalues. (Unless, of course, the matrix is real.)

OK, I agree with that. But the number of complex eigenvalues can be anything from 1 to the order of the matrix, which doesn't go very far to answer the OP's question.

Quote by AlephZero: OK, I agree with that. But the number of complex eigenvalues can be anything from 1 to the order of the matrix, which doesn't go very far to answer the OP's question.

Perhaps I misunderstood the OP, but it seems that he wants to know under what conditions the eigenvalues of a symmetric matrix are real. The answer is that the matrix must be real.

Maybe we both misunderstood, but I read the OP's "which eigen values ... are real" as a question about some of them, not all of them. It might be possible to give an answer if the eigenproblem represents a physical system.
For example the eigenvalues of a damped multi-degree-of-freedom oscillator, with an arbitrary damping matrix, represent the damped natural frequencies on the s-plane, therefore they are all complex except for zero-frequency (rigid body motion) modes. Also, the sign of the real part of the eigenvalues shows whether the mode is damped, undamped, or unstable (i.e. it gains energy from outside the system). But I don't know how to turn that "physics insight" about a particular physical system into a mathematical way to characterize the matrix.

Quote by AlephZero: Maybe we both misunderstood, but I read the OP's "which eigen values ... are real" as a question about some of them, not all of them. It might be possible to give an answer if the eigenproblem represents a physical system. For example the eigenvalues of a damped multi-degree-of-freedom oscillator, with an arbitrary damping matrix, represent the damped natural frequencies on the s-plane, therefore they are all complex except for zero-frequency (rigid body motion) modes. Also, the sign of the real part of the eigenvalues shows whether the mode is damped, undamped, or unstable (i.e. it gains energy from outside the system). But I don't know how to turn that "physics insight" about a particular physical system into a mathematical way to characterize the matrix.

Ha, I see now. It seems we are interpreting things differently. Looking back at the OP, it seems that your interpretation is more in line with what the OP wrote; however I prefer my interpretation because it admits a solution :). OK, given your interpretation (which I now think is the correct one), I agree that the problem is hard and not very well formulated. "Which eigenvalues are real?" is kind of an odd one to answer. I mean, "which" in what sense?

What I am trying to say is this. All Hermitian matrices are symmetric but all symmetric matrices are not Hermitian. Eigenvalues of Hermitian (real or complex) matrices are always real. But what if the matrix is complex and symmetric but not Hermitian? In a Hermitian matrix the ij element is the complex conjugate of the ji element. But I am talking about a matrix for which the ij element and the ji element are equal. Eigenvalues of such a matrix may not be real. So under what conditions will the eigenvalues be real?

Quote by sodaboy7: What I am trying to say is this. All Hermitian matrices are symmetric but all symmetric matrices are not Hermitian. Eigenvalues of Hermitian (real or complex) matrices are always real. But what if the matrix is complex and symmetric but not Hermitian? In a Hermitian matrix the ij element is the complex conjugate of the ji element. But I am talking about a matrix for which the ij element and the ji element are equal. Eigenvalues of such a matrix may not be real. So under what conditions will the eigenvalues be real?

First of all, a Hermitian matrix is symmetric if and only if the matrix is real. A Hermitian complex matrix is not symmetric. But, to answer your question, the matrix must be real. That is, if a matrix is symmetric and has real eigenvalues, then it is a real matrix. Does this make sense? Put another way, all symmetric matrices with real eigenvalues are real matrices.

Quote by Robert1986: I should have been more clear. Any symmetric matrix $M$ has an eigenbasis (because any symmetric matrix is diagonalisable.)

But it has not been proved in this thread (nor is a reference to a proof given) that a symmetric matrix must be diagonalizable.

Quote by Robert1986: EDIT: In fact, given any matrix $M$, if $x^*M^*x$ is real for all $x$ then $M$ is Hermitian.
But it is not proved, at this stage, that $x^*Mx$ is real for all $x$, even if $M$ is symmetric and diagonalizable with all eigenvalues real. In that case, this holds if $x$ is an eigenvector of $M$, but since we don't know that the eigenbasis is orthogonal, this cannot be generalized to all $x$.

Quote by Erland: But it has not been proved in this thread (nor is a reference to a proof given) that a symmetric matrix must be diagonalizable.

Yes, and as I think about it, there are really simple counterexamples to what I said. So, forget what I wrote... However, as proven in Matrix Analysis, if a symmetric matrix is diagonalisable, then it is diagonalisable via an orthogonal matrix, and so what I wrote does follow. NOW, if the matrix is not diagonalisable, there is obviously not an eigenbasis (orthogonal or otherwise.) Now, if the matrix is normal (commutes with its adjoint) then it is orthogonally diagonalisable (this is also in Matrix Analysis), and what I wrote then follows.

EDIT: Again, from Matrix Analysis, if $M$ is complex-symmetric, there is a unitary matrix $U$ such that $M=UDU^\top$ where the columns of $U$ are eigenvectors of $MM^* = M\bar{M}$ and $D$ is diagonal and the entries are the positive square roots of the corresponding eigenvalues. Now, if the columns of $U$ are real, then $U$ is orthogonal and so $M$ is orthogonally diagonalisable, and what I wrote follows. So, IF the eigenvectors of $M\bar{M}$ are real, then the eigenvalues of $M$ are real.
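A quick numerical illustration of the thread's conclusions (my own sketch, with matrices chosen just for the example): a complex symmetric matrix that is not real generally has non-real eigenvalues, and it need not even be diagonalizable.

```python
import numpy as np

# Complex symmetric (A == A.T) but not Hermitian: its eigenvalues need not be real.
A = np.array([[1 + 1j, 2      ],
              [2,      3 - 1j]])
print(np.allclose(A, A.T), np.allclose(A, A.conj().T))   # True False
print(np.linalg.eigvals(A))                               # two non-real eigenvalues

# A complex symmetric matrix can even fail to be diagonalizable: N below is symmetric,
# nonzero, and satisfies N @ N == 0, so its only eigenvalue is 0 and it has no eigenbasis.
N = np.array([[1,  1j],
              [1j, -1]])
print(np.allclose(N, N.T), np.allclose(N @ N, 0))         # True True
```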
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9181146025657654, "perplexity_flag": "head"}
http://complexzeta.wordpress.com/2007/08/14/new-objects-from-old-using-equivalence-classes-of-pairs/
An Idelic Life Algebraic number theory and anything else I feel like telling the world about # New objects from old using equivalence classes of pairs Tuesday, August 14, 2007 in K-theory, ring theory In many places in mathematics, we see some variant of the following simple construction. The first time we see it is in constructing the integers from the natural numbers: Consider pairs $(a,b)$ of elements of $\mathbb{N}$ (which includes zero for our purposes, but it doesn’t really matter this time). We form equivalence classes out of these pairs by saying that $(a,b)\sim(c,d)$ if $a+d=b+c$. We can create a group structure on these pairs by setting $(a,b)+(c,d)=(a+c,b+d)$. The resulting group is isomorphic to $\mathbb{Z}$. So that’s how to construct the integers from the natural numbers. We see a similar construction when we discuss localizations of rings. Let $R$ be a commutative ring and $S$ a multiplicative subset containing 1. (If $R$ is noncommutative, you can still localize provided that $S$ is an Ore set, but I don’t feel like going there now.) We now consider pairs $(r,s)\in R\times S$ under the equivalence relation $(r_1,s_1)\sim(r_2,s_2)$ if there is some $t\in S$ so that $tr_1s_2=tr_2s_1$. The set of equivalence classes has the structure of a ring, called the localization of $R$ at $S$, and denoted by $R_S$. This construction is generally seen with $S=R\setminus\mathfrak{p}$, where $\mathfrak{p}$ is a prime ideal of $R$. The resulting ring is then local (meaning that it has a unique maximal ideal, namely $\mathfrak{p}R_S$. (We generally write $R_{\mathfrak{p}}$ rather than $R_S$ in this situation.) Anyway, this construction is really useful because localizations at prime ideals are frequently principal ideal domains, and we know all sorts of interesting theorems about finitely generated modules over principal ideal domains. And then we can use some Hasse principle-type result to transfer our results back to the original ring. Notice that I allowed the multiplicative set of localization to contain zero. However, in this case, the localization becomes the trivial ring (or not a ring, if you require that $0\neq 1$ in your definition of a ring, as many people do). More generally, allowing zero divisors in the multiplicative set causes various elements in $R$ to become zero in the localization. A similar construction shows up in $K$-theory. Suppose $A$ is any commutative semigroup. We consider pairs $(a,b)\in A\times A$ under the equivalence relation $(a,b)\sim(c,d)$ if there is some $e\in A$ so that $a+d+e=b+c+e$. (This is necessary since we do not assume that $A$ satisfies the cancellation property.) The resulting equivalence classes form a group called the Grothendieck group of $A$ and denoted by $K_0(A)$. The Grothendieck group satisfies the following universal property. Let $\phi$ be the map sending $a\in A$ to $(a+b,b)$ for any $b\in A$. (This is easily seen to be well-defined.) Now let $G$ be any abelian group and $\psi:A\to G$ any semigroup homomorphism. Then there is a (unique) map $\theta:K_0(A)\to G$ so that $\theta\phi=\psi$. Grothendieck groups can be very helpful for studying rings. Let $R$ be a commutative ring, and let $A$ denote the semigroup of isomorphism classes of projective $R$ modules (under the operation of direct sum). Then $K_0(A)$ (or $K_0(R)$, as people often write) is an important object of study. If $R$ is a field, for instance, then $K_0(R)\cong\mathbb{Z}$. 
However, if $R$ is the ring of integers of a number field $K$, then $K_0(R)\cong\mathbb{Z}\oplus C(K)$, where $C(K)$ is the ideal class group. Perhaps more interesting is Swan's Theorem, which relates vector bundles over a compact topological space to the projective modules over its ring of continuous functions: they have isomorphic Grothendieck groups. But that's probably the subject of another post, especially if I can manage to understand my notes from Max Karoubi's lecture series in Edmonton.

## 1 comment

Let me think of a few other uses. Localizations of modules. Local rings have lots of nice properties, especially Nakayama's lemma, which makes them useful in commutative algebra. In algebraic geometry, the local ring on a scheme plays an important role (and the stalk of a quasicoherent sheaf; this is just localization too). At most places (at nonsingular points), the local rings will even be regular and consequently UFDs of finite homological dimension. Completions (e.g. the p-adic integers) give rise to local rings. Those complete local rings are of course of interest in the theory of local fields, and for ideles in the theory of global fields. Localization can even be used to prove unique factorization of ideals in a Dedekind domain. Localization is also friendly since it is an exact functor. It commutes with Hom and the tensor product in a certain sense (the former for finitely presented modules, at least; localization is also really a tensor product operation). Yes, I agree: localization is universal.
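The first construction described in the post, building $\mathbb{Z}$ from equivalence classes of pairs of naturals, is easy to play with in code. Here is a small sketch (mine, not the author's); the function names are ad hoc, and each class is stored by a canonical representative.

```python
# Each "integer" is an equivalence class of pairs (a, b) of naturals, thought of as a - b.
# (a, b) ~ (c, d) iff a + d = b + c; we pick the representative with a zero in one slot.

def canon(pair):
    a, b = pair
    m = min(a, b)
    return (a - m, b - m)            # canonical representative of the class

def equivalent(p, q):
    (a, b), (c, d) = p, q
    return a + d == b + c

def add(p, q):
    (a, b), (c, d) = p, q
    return canon((a + c, b + d))     # addition is defined componentwise

def neg(p):
    a, b = p
    return (b, a)                    # the inverse that the naturals were missing

three, five = (3, 0), (5, 0)
print(add(three, neg(five)))         # (0, 2), i.e. the class representing -2
print(equivalent((7, 2), (10, 5)))   # True: both pairs represent 5
```

The same recipe, with the extra slack element $e$ in the equivalence relation, gives the Grothendieck group of any commutative semigroup, as described later in the post.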
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 53, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9209257960319519, "perplexity_flag": "head"}
http://bloggingmath.wordpress.com/2007/10/
# Point-In-Polygon Testing Part 1

One of the most basic geometric tests is whether a point $p$ is inside a polygon $poly$. The most common method for this test is casting a ray from the point $p$ either horizontally or vertically and checking how many times it intersects the polygon. If the number of intersections is odd then $p$ is inside $poly$. If the number of intersections is even then $p$ is outside $poly$. In the illustration below the horizontal ray cast from the black point intersects the polygon 3 times. Thus the black point is inside the polygon.

There are a few corner cases that have to be carefully handled.

Case 1. The point $p$ is on the boundary of the polygon. In this case the algorithm may return true or false because of floating point issues.

Case 2. The casted ray intersects a vertex of the polygon. In this case the naive implementation of the algorithm would return two intersections because of the top line segment and the bottom line segment. The way to fix this is to only count an intersection if (1) it's not an end-point intersection or (2) it's an end-point intersection and the end-point of the line segment lies below the casted ray.

Case 3. The casted ray overlaps a line segment, $l$. In this case the casted ray intersects an end-point of the overlapping line segment $l$, and the segment does not lie below the casted ray. So do not count the line segment $l$ intersection(s).

By implementing the special cases this point-in-polygon method should work extremely well. It's a fast method since the only operations used are compares. The big-O is O(n) where n is the number of line segments for the polygon.

Posted in Computational Geometry Oct·29

# More on Floating Point

In my last post I outlined one of the issues with floating point. My particular geometric problem is testing if two polygons are subsets of each other. In general I want to test if one polygon is a subset of another polygon. Also the polygons can be rotated, translated, or reflected; in particular, any rigid motion can be applied to the polygons. To again illustrate the problem with floating point I'll give an example: Let polygon $T1 = \{(0, 0), (1, 0), (1, 1)\}$ (a triangle), and polygon $T2 = \{ (0, 0), (0.1, 0.1), (0, 1)\}$ (another triangle). See the below illustration.

By rotating triangle T1 by exactly $-\pi/4$ radians, I can make triangle T1 a subset of triangle T2. But the problem with floating point is that $-\pi/4$ is not a floating point number so I can't rotate by exactly $-\pi/4$. Which means that using the ordinary algorithms for subsetness there is no easy way to make T1 a subset of T2. That's why I wrote the previous post on using some sort of fuzzy $\epsilon$ in my geometry algorithms.

But I believe that there is another way to get the result I want. What I can do is rotate T1 by the closest floating point number to $-\pi/4$ and then do some boolean operations on T1 and T2. In particular take the boolean intersection of T1 and T2, call the resulting shape I (the intersection). And then take the boolean difference of T1 and I, call the resulting shape R (the remainder). If R is small enough (say the area is within $\epsilon = 10^{-5}$) then I can say that T1 is a subset of T2. In this way I can avoid using $\epsilon$ in the algorithm for testing if a point is contained in a polygon (which is what I was supposed to write about in this post).
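Here is a small sketch (mine, not the author's code) of the ray-casting test described in the Part 1 post above. The half-open comparison used below is one standard way to encode the vertex rule in Case 2; the example polygon is just for illustration.

```python
def point_in_polygon(p, poly):
    """Crossing-number test: cast a horizontal ray to the right from p.

    poly is a list of (x, y) vertices in order; edges wrap around at the end.
    The half-open test (y1 > py) != (y2 > py) counts a vertex that lies exactly
    on the ray only for edges whose other endpoint is strictly above it -- the
    same bookkeeping as the post's Case 2, with "above" in place of "below" --
    and it automatically skips horizontal edges lying on the ray (Case 3).
    Points exactly on the boundary may still go either way (the post's Case 1).
    """
    px, py = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x-coordinate where this edge crosses the ray's horizontal line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon((0.5, 0.5), square))   # True
print(point_in_polygon((1.5, 0.5), square))   # False
```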
Posted in Computational Geometry Oct·26

# Rigorous Geometry

I've mentioned before that floating point numbers are not real numbers. In particular they are not associative or distributive, and also most real numbers cannot be exactly represented. Thus a naive implementation of common geometry operations such as line intersection or testing if a point is contained (by contained I mean the point is either on the boundary or in the interior of the polygon) inside a polygon can have problems.

I'll give a short example of testing if a point is inside a square. Let $p = (0.5, 0.5)$ and let the square, $s$, have vertices at $(0, 0), (1, 0), (1, 1), (0, 1)$. Obviously $p$ is contained in $s$ and any reasonable algorithm for testing so would yield true. I now want to try rotating $s$ by $\pi/4$ radians. This is the point where the pesky problems with floating point come in. The number $\pi/4$ is not a floating point number so it is approximated by 0.785398164 (I haven't written all the digits of the floating point number but that doesn't change the issue). It turns out that 0.785398164 is greater than $\pi/4$. That is, the square was rotated by a little bit more than $\pi/4$ radians. Thus the point $(0.5, 0.5)$ is not contained in the rotated square. This is a bad thing because intuitively the point $(0.5, 0.5)$ should be contained in the rotated square. After all we meant to rotate the square by $\pi/4$ radians.

I don't know of any way to fix the problem where we can't rotate by the number of desired radians. What is possible is to modify the algorithms for testing if a point is contained inside a polygon so that they return true if the point is within $\epsilon$ of the boundary of the polygon or is actually contained in the polygon. The $\epsilon$ is a fixed number that is large enough to mask the floating point rounding problems. My next post will give one of the algorithms for testing point containedness with the $\epsilon$ modification.

Posted in Computational Geometry Oct·24
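The post's rounded value suggests single precision; with Python doubles the rounding can go the other way, but the phenomenon is the same: the point that should land exactly on the rotated edge gives a tiny nonzero side test, and only an epsilon-tolerant comparison recovers the intended answer. A minimal sketch (mine, with an assumed tolerance):

```python
import math

theta = math.pi / 4                 # not exactly pi/4, just the nearest double
c, s = math.cos(theta), math.sin(theta)

corner = (c, s)                     # image of the square's corner (1, 0) after rotation
p = (0.5, 0.5)                      # should lie exactly on the edge from (0, 0) to corner

# Cross product of the edge direction with p: exactly 0 would mean "on the edge".
cross = corner[0] * p[1] - corner[1] * p[0]
print(cross)                        # tiny nonzero value; sign depends on the platform's libm

eps = 1e-9                          # assumed tolerance
print(abs(cross) <= eps)            # True: the epsilon-tolerant test gives the intended answer
```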
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 36, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9139008522033691, "perplexity_flag": "head"}
http://mathoverflow.net/questions/55392?sort=oldest
## Intended interpretations of set theories

In his Set Theory. An Introduction to Independence Proofs, Kunen develops $ZFC$ from a platonistic point of view because he believes that this is pedagogically easier. When he talks about the intended interpretation of set theory he says such things as, for example, that the domain of discourse $V$ is the collection of all (well-founded, when foundation is introduced) hereditary sets.

This point of view has always made me feel a bit uncomfortable. How can a variable in a first-order language run over the elements of a collection that is not a set? Only recently I realized that it is one thing to be a platonist, and another to believe such an odd thing.

A first-order theory of sets with a countable language can only prove the existence of countably many sets. Let me call them provable sets for short. Platonistically, we wish our intended interpretation of that theory to be one in which every provable set is actually the set the theory says it is. So we don't need our interpretation to contain every set, we just need that it contains at least the true provable sets. This collection is, really, a set, although it doesn't know it. To be a bit more concrete, if one is a platonist and the cumulative hierarchy is what one has in mind as the real universe of sets, one can think that the $V$ of one's theory actually refers to an initial segment of that hierarchy, hence variables no longer range over the real $V$ but only over the elements of some $V_\alpha$.

There's a parallel to these ideas. For example, when we want to prove consistency with $ZFC$ of a given sentence, we do not directly look for a model of $ZFC$ where that sentence is true, but instead we take advantage of knowing that every finite fragment of $ZFC$ is consistent and that every proof involves only finitely many axioms.

My question is: then, is this position tenable or am I going awfully wrong? I apologize that this seems a philosophical issue rather than a mathematical one. I also apologize for stating things so simply (out of laziness).

- It's a pity only one answer can be accepted. Thanks a lot. – Marc Alcobé García Feb 15 2011 at 7:25

## 5 Answers

While Kunen takes for universe the collection of all hereditary sets, Marc proposes to restrict the universe to those hereditary sets which are first-order definable without parameters (Marc's "provable sets"). To make this more mathematical, let me rephrase the question as follows: Suppose $M$ is a model of ZF. Does the subset $K$ of $M$ consisting of the first-order definable (without parameters) elements of $M$ form a model of ZF? This turns out to be a delicate question which was answered here on MO some time ago. To summarize the answer there, Marc's proposed universe $K$ will be a model of ZF if and only if $M$ is a model of ZF + V = OD. Returning to the more philosophical question, this says that Marc's proposal is tenable if and only if there is a reason to believe that the hereditary sets are all ordinal definable. Occam's Razor type arguments support that V = OD and, indeed, V = L is true. However, since it is much weaker than the rigid assumption V = L, the assumption V = OD is not incompatible with a much richer view of the universe... There is another interpretation of Marc's "provable sets."
Instead of merely requiring them to be first-order definable without parameters, one can require them to provably exist in any well-founded model of ZF. With this interpretation, one gets a potentially much smaller set: the minimal well-founded model of ZF. This minimal model is $L_\alpha$ where $\alpha$ is the smallest element of $Ord\cup\{Ord\}$ such that $L_\alpha \models ZF$. Note that it is entirely plausible that $\alpha = Ord$ (for example this will happen if we happen to live in the minimal model in question). The assumption that "$L_\alpha$ is a set" is equivalent to the existence of a well-founded model of ZF. From a philosophical standpoint, this is a rather strong assertion. For example, it implies that ZF is not only consistent but also $\omega$-consistent and much more...

- As far as I can see, if there is an ordinal $\alpha$ such that $L_\alpha$ is a model of ZF, then the first such $\alpha$ is definable (since I've just defined it) and the corresponding $L_\alpha$ is hereditarily definable. It seems that the collection of all hereditarily first-order definable (without parameters) sets will be considerably larger than this $L_\alpha$. Exactly where it lies seems to depend on lots of undecidable information about the ambient model. For example, cardinal exponentiation might encode some complicated reals, encoding complicated hereditarily countable sets. – Andreas Blass Feb 14 2011 at 18:16
- You're completely right! I guess the correct adjective would be predicatively or something like that. – François G. Dorais♦ Feb 14 2011 at 19:02
- Predicatively didn't seem right either, so I went with hierarchically... – François G. Dorais♦ Feb 14 2011 at 19:09
- I wasn't happy with that either, so I rewrote the whole sentence. – François G. Dorais♦ Feb 15 2011 at 15:23

If you assume that ZFC is consistent, then it follows from Gödel's completeness theorem that there is a set model of ZFC. You can then argue from a Platonistic point of view by taking the viewpoint of that set model. Note that Kunen also proves relative consistency results by moving to models that are not sets with respect to the viewpoint of your model. For example, he proves the relative consistency of the GCH with ZFC by moving to the inner model L. The reason he talks about fragments of ZFC for relative consistency results obtained via forcing is because CON(ZFC) does not guarantee the existence of a countable transitive model of ZFC.

Edit: Let me now put forth a more complete answer tying together philosophical and mathematical considerations. A strict finitist doesn't believe in the existence of the set of Natural numbers but that in itself does not prevent the individual from talking about properties shared by all of its elements. Even if you don't believe that it is a static collection of elements that can be put together into a single set, you can still syntactically prove theorems about Natural numbers (as discussed in more detail by Carl with reference to ZFC). But since $\mathbb{N}$ does not exist in this platonistic frame of thinking, your intuitive objection about quantifying over all well-founded sets is analogous to this problem where we try to quantify over all Natural numbers.
Ultimately, some finitists will adopt a compromise where they accept the existence of infinite sets but only ones that can be constructed from the Natural numbers in a finitistic manner (e.g., expressible in a relatively weak theory such as primitive recursive arithmetic or PRA). A somewhat analogous resolution to your philosophical objection is to allow for the existence of definable classes as Kunen does by considering them as "abbreviations for expressions not involving them" or to treat classes as separate formal objects in a conservative extension of ZFC such as NBG. Of course, if you assume the existence of an inaccessible cardinal $\kappa$, then you have a nice set model of NBG, namely the collection of $\Delta_0$-definable subsets of $V_{\kappa}$.

You can also take a semantic point of view but with a syntactic twist. Specifically, you can pretend that you live in a model of ZFC, but you don't know which one. You therefore can assert the existence of sets provable from the axioms of ZFC while acknowledging that there are sets out of reach. Similarly, you can assert ZFC-provable properties of all of the sets in your unspecified universe while realizing that you are unaware of the boolean truth value of statements independent of ZFC. This line of thinking is indeed present with boolean-valued models of ZFC where you talk about the probabilities of certain statements being true or certain sets given by names coming to fruition in forcing extensions.

Finally, you can throw yourself into a constructed set model $M$ guaranteed by ZFC's believed consistency and restrict to some subclass of $M$. Specifically, you can externally take a sufficient cut of its universe $V_{\alpha}^M$ (which may be all of $M$) as you alluded to, or restrict yourself to its parameter-free definable sets if it happens to be a model of $V = OD$ as François mentions, to get a model of ZFC. However, in both cases, you will potentially be sacrificing some of the richness exhibited by the universe $M$.

- To begin, it's currently 6:00 am here and I am not entirely alive mentally, so if my answer is missing the point, let me know. In addition, I just thought I might make some additional points or comments not already mentioned in the other answers.

This is a common point of contention, and honestly there really isn't a good answer that avoids the issues of playing fast and loose with the consistency of $ZF$. Everyone really just comes up with a meta-mathematical justification which sort of sits the best with them and it really is a matter of philosophy and personality.

Firstly, with respect to this comment: "This point of view has always made me feel a bit uncomfortable. How can a variable in a first-order language run over the elements of a collection that is not a set? Only recently I realized that it is one thing to be a platonist, and another to believe such an odd thing." Formally, within the confines of $ZFC$ a variable cannot range over anything that is not a set. However this does not stop us from viewing or speaking about things we cannot formalize within $ZFC$, and Kunen takes liberties with this point of view. The thing that is actually going on when he talks about variables which range over proper classes is that we have just stepped outside of $ZFC$ and are now conversing in the meta-theory.
Secondly the comment you make here: To be a bit more concrete, if one is a platonist and the cumulative hierarchy is what one has in mind as the real universe of sets, one can think that the V of one's theory actually refers to a an initial segment of that hierarchy, hence variables run no more over the real V but only over the elements of some $V_{\alpha}$. is a very keen observation, and foreshadows a nice understanding of inaccessible cardinals assumptions. You see by postulating the existence of a strongly inaccessible cardinal we are in fact assuming that we have an initial segment of the cumulative hierarchy which satisfies all of $ZFC$. The only problem with this is that in doing so we have stepped up the consistency strength of system we are working in. There's a parallel to these ideas. For example, when we want to prove consistency with ZFC of a given sentence, we do not directly look for a model of ZFC where that sentence is true, but instead we take advantage of knowing that every finite fragment of ZFC is consistent and that every proof involves only finitely many axioms. is quite dead on. Because applying Levy reflection, coupled with the downward Löwenheim–Skolem theorem we can produce a countable model of enough of $ZFC$ for whatever argument we care to be trying to formulate. This view provides a lot of comfort when trying to construct forcing arguments. However, if this view is not comforting, there is a differing view on the matter, and it makes an appearance in the way of Boolean-valued models (which Kunen's text is kinda lite on). EDIT: fixed some dumb typos. - I have a few comments that I hope are useful even if they don't clarify things completely. As Michael Blackmon says, different people have different ways of resolving things to their own satisfaction. In the end, attempts to resolve things by reasoning about $V$ in a natural-language set-based metatheory are always going to be complicated because of the set-theoretic paradoxes. If these didn't exist, we could define $V$ in the naive way and not worry about it. But the paradoxes tell us that the idea of "the collection of all pure well-founded sets" is not as simple as we might have hoped. 1) "How can a variable in a first-order language run over the elements of a collection that is not a set?" This question seems to relate to the fact that most set theory books do not work with a semantic metatheory in the way that model theory books do. The counter-question is: from what perspective are we handling an interpretation of the language? From the viewpoint of the object theory, quantifiers are no problem. By analogy, in Peano arithmetic we can quantify over all natural numbers even though we have no sets at all. The way that most set theory books treat things, you want to pretend you are working in a metatheory like PRA that performs only syntactic (uninterpreted) manipulations of formulas. From this metatheoretic point of view, the variables of a ZFC formula don't range over anything at all, because the metatheory does not attempt to interpret them. The advantage of this approach are that it side-steps many philosophical problems, and it gives extra oomph to relative consistency results. The disadvantage is that it is divorced from the semantic way that set theorists actually think about models of ZFC. It would be perfectly possible to work instead with an actual semantic metatheory that can handle models of ZFC as objects. 
In that situation, though, it wouldn't be the case that interpreted quantifiers would range over something that isn't a set, because now the interpretations are all sets; the variables under a certain interpretation would range over the object-sets of that interpretation, not over meta-sets. There is no reason, strictly speaking, that this metatheory would even have to be a set theory. For example, you could use something based on the multiverse axioms of Gitman and Hamkins, which is vaguely analogous to a category-theoretic axiomatization of the category of models of ZFC. In that case, it might not even be possible to directly quantify over meta-sets in the metatheory. 2) It isn't necessary to view "V" as a meta-theoretic definition. Instead, you can simply think of it as an object-theory definition, which stratifies the universe of discourse of a particular model into the levels of the cumulative hierarchy. Each interpretation has its own idea what $V$ denotes. In other words, there's no harm done if you just pretend you have fixed a particular model of ZFC and are working inside it. This is essentially what most set theory books do, even if they don't come out and say it. This means you can ignore any other models that might or might not exist; you've committed to just one of them. 3) The idea that the $V$ from any one model is always an initial segment of $V$ in another model was proposed by Zermelo (1930) "On boundary numbers and domains of sets", translation in From Kant to Hilbert v. 2. This proposal has echoes of the notion of "absolutely infinite" from Cantor. This idea of extending $V$ reappears as one of the multiverse axioms. - Not being able to post comments, this "answer" intends to be a comment to the question: you say "For example, when we want to prove consistency with ZFC of a given sentence, we do not directly look for a model of ZFC where that sentence is true, but instead we take advantage of knowing that every finite fragment of ZFC is consistent and that every proof involves only finitely many axioms." But as far as I know, if we knew that every finite fragment of ZFC is consistent, we would know that $all$ of ZFC is consistent, wouldn't we? I think this would follow by the compactness theorem, hence we cannot assure that every finite fragment of ZFC is consistent. Please let me know if there's something I'm not understanding well here. - This should really be a question. However such a question would duplicate mathoverflow.net/questions/18787/… – François G. Dorais♦ Mar 3 2011 at 12:47 There are two ways to interpret "every finite fragment". You could do it with one sentence $F$, "for every $n$, the conjunction of the first $n$ axioms of ZFC is not contradictory". Or you could make an axiom scheme that contains, for every standard $n$, the sentence saying that the conjunction of the first $n$ sentences of ZFC isn't contradictory. If you add the single sentence $F$ as an axiom to ZFC, the resulting theory does prove Con(ZFC). If you add the axiom scheme to ZFC there is no reason to think that theory would prove Con(ZFC); nonstandard models have nonstandard axioms. – Carl Mummert Mar 3 2011 at 12:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951409101486206, "perplexity_flag": "head"}
http://mathoverflow.net/questions/112388/invariants-of-a-set-of-real-unit-vectors-in-3d-space-under-so3/112531
## Invariants of a set of real unit vectors in 3d space, under SO(3)

I have a set of $n$ real unit vectors in 3-dimensional space. (It is a follow-up to Sets of vectors related by a rotation.) Is there a construction providing a complete set of independent*) invariants under SO(3)?

*) I'm most interested in independent invariants, but a solution without this assumption is of interest as well.

I had two initial ideas, but I ran into some problems:

• construct the Gramian matrix of the vectors; however, the matrix itself then depends on a particular permutation of the vectors, which makes it unusable as a direct test of whether two sets of vectors are equivalent; also, the determinant of the matrix is not enough, as for $n=3$ it gives only 2 out of 3 invariants,
• calculate moments, i.e. $M_{ijk} = \langle x^i y^j z^k \rangle$; then for $i+j+k=1$ there is the length squared, for $i+j+k=2$ the characteristic polynomial of the inertia matrix, but I don't know how to go further.

- This seems related to some work in mechanics on separating the "internal" degrees of systems of $n$-particles from those arising from translations and rotations. One motivation of this is the "falling cat problem": how does a cat falling upside down from a tree manage to reorient itself to fall on its feet, given that it cannot create torques by clawing the air? I don't know enough to post a real answer to your question but there is a review here: rmp.aps.org/abstract/RMP/v69/i1/p213_1 (there also seem to be some copies findable by google if you don't have access) – jc Nov 14 at 16:46
- @Piotr: I'm not sure what you mean by 'complete' and 'independent'. You should be aware that the ring $R_n$ of polynomial functions on $n$-tuples of points in $S^2$ that are invariant under permutation of the points and rotations in $S^2$ is not (except for small values of $n$) a free polynomial ring on $2n{-}3$ generators, though it contains subrings that are (which would probably satisfy your notion of 'independent' but not 'complete'). The full $R_n$ (which would probably satisfy your notion of 'complete') is usually a nontrivial quotient of a free polynomial ring on more generators. – Robert Bryant Nov 15 at 13:24
- @Robert: 'complete' - such that two sets of vectors are equivalent iff all invariants coincide; 'independent' - no invariant can be written as a function of other invariants; I know that especially the second thing is nontrivial, but I can live without that. – Piotr Migdal Nov 15 at 16:22
- @Piotr: The tag 'vector' is not at all useful here (or probably anywhere else), though maybe 'lie-groups' would be. You are asking a question about invariant theory related to compact Lie group actions. As Robert Bryant points out, this kind of problem arose in classical invariant theory. But it's certainly not trivial to answer in full detail. I guess that's why the subject died out a few times and had to be re-invented. – Jim Humphreys Nov 15 at 23:26

## 1 Answer

The following construction reduces your problem to a classical and well-studied problem in invariant theory. First, I claim that there is a natural way to interpret an $n$-tuple of points in $S^2$ as a point of $\mathbb{CP}^n$ and vice-versa. This depends on interpreting $S^2$ as $\mathbb{CP}^1$ and the classical fact that the symmetric product of $n$ copies of $\mathbb{CP}^1$ is naturally regarded as $\mathbb{CP}^n$.
Moreover, this can be done in an $\mathrm{SO}(3)$-equivariant way, so that your problem becomes that of finding invariants for an action of $\mathrm{SO}(3)$ on $\mathbb{CP}^n$, a very classical problem.

To see this, remember that $\mathrm{SU}(2)$, which is the double cover of $\mathrm{SO}(3)$, acts on $\mathbb{C}^2$ in the obvious way and that this action is transitive on the $1$-dimensional subspaces $L\subset\mathbb{C}^2$, the set of which is naturally $S^2=\mathbb{CP}^1$. Moreover, the induced action on $S^2$ is that of $\mathrm{SO}(3)=\mathrm{SU}(2)/\{\pm I_2\}$. Thus, nonzero vectors in $\mathbb{C}^2$ up to nonzero complex multiples correspond to points of $S^2$, and, given any $n$-tuple of unit vectors $u_i\in S^2$ for $1\le i\le n$, they can be represented by an $n$-tuple of lines in $\mathbb{C}^2$ of the form $u_i = [v_i] = \mathbb{C}\cdot v_i\subset \mathbb{C}^2$. Now consider the symmetric product $$v_1v_2\cdots v_n\in \mathsf{S}^n(\mathbb{C}^2) \simeq \mathbb{C}^{n+1}$$ The line spanned by this product, $[v_1v_2\cdots v_n]\in \mathbb{CP}^n$, is well-defined, independent of the choice of the $v_i$ to represent the $u_i$. Conversely, since any nonzero complex polynomial in two variables that is homogeneous of degree $n$ can be factored into linear factors uniquely up to complex multiples and permutations, it follows that any element $p\in \mathbb{CP}^n$ can be constructed this way and, moreover, the $n$-tuple of points $u_i\in S^2 = \mathbb{CP}^1$ that gives rise to $p$ is uniquely determined up to permutation.

Now, what about the action of $\mathrm{SO}(3)$? Since this is induced by the action of $\mathrm{SU}(2)$ on $\mathbb{C}^2$, and that action extends in the usual way to the symmetric power $\mathsf{S}^n(\mathbb{C}^2)=\mathbb{C}^{n+1}$, on which it is irreducible (as a complex representation), it follows that this construction is $\mathrm{SO}(3)$-equivariant.

Thus, you are reduced to finding a 'complete' (in your sense) and 'independent' (in your sense) set of invariants for the action of $\mathrm{SO}(3)$ on $\mathbb{CP}^n$ that is induced by the irreducible representation of $\mathrm{SU}(2)$ on $\mathbb{C}^{n+1}$. This is, of course, a very classical problem, about which an enormous amount has been known since the 19th century. In particular, the Clebsch-Gordan formulae can be used to give one a procedure for generating all of the polynomial invariants, but they inevitably have complicated relations among them as soon as $n$ gets bigger than $3$ or $4$.

- Thanks! However, then the problem goes back to my very original one (I went the other way around, hoping that it actually simplifies the problem). – Piotr Migdal Nov 16 at 1:22
- @Piotr: Perhaps you could say why you thought that your formulation could circumvent classical invariant theory. (If I had known that you already knew the classical problem, I wouldn't have spent so much space explaining it.) Also, perhaps you could say what you find unsatisfactory about the algorithm for constructing the ring of invariants using CIT and the Clebsch-Gordan formulae. It's true that explicit answers for each $n$ require further work than CIT, but the answers are known to be complicated as $n$ increases, and no reformulation will do away with that. – Robert Bryant Nov 16 at 13:00
- @Robert The story is that I'm working on quantum information. There, the relation of a permutation-symmetric state of n qubits (n two-dimensional complex vectors) to n points on the 2d real sphere is called "Majorana's stellar representation" (it is basically: sym.
subspace for d=2 -> polynomial -> factorization). (I should have said what I know, excuse me for that.) Such relation is specific to d=2 case; so I thought that maybe in this specific case there is an easier way to calculate invariants (clearly, there is at least an easy way to find some invariants). – Piotr Migdal Mar 18 at 13:42 1 @Piotr: There is an explicit, easily implemented algorithm that generates the space of homogeneous polynomial invariants of any given degree. (The dimension of this finite-dimensional vector space can be found without much difficulty; there is a generating function which is the Poincaré-Hilbert polynomial of the full ring that can be computed fairly explicitly as a rational function). However, determining the algebraic relations among these invariants is nontrivial when $n$ is sufficiently large, and I'm not aware of any simple way of doing it. – Robert Bryant Mar 18 at 14:54 1 @Piotr: degree bounds are not easy either. For binary form CIT there is an old one by Jordan see: www.math.lsa.umich.edu/~hderksen/preprints/bound.ps I don't know if it has been improved recently. – Abdelmalek Abdesselam Mar 18 at 20:38 show 2 more comments
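For readers who want to experiment, here is a minimal numerical sketch (my own illustration, not part of the thread; all function names are mine) of the lowest-order invariants mentioned in the question: the squared length of the mean vector and the spectrum of the inertia matrix, together with a check that they are unchanged under a random rotation. Only NumPy is used.

```python
import numpy as np

def low_order_invariants(vectors):
    """Rotation- and permutation-invariant quantities built from the first and
    second moments of a set of unit vectors (one vector per row)."""
    mean = vectors.mean(axis=0)
    inertia = vectors.T @ vectors                 # 3x3 symmetric "inertia" matrix
    eigenvalues = np.sort(np.linalg.eigvalsh(inertia))
    return np.concatenate(([mean @ mean], eigenvalues))

def random_rotation(rng):
    """A random element of SO(3), via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))                      # standard sign fix
    if np.linalg.det(q) < 0:                      # force determinant +1
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(0)
v = rng.normal(size=(5, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)     # 5 random unit vectors

R = random_rotation(rng)
print(np.allclose(low_order_invariants(v), low_order_invariants(v @ R.T)))  # True
```

Of course, as the answer explains, such low-order invariants are far from a complete set once $n$ grows; they only illustrate the starting point of the question.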
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.949459969997406, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/133248-general-term.html
# Thread:

1. ## General term

I'm having trouble finding the coefficients for the Maclaurin series for $f(x)=\sqrt{1-x}$. I have $\frac{\prod_{k=2}^n(2k-3)}{2^n n!}$. Is this right?

2. By differentiation,

$f(x) = \sqrt{1-x} = (1-x)^{ \frac{1}{2} }$

$f'(x) = -\frac{1}{2} \cdot (1-x)^{- \frac{1}{2} }$

$f''(x) = - \frac{1}{2} \cdot \frac{1}{2} \cdot (1-x)^{- \frac{3}{2} }$

$f'''(x) = - \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{3}{2} \cdot (1-x)^{- \frac{5}{2} }$

Observation is a good tool to conclude that

$f^{(n)}(x) = - \frac{1}{2} \cdot \frac{1 \cdot 3 \cdots (2n-3) }{ 2^{n-1} } \cdot (1-x)^{- \frac{2n-1}{2} }$ (isolate the first $\frac{1}{2}$), for $n \geq 2$.

Therefore,

$f^{(n)}(0) = - \frac{1}{2} \cdot \frac{1 \cdot 3 \cdots (2n-3) }{ 2^{n-1} }$

But if we "fill up" the spaces between the odd numbers,

$1 \cdot 3 \cdot 5 \cdots (2n-3) = 1 \cdot 2 \cdot 3 \cdot 4 \cdots (2n-3) \cdot (2n-2) \cdot \frac{1}{2 \cdot 4 \cdots (2n-2)} = \frac{ (2n-2)!}{ 2^{n-1} (n-1)!}$

so we find that $f^{(n)}(0)$ is also equal to

$- \frac{ (2n-2)! }{ 2^{2n-1} (n-1)! }$

When combined with the factor $\frac{1}{n!}$ to form the coefficient of the Maclaurin series, we obtain an elegant expression:

$- \frac{ (2n-2)! }{ 2^{2n-1} (n-1)! n! } = - \frac{ (2n-2)! }{ 2^{2n-1} (n-1)! n! } \cdot \frac{ (2n-1)(2n) }{ n } \cdot \frac{n}{2n(2n-1)} = - \frac{ \binom{2n}{n} }{ 2^{2n} (2n-1) } = - \frac{1}{4^n} \frac{ \binom{2n}{n} }{ 2n-1}$
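As a sanity check on the closed form derived above (this snippet is mine, not part of the thread), one can compare it against the coefficients produced directly by the generalized binomial series $(1-x)^{1/2} = \sum_{n\ge 0} \binom{1/2}{n}(-x)^n$, using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def binomial_series_coeff(n):
    """Exact coefficient of x^n in (1-x)^(1/2), from the generalized binomial series."""
    c = Fraction(1)
    for k in range(n):                  # C(1/2, n) = prod_{k < n} (1/2 - k) / (k + 1)
        c *= Fraction(1, 2) - k
        c /= k + 1
    return c * (-1) ** n                # the (-x)^n supplies a factor (-1)^n

def closed_form_coeff(n):
    """The 'elegant expression' -C(2n, n) / (4^n (2n - 1)) from the post."""
    return Fraction(-comb(2 * n, n), 4 ** n * (2 * n - 1))

for n in range(1, 10):
    assert binomial_series_coeff(n) == closed_form_coeff(n)

print([closed_form_coeff(n) for n in range(1, 5)])
# [Fraction(-1, 2), Fraction(-1, 8), Fraction(-1, 16), Fraction(-5, 128)]
```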
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8870836496353149, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/180758-polar-curves-symmetries.html
# Thread:

1. ## Polar curves and symmetries

Hi everyone,

Just a quick question about the analysis of polar curves.

Ok. So let $\rho = f(\theta)$ be a polar curve. Let's assume that this function is periodic with period $2 \pi$.

I know that if I have $\rho(-\theta) = \rho(\theta)$, then I can just analyse it on $[0, \pi]$ and complete the curve by a reflection symmetry across the Ox axis.

Now, if I also have $\rho(\pi - \theta) = -\rho(\theta)$, I get the same symmetry! What should I do? Keep restricting the interval of study (say, to $[0, \frac{\pi}{2}]$), or stay with $[0,\pi]$?

Thanks a lot for reading me, and my apologies for the few English mistakes I might have made.

Hugo.
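A quick numerical check (mine, not part of the thread) of why the second relation gives the same symmetry as the first: converting the polar point $(-\rho(\theta), \pi - \theta)$ to Cartesian coordinates shows that it is exactly the reflection of $(\rho(\theta), \theta)$ across the Ox axis.

```python
import numpy as np

def to_cartesian(r, theta):
    """Convert a polar point (with possibly signed radius r) to Cartesian coordinates."""
    return np.array([r * np.cos(theta), r * np.sin(theta)])

r, theta = 1.3, 0.7                           # an arbitrary radius and angle
p = to_cartesian(r, theta)                    # the point (rho(theta), theta)
q = to_cartesian(-r, np.pi - theta)           # the point given by rho(pi - theta) = -rho(theta)
print(np.allclose(q, [p[0], -p[1]]))          # True: q is p reflected across the Ox axis
```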
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9340575933456421, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/8642/does-the-lack-of-modular-nuclearity-in-string-theory-mean-anything/8859
# Does the lack of modular nuclearity in string theory mean anything?

Nuclearity is a postulate in algebraic quantum field theory (AQFT). Basically, it says that thermal states at any temperature always have a thermodynamic limit with extensive quantities. This is violated by string theory at the Hagedorn temperature. Does this mean anything?

-

## 2 Answers

It means that the postulate is incorrect in general. Whenever the number of degrees of freedom is large, there is some "extensivity". For example, a single string stores energy and entropy in many modes labeled by $n$, the Fourier mode. However, it is surely incorrect that the entropy and other quantities are proportional to volume in general. In particular, quantum gravity obeys the holographic principle, which implies that the entropy only scales as a surface area, not the volume.

In that context, the disagreements are not just exceptions: they're generic and they show that "modular nuclearity" is much more wrong than the modest example of the Hagedorn temperature shows. There are many other wrong assumptions underlying AQFT, too. The closer one studies renormalized field theory, string theory, and quantum gravity, the more visible the invalidity of the AQFT prejudices becomes.

-

You don't even need quantum gravity to see that extensivity breaks down when gravity is important. Solve the Tolman-Oppenheimer-Volkov equation for a self-gravitating perfect fluid with equation of state $p=\kappa\rho$. This gives an energy density $\rho \sim r^{-2}$. Cut off the solution at some radial size $R$ and join it on to a solution that goes to zero at a finite distance. Relate the entropy density to the energy density using the EOS, integrate, and you get entropy $\sim R^{\frac{1+3\kappa}{1+\kappa}}$. The entropy of the self-gravitating object never grows faster than the area. – Robert McNees Aug 12 '11 at 18:34

To add a little to Lubos's answer: even classical gravity has a problem with a thermodynamic limit, because a constant energy density leads to a collapse in a finite time in the future or past, so thermodynamics in the presence of gravity is not trivial. You can think of this as a classical residue of holography. I believe the proper analog of a thermal state in gravity (or in string theory) is empty dS space, since this is a maximal-entropy configuration semi-classically, with a finite temperature. de Sitter space is hard to describe in quantum gravity, because of the various paradoxes related to the finite surface area of the bounding horizon.

-
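For completeness, a short sketch (mine, not from the original exchange) of the integration behind the exponent quoted in the comment above, assuming the standard relation $s \propto \rho^{1/(1+\kappa)}$ for a perfect fluid with $p = \kappa \rho$ and no conserved charge:

$$ s \propto \rho^{\frac{1}{1+\kappa}}, \qquad \rho \sim r^{-2} \;\Longrightarrow\; s \sim r^{-\frac{2}{1+\kappa}}, \qquad S \sim \int_0^R s\, r^2\, dr \sim R^{\,3-\frac{2}{1+\kappa}} = R^{\frac{1+3\kappa}{1+\kappa}}. $$

For $0 < \kappa \le 1$ this exponent lies between 1 and 2, so the entropy indeed grows no faster than the area $\sim R^2$, as the comment states.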
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8991619944572449, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Riemann_hypothesis
# Riemann hypothesis The real part (red) and imaginary part (blue) of the Riemann zeta function along the critical line Re(s) = 1/2. The first non-trivial zeros can be seen at Im(s) = ±14.135, ±21.022 and ±25.011. In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann (1859), is a conjecture that the nontrivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields. The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, it is considered by some mathematicians to be the most important unresolved problem in pure mathematics (Bombieri 2000). The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems. The Riemann zeta function ζ(s) is defined for all complex numbers s ≠ 1 with a simple pole at s = 1. It has zeros at the negative even integers (i.e. at s = −2, −4, −6, ...). These are called the trivial zeros. The Riemann hypothesis is concerned with the non-trivial zeros, and states that: The real part of any non-trivial zero of the Riemann zeta function is 1/2. Thus the non-trivial zeros should lie on the critical line, 1/2 + i t, where t is a real number and i is the imaginary unit. There are several nontechnical books on the Riemann hypothesis, such as Derbyshire (2003), Rockmore (2005), Sabbagh (2003), du Sautoy (2003). The books Edwards (1974), Patterson (1988) and Borwein et al. (2008) give mathematical introductions, while Titchmarsh (1986), Ivić (1985) and Karatsuba & Voronin (1992) are advanced monographs. ## Riemann zeta function The Riemann zeta function is defined for complex s with real part greater than 1 by the absolutely convergent infinite series $\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots.$ Leonhard Euler showed that this series equals the Euler product $\zeta(s) = \prod_{p \text{ prime}} \frac{1}{1-p^{-s}}= \frac{1}{1-2^{-s}}\cdot\frac{1}{1-3^{-s}}\cdot\frac{1}{1-5^{-s}}\cdot\frac{1}{1-7^{-s}} \cdots \frac{1}{1-p^{-s}} \cdots$ where the infinite product extends over all prime numbers p, and again converges for complex s with real part greater than 1. The convergence of the Euler product shows that ζ(s) has no zeros in this region, as none of the factors have zeros. The Riemann hypothesis discusses zeros outside the region of convergence of this series, so it needs to be analytically continued to all complex s. This can be done by expressing it in terms of the Dirichlet eta function as follows. If s is greater than one, then the zeta function satisfies $\left(1-\frac{2}{2^s}\right)\zeta(s) = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^s} = \frac{1}{1^s} - \frac{1}{2^s} + \frac{1}{3^s} - \cdots.$ However, the series on the right converges not just when s is greater than one, but more generally whenever s has positive real part. Thus, this alternative series extends the zeta function from Re(s) > 1 to the larger domain Re(s) > 0, excluding the zeros $s = 1 + 2\pi in/\ln(2)$ of $1-2/2^s$ (see Dirichlet eta function). 
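As a quick numerical illustration of the analytic continuation just described (a sketch of mine, not part of the article), one can evaluate the alternating series at a point where the ordinary Dirichlet series also converges and compare with a known value, ζ(2) = π²/6:

```python
import math

def zeta_via_eta(s, terms=200000):
    """zeta(s) for Re(s) > 0, s != 1, via the Dirichlet eta (alternating) series."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1.0 - 2.0 ** (1 - s))

print(zeta_via_eta(2.0), math.pi ** 2 / 6)   # both approximately 1.644934
```

For complex s with small real part the same formula is valid, but the raw series converges too slowly to be practical; real computations use series acceleration or the Riemann–Siegel formula discussed later in the article.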
In the strip 0 < Re(s) < 1 the zeta function also satisfies the functional equation $\zeta(s) = 2^s\pi^{s-1}\ \sin\left(\frac{\pi s}{2}\right)\ \Gamma(1-s)\ \zeta(1-s).$ One may then define ζ(s) for all remaining nonzero complex numbers s by assuming that this equation holds outside the strip as well, and letting ζ(s) equal the right-hand side of the equation whenever s has non-positive real part. If s is a negative even integer then ζ(s) = 0 because the factor sin(πs/2) vanishes; these are the trivial zeros of the zeta function. (If s is a positive even integer this argument does not apply because the zeros of sin are cancelled by the poles of the gamma function as it takes negative integer arguments.) The value ζ(0) = −1/2 is not determined by the functional equation, but is the limiting value of ζ(s) as s approaches zero. The functional equation also implies that the zeta function has no zeros with negative real part other than the trivial zeros, so all non-trivial zeros lie in the critical strip where s has real part between 0 and 1. ## History "…es ist sehr wahrscheinlich, dass alle Wurzeln reell sind. Hiervon wäre allerdings ein strenger Beweis zu wünschen; ich habe indess die Aufsuchung desselben nach einigen flüchtigen vergeblichen Versuchen vorläufig bei Seite gelassen, da er für den nächsten Zweck meiner Untersuchung entbehrlich schien." "…it is very probable that all roots are real. Of course one would wish for a rigorous proof here; I have for the time being, after some fleeting vain attempts, provisionally put aside the search for this, as it appears dispensable for the next objective of my investigation." Riemann's statement of the Riemann hypothesis, from (Riemann 1859). (He was discussing a version of the zeta function, modified so that its roots are real rather than on the critical line.) In his 1859 paper On the Number of Primes Less Than a Given Magnitude Riemann found an explicit formula for the number of primes π(x) less than a given number x. His formula was given in terms of the related function $\Pi(x) = \pi(x) +\tfrac{1}{2}\pi(x^{\frac{1}{2}}) +\tfrac{1}{3}\pi(x^{\frac{1}{3}}) +\tfrac{1}{4}\pi(x^{\frac{1}{4}}) +\tfrac{1}{5}\pi(x^{\frac{1}{5}}) +\tfrac{1}{6}\pi(x^{\frac{1}{6}}) +\cdots$ which counts primes where a prime power pn counts as 1/n of a prime. The number of primes can be recovered from this function by $\pi(x) = \sum_{n=1}^{\infty}\frac{\mu(n)}{n}\Pi(x^{\frac{1}{n}}) = \Pi(x) -\frac{1}{2}\Pi(x^{\frac{1}{2}}) -\frac{1}{3}\Pi(x^{\frac{1}{3}}) -\frac{1}{5}\Pi(x^{\frac{1}{5}}) +\frac{1}{6}\Pi(x^{\frac{1}{6}}) -\cdots,$ where μ is the Möbius function. Riemann's formula is then $\Pi_0(x) = \operatorname{Li}(x) - \sum_\rho \operatorname{Li}(x^\rho) -\log(2) +\int_x^\infty\frac{dt}{t(t^2-1)\log(t)}$ where the sum is over the nontrivial zeros of the zeta function and where Π0 is a slightly modified version of Π that replaces its value at its points of discontinuity by the average of its upper and lower limits: $\Pi_0(x) = \lim_{\varepsilon \to 0}\frac{\Pi(x-\varepsilon)+\Pi(x+\varepsilon)}2.$ The summation in Riemann's formula is not absolutely convergent, but may be evaluated by taking the zeros ρ in order of the absolute value of their imaginary part. 
The function Li occurring in the first term is the (unoffset) logarithmic integral function given by the Cauchy principal value of the divergent integral $\operatorname{Li}(x) = \int_0^x\frac{dt}{\log(t)}.$ The terms Li(xρ) involving the zeros of the zeta function need some care in their definition as Li has branch points at 0 and 1, and are defined (for x > 1) by analytic continuation in the complex variable ρ in the region Re(ρ) > 0, i.e. they should be considered as Ei(ρ ln x). The other terms also correspond to zeros: the dominant term Li(x) comes from the pole at s = 1, considered as a zero of multiplicity −1, and the remaining small terms come from the trivial zeros. For some graphs of the sums of the first few terms of this series see Riesel & Göhl (1970) or Zagier (1977). This formula says that the zeros of the Riemann zeta function control the oscillations of primes around their "expected" positions. Riemann knew that the non-trivial zeros of the zeta function were symmetrically distributed about the line s = 1/2 + it, and he knew that all of its non-trivial zeros must lie in the range 0 ≤ Re(s) ≤ 1. He checked that a few of the zeros lay on the critical line with real part 1/2 and suggested that they all do; this is the Riemann hypothesis. ## Consequences of the Riemann hypothesis The practical uses of the Riemann hypothesis include many propositions which are known to be true under the Riemann hypothesis, and some which can be shown to be equivalent to the Riemann hypothesis. ### Distribution of prime numbers Riemann's explicit formula for the number of primes less than a given number in terms of a sum over the zeros of the Riemann zeta function says that the magnitude of the oscillations of primes around their expected position is controlled by the real parts of the zeros of the zeta function. In particular the error term in the prime number theorem is closely related to the position of the zeros: for example, the supremum of real parts of the zeros is the infimum of numbers β such that the error is O(xβ)(Ingham 1932). Von Koch (1901) proved that the Riemann hypothesis is equivalent to the "best possible" bound for the error of the prime number theorem. A precise version of Koch's result, due to Schoenfeld (1976), says that the Riemann hypothesis is equivalent to $|\pi(x) - \operatorname{Li}(x)| < \frac{1}{8\pi} \sqrt{x} \log(x), \qquad \text{for all } x \ge 2657.$ ### Growth of arithmetic functions The Riemann hypothesis implies strong bounds on the growth of many other arithmetic functions, in addition to the primes counting function above. One example involves the Möbius function μ. The statement that the equation $\frac{1}{\zeta(s)} = \sum_{n=1}^\infty \frac{\mu(n)}{n^s}$ is valid for every s with real part greater than 1/2, with the sum on the right hand side converging, is equivalent to the Riemann hypothesis. From this we can also conclude that if the Mertens function is defined by $M(x) = \sum_{n \le x} \mu(n)$ then the claim that $M(x) = O(x^{\frac{1}{2}+\varepsilon})$ for every positive ε is equivalent to the Riemann hypothesis (Titchmarsh 1986). (For the meaning of these symbols, see Big O notation.) The determinant of the order n Redheffer matrix is equal to M(n), so the Riemann hypothesis can also be stated as a condition on the growth of these determinants. 
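To see the Mertens function behave, here is a small sketch of my own (not from the article) that sieves the Möbius function up to 10^5 and tracks |M(x)|/√x; under the Riemann hypothesis this ratio grows more slowly than x^ε for every ε > 0:

```python
import math

def mobius_sieve(limit):
    """mu(1..limit) by a sieve over primes (index 0 is unused)."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(2 * p, limit + 1, p):
                is_prime[m] = False
            for m in range(p, limit + 1, p):          # one sign flip per distinct prime factor
                mu[m] *= -1
            for m in range(p * p, limit + 1, p * p):  # non-squarefree numbers get mu = 0
                mu[m] = 0
    return mu

N = 10**5
mu = mobius_sieve(N)
M, worst = 0, 0.0
for n in range(1, N + 1):
    M += mu[n]                                        # M(n) = sum of mu up to n
    if n > 1:
        worst = max(worst, abs(M) / math.sqrt(n))
print(worst)   # < 1: |M(x)| stays below sqrt(x) throughout this range
```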
The Riemann hypothesis puts a rather tight bound on the growth of M, since Odlyzko & te Riele (1985) disproved the slightly stronger Mertens conjecture $|M(x)| \le \sqrt x.$ The Riemann hypothesis is equivalent to many other conjectures about the rate of growth of other arithmetic functions aside from μ(n). A typical example is Robin's theorem (Robin 1984), which states that if σ(n) is the divisor function, given by $\sigma(n) = \sum_{d\mid n} d$ then $\sigma(n) < e^\gamma n \log \log n$ for all n > 5040 if and only if the Riemann hypothesis is true, where γ is the Euler–Mascheroni constant. Another example was found by Jérôme Franel, and extended by Landau (see Franel & Landau (1924)). The Riemann hypothesis is equivalent to several statements showing that the terms of the Farey sequence are fairly regular. One such equivalence is as follows: if Fn is the Farey sequence of order n, beginning with 1/n and up to 1/1, then the claim that for all ε > 0 $\sum_{i=1}^m|F_n(i) - \tfrac{i}{m}| = O(n^{\frac{1}{2}+\epsilon})$ is equivalent to the Riemann hypothesis. Here $m = \sum_{i=1}^n\phi(i)$ is the number of terms in the Farey sequence of order n. For an example from group theory, if g(n) is Landau's function given by the maximal order of elements of the symmetric group Sn of degree n, then Massias, Nicolas & Robin (1988) showed that the Riemann hypothesis is equivalent to the bound $\log g(n) < \sqrt{\operatorname{Li}^{-1}(n)}$ for all sufficiently large n. ### Lindelöf hypothesis and growth of the zeta function The Riemann hypothesis has various weaker consequences as well; one is the Lindelöf hypothesis on the rate of growth of the zeta function on the critical line, which says that, for any ε > 0, $\zeta\left(\frac{1}{2} + it\right) = O(t^\varepsilon),$ as t → ∞. The Riemann hypothesis also implies quite sharp bounds for the growth rate of the zeta function in other regions of the critical strip. For example, it implies that $e^\gamma\le \limsup_{t\rightarrow +\infty}\frac{|\zeta(1+it)|}{\log\log t}\le 2e^\gamma$ $\frac{6}{\pi^2}e^\gamma\le \limsup_{t\rightarrow +\infty}\frac{1/|\zeta(1+it)|}{\log\log t}\le \frac{12}{\pi^2}e^\gamma$ so the growth rate of ζ(1+it) and its inverse would be known up to a factor of 2 (Titchmarsh 1986). ### Large prime gap conjecture The prime number theorem implies that on average, the gap between the prime p and its successor is log p. However, some gaps between primes may be much larger than the average. Cramér proved that, assuming the Riemann hypothesis, every gap is O(√p log p). This is a case in which even the best bound that can be proved using the Riemann Hypothesis is far weaker than what seems to be true: Cramér's conjecture implies that every gap is O((log p)2) which, while larger than the average gap, is far smaller than the bound implied by the Riemann hypothesis. Numerical evidence supports Cramér's conjecture (Nicely 1999). ### Criteria equivalent to the Riemann hypothesis Many statements equivalent to the Riemann hypothesis have been found, though so far none of them have led to much progress in proving (or disproving) it. Some typical examples are as follows. (Others involve the divisor function σ(n).) The Riesz criterion was given by Riesz (1916), to the effect that the bound $-\sum_{k=1}^\infty \frac{(-x)^k}{(k-1)! \zeta(2k)}= O\left(x^{\frac{1}{4}+\epsilon}\right)$ holds for all ε > 0 if and only if the Riemann hypothesis holds. 
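Finite ranges of criteria like these are easy to explore by computer, even though such checks prove nothing. For instance, Robin's σ(n) criterion from the "Growth of arithmetic functions" section above can be tested directly for modest n; the sketch below is my own illustration, not part of the article:

```python
import math

EULER_GAMMA = 0.5772156649015329   # Euler–Mascheroni constant

def sigma(n):
    """Sum of divisors of n."""
    total = 0
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
    return total

violations = [n for n in range(5041, 100001)
              if sigma(n) >= math.exp(EULER_GAMMA) * n * math.log(math.log(n))]
print(violations)   # [] -- no violations of Robin's inequality in this range
```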
Nyman (1950) proved that the Riemann Hypothesis is true if and only if the space of functions of the form $f(x) = \sum_{\nu=1}^nc_\nu\rho \left(\frac{\theta_\nu}{x} \right)$ where ρ(z) is the fractional part of z, 0 ≤ θν ≤ 1, and $\sum_{\nu=1}^nc_\nu\theta_\nu=0$, is dense in the Hilbert space L2(0,1) of square-integrable functions on the unit interval.Beurling (1955) extended this by showing that the zeta function has no zeros with real part greater than 1/p if and only if this function space is dense in Lp(0,1) Salem (1953) showed that the Riemann hypothesis is true if and only if the integral equation $\int_{0}^\infty\frac{z^{-\sigma-1}\phi(z)\,dz}{{e^{x/z}}+1}=0$ has no non-trivial bounded solutions φ for 1/2<σ<1. Weil's criterion is the statement that the positivity of a certain function is equivalent to the Riemann hypothesis. Related is Li's criterion, a statement that the positivity of a certain sequence of numbers is equivalent to the Riemann hypothesis. Speiser (1934) proved that the Riemann hypothesis is equivalent to the statement that $\zeta'(s)$, the derivative of ζ(s), has no zeros in the strip $0 < \Re(s) < \frac12.$ That ζ has only simple zeros on the critical line is equivalent (by definition) to its derivative having no zeros on the critical line. ### Consequences of the generalized Riemann hypothesis Several applications use the generalized Riemann hypothesis for Dirichlet L-series or zeta functions of number fields rather than just the Riemann hypothesis. Many basic properties of the Riemann zeta function can easily be generalized to all Dirichlet L-series, so it is plausible that a method that proves the Riemann hypothesis for the Riemann zeta function would also work for the generalized Riemann hypothesis for Dirichlet L-functions. Several results first proved using the generalized Riemann hypothesis were later given unconditional proofs without using it, though these were usually much harder. Many of the consequences on the following list are taken from Conrad (2010). • In 1913, Gronwall showed that the generalized Riemann hypothesis implies that Gauss's list of imaginary quadratic fields with class number 1 is complete, though Baker, Stark and Heegner later gave unconditional proofs of this without using the generalized Riemann hypothesis. • In 1917, Hardy and Littlewood showed that the generalized Riemann hypothesis implies a conjecture of Chebyshev that $\lim_{x\to 1^-}\sum_{p>2}(-1)^{(p+1)/2}x^p=+\infty,$ which says that in some sense primes 3 mod 4 are more common than primes 1 mod 4. • In 1923 Hardy and Littlewood showed that the generalized Riemann hypothesis implies a weak form of the Goldbach conjecture for odd numbers: that every sufficiently large odd number is the sum of three primes, though in 1937 Vinogradov gave an unconditional proof. In 1997 Deshouillers, Effinger, te Riele, and Zinoviev showed that the generalized Riemann hypothesis implies that every odd number greater than 5 is the sum of three primes. • In 1934, Chowla showed that the generalized Riemann hypothesis implies that the first prime in the arithmetic progression a mod m is at most Km2log(m)2 for some fixed constant K. • In 1967, Hooley showed that the generalized Riemann hypothesis implies Artin's conjecture on primitive roots. • In 1973, Weinberger showed that the generalized Riemann hypothesis implies that Euler's list of idoneal numbers is complete. 
• Weinberger (1973) showed that the generalized Riemann hypothesis for the zeta functions of all algebraic number fields implies that any number field with class number 1 is either Euclidean or an imaginary quadratic number field of discriminant −19, −43, −67, or −163.
• In 1976, G. Miller showed that the generalized Riemann hypothesis implies that one can test if a number is prime in polynomial time via the Miller test. In 2002, Manindra Agrawal, Neeraj Kayal and Nitin Saxena proved this result unconditionally using the AKS primality test.
• Odlyzko (1990) discussed how the generalized Riemann hypothesis can be used to give sharper estimates for discriminants and class numbers of number fields.
• Ono & Soundararajan (1997) showed that the generalized Riemann hypothesis implies that Ramanujan's integral quadratic form x^2 + y^2 + 10z^2 represents all integers that it represents locally, with exactly 18 exceptions.

### Excluded middle

Some consequences of the RH are also consequences of its negation, and are thus theorems. In their discussion of the Hecke, Deuring, Mordell, Heilbronn theorem, Ireland & Rosen (1990, p. 359) say:

The method of proof here is truly amazing. If the generalized Riemann hypothesis is true, then the theorem is true. If the generalized Riemann hypothesis is false, then the theorem is true. Thus, the theorem is true!! (punctuation in original)

Care should be taken to understand what is meant by saying the generalized Riemann hypothesis is false: exactly which class of Dirichlet series is supposed to have a counterexample should be clearly indicated to avoid confusion.

#### Littlewood's theorem

This concerns the sign of the error in the prime number theorem. It has been computed that π(x) < Li(x) for all x ≤ 10^23, and no value of x is known for which π(x) > Li(x).

In 1914 Littlewood proved that there are arbitrarily large values of x for which $\pi(x)>\operatorname{Li}(x) +\frac13\frac{\sqrt x}{\log x}\log\log\log x,$ and that there are also arbitrarily large values of x for which $\pi(x)<\operatorname{Li}(x) -\frac13\frac{\sqrt x}{\log x}\log\log\log x.$ Thus the difference π(x) − Li(x) changes sign infinitely many times. Skewes' number is an estimate of the value of x corresponding to the first sign change. His proof is divided into two cases: the RH is assumed to be false (about half a page of Ingham 1932, Chapt. V), and the RH is assumed to be true (about a dozen pages).

#### Gauss's class number conjecture

This is the conjecture (first stated in article 303 of Gauss's Disquisitiones Arithmeticae) that there are only a finite number of imaginary quadratic fields with a given class number. One way to prove it would be to show that as the discriminant D → −∞ the class number h(D) → ∞. The following sequence of theorems involving the Riemann hypothesis is described in Ireland & Rosen 1990, pp. 358–361:

Theorem (Hecke; 1918). Let D < 0 be the discriminant of an imaginary quadratic number field K. Assume the generalized Riemann hypothesis for L-functions of all imaginary quadratic Dirichlet characters. Then there is an absolute constant C such that $h(D) > C\frac{\sqrt{|D|}}{\log |D|}.$

Theorem (Deuring; 1933). If the RH is false then h(D) > 1 if |D| is sufficiently large.

Theorem (Mordell; 1934). If the RH is false then h(D) → ∞ as D → −∞.

Theorem (Heilbronn; 1934). If the generalized RH is false for the L-function of some imaginary quadratic Dirichlet character then h(D) → ∞ as D → −∞.
(In the work of Hecke and Heilbronn, the only L-functions that occur are those attached to imaginary quadratic characters, and it is only for those L-functions that GRH is true or GRH is false is intended; a failure of GRH for the L-function of a cubic Dirichlet character would, strictly speaking, mean GRH is false, but that was not the kind of failure of GRH that Heilbronn had in mind, so his assumption was more restricted than simply GRH is false.) In 1935, Carl Siegel later strengthened the result without using RH or GRH in any way. #### Growth of Euler's totient In 1983 J. L. Nicolas proved (Ribenboim 1996, p. 320) that $\phi(n) < e^{-\gamma}\frac {n} {\log \log n}$ for infinitely many n, where φ(n) is Euler's totient function and γ is Euler's constant. Ribenboim remarks that: The method of proof is interesting, in that the inequality is shown first under the assumption that the Riemann hypothesis is true, secondly under the contrary assumption. ## Generalizations and analogs of the Riemann hypothesis ### Dirichlet L-series and other number fields The Riemann hypothesis can be generalized by replacing the Riemann zeta function by the formally similar, but much more general, global L-functions. In this broader setting, one expects the non-trivial zeros of the global L-functions to have real part 1/2. It is these conjectures, rather than the classical Riemann hypothesis only for the single Riemann zeta function, which accounts for the true importance of the Riemann hypothesis in mathematics. The generalized Riemann hypothesis extends the Riemann hypothesis to all Dirichlet L-functions. In particular it implies the conjecture that Siegel zeros (zeros of L-functions between 1/2 and 1) do not exist. The extended Riemann hypothesis extends the Riemann hypothesis to all Dedekind zeta functions of algebraic number fields. The extended Riemann hypothesis for abelian extension of the rationals is equivalent to the generalized Riemann hypothesis. The Riemann hypothesis can also be extended to the L-functions of Hecke characters of number fields. The grand Riemann hypothesis extends it to all automorphic zeta functions, such as Mellin transforms of Hecke eigenforms. ### Function fields and zeta functions of varieties over finite fields Artin (1924) introduced global zeta functions of (quadratic) function fields and conjectured an analogue of the Riemann hypothesis for them, which has been proven by Hasse in the genus 1 case and by Weil (1948) in general. For instance, the fact that the Gauss sum, of the quadratic character of a finite field of size q (with q odd), has absolute value $\sqrt{q}$ is actually an instance of the Riemann hypothesis in the function field setting. This led Weil (1949) to conjecture a similar statement for all algebraic varieties; the resulting Weil conjectures were proven by Pierre Deligne (1974, 1980). ### Arithmetic zeta functions of arithmetic schemes and their L-factors Arithmetic zeta functions generalise the Riemann and Dedekind zeta functions as well as the zeta functions of varieties over finite fields to every arithmetic scheme or a scheme of finite type over integers. The arithmetic zeta function of a regular connected equidimensional arithmetic scheme of Kronecker dimension n can be factorized into the product of appropriately defined L-factors and an auxiliary factor Jean-Pierre Serre (1970). 
Assuming a functional equation and meromorphic continuation, the generalized Riemann hypothesis for the L-factor states that its zeros inside the critical strip $\Re(s)\in (0,n)$ lie on the central line. Correspondingly, the generalized Riemann hypothesis for the arithmetic zeta function of a regular connected equidimensional arithmetic scheme states that its zeros inside the critical strip lie on vertical lines $\Re(s)=1/2,3/2,\dots,n-1/2$ and its poles inside the critical strip lie on vertical lines $\Re(s)=1, 2, \dots,n-1$. This is known for schemes in positive characteristic and follows from Pierre Deligne (1974, 1980), but remains entirely unknown in characteristic zero. ### Selberg zeta functions Selberg (1956) introduced the Selberg zeta function of a Riemann surface. These are similar to the Riemann zeta function: they have a functional equation, and an infinite product similar to the Euler product but taken over closed geodesics rather than primes. The Selberg trace formula is the analogue for these functions of the explicit formulas in prime number theory. Selberg proved that the Selberg zeta functions satisfy the analogue of the Riemann hypothesis, with the imaginary parts of their zeros related to the eigenvalues of the Laplacian operator of the Riemann surface. ### Ihara zeta functions The Ihara zeta function of a finite graph is an analogue of the Selberg zeta function introduced by Yasutaka Ihara. A regular finite graph is a Ramanujan graph, a mathematical model of efficient communication networks, if and only if its Ihara zeta function satisfies the analogue of the Riemann hypothesis as was pointed out by T. Sunada. ### Montgomery's pair correlation conjecture Montgomery (1973) suggested the pair correlation conjecture that the correlation functions of the (suitably normalized) zeros of the zeta function should be the same as those of the eigenvalues of a random hermitian matrix. Odlyzko (1987) showed that this is supported by large scale numerical calculations of these correlation functions. Montgomery showed that (assuming the Riemann hypothesis) at least 2/3 of all zeros are simple, and a related conjecture is that all zeros of the zeta function are simple (or more generally have no non-trivial integer linear relations between their imaginary parts). Dedekind zeta functions of algebraic number fields, which generalize the Riemann zeta function, often do have multiple complex zeros.[citation needed] This is because the Dedekind zeta functions factorize as a product of powers of Artin L-functions, so zeros of Artin L-functions sometimes give rise to multiple zeros of Dedekind zeta functions. Other examples of zeta functions with multiple zeros are the L-functions of some elliptic curves: these can have multiple zeros at the real point of their critical line; the Birch-Swinnerton-Dyer conjecture predicts that the multiplicity of this zero is the rank of the elliptic curve. ### Other zeta functions There are many other examples of zeta functions with analogues of the Riemann hypothesis, some of which have been proved. Goss zeta functions of function fields have a Riemann hypothesis, proved by Sheats (1998). The main conjecture of Iwasawa theory, proved by Barry Mazur and Andrew Wiles for cyclotomic fields, and Wiles for totally real fields, identifies the zeros of a p-adic L-function with the eigenvalues of an operator, so can be thought of as an analogue of the Hilbert–Pólya conjecture for p-adic L-functions (Wiles 2000). 
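The random-matrix statistics behind Montgomery's pair correlation conjecture discussed above (and behind Odlyzko's computations mentioned elsewhere in this article) are easy to sample. The following sketch is my own illustration rather than anything from the article: it draws a matrix from the Gaussian unitary ensemble and looks at its normalized nearest-neighbour eigenvalue spacings, which exhibit the characteristic "level repulsion" also observed for the zeta zeros.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000

# A GUE-type matrix: Hermitian, built from independent complex Gaussian entries.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2

eigs = np.linalg.eigvalsh(H)
bulk = eigs[N // 4 : 3 * N // 4]        # stay in the bulk, where the density is roughly flat
spacings = np.diff(bulk)
spacings /= spacings.mean()             # crude unfolding: normalize to mean spacing 1

# Level repulsion: very few near-degenerate pairs, unlike independent (Poisson) points,
# for which about 5% of spacings would fall below 0.05.
print(np.mean(spacings < 0.05))         # close to 0 for GUE
```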
## Attempts to prove the Riemann hypothesis Several mathematicians have addressed the Riemann hypothesis, but none of their attempts have yet been accepted as correct solutions. Watkins (2007) lists some incorrect solutions, and more are frequently announced. ### Operator theory Main article: Hilbert–Pólya conjecture Hilbert and Polya suggested that one way to derive the Riemann hypothesis would be to find a self-adjoint operator, from the existence of which the statement on the real parts of the zeros of ζ(s) would follow when one applies the criterion on real eigenvalues. Some support for this idea comes from several analogues of the Riemann zeta functions whose zeros correspond to eigenvalues of some operator: the zeros of a zeta function of a variety over a finite field correspond to eigenvalues of a Frobenius element on an étale cohomology group, the zeros of a Selberg zeta function are eigenvalues of a Laplacian operator of a Riemann surface, and the zeros of a p-adic zeta function correspond to eigenvectors of a Galois action on ideal class groups. Odlyzko (1987) showed that the distribution of the zeros of the Riemann zeta function shares some statistical properties with the eigenvalues of random matrices drawn from the Gaussian unitary ensemble. This gives some support to the Hilbert–Pólya conjecture. In 1999, Michael Berry and Jon Keating conjectured that there is some unknown quantization $\hat H$ of the classical Hamiltonian H = xp so that $\zeta (1/2+i\hat H) = 0$ and even more strongly, that the Riemann zeros coincide with the spectrum of the operator $1/2 + i \hat H$. This is to be contrasted to canonical quantization which leads to the Heisenberg uncertainty principle $[x,p]=1/2$ and the natural numbers as spectrum of the quantum harmonic oscillator. The crucial point is that the Hamiltonian should be a self-adjoint operator so that the quantization would be a realization of the Hilbert–Pólya program. In a connection with this quantum mechanical problem Berry and Connes had proposed that the inverse of the potential of the Hamiltonian is connected to the half-derivative of the function $N(s)= \frac{1}{\pi}\operatorname{Arg}\xi(1/2+i\sqrt s)$ then, in Berry–Connes approach $V^{-1}(x) = \sqrt{4\pi} \frac{d^{1/2}N(x)}{dx^{1/2}}$ (Connes 1999). This yields to a Hamiltonian whose eigenvalues are the square of the imaginary part of the Riemann zeros, and also the functional determinant of this Hamiltonian operator is just the Riemann Xi function. In fact the Riemann Xi function would be proportional to the functional determinant (Hadamard product) $\det(H+1/4+s(s-1))$ as proven by Connes and others, in this approach $\frac{\xi(s)}{\xi(0)}=\frac{\det(H+s(s-1)+1/4)}{\det(H+1/4)}$. The analogy with the Riemann hypothesis over finite fields suggests that the Hilbert space containing eigenvectors corresponding to the zeros might be some sort of first cohomology group of the spectrum Spec(Z) of the integers. Deninger (1998) described some of the attempts to find such a cohomology theory (Leichtnam 2005). Zagier (1983) constructed a natural space of invariant functions on the upper half plane which has eigenvalues under the Laplacian operator corresponding to zeros of the Riemann zeta function, and remarked that in the unlikely event that one could show the existence of a suitable positive definite inner product on this space the Riemann hypothesis would follow. 
Cartier (1982) discussed a related example, where due to a bizarre bug a computer program listed zeros of the Riemann zeta function as eigenvalues of the same Laplacian operator. Schumayer & Hutchinson (2011) surveyed some of the attempts to construct a suitable physical model related to the Riemann zeta function. ### Lee–Yang theorem The Lee–Yang theorem states that the zeros of certain partition functions in statistical mechanics all lie on a "critical line" with real part 0, and this has led to some speculation about a relationship with the Riemann hypothesis (Knauf 1999). ### Turán's result Pál Turán (1948) showed that if the functions $\sum_{n=1}^N n^{-s}$ have no zeros when the real part of s is greater than one then $T(x) = \sum_{n\le x}\frac{\lambda(n)}{n}\ge 0\text{ for } x > 0,$ where λ(n) is the Liouville function given by (−1)r if n has r prime factors. He showed that this in turn would imply that the Riemann hypothesis is true. However Haselgrove (1958) proved that T(x) is negative for infinitely many x (and also disproved the closely related Pólya conjecture), and Borwein, Ferguson & Mossinghoff (2008) showed that the smallest such x is 72185376951205. Spira (1968) showed by numerical calculation that the finite Dirichlet series above for N=19 has a zero with real part greater than 1. Turán also showed that a somewhat weaker assumption, the nonexistence of zeros with real part greater than 1+N−1/2+ε for large N in the finite Dirichlet series above, would also imply the Riemann hypothesis, but Montgomery (1983) showed that for all sufficiently large N these series have zeros with real part greater than 1 + (log log N)/(4 log N). Therefore, Turán's result is vacuously true and cannot be used to help prove the Riemann hypothesis. ### Noncommutative geometry Connes (1999, 2000) has described a relationship between the Riemann hypothesis and noncommutative geometry, and shows that a suitable analog of the Selberg trace formula for the action of the idèle class group on the adèle class space would imply the Riemann hypothesis. Some of these ideas are elaborated in Lapidus (2008). ### Hilbert spaces of entire functions Louis de Branges (1992) showed that the Riemann hypothesis would follow from a positivity condition on a certain Hilbert space of entire functions. However Conrey & Li (2000) showed that the necessary positivity conditions are not satisfied. ### Quasicrystals The Riemann hypothesis implies that the zeros of the zeta function form a quasicrystal, meaning a distribution with discrete support whose Fourier transform also has discrete support. Dyson (2009) suggested trying to prove the Riemann hypothesis by classifying, or at least studying, 1-dimensional quasicrystals. ### Arithmetic zeta functions of models of elliptic curves over number fields When one goes from geometric dimension one, e.g. an algebraic number field, to geometric dimension two, e.g. a regular model of an elliptic curve over a number field, the two-dimensional part of the generalized Riemann hypothesis for the arithmetic zeta function of the model deals with the poles of the zeta function. In dimension one the study of the zeta integral in Tate's thesis does not lead to new important information on the Riemann hypothesis. Contrary to this, in dimension two work of Ivan Fesenko on two-dimensional generalisation of Tate's thesis includes an integral representation of a zeta integral closely related to the zeta function. 
In this new situation, not possible in dimension one, the poles of the zeta function can be studied via the zeta integral and associated adele groups. A related conjecture of Fesenko (2010) on the positivity of the fourth derivative of a boundary function associated to the zeta integral essentially implies the pole part of the generalized Riemann hypothesis. Suzuki (2011) proved that the latter, together with some technical assumptions, implies Fesenko's conjecture.

### Multiple zeta functions

Deligne's proof of the Riemann hypothesis over finite fields used the zeta functions of product varieties, whose zeros and poles correspond to sums of zeros and poles of the original zeta function, in order to bound the real parts of the zeros of the original zeta function. By analogy, Kurokawa (1992) introduced multiple zeta functions whose zeros and poles correspond to sums of zeros and poles of the Riemann zeta function. To make the series converge he restricted to sums of zeros or poles all with non-negative imaginary part. So far, the known bounds on the zeros and poles of the multiple zeta functions are not strong enough to give useful estimates for the zeros of the Riemann zeta function.

## Location of the zeros

### Number of zeros

The functional equation combined with the argument principle implies that the number of zeros of the zeta function with imaginary part between 0 and T is given by $N(T)=\frac{1}{\pi}\mathop{\mathrm{Arg}}(\xi(s)) = \frac{1}{\pi}\mathop{\mathrm{Arg}}(\Gamma(\tfrac{s}{2})\pi^{-\frac{s}{2}}\zeta(s)s(s-1)/2)$ for s=1/2+iT, where the argument is defined by varying it continuously along the line with Im(s)=T, starting with argument 0 at ∞+iT. This is the sum of a large but well understood term $\frac{1}{\pi}\mathop{\mathrm{Arg}}(\Gamma(\tfrac{s}{2})\pi^{-s/2}s(s-1)/2) = \frac{T}{2\pi}\log\frac{T}{2\pi}-\frac{T}{2\pi} +7/8+O(1/T)$ and a small but rather mysterious term $S(T) = \frac{1}{\pi}\mathop{\mathrm{Arg}}(\zeta(1/2+iT)) =O(\log(T)).$

So the density of zeros with imaginary part near T is about log(T)/2π, and the function S describes the small deviations from this. The function S(t) jumps by 1 at each zero of the zeta function, and for t ≥ 8 it decreases monotonically between zeros with derivative close to −log t.

Karatsuba (1996) proved that every interval (T, T+H] for $H \ge T^{\frac{27}{82}+\varepsilon}$ contains at least $H(\ln T)^{\frac{1}{3}}e^{-c\sqrt{\ln\ln T}}$ points where the function S(t) changes sign.

Selberg (1946) showed that the average moments of even powers of S are given by $\int_0^T|S(t)|^{2k}dt = \frac{(2k)!}{k!(2\pi)^{2k}}T(\log \log T)^k + O(T(\log \log T)^{k-1/2}).$ This suggests that S(T)/(log log T)^{1/2} resembles a Gaussian random variable with mean 0 and variance 1/(2π²) (Ghosh (1983) proved this fact). In particular |S(T)| is usually somewhere around (log log T)^{1/2}, but occasionally much larger. The exact order of growth of S(T) is not known. There has been no unconditional improvement to Riemann's original bound S(T)=O(log T), though the Riemann hypothesis implies the slightly smaller bound S(T)=O(log T/log log T) (Titchmarsh 1985). The true order of magnitude may be somewhat less than this, as random functions with the same distribution as S(T) tend to have growth of order about (log T)^{1/2}. In the other direction it cannot be too small: Selberg (1946) showed that S(T) ≠ o((log T)^{1/3}/(log log T)^{7/3}), and assuming the Riemann hypothesis Montgomery showed that S(T) ≠ o((log T)^{1/2}/(log log T)^{1/2}).
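The decomposition of N(T) above into a smooth main term and the small term S(T) can be reproduced for small T with a few lines of code. This sketch is mine, not part of the article; it uses mpmath's zetazero routine for the ordinates of the nontrivial zeros.

```python
import mpmath as mp

def smooth_term(T):
    """The main term T/(2 pi) log(T/(2 pi)) - T/(2 pi) + 7/8 from the text."""
    x = T / (2 * mp.pi)
    return x * mp.log(x) - x + mp.mpf(7) / 8

T = 50
count, n = 0, 1
while mp.im(mp.zetazero(n)) < T:     # zetazero(n) is the n-th zero 1/2 + i*gamma_n
    count += 1
    n += 1

S_approx = count - smooth_term(T)
print(count, mp.nstr(smooth_term(T), 6), mp.nstr(S_approx, 3))
# 10 zeros below T = 50; the smooth term is about 9.4, so S(50) is roughly 0.6,
# consistent with the remark below that |S(T)| < 1 for T < 280.
```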
Numerical calculations confirm that S grows very slowly: |S(T)| < 1 for T < 280, |S(T)| < 2 for T < 6800000, and the largest value of |S(T)| found so far is not much larger than 3 (Odlyzko 2002). Riemann's estimate S(T) = O(log T) implies that the gaps between zeros are bounded, and Littlewood improved this slightly, showing that the gaps between their imaginary parts tends to 0. ### Theorem of Hadamard and de la Vallée-Poussin Hadamard (1896) and de la Vallée-Poussin (1896) independently proved that no zeros could lie on the line Re(s) = 1. Together with the functional equation and the fact that there are no zeros with real part greater than 1, this showed that all non-trivial zeros must lie in the interior of the critical strip 0 < Re(s) < 1. This was a key step in their first proofs of the prime number theorem. Both the original proofs that the zeta function has no zeros with real part 1 are similar, and depend on showing that if ζ(1+it) vanishes, then ζ(1+2it) is singular, which is not possible. One way of doing this is by using the inequality $|\zeta(\sigma)^3\zeta(\sigma+it)^4\zeta(\sigma+2it)|\ge 1$ for σ > 1, t real, and looking at the limit as σ → 1. This inequality follows by taking the real part of the log of the Euler product to see that $|\zeta(\sigma+it)| = \exp\Re\sum_{p^n}\frac{p^{-n(\sigma+it)}}{n}=\exp\sum_{p^n}\frac{p^{-n\sigma}\cos(t\log p^n)}{n},$ where the sum is over all prime powers pn, so that $|\zeta(\sigma)^3\zeta(\sigma+it)^4\zeta(\sigma+2it)| = \exp\sum_{p^n}p^{-n\sigma}\frac{3+4\cos(t\log p^n)+\cos(2t\log p^n)}{n}$ which is at least 1 because all the terms in the sum are positive, due to the inequality $3+4\cos(\theta)+\cos(2\theta) = 2 (1+\cos(\theta))^2\ge0.$ ### Zero-free regions De la Vallée-Poussin (1899-1900) proved that if σ+it is a zero of the Riemann zeta function, then 1−σ ≥ C/log(t) for some positive constant C. In other words zeros cannot be too close to the line σ = 1: there is a zero-free region close to this line. This zero-free region has been enlarged by several authors.Ford (2002) gave a version with explicit numerical constants: ζ(σ + it) ≠ 0 whenever |t| ≥ 3 and $\sigma\ge 1-\frac{1}{57.54(\log{|t|})^{2/3}(\log{\log{|t|}})^{1/3}}.$ ## Zeros on the critical line Hardy (1914) and Hardy & Littlewood (1921) showed there are infinitely many zeros on the critical line, by considering moments of certain functions related to the zeta function. Selberg (1942) proved that at least a (small) positive proportion of zeros lie on the line. Levinson (1974) improved this to one-third of the zeros by relating the zeros of the zeta function to those of its derivative, and Conrey (1989) improved this further to two-fifths. Most zeros lie close to the critical line. More precisely, Bohr & Landau (1914) showed that for any positive ε, all but an infinitely small proportion of zeros lie within a distance ε of the critical line. Ivić (1985) gives several more precise versions of this result, called zero density estimates, which bound the number of zeros in regions with imaginary part at most T and real part at least 1/2+ε. ### Hardy–Littlewood conjectures In 1914 Godfrey Harold Hardy proved that $\zeta\left(\tfrac{1}{2}+it\right)$ has infinitely many real zeros. Let N(T) be the total number of real zeros, $N_0(T)$ be the total number of zeros of odd order of the function $\zeta\left(\tfrac{1}{2}+it\right),$ lying on the interval (0, T]. 
The next two conjectures of Hardy and John Edensor Littlewood on the distance between real zeros of $\zeta\left(\tfrac{1}{2}+it\right)$ and on the density of zeros of $\zeta\left(\tfrac{1}{2}+it\right)$ on intervals (T, T+H] for sufficiently large T > 0, $H = T^{a + \varepsilon}$ and with the value of a > 0 as small as possible, where ε > 0 is an arbitrarily small number, open two new directions in the investigation of the Riemann zeta function:

1. for any ε > 0 there exists $T_0 = T_0(\varepsilon) > 0$ such that for $T \geq T_0$ and $H=T^{0.25+\varepsilon}$ the interval $(T,T+H]$ contains a zero of odd order of the function $\zeta\bigl(\tfrac{1}{2}+it\bigr)$;
2. for any ε > 0 there exist $T_0 = T_0(\varepsilon) > 0$ and c = c(ε) > 0 such that for $T \geq T_0$ and $H=T^{0.5+\varepsilon}$ the inequality $N_0(T+H)-N_0(T) \geq cH$ holds.

### Selberg conjecture

Atle Selberg (1942) investigated the problem of Hardy–Littlewood 2 and proved that for any ε > 0 there exist $T_0 = T_0(\varepsilon) > 0$ and c = c(ε) > 0 such that for $T \geq T_0$ and $H=T^{0.5+\varepsilon}$ the inequality $N(T+H)-N(T) \geq cH\log T$ holds. Selberg conjectured that this could be tightened to $H=T^{0.5}$.

A. A. Karatsuba (1984a, 1984b, 1985) proved that for a fixed ε satisfying the condition 0 < ε < 0.001, a sufficiently large T and $H = T^{a+\varepsilon}$, $a = \tfrac{27}{82} = \tfrac{1}{3} -\tfrac{1}{246}$, the interval (T, T+H) contains at least cH ln(T) real zeros of the Riemann zeta function $\zeta\left(\tfrac{1}{2}+it\right)$, and therefore confirmed the Selberg conjecture. The estimates of Selberg and Karatsuba cannot be improved with respect to the order of growth as T → ∞.

Karatsuba (1992) proved that an analog of the Selberg conjecture holds for almost all intervals (T, T+H], $H = T^{\varepsilon}$, where ε is an arbitrarily small fixed positive number. The Karatsuba method makes it possible to investigate zeros of the Riemann zeta-function on "supershort" intervals of the critical line, that is, on intervals (T, T+H] whose length H grows more slowly than any power of T, even an arbitrarily small one. In particular, he proved that for any given numbers ε, $\varepsilon_1$ satisfying the conditions $0<\varepsilon, \varepsilon_{1}<1$, almost all intervals (T, T+H] with $H\ge\exp{\{(\ln T)^{\varepsilon}\}}$ contain at least $H(\ln T)^{1-\varepsilon_{1}}$ zeros of the function $\zeta\left(\tfrac{1}{2}+it\right)$. This estimate is quite close to the one that follows from the Riemann hypothesis.

### Numerical calculations

*Absolute value of the ζ-function*

The function $\pi^{-\frac{s}{2}}\Gamma(\tfrac{s}{2})\zeta(s)$ has the same zeros as the zeta function in the critical strip, and is real on the critical line because of the functional equation, so one can prove the existence of zeros exactly on the real line between two points by checking numerically that the function has opposite signs at these points. Usually one writes $\zeta(\tfrac{1}{2} +it) = Z(t)e^{-i\theta(t)}$ where Hardy's function Z and the Riemann–Siegel theta function θ are uniquely defined by this and the condition that they are smooth real functions with θ(0)=0. By finding many intervals where the function Z changes sign one can show that there are many zeros on the critical line. To verify the Riemann hypothesis up to a given imaginary part T of the zeros, one also has to check that there are no further zeros off the line in this region.
This can be done by calculating the total number of zeros in the region and checking that it is the same as the number of zeros found on the line. This allows one to verify the Riemann hypothesis computationally up to any desired value of T (provided all the zeros of the zeta function in this region are simple and on the critical line). Some calculations of zeros of the zeta function are listed below. So far all zeros that have been checked are on the critical line and are simple. (A multiple zero would cause problems for the zero finding algorithms, which depend on finding sign changes between zeros.) For tables of the zeros, see Haselgrove & Miller (1960) or Odlyzko. Year Number of zeros Author 1859? 3 B. Riemann used the Riemann–Siegel formula (unpublished, but reported in Siegel 1932). 1903 15 J. P. Gram (1903) used Euler–Maclaurin summation and discovered Gram's law. He showed that all 10 zeros with imaginary part at most 50 range lie on the critical line with real part 1/2 by computing the sum of the inverse 10th powers of the roots he found. 1914 79 (γn ≤ 200) R. J. Backlund (1914) introduced a better method of checking all the zeros up to that point are on the line, by studying the argument S(T) of the zeta function. 1925 138 (γn ≤ 300) J. I. Hutchinson (1925) found the first failure of Gram's law, at the Gram point g126. 1935 195 E. C. Titchmarsh (1935) used the recently rediscovered Riemann–Siegel formula, which is much faster than Euler–Maclaurin summation. It takes about O(T3/2+ε) steps to check zeros with imaginary part less than T, while the Euler–Maclaurin method takes about O(T2+ε) steps. 1936 1041 E. C. Titchmarsh (1936) and L. J. Comrie were the last to find zeros by hand. 1953 1104 A. M. Turing (1953) found a more efficient way to check that all zeros up to some point are accounted for by the zeros on the line, by checking that Z has the correct sign at several consecutive Gram points and using the fact that S(T) has average value 0. This requires almost no extra work because the sign of Z at Gram points is already known from finding the zeros, and is still the usual method used. This was the first use of a digital computer to calculate the zeros. 1956 15000 D. H. Lehmer (1956) discovered a few cases where the zeta function has zeros that are "only just" on the line: two zeros of the zeta function are so close together that it is unusually difficult to find a sign change between them. This is called "Lehmer's phenomenon", and first occurs at the zeros with imaginary parts 7005.063 and 7005.101, which differ by only .04 while the average gap between other zeros near this point is about 1. 1956 25000 D. H. Lehmer 1958 35337 N. A. Meller 1966 250000 R. S. Lehman 1968 3500000 Rosser, Yohe & Schoenfeld (1969) stated Rosser's rule (described below). 1977 40000000 R. P. Brent 1979 81000001 R. P. Brent 1982 200000001 R. P. Brent, J. van de Lune, H. J. J. te Riele, D. T. Winter 1983 300000001 J. van de Lune, H. J. J. te Riele 1986 1500000001 van de Lune, te Riele & Winter (1986) gave some statistical data about the zeros and give several graphs of Z at places where it has unusual behavior. 1987 A few of large (~1012) height A. M. Odlyzko (1987) computed smaller numbers of zeros of much larger height, around 1012, to high precision to check Montgomery's pair correlation conjecture. 1992 A few of large (~1020) height A. M. Odlyzko (1992) computed a 175 million zeroes of heights around 1020 and a few more of heights around 2×1020, and gave an extensive discussion of the results. 
| 1998 | 10000 of large (~$10^{21}$) height | A. M. Odlyzko (1998) computed some zeros of height about $10^{21}$. |
| 2001 | 10000000000 | J. van de Lune (unpublished) |
| 2004 | 900000000000 | S. Wedeniwski (ZetaGrid distributed computing) |
| 2004 | 10000000000000 and a few of large (up to ~$10^{24}$) heights | X. Gourdon (2004) and Patrick Demichel used the Odlyzko–Schönhage algorithm. They also checked two billion zeros around heights $10^{13}$, $10^{14}$, ..., $10^{24}$. |

### Gram points

A Gram point is a point on the critical line 1/2 + it where the zeta function is real and non-zero. Using the expression for the zeta function on the critical line, $\zeta(1/2 + it) = Z(t)e^{-i\theta(t)}$, where Hardy's function Z is real for real t and θ is the Riemann–Siegel theta function, we see that zeta is real when sin(θ(t)) = 0. This implies that θ(t) is an integer multiple of π, which allows the location of Gram points to be calculated fairly easily by inverting the formula for θ. They are usually numbered as $g_n$ for n = 0, 1, ..., where $g_n$ is the unique solution of θ(t) = nπ.

Gram observed that there was often exactly one zero of the zeta function between any two Gram points; Hutchinson called this observation Gram's law. There are several other closely related statements that are also sometimes called Gram's law: for example, $(-1)^n Z(g_n)$ is usually positive, or Z(t) usually has opposite sign at consecutive Gram points. The imaginary parts $\gamma_n$ of the first few zeros and the first few Gram points $g_n$ are given in the following table

|       |       | g−1   | γ1     | g0     | γ2     | g1     | γ3     | g2     | γ4     | g3     | γ5     | g4     | γ6     | g5     |
|-------|-------|-------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 0.000 | 3.436 | 9.667 | 14.135 | 17.846 | 21.022 | 23.170 | 25.011 | 27.670 | 30.425 | 31.718 | 32.935 | 35.467 | 37.586 | 38.999 |

Figure: the values of ζ(1/2+it) in the complex plane for 0 ≤ t ≤ 34; for t=0, ζ(1/2) ≈ -1.460 corresponds to the leftmost point of the curve. Gram's law states that the curve usually crosses the real axis once between zeros.

The first failure of Gram's law occurs at the 127th zero and the Gram point g126, which are in the "wrong" order: γ127 ≈ 282.465 lies just beyond g126 ≈ 282.455.

| g124 | γ126 | g125 | g126 | γ127 | γ128 | g127 | γ129 | g128 |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| 279.148 | 279.229 | 280.802 | 282.455 | 282.465 | 283.211 | 284.104 | 284.836 | 285.752 |

A Gram point t is called good if the zeta function is positive at 1/2 + it. The indices of the "bad" Gram points where Z has the "wrong" sign are 126, 134, 195, 211, ... (sequence in OEIS). A Gram block is an interval bounded by two good Gram points such that all the Gram points between them are bad. A refinement of Gram's law called Rosser's rule due to Rosser, Yohe & Schoenfeld (1969) says that Gram blocks often have the expected number of zeros in them (the same as the number of Gram intervals), even though some of the individual Gram intervals in the block may not have exactly one zero in them. For example, the interval bounded by g125 and g127 is a Gram block containing a unique bad Gram point g126, and contains the expected number 2 of zeros although neither of its two Gram intervals contains a unique zero. Rosser et al. checked that there were no exceptions to Rosser's rule in the first 3 million zeros, although there are infinitely many exceptions to Rosser's rule over the entire zeta function.
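As a minimal illustration of the sign-change method and of Gram's law (this sketch is an addition to the text, not part of the original article; it assumes the Python library mpmath, whose `siegelz`, `grampoint` and `zetazero` functions compute Hardy's Z, the Gram points and the zeros), one can count sign changes of Z on a few Gram intervals and then look at the first bad Gram point g126 from the table above:

```python
from mpmath import mp, siegelz, grampoint, zetazero

mp.dps = 15  # working precision (decimal digits)

def sign_changes(a, b, samples=40):
    """Count sign changes of Hardy's Z function on [a, b] by naive sampling."""
    ts = [a + (b - a) * k / samples for k in range(samples + 1)]
    vals = [siegelz(t) for t in ts]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

# Gram's law: a Gram interval (g_{n-1}, g_n] usually contains exactly one zero.
for n in range(1, 7):
    print(n, sign_changes(grampoint(n - 1), grampoint(n)))  # each prints 1 here

# The first "bad" Gram point is g_126: Z has the "wrong" (negative) sign there,
# and the 127th zero lies just past it, as in the table above.
print(siegelz(grampoint(126)))             # a small negative number
print(zetazero(127).imag, grampoint(126))  # ~282.465 vs ~282.455
```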
Gram's rule and Rosser's rule both say that in some sense zeros do not stray too far from their expected positions. The distance of a zero from its expected position is controlled by the function S defined above, which grows extremely slowly: its average value is of the order of $(\log \log T)^{1/2}$, which only reaches 2 for T around $10^{24}$. This means that both rules hold most of the time for small T but eventually break down often. Indeed, Trudgian (2011) showed that both Gram's law and Rosser's rule fail in a positive proportion of cases. To be more specific, it is expected that in the long run about 73% of Gram intervals contain exactly one zero, while about 14% contain no zero and about 13% contain two zeros.

## Arguments for and against the Riemann hypothesis

Mathematical papers about the Riemann hypothesis tend to be cautiously noncommittal about its truth. Of authors who express an opinion, most of them, such as Riemann (1859) or Bombieri (2000), imply that they expect (or at least hope) that it is true. The few authors who express serious doubt about it include Ivić (2008), who lists some reasons for being skeptical, and Littlewood (1962), who flatly states that he believes it to be false, and that there is no evidence whatever for it and no imaginable reason for it to be true. The consensus of the survey articles (Bombieri 2000, Conrey 2003, and Sarnak 2008) is that the evidence for it is strong but not overwhelming, so that while it is probably true there is some reasonable doubt about it. Some of the arguments for (or against) the Riemann hypothesis are listed by Sarnak (2008), Conrey (2003), and Ivić (2008), and include the following reasons.

• Several analogues of the Riemann hypothesis have already been proved. The proof of the Riemann hypothesis for varieties over finite fields by Deligne (1974) is possibly the single strongest theoretical reason in favor of the Riemann hypothesis. This provides some evidence for the more general conjecture that all zeta functions associated with automorphic forms satisfy a Riemann hypothesis, which includes the classical Riemann hypothesis as a special case. Similarly Selberg zeta functions satisfy the analogue of the Riemann hypothesis, and are in some ways similar to the Riemann zeta function, having a functional equation and an infinite product expansion analogous to the Euler product expansion. However there are also some major differences; for example they are not given by Dirichlet series. The Riemann hypothesis for the Goss zeta function was proved by Sheats (1998). In contrast to these positive examples, however, some Epstein zeta functions do not satisfy the Riemann hypothesis, even though they have an infinite number of zeros on the critical line (Titchmarsh 1986). These functions are quite similar to the Riemann zeta function, and have a Dirichlet series expansion and a functional equation, but the ones known to fail the Riemann hypothesis do not have an Euler product and are not directly related to automorphic representations.

• The numerical verification that many zeros lie on the line seems at first sight to be strong evidence for it. However analytic number theory has had many conjectures supported by large amounts of numerical evidence that turned out to be false.
See Skewes number for a notorious example, where the first exception to a plausible conjecture related to the Riemann hypothesis probably occurs around $10^{316}$; a counterexample to the Riemann hypothesis with imaginary part this size would be far beyond anything that can currently be computed. The problem is that the behavior is often influenced by very slowly increasing functions such as log log T that tend to infinity, but so slowly that this cannot be detected by computation. Such functions occur in the theory of the zeta function controlling the behavior of its zeros; for example the function S(T) above has average size around $(\log \log T)^{1/2}$. As S(T) jumps by at least 2 at any counterexample to the Riemann hypothesis, one might expect any counterexamples to the Riemann hypothesis to start appearing only when S(T) becomes large. It is never much more than 3 as far as it has been calculated, but is known to be unbounded, suggesting that calculations may not have yet reached the region of typical behavior of the zeta function.

• Denjoy's probabilistic argument for the Riemann hypothesis (Edwards 1974) is based on the observation that if μ(x) is a random sequence of "1"s and "−1"s then, for every ε > 0, the partial sums $M(x) = \sum_{n \le x} \mu(n)$ (the values of which are positions in a simple random walk) satisfy the bound $M(x) = O(x^{1/2+\varepsilon})$ with probability 1. The Riemann hypothesis is equivalent to this bound for the Möbius function μ and the Mertens function M derived in the same way from it. In other words, the Riemann hypothesis is in some sense equivalent to saying that μ(x) behaves like a random sequence of coin tosses. When μ(x) is non-zero its sign gives the parity of the number of prime factors of x, so informally the Riemann hypothesis says that the parity of the number of prime factors of an integer behaves randomly. Such probabilistic arguments in number theory often give the right answer, but tend to be very hard to make rigorous, and occasionally give the wrong answer for some results, such as Maier's theorem.

• The calculations in Odlyzko (1987) show that the zeros of the zeta function behave very much like the eigenvalues of a random Hermitian matrix, suggesting that they are the eigenvalues of some self-adjoint operator, which would imply the Riemann hypothesis. However all attempts to find such an operator have failed.

• There are several theorems, such as Goldbach's weak conjecture for sufficiently large odd numbers, that were first proved using the generalized Riemann hypothesis, and later shown to be true unconditionally. This could be considered as weak evidence for the generalized Riemann hypothesis, as several of its "predictions" turned out to be true.

• Lehmer's phenomenon (Lehmer 1956), where two zeros are sometimes very close, is sometimes given as a reason to disbelieve in the Riemann hypothesis. However one would expect this to happen occasionally just by chance even if the Riemann hypothesis were true, and Odlyzko's calculations suggest that nearby pairs of zeros occur just as often as predicted by Montgomery's conjecture.

• Patterson (1988) suggests that the most compelling reason for the Riemann hypothesis for most mathematicians is the hope that primes are distributed as regularly as possible.

## References

• Artin, Emil (1924), "Quadratische Körper im Gebiete der höheren Kongruenzen. II.
Analytischer Teil", 19 (1): 207–246, doi:10.1007/BF01181075 • Beurling, Arne (1955), "A closure problem related to the Riemann zeta-function", 41 (5): 312–314, doi:10.1073/pnas.41.5.312, MR 0070655 • Bohr, H.; Landau, E. (1914), "Ein Satz über Dirichletsche Reihen mit Anwendung auf die ζ-Funktion und die L-Funktionen", 37 (1): 269–272, doi:10.1007/BF03014823 • Bombieri, Enrico (2000), The Riemann Hypothesis - official problem description (PDF), Clay Mathematics Institute, retrieved 2008-10-25  Reprinted in (Borwein et al. 2008). • Borwein, Peter; Choi, Stephen; Rooney, Brendan et al., eds. (2008), The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike, CMS Books in Mathematics, New York: Springer, doi:10.1007/978-0-387-72126-2, ISBN 978-0-387-72125-5  `|displayeditors=` suggested (help) • Borwein, Peter; Ferguson, Ron; Mossinghoff, Michael J. (2008), "Sign changes in sums of the Liouville function", 77 (263): 1681–1694, doi:10.1090/S0025-5718-08-02036-X, MR 2398787 • de Branges, Louis (1992), "The convergence of Euler products", Journal of Functional Analysis 107 (1): 122–210, doi:10.1016/0022-1236(92)90103-P, MR 1165869 • Cartier, P. (1982), "Comment l'hypothèse de Riemann ne fut pas prouvée", Seminar on Number Theory, Paris 1980-81 (Paris, 1980/1981), Progr. Math. 22, Boston, MA: Birkhäuser Boston, pp. 35–48, MR 693308 • Connes, Alain (1999), "Trace formula in noncommutative geometry and the zeros of the Riemann zeta function", Selecta Mathematica. New Series 5 (1): 29–106, arXiv:math/9811068, doi:10.1007/s000290050042, MR 1694895 • Connes, Alain (2000), "Noncommutative geometry and the Riemann zeta function", Mathematics: frontiers and perspectives, Providence, R.I.: American Mathematical Society, pp. 35–54, MR 1754766 • Conrey, J. B. (1989), "More than two fifths of the zeros of the Riemann zeta function are on the critical line", J. Reine angew. Math. 399: 1–16, MR 1004130 • Conrey, J. Brian (2003), "The Riemann Hypothesis" (PDF), Notices of the American Mathematical Society: 341–353  Reprinted in (Borwein et al. 2008). • Conrey, J. B.; Li, Xian-Jin (2000), "A note on some positivity conditions related to zeta and L-functions", International Mathematics Research Notices 2000 (18): 929–940, arXiv:math/9812166, doi:10.1155/S1073792800000489, MR 1792282 • Deligne, Pierre (1974), "La conjecture de Weil. I", 43: 273–307, doi:10.1007/BF02684373, MR 0340258 • Deligne, Pierre (1980), "La conjecture de Weil : II", Publications Mathématiques de l'IHÉS 52: 137–252, doi:10.1007/BF02684780 • Deninger, Christopher (1998), Some analogies between number theory and dynamical systems on foliated spaces, "Proceedings of the International Congress of Mathematicians, Vol. I (Berlin, 1998)", Documenta Mathematica: 163–186, MR 1648030 • Derbyshire, John (2003), , Joseph Henry Press, Washington, DC, ISBN 978-0-309-08549-6, MR 1968857 • Dyson, Freeman (2009), "Birds and frogs", 56 (2): 212–223, MR 2483565 • Edwards, H. M. (1974), Riemann's Zeta Function, New York: Dover Publications, ISBN 978-0-486-41740-0, MR 0466039 • Fesenko, Ivan (2010), "Analysis on arithmetic schemes. II", Journal of K-theory 5: 437–557 • Ford, Kevin (2002), "Vinogradov's integral and bounds for the Riemann zeta function", Proceedings of the London Mathematical Society. Third Series 85 (3): 565–633, doi:10.1112/S0024611502013655, MR 1936814 • Franel, J.; Landau, E. 
(1924), "Les suites de Farey et le problème des nombres premiers" (Franel, 198-201); "Bemerkungen zu der vorstehenden Abhandlung von Herrn Franel (Landau, 202-206)", Göttinger Nachrichten: 198–206 • Ghosh, Amit (1983), "On the Riemann zeta function—mean value theorems and the distribution of |S(T)|", J. Number Theory 17: 93–102, doi:10.1016/0022-314X(83)90010-0 • • Gram, J. P. (1903), "Note sur les zéros de la fonction ζ(s) de Riemann", Acta Mathematica 27: 289–304, doi:10.1007/BF02421310 • Hadamard, Jacques (1896), "Sur la distribution des zéros de la fonction ζ(s) et ses conséquences arithmétiques", Bulletin Société Mathématique de France 14: 199–220  Reprinted in (Borwein et al. 2008). • Hardy, G. H. (1914), "Sur les Zéros de la Fonction ζ(s) de Riemann", C. R. Acad. Sci. Paris 158: 1012–1014, JFM 45.0716.04  Reprinted in (Borwein et al. 2008). • Hardy, G. H.; Littlewood, J. E. (1921), "The zeros of Riemann's zeta-function on the critical line", Math. Z. 10 (3–4): 283–317, doi:10.1007/BF01211614 • Haselgrove, C. B. (1958), "A disproof of a conjecture of Pólya", Mathematika 5 (2): 141–145, doi:10.1112/S0025579300001480, MR 0104638  Reprinted in (Borwein et al. 2008). • Haselgrove, C. B.; Miller, J. C. P. (1960), Tables of the Riemann zeta function, Royal Society Mathematical Tables, Vol. 6, Cambridge University Press, ISBN 978-0-521-06152-0, MR 0117905  Review • Hutchinson, J. I. (1925), "On the Roots of the Riemann Zeta-Function", 27 (1): 49–60, doi:10.2307/1989163, JSTOR 1989163 • Ingham, A.E. (1932), The Distribution of Prime Numbers, Cambridge Tracts in Mathematics and Mathematical Physics 30, Cambridge University Press . Reprinted 1990, ISBN 978-0-521-39789-6, MR1074573 • Ireland, Kenneth; Rosen, Michael (1990), A Classical Introduction to Modern Number Theory (Second edition), New York: Springer, ISBN 0-387-97329-X • Ivić, A. (1985), The Riemann Zeta Function, New York: John Wiley & Sons, ISBN 978-0-471-80634-9, MR 0792089  (Reprinted by Dover 2003) • Ivić, Aleksandar (2008), "On some reasons for doubting the Riemann hypothesis", in Borwein, Peter; Choi, Stephen; Rooney, Brendan et al., The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike, CMS Books in Mathematics, New York: Springer, pp. 131–160, arXiv:math.NT/0311162, ISBN 978-0-387-72125-5  `|displayeditors=` suggested (help) • Karatsuba, A. A. (1984a), "Zeros of the function ζ(s) on short intervals of the critical line", Izv. Akad. Nauk SSSR, Ser. Mat. (in Russian) 48 (3): 569–584, MR 0747251 • Karatsuba, A. A. (1984b), "Distribution of zeros of the function ζ(1/2 + it)", Izv. Akad. Nauk SSSR, Ser. Mat. (in Russian) 48 (6): 1214–1224, MR 0772113 • Karatsuba, A. A. (1985), "Zeros of the Riemann zeta-function on the critical line", Trudy Mat. Inst. Steklov. (in Russian) (167): 167–178, MR 0804073 • Karatsuba, A. A. (1992), "On the number of zeros of the Riemann zeta-function lying in almost all short intervals of the critical line", Izv. Ross. Akad. Nauk, Ser. Mat. (in Russian) 56 (2): 372–397, MR 1180378 • Karatsuba, A. A.; Voronin, S. M. (1992), The Riemann zeta-function, de Gruyter Expositions in Mathematics 5, Berlin: Walter de Gruyter & Co., ISBN 978-3-11-013170-3, MR 1183467 • Keating, Jonathan P.; Snaith, N. C. (2000), "Random matrix theory and ζ(1/2 + it)", Communications in Mathematical Physics 214 (1): 57–89, doi:10.1007/s002200000261, MR 1794265 • Knauf, Andreas (1999), "Number theory, dynamical systems and statistical mechanics", Reviews in Mathematical Physics. 
A Journal for Both Review and Original Research Papers in the Field of Mathematical Physics 11 (8): 1027–1060, doi:10.1142/S0129055X99000325, MR 1714352 • von Koch, Helge (1901), "Sur la distribution des nombres premiers", Acta Mathematica 24: 159–182, doi:10.1007/BF02403071 • Kurokawa, Nobushige (1992), "Multiple zeta functions: an example", Zeta functions in geometry (Tokyo, 1990), Adv. Stud. Pure Math. 21, Tokyo: Kinokuniya, pp. 219–226, MR 1210791 • Lapidus, Michel L. (2008), In search of the Riemann zeros, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-4222-5, MR 2375028 • Lavrik, A. F. (2001), "Zeta-function", in Hazewinkel, Michiel, , Springer, ISBN 978-1-55608-010-4 • Lehmer, D. H. (1956), "Extended computation of the Riemann zeta-function", Mathematika. A Journal of Pure and Applied Mathematics 3 (2): 102–108, doi:10.1112/S0025579300001753, MR 0086083 • Leichtnam, Eric (2005), "An invitation to Deninger's work on arithmetic zeta functions", Geometry, spectral theory, groups, and dynamics, Contemp. Math. 387, Providence, RI: Amer. Math. Soc., pp. 201–236, MR 2180209 . • Levinson, N. (1974), "More than one-third of the zeros of Riemann's zeta function are on σ = 1/2", Adv. In Math. 13 (4): 383–436, doi:10.1016/0001-8708(74)90074-7, MR 0564081 • Littlewood, J. E. (1962), "The Riemann hypothesis", The scientist speculates: an anthology of partly baked idea, New York: Basic books • van de Lune, J.; te Riele, H. J. J.; Winter, D. T. (1986), "On the zeros of the Riemann zeta function in the critical strip. IV", Mathematics of Computation 46 (174): 667–681, doi:10.2307/2008005, JSTOR 2008005, MR 829637 • Massias, J.-P.; Nicolas, Jean-Louis; Robin, G. (1988), "Évaluation asymptotique de l'ordre maximum d'un élément du groupe symétrique", Polska Akademia Nauk. Instytut Matematyczny. Acta Arithmetica 50 (3): 221–242, MR 960551 • Montgomery, Hugh L. (1973), "The pair correlation of zeros of the zeta function", Analytic number theory, Proc. Sympos. Pure Math. XXIV, Providence, R.I.: American Mathematical Society, pp. 181–193, MR 0337821  Reprinted in (Borwein et al. 2008). • Montgomery, Hugh L. (1983), "Zeros of approximations to the zeta function", in Erdős, Paul, Studies in pure mathematics. To the memory of Paul Turán, Basel, Boston, Berlin: Birkhäuser, pp. 497–506, ISBN 978-3-7643-1288-6, MR 820245 • Nicely, Thomas R. (1999), "New maximal prime gaps and first occurrences", Mathematics of Computation 68 (227): 1311–1315, doi:10.1090/S0025-5718-99-01065-0, MR 1627813 . • Nyman, Bertil (1950), On the One-Dimensional Translation Group and Semi-Group in Certain Function Spaces, PhD Thesis, University of Uppsala: University of Uppsala, MR 0036444 • • Odlyzko, A. M. (1987), "On the distribution of spacings between zeros of the zeta function", Mathematics of Computation 48 (177): 273–308, doi:10.2307/2007890, JSTOR 2007890, MR 866115 • Odlyzko, A. M. (1990), "Bounds for discriminants and related estimates for class numbers, regulators and zeros of zeta functions: a survey of recent results", Séminaire de Théorie des Nombres de Bordeaux, Série 2 2 (1): 119–141, MR 1061762 •   This unpublished book describes the implementation of the algorithm and discusses the results in detail. • • Ono, Ken; Soundararajan, K. (1997), "Ramanujan's ternary quadratic form", Inventiones Mathematicae 130 (3): 415–454, doi:10.1007/s002220050191 • Patterson, S. J. 
(1988), An introduction to the theory of the Riemann zeta-function, Cambridge Studies in Advanced Mathematics 14, Cambridge University Press, ISBN 978-0-521-33535-5, MR 933558 • Ribenboim, Paulo (1996), The New Book of Prime Number Records, New York: Springer, ISBN 0-387-94457-5 • Riemann, Bernhard (1859), "Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse", Monatsberichte der Berliner Akademie . In Gesammelte Werke, Teubner, Leipzig (1892), Reprinted by Dover, New York (1953). Original manuscript (with English translation). Reprinted in (Borwein et al. 2008) and (Edwards 1874) • Riesel, Hans; Göhl, Gunnar (1970), "Some calculations related to Riemann's prime number formula", Mathematics of Computation 24 (112): 969–983, doi:10.2307/2004630, JSTOR 2004630, MR 0277489 • Riesz, M. (1916), "Sur l'hypothèse de Riemann", Acta Mathematica 40: 185–190, doi:10.1007/BF02418544 • Robin, G. (1984), "Grandes valeurs de la fonction somme des diviseurs et hypothèse de Riemann", , Neuvième Série 63 (2): 187–213, MR 774171 • Rockmore, Dan (2005), Stalking the Riemann hypothesis, Pantheon Books, ISBN 978-0-375-42136-5, MR 2269393 • Rosser, J. Barkley; Yohe, J. M.; Schoenfeld, Lowell (1969), "Rigorous computation and the zeros of the Riemann zeta-function. (With discussion)", Information Processing 68 (Proc. IFIP Congress, Edinburgh, 1968), Vol. 1: Mathematics, Software, Amsterdam: North-Holland, pp. 70–76, MR 0258245 • Rudin, Walter (1973), Functional Analysis, 1st edition (January 1973), New York: McGraw-Hill, ISBN 0-070-54225-2 • Sabbagh, Karl (2003), The Riemann hypothesis, Farrar, Straus and Giroux, New York, ISBN 978-0-374-25007-2, MR 1979664 • Salem, Raphaël (1953), "Sur une proposition équivalente à l'hypothèse de Riemann", 236: 1127–1128, MR 0053148 • Sarnak, Peter (2008), "Problems of the Millennium: The Riemann Hypothesis" (PDF), in Borwein, Peter; Choi, Stephen; Rooney, Brendan et al., The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike, CMS Books in Mathematics, New York: Springer, pp. 107–115, ISBN 978-0-387-72125-5  `|displayeditors=` suggested (help) • du Sautoy, Marcus (2003), The music of the primes, HarperCollins Publishers, ISBN 978-0-06-621070-4, MR 2060134 • Schoenfeld, Lowell (1976), "Sharper bounds for the Chebyshev functions θ(x) and ψ(x). II", Mathematics of Computation 30 (134): 337–360, doi:10.2307/2005976, JSTOR 2005976, MR 0457374 • Schumayer, Daniel; Hutchinson, David A. W. (2011), Physics of the Riemann Hypothesis, arXiv:1101.3116 • Selberg, Atle (1942), "On the zeros of Riemann's zeta-function", Skr. Norske Vid. Akad. Oslo I. 10: 59 pp, MR 0010712 • Selberg, Atle (1946), "Contributions to the theory of the Riemann zeta-function", Arch. Math. Naturvid. 48 (5): 89–155, MR 0020594 • Selberg, Atle (1956), "Harmonic analysis and discontinuous groups in weakly symmetric Riemannian spaces with applications to Dirichlet series", J. Indian Math. Soc. (N.S.) 20: 47–87, MR 0088511 • Serre, Jean-Pierre (1969/70), "Facteurs locaux des fonctions zeta des varietés algébriques (définitions et conjectures)", Séminaire Delange-Pisot-Poitou 19 • Sheats, Jeffrey T. (1998), "The Riemann hypothesis for the Goss zeta function for Fq[T]", 71 (1): 121–157, doi:10.1006/jnth.1998.2232, MR 1630979 • Siegel, C. L. (1932), "Über Riemanns Nachlaß zur analytischen Zahlentheorie", Quellen Studien zur Geschichte der Math. Astron. und Phys. Abt. B: Studien 2: 45–80  Reprinted in Gesammelte Abhandlungen, Vol. 1. Berlin: Springer-Verlag, 1966. 
• Speiser, Andreas (1934), "Geometrisches zur Riemannschen Zetafunktion", Mathematische Annalen 110: 514–521, doi:10.1007/BF01448042, JFM 60.0272.04 • • Suzuki, Masatoshi (2011), "Positivity of certain functions associated with analysis on elliptic surfaces", Journal of Number Theory 131: 1770–1796 • Titchmarsh, Edward Charles (1935), "The Zeros of the Riemann Zeta-Function", Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences (The Royal Society) 151 (873): 234–255, doi:10.1098/rspa.1935.0146, JSTOR 96545 • Titchmarsh, Edward Charles (1936), "The Zeros of the Riemann Zeta-Function", Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences (The Royal Society) 157 (891): 261–263, doi:10.1098/rspa.1936.0192, JSTOR 96692 • Titchmarsh, Edward Charles (1986), The theory of the Riemann zeta-function (2nd ed.), The Clarendon Press Oxford University Press, ISBN 978-0-19-853369-6, MR 882550 • Trudgian, Timothy (2011), "On the success and failure of Gram's Law and the Rosser Rule", Acta Arithmetica 125: 225–256 • Turán, Paul (1948), "On some approximative Dirichlet-polynomials in the theory of the zeta-function of Riemann", Danske Vid. Selsk. Mat.-Fys. Medd. 24 (17): 36, MR 0027305  Reprinted in (Borwein et al. 2008). • Turing, Alan M. (1953), "Some calculations of the Riemann zeta-function", Proceedings of the London Mathematical Society. Third Series 3: 99–117, doi:10.1112/plms/s3-3.1.99, MR 0055785 • de la Vallée-Poussin, Ch.J. (1896), "Recherches analytiques sur la théorie des nombers premiers", Ann. Soc. Sci. Bruxelles 20: 183–256 • de la Vallée-Poussin, Ch.J. (1899–1900), "Sur la fonction ζ(s) de Riemann et la nombre des nombres premiers inférieurs à une limite donnée", Mem. Couronnes Acad. Sci. Belg. 59 (1)  Reprinted in (Borwein et al. 2008). • Weil, André (1948), Sur les courbes algébriques et les variétés qui s'en déduisent, Actualités Sci. Ind., no. 1041 = Publ. Inst. Math. Univ. Strasbourg 7 (1945), Hermann et Cie., Paris, MR 0027151 • Weil, André (1949), "Numbers of solutions of equations in finite fields", 55 (5): 497–508, doi:10.1090/S0002-9904-1949-09219-4, MR 0029393  Reprinted in Oeuvres Scientifiques/Collected Papers by Andre Weil ISBN 0-387-90330-5 • Weinberger, Peter J. (1973), "On Euclidean rings of algebraic integers", Analytic number theory ( St. Louis Univ., 1972), Proc. Sympos. Pure Math. 24, Providence, R.I.: Amer. Math. Soc., pp. 321–332, MR 0337902 • Wiles, Andrew (2000), "Twenty years of number theory", Mathematics: frontiers and perspectives, Providence, R.I.: American Mathematical Society, pp. 329–342, ISBN 978-0-8218-2697-3, MR 1754786 • Zagier, Don (1977), "The first 50 million prime numbers" (PDF), Math. Intelligencer (Springer) 0: 7–19, doi:10.1007/BF03039306, MR 643810 • Zagier, Don (1981), "Eisenstein series and the Riemann zeta function", Automorphic forms, representation theory and arithmetic (Bombay, 1979), Tata Inst. Fund. Res. Studies in Math. 10, Tata Inst. Fundamental Res., Bombay, pp. 275–301, MR 633666
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 93, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8336473107337952, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/56118/spaces-with-a-quasi-triangle-inequality/56155
## Spaces with a quasi triangle inequality

What do you call a space with a function which is symmetric, non-negative, positive definite and which satisfies a quasi-triangle inequality: $d(x,z) \leq C( d(x,y)+d(y,z) )$ for all $x,y,z$ and some $C > 1$? That is, it satisfies all the axioms of a metric space except for the triangle inequality, which is replaced by the one above. Can anyone provide any reference on these spaces? Thanks. - More precisely, I'm trying to use Banach's fixed point theorem on these spaces, but I can only use it if the contraction constant is small enough. Is it possible to use Banach's fixed point theorem independently of how big the contraction constant is (as long as it's < 1, of course)? – John H Feb 21 2011 at 0:27

## 3 Answers

Your construction is a special case of semimetric spaces with relaxed triangle inequality: http://en.wikipedia.org/wiki/Semimetric_space#Semimetrics. This type of metric is sometimes also called a non-Archimedean metric. There is a classical paper of W. A. Wilson, "On semi-metric spaces", Amer. J. Math. 53 (1931) 361–373, on the subject. Also, I have seen this type of construction mostly used in fixed-point theory, so this would be an additional keyword to look for. EDIT: To answer your second question about whether the Banach fixed-point theorem would be applicable to semimetric spaces: In general one needs $(X,d)$ to be bounded, otherwise there are counter-examples. Consider $X=\mathbb{N}$, $d(n,m):=\frac{|n-m|}{2^{\min(n,m)}}$ and $f(n):=n+1$. Then $(X,d)$ is a $d$-Cauchy complete semimetric space (!), but $f$ has no fixed points, even though it is a contraction w.r.t. $d$ with contraction constant $1/2$. This example is taken from the paper "Nonlinear Contractions on Semimetric Spaces" by J. Jachymski, J. Matkowski, T. Swiatkowski, Journal of Applied Analysis Vol. 1, No. 2 (1995), pp. 125–134, where you can also find the proof of the Banach Fixed-Point Theorem for bounded semimetric spaces and some more related results. -

Here is a negative answer for your additional remark concerning Banach's fixed point theorem: Consider $d(x,y)=(\int_0^1|x-y|^p)^{1/p}$, $0<p<1$, which satisfies the quasi-triangle inequality. Look at the set of all (measurable real) functions on $[0,1]$ bounded between 0 and 2 and of integral 1. Look at the Baker transformation on this set: first map $x$ to $y(t)=2x(2t)$, $0\le t\le 1/2$, then truncate at height 2 and shift what remains ($(y-1)^{+}$) by $1/2$ to the right and $2$ down. I think I checked that it is a contraction (with constant $2^{p-1}$). This map is known not to have a fixed point; see the following paper of Dale Alspach: http://www.claremontmckenna.edu/math/moneill/Math%20138/papers138/Alspach.pdf -

This is called "C-relaxed triangular inequality". See, for example, this paper by Fagin and Stockmeyer. - Can the Banach fixed point theorem be used in these spaces? – John H Feb 21 2011 at 0:05 My guess is "yes" if the contraction constant is small enough compared to $C$. But I am not sure because I never thought about it. – Mark Sapir Feb 21 2011 at 0:10
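The counterexample in the first answer is easy to probe numerically. The sketch below is an addition (not from the thread) and only spot-checks finitely many points: it evaluates $d(n,m)=|n-m|/2^{\min(n,m)}$, confirms that $f(n)=n+1$ shrinks every distance by exactly the factor $1/2$ while obviously having no fixed point, and shows that the space is unbounded, which is precisely the hypothesis needed for the bounded-semimetric version of Banach's theorem mentioned there.

```python
from fractions import Fraction

def d(n, m):
    """Semimetric from the cited example: d(n, m) = |n - m| / 2^min(n, m)."""
    return Fraction(abs(n - m), 2 ** min(n, m))

def f(n):
    """The map n -> n + 1, which clearly has no fixed point."""
    return n + 1

# f halves every distance exactly, so it is a contraction with constant 1/2 ...
for n in range(8):
    for m in range(8):
        if n != m:
            assert d(f(n), f(m)) == d(n, m) / 2

# ... but the space is unbounded, so the bounded-space fixed point theorem
# quoted in the answer does not apply: d(0, n) = n grows without limit.
print([float(d(0, n)) for n in (1, 10, 100)])  # [1.0, 10.0, 100.0]
```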
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9184010624885559, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/39611/what-is-the-meaning-of-uncertainty-in-heisenbergs-uncertainty-principle/39614
# What is the meaning of uncertainty in Heisenberg's uncertainty principle? The Heisenberg's uncertainty principle states the following: $$\Delta p \cdot \Delta x \ge \frac{h}{4\pi}.$$ While studying for my high school physics exams, I fooled myself into believing that I understood the uncertainty principle (at least the implications). But suddenly the question that's nagging me is the following. If the uncertainty $\Delta x$ of an electron is 1.2 nm, does it imply that the probability that the x-coordinate lies within a 1.2 nm range, equal to 100%? Or does it mean that the probability is 95%? Or does it mean something totally different? I wonder why no author made it clear in the `high school/junior college` level textbooks. I am uncertain about what uncertainty means. - I'm sure that your textbook gave the definition somewhere: $\Delta x = \sqrt{\left<x^2\right>-\left<x\right>^2}$ – user2963 Oct 11 '12 at 23:09 2 On second thought, perhaps it didn't: it seems a lot of explanations of the uncertainty principle just gloss over the mathematical meaning. – user2963 Oct 11 '12 at 23:15 ## 4 Answers For a particle which has a position-space wavefunction $\psi(x)$, the uncertainty in position, denoted $\sigma_x$ or $\Delta x$ (I prefer the former), is given by $$\begin{align} \sigma_x^2 &= \langle x^2\rangle - \langle x\rangle^2 \\ &= \int_{-\infty}^{\infty}\psi^*(x)x^2\psi(x)\,\mathrm{d}x - \biggl[\int_{-\infty}^{\infty}\psi^*(x)\,x\,\psi(x)\,\mathrm{d}x\biggr]^2 \end{align}$$ and the uncertainty in momentum, denoted $\sigma_p$ or $\Delta p$, is given by $$\begin{align} \sigma_p^2 &= \langle p^2\rangle - \langle p\rangle^2 \\ &= \int_{-\infty}^{\infty}\psi^*(x)\biggl(-i\hbar\frac{\partial}{\partial x}\biggr)^2\psi(x)\,\mathrm{d}x - \biggl[\int_{-\infty}^{\infty}\psi^*(x)\biggl(-i\hbar\frac{\partial}{\partial x}\biggr)\psi(x)\,\mathrm{d}x\biggr]^2 \end{align}$$ Wikipedia's page on the uncertainty principle contains a proof using these definitions. These definitions do not imply anything about the probability of finding the particle within the range specified by the uncertainty. - $\Delta x$ is really the standard deviation in $x$, $\sigma_x$. So the probability that you find the particle within $\Delta x$ is about $68\%$. Same goes for momentum. In response to the other answer, the Fourier Transform formalism in fact shows that if the probability distribution of $x$ is a normal distribution, the standard deviation = $\sigma_x$. The probability that a random variable lies within 1 sigma of the mean is 68.27%. - 1 The 68% only holds if the position has a normal distribution like (P(x)=Ae^-(x-x0)^2/sig^2). There is no guarantee that a given wavefunction probability distribution will have that form - see @DavidZaslavsky 's answer... – FrankH Oct 12 '12 at 1:52 I agree. I didn't mean to say that the probability distribution is always normal; that's why I qualified with an "if." And since so many distributions are asymptotically gaussian, 68% is a nice number to throw around. – hwlin Oct 12 '12 at 2:00 I have a problem with calling it "THE standard deviation", unqualified, because in measuring x the standard deviation given by the measurement is classical and momentum can be measured simultaneously to equal accuracy, unless one is working in quantum dimensions. It is better to call it "uncertainty" in position and momentum. – anna v Oct 12 '12 at 4:37 – hwlin Oct 12 '12 at 22:16 David outlined the mathematical of the uncertainty principle. 
The meaning behind the uncertainty principle can be understood if you look very closely at an experimental apparatus that illustrates it. If one attempts to measure the position of a particle to some accuracy, then the same experimental apparatus cannot measure momentum to an accuracy better than the one allowed by the uncertainty relation. For an illustration consider the double slit experiment. The measurement of position is performed to an accuracy corresponding to that of the slit width. There is an uncontrollable exchange of momentum between the slit and the particle, which manifests itself as an uncertainty in the momentum of the photon that leaves the slit. I would highly recommend Bohr's exposition. http://www.marxists.org/reference/subject/philosophy/works/dk/bohr.htm - Any textbook on quantum mechanics that I know gives the definition of $\Delta p$ and $\Delta x$. Apart from the purely mathematical answer, consider this from a physical point of view. The only way to restrict an electron to a 1.2 nm wide box is by applying an infinite force at the edges. In reality the wavefunction of the electron will spread over all space, and the probability of finding the electron outside the box will be non-zero. -
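To make the definitions quoted in the answers concrete, here is a small numerical sketch (not part of the original thread; the grid, the width parameter and setting $\hbar=1$ are arbitrary choices) that computes $\sigma_x$ and $\sigma_p$ for a Gaussian wave packet and checks that their product comes out at the minimum value $\hbar/2$:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
a = 1.3  # width parameter of the packet (arbitrary choice)

# Normalized Gaussian wavefunction psi(x) = (2*pi*a^2)^(-1/4) exp(-x^2 / (4 a^2))
psi = (2.0 * np.pi * a**2) ** (-0.25) * np.exp(-x**2 / (4.0 * a**2))

def expval(op_psi):
    """<psi| O |psi> approximated by a Riemann sum, for O|psi> sampled on the grid."""
    return (np.sum(np.conj(psi) * op_psi) * dx).real

# sigma_x^2 = <x^2> - <x>^2
sigma_x = np.sqrt(expval(x**2 * psi) - expval(x * psi) ** 2)

# Momentum operator p = -i*hbar d/dx, applied with finite differences
dpsi = np.gradient(psi, x)
d2psi = np.gradient(dpsi, x)
sigma_p = np.sqrt(expval(-hbar**2 * d2psi) - expval(-1j * hbar * dpsi) ** 2)

print(sigma_x, sigma_p, sigma_x * sigma_p)  # the product is close to hbar/2 = 0.5
```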
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9234671592712402, "perplexity_flag": "head"}
http://mathoverflow.net/questions/18644?sort=votes
## Uniqueness of Chern/Stiefel-Whitney Classes

This question is closely related to this previous question. Chern and Stiefel-Whitney classes can be defined on bundles over arbitrary base spaces. (In Hatcher's Vector Bundles notes, he uses the Leray-Hirsch Theorem, which appears to require paracompactness of the base space. The construction in Milnor-Stasheff works in general, as does the argument given by Charles Rezk in answer to the above question. A posteriori, this actually shows that Hatcher's construction works in general too, since he really just needs $w_1$ and $c_1$ to be defined everywhere.) The proof of uniqueness (as discussed in Milnor and Stasheff, or in Hatcher's Vector Bundles notes, or in the answers to the above question) relies on the splitting principle, and hence (it seems to me) requires the existence of a metric on the bundle in question. More precisely, if we have two sequences of characteristic classes satisfying the axioms for, say, Chern classes, and we want to check that they agree on some bundle $E\to B$, the method is to pull back $E$ along some map $f: B'\to B$ (with $f^*$ injective on cohomology) so that $f^*E$ splits as a sum of lines. Producing the splitting seems to require a metric on $E$ (or at least on $f^*E$). If $B$ is not paracompact, bundles over $B$ may not admit a metric (and may not admit a classifying map into the universal bundle over the Grassmannian), so my question is:

Are Chern and/or Stiefel-Whitney classes unique for arbitrary bundles? If not, do $w_1$ and $c_1$ at least determine the higher-dimensional classes? -

## 1 Answer

I'm going to assume that your characteristic classes are supposed to live in the singular cohomology of the base space. Then to show your uniqueness result, it should be enough if you can produce, for any space $B$, a map $f:B'\to B$ such that $B'$ is paracompact, and $f$ induces an isomorphism in singular cohomology. For any $B$, you can find a CW-complex $B'$ and a weak equivalence $f: B'\to B$, by one of Whitehead's many theorems. Weak equivalences always induce isomorphisms in singular cohomology, and CW-complexes are paracompact (I think!). Hatcher's topology textbook proves all of these, except possibly for the paracompactness claim (for which I can't find a reference yet). If you're talking about Cech cohomology, then this proof won't work. - 2 CW-complexes are paracompact. Milnor and Stasheff, p. 74, references Miyazaki, H., Paracompactness of CW complexes, Tohoku Math. J. 4 (1952), 309-313 and Dugundji, J., "Topology," Allyn and Bacon, Boston, 1996, p. 419. – Cotton Seed Mar 18 2010 at 20:07 Excellent! I think I've actually used an argument along these lines before... Hatcher's Vector Bundles notes include a proof that CW complexes are paracompact. It's in the Appendix to Chapter 1. – Dan Ramras Mar 18 2010 at 21:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9121212363243103, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/44362/activation-energy-and-entropy/44369
Activation energy and entropy

First assertion: If a system is already at a high temperature, adding energy will increase the entropy only by a small amount (compared with a system at a lower temperature).

Question (if the assertion is right): What if the heat added is enough to break molecules apart (activation energy)? This would lead to a new multiplicity (more freedom), and so to more entropy. That is a larger growth in entropy than if the temperature were lower, which seems to contradict the first assertion. I see there is something wrong here, but I don't know where. -

2 Answers

The assertion is based on the assumption that you either have only ‘small' increases in temperature (and hence small increases in entropy; think of all the $dS$ and $dT$ you encounter in standard thermodynamics) or that your system is sufficiently homogeneous that the change in entropy is a continuous function of the change in temperature. This obviously breaks down if your molecules start to break up. - +1 I think that's the key: the system is out of equilibrium, those are not small increases, so it's a mistake to keep the temperature constantly "low" while increasing the energy; that won't happen – HDE Nov 16 '12 at 18:52

By definition of temperature $$\frac{1}{T} = \left( \frac{\partial S}{\partial U} \right)_{N_j}$$ If the temperature is higher, adding the same amount of energy $\delta U$ at constant composition $N_j$ results in a smaller change $\delta S$ in the entropy. But if the composition is changing due to a chemical reaction $\mathrm{AB} \rightarrow \mathrm{A} + \mathrm{B}$, then there is an extra variation in the entropy due to the change in composition $$\frac{\mu_j}{T} = - \left( \frac{\partial S}{\partial N_j} \right)_U$$ The total change in the entropy is given by the variation of energy plus the variation in composition $$\mathrm{d}S = \frac{1}{T}\mathrm{d}U - \sum_j \frac{\mu_j}{T}\mathrm{d}N_j$$ -
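A quick numerical illustration of the first answer's point (the numbers are chosen arbitrarily): adding the same small amount of energy $\delta U = 1\ \mathrm{J}$ reversibly and at constant composition gives $$\delta S \approx \frac{\delta U}{T} = \frac{1\ \mathrm{J}}{300\ \mathrm{K}} \approx 3.3\times10^{-3}\ \mathrm{J/K} \qquad\text{versus}\qquad \frac{1\ \mathrm{J}}{1000\ \mathrm{K}} = 1.0\times10^{-3}\ \mathrm{J/K},$$ so the hotter system indeed gains less entropy, as long as no $\mathrm{d}N_j$ term enters. Once bonds start breaking, the extra $-\sum_j (\mu_j/T)\,\mathrm{d}N_j$ contribution in the last formula accounts for the additional entropy, so there is no contradiction.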
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9491674900054932, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/tagged/moments?sort=faq&pagesize=15
# Tagged Questions For questions about the moments of a random variable, that is, the expectation of $X^k$ where $X$ is a random variable and $k$ an integer (or a real number with $X$ non-negative). 1answer 1k views ### Existence of the moment generating function and variance Can a distribution with finite mean and infinite variance have a moment generating function? What about a distribution with finite mean and finite variance but infinite higher moments? 1answer 254 views ### Proof that if higher moment exists then lower moment also exists The $r$-th moment of a random variable $X$ is finite if $$\mathbb E(|X^r|)< \infty$$ I am trying to show that for any positive integer $s<r$, then the $s$-th moment $\mathbb E[|X^s|]$ is ... 1answer 515 views ### How can I calculate central moments of a joint pdf? Let's say I have two signals $x_1$ and $x_2$, each having $N$ samples, i.e.: $$x_1 = \{ x_{11}, x_{12}, ..., x_{1N} \}$$ $$x_2 = \{ x_{21}, x_{22}, ..., x_{2N} \}$$ The signals are both ... 2answers 786 views ### What's so 'moment' about 'moments' of a probability distribution? I KNOW what moments are and how to calculate them and how to use the moment generating function for getting higher order moments. Yes, I know the math. Now that I need to get my statistics knowledge ... 4answers 1k views ### A transform to change skew without affecting kurtosis? I am curious if there is a transform which alters the skew of a random variable without affecting the kurtosis. This would be analogous to how an affine transform of a RV affects the mean and ... 2answers 440 views ### A proof involving properties of moment generating functions Wackerly et al's text states this theorem "Let $m_x(t)$ and $m_y(t)$ denote the moment-generating functions of random variables X and Y, respectively. If both moment-generating functions exist and ... 1answer 288 views ### Central Moments of Symmetric Distributions I am trying to show that the central moment of a symmetric distribution: $${\bf f}_x{\bf (a+x)} = {\bf f}_x{\bf(a-x)}$$ is zero for odd numbers. So for instance the third central moment {\bf ... 2answers 130 views ### Moments of the Kolmogorov distribution Up to what order do the moments of the Kolmogorov distribution exist? References would be appreciated. 1answer 225 views ### Moment generating function of the inner product of two gaussian random vectors Can anybody please suggest how I can compute the moment generating function of the inner product of two gaussian random vectors, each distributed as $\mathcal N(0,\sigma^2)$, independent of each ... 2answers 240 views ### Need an example of RV with a mean and no second moment An example like the t-distribution with 2 degrees of freedom would not suffice as the second moment exists but equals inf.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8693638443946838, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/1507-logarithmic-scale-problem.html
Thread: 1. Logarithmic scale problem

I am working on a homework for a programming class. We have to create a logarithmic plot and add a marker to it, while the program is running, on a click of the mouse. That is NOT the problem, in fact, that's very simple! My problem, however, is with the scale. When my plot is in linear scale it adds the marker right where it should. By that I mean that if I click on point (1,2) it adds my marker on (1,2). Now, when I switch to logarithmic scale, if I click on (0,0) it adds the marker on (1,1). If I click on values greater than 10, it adds the marker at the place where I clicked. But when my values are lower than 10, the marker is shifted to the right. How do I solve that problem? I already tried converting the values I get from my mouse-click event to logarithmic values and they are wrong. In fact I get negative numbers when the values are lower than 1, so the marker is shifted to the left. I am not good at all with log scales; so, please help me!!

2. 1. Your problem is not a mathematical one: this sign "=" means different things in Math and Programming. 2. If you click on (0,0) your program calculates $\mbox{base}^0=1$. 3. Depending on the value of your base, $\mbox{base}^{10}=\mbox{more than 1000}$. In other words: it is out of your plot. Change the sides of your "equation" in your program. Perhaps it'll work.
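The usual way to convert a click position into a data coordinate on a log-scaled axis is to interpolate in log10 space rather than in the raw values. Here is a minimal, hypothetical sketch of that idea (the function name and the assumption that the click arrives as a fraction of the axis length are mine, not from the thread):

```python
import math

def click_to_value(frac, vmin, vmax, log_scale):
    """Map a mouse click, given as a fraction 0..1 along the axis, to a data value.

    On a linear axis this is plain linear interpolation between vmin and vmax;
    on a log axis the interpolation must be done in log10 space (vmin, vmax > 0).
    """
    if not log_scale:
        return vmin + frac * (vmax - vmin)
    lo, hi = math.log10(vmin), math.log10(vmax)
    return 10 ** (lo + frac * (hi - lo))

# A click halfway along an axis that runs from 1 to 100:
print(click_to_value(0.5, 1, 100, log_scale=False))  # 50.5 on a linear axis
print(click_to_value(0.5, 1, 100, log_scale=True))   # 10.0 on a log axis
```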
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9071700572967529, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/47007/hamiltonian-of-the-surface-states-of-a-3d-topological-insulator
# Hamiltonian of the surface states of a 3D topological insulator The surface states of a 3D topological insulator (let's say in the (x-y) plane) are sometimes described by the following Hamiltonian : $$H(k)=\hbar v_F (p_x \sigma_x + p_y \sigma_y)$$ or sometimes by : $$H(k)=\hbar v_F (p_x \sigma_y - p_y \sigma_x).$$ Do you know why? Thanks in advance for any help. - Why the two different forms or why such a form at all? – Fabian Dec 16 '12 at 19:58 It's just a rotation of the spin axis by $\pi/2$. x goes to y, y goes to negative x. The above is typically associated with a Dresselhaus spin-orbit coupling, while the lower is the typical form of Rashba spin-orbit coupling. – wsc Dec 16 '12 at 22:22 Thanks for your comment. Does it mean that in some materials being 3D topological insulators, the spin-orbit coupling is a Rashba term and in other materials, it is a Dresselhaus term. I would be surprised since I thought that surface states were due to an intrinsic SOC term responsible for band inversion (different from Rashba and Dresselhaus). We often say that the surface states correspond to one Dirac cone as 1/4 graphene so the Hamiltonian should be the first one? – JaneFlo Dec 17 '12 at 9:16
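As the first comment says, the two forms differ only by a rotation of the spin axes by $\pi/2$ about $z$. A quick numerical check of that claim (a sketch added here, not from the original discussion; the values of $p_x$, $p_y$ are arbitrary): with $U=e^{-i\pi\sigma_z/4}$ one has $U(p_x\sigma_x+p_y\sigma_y)U^\dagger = p_x\sigma_y-p_y\sigma_x$.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Spin rotation by pi/2 about z: U = exp(-i (pi/2) sigma_z / 2) = diag(e^{-i pi/4}, e^{i pi/4})
U = np.diag([np.exp(-1j * np.pi / 4), np.exp(1j * np.pi / 4)])

px, py = 0.37, -1.2  # arbitrary momentum components (the prefactor hbar*v_F is dropped)

H1 = px * sx + py * sy   # first form of the surface Hamiltonian
H2 = px * sy - py * sx   # second form

# The two forms are unitarily equivalent under the spin rotation U
print(np.allclose(U @ H1 @ U.conj().T, H2))  # True
```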
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400472640991211, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/162316-limit-proof.html
# Thread: 1. ## Limit proof

Assume $f(x) \ge g(x)$ for all $x$ in some set $A$ on which $f$ and $g$ are defined. Show that for any limit point $c$ of $A$ we must have $\displaystyle \lim_{x \to c} f(x) \ge \lim_{x \to c} g(x).$ I think I start with the $\epsilon$ and $\delta$ definition of a limit. So let $\displaystyle \lim_{x \to c} f(x) = F.$ Let $\epsilon > 0.$ Then there exists $\delta > 0$ such that $0 < |x - c| < \delta$ implies $|f(x) - F| < \epsilon.$ Is that the right place to start? Because I am not sure where to go from here. Thanks in advance.

2. Let $\displaystyle \lim_{x \to c} f(x) =F~\&~ \lim_{x \to c} g(x)=G.$ Suppose that $F<G$. You can get a contradiction. Hint: consider $\varepsilon = \frac{{G - F}}{2}$.

3. Thank you so much. I believe I have it.
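For the record, here is one way to finish the argument from the hint above (a write-up added here, not taken from the thread). Suppose $\lim_{x\to c}f(x)=F$, $\lim_{x\to c}g(x)=G$ and, for contradiction, $F<G$. Put $\varepsilon=\frac{G-F}{2}>0$. By the definition of the two limits there is a $\delta>0$ (take the smaller of the two deltas) such that every $x\in A$ with $0<|x-c|<\delta$ satisfies $f(x)<F+\varepsilon$ and $g(x)>G-\varepsilon$. Since $F+\varepsilon=G-\varepsilon$, any such $x$ would give $f(x)<g(x)$; and because $c$ is a limit point of $A$, at least one such $x$ exists. This contradicts $f(x)\ge g(x)$ on $A$, so $F\ge G$.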
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9541592001914978, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/177551-power-series-justification.html
# Thread: 1. ## power series justification

Hi. Quick question. Given a power series of the form $\sum c_n(x+3)^n$. It is given that this series converges when x = 0 and diverges when x = -8. Now, does it converge when the series is $\sum c_n 2^n$? The answer is yes, but I'm not sure how to justify it.

2. Hint: As $\sum_{n=0}^{+\infty}c_n3^n$ converges, the radius of convergence of the series $\sum_{n=0}^{+\infty}c_nx^n$ is $\geq 3$

3. Originally Posted by FernandoRevilla Hint: As $\sum_{n=0}^{+\infty}c_n3^n$ converges, the radius of convergence of the series $\sum_{n=0}^{+\infty}c_nx^n$ is $\geq 3$ I was trying to figure out the radius of convergence. This is what I have. We know that |z-a| < R means that the series converges absolutely for some number R. Here I have a as (-3) and z = x = 0 when it converges. So |-3| < R => |3| < R. Now |3| can be + or - 3... so I guess R > -3 or 3? I dunno if I'm doing this right.
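For completeness, here is the justification the hint is pointing at (a write-up added here, not from the thread). The series $\sum c_n(x+3)^n$ is centered at $x=-3$. Convergence at $x=0$ is convergence at a point at distance $|0-(-3)|=3$ from the center, so the radius of convergence satisfies $R\ge 3$ (and divergence at $x=-8$, distance $5$, gives $R\le 5$). The series $\sum c_n 2^n$ is just the original series evaluated at $x=-1$, where $|x+3|=2<3\le R$, so it lies strictly inside the interval of convergence and converges (absolutely). Note that $R$ is a distance, hence a non-negative number; there is no "$\pm$" ambiguity to worry about.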
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9084292054176331, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/106675/laplace-of-x2-fracd2ydx2/106683
# Laplace of $x^2\frac{d^2y}{dx^2}$

How does one evaluate the Laplace of functions like $t^2\frac{d^2y}{dt^2}$? I wanted to solve a differential equation using the Laplace transform, resembling: $$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + y = 5$$ MATLAB provides me the answer as: C6*cos(log(t)) + C5*sin(log(t)) + 5 Can someone give me a derivation for this? -

## 2 Answers

Use ${\cal L} \{ t^n f(t)\}=(-1)^n {d^n\over ds^n} F(s)$ and ${\cal L} \{ f''(t)\}=s^2F(s)-sf(0)-f'(0)$. - So, $x^2\frac{d^2y}{dx^2}$ would be $\frac{d^2(s^2F(s) - sf(0) -f'(0))}{ds^2}$ ? I did this, I seem to have done something silly then. I'll recheck. – Inquest Feb 7 '12 at 15:06 @Nunoxic I don't think Laplace transforms are the way to proceed here. I would use Jon's approach and then use Laplace transforms to solve the equation he obtains. – David Mitra Feb 7 '12 at 15:52 – David Mitra Feb 7 '12 at 15:57 I am just fiddling around. I don't really need to solve the equation using LT but was wondering if I could. – Inquest Feb 7 '12 at 15:59

Take a new variable $x=e^t$. Then $$\frac{d}{dx}=\frac{dt}{dx}\frac{d}{dt}=e^{-t}\frac{d}{dt}$$ and $$\frac{d^2}{dx^2}=e^{-2t}\left(\frac{d^2}{dt^2}-\frac{d}{dt}\right),$$ and so your equation just becomes $$\frac{d^2y}{dt^2}+y=5,$$ which can be solved by combinations of sine and cosine of $t=\log x$, in agreement with the MATLAB result above. - How does Laplace come into the picture? I necessarily need to solve it using Laplace. – Inquest Feb 7 '12 at 15:10 Yes, you can use Laplace on the last equation. But, I think that the hint by David could be more helpful. – Jon Feb 7 '12 at 15:18
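A quick way to confirm the closed form quoted from MATLAB (a verification sketch added here, not part of the thread; C1 and C2 stand for the arbitrary constants) is to substitute $y = 5 + C_1\cos(\ln x) + C_2\sin(\ln x)$ back into the equation with sympy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C1, C2 = sp.symbols('C1 C2')

# Candidate general solution of x^2 y'' + x y' + y = 5
y = 5 + C1 * sp.cos(sp.log(x)) + C2 * sp.sin(sp.log(x))

lhs = x**2 * sp.diff(y, x, 2) + x * sp.diff(y, x) + y
print(sp.simplify(lhs))  # prints 5, so the equation holds for any C1, C2
```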
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399296641349792, "perplexity_flag": "middle"}
http://math.stackexchange.com/users/32572/barranka?tab=activity&sort=comments
# Barranka

Actuary, finance and risk oriented, musician, programmer... I enjoy mathematics, probability, statistics, numerical methods... and whatever challenges the creativity of the human mind. Just for information, I've found an article that I think is important to read before asking something here (or anywhere else): http://mattgemmell.com/2008/12/08/what-have-you-tried/

# 11 Comments

- Apr 10, on "How can I prove that $xy\leq x^2+y^2$?": Quite an elegant hint!
- Oct 3, on "How do you find the center of a circle with a pencil and a book?": This solution is far better than the accepted one, since it does not depend on perception ("put the corner of the book..." how do you know that you are really drawing a diameter?). +1 for this answer!
- Sep 21, on "What is $dx$ in integration?": Plain links don't help... please improve this answer
- Sep 21, on "What is $dx$ in integration?": Quite a good explanation
- Sep 20, on "What are imaginary numbers?": @CliveN. Man! This is one of the greatest, clearest, coolest and simplest answers I've ever seen for this question (and funniest, I had a lot of fun reading it). I will quote it every time I get the chance! +1 (If I could vote more than one time, I would)
- Aug 21, on "Which symbol should be used for an empty set?": This is not a constructive question... It's just about "what symbol is better"... it's a matter of preference!
- Jun 20, on "Is it wrong to say $\sqrt{x} \times \sqrt{x} =\pm x,\forall x \in \mathbb{R}$?": Notice that $\sqrt{x^2}=|x|$, so it can be said that $\sqrt{x^2}=\pm x$
- Jun 17, on the same question: @Théophile ook... $x\geq0$
- Jun 7, on "What is the chance to get a parking ticket in half an hour if the chance to get a ticket is 80% in 1 hour?": @MarkAdler Thank you Mark... I will soon correct my answer to include this calculation for the value of $\lambda$. And yes, Math is beautiful! :-)
- Jun 5, on the same question: That said, I'm correcting my answer, because I calculated the probability of "being spotted" once per interval, which is not what we are looking for. The probability we're looking for is "what is the probability of 'being spotted' at least one time", which is 1 - Pr{"Not 'being spotted'"}.
- Jun 5, on the same question: With no further information, the expected value of the tickets per hour is 0.8 (it is a Bernoulli experiment).
I think the Poisson process hypothesis holds because all you know is that you may "get caught" with a given probability in a unit of time. Of course you can't be caught more than once a day, but that doesn't imply that you can't be "spotted" $n$ times a day.
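For reference, the calculation for $\lambda$ alluded to in these comments runs as follows (filled in here under the Poisson-process assumption discussed above). If the probability of being spotted at least once in one hour is $0.8$, then $1-e^{-\lambda}=0.8$, so $\lambda=\ln 5\approx 1.609$ per hour, and the probability of being spotted at least once in half an hour is
$$1-e^{-\lambda/2}=1-\frac{1}{\sqrt5}\approx 0.553.$$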
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381497502326965, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/58010/determining-if-a-quadratic-is-always-positive?answertab=active
# Determining if a quadratic is always positive Is there a quick and systematic method to find out if a quadratic equation is always positive or may have positive and negative or always negative for all values to its variables? Say for a quadratic equation: $3x^{2}+8xy+5xz+2yz+7y^{2}+2z^{2}>0$, without drawing a graph to look at its shape, how can I find out if this equation is always more than zero or does it have negative results or is it always negative for all non-zero values into the variables? I tried randomly substituting values into the variables but I can never be sure if I had covered all cases. Thanks for any help. - 1 If this is homework, haven't you covered such methods in class already? (Diagonalization or completing the squares, for example.) – Hans Lundmark Aug 17 '11 at 10:12 ## 4 Answers This is what Sylvester's criterion is for. Write your quadratic as $v^T A v$ where $v$ is a vector of variables $(x_1\ x_2\ \cdots\ x_n)$ and $A$ is a matrix of constants. For example, in your case, you are interested in $$\begin{pmatrix} x & y & z \end{pmatrix} \begin{pmatrix} 3 & 4 & 5/2 \\ 4 & 7 & 1 \\ 5/2 & 1 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$ Observe that the off diagonal entries are half the coefficients of the quadratic. The standard terminology is that $A$ is "positive definite" if this quantity is positive for all nonzero $v$. Sylvester's criterion says that $A$ is positive definite if and only if the determinants of the top-left $k \times k$ submatrix are positive for $k=1$, $2$, ..., $n$. In our case, we need to test $$\det \begin{pmatrix} 3 \end{pmatrix} =3 \quad \det \begin{pmatrix}3 & 4 \\ 4 & 7\end{pmatrix} = 5 \quad \det \begin{pmatrix} 3 & 4 & 5/2 \\ 4 & 7 & 1 \\ 5/2 & 1 & 2 \end{pmatrix} = -67/4.$$ Since the last quantity is negative, Sylvester's criterion tells us that this quadratic is NOT positive definite. - 1 Just an offhand remark: from a computational perspective, you probably would not want to use Sylvester's criterion - assuming each determinant is computed in $O(n^3)$ operations, this involves doing $O(n^4)$ operations total. I'm not a hundred percent sure what the best algorithm is - perhaps someone more knowledgeable can pipe in - but I'm guessing its Cholesky decomposition, which can be performed in $O(n^3)$ operations total through a variant of Gaussian elimination. – alex Aug 17 '11 at 19:42 Good point. I also don't know. I would guess that the best idea would be to slightly modify Cholesky decomposition to get a decomposition of the form $M D M^*$ where $M$ is lower triangular with ones on the diagonal, and $D$ is diagonal. This will avoid having to compute square roots. In particular, if your input is integer like the above, then you won't have to go to floating point. But one of the big things I have learned from talking to numerical analysts is that you shouldn't try to guess what a good algorithm will be; you need to actually test your ideas on data. – David Speyer Aug 17 '11 at 19:48 1 – J. M. Aug 18 '11 at 10:24 1 @J.M. You brought up something interesting. Actually, is the $LDL^{T}$ same as Cholesky if the pivots are positive then $L\sqrt{D}(L\sqrt{D})^{T}$? But this only works if the pivots are positive; matrix is positive definite. For the symmetric matrices that don't work with $L\sqrt{D}(L\sqrt{D})^{T}$ due to negative pivots, it still has $LDL^{T}$, wouldn't it? And even singular matrices would have $LDL^{T}$ decomposition just that $D$ is singular too. Then what are the symmetric matrices that don't have $LDL^{T}$? 
– xenon Aug 18 '11 at 15:14 1 @xEnOn: Very perceptive of you! :) Most people unfortunately do not realize that relationship between $LDL^T$ and Cholesky. If any element of $D$ is negative, the original matrix cannot have a Cholesky decomposition since one of the underlying assumptions is that the diagonal elements ought to be real. As for matrices that do not possess an $LDL^T$ decomposition, if the leading submatrix of your symmetric matrix is singular, then you cannot outright compute the decomposition, e.g. $\begin{pmatrix}0&1\\1&0\end{pmatrix}$ will not have that decomposition. – J. M. Aug 18 '11 at 15:23 show 1 more comment Rewrite your expression as a bilinear form with a symmetric matrix in-between. This can always be done. For instance, in your case, your expression is $$3x^{2}+8xy+5xz+2yz+7y^{2}+2z^{2} = \begin{pmatrix} x & y & z \end{pmatrix} \begin{pmatrix} 3 & 4 & 5/2 \\ 4 & 7 & 1 \\ 5/2 & 1 & 2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}$$ Now all you need to check is that the matrix is positive definite. A nice property of the positive definite matrix is that every diagonal sub-matrix must be positive definite. However, note that the matrix $$\begin{pmatrix} 3 & 5/2 \\ 5/2 & 2 \end{pmatrix}$$ is not positive definite. Hence, it is not possible that $$3x^{2}+8xy+5xz+2yz+7y^{2}+2z^{2}$$ is always positive $\forall x,y,z \in \mathbb{R}$ - The answer is almost the same as David Speyer's. I was writing this up when he posted his. – user17762 Aug 17 '11 at 13:14 Thanks. But how was the matrix $\begin{pmatrix} 3 & 5/2 \\ 5/2 & 2 \end{pmatrix}$ derived? By diagonal sub-matrix, I thought it referred to the submatrix from the top left corner of the main matrix? But $\begin{pmatrix} 3 & 5/2 \\ 5/2 & 2 \end{pmatrix}$ doesn't look like that. – xenon Aug 17 '11 at 17:25 @xEnOn: By diagonal submatrix I mean $A(n1,n2,\ldots,nk;n1,n2,\ldots,nk)$ where $n1,n2,\ldots,nk \in \{1,2,\ldots,n\}$ or you can think of symmetric permutation i.e. in this case swap rows 2 and 3 and columns 2 and 3 and then look at the top left sub-matrix – user17762 Aug 17 '11 at 17:50 @downvoter: downvoting without leaving a comment serves no purpose! – user17762 Aug 17 '11 at 19:43 My approach of this is like this: For a regular one variable quadratic, $ax^2+bx+c$ you can find its sign like this: solve $ax^2+bx+c=0$, and after that • between the roots (if any) the sign is opposite to the sign of $a$ • outside the roots the sign is the sign of $a$. In your case you want to see if a quadratic is positive all the time, and this means it has no roots, i.e. the determinant $\Delta=b^2-4ac<0$. You can now solve your problem: Consider the equation as a quadratic in $x$, and suppose the condition $\Delta_x<0$ is true. Now you arrive at a quadratic in $y,z$ which should be negative all the time. Consider this second quadratic as a one variable quadratic in $y$, and suppose the condition $\Delta_y<0$ is again true. Next you arrive at a quadratic in $z$, and if $\Delta_z<0$ all the time you are done. The other method, as the previous answer stated is to form squares. This is always possible, and if the signs of the three squares formed are not all plus, then the quadratic isn't always positive. The third method uses linear algebra, and you can search for positive definitness of the matrix of your quadratic. This can be done pretty fast, but it uses determinants. If you are interested I will present this method. - Thanks! 
Although the determinant $b^{2}-4ac<0$ does not have real roots, could the graph be totally above or below the axis? Do I have to then substitute any random number into the equation to find if it is above(always positive) or below(always negative) the axis? If there're more than 1 variable like the example I wrote, I'm still not sure how I could get it into $\Delta$. For $\Delta_x$, I group it into $3x^{2}+[8y]x+[5z]x+2yz+7y^{2}+2z^{2}>0$ and so $a=3$, $b=8y$ & $c=2yz+5z+y^{2}$, then I put them into $\Delta_x=(8y)^{2}-4(3)(2yz+5z+y^{2})$, is this right? But they're all variables only. – xenon Aug 17 '11 at 12:28 Actually, I came about this question because I was thinking about one of the properties of or test for positive definite with $x^{T}Ax>0$. But at the same time, was wondering how I could find out if the quadratic equation is always positive, negative or both when the number of variables gets more. – xenon Aug 17 '11 at 12:29 One of the methods when you don't know necessary and sufficient condition for the minimum of function of several variables - consider other as parameters. You know that for a function $$a_1x^2+b_1(y,z)x+c(y,z)$$ the minimum is attained at $\frac{-b_1(y,z)}{2a_1}$ for $a_1>0$ and any fixed $y,z$. Then you should just substitute this into your equation and solve the minimum problem w.r.t. $y$ and then, on the third step, for $z$. In your case: $a_1 = 3, b_1 = 8y+5z$, so you put $$x = -\frac{1}{6}(8y+5z)$$ and obtain a function $$\frac{1}{12}(20 y^2-56 y z-z^2)$$ which certainly can go below zero due to the negativity of the coefficient with $z^2$. Finally, the strict inequality never holds, since any quadratic function is equal to zero in the origin. - At the point after obtaining the function $\frac{1}{12}(20 y^2-56 y z-z^2)$, can I only through observation on the square amount from $y^2$ and $z^2$ to see if they can over come the middle term $56yz$ to know if it is positive? Thanks! – xenon Aug 17 '11 at 12:41 @xEnOn: sure, you can take $y=0$. So in overall you take $y=0,z=1$ and $x = -5/6$ to obtain $-1/12$ as a value. – Ilya Aug 17 '11 at 13:31 So the only way is through observation and no formal ways to "squeeze out a value" that can determine if the equation is always positive or negative? – xenon Aug 18 '11 at 1:42 @xEnOn: I think from the other answers you see that the problem is equivalent to sign-definiteness of the matrix. So, the fastest way is to use Sylvester's criteria. The are two reasons why I presented here another method. 1) I didn't know if you already studied Sylvester's criteria, while my method you can simply apply based on the school program. 2) it also works if you have terms $\alpha x+\beta y+\gamma z$. – Ilya Aug 18 '11 at 7:03 Thanks Gortaur! :) – xenon Aug 18 '11 at 16:04
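As a concrete check of the conclusion reached above, using the point produced in the last answer: at $(x,y,z)=\bigl(-\tfrac{5}{6},\,0,\,1\bigr)$,
$$3x^{2}+8xy+5xz+2yz+7y^{2}+2z^{2}=\tfrac{25}{12}-\tfrac{25}{6}+2=-\tfrac{1}{12}<0,$$
while at $(1,0,0)$ the form equals $3>0$. So the quadratic form really does take both signs, consistent with Sylvester's criterion failing at the $3\times 3$ determinant $-67/4$.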
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 71, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441474676132202, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/15168/graphing-effect-size-for-coefficient-of-determination
# Graphing effect size for coefficient of determination

If I have a significant correlation coefficient of r=.80 between Variable A and B, I can work out the effect size (coefficient of determination) by squaring it, which is 64%. I want to graph this in the simplest way possible (given my non-statistical target audience). Can I use a 100% stacked bar graph for this purpose? This will show Variable B as 100% and on top of it would be Variable A which would be 64%. I can then graphically say that 64% of variance in Variable B can be attributed to Variable A. (Conversely, I can also graphically say that I am unsure of the remaining 36% of Variable B.) The appeal of this approach for me is that I can show the effect size between A and B on a number of variables (e.g. gender, age, education) in one graph. This will also make a colourful presentation (which is good for a non-statistical target audience!). I have seen some textbooks showing two circles, each representing a variable (e.g. A and B). The part of the circles that overlaps illustrates the effect size. I thought doing a 100% stacked bar graph was a better way. From the discussion below, it appears that the scatterplot is the way to go on this matter. However, how do I show 64% on a scatterplot? I think the point is being missed in the discussion below. It is easy to illustrate the relationship through the scatterplot, but how is the effect size illustrated, i.e. the actual percentage as above? I can't see this percentage figure in any of the diagrams below. -

In response to your edit, you can try the scatterplot matrix approach, but show $R^2$'s in the upper or lower diagonal. – chl♦ Sep 8 '11 at 19:23 @chi thanks. Can you please explain your comment a bit more. – Adhesh Josh Sep 9 '11 at 22:41 – chl♦ Sep 10 '11 at 9:29

## 4 Answers

I agree with the person who suggested a scatterplot. An R-square gives the proportion of variance statistically explained by A. It doesn't give the effect size in the way I think you mean. If you just want to illustrate a proportion, any figure would do, but very little information is conveyed. The "effect" of the explanatory variable is shown in a scatterplot by the closeness of the points to a regression line (or one drawn "by hand" if need be). - I have seen some textbooks showing two circles, each representing a variable (e.g. A and B). The part of the circles that overlaps illustrates the effect size. I thought doing a 100% stacked bar graph was a better way. – Adhesh Josh Sep 6 '11 at 8:46 +1 Welcome to our site, htr! – whuber♦ Sep 6 '11 at 22:24

Such bar plots are almost completely devoid of content. I would instead show scatterplots of B vs each A, which will really illustrate the relationships. You can fit a surprising number of scatterplots into a small space. - 1 Thanks. A scatterplot shows relationship. I want to show effect size. Remember, I am presenting to a non-statistical audience. – Adhesh Josh Sep 4 '11 at 12:47 3 Maybe I'm being too idealistic, but I think the effect size is apparent in the scatterplot in a way that it can be actually understood, while the coefficient of determination values on their own would have no real meaning to non-statisticians. – Karl Sep 4 '11 at 12:51 But how do I show 64% on a scatterplot? – Adhesh Josh Sep 5 '11 at 13:55 1 +1 for Karl's answers. I think that scatter plots are more informative and more easily understood than coefficients of determination and the like.
This is even more true for non-statistical audiences, which probably do not know what a coefficient of determination is. – user5644 Sep 5 '11 at 14:49 1 @Adesh I agree that $r^2$ is useful, I would just rather see a bunch of scatterplots in place of a bar chart of $r^2$ values, and personally I will internally translate a given $r^2$ to the corresponding prototypical scatterplot. – Karl Sep 5 '11 at 18:27 show 1 more comment You could do some sort of graphical correlation matrix. In R: ````#Get the correlations from the data frame #(The fifth column in the iris dataset is a factor, so we're not using it.) r2<-cor(iris[-5])^2 #Plot them plot(rep(1:nrow(r2),each=ncol(r2)),rep(1:ncol(r2),nrow(r2)), main='Relationships among properties of Irises', sub='Larger circles indicate stronger relationships', #The next line just makes it less cluttered. bg=1,pch=21,cex=r2*5,axes=F,xlab='',ylab='' ) #Add labels sapply(1:2,axis,at=1:ncol(r2),labels=colnames(r2),tick=F) ```` I have to note that I'm somewhat skeptical that that's the best way to present your data. A scatterplot will show the strength of a relationship. Among other things, the ratio of the longest length to the smallest width of the convex hull of the points on the plot will be one graphical indication of the variability. And you can use color or a scatterplot matrix to present more than two variables. And if you wanted to display many more than you can in a scatterplot matrix, I'd suggest that you try to group the relationships in order to present fewer variables at once. You could also combine variables with something like principal components analysis if that's appropriate. - – whuber♦ Sep 6 '11 at 22:27 I actually think you'd need more space for a scatterplot matrix because you need to be able to see the many small dots clearly. But I still think a scatterplot matrix is better. – Thomas Levine Sep 7 '11 at 1:06 @Karl Broman's reply suggested one might create a scatterplot matrix to good effect. As an example, here are the famous iris data, ridiculously scaled down to press the point that you don't need lots of ink or space on the page to be able to present far more information than a mere table of numbers or simple bar chart or correlation plot would reveal. For instance, none of those would indicate the separation into distinct populations that is so clear here: (The variables, left to right and top to bottom, are sepal length, sepal width, petal length, and petal width, all expressed in centimeters. Colors indicate species: i. setosa is blue, i. versicolor is red, and i. virginica is green.) Some statistical software (such as Systat), as an option, will superimpose ellipses on the scatterplots to approximate contours of the fitted bivariate Normal distribution. This visual aid is a nice graphical way to indicate correlation coefficients. - The contours don't indicate the actual correlation coefficient do they? – Andy W Sep 7 '11 at 14:53 1 @Andy Provided the scales on the axes are standardized to 1 SD for each variable, the contours correspond to correlation coefficients $\rho$. Circular contours correspond to $\rho\sim 0$; skinny contours sloping up correspond to $\rho\sim 1$; and skinny contours sloping down correspond to $\rho\sim -1$. – whuber♦ Sep 7 '11 at 16:00
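To make the last comment quantitative, a standard fact about the bivariate normal is worth recording here (added for reference; $x$ and $y$ denote the two variables after standardizing each to mean $0$ and SD $1$): the fitted density has elliptical contours
$$\frac{x^{2}-2\rho xy+y^{2}}{1-\rho^{2}}=c,$$
whose principal axes lie along $y=x$ and $y=-x$ with half-axis lengths proportional to $\sqrt{1+\rho}$ and $\sqrt{1-\rho}$. They are circles when $\rho=0$ and collapse onto the line $y=\pm x$ as $\rho\to\pm1$, which is exactly the visual cue described in the comment above.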
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9158386588096619, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/45326/touch-up-on-trig
# Touch up on Trig

I have forgotten a few things about trigonometry and angles. I have this trig equation, $\sin \theta = \frac{200\text{ dyn}}{224\text{ dyn}}$. What exactly are the steps of getting the angle, $\theta$? - Is there supposed to be an equal sign in your equation? What is $dyn$? – Jim Belk Jun 14 '11 at 16:45 I'm assuming this says: sin(theta)=200/224, in which case you use the inverse sine (aka the arcsine function): asin(200/224)=theta. Because if you asin() both sides... asin(sin(theta))=asin(200/224) and asin(sin(anything)) = anything. – Matt Razza Jun 14 '11 at 16:45 Without an equals sign, that's not an equation you've got there. – Milosz Wielondek Jun 14 '11 at 16:45 Oh yes! sorry about that. – Dan the Man Jun 14 '11 at 16:59 1 – Américo Tavares Jun 14 '11 at 17:11

## 2 Answers

I do not know whether you are measuring angles in radians or in degrees. So I will assume degrees. If you know about radians, I am sure that you can make the requisite adjustments. First note that there is not a single answer to your problem. For if $\sin \theta= a$, then we also have $\sin(180^\circ-\theta)=a$. And if $\sin\theta=a$, then $\sin(\theta+360n)=a$ for any integer $n$. But for any number $a$ such that $0\le a \le 1$, there is exactly one $\theta$ between $0^\circ$ and $90^\circ$ such that $\sin\theta=a$. On most older calculators, entering $a$ and then pressing the $\sin^{-1}$ button will give you the (unique) $\theta$ between $-90^\circ$ and $90^\circ$ which solves the equation $\sin\theta=a$. On many newer calculators, you press the $\sin^{-1}$ button then put in $a$. Make sure your calculator is set to degrees. When I do this, I get roughly $63.23$. Try it on your calculator, or in the calculator program on your computer. It makes sense that the angle is fairly large, since the sine of a $90$ degree angle is $1$, and $200/224\approx 0.89$ is fairly close to $1$. However, as I pointed out earlier, there are other angles whose sine is $200/224$. An important one that you are likely to bump into in geometric work is $180$ minus the number we just computed. This is roughly $116.77$ degrees. And, as was pointed out, there are infinitely many angles whose sine is $200/224$. However, there is only one from $0$ to $90$ degrees, and only one from $90$ degrees to $180$ degrees, and we have found them. You might want to check, by calculating the sine of each of these angles, that each of them has the right sine, at least to the limits of calculator accuracy. - @yunone: Thanks for the edit! – André Nicolas Jun 14 '11 at 18:49 Thank you! sin -1 works perfectly. – Dan the Man Jun 14 '11 at 19:54

The solution to the equation $$\sin \theta \;=\; \frac{200}{224}$$ is $$\theta \;=\; \arcsin\left(\frac{200}{224}\right)$$ where $\arcsin$ denotes the inverse sine function. - Thank you very much! – Dan the Man Jun 14 '11 at 19:55 $\tfrac{200}{224}=\tfrac{25}{28}$ – UnkleRhaukus Dec 24 '12 at 9:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371717572212219, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/220292/prove-that-entropy-is-maximized-when-probability-is-1-n/220296
Prove that entropy is maximized when probability is $1/n$ How can it be proven that the entropy of a dice roll is maximized when the probability of each of its $6$ faces is equal, $1/6$? - 2 Answers Surprise is defined as $-\log p\{X=x\}$. A good way to think of entropy is the "expected surprise". In this sense, it's easy to see that the uniform distribution maximizes the expected surprise. - not sure how to do it... can you help? – whynot Oct 26 '12 at 5:55 The entropy is given by $-\sum p_i\ln p_i$. Use Jensen's inequality with the logarithm function. - not sure how to do it... can you help? – whynot Oct 26 '12 at 4:52
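Filling in the step that both answers leave to the reader (a standard application of Jensen's inequality to the concave logarithm): for a distribution $(p_1,\dots,p_n)$,
$$H(p)=-\sum_{i=1}^{n}p_i\ln p_i=\sum_{i=1}^{n}p_i\ln\frac{1}{p_i}\;\le\;\ln\Bigl(\sum_{i=1}^{n}p_i\cdot\frac{1}{p_i}\Bigr)=\ln n,$$
with equality exactly when all the $1/p_i$ are equal, i.e. $p_i=1/n$ for every $i$. For a die, $n=6$, so the entropy is maximized at the uniform distribution, with maximum value $\ln 6$.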
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9440367221832275, "perplexity_flag": "head"}
http://mathoverflow.net/questions/118432/lax-pairs-for-linear-pdes
## Lax Pairs for Linear PDEs

I'm trying to understand the discussion around equation (2.1) of the paper http://www.jstor.org/stable/53053. It says that the linear PDE $M(\partial_x,\partial_y)q=0$ with constant coefficients has the Lax pair $\mu_x+ik\mu=q$ and $M(\partial_x,\partial_y)\mu=0$, where k is any complex number and $\mu$ is a function. The way I'm used to thinking of Lax pairs is as operators $L$ and $B$ such that $\dot{L}+[L,B]=0$ is equivalent to the original PDE. This is equivalent to requiring that the equations $L\phi=\lambda\phi$ and $\dot{\phi}=B\phi$ are compatible, where $\lambda$ is a fixed eigenvalue and $\phi$ is any function. Can anyone explain how this connects with the discussion in the paper? What are $L$ and $B$ in the above case? Thanks! -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9463929533958435, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/144280-pointwise-uniform-convergence.html
# Thread: 1. ## Pointwise and uniform convergence I got stuck on two problems that involves finding the pointwisse limits of piecewise functions. Hope someone can give me a hand. 1) Let $\{a_n\}$ be a sequence that contains each rational in [0,1] precisely once. For each $n\in N$, define $f_n(x)= 0$ if x is irrational $f_n(a_i)=0$ if $i > n$ $f_n(a_i)=1$ if $i \leq n$ Prove that $\{f_n\}$ converges pointwise on [0,1] to a function that is not R-integrable. My attempt: I guess that this sequence of functions converges to the rational characteristic function, but I can't really show $lim f_n=f$ as n approaches infinity. I tried this again and I got this far: $f(x)=0$ if x is irrational and $f(x)=1$ if x is rational. Consider: absolute value of $f_n(x)-f(x)$= $0-0=0< \epsilon$ if x is irrational. Choose $\epsilon=1/2$ If x is rational, $f_n(x)-f(x)=1-1=0< \epsilon$. I think f(x)=1 because eventually $i \leq n$ as n approaches infinity. 2) For $n \in N$ define $f_n: [0,1] \rightarrow R$ by $f_n(x)=1/x$ if $x \in [1/n,1]$ and $f_n(x)=n^2x$ if $x \in [0,1/n)$. Prove that $\{f_n\}$ does not converge uniformly on [0,1] but converge uniformly on [M,1] where 0<M<1. I don't know what the pointwise limit of $f_n$, so I can't go any anywhere. It would be really helpful if someone helps me how to find the pointwise limit 2. Originally Posted by jackie I got stuck on two problems that involves finding the pointwisse limits of piecewise functions. Hope someone can give me a hand. 1) Let $\{a_n\}$ be a sequence that contains each rational in [0,1] precisely once. For each $n\in N$, define $f_n(x)= 0$ if x is irrational $f_n(a_i)=0$ if $i > n$ $f_n(a_i)=1$ if $i \leq n$ Prove that $\{f_n\}$ converges pointwise on [0,1] to a function that is not R-integrable. My attempt: I guess that this sequence of functions converges to the rational characteristic function, but I can't really show $lim f_n=f$ as n approaches infinity. I tried this again and I got this far: $f(x)=0$ if x is irrational and $f(x)=1$ if x is rational. Consider: absolute value of $f_n(x)-f(x)$= $0-0=0< \epsilon$ if x is irrational. Choose $\epsilon=1/2$ If x is rational, $f_n(x)-f(x)=1-1=0< \epsilon$. I think f(x)=1 because eventually $i \leq n$ as n approaches infinity. The part for x rational sounds a litte sloppy to me, apart from that I don't quite see where your problem is. To fix that sloppiness you should argue that if $x\in [0;1]\cap\mathbb{Q}$, then there exists a unique $n_0$ such that $x=a_{n_0}$ and therefore for all $n\geq n_0$ we have that $|f_n(x)-1|=|f_n(a_{n_0})-1|=|1-1|=0<\epsilon$ f is not integrable because all the lower sums are 0 and all the upper sums are 1. 2) For $n \in N$ define $f_n: [0,1] \rightarrow R$ by $f_n(x)=1/x$ if $x \in [1/n,1]$ and $f_n(x)=n^2x$ if $x \in [0,1/n)$. Prove that $\{f_n\}$ does not converge uniformly on [0,1] but converge uniformly on [M,1] where 0<M<1. I don't know what the pointwise limit of $f_n$, so I can't go any anywhere. It would be really helpful if someone helps me how to find the pointwise limit What about $f(x) := \begin{cases}0, &\text{if } x=0\\\tfrac{1}{x}, &\text{otherwise}\end{cases}$
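To see why this candidate limit works, here is a sketch filling in what the thread leaves open. For fixed $x\in(0,1]$, once $n>1/x$ we have $x\in[1/n,1]$ and hence $f_n(x)=1/x$, so $f_n(x)\to 1/x$; also $f_n(0)=0$ for every $n$. Thus $f_n\to f$ pointwise with $f$ as above. On $[M,1]$ with $0<M<1$, as soon as $1/n<M$ the functions $f_n$ and $f$ agree on all of $[M,1]$, so $\sup_{[M,1]}|f_n-f|=0$ eventually and the convergence is uniform there. On $[0,1]$ the convergence cannot be uniform: taking $x_n=\tfrac{1}{2n}\in[0,1/n)$ gives
$$|f_n(x_n)-f(x_n)|=\Bigl|n^{2}\cdot\tfrac{1}{2n}-2n\Bigr|=\tfrac{3n}{2}\to\infty,$$
so $\sup_{[0,1]}|f_n-f|$ does not tend to $0$. (Alternatively, each $f_n$ is continuous on $[0,1]$, so a uniform limit would have to be continuous and bounded, while $f$ is unbounded near $0$.)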
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 54, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9356244802474976, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/128454-uniform-convergence-print.html
# Uniform Convergence

• February 11th 2010, 06:20 PM tedii
Uniform Convergence
I have been staring at this problem all day and I can't figure out how to even begin to form the subsequence. Can someone give me a jump start? Let $\{f_n\}$ be a uniformly bounded sequence of functions which are Riemann-integrable on [a, b], and put $F_n(x) = \int_a^x f_n(t)\, dt \quad (a \leq x \leq b).$ Prove that there exists a subsequence $\{F_{n_k}\}$ which converges uniformly on [a, b].
• February 11th 2010, 07:43 PM tedii
The problem is from Blue Rudin chapter 7 problem 18. Also I believe I am supposed to follow a format similar to the proof of Theorem 7.23. I hope this helps. Thank you for any advice.
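One standard way to get started, sketched here for completeness: the key observation is that the functions $F_n$ are uniformly bounded and equicontinuous. If $|f_n|\le M$ on $[a,b]$ for all $n$, then for all $x,y\in[a,b]$
$$|F_n(x)|\le M(b-a)\qquad\text{and}\qquad |F_n(x)-F_n(y)|=\Bigl|\int_y^x f_n(t)\,dt\Bigr|\le M|x-y|,$$
so $\{F_n\}$ is uniformly bounded and equi-Lipschitz, hence equicontinuous. The diagonal argument of Theorem 7.23 extracts a subsequence converging at every rational point of $[a,b]$, and equicontinuity upgrades this to uniform convergence on all of $[a,b]$; this is exactly the Arzelà–Ascoli argument given a little later in the same chapter of Rudin.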
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331403970718384, "perplexity_flag": "middle"}
http://cms.math.ca/cmb/v55/n4
Canadian Mathematical Society, www.cms.math.ca. Volume 55 Number 4 (Dec 2012). Page Contents: 673 Aizenbud, Avraham; Gourevitch, Dmitry Let $F$ be a non-Archimedean local field or a finite field. Let $n$ be a natural number and $k$ be $1$ or $2$. Consider $G:=\operatorname{GL}_{n+k}(F)$ and let $M:=\operatorname{GL}_n(F) \times \operatorname{GL}_k(F)\lt G$ be a maximal Levi subgroup. Let $U\lt G$ be the corresponding unipotent subgroup and let $P=MU$ be the corresponding parabolic subgroup. Let $J:=J_M^G: \mathcal{M}(G) \to \mathcal{M}(M)$ be the Jacquet functor, i.e., the functor of coinvariants with respect to $U$. In this paper we prove that $J$ is a multiplicity free functor, i.e., $\dim \operatorname{Hom}_M(J(\pi),\rho)\leq 1$, for any irreducible representations $\pi$ of $G$ and $\rho$ of $M$. We adapt the classical method of Gelfand and Kazhdan, which proves the "multiplicity free" property of certain representations, to prove the "multiplicity free" property of certain functors. At the end we discuss whether other Jacquet functors are multiplicity free. 689 Berndt, Ryan We show a pointwise estimate for the Fourier transform on the line involving the number of times the function changes monotonicity. The contrapositive of the theorem may be used to find a lower bound to the number of local maxima of a function. We also show two applications of the theorem. The first is the two weight problem for the Fourier transform, and the second is estimating the number of roots of the derivative of a function. 697 Borwein, Jonathan M.; Vanderwerff, Jon We give precise conditions under which the composition of a norm with a convex function yields a uniformly convex function on a Banach space. Various applications are given to functions of power type. The results are dualized to study uniform smoothness and several examples are provided. 708 Demeter, Ciprian We prove that the Return Times Theorem holds true for pairs of $L^p-L^q$ functions, whenever $\frac{1}{p}+\frac{1}{q}<\frac{3}{2}$. 723 Gigli, Nicola; Ohta, Shin-Ichi We extend results proved by the second author (Amer. J. Math., 2009) for nonnegatively curved Alexandrov spaces to general compact Alexandrov spaces $X$ with curvature bounded below. The gradient flow of a geodesically convex functional on the quadratic Wasserstein space $(\mathcal P(X),W_2)$ satisfies the evolution variational inequality. Moreover, the gradient flow enjoys uniqueness and contractivity. These results are obtained by proving a first variation formula for the Wasserstein distance. 736 Hernández, Eduardo; O'Regan, Donal In this paper we discuss the existence of mild and classical solutions for a class of abstract non-autonomous neutral functional differential equations. An application to partial neutral differential equations is considered. 752 Hickel, M.; Rond, G. We prove the existence of an approximation function for holomorphic solutions of a system of real analytic equations. For this we use ultraproducts and Weierstrass systems introduced by J. Denef and L. Lipshitz. We also prove a version of the Płoski smoothing theorem in this case. 762 Li, Hanfeng We show that any Lipschitz projection-valued function $p$ on a connected closed Riemannian manifold can be approximated uniformly by smooth projection-valued functions $q$ with Lipschitz constant close to that of $p$.
This answers a question of Rieffel. 767 Martini, Horst; Wu, Senlin We extend the notion of Zindler curve from the Euclidean plane to normed planes. A characterization of Zindler curves for general normed planes is given, and the relation between Zindler curves and curves of constant area-halving distances in such planes is discussed. 774 Mollin, R. A.; Srinivasan, A. We provide a criterion for the central norm to be any value in the simple continued fraction expansion of $\sqrt{D}$ for any non-square integer $D>1$. We also provide a simple criterion for the solvability of the Pell equation $x^2-Dy^2=-1$ in terms of congruence conditions modulo $D$. 783 Motallebi, M. R.; Saiflu, H. In this paper we define lower, upper, and symmetric completeness and discuss closure of the sets in product and direct sums. In particular, we introduce suitable bases for these topologies, which leads us to investigate completeness of the direct sum and its components. Some results obtained about $X$-topologies and polars of the neighborhoods. 799 Novelli, Carla; Occhetta, Gianluca Let $X$ be a smooth complex projective variety, and let $H \in \operatorname{Pic}(X)$ be an ample line bundle. Assume that $X$ is covered by rational curves with degree one with respect to $H$ and with anticanonical degree greater than or equal to $(\dim X -1)/2$. We prove that there is a covering family of such curves whose numerical class spans an extremal ray in the cone of curves $\operatorname{NE}(X)$. 815 Oberlin, Daniel M. We establish a mixed norm estimate for the Radon transform in $\mathbb{R}^2$ when the set of directions has fractional dimension. This estimate is used to prove a result about an exceptional set of directions connected with projections of planar sets. That leads to a conjecture analogous to a well-known conjecture of Furstenberg. 821 Perez-Garcia, C.; Schikhof, W. H. The study carried out in this paper about some new examples of Banach spaces, consisting of certain valued fields extensions, is a typical non-archimedean feature. We determine whether these extensions are of countable type, have $t$-orthogonal bases, or are reflexive. As an application we construct, for a class of base fields, a norm $\|\cdot\|$ on $c_0$, equivalent to the canonical supremum norm, without non-zero vectors that are $\|\cdot\|$-orthogonal and such that there is a multiplication on $c_0$ making $(c_0,\|\cdot\|)$ into a valued field. 830 Reinhold, Karin; Savvopoulou, Anna K.; Wedrychowicz, Christopher M. Let $(X,\mathcal{B},m,\tau)$ be a dynamical system with $(X,\mathcal{B},m)$ a probability space and $\tau$ an invertible, measure preserving transformation. This paper deals with the almost everywhere convergence in $\textrm{L}^1(X)$ of a sequence of operators of weighted averages. Almost everywhere convergence follows once we obtain an appropriate maximal estimate and once we provide a dense class where convergence holds almost everywhere. The weights are given by convolution products of members of a sequence of probability measures $\{\nu_i\}$ defined on $\mathbb{Z}$. We then exhibit cases of such averages where convergence fails. 842 Sairaiji, Fumio; Yamauchi, Takuya Frey and Jarden asked if any abelian variety over a number field $K$ has the infinite Mordell-Weil rank over the maximal abelian extension $K^{\operatorname{ab}}$. 
In this paper, we give an affirmative answer to their conjecture for the Jacobian variety of any smooth projective curve $C$ over $K$ such that $\sharp C(K^{\operatorname{ab}})=\infty$ and for any abelian variety of $\operatorname{GL}_2$-type with trivial character. 850 Shparlinski, Igor E.; Stange, Katherine E. We obtain nontrivial estimates of quadratic character sums of division polynomials $\Psi_n(P)$, $n=1,2, \dots$, evaluated at a given point $P$ on an elliptic curve over a finite field of $q$ elements. Our bounds are nontrivial if the order of $P$ is at least $q^{1/2 + \varepsilon}$ for some fixed $\varepsilon > 0$. This work is motivated by an open question about statistical indistinguishability of some cryptographically relevant sequences that was recently brought up by K. Lauter and the second author. 858 von Renesse, Max-K. We show that the Schrödinger equation is a lift of Newton's third law of motion $\nabla^\mathcal W_{\dot \mu} \dot \mu = -\nabla^\mathcal W F(\mu)$ on the space of probability measures, where derivatives are taken with respect to the Wasserstein Riemannian metric. Here the potential $\mu \to F(\mu)$ is the sum of the total classical potential energy $\langle V,\mu\rangle$ of the extended system and its Fisher information $\frac {\hbar^2} 8 \int |\nabla \ln \mu |^2 \,d\mu$. The precise relation is established via a well-known (Madelung) transform which is shown to be a symplectic submersion of the standard symplectic structure of complex valued functions into the canonical symplectic space over the Wasserstein space. All computations are conducted in the framework of Otto's formal Riemannian calculus for optimal transportation of probability measures. 870 Wang, Hui; Deng, Shaoqiang In this paper we study left invariant Einstein-Randers metrics on compact Lie groups. First, we give a method to construct left invariant non-Riemannian Einstein-Randers metrics on a compact Lie group, using the Zermelo navigation data. Then we prove that this gives a complete classification of left invariant Einstein-Randers metrics on compact simple Lie groups with the underlying Riemannian metric naturally reductive. Further, we completely determine the identity component of the group of isometries for this type of metrics on simple groups. Finally, we study some geometric properties of such metrics. In particular, we give the formulae of geodesics and flag curvature of such metrics. 882 Xueli, Song; Jigen, Peng $L_p$ stability and exponential stability are two important concepts for nonlinear dynamic systems. In this paper, we prove that a nonlinear exponentially bounded Lipschitzian semigroup is exponentially stable if and only if the semigroup is $L_p$ stable for some $p>0$. Based on the equivalence, we derive two sufficient conditions for exponential stability of the nonlinear semigroup. The results obtained extend and improve some existing ones.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8668869137763977, "perplexity_flag": "head"}
http://mathoverflow.net/questions/67259/countable-dense-sub-groups-of-the-reals
Countable Dense Sub-Groups of the Reals…

Can countable dense additive subgroups of the reals be well-ordered up to isomorphism by inclusion? If so, is $\mathbb{Q}$ the smallest (up to isomorphism) countable dense subgroup of the reals, and what is the second smallest (up to isomorphism)? -

4 Answers

`$\{a2^b:a,b\in\mathbb Z\}$` and `$\{a3^b:a,b\in\mathbb Z\}$` are both countable dense additive subgroups of the reals, and they are not embeddable in each other (hence $\mathbb Q$ is embeddable in neither). Also, let `$\{p_k:k\in\mathbb N\}$` be an enumeration of primes, and let $A_k$ consist of all fractions $a/b$ of integers such that $b$ is not divisible by $p_0,\dots,p_k$. Then $A_k$ is an additive group, and $A_0\supset A_1\supset A_2\supset\dots$ is an infinite strictly decreasing chain with respect to either inclusion or embeddability. Hence the poset of countable dense subgroups is not even well-founded. -

One can carry the idea of Emil's answer a bit further: for any set $A$ of primes, let $G_A$ be the set of rational numbers $\frac ab$ for integers $a$, $b$ where every prime divisor of $b$ is in $A$. If $A$ is nonempty, then $G_A$ is a countable dense additive subgroup of $\mathbb{R}$, and furthermore $A\subset B$ if and only if $G_A$ is isomorphic to a subgroup of $G_B$. It follows that the lattice of countable dense additive subgroups of $\mathbb{R}$ includes a copy of the powerset lattice $\langle P(\mathbb{N}),\subset\rangle$. In particular, since this powerset order is universal for all countable partial orders (by considering the map of a point to its lower cone), it follows that for any countable partial order, one can find a family of countable dense additive subgroups whose subset embeddability relation is exactly that order. So this is very far from a well-order. Indeed, since the rational order embeds this way, by using the corresponding Dedekind cuts, one can find an uncountable chain of countable dense additive subgroups of $\mathbb{R}$ whose subset relation has the order type of the continuum $\langle\mathbb{R},\lt\rangle$. -

This should be a comment to Joel's answer, but it got too long. Joel has exhibited a good part, but not quite all, of the classification of non-trivial subgroups of the rationals (also known as the classification of rank-one, torsion-free, abelian groups), a classification which, if I remember correctly, goes back to Reinhold Baer in the 1930's. For the record, here's the classification. Let $s$ be a function from a subset $D$ of the set of primes into the non-negative integers. Associated to $s$ is the group `$G_s$` of those rational numbers expressible as $a/b$ with integers $a$ and $b$ such that, for each prime $p\in D$, the power of $p$ in the prime decomposition of $b$ is at most `$p^{s(p)}$`. (Primes not in $D$ can occur arbitrarily often in denominators.) Then the non-zero subgroups of $\mathbb{Q}$ that contain the integers are exactly these `$G_s$`'s. (Note that, up to isomorphism, containing the integers is unimportant, as it can always be achieved by rescaling.) Two of them, say `$G_s$` and `$G_t$`, are isomorphic iff $s$ and $t$ have the same domain and agree at all but finitely many points in that domain.
All these groups are dense in the reals, except for those isomorphic to `$\mathbb Z$`, i.e., those of the form `$G_s$` where the domain of $s$ consists of all the primes and $s(p)=0$ for all but finitely many $p$. Note that all of this concerns only groups of rank 1, i.e., those in which every two elements are linearly dependent over the rationals. For groups of higher but still finite rank, things get more complicated --- in a precise sense: If I remember correctly, the complexity of the isomorphism problem for torsion-free abelian groups of rank $n$ is known to be strictly increasing (in the sense of Borel reducibility) as $n$ increases. - In the last sentence of my answer, "If I remember correctly" should be replaced by "According to a theorem of Simon Thomas ["The classification problem for torsion-free abelian groups of finite rank," J. Amer. Math. Soc. 16 (2003) 233-258]." – Andreas Blass Jun 8 2011 at 17:46 If $u,v \in \mathbb R$ are linearly independent over $\mathbb Q$, then $G = \{au+bv : a,b\in\mathbb Z\}$ is dense. -
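A one-line reason for the last claim, spelled out here for completeness (it is the standard dichotomy for subgroups of the line): a subgroup of $(\mathbb R,+)$ is either dense or of the form $c\mathbb Z$ for some $c\ge 0$. If $G=\{au+bv : a,b\in\mathbb Z\}$ were equal to $c\mathbb Z$, then $u=mc$ and $v=nc$ for some nonzero integers $m,n$ (note $u,v\neq 0$), so $nu-mv=0$ would be a non-trivial dependence over $\mathbb Q$, contradicting the assumed linear independence of $u$ and $v$. Hence $G$ is dense.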
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9396349191665649, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/64667-circle-geometry-once-more.html
# Thread:

1. ## Circle geometry once more!

A circle with the centre (-1,-2) is tangent to the line 3x+4y-14=0. Determine the radius of the circle algebraically. The part that confuses me is the equation there, I'm not sure what to do with it or anything, so please give me step by step details; they would be greatly appreciated.

2. Hi. If you know the formula for the distance between a point and a line it is very easy, since the radius of the circle is the distance between the center C and the tangent line (D). The formula for a point A(xA,yA) and a line (d) ax+by+c=0 is $d(A,(d)) = \frac{|a x_A + b y_A + c|}{\sqrt{a^2 + b^2}}$, so $R = d(C,(D)) = \frac{|3 x_C + 4 y_C - 14|}{\sqrt{3^2 + 4^2}} = 5$. If you don't know the formula then you can use the point of tangency H with the conditions (i) H is on the line (D), (ii) (CH) is perpendicular to (D).

3. ## formula

Is the formula d = square root of (x2-x1)^2 + (y2-y1)^2?

4. The formula you have written is the distance between 2 points M1(x1,y1) and M2(x2,y2): $M_1M_2 = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$

5. So for that formula, what numbers do I plug into it? I'm so confused.

6. If you are not allowed to use the formula I gave you in my first post, here is how you can manage. Let H be the tangency point of the circle and the line. Then (i) $\overrightarrow{CH}\cdot\overrightarrow{u}=0$ where u is a direction vector of the line, and (ii) 3xH+4yH-14=0. This gives 2 equations, and solving them gives the coordinates of H. Tell me if you need additional help.

7. No no, I'm allowed to use the formula. I'm just really confused, I don't know where to put the numbers and what I'm supposed to end up with.

8. The distance between a point A(xA,yA) and a line (d) ax+by+c=0 is $d(A,(d)) = \frac{|a x_A + b y_A + c|}{\sqrt{a^2 + b^2}}$. You are looking for the distance between the center C(-1,-2) and the line 3x+4y-14=0. The formula gives $d = \frac{|3 \cdot (-1) + 4 \cdot (-2) - 14|}{\sqrt{3^2 + 4^2}} = \frac{|-25|}{\sqrt{25}} = 5$
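For completeness, here is the tangency-point method from posts 2 and 6 carried out explicitly (this worked computation is added here; it is not in the original thread). Writing $H=C+t(3,4)$, i.e. moving from $C=(-1,-2)$ along the normal direction of the line, automatically makes $\overrightarrow{CH}$ perpendicular to the line, and requiring $H$ to lie on the line gives
$$3(-1+3t)+4(-2+4t)-14=0\;\Longrightarrow\;25t-25=0\;\Longrightarrow\;t=1,$$
so $H=(2,2)$ and $R=|CH|=\sqrt{(2-(-1))^2+(2-(-2))^2}=\sqrt{9+16}=5$, in agreement with the distance-formula computation in post 8.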
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8836345672607422, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/50187/why-is-the-exterior-algebra-a-bi-algebra-and-even-a-hopf-algebra
Why is the exterior algebra a bi-algebra (and even a Hopf algebra)? According to the wikipedia, the exterior algebra of a $\Bbbk$-vector space $V$ is initial with respect to being unital and there existing a $\Bbbk$-linear map $j\colon V\to A$ such that $j(v)^2=0$ for all $v\in V$. This is a reasonable algebra to consider if one is interested in measuring $k$-dimensional volumes, which are specified by $k$ linearly independent vectors, and which are degenerate if the vectors are not linearly independent (equivalently, for $char \Bbbk\neq 2$, measuring $k$-dimensional signed volumes). Then multiplication consists simply of throwing in extra vectors. My question, inspired by the recent question on the meaning of addition in the exterior algebra, is about the meaning of co-multiplication. I happen to know virtually nothing about co-algebras outside of their formalism, so in part I am looking for answers that will help me build some intuition, geometric and otherwise, about what's happening. 1. What is a categorical argument that the exterior algebra satisfying the above universal property has co-multiplication (and is thus a bi-algbera)? 2. Is there a (heurisitc) geometric interpretation of the exterior algebra's co-multiplication similar to the $k$-dimensional volume tracking I sketched above? 3. (Bonus question): Is there any geometric significance to the anti-pode that makes the exterior (bi-)algebra into a Hopf algebra? - I cannot see what your second paragraph means, really... – Mariano Suárez-Alvarez♦ Jul 7 '11 at 23:39 Heuristically, I think of non-zero elements $v_1\wedge v_2\dots\wedge v_k$ of the exterior algebra as assigning to the (oriented) parallelepiped generated by $(v_1,\dots,v_k)$ a $k$-dimensional signed volume of $1$. Multiplication then naturally takes us from assigning $k-$ and $l$-dimensional volumes to the parallelepipeds generated by $(v_1,\dots,v_k)$ and $(w_1,\dots,w_l)$ to assigning $k+l$-dimensional signed volumes by assigning $k+l$-dimensional volume $1$ to the parallelepipied generated by $(v_1,\dots,v_k,w_1,\dots,w_l)$, unless the parallelepiped generated is of dimension $<k+l$. – Vladimir Sotirov Jul 8 '11 at 0:45 if that helps you, I guess it's all great! :) In any case, not all elements of the exterior algebra are of the form $v_1\wedge\cdots\wedge v_k$. – Mariano Suárez-Alvarez♦ Jul 8 '11 at 3:05 2 Answers Perhaps it is first easier to understand why the symmetric algebra is a Hopf algebra. This is because $S(V)$ is nothing more than the ring of polynomial functions on $V^{\ast}$, and $V^{\ast}$ is naturally a group scheme with vector addition as the group operation. Dualizing all of the maps you get from this group structure gives you precisely the Hopf algebra structure: you get the antipode from the inverse in $V^{\ast}$ and the comultiplication from the addition in $V^{\ast}$. The exterior algebra is a sort of "twisted" symmetric algebra, so the same thing is true for it. One way to formalize this is that the exterior algebra is precisely the symmetric algebra, but where $V$ is regarded as an odd supervector space. This is because the symmetric monoidal structure on supervector spaces is slightly different from what one might expect: it is given by $a \otimes b \mapsto (-1)^{|a| |b|} b \otimes a$. See, for example, this blog post. I think there is a physical interpretation here, but I don't know how to be precise about it. The symmetric and exterior algebras are respectively the bosonic and fermionic Fock spaces. 
The comultiplication ought to have a physical interpretation as "duplication of states," or something like that. There is an idea you should understand if you haven't seen it before, and it is the notion of a group object in a category. If you write down what it means for an affine scheme $\text{Spec } R$ to be a group object in the category of affine schemes, then dualize all of the maps, you find that you have equipped $R$ with the structure of a Hopf algebra. A concise way to say this is that Hopf algebras are cogroup objects in a category of algebras, which just means that they are group objects in the opposite categories. - Let $\Lambda V$ be the exterior algebra on $V$, and consider the tensor product algebra $\Lambda V\otimes\Lambda V$ in the sense of graded algebras, so that if $a$, $b$, $c$ and $d$ are homogeneous elements in $\Lambda V$ we have $$a\otimes b\cdot c\otimes d=(-1)^{|b||c|}ac\otimes bd.$$ If $v\in V$, then you can check that $(1\otimes v+v\otimes1)^2=0$. Therefore we have a map $$j:v\in V\mapsto v\otimes1+1\otimes v\in\Lambda V\otimes\Lambda V$$ such that $j(v)^2=0$ for all $v\in V$. As you observed, this $j$ then induces an algebra map $$\Delta:\Lambda V\to\Lambda V\otimes\Lambda V.$$ Using the universal property of $\Lambda V$ it is easy to show that this turns $\Lambda V$ into a Hopf algebra in the graded sense. For example, we need to show $\Delta$ is coassociative: but the maps $(\Delta\otimes 1)\circ\Delta$ and $(1\otimes\Delta)\circ\Delta$ are two algebra maps $\Lambda V\to\Lambda V\otimes\Lambda V\otimes\Lambda V$, so the universal property tells us that to check they coincide it is enough to show their restrictions to $V\subseteq\Lambda V$ coincide, and this can be done by computing explicitly. It is important that the tensor product algebra $\Lambda V\otimes\Lambda V$ be taken in the graded sense here. In the ungraded sense, $\Lambda V$ is generally not a Hopf algebra: for example, when $V$ is one dimensional, $\Lambda V\cong k[x]/(x^2)$ as an algebra and if the characteristic of the field $k$ is not two, this cannot be made into a Hopf algebra in any way. -
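For concreteness, here is the "computing explicitly" step of the coassociativity check, written out on a generator $v\in V$ (a small worked supplement to the answers above, not part of the original thread):

$$(\Delta\otimes 1)\Delta(v)=(\Delta\otimes 1)(v\otimes 1+1\otimes v)=v\otimes 1\otimes 1+1\otimes v\otimes 1+1\otimes 1\otimes v,$$
$$(1\otimes\Delta)\Delta(v)=(1\otimes\Delta)(v\otimes 1+1\otimes v)=v\otimes 1\otimes 1+1\otimes v\otimes 1+1\otimes 1\otimes v.$$

The two algebra maps agree on $V$, hence on all of $\Lambda V$ by the universal property. The counit is determined by $\varepsilon(v)=0$ and the antipode by $S(v)=-v$ on generators; heuristically this is the exterior-algebra analogue of the inversion map $\varphi\mapsto-\varphi$ on $V^{\ast}$ that gives the antipode of the symmetric algebra in the group-scheme picture above.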
http://mathoverflow.net/questions/30948/kalinins-formulation-of-the-anosov-closing-lemma/30954
## Kalinin’s formulation of the Anosov closing lemma I'm trying to read a paper of Boris Kalinin on the cohomology of dynamical systems for a project. The material is geared towards topologically transitive Anosov diffeomorphisms (which is how the initial (abelian) results were proved by Livsic). However, he axiomatizes things for homeomorphisms of metric spaces. It is necessary in the proof to use a version of the Anosov closing lemma, but it looks stronger than the one I've seen. I'd like to know whether elementary techniques will suffice to prove it. Background: The statement of the closing lemma that I learned initially is as follows. Let $M$ be a compact manifold, $f$ an Anosov diffeomorphism. Put a metric $d$ on $M$, and fix $\epsilon>0$. Then there is $\delta$ such that if $n \in \mathbb{N}$ and $d(f^n(x), x)<\delta$, then there is $p \in M$ with $f^n(p)=p$ and $d(p,x)<\epsilon$. In other words, "approximately periodic points" can be approximated closely by actual ones. The property Kalinin stipulates in section 1 of his paper is that there is a type of exponential closeness. In other words, Kalinin wants that $c,\delta, \gamma>0$ exist such that any $x \in X$ with $d(f^n(x),x)<\delta$ can have the whole orbit be exponentially approximated by a periodic orbit (of $p$ with $f^n(p)=p$). More precisely, one has $d(f^i(p), f^i(x))< c d(f^i(x),x) e^{-\gamma \min(i,n-i)}$ for each $i=0, \dots, n-1$. This means that the orbits of $p$ and $x$ get even closer in the middle, and this is a strengthening of the usual closing condition. One can motivate this fact for Anosov diffeomorphisms geometrically by considering hyperbolic linear maps and drawing a picture of the stable and unstable subspaces, and I am told that it is a straightforward (and "effective") generalization of the usual statement of the closing lemma. This seems more like an intuitive aid rather than a rigorous proof for general Anosov diffeomorphisms, though. However, Kalinin goes on to say more. He assumes that there exists $y \in X$ such that $d(f^i(x), f^i(y)) \leq \delta e^{-\gamma i}$, $d(f^i(y), f^i(p)) \leq \delta e^{-\gamma(n-i)}$. Immediately thereafter, he states that this is true for Anosov diffeomorphisms in view of the closing lemma. Questions: 1) Can one prove the (stronger) version of the closing lemma in Kalinin's paper using the usual statement itself together with standard techniques (i.e. successive approximation, basic linear algebra for hyperbolic maps, or lemmas like this one)? The books I have seen do not mention it, and certainly say nothing about a point $y$ as in the statement. 2) Does anyone know a good reference for this material (or for the general theory of Anosov diffeomorphisms, for that matter)? - ## 2 Answers The closing lemma as stated by Kalinin can be found in many textbooks, e.g. Katok-Hasselblatt "Introduction to the modern theory of dynamical systems", corollary 6.4.17. The closing lemma really gives a periodic point close to x, with iterates also close to the iterates of x until the orbit of x returns. That's not just the fact that periodic points are close to non-wandering points. The point y is obtained by taking the intersection of the local stable set of x with the nth pull-back of the local unstable set of $f^n(p)$. Draw a picture to understand what's going on. There are "geometric" proofs of the closing lemma that build y before p.
And of course the original article of Livsic contains such a proof. - Thanks for your answer. It's ironic that I'm currently at PSU and the library doesn't have an available copy of Katok-Hasselblatt, so I hadn't checked that. Fortunately, I should be able to borrow one soon... – Akhil Mathew Jul 7 2010 at 20:58 Besides the Hasselblatt and Katok bible, these are the references on Anosov diffeomorphisms that I found worth buying in the past six months: HK's Handbook of Dynamical Systems vol 1A, Gallavotti's books (available online and giving some great physical background and intuition, but probably not of any interest for you otherwise) and last but certainly not least Bowen's Equilibrium states and the ergodic theory of Anosov diffeomorphisms, which covers the closing lemma (3.8). There is a PDF of this online that should be easy to find. - I should elaborate: Gallavotti's "short treatise" and "aspects" books are probably the most relevant here. – Steve Huntsman Jul 7 2010 at 21:02 Thanks! The last one in particular looks helpful. – Akhil Mathew Jul 7 2010 at 23:34
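To make the construction described in the first answer slightly more explicit (this is only a sketch restating that answer in symbols, with $W^s_{loc}$ and $W^u_{loc}$ denoting local stable and unstable sets), one takes

$$y \in W^s_{loc}(x)\,\cap\, f^{-n}\big(W^u_{loc}(f^n(p))\big),$$

so that $d(f^i(x),f^i(y))$ decays exponentially in $i$ because $y$ lies on the local stable set of $x$, while $d(f^i(y),f^i(p))$ decays exponentially in $n-i$ because $f^n(y)$ lies on the local unstable set of $f^n(p)$; these are exactly the two estimates quoted from Kalinin's paper.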
http://cogsci.stackexchange.com/questions/3280/how-to-compute-chi-square-value-and-degrees-of-freedom-in-excel
# How to compute Chi-square value and degrees of freedom in Excel? I am trying to understand the statistical test of the chi-square in Excel. I already made the test but I still don't know how to compute the χ² value and what it means, how to compute the degrees of freedom and how to write a proper APA-approved result section. Can anyone help me and explain this in a simple way? I have a population of n=76. 'Left' = 25 and 'Right' = 51. The hypothesis is you would expect a normal distribution ('Left' =38 and 'Right' = 38). So the question is 'is this distribution a coincidence?' The p < .029. Seeing this outcome, I thought that the uneven distribution is not a coincidence and must be due to some variable. Is it true that in this case the degrees of freedom is just n-1? - ## 1 Answer This is really more of a statistical question (except perhaps the bit about APA style). As such it probably belongs on stats.stackexchange.com . A binary variable does not have a "normal distribution". A normal distribution is bell shaped and is relevant to continuous data. Your null hypothesis is that the population proportions for left and right handers are equal. Thus, if your chi-square value is sufficiently large, then you might reject the null hypothesis and conclude that the population proportions are unequal. If $k$ is the number of categories (you have two categories), then degrees of freedom for the one sample chi-square test is $k-1$ (i.e., $2-1 = 1$). To calculate chi-square check out the example here. The following formula in Excel should give you a p-value. For example if your chi-square value was 22, and your degrees of freedom was 3: ```` =1-CHISQ.DIST(22, 3, TRUE) ```` • The first argument is the chi-square value • The second argument is the degrees of freedom • The third argument indicates that a cumulative distribution (CDF) is desired • Thus by taking 1 minus the value of the CDF you get the probability of getting a chi-square value as large or larger than the observed value. You can check your formula by looking at existing calculated tables. http://home.comcast.net/~sharov/PopEcol/tables/chisq.html - Thank you very much Jeromy! This was very helpful! – NinaM Mar 11 at 12:32
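For the specific numbers in the question, here is a quick way to check the arithmetic (an illustrative sketch using SciPy rather than Excel; the function used below is SciPy's and is not mentioned in the original thread):

```python
# Goodness-of-fit chi-square for the observed 25 "left" / 51 "right" split,
# tested against the expected even 38/38 split (the null hypothesis above).
from scipy.stats import chisquare

observed = [25, 51]
expected = [38, 38]  # N = 76 split evenly under the null hypothesis

# chi2 = sum((O - E)^2 / E) = (25-38)^2/38 + (51-38)^2/38 ≈ 8.89
# degrees of freedom = k - 1 = 2 - 1 = 1
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(round(stat, 2), round(p_value, 4))  # -> 8.89 0.0029
```

In APA style this would typically be reported as something like χ²(1, N = 76) = 8.89, p = .003, though the exact wording conventions vary by edition.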
http://rationalwiki.org/wiki/Principle_of_explosion
# Principle of explosion The principle of explosion is a logical rule of inference. According to the rule, from a set of premises in which a sentence $A$ and its negation $\neg A$ are both true (i.e., a contradiction is true), any sentence $B$ may be inferred. It is also known by its Latin name ex contradictione quodlibet, meaning from a contradiction anything follows, or ECQ for short. Classical logic accepts the principle of explosion; but in paraconsistent logic it is rejected. It is also rejected in relevance logic, since relevance logic is based on the competing principle that the premises must be relevant to the conclusion. (All relevance logics are paraconsistent, but not all paraconsistent logics are relevant.) ## Rule Formally, the rule is stated as follows. For arbitrary sentences $A$ and $B$: $\left\{A,\neg A\right\}\vDash B$ Informally, the rule is applied thus: Suppose that one has $A$ and $\neg A$ for premises and wishes to prove $B$. One then may employ reductio ad absurdum, assuming to the contrary $\neg B$ and then bringing down the premises $A$ and $\neg A$. This is, of course, a contradiction, meaning that $\neg B$ may be concluded to be false, i.e., $B$ is true. ## Tacit applications The Bible is known to contain numerous inconsistencies. Although for obvious reasons the principle of explosion is not used to draw conclusions from them, the many inconsistencies allow a Bible quote to be furnished to support just about any position imaginable, across the spectra of politics, economics, and emotions. For example, quotes supporting a speech on wrath and judgment can be quote-mined from the Old Testament, quotes supporting a speech on love and mercy from the New.
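As a minimal illustration of the reductio described above, the derivation of an arbitrary $B$ from contradictory premises can be laid out line by line (a standard classical-logic sketch, not taken from the original article):

$$\begin{array}{lll} 1. & A & \text{premise}\\ 2. & \neg A & \text{premise}\\ 3. & \neg B & \text{assumption for reductio}\\ 4. & \bot & \text{from 1 and 2}\\ 5. & \neg\neg B & \text{reductio ad absurdum, discharging 3}\\ 6. & B & \text{double negation elimination (valid classically)} \end{array}$$

Paraconsistent and relevance logics reject the principle by refusing one of these steps; which step is blocked varies from system to system.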
http://mathoverflow.net/questions/66462/which-functions-of-one-variable-are-derivatives/66496
## Which functions of one variable are derivatives? This is motivated by this recent MO question. Is there a complete characterization of those functions $f:(a,b)\rightarrow\mathbb R$ that are the pointwise derivative of some everywhere differentiable function $g:(a,b)\rightarrow\mathbb R$? Of course, continuity is a sufficient condition. Integrability is not, because the integral defines an absolutely continuous function, which need not be differentiable everywhere. A. Denjoy designed a procedure of reconstruction of $g$, where he used transfinite induction. But I don't know whether he assumed that $f$ is a derivative, or if he had the answer to the above question. - 1 Since differentiable functions are continuous, to be of Baire class $1$ (a pointwise limit of continuous functions) is certainly necessary. – Theo Buehler May 30 2011 at 15:28 1 See this wiki page for some partial results: en.wikipedia.org/wiki/… – Mark Schwarzmann May 30 2011 at 15:30 1 Another necessary condition is mapping intervals into intervals – Pietro Majer May 30 2011 at 23:07 1 @Pietro. I mentioned this point in my answer to the previous MO question; in the form a derivative satisfies the intermediate value property. – Denis Serre May 31 2011 at 6:40 2 There is a theorem (of Maximoff?) stating that any Baire 1 function which satisfies the intermediate value property is the composition of a derivative and a homeomorphism (and the converse is obvious). This does not answer your question but I think it's cute (I'm pretty sure I read this somewhere in Kechris's "Classical Descriptive Set Theory", but I don't have it with me) – Julien Melleray May 31 2011 at 15:34 show 1 more comment ## 4 Answers I can't claim much knowledge here, but I am given to understand that the class of differentiable functions (or the class of functions which are derivatives of such) is really quite nasty and complicated. This paper by Kechris and Woodin indicates that there is some very serious descriptive set theory involved: that there is a hierarchy of levels of complication indexed by $\omega_1$ (i.e., the set of countable ordinals). This online article by Kechris and Louveau also looks relevant. - I suspected something like that. – Denis Serre May 30 2011 at 15:47 4 Yeah, that's why nobody ever talks about or works with pointwise differentiable functions. We usually learn about derivatives and differentiability and think about them as being more "elementary" than integration and integrability. But from a theoretical and even computational standpoint, the latter is much easier to work with than the former. – Deane Yang May 30 2011 at 18:08 Take a look at this book by Andrew M. Bruckner: Differentiation of real functions. Chapter seven is about The problem of characterizing derivatives. There is a review by Daniel Waterman. You might also want to take a look at Homeomorphisms in Analysis by Goffman, Nishiura and Waterman. - Andy has updated his account of this problem in a survey article for the Real Analysis Exchange: Bruckner, Andrew M. The problem of characterizing derivatives revisited. Real Anal. Exchange 21 (1995/96), no. 1, 112--133.
Download from our web site here: classicalrealanalysis.info/documents/Bruckner1995.rae.1341343228.pdf – B S Thomson Dec 28 at 17:58 Here are a few characterizations of derivatives: 1. D. Preiss and M. Tartaglia On Characterizing Derivatives Proceedings of the American Mathematical Society, Vol. 123, No. 8 (Aug., 1995), 2417-2420. 2. Chris Freiling, On the problem of characterizing derivatives, Real Analysis Exchange 23 (1997/98), no. 2, 805-812. 3. Brian S. Thomson, On Riemann Sums Real Analysis Exchange 37 (2011/12), 1-22. [You can download the PDF file here.] The problem was first posed by W. H. Young. We include in our article about the Youngs a full quote stating his problem; Bruckner, Andrew M. and Thomson, Brian S. Real variable contributions of G. C. Young and W. H. Young. Expo. Math. 19 (2001), no. 4, 337–358. [You can download the PDF file here.] - 1 Fantastic link! – Andres Caicedo Dec 27 at 19:35 Thanks a lot for these references! – Denis Serre Dec 28 at 8:14 A result that is related to your question (the "almost everywhere" is the difference) : Every Henstock-Kurzweil integrable function on [a,b] is almost everywhere the derivative of a differentiable function, and inversely, any derivative is Henstock-Kurzweil integrable. More here : http://www.math.vanderbilt.edu/~schectex/ccc/gauge/ -
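A standard example may help convey why the characterization problem is delicate (an added illustration, not part of the thread): a derivative need not be continuous, even though it is always of Baire class 1 and has the intermediate value property. Take

$$g(x)=\begin{cases}x^{2}\sin(1/x), & x\neq 0,\\ 0, & x=0,\end{cases}\qquad g'(x)=\begin{cases}2x\sin(1/x)-\cos(1/x), & x\neq 0,\\ 0, & x=0.\end{cases}$$

Here $g$ is differentiable everywhere, yet $g'$ has no limit at $0$; so $f=g'$ is a derivative that is bounded on $[-1,1]$ but not continuous, and any characterization of derivatives has to accommodate this kind of oscillation.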
http://mathoverflow.net/revisions/101540/list
## Return to Question
# How do you know when something must die in the Adams Spectral Sequence for $\pi_*^s$
Hey everybody, I think this question might be just a simple oversight on my part, but this has been bugging me a few days. I am reading Hatcher's Spectral Sequences book, and trying to understand his example where he computes $\pi_*^s$ for $p=2$ (page 21-23), and I'm a bit confused about a certain step. He claims that the element corresponding to $h_3^2$ must have order 2 in $\pi_{14}^s$, because of "the commutativity property of the composition product, since $h_3$ has odd degree". Now, I see why $h_3^2$ can have order at most 4, because $h_3^2h_0^2=0 \in E_2$, but why must it have order 2 exactly? What does the odd degree have to do with it? If I am not mistaken, the Yoneda product on $Ext_A(Z/2,Z/2)$ induces the composition product on $\pi_*^s$, which, mod 2, is commutative, but the Yoneda product has $h_3h_0=h_0h_3$ in the $E_2$ page, so I can't from that derive that the induced composition product is 0. Do I need to use a fact about $\pi_*^s$ that doesn't come from this spectral sequence? Thanks for the help everybody! -Joseph Victor
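For what it's worth, here is a sketch of the argument the quoted sentence seems to allude to (my reading, using the standard facts that the composition product makes $\pi_*^s$ a graded-commutative ring and that $h_3$ detects the Hopf map $\sigma\in\pi_7^s$; these inputs are background assumptions, not taken from this page):

$$\alpha\beta=(-1)^{|\alpha||\beta|}\beta\alpha \quad\Longrightarrow\quad \sigma^{2}=(-1)^{7\cdot 7}\sigma^{2}=-\sigma^{2} \quad\Longrightarrow\quad 2\sigma^{2}=0 \ \text{ in } \pi_{14}^{s}.$$

So the class detected by $h_3^2$ has order dividing 2, and since it is nonzero its order is exactly 2. The sign, and hence the conclusion, is invisible in the mod 2 world of the $E_2$ page, which is why graded commutativity of $\pi_*^s$ itself has to be fed in from outside the spectral sequence.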
http://mathhelpforum.com/advanced-algebra/113312-prove-suppose-h-then.html
# Thread: 1. ## Prove: Suppose |H|<∞; then |H| = 2^n Let H be a group in which h²=I for all elements h of H. Prove: Suppose |H|<∞, and let {h₁,h₂,…,hn} be a minimal set of generators for H. Then $|H| = 2^n$. 2. Originally Posted by apple2009 Let H be a group in which h²=I for all elements h of H. Prove: Suppose |H|<∞, and let {h₁,h₂,…,hn} be a minimal set of generators for H. Then $|H| = 2^n$. H is abelian, because for any $a,b \in H: \ 1=(ab)^2=abab,$ which gives us $ba=a^{-1}b^{-1}=ab.$ Thus every element of H is of the form $h_1^{k_1} h_2^{k_2} \cdots \cdots h_n^{k_n},$ where $k_j \in \{ 0,1 \}.$ To finish the proof, you only need to show that such a presentation for an element of H is unique. This is a result of the set $\{h_1, \cdots , h_n \}$ being a minimal set of generators: if $h_1^{k_1} h_2^{k_2} \cdots \cdots h_n^{k_n}=h_1^{j_1} h_2^{j_2} \cdots \cdots h_n^{j_n}$ and say $j_r=0, \ k_r=1,$ then $h_r \in \langle \{h_i: \ i \neq r \}\rangle$ and so $\{h_i: \ i \neq r \}$ would be a smaller set of generators for H. Contradiction!
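Assuming the intended conclusion is $|H| = 2^n$ (which is what the outline above establishes), the count can be made explicit, with a small worked instance: the map

$$(k_1,\dots,k_n)\;\longmapsto\;h_1^{k_1}h_2^{k_2}\cdots h_n^{k_n},\qquad k_j\in\{0,1\},$$

is a bijection from $\{0,1\}^n$ onto $H$; it is surjective because the $h_i$ generate $H$, $H$ is abelian and $h_i^2=1$, and it is injective by the uniqueness argument above. Hence $|H|=2^n$, and in fact $H\cong(\mathbb{Z}/2\mathbb{Z})^n$. For example, with $n=2$ one gets $H=\{1,\,h_1,\,h_2,\,h_1h_2\}$, the Klein four-group.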
http://mathoverflow.net/questions/24358/free-z-modules-bases-etc
## free Z-modules: Bases etc. I need a reference which states which of the "normal properties of vector spaces" carry over to free $\mathbb{Z}$-modules. Especially I am interested in things like: If you have a linear map between two free $\mathbb{Z}$-modules and you choose a basis for its kernel, can you choose a basis of a complementary space so that both together form a basis of the whole space (and the map, viewed only on this complementary space, is an isomorphism onto its image)? Probably this is an easy question for algebra guys. - To make my question more precise: What are important properties of subspaces of Z^n, regarding bases, complements, kernels and cokernels? Is there a good book about that? – J. Fabian Meier May 12 2010 at 11:33 1 I first learnt the basics of this from the textbook amazon.co.uk/… which now alas is out of print. – Robin Chapman May 12 2010 at 12:04 This comment is just to emphasize that Smith normal form (as in bugs' answer below) really is a good way to answer the types of questions you ask about (for any particular map between finite rank modules). The wikipedia entry is, I think, pretty readable: en.wikipedia.org/wiki/Smith_normal_form – GS May 12 2010 at 18:49 ## 3 Answers What carries over? As Peter pointed out, a submodule of a free $\mathbb{Z}$-module, though free, need not have a complement. Indeed each submodule of a free $\mathbb{Z}$-module is free, but a quotient module need not be, for instance $\mathbb{Z}/2\mathbb{Z}$. Also a $\mathbb{Z}$-module is free if and only if it is projective; this entails that a kernel of a map of free modules does have a complement. The set $\mathrm{Hom}(F,G)$ for free $\mathbb{Z}$-modules need not be free. If $F$ is free of countably infinite rank and $G=\mathbb{Z}$, then $\mathrm{Hom}(F,G)\cong\prod_{j=1}^\infty\mathbb{Z}$ which remarkably is not free over $\mathbb{Z}$. But $F\otimes G$ is free for free $F$ and $G$. - So what is an example of a map f:Z^n -> Z^m, where ker f has no complement V, so that f|V is an isomorphism onto the image of f? – J. Fabian Meier May 12 2010 at 11:48 3 The kernel of a map between free Z-modules always has a complement (the image is a submodule of a free module, hence itself free, choose a splitting). – a-fortiori May 12 2010 at 11:57 Now corrected – Robin Chapman May 12 2010 at 11:58 You can write your map as a matrix. Moreover, you can choose a different basis so that the matrix is in Smith normal form. The complement to the kernel exists only if the Smith normal form contains only ones and zeroes. That is about it and should be explained in many Algebra books, say, Artin's Algebra. - I think you meant to say that the complement to the image exists iff the SNF has only zeroes and ones. – Robin Chapman May 12 2010 at 12:03 Yeah, good point! My statement is still correct but the next statement "that is about it" would not be correct without iff:-)) – Bugs Bunny May 12 2010 at 12:13 8 Bugs, why do you say "That is about it" rather than "That's all, Folks"? It almost makes me think you might not be the real Bugs Bunny. – KConrad May 12 2010 at 22:28 4 Of course you realize, this means war.
– Bugs Bunny May 21 2010 at 9:11 There is no complementary space in general: Consider the multiplication by 2 map from $\mathbb{Z}$ to itself ... - 1 I did not claim that. I was just interested in complementary spaces to kernels of linear maps. If they do not exist in general, I don't care: I was searching for a reference which states what kind of vector space (basis related) theorems also work (in a maybe weaker sense) for free Z-modules. – J. Fabian Meier May 12 2010 at 11:30
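Here is a tiny worked instance of the Smith normal form recipe from the answer above (my own illustration, not from the thread):

$$A=\begin{pmatrix}2&0\\0&3\end{pmatrix}\;\rightsquigarrow\;\begin{pmatrix}1&0\\0&6\end{pmatrix},$$

obtained by invertible row and column operations over $\mathbb{Z}$; the diagonal entries $1\mid 6$ are the invariant factors. The map $A:\mathbb{Z}^2\to\mathbb{Z}^2$ is injective, so its kernel is $0$ and trivially has a complement, but its image has cokernel $\mathbb{Z}/1\oplus\mathbb{Z}/6\cong\mathbb{Z}/6$. Since the Smith normal form contains an entry other than $0$ or $1$, the image has no complementary direct summand in $\mathbb{Z}^2$, which matches the corrected statement in the comments.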
http://unapologetic.wordpress.com/2008/08/06/polynomials-with-too-few-roots/?like=1&source=post_flair&_wpnonce=9916d21231
# The Unapologetic Mathematician ## Polynomials with Too Few Roots Okay, we saw that roots of polynomials exactly correspond to linear factors, and that a polynomial can have at most as many roots as its degree. In fact, there’s an expectation that a polynomial of degree $n$ will have exactly $n$ roots. Today I want to talk about two ways this can fail. First off, let’s work over the real numbers and consider the polynomial $p=X^2-2X+1$. A root of $p$ will be a real number ${x}$ so that $x^2-2x+1=0$, but a little playing around will show us that $x=1$ is the only possible solution. The degree of the polynomial is two, so we expect two roots, but we only find one. What went wrong? Well, we know from the fact that $x=1$ is a root that $X-1$ is a factor of $p$. Our division algorithm shows us that we can write $p=(X-1)(X-1)$. The factor that gives us the root $x=1$ shows up twice! But since we’ve already counted the root once the second occurrence doesn’t do anything. To remedy this, let’s define the “multiplicity” of a root ${x}$ to be the number of times we can evenly divide out a factor of $(X-x)$. So in our example the root ${1}$ has multiplicity $2$. When we count the roots along with their multiplicities, we get back exactly the degree. So do multiplicities handle all our problems? Unfortunately, no. An example here over the real numbers is the polynomial $p=X^2+1$. A root of this polynomial would be a real number ${x}$ with $x^2=-1$. But since the square of any real number is nonnegative, this can’t be satisfied. So there exist polynomials with fewer roots than their degrees would indicate; even with no roots at all! Now, some fields are well-behaved. We say that a field is “algebraically closed” if every polynomial over that field has a root ${x}$. In that case we can divide out by $(X-x)$ to get a polynomial of one degree less, which must again have a root by algebraic closure. We can keep going until we write the polynomial as the product of a bunch of linear factors — the number is the same as the degree of the polynomial — and leave one field element left once we get down to degree zero. Thus over an algebraically closed field every polynomial has exactly as many roots as its degree indicates.. if you count them with their multiplicities! ### Like this: Posted by John Armstrong | Algebra, Polynomials, Ring theory ## 14 Comments » 1. Excuse the nitpick, but more usual is “algebraically closed”. Comment by | August 6, 2008 | Reply 2. Sorry, I thought I cleared that.. typed the wrong thing, noticed it later, and failed to correct… Comment by | August 6, 2008 | Reply 3. When you write “a polynomial can have at most as many roots as its degree”, maybe you should recall that you consider polynomials over a field, say in the proof of this statement where you use integrality, and mention the degree-two polynomial (X-2)(X-3) with four roots in Z/6Z. Comment by | August 7, 2008 | Reply 4. Benoit, this is true, but I’ve been pretty consistent in this whole section that I’m working over a field, and not over a general ring. Comment by | August 7, 2008 | Reply 5. [...] Complex Numbers Yesterday we defined a field to be algebraically closed if it always has exactly as many roots (counting multiplicities) as we expect from its degree. But [...] Pingback by | August 7, 2008 | Reply 6. [...] into linear factors because the complex numbers are algebraically closed. But we also know that real polynomials can have too few roots. 
Now, there are a lot of fields out there that aren’t algebraically closed, and I’m not [...] Pingback by | August 14, 2008 | Reply 7. [...] Until further notice, I’ll be assuming that the base field is algebraically closed, like the complex numbers [...] Pingback by | February 2, 2009 | Reply 8. [...] of the eigenvalues are not distinct. Worse, we could be working over a field that isn’t algebraically closed, so there may not be roots at all, even counting duplicates. But still, in the generic case [...] Pingback by | February 10, 2009 | Reply 9. [...] we can use our definition of multiplicity for roots of polynomials to see that a given value of has multiplicity equal to the number of [...] Pingback by | February 19, 2009 | Reply 10. [...] indeed, some real polynomials have no roots. But all is not lost! We do know something about factoring real polynomials. We can break any one [...] Pingback by | March 13, 2009 | Reply 11. [...] together now. Start with a linear endomorphism on a vector space of finite dimension over an algebraically closed field . If you want to be specific, use the complex numbers [...] Pingback by | April 9, 2009 | Reply 12. [...] so we may find things a little more complicated now. We will, however, have to assume that is algebraically closed and that no multiple of the unit in is [...] Pingback by | August 25, 2012 | Reply 13. [...] recall that any linear endomorphism of a finite-dimensional vector space over an algebraically closed field can be put into Jordan normal form: we can find a basis such that its matrix is the sum of [...] Pingback by | August 28, 2012 | Reply 14. [...] recall that any linear endomorphism of a finite-dimensional vector space over an algebraically closed field can be put into Jordan normal form: we can find a basis such that its matrix is the sum of [...] Pingback by | August 28, 2012 | Reply
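To tie the post's closing claim to one concrete case (a small added example, not from the original post): over the algebraically closed field $\mathbb{C}$ the polynomial

$$X^{3}-X^{2}=X^{2}(X-1)$$

has the root $0$ with multiplicity $2$ and the root $1$ with multiplicity $1$, and $2+1=3$ is exactly the degree, as promised.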
http://math.stackexchange.com/questions/477/cardinality-of-set-of-real-continuous-functions
# Cardinality of set of real continuous functions The set of all continuous functions ℝ → ℝ has cardinality c. How can one show that? Is there a bijection between ℝ^n and the set of continuous functions? - The result that you've found here is correct: there are $\mathfrak c=|\Bbb R|$ continuous real-valued functions on $[0,1]$. I find it hard to believe that Ó Searcóid made such an egregious error; could you quote exactly what he says? – Brian M. Scott Apr 21 at 0:05 This is from page 268 (first edition): "It is demonstrated in many textbooks that $\mathbb Q$ is countable, that $\mathbb R$ is uncountable, that every non-degenerate interval is uncountable, that the collection of continuous functions defined on $[0 , 1]$ is of a greater cardinality than $\mathbb R$, and that there are sets of greater and greater cardinality." – Andres Caicedo Apr 21 at 0:29 (Brian's comment and mine refer to a different version of the question, merged with this one as duplicate. The question was prompted by a claim in "Metric spaces", by Mícheál Ó Searcóid, where it is claimed that there are more continuous functions on $[0,1]$ than real numbers.) – Andres Caicedo Apr 21 at 1:43 ## 2 Answers The cardinality is at least that of the continuum because every real number corresponds to a constant function. The cardinality is at most that of the continuum because the set of real continuous functions injects into the sequence space $R^{N}$ by mapping each continuous function to its values on all the rational points. Since the rational points are dense, this determines the function. The Schroeder-Bernstein theorem now implies the cardinality is precisely that of the continuum. Note that then the set of sequences of reals is also of the same cardinality as the reals. This is because if we have a sequence of binary representations $.a_1a_2..., .b_1b_2..., .c_1c_2...$, we can splice them together via $.a_1 b_1 a_2 c_1 b_2 a_3...$ so that a sequence of reals can be encoded by one real number. - Good answer, but your last statement does not work for infinite sequences. – Larry Wang Jul 22 '10 at 12:27 4 +1 Nice. Since the rational points are dense, this determines the function. - This is the trickiest claim in the argument, enough to count as a lacuna. It might make a good further question "Can there be two distinct, continuous functions that are equal at all rationals?" – Charles Stewart Jul 22 '10 at 12:31 @Kaestur: it works for countably many reals, which I think is all that was intended. – Charles Stewart Jul 22 '10 at 12:32 @Charles: Fair enough. +1 from me. – Larry Wang Jul 22 '10 at 13:19 It is at least $c$, since all constant functions are continuous. Now consider the fact that $\mathbb{R}$ is separable. -
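A slightly different way to finish the upper bound, using cardinal arithmetic instead of the digit-splicing trick (added for completeness; only standard facts are used):

$$|\mathbb{R}^{\mathbb{N}}|=\big(2^{\aleph_0}\big)^{\aleph_0}=2^{\aleph_0\cdot\aleph_0}=2^{\aleph_0}=\mathfrak{c},$$

so the injection $f\mapsto\big(f(q)\big)_{q\in\mathbb{Q}}$ into $\mathbb{R}^{\mathbb{Q}}\cong\mathbb{R}^{\mathbb{N}}$ already caps the number of continuous functions at $\mathfrak{c}$, and Schroeder-Bernstein gives equality. This also answers the question about $\mathbb{R}^n$: since $|\mathbb{R}^n|=\mathfrak{c}$ for every finite $n\ge 1$, a bijection with the set of continuous functions exists, though not a particularly natural one.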
http://physics.stackexchange.com/questions/tagged/reflection
# Tagged Questions The reflection tag has no wiki summary. 1answer 40 views ### Reflection of a polarised beam The past days I've been trying to understand how AutoFocus(AF) works on photographic cameras. There is a statement that says AF systems are polarisation sensitive. This means that they can only work ... 1answer 40 views ### Reflectivity of a glowing-hot metal surface When a polished piece of metal (or steel in particular) is heated to incandescence, how do its reflective properties change? Given a mirror-like surface, would the object temporarily cease to act ... 2answers 51 views ### Does more reflective aluminum foil make a room cooler compared to less reflective foil? Aluminum foil is said to be not absorbing light at all. It reflects light. So, does it mean that a more shiny aluminum foil will reflect more light and thus make the room more cooler as compared to ... 1answer 68 views ### What properties make a good barrier for microwave (oven) radiation? Suppose I have plenty of food I want to heat (which will provide load) in the microwave, and one item I don't want to heat. What properties would make a material a a good shield, to reduce or prevent ... 2answers 150 views ### Light Ray Reflection from concave mirror Suppose a ray of light hits a concave mirror and is parallel to principle axis but far away from it such that it doesn't follow paraxial ray approximation. Will it pass through focus or between focus ... 1answer 55 views ### Confusing mirror problem A piece of thin spherical shell that has a radius of curvature of 106 cm is silvered on both sides. The concave side of the piece forms a real image 79.5 cm from the mirror. The piece is then turned ... 1answer 34 views ### Can the choice of reflection angle for light can be derived from a minimality condition? When the light hits on a surface, it reflects with the "same" angle as the one that hits the surface. I was wondering if this choice of angle can be explained by a minimality condition? 0answers 47 views ### Trigonometry in the plane mirror [closed] I was trying to solve a problem taken from an Physics Olympiad when I came across a curious and complex mathematical expression. I can not prove with what I know so far about mathematics, does could ... 1answer 49 views ### Calculation the reflection coefficient of a mutlilayer material For our project we have to study an infrared filter. This filter is composed of glass and several layers (nanolayers of titanium oxide, silver and cupper deposited on one side of the glass). Now we ... 1answer 79 views ### How to create visible reflections in shallow water? Assumption: The only lights I have are candle, table lamp, and sunlight. What would I need to create visible reflection of an object in the shallow water contained in a 5 liter bucket? Is it even ... 2answers 114 views ### How does this trick with mirrors work? Imagine two mirrors, set touching each other at right angles to one another. There is a 90 degree arc in which reflections can be seen, and a person standing in that arc can see himself reflected in ... 0answers 23 views ### Show that the plane of incidence is perpendicular to the surface of reflection Is it possible to derive from the boundary conditions of the Maxwell equations for E and H, that the plane of incidence for an EM wave is perpendicular to the reflection surface? How? If not, what ... 1answer 31 views ### How much refraction occurs as a fraction of all reflection and refraction? 
When light reaches a boundary between materials below the critical angle, some of it refracts and some of it reflects. For example, glass acts as a partial mirror with a dark background. Assuming ... 0answers 22 views ### Why the mirror changes the sides [duplicate] Why the mirror changes the left with the right side but not the top with the bottom? Go to the mirror and check this out. 2answers 103 views ### why does a mirror show what is in front of it? the only answer I can think of is that light is reflected from the objects in front of the mirror (visible color) and then reflects again off of the mirror to our eye, but im not quite satified with ... 1answer 39 views ### hurdles in creating (close to) infinite images Let's put an object(hypothetical superman) inside a "well sealed" box containing only mirrors. Is it possible to create number of images that will be close to infinity, assuming that resolution of our ... 1answer 56 views ### choosing the right color Context: My room is being painted, and i sit and study in a corner of the room, surrounded by walls on 2 sides, such that i am facing the wall. A tube light is at my 4-5 o'clock, and a ... 1answer 74 views ### Seeing a mirage through mirror? Okay, I am not really good in physics (rather terrible), but nonetheless. So, I was just wondering if you can see a mirage, is there something special in our eyes that we can see it or what? I mean, ... 2answers 74 views ### All mirrors always shrink to 50% scale? I have this geometric optics exercise here, in which a man is looking at himself in a mirror. Determine the minimum height at which the bottom of the mirror must be placed so the man can see his ... 0answers 42 views ### Bragg reflected electrons Could you explain how does bragg reflection happen for electrons? What does it mean that when they satisfy Laue condition? This already is asked in Physics SE. They are Bragg reflected in the opposite ... 1answer 337 views ### Two mirrors facing each other I have a question that I would like answered. What happens when you place two mirrors facing each other? Is it possible to have an infinite amount of reflections? 1answer 62 views ### Ratio of distance between mirror and person In perspective of a given example, if a man was to stand $2\ m$ away from a mirror which was $0.9\ m$ in height and was able to see his full reflection, what would the height of the mirror have to be ... 1answer 71 views ### How does light get into a stable optical cavity in the first place? It is supposedly possible to trap a beam of light bouncing back and fourth between two mirrors in a stable configuration. As I understand it, this means the configuration will prevent further spread ... 1answer 50 views ### How reflected objects are composed and who is responsible for that? Please refer to this image. The scene contains an object close to a mirror in the wall and a window, note that the reflected object is receiving more light than the object itself. I read some ... 1answer 46 views ### Reflected light from pulsars If I point one telescope at a pulsar and record the image and then I point a second telescope at a mirror that has the image of the pulsar on it and record it, will the two recordings be different? ... 0answers 92 views ### Sum of intensity of reflected and transmitted waves The given state: Let $\psi$ be a wave that passes from medium $a$ to medium $b$. Let $A$ be the amplitude of $\psi$. Let $R$ be the amplitude ratio of the reflected wave $\psi_r$ and the original one, ... 
4answers 451 views ### Thought experiment regarding an object approaching a mirror Here's a thought experiment I came up with in class today when my mind drifted (I however highly doubt I'm the first to think about this since it is pretty rudimentary) : Let's say superman ... 0answers 27 views ### Run with speed of light with a mirror in hand [duplicate] Possible Duplicate: Reflection At Speed of Light Imagine you are able to run with the speed of light holding a mirror in your hand. Now will you be able to see yourself in the mirror? 4answers 178 views ### Does light accelerate or slow down during reflection? After all, it does change direction when reflection occurs. So shouldn't it also accelerate? And since the acceleration cannot increase the speed of light, mustn't it slow down? 0answers 51 views ### Perimeter of Image of a Square A concave mirror of focal length =10 cm is placed 15 cm from a square. The square lie on the principal axis i.e it's one side coincides with the principal axis. What is the perimeter of the image? How ... 1answer 358 views ### How does the aluminium foil do the thermal and WiFi isolation? Aluminium foil is widely used for thermal isolation. As far as I know it reflects the thermal infrared radiation. Also I've seen a lot of guides about strengthening WiFi signal by putting the ... 1answer 103 views ### How is it possible for an Ultrasound device to correctly interpret a negative density change in tissue? I understand the principles of Ultrasound Imaging, and the mathematics behind sonar velocity, impedance, and reflection. I also understand that an Ultrasound device recieves an echo produced by ... 1answer 97 views ### Colors in the secondary rainbow reverse of that in the primary rainbow Why the colors of Secondary rainbow is reverse of that in the color in the Primary rainbow? What can be the possible reason among the following options Because it is formed by one internal ... 0answers 40 views ### How does a lens affect the field of view in a mirror? If one looks into a mirror, he can see a certain field of view. If he places a convex lens that magnifies (or a concave lens that does the opposite) in front of the mirror, but so that he can still ... 1answer 216 views ### Why does the spotlight reflected off of a rectangular mirror tend to become circular? Background and setup When I was 12 I used to like a girl, we were almost neighbors and it was essential that our parents don't find out. So whenever one of us wanted to call the other they'd signal ... 1answer 71 views ### Can small clouds reflect enough light to hurt your eyes/blind you? I looked out my window a minute ago and immediately noticed a very bright spot where a cloud and a jet/plane trail met. The spot was so bright that I thought the sun was behind it because it left that ... 1answer 91 views ### Refraction and Reflection Seismology So I am wondering if I got the difference right. Both methods use explosives to send waves into the earth's surface. Now reflection seismology tries to get information from the reflected waves; the ... 0answers 78 views ### Reflection in Convex mirrors [closed] A monkey starts chucking polished stainless bocce balls at you. The bocce balls are 6cm in radius. Where does your image form as a function of bocce balls distance and what is the size of your image ... 1answer 152 views ### Special Relativity & Mirror Reflection If you move at $5$ $ms^-$$^1$ towards a plane mirror, your reflection moves $10$ $ms^-$$^1$ towards you. 
But what happens if you're moving much faster, say $0.8c$? Would your reflection move at ... 3answers 7k views ### Why does diamond shine? I have always wondered why diamonds shine. Can anyone tell my why? 2answers 165 views ### A light and magnetic mirror paradox? If light is an electromagnetic wave and lightspeed is constant (we ignore spacetime or gravity for this question) why can't we slow down light with a few dozen wellplaced magnets and electricly ... 1answer 150 views ### how to simulate wave interference [closed] I need to simulate wave interference with reflection from surfaces. What formulas I need to use? What differential equation I need to solve? - Could someone help me out? 1answer 44 views ### Rays in Symmetric Resonator I'm having some trouble figuring out how to get started on this question: If I have a symmetric resonator with two concave mirrors of radii $R$ separated by a certain distance, after how many round ... 0answers 27 views ### Reflection of light [duplicate] Possible Duplicate: What is the difference between a white object and a mirror? When I look at a red object under white light, I see it as red because it absorbs the other colours and ... 1answer 183 views ### Bragg condition for transmission: Why is the full diffracted angle Two times Theta? Or isn't it? On a Bragg reflection with incomming angle Theta the total diffraction angle of the incomming wave is 2*Theta, of course. But I have Bragg transmission with electrons on a graphite crystal ... 2answers 2k views ### Free Optics Simulation Programs I'm having an extremely difficult time finding an optics program that is easy to use and offers accurate physics simulations. I'm not asking for much, I just want to be able to simulate a laser going ... 2answers 162 views ### Redirecting light beams from beam splitters I'm doing a project where I am taking a laser beam and sending it through a beam splitter. As I understand, approximately 50% of the light will go pass through and 50% will be reflected. So this means ... 0answers 65 views ### What about the photons that make you see ? [duplicate] Possible Duplicate: What determines color — wavelength or frequency? Explanation about black color, and hence color I understand that what we see are the reflected light from other ... 2answers 953 views ### Refraction, reflection, and what is total reflection? So if light travels from one media to another with a different refraction index, what may happen happen? Refraction, reflection or total reflection? I am quite confused as to the differences between ... 0answers 42 views ### EM-wave hits a brick-wall, $\pi/2$ -phase-shift? [duplicate] Possible Duplicate: Phase shift of 180 degrees on reflection from optically denser medium If I have a cord-wave, I get a phase-shift with attached cord but do I get such a phase-shift with ...
http://mathhelpforum.com/calculus/141759-related-rates-cars-moving-intersection.html
# Thread: 1. ## Related Rates - Cars Moving at an Intersection A straight railway track and a straight road intersect at right angles. At a given instant, a motor car at 40 km/h and a train at 50 km/h are moving away from the intersection and are 40 and 30 km respectively from the intersection. At what rate is the distance between them changing one hour later? At what rate would the distance between them be changing at that instant if they were traveling towards the intersection? Dunno how to do this one, as there are 3 variables. 2. Originally Posted by Lukybear A straight railway track and a straight road intersect at right angles. At a given instant, a motor car at 40 km/h and a train at 50 km/h are moving away from the intersection and are 40 and 30 km respectively from the intersection. At what rate is the distance between them changing one hour later? At what rate would the distance between them be changing at that instant if they were traveling towards the intersection? Dunno how to do this one, as there are 3 variables. $x$ = car distance from the intersection $y$ = train distance from the intersection $z$ = distance between the car and train $x^2 + y^2 = z^2$ Take the time derivative and solve for $\frac{dz}{dt}$ at the indicated time.
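Carrying that out for the numbers in the problem (a worked sketch; distances are measured from the intersection, and the rates are the given speeds):

$$2x\frac{dx}{dt}+2y\frac{dy}{dt}=2z\frac{dz}{dt} \quad\Longrightarrow\quad \frac{dz}{dt}=\frac{x\,\frac{dx}{dt}+y\,\frac{dy}{dt}}{z}.$$

One hour later $x=40+40=80$, $y=30+50=80$ and $z=\sqrt{80^2+80^2}=80\sqrt{2}$, so $\frac{dz}{dt}=\frac{80\cdot 40+80\cdot 50}{80\sqrt{2}}=\frac{90}{\sqrt{2}}=45\sqrt{2}\approx 63.6$ km/h, i.e. the separation is increasing at about 63.6 km/h. If the vehicles travel towards the intersection instead, $\frac{dx}{dt}$ and $\frac{dy}{dt}$ change sign and the same formula gives the rate at which the distance is decreasing at the instant in question.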
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9702495336532593, "perplexity_flag": "middle"}
http://www.conservapedia.com/Difference
# Difference

The difference of two numbers is the number obtained when the lesser number is subtracted from the greater number.

The difference of 4 and 2 is: 4 − 2 = 2

The difference of −3 and −9 is: −3 − (−9) = 6

Another way of putting it is that the difference is the absolute value of subtracting either number from the other.

$\left |4 - 2 \right | = \left |2-4 \right | = 2$

The number being subtracted is the subtrahend; the number being subtracted from is the minuend.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.906603217124939, "perplexity_flag": "middle"}
http://gilkalai.wordpress.com/2009/07/28/polymath3-abstract-polynomial-hirsch-conjecture-aphc/?like=1&_wpnonce=9501e873bf
# Gil Kalai’s blog

## The Polynomial Hirsch Conjecture, a Proposal for Polymath3 (Cont.)

Posted on July 28, 2009

## The Abstract Polynomial Hirsch Conjecture

A convex polytope $P$ is the convex hull of a finite set of points in a real vector space. A polytope can be described as the intersection of a finite number of closed halfspaces. Polytopes have a facial structure: a (proper) face of a polytope $P$ is the intersection of $P$ with a supporting hyperplane. (A hyperplane $H$ is a supporting hyperplane of $P$ if $P$ is contained in a closed halfspace bounded by $H$, and the intersection of $H$ and $P$ is not empty.) We regard the empty face and the entire polytope as trivial faces.

The extreme points of a polytope $P$ are called its vertices. The one-dimensional faces of $P$ are called edges. The edges are line intervals connecting a pair of vertices. The graph $G(P)$ of a polytope $P$ is a graph whose vertices are the vertices of $P$, and two vertices are adjacent in $G(P)$ if there is an edge of $P$ connecting them. The $(d-1)$-dimensional faces of a polytope are called facets.

The Hirsch conjecture: The graph of a d-polytope with n facets has diameter at most n-d.

A weaker conjecture which is also open is:

Polynomial Hirsch Conjecture: Let G be the graph of a d-polytope with n facets. Then the diameter of G is bounded above by a polynomial in d and n.

The avenue which I consider most promising (but I may be wrong) is to replace “graphs of polytopes” by a larger class of graphs. Most known upper bounds on the diameter of graphs of polytopes apply in much larger generality. Recently, interesting lower bounds were discovered and we can wonder what they mean for the geometric problem. Here is the (most recent) abstract setting:

Consider the collection ${\cal G}(d,n)$ of graphs $G$ whose vertices are labeled by $d$-subsets of an $n$-element set. The only condition we will require is that if $v$ is a vertex labeled by $S$ and $u$ is a vertex labeled by the set $T$, then there is a path between $u$ and $v$ so that all labels of its vertices are sets containing $S \cap T$.

Abstract Polynomial Hirsch Conjecture (APHC): Let $G \in {\cal G}(d,n)$; then the diameter of $G$ is bounded above by a polynomial in $d$ and $n$.

Everything that is known about the APHC can be described in a few pages. It requires only rather elementary combinatorics; no knowledge about convex polytopes is needed. A positive answer to the APHC (and some friends of mine believe that $n^2$ is the right upper bound) will apply automatically to convex polytopes. A negative answer to the APHC will be (in my opinion) extremely interesting as well, but will leave the case of polytopes open. (One of the most active areas of convex polytope theory is methods for constructing polytopes, and there may be several ways to move from an abstract combinatorial example to a geometric example.)

If indeed we decide to go for a Polymath3, the concrete problem which I propose attacking is the APHC. However, we can discuss possible arguments regarding the diameter of polytopes which use geometry, and we can be open to even more general abstract forms of the problem. (Or other things that people suggest.)

Reading the recent very short paper by Friedrich Eisenbrand, Nicolai Hahnle, and Thomas Rothvoss and the 3-page paper by Sasha Razborov (the merged journal paper of these two contributions will become available soon) will get you right to the front lines.
(There is an argument from the first paper that uses the Hall marriage theorem, and an argument from the second paper that uses the “Lovasz local lemma”.) I will try to repeat in later posts the simple arguments from these papers - I plan to devote one post to the upper bounds, another post to the lower bounds, and yet another post to general background, motivation and cheerleading for the problem. I will try to make the different posts self-contained. Questions and remarks about polytopes, the problem, or these papers are welcome.

This entry was posted in Open problems, Convex polytopes, Open discussion and tagged Hirsch conjecture, Polymath proposals.

### 5 Responses to The Polynomial Hirsch Conjecture, a Proposal for Polymath3 (Cont.)

1. Pingback: The Polynomial Hirsch Conjecture: A proposal for Polymath3 « Combinatorics and more

2. Kristal Cantwell says:

3. Gil Kalai says: Thanks Kristal. Let me make one additional remark on the abstract setting. The condition is that if we have two vertices $v$ with label $S$ and $w$ with label $T$, there is a path between them with vertices labelled by sets containing $S \cap T$. We do not make the dual assumption that we can move between $v$ and $w$ by sets all of whose labels are included in $S \cup T$. (Indeed, this is not the case for simple polytopes when we label a vertex by the set of facets containing it.) If we could guarantee that the labels are always inside $S \cup T$ and containing $S \cap T$, then the diameter would be $d$.

4. Pingback: Polymath on other blogs « The polymath blog

5. Pingback: The Polynomial Hirsch Conjecture: Discussion Thread, Continued « Combinatorics and more
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 42, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9201068878173828, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/1894/path-integral-vs-measure-on-infinite-dimensional-space
# Path integral vs. measure on infinite dimensional space

Coming from a mathematical background, I'm trying to get a handle on the path integral formulation of quantum mechanics.

According to Feynman, if you want to figure out the probability amplitude for a particle moving from one point to another, you 1) figure out the contribution from every possible path it could take, then 2) "sum up" all the contributions.

Usually when you want to "sum up" an infinite number of things, you do so by putting a measure on the space of things, from which a notion of integration arises. However, the function space of paths is not just infinite, it's extremely infinite. If the path-space has a notion of dimension, it would be infinite-dimensional (e.g., viewed as a submanifold of $C([0,t], {\mathbb R}^n)$). For any reasonable notion of distance, every ball will fail to be compact. It's hard to see how one could reasonably define a measure over a space like this - Lebesgue-like measures are certainly out.

The books I've seen basically forgo defining what anything is, and instead present a method to do calculations involving "zig-zag" paths and renormalization. Apparently this gives the right answer experimentally, but it seems extremely contrived (what if you approximate the paths a different way, how do you know you will get the same answer?). Is there a more rigorous way to define Feynman path integrals in terms of function spaces and measures?

- This may be one of those things that physicists do that would "make mathematicians throw themselves off the roof," as several of my professors put it ;-) i.e. I'm not sure whether there is a rigorous formulation. But path integrals are very well studied, so I'm sure people have at least tried to find one. Anyway, great question. I'm curious about this myself. – David Zaslavsky♦ Dec 14 '10 at 5:45

Lovely question and lovely answers. +1 – Sklivvz♦ Dec 14 '10 at 21:04

@David Zaslavsky: This is not one of those things. The path integral in quantum mechanics has a perfectly rigorous formulation. What makes mathematicians want to throw themselves off roofs is when physicists conflate Lie groups and Lie algebras. Or when -- despite knowing for 40 years that renormalization is the key organizing principle of QFT -- physicists write textbooks which don't mention the idea until page 300. – user1504 May 17 '12 at 22:45

## 5 Answers

The path integral is indeed very problematic on its own. But there are ways of almost capturing it rigorously.

## Wiener process

One way is to start with an Abstract Wiener space that can be built out of the Hamiltonian and carries a canonical Wiener measure. This is the usual measure describing properties of the random walk. Now, to arrive at the path integral, one has to accept the existence of an "infinite-dimensional Wick rotation" and analytically continue the Wiener measure to the complex plane (and every time this is done a probabilist dies somewhere).

This is the usual connection between statistical physics (which is a nice, well-defined real theory) at inverse temperature $\beta$ in (N+1, 0) space-time dimensions and evolution of the quantum system in (N, 1) dimensions for time $t = -i \hbar \beta$, which is used all over physics but almost never justified. Although in some cases it has actually been possible to prove that a Wightman QFT is indeed a Wick rotation of some Euclidean QFT (note that quantum mechanics is also a special case of QFT in (0, 1) space-time dimensions).
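A minimal worked illustration of this Wick rotation, in the one case where it is completely elementary (the free particle; the kernel notation $K_E$, $K$ below is mine, not from the answer): the Euclidean evolution $e^{-H\tau/\hbar}$ with $H = -\frac{\hbar^2}{2m}\,\partial_x^2$ has as its kernel the heat kernel, i.e. the transition density of the Wiener process with diffusion constant $D = \hbar/2m$,
$$K_E(x,x_0;\tau) = \sqrt{\frac{m}{2\pi\hbar\tau}}\,\exp\!\left(-\frac{m(x-x_0)^2}{2\hbar\tau}\right),$$
and the continuation $\tau \to it$ (equivalently $t = -i\hbar\beta$ with $\tau = \hbar\beta$) turns it into the Schrödinger propagator
$$K(x,x_0;t) = \sqrt{\frac{m}{2\pi i\hbar t}}\,\exp\!\left(\frac{i\,m(x-x_0)^2}{2\hbar t}\right).$$
For this Hamiltonian the "analytic continuation of the Wiener measure" therefore amounts to continuing a single Gaussian kernel; the hard part in general is performing the continuation at the level of the measure rather than of an explicit formula.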
## Intermezzo

This is a good place to point out that while the path integral is problematic in QM, a whole lot of different issues enter with more space dimensions. One has to deal with operator-valued distributions and there is no good way to multiply them (which is what physicists absolutely need to do). There are various axiomatic approaches to get a handle on this and they in fact do look very nice. Except that it's very hard to actually find a theory that satisfies these axioms. In particular, none of our present-day theories of the Standard Model have been rigorously defined.

Anyway, to make the Wick rotation a bit clearer, recall that the Schrödinger equation is a kind of diffusion equation, but for the introduction of complex numbers. And then just come back to the beginning and note that the diffusion equation is a macroscopic equation that captures the mean behavior of the random walk. (But this is not to say that the path integral in any way depends on Schrödingerian, non-relativistic physics.)

## Others

There have been other approaches to defining the path integral rigorously. They propose some set of axioms that the path integral has to obey and continue from there. To my knowledge (but I'd like to be wrong), all of these approaches are too constraining (they don't describe most of the physically interesting situations). But if you'd like I can dig up some references.

- I'd be interested to find out more about the analytic continuation approach, even if it turns out to not work (as per the discussion below). I'm a big fan of analytic continuation and Riemann surfaces - not for a good reason, but just because I think they're cool (: – Nick Alger Dec 14 '10 at 23:52

@Nick: all right, I'll dig up some references; but I can't think of any off the top of my head (besides the Wikipedia article). But if it really interests you then you should ask for the uses of Wick rotation in physics; I am sure there are many more applications than I am aware of (for example, one time I stumbled upon the use of analytical continuation to study event horizons of some black holes). – Marek Dec 15 '10 at 15:40

In 2-dimensional space-time, Feynman path integrals are perfectly well-defined, though understanding how this is done rigorously is somewhat heavy-going. But everything is spelled out in the book ''Quantum Physics: A Functional Integral Point of View'' by Glimm and Jaffe. http://www.amazon.com/Quantum-Physics-Functional-Integral-Point/dp/0387964770

In 4 space-time dimensions, how to make rigorous sense of the Feynman path integral is an unsolved problem. On the one side, there is no indication that some rigorous version of it could not exist, and one expects that the structural properties the integral has in 2D continue to hold. On the other side, constructing a 4D integral having these properties has been successful only in the free case, and the methods used for the constructions in lower dimensions seem too weak to work in 4 dimensions.

Edit: On the other hand, in quantum mechanics with finitely many degrees of freedom, Feynman path integrals are very well understood, and whole books about the subject have been written in a mathematically rigorous style, e.g., the book ''The Feynman Integral and Feynman's Operational Calculus'' by Johnson and Lapidus. http://tocs.ulb.tu-darmstadt.de/110841727.pdf

- One major difficulty with defining path-integrals (which is entirely mathematicians' fault) is that the mathematicians insist for no good reason (and many bad ones) that there are non-measurable subsets of R.
This is a psychological artifact of the early days of set theory, where ZFC was not seen as a way of generating countable models of a set-theoretic universe, but as the way things REALLY are in Plato-land (whatever that means). Cohen fixed that in 1963, but mathematicians still haven't gotten used to the fix, but that is changing rapidly.

If you assume every subset of R is measurable, the notion of "randomly picking a number between 0 and 1" becomes free of contradiction. In the presence of the axiom of choice, the question "what is the probability that this number lands in a Vitali set?" is paradoxical, but in the real world, it is obviously meaningful. This tension is resolved in what is called a "Solovay model", where you have no more trouble with probability arguments meshing with set theory. For a non-mathematician, when you deal with sets which arise by predicative definition, not by doing uncountable axiom-of-choice shenanigans, probability is never contradictory. A Solovay model still allows you to use countable axiom-of-choice, and countable dependent choice, which is enough for all usual analysis.

Anyway, inside a Solovay model, you can define a Euclidean bosonic path integral very easily: it is an algorithm for picking a random path, or a random field. This has to be done by step-wise refinement, because the random path or random field has values at a continuum of points, so you need to say what it means for a path to "refine" another path. Further, while paths end up continuous, so that the refinement process is meaningful in the space of continuous paths, fields refine discontinuously. If you have a field whose average value on a lattice is something, it swings more wildly at small distances in dimensions higher than 2, so that in the limit, it defines a random distribution.

If you are allowed to pick at random, part of the battle is won. You get free field path-integrals in any dimension with absolutely no work (pick the Fourier components at random as Gaussian random numbers with a width which goes like the propagator) [a small numerical sketch of this recipe appears at the end of this page]. There is no issue with proving measurability, and the space of distributions you get out is just defined to be whatever space of distributions you get by doing the random picking. It's as simple as that. Really.

The remaining battle is just renormalization, at least for bosonic fields with CP invariant (real) actions, which have a stable vacuum, so that their Euclidean continuation has a probability interpretation. You need to define the stepwise approximations in such a way that their probability distribution function approaches a consistent limit at small refinements. This is slightly tricky, but it automatically defines the measure if you have a Solovay world.

There is nobody working on field theory in Solovay models, but there are people who mock up what is more or less the same thing inside usual set theory by doing what is called "constructive measure theory". I don't think that one can navigate the complicated renormalization arguments unless one is allowed to construct measures using probability intuitions without fear, and without work. And set theorists have known how to do this since 1970.

- This is very intriguing but at the same time it's very hard to believe that all problems of the path integral that people are having trouble with for half a century can be cured by such a naive approach. Any references?
– Marek Aug 10 '11 at 9:27

I didn't say that all the problems can be cured, only the measure theoretic headaches --- defining a sigma algebra on the set of distributions, when you don't know what their properties are a priori. This approach automatically shifts the difficulties to the places where they are real. There is no reference --- it's my own personal view. But I guarantee you that if I ever construct a nonfree field, I will do it within a Solovay model. – Ron Maimon Aug 11 '11 at 0:55

I didn't mean to argue with you. It just strikes me as odd that no one else has as of yet realized this point of view and worked on it if it is as useful as you propose. – Marek Aug 11 '11 at 10:31

Is the Solovay model enough to recover the relevant parts of functional analysis? Having a path integral doesn't help much if the rest of QM breaks down... – Christoph Jul 5 '12 at 18:06

@Christoph: Nothing breaks down from lack of continuum choice --- that's just the stupid things that mathematicians say to keep people in line regarding choice. The only choice used in any functional analysis is dependent choice (sequential countable choice depending on the previous choices), which is not controversial (or not very controversial). Choice only conflicts with intuition for sets of size the continuum or larger. – Ron Maimon Jul 5 '12 at 19:00

The answer is: forget about it. :-)

Currently, there is no satisfying mathematical formalization of the path integral. Coming from a mathematics background myself, I was equally dismayed at this state of affairs. But I have come to terms with it, mainly due to the following historical observation: for several centuries, infinitesimal quantities did not have a satisfying formalization, but that didn't stop mathematicians from understanding and using them. We now have the blessing of Weierstraß' epsilons and deltas, but it is also a curse, since the infinitesimal quantities disappeared as well (outside of non-standard analysis). I would say that the path integral is a similar situation.

However, if you accept the path integral as a "figure of speech", then there are ways to formalize it to some extent. Namely, you can interpret it as an "integration" of the propagator, much like the exponential function is the solution of the differential equation $\dot y = Ay$.

The propagator is the probability amplitude $$U(r,t; r_0,t_0)$$ of finding a particle at place and time $r,t$ when it originally started at place and time $r_0,t_0$. It is the general solution to the Schrödinger equation $$i\hbar \frac{\partial}{\partial t}U(r,t; r_0,t_0) = \hat H(r,t) U(r,t; r_0,t_0), \quad U(r,t_0; r_0,t_0) = \delta(r-r_0) .$$

Now, pick a time $t_1$ that lies between $t$ and $t_0$. The interpretation of the propagator as a probability amplitude makes it clear that you can also obtain it by integrating over all intermediate positions $r_1$: $$U(r,t; r_0,t_0) = \int dr_1 U(r,t; r_1,t_1)U(r_1,t_1; r_0,t_0)$$

If you repeat that procedure and divide the time interval $[t,t_0]$ into infinitely many parts, thus integrating over all possible intermediate positions, you will obtain the path integral. More details on this construction can be found in Altland and Simons, Condensed Matter Field Theory.

- Well, I think most of the problems of the path integral can be summarized as: complex numbers. Nothing converges, everything oscillates.
But we already know that analytical continuation works in finite dimensions, and the success of the path integral suggests that it indeed continues to hold also in infinite dimensions (at least under some conditions). So the way to make the path integral rigorous would be to try to capture the properties of analytical continuation on infinite dimensional spaces. Do you know whether such a thing has been attempted? – Marek Dec 14 '10 at 10:12

– Greg Graviton Dec 14 '10 at 17:45

As a matter of fact, I think it will play a prominent role. It does play it in physics and there is no reason for it not to play it in math as well once people polish things up. As for the axiomatic approach: that is precisely what I am talking about. You can define some axioms but soon you'll find out that they don't really generalize to other situations you are interested in or that there is nothing actually satisfying the axioms. This is probably because these theories are created by mathematicians and the path integral is too physical in nature. – Marek Dec 14 '10 at 20:36

And I just realized my comment might be a little insulting to mathematicians. So apologies in advance. It was just a general observation that mathematicians hold different values and understand different things than physicists do. – Marek Dec 14 '10 at 20:38

(No worries. :-)) – Greg Graviton Dec 15 '10 at 8:51

For quantum mechanics, there's really nothing unrigorous about the path integral. You have to define it in Euclidean signature, but that's just the way life is with oscillatory integrals. It has nothing to do with the fact that the path integral is infinite-dimensional. Try to insert a set of intermediate states in the propagator $\langle q_f | e^{-iHt/\hbar}|q_i\rangle$ and you'll get an integral that's not absolutely convergent. This expression is just fine if you sandwich it inside a well-defined computation -- e.g., don't use singular wave functions for your initial and final states -- but if you want the expression to stand on its own, you have to provide some additional convergence information.

Usually what people do is observe that the unitary group of time translations is the imaginary boundary of an analytic semigroup. The real part of this semigroup, $e^{-H\tau}$, has a rigorous path integral formula; it's the volume of a cylinder set. The volume of a cylinder set is computed as the limit of cutoff path integrals of the form $$\frac{1}{Z} \int_{F_{cutoff}} e^{-\frac{1}{\hbar} S_{effective}(\phi)} d\phi,$$ where $d\phi$ is a Lebesgue/Haar/whatever measure on the finite-dimensional space of cutoff fields and $S_{effective}$ is a cutoff/lattice approximation to the continuum action. Given such a measure, under reasonable conditions, you can analytically continue the correlation functions from Euclidean signature back to Minkowski.

For the record: mathematicians are not tearing their hair out about this stuff. It's cool. We got it. We -- and by "we", I mean a relatively small number of experts, not necessarily including myself -- can even handle 4d Yang-Mills theory in finite volume. (What's hard is proving facts about the behavior of correlation functions in the IR limit.)

- This answer is impossibly misleading: when you say you can handle 4d Yang Mills in finite volume, you mean 4d lattice Yang Mills in finite volume, with finite coupling, when the gauge group is a compact product group. Big deal.
The whole problem is defining Yang Mills in a continuum in a finite volume, which is equivalent to the infinite volume/zero coupling limit, and none of the so-called "experts" can handle that. – Ron Maimon Aug 11 '11 at 0:58

– user1504 Aug 11 '11 at 17:57
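The cutoff formula above and the "Gaussian Fourier components" recipe in Ron Maimon's answer can both be made concrete for the free field. Below is a minimal numerical sketch, assuming a free massive scalar on a periodic one-dimensional Euclidean lattice (the lattice size, mass and sample count are arbitrary choices, and none of this comes from the thread itself): each configuration is drawn by filtering unit white noise with the square root of the momentum-space lattice propagator, which is the same as drawing every Fourier mode as a Gaussian with variance proportional to the propagator, and the sampled two-point function is then compared with the exact one.

```python
import numpy as np

# Sketch: sample a free massive scalar on a periodic 1D Euclidean lattice.
# Each sample is obtained by filtering unit white noise with the square root of
# the momentum-space lattice propagator 1/(k_hat^2 + m^2); equivalently, every
# Fourier mode is Gaussian with variance proportional to the propagator.

N = 64                 # number of lattice sites (arbitrary choice)
m = 0.5                # mass in lattice units (arbitrary choice)
n_samples = 20000      # number of field configurations to average over
rng = np.random.default_rng(0)

k = 2.0 * np.pi * np.fft.fftfreq(N)        # lattice momenta
k_hat_sq = (2.0 * np.sin(k / 2.0)) ** 2    # eigenvalues of minus the lattice Laplacian
propagator = 1.0 / (k_hat_sq + m ** 2)     # free Euclidean propagator in momentum space

corr = np.zeros(N)
for _ in range(n_samples):
    eta = rng.normal(size=N)                                        # unit white noise
    phi = np.fft.ifft(np.sqrt(propagator) * np.fft.fft(eta)).real  # one field configuration
    corr += phi[0] * phi                                            # accumulate <phi(0) phi(x)>
corr /= n_samples

exact = np.fft.ifft(propagator).real       # exact lattice two-point function
print(np.max(np.abs(corr - exact)))        # small, up to statistical error
```

For an interacting action one would instead have to sample the cutoff Boltzmann weight $e^{-S_{effective}(\phi)/\hbar}$ (e.g. by Monte Carlo) and then face exactly the renormalization and continuum-limit issues discussed in the answers above; the sketch says nothing about that part.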
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9455597996711731, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/82951/list
## Return to Question

3 added 57 characters in body

Let $F$ be a number field, and $G$ a connected semi-simple linear algebraic $F$-group, which does not contain anisotropic (simple) $F$-factors. Write $\hat{F}$ for the ring of finite adeles $F\otimes\hat{\mathbb{Z}}$. Then the strong approximation theorem implies that the double coset $G(F)\backslash G(\hat{F}) / K_G$ is finite for any compact open subgroup $K_G\subset G(\hat{F})$. In fact it is even equal to one (i.e. trivial double quotient) if $G$ is simply connected as a semi-simple group. And in general, for $G$ semi-simple but not simply-connected, how should one bound the growth of the size of the double quotient? At least we know that there is an isogeny $G'\rightarrow G$ with $G'$ semi-simple and simply connected. Can we expect the double quotient to be bounded by some function in terms of the degree of $G'\rightarrow G$ and the set of finite places where $K_G$ is not a maximal compact open subgroup? At least it seems that one could not expect the double quotient to be uniformly bounded when $K_G$ shrinks to the neutral element. Thanks!

2 added 4 characters in body

Let $F$ be a number field, and $G$ a connected semi-simple linear algebraic $F$-group. Write $\hat{F}$ for the ring of finite adeles $F\otimes\hat{\mathbb{Z}}$. Then the strong approximation theorem implies that the double coset $G(F)\backslash G(\hat{F}) / K_G$ is finite for any compact open subgroup $K_G\subset G(\hat{F})$. In fact it is even equal to one (i.e. trivial double quotient) if $G$ is simply connected as a semi-simple group. And in general, for $G$ semi-simple but not simply-connected, how should one bound the growth of the size of the double quotient? At least we know that there is an isogeny $G'\rightarrow G$ with $G'$ semi-simple and simply connected. Can we expect the double quotient to be bounded by some function in terms of the degree of $G'\rightarrow G$ and the set of finite places where $K_G$ is not a maximal compact open subgroup? At least it seems that one could not expect the double quotient to be uniformly bounded when $K_G$ shrinks to the neutral element. Thanks!

1 # finiteness of class number: a bound for semi-simple groups?

Let $F$ be a number field, and $G$ a connected semi-simple linear algebraic $F$-group. Write $\hat{F}$ for the ring of finite adeles $F\otimes\hat{\mathbb{Z}}$. Then the strong approximation theorem implies that the double coset $G(F)\backslash G(\hat{F}) / K_G$ is finite for any compact open subgroup $K_G\subset G(\hat{F})$. In fact it is even equal to one (i.e. trivial double quotient) if $G$ is simply connected as a semi-simple group. And in general, for $G$ semi-simple but not simply-connected, how should one bound the growth of the size of the double quotient? At least we know that there is an isogeny $G'\rightarrow G$ with $G'$ semi-simple and simply connected. Can we expect the double quotient to be bounded by some function in terms of the degree of $G'\rightarrow G$ and the set of finite places where $K$ is not a maximal compact open subgroup? At least it seems that one could not expect the double quotient to be uniformly bounded when $K$ shrinks to the neutral element. Thanks!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9086261987686157, "perplexity_flag": "head"}