1,231,113
<p>There are plenty of questions about the homology of the connected sum of two $n$-manifolds, but I didn't find an explicit explanation of the computation done in degree $n-1$. Here are some examples of such questions:</p> <p><strong>1)</strong> In <a href="https://math.stackexchange.com/questions/453132/connected-sums-and-their-homology">this question</a> the way to solve it is outlined, which is perfectly clear, but I don't know how to prove the result claimed in degree $n-1$. </p> <blockquote> <p>In the comments it's written that the boundary map $$H_n(M) \to H_{n-1}(B\setminus B') = H_{n-1}(S^{n-1})$$ is an iso, but being a purely algebraic map, I cannot visualise it, and hence I don't know how to prove that it is in fact an isomorphism.</p> </blockquote> <p><strong>My Idea</strong> ($M,N$ orientable): if the map were induced by inclusion we would be OK, because we are mapping the fundamental class of the connected sum to a local orientation, and this is an iso. But I <strong>don't know</strong> how to formalise it properly. Moreover, I don't know how to deal with the boundary map, which is purely algebraic, so not easy to handle in general. In the case that one of them is not orientable, I don't know how to proceed.</p> <p><strong>2)</strong> The same problem arises in <a href="https://math.stackexchange.com/questions/187413/computing-the-homology-and-cohomology-of-connected-sum">this question</a>, where it is written:</p> <blockquote> <p>[...] In the case that both are orientable, the above sequence turns into $$0\to \mathbb{Z} \to \mathbb{Z}\oplus\mathbb{Z} \to \mathbb{Z} \to \widetilde{H}_{n-1}(M\# N)\to \widetilde{H}_{n-1}(M\vee N) \to 0$$ as their connected sum is also orientable. From this, we see that $$\widetilde{H}_{n-1}(M\# N)\to \widetilde{H}_{n-1}(M\vee N)$$ must be an isomorphism.</p> </blockquote> <p>I'm not able to find a reason why it is true. 
A similar result (mutatis mutandis) is stated in the case that only one of them is orientable.</p> <p><a href="https://www.physicsforums.com/threads/homology-of-a-connected-sum.306512/" rel="nofollow noreferrer">Here you can find the same question</a>, with the additional "hint" that: </p> <blockquote> <p>the so-called <strong>bounding sphere</strong> is homologous to zero (which should mean that the map from $H_{n-1}(S^{n-1})$ into $H_{n-1}(M-p) \oplus H_{n-1}(N-p)$ is the zero map). </p> </blockquote> <p>But even here there is no further reference on why this result is true. </p> <p>I'm starting to think that this should be a kind of triviality, but I'm facing several problems in finding a proof of it. So can someone explain the isomorphism between $H_{n-1}(M \sharp N)$ and $H_{n-1}(M)\oplus H_{n-1}(N)$ in the cases that both manifolds are orientable and that only one of them is orientable?</p>
Joe S
224,892
<p>There is a basic theorem for the case of triangulable orientable compact n-manifolds without boundary: the top-dimensional Z-homology is Z and is generated by an oriented sum of the n-simplices of the triangulation. </p> <p>The picture is that each n-simplex shares each of its (n-1)-faces with exactly one other n-simplex, and with proper orientation they cancel under the boundary operator. This oriented sum of n-simplices is called the fundamental cycle.</p> <p>If you remove one of these n-simplices (this corresponds to your n-ball) then its (n-1)-faces do not cancel, and so the boundary of the remaining n-simplices is the boundary of the removed n-simplex. This corresponds to your sphere or spherical shell. So the spherical shell is homologous to zero.</p> <p>If the manifold is not orientable there is no fundamental cycle, and the bounding sphere is not null-homologous, as the example of the Möbius band shows.</p> <p>If you choose the triangulation so that the small ball that has been removed lies in the interior of one of the n-simplices, then it is easy to show that the manifold minus the n-simplex is a strong deformation retract of the manifold minus the ball.</p> <ul> <li>I am not sure how the proof goes if the manifold cannot be triangulated.</li> </ul>
82,716
<p>There seem to be two competing(?) formalisms for specifying theories: <a href="http://ncatlab.org/nlab/show/sketch" rel="noreferrer">sketches</a> (as developed by Ehresmann and students, and expanded upon by Barr and Wells in, for example, <a href="http://www.tac.mta.ca/tac/reprints/articles/12/tr12.pdf" rel="noreferrer">Toposes, Triples and Theories</a>), and the setting of <a href="http://cseweb.ucsd.edu/~goguen/pps/nel05.pdf" rel="noreferrer">institutions</a>. </p> <p>But I sometimes get a glimpse that sketches are really a very nice way to specify a good category of <em>signatures</em>, while institutions are much more model-theoretic. But in works on institutions, the category of signatures is usually highly under-specified (which is quite ironic, really).</p> <p>So my question really is: what is the relation between sketches and institutions? </p> <p>A subsidiary question is: why do I find a lot of work relying on institutions, but comparatively less on sketches? [I am talking volume here, not quality.] Did sketches somehow not prove to be effective?</p>
Peter Arndt
733
<p>Well, as you said yourself, the category of signatures is left unspecified in the general setup of institution theory - it just tells you what, given some notion of signature, can be said about the relation of syntax and semantics. It's quite amazing how many meaningful things can be said at such a level of abstraction, see Razvan Diaconescu's book "<a href="http://www.springer.com/birkhauser/mathematics/book/978-3-7643-8707-5" rel="nofollow">Institution Independent Model Theory</a>".</p> <p>Now, as you said yourself again, sketches are a specific type of signature, and they come with a specific type of semantics, so they form an instance of the general concept of institution. An outline of a proof of this is given in section 10.3 of Barr/Wells' book "<a href="http://www.cwru.edu/artsci/math/wells/pub/ctcs.html" rel="nofollow">Categories for Computing Science</a>".</p> <p>About your question why you find more references to institution theory, I can only speculate. It probably is because you are looking through computer science literature, and it would be the other way round if you were scanning math literature. The reason might be that mathematics often cares for qualitative statements while in computer science things have to be effectively spelled out: given, for example, some specification of a structure, a mathematician is able to say "Okay, those structures form an accessible category, so by <a href="http://books.google.com/books/about/Accessible_categories.html?id=ZHY8xsMEII8C" rel="nofollow">Makkai/Paré</a> they are the models of some sketch", while for a computer scientist the actual syntax of the specification counts, and this might not easily be translatable into sketch language - thus he resorts to institution theory, which can immediately accommodate any sort of specification. But I'm just guessing...</p>
1,158,712
<p>My question is: given an invertible matrix $A$ (with complex entries), if $A^n$ is normal, is $A$ normal?</p> <p>This is related to the question <a href="https://math.stackexchange.com/questions/1158600/if-a-is-an-invertible-n-times-n-complex-matrix-and-some-power-of-a-is-diag">If $A$ is an invertible $n\times n$ complex matrix and some power of $A$ is diagonal, then $A$ can be diagonalized</a>. <br/> <br/>Since we know that such a matrix is unitarily diagonalizable iff it is normal, I thought of formulating it this way. <br/><br/> My attempt so far: <br/> I thought of first proving that if $A^n$ is normal, then $A^{n-1}$ is normal. Since $A^n$ is normal, $A^n A^{n*}=A^{n*}A^n$. We write this as $P=A A^{n-1}A^{(n-1)*}A^*=A^*A^{(n-1)*}A^{n-1}A$. Let $B=A^{n-1}A^{(n-1)*}$. Then we have $P=A BA^*=A^*B^*A$. I was thinking of taking it this way, but I'm not sure where to go from here. Can you help me proceed? </p>
egreg
62,967
<p>Consider a triangularization $A=UTU^*$ of $A$. Saying that $A^n$ is normal means that $T^n$ is normal, hence diagonal. Divide $T$ into blocks, $$ T=\begin{bmatrix} T_1 &amp; X \\ 0 &amp; T_2 \end{bmatrix} $$ where $T_1$ is $2\times2$ (the assertion is trivial for $1\times1$ matrices). Then, using block multiplication, we have $$ T^n=\begin{bmatrix} T_1^n &amp; X_n \\ 0 &amp; T_2^n \end{bmatrix} $$ so we can restrict attention to $2\times2$ matrices. It's easy to prove by induction that, assuming $a\ne c$, $$ \begin{bmatrix} a &amp; b\\ 0 &amp; c \end{bmatrix}^n= \begin{bmatrix} a^n &amp; b(a^n-c^n)/(a-c) \\ 0 &amp; c^n \end{bmatrix} $$ so $T_1^n$ is diagonal only when $b=0$ or $a^n=c^n$. Choosing $a\ne c$ with $a^n=c^n$ (for instance, distinct $n$-th roots of unity) and $b\ne0$, we get infinitely many counterexamples.</p>
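A concrete instance of this counterexample family can be checked numerically (a sketch; the specific matrix is my choice, not from the answer): take $a=1$, $c=-1$, $b=1$, $n=2$, i.e. $A=\begin{bmatrix}1&1\\0&-1\end{bmatrix}$, so that $A^2=I$ is normal while $A$ is not.

```python
# Sketch: A = [[1,1],[0,-1]] is invertible and not normal, yet A^2 = I is
# normal. Entries are real, so the adjoint A* is just the transpose.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

A = [[1, 1], [0, -1]]
A2 = matmul(A, A)
print(A2)  # the identity matrix, which is trivially normal
print(matmul(A, transpose(A)) == matmul(transpose(A), A))  # False: A is not normal
```

Here $a=1$ and $c=-1$ are distinct square roots of unity, matching the condition $a\ne c$, $a^n=c^n$ in the answer.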
4,646,498
<p>I have an assignment for university and I’m a bit confused as to how I should translate the following sentence:</p> <p>Neither Ana nor Bob can do every exercise but each can do some.</p> <p>I've identified the atomic sentences A=Ana can do every exercise and B=Bob can do every exercise and managed to translate the first part into ~A &amp; ~B but I don't know how to go about &quot;each can do some&quot;. Any help would be greatly appreciated!</p>
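If the assignment allows predicate logic rather than purely propositional atoms (a sketch; the domain and the ability relation below are invented for illustration), writing $E(p,x)$ for "$p$ can do exercise $x$" turns "every" into $\forall$ and "some" into $\exists$, and the whole sentence can be model-checked over a toy domain:

```python
# Toy model check of:
#   ~forall x E(ana, x)  and  ~forall x E(bob, x)
#   and exists x E(ana, x)  and  exists x E(bob, x)
# The exercises and the can_do relation are hypothetical examples.
exercises = [1, 2, 3]
can_do = {("ana", 1), ("ana", 2), ("bob", 2)}

def E(p, x):
    return (p, x) in can_do

formula = (not all(E("ana", x) for x in exercises)      # Ana can't do every exercise
           and not all(E("bob", x) for x in exercises)  # neither can Bob
           and any(E("ana", x) for x in exercises)      # but Ana can do some
           and any(E("bob", x) for x in exercises))     # and so can Bob
print(formula)  # True in this particular model
```

The point of the sketch is that "each can do some" needs existential quantifiers (or new atoms), not just the negations of A and B.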
Angelo
771,461
<p>Kai, you can solve it without any change of variables, indeed :</p> <p><span class="math-container">$\displaystyle y’=-\int4x\,\mathrm dx=-2x^2+C\;\;,$</span></p> <p><span class="math-container">$\displaystyle y=\int\left(-2x^2+C\right)\mathrm dx=-\dfrac23x^3+Cx+D\,.$</span></p>
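A numeric cross-check of this double integration (a sketch; the equation being solved appears to be $y''=-4x$, and the constants $C$, $D$ below are hypothetical choices):

```python
# Sketch: y(x) = -(2/3)x^3 + C x + D should satisfy y'' = -4x for ANY
# constants C and D; verify with a central second-difference approximation.
C, D = 1.5, -2.0  # hypothetical integration constants

def y(x):
    return -(2.0 / 3.0) * x**3 + C * x + D

h = 1e-4  # finite-difference step
for x in [-1.0, 0.5, 2.0]:
    ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / h**2  # approximates y''(x)
    assert abs(ypp - (-4.0 * x)) < 1e-3
```

The central second difference is exact for cubics up to rounding, so the tolerance is generous.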
3,531,809
<p>In how many ways can we place 7 identical red balls and 7 identical blue balls into 5 distinct urns if each urn has at least 1 ball?</p> <p>This is how I approached the problem:</p> <p>1) Compute the number of total combinations if there were no constraints:</p> <p>Placing just the red balls, allowing for empty urns: <span class="math-container">$\binom{n+k-1}{k-1} = \binom{7+5-1}{5-1} = \binom{11}{4} = 330$</span>. There are the same number of blue ball configurations.</p> <p>Since each red ball configuration can have 330 possible blue ball configurations, then in total we should have <span class="math-container">$330^2 = 108900$</span> </p> <p>2) Compute the number of illegal configurations with 1, 2, 3 or 4 empty urns:</p> <p><span class="math-container">$r_1$</span> = ways to put 7 red balls into 1 urn = <span class="math-container">$\binom{7-1}{1-1} = \binom{6}{0} = 1$</span><br> <span class="math-container">$r_2$</span> = ways to put 7 red balls into 2 urns = <span class="math-container">$\binom{7-1}{2-1} = \binom{6}{1} = 6$</span><br> <span class="math-container">$r_3$</span> = ways to put 7 red balls into 3 urns = <span class="math-container">$\binom{7-1}{3-1} = \binom{6}{2} = 15$</span><br> <span class="math-container">$r_4$</span> = ways to put 7 red balls into 4 urns = <span class="math-container">$\binom{7-1}{4-1} = \binom{6}{3} = $</span>20 </p> <p><span class="math-container">$b_1$</span> = ways to put 7 blue balls into 1 urn = <span class="math-container">$r_1$</span><br> <span class="math-container">$b_2$</span> = ways to put 7 blue balls into 2 urns = <span class="math-container">$r_2$</span><br> <span class="math-container">$b_3$</span> = ways to put 7 blue balls into 3 urns = <span class="math-container">$r_3$</span><br> <span class="math-container">$b_4$</span> = ways to put 7 blue balls into 4 urns = <span class="math-container">$r_4$</span> </p> <p><span class="math-container">$u_1$</span> = ways to pick 1 urn = <span 
class="math-container">$\binom{5}{1} = 5$</span><br> <span class="math-container">$u_2$</span> = ways to pick 2 urns = <span class="math-container">$\binom{5}{2} = 10$</span><br> <span class="math-container">$u_3$</span> = ways to pick 3 urns = <span class="math-container">$\binom{5}{3} = 10$</span><br> <span class="math-container">$u_4$</span> = ways to pick 4 urns = <span class="math-container">$\binom{5}{4} = 5$</span> </p> <p># ways to put 7 red and 7 blue balls into 1, 2, 3, or 4 urns =<br> # ways to put 7 red and 7 blue balls into 5 urns where 1 or more urns are empty = </p> <p><span class="math-container">$r_1 b_1\binom{5}{1} + r_2 b_2\binom{5}{2} + r_3 b_3\binom{5}{3} + r_4 b_4\binom{5}{4} = $</span> </p> <p><span class="math-container">$1^2 \cdot 5 + 6^2 \cdot 10 + 15^2 \cdot 10 + 20^2 \cdot 5 = 4615$</span> = # of illegal configurations</p> <p>3) Subtract the number of illegal configurations from the number of total configurations:</p> <p>108900 - 4615 = 104285</p> <p>Is this correct? If not, could someone explain where either my logic breaks down or where I calculated something incorrectly?</p>
Daniel S.
362,911
<pre><code>A=[ nchoosek(1:11,4)-ones(size(nchoosek(1:11,4))), diff(nchoosek(1:11,4),[],2) - ones(size(diff(nchoosek(1:11,4),[],2))), -nchoosek(1:11,4)+11*ones(size(nchoosek(1:11,4)))]; B=A(:, [1,5,6,7,11]); valid=0; for i=1:size(B,1) for j=1:size(B,1) C=B(i,:)+B(j,:); if (min(C) &gt; 0) valid=valid+1; end end end valid </code></pre> <p>The Matlab code above generates the right answer, which is 49225.</p> <p>One nice way to solve this sort of problem is through generating functions:</p> <p><a href="https://math.stackexchange.com/questions/1161312/how-to-solve-this-distribution-problem-with-generating-functions">How to solve this distribution problem with generating functions?</a></p> <p>I think you would be very interested to read Section 4.2 of Wilf's <a href="http://www.math.upenn.edu/~wilf/DownldGF.html" rel="nofollow noreferrer">generatingfunctionology</a>.</p> <p>Note that what Wilf calls the sieve method is precisely inclusion-exclusion.</p> <p><a href="https://math.stackexchange.com/questions/108973/inclusion-exclusion-vs-generating-functions">Inclusion Exclusion vs. Generating Functions</a></p>
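The same number can be reproduced by standard inclusion-exclusion over the set of urns forced to be empty: with $j$ urns empty, the red and blue distributions over the remaining $5-j$ urns are independent stars-and-bars counts. A minimal Python sketch (assuming Python 3.8+ for `math.comb`):

```python
from math import comb

def stars_and_bars(n, k):
    """Ways to put n identical balls into k distinct urns, empty urns allowed."""
    return comb(n + k - 1, k - 1)

# Inclusion-exclusion on which urns are forced to be empty; red and blue
# counts are independent, hence the square.
total = sum((-1) ** j * comb(5, j) * stars_and_bars(7, 5 - j) ** 2
            for j in range(5))
print(total)  # 49225
```

This agrees with the Matlab enumeration, and shows where the question's subtraction goes wrong: configurations with several empty urns are subtracted more than once without the alternating correction.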
1,572,126
<p>I did solve it; I got four solutions, but the book says there are only 3.</p> <p>I considered the cases $| x - 3 | = 1$ or $3x^2 -10x + 3 = 0$.</p> <p>I got for $x\leq 0$: $~2 , 3 , \frac13$</p> <p>I got for $x &gt; 0$: $~4$ </p> <p>Am I wrong? Is $0^0 = 1$ or not?</p> <p>Considering the fact that: $ 2^2 = 2 \cdot 2\cdot 1 $ </p> <p>$2^1 = 2\cdot 1$ </p> <p>$2^0 = 1$ </p> <p>$0^0$ should be $1$, right?</p>
Archis Welankar
275,884
<p>You are wrong: $0^0$ is indeterminate, so you will get only three solutions. The modulus $|\cdot|$ opens with $\pm$; if we take the '$-$' sign, we get $\text{expression}=-1$, and a negative number can be expressed as something raised to a power only via a complex number (here $i^2$), but we want real solutions, so we need $\text{expression}=1$, and you get three solutions. Hope it's clear.</p>
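As an aside on the $0^0$ point (a sketch, not part of the original answer): most programming languages and combinatorial conventions define $0^0=1$, but as a *limit form* $x^y$ at $(0,0)$ is genuinely indeterminate, because different approach paths give different limits. That path-dependence is exactly why the solution where base and exponent vanish simultaneously is discarded:

```python
import math

# Convention vs. limit form: Python, like most languages, defines 0**0 == 1,
# but x**y has no single limit as (x, y) -> (0, 0).
print(0 ** 0)                    # 1 (the discrete/combinatorial convention)
x = 1e-8
print(x ** x)                    # close to 1: along the path y = x
print(x ** (1 / math.log(x)))    # close to e: along the path y = 1/ln(x)
```

Along $y=x$ the limit of $x^y$ is $1$, while along $y=1/\ln x$ it is $e$, so no single value of $0^0$ makes $x^y$ continuous there.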
1,146,050
<p>Given $f(x)=\frac{x^4+x^2+1}{x^2+x+1}$, I need to find the minimum value of $f(x)$.</p> <p>I know it can easily be done by polynomial division, but my question is whether there's another (maybe more elegant) way to find the minimum. </p> <p><strong>About my way</strong>: $f(x)=\frac{x^4+x^2+1}{x^2+x+1}=x^2-x+1$ (long division).</p> <p>$x_{min}=\frac{-b}{2a}=\frac{1}{2}$ (for the parabola $ax^2+bx+c$).</p> <p>So $f(0.5)=0.5^2-0.5+1=\frac{3}{4}$.</p> <p>Thanks. </p>
Amirali
794,843
<p>Multiply the fraction by <span class="math-container">$\frac{x^2-1}{x^2-1}$</span>:<span class="math-container">$$\frac{(x^2-1)(x^4+x^2+1)}{(x+1)(x-1)(x^2+x+1)}=\frac{x^6-1}{(x+1)(x^3-1)}=\frac{x^3+1}{x+1}=x^2-x+1$$</span></p>
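The simplification and the minimum value can be spot-checked numerically (a quick sketch, not part of the original answer):

```python
# Check (x^4 + x^2 + 1)/(x^2 + x + 1) == x^2 - x + 1 at sample points,
# and that the vertex x = 1/2 of the quadratic gives the minimum value 3/4.
def f(x):
    return (x**4 + x**2 + 1) / (x**2 + x + 1)

for x in [-2.0, -0.5, 0.0, 0.5, 3.0]:
    assert abs(f(x) - (x * x - x + 1)) < 1e-12  # the two expressions agree
print(f(0.5))  # 0.75, i.e. 3/4
```

Note the multiplication by $\frac{x^2-1}{x^2-1}$ in the answer is only literally valid for $x\ne\pm1$, but the resulting polynomial identity extends to all $x$ by continuity.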
2,573,458
<p>Given $n$ prime numbers $p_1, p_2, p_3,\ldots,p_n$, the number $p_1p_2p_3\cdots p_n+1$ is not divisible by any of the primes $p_i$, $i=1,2,3,\ldots,n$. I don't understand why. Can somebody give me a hint or an explanation? Thanks.</p>
Michael Hardy
11,667
<p>First approach:</p> <blockquote> <p>Let <span class="math-container">$P= 2\times3\times5\times7\times11\times13.$</span> Then:</p> <p>The next number after <span class="math-container">$P$</span> that is divisible by <span class="math-container">$2$</span> is <span class="math-container">$P+2.$</span></p> <p>The next number after <span class="math-container">$P$</span> that is divisible by <span class="math-container">$3$</span> is <span class="math-container">$P+3.$</span></p> <p>The next number after <span class="math-container">$P$</span> that is divisible by <span class="math-container">$5$</span> is <span class="math-container">$P+5.$</span></p> <p>The next number after <span class="math-container">$P$</span> that is divisible by <span class="math-container">$7$</span> is <span class="math-container">$P+7.$</span></p> <p>The next number after <span class="math-container">$P$</span> that is divisible by <span class="math-container">$11$</span> is <span class="math-container">$P+11.$</span></p> <p>The next number after <span class="math-container">$P$</span> that is divisible by <span class="math-container">$13$</span> is <span class="math-container">$P+13.$</span></p> <p>So <span class="math-container">$P+1$</span> is not divisible by any of those.</p> </blockquote> <p>Second approach:</p> <blockquote> <p>If <span class="math-container">$(2\times3\times5\times7\times11\times13) + 1$</span> is divided by <span class="math-container">$2$</span> then the quotient is <span class="math-container">$\underbrace{3\times5\times7\times11\times 13}_{\large\text{excluding 2}}$</span> and the remainder is <span class="math-container">$1.$</span></p> <p>If <span class="math-container">$(2\times3\times5\times7\times11\times13) + 1$</span> is divided by <span class="math-container">$3$</span> then the quotient is <span class="math-container">$\underbrace{2\times5\times7\times11\times 13}_{\large\text{excluding 3}}$</span> and the remainder is <span 
class="math-container">$1.$</span></p> <p>If <span class="math-container">$(2\times3\times5\times7\times11\times13) + 1$</span> is divided by <span class="math-container">$5$</span> then the quotient is <span class="math-container">$\underbrace{2\times3\times7\times11\times 13}_{\large\text{excluding 5}}$</span> and the remainder is <span class="math-container">$1.$</span></p> <p>If <span class="math-container">$(2\times3\times5\times7\times11\times13) + 1$</span> is divided by <span class="math-container">$7$</span> then the quotient is <span class="math-container">$\underbrace{2\times3\times5\times11\times 13}_{\large\text{excluding 7}}$</span> and the remainder is <span class="math-container">$1.$</span></p> <p>If <span class="math-container">$(2\times3\times5\times7\times11\times13) + 1$</span> is divided by <span class="math-container">$11$</span> then the quotient is <span class="math-container">$\underbrace{2\times3\times5\times7\times 13}_{\large\text{excluding 11}}$</span> and the remainder is <span class="math-container">$1.$</span></p> <p>If <span class="math-container">$(2\times3\times5\times7\times11\times13) + 1$</span> is divided by <span class="math-container">$13$</span> then the quotient is <span class="math-container">$2\underbrace{\times3\times5\times7\times11}_{\large\text{excluding 13}}$</span> and the remainder is <span class="math-container">$1.$</span></p> <p>(Appendix: <span class="math-container">$(2\times3\times5\times7\times11\times13) + 1 = 59\times509.$</span>)</p> </blockquote>
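Both computations, including the appendix factorisation $30031 = 59\times509$, are easy to verify mechanically (a small Python sketch, assuming Python 3.8+ for `math.prod`):

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]
P = prod(primes)                       # 30030
print(P + 1)                           # 30031
print([(P + 1) % p for p in primes])   # [1, 1, 1, 1, 1, 1]: remainder 1 each time
print(59 * 509)                        # 30031: P+1 is composite, but its prime
                                       # factors are NOT among the original list
```

This also illustrates the standard caveat in Euclid's argument: $P+1$ need not itself be prime; it only must have a prime factor outside the given list.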
264,770
<p>If we have a vector in $\mathbb{R}^3$ (or any Euclidean space, I suppose), say $v = (-3,-6,-9)$, then:</p> <ol> <li>May I always "factor" out a constant from a vector, as in this example $(-3,-6,-9) = -3(1,2,3) \implies (1,2,3)$, or does the constant always go along with the vector?</li> <li>If yes on question 1, then if I want to compute the norm, is the correct computation the following: $||v|| = |-3|\sqrt{14} = 3\sqrt{14}$? If so, is the only reason that we take the absolute value of $-3$ because we don't want a negative length?</li> </ol> <p>I'm sorry if things are obvious, but I just want to make sure I actually get this correctly.</p> <p>Best regards</p>
Ink
34,881
<p>The answer is yes to both questions. In any normed vector space, $\|\alpha v\| = |\alpha|\|v\|$ for any vector $v$ and scalar $\alpha$.</p>
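For the concrete vector from the question, homogeneity of the norm can be checked directly (a quick numeric sketch): with $v=-3\,(1,2,3)$, both routes give $\|v\|=3\sqrt{14}$.

```python
import math

def norm(v):
    """Euclidean norm of a vector given as a tuple of numbers."""
    return math.sqrt(sum(x * x for x in v))

v = (-3, -6, -9)
u = (1, 2, 3)
print(norm(v))             # computed directly from the entries of v
print(abs(-3) * norm(u))   # same value via homogeneity: |alpha| * ||u||
assert math.isclose(norm(v), 3 * math.sqrt(14))
```

The absolute value on the scalar is exactly what keeps the result a nonnegative length.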
169,531
<p>Let me preface this by saying that I have essentially no background in logic, and I apologize in advance if this question is unintelligent. Perhaps the correct answer to my question is "go look it up in a textbook"; the reasons I haven't done so are that I wouldn't know which textbook to look in and I wouldn't know what I'm looking at even if I did.</p> <p>Anyway, here's the setup. According to my understanding (i.e. Wikipedia), Gödel's first incompleteness theorem says that no formal theory whose axioms form a recursively enumerable set and which contains the axioms for the natural numbers can be both complete and consistent. Let $T$ be such a theory, and assume $T$ is consistent. Then there is a "Gödel statement" $G$ in $T$ which is true but cannot be proven in $T$. Form a new theory $T'$ obtained from $T$ by adjoining $G$ as an axiom. Though I don't know how to prove anything, it seems reasonably likely to me that $T'$ is still consistent, has recursively enumerable axioms, and contains the axioms for the natural numbers. Thus, applying the incompleteness theorem again, one deduces that there is a Gödel statement $G'$ in $T'$.</p> <p>My question is: can we necessarily take $G'$ to be a statement in $T$? Posed differently, could there be a consistent formal theory with recursively enumerable axioms which contains arithmetic and which can prove every true <em>arithmetic</em> statement, even though it can't prove all of its own true statements? If this is theoretically possible, are there any known examples or candidates?</p> <p>Thanks in advance!</p>
hmakholm left over Monica
14,366
<p>You are right that $T'=T+G$ must be consistent. If it were not, then we could prove a contradiction from $T$ together with $G$, which would amount to a proof by contradiction of $\neg G$ in $T$. But that contradicts the fact that $G$ is independent of $T$!</p> <p>On the other hand, when you ask</p> <blockquote> <p>Posed differently, could there be a consistent formal theory with recursively enumerable axioms which contains arithmetic and which can prove every true arithmetic statement, even though it can't prove all of its own true statements?</p> </blockquote> <p>that cannot be, because the independent statement $G$ produced by Gödel's construction <em>is always an arithmetic statement</em>. ($G$ is often popularly explained as stating something about provability or not, which is correct so far as it goes -- but the major achievement of Gödel's work is to show how such claims can be expressed as arithmetic statements).</p> <p>Indeed, if the original theory $T$ was something like Peano Arithmetic, the <em>language</em> of the theory does not allow one to even <em>state</em> any sentence that is not arithmetic -- and no amount of added <em>axioms</em> can change that.</p>
194,724
<p>All graphs discussed are finite and simple. The <em>cycle sequence</em> of a graph $G$, denoted $C(G)$, is the nondecreasing sequence of the lengths of all of the cycles in $G$, where cycles are distinguished by the vertices they contain, not by the edges they contain. </p> <p>For example, $C(K_{3,2})=4,4,4$ and $C(K_4)=3,3,3,3,4$.</p> <p>Two graphs are <em>isoparic</em> if they have the same number of vertices and the same number of edges. </p> <p><strong>Main question</strong>: If $G$ and $H$ are 2-connected nonisoparic graphs, can $C(G)=C(H)$?</p> <p>The 2-connected condition is so we can't just make a bunch of edge-disjoint cycles that share a vertex. The nonisoparic condition is so we can ignore situations like the following:</p> <p><img src="https://i.stack.imgur.com/yOqfH.png" alt="isoparic graphs"></p> <p>These graphs are not isomorphic but are isoparic. Both graphs have the cycle sequence $3,3,4,5,5,6$ and can be viewed as just a square surrounded by two triangles. Perhaps there's a better way to ignore this trick besides the nonisoparic condition.</p> <p>I'm interested more generally in finding out exactly what the cycle sequence can tell us. When is a cycle sequence realizable by a 2-connected graph? Is such a realization ever unique? I've looked at a couple dozen graphs on fewer than seven vertices and the only duplicate cycle sequences have been for the graphs shown above. </p> <p>Thank you.</p>
Gordon Royle
1,492
<p><strong>Second Answer</strong></p> <p>I'm adding this as another separate answer, rather than editing the first "answer", because otherwise anyone coming late to this discussion will end up doubly confused.</p> <p>So let's try again, and say that the answer to your question is still "Yes". </p> <p>If you type the following into Sage </p> <pre><code>g1 = Graph("G?rFf_") g2 = Graph("H??EDz{") </code></pre> <p>and then show them as before, we get</p> <p><img src="https://i.stack.imgur.com/x7n5d.png" alt="enter image description here"> <img src="https://i.stack.imgur.com/VdgMV.png" alt="enter image description here"></p> <p>Then I think that they each have exactly 11 4-blobs and 4 6-blobs (using "blob" rather than overloading the word cycle), but one has 8 vertices and 12 edges and the other has 9 vertices and 13 edges.</p> <p>Here's a list of the blobs for the first graph (preceded by the size)</p> <pre><code>4 5 4 1 0 4 6 4 1 0 4 6 5 1 0 4 7 4 1 0 4 7 5 1 0 4 7 6 1 0 4 7 6 2 0 4 7 6 2 1 4 7 6 3 0 4 7 6 3 1 4 7 6 3 2 6 7 6 4 2 1 0 6 7 6 4 3 1 0 6 7 6 5 2 1 0 6 7 6 5 3 1 0 </code></pre> <p>and here are the ones for the second graph</p> <pre><code>4 8 6 1 0 4 8 7 2 0 4 8 7 3 0 4 8 7 3 2 4 8 7 4 0 4 8 7 4 2 4 8 7 4 3 4 8 7 5 0 4 8 7 5 2 4 8 7 5 3 4 8 7 5 4 6 8 7 6 2 1 0 6 8 7 6 3 1 0 6 8 7 6 4 1 0 6 8 7 6 5 1 0 </code></pre>
430,629
<p>The <strong>full linear monoid</strong> <span class="math-container">$M_N(k)$</span> of a field <span class="math-container">$k$</span> is the set of <span class="math-container">$N \times N$</span> matrices with entries in <span class="math-container">$k$</span>, made into a monoid with matrix multiplication. A <strong>representation</strong> of <span class="math-container">$M_N(k)$</span> on a vector space <span class="math-container">$V$</span> over <span class="math-container">$k$</span> is a monoid homomorphism</p> <p><span class="math-container">$$ \rho \colon M_N(k) \to \textrm{End}(V). $$</span></p> <p>What is the classification of representations of the full linear monoid?</p> <p>(Note: this is different than asking about representations of <span class="math-container">$M_N(k)$</span> viewed as an <em>algebra.</em>)</p> <p>We get a bunch of irreducible representations of <span class="math-container">$M_N(k)$</span> from Young diagrams with <span class="math-container">$\le N$</span> rows. The vector space</p> <p><span class="math-container">$$ (k^N)^{\otimes n}= \underbrace{k^N\otimes \cdots\otimes k^N}_{\mbox{$n$ copies}} $$</span></p> <p>is a representation of <span class="math-container">$M_N(k)$</span> in an obvious way. 
Given a Young diagram with <span class="math-container">$n$</span> boxes and <span class="math-container">$\le N$</span> rows, we get a minimal central idempotent in the group algebra of <span class="math-container">$S_n$</span>, and we can use this to project <span class="math-container">$(k^N)^{\otimes n}$</span> down to a subspace that is a representation of <span class="math-container">$M_N(k)$</span>.</p> <p>I believe these are all the <strong>polynomial</strong> irreducible representations <span class="math-container">$\rho$</span> of <span class="math-container">$M_N(k)$</span>: that is, those where the matrix entries of <span class="math-container">$\rho(T)$</span> are polynomials in the matrix entries of <span class="math-container">$T \in M_N(k)$</span>.</p> <ol> <li>Is this correct?</li> </ol> <p>We get more irreducible representations using the absolute Galois group of <span class="math-container">$k$</span>. Any field automorphism of <span class="math-container">$k$</span> gives an automorphism <span class="math-container">$\alpha$</span> of the monoid <span class="math-container">$M_n(k)$</span>, and composing this with a polynomial irreducible representation <span class="math-container">$\rho \colon M_n(k) \to \mathrm{End}(V)$</span> we get a new representation <span class="math-container">$\rho \circ \alpha$</span> which is still irreducible, but not polynomial unless <span class="math-container">$\alpha$</span> is the identity.</p> <p>But when <span class="math-container">$N = 1$</span>, at least, there are even more irreducible representations of the full linear monoid! 
Then <span class="math-container">$M_1(k)$</span> is the multiplicative monoid of <span class="math-container">$k$</span>, and it has a 1-dimensional irreducible representation sending <span class="math-container">$x \in k$</span> to multiplication by <span class="math-container">$x^n$</span> for any <span class="math-container">$n \ge 0$</span>.</p> <p>The point here is that the multiplicative monoid of <span class="math-container">$k$</span> has endomorphisms that don't come from field automorphisms. There can also be others that combine raising to a power with field automorphisms: e.g. <span class="math-container">$M_1(\mathbb{C})$</span> has a 1-dimensional irreducible representation sending <span class="math-container">$z \in \mathbb{C}$</span> to multiplication by <span class="math-container">$z^4 \overline{z}^3$</span>.</p> <p>Since endomorphisms of the multiplicative monoid of <span class="math-container">$k$</span> not arising from field automorphisms don't preserve addition, I don't see how to use them to get extra representations of <span class="math-container">$M_N(k)$</span> for <span class="math-container">$N &gt; 1$</span>.</p> <p>So:</p> <ol start="2"> <li><p>Are all the irreducible representations of <span class="math-container">$M_N(k)$</span> for <span class="math-container">$N &gt; 1$</span> of the form <span class="math-container">$\rho \circ \alpha$</span> where <span class="math-container">$\rho$</span> comes from a Young diagram with at most <span class="math-container">$N$</span> rows and <span class="math-container">$\alpha$</span> comes from a field automorphism of <span class="math-container">$k$</span>, or are there more?</p> </li> <li><p>Do all the irreducible representations of <span class="math-container">$M_N(k)$</span> for <span class="math-container">$N = 1$</span> come from endomorphisms of the multiplicative monoid of <span class="math-container">$k$</span>?</p> </li> <li><p>Are all finite-dimensional representations of <span 
class="math-container">$M_n(k)$</span> completely reducible?</p> </li> </ol>
Nicholas Kuhn
102,519
<p>As Ben Steinberg has been suggesting in the comments, the representation theory of these has been studied for quite a while. His book on this, <em>The Representation Theory of Finite Monoids</em>, is an excellent modern general reference for how to think about these.</p> <p>The general theme is that the maximal subgroups in <span class="math-container">$M_n(k)$</span>, namely the groups <span class="math-container">$GL_m(k)$</span> for <span class="math-container">$m=0, \dots, n$</span>, control the representation theory. For example, there is a recollement diagram of the abelian categories: <span class="math-container">$$ GL_n(k)\text{- modules} \begin{array}{c} \stackrel{q}{\longleftarrow} \\[-.08in] \stackrel{i}{\longrightarrow} \\[-.08in] \stackrel{p}{\longleftarrow} \end{array} M_n(k)\text{- modules} \begin{array}{c} \stackrel{l}{\longleftarrow} \\[-.06in] \stackrel{e}{\longrightarrow} \\[-.08in] \stackrel{r}{\longleftarrow} \end{array} M_{n-1}(k)\text{- modules.} $$</span> From this, one sees that the irreducible modules for <span class="math-container">$M_n(k)$</span> come in two flavors: <span class="math-container">$GL_n(k)$</span> irreducibles on which singular matrices act trivially, and <span class="math-container">$M_{n-1}(k)$</span> irreducible modules `induced' up to <span class="math-container">$M_n(k)$</span>. Curiously, in the finite field case, at least, these two types are related by tensoring with the determinant representation (perhaps first noticed in a 1980s paper by John Harris and me).</p> <p>In particular, if <span class="math-container">$k$</span> is a finite field of order <span class="math-container">$q$</span>, the irreducible representations of <span class="math-container">$M_n(k)$</span> can be sensibly labelled by <span class="math-container">$q$</span>--regular Young diagrams with at most <span class="math-container">$n$</span> nonzero columns. 
If <span class="math-container">$k$</span> is infinite, one should be getting all Young diagrams.</p> <p>Finally, the category of representations of <span class="math-container">$M_n(k)$</span> is very much related to the abelian category of functors from finite dimensional <span class="math-container">$k$</span>--vector spaces to <span class="math-container">$k$</span>--vector spaces (dubbed the category of `generic representations' of <span class="math-container">$k$</span> in some of my papers), and much interesting calculational work has been figured out in this setting.</p>
381,177
<p>I have a problem in which I have to compute the following integral: <span class="math-container">$$\mathop{\idotsint\limits_{\mathbb{R}^k}}_{\sum_{i=1}^k y_i=x} e^{-N^2r(\sum_{i=1}^k y_i^2-\frac{1}{k}x^2)} dy_1\dots dy_k,$$</span> where this notation means that I want to integrate over <span class="math-container">$\mathbb{R}^k$</span> restricted to the plane where <span class="math-container">$\sum_{i=1}^k y_i=x$</span> (a convolution of Gaussians) and <span class="math-container">$N$</span> and <span class="math-container">$r$</span> are positive real constants. I have tried two different methods for computing this integral, but they are yielding different results. I would appreciate it very much if someone could take a look and tell me what I'm doing wrong.</p> <p><strong>Method 1</strong></p> <p>In method 1 I just wrote it as <span class="math-container">$$\mathop{\idotsint\limits_{\mathbb{R}^k}}_{\sum_{i=1}^{k}y_i=x} e^{-N^2r(\sum_{i=1}^{k}y_i^2-\frac{1}{k}x^2)} dy_1\dots dy_k =\int_{-\infty}^{\infty}\dots\int_{-\infty}^\infty e^{-N^2r((x-y_1)^2+\sum_{i=1}^{k-2}(y_i-y_{i+1})^2+y_{k-1}^2-\frac{1}{k}x^2)} \, dy_1\dots dy_{k-1}=\sqrt{\frac{1} {\pi r^{k-1}k}} \frac{\pi^k}{N^{k-1}}$$</span></p> <p>I deduced this formula by induction, first integrating in <span class="math-container">$y_{k-1}$</span>, then <span class="math-container">$y_{k-2}$</span> and so on.</p> <p><strong>Method 2</strong></p> <p>In method 2 I tried writing the function in a matrix form <span class="math-container">$$\mathop{\idotsint\limits_{\mathbb{R}^k}}_{\sum_{i=1}^{k}y_i=x} e^{-N^2r(\sum_{i=1}^{k}y_i^2-\frac{1}{k}x^2)} dy_1\dots dy_{k}=\mathop{\idotsint\limits_{\mathbb{R}^k}}_{\sum_{i=1}^{k}y_i=x} e^{-N^2r(\vec{y},Q\vec{y})} dy_1\dots dy_{k}$$</span> where <span class="math-container">\begin{equation} Q:=\left(\begin{array}{cccccccc} (1-\frac{1}{k})&amp; -\frac{1}{k} &amp; -\frac{1}{k} &amp; \cdots &amp; -\frac{1}{k} \\ -\frac{1}{k} &amp; (1-\frac{1}{k}) &amp; -\frac{1}{k} &amp; \cdots 
&amp; -\frac{1}{k} \\ \vdots &amp; \ddots &amp; &amp; &amp;\vdots \\ -\frac{1}{k} &amp; \dots &amp; &amp;-\frac{1}{k} &amp;(1-\frac{1}{k}) \end{array}\right). \end{equation}</span></p> <p>This matrix <span class="math-container">$Q$</span> has eigenvalues <span class="math-container">$\lambda_0=0$</span>, <span class="math-container">$\lambda_l=1$</span> and corresponding normalized eigenvectors <span class="math-container">\begin{equation} \vec{\lambda}_l=\frac{1}{\sqrt{k}}\left(\begin{array}{c} 1 \\ e^{\frac{2\pi i}{k}1l} \\ \vdots \\ e^{\frac{2\pi i}{k}(k-1)l} \end{array}\right) \end{equation}</span> for <span class="math-container">$0\le l\le k-1$</span>.</p> <p>As I understand it, the restriction in the integral means that I shouldn't integrate in the <span class="math-container">$\lambda_0$</span> direction, since in this direction I must have all components equal, and the only place where the components are equal and the constraint is satisfied is <span class="math-container">$(\frac{x}{k},\dots,\frac{x}{k})$</span>. So my integration should occur in the orthogonal complement of this vector, which is a hyperplane of dimension <span class="math-container">$k-1$</span>. Everything seems to check out up to this point, so I diagonalized the matrix <span class="math-container">$Q=U\Lambda U^{-1}$</span> and so</p> <p><span class="math-container">$$(\vec{y},Q\vec{y})=(\vec{\xi},\Lambda\vec{\xi})=\sum_{i=1}^{k-1}\xi_i^2.$$</span></p> <p>The change of variables <span class="math-container">$\vec{\xi}=U^{-1}\vec{y}$</span> has a Jacobian <span class="math-container">$\frac{1}{\sqrt{k^{k-1}}}$</span>, since <span class="math-container">$U^{-1}$</span> is the DFT matrix times <span class="math-container">$\frac{1}{\sqrt{k^{k-1}}}$</span> and the DFT matrix is known to be unitary. 
So</p> <p><span class="math-container">$$\mathop{\idotsint\limits_{\mathbb{R}^k}}_{\sum_{i=1}^{k}y_i=x} e^{-N^2r(\vec{y},Q\vec{y})} dy_1\dots dy_{k}=\idotsint\limits_{\mathbb{R}^k} e^{-N^2r\sum_{i=1}^{k-1}\xi_i^2} \frac{1}{\sqrt{k^{k-1}}}d\xi_1\dots d\xi_{k-1}= \sqrt{\frac{\pi^{k-1}}{k^{k-1}r^{k-1}}}\frac{1}{N^{k-1}}.$$</span></p> <p>These two results are different and I cannot figure out why.</p> <p>Thank you all in advance for your help!</p>
Iosif Pinelis
36,721
<p><span class="math-container">$\newcommand\R{\mathbb R}\newcommand\1{\mathbf1}$</span>When you say &quot;I want to integrate over <span class="math-container">$\mathbb{R}^k$</span> restricted to the plane where <span class="math-container">$\sum_{i=1}^{k}y_i=x$</span>&quot;, you have to specify the measure over the plane over which you want to integrate.</p> <p>It appears you want this measure to be induced by the Lebesgue measure on <span class="math-container">$\R^k$</span>. Then the integration can be done as follows. Let <span class="math-container">$$c:=N^2r\in(0,\infty)$$</span> and <span class="math-container">$$t:=x/\sqrt k,$$</span> the (signed) distance from the origin to your plane <span class="math-container">$$\Pi_t:=\{y\in\R^k\colon u\cdot y=t\}=\{y\in\R^k\colon \1\cdot y=x\},$$</span> where <span class="math-container">$\cdot$</span> denotes the dot product, <span class="math-container">$\1:=(1,\dots,1)\in\R^k$</span>, and <span class="math-container">$$u:=\1/\sqrt k$$</span> is a unit normal vector to the plane <span class="math-container">$\Pi_t$</span>. 
Thus, instead of the parameter <span class="math-container">$x$</span>, we use the more geometrical parameter <span class="math-container">$t$</span>.</p> <p>Then the integral in question can be written as <span class="math-container">$$I_t:=e^{ct^2}J_t,\quad\text{where}\quad J_t:=\int_{\Pi_t}\mu_t(dy)e^{-c|y|^2},$$</span> <span class="math-container">$|y|$</span> is the Euclidean norm of <span class="math-container">$y$</span>, and, for each real <span class="math-container">$t$</span>, <span class="math-container">$\mu_t$</span> is the measure over the plane <span class="math-container">$\Pi_t$</span> induced by the Lebesgue measure on <span class="math-container">$\R^k$</span> in the following sense: <span class="math-container">\begin{equation} \int_a^b dt\, \int_{\Pi_t}\mu_t(dy)g(y) =\int_{\Pi_{a,b}}dy\,g(y) \tag{1} \end{equation}</span> for all nonnegative Borel-measurable functions <span class="math-container">$g\colon\R^k\to\R$</span> and all real <span class="math-container">$a$</span> and <span class="math-container">$b$</span> such that <span class="math-container">$a&lt;b$</span>, where <span class="math-container">$$\Pi_{a,b}:=\bigcup_{t\in[a,b]}\Pi_t=\{y\in\R^k\colon a\le u\cdot y\le b\}.$$</span></p> <p>Then for such <span class="math-container">$a$</span> and <span class="math-container">$b$</span> we have <span class="math-container">$$\int_a^b dt\, J_t=\int_a^b dt\, \int_{\Pi_t}\mu_t(dy)e^{-c|y|^2} =K_{a,b}:=\int_{\Pi_{a,b}}dy\,e^{-c|y|^2}.$$</span> (See the remark on this at the end of this answer.)</p> <p>To compute the integral <span class="math-container">$K_{a,b}$</span>, let us use a substitution of the form <span class="math-container">$y=Qz$</span>, where <span class="math-container">$Q$</span> is any orthogonal <span class="math-container">$k\times k$</span> matrix whose first column is the unit vector <span class="math-container">$u$</span>, so that <span class="math-container">$y=Qz$</span> implies <span 
class="math-container">$z_1=u\cdot y$</span>, where <span class="math-container">$z_j$</span> is the <span class="math-container">$j$</span>th coordinate of the vector <span class="math-container">$z$</span>; such an orthogonal matrix <span class="math-container">$Q$</span> exists. Then we can write <span class="math-container">$$K_{a,b}=\int_{\R^k}dy\,e^{-c|y|^2}\,1(a\le u\cdot y\le b) \\ =\int_{\R^k}dz\,e^{-c|z|^2}\,1(a\le z_1\le b) \\ =\int_a^b dz_1\,e^{-cz_1^2}\int_{\R^{k-1}}dw\,e^{-c|w|^2} \\ =\int_a^b dz_1\,e^{-cz_1^2}\,(\pi/c)^{(k-1)/2}. $$</span> So, <span class="math-container">$$J_t=\frac d{dt}\,K_{a,t}=e^{-ct^2}\,(\pi/c)^{(k-1)/2}.$$</span> Thus, the integral in question is <span class="math-container">$$I_t:=e^{ct^2}J_t=(\pi/c)^{(k-1)/2}=\Big(\frac\pi{N^2r}\Big)^{(k-1)/2}.$$</span></p> <p>This differs from both of your answers -- but you never defined the measure over which you integrate.</p> <hr /> <p><strong>Remark:</strong> Intuitively, think of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> as being close to a real number <span class="math-container">$t$</span>, and hence to each other. We approximate the integral <span class="math-container">$\int_{\Pi_t}\mu_t(dy)e^{-c|y|^2}$</span> over the plane <span class="math-container">$\Pi_t$</span> by <span class="math-container">$\frac1{b-a}\,\int_{\Pi_{a,b}}dy\, e^{-c|y|^2}$</span>, that is, by the integral over the thin layer <span class="math-container">$\Pi_{a,b}$</span> between two parallel planes <span class="math-container">$\Pi_a$</span> and <span class="math-container">$\Pi_b$</span> (close to the plane <span class="math-container">$\Pi_t$</span>) divided by the thickness <span class="math-container">$b-a$</span> of the layer.</p> <p>Formally, we are dealing here with <a href="https://en.wikipedia.org/wiki/Disintegration_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">disintegration of a measure</a>. 
That linked theorem deals only with probability measures, but it is trivially extended to finite measures. If we forget about this finiteness condition for a moment, then in that linked theorem we can choose <span class="math-container">$X=\R$</span>, <span class="math-container">$Y=\R^k$</span>, let the map <span class="math-container">$\pi\colon Y\to X$</span> be the projection map defined by <span class="math-container">$\pi(y):=u\cdot y$</span> for all <span class="math-container">$y\in Y=\R^k$</span>, and let <span class="math-container">$\mu:=\lambda_k$</span> and <span class="math-container">$\nu:=\lambda_1$</span>, where <span class="math-container">$\lambda_k$</span> is the Lebesgue measure over <span class="math-container">$\R^k$</span>. I think the finiteness condition in that linked theorem is inessential, and the proof will hold for any Borel measures <span class="math-container">$\mu$</span>, at least if <span class="math-container">$\mu$</span> is <span class="math-container">$\sigma$</span>-finite. Alternatively, one can approximate here the Lebesgue measure over <span class="math-container">$\R^k$</span> by the finite Lebesgue measures over big cubes in <span class="math-container">$\R^k$</span>.</p>
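For a concrete sanity check of the closed form <span class="math-container">$(\pi/c)^{(k-1)/2}$</span>, one can parametrize the plane with an orthonormal basis and integrate numerically. The parameter values below are arbitrary illustrative choices, not taken from the post:

```python
import numpy as np

# Arbitrary illustrative parameters (not from the post); k = 3 makes the plane 2-dimensional
k, N, r, x = 3, 1.3, 0.7, 0.5
c = N**2 * r
t = x / np.sqrt(k)

# Orthonormal frame: first column of Q is (up to sign) the unit normal u,
# the remaining columns V span the direction space of the plane
u = np.ones(k) / np.sqrt(k)
Q, _ = np.linalg.qr(np.column_stack([u, np.eye(k)[:, :k - 1]]))
V = Q[:, 1:]
y0 = (x / k) * np.ones(k)        # point of the plane closest to the origin

# Riemann sum of exp(-c(|y|^2 - t^2)) over y = y0 + V w, w in R^{k-1};
# the parametrization is an isometry, so it realizes the induced measure
s = np.linspace(-8.0, 8.0, 1201)
ds = s[1] - s[0]
W1, W2 = np.meshgrid(s, s, indexing="ij")
pts = y0 + W1[..., None] * V[:, 0] + W2[..., None] * V[:, 1]
I_num = np.sum(np.exp(-c * (np.sum(pts**2, axis=-1) - t**2))) * ds**2

I_formula = (np.pi / c) ** ((k - 1) / 2)
print(I_num, I_formula)          # the two values agree closely
```

The agreement is excellent because the integrand is a Gaussian, for which uniform grids are very accurate.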
2,196,539
<p>I'm having a complete mind blank here even though I'm pretty sure the solution is relatively easy.</p> <p>I need to make X the subject of the following equation:</p> <p>$$AB - AX = X $$</p> <p>All I've done so far is: $$A(B-X) = X$$ $$B-X = A^{-1} X$$</p> <p>Not sure if that's right?</p> <p>Thanks in advance.</p>
Random-generator
427,836
<p>Continuing with your expression, $B-X = A^{-1} X$, $B = (I+A^{-1}) X$, $X = (I+A^{-1})^{-1}B$.</p>
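Equivalently <span class="math-container">$X=(I+A)^{-1}AB$</span>, which avoids inverting <span class="math-container">$A$</span> itself. A quick numerical check (the random matrices are only illustrative, and we assume the relevant inverses exist):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# X = (I + A^{-1})^{-1} B, computed via a linear solve instead of an explicit inverse
I = np.eye(4)
X = np.linalg.solve(I + np.linalg.inv(A), B)

# X should satisfy the original equation AB - AX = X
print(np.allclose(A @ B - A @ X, X))          # True
```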
4,632,865
<p>This was on a mock test for an examination that grants admission to an undergraduate course in mathematics. So, in theory, a school-going <span class="math-container">$17$</span> year old with a bit of extra knowledge should be able to solve it. As such, I'm looking for &quot;elementary&quot; answers.</p> <p>Q. Suppose <span class="math-container">$$I_n=\int_0^2(2x-x^2)^ndx.$$</span> Show that <span class="math-container">$$\lim_{n\to\infty}I_n=0.$$</span></p> <p>I would give my attempt, but I do not know where to begin. Perhaps we set up some sort of recurrence relation?</p>
Lorago
883,088
<p>Take <span class="math-container">$\delta\in(0,1)$</span>, set <span class="math-container">$I_\delta=(1-\delta,1+\delta)$</span>, and set</p> <p><span class="math-container">$$M_\delta:=\max_{x\in[0,2]\setminus I_\delta}\lvert 2x-x^2\rvert.$$</span></p> <p>Notice that then <span class="math-container">$M_\delta&lt;1$</span>. In particular we have that</p> <p><span class="math-container">$$\left\lvert\int_{[0,2]\setminus I_\delta} (2x-x^2)^n~\mathrm{d}x\right\rvert\leq\int_{[0,2]\setminus I_\delta} M_\delta^n~\mathrm{d}x=(2-2\delta)M_\delta^n\to0$$</span></p> <p>as <span class="math-container">$n\to\infty$</span>. Similarly we also have that</p> <p><span class="math-container">$$\left\lvert\int_{I_\delta} (2x-x^2)^n~\mathrm{d}x\right\rvert\leq\int_{I_\delta} 1^n~\mathrm{d}x=2\delta\to0$$</span></p> <p>as <span class="math-container">$\delta\to0$</span>. Now fix <span class="math-container">$\varepsilon&gt;0$</span>. Choose <span class="math-container">$\delta$</span> such that</p> <p><span class="math-container">$$\left\lvert\int_{I_\delta} (2x-x^2)^n~\mathrm{d}x\right\rvert&lt;\frac{\varepsilon}{2}$$</span></p> <p>and choose <span class="math-container">$N$</span> such that</p> <p><span class="math-container">$$\left\lvert\int_{[0,2]\setminus I_\delta} (2x-x^2)^n~\mathrm{d}x\right\rvert\leq\frac{\varepsilon}{2}$$</span></p> <p>for all <span class="math-container">$n\geq N$</span>. Then, for all <span class="math-container">$n\geq N$</span> we have that</p> <p><span class="math-container">$$\lvert I_n\rvert\leq\left\lvert\int_{[0,2]\setminus I_\delta} (2x-x^2)^n~\mathrm{d}x\right\rvert+\left\lvert\int_{I_\delta} (2x-x^2)^n~\mathrm{d}x\right\rvert&lt;\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon,$$</span></p> <p>which proves that</p> <p><span class="math-container">$$\lim_{n\to\infty}I_n=0.$$</span></p>
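A quick numeric illustration of this convergence (grid sizes are arbitrary):

```python
import numpy as np

x = np.linspace(0.0, 2.0, 200001)
dx = x[1] - x[0]
vals = []
for n in (1, 5, 50, 500):
    I_n = np.sum((2*x - x**2)**n) * dx   # Riemann sum for the integral
    vals.append(I_n)
    print(n, I_n)                        # starts at 4/3, then shrinks steadily toward 0
```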
Phobo Havuz
563,245
<p>Here's how to solve this by forming a recurrence relation. First, complete the square and perform a trigonometric substitution <span class="math-container">$x-1=\sin\phi,\text{d}x=\cos\phi\text{ d}\phi$</span>. <span class="math-container">$$\begin{aligned} I_n&amp;=\int_0^2\left(2x-x^2\right)^n\text{ d}x\\ &amp;=\int_0^2\left(1-(x-1)^2\right)^n\text{ d}x\\ &amp;=\int_{-\pi/2}^{\pi/2}\left(1-\sin^2\phi\right)^n\cos\phi\text{ d}\phi\\ &amp;=\int_{-\pi/2}^{\pi/2}\cos^{2n+1}\phi\text{ d}\phi \end{aligned}$$</span> Now, integrate by parts, integrating <span class="math-container">$\cos\phi$</span> and differentiating <span class="math-container">$\cos^{2n}\phi$</span> (whose derivative is <span class="math-container">$-2n\cos^{2n-1}\phi\sin\phi$</span>, so the integral term comes in with a plus sign). <span class="math-container">$$\begin{aligned} I_n&amp;=\int_{-\pi/2}^{\pi/2}\cos^{2n+1}\phi\text{ d}\phi\\ &amp;=\left[\sin\phi\cos^{2n}\phi\right]_{-\pi/2}^{\pi/2}+\int_{-\pi/2}^{\pi/2}2n\cos^{2n-1}\phi\cdot\sin\phi\cdot\sin\phi\text{ d}\phi\\ &amp;=2n\int_{-\pi/2}^{\pi/2}\cos^{2n-1}\phi\cdot(1-\cos^2\phi)\text{ d}\phi\\ &amp;=2n\int_{-\pi/2}^{\pi/2}\cos^{2n-1}\phi\text{ d}\phi-2n\int_{-\pi/2}^{\pi/2}\cos^{2n+1}\phi\text{ d}\phi\\ &amp;=2n\ I_{n-1}-2n\ I_n \end{aligned}$$</span> Solving this for <span class="math-container">$I_n$</span> gives <span class="math-container">$$I_n=\frac{2n}{2n+1}I_{n-1}$$</span> Since each factor is less than <span class="math-container">$1$</span>, the sequence is decreasing. Can you prove that <span class="math-container">$I_n$</span> decreases to <span class="math-container">$0$</span> now?</p>
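The substitution step can be checked numerically before trusting the rest (grid sizes are arbitrary):

```python
import numpy as np

x = np.linspace(0.0, 2.0, 200001)
phi = np.linspace(-np.pi/2, np.pi/2, 200001)
dx, dphi = x[1] - x[0], phi[1] - phi[0]
for n in (1, 2, 5, 10):
    lhs = np.sum((2*x - x**2)**n) * dx              # integral of (2x - x^2)^n over [0, 2]
    rhs = np.sum(np.cos(phi)**(2*n + 1)) * dphi     # integral of cos^(2n+1) over [-pi/2, pi/2]
    print(n, lhs, rhs)                              # the two integrals agree
```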
2,134,928
<p>Let <span class="math-container">$ \ C[0,1] \ $</span> stand for the real vector space of continuous functions <span class="math-container">$ \ [0,1] \to [0,1] \ $</span> on the unit interval with the usual subspace topology from <span class="math-container">$\mathbb{R}$</span>. Let <span class="math-container">$$\lVert f \rVert_1 = \int_0^1 |f(x)| \ dx \qquad \text{ and } \qquad \lVert f \rVert_{\infty} = \max_{x \in [0,1]} |f(x)|$$</span> be the usual norms defined on that space. Let <span class="math-container">$ \ \Delta : C[0,1] \to C[0,1] \ $</span> be the diagonal function, i.e., <span class="math-container">$ \ \Delta f=f \ $</span>, <span class="math-container">$\forall f \in C[0,1]$</span>. Then <span class="math-container">$$ \Delta = \big\{ (f,g) \in C[0,1] \times C[0,1] \ : \ g=f \ \big\} \ . $$</span> My questions are</p> <blockquote> <p><strong>(1)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta \ $</span> a closed set of <span class="math-container">$ \ C[0,1] \times C[0,1] \ $</span>, with respect to the product topology induced by these norms?</p> <p><strong>(2)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_1) \to (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> continuous?</p> <p><strong>(3)</strong> <span class="math-container">$ \ \ $</span> Does <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_1) \to (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> map closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_1) \ $</span> onto closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span>?</p> <p><strong>(4)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_{\infty}) \to (C[0,1], \lVert \cdot \rVert_1) \ $</span> continuous?</p> <p><strong>(5)</strong> <span class="math-container">$ \ \ $</span> Does <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_{\infty}) \to (C[0,1], \lVert \cdot \rVert_1) \ $</span> map closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> onto closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_1) \ $</span>?</p> </blockquote> <p>Now, a question about terminology: what exactly is meant, and what is the difference, when I say that &quot;<span class="math-container">$\Delta \ $</span> is closed&quot;, that &quot;<span class="math-container">$\Delta \ $</span> is a closed map&quot;, or that &quot;<span class="math-container">$\Delta \ $</span> is a closed operator&quot;?</p> <p>Thanks in advance.</p>
Upstart
312,594
<p>Correct answers $= x$, incorrect answers $= 10-x$.</p> <p>$3x - 1\cdot(10-x) = 18$</p> <p>$4x = 28$</p> <p>$x = 7$</p>
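In code (assuming the implied marking scheme: $+3$ per correct answer, $-1$ per incorrect answer, $10$ questions, total score $18$):

```python
# brute-force scan of the 11 possible numbers of correct answers
solutions = [x for x in range(11) if 3*x - (10 - x) == 18]
print(solutions)   # [7]
```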
4,182,343
<p>I am trying to solve the following simple equation:</p> <p><span class="math-container">$$\frac{df}{dx}=\sin{f}$$</span> This is the &quot;kink&quot; solution to the Sine-Gordon equation. To solve this, I do the following substitution:</p> <p><span class="math-container">$$f=\tan^{-1}g$$</span></p> <p>Then we can use <span class="math-container">$\frac{d}{dz}\tan^{-1}z=\frac{1}{1+z^2}$</span> to find:</p> <p><span class="math-container">$$\frac{1}{1+g^2}\frac{dg}{dx}=\sin(\tan^{-1}g)=\frac{g}{\sqrt{1+g^2}}$$</span></p> <p>This gives:</p> <p><span class="math-container">$$\frac{dg}{dx}=g\sqrt{1+g^2}$$</span> which I need to solve.</p> <p>The problem now is that the known solution is:</p> <p><span class="math-container">$$f=\tan^{-1}{e^{x-a}}$$</span> which suggests that <span class="math-container">$\frac{dg}{dx}=g$</span> and therefore what I have written above is not correct. What did I do wrong?</p>
dbrane
521,734
<p>The trick is to use a slightly modified substitution:</p> <p><span class="math-container">$$f=2\tan^{-1}g$$</span> Then the RHS becomes <span class="math-container">$$\sin{(2\tan^{-1}g)}=2\sin{(\tan^{-1}g)}\cos{(\tan^{-1}g)}=\frac{2g}{1+g^2},$$</span> while the LHS is <span class="math-container">$\frac{2}{1+g^2}\frac{dg}{dx}$</span>, so the equation reduces to <span class="math-container">$\frac{dg}{dx}=g$</span>. Hence <span class="math-container">$g=e^{x-a}$</span> and <span class="math-container">$f=2\tan^{-1}{e^{x-a}}$</span>, solving the problem.</p>
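A quick numeric check that <span class="math-container">$f(x)=2\tan^{-1}(e^{x-a})$</span> really satisfies <span class="math-container">$f'=\sin f$</span> (the value of <span class="math-container">$a$</span> is arbitrary):

```python
import numpy as np

a = 0.3
x = np.linspace(-10.0, 10.0, 20001)
f = 2.0 * np.arctan(np.exp(x - a))

df = np.gradient(f, x)                  # centered finite differences for f'
err = np.max(np.abs(df - np.sin(f)))
print(err)                              # tiny: only finite-difference error remains
```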
15,063
<p>Let $f:R \to S$ be an étale morphism of rings. It follows with some work that $f$ is flat. </p> <p>However, faithful flatness is another story. It's not hard to show that faithful + flat is weaker than being faithfully flat. An equivalent condition to being faithfully flat is being surjective on spectra. </p> <p>The question: Is there any further condition we can require on an étale morphism that implies faithful flatness?</p> <p>"Faithfully flat implies faithfully flat" or "surjective on spectra is equivalent to faithfully flat" do not count. The answer should in some way use the fact that the morphism is étale (or at least flat). </p> <p>As you can see by the tag, all rings commutative, unital, etc.</p> <p>Edit: <a href="http://mathworld.wolfram.com/FaithfullyFlatModule.html" rel="nofollow">Why</a> faithfully flat is weaker than faithful + flat.</p> <p>Edit 2: I resent the voting down of this question without accompanying comments as well as the voting up of the glib and unhelpful answer below. It's clear that some of you are in the habit of voting on posts based on the poster rather than the content, and I think that is shameful. There is nothing I can do because none of you has the basic decency to at least leave a comment. I am completely at your mercy. You've won. I hope it's made you very happy.</p> <p>Edit 3: To answer Emerton's comment, I asked here after:</p> <p>a.) Reading this <a href="http://sbseminar.wordpress.com/2009/08/06/algebraic-geometry-without-prime-ideals/#comment-6354" rel="nofollow">post</a> by Jim Borger </p> <p>b.) Asking my commutative algebra professor in an e-mail</p> <p>Which led me to believe (perhaps due to a flawed reading of said sources) that this was a harder question than it turned out to be.</p>
Anton Geraschenko
1
<p>The definition of $f:R\to S$ being faithfully flat that I first saw is that $S\otimes_R-$ is exact and faithful (meaning that $S\otimes_R M=0$ implies $M=0$). I'm not sure exactly what your definition of "faithfully flat" is, but it looks like you're happy with "flat and surjective on spectra." You get flatness for free from étaleness, so I'll show that the extra faithfulness condition implies surjectivity on spectra. </p> <p>Upon tensoring with $S$, $f$ becomes $f\otimes_R id_S:S\cong S\otimes_R R\to S\otimes_R S$, given by $s\mapsto s\otimes 1$. This is injective since it is a section of multiplication $S\otimes_R S\to S$. By flatness of $S$, this shows that $S\otimes_R \ker f=0$, so $\ker f=0$. So I'll identify $R$ with a subring of $S$.</p> <p>Let $\mathfrak p\subseteq R$ be a prime ideal. We wish to show that there is a prime $\mathfrak q\subseteq S$ such that $\mathfrak q \cap R=\mathfrak p$. Let $K$ be the kernel of the morphism $R/\mathfrak p\to S/\mathfrak p S$ of $R$-modules. Upon tensoring with $S$, this morphism becomes injective (as before, it's a section of the multiplication map $S/\mathfrak p S\otimes_R S/\mathfrak p S\to S/\mathfrak p S$), so by flatness of $S$, we have $S\otimes_R K=0$, so $K=0$. This shows that $\mathfrak p S \cap R=\mathfrak p$ (if the intersection were any larger, $K$ would be non-zero). So $\mathfrak p$ generates a proper ideal in the localization $(R\setminus \mathfrak p)^{-1}S$. Let $\mathfrak q\subseteq (R\setminus \mathfrak p)^{-1}S$ be a maximal ideal containing $\mathfrak p$. This corresponds to some prime ideal $\mathfrak q$ (slight abuse of notation to use the same letter) of $S$ which contains $\mathfrak p$ but does not intersect $R\setminus \mathfrak p$, so $\mathfrak q\cap R=\mathfrak p$.</p> <p>See also exercise 16 of Chapter 3 of Atiyah-Macdonald.</p>
2,485,529
<p>The integral is $$\int{\left[\frac{\sin^8(x) - \cos^8(x)}{1 - 2 \sin^2(x)\cos^2(x)}\right]}dx$$</p>
Corey
492,030
<p>Hint: $\sin^8(x)- \cos^8(x)=(\sin^2(x)- \cos^2(x))(1-2\sin^2(x) \cos^2(x))$</p>
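Numerically confirming the hint, and noting that it collapses the integrand to $\sin^2(x) - \cos^2(x) = -\cos(2x)$:

```python
import numpy as np

x = np.linspace(0.0, 2*np.pi, 1001)
s2, c2 = np.sin(x)**2, np.cos(x)**2
denom = 1 - 2*s2*c2                      # never smaller than 1/2, so no division trouble

# the hinted factorization: sin^8 - cos^8 = (sin^2 - cos^2)(1 - 2 sin^2 cos^2)
err_identity = np.max(np.abs((s2**4 - c2**4) - (s2 - c2)*denom))
# hence the integrand equals -cos(2x), with antiderivative -sin(2x)/2
err_integrand = np.max(np.abs((s2**4 - c2**4)/denom + np.cos(2*x)))
print(err_identity, err_integrand)       # both essentially zero
```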
370,212
<p>Let <span class="math-container">$\mathbb{N}$</span> denote the set of positive integers. For <span class="math-container">$\alpha\in \; ]0,1[\;$</span>, let <span class="math-container">$$\mu(n,\alpha) = \min\big\{|\alpha-\frac{b}{n}|: b\in\mathbb{N}\cup\{0\}\big\}.$$</span> (Note that we could have written <span class="math-container">$\inf\{\ldots\}$</span> instead of <span class="math-container">$\min\{\ldots\}$</span>, but it is easy to see that the infimum is always a minimum.)</p> <p>Is there an <span class="math-container">$\alpha\in \; ]0,1[$</span> such that for all <span class="math-container">$n\in\mathbb{N}$</span> we have <span class="math-container">$\mu(n+1,\alpha)&lt;\mu(n,\alpha)$</span>?</p>
Max Alekseyev
7,076
<p>No. If <span class="math-container">$\alpha$</span> is rational, set <span class="math-container">$n$</span> to the denominator of <span class="math-container">$\alpha$</span>. Otherwise set <span class="math-container">$n$</span> to the denominator of the third <a href="https://en.wikipedia.org/wiki/Continued_fraction#Infinite_continued_fractions_and_convergents" rel="nofollow noreferrer">convergent</a> to <span class="math-container">$\alpha$</span>. In both cases, we get <span class="math-container">$\mu(n+1,\alpha)&gt;\mu(n,\alpha)$</span>.</p>
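A small numeric illustration of both cases; the specific choices $\alpha=3/7$ and $\alpha=\sqrt2-1$ (with $5/12$ taken as its third convergent) are just examples:

```python
import math

def mu(n, alpha):
    # distance from alpha to the nearest fraction b/n with b a nonnegative integer
    b = max(round(n * alpha), 0)
    return abs(alpha - b / n)

# rational case: take n = denominator of alpha, so mu(n, alpha) = 0
alpha = 3 / 7
print(mu(7, alpha), mu(8, alpha))      # 0.0, then something positive

# irrational case: alpha = sqrt(2) - 1 = [0; 2, 2, 2, ...]; third convergent 5/12
alpha = math.sqrt(2) - 1
print(mu(12, alpha), mu(13, alpha))    # mu(13, alpha) > mu(12, alpha)
```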
186,638
<p>$f(x)=\max(2x+1,3-4x)$, where $x \in \mathbb{R}$. What is the minimum possible value of $f(x)$?</p> <p>When $2x+1=3-4x$, we have $x=\frac{1}{3}$.</p>
ronno
32,766
<p>At $x = \frac13, f(x) = \frac53$ and this is the minimum value, since for $x&gt;\frac13$, $2x+1&gt;\frac53$ and for $x&lt;\frac13$, $3-4x &gt;\frac53$.</p>
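The same conclusion from a numeric grid search (the grid is arbitrary):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 100001)
f = np.maximum(2*x + 1, 3 - 4*x)
i = np.argmin(f)
print(x[i], f[i])   # close to 1/3 and 5/3
```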
1,215,537
<p>I need to prove that $ \int_0^\infty \left(\frac{\sin x}{x}\right)^2 dx = \frac{\pi}{2}$. I have proved that $\sum_1^\infty \frac {\sin^2(n \delta)}{n^2 \delta}=\frac{\pi-\delta}{2}$ for $0&lt;\delta&lt;\pi$ and I'm supposed to use this identity.</p>
Olivier Bégassat
11,258
<p>Here is an alternate proof. It hinges on a lemma, which isn't too difficult to prove (it requires cutting $\Bbb R_+$ in two halves $[0,A]$ and $[A,+\infty)$ for some meaningful $A$, and using Riemann sums on the segment).</p> <blockquote> <p>Suppose $f:\Bbb R_+\to\Bbb R$ is continuous, integrable, and eventually decreasing. Then for any $\delta&gt;0$, the sum $$S_\delta=\sum_{n=0}^\infty f(n\delta)\cdot\delta$$ is (absolutely) convergent, and has a limit when $\delta\to 0$ with $$\lim_{\delta\to 0}\;S_\delta=\int_{\Bbb R_+}f$$</p> </blockquote> <p>This lemma then easily implies that if $\phi:\Bbb R_+\to \Bbb R$ is continuous, integrable and dominated by some nonnegative, integrable, eventually decreasing function $f$ (in the sense that $|\phi|\leq f$), then, for any $\delta&gt;0$, the series $$\Sigma_\delta=\sum_{n=0}^\infty\phi(n\delta)\cdot\delta$$ is (absolutely) convergent, and has a limit as $\delta\to 0$, with $$\lim_{\delta\to 0}\;\Sigma_\delta=\int_{\Bbb R_+}\phi$$</p> <hr> <p>This gives you the desired result, with $\phi(x)=\frac{\sin(x)^2}{x^2}$ and $f(x)$ defined to be equal to $1$ on $[0,1]$, and equal to $\frac1{x^2}$ on $[1,+\infty)$.</p>
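To see the lemma at work on the function in question, one can compute the sums $\Sigma_\delta$ for $\phi(x)=\sin^2(x)/x^2$ and watch them approach $\int_0^\infty\phi=\pi/2$ (the truncation point $T$ below is an arbitrary numerical choice):

```python
import numpy as np

phi = lambda x: np.sinc(x / np.pi)**2    # sin(x)^2 / x^2, with phi(0) = 1

T = 2.0e4                                 # truncate the infinite sum here
for delta in (0.2, 0.05, 0.02):
    n = np.arange(0, int(T / delta))
    S = np.sum(phi(n * delta)) * delta
    print(delta, S)                       # approaches pi/2 = 1.5707...
```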
2,541,709
<p>For example: calculate the probability of getting exactly $50$ heads and $50$ tails after flipping a fair coin $100$ times. The answer is ${100 \choose 50}\left(\frac 12\right)^{50}\left(\frac 12\right)^{50}$. I understand the first factor $\left(\frac 12\right)^{50}$: it is the probability $\frac 12$ of a head, multiplied by itself for each of the $50$ heads. I know we also need to multiply by the second $\left(\frac 12\right)^{50}$, even though it is the failure of those $50$ heads (or, put differently, the probability of the $50$ tails). My question is: why do we need to multiply by the probability of the failure events? (I do notice that &quot;exactly&quot; always seems to appear in such questions.)</p>
user1101010
184,176
<p>First, note $C_0(\mathbb{R})$ is the closure of $C_c(\mathbb{R})$ in the uniform metric and $C_c(\mathbb{R})$ is dense in $L^1(\mathbb{R})$. Then choose sequences $\{f_n\} \subseteq C_c(\mathbb{R})$ and $\{g_n\} \subseteq C_c(\mathbb{R})$ such that $\| f_n - f\|_u \to 0$ and $\|g_n - g\|_1 \to 0$. Then we should be able to prove $f_n * g_n \in C_c(\mathbb{R})$. By Hölder's inequality, $$\begin{split} |f_n*g_n(x)-f*g(x)|&amp;\le |f_n*g_n(x)-f*g_n(x)|+|f*g_n(x)-f*g(x)| \\ &amp;\le \|f_n - f\|_{\infty} \|g_n\|_1 + \|f\|_{\infty} \|g_n - g\|_1 \to 0 \end{split}$$ (here $\|g_n\|_1$ is bounded since $g_n \to g$ in $L^1$, and $\|f\|_{\infty} &lt; \infty$ since $f \in C_0(\mathbb{R})$; the bound is uniform in $x$, so $f_n*g_n \to f*g$ uniformly). So $f*g \in C_0(\mathbb{R})$.</p>
198,521
<p>I am trying to overlap two <code>Graphics</code> objects <code>g1</code> and <code>g2</code> with <code>Show</code>. However, I found that when the coordinates of each object are defined over quite different ranges, I need to "scale" and "shift" the coordinates of one object to get the desired look. </p> <p>For example,</p> <pre><code>g1 = Graphics[{GrayLevel[0.8], Rectangle[{-2, -2}, {2, 2}]}]; g2 = Graphics[{{GrayLevel[0.5], Rectangle[{0, 0}, {1, 1}]}}]; Show[g1, AspectRatio -&gt; 1, Axes -&gt; True, ImageSize -&gt; 230] Show[g2, AspectRatio -&gt; 1, Axes -&gt; True, ImageSize -&gt; 230] Show[g1, g2, AspectRatio -&gt; 1, Axes -&gt; True, ImageSize -&gt; 230] </code></pre> <p>The overlapped version of <code>g1</code> and <code>g2</code> looks like this <a href="https://i.stack.imgur.com/tVM01.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tVM01.png" alt="enter image description here"></a></p> <p>However, I would like to scale the coordinates of <code>g2</code> to make <code>g2</code> twice large and also shift its coordinates so the center can "roughly" coincide with the center of <code>g1</code>. I say "roughly" because <code>g1</code> and <code>g2</code> may be some graphics not of a regular shape. The desired result will look like <a href="https://i.stack.imgur.com/VO3jM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VO3jM.png" alt="enter image description here"></a></p> <p>So how can I manipulate the coordinates of <code>g2</code> to adjust its relative position and size when <code>Show</code> with <code>g1</code>? Please avoid modifying the definition of <code>g1</code> and <code>g2</code> as they can be any <code>Graphics</code> copy-pasted over.</p>
kglr
125
<p>You can use <a href="https://reference.wolfram.com/language/ref/Scale.html" rel="nofollow noreferrer"><code>Scale</code></a> on <code>First @ g2</code> (which contains the graphics directives and primitives):</p> <pre><code>Show[g1, Graphics @ Scale[First @ g2, 2, {1, 1}], AspectRatio -&gt; 1, Axes -&gt; True, ImageSize -&gt; 230] </code></pre> <p><a href="https://i.stack.imgur.com/adXiT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/adXiT.png" alt="enter image description here"></a></p> <p>Alternatively, you can use a combination of <code>Scale</code> and <a href="https://reference.wolfram.com/language/ref/Translate.html" rel="nofollow noreferrer"><code>Translate</code></a>:</p> <pre><code>Show[g1, Graphics @ Translate[Scale[First @ g2, 2], -{1, 1}/2], AspectRatio -&gt; 1, Axes -&gt; True, ImageSize -&gt; 230] </code></pre> <p>or</p> <pre><code>Show[g1, Graphics@Translate[Scale[First@g2, 2, {0, 0}], -{1, 1}], AspectRatio -&gt; 1, Axes -&gt; True, ImageSize -&gt; 230] </code></pre> <blockquote> <p>same picture</p> </blockquote>
495,622
<p>Well, it may seem trivial, but I cannot find it on Google. Is a constant function continuously differentiable, of all orders?</p> <p>Thank you.</p>
BaronVT
39,526
<p>Let's assume this is a constant function on $\mathbb{R}$ (i.e. $f : \mathbb{R} \to \mathbb{R}$, $f(x) = c$ for some fixed $c$, for all $x \in \mathbb{R}$).</p> <p>Fix any $x_0$. What is</p> <p>$$ \lim_{h\to 0} \frac{f(x_0 + h) - f(x_0)}{h} \ \ ? $$</p> <p>In particular, are there any points where this limit fails to exist? And at the points where the limit does exist, what is the limit equal to?</p> <blockquote class="spoiler"> <p> No, there are no points where the limit does not exist, and at every point, the limit is equal to $0$. To see this, note that $f(x_0 + h) = c$ and $f(x_0) = c$, so the numerator is always $0$.</p> </blockquote> <p>Thus, if we define a function $$ f'(x) = \lim_{h\to 0} \frac{f(x + h) - f(x)}{h}, $$ then $f'$ is defined for all $x$, and it is continuous everywhere, as required.</p>
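In numeric terms: every difference quotient of a constant function is exactly zero, whatever $h$ is (the constant $3.7$ and base point $0.5$ are arbitrary):

```python
f = lambda x: 3.7    # a constant function

quotients = [(f(0.5 + h) - f(0.5)) / h for h in (1e-1, 1e-3, 1e-6)]
print(quotients)     # [0.0, 0.0, 0.0]
```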
2,586,618
<p>I'm trying to study a little Convex Geometry on my own and I have some doubts about the proof of Theorem 1.8.5 of the book <em>Convex Bodies: The Brunn-Minkowski Theory</em>. Before presenting the proof and my doubts, I list below the definitions used in the theorem.</p> <p><span class="math-container">$\textbf{Definitions used in the theorem:}$</span></p> <p>(i) <span class="math-container">$\mathcal{C}^n := \{ A \subset \mathbb{R}^n \ ; \ A \neq \emptyset \ \text{and} \ A \ \text{is compact} \}$</span>.</p> <p>(ii) <span class="math-container">$B^n = \overline{B(0,1)} := \{ x \in \mathbb{R}^n \ ; \ d(x,0) \leq 1 \}$</span>.</p> <p>(iii) The Hausdorff distance of the sets <span class="math-container">$K, L \in \mathcal{C}^n$</span> is defined by</p> <p><span class="math-container">$$\delta (K,L) := \max \left \{ \sup_{x \in K} \inf_{y \in L} |x - y|, \sup_{x \in L} \inf_{y \in K} |x - y| \right \}$$</span></p> <p>or, equivalently, by</p> <p><span class="math-container">$$\delta (K,L) := \min \{ \lambda \geq 0 \ ; \ K \subset L + \lambda B^n, L \subset K + \lambda B^n \} $$</span></p> <blockquote> <p><span class="math-container">$\textbf{Theorem 1.8.5.}$</span> From each bounded sequence in <span class="math-container">$\mathcal{C}^n$</span> one can select a convergent subsequence.</p> <p><span class="math-container">$\textbf{Proof:}$</span></p> <p>Let <span class="math-container">$(K^0_i)_{i \in \mathbb{N}}$</span> be a sequence in <span class="math-container">$\mathcal{C}^n$</span> whose elements are contained in some cube <span class="math-container">$C$</span> of edge length <span class="math-container">$\gamma$</span>. For each <span class="math-container">$m \in \mathbb{N}$</span>, the cube <span class="math-container">$C$</span> can be written as a union of <span class="math-container">$2^{mn}$</span> cubes of edge length <span class="math-container">$2^{-m}\gamma$</span>. 
For <span class="math-container">$K \in \mathcal{C}^n$</span>, let <span class="math-container">$A_m(K)$</span> denote the union of all such cubes that meet <span class="math-container">$K$</span>. Since (for each <span class="math-container">$m$</span>) the number of subcubes is finite, the sequence <span class="math-container">$(K^0_i)_{i \in \mathbb{N}}$</span> has a subsequence <span class="math-container">$(K^1_i)_{i \in \mathbb{N}}$</span> such that <span class="math-container">$A_1(K^1_i) =: T_1$</span> is independent of <span class="math-container">$i$</span>. Similarly, there is a union <span class="math-container">$T_2$</span> of subcubes of edge length <span class="math-container">$2^{-2} \gamma$</span> and a subsequence <span class="math-container">$(K^2_i)_{i \in \mathbb{N}}$</span> of <span class="math-container">$(K^1_i)_{i \in \mathbb{N}}$</span> such that <span class="math-container">$A_2(K^2_i) = T_2$</span>. Continuing in this way, we obtain a sequence <span class="math-container">$(T_m)_{m \in \mathbb{N}}$</span> of unions of subcubes (of edge length <span class="math-container">$2^{-m} \gamma$</span> for given <span class="math-container">$m$</span>) and to each <span class="math-container">$m$</span> a sequence <span class="math-container">$(K^m_i)_{i \in \mathbb{N}}$</span> such that</p> <p><span class="math-container">$$A_m(K^m_i) = T_m \ (1.61)$$</span></p> <p>and</p> <p><span class="math-container">$$(K^m_i)_{i \in \mathbb{N}} \ \text{is a subsequence of} \ (K^k_i)_{i \in \mathbb{N}} \ \text{for} \ k &lt; m. 
(1.62)$$</span></p> <p>By <span class="math-container">$(1.61)$</span> we have <span class="math-container">$K^m_i \subset K^m_j + \lambda B^n$</span> with <span class="math-container">$\lambda = 2^{-m} \sqrt{n}\, \gamma$</span>, hence <span class="math-container">$\delta(K^m_i, K^m_j) \leq 2^{-m} \sqrt{n}\, \gamma$</span> <span class="math-container">$(i,j,m \in \mathbb{N})$</span> and thus, by <span class="math-container">$(1.62)$</span>,</p> <p><span class="math-container">$$\delta(K^m_i,K^k_j) \leq 2^{-m} \sqrt{n}\, \gamma \hspace{1cm} \text{for} \ i,j \in \mathbb{N} \ \text{and} \ k \geq m.$$</span></p> <p>For <span class="math-container">$K_m := K^m_m$</span>, it follows that</p> <p><span class="math-container">$$\delta(K_m, K_k) \leq 2^{-m} \sqrt{n}\, \gamma \hspace{1cm} \text{for} \ k \geq m.$$</span></p> <p>Thus, <span class="math-container">$(K_m)_{m \in \mathbb{N}}$</span> is a Cauchy sequence and hence convergent because <span class="math-container">$(\mathcal{C}^n, \delta)$</span> is complete. This is the subsequence that proves the assertion. 
<span class="math-container">$\square$</span></p> </blockquote> <p>My doubts are:</p> <ol> <li><p>When the author states &quot;Since (for each <span class="math-container">$m$</span>) the number of subcubes is finite the sequence <span class="math-container">$(K^0_i)_{i \in \mathbb{N}}$</span> has a subsequence <span class="math-container">$(K^1_i)_{i \in \mathbb{N}}$</span> such that <span class="math-container">$A_1(K^1_i) =: T_1$</span> is independent of <span class="math-container">$i$</span>&quot;, I didn't understand why the subsequence <span class="math-container">$(K^1_i)_{i \in \mathbb{N}}$</span> exists and why <span class="math-container">$T_1$</span> is independent of <span class="math-container">$i$</span>.</p> </li> <li><p>Why does <span class="math-container">$(1.62)$</span> imply that <span class="math-container">$\delta(K^m_i,K^k_j) \leq 2^{-m} \sqrt{n}\, \gamma \ \text{for} \ i,j \in \mathbb{N} \ \text{and} \ k \geq m$</span>?</p> </li> </ol> <p>Thanks in advance!</p>
Community
-1
<p>You can use Green's formula:</p> <p>$\int_{S}1\,dA=\frac{1}{2}\oint_{\partial S}(x\,dy-y\,dx)$</p> <p>Your vector field is exactly the one appearing on the RHS, so, defining $S:\ \frac{x^2}{4}+\frac{y^2}{9}\leq 1,\ y\geq 0$:</p> <p>$\oint_{L}A=-2\cdot \operatorname{Area}(S)=-2\cdot\frac{1}{2}\pi\cdot2\cdot3=-6\pi$</p> <p>(You get the minus sign since $L$ has the orientation opposite to that of $\partial S$.)</p> <p>In your computation the change of variables was wrong: $dx\,dy=6r\,dr\,d\theta$.</p>
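<p>A hedged numeric check (my addition; the field and orientation are reconstructed from the answer, since the original question is not shown): integrating $x\,dy - y\,dx$ counterclockwise around the boundary of the half-ellipse gives $2\cdot\operatorname{Area}(S)=6\pi$, and reversing the orientation flips the sign to $-6\pi$.</p>

```python
import math

# Approximate ∮ (x dy - y dx) over the boundary of S, the upper half of the
# ellipse x^2/4 + y^2/9 = 1, traversed counterclockwise.
N = 20000
total = 0.0
# Elliptical arc: x = 2 cos t, y = 3 sin t, t from 0 to pi.
for i in range(N):
    t0, t1 = math.pi * i / N, math.pi * (i + 1) / N
    x0, y0 = 2 * math.cos(t0), 3 * math.sin(t0)
    x1, y1 = 2 * math.cos(t1), 3 * math.sin(t1)
    total += 0.5 * (x0 + x1) * (y1 - y0) - 0.5 * (y0 + y1) * (x1 - x0)
# The return segment from (-2, 0) to (2, 0) lies on y = 0 with dy = 0,
# so x dy - y dx vanishes there and contributes nothing.
print(total, 6 * math.pi)  # both ≈ 18.8496; the reversed orientation gives -6π
```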
2,993,551
<p>I'm studying for a first year Discrete Mathematics course, I found this question on a previous paper and am lost on how to solve:</p> <blockquote> <p>Let <span class="math-container">$n$</span> be a fixed arbitrary integer, prove that there are infinitely many integers <span class="math-container">$m$</span> s.t.: <span class="math-container">$m^3 \equiv n^6 \pmod{19}$</span></p> </blockquote> <p>Thank you</p>
user
505,767
<p><strong>HINT</strong></p> <p>Recall that, in general, <span class="math-container">$n$</span> linearly independent vectors in <span class="math-container">$\mathbb{R}^n$</span> form a basis and hence span <span class="math-container">$\mathbb{R}^n$</span>. To check this, we can consider the matrix <span class="math-container">$A$</span> which has the vectors as columns or rows; the vectors are linearly independent if and only if</p> <ul> <li><span class="math-container">$\det(A)\neq 0$</span> (usually effective for small <span class="math-container">$n$</span>), or equivalently</li> <li>the RREF of <span class="math-container">$A$</span> contains <span class="math-container">$n$</span> pivots</li> </ul>
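<p>A short hedged example of the determinant test (my addition; the vectors are made up for illustration):</p>

```python
import numpy as np

# Three vectors in R^3, written as the rows of A; the third row equals the
# sum of the first two, so the vectors are linearly dependent and det(A) = 0.
A = np.array([[1, 0, 2],
              [0, 1, 1],
              [1, 1, 3]], dtype=float)
print(np.linalg.det(A))   # ≈ 0: not a basis, the rows do not span R^3

# Replacing the third row breaks the dependence; det(B) = 1 != 0, so the
# rows of B form a basis of R^3.
B = np.array([[1, 0, 2],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
print(np.linalg.det(B))   # ≈ 1
```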
4,216,740
<p>Let <span class="math-container">$f:\mathbb{R}^k\rightarrow \mathbb{R}$</span> be a vanishing at infinity function, also infinitely differentiable, i.e. <span class="math-container">$f\in C_0^\infty(\mathbb{R}^k,\mathbb{R})$</span>. Is it true that I can always approximate <span class="math-container">$f$</span> with some polynomial <span class="math-container">$p$</span>? Can I use here directly the right version of the Stone-Weierstrass theorem considering the subalgebra of polynomials <span class="math-container">$P:=\lbrace p:\mathbb{R}^k\rightarrow \mathbb{R} \mid p \text{ polynomial} \rbrace$</span>?</p>
Mark Saving
798,694
<p>Yes and no.</p> <p>If you restrict <span class="math-container">$f$</span> to a compact domain <span class="math-container">$C \subseteq \mathbb{R}^k$</span>, then the answer is an unambiguous yes, for the Stone-Weierstrass theorem only requires an algebra of functions which can discriminate between points, and this clearly applies to polynomials in <span class="math-container">$k$</span> variables.</p> <p>If you wish to distinguish between points <span class="math-container">$(x_1, ..., x_k)$</span> and <span class="math-container">$(y_1, ..., y_k)$</span>, just take the polynomial <span class="math-container">$P(z_1, ..., z_k) = \sum\limits_{i = 1}^k (z_i - x_i)^2$</span>.</p> <p>If you do not restrict <span class="math-container">$f$</span> to a compact domain, you can still construct a sequence of polynomials which converges pointwise to <span class="math-container">$f$</span> (but not necessarily uniformly). This is true even if <span class="math-container">$k = 1$</span>; take <span class="math-container">$f(x) = e^x$</span>, for example.</p>
4,216,740
zhw.
228,045
<p>If you're interested in uniform convergence of polynomials on all of <span class="math-container">$\mathbb R^k,$</span> there's not much good news: The only <span class="math-container">$f\in C_0^\infty$</span> that can be uniformly approximated by polynomials is the zero function. That's because if <span class="math-container">$P$</span> is a polynomial, then either <span class="math-container">$P$</span> is a constant or <span class="math-container">$P$</span> is unbounded.</p>
98,361
<p>I have been reading Rudin (Principles of Mathematical Analysis) on my own now for around a month or so. While I was able to complete the first chapter without any difficulty, I am having problems trying to get the second chapter right. I have been able to learn the definitions and work out some problems, but I am still not sure if I understand the material, and it is certainly not internalized. </p> <p>I am wondering whether I should take this shaky structure with me to the next chapters, hoping that the applications there improve my understanding, or stop and complete this chapter really well first? </p> <p>What do you think? </p> <p>As for my background, I am quite close to completing Linear Algebra by Lang (having done a course in Linear Algebra from Strang). I have completed Spivak's Calculus. I come from an engineering background, so I have done multivariable calculus, Fourier analysis, numerical analysis, and basic probability and random variables as required for engineering. One of the professors advised that I may be better off studying Part I of Topology and Modern Analysis by G. F. Simmons, but I find that completing that book itself may take a semester, and I would prefer not to wait that long to start with analysis. </p> <p>Thank You</p> <p><strong>EDIT</strong>: If it makes any difference, I am studying on my own. </p> <p><strong>EDIT</strong>: So, I have accepted the answer by Samuel Reid. I too found the limit point definition as presented by Rudin, and the large set of definitions listed there, somewhat dry and without motivation or examples. This is one of the places in the book that makes it a little difficult for self-study. What worked for me in this case was taking some drill problems from other books and working through them. I would advise anyone to go very slowly over Sections 2.18 to 2.32. There are too many definitions and new concepts in those sections, and missing even one means you cannot move forward. 
To tell the truth, I found Simmons's 50 pages (from Chapter 2, Section 10 to the end of Chapter 3) more useful than the corresponding 4.5 pages in Rudin. </p>
Community
-1
<p>This approach may not suit you, but I definitely found that it helped me. My suggestion while studying Rudin chapter two is to look at Munkres' <em>Topology</em> chapters 2 and 3. What I found was that though general topological spaces are a more abstract setting, they gave me a lot of motivation. For example, if you're having trouble with the bit about open sets relative to the metric space they are embedded in, then you can look for motivation in the subspace topology. </p> <p>The idea is the following: If you are studying material that is at level zero (metric spaces), then build up enough machinery at, say, level ten (topological spaces) to tackle all the level zero problems. The advantage of this is that sometimes - not always though - when you are solving a problem at level ten you can look at specific cases at level zero and understand why the problem is true/false there. If you are working at the lower level, on the other hand, it is more difficult to ascend to the higher level, because you will have to learn new material.</p> <p>As an example, suppose you know the definition of a continuous function in terms of open sets, and you want to show that the zero set of a continuous function is closed. Because the singleton set $\{0\}$ is closed in a metric space, its pre-image must be closed too.</p> <p>The high level approach to this is as follows: If $f,g$ are continuous functions from $X$ to $Y$, topological spaces with $Y$ Hausdorff, then the equaliser </p> <p>$\Delta = \{ x\in X : f(x) = g(x) \}$ is closed in $X$. Once you know how to prove this, whatever you wanted to prove above is just a special case, obtained by setting $g(x) = 0$ and noting that every metric space is Hausdorff.</p> <p>Regarding your difficulties with the material, here are some exercises that may help with your understanding. They require a little more than the material of chapter 2 of Rudin (you just need to know what Cauchy sequences and complete metric spaces are). 
I promise you if you do the below, your understanding of the material will be strengthened.</p> <blockquote> <blockquote> <p>(1) Suppose that $(X,d)$ is a complete metric space and $I_n$ a sequence of non-empty closed sets such that $I_n \supset I_{n+1}$ for all $n \geq 0$ and diam $I_n \rightarrow 0$ as $n \rightarrow \infty$. Then $$\bigcap_{i=0}^\infty I_n$$ consists of exactly one point. The diameter of a subset $A \subset X$ is</p> <p>diam $A :=$ sup $\{d(x,y) : x,y \in A\}.$</p> <p>(2) Using (1) above, prove the following version of the Baire Category theorem: If $\{G_a\}$ is a countable collection of non-empty, dense open sets in a complete metric space $X$, then $$\bigcap_{a} G_a \neq \emptyset.$$</p> <p>(3) Finally using (2) above, prove the following: Let $\{H_a\}$ be a countable collection of nowhere dense sets in a complete metric space $X$. Prove that there is a point of $X$ that is not in any of the $H_a&#39;s$.</p> </blockquote> </blockquote> <p>You can post your answers here, and I will try and help you out with them.</p> <p>Have a good day!</p>
98,361
Yesid Fonseca V.
252,386
<p>I identify with you a lot: I am also an engineer (electronic engineering) and I have a background similar to yours. My motivation for starting to study topology is to understand Hilbert spaces and, someday, functional analysis.</p> <p>Initially I also thought I understood the concepts of Chapter 2, up to Section 2.28; indeed, to get there it was necessary to understand some definitions and theorems of Chapter 1. When I arrived at the concept of "open relative to", I realized that what I thought I understood, I really did not, because otherwise this concept would not have confused me so much. So I went back to the sections that you recommend going very slowly through, and decided to be strictly literal and deduce only what follows logically.</p> <p>Since then it has gone well and I feel comfortable reading the book. It is sometimes difficult to understand Walter Rudin's ideas; however, when I read carefully what he says, everything becomes clear, perhaps not immediately, but that does not matter. I also think this forum has been very useful; it is always good to understand the interpretations of others, and you learn more. I'm finishing Compact Sets (Theorem 2.41), and all this has taken me a month.</p>
2,147,571
<p>Both $A$ and $B$ are random numbers drawn uniformly and independently from the interval $\left [ 0;1 \right ]$. What is the probability that the integer closest to $A/B$ is even?</p> <p>I don't know how to calculate it, so I made an estimate in Excel with a million trials and got $0.214633$. But I need the exact number.</p>
spaceisdarkgreen
397,125
<p>You can use the fact that the distribution of the ratio of two independent uniform $[0,1]$ variables is $$ f_Z(z) = \left\{\begin{array}{ll}1/2&amp; 0 &lt; z &lt; 1 \\\frac{1}{2z^2} &amp; z &gt;1 \end{array}\right.$$</p> <p>Then you can calculate the probability that the closest integer is $i$: $$\int_{i-1/2}^{i+1/2} f_Z(z)\,dz. $$</p> <p>For $i\ge2,$ we get $$ \int_{i-1/2}^{i+1/2} \frac{1}{2z^2}dz = \frac{1}{2i-1}-\frac{1}{2i+1}.$$</p> <p>The probability that $0$ is the closest integer is $$ \int_0^{1/2}f_Z(z)\,dz = 1/4.$$</p> <p>So the total probability that an even number is closest is $$ 1/4 + \sum_{j = 1}^\infty \left(\frac{1}{4j-1} - \frac{1}{4j+1}\right) = 1/4 + 1-\pi/4.$$</p> <p>The numerical answer you gave is close to $1-\pi/4$, so I guess you weren't counting zero among the even integers.</p>
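<p>A hedged Monte Carlo cross-check of both numbers (my addition, not part of the original answer): with zero counted as even the probability is $1/4 + 1 - \pi/4 \approx 0.4646$, and without zero it is $1 - \pi/4 \approx 0.2146$, matching the OP's estimate.</p>

```python
import math
import random

random.seed(1)
n = 200000
even_with_zero = even_without_zero = 0
for _ in range(n):
    k = round(random.random() / random.random())  # nearest integer to A/B
    if k % 2 == 0:
        even_with_zero += 1
        if k != 0:
            even_without_zero += 1
print(even_with_zero / n, 1 / 4 + 1 - math.pi / 4)   # both ≈ 0.4646
print(even_without_zero / n, 1 - math.pi / 4)        # both ≈ 0.2146
```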
379,893
<p>Define the sequence <span class="math-container">$b_1=1$</span> and <span class="math-container">$$b_n=\sum_{k=1}^{n-1}\binom{n-1}k\binom{n-1}{k-1}b_kb_{n-k}.$$</span></p> <p>By now, there is enough in the literature that <span class="math-container">$C_n$</span> is odd iff <span class="math-container">$n=2^k-1$</span> for some <span class="math-container">$k$</span> where <span class="math-container">$C_n$</span> are the Catalan numbers: <span class="math-container">$C_0=1$</span> and <span class="math-container">$$C_{n+1}=\sum_{k=0}^nC_kC_{n-k}.$$</span></p> <p>In the same spirit, I ask:</p> <blockquote> <p><strong>QUESTION.</strong> is it true that <span class="math-container">$b_n$</span> is odd iff <span class="math-container">$n=2^k$</span> for some <span class="math-container">$k$</span>?</p> </blockquote>
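<p>The recurrence is easy to tabulate, so here is a hedged numeric check of the conjecture for small $n$ (my addition; it is evidence only, not a proof):</p>

```python
from math import comb

# b_1 = 1,  b_n = sum_{k=1}^{n-1} C(n-1, k) C(n-1, k-1) b_k b_{n-k}
b = {1: 1}
for n in range(2, 34):
    b[n] = sum(comb(n - 1, k) * comb(n - 1, k - 1) * b[k] * b[n - k]
               for k in range(1, n))
odd = [n for n in sorted(b) if b[n] % 2 == 1]
print(odd)  # the conjecture predicts exactly the powers of two in this range
```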
Giorgio Mossa
14,969
<blockquote> <p>Is there a way to recapture the additional understanding imparted by multicategories using higher categories?</p> </blockquote> <p>Well, I would say so. Multicategories are basically categories whose morphisms have multiple sources instead of only one. Bicategories generalize categories by adding 2-morphisms, i.e. relations among morphisms; this is completely different from adding multiple sources to the domains of the morphisms.</p> <blockquote> <p>Is there a way to recapture the additional understanding imparted by multicategories using higher categories?</p> </blockquote> <p>None that I'm aware of, but that should be expected, since they provide different additions to categories (multiple sources vs morphisms among morphisms).</p> <blockquote> <p>If not, then I would ask if a theory of higher multicategories exists and if the additional work of learning it over higher category theory is worth the understanding payoff.</p> </blockquote> <p>Again, not that I'm aware of.</p> <blockquote> <p>For those familiar with it, does the theory of augmented virtual double categories have any significant ‘big picture understanding’ advantages over the theory of bicategories? What about compared to higher categories?</p> </blockquote> <p>I'm not really familiar with that, but so far I haven't seen much work on the subject. Probably time will tell.</p> <p>Hope this helps.</p>
995,489
<p>This is taken from Trefethen and Bau, 13.3.</p> <p>Why is there a difference in accuracy between evaluating near 2 the expression $(x-2)^9$ and this expression:</p> <p>$$x^9 - 18x^8 + 144x^7 -672x^6 + 2016x^5 - 4032x^4 + 5376x^3 - 4608x^2 + 2304x - 512 $$</p> <p>Where exactly is the problem?</p> <p>Thanks.</p>
Adam
44,654
<p>I have tried to plot these two expressions near x = 2.0; look at the image. The behaviour (shape) is very different. </p> <p><img src="https://i.stack.imgur.com/ohKfY.png" alt="enter image description here"></p> <p>Here is the Maxima CAS code:</p> <pre><code>draw2d( file_name = "d", color = blue, key = "second function", explicit(x^9 - 18*x^8 + 144*x^7 -672*x^6 + 2016*x^5 - 4032*x^4 + 5376*x^3 - 4608*x^2 + 2304*x - 512 ,x, 2-0.0000005,2.0000005), color = red, key = " y = (x-2)^9", explicit((x-2)^9, x, 2-0.0000005,2.0000005), terminal = 'svg) $ </code></pre> <p>Note that the two expressions are mathematically identical:</p> <pre><code>expand((x-2)^9) </code></pre> <p>So the problem is in computer arithmetic, not in the mathematics (informally speaking). More: <a href="https://archive.org/details/jresv71Bn1p11" rel="nofollow noreferrer">stable evaluation of polynomials</a>. See also this <a href="http://comments.gmane.org/gmane.comp.mathematics.maxima.general/47209" rel="nofollow noreferrer">discussion</a>.</p>
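<p>The same experiment can be reproduced without Maxima; here is a hedged Python sketch (my addition) of the cancellation at work:</p>

```python
# Evaluating the two mathematically identical expressions in double precision.
def expanded(x):
    return (x**9 - 18*x**8 + 144*x**7 - 672*x**6 + 2016*x**5
            - 4032*x**4 + 5376*x**3 - 4608*x**2 + 2304*x - 512)

def factored(x):
    return (x - 2)**9

x = 2.0000005
print(factored(x))   # ≈ 2e-57, essentially the true value
print(expanded(x))   # many orders of magnitude larger: pure cancellation
                     # noise, since the intermediate terms are of size ~1e4
                     # and each carries a rounding error of roughly 1e-12
```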
3,020,988
<p>Here's my attempt at an integral I found on this site. <span class="math-container">$$\int_0^{2\pi}e^{\cos2x}\cos(\sin2x)\ \mathrm{d}x=2\pi$$</span> <strong>I'm not asking for a proof, I just want to know where I messed up</strong></p> <p>Recall that, for all <span class="math-container">$x$</span>, <span class="math-container">$$e^x=\sum_{n\geq0}\frac{x^n}{n!}$$</span> And <span class="math-container">$$\cos x=\sum_{n\geq0}(-1)^n\frac{x^{2n}}{(2n)!}$$</span> Hence we have that <span class="math-container">$$ \begin{align} \int_0^{2\pi}e^{\cos2x}\cos(\sin2x)\ \mathrm{d}x=&amp;\int_0^{2\pi}\bigg(\sum_{n\geq0}\frac{\cos^n2x}{n!}\bigg)\bigg(\sum_{m\geq0}(-1)^m\frac{\sin^{2m}2x}{(2m)!}\bigg)\mathrm{d}x\\ =&amp;\sum_{n,m\geq0}\frac{(-1)^m}{n!(2m)!}\int_0^{2\pi}\cos(2x)^n\sin(2x)^{2m}\mathrm{d}x\\ =&amp;\frac12\sum_{n,m\geq0}\frac{(-1)^m}{n!(2m)!}\int_0^{4\pi}\cos(t)^n\sin(t)^{2m}\mathrm{d}t\\ \end{align} $$</span> The final integral is related to the incomplete beta function, defined as <span class="math-container">$$B(x;a,b)=\int_0^x u^{a-1}(1-u)^{b-1}\mathrm{d}u$$</span> If we define <span class="math-container">$$I(x;a,b)=\int_0^x\sin(t)^a\cos(t)^b\mathrm{d}t$$</span> We can make the substitution <span class="math-container">$\sin^2t=u$</span>, which gives <span class="math-container">$$ \begin{align} I(x;a,b)=&amp;\frac12\int_0^{\sin^2x}u^{a/2}(1-u)^{b/2}u^{-1/2}(1-u)^{-1/2}\mathrm{d}u\\ =&amp;\frac12\int_0^{\sin^2x}u^{\frac{a-1}2}(1-u)^{\frac{b-1}2}\mathrm{d}u\\ =&amp;\frac12\int_0^{\sin^2x}u^{\frac{a+1}2-1}(1-u)^{\frac{b+1}2-1}\mathrm{d}u\\ =&amp;\frac12B\bigg(\sin^2x;\frac{a+1}2,\frac{b+1}2\bigg)\\ \end{align} $$</span> Hence we have a form of our final integral: <span class="math-container">$$ \begin{align} I(4\pi;2m,n)=&amp;\frac12B\bigg(\sin^24\pi;\frac{2m+1}2,\frac{n+1}2\bigg)\\ =&amp;\frac12B\bigg(0;\frac{2m+1}2,\frac{n+1}2\bigg)\\ =&amp;\frac12\int_0^0t^{\frac{2m-1}2}(1-t)^{\frac{n-1}2}\mathrm{d}t\\ =&amp;\,0 \end{align} $$</span> Which implies that 
<span class="math-container">$$\int_0^{2\pi}e^{\cos2x}\cos(\sin2x)\ \mathrm{d}x=0$$</span> Which is totally wrong. But as far as I can tell, I haven't broken any rules. Where's my error, and how do I fix it? Thanks.</p>
Seewoo Lee
350,772
<p>To solve the integral, you may consider <span class="math-container">$$\int_{C} \frac{e^z}{z}dz$$</span> where <span class="math-container">$C$</span> is the unit circle, and take its real part.</p>
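<p>A hedged numeric sanity check of the claimed value $2\pi$ (my addition; the midpoint rule converges very fast here because the integrand is smooth and periodic):</p>

```python
import math

# Midpoint-rule approximation of ∫_0^{2π} e^{cos 2x} cos(sin 2x) dx.
N = 20000
total = 0.0
for i in range(N):
    x = 2 * math.pi * (i + 0.5) / N
    total += math.exp(math.cos(2 * x)) * math.cos(math.sin(2 * x))
total *= 2 * math.pi / N
print(total, 2 * math.pi)   # both ≈ 6.283185
```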
3,020,988
Henry Lee
541,220
<p>You could try this: <span class="math-container">$$I=\int_0^{2\pi}e^{\cos(2x)}\cos\left[\sin(2x)\right]dx$$</span> <span class="math-container">$$=\Re\left(\int_0^{2\pi}e^{\cos(2x)}\cos\left[\sin(2x)\right]dx+i\int_0^{2\pi}e^{\cos(2x)}\sin\left[\sin(2x)\right]dx\right)$$</span> <span class="math-container">$$=\Re\left(\int_0^{2\pi}e^{\cos(2x)}e^{i\sin(2x)}dx\right)$$</span> <span class="math-container">$$=\Re\left(\int_0^{2\pi}e^{\cos(2x)+i\sin(2x)}dx\right)$$</span> <span class="math-container">$$=\Re\left(\int_0^{2\pi}e^{e^{2ix}}dx\right)$$</span> and since: <span class="math-container">$$e^y=\sum_{n=0}^\infty\frac{y^n}{n!}$$</span> we can say that: <span class="math-container">$$e^{e^{2ix}}=\sum_{n=0}^\infty\frac{e^{2nix}}{n!}$$</span> and so: <span class="math-container">$$I=\Re\int_0^{2\pi}\sum_{n=0}^\infty\frac{e^{2nix}}{n!}dx$$</span> The <span class="math-container">$n=0$</span> term must be split off before integrating, since the antiderivative <span class="math-container">$\frac{e^{2nix}}{2ni}$</span> is only valid for <span class="math-container">$n\neq0$</span>: <span class="math-container">$$I=\Re\left(\int_0^{2\pi}dx+\sum_{n=1}^\infty\left[\frac{e^{2nix}}{2ni\cdot n!}\right]_0^{2\pi}\right)=2\pi+\Re\sum_{n=1}^\infty\frac{e^{4\pi ni}-1}{2ni\cdot n!}$$</span> and since <span class="math-container">$e^{4\pi ni}=1$</span> for every integer <span class="math-container">$n$</span>, each remaining term vanishes, giving <span class="math-container">$I=2\pi$</span>.</p>
956,110
<p>I am struggling with thinking about this. Any help would be great!!</p> <p>A medical research survey categorizes adults as follows:</p> <ul> <li>by gender (male or female)</li> <li>by age group (age groups are 18-25, 26-35, 36-50, 51+)</li> <li>by income (less than 30k/year, 30k-60k/year, more than 60k/year)</li> <li>for women only: by whether they have been pregnant (yes/no)</li> <li>for men only: by frequency of undergoing prostate exams (frequently, rarely, never).</li> </ul> <p>What minimum size of a set of adults will guarantee that there are two people in it with matching characteristics in all categories? You do not need to explain your answer.</p>
mar10
177,164
<p>Based on this</p> <p><img src="https://i.stack.imgur.com/MQ1xa.png" alt="PowerPoint Lecture Video for Discrete Math"></p> <p>I believe you would have the following choices:</p> <ul> <li><p>(5)</p> <p>Woman : Yes</p> <p>Woman : No</p> <p>Male : Frequently</p> <p>Male : Rarely</p> <p>Male : Never</p></li> <li><p>(4) by age group (age groups are 18-25, 26-35, 36-50, 51+)</p></li> <li>(3) by income (less than 30k/year, 30k-60k/year, more than 60k/year)</li> </ul> <p>$$5\cdot4\cdot3=60$$</p> <p>By the product rule, there are 60 ways of answering this survey. Therefore, if $$60+1=61$$ people are surveyed, at least two will have matching characteristics in all categories.</p>
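<p>A hedged enumeration of the same count (my addition; the category labels are paraphrased from the question):</p>

```python
from itertools import product

ages = ['18-25', '26-35', '36-50', '51+']
incomes = ['<30k', '30k-60k', '>60k']
# Gender-specific categories: 2 options for women, 3 for men -> 5 in total.
gender_specific = [('F', 'pregnant: yes'), ('F', 'pregnant: no'),
                   ('M', 'exams: frequently'), ('M', 'exams: rarely'),
                   ('M', 'exams: never')]
profiles = list(product(gender_specific, ages, incomes))
print(len(profiles))   # 60 pigeonholes, so 61 adults guarantee a match
```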
322,134
<p>$$2e^{-x}+e^{5x}$$</p> <p>Here is what I have tried: $$2e^{-x}+e^{5x}$$ $$\frac{2}{e^x}+e^{5x}$$ $$\left(\frac{2}{e^x}\right)'+(e^{5x})'$$</p> <p>$$\left(\frac{2}{e^x}\right)' = \frac{-2e^x}{e^{2x}}$$ $$(e^{5x})'=5xe^{5x}$$</p> <p>So the answer I got was $$\frac{-2e^x}{e^{2x}}+5xe^{5x}$$</p> <p>I checked my answer online and it said that it was incorrect but I am sure I have done the steps correctly. Did I approach this problem correctly?</p>
Zev Chonoles
264
<p><strong>Hint:</strong> $$\frac{d}{dx}(e^{ax})=ae^{ax}$$ (this works for negative values of $a$ too, so no need to make your life more complicated with the quotient rule)</p>
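<p>A hedged numeric check of the hint (my addition): the central difference quotient of $f(x)=2e^{-x}+e^{5x}$ agrees with $-2e^{-x}+5e^{5x}$, not with an expression containing a stray factor of $x$.</p>

```python
import math

def f(x):
    return 2 * math.exp(-x) + math.exp(5 * x)

def fprime(x):
    # the derivative the hint gives, using d/dx e^{ax} = a e^{ax}
    return -2 * math.exp(-x) + 5 * math.exp(5 * x)

x0, h = 0.3, 1e-6
approx = (f(x0 + h) - f(x0 - h)) / (2 * h)
print(approx, fprime(x0))   # agree to roughly 8 significant digits
```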
3,218,525
<p>Let <span class="math-container">$f:[0,1] \to [0, \infty)$</span> be a non-negative continuous function such that <span class="math-container">$f(0)=0$</span> and for all <span class="math-container">$x \in [0,1]$</span> we have <span class="math-container">$$f(x) \leq \int_{0}^{x} f(y)^2 dy.$$</span><br> Now consider the set <span class="math-container">$$A=\{x∈[0,1] : \text{for all } y∈[0,x] \text{ we have }f(y)≤1/2\}.$$</span> Prove that <span class="math-container">$A=[0,1]$</span>.<br> Since <span class="math-container">$f$</span> is bounded on <span class="math-container">$[0,1]$</span>, I think <span class="math-container">$f$</span> must be identically <span class="math-container">$0$</span>, but I am not able to prove this. Please help me to solve it.</p>
ajotatxe
132,456
<p>Just a comment that is too long to post as a comment.</p> <p><span class="math-container">$$f(x)\le\int_0^xf(y)^2dy\le x\sup_{0\le t \le 1}f(t)^2$$</span></p> <p>Since the supremum of <span class="math-container">$f$</span> is reached at some point <span class="math-container">$x=x_M$</span> (Weierstrass' theorem), <span class="math-container">$$M=f(x_M)\le x_Mf(x_M)^2.$$</span> If <span class="math-container">$M=f(x_M)=0$</span>, the statement is obvious. Otherwise, dividing by <span class="math-container">$M$</span> gives <span class="math-container">$x_M M\ge 1$</span>, and since <span class="math-container">$x_M\le 1$</span>, <span class="math-container">$$M\ge Mx_M\ge 1.$$</span></p> <p>The other answer shows that <span class="math-container">$M\le\frac12$</span>. Thus, as you guessed, <span class="math-container">$f\equiv 0$</span>.</p>
4,019,561
<p>Let <span class="math-container">$M_{n\times n}$</span> be the set of invertible matrices with real entries. Find two matrices <span class="math-container">$A,B\in M_{n \times n}$</span> with the property that there does not exist a continuous function</p> <p><span class="math-container">$$f:[0,1]\to M, \quad f(0)=A, f(1)=B $$</span></p> <p>The only way I was thinking of was the inverse function, such as <span class="math-container">$f^{-1}(A)=0, \quad f^{-1}(B)=1,$</span> but this doesn't seem to get me anywhere.</p>
Tommaso Rossi
678,717
<p>Let <span class="math-container">$A$</span> be any matrix with positive determinant and <span class="math-container">$B$</span> any matrix with negative determinant. If there were such an <span class="math-container">$f$</span>, then <span class="math-container">$\det(f(t))$</span> would be a continuous function of <span class="math-container">$t\in[0,1]$</span> (it is a polynomial in the entries of <span class="math-container">$f(t)$</span>), with <span class="math-container">$\det(f(0))=\det(A)&gt;0$</span> and <span class="math-container">$\det(f(1))=\det(B)&lt;0$</span>. Then, by the intermediate value theorem, there must be a <span class="math-container">$t\in[0,1]$</span> such that <span class="math-container">$\det(f(t))=0$</span>, giving a contradiction.</p>
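<p>A numerical illustration (not a proof: the intermediate value theorem rules out <em>every</em> continuous path, not just this one): along the straight-line path between one concrete choice of <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, the determinant changes sign and vanishes somewhere.</p>

```python
import numpy as np

# A has det +1, B has det -1; the straight-line path (1-t)A + tB has
# determinant 1 - 2t, which vanishes at t = 1/2 -- the path leaves the
# invertible matrices.
A = np.diag([1.0, 1.0])
B = np.diag([-1.0, 1.0])

ts = np.linspace(0.0, 1.0, 101)
dets = [np.linalg.det((1 - t)*A + t*B) for t in ts]
print(dets[0], dets[-1], min(abs(d) for d in dets))
```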
142,993
<p>I'm challenging myself to figure out the mathematical expression of the number of possible combinations for certain parameters, and frankly I have no idea how.</p> <p>The rules are these:</p> <p>Take numbers 1...n. Given m places, and with <em>no repeated digits</em>, how many combinations of those numbers can be made?</p> <p>AKA</p> <ul> <li>1 for n=1, m=1 --> 1</li> <li>2 for n=2, m=1 --> 1, 2</li> <li>2 for n=2, m=2 --> 12, 21</li> <li>3 for n=3, m=1 --> 1,2,3</li> <li>6 for n=3, m=2 --> 12,13,21,23,31,32</li> <li>6 for n=3, m=3 --> 123,132,213,231,312,321</li> </ul> <p>I cannot find a way to express the left hand value. Can you guide me in the steps to figuring this out?</p>
Wonder
27,958
<p>The first one can be placed in $8^2$ ways; given it, the second one can be placed in $7^2$ ways; given the first two, the third one can be placed in $6^2$ ways. Divide by $3!$ to account for the fact that the points could have been picked in any order. So the total number of ways is $\frac{8^2\cdot 7^2\cdot 6^2}{3!} = 18816$.</p> <p>EDIT: Sorry, I did not realize it was a chessboard. The way to do it here is to notice that once you choose the parity of the column, you also choose the parity of the row of a white square. So a square with odd row parity will never clash with a square with even row parity. Now, either all three squares have the same row parity, or one is different from the other two. In both cases we basically have disjoint instances of smaller problems like the one I solved above. In the first case, the number of ways is $\frac{4^2\cdot 3^2\cdot 2^2}{3!} = 96$. In the second case, the number of ways is $\frac{4^2\cdot 3^2}{2!} \cdot \frac{4^2}{1!}= 72 \cdot 16 = 1152$. So the total is $1152+96=1248$. Now just notice that both of these cases can happen in two ways, one for each row parity; e.g. in the first case it can be all odd row parities or all even row parities. So we need to double this to get $2 \cdot 1248 = 2496$.</p>
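<p>Since the original problem statement is not shown above, here is a brute-force check of the final count under the reading that three squares of a single colour are chosen on an 8×8 board with no two in the same row or column:</p>

```python
from itertools import combinations

# All squares of one colour (by symmetry the other colour gives the same count).
white = [(r, c) for r in range(8) for c in range(8) if (r + c) % 2 == 0]

count = sum(
    1
    for trio in combinations(white, 3)
    if len({r for r, _ in trio}) == 3 and len({c for _, c in trio}) == 3
)
print(count)  # 2496
```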
160,801
<p>Here is a vector </p> <p>$$\begin{pmatrix}i\\7i\\-2\end{pmatrix}$$</p> <p>Here is a matrix</p> <p>$$\begin{pmatrix}2&amp; i&amp;0\\-i&amp;1&amp;1\\0 &amp;1&amp;0\end{pmatrix}$$</p> <p>Is there a simple way to determine whether the vector is an eigenvector of this matrix?</p> <p>Here is some code for your convenience.</p> <pre><code>h = {{2, I, 0 }, {-I, 1, 1}, {0, 1, 0}}; y = {I, 7 I, -2}; </code></pre>
Michael E2
4,999
<p>For problems with exact coordinates, one could code up the definition of eigenvector. The function <code>eigV</code> finds the eigenvalue for a given vector in the form <code>L == value</code> or returns <code>False</code> if there is none; the function <code>eigQ</code> returns <code>True</code> if there exists an eigenvalue for the given vector.</p> <pre><code>ClearAll[eigQ, eigV];
eigV[m_, v_] := Reduce@Thread[(m - SparseArray[{i_, i_} :&gt; L, Dimensions[m]]).v == 0];
eigV[m_][v_] := eigV[m, v]; (* operator form *)
eigQ[m_, v_] := Resolve@Exists[L, eigV[m, v]];
eigQ[m_][v_] := eigQ[m, v]; (* operator form *)
</code></pre> <p>Examples:</p> <pre><code>eigQ[h] /@ {y, {-I (-2 + Sqrt[3]), 1 - Sqrt[3], 1}}
(* {False, True} *)

eigV[h] /@ {y, {-I (-2 + Sqrt[3]), 1 - Sqrt[3], 1}}
(* {False, L == 1 - Sqrt[3]} *)
</code></pre> <p>Or simply</p> <pre><code>eigQ[h, y]
(* False *)
</code></pre> <p>For approximate problems, one would have to account for rounding error.</p>
160,801
<p>Here is a vector </p> <p>$$\begin{pmatrix}i\\7i\\-2\end{pmatrix}$$</p> <p>Here is a matrix</p> <p>$$\begin{pmatrix}2&amp; i&amp;0\\-i&amp;1&amp;1\\0 &amp;1&amp;0\end{pmatrix}$$</p> <p>Is there a simple way to determine whether the vector is an eigenvector of this matrix?</p> <p>Here is some code for your convenience.</p> <pre><code>h = {{2, I, 0 }, {-I, 1, 1}, {0, 1, 0}}; y = {I, 7 I, -2}; </code></pre>
Αλέξανδρος Ζεγγ
12,924
<p>How about this:</p> <pre><code>eigenVectorQ[mat_, vec_] := Abs[Dot[#1\[Conjugate], #2]] == Norm[#1] Norm[#2] &amp;[mat.vec, vec] </code></pre> <p>Then <code>eigenVectorQ[h, y]</code> returns <code>False</code>.</p>
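<p>The same Cauchy–Schwarz equality criterion can be transcribed outside Mathematica as a cross-check; here is a Python sketch with NumPy (for exact symbolic entries the <code>Reduce</code>-based approach is still the safer route):</p>

```python
import numpy as np

# v is an eigenvector of m exactly when m.v is parallel to v, i.e. when
# equality holds in Cauchy-Schwarz: |<m v, v>| == ||m v|| * ||v||.
h = np.array([[2, 1j, 0], [-1j, 1, 1], [0, 1, 0]])
y = np.array([1j, 7j, -2])

hy = h @ y
is_eigenvector = np.isclose(abs(np.vdot(hy, y)),
                            np.linalg.norm(hy) * np.linalg.norm(y))
print(is_eigenvector)  # False
```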
1,885,751
<p>An urn contains 15 balls (5 white, 10 black). Let's say we pick them one after the other without returning them. How many white balls are expected to have been drawn after 7 turns?</p> <p>I can calculate it by hand with a tree model, but is there a formula for this?</p>
barak manos
131,263
<p>Split it into disjoint events, and then add up their probabilities:</p> <ul> <li>The probability of getting exactly $\color\red2$ ones is $\binom{5}{\color\red2}\cdot\left(\frac16\right)^{\color\red2}\cdot\left(1-\frac16\right)^{5-\color\red2}$</li> <li>The probability of getting exactly $\color\red3$ ones is $\binom{5}{\color\red3}\cdot\left(\frac16\right)^{\color\red3}\cdot\left(1-\frac16\right)^{5-\color\red3}$</li> <li>The probability of getting exactly $\color\red4$ ones is $\binom{5}{\color\red4}\cdot\left(\frac16\right)^{\color\red4}\cdot\left(1-\frac16\right)^{5-\color\red4}$</li> <li>The probability of getting exactly $\color\red5$ ones is $\binom{5}{\color\red5}\cdot\left(\frac16\right)^{\color\red5}\cdot\left(1-\frac16\right)^{5-\color\red5}$</li> </ul> <hr> <p>Hence the probability of getting at least $\color\green2$ ones on $\color\purple5$ dice is:</p> <p>$$\sum\limits_{n=\color\green2}^{\color\purple5}\binom{\color\purple5}{n}\cdot\left(\frac16\right)^{n}\cdot\left(1-\frac16\right)^{\color\purple5-n}$$</p>
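<p>A quick check of the four disjoint events with exact rational arithmetic, plus a cross-check against the complement $1-P(0\text{ ones})-P(1\text{ one})$:</p>

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 6)

# Sum of the four disjoint events listed above (exactly 2, 3, 4, 5 ones).
at_least_two = sum(comb(5, n) * p**n * (1 - p)**(5 - n) for n in range(2, 6))

# Complement check: 1 - P(exactly 0 ones) - P(exactly 1 one).
complement = 1 - sum(comb(5, n) * p**n * (1 - p)**(5 - n) for n in range(0, 2))

print(at_least_two, at_least_two == complement)  # 763/3888 True
```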
1,564,729
<p>Find: $$ L = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{x^2} $$</p> <p>My approach:</p> <p>Because of the fact that the above limit is evaluated as $\frac{0}{0}$, we might want to try the De L' Hospital rule, but that would lead to a more complex limit which is also of the form $\frac{0}{0}$. </p> <p>What I tried is: $$ L = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{1-\frac{\sin(x)}{x}}\frac{1}{x^2}\left(1-\frac{\sin(x)}{x}\right) $$ Then, if the limits $$ L_1 = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{1-\frac{\sin(x)}{x}}, $$</p> <p>$$ L_2 = \lim_{x\to0}\frac{1}{x^2}\left(1-\frac{\sin(x)}{x}\right) $$ exist, then $L=L_1L_2$.</p> <p>For the first one, by making the substitution $u=1-\frac{\sin(x)}{x}$, we have $$ L_1 = \lim_{u\to u_0}\frac{\sin(u)}{u}, $$ where $$ u_0 = \lim_{x\to0}\left(1-\frac{\sin(x)}{x}\right)=0. $$ Consequently, $$ L_1 = \lim_{u\to0}\frac{\sin(u)}{u}=1. $$</p> <p>Moreover, for the second limit, we apply the De L' Hospital rule twice and we find $L_2=\frac{1}{6}$.</p> <p>Finally, $L=1\cdot\frac{1}{6}=\frac{1}{6}$.</p> <p>Is this correct?</p>
Community
-1
<p><strong>By L' Hospital anyway</strong>:</p> <p>$$\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{x^2}$$ yields</p> <p>$$\cos\left(1-\frac{\sin(x)}x\right)\frac{\sin(x)-x\cos(x)}{2x^3}.$$</p> <p>The first factor has limit $1$ and can be ignored.</p> <p>Then with L'Hospital again:</p> <p>$$\frac{x\sin(x)}{6x^2},$$</p> <p>which clearly tends to $\dfrac16$.</p>
3,282,206
<p>I'm familiar with Fermat's Little Theorem and Euler's totient, but I'm wondering whether the fact that the only shared factor of <span class="math-container">$a$</span> and <span class="math-container">$N$</span> is <span class="math-container">$1$</span>, i.e. <span class="math-container">$\gcd(a,N)=1$</span>, has something to do with the fact that, given the prior constraints, there exists at least one <span class="math-container">$x$</span> (with <span class="math-container">$x$</span> among the least positive residues mod <span class="math-container">$N$</span>) where <span class="math-container">$a^x \equiv 1 \pmod{N}$</span>? </p> <p>It doesn't appear to depend on the modulus being prime, since with modulus 10 the following is true:</p> <p><span class="math-container">$9^1\equiv 9\pmod{10}$</span><br> <span class="math-container">$9^2\equiv 1\pmod{10}$</span><br> <span class="math-container">$9^3\equiv 9\pmod{10}$</span><br> <span class="math-container">$9^4\equiv 1\pmod{10}$</span><br> <span class="math-container">$9^5\equiv 9\pmod{10}$</span><br> <span class="math-container">$9^6\equiv 1\pmod{10}$</span><br> <span class="math-container">$9^7\equiv 9\pmod{10}$</span><br> <span class="math-container">$9^8\equiv 1\pmod{10}$</span><br> <span class="math-container">$9^9\equiv 9\pmod{10}$</span><br></p> <p>This is also really naive, but another observation is that, given the previous assumptions, the initial base will always be less than <span class="math-container">$N$</span> and <span class="math-container">$a^1 \bmod N$</span> will always return <span class="math-container">$a$</span>; so given <span class="math-container">$\gcd(a,N)=1$</span>, <span class="math-container">$a$</span> will always produce at least one remainder other than <span class="math-container">$1$</span>.</p> <p>I know there's some intuition that I'm missing?</p>
TonyK
1,508
<p>Consider all powers <span class="math-container">$a^r\bmod N$</span> for <span class="math-container">$r=1,2,3,\ldots$</span> There are an infinite number of them, but they can only take values <span class="math-container">$0\le a^r\bmod N\le N-1$</span>. So sooner or later we must encounter a repeated value; say <span class="math-container">$a^s\equiv a^t\bmod N$</span>, for some <span class="math-container">$s&lt;t$</span>.</p> <p>Now, since <span class="math-container">$\gcd(a,N)=1$</span>, we know that <span class="math-container">$\gcd(a^s,N)=1$</span>. So we can divide by <span class="math-container">$a^s$</span>, and we get <span class="math-container">$1\equiv a^{t-s}\bmod N$</span>. Which is what we wanted.</p>
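<p>The argument is easy to animate in code: keep multiplying by <span class="math-container">$a$</span> modulo <span class="math-container">$N$</span> until the value <span class="math-container">$1$</span> reappears (a sketch; it assumes <span class="math-container">$\gcd(a,N)=1$</span> and <span class="math-container">$N&gt;1$</span>):</p>

```python
from math import gcd

def multiplicative_order(a, N):
    """Smallest x >= 1 with a**x % N == 1; exists whenever gcd(a, N) == 1."""
    assert N > 1 and gcd(a, N) == 1
    x, power = 1, a % N
    while power != 1:
        power = (power * a) % N
        x += 1
    return x

print(multiplicative_order(9, 10))  # 2, matching the 9, 1, 9, 1, ... pattern
print(multiplicative_order(3, 7))   # 6
```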
2,648,492
<p>I am having trouble with this problem. When they say spot I think they are essentially saying the sum, so it's the probability that the sum of the dice is $11$ or less.</p> <p>I understand that there are $6^5$ combinations.</p> <p>I found 6 ways that it can equal $11$: $(2,3,2,2,2),(3,3,1,1,3),(4,4,1,1,1),(5,2,2,1,1),(6,2,1,1,1),(7,1,1,1,1)$, but I know there has to be an easier way than just counting. Is it like $\binom{6}{5}$? Thanks for the help.</p>
Siong Thye Goh
306,553
<p>We can solve the problem using a generating function. \begin{align}\left(\frac16\sum_{i=1}^6x^i\right)^5=\frac{1}{6^5}\left( \frac{x(1-x^6)}{1-x}\right)^5 \end{align}</p> <p>Our goal is to find the sum of the coefficients of $x^5$ to $x^{11}$.</p> <p>$$(1-x^6)^5 =1-5x^6+\text{higher order terms} $$</p> <p>By the <a href="http://mathworld.wolfram.com/NegativeBinomialSeries.html" rel="nofollow noreferrer">negative binomial series</a>, $$(1-x)^{-5}=\sum_{k=0}^\infty \binom{4+k}{k}x^k$$</p> <p>Hence, </p> <p>\begin{align}\left(\frac16\sum_{i=1}^6x^i\right)^5=\frac{1}{6^5}x^5(1-5x^6+\text{higher order terms})\sum_{k=0}^\infty \binom{4+k}{k}x^k\end{align}</p> <p>and the sum of coefficients is </p> <p>$$\frac1{6^5}\left(\sum_{k=0}^{6}\binom{4+k}{k}-5\binom{4+0}{0} \right)=\frac1{6^5}\left(\sum_{k=0}^{6}\binom{4+k}{4}-5 \right)=\frac{462-5}{6^5}=\frac{457}{7776}$$</p>
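<p>With only $6^5=7776$ outcomes, the coefficient count is also cheap to confirm by exhaustive enumeration:</p>

```python
from itertools import product

# Count 5-dice outcomes whose total is at most 11 (sums 5 through 11).
favorable = sum(1 for roll in product(range(1, 7), repeat=5) if sum(roll) <= 11)
print(favorable, 6**5)  # 457 7776
```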
2,648,492
<p>I am having trouble with this problem. When they say spot I think they are essentially saying the sum, so it's the probability that the sum of the dice is $11$ or less.</p> <p>I understand that there are $6^5$ combinations.</p> <p>I found 6 ways that it can equal $11$: $(2,3,2,2,2),(3,3,1,1,3),(4,4,1,1,1),(5,2,2,1,1),(6,2,1,1,1),(7,1,1,1,1)$, but I know there has to be an easier way than just counting. Is it like $\binom{6}{5}$? Thanks for the help.</p>
P.Diddy
513,123
<p>You can solve an equivalent question which is slightly easier. Consider the following: </p> <p>You have five different bins to which you need to distribute up to eleven balls. First you will put one ball in each bin, as Matti P suggested, since it is impossible to have a score lower than 1 on a die. Now you have up to six balls left to distribute. So we will use an additional bin - a sort of trash bin, if you will, which will take all the balls we don't put in the original five bins (corresponding to the dice). The number of ways to distribute up to six balls among five bins is equivalent to distributing exactly six balls among six bins, as in $nSr(6,6)=nCr(11,5)$. However, you still need to subtract the number of distributions which put a total of more than six balls in a single bin (that would mean a die showed 7 or more). You already have five bins with one ball inside and six balls left, so your only option is to choose 1 out of 5 bins and then insert all 6 balls into that bin.</p> <p>To sum up, you have $nSr(6,6)-5=nCr(11,5)-5=457$ ways to get a total of 11 or less. Now divide that by the total number of possible dice outcomes and you'll get a probability of 457 over 7776, i.e. 0.05877...</p>
1,293,207
<p>A ray of light travels from the point $A$ to the point $B$ across the border between two materials. In the first material the speed is $v_1$ and in the second it is $v_2$. Show that the journey is achieved in the least possible time when Snell's law: $$\frac{\sin \theta_1}{\sin \theta_2}=\frac{v_1}{v_2}$$ holds. </p> <p>To show that, do we have to use the Lagrange multipliers method? </p> <p>But which function do we want to minimize?</p>
Emilio Novati
187,568
<p>Hint:</p> <p>Use <a href="http://en.wikipedia.org/wiki/Fermat&#39;s_principle" rel="nofollow">Fermat's principle</a>.</p> <blockquote> <p>the path taken between two points by a ray of light is the path that can be traversed in the least time.</p> </blockquote> <p>And you don't need Lagrange multipliers.</p>
1,293,207
<p>A ray of light travels from the point $A$ to the point $B$ across the border between two materials. In the first material the speed is $v_1$ and in the second it is $v_2$. Show that the journey is achieved in the least possible time when Snell's law: $$\frac{\sin \theta_1}{\sin \theta_2}=\frac{v_1}{v_2}$$ holds. </p> <p>To show that, do we have to use the Lagrange multipliers method? </p> <p>But which function do we want to minimize?</p>
Matematleta
138,929
<p>Let the total horizontal distance travelled be $a$. The light travels in the first medium a distance $d_{1}$ at an angle $\theta _{1}$ to the vertical. Then, if the horizontal distance traveled in this step is $x$ and the vertical distance traveled is $h$, the time it takes to reach the border is </p> <p>$t_{1}=\frac{\left ( x^{2}+h^{2} \right )^{1/2}}{v_{1}}$.</p> <p>From there, the light enters the second medium, where it has velocity $v_{2}$, makes an angle $\theta _{2}$ to the vertical, and travels a distance $d_{2}$. The horizontal distance travelled is $a-x$ and the vertical distance travelled is $y$, so that </p> <p>$t_{2}=\frac{\left ( (a-x)^{2}+y^{2} \right )^{1/2}}{v_{2}}$.</p> <p>The trick here is to observe that we may assume that only $x$ varies, i.e. that $h$ and $y$ are constant. </p> <p>Then, put $t=t_{1}+t_{2}$ and minimize this function as a function of the single variable $x$. There is some messy algebra involved and you will use the relations</p> <p>$\sin \theta _{1}=\frac{x}{d_{1}}$ </p> <p>and</p> <p>$\sin \theta _{2}=\frac{a-x}{d_{2}}$.</p>
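<p>The "messy algebra" can be delegated to a computer algebra system; the following sympy sketch shows that $\frac{dt}{dx}$ is exactly $\frac{\sin\theta_1}{v_1}-\frac{\sin\theta_2}{v_2}$, so the critical point of $t$ is precisely where Snell's law holds:</p>

```python
import sympy as sp

x, a, h, y, v1, v2 = sp.symbols('x a h y v1 v2', positive=True)
d1 = sp.sqrt(x**2 + h**2)        # distance travelled in the first medium
d2 = sp.sqrt((a - x)**2 + y**2)  # distance travelled in the second medium
t = d1/v1 + d2/v2                # total travel time as a function of x

sin1 = x/d1          # sin(theta_1)
sin2 = (a - x)/d2    # sin(theta_2)

# dt/dx equals sin(theta_1)/v1 - sin(theta_2)/v2 identically.
print(sp.simplify(sp.diff(t, x) - (sin1/v1 - sin2/v2)))  # 0
```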
216,031
<p>Using image analysis, I have found the positions of a circular ring and imported them as <code>xx</code> and <code>yy</code> coordinates. I am using <code>ListInterpolation</code> to interpolate the data:</p> <pre><code>xi = ListInterpolation[xx, {0, 1}, InterpolationOrder -&gt; 4, PeriodicInterpolation -&gt; True, Method -&gt; "Spline"]; yi = ListInterpolation[yy, {0, 1}, InterpolationOrder -&gt; 4, PeriodicInterpolation -&gt; True, Method -&gt; "Spline"]; </code></pre> <p>I plot the results as:</p> <pre><code>splinePlot = ParametricPlot[{xi[s], yi[s]} , {s, 0, 1}, PlotStyle -&gt; {Red}] </code></pre> <p>and the result looks like: </p> <p><a href="https://i.stack.imgur.com/c2wRi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c2wRi.png" alt="interpolation of data"></a></p> <p>I am trying to study this shape as it deforms, and I will need to look at the derivatives of this interpretation (notably, second derivatives). I know that there are physical constraints that will not let the local curvature at any point be <strong>larger</strong> than, for example <code>1/10</code> in the units show (so, a radius of curvature <code>10</code>). <strong>Is there a way that I can constrain the interpolation so that the local curvature never exceeds a given value?</strong></p> <p>Here is the data (Dropbox Link): <a href="https://www.dropbox.com/s/g9vajch0obbcplk/testShape.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/g9vajch0obbcplk/testShape.csv?dl=0</a></p>
Steffen Jaeschke
61,643
<p>This risks duplicating other answers, although here an interpolation of a real 2D curve is under consideration.</p> <p>What I did is use this <a href="https://mathematica.stackexchange.com/questions/84502/movingaverage-to-include-the-average-of-the-first-and-last-elements-in-a-list/84504#84504">solution</a>. I wanted to meet the periodic boundary conditions, but did not fully match them.</p> <p>Use this:</p> <pre><code>xxn = Mean[{#, RotateLeft@#}] &amp;@(Transpose[xydata][[1, All]]);
yyn = Mean[{#, RotateLeft@#}] &amp;@(Transpose[xydata][[2, All]]);
</code></pre> <p>In this particular case, this alone does not really reduce the problems. </p> <p>It is better to make use of</p> <pre><code>xxn7 = MovingAverage[ArrayPad[#, {0, 1}, "Periodic"], 7] &amp;@xxn;
xi1 = ListInterpolation[xxn7, {0, 1}, InterpolationOrder -&gt; 4, Method -&gt; "Spline"]
Plot[xi1'[x], {x, 0, 1}]
</code></pre> <p>The result is: </p> <p><a href="https://i.stack.imgur.com/JRWVb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JRWVb.png" alt="enter image description here"></a></p> <p>That compares to</p> <p><a href="https://i.stack.imgur.com/Hrnd8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hrnd8.png" alt="enter image description here"></a>.</p> <p>The curvature is defined as: </p> <p><a href="https://i.stack.imgur.com/1TixB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1TixB.png" alt="enter image description here"></a></p> <p>This is not so critical in this particular case, because at first glance the curvature depends inversely on the first derivative. The second derivative, called a in the formula, oscillates even more wildly at the problematic points. So smoothing the first derivative and then differentiating further is a good choice for keeping control while staying close to the real oscillation process.</p> <p>There is no generally best approximation. For most of the points the <code>Mean</code> smoothing is too much, for others it is fair, and for some it is not enough. The last ones govern the second derivative and make it even more jumpy.</p> <p>By accident or by selection, the second derivative of your interpolation is already limited. <code>60000</code> is quite large. A MovingAverage over 7 neighbors reduces the maximum to <code>20000</code>. There is a functional relationship between the number of neighbors used in MovingAverage and the maximum of <code>x''</code>.</p> <p><a href="https://i.stack.imgur.com/Ov0II.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ov0II.png" alt="enter image description here"></a></p> <p>It is better to average different pairs of neighbors repeatedly. The resulting curve, and the maximum value of the second derivative, will get smaller and smaller. Stay on the original curve at the points that fit best, allowing a small value of the second derivative there, to keep your smoothed curve close to the original process. It is a matter of taste, or of mathematically faithful perception.</p> <p>The route of calculating a parametric representation of the data points may be shorter, but it gives less control and insight into the behavior of the first and second derivatives of the original curve.</p> <p>If you decide for the MovingAverage approach, it might be sensible to translate the center of the curve to the origin of the coordinate system.</p> <p>So after applying the MovingAverage over 7 neighbors, the second derivative is tamed:</p> <p><a href="https://i.stack.imgur.com/WZ0R7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WZ0R7.png" alt="enter image description here"></a></p> <p>The spikes are smaller by about a third, but there are more of them, and the distribution has broadened a lot. This again supports the advice that selecting better points might work better.</p> <p>Keep in mind that I did not alter the coordinate system, nor did I add or remove points to or from the original curve. I worked on an approximation of the curvature.</p> <p>The resulting curve is as wavy as the given one:</p> <p><a href="https://i.stack.imgur.com/YsVMH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YsVMH.png" alt="enter image description here"></a></p> <p>The problem is that it needs to be closed again.</p> <p>A close comparison shows the effect of even a little averaging:</p> <p><a href="https://i.stack.imgur.com/WcE8Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WcE8Q.png" alt="enter image description here"></a></p> <p>In the upper half the averaging cuts off the spikes; in the lower half it tends to move the curve outwards, enlarging the radius. So there is no symmetry at all. It seems there is more work ahead in dealing with the completion of the curve at the bottom. Again, as with all the steps, it is up to the realism that applies.</p> <p>Again, I have tried not to duplicate too much. For the problem I mentioned with the periodic option, have a look at this article:</p> <p><a href="https://mathematica.stackexchange.com/questions/10273/higher-order-periodic-interpolation-curve-fitting">Higher order periodic interpolation (curve fitting)</a>. The answer by <a href="https://mathematica.stackexchange.com/users/50/j-m-is-in-limbo">J. M. is in limbo</a> should do the rest.</p> <p>Obey the periodic option, and rather than evaluating the curvature directly, look deeply and carefully at the first and second derivatives of the numerically interpolated representation of the given data.</p> <p>I am only having problems with the part of the curve from about 20 minutes past to just before half past (in clock position). The rest is comfortable and fine. Some remnant of the curls should stay in the curve, and the curve obtained after changing to polar coordinates must not simply sit inside the very noisy given one. There is a systematic error from the acquisition at the bottom that needs to be dealt with differently than by numerical work.</p>
3,208,822
<p>I'm looking at a STEP question and I'm a little confused by the logic of the method, and I'm really hoping someone could clarify what is going on for me. I have a good knowledge (at least I thought so), as some STEP II and III questions are accessible, but this one I just can't wrap my head around - there must be a gap in my understanding for sure. </p> <p>It would be difficult for me to rewrite the question on here, but here is the video of the question AND the solution (it's not too long, I assure you - just the first 2/3 minutes; it's the entry to the question that has got me scratching my head):</p> <p><a href="https://www.youtube.com/watch?v=YKjQ2RTfltU&amp;frags=pl%2Cwn" rel="nofollow noreferrer">https://www.youtube.com/watch?v=YKjQ2RTfltU&amp;frags=pl%2Cwn</a></p> <blockquote> <p>An internet tester sends <span class="math-container">$n$</span> e-mails simultaneously at time <span class="math-container">$t=0$</span>. Their arrival times at their destinations are independent random variables each having probability density function <span class="math-container">$ke^{-kt}$</span> where <span class="math-container">$t&gt;0$</span> and <span class="math-container">$k&gt;0$</span>. The random variable <span class="math-container">$T$</span> is the time of arrival of the e-mail that arrives first at its destination. Show that the probability density function of <span class="math-container">$T$</span> is <span class="math-container">$nke^{-nkt}$</span>.</p> </blockquote> <p>I understand the calculation later on in the question for the expectation and even the very last part of the question; I just find it difficult to understand how it is logical to consider what he considers in the first 2/3 minutes of the video. Is there a nicer explanation of why he considered <span class="math-container">$P(T&gt;t)$</span> and not <span class="math-container">$P(T&lt;t)$</span>, and could it even be done that way? Is this a general method for this 'type' of question?
</p> <p>I do see what he has done, and I understand it; it's just something that I would never have considered myself, and that makes me uncomfortable. </p> <p>Thank you for your help, and I do apologise for the awkwardness of the question.</p>
Henry
6,460
<p>Hint:</p> <p><span class="math-container">$$n\, f(x) -1 \lt \lfloor n\, f(x)\rfloor \le n\, f(x)$$</span></p> <p>so <span class="math-container">$$\dfrac{n\, f(x) -1}{n} \lt \dfrac{\lfloor n\, f(x)\rfloor}{n} \le \dfrac{n\, f(x)}{n}$$</span></p> <p>i.e. <span class="math-container">$$f(x) -\dfrac{1}{n} \lt \dfrac{\lfloor n\, f(x)\rfloor}{n} \le f(x)$$</span></p> <p>Now consider <span class="math-container">$n \gt N \ge \frac1\epsilon$</span></p>
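<p>The squeeze can also be seen numerically: for any fixed <span class="math-container">$f$</span>, the quantity <span class="math-container">$\lfloor n f(x)\rfloor/n$</span> stays within <span class="math-container">$1/n$</span> of <span class="math-container">$f(x)$</span>, uniformly in <span class="math-container">$x$</span> (a sketch; <span class="math-container">$f=\sin$</span> is just a stand-in for a generic <span class="math-container">$f$</span>):</p>

```python
import math

f = math.sin          # any f works; sin is just an example
n = 10**6
xs = [i / 100 for i in range(-300, 301)]  # sample grid on [-3, 3]

# floor(n f(x))/n underestimates f(x) by less than 1/n, uniformly in x.
err = max(abs(f(x) - math.floor(n * f(x)) / n) for x in xs)
print(err <= 1 / n)  # True
```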
2,194,376
<p>I'm having some trouble understanding the answers to the following questions:</p> <p><a href="https://i.stack.imgur.com/nnuu8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nnuu8.png" alt="enter image description here"></a></p> <p>(a)</p> <p>Why would it make sense for Eve to test out the $gcd(77, 35)$ ?</p> <p>I understand that she has the following mapping</p> <p>$e(x) = x^7\,mod\,35$</p> <p>$e(x) = x^7\,mod\,77$</p> <p>(b)</p> <p>I believe this answer follows from (a)</p>
Olivier Oloa
118,798
<p><strong>Hint</strong>. One may observe that $$ \left|\frac{\sin\frac{x}{n}\sin 2nx}{x^2+4n}\right|\le\frac{\left|\sin\frac{x}{n}\right|}{4n}\le \frac{\frac{\left|x\right|}n}{4n}\le\frac{A}{4n^2},\quad x \in [-A,A] \,\,(A&gt;0). $$ By <a href="https://en.wikipedia.org/wiki/Weierstrass_M-test" rel="nofollow noreferrer">the Weierstrass M-test</a>, this gives the <em>uniform convergence</em> of the given series on each compact subset of $\mathbb{R}$.</p>
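<p>A quick numerical sanity check of the dominating bound on a sample grid (with $A=3$ chosen arbitrarily):</p>

```python
import math

A = 3.0

def term(n, x):
    return math.sin(x / n) * math.sin(2 * n * x) / (x**2 + 4 * n)

# Each term is dominated by A/(4 n^2) throughout [-A, A].
xs = [i * A / 1000 for i in range(-1000, 1001)]
ok = all(abs(term(n, x)) <= A / (4 * n**2) for n in (1, 2, 5, 50) for x in xs)
print(ok)  # True
```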
3,116,693
<blockquote> <p>Let <span class="math-container">$C([0,1])$</span> be the space of all real valued continuous functions <span class="math-container">$f:[0,1]\to \mathbb{R}$</span>. Take the norm <span class="math-container">$$||f||=\left(\int_0^1 |f(x)|^2\right)^{1/2}$$</span> and the subspace <span class="math-container">$$C=\{f\in C([0,1]):f(0)=0\}.$$</span> Find the closure of <span class="math-container">$C$</span>.</p> </blockquote> <p>First, I showed that <span class="math-container">$C$</span> is not closed, so it cannot be its own closure. To do this, let's take the function <span class="math-container">$$f_n(x)=\left\{ \begin{array}{ll} nx &amp; \text{if } x\leq \tfrac{1}{n} \\ 1 &amp; \text{if } x\geq \tfrac{1}{n} \end{array} \right.$$</span> Clearly, <span class="math-container">$f_n(x)\in C$</span> for any <span class="math-container">$n\in \mathbb{N}$</span>. We'll prove that <span class="math-container">$$f:=\lim_{n\to \infty} f_n(x)=\left\{ \begin{array}{ll} 0 &amp; \text{if } x=0 \\ 1 &amp; \text{if } x\neq 0 \end{array} \right.$$</span> To see this, let <span class="math-container">$\varepsilon&gt;0$</span> and choose <span class="math-container">$N\in \mathbb{N}$</span> such that <span class="math-container">$\frac{1}{\sqrt{3N}}\leq \varepsilon$</span>. 
Then, for any <span class="math-container">$n\geq N$</span> we have that <span class="math-container">$$\int_0^1|f_n(x)-f(x)|^2dx=\int_0^{1/n}|f_n(x)-f(x)|^2dx+\int_{1/n}^1|f_n(x)-f(x)|^2dx$$</span> <span class="math-container">$$=\int_0^{1/n}|nx-1|^2dx+\int_{1/n}^1|1-1|^2dx=\int_0^{1/n}(nx-1)^2dx=\frac{1}{3n}\leq \frac{1}{3N}\leq \varepsilon^2,$$</span> so that <span class="math-container">$\|f_n-f\|\leq \varepsilon$</span>.</p> <p>Hence, we have constructed a sequence in <span class="math-container">$C$</span> which does not converge in <span class="math-container">$C$</span> since <span class="math-container">$f$</span> is not continuous, so <span class="math-container">$C$</span> cannot be closed.</p> <p>I don't know how to approach the question of finding the closure of <span class="math-container">$C$</span>. Any help would be appreciated.</p>
kimchi lover
457,779
<p>The closure of <span class="math-container">$C$</span> in <span class="math-container">$V=(C([0,1]),\|\cdot\|_2)$</span> is all of <span class="math-container">$V$</span>. For given <span class="math-container">$f\in V$</span>, let <span class="math-container">$f_n\in C$</span> be obtained from <span class="math-container">$f$</span> by <span class="math-container">$$ f_n(x)=\begin{cases}xf(1/n) \text{ if } x\in[0,1/n]\\ f(x) \text{ otherwise.}\end{cases}$$</span> It's clear that <span class="math-container">$\|f_n-f\|_2\to0$</span>, so <span class="math-container">$f$</span> is in the closure of <span class="math-container">$C$</span>.</p>
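<p>A numerical sketch of this construction, with an arbitrary stand-in for <span class="math-container">$f$</span> (here <span class="math-container">$f(x)=\cos 3x+2$</span>, which has <span class="math-container">$f(0)\neq 0$</span>), showing <span class="math-container">$\|f_n-f\|_2\to 0$</span> via a midpoint Riemann sum:</p>

```python
import math

def f(x):                       # arbitrary continuous f with f(0) != 0
    return math.cos(3 * x) + 2

def f_n(n, x):                  # the modification above; note f_n(0) = 0
    return x * f(1 / n) if x <= 1 / n else f(x)

def l2_dist(n, samples=20000):  # midpoint-rule approximation of ||f_n - f||_2
    h = 1 / samples
    return math.sqrt(sum((f((i + 0.5) * h) - f_n(n, (i + 0.5) * h)) ** 2 * h
                         for i in range(samples)))

print([round(l2_dist(n), 4) for n in (10, 100, 1000)])  # decreasing toward 0
```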
3,116,693
<blockquote> <p>Let <span class="math-container">$C([0,1])$</span> be the space of all real valued continuous functions <span class="math-container">$f:[0,1]\to \mathbb{R}$</span>. Take the norm <span class="math-container">$$||f||=\left(\int_0^1 |f(x)|^2\right)^{1/2}$$</span> and the subspace <span class="math-container">$$C=\{f\in C([0,1]):f(0)=0\}.$$</span> Find the closure of <span class="math-container">$C$</span>.</p> </blockquote> <p>First, I showed that <span class="math-container">$C$</span> is not closed, so it cannot be its own closure. To do this, let's take the function <span class="math-container">$$f_n(x)=\left\{ \begin{array}{ll} nx &amp; \text{if } x\leq \tfrac{1}{n} \\ 1 &amp; \text{if } x\geq \tfrac{1}{n} \end{array} \right.$$</span> Clearly, <span class="math-container">$f_n(x)\in C$</span> for any <span class="math-container">$n\in \mathbb{N}$</span>. We'll prove that <span class="math-container">$$f:=\lim_{n\to \infty} f_n(x)=\left\{ \begin{array}{ll} 0 &amp; \text{if } x=0 \\ 1 &amp; \text{if } x\neq 0 \end{array} \right.$$</span> To see this, let <span class="math-container">$\varepsilon&gt;0$</span> and choose <span class="math-container">$N\in \mathbb{N}$</span> such that <span class="math-container">$\frac{1}{\sqrt{3N}}\leq \varepsilon$</span>. 
Then, for any <span class="math-container">$n\geq N$</span> we have that <span class="math-container">$$\int_0^1|f_n(x)-f(x)|^2dx=\int_0^{1/n}|f_n(x)-f(x)|^2dx+\int_{1/n}^1|f_n(x)-f(x)|^2dx$$</span> <span class="math-container">$$=\int_0^{1/n}|nx-1|^2dx+\int_{1/n}^1|1-1|^2dx=\int_0^{1/n}(nx-1)^2dx=\frac{1}{3n}\leq \frac{1}{3N}\leq \varepsilon^2,$$</span> so that <span class="math-container">$\|f_n-f\|\leq \varepsilon$</span>.</p> <p>Hence, we have constructed a sequence in <span class="math-container">$C$</span> which does not converge in <span class="math-container">$C$</span> since <span class="math-container">$f$</span> is not continuous, so <span class="math-container">$C$</span> cannot be closed.</p> <p>I don't know how to approach the question of finding the closure of <span class="math-container">$C$</span>. Any help would be appreciated.</p>
mechanodroid
144,766
<p><strong>Lemma.</strong></p> <blockquote> <p>Let <span class="math-container">$X$</span> be a normed space and <span class="math-container">$f : X \to \mathbb{C}$</span> an unbounded functional on <span class="math-container">$X$</span>. Then <span class="math-container">$\ker f$</span> is dense in <span class="math-container">$X$</span>.</p> </blockquote> <p><strong>Proof.</strong> Pick a sequence <span class="math-container">$(x_n)_n$</span> in <span class="math-container">$X$</span> such that <span class="math-container">$\|x_n\| = 1$</span> and <span class="math-container">$|f(x_n)| \ge n, \forall n \in \mathbb{N}$</span>. Let <span class="math-container">$x \in X$</span> be arbitrary. Check that the sequence <span class="math-container">$\left(x-\frac{x_n}{f(x_n)}f(x)\right)_n$</span> lies in <span class="math-container">$\ker f$</span> and converges to <span class="math-container">$x$</span>.</p> <p>Now, the functional <span class="math-container">$\phi : C[0,1] \to \mathbb{C}$</span> given by <span class="math-container">$\phi(f) = f(0)$</span> is unbounded. Namely, consider the functions <span class="math-container">$$g_n(t) = \begin{cases} \sqrt{2n - 2n^2t}, &amp;\text{if } t \in \left[0,\frac1n\right]\\ 0, &amp;\text{if } t \in \left[\frac1n,1\right]\end{cases}$$</span></p> <p>We have <span class="math-container">$\|g_n\|_2 = 1$</span> but <span class="math-container">$\phi(g_n) = g_n(0) = \sqrt{2n}$</span>.</p> <p>Hence <span class="math-container">$C = \ker \phi$</span> is dense in <span class="math-container">$C[0,1]$</span>, therefore <span class="math-container">$\overline{C} = C[0,1]$</span>.</p>
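A numerical sanity check of the construction (mine, not part of the answer): the midpoint rule confirms $\|g_n\|_2 = 1$ while $\phi(g_n) = \sqrt{2n}$ blows up, so the correction $\frac{g_n}{\phi(g_n)}h(0)$ that moves a function $h$ into $\ker \phi$ shrinks to $0$ in norm:

```python
import math

def g(n, t):
    # the functions g_n from the answer
    return math.sqrt(max(2 * n - 2 * n * n * t, 0.0)) if t <= 1 / n else 0.0

def l2_norm(f, N=50_000):
    # midpoint rule on [0, 1]
    h = 1.0 / N
    return math.sqrt(sum(f((i + 0.5) * h) ** 2 for i in range(N)) * h)

for n in (5, 50, 500):
    norm = l2_norm(lambda t: g(n, t))
    assert abs(norm - 1.0) < 1e-2        # ||g_n||_2 = 1
    phi_gn = math.sqrt(2 * n)            # phi(g_n) = g_n(0)
    # size of the correction pushing a function with h(0) = 1 into ker(phi)
    assert norm / phi_gn < 1 / math.sqrt(n)
```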
456,824
<p>It's quite easy to give the complete <em>rational</em> solution to,</p> <p>$$x_1^k+x_2^k+x_3^k = y_1^k+y_2^k+y_3^k,\;\; \text{for}\; k=1,2\tag{1}$$</p> <p>One can express it in the form,</p> <p>$$(p+q)^k+(r+s)^k+(t+u)^k=(p-q)^k+(r-s)^k+(t-u)^k\tag{2}$$</p> <p>and impose 2 linear conditions on $p,q,r,s,t,u$. However, an alternative is,</p> <p>$$(ad+e)^k+ (bc+e)^k+ (ac+bd+e)^k = (ac+e)^k + (bd+e)^k + (ad+bc+e)^k\tag{3}$$</p> <p>which is already identically true for $k=1,2$. </p> <p><strong><em>Question</em></strong>: How do you prove $(3)$ is the complete rational solution to $(1)$? </p> <p>(I actually have a proof, but was wondering if someone else has a more elegant way to do it.) </p>
coffeemath
30,316
<p>With different notation for the main variables, the system is $u+v+w=x+y+z$ and the squared equation $u^2+v^2+w^2=x^2+y^2+z^2.$ Your substitutions are $$ad+e=u,\\ bc+e=v,\\ac+bd+e=w,\\ ac+e=x,\\ bd+e=y,\\ ad+bc+e=z.$$ From this two expressions for $e$ are $u+v-z$ and $x+y-w.$ These are compatible because of the linear equation of the system. Now one can solve for the pairwise products of your substitution (using $e=u+v-z=x+y-w$) as $$ad=z-v,\\ bc=z-u,\\ ac=w-y,\\ bd=w-x.\tag{1}$$ This implies that for your substitution to cover all solutions, it must be the case, since $(ad)(bc)=(ac)(bd),$ that $$(z-u)(z-v)=(w-x)(w-y).\tag{2}$$ I haven't been able to show this is a consequence of the system consisting of the linear and quadratic equations in $u,v,w,x,y,z$. However it seems to hold for the one example I looked at, namely $(1,5,6,2,3,7).$ No matter how I set up the variables the relation (2) was OK.</p> <p>Assuming that (2) holds, there is the question of how to recover $a,b,c,d$, since we already know $e$ and the remaining parameters only appear in products. There are relations like $[(ad)(ac)]/[(bc)(bd)]=(a/b)^2$ which imply that certain ratios of the differences of the variables are squares; even this seemed to hold in the simple example I looked at. It seems these values of $a,b,c,d$ are not unique, but can be found by choosing say $a$ at random, and then using the product expressions from (1) to obtain $b,c,d$. </p> <p>In conclusion, <em>if</em> the relation (2) can somehow be shown to follow from the initial system (even if one must suppose a certain ordering of the main variables), it seems your parametrization should cover all solutions.</p>
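The open point — whether relation (2) follows from the linear and quadratic equations — can at least be probed by machine. This brute-force check (mine, not part of the answer) finds no counterexample among small non-negative integer solutions, which supports the claim:

```python
from itertools import product

def relation_two_holds(u, v, w, x, y, z):
    # vacuously true when (u,v,w), (x,y,z) is not a solution of the system
    if u + v + w != x + y + z:
        return True
    if u * u + v * v + w * w != x * x + y * y + z * z:
        return True
    # relation (2): (z-u)(z-v) == (w-x)(w-y)
    return (z - u) * (z - v) == (w - x) * (w - y)

# exhaustive search over small tuples: no counterexample
assert all(relation_two_holds(*t) for t in product(range(6), repeat=6))

# the example from the answer: (1,5,6) vs (2,3,7)
assert relation_two_holds(1, 5, 6, 2, 3, 7)
assert (7 - 1) * (7 - 5) == (6 - 2) * (6 - 3)
```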
456,824
<p>It's quite easy to give the complete <em>rational</em> solution to,</p> <p>$$x_1^k+x_2^k+x_3^k = y_1^k+y_2^k+y_3^k,\;\; \text{for}\; k=1,2\tag{1}$$</p> <p>One can express it in the form,</p> <p>$$(p+q)^k+(r+s)^k+(t+u)^k=(p-q)^k+(r-s)^k+(t-u)^k\tag{2}$$</p> <p>and impose 2 linear conditions on $p,q,r,s,t,u$. However, an alternative is,</p> <p>$$(ad+e)^k+ (bc+e)^k+ (ac+bd+e)^k = (ac+e)^k + (bd+e)^k + (ad+bc+e)^k\tag{3}$$</p> <p>which is already identically true for $k=1,2$. </p> <p><strong><em>Question</em></strong>: How do you prove $(3)$ is the complete rational solution to $(1)$? </p> <p>(I actually have a proof, but was wondering if someone else has a more elegant way to do it.) </p>
Tito Piezas III
4,781
<p>Complementing the proof by ckrao in the <a href="http://ckrao.wordpress.com/2012/08/28/23-multigrade-equations/" rel="nofollow">link</a> given by Myerson, here is mine. The general formula for,</p> <p>$$x_1^k+x_2^k+x_3^k = y_1^k+y_2^k+y_3^k,\;\; \text{for}\; k=1,2\tag{1}$$</p> <p>where $x_1 \ne y_1$ can be given as,</p> <p>$$(p+q)^k+(r+s)^k+(t+u)^k=(p-q)^k+(r-s)^k+(t-u)^k\tag{2}$$</p> <p><strong><em>with</em></strong> two linear conditions. Expanding at $k=1,2$, it must be the case that,</p> <p>$$q+s+u =0$$</p> <p>$$pq+rs+tu=0$$</p> <p>and then one can linearly solve for $t,u$. Alternatively,</p> <p>$$(ad+e)^k+ (bc+e)^k+ (ac+bd+e)^k = (ac+e)^k + (bd+e)^k + (ad+bc+e)^k\tag{3}$$</p> <p><em>Proof</em>:</p> <p>Equating terms of $(2)$ and $(3)$,</p> <p>$$\begin{aligned} p+q &amp;= ad+e\\ r+s &amp;= bc+e\\ p-q &amp;= ac+e\\ r-s &amp;= bd+e\\ \end{aligned}\tag{4}$$</p> <p>we can also linearly solve for $a,b,c,e$ (the exact forms are tedious to write down here). But it can then be observed that $a,b,c,e$ and $t,u$ satisfy,</p> <p>$$\begin{aligned} t+u &amp;= ac+bd+e\\ t-u &amp;= ad+bc+e\\ \end{aligned}\tag{5}$$</p> <p>thus proving $(2)$ (with the two linear conditions) and $(3)$ are equivalent. (<em>End proof</em>.)</p> <p><strong><em>P.S.</em></strong> I was hoping for a proof that could derive $(3)$ from first principles, not the sketch that I gave above since it needs prior knowledge of $(3)$.</p>
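Not in the answer, but the claim that (3) holds identically for $k=1,2$ is easy to confirm numerically with random integer trials (my own script):

```python
from random import randint, seed

def sides(a, b, c, d, e):
    # the two sides of identity (3), before raising to the k-th power
    lhs = (a * d + e, b * c + e, a * c + b * d + e)
    rhs = (a * c + e, b * d + e, a * d + b * c + e)
    return lhs, rhs

seed(0)
for _ in range(200):
    a, b, c, d, e = (randint(-9, 9) for _ in range(5))
    lhs, rhs = sides(a, b, c, d, e)
    for k in (1, 2):
        assert sum(t ** k for t in lhs) == sum(t ** k for t in rhs)
```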
3,968,905
<p>I am trying to prove this:</p> <p><span class="math-container">$\bullet$</span> Prove that <span class="math-container">$\Delta(\varrho_\epsilon \star u) = \varrho_\epsilon \star f $</span> in the sense of distributions, if <span class="math-container">$\Delta u = f$</span> in the sense of distributions, <span class="math-container">$ u \in L^1_{loc}(\mathbb{R}^n)$</span>.</p> <p>Can anybody help me? Thank you in advance and I take advantage of the situation to wish you a happy new year :D</p>
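For the record, here is the standard sketch (mine, not part of the post). Write $\check\varrho_\epsilon(x) := \varrho_\epsilon(-x)$ and let $\varphi \in C_c^\infty(\mathbb{R}^n)$ be a test function; each step uses Fubini (legitimate since $u \in L^1_{loc}$ and the other factors are smooth and compactly supported) and the fact that derivatives may be moved across a convolution onto the smooth factor:

```latex
\begin{aligned}
\langle \Delta(\varrho_\epsilon \star u), \varphi \rangle
  &= \langle \varrho_\epsilon \star u, \Delta \varphi \rangle
   = \langle u, \check\varrho_\epsilon \star \Delta \varphi \rangle
   = \langle u, \Delta(\check\varrho_\epsilon \star \varphi) \rangle \\
  &= \langle \Delta u, \check\varrho_\epsilon \star \varphi \rangle
   = \langle f, \check\varrho_\epsilon \star \varphi \rangle
   = \langle \varrho_\epsilon \star f, \varphi \rangle .
\end{aligned}
```

The second line is exactly the hypothesis $\Delta u = f$ applied to the test function $\check\varrho_\epsilon \star \varphi$.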
cr001
254,175
<p>Expansion:</p> <p><span class="math-container">$$x^2y^2+x^2+y^2+1+2x-2y+2xy^2-2x^2y-4xy=4$$</span></p> <p>Re-factoring:</p> <p><span class="math-container">$$(x^2+2x+1)(y^2-2y+1)=4$$</span></p> <p>Simplify</p> <p><span class="math-container">$$(x+1)^2(y-1)^2=4$$</span></p> <p>I will leave you to check the cases.</p>
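Not part of the answer: a quick script confirming both the re-factoring and the case analysis — over a modest search window the equation has exactly the eight integer solutions coming from $(x+1)(y-1)=\pm 2$ with $(x+1,\,y-1)$ a factor pair of $\pm 2$:

```python
# the factored form agrees with the expanded form everywhere
for x in range(-5, 6):
    for y in range(-5, 6):
        expanded = (x*x*y*y + x*x + y*y + 1 + 2*x - 2*y
                    + 2*x*y*y - 2*x*x*y - 4*x*y)
        assert expanded == (x + 1) ** 2 * (y - 1) ** 2

# integer solutions in a window: one for each factor pair of +-2
sols = [(x, y) for x in range(-10, 11) for y in range(-10, 11)
        if (x + 1) ** 2 * (y - 1) ** 2 == 4]
assert len(sols) == 8
```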
176,893
<p>Suppose I have a polynomial $$ p(x)=\sum_{i=0}^n p_ix^i. $$ For simplicity, furthermore assume $p_n=1$. </p> <p>As is well known, we may use Gershgorin circles (applied to the companion matrix) to give an upper bound for the absolute values of the roots of $p(x)$. The theorem states that all roots are contained within a circle with radius $$ r=\max\{|p_0|, 1+|p_1|,\ldots, 1+|p_{n-1}|\}. $$</p> <hr> <p>Now I wonder if there is something like an inverse of this theorem. Suppose I know that all roots are contained within a circle of radius $r$. Is there anything that can be said about the maximum coefficient, i.e. $$ \max |p_i|\leq \text{some function of }r? $$</p> <p>I would also be grateful for a counterexample.</p>
Joe Silverman
11,926
<p>There are actually stronger estimates that deal with all of the roots. Let $r_1,\ldots,r_n$ be the roots of your polynomial (with multiplicities as appropriate). Then $$ \max_{0\le i\le n} |p_i| \ge \frac{1}{4^n} \prod_{j=1}^n \max\bigl\{|r_j|,1\bigr\} $$ and $$ \max_{0\le i\le n} |p_i| \le 4^n \prod_{j=1}^n \max\bigl\{|r_j|,1\bigr\}. $$ For references to much more general results, see the answers to <a href="https://mathoverflow.net/questions/151556/bounds-on-coefficients-of-factors-of-a-multivariate-polynomial/151564#151564">Bounds on coefficients of factors of a multivariate polynomial</a> and <a href="https://mathoverflow.net/questions/108726/getting-a-bound-on-the-coefficients-of-the-factor-polynomial?rq=1">Getting a bound on the coefficients of the factor polynomial</a> . (The $4^{\pm n}$ constants are not best possible.)</p>
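A numerical illustration (mine, not part of the answer; `poly_from_roots` is a hypothetical helper, and the $4^{\pm n}$ constants are used as stated, not sharpened): build monic polynomials from random roots and check both inequalities:

```python
import random

def poly_from_roots(roots):
    # coefficients of the monic polynomial prod (x - r), constant term first
    c = [1 + 0j]
    for r in roots:
        c = [(c[i - 1] if i > 0 else 0) - r * (c[i] if i < len(c) else 0)
             for i in range(len(c) + 1)]
    return c

random.seed(0)
n = 6
for _ in range(50):
    roots = [complex(random.uniform(-2, 2), random.uniform(-2, 2))
             for _ in range(n)]
    coeffs = poly_from_roots(roots)
    height = max(abs(a) for a in coeffs)        # max |p_i|
    mahler = 1.0
    for r in roots:
        mahler *= max(abs(r), 1.0)              # prod max(|r_j|, 1)
    assert mahler / 4 ** n <= height <= mahler * 4 ** n
```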
1,225,655
<p>Find the equation of the line that is tangent to the curve at the point $(0,\sqrt{\frac{\pi}{2}})$. Give your answer in slope-intercept form.</p> <p>I don't know how I can get the tangent line without a given equation! This is part of my Calc 1 class.</p>
mvw
86,776
<p>So far we can infer $$ T(x) = m x + n $$ with $T(0) = \sqrt{\pi/2}$. Thus $$ T(x) = m x + \sqrt{\pi/2} $$ To determine the slope $m$ we need more information about the given curve.</p>
2,023,130
<p>Suppose I have a directed graph $G=(V,E)$ s.t. at least one of the following statements is always true:</p> <ol> <li>for every $v$ in $V$, $v$ doesn't have any incoming edges.</li> <li>for every $v$ in $V$, $v$ doesn't have any outgoing edges.</li> </ol> <p>How can I prove that $G$ is bipartite? I tried to think of a proof but couldn't figure it out.</p>
dtldarek
26,306
<p>Your current statement is:</p> <blockquote> <p>Suppose I have directed graph $G=(V,E)$ s.t. at least one of the following statements is always true:</p> <ol> <li>for every $v$ in $V$, $v$ doesn't have any incoming edges.</li> <li>for every $v$ in $V$, $v$ doesn't have any outgoing edges.</li> </ol> </blockquote> <p>Any directed edge is an incoming edge for one of its endpoints and outgoing for the second of its endpoints. Thus, existence even of a single edge violates both (1) and (2). In other words, both (1) and (2), as currently stated, imply independently that there are no edges in the graph. Such a graph is certainly bipartite.</p> <p>I hope this helps $\ddot\smile$</p>
2,120,763
<p>I've been given this problem:</p> <p>Prove that a subordinate matrix norm is a matrix norm, i.e. </p> <p>if $\left \|. \right \|$ is a vector norm on $\mathbb{R}^{n}$, then $\left \| A \right \|=\max_{\left \| x \right \|=1}\left \| Ax \right \|$ is a matrix norm.</p> <p>I don't even understand the question, and an explanation of what the problem asks me to do would be very appreciated. Thanks in advance.</p> <p>Specifically, what does $\max_{\left \| x \right \|=1}\left \| Ax \right \|$ mean?</p>
Community
-1
<p><em><a href="https://math.stackexchange.com/questions/319297/gcd-to-lcm-of-multiple-numbers/319690#319690">From here</a>:</em></p> <p>We have that </p> <p><strong>Theorem:</strong> $\rm\ \ lcm(a,b,c)\, =\, \dfrac{abc}{(bc,ca,ab)}$</p> <p><strong>Proof:</strong> $\!\begin{align}\qquad\qquad\rm\ a,b,c&amp;\mid\rm\ k\\ \iff\quad\rm abc&amp;\mid \rm\,\ kbc,kca,kab\\ \iff\quad\rm abc&amp;\mid \rm (kbc,kca,kab)\, =\, k(bc,ca,ab)\\ \iff\rm \ \dfrac{abc}{(bc,ca,ab)} &amp;\:\Bigg| \rm\,\ k\end{align}$</p> <p>where $(bc, ca, ab) $ means the gcd of $ab, bc, ca $. Hope it helps. </p>
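Not part of the answer: an exhaustive check of the identity over small integers (`lcm` below is my own iterated helper built on `math.gcd`):

```python
from functools import reduce
from math import gcd

def lcm(*xs):
    # iterated two-argument lcm
    return reduce(lambda a, b: a * b // gcd(a, b), xs)

# lcm(a,b,c) == abc / gcd(bc, ca, ab) on a small grid
for a in range(1, 16):
    for b in range(1, 16):
        for c in range(1, 16):
            g = gcd(gcd(b * c, c * a), a * b)
            assert lcm(a, b, c) == a * b * c // g
```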
1,630,733
<p>I seem to be having a lot of difficulty with proofs and wondered if someone can walk me through this. The question out of my textbook states:</p> <blockquote> <p>Use a direct proof to show that if two integers have the same parity, then their sum is even.</p> </blockquote> <p>A very similar example from my notes is as follows: <code>Use a direct proof to show that if two integers have opposite parity, then their sum is odd.</code> This led to:</p> <p><strong>Proposition</strong>: The sum of an even integer and an odd integer is odd.</p> <p><strong>Proof</strong>: Suppose <code>a</code> is an even integer and <code>b</code> is an odd integer. Then by our definitions of even and odd numbers, we know that integers <code>m</code> and <code>n</code> exist so that <code>a = 2m</code> and <code>b = 2n+1</code>. This means:</p> <p>a+b = (2m)+(2n+1) = 2(m+n)+1 = 2c+1 where c=m+n is an integer by the closure property of addition. </p> <p>Thus it is shown that <code>a+b = 2c+1</code> for some integer <code>c</code> so <code>a+b</code> must be odd.</p> <hr> <p>So then for the proof of showing two integers of the same parity would have an even sum, I have thus far:</p> <p><strong>Proposition</strong>: The sum of 2 even integers is even.</p> <p><strong>Proof</strong>: Suppose <code>a</code> is an even integer and <code>b</code> is an even integer. Then by our definitions of even numbers, we know that integers <code>m</code> and <code>n</code> exist so that <code>a=2m</code> and <code>b=2m</code>???</p>
fleablood
280,126
<p>"Suppose a is an even integer and b is an even integer. Then by our definitions of even numbers, we know that integers m and n exist so that a=2m and b=2m???"</p> <p>Since a and b may be different numbers, they need different witnesses: an m for a and an n for b.</p> <p>"Suppose a is an even integer and b is an even integer. Then by our definitions of even numbers, we know that integers m and n exist so that a=2m and b=2<em>n</em>."</p> <p>And so a + b = 2m + 2n = 2(m+n), and as m+n=c for some integer c, a + b = 2c, so by definition a + b is even.</p>
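For what it's worth, a tiny exhaustive check of the claim over a small range (not part of the answer):

```python
# same parity in, even sum out: check both the even-even and odd-odd cases
evens = range(-20, 21, 2)
odds = range(-19, 21, 2)
assert all((a + b) % 2 == 0 for a in evens for b in evens)
assert all((a + b) % 2 == 0 for a in odds for b in odds)
```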
25,285
<p>I am just looking for a basic introduction to the Podles sphere and its topology. All I know is that it's a q-deformation of $S^2$. </p>
Nicola Ciccoli
6,032
<p>First of all some terminology. One usually talks about Podles spheres, since they are a one parameter family. If you say <strong>the</strong> Podles sphere you probably mean the one that is often referred to as the <strong>standard</strong> one. My indications will refer to the whole family.</p> <p>You do not clarify your background and your directions. The family of Podles spheres has been used as an example in all possible meaning of the word <em>quantization</em>, I guess you could easily find a list of hundred of references about it. I'll stick to some paper that can be considered as foundational in various aspects.</p> <p>First: as NC C*-algebra the first reading should be Podles original paper, which is simple and well written P.Podles Quantum spheres, Lett.Math.Phys. 14, 193-202 (1987).</p> <p>From the point of view of quantum homogeneous spaces, i.e. <em>-coideal subalgebras in Hopf-</em>-algebras I strongly suggest M.Dijkhuizen and T. Koornwinder Quantum homogeneous spaces, duality and quantum 2-spheres, Geom.Dedicata 52, 291-315 (1994).</p> <p>In both these approaches, more or less evidently, q-special functions pop up at some point. M. Noumi and Mimachi K., Quantum 2-spheres and big q-Jacobi polynomials, Comm. Math. Phys. 128, 521-531 (1990) is the one not to avoid.</p> <p>Last but not least you may be interested in understanding Podles spheres as deformations of Poisson structures (say à la Rieffel), which is beautifully explained in A.Sheu, Quantization of the Poisson SU(2) and its Poisson homogeneous space - the 2 sphere, Comm. Math. Phys. 135, 217-232 (1991). 
I suggest that here you first read the appendix by Lu and Weinstein, where the Poisson structures are explained neatly and simply, and then go to the quantization part (some of which may be rather technical at first).</p> <p>Then, of course, as mathphysicist mentioned, the whole issue of putting spectral triples opens up; the literature there is much more scattered (several attempts and several choices as well) and I guess one should just simply dive into the open sea and see what happens...</p> <p>ADDED: Personally I would start with the paper by Dijkhuizen and Koornwinder that settles the algebraic part (generators and relations) and has a down-to-earth approach, without technicalities (if you know a little about Hopf algebras). I would not dismiss the paper by Noumi and Mimachi if you're interested in spectral triples. q-special functions mean harmonic analysis on the sphere: it shouldn't be surprising that they matter if you look for Dirac operators satisfying some invariance condition.</p>
1,177,988
<p>Comparing the equation $$x^4+3x+20=0$$<br> with the equation $$(x^2+\lambda)^2-(mx+n)^2=0$$ we get </p> <p>$m^2=2\lambda,$</p> <p>$-2mn=3,$<br> $n^2=\lambda^2-20.$ </p> <p>Now, $4m^2n^2=9\Rightarrow 4(2\lambda)(\lambda^2-20)=9\Rightarrow 8\lambda^3-160\lambda-9=0$. </p> <p>How can I easily find the values of $\lambda$ from the above equation?</p> <p>Please advise.</p>
Mathlover
22,430
<p>HINT: Use the binomial expansion $$(x+y)^3=x^3+3x^2y+3xy^2+y^3$$ and rearrange it as the identity $$(x+y)^3-3xy(x+y)-(x^3+y^3)=0$$</p> <p>$$8\lambda^3-160\lambda-9=0$$<br> $$\lambda^3-20\lambda-\frac{9}{8}=0$$ </p> <p>Define $x+y=\lambda$; then, matching coefficients, solve the system in the two unknowns $$3xy=20$$</p> <p>$$x^3+y^3=\frac{9}{8}$$</p> <p>Let me know if you cannot get any further from there.</p>
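This particular cubic has three real roots, so Cardano's route passes through complex numbers (casus irreducibilis); numerically the roots are easy to pin down by bisection (my own script, not part of the hint):

```python
def p(l):
    return 8 * l ** 3 - 160 * l - 9

def bisect(f, lo, hi, tol=1e-10):
    # plain bisection; f(lo) and f(hi) must have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# sign changes of p bracket the three real roots
roots = [bisect(p, -5, -3), bisect(p, -1, 0), bisect(p, 4, 5)]
for r in roots:
    assert abs(p(r)) < 1e-6
# no lambda^2 term, so the roots sum to zero
assert abs(sum(roots)) < 1e-6
```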
3,678,033
<p>Let Y be an ordered set with the order topology. Let X be a topological space and let <span class="math-container">$f,g:{X\to Y}$</span> be continuous functions.</p> <p>Show that the set <span class="math-container">$A=\{x\in X\mid f(x)\le g(x)\}$</span> is closed in X. I tried using the complement of A and the Hausdorff property, but I didn't get anywhere...</p>
Thomas
128,832
<p>note that in <span class="math-container">$Y\times Y$</span> with the product topology, <span class="math-container">$\{(y_1, y_2)| y_1 \le y_2\}$</span> is closed, and <span class="math-container">$(f,g):X\rightarrow Y\times Y$</span> is continuous, since each factor is.</p>
3,678,033
<p>Let Y be an ordered set with the order topology. Let X be a topological space and let <span class="math-container">$f,g:{X\to Y}$</span> be continuous functions.</p> <p>Show that the set <span class="math-container">$A=\{x\in X\mid f(x)\le g(x)\}$</span> is closed in X. I tried using the complement of A and the Hausdorff property, but I didn't get anywhere...</p>
Henno Brandsma
4,280
<p>Let <span class="math-container">$p \notin A$</span>, so that <span class="math-container">$f(p) &gt; g(p)$</span> in <span class="math-container">$Y$</span>. </p> <p>Case 1: if there exists a <span class="math-container">$y_0 \in Y$</span> with <span class="math-container">$f(p) &gt; y_0 &gt; g(p)$</span> then check that</p> <p><span class="math-container">$U = f^{-1}[(y_0, \rightarrow)] \cap g^{-1}[(\leftarrow, y_0)]$</span> contains <span class="math-container">$p$</span> and is a subset of <span class="math-container">$X \setminus A$</span>, showing that <span class="math-container">$p$</span> is an interior point of <span class="math-container">$X \setminus A$</span>.</p> <p>Case 2: no such <span class="math-container">$y_0$</span> exists and in that case</p> <p><span class="math-container">$U = f^{-1}[(g(p), \rightarrow)] \cap g^{-1}[(\leftarrow, f(p))]$</span> has the same properties, so again <span class="math-container">$p$</span> is an interior point of <span class="math-container">$X \setminus A$</span>. </p> <p>So <span class="math-container">$X \setminus A$</span> is open, i.e. <span class="math-container">$A$</span> is closed. (In both cases <span class="math-container">$U$</span> is open because we use the order topology on <span class="math-container">$Y$</span> and <span class="math-container">$f,g$</span> are continuous.)</p>
183,243
<p>Gödel's incompleteness theorem states that: "<em>if a system is consistent, it is not complete.</em>" And it's well known that there are unprovable statements in ZF, e.g. GCH, AC, etc.</p> <p>However, why does this mean that ZF is consistent? What does "<em>relatively consistent</em>" actually mean?</p>
Cameron Buie
28,900
<p>Relatively consistent means that if some other system is consistent, then so is the given system. For example, ZF is relatively consistent with ZF-Foundation (and vice versa), and relatively consistent with ZFC (and vice versa). For a unidirectional example, the axioms of Peano arithmetic are relatively consistent with ZF. However, the consistency of Peano arithmetic does not entail the consistency of ZF. (Thanks again, Henning.)</p>
481,173
<p>The most common way to find the inverse of a matrix is $M^{-1}=\frac1{\det(M)}\mathrm{adj}(M)$. However it is very troublesome when the matrix is large.</p> <p>I found a very interesting way to get the inverse matrix and I want to know why it can be done like this. For example, if you want to find the inverse of $$M=\begin{bmatrix}1 &amp; 2 \\ 3 &amp; 4\end{bmatrix}$$</p> <p>First, write an identity matrix on the right hand side and carry out some steps:</p> <p>$$\begin{bmatrix}1 &amp; 2 &amp;1 &amp;0 \\ 3 &amp; 4&amp;0&amp;1\end{bmatrix}\to\begin{bmatrix}1 &amp; 2 &amp;1 &amp;0 \\ 3/2 &amp; 2&amp;0&amp;1/2\end{bmatrix}\to\begin{bmatrix}1/2 &amp; 0 &amp;-1 &amp;1/2 \\ 3/2 &amp; 2&amp;0&amp;1/2\end{bmatrix}\to\begin{bmatrix}3/2 &amp; 0 &amp;-3 &amp;3/2 \\ 3/2 &amp; 2&amp;0&amp;1/2\end{bmatrix}$$ $$\to\begin{bmatrix}3/2 &amp; 0 &amp;-3 &amp;3/2 \\ 0 &amp; 2&amp;3&amp;-1\end{bmatrix}\to\begin{bmatrix}1 &amp; 0 &amp;-2 &amp;1 \\ 0 &amp; 2&amp;3&amp;-1\end{bmatrix}\to\begin{bmatrix}1 &amp; 0 &amp;-2 &amp;1 \\ 0 &amp; 1&amp;3/2&amp;-1/2\end{bmatrix}$$</p> <p>You can 1. swap any two rows of the matrix, 2. multiply any row by a nonzero constant, 3. add one row to another row — just like in Gaussian elimination. When the identity matrix has shifted to the left, the right hand side becomes</p> <p>$$M^{-1}=\begin{bmatrix}-2 &amp;1 \\3/2&amp;-1/2\end{bmatrix}$$</p> <p>How can one prove that this method works?</p>
littleO
40,119
<p>This is probably the standard way to compute the inverse of a matrix.</p> <p>Suppose $AB = I$. Looking at this equation column by column, we see that $Ab_1 = e_1,\ldots, A b_n = e_n$, where $b_1,\ldots,b_n$ are the columns of $B$ and $\{e_1,\ldots,e_n\}$ is the standard basis of $\mathbb R^n$.</p> <p>Thus, to compute the inverse of $A$, we need to solve the equations $Ax_1 = e_1,\ldots, Ax_n = e_n$. These equations are all solved at once by Gaussian elimination in the method you described.</p>
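A compact implementation of the procedure (my own sketch, using exact rational arithmetic) reproduces the worked 2×2 example:

```python
from fractions import Fraction

def invert(M):
    # Gauss-Jordan elimination on the augmented matrix [M | I]
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)]
         + [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)  # pivot search
        A[col], A[piv] = A[piv], A[col]
        A[col] = [a / A[col][col] for a in A[col]]              # scale pivot row
        for r in range(n):
            if r != col and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

M = [[1, 2], [3, 4]]
Minv = invert(M)
assert Minv == [[Fraction(-2), Fraction(1)], [Fraction(3, 2), Fraction(-1, 2)]]
# sanity check: M * Minv = I
for i in range(2):
    for j in range(2):
        assert sum(M[i][k] * Minv[k][j] for k in range(2)) == (1 if i == j else 0)
```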
1,943,478
<p>So the prompt is merely an existence proof--just find a $u$ and $v$ that work. Well, I'm unfortunately a little stuck on getting started.</p> <p>I know that $Q \in SO_4(\mathbb R) \implies QQ^T = I \text{ and } \det(Q) = 1$.</p> <p>I tried to solve $Qx = uxv$ for $u,v$ but I was not able to do so successfully. This is because I don't necessarily know if $x$ has an inverse.</p> <p>Here's what I managed to deduce successfully:</p> <p>$$|Qx| = |uxv| = |u||x||v| = 1\cdot |x| \cdot 1 = |x|$$</p> <p>which means that multiplying $x \in \mathbb H$ by $Q$ doesn't change its length.</p> <p>Let $Q = [q_1 \,|\, q_2 \,|\, q_3 \,|\, q_4]$, where $q_i$ is the $i^{th}$ column of $Q$, and let $x = a + bi + cj + dk$. Then,</p> <p>$$\begin{align}Qx &amp; = Q(a + bi + cj + dk)\\&amp; = aQ + bQi + cQj + dQk \\&amp;= aQ + bQ\begin{pmatrix}0 \\ 1 \\ 0 \\ 0\end{pmatrix} + cQ\begin{pmatrix}0 \\ 0 \\ 1 \\ 0\end{pmatrix} + dQ\begin{pmatrix}0 \\ 0 \\ 0 \\ 1\end{pmatrix} \\ &amp; = aQ + bq_2 + cq_3 + dq_4\end{align}$$</p> <p>And unfortunately I don't see where to go from here. I'm not entirely sure that I did the multiplication of a quaternion by a matrix correctly. If $a \in \mathbb R$, what does $aQ$ mean? Thus, I don't think that's right.</p> <p>Likely, the solution will boil down to $v = u^{-1}$ or something. But I'm still not quite sure how to get there.</p>
John Hughes
114,036
<p>Here's a proof that you may find completely unsatisfactory, depending on whether you know about bundles and things like that. </p> <p>Consider the map $p : SO(4) \to \mathbb S^3 : [q_1, q_2, q_3, q_4] \mapsto q_1$ that selects from a matrix its first column. </p> <p>This is a fibration, and its fiber ($p^{-1}(q_1)$ for $q_1 = \begin{bmatrix}1\\0\\0\\0 \end{bmatrix}$, for example) is just the set of $4 \times 4 $ orthogonal matrices whose first row and column are $(1,0,0,0)$. If you look at the lower right $3 \times 3$ matrix, it must therefore be in $SO(3)$, since all its columns are orthonormal, and a first-row expansion on the $4 \times 4$ matrix shows that the determinant is $+1$ instead of $-1$. </p> <p>So now we have $$ SO(3) \to SO(4) \to S^3 $$ as a sequence of maps, where the first is the injection of the fiber and the second is a fibration. That means that $SO(4)$ can be written as a bundle where we look at a trivial $SO(3)$ bundle over the top half of $S^3$ and a trivial $SO(3)$ bundle over the bottom half, and the "gluing map" along the equator is a map from $S^2 \to SO(3)$, i.e., an element of $\pi_2(SO(3))$. Since $SO(3)$ is a Lie group, we know that $\pi_2(SO(3)) = 0$. </p> <p>So the gluing map is trivial, and there's a decomposition of $SO(4)$ as $SO(3) \times S^3$. (I have a feeling that was rather the long way around, but so be it.)</p> <p>So do this: let $Q$ be your matrix, and let $u$ be the first column of $Q$, and let $L_{u^{-1}}$ be the matrix that represents left quaternion multiplication by $u^{-1}$ (quaternion inverse!), so that $L_{u^{-1}} \cdot e_1 = u^{-1} e_1= u^{-1}$, for instance, and $L_{u^{-1}}u = u^{-1}u = \mathbf {1}$, the quaternion $1$, which corresponds to the vector $e_1$. </p> <p>What is $L_{u^{-1}}Q$? Well, its first column is $e_1$, so its $(1,1)$ entry is $1$, so its first row (which must be a unit vector!) is $(1,0,0,0)$. 
So $$ L_{u^{-1}}Q = \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; &amp; &amp; \\ 0 &amp; &amp; A &amp; \\ 0 &amp; &amp; &amp; \end{bmatrix} $$ where $A$ is a $3 \times 3$ matrix that must in fact be in $SO(3)$. </p> <p>Now every rotation of 3-space, considered as the space of pure-vector quaternions, can be expressed in the form $q \mapsto s^{-1} q s$ for some unit quaternion $s$. So we can write the matrix $A$ in the form $L_{s^{-1}} R_s$, where these are the matrices of the linear transformations "multiply on the left by $s^{-1}$" and "multiply on the right by $s$", respectively. </p> <p>So we have \begin{align} L_{u^{-1}} Q &amp;= L_{s^{-1}}R_s \\ Q &amp;= L_u L_{s^{-1}}R_s \\ \end{align} i.e., the matrix $Q$ represents left multiplication by $us^{-1}$ followed by right multiplication by $s$. Picking the $u$ in your solution to be my $us^{-1}$ and $v$ to be my $s$, we're done. </p> <p>(Apologies for the rambling answer...but I got there in the end...)</p>
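The starting observation of the question — that $x \mapsto uxv$ with unit quaternions $u,v$ preserves the norm — is easy to sanity-check numerically with a small Hamilton-product helper (mine, not part of the answer):

```python
import math
import random

def qmul(p, q):
    # Hamilton product of quaternions (a, b, c, d) ~ a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2)

def qnorm(q):
    return math.sqrt(sum(t * t for t in q))

random.seed(1)

def random_unit():
    q = [random.gauss(0, 1) for _ in range(4)]
    n = qnorm(q)
    return tuple(t / n for t in q)

for _ in range(100):
    u, v = random_unit(), random_unit()
    x = tuple(random.gauss(0, 1) for _ in range(4))
    # |u x v| = |u| |x| |v| = |x|
    assert abs(qnorm(qmul(qmul(u, x), v)) - qnorm(x)) < 1e-9
```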
2,668,826
<p>I am stuck on this result, which the professor wrote as "trivial", but I don't find a way out.</p> <p>I have the function </p> <p>$$f_{\alpha}(t) = \frac{1}{2\pi} \sum_{k = 1}^{+\infty} \frac{1}{k}\int_0^{\pi} (\alpha(p))^k \sin^{2k}(\epsilon(p) t)\ dp$$</p> <p>and he told us that for $t\to +\infty$ we have:</p> <p>$$f_{\alpha}(t) = \frac{1}{2\pi} \sum_{k = 1}^{+\infty} \frac{1}{4^k k}\binom{2k}{k}\int_0^{\pi} (\alpha(p))^k\ dp$$</p> <p>Now, it's all about the sine since it's the only term with a dependence on $t$. Yet I cannot find a way to send</p> <p>$$\sin^{2k}(\epsilon(p) t)$$</p> <p>into</p> <p>$$\frac{1}{4^k}\binom{2k}{k}$$</p> <p>Any help? Thank you so much.</p> <p><strong>More Details</strong></p> <p>$\epsilon(p)$ is a positive, bounded, continuous function.</p> <p>The "true" starting point was</p> <p>$$f_{\alpha}(t) = -\frac{1}{2\pi}\int_0^{\pi} \log\left(1 - \alpha(p)\sin^2(\epsilon(p)t)\right)\ dp$$</p> <p>Then I thought I could have expanded the logarithm in a series. Maybe I shouldn't have...</p>
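The professor's claim amounts to replacing $\sin^{2k}(\epsilon(p)t)$ by its long-time average, which for $\epsilon(p)>0$ does not depend on $\epsilon(p)$. A quick numerical check (my own script, not from the lecture) that the time average of $\sin^{2k}$ is indeed $\frac{1}{4^k}\binom{2k}{k}$:

```python
import math

def time_avg_sin_pow(k, periods=200, N=100_000):
    # midpoint-rule average of sin^(2k)(t) over an integer number of periods
    T = periods * math.pi
    h = T / N
    return sum(math.sin((i + 0.5) * h) ** (2 * k) for i in range(N)) * h / T

for k in (1, 2, 3):
    # 1/2, 6/16 = 3/8, 20/64 = 5/16
    assert abs(time_avg_sin_pow(k) - math.comb(2 * k, k) / 4 ** k) < 1e-4
```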
Felix Marin
85,343
<p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{\infty}\ln\pars{1 + x^{11} \over 1 + x^{3}} \,{\dd x \over \pars{1 + x^{2}}\ln\pars{x}}:\ {\Large ?}}$.</p> </blockquote> <p>\begin{equation} \mbox{Note that}\quad \begin{array}{|l|}\hline\mbox{}\\ \ds{\quad\int_{0}^{\infty}\ln\pars{1 + x^{11} \over 1 + x^{3}} \,{\dd x \over \pars{1 + x^{2}}\ln\pars{x}} =\quad} \\[3mm] \ds{\quad\int_{0}^{\infty} {\ln\pars{1 + x^{11}} \over \pars{1 + x^{2}}\ln\pars{x}}\,\dd x - \int_{0}^{\infty} {\ln\pars{1 + x^{3}} \over \pars{1 + x^{2}}\ln\pars{x}}\,\dd x\quad} \\ \mbox{}\\ \hline \end{array} \label{1}\tag{1} \end{equation} <hr> With $\ds{\mu &gt; 0}$: \begin{align} &amp;\bbox[10px,#ffd]{\ds{\int_{0}^{\infty} {\ln\pars{1 + x^{\mu}} \over \pars{1 + x^{2}}\ln\pars{x}}\,\dd x}} \,\,\,\stackrel{x\ \mapsto\ 1/x}{=}\,\,\, \int_{\infty}^{0} {\ln\pars{1 + 1/x^{\mu}} \over \pars{1 + 1/x^{2}}\ln\pars{1/x}} \pars{-\,{\dd x \over x^{2}}} \\[5mm] = &amp;\ -\int_{0}^{\infty} {\ln\pars{x^{\mu} + 1} - \mu\ln\pars{x}\over \pars{x^{2} + 1}\ln\pars{x}}\,\dd x \\[5mm] \implies &amp;\ \bbx{\int_{0}^{\infty} {\ln\pars{1 + x^{\mu}} \over \pars{1 + x^{2}}\ln\pars{x}}\,\dd x = {1 \over 2}\mu\int_{0}^{\infty}{\dd x \over 1 + x^{2}} = {1 \over 4}\,\mu\pi} \label{2}\tag{2} \end{align} <hr> \eqref{1} and \eqref{2} lead to $$ 
\bbx{\int_{0}^{\infty}\ln\pars{1 + x^{11} \over 1 + x^{3}} \,{\dd x \over \pars{1 + x^{2}}\ln\pars{x}} = {1 \over 4}\,11\pi - {1 \over 4}\,3\pi = {\large 2\pi}} $$</p>
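Not part of the answer: a numerical cross-check of the final value. The $1/\ln x$ singularity at $x=1$ is removable for the combined integrand (the limit there is $2$), and the substitution $x \mapsto 1/t$ maps the tail $(1,\infty)$ back to $(0,1)$:

```python
import math

def g(x):
    # combined integrand; the 1/log(x) singularity at x = 1 is removable
    if abs(x - 1) < 1e-9:
        return 2.0
    return math.log((1 + x ** 11) / (1 + x ** 3)) / ((1 + x * x) * math.log(x))

N = 50_000
h = 1.0 / N
total = 0.0
for i in range(N):
    t = (i + 0.5) * h
    total += g(t) * h                  # x in (0, 1)
    total += g(1 / t) / (t * t) * h    # x in (1, oo) via x -> 1/t

assert abs(total - 2 * math.pi) < 1e-3
```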
2,707,514
<p>I'm reading <em>A First Course in Modular Forms</em> by Diamond and Shurman and am confused on a small point in Chapter 2. Let <span class="math-container">$\Gamma$</span> be a congruence subgroup of <span class="math-container">$\operatorname{SL}_2(\mathbb Z)$</span>. A point <span class="math-container">$\tau \in \mathscr H$</span> is called an <em>elliptic point</em> for <span class="math-container">$\Gamma$</span> if the stabilizer of <span class="math-container">$\tau$</span> in the image of <span class="math-container">$\Gamma$</span> in <span class="math-container">$\operatorname{PSL}_2(\mathbb R)$</span> is nontrivial.</p> <blockquote> <p>Proposition 2.1.1 Let <span class="math-container">$\tau_1, \tau_2 \in \mathscr H$</span> be given. There exist open neighborhoods <span class="math-container">$U_i$</span> of <span class="math-container">$\tau_i$</span> in <span class="math-container">$\mathscr H$</span> such that if <span class="math-container">$\gamma \in \operatorname{SL}_2(\mathbb Z), \gamma(U_1) \cap U_2 \neq \emptyset$</span>, then <span class="math-container">$\gamma(\tau_1) = \tau_2$</span>.</p> <p>Corollary 2.2.3 Let <span class="math-container">$\Gamma$</span> be a congruence subgroup of <span class="math-container">$\operatorname{SL}_2(\mathbb Z)$</span>. Each point <span class="math-container">$\tau \in \mathscr H$</span> has a neighborhood <span class="math-container">$U$</span> in <span class="math-container">$\mathscr H$</span> such that <span class="math-container">$\gamma \in \Gamma, \gamma(U) \cap U \neq \emptyset$</span> implies <span class="math-container">$\gamma \in \operatorname{Stab} \tau$</span>. Such a neighborhood has no elliptic points except possibly <span class="math-container">$\tau$</span>.</p> </blockquote> <p>Taking <span class="math-container">$\tau = \tau_1 = \tau_2$</span> and <span class="math-container">$U = U_1 \cap U_2$</span> in the proposition implies everything in the corollary except for the last sentence. 
How do we know that we can choose <span class="math-container">$U$</span> small enough to exclude all elliptic points? In other words, how do we know that the elliptic points in <span class="math-container">$\mathscr H$</span> form a discrete set?</p>
Oven
546,601
<p>Let $\tau$ and $U$ be as in the Corollary. If $\tau' \in U$ is an elliptic point different from $\tau$, fixed by some nontrivial $\gamma \in \text{PSL}_2(\mathbb{Z})$, then $\gamma$ also fixes $\tau$ by the first part of the Corollary. It is thus sufficient to show that if $\gamma \in \text{PSL}_2(\mathbb{R})$ fixes two distinct points of $\mathscr{H}$, then $\gamma$ is the identity. This is immediately verified (say, by assuming that one of the points is $\sqrt{-1}$).</p>
4,025,279
<p>I have been reading material on uniformly continuous functions, and going through problems where we have to prove that a function is or is not uniformly continuous.</p> <p>A function defined on an interval I is said to be uniformly continuous on I if to each <span class="math-container">$\epsilon$</span> there exists a <span class="math-container">$\delta$</span> such that</p> <p><span class="math-container">$|f(x_1)-f(x_2)| &lt; \epsilon$</span>, for arbitrary points <span class="math-container">$x_1, x_2$</span> of I for which <span class="math-container">$|x_1-x_2|&lt;\delta$</span>.</p> <p>Now I understand the above definition in mathematical terms and am able to apply it to solve problems. But I don't understand why this definition was introduced, or what uniformly continuous functions look like. If given the graph of a function, how can I tell if it is uniformly continuous? When I imagine continuous functions, I have a picture in mind of what they look like, but for uniformly continuous functions I can't think of any picture.</p> <p>I hope I am able to explain my question correctly. Please help me understand.</p>
Ethan Bolker
72,858
<p>Intuitively, a continuous function is uniformly continuous when it's never too steep.</p> <p>If you want to draw one that's not, the domain should be an open interval (bounded or unbounded), since a continuous function on a closed bounded interval is uniformly continuous.</p> <p>My favorite example is <span class="math-container">$$ f(x) = \frac{1}{1-x} $$</span> on the interval <span class="math-container">$(0,1)$</span>. The nearer <span class="math-container">$x$</span> is to <span class="math-container">$1$</span> the steeper the graph, so the smaller a <span class="math-container">$\delta$</span> you need for each given <span class="math-container">$\epsilon$</span>.</p>
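<p>To make the picture quantitative: for a fixed $\epsilon$, the largest usable $\delta$ at a point $x$ shrinks to $0$ as $x \to 1$, which is exactly why no single $\delta$ works on all of $(0,1)$. A short Python sketch of this (my own illustration, not part of the original answer; the sample points and $\epsilon$ are arbitrary choices):</p>

```python
def f(x):
    # The example function 1/(1-x) on (0, 1).
    return 1.0 / (1.0 - x)

def largest_delta(x, eps):
    # Largest delta > 0 with f(x + delta) - f(x) = eps, solved exactly from
    # 1/(1 - x - delta) = eps + 1/(1 - x).
    return (1.0 - x) - 1.0 / (eps + 1.0 / (1.0 - x))

eps = 0.1
deltas = [largest_delta(x, eps) for x in (0.9, 0.99, 0.999)]
print(deltas)  # shrinks toward 0 as x approaches 1
```

<p>Since $f$ is increasing, solving $f(x+\delta)-f(x)=\epsilon$ really does give the largest $\delta$ that works to the right of $x$.</p>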
640,554
<p>For the system $$ \left\{ \begin{array}{rcrcrcr} x &amp;+ &amp;3y &amp;- &amp;z &amp;= &amp;-4 \\ 4x &amp;- &amp;y &amp;+ &amp;2z &amp;= &amp;3 \\ 2x &amp;- &amp;y &amp;- &amp;3z &amp;= &amp;1 \end{array} \right. $$ what is the condition to determine whether there is no solution, a unique solution, or infinitely many solutions? </p> <p>Thank you!</p>
Cameron Buie
28,900
<p>Probably the most straightforward method (to fully distinguish between the various possibilities) that I've seen is transforming the corresponding augmented matrix into row-reduced echelon form. In this case, you would start with: <span class="math-container">$$\left[\begin{array}{ccc|c}1 &amp; 3 &amp; -1 &amp; -4\\4 &amp; -1 &amp; 2 &amp; 3\\2 &amp; -1 &amp; -3 &amp; 1\end{array}\right]$$</span> Subtracting <span class="math-container">$4$</span> times the first row from the second, and <span class="math-container">$2$</span> times the first row from the third, we have: <span class="math-container">$$\left[\begin{array}{ccc|c}1 &amp; 3 &amp; -1 &amp; -4\\0 &amp; -13 &amp; 6 &amp; 19\\0 &amp; -7 &amp; -1 &amp; 9\end{array}\right]$$</span> Subtracting <span class="math-container">$2$</span> times the third row from the second, we have: <span class="math-container">$$\left[\begin{array}{ccc|c}1 &amp; 3 &amp; -1 &amp; -4\\0 &amp; 1 &amp; 8 &amp; 1\\0 &amp; -7 &amp; -1 &amp; 9\end{array}\right]$$</span> Adding <span class="math-container">$7$</span> times the second row to the third, we have: <span class="math-container">$$\left[\begin{array}{ccc|c}1 &amp; 3 &amp; -1 &amp; -4\\0 &amp; 1 &amp; 8 &amp; 1\\0 &amp; 0 &amp; 55 &amp; 16\end{array}\right]$$</span></p> <p>At this point, we have only zeroes below the main diagonal, but no zeroes on the diagonal, so a unique solution exists. 
Continuing to reduce until the <span class="math-container">$3\times 3$</span> portion of the augmented matrix is just the <span class="math-container">$3\times 3$</span> identity matrix, we have <span class="math-container">$$\left[\begin{array}{ccc|c}1 &amp; 0 &amp; 0 &amp; 3/11\\0 &amp; 1 &amp; 0 &amp; -73/55\\0 &amp; 0 &amp; 1 &amp; 16/55\end{array}\right]$$</span> This tells us that <span class="math-container">$x=3/11,$</span> <span class="math-container">$y=-73/55,$</span> <span class="math-container">$z=16/55$</span> is the unique solution to the system.</p> <hr> <p>Let's consider another system: <span class="math-container">$$\begin{cases}x+3y-z=4\\4x-y+2z=8\\2x-7y+4z=-3,\end{cases}$$</span> which has corresponding matrix <span class="math-container">$$\left[\begin{array}{ccc|c}1 &amp; 3 &amp; -1 &amp; 4\\4 &amp; -1 &amp; 2 &amp; 8\\2 &amp; -7 &amp; 4 &amp; -3\end{array}\right].$$</span> Starting out the same way gets us <span class="math-container">$$\left[\begin{array}{ccc|c}1 &amp; 3 &amp; -1 &amp; 4\\0 &amp; -13 &amp; 6 &amp; -8\\0 &amp; -13 &amp; 6 &amp; -11\end{array}\right],$$</span> and subtracting the second row from the third gives us <span class="math-container">$$\left[\begin{array}{ccc|c}1 &amp; 3 &amp; -1 &amp; 4\\0 &amp; -13 &amp; 6 &amp; -8\\0 &amp; 0 &amp; 0 &amp; -3\end{array}\right].$$</span> Now we have only zeroes below the main diagonal, but we have a zero on the main diagonal, too. This tells us that either there are no solutions or there are infinitely-many. 
Translated back into terms of <span class="math-container">$x,y,z$</span> this is the equivalent system <span class="math-container">$$\begin{cases}x+3y-z=4\\0x-13y+6z=-8\\0x+0y+0z=-3,\end{cases}$$</span> or alternatively <span class="math-container">$$\begin{cases}x=-\frac5{13}z+\frac{28}{13}\\y=\frac6{13}z+\frac8{13}\\0=-3,\end{cases}$$</span> but there is no solution to the last equation, so no solution to the system.</p> <p><strong>Upshot</strong>: We will have no solutions whenever we end up with one or more rows of all <span class="math-container">$0$</span>s <em>except in the last column</em> as we reduce the augmented matrix.</p>
We might as well let <span class="math-container">$z$</span> take on any value, at which point the other two equations will tell us the values that <span class="math-container">$x$</span> and <span class="math-container">$y$</span> must take. Hence, we have infinitely-many solutions. </p> <p><strong>Upshot</strong>: We will have infinitely-many solutions whenever we end up with one or more rows of all <span class="math-container">$0$</span>s as we reduce the augmented matrix, so long as we don't have any rows with all <span class="math-container">$0$</span>s except in the last column.</p> <hr> <p><strong>Added</strong>: Simply taking the determinant of the unaugmented matrix of the system--meaning of <span class="math-container">$$\begin{bmatrix}1 &amp; 3 &amp; -1\\4 &amp; -1 &amp; 2\\2 &amp; -1 &amp; -3\end{bmatrix}$$</span> in the first example and of <span class="math-container">$$\begin{bmatrix}1 &amp; 3 &amp; -1\\4 &amp; -1 &amp; 2\\2 &amp; -7 &amp; 4\end{bmatrix}$$</span> in the other two examples--will give us part of the answer. If the determinant is <span class="math-container">$0$</span> (as in the second and third example), then the system either has no solution or infinitely-many, but we cannot (by this method alone) say which. Otherwise, the system has a unique solution, but we cannot (by this method alone) say what it might be. That's why I tend to prefer the first method I suggested, at least when dealing with only a few equations and a few variables: it tells us the whole story.</p>
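<p>The whole classification can be automated; the sketch below (my addition, not part of the original answer; exact rationals via <code>fractions.Fraction</code> are just one convenient implementation choice) row-reduces an augmented matrix and reports which of the three cases occurs, reproducing the verdicts for the three systems above.</p>

```python
from fractions import Fraction

def classify(augmented, nvars):
    """Gauss-Jordan elimination over exact rationals.

    Returns 'unique', 'none', or 'infinite'."""
    rows = [[Fraction(v) for v in row] for row in augmented]
    rank = 0
    for col in range(nvars):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        rows[rank] = [v / rows[rank][col] for v in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] != 0:
                factor = rows[i][col]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    # A row of all-zero coefficients with a nonzero right-hand side is inconsistent.
    if any(all(v == 0 for v in row[:nvars]) and row[nvars] != 0 for row in rows):
        return "none"
    return "unique" if rank == nvars else "infinite"

print(classify([[1, 3, -1, -4], [4, -1, 2, 3], [2, -1, -3, 1]], 3))  # unique
print(classify([[1, 3, -1, 4], [4, -1, 2, 8], [2, -7, 4, -3]], 3))   # none
print(classify([[1, 3, -1, 4], [4, -1, 2, 8], [2, -7, 4, 0]], 3))    # infinite
```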
36,477
<p>The length, width and height of a rectangular box are measured to be 3cm, 4cm and 5cm respectively, with a maximum error of 0.05cm in each measurement. Use differentials to approximate the maximum error in the calculated volume.</p> <p><br><br> Please help</p>
Américo Tavares
752
<p>Let $V=xyz$ be the actual volume of the box, where $x,y,z$ are respectively its actual length, width and height, and let $V_{0}=x_{0}y_{0}z_{0}=3\cdot 4\cdot 5=60$ cm$^{3}$ be the measured volume.</p> <p>The maximum error of 0.05 cm in each measurement means that $\left\vert x-3\right\vert \leq 0.05$, $\left\vert y-4\right\vert \leq 0.05$, $% \left\vert z-5\right\vert \leq 0.05$ cm. </p> <p>Since these three measurement errors are small, respectively, $\pm \frac{5}{3}\%$, $\pm \frac{5}{4}\%$ and $\pm \frac{5}{5}\%$, of the measured length, width and height, we can approximate the error in the computed volume by the differential $dV$ evaluated at $(x_{0},y_{0},z_{0})=(3,4,5)$, and taking $dx=dy=dz=\max \left\{ \left\vert x-3\right\vert ,\left\vert y-4\right\vert ,\left\vert z-5\right\vert \right\} =0.05$ cm:</p> <p>$$\begin{eqnarray*} dV &amp;=&amp;\left. \frac{\partial }{\partial x}\left( xyz\right) \right\vert _{(3,4,5)}dx+\left. \frac{\partial }{\partial y}\left( xyz\right) \right\vert _{(3,4,5)}dy+\left. \frac{\partial }{\partial z}\left( xyz\right) \right\vert _{(3,4,5)}dz \\ &amp;=&amp;4\cdot 5\cdot 0.05+3\cdot 5\cdot 0.05+3\cdot 4\cdot 0.05 \\ &amp;=&amp;2.35\text{ cm}^{3}. \end{eqnarray*}$$</p> <p>Of course, for $dx=dy=dz=-0.05$ cm the value of the differential $dV$ would be symmetric: $dV=-2.35\text{ cm}^{3}.$</p> <p>Indeed these values compare very well with the maximum error $\varepsilon _{\max }$ in the computed volume: </p> <p>$$-2.3201=2.95\cdot 3.95\cdot 4.95-V_{0}\leq \varepsilon\leq 3.05\cdot 4.05\cdot 5.05-V_{0}=2.3801.$$</p>
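<p>As a numerical cross-check (my own addition, not part of the original solution), both the differential estimate and the exact worst-case errors can be computed directly:</p>

```python
# Measured dimensions and the maximum measurement error in each (cm).
x0, y0, z0, err = 3.0, 4.0, 5.0, 0.05

# Differential estimate dV = yz dx + xz dy + xy dz at (3, 4, 5), dx = dy = dz = err.
dV = (y0 * z0 + x0 * z0 + x0 * y0) * err
print(dV)  # 2.35

# Exact worst-case errors in the computed volume.
V0 = x0 * y0 * z0
low = (x0 - err) * (y0 - err) * (z0 - err) - V0
high = (x0 + err) * (y0 + err) * (z0 + err) - V0
print(low, high)  # about -2.3201 and 2.3801
```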
1,605,281
<p><a href="https://i.stack.imgur.com/43uoh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/43uoh.png" alt="The question about finding the exact value of the sine of the angle between (PQ) and the plane"></a></p> <p>I have done part (a). For part (b), I know the principle of how to do it; I tried to use the cross product to find the exact value of the sine of the angle. </p> <p>So I found PQ which is $$\begin{pmatrix}7-2\\-1-1\\2-6\end{pmatrix}$$ </p> <p>$$=\begin{pmatrix}5\\-2\\-4\end{pmatrix}$$</p> <p>Then I do the cross product between $$\begin{pmatrix}5\\-2\\-4\end{pmatrix}$$ and $$\begin{pmatrix}5\\-3\\-1\end{pmatrix}$$ which is the normal vector for the given plane equation,</p> <p>and using $$||{\bf a} \times {\bf b}|| = ||{\bf a}|| \, ||{\bf b}|| \sin \theta,$$ I got the answer to be $\sqrt{490}/\sqrt{490}$, which makes $\sin\theta =1$, but the answer states that $\sin\theta = \sqrt 7/3$. Where did I go wrong? </p> <p>Thank you and sorry in advance for any wrong tags and title labelling. </p>
Jendrik Stelzner
300,783
<p>First off, $[-a,-b) = \{x \mid -a \leq x &lt; -b\}$ and $(-b,-a] = \{x \mid -b &lt; x \leq -a\}$ are different things.</p> <p>You also want to assume $a &lt; b$ to ensure $[a,b)$ is non-empty, for otherwise your counterexample does not work.</p> <p>You claim that $(-b,-a] \notin \mathcal{T}_l$, which you have to justify. It is important to notice that $\mathcal{T}_l$ is <strong>generated</strong> by intervals of the form $[a,b)$, but contains even more sets. For example $(0,1) = \bigcup_{n \geq 1} [1/n,1) \in \mathcal{T}_l$.</p> <p>So for your counterexample to work you have to show that $\mathcal{T}_l$ does not contain $(-b,-a]$. This can be done by explicitly calculating $\mathcal{T}_l$ and showing that $(-b,-a] \notin \mathcal{T}_l$ if $a &lt; b$. (<strong>Hint</strong>: Using the above example you can see that $\mathcal{T}_l$ must contain the half-open intervals $[a,b)$ (with $b = \infty$ allowed) and open intervals $(a,b)$ (with $a = -\infty$ and $b = \infty$ allowed). Check if this already is a topology, or what is missing.)</p>
129,295
<p>$$\int{\sqrt{x^2 - 2x}}$$</p> <p>I think I should be doing trig substitution, but which? I completed the square giving </p> <p>$$\int{\sqrt{(x-1)^2 -1}}$$</p> <p>But the closest I found is for</p> <p>$$\frac{1}{\sqrt{a^2 - (x+b)^2}}$$ </p> <p>So I must add a $-$, but how? </p>
N3buchadnezzar
18,908
<p>The standard way to solve these problems is indeed with trigonometric substitutions. But it is not the <em>only</em> way to solve these. Another way, used more often in older texts, is called <em>Euler substitutions</em>:</p> <blockquote> <p>if we are to integrate $\sqrt{ax^2+bx+c}$ that has real roots $\alpha$ and $\beta$, then we can use the substitution $\sqrt{ax^2+bx+c}=x \cdot t$</p> </blockquote> <p>Now, in your problem we have $\alpha=0$ and $\beta=2$, so we may choose to use the substitution </p> <p>$$ \sqrt{x^2-2x} = (x-0)\cdot t \ \Rightarrow \ x = \frac{2}{1-t^2}$$</p> <p>Since this is homework (and I am lazy), from here I will only outline the details:</p> <p>$$ \begin{align} I &amp; = \int \sqrt{x^2-2x}\,\mathrm{d}x \\ &amp; = \int \frac{8t^2}{1-3t^2+3t^4-t^6}\,\mathrm{d}t \\ &amp; = \int -\frac{8t^2}{(t+1)^3(t-1)^3}\,\mathrm{d}t \\ &amp; = \int - \frac{1}{2}\frac{1}{t+1} + \frac{1}{(t+1)^3} - \frac{1}{2}\frac{1}{(t+1)^2} + \frac{1}{2}\frac{1}{t-1} - \frac{1}{(t-1)^3} - \frac{1}{2}\frac{1}{(t-1)^2} \, \mathrm{d}t \\ \end{align} $$ </p> <p>and from here the rest is obvious =)</p>
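<p>A partial-fraction decomposition is easy to sanity-check numerically before integrating. The sketch below (my own addition, not part of the original answer) compares $-8t^2/\bigl((t+1)^3(t-1)^3\bigr)$ with the expansion $-\tfrac12\tfrac1{t+1}-\tfrac12\tfrac1{(t+1)^2}+\tfrac1{(t+1)^3}+\tfrac12\tfrac1{t-1}-\tfrac12\tfrac1{(t-1)^2}-\tfrac1{(t-1)^3}$ at a few sample points:</p>

```python
def f(t):
    return -8.0 * t * t / ((t + 1.0) ** 3 * (t - 1.0) ** 3)

def g(t):
    # Candidate partial-fraction decomposition of f.
    return (-0.5 / (t + 1.0) - 0.5 / (t + 1.0) ** 2 + 1.0 / (t + 1.0) ** 3
            + 0.5 / (t - 1.0) - 0.5 / (t - 1.0) ** 2 - 1.0 / (t - 1.0) ** 3)

for t in (0.3, 2.0, -3.5, 7.0):  # any points away from t = +1 and t = -1
    assert abs(f(t) - g(t)) < 1e-9
print("decomposition checked")
```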
33,330
<p>Let $p$ be a complex number. Let $ z_0 = p $ and, for $ n \geq 1 $, define $z_{n+1} = \frac{1}{2} ( z_n - \frac{1}{z_n}) $ if $z_n \neq 0 $. Prove the following:</p> <p>i) If $ \{ z_n \} $ converges to a limit $a$, then $a^2 + 1 = 0 $</p> <p>ii) If $ p $ is real, then $ \{ z_n \} $, if defined, does not converge</p> <p>iii) If $ p = iq $, where $ q \in \mathbb{R} \backslash \{0\} $, then $ \{ z_n \} $ converges.</p> <p>I have been able to do the first two parts of this (the second is because the sequence would be real, but would have to have a complex limit). I am stuck on the third part, though. Any help would be greatly appreciated.</p> <p>Thanks</p>
Adrián Barquero
900
<p>I'm not sure how much help this could provide but anyway. Note that what part i) tells you is that if the sequence $(z_n)$ converges then its limit must be $\pm i \in \mathbb{C}$. So in part ii) you show that if the first term $z_0 = p$ is real then the sequence does not converge because all terms are real so there's no way to get the imaginary $i$.</p> <p>Now look at part iii). You can show that in that case all terms are purely imaginary and that the imaginary part of all the terms in the sequence has the same sign as that of $q$. For example by induction, if $z_n = i p_n$ for some $p_n \in \mathbb{R} \setminus \{ 0 \}$, then</p> <p>$$z_{n+1} = \frac{1}{2} \left ( \frac{z^2_n - 1}{z_n} \right ) = \frac{i}{2} \left ( \frac{p^2_n + 1}{p_n} \right ) = i p_{n+1}$$</p> <p>with $p_{n+1} \in \mathbb{R} \setminus \{ 0 \}$. Thus now you can concentrate on the sequence defined by $p_0 = q$ and </p> <p>$$ p_{n+1} = \frac{p^2_n + 1}{2p_n}$$ This sequence must have limit $\pm 1$, depending on the sign of $q$.</p> <p>Then I guess it is not that hard to just prove that this sequence $(p_n)$ converges by showing that it is monotonic and bounded. I think that you have to distinguish cases depending on whether $q&gt;1$ or not. So for example if $q&gt;1$ then every term of the sequence satisfies $p_n &gt; 1$ and</p> <p>$$p_{n+1} = \frac{p^2_n + 1}{2p_n} = \frac{p_n}{2} + \frac{1}{2p_n} &lt; \frac{p_n}{2} + \frac{p_n}{2} = p_n$$</p> <p>so in this case the sequence is decreasing and bounded below by $1$. The other cases should be similar.</p>
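<p>Numerically the purely imaginary case behaves exactly as described; in fact $p_{n+1} = \frac{p_n^2+1}{2p_n}$ is Newton's method for $p^2 = 1$, so convergence is quadratic once it starts. A quick check (my own addition; the starting values are arbitrary):</p>

```python
def step(z):
    # One step of the recursion z -> (z - 1/z)/2.
    return 0.5 * (z - 1.0 / z)

for z0 in (0.5j, -2.0j, 3.7j):
    z = z0
    for _ in range(60):
        z = step(z)
    print(z0, "->", z)  # +i when Im z0 > 0, -i when Im z0 < 0
```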
2,995,408
<blockquote> <p><span class="math-container">$$ \lim_{x\to 2^-}\frac{x(x-2)}{|(x+1)(x-2)|}= \lim_{x\to 2^-}\left(\frac{x}{|x+1|}\cdot \frac{x-2}{|x-2|}\right) $$</span></p> </blockquote> <p>So as the title says, is it okay to separate a function under the absolute value like this (i.e. into a product of factors), as shown in the denominator?</p>
user
505,767
<p>Yes, of course we can do that, since the two expressions are equivalent; moreover, in this case we can go even further:</p> <p><span class="math-container">$$\lim_{x\to 2^-}\frac{x(x-2)}{|(x+1)(x-2)|}= \lim_{x\to 2^-}\left(\frac{x}{|x+1|}\cdot \frac{x-2}{|x-2|}\right)=\lim_{x\to 2^-}\left(\frac{x}{|x+1|}\right)\cdot \lim_{x\to 2^-}\left(\frac{x-2}{|x-2|}\right)$$</span></p> <p>since the two limits exist and are finite.</p>
2,334,381
<p>Is the strong topology on a metric space the topology induced by the metric? Is an open set in the weak topology also open in the strong topology?</p>
Henno Brandsma
4,280
<p>Yes, in a linear space $X$ that is normed, e.g. we have a metric $d$ from the norm, and the strong topology is the one induced by $d$ (so generated by the $d$-balls). The weak topology looks at all continuous linear functions from $(X,d)$ to the underlying field (often the reals). This set is called $X^\ast$, the dual of $X$.</p> <p>The weak topology $\mathcal{T}_w$ is the smallest topology that makes all functions in $X^\ast$ continuous. By definition the $d$-topology is one of those topologies, so by minimality, $\mathcal{T}_w \subseteq \mathcal{T}_d$. This is only an equality of topologies in the finite dimensional case. Otherwise the weak topology is strictly smaller (i.e. weaker), hence the name.</p>
3,400,123
<blockquote> <p>Prove that <span class="math-container">$$\sum_{l=1}^{\infty} \frac{\sin((2l-1)x)}{2l-1} =\frac{\pi}{4}$$</span> when <span class="math-container">$0&lt;x&lt;\pi$</span></p> </blockquote> <p>The chapter we are working on is about Fourier series, so I guess I'd need to use that somehow.</p> <p>My idea was to use <span class="math-container">$$\sum_{l\in \mathbb Z}c_l e^{i(2l-1)x}= a_0+\sum_{l\geq 1}a_l\cos(lx)+\sum_{l\geq 1} b_l\sin((2l-1)x)$$</span></p> <p>Where <span class="math-container">$a_0$</span> and <span class="math-container">$a_l$</span> would be <span class="math-container">$0$</span>, <span class="math-container">$b_l= \frac{1}{2l-1}$</span>.</p> <p>This would give us <span class="math-container">$c_l = \frac{1}{4il-2i}$</span>. I don't know how knowing that </p> <p><span class="math-container">$$\sum_{l=1}^{\infty} \frac{\sin((2l-1)x)}{2l-1}= \sum_{l\in \mathbb Z} \frac{1}{4il-2i} e^{i(2l-1)x}$$</span> would help here though.</p> <p>Am I even on the right track here? Any hints are much appreciated, thanks in advance :)</p>
J.G.
56,861
<p>Define <span class="math-container">$f(x)$</span> to be <span class="math-container">$\frac{\pi}{4}$</span> on <span class="math-container">$(0,\,\pi)$</span>, <span class="math-container">$0$</span> at multiples of <span class="math-container">$\pi$</span> and <span class="math-container">$-\frac{\pi}{4}$</span> on <span class="math-container">$(-\pi,\,0)$</span>, and extend <span class="math-container">$f$</span> to <span class="math-container">$\Bbb R$</span> by requiring it to be of period <span class="math-container">$2\pi$</span>. Since <span class="math-container">$f$</span> is odd, its Fourier series is of the form <span class="math-container">$\sum_{n\ge1}s_n\sin nx$</span>. Now we just need to prove <span class="math-container">$s_n=\frac{1+(-1)^{n+1}}{2n}$</span>. Indeed <span class="math-container">$$s_n=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\sin nxdx=\frac14\left(\int_0^\pi\sin nxdx+\int_0^{-\pi}\sin nx dx\right)\\=\frac{1}{4n}\left([\cos nx]_\pi^0+[\cos nx]_{-\pi}^0\right)=\frac{1+(-1)^{n+1}}{2n}.$$</span></p>
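<p>A numerical check (my own addition, not part of the original answer): partial sums of the series approach $\frac{\pi}{4}$ at interior points of $(0,\pi)$. Convergence is slow, roughly $O(1/L)$ in the number of terms $L$, since the series comes from a function with jump discontinuities.</p>

```python
import math

def partial_sum(x, terms):
    # Sum of sin((2l-1)x)/(2l-1) for l = 1..terms.
    return sum(math.sin((2 * l - 1) * x) / (2 * l - 1) for l in range(1, terms + 1))

for x in (1.0, 2.0, 3.0):
    print(x, partial_sum(x, 20000), math.pi / 4)
```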
2,559,260
<p>There exists a function $f$ such that $\lim_{x \rightarrow \infty} \frac{f(x)}{x^2} = 25$ and $\lim_{x \rightarrow \infty} \frac{f(x)}{x} = 5$</p> <p>I am confused; I do not know whether it is true or not.</p> <p>I have a counter-example, but I think there might be such a function.</p>
user
505,767
<p>It can't exist since :</p> <p>$$\lim_{x \rightarrow \infty} \frac{f(x)}{x^2} =\lim_{x \rightarrow \infty} \frac{1}{x}\frac{f(x)}{x}=0\cdot5=0$$</p>
2,214,236
<p>The question:</p> <blockquote> <p>An object is dropped from a cliff. How far does the object fall in the 3rd second?</p> </blockquote> <p>I calculated that a ball dropped from rest from a cliff will fall $45\text{ m}$ in $3 \text{ s}$, assuming $g$ is $10\text{ m/s}^2$.</p> <p>$$s = (0 \times 3) + \frac{1}{2}\cdot 10\cdot 3^2 = 45\text{ m}$$</p> <p>But my teacher is telling me $25\text{ m}$! </p> <p>EDITS: His reasoning was that from $t=0$ to $t=1$, $s=10\text{ m}$, and from $t=1$ to $t =2$, $s=20$...</p> <p>The mark scheme also says $25\text{ m}$</p>
hamam_Abdallah
369,188
<p>Measuring the distance fallen from rest, with $v_0=0$, we have</p> <p>$$s(t)=\frac {1}{2}gt^2+v_0t=\frac {1}{2}gt^2,$$</p> <p>so in the first $3$ seconds the object falls</p> <p>$$s(3)=\frac {1}{2}\cdot 10\cdot 3^2=45\text{ m}.$$</p> <p>But "in the 3rd second" means during the interval from $t=2$ to $t=3$, so the required distance is $$s(3)-s(2)=45-20=25\text{ m},$$ which is the mark scheme's answer.</p>
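<p>The distinction between "in $3$ s" and "in the 3rd second" is easy to check numerically (my own illustration, with $g = 10\text{ m/s}^2$ as in the question):</p>

```python
g = 10.0  # m/s^2, as in the question

def s(t):
    # Distance fallen from rest after t seconds.
    return 0.5 * g * t * t

print(s(3))                                  # 45.0 m, total after 3 s
print(s(3) - s(2))                           # 25.0 m, during the 3rd second
print([s(t) - s(t - 1) for t in (1, 2, 3)])  # successive seconds: 5, 15, 25 m
```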
615,093
<p>How to prove that the following sequence converges to $0.5$? $$a_n=\int_0^1{nx^{n-1}\over 1+x}dx$$ What I have tried: I calculated the integral $$a_n=1-n\left(-1\right)^n\left[\ln2-\sum_{i=1}^n {\left(-1\right)^{i+1}\over i}\right]$$ I also noticed ${1\over2}&lt;a_n&lt;1$ $\forall n \in \mathbb{N}$.</p> <p>Then I wrote a C program and verified that $a_n\to 0.5$ (I didn't know the answer before) by calculating $a_n$ up to $n=9990002$ (starting from $n=2$ and each time increasing $n$ by $10^4$). I can't think of how to prove $\{a_n\}$ is monotone decreasing, which is clear from direct calculation.</p>
Yiorgos S. Smyrlis
57,021
<p>Using <a href="http://en.wikipedia.org/wiki/Integration_by_parts" rel="nofollow">integration by parts</a>, we obtain \begin{align} \int_0^1 \frac{nx^{n-1}}{1+x}dx &amp;=\left.\frac{x^n}{1+x}\right|_0^1+\int_0^1\frac{x^n} {(1+x)^2}dx =\frac{1}{2}+r_n, \end{align} where clearly $$ 0&lt;r_n\le \int_0^1 x^n\,dx=\frac{1}{n+1}\longrightarrow 0, $$ as $n\to\infty$.</p>
615,093
<p>How to prove that the following sequence converges to $0.5$? $$a_n=\int_0^1{nx^{n-1}\over 1+x}dx$$ What I have tried: I calculated the integral $$a_n=1-n\left(-1\right)^n\left[\ln2-\sum_{i=1}^n {\left(-1\right)^{i+1}\over i}\right]$$ I also noticed ${1\over2}&lt;a_n&lt;1$ $\forall n \in \mathbb{N}$.</p> <p>Then I wrote a C program and verified that $a_n\to 0.5$ (I didn't know the answer before) by calculating $a_n$ up to $n=9990002$ (starting from $n=2$ and each time increasing $n$ by $10^4$). I can't think of how to prove $\{a_n\}$ is monotone decreasing, which is clear from direct calculation.</p>
Mercy King
23,304
<p>We have $$ a_n=\int_0^1\frac{nx^{n-1}}{1+x}\,dx=\frac{x^n}{1+x}\Big|_0^1+\int_0^1\frac{x^n}{(1+x)^2}\,dx=\frac12+\int_0^1\frac{x^n}{(1+x)^2}\,dx \quad \forall n \ge 1. $$ Since $$ \int_0^1\frac{x^n}{(1+x)^2}\,dx\le \int_0^1x^n\,dx=\frac{1}{n+1} \quad \forall n\ge 1, $$ it follows that $$ \lim_n\int_0^1\frac{x^n}{(1+x)^2}\,dx=0. $$ Thus $\lim_na_n=\frac12$.</p>
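<p>The identity $a_n = \frac12 + \int_0^1\frac{x^n}{(1+x)^2}\,dx$ is easy to confirm numerically. Below is a sketch (my own addition, not part of the original answer; composite Simpson's rule on a uniform grid is just one convenient choice) that evaluates the remainder and checks the bound $0 &lt; a_n - \frac12 &lt; \frac{1}{n+1}$:</p>

```python
def simpson(f, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

for n in (10, 50, 200):
    r = simpson(lambda x: x ** n / (1.0 + x) ** 2, 0.0, 1.0)
    print(n, 0.5 + r)  # a_n, approaching 0.5 from above
    assert 0.0 < r < 1.0 / (n + 1)
```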
834,508
<p>Show that for any natural number $n$, between $n^2$ and $(n+1)^2$ one can find three distinct natural numbers $a,b,c$ such that $a^2+b^2$ is divisible by $c$.</p> <p>A friend and I found, with a computer program, a general case that always works. I would like to see a different solution, or a solution that explains the motivation for finding that general case without a computer.</p>
vonbrand
43,946
<p>The Frobenius problem in general is <em>hard</em> to solve (the 2 number case is easy, see e.g. <a href="http://www.cut-the-knot.org/blue/Sylvester2.shtml" rel="nofollow">here</a> for an instructive technique). I believe the (to date) definitive summary of what is known to be Ramírez-Alfonsín's "The Diophantine Frobenius Problem" (Oxford University Press, 2006). The problem itself pops up all over the place, so it has been well-studied, a web search should turn up lots of references.</p>
2,595,612
<p>Is it true that every limit can only converge, diverge, or (exclusively) not exist?</p> <p>Can I demonstrate that it doesn't exist after I have proved that it neither converges nor diverges?</p> <p>I've never seen this, but it makes some sort of sense to me. If a real number is neither positive nor negative, it must be zero... But with limits.</p>
zwim
399,263
<p>In the following, I'll use $n$ for $n\to +\infty$ and $x$ for $x\to 0$.</p> <p>Converge means it has a limit and this limit is finite.</p> <ul> <li>For instance $\frac 1n$ converges to $0$ </li> </ul> <p>Diverge means does not converge.</p> <p>There are multiple forms of divergence</p> <ul> <li><p>The limit is infinite $\frac 1{x^2}\to+\infty$, so it has a limit in the extended sense.</p></li> <li><p>The limit is infinite but with undefined sign $\frac 1x\to\pm\infty$ whether $x\to 0^+$ or $x\to 0^-$; now even in the extended sense, it has no limit, or we say it doesn't exist.</p></li> <li><p>There are multiple adherence points: $(-1)^n\frac {n+1}{n+2}$ has two limit points $1$ and $-1$ (better said, two convergent subsequences); even worse is the case $\sin(\frac 1x)$, where the whole of $[-1,1]$ is adherent.</p></li> <li><p>There are some wild values, for instance $\dfrac 1{n\sin(\frac{n\pi}8)}$: it "mostly" goes to $0$ but from time to time there are infinite values. Though they don't need to be infinite; take $f(n)=\frac 1n$ and $f(n^2)=\text{random}$, it doesn't converge either.</p></li> </ul> <p>These are only some common examples of divergence, it is not exhaustive; as soon as the values are not all concentrated around one single finite value, there is divergence.</p>
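<p>For instance, the two limit points of $(-1)^n\frac{n+1}{n+2}$ are easy to see numerically (my own illustration):</p>

```python
def a(n):
    return (-1) ** n * (n + 1) / (n + 2)

print([a(n) for n in (1000, 1002, 1004)])  # near +1
print([a(n) for n in (1001, 1003, 1005)])  # near -1
```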
236,933
<p>When I use ListPlot, I want to show the labels, such as</p> <pre><code>ListPlot[Callout[#, #, Above] &amp; /@ Range[10], Joined -&gt; True, Mesh -&gt; All] </code></pre> <p>Now all positions are <strong>Above</strong>, but sometimes the labels will overlap other text, so I want to set some of them to <strong>Below</strong>; for example, the second and third point labels are <strong>Below</strong>, as the image shows.</p> <p><a href="https://i.stack.imgur.com/xZ06X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xZ06X.png" alt="enter image description here" /></a></p>
kglr
125
<pre><code>ListPlot[Callout[#, #, # /. {2 | 3 | 8 -&gt; Below, _ -&gt; Above}] &amp; /@ Range[10], Joined -&gt; True, Mesh -&gt; All] </code></pre> <p><a href="https://i.stack.imgur.com/KDloV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KDloV.png" alt="enter image description here" /></a></p>
377,152
<p>Let n be a fixed natural number. Show that: $$\sum_{r=0}^m \binom {n+r-1}r = \binom {n+m}{m}$$</p> <p>(A): using a combinatorial argument and (B): by induction on $m$?</p>
Arthur
15,500
<p>For (A), you're supposed to find something to count that can be counted in two ways. One should be naturally representable as $\sum_{r=0}^m \binom {n+r-1}r$, and the other as $\binom{n+m}{m}$. Since it is the same thing you counted, the two must be equal.</p> <p>For the other you need to check that it is actually true for a few low-number cases. Then show that <em>if</em> it is true in any given case, then it <em>must</em> be true in the next case.</p> <p>If you want a better answer, you should first add some personal flair to the question. Tell us how you came by it, why you want a solution, and most importantly, what you have tried yourself. Many people on this site (me included) are put off by askers just copying the text of a problem, including the commanding language ("Show that", rather than "how do I show that"). Also, if we don't know what you have tried yourself it is difficult to tailor a solution <em>for you</em>, which is what we strive to do here.</p>
377,152
<p>Let n be a fixed natural number. Show that: $$\sum_{r=0}^m \binom {n+r-1}r = \binom {n+m}{m}$$</p> <p>(A): using a combinatorial argument and (B): by induction on $m$?</p>
Marko Riedel
44,883
<p>For the sake of completeness I add option (C): using generating functions. We have $$\sum_{r=0}^m \binom{n+r-1}{r} = \sum_{r=0}^m [z^n] \frac{z}{(1-z)^{r+1}} = [z^{n-1}] \sum_{r=0}^m \left(\frac{1}{1-z}\right)^{r+1} \\ = [z^{n-1}] \frac{1}{1-z} \frac{1-1/(1-z)^{m+1}}{1-1/(1-z)} = [z^{n-1}] \frac{1-1/(1-z)^{m+1}}{1-z-1} = [z^{n-1}] \frac{1}{z} \left(\frac{1}{(1-z)^{m+1}}-1\right) \\ = [z^n] \left(\frac{1}{(1-z)^{m+1}}-1\right).$$ Now there are two cases: first, for $m=0$, we obtain $$ [z^n] \left(\frac{1}{(1-z)^1}-1\right) =1 = \binom{n+m}{m}.$$ For $m\ge 1$, and keeping in mind that $n$ is a natural number, we get $$[z^n] \left(\frac{1}{(1-z)^{m+1}}-1\right) = [z^{n+1}] \frac{z}{(1-z)^{m+1}} = \binom{n+1+m-1}{m} = \binom{n+m}{m}. $$</p>
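<p>And option (D), for the skeptical: brute-force verification of the identity over a range of $n$ and $m$ (my own addition; <code>math.comb</code> requires Python 3.8+):</p>

```python
from math import comb

for n in range(1, 15):
    for m in range(15):
        lhs = sum(comb(n + r - 1, r) for r in range(m + 1))
        assert lhs == comb(n + m, m), (n, m)
print("identity holds on the tested range")
```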
2,912,759
<p>I am trying to use the characteristic function of the uniform distribution defined on (0,1) to compute the mean. I have calculated the characteristic function (correctly) and used Euler's identity to convert it to the following form:</p> <p>$$\phi_Y(t)=\frac{\sin(t)}{t} + i \frac{1-\cos(t)}{t}$$</p> <p>I should be able to compute the mean (which should be 1/2) by taking the first derivative, multiplying by $\frac{1}{i}$, and evaluating at $t=0$. I've computed the first derivative as:</p> <p>$$\frac{\partial}{\partial t}\phi_Y(t)=\frac{t\cos(t)-\sin(t)}{t^2} + i\frac{t \sin(t) + \cos(t) -1}{t^2}$$</p> <p>And dividing by $i$, this expression simplifies to: $$E[X]=\Big(\frac{t\sin(t)+\cos(t)-1+i\sin(t)-it\cos(t)}{t^2}\Big)\bigg\rvert_{t=0}$$</p> <p>This expression is undefined, because of division by 0. Am I missing something here? </p>
Kavi Rama Murthy
142,385
<p>You have to use limiting values. Even in your formula for $\phi_Y (t)$ there is $t$ in the denominator. It doesn't mean $\phi_Y (0)$ is not defined. The value is $1$, which is also $\lim_{t \to 0} \phi_Y (t)$. Similarly, you can find $EY$ by computing $\frac{1}{i}\lim _{t \to 0} \frac {\partial} {\partial t} \phi_Y (t)$. By expanding $\sin$ and $\cos$ in their Taylor series you get $\lim _{t \to 0} \frac {\partial} {\partial t} \phi_Y (t)=\frac i 2$, hence $EY=\frac 1 2$. </p>
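<p>Numerically (my own addition) both facts are visible: $\phi_Y(t) \to 1$ as $t \to 0$, and a symmetric difference quotient for $\phi_Y'(0)$, divided by $i$, approaches $\frac12$:</p>

```python
import cmath

def phi(t):
    # Characteristic function of Uniform(0, 1), valid for t != 0.
    return (cmath.exp(1j * t) - 1.0) / (1j * t)

print(phi(1e-6))  # close to 1

h = 1e-4
mean = (phi(h) - phi(-h)) / (2.0 * h * 1j)  # approximates phi'(0)/i = E[Y]
print(mean)  # close to 0.5
```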
1,456,411
<p>When we're introduced to $\mathbb{R}^3$ in multivariable calculus, we first think of it as a collection of points. Then we're taught that you can have these things called <em>vectors</em>, which are (equivalence classes of) arrows that start at one point and end up at another.</p> <p>At this point $\mathbb{R}^3$ is an affine space, not a vector space: for two points $x, y \in \mathbb{R}^3$, the operation $x + y$ is meaningless (my professor likes to say: "You can't add Chicago and New York!") but the operation $x - y$ gives a vector (the vector which points from New York to Chicago). You can also add a point and a vector, which gives you a translated point.</p> <p>The distinction between the point $(0, 1, 2)$ and the vector $\langle 0, 1, 2 \rangle$ is sometimes made.</p> <p>But then we quickly move on to treating $\mathbb{R}^3$ as a <em>vector</em> space, where instead of a point $A$, you have vectors starting at the origin with their tip at $A$. For example, parameterized curves such as</p> <p>$$r(t) = (t, t^2, 3t)$$</p> <p>are called "vector-valued functions" and not "point-valued functions". So, my question is, what is the reason that we historically don't define two spaces -- $\mathbb{R}^3$ and $\mathbb{R}^3_{\text{affine}}$? (I'm sure there's better notation).</p> <p>For example, my "point-valued function" $r(t)$ would be a function $\mathbb{R} \rightarrow \mathbb{R}^3_{\text{affine}}$, but its derivative $r'(t)$ (the velocity <em>vector</em>) would be a function $\mathbb{R} \rightarrow \mathbb{R}^3$. What would this make more difficult?</p> <p>In particular, I know that $\mathbb{R}^3 \iff \mathbb{R}^3_{\text{affine}}$ is a bijection, and that we use this sometimes, but how often in multivariable calculus? If we are using it all the time, then it wouldn't make sense to emphasize the distinction.</p>
hardmath
3,111
<p>Note that $\mathbb{R}^3$ explicitly introduces <em>coordinates</em> for three dimensional space, and coordinates are not necessary to have three dimensional affine space. You are right in observing that once we identify points by coordinates, there is an ambiguity in whether we mean a point or a vector when coordinates are given. But this scarcely means we don't consider $\mathbb{R}^3$ to be an affine space.</p> <p>In general classic geometry deals with Euclidean space in affine terms and without coordinates. The magnitude of lengths and angles does not require a special designation of an origin, such as we have in analytic geometry (where introducing coordinates makes many arguments easier).</p> <p>Abstractly we can <em>consider any vector space to be an affine space</em> by "forgetting" the designation of a distinguished origin. Many of the operations in multivariable calculus can be viewed as shifting the origin to a convenient point, e.g. a point on a surface, and carrying out analysis "locally" in a neighborhood of that point. In this sense we are making frequent use of the affine structure of $n$-dimensional Euclidean space.</p>
1,117,458
<p>How do I integrate an expression of the form $$ \frac{f'(x)}{[f(x)]^n} $$ with respect to $x$?</p> <p>Could I use some kind of recognition method, thus avoiding partial fractions?</p> <p>For example: $$ \frac{(2x+1)}{(x^2+x-1)^2} $$</p>
RE60K
67,609
<p>Simple, substitute $t=f(x)$, so that $dt=f'(x)\,dx$: $$\int\frac{f'(x)}{f^n(x)}\,dx=\int\frac{dt}{t^n}=\frac{t^{1-n}}{1-n}+C\qquad(n\neq 1),$$ and $\ln|t|+C$ when $n=1$. Then substitute back $t=f(x)$.</p>
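Applied to the example in the question, with $f(x)=x^2+x-1$, $f'(x)=2x+1$, and $n=2$:

```latex
\int \frac{2x+1}{(x^2+x-1)^2}\,dx
  \;\overset{t=x^2+x-1}{=}\; \int \frac{dt}{t^2}
  \;=\; -\frac{1}{t}+C
  \;=\; -\frac{1}{x^2+x-1}+C .
```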
3,960,282
<p>I am trying to find <span class="math-container">$z$</span> such that <span class="math-container">$$\dot{z} = -1 + e^{-iz^*},$$</span> where <span class="math-container">$*$</span> denotes complex conjugate and the dots represent derivatives with respect to time. The time dependence of the dependent variables is suppressed for clarity of presentation. Letting <span class="math-container">$z=x+iy$</span>, <span class="math-container">$$\dot{x} = -1 - e^{-y}\cos x, \quad \dot{y}= e^{-y}\sin x,$$</span> together with the constraint that <span class="math-container">$$A = -y+ e^{-y}\cos x$$</span> is a constant, i.e. <span class="math-container">$\dot{A}=0$</span>.</p> <p>The normal way I would proceed would be to eliminate <span class="math-container">$y$</span> in favor of <span class="math-container">$x$</span>, which yields product log (ie Lambert W) functions, but the resulting ODE is complicated, and I cannot solve it in closed form. Note, if we take <span class="math-container">$z = i \log \zeta$</span>, we can equate this to a problem involving a point vortex in a uniform stream, but I'm not sure this identification helps.</p>
user577215664
475,762
<p><span class="math-container">$$\dot{x} = -1 - e^{-y}\cos x, \quad \dot{y}= e^{-y}\sin x.$$</span> Dividing the two equations to eliminate $t$ (primes below denote derivatives with respect to $y$): <span class="math-container">$$\sin x \dfrac {dx}{dy}=-e^y- \cos x$$</span> <span class="math-container">$$(\cos x)'-\cos x=e^y$$</span> <span class="math-container">$$(e^{-y}\cos x)'=1$$</span> This you can integrate, which recovers the conserved quantity $A$ from the question. But the system is nonlinear, so it is not going to be easy to take it further.</p>
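As a sanity check (my own script, not part of the original answer), a small Euler integration of the question's system confirms that $A=-y+e^{-y}\cos x$ stays constant along solutions:

```python
# Sanity check (my own script, not from the original answer): integrate
# xdot = -1 - exp(-y)*cos(x), ydot = exp(-y)*sin(x) with a small Euler step
# and confirm that A = -y + exp(-y)*cos(x) from the question stays constant.
from math import cos, exp, sin

def A(x, y):
    return -y + exp(-y) * cos(x)

x, y = 0.5, 0.3
a0 = A(x, y)
dt = 1e-5
for _ in range(100_000):  # integrate from t = 0 to t = 1
    dx = -1 - exp(-y) * cos(x)
    dy = exp(-y) * sin(x)
    x, y = x + dt * dx, y + dt * dy

print(abs(A(x, y) - a0) < 1e-3)  # True: only O(dt) drift from the Euler error
```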
308,565
<p>Suppose I have a one parameter flat family of complex surfaces (regular, of general type) whose general fibre is smooth. Is it possible for the central fibre to have singularities which are not canonical? If so, how bad can they be? </p>
inkspot
8,726
<p>The cone over a plane curve of degree $d$ deforms to a smooth surface in $\mathbb P^3$ of degree $d$. Take $d\ge 5$ to see that things can be arbitrarily bad.</p>
141,423
<p>Let $V \subset H \subset V'$ be a Hilbert triple.</p> <p>We can define a weak derivative of $u \in L^2(0,T;V)$ as the element $u' \in L^2(0,T;V')$ satisfying $$\int_0^T u(t)\varphi'(t)=-\int_0^T u'(t)\varphi(t)$$ for all $\varphi \in C_c^\infty(0,T)$.</p> <p>Then we define the space $W = \{u \in L^2(0,T;V) : u' \in L^2(0,T;V')\}$. We know for example that for $u, v \in W$, $$\frac{d}{dt}(u(t),v(t))_H = \langle u'(t), v(t) \rangle + \langle v'(t), u(t) \rangle.\tag{1}$$</p> <p>Now suppose I change my space of test functions and define a weak derivative as an element $u' \in L^2(0,T;V')$ satisfying $$\int_0^T u(t)\varphi'(t)=-\int_0^T u'(t)\varphi(t)$$ for all $\varphi \in C_c^1(0,T)$.</p> <p>Define $\tilde W = \{u \in L^2(0,T;V) : u' \in L^2(0,T;V')\}$ where the derivative is now with respect to these new test functions. How is this related to $W$? Do properties like (1) still hold? </p> <p>I think yes, by uniqueness of weak derivatives. But I wanted to check in case I missed something.</p>
Guido Kanschat
37,813
<p>The derivative you define is the distributional derivative, which can be defined on $C^\infty_c$ or on $C^1_c$. But then you restrict to weak derivatives $u' \in L^2(I;V')$, which means you allow test functions even from $H^1(I;V)$, the class for which the left hand side of the definition of your derivative makes sense.</p> <p>Let $V=\mathbb R$ for a moment, and let $u$ be the Heaviside function with jump at zero. Then its distributional derivative according to both your first and second definition is the Dirac functional $\delta(x)$, which is not a weak derivative in your sense. By both of your definitions, you can even compute the next distributional derivative $\delta'(x)$. The derivative after that, $\delta''(x)$, is only defined on $C^2_c$ and thus would not be caught by your second definition. But as soon as a derivative is defined as a functional on $C^1_c$, it will be the same on $C^\infty_c$, since the latter is a subspace.</p>
62,581
<p>I have a 2D coordinate system defined by two non-perpendicular axes. I wish to convert from a standard Cartesian (rectangular) coordinate system into mine. Any tips on how to go about it?</p>
davidlowryduda
9,754
<p>Sure. Let's approach this in a very elementary way: with matrix algebra.</p> <p>Suppose that our two new 'basis vectors' are given by $ (\alpha _1, \alpha _2)$ and $(\beta _1, \beta_2)$, e.g. $(1,1)$ and $(1,0)$. </p> <p>Then our goal is to find a linear combination of them such that we can express some given vector, which we will imaginatively denote as $(x, y)$. In particular, this will allow us to find where a standard point is in our new, nonstandard coordinate system.</p> <p>But this is nothing more than solving the system: $$ \left( \begin{array}{cc} \alpha_1 &amp; \beta_1 \\ \alpha_2 &amp; \beta_2 \end{array} \right) \left( \begin{array}{c} a \\ b \end{array}\right)= \left( \begin{array}{c} x \\ y\end{array}\right)$$ for the unknown a and b values. In particular, this means that if the matrix is invertible (meaning that your axes are not parallel), then you can immediately find where any particular points are in your system. </p> <p>Is that what you were looking for?</p>
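As a concrete sketch of the $2\times 2$ solve above, here it is written out with Cramer's rule (the function name and sample vectors are my own illustration):

```python
# A concrete sketch of the 2x2 solve from the answer, via Cramer's rule.
# The columns of the matrix are the two skew basis vectors
# (alpha1, alpha2) and (beta1, beta2); the function name is my own.
def to_skew_coords(alpha, beta, point):
    a1, a2 = alpha
    b1, b2 = beta
    x, y = point
    det = a1 * b2 - a2 * b1  # zero exactly when the axes are parallel
    if det == 0:
        raise ValueError("axes are parallel: no unique representation")
    a = (x * b2 - y * b1) / det
    b = (a1 * y - a2 * x) / det
    return a, b

# With the basis (1, 1), (1, 0) from the answer: (3, 1) = 1*(1,1) + 2*(1,0).
print(to_skew_coords((1, 1), (1, 0), (3, 1)))  # (1.0, 2.0)
```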
1,856,530
<blockquote> <p>Prove that the product of five consecutive positive integers cannot be the square of an integer.</p> </blockquote> <p>I don't understand the book's argument below for why $24r-1$ and $24r+5$ can't be one of the five consecutive numbers. Are they saying that since $24-1$ and $24+5$ aren't perfect squares it can't be so? Also, the argument after that about how $24r+4$ is divisible by $6r+1$ and thus is a perfect square is unclear.</p> <p><strong>Book's solution:</strong></p> <p><a href="https://i.stack.imgur.com/cyKAA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cyKAA.png" alt="enter image description here"></a></p>
André Nicolas
6,312
<p>The numbers $24r-1$ and $24r+5$ are also divisible neither by $2$ nor by $3$, so by the previous argument, if they are part of the list they must be perfect squares. However, they are odd and respectively congruent to $-1$ and $5$ modulo $8$. But any odd perfect square must be congruent to $1$ modulo $8$.</p>
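A quick numerical sanity check of the mod-$8$ facts used above (the script and its ranges are my own illustration):

```python
# My own quick check of the mod-8 facts in the answer: every odd perfect
# square is 1 (mod 8), while 24r - 1 is 7 = -1 (mod 8) and 24r + 5 is
# 5 (mod 8), so neither of those numbers can be a perfect square.
assert all((2 * k + 1) ** 2 % 8 == 1 for k in range(10_000))
assert all((24 * r - 1) % 8 == 7 for r in range(1, 10_000))
assert all((24 * r + 5) % 8 == 5 for r in range(1, 10_000))
print("mod-8 checks pass")
```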
899,249
<p>I got this problem from my friend. I have been doing it for hours.</p> <p>$a_1 = 2$</p> <p>$a_{n+1} = 2a^2_n+1$</p> <p>$a_n = ?$</p> <p>Could you please tell me how to solve this? Thanks!</p> <p>BTW: I failed to solve it by using Mathematica <code>RSolve[{a[1] == 2, a[n + 1] == 2 a[n]^2 + 1}, a[n], n]</code></p>
Will Jagy
10,400
<p>Take $b_n = 2 a_n,$ this becomes $$ b_{n+1} = b_n^2 + 2, \; \; b_1 = 4. $$ This is a well-known problem, the general heading is Lucas-Lehmer sequences. The best that can be done is that there is a real number $\theta &gt; 1$ such that $$ b_n \approx \theta^{\left( 2^n \right)},$$ where the really bad news is that you can only get estimates of $\theta$ by using more and more terms.</p> <p>If your problem had a minus sign instead, then there would be a closed form solution! Given $$ x_{n+1} = x_n^2 - 2, \; \; \; x_0 &gt; 2, $$ find the constants $A &gt; B &gt; 0$ such that $$ AB = 1, \; \; A+B = x_0. $$ Then $$ x_n = A^{\left( 2^n \right)} + B^{\left( 2^n \right)} $$</p> <p>So, my suspicion is that there was a lapse in communication, and the problem was originally $$\color{blue}{ a_1 = 2, \; \; a_{n+1} = 2 a_n^2 - 1.} $$</p>
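A quick numerical check (my own script) of the closed form for the minus-sign recurrence, taking $x_0 = 4$ so that $A = 2+\sqrt 3$, $B = 2-\sqrt 3$:

```python
# Check (my own script) of the closed form for the minus-sign recurrence
# x_{n+1} = x_n^2 - 2: with x_0 = 4 we get A = 2 + sqrt(3), B = 2 - sqrt(3)
# (so AB = 1 and A + B = 4), and x_n should equal A^(2^n) + B^(2^n).
from math import sqrt

A = 2 + sqrt(3)
B = 2 - sqrt(3)

x = 4
for n in range(5):
    assert round(A ** (2 ** n) + B ** (2 ** n)) == x
    x = x * x - 2
print("closed form matches")
```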