203,614
<p>I have two matrices </p> <p><span class="math-container">$$ A=\begin{pmatrix} a &amp; 0 &amp; 0 \\ 0 &amp; b &amp; 0 \\ 0 &amp; 0 &amp; c \end{pmatrix} \quad \text{ and } \quad B=\begin{pmatrix} d &amp; e &amp; f \\ d &amp; e &amp; f \\ d &amp; e &amp; f \end{pmatrix} $$</span></p> <p>In reality mine are more like 1000 x 1000 matrices, but the only thing that is important for now is that the left matrix is diagonal and the right one has one row that repeats itself.</p> <p>Obviously the eigenvalues of the left matrix are its diagonal entries. I want to create a new matrix C</p> <p><span class="math-container">$$C = A+B=\begin{pmatrix} a &amp; 0 &amp; 0 \\0 &amp; b &amp; 0 \\0 &amp; 0 &amp; c \end{pmatrix}+\begin{pmatrix} d &amp; e &amp; f \\d &amp; e &amp; f \\d &amp; e &amp; f \end{pmatrix}=\begin{pmatrix} a+d &amp; e &amp; f \\d &amp; b+e &amp; f \\d &amp; e &amp; c+f \end{pmatrix}$$</span></p> <p>I am now wondering how the eigenvalues of this new matrix C are related to the eigenvalues of the diagonal matrix A. Can I use an argument that uses row reduction in order to relate the eigenvalues of both matrices? </p> <p>The reason why I am asking is that my 1000 x 1000 matrix (implemented in Mathematica), described as above, gives me almost the same eigenvalues as the corresponding diagonal matrix (only a few eigenvalues differ), and I really cannot think of any reason why that should be the case.</p> <p>EDIT:</p> <p>I implemented some simple code in Mathematica to illustrate what I mean. 
One can see that every eigenvalue of the diagonal matrix A appears in C:</p> <pre><code> dim = 50; A = DiagonalMatrix[Flatten[RandomInteger[{0, 10}, {1, dim}]]]; mat = RandomReal[{0, 100}, {1, dim}]; B = ArrayFlatten[ConstantArray[{mat}, dim]]; c = A + B; Abs[Eigenvalues[A]] Round[Abs[Eigenvalues[c]], 0.01] (*{10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 9, 8, 8, 8, 8, 7, 7, 7, 7, 7, 6, 6, 6, 6, 5, 5, 5, 5, 5, 4, 4, 4, 4, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 1, 1, 1, 0, 0, 0}*) (*{2084.89, 10., 10., 10., 10., 10., 9.71, 9., 9., 9., 9., 9., 8.54, 8., 8., 8., 7.72, 7., 7., 7., 7., 6.61, 6., 6., 6., 5.44, 5., 5., 5., 5., 4.29, 4., 4., 4., 3.51, 3., 3., 3., 3., 2.28, 2., 2., 2., 2., 1.21, 1., 1., 0.33, 0., 0.}*) </code></pre>
Michael E2
4,999
<p>It doesn't happen here:</p> <pre><code>SeedRandom[0]; aa = RandomReal[{-10, 10}, {1000, 1000}]; bb = ConstantArray[RandomReal[{-10, 10}, {1000}], {1000}]; eva = Eigenvalues@aa; evc = Eigenvalues[aa + bb]; ListPlot[{ReIm@eva, ReIm@evc}, ImageSize -&gt; Large, MaxPlotPoints -&gt; 1000] </code></pre> <p><a href="https://i.stack.imgur.com/H4E4I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H4E4I.png" alt="enter image description here"></a></p> <p>OTOH, it does happen here:</p> <pre><code>bb = ConstantArray[RandomReal[{-1, 1} 1*^-8, {1000}], {1000}]; eva - Eigenvalues[aa + bb] // Abs // Max (* 5.4818*10^-7 *) </code></pre> <p>Of course, the explanations should be obvious.</p>
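To make the mechanism behind the question's observation concrete: the rows-repeated matrix $B$ has rank one, so if $\lambda$ is an eigenvalue of $A$ with multiplicity $m$, then $A+B-\lambda I$ has rank at most $(n-m)+1$, leaving $\lambda$ as an eigenvalue of $A+B$ with multiplicity at least $m-1$. Here is a small NumPy illustration of that rank argument (my own sketch, not from either post; the sizes and entries are invented):

```python
import numpy as np

# Diagonal A with repeated eigenvalues, plus a rank-one B whose rows all
# repeat, as in the question.  Sizes and entries here are made up.
rng = np.random.default_rng(0)
A = np.diag([5.0, 5.0, 5.0, 2.0, 2.0])
row = rng.uniform(0, 100, size=5)
B = np.tile(row, (5, 1))                  # rank(B) == 1

evals = np.linalg.eigvals(A + B)

# If lam has multiplicity m in A, then A + B - lam*I has rank at most
# (5 - m) + 1, so lam survives in A + B with multiplicity >= m - 1.
count5 = int(np.sum(np.isclose(evals, 5.0)))
count2 = int(np.sum(np.isclose(evals, 2.0)))
print(sorted(np.round(evals.real, 3)), count5, count2)
```

Only one copy of each repeated diagonal entry can be shifted by a rank-one perturbation; the remaining copies appear in $A+B$ exactly, which is consistent with the 50×50 experiment in the question reproducing almost all eigenvalues of $A$.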
3,156,359
<p>I am currently attempting to solve a system of quadratic (and linear) systems that I have run into while trying to triangulate sound.</p> <p>My hypothetical setup includes 3 sensors on a perfectly equilateral triangle, with one sensor located at <span class="math-container">$(0,0)$</span> and the other two located below it. (The specifics don't matter, as I am simply referring to the sensor locations using <span class="math-container">$a_1,a_2,a_3$</span> for the x-coordinates of the sensors, and <span class="math-container">$b_1,b_2,b_3$</span> for the y-coordinates of the sensors, with <span class="math-container">$r_1,r_2,r_3$</span> being the radii of the circles from each respective sensor to the sound point) </p> <p>I am trying to specify equations for the x position of the sound, the y position of the sound, and finally the radius of the incident sensor to the sound (the sensor that picks up the sound wave first).</p> <p>My equations are as follows:</p> <p><span class="math-container">$$(x - a_1)^2 + (y - b_1)^2 = r_1^2$$</span> <span class="math-container">$$(x - a_2)^2 + (y - b_2)^2 = r_2^2$$</span> <span class="math-container">$$(x - a_3)^2 + (y - b_3)^2 = r_3^2$$</span> <span class="math-container">$$r_3 = r_1 + (t_3 * \text{speed of sound})$$</span> <span class="math-container">$$r_2 = r_1 + (t_2 *\text{speed of sound})$$</span></p> <p>In this example, I am assuming that the sound reaches sensor 1 first. I understand that a true solution requires 3 discrete solutions, one for each sensor being the "incident sensor". 
(assuming that there cannot be a scenario where sound perfectly reaches multiple sensors at the same time)</p> <p>My known variables: <span class="math-container">$a_1,a_2,a_3,b_1,b_2,b_3,\text{speed of sound}, t_1,t_2,t_3$</span></p> <p>My unknown variables: <span class="math-container">$x,y,r_1,r_2,r_3$</span>.</p> <p>Now I understand that I can just substitute in the three linear equations, but that leaves me with three quadratic equations that I am unsure how to solve and obtain a meaningful answer from. </p> <p>I tried searching for relevant topics, and the closest I could come was this: <a href="https://math.stackexchange.com/a/187858/656339">https://math.stackexchange.com/a/187858/656339</a></p> <p>It has the same setup as mine, but doesn't detail how to solve it.</p> <p>Any help would be appreciated.</p>
Rohit Pandey
155,881
<p>Solving systems of quadratic and, indeed, general polynomial equations is possible with techniques like Buchberger's algorithm. See the first two chapters of <a href="http://people.dm.unipi.it/caboara/Misc/Cox,%20Little,%20O%27Shea%20-%20Ideals,%20varieties%20and%20algorithms.pdf" rel="nofollow noreferrer">the book</a> by Cox et al. There are also many numerical tools to assist you with this. SymPy in Python is one alternative (the one I'd recommend), and there is also one <a href="https://github.com/ryu577/GroebnerBasis" rel="nofollow noreferrer">I wrote in C#</a> following the textbook cited.</p>
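To make the recommended numerical route concrete, here is a hedged SymPy sketch (my own, not from the question or answer; the sensor coordinates, the simulated source location, and the choice $c=1$ are all invented for the example). After substituting the two linear relations $r_2 = r_1 + c\,t_2$ and $r_3 = r_1 + c\,t_3$, the system is square in $(x, y, r_1)$, and a numerical root-finder such as `sympy.nsolve` handles it directly:

```python
import math
import sympy as sp

x, y, r1 = sp.symbols('x y r1')

# Hypothetical sensor positions (the question notes the specifics don't
# matter) and speed of sound c = 1 for simplicity.
sensors = [(0, 0), (2, 0), (1, 2)]
c = 1

# Simulate a source at (0.5, 0.5) to generate the measured time delays;
# sensor 1 is the incident sensor in this configuration.
src = (0.5, 0.5)
d = [math.dist(src, s) for s in sensors]
t2, t3 = (d[1] - d[0]) / c, (d[2] - d[0]) / c

# After substituting r2 = r1 + c*t2 and r3 = r1 + c*t3, three equations
# remain in the three unknowns x, y, r1:
eqs = [
    (x - sensors[0][0])**2 + (y - sensors[0][1])**2 - r1**2,
    (x - sensors[1][0])**2 + (y - sensors[1][1])**2 - (r1 + c * t2)**2,
    (x - sensors[2][0])**2 + (y - sensors[2][1])**2 - (r1 + c * t3)**2,
]
sol = sp.nsolve(eqs, (x, y, r1), (0.4, 0.4, 0.6))
print(sol)  # should recover x = 0.5, y = 0.5, r1 = sqrt(0.5)
```

For a symbolic treatment, replacing `nsolve` with `sp.solve(eqs, [x, y, r1])` computes the Gröbner-basis-style elimination that the answer describes, at the cost of speed.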
76,505
<p>In the eighties, Grothendieck devoted a great amount of time to work on the foundations of homotopical algebra. </p> <p>He wrote in "Esquisse d'un programme": "[D]epuis près d'un an, la plus grande partie de mon énergie a été consacrée à un travail de réflexion sur les <em>fondements de l'algèbre (co)homologique non commutative</em>, ou ce qui revient au même, finalement, de l'<em>algèbre homotopique</em>." (Beginning of section 7. English version <a href="http://matematicas.unex.es/~navarro/res/esquisseeng.pdf" rel="noreferrer">here</a>: "Since the month of March last year, so nearly a year ago, the greater part of my energy has been devoted to a work of reflection on the foundations of non-commutative (co)homological algebra, or what is the same, after all, of homotopic[al] algebra.") </p> <p>In <a href="http://www.math.jussieu.fr/~maltsin/groth/ps/lettreder.pdf" rel="noreferrer">a letter to Thomason</a> written in 1991, he states: "[P]our moi le “paradis originel” pour l’algèbre topologique n’est nullement la sempiternelle catégorie ∆∧ semi-simpliciale, si utile soit-elle, et encore moins celle des espaces topologiques (qui l’une et l’autre s’envoient dans la 2-catégorie des topos, qui en est comme une enveloppe commune), mais bien la catégorie Cat des petites catégories, vue avec un œil de géomètre par l’ensemble d’intuitions, étonnamment riche, provenant des topos." [EDIT 1: Terrible attempt at a translation, otherwise some people might miss the reason why I have asked this question: "To me, the "original paradise" for topological algebra is by no means the never-ending semi-simplicial category ∆∧ [he means the simplex category], for all its usefulness, and even less is it the category of topological spaces (both of them imbedded in the 2-category of toposes, which is a kind of common envelope for them). 
It is the category of small categories Cat indeed, seen through the eyes of a geometer with the set of intuitions, surprisingly rich, arising from toposes."]</p> <p>If $Hot$ stands for the classical homotopy category, then we can see $Hot$ as the localization of $Cat$ with respect to the functors whose nerve has a topological realization that is a homotopy equivalence (or, equivalently, a topological weak equivalence). This definition of $Hot$ still makes use of topological spaces. However, topological spaces are in fact not necessary to define $Hot$. Grothendieck defines a <em>basic localizer</em> as a $W \subseteq Fl(Cat)$ satisfying the following properties: $W$ is weakly saturated; if a small category $A$ has a terminal object, then $A \to e$ is in $W$ (where $e$ stands for the trivial category); and the relative version of Quillen's Theorem A holds. This notion is clearly stable under intersection, and Grothendieck conjectured that classical weak equivalences of $Cat$ form the smallest basic localizer. This was proved by Cisinski in his thesis, so that we end up with a categorical definition of the homotopy category $Hot$ without having mentioned topological spaces. (Neither have we made use of simplicial sets.) </p> <p>I personally found what Grothendieck wrote on the subject quite convincing, but of course it is a rather radical change of viewpoint regarding the foundations of homotopical algebra. </p> <p>A related fact is that Grothendieck writes in "Esquisse d'un programme" that "la "<em>topologie générale</em>" <em>a été développée</em> (dans les années trente et quarante) <em>par des analystes et pour les besoins de l'analyse</em>, non pour les besoins de la topologie proprement dite, c'est à dire l'étude des <em>propriétés topologiques de formes géométriques</em> diverses". ("[G]eneral topology” was developed (during the thirties and forties) by analysts and in order to meet the needs of analysis, not for topology per se, i.e. 
the study of the topological properties of the various geometrical shapes." See the link above.) This sentence has already been alluded to on MO, for instance in Allen Knutson's answer <a href="https://mathoverflow.net/questions/8204/how-can-i-really-motivate-the-zariski-topology-on-a-scheme/14354#14354">there</a> or Kevin Lin's comment <a href="https://mathoverflow.net/questions/14314/algebraic-topologies-like-the-zariski-topology">there</a>. </p> <p>So much for the personal background of this question.</p> <p>It is not new that $Top$, the category of all topological spaces and continuous functions, does not possess all the desirable properties from the geometric and homotopical viewpoint. For instance, there are many situations in which it is necessary to restrict oneself to some subcategory of $Top$. I expect there are many more instances of "failures" of $Top$ from the homotopical viewpoint than the few I know of, and I would like to have a list of such "failures", from elementary ones to deeper or less-known ones. I do not give any example myself on purpose, but I hope the question as stated below is clear enough. Here it is, then: </p> <blockquote> <p>In which situations is it noticeable that $Top$ (the category of general topological spaces and continuous maps) is not adapted to geometric or homotopical needs? Which facts "should be true" but are not? And what do people usually do when encountering such situations? </p> </blockquote> <p>As usual, please post only one answer per post so as to allow people to upvote or downvote single answers.</p> <p>P.S. I would like to make sure that nobody interpret this question as "why should we get rid of topological spaces". This, of course, is not what I have in mind! </p>
David Roberts
4,177
<p>The geometric realisation functor (read: homotopy colimit for nice situations) from simplicial spaces to $Top$ preserves pullbacks only when you take the $k$-ification of the product in $Top$, or work with compactly generated spaces (Edit: or a convenient category of spaces). This is false in the category of all spaces with the ordinary product.</p> <p>See e.g. <a href="http://ncatlab.org/nlab/show/geometric+realization+of+simplicial+topological+spaces#ordinary_geometric_realization_34" rel="noreferrer">here</a> in the nLab.</p>
692,582
<p>When I was reading a paper, I found a strange derivation: $$\int^{+\infty}_{-\infty}\ln(1+e^w)f(w)\,dw\\=\int^0_{-\infty}\ln(1+e^w)f(w)\,dw+\int^\infty_0[\ln(1+e^{-w})+w]f(w)\,dw$$ where $w$ is a normal random variable and $f(w)$ is the normal density.</p> <p>Why is that natural log integral broken up like that?</p> <p>I think it just needs a simple principle.</p> <p>Thank you.</p>
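The split rests on the algebraic identity $\ln(1+e^w) = w + \ln(1+e^{-w})$, applied on the $[0,\infty)$ half of the integral (in numerical work it also avoids overflowing $e^w$ for large $w$). A quick check of the identity (my own snippet, not from the paper):

```python
import math

# For w >= 0:  log(1 + e^w) = log(e^w * (e^(-w) + 1)) = w + log(1 + e^(-w)).
# The derivation in the paper applies this on the [0, oo) part of the integral.
for w in (0.0, 0.5, 3.0, 20.0):
    lhs = math.log(1 + math.exp(w))
    rhs = w + math.log(1 + math.exp(-w))
    assert math.isclose(lhs, rhs), (w, lhs, rhs)
print("identity verified")
```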
bubba
31,744
<p>Use a <em>parametric</em> spline, in which $x$ and $y$ are spline functions (or even just polynomial functions) of some independent parameter $t$. </p> <p>Here is a parametric cubic spline with 4 segments created in PowerPoint. Or, looking at it another way, this is just a string of four cubic Bezier curves that join smoothly.</p> <p><img src="https://i.stack.imgur.com/pASDE.jpg" alt="loop"></p> <p>And here's a nicer curve. It's a Bezier curve of degree 6, and its control points are $(2,0)$, $(6,0)$, $(7,2)$, $(4,5)$, $(1,2)$, $(2,0)$, $(6,0)$.</p> <p><img src="https://i.stack.imgur.com/6bpTL.jpg" alt="enter image description here"></p>
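A degree-6 Bézier curve like the one above is easy to evaluate with de Casteljau's algorithm. The following sketch is my own (only the control points come from the answer); it just evaluates the curve and confirms that it interpolates its first and last control points:

```python
# De Casteljau evaluation of the degree-6 Bezier curve from the answer
# (the evaluation code is my own sketch, not from the post).
ctrl = [(2, 0), (6, 0), (7, 2), (4, 5), (1, 2), (2, 0), (6, 0)]

def de_casteljau(points, t):
    """Evaluate a Bezier curve with the given control points at t in [0, 1]."""
    pts = list(points)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = [((1 - t) * px + t * qx, (1 - t) * py + t * qy)
               for (px, py), (qx, qy) in zip(pts, pts[1:])]
    return pts[0]

start = de_casteljau(ctrl, 0.0)  # a Bezier curve starts at its first control point
end = de_casteljau(ctrl, 1.0)    # ... and ends at its last
print(start, end)
```

Sampling `de_casteljau(ctrl, t)` over a grid of `t` values traces out the loop shown in the second image.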
1,877,567
<p>I need help calculating two integrals:</p> <p>1) $$\int_1^2 \sqrt{4+ \frac{1}{x}}\,\mathrm{d}x$$ 2) $$\int_0^{\frac{\pi}{2}}x^n \sin(x)\,\mathrm{d}x$$</p> <p>I think that for the 1st one I will have to use substitution, but I don't know how to get something resembling one of the known basic integrals. The 2nd one I think is integration by parts, but the $n$ is confusing.</p> <p>Any help would be appreciated. Thank you in advance. (If this happens to be a duplicate, I apologize; I didn't find one.)</p>
David Quinn
187,299
<p>HINT...for the first one, try substituting $$\frac 1x=4\tan^2\theta$$</p> <p>For the second, try integrating by parts twice to obtain a reduction formula</p>
Zau
307,565
<p>Hint:</p> <p><strong>For the first problem:</strong> $$\int \sqrt{4+ \frac{1}{x}}\,\mathrm{d}x = \int \frac{\sqrt{4x+1}}{\sqrt x}\,\mathrm{d}x$$</p> <p>Let $u = \sqrt x$, so $x = u^2$ and $2u\,du = dx$:</p> <p>$$ \int \frac{\sqrt{4u^2+1}}{u}\, 2u \,\mathrm{d}u = 2\int \sqrt{4u^2+1}\,\mathrm{d}u$$</p> <p>Try some trigonometric substitution.</p> <p><strong>For the second problem:</strong></p> <p>Set $$I(n) = \int x^n \sin(x)\,\mathrm{d}x$$</p> <p>Let $u = x^{n}$, $dv = \sin x\,dx$, so $du = n x^{n-1}\,dx$, $v = -\cos x$ by integration by parts:</p> <p>$$I(n) = -x^n \cos x + n \int x^{n-1} \cos x\,\mathrm{d}x $$</p> <p>Let $u = x^{n-1}$, $dv = \cos x\,dx$, so $du = (n-1) x^{n-2}\,dx$, $v = \sin x$ by integration by parts:</p> <p>$$I(n) = -x^n \cos x + n \left(x^{n-1} \sin x - (n-1)\int x^{n-2} \sin x\, dx \right) = -x^n \cos x + n x^{n-1} \sin x - n(n-1)I(n-2) $$</p> <p>Then use mathematical induction and you will get the formula.</p>
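With the signs tracked carefully, the reduction formula reads $I(n) = -x^n\cos x + n x^{n-1}\sin x - n(n-1)I(n-2)$; on $[0,\pi/2]$ the boundary terms collapse because $\cos(\pi/2)=0$, $\sin(\pi/2)=1$, and the terms at $x=0$ vanish for $n\ge 2$. A SymPy spot-check of the definite-integral version for $n=4$ (my own verification, not from the answer):

```python
import sympy as sp

x = sp.symbols('x')
n = 4

# On [0, pi/2] the reduction formula for I(n) = ∫ x^n sin x dx becomes
#   ∫_0^{pi/2} x^n sin x dx = n (pi/2)^(n-1) - n(n-1) ∫_0^{pi/2} x^(n-2) sin x dx
# since cos(pi/2) = 0, sin(pi/2) = 1, and the x = 0 terms vanish for n >= 2.
lhs = sp.integrate(x**n * sp.sin(x), (x, 0, sp.pi / 2))
rhs = (n * (sp.pi / 2)**(n - 1)
       - n * (n - 1) * sp.integrate(x**(n - 2) * sp.sin(x), (x, 0, sp.pi / 2)))
assert sp.simplify(lhs - rhs) == 0
print(sp.simplify(lhs))  # equals pi**3/2 - 12*pi + 24
```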
3,017,602
<p>I am trying to solve the following differential equation: <span class="math-container">$$\frac{dy}{dx}=\frac{y^{1/2}}{2}, \quad y&gt;0$$</span></p> <p>Here is what I tried: <span class="math-container">$$ \begin{split} \frac{dy}{dx} &amp;= \frac{y^{1/2}}{2} \\ 2y^{1/2}dy &amp;= dx\\ 6y^{3/2} &amp;= x+c\\ y &amp;= \sqrt[3/2]{\frac{x+c}{6}} \end{split} $$</span> But this does not satisfy the original equation. What went wrong?</p>
hamam_Abdallah
369,188
<p>You made a mistake. Compare with:</p> <p><span class="math-container">$$\frac{1}{\sqrt{y}}dy=\frac 12 dx$$</span></p> <p><span class="math-container">$$\frac{dy}{2\sqrt{y}}=\frac 14 dx$$</span></p> <p><span class="math-container">$$d(\sqrt{y})=d(\frac x4+C)$$</span></p> <p><span class="math-container">$$\sqrt{y}=\frac 14 x+C$$</span></p> <p>You can finish.</p>
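Squaring $\sqrt{y} = x/4 + C$ gives $y = (x/4 + C)^2$, which can be confirmed with a quick SymPy check (my own, not from the answer) on the branch where $x/4 + C > 0$:

```python
import sympy as sp

# Verify that y = (x/4 + C)^2 solves dy/dx = sqrt(y)/2.  We take x, C > 0
# so that sqrt(y) simplifies to x/4 + C (matching the y > 0 branch in the
# question).
x, C = sp.symbols('x C', positive=True)
y = (x / 4 + C)**2

residual = sp.simplify(sp.diff(y, x) - sp.sqrt(y) / 2)
print(residual)  # 0
```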
3,330,938
<p>On the Wikipedia page about the Weierstrass factorization theorem one can find a sentence mentioning a generalized version that works for meromorphic functions. I mean:</p> <blockquote> <p>We have the sets of zeros and poles of a function <span class="math-container">$f$</span>. How can we use those sets to find a formula for <span class="math-container">$f$</span>?</p> </blockquote> <p>I think that it should be in the form of a quotient of two entire functions.</p>
David C. Ullrich
248,223
<p>First, a meromorphic function in the plane <em>is</em> the quotient of two entire functions: Say <span class="math-container">$f$</span> is entire except for poles at <span class="math-container">$p_j$</span>. Say the pole at <span class="math-container">$p_j$</span> has order <span class="math-container">$n_j$</span>. There is an entire function <span class="math-container">$h$</span> such that <span class="math-container">$h$</span> has a zero of order <span class="math-container">$n_j$</span> at <span class="math-container">$p_j$</span> (and no other zeroes); in fact you can construct such a function <span class="math-container">$h$</span> as a product of the sort you see in the Weierstrass factorization theorem. Now all the singularities of <span class="math-container">$g=hf$</span> are removable, so <span class="math-container">$g$</span> is entire, and <span class="math-container">$f=g/h$</span>.</p> <p>I don't know precisely what factorization that Wikipedia page is referring to, but if <span class="math-container">$f=g/h$</span> then a factorization of <span class="math-container">$g$</span> plus a factorization of <span class="math-container">$h$</span> give a factorization of <span class="math-container">$f$</span>. This "must" be the result they're talking about...</p> <p>(And in fact we've obtained <span class="math-container">$f=e^\varphi\Pi_1/\Pi_2$</span>, where <span class="math-container">$\Pi_2$</span> is a product depending only on the poles of <span class="math-container">$f$</span> and <span class="math-container">$\Pi_1$</span> is a product depending only on the zeroes of <span class="math-container">$g$</span>, which are the same as the zeroes of <span class="math-container">$f$</span>. So, in the strongest possible sense, we have in fact given a factorization of <span class="math-container">$f$</span> that depends only on the zeroes and poles of <span class="math-container">$f$</span>.)</p>
4,249,281
<p>In a game, 6 balls are chosen from a set of 40 balls numbered from 1 to 40. Find the probability that the number 30 is drawn and it is the highest number drawn in at least one of the next five games.</p> <p>I have <span class="math-container">$X\sim \operatorname{Bin}(5,6/29)$</span> and <span class="math-container">$P(X=1\mid X\text{ is max}) = \frac{P(X=1\text{ and }X\text{ is max})}{P(X\text{ is max})}=\frac{\binom{5}{1}(1/40)(23/29)^4}{6/29}$</span> which is obviously incorrect.</p> <p>Why is this wrong?</p> <p>UPDATE:</p> <p>Parts a) and b) asked for:</p> <p>a) In a game, 6 balls are chosen from a set of 40 balls numbered from 1 to 40. Find the probability that the number 30 is drawn in exactly two of the next five games.</p> <p>b) In a game, 6 balls are chosen from a set of 40 balls numbered from 1 to 40. Find the probability that the number 30 is drawn in at least two of the next five games.</p> <p>These I was able to solve successfully with</p> <p><span class="math-container">$X\sim \operatorname{Bin}(5,6/40)$</span></p> <p>Thus, for the main question in my post, I was led to believe I should use the binomial distribution. Do you think that this could be a method for part c)?</p>
user71207
814,679
<blockquote> <p>For a particular game, you want the probability that 30 is drawn and the other five numbers drawn are smaller. One way is to look at the probability all six are 30 or less, and the probability all six are 29 or less.</p> </blockquote> <p>That means <span class="math-container">$P(X≥1 \text{ and others are smaller}) = 1 - P(X=0 \text{ and others are smaller})=1-\binom{29}{6}(\frac{1}{29})^5$</span></p> <p>OR <span class="math-container">$P(X=0 \text{ and others are smaller}) = \binom{29}{6}(\frac{1}{29})^5$</span>.</p> <p>Then adding them together gives 1, which is wrong.</p>
2,236,717
<p>Let $S$ be a regular domain of characteristic $p&gt;0$ with fraction field $K$. Assume that $K$ is $F$-finite, meaning that $K$ is a finite module over $K^p$. Does it follow that $S$ is also $F$-finite?</p> <p>Diego</p>
Yoël
401,292
<p>I believe the result must be true though I don't have the answer right now, but maybe this can help (or not):</p> <p>Denote by $\phi: A\rightarrow A$, $x\mapsto x^p$, the Frobenius morphism (writing $A$ for the ring $S$ of the question), and still denote by $\phi: K\rightarrow K$ its extension to $K$.</p> <p>Let $n\ge 1$ be an integer and assume $\mathcal{B}=\{f_1, ..., f_n\}$ is a $\phi(K)$-basis of $K$. We can always assume that $f_1$, ..., $f_n$ belong to $A$ (if $f_i=\frac{a_i}{b_i}$, $a_i$, $b_i\in A$, then $\{b_1^{p-1}a_1, ..., b_n^{p-1}a_n\}$ is still a $\phi(K)$-basis of $K$).</p> <p>Identify $K$ with $\phi(K)^n$ through $\mathcal{B}$, and denote by $M\subset A$ the (free) $\phi(A)$-module of rank $n$ generated by $\mathcal{B}$. Then $A/M$ is a torsion module over $\phi(A)$, by assumption. Now I think we have to look for properties of torsion modules over regular domains (maybe first assume that $A$ is local?), about which I don't know much (but I'll look into it).</p> <p>Hope this helped a bit.</p>
2,632,273
<p>So basically I want to know why, when we have something like</p> <p>$$v(x) = x - y + 1,$$ taking the derivative with respect to $x$ yields</p> <p>$$v'(x) = 1 - \frac{dy}{dx}$$</p> <p>I still don't understand why, when it comes to implicit differentiation, we need to tack a $y'$ or $\frac{dy}{dx}$ on every time we take the derivative of a $y$ term. </p> <p>Thanks</p>
ElfHog
527,282
<p>Basically, when we are "differentiating with respect to $x$", we mean (intuitively): "when $x$ has a small change, how will the function change?" </p> <p>Now $y$ can be a function of $x$, e.g. $y=x^2$ or $y=e^{e^x}$. So a small change in $x$ will cause a change (called $\frac{dy}{dx}$) in $y$. </p> <p>For example, if you are dealing with $\frac{d}{dx}y^2$, you reason as follows: "if I change $y$ slightly, we get $2y$ as the change ($\frac{d}{dy}y^2=2y$). But since we are differentiating with respect to $x$, we need to multiply the term by 'when we change $x$ a bit, how much does $y$ change?' [$\frac{dy}{dx}=y'$]. Hence the answer is $2y\frac{dy}{dx}$."</p> <p>Of course this is only intuitive, and the formal proof of the chain rule etc. can easily be found online. I am trying to clarify why the "<em>correction</em>" term $\frac{dy}{dx}$ is necessary.</p>
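The bookkeeping described above is exactly what a computer algebra system does once $y$ is declared to be a function of $x$; a small SymPy illustration (my own, not from the answer):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)          # y is an unspecified function of x

# d/dx of y^2 picks up the dy/dx "correction" factor via the chain rule:
dy2 = sp.diff(y**2, x)
print(dy2)                        # 2*y(x)*Derivative(y(x), x)

# And the example v(x) = x - y + 1 from the question:
dv = sp.diff(x - y + 1, x)
print(dv)                         # 1 - Derivative(y(x), x)
```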
373,068
<p>For a real number $a$ and a positive integer $k$, denote by $(a)^{(k)}$ the number $a(a+1)\cdots (a+k-1)$ and $(a)_k$ the number $a(a-1)\cdots (a-k+1)$. Let $m$ be a positive integer $\ge k$. Can anyone show me, or point me to a reference, why the number $$ \frac{(m)^{(k)}(m)_k}{(1/2)^{(k)} k!}= \frac{2^{2k}(m)^{(k)}(m)_k}{(2k)!}$$ is always an integer?</p>
Shaswata
68,110
<p>We use the idea that a product of $n$ consecutive integers is divisible by $n!$.</p> <p>The numerator is $$2^{2k}(m-k+1)(m-k+2)\cdots (m-1)(m)(m)(m+1)\cdots (m+k-2)(m+k-1)$$</p> <p>i.e., $2^{2k}$ times a product of $2k-1$ consecutive integers, times $m$.</p> <p>This must be divisible by $2^{2k}(2k-1)!\cdot m$.</p>
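The claimed integrality can at least be checked by brute force for small parameters. The following script is my own verification (not from the answer) of the quantity in the question, $2^{2k}(m)^{(k)}(m)_k/(2k)!$:

```python
from math import factorial

def rising(a, k):
    """a (a+1) ... (a+k-1)"""
    out = 1
    for i in range(k):
        out *= a + i
    return out

def falling(a, k):
    """a (a-1) ... (a-k+1)"""
    out = 1
    for i in range(k):
        out *= a - i
    return out

# Check that 2^(2k) * (m)^(k) * (m)_k is divisible by (2k)! for m >= k.
for k in range(1, 8):
    for m in range(k, 25):
        num = 2**(2 * k) * rising(m, k) * falling(m, k)
        assert num % factorial(2 * k) == 0, (m, k)
print("integrality verified for k < 8, m < 25")
```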
91,700
<p>Suppose that $A,C$ are $C^*$-algebras and $\phi:A \to C$ is a completely positive, orthogonality-preserving linear map. (Orthogonality preserving means: if $a,b \in A$ satisfy $ab=0$ then $\phi(a)\phi(b) = 0$.) Then:</p> <p>(i) For any $a,b,c \in A$, $$ \phi\left(ab\right)\phi\left(c\right) = \phi\left(a\right)\phi\left(bc\right) $$ (in the special case that $A$ is unital, this is equivalent to $\phi\left(a\right)\phi\left(b\right) = \phi\left(1\right)\phi\left(ab\right)$ for any $a,b \in A$);</p> <p>(ii) For any $a,b \in A$, $$ \left\| \phi\left(ab\right) \right\| \leq \|a\|\cdot\left\|\phi\left(b\right)\right\|; $$</p> <p>(iii) If $A$ is unital and simple, then for any $a \in A$, $$ \left\| \phi\left(a\right) \right\| = \|a\|\cdot\left\|\phi\left(1\right)\right\|. $$</p> <p>In fact, there is a rich structure theorem about completely positive, orthogonality-preserving maps (in the literature, they are called "order zero" instead of "orthogonality-preserving" ), Theorem 2.3 of Winter, Zacharias, "Completely positive maps of order zero," Münster J. Math, 2009 (see also Corollary 3.1); and I can prove these statements easily using the structure theorem. But, my question is: can we prove any of the facts above directly (without appealing to this structure theorem)?</p> <p>(I am intentionally not restating the structure theorem here because my question is about not using it.)</p>
jorge
100,313
<p>The general form of a (bounded) orthogonality preserving linear mapping between C*-algebras is obtained here: <a href="http://www.sciencedirect.com/science/article/pii/S0022247X08007245" rel="nofollow">http://www.sciencedirect.com/science/article/pii/S0022247X08007245</a></p>
2,036,301
<p>Can someone please help me prove that this series is convergent? <br></p> <p>The problem is I don't know what to do with the sine.<br> </p> <p>$$\sum_{n=1}^{\infty} 2^n \sin{\frac{\pi}{3^n}} $$</p>
Community
-1
<p>As $\sin x&lt;x$ for $x&gt;0$ we can estimate $$ \sum_{n=1}^\infty2^n\sin(\pi/3^n)\leq\sum_{n=1}^\infty2^n\frac{\pi}{3^n}. $$ The RHS is finite by the geometric series so our (positive) series is convergent.</p>
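A numeric sanity check of the comparison (my own snippet, not from the answer): the partial sums stay under the geometric bound $\sum_{n\ge 1} 2^n\,\pi/3^n = \pi\cdot\frac{2/3}{1-2/3} = 2\pi$.

```python
import math

# Partial sums of sum_{n>=1} 2^n sin(pi / 3^n), versus the geometric bound
# sum_{n>=1} 2^n * (pi / 3^n) = 2*pi obtained from sin x < x for x > 0.
partial = sum(2**n * math.sin(math.pi / 3**n) for n in range(1, 60))
bound = 2 * math.pi
print(partial, bound)
assert 0 < partial < bound
```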
hamam_Abdallah
369,188
<p>We have $$\sin(X)\sim X \;(X\to 0)$$</p> <p>$$\implies 2^n\sin(\frac{\pi}{3^n})\sim (\frac{2}{3})^n\;(n\to+\infty)$$</p> <p>$\implies \sum 2^n\sin(\frac{\pi}{3^n})$ is convergent since the geometric series $\sum (\frac{2}{3})^n$ is positive and convergent.</p>
67,513
<p>When processing a larger dataset I came to a point where I want to form a dataset with column heads from an intermediate structure. Here is an example of this structure:</p> <pre><code>test = {&lt;|"name" -&gt; "alpha", "group" -&gt; "one"|&gt; -&gt; {&lt;|"value" -&gt; 459|&gt;}, &lt;|"name" -&gt; "beta", "group" -&gt; "two"|&gt; -&gt; {&lt;|"value" -&gt; -338|&gt;}, &lt;|"name" -&gt; "gamma", "group" -&gt; "two"|&gt; -&gt; {&lt;|"value" -&gt; 363|&gt;}}; </code></pre> <p>Making a dataset:</p> <pre><code>assoc = Association@test; Dataset@assoc </code></pre> <p>this gives:</p> <p><img src="https://i.stack.imgur.com/9fH3r.png" alt="enter image description here"></p> <p>Now I'm searching for a way to make this a dataset with column heads "name", "group" and "value". Maybe I'm blind for the moment, but I could not manage it. Can anyone give me a hint?</p>
WReach
142
<p>If we assume that we are starting from the exhibited dataset:</p> <pre><code>dataset = Dataset@assoc; </code></pre> <p>... then we can reshape it like this:</p> <pre><code>dataset[All, Apply@Association] </code></pre> <p>Or, equivalently:</p> <pre><code>dataset[Map[Apply@Association]] </code></pre> <p><img src="https://i.stack.imgur.com/kDidv.png" alt="dataset screenshot"></p> <p>This will also do the trick, although it is a little messy since <code>MapThread</code> presently lacks an operator form:</p> <pre><code>dataset[{Keys, Values}][MapThread[Association, #]&amp;] </code></pre> <p><strong>Edit</strong></p> <p>And here is yet another way:</p> <pre><code>dataset[All, &lt;| Keys@#, Values@# |&gt; &amp;] </code></pre>
187,395
<p>I can't find my dumb mistake.</p> <p>I'm figuring the definite integral from first principles of $2x+3$ with limits $x=1$ to $x=4$. No big deal! But for some reason I can't find where my arithmetic went screwy. (Maybe because it's 2:46am @_@).</p> <p>So</p> <p>$\delta x=\frac{3}{n}$ and $x_i^*=\frac{3i}{n}$</p> <p>where $x_i^*$ is the right end point of each rectangle under the curve.</p> <p>So the sum of the areas of the $n$ rectangles is</p> <p>$\Sigma_{i=1}^n f(x_i^*)\delta x$</p> <p>$=\Sigma_{i=1}^n f(\frac{3i}{n})\frac{3}{n}$</p> <p>$=\Sigma_{i=1}^n (2(\frac{3i}{n})+3)\frac{3}{n}$</p> <p>$=\frac{3}{n}\Sigma_{i=1}^n (2(\frac{3i}{n})+3)$</p> <p>$=\frac{3}{n}\Sigma_{i=1}^n ((\frac{6i}{n})+3)$</p> <p>$=\frac{3}{n} (\frac{6}{n}\Sigma_{i=1}^ni+ 3\Sigma_{i=1}^n1)$</p> <p>$=\frac{3}{n} (\frac{6}{n}\frac{n(n+1)}{2}+ 3n)$</p> <p>$=\frac{18}{n}\frac{(n+1)}{2}+ 9$</p> <p>$=\frac{9(n+1)}{n}+ 9$</p> <p>$\lim_{n\to\infty} \frac{9(n+1)}{n}+ 9 = 18$</p> <p>But the correct answer is 24. </p>
Mikasa
8,581
<p><strong>Hint:</strong> $$\int_a^bf(x)dx=\lim_{n\to\infty}\sum_1^nf(a+\frac{b-a}{n}i)\frac{b-a}{n}$$</p>
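Numerically, the hint's formula with $a=1$, $b=4$ pinpoints the slip in the question: the right endpoints must be $1 + \frac{3i}{n}$, not $\frac{3i}{n}$. A quick check (my own code, not from the answer; with the correct endpoints the sums work out to $24 + 9/n \to 24$):

```python
# Right-endpoint Riemann sums for f(x) = 2x + 3 on [1, 4].  The original
# attempt used x_i = 3i/n, i.e. forgot the "a +" in a + (b - a) i / n.
def riemann(n, a=1.0, b=4.0):
    dx = (b - a) / n
    return sum((2 * (a + i * dx) + 3) * dx for i in range(1, n + 1))

for n in (10, 100, 10000):
    print(n, riemann(n))     # 24 + 9/n, approaching 24 rather than 18
```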
4,046,532
<p><strong>QUESTION 1:</strong> Let <span class="math-container">$f, g: S\rightarrow \mathbb{R}^m$</span> be differentiable vector-valued functions and let <span class="math-container">$\lambda\in \mathbb{R}$</span>. Prove that the function <span class="math-container">$(f+g):S\rightarrow \mathbb{R}^m$</span> is also differentiable. <strong>PROOF:</strong> Let <span class="math-container">$X:U\rightarrow S$</span> be a parametrization of <span class="math-container">$S$</span>. Then <span class="math-container">$$(f+g)\circ X= f\circ X + g\circ X$$</span> which is differentiable, as a sum of differentiable functions.</p> <p><strong>MY DOUBT IN QUESTION 1:</strong> How can I show in a step by step way that the sum above is distributive with the composition maps?</p> <p><strong>QUESTION 2:</strong> Suppose that a surface <span class="math-container">$S$</span> is a union <span class="math-container">$S=\displaystyle\bigcup_{i\in I} S_i$</span>, where each <span class="math-container">$S_i$</span> is open. If <span class="math-container">$f:S\rightarrow \mathbb{R}^m$</span> is a map such that each <span class="math-container">$f|_{S_i}:S_i\rightarrow \mathbb{R}^m$</span> is differentiable, prove that <span class="math-container">$f$</span> is differentiable.</p> <p><strong>MY ATTEMPT:</strong> We have already proved that open sets in surfaces are also surfaces. 
Defining <span class="math-container">$f|_{S_i}:S_i\rightarrow \mathbb{R}^m$</span> as <span class="math-container">$f_i:S_i\rightarrow \mathbb{R}^m$</span> such that <span class="math-container">$f=\sum f_i$</span>, and considering <span class="math-container">$X:U\rightarrow S$</span> a parametrization of S (which is a surface), we can write <span class="math-container">$(f_1+\cdots +f_n)\circ X= f_1\circ X+\cdots +f_n\circ X$</span>, which is differentiable; that is, <span class="math-container">$f$</span> is differentiable.</p> <p><strong>MY DOUBT IN QUESTION 2:</strong> I'm not sure if I can interpret the union as a sum. Could you help me?</p>
Jakobian
476,484
<ol> <li><p>This is correct. Distributivity comes pretty much directly from the definition; try calculating <span class="math-container">$(f+g)\circ X(p)$</span>. Recall that <span class="math-container">$(f+g)(q) := f(q)+g(q)$</span>.</p> </li> <li><p>This is wrong. Recall that differentiability is a local property. That means if we want to prove <span class="math-container">$f$</span> is differentiable at <span class="math-container">$p$</span>, we can take <span class="math-container">$X:U\to S$</span> with <span class="math-container">$X(U)$</span> small enough to be contained in <span class="math-container">$S_i$</span> (where <span class="math-container">$p\in X(U)$</span>). Compatibility of parametrizations takes over the rest.</p> </li> </ol>
24,195
<p>I am looking at a von Neumann algebra constructed from a discrete group and a 2-cocycle. Does anyone know some good references (articles, books)? It would be very helpful for me. To be more precise, consider a countable group $G$ and a 2-cocycle $\phi :G^2\rightarrow S^1$, where $S^1$ is the group of complex numbers of modulus 1. You get a representation $\pi$ of the group $G$ on the Hilbert space $l^2(G)$ defined as follows: $$\pi(g)(e_t)=\phi(g,t)\,e_{gt},$$ where $(e_t)$ is the canonical Hilbert basis of $l^2(G)$. I consider $L_\phi(G)$, the von Neumann algebra generated by $\pi(G)$. I am looking for references on this kind of algebra. Thanks, Arnaud</p>
Roland Bacher
4,556
<p>This is not at all a complete answer but a remark which can be improved. It is completely rewritten and replaces an earlier flawed version (explaining the first comments.)</p> <p>Suppose the answer to Segerman's question is no. There exists thus a counterexample given by a decreasing sequence $r_1\geq \dots \geq r_n$ of radii such that $\sum_{i=1}^n r_i^2=1/2$ and one cannot fit $n$ circles with radii $r_1,\dots,r_n$ into a circle $C$ of radius $1$. Suppose $n$ is the smallest integer for which a counterexample exists. Then $r_1&lt;2/3$. Indeed, if $r_1\geq 2/3$, the total area $\pi(1/2-r_1^2)$ of the discs of radii $r_2,\dots,r_n$ is at most half the area $\pi(1-r_1)^2$ of the largest disc $C'$ which fits, together with the disc $C_1$ of radius $r_1$, into $C$. Since $n$ is minimal, the $(n-1)$ circles of radii $r_2,\dots,r_n$ could then be packed into $C'$, contradicting the assumption that the $n$ circles do not fit into $C$. </p> <p>This kind of argument can be improved (pack the circles of radii $r_2,\dots$ into more than one circle of suitable radii which fit into $C$ together with $ C_1$) in order to lower the upper bound on the largest radius of a minimal counterexample. </p> <p>I guess one can use a packing argument showing that a solution always exists if the largest radius is small enough. </p> <p>These two bounds can perhaps be made to meet (but I fear that the involved combinatorics are quite messy).</p> <p>Let me make the argument for small radii a little bit more (but not entirely, I agree that some more work is needed) rigorous. (This is probably similar to the argument suggested by fedja (see comment below), I admit that I do not quite understand the details). </p> <p>Supposing the total area of all discs of radius $\leq \epsilon$ exceeds $2\pi \epsilon$, look at the annulus of width $\epsilon$ and outer radius $1$ inside the large disc, supposed to be of radius $1$. If $\epsilon$ is very small, such an annulus looks locally like a strip delimited by two parallel lines at distance $\epsilon$. 
I have thus to prove that given a collection of radii $\leq 1/2$, I can cover more than half of the area of a very long strip delimited by two parallel lines at distance $1$.</p> <p>If all discs have equal radius, then I can cover asymptotically a proportion of at least $\pi/4\sim .7854$ of the strip (the least favorable case corresponds to a collection of discs all of maximal radius $1/2$). The general case should be better but is harder to analyze. I claim however that this analysis can be made. Indeed, up to subdividing the strip into smaller substrips, we can assume that all radii are $&gt;1/4$ and we have then a fairly small number of combinatorial situations to consider. We can thus compute the worst case.</p>
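The arithmetic behind the bound on the largest radius of a minimal counterexample is a one-line inequality: $\pi(1/2-r_1^2)\le \frac12\,\pi(1-r_1)^2$ reduces to $r_1(3r_1-2)\ge 0$, i.e. it holds exactly when $r_1\ge 2/3$. A quick numerical sketch (illustrative only):

```python
# check: 1/2 - r^2 <= (1 - r)^2 / 2 for every r in [2/3, 1]
# (equality holds exactly at r = 2/3, hence the bound on r_1)
grid = [2 / 3 + k * (1 / 3) / 1000 for k in range(1001)]
ok = all(0.5 - r * r <= (1 - r) ** 2 / 2 + 1e-12 for r in grid)
```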
David Eppstein
440
<p>It's still not a yes-or-no answer to your question, but it seems to be true that a collection of circles with area 1/9 can always fit into a circle of area 1. Or, more strongly, if the circles are placed largest-first, then no matter how the larger circles are placed (by a malicious adversary trying to prevent the packing from succeeding) there will always be room for each circle to be placed.</p> <p>Proof sketch: as the circles are placed, maintain an "exclusion zone" where centers of new circles cannot go without potentially conflicting with existing circles. Initially, the exclusion zone covers the outer 1/3 of the radius of the area-1 circle, leaving 4/9 of its area free; with this initial exclusion zone, a circle with area at most 1/9 can have its center anywhere in the free area and not fall outside the boundary of the area-1 circle.</p> <p>Then, when placing any circle, add to the exclusion zone a circular area with twice its radius; smaller circles whose centers are outside this zone cannot conflict with the placed circle.</p> <p>If the total area of all circles is at most 1/9, then the total area of the exclusion zones formed in this way will always be less than 1, and there will always be a safe place for the center of the next circle to go.</p>
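A randomized sanity check of the claim (a sketch only, not the adversarial exclusion-zone argument: circles are placed largest-first at uniformly random conflict-free centers, with total area at most 1/9 of the container):

```python
import math
import random

def pack(radii, R=1.0, tries=20000, seed=0):
    """Greedily place circles largest-first inside a circle of radius R,
    sampling random centers until a conflict-free spot is found.
    Returns the list of placements, or None on failure."""
    rng = random.Random(seed)
    placed = []  # (x, y, r)
    for r in sorted(radii, reverse=True):
        for _ in range(tries):
            # uniform random center inside the disk of radius R - r
            rho = (R - r) * math.sqrt(rng.random())
            th = rng.uniform(0.0, 2.0 * math.pi)
            x, y = rho * math.cos(th), rho * math.sin(th)
            if all(math.hypot(x - px, y - py) >= r + pr for px, py, pr in placed):
                placed.append((x, y, r))
                break
        else:
            return None
    return placed

# mixed radii whose total area is at most 1/9 of the unit circle's area
radii = [0.2, 0.15, 0.12] + [0.05] * 13
assert sum(r * r for r in radii) <= 1 / 9
result = pack(radii)
```

With total area this small the random search finds a spot for every circle; the interesting content of the answer is that the largest-first order makes this work even against adversarial placement.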
Gerhard Paseman
3,402
<p>After weeks of sweating through computations without success, I think I have an auxiliary result which I can use to tackle the problem posted above. However, the path to the conclusion is so surprising that I invite verification, in particular to point out any show-stopper mistake that may or may not be lurking.</p> <p>The auxiliary result involves a region that is the "suitcase" circle with a segment removed. Let C be a disk of radius 1, and draw a chord of C which has distance 1/2 from the center of C. Let D be that part of C (including the chord itself) that is on the same side of the chord as the center of C, so D is a disk with a segment chopped off so that the area of D is less than 5/6 the area of C. Let F and G be two closed disks with unequal radii whose areas sum to half the area of C.</p> <p>Result: F and G pack inside D.</p> <p>Note that if F and G have the same radius, they will not pack inside D for the same reason they won't pack inside C: they have to share a point which is the circle center.</p> <p>The proof of the result involves placing F and G so that they touch the chord and otherwise are as far apart inside D as possible. The analogous problem for packing inside C involves placing F and G inside C so that the centers of F, G, and C are on the same line and F and G are as far apart as possible, then use the area sum condition to show the radii of F and of G sum to less than 1, and so they can be pushed together just enough to remain disjoint and keep away from the region boundary.</p> <p>For the result about D, I orient the coordinates to have the chord parallel to the x axis, and then I show that the x coordinates of the centers of F and G are more than distance 1 apart. It took me weeks to realize that was what was needed to be shown. Even so, the algebra involved led to a surprise, which I want someone else to confirm or deny.</p> <p>Now for the algebraic verification. 
Let $0 \leq s \lt 1/2 \lt r \leq \sqrt{1/2}$ where $s$ and $r$ are the lengths of the radii of F and G. The area sum condition yields $r^2 + s^2 = 1/2$, and the inequality to be shown is inequality (A): $\sqrt{3/4 - r} + \sqrt{3/4 - s} \gt 1$. Since this inequality on the x coordinates of the centers of F and G implies that F and G are disjoint, inequality (A) will yield the result.</p> <p>There may be a slick analytic way to show (A), but I use repeated squaring and rearrangements to yield a quartic polynomial in $r$. I start by writing $s$ as $\sqrt{1/2 - r^2}$ and isolate the radical with $s$ on one side, and square both sides. I end up with an inequality to prove that has $\sqrt{1/2 - r^2}$ as a subexpression, and so I isolate the summand containing that subexpression on one side and square again.</p> <p>I collect and rearrange and square again to get the following inequality to be shown: $49/4 - 42r + 50r^2 - 24r^3 + 4r^4 \gt (1 - 2r + r^2)(12 - 16r)$. I then collect terms and decide to factor out a term of $(r - 1/2)$ in hopes of getting an easier polynomial to handle to show that it is positive for $r$ in the range given above. I know the relations above yield 0 when $r$ is $1/2$, so it seems like a nice simplification.</p> <p>The surprise comes when I find out the above inequality is equivalent to (B): $4(r - 1/2)^4 \gt 0$. I was not expecting that at all! If I have it right, then the result is proved, and I can move on after weeks of fruitless labor involving delta and epsilons and sketches that had no $r$'s or $s$'s. Can someone rederive or otherwise prove inequality A above? Even better, can someone tell me how I could know inequality B was coming?</p> <p>Gerhard "Surprised After All These Weeks" Paseman, 2012.05.07</p>
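For whatever reassurance it is worth, both the quartic identity behind (B) and inequality (A) pass a numerical check (a sketch only; (A) is sampled away from the endpoint $r=1/2$, where both sides of (A) are equal):

```python
import math

def lhs_A(r):
    """Left side of inequality (A), with s fixed by r^2 + s^2 = 1/2."""
    s = math.sqrt(max(0.0, 0.5 - r * r))
    return math.sqrt(0.75 - r) + math.sqrt(0.75 - s)

def quartic_gap(r):
    """Difference of the two sides of the squared inequality."""
    left = 49 / 4 - 42 * r + 50 * r**2 - 24 * r**3 + 4 * r**4
    right = (1 - 2 * r + r**2) * (12 - 16 * r)
    return left - right

# the surprising identity: gap == 4 (r - 1/2)^4, checked on a grid
identity_ok = all(
    abs(quartic_gap(r) - 4 * (r - 0.5) ** 4) < 1e-9
    for r in [k / 1000 for k in range(1001)]
)

# inequality (A) itself, sampled on [0.55, sqrt(1/2)]
hi = math.sqrt(0.5)
ineq_ok = all(lhs_A(0.55 + k * (hi - 0.55) / 500) > 1 for k in range(501))
```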
2,834,864
<p>Is it safe to assume that if $a\equiv b \pmod {35 =5\times7}$</p> <p>then $a\equiv b\pmod 5$ is also true?</p>
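Yes: $a\equiv b\pmod{35}$ means $35\mid(a-b)$, and since $5\mid 35$, also $5\mid(a-b)$, i.e. $a\equiv b\pmod 5$. A brute-force sketch (purely illustrative):

```python
# if a ≡ b (mod 35), then 35 | (a - b); since 5 | 35, also 5 | (a - b)
ok = all((a - b) % 5 == 0
         for a in range(-100, 100)
         for b in range(-100, 100)
         if (a - b) % 35 == 0)
```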
Vera
169,789
<p>If $hH=kK$ and $e$ denotes the identity then $e\in H=h^{-1}kK$.</p> <p>This implies that $h^{-1}kK=K$ because a coset of a subgroup that contains the identity must be the subgroup itself.</p> <p>So we end up with $H=K$.</p>
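The statement can also be confirmed exhaustively in a small group such as $S_3$ (an illustrative brute-force sketch, not part of the proof):

```python
from itertools import combinations, permutations

def compose(p, q):
    """Composition of permutations: (p∘q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))  # the six elements of S_3
e = (0, 1, 2)

def is_subgroup(S):
    # contains e and closed under composition (enough in a finite group)
    return e in S and all(compose(a, b) in S for a in S for b in S)

subgroups = [frozenset(S)
             for r in range(1, len(G) + 1)
             for S in combinations(G, r)
             if is_subgroup(set(S))]

def left_coset(h, H):
    return frozenset(compose(h, x) for x in H)

# whenever two left cosets hH and kK coincide as sets, H = K
ok = all(H == K
         for H in subgroups for K in subgroups
         for h in G for k in G
         if left_coset(h, H) == left_coset(k, K))
```

$S_3$ has six subgroups (trivial, three of order 2, one of order 3, and $S_3$ itself), and the check passes over all of them.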
2,668,839
<blockquote> <p>Finding range of $$f(x)=\frac{\sin^2 x+4\sin x+5}{2\sin^2 x+8\sin x+8}$$</p> </blockquote> <p>Try: put $\sin x=t$ and $-1\leq t\leq 1$</p> <p>So $$y=\frac{t^2+4t+5}{2t^2+8t+8}$$</p> <p>$$2yt^2+8yt+8y=t^2+4t+5$$</p> <p>$$(2y-1)t^2+4(2y-1)t+(8y-5)=0$$</p> <p>For real roots $D\geq 0$</p> <p>So $$16(2y-1)^2-4(2y-1)(8y-5)\geq 0$$</p> <p>$$4(2y-1)^2-(2y-1)(8y-5)\geq 0$$</p> <p>$y\geq 0.5$</p> <p>Could someone help me see where I went wrong? Thanks</p>
MrYouMath
262,304
<p>Hint: Use $$\sin^2 x +4\sin x+5 = (\sin x +2)^2 +1$$ and $$2\sin^2 x +8 \sin x +8 = 2\left(\sin x + 2 \right)^2.$$</p> <p>Also, break the fraction into two pieces</p> <p>$$\dfrac{(\sin x +2)^2 +1}{2\left(\sin x + 2 \right)^2}=\dfrac{1}{2}+\dfrac{1}{2}\dfrac{1}{\left(\sin x + 2 \right)^2}$$</p>
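Since $(\sin x+2)^2$ ranges over $[1,9]$, the split gives the range $\left[\frac12+\frac1{18},\,\frac12+\frac12\right]=\left[\frac59,\,1\right]$. A quick numerical confirmation (illustrative only):

```python
import math

def f(x):
    s = math.sin(x)
    return (s * s + 4 * s + 5) / (2 * s * s + 8 * s + 8)

# sample one full period; the extremes occur at sin x = 1 and sin x = -1
vals = [f(2 * math.pi * k / 100000) for k in range(100000)]
lo, hi = min(vals), max(vals)
```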
557,426
<p>I have 5 ring oscillators whose frequencies are f1, f2, ..., f5. Each ring oscillator (RO) has 5 inverters. For each RO, I just randomly pick 3 inverters out of 5 inverters. For example, in RO1, I pick inverter 1,3,5 (Notation: RO1(1,3,5)). So I have the following:</p> <p>RO1(1,3,5) (I call this the configuration of RO1)</p> <p>RO2(1,2,4) (configuration of RO2)</p> <p>RO3(2,4,5)</p> <p>RO4(1,4,5)</p> <p>RO5(2,3,4)</p> <p>Let's say f1 &lt; f3 &lt; f4 &lt; f5 &lt; f2 (I call this a rank). So for this particular rank, I have:</p> <p>{RO1(1,3,5), RO3(2,4,5), RO4(1,4,5), RO5(2,3,4), RO2(1,2,4)}</p> <p>For a different rank, let's say f1 &lt; f2 &lt; f3 &lt; f4 &lt; f5, I will have:</p> <p>{RO1(1,3,5), RO2(1,2,4), RO3(2,4,5), RO4(1,4,5), RO5(2,3,4)}</p> <p>So as you can see, the rank affects how things are arranged.</p> <p>One trivial encoding is:</p> <p>{RO1(1,3,5), RO3(2,4,5), RO4(1,4,5), RO5(2,3,4), RO2(1,2,4)} = 000...00 (all zeros)</p> <p>{RO1(1,2,5), RO3(2,4,5), RO4(1,4,5), RO5(2,3,4), RO2(1,2,4)} = 000...01 (all zeros with a single 1)</p> <p>etc ... (repeat the whole process until we exhaust all combinations for all configurations for all ranks).</p> <p>This trivial encoding is <strong><em>NOT efficient</em></strong> because I need to store the entire mapping table in memory so that later on, I can use it for encoding.</p> <p>My question is: Is there a better way to carry out the encoding without having access to the entire mapping table? </p> <p><em><strong>Clarification:</strong></em> For 5 ROs, there are 5! different possible ranks. So I will need ceiling(log(5!))=7 bits (log here is base-2) to encode 5! different ranks.</p> <p>Each RO has (5 choose 3) different possible configurations. So for 5 ROs, there are (5 choose 3)^5=10^5 possible configuration combinations. 
Therefore, I will need ceiling(log(10^5))=17 bits to encode all possible combinations.</p> <p>So let me re-state my question in a clearer way: Given a specific rank of 5 ROs and the configurations of the 5 ROs, how can I efficiently encode that information into 24 bits?</p>
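The bit counts stated above check out (sketch; `math.comb` needs Python 3.8+):

```python
import math

rank_bits = math.ceil(math.log2(math.factorial(5)))       # 5! = 120 possible ranks
config_bits = math.ceil(math.log2(math.comb(5, 3) ** 5))  # 10^5 configuration combinations
total_bits = rank_bits + config_bits
```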
John Dvorak
49,851
<p>To encode a permutation into a number, you can use the <a href="http://en.wikipedia.org/wiki/Factoradic" rel="nofollow noreferrer">factoradic base</a>. A factoradic representation of a positive integer $x$ is the unique sequence of digits $d_n..d_0$ such that each digit $d_i$ is at most $i$ and the sum $\sum_{i=0}^{n}(i!\,d_i) = x$</p> <p>To encode a permutation word into a factoradic number, you could represent each symbol of the word by the digit $d$ if it's the $d^\text{th}$ symbol in the alphabet not used yet, but this could get hard to implement, both in hardware and in software. Instead, I suggest using the Fisher-Yates shuffle. To encode:</p> <ul> <li>represent the input word as an array <code>a</code> of numbers from <code>0..n</code></li> <li>for each number <code>a_i</code> from n downto 0 <ul> <li>find the leftmost index <code>i</code> of the number <code>a_i</code> in <code>a</code></li> <li>store <code>a[i]</code> into <code>a[a_i]</code></li> <li>push <code>i</code> into the output, most significant digit first</li> </ul></li> </ul> <p>To decode:</p> <ul> <li>initialize <code>a</code> as an array of <code>n</code> elements</li> <li><p>for each index <code>i</code> from <code>0</code> to <code>n</code></p> <ul> <li>store <code>a[input[i]]</code> into <code>a[i]</code>, where <code>input[0]</code> is the least significant digit (0) of the input, unless <code>input[i] == i</code></li> <li><p>store <code>i</code> into <code>a[input[i]]</code></p> <p>these two steps are equivalent to initialising <code>a[i]</code> to <code>i</code> and swapping it with <code>a[input[i]]</code></p></li> </ul></li> <li>output the resulting array <code>a</code></li> </ul> <p>Translating these into a hardware circuit produces a 2D grid of 3-bit constant comparators, 2-way multiplexers, encoders and lots of wiring in case of the encoder, and a 2D grid of decoders, 2-way multiplexers, bigger multiplexers and lots of wiring in case of the decoder.</p> <p><img 
src="https://i.stack.imgur.com/GdCVA.png" alt="moderately aesthetically pleasing depiction of both contraptions"></p> <p>You can encode a 5-digit factoradic number into 7 bits by calculating its value, but if arithmetic is expensive you can also just store: $a_0$ is always zero; $a_1$ as one bit; $a_3$ as two bits; $a_2$ and $a_4$ as a four-bit value. Or you could store each digit separately. These then pack nicely into a byte. But this costs us an extra bit, which is a luxury we don't have.</p> <hr> <p>To encode a $\binom{5}{3}$ selection into 4 bits, you could use (five times) a lookup table. It's just 10 terms, after all. Moreover 4->4 LUTs are the building blocks of FPGAs (and the fifth bit is just an NXOR of the remaining four). You could use arithmetic to represent the table implicitly, but this would end up much bigger in hardware (and you need even bigger LUTs anyways), and much less readable in software (<em>and</em> bigger). </p> <p>This alone costs us 20 bits total (plus seven for the permutation), three more than is the budget. Two decades can be packed into seven bits, but we're still one bit over. Three decades fit into ten bits, so pack our decades by 2 and 3, resulting in the optimal amount. </p> <p>You could use arithmetic to pack digits, but if you want to implement the packing in hardware, there is an alternative encoding, named <a href="http://en.wikipedia.org/wiki/Densely_packed_decimal" rel="nofollow noreferrer">"Densely Packed Decimals (DPD)"</a>. It passes numbers 0-79 unmodified, and it encodes two digits to seven bits or three digits to ten bits.</p> <p>Assume the three digits are <code>abcd efgh ijkm</code> and their DPD encoding is <code>pqr stu v wxy</code>. 
Then</p> <blockquote> <p>The encoding is:</p> <pre><code> p = (a*f*i) + (a*j) + b q = (a*g*i) + (a*k) + c r = d s = (-a*e*j) + (f*-i) + (-a*f) + (e*i) t = (-a*e*k) + (a*i) + g u = h v = a + e + i w = (-e*j) + (e*i) + a x = (-a*k) + (a*i) + e y = m </code></pre> <p>The decoding is:</p> <pre><code> a = (-s*v*w) + (t*v*w*x) + (v*w*-x) b = (p*s*x) + (p*-w) + (p*-v) c = (q*s*x) + (q*-w) + (q*-v) d = r e = (t*v*-w*x) + (s*v*w*x) + (-t*v*x) f = (p*t*v*w*x) + (s*-x) + (s*-v) g = (q*t*w) + (t*-x) + (t*-v) h = u i = (t*v*w*x) + (s*v*w*x) + (v*-w*-x) j = (p*-s*-t*w) + (s*v*-w*x) + (p*w*-x) + (-v*w) k = (q*-s*-t*v*w) + (q*v*w*-x) + (t*v*-w*x) + (-v*x) m = y </code></pre> <p>Herein is: <code>* = AND, + = OR, - = NOT</code>.</p> </blockquote> <p>ref.: <a href="http://web.archive.org/web/20070824053303/http://home.hetnet.nl/mr_1/81/jhm.bonten/computers/bitsandbytes/wordsizes/ibmpde.htm#dense7" rel="nofollow noreferrer">http://web.archive.org/web/20070824053303/http://home.hetnet.nl/mr_1/81/jhm.bonten/computers/bitsandbytes/wordsizes/ibmpde.htm#dense7</a></p> <hr> <p>Note that if the ring oscillators are guaranteed to be different, the amount of possibilities shrinks from $2^{16} &lt; 10^5 &lt; 2^{17}$ to just $2^7 &lt; \binom{10}{5} = 252 &lt; 2^8$. I wouldn't be afraid to spare an 8-bit lookup table for this.</p> <p>also note that while <a href="http://www.wolframalpha.com/input/?i=47%3B2*log_2%285%21*%285+choose+3%29%5E5%29%3B48" rel="nofollow noreferrer">we are a tiiiny bit over 47 bits at two messages</a>, it is possible to save one bit out of 72 by packing three transmissions into one. It actually suffices to unpack (or to not pack) all three pairs of decades, and repack them as two triplets. 
You lose byte alignment (and base64-alignment), however.</p> <p><a href="http://www.wolframalpha.com/input/?i=9*log_2%285%21*%285+choose+3%29%5E5%29" rel="nofollow noreferrer">It should also be possible to save an extra bit every ninth transmission</a>, but don't ask me to do this in hardware. It's not too hard in software if your language supports arbitrary precision arithmetic, however. </p>
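Here is a software sketch of the factoradic (Lehmer-code) ranking for the 7-bit permutation part; the function names are mine, and the remaining 17 bits would carry the five configuration indices as described above:

```python
import math
from itertools import permutations

def perm_rank(perm):
    """Lexicographic rank of a permutation of 0..n-1 via its Lehmer code."""
    n, x = len(perm), 0
    for i in range(n):
        # digit d_{n-1-i}: how many later entries are smaller than perm[i]
        smaller_later = sum(perm[j] < perm[i] for j in range(i + 1, n))
        x += smaller_later * math.factorial(n - 1 - i)
    return x

def perm_unrank(x, n):
    """Inverse of perm_rank: rebuild the permutation from its rank."""
    avail = list(range(n))
    out = []
    for i in range(n - 1, -1, -1):
        d, x = divmod(x, math.factorial(i))
        out.append(avail.pop(d))
    return tuple(out)

# every rank of a 5-element permutation fits in 7 bits
ranks = [perm_rank(p) for p in permutations(range(5))]
```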
2,541,997
<p>For what values of n can {1, 2, . . . , n} be partitioned into three subsets with equal sums?</p> <p>I noticed that somehow the sum from 1 to n has to be a multiple of 3 and the common sum among these 3 subsets is this sum divided by 3, but it's still not a convincing argument. How do you prove there exist 3 subsets of operands such that their sums evaluate to this number?</p> <p>My solution made from other solutions</p> <p>Theorem: The given set can be partitioned into three subsets with equal sums when $\sum_{i = 1}^{n}i = \frac{1}{2} n(n+1) \equiv 0 \mod 3$ except when $n = 3$, so $n \equiv 0,2 \mod 3$</p> <p>Here are some cases which will be important to prove our theorem<br> n = 5 $\{1,4\},\{2,3\},\{5\}$<br> n = 6 $\{1,6\},\{2,5\},\{3,4\}$<br> n = 8 $\{8,4\},\{7,5\},\{1,2,3,6\}$<br> n = 9 $\{9,6\}, \{8,7\}, \{1,2,3,4,5\}$</p> <p>Also note that if $n \equiv 0,2\mod 3$ then $n \equiv 0,2,3,5 \mod 6$ and $\{5,6,8,9\}$ represents these residues, so we can start creating subsets with equal sums with the last 6 elements of the n elements. Once we are done with these 6 elements, we continue with the next 6 until we get to 5, 6, 8 or 9 and then we apply the base case.</p>
Asinomás
33,907
<p>Clearly we need $n\equiv 0,2\bmod 3$ for the sum to be a multiple of $3$.</p> <p>We need two disjoint sets with sum $s=\frac{n(n+1)}{6}$.</p> <p>For the first set $A$ take $1,3,5,\dots$ until the next number would exceed $s$.</p> <p>For the second set $B$ take $2,4,\dots $ until the next number would exceed $s$.</p> <p>How many numbers have not been used?</p> <p>The sum of the remaining numbers is at least $a=\frac{n(n+1)}{6}$, and since each number is at most $n$ there are at least $\frac{n+1}{6}$ remaining numbers.</p> <p>In other words the numbers $\underbrace{n,n-1,\dots}_a $ are unused.</p> <p>We can use these numbers to fix $A$ and $B$. (more on this when I'm up to it)</p>
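Brute force confirms the pattern for small $n$: a partition exists exactly when $3$ divides $\frac{n(n+1)}{2}$ and $n\neq 3$ (illustrative check only):

```python
from itertools import combinations

def can_three_partition(n):
    """Can {1, ..., n} be split into three subsets with equal sums?"""
    total = n * (n + 1) // 2
    if total % 3:
        return False
    target = total // 3
    nums = list(range(1, n + 1))
    # pick a subset A summing to target, then a subset B of the rest;
    # the remaining elements automatically sum to target as well
    for ka in range(1, n):
        for A in combinations(nums, ka):
            if sum(A) != target:
                continue
            rest = [x for x in nums if x not in A]
            for kb in range(1, len(rest)):
                if any(sum(B) == target for B in combinations(rest, kb)):
                    return True
    return False

results = {n: can_three_partition(n) for n in range(1, 13)}
```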
3,278
<h3>What are Community Promotion Ads?</h3> <p>Community Promotion Ads are community-vetted advertisements that will show up on the main site, in the right sidebar. The purpose of this question is the vetting process. Images of the advertisements are provided, and community voting will enable the advertisements to be shown.</p> <h3>Why do we have Community Promotion Ads?</h3> <p>This is a method for the community to control what gets promoted to visitors on the site. For example, you might promote the following things:</p> <ul> <li>the site's twitter account</li> <li>useful tools or resources for the mathematically inclined</li> <li>interesting articles or findings for the curious</li> <li>cool events or conferences</li> <li>anything else your community would genuinely be interested in</li> </ul> <p>The goal is for future visitors to find out about <em>the stuff your community deems important</em>. This also serves as a way to promote information and resources that are <em>relevant to your own community's interests</em>, both for those already in the community and those yet to join. </p> <h3>How does it work?</h3> <p>The answers you post to this question <em>must</em> conform to the following rules, or they will be ignored. </p> <ol> <li><p>All answers should be in the exact form of:</p> <pre><code>[![Tagline to show on mouseover][1]][2] [1]: http://image-url [2]: http://clickthrough-url </code></pre> <p>Please <strong>do not add anything else to the body of the post</strong>. If you want to discuss something, do it in the comments.</p></li> <li><p>The question must always be tagged with the magic <a href="/questions/tagged/community-ads" class="post-tag moderator-tag" title="show questions tagged 'community-ads'" rel="tag">community-ads</a> tag. 
In addition to enabling the functionality of the advertisements, this tag also pre-fills the answer form with the above required form.</p></li> </ol> <h3>Image requirements</h3> <ul> <li>The image that you create must be <strong>220 x 250 pixels</strong></li> <li>Must be hosted through our standard image uploader (imgur)</li> <li>Must be GIF or PNG</li> <li>No animated GIFs</li> <li>Absolute limit on file size of 150 KB</li> </ul> <h3>Score Threshold</h3> <p>There is a <strong>minimum score threshold</strong> an answer must meet (currently <strong>6</strong>) before it will be shown on the main site.</p> <p>You can check out the ads that have met the threshold with basic click stats <a href="http://meta.math.stackexchange.com/ads/display/3278">here</a>.</p>
Ilmari Karonen
9,602
<p><a href="http://www.proofwiki.org/" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GH4r0.png" alt="ProofWiki, the online compendium of mathematical proofs"></a></p>
2,221,807
<p>I know that this question has been answered before, however I have not seen a response that satisfies me on whether my proof will work.</p> <p><strong>Proof</strong></p> <p>Suppose $A \cup B$ is a separation of $X$. Then WLOG $X-A=B$ and is finite, but this implies that $X-B$ is infinite, thus $B$ is not an open set, which is a contradiction.</p> <p>My question: is it enough to show that $B$ is not open to get to this contradiction, or do I need to go further to reach my contradiction? </p>
DMcMor
155,622
<p>This is fine, but you don't need to say 'WLOG'. If $A\cup B$ is a separation of $X$ then it must be the case that $B$ is finite because $X\setminus A=B$ and $A$ is open, and this doesn't cause any possible generality issues.</p>
A. Thomas Yerger
112,357
<p>A slick way of saying what you've said is that a set is disconnected if it can be partitioned into two non-empty clopen sets. But in this topology no non-empty proper subset can be clopen: being closed it would have to be finite, while being open it would have a finite complement and hence be infinite, and a set cannot be both finite and infinite.</p>
145,429
<p>I have an expression like this:</p> <pre><code>expr = xuyz; </code></pre> <p>then</p> <pre><code>Head[expr] = xuyz </code></pre> <p>But I wanted the product of four factors, so it should have been written as <code>x*u*y*z</code> or <code>x u y z</code>, because Mathematica understands multiplication of four single-character variables in these two ways only.</p> <p>So I need to detect whether or not an expression is written in the right way. By "right" I mean all variables are required to be single-character, all factors to be such variables or functions thereof, and all terms to be products of such factors. </p> <p>I have tried with:</p> <pre><code>FreeQ[expr, Times]. </code></pre> <p>But it doesn't help, because I can have the situation</p> <pre><code>xu*y </code></pre> <p>which is not right for me, because <code>xu</code> and <code>y</code> are the factors and I want <code>x</code>, <code>u</code> and <code>y</code> to be the factors. </p> <p>So is there any way to detect this problem?</p>
Bob Hanlon
9,362
<pre><code>ClearAll[testVars] testVars::usage = "testVars[expr, allowedVars] returns variables appearing in expr that \ are not in the list allowedVars."; testVars[expr_, allowedVars_?VectorQ] := Complement[Variables[Level[expr, {-1}]], allowedVars] vars = {x, y, z}; testVars[xyx, vars] (* {xyx} *) </code></pre>
368,789
<p>Suppose I have a family of elliptic curves $E_{n}/\mathbb{Q}$. I would like to determine the torsion subgroup of $E_{n}(\mathbb{Q})$ denoted by $E_{n}(\mathbb{Q})_{\textrm{tors}}$. Two ways to do this are using Nagell-Lutz and computing the number of points over $\mathbb{F}_{\ell}$ for various $\ell$. Are there other ways to determine the torsion subgroup of an elliptic curve?</p>
Álvaro Lozano-Robledo
14,699
<p>The two ways that you mention, plus looking your curve up in <a href="http://www.lmfdb.org/EllipticCurve/Q" rel="noreferrer">a database</a> (as Matt E. suggests), are the most practical and efficient ways I can think of. Here are two other ways to do this, one practical (but not as efficient) and one which is by no means practical (I think):</p> <ul> <li><p>Use <a href="http://en.wikipedia.org/wiki/Division_polynomials" rel="noreferrer">division polynomials</a>. If $E$ is defined over $\mathbb{Q}$, then $E(\mathbb{Q})$ can only have torsion points of order up to certain bounds. If $P$ is a point of prime order, then the order is $2$, $3$, $5$, or $7$ (by Mazur's theorem). You can find the division polynomials for each of these primes, and see if they have any roots in $\mathbb{Q}$. A root in $\mathbb{Q}$ would give you a rational $x$-coordinate of a torsion point of said order. You can also find the division polynomials of order $p^n$. If a point $P$ has order $p^n$, then the order is at most $8$, $9$, $5$, or $7$, so you only really need to find the $8$th, $9$th, $5$th and $7$th division polynomials, and factor those, to find all the torsion points on $E(\mathbb{Q})$.</p></li> <li><p>Not practical, and conjectural: Use the Birch and Swinnerton-Dyer conjecture. BSD provides a <a href="http://en.wikipedia.org/wiki/Birch_and_Swinnerton-Dyer_conjecture#History" rel="noreferrer">conjectural formula for a certain Taylor coefficient</a> of the $L$-series of the elliptic curve in question. The size of the torsion subgroup appears in the denominator of the coefficient... However, this coefficient is a real number. The real period $\Omega_E$ can always be calculated. 
If you could calculate the regulator $R_E$ of your elliptic curve (for instance, if you know the rank is $0$, then $R_E=1$), then you could calculate $\Omega_E\cdot R_E$, calculate the coefficient computationally, divide the result by $\Omega_E\cdot R_E$ and obtain a rational number, whose denominator is a divisor of the square of the size of the torsion subgroup of $E$ (notice that there may be some cancellation with the numerator!). This information would then allow you to find the torsion points on $E$ (see previous bullet point).</p></li> </ul> <p><strong>Example (with division polynomials)</strong>: Let $E$ be the curve <code>54b3</code> in Cremona's database, with Weierstrass equation $$E: y^2 +xy+y= x^3-x^2-14x+29.$$ Let us first find division polynomials:</p> <ul> <li><p>($p=2$) The 2nd division polynomial, defining the $x$-coordinates of $2$-torsion points, is given by $4x^3 - 3x^2 - 54x + 117$. This is irreducible, so there are no $2$-torsion points defined over $\mathbb{Q}$.</p></li> <li><p>($p=3$) The 3rd division polynomial is $$3x^4 - 3x^3 - 81x^2 + 351x - 270,$$ and it factors as $3(x - 1)(x^3 - 27x + 90)$. Thus, there is a point of order $3$ with $x$-coordinate $1$. By plugging into the equation of $E$, we find that $(1,3)$ and $(1,-5)$ are points of order $3$ on $E$. Since we found a point of order $3$, we need to keep going and look for points modulo $9$, $27$ (can't happen over $\mathbb{Q}$), etc... </p> <ul> <li><p>($9$) The $9$ division polynomial has degree $40$ so we won't reproduce it here. However, it has a bunch of linear factors in its factorization: $$(x - 9)(x - 3)(x - 1)(x + 3)\cdot (\text{higher order factors}).$$ The points with $x=1$ are of no interest to us, since they are points of order $3$. The other coordinates $x=\pm 3$ and $x=9$ correspond to points of order exactly 9, so we need to check if their $y$-coordinates are in $\mathbb{Q}$. 
Indeed, they do correspond to points of order $9$, namely $$(3,1), (-3,7), (9,-29),(9,19),(-3,-5), (3,-5).$$ Notice, however, that $(3,1)$ generates all of them. We move on to order $27$, since we found points of order $9$.</p> <ul> <li>($27$) The 27th division polynomial is of degree $364$, but all the linear factors in its factorization already appeared in the 9th division polynomial, so there are no points of order $27$ (again, there are NO points of order $27$ on elliptic curves over $\mathbb{Q}$).</li> </ul></li> <li><p>($p=5$) The 5th division polynomial is of degree $12$ and irreducible. Thus, there are no points of order $5$.</p></li> <li><p>($p=7$) The 7th division polynomial is of degree $24$ and irreducible. Thus, there are no points of order $7$.</p></li> </ul></li> </ul> <p>Hence, $E(\mathbb{Q})_\text{tors} \cong \mathbb{Z}/9\mathbb{Z}$.</p> <p><strong>Example (with BSD and L-functions)</strong>: Let $E$ be the curve <code>54b3</code> in Cremona's database, with Weierstrass equation $$E: y^2 +xy+y= x^3-x^2-14x+29.$$ First, we perform a $2$-descent on $E$ to calculate the $2$-Selmer group. It turns out to be trivial. This means two things: the rank is $0$, and Sha is trivial. In particular, the regulator is $R_E=1$. Moreover:</p> <ul> <li>The real period is $\Omega_E=3.09156554910300755665231500284\cdots$</li> <li>The Tamagawa numbers are $c_2=9$ and $c_3=3$ for $p=2$ and $3$ respectively.</li> <li>The value of the L-function at $1$ can be calculated to be $$L(E,1)\approx 1.03052184970100251888410500095\cdots$$ If we believe BSD in this case, then we must have: $$(\# E(\mathbb{Q})_\text{tors})^2 = \frac{\# \text{Sha} \cdot \Omega_E\cdot R_E\cdot \prod c_p}{L(E,1)} = \frac{1\cdot (3.09156554910300755665231500284\cdots)\cdot 1 \cdot 27}{1.03052184970100251888410500095\cdots} =81.000000000000000000000000000\cdots.$$</li> </ul> <p>Thus, since $(\# E(\mathbb{Q})_\text{tors})^2$ must be an integer, it must be $81$, and $\# E(\mathbb{Q})_\text{tors} = 9$. 
Since $ E(\mathbb{Q})_\text{tors}$ is an abelian group, there are only two options: $\mathbb{Z}/3\mathbb{Z}\times \mathbb{Z}/3\mathbb{Z}$ or $\mathbb{Z}/9\mathbb{Z}$. However, the former is impossible because if $E[n]$ is defined over a field $F$, then $F$ must contain the $n$th roots of unity. But $\mathbb{Q}$ does not contain $\sqrt{-3}$. Hence, it must be $\mathbb{Z}/9\mathbb{Z}$.</p> <p><strong>Magma code</strong> Here is Magma code for all I did above:</p> <blockquote> <p>E:=EllipticCurve("54b3");E;</p> <p>DivisionPolynomial(E,2);</p> <p>Factorization(DivisionPolynomial(E,2));</p> <p>DivisionPolynomial(E,3);</p> <p>Factorization(DivisionPolynomial(E,3));</p> <p>IsPoint(E,1);</p> <p>P:=E![1,3,1];</p> <p>2*P; 3*P;</p> <p>DivisionPolynomial(E,9);</p> <p>Factorization(DivisionPolynomial(E,9));</p> <p>IsPoint(E,3);</p> <p>IsPoint(E,9);</p> <p>P:=E![3,1,1];</p> <p>P,2*P,3*P,4*P,5*P,6*P,7*P,8*P,9*P;</p> <p>Degree(DivisionPolynomial(E,27));</p> <p>// Factorization(DivisionPolynomial(E,27));</p> <p>DivisionPolynomial(E,5);</p> <p>Factorization(DivisionPolynomial(E,5));</p> <p>DivisionPolynomial(E,7);</p> <p>Factorization(DivisionPolynomial(E,7));</p> <p>TwoSelmerGroup(E);</p> <p>RealPeriod(E);</p> <p>TamagawaNumbers(E);</p> <p>L:=LSeries(E);</p> <p>Evaluate(L,1);</p> <p>27*RealPeriod(E)/Evaluate(L,1);</p> </blockquote>
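The mod-ℓ point-counting approach mentioned in the question can be sketched naively in Python for this curve (O(ℓ²) enumeration, fine for tiny ℓ): for a prime ℓ of good reduction, the rational torsion of order prime to ℓ injects into E(F_ℓ), so the torsion order divides each count.

```python
from math import gcd

def count_points(ell, a1, a2, a3, a4, a6):
    """Naive count of points on y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6
    over F_ell, including the point at infinity."""
    n = 1  # point at infinity
    for x in range(ell):
        rhs = (x**3 + a2 * x * x + a4 * x + a6) % ell
        for y in range(ell):
            if (y * y + a1 * x * y + a3 * y) % ell == rhs:
                n += 1
    return n

# the curve y^2 + xy + y = x^3 - x^2 - 14x + 29 from the examples above
coeffs = (1, -1, 1, -14, 29)
counts = [count_points(ell, *coeffs) for ell in (5, 7, 11, 13)]
g = 0
for c in counts:
    g = gcd(g, c)
```

Hasse's bound forces #E(F_5) and #E(F_7) to equal 9 here, so the gcd pins the torsion order down to 9, matching the division-polynomial and BSD computations.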
3,768,086
<p>Show that <span class="math-container">$(X_n)_n$</span> converges in probability to <span class="math-container">$X$</span> if and only if for every continuous function <span class="math-container">$f$</span> with compact support, <span class="math-container">$f(X_n)$</span> converges in probability to <span class="math-container">$f(X).$</span></p> <p><span class="math-container">$\implies$</span> is very easy, the problem is with the converse. Any suggestions to begin?</p>
lulu
252,071
<p>The problem is not clear as stated.</p> <p>Interpretation <span class="math-container">$\#1$</span>: If you interpret it as &quot;find the probability that the game ends in an even-numbered round&quot; you can reason recursively.</p> <p>Let <span class="math-container">$P$</span> denote the answer. The probability that the game ends in the first round is <span class="math-container">$\frac 26+\frac 46\times \frac 46=\frac 79$</span>. If the game doesn't end in the first round, the parities of the round numbers shift, so the conditional probability is now <span class="math-container">$1-P$</span>. Thus <span class="math-container">$$P=\frac 79\times 0 +\frac 29\times (1-P)\implies \boxed{P=\frac 2{11}}$$</span></p> <p>as in your solution.</p> <p>Interpretation <span class="math-container">$\#2$</span>: If the problem meant &quot;find the probability that <span class="math-container">$B$</span> wins given that <span class="math-container">$A$</span> starts&quot; that too can be solved recursively. Let <span class="math-container">$\Psi$</span> denote that answer and let <span class="math-container">$\Phi$</span> be the probability that <span class="math-container">$B$</span> wins given that <span class="math-container">$B$</span> starts. Then <span class="math-container">$$\Psi=\frac 46\times \Phi$$</span> and <span class="math-container">$$\Phi=\frac 46 +\frac 26\times \Psi$$</span> This system is easily solved and yields <span class="math-container">$$\boxed {\Psi=\frac 47}$$</span> as desired.</p>
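Both recursions are linear in a single unknown, so they can be checked mechanically with exact rational arithmetic (a quick sketch, not part of the argument above):

```python
from fractions import Fraction

# Interpretation #1: P = (7/9)*0 + (2/9)*(1 - P)
P = Fraction(2, 9) / (1 + Fraction(2, 9))
assert P == Fraction(2, 11)

# Interpretation #2: Psi = (4/6)*Phi and Phi = 4/6 + (2/6)*Psi.
# Substituting the first equation into the second gives Phi in closed form.
Phi = Fraction(4, 6) / (1 - Fraction(2, 6) * Fraction(4, 6))
Psi = Fraction(4, 6) * Phi
assert Psi == Fraction(4, 7)
```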
1,428,377
<p>So I was watching the show Numb3rs, and the math genius was teaching, and something he did just stumped me.</p> <p>He was asking his class (more specifically a student) which of the three cards hides the car. The other two cards have an animal on them. Now, the student picked the middle card to begin with. So the cards look like this</p> <pre><code>+---+---+---+
| 1 | X | 3 |
+---+---+---+
</code></pre> <p><em>The <code>X</code> Representing The Picked Card</em></p> <p>Then he flipped over the third card, and it turned out to be an animal. All that is left now is one more animal, and a car. He asks the student if the chances are higher of getting a car if they switch cards. The student responds no (That's what I thought too).</p> <p>The student was wrong. What the teacher said is "Switching cards actually doubles your chances of getting the car".</p> <p>So my question is: why does switching selected cards double your chances of getting the car when 1 of the 3 cards is already revealed? I thought it would still be a simple 50/50; please explain why the chances double!</p>
Graham Kemp
135,106
<p>The situation your first intuition tells you happens is that if the host picks a card <em>entirely at random</em> then there's an equal chance of either other card revealing a car.</p> <p>However, the <em>actual</em> situation is that the host picks a card <em>after</em> you made the first selection.</p> <p>So, you have either selected a car or not. &nbsp; There's a $1/3$ chance you selected the car using unbiased random selection.</p> <p>Now, if the host were to reveal a car it would be game over (you would be certainly wrong to either keep your choice or pick the remaining card.) &nbsp; Still, we know this hasn't happened because the host did not reveal a car. &nbsp; (Typically this never happens because the host is using <em>foreknowledge</em> to avoid this branch of the probability tree. However even if that's not the case, we still know a car was not revealed.)</p> <ul> <li><p>If you selected a car, the host was free to flip either other card. &nbsp; Then if you change your mind you are certainly wrong.</p></li> <li><p>If you did not select a car, the card the host didn't choose is a car. &nbsp; Then if you change your mind you are certainly right.</p></li> </ul> <p>Thus there is a $2/3$ chance of you being right if you change your mind.</p>
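A short Monte Carlo simulation reproduces these numbers (an illustration with a seeded RNG; the host's choice among equally valid cards is made deterministically here, which does not affect the stay/switch win probabilities):

```python
import random

def play(switch, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # the host flips a card that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

# staying wins about 1/3 of the time, switching about 2/3
assert abs(play(False) - 1/3) < 0.01
assert abs(play(True) - 2/3) < 0.01
```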
498,694
<p>So, I'm learning limits right now in calculus class.</p> <p>When $x$ approaches infinity, what does this expression approach?</p> <p>$$\frac{(x^x)}{(x!)}$$</p> <p>Why? Since, the bottom is $x!$, doesn't it mean that the bottom goes to zero faster, therefore the whole thing approaches 0?</p>
Ron Gordon
53,268
<p>Here's $10^{10}$:</p> <p>$$10 \cdot 10 \cdot 10\cdot 10\cdot 10\cdot 10\cdot 10\cdot 10\cdot 10\cdot 10$$</p> <p>Here's $10!$:</p> <p>$$10 \cdot 9 \cdot 8 \cdot 7 \cdot 6 \cdot 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1$$</p> <p>Which one is bigger? Carry that thought out for larger and larger numbers, and you'll see the answer.</p>
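In code, the ratio $x^x/x!$ blows up quickly (a quick illustration; by Stirling's formula it grows roughly like $e^x/\sqrt{2\pi x}$):

```python
import math

# x^x / x! grows without bound as x grows
ratios = {x: x**x / math.factorial(x) for x in (5, 10, 20)}
assert ratios[5] < ratios[10] < ratios[20]
assert ratios[20] > 1e6
```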
1,672,080
<p>I have trouble understanding the concepts of quotient topology and product topology (in the infinite case). </p> <p>I know that we want to give a topology to new spaces built from the old ones, but I can't figure out why the definition of the quotient topology is natural, since we only require that the canonical projection be continuous (I think this definition is given tersely). On the other hand, I don't understand why the box topology doesn't work in the infinite case, so that we have to define a very special topology whose basic open sets are infinite tuples in which most factors are the whole space itself. How could this work?</p> <p>And can you recommend some exercises to put these concepts into practice, please?</p> <p>Thanks a lot in advance.</p>
Alex Provost
59,556
<p>A guiding principle in modern mathematics is that studying morphisms between objects is even more important than studying the objects themselves. In the category of topological spaces, this means that one should be more interested in continuous maps between spaces than in the spaces themselves. From this point of view, it is natural to endow certain spaces with a topology that is the "least trivial possible" and that makes canonical maps from/to this space continuous.</p> <p>The quotient topology is an example of that. Of course, we could endow $X/{\sim}$ with the trivial (indiscrete) topology, and then the canonical map $X \to X/{\sim}$ would be continuous, but this is of little interest as in fact any map $Y \to X/{\sim}$ would be continuous. In a way we want to give the quotient space the most <em>optimal</em> topology that makes the canonical projection continuous. In this case "most optimal topology" would mean "finest topology". One should also be content with the definition of this topology because in practice it truly reflects how we like to think of quotient spaces as being spaces glued together under an equivalence relation: gluing together two ends of a line segment yields a circle, gluing the boundary of a disk to a point yields a sphere, etc.</p> <p>Similarly, and especially after considering how product objects generally behave in other categories, one would want to endow $\Pi_{i \in I} X_i$ with the "most optimal" topology that makes the projection maps $\Pi_{i \in I} X_i \to X_j$ continuous. This topology is the product topology. The box topology, on the other hand, can be deceiving in the infinite case because one can have a discontinuous map $Y \to \Pi_{i \in I}X_i$ such that all its component functions $Y \to X_j$ are continuous. These are the kinds of things we want to avoid.</p>
3,612,351
<p>It is given that a function f(x) satisfies: <span class="math-container">$$f(x)=3f(x+1)-3f(x+2)\quad \text{ and } \quad f(3)=3^{1000}$$</span> then find the value of <span class="math-container">$f(2019)$</span>.</p> <p>I further wanted to ask whether there is some general method to solve such equations. The method that I know for solving such questions is to substitute <span class="math-container">$x$</span> with <span class="math-container">$x+1$</span> in the equation, thereby making the new equation <span class="math-container">$$ f(x+1)=3f(x+2)-3f(x+3)$$</span> then again substitute <span class="math-container">$x$</span> with <span class="math-container">$x+2$</span> in the original equation and make the new equation <span class="math-container">$$f(x+2)=3f(x+3)-3f(x+4)$$</span> Do this a couple of times, and then, on combining the equations, in most such questions we get some relation like f(x) = f(x+a), but that does not work here. Please share your ideas on how to solve such questions.</p>
Gareth Ma
623,901
<p><span class="math-container">$$ \begin{align} f(x)&amp;=f(x-1)-\frac{1}{3}f(x-2),\quad f(3)=3^{1000}\\ f(x+1)&amp;=f(x)-\frac{1}{3}f(x-1)=\left(f(x-1)-\frac{1}{3}f(x-2)\right)-\frac{1}{3}f(x-1)\\ f(x+1) &amp;= \frac{2}{3}f(x-1)-\frac{1}{3}f(x-2)\\ f(x+2)&amp;=f(x+1)-\frac{1}{3}f(x)=\left(\frac{2}{3}f(x-1)-\frac{1}{3}f(x-2)\right)-\frac{1}{3}\left(f(x-1)-\frac{1}{3}f(x-2)\right)\\ f(x+2)&amp;=\frac{1}{3}f(x-1)-\frac{2}{9}f(x-2)\\ f(x+3)&amp;=f(x+2)-\frac{1}{3}f(x+1)=\left(\frac{1}{3}f(x-1)-\frac{2}{9}f(x-2)\right)-\frac{1}{3}\left(\frac{2}{3}f(x-1)-\frac{1}{3}f(x-2)\right)\\ f(x+3) &amp;= \frac{1}{9}f(x-1)-\frac{1}{9}f(x-2)\\ f(x+4)&amp;=f(x+3)-\frac{1}{3}f(x+2)=\left(\frac{1}{9}f(x-1)-\frac{1}{9}f(x-2)\right)-\frac{1}{3}\left(\frac{1}{3}f(x-1)-\frac{2}{9}f(x-2)\right)\\ f(x+4)&amp;=\left(-\frac{1}{9}+\frac{2}{27}\right)f(x-2)=-\frac{1}{27}f(x-2)\end{align}$$</span></p> <p><span class="math-container">$$\therefore f(x+4)=-\frac{1}{27}f(x-2) \implies f(x+6)=-\frac{1}{27}f(x)$$</span></p> <p><span class="math-container">$$\implies f(x+6k)=\left(-\frac{1}{27}\right)^k f(x)\quad \forall k \in \mathbb{N}$$</span></p> <p><span class="math-container">$$f(2019)=f(3+6\cdot 336)=\left(-\frac{1}{27}\right)^{336} f(3)=\frac{1}{3^{1008}}\cdot 3^{1000} = \frac{1}{3^8}$$</span></p> <p><span class="math-container">$$\therefore f(2019) = \frac{1}{3^8}$$</span></p>
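The period-$6$ identity $f(x+6)=-\frac{1}{27}f(x)$ holds for every solution of the recurrence, whatever the starting values; here is a small exact-arithmetic check (illustrative, with arbitrary initial values):

```python
from fractions import Fraction

# f(x+2) = f(x+1) - f(x)/3, started from arbitrary rational values
seq = [Fraction(5), Fraction(-7)]
for _ in range(20):
    seq.append(seq[-1] - seq[-2] / 3)

# every solution of the recurrence satisfies f(n+6) = -f(n)/27
for i in range(len(seq) - 6):
    assert seq[i + 6] == -seq[i] / 27
```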
3,612,351
<p>It is given that a function f(x) satisfies: <span class="math-container">$$f(x)=3f(x+1)-3f(x+2)\quad \text{ and } \quad f(3)=3^{1000}$$</span> then find the value of <span class="math-container">$f(2019)$</span>.</p> <p>I further wanted to ask whether there is some general method to solve such equations. The method that I know for solving such questions is to substitute <span class="math-container">$x$</span> with <span class="math-container">$x+1$</span> in the equation, thereby making the new equation <span class="math-container">$$ f(x+1)=3f(x+2)-3f(x+3)$$</span> then again substitute <span class="math-container">$x$</span> with <span class="math-container">$x+2$</span> in the original equation and make the new equation <span class="math-container">$$f(x+2)=3f(x+3)-3f(x+4)$$</span> Do this a couple of times, and then, on combining the equations, in most such questions we get some relation like f(x) = f(x+a), but that does not work here. Please share your ideas on how to solve such questions.</p>
Bumblebee
156,886
<p>Note that we can rewrite the given recurrence such that <span class="math-container">$$af(n+1)-f(n)=b(af(n)-f(n-1))$$</span> with complex numbers <span class="math-container">$a=\sqrt{3}e^{i\pi/6}, b=\dfrac{\sqrt{3}}{3}e^{i\pi/6}.$</span> Inductively we have that <span class="math-container">$$af(n+1)-f(n)=b^{k+1}(af(n-k)-f(n-k-1))$$</span> for all <span class="math-container">$k\in\mathbb{N}.$</span> In particular, this quantity equals <span class="math-container">$b^{n}(af(1)-f(0)).$</span> Now note that <span class="math-container">$$a^{n+1}f(n+1)-a^nf(n)=(ab)^{n}A,$$</span> where <span class="math-container">$A=af(1)-f(0)$</span> is a constant. By telescoping we can obtain <span class="math-container">$$a^{n+1}f(n+1)-f(0)=A\sum_{k=0}^n e^{i\pi k/3}=A\left(\dfrac{1-\omega e^{i\pi n/3}}{1- \omega} \right)$$</span> where <span class="math-container">$\omega=e^{i\pi /3}$</span> is a cube root of <span class="math-container">$-1.$</span> Here the RHS is a periodic function of <span class="math-container">$n$</span> with period <span class="math-container">$6,$</span> and hence <span class="math-container">$$f(n)=-\dfrac{1}{27}f(n-6)=\dfrac{(-1)^k}{3^{3k}}f(n-6k)$$</span> for all <span class="math-container">$n, k\in \mathbb{N}.$</span> In particular, for <span class="math-container">$n=2019, k=336$</span> we get the desired answer.</p>
2,236,008
<p>Suppose $Z$ is a Gaussian distribution $N(0,\sigma^2)$. Is there a formula of upper bound for $P(Z\in [a,b])$, or do we know this probability is integral with respect to $\sigma\in \mathbf{R}$?</p>
spaceisdarkgreen
397,125
<p>The probability is given by $$ P(Z\in[a,b]) =\int_a^b \frac{1}{\sqrt{2\pi\sigma^2}}e^{-x^2/(2\sigma^2)}dx$$ i.e. you integrate the density of the $N(0,\sigma^2)$ over the interval. You can also substitute $u = x/\sigma$ into the formula and get $$ P(Z\in[a,b]) = \int_{a/\sigma}^{b/\sigma} \frac{1}{\sqrt{2\pi}}e^{-u^2/2}du$$ and this in turn can be expressed as $$ P(Z\in[a,b]) = \Phi(b/\sigma)-\Phi(a/\sigma)$$ where $\Phi(x) = \int_{-\infty}^xe^{-u^2/2}du/\sqrt{2\pi}$ is the standard normal cumulative.</p> <p>As for how to get an upper bound, it really depends on how good an upper bound you want and where $a$ and $b$ are. Usually you'd just compute it exactly since the normal cumulative is usually available. </p> <p>One crude way for $0&lt;a&lt;b$ would be to notice that $e^{-x^2/(2\sigma^2)} \le e^{-a^2/(2\sigma^2)}$ for $x\in (a,b)$ so you have $$ P(Z\in[a,b]) =\int_a^b \frac{1}{\sqrt{2\pi\sigma^2}}e^{-x^2/(2\sigma^2)}dx \le \frac{1}{\sqrt{2\pi\sigma^2}}e^{-a^2/(2\sigma^2)}\int_a^bdx = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-a^2/(2\sigma^2)}(b-a).$$ This works best when $b-a$ is small relative to $\sigma.$ This obviously doesn't work at all in the important case where $b=\infty$.</p>
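In code, $\Phi$ is available through the error function, so both the exact value and the crude bound are short functions (an illustrative sketch):

```python
import math

def phi(x):
    # standard normal CDF, Phi(x), via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def prob(a, b, sigma):
    # P(Z in [a, b]) for Z ~ N(0, sigma^2)
    return phi(b / sigma) - phi(a / sigma)

def crude_bound(a, b, sigma):
    # valid for 0 < a < b: the density is decreasing on (a, b)
    return (b - a) * math.exp(-a**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

p = prob(1.0, 1.5, 2.0)
assert 0 < p < crude_bound(1.0, 1.5, 2.0)
```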
654,617
<p>$v$ being a vector. I never understood what they mean and haven't found online resources. Just a quick question.</p> <p>I thought they were absolute value and magnitude, respectively, when applied to vectors; I need confirmation.</p>
AlexR
86,940
<p>I have found $|\cdot|$ to almost always represent the Euclidean ($2$-)norm of a vector in $\mathbb K^n$, and $\Vert\cdot\Vert$ is a general sign for a norm. In different contexts, both may be used for different norms, though. I have even seen $|||\cdot |||$ for a "special" norm.</p>
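For concreteness, here are the common vector norms side by side (an illustrative sketch; for any $v$ one has $\Vert v\Vert_\infty \le \Vert v\Vert_2 \le \Vert v\Vert_1$):

```python
import math

v = [3.0, -4.0, 12.0]
two_norm = math.sqrt(sum(x * x for x in v))   # often written |v| or ||v||_2
one_norm = sum(abs(x) for x in v)             # ||v||_1
inf_norm = max(abs(x) for x in v)             # ||v||_inf
assert two_norm == 13.0                       # sqrt(9 + 16 + 144)
assert inf_norm <= two_norm <= one_norm
```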
277,250
<p>Let $\mathbb{N}$ be the set of natural numbers and $\beta \mathbb N$ denotes the Stone-Cech compactification of $\mathbb N$. </p> <p>Is it then true that $\beta \mathbb N\cong \beta \mathbb N \times \beta \mathbb N $ ? </p>
მამუკა ჯიბლაძე
41,291
<p>(Just noticed - already done by Todd Trimble in a comment:)</p> <p>A proof by Stone duality: the dual question is whether the Boolean algebras $\mathscr P\mathbb N$ and $\mathscr P\mathbb N\otimes\mathscr P\mathbb N$ are isomorphic. (Here "$\otimes$" is the coproduct in Boolean algebras.)</p> <p>The answer is no since the former is complete and the latter is not: its completion is given by the canonical homomorphism $i:\mathscr P\mathbb N\otimes\mathscr P\mathbb N\to\mathscr P(\mathbb N\times\mathbb N)$, which can also serve to provide an explicit subset of $\mathscr P\mathbb N\otimes\mathscr P\mathbb N$ that does not possess a least upper bound (lub), by exhibiting an element of $\mathscr P(\mathbb N\times\mathbb N)$ not in the image of $i$.</p> <p>Namely, the image of $i$ is the Boolean subalgebra of $\mathscr P(\mathbb N\times\mathbb N)$ generated by the rectangles $S\times T\subseteq\mathbb N\times \mathbb N$, for $S,T\subseteq\mathbb N$, while there clearly are subsets of $\mathbb N\times\mathbb N$ which are not finite Boolean combinations of such rectangles.</p> <p>(<strong>Edit</strong> - found a simpler example: e. g. the diagonal $\mathbb N\subseteq \mathbb N\times\mathbb N$ is not a finite Boolean combination of rectangles; accordingly, the subset $$ \Delta:=\{\ \{n\}\otimes\{n\}\mid n=1,2,...\ \}\subseteq\mathscr P\mathbb N\otimes\mathscr P\mathbb N $$ does not have a least upper bound. Indeed, an element $S_1\otimes T_1\lor\cdots\lor S_k\otimes T_k\in\mathscr P\mathbb N\otimes\mathscr P\mathbb N$ is an upper bound for $\Delta$ if and only if $\left(S_1\cap T_1\right)\cup\cdots\cup\left(S_k\cap T_k\right)=\mathbb N$. 
Then for some $i\in\{1,...,k\}$ there are $n_1,n_2\in S_i\cap T_i\setminus\bigcup_{j\ne i}S_j\cap T_j$ with $n_1\ne n_2$, and then in the above finite join we can replace $S_i\otimes T_i$ with $$\left(S_i\setminus\{n_1\}\otimes T_i\setminus\{n_1\}\right)\lor\left(S_i\setminus\{n_2\}\otimes T_i\setminus\{n_2\}\right),$$resulting in a strictly smaller upper bound.)</p> <p>Turning back to "normal", this proof says that $\beta\mathbb N\times\beta\mathbb N$ is not extremally disconnected, while $\beta\mathbb N$ is.</p>
207,040
<p>Is there some way I can solve the following equation with <span class="math-container">$d-by-d$</span> matrices in Mathematica in reasonable time?</p> <p><span class="math-container">$$AX+X'B=C$$</span></p> <p>My solution below calls linsolve on <span class="math-container">$d^2,d^2$</span> matrix, which is too expensive for my case (my d is 1000)</p> <pre><code>kmat[n_] := Module[{mat1, mat2}, mat1 = Array[{#1, #2} &amp;, {n, n}]; mat2 = Transpose[mat1]; pos[{row_, col_}] := row + (col - 1)*n; poses = Flatten[MapIndexed[{pos[#1], pos[#2]} &amp;, mat2, {2}], 1]; Normal[SparseArray[# -&gt; 1 &amp; /@ poses]] ]; unvec[Wf_, rows_] := Transpose[Flatten /@ Partition[Wf, rows]]; vec[x_] := Flatten[Transpose[x]]; solveLyapunov2[a_, b_, c_] := Module[{}, dims = Length[a]; ii = IdentityMatrix[dims]; x0 = LinearSolve[ KroneckerProduct[ii, a] + KroneckerProduct[Transpose[b], ii].kmat[dims], vec[c]]; X = unvec[x0, dims]; Print["error is ", Norm[a.X + Transpose[X].b - c]]; X ] a = RandomReal[{-3, 3}, {3, 3}]; b = RandomReal[{-3, 3}, {3, 3}]; c = RandomReal[{-3, 3}, {3, 3}]; X = solveLyapunov2[a, b, c] </code></pre> <p><em>Edit Sep 30</em>: An approximate solution would be useful as well. In my application <span class="math-container">$C$</span> is the gradient, and <span class="math-container">$X$</span> is the preconditioned gradient, so I'm looking for something that's much better than a "default" solution of <span class="math-container">$X_0=C$</span></p>
Alex Trounev
58,388
<p>Your equation is not in the correct form for FEM. Homogeneous Neumann conditions are applied automatically.</p> <pre><code>Needs["NDSolve`FEM`"] Bi = 0.5; xf = 5; reg = Rectangle[{0, 0}, {xf, 1}]; mesh = ToElementMesh[reg, MaxCellMeasure -&gt; 0.0001]; PDE = D[M[t, x, y], t] - 1/(x^2 + y^2)*D[(x^2 + 1)*D[M[t, x, y], x], x] - 1/(x^2 + y^2)*D[(-y^2 + 1)*D[M[t, x, y], y], y] nv4 = NeumannValue[-Sqrt[(x^2 + y^2)/(x^2 + 1)]*Bi*M[t, x, y], x == xf]; sol = NDSolve[{PDE == nv4, M[0, x, y] == 1}, M, {x, y} \[Element] mesh, {t, 0, 10}] Table[ContourPlot[M[t, x, y] /. sol, {x, y} \[Element] mesh, Contours -&gt; 20, ColorFunction -&gt; "TemperatureMap", PlotLegends -&gt; Automatic, PlotLabel -&gt; Row[{"t = ", t}]], {t, 1, 10, 3}] </code></pre> <p><a href="https://i.stack.imgur.com/IPogX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IPogX.png" alt="Figure 1"></a></p> <p>Consider a solution in a region with a hole (homogeneous Neumann conditions are applied automatically at the edge of the hole)</p> <pre><code>Needs["NDSolve`FEM`"] Bi = 0.5; xf = 5; reg = Rectangle[{0, 0}, {xf, 1}]; reg1 = Rectangle[{1, 1/3}, {4, 2/3}]; mesh = ToElementMesh[RegionDifference[reg, reg1], MaxCellMeasure -&gt; 0.0001]; PDE = D[M[t, x, y], t] - 1/(x^2 + y^2)*D[(x^2 + 1)*D[M[t, x, y], x], x] - 1/(x^2 + y^2)*D[(-y^2 + 1)*D[M[t, x, y], y], y]; nv4 = NeumannValue[-Sqrt[(x^2 + y^2)/(x^2 + 1)]*Bi*M[t, x, y], x == xf]; sol = NDSolve[{PDE == nv4, M[0, x, y] == 1}, M, {x, y} \[Element] mesh, {t, 0, 10}] Table[ContourPlot[M[t, x, y] /. sol, {x, y} \[Element] mesh, Contours -&gt; 20, ColorFunction -&gt; "TemperatureMap", PlotLegends -&gt; Automatic, PlotLabel -&gt; Row[{"t = ", t}]], {t, 1, 10, 3}] </code></pre> <p><a href="https://i.stack.imgur.com/5pHlU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5pHlU.png" alt="Figure 2"></a></p>
4,573,600
<p>I need help to start solving a differential equation</p> <p><span class="math-container">$$x^2y'+xy=\sqrt{x^2y^2+1}.$$</span></p> <p>I would divide the equation by <span class="math-container">$x^2.$</span> Then the equation looks like a homogeneous equation, but I get <span class="math-container">$\frac{1}{x^4}$</span> under the square root. I don't know what else to do. Can someone give me a hint?</p>
MDCCXXIX
1,118,767
<p>Set some variable, let's call it $q$, as <span class="math-container">$$q = xy$$</span> then, <span class="math-container">$$\frac{dy}{dx} = \frac{xq'-q}{x^{2}}$$</span> <span class="math-container">$$xq' = \sqrt{q^{2}+1}$$</span> <span class="math-container">$$\displaystyle \int \frac{1}{\sqrt{q^2+1}} \,dq = \displaystyle \int \frac{1}{x} \,dx$$</span> <span class="math-container">$$\operatorname{arcsinh}(q) = \ln(x) + C$$</span> <span class="math-container">$$q = \sinh(\ln(x) + C)$$</span> so that <span class="math-container">$$y = \frac{\sinh(\ln(x) + C)}{x}.$$</span> Hope it helped.</p>
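One can sanity-check the resulting family $q = xy = \sinh(\ln x + C)$ against the original equation numerically (a quick sketch with an arbitrarily chosen constant $C$):

```python
import math

C = 0.7  # arbitrary constant of integration

def y(x):
    # candidate solution: x*y = sinh(ln x + C)
    return math.sinh(math.log(x) + C) / x

def dy(x, h=1e-6):
    # central-difference numerical derivative
    return (y(x + h) - y(x - h)) / (2 * h)

# check x^2 y' + x y = sqrt(x^2 y^2 + 1) at a few sample points
for x in (0.5, 1.3, 4.0):
    lhs = x**2 * dy(x) + x * y(x)
    rhs = math.sqrt(x**2 * y(x) ** 2 + 1)
    assert abs(lhs - rhs) < 1e-5
```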
3,375,181
<p>How do I graph f(x)=1/(1+e^(1/x)) other than by plugging numbers in for x? Also, I found a picture of the answer online <a href="https://i.stack.imgur.com/gZb7X.png" rel="nofollow noreferrer">enter image description here</a> and I do not understand why the graph has a value at x = 0.</p>
Claude Leibovici
82,404
<p>Starting from Certainly not a dog's answer <span class="math-container">$$y’-y\tan{(x+c)}=0\implies \frac{y'}y=\tan{(x+c)}$$</span> Integrate both sides to get <span class="math-container">$$\log(y)=-\log (\cos (x+c))+d\implies y=d \sec(x+c)$$</span></p>
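A quick numerical check that $y=d\sec(x+c)$ satisfies $y'=y\tan(x+c)$ (with illustrative constants):

```python
import math

c, d = 0.3, 2.0  # arbitrary constants

def y(x):
    return d / math.cos(x + c)  # y = d*sec(x + c)

def dy(x, h=1e-6):
    # central-difference numerical derivative
    return (y(x + h) - y(x - h)) / (2 * h)

# y' - y*tan(x + c) should vanish
for x in (0.0, 0.5, 1.0):
    assert abs(dy(x) - y(x) * math.tan(x + c)) < 1e-4
```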
459,428
<p>How does one evaluate an integral of the form $$\int \ln^nx\space dx$$ My trusty friend Wolfram Alpha is blabbering about $\Gamma$ functions and I am having trouble following. Is there a method for indefinitely integrating such an expression? Or, if there isn't a method, how would you tackle the problem?</p>
Community
-1
<p>Let $$F_n=\int \log^n(x) dx$$ so by integration by parts (we differentiate $\log^n(x)$) we have $$F_n=x\log^n(x)-n\int\log^{n-1}(x)dx=x\log^n(x)-nF_{n-1}$$ so we find $F_n$ by induction via the relation:</p> <p>$$\left\{\begin{array}\\ F_0=x+C\\ F_{n}=x\log^n(x)-nF_{n-1},\quad n\geq 1 \end{array}\right.$$</p> <p><strong>Added</strong>$\ $ We can write a simple procedure with Maple which gives the expression of $F_n$ for every $n$ as follows:</p> <blockquote> <p><img src="https://i.stack.imgur.com/vckZA.png" alt="enter image description here"></p> </blockquote> <p>We can prove by induction that $$F_n=x\log^n(x)+\sum_{k=1}^{n-1}(-1)^{n-k}\frac{n!}{k!} x\log^k(x)+(-1)^n\, n!\, x+C$$</p>
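As a sanity check, the recursion can be compared against direct numerical integration. The sketch below (plain Python, not from the original answer) uses the closed form with constant term $(-1)^n\,n!\,x$, which is what the recursion produces:

```python
import math

def F(n, x):
    # closed-form antiderivative of ln(x)^n (constant of integration 0)
    total = x * math.log(x) ** n + (-1) ** n * math.factorial(n) * x
    for k in range(1, n):
        total += (-1) ** (n - k) * math.factorial(n) // math.factorial(k) * x * math.log(x) ** k
    return total

def quad(n, a, b, steps=100_000):
    # midpoint-rule approximation of the integral of ln(x)^n on [a, b]
    h = (b - a) / steps
    return h * sum(math.log(a + (i + 0.5) * h) ** n for i in range(steps))

# the fundamental theorem of calculus should hold numerically
for n in range(1, 5):
    assert abs((F(n, 3.0) - F(n, 1.5)) - quad(n, 1.5, 3.0)) < 1e-6
```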
459,428
<p>How does one evaluate an integral of the form $$\int \ln^nx\space dx$$ My trusty friend Wolfram Alpha is blabbering about $\Gamma$ functions and I am having trouble following. Is there a method for indefinitely integrating such an expression? Or, if there isn't a method, how would you tackle the problem?</p>
The_Sympathizer
11,172
<p>I notice these answers do not also explain why Wolfram Alpha gives results involving the gamma function. I'll provide that here.</p> <p>The reason is that Wolfram Alpha interprets $n$ as an arbitrary <i>real or complex number</i>, not just a nonnegative integer. In that case, there is no elementary solution and we have to use the gamma function.</p> <p>This works as follows: Start</p> <p>$$\int \ln(x)^n dx,\ n \in \mathbb{C}$$</p> <p>and use the substitution $u = \ln(x)$, $du = \frac{1}{x} dx$ to get</p> <p>$$\int \ln(x)^n dx = \int \ln(x)^n \frac{x}{x} dx = \int x \ln(x)^n \frac{1}{x} dx = \int e^u u^n du$$.</p> <p>Now the last integral is essentially that for the gamma function. In particular, using the "lower incomplete gamma function"</p> <p>$$\gamma(s, x) = \int_{0}^{x} e^{-t} t^{s-1} dt$$</p> <p>we have the last integral (with one more change of variable $u \rightarrow -v$) as</p> <p>$$\begin{align}\int e^u u^n du &amp;= -\int e^{-v} (-v)^n dv = -(-1)^n \int e^{-v} v^n dv \\ &amp;= -(-1)^n \gamma(n+1, v) + C \\ &amp;= -(-1)^n \gamma(n+1, -u) + C\end{align}$$</p> <p>And thus the final integral is</p> <p>$$\int \ln(x)^n dx = (-1)^{n+1} \gamma(n+1, -\ln(x)) + C$$.</p> <p>There is then a recurrence formula</p> <p>$$\gamma(s, x) = (s - 1)\gamma(s - 1, x) - e^{-x} x^{s-1}$$</p> <p>for the incomplete gamma, which can be used to get Sami Ben Romdhane's solution for $n$ natural. But this formula is obtained by integration by parts on the incomplete gamma, so it would be simpler to just apply that directly to the original integral, as he did, unless you do need the answer for a non-integer $n$ in terms of standard, though not elementary, functions.</p>
822,711
<p>In the case of Riemannian geometry, the connection $\Gamma^i_{jk}$, as derived from the derivatives of the metric tensor $g_{ij}$, ought to be symmetric with respect to its two lower indices. But in the case of non-Riemannian geometry that need not be the case, so the question is: how do you actually construct such connections? Do you again use the metric tensor?</p>
Phillip Andreae
141,493
<p>Here's one way to construct a connection on the tangent bundle (a similar construction works on more general vector bundles). Let $\{\rho_\alpha \}$ be a partition of unity subordinate to a locally finite coordinate cover $\{U_\alpha \}$. On each $U_\alpha$, choose coordinates $x_1^\alpha, \dots, x_n^\alpha$, giving a frame $X_1^\alpha = \frac{\partial}{\partial x_1^\alpha}, \dots , X_n^\alpha = \frac{\partial}{\partial x_n^\alpha}$ for the tangent bundle. Then choose a connection $\nabla^\alpha$ on each $U_\alpha$; any valid connection will do! To specify $\nabla^\alpha$, it suffices to specify the Christoffel symbols. One easy choice would be to make all the Christoffel symbols zero (meaning $\nabla^\alpha_{X_i^\alpha} X_j^\alpha = 0$ for all $i$, $j$).</p> <p>We now have a connection on each $U_\alpha$, but we don't have a well-defined connection on the whole manifold because in general, $\nabla^\alpha$ and $\nabla^\beta$ will not agree on $U_\alpha \cap U_\beta$. But one way to construct a global connection is to use the partition of unity: define $\nabla$ by $$ \nabla := \sum_\alpha \rho_\alpha \nabla^\alpha.$$ Then $\nabla$ is a well-defined global connection. This shows that connections exist!</p> <p>Of course, depending on the context, this construction may not be too useful since we chose the $\nabla^\alpha$'s arbitrarily. For example, on a Riemannian manifold, one usually wants to work with the unique Levi-Civita connection. On more general vector bundles, there is not a canonical connection analogous to the Levi-Civita connection, but one often wants to work with metric-compatible connections (the torsion-free condition does not make sense in general).</p>
4,298,951
<p>Let us define a sequence <span class="math-container">$(a_n)$</span> as follows:</p> <p><span class="math-container">$$a_1 = 1, a_2 = 2 \text{ and } a_{n} = \frac14 a_{n-2} + \frac34 a_{n-1}$$</span></p> <p>Prove that the sequence <span class="math-container">$(a_n)$</span> is Cauchy and find the limit.</p> <hr /> <p>I have proved that the sequence <span class="math-container">$(a_n)$</span> is Cauchy. But unable to find the limit. I have observed that the sequence <span class="math-container">$(a_n)$</span> is decreasing for <span class="math-container">$n \ge 2$</span>.</p>
Surjeet Singh
809,487
<p>Given that <span class="math-container">$a_1=1$</span> and <span class="math-container">$a_2=2$</span> such that <span class="math-container">$\displaystyle a_n=\frac{1}{4}a_{n-2}+\frac{3}{4}a_{n-1}$</span> for <span class="math-container">$n\geq3$</span></p> <p>Now <span class="math-container">$\displaystyle a_{n}-a_{n-1}=-\frac{1}{4}\left(a_{n-1}-a_{n-2}\right)=\left(-\frac{1}{4}\right)^2\left(a_{n-2}-a_{n-3}\right)$</span></p> <p><span class="math-container">$\displaystyle \dots=\left(-\frac{1}{4}\right)^{n-2}(a_2-a_1)=\left(-\frac{1}{4}\right)^{n-2}$</span></p> <p>So <span class="math-container">$\displaystyle \sum_{n=2}^k(a_{n}-a_{n-1})=\sum_{n=2}^k\left(-\frac{1}{4}\right)^{n-2}\Rightarrow a_k=1+\sum_{n=2}^k\left(-\frac{1}{4}\right)^{n-2}$</span></p> <p>Now take limit we will get <span class="math-container">$\displaystyle \lim_{k\to\infty}a_k=1+\sum_{n=2}^{\infty}\left(-\frac{1}{4}\right)^{n-2}=1+\frac{4}{5}=\frac{9}{5}$</span></p>
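A quick numerical check of the limit (illustrative):

```python
# a_1 = 1, a_2 = 2, a_n = (1/4) a_{n-2} + (3/4) a_{n-1}
a, b = 1.0, 2.0
for _ in range(100):
    a, b = b, 0.25 * a + 0.75 * b
# the differences shrink by a factor of 1/4 each step, so 100 steps
# is far more than enough for the iterates to settle at the limit 9/5
assert abs(b - 9 / 5) < 1e-12
```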
355,888
<p>Consider $x''-2x'+x= te^t$</p> <p>Determine the solution with initial values $x(1) = e,$ $x'(1) = 0.$</p> <p>I know this looks like and probably is a very easy question, but I'm not getting the right answer when I try to solve it by putting it into quadratic form. Could someone please demonstrate or show me a different method? </p> <p>Many thanks :)</p>
obataku
54,050
<p>First we solve the complementary homogeneous equation $x'' - 2x'+x=0$ by presuming a solution of the form $x=e^{rt}$ to yield:</p> <p>$$e^{rt}\left(r^2-2r+1\right)=0\\(r-1)^2=0$$</p> <p>So we have repeated roots of our characteristic polynomial yielding a complementary solution $x=c_1e^{t}+c_2te^{t}$.</p> <p>Recognize that the right hand side of our nonhomogeneous part is not linearly independent of our complementary solution; we can include a sufficiently large power of $t$, however, in our particular solution to remedy this.</p>
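Carrying that through (details not spelled out in the answer above): writing $x_p=f(t)e^t$ gives $x_p''-2x_p'+x_p=f''(t)e^t$, so $f''=t$ and $x_p=\frac{t^3}{6}e^t$; imposing $x(1)=e$, $x'(1)=0$ on $x=(c_1+c_2t+\frac{t^3}{6})e^t$ yields $c_1=\frac73$, $c_2=-\frac32$. A numerical verification of the resulting solution:

```python
import math

def x(t):
    # candidate solution x(t) = (7/3 - (3/2) t + t^3/6) e^t
    return (7 / 3 - 1.5 * t + t**3 / 6) * math.exp(t)

def x1(t):
    # first derivative, computed by hand
    return (5 / 6 - 1.5 * t + t**2 / 2 + t**3 / 6) * math.exp(t)

def x2(t):
    # second derivative, computed by hand
    return (-2 / 3 - t / 2 + t**2 + t**3 / 6) * math.exp(t)

assert abs(x(1) - math.e) < 1e-12          # x(1) = e
assert abs(x1(1)) < 1e-12                  # x'(1) = 0
for t in (-1.0, 0.5, 2.0, 3.7):
    # the ODE residual x'' - 2x' + x - t e^t should vanish
    assert abs(x2(t) - 2 * x1(t) + x(t) - t * math.exp(t)) < 1e-9
```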
2,316,448
<p>I was working on the infinite sum $$\sum_{x=1}^\infty \frac{1}{x(2x+1)}$$ and I used partial fractions to split up the fraction $$\frac{1}{x(2x+1)}=\frac{1}{x}-\frac{2}{2x+1}$$ and then I wrote out the sum in expanded form: $$1-\frac{2}{3}+\frac{1}{2}-\frac{2}{5}+\frac{1}{3}-\frac{2}{7}+...$$ and then rearranged it a bit: $$1+\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\frac{1}{6}-\frac{1}{7}+...$$ $$2-(1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6}+\frac{1}{7}-...)$$ and since the sum inside of the parentheses is just the alternating harmonic series, which sums to $\ln 2$, I got $$2-\ln 2$$ Which is wrong. What went wrong? I notice that, in general, this kind of thing happens when I try to evaluate telescoping sums in the form $$\sum_{x=1}^\infty f(x)-f(ax+b)$$ and I think something is happening when I rearrange it. Perhaps it has something to do the frequency of $f(ax+b)$ and that, when I spread it out to make it cancel out with other terms, I am "decreasing" how many of them there really are because I'm getting rid of the one to one correspondence between the $f(x)$ and $f(ax+b)$ terms?</p> <p>I can't wrap my head around this. Please help!</p>
Jack D'Aurizio
44,121
<p>By the <a href="https://en.wikipedia.org/wiki/Riemann_series_theorem" rel="nofollow noreferrer">Riemann rearrangement theorem</a>, you have to be very careful when permuting terms of a conditionally but not absolutely convergent series. In your case it is probably simpler to notice that</p> <p>$$ S=\sum_{n\geq 1}\frac{1}{n(2n+1)} = 2\sum_{n\geq 1}\left(\frac{1}{2n}-\frac{1}{2n+1}\right)=2\sum_{m\geq 2}\frac{(-1)^m}{m} $$ hence $$ S = 2 \sum_{m\geq 2}(-1)^m \int_{0}^{1}x^{m-1}\,dx = 2\int_{0}^{1}\frac{x}{1+x}\,dx = \color{red}{2(1-\log 2)}.$$</p>
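A direct numerical check of the value $2(1-\log 2)\approx 0.6137$ (illustrative):

```python
import math

# large partial sum of the original series; the tail beyond N is about 1/(2N)
s = sum(1 / (n * (2 * n + 1)) for n in range(1, 2_000_000))
assert abs(s - 2 * (1 - math.log(2))) < 1e-5
```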
2,316,448
<p>I was working on the infinite sum $$\sum_{x=1}^\infty \frac{1}{x(2x+1)}$$ and I used partial fractions to split up the fraction $$\frac{1}{x(2x+1)}=\frac{1}{x}-\frac{2}{2x+1}$$ and then I wrote out the sum in expanded form: $$1-\frac{2}{3}+\frac{1}{2}-\frac{2}{5}+\frac{1}{3}-\frac{2}{7}+...$$ and then rearranged it a bit: $$1+\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\frac{1}{6}-\frac{1}{7}+...$$ $$2-(1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6}+\frac{1}{7}-...)$$ and since the sum inside of the parentheses is just the alternating harmonic series, which sums to $\ln 2$, I got $$2-\ln 2$$ Which is wrong. What went wrong? I notice that, in general, this kind of thing happens when I try to evaluate telescoping sums in the form $$\sum_{x=1}^\infty f(x)-f(ax+b)$$ and I think something is happening when I rearrange it. Perhaps it has something to do the frequency of $f(ax+b)$ and that, when I spread it out to make it cancel out with other terms, I am "decreasing" how many of them there really are because I'm getting rid of the one to one correspondence between the $f(x)$ and $f(ax+b)$ terms?</p> <p>I can't wrap my head around this. Please help!</p>
grand_chat
215,011
<p>When you split the series in question via partial fractions you create an alternating series that happens to be conditionally convergent (it's not absolutely convergent), so rearranging 'a bit' is not allowed. What went wrong is exactly what you surmised: you've lost the correspondence between terms in the two 'halves' of your alternating series, and so you can achieve the wrong sum.</p> <p>If you're careful with your <em>partial sums</em>, however, you can confirm the following: $$ S_n := \sum_{k=1}^n \frac1{k(2k+1)}=\sum_{k=1}^n2\left(\frac1{2k}-\frac1{2k+1}\right)=2\left(1-A_{2n+1}\right), $$ where $$A_n:=\sum_{k=1}^n\frac{(-1)^{k+1}}k$$ is the $n$th partial sum of the alternating harmonic series, and therefore $S_n$ converges to $2(1-\log 2)$.</p>
346,432
<p>I will think of <span class="math-container">$ \mathbb{R}^{n+m}$</span> as <span class="math-container">$\mathbb{R}^n \times \mathbb{R}^m$</span>.</p> <p>Let <span class="math-container">$ V \subset \mathbb{R}^{n+m}$</span> be open and <span class="math-container">$g:V \to U \subset \mathbb{R}^{n+m} $</span> be a <span class="math-container">$C^1$</span> diffeomorphism. For a fixed <span class="math-container">${y} \in \mathbb{R}^m$</span>, the image <span class="math-container">$g(\mathbb{R}^n \times \{y\})$</span> is an <span class="math-container">$n$</span>-dimensional <span class="math-container">$C^1$</span> manifold, and, similarly, for a fixed <span class="math-container">${x}$</span>, the image <span class="math-container">$g(\{x\} \times \mathbb{R}^m)$</span> is an <span class="math-container">$m$</span>-dimensional <span class="math-container">$C^1$</span> manifold. Let <span class="math-container">$\mathcal{H}^n$</span> and <span class="math-container">$\mathcal{H}^m$</span> be, respectively, the Hausdorff measures on these with respect to the intrinsic metric on them induced from <span class="math-container">$\mathbb{R}^{n+m}$</span>. As mentioned in the comments below, they will be different from fiber to fiber, and, for example, it is not true that all these measures are identifiable.</p> <p><strong>Original Question:</strong> I wonder if a "Fubini's theorem" can be formulated and proven using integrals on these manifolds directly. I do NOT wish to pull back to <span class="math-container">$V$</span> via <span class="math-container">$g$</span>.</p> <p>Edit: Initially I stated "I do not want to contaminate my integral with the Jacobian!" In light of comments below, it will be impossible to avoid bringing some type of Jacobian(s) into the picture. Now, it looks obvious: We must take into account how fibers close in or expand away from one another at different neighborhoods.
So, now I reiterate my question allowing this:</p> <p><strong>Edited Question:</strong> Is there "a Fubini's theorem" that equates an integral over <span class="math-container">$U$</span> to the iterated integrals (of the function probably multiplied with some Jacobian of the map <span class="math-container">$g$</span>) over these fibers -- against their intrinsic Hausdorff measures?</p> <p>A cartoon of the sought-for identity will look like: for a continuous real-valued function <span class="math-container">$ \phi: U \to \mathbb{R}$</span>, <span class="math-container">$$ \int_U \phi \ d\mathcal{L}^{n+m}= \int_{?} \left(\int_{?} \phi(x,y) \cdot Jacobian \ quantities \ from \ g \ d\mathcal{H}^n(x)\right) \ d\mathcal{H}^m(y) \ .$$</span></p> <p><strong>Note:</strong> I seem to have figured out one such formula but will wait longer for possible alternatives or references to known ones, if any exist.</p> <p>I have the answer here: <a href="https://mathoverflow.net/questions/350952/fubinis-theorem-on-arbitrary-foliations">Fubini&#39;s Theorem on Arbitrary Foliations</a></p>
Ben McKay
13,268
<p>You can do this nicely with differential forms: see the chapter on Fubini's theorem in my lecture notes on <a href="https://euclid.ucc.ie/mckay/analysis/analysis.pdf" rel="nofollow noreferrer">Stokes's theorem</a>.</p>
1,080,858
<p>Why do we have</p> <ul> <li>$u_n=\dfrac{1}{\sqrt{n^2-1}}-\dfrac{1}{\sqrt{n^2+1}}=O\left(\dfrac{1}{n^3}\right)$</li> <li>$u_n=e-\left(1+\frac{1}{n}\right)^n\sim \dfrac{e}{2n}$</li> </ul> <p>Any help would be appreciated.</p>
Alex Ravsky
71,850
<p>$$u_n=\dfrac{1}{\sqrt{n^2-1}}-\dfrac{1}{\sqrt{n^2+1}}=$$ $$\dfrac{\sqrt{n^2+1}-\sqrt{n^2-1}}{\sqrt{n^2-1}\sqrt{n^2+1}}=$$ $$\dfrac{(\sqrt{n^2+1}-\sqrt{n^2-1})(\sqrt{n^2+1}+\sqrt{n^2-1})}{\sqrt{n^2-1}\sqrt{n^2+1}(\sqrt{n^2+1}+\sqrt{n^2-1})}=$$ $$\dfrac{n^2+1-n^2+1}{\sqrt{n^2-1}\sqrt{n^2+1}(\sqrt{n^2+1}+\sqrt{n^2-1})}=$$ $$\dfrac{2}{\sqrt{n^2-1}\sqrt{n^2+1}(\sqrt{n^2+1}+\sqrt{n^2-1})}=$$ $$\dfrac{2}{\sqrt{n^4-1}(\sqrt{n^2+1}+\sqrt{n^2-1})}=O\left(\frac{1}{n^3}\right).$$</p>
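<p>As a numerical sanity check (added here, not part of the original answer), the closed form derived above behaves like $1/n^3$, so $n^3 u_n$ should approach $1$:</p>

```python
import math

def u(n):
    # u_n = 1/sqrt(n^2 - 1) - 1/sqrt(n^2 + 1)
    return 1.0 / math.sqrt(n * n - 1) - 1.0 / math.sqrt(n * n + 1)

# From 2 / (sqrt(n^4 - 1) * (sqrt(n^2+1) + sqrt(n^2-1))) we expect
# n^3 * u(n) -> 1 as n grows; the correction term is only O(1/n^4).
ratio = 1000 ** 3 * u(1000)
```

<p>This also illustrates why $O(1/n^3)$ (and in fact $u_n \sim 1/n^3$) is the right order.</p>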
907,879
<p>Calculate the limit $\lim\limits_{x\to\infty} (a^x+b^x-c^x)^{\frac{1}{x}}$ where $a&gt;b&gt;c&gt;0$.</p> <p>First, $$\exp\left( \lim\limits_{x\to\infty} \frac{\ln(a^x+b^x-c^x)}{x} \right)$$</p> <p>Next, $$\lim\limits_{x\to\infty} a^x + b^x - c^x = \lim\limits_{x\to\infty} a^x \left[1 + (b/a)^x - (c/a)^x \right] = \infty$$. </p> <p>Since, $\ln(\infty) = \infty$ we may use L'Hopital's rule. The expression inside the exponent is: </p> <p>$$\lim\limits_{x\to\infty} \frac{a^x\ln(a)+b^x\ln(b)-c^x\ln(c)}{a^x+b^x-c^x}$$</p> <p>Which again is $\frac{\infty}{\infty}$. Is that the right way?</p>
Travis Willse
155,629
<p>You're using the correct trick but in the wrong place: Since $a &gt; b &gt; c$, the term $a^x$ will dominate the other two in the parenthetical expression as $x \to \infty$.</p> <p>Factoring that term out gives</p> <p>$\lim_{x \to \infty} \left[(a^x)^{\frac{1}{x}} \left(1 + \left(\frac{b}{a}\right)^x - \left(\frac{c}{a}\right)^x\right)^{\frac{1}{x}}\right]$.</p> <p>The first factor in the brackets is just $a$, which can be factored out, leaving an easier limit to evaluate.</p>
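<p>A quick numerical check of the limit (the sample values $a=3$, $b=2$, $c=1$ are an illustrative choice of mine, not from the question):</p>

```python
# The factored form shows the limit is a, since (b/a)^x and (c/a)^x
# vanish as x -> infinity and the 1/x power kills the remaining factor.
a, b, c = 3.0, 2.0, 1.0
x = 200.0
val = (a ** x + b ** x - c ** x) ** (1.0 / x)
```

<p>Already at $x=200$ the value agrees with $a$ to machine precision, since $(b/a)^x$ is astronomically small.</p>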
25,137
<p>I want to find an intuitive analogy to explain how binary addition (more precisely: an adder circuit in a computer) works. The point here is to explain the abstract process of <em>adding</em> something by comparing it to something that isn't abstract itself.</p> <p>In principle: An everyday object or an action that is structured like or functionally resembles an adder.</p> <p>Think of a thing that can belong to any number of categories x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, x<sub>4</sub>, x<sub>5</sub>, x<sub>6</sub>, x<sub>7</sub>, x<sub>8</sub> for which the property holds that if you put two objects together/perform two actions simultaneously, and both the objects/actions are of the same category, you automatically create an object or perform an action that is of the next higher category that the object doesn't yet belong to, the whole thing therefore implementing the basic functionality of an adder.</p> <p>(Categories are changing here analogous to the bits in the circuit: 00000001 (1) + 00000001 (1) together, adds up to 00000010 (2).)</p> <p>But I just can't think of such a situation or an object where this pattern would occur. Whatever analogy I create, with an increasing number of categories the way these categories transform becomes increasingly harder to explain, and the metaphor becomes overly specific and unhandy.</p> <p>Hence the question:</p> <p><strong>What's an everyday object that resembles an adder in its basic functionality?</strong></p>
guest troll
19,769
<p>Weighing/massing is something that has this aspect. Really any extrinsic property. <a href="https://en.wikipedia.org/wiki/Intrinsic_and_extrinsic_properties" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Intrinsic_and_extrinsic_properties</a> But I think weighing is easy to understand because you see the output, often in digital form. That seems to satisfy your desire.</p> <p>I would probably avoid fluid or electrical flow, because of the complexities of series and parallel and driving force. All that said, if they know basic E&amp;M, obviously resistance in series adds (and some basic adders are actually resistance networks). Also, of course head loss of components in series adds linearly for fluid circuits...but I suspect your computer guys will be less comfortable with piping than electronics.</p>
25,137
<p>I want to find an intuitive analogy to explain how binary addition (more precisely: an adder circuit in a computer) works. The point here is to explain the abstract process of <em>adding</em> something by comparing it to something that isn't abstract itself.</p> <p>In principle: An everyday object or an action that is structured like or functionally resembles an adder.</p> <p>Think of a thing that can belong to any number of categories x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, x<sub>4</sub>, x<sub>5</sub>, x<sub>6</sub>, x<sub>7</sub>, x<sub>8</sub> for which the property holds that if you put two objects together/perform two actions simultaneously, and both the objects/actions are of the same category, you automatically create an object or perform an action that is of the next higher category that the object doesn't yet belong to, the whole thing therefore implementing the basic functionality of an adder.</p> <p>(Categories are changing here analogous to the bits in the circuit: 00000001 (1) + 00000001 (1) together, adds up to 00000010 (2).)</p> <p>But I just can't think of such a situation or an object where this pattern would occur. Whatever analogy I create, with an increasing number of categories the way these categories transform becomes increasingly harder to explain, and the metaphor becomes overly specific and unhandy.</p> <p>Hence the question:</p> <p><strong>What's an everyday object that resembles an adder in its basic functionality?</strong></p>
The_Sympathizer
7,650
<p>Why would you need an analogy? Binary is just another way of encoding numbers - so the question is, &quot;why do you need something that adds numbers?&quot;</p> <p>I think what people get hung up on with this issue is they keep wanting to associate &quot;numbers&quot; with <em>decimals</em>.</p> <p>And to that end, I'd suggest that people should think about the difference between &quot;3&quot;, &quot;three&quot;, &quot;III&quot;, and perhaps a couple of foreign-language words with the same meaning. They're all just <em>symbols</em> for the underlying number that occurs when we measure situations in which there is something, another something, and finally another something, but not any further somethings than that.</p> <p>And binary numerals are just another kind of symbols, built on a different rule, but they refer to exactly the same numbers. The numeral &quot;11&quot; in <em>binary</em> means the same thing that &quot;III&quot; above does. They're just two different <em>representational systems</em> for the same things.</p> <p>&quot;So why do you need a binary adder?&quot; Well, why do you need a desk calculator? (Which almost certainly has a binary adder in its circuits, by the way, that does just what you think from the name it would be doing - so you can throw that on top, too.)</p>
1,562,503
<p>Can anyone help me here?</p> <p>Question: "X is a normed space and A is a subset dense in the dual of X. x belongs to X and the sequence (x_n) in X is bounded and such that f(x_n) converges to f(x) for all f in A. Show that x_n converges to x weakly"</p> <p>My try: I think that if I show that A=cl(A) then I prove what is required. So I have to prove that cl(A) is contained in A, i.e., for all y in cl(A): y is in A. Let y belong to cl(A), i.e., there exists a sequence (y_n) in A such that y_n converges to y. Let y_n := f(x_n) and y:=f(x) (my doubt is: may I do this? because y is in A and f(x_n) is an image of x_n, and not a function) </p> <p>Then I tried to prove that f(x) is in A but I think it's impossible :(</p>
Tien Truong
281,875
<p>We want to prove </p> <p>$g(x_n) \to g(x)$, for every $g \in X'$,</p> <p>where $X'$ is the dual space of $X$. Equivalently, we can prove</p> <p>$|g(x_n) - g(x)| \to 0$, as $n \to \infty$. </p> <p>Let $\{f_k\}$ be a sequence in $A$ such that </p> <p>$\|f_k - g\|_{X'} = \sup_{\|x\| \leq 1} |f_k(x) - g(x)| \to 0$, as $k \to \infty$.</p> <p>Since the sequence $\{x_n\}_{n\geq 1}$ is bounded, say $\|x_n\| \leq M$ for all $n$, we get</p> <p>$|f_k(x_n) - g(x_n)| \leq \|f_k - g\|_{X'}\, \|x_n\| \leq M \|f_k - g\|_{X'} \to 0$, as $k \to \infty$,</p> <p>and this bound is uniform in $n$.</p> <p>Consider $|g(x_n) - g(x)|$; we have</p> <p>$|g(x_n) - g(x)| = |g(x_n) - f_k(x_n) + f_k(x_n) - f_k(x) + f_k(x) - g(x)|$</p> <p>$\leq |g(x_n) - f_k(x_n)| + |f_k(x_n) - f_k(x)| + |f_k(x) - g(x)|$.</p> <p>Given $\varepsilon &gt; 0$, choose $k$ so large that $M\|f_k - g\|_{X'} &lt; \varepsilon/3$ and $|f_k(x) - g(x)| &lt; \varepsilon/3$; by the uniform bound above, the first and third terms are then each less than $\varepsilon/3$ for every $n$.</p> <p>With this $k$ fixed, $|f_k(x_n) - f_k(x)| \to 0$ as $n \to \infty$ by hypothesis, so the middle term is eventually less than $\varepsilon/3$ as well.</p> <p>Hence $|g(x_n) - g(x)| &lt; \varepsilon$ for all large $n$, i.e. $|g(x_n) - g(x)| \to 0$ as $n \to \infty$, as desired. </p>
3,088,766
<p>I need to prove that the premise <span class="math-container">$A \to (B \vee C)$</span> leads to the conclusion <span class="math-container">$(A \to B) \vee (A \to C)$</span>. Here's what I have so far.</p> <p><a href="https://i.stack.imgur.com/1AgTZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1AgTZ.png" alt="enter image description here" /></a></p> <p>From here I'm stuck (and I'm not even sure if this is correct). My idea is to use negation intro by assuming the opposite and coming up with a contradiction. I assumed <span class="math-container">$A$</span> which led to <span class="math-container">$B \vee C$</span> and, as you can see, I'm trying or elim but the only way I can think of doing this is to use conditional intro and then or intro but that seems to only work for a single subproof. In other words, I can't use the assumption of <span class="math-container">$B$</span> to say <span class="math-container">$A \to B$</span>. This is called an indirect proof.</p>
Graham Kemp
135,106
<blockquote> <p>I need to prove that the premise <span class="math-container">$A \to (B \vee C)$</span> leads to the conclusion <span class="math-container">$(A \to B) \vee (A \to C)$</span>. Here's what I have so far. ...</p> </blockquote> <p>A disjunction is usually proven by reduction to absurdity. &nbsp; Assume its negation and derive a contradiction. &nbsp; Typically this involves further assuming the negation of one disjunct aiming to derive the other. <span class="math-container">$$\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline #2\end{array}}\fitch{A\to (B\vee C)}{\fitch{\lnot((A\to B)\lor(A\to C))}{\fitch{\lnot (A\to B)}{\fitch A{~\vdots\\C}\\A\to C\\(A\to B)\vee(A\to C)\\\bot}\\\lnot\lnot(A\to B)\\A\to B\\(A\to B)\vee(A\to C)\\\bot}\\\lnot\lnot((A\to B)\vee(A\to C))\\(A\to B)\lor (A\to C)}$$</span></p>
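<p>Separately from the natural-deduction proof above, the claim is purely propositional, so one can also verify it semantically by brute force over all eight truth assignments (a sanity check added here, not part of the original answer):</p>

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is (not p) or q.
    return (not p) or q

# Every model of A -> (B v C) must be a model of (A -> B) v (A -> C).
entailment_holds = all(
    (not implies(A, B or C)) or (implies(A, B) or implies(A, C))
    for A, B, C in product([False, True], repeat=3)
)
```

<p>The only assignment falsifying $(A\to B)\lor(A\to C)$ has $A$ true and $B, C$ false, and that assignment also falsifies $A\to(B\lor C)$, which is why the check succeeds.</p>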
501,660
<p>In school, we just started learning about trigonometry, and I was wondering: is there a way to find the sine, cosine, tangent, cosecant, secant, and cotangent of a single angle without using a calculator?</p> <p>Sometimes I don't feel right when I can't do things out myself and let a machine do it when I can't.</p> <p>Or, if you could redirect me to a place that explains how to do it, please do so.</p> <p>My dad said there isn't, but I just had to make sure.</p> <p>Thanks.</p>
Alex Peter
579,318
<p><strong>Tailored Taylor</strong></p> <p>You can use Taylor but first you need to pack your angle into the region $x_1=0,2\pi$. simply by $x \mod 2\pi$</p> <p>Once you are there if $x_1&gt;\pi$ take the result as $\sin(x_1)=-\sin(x_1 - \pi)$ reducing it to $x_2=0,\pi.$</p> <p>Now if $x_2&gt;\frac{\pi}{2}$ calculate the result as $\sin(x_2)=\sin(\pi-x_2)$.</p> <p>So all this above is easily shifting it all to $x_3=0,\frac{\pi}{2}$</p> <p>If needed use further</p> <p>$$ \sin(x)=2\sin(\frac{x}{2})\cos(\frac{x}{2})$$</p> <p>with</p> <p>$$ \cos(\frac{x}{2})=\sqrt{1-\sin(\frac{x}{2})^2}$$</p> <p>in case $x_3&gt;1.$</p> <p>All this is making the Taylor expansion much more accurate within a lesser number of steps as $x^n$ in it for $x&lt;1$ will rapidly go to $0$ besides the help from factorial. Next you represent Taylor series of $\sin(x)$ in a much more handy way</p> <p>$$\sin(x)=x(1-\frac{x^2}{3 \cdot 2}(1-\frac{x^2}{5 \cdot 4}(1-\frac{x^2}{7 \cdot 6}(…$$</p> <p>Notice that $x^2$ is repeating. So choose $k$ as big as you are willing to calculate and have</p> <p>$$f(0)=1-\frac{x^2}{(2k+1) \cdot 2k}$$ $$f(n)=1-\frac{x^2}{(2(k-n)+1) \cdot 2(k-n)}f(n-1)$$</p> <p>Finally</p> <p>$$\sin(x)=xf(k-1)$$</p> <p><strong>Binary ladders</strong></p> <p>For this method as well you need to bring the angle as much down as you can as explained above.</p> <p>If you do not want to deal with division, otherwise you can use $\tan(x)$, the task is possible with multiplication only. Take the small $m$ and</p> <p>$$ \sin(m)\approx m = x_0$$ $$\cos(m) \approx 1-\frac{m^2}{2} = y_0$$</p> <p>Then have:</p> <p>$$M_{2^0} = \begin{bmatrix}y_0 &amp; x_0 \\-x_0 &amp; y_0 \end{bmatrix}$$</p> <p>$$M_{2^{k+1}} = (M_{2^{k}})^{2}$$</p> <p>this is just based on duplication formula for $\sin(x)$ and $\cos(x)$</p> <p>Now it is up to you what small $m$ you will use as a reference. It can be for example $\frac{1}{2^{10}}$ or $0.00001$ or any other small number. 
The smaller it is, the better precision you have.</p> <p>Now you find the integer $n$ so that $nm \leq x &lt; (n+1)m$</p> <p>The game can start.</p> <p>Write $n$ in binary expansion.</p> <p>$$\sum_{d=1}^{m}{2^{k_d} } = n$$</p> <p>Using</p> <p>$$\sin(x+y)=\sin(x)\cos(y)+\cos(x)\sin(y) $$</p> <p>$$\cos(x+y)=\cos(x)\cos(y)-\sin(x)\sin(y)$$</p> <p>which is in our case</p> <p>$$\begin{bmatrix}y_p &amp; x_p \\-x_p &amp; y_p \end{bmatrix} \displaystyle \begin{bmatrix}y_q &amp; x_q \\-x_q &amp; y_q \end{bmatrix}$$</p> <p>since we are dealing with evaluation of $\sin(x), \cos(x)$ all the time you additionally multiply </p> <p>$$M_{2^{k_1}}M_{2^{k_2}} … M_{2^{k_m}}$$ </p> <p>where</p> <p>$$\sum_{d=1}^{m}{2^{k_d} } = n$$</p>
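<p>The "tailored Taylor" part of this answer can be sketched in Python as follows. The range reduction and the nested product are as described above; the depth $k=10$ is an arbitrary illustrative choice of mine, not from the answer:</p>

```python
import math

def sin_taylor(x, k=10):
    # Sketch of the range-reduction + nested-Taylor scheme described above.
    x = math.fmod(x, 2 * math.pi)       # reduce to (-2*pi, 2*pi)
    if x < 0:
        return -sin_taylor(-x, k)       # sin is odd
    if x > math.pi:
        return -sin_taylor(x - math.pi, k)   # sin(x) = -sin(x - pi)
    if x > math.pi / 2:
        x = math.pi - x                 # sin(x) = sin(pi - x)
    # Horner-style evaluation of sin(x) = x(1 - x^2/(3*2)(1 - x^2/(5*4)(...)))
    f = 1.0
    for j in range(k, 0, -1):
        f = 1.0 - x * x / ((2 * j + 1) * (2 * j)) * f
    return x * f
```

<p>Because the argument is first brought into $[0,\pi/2]$, ten nested factors already reach full double precision; this is exactly the point of the range reduction.</p>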
617,747
<p>In mathematics, how does something like complex numbers apply to the real world? Why do complex numbers exist? How can we comprehend addition of complex numbers? For example, addition of natural numbers can be understood as putting together two apples and two oranges makes four fruits. How can we apply this thinking to complex numbers?</p>
Dylan Yott
62,865
<p>There are lots of ways to develop intuition with complex numbers and they've been mentioned above, so I'll try to say something different. I don't think it matters whether or not complex numbers "exist"; it simply matters that they are very useful and therefore worth studying. In fact, it's hard to say whether any numbers really exist. For example, what is the number $5$? There isn't a tree somewhere in Iceland where $5$'s grow and are shipped all over the world for commercial use. Instead, we have a concept of $5$ that has been universally agreed upon in some sense and can be represented by $5$ of something, for example the following represents $5$ in some sense: $* * * * *$. </p> <p>My point is, to me natural numbers don't even "exist" in any meaningful sense. They just turn out to be concepts that seem to occur in nature and we've come up with a useful way to characterize them. You could even argue that complex numbers occur "in nature" in the sense that we can use them to describe certain physical laws. However, that is less satisfying to me, since there are plenty of mathematical constructions with no physical interpretation. Instead, I like to think that complex numbers are simply a mathematical construction that turns out to be very useful in solving different types of problems. </p>
2,791,204
<p>I am trying to understand whether or not the product of two positive semidefinite matrices is also positive semidefinite. This topic has already been discussed in the past <a href="https://math.stackexchange.com/q/113859">here</a>. For me "$A$ is positive definite" means $x^T A x &gt; 0$ for all nonzero real vectors $x$; in this case @RobertIsrael gives a counterexample: $$ A = \pmatrix{ 1 &amp; 2\cr 2 &amp; 5\cr},\ B = \pmatrix{1 &amp; -1\cr -1 &amp; 2\cr},\ AB = \pmatrix{-1 &amp; 3\cr -3 &amp; 8\cr},\ (1\ 0) A B \pmatrix{1\cr 0\cr} = -1$$ However, he then proceeds to prove that for $A$, $B$ positive semidefinite real symmetric matrices the result holds. </p> <p>The proof is very short, quoting the answer: "Then $A$ has a positive semidefinite square root, which I'll write as $A^{1/2}$. Now $A^{1/2} B A^{1/2}$ is symmetric and positive semidefinite, and $AB = A^{1/2} (A^{1/2} B)$ and $A^{1/2} B A^{1/2}$ have the same nonzero eigenvalues."</p> <p>So then the question is: what does it mean to be positive semidefinite real symmetric and why are $A$, $B$ not of this type?</p>
mechanodroid
144,766
<p>A real matrix $A$ is positive-semidefinite if $A$ is symmetric and $x^TAx \ge 0$ for all $x \in \mathbb{R}^n$.</p> <blockquote> <p>Product of two positive-semidefinite matrices $A,B$ is again a positive-semidefinite matrix if and only if $AB = BA$.</p> </blockquote> <p>Proof.</p> <p>Assume $AB = BA$. </p> <p>Then $(AB)^T = B^TA^T = BA = AB$ so $AB$ is symmetric.</p> <p>$A$ has a (unique) positive-semidefinite square root $A^{1/2}$. Furthermore, $AB = BA$ implies $A^{1/2}B = BA^{1/2}$.</p> <p>We have</p> <p>$$x^TABx = x^TA^{1/2}A^{1/2}Bx = \left(A^{1/2}x\right)^TA^{1/2}Bx = \left(A^{1/2}x\right)^T B \left(A^{1/2}x\right) \ge 0$$</p> <p>Therefore $AB$ is positive-semidefinite.</p> <hr/> <p>Conversely, assume that $AB$ is positive-semidefinite. In particular, $AB$ is symmetric so</p> <p>$$AB = (AB)^T = B^TA^T = BA$$</p> <p>Hence $AB = BA$.</p>
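<p>A small pure-Python check of the counterexample quoted in the question (the matrices are from Robert Israel's answer): both factors are positive definite, they do not commute, and the product fails $x^T(AB)x \ge 0$ at $x = (1,0)^T$, consistent with the commutativity criterion above:</p>

```python
def matmul2(X, Y):
    # Plain 2x2 matrix product, enough for this check.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [2, 5]]    # symmetric, positive definite (det = 1, trace = 6)
B = [[1, -1], [-1, 2]]  # symmetric, positive definite (det = 1, trace = 3)
AB = matmul2(A, B)
BA = matmul2(B, A)
# x^T (AB) x at x = (1, 0) is just the top-left entry of AB.
quad_form_at_e1 = AB[0][0]
```

<p>Since $AB \ne BA$, the theorem above does not apply, and indeed the quadratic form is negative.</p>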
1,873,370
<p>I am trying to understand a particular coset/double coset of the finite group $G = GL(n, q^2) = GL_n(\mathbb{F}_{q^2})$. It has a natural subgroup $H = GL(n, q)$, which can also be viewed in the following way: consider an automorphism of raising each entry to the $q$-th power, (taking $n = 2$ as an example)</p> <p>$$ \varphi\begin{bmatrix}a &amp; b \\ c &amp; d\end{bmatrix} = \begin{bmatrix}a^q &amp; b^q \\ c^q &amp; d^q\end{bmatrix}, $$</p> <p>clearly $\varphi^2 = Id$, and $H = G^{\varphi}$ as the fixed points of this morphism.</p> <p>Question: what is a good way to view the coset $G/H$ and double coset $H \backslash G/H$, and any good way of writing the representatives?</p>
paul garrett
12,291
<p>A standard heuristic to get a feeling for such a question in my world is to replace $\mathbb F_{q^2}$ by $\mathbb F_q \oplus \mathbb F_q$, so that the question is about $GL(n,\mathbb F_q)\times GL(n,\mathbb F_q)$ modulo a diagonal copy of $GL(n,\mathbb F_q)$, and about the corresponding double cosets. The first question is very easy, then. The second yields the answer that the double cosets are in bijection with conjugacy classes in $GL(n,\mathbb F_q)$. </p> <p>After gathering one's optimism, then one goes back and tries to see what happens in the original... thinking that it's a "Galois twist" of the "split case".</p>
2,729,617
<blockquote> <p>Find the 5th-order Maclaurin polynomial $P_5(x)$ for $f(x) = e^x$.</p> </blockquote> <p>I got $$P_5(x) = 1 + x +\frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{120} + O(x^6) $$</p> <p>From this answer, I'm supposed to approximate $f(-1)$, correct to the fifth decimal place. Is it right to just put in $x= -1$ like below?</p> <p>$$\begin{align} &amp; 1 + (-1) + \frac{(-1)^2}{2} + \frac{(-1)^3}{6} + \frac{(-1)^4}{24} + \frac{(-1)^5}{120} + O((-1)^6) \\ =\; &amp; 1 -1 + \frac12 - \frac16 + \frac1{24} - \frac1{120} + O((-1)^6) \\ =\; &amp; 0.5 - 0.166666 + 0.041666 - 0.008333 + O((-1)^6) \\ =\; &amp; 0.366667 \\ =\; &amp; 0.36667 + O((-1)^6) \end{align}$$</p> <p>Is this done right? </p>
Mark Fischler
150,362
<p>You almost have it right. The condition is better stated without referring to derivatives. A function $f(x)$ is strictly increasing if for all $(x,y)$ such that $y&gt;x$,</p> <p>$$ f(y) &gt; f(x) $$</p> <p>and is monotonic increasing if for all $(x,y)$ such that $y&gt;x$, $$ f(y) \geq f(x) $$</p> <p>Your definition involving derivatives would say that the sawtooth $$ g(x) = x - \lfloor x \rfloor $$ is strictly monotonic (since the derivative is not defined at integer $x$), but it is not monotonic at all.</p> <p>Your last sentence is completely correct.</p>
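<p>The sawtooth counterexample is easy to check concretely (a small illustration added here, not part of the original answer): the function increases on each interval between integers, yet the monotonicity inequality fails across an integer:</p>

```python
import math

def g(x):
    # The sawtooth g(x) = x - floor(x) from the answer.
    return x - math.floor(x)

# 1.0 > 0.9, but g(1.0) < g(0.9): g is not monotonic increasing.
not_monotonic = g(1.0) < g(0.9)
```
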
1,079,356
<p>My question can be summarized as:</p> <blockquote> <p>I want to prove that closed immersions are stable under base change.</p> </blockquote> <p>This is exercise II.3.11.a in Hartshorne's Algebraic Geometry. I researched this for about half a day. I consulted a number of books and online notes, but I found the proofs to be vague. Vakil's notes (9.2.1) hint that this is an immediate consequence of the canonical isomorphism $ M/IM\cong M\otimes_{A}A/I $, and so does Liu's book (1.23). The proof can't be this simple. I need to show that this can be reduced to the affine case, and to do so, I need to show that a closed immersion into an affine scheme is affine. I haven't been able to do so yet. Edit: I found a proof for this later but I still don't know how to use it to prove the question above.</p> <p>As for the Stacks project, I had to traverse a tree of propositions until I eventually found a proof that uses concepts like quasi-coherent sheaves, something not introduced in Hartshorne's book at this point.</p> <p>I also consulted Gortz &amp; Wedhorn. The book cites section 4.11 as a proof for this. However, the section is a general introduction to the categorical fibred product. It's unrelated. In fact the official errata mentions this error and cites proposition 4.20 instead. It is unclear to me how the proposition shows the result. I suspect the book uses a different - but equivalent - definition.</p> <p>At this point, I'm frustrated. I'm self-studying and don't have anybody to ask. Could somebody please be kind enough to show me a self-contained proof?</p>
Babai
36,789
<p><strong>Fact</strong>: Let <span class="math-container">$X, Y$</span> be schemes over <span class="math-container">$S$</span>. If <span class="math-container">$U\subset X$</span> is an open subset and the product <span class="math-container">$X\times _S Y$</span> exists, then <span class="math-container">$p_1 ^{-1}(U)= U\times _S Y$</span>.</p> <p>Let <span class="math-container">$f:Z \rightarrow X$</span> be a closed immersion. Equivalently, there is an open affine covering <span class="math-container">$\{U_i\}$</span>, <span class="math-container">$U_i=\operatorname{Spec}(A_i)$</span> of <span class="math-container">$X$</span>, such that <span class="math-container">$f^{-1}(U_i)=\operatorname{Spec}(A_i/I_i)$</span> for some ideal <span class="math-container">$I_i$</span> of <span class="math-container">$A_i$</span>.</p> <p>Let <span class="math-container">$g: Y\rightarrow X$</span> be a morphism. We want to show that <span class="math-container">$p_2: Z\times _X Y \rightarrow Y$</span> is a closed immersion.
<span class="math-container">$$\require{AMScd} \begin{CD} Z\times _X Y @&gt;&gt;&gt; Y \\ @VVV @VVV \\ Z @&gt;&gt;&gt; X \end{CD}$$</span></p> <p>We can choose an open affine covering <span class="math-container">$\{\operatorname{Spec}(B_i)\}$</span> of <span class="math-container">$Y$</span> such that <span class="math-container">$g(\operatorname{Spec}(B_i))\subset (\operatorname{Spec}(A_i))$</span>.</p> <p>By the Fact above, <span class="math-container">$p_2^{-1}(\operatorname{Spec}(B_i))= (\operatorname{Spec}(B_i))\times_X Z $</span></p> <p><span class="math-container">$= (\operatorname{Spec}(B_i))\times_{\operatorname{Spec}(A_i)} \operatorname{Spec}(A_i/I_i) $</span></p> <p>(because <span class="math-container">$g(\operatorname{Spec}(B_i))\subset \operatorname{Spec}(A_i)$</span> and <span class="math-container">$f^{-1}(\operatorname{Spec}(A_i))= \operatorname{Spec}(A_i/I_i)$</span>)</p> <p><span class="math-container">$=\operatorname{Spec}(B_i\otimes_{A_i} A_i/I_i)$</span></p> <p><span class="math-container">$=\operatorname{Spec}(B_i/I_i B_i)$</span> (because <span class="math-container">$M/IM\cong M\otimes_{A}A/I$</span>)</p> <p>Therefore <span class="math-container">$p_2$</span> is a closed immersion.</p>
1,079,356
<p>My question can be summarized as:</p> <blockquote> <p>I want to prove that closed immersions are stable under base change.</p> </blockquote> <p>This is exercise II.3.11.a in Hartshorne's Algebraic Geometry. I researched this for about half a day. I consulted a number of books and online notes, but I found the proofs to be vague. Vakil's notes (9.2.1) hint that this is an immediate consequence of the canonical isomorphism $ M/IM\cong M\otimes_{A}A/I $, and so does Liu's book (1.23). The proof can't be this simple. I need to show that this can be reduced to the affine case, and to do so, I need to show that a closed immersion into an affine scheme is affine. I haven't been able to do so yet. Edit: I found a proof for this later but I still don't know how to use it to prove the question above.</p> <p>As for the Stacks project, I had to traverse a tree of propositions until I eventually found a proof that uses concepts like quasi-coherent sheaves, something not introduced in Hartshorne's book at this point.</p> <p>I also consulted Gortz &amp; Wedhorn. The book cites section 4.11 as a proof for this. However, the section is a general introduction to the categorical fibred product. It's unrelated. In fact the official errata mentions this error and cites proposition 4.20 instead. It is unclear to me how the proposition shows the result. I suspect the book uses a different - but equivalent - definition.</p> <p>At this point, I'm frustrated. I'm self-studying and don't have anybody to ask. Could somebody please be kind enough to show me a self-contained proof?</p>
Takumi Murayama
116,766
<p>This is not an optimal solution, but if you didn't know that closed immersions can be checked affine locally (like I didn't), then this would be something you can do: check each condition for a closed immersion separately.</p> <p>Let $X = \operatorname{Spec} R$, $X' = \operatorname{Spec} A$, and $Y = \operatorname{Spec} B$ be affine. Then, we know that in the diagram $$\require{AMScd} \begin{CD} B \otimes_R A @&lt;&lt;&lt; A\\ @AAA @AAA \\ B @&lt;&lt;&lt; R \end{CD}$$ $R \to B$ being surjective implies $A \to B \otimes_R A$ is surjective by the right exactness of the tensor product.</p> <p>Now what we want to do is to use the same idea to show the surjectivity of the map $f^{\prime\#}$ on structure sheaves, and to do so you can check surjectivity on stalks. Thus, by using small enough open neighborhoods you can assume $X,X',Y$ are affine and using the notation above, you want to show $A_P \to (B \otimes_R A)_P$ is surjective for each $P \in \operatorname{Spec} A$, where you consider $B \otimes_R A$ as an $A$-module. Now the thing was that I decided to localize the diagram above at the prime $P$: you're right that I was wrong to put my prime $P$ in $\operatorname{Spec} R$ instead of $\operatorname{Spec} A$. </p> <p>Instead, you can get the same conclusion by noticing $(B \otimes_R A)_P = B \otimes_R A \otimes_A A_P = B \otimes_R A_P$, so we want to show $A_P \to B \otimes_R A_P$ is surjective. But $R \to B$ is surjective since it is locally surjective by assumption that $Y \to X$ is a closed immersion, so right exactness works again.</p> <p>Now, the question is how you get the topological part of being a closed immersion. The proof above combined with Exercise 2.18(c) shows we have a homeomorphism with a closed subset of $X'$ when assuming $X,X',Y$ are affine. 
You can then play the whole gluing game like in the construction of the fiber product to deduce that for arbitrary $X,X',Y$ you have this topological property.</p> <p>In hindsight, this is not a good way to go!</p> <p><strong>EDIT:</strong> I was requested to flesh out the topological part of the proof. Recall the setup: want to show that in the diagram $$\begin{CD} Y \times_X X' @&gt;f'&gt;&gt; X'\\ @VVV @VV{g}V \\ Y @&gt;f&gt;&gt; X \end{CD}$$ that if $f$ is a closed immersion, then $f'$ is a closed immersion. The proof above shows that $f^{\prime\#}$ is indeed surjective at each stalk, and so we must show $f'$ induces a homeomorphism between $Y \times_X X'$ and a closed subset of $X'$. I think the hard part is proving we can reduce to the case when $X,X',Y$ are affine, so I will be doing this below. We follow EGAI, Cor. 4.2.4 and Prop. 4.3.1. Note that the new edition has another proof.</p> <p>We first note that closed immersions are local on target. The property on stalks is trivially local on target; the property "a morphism $f\colon Y \to X$ induces a homeomorphism between $Y$ and a closed subset of $X$" is local on target as well. For, suppose $\{X_\lambda\}$ is a cover of $X$ and $Y_\lambda := f^{-1}(X_\lambda)$, such that $Y_\lambda \to X_\lambda$ induces a homeomorphism between $Y_\lambda$ and $f(Y_\lambda)$ a closed subset of $X_\lambda$. By hypothesis, if $y \in Y$, every neighborhood of $y$ is mapped to a neighborhood of $f(y)$, and $f$ is injective. So, it remains to show that $f(Y)$ is actually closed in $X$. But it suffices to show $f(Y) \cap X_\lambda$ is closed in $X_\lambda$ for every $\lambda$; but this is trivial since $f(Y) \cap X_\lambda = f(Y_\lambda)$ is closed in $X_\lambda$.</p> <p>Now back to the claim at hand. We first reduce to the case where $X$ is affine. First let $\{X_\lambda\}$ be an affine cover of $X$, and let $Y_\lambda := f^{-1}(X_\lambda)$ and $X'_\lambda := g^{-1}(X_\lambda)$. 
The restriction $f\rvert_{Y_\lambda}\colon Y_\lambda \to X_\lambda$ is a closed immersion, hence if the proposition holds for $X$ affine, then $Y_\lambda \times_{X_\lambda} X'_\lambda \to X'_\lambda$ is a closed immersion. Now by Thm. 3.3, Step 7, the $Y_\lambda \times_{X_\lambda} X'_\lambda$ are canonically isomorphic to $Y \times_X X'_\lambda$; the morphism $Y_\lambda \times_{X_\lambda} X'_\lambda \to X'_\lambda$ is the same as the restriction of $f'$ to $Y \times_X X'_\lambda$; since the $X'_\lambda$ form a cover of $X'$, we conclude that $f'$ is a closed immersion (assuming the proposition holds for $X$ affine) by the fact that closed immersions are local on target.</p> <p>Now we further reduce to the case where $X'$ is affine as well; note that $X$ affine implies $Y$ is affine, since it is a closed subscheme of $X$ by Exercise 3.11(b). But if $\{X'_\mu\}$ is an affine open cover of $X'$, then $Y \times_X X'_\mu \to X'_\mu$ is a closed immersion by the case in which $X$ and $X'$ are both affine, hence $Y \times_X X' \to X'$ is a closed immersion since closed immersions are local on target.</p>
1,079,356
<p>My question can be summarized as:</p> <blockquote> <p>I want to prove that closed immersions are stable under base change.</p> </blockquote> <p>This is exercise II.3.11.a in Hartshorne's Algebraic Geometry. I researched this for about half a day. I consulted a number of books and online notes, but I found the proofs to be vague. Vakil's notes (9.2.1) hint that this is an immediate consequence of the canonical isomorphism $ M/IM\cong M\otimes_{A}A/I $, and so does Liu's book (1.23). The proof can't be this simple. I need to show that this can be reduced to the affine case, and to do so, I need to show that a closed immersion into an affine scheme is affine. I haven't been able to do so yet. Edit: I found a proof for this later but I still don't know how to use it to prove the question above.</p> <p>As for the Stacks project, I had to traverse a tree of propositions, until I eventually found a proof that uses concepts like quasi-coherent sheaves, something not introduced in Hartshorne's book at this point.</p> <p>I also consulted Gortz &amp; Wedhorn. The book cites section 4.11 as a proof for this. However, the section is a general introduction to the categorical fibred product. It's unrelated. In fact the official errata mentions this error and cites proposition 4.20 instead. It is unclear to me how the proposition shows the result. I suspect the book uses a different, but equivalent, definition.</p> <p>At this point, I'm frustrated. I'm self-studying and don't have anybody to ask. Could somebody please be kind enough to show me a self-contained proof?</p>
Wang Samuel
422,510
<p>In the below, we consider a criterion for stability under base change that works in this exercise.</p> <p>A proof of this criterion using abstract nonsense is given at the end.</p> <hr /> <p><strong>Criterion for Stability under Base Change</strong></p> <p>Let <span class="math-container">$\mathbf{P}$</span> be a property of morphisms of schemes.</p> <p>Suppose the following hold:</p> <ol> <li><p>Given <span class="math-container">$f:X\to Y$</span> and an open subset <span class="math-container">$U$</span> of <span class="math-container">$Y$</span>, if <span class="math-container">$f$</span> has property <span class="math-container">$\mathbf{P}$</span>, so does the base change of <span class="math-container">$f$</span> along the inclusion of the open subscheme <span class="math-container">$U\subset Y$</span>.</p> </li> <li><p>Given <span class="math-container">$f:X\to Y$</span> and an open cover <span class="math-container">$\{U_i\}_i$</span> of <span class="math-container">$Y$</span>, if the base change of <span class="math-container">$f$</span> along the inclusion of each open subscheme <span class="math-container">$U_i\subset Y$</span> has property <span class="math-container">$\mathbf{P}$</span>, so does <span class="math-container">$f$</span>.</p> </li> </ol> <p>Then <span class="math-container">$\mathbf{P}$</span> is stable under base change iff <span class="math-container">$\mathbf{P}$</span> is stable under base change along morphisms between affine schemes.</p> <p><strong>Solution</strong></p> <p>Closed immersions clearly satisfy conditions 1 and 2: on the level of topological spaces, things are clear; on the level of sheaves, use stalks to check epimorphicity.</p> <p>In Hartshorne's Exercise 3.11(b), closed immersions with affine target are characterized as the <span class="math-container">$\operatorname{Spec}$</span> of some quotient map <span class="math-container">$A\to A/\mathfrak{a}$</span>. 
Therefore, after verifying conditions 1 and 2 above, we have reduced the problem to the computation of the pushout of some span <span class="math-container">$A\leftarrow B\rightarrow B/I$</span> (which gives <span class="math-container">$A\to A/IA$</span>).</p> <hr /> <p><strong>Proof of Criterion</strong></p> <p>Consider <span class="math-container">$Z\rightarrow X\leftarrow Y$</span> with pullback <span class="math-container">$W$</span>, where <span class="math-container">$Y\rightarrow X$</span> has property <span class="math-container">$\mathbf{P}$</span>.</p> <ul> <li>The case where <span class="math-container">$X$</span> is affine: choose an open affine cover <span class="math-container">$\{Z_i\}_i$</span> of <span class="math-container">$Z$</span>, and note that</li> </ul> <p><span class="math-container">$$(W\times_{Z}Z_i\to Z_i)\simeq (Y\times_XZ_i\to Z_i)$$</span></p> <p>As <span class="math-container">$X,Z_i$</span> are affine, condition 2 shows <span class="math-container">$W\to Z$</span> has property <span class="math-container">$\mathbf{P}$</span>.</p> <ul> <li>The general case: choose an affine cover <span class="math-container">$\{X_i\}_i$</span> of <span class="math-container">$X$</span>, let <span class="math-container">$Z_i=X_i\times_XZ$</span>; we have</li> </ul> <p><span class="math-container">$$(W\times_{Z}Z_i)\simeq(Y\times_XX_i)\times_{X_i}Z_i$$</span></p> <p>By condition 1, <span class="math-container">$(Y\times_XX_i)\to X_i$</span> has property <span class="math-container">$\mathbf{P}$</span>; as <span class="math-container">$X_i$</span> is affine, the previous case shows <span class="math-container">$W\times_Z Z_i\to Z_i$</span> has property <span class="math-container">$\mathbf{P}$</span>, and condition 2 concludes the proof of the criterion.</p>
3,183,617
<p>I have an equation that looks like <span class="math-container">$$X' = a \sin(X) + b \cos(X) + c$$</span> where <span class="math-container">$a,b$</span> and <span class="math-container">$c$</span> are constants. For given values of <span class="math-container">$a, b$</span> and <span class="math-container">$c$</span>, how can I calculate <span class="math-container">$X$</span>? I have a set of values for <span class="math-container">$a, b$</span> and <span class="math-container">$c$</span>, and I am looking for an equation that solves for <span class="math-container">$X$</span>; other approaches like numerical methods are also OK. Thanks in advance.</p> <p>My approach is as below: <span class="math-container">$$X' = a \sin(X) + b \cos(X) + c \quad (\text{integrating this equation})$$</span> <span class="math-container">$$X = d \cos(X) + b \sin(X) + cX \quad \text{where } d = -a$$</span> <span class="math-container">$$X = p\sin(X + q) + cX \quad \text{where } p = \sqrt{d^2 + b^2},\ \cos q = d/p,\ \sin q = b/p$$</span> <span class="math-container">$$X(1-c)/p = \sin(X+q) \quad \text{where } p/(1-c) = r$$</span> <span class="math-container">$$X - r\sin(X+q) = 0$$</span> <span class="math-container">$$f(X) = X - r \sin(X+q), \qquad f'(X) = 1 - r\cos(X+q)$$</span> To the above equations I applied Newton's method: <span class="math-container">$$X_{n+1} = X_n - f(X_n)/f'(X_n), \qquad X_0 = 1$$</span> I ran this for 2000 iterations and found that the solution is not converging to the expected result. Is there something wrong with the mathematical derivation, or is it not possible to get results from this approach? </p>
Claude Leibovici
82,404
<p>If you rewrite the equation as <span class="math-container">$$\frac 1 {t'}=a \sin(x)+b \cos(x)+c$$</span> you should get <span class="math-container">$$t+k=-\frac{2 \tanh ^{-1}\left(\frac{a+(c-b) \tan \left(\frac{x}{2}\right)}{\sqrt{a^2+b^2-c^2}}\right)}{\sqrt{a^2+b^2-c^2}}$$</span> where <span class="math-container">$k$</span> would be fixed by initial conditions.</p> <p>Solving for <span class="math-container">$x$</span> would then give <span class="math-container">$$x=2 \tan ^{-1}\left(\frac{\sqrt{a^2+b^2-c^2} \tanh \left(\frac{1}{2} (k+t) \sqrt{a^2+b^2-c^2}\right)+a}{b-c}\right)$$</span></p>
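Not part of the original answer, but the closed form is easy to sanity-check numerically: pick sample constants with $a^2+b^2>c^2$ (and $b \ne c$, so the formula's denominator is nonzero), take $k=0$, and compare a finite-difference derivative of the formula against the right-hand side of the ODE. The constants below are arbitrary choices of mine.

```python
import math

def x_closed(t, a, b, c, k=0.0):
    """Closed-form solution; assumes a^2 + b^2 > c^2 and b != c."""
    d = math.sqrt(a * a + b * b - c * c)
    return 2.0 * math.atan((d * math.tanh(0.5 * (k + t) * d) + a) / (b - c))

a, b, c = 2.0, 1.0, 0.5          # arbitrary sample constants
h = 1e-6
for t in (0.1, 0.5, 1.0):
    x = x_closed(t, a, b, c)
    # central finite difference approximates x'(t)
    xdot = (x_closed(t + h, a, b, c) - x_closed(t - h, a, b, c)) / (2 * h)
    assert abs(xdot - (a * math.sin(x) + b * math.cos(x) + c)) < 1e-5
```

The residual stays at the level of the finite-difference error, which is what one expects if the closed form really solves the equation.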
377,393
<p>Two players play a game. Player 1 goes first, and chooses a number between 1 and 30 (inclusive). Player 2 chooses second; he can't choose Player 1's number. A fair 30-sided die is rolled. The player that chose the number closest to the value of the roll takes that value (say, in dollars) from the other player. Would you rather be Player 1 (choose first) or Player 2 (choose second)? Also, what integer should that player choose?</p> <p>After doing some trial-and-error calculation, I now know the correct player and the integer he should choose, but what is a faster way to determine which player to be, and which number he should choose?</p>
A. R. Caputo III
612,152
<p>Here is some Python code that simulates this game using Monte Carlo and then uses CommonerG's method to solve for the optimal strategy for any n-sided die. Note that the Monte Carlo strategy sometimes converges to a local maximum, but with enough iterations it is usually correct.</p> <p><a href="https://github.com/arcaputo3/algorithms/blob/master/dice_game/two_player_game.py" rel="nofollow noreferrer">https://github.com/arcaputo3/algorithms/blob/master/dice_game/two_player_game.py</a></p>
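For readers who prefer not to follow the link, here is a self-contained brute-force sketch of my own (not the linked code): Player 2 best-responds to each choice of Player 1, and Player 1 picks the number minimizing that best response. Treating an equidistant roll as a push is an assumption, since the puzzle statement does not specify ties.

```python
from fractions import Fraction

N = 30  # faces on the fair die

def p2_value(p1, p2):
    """Expected winnings per roll for Player 2 (negative = Player 2 loses)."""
    total = 0
    for r in range(1, N + 1):
        d1, d2 = abs(r - p1), abs(r - p2)
        if d2 < d1:
            total += r        # Player 2 is closer and takes r
        elif d1 < d2:
            total -= r        # Player 1 is closer and takes r
        # equidistant rolls are treated as a push (assumption)
    return Fraction(total, N)

# Player 2 best-responds; Player 1 minimizes Player 2's best response.
best_response = {p1: max(p2_value(p1, p2) for p2 in range(1, N + 1) if p2 != p1)
                 for p1 in range(1, N + 1)}
p1_star = min(best_response, key=best_response.get)
print(p1_star, best_response[p1_star])   # → 22 -1/10
```

Under that tie convention the brute force says to go first and pick $22$, which wins $0.1$ per game in expectation against best play; for an $n$-sided die the crossover sits near $\sqrt{n(n+1)/2}$.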
2,880,384
<p>Look at the following definition.</p> <p><strong>Definition.</strong> Let $\kappa$ be an infinite cardinal. A theory $T$ is called $\kappa$-stable if for every model $M\models T$ and all $A\subset M$ with $|A|\leq \kappa$ we have $|S_n^M(A)|\leq \kappa$. A theory $T$ is called stable if it is $\kappa$-stable for some infinite cardinal $\kappa$.</p> <p>I am a beginner in model theory and so my questions might be stupid. When you read a basic textbook in math you see that the algebraic, analytic, geometric, topological, and other definitions/concepts are natural. For example, algebraic concepts such as group, ring, field, module, and Galois theory have clear natural roots. Also the notion of continuity in analysis is a natural (in my sense!) concept. The idea behind topology is also natural. But the model-theoretic notions are usually not concrete for me at all! For example, one of the main topics in model theory is stability theory (which is a part of Shelah's classification), in which you need to count the number of types. I would like to know: where does the idea of stability (counting the number of types) come from?</p> <p>Any reference would be appreciated.</p>
Noah Schweber
28,111
<p><em>I recommend <a href="http://www.math.ucla.edu/~chernikov/teaching/StabilityTheory285D/StabilityNotes.pdf" rel="noreferrer">this survey of Chernikov</a> as a source.</em></p> <p>Stability, in my opinion, should be thought of in the context of the overall <strong>classification program</strong>$^1$ - in particular, the idea is to look for a "tameness" property which will hopefully imply that a given theory has "few" models (so that we have a hope of classifying them). That is, stability is (initially at least) a <strong>tool with an intended application</strong>. Remember its original appearance, after all: Morley introduced it to show that (for countable complete theories) categoricity in one uncountable cardinal implies categoricity in every uncountable cardinal, or more broadly that categoricity in a single uncountable cardinal is an incredibly powerful tameness property.</p> <p>The key point, then, is to connect stability more generally with <em>the number of models</em> - or more simply, to understand why "more types = more models." I think a good first example to consider here is $\mathbb{C}$ versus $\mathbb{R}$ as fields. The former's theory is very simple: an algebraically closed field is classified completely by its characteristic and its transcendence degree, and in particular the theory of algebraically closed fields of characteristic zero is uncountably categorical. By contrast, the latter's theory is very complicated, at least in the sense of counting models: it's easy to show that there are for example continuum many non-isomorphic countable real closed fields. Playing around with this example, it's not hard to see that what's going on is that $\mathbb{C}$ has "few types" while $\mathbb{R}$ has "many types," and this gives us the idea to try to connect the number of types and the number of models more generally. 
(It also suggests more specifically a connection between <em>instability</em> and <em>definable orderings</em>, and indeed this turns out to hold in a very strong sense: see Definition 2.9 and following in the paper linked above.)</p> <hr> <p>$^1$Of course, to a certain extent this just pushes the question back: why, or to what extent, is the classification program natural? In my opinion, the question of when a (first-order axiomatizable) class of structures admits a "reasonable classification" is an extremely natural one, motivated by examples on each side - e.g. uncountable dense linear orders without endpoints are extremely complicated even though their <em>theory</em> is very simple, while algebraically closed fields are easily classified by <em>characteristic</em> and <em>transcendence degree</em> - and the reflexive desire to find a "common thread" uniting the tame, or the wild, theories.</p>
2,418,547
<p>The following is a proof of $$\frac{\partial(u,v)}{\partial(x,y)}\cdot\frac{\partial(x,y)}{\partial(u,v)} = 1$$ <a href="https://i.stack.imgur.com/fLfRX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fLfRX.jpg" alt="Jacobian proof"></a></p> <p>In the above proof, I cannot understand why $\frac{\partial u}{\partial v} = 0$ and $\frac{\partial v}{\partial u} = 0$. Since $u$ and $v$ are independent variables, $\frac{\partial v}{\partial u}$ and $\frac{\partial u}{\partial v}$ seem to have no meaning.</p>
Community
-1
<p>The expressions $\frac{\mathrm{d}v}{\mathrm{d}u}$ and $\frac{\mathrm{d}u}{\mathrm{d}v}$ are indeed meaningless because the differentials $\mathrm{d}u$ and ${\mathrm{d}v}$ are not ratios of one another.</p> <p>But those are not the expressions appearing in the formulas.</p> <p>When given a system of coordinates such as $\{ u, v \}$, the equations</p> <p>$$ \frac{\partial v}{\partial u} = 0 = \frac{\partial u}{\partial v}$$</p> <p>are basically (half of) the <em>definition</em> of the notation. For reductions to other familiar notions:</p> <ul> <li>$\frac{\partial}{\partial u}$ is notation that means to take the directional derivative in the direction where $v$ is held constant and $u$ increases with unit rate. That is, $\frac{\partial}{\partial u} = \nabla_{\vec{s}}$ where $\vec{s}$ is chosen to be the vector such that $\nabla_{\vec{s}} u = 1$ and $\nabla_{\vec{s}} v = 0$</li> <li>$\frac{\partial}{\partial u}$ is the linear operator on differential forms that sends $\mathrm{d}u \mapsto 1$ and $\mathrm{d}v \mapsto 0$. The notation $\frac{\partial z}{\partial u}$ means to apply $\frac{\partial}{\partial u}$ to the differential $\mathrm{d} z$.</li> </ul> <hr> <p>Aside: do take care to note that despite only one of the two variables appearing in $\frac{\partial}{\partial u}$, the operation depends on the entire choice of coordinate system $\{ u, v \}$. If $\{ u, x \}$ also happened to be a coordinate system, the operator $\frac{\partial}{\partial u}$ you would write for that coordinate system is different from the operator by the same name you would write for the $\{ u, v \}$ system.</p> <p>If you ever find yourself in such a situation and still want to retain the use of this notation, you could do something like</p> <p>$$ \left.\frac{\partial}{\partial u}\right|_{v \mathrm{\ const}} $$</p> <p>to refer to the operator that is relative to the $\{ u, v \}$ coordinate system.</p>
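The determinant identity itself can be spot-checked concretely; below is a small sketch of mine (not part of the answer) using polar coordinates $x = r\cos\theta$, $y = r\sin\theta$, where the two Jacobian determinants work out to $r$ and $1/r$.

```python
import math

def jac_det_polar(r, t):
    """det of the matrix of partials for x = r*cos(t), y = r*sin(t); equals r."""
    # matrix [[cos t, -r sin t], [sin t, r cos t]]
    return math.cos(t) * (r * math.cos(t)) - (-r * math.sin(t)) * math.sin(t)

def jac_det_inverse(x, y):
    """det of the inverse-map Jacobian, r = sqrt(x^2+y^2), t = atan2(y, x); equals 1/r."""
    r = math.hypot(x, y)
    # dr/dx = x/r, dr/dy = y/r, dt/dx = -y/r^2, dt/dy = x/r^2
    return (x / r) * (x / r**2) - (y / r) * (-y / r**2)

r0, t0 = 2.0, 0.7
x0, y0 = r0 * math.cos(t0), r0 * math.sin(t0)
# the two Jacobian determinants are reciprocal, so their product is 1
assert abs(jac_det_polar(r0, t0) * jac_det_inverse(x0, y0) - 1) < 1e-12
```

Note that this works because both coordinate systems are held fixed, exactly the point of the aside above.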
3,144,757
<p>I got these definite integrals from the moments.</p> <p>I am required to calculate the definite integrals <span class="math-container">$$\int_{0}^{\infty}x^kf(x)dx$$</span>, where <span class="math-container">$f(x)=e^{-x^{\frac{1}{4}}}\sin(x^{\frac{1}{4}})$</span> and <span class="math-container">$k\in\mathbb N$</span>.</p> <p>I have had a tough time trying to calculate them. I tried changing variables and integrating by parts, but it doesn't seem to work.</p> <p>Any help will be appreciated.</p> <p><strong>Edit:</strong></p> <p>I first used the change of variable <span class="math-container">$y=x^{\frac{1}{4}}$</span>, turning the definite integral into <span class="math-container">$$4\int_0^{\infty}y^{4k+3}e^{-y}\sin(y)dy.$$</span></p> <p>Then, when I tried to integrate by parts, two new terms come out, namely <span class="math-container">$y^{4k+2}\sin(y)$</span> and <span class="math-container">$y^{4k+3}\cos(y)$</span>. I noticed that we can never eliminate the power of <span class="math-container">$y$</span> from the second term, so I had no idea how to go on.</p>
Travis Willse
155,629
<p><strong>Hint</strong> The appearance of the quantity <span class="math-container">$\require{cancel}x^{1 / 4}$</span> inside the arguments of <span class="math-container">$\exp$</span> and <span class="math-container">$\sin$</span> suggests the substitution <span class="math-container">$$x = u^4, \qquad dx = 4 u^3 du,$$</span> which transforms the integral into <span class="math-container">$$4 \int_0^\infty u^{4 k + 3} e^{-u} \sin u \,du = 4 \int_0^\infty u^{4 k + 3} e^{-u} \operatorname{Im}(e^{i u}) \,du = 4 \operatorname{Im} \int_0^{\infty} u^{4 k + 3} e^{(-1 + i) u} du .$$</span> The form of the integrand suggests applying integration by parts. Doing so for <span class="math-container">$$\int_0^{\infty} u^m e^{(-1 + i) u} du$$</span> with <span class="math-container">$p = u^m$</span>, <span class="math-container">$dq = e^{(-1 + i) u} du$</span> gives <span class="math-container">$$\cancelto{0}{\left.u^m \cdot \frac{1}{-1 + i} e^{(-1 + i) u} \right\vert_0^\infty} - \int_0^\infty m u^{m - 1} \frac{1}{-1 + i} e^{(-1 + i) u} du .$$</span> So, if we denote <span class="math-container">$I_m := \int_0^{\infty} u^m e^{(-1 + i) u} du ,$</span> the integrals <span class="math-container">$I_m$</span> satisfy the reduction formula <span class="math-container">$$I_m = -\frac{m e^{-3 \pi i / 4}}{\sqrt{2}} I_{m - 1}.$$</span></p>
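Carrying the hint to its conclusion: iterating the reduction formula down to $I_0 = \int_0^\infty e^{(-1+i)u}\,du = \frac{1}{1-i}$ gives $I_m = \frac{m!}{(1-i)^{m+1}}$, and for $m = 4k+3$ the denominator $(1-i)^{4k+4} = (-4)^{k+1}$ is real, so every moment $4\operatorname{Im} I_{4k+3}$ vanishes. A quick numerical check of this conclusion (my addition, not part of the hint):

```python
import math

def moment(k):
    """4 * Im(I_{4k+3}), using I_m = m!/(1-i)^{m+1} from the reduction formula."""
    m = 4 * k + 3
    I_m = math.factorial(m) / (1 - 1j) ** (m + 1)
    return 4 * I_m.imag

for k in range(4):
    # the imaginary part vanishes (up to float noise relative to m!)
    assert abs(moment(k)) <= 1e-9 * math.factorial(4 * k + 3)
```

So every moment of this nonzero $f$ is zero, which is presumably why the question arose "from the moments" in the first place.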
3,144,757
<p>I got these definite integrals from the moments.</p> <p>I am required to calculate the definite integrals <span class="math-container">$$\int_{0}^{\infty}x^kf(x)dx$$</span>, where <span class="math-container">$f(x)=e^{-x^{\frac{1}{4}}}\sin(x^{\frac{1}{4}})$</span> and <span class="math-container">$k\in\mathbb N$</span>.</p> <p>I have had a tough time trying to calculate them. I tried changing variables and integrating by parts, but it doesn't seem to work.</p> <p>Any help will be appreciated.</p> <p><strong>Edit:</strong></p> <p>I first used the change of variable <span class="math-container">$y=x^{\frac{1}{4}}$</span>, turning the definite integral into <span class="math-container">$$4\int_0^{\infty}y^{4k+3}e^{-y}\sin(y)dy.$$</span></p> <p>Then, when I tried to integrate by parts, two new terms come out, namely <span class="math-container">$y^{4k+2}\sin(y)$</span> and <span class="math-container">$y^{4k+3}\cos(y)$</span>. I noticed that we can never eliminate the power of <span class="math-container">$y$</span> from the second term, so I had no idea how to go on.</p>
Community
-1
<p>An alternative approach. Here your integral is: <span class="math-container">\begin{equation} I = 4\int_0^\infty x^{4k + 3} e^{-x} \sin(x)\:dx\nonumber \end{equation}</span> Here we will employ Feynman's Trick by introducing the function <span class="math-container">\begin{equation} J(t) = 4\int_0^\infty x^{4k + 3} e^{-tx} \sin(x)\:dx\nonumber \end{equation}</span> We observe that <span class="math-container">$J(1) = I$</span>. Now: <span class="math-container">\begin{equation} \frac{\partial}{\partial t} e^{-tx} = -x e^{-tx} \Longrightarrow \frac{\partial^{4k + 3}}{\partial t^{4k + 3}} e^{-tx} = \left(-1\right)^{4k + 3} x^{4k + 3}e^{-tx} = -x^{4k + 3}e^{-tx} \end{equation}</span> And thus, <span class="math-container">\begin{equation} J(t) = 4\int_0^\infty x^{4k + 3} e^{-tx} \sin(x)\:dx = 4\int_0^\infty -\frac{\partial^{4k + 3}}{\partial t^{4k + 3}} e^{-tx} \sin(x)\:dx = -4\int_0^\infty \frac{\partial^{4k + 3}}{\partial t^{4k + 3}} e^{-tx} \sin(x)\:dx\nonumber \end{equation}</span> By Leibniz's Integral Rule: <span class="math-container">\begin{equation} J(t) =-4\int_0^\infty \frac{\partial^{4k + 3}}{\partial t^{4k + 3}} e^{-tx} \sin(x)\:dx = -4\frac{\partial^{4k + 3}}{\partial t^{4k + 3}} \int_0^\infty e^{-tx}\sin(x)\:dx \nonumber \end{equation}</span> Now <span class="math-container">\begin{equation} \int e^{ax}\sin\left(bx\right)\:dx = \frac{e^{ax}}{a^2 + b^2}\left(a\sin(bx) - b\cos(bx) \right) + C\nonumber \end{equation}</span> Thus our integral becomes <span class="math-container">\begin{align} J(t) &amp;=-4 \frac{\partial^{4k + 3}}{\partial t^{4k + 3}} \int_0^\infty e^{-tx}\sin(x)\:dx = -4\frac{\partial^{4k + 3}}{\partial t^{4k + 3}} \left[\frac{e^{-tx}}{t^2 + 1}\left(-t\sin(x) - \cos(x) \right) \right]_0^{\infty}\nonumber \\ &amp;= -4 \frac{\partial^{4k + 3}}{\partial t^{4k + 3}} \left[\frac{1}{t^2 + 1} \right] \nonumber \end{align}</span> And so we can express the solution to <span class="math-container">$I$</span> as <span 
class="math-container">\begin{equation} I = J(1) = -4\frac{\partial^{4k + 3}}{\partial t^{4k + 3}} \frac{1}{t^2 + 1}\bigg|_{t = 1} \end{equation}</span></p>
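The one closed-form ingredient everything here rests on, $\int_0^\infty e^{-tx}\sin x\,dx = \frac{1}{t^2+1}$, is easy to confirm numerically; the cutoff and step count in this sketch of mine are arbitrary choices.

```python
import math

def damped_sine_integral(t, upper=60.0, n=200_000):
    """Composite trapezoid rule for the integral of e^(-t*x)*sin(x) over [0, upper]."""
    h = upper / n
    s = 0.5 * (0.0 + math.exp(-t * upper) * math.sin(upper))
    for i in range(1, n):
        x = i * h
        s += math.exp(-t * x) * math.sin(x)
    return s * h

for t in (0.5, 1.0, 2.0):
    # the tail beyond the cutoff is O(e^(-60*t)), far below the tolerance
    assert abs(damped_sine_integral(t) - 1 / (t * t + 1)) < 1e-6
```

With that identity in hand, the answer's repeated differentiation under the integral sign is pure bookkeeping.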
481,421
<p>Find the limit of: $$\lim_{x\to\infty}{\frac{\cos(\frac{1}{x})-1}{\cos(\frac{2}{x})-1}}$$</p>
ILikeMath
86,744
<p>As an alternative to NightRa's answer</p> <p>$$\mathop {\lim}\limits_{h \to 0} \frac{-\sin h}{-2 \sin {2h}}=\mathop{\lim}\limits_{h\to 0}\frac{-\sin h}{-4 \sin h \cos h}=\mathop{\lim}\limits_{h\to 0}\frac{1}{4\cos h}=\frac{1}{4}.$$</p>
481,421
<p>Find the limit of: $$\lim_{x\to\infty}{\frac{\cos(\frac{1}{x})-1}{\cos(\frac{2}{x})-1}}$$</p>
André Nicolas
6,312
<p>Let $h=\frac{1}{x}$. We want to find $$\lim_{h\to 0^+} \frac{\cos h-1}{\cos 2h-1}.$$ From the identity $\cos 2h=2\cos^2h-1$, we see that we want $$\lim_{h\to 0^+} \frac{\cos h-1}{2\cos^2 h-2}.$$ But $2(\cos^2 h-1)=2(\cos h-1)(\cos h+1)$, so we want $$\lim_{h\to 0^+} \frac{1}{2(\cos h+1)}.$$ This limit is $\dfrac{1}{4}$.</p>
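Both derivations give $\frac14$; a quick numerical sanity check (mine, not part of the answer) shows the ratio settling there as $x$ grows.

```python
import math

f = lambda x: (math.cos(1 / x) - 1) / (math.cos(2 / x) - 1)
for x in (10.0, 100.0, 1000.0):
    # the ratio tends to 1/4, with the error shrinking like 1/x^2
    assert abs(f(x) - 0.25) < 1 / x**2
```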
4,362,741
<p>Let <span class="math-container">$f:(0,\infty)\rightarrow \mathbb{R}$</span> be a real-valued function, such that for some <span class="math-container">$t,C&gt;0$</span>, <span class="math-container">\begin{equation} \limsup_{x\rightarrow\infty} f(x+t)\leq C \end{equation}</span> Is it also true that <span class="math-container">$\limsup_{x\rightarrow\infty} f(x)\leq C$</span>? Particularly, is it true that <span class="math-container">$\limsup_{x\rightarrow\infty} f(x+t)=\limsup_{x\rightarrow\infty} f(x)$</span>?</p>
Mason
752,243
<p>This is easy to prove straight from the definition of <span class="math-container">$\limsup$</span>. By definition, <span class="math-container">$$\limsup_{x \to \infty}f(x + t) = \lim_{x \to \infty}\sup_{y &gt; x}f(y + t).$$</span> Thus <span class="math-container">$$\limsup_{x \to \infty}f(x + t) = \lim_{x \to \infty}\sup_{y &gt; x + t}f(y).$$</span> Now by a standard <span class="math-container">$\varepsilon$</span> argument for <span class="math-container">$\lim$</span> (probably what you call &quot;change of variables&quot;), we get <span class="math-container">$$\lim_{x \to \infty}\sup_{y &gt; x + t}f(y) = \lim_{z \to \infty}\sup_{y &gt; z}f(y),$$</span> and this is exactly <span class="math-container">$\limsup_{z \to \infty}f(z)$</span>.</p>
1,382,087
<p>Problem:</p> <p>A bag contains $4$ red and $5$ white balls. Balls are drawn from the bag without replacement.</p> <p>Let $A$ be the event that the first ball drawn is white and let $B$ denote the event that the second ball drawn is red. Find</p> <p>(i) $P(B\mid A)$</p> <p>(ii) $P(A\mid B)$</p> <p>My confusion is: should $P(A\mid B)=P(A)$?</p> <p>Can we say that in general, if $P(A\mid B)$ exists, then $P(B\mid A)$ should also exist?</p>
drhab
75,923
<p>In general:</p> <p>$$P(B|A)P(A)=P(A\cap B)=P(A|B)P(B)$$</p> <p>If you can find $P(A),P(B)$ and $P(A\cap B)$ then this enables you to find $P(A|B)$ and $P(B|A)$.</p> <p>Note that $P(A|B)=P(A)$ leads to $P(A\cap B)=P(A)P(B)$ i.e. independence of $A$ and $B$. </p> <p>In your question $A$ and $B$ are not independent.</p> <hr> <p>Hints:</p> <ul> <li><p>To find $P(B)$ realize that the $9$ balls all have <em>equal</em> probability to become the second ball drawn, and $4$ of them are red, so...</p></li> <li><p>Actually finding $P(B|A)$ "directly" is somewhat easier than finding $P(A\cap B)$. If the first ball has been drawn and is white then there are $8$ balls left and $4$ of them are red, so...</p></li> </ul>
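The hints can be confirmed by exhaustively enumerating all ordered draws; this is a small sketch of mine, not part of the original answer.

```python
from fractions import Fraction
from itertools import permutations

balls = "R" * 4 + "W" * 5
draws = list(permutations(balls, 2))   # all 9*8 = 72 ordered (first, second) draws

A  = [d for d in draws if d[0] == "W"]                   # first ball white
B  = [d for d in draws if d[1] == "R"]                   # second ball red
AB = [d for d in draws if d[0] == "W" and d[1] == "R"]

assert Fraction(len(B), len(draws)) == Fraction(4, 9)    # P(B)
assert Fraction(len(AB), len(A)) == Fraction(1, 2)       # P(B|A)
assert Fraction(len(AB), len(B)) == Fraction(5, 8)       # P(A|B), not 5/9 = P(A)
```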
1,382,087
<p>Problem:</p> <p>A bag contains $4$ red and $5$ white balls. Balls are drawn from the bag without replacement.</p> <p>Let $A$ be the event that the first ball drawn is white and let $B$ denote the event that the second ball drawn is red. Find</p> <p>(i) $P(B\mid A)$</p> <p>(ii) $P(A\mid B)$</p> <p>My confusion is: should $P(A\mid B)=P(A)$?</p> <p>Can we say that in general, if $P(A\mid B)$ exists, then $P(B\mid A)$ should also exist?</p>
wythagoras
236,048
<p>$P(A \mid B) \neq P(A)$. </p> <p>$$P(A \mid B)= \frac{P(A \cap B)}{P(B)} = \frac{\frac5{18}}{\frac59\times\frac48+\frac49\times\frac38} = \frac{5}{8}$$</p> <p>We have $P(A)=\frac{5}{9}$. The intuition is that conditioning on $B$ makes it more likely that a white ball was drawn first, because a white first draw leaves more red balls for the second draw and so makes $B$ more likely. </p> <p>The key in the problem is that the balls are drawn without replacement. Otherwise we would have $P(A \mid B) = P(A)$.</p> <hr> <p>Conditional probability is always meaningful. $P(A \mid B)=P(A)$ also gives information: namely, it tells you that $A$ and $B$ are independent. </p>
3,531,809
<p>In how many ways can we place 7 identical red balls and 7 identical blue balls into 5 distinct urns if each urn has at least 1 ball?</p> <p>This is how I approached the problem:</p> <p>1) Compute the number of total combinations if there were no constraints:</p> <p>Placing just the red balls, allowing for empty urns: <span class="math-container">$\binom{n+k-1}{k-1} = \binom{7+5-1}{5-1} = \binom{11}{4} = 330$</span>. There are the same number of blue ball configurations.</p> <p>Since each red ball configuration can have 330 possible blue ball configurations, then in total we should have <span class="math-container">$330^2 = 108900$</span> </p> <p>2) Compute the number of illegal configurations with 1, 2, 3 or 4 empty urns:</p> <p><span class="math-container">$r_1$</span> = ways to put 7 red balls into 1 urn = <span class="math-container">$\binom{7-1}{1-1} = \binom{6}{0} = 1$</span><br> <span class="math-container">$r_2$</span> = ways to put 7 red balls into 2 urns = <span class="math-container">$\binom{7-1}{2-1} = \binom{6}{1} = 6$</span><br> <span class="math-container">$r_3$</span> = ways to put 7 red balls into 3 urns = <span class="math-container">$\binom{7-1}{3-1} = \binom{6}{2} = 15$</span><br> <span class="math-container">$r_4$</span> = ways to put 7 red balls into 4 urns = <span class="math-container">$\binom{7-1}{4-1} = \binom{6}{3} = $</span>20 </p> <p><span class="math-container">$b_1$</span> = ways to put 7 blue balls into 1 urn = <span class="math-container">$r_1$</span><br> <span class="math-container">$b_2$</span> = ways to put 7 blue balls into 2 urns = <span class="math-container">$r_2$</span><br> <span class="math-container">$b_3$</span> = ways to put 7 blue balls into 3 urns = <span class="math-container">$r_3$</span><br> <span class="math-container">$b_4$</span> = ways to put 7 blue balls into 4 urns = <span class="math-container">$r_4$</span> </p> <p><span class="math-container">$u_1$</span> = ways to pick 1 urn = <span 
class="math-container">$\binom{5}{1} = 5$</span><br> <span class="math-container">$u_2$</span> = ways to pick 2 urns = <span class="math-container">$\binom{5}{2} = 10$</span><br> <span class="math-container">$u_3$</span> = ways to pick 3 urns = <span class="math-container">$\binom{5}{3} = 10$</span><br> <span class="math-container">$u_4$</span> = ways to pick 4 urns = <span class="math-container">$\binom{5}{4} = 5$</span> </p> <p># ways to put 7 red and 7 blue balls into 1, 2, 3, or 4 urns =<br> # ways to put 7 red and 7 blue balls into 5 urns where 1 or more urns are empty = </p> <p><span class="math-container">$r_1 b_1\binom{5}{1} + r_2 b_2\binom{5}{2} + r_3 b_3\binom{5}{3} + r_4 b_4\binom{5}{4} = $</span> </p> <p><span class="math-container">$1^2 \cdot 5 + 6^2 \cdot 10 + 15^2 \cdot 10 + 20^2 \cdot 5 = 4615$</span> = # of illegal configurations</p> <p>3) Subtract the number of illegal configurations from the number of total configurations:</p> <p>108900 - 4615 = 104285</p> <p>Is this correct? If not, could someone explain where either my logic breaks down or where I calculated something incorrectly?</p>
joriki
6,622
<p>Your error lies in multiplying the number of ways to distribute <span class="math-container">$7$</span> red balls over <span class="math-container">$k$</span> non-empty urns by the number of ways to distribute <span class="math-container">$7$</span> blue balls over <span class="math-container">$k$</span> non-empty urns and treating that as the number of ways to distribute all <span class="math-container">$14$</span> balls over <span class="math-container">$2$</span> non-empty urns. You&rsquo;re missing the distributions where <span class="math-container">$k$</span> urns are non-empty but not all of them contain both red and blue balls.</p> <p>For a correct count, you can perform <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">inclusion–exclusion</a> like this: There are <span class="math-container">$5$</span> conditions for the <span class="math-container">$5$</span> urns to be non-empty. There are <span class="math-container">$\binom5k$</span> ways to choose <span class="math-container">$k$</span> particular conditions, and <span class="math-container">$\binom{7+(5-k)-1}{(5-k)-1}^2=\binom{11-k}7^2$</span> ways to violate them by distributing all balls to the remaining <span class="math-container">$5-k$</span> urns. Thus the number of admissible distributions is</p> <p><span class="math-container">\begin{eqnarray} \sum_{k=0}^4(-1)^k\binom5k\binom{11-k}7^2 &amp;=&amp; 1\cdot\binom{11}7^2-5\cdot\binom{10}7^2+10\cdot\binom97^2-10\cdot\binom87^2+5\cdot\binom77^2 \\ &amp;=&amp; 49225\;. \end{eqnarray}</span></p>
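The count is small enough to confirm by brute force; this sketch (mine, not part of the answer) enumerates all $330^2$ pairs of red/blue distributions and keeps those leaving no urn empty.

```python
def distributions(n, k):
    """Yield every way to place n identical balls into k distinct urns."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in distributions(n - first, k - 1):
            yield (first,) + rest

reds = list(distributions(7, 5))       # 330 = C(11,4) compositions
count = sum(1 for r in reds for b in reds
            if all(x + y > 0 for x, y in zip(r, b)))
assert count == 49225                  # matches the inclusion-exclusion total
```

The brute force also makes the question's error visible: multiplying the two per-color counts of non-empty distributions counts configurations where an urn holds only one color incorrectly.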
1,572,126
<p>I did solve it and got four solutions, but the book says there are only 3.</p> <p>I considered the cases $| x - 3 | = 1$ or $3x^2 -10x + 3 = 0$.</p> <p>I got for $x\leq 0$: $~2 , 3 , \frac13$</p> <p>I got for $x &gt; 0$: $~4$ </p> <p>Am I wrong? Is $0^0 = 1$ or NOT?</p> <p>Considering the fact that: $ 2^2 = 2 \cdot 2\cdot 1 $ </p> <p>$2^1 = 2\cdot 1$ </p> <p>$2^0 = 1$ </p> <p>$0^0$ should be $1$, right?</p>
Jan Eerland
226,665
<p>First of all, notice that $0^0$ is an indeterminate form, so $x=3$ cannot be counted as a solution.</p> <p>$$|x-3|^{3x^2-10x+3}=1\Longleftrightarrow$$ $$\ln\left(|x-3|^{3x^2-10x+3}\right)=\ln(1)\Longleftrightarrow$$ $$\ln\left(|x-3|\right)\left(3x^2-10x+3\right)=0\Longleftrightarrow$$ $$\ln\left(|x-3|\right)\left(x-3\right)\left(3x-1\right)=0$$</p> <hr> <p>Split $\ln\left(|x-3|\right)\left(x-3\right)\left(3x-1\right)$ into separate factors, with additional assumptions:</p> <hr> <p>So we get:</p> <ul> <li>$$\ln\left(|x-3|\right)=0\Longleftrightarrow$$ $$|x-3|=1\Longleftrightarrow$$ $$x-3=\pm 1\Longleftrightarrow$$ $$x=\pm 1+3$$</li> <li>$$x-3=0\to\text{not a valid solution}$$</li> <li>$$3x-1=0\Longleftrightarrow$$ $$3x=1\Longleftrightarrow$$ $$x=\frac{1}{3}$$</li> </ul> <p>So in the end we get 3 solutions:</p> <p>$$x_1=\frac{1}{3}$$ $$x_2=2$$ $$x_3=4$$</p>
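A quick numerical check (my addition) that the three values satisfy the original equation, while $x = 3$ would produce the indeterminate $0^0$:

```python
lhs = lambda x: abs(x - 3) ** (3 * x * x - 10 * x + 3)

for x in (1 / 3, 2, 4):
    assert abs(lhs(x) - 1) < 1e-12     # all three solutions check out

# x = 3 makes both the base and the exponent zero (the indeterminate 0^0):
assert abs(3 - 3) == 0 and 3 * 3 * 3 - 10 * 3 + 3 == 0
```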
2,573,458
<p>Given $n$ prime numbers, $p_1, p_2, p_3,\ldots,p_n$, then $p_1p_2p_3\cdots p_n+1$ is not divisible by any of the primes $p_i, i=1,2,3,\ldots,n.$ I don't understand why. Can somebody give me a hint or an explanation? Thanks.</p>
openspace
243,510
<p>Let $N = p_{1} p_{2} \cdots p_{n} + 1$ and suppose that some $p_{i}$ divides $N$. Then $N \equiv 0 \pmod {p_{i}}$.</p> <p>But $N = p_{1} \cdots p_{n} + 1 \equiv 1 \pmod {p_{i}}$, a contradiction.</p>
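The argument can be watched in action with a few lines of Python (an illustration of mine, not from the answer): the product-plus-one always leaves remainder $1$ on division by each prime in the list.

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]     # any finite list of primes works
N = prod(primes) + 1              # 30031 for this list

remainders = [N % p for p in primes]   # every remainder is 1
```

Note that $N$ itself need not be prime ($30031 = 59 \cdot 509$); the point is only that none of the listed primes divides it.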
1,396,322
<p>For example I have eight kids,</p> <pre><code>A,B,C,D,E,F,G,H </code></pre> <p>If I ask them to go into groups of two, their choices are</p> <pre><code>A-&gt;B B-&gt;C C-&gt;B D-&gt;B E-&gt;A F-&gt;A G-&gt;H H-&gt;C </code></pre> <p>How to make sure they get their choices as much as possible?</p> <p>Or similarly, to get into groups of four:</p> <pre><code>A-&gt;B,C,D B-&gt;A,C,G C-&gt;E,A,D D-&gt;B,E,G E-&gt;F,G,H F-&gt;A,B,C G-&gt;E,F,B H-&gt;F,E,C </code></pre> <p>I am sure there are many ways to do this. But I just don't know where to start looking for algorithms. What is the mathematical term for such problems?</p>
Community
-1
<p>Using integration by parts, we have</p> <p>$$\sin(x)=\int_0^x\cos(t)dt=x-\int_0^x(x-t)\sin(t)dt$$ and by integrating by parts twice more we get $$\sin x=x-\frac{x^3}{6}+\int_0^x\frac{(x-t)^3}{6}\sin(t)dt$$ Finally, since $$\left|\int_0^x\frac{(x-t)^3}{6}\sin(t)dt\right|\le \int_0^x\frac{(x-t)^3}{6}dt=\frac{x^4}{24}$$ we easily get the desired limit $\frac16$.</p> <p><strong>Remark :</strong> This method is in fact the proof of the Taylor series of the function $\sin$ with integral remainder.</p>
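A quick numeric illustration of the bound (my own sketch in Python, not part of the answer): the cubic Taylor polynomial stays within $x^4/24$ of $\sin x$, and the ratio $(x-\sin x)/x^3$ is already close to $1/6$ near $0$.

```python
import math

xs = [0.5, 0.1, 0.01]

# |sin x - (x - x^3/6)| <= x^4 / 24 at each sample point
bounds_ok = all(abs(math.sin(x) - (x - x**3 / 6)) <= x**4 / 24 for x in xs)

# the ratio (x - sin x)/x^3 approaches 1/6
ratio = (0.01 - math.sin(0.01)) / 0.01**3
```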
1,396,322
<p>For example I have eight kids,</p> <pre><code>A,B,C,D,E,F,G,H </code></pre> <p>If I ask them to go into groups of two, their choices are</p> <pre><code>A-&gt;B B-&gt;C C-&gt;B D-&gt;B E-&gt;A F-&gt;A G-&gt;H H-&gt;C </code></pre> <p>How to make sure they get their choices as much as possible?</p> <p>Or similarly, to get into groups of four:</p> <pre><code>A-&gt;B,C,D B-&gt;A,C,G C-&gt;E,A,D D-&gt;B,E,G E-&gt;F,G,H F-&gt;A,B,C G-&gt;E,F,B H-&gt;F,E,C </code></pre> <p>I am sure there are many ways to do this. But I just don't know where to start looking for algorithms. What is the mathematical term for such problems?</p>
Marconius
232,988
<p>I offer an alternative route based on a trigonometric identity.</p> <p><em>You will need to prove that the limit $L$ exists.</em></p> <p>Put $x=3u$ in $$L = \lim_{x\to0}{\frac{x-\sin x}{x^3}}$$</p> <p>to get</p> <p>$$L = \lim_{u\to 0}{\frac{3u-\sin(3u)}{(3u)^3}}$$</p> <p>and since $$\sin(3u)=-4\sin^3 u + 3\sin u$$</p> <p>we get $$\begin{align} L = \lim_{u\to 0}{\frac{3u-(-4\sin^3 u + 3 \sin u)}{27u^3}} &amp;= \lim_{u\to 0}{\left\{\frac{4}{27}\cdot\left(\frac{\sin u}{u}\right)^3 + \frac{u-\sin u}{9u^3}\right\}} \\\\ &amp;= \frac{4}{27}\left(\lim_{u\to 0}{\frac{\sin u}{u}} \right)^3 + \frac{1}{9}\left(\lim_{u\to 0}{\frac{u-\sin u}{u^3}}\right) \\\\ &amp;= \frac{4}{27}\cdot1 + \frac{1}{9}L\qquad(\textit{if given }\lim_{u\to 0}{\frac{\sin u}{u}} = 1) \end{align}$$</p> <p>So $$\frac{8}{9}L=\frac{4}{27} \implies L = \frac{9}{8}\cdot\frac{4}{27} = \frac{1}{6}$$</p>
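Two small checks in Python (my addition, not from the answer): the triple-angle identity used above, and the fixed-point equation $L = \frac{4}{27} + \frac{L}{9}$ solved exactly.

```python
import math
from fractions import Fraction

# spot-check sin(3u) = 3 sin u - 4 sin^3 u at a few points
us = [0.1, 0.7, 1.3, 2.9]
identity_ok = all(
    abs(math.sin(3 * u) - (3 * math.sin(u) - 4 * math.sin(u) ** 3)) < 1e-9
    for u in us
)

# L = 4/27 + L/9  =>  L = (4/27) / (1 - 1/9) = 1/6
L = Fraction(4, 27) / (1 - Fraction(1, 9))
```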
240,741
<p>I'm trying to include the legends inside the frame of the plot like this</p> <p><a href="https://i.stack.imgur.com/7K5aa.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/7K5aa.jpg" alt="hehe" /></a></p> <p>Here is my Attempt:</p> <pre><code>ListPlot[{{2, 5, 2, 8, 6, 8, 3}, {1, 2, 5, 2, 3, 4, 3}}, PlotMarkers -&gt; {&quot;\[SixPointedStar]&quot;, 15}, Joined -&gt; True, PlotStyle -&gt; {Orange, Green}, PlotLegends -&gt; Placed[&quot;line1&quot;, &quot;line2&quot;, LegendFunction -&gt; (Framed[#, FrameMargins -&gt; 0] &amp;)], Frame -&gt; True] </code></pre> <p>My references:</p> <ol> <li><a href="https://mathematica.stackexchange.com/questions/141737/specify-legend-position-in-a-plot">specify-legend-position-in-a-plot</a></li> <li><a href="https://mathematica.stackexchange.com/questions/173911/plotting-legends-matching-with-plots-inside-the-show-graph">plotting-legends-matching-with-plots-inside-the-show-graph</a></li> <li><a href="https://mathematica.stackexchange.com/questions/212046/placing-plot-legends-inside-a-plot">placing-plot-legends-inside-a-plot</a></li> <li><a href="https://www.wolfram.com/mathematica/new-in-9/legends/place-a-legend-inside-a-plot.html" rel="noreferrer">place-a-legend-inside-a-plot.</a></li> </ol>
Sumit
8,070
<p>You did not put any location for <code>Placed</code></p> <pre><code>ListPlot[{{2, 5, 2, 8, 6, 8, 3}, {1, 2, 5, 2, 3, 4, 3}}, PlotMarkers -&gt; {&quot;\[SixPointedStar]&quot;, 15}, Joined -&gt; True, PlotStyle -&gt; {Orange, Green}, PlotLegends -&gt; Placed[LineLegend[{&quot;line1&quot;, &quot;line2&quot;}, LegendFunction -&gt; (Framed[#, FrameMargins -&gt; 0] &amp;)], {0.1, 0.5}], Frame -&gt; True] </code></pre> <p><a href="https://i.stack.imgur.com/WCyiE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WCyiE.png" alt="enter image description here" /></a></p>
2,877,080
<p>Let A denote a commutative ring and let e denote an element of A such that $e^2 = e$. How to prove that $eA \times (1 - e)A \simeq A$? I thought that $\phi: A \mapsto eA \times (1 - e)A, \ \phi(a) = (ea, (1-e)a)$ is an isomorphism but I don't know how to prove that $\phi$ is a bijection.</p>
David C. Ullrich
248,223
<p>You need MVT to prove just about anything about derivatives. For example, $f'=0$ implies $f$ is constant. (Of course explaining what the need for MVT is depends on the audience. In a calculus class the students probably think it's obvious that $f'=0$ implies that $f$ is constant. In a context like that where they don't see the need to <em>prove</em> such things there really is no valid reason to prove MVT... <strong>Edit:</strong> I just saw the description of the context in that pdf. In a situation like that you might comment on how in math we do feel the need to prove things. Heh mention how the logical foundations of calculus were all fuzzy until we were saved by Weierstrass and Cauchy...)</p> <p>Or a dramatic generalization: MVT plus the definition of the Riemann integral make it trivial to show that if $f$ is differentiable on $[a,b]$ and $f'$ is Riemann integrable then $f(b)-f(a)=\int_a^b f'$. Given $a=t_0&lt;\dots&lt;t_n=b$, write $$f(b)-f(a)=\sum_{j=1}^n(f(t_j)-f(t_{j-1})).$$ If you apply MVT to each term $f(t_j)-f(t_{j-1})$ what you get is precisely a Riemann sum for $\int_a^b f'$.</p>
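The telescoping identity above can be illustrated numerically (a sketch of mine, not from the answer): the sum of increments is exactly $f(b)-f(a)$, while a midpoint Riemann sum of $f'$, standing in for the unknown MVT points $c_j$, approximates the same quantity.

```python
import math

f, fprime = math.sin, math.cos
a, b, n = 0.0, 1.0, 1000
ts = [a + (b - a) * j / n for j in range(n + 1)]

# telescoping sum of increments: equals f(b) - f(a) identically
telescoped = sum(f(ts[j]) - f(ts[j - 1]) for j in range(1, n + 1))

# midpoint Riemann sum of f' (the MVT point c_j lies somewhere in each
# subinterval; the midpoint is a convenient numerical stand-in)
riemann = sum(fprime((ts[j] + ts[j - 1]) / 2) * (ts[j] - ts[j - 1])
              for j in range(1, n + 1))
```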
169,531
<p>Let me preface this by saying that I have essentially no background in logic, and I apologize in advance if this question is unintelligent. Perhaps the correct answer to my question is "go look it up in a textbook"; the reasons I haven't done so are that I wouldn't know which textbook to look in and I wouldn't know what I'm looking at even if I did.</p> <p>Anyway, here's the setup. According to my understanding (i.e. Wikipedia), Godel's first incompleteness theorem says that no formal theory whose axioms form a recursively enumerable set and which contains the axioms for the natural numbers can be both complete and consistent. Let $T$ be such a theory, and assume $T$ is consistent. Then there is a "Godel statement" $G$ in $T$ which is true but cannot be proven in $T$. Form a new theory $T'$ obtained from $T$ by adjoining $G$ as an axiom. Though I don't know how to prove anything, it seems reasonably likely to me that $T'$ is still consistent, has recursively enumerable axioms, and contains the axioms for the natural numbers. Thus applying the incompleteness theorem again one deduces that there is a Godel statement $G'$ in $T'$.</p> <p>My question is: can we necessarily take $G'$ to be a statement in $T$? Posed differently, could there be a consistent formal theory with recursively enumerable axioms which contains arithmetic and which can prove every true <em>arithmetic</em> statement, even though it can't prove all of its own true statements? If this is theoretically possible, are there any known examples or candidates?</p> <p>Thanks in advance!</p>
André Nicolas
6,312
<p>Since the sentence $G$ that is added is a sentence of the language of $T$, the same diagonalization procedure that got you $G$ can be used to produce a sentence $G'$ of the kind you described. The language has not been extended, so $G'$ is definitely "a sentence of $T$." More properly put, it is a sentence of the language of $T$. (To say "sentence of $T$" could be taken to mean that it is a theorem of $T$, which is definitely not the case.)</p> <p>As a part answer to your second question, the Incompleteness Theorem applies to <em>any</em> "strong enough" first-order theory. So there is no recursively axiomatized first-order consistent theory that proves all the sentences that are true in the natural numbers. Standard ZFC set theory is recursively axiomatized, so (if consistent) it is not strong enough to prove all sentences that are true in the natural numbers.</p>
284,809
<p>$F$ is a field and $F[X^2, X^3]$ is a subring of $F[X]$, the polynomial ring. I need to show that nonzero prime ideals of $F[X^2, X^3]$ are maximal.</p> <p>A classmate suggested taking a nonzero prime ideal $\mathfrak{p}$ of $F[X^2, X^3]$ and embedding $F[X^2,X^3]/\mathfrak{p} \hookrightarrow F[X]/(\mathfrak{p})$ and ultimately showing that $R/\mathfrak{p}$ was a field, but the proof we discussed was quite convoluted and it's not even apparent to me that that is an injective map. I'm inclined to think there's a shorter, more direct approach.</p> <p>Does anyone see one? It's not imperative that I find a shorter or nicer proof, but seeing another approach can't hurt and I'm sure the grader would appreciate an easy-to-follow proof.</p> <p>Thank you kindly.</p>
user1551
1,551
<p>Let $Q^{1/2}$ be a Hermitian square root of $Q$. Then by completing square, we get \begin{align*} x^HQx-2~\Re{(x^Hb)}+1 &amp;=x^HQx-x^Hb-b^Hx+1\\ &amp;=\|Q^{1/2}x - Q^{-1/2}b\|^2 + (1 - b^HQ^{-1}b). \end{align*} Hence the minimum occurs at $x = Q^{-1}b$ and the minimum value is $1 - b^HQ^{-1}b$.</p>
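Here is a concrete real-valued instance worked in exact arithmetic (my own illustrative example; the matrix `Q` and vector `b` below are made up): the minimum value $1-b^HQ^{-1}b$ is attained at $x=Q^{-1}b$.

```python
from fractions import Fraction

# a hypothetical 2x2 positive-definite (diagonal, for simplicity) Q and vector b
Q = [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(3)]]
b = [Fraction(1), Fraction(1)]

xstar = [b[0] / Q[0][0], b[1] / Q[1][1]]        # Q^{-1} b for diagonal Q

def objective(x):
    # x^T Q x - 2 x^T b + 1  (real case, so Re(x^H b) = x^T b)
    quad = sum(x[i] * Q[i][j] * x[j] for i in range(2) for j in range(2))
    return quad - 2 * sum(x[i] * b[i] for i in range(2)) + 1

minval = 1 - sum(b[i] * xstar[i] for i in range(2))   # 1 - b^T Q^{-1} b
```

Perturbing `xstar` in any direction strictly increases the objective, consistent with the completed-square form above.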
2,134,928
<p>Let <span class="math-container">$ \ C[0,1] \ $</span> stands for the real vector space of continuous functions <span class="math-container">$ \ [0,1] \to [0,1] \ $</span> on the unit interval with the usual subspace topology from <span class="math-container">$\mathbb{R}$</span>. Let <span class="math-container">$$\lVert f \rVert_1 = \int_0^1 |f(x)| \ dx \qquad \text{ and } \qquad \lVert f \rVert_{\infty} = \max_{x \in [0,1]} |f(x)|$$</span> be the usual norms defined on that space. Let <span class="math-container">$ \ \Delta : C[0,1] \to C[0,1] \ $</span> be the diagonal function, ie, <span class="math-container">$ \ \Delta f=f \ $</span>, <span class="math-container">$\forall f \in C[0,1]$</span>. Then <span class="math-container">$$ \Delta = \big\{ (f,g) \in C[0,1] \times C[0,1] \ : \ g=f \ \big\} \ . $$</span> My questions are</p> <blockquote> <p><strong>(1)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta \ $</span> a closed set of <span class="math-container">$ \ C[0,1] \times C[0,1] \ $</span>, with respect to the product topology induced by these norms?</p> <p><strong>(2)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_1) \to (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> continuous?</p> <p><strong>(3)</strong> <span class="math-container">$ \ \ $</span> Does <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_1) \to (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> maps closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_1) \ $</span> onto closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span>?</p> <p><strong>(4)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_{\infty}) \to (C[0,1], \lVert \cdot \rVert_1) \ $</span> continuous?</p> <p><strong>(5)</strong> <span 
class="math-container">$ \ \ $</span> Does <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_{\infty}) \to (C[0,1], \lVert \cdot \rVert_1) \ $</span> map closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> onto closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_1) \ $</span>?</p> </blockquote> <p>Now a question about terminology: should I say that &quot;<span class="math-container">$\Delta \ $</span> is closed&quot;, that &quot;<span class="math-container">$\Delta \ $</span> is a closed map&quot;, or that &quot;<span class="math-container">$\Delta \ $</span> is a closed operator&quot;?</p> <p>Thanks in advance.</p>
Kanwaljit Singh
401,635
<p>Let $x$ be the number of correct answers; then $10-x$ is the number of incorrect answers.</p> <p>Then the marks for correct answers = $3 × x = 3x$</p> <p>And the marks deducted for incorrect answers = $1 × (10-x) = 10 - x$</p> <p>After deducting the negative marks she got 18 marks, so</p> <p>$3x - (10-x) = 18$</p> <p>$3x - 10 + x = 18$</p> <p>$4x = 28$</p> <p>$x = 7$</p> <p>So there were 7 correct answers and 3 incorrect answers.</p>
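The same answer can be reached by brute force over the eleven possibilities; a one-line Python check (my addition):

```python
# try every possible number of correct answers out of 10
solutions = [x for x in range(11) if 3 * x - (10 - x) == 18]
# the only solution is 7: seven correct, three incorrect
```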
1,849,577
<p>I recently asked for <a href="https://math.stackexchange.com/questions/1848739/a-topology-on-the-set-of-lines">natural topologies on the set of lines</a> in $\mathbb R^2$. Now I'm aiming for a similar question on the set $S_p$ of conic sections in $\mathbb R^2$ sharing the same focus $p$ (but not necessary having the same major axis). The situation is an idealization of the ecliptic plane and all stuff in the solar system. Are there natural topologies on this set?</p>
Community
-1
<p>Consider as "conics" the zero set of an equation $ax^2 + by^2 + cxy + dx + ey + f = 0$. </p> <p>Since $(a,b,c,d,e,f)$ and $(\lambda a, \lambda b, \lambda c, \lambda d, \lambda e, \lambda f)$ define the same equation for $\lambda \neq 0$, a conic naturally defines a point in $\mathbb P^5$, the real projective space of dimension $5$. </p> <p>Not every point in $\mathbb P ^5$ gives you a conic, so you have to remove the points $(a,b,c,d,e,f)$ where $a=b=c=0$ (the equation is then not genuinely quadratic), as well as the degenerate conics (union of two lines, or even a single line), which correspond to the vanishing of the determinant of the associated $3\times 3$ symmetric matrix. Since you are in real coordinates I guess you have to remove some other points (for example the "conic" $x^2 + y^2 + 1 = 0$ has no real point). </p> <p>Finally, $\mathbb P ^5$ has a natural topology (the quotient topology), so for a given point $p$, the conics with focus $p$ form a variety in the space of conics and therefore have a natural topology. </p>
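As a supplementary illustration (a standard fact, not spelled out in the answer): with the coefficient ordering $ax^2+by^2+cxy+dx+ey+f$, degeneracy of a conic can be tested by the determinant of its associated $3\times3$ symmetric matrix. A Python sketch of mine:

```python
from fractions import Fraction

def conic_matrix(a, b, c, d, e, f):
    # symmetric matrix of a x^2 + b y^2 + c xy + d x + e y + f
    h = Fraction(1, 2)
    return [[Fraction(a), h * c, h * d],
            [h * c, Fraction(b), h * e],
            [h * d, h * e, Fraction(f)]]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

circle = det3(conic_matrix(1, 1, 0, 0, 0, -1))   # x^2 + y^2 - 1 = 0: nondegenerate
pair = det3(conic_matrix(1, -1, 0, 0, 0, 0))     # x^2 - y^2 = 0, two lines: degenerate
```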
211,803
<p>I ended up with a differential equation that looks like this: $$\frac{d^2y}{dx^2} + \frac 1 x \frac{dy}{dx} - \frac{ay}{x^2} + \left(b -\frac c x - e x \right )y = 0.$$ I tried with Mathematica. But could not get the sensible answer. May you help me out how to solve it or give me some references that I can go over please? Thanks.</p>
Pedro
23,350
<p>Let $x=e^u$. I changed $e$ to $f$ in the equation to avoid confusions. Then, multiplying by $x^2$ gives $${x^2}\frac{{{d^2}y}}{{d{x^2}}} + x\frac{{dy}}{{dx}} - ay + \left( {b{x^2} - cx - f{x^3}} \right)y = 0$$</p> <p>Now, if $x=e^u$, then $$\eqalign{ &amp; x\frac{{dy}}{{dx}} = \frac{{dy}}{{du}} \cr &amp; {x^2}\frac{{{d^2}y}}{{d{x^2}}} = \frac{{{d^2}y}}{{d{u^2}}} - \frac{{dy}}{{du}} \cr} $$ so the equation is</p> <p>$$\frac{{{d^2}y}}{{d{u^2}}} - \frac{{dy}}{{du}} + \frac{{dy}}{{du}} - ay + \left( {b{e^{2u}} - c{e^u} - f{e^{3u}}} \right)y = 0$$</p> <p>or $$\frac{{{d^2}y}}{{d{u^2}}} - ay + \left( {b{e^{2u}} - c{e^u} - f{e^{3u}}} \right)y = 0$$</p> <p>$$\frac{{{d^2}y}}{{d{u^2}}} + \left( {b{e^{2u}} - c{e^u} - f{e^{3u}} - a} \right)y = 0$$</p> <p>$$\frac{{{d^2}y}}{{d{u^2}}} + F\left( u \right)y = 0$$This is a second-order ODE. As Robert Israel points out, and shows in his answer, the solutions seem to be complicated. My inclination would be to aim for a series solution, finding the coefficients of $y$ recursively. </p>
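The two substitution identities can be spot-checked numerically (a sketch of mine, not from the answer, using $y=\sin$ as an arbitrary smooth test function):

```python
import math

u0 = 0.7
x0 = math.exp(u0)       # x = e^u
h = 1e-4

def Y(u):               # y as a function of u, with y(x) = sin(x)
    return math.sin(math.exp(u))

# central differences in u
dY = (Y(u0 + h) - Y(u0 - h)) / (2 * h)
d2Y = (Y(u0 + h) - 2 * Y(u0) + Y(u0 - h)) / h ** 2

# analytic x-derivatives of y = sin
dy_dx, d2y_dx2 = math.cos(x0), -math.sin(x0)

err1 = abs(x0 * dy_dx - dY)                     # x y'    = dy/du
err2 = abs(x0 ** 2 * d2y_dx2 - (d2Y - dY))      # x^2 y'' = d^2y/du^2 - dy/du
```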
63,052
<p>Suppose I have a square matrix $M$, which you can think of as the weighted adjacency matrix of a graph $G$. I want to order the vertices of $G$ in such a way that the entries of the matrix $M$ are clustered. By this I mean that the weights that are close in value should appear close in $M$.</p> <p>I know Mathematica has some clustering algorithms implemented. How can I do this with Mathematica?</p>
Karl Va4
72,907
<p>"By this I mean that the weights that are close in value should appear close in M." Does this make sense if M represents a weighted adjacency matrix? The clustered matrix would not be one anymore. You can just cluster the flattened list. Is the point of clustering a weighted adjacency matrix not rather that strongly connected vertices are clustered together? If yes, possible way: Transform into an actual graph object, cluster, and use the reordered list of vertex indices to resort the matrix.</p> <pre><code>MGraph=WeightedAdjacencyGraph[M]; MClusters=FindGraphCommunities[MGraph];(*Find strongly connected clusters*) MResort=Flatten[MClusters]; MClustered=M[[;;,MResort]](*Reorder columns*) MClustered=MClustered[[MResort,;;]](*Reorder rows*) </code></pre> <p>Perhaps there is also a way without transformation into a graph?</p>
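For completeness, the final reordering step is language-agnostic; here is the same permutation as a short Python sketch of mine (the `clusters` list below is a made-up stand-in for the output of community detection):

```python
# hypothetical symmetric weight matrix: vertices {0, 2} and {1, 3} are strongly linked
M = [[0, 1, 9, 1],
     [1, 0, 1, 9],
     [9, 1, 0, 1],
     [1, 9, 1, 0]]

clusters = [[0, 2], [1, 3]]              # stand-in for the detected communities
order = [v for cluster in clusters for v in cluster]

# apply the same permutation to rows and columns
M_clustered = [[M[i][j] for j in order] for i in order]
# heavy weights now form diagonal blocks
```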
3,733,757
<p>I'm proving that given a nonempty set <span class="math-container">$I$</span>, and given a filter <span class="math-container">$F$</span>, there exists an ultrafilter <span class="math-container">$D$</span> on <span class="math-container">$I$</span> such that <span class="math-container">$F \subseteq D$</span>. I used Zorn's lemma to prove that for a given filter <span class="math-container">$F$</span>, there exists a maximal filter <span class="math-container">$D'$</span>, where <span class="math-container">$F \subseteq D'$</span>. I need to prove that this maximal filter <span class="math-container">$D'$</span> is a ultrafilter, defined as a filter <span class="math-container">$B$</span> that satisfies the following condition: <span class="math-container">$\forall A \subseteq I , A \in B \lor (I -A) \in B$</span>. I tried to use the proof by contradiction, but failed. How do I prove it?</p>
Anonymous
559,302
<p>I like FiMePr's answer, but here is an alternative route which avoids invoking the finite meet property.</p> <p>Either <span class="math-container">$A\in D'$</span> or <span class="math-container">$A\notin D'$</span>. If <span class="math-container">$A\in D'$</span> then we are done so suppose <span class="math-container">$A\notin D'$</span>. Let <span class="math-container">$$B=\{X\subseteq I\mid \exists Y\in D',\ A\cap Y\subseteq X\}$$</span> and show that <span class="math-container">$B$</span> is a filter which properly contains <span class="math-container">$D'$</span>. By the maximality of <span class="math-container">$D'$</span>, this implies <span class="math-container">$B$</span> contains all subsets of <span class="math-container">$I$</span>. So there exist <span class="math-container">$Y\in D'$</span> such that <span class="math-container">$A\cap Y\subseteq\emptyset$</span> which implies <span class="math-container">$Y\subseteq I\setminus A$</span> and therefore <span class="math-container">$I\setminus A\in D'$</span>. Thus, either <span class="math-container">$A\in D'$</span> or <span class="math-container">$I\setminus A\in D'$</span> so that <span class="math-container">$D'$</span> is an ultrafilter. (I ended up opting for the law of excluded middle in place of proof by contradiction)</p>
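The maximal-implies-ultra phenomenon can even be verified exhaustively on a small finite set (a Python sketch of mine, not part of the proof above): every maximal proper filter on a 3-element set contains $A$ or its complement for each $A$.

```python
from itertools import combinations

I = frozenset({0, 1, 2})
subsets = [frozenset(s) for r in range(4) for s in combinations(sorted(I), r)]
nonempty = [s for s in subsets if s]

def is_filter(F):
    # proper filter: contains I, closed under intersection and superset,
    # and never contains the empty set
    if I not in F:
        return False
    for A in F:
        for B in F:
            if A & B not in F:      # fails in particular when A & B is empty
                return False
        for B in nonempty:
            if A <= B and B not in F:
                return False
    return True

filters = []
for r in range(len(nonempty) + 1):
    for fam in combinations(nonempty, r):
        F = frozenset(fam)
        if is_filter(F):
            filters.append(F)

maximal = [F for F in filters if not any(F < G for G in filters)]

def is_ultrafilter(F):
    return all((A in F) or ((I - A) in F) for A in subsets)
```

On a finite set the maximal filters are exactly the three principal ultrafilters $\{A : i \in A\}$, which the enumeration confirms.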
3,595,451
<p><strong>Question:</strong></p> <p>Let <span class="math-container">$P_{3}(\mathbb{R})$</span> have the standard inner product and <span class="math-container">$U$</span> be the subset spanned by the two vectors (which are polynomials) <span class="math-container">$u_{1}=1+2x-3x^2$</span> and <span class="math-container">$u_{2}=x-x^2+2x^3$</span>. Find the basis for the orthogonal complement <span class="math-container">$U^{⊥}$</span>.</p> <p>I honestly have no idea how to approach this question. I know what orthogonal complement and a basis are but I don't understand where to begin or even solve this question. Any help would be much appreciated. Thanks in advance. </p>
angryavian
43,949
<p>It is easier to find the CDF first (and then it is easy to find the PDF from there). Try to compute <span class="math-container">$P(R \le r)$</span> for any real number <span class="math-container">$r$</span>. (This will be a ratio of areas.)</p>
3,920,469
<p>The topic of <a href="https://en.wikipedia.org/wiki/Perfect_number#Odd_perfect_numbers" rel="nofollow noreferrer">odd perfect numbers</a> likely needs no introduction.</p> <p>The question is as is in the title:</p> <blockquote> <p>If <span class="math-container">$p^k m^2$</span> is an odd perfect number with special prime <span class="math-container">$p$</span>, then what is the optimal constant <span class="math-container">$C$</span> such that <span class="math-container">$$\frac{\sigma(m^2)}{p^k} &lt; \frac{m^2 - p^k}{C}?$$</span></p> </blockquote> <p>(Note that the special prime <span class="math-container">$p$</span> satisfies <span class="math-container">$p \equiv k \equiv 1 \pmod 4$</span> and <span class="math-container">$\gcd(p,m)=1$</span>.)</p> <p><strong>MY OWN PROOF FOR <span class="math-container">$C = 1$</span></strong></p> <p>It is trivial to show that <span class="math-container">$m^2 - p^k \equiv 0 \pmod 4$</span>. This implies that <span class="math-container">$$\frac{\sigma(m^2)}{p^k} \neq m^2 - p^k$$</span> since <span class="math-container">$\sigma(m^2)/p^k$</span> is always odd.</p> <p>Now, suppose to the contrary that <span class="math-container">$$\sigma(m^2) &gt; p^k m^2 - p^{2k}.$$</span> <span class="math-container">$$p^{2k} + \sigma(m^2) &gt; p^k m^2$$</span> <span class="math-container">$$2p^{2k} + 2\sigma(m^2) &gt; 2p^k m^2 = \sigma(p^k)\sigma(m^2)$$</span> <span class="math-container">$$2p^{2k} &gt; \sigma(m^2)\bigg(\sigma(p^k) - 2\bigg)$$</span> <span class="math-container">$$\sigma(m^2) &lt; \frac{2p^{2k}}{\sigma(p^k) - 2} \leq \frac{2p^{2k}}{p^k - 1}$$</span> where we have used the lower bound <span class="math-container">$\sigma(p^k) \geq p^k + 1$</span>. 
But from the paper <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html" rel="nofollow noreferrer">Dris (2012)</a>, we have the lower bound <span class="math-container">$$3p^k \leq \sigma(m^2)$$</span> while we also have the upper bound <span class="math-container">$$\frac{2p^{2k}}{p^k - 1} = 2(p^k + 1) + \frac{2}{p^k - 1}.$$</span></p> <p>This implies that <span class="math-container">$$3p^k &lt; 2(p^k + 1) + \frac{2}{p^k - 1}$$</span> <span class="math-container">$$p^k - 2 &lt; \frac{2}{p^k - 1}$$</span> <span class="math-container">$$(p^k - 1)(p^k - 2) &lt; 2.$$</span> This last inequality is a contradiction, as <span class="math-container">$p^k \geq 5$</span>.</p> <p>We therefore conclude that the inequality <span class="math-container">$$\frac{\sigma(m^2)}{p^k} &lt; \frac{m^2 - p^k}{C}$$</span> holds when <span class="math-container">$C=1$</span>.</p> <blockquote> <p>Does the inequality still hold when the constant <span class="math-container">$C &gt; 1$</span>? If so, then what is the optimal value for <span class="math-container">$C$</span>?</p> </blockquote>
Jose Arnaldo Bebita Dris
28,816
<p>Let <span class="math-container">$p^k m^2$</span> be an odd perfect number with special prime <span class="math-container">$p$</span>.</p> <p>Suppose to the contrary that <span class="math-container">$$\frac{\sigma(m^2)}{p^k} &gt; \frac{m^2 - p^k}{C}$$</span> and that <span class="math-container">$C=2$</span>.</p> <p>(Note that <span class="math-container">$$\frac{\sigma(m^2)}{p^k} \neq \frac{m^2 - p^k}{2}$$</span> since the LHS is odd while the RHS is even, as <span class="math-container">$m^2 - p^k \equiv 0 \pmod 4$</span>.)</p> <p>Following in the footsteps of the proof in the OP, we have <span class="math-container">$$2\sigma(m^2) &gt; p^k m^2 - p^{2k}$$</span> <span class="math-container">$$4\sigma(m^2) + 2p^{2k} &gt; 2p^k m^2 = \sigma(p^k)\sigma(m^2)$$</span> <span class="math-container">$$2p^{2k} &gt; \sigma(m^2)\bigg(\sigma(p^k) - 4\bigg)$$</span> (Note that, since <span class="math-container">$p$</span> is the special prime satisfying <span class="math-container">$p \equiv k \equiv 1 \pmod 4$</span>, then <span class="math-container">$p^k \geq 5$</span>, so that <span class="math-container">$\sigma(p^k) \geq p^k + 1 \geq 6$</span>. This shows that the factor <span class="math-container">$\bigg(\sigma(p^k) - 4\bigg)$</span> is always positive.) This implies that <span class="math-container">$$\sigma(m^2) &lt; \frac{2p^{2k}}{\sigma(p^k) - 4} \leq \frac{2p^{2k}}{p^k - 3}$$</span> where we have used the lower bound <span class="math-container">$\sigma(p^k) \geq p^k + 1$</span>. But the upper bound can be rewritten as <span class="math-container">$$\frac{2p^{2k}}{p^k - 3} = \frac{2p^{2k} - 18}{p^k - 3} + \frac{18}{p^k - 3} = 2(p^k + 3) + \frac{18}{p^k - 3}.$$</span> However, as before, we have the lower bound <span class="math-container">$3p^k \leq \sigma(m^2)$</span>. 
This implies that <span class="math-container">$$3p^k &lt; 2(p^k + 3) + \frac{18}{p^k - 3}$$</span> which means that <span class="math-container">$$(p^k - 6)(p^k - 3) &lt; 18,$$</span> an inequality that is satisfied when <span class="math-container">$p=5$</span> and <span class="math-container">$k=1$</span>.</p> <p>We infer that we need to use a larger lower bound for <span class="math-container">$\sigma(m^2)/p^k$</span>. We use the one by <a href="https://arxiv.org/abs/1103.1437" rel="nofollow noreferrer">Dris and Luca (2016)</a>: <span class="math-container">$$7p^k \leq \sigma(m^2) &lt; 2(p^k + 3) + \frac{18}{p^k - 3}$$</span> <span class="math-container">$$(5p^k - 6)(p^k - 3) &lt; 18.$$</span> This is a contradiction as <span class="math-container">$p^k \geq 5$</span> implies that <span class="math-container">$$(5p^k - 6)(p^k - 3) \geq 38.$$</span></p> <p>We therefore have the following proposition:</p> <blockquote> <p>If <span class="math-container">$p^k m^2$</span> is an odd perfect number with special prime <span class="math-container">$p$</span>, then <span class="math-container">$$\frac{\sigma(m^2)}{p^k} &lt; \frac{m^2 - p^k}{2}.$$</span></p> </blockquote>
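The purely algebraic steps in the proof above can be double-checked mechanically in exact rational arithmetic (my own sketch; `q` stands for $p^k$):

```python
from fractions import Fraction

# the rewrite 2q^2/(q-3) = 2(q+3) + 18/(q-3), valid whenever q != 3
for q in range(4, 200):
    assert Fraction(2 * q * q, q - 3) == 2 * (q + 3) + Fraction(18, q - 3)

# the final contradiction: (5q - 6)(q - 3) >= 38 > 18 for every q >= 5
smallest = min((5 * q - 6) * (q - 3) for q in range(5, 200))
```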
3,920,469
<p>The topic of <a href="https://en.wikipedia.org/wiki/Perfect_number#Odd_perfect_numbers" rel="nofollow noreferrer">odd perfect numbers</a> likely needs no introduction.</p> <p>The question is as is in the title:</p> <blockquote> <p>If <span class="math-container">$p^k m^2$</span> is an odd perfect number with special prime <span class="math-container">$p$</span>, then what is the optimal constant <span class="math-container">$C$</span> such that <span class="math-container">$$\frac{\sigma(m^2)}{p^k} &lt; \frac{m^2 - p^k}{C}?$$</span></p> </blockquote> <p>(Note that the special prime <span class="math-container">$p$</span> satisfies <span class="math-container">$p \equiv k \equiv 1 \pmod 4$</span> and <span class="math-container">$\gcd(p,m)=1$</span>.)</p> <p><strong>MY OWN PROOF FOR <span class="math-container">$C = 1$</span></strong></p> <p>It is trivial to show that <span class="math-container">$m^2 - p^k \equiv 0 \pmod 4$</span>. This implies that <span class="math-container">$$\frac{\sigma(m^2)}{p^k} \neq m^2 - p^k$$</span> since <span class="math-container">$\sigma(m^2)/p^k$</span> is always odd.</p> <p>Now, suppose to the contrary that <span class="math-container">$$\sigma(m^2) &gt; p^k m^2 - p^{2k}.$$</span> <span class="math-container">$$p^{2k} + \sigma(m^2) &gt; p^k m^2$$</span> <span class="math-container">$$2p^{2k} + 2\sigma(m^2) &gt; 2p^k m^2 = \sigma(p^k)\sigma(m^2)$$</span> <span class="math-container">$$2p^{2k} &gt; \sigma(m^2)\bigg(\sigma(p^k) - 2\bigg)$$</span> <span class="math-container">$$\sigma(m^2) &lt; \frac{2p^{2k}}{\sigma(p^k) - 2} \leq \frac{2p^{2k}}{p^k - 1}$$</span> where we have used the lower bound <span class="math-container">$\sigma(p^k) \geq p^k + 1$</span>. 
But from the paper <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html" rel="nofollow noreferrer">Dris (2012)</a>, we have the lower bound <span class="math-container">$$3p^k \leq \sigma(m^2)$$</span> while we also have the upper bound <span class="math-container">$$\frac{2p^{2k}}{p^k - 1} = 2(p^k + 1) + \frac{2}{p^k - 1}.$$</span></p> <p>This implies that <span class="math-container">$$3p^k &lt; 2(p^k + 1) + \frac{2}{p^k - 1}$$</span> <span class="math-container">$$p^k - 2 &lt; \frac{2}{p^k - 1}$$</span> <span class="math-container">$$(p^k - 1)(p^k - 2) &lt; 2.$$</span> This last inequality is a contradiction, as <span class="math-container">$p^k \geq 5$</span>.</p> <p>We therefore conclude that the inequality <span class="math-container">$$\frac{\sigma(m^2)}{p^k} &lt; \frac{m^2 - p^k}{C}$$</span> holds when <span class="math-container">$C=1$</span>.</p> <blockquote> <p>Does the inequality still hold when the constant <span class="math-container">$C &gt; 1$</span>? If so, then what is the optimal value for <span class="math-container">$C$</span>?</p> </blockquote>
Jose Arnaldo Bebita Dris
28,816
<p>(<em>Last updated on March 2, 2021 - 5:04 PM Manila time</em>)</p> <p>Let <span class="math-container">$p^k m^2$</span> be an odd perfect number with special prime <span class="math-container">$p$</span>.</p> <p>Suppose to the contrary that <span class="math-container">$$\frac{\sigma(m^2)}{p^k} &gt; \frac{m^2 - p^k}{C}$$</span> and that <span class="math-container">$C=3$</span>.</p> <p>Following in the footsteps of this <a href="https://math.stackexchange.com/a/3931083/28816">answer</a>, we have <span class="math-container">$$3\sigma(m^2) &gt; p^k m^2 - p^{2k}$$</span> <span class="math-container">$$6\sigma(m^2) + 2p^{2k} &gt; 2p^k m^2 = \sigma(p^k)\sigma(m^2)$$</span> <span class="math-container">$$2p^{2k} &gt; \sigma(m^2)\bigg(\sigma(p^k) - 6\bigg)$$</span> Assuming <span class="math-container">$\sigma(p^k) \neq 6$</span> (that is, assuming it is <strong>not the case</strong> that <span class="math-container">$p=5$</span> and <span class="math-container">$k=1$</span> holds), then <span class="math-container">$$\sigma(m^2) &lt; \frac{2p^{2k}}{\sigma(p^k) - 6} \leq \frac{2p^{2k}}{p^k - 5} = \frac{2p^{2k} - 50}{p^k - 5} + \frac{50}{p^k - 5} = 2(p^k + 5) + \frac{50}{p^k - 5}.$$</span></p> <p>But according to <a href="https://www.emis.de/journals/INTEGERS/papers/n39/n39.pdf" rel="nofollow noreferrer">Broughan et al. 
(2013)</a>, we have the lower bound <span class="math-container">$315p^k \leq \sigma(m^2)$</span>.</p> <p>This implies that <span class="math-container">$$315p^k &lt; 2(p^k + 5) + \frac{50}{p^k - 5}$$</span> <span class="math-container">$$(313p^k - 10)(p^k - 5) &lt; 50$$</span> which is clearly a contradiction if <span class="math-container">$p \neq 5$</span> or <span class="math-container">$k \neq 1$</span>.</p> <p>We therefore have the following proposition:</p> <blockquote> <p>If <span class="math-container">$p^k m^2$</span> is an odd perfect number with special prime <span class="math-container">$p$</span> satisfying <span class="math-container">$\sigma(p^k) \neq 6$</span>, then <span class="math-container">$$\frac{\sigma(m^2)}{p^k} \leq \frac{m^2 - p^k}{3}.$$</span></p> </blockquote> <hr /> <p><strong>SECOND ATTEMPT</strong></p> <p>Using <a href="https://math.stackexchange.com/u/78967">mathlove</a>'s approach in his/her answer below, together with <a href="https://www.emis.de/journals/INTEGERS/papers/n39/n39.pdf" rel="nofollow noreferrer">Broughan et al. (2013)</a>'s result, we have the lower bound <span class="math-container">$\frac{\sigma(m^2)}{p^k} \geq 315$</span>. 
As in mathlove's answer, let the quantity <span class="math-container">$F$</span> be given by <span class="math-container">$$F = \frac{\sigma(p^k)}{2} - \frac{p^{2k}}{\sigma(m^2)} \geq \frac{\sigma(p^k)}{2} - \frac{p^k}{315} = \frac{p^{k+1} - 1}{2(p-1)} - \frac{p^k}{315} := f_1(k).$$</span> Since <span class="math-container">$f_1(k)$</span> is increasing, we get <span class="math-container">$$F \geq f_1(k) \geq f_1(1) = \frac{p+1}{2} - \frac{p}{315} \geq \frac{188}{63} \approx 2.98412698412698$$</span> whence we have the following proposition:</p> <blockquote> <p>If <span class="math-container">$p^k m^2$</span> is an odd perfect number with special prime <span class="math-container">$p$</span>, then <span class="math-container">$$\frac{\sigma(m^2)}{p^k} \leq \frac{m^2 - p^k}{\frac{188}{63}}.$$</span></p> </blockquote> <hr /> <p><strong>THIRD ATTEMPT</strong></p> <p>Using mathlove's approach in his/her answer below, together with mathlove's <a href="https://math.stackexchange.com/a/4028814/28816">answer to a closely related question</a>, we have the lower bound <span class="math-container">$\frac{\sigma(m^2)}{p^k} \geq 3^3 \times 5^3 = 3375$</span>. As in mathlove's answer, let the quantity <span class="math-container">$F$</span> be given by <span class="math-container">$$F = \frac{\sigma(p^k)}{2} - \frac{p^{2k}}{\sigma(m^2)} \geq \frac{\sigma(p^k)}{2} - \frac{p^k}{3375} = \frac{p^{k+1} - 1}{2(p-1)} - \frac{p^k}{3375} := f_2(k).$$</span> Since <span class="math-container">$f_2(k)$</span> is increasing, we get <span class="math-container">$$F \geq f_2(k) \geq f_2(1) = \frac{p+1}{2} - \frac{p}{3375} \geq \frac{2024}{675} = 2.99\overline{851}$$</span> whence we have the following proposition:</p> <blockquote> <p>If <span class="math-container">$p^k m^2$</span> is an odd perfect number with special prime <span class="math-container">$p$</span>, then <span class="math-container">$$\frac{\sigma(m^2)}{p^k} \leq \frac{m^2 - p^k}{\frac{2024}{675}}.$$</span></p> </blockquote>
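As with the $C=2$ case, the algebraic rewrite and the two numeric bounds above can be checked exactly (my own sketch; `q` stands for $p^k$):

```python
from fractions import Fraction

# the rewrite 2q^2/(q-5) = 2(q+5) + 50/(q-5), valid whenever q != 5
for q in range(6, 200):
    assert Fraction(2 * q * q, q - 5) == 2 * (q + 5) + Fraction(50, q - 5)

# f_1(1) and f_2(1) evaluated at the smallest admissible p = 5
f1 = Fraction(5 + 1, 2) - Fraction(5, 315)     # = 188/63
f2 = Fraction(5 + 1, 2) - Fraction(5, 3375)    # = 2024/675
```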
15,063
<p>Let $f:R \to S$ be an étale morphism of rings. It follows with some work that $f$ is flat. </p> <p>However, faithful flatness is another story. It's not hard to show that faithful + flat is weaker than being faithfully flat. An equivalent condition to being faithfully flat is being surjective on spectra. </p> <p>The question: Is there any further condition we can require on an étale morphism that implies faithful flatness?</p> <p>"Faithfully flat implies faithfully flat" or "surjective on spectra is equivalent to faithfully flat" do not count. The answer should in some way use the fact that the morphism is étale (or at least flat). </p> <p>As you can see by the tag, all rings are commutative, unital, etc.</p> <p>Edit: <a href="http://mathworld.wolfram.com/FaithfullyFlatModule.html" rel="nofollow">Why</a> faithful + flat is weaker than faithfully flat.</p> <p>Edit 2: I resent the voting down of this question without accompanying comments as well as the voting up of the glib and unhelpful answer below. It's clear that some of you are in the habit of voting on posts based on the poster rather than the content, and I think that is shameful. There is nothing I can do because none of you has the basic decency to at least leave a comment. I am completely at your mercy. You've won. I hope it's made you very happy.</p> <p>Edit 3: To answer Emerton's comment, I asked here after:</p> <p>a.) Reading this <a href="http://sbseminar.wordpress.com/2009/08/06/algebraic-geometry-without-prime-ideals/#comment-6354" rel="nofollow">post</a> by Jim Borger </p> <p>b.) Asking my commutative algebra professor in an e-mail</p> <p>Which led me to believe (perhaps due to a flawed reading of said sources) that this was a harder question than it turned out to be.</p>
Dustin Clausen
3,931
<p>so yeah, look, i was trying to be funny &amp; also trying to highlight the absurdly haughty nature of the caveats in the question. to be serious i would say that if F is etale then it is faithfully flat iff it is surjective on separably-closed field valued points, but also remark that the same is true with "etale" replaced by "smooth", so i guess i'm not using the full strength of the etale condition.</p>
1,216,619
<p>Why are the rings $\mathbb{R}$ and $\mathbb{R}[x]$ not isomorphic to each other?</p> <p>I think it might have to do with multiplicative inverses, but I'm not sure.</p>
Alexandre Halm
177,651
<p>Well, $\Bbb R$ is a field while $\Bbb R[X]$ is not.</p> <p>Question: if a ring is ring-isomorphic to a field, is it necessarily a field? </p>
2,978,988
<p>I'm stuck at a question. </p> <p>The question states that <span class="math-container">$K$</span> is a field like <span class="math-container">$\mathbb Q, \mathbb R, \mathbb C$</span> or <span class="math-container">$\mathbb Z/p\mathbb Z$</span> with <span class="math-container">$p$</span> a prime, and <span class="math-container">$R$</span> denotes the ring <span class="math-container">$K[X]$</span>. A subset <span class="math-container">$I$</span> of <span class="math-container">$R$</span> is called an ideal if:</p> <p>• <span class="math-container">$0 \in I$</span>; </p> <p>• <span class="math-container">$a,b \in I \to a−b \in I$</span>; </p> <p>• <span class="math-container">$a \in I, r \in R \to ra \in I$</span>. </p> <p>Suppose <span class="math-container">$a_1,\dots,a_n \in R$</span>. The ideal <span class="math-container">$\langle a_1,\dots,a_n\rangle$</span> generated by <span class="math-container">$a_1,\dots,a_n$</span> is defined as the intersection of all ideals which contain <span class="math-container">$a_1,\dots,a_n$</span>. Prove that <span class="math-container">$\langle a_1,\dots,a_n\rangle = \{r_1a_1 +\cdots+ r_na_n \mid r_1,\dots,r_n \in R\}$</span>. </p> <p>I proved this, but I got stuck on the one below:</p> <p>Prove that <span class="math-container">$\langle a_1,\dots,a_n\rangle = \langle\gcd(a_1,\dots,a_n)\rangle$</span>.</p> <p>I know how to calculate the gcd of two elements, but how do I use it in this context? There are now more than two elements, so I don't know how to work with them.</p>
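One standard route is to iterate the two-element case: <span class="math-container">$\gcd(a_1,\dots,a_n) = \gcd(\gcd(a_1,\dots,a_{n-1}), a_n)$</span>, tracking Bézout coefficients <span class="math-container">$r_i$</span> with <span class="math-container">$\sum r_i a_i = \gcd$</span>. Here is a hedged integer-analogue sketch in Python (the helper names are mine; the same induction works verbatim in <span class="math-container">$K[X]$</span> with polynomial division in place of integer division):

```python
from math import gcd
from functools import reduce

def ext_gcd(a, b):
    # extended Euclidean algorithm: returns (g, x, y) with x*a + y*b = g
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def gcd_with_coeffs(nums):
    # fold the two-element identity across the list, updating coefficients
    g, coeffs = nums[0], [1]
    for a in nums[1:]:
        g2, x, y = ext_gcd(g, a)
        coeffs = [c * x for c in coeffs] + [y]
        g = g2
    return g, coeffs

g, r = gcd_with_coeffs([12, 18, 30])
assert g == reduce(gcd, [12, 18, 30])           # same gcd as the library
assert sum(ri * ai for ri, ai in zip(r, [12, 18, 30])) == g  # Bezout relation
```

The Bézout relation is exactly what shows <span class="math-container">$\gcd(a_1,\dots,a_n) \in \langle a_1,\dots,a_n\rangle$</span>; the reverse inclusion follows because the gcd divides each <span class="math-container">$a_i$</span>.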
drhab
75,923
<p><span class="math-container">$$\sum_{k=0}^n(n^2+k)=(n+1)n^2+\sum_{k=0}^nk=n(n^2+n)+\sum_{k=1}^nk=\sum_{k=1}^n(n^2+n+k)$$</span></p>
84,204
<p>Say I have some object or quantity and an instance or special case of it; how do I formally write this down? </p> <p>I don't (just) mean that $X$ is a set and $x$ an element, i.e. $x\in X$ is not it. I'm dealing with things as general as "<em>the specific group $g$ is a group/is a case of a group</em>", or "<em>the integers can be viewed as a restriction of the reals</em>". It should be able to handle such different cases. Are there standard ways to express such a "<em>succession</em>"?</p> <p>Do I have to introduce a two-place predicate which says "<em>is an instance of</em>" or "<em>is a special case of</em>"? Is this even legal/formally right/possible? Do I have to elevate the bigger things to some sort of set or category first? Does this have to do with lattices (since I generate some order)?</p>
Asaf Karagila
622
<p>If you can formalize the special case as "having property $p(x)$" then you can say $$\forall x\bigg( p(x)\lor \ldots\bigg)$$ where the ellipsis handles the general case. </p> <p>If, however, you cannot express the specific case with such a property, or the case you want to handle is predefined (if the set is empty, the number is $1$, etc.), then you can replace $p(x)$ by a formula saying $x$ is that specific case and how we should proceed in that case. </p>
370,212
<p>Let <span class="math-container">$\mathbb{N}$</span> denote the set of positive integers. For <span class="math-container">$\alpha\in \; ]0,1[\;$</span>, let <span class="math-container">$$\mu(n,\alpha) = \min\big\{|\alpha-\frac{b}{n}|: b\in\mathbb{N}\cup\{0\}\big\}.$$</span> (Note that we could have written <span class="math-container">$\inf\{\ldots\}$</span> instead of <span class="math-container">$\min\{\ldots\}$</span>, but it is easy to see that the infimum is always a minimum.)</p> <p>Is there an <span class="math-container">$\alpha\in \; ]0,1[$</span> such that for all <span class="math-container">$n\in\mathbb{N}$</span> we have <span class="math-container">$\mu(n+1,\alpha)&lt;\mu(n,\alpha)$</span>?</p>
Emil Jeřábek
12,705
<p>There is no such <span class="math-container">$\alpha$</span>.</p> <p>If <span class="math-container">$\alpha\in\mathbb Q$</span>, there is <span class="math-container">$n$</span> such that <span class="math-container">$\mu(n,\alpha)=0$</span>, thus <span class="math-container">$\mu(n+1,\alpha)&lt;\mu(n,\alpha)$</span> is impossible.</p> <p>If <span class="math-container">$\alpha\notin\mathbb Q$</span>, a classical result in Diophantine approximation says that there are infinitely many <span class="math-container">$n$</span> such that <span class="math-container">$$\mu(n,\alpha)&lt;\frac1{\sqrt5n^2}.$$</span> If then <span class="math-container">$$\mu(n+1,\alpha)&lt;\mu(n,\alpha),$$</span> let <span class="math-container">$a/n$</span> and <span class="math-container">$b/(n+1)$</span> be the respective closest approximations of <span class="math-container">$\alpha$</span>. We have <span class="math-container">$$\left|\frac an-\frac b{n+1}\right|&lt;\frac2{\sqrt5n^2}&lt;\frac1{n(n+1)},$$</span> while <span class="math-container">$$\left|\frac an-\frac b{n+1}\right|=\frac{|a(n+1)-bn|}{n(n+1)}\ge\frac1{n(n+1)}$$</span> unless <span class="math-container">$a(n+1)=bn$</span>, i.e., the approximating fractions are <span class="math-container">$0$</span> or <span class="math-container">$1$</span>. Since this happens for infinitely many <span class="math-container">$n$</span>, this is impossible for <span class="math-container">$\alpha\in(0,1)$</span>.</p>
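The impossibility can also be illustrated empirically: for any sample irrational <span class="math-container">$\alpha$</span>, strict decrease of <span class="math-container">$\mu(n,\alpha)$</span> fails at many <span class="math-container">$n$</span>. A minimal Python sketch (the helper `mu` and the choice <span class="math-container">$\alpha = \sqrt2 - 1$</span> are just for illustration, not a proof):

```python
# For alpha in (0,1), the minimum over b in N ∪ {0} of |alpha - b/n| is
# realized by the nearest-integer numerator b = round(n * alpha).
def mu(n, alpha):
    b = round(n * alpha)
    return abs(alpha - b / n)

alpha = 2 ** 0.5 - 1  # sample irrational in (0, 1)
violations = [n for n in range(1, 200) if mu(n + 1, alpha) >= mu(n, alpha)]
assert violations  # mu(n, alpha) is not strictly decreasing in n
```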
186,638
<p>$f(x)=\max(2x+1,3-4x)$, where $x \in \mathbb{R}$. What is the minimum possible value of $f(x)$?</p> <p>When $2x+1=3-4x$, we have $x=\frac{1}{3}$.</p>
Jeff Yontz
34,893
<p>Since $2x+1$ is strictly increasing and $3-4x$ is strictly decreasing, they must intersect at some point, $z$. For any $\epsilon &gt; 0$, $x = z+\epsilon$ implies that $2x+1 &gt; 3-4x$, and similarly $x = z-\epsilon$ implies that $2x+1 &lt; 3-4x$. Thus, the minimum of $f(x)$ must be at $z$. In your case, $z = \frac{1}{3}$, so the minimum value is $f\left(\frac{1}{3}\right)=\frac{5}{3}$.</p>
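The crossing-point argument can be confirmed with a quick numeric grid search (illustrative only; the exact answer is already determined by the algebra):

```python
# f is the pointwise max of an increasing line and a decreasing line,
# so its minimum sits at their crossing x = 1/3, with value 5/3.
f = lambda x: max(2 * x + 1, 3 - 4 * x)

xs = [i / 10**5 for i in range(10**5 + 1)]  # fine grid on [0, 1]
best = min(xs, key=f)
assert abs(best - 1 / 3) < 1e-4
assert abs(f(best) - 5 / 3) < 1e-4
```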
64,780
<p>I need to sum values that belong to the same week. For example, I have the list x with one column and n rows, in the format: </p> <pre><code>{{{2007,1,3},0.2},{{2007,1,4},0.1},{{2007,1,5},0.14},{{2007,1,8},0.}, ... {{2014,10,17},-0.2},{{2014,10,18},0.2},{{2014,10,19},0.2}} </code></pre> <p>The dates in the list are sorted from oldest to newest. Put differently, is there any function like WEEKNUM in MS Excel that returns the number of the week in the year, so I could use <code>GatherBy</code> with that week number and <code>Accumulate</code> the values?</p>
Nasser
70
<p>Using the function by Heike from <a href="https://mathematica.stackexchange.com/questions/4551/determining-the-week-of-a-year-from-a-given-date">Determining the week of a year from a given date</a> which gives the week number from a date:</p> <pre><code>data = {{{2007, 1, 3}, 0.2}, {{2007, 1, 4}, 0.1}, {{2007, 1, 5}, 0.14}, {{2007, 1, 8}, 0.}, {{2014, 10, 17}, -0.2}, {{2014, 10, 18},0.2}, {{2014, 10, 19}, 0.2}}; (*replace date with week number for each entry*) data2 = Table[{weekNumberUS[DateList[data[[i, 1]]]], data[[i, 2]]}, {i, Length[data]}]; (*make {week,sum} matrix*) {First[#[[1]]], Total[#[[All, 2]]]} &amp; /@ SplitBy[data2, First] (* {{1, 0.44}, {2, 0.}, {42, 0.}, {43, 0.2}} *) </code></pre>
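For comparison, here is a hedged Python sketch of the same group-by-week idea using ISO week numbers from the standard library. Note the convention difference: ISO weeks start on Monday, while <code>weekNumberUS</code> above starts weeks on Sunday, so entries near weekends can group differently (here 2014-10-19, a Sunday, stays in ISO week 42 instead of starting week 43):

```python
from datetime import date
from collections import defaultdict

data = [((2007, 1, 3), 0.2), ((2007, 1, 4), 0.1), ((2007, 1, 5), 0.14),
        ((2007, 1, 8), 0.0), ((2014, 10, 17), -0.2), ((2014, 10, 18), 0.2),
        ((2014, 10, 19), 0.2)]

# group each (date, value) pair by its ISO (year, week) and sum the values
totals = defaultdict(float)
for (y, m, d), v in data:
    iso_year, iso_week, _ = date(y, m, d).isocalendar()
    totals[(iso_year, iso_week)] += v

print(dict(totals))
# e.g. weeks (2007, 1), (2007, 2), (2014, 42) with sums 0.44, 0.0, 0.2
```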
4,428,142
<p>Applying integration by parts splits the integral into 3 integrals, <span class="math-container">$\displaystyle \begin{aligned}I&amp;=\int_{0}^{1} \frac{\sin ^{-1} x \ln (1+x)}{x^{2}} d x\\&amp;=-\int_{0}^{1} \sin ^{-1} x \ln (1+x) d\left(\frac{1}{x}\right) \\&amp;=-\left[\frac{\sin ^{-1} x \ln (1+x)}{x}\right]_{0}^{1}+\underbrace{\int_{0}^{1} \frac{\ln (1+x)}{x \sqrt{1-x^{2}}}}_{K} +\underbrace{\int_{0}^{1}\frac{\sin ^{-1} x}{x}}_{L} d x-\underbrace{\int_{0}^{1} \frac{\sin ^{-1} x}{1+x}}_{M} d x \end{aligned} \tag*{} $</span> Letting <span class="math-container">$x= \cos \theta$</span> for <span class="math-container">$K$</span> and <span class="math-container">$\sin^{-1}x \mapsto x$</span> for <span class="math-container">$L$</span> and <span class="math-container">$M$</span>, yields <span class="math-container">$\displaystyle I=-\frac{\pi}{2} \ln 2 +\underbrace{\int_{0}^{\frac{\pi}{2}} \frac{\ln (1+\cos \theta)}{\cos \theta} d \theta}_{K}+\underbrace{\int_{0}^{\frac{\pi}{2}} \frac{x\cos x }{\sin x} d x}_{L}-\underbrace{\int_{0}^{\frac{\pi}{2}} \frac{x\cos x }{1+\sin x} d x }_{M}\tag*{} $</span></p> <hr /> <p>For the integral <span class="math-container">$ K,$</span>putting <span class="math-container">$ a=1$</span> in my <a href="https://math.stackexchange.com/a/4299808/732917">post</a> yields <span class="math-container">$\displaystyle \boxed{K=\frac{\pi^{2}}{8}}\tag*{} $</span></p> <hr /> <p>For the integral <span class="math-container">$ L,$</span> integration by parts yields <span class="math-container">$\displaystyle \begin{aligned}L &amp;=\int_{0}^{\frac{\pi}{2}} x d \ln (\sin x) \\&amp;=[x \ln (\sin x)]_{0}^{\frac{\pi}{2}}-\int_{0}^{\frac{\pi}{2}} \ln (\sin x) d x \\&amp;=\boxed{\frac{\pi}{2} \ln 2}\end{aligned}\tag*{} $</span></p> <hr /> <p>For the integral <span class="math-container">$ M,$</span> integration by parts yields <span class="math-container">$\displaystyle \begin{aligned}M &amp;=\int_{0}^{\frac{\pi}{2}} x d \ln (1+\sin x)\\&amp;=[x \ln 
(1+\sin x)]_{0}^{\frac{\pi}{2}}-\int_{0}^{\frac{\pi}{2}} \ln (1+\sin x) d x \\&amp;=\frac{\pi}{2} \ln 2-\underbrace{\int_0^{\frac{\pi}{2} }\ln (1+\sin x) d x}_{N}\end{aligned}\tag*{} $</span> For the integral <span class="math-container">$ N,$</span> using my <a href="https://www.quora.com/How-do-we-find-the-integral-displaystyle-int_-0-frac-pi-4-ln-cos-x-d-x-tag/answer/Lai-Johnny" rel="nofollow noreferrer">post </a> in the second-to-last step yields <span class="math-container">$\displaystyle \begin{aligned}N \stackrel{x\mapsto\frac{\pi}{2}-x}{=} &amp;\int_{0}^{\frac{\pi}{2}} \ln (1+\cos x) d x \\=&amp;\int_{0}^{\frac{\pi}{2}} \ln \left(2 \cos ^{2} \frac{x}{2}\right) d x \\=&amp;\frac{\pi}{2} \ln 2+2 \int_{0}^{\frac{\pi}{2}} \ln \left(\cos \frac{x}{2}\right) d x \\=&amp;\frac{\pi}{2} \ln 2+4 \int_{0}^{\frac{\pi}{4}} \ln (\cos x) d x \\=&amp;\frac{\pi}{2} \ln 2+4\left(\frac{1}{4}(2 G-\pi \ln 2)\right) \\=&amp;\boxed{-\frac{\pi}{2} \ln 2+2 G}\end{aligned}\tag*{} $</span> where <span class="math-container">$ G$</span> is Catalan’s constant.</p> <hr /> <p>Putting them together yields <span class="math-container">$\displaystyle \boxed{I=-\pi \ln 2+\frac{\pi^{2}}{8}+2G} \tag*{} $</span></p> <hr /> <p><em><strong>Question:</strong></em> Is there any shorter solution?</p>
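As a sanity check of the boxed closed form (a verification sketch only, not a shorter derivation): after the substitution <span class="math-container">$x = \sin t$</span> the integrand becomes <span class="math-container">$t\,\ln(1+\sin t)\cos t/\sin^2 t$</span>, which is smooth on <span class="math-container">$[0, \pi/2]$</span> with limit <span class="math-container">$1$</span> at <span class="math-container">$t = 0$</span>, so composite Simpson's rule converges quickly:

```python
import math

def integrand(t):
    if t < 1e-8:
        return 1.0  # limiting value as t -> 0
    s = math.sin(t)
    return t * math.log(1.0 + s) * math.cos(t) / (s * s)

def simpson(f, a, b, n):  # composite Simpson's rule, n even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

G = 0.9159655941772190  # Catalan's constant
numeric = simpson(integrand, 0.0, math.pi / 2, 20000)
closed = -math.pi * math.log(2) + math.pi ** 2 / 8 + 2 * G
print(numeric, closed)  # both ≈ 0.888
```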
WAH
449,818
<p>Perhaps not 100% satisfactory, as I've performed each step via Mathematica rather than a step-by-step derivation, but the following could be considered a simpler solution. Note that <span class="math-container">$$\ln(1+x) = \sum_{n=1}^\infty(-1)^{n+1}\frac{x^n}{n}$$</span> and <span class="math-container">$$\int_0^1\sin^{-1}x\,x^n dx = \frac{\sqrt{\pi}}{n+1}\left( \frac{\sqrt\pi}{2} - \frac{\Gamma\big(1+\frac n2\big)}{(n+1)\Gamma\big(\frac{n+1}{2}\big)} \right),$$</span> where, for <span class="math-container">$n=-1$</span>, the above is understood to be <span class="math-container">$\frac{\pi\ln2}{2}$</span>. Performing the infinite sum yields the answer given in the original question.</p>
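The stated moment formula can be checked numerically for a few <span class="math-container">$n \geq 1$</span> (Simpson's rule after the substitution <span class="math-container">$x = \sin t$</span>; a verification sketch, not part of the argument — the <span class="math-container">$n = -1$</span> case is the special <span class="math-container">$\pi\ln 2/2$</span> value and is excluded):

```python
import math

def moment_formula(n):
    # sqrt(pi)/(n+1) * ( sqrt(pi)/2 - Gamma(1 + n/2) / ((n+1) Gamma((n+1)/2)) )
    sp = math.sqrt(math.pi)
    return (sp / (n + 1)) * (sp / 2
            - math.gamma(1 + n / 2) / ((n + 1) * math.gamma((n + 1) / 2)))

def moment_numeric(n, steps=20000):
    # integral of arcsin(x) * x^n over [0, 1], via x = sin t and Simpson's rule
    h = (math.pi / 2) / steps
    def f(t):
        return t * math.sin(t) ** n * math.cos(t)
    total = f(0.0) + f(math.pi / 2)
    for i in range(1, steps):
        total += f(i * h) * (4 if i % 2 else 2)
    return total * h / 3

for n in range(1, 6):
    assert abs(moment_formula(n) - moment_numeric(n)) < 1e-9
```

For instance, at <span class="math-container">$n=1$</span> both sides give <span class="math-container">$\pi/8$</span>, and at <span class="math-container">$n=2$</span> both give <span class="math-container">$\pi/6 - 2/9$</span>.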