Columns: qid (int64, 1 to 4.65M) · question (large_string, lengths 27 to 36.3k) · author (large_string, lengths 3 to 36) · author_id (int64, −1 to 1.16M) · answer (large_string, lengths 18 to 63k)
529,861
<p>If $m,n$ are coprime positive integers and $m-n$ is odd, are $(m-n),(m+n),m,n$ pairwise coprime?</p> <p>How do I prove it?</p> <p>In particular, how do I prove that $(m-n)$ and $(m+n)$ are coprime?</p>
Real Hilbert
54,006
<p>Note that $x=\frac{1}{1+p}$ for some $p&gt;0$,</p> <p>which implies (by Bernoulli's inequality, $(1+p)^k \geq 1+kp &gt; kp$) that $|x^k|=\frac{1}{(1+p)^k} \leq \frac{1}{kp}&lt;\epsilon~ \forall~ k&gt;\frac{1}{p\epsilon}$. </p>
2,316,286
<blockquote> <p><strong>Theorem</strong> <em>(Cauchy-Schwarz Inequality) : If $u$ and $v$ are vectors in an inner product space $V$, then</em> $$\langle u,v\rangle ^2\leqslant \langle u,u\rangle \langle v,v\rangle .$$</p> </blockquote> <p><strong>Proof</strong> : If $u=0$, then $\langle u,v\rangle = \langle u,u\rangle=0$ so that the inequality clearly holds. Assume now that $u\neq 0$. Let $a=\langle u,u\rangle$, $b=2 \langle u,v\rangle$, $c=\langle v,v\rangle$, let $t$ be any real number. By the positivity axiom, the inner product of any vector with itself is always non-negative. Therefore $$0\leqslant\langle (tu+v),(tu+v)\rangle =\langle u,u\rangle t^2+2\langle u,v\rangle t+\langle v,v\rangle =at^2+bt+c.$$</p> <p>This inequality implies that the quadratic polynomial $at^2+bt+c$ has no real roots or a repeated real root. Therefore its discriminant must satisfy $b^2-4ac\leqslant0$. Expressing $a$,$b$ and $c$ in terms of $u$ and $v$ gives $$4\langle u,v\rangle^2-4\langle u,u\rangle \langle v,v\rangle \leqslant 0$$ or equivalently, $$ \langle u,v\rangle^2\leqslant\langle u,u\rangle\langle v,v\rangle.\blacksquare$$</p> <p><strong>Doubt</strong> : How do we know that $at^2+bt+c$ has no real roots or a repeated real root?</p>
N. S.
9,176
<p>If $at^2+bt+c \geq 0$ for all $t$, then substituting $t=-\frac{b}{2a}$ you get $$a\left(\frac{-b}{2a}\right)^2+b\left(-\frac{b}{2a}\right)+c \geq 0.$$</p> <p>Now, since $a&gt;0$, multiplying by $4a$ you get $$b^2-2b^2+4ac \geq 0,$$ that is, $b^2-4ac\leq 0$.</p>
732,121
<p>I'm having trouble seeing why the bounds of integration used to calculate the marginal density of $X$ aren't $0 &lt; y &lt; \infty$.</p> <p>Here's the problem:</p> <p>$f(x,y) = \frac{1}{8}(y^2 - x^2)e^{-y}$ where $-y \leq x \leq y$, $0 &lt; y &lt; \infty$ </p> <p>Find the marginal densities of $X$ and $Y$.</p> <p>To find $f_Y(y)$, I simply integrated away the "x" component of the joint probability density function:</p> <p>$f_Y(y) = \frac{1}{8}\int_{-y}^y (y^2 - x^2)e^{-y} \, dx = \frac{1}{6}y^3e^{-y}$</p> <p>Then to find $f_X(x)$,</p> <p>$f_X(x) = \frac{1}{8}\int_0^\infty (y^2 - x^2)e^{-y} \, dy = \frac{-(x^2-2)}{8}$</p> <p>However, the solutions I have say that the marginal density of $X$ above is wrong. Instead, it says that $f_X(x)$ is</p> <p>$f_X(x) = \frac{1}{8}\int_{|x|}^\infty (y^2 - x^2)e^{-y} \, dy = \frac{1}{4}e^{-|x|}(1+|x|)$</p> <p>Unfortunately, there is no explanation as to why the lower bound is $|x|$. The only thing that stands out to me are the bounds of $x$: $-y \leq x \leq y$.</p> <p>Any constructive input is appreciated.</p>
Javier Gutierrez
291,573
<p>If you graph $x = y$ and $x = -y$, and only look at $0 &lt; y &lt; \infty$, the support of the density is the wedge above both lines (quadrants I and II).</p> <p>Thus, for a fixed $x$, $y$ only ranges over $|x| \leq y &lt; \infty$.</p> <p>$0 &lt; y &lt; \infty$ are not the correct $y$-bounds for the inner integral.</p>
170,014
<p>Determine for which values of $a$ the polynomial $f = x^2+ax+2$ can be divided exactly by $g= x-3$ in $\mathbb Z_5$. </p> <p>I don't know if there are more effective (and certainly <strong>right</strong>) ways to solve this problem (I assume there definitely are), but as I am not aware of them, I thought I could proceed like this: I divided $f$ by $g$, treating $a$ as a constant in $\mathbb Q$; the resulting quotient is $x+(a+3)$, the remainder is $2+3(a+3)$. For the division to be exact, we need:</p> <p>$$\begin{aligned} 2+3(a+3) = 0 \end{aligned}$$ $$\begin{aligned} 2+3a + 4 = 0 \end{aligned}$$ $$\begin{aligned} 3a + 1 = 0 \Rightarrow 3a=-1 \Rightarrow 3a = 4 \Rightarrow a= \frac{4}{3}=3 \end{aligned}$$</p> <p>Now I would expect $x^2+3x+2 = (x+1)(x-3)$, but it isn't the case because $(x+1)(x-3) = x^2-2x-3$. Is my way of solving this exercise totally wrong, and would it be better if I set my notebook on fire (in that case please feel free to jump in), or am I <em>just</em> doing some calculation wrong?</p>
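A quick brute-force check of the computation above (a hedged sketch, not part of the original post): by the factor theorem, $x-3$ divides $f$ in $\mathbb Z_5$ exactly when $f(3)\equiv 0 \pmod 5$, and the coefficient comparison at the end can be settled mod 5.

```python
# Remainder of x^2 + a*x + 2 on division by x - 3 is f(3), by the factor theorem.
remainders = [(3 * 3 + a * 3 + 2) % 5 for a in range(5)]
solutions = [a for a in range(5) if remainders[a] == 0]
print(solutions)  # [3] -- a = 3 is the unique solution in Z_5

# The apparent mismatch resolves mod 5: x^2 - 2x - 3 and x^2 + 3x + 2
# have congruent coefficients (-2 ≡ 3 and -3 ≡ 2 mod 5).
assert all(((x + 1) * (x - 3)) % 5 == (x * x + 3 * x + 2) % 5 for x in range(5))
```

So the factorization found is correct once everything is read modulo 5.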
Did
6,179
<p>What is known is explained in C. Albanese, S. Lawi, <em>Laplace transform of an integrated geometric Brownian motion</em>, MPRF 11 (2005), 677-724, in particular in the paragraph of the Introduction beginning by <em>A separate class of models</em>...</p>
678,073
<p>I am working with the multiplicative group of units of the integers modulo $2^{127}$.</p> <p>Consider the set $E=\{(k,l) \mid 5^k \cdot 3^l \equiv 1\mod 2^{127}, k &gt; 0, l&gt; 0\}$. I wonder if anybody knows or has an idea where to look for a result related to a lower bound for $M=\min\{k+l \mid (k,l)\in E \}$.</p> <p>We have that $0&lt;M\leq \mathrm{ord}_{\mathbb{Z}_{2^{127}}}(5)+\mathrm{ord}_{\mathbb{Z}_{2^{127}}}(3)$ where $\mathrm{ord}_{\mathbb{Z}_{2^{127}}}(5)=2^{125}$ and $\mathrm{ord}_{\mathbb{Z}_{2^{127}}}(3)=2^{125}$ (orders of these primes in the group of units of $\mathbb{Z}_{2^{127}}$).</p> <p>I also would like to generalize the above for primes other than 5 and 3.</p> <p>Is there a result about a tighter lower bound for $M$?</p>
benh
115,596
<p><strong>Claim:</strong></p> <blockquote> <p>Let $c=40647290924413185736448652556727923386$, then the <strong>set of solutions</strong> is given by $$A = \{(k,l) \in (\Bbb Z/2^{126} \Bbb Z)^2 \mid 5^k3^l \equiv 1\bmod 2^{127}\}= \{(cn, 2 n) \mid n \in \Bbb Z\}$$</p> </blockquote> <p>This gives an explicit formula for all solutions of $0\leq l+k &lt; 2^{126}$, namely $$l+k = (2n \bmod 2^{126})+(cn \bmod 2^{126})$$ with $0&lt;n&lt;2^{125}$. I find it hard to calculate the minimum of that expression.</p> <p><strong>Reason:</strong></p> <p>The set $A = \{(k,l) \mid 5^k3^l \equiv 1\bmod 2^{127}\}$ can be interpreted as a subspace of $(\Bbb Z/2^{126} \Bbb Z)^2$. We only need to find a generator of that subspace to give an explicit characterization of $A$.</p> <p>This is hard in general, but in the case of the modulus $2^{n}$ we can do the following: Given $k,l \in \Bbb Z/ 2^{n-1} \Bbb Z$ such that $5^k3^l \equiv 1 \bmod 2^{n}$ we also know that $5^k3^l \equiv 1 \bmod 2^{n-1}$. Conversely, given a solution to the second equation, we can try to lift $(k,l)$ from $ \Bbb Z/ 2^{n-2} \Bbb Z$ to $ \Bbb Z/ 2^{n-1} \Bbb Z$.</p> <p>Now by consideration $\bmod \,2^3$ it is clear that both $k$ and $l$ are even. We can therefore lift a solution of $5^k3^l \equiv 1 \bmod 2^3$ (my little python code did that pretty instantly) to find that $$5^{c}3^2\equiv 1 \bmod 2^{127}$$ which is of order $2^{125}$ in $\Bbb Z/2^{126} \Bbb Z$ and therefore a generator of $A$. </p> <hr> <p>EDIT: One can use <strong>continued fractions</strong> of $c/2^{m}$ to obtain small values of $(cn \bmod 2^{m})$ and thus also of $l+k$. The lowest I found so far is $$\begin{eqnarray} l&amp;=&amp;11726533429350798020\\ k&amp;=&amp;\;\,\;\;391079140617450804\\l+k&amp;=&amp;12117612569968248824&lt;\sqrt{2^{127}}&lt;2^{64}. \end{eqnarray}$$</p>
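The bit-by-bit lifting described above (benh's "little python code") might look like the following sketch. The helper name `dlog2` is my own, and the approach assumes the standard fact that $5$ generates the cyclic group of units $\equiv 1 \pmod 4$ modulo $2^m$ (Python 3.8+ for the negative modular exponent):

```python
def dlog2(g, h, m):
    """Solve g**k ≡ h (mod 2**m) bit by bit, assuming g ≡ 5 (mod 8) generates
    the cyclic group of units ≡ 1 (mod 4), which has order 2**(m-2)."""
    mod = 1 << m
    k = 0
    for i in range(m - 2):
        # Invariant: with bits 0..i-1 of k fixed, g**k ≡ h (mod 2**(i+2)).
        # Bit i must be set exactly when the congruence fails mod 2**(i+3).
        if (h * pow(g, -k, mod)) % (1 << (i + 3)) != 1:
            k |= 1 << i
    return k

m = 127
mod = 1 << m
# Fix l = 2 and solve 5**k ≡ 9**(-1) (mod 2**127), as in the answer;
# the result should reproduce the constant c quoted above.
k = dlog2(5, pow(9, -1, mod), m)
assert pow(5, k, mod) * pow(3, 2, mod) % mod == 1
print(k)
```

This runs essentially instantly, since each step is one modular exponentiation.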
2,883,625
<p>Let $f:\Bbb R^2\to \Bbb R$ such that $$(f_x)^2+(f_y)^2=4\Big(1-f(x,y)\Big)\Big(f(x,y)\Big)^2,\qquad 0&lt;f(x,y)&lt;1.$$ then which functions satisfy the above property?</p>
Jaap Scherphuis
362,967
<p>It is not always possible for both of $a,b$ to satisfy $a,b&lt;\sqrt{p}$.</p> <p>A small counterexample is $p=5$, $x=1$, $y=4$. The only solutions are $(a,b) \in \{ (1,4), (2,3), (3,2), (4,1) \}$.</p> <p>In fact, for every $p$ the values $(x,y)=(1,p-1)$ will lead to a counterexample because it forces $a+b=0 \mod p$, and $a+b &lt; \sqrt{p}+\sqrt{p}&lt;p$ then gives the trivial solution $a+b=0$ only.</p> <p>For any $a$, you can set $b=y^{-1}ax \mod p$, where the $y^{-1}$ is calculated with the <a href="https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm" rel="nofollow noreferrer">Extended Euclidean algorithm</a> applied to the coprime numbers $y$ and $p$. You now have a solution to $ax-by=0 \mod p$. You just cannot guarantee that if $a&lt;\sqrt{p}$ that you will get a $b$ that satisfies $b&lt;\sqrt{p}$.</p> <p><strong>EDIT:</strong></p> <p>To find a solution that satisfies $a,b&lt;\sqrt{p}$ (assuming there is one) you could just start at $a=1$ and try successive values.</p> <p>For example $p=101$, $x=32$, $y=6$. Starting with $a=1$, you get the solution: $$b=6^{−1}\cdot32=17\cdot32=544=39 \mod 101$$ This $b$-value isn't in the correct range. To get the other solutions for $a=2,3,4,...$ you take successive multiples of $b=39$, which are $39,78,16,55,94,32,71,9,...$. You stop then because you found $b\le10$, which gives the valid solution $(a,b)=(8,9)$.</p> <p>For large $p$, trying all values in succession may be slow or even infeasible. You can however be a little cleverer.</p> <p>To illustrate this, here's a larger example: $p=1000003$, $x=454463$, $y=109818$.</p> <p>The first solution has $a=1$ and $b=454463*109818^{-1} = 454463*554765=409838$.</p> <p>$(a,b)=(1, 409838)$</p> <p>Get the first multiple of this that doesn't yet overflow, i.e. 
multiply it by $\lfloor{p/b}\rfloor$.<br> $2\cdot(1, 409838) =(2, 2\cdot409838) = (2,819676)$<br> In this method we keep track of the solution with the smallest value of $b$ and of the solution with the largest value of $b$. So at the moment the $a=2$ solution is the current largest solution, and the $a=1$ solution is the current smallest solution. </p> <p>Now add the smallest solution to the largest solution so that $b$ overflows modulo $p$.<br> $(2,819676)+(1, 409838) = (3,229511)$<br> This gives a new and improved current smallest solution.</p> <p>At this point we would normally try to improve our largest solution by adding the smallest to it. However, it would overflow so that wouldn't give a larger $b$.<br> Let's add it anyway to get a new smallest solution:<br> $(2,819676)+(3,229511) = (5,49184)$</p> <p>Again, let's see what we get when we add the smallest solution to the largest. This time we can add it 3 times to get a new largest solution:<br> $(2,819676)+3\cdot(5,49184) = (17,967228)$<br> And then add it one more time to make it overflow and produce a new smallest solution:<br> $(17,967228)+(5,49184) = (22,16409)$</p> <p>Again add the smallest solution as often as possible to the largest without making it overflow:<br> $(17,967228)+1\cdot(22,16409) = (39,983637)$<br> And then add it once more to make it overflow:<br> $(39,983637)+(22,16409) = (61,43)$</p> <p>The algorithm ends now, because at this point $b$ has become smaller than $\sqrt{p}$ as required. You would also stop if $a$ became greater than $\sqrt{p}$, in which case there is no solution satisfying the condition $a,b &lt; \sqrt{p}$.</p>
2,883,625
<p>Let $f:\Bbb R^2\to \Bbb R$ such that $$(f_x)^2+(f_y)^2=4\Big(1-f(x,y)\Big)\Big(f(x,y)\Big)^2,\qquad 0&lt;f(x,y)&lt;1.$$ then which functions satisfy the above property?</p>
Russ
497,388
<p>I'm somewhat new to this board, but if Python is permitted, here is an annotated function that implements Jaap's efficient algorithm (with some helpers up front):</p> <p>(Again, sorry if posting code is inappropriate for this board - please let me know, and I will delete it. If appropriate, feel free to delete this comment! ;-)</p> <p><strong>IMPLEMENTATION</strong></p> <pre><code>import math

def xgcd(b, n):
    '''
    Returns GCD of b and n - specifically, returns GCD(b,n)
    and Bezout's coefficients x and y.
    From https://en.wikibooks.org/wiki/Algorithm_Implementation/Mathematics/Extended_Euclidean_algorithm#Iterative_algorithm_3
    '''
    x0, x1, y0, y1 = 1, 0, 0, 1
    while n != 0:
        q, b, n = b // n, n, b % n
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return b, x0, y0

def mulinv(b, n):
    '''
    Returns multiplicative inverse of b mod n,
    or None if no inverse exists.
    '''
    g, x, _ = xgcd(b, n)
    if g == 1:
        return x % n
    else:
        return None

def jaapfastsolve(x, y, p):
    '''
    Returns a solution a, b (with number of steps) to the form:
        a*x - b*y = 0 mod p
    '''
    isqrt = math.isqrt  # exact integer square root (avoids float error for large p)
    mins = (1, x * mulinv(y, p) % p)
    maxs = mins
    n = 0
    while mins[1] &gt; isqrt(p):
        n = n + 1
        # increase max (by max) to largest under p
        tm = p // maxs[1]
        news = (tm * maxs[0], tm * maxs[1] % p)
        maxs = max([maxs, news], key=lambda t: t[1])
        # increase again by min to largest under p to get new max
        tn = (p - maxs[1]) // mins[1]
        news = (tn * mins[0] + maxs[0], (tn * mins[1] + maxs[1]) % p)
        maxs = max([maxs, news], key=lambda t: t[1])
        # add one more min to overflow p to get new min
        news = (mins[0] + maxs[0], (mins[1] + maxs[1]) % p)
        mins = min([mins, news], key=lambda t: t[1])
    return (n, mins)
</code></pre> <p><strong>EXAMPLE</strong></p> <pre><code>p = 1000003; x = 454463; y = 109818
n, mins = jaapfastsolve(x, y, p)
print("fast solve answers")
print(n, mins, (mins[0] * x - mins[1] * y) % p == 0)
</code></pre> <p>Returns <code>4 (61, 43) True</code>.</p>
1,439,850
<p>So the problem states that the centre of the circle is in the first quadrant and that circle passes through $x$ axis, $y$ axis and the following line: $3x-4y=12$. I have only one question. The answer denotes $r$ as the radius of the circle and then assumes that centre is at $(r,r)$ because of the fact that the circle passes through $x$ and $y$ axis. I was thinking that this single fact does not permit one to assume that centre must be at $(r,r)$, simply because the centre may be positioned in such a manner that the distance to $y$ and $x$ axis is not the same and not necessarily $r$. Is my thinking correct? If not, why? </p>
Nikolay Gromov
269,576
<p>Again, substituting $y(x)=z(x)/x$ you find $x z'(x)=1-2 z(x)$, and then (by separating the variables) $$ -\frac{1}{2}\log(1-2z)=c+\log(x) $$ or $$ z=\frac{1}{2}+\frac{c}{x^2} $$ or $$ y=\frac{c}{x^3}+\frac{1}{2x} $$</p>
1,087,015
<p>I'm looking for a polynomial $P(x)$ with the following properties:</p> <ol> <li>$P(0) = 0$.</li> <li>$P\left(\frac13\right) = 1$</li> <li>$P\left(\frac23\right) = 0$</li> <li>$P'\left(\frac13\right) = 0$</li> <li>$P'\left(\frac23\right) = 0$</li> </ol> <p>From 1 and 3 we know that $P(x) = x\left(x - \frac23\right)Q(x)$. From 4 and 5 we know that $P'(x) = \alpha\left(x - \frac13\right)\left(x - \frac23\right)$. $$ \begin{align} P(x) &amp; = \alpha\int\left(x - \frac13\right)\left(x - \frac23\right)\text{d}x \\ &amp; = \alpha\left(\frac13x^3 - \frac12x^2 + \frac29x + \text{C}\right) \end{align} $$ Now 1 implies that the constant is $\text{C} = 0$, but 3 implies that the constant is $\text{C} = -\frac{2}{81}$. Am I doing something wrong or does this polynomial not exist? </p> <p>If it doesn't exist, how close could I get to making a polynomial that satisfies these 5 conditions?</p>
Ross Millikan
1,827
<p>Certainly higher orders could. Since you have five conditions, you could expect a quartic to work. The condition that $P(\frac 23)=P'(\frac 23)=0$ gives a factor $(x-\frac 23)^2$, so we expect $P(x)=x(x-\frac 23)^2(ax+b)$. Now apply the conditions at $x=\frac 13$ to get $a$ and $b$. We get $P(\frac 13)=\frac {a+3b}{81}=1$, $P'(\frac 13)=-\frac b9=0$, so $b=0$, $a=81$, and $P(x)=81x^2(x-\frac 23)^2$.</p>
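For what it's worth, the five conditions on the quartic above can be checked in exact arithmetic; a small sketch with Python's `fractions`:

```python
from fractions import Fraction as F

def P(x):
    # P(x) = 81 x^2 (x - 2/3)^2
    return 81 * x**2 * (x - F(2, 3))**2

def dP(x):
    # P'(x) by the product rule: 162 x (x - 2/3)^2 + 162 x^2 (x - 2/3)
    return 162 * x * (x - F(2, 3))**2 + 162 * x**2 * (x - F(2, 3))

assert P(F(0)) == 0       # condition 1
assert P(F(1, 3)) == 1    # condition 2
assert P(F(2, 3)) == 0    # condition 3
assert dP(F(1, 3)) == 0   # condition 4
assert dP(F(2, 3)) == 0   # condition 5
print("all five conditions hold")
```

Exact rationals avoid any floating-point doubt about the values at $\frac13$ and $\frac23$.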
3,623,368
<p>For example, this equation: $505x-673y=1$. Here $x=4$ and $y=3$ work, but how can I find them mathematically? What would be the approach here?</p>
Matteo
686,644
<p>This is a simple Diophantine equation. To solve it, you basically have to express <span class="math-container">$1$</span> as a combination of <span class="math-container">$505$</span> and <span class="math-container">$673$</span> by running the Euclidean algorithm and back-substituting. The result, as you have shown, is <span class="math-container">$x=4$</span> and <span class="math-container">$y=3$</span> because: <span class="math-container">$$505\cdot4-673\cdot3=1$$</span></p>
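The back-substitution can be mechanized with the extended Euclidean algorithm; a short sketch (the normalization step at the end is my own addition, to land on the positive solution):

```python
def xgcd(a, b):
    """Extended Euclidean algorithm: returns (g, s, t) with s*a + t*b = g = gcd(a, b)."""
    s0, s1, t0, t1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return a, s0, t0

g, s, t = xgcd(505, 673)
print(g, s, t)                  # 1 4 -3, i.e. 4*505 + (-3)*673 = 1
assert s * 505 + t * 673 == g == 1

# For 505x - 673y = 1, take x = s and y = -t, shifting to positives if needed
# (the general solution is x = s + 673k, y = -t + 505k).
if s < 0:
    s, t = s + 673, t - 505
x, y = s, -t
assert 505 * x - 673 * y == 1   # here (x, y) = (4, 3)
```

The same routine works for any coprime pair of coefficients.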
428,415
<p>I tried using integration by parts twice, the same way we do for $\int \sin {(\sqrt{x})}\,dx$, but in the second integral I'm not getting an expression that is equal to $\int x\sin {(\sqrt{x})}\,dx$.</p> <p>I let $\sqrt x = t$; thus, $$\int t^2 \sin(t)\cdot 2t\, dt = 2\int t^3\sin(t)\,dt = 2\left[-\cos(t)\cdot t^3 + \int 3t^2\cos(t)\,dt\right] = 2\left[-\cos(t)\cdot t^3+\left(\sin(t)\cdot 3t^2 - \int 6t \sin(t)\,dt\right)\right],$$</p> <p>which I can't find useful. </p>
Nick
83,568
<p>Just continue your path of partial integration with the last integral. One more integration by parts turns $\int 6t \sin(t)\,dt$ into a pure cosine integral, which is elementary and yields your solution.</p>
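If I haven't slipped, carrying the integration by parts one step further gives, with $t=\sqrt x$, the antiderivative $-2t^3\cos t + 6t^2\sin t + 12t\cos t - 12\sin t + C$. A numeric spot-check with a central difference (standard library only):

```python
import math

def F(x):
    # candidate antiderivative of x*sin(sqrt(x)), with t = sqrt(x):
    #   -2 t^3 cos t + 6 t^2 sin t + 12 t cos t - 12 sin t
    t = math.sqrt(x)
    return (-2 * t**3 * math.cos(t) + 6 * t**2 * math.sin(t)
            + 12 * t * math.cos(t) - 12 * math.sin(t))

# check numerically that F'(x) = x * sin(sqrt(x)) at a few points
for x in (0.5, 1.0, 2.0, 7.0):
    h = 1e-6
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - x * math.sin(math.sqrt(x))) < 1e-5
print("derivative check passed")
```

This is only a sanity check, of course; differentiating $F$ symbolically confirms it via the chain rule ($dt/dx = 1/(2t)$).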
1,275,848
<p>Given two numbers $x$ and $y$, how can I check whether $x$ is divisible by <strong>all</strong> prime factors of $y$? Is there a way to do this without factoring $y$?</p>
2'5 9'2
11,123
<p>$x$ is divisible by all prime factors of $y$ if and only if for some $n$, $x^n\equiv0$ modulo $y$. You might compute $x^n$ modulo $y$ for $n=1$ up to say $\log_2(y)$ and see if $0$ arises as a result. For large numbers, where prime factorization is hard, but modular arithmetic is doable, this would be more efficient than prime factorization. I say $\log_2(y)$ because that is an upper bound for any exponent on a prime factor of $y$ in the prime factorization of $y$. So by the time you have raised $x$ to that power, any prime factor of $x$ will then be raised to a power at least as large as it could arise in the prime factorization of $y$.</p> <p>@Joffan points out in the comments that you could skip to just raising to the $\log_2(y)$ power. If you use repeated squaring, it does speed things up to $\log_2\log_2(y)$ multiplications modulo $y$. In fact, raising even higher to the next power of $2$ saves a step or two here and there.</p> <p>Applied to Mark's examples, this process would go like this: $$\begin{align} x=168,y=132&amp;\rightarrow \lfloor\log_2(y)\rfloor=7\rightarrow\text{8 is the next power of 2}\\ &amp;\phantom{\rightarrow \lfloor\log_2(y)\rfloor=7\rightarrow}\text{Explicitly, $8=2^{\lceil\log_2{ \lfloor\log_2(y)\rfloor}\rceil}$}\\ &amp;\rightarrow 168^8=((168^2)^2)^2\\ &amp;\rightarrow 168^8\equiv(108^2)^2\\ &amp;\rightarrow 168^8\equiv48^2\\ &amp;\rightarrow 168^8\equiv60\\ \end{align}$$ which uses three squarings mod $y$ to determine the answer is no. And$$\begin{align} x=168,y=98&amp;\rightarrow \lfloor\log_2(y)\rfloor=6\rightarrow\text{8 is the next power of 2}\\ &amp;\rightarrow 168^8=((168^2)^2)^2\\ &amp;\rightarrow 168^8\equiv(0^2)^2\\ &amp;\rightarrow 168^8\equiv0^2\\ &amp;\rightarrow 168^8\equiv0\\ \end{align}$$ which technically uses three squarings if we don't see the early $0$ to determine the answer is yes. (Or you could make the algorithm check each step to see if there are early zeros if you like; not much time saved though by preventing a few calculations of $0^2$.)</p>
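In Python this squaring test is essentially a one-liner, since the built-in `pow` already does repeated squaring; using `bit_length()` as a convenient upper bound for $\log_2(y)$ (a sketch):

```python
def divisible_by_all_prime_factors(x, y):
    """True iff every prime factor of y divides x, without factoring y:
    x**n ≡ 0 (mod y) for some n iff it holds for n = y.bit_length() >= log2(y)."""
    return pow(x, y.bit_length(), y) == 0

# the two examples from the answers
assert divisible_by_all_prime_factors(168, 98) is True   # 98 = 2 * 7^2, both divide 168
assert divisible_by_all_prime_factors(168, 132) is False # 132 has the prime factor 11
print("checks passed")
```

For very large $y$ this costs only $O(\log\log y)$ modular squarings.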
1,275,848
<p>Given two numbers $x$ and $y$, how can I check whether $x$ is divisible by <strong>all</strong> prime factors of $y$? Is there a way to do this without factoring $y$?</p>
Mark Bennet
2,906
<p>If $x$ is divisible by all the prime factors of $y$, then so is the highest common factor $h_1$ of $x$ and $y$.</p> <p>To test whether $y$ has a prime factor $p$ which is not a factor of $x$ - well then $p$ is not a factor of $h_1$, but will be a factor of $y_1$ where $y=y_1h_1$. Let $h_2$ be the highest common factor of $h_1$ and $y_1$ and $y_1=h_2y_2$. Then $y_2$ will retain any prime factor $p$ which is not a factor of $x$, and this will not be a factor of $h_2$. </p> <p>The $h_i$ are decreasing positive integers, so the process terminates. If some $h_i=1$ with $y_i\gt 1$ then $y$ has a prime factor which $x$ does not. Otherwise all the prime factors of $y$ must also be prime factors of $x$.</p> <hr> <p>To illustrate with $x=168, y=132$ we have $h_1=12, y_1=11$ and $h_2=1, y_2=11$ detects a problem.</p> <p>With $x=168, y=98$ we have $h_1=14, y_1=7$ and then $h_2=7, y_2=1$ and $h_3=1, y_3=1$ and all the prime factors of $y$ are factors of $x$.</p> <p>Note the hcf can be determined by the Euclidean algorithm, without factoring $y$.</p>
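The hcf iteration above translates directly into a few lines of Python (a sketch; `math.gcd` plays the role of the Euclidean algorithm):

```python
from math import gcd

def all_prime_factors_divide(x, y):
    """True iff every prime factor of y divides x, via repeated hcf's
    (Euclidean algorithm only, no factoring of y)."""
    h = gcd(x, y)          # h_1
    while h > 1:
        y //= h            # y_i = y_{i-1} / h_i
        h = gcd(h, y)      # h_{i+1}: keeps only the primes still shared
    return y == 1          # leftover y > 1 means a prime factor not dividing x

# the worked examples: 132 = 12 * 11 fails on the factor 11; 98 = 2 * 7^2 passes
assert all_prime_factors_divide(168, 132) is False
assert all_prime_factors_divide(168, 98) is True
print("examples reproduced")
```

Each pass strips off the shared part, exactly as in the $h_i, y_i$ sequence above.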
3,789,676
<p>I am trying to calculate the derivative of the cross-entropy when the softmax layer has a temperature <span class="math-container">$T$</span>. That is: <span class="math-container">\begin{equation} p_j = \frac{e^{o_j/T}}{\sum_k e^{o_k/T}} \end{equation}</span></p> <p>This question here was answered at <span class="math-container">$T=1$</span>: <a href="https://math.stackexchange.com/questions/945871/derivative-of-softmax-loss-function">Derivative of Softmax loss function</a></p> <p>Now what would be the final derivative in terms of <span class="math-container">$p_i$</span>, <span class="math-container">$q_i$</span>, and <span class="math-container">$T$</span>? Please see the linked question for the notation.</p> <p>Edit: Thanks to Alex for pointing out a typo</p>
Alex
38,873
<p>It's called the chain rule: <span class="math-container">$\frac{\partial L}{\partial s} = \frac{\partial L}{\partial y} \times\frac{\partial y}{\partial s}$</span>. For the first term, in the case of Euclidean loss, it is <span class="math-container">$(y-L)$</span>. For the second, it is <span class="math-container">$\sigma(s)(1-\sigma(s)) = y(1-y)$</span>.</p>
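For the temperature case specifically, if I haven't slipped, carrying the $1/T$ through the chain rule gives $\frac{\partial p_j}{\partial o_i} = \frac{1}{T} p_j(\delta_{ij} - p_i)$, and hence for cross-entropy $L = -\sum_j q_j \log p_j$ with $\sum_j q_j = 1$, the gradient $\frac{\partial L}{\partial o_i} = \frac{p_i - q_i}{T}$. A finite-difference sanity check with made-up values (pure Python):

```python
import math

def softmax_T(o, T):
    m = max(v / T for v in o)                 # subtract max for numerical stability
    e = [math.exp(v / T - m) for v in o]
    s = sum(e)
    return [v / s for v in e]

def loss(o, q, T):
    p = softmax_T(o, T)
    return -sum(qi * math.log(pi) for qi, pi in zip(q, p))

o = [1.0, -0.5, 2.0]   # logits (hypothetical)
q = [0.2, 0.5, 0.3]    # target distribution, sums to 1
T = 3.0

p = softmax_T(o, T)
analytic = [(pi - qi) / T for pi, qi in zip(p, q)]   # claimed gradient (p - q)/T

h = 1e-6
for i in range(len(o)):
    op, om = list(o), list(o)
    op[i] += h
    om[i] -= h
    numeric = (loss(op, q, T) - loss(om, q, T)) / (2 * h)
    assert abs(numeric - analytic[i]) < 1e-6
print("gradient matches (p - q) / T")
```

So the only change from the $T=1$ answer is the overall factor $1/T$.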
802,877
<blockquote> <p>Find $\displaystyle\lim_{n\to\infty} n(e^{\frac 1 n}-1)$ </p> </blockquote> <p>This should be solved without LHR. I tried to substitute $n=1/k$ but still get an indeterminate form like $\displaystyle\lim_{k\to 0} \frac {e^k-1} k$. Is there a way to solve it without LHR, Taylor, or integrals?</p> <p>Maybe with the definition of a limit?</p> <p>EDIT:</p> <p>$f'(x)=\displaystyle\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}\frac{(x+h)(e^{1/x+h}-1)-x(e^{\frac 1 x}-1)}{h}= \lim_{h\to 0}\frac{xe^{1/x+h}+he^{1/x+h}-x-h-xe^{\frac 1 x}+x}{h}= \lim_{h\to 0}\frac{xe^{1/x+h}+he^{1/x+h}-h-xe^{\frac 1 x}}{h}$</p>
Community
-1
<p><strong>Hint:</strong> You have $\lim_{n\to\infty}\dfrac{e^\frac{1}{n}-e^0}{\frac{1}{n}}=\lim_{k\to0}\dfrac{e^k-e^0}{k}$. Use the definition of the derivative.</p>
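The hint amounts to recognizing the difference quotient of $e^x$ at $0$, whose value is $e^0 = 1$; a quick numeric illustration:

```python
import math

# n * (e^(1/n) - 1) is the difference quotient (e^k - e^0)/k at k = 1/n,
# so it should approach the derivative of e^x at 0, namely 1
for n in (10, 1000, 100000):
    print(n, n * (math.exp(1 / n) - 1))
```

The printed values visibly settle toward $1$ as $n$ grows.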
58,870
<p>I am teaching an introductory course on differentiable manifolds next term. The course is aimed at fourth year US undergraduate students and first year US graduate students who have done basic coursework in point-set topology and multivariable calculus, but may not know the definition of differentiable manifold. I am following the textbook <a href="http://rads.stackoverflow.com/amzn/click/0132126052">Differential Topology</a> by Guillemin and Pollack, supplemented by Milnor's <a href="http://rads.stackoverflow.com/amzn/click/0691048339">book</a>.</p> <p>My question is: <strong>What are good topics to cover that are not in the assigned textbooks?</strong> </p>
Georges Elencwajg
450
<p>I nominate Ehresmann's theorem according to which a proper submersion between manifolds is automatically a locally trivial bundle. It is incredibly useful, in deformation theory for example, but is sadly neglected in introductory courses and books on manifolds. It is completely elementary: witness <a href="http://www.math.ucla.edu/~petersen/manifolds.pdf">these lecture notes</a> by Peter Petersen, where it is proved in a few lines on page 9, the prerequisites being about two pages long.</p> <p><a href="http://www.uib.no/People/nmabd/dt/080627dt.pdf">Bjørn Ian Dundas</a> and our friend <a href="http://www.math.ntnu.no/~stacey/documents/tma4190.lectures.handout.2009-04-24.pdf"> Andrew Stacey </a> also have online documents proving this theorem.</p>
294,519
<p>The problem I am working on is:</p> <p>Translate these statements into English, where C(x) is “x is a comedian” and F(x) is “x is funny” and the domain consists of all people.</p> <p>a) $∀x(C(x)→F(x))$ </p> <p>b) $∀x(C(x)∧F(x))$</p> <p>c) $∃x(C(x)→F(x))$ </p> <p>d) $∃x(C(x)∧F(x))$</p> <hr> <p>Here are my answers:</p> <p>For a): For every person, if they are a comedian, then they are funny.</p> <p>For b): For every person, they are both a comedian and funny.</p> <p>For c): There exists a person who, if he is funny, is a comedian</p> <p>For d): There exists a person who is funny and is a comedian.</p> <p>Here are the book's answers:</p> <p>a) Every comedian is funny. b) Every person is a funny comedian. c) There exists a person such that if she or he is a comedian, then she or he is funny. d) Some comedians are funny. </p> <p>Does the meaning of my answers seem to be in harmony with the meaning of the answers given in the solution manual? The reason I ask is because part a), for instance, is an implication, and "Every comedian is funny," does not appear to be an implication.</p>
Barbara Osofsky
59,437
<p>You literally wrote down, symbol by symbol, what the statements say. But languages used for everyday communication only rarely use quantifiers, and do not have absolute truth/falsehood. Pretend you are talking to a friend. If you gave your answer to, say, a), the friend would probably look at you as if you were crazy. We do not talk that way. If you used the manual's answer, the friend might say 'No way', or 'if they are not, they become unemployed rather soon', or 'well, most'. You are communicating, but reality is not even close to the precision of mathematical statements. Mathematics models the real world, but it is not the real world. People simply do not talk that way. They talk the way the manual answers, and that is much fuzzier than math.</p>
1,877,558
<p>For instance, let $(\mathbb{R}, \mathfrak{T})$ be $\mathbb{R}$ with the usual topology. </p> <p>Why is it that $\mathfrak{T} \times \mathfrak{T}$ is a basis on $\mathbb{R} \times \mathbb{R}$ rather than a topology?</p> <p>It seems that people just take $\mathfrak{T} \times \mathfrak{T}$ as a basis by definition. There must be some open sets in the product that cannot be represented as Cartesian products of sets in $\mathfrak{T}$, but I don't have any examples handy. </p> <p>Can someone please instruct?</p> <p><em>I am still not convinced because the examples so far are not "constructive" :(</em></p>
5xum
112,884
<p>The unit ball $$\{(x,y)| x^2+y^2&lt;1\}$$ is not a Cartesian product of two sets in $\mathbb R$, but it is an open set in $\mathbb R^2$.</p> <p>This can easily be shown using basic set theory - you don't need any knowledge of topology. If you find it hard to show this fact, then it's good practice and I suggest you try to do it!</p>
2,221,033
<p>My question is due to <a href="https://en.wikipedia.org/w/index.php?title=Imaginary_number&amp;diff=prev&amp;oldid=175488747" rel="noreferrer">an edit</a> to the Wikipedia article: <a href="https://en.m.wikipedia.org/wiki/Imaginary_number" rel="noreferrer">Imaginary number</a>.</p> <p>The funny thing is, I couldn't find (in three of my old textbooks) a clear definition of an "imaginary number". (Though they were pretty good at defining "imaginary component", etc.)</p> <p>I understand that the number zero lies on both the real and imaginary axes.<br> <em>But is $\it 0$ both a real number and an imaginary number?</em></p> <p>We know certainly, that there are complex numbers that are neither purely real, nor purely imaginary. But I've always previously considered, that a purely imaginary number had to have a square that is a real and negative number (not just non-positive). </p> <p>Clearly we can (re)define a real number as a complex number with an imaginary component that is zero (meaning that $0$ is a real number), but if one were to define an imaginary number as a complex number with real component zero, then that would also include $0$ among the pure imaginaries. </p> <p><em>What is the complete and formal definition of an "imaginary number" (outside of the Wikipedia reference or anything derived from it)?</em></p>
Jatin
809,617
<p>A complex number $z=a+ib$, where $a$ and $b$ are real numbers, is called:</p> <ol> <li>purely real, if $b=0$; e.g. $56, 78$;</li> <li>purely imaginary, if $a=0$; e.g. $2i, (5/2)i$;</li> <li>imaginary, if $b\neq 0$; e.g. $2+3i, 1-i, 5i$.</li> </ol> <p>$0$ is purely imaginary and purely real but not imaginary.</p>
4,374,391
<blockquote> <p>Find prime number <span class="math-container">$p$</span> such that <span class="math-container">$19p+1$</span> is a square number.</p> </blockquote> <p>Now, I have found out, what I think is the correct answer using this method.<br /> Square numbers can end with - <span class="math-container">$1, 4, 9, 6, 5, 0$</span>.<br /> So, <span class="math-container">$19p+1$</span> also ends with these digits.<br /> Thus, <span class="math-container">$19p$</span> ends with - <span class="math-container">$0, 3, 8, 5, 4, 9$</span>.<br /> As <span class="math-container">$p$</span> is either odd or <span class="math-container">$2$</span>. So, <span class="math-container">$19p$</span> is either odd or ends with <span class="math-container">$8$</span>.<br /> So, we can say that <span class="math-container">$19p$</span> ends with - <span class="math-container">$3, 8, 5, 9$</span>.<br /> So, <span class="math-container">$p$</span> ends with - <span class="math-container">$7, 2, 5, 1$</span>.<br /> As <span class="math-container">$19*7$</span> end with <span class="math-container">$3$</span>, <span class="math-container">$19*2$</span> ends with <span class="math-container">$8$</span> etc.<br /> Thus, possible one digit values of <span class="math-container">$p$</span> - <span class="math-container">$2, 7, 5$</span>. <span class="math-container">$$19*2+1=39$$</span> <span class="math-container">$$19*7+1=134$$</span> <span class="math-container">$$19*5+1=96$$</span> None of these are square numbers.<br /> Possible two digit values of <span class="math-container">$p$</span> - <span class="math-container">$11, 17$</span>. <span class="math-container">$$19*11+1=210$$</span> <span class="math-container">$$19*17+1=324=18^2$$</span> Thus, <span class="math-container">$p=17$</span>.</p> <p>But, I am not satisfied with the solution as it is basically trial and error. Is there a better way to do this?</p>
Khosrotash
104,171
<p>Another idea is to use binary representations, as below: <span class="math-container">$$(0000000...000)_2=0\\(0000000...001)_2=1\\(0000000...010)_2=2\\ (0000000...011)_2=3\\(0000000...100)_2=4\\\vdots\\(0111111...111)_2=2^n-1\\(1000000...000)_2=2^n$$</span> and</p> <p>there are <span class="math-container">$2^n$</span> elements, which gives a bijection.</p>
4,288,460
<blockquote> <p>Suppose that <span class="math-container">$\{X_t : Ω → S := \mathbb{R}^d, t\in T\}$</span> is a stochastic process with independent increments and let <span class="math-container">$\mathcal{B}_t :=\mathcal{B}_t^X$</span> (natural filtration) for all <span class="math-container">$t\in T$</span>. Show, for all <span class="math-container">$0 ≤ s &lt; t$</span>, that <span class="math-container">$(X_t − X_s )$</span> is independent of <span class="math-container">$\mathcal{B}_s^X$</span> and then use this to show <span class="math-container">$\{X_t\}_{t\in T}$</span> is a Markov process with transition kernels defined by <span class="math-container">$0 ≤ s ≤ t$</span>, <span class="math-container">$$q_{s,t}(x, A) := E [1_A (x + X_t − X_s )]\text{ for all }A\in \mathcal{S}\text{ and }x\in\mathbb{R}^d.$$</span></p> </blockquote> <p>The first part, showing that <span class="math-container">$X_t-X_s$</span> is independent of <span class="math-container">$\mathcal{B}^X_s$</span>, I more or less understand from a monotone class lemma.</p> <p>For the part where I need to compute the transition kernel, I am not sure what I have to show. It seems to me I have to show <span class="math-container">$P(X_t\in A|X_s)=q_{s,t}(X_s,A)$</span>, is that correct? To do this I observe <span class="math-container">\begin{align} P(X_t\in A|X_s)=E(1_{X_t\in A}|X_s)=E(1_A(X_t)|X_s)=E(1_A(X_t-X_s+X_s)|X_s) \end{align}</span> But then I am not sure how to finish. The exercise hints that I should use that <span class="math-container">$X_t-X_s$</span> is independent of <span class="math-container">$\mathcal{B}^X_s$</span>.</p>
Robert Shore
640,080
<p>I'll give you an alternative approach. (I'm assuming you don't know yet that the inverse image of a closed set under a continuous function is closed.) If <span class="math-container">$h(x)=0$</span> is always true, then <span class="math-container">$K= \Bbb R$</span> and is closed.</p> <p>Otherwise, we will prove that <span class="math-container">$\Bbb R \setminus K$</span> is open so that <span class="math-container">$K$</span> must be closed. Choose <span class="math-container">$x_0 \in \Bbb R$</span> and assume <span class="math-container">$x_0 \notin K$</span> so that <span class="math-container">$h(x_0)=y \neq 0$</span>. Then because <span class="math-container">$h$</span> is continuous, for <span class="math-container">$\varepsilon = \left \vert \frac y2 \right \vert ~ \exists \delta \gt 0$</span> such that <span class="math-container">$\vert x-x_0 \vert \lt \delta \Rightarrow \vert h(x) - y \vert \lt \varepsilon = \left \vert \frac y2 \right \vert \Rightarrow \vert h(x) \vert \gt \left \vert \frac y2 \right \vert \gt 0.$</span> Thus, for any point <span class="math-container">$x_0 \in \Bbb R \setminus K$</span> we have an open neighborhood <span class="math-container">$U$</span> containing <span class="math-container">$x_0$</span> (namely, <span class="math-container">$U= \{ x \in \Bbb R \mid \vert x-x_0 \vert \lt \delta \}$</span>) such that <span class="math-container">$U \cap K = \varnothing$</span>; <em>i.e.</em>, <span class="math-container">$U \subseteq \Bbb R \setminus K$</span>. Thus, <span class="math-container">$\Bbb R \setminus K$</span> is open so <span class="math-container">$K$</span> is closed.</p>
1,320,469
<p>I am working on the following problem.</p> <blockquote> <p>[R. Vakil] Exercise 19.8.B: Suppose $C$ is a curve of genus $g&gt;1$ over a field $k$ that is not algebraically closed. Show that $C$ has a closed point of degree at most $2g-2$ over the base field.</p> </blockquote> <p>I have no idea how to do this question. This is what I know: since $g&gt;1$, the sections of the dualising sheaf $\omega$ define a morphism $\varphi: C\rightarrow\mathbb{P}^{g-1}$. Hence it defines a morphism $\varphi:C\rightarrow C'$ from $C$ onto its image curve $C'$. The degree of this morphism is $2g-2$, which corresponds to the degree of the extension of the function fields $[k(C):k(C')]$. </p> <p>How should I go on from here, i.e., where do I find that closed point? I tried to break down the cases where $\varphi$ is either a closed embedding or hyperelliptic, but they don't seem to help.</p>
Phil Tosteson
157,315
<p>Okay, by hypothesis $\omega$ has degree $2g -2$ and it has a global section $s \in H^0(\omega)$. </p> <p>Then $\omega$ is $\mathcal O(D)$ for the divisor $D = \sum_p \nu_p(s) \cdot p$, where the sum is taken over all points $p$. And $2g -2 =\deg \omega = \sum_p \nu_p(s) \cdot \deg p$, and all the $\nu_p(s)$ are non-negative since $s$ is global. </p> <p>So in the worst-case scenario, we have a point of degree $2g -2$. </p>
910,070
<p>I am working on a weighted minimization problem. Without the weights, the error function can be expressed as $e^T e$. With weights, $e$ first needs to be element-wise multiplied by $w$, then the same formula applies: $(w \circ e)^T (w \circ e)$. How do I express it in pure matrix form (without the $\circ$)? The $\circ$ operation is giving me a lot of trouble in trying to derive a derivative of a chained function on a set of parameters. It would be better if it's a matrix whose diagonal is $w_i e_i$, and 0 elsewhere; or a vector of $w_i e_i$. </p> <p>For the weighted minimization problem, I have $$g = e^T e, \; e_i = w_i u_i, \; u = h(X)$$ where $$u, w, e \in \mathbb{R}_{m}, \; X \in \mathbb{R}_{n}, \; g: \mathbb{R}_{m} \rightarrow \mathbb{R}_{1}, \; h: \mathbb{R}_{n} \rightarrow \mathbb{R}_{m} $$ I want to find $\frac{dg(X)}{dX}$. I think this should be in $\mathbb{R}_{n}^T$. Applying the chain rule in the single variable manner, $$ \frac{dg(X)}{dX} = 2 e \frac{de}{du} \frac{du}{dX} $$ $$ \frac{dg(u)}{du} \in \mathbb{R}_{m}^T, \; \frac{du}{dX} \in \mathbb{R}_{mn} $$ The sizes of the matrices don't match up because $e \frac{de}{du}$ should be $e \circ \frac{de}{du}$.</p>
Memming
24,717
<p>Note that $$ (w \circ e)^\top (w \circ e) = e^\top W e$$ where $W = \mathrm{diag}(w_1^2, \ldots, w_n^2)$ is the diagonal matrix with the squared weights on its diagonal, and $\circ$ denotes the Hadamard (or Schur) product.</p>
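A quick numeric sanity check of this identity (my own addition, not part of the answer), in pure Python with small arbitrarily chosen vectors:

```python
# Verify (w ∘ e)^T (w ∘ e) = e^T W e for W = diag(w_1^2, ..., w_n^2),
# using small illustrative vectors.
w = [2.0, -1.0, 0.5]
e = [1.0, 3.0, -4.0]
n = len(w)

# Left-hand side: squared norm of the Hadamard product w ∘ e.
hadamard = [wi * ei for wi, ei in zip(w, e)]
lhs = sum(h * h for h in hadamard)

# Right-hand side: explicit quadratic form e^T W e with W diagonal.
W = [[w[i] ** 2 if i == j else 0.0 for j in range(n)] for i in range(n)]
We = [sum(W[i][j] * e[j] for j in range(n)) for i in range(n)]
rhs = sum(e[i] * We[i] for i in range(n))

assert abs(lhs - rhs) < 1e-12
assert abs(lhs - 17.0) < 1e-12  # 4*1 + 1*9 + 0.25*16
```

Because $W$ is diagonal, the quadratic form collapses to $\sum_i w_i^2 e_i^2$, which is what makes the chain-rule computation in the question tractable.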
3,366,569
<p>I am trying to solve the following problem.</p> <p>Write all elements of the following set: <span class="math-container">$ A=\left \{ x\in\mathbb{R}; \sqrt{8-t+\sqrt{2-t}}\in\mathbb{R}, t\in\mathbb{R} \right \}$</span>.</p> <p>My assumption is that the solution is <span class="math-container">$\mathbb{R}$</span> and that we don't need to work out when the square roots are defined, because of the <span class="math-container">$x$</span>. Am I correct?</p> <p>Thanks</p>
Lutz Lehmann
115,115
<p>You should have first tried the direct Euler-Cauchy approach by computing the characteristic polynomial <span class="math-container">$0=m(m-1)+m-1=m^2-1$</span> giving <span class="math-container">$x,x^{-1}$</span> as basis solutions. The right side is not of the form <span class="math-container">$x^r\ln(x)^p$</span> or a sum of such terms, so that the method of undetermined coefficients does not work. </p> <p>Use the variation-of-constants method <span class="math-container">\begin{align} y_p(x)&amp;=xv(x)+x^{-1}w(x),\\ 0&amp;=xv'(x)+x^{-1}w'(x),\\ y_p'(x)&amp;=v(x)-x^{-2}w(x),\\ x^2e^{2x}=x(xy_p'(x))'-y_p(x)&amp;=x^2v'(x)-w'(x) \end{align}</span> so that one of the coefficient integrals is readily solvable <span class="math-container">$$ v'(x)=\frac12e^{2x}\implies v(x)=\frac14e^{2x}+C $$</span> while the more complicated reads as <span class="math-container">\begin{align} w'(x)=-\frac12x^2e^{2x}\implies w(x)=-\frac12\int x^2e^{2x}dx &amp;=-\frac14x^2e^{2x}+\frac12\int xe^{2x}dx \\ &amp;=-\frac14x^2e^{2x}+\frac14xe^{2x}-\frac18e^{2x}+D \end{align}</span> Combined this results in <span class="math-container">\begin{align} y_p(x)&amp;=\frac14e^{2x}-\frac18x^{-1}e^{2x}\\ &amp;\color{blue}{\text{test: }\begin{aligned}[t] xy_p'(x)&amp;=\frac12xe^{2x}+\frac18x^{-1}e^{2x}-\frac14e^{2x}\\ x(xy_p'(x))'&amp;=\frac12xe^{2x}+x^2e^{2x}-\frac18x^{-1}e^{2x}+\frac14e^{2x}-\frac12xe^{2x} \\ &amp;=x^2e^{2x}+y_p(x) \end{aligned}} \end{align}</span></p>
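As an independent numerical spot-check (my own addition, not part of the answer), one can confirm that the combined particular solution really satisfies $x(xy_p')'-y_p=x^2e^{2x}$; here $y_p'$ is written out analytically and the outer derivative is taken by a central difference:

```python
import math

# Check numerically that y_p(x) = e^{2x}/4 - e^{2x}/(8x) solves
# x (x y')' - y = x^2 e^{2x}.  The first derivative is analytic; the
# outer derivative uses a central difference with step h.
def y(x):
    return math.exp(2 * x) * (0.25 - 1 / (8 * x))

def y_prime(x):
    # d/dx [e^{2x} (1/4 - 1/(8x))] by the product rule
    return math.exp(2 * x) * (0.5 - 1 / (4 * x) + 1 / (8 * x * x))

def residual(x, h=1e-6):
    u = lambda s: s * y_prime(s)                 # inner expression x y'
    outer = (u(x + h) - u(x - h)) / (2 * h)      # (x y')' numerically
    return x * outer - y(x) - x * x * math.exp(2 * x)

assert all(abs(residual(x)) < 1e-4 for x in (0.5, 1.0, 2.0, 3.0))
```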
605,155
<p>$\newcommand{\ker}{\operatorname{ker}}$</p> <p>Prove that: $\ker AB\subseteq\ker A+\ker B$.</p> <p>My solution:</p> <p>$x\in \ker AB\to ABx=0\to \begin{cases} Ax=0\to x\in \ker A\\Bx=0\to x\in \ker B\end{cases}$</p> <p>$\to x\in \ker A+\ker B\to \ker AB\subseteq \ker A+\ker B$</p> <p>Question: Is this correct? If not, what is the right way to do it?</p>
Mercy King
23,304
<p>$$ x\in \ker(AB) \iff ABx=0 \iff Bx\in \ker (A)\iff x \in B^{-1}(\ker(A)), $$ i.e. $$ \ker(AB)=B^{-1}(\ker(A)). $$</p>
262,319
<pre><code>ContourPlot[ EuclideanDistance[{-5, 0}, {x, y}]* EuclideanDistance[{5, 0}, {x, y}], {x, -15, 15}, {y, -11, 11}, Contours -&gt; Range[5, 150, 20], Frame -&gt; False, ContourLabels -&gt; (Text[Style[#3, Directive[Blue, 15]], {#1, #2}] &amp;), AspectRatio -&gt; Automatic, ColorFunction -&gt; (If[# &lt; 145, ColorData[{&quot;TemperatureMap&quot;, {0, 145}}, #], None] &amp;), ColorFunctionScaling -&gt; False] </code></pre> <p><a href="https://i.stack.imgur.com/3hvcE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3hvcE.png" alt="enter image description here" /></a></p> <p>How can I remove those redundant <code>25</code> labels? I hope to keep just one or two, but I get a ton of <code>25</code> labels, which makes the graphic look crowded...</p>
Bob Hanlon
9,362
<p>Simplifying and evaluating the argument will also reduce the redundant labels.</p> <pre><code>$Version (* &quot;13.0.0 for Mac OS X x86 (64-bit) (December 3, 2021)&quot; *) ContourPlot[ Evaluate[ Simplify[ EuclideanDistance[{-5, 0}, {x, y}]* EuclideanDistance[{5, 0}, {x, y}]]], {x, -15, 15}, {y, -11, 11}, Contours -&gt; Range[5, 150, 20], Frame -&gt; False, ContourLabels -&gt; (Text[Style[#3, Directive[Blue, 15]], {#1, #2}] &amp;), AspectRatio -&gt; Automatic, ColorFunction -&gt; (If[# &lt; 145, ColorData[{&quot;TemperatureMap&quot;, {0, 145}}, #], None] &amp;), ColorFunctionScaling -&gt; False] </code></pre> <p><a href="https://i.stack.imgur.com/JSQdb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JSQdb.png" alt="enter image description here" /></a></p>
178,028
<p>I am given $G = \{x + y \sqrt7 \mid x^2 - 7y^2 = 1; x,y \in \mathbb Q\}$ and the task is to determine the nature of $(G, \cdot)$, where $\cdot$ is multiplication. I'm having trouble finding the inverse element (I have found the neutral element and proven associativity).</p>
hmakholm left over Monica
14,366
<p><em>Hint</em>. Your set has a certain formal similarity with $\{x+\sqrt{-1}y \mid x^2+y^2=1\}$ (arising by writing $-1$ for both occurrences of $7$), which is the unit circle in the complex plane. In the complex unit circle, the multiplicative inverse is just the complex conjugate, i.e. what you get by negating the $y$ coordinate.</p> <p>It's worth a try to see if something similar works here.</p>
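A quick numeric check of the hint (my own addition; $(x,y)=(8,3)$ is just one illustrative rational point of the set, since $8^2-7\cdot 3^2=1$):

```python
import math

# Check the conjugate-inverse idea on one illustrative element of G:
# (x, y) = (8, 3) satisfies x^2 - 7 y^2 = 64 - 63 = 1, and then
# (x + y*sqrt(7)) * (x - y*sqrt(7)) = x^2 - 7 y^2 = 1.
x, y = 8, 3
assert x * x - 7 * y * y == 1

g = x + y * math.sqrt(7)
g_conjugate = x - y * math.sqrt(7)
assert abs(g * g_conjugate - 1.0) < 1e-9
```

The same algebra works symbolically: negating $y$ gives the multiplicative inverse precisely because the defining condition $x^2-7y^2=1$ is the "norm" of the element.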
21,201
<p>Next Monday, I'll have an interview at Siemens for an internship where I have to know about fluid dynamics/computational fluid dynamics. I'm not a physicist, so does somebody have a suggestion for a good book where I can read about some basics? Thank you very much.</p>
anonymous
5,334
<hr> <p>Let $G$ be a finite subgroup of $GL(V)$. I claim that the following are equivalent:</p> <p>$(1)$ Any two elements of $G$ which are $GL(V)$ conjugate are also $G$ conjugate.</p> <p>$(2)$ The representation ring $\mathbb{Q} \otimes \mathrm{Rep}(G)$ is spanned by representations $S^{\lambda}(V)$, where $S^{\lambda}$ ranges over all Schur functors.</p> <p>Proofs: For $g$ in $G$, let the eigenvalues of $g$ be $e^{2 \pi i q_k(g)}$, where $(q_1(g), q_2(g), \ldots, q_n(g))$ is a multiset of elements of $\mathbb{Q}/\mathbb{Z}$. Write $Q(g)$ for this multiset. So $g$ and $g'$ are $GL(V)$ conjugate if and only if $Q(g) = Q(g')$. When necessary, we will write $Q_W(g)$ for the analogous construction for some other representation $W$.</p> <p>If $Q(g)=Q(g')$ then $Q(g^k) = k Q(g) = k Q(g') = Q((g')^k)$. So, for every Adams operator $\psi^k$, the actions of $g$ and $g'$ on the virtual representation $\psi^k(V)$ have the same eigenvalues and, in particular, the same trace. Also, $Q_{V_1 \oplus V_2}(g)$ and $Q_{V_1 \otimes V_2}(g)$ are determined by $Q_{V_1}(g)$ and $Q_{V_2}(g)$. So, if $Q(g)=Q(g')$ then the linear functions $Tr(g)$ and $Tr(g')$ are equal on the sub-$\Lambda$-ring of $\mathbb{Q} \otimes \mathrm{Rep}(G)$ generated by $V$.</p> <p>Assume condition 2, and let $Q(g) = Q(g')$. Then $Tr(g)$ and $Tr(g')$ are equal on all of $\mathbb{Q} \otimes \mathrm{Rep}(G)$, so $g$ and $g'$ are $G$ conjugate.</p> <p>Conversely, assume condition 1. Fix any $h \in G$. Let $S$ be the sub-$\Lambda$-ring of $\mathbb{Q} \otimes \mathrm{Rep}(G)$ generated by $V$. We will construct a virtual representation $W$ in $\mathbb{C} \otimes S$ such that $Tr_W(g)$ is $0$ for $g$ not conjugate to $h$ and $1$ for $g$ conjugate to $h$. Taking linear combinations of such functionals, we can clearly generate all class functions on $G$, so $S$ must be the whole of $\mathbb{Q} \otimes \mathrm{Rep}(G)$.</p> <p>By condition (1), $Q(h)$ is different from $Q(g)$ for any $g$ not conjugate to $h$. 
Therefore, we can find a symmetric polynomial $F$, with coefficients in $\mathbb{C}$, which vanishes at $e^{2 \pi i Q(g)}$ for $g$ not conjugate to $h$, but does not vanish at $e^{2 \pi i Q(h)}$. (Here $e^{2 \pi i Q(g)}$ is a point of $\mathbb{C}^n$, which should be considered as defined only up to the action of $S_n$.) Using the standard relation between symmetric polynomials and $\Lambda$-ring operations, $F(V)$ is the desired $W$.</p>
402,214
<p>I recently obtained "What is Mathematics?" by Richard Courant and I am having trouble understanding what is happening with the Prime Number Unique Factor Composition Proof (found on Page 23).</p> <p>The first part:</p> <blockquote> <p><img src="https://i.stack.imgur.com/h5rCh.png" alt="enter image description here"></p> </blockquote> <p>I have looked over it many times but I just don't understand what he is doing and why, as an example, if you remove the first factor from either side of the equation you end up with two essentially different compositions that make up a new smaller integer.</p> <p>I'm sure it is a simple error on my behalf but I have been stuck on this for a long time so I would appreciate a walkthrough explained clearly or some pointers in the right direction. Thank you.</p>
Federica Maggioni
49,358
<p>You want to show that every positive integer can be expressed as a product of prime numbers, and secondly that such a decomposition is unique (except for the order of the factors). Call $\mathscr{F}$ the set of positive integers not satisfying your claim, i.e. the set of positive integers that can be written in more than one way as a prime product. In terms of $\mathscr{F}$, your claim becomes: show that $\mathscr{F}$ is empty. By contradiction, suppose $\mathscr{F}$ is non-empty and derive an absurdity. Every non-empty set of positive integers has a smallest element; call $m$ the smallest element of $\mathscr{F}$. Using the book's notation, suppose $p_1=q_1$ and cancel the two factors from both sides. Since the two factorizations were supposed to be different, they remain different after that cancellation, producing an element of $\mathscr{F}$ smaller than $m$, contradicting the definition of $m$ as the smallest element of $\mathscr{F}$. Then $p_1$ is strictly less than $q_1$ or vice versa...</p>
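The descent argument above concerns the proof; as a brute-force illustration of the statement itself (my own addition, not part of the answer), one can enumerate every nondecreasing prime factorization of each small integer and confirm there is exactly one:

```python
# Enumerate all nondecreasing sequences of primes whose product is n and
# confirm there is exactly one for each n up to a small bound.
def is_prime(p):
    return p >= 2 and all(p % d for d in range(2, int(p ** 0.5) + 1))

def factorizations(n, smallest=2):
    # All nondecreasing prime sequences with product n, each factor >= smallest.
    if n == 1:
        return [[]]
    result = []
    for p in range(smallest, n + 1):
        if n % p == 0 and is_prime(p):
            result += [[p] + tail for tail in factorizations(n // p, p)]
    return result

counts = [len(factorizations(n)) for n in range(2, 201)]
assert all(c == 1 for c in counts)
assert factorizations(12) == [[2, 2, 3]]
```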
2,323,351
<p>I thought we could take the $4$ vowels and find the number of arrangements, $4!$, and multiply it by the number of arrangements that can be made with the consonants, that is $5!/2!$. However, my approach seems to be wrong. </p>
Evargalo
443,536
<p>Assume $n \geq 7$.</p> <p>Starting from the set {1,3,5,7}, any solution set can be reached by increasing the gaps between the chosen integers: you have to place $n-7$ gap increments in any of the 5 following spots: before the 1, between 1 and 3, between 3 and 5, between 5 and 7, or after 7.</p> <p>For instance, if $n=20$, you can place the $n-7=13$ extra gaps in the 5 spots like this: (2,5,0,2,4); then the set becomes {3,10,12,16}: the 4 digits have been increased by 2, then the three last digits have been increased by 5, then the two last digits have been increased by 0, then the last digit has been increased by 2, and the 4 biggest integers in {1,...,20} are not used.</p> <p>In that way you reach all the solutions once and only once. That means the number of solutions is the number of ways to place $n-7$ items in $5$ boxes.</p> <p>Generally the number of ways to place $k$ identical items in $l$ boxes is $\binom{l+k-1}{l-1}$. Hence the solution to your problem is $\binom{n-7+5-1}{5-1}$=$\binom{n-3}{4}$.</p>
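A brute-force cross-check of the gap argument (my own addition; I am reading the underlying problem as "choose 4 integers from $\{1,\dots,n\}$ with no two consecutive", which is the selection rule the gap encoding describes):

```python
from itertools import combinations
from math import comb

# Count 4-subsets of {1, ..., n} with all pairwise gaps >= 2 and compare
# with the closed form C(n-3, 4).
def count_spread_quadruples(n):
    return sum(
        1
        for quad in combinations(range(1, n + 1), 4)
        if all(b - a >= 2 for a, b in zip(quad, quad[1:]))
    )

for n in range(7, 21):
    assert count_spread_quadruples(n) == comb(n - 3, 4)
```

For $n=7$ the only admissible set is the starting set {1,3,5,7}, matching $\binom{4}{4}=1$.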
2,323,351
<p>I thought we could take the $4$ vowels and find the number of arrangements, $4!$, and multiply it by the number of arrangements that can be made with the consonants, that is $5!/2!$. However, my approach seems to be wrong. </p>
Christian Blatter
1,303
<p>Each admissible choice is a binary word of length $n$ containing exactly four ones, and satisfying the extra condition that after the first three ones there is at least one zero.</p> <p>Given an admissible word $w$ remove the first zero after each of the first three ones, and you obtain a binary word $w'$ of length $n-3$ containing four ones, and satisfying no extra condition. Conversely: Given any binary word $w'$ of length $n-3$ containing four ones insert a zero after the first three ones, and you obtain an admissible word $w$ of length $n$.</p> <p>The number of admissible words of length $n$ therefore is ${\displaystyle{n-3\choose 4}}$.</p>
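A brute-force check of the word count (my own addition; I am reading "after the first three ones there is at least one zero" as each of the first three ones being immediately followed by a zero, which is what the removal bijection uses):

```python
from itertools import combinations
from math import comb

# Count binary words of length n with exactly four 1s in which each of the
# first three 1s is immediately followed by a 0, and compare with C(n-3, 4).
def count_admissible(n):
    total = 0
    for ones in combinations(range(n), 4):
        word = [0] * n
        for i in ones:
            word[i] = 1
        if all(i + 1 < n and word[i + 1] == 0 for i in ones[:3]):
            total += 1
    return total

for n in range(7, 16):
    assert count_admissible(n) == comb(n - 3, 4)
```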
2,565,802
<p>Calculate the volume of the region bounded by $z=0$, $z=1$, and $(z+1)\sqrt{x^2+y^2}=1$.</p> <p>The integral is $\int_{B}z\text{ dV}$</p> <p>The region is the thing between the two green planes. The first plane is $z=1$, the second is $z=0$.</p> <p>Clearly we have $0\leq z\leq 1$, but I'm not sure what to bound next? Should I be using cylindrical coordinates? </p> <p>Would I be correct in saying $0\leq r\leq \dfrac{1}{z+1}$? <a href="https://i.stack.imgur.com/THRhl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/THRhl.png" alt="enter image description here"></a></p>
Stephen Meskin
465,208
<p>Let the $n$ people around the table be labeled $1, 2, \ldots n $ and let $k$ be the # of those people to be selected.<br> We will count using two cases. Case A: $1$ is selected. Case B: $1$ is not selected. </p> <p>Case A: $ \binom{n-k-1}{k-1}$<br> Case B: $ \binom{n-k}{k}$<br> Their sum is $\frac{n}{k}\binom{n-k-1}{k-1}$ </p> <p>You can check that the formula works for your example (for which the answer is 16). </p> <p>I will prove Case A, leaving Case B for you. </p> <p>EDIT: This proof has been revised using the methods of the clean proof by @ChristianBlatter </p> <p>Assume $1$ is selected. Consider a sequence of $n-k\; 0$s. There are $n-k-1$ spaces between the $0$s. Select $k-1$ of these spaces. This can be done $ \binom{n-k-1}{k-1}$ ways. Place a $1$ into each of the selected places. This gives a unique sequence of length $n-1$ consisting $n-k \; 0\text{s and } k-1\; 1$s starting and ending with $0$ and with a $0$ between each of the $1$s. </p> <p>Now label the elements of the string $2, 3, 4, \ldots , n$. The selected hand shakers are $1$ and the $k-1$ people in the string that had been selected.</p>
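A brute-force check of the two-case count (my own addition; I am assuming the selection rule is "no two chosen people are adjacent around the circular table", under which the quoted example corresponds to $n=8$, $k=3$, giving 16):

```python
from itertools import combinations
from math import comb

# Count k-subsets of n points on a circle with no two chosen points
# adjacent, and compare with n/k * C(n-k-1, k-1).
def count_circular(n, k):
    return sum(
        1
        for chosen in combinations(range(n), k)
        if all((b - a) % n >= 2 and (a - b) % n >= 2
               for a, b in combinations(chosen, 2))
    )

def formula(n, k):
    return n * comb(n - k - 1, k - 1) // k   # exact: the product is divisible by k

checks = [(n, k) for n in range(6, 13) for k in range(2, n // 2 + 1)]
assert all(count_circular(n, k) == formula(n, k) for n, k in checks)
assert formula(8, 3) == 16
```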
2,565,802
<p>Calculate the volume of the region bounded by $z=0$, $z=1$, and $(z+1)\sqrt{x^2+y^2}=1$.</p> <p>The integral is $\int_{B}z\text{ dV}$</p> <p>The region is the thing between the two green planes. The first plane is $z=1$, the second is $z=0$.</p> <p>Clearly we have $0\leq z\leq 1$, but I'm not sure what to bound next? Should I be using cylindrical coordinates? </p> <p>Would I be correct in saying $0\leq r\leq \dfrac{1}{z+1}$? <a href="https://i.stack.imgur.com/THRhl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/THRhl.png" alt="enter image description here"></a></p>
Christian Blatter
1,303
<p>Assume there are $N$ admissible quadruples. These involve $4N$ choices of an individual, and by symmetry each person is chosen the same number of times, hence ${N\over25}$ times. We now count the number of admissible quadruples having person $100=0$ as a member. The admissible choices of the remaining three persons can be encoded as a binary word of length $99$ obtained in the following way: Write $96$ zeros, and insert a one into $3$ of the $95$ gaps between the zeros.</p> <p>It follows that the total number of admissible quadruples is given by $$N=25\cdot{95\choose3}=3\,460\,375\ .$$</p>
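A numeric companion to the argument (my own addition; I am assuming "admissible" means that no two of the four chosen people are adjacent on the circle of 100):

```python
from itertools import combinations
from math import comb

# Count admissible quadruples containing person 0 by brute force; by the
# binary-word encoding this should be C(95, 3), and the symmetry argument
# then gives 25 * C(95, 3) in total.
def adjacent(a, b, n=100):
    return (a - b) % n == 1 or (b - a) % n == 1

with_zero = sum(
    1
    for trio in combinations(range(1, 100), 3)
    if all(not adjacent(a, b) for a, b in combinations((0,) + trio, 2))
)
assert with_zero == comb(95, 3)

total = 25 * with_zero
assert total == 3_460_375
```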
1,177,493
<p>If $p$ is a prime and $p \equiv 1 \bmod 4$, how many ways are there to write $p$ as a sum of two squares? Is there an explicit formulation for this?</p> <p>There's a theorem that says that $p = 1 \bmod 4$ if and only if $p$ is a sum of two squares so this number must be at least 1. There's also the Sum of Two Squares Theorem for the prime factorization of integers and the Pythagorean Hypotenuse Proposition which says that a number $c$ is a hypotenuse if and only if it's a factor of $1 \bmod 4$ primes. All of these theorems only assert the existence of $1 \bmod 4$ primes as the sum of two squares. How do I (perhaps use these altogether) to find the exact number of different ways to write such prime as a sum of two squares?</p>
mcmat23
180,748
<p>If $p$ is a prime with $p \equiv 1 \bmod 4$, then there is exactly one pair $\{a, b\} \subset \mathbb N$ (unique up to order) such that $p = a^2 + b^2$. In fact, suppose $p = a^2 + b^2 = c^2 + d^2$; then in $\mathbb Z[i]$ we have $(a + ib)(a - ib) = (c + id)(c - id)$, and $N(a + ib) = N(a - ib) = N(c + id) = N(c - id) = p$, where $N(a + ib) = a^2 + b^2$ is the norm in $\mathbb Z[i]$. Then $a + ib, a - ib, c + id, c - id$ are primes in $\mathbb Z[i]$ (their norms are prime), so $c + id$ must be an associate of $a + ib$ or of $a - ib$, and we can conclude that $\{a, b\} = \{c, d\}$.</p>
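An empirical check of the uniqueness claim (my own addition, not part of the proof): every prime $p\equiv 1 \pmod 4$ below a small bound has exactly one representation $p=a^2+b^2$ with $0<a\leq b$.

```python
from math import isqrt

# For each prime p ≡ 1 (mod 4) below 2000, list all representations
# p = a^2 + b^2 with 0 < a <= b and confirm there is exactly one.
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

def representations(p):
    reps = []
    for a in range(1, isqrt(p) + 1):
        b = isqrt(p - a * a)
        if b * b == p - a * a and a <= b:
            reps.append((a, b))
    return reps

primes_1_mod_4 = [p for p in range(5, 2000) if is_prime(p) and p % 4 == 1]
assert all(len(representations(p)) == 1 for p in primes_1_mod_4)
```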
752,517
<p>From Wikipedia</p> <blockquote> <p>...the free group $F_{S}$ over a given set $S$ consists of all expressions (a.k.a. words, or terms) that can be built from members of $S$, considering two expressions different unless their equality follows from the group axioms (e.g. $st = suu^{−1}t$, but $s ≠ t$ for $s,t,u \in S$). The members of $S$ are called generators of $F_{S}$.</p> </blockquote> <p>I don't understand the distinction between "<strong>$a$ and $b$ freely generate a group</strong>", "$a$ and $b$ <strong>generate a free group</strong>" and just checking whether a group is free. For example, I thought that if $a$ and $b$ freely generate a group then that group is free and vice versa. However, I have seen statements like "$a$ and $b$ freely generate a free subgroup of..." quite a few times, and so there seems to be some distinction between the terms. If so, could you please provide an example where one of the conditions holds but the other doesn't?</p> <p>If, for example, $S=\{a,b\}$, but $a^{3}=1$ and $b^{2}=1$, is it the case that $S$ cannot generate a free group even if any word written using only the letters $a$, $a^2$ and $b$ (excluding powers of these) has a unique representation? If we say that we are considering words in $\{a,b\}$, does that mean that we consider words using $a$, $b$, $a^{-1}$ and $b^{-1}$ as letters, identifying $b^2=1$ and $a^3=1$ in the process of reducing the word, or that we use any integer power of $a$ and $b$ and only reduce identities that hold by the group axioms, independent of the actual group structure we are considering?</p> <p>I apologize if the question is confusing, but I am quite confused myself. If you think there is any way of clarifying the question, please feel free to suggest it.</p> <p>Thank you.</p>
Michael Hardy
11,667
<p><b>PS: I've written a third answer that shows the really simple way to do this, without trigonometric functions.</b></p> <p>The usual topology on the circle $\{(x,y) : x^2+y^2=1\}$ can be characterized in at least the following two ways, and I think it's fairly easy to show the two ways yield the same topology:</p> <ul> <li>A subset of $\{(x,y) : x^2+y^2=1\}$ is open iff its inverse-image under the mapping $\theta\mapsto(\cos\theta,\sin\theta)$ is an open subset of the quotient space $\mathbb R/2\pi\mathbb Z$.</li> <li>A subset of $\{(x,y) : x^2+y^2=1\}$ is open iff it is the intersection of that set with an open subset of $\mathbb R^2$ with the usual topology. In other words, it is the subspace topology.</li> </ul> <p>A subset of $\mathbb R/2\pi\mathbb Z$ is open iff it is the union of a set of disjoint open intervals.</p> <p>With that much, I suspect you could show that $\mathbb R\cup\{\infty\}$ is homeomorphic to $\{(x,y) : x^2+y^2=1\}$ without mentioning compactness or the Hausdorff property.</p> <p>Can you show that if a space $U$ is a subspace of $V$, with the subspace topology, and $V$ is a Hausdorff space, then $U$ is a Hausdorff space?</p> <p><b>PS:</b> In a comment you ask about surjectivity. If you mean surjectivity of the mapping $t\mapsto(\cos(2\arctan t),\sin(2\arctan t))$, then that can be shown something like this: Suppose $x,y$ are real and $x^2+y^2=1$. Draw the line through the two points $(-1,0)$ and $(x,y)$, and look at that line's intersection with the set $\{(0,y): y\in\mathbb R\}$. Call that intersection point $(0,t)$, i.e. let $t$ be the second component of the pair. That they do intersect if $(x,y)\ne(-1,0)$ is easy to prove. In case $(x,y)=(-1,0)$, then let $t=\infty$.</p> <p>To show that that value of $t$ will serve may require you to recall some secondary-school geometry: Let $O=(0,0)$, $A=(1,0)$, $B=(-1,0)$, $C=(x,y)$, with $x^2+y^2=1$. Then $\angle AOC= 2\angle ABC$.</p>
752,517
<p>From Wikipedia</p> <blockquote> <p>...the free group $F_{S}$ over a given set $S$ consists of all expressions (a.k.a. words, or terms) that can be built from members of $S$, considering two expressions different unless their equality follows from the group axioms (e.g. $st = suu^{−1}t$, but $s ≠ t$ for $s,t,u \in S$). The members of $S$ are called generators of $F_{S}$.</p> </blockquote> <p>I don't understand the distinction between "<strong>$a$ and $b$ freely generate a group</strong>", "$a$ and $b$ <strong>generate a free group</strong>" and just checking whether a group is free. For example, I thought that if $a$ and $b$ freely generate a group then that group is free and vice versa. However, I have seen statements like "$a$ and $b$ freely generate a free subgroup of..." quite a few times, and so there seems to be some distinction between the terms. If so, could you please provide an example where one of the conditions holds but the other doesn't?</p> <p>If, for example, $S=\{a,b\}$, but $a^{3}=1$ and $b^{2}=1$, is it the case that $S$ cannot generate a free group even if any word written using only the letters $a$, $a^2$ and $b$ (excluding powers of these) has a unique representation? If we say that we are considering words in $\{a,b\}$, does that mean that we consider words using $a$, $b$, $a^{-1}$ and $b^{-1}$ as letters, identifying $b^2=1$ and $a^3=1$ in the process of reducing the word, or that we use any integer power of $a$ and $b$ and only reduce identities that hold by the group axioms, independent of the actual group structure we are considering?</p> <p>I apologize if the question is confusing, but I am quite confused myself. If you think there is any way of clarifying the question, please feel free to suggest it.</p> <p>Thank you.</p>
The very fluffy Panda
140,598
<p>It is also useful to know that if $X$ and $Y$ are locally compact Hausdorff spaces and are homeomorphic, then so are their one-point compactifications. Another way of looking at the problem would be by considering the stereographic projection, which extends to a map from $\mathbb{R}\cup \{\infty\}$ to $\displaystyle S^1$.</p>
752,517
<p>From Wikipedia</p> <blockquote> <p>...the free group $F_{S}$ over a given set $S$ consists of all expressions (a.k.a. words, or terms) that can be built from members of $S$, considering two expressions different unless their equality follows from the group axioms (e.g. $st = suu^{−1}t$, but $s ≠ t$ for $s,t,u \in S$). The members of $S$ are called generators of $F_{S}$.</p> </blockquote> <p>I don't understand the distinction between "<strong>$a$ and $b$ freely generate a group</strong>", "$a$ and $b$ <strong>generate a free group</strong>" and just checking whether a group is free. For example, I thought that if $a$ and $b$ freely generate a group then that group is free and vice versa. However, I have seen statements like "$a$ and $b$ freely generate a free subgroup of..." quite a few times, and so there seems to be some distinction between the terms. If so, could you please provide an example where one of the conditions holds but the other doesn't?</p> <p>If, for example, $S=\{a,b\}$, but $a^{3}=1$ and $b^{2}=1$, is it the case that $S$ cannot generate a free group even if any word written using only the letters $a$, $a^2$ and $b$ (excluding powers of these) has a unique representation? If we say that we are considering words in $\{a,b\}$, does that mean that we consider words using $a$, $b$, $a^{-1}$ and $b^{-1}$ as letters, identifying $b^2=1$ and $a^3=1$ in the process of reducing the word, or that we use any integer power of $a$ and $b$ and only reduce identities that hold by the group axioms, independent of the actual group structure we are considering?</p> <p>I apologize if the question is confusing, but I am quite confused myself. If you think there is any way of clarifying the question, please feel free to suggest it.</p> <p>Thank you.</p>
Michael Hardy
11,667
<p><b>PS: I've written a third answer that shows the really simple way to do this, without trigonometric functions.</b></p> <p>Since you're making a lot of the problem of proving surjectivity, I'm adding a new answer dealing with trigonometry and geometry rather than with point-set topology. You have \begin{align} x &amp; = \cos(2\arctan(t)), \\ y &amp; = \sin(2\arctan(t)). \end{align}</p> <p>Two standard double-angle formulas say $\cos(2u) = \cos^2u-\sin^2 u$ and $\sin(2u)=2\sin u\cos u$. That gives us \begin{align} x &amp; = \cos^2\arctan t - \sin^2\arctan t \\ y &amp; = 2\sin(\arctan t)\cos(\arctan t). \end{align}</p> <p>If you draw a right triangle with "adjacent" side $1$ and "opposite" side $t$ then the angle to which those are "adjacent" and "opposite" is $\arctan t$ and the hypotenuse, by the Pythagorean theorem, is $\sqrt{1+t^2}$. Therefore \begin{align} \cos (\arctan(t)) &amp; = \frac{\text{adjacent}}{\text{hypotenuse}} = \frac{1}{\sqrt{1+t^2}}, \\[10pt] \sin (\arctan(t)) &amp; = \frac{\text{opposite}}{\text{hypotenuse}} = \frac{t}{\sqrt{1+t^2}}. \end{align} Hence \begin{align} x &amp; = \frac{1-t^2}{1+t^2}, \tag 1 \\[10pt] y &amp; = \frac{2t}{1+t^2}. \tag 2 \end{align} From this you can get $$ \frac{y}{x+1} = t, \tag 3 $$ so you have your inverse function.</p> <p>You can show that the line passing through the two points $(0,t)$ and $(-1,0)$ intersects the circle at $(x,y)$. From that you can demonstrate the equality in $(3)$ just by finding the $y$-intercept of the line passing through $(x,y)$ and $(-1,0)$.</p> <p>This gives you surjectivity, except that you also have to check that the image of $\infty$ is $(-1,0)$.</p> <p><b>PS:</b> Here's another way to view it. Identify the point $(x,y)$ on the circle with the complex number $x+iy$. Then instead of writing $(1)$ and $(2)$ above one can write $$ x+iy = \frac{1+it}{1-it}. $$ This can be solved for $t$, but maybe simplifying the answer is messier than I'd hoped.</p>
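A round-trip check of $(1)$–$(3)$ in exact rational arithmetic (my own addition, not part of the answer):

```python
from fractions import Fraction

# Exact-arithmetic round trip for the parametrization (1)-(2) and its
# inverse (3): t -> ((1-t^2)/(1+t^2), 2t/(1+t^2)) and (x, y) -> y/(x+1).
def to_circle(t):
    denom = 1 + t * t
    return (1 - t * t) / denom, (2 * t) / denom

def to_parameter(x, y):
    return y / (x + 1)

samples = [Fraction(n, d) for n in range(-6, 7) for d in range(1, 5)]
for t in samples:
    x, y = to_circle(t)
    assert x * x + y * y == 1       # the image lies on the unit circle
    assert to_parameter(x, y) == t  # (3) recovers the parameter

assert to_circle(Fraction(1, 2)) == (Fraction(3, 5), Fraction(4, 5))
```

Note that $x+1=2/(1+t^2)$ is never zero for finite $t$, so the inverse is defined everywhere except at the image $(-1,0)$ of $\infty$.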
2,877,576
<p>Is it true that in the equation $Ax=b$, if $A$ is an $n\times n$ square matrix of rank $n$, then the augmented matrix $[A|b]$ will always have rank $n$?</p> <p>$b$ is a column vector with non-zero values. $x$ is a column vector of $n$ variables.</p> <p>If not, then please provide an example.</p>
PQR Theorist
582,944
<p>My algebra gives $b=\sqrt{\sqrt{7500}}\approx 9.30604$ cm (same as your answer), leading to an (internal) angle of $111.47^\circ$, which agrees with @Bence's result. </p>
3,037,296
<p>I'm confused about what <span class="math-container">$\sqrt {3 + 4i}$</span> would be after I used the quadratic formula to simplify <span class="math-container">$z^2 + iz - (1 + i)$</span>.</p>
timtfj
619,670
<p>Well. As a hint: a complex number can be represented by a real part and an imaginary part. Or, on the complex plane, it can be expressed as a distance from the origin (its magnitude) and an angle. Multiplying two complex numbers multiplies the magnitudes and adds the angles—and as with real numbers, there are two square roots.</p> <p>Gimusi's answer uses that approach.</p> <p>Also I think I might detect a <span class="math-container">$3,4,5$</span> triangle somewhere in there . . .</p>
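Making the $3,4,5$ hint explicit (my own addition, not part of the answer): $(2+i)^2 = 4+4i+i^2 = 3+4i$, so the square roots of $3+4i$ are $\pm(2+i)$, and the quadratic formula then gives $z=\frac{-i\pm(2+i)}{2}$, i.e. $z=1$ and $z=-1-i$.

```python
import cmath

# (2 + i)^2 = 3 + 4i, so sqrt(3 + 4i) = ±(2 + i); cmath returns the
# principal root.  Both candidate roots of z^2 + iz - (1 + i) then check out.
root = 2 + 1j
assert root * root == 3 + 4j
assert abs(cmath.sqrt(3 + 4j) - root) < 1e-12

for z in (1, -1 - 1j):
    assert abs(z * z + 1j * z - (1 + 1j)) < 1e-12
```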
3,124,158
<p>So what I want to prove is <span class="math-container">$$ |xy+xz+yz- 2(x+y+z) + 3| \leq |x^2+y^2+z^2-2(x+y+z)+3| $$</span> for <span class="math-container">$x,y,z\in \mathbb{R}$</span>, and I'm aware that the RHS is just <span class="math-container">$|(x-1)^2+(y-1)^2+(z-1)^2|$</span>.</p> <p>Now I'm able to prove that <span class="math-container">$ x^2+y^2+z^2 \geq xy+xz+yz $</span> as this just follows from the AM-GM inequality. So I know that the statement <em>without</em> the absolute values must be true, i.e. <span class="math-container">$$ xy+xz+yz- 2(x+y+z) + 3 \leq x^2+y^2+z^2-2(x+y+z)+3 $$</span> But I can't see why I'm safe to just put absolute values on both sides here. Because I'm not sure why the LHS is guaranteed to be smaller in magnitude than the RHS?</p> <p>(I thought about Cauchy-Schwarz being hidden here but then I realised that I could not see how.)</p> <p>Edit: Alternatively I also understand that <span class="math-container">$$ |xy+xz+yz| \leq |xy|+|xz|+|yz| \leq |x|^2+|y|^2+|z|^2 = x^2+y^2+z^2 = |x^2+y^2+z^2| $$</span> but then if I try to adapt this path, the <span class="math-container">$-2(x+y+z) $</span> bit throws me off</p>
Macavity
58,320
<p><strong>Hint:</strong> You have shown <span class="math-container">$|xy+yz+zx|\leqslant |x^2+y^2+z^2|$</span>. Now replace <span class="math-container">$(x,y,z)$</span> with <span class="math-container">$(x-1,y-1,z-1)$</span>…</p>
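A randomized spot-check of the hint (my own addition): after the substitution, the claim is exactly $|ab+bc+ca|\leqslant a^2+b^2+c^2$ with $(a,b,c)=(x-1,y-1,z-1)$, and random samples confirm the displayed inequality.

```python
import random

# Spot-check |xy+xz+yz - 2(x+y+z) + 3| <= |x^2+y^2+z^2 - 2(x+y+z) + 3|
# on random real triples.
random.seed(0)

def lhs(x, y, z):
    return abs(x * y + x * z + y * z - 2 * (x + y + z) + 3)

def rhs(x, y, z):
    return abs(x * x + y * y + z * z - 2 * (x + y + z) + 3)

trials = [tuple(random.uniform(-10, 10) for _ in range(3))
          for _ in range(2000)]
inequality_holds = all(lhs(*t) <= rhs(*t) + 1e-9 for t in trials)
assert inequality_holds
```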
1,913,689
<blockquote> <p>Let $f: X \rightarrow Y$ be a function. $A \subset X$ and $B \subset Y$. Prove $A \subset f^{-1}(f(A))$.</p> </blockquote> <p>Here is my approach. </p> <p>Let $x \in A$. Then there exists some $y \in f(A)$ such that $y = f(x)$. By the definition of inverse function, $f^{-1}(f(x)) = \{ x \in X$ such that $y = f(x) \}$. Thus $x \in f^{-1}(f(A)).$</p> <p>Does this look OK, and how can I improve it?</p>
Alberto Takase
146,817
<p><strong>Proposition.</strong> Let $X$ and $Y$ be sets. Let $f:X\to Y$. For each $A\in\mathscr{P}(X)$, $A\subseteq f^{-1}(f(A))$.</p> <p><em>Proof.</em> Let $A\in\mathscr{P}(X)$ be arbitrary. \begin{align} f^{-1}(f(A))&amp;=\{z\in X:f(z)\in f(A)\}\\ &amp;=\{z\in X:f(z)\in\{y\in Y:(\exists x\in A)[f(x)=y]\}\}\\ &amp;=\{z\in X:(\exists x\in A)[f(x)=f(z)]\}. \end{align} If $z\in A$, then taking $x=z$ witnesses $(\exists x\in A)[f(x)=f(z)]$, so $$A\subseteq f^{-1}(f(A)).$$</p> <p><strong>Remark.</strong> Let $X$ and $Y$ be sets. Let $f:X\to Y$. If $f$ is injective (i.e. one-to-one), then for each $A\in\mathscr{P}(X)$, $A= f^{-1}(f(A))$.</p>
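A tiny finite illustration of the proposition and the remark (my own addition): with a non-injective $f$ the containment can be strict.

```python
# Finite example: X = {1,2,3,4}, f(1) = f(2) = "a", so f is not injective.
X = {1, 2, 3, 4}
f = {1: "a", 2: "a", 3: "b", 4: "c"}

def image(A):
    return {f[x] for x in A}

def preimage(B):
    return {x for x in X if f[x] in B}

A = {1, 3}
assert A <= preimage(image(A))          # the Proposition
assert preimage(image(A)) == {1, 2, 3}  # strict: 2 enters via f(2) = f(1)
```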
2,007,373
<p>At some point in your life someone explained to you how to understand the dimensions of a line, a point, a plane, and an n-dimensional object.</p> <p>For me the first instance that comes to memory was in 7th grade in an inner-city USA school district.</p> <p>Getting to the point, my geometry teacher taught,</p> <p>"a point has no length width or depth in any dimensions, if you take a string of points and line them up for "x" distance you have a line, the line has "x" length and zero height, when you stack the lines on top of each other for "y" distance you get a plane"</p> <p>Meanwhile I was experiencing cognitive dissonance: how can anything with zero length or width be stacked on top of itself and build itself into something with width or length?</p> <p>I quit math.</p> <p>Cut to a few years after high school: I'm deep in the maths.</p> <p>I rationalized geometry with my own theory, which didn't conflict with any of geometry or trigonometry.</p> <p>I theorized that a point in space was an infinitely small space in every dimension, such that you can add them together to get a line, or add the lines to get a plane.</p> <p>Now you can say that the line has infinitely small height approaching zero but not zero.</p> <p>What really triggered me is that a Linear Algebra professor at my school said that lines have zero height and didn't listen to my argument. . .</p> <p>I don't know if my intuition is any better than hers . . . if I'm wrong, if she's wrong . . .</p> <p>I would very much appreciate some advice on how to deal with these sorts of things.</p>
ಠ_ಠ
169,780
<p>This is less an answer and more of an extended comment. You seem to be struggling with the idea of a point as contrasted with an <a href="https://ncatlab.org/nlab/show/infinitesimally+thickened+point" rel="noreferrer">infinitesimally thickened point</a>, and it sounds to me like you want to do geometry with with <a href="https://ncatlab.org/nlab/show/infinitesimal+object" rel="noreferrer">infinitesimals</a>. Whereas Omnomnomnom suggests looking at nonstandard analysis, I would suggest a different approach to infinitesimals, namely <a href="https://en.wikipedia.org/wiki/Smooth_infinitesimal_analysis" rel="noreferrer">smooth infinitesimal analysis</a>. It's pretty intuitive (no pun on intuitionistic logic intended) and easy to use. I personally think in terms of synthetic differential geometry and smooth infinitesimal analysis all the time when working with smooth manifolds and Lie groups/algebras. If you're interested, have a look at John L. Bell's <a href="http://rads.stackoverflow.com/amzn/click/0521887186" rel="noreferrer">A Primer of Infinitesimal Analysis</a>. He's a prominent philosopher of mathematics and I'm sure you'll find something in common with his views.</p>
2,109,832
<p>This is for beginners in probability!</p> <p>Could someone give me a step by step on how to find the MGF of the binomial distribution?</p>
y_prime
112,175
<p>Let's use a right triangle. The simple one, 3-4-5.</p> <p>I can't draw a diagram since I suck at those, but let the angle $\theta$ be opposite of the side of length 3. So $\sin(\theta)=\frac{3}{5}$ and $\cos(\theta)=\frac{4}{5}.$ (This is actually derivable since $\cos^2\theta=1-\sin^2\theta=\frac{16}{25}.$ Note that this doesn't make $\cos\theta=-\frac{4}{5}$ thanks to arcsin's definition.)</p> <p>Now, $\sin(2\theta)=2\sin(\theta)\cos(\theta)=2\cdot \frac{3}{5}\cdot \frac{4}{5}=\boxed{\frac{24}{25}}.$</p>
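A quick numeric sanity check of the triangle computation above (a side check, not part of the original answer):

```python
import math

# sin(2*arcsin(3/5)) should equal 24/25 by the double-angle identity.
theta = math.asin(3 / 5)
assert math.isclose(math.cos(theta), 4 / 5)        # adjacent/hypotenuse of 3-4-5
assert math.isclose(math.sin(2 * theta), 24 / 25)  # ≈ 0.96
```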
2,109,832
<p>This is for beginners in probability!</p> <p>Could someone give me a step by step on how to find the MGF of the binomial distribution?</p>
User8128
307,205
<p>Use $\sin(2x) =2\sin(x)\cos(x)$ and then $\cos(x)=\sqrt{1-\sin^2(x)}$.</p>
2,109,832
<p>This is for beginners in probability!</p> <p>Could someone give me a step by step on how to find the MGF of the binomial distribution?</p>
Nosrati
108,128
<p>By the formula $\sin 2x=2\sin x\cos x$ we write $$\color{red}{\sin(2\arcsin\dfrac35)}=2\sin(\arcsin\dfrac35)\cos(\arcsin\dfrac35)$$ By the formula $\cos x=\sqrt{1-\sin^2x}$ $$\color{red}{\sin(2\arcsin\dfrac35)}=2\sin(\arcsin\dfrac35)\sqrt{1-\sin^2(\arcsin\dfrac35)}$$ But $\sin(\arcsin x)=x$, so $$\color{red}{\sin(2\arcsin\dfrac35)}=2\cdot\dfrac35\sqrt{1-\left(\dfrac35\right)^2}$$ $$\color{red}{\sin(2\arcsin\dfrac35)}=2\cdot\dfrac35\sqrt{\dfrac{16}{25}}=\color{blue}{\dfrac{24}{25}}$$</p>
3,133,695
<p>A spotlight on the ground shines on a wall 12 m away. If a man 2 m tall walks from the spotlight toward the building at a speed of 1.6 m/s, how fast is the length of his shadow on the building decreasing when he is 4 m from the building?</p> <p>How do you solve this word problem? I have drawn a picture to figure out the solution but I have failed to come up with anything.</p> <p><a href="https://i.stack.imgur.com/PFCDG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PFCDG.png" alt="enter image description here"></a></p> <p>Let x be the distance between the man and the spotlight; the distance between him and the building shall be <span class="math-container">$12-x$</span>. <span class="math-container">$12-x=4$</span> means <span class="math-container">$x=8$</span></p> <p>At this point I don't know how to proceed.</p> <p>Please help</p>
Parallelism Alert
639,984
<p>No, two non-congruent quadrilaterals with the same sets of sides and angles do not exist. Let's prove this:</p> <p>Suppose that two such quadrilaterals <span class="math-container">$ABCD$</span> and <span class="math-container">$A^{'}B^{'}C^{'}D^{'} $</span> exist.<br> We have <span class="math-container">$AB=A^{'}B^{'}, AD=A^{'}D^{'}$</span> and angles <span class="math-container">$A$</span> and <span class="math-container">$A^{'}$</span> are congruent; therefore <span class="math-container">$\triangle ABD \equiv \triangle A^{'}B^{'}D^{'}$</span>, and hence <span class="math-container">$BD = B^{'}D^{'}.$</span><br> Analogously, we have <span class="math-container">$AC=A^{'}C^{'}$</span></p> <p>Therefore, both quadrilaterals have the same side lengths and diagonals, the same angles between any two adjacent (non-opposite) sides and, because of the earlier-proven congruences, the same angles between sides and diagonals. Therefore, the two quadrilaterals are congruent, which is a contradiction. Q.E.D</p>
846,797
<p>I encountered this calculation in a problem $\dfrac{\sin 150^o\times\sin 20^o}{\sin 80^o\times\sin 10^o}$ and calculated that it equals 1.</p> <p>Is it just a coincidence or is there any identity that says $\sin 150^o\times\sin 20^o=\sin 80^o\times\sin 10^o$?</p> <p>I am trying to use the addition formulae and<br> $\sin \phi\sin \theta\equiv\dfrac{\cos (\phi-\theta)-\cos (\phi+\theta)}{2}$,<br> which reduces to showing $\cos 130^0\cos 170^0\equiv\cos 70^0\cos 90^0$, but still unable to explain why.</p> <p>Any help is really appreciated. Many thanks!</p>
Cookie
111,793
<p>We will need to utilize the following observations (the unit circle would help you see these visually):</p> <ul> <li>$\sin 150^\circ=\frac 12$ </li> <li>$\cos 10^\circ = \sin 80^\circ$</li> </ul> <p>Also the double-angle identity is used:</p> <ul> <li>$\sin 20^\circ = 2 \cos 10^\circ \sin 10^\circ$ (double angle identity)</li> </ul> <p>We apply these observations to prove your relation, from the LHS to the RHS. \begin{align} \sin 150^\circ \sin 20^\circ = \left(\frac 12\right) (2 \sin 10^\circ \cos 10^\circ) = \sin 10^\circ \cos 10^\circ=\sin 10^\circ \sin 80^\circ \end{align}</p>
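A numeric check of the identity (a side check, not part of the original answer):

```python
import math

d = math.radians
lhs = math.sin(d(150)) * math.sin(d(20))   # = (1/2)·sin20° = sin10°·cos10°
rhs = math.sin(d(80)) * math.sin(d(10))    # = cos10°·sin10°
assert math.isclose(lhs, rhs)
```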
3,983,914
<p><strong>Preliminary properties</strong>: Let the state vector <span class="math-container">$x(t)=[x_1(t),\dots,x_n(t)]^T\in\mathbb{R}^n$</span> be constrained to the dynamical system <span class="math-container">$$ \dot{x} = Ax + \begin{bmatrix} \phi_1(x_1) \\ \vdots \\ \phi_n(x_1) \\ \end{bmatrix}, \ \ \ \ x(0) = x_0 $$</span> where <span class="math-container">$A$</span> is defined by: <span class="math-container">$$ A = \begin{bmatrix} \lambda_1 &amp; 1 &amp; 0 &amp;\cdots&amp; 0\\ 0 &amp; \lambda_2 &amp; 1 &amp;\ddots&amp;\vdots\\ \vdots&amp;\ddots&amp;\ddots&amp;\ddots&amp;0\\ 0&amp;\cdots&amp;0&amp;\lambda_{n-1}&amp; 1\\ 0&amp;\cdots&amp;0&amp;0&amp;\lambda_n \end{bmatrix} $$</span> with <span class="math-container">$\lambda_i&gt;0$</span>, and <span class="math-container">$\phi_i(x_1) = \beta_i |x_1|^{\alpha_i}\text{sign}(x_1), \beta_i&gt;0$</span>, <span class="math-container">$0&lt;\alpha_i&lt;1$</span>.</p> <p><strong>Question:</strong> Is it possible to show that for any initial condition <span class="math-container">$x_0\neq 0$</span>, the solution <span class="math-container">$x(t)$</span> either converges to the origin, or <span class="math-container">$ \lim_{t\to\infty}\|x(t)\| = +\infty $</span>, but cannot remain on a bounded trajectory other than staying at the origin?</p> <p>Concretely, what additional structure or conditions on the system or the initial condition do we require to show this?</p> <p>In case you find this useful, here are my attempts to understand/solve the problem.</p> <p><strong>Attempt 1</strong>: I was trying to use results such as the ones from <a href="https://www.jstor.org/stable/pdf/2099661.pdf?refreqid=excelsior%3A03f1446d2fac90e5743319c6fb03b263**" rel="nofollow noreferrer">here</a>, which can conclude what I want but require finding a Lyapunov-like function (not necessarily positive definite) for which <span class="math-container">$\ddot{V}\neq 0, x\neq 0$</span>. 
However, I haven't been able to come up with a suitable such function.</p> <p><strong>Attempt 2</strong>: The differential equation has an &quot;explicit&quot; solution (not precisely explicit, but it can be expressed as) <span class="math-container">$$ x(t) = e^{At}x_0 + e^{At}\int_0^te^{-As}\Phi(x_1(s))ds $$</span> where <span class="math-container">$\Phi(x_1) = [\phi_1(x_1),\dots,\phi_n(x_1)]^T$</span>. So I wanted to proceed by contradiction: assume that there exist <span class="math-container">$b,B&gt;0$</span> and <span class="math-container">$T&gt;0$</span> such that <span class="math-container">$b\leq \|x(t)\|\leq B$</span> for all <span class="math-container">$t\geq T$</span>. Hence, <span class="math-container">$$ b\leq \left\|e^{At}x_0 + e^{At}\int_0^te^{-As}\Phi(x_1(s))ds\right\|\leq B $$</span> And note that in this case there should be <span class="math-container">$c,C&gt;0$</span> such that <span class="math-container">$0&lt;c\leq\|\Phi(x_1(t))\|\leq C $</span> for all <span class="math-container">$t\geq T$</span>. Thus, try to obtain a contradiction, for example by using <span class="math-container">$C\geq\|\Phi(x_1(t))\|$</span> to show that <span class="math-container">$B\leq\|x(t)\|$</span>. But unfortunately I haven't obtained anything positive in this direction either.</p> <p><strong>Attempt 3</strong>: Can Bendixson's/Dulac's criterion (see Theorem 11 <a href="https://www.damtp.cam.ac.uk/user/examples/D26e.pdf" rel="nofollow noreferrer">here</a>) be used to conclude something for this system? It is easy to verify that if we write this system as <span class="math-container">$\dot{x} = f(x)$</span>, we obtain <span class="math-container">$\nabla\cdot f(x)&gt;0$</span>.</p> <p>I know that neither my attempts nor my exposition here are perfect. However, I'm looking for suggestions/references or any idea which might help me understand this problem better.</p>
open problem
876,065
<p>Instead of the general case let us focus on the case where <span class="math-container">$\alpha_{i} = 1$</span> for all $i$. In this special case <span class="math-container">$sign(x_{1})|x_{1}|=x_{1}$</span>, so the <span class="math-container">$\phi(x_{1})$</span> term can then be absorbed into <span class="math-container">$A$</span>, yielding <span class="math-container">$A' = A+\Phi(x_{1})$</span>. So in this special case the equation is linear homogeneous.</p> <p>This may seem like an oversimplification; however, there are several salient points. First, it is a good first place to look for counterexamples. Second, when <span class="math-container">$\alpha &gt; 1$</span> you can see that the local structure near the zero equilibrium is dominated by <span class="math-container">$A'$</span> for small |x(t)|. So the behavior of a subset of solutions will always be reliant on <span class="math-container">$A'$</span>; in that way it is always worth looking at the structure of <span class="math-container">$A'$</span>.</p> <p>Let's look at the system-of-two-equations version just to keep the notation to a reasonable level.</p> <p><span class="math-container">$$\frac{dx_{1}}{dt} = (\lambda_{1} + \beta_{1})x_{1}(t) +x_{2}(t)$$</span></p> <p><span class="math-container">$$\frac{dx_{2}}{dt} = \beta_{2}x_{1}(t) + \lambda_{2}x_{2}(t)$$</span></p> <p>Solutions of which are given by:</p> <p><span class="math-container">$x(t) = e^{A't}x_{0}$</span></p> <p>The same reasoning works in the full system case where all <span class="math-container">$\alpha_{i}=1$</span>.</p> <p>Now in the general case, but with the system of two equations:</p> <p><span class="math-container">$$\left|\frac{dx_{1}}{dt}\right| = \left|\lambda_{1}x_{1}(t) + sign(x_{1}(t))\beta_{1}|x_{1}(t)|^{\alpha_{1}} +x_{2}(t)\right| = \lambda_{1}|x_{1}(t)| + \beta_{1}|x_{1}(t)|^{\alpha_{1}} + |x_{2}(t)|$$</span></p> <p><span class="math-container">$$\left|\frac{dx_{2}}{dt}\right| = \left|sign(x_{1}(t))\beta_{2}|x_{1}(t)|^{\alpha_{1}} + \lambda_{2}x_{2}(t)\right| = 
\beta_{2}|x_{1}(t)|^{\alpha_{1}} + \lambda_{2}|x_{2}(t)|$$</span></p> <p>So we get the inequality:</p> <p><span class="math-container">$$|\frac{dx_{1}}{dt}| \geq \lambda_{1}|x_{1}(t)|+ |x_{2}(t)|$$</span></p> <p><span class="math-container">$$|\frac{dx_{2}}{dt}| \geq \lambda_{2}|x_{2}(t)|$$</span></p> <p>Integrating against the inequality we get that</p> <p><span class="math-container">$|x_{1}(t)| &gt; |y_{1}(t)|$</span> and <span class="math-container">$|x_{2}(t)| &gt; |y_{2}(t)|$</span></p> <p>Where <span class="math-container">$y(t) = e^{At}x_{0}$</span>.</p> <p>Note we make heavy use of the fact that the <span class="math-container">$\lambda_{i}$</span> and <span class="math-container">$\beta_{i}$</span> are all positive in the above inequality. If you want to generalize the problem to include negative <span class="math-container">$\lambda_{i}$</span> and <span class="math-container">$\beta_{i}$</span> the structure of <span class="math-container">$A'$</span> will be the place to start looking for counterexamples.</p> <p><strong>I will add the assumption that <span class="math-container">$x_{0}$</span> is either in the positive or negative orthant for now, until I update the argument.</strong></p> <p>Ok. So let's try this approach so that we can use the comparison theorem in your textbook. 
Let's let <span class="math-container">$U(t) = x_{1}(t) + x_{2}(t)$</span></p> <p><span class="math-container">$$|\frac{d(x_{1}(t) + x_{2}(t))}{dt}| = |\lambda_{1}x_{1}(t)| + |sign(x_{1}(t))\beta_{1}|x_{1}(t)|^{\alpha_{1}}| + |x_{2}(t)| + |sign(x_{1}(t))\beta_{2})|x_{1}(t)|^{\alpha_{1}}| + |\lambda_{2}x_{2}(t)|$$</span></p> <p><span class="math-container">$$\geq |\lambda_{1}x_{1}(t)| + |x_{2}(t)| + |\lambda_{2}x_{2}(t)| \geq min(\lambda_{1},1+\lambda_{2}) |x_{1}(t)+x_{2}(t)|$$</span></p> <p>So <span class="math-container">$|\frac{dU}{dt}| &gt; min(\lambda_{1},1+\lambda_{2}) |U(t)|$</span>.</p> <p>Then you can use the comparison theorem from your book to show that:</p> <p><span class="math-container">$|U(t)| &gt; e^{min(\lambda_{1},1+\lambda_{2})t}|x_{0}|$</span>.</p> <p>You can do something similar in the kth order case.</p> <p><strong>Sorry folks, when I read the question I just assumed we were in the positive orthant. Here is a quick counterexample.</strong></p> <p>Choose so that <span class="math-container">$\beta_{1}+\lambda_{1}=1$</span> and <span class="math-container">$\beta_{2} = 1$</span> and <span class="math-container">$\lambda_{2}=1$</span></p> <p>The alphas can be free.</p> <p><span class="math-container">$$\frac{dx_{1}}{dt} = \lambda_{1}x_{1}(t) + sign(x_{1}(t))\beta_{1}|x_{1}(t)|^{\alpha_{1}} +x_{2}(t)$$</span></p> <p><span class="math-container">$$\frac{dx_{2}}{dt} = sign(x_{1}(t))|x_{1}(t)|^{\alpha_{1}} + x_{2}(t)$$</span></p> <p><span class="math-container">$x_{0}=(1,-1)$</span> is a non-zero equilibrium point.</p>
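The claimed non-zero equilibrium at the end is easy to verify numerically. A sketch with one concrete parameter choice satisfying the stated constraints $\beta_1+\lambda_1=1$, $\beta_2=\lambda_2=1$ (the specific values of $\lambda_1$, $\beta_1$ and $\alpha_1$ below are my own assumptions):

```python
import math

lam1, beta1, alpha1 = 0.4, 0.6, 0.5    # assumed values with beta1 + lam1 = 1

def rhs(x1, x2):
    """Right-hand side of the 2-d system with beta2 = lambda2 = 1."""
    s = math.copysign(1.0, x1) if x1 != 0 else 0.0   # sign(x1)
    dx1 = lam1 * x1 + s * beta1 * abs(x1) ** alpha1 + x2
    dx2 = s * abs(x1) ** alpha1 + x2
    return dx1, dx2

dx1, dx2 = rhs(1.0, -1.0)
assert abs(dx1) < 1e-12 and abs(dx2) < 1e-12   # (1, -1) is an equilibrium
```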
1,791,146
<p>I know that a set G with a binary operation $*$ is a group, if:</p> <ol> <li><p>$a*b\in G$, for all $a, b \in G$.</p></li> <li><p>$*$ is associative:</p></li> </ol> <p>$$(a*b)*c=a*(b*c) \\ \text{for all }a, b, c\in G.$$</p> <ol start="3"> <li>An identity element $e \in G$ exists, such that</li> </ol> <p>$$a*e = e*a = a\\ \text{for all }a\in G.$$</p> <ol start="4"> <li>For all elements $a \in G$, there exists an $a^{-1} \in G$, such that:</li> </ol> <p>$$a*a^{-1} = a^{-1}*a=e.$$</p> <p>Can I use that to show that the empty set is a group?</p>
AnotherPerson
185,237
<p>So we know that universal statements are true on empty domains, and existence statements are false. Being a group requires the existence of an identity element, and since the empty set cannot satisfy this (it has no elements) it is not a group. </p>
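The logical point here — universally quantified statements hold vacuously over an empty domain, while existential ones fail — can be illustrated with Python's `all` and `any` (a side illustration, not part of the original argument):

```python
empty = []

# "for all x in empty: P(x)" is vacuously true, even for an absurd P:
assert all(x != x for x in empty)
# "there exists x in empty: P(x)" is false, even for a trivially true P:
assert not any(x == x for x in empty)
```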
1,198,722
<p>I am working with a standard linear program:</p> <p>$$\text{min}\:\:f'x$$ $$s.t.\:\:Ax = b$$ $$x ≥ 0$$</p> <p><strong>Goal:</strong> I want to enforce all nonzero solutions $x_i\in$ x to be greater than or equal to a certain threshold "k" if it's nonzero. In other words, I want to add a conditional bound to the LP: if any $x_i$ is > 0, enforce $x_i$ ≥ k.</p> <p><strong>Main issue:</strong> Is there a way to set up this problem as an LP? Any alternate approaches? Any input would be appreciated and I'm happy to provide any additional info as needed! Thanks! </p>
Michael Grant
52,878
<p>This cannot be solved with linear programming, but it can be solved with mixed-integer linear programming (MILP). What you are looking for is something called a <em>semicontinuous variable</em> in the MILP community. A semicontinuous variable $x$ is constrained to the disjoint set $$x\in\{0\}\cup[\ell,u] \qquad\text{or}\qquad x\in\{0\}\cup[\ell,+\infty),$$ where $0&lt;\ell&lt;u$. </p> <p>This kind of construct occurs commonly in practice. For example, consider a portfolio design problem in which a particular investment can only be purchased if you are willing to allocate a certain minimum amount to it---say, 10K USD. Any amount above that minimum is permissible: so 9.9K is not acceptable, but 10.1K and 100K are. This kind of constraint is naturally modeled with a semicontinuous variable.</p> <p>Many MILP solvers such as CPLEX, Gurobi, and even <code>lp_solve</code> handle them explicitly. The method for specifying semicontinuity and the bounds $\ell,u$ depend on the specific solver, so you'll need to consult the documentation for the solver of your choice.</p> <p>With solvers like, say, MOSEK that do not specifically handle semicontinuity, the case with the upper-bound is readily handled with the pair of inequalities $$\ell z\leq x \leq u z$$ where $z\in\{0,1\}$ is an additional binary variable. If you don't have an upper bound, then you must take ChrKroer's approach and select an arbitrary one. Try not to make it any larger than necessary to preserve the solution set.</p> <p>As you can see, the advantage to handling semicontinuity explicitly is that it can handle the $u=\infty$ case. Computationally both introduce a single branch into the problem, so I suspect there is no consistent performance benefit to one or the other, but I do not have the experience to say for sure.</p> <p>Google searches for "semicontinuous variables" or "semi-continuous variables" turn up a wealth of useful results. 
There is also the notion of a <em>semi-integer variable</em>: as its name implies, it combines a semicontinuous constraint with an integrality constraint.</p>
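The inequality pair $\ell z\leq x\leq u z$ with binary $z$ can be sanity-checked without any MILP solver. A small brute-force sketch (the bounds $\ell=2$, $u=5$ are arbitrary) confirming that the pair carves out exactly the set $\{0\}\cup[\ell,u]$:

```python
l, u = 2.0, 5.0

def feasible(x):
    """x is feasible iff some z in {0, 1} satisfies l*z <= x <= u*z."""
    return any(l * z <= x <= u * z for z in (0, 1))

assert feasible(0.0)       # x = 0 allowed via z = 0
assert not feasible(1.0)   # 0 < x < l is excluded
assert feasible(3.5)       # l <= x <= u allowed via z = 1
assert not feasible(6.0)   # x > u is excluded
```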
2,393,525
<p>I have two questions which I think both concern the same problem I am having. Is $...121212.0$ a rational number and is $....12121212....$ a rational number? The reason I was thinking it could be a number is when you take the number $x=0.9999...$, then $10x=9.999...$ . Therefore, we conclude $9x=9$ which means $x=1$. Why could or couldn't you do the same thing and divide the first number in similar fashion by defining it as $x$ and then taking $x/100$?</p>
Eric Wofsey
86,856
<p>The "numbers" you have written are not real numbers at all, so they are not rational or irrational. The decimal expansion of a real number cannot continue infinitely to the left. Why? Well, intuitively, such a number would be "infinitely large" and there are no infinitely large real numbers. More precisely, an expression like $...121212.0$ would denote the sum of the series $$\sum_{n=0}^{\infty}a_n\cdot 10^n$$ where $a_n=2$ if $n$ is even and $a_n=1$ if $n$ is odd. But this series diverges: the partial sums get larger and larger without bound, since you keep adding larger powers of $10$. So there is no real number that is the sum of the series.</p>
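The divergence is easy to see numerically; a short sketch of the partial sums of $\sum_n a_n\cdot 10^n$:

```python
# Partial sums of sum_n a_n * 10^n with digits a_n = 2, 1, 2, 1, ...
partial, sums = 0, []
for n in range(12):
    a_n = 2 if n % 2 == 0 else 1
    partial += a_n * 10 ** n
    sums.append(partial)

assert all(s < t for s, t in zip(sums, sums[1:]))  # strictly growing...
assert sums[-1] == 121212121212                    # ...without bound, so no limit
```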
1,042,227
<p>I want to verify that the solution to the difference equation</p> <p>$m_x - 2pqm_{x-2} = p^2 + q^2$</p> <p>with boundary conditions</p> <p>$m_0 = 0$</p> <p>$m_1 = 0$</p> <p>is</p> <p>$$m_x = -\frac{1}{2}(\frac{1}{\sqrt{2pq}} +1)(\sqrt{2pq})^x + \frac{1}{2}(\frac{1}{\sqrt{2pq}} - 1)(-\sqrt{2pq})^x + 1$$</p> <p><strong>General solution to inhomogeneous equation</strong></p> <p>I know that that general solution for $m_x$ will be equal to the general solution to the homogeneous equation plus a particular solution to the inhomogeneous equation. So the general solution to the homogeneous equation</p> <p>$m_x - 2pqm_{x-2} = 0$</p> <p>ends up being </p> <p>$m_x = A(\sqrt{2pq})^x + B(-\sqrt{2pq})^x$</p> <p>Using $m_0 = 0$ we have that $A = -B$ giving us</p> <p>$m_x = A(\sqrt{2pq})^x - A(-\sqrt{2pq})^x$</p> <p>Using $m_1 = 0$ we have that </p> <p>$0 = -B(\sqrt{2pq}) + B(-\sqrt{2pq})$</p> <p>$0 = -B(\sqrt{2pq}) - B(\sqrt{2pq})$</p> <p>$0 = -2B(\sqrt{2pq})$</p> <p>$=&gt; B = -A = 0$</p> <p>So I must have done something wrong? And where is the "$1$" at the end of the correct general solution above coming from? Can someone show how to verify the solution correctly?</p>
Did
6,179
<p>If, as I suspect, $p+q=1$, one might prefer the general formula, valid for every integer $x\geqslant0$, $$m_{2x}=m_{2x+1}=1-(2pq)^x.$$ In the general case, if $2pq\ne1$, try $$m_{2x}=m_{2x+1}=\left(1-(2pq)^x\right)\,\frac{p^2+q^2}{1-2pq}.$$ Finally, if $2pq=1$, $$m_{2x}=m_{2x+1}=x\,(p^2+q^2).$$ In each case, checking that the recursion holds should be direct.</p> <p><em>Moral:</em> To transform the formula for $m_{2x}=m_{2x+1}$ into a single formula valid for every $m_x$, based on the oscillatory nature of $(-1)^x$, while always possible, might not be a good idea.</p>
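A quick numeric check of the $p+q=1$ formula against the recursion and the boundary conditions (a sketch; the value of $p$ is arbitrary):

```python
p = 0.3
q = 1 - p

def m(x):
    """Closed form m_{2x} = m_{2x+1} = 1 - (2pq)^x."""
    return 1 - (2 * p * q) ** (x // 2)

assert m(0) == 0 and m(1) == 0                      # boundary conditions
for x in range(2, 30):                              # recursion m_x - 2pq*m_{x-2} = p^2 + q^2
    assert abs(m(x) - 2 * p * q * m(x - 2) - (p**2 + q**2)) < 1e-12
```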
3,765,555
<p>Let <span class="math-container">$\triangle ABC$</span> be an isosceles triangle with base <span class="math-container">$a$</span> and altitude to the base <span class="math-container">$b.$</span> I am trying to find the sides of the rectangle inscribed in <span class="math-container">$\triangle ABC$</span> if its diagonals are parallel to the triangle legs.</p> <p>Does an inscribed rectangle exist in every isosceles triangle? How are we to construct that rectangle?</p> <p>Thank you in advance!</p>
SarGe
782,505
<p><a href="https://i.stack.imgur.com/kGNSV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kGNSV.png" alt="enter image description here" /></a></p> <p>Let <span class="math-container">$ABC$</span> be an isosceles triangle as shown in the figure, and let <span class="math-container">$D(t,0)$</span> and <span class="math-container">$E$</span> be points on the sides <span class="math-container">$AB$</span> and <span class="math-container">$AC$</span> respectively such that <span class="math-container">$DE$</span> is one of the diagonals of the rectangle inscribed in <span class="math-container">$\triangle ABC$</span>.</p> <p>The equation of <span class="math-container">$AC$</span> is given as <span class="math-container">$\displaystyle y=\left(\frac{2b}{a}\right) x+b$</span>. It is evident by symmetry that the <span class="math-container">$x$</span>-coordinate of <span class="math-container">$E$</span> will be <span class="math-container">$-t$</span>, and hence the <span class="math-container">$y$</span>-coordinate is <span class="math-container">$\displaystyle b-\frac{2bt}{a}$</span>.</p> <p>Since the slopes of <span class="math-container">$BC$</span> and <span class="math-container">$DE$</span> are equal, we get <span class="math-container">$\displaystyle t=\frac{a}{6}$</span>.</p>
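The value $t=a/6$ can be verified with exact rational arithmetic, using the coordinates implied by the answer's line $AC:\ y=(2b/a)x+b$, i.e. $B=(a/2,0)$ and $C=(0,b)$ (the sample dimensions below are arbitrary):

```python
from fractions import Fraction as F

a, b = F(4), F(3)                      # arbitrary sample base and height
t = a / 6
D = (t, F(0))                          # D on the base AB
E = (-t, b - 2 * b * t / a)            # E on AC: y = (2b/a)x + b

slope_DE = (E[1] - D[1]) / (E[0] - D[0])
slope_BC = (F(0) - b) / (a / 2)        # B = (a/2, 0), C = (0, b)
assert slope_DE == slope_BC            # the diagonal DE is parallel to BC
```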
3,820,465
<p>I'm working on the following problem but I'm having a hard time figuring out how to do it:</p> <p>Q: Let A and B be two arbitrary events in a sample space S. Prove or provide a counterexample:</p> <p>If <span class="math-container">$P(A^c) = P(B) - P(A \cap B)$</span> then <span class="math-container">$P(B) = 1$</span></p> <p>Drawing Venn diagrams I can see how this is true, as <span class="math-container">$A \subset B$</span>, but I'm not sure how to formally prove this. Any help would be great!</p>
copper.hat
27,978
<p>Let <span class="math-container">$P$</span> be uniform on <span class="math-container">$S=\{1,2,3\}$</span> and let <span class="math-container">$A=\{1,2\}, B=\{2,3\}$</span>.</p> <p><span class="math-container">$P(A \cap B) = P \{2\} = {1 \over 3}$</span>. <span class="math-container">$P(B) = P \{2,3\} = {2 \over 3}$</span>. <span class="math-container">$P(A^c) = P \{3\} = {1 \over 3}$</span>.</p> <p>Hence the equation holds but <span class="math-container">$P(B) \neq 1$</span>.</p> <p>All that one can really conclude is that <span class="math-container">$P (A \cup B)^c = 0$</span>.</p>
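The counterexample can be confirmed by direct enumeration with exact fractions:

```python
from fractions import Fraction as F

S = {1, 2, 3}
P = lambda E: F(len(E & S), len(S))    # uniform measure on S
A, B = {1, 2}, {2, 3}

assert P(S - A) == P(B) - P(A & B)     # the hypothesis P(A^c) = P(B) - P(A∩B)
assert P(B) == F(2, 3)                 # ...yet P(B) ≠ 1
assert P(S - (A | B)) == 0             # only P((A∪B)^c) = 0 follows
```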
385,789
<p>Can anybody please help me with this problem?</p> <p>Let $K = \mathbb{F}_p$ be the field of integers modulo an odd prime $p$, and $G = \mathcal{M}^*_n(\mathbb{F}_p)$ the set of $n\times n$ invertible matrices with components in $\mathbb{F}_p$. Based on the linear (in)dependence of the columns of a matrix $M\in G$, get the number of matrices in $G$.</p> <p>Thanks in advance.</p>
rschwieb
29,335
<p>Hint: you are working with a pool of $p^n$ column vectors from $\Bbb F^n$. Of course, $n$ could be 1 or 2, but to get you going, what I say will venture up to 3.</p> <p>When picking the first column, you'll have $p^n-1$ choices. (Anything except the zero vector.)</p> <p>When you pick the second column, you'll have to avoid picking something in the span of the first column. There are $p$ things in that span since you can choose the coefficient freely from the field, so you now have $p^n-p$ choices for the second column.</p> <p>For the third column, you'd have to pick a vector not in the span of the first two columns. There are $p^2$ such vectors, since you can choose two coefficents for the vectors freely. So now you are down to $p^n-p^2$ vectors... </p> <p>Can you see how to count the total possibilities from these data?</p>
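The count the hint leads to, $\prod_{k=0}^{n-1}(p^n-p^k)$, can be confirmed by brute force for small cases (a sketch for $n=2$, $p=3$):

```python
from itertools import product

p = 3
# Count 2x2 matrices over F_p with nonzero determinant mod p.
count = sum(
    1
    for a, b, c, d in product(range(p), repeat=4)
    if (a * d - b * c) % p != 0
)
formula = (p**2 - 1) * (p**2 - p)      # product over the column choices
assert count == formula == 48
```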
3,913,732
<p>The following was asked by a high school student, and I could not answer it. Please help.</p> <p>In the figure below, show that the bisectors of <span class="math-container">$\angle AEB$</span> and <span class="math-container">$\angle AFD$</span> intersect at right angles. <img src="https://i.stack.imgur.com/76tDO.jpg" alt="enter image description here" /></p>
Claude Leibovici
82,404
<p>Using whole numbers, you want to solve $$\cos \left(\frac{13}{10} \pi \cos (t)\right)=\cos \left(\frac{13\pi }{10}\right)=-\sqrt{\frac{5}{8}-\frac{\sqrt{5}}{8}}$$ Taking the inverse $$t=\cos ^{-1}\left(\frac{10 }{13 \pi }\cos ^{-1}\left(-\frac{1}{2} \sqrt{\frac{1}{2} \left(5-\sqrt{5}\right)}\right)\right)$$ modulo ... something.</p>
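A numeric check of the cosine value used above (a sketch; $13\pi/10$ lies in the third quadrant, so its cosine is negative):

```python
import math

val = math.cos(13 * math.pi / 10)
assert val < 0
assert math.isclose(val, -math.sqrt((5 - math.sqrt(5)) / 8))
# same value in the form used inside the arccos above:
assert math.isclose(val, -0.5 * math.sqrt((5 - math.sqrt(5)) / 2))
```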
3,784,872
<p><strong>Problem:</strong></p> <p>Suppose <span class="math-container">$(X_n)_{n \geq 1}$</span> are independent random variables defined on <span class="math-container">$(\Omega, \mathscr{A},\mathbb{P})$</span>. Define <span class="math-container">$Y=\limsup _{n \to \infty} \frac{1}{n} \sum_{1 \leq p \leq n}X_p$</span> and <span class="math-container">$Z=\liminf _{n \to \infty} \frac{1}{n} \sum_{1 \leq p \leq n}X_p$</span>. How can I prove that <span class="math-container">$Y$</span> and <span class="math-container">$Z$</span> are constant almost everywhere?</p> <p><strong>My attempt:</strong></p> <p>I know it is related to Kolmogorov's zero-one law. I can solve the simpler case <span class="math-container">$\limsup_n X_n$</span> but I cannot solve the problem above.</p>
Kavi Rama Murthy
142,385
<p>For any <span class="math-container">$k$</span>, <span class="math-container">$\frac 1n \sum\limits_{p=1}^{k}X_p \to 0$</span> as <span class="math-container">$n\to\infty$</span>. So we can take the sum from <span class="math-container">$k+1$</span> to <span class="math-container">$\infty$</span>. This shows that <span class="math-container">$Y$</span> and <span class="math-container">$Z$</span> are measurable w.r.t. <span class="math-container">$\sigma (X_{k+1},X_{k+2},...)$</span> for each <span class="math-container">$k$</span>. Apply the <span class="math-container">$0-1$</span> law now.</p>
132,862
<p>Is it true that given a matrix $A_{m\times n}$, $A$ is regular / invertible if and only if $m=n$ and the columns of $A$ form a basis of $\mathbb{R}^n$?</p> <p>Seems so to me, but I haven't seen anything in my book yet that says it directly.</p>
Aaron Meyerowitz
84,560
<p>As was noted in the question, the set $\mathbb{N}$ which we are trying to specify is infinite because we know $T \subseteq \mathbb{N}$ for the infinite set $T=\{ 0, S(0), S(S(0)), \ldots \}.$ I will rephrase your question as</p> <blockquote> <p><strong>How do we show that in fact $\mathbb{N}=\{ 0, S(0), S(S(0)), \ldots \}?$</strong></p> </blockquote> <p>Well, we do have to say it: it doesn't follow from the first four axioms, because we could use $\mathbb{Q}$ in place of $\mathbb{N}$, let $S(q)=q+1$, and everything so far would work fine.</p> <p>So we might say: "once you have $0,S(0),S(S(0)),S(S(S(0))),$ <em>etc.</em> that is everything!" That is pretty much what axiom 5 says.</p> <p>There are some technical details, but they are not directly relevant to why we want axiom 5. One detail is: how exactly do we specify what we mean by the <em>etc.</em>? The technical form of axiom 5 handles that: we say any set $A$ with $0 \in A$ and $S(n) \in A$ whenever $n \in A$ has all of $\mathbb{N} \subseteq A.$ This means $\mathbb{N} \subseteq T.$</p> <p>Another detail is that we want to prove things about $\mathbb{N}$, and axiom 5 gives us proof by induction.</p>
1,783,323
<p>Given the transition matrix for a 2-state Markov chain, how do I find the $n$-step transition matrix $P^n$? I also need to take $n \to \infty$ and find the invariant probability $\pi$.</p>
Carl
339,093
<p>The conditions that have to be fulfilled for a stationary distribution on a finite Markov chain to exist are:</p> <ul> <li>It is irreducible</li> <li>Additionally, if it is aperiodic, then $P^n$ will converge to a projection matrix $ e \cdot \pi^T $ where $e = (1,\dots,1) $ and $ \pi $ is the stationary distribution (irreducible and aperiodic is the same as primitive; for a proof see Seneta 1981)</li> </ul> <p>Then, as nico already said - if it is aperiodic and irreducible - you can take the Jordan normal form and let $ n \rightarrow \infty$.</p>
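A small pure-Python sketch of the convergence $P^n \to e\cdot\pi^T$ for a concrete irreducible, aperiodic two-state chain (the transition matrix below is my own example, not from the question):

```python
P = [[0.9, 0.1],
     [0.2, 0.8]]                       # rows sum to 1; irreducible, aperiodic

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Pn = [[1.0, 0.0], [0.0, 1.0]]          # identity
for _ in range(200):
    Pn = matmul(Pn, P)                 # P^200

pi = (2 / 3, 1 / 3)                    # solves pi·P = pi for this matrix
for row in Pn:                         # every row of P^n approaches pi
    assert abs(row[0] - pi[0]) < 1e-9 and abs(row[1] - pi[1]) < 1e-9
```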
3,987,470
<p>I am reading about how a wrong formulation of the tower of Hanoi and the inductive hypothesis can lead to a dead-end.<br /> The example I am reading states the following:</p> <blockquote> <p>The task is to move N discs from a <em>specific</em> pole to another <em>specific</em> pole. Assume there are poles <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and <span class="math-container">$C$</span>. The <strong>base case</strong> is that when the number of discs is 0 then no steps are needed to complete the tasks. For the <strong>inductive step</strong> assume that we can move <span class="math-container">$n$</span> discs from pole <span class="math-container">$A$</span> to pole <span class="math-container">$B$</span> and we are required to show how to move <span class="math-container">$n + 1$</span> discs from <span class="math-container">$A$</span> to <span class="math-container">$B$</span></p> </blockquote> <p>Then it highlights that this definition is a dead-end since the only 2 ways to use the induction hypothesis as set don't lead anywhere.<br /> Specifically the options are:</p> <blockquote> <ol> <li>Move the top <span class="math-container">$n$</span> discs from pole <span class="math-container">$A$</span> to <span class="math-container">$B$</span>. After this point all possibilities of using the induction hypothesis have been exhausted since <span class="math-container">$n$</span> discs are on pole <span class="math-container">$B$</span> and we do not have hypothesis about moving discs from that pole.</li> <li>Move the smallest disc from pole <span class="math-container">$A$</span> to <span class="math-container">$C$</span>. Then move the remaining <span class="math-container">$n$</span> discs from <span class="math-container">$A$</span> to <span class="math-container">$B$</span>. 
Once again we have exhausted all possibilities of using the induction hypothesis, because <span class="math-container">$n$</span> discs are now on pole <span class="math-container">$B$</span>, and we have no hypothesis about moving discs from this pole.</li> </ol> </blockquote> <p>The reasoning for <span class="math-container">$1$</span> is clear to me. If we move the top <span class="math-container">$n$</span> discs, i.e. the <span class="math-container">$n$</span> smaller discs, to pole <span class="math-container">$B$</span>, then at some point we would have to move them again, since we would need to place the remaining largest (<span class="math-container">$(n+1)$</span>st) disc below them. And there is no inductive hypothesis for that, so I think I get it.</p> <p>I need help with the second part. How I understand it is we move the smallest disc to pole <span class="math-container">$C$</span>. Then we use the inductive hypothesis to move the remaining <span class="math-container">$n$</span> larger discs from pole <span class="math-container">$A$</span> to <span class="math-container">$B$</span>. (At this part I am not sure how this could happen if pole <span class="math-container">$C$</span> is occupied by the smallest disc, but I guess it is part of the logic of using induction in a proof.)<br /> Then at that state it seems to me that the only thing pending would be to move the smallest disc from <span class="math-container">$C$</span> to <span class="math-container">$B$</span> and finish the task.<br /> Why does it state that we would need to move the <span class="math-container">$n$</span> discs from pole <span class="math-container">$B$</span>, and that this is not possible since we don't have an induction hypothesis?<br /> Am I misunderstanding something about statements on induction here?</p>
Community
-1
<p>You are right to be concerned about the second part.</p> <p>Your statement of the only thing pending is correct. However, the second part is nonsense since the only sensible inductive hypothesis concerns 'moving n discs from pole A to pole B, assuming that 3 poles are available'.</p> <p><strong>An added observation</strong></p> <p>Looking at your queries in the various comments I feel it might be useful to point out that proof by induction and proof by minimal counterexample are logically equivalent but that some of the things you are questioning seem so much clearer w.r.t. the minimal counterexample method.</p> <p>If you are trying to prove a result then</p> <p>(1) Assume it is false.</p> <p>(2) Consider a counterexample which in some well-defined sense is minimal.</p> <p>(3) Then try to prove the result <strong>by whatever means you wish</strong> but where, whenever you need to, you can assume the result for anything 'smaller'.</p> <p>As an example of the benefits of this way of thinking you can see from this that distinctions such as 'strong' and 'weak' induction are just a distracting irrelevance.</p>
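To contrast with the dead-end formulation, here is a sketch in Python (my own addition, not part of the original answer) of the standard recursion enabled by the stronger hypothesis, namely that $n$ discs can be moved between <em>any</em> two poles, using the third as scratch space:

```python
def hanoi(n, src, dst, aux, moves):
    """Move n discs from src to dst, using aux as scratch space.

    The inductive hypothesis here is the strong one: we can move n
    discs between ANY two poles, with the third pole free to use.
    """
    if n == 0:
        return                            # base case: nothing to move
    hanoi(n - 1, src, aux, dst, moves)    # clear the top n-1 discs out of the way
    moves.append((src, dst))              # move the largest disc directly
    hanoi(n - 1, aux, dst, src, moves)    # stack the n-1 discs back on top

moves = []
hanoi(5, 'A', 'B', 'C', moves)
print(len(moves))  # 31, i.e. 2**5 - 1
```

Note how each recursive call swaps the roles of the poles; that is exactly the freedom which the weaker hypothesis "move $n$ discs from $A$ to $B$" does not grant.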
3,987,470
<p>I am reading about how a wrong formulation of the tower of Hanoi and the inductive hypothesis can lead to a dead-end.<br /> The example I am reading states the following:</p> <blockquote> <p>The task is to move N discs from a <em>specific</em> pole to another <em>specific</em> pole. Assume there are poles <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and <span class="math-container">$C$</span>. The <strong>base case</strong> is that when the number of discs is 0 then no steps are needed to complete the tasks. For the <strong>inductive step</strong> assume that we can move <span class="math-container">$n$</span> discs from pole <span class="math-container">$A$</span> to pole <span class="math-container">$B$</span> and we are required to show how to move <span class="math-container">$n + 1$</span> discs from <span class="math-container">$A$</span> to <span class="math-container">$B$</span></p> </blockquote> <p>Then it highlights that this definition is a dead-end since the only 2 ways to use the induction hypothesis as set don't lead anywhere.<br /> Specifically the options are:</p> <blockquote> <ol> <li>Move the top <span class="math-container">$n$</span> discs from pole <span class="math-container">$A$</span> to <span class="math-container">$B$</span>. After this point all possibilities of using the induction hypothesis have been exhausted since <span class="math-container">$n$</span> discs are on pole <span class="math-container">$B$</span> and we do not have hypothesis about moving discs from that pole.</li> <li>Move the smallest disc from pole <span class="math-container">$A$</span> to <span class="math-container">$C$</span>. Then move the remaining <span class="math-container">$n$</span> discs from <span class="math-container">$A$</span> to <span class="math-container">$B$</span>. 
Once again we have exhausted all possibilities of using the induction hypothesis, because <span class="math-container">$n$</span> discs are now on pole <span class="math-container">$B$</span>, and we have no hypothesis about moving discs from this pole.</li> </ol> </blockquote> <p>The reasoning for <span class="math-container">$1$</span> is clear to me. If we move the top <span class="math-container">$n$</span> discs, i.e. the <span class="math-container">$n$</span> smaller discs, to pole <span class="math-container">$B$</span>, then at some point we would have to move them again, since we would need to place the remaining largest disc (the <span class="math-container">$(n+1)$</span>st) below them. And there is no inductive hypothesis for that, so I think I get it.</p> <p>I need help with the second part. As I understand it, we move the smallest disc to pole <span class="math-container">$C$</span>. Then we use the inductive hypothesis to move the remaining <span class="math-container">$n$</span> larger discs from pole <span class="math-container">$A$</span> to <span class="math-container">$B$</span>. (At this point I am not sure how this could happen if pole <span class="math-container">$C$</span> is occupied by the smallest disc, but I guess that is part of the logic of using induction in a proof.)<br /> At that state it seems to me that the only thing pending would be to move the smallest disc from <span class="math-container">$C$</span> to <span class="math-container">$B$</span> and finish the task.<br /> Why does it state that we would need to move the <span class="math-container">$n$</span> discs from pole <span class="math-container">$B$</span>, which is not possible since we don't have an induction hypothesis for that?<br /> Am I misunderstanding something about statements on induction here?</p>
Ross Millikan
1,827
<p>The reason that the second leads to a dead end is because having moved the smallest disk to <span class="math-container">$C$</span> you can't move the bottom <span class="math-container">$n$</span> disks to <span class="math-container">$B$</span> because you can't use <span class="math-container">$C$</span> to transfer them. If you could, this would work fine. The bottom <span class="math-container">$n$</span> disks are on <span class="math-container">$B$</span> and you just move the top disk onto them and are done.</p>
4,227,536
<blockquote> <p>Let <span class="math-container">$X$</span> be the product space <span class="math-container">$\Bbb R^{\Bbb R}$</span>. Let <span class="math-container">$A \subset X $</span> be the set of all characteristic functions of finite sets. Show that the constant map <span class="math-container">$g, g(x) = 1$</span> belongs to the closure of <span class="math-container">$A$</span>.</p> </blockquote> <p>In order to show that some <span class="math-container">$x \in \overline{A}$</span> I need to show that for every open neighbourhood <span class="math-container">$U_x$</span> the property <span class="math-container">$A \cap U_x \ne \emptyset$</span> holds.</p> <p>Now I don’t know exactly how to approach the problem. Is it the case that I need to show that every neighbourhood of the constant map <span class="math-container">$g$</span> satisfies the property I stated previously?</p>
Henno Brandsma
4,280
<p>If <span class="math-container">$U$</span> is a basic open neighbourhood of <span class="math-container">$g$</span>, it is of the form <span class="math-container">$U = \bigcap_{x \in F} p_x^{-1}[U_x]$</span> for a finite subset <span class="math-container">$F \subseteq \Bbb R$</span> and for each <span class="math-container">$x$</span>, <span class="math-container">$p_x:\Bbb R^{\Bbb R} \to \Bbb R$</span> is the projection <span class="math-container">$p_x(f)=f(x)$</span> and each <span class="math-container">$U_x, x \in F$</span> is a real neighbourhood of <span class="math-container">$1$</span>. This is by the definition of the product topology on <span class="math-container">$\Bbb R^{\Bbb R}$</span>.</p> <p>This <span class="math-container">$U$</span> thus contains <span class="math-container">$\chi_F$</span> by definition: <span class="math-container">$p_x(\chi_F)=\chi_F(x) =1 \in U_x$</span> for all <span class="math-container">$x \in F$</span>, so <span class="math-container">$\chi_F \in \bigcap_{x \in F} p_x^{-1}[U_x] = U$</span>.</p> <p>As every <em>basic</em> neighbourhood of <span class="math-container">$g$</span> intersects <span class="math-container">$A$</span>, it follows trivially that <span class="math-container">$g \in \overline{A}$</span> too.</p>
627,258
<p>Hello everybody,<br> I'm trying to find another approach to topology in order to justify the axiomatization of topology. My idea was as follows:</p> <p>Given an <strong>arbitrary</strong> collection of subsets of some space: $\mathcal{C}\in\mathcal{P}^2(\Omega)$<br> Define a closure operator by: $\overline{A}:=\bigcap_{A\subseteq C\in\mathcal{C}}C$<br> This gives rise to a topology apart from the space itself being open.<br> However, considering the space as being equipped with a notion of being close, all topological questions can be studied - as in topological spaces.<br> <em>(I left out the details as being part of my research)</em></p> <p>So my question is:<br> <em>What could go BADLY! wrong if a collection satisfied all the axioms for open sets but the entire space were not necessarily open?</em></p> <p>Thanks for your help! Cheers Alex</p>
C-star-W-star
79,762
<p>Nearness Spaces</p> <p>Such a space wouldn't constitute a topology in the open set definition, though it would give rise to some sort of space in which "being close" still makes sense for <strong>all points</strong> to subsets.</p> <p>Imagine the following situation:<br> $\Omega:=\mathbb{S}^2\cup\{\mathbb{B}^3\}$<br> $d(x,A):=\lVert x-A\rVert$ (in the usual sense)<br> In this example there would be a point being close to every subset:<br> $\forall A\subseteq\Omega:\quad\mathbb{B}^3\in\overline{A}$</p>
33,817
<p>It is an open problem to prove that $\pi$ and $e$ are algebraically independent over $\mathbb{Q}$.</p> <ul> <li>What are some of the important results leading toward proving this?</li> <li>What are the most promising theories and approaches for this problem?</li> </ul>
Evan Jenkins
396
<p><a href="http://en.wikipedia.org/wiki/Schanuel%27s_conjecture">Schanuel's conjecture</a> would imply this result. It states that if $z_1, \ldots, z_n$ are linearly independent over $\mathbb{Q}$, then $\mathbb{Q}(z_1, \ldots, z_n, e^{z_1}, \ldots, e^{z_n})$ has transcendence degree at least $n$ over $\mathbb{Q}$. In particular, if we take $z_1 = 1$, $z_2 = \pi i$, then Schanuel's conjecture would imply that $\mathbb{Q}(1, \pi i, e, -1) = \mathbb{Q}(e, \pi i)$ has transcendence degree 2 over $\mathbb{Q}$.</p>
496,255
<p>Let $u$ be an integer of the form $4n+3$, where $n$ is a positive integer. Can we find integers $a$ and $b$ such that $u = a^2 + b^2$? If not, how to establish this for a fact? </p>
David Vaknin
216,025
<p>Let's assume $x^2+y^2 = 4n+3$. Since the right side is odd, exactly one of $x$ and $y$ is even; say $x = 2z$ (so $y$ is odd) and write</p> <p>$(2z)^2+y^2 = 4n+3$. This can also be written as follows:</p> <p>$(2z+y)^2-4zy = 4n+3$, and by rearrangement we can write</p> <p>$(2z+y)^2-1^2 = 4n+2+4zy$</p> <p>$(2z+y)^2-1^2 = 2(2n+2zy+1)$ and further</p> <p>$(2z+y-1)(2z+y+1) = 2(2n+2zy+1)$</p> <p>Since $y$ is odd, $2z+y$ is odd, so the left side is a product of two even numbers and hence divisible by $4$; the right side is twice an odd number, so it is not divisible by $4$. So the assumption is wrong, and no number of the form $4n+3$ can be a sum of two squares.</p>
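A complementary check (Python, my own addition) based on the standard mod-$4$ argument: squares are $0$ or $1 \pmod 4$, so a sum of two squares is never $3 \pmod 4$:

```python
from math import isqrt

# Squares mod 4 can only be 0 or 1 ...
square_residues = {(x * x) % 4 for x in range(4)}
print(square_residues)  # {0, 1}

# ... so a sum of two squares mod 4 lies in {0, 1, 2}; it is never 3.
sum_residues = {(a + b) % 4 for a in square_residues for b in square_residues}
print(sum_residues)  # {0, 1, 2}

# Brute-force confirmation for the first few numbers of the form 4m + 3.
for m in range(1, 200):
    u = 4 * m + 3
    r = isqrt(u)
    assert all(x * x + y * y != u for x in range(r + 1) for y in range(r + 1))
```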
1,012,895
<p>I am stuck with my revision for the upcoming test.</p> <p>The question asks:</p> <p>An implementation of insertion sort spent 1 second to sort a list of ${10^6}$ records. How many seconds will it spend to sort ${10^7}$ records?</p> <p>By using $\frac{T(x)}{T(1)} = \frac{10^7}{10^6}$ I thought the answer was $10$ seconds, but the actual answer says it's $100$ seconds.</p> <p>Can someone please help me out? :(</p>
Aditya Hase
190,645
<p>Hint: Time complexity of Insertion sort is $O(n^2)$</p> <p>Roughly speaking if $10^6$ records took $1$ second then </p> <p>$10\times 10^6$ records will take $10^2\times1=100$ seconds</p>
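To see the quadratic growth concretely, here is a small Python sketch (my own addition, not from the original answer) that counts element shifts on reversed inputs, insertion sort's worst case:

```python
def insertion_sort_shifts(data):
    """Insertion sort that returns the number of element shifts performed."""
    a = list(data)
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
            shifts += 1
        a[j + 1] = key
    return shifts

n = 1000
small = insertion_sort_shifts(range(n, 0, -1))       # worst case: reversed list
big = insertion_sort_shifts(range(10 * n, 0, -1))    # 10x the input size
print(small, big, round(big / small))  # 499500 49995000 100
```

Ten times the records means roughly $10^2=100$ times the shifts, which is why the answer is $100$ seconds rather than $10$.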
3,620,375
<p>I am asked to calculate the integral <span class="math-container">$$\int_C \frac{1}{z-a}dz$$</span> where <span class="math-container">$C$</span> is the circle centered at the origin with radius <span class="math-container">$r$</span> and <span class="math-container">$|a|\neq r$</span></p> <p>I parametrized the circle and got <span class="math-container">$$\int_0^{2\pi}\frac{ire^{it}}{re^{it}-a}dt=\text{log}(re^{2\pi i}-a)-\text{log}(r-a)=\text{log}(r-a)-\text{log}(r-a)=0$$</span></p> <p>Because of how the complex logarithm is defined, I am pretty sure that the first equaility is wrong, it it is, what is the correct solution?</p>
Community
-1
<p>By Cauchy's Integral Formula, we get <span class="math-container">$2\pi i$</span>, if <span class="math-container">$|a|\lt r$</span>. Otherwise we get <span class="math-container">$0$</span>, by Cauchy's theorem.</p>
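Both cases can be checked numerically (a Python sketch of my own; the values $a=0.3$, $a=2$ and $r=1$ are just illustrative choices):

```python
import cmath

def circle_integral(a, r, N=4000):
    """Midpoint-rule approximation of the integral of dz/(z - a) over |z| = r."""
    total = 0j
    for k in range(N):
        t = 2 * cmath.pi * (k + 0.5) / N   # midpoint of the k-th sub-arc
        z = r * cmath.exp(1j * t)          # point on the circle
        dz = 1j * z * (2 * cmath.pi / N)   # z'(t) dt
        total += dz / (z - a)
    return total

print(circle_integral(0.3, 1.0))  # |a| < r: close to 2*pi*i
print(circle_integral(2.0, 1.0))  # |a| > r: close to 0
```

For a periodic analytic integrand the midpoint rule converges extremely fast, so even modest $N$ reproduces $2\pi i$ and $0$ to many digits.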
132,003
<p>I have a time consuming function that is going to be iterated in a <code>Nest</code> or <code>NestList</code> and I would like to know if there is a good way to monitor the progress. I have found a partial work-around, but it requires an extra global variable (n). </p> <pre><code>fun[x_] := Module[{}, n++; Pause[1]]; ProgressIndicator[Dynamic[n/5]] n = 0; NestList[fun, Null, 5] </code></pre> <p>Besides being poor coding practice, this is a problem because when I call the Nest from different places in the larger code (for example, make two copies of the above and execute both), all the progress indicators move synchronously, rather than being limited to the <code>NestList</code> that is actually executing.</p>
Kuba
5,478
<p><code>fun</code> should not know about <code>n</code>:</p> <pre><code>nestListWithMonitor[f_, init_, n_] := Module[{it = 0}, PrintTemporary[ProgressIndicator[Dynamic[it/n]]]; NestList[(it++; f[#]) &amp;, init, n] ] </code></pre> <p><code>it</code> is highlighted <code>Red</code> but it doesn't matter if its parent <code>Dynamic</code> is not meant to survive across sessions.</p> <p>Now you can use it with whatever <code>fun</code> you want.</p> <pre><code>fun[x_] := (Pause[1]; x + 1); nestListWithMonitor[fun, 1, 5] </code></pre>
3,079,493
<p>Let <span class="math-container">$$D_6=\langle a,b| a^6=b^2=1, ab=ba^{-1}\rangle$$</span> <span class="math-container">$$D_6=\{1,a,a^2,a^3,a^4,a^5,b,ab,a^2b,a^3b,a^4b,a^5b\}$$</span></p> <p>I would like to compute its character table and its irreducible representations.</p> <p>I will explain what I have done so far and I will add some doubts I had while doing this.</p> <p><strong>MY ATTEMPT</strong></p> <ol> <li><p>Compute conjugacy classes. <span class="math-container">$$C_1=\{e\}, C_2=\{a,a^5\},C_3=\{a^2,a^4\}$$</span><span class="math-container">$$C_4=\{a^3\},C_5=\{b,a^2b,a^4b\},C_6=\{ab,a^3b,a^5b\}$$</span></p></li> <li><p>Find <span class="math-container">$1$</span>-dimensional representations. Since <span class="math-container">$D_6/\{a,a^5\}\cong \mathbb{Z}_2$</span>, we have one more representation apart from <span class="math-container">$\alpha_1=id$</span>. That is <span class="math-container">$$\alpha_2: G \longrightarrow \mathbb{C}: a \mapsto 1, b\mapsto -1$$</span> Again using irreducible representations from quotient group by normal subgroup, I considered <span class="math-container">$G/\{\overline{1},\overline{a},\overline{b},\overline{ab}\}\cong \mathbb{Z}_2\times\mathbb{Z}_2$</span> (since it is abelian). Then from here I obtained <span class="math-container">$$\alpha_3:G\longrightarrow \mathbb{C}: a\mapsto -1, b\mapsto 1$$</span> <span class="math-container">$$\alpha_4:G\longrightarrow \mathbb{C}: a\mapsto -1, b\mapsto -1$$</span></p></li> <li><p>Find <span class="math-container">$2$</span>-dimensional representations. 
I have seen in my notes that for <span class="math-container">$D_n$</span> we can define <span class="math-container">$2$</span>-dimensional representations: <span class="math-container">$$\alpha_5: G\longrightarrow GL_2(\mathbb{C}): a\mapsto \begin{bmatrix}cos(\frac{2\pi}{n}) &amp; -sin(\frac{2\pi}{n})\\ sin(\frac{2\pi}{n}) &amp;cos(\frac{2\pi}{n})\end{bmatrix}, b\mapsto \begin{bmatrix}1 &amp; 0\\ 0 &amp;-1\end{bmatrix}$$</span> Hence my <span class="math-container">$$\alpha_5: G\longrightarrow GL_2(\mathbb{C}): a\mapsto \begin{bmatrix}\frac{1}{2} &amp; -\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2} &amp;\frac{1}{2}\end{bmatrix}, b\mapsto \begin{bmatrix}1 &amp; 0\\ 0 &amp;-1\end{bmatrix}$$</span></p></li> <li><p>Build my character table. <span class="math-container">\begin{array}{|c|c|c|c|} \hline &amp; C_1 &amp; C_2 &amp;C_3 &amp;C_4 &amp;C_5 &amp;C_6 \\ \hline \chi_1&amp; 1 &amp; 1 &amp;1 &amp;1 &amp;1&amp;1 \\ \hline \chi_2&amp; 1 &amp; 1 &amp;1 &amp;1 &amp;-1 &amp;-1 \\ \hline \chi_3&amp; 1 &amp; -1 &amp;1 &amp;-1 &amp;1 &amp;-1 \\ \hline \chi_4&amp; 1 &amp; -1 &amp;1 &amp;-1 &amp;-1 &amp;1 \\ \hline \chi_5&amp; 2 &amp; 1 &amp;-1 &amp;-2 &amp;0 &amp;0 \\ \hline \chi_6&amp; 2 &amp; -1 &amp;-1 &amp;2 &amp;0 &amp;0 \\ \hline \end{array}</span></p></li> </ol> <p>where I have computed <span class="math-container">$\chi_6$</span> by the orthogonality formula <span class="math-container">$(\chi_6|\chi_j)=\delta_{6,j}$</span>.</p> <p><strong>QUESTIONS</strong></p> <ol> <li>Is <span class="math-container">$D_6/\{a^2,a^4\}$</span> really abelian? I can not see it clearly.</li> <li><p>My first question comes when I have to find <span class="math-container">$2$</span>-dimensional irreducible representations. I have find them because I have seen it in my notes. But how could I get <span class="math-container">$\alpha_5$</span> and <span class="math-container">$\alpha_6$</span> without knowing the special case of <span class="math-container">$D_n$</span>. 
I know that I also could get it from <span class="math-container">$S_3$</span> (one of them). But I have again the same problem, if you are looking for <span class="math-container">$2$</span>-dimensional irreducible representations of <span class="math-container">$S_3$</span>, how do you find them? (Both).</p></li> <li><p>Now consider <span class="math-container">$X$</span> to be the set of the vertices of a regular <span class="math-container">$6$</span>-gon and consider the action of <span class="math-container">$D_6$</span> on the set <span class="math-container">$X$</span> by restricting the usual action of <span class="math-container">$D_6$</span> on the <span class="math-container">$6$</span>-gon to the set of vertices <span class="math-container">$X$</span>. Let <span class="math-container">$\phi$</span> be the induced permutation representation (over <span class="math-container">$\mathbb{C}$</span>) of <span class="math-container">$D_6$</span>. I would like to write it as a sum of irreducible representations by computing the in-product of the irreducible characters with <span class="math-container">$\chi_{\phi}$</span>. What should I do? I do not understand this induced permutation representation. Any help?</p></li> </ol>
Ben Dyer
164,207
<blockquote> <ol> <li>Is <span class="math-container">$D_6/\{a^2,a^4\}$</span> really abelian? I can not see it clearly.</li> </ol> </blockquote> <p>In your notation it looks like <span class="math-container">$\{a^2,a^4\}$</span> is denoting a conjugacy class and not a (normal) subgroup, so presumably you mean <span class="math-container">$D_6/\{1,a^2,a^4\}$</span>. But yes its abelian.</p> <blockquote> <ol start="2"> <li>My first question comes when I have to find <span class="math-container">$2$</span>-dimensional irreducible representations. I have find them because I have seen it in my notes. But how could I get <span class="math-container">$\alpha_5$</span> and <span class="math-container">$\alpha_6$</span> without knowing the special case of <span class="math-container">$D_n$</span>. I know that I also could get it from <span class="math-container">$S_3$</span> (one of them). But I have again the same problem, if you are looking for <span class="math-container">$2$</span>-dimensional irreducible representations of <span class="math-container">$S_3$</span>, how do you find them? (Both).</li> </ol> </blockquote> <p>To find irreducible representations of <span class="math-container">$S_3$</span> note there is a natural 3-dimensional representation as permutation matrices (the standard representation). There is a 1 dimensional sub-representation of this which is the span of the vector <span class="math-container">$(1,1,1)$</span>. Split off this summand to get an 2d irrep.</p> <blockquote> <ol start="3"> <li>Now consider <span class="math-container">$X$</span> to be the set of the vertices of a regular <span class="math-container">$6$</span>-gon and consider the action of <span class="math-container">$D_6$</span> on the set <span class="math-container">$X$</span> by restricting the usual action of <span class="math-container">$D_6$</span> on the <span class="math-container">$6$</span>-gon to the set of vertices <span class="math-container">$X$</span>. 
Let <span class="math-container">$\phi$</span> be the induced permutation representation (over <span class="math-container">$\mathbb{C}$</span>) of <span class="math-container">$D_6$</span>. I would like to write it as a sum of irreducible representations by computing the in-product of the irreducible characters with <span class="math-container">$\chi_{\phi}$</span>. What should I do? I do not understand this induced permutation representation. Any help?</li> </ol> </blockquote> <p>Remember <span class="math-container">$\chi_\phi(g) = tr(\phi(g))$</span>. Since this character is coming from a group action there is also the interpretation as the number of fixed points, <span class="math-container">$\chi_\phi(g) = |\#\{x : g.x = x\}|$</span>. For example if <span class="math-container">$g$</span> is a rotation then <span class="math-container">$\phi(g)$</span> has no fixed points so <span class="math-container">$\chi = 0$</span>. Equivalently, <span class="math-container">$\phi(g)$</span> has 0s on the diagonal so <span class="math-container">$\chi_\phi(g) = tr(\phi(g)) = 0$</span>. If <span class="math-container">$\phi(g)$</span> is a reflection then there will be fixed points and the character will be non-zero.</p>
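For question 3, here is a computational sketch (Python, my own addition). It assumes the convention that $a$ acts on the vertices $0,\dots,5$ by $v \mapsto v+1 \pmod 6$ and $b$ by $v \mapsto -v \pmod 6$, so that $b, a^2b, a^4b$ are the reflections through opposite vertices (the class $C_5$ above). It counts fixed points to get $\chi_\phi$ and then takes inner products with each irreducible character from the table:

```python
from math import cos, pi

n = 6  # hexagon vertices, labelled 0..5; |D_6| = 12

def fixed_points(perm):
    """chi_phi(g): a permutation matrix's trace is its number of fixed points."""
    return sum(1 for v in range(n) if perm(v) == v)

def irreducible_chars(is_reflection, k):
    """Values (chi_1, ..., chi_6) from the character table on a^k or a^k b."""
    if not is_reflection:  # rotation a^k: the 2-dim reps contribute 2cos(...)
        return [1, 1, (-1) ** k, (-1) ** k,
                2 * cos(pi * k / 3), 2 * cos(2 * pi * k / 3)]
    # reflection a^k b: both 2-dimensional characters vanish
    return [1, -1, (-1) ** k, -((-1) ** k), 0, 0]

multiplicities = []
for i in range(6):
    total = 0.0
    for k in range(n):
        rot = lambda v, k=k: (v + k) % n   # a^k
        ref = lambda v, k=k: (k - v) % n   # a^k b
        total += fixed_points(rot) * irreducible_chars(False, k)[i]
        total += fixed_points(ref) * irreducible_chars(True, k)[i]
    multiplicities.append(round(total / 12))

print(multiplicities)  # [1, 0, 1, 0, 1, 1]: phi = chi_1 + chi_3 + chi_5 + chi_6
```

The dimensions check out: $1+1+2+2=6$, the size of the vertex set being permuted.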
3,565,015
<p>I generated this polynomial after playing around with the golden ratio. I first observed (using various properties of <span class="math-container">$\phi$</span>) that <span class="math-container">$\phi^3+\phi^{-3}=4\phi-2$</span>. This equation has no significance at all; I just mention it because the whole problem stems from me wondering: which other numbers does this equation hold for?</p> <p>The six possible answers are the roots of <span class="math-container">$x^6-4x^4+2x^3+1=0$</span>. Note that I am <em>not</em> interested in solving for <span class="math-container">$x$</span> itself as much as I am interested in a method which would allow me to completely factor this polynomial into lowest-degree factors which still have real coefficients. Note that I am treating this equation as if I had no clue that the golden ratio is one of the solutions. In other words, I am trying to factor this equation as if I never saw it before, so I can't just immediately factor out <span class="math-container">$(x^2-x-1)$</span> without a justifiable process, even though it is indeed one of the factors.</p> <p>I first observed that the equation holds for <span class="math-container">$x=1$</span>, so I was able to divide out <span class="math-container">$(x-1)$</span> to get the factorization of:</p> <p><span class="math-container">$$(x-1)(x^5+x^4-3x^3-x^2-x-1)$$</span></p> <p>I tried making an assumption that the quintic reduces to a product of <span class="math-container">$(x^3+Ax^2+Bx+C)(x^2+Dx+E)$</span>, multiplying out, and equating coefficients, but I ended up with a system of extremely convoluted equations which I had no idea how to solve. I also tried to turn the first five terms of the quintic into a palindromic polynomial and then perform the standard method of factoring palindromic polynomials, to no avail.</p> <p>I am either missing something, or I don't know of a nice method that would let this expression be factored.
I'm looking forward to being enlightened, thanks for any help.</p>
Abhijeet Vats
426,261
<p>Here's a possible way to do it:</p> <p><span class="math-container">$x^6-4x^4+2x^3+1 = (x^6+2x^3+1)-4x^4 = (x^3+1)^2 - 4x^4$</span></p> <p><span class="math-container">$(x^3+1)^2-4x^4 = [x^3+1-2x^2][x^3+1+2x^2]$</span></p> <p><span class="math-container">$x^6-4x^4+2x^3+1= [(x^3-x^2)+(1-x^2)][x^3+2x^2+1]$</span></p> <p>Then, we have:</p> <p><span class="math-container">$x^6-4x^4+2x^3+1 =[x^3+2x^2+1][x^2(x-1)+(1-x)(1+x)]$</span></p> <p><span class="math-container">$x^6-4x^4+2x^3+1 = (x-1)(x^2-x-1)[x^3+2x^2+1]$</span></p> <p>So that gives you a decently nice factored form. </p>
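This factorisation can be double-checked at the level of coefficients (a Python sketch of my own, multiplying coefficient lists by convolution):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, highest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

factors = [
    [1, -1],       # x - 1
    [1, -1, -1],   # x^2 - x - 1
    [1, 2, 0, 1],  # x^3 + 2x^2 + 1
]
product = factors[0]
for f in factors[1:]:
    product = poly_mul(product, f)

print(product)  # [1, 0, -4, 2, 0, 0, 1] == x^6 - 4x^4 + 2x^3 + 1
```

Since $x^2-x-1$ appears among the factors, the golden ratio does indeed satisfy the original equation.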
3,265,835
<p>I came across an equation in physics with a different kind of differential: where I expected <span class="math-container">$dx^2$</span> it had <span class="math-container">$d^2x$</span>. I tried a substitution to make sense of it, putting <span class="math-container">$x = u^2$</span> and differentiating twice to get <span class="math-container">$d^2x = 2\,du^2$</span>. So I thought it could be right, but my teachers said we cannot do this, and no other justification comes to mind. (Here is the equation: <a href="https://i.stack.imgur.com/C5Bg8.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/C5Bg8.jpg</a>)</p>
Joel Biffin
332,245
<p>This is simply an abuse of the notation for the infinitesimal <span class="math-container">$dx$</span>. Often scientists (most frequently Physicists) will use slightly misleading notation due to a few little "calculus tricks" which are not 100% mathematically sound but work 90% of the time. </p> <p>In this case, I believe it is written like this because of the form of the second differential operator,</p> <p><span class="math-container">$$\frac{d}{dx}\left (\frac{d}{dx}\right)(u)=\frac{d}{dx}\left ( \frac{du}{dx} \right ) = \frac{d^2u}{dx^2}=\frac{d^2}{dx^2}(u).$$</span></p> <p>Often when people are taught how to solve a separable ordinary differential equation like,</p> <p><span class="math-container">$$\frac{du}{dx}=f(u)g(x),$$</span></p> <p>they are taught to "put all the <span class="math-container">$x$</span> on one side and put all the <span class="math-container">$u$</span> on the other",</p> <p><span class="math-container">$$\frac{1}{f(u)}\frac{du}{dx}=g(x).$$</span></p> <p>The next step is the one which is often taught in a "misleading" way: some students are told to "multiply by <span class="math-container">$dx$</span>" to give</p> <p><span class="math-container">$$\frac{1}{f(u)}du = g(x)dx,$$</span></p> <p>"and then integrate" (just stick integral signs in front of both sides of the equation),</p> <p><span class="math-container">$$\int \frac{1}{f(u)}du = \int g(x)dx.$$</span></p> <p>Now you might ask, the result is correct so why does it matter if this is not 100% mathematically sound? Well it goes back to the simple fact that an integral is an operator with respect to some variable of integration. Therefore one cannot simply "stick integral signs in front of both sides". </p> <p>Instead, if we were to integrate both sides we <strong>must</strong> integrate with respect to some variable and we <strong>must not</strong> break the sacred equality rule - perform the identical operations on the left hand side as you do the right - i.e.
we integrate both sides with respect to the same variable.</p> <p>So instead of "multiplying by <span class="math-container">$dx$</span>", what actually is going on is that from <span class="math-container">$$\frac{1}{f(u)}\frac{du}{dx}=g(x),$$</span> we <strong>integrate both sides with respect to <span class="math-container">$x$</span></strong>, producing</p> <p><span class="math-container">$$\int \frac{1}{f(u)}\frac{du}{dx}dx=\int g(x)dx,$$</span></p> <p>as required.</p> <p>In summary, it is abuses like this which lead to misleading notation such as <span class="math-container">$d^2x$</span> rather than <span class="math-container">$(dx)^2$</span> or more acceptably <span class="math-container">$dx dx$</span>.</p>
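As a concrete instance of the sound procedure described above (my own worked example, taking $f(u)=u$ and $g(x)=x$; the constant $A=\pm e^{C}$ is just bookkeeping):

```latex
% Worked example (my addition): du/dx = u x, i.e. f(u) = u, g(x) = x.
\frac{1}{u}\frac{du}{dx} = x
\;\Longrightarrow\;
\int \frac{1}{u}\frac{du}{dx}\,dx = \int x\,dx
\;\Longrightarrow\;
\ln\lvert u\rvert = \frac{x^{2}}{2} + C
\;\Longrightarrow\;
u = A e^{x^{2}/2}, \qquad A = \pm e^{C}.
```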
2,877,833
<p><a href="https://i.stack.imgur.com/9wqM5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9wqM5.png" alt="enter image description here"></a></p> <p>Look at this part:</p> <blockquote> <p>Define the vector $p = -\nabla f(x^*)$ and note that $p^T\nabla f(x^*) = -||\nabla f(x^*)||^2 &lt;0$. Because $f$ is continuous near $x^*$, there is a scalar $T&gt;0$ such that </p> <p>$p^T\nabla f(x^*+tp) &lt;0, \forall t\in [0,T]$</p> </blockquote> <p>Why the continuity of the gradient imply that? I understand that because the gradient is continuous, we can move around smoothly and retain the signal. But I'd suppose it works for $\nabla f$ only. Why it works for $p^T\nabla f(x^*+tp)$?</p> <p>Also, what if I chose $p = \nabla f(x^*)$ instead of the negative?</p>
user293794
293,794
<p>I think what you're missing is the following fact: if $F:\mathbb{R}^n\rightarrow\mathbb{R}$ is continuous and satisfies $F(x_0)&lt;0$ then there exists some $\delta&gt;0$ such that $F(x)&lt;0$ for all $x$ such that $|x-x_0|&lt;\delta$. You should try to prove this from the limit definition of continuity. In your particular example, $F$ is the continuous function $p^T\nabla f(x)$ so by the fact above we know that $p^T\nabla f(x)&lt;0$ for all $|x-x^*|&lt;\delta$ for some $\delta$. We then just choose $T$ so that $|x^*+tp-x^*|=t|p|&lt;\delta$ whenever $t\in[0,T]$.</p>
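For completeness, one way to prove that sign-preservation fact directly from the definition of continuity (a sketch of my own):

```latex
% Sketch (my addition): continuity preserves a strict sign locally.
% Given F(x_0) < 0, set
\varepsilon := -\tfrac{1}{2}F(x_0) > 0.
% Continuity at x_0 yields \delta > 0 such that
|x - x_0| < \delta \;\implies\; |F(x) - F(x_0)| < \varepsilon
\;\implies\; F(x) < F(x_0) + \varepsilon = \tfrac{1}{2}F(x_0) < 0.
```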
227,833
<p>In the documentation article for <code>Polygon</code> in Mathematica 12, there is an example with the input:</p> <pre><code>pol = Polygon[{{1, 0}, {0, Sqrt[3]}, {-1, 0}}] </code></pre> <p>In the documentation article the output is displayed as:</p> <blockquote> <pre><code>Polygon[{{1, 0}, {0, Sqrt[3]}, {-1, 0}}] </code></pre> </blockquote> <p>But when I evaluate the code I get a different output with some information about the polygon, the number of points, the dimension, and more. It looks like this:</p> <p><a href="https://i.stack.imgur.com/iA16Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iA16Z.png" alt="polygon" /></a></p> <p>Is there a way to control what is obtained as output?</p>
N0va
42,436
<p>Following the comments under m_goldberg Answer to this question (<a href="https://mathematica.stackexchange.com/a/227859">https://mathematica.stackexchange.com/a/227859</a>) the following code disables the SummaryBox for <code>Polygon</code> only without disabling all elided forms or modifying the protected symbol <code>BoxForm`UseIcons</code>:</p> <pre><code>ClearAll[Region`PolygonDump`summaryBox] Region`PolygonDump`summaryBox[poly_, format_] := ToBoxes[InputForm@poly, format] Region`PolygonDump`summaryBox[___] := $Failed Attributes[Region`PolygonDump`summaryBox] = {HoldAllComplete}; </code></pre> <p><a href="https://i.stack.imgur.com/lIeCG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lIeCG.png" alt="Disablingthe SummaryBox for Polygon" /></a></p> <p><code>Region`PolygonDump`summaryBox</code> is the internal function constructing the SummaryBox for <code>Polygon</code> which I found using <code>GeneralUtilities`PrintDefinitions[Polygon]</code>. I have not noticed any problems using this but unwanted side effects might occur when modifying core functions and the internal functionality for <code>BoxForm</code> of <code>Polygon</code> might change in the future but the presented solution works for me in 12.1.1.0.</p>
223,642
<p>$z\cdot e^{1/z}\cdot e^{-1/z^2}$ at $z=0$.</p> <p>My answer is removable singularity. $$ \lim_{z\to0}\left|z\cdot e^{1/z}\cdot e^{-1/z^2}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{z-1}{z^2}}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{-1}{z^2}}\right|=0. $$ But someone says it is an essential singularity. I don't know why.</p>
DonAntonio
31,254
<p>$$ze^{1/z}e^{-1/z^2}=z\left(1+\frac{1}{z}+\frac{1}{2!z^2}+...\right)\left(1-\frac{1}{z^2}+\frac{1}{2!z^4}-...\right)$$</p> <p>So this looks like an essential singularity, uh? </p> <p>I really don't understand how you made the following step:</p> <p>$$\lim_{z\to 0}\left|z\cdot e^{\frac{z-1}{z^2}}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{-1}{z^2}}\right|$$</p> <p>What happened to that $\,z\,$ in the exponential's power?</p>
223,642
<p>$z\cdot e^{1/z}\cdot e^{-1/z^2}$ at $z=0$.</p> <p>My answer is removable singularity. $$ \lim_{z\to0}\left|z\cdot e^{1/z}\cdot e^{-1/z^2}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{z-1}{z^2}}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{-1}{z^2}}\right|=0. $$ But someone says it is an essential singularity. I don't know why.</p>
JavaMan
6,491
<p>First, notice that $$\lim_{z \to 0} e^{1/z}$$ does not exist as you get different values when you approach $0$ along the real line $x + 0i$ from the right and from the left.</p> <p>From there, it is not difficult to show that $\lim_{z \to 0} z e^{1/z} e^{-1/z^2}$ does not exist either. Finally, we need to show that $\lim_{z \to 0}\frac{1}{f(z)}$ does not exist <a href="http://en.wikipedia.org/wiki/Essential_singularity#Alternate_descriptions" rel="nofollow">in order for $f(z)$ to have an essential singularity at $z = 0$</a>.</p> <p>In other words, you need to examine</p> <p>$$ \lim_{z \to 0} \frac{e^{1/z^2}}{ze^{1/z}}. $$</p> <p>I'll leave this part to you.</p>
1,137,079
<p>I'm new to the concept of complex plane. I found this exercise:</p> <blockquote> <p>Let $z,z_1,z_2\in\mathbb C$ such that $z=z_1/z_2$. Show that the length of $z$ is the quotient of the length of $z_1$ and $z_2$.</p> </blockquote> <p>If $z_1=x_1+iy_1$ and $z_2=x_2+iy_2$ then $|z_1|=\sqrt{x_1^2+y_1^2}$ and $|z_2|=\sqrt{x_2^2+y_2^2}$, which yields $|z_1|/|z_2|=\sqrt{\dfrac{x_1^2+y_1^2}{x_2^2+y_2^2}}$.</p> <p>Now, $z=\dfrac{z_1}{z_2}=\dfrac{x_1+iy_1}{x_2+iy_2} $. The first issue is to try and separate the imaginary part from the real one. I did this by: $$\dfrac{x_1+iy_1}{x_2+iy_2}=\dfrac{x_1+iy_1}{x_2+iy_2}\times\frac{x_2-iy_2}{x_2-iy_2}=\frac{x_1x_2-ix_1y_2+ix_2y_1+y_1y_2}{x_2^2+y_2^2}\\ =\frac{x_1x_2+y_1y_2}{x_2^2+y_2^2}+i\frac{x_2y_1-x_1y_2}{x_2^2+y_2^2}.$$ Hence $|z|=\sqrt{\left(\dfrac{x_1x_2+y_1y_2}{x_2^2+y_2^2}\right)^2+\left(\dfrac{x_2y_1-x_1y_2}{x_2^2+y_2^2}\right)^2}=\sqrt{\dfrac{(x_1x_2+y_1y_2)^2+(x_2y_1-x_1y_2)^2}{(x_2^2+y_2^2)^2}}$. Continuing I get $$\sqrt{\frac{(x_1x_2)^2+(y_1y_2)^2+(x_2y_1)^2+(x_1y_2)^2}{(x_2^2+y_2^2)^2}},$$ but I can't see any future here. Is there any mistake? How can I simplify the expression for $|z|$? I appreciate your help.</p>
David K
139,123
<p>Given $z = z_1/z_2$, you can conclude that $z z_2 = z_1$, so $|z_1| = |z z_2|$. If you can show that $|z z_2| = |z||z_2|$ then you can divide both sides by $|z_2|$ to get the desired result.</p> <p>So you just need to know that the modulus of a product of two numbers is the product of the modulus of each number.</p> <p>You can write any complex number $z$ as $z = r e^{i\theta}$ where $r = |z|$ and $\theta = \mathrm{Arg}\ z$. This works in reverse, too: if $r$ and $\theta$ are real and $r e^{i\theta} = z$, then $|z| = r$.</p> <p>So for a product of two numbers $z_a z_b$, let $z_a = r_a e^{i\theta_a}$ and $z_b = r_b e^{i\theta_b}.$ Then $$z_a z_b = r_a e^{i\theta_a} r_b e^{i\theta_b} = r_a r_b e^{i(\theta_a+\theta_b)},$$ and so $|z_a z_b| = r_a r_b = |z_a||z_b|.$</p> <p>You could also have proved the ratio formula directly in this fashion, but I find multiplication easier to work with.</p> <p>In terms of developing the theory, maybe it's too early to start using the exponential format, but it's a handy way to remember facts like this.</p>
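Both facts are easy to spot-check numerically (my sketch, not part of the answer):

```python
z1 = complex(3, 4)   # |z1| = 5
z2 = complex(1, -2)

# modulus of a product is the product of the moduli ...
prod_ok = abs(abs(z1 * z2) - abs(z1) * abs(z2)) < 1e-12

# ... hence modulus of a quotient is the quotient of the moduli
quot_ok = abs(abs(z1 / z2) - abs(z1) / abs(z2)) < 1e-12

print(prod_ok, quot_ok)  # True True
```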
2,921,439
<p>I got this summation from the book <a href="https://rads.stackoverflow.com/amzn/click/0201558025" rel="nofollow noreferrer">Concrete Mathematics</a> which I didn't exactly understand:</p> <p>$$ \begin{align} Sn &amp;= \sum_{1 \leqslant k \leqslant n} \sum_{1 \leqslant j \lt k} {\frac{1}{k-j}} \\ &amp;= \sum_{1 \leqslant k \leqslant n} \sum_{1 \leqslant k-j \lt k} {\frac{1}{j}} \\ &amp;= \sum_{1 \leqslant k \leqslant n} \sum_{0 \lt j \leqslant k-1} {\frac{1}{j}} \\ \end{align} $$</p> <p>I didn't understand why $1 \leqslant j \lt k$ became $1 \leqslant k-j \lt k$ in the second line and why $1 \leqslant k-j \lt k$ became $0 \lt j \leqslant k-1$ in the third line.</p> <p>Can you guys help me understand that?</p>
Chinny84
92,628
<p>$$ -x^3-x^2+4x + 4 = -x^2(x+1) + 4(x+1) $$ Then we can see that $$ \frac{x+1}{-x^3-x^2+4x + 4 } = \frac{x+1}{-x^2(x+1) + 4(x+1)} = \frac{1}{4-x^2} $$</p>
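A quick numerical spot-check of the cancellation (my sketch): the factorization used here is $-x^3-x^2+4x+4=(x+1)(4-x^2)$.

```python
def lhs(x):
    return (x + 1) / (-x ** 3 - x ** 2 + 4 * x + 4)

def rhs(x):
    return 1 / (4 - x ** 2)

# the two sides agree wherever both are defined (x != -1 and x != +/-2)
samples = [0.0, 0.5, 1.0, 3.0, -0.5]
diffs = [abs(lhs(x) - rhs(x)) for x in samples]
print(max(diffs))  # ~0
```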
1,141,632
<p>I need to answer a question on fractals from the book <em>Fractals Everywhere</em> by M. Barnsley and I have been struggling with it for a while:</p> <p>Use the collage theorem to help you find an IFS consisting of two affine maps in $\mathbb{R}^2$ whose attractor is close to this set: <img src="https://i.stack.imgur.com/8H7l9.jpg" alt="enter image description here"></p> <p>It would be great to have some hints/help - I feel like it should be really simple but I'm having a hard time wrapping my head around it. Thanks!</p>
heropup
118,193
<p>The basic idea is to think of two affine mappings that each map the entire picture to a proper subset of itself. Instead of trying to match up the entire set of points, all you need to do is think about the placement of the vertices of the bounding box such that certain points are identified under the mapping.</p> <p>What I mean by this is that, for example, one mapping clearly must identify the center of the spiral with itself, whilst slightly rotating the image counterclockwise and shrinking it slightly so that it identifies with the subset of the spiral that lacks the outermost "sub-spiral." To obtain an affine mapping that does this, try to solve the appropriate linear transformation equation that takes three corners of the bounding box to the (approximate) points where you'd expect those corners to go under this mapping.</p> <p>The second mapping takes the entire spiral and maps it to <em>one</em> of the little sub-spirals, say, the outermost. Take care to note that this mapping translates, shears, rotates, and scales the large spiral. Again, you can eyeball it but the more precise you are, the closer the resulting image will be to the given picture.</p> <p>The composition of these two mappings in an IFS will probabilistically generate the entire set, although you may need to tune the probabilities accordingly so that the density of points is not uneven.</p> <p>Here is a link to an interactive demonstration that allows you to manipulate the affine mappings (as they are defined by their effect on the bounding box) in real-time to see how the IFS is generated:</p> <p><a href="http://demonstrations.wolfram.com/FractalCreationWithIteratedFunctionSystems/" rel="nofollow">http://demonstrations.wolfram.com/FractalCreationWithIteratedFunctionSystems/</a></p>
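For experimentation, here is a minimal chaos-game sketch of a two-map spiral IFS in Python. The rotation, scaling, and translation constants are my own illustrative guesses, not values fitted to the book's figure:

```python
import cmath
import random

def ifs_points(n=5000, seed=1):
    # map 1: rotate by 0.4 rad and shrink by 0.9 about the spiral centre (origin)
    # map 2: squash the whole spiral onto one small sub-spiral near z = 1
    random.seed(seed)
    a = 0.9 * cmath.exp(0.4j)   # assumed scale/rotation
    pts, z = [], 0j
    for _ in range(n):
        if random.random() < 0.85:
            z = a * z
        else:
            z = 0.15 * z + 1
        pts.append(z)
    return pts

pts = ifs_points()
print(len(pts), max(abs(z) for z in pts))
```

Plotting the real and imaginary parts of <code>pts</code> shows the attractor; adjusting the two maps (and the branch probability) toward the corner placements suggested by the collage theorem moves the picture closer to the target set.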
2,794,715
<p>Is it right that</p> <p><strong>$$\sqrt[a]{2^{2^n}+1}$$</strong></p> <p>for every $$a&gt;1,n \in \mathbb N $$ </p> <p>is always irrational?</p>
lhf
589
<p>$\sqrt[a]{m}$ is rational iff it is an integer iff $m$ is an $a$-th power.</p> <p>The question deals with <a href="https://en.wikipedia.org/wiki/Fermat_number" rel="nofollow noreferrer">Fermat numbers</a>.</p> <p>Apart from the easy counterexamples, <a href="https://en.wikipedia.org/wiki/Fermat_number#Primality_of_Fermat_numbers" rel="nofollow noreferrer">Wikipedia</a> says that even a weaker version of the question is an open problem:</p> <blockquote> <p>Does a Fermat number exist that is not square-free?</p> </blockquote>
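For what it's worth, the first few Fermat numbers can be checked directly against being perfect powers (a small brute-force sketch of mine; it says nothing about the open general case):

```python
def is_perfect_power(m, a):
    # check the integer candidates nearest the float a-th root (fine at this size)
    r = round(m ** (1.0 / a))
    return any((r + d) ** a == m for d in (-1, 0, 1))

fermat = [2 ** (2 ** n) + 1 for n in range(5)]   # 3, 5, 17, 257, 65537
hits = [(F, a) for F in fermat for a in range(2, 17) if is_perfect_power(F, a)]
print(fermat, hits)  # hits == []
```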
2,261,410
<blockquote> <p>The generating function for a Bessel equation is:</p> <p>$$g(x,t) = e^{(x/2)(t-1/t)}$$</p> <p>Using the product $g(x,t)\cdot g(x,-t)$ show that:</p> <p>a) $$[J_0(x)]^2 + 2[J_1(x)]^2 + 2[J_2(x)]^2 + \cdots = 1$$</p> <p>and consequently:</p> <p>b)</p> <p>$$|J_0(x)|\le 1, \forall x$$</p> <p>c) $$|J_n(x)| \le \frac{1}{\sqrt{2}}; n=1,2,3,\cdots$$</p> </blockquote> <p>For a) I tried the product:</p> <p>$$e^{(x/2)(t-1/t)}\cdot e^{(x/2)(-t+1/t)} = 1$$</p> <p>I at least arrived at the right side of the equation. Since this generates the Bessel functions, I should arrive at something related to $J_n$ on the left side. I know that</p> <p>$$e^{(x/2)(t-1/t)} = \sum_{n=-\infty}^{\infty} J_n(x)t^n$$</p> <p>$$e^{(x/2)(-t+1/t)} = \sum_{n=-\infty}^{\infty} J_n(x)(-t)^n$$</p> <p>but it's not just a matter of multiplying coefficients from the two infinite series, right?</p>
Jack D'Aurizio
44,121
<p>By the <a href="http://mathworld.wolfram.com/Jacobi-AngerExpansion.html" rel="nofollow noreferrer">Jacobi-Anger expansion</a> we have:</p> <p>$$e^{iz\sin\theta} = \sum_{n=-\infty}^{+\infty}J_n(z) e^{in\theta}\tag{1}$$ hence by <a href="https://en.wikipedia.org/wiki/Parseval%27s_theorem" rel="nofollow noreferrer">Parseval's theorem</a> it follows that</p> <p>$$ \sum_{n=-\infty}^{+\infty}\left|J_n(z)\right|^2 = \frac{1}{2\pi}\int_{0}^{2\pi}e^{iz\sin \theta} e^{iz\sin(-\theta)}\,d\theta = 1.\tag{2}$$</p>
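Identity (2), along with the bounds (b) and (c) that follow from it, can be checked numerically using only the integral representation $J_n(x)=\frac{1}{\pi}\int_0^\pi\cos(n\theta-x\sin\theta)\,d\theta$ (a rough quadrature sketch, my addition):

```python
import math

def bessel_j(n, x, steps=4000):
    # J_n(x) = (1/pi) * Integral_0^pi cos(n*t - x*sin t) dt, trapezoid rule
    h = math.pi / steps
    s = 0.5 * (1.0 + math.cos(n * math.pi))   # endpoint values of the integrand
    for k in range(1, steps):
        t = k * h
        s += math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

x = 1.5
total = bessel_j(0, x) ** 2 + 2 * sum(bessel_j(n, x) ** 2 for n in range(1, 25))
print(total)  # ~1, in line with (2)
```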
2,612,794
<p>I have a very elementary question but I do not see where my mistake is. </p> <p>Suppose we have a sequence $(x_n)$ with $\lim_{n\to\infty}x_n=1$. Moreover, suppose that the sequence $({x_n}^c)$ for some constant $c&gt;1$ has limit $\lim_{n\to\infty}{x_{n}}^c=c$.</p> <p>Then $$ \lim_{n\to\infty}\log({x_n}^c)=\log(c). $$</p> <p>But since $\log({x_n}^c)=c\log(x_n)$, I also have</p> <p>$$ \lim_{n\to\infty}\log({x_n}^c)=c\lim_{n\to\infty}\log(x_n)=0. $$</p> <p>Where is my mistake? Maybe in the assumptions of the sequences.</p>
user326210
326,210
<p>If $x_n\rightarrow 1$, the only way to get $x_n^c\rightarrow c$ is if $c=1$. There is no other exponent $c$ which makes this work. </p> <p>After all, if $x_n\rightarrow 1$, then $x_n^c \rightarrow 1^c = 1$ as well, by continuity (see below). But then if $1=\lim_{n\rightarrow\infty}x_n^c = c$, then necessarily $c=1$.</p> <p>So <strong>you can't simultaneously have $c&gt;1$ and $x_n^c \rightarrow c$</strong> because this is a contradiction; anything derived from it may also be a contradiction. In particular, by continuity, $\lim_{n\rightarrow \infty}\log(x_n^c) = c \log(1) = 0$ always. But if we assume $\lim_{n\rightarrow\infty} x_n^c = c$ and $c&gt;1$, we find that instead $\lim_{n\rightarrow \infty}\log(x_n^c) = \log(c) &gt; \log(1) = 0$&mdash; a contradiction.</p> <p>We assumed something that cannot happen, and got a contradiction as a result. </p> <hr/> <ul> <li><p>First, note that if $h$ is any continuous function, then $$\lim_{n\rightarrow \infty }h\left(x_n\right) = h\left(\lim_{n\rightarrow \infty }x_n\right)$$</p></li> <li><p>Next, let $f$ be the function $f(x) = x^c$, let $g(x)=\log(x)$, and suppose $x_n$ is a sequence such that $x_n \rightarrow 1$.</p></li> <li>Because $f$ and $g$ are continuous, we know that : $$\lim_{n\rightarrow \infty} \log(x^c) = \lim_{n\rightarrow \infty} g(f(x_n)) = g\left(f\left(\lim_{n\rightarrow \infty} x_n\right)\right) = g(f(1)) = \log(1^c) = \log(1) = 0$$</li> <li>Because of the property of logs, we know that: $$\lim_{n\rightarrow \infty} \log(x^c) = \lim_{n\rightarrow \infty} c\log(x_n) = c\lim_{n\rightarrow \infty}\log(x_n) = c \lim_{n\rightarrow \infty}g(x_n) = c \cdot g\left(\lim_{n\rightarrow \infty}x_n\right) = c\cdot g(1) = c\cdot \log(1) = c\cdot 0 = 0$$</li> </ul> <hr/> <p>Taking a step back, we have that $x_n\rightarrow 1$, and because of continuity, $x_n^c \rightarrow 1$ as well. Hence $\log(x_n)$ and $\log(x_n^c)$ both go to 0 as $n\rightarrow \infty$.</p>
1,428,143
<p>Let $f:E\to F$ where $E$ and $F$ are metric spaces. We suppose $f$ continuous. I know that if $I\subset E$ is compact, then $f(I)$ is also compact. But if $J\subset F$ is compact, do we also have that $f^{-1}(J)$ is compact?</p> <p>If yes, and if $E$ and $F$ are not necessarily compact, does it still work?</p>
robjohn
13,854
<p>Clayton's answer based on non-injectivity is very good. However, we can also base a counterexample on non-surjectivity.</p> <p>Consider $f:\mathbb{R}\mapsto[-1,1]$ given by $$ f(x)=\frac{x}{\sqrt{x^2+1}} $$ Then $$ f^{-1}([-1,1])=\mathbb{R} $$</p>
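Concretely (my sketch): $f(x)=x/\sqrt{x^2+1}$ never leaves $(-1,1)$, so every real $x$ lies in $f^{-1}([-1,1])=\mathbb{R}$, which is closed but not compact.

```python
import math

def f(x):
    return x / math.sqrt(x * x + 1)

xs = [-1e6, -10.0, -1.0, 0.0, 1.0, 10.0, 1e6]
vals = [f(x) for x in xs]
print(vals)
print(all(-1 <= v <= 1 for v in vals))  # True: all of R maps into J = [-1, 1]
```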
1,433,980
<p>So the problem I'm having deals with conditional probability. I am given so much information and don't know what to do with what. Here is the problem:</p> <p>"A study investigated whether men and women place more importance on a mate's ability to express his/her feelings or on a mate's ability to make a good living. In the study, 55% of the participants were men, 71% of participants said that feelings were more important, and 35% of the participants were men that said feelings were more important. Suppose that an individual is randomly selected from the participants in this study. Let M be the event that the individual is male and F be the event that the individual said that feelings were more important."</p> <p>I am asked to find $P(M' \cap F)$</p> <p>I get confused on which percentages to use. For instance, 55% of all the participants were men, so that means 45% were women. 71% of the participants said feelings were more important so that means 29% said feelings were not important. Of the 71% that said feelings were important, 35% of them were men. So does that mean the percentage of women that said feelings were important is 65%? Or would that be 36%? Since I am finding $P(M' \cap F)$, for M' would I use the 65% (or 36%) or would I use the 45% of the total population?</p> <p>Thanks</p>
WW1
88,679
<p>I disagree that </p> <p>"35% of the participants were men that said feelings were more important" </p> <p>means the same as </p> <p>" Of the 71% that said feelings were important, 35% of them were men"</p> <p>The original question does not contain any conditional probabilities.</p> <p>I read the information provided as ...</p> <p>$$ P(M)=0.55 \,\,\,P(F)= 0.71 \,\,\, P(M \cap F)=0.35 $$</p> <p>So $$P(M' \cap F) = P(F)-P(M \cap F)=0.71-0.35=0.36$$</p>
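Under that reading, the computation is one subtraction (my sketch):

```python
p_m = 0.55          # P(M)
p_f = 0.71          # P(F)
p_m_and_f = 0.35    # P(M and F)

# P(M' and F) = P(F) - P(M and F)
p_not_m_and_f = p_f - p_m_and_f
print(round(p_not_m_and_f, 2))  # 0.36
```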
2,691,232
<p>Let $E$ be a complex Hilbert space.</p> <blockquote> <p>I look for an example of $A,B\in \mathcal{L}(E)$ such that $A\neq 0$ and $B\neq 0$ but $AB=0$.</p> </blockquote>
user284331
284,331
<p>Let $A=E_{1,1}$ and $B=E_{2,2}$, then $AB=0$, where $E_{1,1}$ is the matrix has scalar $1$ only at $(1,1)$ entry, $E_{2,2}$ is the matrix has scalar $1$ only at $(2,2)$ entry.</p>
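For a concrete finite-dimensional check (my sketch, taking $E=\mathbb{C}^2$ so the operators are $2\times2$ matrices):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

E11 = [[1, 0], [0, 0]]   # nonzero operator A
E22 = [[0, 0], [0, 1]]   # nonzero operator B

print(matmul(E11, E22))  # [[0, 0], [0, 0]]: AB = 0 although A, B != 0
```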
8,997
<p>I have a set of data points in two columns in a spreadsheet (OpenOffice Calc):</p> <p><img src="https://i.stack.imgur.com/IPNz9.png" alt="enter image description here"></p> <p>I would like to get these into <em>Mathematica</em> in this format:</p> <pre><code>data = {{1, 3.3}, {2, 5.6}, {3, 7.1}, {4, 11.4}, {5, 14.8}, {6, 18.3}} </code></pre> <p>I have Googled for this, but what I find is about importing the entire document, which seems like overkill. Is there a way to kind of cut and paste those two columns into <em>Mathematica</em>? </p>
Peter
28,135
<p>Here is an Excel VBA function that will generate the list in an Excel cell. Copy the cell and paste directly into <em>Mathematica</em>:</p> <pre><code>Function ToMathematicaList(Y_Values, X_Values)
    N1 = Y_Values.Count
    stout = "{"
    For J = 1 To (N1 - 1)
        stout = stout &amp; "{" &amp; X_Values(J) &amp; ", " &amp; Y_Values(J) &amp; "},"
    Next J
    stout = stout &amp; "{" &amp; X_Values(N1) &amp; ", " &amp; Y_Values(N1) &amp; "}}"
    ToMathematicaList = stout
End Function
</code></pre>
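If you'd rather skip the spreadsheet formula, the same paste-ready string can be produced from two columns with a few lines of Python (my sketch):

```python
def to_mathematica_list(xs, ys):
    # build a Mathematica-style nested list literal from two columns
    pairs = ", ".join("{%g, %g}" % (x, y) for x, y in zip(xs, ys))
    return "{" + pairs + "}"

xs = [1, 2, 3, 4, 5, 6]
ys = [3.3, 5.6, 7.1, 11.4, 14.8, 18.3]
print(to_mathematica_list(xs, ys))
# {{1, 3.3}, {2, 5.6}, {3, 7.1}, {4, 11.4}, {5, 14.8}, {6, 18.3}}
```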
481,167
<p>Let $V$ be a $\mathbb{R}$-vector space. Let $\Phi:V^n\to\mathbb{R}$ be a multilinear symmetric operator.</p> <p>Is it true and how do we show that for any $v_1,\ldots,v_n\in V$, we have:</p> <p>$$\Phi[v_1,\ldots,v_n]=\frac{1}{n!} \sum_{k=1}^n \sum_{1\leq j_1&lt;\cdots&lt;j_k\leq n} (-1)^{n-k}\phi (v_{j_1}+\cdots+v_{j_k}),$$ where $\phi(v)=\Phi(v,\ldots,v)$.</p> <p>My question comes from the fact that I have seen this formula when I was reading about mixed volume, and also when I was reading about mixed Monge-Ampère measure. The setting was not exactly the one of a vector space $V$ but I think the formula is true here and I am interested in having this property shown out of the specific context of Monge-Ampère measures or volumes. I have done some work in the other direction, <em>i.e.</em> starting from an operator $\phi:V\to\mathbb{R}$ satisfying some condition and obtaining a multilinear operator $\Phi$; below are the results I have seen in this direction.</p> <p>I already know that if $\phi':V\to\mathbb{R}$ is such that for any $v_1,\ldots,v_n\in V$, $\phi'(\lambda_1 v_1+\ldots+\lambda_n v_n)$ is a homogeneous polynomial of degree $n$ in the variables $\lambda_i$, then there exists a unique multilinear symmetric operator $\Phi':V^n\to\mathbb{R}$ such that $\Phi'(v,\ldots,v)=\phi'(v)$ for any $v\in V$. 
Moreover $\Phi'(v_1,\ldots,v_n)$ is the coefficient of the symmetric monomial $\lambda_1\cdots\lambda_n$ in $\phi'(\lambda_1 v_1+\ldots+\lambda_n v_n)$ (see <a href="https://math.stackexchange.com/questions/469342/symmetric-multilinear-form-from-an-homogenous-form">Symmetric multilinear form from an homogenous form.</a>).</p> <p>I also know that if $\phi'(\lambda v)=\lambda^n \phi'(v)$ and we define $$\Phi''(v_1,\ldots,v_n)=\frac{1}{n!} \sum_{k=1}^n \sum_{1\leq j_1&lt;\cdots&lt;j_k\leq n} (-1)^{n-k}\phi' (v_{j_1}+\cdots+v_{j_k}),$$ then $\Phi''(v,\ldots,v)=\frac{1}{n!} \sum_{k=1}^n (-1)^{n-k} \binom{n}{k} k^n \phi'(v)=\phi'(v)$ (see <a href="https://math.stackexchange.com/questions/465172/show-this-equality-the-factorial-as-an-alternate-sum-with-binomial-coefficients">Show this equality (The factorial as an alternate sum with binomial coefficients).</a>). It is clear that $\Phi''$ is symmetric, but I don't know if $\Phi''$ is multilinear.</p> <p>Formula for $n=2$: $$\Phi[v_1,v_2]=\frac12 [\phi(v_1+v_2)-\phi(v_1)-\phi(v_2)].$$</p> <p>Formula for $n=3$: $$\Phi[v_1,v_2,v_3]=\frac16 [\phi(v_1+v_2+v_3)-\phi(v_1+v_2)-\phi(v_1+v_3)-\phi(v_2+v_3)+\phi(v_1)+\phi(v_2)+\phi(v_3)].$$</p>
Anthony Carapetis
28,513
<p>This is true, here's a proof.</p> <p>I'm going to use the polynomial notation $\Phi\left(v_{1},\ldots,v_{n}\right)=v_{1}\cdots v_{n}$ - note that the multilinearity and symmetry of $\Phi$ means that manipulating these like polynomials (i.e. commuting elements, distributing ``multiplication'') is completely legitimate. Let the RHS of your proposed equation be $\frac{1}{n!}F\left(n\right)$.</p> <p>Using the multinomial expansion, we have $$ F\left(n\right)=\sum_{k=1}^{n}\left(-1\right)^{n-k}f\left(n,k\right) $$ where $$ f\left(n,k\right)=\sum_{1\le j_{1}&lt;\cdots&lt;j_{k}\le n}\ \sum_{l_{1}+\cdots+l_{k}=n}{n \choose l_{1},\ldots,l_{k}}v_{j_{1}}^{l_{1}}\cdots v_{j_{k}}^{l_{k}}. $$ Let's try to compute the coefficient of $v_{j_{1}}^{l_{1}}\cdots v_{j_{k}}^{l_{k}}$ in $F\left(n\right)$. The most obvious contribution is from $f\left(n,k\right)$, which gives $\left(-1\right)^{n-k}{n \choose l_{1},\ldots,l_{k}}.$ But there are more contributions: for every $K&gt;k$ we have terms where $K-k$ of the $l$s are zero. The contribution from $f\left(n,K\right)$ is $$ \left(-1\right)^{n-K}\sum\left\{ {n \choose l_{1},\ldots,l_{k},0,0,\ldots}:j_{k+1},\ldots,j_{K}\textrm{ distinct from }j_{1},\ldots,j_{k}\right\} . $$ All we need to do to compute this is count the number of choices of $K-k$ of the $n-k$ remaining indices, so we get $$ \left(-1\right)^{n-K}{n \choose l_{1},\ldots,l_{k}}{n-k \choose K-k}. $$ The coefficient of $v_{j_{1}}^{l_{1}}\cdots v_{j_{k}}^{l_{k}}$ in $F\left(n\right)$ is thus $$ \sum_{K=k}^{n}\left(-1\right)^{n-K}{n \choose l_{1},\ldots,l_{k}}{n-k \choose K-k}=\sum_{K=k}^{n}\left(-1\right)^{n-K}\frac{n!}{l_{1}!\cdots l_{k}!}{n-k \choose K-k}. $$ We want to show that this is $n!$ when $k=n,l_{j}=1$ and zero otherwise. The first case is easy - there is only a single term in the sum and all of $n,k,K$ are just $n$, so it falls out immediately. Let's try the zero case. 
Factoring out the $K$-independent terms gives $$ \left(-1\right)^{n}\frac{n!}{l_{1}!\cdots l_{k}!}\sum_{K=k}^{n}\left(-1\right)^{K}{n-k \choose K-k}. $$ Making a change of variables $j=K-k$ turns the sum to $$ \left(-1\right)^{k}\sum_{j=0}^{n-k}\left(-1\right)^{j}{n-k \choose j}. $$ This is the alternating sum of the binomial coefficients, which vanishes as required.</p>
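The identity is easy to test numerically on a concrete example (my sketch): take $V=\mathbb{R}^2$, $n=3$, a fixed $a\in V$, and $\Phi(v_1,v_2,v_3)=(v_1\cdot a)(v_2\cdot a)(v_3\cdot a)$, so that $\phi(v)=(v\cdot a)^3$.

```python
from itertools import combinations
from math import factorial

a = (1.0, 2.0)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def Phi(vs):
    # a symmetric multilinear form on (R^2)^3
    out = 1.0
    for v in vs:
        out *= dot(v, a)
    return out

def phi(v):
    # its diagonal restriction: phi(v) = Phi(v, v, v)
    return dot(v, a) ** 3

def polarization(vs):
    # (1/n!) * sum over nonempty subsets {j1<...<jk} of (-1)^(n-k) phi(v_j1+...+v_jk)
    n = len(vs)
    total = 0.0
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            s = tuple(sum(vs[i][c] for i in idx) for c in range(2))
            total += (-1) ** (n - k) * phi(s)
    return total / factorial(n)

vs = [(1.0, 0.5), (-2.0, 3.0), (0.25, 1.0)]
print(Phi(vs), polarization(vs))  # both 18.0
```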
204,592
<p>The matrix exponential is a well known thing, but when I see it online it is provided for matrices. Does the same expansion hold for a linear operator? That is, if $A$ is a linear operator then $$e^A=I+A+\frac{1}{2}A^2+\cdots+\frac{1}{k!}A^k+\cdots$$</p>
Fly by Night
38,495
<p>As you have suggested, if $A$ is a linear operator then:</p> <p>$$\exp A = I + A + \frac{1}{2}A^2 + \cdots + \frac{1}{k!}A^k + \cdots \, . $$</p> <p>These are very common in physics. <a href="http://www.tcs.tifr.res.in/~pgdsen/pages/courses/2007/quantalgo/lectures/lec06.pdf" rel="nofollow">Here is a link</a> to a PDF file.</p>
204,592
<p>The matrix exponential is a well known thing, but when I see it online it is provided for matrices. Does the same expansion hold for a linear operator? That is, if $A$ is a linear operator then $$e^A=I+A+\frac{1}{2}A^2+\cdots+\frac{1}{k!}A^k+\cdots$$</p>
Hagen von Eitzen
39,174
<p>The exponential series has a remarkably "ubiquitous" convergence. As soon as you have a $\mathbb Q$-algebra $M$ with a norm such that $||X Y||\le c\cdot ||X||\cdot ||Y||$ for some $c$, then $\exp(A)$ converges for all $A$ with respect to this norm. Hence if $M$ is complete, you indeed obtain an element of $M$. Moreover, if $AB=BA$ then $\exp(A+B)=\exp(A)\exp(B)$ holds.</p> <p>There are even cases when the exponential series is useful even when division by $k!$ is undefined. One just has to be careful that $A$ must be nilpotent enough (i.e. $A^k=0$ for all $k$ for which division by $k!$ is undefined).</p>
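A tiny illustration of the nilpotent case (my sketch): for $A$ with $A^2=0$ the series terminates, and $\exp(A)=I+A$ exactly.

```python
from math import factorial

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=12):
    # truncated exponential series: I + A + A^2/2! + ...
    out = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        power = matmul(power, A)
        out = [[out[i][j] + power[i][j] / factorial(k) for j in range(2)]
               for i in range(2)]
    return out

A = [[0.0, 1.0], [0.0, 0.0]]      # nilpotent: A^2 = 0
print(expm(A))                    # [[1.0, 1.0], [0.0, 1.0]], i.e. I + A
```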
526,820
<p>How do I integrate the inner integral on the 2nd line? </p> <p><img src="https://i.stack.imgur.com/uIxQX.png" alt="enter image description here"></p> <hr> <p>$$\int^\infty_{-\infty} x \exp\{ -\frac{1}{2(1-\rho^2)} (x-y\rho)^2 \} \, dx$$</p> <p>I know I can use integration by substitution, let $u = \frac{x-y\rho}{\sqrt{1-\rho^2}}$ resulting in</p> <p>$$\sqrt{1-\rho^2}\int^{\infty}_{-\infty} [u\sqrt{1-\rho^2} + y\rho] e^{-u^2/2} \; du$$</p> <p>That's the 3rd line in the image, but how do I proceed? </p>
AstroSharp
83,876
<p>$$\ldots=\sqrt{1-\rho^2}\left[\sqrt{1-\rho^2}\underbrace{\int_{\mathbb{R}}ue^{-u^2/2}du}_{0\mbox{ odd integrand}}+y\rho{\int_{\mathbb{R}}e^{-u^2/2}du}\right]=y\rho\sqrt{1-\rho^2}\int_\mathbb{R}e^{-u^2/2}du=\ldots$$</p>
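Since $\int_{\mathbb R}e^{-u^2/2}du=\sqrt{2\pi}$, the inner integral equals $y\rho\sqrt{2\pi(1-\rho^2)}$; a crude Riemann-sum check with sample values $\rho=0.5$, $y=2$ (my sketch):

```python
import math

rho, y = 0.5, 2.0
s2 = 1.0 - rho * rho                      # variance 1 - rho^2

def integrand(x):
    return x * math.exp(-(x - y * rho) ** 2 / (2.0 * s2))

dx = 1e-3
num = sum(integrand(-10.0 + k * dx) for k in range(20000)) * dx
exact = y * rho * math.sqrt(2.0 * math.pi * s2)
print(num, exact)   # agree closely
```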
1,285,014
<p>Let $R,S$ be commutative rings with identity.</p> <p>Proving that $X \sqcup Y$ is an affine scheme is the same as proving that $Spec(R) \sqcup Spec(S) = Spec(R \times S)$.</p> <p>I proved that if $R,S$ are rings, then the ideals of $R \times S$ are exactly of the form $P \times Q$, where $P$ is an ideal of $R$ and $Q$ is an ideal of $S$.</p> <p>However, for prime ideals this is not true in general.</p> <p>If $I$ is a prime ideal of $R \times S$, then $I = \mathfrak{p} \times \mathfrak{q}$, where $\mathfrak{p}$ is a prime ideal of $R$ and $\mathfrak{q}$ is a prime ideal of $S$.</p> <p>But if $\mathfrak{p}$ is a prime ideal of $R$ and $\mathfrak{q}$ is a prime ideal of $S$, it is not true in general that $\mathfrak{p} \times \mathfrak{q}$ is a prime ideal of $R \times S$.</p> <p>Then, $Spec(R \times S) \subseteq Spec(R) \times Spec(S)$ and the reverse inclusion is false in general.</p> <p>My question is, what is $Spec(R) \sqcup Spec(S)$ set-theoretically, in order to use what I proved above?</p>
Rob Arthan
23,171
<p>I am assuming you mean truth table in the title. Consider a conjunction of literals such as $p \land \lnot q \land \lnot r$: this is true for the assignment given by the row with $(p, q, r) = (1, 0, 0)$ in the truth table and not for any other row. The DNF is the disjunction of the conjunctions corresponding to the rows in which your formula is true. $$ \begin{align*} &amp; \lnot p \land \lnot q \land \lnot r\\ {}\lor {} &amp;\lnot p \land \lnot q \land r \\ {}\lor {} &amp;\lnot p \land q \land \lnot r \\ {}\lor {} &amp;\lnot p \land q \land r \\ {}\lor {} &amp;p \land \lnot q \land \lnot r \\ \end{align*} $$</p>
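The row-by-row construction can be automated; here is a small sketch (my addition) that builds the DNF of an arbitrary three-variable function and checks equivalence. The example function below happens to be true on exactly the five rows listed above.

```python
from itertools import product

def minterm(row):
    # conjunction of literals that is true exactly on this assignment
    return lambda p, q, r: (p, q, r) == row

def dnf(f):
    # one conjunction per true row of the truth table, then disjoin them
    rows = [row for row in product([False, True], repeat=3) if f(*row)]
    terms = [minterm(row) for row in rows]
    return lambda p, q, r: any(t(p, q, r) for t in terms), len(terms)

f = lambda p, q, r: (not p) or (not q and not r)
g, nterms = dnf(f)
ok = all(g(*row) == f(*row) for row in product([False, True], repeat=3))
print(nterms, ok)  # 5 True
```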
2,409,183
<p>Good evening all! I'm trying to find the eigenvalues and eigenvectors of the following problem</p> <p>$$ \begin{bmatrix} -10 &amp; 8\\ -18 &amp; 14\\ \end{bmatrix}*\begin{bmatrix} x_{1}\\ x_{2}\\ \end{bmatrix} $$</p> <p>I've found that $λ_{1,2}=2$, where $λ$ is a double eigenvalue of our matrix, and now I'm trying to find its eigenvectors. So we have to solve the above system $$(-10-λ)*x_{1}+8*x_{2}=0 , -18*x_{1}+(14-λ)*x_{2}=0$$ So what I do here is to replace $λ=2$ and we have $$-12*x_{1}+8*x_{2}=0 , -18*x_{1}+12*x_{2}=0$$</p> <p>From here I get that $3*x_{1}=2*x_{2}$ and I think that gives us the eigenvector $\begin{bmatrix} 2/3\\ 1\\ \end{bmatrix}$</p> <p>My book is telling me that the eigenvector for $λ=2$ is $\begin{bmatrix} 2\\ 3\\ \end{bmatrix}$ but I can't find the same solution. Can someone help me?</p> <p>EDIT : I added my own try at solving it.</p>
Mark Fischler
150,362
<p>Clearly the surface integral vanishes if it has a factor of any component to an odd power. We can always write the surface integral in spherical coordinates, aligning the $z$ axis with one of the indices appearing to the $4$-th or $2$-nd power. </p> <p>We can also make the substitution $u = \cos \theta$ where $du = -\sin \theta\, d\theta$.</p> <p>Thus when $i=j=k=\ell$ the integral is $$ -\int_1^{-1}2\pi u^4 du=\frac45\pi $$ (the $2\pi$ comes from the integral over $\phi$.) </p> <p>And when $i=j$ and $k=\ell\neq i$ (and all other pairings of the like nature) the integral is $$ -\int_{\phi = 0}^{2\pi}\cos^2\phi\int_1^{-1}u^2(1-u^2) du $$ where we have used $\sin^2 = 1-\cos^2$ to turn the $\sin^2\theta$ into $(1-u^2)$.</p> <p>The $d\phi$ integral is easy if you remember that over a full cycle, the average value of $\cos^2\phi$ is $\frac12$. We are left with $$ \pi \int_{-1}^1(u^2-u^4)\,du=\pi\left( \frac23-\frac25 \right)=\frac4{15}\pi $$</p> <p>Finally, we need to turn these into a unified expression involving the indices $i,j,k,\ell$, which is allowed to use the Kronecker delta $\delta_{mn} = 1$ if $m=n$ and $0$ otherwise.</p> <p>The integral must then be of the form $$ \int \hat{r}_i\hat{r}_j\hat{r}_k\hat{r}_\ell= \frac4{15} \pi (\delta_{ij}\delta_{k\ell}+\delta_{ik}\delta_{j\ell}+\delta_{i\ell}\delta_{jk}) + \lambda\, \delta_{ij}\delta_{ik}\delta_{i\ell} $$ where $\lambda$ is chosen to make the all-indices-equal answer come out to $\frac45\pi$. When all four indices are equal the first term already becomes $\frac{12}{15}\pi = \frac45 \pi$, so $\lambda$ must be $0$. The answer is $$ \int \hat{r}_i\hat{r}_j\hat{r}_k\hat{r}_\ell= \frac4{15} \pi (\delta_{ij}\delta_{k\ell}+\delta_{ik}\delta_{j\ell}+\delta_{i\ell}\delta_{jk}) $$</p>
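The two base cases can be confirmed by direct quadrature over the sphere (my sketch), using $d\Omega = du\,d\phi$ with $u=\cos\theta$:

```python
import math

def sphere_integral(f, n_u=400, n_phi=400):
    # integrate f(nx, ny, nz) over the unit sphere (midpoint rule in u and phi)
    du, dphi = 2.0 / n_u, 2.0 * math.pi / n_phi
    total = 0.0
    for i in range(n_u):
        u = -1.0 + (i + 0.5) * du
        s = math.sqrt(1.0 - u * u)
        for j in range(n_phi):
            phi = (j + 0.5) * dphi
            total += f(s * math.cos(phi), s * math.sin(phi), u)
    return total * du * dphi

all_equal = sphere_integral(lambda x, y, z: z ** 4)      # 4*pi/5
mixed = sphere_integral(lambda x, y, z: x * x * z * z)   # 4*pi/15
print(all_equal, 4 * math.pi / 5)
print(mixed, 4 * math.pi / 15)
```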
48,989
<p>How to prove $\text{Rank}(AB)\leq \min(\text{Rank}(A), \text{Rank}(B))$?</p>
ncmathsadist
4,154
<p>You know that a linear transformation cannot increase the dimension of its domain; i.e. If $T: V\rightarrow W$ is a linear transformation, $$\dim(T(V))\le \dim(V).$$</p>
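Empirically (my sketch, with a throwaway floating-point row-reduction rank), the inequality holds on random integer matrices, as it must by the dimension argument above applied to both factors:

```python
import random

def rank(M, eps=1e-9):
    # rank via Gauss-Jordan elimination with a small zero tolerance
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

random.seed(0)
for _ in range(50):
    A = [[random.randint(-2, 2) for _ in range(4)] for _ in range(3)]
    B = [[random.randint(-2, 2) for _ in range(3)] for _ in range(4)]
    assert rank(matmul(A, B)) <= min(rank(A), rank(B))
print("rank(AB) <= min(rank A, rank B) held in all trials")
```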
1,828,097
<p>If we construct two straight lines as shown:<a href="https://i.stack.imgur.com/8K5Eo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8K5Eo.png" alt="enter image description here"></a></p> <p>Then join them so as to complete a triangle. <a href="https://i.stack.imgur.com/Uvtnw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uvtnw.png" alt="enter image description here"></a></p> <p>It is taught that we can find infinitely many points on a straight line. So there are infinitely many points on $DE$ and $BC$. </p> <p>If we join $A$ with $BC$ as shown:<a href="https://i.stack.imgur.com/HuWos.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HuWos.png" alt="enter image description here"></a> we can find one point on $DE$ and a corresponding point on $BC$. So the numbers of points on $DE$ and $BC$ are the same.</p> <p>Hence, can we say that $\infty=\infty$? But then why is $\infty - \infty \neq0$?</p> <p>I'm not sure whether this makes any sense or not; your suggestions are appreciated.</p>
avz2611
142,634
<p>The problem is that when we define a point, we consider it to be dimensionless; but a line is comprised of points, so by that reasoning it should be dimensionless as well, and that's not the case. To get around the problem, consider an $\epsilon$ value that is the minimum distance between two points. Now you will realize that for that $\epsilon$ spacing on $DE$, the distance between the two corresponding points on $BC$ is more than $\epsilon$. Thus, if we place points optimally, we can place more points on $BC$ than on $DE$, and the ratio of the numbers of points will tend to the ratio of their lengths.</p>
4,462,081
<p>I actually already have the solution to the following expression, yet it took me a long time to decipher the first operation provided in the answer. I understand all of the following except how to convert <span class="math-container">$\left(1+e^{i\theta \ }\right)^n=\left(e^{\frac{i\theta }{2}}\left(e^{\frac{-i\theta }{2}}+e^{\frac{i\theta }{2}}\right)\right)^n$</span></p> <p>I am not sure if I wrote the expression correctly; I am new to this website.</p> <p>Thank You!</p>
Matt E.
948,077
<p>The first operation is just a result of algebraic manipulation. See here:</p> <p><span class="math-container">\begin{align} 1 + e^{i\theta} = e^0 + e^{i\theta} = e^{i\frac{\theta}{2} - i\frac{\theta}{2}} + e^{i\frac{\theta}{2} + i\frac{\theta}{2}} = e^{i\frac{\theta}{2}}e^{-i\frac{\theta}{2}} + e^{i\frac{\theta}{2}}e^{i\frac{\theta}{2}} = e^{i\frac{\theta}{2}} \left( e^{-i\frac{\theta}{2}} + e^{i\frac{\theta}{2}} \right). \end{align}</span></p>
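Numerically (my sketch), all three forms, the original, the factored one, and the fully simplified $(2\cos(\theta/2))^n e^{in\theta/2}$, agree:

```python
import cmath

theta, n = 0.7, 5

lhs = (1 + cmath.exp(1j * theta)) ** n
mid = (cmath.exp(1j * theta / 2)
       * (cmath.exp(-1j * theta / 2) + cmath.exp(1j * theta / 2))) ** n
rhs = (2 * cmath.cos(theta / 2)) ** n * cmath.exp(1j * n * theta / 2)

print(abs(lhs - mid), abs(lhs - rhs))  # both ~0
```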
612,827
<p>I'm self studying with Munkres's topology and he uses the uniform metric several times throughout the text. When I looked in Wikipedia I found that there's this concept of a <a href="http://en.wikipedia.org/wiki/Uniform_space" rel="nofollow">uniform space</a>.</p> <p>I'd like to know what its uses are (outside point-set topology) and whether it's an important thing to learn on a first run through topology. </p>
Willie Wong
1,543
<p>Let me quote from Warren Page's <em>Topological Uniform Structures</em>:</p> <blockquote> <p>This book aims to acquaint the reader with a slice of mathematics that is interesting, meaningful, and in the mainstream of contemporary <em>[Ed: book originally published 1978]</em> mathematical developments. Admittedly a number of excellent sources cover, in part, uniform spaces, topological groups, topological vector spaces, topological algebras, and abstract harmonic analysis. </p> </blockquote> <p>and </p> <blockquote> <p>The overall unifying theme of topologies compatible with increasingly enriched algebraic structures ... a number of striking results that combine and interlace algebraic, topological, and measure-theoretic properties associated with the structure under consideration. </p> </blockquote> <p>As indicated, the subjects which most directly benefit from some background in uniform spaces are</p> <ul> <li>topological groups</li> <li>topological vector spaces</li> <li>topological algebras</li> <li>abstract harmonic analysis</li> </ul> <p>But the uniform structure often only becomes apparent when you get quite far in the study of these objects. For example, while there is a rich and interesting theory of topological vector spaces, <em>most</em> of the TVS that are used commonly in other fields (in particular Banach and Frechet spaces) are in fact metrizable, so intuitions from metric spaces are "good enough" for everyday use for many mathematicians. </p> <p>Let me give another example since you mentioned differential topology in your comments: any <em>manifold</em> is locally Euclidean, and hence locally metrizable. It is a <a href="http://en.wikipedia.org/wiki/Metrization_theorem#Metrization_theorems" rel="nofollow">theorem</a> that locally metrizable topological spaces are metrizable if and only if it is Hausdorff and paracompact. When you study differential topology most of the time Hausdorff and paracompact are built-in assumptions for your manifolds. 
Hence for the most part, the study of smooth manifolds can be dispensed with using intuitions built up from metric spaces, without necessarily having to delve into intricacies associated with uniform structures. </p> <p>On a first run through topology, I think it is safe to put-off learning about uniform spaces until later. By the time you really need it, you can probably pick it up relatively quickly. The one advantage to thinking a little bit about the uniform spaces (especially how they <em>differ</em> from metric spaces) is that it forces you to confront certain intuitive prejudices that we've grown accustomed to from working with $\mathbb{R}$ all the time, and allows you to overcome certain limitations that arises from thinking only about countable, instead of uncountable infinities. (This of course comes up also in the difference between nets and sequences.) </p>
1,358,735
<p>I'm sorry to sound like a dummy, but I've had trouble with Algebra all my life. I'm studying online with Khan Academy and one of the questions is: </p> <p>Point $E$'s $y$-coordinate is $0$, but its $x$-coordinate is not $0$. Where could point $E$ be located on the coordinate plane?</p> <p>There is no graph or anything, just a multiple choice of </p> <ul> <li>Quadrant $I$ </li> <li>Quadrant $II$</li> <li>Quadrant $III$</li> <li>Quadrant $IV$ </li> <li>$x$-axis </li> <li>$y$-axis</li> </ul> <p>What does the $E$ mean? I understand plotting numbers and points, etc, but what is $E$? </p>
Zain Patel
161,779
<p>If the point $E$ has a $y$-coordinate of $0$, then it lies on the $x$-axis. Can you plot the points $(5,0)$, $(100,0)$, $(-23, 0)$? Notice what they all have in common? They all lie on the $x$-axis and have a $y$-coordinate of $0$.</p> <p>The equation of the $x$-axis is $y=0$. This is also why, when you want to find the $x$-intercepts of a function $f(x)$, you set $f(x) = 0$. </p>
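<p>As a quick illustration, here is a small Python sketch (my own addition, not part of the original exercise; the helper name <code>locate</code> is made up) of how the axis and quadrant cases split:</p>

```python
def locate(x, y):
    """Classify a point of the coordinate plane by quadrant or axis."""
    if x == 0 and y == 0:
        return "origin"
    if y == 0:
        return "x-axis"   # nonzero x, zero y -- point E's situation
    if x == 0:
        return "y-axis"
    if x > 0:
        return "Quadrant I" if y > 0 else "Quadrant IV"
    return "Quadrant II" if y > 0 else "Quadrant III"

# Every point with y-coordinate 0 and nonzero x-coordinate lies on the x-axis:
assert locate(5, 0) == locate(100, 0) == locate(-23, 0) == "x-axis"
```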
735,470
<p>I am having trouble with integrating the following:</p> <p>$$\int \frac{\cos2x}{1-\cos4x}\mathrm{d}x$$</p> <p>I have simplified it using the double angle formula: </p> <p>$$\int \frac{1-2\sin^2x}{1-\cos4x}\mathrm{d}x$$</p> <p>But I am stuck, as I am not sure how to continue from here. Should I use the double angle formula to simplify the denominator too?</p> <p>Then there is this other question, which I am not sure how to solve. </p> <p>$$\int {(2^x+3^x)}^{2}\mathrm{d}x$$</p> <p>Should I just expand and multiply the terms, then use "$\int a^x\mathrm{d}x = \frac{1}{\ln a} a^x$" to integrate?</p> <p>I am confused about whether I am taking the correct approach to this question. </p> <p>All help and suggestions are welcome. Thank you very much for helping me once again, guys.</p>
WimC
25,313
<p>Note that $1-\cos(4x)=1-(1-2\sin^2(2x))=2 \sin^2(2x)$.</p>
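<p>Carrying the hint through (my own addition, not from the original answer): with $1-\cos 4x = 2\sin^2 2x$ and the substitution $u=\sin 2x$, the first integral works out to $-\frac{1}{4\sin 2x}+C$; expanding the square in the second gives $\frac{4^x}{\ln 4}+\frac{2\cdot 6^x}{\ln 6}+\frac{9^x}{\ln 9}+C$. The Python sketch below checks both antiderivatives by comparing a finite-difference derivative against the integrands:</p>

```python
import math

def integrand1(x):
    # cos(2x) / (1 - cos(4x))
    return math.cos(2 * x) / (1 - math.cos(4 * x))

def F1(x):
    # candidate antiderivative via 1 - cos(4x) = 2*sin(2x)**2 and u = sin(2x)
    return -1.0 / (4.0 * math.sin(2 * x))

def integrand2(x):
    return (2**x + 3**x) ** 2

def F2(x):
    # expand (2^x + 3^x)^2 = 4^x + 2*6^x + 9^x, then integrate term by term
    return 4**x / math.log(4) + 2 * 6**x / math.log(6) + 9**x / math.log(9)

def dnum(f, x, h=1e-6):
    # central finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.3, 0.7, 1.1):
    assert abs(dnum(F1, x) - integrand1(x)) < 1e-5
    assert abs(dnum(F2, x) - integrand2(x)) < 1e-4
```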
4,351,794
<p>I am not exactly trying to solve an equation; I just want to show that the expression on one side equals the one on the other side. But I haven't done any math for years and can't remember what to do.</p> <blockquote> <p><span class="math-container">$$\frac{1}{2jw(1+jw)}=\frac{-j(1-jw)}{2w(1+w^2)}$$</span></p> <p>Here, <span class="math-container">$j^2=-1$</span>.</p> </blockquote> <p>If I am right, I should do something like this. <span class="math-container">$$\frac{1}{(2jw(1+jw))}=\frac{(2jw(1-jw))}{(2jw(1+jw))}$$</span> But whatever I do, I can't get the result shown above.</p> <p>I will be thankful for any help.</p>
Deepak
151,732
<p>Here, it is quite apparent <span class="math-container">$j$</span> represents the imaginary unit (<span class="math-container">$j^2 = -1$</span>). This is quite common usage in physics and electrical engineering. In mathematics, you should be using <span class="math-container">$i$</span> rather than <span class="math-container">$j$</span>.</p> <p>Anyway, the factor of <span class="math-container">$j$</span> in the denominator can be dealt with by noting that <span class="math-container">$\frac 1j = \frac j{j^2} = -j$</span>, so you can &quot;bring up&quot; the <span class="math-container">$j$</span> and reverse the sign. Following this, you need to &quot;realise&quot; (in analogy to rationalise) the denominator by multiplying by the conjugate of <span class="math-container">$1+j\omega$</span>, which is <span class="math-container">$1 - j\omega$</span>. You can use the basic algebraic identity <span class="math-container">$(a+b)(a-b) = a^2 - b^2$</span> to see what this would work out to. Keep in mind that multiplying <span class="math-container">$j$</span> by itself would give you <span class="math-container">$-1$</span>. If you carry out these simple steps, you should be able to verify the working.</p>
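<p>Since this is just complex arithmetic, the identity is also easy to sanity-check numerically. A small Python sketch (my own addition, not part of the original answer; Python writes the imaginary unit as <code>1j</code>):</p>

```python
def lhs(w):
    # 1 / (2jw(1 + jw))
    return 1 / (2j * w * (1 + 1j * w))

def rhs(w):
    # -j(1 - jw) / (2w(1 + w^2))
    return -1j * (1 - 1j * w) / (2 * w * (1 + w**2))

# the two forms agree for any nonzero real w
for w in (0.5, 1.0, 3.7, -2.2):
    assert abs(lhs(w) - rhs(w)) < 1e-12
```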
4,351,794
<p>I am not exactly trying to solve an equation; I just want to show that the expression on one side equals the one on the other side. But I haven't done any math for years and can't remember what to do.</p> <blockquote> <p><span class="math-container">$$\frac{1}{2jw(1+jw)}=\frac{-j(1-jw)}{2w(1+w^2)}$$</span></p> <p>Here, <span class="math-container">$j^2=-1$</span>.</p> </blockquote> <p>If I am right, I should do something like this. <span class="math-container">$$\frac{1}{(2jw(1+jw))}=\frac{(2jw(1-jw))}{(2jw(1+jw))}$$</span> But whatever I do, I can't get the result shown above.</p> <p>I will be thankful for any help.</p>
HAD HAN
1,010,307
<blockquote> <p>If <span class="math-container">$j^{2}=-1$</span>, then<br /> <span class="math-container">$$\dfrac{1}{2jw(1+jw)}=\dfrac{1}{2jw(1+jw)}\times\dfrac{j}{j}=\dfrac{j}{2j^{2}w(1+jw)}=\dfrac{j}{-2w(1+jw)},$$</span> and multiplying by the conjugate of the remaining factor, <span class="math-container">$$\dfrac{j}{-2w(1+jw)}\times \dfrac{1-jw}{1-jw}=\dfrac{j(1-jw)}{-2w(1-(jw)^{2})}=\dfrac{j(1-jw)}{-2w(1+w^{2})}=\dfrac{-j(1-jw)}{2w(1+w^{2})}.$$</span></p> </blockquote>
2,916,158
<p>I am trying to understand why we need large deviation theory/principle.</p> <p><strong>Here is what I understand so far</strong> based on the <a href="https://en.wikipedia.org/wiki/Large_deviations_theory#An_elementary_example" rel="noreferrer">Wikipedia</a> article. Let $S_n$ be a random variable which depends on $n$. We are interested in</p> <blockquote> <p>How the probability $P(S_n &gt; x)$ changes as $n \to \infty$.</p> </blockquote> <p>Very often, LDT is concerned with how fast $P(S_n &gt;x)$ converges to $0$. Then the answer to the above question is already known: it is $0$. Then one might be interested in how fast it converges. So I understand the notion of the rate function $I(x)$.</p> <p><strong>Here is what I don't understand:</strong></p> <ol> <li>In order to know $P(S_n &gt; x)$, it is a matter of finding the cumulative distribution function of $S_n$, as $$ P(S_n &gt; x) = 1 - P(S_n &lt; x) = 1 - F_{S_n}(x). $$ Then once we know the pdf of $S_n$, say $f_{S_n}(x)$, one can easily obtain $$ P(S_n &gt; x) = \int_x^{\infty} f_{S_n}(t)dt. $$ If this is the case, I am not sure why we need the LDP.</li> <li>Okay, it is not always the case that we know the pdf of $S_n$. If that is the case, I think the problem is then to estimate $P(S_n &gt; x)$. However, it seems that many LDP approaches require computing certain generating functions. For example, $$ \lambda(k) = \lim_{n\to \infty} \frac{1}{n} \ln E[e^{nkS_n}], \qquad I(s) = \sup_{k \in \mathbb{R}} \{ks - \lambda(k)\}. $$ It seems that calculating the above is definitely a lot harder than a direct calculation of $P(S_n &gt; x)$. Thus I don't understand why people introduce quantities which are a lot harder to obtain.</li> <li><em>"Why the rate?"</em> Okay, regardless of #1 and #2, let us say we have $I(x)$. What now? Given $I(x)$, can we estimate $P(S_n &gt; x)$, or can we do something useful? Or is the LDP devoted to deriving the rate function $I(x)$?</li> </ol> <p>Any comments or answers will be very much appreciated. </p>
saz
36,150
<p>Large deviation theory deals with the decay of probabilities of rare events on an exponential scale. If $(S_n)_{n \in \mathbb{N}}$ is a random walk, then "rare event" means that $\lim_{n \to \infty} \mathbb{P}(S_n \in A) =0$. Large deviation theory aims to determine the asymptotics of</p> <p>$$\mathbb{P}(S_n \in A) \quad \text{as $n \to \infty$}$$</p> <p>How fast does $\mathbb{P}(S_n \in A)$ tend to zero? Roughly speaking, a large deviation estimate tells us that</p> <p>$$\mathbb{P}(S_n \in A) \approx \exp \left( -n J(A) \right) \qquad \text{for large $n$}$$</p> <p>for a certain rate $J(A) \geq 0$. This means that the probability $\mathbb{P}(S_n \in A)$ decays for $n \to \infty$ exponentially with rate $J(A)$. If you are for instance interested in finding $n \in \mathbb{N}$ such that $\mathbb{P}(S_n \in A) \leq \epsilon$ for some given $\epsilon&gt;0$, this is really useful because it allows you to determine how large you have to choose $n$.</p> <p>You are right that it is, in general, not easy to determine the rate function. The case of independent identically distributed random variables illustrates quite nicely that large deviation theory nevertheless has its justification:</p> <p>Let $(X_j)_{j \in \mathbb{N}}$ be a sequence of independent and identically distributed random variables and $S_n := \sum_{j=1}^n X_j$ the associated random walk. If we want to determine the asymptotics</p> <p>$$\mathbb{P}(S_n &gt; x)$$</p> <p>using the distribution function, then this means that we have to calculate the distribution function of $S_n$ for <strong>each</strong> $n$, and this will certainly require a huge amount of computations. If we use large deviation theory instead, then we have to compute</p> <p>$$I(x) = \sup_y \{y \cdot x - \lambda(y)\}$$</p> <p>for the cumulant generating function $\lambda(y) = \log \mathbb{E}\exp(y X_1)$. Note that these quantities do not depend on $n$, i.e. we have to compute them once and then we are done. 
Moreover, the large deviation principle will also allow us to estimate probabilities of the form</p> <p>$$\mathbb{P}(S_n \in A) $$</p> <p>using the rate function $I$; we are not restricted to events $A$ of the particular form $(x,\infty)$. Probabilities like that are very hard to compute using the density function of $S_n$ (which is itself, in general, very hard/impossible to compute).</p>
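<p>The rate-function estimate is easy to see numerically. Here is a small Python sketch (my own illustration, not part of the original answer) with Bernoulli$(1/2)$ steps, for which Cramér's rate function has the closed form $I(x)=x\ln(2x)+(1-x)\ln(2(1-x))$: as $n$ grows, $-\frac1n\ln \mathbb{P}(S_n \geq an)$ approaches $I(a)$.</p>

```python
import math

p, a = 0.5, 0.7   # fair-coin steps; a > p makes {S_n >= a*n} a rare event

def rate(x):
    # Cramer's rate function I(x) = sup_y {x*y - log E[exp(y*X_1)]},
    # which for Bernoulli(p) steps has this closed form:
    return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

def tail(n):
    # exact P(S_n >= a*n) from the binomial distribution
    k0 = math.ceil(a * n)
    return sum(math.comb(n, k) for k in range(k0, n + 1)) / 2**n

for n in (50, 200, 800):
    # exponential decay rate estimated from the exact tail probability;
    # it approaches rate(a) as n grows (with an O(log n / n) correction)
    print(n, -math.log(tail(n)) / n)
```

<p>Note that the subexponential prefactor makes the finite-$n$ estimate slightly larger than $I(a)$; the large deviation principle only pins down the exponential order.</p>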