qid | question | author | author_id | answer |
---|---|---|---|---|
2,993,551 |
<p>I'm studying for a first-year Discrete Mathematics course. I found this question on a past paper and am lost on how to solve it:</p>
<blockquote>
<p>Let <span class="math-container">$n$</span> be a fixed arbitrary integer, prove that there are infinitely
many integers <span class="math-container">$m$</span> s.t.: <span class="math-container">$m^3 \equiv n^6 \pmod{19}$</span></p>
</blockquote>
<p>Thank you</p>
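<p>(A quick sanity check of the claim, added as an illustrative Python sketch; this is one concrete route, not necessarily the intended proof.) Since $n^6=(n^2)^3$ and cubing respects congruence mod $19$, every $m=n^2+19k$ works, giving infinitely many solutions:</p>

```python
# Since n^6 = (n^2)^3, taking m = n^2 + 19k gives m^3 ≡ n^6 (mod 19) for every k,
# because (a + 19k)^3 ≡ a^3 (mod 19).
for n in range(1, 40):
    for k in range(6):
        m = n * n + 19 * k
        assert pow(m, 3, 19) == pow(n, 6, 19)
print("m = n^2 + 19k satisfies m^3 ≡ n^6 (mod 19) for all tested n, k")
```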
|
Jimmy R.
| 128,037 |
<p><strong>Hint:</strong><span class="math-container">$$\det{\pmatrix{1&1&1\\1 & 2 & 3\\ 2& -1&1}}=5\neq0$$</span></p>
|
1,305,481 |
<p>Let $C$ denote the circle $|z|=1$ oriented counterclockwise. Show that</p>
<p>i)$\int_Cz^ne^{\frac{1}{z}}dz=\frac{2\pi i}{(n+1)!}$ for $n=0,1,2$</p>
<p>ii)$\int_C e^{z+\frac{1}{z}}dz=2\pi i\sum_{n=0}^\infty\frac{1}{n!(n+1)!}$</p>
<p>I'm stuck on this exercise, because</p>
<p>$$\int_Cz^ne^{\frac{1}{z}}dz=2\pi i \cdot \operatorname{Res}_{z=0}z^ne^{\frac{1}{z}}$$
$$z^ne^{\frac{1}{z}}=z^n\sum_{n=0}^\infty \frac{z^{-n}}{n!}=\sum_{n=0}^\infty \frac{1}{n!}$$</p>
<p>Can anyone help me?</p>
|
kobe
| 190,421 |
<p>You made an error in the last line, where you wrote</p>
<p>$$z^n \sum_{n = 0}^\infty \frac{z^{-n}}{n!} = \sum_{n = 0}^\infty \frac{1}{n!}.$$</p>
<p>This equation does not even make sense since the summation index $n$ cannot appear outside the summation. What you should have is</p>
<p>$$z^n e^{1/z} = z^n \sum_{k = 0}^\infty \frac{z^{-k}}{k!} = \sum_{k = 0}^\infty \frac{z^{n-k}}{k!} = \sum_{k = -\infty}^n \frac{z^k}{(n-k)!}.$$</p>
<p>The residue of $z^n e^{1/z}$ at $z = 0$ is the coefficient of $z^{-1}$ in the above Laurent series expansion, which is $\frac{1}{(n+1)!}$. Therefore, $\int_C z^n e^{1/z}\, dz = 2\pi i \cdot\operatorname{Res}_{z = 0} z^n e^{1/z} = \frac{2\pi i}{(n+1)!}$.</p>
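<p>(A numerical sanity check of this residue computation, added as an illustrative Python sketch.) Integrating $z^n e^{1/z}$ around the unit circle with an equally spaced Riemann sum reproduces $\frac{2\pi i}{(n+1)!}$:</p>

```python
import cmath
import math

def contour_integral(f, n_points=4096):
    # Riemann sum of the contour integral of f(z) dz over the unit circle
    # z = e^{it}, dz = i e^{it} dt; for smooth periodic integrands the
    # equally spaced sum converges extremely fast (spectral accuracy).
    h = 2 * math.pi / n_points
    total = 0j
    for j in range(n_points):
        z = cmath.exp(1j * j * h)
        total += f(z) * 1j * z * h
    return total

for n in range(3):
    val = contour_integral(lambda z: z**n * cmath.exp(1 / z))
    expected = 2j * math.pi / math.factorial(n + 1)
    assert abs(val - expected) < 1e-8
print("contour integral of z^n e^{1/z} equals 2*pi*i/(n+1)! for n = 0, 1, 2")
```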
|
1,428,905 |
<p>I have two functions:</p>
<p>$n!$</p>
<p>$2^{n^{2}}$</p>
<p>What is the difference between the growth of these two? My thought is that $2^{n^2}$ grows much faster than $n!$. </p>
|
Quang Hoang
| 91,708 |
<p>We know $2^{n}$ grows much faster than $n$, and $n^2\ge 1+2+\cdots+n$, so
$$2^{n^2}\ge 2^{1+2+\cdots+n}=2^1\cdot 2^2\cdots 2^n$$
grows much faster than
$$n!=1\cdot 2\cdots n,$$
comparing factor by factor (each $2^k>k$).</p>
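<p>A quick numerical illustration of this chain of inequalities (an illustrative Python sketch, not part of the original answer):</p>

```python
import math

# The chain above: 2^(n^2) >= 2^(1+2+...+n) = 2^1 * 2^2 * ... * 2^n >= 1 * 2 * ... * n = n!
for n in range(1, 12):
    middle = 2 ** (n * (n + 1) // 2)
    assert 2 ** (n * n) >= middle >= math.factorial(n)

# The ratio 2^(n^2) / n! explodes:
for n in (5, 10, 15):
    print(n, 2 ** (n * n) / math.factorial(n))
```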
|
1,035,877 |
<p>I'd really appreciate if someone could help me so I could get going on these problems, but this is confusing me... and it's been holding me up for the last couple hours. </p>
<p>How can I find the volume of the solid when revolving the region bounded by $y=1-\frac{1}{2}x$, $y=0$, and $x=0$ about the line $ x=-1$? How could I set it up? </p>
<p>I'd REALLY appreciate if someone could take the time to answer this so I don't spend all night on one problem.</p>
|
Idris Addou
| 192,045 |
<p>The shape is a triangle with vertices $(0,0)$, $(0,1)$ and $(2,0)$. You can use the shell (tube) method; revolving about $x=-1$ makes the shell radius $x+1$, so $V=\int_{0}^{2}2\pi (x+1)\left(1-\frac{1}{2}x\right)dx=\frac{10}{3}\pi$ cubic units.</p>
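<p>(A quick numeric check of the shell-method setup, added as an illustrative Python sketch; note the shell radius about $x=-1$ is $x+1$.)</p>

```python
import math

def shell_integrand(x):
    # shell radius about x = -1 is (x + 1); shell height is y = 1 - x/2
    return 2 * math.pi * (x + 1) * (1 - x / 2)

# Simpson's rule is exact for this quadratic integrand on [0, 2]
a, b = 0.0, 2.0
V = (b - a) / 6 * (shell_integrand(a) + 4 * shell_integrand((a + b) / 2) + shell_integrand(b))
print(V, 10 * math.pi / 3)  # both equal 10*pi/3 ≈ 10.4719755
```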
|
2,406,587 |
<p>Isn't the concept of homomorphism and isomorphism in abstract algebra analogous to functions and invertible functions in set theory respectively? That's one way to quickly grasp the concept into the mind?</p>
|
mathreadler
| 213,607 |
<p>An eigenvector of a matrix is a vector that, when multiplied by the matrix, maps onto itself up to scaling by a scalar of the underlying field.</p>
<p><em>Eigen</em> is the German word for "own". It basically means something that maps back to itself.</p>
<p>What you need in order to understand this language in the context of differential equations is the connection between functions and vectors, and between differentiation and linear operators. In my experience it takes time for this to grow in the mind, so don't panic if you don't understand it right away.</p>
|
10,949 |
<p>Is it known whether every finite abelian group is isomorphic to the ideal class group of the ring of integers in some number field? If so, is it still true if we consider only imaginary quadratic fields?</p>
|
ABC
| 3,728 |
<p>The smallest abelian group which is not the class group of an imaginary quadratic field is $(\mathbf{Z}/3 \mathbf{Z})^3$. There are six other groups of order
$< 100$ which do not occur in this way, of orders
$32$, $27$, $64$, $64$, $81$, and $81$ respectively.
The groups $(\mathbf{Z}/3 \mathbf{Z})^2$ and $(\mathbf{Z}/2 \mathbf{Z})^4$ occur as class groups of imaginary quadratic fields exactly once, for $D = -4027$ and $-5460$ respectively.
These results follow from the "class number $100$" problem, solved by Mark Watkins.</p>
<p>If you restrict to the $p$-part of the class group, then the answer (for general number fields) is positive. That is, for any abelian $p$-group $A$, there exists a number field $K$ with class group $C$ such that $C \otimes \mathbf{Z}_p = A$.
There is even a non-abelian analog of this. Namely, for any finite $p$-group $G$, there exists a number field $K$ such that the maximal Galois $p$-extension $L/K$ unramified everywhere has Galois group $G$. This is a very recent result of Ozaki.</p>
|
1,673,771 |
<p>I was wondering if this proof is valid. </p>
<p>I use $[x]$ to denote the floor of $x$.</p>
<p><strong>Problem</strong> </p>
<p>Prove that</p>
<p>$$[mx] = \sum_{k=0}^{m-1} \, \bigg[x+\frac{k}{m} \bigg]$$</p>
<p>where $m \in \mathbb{N}$ and $x \in \mathbb{R}$.</p>
<p><strong>Proof</strong></p>
<p>Let $m \in \mathbb{N}$ and let $x \in \mathbb{R}$ such that $\epsilon = x - [x]$ where $0 \leq \epsilon < 1$.</p>
<p>Partition the interval $[0,1)$ as </p>
<p>$$[0,1) = \bigcup_{k=0}^{m-1} \, \bigg[\frac{k}{m}, \frac{k+1}{m} \bigg)$$</p>
<p>Let $p \in \{1, \dotsc, m\}$ and consider the interval</p>
<p>$$\frac{p-1}{m} \leq \epsilon < \frac{p}{m}$$</p>
<p>Expanding and simplifying the sum $\sum_{k=0}^{m-1} \, \big[x+\frac{k}{m} \big]$ renders</p>
<p>$$\begin{align} \sum_{k=0}^{m-1} \, \bigg[x+\frac{k}{m} \bigg] &= [x] + \bigg([x] + \bigg[\epsilon + \frac{1}{m}\bigg]\bigg) + \cdots + \bigg([x] + \bigg[\epsilon + \frac{m-1}{m} \bigg]\bigg) \\ &= m[x] + \sum_{k=1}^{m-1} \, \bigg[\epsilon + \frac{k}{m} \bigg] \end{align} $$</p>
<p>Since each term in the sum $\sum_{k=1}^{m-1} \, \big[\epsilon + \frac{k}{m} \big]$ either equals 0 or 1 depending on $p$, we observe that there are at most $(m-p)$ zeros and $(p-1)$ ones amongst the $(m-1)$ terms of the given series. Hence,</p>
<p>$$\begin{align} \sum_{k=0}^{m-1} \, \bigg[x+\frac{k}{m} \bigg] &= m[x] + \sum_{k=1}^{m-1} \, \bigg[\epsilon + \frac{k}{m} \bigg] \\\\ &= m[x] + (m-p)\cdot 0 + (p-1) \cdot 1 \\\\ &= m[x] + (p-1) \end{align}$$</p>
<p>Furthermore, $\frac{p-1}{m} \leq \epsilon < \frac{p}{m}$ implies $p-1 \leq m \cdot \epsilon < p$. Thus we can write</p>
<p>$$\begin{align} [mx] &= \big[m([x]+\epsilon)\big] \\\\ &= m[x] + [m \cdot \epsilon] \\\\ &= m[x] + (p-1). \end{align}$$</p>
<p>Since for all intervals $p = 1, \dotsc, m$ we have </p>
<p>$$\begin{align} \sum_{k=0}^{m-1} \, \bigg[x+\frac{k}{m} \bigg] &= [mx] \\ &= m[x] + (p-1) \end{align}$$</p>
<p>we conclude that $[mx] = \sum_{k=0}^{m-1} \, \bigg[x+\frac{k}{m} \bigg]$</p>
|
Brian M. Scott
| 12,042 |
<p>There are a couple of small bugs, but the argument is basically correct. When you partition $[0,1)$, you want either $\bigcup_{k=0}^{m-1}\left[\frac{k}m,\frac{k+1}m\right)$ or $\bigcup_{k=1}^m\left[\frac{k-1}m,\frac{k}m\right)$. In the next sentence you should not be considering all $m$ values of $p$: you already have $\epsilon=x-\lfloor x\rfloor$, and you’re defining $p$ to be the unique member of $\{1,\ldots,m\}$ such that </p>
<p>$$\frac{p-1}m\le\epsilon<\frac{p}m\;.$$</p>
<p>A bit later you say that $\sum_{k=1}^{m-1}\left\lfloor\epsilon+\frac{k}m\right\rfloor$ is $0$ or $1$ depending on $p$; what you mean, I think, is that each <em>term</em> of that sum is $0$ or $1$.</p>
<p>The same basic idea can be carried out a bit more compactly.</p>
<p>Let $\ell\in\{0,\ldots,m-1\}$ be maximal such that $x+\frac{\ell}m<\lfloor x\rfloor+1$. Then $\left\lfloor x+\frac{k}m\right\rfloor=\lfloor x\rfloor$ for $k=0,\ldots,\ell$, and $\left\lfloor x+\frac{k}m\right\rfloor=\lfloor x\rfloor+1$ for $k=\ell+1,\ldots,m-1$. Thus,</p>
<p>$$\begin{align*}
\sum_{k=0}^{m-1}\left\lfloor x+\frac{k}m\right\rfloor&=(\ell+1)\lfloor x\rfloor+(m-1-\ell)(\lfloor x\rfloor+1)\\
&=m\lfloor x\rfloor+m-1-\ell\;.
\end{align*}\tag{1}$$</p>
<p>This also implies that</p>
<p>$$1-\frac{\ell+1}m\le x-\lfloor x\rfloor<1-\frac{\ell}m$$</p>
<p>and hence that</p>
<p>$$m-\ell-1\le mx-m\lfloor x\rfloor<m-\ell\;,$$</p>
<p>or</p>
<p>$$m\lfloor x\rfloor+m-\ell-1\le mx<m\lfloor x\rfloor+m-\ell\;.\tag{2}$$</p>
<p>But $(2)$ implies that $\lfloor mx\rfloor=m\lfloor x\rfloor+m-\ell-1$, which, when combined with $(1)$, yields the desired result:</p>
<p>$$\lfloor mx\rfloor=\sum_{k=0}^{m-1}\left\lfloor x+\frac{k}m\right\rfloor\;.$$</p>
|
2,147,571 |
<p>Both $A$ and $B$ are random numbers drawn uniformly from the interval $\left[0,1\right]$.</p>
<p>I don't know how to calculate it, so I made an estimate in Excel with one million trials and got $0.214633$. But I need the exact number.</p>
|
heropup
| 118,193 |
<p>This is <a href="http://www.math.hawaii.edu/~dale/putnam/1993.pdf" rel="nofollow noreferrer">Problem 1993-B3 from the 54th Putnam exam.</a></p>
<p>The solution becomes obvious if we look at a graph of $(B,A)$ in the Cartesian unit square $[0,1]^2$. Then the value $A/B$ is the slope of the line segment joining $(B,A)$ to the origin. Define $[x]$ to indicate the nearest whole number to $x$. Then clearly, when $A < B/2$ (slope less than $1/2$), we must have $[A/B] = 0$, which occurs for points in the triangle with vertices $\{(0,0), (1,0), (1,1/2)\}$. This triangle has area $1/4$.</p>
<p>For points with $A > B$, we have an infinite series of triangles with variable base along the edge joining $(0,1)$ and $(1,1)$, and common height $1$. For a point $(x,1)$ along this edge, the rounded slope is $[1/x]$, and we require this to be an even integer; i.e., $$2k - 1/2 < 1/x < 2k + 1/2, \quad k \in \mathbb Z^+$$ or equivalently $$\frac{2}{4k+1} < x < \frac{2}{4k-1}.$$ Consequently the total area of these triangles is simply $$\frac{1}{2} \sum_{k=1}^\infty \frac{2}{4k-1} - \frac{2}{4k+1} = \sum_{m=1}^\infty \frac{(-1)^{m+1}}{2m+1} = 1 - \frac{\pi}{4}.$$ Adding in the value for the case $[A/B] = 0$, we get $$\frac{5-\pi}{4} \approx 0.46460183660255169038.$$ Your answer corresponds to the case where $[A/B]$ is a <strong>positive</strong> even integer.</p>
<p><a href="https://i.stack.imgur.com/qAlnt.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qAlnt.gif" alt="enter image description here"></a></p>
|
3,020,988 |
<p>Here's my attempt at an integral I found on this site.
<span class="math-container">$$\int_0^{2\pi}e^{\cos2x}\cos(\sin2x)\ \mathrm{d}x=2\pi$$</span>
<strong>I'm not asking for a proof, I just want to know where I messed up</strong></p>
<p>Recall that, for all <span class="math-container">$x$</span>,
<span class="math-container">$$e^x=\sum_{n\geq0}\frac{x^n}{n!}$$</span>
And <span class="math-container">$$\cos x=\sum_{n\geq0}(-1)^n\frac{x^{2n}}{(2n)!}$$</span>
Hence we have that
<span class="math-container">$$
\begin{align}
\int_0^{2\pi}e^{\cos2x}\cos(\sin2x)\ \mathrm{d}x=&\int_0^{2\pi}\bigg(\sum_{n\geq0}\frac{\cos^n2x}{n!}\bigg)\bigg(\sum_{m\geq0}(-1)^m\frac{\sin^{2m}2x}{(2m)!}\bigg)\mathrm{d}x\\
=&\sum_{n,m\geq0}\frac{(-1)^m}{n!(2m)!}\int_0^{2\pi}\cos(2x)^n\sin(2x)^{2m}\mathrm{d}x\\
=&\frac12\sum_{n,m\geq0}\frac{(-1)^m}{n!(2m)!}\int_0^{4\pi}\cos(t)^n\sin(t)^{2m}\mathrm{d}t\\
\end{align}
$$</span>
The final integral is related to the incomplete beta function, defined as
<span class="math-container">$$B(x;a,b)=\int_0^x u^{a-1}(1-u)^{b-1}\mathrm{d}u$$</span>
If we define
<span class="math-container">$$I(x;a,b)=\int_0^x\sin(t)^a\cos(t)^b\mathrm{d}t$$</span>
We can make the substitution <span class="math-container">$\sin^2t=u$</span>, which gives
<span class="math-container">$$
\begin{align}
I(x;a,b)=&\frac12\int_0^{\sin^2x}u^{a/2}(1-u)^{b/2}u^{-1/2}(1-u)^{-1/2}\mathrm{d}u\\
=&\frac12\int_0^{\sin^2x}u^{\frac{a-1}2}(1-u)^{\frac{b-1}2}\mathrm{d}u\\
=&\frac12\int_0^{\sin^2x}u^{\frac{a+1}2-1}(1-u)^{\frac{b+1}2-1}\mathrm{d}u\\
=&\frac12B\bigg(\sin^2x;\frac{a+1}2,\frac{b+1}2\bigg)\\
\end{align}
$$</span>
Hence we have a form of our final integral:
<span class="math-container">$$
\begin{align}
I(4\pi;2m,n)=&\frac12B\bigg(\sin^24\pi;\frac{2m+1}2,\frac{n+1}2\bigg)\\
=&\frac12B\bigg(0;\frac{2m+1}2,\frac{n+1}2\bigg)\\
=&\frac12\int_0^0t^{\frac{2m-1}2}(1-t)^{\frac{n-1}2}\mathrm{d}t\\
=&\,0
\end{align}
$$</span>
Which implies that
<span class="math-container">$$\int_0^{2\pi}e^{\cos2x}\cos(\sin2x)\ \mathrm{d}x=0$$</span>
Which is totally wrong. But as far as I can tell, I haven't broken any rules. Where's my error, and how do I fix it? Thanks.</p>
|
Pseudo Professor
| 559,658 |
<p>If you want a full solution of the integral using complex analysis, along the lines of what Seewoo Lee recommended, you can solve the integral as follows:</p>
<p>First consider the integral <span class="math-container">$$\int_{C} \frac{e^z}{z}dz$$</span></p>
<p>where <span class="math-container">$C$</span> is the unit circle oriented counter-clockwise in the complex plane. Using the Cauchy Integral formula (from complex analysis) you can find that <span class="math-container">$$\int_{C} \frac{e^z}{z}dz = 2\pi i \phantom{------}(1)$$</span></p>
<p>Now we will directly integrate the above integral by parametrising <span class="math-container">$C$</span>. Let <span class="math-container">$z(t)=e^{2it}$</span> with <span class="math-container">$0 \le t \le \pi$</span> be the parametrisation of <span class="math-container">$C$</span>. Then the integral works out as:</p>
<p><span class="math-container">\begin{align}
\int_{C} \frac{e^z}{z}dz &= \int_0^{\pi} \frac{e^{e^{2it}}}{e^{2it}} 2ie^{2it} dt \\
&= 2i \int_0^{\pi} {e^{\cos(2t)+i\sin(2t)}}dt \\
&= 2i \int_0^{\pi} {e^{\cos(2t)}e^{i\sin(2t)}}dt \\
&= 2i \int_0^{\pi} {e^{\cos(2t)}(\cos(\sin(2t))+i\sin(\sin(2t))} dt \\
&= 2i \int_0^{\pi} {e^{\cos(2t)}}(\cos(\sin(2t))dt - 2 \int_0^{\pi} {e^{\cos(2t)}}(sin(sin(2t)) dt \phantom{-------} (2) \\
\end{align}</span></p>
<p>Equating imaginary parts of (1) and (2) we see that:</p>
<p><span class="math-container">$$\int_0^{\pi} {e^{\cos(2t)}}\cos(\sin(2t))\, dt = \pi$$</span></p>
<p>If we parametrise the curve initially with <span class="math-container">$\pi \le t \le 2\pi$</span> we would have ended up with:</p>
<p><span class="math-container">$$\int_{\pi}^{2\pi} {e^{\cos(2t)}}\cos(\sin(2t))\, dt = \pi$$</span></p>
<p>Thus,</p>
<p><span class="math-container">\begin{align}
\int_{0}^{2\pi} {e^{\cos(2t)}}\cos(\sin(2t))\, dt &= \int_{0}^{\pi} {e^{\cos(2t)}}\cos(\sin(2t))\, dt + \int_{\pi}^{2\pi} {e^{\cos(2t)}}\cos(\sin(2t))\, dt \\
&= \pi + \pi = 2\pi
\end{align}</span></p>
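<p>A direct numerical check of the result (an illustrative Python sketch, added for completeness):</p>

```python
import math

# Riemann sum of the integral of e^{cos 2t} cos(sin 2t) over [0, 2*pi]; the
# integrand is smooth and periodic, so the equally spaced sum converges very fast.
N = 20_000
h = 2 * math.pi / N
total = sum(math.exp(math.cos(2 * t)) * math.cos(math.sin(2 * t)) * h
            for t in (j * h for j in range(N)))
print(total, 2 * math.pi)  # both ≈ 6.283185307
```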
|
1,269,447 |
<p>I consider the space $C^1[a, b]$ of (complex-valued) continuously differentiable functions on $[a, b]$. I want to show that</p>
<p>$$||f||_{C^1} := ||f||_\infty + ||f'||_\infty$$</p>
<p>defines a norm on $C^1[a, b]$.</p>
<p>Now it's easy to see that $||f||_{C^1}$ is non-negative and zero iff $f = 0$, and it can easily be shown that it is compatible with scalar multiplication. But so far I've been struggling to prove and understand why the triangle inequality holds. Thanks in advance.</p>
|
Tito Piezas III
| 4,781 |
<p>(<em>More a comment</em>.) If we allow the <em>non-principal square root</em>,
$$\begin{cases}
2x^2+y^2=1,\\
x^2 + y \sqrt{1-x^2}=1\color{red}{\pm} (1-y)\sqrt{x}
\end{cases}$$
the $+$ case has one real solution, but the $-$ case has <strong><em>three</em></strong> real solutions: $(x,y)=(0,1)$ and two which surprisingly are roots of $11$-deg equations. Using,
$$x=+\sqrt{\frac{1-y^2}{2}}$$
and with $y$ as two appropriate real roots of,
$$\small49 - 301 y + 327 y^2 - 51 y^3 - 1038 y^4 + 1334 y^5 - 1066 y^6 + 850 y^7 - 259 y^8 + 279 y^9 + 3 y^{10} + y^{11}=0$$
namely $y_1 \approx -0.619107$, and $y_2 \approx 0.939251$.</p>
|
322,134 |
<p>$$2e^{-x}+e^{5x}$$</p>
<p>Here is what I have tried: $$2e^{-x}+e^{5x}$$
$$\frac{2}{e^x}+e^{5x}$$
$$\left(\frac{2}{e^x}\right)'+(e^{5x})'$$</p>
<p>$$\left(\frac{2}{e^x}\right)' = \frac{-2e^x}{e^{2x}}$$
$$(e^{5x})'=5xe^{5x}$$</p>
<p>So the answer I got was $$\frac{-2e^x}{e^{2x}}+5xe^{5x}$$</p>
<p>I checked my answer online and it said that it was incorrect but I am sure I have done the steps correctly. Did I approach this problem correctly?</p>
|
amWhy
| 9,003 |
<p>Not correct: $(e^{5x})'\neq 5xe^{5x}$</p>
<p>$$(e^{5x})' = 5e^{5x}$$</p>
<p>$$(ae^{bx})' = abe^{bx}$$</p>
<p>It's because of the chain rule, and because $\frac d{dx}(e^x) = e^x$.</p>
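<p>A central-difference check (an illustrative Python sketch, added for completeness) confirms the corrected derivative; note the asker's first term $\frac{-2e^x}{e^{2x}}$ simplifies to $-2e^{-x}$ and was already right.</p>

```python
import math

def f(x):
    return 2 * math.exp(-x) + math.exp(5 * x)

def fprime(x):
    # corrected derivative: -2e^{-x} + 5e^{5x}
    return -2 * math.exp(-x) + 5 * math.exp(5 * x)

h = 1e-6
for x in (-1.0, 0.0, 0.5, 1.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - fprime(x)) < 1e-4 * max(1.0, abs(fprime(x)))
print("central differences match -2e^{-x} + 5e^{5x}")
```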
|
3,218,525 |
<p>Let <span class="math-container">$f:[0,1] \to [0, \infty)$</span> is a non-negative continuous function so that <span class="math-container">$f(0)=0$</span> and for all <span class="math-container">$x \in [0,1]$</span> we have <span class="math-container">$$f(x) \leq \int_{0}^{x} f(y)^2 dy$$</span><br>
Now consider the set <span class="math-container">$$A=\{x∈[0,1] : \text{for all } y∈[0,x] \text{ we have }f(y)≤1/2\}$$</span> Prove that <span class="math-container">$A=[0,1]$</span>.<br>
Since <span class="math-container">$f$</span> is bounded of <span class="math-container">$[0,x]$</span>, I think <span class="math-container">$f$</span> may be <span class="math-container">$0$</span>. But I am not able to do this. Please help me to solve this.</p>
|
User8128
| 307,205 |
<p>Clearly <span class="math-container">$A$</span> is non-empty since <span class="math-container">$f(0) = 0$</span> and thus by continuity there is <span class="math-container">$\delta > 0$</span> so that <span class="math-container">$f(y) \le 1/2$</span> for all <span class="math-container">$y \in [0,\delta]$</span>. </p>
<p>Take <span class="math-container">$x \in A$</span>. Then <span class="math-container">$f(y) \le 1/2$</span> for all <span class="math-container">$y \in [0,x]$</span>. But then <span class="math-container">$$f(x) \le \int^x_0 f(y)^2 dy \le \int^x_0 \frac 1 4 dy = \frac x 4 \le \frac 1 4.$$</span> Thus again by continuity, there is <span class="math-container">$\delta > 0$</span> so that <span class="math-container">$f(y) \le 1/2$</span> for all <span class="math-container">$y \in [0,x+\delta]$</span>. This shows that <span class="math-container">$(x-\delta, x+\delta) \subset A$</span>, and thus <span class="math-container">$A$</span> is an open set, since for any element of <span class="math-container">$A$</span>, we can find a ball surrounding that element that remains in <span class="math-container">$A$</span>.</p>
<p>Conversely, if <span class="math-container">$x \not\in A$</span>, then there is <span class="math-container">$y \in [0,x]$</span> such that <span class="math-container">$f(y) > 1/2$</span>. But then by continuity, <span class="math-container">$f(y-\epsilon) > 1/2$</span> for some small <span class="math-container">$\epsilon > 0$</span> and this shows that <span class="math-container">$x-\epsilon \not \in A$</span>, and thus <span class="math-container">$(x-\epsilon,x+\epsilon) \subset A^c$</span>. Similar to above, this shows that <span class="math-container">$A^c$</span> is open, and so <span class="math-container">$A$</span> is closed. </p>
<p>Since <span class="math-container">$[0,1]$</span> is connected, we conclude that <span class="math-container">$A = [0,1]$</span>, since this is the only open and closed subset of <span class="math-container">$[0,1]$</span>. </p>
|
142,993 |
<p>I'm challenging myself to figure out the mathematical expression of the number of possible combinations for certain parameters, and frankly I have no idea how.</p>
<p>The rules are these:</p>
<p>Take numbers 1...n. Given m places, and with <em>no repeated digits</em>, how many combinations of those numbers can be made?</p>
<p>AKA</p>
<ul>
<li>1 for n=1, m=1 --> 1</li>
<li>2 for n=2, m=1 --> 1, 2</li>
<li>2 for n=2, m=2 --> 12, 21</li>
<li>3 for n=3, m=1 --> 1,2,3</li>
<li>6 for n=3, m=2 --> 12,13,21,23,31,32</li>
<li>6 for n=3, m=3 --> 123,132,213,231,312,321</li>
</ul>
<p>I cannot find a way to express the left hand value. Can you guide me in the steps to figuring this out?</p>
|
Will Orrick
| 3,736 |
<p>Let's first choose the three columns. There are four cases,</p>
<ol>
<li>all columns even</li>
<li>two columns even, one odd</li>
<li>one column even, two odd</li>
<li>all columns odd</li>
</ol>
<p>In either of cases (1) and (4), we have four rows we could choose for the lowest numbered column, three for the next lowest numbered column, and two for the highest numbered column. This gives
$$
2\cdot\binom{4}{3}\cdot(4\cdot3\cdot2)
$$
selections.</p>
<p>Now look at case (2). We have four row choices for the lower even column, and three for the higher even column. We have four row choices for the odd column. Case (3) is similar, so we get
$$
2\binom{4}{2}\binom{4}{1}\cdot(4\cdot3\cdot4)
$$
selections.</p>
|
160,801 |
<p>Here is a vector </p>
<p>$$\begin{pmatrix}i\\7i\\-2\end{pmatrix}$$</p>
<p>Here is a matrix</p>
<p>$$\begin{pmatrix}2& i&0\\-i&1&1\\0 &1&0\end{pmatrix}$$</p>
<p>Is there a simple way to determine whether the vector is an eigenvector of this matrix?</p>
<p>Here is some code for your convenience.</p>
<pre><code>h = {{2, I, 0 },
{-I, 1, 1},
{0, 1, 0}};
y = {I, 7 I, -2};
</code></pre>
|
Neil_P
| 67,649 |
<p>Carl Woll's answer seems to be broken in the newest Mathematica. Here is a slight modification that makes it work</p>
<pre><code>EigenvectorQ[matrix_, vector_] := MatrixRank[Join[matrix.vector, vector, 2]] == 1
</code></pre>
<p><code>MatrixRank</code> gives the number of linearly <em>independent</em> rows (equivalently, columns).
<code>Join[l1, l2, 2]</code> joins the second argument to the first on the right, column by column.</p>
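<p>The same rank-1 idea is easy to express outside Mathematica; here is an illustrative Python sketch (the function name is my own) that checks whether $M v$ is a scalar multiple of $v$:</p>

```python
def is_eigenvector(matrix, v, tol=1e-9):
    # v is an eigenvector of matrix iff w = matrix.v is a complex multiple of v,
    # i.e. the matrix [w | v] has rank 1 -- the same idea as the Mathematica code.
    w = [sum(m * x for m, x in zip(row, v)) for row in matrix]
    k = next(i for i, x in enumerate(v) if abs(x) > tol)  # a nonzero pivot of v
    lam = w[k] / v[k]  # candidate eigenvalue
    return all(abs(wi - lam * vi) < tol for wi, vi in zip(w, v))

h = [(2, 1j, 0), (-1j, 1, 1), (0, 1, 0)]
y = (1j, 7j, -2)
print(is_eigenvector(h, y))  # False: h.y is not proportional to y
```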
|
353,480 |
<p>Is $f(x)=\ln(x)$ uniformly continuous on $(1,+\infty)$? If so, how to show it?</p>
<p>I know how to show that it is not uniformly continuous on $(0,1)$, by taking $x=\frac{1}{\exp(n)}$ and $y = \frac{1}{\exp(n+1)}$.</p>
<p>Also, on which interval does $\ln(x)$ satisfy the Lipschitz condition?</p>
|
Community
| -1 |
<p><strong>HINT</strong> Every differentiable function that has bounded derivative on a set $X$ is uniformly continuous on $X$.</p>
|
2,128,380 |
<p>Okay, I must admit that I am lost on how to do this. I have looked up videos and tutorials about this, and they helped a little. The main thing is that my professor asked for us to solve this without using the "determinant method." I have just started linear algebra, so I am still trying to understand determinants and the like. I am just wondering how it is possible to prove the cross product of two vectors with another method? Any help would be great!</p>
<p>Prove the following without using the determinant method:</p>
<p>$A \times B = (A_yB_z - B_yA_z)\hat{i} - (A_xB_z - B_xA_z)\hat{j} + (A_xB_y - B_xA_y)\hat{k}$</p>
|
Anna SdTC
| 410,766 |
<p>By definition, the cross product of $A$ and $B$ is a vector $(u,v,w)\in\mathbb{R}^3$ that is perpendicular to both of them.</p>
<p>So, both dot products should be zero:
$$(A_x,A_y,A_z)\cdot(u,v,w)=A_xu+A_yv+A_zw=0,$$
$$(B_x,B_y,B_z)\cdot(u,v,w)=B_xu+B_yv+B_zw=0.$$</p>
<p>From here, you can find expressions of two of the components (say, for instance, $v$ and $w$), that depend on $A$, $B$ and the other component ($u$).</p>
<p>Do you know anything about the angle between $A$ and $B$, (let's call it $\alpha$)? To solve for the third component, you can use that the length of the vector you are looking for is
$$|(u,v,w)|=|A||B|\sin(\alpha).$$</p>
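<p>The perpendicularity that drives this construction is easy to verify for the component formula in the question (an illustrative Python sketch, added for completeness):</p>

```python
import random

def cross(a, b):
    ax, ay, az = a
    bx, by, bz = b
    # the component formula from the question
    return (ay * bz - by * az, -(ax * bz - bx * az), ax * by - bx * ay)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

random.seed(2)
for _ in range(200):
    a = [random.randint(-9, 9) for _ in range(3)]
    b = [random.randint(-9, 9) for _ in range(3)]
    c = cross(a, b)
    # exact integer arithmetic: A.(AxB) = B.(AxB) = 0
    assert dot(a, c) == 0 and dot(b, c) == 0
print("A x B is perpendicular to both A and B on all samples")
```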
|
365,986 |
<p>If $A$ is an $n \times n$ matrix with $\DeclareMathOperator{\rank}{rank}$ $\rank(A) < n$, then I need to show that $\det(A) = 0$.</p>
<p>Now I understand why this is - if $\rank(A) < n$ then when converted to reduced row echelon form, there will be a row/column of zeroes, thus $\det(A) = 0$</p>
<p>However, I have been told to use the fact that the determinant is multilinear and alternating and subsequently deduce that if $\det(A)$ is non-zero, $A$ is invertible. </p>
<p>How do I use the properties of the determinant to prove these claims? </p>
|
not all wrong
| 37,268 |
<p>$f(x+cy,y,z,\cdots) = f(x,y,z,\cdots) + cf(y,y,z,\cdots)=f(x,y,z,\cdots)$ using multilinearity and the alternating property respectively.</p>
<p>Hence you can add columns together without changing the determinant. But the rank is $<n$ if and only if some linear combination of the columns is trivial, in which case we can obtain an equivalent determinant with a column of zeros. By linearity, the determinant is zero.</p>
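<p>A tiny concrete instance of this argument (an illustrative Python sketch): make one column a linear combination of the others and the determinant vanishes.</p>

```python
def det3(M):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# third column = first column + second column, so rank(M) < 3
M = [(1, 2, 3), (4, 5, 9), (7, 8, 15)]
assert det3(M) == 0
print("rank-deficient matrix has determinant 0")
```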
|
3,227,788 |
<p>Let <span class="math-container">$f: D(0,1)\to \mathbb C$</span> be a holomorphic function. How to show that there exists a sequence <span class="math-container">$\{z_n\}$</span> in <span class="math-container">$D(0,1)$</span> such that <span class="math-container">$|z_n| \to 1$</span> and <span class="math-container">$\exists M>0$</span> such that <span class="math-container">$|f(z_n)|<M,\forall n \ge 1$</span> ? </p>
<p>My try: If not, then <span class="math-container">$\lim_{|z|\to 1} |f(z)|=\infty$</span>. So in particular, <span class="math-container">$f$</span> has finitely many zeroes in <span class="math-container">$D(0,1)$</span>. Also, <span class="math-container">$1/f$</span> is meromorphic in <span class="math-container">$D(0,1)$</span> with <span class="math-container">$\lim _{|z|\to 1}\dfrac {1}{|f(z)|}=0$</span>. I am not sure where to go from here. Please help. </p>
|
Conrad
| 298,272 |
<p>The way to do it is to first take out the (finitely many) zeros with a finite Blaschke product <span class="math-container">$B$</span> - which has absolute value <span class="math-container">$1$</span> on the circle- so get <span class="math-container">$g = \frac{f}{B}$</span> analytic, no zeros, <span class="math-container">$g \to \infty$</span> on the circle, so then <span class="math-container">$\frac{1}{g}=\frac{B}{f}$</span> is an analytic function in the unit disc that goes to zero at the boundary and that means it is zero by maximum modulus, so you get a contradiction and are done.</p>
|
1,564,729 |
<p>Find:
$$
L = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{x^2}
$$</p>
<p>My approach:</p>
<p>Because of the fact that the above limit is evaluated as $\frac{0}{0}$, we might want to try the De L' Hospital rule, but that would lead to a more complex limit which is also of the form $\frac{0}{0}$. </p>
<p>What I tried is:
$$
L = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{1-\frac{\sin(x)}{x}}\frac{1}{x^2}\left(1-\frac{\sin(x)}{x}\right)
$$
Then, if the limits
$$
L_1 = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{1-\frac{\sin(x)}{x}},
$$</p>
<p>$$
L_2 = \lim_{x\to0}\frac{1}{x^2}\left(1-\frac{\sin(x)}{x}\right)
$$
exist, then $L=L_1L_2$.</p>
<p>For the first one, by making the substitution $u=1-\frac{\sin(x)}{x}$, we have
$$
L_1 = \lim_{u\to u_0}\frac{\sin(u)}{u},
$$
where
$$
u_0 = \lim_{x\to0}\left(1-\frac{\sin(x)}{x}\right)=0.
$$
Consequently,
$$
L_1 = \lim_{u\to0}\frac{\sin(u)}{u}=1.
$$</p>
<p>Moreover, for the second limit, we apply the De L' Hospital rule twice and we find $L_2=\frac{1}{6}$.</p>
<p>Finally, $L=L_1 L_2=1\cdot\frac{1}{6}=\frac{1}{6}$.</p>
<p>Is this correct?</p>
|
Olivier Oloa
| 118,798 |
<p><em>In a slightly different way</em>, using the Taylor expansion, as $x \to 0$,
$$
\sin x=x-\frac{x^3}6+O(x^5)
$$ gives
$$
1-\frac{\sin x}x=\frac{x^2}6+O(x^4)
$$ then
$$
\sin \left( 1-\frac{\sin x}x\right)=\frac{x^2}6+O(x^4)
$$ and</p>
<blockquote>
<p>$$
\frac{\sin \left( 1-\frac{\sin x}x\right)}{x^2}=\frac16+O(x^2)
$$ </p>
</blockquote>
<p>from which one may conclude easily.</p>
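<p>One can also confirm the value $\frac16$ numerically (an illustrative Python sketch, added for completeness):</p>

```python
import math

def g(x):
    return math.sin(1 - math.sin(x) / x) / x**2

# by the expansion above, g(x) = 1/6 + O(x^2) as x -> 0
for x in (0.1, 0.01, 0.001):
    print(x, g(x))  # values approach 1/6 ≈ 0.1666667
```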
|
148,185 |
<p>Let $ X = \mathbb R^3 \setminus A$, where $A$ is a circle. I'd like to calculate $\pi_1(X)$, using van Kampen. I don't know how to approach this at all - I can't see an open/NDR pair $C,D$ such that $X = C \cup D$ and $C \cap D$ is path connected on which to use van Kampen. </p>
<p>Any help would be appreciated. Thanks</p>
|
Community
| -1 |
<p>You can add a point to get $S^3-A$, without changing the fundamental group (this follows from van Kampen's theorem). Now $S^3$ minus <em>any point</em> is homeomorphic to $\mathbb{R}^3$, so choose this <em>any point</em> to lie on $A$! This gives a new space, still with the same fundamental group, but now you've got $\mathbb{R}^3-B$, where $B$ is a line (say the x axis). Think about what would happen if you had a solid ball minus a line segment; that should give you what you need to deformation retract to a solid torus.</p>
|
3,760,253 |
<blockquote>
<p>The diagram shows the line <span class="math-container">$y=\frac{3x}{5\pi}$</span> and the curve <span class="math-container">$y=\sin$</span>
<span class="math-container">$x$</span> for <span class="math-container">$0\le x\le \pi$</span>.</p>
<p>Find (as an exact value) the enclosed area shown shaded in the diagram.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/fTGUr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fTGUr.png" alt="enter image description here" /></a></p>
<p>I'm not sure where I've made an error, but this is my working out so far:</p>
<p>Area under <span class="math-container">$y=\frac{3x}{5\pi}$</span> from <span class="math-container">$0$</span> to <span class="math-container">$\frac{5\pi}{6}$</span> is:</p>
<p><span class="math-container">$$\int_{0}^{5\pi/6} \frac{3x}{5\pi}dx=\frac{5\pi}{24}$$</span></p>
<p>The remaining <strong>unshaded</strong> "triangle" area from <span class="math-container">$\frac{5\pi}{6}$</span> to <span class="math-container">$\pi$</span>:</p>
<p><span class="math-container">$$\frac{1}{2} \cdot \frac{\pi}{6} \cdot \frac{1}{2} = \frac{\pi}{24}$$</span></p>
<p>Area under <span class="math-container">$y=\sin x$</span> for <span class="math-container">$0\le x\le\pi$</span>:</p>
<p><span class="math-container">$$\int_{0}^{\pi} \sin{x}dx=2$$</span></p>
<p>Hence, shaded area = <span class="math-container">$$2-\frac{5\pi}{24} - \frac{\pi}{24} = \frac{-\pi}{4} + 2$$</span></p>
<p>The <strong>correct</strong> answer is:</p>
<p><span class="math-container">$$1+\frac{\sqrt3}{2} - \frac{5\pi}{24} units^2$$</span></p>
<p>If someone could explain how to correctly solve this, it would be greatly appreciated!</p>
|
univer
| 809,451 |
<p>L<span class="math-container">$= 1−x+x^2−⋯$</span><br>
<span class="math-container">$=(1+x^2+x^4+⋯) − (x+x^3+x^5⋯)$</span><br>
<span class="math-container">$=(1+x^2+x^4+⋯) − x(1+x^2+x^4⋯)$</span><br>
<span class="math-container">$=(1+x^2+x^4+⋯)(1−x)$</span><br>
<span class="math-container">$=\frac{1}{(1-x^2)}(1-x)$</span><br>
<span class="math-container">$=\frac{1}{1+x}$</span><br><br>
So, L<span class="math-container">$=\frac{1}{1+x}$</span></p>
|
2,604,093 |
<p>I would like to study the convergence of the series:</p>
<p>$$\sum_{n=1}^\infty \frac{\log n}{n^2}$$</p>
<p>I could compare the generic term $\frac{\log n}{n^2}$ with $\frac{1}{n^2}$ and say that
$$\frac{1}{n^2}<\frac{\log n}{n^2} \qquad (n\ge 3)$$ and that $\sum\frac{1}{n^2}$ converges, but this inequality points the wrong way for a comparison test, so I can conclude nothing more from it.</p>
|
BenB
| 336,000 |
<p>Here is one way to show it directly. Note that $$\lim_{n \rightarrow \infty} \frac{\sqrt{n}}{\log(n)} = \infty$$
(if you need convincing, just apply L'Hospital's Rule).</p>
<p>Thus $\exists N \in \mathbb{N}, \text{ such that }\sqrt{n} > \log(n) \quad \forall n > N. \quad$ So we write:
$$
\begin{align*}
\sum_{n=1}^{\infty} \frac{\log(n)}{n^2} = \sum_{n=1}^{N} \frac{\log(n)}{n^2} + \sum_{n=N + 1}^{\infty} \frac{\log(n)}{n^2} &< C_N + \sum_{n=N+1}^\infty \frac{\sqrt{n}}{n^2} \\
&< C_N + \sum_{n=1}^\infty \frac{1}{n^{3/2}} = K \in \mathbb{R^+}\end{align*}
$$
Where $C_N > 0$ is some positive constant since $\sum_{n=1}^N \frac{\log(n)}{n^2}$ is a finite, positive sum and $\sum \frac{1}{n^{3/2}}$ converges as it is a $p$-series with $p = \frac{3}{2} > 1$.</p>
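<p>Numerically the partial sums do settle quickly under this bound (an illustrative Python sketch; the closed form $-\zeta'(2)\approx 0.93755$ for the full sum is an aside I am adding, not part of the argument above):</p>

```python
import math

N = 200_000
partial = sum(math.log(n) / n**2 for n in range(1, N + 1))
# the tail is at most the integral of (ln t)/t^2 from N to infinity = (ln N + 1)/N
tail_bound = (math.log(N) + 1) / N
print(partial, tail_bound)  # partial ≈ 0.93748, tail bound ≈ 6.6e-5
```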
|
3,208,822 |
<p>I'm looking at a STEP question and I'm a little confused by the logic of the method; I'm really hoping someone can clarify what is going on for me. I thought I had a good knowledge, as some STEP II and III questions are accessible, but this one I just can't wrap my head around - there must be a gap in my understanding for sure.</p>
<p>It would be difficult for me to rewrite the question on here, but here is a video of the question AND the solution (it's not too long, I assure you - just the first 2/3 minutes; it's the opening of the question that has me scratching my head):</p>
<p><a href="https://www.youtube.com/watch?v=YKjQ2RTfltU&frags=pl%2Cwn" rel="nofollow noreferrer">https://www.youtube.com/watch?v=YKjQ2RTfltU&frags=pl%2Cwn</a></p>
<blockquote>
<p>An internet tester send <span class="math-container">$n$</span> e-mails simultaneously at time <span class="math-container">$t=0$</span>. Their arrival times at their destinations are independent random variables each having probability density function <span class="math-container">$ke^{-kt}$</span> where <span class="math-container">$t>0$</span> and <span class="math-container">$k>0$</span>. The random variable <span class="math-container">$T$</span> is the time of arrival of the e-mail that arrives first at its destination. Show that the probability density function of <span class="math-container">$T$</span> is <span class="math-container">$nke^{-nkt}$</span></p>
</blockquote>
<p>I understand the calculation later on in the question for the expectation, and even the very last part of the question. I just find it difficult to understand how it is logical to consider what this guy has considered in the first 2/3 minutes of the video. Is there a nicer explanation of why he considered <span class="math-container">$P(T>t)$</span> and not <span class="math-container">$P(T<t)$</span>? Could it even be done that way? Is this a general method for this 'type' of question? </p>
<p>I do see what he has done, and understand it, it's just something that I would never have even considered and makes me uncomfortable. </p>
<p>Thank you for your help, and I do apologise for the awkwardness of the question.</p>
|
José Carlos Santos
| 446,262 |
<p>For each <span class="math-container">$x\in[0,1]$</span>, <span class="math-container">$\bigl\lfloor nf(x)\bigr\rfloor\leqslant nf(x)<\bigl\lfloor nf(x)\bigr\rfloor+1$</span> and therefore<span class="math-container">$$\frac{\bigl\lfloor nf(x)\bigr\rfloor}n\leqslant f(x)<\frac{\bigl\lfloor nf(x)\bigr\rfloor}n+\frac1n.\tag1$$</span>So, given <span class="math-container">$\varepsilon>0$</span>, take <span class="math-container">$N\in\mathbb N$</span> such that <span class="math-container">$\frac1N<\varepsilon$</span> and it will follow from <span class="math-container">$(1)$</span> that<span class="math-container">$$n\geqslant N\implies\left\lvert f(x)-\frac{\bigl\lfloor nf(x)\bigr\rfloor}n\right\rvert<\varepsilon.$$</span>This proves that <span class="math-container">$(f_n)_{n\in\mathbb N}$</span> converges uniformly to <span class="math-container">$f$</span>.</p>
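A numerical illustration of this bound (my own sketch; $f(x)=x^2$ is an arbitrary continuous choice on $[0,1]$, and `sup_error` is a made-up helper name): the supremum of $f(x)-\lfloor nf(x)\rfloor/n$ over a sample grid always stays below $1/n$, matching inequality $(1)$:

```python
import math

def f(x):
    return x * x  # any continuous f on [0, 1] would do

def sup_error(n, samples=1000):
    xs = (i / samples for i in range(samples + 1))
    return max(f(x) - math.floor(n * f(x)) / n for x in xs)

for n in (10, 100, 1000):
    e = sup_error(n)
    assert 0 <= e < 1 / n  # the error lives in [0, 1/n), uniformly in x
```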
|
886,243 |
<p>Evaluate</p>
<p><img src="https://latex.codecogs.com/gif.latex?%0A%24%245050%20%5Cfrac%20%7B%5Cleft(%20%5Csum%20_%7Br%3D0%7D%5E%7B100%7D%20%5Cfrac%20%7B%7B100%5Cchoose%20r%7D%7D%7B50r%2B1%7D%5Ccdot%20(-1)%5Er%5Cright)%20-%201%7D%7B%5Cleft(%20%5Csum%20_%7Br%3D0%7D%5E%7B101%7D%5Cfrac%7B%7B101%5Cchoose%20r%7D%7D%7B50r%2B1%7D%20%5Ccdot%20(-1)%5Er%5Cright)%20-%201%7D%24%24" alt="image"></p>
<p>There seemed to be some problem with stackexchange's math rendering but Ian corrected whatever error was there in the expression.Thanks</p>
<p>$$5050 \frac {\left( \sum _{r=0}^{100} \frac {{100\choose r}}{50r+1}\cdot (-1)^r\right) - 1}{\left( \sum _{r=0}^{101}\frac{{101\choose r}}{50r+1} \cdot (-1)^r\right) - 1}$$</p>
<p>The original definite integral that led to this was </p>
<p>$$5050\frac{\int_0^1(1-x^{50})^{100} dx}{\int_0^1(1-x^{50})^{101} dx} $$</p>
<p>I used the binomial theorem to arrive at the above expression.</p>
<p>I already know a technique by definite integration but I want one by sequence and series.</p>
<blockquote class="spoiler">
<p> The answer is 5051</p>
</blockquote>
|
SuperAbound
| 140,590 |
<p>This expression can be simplified using <a href="http://en.wikipedia.org/wiki/Beta_function" rel="nofollow">Beta</a> and <a href="http://en.wikipedia.org/wiki/Gamma_function" rel="nofollow">Gamma</a> functions. <br/><br/>
The sum in the numerator is
\begin{align}
\sum^{100}_{r=0}\binom{100}{r}\frac{(-1)^r}{50r+1}
&=\sum^{100}_{r=0}\binom{100}{r}(-1)^r\int^1_0x^{50r}dx\\
&=\int^1_0(1-x^{50})^{100}dx\\
&=\frac{1}{50}\int^1_0u^{-49/50}(1-u)^{100}du\\
&=\frac{1}{50}B(1/50,101)\\
&=\frac{\Gamma(1/50) \ \Gamma(101)}{50 \ \Gamma(5051/50)}
\end{align}
Similarly, the sum in the denominator is
\begin{align}
\int^1_0(1-x^{50})^{101}dx
&=\frac{\Gamma(1/50) \ \Gamma(102)}{50 \ \Gamma(5101/50)}
\end{align}
Assuming you meant
$$5050\frac{\sum^{100}_{r=0}\binom{100}{r}\frac{(-1)^r}{50r+1}}{\sum^{101}_{r=0}\binom{101}{r}\frac{(-1)^r}{50r+1}}$$
instead, the above expression evaluates to
$$\frac{5050 \ \color\red{\Gamma(1/50)} \ \Gamma(101)}{\color\red{50} \ \Gamma(5051/50)}\frac{\color\red{50} \ \Gamma(5101/50)}{\color\red{\Gamma(1/50)} \ \Gamma(102)}=\frac{5050\cdot5051}{50\cdot 101}=5051$$</p>
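Since both sums are finite rational expressions, the claimed value can also be confirmed exactly, without Gamma functions (a Python sketch of mine; `S` is a helper name). This checks the version of the ratio without the "$-1$" terms, as assumed in the answer:

```python
from fractions import Fraction
from math import comb

def S(m):
    # sum_{r=0}^{m} C(m, r) (-1)^r / (50 r + 1), computed exactly
    return sum(Fraction((-1) ** r * comb(m, r), 50 * r + 1) for r in range(m + 1))

assert 5050 * S(100) / S(101) == 5051
```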
|
552,474 |
<p>Are there rings in which no nonzero element is a zero divisor?
If there are:
Are there unital <strong>(but not division)</strong> rings of this kind?
Are there non-unital rings of this kind?</p>
<p>Sorry, I forgot writing the non-division condition.</p>
|
anon
| 11,763 |
<p>Yes. In fact not only can we force every nonzero element to not be a zero divisor, we can force every nonzero element to be <em>invertible</em>. You get a division ring (also called skew field).</p>
<p>Perhaps one of the most famous historical examples: the quaternions. They are defined by</p>
<p>$$\Bbb H=\{a+bi+cj+dk:i^2=j^2=k^2=ijk=-1,\ a,b,c,d\in\Bbb R\}.$$</p>
<p>As for nonunital rings: simply take a unital ring with no zero divisors at hand (say $\Bbb H$) and look at a non-unital subring. For example the elements $2\Bbb Z+2i\Bbb Z+2j\Bbb Z+2k\Bbb Z\subseteq\Bbb H$.</p>
|
103,545 |
<p>It is well-known that, given a normalized eigenform $f=\sum a_n q^n$, its coefficients $a_n$ generate a number field $K_f$. </p>
<p>In their 1995 <a href="http://www.math.mcgill.ca/darmon/pub/Articles/Expository/05.DDT/paper.pdf">paper</a> "Fermat's Last Theorem", Darmon, Diamond, and Taylor remark that, at the time of writing, very little was known about what sort of number fields could arise as some $K_f$. They do claim, however, that $K_f$ must be totally real or CM. This claim is made just before Lemma 1.37, on page 40 of the copy I linked to.</p>
<p>This is probably standard knowledge among experts, but I'm having trouble finding a reference, so my questions are:</p>
<p>1) Can someone please provide a reference for this claim?</p>
<p>2) Is this still the state of the art, or do we now know more about what types of fields can appear as $K_f$ for some $f$? What if we restrict our attention to weight $k=2$?</p>
<p>Thank you!</p>
<blockquote>
<p>Edit: In my question, I originally just wrote "modular form" instead of "normalized eigenform". Thanks to @Stopple for pointing this out! Also, I originally claimed the paper was published in 2007, but Kevin Buzzard pointed out it was published in 1995. Thanks Kevin!</p>
</blockquote>
|
Rob Harron
| 1,021 |
<p>For (1), see Ribet's wonderful article <em>Galois representations attached to eigenforms with Nebentypus</em> (<a href="http://dx.doi.org/10.1007/BFb0063943" rel="nofollow">http://dx.doi.org/10.1007/BFb0063943</a>). It's proposition 3.2.</p>
|
3,482,376 |
<blockquote>
<p>Suppose <span class="math-container">$f$</span> is differentiable on <span class="math-container">$[0,\infty)$</span> and <span class="math-container">$\displaystyle \lim_{x \to \infty} \frac{f(x)}{x} = 0$</span>. Show that <span class="math-container">$\displaystyle \liminf_{x \to \infty}|f'(x)| = 0$</span>.</p>
</blockquote>
<p>What I have tried is to apply the mean value theorem to <span class="math-container">$\frac{f(y)}{y} - \frac{f(x)}{x}$</span> with <span class="math-container">$0 < x < y$</span>. There exists <span class="math-container">$c_x \in (x,y)$</span> such that <span class="math-container">$\dfrac{\frac{f(y)}{y} - \frac{f(x)}{x}}{y-x} = \frac{f'(c_x)}{c_x}- \frac{f(c_x)}{c_x^2}$</span>. From here we get
<span class="math-container">$$|f'(c_x)| \leq \frac{c_x}{y-x}\left|\frac{f(y)}{y} - \frac{f(x)}{x} \right| + \left|\frac{f(c_x)}{c_x} \right| $$</span></p>
<p>Now I want to set <span class="math-container">$y = 2x$</span> and take <span class="math-container">$\liminf$</span> of both sides to get <span class="math-container">$0$</span>, but I am unclear on:</p>
<ol>
<li><p>If <span class="math-container">$\liminf_{x \to \infty} |f'(c_x)| = 0$</span>, then <span class="math-container">$\liminf_{x \to \infty} |f'(x)| = 0$</span>. I think this is true?</p></li>
<li><p>How to handle the factor <span class="math-container">$c_x/(y-x) = c_x/x$</span> when taking <span class="math-container">$\liminf$</span>?</p></li>
</ol>
|
Franklin Pezzuti Dyer
| 438,055 |
<p>It seems easier to prove this using the contrapositive - here's a sketch. </p>
<p>Suppose that <span class="math-container">$\liminf |f'(x)|=k > 0$</span>, so that the limit inferior is not zero. Then <span class="math-container">$|f'(x)|>k/2$</span> for all sufficiently large <span class="math-container">$x$</span>, and since derivatives have the intermediate value property (Darboux's theorem), either <span class="math-container">$f'(x)>k/2$</span> for all large <span class="math-container">$x$</span> or <span class="math-container">$f'(x)<-k/2$</span> for all large <span class="math-container">$x$</span>. This means that either <span class="math-container">$f(x) >\frac{k}{2}x+C$</span> or <span class="math-container">$f(x)<-\frac{k}{2}x+C$</span> for all large <span class="math-container">$x$</span>, for some constant <span class="math-container">$C$</span>. This implies that the limit of <span class="math-container">$f(x)/x$</span> as <span class="math-container">$x\to\infty$</span> cannot equal zero.</p>
|
3,482,376 |
<blockquote>
<p>Suppose <span class="math-container">$f$</span> is differentiable on <span class="math-container">$[0,\infty)$</span> and <span class="math-container">$\displaystyle \lim_{x \to \infty} \frac{f(x)}{x} = 0$</span>. Show that <span class="math-container">$\displaystyle \liminf_{x \to \infty}|f'(x)| = 0$</span>.</p>
</blockquote>
<p>What I have tried is to apply the mean value theorem to <span class="math-container">$\frac{f(y)}{y} - \frac{f(x)}{x}$</span> with <span class="math-container">$0 < x < y$</span>. There exists <span class="math-container">$c_x \in (x,y)$</span> such that <span class="math-container">$\dfrac{\frac{f(y)}{y} - \frac{f(x)}{x}}{y-x} = \frac{f'(c_x)}{c_x}- \frac{f(c_x)}{c_x^2}$</span>. From here we get
<span class="math-container">$$|f'(c_x)| \leq \frac{c_x}{y-x}\left|\frac{f(y)}{y} - \frac{f(x)}{x} \right| + \left|\frac{f(c_x)}{c_x} \right| $$</span></p>
<p>Now I want to set <span class="math-container">$y = 2x$</span> and take <span class="math-container">$\liminf$</span> of both sides to get <span class="math-container">$0$</span>, but I am unclear on:</p>
<ol>
<li><p>If <span class="math-container">$\liminf_{x \to \infty} |f'(c_x)| = 0$</span>, then <span class="math-container">$\liminf_{x \to \infty} |f'(x)| = 0$</span>. I think this is true?</p></li>
<li><p>How to handle the factor <span class="math-container">$c_x/(y-x) = c_x/x$</span> when taking <span class="math-container">$\liminf$</span>?</p></li>
</ol>
|
user284331
| 284,331 |
<p>Let <span class="math-container">$\epsilon>0$</span> and choose some <span class="math-container">$M>0$</span> such that <span class="math-container">$\left|\dfrac{f(x)}{x}\right|<\epsilon$</span> for all <span class="math-container">$x\geq M$</span>.</p>
<p>Now we choose by Mean Value Theorem some <span class="math-container">$c_{x}\in[x,2x]$</span> such that
<span class="math-container">\begin{align*}
\left|\dfrac{f(2x)-f(x)}{2x-x}\right|=|f'(c_{x})|,
\end{align*}</span>
then for all <span class="math-container">$x\geq M$</span>, we have
<span class="math-container">\begin{align*}
|f'(c_{x})|\leq 2\left|\dfrac{f(2x)}{2x}\right|+\left|\dfrac{f(x)}{x}\right|\leq 2\epsilon+\epsilon=3\epsilon.
\end{align*}</span>
Now take <span class="math-container">$x$</span> equal to <span class="math-container">$M,3M,9M,...$</span> successively, so <span class="math-container">$c_{M}\in[M,2M]$</span>, <span class="math-container">$c_{3M}\in[3M,6M],...$</span>, and look at the sequence <span class="math-container">$(\eta_{n})$</span> defined by <span class="math-container">$\eta_{n}=c_{3^{n}M}$</span>; of course <span class="math-container">$\eta_{n}\rightarrow\infty$</span> with <span class="math-container">$|f'(\eta_{n})|\leq 3\epsilon$</span>, so <span class="math-container">$\liminf_{n\rightarrow\infty}|f'(\eta_{n})|\leq 3\epsilon$</span>. Note that <span class="math-container">$M$</span> may depend on <span class="math-container">$\epsilon$</span>, so the construction of the sequence <span class="math-container">$(\eta_{n})$</span> depends on <span class="math-container">$\epsilon>0$</span> as well, but this does not matter.</p>
<p>Finally, note that <span class="math-container">$\liminf_{x\rightarrow\infty}|f'(x)|=\inf\{\liminf_{n\rightarrow\infty}|f'(y_{n})|: y_{n}\rightarrow\infty\}$</span>, and hence <span class="math-container">$\liminf_{x\rightarrow\infty}|f'(x)|\leq\liminf_{n\rightarrow\infty}|f'(\eta_{n})|\leq 3\epsilon$</span>, so by arbitrariness of <span class="math-container">$\epsilon>0$</span>, we deduce that <span class="math-container">$\liminf_{x\rightarrow\infty}|f'(x)|=0$</span>.</p>
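A concrete numerical illustration of this construction (my own sketch, with the arbitrary choice $f(x)=\sqrt x$, which satisfies $f(x)/x\to 0$): the Mean Value Theorem slopes $(f(2x)-f(x))/x = f'(c_x)$ obey the $3\epsilon$-type bound used in the proof and shrink toward $0$:

```python
import math

f = math.sqrt  # f(x)/x = 1/sqrt(x) -> 0, so the hypothesis holds

for x in (1e2, 1e4, 1e6):
    slope = (f(2 * x) - f(x)) / x  # equals f'(c_x) for some c_x in [x, 2x]
    # the bound from the proof: |f'(c_x)| <= 2|f(2x)/(2x)| + |f(x)/x|
    assert abs(slope) <= 2 * abs(f(2 * x) / (2 * x)) + abs(f(x) / x)

assert abs((f(2e6) - f(1e6)) / 1e6) < 1e-3  # the slopes tend to 0
```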
|
1,630,733 |
<p>I seem to be having a lot of difficulty with proofs and wondered if someone can walk me through this. The question out of my textbook states:</p>
<blockquote>
<p>Use a direct proof to show that if two integers have the same parity, then their sum is even.</p>
</blockquote>
<p>A very similar example from my notes is as follows: <code>Use a direct proof to show that if two integers have opposite parity, then their sum is odd.</code> This led to:</p>
<p><strong>Proposition</strong>: The sum of an even integer and an odd integer is odd.</p>
<p><strong>Proof</strong>: Suppose <code>a</code> is an even integer and <code>b</code> is an odd integer. Then by our definitions of even and odd numbers, we know that integers <code>m</code> and <code>n</code> exist so that <code>a = 2m</code> and <code>b = 2n+1</code>. This means:</p>
<p>a+b = (2m)+(2n+1) = 2(m+n)+1 = 2c+1 where c=m+n is an integer by the closure property of addition. </p>
<p>Thus it is shown that <code>a+b = 2c+1</code> for some integer <code>c</code> so <code>a+b</code> must be odd.</p>
<h2>----------------------------------------------------------------------------</h2>
<p>So then for the proof of showing two integers of the same parity would have an even sum, I have thus far:</p>
<p><strong>Proposition</strong>: The sum of 2 even integers is even.</p>
<p><strong>Proof</strong>: Suppose <code>a</code> is an even integer and <code>b</code> is an even integer. Then by our definitions of even numbers, we know that integers <code>m</code> and <code>n</code> exist so that <code>a=2m</code> and <code>b=2m</code>???</p>
|
Alex Provost
| 59,556 |
<p>Yes, you can use the definitions directly. If $a,b$ are even then like you say we have $a = 2m$ and $b = 2n$, so $a+b = 2m + 2n = 2(m+n)$, which is even.</p>
<p>Similarly, if $a,b$ are odd then we have $a = 2m + 1$ and $b = 2n + 1$, and so $a+b = (2m +1) + (2n + 1) = 2(m + n + 1)$, which is also even.</p>
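These two cases can also be checked exhaustively on a small range of integers (a trivial Python sketch of my own). This is no substitute for the proof, but it is a handy sanity check:

```python
# Same parity => even sum; opposite parity => odd sum.
for a in range(-20, 21):
    for b in range(-20, 21):
        if a % 2 == b % 2:
            assert (a + b) % 2 == 0
        else:
            assert (a + b) % 2 == 1
```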
|
1,473,418 |
<p>I have difficulties with this question : </p>
<p>Given the ODE named (1) : $$x'=y+\sin (x^2y)$$ $$y'=x+\sin(xy^2)$$</p>
<p>and the :</p>
<p><strong>Definition.</strong> A <em>Petal</em> is a solution $(x(t),y(t))$ that verifies $\displaystyle \lim _{t \to \pm \infty} (x(t),y(t)) =(0,0)$.</p>
<p>How can I show that the exists at most two distinct <em>petals</em> ? </p>
<p>First I remarked that the question has sense since : </p>
<p>$$\|(x',y')\| \leqslant \|(x,y)\| + C$$ where $C$ is a wisely chosen constant. It follows that the solution $(x(t),y(t))$ is defined on $\mathbb{R}$.</p>
<p>Next, the linearization in $(0,0)$ shows that $(0,0)$ is an hyperbolic point. </p>
<p>But how can I say more ? </p>
<p>Thank you</p>
|
JMP
| 210,189 |
<p>Your approach is fine except that you are missing the repeated digit. Here is a lazy solution:</p>
<p>Imagine we only have $0,1,2,3,4,5,6$, then your sums become:</p>
<ul>
<li>$0: 6\times5\times4=120$</li>
<li>$5: 5\times5\times4=100$</li>
</ul>
<p>Therefore there are at least $220$ and at most $390$ solutions, therefore the answer is b ($249$).</p>
|
3,678,033 |
<p>Let <span class="math-container">$Y$</span> be an ordered set with the order topology, let <span class="math-container">$X$</span> be a topological space, and let <span class="math-container">$f,g:{X\to Y}$</span> be continuous functions.</p>
<p>Show that the set <span class="math-container">$A=\{x\in X\mid f(x)\le g(x)\}$</span> is closed in <span class="math-container">$X$</span>.
I tried working with the complement of <span class="math-container">$A$</span> and the Hausdorff property, but I didn't get anywhere...</p>
|
Didier
| 788,724 |
<p>If <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are continuous, what can you say about <span class="math-container">$f-g : X \to Y$</span> ? Thus, can you express <span class="math-container">$A$</span> as the reverse image of <span class="math-container">$f-g$</span> of a suitable closed subset of <span class="math-container">$Y$</span> ?</p>
|
2,983,877 |
<p>I once saw a function for generating successively more-precise square root approximations, <span class="math-container">$f(x) = \frac{1}{2} ({x + \frac{S}{x}})$</span> where S is the square for which we are trying to calculate <span class="math-container">$\sqrt S$</span>. And the function works really well, generating an approximation of <span class="math-container">$\sqrt 2 \approx f^3(1) = \frac{577}{408} \approx 1.414215$</span>.</p>
<p>This fascinated me, so I tried extending the same logic to further radicals, starting with cube roots.</p>
<p>My first guess was <span class="math-container">$f_2(x) = \frac{1}{3} ({x + \frac{S}{x}})$</span>, but when I tried an approximation for <span class="math-container">$\sqrt[3] 3$</span>, I got <span class="math-container">$\sqrt[3] 3 \approx f_2^2(1) = \frac{43}{36} \approx 1.194444$</span>, which is a far cry from <span class="math-container">$\sqrt[3] 3 \approx Google(\sqrt[3] 3) \approx 1.44225$</span>.</p>
<p>How can I extend this logic for <span class="math-container">$n^{a\over b}$</span> where <span class="math-container">$ b > 2$</span>? Was I accurate all-along and just needed more iterations? Or is the presence of <span class="math-container">$\frac{1}{2}$</span> in <span class="math-container">$f(x) $</span> and in <span class="math-container">$n^\frac{1}{2}$</span> a coincidence?</p>
<p>Disclaimer: I am not educated in calculus.</p>
|
Ken Draco
| 431,886 |
<p><a href="https://math.stackexchange.com/questions/2225186/approximation-for-value-of-2x-without-using-calculator/2225416#2225416">Please see the link description at the very bottom</a></p>
<p>Let's first derive a formula for calculating square roots manually without calculus (and later for more general cases of various roots).
Let's denote our numbers as follows
<span class="math-container">$\,\sqrt{S}=x_1+a\,$</span> where <span class="math-container">$\,x_1\,$</span> is the first approximate value by our choice and <span class="math-container">$\,a\,$</span> is a tolerance (an error of approximation). Therefore, after squaring we obtain</p>
<p><span class="math-container">$\,S=x_1^2+2\,x_1a+a^2.\,\,$</span> We choose <span class="math-container">$\,x_1\,$</span> to be much greater than <span class="math-container">$\,a\,$</span> so that we can cast away <span class="math-container">$\,a^2\,$</span> for the approximation:</p>
<p><span class="math-container">$$\,S\approx x_1^2+2\,x_1a\,,\quad a\approx\frac{S-x_1^2}{2\,x_1}$$</span>
As we previously denoted <span class="math-container">$\,\sqrt{S}=x_1+a\,$</span>:
<span class="math-container">$$\,\sqrt{S}=x_1+a\approx x_1+\frac{S-x_1^2}{2\,x_1}=\frac{2\,x_1^2+S-x_1}{2\,x_1}=\frac{x_1^2+S}{2\,x_1}\,$$</span>
We use the same technique for each step (<span class="math-container">$a$</span> gets smaller and smaller), and at the step <span class="math-container">$\,n+1\,$</span> we get a more general expression for our formula:<br>
<span class="math-container">$$\sqrt{S}\approx x_{n+1}=\frac{x_n^2+S}{2\,x_n}\quad or \quad \sqrt{S}\approx\frac{1}{2}\bigg(x_n+\frac{S}{x_n}\bigg)$$</span> </p>
<p>Now let's derive such a formula for the cube root.
<span class="math-container">$\,\sqrt[3]{S}=x_1+a\,$</span> where <span class="math-container">$\,x_1\,$</span> is the first approximate value by our own choice and <span class="math-container">$\,a\,$</span> is a tolerance (an error of approximation). By raising to the third power we obtain</p>
<p><span class="math-container">$\,S=x_1^3+3\,x_1^2a+3\,x_1a^2+a^3.\,\,$</span> Again, we choose <span class="math-container">$\,x_1\,$</span> to be much greater than <span class="math-container">$\,a\,$</span> so that we can discard <span class="math-container">$\,a^2\,$</span> and <span class="math-container">$\,a^3\,$</span> for our approximation:</p>
<p><span class="math-container">$$\,S\approx x_1^3+3\,x_1^2a\,,\quad a\approx\frac{S-x_1^3}{3\,x_1^2}$$</span>
As we previously denoted <span class="math-container">$\,\sqrt[3]{S}=x_1+a\,$</span>:
<span class="math-container">$$\,\sqrt[3]{S}\approx x_2=x_1+a= x_1+\frac{S-x_1^3}{3\,x_1^2}=\frac{3\,x_1^3+S-x_1^3}{3\,x_1^2}=\frac{2\,x_1^3+S}{3\,x_1^2}\,$$</span>
Similarly for <span class="math-container">$\,x_{n+1}\,$</span> we get
<span class="math-container">$$\,\sqrt[3]{S}\approx x_{n+1}=x_n+\frac{S-x_n^3}{3\,x_n^2}=\frac{3\,x_n^3+S-x_n^3}{3\,x_n^2}=\frac{2\,x_n^3+S}{3\,x_n^2}\,$$</span>
So, <span class="math-container">$$\,\sqrt[3]{S}\approx x_{n+1}=\frac{2\,x_n^3+S}{3\,x_n^2}\,$$</span>
In the same way we can derive the formula for the <span class="math-container">$k$</span>-th root of <span class="math-container">$S$</span>:
<span class="math-container">$$\,\sqrt[k]{S}\approx x_{n+1}=\frac{(k-1)\,x_n^k+S}{k\,x_n^{k-1}}\,$$</span></p>
<p>Unlike the general formula we have just derived, there's an even more general formula (Newton's binomial) for <span class="math-container">$(1+a)^x$</span> where <span class="math-container">$x$</span> is any fractional or negative number. For positive integer powers Newton's binomial is finite, otherwise it is an infinite series. Here is a link (please go back to the very top of this answer) illustrating a far more general case.</p>
<p>However, I want to express that it is important to be able to derive formulas and understand all underlying procedures and derivations rather than plugging in numbers and hoping they will fit by hook or by crook after some attempts.</p>
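The general iteration derived above is easy to run (my own Python sketch; `root_step` is a made-up name). With exact rationals it reproduces the $\frac{577}{408}$ approximation of $\sqrt 2$ mentioned in the question, and it fixes the cube-root attempt:

```python
from fractions import Fraction

def root_step(x, S, k):
    # x_{n+1} = ((k-1) x_n^k + S) / (k x_n^(k-1))
    return ((k - 1) * x ** k + S) / (k * x ** (k - 1))

# k = 2: three steps from x_1 = 1 give 577/408 ~ sqrt(2)
x = Fraction(1)
for _ in range(3):
    x = root_step(x, Fraction(2), 2)
assert x == Fraction(577, 408)

# k = 3: the corrected iteration converges to 3^(1/3) ~ 1.44225
y = 1.0
for _ in range(10):
    y = root_step(y, 3.0, 3)
assert abs(y ** 3 - 3) < 1e-9
```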
|
2,078,796 |
<p>In a contest problem book, I found a reference to Newton's little formula that may be used to find the <em>nth</em> term of a numeric sequence. Specifically, it is a formula that is based on the differences between consecutive terms that is computed at each level until the differences match. </p>
<p>An example application of this formula for computing the <em>nth</em> term of the series (15, 55, 123, 225, 367, 555, 795, ....) involves computing the differences as shown below: </p>
<pre>
1) 1st Level difference is (40, 68, 102, 142, 188, 240)
2) 2nd Level difference is (28, 34, 40, 46, 52)
3) 3rd Level difference is (6, 6, 6, 6, 6)
</pre>
<p>Now the nth term is $$15{n-1\choose 0} + 40{n-1\choose 1} + 28{n-1\choose 2} + 6{n-1\choose 3}$$ where the constant multipliers are the first term of the differences at each level in addition to the first term of the sequence itself.</p>
<p>I was not able to find any reference to this formula or a proof of it after searching on the web. Any explanation of this method is appreciated.</p>
|
Ahmed S. Attaalla
| 229,023 |
<p>Suppose that we have a sequence:</p>
<p>$$a_0,a_1,...a_k$$</p>
<p>And we want to find the function of $n$ that defines $a_n$.</p>
<p>To do this we start by letting $a_{n+1}-a_n=\Delta a_n$ and we call this operation on $a_n$ the forward difference. Then given $\Delta a_n$ we can find $a_n$. Sum both sides of the equation from $n=0$ to $x-1$, and note that we have a telescoping series:</p>
<p>$$\sum_{n=0}^{x-1} \Delta a_n=\sum_{n=0}^{x-1} (a_{n+1}-a_n)=a_{x}-a_{0}$$</p>
<p>Hence $a_n=a_0+\sum_{i=0}^{n-1} \Delta a_i$. Likewise $\Delta a_n=\Delta a_0+\sum_{i=0}^{n-1} \Delta^2 a_i$, and so forth. Iterating, we must have, if the series converges:</p>
<p>$$a_n=a_0+\Delta (0) \sum_{x_0=0}^{n-1} 1+\Delta \Delta (0) \sum_{x_0=0}^{n-1} \sum_{x_1=0}^{x_0-1} 1+\Delta \Delta \Delta (0) \sum_{x_0=0}^{n-1} \sum_{x_1=0}^{x_0-1} \sum_{x_2=0}^{x_1-1} 1+\cdots$$</p>
<p>Where $\Delta^i (0)$ denotes the first term ($n=0$) of the $i$ th difference sequence of $a_n$. </p>
<p>Through a combinatorial argument, if we take $\Delta^0 (0)=a_0$ and ${n \choose 0}=1$, we get:</p>
<p>$$a_n=\sum_{i=0}^{\infty} \Delta^i(0) {n \choose i}$$</p>
<p>If it is the case you want the sequence to start with $a_1$ we need to shift this result to the right one:</p>
<p>$$a_n=\sum_{i=0}^{\infty} \Delta^i(1) {n-1 \choose i}$$</p>
<p>Note when you re-define $a_0$ to be $a_1$ by shifting it's index to the right one, you're re-defining $a_1-a_0=\Delta^1(0)$ to be $a_2-a_1=a_{1+1}-a_{1}=\Delta^1(1)$.</p>
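The formula is easy to verify on the example sequence from the question (a Python sketch of mine; `leading_differences` is a helper name). It rebuilds the leading entries $(15,40,28,6)$ of the difference table and predicts the next term:

```python
from math import comb

# a_n = sum_i Delta^i(1) * C(n-1, i), for a sequence indexed from n = 1.
seq = [15, 55, 123, 225, 367, 555, 795]

def leading_differences(s):
    heads, row = [], list(s)
    while row:
        heads.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
        if all(v == 0 for v in row):
            break
    return heads  # the first entry of each difference level

heads = leading_differences(seq)

def a(n):
    return sum(h * comb(n - 1, i) for i, h in enumerate(heads))

assert heads == [15, 40, 28, 6]
assert [a(n) for n in range(1, 8)] == seq
assert a(8) == 1093  # the predicted next term of the sequence
```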
|
481,173 |
<p>The most common way to find the inverse matrix is $M^{-1}=\frac1{\det(M)}\mathrm{adj}(M)$. However, it is very troublesome to compute when the matrix is large.</p>
<p>I found a very interesting way to get inverse matrix and I want to know why it can be done like this. For example if you want to find the inverse of $$M=\begin{bmatrix}1 & 2 \\ 3 & 4\end{bmatrix}$$</p>
<p>First, write an identity matrix on the right hand side and carry out some steps:</p>
<p>$$\begin{bmatrix}1 & 2 &1 &0 \\ 3 & 4&0&1\end{bmatrix}\to\begin{bmatrix}1 & 2 &1 &0 \\ 3/2 & 2&0&1/2\end{bmatrix}\to\begin{bmatrix}1/2 & 0 &-1 &1/2 \\ 3/2 & 2&0&1/2\end{bmatrix}\to\begin{bmatrix}3/2 & 0 &-3 &3/2 \\ 3/2 & 2&0&1/2\end{bmatrix}$$
$$\to\begin{bmatrix}3/2 & 0 &-3 &3/2 \\ 0 & 2&3&-1\end{bmatrix}\to\begin{bmatrix}1 & 0 &-2 &1 \\ 0 & 2&3&-1\end{bmatrix}\to\begin{bmatrix}1 & 0 &-2 &1 \\ 0 & 1&3/2&-1/2\end{bmatrix}$$</p>
<p>You can 1. swap any two rows of the matrix, 2. multiply any row by a nonzero constant, 3. add a multiple of one row to another row, just as in Gaussian elimination. When the identity matrix has been shifted to the left, the right-hand side becomes</p>
<p>$$M^{-1}=\begin{bmatrix}-2 &1 \\3/2&-1/2\end{bmatrix}$$</p>
<p>How can I prove that this method works?</p>
|
bubba
| 31,744 |
<p>The method you describe is called the Gauss-Jordan method. There is a nice description and an informal discussion of why it works <a href="http://www.mathsisfun.com/algebra/matrix-inverse-row-operations-gauss-jordan.html" rel="nofollow">here</a>.</p>
<p>It's much more efficient and stable than the method based on determinants and adjugates. In practice, you would not use the determinant/adjugate method except for very small matrices (of size 2 or 3).</p>
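For concreteness, here is a minimal Gauss-Jordan inversion over exact rationals (my own sketch, not production code; `invert` is a made-up name), verified on the $2\times 2$ example from the question:

```python
from fractions import Fraction

def invert(M):
    n = len(M)
    # Augment [M | I] with exact rational entries.
    A = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]      # row swap
        p = A[col][col]
        A[col] = [v / p for v in A[col]]         # scale pivot row to 1
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]  # eliminate
    return [row[n:] for row in A]                # right half is M^{-1}

inv = invert([[1, 2], [3, 4]])
assert inv == [[Fraction(-2), Fraction(1)], [Fraction(3, 2), Fraction(-1, 2)]]
```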
|
1,943,478 |
<p>So the prompt is merely an existence proof--just find a $u$ and $v$ that work. Well, I'm unfortunately a little stuck on getting started.</p>
<p>I know that $Q \in SO_4(\mathbb R) \implies QQ^T = I \text{ and } \det(Q) = 1$.</p>
<p>I tried to solve $Qx = uxv$ for $u,v$ but I was not able to do so successfully. This is because I don't necessarily know whether $x$ has an inverse.</p>
<p>Here's what I managed to deduce successfully:</p>
<p>$$|Qx| = |uxv| = |u||x||v| = 1\cdot |x| \cdot 1 = |x|$$</p>
<p>which means that multiplying $x \in \mathbb H$ by $Q$ doesn't change its length.</p>
<p>Let $Q = [q_1 \,|\, q_2 \,|\, q_3 \,|\, q_4]$, where $q_i$ is the $i^{th}$ column of $Q$, and let $x = a + bi + cj + dk$. Then,</p>
<p>$$\begin{align}Qx & = Q(a + bi + cj + dk)\\& = aQ + bQi + cQj + dQk \\&= aQ + bQ\begin{pmatrix}0 \\ 1 \\ 0 \\ 0\end{pmatrix} + cQ\begin{pmatrix}0 \\ 0 \\ 1 \\ 0\end{pmatrix} + dQ\begin{pmatrix}0 \\ 0 \\ 0 \\ 1\end{pmatrix} \\ & = aQ + bq_2 + cq_3 + dq_4\end{align}$$</p>
<p>And unfortunately I don't see where to go from here. I'm not entirely sure that I did the multiplication of a quaternion by a matrix correctly. If $a \in \mathbb R$, what does $aQ$ mean? Thus, I don't think that's right.</p>
<p>Likely, the solution will boil down to $v = u^{-1}$ or something. But I'm still not quite sure how to arrive there.</p>
|
Emilio Novati
| 187,568 |
<p>It is a consequence of the fact that any 4D rotation can be canonically decomposed into a left-isoclinic and a right-isoclinic rotation.
As you can see in the <a href="https://en.wikipedia.org/wiki/Rotations_in_4-dimensional_Euclidean_space#Isoclinic_decomposition" rel="nofollow">reference</a> this means that any $A \in SO_4(\mathbb{R})$ acts, on a vector $\vec x=(x_1,x_2,x_3,x_4) \in \mathbb{R}^4$ as:
$$
A \vec x=UV \vec x=U \vec x V^T
$$
where $U$ and $V^T$ are matrices of the form:
$$
U=\begin{pmatrix}
a&-b&-c&-d\\
b&a&-d&c\\
c&d&a&-b\\
d&-c&b&a
\end{pmatrix}
\qquad
V^T=\begin{pmatrix}
p&q&r&s\\
-q&p&-s&r\\
-r&s&p&-q\\
-s&-r&q&p
\end{pmatrix}
$$
with $a^2+b^2+c^2+d^2=p^2+q^2+r^2+s^2=1$.
Now we can see that $U$ and $V^T$ are the <a href="https://en.wikipedia.org/wiki/Quaternion#Matrix_representations" rel="nofollow">matrix representation</a> in $M_4(\mathbb{R})$ of two unit quaternions $u,v$ and the vector $\vec x$ can be represented by the quaternion $x=x_1+x_2\mathbf{i}+x_2\mathbf{j}+x_3\mathbf{i}+x_4\mathbf{k}$, so the rotation can be represented as the product $uxv$ .</p>
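A quick numerical check of this correspondence (my own Python sketch; `qmul` and `left_matrix` are made-up helper names, and the signs follow the left-isoclinic matrix on the Wikipedia page cited above): the matrix built from a unit quaternion $u$ is orthogonal and acts on a $4$-vector exactly as left Hamilton multiplication by $u$:

```python
def qmul(p, q):
    # Hamilton product of quaternions (a, b, c, d) = a + bi + cj + dk.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def left_matrix(u):
    a, b, c, d = u
    return [[a, -b, -c, -d],
            [b,  a, -d,  c],
            [c,  d,  a, -b],
            [d, -c,  b,  a]]

u = (0.5, 0.5, 0.5, 0.5)   # a unit quaternion: 4 * 0.25 = 1
U = left_matrix(u)
x = (1.0, 2.0, 3.0, 4.0)
Ux = tuple(sum(U[i][j] * x[j] for j in range(4)) for i in range(4))
assert all(abs(p - q) < 1e-12 for p, q in zip(Ux, qmul(u, x)))  # U x = u * x
# U is orthogonal (U U^T = I), so U is in SO(4) for unit u.
for i in range(4):
    for j in range(4):
        dot = sum(U[i][k] * U[j][k] for k in range(4))
        assert abs(dot - (i == j)) < 1e-12
```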
|
488,258 |
<p>What are the last two digits of $11^{25}$, solved using the binomial theorem by writing $11^{25}=(1+10)^{25}$?
If there is any other way to solve this, it would help if that were shown too.</p>
|
Gerry Myerson
| 8,269 |
<p>Here's another way. $11\times11=121$ ends in 21. $11\times21=231$ ends in 31. $11\times31=341$ ends in 41. See the pattern for the last two digits of powers of 11? Now prove that the pattern continues to hold, and then see what that tells you about $11^{25}$. </p>
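The pattern can be confirmed computationally (a short Python sketch of my own). By the binomial theorem, $(1+10)^n \equiv 1+10n \pmod{100}$, since every later term of the expansion is divisible by $100$:

```python
# Last two digits of 11^n follow the pattern (1 + 10n) mod 100.
assert all(pow(11, n, 100) == (1 + 10 * n) % 100 for n in range(1, 200))
assert pow(11, 25, 100) == 51  # so 11^25 ends in ...51
```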
|
935,331 |
<p>Previously, to integrate functions like $x(x^2+1)^7$ I used integration by parts. Today we were introduced to a new formula in class: $$\int f'(x)f(x)^n dx = \frac{1}{n+1} {f(x)}^{n+1} +c$$
I was wondering how and why this works. Any help would be appreciated. </p>
|
Community
| -1 |
<p>By the chain rule we have</p>
<p>$$(g\circ f)'=(g'\circ f)\times f'$$
Now what do we get if we take $g(x)=\frac{x^{n+1}}{n+1}$?</p>
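A numerical sanity check of the formula on the question's example $f(x)=x^2+1$, $n=7$ (my own sketch; note the integrand that fits the formula is $f'(x)f(x)^7=2x(x^2+1)^7$, with the factor $2$): the antiderivative $F(x)=f(x)^8/8$ should differentiate back to the integrand:

```python
def F(x):
    return (x * x + 1) ** 8 / 8       # f(x)^{n+1} / (n+1) with f = x^2 + 1, n = 7

def integrand(x):
    return 2 * x * (x * x + 1) ** 7   # f'(x) f(x)^n

h = 1e-6
for x in (0.3, 0.7, 1.1):
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    assert abs(numeric_derivative - integrand(x)) / abs(integrand(x)) < 1e-6
```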
|
640,554 |
<p>For the system
$$
\left\{
\begin{array}{rcrcrcr}
x &+ &3y &- &z &= &-4 \\
4x &- &y &+ &2z &= &3 \\
2x &- &y &- &3z &= &1
\end{array}
\right.
$$
what is the condition to determine if there is no solution or unique solution or infinite solution? </p>
<p>Thank you!</p>
|
Ben
| 116,271 |
<p>The important concept here is linear dependence versus linear independence. As shown in the examples posted by others, linear dependence occurs when one equation in the system can be written as a linear combination of the others. This is ultimately what Gaussian elimination or computing the determinant reveals: the determinant is zero exactly when the system is dependent. In that case there is no unique solution to the system of equations; it has either no solution or infinitely many, depending on the right-hand sides. </p>
<p>Conversely, if the system of equations is linearly independent, then a unique solution does exist (though you still have to compute it, as is done in the examples in other answers). </p>
<p>This can be visualized by graphing the equations, assuming a low order for the system. In two unknowns, linear dependence means that two or more of the lines obtained when graphing the system are parallel or coincident (in three unknowns, the same happens with planes). Linear independence results in a graph in which the various lines intersect at one point, that point being the unique solution to the system of equations.</p>
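For the specific system in the question, the determinant test can be carried out exactly (my own Python sketch using Cramer's rule; `det3` is a helper name). The determinant is $55\neq 0$, so the equations are independent and the system has a unique solution:

```python
from fractions import Fraction

A = [[1, 3, -1], [4, -1, 2], [2, -1, -3]]
b = [-4, 3, 1]

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

D = det3(A)
assert D == 55  # nonzero determinant => exactly one solution

# Cramer's rule: replace column j by b and divide by D.
sol = []
for j in range(3):
    Aj = [row[:] for row in A]
    for i in range(3):
        Aj[i][j] = b[i]
    sol.append(Fraction(det3(Aj), D))

# Verify by substituting back into every equation.
for row, rhs in zip(A, b):
    assert sum(c * v for c, v in zip(row, sol)) == rhs
```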
|
129,890 |
<p>The code</p>
<pre><code>x0 = 0.25; T = 20; u1 = -0.03; u2 = 0.07; u3 = -0.04;
a = 1/100; t0 = 5; omega = 2;
a = 0.01; dis[x_] := a/(Pi (x^2 + a^2))
P[t_] := If[t <= t0, Sin[omega t], 0]
u[t_] := u1 HeavisideTheta[t - 0.8] +
u2 HeavisideTheta[t - 1.64] + u3 HeavisideTheta[t - 3.33]
pde = a D[w[x, t], {x, 4}] + D[w[x, t], {t, 2}] -
P[t] dis[x - x0];
sol = NDSolve[{pde == 0, w[0, t] == u[t], w[1, t] == 0,
Derivative[2, 0][w][0, t] == 0, Derivative[2, 0][w][1, t] == 0,
w[x, 0] == 0, Derivative[0, 1][w][x, 0] == 0},
w[x, t], {x, 0, 1}, {t, 0, 80}, Method -> "StiffnessSwitching"];
</code></pre>
<p>gives an error</p>
<pre><code>NDSolve::eerr: Warning: scaled local spatial error estimate of 246.5944594961422` at t = 80.` in the direction of independent variable x is much greater than the prescribed error tolerance. Grid spacing with 25 points may be too large to achieve the desired accuracy or precision. A singularity may have formed or a smaller grid spacing can be specified using the MaxStepSize or MinPoints method options.
</code></pre>
<p>I think it's because of the boundary conditions on derivatives. I have tried Automatic, MethodOfLines, etc.; it does not help. I tried</p>
<pre><code>Method -> {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid",
"MinPoints" -> 100}}
</code></pre>
<p>It works fine with a second-order equation subject to boundary conditions containing only a first-order derivative. Any thoughts or hints?</p>
|
xzczd
| 1,871 |
<p><code>eerr</code> is a warning, not an error: it just suggests the possibility of trouble and doesn't always mean the output you obtained is wrong. Indeed, the solution given by <code>NDSolve</code> with the default setting seems to be erroneous, but according to my test, with a spatial grid <strong>dense enough</strong> e.g. <code>50</code> (BTW it seems to be better to use an <strong>even</strong> number), <code>NDSolve</code> won't give too bad a solution, even if the warning is still there:</p>
<pre><code>appro = With[{k = 1000}, ArcTan[k #]/Pi + 1/2 &];
unitStepExpand = Simplify`PWToUnitStep@PiecewiseExpand@# &;
x0 = 25/100; T = 20; u1 = -3/100; u2 = 7/100; u3 = -4/100;
a = 1/100; t0 = 5; omega = 2;
a = 1/100; dis[x_] := a/(Pi (x^2 + a^2))
P[t_] = unitStepExpand@If[t <= t0, Sin[omega t], 0];
u[t_] = u1 HeavisideTheta[t - 8/10] + u2 HeavisideTheta[t - 164/100] +
u3 HeavisideTheta[t - 333/100] /. HeavisideTheta -> UnitStep;
pde = a D[w[x, t], {x, 4}] + D[w[x, t], {t, 2}] - P[t] dis[x - x0];
(* I decrease the value of tend
because the solution seems to come to steady state long before 80. *)
tend = 10;
mol[n_Integer, o_:"Pseudospectral"] := {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> n,
"MinPoints" -> n, "DifferenceOrder" -> o}}
fsol = NDSolveValue[{pde == 0, w[0, t] == u[t], w[1, t] == 0,
Derivative[2, 0][w][0, t] == 0, Derivative[2, 0][w][1, t] == 0, w[x, 0] == 0,
Derivative[0, 1][w][x, 0] == 0} /. UnitStep -> appro, w, {x, 0, 1}, {t, 0, tend},
MaxSteps -> Infinity, Method -> mol[50, 4]]; // AbsoluteTiming
Plot[fsol[0, t], {t, 0, tend}, PlotRange -> .05]
Plot3D[fsol[x, t], {x, 0, 1}, {t, 0, tend}, PlotRange -> 2, PlotPoints -> 40,
ColorFunction -> "AvocadoColors", Lighting -> {{"Ambient", White}}]
Manipulate[Plot[fsol[x, t], {x, 0, 1}, PlotRange -> 2], {t, 0, tend}]
</code></pre>
<p><img src="https://i.stack.imgur.com/IQVJF.png" alt="Mathematica graphics">
<img src="https://i.stack.imgur.com/eTNxR.png" alt="Mathematica graphics">
<a href="https://i.stack.imgur.com/WobIl.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WobIl.gif" alt="enter image description here"></a></p>
<p>I modified the parameters with the <code>appro</code> and <code>unitStepExpand</code> because of the reason mentioned <a href="https://mathematica.stackexchange.com/a/129280/1871">here</a>.</p>
<p>Well, I admit there still exists slight yet suspicious oscillation in the solution, and obtaining a highly accurate numerical solution for the problem seems to be pending, so let's wait and see if someone will come up with a better approach.</p>
|
332,583 |
<p>I'm a high school student, and I have to write a 4000-word research paper on mathematics (as part of the IB Diploma Programme). Among my potential topics were cellular automata and the Boolean satisfiability problem, but then I thought that maybe there was a connection between the two. Variables in Boolean expressions can be True or False; cells in cellular automata can be "On" or "Off". Also, the state of some variables in Boolean expressions can depend on that of other variables (e.g. the output of a Boolean function), while the state of cells in cellular automata depend on that of its neighbors.</p>
<p>Would it be possible to use cellular automata to solve a satisfiability instance? If so, how, and where can I find helpful/relevant information?</p>
<p>Thanks in advance!</p>
|
Robert Israel
| 8,508 |
<p>If you could find an efficient way to solve SAT, you'd become very rich and famous. That's not likely to happen when you're still in high school. What you might be able to do, though, is get your cellular automaton to go through all possible values of the variables, and check the value of the Boolean expression for each one. </p>
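<p>The "go through all possible values" suggestion is easy to prototype directly (a Python sketch; the function and variable names are mine, not from any SAT library). This is exactly the exponential sweep a cellular automaton enumerating states would perform:</p>

```python
from itertools import product

def brute_force_sat(formula, variables):
    """Try every truth assignment and return a satisfying one, or None."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# Toy CNF instance: (x OR y) AND (NOT x OR z)
f = lambda a: (a["x"] or a["y"]) and (not a["x"] or a["z"])
print(brute_force_sat(f, ["x", "y", "z"]))
```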
|
332,583 |
<p>I'm a high school student, and I have to write a 4000-word research paper on mathematics (as part of the IB Diploma Programme). Among my potential topics were cellular automata and the Boolean satisfiability problem, but then I thought that maybe there was a connection between the two. Variables in Boolean expressions can be True or False; cells in cellular automata can be "On" or "Off". Also, the state of some variables in Boolean expressions can depend on that of other variables (e.g. the output of a Boolean function), while the state of cells in cellular automata depend on that of its neighbors.</p>
<p>Would it be possible to use cellular automata to solve a satisfiability instance? If so, how, and where can I find helpful/relevant information?</p>
<p>Thanks in advance!</p>
|
Sonia
| 8,415 |
<p>The simple answer is yes. Can you do it in a simple way? Probably not. Matthew Cook showed (in work published in 2004) that you can indeed compute anything a Turing machine can on elementary cellular automata, using Rule 110 and careful selection of start conditions. There is probably no simple rule or set of start conditions to do this, though.</p>
<p>See <a href="http://blog.barvinograd.com/2012/11/cyclic-tag-system-1-line-of-turing-complete-code/" rel="nofollow">http://blog.barvinograd.com/2012/11/cyclic-tag-system-1-line-of-turing-complete-code/</a> for more details.</p>
|
119,506 |
<p>Let $\kappa$ be a singular cardinal, and let $\langle \kappa_i \mid i<\mathrm{cf}(\kappa) \rangle$ be an increasing sequence of regular cardinals cofinal in $\kappa$. Recall that a scale on $\Pi_{i<\mathrm{cf}(\kappa)} \kappa_i$ is a sequence $\langle f_\alpha \mid \alpha < \kappa^+ \rangle$ such that:</p>
<ol>
<li>For every $\alpha < \kappa^+$, $f_\alpha \in \Pi_{i<\mathrm{cf}(\kappa)} \kappa_i$.</li>
<li>For every $\alpha < \beta < \kappa^+$, there is $i < \mathrm{cf}(\kappa)$ such that $f_\alpha <_i f_\beta$, i.e. for every $j\geq i$, $f_\alpha(j) < f_\beta(j)$.</li>
<li>For every $g\in \Pi_{i<\mathrm{cf}(\kappa)} \kappa_i$, there is $\alpha < \kappa^+$ and $i < \mathrm{cf}(\kappa)$ such that $g <_i f_\alpha$.</li>
</ol>
<p>Question: Is it consistent that there is a scale on $\Pi_{i<\mathrm{cf}(\kappa)} \kappa_i$ such that, for every $\beta < \kappa^+$ and every $i<\mathrm{cf}(\kappa)$,
$\left|{\{\alpha < \beta \mid f_\alpha <_i f_\beta\}}\right| < \kappa$ ?</p>
<p>My intuition is that the answer should be no, but I haven't been able to find a proof.</p>
|
Chris Lambie-Hanson
| 26,002 |
<p>I have a negative answer assuming some mild cardinal arithmetic assumptions. Namely, if $(\kappa_i)^i < \kappa$ for every $i<\mathrm{cf}(\kappa)$, then there can be no scale with the desired property. This is true, for example, whenever $\mathrm{cf}(\kappa) = \omega$ or $\kappa$ is strong limit. We also make the harmless assumption that $\mathrm{cf}(\kappa) < \kappa_0$.</p>
<p>Assume for sake of contradiction that $\langle f_\alpha \mid \alpha < \kappa^+ \rangle$ is such a scale. For $j<\mathrm{cf}(\kappa)$, define $g_j \in \Pi_{i<\mathrm{cf}(\kappa)}\kappa_i$ as follows: Using the fact that $(\kappa_j)^j < \kappa$, fix $B_j \subseteq \kappa^+$ and $f \in \Pi_{i\leq j}\kappa_i$ such that $\left|{B_j}\right|=\kappa_j$ and, for every $\alpha \in B_j$ and $i\leq j$, $f_\alpha(i)=f(i)$. For $i\leq j$, let $g_j(i)=f(i)+1$. For $i>j$, let $g_j(i)=\sup(\{f_\alpha(i)+1 \mid \alpha \in B_j \})$. Now define $g \in \Pi_{i<\mathrm{cf}(\kappa)}\kappa_i$ by letting $g(i)=\sup(\{g_j(i) \mid j<\mathrm{cf}(\kappa) \})$. Finally, find $\beta < \kappa^+$ and $i<\mathrm{cf}(\kappa)$ such that $g <_i f_\beta$. Letting $B = \bigcup_{j<\mathrm{cf}(\kappa)}B_j$, we have that $\left|{B}\right| = \kappa$ and $f_\alpha <_i f_\beta$ for every $\alpha \in B$. Contradiction.</p>
|
987,620 |
<p>$P$ and $Q$ are two distinct prime numbers. How can I prove that $\sqrt{PQ}$ is an irrational number?</p>
|
Yiorgos S. Smyrlis
| 57,021 |
<p>If
<span class="math-container">$$
\sqrt{pq}=\frac{m}{n}, \quad (m,n)=1,
$$</span>
then
<span class="math-container">$$
n^2pq=m^2, \tag{$\star$}
$$</span>
which means that <span class="math-container">$p\mid m^2$</span> and hence <span class="math-container">$p\mid m$</span>. Thus <span class="math-container">$m=pm_1$</span>, and <span class="math-container">$(\star)$</span> becomes
<span class="math-container">$$
n^2q=pm_1^2.
$$</span>
But this means that <span class="math-container">$p\mid qn^2$</span>, and as <span class="math-container">$p\ne q$</span> and hence <span class="math-container">$p\not\mid q$</span>, then <span class="math-container">$p\mid n^2$</span>, and thus <span class="math-container">$p\mid n$</span>. Therefore, <span class="math-container">$n=pn_1$</span>.</p>
<p>This is a contradiction, since <span class="math-container">$p\mid m$</span> and <span class="math-container">$p\mid n$</span>, and we had assumed that <span class="math-container">$(m,n)=1$</span>.</p>
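<p>The statement is also easy to sanity-check empirically (a Python sketch using exact integer square roots, so there is no floating-point doubt):</p>

```python
from math import isqrt

def is_square(n):
    r = isqrt(n)
    return r * r == n

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
for i, p in enumerate(primes):
    for q in primes[i + 1:]:
        assert not is_square(p * q)
print("pq is never a perfect square for distinct primes p, q below 50")
```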
|
3,856,370 |
<p>This is a result from the book (<em>Discrete Mathematics and Its Applications</em>) I was reading.</p>
<ol>
<li><span class="math-container">$n^d\in O(b^n)$</span></li>
</ol>
<p>where <span class="math-container">$b>1$</span> and <span class="math-container">$d$</span> is positive</p>
<p>and</p>
<ol start="2">
<li><span class="math-container">$(\log_b(n))^c\in O(n^d)$</span></li>
</ol>
<p>where <span class="math-container">$b>1$</span> and <span class="math-container">$c,d$</span> are positive.</p>
<p>But I am having trouble understanding how that's possible.
For example, in the second, if you take <span class="math-container">$c$</span> to be some big number like <span class="math-container">$10^{100}$</span> and <span class="math-container">$d$</span> to be <span class="math-container">$10^{-100}$</span>, both are positive,
and <span class="math-container">$b$</span> can be <span class="math-container">$1.0000001$</span>; how can we then find a <span class="math-container">$C$</span> and <span class="math-container">$k$</span> witnessing the big-O relation?</p>
<p>And in the first one, if we compare the functions by taking logs on both sides (ignoring the d here, as we can always multiply by it on the other side),</p>
<p>we will have</p>
<p><span class="math-container">$\log(n)$</span> and <span class="math-container">$n\cdot \log(b)$</span></p>
<p>now which one of them is bigger, as <span class="math-container">$b$</span> can be some number like <span class="math-container">$1.00000000001$</span>?</p>
<p>can some one give me proof of these two results. Thanks</p>
<p>Edit: in the first one it's not <span class="math-container">$d\cdot n$</span> but <span class="math-container">$n^d$</span>.</p>
<p>Update: I found the same question from another user: <a href="https://math.stackexchange.com/questions/2687525/how-to-prove-that-nd-is-obn-from-n-is-o2n-given-that-d0-b1">identical question</a>.</p>
<p>Its answer cleared things up a little for me, but it still left me confused: according to the OP's accepted answer, <span class="math-container">$d<\log_2(b)$</span>, where <span class="math-container">$\log_2(b)>0$</span>.</p>
<p>but the result from the book is</p>
<blockquote>
<p><span class="math-container">$n^d\in O(b^n)$</span></p>
</blockquote>
<blockquote>
<p>This tells us that every power of n is big-O of every exponential function of n with a base
that is greater than one</p>
</blockquote>
<p>There is no such constraint for <span class="math-container">$d$</span> (as the statement says for every power of n)</p>
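<p>The worry about bases barely above $1$ can be probed numerically by comparing logarithms: $n^d \le b^n$ is equivalent to $d\ln n \le n\ln b$, and the linear term $n\ln b$ wins for all large $n$ no matter how small $\ln b > 0$ is. A quick Python sketch with deliberately extreme-looking constants:</p>

```python
from math import log

b, d = 1.01, 10.0   # base barely above 1, large power

def log_power(n):        # log of n**d
    return d * log(n)

def log_exponential(n):  # log of b**n
    return n * log(b)

# For moderate n the power function is still far ahead...
assert log_power(100) > log_exponential(100)
# ...but the exponential overtakes it for large enough n, which is
# all that n**d = O(b**n) requires (C and k absorb the early range).
assert log_exponential(20_000) > log_power(20_000)
print("b**n eventually dominates n**d even for b = 1.01, d = 10")
```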
|
J.G.
| 56,861 |
<p>By a <a href="https://en.wikipedia.org/wiki/Schwinger_parametrization" rel="nofollow noreferrer">Schwinger parametrization</a>, this integral is<span class="math-container">$$\begin{align}\frac{1}{4i}\int_0^\infty dx\int_0^\infty dy(e^{ix}-e^{-ix}-2ix)y^2e^{-xy}&=\frac{1}{4i}\int_0^\infty dy\left(\frac{1}{y-i}-\frac{1}{y+i}-\frac{2i}{y^2}\right)y^2\\&=-\frac12\int_0^\infty\frac{dy}{y^2+1}\\&=-\frac{\pi}{4}.\end{align}$$</span><a href="https://www.wolframalpha.com/input/?i=integrate+%28sin+x-x%29%2Fx%5E3+from+0+to+infinity&dataset=" rel="nofollow noreferrer">WA agrees</a>.</p>
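<p>The value $-\pi/4$ can be corroborated numerically (a pure-Python sketch: composite Simpson's rule on $[0,T]$, a Taylor-series patch near $0$ to avoid catastrophic cancellation in $\sin x - x$, and the exact $-1/T$ tail contribution of the $-x/x^3$ part):</p>

```python
import math

def integrand(x):
    # Series for (sin x - x)/x**3 near 0; direct evaluation cancels badly.
    if x < 1e-3:
        return -1.0 / 6.0 + x * x / 120.0 - x**4 / 5040.0
    return (math.sin(x) - x) / x**3

def simpson(f, a, b, n):  # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

T = 200.0
# Tail beyond T: the -x/x**3 part contributes exactly -1/T,
# and the sin(x)/x**3 part is bounded by 1/(2*T**2).
value = simpson(integrand, 0.0, T, 200_000) - 1.0 / T
print(value)  # close to -pi/4 = -0.7853981...
```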
|
1,422,990 |
<p>How to show that $(2^n-1)^{1/n}$ is irrational for all integer $n\ge 2$?</p>
<p>If $(2^n-1)^{1/n}=q\in\Bbb Q$ then $q^n=2^n-1$ which doesn't seem right, but I don't get how to prove it.</p>
|
Hagen von Eitzen
| 39,174 |
<p>If the $n$th root of an integer is rational, then it is in fact an integer (any prime occurring in the denominator of $\frac xy$ occurs also in the denominator of $\frac{x^n}{y^n}$). As $1^n<2^n-1<2^n$, this is not possible.</p>
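<p>The same conclusion can be checked by machine for small exponents (a Python sketch; a float gives a candidate root and exact integer arithmetic confirms that no nearby integer works):</p>

```python
def has_integer_nth_root(m, n):
    k = round(m ** (1.0 / n))          # floating-point candidate
    return any((k + d) ** n == m for d in (-1, 0, 1) if k + d >= 0)

for n in range(2, 61):
    m = 2**n - 1
    assert 1 < m < 2**n                # 1**n < 2**n - 1 < 2**n
    assert not has_integer_nth_root(m, n)
print("2**n - 1 is never a perfect n-th power for 2 <= n <= 60")
```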
|
1,892 |
<p>Although the question of whether $$ P = NP $$ is important from a theoretical computer science point of view, I fail to see any practical implication of it.</p>
<p>Suppose that we could prove that all problems whose solutions can be verified in polynomial time also have polynomial-time solutions; it would not help us find the actual solutions. Conversely, if we could prove that $$ P \ne NP,$$ this would not mean that our current NP-hard problems have no polynomial-time solutions. </p>
<p>From a practical point of view (practical in the sense that we can immediately use the solution in a real-world scenario), it shouldn't bother me whether P vs NP is proved or disproved any more than whether my <strong><em>current problem</em></strong> has a polynomial-time solution. </p>
<p>Am I right?</p>
|
Community
| -1 |
<p>There is an interesting heuristic to suggest that P is actually not NP. It is that, roughly, the task of finding a proof of a statement is an NP task, but that of verifying it is a P task. From our actual experience that verifying a proof is far easier than finding one, we can intuitively expect P != NP to hold true.</p>
<p>The practical application is that a result which would agree with our intuition would be a great thing, and psychologically satisfying moreover.</p>
|
2,995,408 |
<blockquote>
<p><span class="math-container">$$
\lim_{x\to 2^-}\frac{x(x-2)}{|(x+1)(x-2)|}=
\lim_{x\to 2^-}\left(\frac{x}{|x+1|}\cdot \frac{x-2}{|x-2|}\right)
$$</span></p>
</blockquote>
<p>So as the title says, is it okay to split a function under an absolute value like this (i.e., into a product of absolute values), as shown in the denominator?</p>
|
Yuval Gat
| 450,141 |
<p>Yes, <span class="math-container">$|ab|=|a||b|$</span> holds for all <span class="math-container">$a, b\in\mathbb{R}$</span>.</p>
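<p>And the factored form makes the one-sided limit easy to probe numerically; the second factor is just the sign of $x-2$ (a quick sketch):</p>

```python
f = lambda x: x * (x - 2) / abs((x + 1) * (x - 2))

# From the left of 2: (x-2)/|x-2| = -1 and x/|x+1| -> 2/3, so f -> -2/3.
print(f(2 - 1e-8))   # approximately -2/3
print(f(2 + 1e-8))   # approximately +2/3: the two one-sided limits differ
```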
|
2,838,312 |
<p>What is the Fourier transform of $\mathrm{e}^{ik|x|}$? Here, $k > 0$ is real.</p>
<p>I use the definition $$ F(\omega) = \int_{-\infty}^\infty \mathrm{e}^{-i\omega x} f(x) \mathrm{d}x.$$</p>
<p>Thanks!</p>
|
mathworker21
| 366,088 |
<p>If $|z| = r$, then, using that $|a_i| \le r-1$ for each $i$, $$|a_{n-1}z^{n-1}+\dots+a_1z+a_0| \le (r-1)r^{n-1}+(r-1)r^{n-2}+\dots+(r-1)r+(r-1) = r^n-1 < |z^n|$$ So by Rouché, we see that the number of zeroes of $p$ in $\Delta_r(0)$ is the same as the number of zeroes of $z^n$ in $\Delta_r(0)$, namely $n$.</p>
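<p>The key inequality on $|z| = r$ can be spot-checked numerically for a sample polynomial (a sketch; the coefficients below are an arbitrary illustrative choice with $|a_i| \le r-1$):</p>

```python
import cmath

r = 3.0
coeffs = [2.0, -1.5, 2.0, -2.0, 1.0]   # a_0 .. a_4, all |a_i| <= r - 1
n = len(coeffs)

for k in range(2000):
    z = r * cmath.exp(2j * cmath.pi * k / 2000)
    lower = sum(a * z**i for i, a in enumerate(coeffs))
    # |a_{n-1} z^{n-1} + ... + a_0| <= r**n - 1 < |z**n| on |z| = r
    assert abs(lower) <= r**n - 1 + 1e-9
    assert abs(lower) < abs(z**n)
print("Rouche hypothesis verified on 2000 sample points of |z| = 3")
```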
|
2,178,714 |
<p>Let $ f: (-1,1) \rightarrow \mathbb{R}$ be a bounded and continuous function. Prove that the function $ g(x)=(x^{2}-1)f(x) $ is uniformly continuous on $ (-1,1)$. My initial approach: since $f$ is bounded on $(-1,1)$, there is a positive $M \in \mathbb{R}$ such that </p>
<p>$$\forall x \in (-1,1)\,|f(x)| \leq M$$</p>
<p>Now, $ |g(x)-g(y)|=|(x^{2}-1)f(x)-(y^{2}-1)f(y)|=|x^{2}f(x)-y^{2}f(y)-(f(x)-f(y))|$</p>
|
edm
| 356,114 |
<p>Define a new function on a larger domain $G:[-1,1]\to \Bbb R$ by $$G(x):=\begin{cases}
g(x) &\text{if $x\in(-1,1)$}\\
0 &\text{if $x=1$ or $-1$}
\end{cases}.$$
You can check that $G$ is continuous (specifically, you just need to check this at $-1$ and $1$). This follows from boundedness of $f$ and you apply squeeze theorem.</p>
<p>Now $G$ is defined on a compact set and is continuous, and hence uniformly continuous, while $g$ is the restriction of a uniformly continuous function to a smaller domain. So $g$ is uniformly continuous.</p>
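<p>The squeeze step at the endpoints is the crux, and it is easy to see numerically: $|G(x)| \le M\,|x^2-1| \to 0$ even when $f$ itself has no limit there. A sketch with a deliberately wild bounded $f$:</p>

```python
import math

f = lambda x: math.sin(1.0 / (1.0 - x))   # bounded, oscillates wildly near 1
g = lambda x: (x * x - 1.0) * f(x)

# |g(x)| <= |x**2 - 1| * 1, so g -> 0 at the endpoint despite f's oscillation.
for eps in (1e-3, 1e-6, 1e-9):
    assert abs(g(1.0 - eps)) < 3.0 * eps
print(g(1.0 - 1e-6))  # tiny
```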
|
3,400,123 |
<blockquote>
<p>Proof that <span class="math-container">$$\sum_{l=1}^{\infty} \frac{\sin((2l-1)x)}{2l-1}
=\frac{\pi}{4}$$</span> when <span class="math-container">$0<x<\pi$</span></p>
</blockquote>
<p>The chapter we are working on is about Fourier series, so I guess I'd need to use that some how.</p>
<p>My idea was to use
<span class="math-container">$$\sum_{l\in \mathbb Z}c_l e^{i(2l-1)x}= a_0+\sum_{l\geq 1}a_1\cos(lx)+\sum_{l\geq 1} b_l\sin((2l-1)x)$$</span></p>
<p>Where <span class="math-container">$a_0$</span> and <span class="math-container">$a_l$</span> would be <span class="math-container">$0$</span>, <span class="math-container">$b_l= \frac{1}{2l-1}$</span>.</p>
<p>This would give us <span class="math-container">$c_l = \frac{1}{4il-2i}$</span>. I don't know how knowing that </p>
<p><span class="math-container">$$\sum_{l=1}^{\infty} \frac{\sin((2l-1)x)}{2l-1}= \sum_{l\in \mathbb Z} \frac{1}{4il-2i} e^{i(2l-1)x}$$</span> would help here though.</p>
<p>Am I even on the right track here?
Any hints are much appreciated, thanks in advance :)</p>
|
Jack D'Aurizio
| 44,121 |
<p>For short:
<span class="math-container">$$\sum_{k\geq 0}\frac{\sin((2k+1)x)}{2k+1}=\text{Im}\!\sum_{k\geq 0}\frac{(e^{ix})^{2k+1}}{2k+1}=\text{Im}\,\text{arctanh}(e^{ix})=\frac{1}{2}\text{Im}\log\left(\frac{1+e^{ix}}{1-e^{ix}}\right)=\frac{1}{2}\text{Arg}\left(i\cot\frac{x}{2}\right)=\frac{\pi}{4}. $$</span></p>
<p>There are a couple of subtleties, related to the evaluation of a power series at a point on the boundary of its disk of convergence, and to the determination of a complex logarithm, but I guess you can figure out the details easily.</p>
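<p>As a numerical sanity check, the partial sums do settle at $\pi/4$ for any fixed $x\in(0,\pi)$ (a sketch; convergence is only $O(1/N)$, so many terms are used):</p>

```python
import math

def partial_sum(x, n_terms):
    return sum(math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms))

s = partial_sum(1.0, 100_000)
print(s)  # close to pi/4 = 0.7853981...
```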
|
615,093 |
<p>How to prove the following sequence converges to $0.5$ ?
$$a_n=\int_0^1{nx^{n-1}\over 1+x}dx$$
What I have tried:
I calculated the integral $$a_n=1-n\left(-1\right)^n\left[\ln2-\sum_{i=1}^n {\left(-1\right)^{i+1}\over i}\right]$$
I also noticed ${1\over2}<a_n<1$ $\forall n \in \mathbb{N}$.</p>
<p>Then I wrote a C program and verified that $a_n\to 0.5$ (I didn't know the answer before) by calculating $a_n$ up to $n=9990002$ (starting from $n=2$ and each time increasing $n$ by $10^4$). I can't think of how to prove $\{a_n\}$ is monotone decreasing, which is clear from direct calculation.</p>
|
Felix Marin
| 85,343 |
<p>When $n \gg 1$ the main contribution to the integral comes from $x \sim 1$. Then, we set the change of variables $x = 1 -\epsilon$:
\begin{align}
a_{n} &= \int_{0}^{1}{nx^{n - 1} \over 1 + x}\,{\rm d}x
=
{1 \over 2}\,n\int_{0}^{1}{\left(1 - \epsilon\right)^{n - 1} \over 1 - \epsilon/2}\,{\rm d}\epsilon
\\[3mm]&=
{1 \over 2}\,n\int_{0}^{1}\exp\left(\left[n - 1\right]\ln\left(1 - \epsilon\right) - \ln\left(1 - {\epsilon \over 2}\right)\right)\,{\rm d}\epsilon\quad
{\Large\stackrel{n \gg 1}{\sim}}\quad
{1 \over 2}\,n\int_{0}^{\infty}\exp\left(-\left[n - {3 \over 2}\right]\epsilon\right)\,{\rm d}\epsilon
\\[3mm]&=
{1 \over 2}\,{n \over n - 3/2}\quad \stackrel{n \to \infty}{\longrightarrow} \quad {1 \over 2}
\end{align}</p>
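<p>The Laplace-type asymptotics above agree with direct numerical quadrature (a Simpson-rule sketch in Python):</p>

```python
def a(n, steps=20_000):
    # Composite Simpson's rule for a_n = int_0^1 n x**(n-1)/(1+x) dx.
    h = 1.0 / steps
    f = lambda x: n * x**(n - 1) / (1.0 + x)
    s = f(0.0) + f(1.0)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

print(a(10), a(100), a(1000))  # decreasing toward 1/2, roughly 1/2 + 1/(4n)
```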
|
615,093 |
<p>How to prove the following sequence converges to $0.5$ ?
$$a_n=\int_0^1{nx^{n-1}\over 1+x}dx$$
What I have tried:
I calculated the integral $$a_n=1-n\left(-1\right)^n\left[\ln2-\sum_{i=1}^n {\left(-1\right)^{i+1}\over i}\right]$$
I also noticed ${1\over2}<a_n<1$ $\forall n \in \mathbb{N}$.</p>
<p>Then I wrote a C program and verified that $a_n\to 0.5$ (I didn't know the answer before) by calculating $a_n$ up to $n=9990002$ (starting from $n=2$ and each time increasing $n$ by $10^4$). I can't think of how to prove $\{a_n\}$ is monotone decreasing, which is clear from direct calculation.</p>
|
Claude Leibovici
| 82,404 |
<p>There is another way of addressing the problem (but it could be totally out of scope, depending on what you are supposed to use for this). </p>
<p>It can be established that
$$a_n = \frac{n}{2}\left(\psi\left(\frac{1+n}{2}\right)-\psi\left(\frac{n}{2}\right)\right),$$
where $\psi$ is the digamma function (<code>PolyGamma[0, z]</code> in Mathematica). Now, a Taylor expansion of $a_n$ built around infinity gives the approximation
$$\frac{1}{2} + \frac{1}{4 n} - \frac{1}{8 n^3}.$$</p>
|
308,565 |
<p>Suppose I have a one parameter flat family of complex surfaces (regular, of general type) whose general fibre is smooth. Is it possible for the central fibre to have singularities which are not canonical? If so, how bad can they be? </p>
|
Miguel González
| 11,528 |
<p>The central fiber can even be everywhere non-reduced. This can happen when you take the central fiber of the global image of a global map whose general fiber map is the canonical embedding of a surface of general type with very ample canonical map, but whose central fiber map is the canonical map of a surface of general type whose canonical map is a double covering onto some rational surface.</p>
|
80,432 |
<p>I have a question that I've been wondering about the past day or so while trying to relate measure theory back to some general topology.</p>
<p>Is it ever possible for some family of (open) sets in $\mathbb{R}^2$ to be both a base for the usual topology on $\mathbb{R}^2$, as well as a semiring?</p>
<p>To avoid confusion, I mean a semiring of sets. So by semiring, I mean a collection of subsets of $\mathbb{R}^2$ which has $\emptyset$ as an element, is closed under finite intersections, and for any $A,B$ in the semiring, $A\setminus B=\bigcup_{i=1}^n C_i$ for disjoint $C_i$ in the semiring.</p>
|
Henno Brandsma
| 4,280 |
<p>In a non-trivial $T_1$ connected, locally connected space this can never happen, I think: as soon as we can find non-empty proper connected open subsets $U \subset V$ in a base/semiring, where the inclusion is proper, then $V \setminus U$ cannot be open (so in particular cannot be a disjoint union of base elements) because it's already closed in $V$. But a base could consist of non-connected subsets, so the argument is not quite clear. But I think something like this should be provable.</p>
<p>The situation can occur for other topologies, like the discrete one, or more interestingly, the Sorgenfrey plane (so $\mathbb{R}^2$ with sets of the form $[a,b) \times [c,d) $ as base),
which has the standard semiring for the Lebesgue measure as a topological base.</p>
<p>Note that every semiring of sets is a base for <strong>some</strong> topology (as it's closed under intersections).</p>
|
80,432 |
<p>I have a question that I've been wondering about the past day or so while trying to relate measure theory back to some general topology.</p>
<p>Is it ever possible for some family of (open) sets in $\mathbb{R}^2$ to be both a base for the usual topology on $\mathbb{R}^2$, as well as a semiring?</p>
<p>To avoid confusion, I mean a semiring of sets. So by semiring, I mean a collection of subsets of $\mathbb{R}^2$ which has $\emptyset$ as an element, is closed under finite intersections, and for any $A,B$ in the semiring, $A\setminus B=\bigcup_{i=1}^n C_i$ for disjoint $C_i$ in the semiring.</p>
|
Arturo Magidin
| 742 |
<p>I believe here's a proof for $\mathbb{R}^2$, but I'm not positive how general this can be; as Henno Brandsma mentions, connectivity seems to be key.</p>
<p>Assume you have a base for $\mathbb{R}^2$ that is closed under finite intersections. Let $A$ be a nonempty basic open set, and let $B$ be a connected component of $A$. Let $C$ be a nonempty open set of the basis that is properly contained in $B$ (always possible in $\mathbb{R}^2$). Since $\mathbb{R}^2$ is equal to the disjoint union of $C$, $\partial C$, and the interior of the complement of $C$, and $B$ is connected, then $B$ must intersect $\partial C$. Let $x\in B\cap\partial C$. Then $x\in A-C$; but there can be no open set that contains $x$ and is contained in $A-C$, since every open neighborhood of $x$ intersects $C$ nontrivially. Therefore, $A-C$ cannot be open, which shows that the base cannot be a semiring.</p>
|
153,426 |
<p>Let $r=a/b$ be a rational number in lowest terms, larger than $1$,
and not an integer (so $b > 1$).</p>
<blockquote>
<p><strong>Q</strong>. Does the sequence
$$ \lfloor r \rfloor, \lfloor r^2 \rfloor, \lfloor r^3 \rfloor,
\ldots, \lfloor r^n \rfloor, \ldots$$
always contain an infinite number of primes?</p>
</blockquote>
<p>For example, for $r=13/4$,
$$(r,r^2,r^3,r^4,r^5, \ldots) = (3.25, 10.56, 34.33, 111.57, 362.59, \ldots)$$
and so the floors are
$$(3, 10, 34, 111, 362, \ldots)$$
The next prime after $3$ occurs at
$$\lfloor r^{34} \rfloor = 253532870351270971$$
and I only find four primes up to $\lfloor r^{1000} \rfloor$.</p>
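<p>The numbers quoted above can be reproduced with exact rational arithmetic plus a deterministic Miller–Rabin primality test (a Python sketch; the witness set used is known to be deterministic for $n < 3.3\cdot 10^{24}$):</p>

```python
from fractions import Fraction

def is_prime(n):
    # Deterministic Miller-Rabin for n < 3.3e24 with these witnesses.
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

r = Fraction(13, 4)
floors = [int(r**n) for n in range(1, 41)]  # int() floors positive fractions
print(floors[:5])                           # [3, 10, 34, 111, 362]
print([n for n, v in enumerate(floors, 1) if is_prime(v)][:2])  # [1, 34]
```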
|
Gerry Myerson
| 3,684 |
<p>The question is much too hard. Forman and Shapiro proved that $[r^n]$ is <em>composite</em> infinitely often for $r=3/2$ and for $r=4/3$. Dubickas and his students have found a few more results along these lines. </p>
<p>EDIT: Here is a <a href="http://vddb.laba.lt/fedora/get/LT-eLABa-0001%3aE.02~2012~D_20121017_111805-09669/DS.005.1.01.ETD">link</a> to the thesis of Novikas, written under the supervision of Dubickas, reviewing and extending the work of Forman and Shapiro. </p>
|
153,426 |
<p>Let $r=a/b$ be a rational number in lowest terms, larger than $1$,
and not an integer (so $b > 1$).</p>
<blockquote>
<p><strong>Q</strong>. Does the sequence
$$ \lfloor r \rfloor, \lfloor r^2 \rfloor, \lfloor r^3 \rfloor,
\ldots, \lfloor r^n \rfloor, \ldots$$
always contain an infinite number of primes?</p>
</blockquote>
<p>For example, for $r=13/4$,
$$(r,r^2,r^3,r^4,r^5, \ldots) = (3.25, 10.56, 34.33, 111.57, 362.59, \ldots)$$
and so the floors are
$$(3, 10, 34, 111, 362, \ldots)$$
The next prime after $3$ occurs at
$$\lfloor r^{34} \rfloor = 253532870351270971$$
and I only find four primes up to $\lfloor r^{1000} \rfloor$.</p>
|
Aaron Meyerowitz
| 8,008 |
<p>I recently asked a <a href="https://mathoverflow.net/questions/153721">related question</a> (inspired by this one!) . I accepted an answer but also provided <a href="https://mathoverflow.net/questions/153721/floors-of-powers-of-reals-how-much-do-the-first-few-determine-the-next/153923#153923">an answer</a> of my own. That answer illustrates the fact that for <em>real</em> $r$ one can arrange to have $\lfloor r^n \rfloor$ all even. One can pick the successive terms of $\lfloor r \rfloor, \lfloor r^2 \rfloor, \lfloor r^3 \rfloor,
\ldots, \lfloor r^n \rfloor, \ldots$ fairly freely. By this I mean being slightly vague (or uncommitted) about the value of $r$ and specifying it by providing the sequence of floor values one term at a time. It is easy to preserve freedom of choice enough to have about $\lfloor r \rfloor$ consistent choices at each stage (the smallest of which should be rejected in order to preserve the freedom for the following stage). </p>
<p>Let me relate it to other similar problems and implicitly suggest that for almost all non-integer rationals greater than $1$ there will be infinitely many prime floors. That we might not be able to prove it true for even one rational and that there may not be even one counter-example (at least that we can discover). </p>
<p>Given a positive real $r$ and a real "base" $b \gt 1$ define the base $b$ expansion of $r$ to be the integer sequence $r_b=(x_0,x_1,x_2,\cdots)$ with $x_k=\lfloor rb^k \rfloor$. So this particular integer sequence has $\lim\frac{x_{k+1}}{x_k}=b.$</p>
<ul>
<li><p>Does the sequence $\pi_{10}=3,31,314,3141,\cdots$ contain infinitely many primes? One would expect so and even that there are about $\frac{\ln{N}}{\ln{10}}$ up to $n=N$ (in an appropriate sense), but one also does not expect to see a proof.</p></li>
<li><p>What about the same problem with $b=3$ or $b=4$? Same mutatis-mutandis. </p></li>
<li><p>what about $r_{10}=\lfloor 10^n r \rfloor$ for other reals $r?$ Well, we can pick the decimal expansion to avoid primes but otherwise we might predict infinitely many. The case $r=1/9$ is the question "Are there infinitely many prime <a href="http://en.wikipedia.org/wiki/Repunit" rel="nofollow noreferrer">repunits</a>?" We know that we can only get a prime for $n$ prime, but might guess that something like $\frac{\ln{\ln{N}}}{\ln{10}}$ is the correct growth rate. We could switch this to questions about Mersenne primes by using the non-terminating binary expansion of $c=1$ (or $c=1/2$).</p></li>
<li><p>What about $\pi_{\pi}=\lfloor \pi^n \pi \rfloor$ in place of $\lfloor 3^n \pi \rfloor$? Either is a sequence of integers each roughly triple the one before and nothing much should be different.</p></li>
<li><p>The <strong>main point</strong> of my answer to my question is that we can build up a feasible integer sequence $\lfloor r^n \rfloor$ or $\lfloor c r^n \rfloor$ term by term somewhat like a decimal expansion, eventually typically having about $r$ choices for the next term. At times the next choice can be forced by previous ones but that does not persist. So we can avoid primes if we build $r$ by starting with the expansion. There are specific reals $r$ with special behavior (see the rest of this answer) but no reason to expect that any non-integer rationals are among them.</p></li>
<li><p>In special cases like $r=\sqrt[3]{7}$ we know that every third term of $r_r$ is non-prime, but that still suggests infinitely many prime floors, just at a slower growth rate.</p></li>
<li><p>There are certain algebraic integers (Pisot–Vijayaraghavan numbers) $r$ where the distance from $r^n$ to the nearest integer goes to $0$ exponentially fast. Then there is a recurrence relation for $\lfloor r^n \rfloor$ and things can sometimes be cooked to make these all even (for example). With the right definitions ($\lceil rb^n \rceil$ for $r=\frac{1}{\sqrt{5}}, b=\frac{1+\sqrt{5}}2$ ) we get the question "are there infinitely many (odd) prime Fibonacci numbers?" As with repunits, only prime $n$ could give a prime here and the situation should be about the same.</p></li>
<li><p>Suppose that we look at possible sequences $\lceil r^n \rceil.$ Then every initial segment of the sequence of Fermat numbers $3,5,9,17\cdots$ is feasible and the nested intervals are specifying a real in $\cap (2,\sqrt[k]{2^k+1}].$ These intervals have empty intersection but $\cap [2,\sqrt[k]{2^k+1}]=\{{2\}}$. In this non-standard expansion only the terms with $n=2^j$ could possibly be prime. Heuristics suggest that we know all the Fermat primes (just $0 \le j \le 4$) but altering the heuristic arguments can (evidently) support a case that a positive proportion of choices $n=2^j$ give Fermat primes.</p></li>
</ul>
<p>In conclusion: Given a real constant $r$ and a base $b \in \mathbf{R}$, the integer sequence $\lfloor r b^n\rfloor$ gives the base $b$ expansion of $r$. This is a sequence of integers each roughly $b$ times the previous, and generally the previous choices give an interval of length $b$ for possible values of the next term. If we start with the expansion we can avoid primes, but a "random" constant $r\ $ <em>should</em> yield a predictable infinite number of primes subject to adjustments we feel that we understand. </p>
<p>Not that much is different if we use $b=r$. There are nameable reals with exceptional expansions $r_r$ (Integers and Pisot numbers), $r=\sqrt[j]{k}$ a root of an integer does certain special things when $j \mid n$. We have a lot of latitude to construct $r_r$, the "base $r$" expansion of $r$ to get an integer sequence enjoying certain properties, but there is no reason to think that this would lead to a rational number.</p>
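<p>As a quick empirical illustration of the heuristic above (my addition, not part of the original argument), one can scan $\lfloor (3/2)^n \rfloor$ for prime values using exact rational arithmetic:</p>

```python
from fractions import Fraction

def is_prime(m: int) -> bool:
    """Trial division; adequate for the small values scanned here."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

r = Fraction(3, 2)
# Exact floors of r^n via integer division of numerator by denominator.
floors = [(r ** n).numerator // (r ** n).denominator for n in range(1, 41)]
prime_floors = [m for m in floors if is_prime(m)]
print(prime_floors[:6])  # [2, 3, 5, 7, 11, 17]
```

<p>The choice $r=3/2$ is just one non-integer rational; the first few prime floors come from $n=2,\dots,7$, consistent with primes appearing readily in such sequences.</p>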
|
1,856,530 |
<blockquote>
<p>Prove that the product of five consecutive positive integers cannot be the square of an integer.</p>
</blockquote>
<p>I don't understand the book's argument below for why $24r-1$ and $24r+5$ can't be one of the five consecutive numbers. Are they saying that since $24-1$ and $24+5$ aren't perfect squares it can't be so? Also, the argument after that about how $24r+4$ is divisible by $6r+1$ and thus is a perfect square is unclear.</p>
<p><strong>Book's solution:</strong></p>
<p><a href="https://i.stack.imgur.com/cyKAA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cyKAA.png" alt="enter image description here"></a></p>
|
TonyK
| 1,508 |
<p>$24r-1$ and $24r+5$ are also divisible neither by $2$ nor by $3$. So they must also be coprime to the remaining four numbers, and thus must be squares.</p>
<p>But this is impossible, because we already know that $24r+1$ is a square, and two non-zero squares can't differ by $2$ or $4$.</p>
<p>For the second part: $6r+1$ is coprime to $24r,24r+1,24r+2$, and $24r+3$. So it must be a square. Hence $24r+4=4(6r+1)$ is a square. But then the two perfect squares $24r+1$ and $24r+4$ differ by $3$, and the only two squares differing by $3$ are $1$ and $4$. This forces $r=0$, which contradicts $r=k(3k\pm 1)/2$.</p>
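<p>The theorem can also be sanity-checked by brute force (my addition, not from the book): for each starting value $n$, the product $n(n+1)(n+2)(n+3)(n+4)$ should never be a perfect square.</p>

```python
from math import isqrt

def product_of_five(n: int) -> int:
    """Product of the five consecutive integers n, n+1, ..., n+4."""
    p = 1
    for k in range(n, n + 5):
        p *= k
    return p

# Verify that no product of five consecutive positive integers is a square.
for n in range(1, 10_000):
    p = product_of_five(n)
    assert isqrt(p) ** 2 != p, f"counterexample at n={n}"
print("no perfect squares found for n < 10000")
```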
|
3,523,205 |
<p>The given series of function is as follow</p>
<blockquote>
<p><span class="math-container">$$\sum_{n=1}^\infty x^{n-1}(1-x)^{2}$$</span>
prove that given series is uniformaly convergent on <span class="math-container">$[0,1]$</span></p>
</blockquote>
<p><strong>The solution I tried:</strong> The given series forms a <span class="math-container">$G.P.$</span> with common ratio <span class="math-container">$x \leq 1$</span> </p>
<p>i.e</p>
<blockquote>
<p><span class="math-container">$$(1-x)^2+x(1-x)^2+x^2(1-x)^2+...$$</span></p>
</blockquote>
<p>Now if I form the partial sum of <span class="math-container">$n$</span> terms, it will be
<span class="math-container">$$s_n=(1-x)^2 \frac{1-x^n}{1-x}$$</span>
<span class="math-container">$$\lim_{n\to \infty}s_n=\frac{(1-x)^2}{1-x}$$</span></p>
<p>after that we get <span class="math-container">$$s=(1-x)$$</span></p>
<p>Now what can I say about convergence?</p>
<p>We know that if the sequence of partial sums converges uniformly, then the series converges uniformly; but here <span class="math-container">$s$</span> is just a polynomial.</p>
<p>Please Help</p>
|
astro
| 587,159 |
<p>The answer above is correct. Other approach would be to show that, since <span class="math-container">$f$</span> is continuous and <span class="math-container">$f(1)=0$</span> then given <span class="math-container">$\varepsilon >0$</span> there exists <span class="math-container">$\delta >0$</span> such that for all <span class="math-container">$x \in (1-\delta, 1],\forall n \in \mathbb{N}: |f(x)|<\varepsilon $</span>. </p>
<p>Now, since <span class="math-container">$f$</span> converges uniformly to zero in <span class="math-container">$[0,1)$</span> so does in <span class="math-container">$[0,1-\frac{\delta}{2}]$</span>.</p>
<p>Combining both assertions you get uniform convergence of <span class="math-container">$f$</span> in <span class="math-container">$[0,1]$</span>.</p>
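<p>As a numerical illustration (my addition, not a proof), the sup-norm error $\sup_{x\in[0,1]}|s_n(x)-s(x)| = \max_{[0,1]}(1-x)x^n$ shrinks to $0$, which is exactly what uniform convergence requires. Here $s_n(x)=(1-x)(1-x^n)$ and $s(x)=1-x$:</p>

```python
def sup_error(n: int, grid: int = 10_001) -> float:
    """Maximum of |s_n(x) - s(x)| = (1 - x) * x**n over a fine grid on [0, 1]."""
    return max((1 - k / (grid - 1)) * (k / (grid - 1)) ** n
               for k in range(grid))

for n in (1, 10, 100, 1000):
    print(n, sup_error(n))
```

<p>The true maximum is $\frac{1}{n+1}\left(\frac{n}{n+1}\right)^n$, attained at $x=\frac{n}{n+1}$, which is of order $\frac{1}{e\,n}$ and so tends to $0$.</p>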
|
732,121 |
<p>I'm having trouble seeing why the bounds of integration used to calculate the marginal density of $X$ aren't $0 < y < \infty$.</p>
<p>Here's the problem:</p>
<p>$f(x,y) = \frac{1}{8}(y^2 + x^2)e^{-y}$ where $-y \leq x \leq y$, $0 < y < \infty$ </p>
<p>Find the marginal densities of $X$ and $Y$.</p>
<p>To find $f_Y(y)$, I simply integrated away the "x" component of the joint probability density function:</p>
<p>$f_Y(y) = \frac{1}{8}\int_{-y}^y (y^2 + x^2)e^{-y} \, dx = \frac{1}{6}y^3e^{-y}$</p>
<p>Then to find $f_X(x)$,</p>
<p>$f_X(x) = \frac{1}{8}\int_0^\infty (y^2 + x^2)e^{-y} \, dy = \frac{-(x^2-2)}{8}$</p>
<p>However, the solutions I have say that the marginal density of $X$ above is wrong. Instead, it says that $f_X(x)$ is</p>
<p>$f_X(x) = \frac{1}{8}\int_{|x|}^\infty (y^2 + x^2)e^{-y} \, dy = \frac{1}{4}e^{-|x|}(1+|x|)$</p>
<p>Unfortunately, there is no explanation as to why the lower bound is $|x|$. The only thing that stands out to me are the bounds of $x$: $-y \leq x \leq y$.</p>
<p>Any constructive input is appreciated.</p>
|
Brian Tung
| 224,454 |
<p>The constraints are $-y \leq x \leq y$ and $0 < y < \infty$. That first constraint is equivalent to</p>
<p>$$
|x| \leq y
$$</p>
<p>which, when combined with the second constraint, naturally yields</p>
<p>$$
|x| \leq y < \infty
$$</p>
<p>Whence the integration limits.</p>
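<p>As a sanity check (my addition), the corrected marginal $f_X(x)=\frac14 e^{-|x|}(1+|x|)$ integrates to $1$ over $\mathbb{R}$, as a density must; a crude trapezoid rule over a wide interval confirms this numerically:</p>

```python
from math import exp

def f_X(x: float) -> float:
    """Marginal density obtained with the lower limit |x|."""
    return 0.25 * exp(-abs(x)) * (1 + abs(x))

def trapezoid(f, a: float, b: float, n: int = 200_000) -> float:
    """Composite trapezoid rule on [a, b] with n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# The density decays like e^{-|x|}, so [-50, 50] captures essentially all mass.
print(round(trapezoid(f_X, -50.0, 50.0), 6))  # ≈ 1.0
```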
|
1,480,720 |
<p>How many times do you have to flip a coin such that the probability of getting $2$ heads in a row is at least $1/2$?</p>
<p>I tried using a Negative Binomial:
$P(X=2)=\binom{n-1}{r-1}p^r(1-p)^{n-r} \geq 1/2$ where $r = 2$ and $p = 1/4$. However, I don't get a value of $n$ that makes sense.</p>
<p>Thank You</p>
|
Birdman2246
| 271,207 |
<p>NOTE: This proof is <strong>incorrect</strong>. Please see the other proofs.</p>
<p>I'm no statistician, but I'll do my best to help you:</p>
<p>Consider some number of coin tosses $n$. With every $n$ coin tosses, there are $n-1$ opportunities to get heads two times in a row. Since with every coin toss there is a .5 chance of getting heads, there is a .25 chance of getting two heads in a row. Therefore, there is a .75 chance of not getting two heads in a row. Since the chance of getting heads two times in a row for $n$ trials is at least .5, the chance of not getting two heads in a row for $n$ trials is at most .5. Therefore, we get the equation $.75^{n-1} \leq .5$. Solving for the smallest value of $n$, we find that $n = 4$. Therefore, you would need to flip four coins in order to (likely) get two heads in a row. $QED$</p>
<p>Hope this helped. Please feel free to leave a reply if you need me to make something a little more clear.</p>
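<p>Since the argument above is flagged as incorrect, here is an exact brute-force enumeration (my addition) that finds the smallest $n$ directly; it happens to agree with the answer $n=4$:</p>

```python
from itertools import product

def prob_two_heads_in_a_row(n: int) -> float:
    """Exact probability that n fair coin flips contain 'HH', by enumeration."""
    hits = sum(1 for seq in product("HT", repeat=n) if "HH" in "".join(seq))
    return hits / 2 ** n

n = 1
while prob_two_heads_in_a_row(n) < 0.5:
    n += 1
print(n, prob_two_heads_in_a_row(n))  # 4 0.5
```

<p>With four flips, exactly $8$ of the $16$ equally likely sequences contain two heads in a row, so the probability is exactly $1/2$.</p>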
|
1,449,450 |
<p>Is there a way to prove which one of these is bigger? $e^{(a+b)}$ or $e^a + e^b$?</p>
<p>Thanks</p>
|
lab bhattacharjee
| 33,337 |
<p>$$xy>x+y\iff x(y-1)>y$$</p>
<p>If $x,y>0,$</p>
<p>If $y-1>0, x(y-1)>y\iff x>\dfrac y{y-1}$</p>
<p>Else $x(y-1)>y\iff x<\dfrac y{y-1}$</p>
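<p>Writing $x=e^a$ and $y=e^b$, so that $e^{a+b}=xy$ and $e^a+e^b=x+y$, the case analysis above shows that neither expression dominates for all $a,b$. A quick numerical check (my addition) exhibits both orderings:</p>

```python
from math import exp

def compare(a: float, b: float) -> str:
    """Report which of e^(a+b) and e^a + e^b is larger for given a, b."""
    lhs, rhs = exp(a + b), exp(a) + exp(b)
    return "e^(a+b) > e^a + e^b" if lhs > rhs else "e^(a+b) < e^a + e^b"

print(compare(2.0, 2.0))  # large a, b: the product e^a * e^b wins
print(compare(0.0, 0.0))  # e^0 = 1 < 1 + 1 = 2
```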
|
744,034 |
<p>How do I show that for all integers $n$, $n^3+(n+1)^3+(n+2)^3$ is a multiple of $9$?
Do I use induction for showing this? If not what do I use and how? And is this question asking me to prove it or show it? How do I show it? </p>
|
user140943
| 140,943 |
<p>Well $$n^3+(n+1)^3+(n+2)^3=n^3+(n^3+3n^2+3n+1)+(n^3+6n^2+12n+8)=3n^3+9n^2+15n+9=3(n^3+3n^2+5n+3)=3(n+1)(n^2+2n+3)$$ Suppose $n=3k$ or $n=3k+1$. Then $$n^2+2n+3=3(3k^2+2k+1)$$ or $$n^2+2n+3=9k^2+6k+1+6k+2+3=9k^2+12k+6=3(3k^2+4k+2)$$ Can you take it from here?</p>
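<p>Before completing the remaining case $n=3k+2$ (where $n+1$ is divisible by $3$), a quick computational check (my addition) confirms the claim:</p>

```python
def f(n: int) -> int:
    """Sum of the cubes of three consecutive integers starting at n."""
    return n**3 + (n + 1)**3 + (n + 2)**3

# Every value should be a multiple of 9, including for negative n.
assert all(f(n) % 9 == 0 for n in range(-1000, 1000))
print(f(5), f(5) // 9)  # 684 76, i.e. 684 = 9 * 76
```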
|
428,415 |
<p>I tried using integration by parts twice, the same way we do for $\int \sin {(\sqrt{x})}$
but in the second integral, I'm not getting an expression that is equal to $\int x\sin {(\sqrt{x})}$.</p>
<p>I let $\sqrt x = t$ thus,
$$\int t^2 \sin(t)\cdot 2t\,dt = 2\int t^3\sin(t)\,dt = 2\left[-t^3\cos(t) + \int 3t^2\cos(t)\,dt\right] = 2\left[-t^3\cos(t)+3t^2\sin(t) - \int 6t \sin(t)\,dt\right]$$</p>
<p>which I can't find useful. </p>
|
lab bhattacharjee
| 33,337 |
<p>Integrating by parts, we get </p>
<p>if $n\ne-1,$</p>
<p>$$\int x^n\cos\sqrt xdx= \frac{x^{n+1}\cos\sqrt x}{n+1}+\frac1{2(n+1)}\int x^{n+\frac12}\sin\sqrt x dx$$</p>
<p>$$\int x^n\sin\sqrt xdx= \frac{x^{n+1}\sin\sqrt x}{n+1}-\frac1{2(n+1)}\int x^{n+\frac12}\cos\sqrt x dx$$</p>
<p>Putting $n=\frac12$ in the first integral, </p>
<p>$$\int x^\frac12\cos\sqrt xdx= \frac{x^{\frac12+1}\cos\sqrt x}{\frac12+1}+\frac1{2(\frac12+1)}\int x\sin\sqrt x dx$$</p>
<p>$$\implies \int x\sin\sqrt x dx=3\int x^\frac12\cos\sqrt xdx- 2x^{\frac12+1}\cos\sqrt x$$</p>
<p>Putting $n=0$ in the second integral, </p>
<p>$$\int \sin\sqrt xdx= \frac{x \sin\sqrt x}{1}-\frac1{2}\int x^{\frac12}\cos\sqrt x dx$$</p>
<p>$$\implies \int x^{\frac12}\cos\sqrt x dx= 2x \sin\sqrt x-2\int \sin\sqrt xdx$$</p>
<p>Now, $\int \sin\sqrt xdx$ can be found <a href="https://math.stackexchange.com/questions/271343/integral-of-sin-sqrtx">here</a></p>
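<p>Combining the reductions above with the standard result $\int \sin\sqrt x\,dx = 2\sin\sqrt x - 2\sqrt x\cos\sqrt x + C$ gives the candidate antiderivative $F(x)=(6x-12)\sin\sqrt x + (12\sqrt x - 2x^{3/2})\cos\sqrt x$. A numerical differentiation check (my addition) confirms $F'(x)=x\sin\sqrt x$:</p>

```python
from math import sin, cos, sqrt

def F(x: float) -> float:
    """Candidate antiderivative of x*sin(sqrt(x)) assembled from the reductions."""
    s = sqrt(x)
    return (6 * x - 12) * sin(s) + (12 * s - 2 * x * s) * cos(s)

def dF(x: float, h: float = 1e-6) -> float:
    """Central-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

for x in (0.5, 1.0, 4.0, 9.0):
    print(x, dF(x), x * sin(sqrt(x)))  # the last two columns should agree
```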
|
1,275,848 |
<p>Given two numbers $x$ and $y$, how to check whether $x$ is divisible by <strong>all</strong> prime factors of $y$ or not?, is there a way to do this without factoring $y$?.</p>
|
NovaDenizen
| 109,816 |
<p>There's a pretty simple algorithm based on gcd.</p>
<ol>
<li>$c := \gcd(x,y)$. </li>
<li>$z := y / c$.</li>
<li>If $z = 1$, terminate. All prime factors in y were also in $x$.</li>
<li>$c := \gcd(x,z)$.</li>
<li>If $c = 1$, terminate. $z > 1$ and $z$ divides $y$, but no prime factor of $z$ divides $x$.</li>
<li>$z := z / c$. </li>
<li>Goto 3. $z$ dimnishes every iteration, so we're guaranteed to terminate.</li>
</ol>
|
262,745 |
<p>I need to find the normal vector, via the plane equation Ax+By+Cz+D=0, of the plane that includes the point (6.82,1,5.56) and the line (7.82,6.82,6.56) +t(6,12,-6), with A=1.</p>
<p>Of course, this is easy to do by hand, using the cross product of two lines and the point. There's supposed to be an automated way of doing it, though, and I can't find it.
Any ideas on an efficient way of doing it?</p>
|
Rudy Potter
| 57,047 |
<p>We have a point</p>
<pre><code>pt1 = {6.82, 1, 5.56};
</code></pre>
<p>And we have a line</p>
<pre><code>ln = {7.82, 6.82, 6.56} + t {6, 12, -6};
</code></pre>
<p>We can get two more points from the line</p>
<pre><code>pt2 = ln /. t -> 0;
pt3 = ln /. t -> 1;
</code></pre>
<p>Then we can borrow the example in the Mathematica help files for the <code>Cross[]</code> function (ref/Cross)</p>
<pre><code>u = pt2 - pt1;
v = pt3 - pt1;
w = Cross[u, v]
n = w/Norm[w]
Graphics3D[{Black, Arrow[Tube[{pt1, pt2}]], Arrow[Tube[{pt1, pt3}]],
Red, Arrow[{pt1, pt1 + 5 n}], White, InfinitePlane[{pt1, pt2, pt3}],
InfiniteLine[{pt2, pt3}]}, Axes -> True, PlotRangePadding -> 2]
</code></pre>
<p>And get that the unit normal vector is
<code>{-0.8757, 0.223964, -0.427772}</code>
and a plot showing the plane, the line, and the vectors. (n multiplied by 5 for visibility)
<a href="https://i.stack.imgur.com/kNNow.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kNNow.png" alt="Plot of the plane, the line, and the vectors" /></a></p>
<p>Also, I should point out, you don't need to assign values to t0 and t1 to get the unit normal vector because they will cancel out when we normalize.</p>
<pre><code>pt2 = ln /. t -> t0;
pt3 = ln /. t -> t1;
u = pt2 - pt1;
v = pt3 - pt1;
w = FullSimplify[Rationalize[Cross[u, v]],
t0 \[Element] Reals && t1 \[Element] Reals && t0 != t1 && t0 < t1]
n = FullSimplify[Rationalize[w/Norm[w]],
t0 \[Element] Reals && t1 \[Element] Reals && t0 != t1 && t0 < t1]
n = SetPrecision[N[n], 3]
Graphics3D[{Black, Arrow[Tube[{pt1, pt2}]], Arrow[Tube[{pt1, pt3}]],
Red, Arrow[{pt1, pt1 + 5 n}], White, InfinitePlane[{pt1, pt2, pt3}],
InfiniteLine[{pt2, pt3}]}, Axes -> True, PlotRangePadding -> 2]
</code></pre>
<pre><code>{(1173 (t0 - t1))/25, 12 (-t0 + t1), (573 (t0 - t1))/25}
{-(391/Sqrt[199362]), 50 Sqrt[2/99681], -(191/Sqrt[199362])}
{-0.876, 0.224, -0.428}
</code></pre>
|
262,745 |
<p>I need to find the normal vector, via the plane equation Ax+By+Cz+D=0, of the plane that includes the point (6.82,1,5.56) and the line (7.82,6.82,6.56) +t(6,12,-6), with A=1.</p>
<p>Of course, this is easy to do by hand, using the cross product of two lines and the point. There's supposed to be an automated way of doing it, though, and I can't find it.
Any ideas on an efficient way of doing it?</p>
|
Roman
| 26,598 |
<p>Based on <a href="https://mathematica.stackexchange.com/a/239071/26598">this answer</a>, we find the equation of a plane through three points with</p>
<pre><code>p1 = {6.82, 1, 5.56};
p2 = {7.82, 6.82, 6.56};
d2 = {6, 12, -6};
v = First@NullSpace[Append[#, 1] & /@ {p1, p2, p2 + d2}]
(* {-0.106949, 0.0273527, -0.0522436, 0.992514} *)
</code></pre>
<p>As you want the first coefficient to be 1, divide all coefficients by the first:</p>
<pre><code>w = v/v[[1]]
(* {1., -0.255754, 0.488491, -9.28026} *)
</code></pre>
<p>The equation of the plane you are looking for is</p>
<pre><code>w . {x, y, z, 1} == 0
(* 1. x - 0.255754 y + 0.488491 z - 9.28026 == 0 *)
</code></pre>
|
4,243,030 |
<p>I tried to evaluate the integral <span class="math-container">$$ \oint_c\dfrac{dz}{\sin^2 z}$$</span> where <span class="math-container">$c$</span> is the circle <span class="math-container">$|z|=1/2$</span>. The only pole within <span class="math-container">$c$</span> is <span class="math-container">$z=0$</span> and the residue at <span class="math-container">$z=0$</span> is found as <span class="math-container">$$\lim_{z\to 0}\dfrac{d}{dz}\left (\dfrac{z^2}{\sin^2 z}\right )=0$$</span> so that the integral is zero. Where did I go wrong, since this seems to violate the Cauchy–Goursat theorem of complex integration?</p>
|
Mr.Gandalf Sauron
| 683,801 |
<p>You can also find the residue at <span class="math-container">$z=0$</span> by directly evaluating the coefficient of <span class="math-container">$\frac{1}{z}$</span> in :-</p>
<p><span class="math-container">$$\frac{1}{\sin^{2}(z)}=(z-\frac{z^{3}}{3!}+\frac{z^{5}}{5!}-...)^{-2}=z^{-2}(1-\frac{z^{2}}{3!}+\frac{z^{4}}{5!}-...)^{-2}$$</span>.</p>
<p>Using the binomial theorem for a negative exponent (valid for small <span class="math-container">$|z|$</span>):</p>
<p>We get <span class="math-container">$$z^{-2}\left(1+2\left(\frac{z^{2}}{3!}-\frac{z^{4}}{5!}\right)+(\text{even powers of } z \text{ of degree} \geq 4)\right)$$</span></p>
<p>So the coefficient of <span class="math-container">$\frac{1}{z}$</span> is <span class="math-container">$0$</span> (which is exactly the definition of the residue at <span class="math-container">$0$</span>: it is the coefficient of <span class="math-container">$\frac{1}{z}$</span> in the Laurent series expansion about <span class="math-container">$0$</span>).</p>
<p>Hence By Cauchy Residue theorem . The integral is <span class="math-container">$0$</span>.</p>
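<p>One can also confirm the result numerically (my addition): parametrizing $|z|=1/2$ as $z=\tfrac12 e^{it}$ and approximating the contour integral shows that it vanishes to machine precision:</p>

```python
import cmath

def contour_integral(n: int = 20_000) -> complex:
    """Approximate the integral of dz / sin(z)^2 over |z| = 1/2 (midpoint rule)."""
    total = 0j
    for k in range(n):
        t = 2 * cmath.pi * (k + 0.5) / n
        z = 0.5 * cmath.exp(1j * t)
        dz = 1j * z * (2 * cmath.pi / n)  # dz = i z dt
        total += dz / cmath.sin(z) ** 2
    return total

print(abs(contour_integral()))  # ≈ 0
```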
|
802,877 |
<blockquote>
<p>Find $\displaystyle\lim_{n\to\infty} n(e^{\frac 1 n}-1)$ </p>
</blockquote>
<p>This should be solved without LHR. I tried to substitute $n=1/k$ but still get indeterminant form like $\displaystyle\lim_{k\to 0} \frac {e^k-1} k$. Is there a way to solve it without LHR nor Taylor or integrals ?</p>
<p>Maybe with the definition of a limit ?</p>
<p>EDIT:</p>
<p>$f'(x)=\displaystyle\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}\frac{(x+h)(e^{\frac{1}{x+h}}-1)-x(e^{\frac 1 x}-1)}{h}=
\lim_{h\to 0}\frac{xe^{\frac{1}{x+h}}+he^{\frac{1}{x+h}}-x-h-xe^{\frac 1 x}+x}{h}=
\lim_{h\to 0}\frac{xe^{\frac{1}{x+h}}+he^{\frac{1}{x+h}}-h-xe^{\frac 1 x}}{h}$</p>
|
Paramanand Singh
| 72,031 |
<p>Again this turns out to be a very nice question. Without making any assumptions on $e$ or $e^{x}$ it is possible to show that the limit $\lim_{n \to \infty}n(e^{1/n} - 1)$ exists. To be more general we can show that for any real number $x > 0$ the limit $$f(x) = \lim_{n \to \infty}n(x^{1/n} - 1)$$ exists. We need to deal with the cases $0 < x < 1$, $x = 1$ and $x > 1$, and it can be seen that the case $0 < x < 1$ can be handled via the case $x > 1$ if we put $x = 1/y$. For $x = 1$ the limit is obviously $0$.</p>
<p>For $x > 1$ we need to show that the sequence $g(x, n) = n(x^{1/n} - 1)$ decreases as $n$ increases; since clearly $g(x, n) > 0$, the limit $f(x) = \lim_{n \to \infty}n(x^{1/n} - 1)$ exists.</p>
<p>As a next step we can show that the limit function $f(x)$ is a strictly increasing function of $x$ for $x > 0$ and satisfies $$f(1) = 0, f(xy) = f(x) + f(y)$$ and with some more effort we can show that $f(x)$ is differentiable and $f'(x) = 1/x$ so that $f(x)$ has all the properties of $\log x$ or $\ln x$.</p>
<p>If we define $e$ as a number such that $\log e = 1$ then obviously we get $\lim_{n \to \infty}n(e^{1/n} - 1) = 1$. This approach towards the definition of logarithm via limits is presented in detail in <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="nofollow">my blog post</a>.</p>
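<p>A quick numerical illustration (my addition) of $n(x^{1/n}-1)\to\log x$, and in particular of the value $1$ at $x=e$:</p>

```python
from math import e, log

def g(x: float, n: int) -> float:
    """The sequence n * (x^(1/n) - 1), which converges to log(x)."""
    return n * (x ** (1 / n) - 1)

for x in (0.5, 2.0, 10.0, e):
    print(x, g(x, 10**6), log(x))  # the last two columns should nearly agree
```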
|
74,805 |
<p>Could anyone explain how I can compute the sum of multibranched functions? For example, if I have $z=re^{i\varphi}$ then $$\sqrt{z}=\sqrt{r}(\cos(\varphi/2+k\pi)+i\sin (\varphi/2+k\pi))$$ for some integer $k$. Therefore, is $$\sqrt{z}+\sqrt{z}=2\sqrt{r}(\cos(\varphi/2+k\pi)+i\sin (\varphi/2+k\pi))$$ for some integer $k$ or $$\sqrt{z}+\sqrt{z}=\sqrt{r}(\cos(\varphi/2+k_1\pi)+i\sin (\varphi/2+k_1\pi))+\sqrt{r}(\cos(\varphi/2+k_2\pi)+i\sin (\varphi/2+k_2\pi))$$ for some integers $k_1,k_2$?</p>
|
Did
| 6,179 |
<p>The first condition, that $f'(0)/f(0)=g'(0)/g(0)$, is equivalent to $A=-z/w$ where $z$ and $w$ are the complex numbers $z=B-C-\mathrm i(B+C)$ and $w=B-C+\mathrm i(B+C)$. If $B$ and $C$ are real numbers, $w=\bar z$ hence $|A|=1$. (The only case when $w=0$ is $B=C=0$, and then $g(0)=0$ hence $g'(0)/g(0)$ does not exist.) The second condition is irrelevant since it uses values at $a$.</p>
|
74,805 |
<p>Could anyone explain how I can compute the sum of multibranched functions? For example, if I have $z=re^{i\varphi}$ then $$\sqrt{z}=\sqrt{r}(\cos(\varphi/2+k\pi)+i\sin (\varphi/2+k\pi))$$ for some integer $k$. Therefore, is $$\sqrt{z}+\sqrt{z}=2\sqrt{r}(\cos(\varphi/2+k\pi)+i\sin (\varphi/2+k\pi))$$ for some integer $k$ or $$\sqrt{z}+\sqrt{z}=\sqrt{r}(\cos(\varphi/2+k_1\pi)+i\sin (\varphi/2+k_1\pi))+\sqrt{r}(\cos(\varphi/2+k_2\pi)+i\sin (\varphi/2+k_2\pi))$$ for some integers $k_1,k_2$?</p>
|
Peđa
| 15,660 |
<p>From the first condition we can conclude :</p>
<p>$(A=-1 \lor k=0 \lor B=C) \land (A=1 \lor k=0 \lor B=-C)$</p>
<p>From the second condition we have that:</p>
<p>$(k=0 \lor C=Be^{2ka}) \land (k=0 \lor C=-Be^{2ka})$</p>
<p>so... </p>
<p>$a)$if $k=0 \Rightarrow A$ is undetermined</p>
<p>$b)$if $B=\pm C=0 \Rightarrow A$ is undetermined</p>
<p>If we observe the second condition we can see that $C$ cannot be at same time $Be^{2ka}$ and $-Be^{2ka}$ which means that $k=0 \lor B=\pm C=0$,therefore $A$ is undetermined.</p>
|
3,479,940 |
<p>Say we're given a set of <span class="math-container">$d$</span> vectors <span class="math-container">$S=\{\mathbf{v}_1,\dots,\mathbf{v}_d\}$</span> in <span class="math-container">$\mathbb{R}^n$</span>, with <span class="math-container">$d\leq n$</span> (obviously). We want to test in an efficient way if S is linearly independent. Now, write the coefficient matrix <span class="math-container">$\mathbf{A}=[\mathbf{v}_1 \dots\mathbf{v}_d]$</span> (the <span class="math-container">$\mathbf{v}_i$</span> are considered to be column vectors). </p>
<p>A <em>non-efficient</em> way would be to compute all minors of rank <span class="math-container">$d$</span> of <span class="math-container">$\mathbf{A}$</span>, and check that they are non-zero (up to some tolerance, as always when we do numerical linear algebra). Another way would be using Gram–Schmidt orthogonalization, but I recall that Gram–Schmidt orthogonalization is numerically unstable. Which is the correct alternative? Singular Value Decomposition (if all singular values are strictly positive, then the vectors are independent)? QR factorization? </p>
|
tch
| 352,534 |
<p>The standard numerical approaches would be computing a (rank-revealing) QR decomposition or an SVD. Both <a href="https://docs.scipy.org/doc/numpy-1.10.4/reference/generated/numpy.linalg.matrix_rank.html" rel="nofollow noreferrer">Numpy</a> and <a href="https://www.mathworks.com/help/matlab/ref/rank.html" rel="nofollow noreferrer">Matlab</a> use the SVD, and both algorithms cost $O(nd^2)$ operations [1].
In general, whenever stability is a concern, the SVD tends to be preferred.
However, even plain QR with householder reflections or modified Gram-Schmidt may give reasonable results in many cases.</p>
<p>As you allude to in the original question, there is always a trade off between accuracy and computation time, and the "best" algorithm depends on your application.
The notion of "rank" itself is not well defined in finite precision, and in defining numerical rank it would be reasonable to take any of the equivalent characterizations of exact rank, and then relax them. </p>
<p>In general, one could define <span class="math-container">$A$</span> as numerically rank <span class="math-container">$k$</span> if all singular values <span class="math-container">$k+1, \ldots, n$</span> are less than some tolerance (this is what numpy and matlab do).
Of course, even if you define it this way, how close your numerical method comes to computing the exact SVD of your finite precision matrix is another concern; i.e. two different implementations of an SVD algorithm would give different results.</p>
<p>On the other hand, if <span class="math-container">$1\ll k\ll n$</span> and you are satisfied with an approximation to the rank, then perhaps a randomized algorithm would be best.</p>
<p>I am not familiar with Smith normal form, but I would be very hesitant to use it in a general purpose numerical method as it seems like it would be susceptible to the same issues as Gaussian elimination.</p>
<p>[1] a rank-<span class="math-container">$r$</span> SVD can be computed somewhat more cheaply, so if you have an upper bound on the rank of your matrix then you may be able to do things somewhat faster.</p>
|
58,870 |
<p>I am teaching a introductory course on differentiable manifolds next term. The course is aimed at fourth year US undergraduate students and first year US graduate students who have done basic coursework in
point-set topology and multivariable calculus, but may not know the definition of differentiable manifold. I am following the textbook <a href="http://rads.stackoverflow.com/amzn/click/0132126052">Differential Topology</a> by
Guillemin and Pollack, supplemented by Milnor's <a href="http://rads.stackoverflow.com/amzn/click/0691048339">book</a>.</p>
<p>My question is: <strong>What are good topics to cover that are not in assigned textbooks?</strong> </p>
|
AFK
| 1,985 |
<p>Thierry Aubin's book "A course in differential geometry" is really good for an introductory course. It covers the basic definitions of manifolds and vector bundles, orientability and integration (Stokes formula) and then focuses on Riemannian geometry defining the Levi-Civita connection, curvature tensor etc... </p>
<p>The only important missing topics are Lie groups and de Rham cohomology. Many courses in differential geometry don't talk about these subjects leaving them to specialised courses in Lie theory or Algebraic topology but I think it's a mistake. </p>
|
294,519 |
<p>The problem I am working on is:</p>
<p>Translate these statements into English, where C(x) is “x is a comedian” and F(x) is “x is funny” and the domain consists of all people.</p>
<p>a) $∀x(C(x)→F(x))$ </p>
<p>b)$∀x(C(x)∧F(x))$</p>
<p>c) $∃x(C(x)→F(x))$ </p>
<p>d)$∃x(C(x)∧F(x))$</p>
<hr>
<p>Here are my answers:</p>
<p>For a): For every person, if they are a comedian, then they are funny.</p>
<p>For b): For every person, they are both a comedian and funny.</p>
<p>For c): There exists a person who, if he is a comedian, is funny</p>
<p>For d): There exists a person who is funny and is a comedian.</p>
<p>Here are the books answers:</p>
<p>a) Every comedian is funny.
b) Every person is a funny comedian.
c) There exists a person such that if she or he is a comedian, then she or he is funny.
d) Some comedians are funny. </p>
<p>Does the meaning of my answers seem to be in harmony with the meaning of the answers given in the solution manual? The reason I ask is because part a), for instance, is a implication, and "Every comedian is funny," does not appear to be an implication.</p>
|
Peter Smith
| 35,151 |
<p>The homework question said "Translate these statements into English". And that means normal English, such as a native English speaker might use. Now, is</p>
<blockquote>
<p>For every person, if they are a comedian, then they are funny.</p>
</blockquote>
<p>normal English in that sense? Would <em>you</em> ever say that sort of thing outside the logic classroom?? Of course not!</p>
<p>What you have written down is a Logic-English mix (Loglish if you like), with something of the syntactic structure of the language of first-order logic, and the vocabulary of English. Now, that's a very useful thing to do <em>as a half-way house</em>, using Loglish as a bridge between FOL and English. But it <em>is</em> only a half-way house. You now need to ask: how would you say much the same thing (as far as truth-conditions are concerned) in normal English? The book answer gets <em>that</em> right.</p>
|
947,358 |
<p>Okay $g(x)= \sqrt{x^2-9}$</p>
<p>thus, $x^2 -9 \ge 0$</p>
<p>equals $x \ge +3$ and $x \ge -3$</p>
<p>thus the domains should be $[3,+\infty) \cup [-3,\infty)$ how come the answer key in my book is stating $(−\infty, −3] \cup[3,\infty)$. </p>
|
Dr. Sonnhard Graubner
| 175,066 |
<p>It must be $(x-3)(x+3)\geq 0$, and this means $x\geq 3$ or $x\le -3$.</p>
|
947,358 |
<p>Okay $g(x)= \sqrt{x^2-9}$</p>
<p>thus, $x^2 -9 \ge 0$</p>
<p>equals $x \ge +3$ and $x \ge -3$</p>
<p>thus the domains should be $[3,+\infty) \cup [-3,\infty)$ how come the answer key in my book is stating $(−\infty, −3] \cup[3,\infty)$. </p>
|
kingW3
| 130,953 |
<p>So we have that $$(x-3)(x+3)\geq0$$ Now this happens when either both $x-3$ and $x+3$ are positive or both are negative.Now solving $$x-3\geq0\land x+3\geq 0\implies x\geq 3\land x\geq -3\\x-3\leq0\land x+3\leq0\implies x\leq3\land x\leq-3$$
Now the first solution is $x\geq3$ and the second is $x\leq-3$</p>
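<p>A tiny numerical check (my addition) of the sign analysis:</p>

```python
def in_domain(x: float) -> bool:
    """True when x^2 - 9 >= 0, i.e. when sqrt(x^2 - 9) is defined."""
    return x * x - 9 >= 0

print([x for x in range(-5, 6) if in_domain(x)])  # [-5, -4, -3, 3, 4, 5]
```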
|
3,034,441 |
<p>How can I prove that <span class="math-container">$\lim_{n\to\infty} \frac{2^n}{n!}=0$</span>?</p>
|
Jack D'Aurizio
| 44,121 |
<p>In other terms we want to evaluate</p>
<p><span class="math-container">$$ \sum_{n\geq 1}(-1)^{n+1}\left(H_{n(n+1)/2}-H_{n(n-1)/2}\right)=\int_{0}^{1}\sum_{n\geq 1}(-1)^{n+1}\frac{x^{n(n+1)/2}-x^{n(n-1)/2}}{x-1}\,dx$$</span>
where the theory of modular forms ensures
<span class="math-container">$$ \sum_{n\geq 0} x^{n(n+1)/2} = \prod_{n\geq 1}\frac{(1-x^{2n})^2}{(1-x^n)}=\prod_{n\geq 1}\frac{1-x^{2n}}{1-x^{2n-1}} $$</span>
but I do not see an easy way for introducing a <span class="math-container">$(-1)^n$</span> twist in the LHS. On the other hand, the Euler-Maclaurin summation formula ensures
<span class="math-container">$$ H_n = \log n + \gamma + \frac{1}{2n} - \sum_{m\geq 2}\frac{B_m}{m n^m} $$</span>
in the Poisson sense. Replacing <span class="math-container">$n$</span> with <span class="math-container">$n(n\pm 1)/2$</span>,
<span class="math-container">$$ H_{\frac{n(n+1)}{2}}-H_{\frac{n(n-1)}{2}} = \log\left(\tfrac{n+1}{n-1}\right)+\tfrac{2}{(n-1)n(n+1)}-\sum_{m\geq 2}\tfrac{2^m B_m}{m n^m}\left(\tfrac{1}{(n-1)^m}-\tfrac{1}{(n+1)^m}\right) $$</span>
then multiplying both sides by <span class="math-container">$(-1)^n$</span> and summing over <span class="math-container">$n\geq 2$</span>:
<span class="math-container">$$ \sum_{n\geq 2}(-1)^n\left(H_{\frac{n(n+1)}{2}}-H_{\frac{n(n-1)}{2}} \right)=\\=5\left(\log(2)-\tfrac{1}{2}\right)-\sum_{m\geq 2}\tfrac{2^m B_m}{m}\sum_{n\geq 2}(-1)^n\left(\tfrac{1}{n^m(n-1)^m}-\tfrac{1}{n^m(n+1)^m}\right)
\\=5\left(\log(2)-\tfrac{1}{2}\right)-\sum_{m\geq 2}\frac{2^m B_m}{m}\left[\frac{1}{2^m}-2\sum_{n\geq 2}\frac{(-1)^m}{n^m(n+1)^m}\right]$$</span>
where the innermost series is a linear combination of <span class="math-container">$\log(2),\zeta(3),\zeta(5),\ldots$</span> by partial fraction decomposition. This allows a reasonable numerical approximation of the original series and a conversion into a double series involving <span class="math-container">$\zeta(2a)\zeta(2b+1)$</span>. I am not sure we can do better than this, but I would be delighted to be proven wrong.</p>
<hr>
<p>Playing a bit with functions, a nice approximation of <span class="math-container">$\sum_{n\geq 0}(-1)^n x^{n(n+1)/2}$</span> over <span class="math-container">$[0,1]$</span> is given by <span class="math-container">$\frac{1}{x+1}-x^2(1-x)^2$</span>, so the value of the original series has to be close to <span class="math-container">$\log(2)-\frac{1}{6}$</span>. A better approximation of the function is <span class="math-container">$\frac{1}{x+1}-x^2(1-x)^2+\frac{3}{4}x^4(1-x)\left(\frac{4}{5}-x\right)$</span>, leading to the following improved approximation for the series: <span class="math-container">$\log(2)-\frac{53}{300}$</span>. A further refinement,
<span class="math-container">$$ g(x)=\sum_{n\geq 0}(-1)^n x^{n(n+1)/2} \approx \frac{1+x+2x^2}{1+2x+5x^2}$$</span>
leads to <span class="math-container">$\color{red}{S\approx\frac{\pi+3\log 2}{10}}$</span>. It might be interesting to describe <em>how</em> I got this approximation.<br> <span class="math-container">$g(0)$</span> and <span class="math-container">$g'(0)$</span> are directly given by the Maclaurin series, while <span class="math-container">$\lim_{x\to 1^-}g(x)=\frac{1}{2}$</span> and <span class="math-container">$\lim_{x\to 1^-}g'(x)=-\frac{1}{8}$</span> can be found through <span class="math-container">$\mathcal{L}(f(e^{-x}))(s)$</span>.<br> <span class="math-container">$g(x)$</span> is convex and decreasing on <span class="math-container">$(0,1)$</span> and any approximation of the
<span class="math-container">$$ \frac{1+ax+(1+a)x^2}{1+(1+a)x+(2+3a)x^2}$$</span>
kind with <span class="math-container">$a$</span> in a suitable range matches these constraints and the values of <span class="math-container">$g$</span> and <span class="math-container">$g'$</span> at the endpoints of <span class="math-container">$(0,1)$</span>. We still have the freedom to pick <span class="math-container">$a$</span> in such a way that the derived approximation is both simple and accurate enough; I just picked <span class="math-container">$a=1$</span>.</p>
|
3,034,441 |
<p>How can I prove that <span class="math-container">$\lim_{n\to\infty} \frac{2^n}{n!}=0$</span>?</p>
|
robjohn
| 13,854 |
<p><strong>Approximating the Sum</strong></p>
<p>Applying the Euler-Maclaurin Sum Formula to <span class="math-container">$\frac1n$</span>, we get
<span class="math-container">$$
\begin{align}
H_n
&=\log(n)+\gamma+\frac1{2n}-\frac1{12n^2}+\frac1{120n^4}-\frac1{252n^6}+\frac1{240n^8}-\frac1{132n^{10}}\\
&\phantom{\,={}}+\frac{691}{32760n^{12}}-\frac1{12n^{14}}+O\!\left(\frac1{n^{16}}\right)\tag1
\end{align}
$$</span>
To compute <span class="math-container">$\sum\limits_{k=1}^\infty(-1)^{k-1}\!\!\left(H_{(k+1)k/2}-H_{k(k-1)/2}\right)$</span>, we will combine the <span class="math-container">$k=2n-1$</span> and the <span class="math-container">$k=2n$</span> terms and use <span class="math-container">$(1)$</span> to estimate <span class="math-container">$H_n$</span>:
<span class="math-container">$$
\begin{align}
\hspace{-5mm}f(n)
&=\overbrace{\left(H_{n(2n-1)}-H_{(2n-1)(n-1)}\right)}^{k=2n-1}-\overbrace{\left(H_{(2n+1)n}-H_{n(2n-1)}\right)}^{k=2n}\\
&=\frac1{2n^2}+\frac1{4n^3}-\frac1{8n^4}-\frac3{16n^5}-\frac{19}{96n^6}-\frac{11}{64n^7}-\frac{43}{384n^8}-\frac{41}{768n^9}\\
&\phantom{\,={}}+\frac{27}{2560n^{10}}+\frac{69}{1024n^{11}}+\frac{677}{6144n^{12}}+\frac{557}{4096n^{13}}+\frac{38797}{286720n^{14}}\\
&\phantom{\,={}}+\frac{26107}{245760n^{15}}+\frac{4597}{98304n^{16}}-\frac{2627}{65536n^{17}}-\frac{838061}{5898240n^{18}}-\frac{316167}{1310720n^{19}}\\
&\phantom{\,={}}-\frac{1130847}{3670016n^{20}}-\frac{6769583}{22020096n^{21}}-\frac{23505703}{115343360n^{22}}+\frac{616009}{20971520n^{23}}\\
&\phantom{\,={}}+\frac{29333183}{75497472n^{24}}+\frac{68957009}{83886080n^{25}}+\frac{3688165573}{3053453312n^{26}}+\frac{5672489659}{4227858432n^{27}}\\
&\phantom{\,={}}+\frac{900259265}{939524096n^{28}}-\frac{55931283}{268435456n^{29}}-\frac{18407784799}{8053063680n^{30}}\\
&\phantom{\,={}}-\frac{5422797419}{1073741824n^{31}}+O\!\left(\frac1{n^{32}}
\right)\tag2
\end{align}
$$</span>
Next, apply the Euler-Maclaurin Sum Formula to <span class="math-container">$(2)$</span> to approximate
<span class="math-container">$$
\sum_{k=1}^nf(k)=C+g(n)\tag3
$$</span>
where
<span class="math-container">$$
\begin{align}
\hspace{-5mm}g(n)
&=-\frac1{2n}+\frac1{8n^2}+\frac1{12n^3}-\frac5{64n^4}+\frac1{240n^5}+\frac{11}{384n^6}-\frac5{1344n^7}-\frac{151}{6144n^8}\\
&\phantom{\,={}}-\frac{13}{11520n^9}+\frac{507}{10240n^{10}}+\frac5{11264n^{11}}-\frac{6797}{49152n^{12}}+\frac{2411}{5591040n^{13}}\\
&\phantom{\,={}}+\frac{623927}{1146880n^{14}}-\frac{29}{737280n^{15}}-\frac{9105847}{3145728n^{16}}-\frac{1269}{5570560n^{17}}\\
&\phantom{\,={}}+\frac{470568199}{23592960n^{18}}-\frac{9641}{174325760n^{19}}-\frac{25364768763}{146800640n^{20}}+\frac{568951}{3633315840n^{21}}\\
&\phantom{\,={}}+\frac{848095747927}{461373440n^{22}}+\frac{163511}{1447034880n^{23}}-\frac{142300201072307}{6039797760n^{24}}\\
&\phantom{\,={}}-\frac{4915843}{38168166400 n^{25}}+\frac{4373467527277213}{12213813248n^{26}}-\frac{2560205}{12683575296n^{27}}\\
&\phantom{\,={}}-\frac{47857489871577677}{7516192768n^{28}}+\frac{2494825}{23353884672n^{29}}+\frac{4218610442394753611}{32212254720n^{30}}\\
&\phantom{\,={}}+O\!\left(\frac1{n^{31}}\right)\tag4
\end{align}
$$</span>
Since <span class="math-container">$\lim\limits_{n\to\infty}g(n)=0$</span>, <span class="math-container">$(3)$</span> implies <span class="math-container">$C=\sum\limits_{k=1}^\infty f(k)$</span>.</p>
<p>We will use the approximation of <span class="math-container">$g$</span> given in <span class="math-container">$(4)$</span>, call it <span class="math-container">$\tilde{g}$</span>, whose error is <span class="math-container">$O\!\left(\frac1{n^{31}}\right)$</span>, to get
<span class="math-container">$$
\sum_{k=1}^\infty f(k)=\sum_{k=1}^nf(k)-\tilde{g}(n)+O\!\left(\frac1{n^{31}}\right)\tag5
$$</span>
Here is a table of the values from <span class="math-container">$(5)$</span>. Note that as we double <span class="math-container">$n$</span>, we get about <span class="math-container">$9.63$</span> more digits of the sum.
<span class="math-container">$$
\begin{array}{r|c|l}
n&\substack{\text{harmonic}\\\text{series}\\\text{terms}\vphantom{\text{i}}}&\sum\limits_{k=1}^nf(k)-\tilde{g}(n)\\\hline
5&55&0.517100379042\color{#CCC}{335666895661356664256993223233139202045}\\
10&210&0.517100379042401725064\color{#CCC}{786300619359787983738352017985}\\
20&820&0.51710037904240172506481077213135\color{#CCC}{0714320294664965323}\\
40&3240&0.51710037904240172506481077213135745047250\color{#CCC}{5734079916}\\
80&12880&0.517100379042401725064810772131357450472507379080669\\
160&51360&0.517100379042401725064810772131357450472507379080669
\end{array}
$$</span>
So with <span class="math-container">$n=80$</span>, we need to sum <span class="math-container">$12880$</span> terms of the Harmonic Series to get <span class="math-container">$51$</span> decimal places of the sum. This is pretty good since the sum of that many terms of the series only gives about <span class="math-container">$2$</span> decimal places without the correction from <span class="math-container">$\tilde{g}$</span>.</p>
<hr />
<p><strong>About the Error Term</strong></p>
<p>Note that in <span class="math-container">$(4)$</span>, when the exponents are big, the coefficients of the even powers of <span class="math-container">$n$</span> are big and the coefficients of the odd powers of <span class="math-container">$n$</span> are small. In fact, the coefficient of the <span class="math-container">$\frac1{n^{31}}$</span> term is about <span class="math-container">$\frac1{2516}$</span> whereas the coefficient for the <span class="math-container">$\frac1{n^{32}}$</span> term is about <span class="math-container">$3085116088$</span>. This means that for <span class="math-container">$n\lt8\times10^{12}$</span>, the error acts more like <span class="math-container">$\frac{3085116088}{n^{32}}$</span> than <span class="math-container">$\frac1{2516n^{31}}$</span>.</p>
<p>For <span class="math-container">$n=\,\,5$</span>, this gives about <span class="math-container">$12.9$</span> decimal places<br />
For <span class="math-container">$n=10$</span>, this gives about <span class="math-container">$22.5$</span> decimal places<br />
For <span class="math-container">$n=20$</span>, this gives about <span class="math-container">$32.1$</span> decimal places<br />
For <span class="math-container">$n=40$</span>, this gives about <span class="math-container">$41.8$</span> decimal places<br />
For <span class="math-container">$n=80$</span>, this gives about <span class="math-container">$51.4$</span> decimal places</p>
<p>These match the table above. Although, in the table, <span class="math-container">$n=10$</span> only matches to <span class="math-container">$21$</span> places, the error is actually only <span class="math-container">$2.4\times10^{-23}$</span>.</p>
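<p>The decimal-place counts above are just logarithms of the quoted coefficient (a tiny illustrative computation):</p>

```python
import math

c = 3085116088  # approximate coefficient of the 1/n^32 term quoted above
for n in [5, 10, 20, 40, 80]:
    # error ~ c / n^32, so the number of correct decimal places is about
    digits = 32 * math.log10(n) - math.log10(c)
    print(n, round(digits, 1))  # 12.9, 22.5, 32.1, 41.8, 51.4
```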
<hr />
<p><strong>Correction to the Proof in the Question</strong></p>
<p>While it is true that the terms in the alternating sum are decreasing, as stated in the question, there is a problem with the proof there. The third line does not imply the second. Here is a fixed proof.
<span class="math-container">$$
\begin{align}
&\sum_{i=1}^n\frac1{\frac{n(n-1)}2+i}-\sum_{i=1}^{n+1}\frac1{\frac{n(n+1)}2+i}\\
&=\sum_{i=1}^n\left(\frac1{\frac{n(n-1)}2+i}-\frac1{\frac{n(n+1)}2+i}\right)-\frac1{\frac{(n+2)(n+1)}2}\tag{6a}\\
&=\sum_{i=1}^n\frac{n}{\left(\frac{n(n-1)}2+i\right)\left(\frac{n(n+1)}2+i\right)}-\frac2{(n+2)(n+1)}\tag{6b}\\
&\ge\frac{n^2}{\frac{n(n+1)}2\frac{n(n+3)}2}-\frac2{(n+2)(n+1)}\tag{6c}\\[6pt]
&=\frac4{(n+1)(n+3)}-\frac2{(n+2)(n+1)}\tag{6d}\\[9pt]
&=\frac2{(n+2)(n+3)}\tag{6e}
\end{align}
$$</span>
Explanation:<br />
<span class="math-container">$\text{(6a)}$</span>: move the terms for <span class="math-container">$i\in[1,n]$</span> from the right sum into the left sum<br />
<span class="math-container">$\text{(6b)}$</span>: simplify the difference in the summation<br />
<span class="math-container">$\text{(6c)}$</span>: replace each term in the sum by the smallest term (<span class="math-container">$i=n$</span>)<br />
<span class="math-container">$\text{(6d)}$</span>: simplify<br />
<span class="math-container">$\text{(6e)}$</span>: subtract</p>
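<p>The bound <span class="math-container">$\text{(6e)}$</span> can be spot-checked with exact rational arithmetic (a small verification sketch, separate from the proof):</p>

```python
from fractions import Fraction

def gap(n):
    # left-hand side of (6a): difference of two consecutive blocks
    left = sum(Fraction(1, n * (n - 1) // 2 + i) for i in range(1, n + 1))
    right = sum(Fraction(1, n * (n + 1) // 2 + i) for i in range(1, n + 2))
    return left - right

for n in range(1, 40):
    assert gap(n) >= Fraction(2, (n + 2) * (n + 3))
```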
|
216,021 |
<p>Suppose $A$ and $B$ are $n \times n$ matrices. Assume $AB=I$. Prove that $A$ and $B$ are invertible and that $B=A^{-1}$.</p>
<p>Please let me know whether my proof is correct and if there are any improvements to be made.</p>
<p>Assume $AB=I$. Then $(AB)A=IA=A$. So, $A(BA)=AI=A$. Then $BA=I$. Therefore $AB=BA=I$. Thus $A$ and $B$ are invertible. And by definition $B=A^{-1}$, so $AB=AA^{-1}=I$.</p>
<p>It doesn't seem quite correct. Thanks in advance for any advice.</p>
|
littleO
| 40,119 |
<p>Suppose $AB = I$. ($A$ and $B$ are $n \times n$ matrices.)</p>
<p>First note that $R(A) = \mathbb{R}^n$. (If $y \in \mathbb{R}^n$, then
$y = Ax$, where $x = By$.)</p>
<p>It follows, by the rank–nullity theorem, that $N(A) = \{0\}$.</p>
<p>We wish to show that $BAx = x$ for all $x \in \mathbb{R}^n$.
So let $x \in \mathbb{R}^n$, and let $z = BAx$.
Then $Az = A(BAx) = (AB)Ax = Ax$, which implies that
$z = x$ because $N(A) = \{ 0 \}$. So $BAx = x$.</p>
|
2,917,439 |
<p>Suppose we flip two fair coins and roll one fair six-sided die.</p>
<p>What is the conditional probability that the number of heads equals the number
showing on the die, conditional on knowing that the die showed 1?</p>
<p>Let's define the following:</p>
<p>$A=\{\text{#H = # on die}\}$</p>
<p>$B=\{\text{# on die = 1}\}$</p>
<p>We want to find:</p>
<p>$P(A|B)=\dfrac{P(A\cap B)}{P(B)}=\dfrac{P(\{\text{#H = # on die}\} \cap \{\text{# on die = 1}\})}{P(\{\text{# on die = 1}\})}$</p>
<p>I drew a tree diagram to help me:</p>
<p><a href="https://i.stack.imgur.com/ZhP1s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZhP1s.png" alt="enter image description here"></a></p>
<p>Basically we toss two coins, and the final toss can match with either number on the dice.</p>
<p>Now $P(B)=1/6=(1/2)(1/2)(1/6)(4)$</p>
<p>For the numerator, we want $\{\text{#H = # on die}\} \cap \{\text{# on die = 1}\}$. The more restrictive here is that the die must roll a 1:</p>
<p>The orange marks show the only two paths in the intersection. This equals $(2)(1/2)(1/2)(1/6)=1/12$</p>
<p><a href="https://i.stack.imgur.com/JmS7l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JmS7l.png" alt="enter image description here"></a> </p>
<p>Therefore our $P(A|B)=(1/12)/(1/6)=(1/2)$ as our final answer.</p>
<p>Is this correct? I'm wondering if it makes sense that, because the die rolled a $1$ and exactly one head out of two coins occurs half the time, we end up with the 50/50.</p>
|
stagerf
| 492,266 |
<p>Let $\xi_1$ be the number of heads from two coin tosses and $\xi_2$ be the value of a die after a roll.</p>
<p>$\xi_1$ is binomially distributed with $n=2$ and $p=\frac{1}{2}$, so $p_{\xi_1}(x)=\binom{2}{x} \left ( \frac{1}{2} \right )^2$. $\xi_2$ has a discrete uniform distribution with $p_{\xi_2}(x)=\frac{1}{6}$.</p>
<p>When $\xi_2=1$, the only event that satisfies $\xi_1=\xi_2$ is $\xi_1 = 1 \cap \xi_2 = 1$. Since $\xi_1$ and $\xi_2$ are independent random variables, $\Pr(\xi_1 = 1 \cap \xi_2 = 1)=\Pr(\xi_1 = 1) \cdot \Pr(\xi_2 = 1)$. </p>
<p>$$\Pr(\xi_1 = \xi_2 \mid \xi_2 = 1)=\frac{\text{Probability of events in which }\xi_1 \text{ equals } \xi_2 \text{ when } \xi_2=1}{\text{Probability that }\xi_2=1}$$
$$=\frac{\Pr(\xi_1 = 1 \cap \xi_2 = 1)}{\Pr(\xi_2=1)}=\frac{\Pr(\xi_1 = 1) \cdot \Pr(\xi_2 = 1)}{\Pr(\xi_2 = 1)}=\Pr(\xi_1 = 1)=\binom{2}{1} \left (\frac{1}{2} \right)^2=\frac{1}{2}$$</p>
|
3,366,569 |
<p>I am trying to solve the following problem;</p>
<p>Write all elements of the following set: <span class="math-container">$ A=\left \{ x\in\mathbb{R}; \sqrt{8-t+\sqrt{2-t}}\in\mathbb{R}, t\in\mathbb{R} \right \}$</span> .</p>
<p>My assumption is that the solution is <span class="math-container">$\mathbb{R}$</span> and we don't need to solve when are the square roots defined, because of the <span class="math-container">$x$</span>. Am I correct?</p>
<p>Thanks</p>
|
Kavi Rama Murthy
| 142,385 |
<p>Hint: try <span class="math-container">$y=\sum\limits_{n=0}^\infty a_nx^{n}$</span>. The coefficient of <span class="math-container">$x^{n}$</span> on the left side becomes <span class="math-container">$(n^{2}-1)a_n$</span>. Equate this to the coefficient of <span class="math-container">$x^{n}$</span> on the right side. </p>
|
1,596 |
<p><a href="https://en.wikipedia.org/wiki/Lenna" rel="nofollow noreferrer">Lenna</a> is commonly used as an example placeholder image. I also recently used it in an <a href="https://mathematica.stackexchange.com/questions/87693/how-to-put-an-imported-image-in-a-disk/87699#87699">answer on the site</a>. However, as the user <a href="https://mathematica.stackexchange.com/users/21825/lightness-races-in-orbit">Lightness Races in Orbit</a> pointed out, since Lenna is actually a Playboy centerfold image some people might find the usage of the image offensive, or unwelcoming to women (I don't see why women should be more offended by this than men, but let's keep that aside for now).</p>
<p>I tend to agree, and indeed, I changed the picture to a somewhat less offensive picture of Jeb Bush. However, it made me think that we might want, as a community, to discourage the usage of Lenna, and maybe even add functionality to the site (or to all SE sites, for that matter) that notifies users who use <code>ExampleData[{"TestImage", "Lena"}]</code> that this is discouraged and suggests an alternative test image, for example, this one:</p>
<p><img src="https://i.stack.imgur.com/LfVGhm.jpg" alt="enter image description here"></p>
<p>Any thoughts or suggestions will be welcome.</p>
|
Community
| -1 |
<blockquote>
<p>If a professor makes a sexist joke, a female student might well find it so disturbing that she is unable to listen to the rest of the lecture [<a href="http://www.spertus.com/ellen/Gender/pap/node10.html#SECTION00410000000000000000" rel="nofollow">2</a>]. Suggestive pictures used in lectures on image processing are similarly distracting to the women listeners and <strong>convey the message that the lecturer caters to the males only.</strong> For example, it is amazing that the "Lena" pin-up image is still used as an example in courses and published as a test image in journals today.</p>
</blockquote>
<p>— Dianne P. O'Leary, "<a href="https://www.cs.umd.edu/users/oleary/faculty/whole.html" rel="nofollow">Accessibility of Computer Science: A Reflection for Faculty Members</a>" (emphasis mine).</p>
<p>Programming as a field already has a reputation of gender bias and non-inclusiveness. Of course you are still free to choose whatever test images you like, but you should consider whether the technical qualities of the image you choose are worth the side effect of making this site feel slightly more like a boys' club.</p>
|
2,170,278 |
<p>The answer is $\binom {13}1 \binom42 \binom{12}3 \binom 41^3$</p>
<p>I want to break the last term and see what happens. [Struggling with the concept so trying to work with it as much as possible].</p>
<p>$\binom 41^3$ means that $\heartsuit \diamondsuit \spadesuit$ is different from $\diamondsuit \heartsuit \spadesuit$ and $\heartsuit \spadesuit \diamondsuit $ </p>
<p>$\binom 42 \binom 41$ means that $\heartsuit \diamondsuit \spadesuit$ is the same as $\diamondsuit \heartsuit \spadesuit$</p>
<p>$\binom 43$ means that $\heartsuit \diamondsuit \spadesuit$, $\diamondsuit \heartsuit \spadesuit$, $\heartsuit \spadesuit \diamondsuit $ are all the same.</p>
<p>Basically, in the first case all three suits are ordered; in the second case the first two suits are unordered but the third one is ordered; and in the third case none of the suits are ordered.</p>
<p>Does that make sense?</p>
<p>Clarification:</p>
<p>A one pair consists of five cards where two are of the same kind and the other three are of different kinds. How many such hands are possible? The correct answer is $\binom {13}1 \binom42 \binom{12}3 \binom 41^3$. What I am doing is choosing the suits in different ways to see how that affects the correct answer. Want to see if my reasoning holds up.</p>
|
David K
| 139,123 |
<p>The difference between the two principles is that the chain of implications <em>terminates</em> in downward induction, whereas in infinite descent it does not terminate.</p>
<p>Examine the two following statements carefully to see how they are different:</p>
<blockquote>
<p>A) From <span class="math-container">$k\in \mathbb N$</span> and <span class="math-container">$P(k+1)$</span> you can conclude <span class="math-container">$P(k).$</span></p>
<p>B) From <span class="math-container">$m\in \mathbb N$</span> and <span class="math-container">$P(m)$</span> you can conclude <span class="math-container">$m - 1\in \mathbb N$</span> and <span class="math-container">$P(m-1).$</span></p>
</blockquote>
<p>The difference is that statement A applies only when we know that there actually exists an natural number <span class="math-container">$k$</span> at which the conclusion <span class="math-container">$P(k)$</span> can be evaluated. Statement B has no such safeguard with respect to <span class="math-container">$P(m-1).$</span></p>
<p>In particular, let <span class="math-container">$n_0$</span> be the least natural number (<span class="math-container">$n_0 = 1$</span> or
<span class="math-container">$n_0 = 0,$</span> depending on your definition of "natural number").
Then from the assumptions that statement B is true and that <span class="math-container">$P(n_0)$</span> is true, it follows that <span class="math-container">$n_0 - 1$</span> is a natural number.
This is a false conclusion, of course.
But if instead we have that statement A is true and that <span class="math-container">$P(n_0)$</span> is true,
we are not led to any such false conclusions;
there is no <span class="math-container">$k\in\mathbb N$</span> such that <span class="math-container">$k + 1 = n_0,$</span> so the antecedent of A is false, so we do not conclude the consequence.</p>
<p>Statement B is just a special case of infinite descent in which the "next" natural number in the chain is always one less than the number "before" it.</p>
<p>In summary, downward induction starts at some natural number and proceeds downward through every consecutive natural number until it reaches the smallest one, and then it stops.
Infinite descent starts at some natural number and proceeds downward,
not necessarily visiting every lesser natural number, but <em>never stopping.</em></p>
|
262,319 |
<pre><code>ContourPlot[
EuclideanDistance[{-5, 0}, {x, y}]*
EuclideanDistance[{5, 0}, {x, y}], {x, -15, 15}, {y, -11, 11},
Contours -> Range[5, 150, 20], Frame -> False,
ContourLabels -> (Text[Style[#3, Directive[Blue, 15]], {#1, #2}] &),
AspectRatio -> Automatic,
ColorFunction -> (If[# < 145,
ColorData[{"TemperatureMap", {0, 145}}, #], None] &),
ColorFunctionScaling -> False]
</code></pre>
<p><a href="https://i.stack.imgur.com/3hvcE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3hvcE.png" alt="enter image description here" /></a></p>
<p>How to remove those redundant <code>25</code>? I just hope to keep one or two. But I get a ton of label <code>25</code>, which makes the graphic look crowded...</p>
|
kglr
| 125 |
<pre><code>cp = ContourPlot[EuclideanDistance[{-5, 0}, {x, y}] EuclideanDistance[{5, 0}, {x, y}],
{x, -15, 15}, {y, -11, 11},
Contours -> Range[5, 150, 20], Frame -> False,
ContourLabels -> (Text[Style[#3, Directive[Blue, 15]], {#1, #2}] &),
AspectRatio -> Automatic,
ColorFunction -> (If[# < 145, ColorData[{"TemperatureMap", {0, 145}}, #], None] &),
ColorFunctionScaling -> False]
</code></pre>
<p><a href="https://i.stack.imgur.com/Ighko.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ighko.png" alt="enter image description here" /></a></p>
<p>You can post-process <code>cp</code> to remove duplicate labels:</p>
<pre><code>ReplaceAll[labels : {__Text} :> DeleteDuplicatesBy[First] @ labels] @ cp
</code></pre>
<p><a href="https://i.stack.imgur.com/Iz5ZE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iz5ZE.png" alt="enter image description here" /></a></p>
<p>Alternatively, you can add the option</p>
<pre><code>DisplayFunction -> ReplaceAll[labels : {__Text} :> DeleteDuplicatesBy[First] @ labels]
</code></pre>
<p>to <code>ContourPlot[...]</code> to get the same picture.</p>
|
262,319 |
<pre><code>ContourPlot[
EuclideanDistance[{-5, 0}, {x, y}]*
EuclideanDistance[{5, 0}, {x, y}], {x, -15, 15}, {y, -11, 11},
Contours -> Range[5, 150, 20], Frame -> False,
ContourLabels -> (Text[Style[#3, Directive[Blue, 15]], {#1, #2}] &),
AspectRatio -> Automatic,
ColorFunction -> (If[# < 145,
ColorData[{"TemperatureMap", {0, 145}}, #], None] &),
ColorFunctionScaling -> False]
</code></pre>
<p><a href="https://i.stack.imgur.com/3hvcE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3hvcE.png" alt="enter image description here" /></a></p>
<p>How to remove those redundant <code>25</code>? I just hope to keep one or two. But I get a ton of label <code>25</code>, which makes the graphic look crowded...</p>
|
cvgmt
| 72,111 |
<p>Follow the approach by @Bob Hanlon.</p>
<p>I think the problem comes from the complex expression that <code>EuclideanDistance</code> produces, since it contains <code>Sqrt</code> and <code>Abs</code>.</p>
<pre><code>EuclideanDistance[{a, b}, {x, y}]
(* Sqrt[Abs[a - x]^2 + Abs[b - y]^2] *)
</code></pre>
<p>So I sometimes avoid using <code>EuclideanDistance</code> or <code>Norm</code>. Here we can use <code>#.#&</code> instead.</p>
<pre><code>Sqrt[# . # &[{-5, 0} - {x, y}] # . # &[{5, 0} - {x, y}]]
</code></pre>
<p>Or</p>
<pre><code>Sqrt[Norm[{-5, 0} - {x, y}]^2 Norm[{5, 0} - {x, y}]^2]
</code></pre>
<p>Or</p>
<pre><code>Sqrt[EuclideanDistance[{-5, 0}, {x, y}]^2 EuclideanDistance[{5,
0}, {x, y}]^2]
</code></pre>
|
178,028 |
<p>I am given $G = \{x + y \sqrt7 \mid x^2 - 7y^2 = 1; x,y \in \mathbb Q\}$ and the task is to determine the nature of $(G, \cdot)$, where $\cdot$ is multiplication. I'm having trouble finding the inverse element (I have found the neutral element and proven associativity).</p>
|
Bill Dubuque
| 242 |
<p><strong>Hint</strong> $\ \alpha\, \bar\alpha = 1\:\Rightarrow\: \bar\alpha = 1/\alpha$</p>
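<p>A small computational sketch of the hint, representing $x+y\sqrt7$ as the pair $(x,y)$ (the element $8+3\sqrt7$ is one hand-picked member of $G$): multiplying an element of norm $1$ by its conjugate gives $1$, so the conjugate is the inverse.</p>

```python
from fractions import Fraction as F

def mul(a, b):
    # (x1 + y1*sqrt(7)) * (x2 + y2*sqrt(7)), elements stored as (x, y)
    (x1, y1), (x2, y2) = a, b
    return (x1 * x2 + 7 * y1 * y2, x1 * y2 + y1 * x2)

def norm(a):
    x, y = a
    return x * x - 7 * y * y

g = (F(8), F(3))        # 8^2 - 7 * 3^2 = 1, so g lies in G
conj = (g[0], -g[1])    # the conjugate, as in the hint
assert norm(g) == 1
assert mul(g, conj) == (F(1), F(0))   # g * conj(g) = 1 + 0*sqrt(7)
```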
|
21,201 |
<p>Next Monday, I'll have an interview at Siemens for an internship where I have to know about fluid dynamics/computational fluid dynamics. I'm not a physicist, so does somebody have a suggestion for a good book where I can read about some basics? Thank you very much.</p>
|
Ben Webster
| 66 |
<p>There are many group actions on sets which are linearly equivalent but not equivalent as actions. In fact, every group other than the cyclic group has one. This follows from some easy linear algebra:</p>
<ul>
<li>the number of irreducible reps over <span class="math-container">$\mathbb{Q}$</span> is the number of conjugacy classes of cyclic subgroups of <span class="math-container">$G$</span>, (<strong>EDIT</strong>:there are at most this many since any two elements which generate conjugate cyclic groups have the same character in a rational representation; on the other hand, the characters of the inductions of the trivial from any set of cyclic groups, no two of which are conjugate, are linearly independent, so there are at least this many) and</li>
<li>the number of non-isomorphic transitive G-sets is the number of conjugacy classes of subgroups.</li>
</ul>
<p>Thus, there must be an integer valued linear combination of transitive actions which has trivial character. Moving all the actions with negative coefficients to the other side of the equality, we get two different actions with the same character, and thus isomorphic representations.</p>
<p>I actually wrote <a href="https://arxiv.org/abs/math/0610205" rel="nofollow noreferrer">a paper</a> about this a few years back, which I think is a reasonable starting place for the subject, which actually has quite a long history, and a reasonably extensive literature.</p>
|
1,006,354 |
<ul>
<li>A multiple choice exam has 175 questions. </li>
<li>Each question has 4 possible answers. </li>
<li>Only 1 answer out of the 4 possible answers is correct. </li>
<li>The pass rate for the exam is 70% (123 questions must be answered correctly). </li>
<li>We know for a fact that 100 questions were answered correctly. </li>
</ul>
<p>Questions: What is the probability of passing the exam, if one were to guess on the remaining 75 questions? That is, pick at random one of the 4 answers for each of the 75 questions. </p>
|
Joshua Mundinger
| 106,317 |
<p>The question essentially asks for the probability of answering more than 22 of the remaining 75 questions correctly. </p>
<p>The chance that we will answer all 75 incorrectly is simply $(3/4)^{75}$. The probability that we will answer one particular question correctly and all the rest incorrectly is $(1/4)(3/4)^{74}$. Thus, the probability that we will answer exactly one question correctly is $75(1/4)(3/4)^{74}$, since there are 75 possible choices for which question is answered correctly. </p>
<p>In general, the probability that $p$ questions will be answered correctly will be $${75 \choose p}\left(\frac{1}{4}\right)^p\left(\frac{3}{4}\right)^{75-p}$$ and thus our total probability is</p>
<p>$$P = \sum_{p=23}^{75} {75 \choose p}\left(\frac{1}{4}\right)^p\left(\frac{3}{4}\right)^{75-p} \approx 0.158$$</p>
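<p>The tail sum can be evaluated exactly with integer arithmetic (a short sketch using only Python's standard library):</p>

```python
from fractions import Fraction
from math import comb

# P(at least 23 correct out of 75 guesses), each correct with probability 1/4
p = sum(Fraction(comb(75, k) * 3 ** (75 - k), 4 ** 75) for k in range(23, 76))
print(float(p))  # approximately 0.158, the value quoted above
```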
|
402,214 |
<p>I recently obtained "What is Mathematics?" by Richard Courant and I am having trouble understanding what is happening with the Prime Number Unique Factor Composition Proof (found on Page 23).</p>
<p>The first part:</p>
<blockquote>
<p><img src="https://i.stack.imgur.com/h5rCh.png" alt="enter image description here"></p>
</blockquote>
<p>I have looked over it many times but I just don't understand what he is doing and why, as an example, if you remove the first factor from either side of the equation you end up with two essentially different compositions that make up a new smaller integer.</p>
<p>I'm sure it is a simple error on my behalf but I have been stuck on this for a long time so I would appreciate a walkthrough explained clearly or some pointers in the right direction. Thank you.</p>
|
Key Ideas
| 78,535 |
<p>It appears that the troublesome aspect of the proof is the negative contradictory form of induction that is employed, i.e. the contrapositive of complete induction (Fermat's infinite descent). This is easily eliminated by rewriting the proof to use <a href="https://en.wikipedia.org/wiki/Mathematical_induction#Complete_induction" rel="nofollow">complete induction,</a> as is done below.</p>
<p>Suppose as inductive hypothesis that the prime factorization of every natural $< m\,$ is unique up to order. We prove that the same holds true for $\,m.\,$ Consider any two prime factorizations of $\,m$</p>
<p>$$\tag{1} m\, =\, p_1 p_2\cdots p_r =\, q_1 q_2\cdots q_s$$</p>
<p>where the $p$'s and $q$'s are primes. By reordering if need be, we may assume that</p>
<p>$$ p_1 \le p_2 \le \cdots \le p_r,\quad q_1 \le q_2 \le \cdots \le q_s$$</p>
<p>We prove $\,\color{#0a0}{p_1 = \ q_1}.\,$ We may assume $\,p_1 \le q_1$ (else change notation: swap $p$ and $q).\,$
Let $$\tag{2} m' = m - p_1 q_2 q_3\cdots q_s$$ </p>
<p>By substituting for $\,m\,$ its two factorizations in equation $(1)$ we obtain
$$\begin{eqnarray}
\tag{3} m' &=\,& p_1 p_2\cdots p_r -\, p_1 q_2\cdots q_s &=\,& p_1 (p_2\cdots p_r -\, q_2 q_3 \cdots q_s) \\
\tag{4} m' &=\,& q_1 q_2\cdots q_s -\, p_1 q_2\cdots q_s &=\,& (q_1-p_1)(q_2 q_3\cdots q_s) \end{eqnarray}$$</p>
<p>Suppose $\,\color{#c00}{p_1 \ne q_1}\,$ Then $\,p_1 < q_1\,$ so it follows by $(4)$ that $\,m'>0$, while $\,m'<m\,$ by $(2).\,$
So, by induction, the prime factorization of $\,m'\,$ is unique, up to order.
From $(3)$ it follows that $\,p_1\,$ is a factor of $\, m'.\,$ Thus, by uniqueness,
$\,p_1\,$ must also appear as a factor in $(4)$, i.e. as a factor of either $\,q_1 - p_1$ or $\,q_2\cdots q_s.\,$ The latter is impossible,
since all the $\,q_i\,$ are primes larger than $\,p_1.\,$ Hence $\,p_1\,$ is a factor of $\,q_1 - p_1,\,$
i.e. there is an integer $\,k\,$ such that $\, q_1 - p_1 = k p_1,\,$ so $\,q_1 = (k\!+\!1) p_1.\,$ Thus $\,p_1\,$ is a factor of $\,q_1,\: $ contra prime $\:q_1 > p_1.\, $ This contradiction proves that our assumption $\,\color{#c00}{p_1\ne q_1}\,$ is false, so $\,\color{#0a0}{p_1 = q_1}\,$ as claimed.</p>
<p>Cancelling $\,\color{#0a0}{p_1\! = q_1}\,$ from both sides of equation $(1)$ and applying the inductive hypothesis,
we infer that the smaller number $\, m/p_1 = p_2\cdots p_r =\, q_2\cdots q_s\,$ has unique factorization,
so $\, r = s\,$ and $\, p_2\! = q_2,\ p_3\! = q_3,\ldots, p_s\! = q_s.\,$ Hence the factorizations
in $(1)$ are the same up to order. $ $ <strong>QED</strong></p>
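<p>The theorem is what guarantees that the prime multiset does not depend on how it is found; for instance, simple trial division (an illustrative sketch) always returns the same sorted factorization:</p>

```python
def prime_factors(m):
    # trial division: prime factors of m >= 2 in increasing order
    factors, d = [], 2
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:
        factors.append(m)
    return factors

assert prime_factors(360) == [2, 2, 2, 3, 3, 5]
assert prime_factors(97) == [97]   # a prime factors as itself
```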
|
3,245,223 |
<p>I am supposed to prove an identity that seems fairly easy, but the negative exponents in <span class="math-container">$$\sum_{k=0}^n \binom nk\ (\frac{(-1)^k}{k+1})= \frac{1}{n+1}$$</span> are making this question very difficult for me. I have tried using the binomial theorem on the right side with <span class="math-container">$(n+1)^{-1}$</span>, but I understand that operation does not make sense. I can also tell that the only difference between the two sides is that the left side has an additional <span class="math-container">$(k+1)^{-1}$</span> and the right side an additional <span class="math-container">$(n+1)^{-1}$</span>, but I am still having difficulty solving this question. Hints appreciated.</p>
|
Z Ahmed
| 671,540 |
<p>The binomial theorem gives:
<span class="math-container">$$(1-x)^n= \sum_{k=0}^{n} (-1)^k {n \choose k} x^k$$</span>
Integrate both sides w.r.t. <span class="math-container">$x$</span> from <span class="math-container">$x=0$</span> to <span class="math-container">$x=1$</span> to get
<span class="math-container">$$\left .\sum_{k=0}^{n} (-1)^k {n \choose k}\frac{x^{k+1}}{k+1} \right|_{0}^{1}= -\left.\frac{(1-x)^{n+1}}{n+1} \right|_{0}^{1}=\frac{1}{n+1}.$$</span></p>
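<p>The identity is easy to verify for small <span class="math-container">$n$</span> with exact rational arithmetic (a quick numerical check, not a proof):</p>

```python
from fractions import Fraction
from math import comb

def lhs(n):
    # sum_{k=0}^{n} C(n,k) * (-1)^k / (k+1)
    return sum(Fraction((-1) ** k * comb(n, k), k + 1) for k in range(n + 1))

for n in range(12):
    assert lhs(n) == Fraction(1, n + 1)
```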
|
1,177,493 |
<p>If $p$ is a prime and $p \equiv 1 \bmod 4$, how many ways are there to write $p$ as a sum of two squares? Is there an explicit formulation for this?</p>
<p>There's a theorem that says that a prime $p \equiv 1 \bmod 4$ is a sum of two squares, so this number must be at least 1. There's also the Sum of Two Squares Theorem for the prime factorization of integers, and the Pythagorean Hypotenuse Proposition, which says that a number $c$ is a hypotenuse if and only if it has a prime factor $\equiv 1 \bmod 4$. All of these theorems only assert that such representations exist. How do I (perhaps using these together) find the exact number of different ways to write such a prime as a sum of two squares?</p>
|
Gerry Myerson
| 8,269 |
<p>If $n$ has two distinct expressions as a sum of two squares, $n=a^2+b^2=c^2+d^2$, then $n$ divides $(ac+bd)(ac-bd)$. But then you can show $n$ doesn't divide either $ac+bd$ or $ac-bd$ (it's a bit tricky), from which it follows that $n$ isn't prime. </p>
<p>Then the contrapositive is that if $n$ is prime it doesn't have more than one representation as a sum of two squares. </p>
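<p>For small primes $p \equiv 1 \bmod 4$ one can check numerically that the representation is unique up to order and signs (an illustrative sketch):</p>

```python
from math import isqrt

def two_square_reps(n):
    # representations n = a^2 + b^2 with 0 < a <= b
    reps = []
    for a in range(1, isqrt(n // 2) + 1):
        b2 = n - a * a
        b = isqrt(b2)
        if b * b == b2 and b >= a:
            reps.append((a, b))
    return reps

for p in [5, 13, 17, 29, 37, 41, 53, 61]:
    assert len(two_square_reps(p)) == 1   # exactly one representation
```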
|
1,575,671 |
<p>The whole question is that <br>
If $y = f(x) = -2\cos^2x$, then what is ${d^6y \over dx^6}$ at $x = \pi/4$?</p>
<p>The key here is what does $d^6y \over dx^6$ mean?</p>
<p>I know that $d^6y \over d^6x$ means 6th derivative of y with respect to x, but I've never seen it before.</p>
|
Kaster
| 49,333 |
<p>If you recall the definition of the derivative, then you can write
\begin{align}
y'(x) &= \lim_{h \to 0} \frac {y(x+h) - y(x)}h \\
y''(x) &= \lim_{h \to 0} \frac {y'(x+h) - y'(x)}h = \lim_{h \to 0} \frac{\frac {y(x+2h) - y(x+h)}h - \frac {y(x+h) - y(x)}h}h = \lim_{h \to 0} \frac {y(x+2h) - 2y(x+h) + y(x)}{h^2}
\end{align}
so you can see, that denominator is in fact raised to the power, and numerator isn't. So, by convention
$$
y'' = \frac {d^2y}{dx^2}
$$
where $d^2y$ denotes a certain infinitesimal difference operator, and $dx^2$ is an actual power.</p>
<p>Same idea holds for higher derivatives. </p>
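<p>The forward difference quotient from the derivation can be checked against a known second derivative (a rough numerical sketch; the step size is chosen to balance truncation and round-off error):</p>

```python
import math

def second_diff(y, x, h):
    # the forward difference quotient (y(x+2h) - 2y(x+h) + y(x)) / h^2
    return (y(x + 2 * h) - 2 * y(x + h) + y(x)) / h ** 2

x = 0.7
approx = second_diff(math.sin, x, 1e-4)
assert abs(approx - (-math.sin(x))) < 1e-3   # (sin x)'' = -sin x
```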
|
1,738,153 |
<p>I know the definition is given as follows:</p>
<p>A map $p: G \rightarrow GL(V)$ such that $p(g_1g_2)=p(g_1)p(g_2)$ but I still do not really understand what this means</p>
<p>Can someone help me gain some intuition for this - perhaps a basic example?</p>
<p>Thanks</p>
|
Peter Franek
| 62,009 |
<p>The intuition is that $G$ is the symmetry group of something. For example, if you study classical physics, it may be the group of Euclidean transformations. $V$ is a space whose elements are objects (velocity, force, an electron in quantum mechanics, ...) as seen by an observer. The action of the group corresponds to a change of observer. A different observer sees the "vector" differently. But the change is assumed to be linear and invertible, in analogy with an elementary change of basis in linear algebra. Everybody "sees" the zero vector as zero. (There is "no force" / "no electron" etc.)</p>
<p>Whatever an electron is, if it is a "vector" and you want to operate in the special relativity framework, then the underlying vector space should be a representation of the Poincaré group (elements of the Poincaré group = changes of observer in special relativity). Quantum people also require a scalar product such that $\langle gx, gy\rangle=\langle x, y\rangle$ for any $g\in G$, $x,y\in V$. This gives you restrictions on which spaces are possible as models for some objects.</p>
<p>(At least this is my understanding of the motivation from physics, and why unitary representations of $SL(2,\Bbb C)$ were so much studied: ($SL(2,\Bbb C)=Spin(3,1)$, a double-cover of the Lorentz group).)</p>
|
151,937 |
<p>In <code>FindGraphCommunities</code>, how can one find the vertices associated with the edges that are found to connect one or more communities?</p>
|
David G. Stork
| 9,735 |
<p>Start with a community graph:</p>
<pre><code>CommunityGraphPlot[g = RandomGraph[{20, 50}]]
</code></pre>
<p>Find the list of vertices in each community:</p>
<pre><code>mycommunitylists = FindGraphCommunities[g]
</code></pre>
<p>(*</p>
<p>{{3, 5, 9, 11, 12, 13, 14, 16, 18}, {1, 2, 7, 10, 15, 19, 20}, {4, 6,
8, 17}}</p>
<p>*)</p>
<p>Get an edge list of the full graph:</p>
<pre><code>EdgeList[g]
</code></pre>
<p>(*</p>
<p>{1 <-> 6, 1 <-> 7, 1 <-> 10, 1 <-> 11, 1 <-> 19, 1 <-> 20, 2 <-> 8,
2 <-> 10, 2 <-> 11, 2 <-> 19, 3 <-> 5, 3 <-> 6, 3 <-> 9, 3 <-> 11,
3 <-> 13, 3 <-> 16, 4 <-> 6, 4 <-> 8, 4 <-> 10, 5 <-> 9, 5 <-> 11,
5 <-> 13, 5 <-> 18, 6 <-> 12, 6 <-> 18, 6 <-> 19, 7 <-> 8, 7 <-> 10,
7 <-> 13, 7 <-> 14, 7 <-> 15, 7 <-> 16, 8 <-> 14, 8 <-> 17, 8 <-> 18,
9 <-> 13, 9 <-> 15, 9 <-> 16, 9 <-> 20, 10 <-> 15, 10 <-> 18,
10 <-> 20, 11 <-> 14, 11 <-> 16, 12 <-> 13, 12 <-> 16, 13 <-> 14,
14 <-> 16, 15 <-> 16, 15 <-> 19}</p>
<p>*)</p>
<p>Convert each edge to a list of adjacent vertices:</p>
<pre><code>hh = {#[[1]], #[[2]]} & /@ EdgeList[g]
</code></pre>
<p>(*</p>
<p>{{1, 6}, {1, 7}, {1, 10}, {1, 11}, {1, 19}, {1, 20}, {2, 8}, {2,
10}, {2, 11}, {2, 19}, {3, 5}, {3, 6}, {3, 9}, {3, 11}, {3, 13}, {3,
16}, {4, 6}, {4, 8}, {4, 10}, {5, 9}, {5, 11}, {5, 13}, {5,
18}, {6, 12}, {6, 18}, {6, 19}, {7, 8}, {7, 10}, {7, 13}, {7,
14}, {7, 15}, {7, 16}, {8, 14}, {8, 17}, {8, 18}, {9, 13}, {9,
15}, {9, 16}, {9, 20}, {10, 15}, {10, 18}, {10, 20}, {11, 14}, {11,
16}, {12, 13}, {12, 16}, {13, 14}, {14, 16}, {15, 16}, {15, 19}}</p>
<p>*)</p>
<p>Search through all edges for ones linking community $i$ with community $j$, e.g. for community 1 and community 2,</p>
<pre><code>Select[hh,
MemberQ[mycommunitylists[[1]], #[[1]]] &&
MemberQ[mycommunitylists[[2]], #[[2]]] &]
</code></pre>
<p>(If, for instance, you had a large number of communities and wanted the links between community 5 and 9, then just use:</p>
<pre><code>Select[hh,
MemberQ[mycommunitylists[[5]], #[[1]]] &&
MemberQ[mycommunitylists[[9]], #[[2]]] &]
</code></pre>
<p>)</p>
<p>(*</p>
<p>{{9, 15}, {9, 20}}</p>
<p>*)</p>
<pre><code>DeleteDuplicates[Flatten[%]]
</code></pre>
<p>(*</p>
<p>{9, 15, 20}</p>
<p>*)</p>
|
2,877,576 |
<p>Is it true that in the equation $Ax=b$, if $A$ is a square $n\times n$ matrix of rank $n$, then the augmented matrix $[A|b]$ will always have rank $n$?</p>
<p>$b$ is a column vector with non-zero values.
$x$ is a column vector of $n$ variables.</p>
<p>If not then please provide an example.</p>
|
Phil H
| 554,494 |
<p><a href="https://i.stack.imgur.com/QJ9We.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QJ9We.jpg" alt="enter image description here"></a></p>
<p>We can treat this by maximizing the cross-sectional area $A$.</p>
<p>$A = (100 - x^2)^{.5} (20 + x)$ where $x$ is the apex height of the roof</p>
<p>$dA/dx = \frac{-x(20+x)}{(100-x^2)^{.5}} + (100-x^2)^{.5}$</p>
<p>A is max when $dA/dx = 0$</p>
<p>$0 = \frac{-x(20+x)}{(100-x^2)^{.5}} + (100-x^2)^{.5}$</p>
<p>$\frac{x(20+x)}{(100-x^2)^{.5}} = (100-x^2)^{.5}$</p>
<p>$20x + x^2 = 100 - x^2$</p>
<p>$2x^2 + 20x - 100 = 0$</p>
<p>$x^2 +10x - 50 = 0$</p>
<p>This yields a positive $x = 3.660254$</p>
<p>Angle $\alpha = 90 + \sin^{-1}(3.660254/10)$</p>
<p>$\alpha = 111.47$ deg</p>
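<p>As a quick numerical cross-check of this result (my addition; the side length $10$ and base $20$ are taken from the figure above):</p>

```python
import math

def area(x):
    """Cross-sectional area A(x) = sqrt(100 - x^2) * (20 + x) for apex height x."""
    return math.sqrt(100 - x**2) * (20 + x)

# Positive root of x^2 + 10x - 50 = 0
x_opt = -5 + 5 * math.sqrt(3)
alpha = 90 + math.degrees(math.asin(x_opt / 10))

print(round(x_opt, 6), round(alpha, 2))  # 3.660254 111.47

# The critical point should beat nearby apex heights
assert area(x_opt) > area(x_opt - 0.1)
assert area(x_opt) > area(x_opt + 0.1)
```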
|
1,538,496 |
<p>I came across this riddle during a job interview and thought it was worth sharing with the community as I thought it was clever:</p>
<blockquote>
<p>Suppose you are sitting at a perfectly round table with an adversary about to play a game. Next to each of you is an infinitely large bag of pennies. The goal of the game is to be the player who is able to put the last penny on the table. Pennies cannot be moved once placed and cannot be stacked on top of each other; also, players place 1 penny per turn. There is a strategy to win this game every time. Do you move first or second, and what is your strategy?</p>
</blockquote>
<p>JMoravitz has provided the answer (hidden in spoilers) below in case you are frustrated!</p>
|
zahbaz
| 176,922 |
<p>JMoravitz has a great post. I'm happy to have arrived at the same answer, and would like to share a thought that aided my process. Consider an extreme case to find out whether to go first or second:</p>
<blockquote class="spoiler">
<p> The problem specifies pennies and a circular table. If there is a winning strategy, then without any loss of generality, we ought to win regardless of the radius of the table. Given a table whose radius is smaller than the penny's, and assuming that at least one penny would fit on the table, then you clearly want to go <em>first</em>. You would immediately win.</p>
</blockquote>
<p>What if the table was... well... normal? How to ensure your win? Don't run out of options, of course. This is the same idea as J's in the other answer.</p>
<blockquote class="spoiler">
<p> Given the centrosymmetric nature of the arrangement, after centering your original penny, player two can now be mimicked until the game's end. For any penny the opponent places at $(x,y)$, place your next one at $(-x,-y)$. I'd expect most opponents to resign after a turn or two, or suffer a slow, sad defeat.</p>
</blockquote>
<p>Zugzwang after move 1 really.</p>
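<p>The geometric fact that makes the mirroring strategy sound — point reflection through the table's center maps the table onto itself — can be sketched quickly (my addition; unit table radius assumed):</p>

```python
import math
import random

def mirror(p):
    """Second move onward: reflect the opponent's penny through the center."""
    x, y = p
    return (-x, -y)

R = 1.0  # table radius (arbitrary; the strategy works for any radius)
random.seed(0)
for _ in range(100):
    # sample a uniform random point on the table
    r = R * math.sqrt(random.random())
    t = 2 * math.pi * random.random()
    p = (r * math.cos(t), r * math.sin(t))
    q = mirror(p)
    # the mirrored point is still on the table, at the same distance from center
    assert math.hypot(*q) <= R + 1e-12
    assert abs(math.hypot(*p) - math.hypot(*q)) < 1e-12

print("mirror map keeps every point on the table")
```

<p>In the actual game, the mirrored spot is guaranteed to be empty because the position was centrosymmetric before the opponent's move — exactly the invariant the strategy maintains.</p>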
|
1,961,727 |
<p>As far as I understood <a href="https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process">Gram–Schmidt orthogonalization</a> starts with a set of linearly independent vectors and produces a set of mutually orthonormal vectors that spans the same space that starting vectors did.</p>
<p>I have no problem understanding the algorithm, but here is a thing I fail to get. Why do I need to do all these calculations? For example, instead of doing the calculations provided in that wiki page in example section, why can't I just grab two basis vectors $w_1 = (1, 0)'$ and $w_2 = (0, 1)'$? They are clearly orthonormal and span the same subspace as the original vectors $v_1 = (3, 1)'$, $v_2 = (2, 2)'$.</p>
<p>It is clear that I'm missing something important, but I can't see what exactly.</p>
|
Sean Lake
| 153,694 |
<p>You can also get into function spaces where it's not clear what the basis you can just grab from is. The Legendre polynomials can be constructed by starting with the functions $1$ and $x$ on the interval $x \in [-1,1]$, and using Gram-Schmidt orthogonalization to construct the higher order ones. The second order polynomial is constructed by removing the component of $x^2$ that points in the direction of $1$, for example.</p>
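<p>A minimal sketch of that construction (my addition; exact arithmetic via <code>fractions</code>, with the inner product $\langle p, q\rangle = \int_{-1}^{1} p(x)\,q(x)\,dx$ on coefficient lists):</p>

```python
from fractions import Fraction

def inner(p, q):
    """<p, q> = integral of p*q over [-1, 1]; polynomials are coefficient lists,
    index = power of x. Integral of x^k over [-1, 1] is 2/(k+1) for even k, else 0."""
    total = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if (i + j) % 2 == 0:
                total += Fraction(a) * Fraction(b) * Fraction(2, i + j + 1)
    return total

def gram_schmidt(basis):
    """Classical Gram-Schmidt (without normalization) in this function space."""
    ortho = []
    for v in basis:
        w = [Fraction(c) for c in v]
        for u in ortho:
            c = inner(w, u) / inner(u, u)
            padded = u + [Fraction(0)] * (len(w) - len(u))
            w = [a - c * b for a, b in zip(w, padded)]
        ortho.append(w)
    return ortho

# Orthogonalize the monomials 1, x, x^2
result = gram_schmidt([[1], [0, 1], [0, 0, 1]])
for poly in result:
    print(poly)
# The outputs are proportional to the Legendre polynomials: 1, x, x^2 - 1/3
```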
|
3,124,158 |
<p>So what I want to prove is
<span class="math-container">$$ |xy+xz+yz- 2(x+y+z) + 3| \leq |x^2+y^2+z^2-2(x+y+z)+3| $$</span>
for <span class="math-container">$x,y,z\in \mathbb{R}$</span>, and I'm aware that the RHS is just <span class="math-container">$|(x-1)^2+(y-1)^2+(z-1)^2|$</span>.</p>
<p>Now I'm able to prove that <span class="math-container">$ x^2+y^2+z^2 \geq xy+xz+yz $</span> as this just follows from the AM-GM inequality. So I know that the statement <em>without</em> the absolute values must be true, i.e.
<span class="math-container">$$ xy+xz+yz- 2(x+y+z) + 3 \leq x^2+y^2+z^2-2(x+y+z)+3 $$</span>
But I can't see why I'm safe to just put absolute values on both sides here. Because I'm not sure why the LHS is guaranteed to be smaller in magnitude than the RHS?</p>
<p>(I thought about Cauchy-Schwarz being hidden here but then I realised that I could not see how.)</p>
<p>Edit: Alternatively I also understand that
<span class="math-container">$$ |xy+xz+yz| \leq |xy|+|xz|+|yz| \leq |x|^2+|y|^2+|z|^2 = x^2+y^2+z^2 = |x^2+y^2+z^2| $$</span>
but then if I try to adapt this path, the <span class="math-container">$-2(x+y+z) $</span> bit throws me off</p>
|
Michael Rozenberg
| 190,319 |
<p>Since <span class="math-container">$$x^2+y^2+z^2-2(x+y+z)+3=\sum_{cyc}(x-1)^2\geq0,$$</span> we need to prove that <span class="math-container">$$ x^2+y^2+z^2-2(x+y+z)+3 \geq xy+xz+yz- 2(x+y+z) + 3\geq$$</span>
<span class="math-container">$$\geq-(x^2+y^2+z^2-2(x+y+z)+3)$$</span> </p>
<p>The left inequality is <span class="math-container">$$\sum_{cyc}(x-y)^2\geq0,$$</span> which is true, and the right one is
<span class="math-container">$$\sum_{cyc}(x^2+xy-4x+2)\geq0,$$</span> which is true for all reals <span class="math-container">$x$</span>, <span class="math-container">$y$</span> and <span class="math-container">$z$</span>.</p>
<p>We can prove it by numbers of ways, but I like the following.</p>
<p>Let <span class="math-container">$x+y+z=3u$</span>, <span class="math-container">$xy+xz+yz=3v^2$</span>, where <span class="math-container">$v^2$</span> can be negative, and <span class="math-container">$xyz=w^3$</span>.</p>
<p>Thus, the last inequality does not depend on <span class="math-container">$w^3$</span>, which means it's enough to prove it for an extreme value of <span class="math-container">$w^3$</span>, which occurs when two of the variables are equal.</p>
<p>Let <span class="math-container">$y=x$</span>.</p>
<p>Thus, we need to prove that
<span class="math-container">$$3x^2+2(z-4)x+z^2-4z+6\geq0,$$</span> for which it's enough to prove that
<span class="math-container">$$(z-4)^2-3(z^2-4z+6)\leq0$$</span> or <span class="math-container">$$(z-1)^2\geq0$$</span> and we are done!</p>
<p>Actually, the inequality <span class="math-container">$$\sum_{cyc}(x^2+xy-4x+2)\geq0$$</span> is, up to a factor of <span class="math-container">$2$</span>, just
<span class="math-container">$$\sum_{cyc}(x+y-2)^2\geq0,$$</span> but it's not so easy to see. </p>
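<p>For completeness (my addition), that last remark can be checked by expanding $(x+y-2)^2 = x^2+y^2+2xy-4x-4y+4$ and summing cyclically:</p>

```latex
\sum_{cyc}(x+y-2)^2
= 2\left(x^2+y^2+z^2\right) + 2(xy+yz+zx) - 8(x+y+z) + 12
= 2\sum_{cyc}\left(x^2+xy-4x+2\right).
```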
|
1,913,689 |
<blockquote>
<p>Let $f: X \rightarrow Y$ be a function. $A \subset X$ and $B \subset Y$.
Prove $A \subset f^{-1}(f(A))$.</p>
</blockquote>
<p>Here is my approach. </p>
<p>Let $x \in A$. Then there exists some $y \in f(A)$ such that $y = f(x)$. By the definition of inverse function, $f^{-1}(f(x)) = \{ x \in X$ such that $y = f(x) \}$. Thus $x \in f^{-1}(f(A)).$</p>
<p>Does this look OK, and how can I improve it?</p>
|
Michael Hardy
| 11,667 |
<p>"there exists some $y\in f(A)$ such that $y=f(x)$" is a cumbersome way of expressing it. I'd just say "let $y = f(x)$." Also, I would avoid using the same letter, $x$, in two different senses, especially in view of the fact that not every point whose image under $f$ is equal to $f(x)$ needs to be the same as $x.$</p>
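<p>A version of the argument incorporating these suggestions might read (my addition):</p>

```latex
% Cleaned-up proof, avoiding reuse of the letter x:
\text{Let } a \in A \text{ and set } y := f(a).
\text{ Then } y \in f(A), \text{ so }
a \in \{t \in X : f(t) \in f(A)\} = f^{-1}(f(A)).
\text{ Hence } A \subset f^{-1}(f(A)). \qquad \blacksquare
```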
|
1,017,707 |
<p>Are there any proofs of this equality online? I'm just looking for something very simply that I can self-verify. My textbook uses the result without a proof, and I want to see what a proof would look like here.</p>
|
mookid
| 131,738 |
<p>\begin{align}
e^{inx} &= (\cos x + i\sin x)^n
\\&= \sum_{k=0}^n \binom nk\cos^{n-k}(x) i^k \sin^k (x)
\\&= \sum_{k=0}^{\lfloor n/2\rfloor} \binom n{2k}\cos^{n-2k}(x) i^{2k} \sin^{2k} (x) +
\sum_{k=0}^{\lfloor (n-1)/2\rfloor} \binom n{2k+1}\cos^{n-2k-1}(x)
i^{2k+1} \sin^{2k+1} (x)
\end{align}</p>
<p>Take the real part of both sides and you get
$$
\cos nx =\sum_{k=0}^{\lfloor n/2\rfloor} \binom n{2k}\cos^{n-2k}(x)
(-1)^k (1-\cos^2x)^k
$$</p>
<p>The last expression is clearly a polynomial of degree $n$ in $\cos (x)$.</p>
<hr>
<p>Alternative:</p>
<p>as $$
\cos ((n+1)x) + \cos ((n-1)x) = 2\cos (x) \cos (nx)
$$
you can prove it directly using induction.</p>
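<p>The recurrence in the alternative proof generates exactly the Chebyshev polynomials $T_n$, with $T_n(\cos x) = \cos nx$; a quick numerical sketch of the induction (my addition):</p>

```python
import math

def cheb(n, c):
    """Evaluate T_n(c) via T_0 = 1, T_1 = c, T_{n+1} = 2*c*T_n - T_{n-1},
    which is the recurrence cos((n+1)x) = 2*cos(x)*cos(nx) - cos((n-1)x)."""
    t_prev, t_curr = 1.0, c
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * c * t_curr - t_prev
    return t_curr

x = 0.7
for n in range(8):
    # cos(nx) really is a polynomial in cos(x)
    assert abs(math.cos(n * x) - cheb(n, math.cos(x))) < 1e-12

print("cos(nx) = T_n(cos x) verified for n = 0..7")
```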
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.