qid | question | author | author_id | answer |
---|---|---|---|---|
1,017,707 |
<p>Are there any proofs of this equality online? I'm just looking for something very simple that I can verify myself. My textbook uses the result without a proof, and I want to see what a proof would look like here.</p>
|
Adhvaitha
| 191,728 |
<p>We have
\begin{align}
\cos((n+1)x) & = \cos(x) \cos(nx) - \sin(x) \sin(nx)\\
& = \cos(x) \cos(nx) + \dfrac{\cos((n+1)x) - \cos((n-1)x)}2
\end{align}
Hence,
$$2 \cos((n+1)x) = 2\cos(x) \cos(nx) + \cos((n+1)x) - \cos((n-1)x)$$
Therefore,
$$\cos((n+1)x) = 2\cos(x) \cos(nx) - \cos((n-1)x)$$
Now use induction to conclude what you want.</p>
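Before running the induction, the recurrence is easy to sanity-check numerically; a quick sketch (the helper name is mine, not part of the answer):

```python
import math

def check_recurrence(n, x, tol=1e-12):
    # cos((n+1)x) should equal 2 cos(x) cos(nx) - cos((n-1)x)
    lhs = math.cos((n + 1) * x)
    rhs = 2.0 * math.cos(x) * math.cos(n * x) - math.cos((n - 1) * x)
    return abs(lhs - rhs) < tol

assert all(check_recurrence(n, x) for n in range(1, 10) for x in (0.3, 1.1, 2.7))
```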
|
999,147 |
<p>I'm looking to gain a better understanding of how the cofinite topology applies to $\mathbb{R}$.
I know the definition of this topology, but I'm specifically looking for properties such as the closure, interior, set of limit points, or boundary of a subset $A \subseteq \mathbb{R}$, and how these change based on whether $A$ is closed, open, or clopen. </p>
<p>Any help would be appreciated. </p>
<p>Note: I have only the most basic of definitions for closure, interior, etc. </p>
<p>Thank you! </p>
|
Jonas Gomes
| 138,672 |
<p>The co-finite topology has a good property: it is the minimal $T_1$ topology, so you should expect every stronger separation property (Hausdorff, regular, etc.) to fail.</p>
<p>For example, every closed set is either finite or $\mathbb{R}$. So the closure of any set is the set itself (if it is finite) or $\mathbb{R}$ (if it is infinite).</p>
<p>Every open set is infinite or the empty set, so the interior of any finite set is $\emptyset$. </p>
<p>So there are no clopen sets (except $\mathbb{R}$ and $\emptyset$).</p>
<p>The boundary is $\partial A = \overline{A} \cap \overline{\mathbb{R}\setminus A}$; so, if $A$ is open, its boundary is precisely $\mathbb{R}\setminus A$. If $A$ is closed, the closure of $\mathbb{R}\setminus A$ is $\mathbb{R}$, and then $\partial A = \overline{A} = A$.</p>
<p>The set of limit points of $A$ is $\emptyset$ if $A$ is finite, and is all of $\mathbb{R}$ if $A$ is infinite.</p>
|
2,755,733 |
<p>Why is heropup's answer to this question, <a href="https://math.stackexchange.com/questions/625112/if-tossing-a-coin-400-times-we-count-the-heads-what-is-the-probability-that-t">If, tossing a coin 400 times, we count the heads, what is the probability that the number of heads is [160,190]?</a>, written the way it is?</p>
<p>I don't understand the blue text; I think it should be <strong>190</strong> instead of $\color{blue}{200}.$</p>
<p>And why do the step $\color{blue}{\Pr[159.5 \le X \le 200.5]}$ when you can pass directly to standardization?</p>
<p><strong>This is his/her answer:</strong></p>
<p>With $n = 400$ trials, the exact probability distribution for the number of heads $X$ observed is given by $X \sim {\rm Binomial}(n = 400, p = 1/2)$, assuming the coin is fair. Since calculating $\Pr[160 \le X \le \color{blue}{200}]$ requires a computer, and $n$ is large, we can approximate the distribution of $X$ as ${\rm Normal}(\mu = np = 200, \sigma^2 = np(1-p) = 100)$. Thus $$\begin{align*} \Pr[160 \le X \le 200] &\approx \color{blue}{\Pr[159.5 \le X \le 200.5] }\\ &= \Pr\left[\frac{159.5 - 200}{\sqrt{100}} \le \frac{X - \mu}{\sigma} \le \frac{200.5 - 200}{\sqrt{100}} \right] \\ &= \Pr[-4.05 \le Z \le 0.05] \\ &= \Phi(0.05) - \Phi(-4.05) \\ &\approx 0.519913. \end{align*}$$ Note that we employed continuity correction for this calculation. The exact probability is $0.5199104479\ldots$.</p>
<p>A similar calculation applies for $\Pr[160 \le X \le 190]$. Using the normal approximation to the binomial, you would get an approximate value of $0.171031$. Using the exact distribution, the probability is $0.17103699497659\ldots$.</p>
|
farruhota
| 425,072 |
<p>Note that the binomial probability distribution is a discrete probability distribution, while the normal probability distribution is a continuous one. </p>
<p>When the binomial distribution is approximated (under certain conditions) by the normal distribution, the so-called continuity correction should be applied. Refer to the graph below (source: <a href="https://en.wikipedia.org/wiki/File:Binomial_Distribution.svg" rel="nofollow noreferrer">wikipedia</a>):</p>
<p>$\hspace{6cm}$<img src="https://i.stack.imgur.com/rkDe0.png" alt="1"></p>
<p>For example:
$$P(1\le \underbrace{X}_{\text{binomial rv}}\le 4) = P(1-0.5\le \underbrace{X}_{\text{normal rv}}\le 4+0.5)=\\
P\left(\frac{(1-0.5)-np}{\sqrt{npq}}\le \underbrace{Z}_{\text{st.normal rv}}\le \frac{(4+0.5)-np}{\sqrt{npq}}\right),$$
because:</p>
<p>$\hspace{2cm}$<img src="https://i.stack.imgur.com/PQ2pB.png" alt="2"></p>
|
3,983,914 |
<p><strong>Preliminary properties</strong>: Let the state vector <span class="math-container">$x(t)=[x_1(t),\dots,x_n(t)]^T\in\mathbb{R}^n$</span> be constrained to the dynamical system
<span class="math-container">$$
\dot{x} = Ax +
\begin{bmatrix}
\phi_1(x_1) \\
\vdots \\
\phi_n(x_1) \\
\end{bmatrix}, \ \ \ \ x(0) = x_0
$$</span>
where <span class="math-container">$A$</span> is defined by:
<span class="math-container">$$
A =
\begin{bmatrix}
\lambda_1 & 1 & 0 &\cdots& 0\\
0 & \lambda_2 & 1 &\ddots&\vdots\\
\vdots&\ddots&\ddots&\ddots&0\\
0&\cdots&0&\lambda_{n-1}& 1\\
0&\cdots&0&0&\lambda_n
\end{bmatrix}
$$</span>
with <span class="math-container">$\lambda_i>0$</span>, and <span class="math-container">$\phi_i(x_1) = \beta_i |x_1|^{\alpha_i}\text{sign}(x_1), \beta_i>0$</span>, <span class="math-container">$0<\alpha_i<1$</span>.</p>
<p><strong>Question:</strong> Is it possible to show that for any initial condition <span class="math-container">$x_0\neq 0$</span>, the solution <span class="math-container">$x(t)$</span> either converges to the origin or satisfies <span class="math-container">$\lim_{t\to\infty}\|x(t)\| = +\infty$</span>, but cannot remain on a bounded trajectory other than staying at the origin?</p>
<p>Concretely, what additional structure or conditions on the system or the initial condition do we require to show this?</p>
<p>In case you find this useful, here are my attempts to understand/solve the problem.</p>
<p><strong>Attempt 1</strong>: I was trying to use results such as the ones from <a href="https://www.jstor.org/stable/pdf/2099661.pdf?refreqid=excelsior%3A03f1446d2fac90e5743319c6fb03b263**" rel="nofollow noreferrer">here</a>, which can conclude what I want but require finding a Lyapunov-like function (not necessarily positive definite) for which <span class="math-container">$\ddot{V}\neq 0, x\neq 0$</span>. However, I haven't been able to come up with a suitable function.</p>
<p><strong>Attempt 2</strong>: The differential equation has an "explicit" solution (not precisely explicit, but it can be expressed as)
<span class="math-container">$$
x(t) = e^{At}x_0 + e^{At}\int_0^te^{-As}\Phi(x_1(s))ds
$$</span>
where <span class="math-container">$\Phi(x_1) = [\phi_1(x_1),\dots,\phi_n(x_1)]^T$</span>. So I wanted to proceed by contradiction: assume that there exist <span class="math-container">$b,B>0$</span> and <span class="math-container">$T>0$</span> such that <span class="math-container">$b\leq \|x(t)\|\leq B$</span> for all <span class="math-container">$t\geq T$</span>. Hence,
<span class="math-container">$$
b\leq \left\|e^{At}x_0 + e^{At}\int_0^te^{-As}\Phi(x_1(s))ds\right\|\leq B
$$</span>
Notice that in this case there should be <span class="math-container">$c,C>0$</span> such that <span class="math-container">$0<c\leq\|\Phi(x_1(t))\|\leq C $</span> for all <span class="math-container">$t\geq T$</span>. From here I tried to obtain a contradiction, for example by using <span class="math-container">$C\geq\|\Phi(x_1(t))\|$</span> to show that <span class="math-container">$B\leq\|x(t)\|$</span>. But unfortunately I haven't obtained anything positive in this direction either.</p>
<p><strong>Attempt 3</strong>: Can the Bendixson–Dulac criterion (see Theorem 11 <a href="https://www.damtp.cam.ac.uk/user/examples/D26e.pdf" rel="nofollow noreferrer">here</a>) be used to conclude something for this system? It is easy to verify that if we write this system as <span class="math-container">$\dot{x} = f(x)$</span>, we obtain <span class="math-container">$\nabla\cdot f(x)>0$</span>.</p>
<p>I know that neither my attempts nor my exposition here are perfect. However, I'm looking for suggestions, references, or any idea that might help me understand this problem better.</p>
|
Kwin van der Veen
| 76,466 |
<p>Inspired by the answer of open problem, one can say a bit more in general when considering only the <span class="math-container">$\alpha_i=1$</span> cases. Admittedly, it is stated that <span class="math-container">$0 < \alpha_i < 1$</span>, so technically these cases just barely violate the stated domain for each <span class="math-container">$\alpha_i$</span>. In these cases the dynamics are linear and can be described by <span class="math-container">$\dot{x} = M\,x$</span>, with</p>
<p><span class="math-container">$$
M =
\begin{bmatrix}
\lambda_1 + \beta_1 & 1 & 0 & \cdots & 0 \\
\beta_2 & \lambda_2 & 1 & \ddots & \vdots \\
\vdots & 0 & \ddots & \ddots & 0 \\
\beta_{n-1} & \vdots & \ddots & \lambda_{n-1} & 1 \\
\beta_n & 0 & \dots & 0 & \lambda_n
\end{bmatrix}. \tag{1}
$$</span></p>
<p>These kinds of systems can have non-zero bounded trajectories if <span class="math-container">$M$</span> has at least one eigenvalue equal to zero. A necessary condition for this is <span class="math-container">$\det(M) = 0$</span>, since the determinant of a matrix is equal to the product of its eigenvalues.</p>
<p>It can be shown that in general the determinant of <span class="math-container">$(1)$</span> is equal to</p>
<p><span class="math-container">$$
\det(M) = \prod_{k=1}^n \lambda_k + \sum_{k=1}^n \left((-1)^{k+1} \beta_k \prod_{m = k+1}^n \lambda_m\right). \tag{2}
$$</span></p>
<p>Even though it holds that <span class="math-container">$\lambda_i,\beta_i > 0$</span> for all <span class="math-container">$i = 1, \cdots, n$</span>, due to the minus signs inside <span class="math-container">$(2)$</span> it is possible to have that <span class="math-container">$\det(M) = 0$</span> for <span class="math-container">$n \ge 2$</span>.</p>
<p>For example, <span class="math-container">$n = 2$</span> with <span class="math-container">$\lambda_1,\lambda_2,\beta_1 = 1$</span> and <span class="math-container">$\beta_2 = 2$</span> yields</p>
<p><span class="math-container">$$
M =
\begin{bmatrix}
2 & 1 \\
2 & 1
\end{bmatrix}, \tag{3}
$$</span></p>
<p>which has the eigenvalues <span class="math-container">$0$</span> and <span class="math-container">$3$</span> and thus can have non-zero bounded trajectories if the initial condition <span class="math-container">$x(0)$</span> is chosen such that it doesn't excite the unstable mode associated with the eigenvalue <span class="math-container">$3$</span>.</p>
<hr />
<p>Another way to construct a counterexample would be to find a system satisfying your description that has equilibria other than the origin. It can be noted that for the linear cases the mode whose associated eigenvalue is zero gives a line of equilibria. For all choices of <span class="math-container">$\alpha_i$</span>, using <span class="math-container">$x_1 = 1$</span> yields <span class="math-container">$\phi_i(x_1) = \beta_i$</span>. Therefore, a non-zero equilibrium can be constructed by solving <span class="math-container">$\dot{x} = 0$</span>. In order to split the knowns from the unknowns I define <span class="math-container">$x' = \begin{bmatrix}x_2 & \cdots & x_n\end{bmatrix}^\top$</span>, such that <span class="math-container">$\dot{x} = 0$</span> can be split into <span class="math-container">$\dot{x}_1 = 0$</span> and <span class="math-container">$\dot{x}' = 0$</span>. Substituting <span class="math-container">$x_1 = 1$</span> into those two expressions yields</p>
<p><span class="math-container">$$
\lambda_1 + x_2 + \beta_1 = 0, \tag{4}
$$</span></p>
<p><span class="math-container">$$
A'\,x' + B = 0, \tag{5}
$$</span></p>
<p>with <span class="math-container">$B = \begin{bmatrix}\beta_2 & \cdots & \beta_n\end{bmatrix}^\top$</span> and</p>
<p><span class="math-container">$$
A' =
\begin{bmatrix}
\lambda_2 & 1 & 0 & \cdots & 0 \\
0 & \lambda_3 & 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & 0 & \lambda_{n-1} & 1 \\
0 & \cdots & 0 & 0 & \lambda_n
\end{bmatrix}. \tag{6}
$$</span></p>
<p>Solving <span class="math-container">$(5)$</span> for <span class="math-container">$x'$</span> yields <span class="math-container">$x' = - A'^{-1} B$</span>. Therefore <span class="math-container">$B$</span> can be chosen to ensure that <span class="math-container">$\beta_i > 0$</span> for <span class="math-container">$i=2,\cdots,n$</span>. However, this doesn't ensure that <span class="math-container">$\beta_1 > 0$</span>. Namely, solving <span class="math-container">$(4)$</span> for <span class="math-container">$\beta_1$</span> yields <span class="math-container">$\beta_1 = -\lambda_1 - x_2$</span>, where <span class="math-container">$x_2$</span> can be obtained from the solution for <span class="math-container">$x'$</span>. It can be noted that scaling <span class="math-container">$B$</span> by a positive scalar <span class="math-container">$\gamma$</span> also scales <span class="math-container">$x'$</span> by the same scalar. Therefore, if for some valid <span class="math-container">$B$</span> one obtains a negative value for <span class="math-container">$x_2$</span>, one could always find a large enough <span class="math-container">$\gamma$</span> such that after scaling <span class="math-container">$\beta_1$</span> would become positive. The inverse of <span class="math-container">$A'$</span> from <span class="math-container">$(6)$</span> can be shown to be equal to</p>
<p><span class="math-container">$$
A'^{-1}_{ij} = \left\{
\begin{array}{ll}
\frac{(-1)^{j-i}}{\prod_{k=i}^j \lambda_k} & \text{if}\ j \geq i \\
0 & \text{otherwise}
\end{array}
\right., \tag{7}
$$</span></p>
<p>where <span class="math-container">$X_{ij}$</span> denotes the element of matrix <span class="math-container">$X$</span> at its <span class="math-container">$i$</span>th row and <span class="math-container">$j$</span>th column. Given that each element of <span class="math-container">$B$</span> is positive the expression for <span class="math-container">$x_2$</span> would thus be a sum of alternating negative and positive terms. Therefore, by choosing some of the odd terms of <span class="math-container">$B$</span> sufficiently large would guarantee that the associated solution for <span class="math-container">$x_2$</span> would be negative, which thus ensures that <span class="math-container">$\beta_1$</span> can be made positive.</p>
<p>For example, <span class="math-container">$n = 2$</span> with <span class="math-container">$\lambda_1,\lambda_2 = 1$</span> and <span class="math-container">$\beta_2 = 2$</span> yields <span class="math-container">$x_{eq} = \begin{bmatrix}1 & -2\end{bmatrix}^\top$</span> as an equilibrium for every possible <span class="math-container">$\alpha_i$</span>. It can be noted that the fact that the expression for <span class="math-container">$\dot{x}$</span> is <a href="https://en.wikipedia.org/wiki/Even_and_odd_functions" rel="nofollow noreferrer">odd</a> in <span class="math-container">$x$</span> implies that <span class="math-container">$-x_{eq}$</span> (thus <span class="math-container">$\begin{bmatrix}-1 & 2\end{bmatrix}^\top$</span>) is an equilibrium as well.</p>
<p>However, I am not sure whether for <span class="math-container">$n \geq 2$</span> these systems always have multiple equilibrium points for an arbitrary choice of <span class="math-container">$\lambda_i,\beta_i > 0$</span>. But at least I have shown that there exist systems satisfying your description that violate your postulated limits.</p>
|
345,094 |
<p>If $f(x-1)+f(x-2) = 5x^2 - 2x + 9$</p>
<p>and</p>
<p>$f(x)= ax^2 + bx + c$</p>
<p>what would be the value of $a+b+c$?</p>
<p>I was doing</p>
<p>$f(x-1)+f(x-2)= f(x-3)$
then
$f(x)$</p>
<pre><code>a = 5
b = -2
c = 9
</code></pre>
<p>$(5-3)+(-2-3)+(9-3)$</p>
<p>But I do not think this is correct.</p>
<p>What would be the correct approach?</p>
|
Jerry
| 68,593 |
<p>If $f(x) = ax^2 + bx+ c$, what is $f(x-1)$ and what is $f(x-2)$?</p>
<p>Work those out, then add both expressions to equate to $5x^2-2x+9$.</p>
|
345,094 |
<p>If $f(x-1)+f(x-2) = 5x^2 - 2x + 9$</p>
<p>and</p>
<p>$f(x)= ax^2 + bx + c$</p>
<p>what would be the value of $a+b+c$?</p>
<p>I was doing</p>
<p>$f(x-1)+f(x-2)= f(x-3)$
then
$f(x)$</p>
<pre><code>a = 5
b = -2
c = 9
</code></pre>
<p>$(5-3)+(-2-3)+(9-3)$</p>
<p>But I do not think this is correct.</p>
<p>What would be the correct approach?</p>
|
Adi Dani
| 12,848 |
<p>$$f(x-1)+f(x-2) = 5x^2 - 2x + 9$$ and $$f(x)=ax^2+bx+c$$
For $x=1,2,3$ we get the system
$$f(0)+f(-1)=a-b+2c =12$$
$$f(1)+f(0)=a+b+2c=25$$
$$f(2)+f(1)=5a+3b+2c = 48$$
with solution
$$a=5/2,b=13/2,c=8$$
so $$f(x)=\frac{5}{2}x^2+\frac{13}{2}x+8$$</p>
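These values are easy to verify numerically; a small check using nothing beyond the equations above (the value the question asks for, $a+b+c$, comes out as well):

```python
a, b, c = 2.5, 6.5, 8.0  # the solution found above

def f(x):
    return a * x * x + b * x + c

# f(x-1) + f(x-2) reproduces 5x^2 - 2x + 9 at every integer tested
for x in range(-5, 6):
    assert f(x - 1) + f(x - 2) == 5 * x * x - 2 * x + 9

print(a + b + c)  # 17.0, the value the question asks for
```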
|
3,568,693 |
<p>I am trying to solve <span class="math-container">$n! = 10^6$</span> for <span class="math-container">$n$</span>. I thought to do this using the <a href="https://en.wikipedia.org/wiki/Gamma_function" rel="nofollow noreferrer">gamma function</a>:</p>
<p><span class="math-container">$$(n - 1)! = \Gamma(n) = \int_0^\infty x^{n - 1}e^{-x} \ dx$$</span></p>
<p>So I have that </p>
<p><span class="math-container">$$\Gamma(n + 1) = \int_0^\infty x^n e^{-x} \ dx = 10^6$$</span></p>
<p>I thought to solve this using integration by parts:</p>
<p><span class="math-container">$$\begin{align} \Gamma(n + 1) &= \int_0^\infty x^n e^{-x} \ dx \\ &= [-x^n e^{-x}]^{\infty}_0 + \int_0^{\infty} nx^{n - 1}e^{-x} \ dx \end{align}$$</span></p>
<p>But, as you can see, unless I have made a mistake, we get the term <span class="math-container">$\int_0^{\infty} nx^{n - 1}e^{-x} \ dx$</span>, which, as far as I can tell, means that we get stuck in an infinite loop of integration by parts.</p>
<p>So how do I solve this? </p>
<p>Thank you.</p>
|
Strafe Ae
| 457,404 |
<p>Using WolframAlpha, I get <span class="math-container">$n=9.4456089144163262435935599652$</span>, but that's about as far as I can figure. There's not really a good way to invert the Gamma function to solve something like that besides numerical approximation.</p>
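One such numerical approximation takes only a few lines: bisect on the log-gamma function, which is monotone increasing on the interval used here. The function name below is mine:

```python
import math

def inverse_factorial(target, lo=1.0, hi=20.0, iters=200):
    # solve n! = target, i.e. lgamma(n + 1) = log(target), by bisection;
    # lgamma(n + 1) is monotone increasing for n in [1, 20]
    want = math.log(target)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.lgamma(mid + 1.0) < want:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(inverse_factorial(10**6))  # approximately 9.4456089144..., matching the quoted value
```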
|
1,644,845 |
<blockquote>
<p>Show that $\lim_{z \to 0} \frac{\Re(z)}{z}$ doesn't exist.</p>
</blockquote>
<p>Let $z=r(\cos(\theta)+i \sin(\theta))$. So $\frac{\Re(z)}{z} =\cos ^2(\theta) - i \cos(\theta)\sin(\theta) $, and $$\lim_{z \to 0} \frac{\Re(z)}{z} = \lim_{r \to 0} (\cos ^2(\theta) - i \cos(\theta)\sin(\theta)) = (\cos ^2(\theta) - i \cos(\theta)\sin(\theta))$$</p>
<p>So the limit cannot exist, because the value depends on $\theta$.</p>
<p>Am I right? If not, is there another approach?</p>
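The path dependence in the argument is easy to see numerically; a quick sketch (the function name is mine):

```python
import cmath, math

def f(z):
    # Re(z) / z for nonzero complex z
    return z.real / z

# approach 0 along rays of fixed angle theta: the value freezes at
# cos(theta)^2 - i cos(theta) sin(theta), so the limit is path-dependent
for theta in (0.0, math.pi / 4, math.pi / 3):
    expected = math.cos(theta) ** 2 - 1j * math.cos(theta) * math.sin(theta)
    for r in (1e-1, 1e-5, 1e-9):
        z = r * cmath.exp(1j * theta)
        assert abs(f(z) - expected) < 1e-12
```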
|
Cm7F7Bb
| 23,249 |
<p>$X_1,\ldots,X_n$ are i.i.d. $\mathcal N(\mu,\sigma^2)$ random variables and $\bar X=n^{-1}\sum_{i=1}^nX_i$. The distributions and confidence intervals are as follows.</p>
<p>(a)
$$
\frac1{\sigma^2}\sum_{i=1}^n(X_i-\bar X)^2\sim\chi_{n-1}^2
$$
and
$$
\Pr\biggl(\frac{\sum_{i=1}^n(X_i-\bar X)^2}{\chi_{n-1,\alpha/2}^2}\le\sigma^2\le\frac{\sum_{i=1}^n(X_i-\bar X)^2}{\chi_{n-1,1-\alpha/2}^2}\biggr)=1-\alpha.
$$
(b)
$$
\frac1{\sigma^2}\sum_{i=1}^n(X_i-\mu)^2\sim\chi_n^2
$$
and
$$
\Pr\biggl(\frac{\sum_{i=1}^n(X_i-\mu)^2}{\chi_{n,\alpha/2}^2}\le\sigma^2\le\frac{\sum_{i=1}^n(X_i-\mu)^2}{\chi_{n,1-\alpha/2}^2}\biggr)=1-\alpha.
$$</p>
<p>(c)</p>
<p>$$
\frac{n(\bar X-\mu)^2}{\sigma^2}\sim\chi_1^2
$$
and</p>
<p>$$
\Pr\biggl(\frac{n(\bar X-\mu)^2}{\chi_{1,\alpha/2}^2}\le\sigma^2\le\frac{n(\bar X-\mu)^2}{\chi_{1,1-\alpha/2}^2}\biggr)=1-\alpha.
$$</p>
<p>I hope this helps.</p>
|
3,820,465 |
<p>I'm working on the following problem but I'm having a hard time figuring out how to do it:</p>
<p>Q: Let A and B be two arbitrary events in a sample space S. Prove or provide a counterexample:</p>
<p>If <span class="math-container">$P(A^c) = P(B) - P(A \cap B)$</span> then <span class="math-container">$P(B) = 1$</span></p>
<p>Drawing Venn diagrams I can see how this is true, as <span class="math-container">$A \subset B$</span>, but I'm not sure how to formally prove this. Any help would be great!</p>
|
herb steinberg
| 501,262 |
<p>Not true. If <span class="math-container">$A^c\subseteq B$</span>, then <span class="math-container">$B\setminus(A\cap B)=A^c$</span>, so <span class="math-container">$P(A^c)=P(B)-P(A\cap B)$</span> for any such <span class="math-container">$B$</span>. This holds since <span class="math-container">$A^c$</span> and <span class="math-container">$A\cap B$</span> are mutually exclusive and their union is <span class="math-container">$B$</span>. Any such <span class="math-container">$B$</span> with <span class="math-container">$P(B)<1$</span> is then a counterexample.</p>
|
3,356,544 |
<p>A lot of calculators actually agree with me, saying that it is defined and that the result equals 1, which makes sense to me because:</p>
<p><span class="math-container">$$ (-1)^{2.16} = (-1)^2 \cdot (-1)^{0.16} = (-1)^2\cdot\sqrt[100]{(-1)^{16}}\\
= (-1)^2 \cdot \sqrt[100]{1} = (-1)^2 \cdot 1 = 1$$</span></p>
<p>However, there are certain calculators (WolframAlpha among them) which contest this answer, and instead claim it is equal to:</p>
<p><a href="https://i.stack.imgur.com/XB8nG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XB8nG.png" alt="enter image description here"></a></p>
<p>Graphing this as an exponential function was not possible.</p>
<p>What's going on?</p>
|
LIR
| 608,434 |
<p>The problem is that the exponential function <span class="math-container">$$f:\mathbb{R}\rightarrow\mathbb{R}, f(x) = ab^x$$</span> is only defined for <span class="math-container">$b\in(0,\infty)$</span> and <span class="math-container">$a\neq0$</span>, so <span class="math-container">$(-1)^x$</span> is not an exponential function and thus doesn't have its properties. </p>
<p>On the other hand <span class="math-container">$(-1)^{2.16} = e^{2.16\log(-1)} = e^{2.16\pi i} = \cos(2.16\pi) + i \sin(2.16\pi)$</span> which gives the same value as WolframAlpha.</p>
<p>The graphing is not working because the function <span class="math-container">$(-1)^x = \cos(x\pi) + i \sin(x\pi)$</span> takes real values <span class="math-container">$\iff \sin(x\pi) = 0 \iff x\in\mathbb{Z}$</span>.</p>
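For what it's worth, Python also uses the principal branch for a negative base raised to a non-integer power, so it reproduces WolframAlpha's value; this is only a numerical illustration:

```python
import cmath, math

z = (-1) ** 2.16                 # Python returns the principal-branch complex value
w = cmath.exp(2.16j * math.pi)   # e^{2.16 pi i} = cos(2.16 pi) + i sin(2.16 pi)
assert abs(z - w) < 1e-12
print(z)  # roughly 0.8763 + 0.4818j
```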
|
3,069,987 |
<p>I know that whatever numbers you choose for x and y, if their sum equals 1 they will satisfy the equation <span class="math-container">$x^2 + y = y^2 + x$</span>.</p>
<p>Algebraic proof: </p>
<p>Given: <span class="math-container">$x + y = 1$</span></p>
<p><span class="math-container">$$LS = x^2+ y
= (1-y)^2 + y
= 1 - 2y+y^2 + y
= y^2 - y + 1$$</span></p>
<p><span class="math-container">$$RS = y^2 + x
= y^2 + (1-y)
= y^2 - y + 1$$</span></p>
<p>Therefore,<span class="math-container">$$ LS = RS $$</span></p>
<p>How can this be proved geometrically? (Ex. in a diagram of rectangular areas)</p>
<p>I tried to combine a square with side length y and a rectangle with side lengths x and x+y, but I can't seem to prove it geometrically. </p>
<p>Can someone help? </p>
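Independently of the geometric picture, the algebraic identity itself is easy to spot-check numerically:

```python
import random

# if x + y = 1, then x^2 + y = y^2 + x
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    y = 1.0 - x
    assert abs((x * x + y) - (y * y + x)) < 1e-9
```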
|
Mauro ALLEGRANZA
| 108,274 |
<p><em>Sorry for describing the shapes in words rather than drawing them</em>...</p>
<p>You have to consider for the LHS a square of side <span class="math-container">$x$</span> and a rectangle of sides <span class="math-container">$y$</span> and <span class="math-container">$x+y=1$</span>.</p>
<p>This can be decomposed into the "big" square <span class="math-container">$x^2$</span>, the "little" square <span class="math-container">$y^2$</span> and the remaining rectangle <span class="math-container">$xy$</span>.</p>
<p>For the RHS, a square of side <span class="math-container">$y$</span> and a rectangle of sides <span class="math-container">$x$</span> and <span class="math-container">$x+y=1$</span>.</p>
<p>In turn, this can be decomposed into the "little" square <span class="math-container">$y^2$</span>, the "big" square <span class="math-container">$x^2$</span> and the remaining rectangle <span class="math-container">$yx$</span>.</p>
<p>Then rotate <span class="math-container">$yx$</span>.</p>
|
3,784,872 |
<p><strong>Problem:</strong></p>
<p>Suppose <span class="math-container">$(X_n)_{n \geq 1}$</span> are independent random variables defined on <span class="math-container">$(\Omega, \mathscr{A},\mathbb{P})$</span>.
Define <span class="math-container">$Y=\limsup _{n \to \infty} \frac{1}{n} \sum_{1 \leq p \leq n}X_p$</span> and <span class="math-container">$Z=\liminf _{n \to \infty} \frac{1}{n} \sum_{1 \leq p \leq n}X_p$</span>. How can I prove that <span class="math-container">$Y$</span> and <span class="math-container">$Z$</span> are constant almost everywhere?</p>
<p><strong>My attempt:</strong></p>
<p>I know it is related to Kolmogorov's zero-one law. I can solve the simpler case <span class="math-container">$\limsup_n X_n$</span>, but I cannot solve the problem above.</p>
|
QuantumSpace
| 661,543 |
<p><strong>Hint</strong>: Show that <span class="math-container">$Y$</span> and <span class="math-container">$Z$</span> are <span class="math-container">$\mathcal{T}:= \bigcap_{k=1}^\infty \sigma(X_k, X_{k+1}, \dots)$</span>-measurable. Hence, Kolmogorov's <span class="math-container">$0$</span>-<span class="math-container">$1$</span> law applies (due to independence) and we can conclude that
<span class="math-container">$$\Bbb{P}(Y \in A) \in \{0,1\}$$</span>
for all Borel sets <span class="math-container">$A$</span>. Similarly for <span class="math-container">$Z$</span>. Subsequently, use/prove the following lemma:</p>
<p><strong>Lemma</strong>: If a random variable satisfies <span class="math-container">$\Bbb{P}(Y \in A) \in \{0,1\}$</span> for all Borel sets <span class="math-container">$A$</span>, then <span class="math-container">$Y$</span> is constant almost surely.</p>
|
875,729 |
<p>Prove, without using induction, that a real symmetric matrix $A$ can be decomposed as $A = Q^T \Lambda Q$, where $Q$ is an orthogonal matrix and $\Lambda$ is a diagonal matrix with the eigenvalues of $A$ as its diagonal elements.</p>
<p>I can see that all eigenvalues of $A$ are real, and the corresponding eigenvectors are orthogonal, but I failed to see that when putting all (interesting) eigenvectors together, they form a basis of $\mathbb{R}^n$.</p>
<p><strong>Edit</strong></p>
<p>The reason I asked this question is to show that a real symmetric matrix is diagonalizable, so let's not use that fact for a while. Other than that, any undergraduate level linear algebra can be used. </p>
<p><strong>Edit 2</strong></p>
<p>After reading <strong>Algebraic Pavel</strong>'s answer, I feel like ruling out Schur Decomposition as well, but I can't keep ruling out theorems, so...if a proof is too obvious, that's probably not what I am looking for, though it may be a technically correct answer.</p>
<p>Thanks.</p>
|
user126154
| 126,154 |
<p>A matrix $Q$ is orthogonal if and only if its columns form an orthonormal basis, if and only if $Q^{-1}=Q^T$. </p>
<p>Therefore, if there exists an orthonormal basis of eigenvectors of $A$, then the change-of-basis matrix is orthogonal. That is to say, there is an orthogonal $Q$ such that</p>
<p>$Q^{-1}AQ=\Lambda$ </p>
<p>But then $A=Q^{-1}\Lambda Q=Q^T\Lambda Q$</p>
<p>It remains to show that an orthonormal basis of eigenvectors exists.</p>
<p>Eigenspaces corresponding to different eigenvalues are orthogonal. As $A$ is diagonalizable, we have $\mathbb R^n=V_1\oplus V_2\oplus\dots\oplus V_k$ where $V_i$ is the eigenspace corresponding to the eigenvalue $\lambda_i$.</p>
<p>Each $V_i$ has an orthonormal basis $v_{i,1},\dots,v_{i,n_i}$, where $n_i=\dim V_i$. (From the comments we know that we can use this fact.)</p>
<p>Thus, putting these bases together, we get an orthonormal basis $v_{i,j}$ of $\mathbb R^n$ consisting of eigenvectors of $A$. </p>
<p><strong>Edit</strong>
If you cannot use that $A$ is diagonalizable, then use the following workaround to show that $A$ is in fact diagonalizable.</p>
<p>Write $\mathbb R^n=V_1\oplus\dots\oplus V_k\oplus W$. Where the $V_i$ are the eigenspaces of $A$ and $W$ is the orthogonal complement of $V_1\oplus\dots\oplus V_k$.</p>
<p>Then, the restriction of $A$ to $W$ is symmetric and has no eigenvectors.
Now, consider the function</p>
<p>$\max \langle Aw,w\rangle$ when $w$ runs over the space of unit vectors of $W$ (that is to say, $||w||=1$). </p>
<p>Any max point is an eigenvector, contradicting the fact that $A|_W$ has no eigenvectors. (See below.) Therefore $W$ contains no unit vectors, hence $W=\{0\}$ and
$V_1\oplus\dots\oplus V_k=\mathbb R^n$.</p>
<p>Let's see that max points are eigenvectors. We restrict to $W$, and let $F(w)=\langle Aw,w\rangle$; then the derivative of $F$ at the point $w$ in the direction $v$ is $dF_w[v]=2\langle Aw,v\rangle$ (because $A$ is symmetric). The tangent space of $\{||w||=1\}$ at the point $w$ is $w^\perp$. Thus, $w$ is a critical point if and only if $dF_w[v]=0$ for all $v\in w^\perp$, that is to say, $\langle Aw,v\rangle=0$ for all $v$ such that $\langle w,v\rangle=0$. Therefore $Aw$ must be a multiple of $w$, as the orthogonal complement of $w$ is contained in the orthogonal complement of $Aw$. </p>
<p><strong>Edit</strong> Another way to see that $A|_W$ has an eigenvector is to consider its characteristic polynomial. It has roots in $\mathbb C$ by the fundamental theorem of algebra. They are real because $A|_W$ is symmetric. So $A|_W$ has at least one eigenvalue, and hence an eigenvector.</p>
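Not a proof, but the $2\times 2$ case of the decomposition can be checked concretely; in the sketch below the function name and the rotation-angle convention are mine, and $Q$'s columns are the orthonormal eigenvectors, so $A = Q\Lambda Q^T$ (equivalently $A = \tilde Q^T\Lambda \tilde Q$ with $\tilde Q = Q^T$):

```python
import math

def eig_sym_2x2(a, b, d):
    # eigendecomposition of the symmetric matrix [[a, b], [b, d]]:
    # returns eigenvalues (l1, l2) and a rotation Q whose columns are
    # the corresponding orthonormal eigenvectors
    half_tr = (a + d) / 2.0
    disc = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    l1, l2 = half_tr + disc, half_tr - disc
    theta = 0.5 * math.atan2(2.0 * b, a - d)
    c, s = math.cos(theta), math.sin(theta)
    return (l1, l2), ((c, -s), (s, c))

# spot-check: reconstruct A entrywise from Q * diag(l1, l2) * Q^T
a, b, d = 2.0, 1.0, 3.0
(l1, l2), Q = eig_sym_2x2(a, b, d)
c, s = Q[0][0], Q[1][0]
assert abs(l1 * c * c + l2 * s * s - a) < 1e-12
assert abs((l1 - l2) * c * s - b) < 1e-12
assert abs(l1 * s * s + l2 * c * c - d) < 1e-12
```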
|
33,817 |
<p>It is an open problem to prove that $\pi$ and $e$ are algebraically independent over $\mathbb{Q}$.</p>
<ul>
<li>What are some of the important results leading toward proving this?</li>
<li>What are the most promising theories and approaches for this problem?</li>
</ul>
|
Stefan Geschke
| 7,743 |
<p>People in model theory are currently studying the complex numbers with exponentiation.
Z'ilber has an axiomatisation of an exponential field (field with exponential function) that looks like the complex numbers with exp. but satisfies Schanuel's conjecture.
He proved that there is exactly one such field of the size of $\mathbb C$.
I would find it odd if Z'ilber's field turned out to be different from the complex numbers.</p>
<p>By results of Wilkie, the reals with exponentiation are well understood, and the complex numbers with exponentiation are in some way the next step up. The model-theoretic framework
(o-minimality) that works for the reals with exp. fails for the complex numbers, but there might be a similar theory that works for the complex field with exponentiation.</p>
|
1,634,725 |
<p>It's been a long time since I've worked with sums and series, so even simple examples like this one are giving me trouble:</p>
<p>$\sum_{i=4}^N \left(5\right)^i$</p>
<p>Can I get some guidance on series like this? I'm finding different methods online but not sure which to use. I know that starting at a non-zero number also changes things.</p>
<p>My original thought was to compute $\sum_{i=0}^N 5^i - \sum_{i=0}^3 5^i$, but I'm not sure that's right.</p>
|
2.71828-asy
| 302,548 |
<p>Let $S = a + ar + ar^2 + \dots + ar^n$.</p>
<p>Then $S-Sr = (a + ar + ar^2 + ar^3 ... ar^n) - (ar + ar^2 + ar^3 + ar^4 ... ar^{n+1}) = a - ar^{n+1}$</p>
<p>Factoring $S$ out of the left-hand side, we have $S(1-r) = a-ar^{n+1}$.</p>
<p>Finally, $$S = {(a - ar^{n+1})\over(1-r)}$$</p>
<p>In your case, you are trying to find $5^4 + 5^5 + 5^6 + \dots + 5^n$.</p>
<p>You can factor out $5^4$ to get $5^4(1 + 5 + 5^2 + \dots + 5^{n-4})$.</p>
<p>Plugging in corresponding values of $a$ and $r$ into the equation above we have:
$$S = 5^4 \times {5^{n-3}-1\over4} $$</p>
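The closed form can be spot-checked against the direct sum; a small sketch (the function name is mine):

```python
def geo_sum(n):
    # closed form for 5^4 + 5^5 + ... + 5^n derived above
    return 5**4 * (5**(n - 3) - 1) // 4

# agrees with the brute-force sum for several n
assert all(geo_sum(n) == sum(5**i for i in range(4, n + 1)) for n in range(4, 12))
```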
|
1,665,833 |
<p>Given that $A \in M_{m\times n}(\mathbf{R})$, assume that $\{v_1,\dots,v_n\}$ is a basis for $\mathbb{R}^n$ such that $\{v_1,\dots,v_k\}$ is a basis for $\operatorname{Null}(A)$. </p>
<p>How would I prove that $\{Av_{k+1},\dots,Av_n\}$ spans $\operatorname{Col}(A)$?</p>
|
parsiad
| 64,601 |
<p>Z-tables are just values of the CDF
$$
F(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-y^{2}/2}dy
$$
at various points $x$. If you want to generate a Z-table, pick some $x$ points and evaluate that integral (using a numerical method of your choice).</p>
<p>As mentioned in the comments to your question, MATLAB (and other numerical software) already do this for you. Probably best not to reinvent the wheel.</p>
<hr>
<p><strong>Addendum</strong>: As @Ian mentioned, we should truncate the integral since it is on an unbounded domain. That is, we should look to compute instead</p>
<p>$$
F(x)\approx\frac{1}{\sqrt{2\pi}}\int_{w(x)}^{x}e^{-y^{2}/2}dy.
$$
The question is thus, how do we pick $w(x)$? Let's say that you are only interested in computing the integral up to $\epsilon$ accuracy. Therefore, it might be reasonable to pick $w(x)$ such that
$$
F(w(x))=\frac{1}{2}\text{erfc}(-w(x)/\sqrt{2})\leq\epsilon.
$$
A well-known bound for the erf function is
$$
\text{erfc}(x)\leq e^{-x^{2}}.
$$
Plugging this into the above,
$$
F(w(x))\leq\frac{1}{2}e^{-w(x)^{2}/2}
$$
and thus it follows that
$$
w(x)\leq-\sqrt{-2\log(2\epsilon)}.
$$</p>
<blockquote>
<p>For example, if $\epsilon=10^{-6}$, $w(x)\leq-5.1230$ (approximately). This makes sense, as 5-sigma events should not make much of a contribution in computation.</p>
</blockquote>
<p>This bound is <strong>very</strong> conservative, however, and you could/should come up with a tighter one, or one that takes into account relative instead of absolute error (also mentioned in comments).</p>
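<p>A short Python sketch of both approaches, using only the standard library (function names and step counts are our choices): the essentially exact CDF via <code>math.erf</code>, and the truncated numerical integral with the lower limit $w$ chosen from the bound above.</p>

```python
import math

def phi_cdf(x):
    # Standard normal CDF via the error function (exact up to float error)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_cdf_truncated(x, eps=1e-6, steps=10_000):
    # Numerical integration of the density on [w, x], where the lower
    # truncation point w = -sqrt(-2*log(2*eps)) comes from the erfc bound
    # above; the tail below w contributes less than eps.
    w = -math.sqrt(-2.0 * math.log(2.0 * eps))
    if x <= w:
        return 0.0
    h = (x - w) / steps
    total = 0.0
    for i in range(steps):               # midpoint rule
        y = w + (i + 0.5) * h
        total += math.exp(-y * y / 2.0)
    return total * h / math.sqrt(2.0 * math.pi)

for x in (-1.0, 0.0, 1.0, 1.96):
    assert abs(phi_cdf(x) - phi_cdf_truncated(x)) < 1e-5
```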
|
1,985,402 |
<p>I wrote down $$12 \times 0 = 0$$ Then, I divided both sides by $0$ like so:
$$12 = \frac {0}{0}$$ I know that $$ \frac {0}{x} = 0, x \in R$$
Therefore, $$12=0$$ which is a false statement. Where did I go wrong?</p>
|
user262291
| 262,291 |
<p>There are several things wrong with your proof. The first is the division by $0$ in your second line of work: division by zero is an invalid operation. Your claim that $\frac{0}{x} = 0$ for every $x \in \mathbb{R}$ is also wrong, because $x$ can be zero, and again division by zero is undefined. Overall, both your second and third steps are incorrect because of division by zero. </p>
|
3,892,246 |
<p>I am dealing with sets of vectors <span class="math-container">$\big\{x_1, x_2, x_3, \dotsc, x_m \big\}$</span> from some abstract vector space <span class="math-container">$\mathcal{V}$</span>. <strong>Occasionally</strong>, <span class="math-container">$\mathcal{V}=\mathbb{R}^n$</span> and I need to address the elements of these vectors, e.g. sum over all elements of the vector <span class="math-container">$x_j$</span>. However, the subindex is already used to denote a specific vector.</p>
<p>Is there a common notation to address the elements of a vector? E.g. <span class="math-container">$x_j[i]$</span> or <span class="math-container">$x_j^i$</span> or <span class="math-container">$x_j^{(i)}$</span>?</p>
|
postmortes
| 65,078 |
<p>In Banach spaces, where we often consider elements of sequence spaces (e.g. <span class="math-container">$x\in c_0$</span> so that <span class="math-container">$x=(x_1, x_2, \ldots)$</span>) the problem of indexing comes up a lot. Typically when we need to index into a sequence of sequences we use <span class="math-container">$x_j(n)$</span> to indicate the <span class="math-container">$n^{\mathrm{th}}$</span> component of the <span class="math-container">$j^{\mathrm{th}}$</span> element of the master sequence. I would disagree with Wuestenfux here and say that this is reasonably standard in Banach Space Theory.</p>
<p>Other suggestions, such as raised indices (<span class="math-container">$x^i_j$</span>) run into problems when you need to consider powers of the series, and multiple sub-indices (<span class="math-container">$x_{i,j}$</span>) can get confusing and hard to read (there's a Banach space called Schreier space where some of the proofs require considering indices <span class="math-container">$j$</span> such that <span class="math-container">$p_{n_k +1} \leq j \leq p_{n_{k+1}+1}$</span> which is not only hard to read, but hard to think about!). Provided you are clear that <span class="math-container">$x_j$</span> refers to a sequence and not a function you shouldn't have much difficulty with people understanding <span class="math-container">$x_j(n)$</span>.</p>
<p>That said, whatever your choice, state it clearly upfront :)</p>
|
1,218,140 |
<p>I am reading Hartshorne's proof of $\mathbb{P}^1$ being simply connected as a scheme. It seems one ingredient of the proof is that if $X\rightarrow\mathbb{P}^1$ is an étale covering, then X has only finitely many connected components. But I do not see why.</p>
<p>Thanks in advance.</p>
|
KReiser
| 21,412 |
<p>The important point is that $f:X\to \mathbb{P}^1$ must be a finite map. Finite maps are quasi-finite, so the preimage of any point in $\mathbb{P}^1$ is a finite set. Since the number of points in the preimage is an upper semicontinuous function, there will be a generic multiplicity $n_0$ and finitely many points $p_1,\cdots,p_m$ in $\mathbb{P}^1$ which will have a larger number of points $n_1,\cdots,n_m$ in the fiber. Take the max of the $n_i$, and there are at most that many connected components.</p>
|
704,680 |
<p>We have $$\sqrt{x -2} = 3 -2\sqrt{x}$$.</p>
<p>I am to find whether a real number satisfies this relation and, if so, which real number it is.</p>
<p>I start by squaring both sides, which yields: </p>
<p>$$x - 2 = 4x - 12\sqrt{x} + 9$$.</p>
<p>Whence:</p>
<p>$$ -3x = -12\sqrt{x} + 11 \\
\sqrt{x} = \frac{x}{4} + \frac{11}{12}.
$$</p>
<p>But once I get here I am stuck. How can I find whether a solution exists for $x$ from here?</p>
|
GEdgar
| 442 |
<p>There is a Borel set $E$ in $\mathbb R^2$ such that $F := \{x-y\colon (x,y) \in E\}$ is not a Borel set.</p>
<p>Let $A := \{f \in \mathbf{C}\colon (f(1), f(0)) \in E\}$. Then $A \in \mathcal{B}_{\left[0,\infty\right)}$.</p>
<p>How about $T(A)$? In fact
$$
T(A) = \{g \in \mathbf{C}\colon g(0)=0, g(1) \in F\}
$$
and is not Borel.</p>
<p><strong>added Mar 10</strong><br>
Why is $T(A)$ not Borel? </p>
<p>First, note that $\mathcal{B}_{\left[0,\infty\right)}$ is the same as the Borel sets for the topology of uniform convergence on bounded sets for $\mathbf{C}_{[0,\infty)}$, a Polish space. </p>
<p>Suppose (for purposes of contradiction) that $T(A)=\{g \in \mathbf{C}\colon g(0)=0 \text{ and } g(1) \in F\}$ is Borel. Then so is its complement $T(A)^c := \{g \in \mathbf{C}\colon g(0) \ne 0 \text{ or } g(1) \in F^c\}$. For Polish spaces, the continuous image of a Borel set is an analytic set. Now $\pi_{01} \colon \mathbf{C}_{[0,\infty)} \to \mathbb R^2$ defined by
$$
\pi_{01}(f) = (f(0),f(1))
$$
is continuous. So
$$
G_1:= \{(x,y) \in \mathbb R^2 \colon x=0 \text{ and } y \in F\},\qquad
G_2:= \{(x,y) \in \mathbb R^2 \colon x \ne 0 \text{ or } y \in F^c\},
$$
are both analytic sets in $\mathbb R^2$. But then cross-sections
$$
\{y\colon (0,y) \in G_1\} = F \qquad\text{and}\qquad
\{y\colon (0,y) \in G_2\} = F^c
$$
are both analytic sets in $\mathbb R$, and therefore $F$ is Borel. This contradiction shows that our assumption that $T(A)$ is Borel is wrong.</p>
|
3,235,854 |
<p>I have to calculate the gcd of <span class="math-container">$f=X^3 +9X^2 +10X +3$</span> and <span class="math-container">$g= X^2 -X -2$</span> in
<span class="math-container">$\mathbb{Q}[X]$</span> and <span class="math-container">$\mathbb{Z}/5\mathbb{Z}$</span>.</p>
<p>In <span class="math-container">$\mathbb{Q}[X]$</span> I got that <span class="math-container">$X+1$</span> is a gcd and therefore <span class="math-container">$r(X+1)$</span> since <span class="math-container">$\mathbb{Q}$</span> is a field.</p>
<p>But I don't know how to do polynomial division in <span class="math-container">$\mathbb{Z}/5\mathbb{Z}$</span>.
Can somebody please help me?</p>
<p>Thank you!</p>
|
gt6989b
| 16,192 |
<p><strong>HINT</strong></p>
<p>It's easy to see that <span class="math-container">$x^2-x-2 = (x+1)(x-2)$</span>, but in <span class="math-container">$\mathbb{Z}/5\mathbb{Z}$</span> you have <span class="math-container">$5 \equiv 0$</span>, so
<span class="math-container">$$
x^3+9x^2+10x+3 \equiv x^3-x^2+3 = (x+1)\times \ldots
$$</span></p>
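<p>For readers who want to check such computations mechanically, here is a generic Euclidean-algorithm sketch for polynomial gcds over <span class="math-container">$\mathbb{Z}/p\mathbb{Z}$</span> (all helper names are ours), demonstrated on a simple pair whose gcd is known to be <span class="math-container">$x+1$</span>:</p>

```python
# Euclidean algorithm for polynomial gcd over Z/PZ.  Polynomials are
# coefficient lists, lowest degree first; helper names are ours.
P = 5

def trim(f):
    # reduce coefficients mod P and drop trailing (leading-degree) zeros
    f = [c % P for c in f]
    while f and f[-1] == 0:
        f.pop()
    return f

def polymod(f, g):
    # remainder of f divided by g (g nonzero) over Z/PZ
    f, g = trim(f), trim(g)
    inv = pow(g[-1], -1, P)              # inverse of the leading coefficient
    while len(f) >= len(g):
        c = f[-1] * inv % P
        shift = len(f) - len(g)
        for i, b in enumerate(g):
            f[shift + i] = (f[shift + i] - c * b) % P
        f = trim(f)
    return f

def polygcd(f, g):
    f, g = trim(f), trim(g)
    while g:
        f, g = g, polymod(f, g)
    inv = pow(f[-1], -1, P)              # normalise to a monic gcd
    return [c * inv % P for c in f]

# demonstration on a pair with known gcd x + 1:
# (x+1)(x+2) = x^2 + 3x + 2 and (x+1)(x+3) = x^2 + 4x + 3
print(polygcd([2, 3, 1], [3, 4, 1]))  # [1, 1], i.e. x + 1
```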
|
3,235,854 |
<p>I have to calculate the gcd of <span class="math-container">$f=X^3 +9X^2 +10X +3$</span> and <span class="math-container">$g= X^2 -X -2$</span> in
<span class="math-container">$\mathbb{Q}[X]$</span> and <span class="math-container">$\mathbb{Z}/5\mathbb{Z}$</span>.</p>
<p>In <span class="math-container">$\mathbb{Q}[X]$</span> I got that <span class="math-container">$X+1$</span> is a gcd and therefore <span class="math-container">$r(X+1)$</span> since <span class="math-container">$\mathbb{Q}$</span> is a field.</p>
<p>But I don't know how to do polynomial division in <span class="math-container">$\mathbb{Z}/5\mathbb{Z}$</span>.
Can somebody please help me?</p>
<p>Thank you!</p>
|
mathcounterexamples.net
| 187,663 |
<p>You got the GCD in <span class="math-container">$\mathbb Z[X]$</span>, i.e. <span class="math-container">$X+1$</span>. The GCD in <span class="math-container">$\mathbb Z / 5 \mathbb Z[X]$</span> is obtained by reducing each coefficient of the GCD in <span class="math-container">$\mathbb Z$</span> in <span class="math-container">$\mathbb Z / 5 \mathbb Z$</span>.</p>
<p>Which means that the GCD in <span class="math-container">$\mathbb Z / 5 \mathbb Z[X]$</span> is equal to <span class="math-container">$\bar{1}X + \bar{1} = X + \bar{1}$</span>.</p>
|
3,872,750 |
<p>Suppose we have a series <span class="math-container">$$\sum_{n=2}^\infty (-1)^n \frac{n^2}{10^n} = \sum_{n=2}^\infty (-1)^n b_n$$</span>.</p>
<p>I want to apply the alternating series test to see if it converges.</p>
<p>I need to show that:</p>
<p><span class="math-container">$$\lim_{n \rightarrow \infty} \frac{n^2}{10^n} = 0$$</span></p>
<p><span class="math-container">$$\frac{(n+1)^2}{10^{n+1}} \leq \frac{n^2}{10^n}$$</span></p>
<p>To see that <span class="math-container">$\lim_{n \rightarrow \infty} \frac{n^2}{10^n} =0$</span>, we note that the denominator is going to grow way faster than the numerator. If you want to show it using calculus, you can apply L'Hôpital's rule twice to get:</p>
<p><span class="math-container">$\lim_{n \rightarrow \infty} \frac{n^2}{10^n} = \lim_{n \rightarrow \infty} \frac{2}{10^n(\ln 10)^2}=0$</span></p>
<p>Next we want to know if</p>
<p><span class="math-container">$$\frac{(n+1)^2}{10^{n+1}} \leq \frac{n^2}{10^n}$$</span></p>
<p>is true for all <span class="math-container">$n$</span>. Rearranging, we can get:</p>
<p><span class="math-container">$$\frac{10^n}{10^{n+1}} \leq \frac{n^2}{(n+1)^2}$$</span></p>
<p><span class="math-container">$$\frac{1}{10} \leq \frac{n^2}{(n+1)^2}= (\frac{n}{n+1})^2$$</span></p>
<p>By plugging in values <span class="math-container">$n=2,3,...$</span> we can see that this inequality holds, and the slack only grows as <span class="math-container">$n$</span> gets bigger, since <span class="math-container">$(\frac{n}{n+1})^2$</span> is increasing with <span class="math-container">$\lim_{n \rightarrow \infty} (\frac{n}{n+1})^2 = 1 > \frac{1}{10}$</span></p>
<p>To show this with calculus we consider the function <span class="math-container">$f(x) = \frac{x^2}{10^x}$</span> and take it's derivative and find when it is <span class="math-container">$\leq 0$</span> on our domain of interest <span class="math-container">$[2, \infty)$</span>:</p>
<p><span class="math-container">$f'(x) = -10^{-x}x(x\ln(10)-2) \leq 0$</span></p>
<p>This will be true as long as <span class="math-container">$(x\ln(10)-2)$</span> is not negative, i.e. <span class="math-container">$x \geq \frac{2}{\ln(10)}$</span>, which includes our domain since we only care about <span class="math-container">$[2, \infty)$</span></p>
<p>Thus the series converges by the alternating series test</p>
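<p>A quick numerical check of the two hypotheses and of the value the partial sums settle toward (the code and its tolerances are ours):</p>

```python
# Numerically check the two hypotheses of the alternating series test for
# b_n = n^2 / 10^n, and watch the partial sums settle down.
def b(n):
    return n * n / 10.0**n

# b_n is decreasing on the domain of interest n >= 2 ...
assert all(b(n + 1) <= b(n) for n in range(2, 50))
# ... and b_n -> 0
assert b(50) < 1e-40

partial = 0.0
for n in range(2, 60):
    partial += (-1) ** n * b(n)
print(partial)  # ≈ 0.0323817
```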
|
user247327
| 247,327 |
<p>The first equation is 3x= 4 (mod 6) which is the same as 3x= 4+ 6n or 3x- 6n= 4, for some integer, n. I have an immediate problem with that! For any x and n, 3x and 6n are divisible by 3, so 3x- 6n is a multiple of 3 but 4 is not. There is no solution.</p>
<p>The second equation is 5x= 4 (mod 6) which is the same as 5x= 4+ 6n or 5x- 6n= 4. It is obvious that 6- 5= 1 so taking x= -1 and n= -1, 5x- 6n= 5(-1)- 6(-1)= 6- 5= 1. Multiplying by 4, 5(-4)- 6(-4)= 4. One solution is x= -4. All solutions are of the form x= -4+ 6k for some integer k. Taking k= 1 gives x= -4+ 6= 2 as the only solution in <span class="math-container">$Z_6$</span>.</p>
<p>The third equation is 5x= 3 (mod 6) which is the same as 5x= 3+ 6n or 5x- 6n= 3. Again 5(-1)- 6(-1)= 6- 5= 1 so 5(-3)- 6(-3)= 3. All solutions are of the form x= -3+ 6k for some k. x= -3+ 6(1)= 3 is the only solution in <span class="math-container">$Z_6$</span>.</p>
<p>The last equation is 3x= 3 (mod 6). Dividing by 3 gives x= 1 as an obvious solution, but 3 also divides 6, so writing the equation as 3x= 3+ 6k we have x= 1+ 2k. Taking k= 1 and k= 2 gives x= 3 and x= 5. There are three solutions, x= 1, x= 3 and x= 5, in <span class="math-container">$Z_6$</span>.</p>
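<p>Each congruence can also be checked exhaustively over <span class="math-container">$Z_6$</span> with a few lines of Python (the helper name is ours):</p>

```python
# Exhaustive check of the four congruences over Z_6.
def solve(a, b, m=6):
    return {x for x in range(m) if (a * x - b) % m == 0}

assert solve(3, 4) == set()        # 3x = 4 (mod 6): no solution
assert solve(5, 4) == {2}          # 5x = 4 (mod 6)
assert solve(5, 3) == {3}          # 5x = 3 (mod 6)
assert solve(3, 3) == {1, 3, 5}    # 3x = 3 (mod 6)
```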
|
1,315,744 |
<p>Already I know that harmonic series, $$\sum_{k=1}^n\frac1k $$ is divergent series.</p>
<p>And, it is also divergent by Abel Sum or Cesaro Sum.</p>
<p>However, I do not know how to prove it is divergent by concept of Abel or Cesaro.</p>
<p>Abel Sum or Cesaro Sum do not exist in this problem.</p>
<p>But, how can I prove it?</p>
<p>I have tried to prove for long time...</p>
<p>Can you please show your work?</p>
|
Jack D'Aurizio
| 44,121 |
<p>A simple trick is to consider that:
$$\frac{1}{n}\geq\log\left(1+\frac{1}{n}\right)\tag{1}$$
holds regardless of which concept of convergence you are using, and the partial sums of the RHS of $(1)$ are fairly easy to compute by the telescopic property:
$$ \sum_{n=1}^{N}\log\left(1+\frac{1}{n}\right)=\log\prod_{n=1}^{N}\frac{n+1}{n} = \log(N+1).\tag{2}$$
As an alternative, given that $H_n=\sum_{k=1}^{n}\frac{1}{k}$, we have:
$$ \frac{H_1+\ldots+H_n}{n}=\frac{1}{n}\sum_{j=1}^{n}\sum_{k=1}^{j}\frac{1}{k}=\frac{1}{n}\sum_{j=1}^{n}\frac{n+1-j}{j}=\left(1+\frac{1}{n}\right)H_n-1.\tag{3}$$</p>
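<p>Identity $(3)$ is easy to verify exactly with rational arithmetic (the code is ours):</p>

```python
from fractions import Fraction

def H(n):
    # harmonic number H_n as an exact rational
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Identity (3): the Cesaro means of H_n equal (1 + 1/n) H_n - 1,
# so they diverge exactly when H_n does.
for n in range(1, 30):
    cesaro = sum(H(j) for j in range(1, n + 1)) / n
    assert cesaro == (1 + Fraction(1, n)) * H(n) - 1
```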
|
2,781,153 |
<p>I've a right triangle that is inscribed in a circle with radius $r$ the hypotunese of the triangle is equal to the diameter of the circle and the two other sides of the triangle are equal to eachother.</p>
<blockquote>
<p>Prove that when you divide the area of the circle by the area of the triangle that you will get $\pi$.</p>
</blockquote>
<p>This is what I did:</p>
<p>The area of a triangle is $\frac{height\times width}{2}$ and the area of a circle is $\pi r^2$. Now I do not know how to continue.</p>
|
Dr. Sonnhard Graubner
| 175,066 |
<p>Hint: The area of this triangle is given by $$A_1=\frac{2r\cdot r}{2}=r^2$$ and the area of the circle is $$A_2=\pi r^2$$</p>
|
2,768,187 |
<blockquote>
<p>Let $w$ and $z$ be complex numbers such that $w=\frac{1}{1-z}$, and
$|z|^2=1$. Find the real part of $w$.</p>
</blockquote>
<p>The answer is $\frac{1}{2}$ but I don't know how to get to it.</p>
<hr>
<p>My attempt</p>
<p>as $|z|^2=1$</p>
<p>$z\bar z = 1$</p>
<p>If $z = x+yi$</p>
<p>$z=\frac{1}{\bar z} = \frac{1}{x-yi}$</p>
<p>$\therefore w= \frac{1}{1-\frac{1}{x-yi}}=\frac{x-yi}{(x-1)-yi}$</p>
<p>$=\frac{(x-yi)((x-1)+yi)}{((x-1)-yi)((x-1)+yi)}$</p>
<p>$=\frac{x^2-x+yi+y^2}{x^2-x+1+y^2}$</p>
<p>$=\frac{x^2-x+y^2}{x^2-x+1+y^2}+\frac{y}{x^2-x+1+y^2}i$</p>
<p>Hence $Re(w)=\frac{x^2-x+y^2}{x^2-x+1+y^2}$</p>
<p>$=\frac{x^2+y^2-x}{x^2+y^2-x+1}$</p>
<p>$=\frac{1-x}{1-x+1}$</p>
<p>$=\frac{1-x}{2-x}$</p>
<p>$=\frac{-1}{2-x}+\frac{2-x}{2-x}$</p>
<p>$=1-\frac{1}{2-x}$</p>
<p>$=1+\frac{1}{x-2}$ ?????</p>
|
Math Lover
| 348,257 |
<p>Note that $$w+w^* = \frac{1}{1-z} + \frac{1}{1-z^*}=\frac{2-z^*-z}{1-z^*-z+|z|^2}=\frac{2-z^*-z}{2-z^*-z}=1,$$ hence $\operatorname{Re}(w)=\frac{w+w^*}{2}=\frac12$.</p>
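<p>A quick numerical check of this identity at sample points on the unit circle (excluding $z=1$):</p>

```python
import cmath
import math

# Check Re(w) = 1/2 for w = 1/(1 - z) with |z| = 1, z != 1.
for k in range(1, 12):
    z = cmath.exp(1j * 2 * math.pi * k / 12.0)   # points on the unit circle
    w = 1.0 / (1.0 - z)
    assert abs(w.real - 0.5) < 1e-12
```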
|
132,226 |
<p>After edit:</p>
<p>How do we show that for every (holomorphic) vector bundle over a curve, is it possible to deform it to another one which is decomposable (into line bundles)? </p>
<p>Before edit:</p>
<p>I am not sure how obvious or how wrong the following question is:</p>
<p>For every (holomorphic) vector bundle over a complex projective variety, is it possible to deform it to another one which is decomposable (into line bundles)?
I feel the answer is yes (since the obstruction lies in the Ext group and you can deform any element of the Ext group to zero), but I don't know how easy a precise proof is! </p>
<p>Is this true at least over curves?</p>
|
Piotr Achinger
| 3,847 |
<p>To expand my comment above:</p>
<p>This is not possible in general and Chern classes are a possible obstruction. For an easy example, take the cotangent bundle $\Omega$ of $\mathbb{P}^2$. The Euler sequence
$$ 0 \to \Omega \to \mathcal{O}(-1)^3 \to \mathcal{O} \to 0 $$
shows that $c(\Omega) = (1-H)^3 / 1 = 1 - 3H + 3H^2$ (a total Chern class computation), where $H$ is the hyperplane class. If it were possible to deform $\Omega$ to $\Omega'$ sitting inside an extension
$$ 0 \to \mathcal{O}(a) \to \Omega' \to \mathcal{O}(b) \to 0 $$
then we would have $1 - 3H + 3H^2 = c(\Omega) = c(\Omega') = (1 + aH)(1 + bH)$, which is impossible: no integers $a,b$ satisfy $a+b=-3$ and $ab=3$.</p>
<p>On the other hand, if the given vector bundle $E$ has a filtration with line bundle quotients then I think it is possible. Assume $E$ sits in an extension
$$ 0\to E_1 \to E \to E_0 \to 0. $$
To deform $E$ to $E_0 \oplus E_1$, as you said we need to "deform the $Ext$ class". To make sense of this, we first need to understand how $Ext$ classes and extensions correspond. Given $\xi\in Ext^1(E_0, E_1)$, the corresponding extension is obtained as a certain pushout. Replacing $\xi$ by $t\cdot \xi$ where $t$ is the coordinate of $\mathbb{A}^1$, we easily do it in a family (or even construct a "universal" family over $X \times Ext^1(E_0, E_1)$). </p>
<p>(to be continued, sorry, have to go)</p>
|
757,049 |
<p>As the title suggests, I need help proving that the cardinalities of $(0,1)$ and $[0,1]$ are the same. </p>
<p>Here is my work: </p>
<p>$f:[0,1] \rightarrow (0,1)$</p>
<p>Let $n\in N$</p>
<p>Let $A=\{\frac{1}{2}, \frac{1}{3}, \frac{1}{4}....\}\cup \{0\}$</p>
<p>On $[0,1]\in A: f(x)=x$</p>
<p>On $A: f(0)=\frac{1}{3}$</p>
<p>$f(1)=\frac{1}{2}$</p>
<p>$f(\frac{1}{n})=\frac{1}{n+2}, n>2$</p>
<p>Now I will prove that $f$ is a bijective function.
Let $f(n)=f(m)$ for $n,m \in N$ and $n,m>2$. Then $\frac{1}{n+2}=\frac{1}{m+2}$. We multiply both sides of the equation by $(n+2)(m+2)$ and obtain $m+2=n+2 \rightarrow m=n$. Thus $f$ is injective.</p>
<p>From here on out, I am kind of shakey. I know the gist of this proof, but I don't know how to set up the sequencing correctly. For instance, I know that I need to map $x_1$ to $0$ and $x_2$ to $1$ and $x_{n}$ to $x_{n+2}$, but I am confused on how to do so. </p>
|
Asaf Karagila
| 622 |
<p>Your proof has some problems.</p>
<ol>
<li>You haven't defined $A$. I suppose it should denote some subset of $[0,1]$.</li>
<li>If $A$ is a subset of $[0,1]$ then it's not the case that $[0,1]\in A$. Perhaps you meant $[0,1]\setminus A$?</li>
<li>The idea is that $A$ can be "easily described" as a sequence $a_n$ for $n\in\Bbb N$, preferably such that $a_0=0$ and $a_1=1$. Then you can define a function which maps $a_n\mapsto a_{n+2}$, and $x$ for those not in $A$ (as you did).</li>
</ol>
<p>Then to show that the map is a bijection you need to show that if $x,y\in[0,1]$ are distinct then $f(x)\neq f(y)$. This is done by dividing into cases, both $x,y\in A$, both $x,y\notin A$ and the case that $x\in A$ and $y\notin A$ (there's a fourth case which is similar to this one).</p>
<p>Then you have to show that every $y\in(0,1)$ is in the range, which is also done by dividing into cases, if $y\notin A$ then $f(y)=y$; and if $y\in A$, then you have to do some work.</p>
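<p>One concrete instance of the map described above can be written out and sanity-checked in Python (the enumeration of $A$ chosen here is ours):</p>

```python
from fractions import Fraction

# A concrete bijection [0,1] -> (0,1) of the kind described: enumerate
# A = {0, 1, 1/2, 1/3, ...} as a_0 = 0, a_1 = 1, a_n = 1/n (n >= 2),
# shift a_n -> a_{n+2}, and fix everything outside A.
def a(n):
    return Fraction(0) if n == 0 else Fraction(1) if n == 1 else Fraction(1, n)

def f(x):
    # x is a Fraction; find its index in the enumeration of A, if any
    if x == 0:
        n = 0
    elif x == 1:
        n = 1
    elif x.numerator == 1 and x.denominator >= 2:
        n = x.denominator
    else:
        return x                      # points outside A are fixed
    return a(n + 2)

assert f(Fraction(0)) == Fraction(1, 2)
assert f(Fraction(1)) == Fraction(1, 3)
assert f(Fraction(1, 2)) == Fraction(1, 4)
assert f(Fraction(2, 5)) == Fraction(2, 5)   # fixed point outside A
```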
|
3,013,529 |
<p>Suppose that the function <span class="math-container">$f$</span> is:</p>
<p>1) Riemann integrable (not necessarily continuous) function on <span class="math-container">$\big[a,b \big]$</span>;</p>
<p>2) <span class="math-container">$\forall n \geq 0$</span> <span class="math-container">$\int_{a}^{b}{f(x)\, x^n}\,dx = 0$</span> (in particular, it means that the function is orthogonal to all polynomials).</p>
<p>Prove that <span class="math-container">$f(x) = 0$</span> in all points of continuity <span class="math-container">$f$</span>.</p>
|
Martin Argerami
| 22,857 |
<p>Because <span class="math-container">$f$</span> is a 2-norm limit of polynomials, you can deduce that <span class="math-container">$\int_a^bf(x)^2=0$</span>. </p>
<p>Now suppose that <span class="math-container">$f(x_0)\ne0$</span> for some <span class="math-container">$x_0$</span> where <span class="math-container">$f$</span> is continuous. Take <span class="math-container">$\varepsilon=|f(x_0)|/2$</span>; by continuity at <span class="math-container">$x_0$</span>, there exists <span class="math-container">$\delta>0$</span> such that <span class="math-container">$|f(x)-f(x_0)|<|f(x_0)|/2$</span> for all <span class="math-container">$x\in (x_0-\delta,x_0+\delta)$</span>. From the reverse triangle inequality we have
<span class="math-container">$$
|f(x_0)|-|f(x)|<|f(x_0)|/2,
$$</span>
so <span class="math-container">$|f(x)|>|f(x_0)|/2$</span>. Then
<span class="math-container">$$
\int_a^b f(x)^2\geq\int_{x_0-\delta}^{x_0+\delta}f(x)^2\geq\int_{x_0-\delta}^{x_0+\delta}f(x_0)^2/4=\delta f(x_0)^2/2>0,
$$</span>
a contradiction. So <span class="math-container">$f(x_0)=0$</span>. </p>
|
1,345,538 |
<blockquote>
<p>If a red dice and a green dice are rolled together and $X$ is the highest score minus the lowest score of the dice, what are the possible values of $X$? </p>
<p>Tabulate the probability distribution of $x$.</p>
</blockquote>
<p>Personally, here is my solution based on my understanding:
$$
P[X = 0] = \frac{1}{6} \quad\quad
P[X = 1] = \frac{1}{6} \quad\quad
P[X = 2] = \frac{1}{6} \\
P[X = 3] = \frac{1}{6} \quad\quad
P[X = 4] = \frac{1}{6} \quad\quad
P[X = 5] = \frac{1}{6}
$$</p>
<p>I know my answer is wrong. Can someone explain and solve this problem?</p>
|
mvw
| 86,776 |
<p>The task is to walk away from the present location such that the temperature decreases as much as possible.</p>
<p>We are at $(1,1,3)$, the temperature gradient points at $(2,2,6)$.
If we move by $dx$ and $dy$ the change in position is
$$
dr = (dx, dy, z_x dx + z_y dy) = (dx, dy, -2(dx +dy))
$$
The change in temperature moving there is
$$
dT
= (T_x, T_y, T_z) \cdot dr
= (2,2,6) \cdot dr
= 2dx + 2dy - 12(dx + dy) = -10(dx+dy)
$$
With $x = r \cos \phi$, $y = r \sin \phi$ and only moving along the radius for fixed $\phi$ we have $dx = \cos \phi dr$ and $dy = \sin \phi dr$ and $dx + dy = (\cos \phi + \sin \phi) dr \le \sqrt{2} dr$ for $\phi = \pi/4$.
So I would run in that direction.</p>
|
1,345,538 |
<blockquote>
<p>If a red dice and a green dice are rolled together and $X$ is the highest score minus the lowest score of the dice, what are the possible values of $X$? </p>
<p>Tabulate the probability distribution of $x$.</p>
</blockquote>
<p>Personally, here is my solution based on my understanding:
$$
P[X = 0] = \frac{1}{6} \quad\quad
P[X = 1] = \frac{1}{6} \quad\quad
P[X = 2] = \frac{1}{6} \\
P[X = 3] = \frac{1}{6} \quad\quad
P[X = 4] = \frac{1}{6} \quad\quad
P[X = 5] = \frac{1}{6}
$$</p>
<p>I know my answer is wrong. Can someone explain and solve this problem?</p>
|
KittyL
| 206,286 |
<p>Here is a method continuing your idea. </p>
<p>You should use the negative of the gradient since you are looking for the direction in which the temperature decreases the fastest. Then look for the projection of that direction onto the tangent plane of the mountain. </p>
<p>The gradient direction of the temperature is $\vec{t}=(-2,-2,-6)$. The normal vector of the tangent plane is the gradient of the mountain $\vec{n}=(2,2,1)$.</p>
<p>The projection is</p>
<p>$$\vec{t}-\frac{\vec{t}\cdot\vec{n}}{\vec{n}\cdot \vec{n}}\vec{n}$$</p>
<p><strong>Edit</strong>: This method only works for this special case. user251257 gives a better method that works for general case. Here I just want to illustrate how to find the projection of a vector onto a plane.</p>
<p>The following picture shows the projection of a vector $\vec{t}$ onto a plane with normal vector $\vec{n}$:</p>
<p><img src="https://i.stack.imgur.com/6Ea4V.png" alt="enter image description here"></p>
<p>Notice the little red vector is the part $\frac{\vec{t}\cdot\vec{n}}{\vec{n}\cdot \vec{n}}\vec{n}$. So subtraction gives you the projection onto the plane. </p>
|
3,464,291 |
<blockquote>
<p>If <span class="math-container">$x,y,z>0$</span>, then the minimum value of</p>
<p><span class="math-container">$x^{\ln(y)-\ln(z)}+y^{\ln(z)-\ln(x)}+z^{\ln(x)-\ln(y)}$</span></p>
</blockquote>
<p>What I tried:</p>
<p>Let <span class="math-container">$\ln(x)=a,\ln(y)=b.\ln(z)=c$</span></p>
<p>So <span class="math-container">$x=e^{a},y=e^{b},z=e^{c}$</span></p>
<p>How do I solve it from here? Please help.</p>
|
Ng Chung Tak
| 299,599 |
<blockquote>
<p><strong>Useful fact</strong></p>
<p><span class="math-container">$$\large x^{\log y}=y^{\log x}$$</span></p>
<p>Also refer to another answer of mine <a href="https://math.stackexchange.com/questions/1879477/logarithms-equality/1879607#1879607"><em>here</em></a>.</p>
</blockquote>
<p>Let <span class="math-container">$u=x^{\ln y}$</span>, <span class="math-container">$v=y^{\ln z}$</span> and <span class="math-container">$w=z^{\ln x}$</span>, then <span class="math-container">$u,v,w\in \mathbb{R}^+$</span></p>
<p><span class="math-container">\begin{align}
f(x,y,z) &= x^{\ln y-\ln z}+y^{\ln z-\ln x}+z^{\ln x-\ln y} \\
&= \frac{u}{w}+\frac{v}{u}+\frac{w}{v} \\
& \ge 3\sqrt[3]{\frac{u}{w} \times \frac{v}{u} \times \frac{w}{v}}
\tag{by AM $\ge$ GM} \\
&= 3
\end{align}</span></p>
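<p>A numerical spot-check that <span class="math-container">$f(x,y,z)\ge 3$</span> with equality at <span class="math-container">$x=y=z$</span> (sampling ranges are ours):</p>

```python
import math
import random

# f(x, y, z) = x^(ln y - ln z) + y^(ln z - ln x) + z^(ln x - ln y)
def f(x, y, z):
    lx, ly, lz = math.log(x), math.log(y), math.log(z)
    return x ** (ly - lz) + y ** (lz - lx) + z ** (lx - ly)

random.seed(0)
assert abs(f(2.0, 2.0, 2.0) - 3.0) < 1e-12   # equality at x = y = z
for _ in range(1000):
    x, y, z = (random.uniform(0.1, 10.0) for _ in range(3))
    assert f(x, y, z) >= 3.0 - 1e-9          # AM >= GM bound
```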
|
2,794,715 |
<p>Is it right that</p>
<p><strong>$$\sqrt[a]{2^{2^n}+1}$$</strong></p>
<p>for every $$a>1,n \in \mathbb N $$ </p>
<p>is always irrational?</p>
|
Wojowu
| 127,263 |
<p>As lhf notes, we just have to see that $2^{2^n}+1$ can never be a perfect power. However, since $2^{2^n}$ is a perfect power, using <a href="https://en.wikipedia.org/wiki/Catalan%27s_conjecture" rel="nofollow noreferrer">Mihailescu's theorem</a> the only pair of perfect powers differing by $1$ is $8$ and $9$, but $2^{2^n}$ can't be equal to $8$ for $n$ natural. So your expression is, indeed, always irrational.</p>
<p>(I guess I should note that using full Mihailescu's theorem here might be an overkill - 2 being a prime should let us give a relatively simple argument... but hey, whatever works :) )</p>
|
2,293,162 |
<p>$$f_n(x)=\begin{cases} 1-nx,&\text{for }x\in[0,1/n]\\
0 ,&\text{for }x \in [1/n,1]
\end{cases}$$ </p>
<p>Then which is correct option?</p>
<p>1.$\lim\limits_{n\to\infty }f_n(x)$ defines a continuous function on $[0,1]$. </p>
<p>2.$\lim\limits_{n\to\infty }f_n(x)$ exists for all $x\in [0,1]$. $f_n(0)=1$ and $f_n(1/n)=0$ </p>
<p>I think the first option is correct, but I have no proper justification beyond looking at the graph of the function, where $f$ appears continuous. But then a question arises: if $f$ is continuous, why is option 2 wrong? Please correct me and help me understand this problem.</p>
|
Arpan1729
| 444,208 |
<p>The function converges pointwise to a discontinuous function which takes the value $1$ when $x$ is $0$ and takes the value $0$ otherwise.</p>
<p>Hence option $2$ is correct and option $1$ is false.</p>
<p>Explanation:</p>
<p>Say $1>x>0$ </p>
<p>Then let us look at the sequence $f_1(x),f_2(x),f_3(x)\dots$
The terms of this sequence are eventually equal to $0$: for every $x>0$ there exists an integer $N$ such that $1/N<x$, so $f_n(x)=0$ for all $n\geq N$, and the limit is also $0$.
For $x=1$ we likewise have $f_n(1)=0$ for all $n$, while for $x=0$ we have $f_n(0)=1$ for all $n\in \mathbb{N}$, hence the sequence converges to $1$ there.</p>
<p>Hence the limit function $f$ equals $1$ if $x=0$ and $0$ otherwise.</p>
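<p>The pointwise behaviour is easy to see numerically (the code is ours):</p>

```python
# Pointwise limit of f_n: 1 at x = 0, and 0 for every fixed x in (0, 1].
def f(n, x):
    return 1 - n * x if x <= 1.0 / n else 0.0

assert all(f(n, 0.0) == 1 for n in (1, 10, 1000))
for x in (0.001, 0.25, 1.0):
    # for each fixed x > 0, f_n(x) = 0 once 1/n < x
    N = int(1.0 / x) + 1
    assert all(f(n, x) == 0.0 for n in range(N, N + 100))
```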
|
3,979,371 |
<p>So I have been struggling with this question for a while. Suppose <span class="math-container">$X$</span> is uniformly distributed over an interval <span class="math-container">$(a, b)$</span> and <span class="math-container">$Y$</span> is uniformly distributed over <span class="math-container">$(-\sigma, \sigma)$</span>, <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent. Consider a random variable <span class="math-container">$Z = X + Y$</span>. What would be the <em>conditional</em> distribution of <span class="math-container">$X | Z$</span>, i.e. <span class="math-container">$F_{X|Z}(x)$</span>?</p>
<p>I know that <span class="math-container">$F_X(x) = \frac{x - a}{b-a}$</span> and <span class="math-container">$F_y(y) = \frac{y + \sigma}{2\sigma}$</span> but I am having trouble figuring out <span class="math-container">$F_{X|Z}(x)$</span>. Could anyone help me? My gut says that <span class="math-container">$X|Z$</span> <em>should</em> be uniformly distributed over <span class="math-container">$(Z - \sigma, Z + \sigma)$</span> but I can't figure out how to prove this.</p>
|
grand_chat
| 215,011 |
<p>You can show that the conditional density of <span class="math-container">$X$</span> given <span class="math-container">$X+Y$</span> is uniform by applying the change of variables formula. Define the transformation <span class="math-container">$(W,Z):=(X,X+Y)$</span>. The joint density of <span class="math-container">$(W,Z)$</span> is
<span class="math-container">$$
f_{W,Z}(w,z)=f_{X,Y}(x(w,z),y(w,z)) = f_{X,Y}(w, z-w).\tag1$$</span> (Check that the Jacobian of the transformation is <span class="math-container">$1$</span>.) Since the joint density of <span class="math-container">$(X,Y)$</span> is
<span class="math-container">$$f_{X,Y}(x,y)=I_{(a,b)}(x)I_{(-\sigma,\sigma)}(y),\tag2$$</span> (up to the constant factor $\frac{1}{2\sigma(b-a)}$, which we suppress since it cancels when forming the conditional density), write (1) as
<span class="math-container">$$
f_{W,Z}(w,z)=I_{(a,b)}(w)I_{(-\sigma,\sigma)}(z-w).\tag3
$$</span>
Plotting in the <span class="math-container">$(w,z)$</span>-plane, we find the density (3) is uniform over a skewed diamond shape. In fact, for fixed <span class="math-container">$z$</span> the value of (3) is zero unless <span class="math-container">$a<w<b$</span> and <span class="math-container">$z-\sigma<w<z+\sigma$</span>, in which case the value of (3) is <span class="math-container">$1$</span>. The key observation is that for fixed <span class="math-container">$z$</span>, the expression (3), when nonzero, is <em>constant</em> as a function of <span class="math-container">$w$</span>. This means that when we divide (3) by the marginal density <span class="math-container">$f_Z(z)$</span> (note we don't even have to compute the value!), the resulting conditional density <span class="math-container">$f_{W\mid Z}(w\mid z)$</span> remains constant as a function of <span class="math-container">$w$</span>.</p>
|
1,428,143 |
<p>Let $f:E\to F$ where $E$ and $F$ are metric space. We suppose $f$ continuous. I know that if $I\subset E$ is compact, then $f(I)$ is also compact. But if $J\subset F$ is compact, do we also have that $f^{-1}(J)$ is compact ?</p>
<p>If yes and if $E$ and $F$ are not necessarily compact, it still works ?</p>
|
Clayton
| 43,239 |
<p>Not necessarily; consider $f:(-2\pi,2\pi)\to[-1,1]$ given by $f(x)=\sin(x)$.</p>
|
3,060,742 |
<p><span class="math-container">$\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = 1.644934$</span> or <span class="math-container">$\frac{\pi^2}{6}$</span></p>
<p>What if we take every 3rd term and add them up? </p>
<p>A = <span class="math-container">$ \frac{1}{3^2} + \frac{1}{6^2} + \frac{1}{9^2} + \cdots = ??$</span></p>
<p>How to take every 3rd-1 term and add them up?</p>
<p>B = <span class="math-container">$ \frac{1}{2^2} + \frac{1}{5^2} + \frac{1}{8^2} + \cdots = ??$</span></p>
<p>How to take every 3rd-2 term and add them up?</p>
<p>C = <span class="math-container">$ \frac{1}{1^2} + \frac{1}{4^2} + \frac{1}{7^2} + \cdots = ??$</span></p>
<p>I am not sure how to adapt Eulers methods as he used the power series of sin for his arguments: <a href="https://en.wikipedia.org/wiki/Basel_problem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Basel_problem</a></p>
|
Mark Viola
| 218,419 |
<p>Note that we have</p>
<p><span class="math-container">$$\psi'(z)=\sum_{n=0}^\infty \frac{1}{(n+z)^2}$$</span></p>
<p>where <span class="math-container">$\psi'(z)$</span> is the derivative of the <a href="https://en.wikipedia.org/wiki/Digamma_function" rel="nofollow noreferrer">digamma function</a>. Hence, we can write</p>
<p><span class="math-container">$$\sum_{n=0}^\infty \frac{1}{(3n+1)^2}=\frac19 \psi'(1/3)$$</span></p>
<p>and </p>
<p><span class="math-container">$$\sum_{n=0}^\infty \frac{1}{(3n+2)^2}=\frac19 \psi'(2/3)$$</span></p>
<p>Interestingly, since we have</p>
<p><span class="math-container">$$\sum_{n=0}^\infty \left(\frac1{(3n+3)^2}+\frac1{(3n+2)^2}+\frac1{(3n+1)^2}\right)=\frac{\pi^2}{6}$$</span></p>
<p>we find that</p>
<p><span class="math-container">$$\psi'(1/3)+\psi'(2/3) = 4\pi^2/3$$</span></p>
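<p>As a sanity check (my own numerics, not part of the original answer), the partial sums of the three residue-class series can be compared against $\pi^2/54$ and $\pi^2/6$ in a few lines of Python:</p>

```python
import math

# Partial sums of the three residue-class series discussed above.
N = 200_000
C = sum(1.0 / (3*n + 1)**2 for n in range(N))   # 1/1^2 + 1/4^2 + 1/7^2 + ...
B = sum(1.0 / (3*n + 2)**2 for n in range(N))   # 1/2^2 + 1/5^2 + 1/8^2 + ...
A = sum(1.0 / (3*n + 3)**2 for n in range(N))   # 1/3^2 + 1/6^2 + 1/9^2 + ...

basel = math.pi**2 / 6

# A is (1/9) of the full Basel sum, the three classes together recover it,
# and 9*(B + C) approximates psi'(1/3) + psi'(2/3) = 4*pi^2/3.
print(A, basel / 9)
print(A + B + C, basel)
print(9 * (B + C), 4 * math.pi**2 / 3)
```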
|
603,986 |
<p>Show that in a finite field $F$ there exists $p(x)\in F[X]$ s.t $p(f)\neq 0\;\;\forall f\in F$</p>
<p>Any ideas how to prove it?</p>
|
Dylan Yott
| 62,865 |
<p>Let $p(x)=1$, you win. Here's a less stupid example with the same flavor. If $|F|=q$, then consider $x^q-x+1$. Can you figure out why you still win?</p>
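<p>A quick computational illustration (my addition; the prime field $\mathbb{F}_p$ is just the simplest case of $|F|=q$): since $a^p = a$ for every $a \in \mathbb{F}_p$ by Fermat's little theorem, the polynomial $x^p - x + 1$ evaluates to $1$ at every element.</p>

```python
# Check that x^p - x + 1 is identically 1 on F_p for several small primes.
for p in (2, 3, 5, 7, 11):
    values = {(pow(a, p, p) - a + 1) % p for a in range(p)}
    print(p, values)   # always {1}, so the polynomial never vanishes
```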
|
1,237,077 |
<p>For a periodic function we have: $$\int_{b}^{b+a}f(t)dt = \int_{b}^{na}f(t)dt+\int_{na}^{b+a}f(t)dt = \int_{b+a}^{(n+1)a}f(t)dt+\int_{an}^{b+a}f(t)dt = \int_{na}^{(n+1)a}f(t)dt = \int_{0}^{a}f(t)dt.$$ , but I don't understand how we obtain $\int _{b+a}^{\left(n+1\right)a}\:f\left(t\right)\:dt=\int _b^{na}\:f\left(t\right)dt$ in our equality?</p>
|
Prasun Biswas
| 215,900 |
<p>What does it mean for a function to have a period <span class="math-container">$a$</span>? Informally, it means that the function values get repeated after an increment of <span class="math-container">$a$</span> in the <span class="math-container">$x$</span>-value (domain value). Notationally,</p>
<p><span class="math-container">$$a\textrm{ is period of }f\iff f(x)=f(x+a)~\forall~x,x+a\in\textrm{Dom(f)}$$</span></p>
<p>Since <span class="math-container">$f(t)=f(t-a)$</span>, the substitution <span class="math-container">$u=t-a$</span> maps the interval <span class="math-container">$[b+a,(n+1)a]$</span> onto <span class="math-container">$[b,na]$</span> without changing the value of the integrand, so <span class="math-container">$\int_{b+a}^{(n+1)a}f(t)\,dt=\int_{b}^{na}f(u)\,du$</span>.</p>
<p>So, the definite integral remains the same.</p>
<p>Here's a simple diagram:</p>
<p><a href="https://i.stack.imgur.com/AYUcK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AYUcK.png" alt="Image" /></a></p>
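<p>Here is a small numerical illustration (an editorial sketch, not part of the original answer) using $f(t)=\sin^2 t$, which has period $a=\pi$: the integral over $[b,\,b+a]$ is independent of $b$.</p>

```python
import math

def integrate(f, lo, hi, n=100_000):
    # simple midpoint rule
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

f = lambda t: math.sin(t) ** 2   # period a = pi, integral over one period = pi/2
a = math.pi
for b in (-2.3, 0.0, 1.7, 10.0):
    print(integrate(f, b, b + a), integrate(f, 0.0, a))   # both approx pi/2
```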
|
8,997 |
<p>I have a set of data points in two columns in a spreadsheet (OpenOffice Calc):</p>
<p><img src="https://i.stack.imgur.com/IPNz9.png" alt="enter image description here"></p>
<p>I would like to get these into <em>Mathematica</em> in this format:</p>
<pre><code>data = {{1, 3.3}, {2, 5.6}, {3, 7.1}, {4, 11.4}, {5, 14.8}, {6, 18.3}}
</code></pre>
<p>I have Googled for this, but what I find is about importing the entire document, which seems like overkill. Is there a way to kind of cut and paste those two columns into <em>Mathematica</em>? </p>
|
pyler
| 2,338 |
<p>I don't have 50 reputation points so I'm submitting this as an answer when it is really a comment, a small addition. After doing the first approach given by WReach, wrap the result of your evaluation in <strong><code>Grid[]</code></strong> and add the option <strong><code>Frame -> All</code></strong>, i.e. <strong><code>Grid[result, Frame -> All]</code></strong>.
Enjoy!</p>
|
2,277,115 |
<p>I'm asking for examples of interesting categories in which there exist non-isomorphic objects $X$ and $Y$, a split monomorphism $f : X \to Y$, and a split epimorphism $g : X \to Y$. Spelled out, there should exist maps $f : X \leftrightarrow Y : f'$ such that $f'f = \mathrm{id}_{X}$ and maps $g : X \leftrightarrow Y : g'$ such that $gg' = \mathrm{id}_{Y}$ such that there is no pair of maps $h : X \leftrightarrow Y : h'$ satisfying $h'h = \mathrm{id}_{X}$ and $hh' = \mathrm{id}_{Y}$.</p>
<p>My professor found a seemingly relevant exercise from Rowen's "Graduate Algebra: Noncommutative view" suggesting this may occur in R-Mod, but I haven't got the book on hand and remember having trouble understanding the exercise anyways. Additionally, he more specifically asked if this can happen in Top.</p>
|
HeinrichD
| 369,530 |
<p>The situation can also be described by two non-isomorphic objects $X,Y$ which admit split monomorphisms $X \to Y$ and $Y \to X$ in both directions.</p>
<p>In fact, this happens in the category of abelian groups. See <a href="https://mathoverflow.net/questions/218113">here</a> and <a href="https://mathoverflow.net/questions/10128">here</a> for rather complicated examples. It was already asked for more easy examples <a href="https://math.stackexchange.com/questions/1429267">here</a>.</p>
|
921,893 |
<p>If we have: $f(x)=\frac { 1+x }{ 1+{ e }^{ x } } $</p>
<p>I am told to determine if $f(x)=x$ has multiple roots on $\left[ 0;+\infty \right] $</p>
<p>I tried to manually solve this equation, but I don't understand the result:</p>
<p>$f(x)-x=0\rightarrow \frac { 1+x }{ 1+{ e }^{ x } } =0\rightarrow { e }^{ x }(x+1)=0$</p>
<p>which would mean that -1 is the only solution. So there shouldn't be any solution on $\left[ 0;+\infty \right] $.</p>
<p>But when I check on Wolfram, I see that there is a unique solution, and that this solution is 0.56.</p>
<p>Can anybody help please ?</p>
<p>1) How can I find the number of solutions to this equation?
2) How can we explain the result I get and the one from Wolfram?</p>
|
Jack D'Aurizio
| 44,121 |
<p>If $f(x)=x$, then $f(x)-x=0$, or:
$$\frac{1-xe^x}{1+e^x}=0.\tag{1}$$
Since $1+e^x$ is always positive, $(1)$ is equivalent to:
$$ g(x)=xe^x = 1.\tag{2}$$
$g(x)$ is decreasing on $(-\infty,-1)$ and increasing on $(-1,+\infty)$, since $g'(x) = (x+1)e^x$; moreover $g(x)\le 0$ for $x\le 0$. So there is at most one real number $x$ such that $xe^x=1$. Exactly one, since $xe^x$ over $\mathbb{R}^+$ is continuous, positive and unbounded. Such a number is just:
$$ W(1) = 0.567143290409783873\ldots $$
where $W(\cdot)$ is the <a href="http://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow">Lambert W-function</a>. $W(1)$ can be computed through Newton's method.</p>
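<p>For the record, here is a minimal sketch (my addition) of the Newton iteration mentioned at the end, applied to $g(x)=xe^x-1$:</p>

```python
import math

# Newton's method on g(x) = x*e^x - 1, starting from x = 1.
x = 1.0
for _ in range(50):
    g  = x * math.exp(x) - 1.0
    dg = (x + 1.0) * math.exp(x)
    x -= g / dg

print(x)   # 0.5671432904097838... = W(1)
# And it is indeed the fixed point of the original map f:
print((1 + x) / (1 + math.exp(x)))   # equals x
```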
|
2,409,183 |
<p>Good evening all! I'm trying to find the eigenvalues and eigenvectors of the following problem</p>
<p>$$
\begin{bmatrix}
-10 & 8\\
-18 & 14\\
\end{bmatrix}*\begin{bmatrix}
x_{1}\\
x_{2}\\
\end{bmatrix}
$$</p>
<p>I've found that $λ_{1},_{2}=2$ where $λ$ is double eigenvalue of our matrix and now I'm trying to find its eigenvectors. So we have to solve the above system
$$(-10-λ)*x_{1}+8*x_{2}=0 , -18*x_{1}+(14-λ)*x_{2}=0$$
So what I do here is to replace $λ=2$ and we have
$$-12*x_{1}+8*x_{2}=0 , -18*x_{1}+12*x_{2}=0$$</p>
<p>From here I get that $3*x_{1}=2*x_{2}$ and I think that gives us the eigenvector $\begin{bmatrix}
2/3\\
1\\
\end{bmatrix}$</p>
<p>My book is telling me that the eigenvector for $λ=2$ is $\begin{bmatrix}
2\\
3\\
\end{bmatrix}$ but I can't find the same solution. Can someone help me?</p>
<p>EDIT : I added my own try at solving it.</p>
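<p>(Editorial addition, not part of the original post.) A quick numeric check confirms that $(2,3)$ and $(2/3,1)$ are the same eigenvector up to scaling, so both answers are correct:</p>

```python
# Verify A v = 2 v for both candidate eigenvectors; (2, 3) = 3 * (2/3, 1).
A = [[-10, 8], [-18, 14]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

for v in ([2, 3], [2/3, 1]):
    print(matvec(A, v), [2 * x for x in v])   # equal in both cases
```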
|
Giuseppe Negro
| 8,157 |
<p>The first integral always vanishes. To see this, write
$$I=\int_{\mathbb S^2} r_i r_j r_k\, d\Omega.$$
Suppose that $r_i$ appears with power $1$ or $3$ (if it appears with power $2$, then choose the other one). Then the change of variable $r_i\mapsto -r_i, r_j\mapsto r_j$ for $j\ne i$ leaves $d\Omega$ invariant and shows that
$$I=-I.$$</p>
<p>For the same reason, the second integral vanishes if there is a factor of $r_i$ appearing with power $1$ or $3$. The only nonvanishing integrals are therefore
$$J_1=\int_{\mathbb S^2} r_i^2 r_j^2\, d\Omega, \quad J_2=\int_{\mathbb S^2} r_i^4\, d\Omega.$$
Here you can assume without loss of generality that $r_i=z=\cos \theta, r_j=y=\sin\theta\cos \phi$. The integrals become
$$
J_1=\int_0^\pi (\cos\theta)^2(\sin\theta)^3\, d\theta \int_0^{2\pi} (\cos\phi)^2\, d\phi, \quad J_2=2\pi\int_0^\pi (\cos \theta)^4 \sin \theta\, d\theta,$$
and they can be computed directly.</p>
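<p>As a cross-check (my own numbers, not in the original answer), a midpoint-rule evaluation of those two products recovers $J_1 = 4\pi/15$ and $J_2 = 4\pi/5$:</p>

```python
import math

# Midpoint-rule evaluation of the two nonvanishing sphere integrals.
n = 2000
dth = math.pi / n
dph = 2 * math.pi / n

J1 = sum(math.cos(t)**2 * math.sin(t)**3
         for t in (dth * (i + 0.5) for i in range(n))) * dth \
     * sum(math.cos(p)**2 for p in (dph * (i + 0.5) for i in range(n))) * dph

J2 = 2 * math.pi * sum(math.cos(t)**4 * math.sin(t)
                       for t in (dth * (i + 0.5) for i in range(n))) * dth

print(J1, 4 * math.pi / 15)   # approx 0.8378
print(J2, 4 * math.pi / 5)    # approx 2.5133
```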
|
48,989 |
<p>How to prove $\text{Rank}(AB)\leq \min(\text{Rank}(A), \text{Rank}(B))$?</p>
|
Dave Au
| 197,331 |
<p>Since we have $\text{Col }AB \subseteq \text{Col }A$ and $\text{Row }AB \subseteq \text{Row }B$, therefore $\text{Rank }AB \leq \text{Rank }A$ and $\text{Rank }AB \leq \text{Rank }B$, then the result follows.</p>
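<p>To see the inequality concretely (an editorial sketch using exact rational row reduction, not part of the original answer):</p>

```python
from fractions import Fraction

def rank(M):
    """Row-reduce over the rationals and count the pivot rows."""
    M = [[Fraction(x) for x in row] for row in M]
    r, rows, cols = 0, len(M), len(M[0]) if M else 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [u - f * v for u, v in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [2, 4, 6]]       # rank 1 (second row is twice the first)
B = [[1, 0], [0, 1], [1, 1]]     # rank 2
AB = matmul(A, B)
print(rank(AB), rank(A), rank(B))   # rank(AB) <= min(rank A, rank B)
```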
|
1,586,286 |
<p>There are $30$ red balls and $50$ white balls. Sam and Jane take turns drawing balls until they have drawn them all. Sam goes first. Let $N$ be the number of times Jane draws the same color ball as Sam. Find $E[N].$</p>
<p>I have been proceeding with indicators...</p>
<p>$$ I_{j} =
\begin{cases}
1, & \text{if the $j^{th}$ pick is the same as the $(j-1)^{th}$ pick.} \\
0, & \text{otherwise}
\end{cases}
$$</p>
<p>But I am having problems with this. Because when I proceed, I am find all the instances where the same ball is drawn on every draw, not just Jane's. I know the answer is $\frac{9}{19}$ but I can't get there. How do I change the indicators, or the probability?</p>
|
JMoravitz
| 179,297 |
<p>As you started, let $I_n=\begin{cases}1&\text{if Jane draws the same color on turn}~2n~\text{as Sam did on turn}~2n-1\\
0&\text{otherwise}\end{cases}$</p>
<p>We have $E[N]=E[\sum\limits_{n=1}^{40}I_n]$ which by linearity of expectation is $=\sum\limits_{n=1}^{40}E[I_n]$ and by symmetry is $40E[I_1]$</p>
<p>Why is $E[I_1]=E[I_2]=\dots$? Imagine that they pull out two balls and announce that they are starting in the middle of the count and that these were in fact the $(2n-1)^{st}$ and $(2n)^{th}$ balls respectively and then continue to the beginning of the count afterwards. Clearly, the probabilities are the same regardless what label they give the turn numbers.</p>
<p>Let $S_r,S_w,J_r,J_w$ represent the events that Sam or Jane pulled a red or white ball on turn $2n-1$ or $2n$ respectively.</p>
<p>The probability that they matched color is then $Pr((S_r\cap J_r)\cup(S_w\cap J_w))=Pr(S_r)Pr(J_r|S_r)+Pr(S_w)Pr(J_w|S_w)=\frac{30\cdot 29+50\cdot 49}{80\cdot 79}$</p>
<p>The expected number of matches then will be $40$ times this number for a final total of:</p>
<p>$$\frac{1660}{79}\approx 21.01267$$</p>
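<p>The expectation can be confirmed both exactly and by simulation (my own sketch; the trial count and seed are arbitrary):</p>

```python
import random
from fractions import Fraction

# Exact value from the linearity-of-expectation argument above.
exact = Fraction(40 * (30 * 29 + 50 * 49), 80 * 79)
print(exact, float(exact))          # 1660/79, approx 21.0127

# Monte Carlo sanity check of the drawing process.
random.seed(0)
trials, total = 20_000, 0
for _ in range(trials):
    balls = ['r'] * 30 + ['w'] * 50
    random.shuffle(balls)
    total += sum(balls[2*k] == balls[2*k + 1] for k in range(40))
print(total / trials)               # close to 21.01
```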
|
2,227,027 |
<p>If $f(x)$ is defined everywhere except at $x=x_0$, would $f'(x_0)$ be undefined at $x=x_0$ as well?</p>
<p>One example is: $$f(x)=\ln(x)\rightarrow f'(x)=\frac{1}{x}$$</p>
<p>In this particular case, both $f(x)$ and $f'(x)$ are undefined at $x=0$. I wonder if this always holds true.</p>
<p>Thank you.</p>
|
Alex Peter
| 579,318 |
<p>In order to have a first derivative at a point, a function must be continuous at that point. </p>
<p>Differentiation requires continuity. A function that is not defined at some point cannot be continuous at that point because it does not exist at that point.</p>
<p>If the first derivative <span class="math-container">$f(x)$</span> is defined then all functions that have that derivative are defined as</p>
<p><span class="math-container">$$F(x) = \int f(x) dx$$</span></p>
<p>which is defining a family of functions that differ only by a constant. The Riemann integral requires that a function is defined on the region of integration because on any interval we must be able to create a mesh with arbitrarily small cells, and the evaluation of the integral cannot depend on the choice of mesh.</p>
<p>Technically, we could ignore or circumvent a point where a function is not defined, but then we cannot speak about the derivative of that function at that point.</p>
<p>As an option, sometimes, we still attach a value to the function at that point so that it becomes defined. Typically, we choose such value so that the function becomes continuous as well.</p>
<p>Example:</p>
<p><span class="math-container">$$F(x)=\frac{\sin(x)}{x}$$</span></p>
<p>is not defined at <span class="math-container">$0$</span> as the expression that we get by simply replacing <span class="math-container">$x=0$</span>, <span class="math-container">$0/0$</span> is undefined. Yet we can additionally specify <span class="math-container">$F(0)=1$</span> getting a perfectly continuous and differentiable function at <span class="math-container">$0$</span>.</p>
<p>There is a definition of <em>symmetric derivative</em> that can exist even when normal derivative does not exist</p>
<p><span class="math-container">$$\lim_{h \to 0}\frac{f(x+h) - f(x-h)}{2h}$$</span></p>
<p>This is, for example, defined for <span class="math-container">$|x|$</span> at <span class="math-container">$0$</span>, where the function does not have an ordinary derivative. The symmetric derivative evaluates to the same value as the ordinary derivative whenever both exist.</p>
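<p>A tiny numerical illustration (my addition) of the symmetric difference quotient for $|x|$ at $0$: it is exactly $0$ for every $h$, even though the one-sided quotients disagree.</p>

```python
# Symmetric vs one-sided difference quotients of |x| at 0.
f = abs
for h in (0.1, 0.01, 0.001):
    sym   = (f(0 + h) - f(0 - h)) / (2 * h)   # always 0.0
    right = (f(0 + h) - f(0)) / h             # always +1.0
    left  = (f(0) - f(0 - h)) / h             # always -1.0
    print(h, sym, right, left)
```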
<p>Apart from these two amends, the answer is strictly no, a function cannot be undefined at a point and have a derivative at that point.</p>
<p>Notice that the first derivative at <span class="math-container">$x_0$</span> is defined precisely as</p>
<p><span class="math-container">$$\lim_{h \to 0}\frac{f(x_0+h) - f(x_0)}{h}$$</span></p>
<p>so if <em>anything</em> makes this expression not evaluating, the first derivative does not exist. And in your case <span class="math-container">$f(x_0)$</span> is not defined, thus first derivative at <span class="math-container">$x_0$</span> is not defined as well.</p>
|
4,462,081 |
<p>I actually already have the solution to the following expression, yet it takes a long time for me to decipher the first operation provided in the answer. I understand all of the following except how to convert <span class="math-container">$\left(1+e^{i\theta \ }\right)^n=\left(e^{\frac{i\theta }{2}}\left(e^{\frac{-i\theta }{2}}+e^{\frac{i\theta }{2}}\right)\right)^n$</span></p>
<p>I am not sure if I wrote the expression correctly, I am new to this website.</p>
<p>Thank You!</p>
|
Átila Correia
| 953,679 |
<p>You can alternatively proceed as follows:
<span class="math-container">\begin{align*}
1 + \cos(x) + i\sin(x) & = 2\cos^{2}(x/2) +2i\sin(x/2)\cos(x/2)\\\\
& = 2\cos(x/2)(\cos(x/2) + i\sin(x/2))
\end{align*}</span></p>
<p>From this identity the desired claim follows:
<span class="math-container">\begin{align*}
(1 + \cos(x) + i\sin(x))^{n} = 2^{n}\cos^{n}(x/2)(\cos(nx/2) + i\sin(nx/2))
\end{align*}</span></p>
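<p>A quick numerical spot-check of the resulting identity (my addition), for a few values of $x$ and $n$:</p>

```python
import cmath
import math

# Check (1 + e^{ix})^n = 2^n cos^n(x/2) e^{i n x / 2} numerically.
for x in (0.3, 1.0, 2.5):
    for n in (1, 2, 5, 8):
        lhs = (1 + cmath.exp(1j * x)) ** n
        rhs = (2 * math.cos(x / 2)) ** n * cmath.exp(1j * n * x / 2)
        print(x, n, abs(lhs - rhs))   # differences at machine-precision level
```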
<p>Hopefully this helps!</p>
|
1,861,890 |
<p>Given that $$s_{n}=\frac{(-1)^{n}}{n},$$ I want to show $$\lim_{n\to\infty}{s_{n}}=0$$ in the metric space $X=\mathbb{C}.$ However, it seems to me that <strong>Archimedean Property is not applicable to the case above</strong>, because $s_{n}$ is not always positive for each $n$. Then, how can I do that?</p>
|
Cbjork
| 137,229 |
<p>Let $\varepsilon>0$ and $n>\frac{1}{\varepsilon}$. Then $\left | \dfrac{(-1)^n}{n}-0\right |=\left | \dfrac{(-1)^n}{n}\right |=\left | \dfrac{1}{n}\right |<\varepsilon$</p>
|
2,916,158 |
<p>I am trying to understand why we need Large deviation theory/principle.</p>
<p><strong>Here is what I understand so far</strong> based on the <a href="https://en.wikipedia.org/wiki/Large_deviations_theory#An_elementary_example" rel="noreferrer">Wikipedia</a>.
Let $S_n$ be a random variable which depends on $n$.
We are interested in</p>
<blockquote>
<p>How the probability $P(S_n > x)$ changes as $n \to \infty$.</p>
</blockquote>
<p>Very often, LDT is concerned with how fast $P(S_n >x)$ converges to $0$.
Then the answer to the above question is already known, it is $0$.
Then one might be interested in how fast it converges.
So I understand the notion of the rate function $I(x)$.</p>
<p><strong>Here I what I don't understand:</strong></p>
<ol>
<li>In order to know $P(S_n > x)$, it is a matter of finding the cumulative distribution function of $S_n$, as
$$
P(S_n > x) = 1 - P(S_n < x) = 1 - F_{S_n}(x).
$$
Then once we know the pdf of $S_n$, say $f_{S_n}(x)$, one can easily obtain
$$
P(S_n > x) = \int_x^{\infty} f_{S_n}(t)dt.
$$
If this is the case, I am not sure why we need LDP.</li>
<li>Okay, it is not always the case that we know the pdf of $S_n$.
When the pdf is unknown, I think the problem is then to estimate $P(S_n > x)$.
However, it seems that many LDP approaches require computing certain generating functions. For example,
$$
\lambda(k) = \lim_{n\to \infty} \frac{1}{n} \ln E[e^{nkS_n}],
\qquad
I(s) = \sup_{k \in \mathbb{R}} \{ks - \lambda(k)\}.
$$
It seems that calculating the above is definitely a lot harder than a direct calculation of $P(S_n > x)$. Thus I don't understand why people introduce quantities which are a lot harder to obtain.</li>
<li><em>"Why the rate?"</em>
Ok, regardless of #1 and #2, let's say we have $I(x)$. What now?
Given $I(x)$, can we estimate $P(S_n > x)$, or can we do something useful?
Or is LDP devoted solely to deriving the rate function $I(x)$?</li>
</ol>
<p>Any comments or answers will be very much appreciated. </p>
|
Alex R.
| 22,064 |
<p>For 1: "easily obtain" is a misnomer here. As an example you can get estimates for $P(S_n>x)$ very easily just by appealing to the CLT. The problem is that the CLT <em>does not hold</em> when you are too far from the mean (as a function of $n$). So you can "easily obtain" that $P(S_n>x)$ is close to 0, just like the gaussian distribution, but it's not at all clear <em>how close</em>. </p>
<p>As an example, imagine that $S_n$ denotes insurance payments for disasters on a yearly basis. The chance of there being a hurricane that causes a 10 trillion dollars of damage is close to 0. But how close to 0? Are we talking once every 10 years? 100 years? 1000000 years? To price your insurance policy, you'd need a good estimate on this, as a gaussian distribution would generally grossly underestimate this probability.</p>
<p>For 2: note that it's not harder, necessarily. Exponentiation is really natural, as in some easy situations, by Markov's inequality, $P(S_n>x)= P(e^{S_nt}>e^{xt})\leq \frac{\phi(t)}{e^{tx}}$, where $\phi$ is the moment generating function, gives you a decent bound after minimizing in $t$. What's difficult is the <em>lower</em> bound on $P(S_n>x)$, and this usually takes the most work in large deviations theory.</p>
<p>For 3: $E[e^{nkS_n}]$, written out as an approximate Riemann sum, is effectively <a href="https://en.wikipedia.org/wiki/Laplace%27s_method" rel="nofollow noreferrer">Laplace's Approximation</a> at work: $e^{-an}+e^{-bn}\approx e^{-an}$ when $a<b$ (e.g., even if $a=b-0.000001$). There's a saying that "rare events happen in the cheapest way possible", and this captures that statement: the most probable rare event is the one that carries the smallest exponent, and this is exactly where the concept of a "rate" comes in, to pin down how fast it goes to 0.</p>
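<p>As a concrete illustration (my own sketch, using fair coin flips rather than anything specific from the answer): for the sample mean $S_n$ of $n$ fair Bernoulli variables, the Chernoff upper bound $P(S_n \ge x) \le e^{-nI(x)}$ with rate $I(x) = x\ln(2x) + (1-x)\ln(2(1-x))$ can be compared against the exact binomial tail.</p>

```python
import math

def tail(n, x):
    """Exact P(S_n >= x) for the mean of n fair coin flips.
    Use x exactly representable in binary (e.g. 0.75) so n*x is exact."""
    k0 = math.ceil(n * x)
    return sum(math.comb(n, k) for k in range(k0, n + 1)) / 2 ** n

def chernoff(n, x):
    """Large-deviation upper bound exp(-n I(x)) for x >= 1/2."""
    I = x * math.log(2 * x) + (1 - x) * math.log(2 * (1 - x))
    return math.exp(-n * I)

for n in (20, 100, 500):
    print(n, tail(n, 0.75), chernoff(n, 0.75))   # bound always >= exact tail
```

Both columns decay exponentially at the same rate; the bound is off only by a sub-exponential factor, which is exactly what the rate function captures.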
|
947,730 |
<p>I'm trying to do this for practice but I'm just going nowhere with it, I'd love to see some work and answers on it.</p>
<p>Thanks :)</p>
<p>Find a polynomial that passes through the points (-2,-1), (-1,7), (2,-5), (3,-1). Present the answer in standard form.</p>
<p>What I've tried:</p>
<p><img src="https://i.stack.imgur.com/Wsvj9.jpg" alt="What I firstly tried but went nowhere"></p>
<p><img src="https://i.stack.imgur.com/R6USZ.jpg" alt="Another attempt to get somewhere"></p>
|
orangeskid
| 168,051 |
<p>$$f(x) = a x^3 + b x^2 + c x + d$$
\begin{eqnarray*}
-1 &=& a (-2)^3 + b(-2)^2 + c(-2) + d \\
7 &=& a (-1)^3 + b(-1)^2 + c(-1) + d \\
-5 &=& a(2)^3 + b (2)^2 + c (2) + d \\
-1 &=& a(3)^3 + b(3)^2 + c (3) + d
\end{eqnarray*}
or
\begin{eqnarray*}
-8 a + 4b -2c + d &=& -1\\
-{\ }a{\ } +{\ } b -{\ } c + d &=&{\ } 7 \\
8a + 4b + 2c + d&=&-5 \\
27a + 9b + 3c + d&=&-1
\end{eqnarray*}
with solution
\begin{eqnarray*}
a&=&{\ }{\ }1\\
b&=&-2\\
c&=&-5\\
d&=&{\ }{\ }5
\end{eqnarray*}
and so $f(x) = x^3 -2x^2-5x+5$</p>
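<p>The same system can be solved by machine (an illustrative sketch, not part of the original answer), using exact rational arithmetic so no rounding occurs:</p>

```python
from fractions import Fraction

# Build the 4x4 Vandermonde system [x^3, x^2, x, 1 | y] and run Gauss-Jordan.
pts = [(-2, -1), (-1, 7), (2, -5), (3, -1)]
M = [[Fraction(x) ** 3, Fraction(x) ** 2, Fraction(x), Fraction(1), Fraction(y)]
     for x, y in pts]

n = 4
for col in range(n):
    piv = next(i for i in range(col, n) if M[i][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    for i in range(n):
        if i != col:
            f = M[i][col] / M[col][col]
            M[i] = [u - f * v for u, v in zip(M[i], M[col])]

a, b, c, d = (M[i][4] / M[i][i] for i in range(n))
print(a, b, c, d)   # 1 -2 -5 5
```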
|
3,260,530 |
<p>I read from wikipedia that a neighbourhood of a point <span class="math-container">$p$</span> is a subset <span class="math-container">$V$</span> of a topological space <span class="math-container">$\{X,\tau\}$</span> that includes an open set <span class="math-container">$U$</span> such that <span class="math-container">$p \in U$</span>. </p>
<p>I would like to clarify, suppose that <span class="math-container">$p \in U$</span>, but <span class="math-container">$p \not\in W$</span>, if <span class="math-container">$V=\{U,W\}$</span>, does <span class="math-container">$V$</span> qualify as a neighborhood of <span class="math-container">$p$</span> even if it includes the set <span class="math-container">$W$</span> which does not contain <span class="math-container">$p$</span>?</p>
<p>Does this mean that if the topological space is <span class="math-container">$\mathbb{R}$</span>, is it accurate to say that the set <span class="math-container">$\{(-1,1),(2,3)\}$</span> is also a neighborhood of <span class="math-container">$0$</span>?</p>
|
cqfd
| 588,038 |
<p><span class="math-container">$V=\{U,W\} $</span> is <strong>not</strong> a subset of <span class="math-container">$X $</span>, it is a subset of the topology <span class="math-container">$\mathcal T $</span>(assuming <span class="math-container">$U $</span> and <span class="math-container">$W $</span> are open; otherwise it is a subset of <span class="math-container">$\mathcal P (X) $</span>).</p>
|
2,400,336 |
<p>My first try was to set the whole expression equal to $a$ and square both sides. $$\sqrt{6-\sqrt{20}}=a \Longleftrightarrow a^2=6-\sqrt{20}=6-\sqrt{4\cdot5}=6-2\sqrt{5}.$$</p>
<p>Multiplying by the conjugate I get $$a^2=\frac{(6-2\sqrt{5})(6+2\sqrt{5})}{6+2\sqrt{5}}=\frac{16}{6+2\sqrt{5}}.$$</p>
<p>But I still end up with an ugly radical expression.</p>
|
Vidyanshu Mishra
| 363,566 |
<p>$(\sqrt{5}-\sqrt{1})^2= 6-\sqrt{20}$</p>
|
2,400,336 |
<p>My first try was to set the whole expression equal to $a$ and square both sides. $$\sqrt{6-\sqrt{20}}=a \Longleftrightarrow a^2=6-\sqrt{20}=6-\sqrt{4\cdot5}=6-2\sqrt{5}.$$</p>
<p>Multiplying by the conjugate I get $$a^2=\frac{(6-2\sqrt{5})(6+2\sqrt{5})}{6+2\sqrt{5}}=\frac{16}{6+2\sqrt{5}}.$$</p>
<p>But I still end up with an ugly radical expression.</p>
|
Henry
| 6,460 |
<p>$$a=\sqrt{6-\sqrt{20}}$$</p>
<p>$$\implies a^2=6-\sqrt{20}$$</p>
<p>$$\implies(a^2-6)^2=20$$</p>
<p>$$\implies a^4-12a^2+16=0$$</p>
<p>$$\implies (a^2-2a-4)(a^2+2a-4)=0$$</p>
<p>so it will be one of the four possibilities $\pm\sqrt{5}\pm1$, and since $\sqrt{6-\sqrt{25}} \lt \sqrt{6-\sqrt{20}} \lt \sqrt{6-\sqrt{4}}$, you want the one which is in $[1,2]$ </p>
|
4,433,724 |
<p>I need to prove the following useful statement:
If <span class="math-container">$f(x,y)$</span> is differentiable at <span class="math-container">$(x_0,y_0)$</span>, then in the neighbourhood of <span class="math-container">$(x_0,y_0)$</span>, we have
<span class="math-container">$$
\Delta z = f(x_0+\Delta x,y_0+\Delta y) - f(x_0,y_0) = f_x \Delta x + f_y \Delta y + \alpha\Delta x+ \beta\Delta y
$$</span>
where <span class="math-container">$\alpha \rightarrow 0, \beta \rightarrow 0$</span> when <span class="math-container">$\rho = \sqrt{(\Delta x)^2+(\Delta y)^2} \rightarrow 0$</span>.</p>
<p>My try: according to the definition of differentiability, we can write</p>
<p><span class="math-container">$$
\Delta z = f(x_0+\Delta x,y_0+\Delta y) - f(x_0,y_0) = f_x \Delta x + f_y \Delta y + o(\rho)
$$</span></p>
<p>Here comes the vague thing, how to get <span class="math-container">$\alpha\Delta x+ \beta\Delta y$</span> from <span class="math-container">$o(\rho)$</span>?</p>
<p>Any thoughts? Thanks!</p>
|
Bram28
| 256,001 |
<p>I am not going to answer the questions that were given to you, but I will criticize your answer ... and in particular your reasoning:</p>
<blockquote>
<p><em><strong>What I think:</strong></em> I think that without any assumptions we can't use NMP at all, therefore we can tell that 1, 2, 4 are false and only 3 is true.</p>
</blockquote>
<p>No. This does not work.</p>
<p>First of all, notice that for 1., there <em>is</em> an assumption: for 1, the question is whether you can derive <span class="math-container">$\alpha \to \gamma$</span> if you assume <span class="math-container">$\gamma \to (\alpha \to \beta)$</span>. So your reasoning does not work for 1. So maybe 1 is True</p>
<p>Second, and more importantly, in these axiom systems, you can at any point write down any instantiation of any of the axioms. And given that, you can derive other things. In other words, you can derive things from nothing. Simple example:</p>
<p><span class="math-container">$A \to (B \to A) \ Axiom \ 1$</span></p>
<p>Done. And there! I just derived something without any assumptions. Indeed, I just showed that <span class="math-container">$\vdash_{N} A \to (B \to A)$</span></p>
<p>With that, your reasoning also does not work for 2. So maybe 2 is True.</p>
<p>Likewise, I assume that you said that 3 is True since the antecedent cannot be True, but since <span class="math-container">$\vdash_N \alpha$</span> <em>can</em> be True just fine, 3 can still be False.</p>
<p>And finally, I don't see how your argument would show 4 to be False: if you say that we can't derive anything in <span class="math-container">$N$</span> at all, then 4 should be True, rather than False. But if you say that if we have a set of assumptions <span class="math-container">$T$</span> from which we can derive <span class="math-container">$\alpha$</span>, then why shouldn't we also be able to derive <span class="math-container">$\alpha$</span> from <span class="math-container">$T$</span> in <span class="math-container">$CPL$</span>? Your reasoning does nothing towards answering that question.</p>
<p>Again, I am not going to answer the questions put to you ... I would like you to do some more thinking about these questions yourself first, and see if you can come up with some other answers or thoughts before I give any further feedback.</p>
|
4,433,724 |
<p>I need to prove the following useful statement:
If <span class="math-container">$f(x,y)$</span> is differentiable at <span class="math-container">$(x_0,y_0)$</span>, then in the neighbourhood of <span class="math-container">$(x_0,y_0)$</span>, we have
<span class="math-container">$$
\Delta z = f(x_0+\Delta x,y_0+\Delta y) - f(x_0,y_0) = f_x \Delta x + f_y \Delta y + \alpha\Delta x+ \beta\Delta y
$$</span>
where <span class="math-container">$\alpha \rightarrow 0, \beta \rightarrow 0$</span> when <span class="math-container">$\rho = \sqrt{(\Delta x)^2+(\Delta y)^2} \rightarrow 0$</span>.</p>
<p>My try: according to the definition of differentiability, we can write</p>
<p><span class="math-container">$$
\Delta z = f(x_0+\Delta x,y_0+\Delta y) - f(x_0,y_0) = f_x \Delta x + f_y \Delta y + o(\rho)
$$</span></p>
<p>Here comes the vague thing, how to get <span class="math-container">$\alpha\Delta x+ \beta\Delta y$</span> from <span class="math-container">$o(\rho)$</span>?</p>
<p>Any thoughts? Thanks!</p>
|
Mohamad S.
| 1,013,510 |
<p>Thanks to 'ancient mathematician' and 'Bram28' I got my answer to the question and here I post it:</p>
<p><strong>1. The claim is false.</strong></p>
<p>In 3 we prove that the system is sound therefore if <span class="math-container">$\gamma\rightarrow(\alpha\rightarrow\beta)\vdash_N(\alpha\rightarrow\gamma)$</span> <strong>then</strong> <span class="math-container">$\gamma\rightarrow(\alpha\rightarrow\beta)\vdash_{CPL}(\alpha\rightarrow\gamma)$</span>. If we define a valuation <span class="math-container">$\alpha=t,\beta=t,\gamma=f$</span> then <span class="math-container">$\gamma\rightarrow(\alpha\rightarrow\beta)$</span> is <strong>true</strong> but <span class="math-container">$(\alpha\rightarrow\gamma)$</span> is <strong>false</strong>. QED.</p>
<p><strong>2. The claim is true.</strong></p>
<p>We show a proof sequence that ends with the required statement:</p>
<ol>
<li><span class="math-container">$(\neg\beta\rightarrow\neg\alpha)\rightarrow(\alpha\rightarrow\beta)$</span> - Axiom 3</li>
<li><span class="math-container">$(\alpha\rightarrow\beta)\rightarrow(\gamma\rightarrow(\alpha\rightarrow\beta))$</span> - Axiom 1 where <span class="math-container">$A=(\alpha\rightarrow\beta)$</span> and <span class="math-container">$B=\gamma$</span></li>
<li><span class="math-container">$(\neg\beta\rightarrow\neg\alpha)\rightarrow(\gamma\rightarrow(\alpha\rightarrow\beta))$</span> - NMP (1,2)</li>
</ol>
<p><strong>3. The claim is true.</strong></p>
<p>Proving 4 will imply that 3 is true if <span class="math-container">$T=\emptyset$</span>:</p>
<p>We need to prove that if <span class="math-container">$T\vdash_NA$</span> <strong>then</strong> <span class="math-container">$T\vdash_{CPL}A$</span>. We show by structural induction.</p>
<p><strong>Base:</strong> if A is an Axiom or an assumption then it's true by default.</p>
<p><strong>Induction step:</strong> A is obtained from <span class="math-container">$(B\rightarrow C),(C\rightarrow D)$</span> where <span class="math-container">$A=(B\rightarrow D)$</span>. By the induction hypothesis, <span class="math-container">$T\vdash_{CPL}(B\rightarrow C)$</span> and <span class="math-container">$T\vdash_{CPL}(C\rightarrow D)$</span>.</p>
<p>It's true that <span class="math-container">$(\alpha\rightarrow\beta),(\beta\rightarrow\gamma)\vdash_{CPL}(\alpha\rightarrow\gamma)$</span> (I leave the proof of this to you).</p>
<p>So we get that <span class="math-container">$(B\rightarrow C),(C\rightarrow D)\vdash_{CPL}(B\rightarrow D)$</span> which is <span class="math-container">$(B\rightarrow C),(C\rightarrow D)\vdash_{CPL}A$</span>. QED.</p>
<p><strong>4. The claim is true.</strong></p>
<p>Proved in 3.</p>
|
339,880 |
<p>I'm interested in examples where the sum of a set with itself is a substantially bigger set with nice structure. Here are two examples:</p>
<ul>
<li><strong>Cantor set</strong>: Let <span class="math-container">$C$</span> denote the ternary Cantor set on the interval <span class="math-container">$[0,1]$</span>. Then <span class="math-container">$C+C = [0,2]$</span>. There are several nice proofs of this result. Note that the set <span class="math-container">$C$</span> has measure zero, so is "thin" compared to the interval <span class="math-container">$[0,2]$</span> whose measure is positive. </li>
<li><strong>Goldbach Conjecture</strong>: Let <span class="math-container">$P$</span> denote the set of odd primes and <span class="math-container">$E_6$</span> the set of even integers greater than or equal to 6. Then the conjecture states is equivalent to <span class="math-container">$P + P = E_6$</span>. Note that the primes have asymptotic density zero on the integers, so the set <span class="math-container">$P$</span> is "thin" relative to the positive integers.</li>
</ul>
<p>Are there other nice examples?</p>
|
Francesco Polizzi
| 7,460 |
<p>Every real number is the sum of two Liouville numbers, see </p>
<p>P. Erdős: <a href="http://dx.doi.org/10.1307/mmj/1028998621" rel="noreferrer"><em>Representations of real numbers as sums and products of Liouville numbers</em></a>, Mich. Math. J. <strong>9</strong>, 59-60 (1962). <a href="https://zbmath.org/?q=an:0114.26306" rel="noreferrer">ZBL0114.26306</a>. </p>
<p>So, denoting by <span class="math-container">$L \subset \mathbb{R}$</span> the set of Liouville numbers, we have <span class="math-container">$$\mathbb{R}=L+L.$$</span></p>
<p>Interestingly, in the the same paper it is also proved that every non-zero real number is the product of two Liouville numbers, so we have an equality for the multiplicative group of the form <span class="math-container">$$\mathbb{R}^{\times} = L L.$$</span></p>
<p>Note that <span class="math-container">$L$</span> <a href="https://en.wikipedia.org/wiki/Liouville_number" rel="noreferrer">has Lebesgue measure 0</a>, hence it is "thin" with respect to measure theory.</p>
|
339,880 |
<p>I'm interested in examples where the sum of a set with itself is a substantially bigger set with nice structure. Here are two examples:</p>
<ul>
<li><strong>Cantor set</strong>: Let <span class="math-container">$C$</span> denote the ternary Cantor set on the interval <span class="math-container">$[0,1]$</span>. Then <span class="math-container">$C+C = [0,2]$</span>. There are several nice proofs of this result. Note that the set <span class="math-container">$C$</span> has measure zero, so is "thin" compared to the interval <span class="math-container">$[0,2]$</span> whose measure is positive. </li>
<li><strong>Goldbach Conjecture</strong>: Let <span class="math-container">$P$</span> denote the set of odd primes and <span class="math-container">$E_6$</span> the set of even integers greater than or equal to 6. Then the conjecture is equivalent to <span class="math-container">$P + P = E_6$</span>. Note that the primes have asymptotic density zero on the integers, so the set <span class="math-container">$P$</span> is "thin" relative to the positive integers.</li>
</ul>
<p>Are there other nice examples?</p>
|
Gerry Myerson
| 3,684 |
<p>Every real number is the sum of two numbers whose continued fraction expansion has no partial quotient exceeding <span class="math-container">$4$</span>. Marshall Hall, Jr., On the sum and product of continued fractions, Annals of Mathematics, Second Series, Vol. 48, No. 4 (Oct., 1947), pp. 966-993, DOI: 10.2307/1969389, <a href="https://www.jstor.org/stable/1969389" rel="noreferrer">https://www.jstor.org/stable/1969389</a> </p>
<p>Here, <span class="math-container">$4$</span> is best possible. </p>
|
3,375,375 |
<p>I noticed this issue was throwing off a more sophisticated problem I'm working on. When computing the indefinite integral </p>
<p><span class="math-container">$$ I(x) = \int \frac{dx}{1-x} = \log | 1-x | + C,$$</span></p>
<p>I realized I could equivalently write</p>
<p><span class="math-container">$$ I(x) = - \int \frac{dx}{x-1} = -\log|x-1| +C = \log \frac{1}{|1-x|} + C.$$</span></p>
<p>How are these two answers compatible? What am I missing here? </p>
|
Kavi Rama Murthy
| 142,385 |
<p>The first one is wrong; you have missed a minus sign. Since <span class="math-container">$\frac{d}{dx}\log|1-x| = \frac{-1}{1-x}$</span>, the antiderivative is <span class="math-container">$-\log|1-x|+C$</span>. </p>
<p>Also <span class="math-container">$\log (\frac 1 c)=\log 1 -\log c=-\log c$</span>, so the two answers agree. </p>
|
1,261,504 |
<p>I am trying to proof $ab = \gcd(a,b)\mathrm{lcm}(a,b)$.</p>
<p>The definition of $\mathrm{lcm}(a,b)$ is as follows:</p>
<p>$t$ is the lowest common multiple of $a$ and $b$ if it satisfies the following:</p>
<p>i) $a | t$ and $b | t$ </p>
<p>ii) If $a | c$ and $b | c$, then $t | c$.</p>
<p>Similarly for the $\gcd(a,b)$.</p>
<p>Here is my proof:</p>
<p>Case I: $\gcd(a,b)\neq 1$</p>
<p>Suppose $\gcd(a,b) = d$.</p>
<p>Then $ab = dq_1b = dbq_1 = d(dq_1q_2)$</p>
<p>Claim: $\mathrm{lcm}(a,b) = dq_1q_2$</p>
<p>$a = dq_1 | dq_1q_2$ </p>
<p>$b = dq_2 | dq_2q_1$.</p>
<p>Suppose $\mathrm{lcm}(a,b) = c$.
Hence $c \leq dq_1q_2$.</p>
<p>To get the other inequality we have $dq_1 | a$ and $dq_2 | b$. Hence $dq_1 \leq a \leq c \leq dq_1q_2$ similarly for $dq_2$.</p>
<p>Suppose that c is strictly less than $dq_1q_2$, so we have $dq_1q_2 < cq_2$ and $dq_1q_2 < cq_1$.</p>
<p>So $dq_1q_2 < c < cq_2 < dq_2^2q_1$ and $dq_1q_2 < c < cq_2 < dq_1^2q_2$, but $dq_1^2q_2 > dq_1q_2$ so $c < dq_1q_2$ and </p>
<p>$c > dq_1q_2$ contradiction. Hence $c = dq_1q_2$ </p>
<p>Notice that the case where $\gcd(a,b) = 1$ we can just set $q_1 = a$ and $q_2 = b$, and the proof will be the same.</p>
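The identity being proved, $ab = \gcd(a,b)\,\mathrm{lcm}(a,b)$, is easy to spot-check numerically. Here is a minimal Python sketch (a sanity check, not part of the proof), computing the lcm by brute-force search so as not to assume the identity:

```python
from math import gcd

def lcm_bruteforce(a, b):
    # Smallest positive common multiple of a and b, found by
    # scanning the multiples of a until one is divisible by b.
    m = a
    while m % b != 0:
        m += a
    return m

for a in range(1, 40):
    for b in range(1, 40):
        assert a * b == gcd(a, b) * lcm_bruteforce(a, b)
print("ab == gcd(a,b) * lcm(a,b) verified for 1 <= a, b < 40")
```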
|
3x89g2
| 90,914 |
<p>$P(Z\le z)=\int_{-\infty}^z \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}\,dx=0.95$</p>
<p>Or, if probability density functions have not been covered, just consider the picture of a normal distribution. We know that $Pr(Z\le z)=0.5$ when $z=0$, since $0$ is in the middle. Now move $z$ to the right; the probability becomes larger. When should we stop?</p>
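To actually find the stopping point, one inverts the CDF numerically (the standard normal quantile function is not elementary). A small sketch using Python's standard-library `statistics.NormalDist`:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf(0.95)   # z such that P(Z <= z) = 0.95
print(round(z, 4))               # the familiar 95th-percentile value
# Consistency check: the CDF at z should return 0.95.
assert abs(NormalDist().cdf(z) - 0.95) < 1e-9
```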
|
2,631,284 |
<p>I'm trying to find all $n \in \mathbb{N}$ such that</p>
<p>$(n+2) \mid (n^2+5)$ </p>
<p>as the title says, I've tried numbers up to $20$ and found that $1, 7$ are solutions and I suspect that those are the only $2$ solutions, however I have no idea how to show that.</p>
<p>I've done nothing but basic transformations:</p>
<p>$(n+2) \mid (n^2+5)$ </p>
<p>$\iff n^2+5 = k(n+2)$</p>
<p>$\iff n^2+5 \mod(n+2) = 0$</p>
<p>$\iff (n^2 \mod(n+2) + 5 \mod(n+2)) \mod(n+2) = 0$</p>
<p>Now I suspect the next step is to find all possible solutions for</p>
<p>$n^2 \mod(n+2)$, which I have no idea how to do.</p>
|
Roman83
| 309,360 |
<p>$$(n+2)|(n^2-4)$$
Then $$n+2|(n^2+5)-9$$
Then $$n+2|9$$</p>
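Since $n+2\mid 9$ forces $n+2\in\{3,9\}$, a quick brute-force check (a sketch, not a proof) confirms that $n=1$ and $n=7$ are the only solutions over a large range:

```python
# Collect all n in a large range with (n+2) dividing n^2 + 5.
solutions = [n for n in range(1, 10_000) if (n * n + 5) % (n + 2) == 0]
print(solutions)  # [1, 7]
```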
|
850,390 |
<p>Let $f(x)$ be differentiable function from $\mathbb R$ to $\mathbb R$, If $f(x)$ is even, then $f'(0)=0$. Is it always true?</p>
|
Brandon
| 113,565 |
<p>Hint: If a function $f$ is diffentiable at $x$, then
$$
f'(x)=\lim\limits_{h\to 0}\frac{f(x+h)-f(x-h)}{2h}
$$</p>
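For an even $f$, the numerator $f(0+h)-f(0-h)=f(h)-f(-h)$ vanishes identically, so the symmetric quotient at $0$ is $0$ for every $h$. A tiny numerical illustration (the function below is an arbitrary even polynomial, written in terms of $x\cdot x$ so that $f(-x)=f(x)$ holds exactly in floating point):

```python
def f(x):
    t = x * x            # t is identical for x and -x, so f is exactly even
    return t * t + 3 * t + 1

for h in (0.1, 0.01, 0.001):
    sym_quot = (f(0 + h) - f(0 - h)) / (2 * h)
    assert sym_quot == 0.0   # numerator vanishes identically for even f
print("symmetric difference quotient at 0 is exactly 0")
```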
|
850,390 |
<p>Let $f(x)$ be differentiable function from $\mathbb R$ to $\mathbb R$, If $f(x)$ is even, then $f'(0)=0$. Is it always true?</p>
|
gnasher729
| 137,175 |
<p>Let $f(x)=\sqrt{|x|}$. Is it an even function? Is it continuous? What is $f'(0)$? </p>
|
2,766,879 |
<p>Show that there are no primitive pythagorean triple $(x,y,z)$ with $z\equiv -1 \pmod 4$. </p>
<p>I once proved that, for all integers $a,b$, $a^2 + b^2$ is congruent to $0$, $1$, or $2$ modulo $4$. I feel like it is enough to conclude by considering $a=x$, $b=y$ and $\gcd(x,y)=1$. But I am not completely sure that this is the way the proof should end.</p>
|
user
| 505,767 |
<p>Yes it is correct since for $x>1$</p>
<p>$$f(x)=\frac{x^2+x+2}{x-1}\ge 7 \iff x^2-6x+9=(x-3)^2\ge 0$$</p>
<p>and for $x<1$</p>
<p>$$f(x)=\frac{x^2+x+2}{x-1}\le -1 \iff x^2+2x+1=(x+1)^2\ge 0$$</p>
<p>then for $x>1$ and $y=\frac1x<1$</p>
<p>$$f(x)-f\left(\frac1x\right)=f(x)-f(y)\ge 7+1=8$$</p>
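The chain of inequalities can be sanity-checked numerically. The sketch below samples $x>1$ at random and verifies $f(x)-f(1/x)\ge 8$ (with a small tolerance for floating-point error):

```python
import random

def f(x):
    return (x * x + x + 2) / (x - 1)

random.seed(0)
for _ in range(10_000):
    x = 1.001 + 99 * random.random()     # x in (1, 100], safely above 1
    assert f(x) - f(1 / x) >= 8 - 1e-9   # f(x) >= 7 and f(1/x) <= -1
print("f(x) - f(1/x) >= 8 held on all samples")
```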
|
4,567,390 |
<p>Let</p>
<ul>
<li><span class="math-container">$X$</span> be a metric space,</li>
<li><span class="math-container">$\mathcal C_b(X)$</span> the space of real-valued bounded continuous functions,</li>
<li><span class="math-container">$\mathcal C_0(X)$</span> the space of real-valued continuous functions that vanish at infinity, and</li>
<li><span class="math-container">$\mathcal C_c(X)$</span> the space of real-valued continuous functions with compact supports.</li>
</ul>
<p>Then <span class="math-container">$\mathcal C_b(X)$</span> and <span class="math-container">$\mathcal C_0(X)$</span> are real Banach space with supremum norm <span class="math-container">$\|\cdot\|_\infty$</span>. It's mentioned <a href="https://math.stackexchange.com/questions/184766/when-is-c-0x-separable?rq=1">here</a> that <em>"if <span class="math-container">$E$</span> is locally compact and separable, then <span class="math-container">$\mathcal C_0 (E)$</span> is separable"</em>. Now I would like to prove a somehow reverse direction, i.e.,</p>
<blockquote>
<p><strong>Theorem:</strong> If <span class="math-container">$X$</span> is locally compact and <span class="math-container">$\mathcal C_c (X)$</span> is separable, then <span class="math-container">$X$</span> is separable.</p>
</blockquote>
<p>Could you have a check on my below attempt?</p>
<hr />
<p><strong>Proof:</strong> Because <span class="math-container">$\mathcal C_0(X)$</span> <a href="https://math.stackexchange.com/questions/4566771/mathcal-c-0x-is-the-closure-of-mathcal-c-cx-in-mathcal-c-bx-is-l">is the closure</a> of <span class="math-container">$\mathcal C_c(X)$</span> in <span class="math-container">$\mathcal C_b(X)$</span>, it suffices to show that</p>
<blockquote>
<p>If <span class="math-container">$X$</span> is locally compact and <span class="math-container">$\mathcal C_0 (X)$</span> is separable, then <span class="math-container">$X$</span> is separable.</p>
</blockquote>
<p>Let <span class="math-container">$\mathcal M(X)$</span> be the space of finite signed Radon measures on <span class="math-container">$X$</span>. Let <span class="math-container">$[\cdot]$</span> be the total variation norm on <span class="math-container">$\mathcal M(X)$</span>. We define a map
<span class="math-container">$$
f:X \to \mathcal M(X), x \mapsto \delta_x.
$$</span></p>
<p>Notice that <span class="math-container">$f(X)$</span> is a subset of the closed unit ball of <span class="math-container">$\mathcal M (X)$</span>. Let <span class="math-container">$E := \mathcal C_0 (X)$</span>. By the <a href="https://math.stackexchange.com/questions/4500358/3-versions-of-riesz-markov-kakutani-theorem">Riesz–Markov–Kakutani theorem</a>, <span class="math-container">$(\mathcal M(X), [\cdot])$</span> is isometrically isomorphic to <span class="math-container">$E^*$</span> through a canonical map <span class="math-container">$\Phi:\mathcal M(X) \to E^*$</span>. Let <span class="math-container">$B$</span> be the closed unit ball of <span class="math-container">$E^*$</span>. Clearly, <span class="math-container">$\Phi \circ f (X) \subset B$</span>.</p>
<p>By Banach-Alaoglu's theorem, <span class="math-container">$B$</span> is compact in the weak<span class="math-container">$^*$</span> topology <span class="math-container">$\sigma(E^*, E)$</span>. Because <span class="math-container">$E$</span> is separable, the subspace topology <span class="math-container">$\sigma_B(E^*, E)$</span> that <span class="math-container">$\sigma(E^*, E)$</span> induces on <span class="math-container">$B$</span> <a href="https://math.stackexchange.com/questions/4546596/let-e-be-a-separable-banach-space-then-e-star-is-metrizable-in-the-wea?rq=1">is metrizable</a>. It follows that <span class="math-container">$\sigma_B(E^*, E)$</span> is compact metrizable and thus separable.</p>
<p>If one proves that <span class="math-container">$\Phi \circ f$</span> is a <a href="https://math.stackexchange.com/questions/4567328/the-bijection-in-riesz-markov-kakutani-theorem-is-a-homeomorphism-w-r-t-both-no">homeomorphism</a> from <span class="math-container">$X$</span> (together with the metric topology) onto <span class="math-container">$\Phi \circ f(X)$</span> (together with its subspace topology induced by <span class="math-container">$\sigma_B(E^*, E)$</span>), it would follow that <span class="math-container">$X$</span> is separable.</p>
|
Analyst
| 1,019,043 |
<p>As mentioned by @OliverDíaz, the fact that <span class="math-container">$f^{-1}$</span> is a homeomorphism (in weak<span class="math-container">$^*$</span> topology of <span class="math-container">$\mathcal M(X)$</span>) from <span class="math-container">$f(X)$</span> onto <span class="math-container">$X$</span> is not clear. I have added a proof below.</p>
<hr />
<p>WLOG, we assume <span class="math-container">$d \le 1$</span>. For <span class="math-container">$x \in X$</span> and <span class="math-container">$r>0$</span>, let</p>
<ul>
<li><span class="math-container">$B_r (x)$</span> be the open ball centered at <span class="math-container">$x$</span> with radius <span class="math-container">$r$</span>.</li>
<li><span class="math-container">$\overline B_r (x)$</span> the closed ball centered at <span class="math-container">$x$</span> with radius <span class="math-container">$r$</span>.</li>
<li><span class="math-container">$\overline{B_r (x)}$</span> the closure of <span class="math-container">$B_r (x)$</span>.</li>
</ul>
<p>Notice that <span class="math-container">$\overline{B_r (x)} \subset \overline B_r (x)$</span> but <strong>not</strong> necessarily that <span class="math-container">$\overline{B_r (x)} = \overline B_r (x)$</span>. Assume <span class="math-container">$a, x_n \in X$</span> such that <span class="math-container">$\delta_{x_n} \to \delta_a$</span> in weak<span class="math-container">$^*$</span> topology, i.e.,
<span class="math-container">$$
\forall f \in \mathcal C_0(X) : \int_X f \mathrm d \delta_{x_n} \to \int_X f \mathrm d \delta_{a} \quad \text{as} \quad n \to \infty.
$$</span></p>
<p>Because <span class="math-container">$X$</span> is locally compact, there is a sequence <span class="math-container">$(r_m) \subset \mathbb R_{>0}$</span> such that <span class="math-container">$r_m \searrow 0$</span> and <span class="math-container">$\overline B_{r_m} (a)$</span> is compact. Clearly, <span class="math-container">$\overline{B_{r_m} (a)}$</span> is compact. Let <span class="math-container">$C_m := X \setminus B_{r_m} (a)$</span>, and
<span class="math-container">$$
f_m (x) := d(x, C_m) \quad \forall x \in X.
$$</span></p>
<p>Then <span class="math-container">$f_m \in \mathcal C_b(X)$</span>. Because <span class="math-container">$C_m$</span> is closed,
<span class="math-container">$$
f_m (x) \neq 0 \iff d(x, C_m)>0 \iff x \notin C_m \iff x \in B_{r_m} (a).
$$</span></p>
<p>So
<span class="math-container">$$
\operatorname{supp} (f_m) = \overline{B_{r_m} (a)}.
$$</span></p>
<p>Hence <span class="math-container">$f_m \in \mathcal C_c (X)$</span> and thus
<span class="math-container">$$
f_m(x_n) \xrightarrow{n \to \infty} f_m(a) \quad \forall m \in \mathbb N.
$$</span></p>
<p>This implies
<span class="math-container">$$
\lim_{n \to \infty} d(x_n, C_m) = d(a, C_m) \quad \forall m \in \mathbb N.
$$</span></p>
<p>As such,
<span class="math-container">$$
\lim_{n \to \infty} d(x_n, a) \le \lim_{n \to \infty} d(x_n, C_m) + d(a, C_m) = 2 d(a, C_m) \le 2r_m \quad \forall m \in \mathbb N.
$$</span></p>
<p>The proof is completed by taking the limit <span class="math-container">$m \to \infty$</span>.</p>
|
4,048,785 |
<p>Show that <span class="math-container">$2r^2-3$</span> is never a square, <span class="math-container">$r=2,3,...$</span></p>
<p>I know that no perfect square can have <span class="math-container">$2, 3, 7$</span>, or <span class="math-container">$8$</span> as its last digit. I'm not sure how to do this with congruence/mod notation. Any hints or solutions are greatly appreciated. I also tried to assume it was equal to a perfect square and through some algebraic manipulations arrive at a contradiction, but that proved to be difficult. I'm curious to know if it can be done algebraically.</p>
|
José Carlos Santos
| 446,262 |
<p>Each perfect square is congruent to <span class="math-container">$0$</span>, <span class="math-container">$1$</span> or <span class="math-container">$4$</span> modulo <span class="math-container">$8$</span>. But each number of the form <span class="math-container">$2n^2-3$</span> is congruent to <span class="math-container">$5$</span> or <span class="math-container">$7$</span> modulo <span class="math-container">$8$</span>.</p>
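Both congruence claims can be verified by exhausting residues (a sketch; ranging $n$ up to 1000 covers every residue class mod 8 many times over):

```python
# Residues of perfect squares mod 8, and of 2n^2 - 3 mod 8 for n >= 2.
squares_mod8 = {(n * n) % 8 for n in range(1000)}
values_mod8 = {(2 * n * n - 3) % 8 for n in range(2, 1000)}
print(squares_mod8)   # squares hit only {0, 1, 4}
print(values_mod8)    # 2n^2 - 3 hits only {5, 7}
assert squares_mod8.isdisjoint(values_mod8)   # so 2n^2 - 3 is never a square
```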
|
4,427,651 |
<p>In programming, we define an "array" (basically an ordered n-tuple) in the following way:</p>
<p><span class="math-container">$$a=[3,5].$$</span></p>
<p>Later on, if we want to refer to the first element of the predetermined array/pair/n-tuple, we write <span class="math-container">$a[0]$</span> (because in programming you start counting from <span class="math-container">$0$</span>, not <span class="math-container">$1$</span>), which will be equal to <span class="math-container">$3$</span>.</p>
<p>Is there a similar notation for doing this in mathematics? E.g something like <code>(3,5).firstElement</code>, which would equal <span class="math-container">$3$</span>? Or is such a reference only possible by using the set-theoretic definition of an ordered pair?</p>
|
Georges Elencwajg
| 3,217 |
<p>A standard notation (used in Dieudonné, Foundations of Modern Analysis, page 12; Bourbaki, General Topology, Chapter 1, §4.1, page 44.) is <span class="math-container">$\operatorname{pr_i}(a_1,\cdots,a_n)=a_i$</span> since, given a set <span class="math-container">$A$</span>, the map <span class="math-container">$$A^n\to A:(a_1,\cdots,a_n) \mapsto a_i$$</span> is called the <span class="math-container">$i$</span>-th projection.</p>
<p>This notation has the merit that we have (almost) the same word in French (<em>projection</em>), German (<em>Projektion</em>), Dutch (<em>projectie</em>), Italian (<em>proiezione</em>), Spanish (<em>proyección</em>), etc...and all these words also start with "pr".</p>
|
4,427,651 |
<p>In programming, we define an "array" (basically an ordered n-tuple) in the following way:</p>
<p><span class="math-container">$$a=[3,5].$$</span></p>
<p>Later on, if we want to refer to the first element of the predetermined array/pair/n-tuple, we write <span class="math-container">$a[0]$</span> (because in programming you start counting from <span class="math-container">$0$</span>, not <span class="math-container">$1$</span>), which will be equal to <span class="math-container">$3$</span>.</p>
<p>Is there a similar notation for doing this in mathematics? E.g something like <code>(3,5).firstElement</code>, which would equal <span class="math-container">$3$</span>? Or is such a reference only possible by using the set-theoretic definition of an ordered pair?</p>
|
Nick Matteo
| 59,435 |
<p>It's common to assume that the elements of a tuple or vector named <span class="math-container">$a$</span> are indexed <span class="math-container">$a = \langle a_1, a_2, \dots \rangle$</span>, so you would just refer to <span class="math-container">$a_1$</span> for the first element.</p>
<p>Similarly, the elements of a matrix <span class="math-container">$A$</span> are often assumed to be indexed as <span class="math-container">$a_{i,j}$</span>.</p>
|
512,591 |
<p>It is always confusing to prove with $\not\equiv$. Should I try the contrapositive?</p>
|
BaronVT
| 39,526 |
<p>Well, in this case the contrapositive is "if $a^2$ is $\textit{not}$ congruent to $1$ mod $3$ then ..." which will cause you the same kind of problem.</p>
<p>In this case, you might just try working directly with the hypothesis. If $a$ is not congruent to $0$, then it has to be congruent to either $1$ or $2$, right? Consider each of these cases separately, and you should see why this is true.</p>
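The two cases can be confirmed exhaustively: if $a \equiv 1 \pmod 3$ then $a^2 \equiv 1$, and if $a \equiv 2$ then $a^2 \equiv 4 \equiv 1$. A one-screen check over a range of integers (a sanity check, not the proof):

```python
# Python's % returns a non-negative residue for a positive modulus,
# so this covers negative a correctly as well.
for a in range(-1000, 1001):
    if a % 3 != 0:                 # a is congruent to 1 or 2 (mod 3)
        assert (a * a) % 3 == 1    # then a^2 is congruent to 1 (mod 3)
print("a % 3 != 0 implies a*a % 3 == 1 on [-1000, 1000]")
```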
|
609,845 |
<blockquote>
<p>$5$ integers are paired in all possible ways and each pair of integers
is added. The $10$ sums obtained are $1,4,7,5,8,9,11,14,15,10$. What are the
$5$ integers?</p>
</blockquote>
<p>This is what I got so far:</p>
<p>To get all possible pairs, each integer must be paired with the other $4$ integers.</p>
<p>At this point I am stuck. Is the only way to try all possible pairs and see what works? There must be an easier way...</p>
<p>EDIT: Fixed TYPO</p>
|
David Holden
| 79,543 |
<p>1well you have in one sense an overdetermined problem, since there are ten equations in five unknowns.</p>
<p>However, you are unsure which equation gives which result.
A sensible procedure would be to begin by ordering the unknowns:
$$
x_1 \le x_2 \le x_3 \le x_4 \le x_5
$$</p>
<p>However, you also know (by adding all the equations) that
$$
4\sum_{i=1}^5 x_i = 74
$$
but since $4 \not \mid 74$ there may be a difficulty...</p>
<p>Added: just seen Tim's comment! I didn't do the due diligence of counting!</p>
<p>Now the typo is fixed, we get $84$ for the sum, so the five numbers must add to $21$.
One definite step we can take is to assume that the smallest and largest sums must be the sums of the two smallest and the two largest numbers, so that
$$
x_1+x_2= 1 \\
x_4+x_5=15
$$
Adding these and subtracting from $21$ gives us the definite result that $x_3=5$. The fact that the second smallest sum must be $x_1+x_3$ allows you to ascertain $x_1$, and so forth.</p>
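The procedure sketched here (divide the grand total by $4$, peel off the extreme sums, then use the second-smallest sum $x_1+x_3$ and the second-largest sum $x_3+x_5$) can be automated. The sketch below tests it on a consistent instance of our own construction, since the recovery only makes sense when the ten given sums really do come from five integers:

```python
from itertools import combinations

def recover_five(sums):
    # Recover x1 <= x2 <= x3 <= x4 <= x5 from their 10 pairwise sums.
    s = sorted(sums)
    total = sum(s) // 4            # each xi occurs in exactly 4 pairs
    x3 = total - s[0] - s[-1]      # s[0] = x1+x2 and s[-1] = x4+x5
    x1 = s[1] - x3                 # second smallest sum is x1+x3
    x2 = s[0] - x1
    x5 = s[-2] - x3                # second largest sum is x3+x5
    x4 = s[-1] - x5
    return [x1, x2, x3, x4, x5]

xs = [-1, 2, 5, 6, 9]                                 # made-up example
pair_sums = [a + b for a, b in combinations(xs, 2)]   # its 10 pairwise sums
assert recover_five(pair_sums) == xs
print(recover_five(pair_sums))  # [-1, 2, 5, 6, 9]
```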
|
3,408,458 |
<p>I have 4 vectors:</p>
<ul>
<li><span class="math-container">$|\vec{AC}|=|\vec{AD}|$</span></li>
<li><span class="math-container">$|\vec{BC}|=|\vec{BE}|$</span></li>
</ul>
<p><span class="math-container">$\angle (\vec{AC}, \vec{AD}) $</span> =<span class="math-container">$\angle (\vec{BC}, \vec{BE}) $</span></p>
<p><a href="https://i.stack.imgur.com/eEgie.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eEgie.png" alt="vector image" /></a></p>
<p>If I know the length and angle of all vectors, how do I find the distance between D and E?</p>
<h2>EDIT expanding to more vectors</h2>
<p>Given all possible vector pairs that satisfy:</p>
<ul>
<li><span class="math-container">$|\vec{xC}|=|\vec{xy}|$</span> Vector ends at C</li>
<li><span class="math-container">$\angle (\vec{xC}, \vec{xy}) $</span> =<span class="math-container">$\angle (\vec{BC}, \vec{BE}) $</span></li>
</ul>
<p>How do you describe the line that forms between all points $y$? (The line through $y_1$, $y_2$, and the black dots in the image.)</p>
<p><a href="https://i.stack.imgur.com/4K8Zc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4K8Zc.png" alt="All y points" /></a></p>
|
Mohammad Riazi-Kermani
| 514,496 |
<p><span class="math-container">$$\vec {DE} =\vec {DA}+\vec {AB}+\vec {BE} =-\vec {AD}+\vec {AB}+\vec {BE}$$</span></p>
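In coordinates the identity is immediate to check. The points below are made up purely for illustration (the original problem gives lengths and angles rather than coordinates, so treat this as a sketch of the decomposition, not the full solution):

```python
# Hypothetical coordinates, chosen only to illustrate DE = -AD + AB + BE.
A, B, D, E = (0.0, 0.0), (4.0, 0.0), (1.0, 2.0), (5.0, 1.5)

def vec(p, q):
    # Vector from point p to point q, componentwise.
    return (q[0] - p[0], q[1] - p[1])

AD, AB, BE = vec(A, D), vec(A, B), vec(B, E)
DE_chain = (-AD[0] + AB[0] + BE[0], -AD[1] + AB[1] + BE[1])
assert DE_chain == vec(D, E)          # -AD + AB + BE equals DE
dist_DE = (DE_chain[0] ** 2 + DE_chain[1] ** 2) ** 0.5
print(dist_DE)                        # |DE| for these sample points
```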
|
66,000 |
<p>In 2008 I wrote a group theory package. I've recently started using it again, and I found that one (at least) of my functions is broken in Mathematica 10. The problem is complicated to describe, but the essence of it occurs in this line:</p>
<pre><code>l = Split[l, Union[#1] == Union[#2] &]
</code></pre>
<p>Here <code>l</code> is a list of sets. The intent of the line is to split <code>l</code> into sublists of identical sets. Each set is represented as a list of group elements. I say "sets" rather than "lists" because two sets are to be considered identical if they contain the same members in any order. This is the reason for comparing <code>Union</code>s of the sets. </p>
<p>This used to work, but now it doesn't. The problem is that sets that, as far as I can tell, are equal, do not compare equal by this test. In fact, the comparison <code>e1 == e2</code> for indistinguishable group elements <code>e1</code> and <code>e2</code> also sometimes fails to yield <code>True</code>. (It remains unevaluated; <code>e1 === e2</code> evaluates to <code>False</code>.) The elements can be fairly complicated objects. For instance, in one case where I'm having this problem, <code>ByteCount[e1]</code> is 2448. But <code>e1</code> and <code>e2</code> are indistinguishable. For instance, <code>ToString[FullForm[e1]] === ToString[FullForm[e2]]</code> yields <code>True</code>. </p>
<p>I've shown one line where this failure to compare equal causes a problem. In this one case I could probably work around the problem by defining <code>UpValue</code>s for <code>e1 == e2</code> or <code>e1 === e2</code>. But, unfortunately, the problem raises its head in other contexts as well. For instance, I am trying to use <code>GraphPlot</code> to show a cycle graph of the elements. <code>GraphPlot</code> takes a list of edges of the form <code>ei->ej</code>. In order to recognize that edges <code>ei->ej</code> and <code>ei->ek</code> are both connected to <code>ei</code>, <code>GraphPlot</code> needs to know that the <code>ei</code> appearing in the first edge is the same as <code>ei</code> in the second. It doesn't, so I get a disconnected graph. Unlike <code>Split</code>, <code>GraphPlot</code> doesn't provide a hook to enable me to tell it how to test vertexes for equality, and it apparently doesn't use <code>Equal</code> or <code>SameQ</code>, either, as <code>UpValue</code>s I define for those are not used. </p>
<p>(Sorry about the generic tag -- I couldn't find anything more specific. Suggestions welcome.)</p>
<p>EDIT: In response to Szabolcs request, here is the <code>FullForm</code> of such an object:</p>
<pre><code>a = sdp[znz[1, 3],
aut[List[Rule[znz[1, 3], znz[1, 3]]],
List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]],
Rule[znz[2, 3], znz[2, 3]]],
Dispatch[List[Rule[znz[0, 3], znz[0, 3]],
Rule[znz[1, 3], znz[1, 3]], Rule[znz[2, 3], znz[2, 3]]]]],
Function[NonCommutativeMultiply[Slot[2], Slot[1]]]]
b = sdp[znz[1, 3],
aut[List[Rule[znz[1, 3], znz[1, 3]]],
List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]],
Rule[znz[2, 3], znz[2, 3]]],
Dispatch[List[Rule[znz[0, 3], znz[0, 3]],
Rule[znz[1, 3], znz[1, 3]], Rule[znz[2, 3], znz[2, 3]]]]],
Function[NonCommutativeMultiply[Slot[2], Slot[1]]]]
a === b
(* ==> False *)
</code></pre>
<p>Note that <code>a</code> and <code>b</code> are identical and <code>ToString[a] === ToString[b]</code> gives <code>True</code>.</p>
|
Szabolcs
| 12 |
<p>This is not a full answer, just a start towards a solution.</p>
<p>The culprit is <code>Dispatch</code>, which became <a href="http://reference.wolfram.com/mathematica/ref/AtomQ.html" rel="noreferrer">atomic</a> in version 10, and comparison wasn't implemented for it.</p>
<p>Here's a small test in version 9:</p>
<pre><code>In[1]:= a =
Dispatch[{"a" -> 1, "b" -> 2, "c" -> 3, "d" -> 4, "e" -> 5,
"f" -> 6, "g" -> 7, "h" -> 8, "i" -> 9, "j" -> 10, "k" -> 11,
"l" -> 12, "m" -> 13}];
In[2]:= b =
Dispatch[{"a" -> 1, "b" -> 2, "c" -> 3, "d" -> 4, "e" -> 5,
"f" -> 6, "g" -> 7, "h" -> 8, "i" -> 9, "j" -> 10, "k" -> 11,
"l" -> 12, "m" -> 13}];
In[3]:= AtomQ[a]
Out[3]= False
In[4]:= a === b
Out[4]= True
</code></pre>
<p>(Note that there need to be a certain number of elements in the dispatch table before it actually builds a hash table from it. It won't happen when using only a few rules.)</p>
<p>In version 10 we get</p>
<pre><code>In[3]:= AtomQ[a]
Out[3]= True
In[4]:= a === b
Out[4]= False
</code></pre>
<p>Unfortunately I do not see an easy solution around this as the <code>Dispatch</code> objects are only small parts of the expressions you are comparing. A "proper" solution would be a significant reworking of the package, but who has time for that ... ?</p>
<p>Another question is: is this a bug? I suspect that many <em>programmers</em> (which most of us aren't) wouldn't consider it a bug. There was good reason to make <code>Dispatch</code> atomic, for good performance, and seamlessly integrating atomic objects into Mathematica is hard. Lots of operations need special implementation for a seamless integration: pattern matching, comparison, translation to/from some sort of FullForm, etc. Also, the type of comparison you are doing between these expressions could be considered a hack or a quick-and-dirty solution.</p>
<p>On the other hand: Mathematica has always had the implicit promise that we can rely on <a href="http://reference.wolfram.com/language/tutorial/EverythingIsAnExpression.html" rel="noreferrer">everything being an expression</a> and one of its big strengths is that it's really easy to hack together useful and working solutions. They are often not proper solutions, but most people who use Mathematica (researchers) don't <em>develop software</em>, as programmers do. We just create quick and dirty solutions for our own immediate use, targeted at the problem at hand. And Mathematica really shines at this.</p>
<p>You should definitely contact WRI about this and let them know about your use case.</p>
<hr />
<h2>Possible workaround</h2>
<p>Here's a possible workaround: instead of <code>Dispatch</code>, use <code>Association</code> (new in 10). In a quick test, <code>Association</code> seems to give similar performance benefits to <code>Dispatch</code>. But unlike <code>Dispatch</code>, associations <em>can</em> be compared using <code>===</code>. Just be sure to <code>KeySort</code> the association before including it in the expression: <code><| a -> 1, b -> 2 |></code> is different from <code><| b -> 2, a -> 1 |></code>.</p>
|
1,356,783 |
<p>What kind of mathematical object is this substitution (is it a function, or what)? We assume a set of variables exists.</p>
|
Gary.
| 235,023 |
<p>If I understood correctly, say you have a wff on n variables. Then substituting in a variable, you map into the collection of wffs on $(n-1)$ variables. If you substitute for all n variables, or if $n=1$, you map into a world, or interpretation for the wff. But there may not be possible worlds where the wff holds, e.g., if your wff is a contradiction.</p>
|
719,055 |
<p>I'm trying to show that if solid tori $T_1, T_2$, with $T_i=S^1 \times D^2$, are glued by homeomorphisms between their respective boundaries, then the homeomorphism type of the identification space depends on the choice of homeomorphism up to, I think, isotopy. (Please forgive the rambling; I'm trying to put together a lot of different things from different sources, and I don't yet have a very coherent general picture.) I first thought of lens spaces, but the gluing here is not done by a homeomorphism.</p>
<p>I have some fuzzy ideas here that I would like to make precise: I know this has to do with Heegaard splittings; specifically, this is a genus-1 splitting (actually, a genus-1 gluing), and the gluing may be determined by a mapping in $SL(2,\mathbb Z)$, which determines the induced map on the top homology, and different induced maps would result in different homeomorphism types of the glued spaces.</p>
<p>I think we can also see this from the perspective of Dehn surgery (please feel free to correct anything I write here), where we remove a link $L$ and a tubular neighborhood $T(L)$ of $L$, and then glue in another torus. I know that an $n$-framing is equivalent to removing a solid torus, twisting $n$ times, and then regluing. But it's obvious from the post that I don't know how to show that the homeomorphism class of the space glued along $h: \partial T_1 \rightarrow \partial T_2$ depends on $h$.</p>
<p>Thanks, and sorry for the rambling ( not my fault, I was born a rambling man.)</p>
|
Tony
| 39,296 |
<p>The notation $\frac{dy}{dx} = \frac{dy}{du} * \frac{du}{dx}$ is valid. However, you cannot prove the chain rule just by "cancelling" the two $du$'s; it doesn't work that way.</p>
|
10,601 |
<p>It sometimes happens that the same user posts <strong>exactly</strong> the same question twice in a row.</p>
<p>Examples: </p>
<ul>
<li><a href="https://math.stackexchange.com/questions/446622/drawing-at-least-90-of-colors-from-urn-with-large-populations">1</a> <a href="https://math.stackexchange.com/questions/446631/number-of-draws-required-for-ensuring-90-of-different-colors-in-the-urn-with-la">2</a>. user asked twice for the probability of choosing 90% of the colors from a collection of $10^{10}$ balls of $10^7$ different colors</li>
<li><a href="https://math.stackexchange.com/questions/463610/distance-between-two-polyhedra">1</a> <a href="https://math.stackexchange.com/questions/463536/distance-between-two-disjoint-polyhedrons">2</a>. user asked twice for a proof that disjoint polyhedra must lie at positive distance from one another</li>
<li><a href="https://math.stackexchange.com/questions/328368/matrix-factorization-in-upper-and-lower-triangular-matrix">1</a> <a href="https://math.stackexchange.com/questions/328326/guassian-elimination-triangular-factorization">2</a>. user asked twice for the LU decomposition of the same 3×4 matrix</li>
<li><a href="https://math.stackexchange.com/questions/315841/aronszajns-criterion-for-euclidean-space-again">1</a> <a href="https://math.stackexchange.com/questions/315733/aronszajns-criterion-for-euclidean-space/">2</a>. user asked twice for clarification of lemma 3 from a certain paper of Arthan </li>
<li><a href="https://math.stackexchange.com/questions/451268/children-balls-homework">1</a> <a href="https://math.stackexchange.com/questions/449079/children-balls-choosability">2</a>, both deleted but visible to 10K users. User asked twice for proofs of the same claim about $2n$ children choosing from sets of $n$ colored balls</li>
<li><a href="https://math.stackexchange.com/questions/391489/sum-of-squared-cubed-combinations">1</a> <a href="https://math.stackexchange.com/questions/391640/sum-of-squared-cube-combinations">2</a>, second one deleted. user asked twice for a closed form for $\sum {n \choose k}^3$</li>
<li><a href="https://math.stackexchange.com/questions/233892/directed-graph-dijkstras-algorithm">1</a> <a href="https://math.stackexchange.com/questions/233770/directed-graph-max-flow-dijkstras-algorithm">2</a>, both deleted. user asked twice for a proof that a certain max-flow problem could be solved with Dijkstra's algorithm</li>
<li><a href="https://math.stackexchange.com/questions/209331/prove-that-every-problem-in-p-is-reducible">1</a> <a href="https://math.stackexchange.com/questions/208848/for-two-problems-a-and-b-if-a-is-in-p-then-a-is-reducible-to-b">2</a>. user asked twice for a proof that any problem $A$ in $\mathcal P$ is polytime-reducible to any other problem $B$</li>
</ul>
<p>It seems to me that the best way to handle these is to flag them for moderator attention, so that the moderators can immediately merge the questions or close or delete the second one. I can vote to close one as a duplicate of the other, but it sometimes takes a long while to gather five votes to close, and in that time the following discussion, which should be happening in one place, is split between two. If it is, the moderators could merge the two questions and their answers, which the site members have no way to do, so I think the flag is required anyway.</p>
<p>Sometimes my flags have been accepted, other times rejected. It appears that some moderators see the matter the way I do, but others don't. I would like to hear other members' opinions on this, and if possible I would like a clear statement from the moderators about whether I should raise a flag in this situation.</p>
<p>Related: <a href="http://meta.math.stackexchange.com/questions/10359/closing-duplicate-questions-by-the-same-poster">Closing duplicate questions by the same poster</a>.</p>
|
rurouniwallace
| 35,878 |
<p>In these situations, if there's no answers yet, I usually handle it by doing two things: voting to close as a duplicate, and posting a link to the original question as a comment. I usually delete the automatic comment that comes up when you vote to close as a duplicate and replace it with something like:</p>
<blockquote>
<p><strike>Possible duplicate of <a href="https://math.stackexchange.com/questions/446622/drawing-at-least-90-of-colors-from-urn-with-large-populations">drawing at least 90% of colors from urn with large populations</a></strike>.</p>
<p>Exact duplicate of <a href="https://math.stackexchange.com/questions/446622/drawing-at-least-90-of-colors-from-urn-with-large-populations">drawing at least 90% of colors from urn with large populations</a> and posted by the same user.</p>
</blockquote>
<p>This way other users will immediately know to go to the link to the original question, and it helps divert discussion on the duplicate.</p>
<p>On the other hand, if the duplicate already has answers on it, I think that's the right time to flag for moderator attention, so that the answers can get merged. It also doesn't hurt to vote to close as a duplicate as well, in case a moderator decides to ignore the flag.</p>
|
697 |
<p><a href="https://mathoverflow.net/questions/36307/why-cant-i-post-a-question-on-math-stackexchange-com">This question</a> was posted on MO about not being able to post on math.SE. While MO wasn't the right place for the question, I have to wonder what is. New users who are experiencing difficulty using math.SE can't post about it on meta, so where do they turn? The only thing I can think of is that they have to figure out that it is possible for them to contact the moderators, but nowhere is it explicitly described how to do this. Maybe something should be added to the FAQ.</p>
|
kennytm
| 171 |
<p>The contact info is actually written in <a href="https://math.stackexchange.com/about">"about"</a></p>
<blockquote>
<h2>How can I learn more?</h2>
<p>Check out the <a href="https://math.stackexchange.com/faq">FAQ</a>. And if you need to contact us, you can do so at <strong>[email protected]</strong>.</p>
</blockquote>
<p>But I agree, it should present in the FAQ too.</p>
|
340,575 |
<p>I have my exam on Thursday and only a few questions left, so I would appreciate help a lot! Can anyone please help me solve this task? You can see the picture below. The goal is to find the sizes of the two radii. I thought about working with chords (e.g. that the chord AC has the same length as another chord), but still couldn't find anything useful. <img src="https://i.stack.imgur.com/nOGlt.png" alt="enter image description here"></p>
|
Inceptio
| 63,477 |
<p>$\triangle M_1AC$ and $\triangle M_2BC$ are isosceles.
<img src="https://i.stack.imgur.com/CfWal.png" alt="enter image description here"></p>
<p>Construct a tangent $CD$ which is common at $C$. </p>
<p>Now $\angle DCM_2= \angle M_2BD=90^\circ$, which means $BDCM_2$ is a <strong>cyclic</strong> quadrilateral.</p>
<p>Similarly, prove $ADCM_1$ is a <strong>cyclic</strong> quadrilateral.</p>
<p>$$\angle BCM_2=\angle CBM_2=x$$(Isosceles)</p>
<p>$\angle BCM_2=\angle BDM_2=x$ (Why? Since it is cyclic, the chord $M_2B$ subtends equal angles)</p>
<p>$$\angle M_2DC= \angle M_2BC=x$$</p>
<p>$\angle CDB=2x$</p>
<p>In quadrilateral $ADCM_1$,</p>
<p>$\angle M_1AC=\angle M_1CA =y$(Isosceles)</p>
<p>$\angle M_1DA=\angle M_1CA=y \implies \angle CDA=2y$ </p>
<p>Since $\angle CDB$ and $\angle CDA$ are supplementary ($D$ lies on the line $AB$), $2x+2y=180^\circ \implies x+y=90^\circ$</p>
<p>In $\triangle ACB$,</p>
<p>$\angle CAB=90-y$ and $\angle CBA=90-x$</p>
<p>$\angle BCA= x+y$, but we have $x+y=90^\circ$. Therefore, $\triangle ACB$ is right-angled.</p>
|
26,823 |
<p>Trying to solve for the area enclosed by $x^4+y^4=1$. A friend posed this question to me today, but I have no clue what to do to solve this. Keep in mind, we don't even know if there is a straightforward solution. I think he just likes thinking up problems out of thin air. </p>
<p>Anyway, the question becomes more general, since we <em>think</em> that </p>
<p>$\lim_{n\to\infty}\int_0^1(1-x^n)^{1/n}\,dx = {1\over4}$ (it approaches a square / becomes linear)</p>
<p>can anyone confirm that this is true or not?</p>
|
Wadim Zudilin
| 4,953 |
<p>I always prefer not to skip $dx$:
$$
I_n=\int_0^1(1-x^n)^{1/n}dx.
$$
After the change of variable $t=x^n$, the integral becomes the beta integral,
$$
I_n=\frac1n\int_0^1(1-t)^{1/n}t^{1/n-1}dt
=\frac1n\frac{\Gamma(1+1/n)\Gamma(1/n)}{\Gamma(1+2/n)}
=\frac1n\frac{\Gamma(1/n)^2\cdot 1/n}{\Gamma(2/n)\cdot 2/n}
\to1 \quad\text{as $n\to\infty$},
$$
as $1/\Gamma(z)\sim z$ as $z\to 0$.</p>
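<p>As a numerical sanity check (a small Python sketch of my own, not part of the derivation), one can compare a midpoint Riemann sum of $I_n$ with the beta-integral closed form above and watch both approach $1$:</p>

```python
import math

def I_riemann(n, steps=100_000):
    # midpoint Riemann sum of (1 - x^n)^(1/n) over [0, 1]
    h = 1.0 / steps
    return h * sum((1.0 - ((k + 0.5) * h) ** n) ** (1.0 / n) for k in range(steps))

def I_gamma(n):
    # closed form from the beta integral: (1/n) Gamma(1+1/n) Gamma(1/n) / Gamma(1+2/n)
    return math.gamma(1 + 1 / n) * math.gamma(1 / n) / (n * math.gamma(1 + 2 / n))

for n in (2, 4, 10, 100):
    assert abs(I_riemann(n) - I_gamma(n)) < 1e-3
print(I_gamma(1000))  # already very close to 1
```

<p>For $n=2$ this recovers $I_2=\pi/4$, the area of a quarter disc.</p>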
|
86,067 |
<p>So I am having an issue using <code>NDSolve</code> and plotting the function. So I have two different <code>NDSolve</code> calls in my plotting function. (They are technically the same, just have different names; but that can be changed back if at all possible because I want them to be the same.) But the second one is not working. </p>
<p>When I remove the plotting code from the <code>Manipulate</code> command, <code>Plot</code> works fine and outputs an answer. I just need the expanded form so that I can manipulate the variables (if there is a way around this, that would be great too!)</p>
<p>Here is what I have so far, any help would be appreciated as I have no idea why I am getting this error.</p>
<pre><code>Manipulate[
Plot[{
(Evaluate[
ReplaceAll[
Paorta[t],
NDSolve[
{Paorta'[t] ==
1/Caorta ((1/2*k*(1 + Cos[ω t]) + 10 - Paorta[t])/
Piecewise[{{Ro,
1/2*k*(1 + Cos[ω t]) + 10 - Paorta[t] > 0}},
x*Ro] - Paorta[t]/Rsystemic), Paorta[0] == 90},
{Paorta[t]},
{t, 0, 10}
]
]
]), (1/2*k*(1 + Cos[ω t]) +
10), (((1/2*k*(1 + Cos[ω t]) + 10) -
Evaluate[
ReplaceAll[Pao[t],
NDSolve[{Pao'[t] ==
1/Caorta ((1/2*k*(1 + Cos[ω t]) + 10 - Pao[t])/
Piecewise[{{Ro,
1/2*k*(1 + Cos[ω t]) + 10 - Pao[t] > 0}},
x*Ro] - Pao[t]/Rsystemic), Pao[0] == 90}, {Pao[t]}, {t,
0, 10}
]
]
])/Ro)},
{t, 0, 10},
ImageSize -> Large,
PlotRange -> Full,
PlotLegends -> {"Aortic Pressure", "Pressure in Left Ventricle",
"Flow"}
],
{{Caorta, 1/.48}, 1, 6},
{{Rsystemic, 3.1}, .1, 6},
{{x, 8000}, 1, 10000},
{{ω, 2 π}, π, 3 π},
{{k, 110}, 60, 200},
{{Ro, .01}, .007, .05}
]
</code></pre>
<p>Thanks in advance!</p>
|
kglr
| 125 |
<pre><code>ClearAll[ticksF, axesF, labelF]
ticksF[tSide_: Left, tr_: 1, tl_: (.01), s_: {Thickness[.001]}][{minmax__}, nd_:{6, 6}] :=
Module[{tf = tSide /. {Automatic | Left -> Identity, Right -> ({-1, 1} # &),
Bottom -> ((Reverse@#) &)},
d = {#, Complement[Join @@ #2, #]} & @@ FindDivisions[{minmax}, nd, Method -> {}],
trns = tSide /. {Left -> {-tr, 0}, Right -> {tr, 0}, Bottom -> {0, -tr} }, tcks},
tcks = Join[Table[{i, i, tl, s}, {i, d[[1]]}], Table[{i, "", tl/2, s}, {i, d[[2]]}]];
Translate[{s, Line@Thread[tf @
{0, Through @ {Min, Max} @ #[[All, 1]]}], {Line[{tf @ {-#3, #}, tf @ {0, #}}],
Text[#2, tf[{1.2 tSide /. {(Left | Bottom) -> (-1) , (Top | Right) -> 1}, 1} {#3, #}],
If[tSide === Bottom, {Center, tSide /. {Bottom -> Top, Top -> Bottom}},
{tSide /. {Left -> Right, Right -> Left}, Center}]]} & @@@ #} &@tcks, trns]]
axesF[ tr_: {1, 1}, tl_: (0.01), ar_: 1/GoldenRatio][
{rng1 : {_, __}, nd1_: {6, 6}}, {rng2 : {_, __}, nd2_: {6, 6}}] :=
Module[{sc = ar (Subtract @@ rng1[[{2, 1}]])/ (Subtract @@ rng2[[{2, 1}]])},
{ticksF[Bottom, tr[[1]], tl][rng1, nd1], ticksF[Left, tr [[2]] sc, tl sc][rng2, nd2]}]
labelF = Labeled[#, {Rotate[#2, 90 Degree], #3}, {Left, Bottom}] &;
</code></pre>
<h2>Histogram</h2>
<pre><code>SeedRandom[1]
data = RandomVariate[HalfNormalDistribution[1/150], 500];
hst = Histogram[data, Axes -> False,
ImagePadding -> {Scaled /@ {.04, .05}, Scaled /@ {.04, .025}},
ImageSize -> 600, PlotRangeClipping -> False,
Epilog -> (axesF[{5, 5}, 5][{{0, 700}}, {{0, 120}}]),
BaseStyle -> {FontSize -> 14}];
labelF[hst, Style["labely", 20, "Panel"], Style["labelx", 20, "Panel"]]
</code></pre>
<p><a href="https://i.stack.imgur.com/gCRhg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gCRhg.png" alt="enter image description here"></a></p>
<h2>ListPlot</h2>
<pre><code>lp = ListPlot[data, Axes -> False,
ImagePadding -> {Scaled /@ {.04, .05}, Scaled /@ {.04, .025}},
ImageSize -> 600, PlotRangeClipping -> False,
Epilog -> (axesF[{20, 20}, 20][{{0, Length@data}}, {{0, 1.1 Max[data]}}]),
BaseStyle -> {FontSize -> 14}];
labelF[lp, Style["labely", 20, "Panel"], Style["labelx", 20, "Panel"]]
</code></pre>
<p><a href="https://i.stack.imgur.com/gmSmu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gmSmu.png" alt="enter image description here"></a></p>
<h2>BarChart</h2>
<pre><code>bc = BarChart[d2 = HistogramList[data][[2]], Axes -> False,
AxesOrigin -> {0, 0},
ImagePadding -> {Scaled /@ {.025, .025}, Scaled /@ {.05, .025}},
ImageSize -> 600, AspectRatio -> ar, PlotRangeClipping -> False,
Epilog -> (axesF[{5, 0}, 5][{{1, 1 + Length@d2}, {Length@d2, 1}}, {{0, 120}}]),
BaseStyle -> {FontSize -> 14}];
labelF[bc, Style["labely", 20, "Panel"], Style["labelx", 20, "Panel"]]
</code></pre>
<p><a href="https://i.stack.imgur.com/biwn8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/biwn8.png" alt="enter image description here"></a></p>
<h2>Plot</h2>
<pre><code>plot = Plot[{Sin[x], Cos[x]}, {x, -2 Pi, 2 Pi}, Axes -> False,
ImagePadding -> {Scaled /@ {.05, .05}, Scaled /@ {.065, .05}},
ImageSize -> 600, PlotRange -> {{-2 Pi, 2 Pi}, {-1, 1}},
PlotRangeClipping -> False,
Epilog -> (axesF[{1.1, 1.7}, .1][{{-2 Pi, 2 Pi, Pi/2}, {8, 2}}, {{-1, 1}}]),
BaseStyle -> {FontSize -> 14}]
labelF[plot, Style["labely", 20, "Panel"], Style["labelx", 20, "Panel"]]
</code></pre>
<p><a href="https://i.stack.imgur.com/Rea8k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rea8k.png" alt="enter image description here"></a></p>
<p><strong>Note:</strong> With some additional effort, some of the manual settings can be automated using <code>Scaled</code> and/or extracting plot range. </p>
|
2,909,244 |
<p>I have a homework problem about calculating the limit of a sequence:
$$
\lim\limits_{n \to +\infty} \dfrac{\sqrt[n] {n^3} + \sqrt[n] {7}}{3\sqrt[n]{n^2} + \sqrt[n]{3n}}
$$
The solution is $\frac{1}{2}$. I am trying to use the inequality:
$$
\dfrac{\sqrt[n] {n^3} }{3\sqrt[n]{n^2} + \sqrt[n]{3n}} \le \dfrac{\sqrt[n] {n^3} + \sqrt[n] {7}}{3\sqrt[n]{n^2} + \sqrt[n]{3n}} \le \dfrac{\sqrt[n] {n^3} + \sqrt[n] {7}}{3\sqrt[n]{n^2}}
$$
However, I haven't been able to find a solution from it.</p>
|
Daquisu
| 591,084 |
<p>I'd calculate the individual limits then put them together.</p>
<p>$\lim\limits_{n \to +\infty} \sqrt[n] {n^3} = \lim\limits_{n \to +\infty} \exp(ln(\sqrt[n] {n^3})) = \lim\limits_{n \to +\infty} \exp(ln(({n^3})^\dfrac{1}{n})) $</p>
<p>Applying the log property $\log(A^n) = n\log(A)$:</p>
<p>$\lim\limits_{n \to +\infty} \exp(\dfrac{1}{n} * ln( {n^3})) = \lim\limits_{n \to +\infty} \exp(\dfrac{ln( {n^3})}{n}) = \lim\limits_{n \to +\infty} \exp(\dfrac{3*ln( {n})}{n})$</p>
<p>Both the numerator's and the denominator's limits go to $+\infty$, so we can apply L'Hopital's rule:</p>
<p>$\lim\limits_{n \to +\infty} \exp(\dfrac{\dfrac{3}{n}}{1}) = \lim\limits_{n \to +\infty} \exp(\dfrac{3}{n}) = \exp(0) = 1 $</p>
<p>The remaining limits are pretty similar, so I will be more straightforward.</p>
<p>$(*)$ means I used L'Hopital's rule:</p>
<hr>
<p>$\lim\limits_{n \to +\infty} \sqrt[n] {7} = \lim\limits_{n \to +\infty} \exp(\dfrac{ln(7)}{n}) = \exp(0) = 1$</p>
<hr>
<p>$\lim\limits_{n \to +\infty} 3\sqrt[n] {n^2} = 3\lim\limits_{n \to +\infty} \exp(ln(\sqrt[n] {n^2})) = 3\lim\limits_{n \to +\infty} \exp(\dfrac{2ln(n)}{n}) =(*) 3\lim\limits_{n \to +\infty} \exp(\dfrac{\dfrac{2}{n}}{1}) = 3 $</p>
<hr>
<p>$\lim\limits_{n \to +\infty} \sqrt[n] {3n} = \lim\limits_{n \to +\infty} \exp(ln(\sqrt[n] {3n})) = \lim\limits_{n \to +\infty} \exp(\dfrac{ln(3n)}{n}) =(*)\lim\limits_{n \to +\infty} \exp(\dfrac{\dfrac{1}{n}}{1}) = 1 $</p>
<hr>
<p>Putting it all together:</p>
<p>$ \lim\limits_{n \to +\infty} \dfrac{\sqrt[n] {n^3} + \sqrt[n] {7}}{3\sqrt[n]{n^2} + \sqrt[n]{3n}} = \dfrac{1+1}{3+1} = \dfrac{1}{2}$</p>
<p>Using $ x = \exp(\ln(x)) $ is very common for exponential limits. </p>
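<p>For a quick numerical double-check (a Python sketch of my own, not part of the original answer), evaluate the general term for large $n$:</p>

```python
def term(n):
    # the general term, writing the n-th root as x ** (1 / n)
    return (n ** (3 / n) + 7 ** (1 / n)) / (3 * n ** (2 / n) + (3 * n) ** (1 / n))

for n in (10, 10**3, 10**6):
    print(n, term(n))  # tends to (1 + 1) / (3 + 1) = 0.5
```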
|
4,331,790 |
<blockquote>
<p><strong>Question 23:</strong> Which one of the following statements holds true if and only if <span class="math-container">$n$</span> is a prime number?
<span class="math-container">$$
\begin{alignat}{2}
&\text{(A)} &\quad n|(n-1)!+1 \\
&\text{(B)} &\quad n|(n-1)!-1 \\
&\text{(C)} &\quad n|(n+1)!-1 \\
&\text{(D)} &\quad n|(n+1)!+1
\end{alignat}
$$</span></p>
</blockquote>
<p>I ruled out choices C and D because <span class="math-container">$n$</span> is a natural factor of <span class="math-container">$(n+1)!$</span>, so <span class="math-container">$n$</span> won't divide <span class="math-container">$(n+1)! -1 $</span> or <span class="math-container">$(n+1)!+1 $</span> (that could be the case if <span class="math-container">$n$</span> were <span class="math-container">$1$</span>, but this question concerns only primes). <br>I would like to know how to choose between choices A and B. Alternative ways to approach the problem are equally welcome.</p>
|
Claude Leibovici
| 82,404 |
<p>May be a few identities could help.
<span class="math-container">$$\sum_{k=0}^n \frac{(-1)^k}{k!}\binom{n}{k}\,x^k=\, _1F_1(-n;1;x)=L_n(x)$$</span> (<span class="math-container">$L_n(x)$</span> being Laguerre polynomial) and
<span class="math-container">$\lim_{n\to \infty } \, L_n(1) =0$</span></p>
<p>On the other side, the last expression you wrote
<span class="math-container">$$\sum\limits_{k=0}^\infty \dfrac{(-1)^k}{k!}\dbinom{n+k}{k}x^k=\, _1F_1(n+1;1;-x)=L_{-n-1}(-x)$$</span></p>
<p>So, it remains to prove that
<span class="math-container">$$\, _1F_1(-n;1;1)=e\,\, _1F_1(n+1;1;-1)$$</span> or that
<span class="math-container">$$L_n(1)=e\,L_{-n-1}(-1)$$</span></p>
<p>Both are (numerically) true but, at this point, I am stuck.</p>
|
33,582 |
<p>My code finding <a href="http://en.wikipedia.org/wiki/Narcissistic_number">Narcissistic numbers</a> is not that slow, but it's not in functional style and lacks flexibility: if $n \neq 7$, I have to rewrite my code. Could you give some good advice?</p>
<pre><code>nar = Compile[{$},
Do[
With[{
n = 1000000 a + 100000 b + 10000 c + 1000 d + 100 e + 10 f + g,
n2 = a^7 + b^7 + c^7 + d^7 + e^7 + f^7 + g^7},
If[n == n2, Sow@n];
],
{a, 9}, {b, 0, 9}, {c, 0, 9}, {d, 0, 9}, {e, 0, 9}, {f, 0, 9}, {g, 0, 9}],
RuntimeOptions -> "Speed", CompilationTarget -> "C"
];
Reap[nar@0][[2, 1]] // AbsoluteTiming
(*{0.398023, {1741725, 4210818, 9800817, 9926315}}*)
</code></pre>
|
RunnyKine
| 5,709 |
<p>Here is a functional approach:</p>
<pre><code>Narciss[x_] := With[{num = IntegerDigits[x]}, Total[num^Length[num]] == x]
</code></pre>
<p>Here is a compiled version of the above function:</p>
<pre><code>NarcissC = Compile[{{x, _Integer}},
With[{num = IntegerDigits[x]}, Total[num^Length[num]] == x],
Parallelization -> True, CompilationTarget -> "C",
RuntimeAttributes -> Listable, RuntimeOptions -> "Speed"]
</code></pre>
<p>Now you can do something like</p>
<pre><code>AbsoluteTiming[Position[NarcissC[Range[10000000]], True] // Flatten]
</code></pre>
<blockquote>
<p>{1.003214, {1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407, 1634, 8208,
9474, 54748, 92727, 93084, 548834, 1741725, 4210818, 9800817,
9926315}}</p>
</blockquote>
<p>to get all the <em>m</em>-Narcissistic numbers from 1 to 10000000.</p>
<p>For a further bump in speed as suggested by chyaong, here is <code>NarcissC2</code> (Using <code>Sum</code> instead of <code>Total</code>)</p>
<pre><code>NarcissC2 = Compile[{{x, _Integer}},
With[{num = IntegerDigits[x]}, Sum[i^Length@num, {i, num}] - x],
CompilationTarget -> "C", RuntimeAttributes -> Listable, RuntimeOptions-> "Speed"];
</code></pre>
<p>Now you can do:</p>
<pre><code>Pick[#, NarcissC2[#], 0] &@Range[10000000] // AbsoluteTiming
</code></pre>
<p>Which gives:</p>
<blockquote>
<p>{0.475276, {1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407, 1634, 8208,
9474, 54748, 92727, 93084, 548834, 1741725, 4210818, 9800817, 9926315}}</p>
</blockquote>
<p><strong>EDIT</strong></p>
<p>It turns out that you can get a bump using <code>Total</code> and <code>Pick</code> instead of <code>Position</code> (not as fast as <code>Sum</code>):</p>
<pre><code> NarcissC1 = Compile[{{x, _Integer}},
With[{num = IntegerDigits[x]}, Total[num^Length[num]] - x], Parallelization -> True,
CompilationTarget -> "C", RuntimeAttributes -> Listable, RuntimeOptions -> "Speed"]
</code></pre>
<p>Then</p>
<pre><code>Pick[#, NarcissC1[#], 0] &@Range[10000000] // AbsoluteTiming
</code></pre>
<p>gives:</p>
<pre><code>{0.626322, {1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407, 1634, 8208,
9474, 54748, 92727, 93084, 548834, 1741725, 4210818, 9800817, 9926315}}
</code></pre>
|
33,582 |
<p>My code finding <a href="http://en.wikipedia.org/wiki/Narcissistic_number">Narcissistic numbers</a> is not that slow, but it's not in functional style and lacks flexibility: if $n \neq 7$, I have to rewrite my code. Could you give some good advice?</p>
<pre><code>nar = Compile[{$},
Do[
With[{
n = 1000000 a + 100000 b + 10000 c + 1000 d + 100 e + 10 f + g,
n2 = a^7 + b^7 + c^7 + d^7 + e^7 + f^7 + g^7},
If[n == n2, Sow@n];
],
{a, 9}, {b, 0, 9}, {c, 0, 9}, {d, 0, 9}, {e, 0, 9}, {f, 0, 9}, {g, 0, 9}],
RuntimeOptions -> "Speed", CompilationTarget -> "C"
];
Reap[nar@0][[2, 1]] // AbsoluteTiming
(*{0.398023, {1741725, 4210818, 9800817, 9926315}}*)
</code></pre>
|
Carl Woll
| 45,431 |
<p>It should be much faster to generate all possible integer digit sets, and then select those integer digit sets that have the require property. For instance, $135, 153, 315, 351, 513, 531$ all have the integer digits $1, 3, 5$, but the sum of the cubes of all 6 digit sets is the same, namely, $153$. The set of all possible integer digit sets for an integer length $k$ number (assuming leading 0s are acceptable) is $\binom{k+9}{k}$, which is much smaller than the set of all integers with integer length less than or equal to $k$. For instance, for $k=7$, we have:</p>
<pre><code>10^7-1
Binomial[9+7, 7]
</code></pre>
<blockquote>
<p>9999999</p>
<p>11440</p>
</blockquote>
<p>Checking each of the integer digit sets will be much faster than checking all integers. Here is a function that carries out this idea:</p>
<pre><code>narcisist[p_, k_] := With[
{subs=Transpose[Transpose @ Subsets[Range[0, 8+k], {k}] - Range[0, k-1]]},
With[
{tdigits=Sort/@IntegerDigits[Total[subs^p,{2}],10,k]},
Total[Pick[subs,Total[Abs[subs-tdigits],{2}],0]^p,{2}]
]
]
</code></pre>
<p>I extended the concept to having independent powers $p$ and integer lengths $k$. Here is the answer for the OP parameters:</p>
<pre><code>narcisist[7, 7] //AbsoluteTiming
</code></pre>
<blockquote>
<p>{0.020188, {0, 1, 9800817, 4210818, 1741725, 9926315}}</p>
</blockquote>
<p>This approach can succeed for much larger integer lengths:</p>
<pre><code>narcisist[10, 10] //AbsoluteTiming
</code></pre>
<blockquote>
<p>{0.215505, {0, 1, 4679307774}}</p>
</blockquote>
<pre><code>narcisist[20, 20] // AbsoluteTiming
</code></pre>
<blockquote>
<p>{112.554, {0, 1, 63105425988599693916}}</p>
</blockquote>
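<p>For readers without Mathematica, here is a rough Python transcription of the same idea (the function name and details are my own): scan the $\binom{k+9}{k}$ digit multisets instead of all $10^k$ integers.</p>

```python
from itertools import combinations_with_replacement

def narcissistic(p, k):
    """k-digit numbers (leading zeros allowed) equal to the sum of the
    p-th powers of their digits, found by scanning digit multisets."""
    found = set()
    for digits in combinations_with_replacement(range(10), k):
        total = sum(d ** p for d in digits)
        # keep 'total' iff its own zero-padded digit multiset is 'digits'
        # (tuples from combinations_with_replacement are already sorted)
        if sorted(str(total).zfill(k)) == [str(d) for d in digits]:
            found.add(total)
    return sorted(found)

print(narcissistic(7, 7))  # [0, 1, 1741725, 4210818, 9800817, 9926315]
```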
|
626,958 |
<p>I know that $E[X|Y]=E[X]$ if $X$ is independent of $Y$. I was recently made aware of the claim that it is true if we only assume $\text{Cov}(X,Y)=0$. Would someone kindly either give a hint if it's easy, show me a reference, or even a full proof if it's short? Either will work, I think :) </p>
<p>Thanks.</p>
<p>Edit: Thanks for the great answers! I accepted Alecos' simply for being first. I've made a followup question <a href="https://math.stackexchange.com/questions/627846/conditional-mean-on-uncorrelated-stochastic-variable-2">here</a>. (When I someday reach 15 reputation, I will upvote.)</p>
|
leonbloy
| 312 |
<p>Your assertion is false.</p>
<p>For two random variables <span class="math-container">$X$</span> <span class="math-container">$Y$</span> one can consider three measures of un-related-ness:</p>
<p><span class="math-container">$1$</span>: Independence: <span class="math-container">$p(X,Y) = p(X)p(Y)$</span> Equivalently, <span class="math-container">$P(X|Y)= P(X)$</span>. This is the most important and strongest one.</p>
<p><span class="math-container">$2$</span>: Unpredictability (this term is not very standard): conditioning on one variable does not change the expectation of the other. Actually, this property is not symmetric, so there are two possible cases:</p>
<p><span class="math-container">$2a$</span>: <span class="math-container">$E(X|Y) = E(X)$</span></p>
<p><span class="math-container">$2b$</span>: <span class="math-container">$E(Y|X) = E(Y)$</span></p>
<p>(These properties can also be stated as: the regression line is horizontal/vertical)</p>
<p><span class="math-container">$3$</span>: Uncorrelatedness (orthogonality): <span class="math-container">$Cov(X,Y) = 0$</span> . Equivalently: <span class="math-container">$E(X Y) = E(X) E(Y)$</span></p>
<p>It can be shown that <span class="math-container">$1 \implies 2a$</span>, <span class="math-container">$1 \implies 2b$</span>, <span class="math-container">$2a \implies 3$</span>, <span class="math-container">$2b \implies 3$</span> (hence, <span class="math-container">$1 \implies 3$</span>). No other implications hold.</p>
<p><img src="https://i.stack.imgur.com/Q73G6.png" alt="enter image description here"></p>
<p>An example: consider <span class="math-container">$(X,Y)$</span> having uniform distribution over the triangle with vertices at <span class="math-container">$(-1,0)$</span>, <span class="math-container">$(1,0)$</span>, <span class="math-container">$(0,1)$</span>. Check (you can deduce it from symmetry considerations) that <span class="math-container">$E(X|Y)=E(X)$</span> and <span class="math-container">$Cov(X,Y)=0$</span>, but <span class="math-container">$E(Y|X)$</span> varies with <span class="math-container">$X$</span>.</p>
|
2,342,537 |
<p>Suppose $f:\mathbb R\rightarrow \mathbb R$ s.t. $f(x+y)=f(x)+f(y)$ for all $x,y\in \mathbb R$ and $f$ is not continuous on $\mathbb R$. Prove that</p>
<p>(a).$f$ is not bounded below (or above) on any subinterval $(a,b)$ of $\mathbb R$.</p>
<p>(b). $f$ is not monotone.</p>
<p>Plugging in $x=y=0$ gives $f(0)=f(0)+f(0)$, hence $f(0)=0$.
Also, plugging in $y=-x$ gives $f(-x)=-f(x)$, i.e. $f$ is an odd function, so its graph lies in opposite quadrants. Now I am stuck on how to show that the graph is unbounded. Please help!</p>
|
Fimpellizzeri
| 173,410 |
<p>Here's a route:</p>
<p>$\qquad(1)$: Show that $f$ is $\mathbb{Q}$-linear. In particular, show that $f(q\cdot x)=q\cdot f(x)$ for all $q\in\mathbb{Q}$.</p>
<p>$\qquad(2)$: Using $(1)$, show that if $f$ is bounded on any interval $[a,b]$ (why can you assume it is closed?), then $f$ is continuous at $x=0$.</p>
<p>$\qquad(3)$: Establish a contradiction by showing that if $f$ is continuous at $0$, it is continuous everywhere.</p>
<p>$\qquad(4)$: Show that $f$ cannot be monotone on any given interval because it is not bounded on that interval, by $(3)$.</p>
<hr>
<p>Suppose without loss of generality that $0<a<b$ (why can you assume both are positive?) and that $|f(x)|\leq C$ for all $x\in [a,b]$ and some $C>0$.
We will show that</p>
<p>$$\forall \epsilon>0,\,\exists\delta>0,\,\forall x \text{ with $0<x<\delta$},\, |f(x)|\leq\epsilon$$</p>
<p>Observe that because $f$ is odd and $f(0)=0$, this suffices.</p>
<p>Notice that $f(x)=\frac1q f(qx)$ for any $q\in\mathbb{Q}\setminus\{0\}$, because of point $(1)$.
It follows that for all $q\in\mathbb{Q}_+$ and all $x\in[a,b]$, $|f(qx)|\leq qC$. In other words, for all $q\in\mathbb{Q}_+$ and all $x\in[qa,qb]$, $|f(x)|\leq qC$.
Taking $q=\frac{\epsilon}C$, we find that $|f(x)|\leq \epsilon$ for all $x\in\left[\frac{a\epsilon}C,\frac{b\epsilon}C\right]$.</p>
<p>Now, if $0<x<\frac{a\epsilon}C$, there is some $q>1$ with $qx\in\left[\frac{a\epsilon}C,\frac{b\epsilon}C\right]$.
Then:</p>
<p>$$|f(x)|\leq q|f(x)|=|qf(x)|=|f(qx)|\leq \epsilon$$</p>
<p>It follows that whenever $0<x\leq\frac{b\epsilon}C$, $|f(x)|\leq\epsilon$, so we may take $\delta=\frac{b\epsilon}C$ which completes the proof.</p>
|
376,600 |
<p>$$\lim_{n\to\infty} \int_{-\infty}^{\infty} \frac{1}{(1+x^2)^n}\,dx $$</p>
<p>Mathematica tells me the answer is 0, but how can I go about actually proving it mathematically?</p>
|
xpaul
| 66,420 |
<p>Use integration by substitution. Let $x=\tan\theta$. Then
$$ \int_{-\infty}^\infty\frac{1}{(1+x^2)^n}dx=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\cos^{2(n-1)}\theta d\theta. $$
Now it is not hard to verify
$$\lim_{n\to\infty}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\cos^{2(n-1)}\theta d\theta=0.$$</p>
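<p>As a numerical check (a Python sketch of my own, using the standard reduction formula $\int_{-\pi/2}^{\pi/2}\cos^{2m}\theta\, d\theta=\pi\prod_{k=1}^{m}\frac{2k-1}{2k}$):</p>

```python
import math

def cos_power_integral(m):
    # integral of cos^(2m) over [-pi/2, pi/2] via the reduction formula
    val = math.pi
    for k in range(1, m + 1):
        val *= (2 * k - 1) / (2 * k)
    return val

vals = [cos_power_integral(n - 1) for n in (1, 2, 10, 100, 1000)]
print(vals)
assert all(a > b for a, b in zip(vals, vals[1:]))  # strictly decreasing
```

<p>The values decrease like $\sqrt{\pi/m}$, so the limit is indeed $0$.</p>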
|
1,610,055 |
<p>I feel rather silly for having to ask about this question in particular, and am by no means looking for a flat-out step-by-step answer. I understand the definition of the Euclidean norm in an $n$-dimensional space (as defined <a href="https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm" rel="nofollow">here</a>). However, I can't figure out how to apply it even to a simple problem like this one:</p>
<blockquote>
<p>If $\| x - z \| \lt 2$ and $\| y - z \| \lt 3$, prove $\| x - y \| \lt 5$.</p>
</blockquote>
<p>Sorry in advance for lack of formatting, I'm new to math exchange and there's really nothing complicated to format. Again, I am not looking for a straight answer. My proof breaks down after I add the two assumptions and attempt to square both sides. Hopefully someone can point out the simple first step here. Thanks. </p>
|
Paolo Franchi
| 302,637 |
<p>Apply the triangle inequality $\|a+b\| \leq \|a\| + \|b\|$, with $a=x-z$ and $b=z-y$.</p>
<p>$$ \|x-y\| = \| x -z + z - y \| = \| (x -z) + (z - y) \| \leq \| (x -z) \| + \|(z - y) \| < 2+3=5 \\
\implies \|x-y\| < 5.$$</p>
|
2,146,911 |
<blockquote>
<p>Given natural numbers <span class="math-container">$m,n,$</span> and a real number <span class="math-container">$a>1$</span>, prove the inequality :</p>
<p><span class="math-container">$$\displaystyle a^{\frac{2n}{m}} - 1 \geq n\big(a^{\frac{n+1}m} - a^{\frac{n-1}{m}}\big)$$</span></p>
<p><strong>SOURCE :</strong> <a href="http://imomath.com/pcpdf/f1/f40.pdf" rel="nofollow noreferrer">Inequalities</a> (PDF) (Page Number 2 ; Question Number 153.2)</p>
</blockquote>
<p>I have been trying this problem from 2 weeks but still no success. I tried every method I could think of like AM-GM, C-S, Holder and more, but could not find a proof.</p>
<p>Also, is it necessary for <span class="math-container">$n,m$</span> to be natural numbers ?</p>
<p>Any help will be gratefully acknowledged.</p>
<p>Thanks in advance ! :)</p>
|
S.C.B.
| 310,930 |
<p>The answer can be given via simple calculus, and thus the result can be shown to hold true for all $x$. However, the OP has stated that he/she would prefer a solution that did not resort to calculus. So here is my edited answer. For my original answer, please check the edit history. </p>
<p>Let $a^{\frac{1}{m}}=x>1$. The question is equivalent to showing $$x^{2n}-1 \ge n(x^{n+1}-x^{n-1}) \iff \frac{x^{2n}-1}{x^2-1} \ge nx^{n-1}$$
Now, note that $$\frac{x^{2n}-1}{x^2-1}=\sum_{k=0}^{n-1}x^{2k}=\frac{1}{2} \left(\sum_{k=0}^{n-1}x^{2k}+x^{2n-2k-2}\right) \ge \frac{1}{2} \times 2\sum_{k=0}^{n-1}x^{n-1}=nx^{n-1}$$
From $\text{AM-GM}$. Our proof is done. </p>
|
2,436,167 |
<p>I appear to be misunderstanding a basic probability concept. The question is: you flip four coins. At least two are tails. What is the probability that exactly three are tails? </p>
<p>I know the answer isn't 1/2, but I don't know why that's so. Isn't the probability of just getting 1 tail in the remaining two coins 1/2?</p>
<p>Thanks</p>
|
B. Goddard
| 362,009 |
<p>Count like this: When you flip 4 coins, there are 16 possible outcomes. List them and cross off all the cases which do not have at least two tails. That leaves 11 possibilites. Of the 11, how many have exactly 3 tails? $4$. So the answer is $4/11.$</p>
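<p>If you prefer to let a computer do the counting, a short Python enumeration (my own addition, not part of the original answer) of all $16$ outcomes gives the same $4/11$:</p>

```python
from itertools import product

outcomes = list(product("HT", repeat=4))                   # all 16 possibilities
at_least_two = [o for o in outcomes if o.count("T") >= 2]  # 11 cases survive the condition
exactly_three = [o for o in at_least_two if o.count("T") == 3]
print(len(exactly_three), "/", len(at_least_two))          # 4 / 11
```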
|
1,236,600 |
<p>A dose of $D$ milligrams of a drug is taken every 12 hours. Assume that the drug's half-life is such that every $12$ hours a fraction $r$, with $0<r<1$, of the drug remains in the blood. Let $d_1= D$ be the amount of the drug in the blood after the first dose.
It follows that the amount of the drug in the blood after the $n^{\mathrm{th}}$ dose is $$d_n= D\sum_{k=0}^{n-1}r^k.$$</p>
<p>At the steady state, $$d_\infty = \lim_{n\to\infty} d_n = \frac D{1-r}.$$</p>
<p>$d_\infty$ is the drug level just AFTER a dose, so it is the maximum drug level. Find the minimum drug level $d_{\min}$, just PRIOR to a steady state dose. Verify that $$d_\infty - d_{\min}=D.$$</p>
<p>I have no idea how to do this. Any ideas?</p>
|
Ross Millikan
| 1,827 |
<p>Hint: If in steady state you have $d_\infty$ just after a dose, after $12$ hours you have $rd_\infty=d_{min}$ because it is just before a dose.</p>
|
1,236,600 |
<p>A dose of $D$ milligrams of a drug is taken every 12 hours. Assume that the drug's half-life is such that every $12$ hours a fraction $r$, with $0<r<1$, of the drug remains in the blood. Let $d_1= D$ be the amount of the drug in the blood after the first dose.
It follows that the amount of the drug in the blood after the $n^{\mathrm{th}}$ dose is $$d_n= D\sum_{k=0}^{n-1}r^k.$$</p>
<p>At the steady state, $$d_\infty = \lim_{n\to\infty} d_n = \frac D{1-r}.$$</p>
<p>$d_\infty$ is the drug level just AFTER a dose, so it is the maximum drug level. Find the minimum drug level $d_{\min}$, just PRIOR to a steady state dose. Verify that $$d_\infty - d_{\min}=D.$$</p>
<p>I have no idea how to do this. Any ideas?</p>
|
Math1000
| 38,584 |
<p>Let $d(t)$ be the amount of the drug in the blood $t$ hours after a steady-state dose for $0\leqslant t<12$. Then $d(t)=d_\infty e^{-\lambda t}$ for some $\lambda>0$ (this is the definition of exponential decay). Then
$$d_{\min} = \lim_{t\uparrow12}d(t)=\lim_{t\uparrow12}d_\infty e^{-\lambda t}=d_\infty e^{-12\lambda}.$$
From the given information about the half-life of the drug, we have
$$d_\infty e^{-12\lambda}=rd_\infty, $$
and hence $$\lambda = \frac{\log\left(\frac1r\right)}{12}.$$
It follows that
$$d_\min = d_\infty e^{-12\lambda} = d_\infty e^{\log r} = rd_\infty, $$
whence
$$
\begin{align*}
d_\infty - d_\min &= d_\infty - rd_{\infty}\\
&= d_\infty(1-r)\\
&= \left(\frac D{1-r}\right)(1-r)\\
&= D.
\end{align*}
$$</p>
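<p>A small simulation (a Python sketch with a hypothetical dose $D$ and retention fraction $r$; the numbers are illustrative, not from the problem) confirms both the steady-state level and the identity $d_\infty - d_{\min} = D$:</p>

```python
D, r = 100.0, 0.6        # hypothetical dose (mg) and 12-hour retention fraction
level = 0.0
for _ in range(200):     # take a dose, then decay by the factor r before the next one
    level = r * level + D
d_inf = D / (1 - r)      # steady-state level just after a dose
d_min = r * d_inf        # level just before the next dose
assert abs(level - d_inf) < 1e-9
assert abs((d_inf - d_min) - D) < 1e-9
```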
|
2,307,021 |
<p>I am struggling with a confusing differentials' problem. It seems like there is a key piece of information missing:</p>
<p><strong>The problem:</strong></p>
<blockquote>
<p>The electrical resistance $ R $ of a copper wire is given by $ R = \frac{k}{r^2} $ where $ k $ is a constant and $ r $ is the radius of the wire. Suppose that the radius has an error of $ \pm 5\% $, find the $\%$ error of $ R $.</p>
</blockquote>
<p><strong>My solution:</strong></p>
<p>\begin{align*}
R &= \frac{k}{r^2}\\
\frac{dR}{dr} &= k \cdot (-2) \cdot r^{-3} \quad \therefore \quad dR = \frac{-2k \cdot 0.05}{r^3} = \frac{-0.1k}{r^3}\\
\end{align*}</p>
<p>So the percentage error is given by</p>
<p>\begin{align*}
E_\% = \frac{\frac{-0.1k}{r^3}}{\frac{k}{r^2}} = - \frac{0.1}{r}
\end{align*}</p>
<p><strong>My question:</strong> Am I missing something? Should I have arrived at a real value (not a function of $ r $)? Is there information missing from the problem?</p>
<p>Thank you.</p>
|
Ross Millikan
| 1,827 |
<p>When we say the radius has an error of $5\%$, we mean that as a relative error, so $dr=0.05r$</p>
|
2,307,021 |
<p>I am struggling with a confusing differentials' problem. It seems like there is a key piece of information missing:</p>
<p><strong>The problem:</strong></p>
<blockquote>
<p>The electrical resistance $ R $ of a copper wire is given by $ R = \frac{k}{r^2} $ where $ k $ is a constant and $ r $ is the radius of the wire. Suppose that the radius has an error of $ \pm 5\% $, find the $\%$ error of $ R $.</p>
</blockquote>
<p><strong>My solution:</strong></p>
<p>\begin{align*}
R &= \frac{k}{r^2}\\
\frac{dR}{dr} &= k \cdot (-2) \cdot r^{-3} \quad \therefore \quad dR = \frac{-2k \cdot 0.05}{r^3} = \frac{-0.1k}{r^3}\\
\end{align*}</p>
<p>So the percentage error is given by</p>
<p>\begin{align*}
E_\% = \frac{\frac{-0.1k}{r^3}}{\frac{k}{r^2}} = - \frac{0.1}{r}
\end{align*}</p>
<p><strong>My question:</strong> Am I missing something? Should I have arrived in a real value (not a function of $ r $ )? Is there information missing on the problem?</p>
<p>Thank you.</p>
|
mrnovice
| 416,020 |
<p>Just doing this by brute force:</p>
<p>$$R=\frac{k}{r^2}\quad\text{percentage error in $r$ is $\pm 5$ percent}$$</p>
<p>So $R_{min} =\frac{k}{(r+0.05r)^2} =\frac{k}{1.05^2r^2}=\frac{400}{441}\cdot\frac{k}{r^2}$</p>
<p>$R_{max} = \frac{k}{(0.95r)^2} =\frac{400}{361}\cdot \frac{k}{r^2}$</p>
<p>Then the percentage error is given by:</p>
<p>$$\frac{R_{max}-R_{min}}{2R}\cdot 100 =\frac{\frac{k}{r^2}\left(\frac{400}{361}-\frac{400}{441}\right)}{2\cdot \frac{k}{r^2}}\cdot 100 \approx 10.1\%\quad\text{(3 s.f.)}$$</p>
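<p>The arithmetic above is easy to verify in a few lines; the factor $k/r^2$ cancels, so the concrete values chosen below do not matter:</p>

```python
k, r = 1.0, 1.0
R = k / r**2
R_min = k / (1.05 * r)**2   # radius overestimated by 5%
R_max = k / (0.95 * r)**2   # radius underestimated by 5%
pct_err = (R_max - R_min) / (2 * R) * 100
assert round(pct_err, 1) == 10.1      # matches the 3 s.f. answer
```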
|
2,025,934 |
<p>Let $V$ be an $n$-dimensional vector space with $n := \dim(V) \ge 2$.</p>
<p>We shall prove that there are infinitely many $k$-dimensional subspaces of $V$ for every $k \in \{1, 2, \ldots, n-1\}$.</p>
<p>So first, I thought about using induction. The base step is not that hard: for $n=2$ we take two linearly independent vectors, say $a$ and $b$, and define infinitely many $1$-dimensional subspaces as $\operatorname{span}\{a+jb\}$ for $j \in \mathbb N$.</p>
<p>It is easy to see that these subspaces are pairwise distinct, but I realised that induction is not the way to go, as I think $n$ is fixed.</p>
<p>Anyhow, I then thought about using the finiteness of a basis for $V$ to construct those subspaces (using vectors from the basis). I failed to do so, so I'm just asking for a hint or any useful advice on where to start with this.</p>
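<p>The base-step construction can be checked concretely in the plane; the particular (linearly independent) vectors $a$ and $b$ below are my own choice. Two of the spans coincide exactly when the direction vectors $a+ib$ and $a+jb$ are parallel, which a $2\times 2$ determinant rules out for $i \neq j$:</p>

```python
a = (1.0, 0.0)   # any two linearly independent vectors work
b = (0.0, 1.0)

def direction(j):
    # the spanning vector a + j*b of the j-th subspace
    return (a[0] + j * b[0], a[1] + j * b[1])

def parallel(u, v):
    # plane vectors are parallel iff their 2x2 determinant vanishes
    return u[0] * v[1] - u[1] * v[0] == 0

dirs = [direction(j) for j in range(50)]
all_distinct = all(not parallel(dirs[i], dirs[j])
                   for i in range(50) for j in range(i + 1, 50))
assert all_distinct   # the 50 spans are pairwise distinct lines
```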
|
Learnmore
| 294,365 |
<p><strong>HINT</strong>: How many lines are there in $\Bbb R^2$ passing through $(0,0)$?</p>
<p>How many planes are there in $\Bbb R^3$ containing $(0,0,0)$?</p>
|
1,171,150 |
<p>I am struggling to figure out $$\lim\limits_{n \to \infty} \sqrt[n]{n^2+1} .$$ I've tried manipulating the inside of the square root but I cannot seem to figure out a simplification that helps me find the limit.</p>
|
Ivo Terek
| 118,056 |
<p><strong>Hint:</strong> $$\sqrt[n]{n^2+1} = (n^2+1)^{1/n} = e^{\frac{\ln(n^2+1)}{n}}.$$</p>
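<p>Following the hint, the exponent $\ln(n^2+1)/n$ tends to $0$, so the limit is $e^0 = 1$; a quick numerical look agrees (the sample values of $n$ are arbitrary):</p>

```python
import math

def a(n):
    # rewrite the n-th root as exp(ln(n^2 + 1) / n), as in the hint
    return math.exp(math.log(n**2 + 1) / n)

# the exponent ln(n^2 + 1)/n tends to 0, so a(n) tends to e^0 = 1
assert abs(a(10**6) - 1.0) < 1e-4
assert a(10) > a(100) > a(10**6) > 1.0
```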
|
1,171,150 |
<p>I am struggling to figure out $$\lim\limits_{n \to \infty} \sqrt[n]{n^2+1} .$$ I've tried manipulating the inside of the square root but I cannot seem to figure out a simplification that helps me find the limit.</p>
|
kobe
| 190,421 |
<p>Let $a_n = \sqrt[n]{n^2 + 1}$. Then $$n^{2/n} < a_n < (n + 1)^{2/n}.$$</p>
<p>Since $\lim_{n\to \infty} n^{1/n} = 1$ and $\lim_{n\to \infty} (n + 1)^{1/n} = 1$, squaring gives $n^{2/n} \to 1$ and $(n+1)^{2/n} \to 1$, so the left- and right-most sides of the above inequality tend to $1$ as $n\to \infty$. Therefore, by the squeeze theorem, $\lim_{n\to \infty} a_n = 1$.</p>
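<p>The sandwich $n^{2/n} < (n^2+1)^{1/n} < (n+1)^{2/n}$ can be spot-checked numerically over a range of $n$ (the cutoff of $2000$ is an arbitrary choice):</p>

```python
# verify the squeeze bounds n^(2/n) < (n^2 + 1)^(1/n) < (n + 1)^(2/n),
# which hold because n^2 < n^2 + 1 < (n + 1)^2 and x -> x^(1/n) is increasing
ok = all(n**(2 / n) < (n**2 + 1)**(1 / n) < (n + 1)**(2 / n)
         for n in range(1, 2000))
assert ok
```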
|
2,581,135 |
<blockquote>
<p>Find: $\displaystyle\lim_{x\to\infty} \dfrac{\sqrt{x}}{\sqrt{x+\sqrt{x+\sqrt{x}}}}.$</p>
</blockquote>
<p>Question from a book on preparation for math contests. All the tricks I know to solve this limit are not working. Wolfram Alpha struggled to find $1$ as the solution, but the solution process presented is not understandable. The answer is $1$.</p>
<p>Hints and solutions are appreciated. Sorry if this is a duplicate.</p>
|
Jack D'Aurizio
| 44,121 |
<p>A fun overkill: it is well known (at least among Ramanujan supporters) that for any $x>1$ we have
$$ \sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} = \tfrac{1}{2}+\sqrt{x+\tfrac{1}{4}} $$
hence $\frac{\sqrt{x}}{\sqrt{x+\sqrt{x+\sqrt{x}}}}$ is bounded between $1$ and $\frac{\sqrt{x}}{\sqrt{x+\frac{1}{4}}+\frac{1}{2}}$, whose limit as $x\to +\infty$ is also $1$.<br>
The claim hence follows by squeezing.</p>
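<p>The quoted identity is easy to test numerically by evaluating the nested radical from the inside out (the truncation depth of $60$ and the sample points are arbitrary choices):</p>

```python
import math

def nested_radical(x, depth=60):
    # evaluate sqrt(x + sqrt(x + ...)) by iterating from the inside out
    v = 0.0
    for _ in range(depth):
        v = math.sqrt(x + v)
    return v

for x in (2.0, 10.0, 1e6):                    # sample points with x > 1
    closed_form = 0.5 + math.sqrt(x + 0.25)   # the quoted identity
    assert math.isclose(nested_radical(x), closed_form)
```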
|
2,581,135 |
<blockquote>
<p>Find: $\displaystyle\lim_{x\to\infty} \dfrac{\sqrt{x}}{\sqrt{x+\sqrt{x+\sqrt{x}}}}.$</p>
</blockquote>
<p>Question from a book on preparation for math contests. All the tricks I know to solve this limit are not working. Wolfram Alpha struggled to find $1$ as the solution, but the solution process presented is not understandable. The answer is $1$.</p>
<p>Hints and solutions are appreciated. Sorry if this is a duplicate.</p>
|
JJacquelin
| 108,514 |
<p>$$\text{Let}\quad x=\frac{1}{\epsilon^2},\quad \epsilon>0 \quad\implies\quad
\sqrt{x+\sqrt{x+\sqrt{x}}}=\frac{1}{\epsilon}\sqrt{1+\epsilon\sqrt{1+\epsilon}}
\quad\implies\quad
\frac{\sqrt{x}}{\sqrt{x+\sqrt{x+\sqrt{x}}}}=\frac{1}{\sqrt{1+\epsilon\sqrt{1+\epsilon}}}$$</p>
<p>$$\lim_{x\to\infty} \dfrac{\sqrt{x}}{\sqrt{x+\sqrt{x+\sqrt{x}}}} = \lim_{\epsilon\to 0^+}\frac{1}{\sqrt{1+\epsilon\sqrt{1+\epsilon}}} = \frac{1}{\sqrt{1+0}} = 1$$</p>
|
2,968,655 |
<p>My numerical calculations suggest that the equation
<span class="math-container">$$x = \frac{1}{1+e^{-a+bx}}$$</span>
has a unique solution for any <span class="math-container">$a,b \in \mathbb R$</span>. How would one go about showing this?</p>
|
William Elliot
| 426,203 |
<p>Let <span class="math-container">$y = \dfrac1{1+e^{-a+bx}} - x.$</span><br>
<span class="math-container">$$y' = \dfrac{-be^{-a+bx}}{(1+e^{-a+bx})^2} - 1.$$</span> </p>
<p>If <span class="math-container">$b$</span> is positive, then <span class="math-container">$y' < 0$</span>, so <span class="math-container">$y$</span> is strictly decreasing.<br>
Thus <span class="math-container">$y$</span> can be zero at most once, and the<br>
original equation can have at most one solution.</p>
<p>If <span class="math-container">$b$</span> is negative, the derivative suggests<br>
some possibility of another solution. </p>
<p>A solution exists by the IVT, because <span class="math-container">$y(1) < 0 < y(0)$</span>.</p>
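<p>Both claims can be illustrated numerically; the parameter values below are my own choices. Bisection locates the root guaranteed by the IVT, and a sign-change scan shows that a sufficiently negative <span class="math-container">$b$</span> really can produce three solutions, so uniqueness genuinely fails there:</p>

```python
import math

def y(x, a, b):
    return 1.0 / (1.0 + math.exp(-a + b * x)) - x

def bisect_root(f, lo, hi, iters=80):
    # plain bisection; assumes f(lo) and f(hi) have opposite signs
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def count_roots(a, b):
    # count sign changes of y on a fine grid over [0, 1];
    # any solution lies in (0, 1) since the right-hand side does
    vals = [y(i / 10000, a, b) for i in range(10001)]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

root = bisect_root(lambda x: y(x, 1.0, 3.0), 0.0, 1.0)
assert abs(y(root, 1.0, 3.0)) < 1e-9    # the IVT root in (0, 1)
assert count_roots(1.0, 3.0) == 1       # b > 0: the solution is unique
assert count_roots(-4.9, -10.0) == 3    # a negative b with three solutions
```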
|
2,179,289 |
<p>Every valuation ring is an integrally closed local domain, and the integral closure of a local ring is the intersection of all valuation rings containing it. It would be useful for me to know when integrally closed local domains are valuation rings.</p>
<p>To be more specific,</p>
<blockquote>
<p>is there a property $P$ of unitary commutative rings that is strictly weaker than being a valuation ring, such that an integrally closed local domain is a valuation ring iff it satisfies the property $P$.</p>
</blockquote>
|
Hagen Knaf
| 2,479 |
<p>A commutative ring $R$ is called coherent, if every finitely generated ideal $I$ is finitely presented, that is as an $R$-module $I$ is isomorphic to $R^n/J$ for some finitely generated $R$-submodule $J$ of $R^n$.</p>
<p>For two ideals $I,J$ of $R$ one defines the ideal $(I:J):=\{r\in R : rJ\subseteq I\}$.</p>
<p>Now the following is true: the local integrally closed domain $R$ is a valuation domain if and only if $R$ is coherent and there exist $r,s\in R$, $s\not\in rR$ such that the maximal ideal $M$ of $R$ is minimal among the prime ideals containing $(rR:sR)$.</p>
<p>This follows from results obtained by J. Mott and M. Zafrullah some decades ago.</p>
<p>References:</p>
<p>S. Glaz, Commutative coherent rings, Lecture notes in mathematics 1371, 1989.
(general theory of coherence)</p>
<p>J. Mott, M. Zafrullah, On Prüfer v-multiplication domains, Manuscripta Mathematica 35 (1981). (Theorem 3.2 is relevant)</p>
<p>M. Zafrullah, On finite conductor domains, Manuscripta Mathematica 24 (1978). (Theorem 2 is relevant)</p>
|
1,204,745 |
<p>Let $(\Omega, A, \mathbb{P} )$ be a probability space. Let $f: \Omega \rightarrow [-\infty, \infty]$ an $A$-measurable function. </p>
<p>If $f$ is bounded on the positive side and unbounded on the negative side. Is it possible that $\mathbb{E}[f]$ (the expectation with probability measure $\mathbb{P}$ ) is finite?</p>
<p>and what if $f$ is unbounded on the 2 sides ?</p>
|
Nicolas
| 213,738 |
<p>The function $f$ can be unbounded and still be integrable. For example, $f\left(x\right)=\exp\left(-x^{2}\right)\mathtt{1}_{\mathbb{R}_{+}}\left(x\right)+\delta_{-1}\left(x\right)$ defines an unbounded function, but $\mathbb{E}\left[f\right]=\frac{\sqrt{\pi}}{2}$ for Lebesgue measure (with $\Omega=\mathbb{R}$). Here, you notice that $f$ is actually bounded Lebesgue-almost everywhere.</p>
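<p>Two numerical checks with plain midpoint Riemann sums (the truncation points and step sizes are my own choices): the Gaussian piece integrates to $\sqrt{\pi}/2$, and $x^{-1/2}$ on $(0,1]$ shows that even a genuinely unbounded function can have a finite integral:</p>

```python
import math

# integral of exp(-x^2) over [0, inf), truncated at x = 6 (tail ~ 1e-17)
n, upper = 20000, 6.0
h = upper / n
gauss = h * sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n))
assert math.isclose(gauss, math.sqrt(math.pi) / 2, rel_tol=1e-6)

# integral of x^(-1/2) over (0, 1] equals 2, although the integrand
# blows up near 0 -- unbounded yet integrable
m = 100_000
h2 = 1.0 / m
spike = h2 * sum(((i + 0.5) * h2) ** -0.5 for i in range(m))
assert abs(spike - 2.0) < 0.01
```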
|