title | upvoted_answer |
---|---|
Proving that height of this triangle is also height of pyramid | Simply note that $MO$ is orthogonal to the plane of $\triangle ABC$; thus, by definition, it is the height of the pyramid. |
Inverse of the asymptotic expansion of Gauss Hypergeometric function | We need an asymptotic for ${_2\hspace{-1px}F_1}(a, b; c; z)$ with $z \to 1^-$. Since $c - a - b = 1/(q - 1) < 0$, the leading term is
$$\frac {\Gamma(c) \Gamma(a + b - c)} {\Gamma(a) \Gamma(b)} (1 - z)^{c - a - b}.$$
This gives $\rho \sim r \sqrt{1 - (b/r)^{1 - q}}$, therefore $\rho \sim r$ and $r \sim \rho$.
For $q < 0$, the next term in the expansion of ${_2\hspace{-1px}F_1}$ is a constant, and we need only the first term from the $(1 - z)^{1/2}$ factor:
$${_2\hspace{-1px}F_1}
{\left( \frac 1 2, p; \frac 3 2; 1 - z \right)} =
\frac {z^{1 - p}} {2 (p - 1)} +
\frac {\sqrt \pi \,\Gamma(1 - p)}
{2 \Gamma {\left( \frac 3 2 - p \right)}} + \dots, \\
\rho = r + \frac {b \sqrt \pi \,\Gamma(1 - p)}
{(1 - q) \Gamma {\left( \frac 3 2 - p \right)}} + \dots, \\
r = \rho - \frac {b \sqrt \pi \,\Gamma(1 - p)}
{(1 - q) \Gamma {\left( \frac 3 2 - p \right)}} + \dots,$$
where $p = (2 - q)/(1 - q)$. For $q > 0$ and excluding the logarithmic case $p \in \mathbb N$, the next term in the expansion of ${_2\hspace{-1px}F_1}$ is of order $z^{2 - p}$, and we need two terms from $(1 - z)^{1/2}$:
$${_2\hspace{-1px}F_1}
{\left( \frac 1 2, p; \frac 3 2; 1 - z \right)} =
\frac {z^{1 - p}} {2 (p - 1)} +
\frac {(2 p - 3) z^{2 - p}} {4 (p - 1) (p - 2)} + \dots, \\
\rho = r + \frac {b^{1 - q} r^q} {2 q} + \dots, \\
r = \rho - \frac {b^{1 - q} \rho^q} {2 q} + \dots,$$
the last step coming from the Lagrange reversion theorem. |
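A minimal numerical sanity check of the two-term expansion above (my addition, using mpmath; the sample value $q=-2$, i.e. $p=4/3$, is an arbitrary illustrative choice):

```python
# Check 2F1(1/2, p; 3/2; 1-z) against the two-term expansion as z -> 0+.
from mpmath import mp, mpf, hyp2f1, gamma, sqrt, pi

mp.dps = 30
q = mpf(-2)
p = (2 - q) / (1 - q)  # p = (2-q)/(1-q) = 4/3

for z in (mpf("1e-3"), mpf("1e-5")):
    exact = hyp2f1(mpf(1) / 2, p, mpf(3) / 2, 1 - z)
    two_terms = z**(1 - p) / (2 * (p - 1)) \
        + sqrt(pi) * gamma(1 - p) / (2 * gamma(mpf(3) / 2 - p))
    print(z, exact - two_terms)  # the difference shrinks as z -> 0
```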
Cartesian product sets proof | Idea:
You should start with $(x,y) \in A \times (B/C)$. By definition this means $x \in A$ and $y\in B/C$, i.e. $y \in B$ and $y \notin C$. This implies $(x,y) \in A \times B$ and $(x,y) \not\in A \times C$. Consequently
$$(x,y) \in (A \times B)/(A \times C)$$
Now try the other set containment. |
What is wrong with this proof that if $V = U_1 \oplus W$ and if $V = U_2 \oplus W$, then $U_1 = U_2$? | The first problem in this proof is assuming that
$(u_1 - u_2) \in U_1 \cup U_2$.
In vector spaces, $u_1 \in U_1$ and $u_2 \in U_2$ do not imply $u_1 - u_2 \in U_1 \cup U_2$. In fact, it is more likely that $u_1 - u_2 \notin U_1 \cup U_2$.
There is another minor flaw:
$ u_1 \in U_2 \implies U_1 \subset U_2 $.
This would be true if it held for every $u_1 \in U_1$, but in this case you cannot pick $u_1$ freely, since $u_1$ is determined once $v$ is picked. One can in fact prove that every vector in $U_1$ arises as such a $u_1$, but that is not written in your proof. |
prove $\lim\frac{a_n}{b_n} = \frac{\lim a_n}{\lim b_n}$ | They are using the fact that $b_n$ converges to $b$. By definition, that means that whatever bound you set, sooner or later $b_n$ will be closer to $b$ than that bound. They choose the bound $|b|/2$, for convenience. Thus the definition of convergence tells them that from some point on (i.e. far enough along the sequence), the distance from $b_n$ to $b$ will be less than $|b|/2$. By the reverse triangle inequality, that specifically means that $|b_n| \ge |b| - |b_n - b| > |b|/2$. |
Calculus Derivatives - Finding a function, given tangent and x intercepts | $f(0) = a(0)^2 + b(0) + c = 0$, so $c = 0$
$f(8) = a(8)^2 + 8b + c = 64a + 8b + c = 0$
$f'(x) = 2ax + b \implies \mathrm{slope \, at \, x=2}: f'(2) = 2a(2) + b = 4a + b = 16$
You should be able to solve it pretty easily if you know how to solve a system of equations with 2 unknowns. |
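For completeness, a quick sympy check of the resulting system (my addition; it recovers $f(x) = -4x^2 + 32x$):

```python
# Solve f(0) = 0, f(8) = 0, f'(2) = 16 for a, b, c.
from sympy import symbols, solve, diff

a, b, c, x = symbols("a b c x")
f = a * x**2 + b * x + c
print(solve([f.subs(x, 0), f.subs(x, 8), diff(f, x).subs(x, 2) - 16],
            [a, b, c]))  # {a: -4, b: 32, c: 0}
```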
A fly has a lifespan of $4$–$6$ days. What is the probability that the fly will die at exactly $5$ days? | If the random variable $X$ has a continuous probability distribution with probability density function $f$, the probability of $X$ being in the interval $[a,b]$ is
$$ \mathbb P(a \le X \le b) = \int_a^b f(x)\; dx$$
i.e. the area under the curve $y = f(x)$ for $x$ from $a$ to $b$. But if $a = b$, that area, and that integral, are $0$.
Note that "exactly" in mathematics is very special. If the fly's lifetime is $5.0\dots01$ days (with as many zeros as you wish), it does not count as "exactly" $5$ days. But we can't actually measure the fly's lifetime with perfect precision, so we could never actually say that the fly's lifetime was exactly $5$ days. |
Meaning of the following integral | In physics notation, it means a triple integral over the whole 3-dimensional space. An equation like:
$$
\int d^3x\,f(x)
$$
actually means:
$$
\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(x,y,z)\,dx\,dy\,dz.
$$
For example, a 3-dimensional Fourier transform can be written as:
$$
\tilde{f}(p) = \int d^3x\,f(x)\,e^{-ip\cdot x},
$$
which means:
$$
\tilde{f}(p_1, p_2,p_3) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(x,y,z)\,e^{-ip_1x-ip_2y-ip_3z}\,dx\,dy\,dz.
$$
It is a little unclear at first, but as you can see, it shortens the formulas a lot. |
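A numerical illustration of the shorthand (my addition, assuming scipy): for $f(x) = e^{-|x|^2}$ the value of $\int d^3x\,f$ is $\pi^{3/2}$, and a truncated triple integral reproduces it.

```python
# tplquad integrates func(z, y, x); +/-6 is effectively +/-infinity here.
import numpy as np
from scipy.integrate import tplquad

val, err = tplquad(lambda z, y, x: np.exp(-(x*x + y*y + z*z)),
                   -6, 6, -6, 6, -6, 6)
print(val, np.pi**1.5)  # both approximately 5.568
```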
What is the sum of the squares of the roots of the equation $x^2 − 7[x] + 5 = 0?$ (Here $[x]$ denotes the greatest integer less than or equal to $x$) | Note that there are more than two roots (the equation is not a quadratic polynomial). We solve the problem in
three steps.
1) For $x<1$, $x^2+5>0\geq 7[x]$. No roots in $(-\infty,1)$.
2) For $x\geq 7$, $x^2+5>7x\geq 7[x]$. No roots in $[7,+\infty)$.
3) Try to solve the equation for $x\in [1,7)$ where $[x]\in\{1,2,3,4,5,6\}$.
If $x\in[1,2)$ then $x^2=7[x]-5=2$ implies $x=\sqrt{2}\in [1,2)$. So $x=\sqrt{2}$ is a root.
If $x\in[2,3)$ then $x^2=7[x]-5=9$ implies $x=3\not\in [2,3)$. No roots.
If $x\in[3,4)$ then $x^2=7[x]-5=16$ implies $x=4\not\in [3,4)$. No roots.
If $x\in[4,5)$ then $x^2=7[x]-5=23$ implies $x=\sqrt{23}\in [4,5)$. So $x=\sqrt{23}$ is a root.
If $x\in[5,6)$ then $x^2=7[x]-5=30$ implies $x=\sqrt{30}\in [5,6)$. So $x=\sqrt{30}$ is a root.
If $x\in[6,7)$ then $x^2=7[x]-5=37$ implies $x=\sqrt{37}\in [6,7)$. So $x=\sqrt{37}$ is a root.
Finally the sum of the squares of the roots is $2+23+30+37=92$. |
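A brute-force confirmation of the case analysis (my addition):

```python
# For [x] = m in {1,...,6}, keep x = sqrt(7m - 5) only if it lies in [m, m+1).
import math

squares = []
for m in range(1, 7):
    x = math.sqrt(7 * m - 5)
    if m <= x < m + 1:
        squares.append(7 * m - 5)
print(squares, sum(squares))  # [2, 23, 30, 37] 92
```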
Transforming vector field into spherical coordinates. Why and how does this method work? | There are two different questions here combined into one. One is translating the values of the components of a vector field from one basis to another. The other one is expressing those components with respect to one coordinate system or the other. You may have done the first task correctly (more on that in a second) but you haven't yet done the second step. You may now want to change your expressions so they are all in one set of variables. In the majority of situations, it would make sense to express everything with respect to the spherical variables since that appears to be the coordinate system you are going to.
Now, when you do that step, you will know whether you've done your calculation correctly. I can tell you that the correct answer is easily seen to be $$\begin{bmatrix}1\\ 0\\ 0 \end{bmatrix}$$ since your vector field points along the radial direction and has length one, so it coincides with the first vector from the spherical basis.
One final note. You will find essentially three different spherical bases in literature. In all cases, the three vectors point in the same three orthogonal directions, but the scaling is different. Perhaps the most common choice - which makes the least amount of sense - is to normalize all the vectors to one. The two other, far more elegant, choices are called covariant and contravariant. (Look up "Tensor Calculus" on YouTube.) For your particular problem, the choice won't matter since all three bases agree on the first vector in the triplet. |
If $f(n) \in \mathbb{Z}$ for an infinite number of $n \in \mathbb{Z}$, then $f \in \mathbb{Q}[x]$. | The following uses too much machinery: I am being lazy.
Suppose that $P(x)=w_0+w_1x+\cdots +w_{n-1}x^{n-1}$, where the $w_i$ are complex numbers.
Let $b_1$ to $b_n$ be distinct integers such that $P(b_i)=m_i$, where $m_i$ is an integer.
This yields the system of $n$ linear equations
$$w_0+b_iw_1+b_i^2w_2+\cdots +b_i^{n-1}w_{n-1}=m_i$$
($i=1,2,\dots, n$).
We will show that the above system has unique rational solutions $w_i$. This is straightforward, as long as we prove that the $n\times n$ matrix with $i$-th row
$$1\quad b_i \quad b_i^2 \quad b_i^3\quad \cdots\quad b_i^{n-1}$$
is invertible.
But that matrix is the well known Vandermonde matrix, and it is invertible. |
Asymmetric Normal Probability Distribution | I suggest the Rayleigh distribution, as it is quite similar to your figure; however, it starts from zero, but one can shift it as one wants. It is the distribution of the amplitude of a complex Gaussian random variable.
http://en.wikipedia.org/wiki/Rayleigh_distribution |
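A quick simulation of that last fact (my addition): the modulus of a standard complex Gaussian sample is Rayleigh-distributed.

```python
# Compare the empirical CDF of |Z|, Z complex Gaussian, with 1 - exp(-x^2/2).
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000) + 1j * rng.normal(size=100_000)
r = np.abs(z)

x = np.linspace(0.5, 3.0, 6)
empirical = np.array([(r <= xi).mean() for xi in x])
rayleigh = 1 - np.exp(-x**2 / 2)             # Rayleigh CDF with sigma = 1
print(np.max(np.abs(empirical - rayleigh)))  # small, on the order of 1e-3
```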
Integer Linear Programming: Sum of Products of Binary Variables | The following should do what you want, with only one set of $N$ additional binary variables $\beta_j$:
\begin{align}
x_i + y_i - 1 &\le \sum_{j=1}^N \beta_j &\text{for all $i$}\\
\beta_j &\le v_j &\text{for all $j$}\\
\beta_j &\le w_j &\text{for all $j$}
\end{align}
Alternatively, you can aggregate the last two constraints at the expense of weakening the linear programming relaxation:
\begin{align}
x_i + y_i - 1 &\le \sum_{j=1}^N \beta_j &\text{for all $i$}\\
2\beta_j &\le v_j + w_j &\text{for all $j$}
\end{align} |
Limit of a trigonometric rational expression | Observe
\begin{align}
\lim_{x\to\infty}\frac{\sin^2\left(\sqrt{x+1}-\sqrt{x}\right)}{1-\cos^2\frac{1}{x}}&=\lim_{x\to\infty}\frac{\sin^2\left(\sqrt{x+1}-\sqrt{x}\right)}{\sin^2\frac{1}{x}}\\
&=\lim_{x\to\infty}\frac{\left(\sqrt{x+1}-\sqrt{x}\right)^2\frac{\sin^2\left(\sqrt{x+1}-\sqrt{x}\right)}{(\sqrt{x+1}-\sqrt{x})^2}}{\frac{1}{x^2}\frac{\sin^2(1/x)}{1/x^2}}
\end{align}
Now, setting $t=\dfrac{1}{\sqrt{x+1}+\sqrt{x}}$ we note $x\to\infty$ implies $t\to 0^+$, and since $\displaystyle{\lim_{t\to 0}\frac{\sin t}{t}=1}$
$$\lim_{x\to\infty}\frac{\sin^2(\sqrt{x+1}-\sqrt{x})}{(\sqrt{x+1}-\sqrt{x})^2}=\left[\lim_{x\to \infty}\frac{\sin \frac{1}{\sqrt{x+1}+\sqrt{x}}}{\frac{1}{\sqrt{x+1}+\sqrt{x}}}\right]^2=\left(\lim_{t\to 0^+}\frac{\sin t}{t}\right)^2=(1)^2=1$$
Similarly, putting $t=1/x$ we have $t\to0^+$ as $x\to\infty$, and
$$\lim_{x\to\infty}\frac{\sin^2(1/x)}{1/x^2}=\left(\lim_{t\to 0^+}\frac{\sin t}{t}\right)^2=1^2=1$$
Then,
\begin{align}
\lim_{x\to\infty}\frac{\sin^2\left(\sqrt{x+1}-\sqrt{x}\right)}{1-\cos^2\frac{1}{x}}&=\left(\lim_{x\to \infty}\frac{\left(\sqrt{x+1}-\sqrt{x}\right)^2}{\frac{1}{x^2}}\right)\lim_{x\to\infty}\frac{\frac{\sin^2\left(\sqrt{x+1}-\sqrt{x}\right)}{(\sqrt{x+1}-\sqrt{x})^2}}{\frac{\sin^2(1/x)}{1/x^2}}\\[5pt]
&=\left(\lim_{x\to\infty}x^2\left(\frac{1}{\sqrt{x+1}+\sqrt{x}}\right)^2\right)\cdot\frac{1}{1}\\
&=\lim_{x\to\infty}\left(\frac{x}{\sqrt{x+1}+\sqrt{x}}\right)^2
\end{align}
You can determine the last limit. |
Find the remaining area when a half sphere is put in a cylinder. | In your figure, the radius of the cylinder is $r$, not $r/2$ and the height is $r$. So your cylinder volume is twice what you had written, namely $\pi r^3$. And now your result is positive. |
Please help solving this initial-value problem | Hint: $$m\,dx+n\,dy=0$$
$$\begin{cases}(ye^{xy} + \cos(x))\,dx + (xe^{xy})\,dy = 0\\y\left(\dfrac{\pi}{2}\right) = 0\end{cases}$$
$$\frac{\partial (ye^{xy} + \cos(x)) }{\partial y}=e^{xy}+xy\,e^{xy}$$
$$\frac{\partial (xe^{xy})}{\partial x}=e^{xy}+xy\,e^{xy}$$
so $$\frac{\partial n}{\partial x}=\frac{\partial m }{\partial y},$$ an exact equation. Now
$$u(x,y)=\int m\,dx=\int (ye^{xy} + \cos(x))\,dx=e^{xy}+\sin x +g(y)$$
$$u_y=xe^{xy}+g'(y) =xe^{xy} \implies g'(y)=0 \implies g(y)=c,$$ so $$u(x,y)=e^{xy}+\sin x+c.$$ |
how many ways are there to divide 180 items into 36 groups of 5? | Yeah, your way of counting doesn't work, for the reason that there will be significant overlaps between the groups counted by $\binom{180}{5}$. If your set is $\lbrace 1, \ldots, 180 \rbrace$, your counting allows for $\lbrace 1, 2, 3, 4, 5\rbrace$ and $\lbrace 1, 2, 3, 4, 6\rbrace$ to be two of your groups.
To follow your reasoning to the answer given in the book, you can count $\binom{180}{5}$ for your first group, then with these $5$ removed (whatever they are), you can count $\binom{175}{5}$ for your second group. Removing these $5$ more ($10$ in total), you can choose one of $\binom{170}{5}$ for your third group, etc, giving you the following count of ordered lists of unordered groups:$\require{cancel}$
\begin{align*}
\binom{180}{5} \cdot \binom{175}{5} \cdot \ldots \cdot \binom{10}{5} \cdot \binom{5}{5} &= \frac{180!}{\cancel{175!} \cdot 5!} \cdot \frac{\cancel{175!}}{\cancel{170!} \cdot 5!} \cdot \ldots \cdot \frac{\cancel{10!}}{\cancel{5!} \cdot 5!} \cdot \frac{\cancel{5!}}{0! \cdot 5!} \\
&= \frac{180!}{(5!)^{36}}.
\end{align*}
However, since we want an unordered group of groups, we have counted each unordered group $36!$ times, yielding a result of
$$\frac{180!}{(5!)^{36} \cdot 36!}.$$ |
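Both counts are cheap to verify exactly with integer arithmetic (my addition):

```python
# Product of binomials equals 180!/(5!)^36; dividing by 36! gives the answer.
from math import comb, factorial

ordered = 1
for k in range(180, 0, -5):        # 180, 175, ..., 5
    ordered *= comb(k, 5)
assert ordered == factorial(180) // factorial(5) ** 36
print(ordered // factorial(36))    # number of unordered groupings
```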
Find the quadratic equation of $x_1, x_2$. | If $x_1$ and $x_2$ are the solutions, then you can write
$$(x_1-x)(x_2-x)=0$$
which is
$$x^2-(x_1+x_2)\cdot x+x_1x_2=0$$
Thus $a=1$, $b=-x_1-x_2$ and $c=x_1x_2$ |
Hilbert-Schmidt operator gives the same infinite sum acting on any Hilbert basis | The key here is Parseval's identity which says
$$
\sum_{i\in I}|\langle x,e_i\rangle|^2=\Vert x\Vert^2
$$
for any $x\in\mathcal H$ and $(e_i)_{i\in I}$ any orthonormal basis of $\mathcal H$. Using this for orthonormal bases $(e_i)_{i\in I}$, $(f_i)_{i\in I}$, $(g_i)_{i\in I}$ of $\mathcal H$ one gets
$$
\sum_{i\in I}\Vert Ae_i\Vert^2=\sum_{i\in I}\sum_{j\in I}|\langle Ae_i,g_j\rangle|^2=\sum_{j\in I}\sum_{i\in I}|\langle A^\dagger g_j,e_i\rangle|^2=\sum_{j\in I}\Vert A^\dagger g_j\Vert^2\\
\hphantom{\sum_{i\in I}\Vert Ae_i\Vert^2}=\sum_{j\in I}\sum_{i\in I}|\langle A^\dagger g_j,f_i\rangle|^2=\sum_{i\in I}\sum_{j\in I}|\langle Af_i,g_j\rangle|^2=\sum_{i\in I}\Vert Af_i\Vert^2.
$$
Note that interchanging the sums over $i,j$ is fine as all the summands are non-negative. |
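A finite-dimensional numerical illustration (my addition): for a matrix $A$, the quantity $\sum_i \lVert A e_i\rVert^2$ is the squared Frobenius norm and does not depend on the orthonormal basis.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))   # columns: random orthonormal basis

std_basis = sum(np.linalg.norm(A @ e) ** 2 for e in np.eye(5))
other_basis = sum(np.linalg.norm(A @ q) ** 2 for q in Q.T)
print(std_basis, other_basis)                  # equal up to rounding
```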
$\mathbb{R}^3$ not diffeomorphic to $\mathbb{R}^3\setminus \{0\}$ | In fact they are not even homotopy equivalent. $\mathbb{R}^3$ is homotopy equivalent to a point, whereas $\mathbb{R}^3 \setminus \{ 0 \}$ is homotopy equivalent to $S^2$. These can be distinguished by e.g. their second homology $H_2$.
For a more differential argument you can try comparing second de Rham cohomology. On $\mathbb{R}^3 \setminus \{ 0 \}$ you can write down a $2$-form which is closed but not exact (because it integrates nontrivially over a sphere $S^2$ around the origin: for some physical inspiration think about Gauss's law), but on $\mathbb{R}^3$ all closed $2$-forms are exact. This reflects the fact that $H^2_{dR}(\mathbb{R}^3) = 0$ but $H^2_{dR}(\mathbb{R}^3 \setminus \{ 0 \}) \cong \mathbb{R}$.
Beyond that it depends on what tools you're supposed to have access to. |
Maximum value of $(x_{1}.x_{3}.x_{5})$ | You are on the right track. Now you could notice the match of $x_1+2x_3+x_5$ on both sides. Let $y=x_1+2x_3+x_5$ and we have $(y+2k)^2=y^2+4ky+4k^2=4ky$, so $y^2+4k^2=0$. OOPS: this forces $y=0$, but you told us that $x_{1},x_{3},x_{5}>0$, so $y>0$. |
For any closed subset of $\mathbb{R}$ there is a sequence in $\mathbb{R}$ whose set of sequential limits is equal to that subset | I'm going to assume that you know basic things about compactness, and give you two steps to begin with.
If $A$ is a closed and bounded set, then it is compact. Prove, even in general, that a compact metric space is always the closure of a countable set (in other words, a compact metric space is always separable). You can do that by finding a particular sequence of open covers, refining each to a finite subcover and using that to define the countable set.
If $A$ is closed and unbounded, then it can be written as the countable union of closed and bounded sets. Namely, if $A$ is closed, then it is the union of countably many compact sets. What do you know about the countable unions of countable sets? Now apply the previous case.
Or, prove directly,
If $A\subseteq\Bbb R$ is covered by open sets, then there is a countable subcover. Use similar tricks to the previous two steps and find a countable dense subset. (This property of $\Bbb R$ is called hereditarily Lindelöf.) |
How is the gradient of this function calculated? | From your initial expression:
$$
J = \lambda w^T w + \frac{1}{m}\sum_{i=1}^m \frac{1}{2}(w^Tx_i - y_i)^2
$$
Gradient respect to the vector $w$ yields
$$
\nabla J = \lambda \nabla (w^T w) + \frac{1}{m} \sum_{i=1}^m \frac{1}{2} \nabla \left(w^Tx_i-y_i \right)^2,
$$
Let's compute the gradient of the single bits.
First bit
$$
\nabla (w^T w) = 2w.
$$
Second bit
$$
\nabla(w^T x_i - y_i)^2 = \frac{d(w^T x_i - y_i)^2}{d(w^T x_i - y_i)} \nabla(w^T x_i) = 2(w^T x_i - y_i) x_i.
$$
Let's back substitute
$$
\nabla J = 2\lambda w + \frac{1}{m} \sum_{i=1}^m (w^T x_i - y_i) x_i = 2\lambda w + \frac{1}{m} \sum_{i=1}^m (w^T x_i) x_i - \frac{1}{m}\sum_{i=1}^m y_i x_i
$$
Now let's analyze the summation
$$
\sum_{i=1}^m (w^T x_i) x_i = \sum_{i=1}^m (x_i^T w) x_i = X X^T w = A w,
$$
Here $X$ is the matrix constructed as
$$
X = \left[x_1 , \ldots,x_m \right]
$$
Observe that
$$
X X^T = \sum_{i=1}^mx_i x_i^T.
$$
I suppose, apart from dimension checking, the most rigorous way to prove the equality is to prove that the two matrices $A = XX^T$ and $B = \sum_{k=1}^m x_k x_k^T$ define the same bilinear form, which means you need to check that
$$
e_i^T X X^T e_j = e_i^T \left( \sum_{k=1}^m x_k x_k^T \right) e_j
$$
for all $i,j$ ($e_i$ is the $i$-th vector of the canonical basis of the space in which the $x_i$ live); if you don't get confused with the indices you should be able to prove the equality. |
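A quick numerical check of the identity $\sum_i (w^T x_i)\, x_i = X X^T w$ (my addition):

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 4, 7
X = rng.normal(size=(d, m))        # columns are the x_i
w = rng.normal(size=d)

lhs = sum((w @ X[:, i]) * X[:, i] for i in range(m))
print(np.allclose(lhs, X @ X.T @ w))  # True
```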
Vector Spaces, is it or not? | A vector space typically has $10$ properties to check:
Closure under addition. (If you add two vectors, you get another vector)
Additive commutativity. (The order in which you add two vectors doesn't matter)
Additive associativity. (Parentheses can be moved around freely)
Zero vector. (There is a vector that you add to any other vector and nothing changes)
Additive inverses. (You have the negative of a vector.)
Closure under scalar multiplication (If you multiply a vector by a scalar, you get another vector)
Distributive law for multiplication ($c(u+v)=cu+cv$)
Distributive law for multiplication ($(c+d)u=cu+du$)
Associative law for multiplication ($(cd)u=c(du)$)
Identity for multiplication ($1u=u$).
To determine if this is a vector space, you need to check all of these properties. Let's get started:
$[x,y]+[a,b]=[x+a+1,y+b]$, when you add two vectors, you get a new vector in $\mathbb{R}^2$, so this is OK. There would only be a possible problem if one were to restrict the part of $\mathbb{R}^2$ under consideration (which doesn't happen here.)
$[x,y]+[a,b]=[x+a+1,y+b]$ and $[a,b]+[x,y]=[a+x+1,b+y]=[x+a+1,y+b]=[x,y]+[a,b]$ so additive commutativity holds.
We compute $([x,y]+[a,b])+[w,z]=[x+a+1,y+b]+[w,z]=[x+a+1+w+1,y+b+z]=[x+a+w+2,y+b+z]$ and $[x,y]+([a,b]+[w,z])=[x,y]+[a+w+1,b+z]=[x+a+w+1+1,y+b+z]=[x+a+w+2,y+b+z]=([x,y]+[a,b])+[w,z]$ so associativity holds.
We want a vector $[a,b]$ so that $[x,y]+[a,b]=[x,y]$. Since $[x,y]+[a,b]=[x+a+1,y+b]$, we need $x+a+1=x$ (so $a=-1$ and $y+b=y$, so $b=0$). Observe that $[x,y]+[-1,0]=[x-1+1,y+0]=[x,y]$ so $[-1,0]$ is the zero.
We want a vector $[a,b]$ so that $[x,y]+[a,b]=[-1,0]$. Since $[x,y]+[a,b]=[x+a+1,y+b]$, to get this to equal $[-1,0]$, we need $x+a+1=-1$ so $a=-2-x$ and for $y+b=0$, $b=-y$. Therefore, the negative of $[x,y]$ is $[-2-x,-y]$.
Since $r[x,y]=[xr+r-1,ry]$, which is a vector in $\mathbb{R}^2$, this is OK. There could only be a problem if we restricted our attention to a part of $\mathbb{R}^2$.
Consider $r([x,y]+[a,b])=r[x+a+1,y+b]=[r(x+a+1)+r-1,r(y+b)]=[rx+ra+2r-1,ry+rb]$. On the other hand, $r[x,y]+r[a,b]=[rx+r-1,ry]+[ra+r-1,rb]=[rx+r-1+ra+r-1+1,ry+rb]=[rx+ra+2r-1,ry+rb]=r([x,y]+[a,b])$.
Consider $(c+d)[x,y]=[(c+d)x+(c+d)-1,(c+d)y]=[cx+dx+c+d-1,cy+dy]$ and $c[x,y]+d[x,y]=[cx+c-1,cy]+[dx+d-1,dy]=[cx+c-1+dx+d-1+1,cy+dy]=[cx+dx+c+d-1,cy+dy]=(c+d)[x,y]$.
Consider $(cd)[x,y]=[cdx+cd-1,cdy]$ and $c(d[x,y])=c[dx+d-1,dy]=[c(dx+d-1)+c-1,cdy]=[cdx+cd-1,cdy]=(cd)[x,y]$.
Consider $1[x,y]=[1x+1-1,1y]=[x,y]$ so the multiplicative identity holds.
Therefore, all $10$ properties hold and this is a vector space (unless I made a mistake somewhere - very possible). If anyone wants to make this prettier, please feel free to change this up, I got tired somewhere around property 7. |
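As a safeguard against exactly that kind of slip, here is a brute-force spot check of the axioms for these exotic operations (my addition):

```python
# add and smul encode [x,y]+[a,b] = [x+a+1, y+b] and r[x,y] = [rx+r-1, ry].
import random

def add(u, v):
    return (u[0] + v[0] + 1, u[1] + v[1])

def smul(r, u):
    return (r * u[0] + r - 1, r * u[1])

def close(u, v):
    return abs(u[0] - v[0]) < 1e-9 and abs(u[1] - v[1]) < 1e-9

random.seed(0)
for _ in range(1000):
    u = (random.uniform(-9, 9), random.uniform(-9, 9))
    v = (random.uniform(-9, 9), random.uniform(-9, 9))
    c, d = random.uniform(-9, 9), random.uniform(-9, 9)
    assert close(add(u, v), add(v, u))                             # commutativity
    assert close(add(u, (-1, 0)), u)                               # zero is [-1,0]
    assert close(add(u, (-2 - u[0], -u[1])), (-1, 0))              # negatives
    assert close(smul(c, add(u, v)), add(smul(c, u), smul(c, v)))  # distributivity
    assert close(smul(c + d, u), add(smul(c, u), smul(d, u)))      # distributivity
    assert close(smul(c, smul(d, u)), smul(c * d, u))              # associativity
    assert close(smul(1, u), u)                                    # identity
print("all sampled axioms hold")
```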
Proving that a relation is well-defined | You say that to show a map $f :X \to Y$ is well defined we need to show that if $f(x) = y_1$ and $f(x)=y_2$ then $y_1 = y_2$. That is true.
An equivalent formulation is that if $x=y$ then we must have $f(x) = f(y)$.
This is what the proof does. Since $x^k = x^{n+k} = x^{2n+k} = \cdots$ we need to check that it doesn't matter whether we define $f(x^k)$ to be $y^k$ or $y^{n+k}$ or ... |
If D is integral domain, then $b_m$ is unit | Let $a(x) = x^m$. If $q(x)$ has degree greater than $0$, $q(x)b(x)$ will have degree greater than $m$ (because the degree of a product is the sum of the degrees of the factors when working over an integral domain), so since $r(x)$ has degree less than $m$, our equation cannot hold. Therefore we must have $q(x) = c$ for some $c \in D$. Then, since $r(x)$ has degree less than $m$, the leading term of $q(x)b(x) + r(x)$ will be $cb_m x^m$. Since this must actually equal $a(x)$, we get $cb_m = 1$. |
If a bounded sequence $(x_n)$ diverges and if $(x_{n_k})$ converges to $a$, show there exists another subseq. that converges to a $b\neq a$ | The quantifiers are missing. What you have is that "there exists some $\varepsilon_0 > 0$ such that, for all $N$, there exists $n \geq N$ for which $\lvert x_n - a \rvert \geq \varepsilon_0$."
In particular, that allows you to create a subsequence $(y_n)_{n\geq 0}=(x_{n_j})_{j\geq 0}$ of $(x_n)_{n\geq 0}$ such that $\lvert y_n - a \rvert \geq \varepsilon_0$ for all $n\geq 0$. (Can you see why?)
But then, this sequence itself is bounded, since $(x_n)_{n\geq 0}$ is. Invoking Bolzano-Weierstrass, it has a converging subsequence $(y_{n_k})_{k\geq 0}$: being a subsequence of a subsequence of $(x_n)_{n\geq 0}$, this is a subsequence of $(x_n)_{n\geq 0}$ as well. And it converges to a limit, call it $b$; but $b\neq a$, as by construction one must have $\lvert b - a \rvert \geq \varepsilon_0$. |
Are these integrals of motion? | Just write down the motion equations and you will get
$$
a\ddot\phi_1=-c\sin(\phi_1-\phi_2)
$$
$$
a\ddot\phi_2=c\sin(\phi_1-\phi_2).
$$
Now, sum these two equations to get $a(\ddot\phi_1+\ddot\phi_2)=0$; integrating once, you will get
$$
\dot\phi_1+\dot\phi_2=\text{constant}.
$$
Indeed, it is not difficult to realize that a change of coordinates to $\Phi_1=\phi_1+\phi_2$ and $\Phi_2=\phi_1-\phi_2$ can make all things somewhat clearer. |
Notion of truth in logic | One contemporary reading of Tarski's truth definition is that it simply defines truth in a structure. So, from that point of view, you do need to start with a particular structure before you can talk about "truth". I don't believe this was Tarski's original aim in the 1930s, but it's a reasonable approach to "truth" from the current viewpoint of the field.
So, yes, when people talk about "true but unprovable statements", or about a "standard model", they are beginning with the viewpoint that there is a "standard model" to which we can refer. Once someone accepts that there is a standard model, the fact that it gives each sentence a unique truth value is just a matter of Tarski's definition of truth in a structure.
However, the reason that we know the Gödel sentence of a consistent theory $T$ is "true" but "unprovable in $T$" is that we can prove the Gödel sentence in some metatheory. If we couldn't prove the Gödel sentence, we wouldn't know it is true.
In general, if you don't like to talk about "truth", you can replace claims about "truth" with claims about provability in the metatheory. This is a very standard procedure, and authors often don't bother to even mention it. They may write as if truth is well determined, but this still allows people who worry about it to replace "true" with some other meaning. (Similarly, authors may use Platonistic terminology, but people with other tastes know how to re-interpret the language to suit their taste.)
In particular, the Gödel sentence of a theory $T$ is provable in a very weak metatheory - primitive recursive arithmetic (PRA) will suffice - under the additional assumption of Con($T$). This fact is part of a "formalized incompleteness theorem", where "formalized" means that we pay attention to the particular metatheory necessary to prove the theorem, including its claims about truth.
So what we have formally is that, when $T$ satisfies the hypotheses of the incompleteness theorems and $G_T$ is its Gödel sentence, then
$$
\text{PRA} \vdash \text{Con}(T) \to G_T.
$$
If we recognize the axioms of PRA as "true" in some sense, and that Con($T$) is "true" in that sense if and only if $T$ is consistent, then that formal result can be interpreted as: if $T$ is consistent then $G_T$ is "true" in the same sense. |
Covariant Derivative Clarification | Something fundamental to remember is that metric tensors are metrilinic with respect to covariant derivatives in the sense that:
$$\partial_\alpha g^{\mu\nu}=0,\text{ for all indices.}$$
So for this reason, it can be 'pulled through' any covariant derivatives. In addition, the metric tensor has the property that it can raise indices. So the expression is reduced to:
$$\begin{align}
\partial_\mu\partial_\alpha(\phi^\alpha g^{\mu\nu})&=\partial_\mu(g^{\mu\nu}\partial_\alpha\phi^\alpha+\phi^\alpha\partial_\alpha g^{\mu\nu}) \\
\partial_\mu\partial_\alpha(\phi^\alpha g^{\mu\nu})&=\partial_\mu(g^{\mu\nu}\partial_\alpha\phi^\alpha) \\
\partial_\mu\partial_\alpha(\phi^\alpha g^{\mu\nu})&=\partial_\alpha\phi^\alpha\partial_\mu g^{\mu\nu}+g^{\mu\nu}\partial_\mu\partial_\alpha\phi^\alpha \\
\partial_\mu\partial_\alpha(\phi^\alpha g^{\mu\nu})&=g^{\mu\nu}\partial_\mu\partial_\alpha\phi^\alpha \\
\partial_\mu\partial_\alpha(\phi^\alpha g^{\mu\nu})&=\partial^\nu\partial_\alpha\phi^\alpha
\end{align}$$
This can be simplified in vector form as:
$$\nabla(\nabla\cdot\vec{\phi})$$ |
Finding the necessary $\delta$ for $\lim_{z \to 1+i}6z-4 = 2+6i$? | You have
$$|(2+6i)-(6z-4)| = |6(1+i)-6z| = 6 |(i+1)-z|$$
Thus whenever you have $|(i+1)-z| < \tfrac{\epsilon}{6}$ you also have $|(2+6i)-(6z-4)| < \epsilon$; that is, $\delta = \tfrac{\epsilon}{6}$ works. |
Piecewise union of continuous maps is continuous | You either need to assume $A$ and $B$ are both open or both closed.
Then it is a standard glueing lemma.
For other $A$ and $B$ things can go wrong. |
Prove that $\rm area(\triangle ABE) = area (ABCD)$ | Hint: The triangle ABE and the quadrilateral ABCD share the area of the triangle ABC. It suffices to show the remaining areas are equal, which are the areas of the triangle ACD and that of the triangle ACE. How may you show this? Find another area that is equal to both that of ACD and ACE, for example. |
Help determining the properties of this function (for the sake of nonlinear optimzation) | Your model is a pure exponential
$$y=a\, b^x=a\, e^{x\log(b)}=a\, e^{cx}$$ but it is nonlinear with respect to its parameters; so you need some reasonable guesses to start.
Keeping your formulation, in a first step, linearize the model
$$y=a\, b^x \implies \log(y)=\log(a)+x \log(b)=\alpha + \beta x$$ A first linear regression gives $\alpha$ and $\beta$ and then $a=e^{\alpha}$ and $b=e^{\beta}$. Now, start the nonlinear regression.
Edit
You could even reduce the problem to one equation in $b$
$$a=\frac{\sum_{i=1}^n y_i b^{x_i} } {\sum_{i=1}^n b^{2x_i} }$$ and then
$$f(b)=\frac{\sum_{i=1}^n y_i b^{x_i} } {\sum_{i=1}^n b^{2x_i} }-\frac{\sum_{i=1}^n x_iy_i b^{x_i} } {\sum_{i=1}^n x_ib^{2x_i} }=0$$ Since you have the estimate from $\beta$, even plotting will give you the result |
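A sketch of that two-step procedure (my addition, assuming numpy/scipy and synthetic data; a minimal illustration, not the OP's actual data):

```python
# Step 1: log-linear regression for starting values; step 2: nonlinear fit.
import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(0, 5, 6)
y = 2.0 * 1.3**x * (1 + 0.01 * np.random.default_rng(3).normal(size=6))

beta, alpha = np.polyfit(x, np.log(y), 1)   # log y = alpha + beta x
a0, b0 = np.exp(alpha), np.exp(beta)        # starting guesses

(a, b), _ = curve_fit(lambda x, a, b: a * b**x, x, y,
                      p0=(a0, b0), bounds=(0, np.inf))
print(a, b)                                 # close to 2 and 1.3
```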
Closed form of $\int{\lfloor{x}\rfloor}dx$ | Interpreting the integral as a Riemann-Stieltjes integral, you can integrate it by parts.
For $y > 0$, you get something like
$$\begin{align}\int_0^y \lfloor x \rfloor dx
&=
\int_{0-}^{y+} \lfloor x \rfloor dx =
y \lfloor y \rfloor - \int_{0-}^{y+} x d\lfloor x \rfloor\\
&= y \lfloor y \rfloor - \sum_{k=0}^{\lfloor y\rfloor} k
= y \lfloor y \rfloor - \frac12 \lfloor y \rfloor(\lfloor y \rfloor + 1)\\
&= \frac12y(y-1) + \frac12 \{y\}(1-\{y\})
\end{align}
$$
where $\{y\} = y - \lfloor y \rfloor$ is fractional part of $y$.
This suggests
$$\sum_{k=1}^{\infty}\left(\frac{\sin(k\pi x)}{k\pi}\right)^2 = \frac12 \{x\} (1-\{x\})$$
One can verify this by computing the Fourier series of the periodic function on RHS.
Notice
$$\begin{align}
a_0 &= \frac12\int_0^1 x(1-x) dx = \frac{1}{12} = \frac{1}{2\pi^2}\frac{\pi^2}{6}\\&= \frac{1}{2\pi^2}\sum_{k=1}^\infty \frac{1}{k^2}\\
a_k &= \frac12\int_0^1 x(1-x)\cos(2\pi k x)dx
= \frac{1}{4\pi k}\int_0^1 x(1-x) d\sin(2\pi k x)\\
&= \frac{1}{4\pi k}\left\{
\left[x(1-x) \sin(2\pi k x)\right]_0^1
+ \int_0^1 (2x-1)\sin(2\pi k x) dx
\right\}\\
&= -\frac{1}{8\pi^2 k^2}\int_0^1 (2x-1)d\cos(2\pi k x)\\
&= -\frac{1}{8\pi^2 k^2} \left\{
\left[(2x-1)\cos(2\pi k x)\right]_0^1 - 2\int_0^1 \cos(2\pi k x) dx
\right\}\\
&= -\frac{1}{4\pi^2k^2}
\end{align}$$
and by symmetry, $\displaystyle\;b_k = \frac12\int_0^1 x (1-x) \sin(2\pi k x)dx = 0$ for all $k$. This leads to
$$\begin{align}\frac12\{x\}(1 - \{x\}) &=
a_0 + 2\sum_{k=1}^\infty (a_k \cos(2\pi k x) + b_k\sin(2\pi kx))\\
&= \frac{1}{2\pi^2}\sum_{k=1}^\infty \frac{1 - \cos(2\pi k x)}{k^2}\\
&= \sum_{k=1}^\infty \left(\frac{\sin(\pi k x)}{k \pi}\right)^2\end{align}$$ |
Heuristics for Lipschitz equivalence | There is indeed some similarity. But in the $\Theta$-notation we have the idea that the approximation only holds for sufficiently large $x$, while here the inequality holds for all pairs of points, so it is uniform. Also, it is unclear what the equivalent formulation of the limit towards infinity would be: e.g., what if the metric space is bounded? |
Calkin algebra and $\beta\omega\setminus\omega$. | If you accept the analogies
$$c_0 \leftrightarrow \mathcal{K}(\ell_2),$$
$$\ell_\infty \leftrightarrow \mathcal{B}(\ell_2),$$
then you also have
$$C(\beta\mathbb N \setminus\mathbb N) \leftrightarrow \mathcal{B}(\ell_2)/\mathcal{K}(\ell_2),$$
as $C(\beta \mathbb N\setminus \mathbb N)\cong \ell_\infty / c_0$ and $C(\beta \mathbb N\setminus \mathbb N)$ is encoded by its spectrum $\beta \mathbb N\setminus \mathbb N$ (by the Gelfand-Kolmogorov theorem, for example).
The algebra $\ell_\infty$ is the canonical diagonal masa in $\mathcal{B}(\ell_2)$ and so $\ell_\infty / c_0$ is the canonical diagonal masa in the Calkin algebra. Thus, you have one more analogy. |
If $T$ is a densely-defined injective operator between Hilbert spaces with dense range, then $T^\ast$ is injective as well | Converting my comment to an answer..
Suppose $T^*x = 0$ for some $x\in\operatorname{dom}(T^*)$; then $\langle T^* x, y\rangle = 0$ for all $y\in H_1$. By definition of the adjoint operator, this says that $\langle x, Ty\rangle = 0$ for all $y\in \operatorname{dom}(T)$. Since $T$ is densely-defined and has dense range, there is a sequence $(y_n)_{n=1}^{\infty}\subseteq H_1$ such that $(Ty_n)_{n=1}^{\infty}$ converges to $x$ (note that nothing is said about $y_n\to y$, which would be the closedness condition you were probably thinking of), and so $\langle x, Ty_n\rangle = 0$ for every $n$. But since $Ty_n \to x$ and inner products are continuous, we can pass the limit inside to conclude that $\langle x, x\rangle = 0$, i.e. $x = 0$. |
Proving df is the sum of partial derivatives | Let $A$ be open in $\Bbb{R}^{n}$; let $f: A \to \Bbb{R}$ be of class $C^{1}$; for every $x \in A$ let $df^{x}: v \mapsto \nabla f(x)\cdot v$ on $\Bbb{R}^{n}$. Then $df$, the differential of $f$, is a $1$-form on $A$ (provided that a $1$-form is defined as an alternating $1$-tensor).
Each elementary 1-form $dx_{i}$, which is the differential of the $i$th projection map on $\Bbb{R}^{n}$, is thus the map that assigns to every $x \in A$ the map $dx_{i}^{x}: v \mapsto \nabla x_{i}(x)\cdot v = v_{i}$ on $\Bbb{R}^{n}.$
If $x \in A$ and if $v \in \Bbb{R}^{n}$, then
$$
df^{x}(v) = \nabla f(x)\cdot v = \sum_{i=1}^{n}D_{i}f(x)v_{i} = \sum_{i=1}^{n}D_{i}f(x)dx_{i}^{x}(v);
$$
this result can be written succinctly as
$$
df = \sum_{i=1}^{n}(D_{i}f) dx_{i}.
$$
Note that it is a (conventional) "abuse" of notation to write the $i$th projection map as $x_{i}$; if we denote it by $p_{i}$, say, then it is clear that
$$
\frac{\partial p_{i}}{\partial x_{j}} = \frac{\partial x_{i}}{\partial x_{j}} = \delta_{ij},
$$
which simply says that $D_{j}x_{i} = 1$ if $i=j$ and $=0$ if $i \neq j$. |
Optional sampling theorem with possibly infinite stopping times | By the optional stopping theorem, we have
$$\mathbb{E}(M_{\sigma_2 \wedge n} \mid \mathcal{F}_{\sigma_1 \wedge k}) = M_{\sigma_1 \wedge k}$$
for any $k \leq n$. Since $M_{\sigma_2 \wedge n}$ converges to $M_{\sigma_2}$ in $L^1$ as $n \to \infty$, this implies
$$\mathbb{E}(M_{\sigma_2} \mid \mathcal{F}_{\sigma_1 \wedge k}) = M_{\sigma_1 \wedge k}. \tag{1}$$
Now we can apply Lévy's convergence theorem to conclude
$$\begin{align*} M_{\sigma_1} = \lim_{k \to \infty} M_{\sigma_1 \wedge k} &\stackrel{(1)}{=} \lim_{k \to \infty} \mathbb{E}(M_{\sigma_2} \mid \mathcal{F}_{\sigma_1 \wedge k}) \stackrel{\text{Lévy}}{=} \mathbb{E}(M_{\sigma_2} \mid \mathcal{F}_{\sigma_1}). \end{align*}$$ |
distribution of limits for infinite terms | You have:
$\lim\limits_{n\to\infty}\left(\frac {f(a + \frac {1}{n^2}) - f(a)}{\frac {1}{n^2}}
+ 2\frac {f(a + \frac {2}{n^2}) - f(a)}{\frac {2}{n^2}} + 3\frac {f(a + \frac {3}{n^2}) - f(a)}{\frac {3}{n^2}}+\cdots + n\frac {f(a + \frac {n}{n^2}) - f(a)}{\frac {n}{n^2}}\right)\frac{1}{n^2}$
For each term:
$\lim\limits_{n\to\infty}\frac {f(a + \frac {k}{n^2}) - f(a)}{\frac k{n^2}} = f'(a)$
$\lim\limits_{n\to\infty}\left(f'(a) + 2 f'(a) + 3 f'(a) + \cdots + nf'(a)\right)\frac{1}{n^2}$
Which is an arithmetic series.
$\lim\limits_{n\to\infty} \frac{n(n+1)f'(a)}{2n^2} = \frac 12 f'(a)$
Update reflecting a learner's comment.
If we want to do it "right" we need to start with the Taylor expansion.
$f(a+h) = f(a) + hf'(a) + O(h^2)$
$\lim\limits_{n\to\infty} \sum\limits_{k=1}^n f(a+\frac {k}{n^2}) - nf(a)\\
= \lim\limits_{n\to\infty} \sum\limits_{k=1}^n \left(f(a) + \frac {k}{n^2}f'(a) + k^2 O(\frac 1{n^4})\right) - nf(a)\\
= \lim\limits_{n\to\infty} nf(a) + \frac {n(n+1)}{2n^2}f'(a) + \frac {n(n+1)(2n+1)}{6} O(\frac 1{n^4}) - nf(a)\\
= \lim\limits_{n\to\infty} \frac {n(n+1)}{2n^2}f'(a) + O(n^{-1})\\
= \frac 12 f'(a)$ |
Correct interpretation of the Bayesian rule when having a mix of (i) a joint and (ii) a conditional probability | Your “interpretation 1” is incorrect.
$p(x_i,d_i|h)$ denotes the joint probability of $x_i$ and $d_i$ when $h$ is given. It is defined as:
$p(x_i,d_i|h) = \frac{p(x_i,d_i,h)}{p(h)}$.
Therefore, $h$ conditions both $x_i$ and $d_i$.
For the second equality, consider the definition of $p(d_i|x_i,h)$:
$p(d_i|x_i,h) = \frac{p(x_i,d_i|h)}{p(x_i|h)}$
If you know/assume that $x_i$ and $h$ are independent (which the author of the book does), then you have $p(x_i|h)=p(x_i)$ and you get the second equality.
If you are interested in having a deeper understanding of this kind of model, then you should consider studying probabilistic graphical models and, in particular, Bayesian networks. There’s a great book by Daphne Koller and an excellent course on Coursera about that subject. |
Proof of commutativity for complex numbers help | $$
(x+iy)(z+iw)=xz+i(wx+yz)-yw
$$
Quite similarly
$$
(z+iw)(x+iy)=zx+i(zy+xw)-wy
$$
since all of the factors in these products are real and real multiplication commutes, we have equality of real and imaginary parts in the two expressions. |
Probability Puzzle : Robot and coins | Let $h$ be the proportion of heads among the coins and $t=1-h$ the proportion of tails. In a stationary state, the chance of a transition heads $\to$ tails must equal the chance of a transition tails $\to$ heads. So we have $h=\frac t2$, and the equilibrium state is $\frac 13$ heads and $\frac 23$ tails. |
About the conjugacy classes of a finite group | Use
$$yz=x=yz(y^{-1}y)= (yzy^{-1})y$$
and notice that $yzy^{-1}\in K_j$. This allows you to define an injection
$$\{(y, z) \in K_i \times K_j : yz = x\}\to\{(y, z) \in K_j \times K_i : yz = x\}:(y,z)\mapsto (yzy^{-1},y).$$
This shows $n_{ijs}\leq n_{jis}$; exchanging $i,j$ gives the other inequality and thus the equality. |
Is this proof sufficient to show that a concave function of a metric is also a metric? | Monotonicity of $f$ follows from your assumptions.
Assume that $f$ is not monotonically increasing. Then there are $x<y$ such that $f(x)\ge f(y) +\delta$, $\delta>0$.
This implies that $f(z)<0$ for some large $z$: Let $z>y$. Then
$$
y = x + \frac{y-x}{z-x}(z-x) = \frac{z-y}{z-x}x + \frac{y-x}{z-x}z,
$$
and by concavity
$$
f(y) \ge \frac{z-y}{z-x} f(x) + \frac{y-x}{z-x}f(z).
$$
Since $f(x)\ge f(y)+\delta$, we get
$$
f(x) \ge \frac{z-y}{z-x} f(x) + \frac{y-x}{z-x}f(z) + \delta,
$$
which is equivalent to
$$
f(x)- \frac{z-x}{y-x}\delta\ge f(z).
$$
Now for $z\to\infty$ the left-hand side tends to $-\infty$, thus there is $z$ such that $f(z)<0$. Contradiction. |
Inductive proof that $n(n-1)(n+1)$ is divisible by $6$ | Does it have to be by induction? The solutions suggested by the other answers are more straightforward.
Anyway...
Basis: $(1)(0)(2) = 0$ is divisible by $6$ (when $n = 1$).
Induction: Suppose $k = n(n-1)(n+1)$ is divisible by $6$. Then
\begin{align}
(n+1)n(n+2) & = (n+1)n[(n-1)+3] \\
& = k+3n(n+1)
\end{align}
and provided $n(n+1)$ is always even, this quantity is then also divisible by $6$. We show this secondary claim, also by induction (!).
Basis: $(1)(2) = 2$ is even (when $n = 1$).
Induction: Suppose that $j = n(n+1)$ is even. Then
$$
(n+1)(n+2) = j+2(n+1)
$$
is also even. |
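Both claims are easy to spot-check by machine (my addition):

```python
# Exhaustive check of divisibility for small n.
assert all(n * (n - 1) * (n + 1) % 6 == 0 for n in range(1, 10_000))
assert all(n * (n + 1) % 2 == 0 for n in range(1, 10_000))
print("n(n-1)(n+1) divisible by 6, n(n+1) even, for all tested n")
```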
Question about immersions of $\mathbb{R}P^n$ into $\mathbb{R}^{n+1}$ | If $i:\mathbb P^n\hookrightarrow \mathbb R^{n+1}$ is an immersion, you have the relation on $\mathbb P^n$, involving the normal line bundle $N$: $$ i^* T_{\mathbb R^{n+1}}=T_{\mathbb P^n}\oplus N $$
From this you deduce for the total Stiefel-Whitney classes $$1=w(T_{\mathbb P^n})\cdot w(N)\in H^*(\mathbb P^n,\mathbb F_2) \quad (*)$$
Since $N$ is a line bundle, we have $w_i(N)=0$ for $i\gt 1$ and a little calculation then shows that this is only possible if for some $k$ we have $n=2^k-1$ or $n=2^k-2$.
Edit Since I have some free time now, here is the "little calculation":
Recall that $H^*(\mathbb P^n,\mathbb F_2)=\mathbb F_2[h]/\langle h^{n+1}\rangle$ and that $w(T_{\mathbb P^n})=(1+h)^{n+1}$.
Since $N$ has rank $1$, its total Stiefel-Whitney class is $w(N)=1$ or $w(N)=1+h$ , so we dichotomize:
First case: $ w(N)=1$
Then from $(*)$ we get $w(T_{\mathbb P^n})=(1+h)^{n+1}=1$, hence (from arithmetic modulo $2$ : see below) $n+1=2^k$
Second case: $ w(N)=1+h$
Then from $(*)$ we get $w(T_{\mathbb P^n})=(1+h)^{n+1}\cdot (1+h)=(1+h)^{n+2}=1 $ and again from arithmetic modulo 2 we get $n+2=2^k$.
Arithmetic modulo 2 fact: If $N=2^rs$ with $s$ odd then $$(1+h)^{N}=(1+h^{2^r})^s=1\in \mathbb F_2[h]/\langle h^{n+1}\rangle \iff 2^r\geq n+1$$
This follows from the binomial formula so dear to Professor Moriarty. |
Continuity of Probability Measures proof | I think you're going too deep: the definition of an infinite sum is exactly the limit of partial sums. $$\sum_{i > 0} Pr(B_i)$$ literally is defined as $\lim_{n \to\infty} \sum_{i = 1}^n Pr(B_i)$. |
Tricky limit as $n$ tends to infinity of an expression involving a bunch of roots | I'll be using the form
$$
L=\lim_{x \rightarrow\infty}\left(\frac{(x+1)^2}{x}\left(2\pi(x+1)\right)^{\frac{1}{2(x+1)}}-x\left(2\pi x\right)^{\frac{1}{2x}}\right).
$$
We'll look at the function
$$
f(x)=(2\pi x)^{1/2x}=\exp\left(\frac{1}{2x}\log(2\pi x)\right).
$$
First note that $\lim_{x \rightarrow\infty}f(x)=1$, so we can simplify the original limit:
\begin{align}
L&=\lim_{x \rightarrow\infty}\frac{(x+1)^2f(x+1)-x^2f(x)}{x}\\
&=\lim_{x \rightarrow\infty}\color{orange}{\frac{x^2f(x+1)-x^2f(x)}x}
+\color{blue}{\frac{2xf(x+1)}x}
+\color{green}{\frac{f(x+1)}{x}}\\
&=\color{blue}2+\color{green}0+\color{orange}{\lim_{x \rightarrow\infty}x(f(x+1)-f(x))}\\
&=2+\lim_{x \rightarrow\infty}x(f(x+1)-f(x)).
\end{align}
Now use Lagrange's theorem to write $f(x+1)-f(x)=f'(\xi(x))$ for some $\xi\in(x,x+1)$. To make use of this, we evaluate the limit of $g(x)=xf'(x)$:
$$
\lim_{x \rightarrow\infty}g(x)=\lim_{x \rightarrow\infty}xf'(x)=\lim_{x \rightarrow\infty}f(x)\frac{1-\log(2\pi x)}{2x}=0
$$
Since $\xi(x)\in(x,x+1)$ we have:
$$
\lim_{x \rightarrow\infty}x(f(x+1)-f(x))=\lim_{x \rightarrow\infty}\xi(x)f'(\xi(x))+(x-\xi(x))f'(\xi(x))=\lim_{x \rightarrow\infty}g(\xi(x))=0
$$
Recall that $\lim_{x \rightarrow\infty}\xi(x)=\infty$; the term with $x-\xi(x)$ disappears as this expression is bounded and $f'(\xi(x))$ vanishes in the limit. The very last equality holds because of $\lim_{x \rightarrow\infty}g(x)=0$ and $\lim_{x \rightarrow\infty}\xi(x)=\infty$. This establishes $L=2$.
EDIT: Here's an approach without Lagrange's theorem. We'll again look at the limit
$$
\lim_{x \rightarrow\infty}x(f(x+1)-f(x))
$$
Calculating $f''$, we find
$$
f''(x)=f(x)\left(\left(\frac{1-\log{2\pi x}}{2x^2}\right)^2+\frac{2\log{2\pi x}-3}{2x^3}\right)
$$
In particular, for large $x:f''(x)>0$. This means the function is convex and because $f'<0$ for large $x$, we can estimate
$$
\vert f(x+1)-f(x)\vert < |f'(x)|\cdot 1=\vert f'(x)\vert
$$
But since $\lim_{x \rightarrow\infty}xf'(x)=0$, this implies $\lim_{x \rightarrow\infty}x(f(x+1)-f(x))=0$. |
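Numeric evidence for $L = 2$ (my addition, using mpmath):

```python
# Evaluate (x+1)^2/x * f(x+1) - x * f(x) for growing x; it drifts toward 2.
from mpmath import mp, mpf, pi, log, exp

mp.dps = 30

def f(x):
    return exp(log(2 * pi * x) / (2 * x))

for x in (mpf(10)**4, mpf(10)**6, mpf(10)**8):
    print((x + 1)**2 / x * f(x + 1) - x * f(x))
```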
Is the intersection of these given sequences the empty set? | Hint: Assume some $x>0$ is in the intersection. Can you find a contradiction to this assumption? That is, can you find one of the intervals in the intersection not containing $x$?
Responding to your edit: 0 is not in any of the sets under consideration, so it cannot be in the intersection. |
Calculate the area of a triangle with four circles inside | Let us construct a similar configuration by choosing the radius $r$ of the involved circles and the "critical distance" $d$:
The area of the innermost triangle is $4r^2$ and the innermost triangle is similar to $ABC$. The angle bisectors of $ABC$ are also the angle bisectors of the innermost triangle, so the ratio between a side length of $ABC$ and the length of the parallel side in the innermost triangle is $\frac{i+r}{i}$, with $i$ being the inradius of the innermost triangle. It follows that the area of $ABC$ is
$$ 4r^2\left(\frac{i+r}{i}\right)^2 $$
and $\frac{i+r}{i}=\frac{AC}{4r}$. But... wait! This gives that the area of $ABC$ is just $\color{red}{\frac{1}{4}AC^2}$; we do not need to know $r$, $i$, or $d$ at all!!!
In particular, the distance of $B$ from $AC$ is exactly $\frac{AC}{2}$. |
Volume under hyperbolic paraboloid, above unit disk | Your calculations are correct. Doing the integral in Cartesian coordinates (not the best idea in this case!):
$$
\frac V4=\int_0^{\sqrt2/2}\int_y^{\sqrt{1-y^2}}\int_0^{x^2-y^2}1\,dz\,dx\,dy = \frac18
$$ |
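The same value can be cross-checked numerically (my addition, assuming scipy):

```python
# V/4 as an iterated integral of the height x^2 - y^2 over the given region.
import numpy as np
from scipy.integrate import dblquad

val, err = dblquad(lambda x, y: x * x - y * y,      # inner variable first
                   0, np.sqrt(2) / 2,               # y-range
                   lambda y: y,                     # x from y ...
                   lambda y: np.sqrt(1 - y * y))    # ... to sqrt(1 - y^2)
print(val)  # 0.125
```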
Matrices with a certain property | We shall adopt the following notations. Let $B=A^{-1}$. Denote the standard basis by $\{e_1,\ldots,e_n\}$ and the identity matrix by $\mathbf I$. When not in bold face, the letter $I$ denotes an index set. For any matrix $M$, denote its $i$-th row by $M_{i\ast}$ and its $j$-th column by $M_{\ast j}$. When $I$ and $J$ are index sets, $M_{IJ},\ M_{iJ}$ and $M_{Ij}$ are notations for submatrix, sub row vector and sub column vector of $M$ respectively. We call a nonzero element $m_{ij}$ a right entry of $M$ if $j\ge i$ or a left entry if $i>j$.
By the given conditions, $A$ satisfies the following properties:
Property 1. $A$ cannot contain both right and left entries on the same column. If $a_{ij}$ is a right entry, $A_{\ast j}$ is a scalar multiple of $e_i$.
Proof. This follows directly from the third given condition.
Property 2. Each row of $A$ contains at most one right entry.
Proof. Suppose $a_{ij}$ and $a_{ik}$ are two right entries of $A$. By property 1, both $A_{\ast j}$ and $A_{\ast k}$ are scalar multiples of $e_i$. Hence $j$ must be equal to $k$ because $A$ is invertible.
Property 3. Every column of $A$ contains at most one left entry.
Proof. Suppose the contrary that $A$ has more than one left entries on some column $j$. Let $K=\{k: a_{kj}\ne0\}$. Then $|K|>1$ and by property 1, $a_{kj}$ is a left entry of $A$ and $b_{jk}$ is either zero or a right entry of $B$ for each $k\in K$. Since $BA=\mathbf I$, we have
$$
B_{\ast K}A_{Kj} = BA_{\ast j} = (BA)_{\ast j} = e_j.\tag{1}
$$
In particular, $B_{jK}A_{Kj}=1$. Since $A_{Kj}$ is entrywise nonzero, we must have $b_{jk_0}\ne0$ for some $k_0\in K$, meaning that $b_{jk_0}$ is a right entry of $B$. Therefore, $b_{jk_0}$ is the only nonzero entry on $B_{\ast k_0}$ by property 1 and also the only nonzero entry on $B_{jK}$ by property 2. In other words, $B_{\ast K}$ is in the form of
$$
B_{\ast K}=\pmatrix{\ast&0&\ast\\ 0&b_{jk_0}&0\\ \ast&0&\ast}.\tag{2}
$$
Let $I=\{1,2,\ldots,n\}\setminus\{j\}$ and $J=K\setminus\{k_0\}$. As $|K|>1$, $J$ is non-empty. Then $(1)$ and $(2)$ together imply that $A_{Jj}$ is a non-trivial solution of $B_{IJ}A_{Jj}=0$. Thus $B_{IJ}$ and in turn $B_{\ast J}$ have linearly dependent columns, and we arrive at a contradiction because $B$ is supposed to be invertible. $\square$
Now properties 1 and 3 imply that $A$ has at most one nonzero entry on each column. It follows from the invertibility of $A$ that $A$ has exactly one nonzero entry on each row and each column, i.e. $A=DP$ for some permutation matrix $P$ and some invertible diagonal matrix $D$. |
why $2(-1 + 2^{1 + n})$ is the answer to the recurrence relation $a_{n}=2a_{n-1}+2$? | Your expression for $a_n$ is correct. So $$a_n=2+\cdots+2^n+2^{n+1}.$$ Now note that this is a sum of $n+1$ terms in geometric progression with first term $a=2$ and the common ratio $r=2$. The sum $S$ can be found by $$S=\frac{a(r^{n+1}-1)}{r-1}$$ So substituting $a=2$ and $r=2$ we get $$a_n=\frac{2(2^{n+1}-1)}{2-1}=2(2^{n+1}-1)$$ as required. |
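A quick check of the closed form against the recurrence (my addition; $a_0 = 2$ is the initial value implied by the sum above):

```python
a = 2                                  # a_0 = 2 (assumed from the sum)
for n in range(1, 15):
    a = 2 * a + 2
    assert a == 2 * (2**(n + 1) - 1)
print("closed form matches the recurrence")
```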
Number of paths of length $P_{n-1}$ in $K_n$? | Your title says $P_{n+1}$ but your question asks for $P_{n-1}$.
Assuming you are going for non-intersecting paths only. Consider a path with $n-1$ vertices. Once you fix the first vertex and the last vertex of the path, the remaining $n-3$ vertices can be put in different orders to create a unique path with each arrangement.
So the number of such paths is:
$$\underbrace{\binom{n}{2}}_{\text{A}}\underbrace{\binom{n-2}{n-3}}_{\text{B}}\underbrace{(n-3)!}_{\text{C}}=\frac{n!}{2},$$
where
A is no. of ways of choosing first and last vertex.
B is no. of ways of choosing in between vertices.
C is no. of ways of arranging the in between vertices.
In fact this idea can be generalized to paths with $m$ vertices.
$$\underbrace{\binom{n}{2}}_{\text{A}}\underbrace{\binom{n-2}{m-2}}_{\text{B}}\underbrace{(m-2)!}_{\text{C}}=\frac{n!}{2(n-m)!}.$$ |
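A brute-force verification of the general count for small cases (my addition):

```python
# Count paths on m distinct vertices of K_n; each path is an ordered tuple,
# and keeping p[0] < p[-1] discards the reversed duplicate.
from itertools import permutations
from math import factorial

def count_paths(n, m):
    return sum(1 for p in permutations(range(n), m) if p[0] < p[-1])

for n, m in [(5, 3), (6, 4), (6, 5)]:
    print(count_paths(n, m), factorial(n) // (2 * factorial(n - m)))
```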
Symmetry vs isometry | An isometry is a set bijection $\Phi : (X, d) \to (X', d')$ between metric spaces that identifies $d, d'$, that is, that satisfies
$$d(x, y) = d'(\Phi(x), \Phi(y)) \qquad \textrm{for all $x, y \in X$} .$$
A symmetry (as defined in the excerpt), then, is just an isometry from a metric space $(X, d)$ to itself.
Note that every metric space admits at least one symmetry, namely the identity map. Checking the axioms directly shows that the symmetries of a fixed metric space $(X, d)$ form a group under composition, which is called the isometry group and is sometimes denoted $\operatorname{Iso}(X, d)$.
As Clara points out in the comments, the term symmetry does not usually have this restricted meaning. More generally, given a set $X$ equipped with some structure $\mathcal S$, we can consider the bijections $X \to X$ that preserve $\mathcal S$ in an appropriate sense. Generically these maps are called automorphisms rather than symmetries, and again for any $(X, \mathcal S)$ the set of such maps form a group under composition called the automorphism group of $(X, \mathcal S)$, and it is often denoted $\operatorname{Aut}(X, \mathcal S)$. In some contexts, often when $(X, \mathcal S)$ has some geometric interpretation, this group is sometimes called the symmetry group of $(X, \mathcal S)$. |
Some ring homomorphisms | Hint: For (i), where does a ring homomorphism have to map $1$? For (ii), is $\phi(x+y) = \phi(x) + \phi(y)$? |
How to evaluate $\int_{0}^1\frac{\arctan(x)}{x}\,dx$? | For any $x\in(-1,1)$ we have
$$ \arctan(x) = x-\frac{x^3}{3}+\frac{x^5}{5}-\frac{x^7}{7}+\ldots\tag{1} $$
hence by dividing both sides by $x$ and apply termwise integration on $(0,1)$ we get:
$$ \int_{0}^{1}\frac{\arctan x}{x}\,dx = 1-\frac{1}{9}+\frac{1}{25}-\frac{1}{49}+\ldots = K\tag{2}$$
namely Catalan's constant.
Late addendum. Decent rational approximations for $K$ can be produced by creative telescoping. By coupling consecutive terms we have $K=\sum_{n\geq 0}\frac{8(2n+1)}{(4n+1)^2(4n+3)^2}$, and $\frac{8(2n+1)}{(4n+1)^2(4n+3)^2}$ and $\frac{1}{32n^2+6}-\frac{1}{32(n+1)^2+6}$ have the same asymptotic expansion up to the $O\left(\frac{1}{n^6}\right)$ term. This leads to
$$ K \approx \frac{8}{9}+\frac{1}{38} $$
where the absolute error is less than $8\cdot 10^{-4}$. |
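Checking the stated error bound against mpmath's built-in value of $K$ (my addition):

```python
from mpmath import mp, mpf, catalan

mp.dps = 20
approx = mpf(8) / 9 + mpf(1) / 38
print(catalan - approx)   # about 7.6e-4, below the stated 8e-4 bound
```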
Asymptotic analysis of a series | As $x \to 0$,
$$\log{\left ( 1-\frac{x^2}{\pi^2 n^2}\right )} \sim -\frac{x^2}{\pi^2 n^2} $$
Thus, in that limit, the sum behaves as
$$-\frac{x^2}{\pi^2} \sum_{n=1}^{\infty} \frac1{n^2} = -\frac{x^2}{6} $$ |
In the ring of continuous functions $\mathbb{R} \to \mathbb{R}$, the set of all $f$ with $f(0) = 0$ is a maximal ideal. | Hint: Use the first isomorphism theorem and a well chosen map $R \to \mathbb{R}$. |
Maximum number of squares with same number | [This is not a solution.]
Let the cells be $(i,j)$, where each ordinate runs from 1 to 1000. $(1,1)$ is in the top left corner.
We define each 130 by 130 sub-board by its top left cell. We have $(1000-130+1)^2$ such boards, defined by $(i, j)$, where each ordinate runs from 1 to 871.
We flip the following boards:
49 boards of the form $ ( 130 i + 1, 130 j + 1)$,
7 boards of the form $ ( 871, 130j + 1)$,
7 boards of the form $(130i+1, 871)$,
1 board of the form $(871, 871)$.
Then, most of the cells are 1, except those of the form
1) $(i, j)$ where $871 \leq i \leq 910$ and $j$ is anything.
2) $(i,j)$ where $i$ is anything and $871 \leq j \leq 910$.
This gives us equal cells for $ 1000^2 - 40 \times 1000 - 40 \times 1000 + 40^2 = 921600.$ |
dimension of set of polynomials of degree at most 4 that vanish at 6. | The question statement is unclear in places, but NB (as J.W. Tanner points out in the comments) that $x^2 - 6^2$ is not in $S$, as its second derivative is the constant function $x \mapsto 2$.
Hint Write out a generic element $p \in P_4({\bf R})$, then impose the condition $p''(6) = 0$.
Additional hint One could write such an element as $$p(x) = b_4 x^4 + b_3 x^3 + b_2 x^2 + b_1 x + b_0$$ for constants $b_0, \ldots, b_4$ but the linear map $p \mapsto p''(6)$ evaluates $p''$ at $x = 6$, and we can speed up this computation by instead writing a generic element as a linear combination of powers of $x - 6$: $$p(x) = a_4 (x - 6)^4 + a_3 (x - 6)^3 + a_2 (x - 6)^2 +a_1 (x - 6) + a_0.$$ Now write $S$ in terms of conditions on the $a_i$. |
If $B_d(x,r) \cap S \neq \emptyset $ for all $ r > 0$, then a sequence in $S$ goes to $x$. | Hint This follows immediately from the Archimedean property. Indeed
$$\frac{1}{n} < \epsilon \Leftrightarrow n > \frac{1}{\epsilon}$$
and such an $n$ exists by the Archimedean property. This $n$ is usually denoted by $\lfloor \frac{1}{\epsilon}\rfloor +1$.
Analytic continuation "playing nice" with function composition | Just to make my comment an official answer:
Assuming you really intend to ask this question
Assume $g_N,g_\infty:\Omega\to\mathbb{C}$ are holomorphic functions s.t. $g_N \xrightarrow{N\to\infty} g_\infty$ pointwise on some open subset $\emptyset\neq\Omega_0\subseteq\Omega$. Given any entire function $f:\mathbb{C}\to\mathbb{C}$, is it true that $f\circ g_N$ converges pointwise to a holomorphic function $h$ on $\Omega_0$ and that $f\circ g_\infty$ is the analytic continuation of $h$ to all of $\Omega$?
In that case, the answer is "yes" for obvious reasons: $f$ is continuous, therefore $f(g_N(z)) \to f(g_\infty(z))$ for all $z\in\Omega_0$. We can therefore define $h:=(f\circ g_\infty)_{|\Omega_0}$ and have found a holomorphic function on $\Omega_0$ such that $f\circ g_N$ converges pointwise to $h$. Furthermore: By construction $f\circ g_\infty$ is a holomorphic extension of $h$ to all of $\Omega$ and by the identity theorem it is the unique such function, i.e. the analytic continuation of $h$.
Note that the two instances of pointwise convergence can be replaced by a lot of other convergence modes. For example, one could ask for uniform convergence, locally uniform convergence, and many more. |
Proving That the Following Set is Countable | Hint: You can biject the set of polynomials in question with $\mathbb{N}$. |
Find the points of the line $r$ such that the distance is 3 | First of all, I found the parametric equation of the line, that is:
$$\begin{align}x &= 1 + \frac{1}{2}\lambda \\y &= 1-\frac{1}{2}\lambda\\z &= 0 + \lambda\end{align}$$
*by supposing $z = \lambda$ and solving the simple system of $x$ and $y$.
Then, a point of the line $r$ has this type:
$$(1+\frac{1}{2}\lambda,1-\frac{1}{2}\lambda, \lambda)$$
So what I did is to suppose this point has the distance $3$ from $A$, by using the distance formula:
$$\sqrt{[0 - (1+\frac{1}{2}\lambda)]^2 + [2 - (1-\frac{1}{2}\lambda)]^2 + [1 - \lambda]^2} = 3$$
Solving this we have:
$\lambda = 2$ or $\lambda = -2$
So plugging these values in the generic point we assumed:
$$(1+\frac{1}{2}\color{Blue}{2},1-\frac{1}{2}\color{Blue}{2}, \color{Blue}{2}) = (2,0,2)$$
$$(1+\frac{1}{2}\color{Blue}{(-2)},1-\frac{1}{2}\color{Blue}{(-2)}, \color{Blue}{(-2)}) = (0,2,-2)$$ |
consider the initial value problem $ y'=y^{1/3}, y(0)=0$ | The only reasonable definition of $y^{1/3}$ for $y<0$ in this context is
$$
y^{1/3} = -|y|^{1/3} \qquad \forall y<0.
$$
Whoever claims differently is just confusing people.
The non uniqueness of solutions is related to the non-Lipschitzianity of the function; it has nothing to do with it being "ill-defined in negative axis", whatever that means. You would have the same problem with the ODE $y'=|y|^{1/3}$. What happens on the negative $y$ axis does not matter at all in this exercise.
The example of a parametrized family of solutions starting at $y(0)=0$ is a standard exercise. The key insights are that:
the ODE is autonomous, so you can translate solutions,
$y=0$ is a solution,
you can "join" the $y=0$ solution with any one departing from it. |
Closed graph theorem (metric spaces) | Take a look at a sequence $(x_n,y_n) \in \text{graph}(f)$ converging in $X\times Y$ to some $(x,y)$. Since we look at the graph of $f$, we have $y_n=f(x_n)$. We have to prove that the limit $(x,y)$ lies in $\text{graph}(f)$. Since $x_n\rightarrow x$ and $f$ is continuous, $y_n=f(x_n)\rightarrow f(x)$, thus the limit of our sequence is $(x,y)=(x,f(x))$, which lies in $\text{graph}(f)$. Thus $\text{graph}(f)$ is closed. |
How to prove that $(p, 1+\sqrt{-5})$ is a prime ideal in $\mathbb Z[\sqrt{-5}]$ for prime $p\in\mathbb Z$? | This is not entirely true. For most primes $p$, the ideal $I$ is the unit ideal. Whether $(1)$ counts as a prime ideal depends on your taste, I guess.
Anyways, here's one way to compute in this case: note that $\mathbb Z[\sqrt{-5}] = \mathbb Z[x]/(x^2+5)$. Here $x$ correspond to $\sqrt{-5}$. Dividing out by the ideal $I$ is the same computing
$$
\mathbb Z[x]/(x^2+5,p,1+x).
$$
Here I use that dividing out by $I$ first and then by $J$ is the same as dividing out by $I+J$. Explicitly, any ideal $J \subset R/I$ lifts to an ideal $\overline J$ containing $I$. So we can write $\overline J = I+ J'$ (just let $J'$ be generated by the elements of $\overline J$ not in $I$). Then $R/(I+J') \simeq (R/I)/J$.
But dividing out by $1+x$ is the same as putting $x=-1$ in the quotient ring. Do you see the rest? |
Approximate $e^{1/3}$ within $ 10^{-8}$ using Taylor's inequality | $e^x = 1 + x + \frac 12 x^2 + \cdots \frac 1{n!}x^n + \epsilon$
$\epsilon = {\frac {f^{(n+1)}(c)}{(n+1)!}}x^{n}$
OP has suggested using 3 as the upper bound if $f^{(n+1)}(c)$ It is probably a little bit larger than we need, but why not.
$\epsilon < \frac {1}{(n+1)!}(\frac 13)^{(n+1)} (3)$
What is the smallest $n$ such that
$\epsilon < \frac {1}{(n+1)!}\left(\frac 13\right)^n<10^{-8}\,?$
For $n=7$ the bound is $\frac {1}{8!}\left(\frac 13\right)^7\approx 1.13\times 10^{-8}$, just above the tolerance, so take $n=8$: there $\frac {1}{9!}\left(\frac 13\right)^8\approx 4.2\times 10^{-10}<10^{-8}$.
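A minimal numeric check of both the choice of $n$ and the resulting accuracy (a sketch in Python):

import math

n = 0
while 3 * (1 / 3) ** (n + 1) / math.factorial(n + 1) >= 1e-8:
    n += 1
print(n)  # 8

taylor = sum((1 / 3) ** k / math.factorial(k) for k in range(n + 1))
print(abs(math.exp(1 / 3) - taylor))  # comfortably below 1e-8

The loop reproduces $n=8$, and the actual error is about $1.4\times 10^{-10}$. |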
Fermat's theorem on sums of two squares | Just to complement Pantelis' answer, the reason why they are unique can be easily seen from the proof using the Gaussian integers $\mathbb{Z}[i]$, which is a UFD. |
Compute the explicit formula of a recursive sequence | The following procedure simplifies the problem; maybe you can use this as a hint.
clearing fractions:
$$
(n+b)T_{n+1} = b + (n+a+b)T_n
$$
so:
$$
(n-1+b)T_n = b + (n-1+a+b)T_{n-1}
$$
subtracting
$$
(n+b)(T_{n+1}-T_n) +T_n = (n+a+b)(T_n-T_{n-1}) +T_{n-1}
$$
or, setting $Q_n=T_n-T_{n-1}$
$$
(b+n)Q_{n+1} = (a+b+n-1)Q_n
$$
we can now multiply these equations for $n, n-1, \ldots$ and telescope to express $Q_{n+1}$, and hence $T_n$, in closed form.
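A small numeric check of the telescoped recursion (a sketch; $a=2$, $b=3$, $T_1=1$ are arbitrary illustrative choices, not from the original problem):

a, b = 2.0, 3.0
T = {1: 1.0}  # arbitrary seed value
for n in range(1, 10):
    T[n + 1] = (b + (n + a + b) * T[n]) / (n + b)  # (n+b) T_{n+1} = b + (n+a+b) T_n

for n in range(2, 10):
    Q_n, Q_next = T[n] - T[n - 1], T[n + 1] - T[n]
    print((b + n) * Q_next - (a + b + n - 1) * Q_n)  # ~0 up to rounding

Every printed value is zero to machine precision, confirming $(b+n)Q_{n+1}=(a+b+n-1)Q_n$. |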
Approximating $\frac{P(x)}{\sqrt{x^2 + 1}}$ using least square approximation | You have $n$ data points $(x_i,y_i)$ and you want to fit the data to the model
$$y=\frac{P(x)}{\sqrt{x^2 + 1}}$$ where $P(x)$ is a polynomial of degree $k$. This means that
$$y=\frac{\sum_{i=0}^k a_i x^i }{\sqrt{x^2 + 1}}=\sum_{i=0}^k a_i \frac{x^i}{\sqrt{x^2 + 1}}$$
So, define
$$t_i=\frac{x^i}{\sqrt{x^2 + 1}}\implies y=\sum_{i=0}^k a_i t_i$$ which is a linear model in the parameters, with no intercept.
Warning
What you should not do is to write
$$z={\sqrt{x^2 + 1}}\, y=\sum_{i=0}^k a_i x^i$$ because what is measured is $y$, not any of its possible transforms.
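A minimal NumPy sketch of this fit (synthetic data; the degree $k=2$ and true coefficients $1, 2, -0.5$ are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = (1 + 2 * x - 0.5 * x**2) / np.sqrt(x**2 + 1) + 0.01 * rng.standard_normal(50)

k = 2
# Design matrix: column i holds x^i / sqrt(x^2 + 1); no intercept column.
T = np.column_stack([x**i / np.sqrt(x**2 + 1) for i in range(k + 1)])
coeffs, *_ = np.linalg.lstsq(T, y, rcond=None)
print(coeffs)  # close to [1, 2, -0.5]

The coefficients are recovered by ordinary least squares because the model is linear in the $a_i$. |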
Question in Hatcher's Algebraic Topology Exercise 0.27 | This is a special case (at least in the case $A$ is closed on $X$) of the results discussed in section $7.5$ of Topology and Groupoids, on the homotopy type of adjunction spaces. |
Integration with a variable in the terminals. | $$I=\iint_R f(x,y) \ dxdy=\iint_R f(x,y) \ dydx$$
1) Both integrals should cover the same region $R$.
2) Outer limit(s) should not be functions of inner integral variable(s).
Then, when you interchange the order of integration, you may not be able to represent the region $R$ with a single set of limits, i.e., it may be that $R=R_1+R_2$. For example, in your case:
When $0<\alpha<1$:
$$\int_0^2\int_y^{\alpha y}f(x,y) \ dxdy=\int_{0}^{2}\int_{x/\alpha}^{x}f(x,y) \ dydx+\int_{2}^{2\alpha}\int_{x/\alpha}^{2}f(x,y) \ dydx$$
When $\alpha>1$, you may get a different set of limits.
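One can spot-check the decomposition symbolically (a sketch; $f=xy$ and $\alpha=\frac12$ are arbitrary test choices):

import sympy as sp

x, y = sp.symbols('x y')
f, alpha = x * y, sp.Rational(1, 2)

lhs = sp.integrate(f, (x, y, alpha * y), (y, 0, 2))
rhs = (sp.integrate(f, (y, x / alpha, x), (x, 0, 2))
       + sp.integrate(f, (y, x / alpha, 2), (x, 2, 2 * alpha)))
print(sp.simplify(lhs - rhs))  # 0

Both sides evaluate to $-\frac{3}{2}$ for this test function. |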
How to understand the automorphism group of a very symmetric graph (related to Sylow intersections) | Any permutation of any of the cosets $Hx$ is an automorphism of the graph. So you can immediately see the subgroup $N = S_8^{15}$. I haven't attempted to prove it, but it seems clear that these cosets must form a system of imprimitivity for $X$, in which case the subgroup $N$ is indeed normal, so $X = S_8 \wr P$, where $P$ is a transitive permutation group of degree 15 and, since it has $A_6$ as composition factor and order 720, must be $S_6$. (You can verify that from the transitive groups database.) |
Casting nines and elevens in other bases (radix) and doing check sums for binary | In any radix $\rm\:b\:$ we can cast out $\rm\:b\pm1$'s just like casting out nines ($9$) and elevens ($11$) in decimal arithmetic. This works so simply and effectively because $\rm\ b\: \equiv\: \pm 1\ \ (mod\ \ b\mp 1)\:,\:$ therefore
$\rm\qquad c_0 + c_1\ b + c_2\ b^2 + c_3\ b^3 + c_4 b^4\:+\:\cdots\:\equiv\ c_0 \pm c_1 + c_2 \pm c_3 + c_4\:\pm\:\cdots\ (mod\ b\mp 1)$
follows immediately by applying the Polynomial Congruence Rule.
This "check" won't reveal all arithmetical errors, i.e. there can be many "false positives", since the check only verifies that expressions agree modulo some small number, e.g. agreeing mod $10$ means only that they have the same final digits. However, one can use many coprime moduli and this will suffice to check natural expressions with values below the product of the moduli (see the Chinese Remainder Theorem = CRT). This is the heart of modular computation techniques - which you can read about in most textbooks on computer algebra, e.g. Knuth, TAOCP, vol. 2, Seminumerical Algorithms, or von zur Gathen: Modern Computer Algebra. See also my many prior posts here on casting nines, 91's, etc.
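A small sketch of such a check in an arbitrary radix (base $8$ and the operands below are arbitrary choices):

def digits(n, b):
    out = []
    while n:
        n, r = divmod(n, b)
        out.append(r)  # least significant digit first
    return out or [0]

def cast_out(n, b, sign):
    # sign=+1: digit sum mod b-1;  sign=-1: alternating digit sum mod b+1
    s = sum(d * sign**i for i, d in enumerate(digits(n, b)))
    return s % (b - sign)

b, x, y = 8, 0o1234, 0o567
for sign in (+1, -1):
    lhs = cast_out(x + y, b, sign)
    rhs = cast_out(cast_out(x, b, sign) + cast_out(y, b, sign), b, sign)
    print(lhs == rhs)  # True for casting out 7's and 9's in base 8

Both checks pass, mirroring casting out nines and elevens in base $10$. |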
Probability of hitting a target? | A start: Draw the two diagonals of the square, and also the two lines that join midpoints of opposite sides. These $4$ lines divide the square into $8$ congruent right-angled isosceles triangles. By symmetry, it is enough to determine the probability that a randomly chosen point within a specific one of these triangles is closer to the centre of the square than it is to one of the sides of the square.
Assume (it makes no difference) that the original square has vertices $(2,2)$, $(-2,2)$, $(-2,-2)$, and $(2,-2)$. Consider the triangle $T$ with vertices $(0,0)$, $(2,0)$, and $(2,2)$. This triangle has area $2$. So for our probability, we will determine the area of the region $K$ in $T$ consisting of all points $(x,y)$ such that the distance from $(x,y)$ to the origin is $\le$ the distance from $(x,y)$ to the nearest side of the original square. Then our required probability is the area of $K$ divided by $2$.
So let us determine the locus of all points $(x,y)$ in $T$ that are equidistant from the origin and the nearest side of the original square.
The distance to the origin is $\sqrt{x^2+y^2}$, and the distance to the nearest side of the original square is $2-x$. So our locus is $\sqrt{x^2+y^2}=2-x$. Square and simplify. We get $x=\frac{1}{4}(4-y^2)$, part of a parabola.
Our region $K$ is the region bounded by the $x$-axis, the parabola $x=\frac{1}{4}(4-y^2)$, and the line $y=x$. We find the area of $K$ by integrating with respect to $y$.
Setting up the integral is slightly tricky. We will be integrating from $y=0$ to where $x=\frac{1}{4}(4-y^2)$ meets $y=x$. That gives the equation $y=\frac{1}{4}(4-y^2)$, which has the solution $y=2\sqrt{2}-2$. Thus our required area is
$$\int_0^{2\sqrt{2}-2} \left(\frac{1}{4}(4-y^2)-y\right)\,dy.$$
After calculating the area, don't forget to divide by $2$.
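For reference, the integral evaluates to $\frac{8\sqrt2-10}{3}$, so the probability is $\frac{4\sqrt2-5}{3}\approx 0.219$. A Monte Carlo sketch agrees:

import random

trials, hits = 10**6, 0
for _ in range(trials):
    x, y = random.uniform(-2, 2), random.uniform(-2, 2)
    if (x * x + y * y) ** 0.5 <= 2 - max(abs(x), abs(y)):
        hits += 1
print(hits / trials, (4 * 2**0.5 - 5) / 3)  # both ~ 0.219

Here $2-\max(|x|,|y|)$ is the distance from $(x,y)$ to the nearest side of the square. |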
Wilson's theorem could be simplified? | It is indeed true that $n\ge 2$ is prime if and only if $(n-2)!\equiv 1\pmod n$: since $(n-1)!=(n-1)\cdot(n-2)!\equiv -(n-2)!\pmod n$, this is equivalent to Wilson's condition $(n-1)!\equiv -1\pmod n$.
"Why isn't the theorem simplified to this condition?": why should it? It looks basically the same to me. |
Suppose $\|A+C \, \cos\pi z\| \leq e^{\|z\|}$, then $C = 0$. | Look at what happens when $z=it$ with $t\gg 0$. If $C\neq 0$, then $\frac12 C\exp(\pi t)$ is the dominating term, and its magnitude grows faster than $\exp(t)$. |
value of $k$ in binomial sum | It is helpful to know that multiplying a series $A(x)=a_0+a_1x+a_2x^2+a_3x^3+\cdots$ by $\frac{1}{1-x}$ transforms the series into
\begin{align*}
\frac{1}{1-x}A(x)=a_0+\left(a_0+a_1\right)x+\left(a_0+a_1+a_2\right)x^2+\left(a_0+a_1+a_2+a_3\right)x^3+\cdots
\end{align*}
so that the coefficient of $x^{n}$ is the sum $a_0+a_1+\cdots+a_n$.
In order to find $a_0+a_1+\cdots+a_{10}$ of $\left(1-x\right)^{\frac{1}{2}}$ we look for the coefficient of $x^{10}$ in $\frac{1}{1-x}\left(1-x\right)^{\frac{1}{2}}$. We obtain
\begin{align*}
\frac{1}{1-x}\left(1-x\right)^{\frac{1}{2}}&=\left(1-x\right)^{-\frac{1}{2}}\tag{1}\\
&=1+\left(-\frac{1}{2}\right)(-x)+\left(-\frac{1}{2}\right)\left(-\frac{1}{2}-1\right)\frac{(-x)^2}{2!}\\
&\qquad+\left(-\frac{1}{2}\right)\left(-\frac{1}{2}-1\right)\left(-\frac{1}{2}-2\right)\frac{(-x)^{3}}{3!}\\
&\qquad+\cdots\\
&\qquad\,\,\color{blue}{+\left(-\frac{1}{2}\right)\left(-\frac{1}{2}-1\right)\left(-\frac{1}{2}-2\right)\cdots\left(-\frac{1}{2}-9\right)\frac{(-x)^{10}}{10!}}\tag{2}\\
&\qquad+\cdots
\end{align*}
It is convenient to use the coefficient of operator $[x^n]$ to denote the coefficient of $x^n$ in a series.
We obtain from (1) and (2)
\begin{align*}
\color{blue}{a_0+a_1+a_2+\cdots+a_{10}}&=[x^{10}]\left(1-x\right)^{-\frac{1}{2}}\\
&=\left(-\frac{1}{2}\right)\left(-\frac{3}{2}\right)\left(-\frac{5}{2}\right)\cdots\left(-\frac{19}{2}\right)\frac{(-1)^{10}}{10!}\\
&=\frac{1\cdot3\cdot5\cdots19}{2^{10}10!}\\
&=\frac{1\cdot3\cdot5\cdots19}{2^{10}10!}\cdot\frac{2\cdot4\cdot6\cdots20}{2\cdot4\cdot6\cdots20}\\
&=\frac{1\cdot2\cdot3\cdots20}{2^{10}\,10!\,2^{10}10!}\\
&\,\,\color{blue}{=\frac{1}{4^{10}}\binom{20}{10}}
\end{align*}
and $\color{blue}{k=4}$ follows.
The validity of the transformation can be shown via
\begin{align*}
&\left(a_0+a_1x+a_2x^2+a_3x^3\cdots\right)\frac{1}{1-x}\\
&\qquad=\left(a_0+a_1x+a_2x^2+a_3x^3+\cdots\right)\left(1+x+x^2+\cdots\right)\\
&\qquad=a_0+a_1x+a_2x^2+a_3x^3\cdots\\
&\qquad\qquad\ \,+a_0x+a_1x^2+a_2x^3+\cdots\\
&\qquad\qquad\qquad\quad\ +a_0x^2+a_1x^3+\cdots\\
&\qquad\qquad\qquad\qquad\qquad\;\ +\cdots\\
&\qquad=a_0+\left(a_0+a_1\right)x+\left(a_0+a_1+a_2\right)x^2+\cdots
\end{align*}
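A quick sympy check of the final identity (a sketch):

import sympy as sp

x = sp.symbols('x')
s = sp.series(sp.sqrt(1 - x), x, 0, 11).removeO()  # a_0 + a_1 x + ... + a_10 x^10
total = s.subs(x, 1)                               # substituting x = 1 sums the coefficients
print(sp.simplify(total - sp.binomial(20, 10) / sp.Integer(4)**10))  # 0

Substituting $x=1$ into the truncated series sums exactly the first eleven coefficients. |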
Explaining why $T$ is linear and finding the matrix $A$ such that $T(v) = Av$ for all $v \in \mathbb{R}^3$ | To show that $T$ is linear just take an arbitrary linear combination $a\vec{u}+b\vec{v}$. Apply $T$ to this combination, i.e. $T(a\vec{u}+b\vec{v})$, and show it is equal to $aT(\vec{u})+bT(\vec{v})$. Let $\vec{u}=\begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix}$ and $\vec{v}=\begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix}$. Then, $a\vec{u}+b\vec{v}=\begin{bmatrix} ax_1+bx_2 \\ ay_1+by_2 \\ az_1+bz_2 \end{bmatrix}$. Apply $T$ to this vector and find out what you get. Compare that to the linear combination $aT(\vec{u})+bT(\vec{v})$.
For the matrix, think of the definition of matrix multiplication. You know the matrix representing $T$ should have 2 rows and 3 columns. The first row should be $(0, 1, 0)$. What about the second?
Your answer is correct for the second part because a linear transformation must map $\vec{0}$ to $\vec{0}$. This is a one-way implication: a mapping that takes $\vec{0}$ to $\vec{0}$ need not be linear. |
Algebra. Equivalence of two expressions | It means $\sin\phi=\frac{2h}{\sqrt{4h^2+(b-a)^2}}$,
and $\cos\phi=\frac{b-a}{\sqrt{4h^2+(b-a)^2}}$,
and $\phi\in[0,2\pi)$. |
Find yield-to-maturity of a bond | You haven't taken into account when the payments are received. You are accumulating the $1191.31$ with interest for $3$ years, but you aren't adding interest to the bond payments at all. You would need to say, $$1191.31(1+i)^3=120(1+i)^2+120(1+i)+1120$$
You say in a comment that this seems to imply that you receive interest on the bond payments after receiving them. In a way it does, but not from the bond issuer: it assumes that you reinvest the bond proceeds at the yield rate. Just as $\$120$ a year in the future is worth less than $\$120$ today, $\$120$ a year in the past is worth more than $\$120$ today.
If this assumption weren't made, the yield calculations could never be consistent.
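A numeric sketch of solving the corrected equation for $i$:

from scipy.optimize import brentq

f = lambda i: 1191.31 * (1 + i)**3 - (120 * (1 + i)**2 + 120 * (1 + i) + 1120)
print(brentq(f, 0.0, 1.0))  # ~ 0.0498, a yield of roughly 5%

The bracket $[0,1]$ works because $f(0)<0$ and $f(1)>0$. |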
How to graph a function involving limits? | You want to find the value of the limit for each (fixed) $x$. For example, $f(0) = 1$. I don't want to give away the question entirely, but I suggest you use the 'binomial' expansion for general powers (see here, for example). Since you want a large-$n$ limit, you don't need to be really precise but can collect terms into $\mathcal{O}(1/n)$, or such. Note that you have to be careful to apply the $(1+z)^r$ expansion only for $|z| < 1$. For $x > 1$, and hence $x^n > 1$, you should think about how to overcome this.
If you need a further hint, feel free to ask! :) |
Indiscernibility of indiscernibles in second order logic | I assume that you intend second-order formulas to be interpreted with the first-order variables ranging over the constructible sets (since that's where you have first-order indiscernibility) and the second-order variables ranging over arbitrary (not necessarily constructible) subsets (or maybe subclasses) of L, i.e., the standard interpretation of second-order logic over L. If that's your intention, then the Silver indiscernibles are not second-order indiscernible. For example, second-order logic over L can express "$x$ is a cardinal (in the sense of the full universe, not just in L)" and this property is true of some but not all of the Silver indiscernibles. |
A morphism of sheaves is surjective if it is surjective at the level of stalk for every point. | By $im(\phi) = \mathscr G$ I will assume you mean that the canonical map $im(\phi) \longrightarrow \mathscr G$ is an isomorphism. This argument holds in any abelian category.
As you have proven, $\mathscr G/im(\phi) = 0$. This is precisely the cokernel of $\phi$, and the image of a map in an abelian category is defined to be the kernel of its cokernel. We have the following universal property:
In symbols, $\pi \circ \psi = 0$ implies the existence and uniqueness of the dashed map making the diagram commute.
Now, as we said, $im(\phi) = ker(cok(\phi))=ker(\pi)$. So by "the canonical map $im(\phi) \longrightarrow \mathscr G$", I mean $\iota$. We will proceed to show that it is an isomorphism.
Since $\mathscr G/im(\phi)=0$, we necessarily have $\pi = 0$. We will therefore show that $id: \mathscr G \longrightarrow \mathscr G$ satisfies the universal property of $ker(\pi)$. Indeed, let $\psi: \mathscr H \longrightarrow \mathscr G$ satisfy $\pi \circ \psi = 0$. Thus, we have the commutative diagram:
and certainly, only $\psi$ can fill in for the dashed map and allow the diagram to commute. Hence, this satisfies the same universal property as $ker(\pi)$. Hence, they are isomorphic. This arises by applying the universal property in both directions. In fact, this yields the following commutative diagram.
Hence, we have shown that the canonical map $im(\phi)=ker(\pi)\longrightarrow \mathscr G$ is an isomorphism. This is the closest to equality we can get in an abelian category. |
How to find tension in the string given in the question below? | Some trigonometry is required, but the gist of it is that because the particle is at rest (not accelerating), the horizontal component of the tension in $BP$ is equal to the horizontal component of the tension in $AP$. All you have to do is figure out all the angles involved and decompose all the forces correctly. |
how to write QR algorithm into one equation to represent? | Are you asking about the QR algorithm or the QR decomposition?
The factors $Q$ and $R$ can indeed be written using explicit formulas, since the columns of $Q$ are given by the Gram-Schmidt process, and can thus be expressed by (increasingly complicated) functions of the entries of $A$ that are continuous away from $\det A =0$. As adam points out, once you have $Q$, you also have $R$.
The output of the $QR$ algorithm is the spectrum of $A$. It does have a formula, in a sense, given by the characteristic polynomial; an explicit formula for the eigenvalues of $A$ would also be an explicit formula for the roots of any monic polynomial, via its companion matrix. To my knowledge no such formula exists, and by the Abel-Ruffini theorem it cannot be algebraic.
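For intuition, here is a bare-bones sketch of the unshifted QR algorithm (the symmetric $2\times2$ matrix is an arbitrary example; practical implementations add shifts and deflation):

import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
for _ in range(50):
    Q, R = np.linalg.qr(A)
    A = R @ Q  # similarity transform: R Q = Q^T A Q
print(np.diag(A))                                    # ~ [4.618, 2.382]
print(np.linalg.eigvalsh([[4.0, 1.0], [1.0, 3.0]]))  # reference, ascending order

The diagonal converges to the eigenvalues $(7\pm\sqrt5)/2$ of the original matrix. |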
How to estimate total number of different results for a stochastic event? | Suppose 4 events are observed with frequency vector $c=(1,1,2)$ as in @quasi's example and that there are really 5 unique objects labeled A, B, C, D, and E. The probability of observing one A, two D's, and one E is given by the multinomial probability mass function:
$$\text{multinomial} = \frac{4! \left(\frac{1}{5}\right)^4}{1! 0! 0! 2! 1!}=\frac{12}{625}$$
But because we don't know if we've seen A, D, and E or B, C, and D or A, B, and E, etc. we need to multiply that probability by the number of possible arrangements of the selected objects. To do so we look at the frequency of the frequencies. We have the "true" frequencies of (1, 1, 2, 0, 0). There are 2 objects with frequency 1 and 1 object with frequency 2, and 2 objects with frequency 0. That frequency of frequency vector is $f = (2,1,2)$. The possible number of arrangements is
$$\text{multiplier} = \frac{5!}{2! 1! 2!}=30$$
So the probability of the observed frequencies $c=(1,1,2)$ is multinomial*multiplier = (12/625)*30 = 72/125 = 0.576.
You go through this process for $n = 3, 4, 5, 6, \ldots$ and choose the value of $n$ that maximizes the probability of the observed frequencies.
Some Mathematica code to do this for a proposed set of observed frequencies follows:
prob[c_] := (Total[c]!/((c!) /. List -> Times)) (1/Length[c])^Total[c] *
(Length[c]!/((Tally[c][[All, 2]]!) /. List -> Times))
{3, prob[{1, 1, 2}] // N}
(* {3, 0.444444} *)
{4, prob[{1, 1, 2, 0}] // N}
(* {4, 0.5625} *)
{5, prob[{1, 1, 2, 0, 0}] // N}
(* {5, 0.576} *)
{6, prob[{1, 1, 2, 0, 0, 0}] // N}
(* {6, 0.555556} *)
{7, prob[{1, 1, 2, 0, 0, 0, 0}] // N}
(* {7, 0.524781} *)
{8, prob[{1, 1, 2, 0, 0, 0, 0, 0}] // N}
(* {8, 0.492188} *)
We see that $n=5$ maximizes probability of observing $c=(1,1,2)$.
That is the process for determining the maximum likelihood estimate given a particular set of observed frequencies. What is also important is knowing the distribution of the maximum likelihood estimator given the sample size ($m$) and the number of unique elements in the population ($n$).
Because the maximum likelihood estimate is $\infty$ when all of the observed frequencies are 1, the maximum likelihood estimator has no mean and therefore can't be unbiased (as you mentioned that unbiasedness was important to you). That doesn't mean that there aren't unbiased estimators but just that using maximum likelihood won't achieve that.
Here is some Mathematica code to obtain the distribution of the maximum likelihood estimator of $n$ given the sample size $m$. First, define a few functions to obtain the possible samples, probabilities, and maximum likelihood estimates:
(* List of possible observed frequencies given sample size and number of items in population *)
ss[m_, n_] :=
If[Length[#] < n, Join[#, ConstantArray[0, n - Length[#]]], #] & /@ IntegerPartitions[m, {1, n}]
(* Probability of observing a particular set of n frequencies *)
prob[c_] := (Total[c]!/((c!) /. List -> Times)) (1/Length[c])^Total[c] *
(Length[c]!/((Tally[c][[All, 2]]!) /. List -> Times))
(* Maximum likelihood estimate of n given observed frequency counts *)
mle[c_] := Module[{n0},
n0 = Length[c];
If[Total[c] == Length[c], \[Infinity],
Sort[Join[{{n0, prob[c] // N}},
Table[{i, prob[Join[c, ConstantArray[0, i - n0]]] // N}, {i, n0 + 1, 500}]],
#1[[2]] > #2[[2]] &][[1, 1]]]]
(Note that the mle function caps $n$ at 500; that maximum can be increased if 500 is ever reached.) Now use the functions to obtain the distribution of the maximum likelihood estimator:
m = 10; (* Sample size *)
n = 20; (* Number of items in population *)
(* Determine distribution of the maximum likelihood estimator given m and n *)
data = Transpose[{mle[#] & /@ IntegerPartitions[m, {1, n}],
prob[#] & /@ ss[m, n]}];
g = GatherBy[data, #[[1]] &];
dist = {#[[1, 1]], Total[#[[All, 2]]] // N} & /@ g;
TableForm[dist, TableHeadings -> {None, {"MLE", "Probability"}}]
The estimation problem you describe is related to capture/recapture statistical procedures and so likely this is a well-known topic (just not well-known to me). A Bayesian approach might be fruitful if you can characterize what you think about the possible values of $n$ as a probability distribution. |
How to prove that the sets $B$ and $C$ are equal, given the following facts? | If $b \in B$, then either $b \in A$ or $b \in X\setminus A$. If $b \in A$, then $b \in A\cap B = A\cap C\subset C$. If $b \in X\setminus A$, then $b \in B\cap (X\setminus A) = C \cap (X\setminus A)\subset C$, hence $B \subset C$.
In a similar fashion one can show that $C\subset B$.
Note: I write $A\subset B$ to mean $A$ is a subset of $B$, not necessarily proper, in other words $A\subset A$ |
Negative Binomial with $4$ white faces before $3$ black faces | It is correct.
What your solution is doing is adding up the probabilities that the 4th white roll will be exactly the 4th, 5th, or 6th roll, respectively. For the $k$-th white roll to be the $n$-th overall, in the previous $n-1$ rolls you must have exactly $k-1$ whites; that's why you are doing a binomial with $n-1$ and $k-1$, but then you have to multiply by an extra $2/3$ for the actual $n$-th roll being white, and you get your formula. The possibilities for $n$ are 4, 5, and 6: if the 4th white has not appeared by the 6th roll, you have already rolled 3 blacks and failed.
You can also proceed like this: to have 4 whites before 3 blacks is the same as to have at least 4 whites among the first 6 rolls: if it happens, you have 4 whites before (at most) 2 blacks, and if it doesn't, then you have at least 3 blacks and no more than 3 whites in these first 6 rolls, so you have failed.
So it's just standard binomial with $n=6$ and $4 \leq k \leq 6$:
$$\sum_{k=4}^{6} \binom{6}{k} \left(\frac{2}{3}\right)^k \left(\frac{1}{3}\right)^{6-k} = \frac{31\cdot 2^4}{3^6} $$
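Both routes can be verified exactly with rational arithmetic (a sketch):

from fractions import Fraction
from math import comb

p = Fraction(2, 3)
# 4th white appears exactly on roll n, for n = 4, 5, 6:
neg_binom = sum(comb(n - 1, 3) * p**4 * (1 - p)**(n - 4) for n in (4, 5, 6))
# at least 4 whites among the first 6 rolls:
binom = sum(comb(6, k) * p**k * (1 - p)**(6 - k) for k in range(4, 7))
print(neg_binom, binom)  # both print 496/729

Both sums give $\frac{496}{729}=\frac{31\cdot 2^4}{3^6}$. |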
A question about the relationship between submodule and ideal | I think it's just the definition of submodule: if $_RN$ is a submodule of $_RR$, you are asking that $N$ be closed under multiplication on the left by elements of $R$, which is exactly the condition for $N$ to be a left ideal of $R$. So you can conclude that the submodules of $_RR$ are precisely the left ideals of the ring. |
prove variance of a RV of 1's and 0's is greater than var of a RV of values on (0,1] | As a counterexample, consider Bernoulli RV, $X \sim \mathrm{Be}(p)$ with $p=0.01$ and a uniform RV, $Y \sim \mathcal{U}(0,1)$:
$$
\mathbb{Var}(X) = p(1-p) = 0.0099 < \mathbb{Var}(Y) = \frac{1}{12} \approx 0.0833
$$ |
If $f′(x) \ne 0$ for every $x ∈ (a, b)$, then $f$ is injective and onto an open interval $I ⊆ R$. | Let $I=f\bigl((a,b)\bigr)\subseteq \Bbb R$ be the image of $f$, let $u=\inf I$, $v=\sup I$ (where possibly $u=-\infty$ and/or $v=+\infty$). We need to show that $I=(u,v)$, that is three claims:
If $u<y<v$, then $y\in I$
$u\notin I$
$v\notin I$
For the first, if $u<y<v$, then by the definition of $\inf$ and $\sup$, there exist $y_1,y_2\in I$ with $u<y_1<y<y_2<v$. If $y_{i}=f(x_{i})$, we can use the IVT to find $x$ between $x_1$ and $x_2$ with $f(x)=y$.
For the second, note that for $x\in (a,b)$ we have $f(\xi)<f(x)$ for $\xi\approx x$ on the "correct" side of $x$ (depending on the sign of $f'(x)$). Therefore, $I$ contains values $<f(x)$, and so $f(x)\ne\inf I$ for all $x\in(a,b)$.
The third follows by the same argument. |
Properties of solutions to general ODE's | Let $a(x)=b(x) = \ldots = 0$ and suppose that $g(x)$ is an infinitely smooth, bounded function whose support is contained in $[-2,-1]$ (Google "Test Functions" to learn more). Consider the once-differentiable function $$y = \begin{cases} 0 & x<0\\ x^2 & x \geq 0 \end{cases}.$$ Clearly, $g(x)y'=0$, since the supports of $g$ and $y'$ are disjoint. So then, we have an equation which has infinitely smooth coefficients, but admits a solution that is only once differentiable. So, we can conclude that solutions to the above ODE need not be infinitely smooth or bounded. However, note that $y=c$ is an infinitely smooth, bounded solution. So, the problem admits infinitely smooth and bounded solutions, but not all of them need to be infinitely smooth and bounded.
For the second question, I can be sure that the solution is infinitely differentiable, since in order to satisfy the equation with $i \to \infty$, all derivatives must exist. Hence, every solution to the infinite ODE must be smooth. This holds even if the coefficients are just continuous. |
Normal Operators (Obsolete!) | If $E_N$ is the spectral resolution for $N$, then the following is bounded and normal by the functional calculus:
$$
N(I+N^{\star}N)^{-1} = \int \frac{\lambda}{1+|\lambda|^{2}}dE_{N}(\lambda).
$$
In line with your comment that you are trying to derive the spectral measure, let's try a different approach.
For any $x$, one has $y=(I+N^{\star}N)^{-1}x \in \mathcal{D}(N^{\star}N)=\mathcal{D}(NN^{\star})$ and, hence, $Ny$ and $N^{\star}y$ are defined. So
$$
Ax = N(I+N^{\star}N)^{-1}x,\;\;\; Bx = N^{\star}(I+N^{\star}N)^{-1}x
$$
are defined everywhere. $A$ and $B$ are everywhere defined and easily verified to be closed, hence bounded by the closed graph theorem. If $x \in \mathcal{D}((N^{\star}N)^{2})$, then
$$
(I+N^{\star}N)N^{\star}x = N^{\star}(I+NN^{\star})x=N^{\star}(I+N^{\star}N)x.
$$
If $y \in \mathcal{D}(N^{\star}N)$, then $x=(I+N^{\star}N)^{-1}y \in \mathcal{D}((N^{\star}N)^{2})$, and the above gives
$$
(I+N^{\star}N)N^{\star}(I+N^{\star}N)^{-1}y = N^{\star}y \\
N^{\star}(I+N^{\star}N)^{-1}y=(I+N^{\star}N)^{-1}N^{\star}y.
$$
Therefore, if $x \in \mathcal{D}(N^{\star}N)$, $y \in \mathcal{H}$,
\begin{align}
(Bx,y) & = (N^{\star}(I+N^{\star}N)^{-1}x,y) \\
& = ((I+N^{\star}N)^{-1}N^{\star}x,y) \\
& = (N^{\star}x,(I+N^{\star}N)^{-1}y) \\
& = (x,N(I+N^{\star}N)^{-1}y) \\
& = (x,Ay).
\end{align}
That's enough to prove that $B^{\star}=A$ and $A^{\star}=B$ because $\mathcal{D}(N^{\star}N)$ is dense in $\mathcal{H}$. To show that $A$ is normal, keep in mind that $N$ is normal and, therefore,
\begin{align}
\|Ax\| & =\|N(I+N^{\star}N)^{-1}x\| \\
& =\|N^{\star}(I+N^{\star}N)^{-1}x\| \\
& =\|A^{\star}x\|.
\end{align}
So $A$ is normal and $A^{\star}=B$ is normal. |
I need a hint on a proof using mathematical induction | Why bother using mathematical induction? For $k\ge2$, we have $k^k+1\ge2^k+1>2^k$.
Maybe proving $k^k\ge2^k$ ($k\ge2$) would be much easier. Since $$(k+1)^{k+1}>(k+1)k^k\ge(k+1)2^k>2^{k+1},$$ mathematical induction can easily be applied. |
Contour integration of Fourier transform around a semicircle orientation of contour | What you say is true: when integrating around a semicircular contour we must be careful with the sign of the exponent.
Why is this?
I denote by $J$ the integral you want to compute to get your Fourier transform.
When you pass to the complex domain, you define a path $\Gamma_R=I_R+C_R$, where $R$ is the radius, $I_R=[-R,R]$, and $C_R$ is the semicircular arc joining $z=-R$ to $z=+R$.
You want to have something nice and simple for your integration when passing to the limit $R \rightarrow \infty$, that is:
$$
\oint_\Gamma f(z)\, dz = 2\pi i\sum \operatorname{Res}(f) = \int_I f(z)\, dz = \int_\mathbb{R} f(x)\, dx = J
$$
But wait: you should also have
$$
\oint_\Gamma f(z) dz = \oint_{I+C}f(z) dz = \int_{I}f(x) dx + \int_{C}f(z) dz
$$
So, you have your nice result only if $\int_{C}f(z) dz = 0 $
For this to be true, a sufficient condition is that $|\int_{C}f(z)\, dz| \rightarrow 0$ as $R \rightarrow \infty$; then the second integral (the one over the semicircle) disappears in the limit and you are left with $J =\oint_\Gamma f(z)\, dz = 2\pi i\sum \operatorname{Res}(f)$
And for this result to be true (you can check it yourself) you have to bound your exponential term: its argument must have negative real part on the arc, so that the exponential decays there. This is why you must choose the correct semicircle, i.e. the half-plane (upper or lower) determined by the sign in the exponent; this is the content of Jordan's lemma.
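As a concrete illustration (this example is mine, not from the original question): for $f(x)=\frac{e^{i\omega x}}{1+x^2}$ with $\omega>0$, the factor $e^{i\omega z}$ decays in the upper half-plane, so one closes the contour there; the residue at $z=i$ gives $\pi e^{-\omega}$. A quadrature sketch confirms it:

import numpy as np
from scipy.integrate import quad

w = 2.0
re, _ = quad(lambda t: np.cos(w * t) / (1 + t * t), -np.inf, np.inf)  # imaginary part vanishes by symmetry
print(re, np.pi * np.exp(-w))  # the two values agree

Closing in the lower half-plane instead would make the arc integral blow up rather than vanish. |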