Calculating possible combinations with N number of characters for the length of M | If you have one choice for the first position and three choices for each of the remaining $M - 1$ positions, then the number of sequences you can form is $1 \cdot 3^{M - 1} = 3^{M - 1}$. You can verify that this formula works for the examples you have considered. |
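A quick brute-force check of this count (a minimal sketch; the alphabet `abc` and the fixed first symbol `a` are arbitrary illustrative choices):

```python
from itertools import product

# Brute-force check of the 1 * 3^(M-1) count: one fixed choice for the
# first position, three choices for each of the remaining M-1 positions.
for M in range(1, 7):
    seqs = [('a',) + rest for rest in product('abc', repeat=M - 1)]
    assert len(seqs) == 3 ** (M - 1)
print("verified for M = 1..6")
```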
Formula for series $\frac{\sqrt{a}}{b}+\frac{\sqrt{a+\sqrt{a}}}{b}+\cdots+\frac{\sqrt{a+\sqrt{a+\sqrt{\cdots+\sqrt{a}}}}}{b}$ | If all you are looking for is a compact representation, let
$$
s_{k}=\begin{cases}
0 & \text{if }k=0\\
\sqrt{a+s_{k-1}} & \text{if }k>0
\end{cases}.
$$
Then
\begin{align*}
S_n & =\left(\frac{\sqrt{a}}{b}\right)+\left(\frac{\sqrt{a+\sqrt{a}}}{b}\right)+\ldots+\left(\frac{\sqrt{a+\sqrt{a+\ldots+\sqrt{a}}}}{b}\right)\\
& =\frac{1}{b}\left[\sqrt{a}+\sqrt{a+\sqrt{a}}+\ldots+\sqrt{a+\sqrt{a+\ldots+\sqrt{a}}}\right]\\
& =\frac{1}{b}\sum_{k=1}^{n}s_{k}.
\end{align*}
Assume $a\in\mathbb{R}$ (this restriction is not essential). We can show that the recurrence is
stable everywhere (weakly stable at $a=-\frac{1}{4}$). In particular,
any fixed point $s$ satisfies
$$
s^{2}-s-a=0,
$$
which has roots
$$
\frac{1\pm\sqrt{1+4a}}{2}.
$$
Specifically, the locally stable fixed point is the root taken with the $+$ sign.
So, for large enough $k$,
$$
s_k\approx\frac{1+\sqrt{1+4a}}{2}.
$$
This is as good an answer as you can hope for, save for error bounds on the above expression. |
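A small numerical sketch of the fixed-point claim (the values of $a$ and $n$ below are arbitrary):

```python
from math import sqrt

# Iterate s_k = sqrt(a + s_{k-1}) and compare with the predicted
# stable fixed point (1 + sqrt(1 + 4a)) / 2.
a, n = 7.0, 30
s, total = 0.0, 0.0
for k in range(1, n + 1):
    s = sqrt(a + s)
    total += s
print(s, (1 + sqrt(1 + 4 * a)) / 2)   # the two agree to machine precision
# S_n is then total / b for whatever b is given
```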
Probability of finding a job | The probability the candidate does not get an offer from the first interview is $\frac{2}{3}$. The probability she doesn't get an offer from the second is $\frac{3}{4}$, and the probability she doesn't get an offer from the third is $\frac{1}{2}$.
So the probability she does not get an offer at all is $\frac{2}{3}\cdot\frac{3}{4}\cdot\frac{1}{2}=\frac{1}{4}$. Here we are assuming (unrealistically) independence.
Thus the probability she gets at least one offer is $\frac{3}{4}$.
The thing that is wrong about your method is that you are double-counting the cases where she gets $2$ offers, and triple-counting the cases where she gets $3$ offers. That is why your sum is greater than the correct value. Luckily, it turned out to be greater than $1$, which makes it clear that something is not right.
Later, you will learn how to get rid of such multiple counting by using the Method of Inclusion/Exclusion. However, I am guessing you are not yet at that point in the course, so I took the simpler approach above.
Remark: We made the totally unreasonable assumption that job offers are given at random, that if there are $3$ people interviewed, exactly one, chosen at random, will get an offer. That's not quite the way the world works! |
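A Monte Carlo sanity check of the computation above, under the same (unrealistic) independence assumption; the simulation itself is just an illustration:

```python
import random

# Each interview independently yields an offer with probability 1/3, 1/4, 1/2.
random.seed(0)
trials = 10**6
hits = sum(
    1 for _ in range(trials)
    if random.random() < 1/3 or random.random() < 1/4 or random.random() < 1/2
)
print(hits / trials)   # close to 0.75 = 1 - (2/3)(3/4)(1/2)
```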
Degrees of the irreducible factors of $X^{13}-1$ | We know that $X^{5^n} - X$ is the product of all irreducible polynomials over $\mathbb{F}_5$ whose degree divides $n$. You can compute
$$\mathrm{gcd}(X^{13} - 1, X^5 - X) = X - 1,$$
$$\mathrm{gcd}(X^{13} - 1, X^{25} - X) = X - 1,$$
$$\mathrm{gcd}(X^{13} - 1, X^{625} - X) = X^{13} - 1$$ with the Euclidean algorithm and immediately read off that $X^{13} - 1$ has one factor of degree $1$, no factors of degree $2$ and $\frac{12}{4} = 3$ factors of degree $4$.
The fact that $X^{13} - 1$ divides $X^{625} - X$ mod $5$ is because $13 \mid 624$, as suggested in the comments. |
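One can confirm the factorization pattern with a CAS; a sketch using SymPy:

```python
from sympy import symbols, factor

# X^13 - 1 over F_5: one linear factor and three quartic factors.
X = symbols('X')
print(factor(X**13 - 1, modulus=5))
```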
How to derive the equation $I-\hat x \hat x^T= (\hat x^T E)^T(\hat x^T E)$ | $E$ is interesting, since for any vector $x=\pmatrix{a\\b}\;$ the product $E^Tx=\pmatrix{-b\\+a}\;$ and so $\,x\perp E^Tx$
${\mathbb R}^{2}$ is interesting, since any orthonormal vector pair $(x,y)$ forms a basis and so $\;I = xx^T + yy^T$
Combining these two interesting facts one can write
$$\eqalign{
I - xx^T &= yy^T \\&= (E^Tx)(E^Tx)^T \\&= (x^TE)^T(x^TE)
}$$ |
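A numerical sketch of the two facts combined, assuming $E$ is the rotation matrix $\pmatrix{0&1\\-1&0}$ (the choice consistent with $E^Tx=\pmatrix{-b\\a}$ above):

```python
import numpy as np

# Check I - xx^T = (x^T E)^T (x^T E) for a unit vector x.
E = np.array([[0.0, 1.0], [-1.0, 0.0]])
theta = 0.7
x = np.array([np.cos(theta), np.sin(theta)])   # any unit vector
lhs = np.eye(2) - np.outer(x, x)
v = x @ E                                      # the row vector x^T E
print(np.allclose(lhs, np.outer(v, v)))        # True
```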
Points contained in the diagonal of the product of schemes | I'm happy you didn't manage to prove your statement because it is false!
Here is a counter-example:
Take $X=Y=Spec (\mathbb C)=\{\omega\}$ and $S=Spec(\mathbb R)$, so that $Y\times_SY=Spec(\mathbb C\otimes_\mathbb R \mathbb C)$.
And take for $f,g$ the morphisms $X\to Y$ corresponding to the identity and the conjugation $\mathbb C\to \mathbb C$.
Then the morphism $h:X\to Y\times_SY$ corresponds to the $\mathbb R$-algebra morphism $\mathbb C\otimes_\mathbb R \mathbb C\to \mathbb C:z\otimes w\mapsto z\bar w $ .
Hence the image $h(\omega)$ of $h:X \to Y\times_{S} Y$ is the point corresponding to the prime ideal $\mathfrak p\subset \mathbb C\otimes_\mathbb R \mathbb C$ equal to the real vector space generated by the two vectors $1\otimes 1-i\otimes i,i\otimes 1+1\otimes i\in \mathbb C\otimes_\mathbb R \mathbb C$ i.e. $$ \mathfrak p=\mathbb C(1\otimes 1-i\otimes i)\oplus \mathbb C(i\otimes 1+1\otimes i) \in Spec(\mathbb C\otimes_\mathbb R \mathbb C) $$
Whereas the morphism $\Delta :Y\to Y\times_SY$ corresponds to the $\mathbb R$-algebra morphism $\mathbb C\otimes_\mathbb R \mathbb C\to \mathbb C:z\otimes w\mapsto zw $ .
Hence the image $\Delta (\omega)$ of the diagonal morphism corresponds to the prime ideal $\mathfrak q\subset \mathbb C\otimes_\mathbb R \mathbb C$ equal to the real vector space generated by the two vectors $1\otimes 1+i\otimes i , i\otimes 1-1\otimes i \in \mathbb C\otimes_\mathbb R \mathbb C$ i.e. $$ \mathfrak q=\mathbb C(1\otimes 1+i\otimes i)\oplus \mathbb C(i\otimes 1-1\otimes i) \in Spec(\mathbb C\otimes_\mathbb R \mathbb C) $$
Conclusion:
The unique point $\omega$ of $X$ has an image not contained in the diagonal of the product:
$$ \omega \in X \;\text {satisfies}\; f(\omega)=g(\omega)\in Y \; \text {but} \;h(\omega)= \mathfrak p \notin Im(\Delta)=\{\Delta (\omega)\}=\{\mathfrak q\} $$ |
what's the expanded expression of $\lVert\vec a+\vec b\rVert^3$ | No. It's not what you wrote. The norm is $$ ||\vec a + \vec b|| =\sqrt{||\vec a||^2+||\vec b||^2 +2\vec a\cdot\vec b}$$ so you have $$ ||\vec a + \vec b||^3 =\left(||\vec a||^2+||\vec b||^2 +2\vec a\cdot\vec b\right)^{3/2}.$$ There isn't really a way to simplify it further. |
Can there be a perfect square whose digits consist of exactly 4 ones, 4 twos and 4 zeros in any order? | No, because the digit sum is $4(1+2+0)=12$, so the number is divisible by $3$ but not by $9$; a perfect square divisible by $3$ must be divisible by $9$. |
conjugates of upper triangular matrices | The answer is yes.
Hint: Take $g=I+e_{ij}$ where $i<j$. |
Finding a point within a triangle using barycentric coordinates | I found a link here that describes a method for computing the position of a point that lies at the barycenter of a triangle ... it does not, however, provide a method for computing the position of a point at a specified barycentric coordinate... |
$C[0,1]$ is separable: Theorem $11.2$ Carothers' Real Analysis | (1) You get $\| f - g\|_\infty \le \epsilon$ by simply taking more and more points (i.e., increasing $n$). Since $f$ is uniformly continuous, if $g$ matches $f$ at every point $k/n$ and $n$ is large enough, you will eventually get $\lvert f(x) - g(x) \rvert \le \epsilon$ for all $x \in [0,1]$ because $$\lvert f(x) - g(x) \rvert \le \underbrace{\lvert f(x) - f(k/n)\rvert}_{\text{small by continuity of } f} + \underbrace{\lvert f(k/n) - g(k/n) \rvert}_{=0, \text{ by def. of } g} + \underbrace{\lvert g(k/n) - g(x) \rvert}_{\text{small by continuity of } g}$$ where you choose the $k/n$ closest to $x$ in the above line.
(2) You can't have $h(k/n)= f(k/n)$ because you need the values of $h(k/n)$ to be rational (to ensure the family of such $h$ remains countable) but you may have $f(k/n)$ irrational. So you first make the $g$ with $g(k/n) = f(k/n)$; $g$ is a nice polygonal function, but may have irrational values at nodes $k/n$. However, because of density of the rationals, there is a rational point as close as desired to $g(k/n)$. Define $h$ the same way as $g$ but with $h(k/n)$ a rational number such that $\lvert h(k/n) - g(k/n) \rvert \le \epsilon$. Then $h$ pretty well approximates $g$ and $g$ pretty well approximates $f$ so $h$ pretty well approximates $f$, and has rational values at nodes $k/n$.
(3) Density of the rationals: you're just taking $g$ and possibly moving its values at the nodes $k/n$ by a tiny bit.
(4) The countable dense set is the set of polygonal functions $h$ such that $h(k/n)$ is rational for all $k = 0,1,\ldots,n$. This set is countable because it can be seen as a subset of $$\bigcup_{n\in\mathbb N} \mathbb Q^{n+1}$$ via the following reasoning: choose the number $n$ of nodes, then choose the $n+1$ rational values at nodes $k/n$ for $k=0,1,\ldots,n$, and you have uniquely determined your polygonal function $h$. But $\mathbb Q^{n+1}$ is countable, and a countable union of countable sets is countable, so this shows that the set of such functions $h$ is countable. |
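A sketch of the whole construction in code (the choice $f=\exp$, the node count $n$, and the rational precision $10^{-6}$ are all illustrative):

```python
from fractions import Fraction
import math

# Build a polygonal h with rational values at the nodes k/n, close to f.
f, n = math.exp, 64
h_vals = [Fraction(round(f(k / n) * 10**6), 10**6) for k in range(n + 1)]

def h(x):
    # piecewise-linear interpolation between the rational node values
    k = min(int(x * n), n - 1)
    t = x * n - k
    return (1 - t) * float(h_vals[k]) + t * float(h_vals[k + 1])

err = max(abs(f(x) - h(x)) for x in (j / 1000 for j in range(1001)))
print(err)   # shrinks as n grows and the rational precision improves
```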
Proof of the associative property of addition | This is wrong. When of comes to logic, showing a specific case is not enough to prove something. Showing many cases is not enough either. To prove of you have to either show it's a direct consequence of definition or a consequence of something that was already proven. There are various proof techniques you can use.
You could probably start working your way from the definition of addition as the recursive application of the successor function $S(n)$. |
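For a taste of what such a from-the-definition proof looks like when fully formalized, here is a hypothetical Lean 4 sketch, by induction on the third summand (using that `a + 0 = a` and `a + S(c) = S(a + c)` hold by definition):

```lean
-- Associativity of addition on Nat, by induction on c.
theorem add_assoc' (a b c : Nat) : (a + b) + c = a + (b + c) := by
  induction c with
  | zero => rfl                               -- (a + b) + 0 = a + (b + 0) definitionally
  | succ c ih => exact congrArg Nat.succ ih   -- peel one successor, apply the IH
```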
Does $AB$ possess inverse? | Take $A=\begin{bmatrix} 0&1 \end{bmatrix}$, $B$= $\begin{bmatrix}
&1\\
&0
\end{bmatrix}$. Then $AB=0$ |
Kernel of adjoint operator | Indeed it was quite simple. It is enough to consider
\begin{equation}
\phi(x)=\begin{cases}
x^{\frac {1+\sqrt 3 }{2}}-x^{\frac{1-\sqrt 3} 2}&\quad \text{if } 0<x\le 1,\\
0&\quad \text{otherwise}.
\end{cases}
\end{equation}
Clearly $\phi(x)$ is in $L^2$ and such that $L^* \phi=0$. Hence $\ker(L^*)=\text{span}\{\phi\}$. |
Number of simple paths between two vertices on an $n \times m$ square-grid graph? | This is a variation of the problem of enumerating self avoiding walks. In general there are no closed form expressions for the number of self avoiding walks between two arbitrary points in a grid of arbitrary dimension; although I have not seen the problem considered with the particular constraints in question, I would be surprised if there was a closed form solution in this case. However, reliable approximation methods do exist for such enumeration problems, and such an approximation may be suitable for your purposes.
Edit: Using a rather naive approach, I've worked out the following lower bound for the number of self-avoiding walks on an $n\times m$ rectangular grid from a given boundary point to a given interior point. With more mathematical sophistication, it is likely that significant improvements could be made on this bound.
Let $A$ be an $n\times m$ rectangular grid, with $n>2$ and $m>2$. That is, let $A$ be the collection of integer points in the $xy$ Cartesian plane satisfying the conditions $0 \leq x \leq n$ and $0 \leq y \leq m$. Let $\left(a,b\right)$ be a point in the interior of $A$. The Wikipedia article discusses the case of walks from one diagonal to another with moves only in the positive direction. I shall call such walks positive walks. There is a simple formula for the number of positive walks from one point to another in a rectangular grid.
We claim that the number of self-avoiding walks from a given point on the boundary to $\left(a,b\right)$ is bounded below, independently of the choice of the point on the boundary by
$$ \binom{a+b}{b} + \binom{a+m-b}{m-b} + \binom{n-a+b}{b} + \binom{n-a+m-b}{m-b}$$
That is, by the number of positive walks from any of the four corners of the grid to the point $\left(a,b\right)$.
Let $\alpha$ be a point on the boundary of $A$. We first bound the number of self-avoiding walks from $\alpha$ to $\left(a,b\right)$ below by the number of positive walks from any point on the boundary of an $\left(n-2\right)\times\left(m-2\right)$ rectangular grid $B$ to the point $\left(a-1,b-1\right)$ within $B$. Our requirement that $n>2$ and $m>2$ is to allow this step to work with a minimum of headache. There is probably an exact solution in the case where $n \leq 2$ or $m \leq 2$.
We may consider those walks which first move $k$ steps in the counter-clockwise direction around the boundary, then move one step into the interior, and whose successive moves follow a positive walk directly to $\left(a,b\right)$. We may do this for $k$ from 0 to $2n+2m-1$, excluding those values of $k$ which place the walker upon a corner, where there is no move into the interior. It is clear that we have produced a disjoint collection of self-avoiding walks equal in number to the number of positive walks from the boundary of an $\left(n-2\right)\times\left(m-2\right)$ grid $B$ to the point $\left(a-1,b-1\right)$ in $B$.
Now we use the result that the number of positive walks from one diagonal to another on an $j\times k$ grid equals
$$\binom{j+k}{j} = \binom{j+k}{j,k} = \binom{j+k}{k} $$
Using this result, we may express the number of positive walks from any point on the boundary of $B$ to the point $\left(a-1,b-1\right)$ as the sum
$$\sum_{i=0}^{b-1}\binom{a-1+i}{a-1} + \sum_{i=0}^{m-b-1}\binom{a-1+i}{a-1} + \sum_{i=0}^{b-1}\binom{n-a-1+i}{n-a-1} + $$
$$\sum_{i=0}^{m-b-1}\binom{n-a-1+i}{n-a-1} + \sum_{i=0}^{a-1}\binom{b-1+i}{b-1} + \sum_{i=0}^{n-a-1}\binom{b-1+i}{b-1} + $$
$$ \sum_{i=0}^{a-1}\binom{m-b-1+i}{m-b-1} + \sum_{i=0}^{n-a-1}\binom{m-b-1+i}{m-b-1} - 4$$
Where we subtract 4 because the above expression involving binomial coefficients counts each straight line path from the boundary of $B$ to $\left(a-1,b-1\right)$ twice. Since we have specified that $n>2$ and $m>2$, we may safely ignore this 4 by adding in some walks that first move $k$ steps in the clock-wise direction, move away from the boundary, and then follow a positive walk to the finish point.
Using the well known binomial coefficient identity
$$ \sum_{i=0}^{k}\binom{r+i}{r} = \binom{r+k+1}{r+1}$$
We may express the above sum, without the addition of the constant $-4$, as
$$\binom{a+b-1}{a}+\binom{a-1+m-b}{a}+\binom{n-a-1+b}{n-a}+\binom{n-a-1+m-b}{n-a} + $$
$$\binom{b-1+a}{b}+\binom{b-1+n-a}{b}+\binom{m-b-1+a}{m-b}+\binom{m-b-1+n-a}{m-b}$$
After some simple manipulations, we may pair up the terms and combine each pair using the identity $\binom{n}{r}=\binom{n-1}{r}+\binom{n-1}{r-1}$ to get
$$ \binom{a+b}{b} + \binom{a+m-b}{m-b} + \binom{n-a+b}{b} + \binom{n-a+m-b}{m-b}$$
The result we were attempting to prove.
I'm not sure how useful this result is, but I thought it was interesting how easily things lined up to be simplified by standard binomial coefficient identities. I also thought it was interesting how the final formula has a straightforward combinatorial interpretation. |
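For what it's worth, the bound is easy to sanity-check by brute force on a small grid; a sketch (the grid size and target point are arbitrary choices satisfying $n>2$, $m>2$):

```python
from math import comb

# Count all self-avoiding walks on {0..n} x {0..m} from start to end by DFS,
# and compare with the four-corner lower bound derived above.
def count_saws(n, m, start, end):
    def dfs(p, visited):
        if p == end:
            return 1
        x, y = p
        total = 0
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= q[0] <= n and 0 <= q[1] <= m and q not in visited:
                visited.add(q)
                total += dfs(q, visited)
                visited.remove(q)
        return total
    return dfs(start, {start})

n, m, (a, b) = 4, 4, (2, 2)
walks = count_saws(n, m, (0, 0), (a, b))
bound = comb(a+b, b) + comb(a+m-b, m-b) + comb(n-a+b, b) + comb(n-a+m-b, m-b)
print(walks, bound, walks >= bound)   # the count dominates the bound
```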
turning cartesian triple integral to spherical | For all $\theta$, $\phi$ the value of $\rho$ varies from the origin to the bounding surface
$$x^2 + y^2 + z = 2,$$
$$
\rho^2\cos^2\theta\sin^2\phi + \rho^2\sin^2\theta\sin^2\phi + \rho\cos\phi = 2,
$$
$$\rho^2\sin^2\phi + \rho\cos\phi = 2,$$
$$\rho = -\frac{\cos\phi}{2\sin^2\phi}\pm\frac{\sqrt{8-7\cos^2\phi}}{2\sin^2\phi}.$$
As $\rho>0$ is required,
$$\rho = \frac{\sqrt{8-7\cos^2\phi}}{2\sin^2\phi}-\frac{\cos\phi}{2\sin^2\phi}.$$
And the integral in spherical coordinates is ugly and nasty. Surely cylindrical coordinates are better. |
Dollar Sign in Context Free Language | In theory there are several cases, but they all work exactly the same way and can be handled at once.
Start with the word $s=a^p\$a^{3p}\$a^{5p}$, where $p$ is the pumping length. The pumping lemma gives you a decomposition $s=uvwxy$ such that $|vwx|\le p$, $|vx|\ge 1$, and $uv^kwx^ky\in L$ for each $k\ge 0$. Because $|vwx|\le p$, the string $vwx$ can intersect at most two of the blocks of $a$s; use this fact to show easily that pumping $s$ takes you out of $L$. |
Expanding Fourier Series of $f(x)=\pi-x$ where $0<x<\pi$ (even and odd) | $a_0=\frac{4}{T}\int_{0}^\frac{T}{2}f(x)\,dx=\frac{2}{\pi}\int_{0}^\pi (\pi-x)\,dx=\frac{2}{\pi}\left[\pi x-\frac{x^2}{2}\right]_0^\pi=\pi$
$$a_n=\frac{4}{T}\int_{0}^\frac{T}{2}f(x)\cos\frac{2\pi nx}{T}\,dx =\frac{2}{\pi}\int_{0}^\pi (\pi-x)\cos nx\,dx=\frac{2}{\pi}\left[\frac{\pi-x}{n}\sin nx-\frac{1}{n^2}\cos nx\right]_0^\pi=\frac{2}{n^2\pi}\left(1-(-1)^n\right)$$
so
$a_n=\dfrac{4}{\pi n^2}$ for odd $n$ and $0$ for even $n$.
For the $\sin$ (odd) extension,
$$b_n=\frac{4}{T}\int_{0}^\frac{T}{2}f(x)\sin\frac{2\pi nx}{T}\,dx =\frac{2}{\pi}\int_{0}^\pi(\pi-x)\sin nx\,dx=\frac{2}{n}$$ |
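A numerical check of these coefficients (midpoint-rule integration; the resolution is arbitrary):

```python
import numpy as np

# a_n = (2/pi) * integral of (pi - x) cos(nx) over (0, pi); expect
# 4/(pi n^2) for odd n and 0 for even n.
N = 200000
dx = np.pi / N
x = (np.arange(N) + 0.5) * dx
f = np.pi - x
for n in range(1, 7):
    a_n = (2 / np.pi) * np.sum(f * np.cos(n * x)) * dx
    expected = 4 / (np.pi * n**2) if n % 2 == 1 else 0.0
    print(n, round(a_n, 8), round(expected, 8))
```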
Angles between curves | You must solve for the intersection point of the two curves:
$\sin (5 x) = \cos (5 x)$, or $\sin (5 x) = \sin (\pi/2 - 5 x)$, which gives $x = \pi/20 \approx 0.15708$.
Then take derivatives:
${d \sin (5 x) \over dx} = 5 \cos (5 x)$, and likewise for the other function. Substitute the value of $x$ at the intersection to get the slopes. Then use your formula.
The answer is indeed $31.5863^\circ$. |
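A quick numerical confirmation of that value:

```python
import numpy as np

# Slopes of sin(5x) and cos(5x) at the intersection x = pi/20, then the
# angle formula tan(theta) = |(m1 - m2) / (1 + m1*m2)|.
x = np.pi / 20
m1, m2 = 5 * np.cos(5 * x), -5 * np.sin(5 * x)
print(np.degrees(np.arctan(abs((m1 - m2) / (1 + m1 * m2)))))   # 31.5863...
```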
Help fixing mistake in dynamics problem | Hint:
60 degrees is not the angle that gives the maximal external radius. What angle is that? |
Cardinal of the set of all dense and enumerable subsets of $\mathbb R^2$ | You just need to go one step further:
$$|B|=\mathfrak{c}^{\aleph_0}=(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0\cdot\aleph_0}=2^{\aleph_0}=\mathfrak{c}\;.$$
So $|A|\le\mathfrak{c}$, and you’ll be done if you can find $\mathfrak{c}$ different countable dense subsets of $\Bbb R^2$. What about sets of the form $\Bbb Q^2\cup\text{(something very simple)}\,$? |
Is the linear dual of a cyclic $K[G]$-module $K[G]$-cyclic | Yes, it’s cyclic.
A cyclic module $M$ is just one that is a quotient of $K[G]$, the regular $K[G]$-module. But by Maschke’s theorem, a quotient of $K[G]$ is a direct summand, so $M^*$ is a direct summand of $K[G]^*$, which is isomorphic to $K[G]$, so $M^*$ is also cyclic.
But in positive characteristic, where Maschke’s theorem doesn’t apply, it’s not always true that the dual of a cyclic module is cyclic. |
Solving $\int \frac1{\cos x}\mathrm dx$ | $$\int \sec(x)dx$$Let $u=\tan(\frac{x}{2})$
$$\int \sec(x)dx=2\int\frac{du}{1-u^2}$$ Use partial fractions to get $$\int \sec(x)dx=\int\frac{du}{1-u}+\int\frac{du}{1+u}=\ln\left(\frac{1+u}{1-u}\right)+C=\ln\left(\frac{1+\tan(\frac{x}{2})}{1-\tan(\frac{x}{2})}\right)+C$$ |
How to prove/disprove that $\sum _{k=1}^{\infty } \frac{(-1)^k \left(H_{k-1}+\log (k)+\gamma \right)}{k}= 0$ | \begin{align}
s_H&=\sum_{k=1}^\infty\frac{(-1)^kH_{k-1}}{k}\\
&=\sum_{k=1}^\infty\frac{(-1)^kH_{k}}{k}-\sum_{k=1}^\infty\frac{(-1)^k}{k^2}\\
&=\frac12\ln^2(2)+\operatorname{Li}_2(-1)-\operatorname{Li}_2(-1)\\
&=\frac12\ln^22
\end{align}
where we used the generating function $\ \sum_{k=1}^\infty\frac{x^k H_k}{k}=\frac12\ln^2(1-x)+\operatorname{Li}_2(x)$
From this solution, we proved $\ \int_0^\infty \ln x\ e^{-kx}\ dx=-\frac{\ln k+\gamma}{k}$, so we can write
\begin{align}
s_L&=\sum_{k=1}^\infty\frac{(-1)^k(\ln k+\gamma)}{k}\\
&=-\int_0^\infty\ln x\sum_{k=1}^\infty(-e^{-x})^k\ dx\\
&=\int_0^\infty\frac{e^{-x}\ln x}{1+e^{-x}}\ dx
\end{align}
which I think is manageable to prove equals $-\frac12\ln^22$, giving $s_H+s_L=0$ as claimed. |
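A partial-sum check that the two pieces indeed cancel (a numerical sketch; the term count and the averaging of consecutive partial sums, which tames the alternating tail, are implementation choices):

```python
from math import log

gamma = 0.5772156649015329   # Euler-Mascheroni constant
H, s, prev = 0.0, 0.0, 0.0
for k in range(1, 200001):
    term = (-1)**k * (H + log(k) + gamma) / k   # here H = H_{k-1}
    prev, s = s, s + term
    H += 1.0 / k
print((s + prev) / 2)   # very close to 0
```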
Inductive step for mathematical induction | There is no difference. Note that the implication does not state the antecedent is in fact true, nor does it state the consequent is in fact true. It is simply saying that IF the antecedent were true, THEN the consequent would also be true.
This makes sense when you consider how a proof by induction is performed. The inductive step begins with assuming that $P(k)$ is true. You then attempt to validly deduce $P(k+1)$. If you can, then you are justified in stating $P(k) \rightarrow P(k+1)$, meaning "if $P(k)$ is true, then $P(k+1)$ is true," or in other words, $P(k+1)$ is true under the assumption that $P(k)$ is true. It is only when the antecedent is in fact true that the truthfulness of the consequent is implied, which is why a proof by induction includes a basis step. The basis step provides the first instance for which the antecedent is true, which implies that your propositional function is true for the very next element. That, in turn, becomes the basis for implying the truthfulness of the propositional function for the next element after that, and so on, and so on... |
How does strong convexity behave under Minkowski sums? | I'm going for the largest strong convexity constant, since any smaller value also works.
If $A$ is $\alpha$-strongly convex and $B$ is $\beta$-strongly convex, then $A+B$ is $\gamma:=\max\{\alpha,\beta\}$-strongly convex.
Proof: Let $a_1,a_2\in A$, let $b_1,b_2\in B$, and let $\eta\in[0,1]$. For notational simplicity, I will set $a:=\eta a_1+(1-\eta)a_2$ and $b:=\eta b_1+(1-\eta)b_2$. Now suppose that $z$ is in the ball centered at $\eta(a_1+b_1) + (1-\eta)(a_2+b_2)=a+b$ with radius $\eta(1-\eta)\gamma$. It suffices to show that $z\in A+B$. By the construction of $\gamma$ and definition of $z$, either
\begin{equation}
\|a+b-z\|=\|a - \left(z - b\right)\| \leq \eta(1-\eta)\gamma=\eta(1-\eta)\max\{\alpha,\beta\} =\eta(1-\eta)\alpha, \tag{1}
\end{equation}
or,
\begin{equation}
\|a+b-z\|=\|b - \left(z - a\right)\|\leq \eta(1-\eta)\beta. \tag{2}
\end{equation}
If (1) holds, then $z-b$ is in the ball centered at $a$ with the proper radius. Since $A$ is strongly convex, this implies $z-b\in A$ and hence $z\in A+b\subset A+B$. Likewise, if (2) holds, then $z-a$ is in the ball centered at $b$ with the proper radius. Since $B$ is strongly convex, we find $z-a\in B$ and hence $z\in B+a\subset A+B$. |
Evaluate the limit of $(f(2+h)-f(2))/h$ as $h$ approaches $0$ for $f(x) = \sin(x)$. | Your problem wants you to evaluate
$$\lim_{h \to 0} \frac{f(2+h) - f(2)}{h}$$
where $f(x) = \sin(x)$ and $x=2$.
First, note that $f(2+h) = \sin(2+h) = \sin(2)\cos(h) + \sin(h)\cos(2)$, which is a trig identity.
Then, the expression, $f(2+h) - f(2)$ simply becomes $\sin(2)\cos(h) + \sin(h)\cos(2) - \sin(2)$.
Therefore, we have to evaluate the following limit
$$\lim_{h \to 0} \frac{\sin(2)\cos(h) + \sin(h)\cos(2) - \sin(2)}{h} \tag{1}$$
We want to be able to use the following trig limit identities in our problem
$$\lim_{h \to 0} \frac{1-\cos(h)}{h} = \lim_{h \to 0}\frac{\cos(h)-1}{h} = 0$$
$$\lim_{h \to 0} \frac{\sin(h)}{h} = 1$$
Thus, we will rewrite $(1)$ into the following form
$$\frac{\sin(2)\cos(h) + \sin(h)\cos(2) - \sin(2)}{h} = \sin(2)\cdot\frac{(\cos(h)-1)}{h} + \cos(2) \cdot \frac{\sin(h)}{h} $$
Thus evaluating the limit, we deduce
$$\lim_{h \to 0} \frac{\sin(2)\cos(h) + \sin(h)\cos(2) - \sin(2)}{h} = \lim_{h \to 0} \bigg( \sin(2)\cdot\frac{(\cos(h)-1)}{h} \bigg) + \lim_{h \to 0} \bigg( \cos(2) \cdot \frac{\sin(h)}{h} \bigg)$$
which becomes
$$\sin(2) \cdot 0 + \cos(2) \cdot 1 = \cos(2)$$
as desired.
Using a calculator, we see that $\cos(2) \approx -0.42$ (here the argument $2$ is in radians).
When you learn derivatives, you’ll find that $\frac{\mathrm d}{\mathrm dx}(\sin(x)) = \cos(x)$; evaluated at $x = 2$ this gives $\cos(2)$, which we have formally proved above using the definition of the derivative. |
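A difference-quotient check of this limit:

```python
from math import sin, cos

# (sin(2+h) - sin(2))/h should approach cos(2) as h -> 0.
for h in (1e-1, 1e-3, 1e-5, 1e-7):
    print(h, (sin(2 + h) - sin(2)) / h)
print(cos(2))   # -0.4161468365...
```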
Discontinuity of the indicator function | Let $u$ denote the vector of ones, then, for every real number $t$ and every vector $x$, one has $q(x+tu,-x)=\mathbf 1_{t\gt0}$ and $q(x,-x+tu)=\mathbf 1_{t\lt0}$. What does this tell you? |
Tough integrals that can be easily beaten by using simple techniques | My favourite example of this is @SangchulLee's solution to @VladimirReshetnikov's question, which asks to verify
the correctness of the identity
$$\int_0^{\infty} \frac{dx}{\sqrt[4]{7 + \cosh x}}= \frac{\sqrt[4]{6}}{3\sqrt{\pi}} \Gamma\left(\frac14\right)^2 .$$
The other answers indicate the "toughness" of this integral, resorting to all sorts of special functions such as elliptic functions,
hypergeometric functions, or Mathematica.
However, the integral can be brilliantly shown to be a few substitutions away from the form of a beta function integral.
Lee makes a chain of simple substitutions which he describes here, to obtain
$$\int_0^{\infty} \frac{dx}{(a+ \cosh x)^s} = \frac1{(a+1)^s} \int_0^1 \frac{v^{s-1}}{\sqrt{(1-v)(1-\frac{a-1}{a+1} v)}} dv.$$
The closed form of this integral for general $a$ is certainly non-elementary, but our special case $a=7$ and $s=\frac14$ is different
for a very neat reason:
When $a=7,$ $\frac{a-1}{a+1}$ is equal to $\frac34$, but since we have the triple angle formula
$\displaystyle \, \cosh(3 x)=4\cosh^3 x-3 \cosh x,$ the integral can be rewritten (with $v=\operatorname{sech}^2 t$) as
$$2^{5/4} \int_0^{\infty} \frac{\cosh t}{\sqrt{\cosh 3t}} dt$$
which can be easily brought to the form of a beta function.
(Note that we can find a similar closed form (with $a=7$) for $s=3/4$.) |
Probability help! Am i wrong? | You have enough data to make a complete cross-tabulation:
$$\begin{array}{r|c|c|c}
&\text{pos.}&\text{neg.}&\text{total}\\ \hline
\text{pregnant}&35&9&44\\ \hline
\text{not pregnant}&8&48&56\\ \hline
\text{total}&43&57&100
\end{array}$$
There are $56$ women who are not pregnant, and $48$ of them test negative, so the probability that a randomly chosen non-pregnant woman tests negative is
$$\frac{48}{56}=\frac67=0.\overline{857142}\;.$$
This is slightly larger than your $\frac{56}{66}=\frac{28}{33}=0.\overline{84}$. |
Finding this summation: $\sum_{n=1}^{\infty}\frac{(2n+99)!(3n-2)!}{(2n)!(3n+99)!}$ | For any $s \ge 1$, let
$$\Delta_s = \sum_{n=1}^\infty \frac{(2n+s-1)!}{(2n)!}\frac{(3n-2)!}{(3n+s-1)!}$$
In particular, $\Delta_{100}$ is the sum we want to compute. Notice
$$
\begin{align}
\frac{(2n+s-1)!}{(2n)!}
&= \left.\left(\frac{d}{dz}\right)^{s-1} z^{2n+s-1}\right|_{z=1}
= \frac{\Gamma(s)}{2\pi i}\oint_{C} \frac{z^{2n+s-1}}{(z-1)^s}dz\\
\frac{(3n-2)!}{(3n+s-1)!}
&= \frac{1}{\Gamma(s+1)}\frac{\Gamma(3n-1)\Gamma(s+1)}{\Gamma(3n+s)} = \frac{1}{\Gamma(s+1)}\int_0^1 t^{3n-2} (1-t)^s dt
\end{align}$$
where $C \subset \mathbb{C}$ is a small circular contour centered at $z = 1$.
We have following integral representation for $\Delta_s$.
$$s\Delta_s = \frac{1}{2\pi i}\int_0^1 \oint_C
\left(\frac{z(1-t)}{z-1}\right)^s \frac{zt}{1 - z^2t^3} dz dt
$$
Let $\Delta(\eta)$ be the OGF for $s\Delta_s$,
$$\Delta(\eta) \stackrel{def}{=} \sum_{s=1}^\infty s\Delta_s \eta^{s-1}$$
Let $\omega = (1-\eta + \eta t)^{-1}$ and $C_{\omega} \subset \mathbb{C}$ be a circular contour centered at $\omega$. Since
$$\sum_{s=1}^\infty \left(\frac{z(1-t)}{z-1}\right)^s \eta^{s-1} =
\frac{z(1-t)}{z(1-\eta + \eta t) - 1} = \frac{z(1-t)\omega}{z-\omega}
$$
We find
$$\begin{align}
\Delta(\eta)
&=
\frac{1}{2\pi i}\int_0^1 \oint_{C_\omega} \frac{z^2 t (1-t)}{1 - z^2 t^3}\frac{\omega}{z - \omega} dz dt
= \int_0^1 \frac{\omega^3 t(1-t)}{1 - \omega^2 t^3} dt\\
&= \int_0^1 \frac{ t dt }{(1-\eta +\eta t)((1-\eta)^2 + (1-\eta^2) t + t^2)}\\
&= \frac{1}{1-\eta}\int_0^{\frac{1}{1-\eta}} \frac{t dt}{(1+\eta t)(1 + (1+\eta)t + t^2)}\\
&= \frac{1}{(1-\eta)^2}\int_0^{\frac{1}{1-\eta}}
\left[\frac{t+\eta}{1 + (1+\eta)t + t^2} - \frac{\eta}{1+\eta t}\right] dt\\
&= \frac{1}{(1-\eta)^2}\left[
\frac12 \log(3-2\eta) - \sqrt{\frac{1-\eta}{3+\eta}}\tan^{-1}\left(\frac{\sqrt{(1-\eta)(3+\eta)}}{3-\eta}\right)
\right]
\end{align}
$$
Throw the last expression into a CAS and ask it to compute the coefficient of $\eta^{s-1}$ for $s = 100$. We get
$$
\Delta_{100} = \frac{1}{100}\left( 50 \log(3) - A \frac{\pi}{3^{199/2}} - \frac{B}{C}\right)$$
where
$$\begin{array}{rcl}
A &=& 279620009275010140018538432376916760970649320550\\
B &=& 402992750960761749592608273159927666000520075786\\
& & 76797445589033694041389862879632396903\\
C &=& 782839087808164519964649481706120884210784218785\\
& & 090048580229772278114461980652430352
\end{array}
$$
If one multiplies this expression by $101 \times 100$, one reproduces the number in Lucian's comment (which is off by the above factor).
Numerically, $\Delta_{100}$ is very close to $\frac{1}{200}$,
$$\Delta_{100} \approx 0.004999999999999999999533961997106313117\ldots$$
There is a good reason for this. For small $\delta$, we have
$$
\Delta(1-\delta) = \frac{1}{\delta^2}\left(\frac12\log(1+2\delta) - \sqrt{\frac{\delta}{4-\delta}}\tan^{-1}\left(\frac{\sqrt{\delta(4-\delta)}}{2+\delta}\right)\right)
\approx \frac{1}{2\delta} + O(1)
$$
This implies $\Delta(\eta)$ has a simple pole at $\eta = 1$ with residue $-\frac12$.
Let us subtract this pole from $\Delta(\eta)$ and look for the singularities
for the remaining pieces nearest the origin.
For the piece $\log(3 - 2\eta)$, the nearest singularity is clearly $\eta = \frac32$.
For the piece
$\displaystyle\;\sqrt{\frac{1-\eta}{3+\eta}}\tan^{-1}\left(\frac{\sqrt{(1-\eta)(3+\eta)}}{3-\eta}\right)\;$
the nearest singularity occurs at those $\eta$ where
$\displaystyle\;\frac{\sqrt{(1-\eta)(3+\eta)}}{3-\eta} = \pm i\;$. Once again, this leads to $\eta = \frac32$.
What this means is that after we subtract the pole from $\Delta(\eta)$, the remaining piece is analytic on the disk $|\eta| < \frac32$. If we pick a circle with radius $1 < r < \frac32$ and extract the power series of the remaining piece at $\eta = 0$ using the Cauchy integral formula, the coefficients of $\eta^{s-1}$ for the remaining piece will grow at most as fast as $r^{1-s}$.
Using this, we can conclude for large $s$,
$$s \Delta_s = \frac12 + O( r^{1-s} ), \quad 1 < r < \frac32.$$
In order to estimate the coefficient in front of the $O(r^{1-s})$ term,
we need to study $\Delta(\eta)$ near $\eta = \frac32$. One can show that the leading singular behavior there is given by
$$\Delta(\eta) \approx \frac{4}{3} \log\left(\frac32 - \eta\right) + \cdots \quad\text{ for }\quad \eta \approx \frac32.$$
This suggests for large $s$,
$$s\Delta_s \approx \frac12 - \frac{4}{3(s-1)} (2/3)^{s-1}$$
Let use $s = 100$ as a test case, this estimate gives
$$\left[\frac{100 \Delta_{100} - \frac12}{(2/3)^{99}}\right]_{\verb/approx/} = -\frac{4}{297}
\approx -0.01346801346801347$$
This is within $7\%$ from corresponding exact value. Our hand waving estimate turns out to be reasonably decent (at least for $s \approx 100$).
$$\left[ \frac{100 \Delta_{100} - \frac12}{(2/3)^{99}}\right]_{\verb/exact/} \approx -0.012631530615502$$ |
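For the record, the near-coincidence $\Delta_{100}\approx\frac1{200}$ is easy to reproduce numerically (a sketch using log-gamma to tame the factorial ratios; the truncation point is arbitrary):

```python
from math import lgamma, exp

# Delta_s = sum over n of (2n+s-1)!/(2n)! * (3n-2)!/(3n+s-1)!, with s = 100.
s = 100
delta = sum(
    exp(lgamma(2*n + s) - lgamma(2*n + 1) + lgamma(3*n - 1) - lgamma(3*n + s))
    for n in range(1, 2001)
)
print(delta)   # 0.005 to within double precision; the true deviation is ~5e-22
```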
Obtain the elements of a subgroup generated from a permutation, given the elements of the subgroups generated from its disjoint cycles. | Using the $\times$ sign in group theory means a direct product, which is by no means related to the thing you're talking about. I guess you meant to write:
$$\langle (2837) \rangle\langle (46) \rangle\langle (59) \rangle = \{abc |a \in \langle (2837) \rangle, b \in \langle (46) \rangle, c \in \langle (59) \rangle\}$$
Unfortunately this isn't the way to go. In $\langle (2837)(46)(59) \rangle$ the elements are of the type $(2837)^k(46)^k(59)^k$, as disjoint cycles commute. On the other hand, the elements in $\langle (2837) \rangle\langle (46) \rangle\langle (59) \rangle$ are of the type $(2837)^k(46)^m(59)^n$, where $k,m,n$ are not necessarily the same.
Anyway the best way to go is to calculate this by hand. In fact:
$$\langle(2837)(46)(59)\rangle = \{(2837)^0(46)^0(59)^0,(2837)^1(46)^1(59)^1, (2837)^2(46)^2(59)^2, (2837)^3(46)^3(59)^3\}$$
Now it's just a matter of calculation. |
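A sketch of that calculation with SymPy's permutations (0-based points; the cycle $(2\,8\,3\,7)(4\,6)(5\,9)$ has order $\operatorname{lcm}(4,2,2)=4$):

```python
from sympy.combinatorics import Permutation

p = Permutation([[2, 8, 3, 7], [4, 6], [5, 9]], size=10)
for k in range(p.order()):            # the order is 4
    print(k, (p**k).cyclic_form)      # the four elements of the cyclic group
```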
$n_{0} \in \mathbb{N}$ such that $\frac{n-m}{mn}<\varepsilon$ for all $n \geq m\geq n_{0}$ | If $n_0 > \frac{1}{\epsilon}$, then
$$
\frac{n-m}{nm} < \frac{n}{nm} = \frac{1}{m} \le \frac{1}{n_0} < \epsilon
$$ |
If $y=\ln(x)$ and the percentage error in $x$ is 5%, find the error in $y$. | $$y = \ln x \implies dy = \frac{1}{x}dx$$ and you have $\frac{dx}{x}$ in %, and $dy$ is $\cdots$ |
Find series $\arctan \frac{a-x}{a+x}$ near x=0 | \begin{align*}
\tan^{-1} \frac{x}{a}+\tan^{-1} \frac{a-x}{a+x} &=
\tan^{-1}
\left(
\frac{\frac{x}{a}+\frac{a-x}{a+x}}
{1-\frac{x}{a} \frac{a-x}{a+x}}
\right) \\ &=
\tan^{-1} \frac{x(a+x)+a(a-x)}{a(a+x)-x(a-x)} \\ &=
\tan^{-1} \frac{x^2+a^2}{a^2+x^2} \\ &=
\frac{\pi}{4} \\
\tan^{-1} \frac{a-x}{a+x} &= \frac{\pi}{4}-\tan^{-1} \frac{x}{a} \\
&= \frac{\pi}{4}-\frac{x}{a}+\frac{x^3}{3a^3}-\ldots+
\frac{(-1)^{n}x^{2n-1}}{(2n-1)a^{2n-1}}+\ldots
\end{align*}
Using ratio test,
$$
\lim_{n\to \infty}
\left|
\frac{(2n-1)a^{2n-1}x^{2n+1}}{(2n+1)a^{2n+1}x^{2n-1}}
\right|=
\frac{x^2}{a^2}<1$$
$$x^2<a^2$$
$$R=a$$ |
Integral, change of variables into rectangle | Yes, it looks alright, other than a couple of minor things. The sign of your result should not be minus, since $l - k \gt 0$ and $d^2 - c^2 \gt 0$.
While $J(u,v) = -\frac{u}{(v+1)^2}$, $|J(u,v)| = \frac{u}{(v+1)^2}$ so you should not have negative sign inside the integral.
You set up $\frac{x}{y} = v$ and so the bounds of $v$ should be $(\frac{1}{l} \leq v \leq \frac{1}{k})$. You could have instead set up $\frac{y}{x} = v$ and then the bound of $v$ would have been $k \leq v \leq l$.
So $\int_{1/l}^{1/k} \int_{c}^{d} \frac{u}{(v+1)^2}\, du\, dv = \frac{{(l-k)}{(d^2-c^2)}}{2(l+1)(k+1)}$ |
Analysis: Prove divergence of sequence $(n!)^{\frac2n}$ | Pick $K$ as large as you want, then $n!\ge K^{n-K}$ (for $n\ge K$), so
$$(n!)^{1/n}\ge K^{(n-K)/n}=K^{1-K/n}\to K\text{ as }n\to\infty.$$ |
$m<n$ and $n<r \implies m<r$ | As the first answer says, you are almost there already. All that is required is to make explicit the key step that $\;p^+ + q^+\;$ is again a positive integer.
Just to give you another viewpoint, here is a slightly different proof in a slightly different style. As in the question, all variables range over the integers.
$
\newcommand{\calc}{\begin{align} \quad &}
\newcommand{\calcop}[2]{\\ #1 \quad & \quad \unicode{x201c}\text{#2}\unicode{x201d} \\ \quad & }
\newcommand{\endcalc}{\end{align}}
\newcommand{\ref}[1]{\text{(#1)}}
\newcommand{\then}{\Rightarrow}
\newcommand{\followsfrom}{\Leftarrow}
\newcommand{\true}{\text{true}}
\newcommand{\false}{\text{false}}
$From the solution attempt in your question, I'm guessing that your "order definition of an integer" is as follows (in slightly different notation):
$$
\tag 0 m < n \;\equiv\; \langle \exists p : p > 0 : n = m + p \rangle
$$
So that is what I will be using here.
First we notice that $\ref 0$ is equivalent to the simpler
$$
\tag 1 m < n \;\equiv\; n - m > 0
$$ as we see by the following simple calculation:
$$\calc
\langle \exists p : p > 0 : n = m + p \rangle
\calcop\equiv{arithmetic}
\langle \exists p : p > 0 : p = n - m \rangle
\calcop\equiv{logic: one-point rule}
n - m > 0
\endcalc$$
Now, let's start on the most complex side of our demonstrandum, and work towards the other side:
$$\calc
m < n \;\land\; n < r
\calcop\equiv{definition $\ref 1$, twice}
n - m > 0 \;\land\; r - n > 0
\calcop{\tag{*} \then}{arithmetic: the sum of two positive numbers is positive}
(n - m) + (r - n) > 0
\calcop\equiv{arithmetic: simplify}
r - m > 0
\calcop\equiv{definition $\ref 1$}
m < r
\endcalc$$
This completes the proof.
This proof makes explicit the key step $\ref *$ of this proof: positive integers add up to a positive integer. |
A Poisson process is memoryless | Everything is good up to your inequality.
You have correctly shown that
$$
P(N(s+t)=k | N(s)=j, \{N(u), 0<u<s\}) = P(N(s+t)-N(s)=k-j).
$$
Now start with only $P(N(s+t)=k | N(s)=j)$, without the $\{N(u), 0<u<s\}$ and show that it is also equal to $P(N(s+t)-N(s)=k-j)$.
Then you are done! |
Is my solution correct? (trigonometry) | $\sin x>-\sqrt{2}/2$ and you have $\sin(-\pi/4)=\sin(5\pi/4)=-\sqrt{2}/2$. The big open arc between $-\pi/4$ and $5\pi/4$ is the solution (endpoints excluded, since the inequality is strict):
$$\left(\frac{-\pi}{4}, \frac{5\pi}{4}\right)~~~\text{(mod }2\pi\text{)}$$
Another way to denote this solution is
$$\bigcup\limits_{n\in\mathbb{Z}}\left(\frac{-\pi}{4}+2n\pi, \frac{5\pi}{4}+2n\pi\right)$$ |
$\mathbb{Q}$ is totally disconnected w.r.t. natural metric | Sure. Just consider the map$$\begin{array}{rccc}f\colon&F&\longrightarrow&\Bbb R\\&a&\mapsto&\begin{cases}0&\text{ if }a<\xi\\1&\text{ if }a>\xi.\end{cases}\end{array}$$Then $f(F)=\{0,1\}$, which is impossible, since $F$ is connected and $f$ is continuous. |
function equation with translation of independent variable | $f(x) = e^x$ and $g(x) = e^x$, in fact:
$$f(x+a) = e^{x+a} = e^x e^{a} = f(a) g(x)$$
Note that $g(x) = e^x > 0 ~ \forall x$, as required by hypothesis. |
find the eigenvalues of a linear operator T | Suppose $u_A$ is a right eigenvector of $A$ corresponding to an eigenvalue $\lambda_A$.
Similarly, let $u_B^T$ be a left eigenvector of $B$ corresponding to an eigenvalue $\lambda_B$. Then by a simple calculation, we have $T(u_A u_B^T) = \lambda_A \lambda_B u_A u_B^T$, hence $u_A u_B^T$ is an eigenvector of $T$ corresponding to the eigenvalue $\lambda_A \lambda_B$.
The trace can be calculated by summing the eigenvalues of $T$. This involves computing a formula assuming that $A,B$ are diagonalizable, and then using the fact that the diagonalizable matrices are dense coupled with continuity of $\operatorname{tr}$.
Alternatively, we can pick a basis for $\mathbb{C}^{m \times n}$ and compute the trace directly. A simple basis is given by $E_{ij} = e_i e_j^T$. If $X \in \mathbb{C}^{m \times n}$, let $[X]_{ij}$ denote the component of $X$ along $E_{ij}$, i.e., $X = \sum_{i,j} [X]_{ij} E_{ij}$. Then the trace of $T$ is given by $\operatorname{tr}(T) = \sum_{i,j} [T(E_{ij})]_{ij}$. A simple computation shows that $[T(E_{ij})]_{ij} = [A]_{ii}[B]_{jj}$, from which the following formula follows:
$$\operatorname{tr}(T) = \sum_{i,j} [A]_{ii}[B]_{jj} = \sum_i [A]_{ii} \sum_j [B]_{jj} = \operatorname{tr}(A) \operatorname{tr}(B). $$ |
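A numerical sketch of the trace identity, assuming (consistently with the eigenvector computation above) that $T$ acts by $T(X)=AXB$, so that as a matrix on $\mathrm{vec}(X)$ it is $B^T\otimes A$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
A, B = rng.normal(size=(m, m)), rng.normal(size=(n, n))
T = np.kron(B.T, A)                              # matrix of X -> A X B on vec(X)
print(np.trace(T), np.trace(A) * np.trace(B))    # equal up to rounding
```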
Continuity of $J: GL_c(E) \rightarrow GL_c(E): T \mapsto T^{-1}$ | Fix $\varepsilon>0$. By continuity in $I$, we can find $\delta$ such that if $\lVert I-A\rVert\leq \delta$ then $\lVert I-A^{-1}\rVert\leq\varepsilon$. So we have to see when $\lVert I-U^{-1}(U+H)\rVert\leq \delta$, i.e. when $\lVert U^{-1}H\rVert\leq\delta$. This happens if $\lVert H\rVert\leq \frac{\delta}{\lVert U^{-1}\rVert}$. |
Linear algebra over a non-free f.g. module | Saying that you have generators $m_1, \ldots, m_n$ of $M$ is equivalent to saying that there is a surjective morphism $\pi:R^n\to M$. Let $K$ be the kernel of this morphism. Then we have a short exact sequence
$$
0 \to K\to R^n \to M \to 0.
$$
Applying the functor $\operatorname{Hom}_R(R^n, ?)$ to this sequence, we get an exact sequence
$$
0\to \operatorname{Hom}_R(R^n,K) \to \operatorname{Hom}_R(R^n,R^n) \stackrel{p}{\to} \operatorname{Hom}_R(R^n,M) \to \operatorname{Ext}^1_R(R^n,K)=0,
$$
where the last term is zero because $R^n$ is a projective $R$-module. Note that $\operatorname{Hom}_R(R^n,R^n) \cong M_n(R)$ as rings.
If, instead, we applied the functor $\operatorname{Hom}_R(?, M)$ to the above short exact sequence, we would get an injective morphism
$$
0\to \operatorname{Hom}_R(M,M) \to \operatorname{Hom}_R(R^n, M).
$$
If we call $Im$ the image of this morphism, then $Im$ is isomorphic to $\operatorname{End}_R(M)$ as an abelian group. Letting $p$ be the morphism depicted in the second exact sequence above, we can define $S=p^{-1}(Im)$, which is a subgroup of $\operatorname{End}_R(R^n)\cong M_n(R)$.
Therefore, we have morphisms
$$
M_n(R) \hookleftarrow S \stackrel{p}{\twoheadrightarrow} S/I \cong Im \cong \operatorname{End}_R(M),
$$
where $I$ is the subgroup of $S$ of morphisms $R^n\to R^n$ in $S$ that factor through the inclusion $K\to R^n$ (this is where we can interpret the relations between the generators $m_1, \ldots, m_n$).
This allows us to explain the morphisms you are asking for in your post, with the important caveat that $S$ is a subgroup (and not a subring) and $I$ is another subgroup (and not an ideal). (But see below).
Edit.
Thinking a bit more, we can give the following description. $S$ will be the set of morphisms in $\operatorname{End}_R(R^n)$ which send $K$ into $K$, and so is a subring of $\operatorname{End}_R(R^n)$. Moreover, $I$ is a two-sided ideal of $S$. |
$f \in H(\mathbb C) $ s.t. restricted to any strip of finite width ( including straight lines ) , $f(z) \to 0$ as $ z \to \infty$ ; is $f$ constant? | Yes, such functions exist and can be constructed using Arakelyan's approximation theorem. Let $U = \{ x + iy : x \in (-1,\infty), y \in (x^2 - 1, x^2 + 1) \}$ (something like the $1$-neighborhood of the graph of $y = x^2$ for $x>-1$), and let $E = \mathbb{C} \setminus U$ be its complement. Note that every strip of finite width intersects $U$ in a bounded set, and that $E$ contains $1$, but not $0$.
Arakelyan's theorem gives that every continuous function on $E$ which is holomorphic in the interior can be uniformly approximated by entire functions. The function $g(z) = 1/z$ satisfies these assumptions and is bounded on $E$, so there exists an entire function $f$ with $|f(z) - 1/z| < 1/2$ for $z \in E$, which implies both that $f$ is bounded on $E$ and that it is non-constant. This already shows that there is a non-constant entire function $f$ which is bounded on every strip of finite width.
In order to improve this to get $\lim_{z\to\infty} f(z) = 0$ on every such strip, we have to modify the construction slightly. Note that $E$ is simply connected and does not contain $0$, so there exists an analytic branch of $-\log z$ in (some open neighborhood of) $E$. Applying Arakelyan's theorem to this function gives an entire function $h = u + iv$ with $|h(z) + \log z| < 1$ for $z \in E$. This shows that for $z \in E$ one has $u(z) < 1-\log |z|$ and thus $|e^{h(z)}| < e/|z|$, which implies that $\lim_{z\to\infty} e^{h(z)} = 0$ in $E$. So the entire function $f = e^h$ has the desired property. |
I am having a problem in understanding a notation in the book Matrix Analysis by C Meyer. | $A_{\ast j}$ denotes the $j$-th column of $A$ (and we also denote the $i$-th row of $A$ by $A_{i\ast}$). It is a rather popular notation in linear algebra literature.
You may see Wikipedia for accounts of reflections and Householder reflections. Alternatively, consult any numerical linear algebra textbook. |
How to get the explicit formula of Bernoulli number using its generating function? | By writing $j^k$ as $\left.\frac{d^k}{dx^k}e^{jx}\right|_{x=0}$ we have
$$\begin{eqnarray*} \sum_{n=0}^{k}\frac{1}{n+1}\sum_{j=0}^{n}(-1)^j\binom{n}{j}j^k &=& \left.\frac{d^k}{dx^k}\sum_{n=0}^{k}\frac{1}{n+1}\sum_{j=0}^{n}\binom{n}{j}(-1)^j e^{jx}\right|_{x=0}\\&=& \left.\frac{d^k}{dx^k}\sum_{n=0}^{k}\frac{(1-e^x)^n}{n+1}\right|_{x=0}\\\\(A)\qquad&=& \left.\frac{d^k}{dx^k}\sum_{n\geq 0}\frac{(1-e^x)^n}{n+1}\right|_{x=0}\\(B)\qquad&=& \left.\frac{d^k}{dx^k}\frac{x}{e^x-1}\right|_{x=0}\qquad\square.\end{eqnarray*} $$
In $(A)$ we exploit the fact that for any $m>k$, the $k$-th derivative of $(1-e^x)^m$ at the origin is simply zero, since $e^x-1=x+\frac{x^2}{2}+\ldots$ In $(B)$ we exploit the fact that $\sum_{n\geq 0}\frac{z^n}{n+1}=\frac{-\log(1-z)}{z}$, then replace $z$ with $1-e^x$. |
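An exact check of the explicit formula against the first few Bernoulli numbers (with the convention $B_1=-\frac12$ coming from $\frac{x}{e^x-1}$); the hard-coded list of known values is just for comparison:

```python
from fractions import Fraction
from math import comb

def bernoulli(k):
    # B_k = sum_{n=0}^{k} 1/(n+1) * sum_{j=0}^{n} (-1)^j C(n,j) j^k
    return sum(
        Fraction(sum((-1)**j * comb(n, j) * j**k for j in range(n + 1)), n + 1)
        for n in range(k + 1)
    )

known = [Fraction(1), Fraction(-1, 2), Fraction(1, 6), Fraction(0),
         Fraction(-1, 30), Fraction(0), Fraction(1, 42)]
print(all(bernoulli(k) == known[k] for k in range(7)))   # True
```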
Deriving covariance of Gaussian of linear combination of latent variable + noise | I believe I've found the answer myself. Since $\mathbf{W}$ is a parameter and not a random variable, and $\mathbf{z}$ and $\boldsymbol\epsilon$ have zero-mean, we have:
\begin{align}
\mathbb{E}[\mathbf{Wz}\boldsymbol\epsilon^T] = \mathbf{W}\mathbb{E}[\mathbf{z}\boldsymbol\epsilon^T] = \mathbf{W}\mathbb{E}[\mathbf{z}]\mathbb{E}[\boldsymbol\epsilon^T] = \mathbf{W}\mathbf{0}\mathbf{0}^T
\end{align}
Which is a matrix of zeros. We can factorize the expected value of $\mathbf{z}\boldsymbol\epsilon^T$ as above because they are uncorrelated.
Similarly,
\begin{align}
\mathbb{E}[\boldsymbol\epsilon\mathbf{z}^T\mathbf{W}^T] = \mathbb{E}[\boldsymbol\epsilon\mathbf{z}^T]\mathbf{W}^T = \mathbb{E}[\boldsymbol\epsilon]\mathbb{E}[\mathbf{z}^T]\mathbf{W}^T = \mathbf{0}\mathbf{0}^T\mathbf{W}^T
\end{align}
Which also is a matrix of zeros. |
Compatibility conditions for Maurer-Cartan forms on a homogeneous space | I think there is a mixed up and $h_{UV}(x)=(s_{U}(x))^{-1} \cdot s_V(x) \in H$. Continuing where you left:
$$\theta_V(X)= (s_V^*(\omega))(X)=\omega((s_V)_*(X))=\omega(( s_U\cdot h_{UV})_*(X))=\omega\Big(\frac{d}{dt}\Big|_{t=0}\big(s_U(c(t))\cdot h_{UV}(c(t))\big)\Big)$$
Now we use the chain rule to write
$$\omega\Big(\frac{d}{dt}\Big|_{t=0}\big(s_U(c(t))\cdot h_{UV}(c(t))\big)\Big)=\omega\left(\left(dL_{s_{U}}\right)_{h_{UV}(x)}((h_{UV})_*(X))+\left(dR_{h_{UV}}\right)_{s_{U}(x)}((s_{U})_*(X))\right)$$
Now we will use the definition of the Maurer–Cartan form. It is given as the pushforward of a vector in $T_gG$ along the left-translation:
$$\omega(v) = (L_{g^{-1}})_* v,\quad v\in T_gG.$$
In your case we get
$$\left(L_{(s_{U}(x)h_{UV}(x))^{-1}}\right)_*\left(\left(dL_{s_{U}}\right)_{h_{UV}(x)}((h_{UV})_*(X))+\left(dR_{h_{UV}}\right)_{s_{U}(x)}((s_{U})_*(X))\right)$$
Using the properties of left translation we can write the above as
$$(L_{(h_{UV}(x))^{-1}})_*(L_{(s_{U}(x))^{-1}})_*\left(\left(dL_{s_{U}}\right)_{h_{UV}(x)}((h_{UV})_*(X))+\left(dR_{h_{UV}}\right)_{s_{U}(x)}((s_{U})_*(X))\right).$$
Simplifying further, and remembering that $(h_{UV}(x))^{-1}$ is an element of the closed Lie subgroup $H$,
$$(L_{(h_{UV}(x))^{-1}})_*\left(((h_{UV})_*(X))+(L_{(s_{U}(x))^{-1}})_*\left(dR_{h_{UV}}\right)_{s_{U}(x)}((s_{U})_*(X))\right)=\omega_{H}\left((h_{UV})_*(X)\right)+(L_{(h_{UV}(x))^{-1}})_*(L_{(s_{U}(x))^{-1}})_*\left(dR_{h_{UV}}\right)_{s_{U}(x)}((s_{U})_*(X))$$
The right action commutes with the left action, so we almost get the desired result
$$h_{UV}^*(\omega_H)\left(X\right)+(L_{(h_{UV}(x))^{-1}})_*\left(R_{h_{UV}}\right)_*\omega((s_{U})_*(X))$$
Finally, by the definition of the adjoint action we get:
$$\theta_V(X)=Ad(h_{UV}^{-1})\theta_U(X)+(h_{UV})^*\omega_H(X)$$ |
Logic: Show that {∧, ↔, +} is complete, but every proper subset of it, isn't. (Symbol "+" is exclusive or) | You probably already have an example of a minimal complete set of operations, like $S=\{\wedge,\neg\}$. So proving that a set of operations is complete is as easy as showing that you can derive all of the operations in $S$.
After that, all you need to do is show that none of the three doubleton subsets of $\{\wedge,\leftrightarrow,+\}$ are complete. You could do that like the example you cited, where you describe a feature that all of the derivable operations have and then show that there is a logical operation that doesn't have that feature. The usual suspects for counterexamples are $\bot$, $\neg$, and $\to$, because a set of operators either can't return false when all the arguments are true, can't be contrary, or can't be asymmetric. |
Finding a reidue at essential singularity $z_0=0$ | Part A):
$$f(z) = \frac{\sin (z^2)}{z^2}$$
is an even function, hence all coefficients of odd powers of $z$ in its Laurent expansion are $0$, in particular $\operatorname{Res}(f;0) = 0$ here. (Furthermore, this extends to an entire function: it has a removable singularity at $0$.)
Part B) (After the correction of the function):
$$f(z) = z^3\sin \frac{1}{z}$$
is even too, hence the residue is $0$. |
The number of strings of length n that only contain digits among 0...4, and do not contain the string 00. | In the first case, there is a one to one map between the strings contributing to $a_n$ not starting with $0$ but with another fixed number say $1$ and the strings contributing to $a_{n-1}$. That is, every string contributing to $a_{n-1}$ gives a string contributing to $a_n$ by appending a $1$ at the left end, and vice versa. Since we can do this for all of $1,2,3,4$, we get a factor of $4$.
In the second case, there is a one-one correspondence between the strings contributing to $a_n$ beginning with $0$ and having a fixed second digit (say $1$) and the strings contributing to $a_{n-2}$. Specifically, given a string contributing to $a_n$ starting with $01$, it corresponds to the string formed by its last $n-2$ letters. Since we could have any of $1,2,3,4$ instead of $1$, there is again a factor of $4$. |
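The resulting recurrence $a_n = 4a_{n-1} + 4a_{n-2}$ is easy to verify by brute force (a sketch):

```python
from itertools import product

def a(n):
    # strings over {0,...,4} of length n avoiding the substring "00"
    return sum(1 for w in product('01234', repeat=n)
               if '00' not in ''.join(w))

for n in range(3, 9):
    assert a(n) == 4 * a(n - 1) + 4 * a(n - 2)
print([a(n) for n in range(1, 7)])   # [5, 24, 116, 560, 2704, 13056]
```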
Prove equation with $x$, $y$, $r$ definition. | Recall that $\csc\theta=\frac{r}{y}$, $\tan\theta=\frac{y}{x}$, and $\cot\theta=\frac{x}{y}$. (What about $\cos\theta$? Do you know its representation in terms of $x$, $y$, and $r$?)
Start at one end of the equation; the left-hand side is more "complicated" so it's better to start there. Replace the trigonometric terms with their corresponding equivalent forms in $x$, $y$, and $r$.
$$\frac{\csc\theta}{\tan\theta+\cot\theta}=\frac{\frac{r}{y}}{\frac{y}{x}+\frac{x}{y}}$$
Simplify by multiplying the expression by $\frac{xy}{xy}$:
$$\frac{rx}{y^2+x^2}$$
Then use the identity $x^2+y^2=r^2$. You should be able to do the rest. |
Example of a non-isomorphic linear transformation | Well, for your concrete example of $c_i$ we can take
$$f(t) = (t - 1)(t - 2)(t - 3) = (t^2 - 3t + 2)(t - 3) = t^3 - 6t^2 + 11t - 6. $$
Then
$$ T(f) = \begin{pmatrix} f(1) \\ f(2) \\ f(2) \\ f(3) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}. $$
The polynomial $f$ is a polynomial of degree $3$ which is non-zero but $T(f) = 0$. |
Using a comparison test for series with factorials and repeating patterns | Here is a good way to use the comparison test:
$$\begin{align}5\times8\times11\times\dots\times(3i+2)&>2\times4\times6\times\dots\times(2i)\\&=(2\times1)(2\times2)(2\times3)\dots(2\times i)\\&=2^ii!\end{align}$$
Thus,
$$\frac{i!}{5\cdot8\cdot11\cdot\cdot\cdot(3i+2)}<\frac{i!}{2^ii!}=\frac1{2^i}$$
And this is just a geometric series that converges. |
$\angle ABC = \angle AKC = 90^\circ$, Find length of $BK$. | If everything else fails, use coordinates. If you select a coordinate system where $A=(-1,0)$, $C=(1,0)$, $D=(0,-1)$ things are fairly manageable. We then have $K=(x,y)$ with $x^2+y^2$ and after a bit of algebra things work out as
$$ |DK|^2 = 2+2y $$
$$ \frac{(|AK|+|CK|)^2}{2} = 2+2\sqrt{1-x^2} $$
which are clearly the same. |
Why does every 3-string, composed of two letters have exactly 3 palindromic substrings? | There are only $2^3=8$ cases to check. The question also implies distinct palindromes, otherwise there would be more than $3$ of them in some strings:
$$aaa\to a,aa,aaa$$
$$aab\to a,aa,b$$
$$aba\to a,b,aba$$
$$abb\to a,b,bb$$
$$baa\to b,a,aa$$
$$bab\to a,b,bab$$
$$bba\to b,bb,a$$
$$bbb\to b,bb,bbb$$ |
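The same check, automated (counting distinct palindromic substrings):

```python
from itertools import product

def pal_substrings(s):
    # all distinct palindromic substrings of s
    return {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)
            if s[i:j] == s[i:j][::-1]}

for w in product('ab', repeat=3):
    s = ''.join(w)
    assert len(pal_substrings(s)) == 3
print("all 8 strings have exactly 3 distinct palindromic substrings")
```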
An irreducible representation of a complex Lie algebra is the product of a 1-dim rep'n and a semisimple one | Here you first need to understand how $X\in\mathrm{Rad}(\mathfrak{g})$ act on $L^*=\mathrm{Hom}(L,\mathbb{C})$: $$(Xf)(u)=-f(Xu)=-\lambda(X)f(u),$$ so $X$ acts on $L^*$ by $-\lambda$.
Next, the action of $\mathfrak{g}$ on $V\otimes L^*$ is not the diagonal action. It is the action induced by the coproduct on $\mathfrak{g}$: $\Delta(X)=X\otimes 1+1\otimes X$. Therefore, on $V\otimes L^*$ and $X\in\mathrm{Rad}(\mathfrak{g})$, we have
$$X(v\otimes f)=(Xv)\otimes f + v\otimes(Xf)=\lambda(X)(v\otimes f)-\lambda(X)(v\otimes f)=0.$$ |
Prove: the intersection of Fibonacci sequence and Mersenne sequence is just $\{1,3\}$ | We need to find when $F_n+1$ is a power of 2. Almost every value of $n$ can be eliminated by considering the Pisano period. In particular, we can deduce that:
$F_n+1 \equiv 0 \pmod {16}$ if and only if $n \equiv 22 \pmod {24}$ and
$F_n+1 \equiv 0 \pmod 9$ if $n \equiv 22 \pmod {24}$.
This leaves the few small cases already listed. |
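The two congruences are straightforward to verify empirically (a sketch over a finite range, using the Pisano period $24$ for both moduli):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 500):
    if (fib(n) + 1) % 16 == 0:
        assert n % 24 == 22
    if n % 24 == 22:
        assert (fib(n) + 1) % 16 == 0 and (fib(n) + 1) % 9 == 0
print("congruences hold for all n < 500")
```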
Understanding counting. | The rule of multiplication is simple. If there are $x$ ways to perform a subtask and $y$ ways to perform a second (independent) subtask, and the tasks are performed in serial, then there are $xy$ ways to perform the whole task.
Think of it as branching paths. If the path branches into $x$ paths and then each of these branches into $y$, how many ways are there to go?
Similarly if you take a bushwalk which has $x$ paths to get from $A$ to $B$ and there are $y$ paths from $B$ to $C$, then how many different ways are there to go from $A$ to $B$ to $C$?
When independent subtasks are performed in serial then multiplying the counts of the subtasks obtains the count of the whole task.
[Serial: both tasks must be performed; Parallel: the tasks are alternatives.]
We write $3!$ as the count of ways to arrange three distinct items; and likewise $7!$ and $10!$ are the ways to arrange seven, or ten, distinct items, respectively.
Well, now we write $\binom {10}3~$, or sometimes $~^{10}C_3$, to represent the count of ways to choose three from ten distinct things.
But what exactly is this $\binom {10}{3}$ notation? How do we find its value?
You have examined two ways to count the arrangements of ten children among ten seats, when the seats are in two rows of three and seven.
Because these are ways to perform the exact same task; the counts must be the same.
You have counted the ways to select three of ten children, then arrange them among three seats, and finally arrange the remaining children among seven seats. This is: $~\binom{10}3\cdot 3!\cdot 7!$.
You have also counted the ways to simply arrange all ten children among all ten seats. That is: $~10!~$.
Hence we find that: $\dbinom{10}3 = \dfrac{10!}{3!\,7!}$ .
In general we find that $\dbinom{n}{r} ~=~ \dfrac{n!}{r!~(n-r)!}$ for all non-negative integer $r,n$ such that $0\leq r\leq n$
You can verify this with smaller examples: What are $\binom 3 1$ and $\binom 4 2$ ? |
Find a symmetric postive definite matrix which maps one vector to another | Making $y = \alpha x + \beta v$ with $x\cdot v = 0$ if $\alpha > 0$ then
$$
x^{\dagger}\cdot A\cdot x \ge \alpha x^{\dagger}\cdot x\gt 0,\ \ \ x\ne 0
$$
or
$$
x^{\dagger}\cdot\left( A-\alpha I_2\right)\cdot x > 0,\ \ \ x\ne 0
$$
now calling
$$
A =\left(
\begin{array}{cc}
a_{11} & a_{12} \\
a_{12} & a_{22}\\
\end{array}
\right)
$$
we have the conditions for positivity on $ A-\alpha I_2$
or
$$
\cases{
a_{11}-\alpha>0\\
a_{22}-\alpha>0\\
a_{11} a_{22}-a_{11} \alpha -a_{12}^2-a_{22} \alpha +\alpha ^2\gt 0
}
$$ |
Is there a simpler method of calculating $\sqrt[n]{x}$? | The n-th root algorithm converges quadratically (the number of correct digits is essentially doubled each step) and uses only "simpler" operations such as addition, division, integer powers.
This method is fast, reliable, and accurate and does not require anything except basic arithmetic. You know in advance how many steps are needed to get, say 100 digits. With a decent starting value, you will need about 10 steps.
The log method requires the computation of logarithms and exponentials which is in turn done approximately. So this method has an uncertain amount of computational work associated with it and accuracy control is also uncertain.
"Simplicity" is a subjective criterion and must be balanced against reliability, accuracy, speed. Many (including me) would call the n-th root algorithm simple and elegant. |
Laurent series in domain $|z|>0$ | The function's analytic in your region (and almost analytic at $\;z=0\;$ ...), so using the power series for $\;\sin z\;$ which has infinite convergence radius:
$$\frac1z\sin2z=\frac1z\sum_{n=1}^\infty(-1)^{n-1}\frac{(2z)^{2n-1}}{(2n-1)!}=\sum_{n=1}^\infty(-1)^{n-1}\frac{2^{2n-1}z^{2n-2}}{(2n-1)!}$$
and we get in fact a power series, as expected. |
How do I find the Maclaurin series of $\sinh^2(x)$? | Note that $$\sinh ^2 x = \left(\frac {e^x-e^{-x}}{2}\right)^2 = \frac14\left(e^{2x} +e^{-2x} -2\right)$$
Now use $$e^{2x} = 1+(2x) + (2x)^2/2 + (2x)^3 / {3!}+.....$$ and $$e^{-2x} = 1+(-2x) + (-2x)^2/2 + (-2x)^3 / {3!}+.....$$
to get your result. |
Proving that a certain language is or is not regular using pumping lemma | Your argument is difficult to understand, but it seems like you are trying to show that $L$ is not regular by first considering the language $L' = \{a^mb^n \mid m > n\}$. But the problem is that while a pumped up word from $L'$ might not be in $L'$, it can still be in $L$.
To make matters worse, the language $L$ is regular. Consider e.g. the regular grammar with starting symbol $L$ and the rules $L \to aL \mid B$ and $B \to bB \mid \varepsilon$. |
Characterization of convex set with empty interior on Hilbert spaces | Your counterexample is correct, assuming that "subspace" (as often) means a closed subspace of $H$. Otherwise, it would not work because the set $C$ is contained in $\ell^1(\mathbb{N})$ which is a dense subspace of $\ell^2(\mathbb{N})$. So I proceed by assuming the subspaces are closed.
The justification is slightly lacking in the following: the property $C^\perp = \{0\}$ only implies that $C$ is not contained in any proper linear subspace; it does not preclude $C$ from being contained in a proper affine subspace. For example, the set $A=e_1+e_1^\perp$, which is an affine hyperplane, satisfies $A^\perp = \{0\}$ since its linear span is all of $\ell^2$.
However, the above is easy to repair: since $C$ contains $0$, any affine subspace containing it would be a linear subspace.
A simpler example
Let $C$ be the set of all sequences $x$ such that $x_n=0$ except for finitely many $n$. Then $C$ is convex and has empty interior, since adding an arbitrarily small multiple of the vector $(1/2^n)_{n\in\mathbb{N}}$ to an element of $C$ takes one out of $C$. It's also dense in $\ell^2$, so can't be contained in a proper closed subset of any kind.
(Your example has the additional property of being closed, which however wasn't required.) |
Prove that sequence is monotone with induction | Because $a_n>0$ and $$a_{n+1}-a_n=\frac{2a_n}{3+a_n}-a_n=\frac{-a_n^2-a_n}{3+a_n}<0$$ |
Prove asymptotic equivalence of $\text{li}(n)$ and $n/\ln(n)$ | One way to prove it is to use de l'Hôpital's rule.
Let $f(x) = \frac{x}{\log{x}}$ and $g(x) = \int_1^x \frac{dt}{\log{t}}$. Then
$$
\lim_{x \to \infty}f(x) = \lim_{x\to\infty}g(x)=\infty
$$
and therefore
$$
\lim_{x \to \infty}\frac{f(x)}{g(x)}
$$
is of the indeterminate form $\frac{\infty}{\infty}$. We can apply de l'Hôpital's rule once and obtain
$$
\lim_{x \to \infty}\frac{f(x)}{g(x)} = \lim_{x \to \infty}\frac{f'(x)}{g'(x)}.
$$
We have
$$
f'(x) = \frac{\log{x}-1}{\log^2 x} \qquad \text{ and } \qquad g'(x)=\frac{1}{\log{x}},
$$
the latter following from the fundamental theorem of calculus. Finally we have
$$
\lim_{x \to \infty}\frac{f(x)}{g(x)} = \lim_{x \to \infty}\frac{f'(x)}{g'(x)} = \lim_{x \to \infty}\frac{(\log x - 1)\log x}{\log^2 x} = 1.
$$ |
Is term "real number" equivalent to "group of algorithms generating stream of digits"? | What you mean is equivalent to computable numbers. There are also definable numbers.
You may also be interested in this question. |
Number Theory - Method of Proof explanation | The proof as presented has several problems. Personally, I'd prefer something like this:
Claim. If $n\ge 2$, then $2^n$ is the sum of two consecutive odd integers.
Proof. As $n\ge 2$, it follows that $k=2^{n-2}$ is an integer, and then $2k-1$ and $2k+1$ are consecutive odd integers. As $(2k-1)+(2k+1)=4k=2^n, $
the claim follows. $\square$
The formulation "implies $k=2^{n-2}$" goes the wrong way and belongs rather to a research attempt to find a proof. |
Euler's proof for the infinitude of the primes | It might be instructive to see the process of moving from a heuristic argument to a rigorous proof.
Probably the simplest thing to do when considering a heuristic argument involving infinite sums (or infinite products or improper integrals or other such things) is to consider its finite truncations. i.e. what can we do with the following?
$$\prod_{\substack{p \in P \\ p < B}} \frac{1}{1 - 1/p}$$
Well, we can repeat the first step easily:
$$\prod_{\substack{p \in P \\ p < B}} \frac{1}{1 - 1/p}
= \prod_{\substack{p \in P \\ p < B}} \sum_{k=0}^{+\infty} \frac{1}{p^k}$$
Because the finitely many series involved are all absolutely convergent, we can expand the product term by term and rearrange freely, i.e. we can distribute:
$$ \ldots = \sum_{n} \frac{1}{n} $$
where the summation is over only those integers whose prime factors are all less than $B$.
At this point, it is easy to make two very rough bounds:
$$ \sum_{n=1}^{B-1} \frac{1}{n} \leq \prod_{\substack{p \in P \\ p < B}} \frac{1}{1 - 1/p}
\leq \sum_{n=1}^{+\infty} \frac{1}{n} $$
And now, we can take the limit as $B \to +\infty$ and apply the squeeze theorem:
$$ \prod_{\substack{p \in P}} \frac{1}{1 - 1/p}
= \sum_{n=1}^{+\infty} \frac{1}{n} $$ |
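For a numerical illustration of these bounds (a Python sketch; the cutoffs $B$ are arbitrary), the truncated product always sits above the partial harmonic sum, and both grow without bound:

```python
def primes_below(B):
    """Simple sieve of Eratosthenes; returns all primes p < B."""
    sieve = [True] * B
    sieve[0:2] = [False, False]
    for i in range(2, int(B ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

for B in (10, 100, 1000, 10000):
    prod = 1.0
    for p in primes_below(B):
        prod *= 1 / (1 - 1 / p)
    H = sum(1 / n for n in range(1, B))  # the lower bound, sum over n < B
    print(B, round(H, 3), round(prod, 3))  # prod >= H in every row
```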
unique MST's of a graph? | You can argue something like this:
Take a weighted graph $G = (V,E)$, with unique weights $w_2 < w_3 < \dots < w_E$, and equal weights $w_0 = w_1$. Start a minimum spanning tree algorithm (like Kruskal's) and examine the algorithm's actions when edges $e_0$ and $e_1$ are reached. There are multiple cases (a runnable Kruskal sketch follows below):
If $e_0$ and $e_1$ can both be inserted without closing a cycle.
If $e_0$ and $e_1$ can both be inserted individually, but they close a cycle together.
If $e_0$ cannot be inserted, but $e_1$ can (or vice versa).
If neither $e_0$ nor $e_1$ can be inserted.
In cases $1,3$, and $4$, the algorithm treats edges $e_0$ and $e_1$ like uniquely weighted edges (say $w_0$ and $w_1 + \epsilon$), inserts them with the correct procedure, and continues on to find a unique solution.
In case $2$, there are two possible minimum spanning trees, one where $e_0$ was inserted and one where $e_1$ was inserted. |
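A minimal Kruskal sketch in Python (with a hypothetical three-vertex graph) makes case $2$ concrete: feeding the two equal-weight edges in different orders yields two different minimum spanning trees of the same total weight.

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v); vertices are 0..n-1.
    sorted() is stable, so equal-weight edges keep their input order."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, key=lambda e: e[0]):
        ru, rv = find(u), find(v)
        if ru != rv:            # the edge closes no cycle: insert it
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# Case 2: e0 = (5, 0, 2) and e1 = (5, 1, 2) can each be inserted alone,
# but together they would close a cycle; whichever comes first wins.
edges = [(1, 0, 1), (5, 0, 2), (5, 1, 2)]
print(kruskal(3, edges))                           # [(1, 0, 1), (5, 0, 2)]
print(kruskal(3, [edges[0], edges[2], edges[1]]))  # [(1, 0, 1), (5, 1, 2)]
```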
Double Integral with Variable Substitution | The red region is the region of integration in the $uv$ coordinate system. The additional bound $u+v=3$ (the green line) does not further bound the region.
You can see that $v$ is bounded below by $v=-u$ and above by $v=0$, and that $u$ is restricted to the interval $[0,3]$. |
Notation for the disjoint union of open subspaces | Yes, it is customary to write $\coprod X_i$ for a disjoint union of subspaces/subsets in general. And $\coprod X_i$ is also the usual notation for the abstract disjoint union of the $X_i$ (which is the coproduct in the category of spaces - this notation is also used for (categorical) coproducts in general). This usually does not lead to confusion, and in any case, whenever you feel that it can be confusing, you should say explicitely what you are speaking about. A good way to avoid confusion is also to prefer using only $\bigcup X_i$ for the subspace if you want to distinguish it from the abstract disjoint union. |
Proving $\operatorname{tr} AB\le n$ when $ABA=A$. | $$ABAB = AB{{{{{{{{}}}}}}}}$$hence $$
tr AB = rank( AB )\le n
$$ |
When associated prime ideals are comaximal | This is not going to be a completely general answer for the following reason:
This is obviously not true of rings with embedded primes, but there are nonreduced rings with no embedded primes, e.g. $k[x]/(x^2)$, which has a single prime ideal, $(x)$, and hence satisfies your properties.
However, to forbid embedded primes, let's assume $R$ is reduced. Let's also assume $R$ is Noetherian, since I'm not sure how much of what I'm going to say applies to non-Noetherian rings without checking some references.
Then the associated primes of $R$ are precisely the minimal prime ideals.
If $R$ is reduced and Noetherian, the following are equivalent:
The associated primes are pairwise comaximal.
$R\cong \prod_{P\in\operatorname{Ass} R} R/P$
Every associated prime is the annihilator of an idempotent
Every associated prime is principal, generated by an idempotent
Geometrically these all correspond to $\newcommand\Spec{\operatorname{Spec}}\Spec R$ being the disjoint union of its irreducible components.
Proof:
$1\implies 2$: If every pair of associated primes is comaximal, then Chinese remainder theorem guarantees that $$R\cong \prod_{P\in\operatorname{Ass} R} R/P,$$ since the intersection of the associated primes is $0$, since $R$ is reduced.
$2\implies 3$: If $R$ is isomorphic to that product, then $P$ is the annihilator of the idempotent corresponding to the preimage in $R$ of the identity in $R/P$.
$3\implies 4$: If every associated prime is the annihilator of an idempotent, then if $P=\newcommand\ann{\operatorname{ann}}\ann e$, $P=R(1-e)$, since every element of $R(1-e)$ is in $\ann e$, and if $fe=0$, then $f(1-e)=f-fe=f$, so $f\in R(1-e)$. Thus $P$ is principal, generated by the idempotent $1-e$.
$4\implies 1$: Finally, if $P$ and $Q$ are distinct primes with $P=Re$ and $Q=Rf$ for idempotents $e$ and $f$, then consider $R/P$. In this ring the image of $f$ is still idempotent, but $P$ is prime, so the image of $f$ is either $0$ or $1$. If the image of $f$ is $1$, then $P$ and $Q$ are comaximal, since this implies that $1-f$ lies in $P$, so $1-f+f = 1 \in P+Q$. On the other hand, if the image of $f$ is $0$, $f\in P$, so $Q\subseteq P$, and considering the image of $e$ in $R/Q$ we see that either the image of $e$ is $1$, in which case $P$ and $Q$ are comaximal, or the image of $e$ is $0$ also, so $P\subseteq Q$ as well, contradicting that $P$ and $Q$ were distinct. |
Taylor's formula - infinitesimal generator | $$Af(x) = \lim_{t \downarrow 0} \frac{\mathbb{E}^x(f(X_t))-f(x)}{t}$$
implies that
$$\lim_{t \downarrow 0} \mathbb{E}^x(f(X_t))-f(x) - tAf(x) = \lim_{t \downarrow 0} t \left[ \frac{\mathbb{E}^x(f(X_t))-f(x)}{t} - Af(x) \right] = 0$$
which tells you that
$\mathbb{E}^x(f(X_t)) \approx f(x) + tAf(x)$ for small $t \geq 0$. |
Show that the closed $n$-ball $B^n(a)$ is a manifold | To do this, you need to show that every point $x \in B^n(a)$ has a neighborhood $U$ which is homeomorphic to an open subset of $\mathbb{H}^n = \{(x_1, \ldots, x_n) \ | \ x_n \geq 0\}$. This is the upper half space, and has boundary $\{(x_1, \ldots, x_n) \ | \ x_n = 0\}$. The points $x$ which are mapped to the boundary of $\mathbb{H}^n$ by the above-described homeomorphism form the boundary of the manifold.
Clearly each point $x$ in the interior of $B^n(a)$ has such a neighborhood: just translate a small ball around $x$ upwards so that it lies in the interior of the upper half space.
So the only issue is showing that each point on the boundary $S^{n - 1}$ has an open neighborhood homeomorphic to an open subset of $\mathbb{H}^n$. This can be done by "straightening the boundary" of the sphere. To do this, try using stereographic projection (see https://www.physicsforums.com/threads/closed-ball-is-manifold-with-boundary.744620/).
It would probably help to do this in 2 dimensions first; the generalization step should be almost identical. |
Projectile Motion with Gun Fire | Apply the equations below,
$$x = v_xt,\>\>\>\>\> y = v_y t -\frac12 g t^2$$
Substitute the givens to get,
$$2000 = 200\cdot \cos 45 \cdot t, \>\>\>\>\>\>y = 200\cdot\sin 45\cdot t - \frac12\cdot 10\cdot t^2$$
Solve for $y$ and $t$,
$$y=1000,\>\>\>\>\> t = 10\sqrt2$$
which shows the gun hits the target at the height of 1000 meters and the time of flight is $10\sqrt2$ seconds.
The vertical speed at the impact is $v_y-gt = 200\cdot \sin 45 - 10\cdot 10\sqrt2=0$. So, the impact speed of the shell is $v_x= 200\cdot \cos 45 = 100\sqrt2$m/s. |
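A quick numeric check of the above (a Python sketch with the given values):

```python
import math

v, g, x_target = 200.0, 10.0, 2000.0
vx = vy = v * math.cos(math.radians(45))  # cos 45 = sin 45

t = x_target / vx             # time of flight: 2000 = vx * t
y = vy * t - 0.5 * g * t**2   # height at impact
print(t, y, vy - g * t)       # 14.142... (= 10*sqrt(2)), 1000.0, ~0.0
```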
Calculate the length of a polar curve | As OP states "polar curve", then we can imagine it as curve on polar plane, or curve on Cartesian plane in polar coordinates. So, we can do it in 2 ways:
Considering it as usual parametrical representation of curve on plane $(\boldsymbol r, \boldsymbol\theta)$ and considering $r$ as parameter we have $$\left(\frac{ds}{dr}\right)^2 =\left(\frac{dr}{dr}\right)^2+\left(\frac{d \theta}{dr}\right)^2$$
So, length can be calculated as
$$\int\limits_{1}^{3}\sqrt{1+\theta'^2(r)}dr $$
Considering on plane $(\boldsymbol x, \boldsymbol y)$ and taking polar representation of curve by $x =r \cos \theta, y = r \sin \theta$. Using in last formulas $\theta = \theta(r)$ we can calculate
$$\frac{dx}{dr} = \cos \theta - r \frac{d\theta}{dr} \sin \theta$$
and
$$\frac{dy}{dr} = \sin \theta + r \frac{d\theta}{dr} \cos \theta$$
Using obtained we have
$$\left(\frac{ds}{dr}\right)^2 =\left(\frac{dx}{dr}\right)^2+\left(\frac{dy}{dr}\right)^2 =\\
=\left(\ \cos \theta - r \frac{d\theta}{dr} \sin \theta \right)^2 + \left(\ \sin \theta + r \frac{d\theta}{dr} \cos \theta \right)^2 =\\
=1+ r^2\left(\frac{d\theta}{dr}\right)^2$$
and the length can be calculated as the integral
$$\int\limits_{1}^{3}\sqrt{1+r^2\theta'^2(r)}dr$$ |
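As a sketch, the last integral can be evaluated numerically once $\theta(r)$ is specified; the curve $\theta(r)=r^2$ below is a made-up example, not the one from the question:

```python
import numpy as np
from scipy.integrate import quad

theta_prime = lambda r: 2 * r  # hypothetical curve theta(r) = r^2

length, err = quad(lambda r: np.sqrt(1 + (r * theta_prime(r)) ** 2), 1, 3)
print(length)
```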
Explanation of the metric tensor | What they mean is that $m_{ij}$ is the value of the scalar product of $v_i$ and $v_j$, so $\left<v_i,v_j\right> = m_{ij}$. They didn't "get" $m_{ij}$ from anywhere...it's just given to you, as the definition of the scalar product. Let's do a concrete example in $\Bbb{R}^2$, with basis $e_1,e_2$. For example, let's say our scalar product is given by $\left<e_1,e_1\right> = 2$, $\left<e_2,e_2\right> = 3$, and $\left<e_1,e_2\right> = 5$. Then the matrix $M = (m_{ij})$ is
$$ M = \left( \begin {array}{cc} 2 & 5 \\ 5 & 3 \end{array} \right) $$
Now let's say we want to compute the scalar product of the vectors $(1,2) = e_1 + 2e_2$ and $(3,-4) = 3e_1 - 4e_2$. Since the scalar product is bilinear, we can compute as:
$$ \begin {align*}
\left<e_1+2e_2, \, 3e_1-4e_2\right> &= 3m_{11} -4m_{12} + 6m_{21} -8m_{22} \\
&= 3(2) + (6-4)(5) - 8(3) \\
&= -8
\end {align*}
$$
The claim is just that this computation could have also been done using matrix multiplication:
$$ \begin {align*}
\left<(1,2),(3,-4)\right> &= (1,2) \left(\begin{array}{cc} 2&5\\5&3 \end{array}\right) \left( \begin{array}{c} 3\\-4 \end{array}\right)
\end {align*}
$$ |
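As a quick check, the same computation via matrix multiplication (a NumPy sketch):

```python
import numpy as np

M = np.array([[2, 5],
              [5, 3]])   # the matrix (m_ij) of scalar products
x = np.array([1, 2])     # e_1 + 2 e_2
y = np.array([3, -4])    # 3 e_1 - 4 e_2

print(x @ M @ y)         # -8, matching the bilinear computation above
```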
What is the maximum value of $\ln(x+\ln(x+\ln(x+\cdots)))-\ln x$? | After some successful discussions with @Holo here, we have derived a closed form for $k$.
The expression for $k(x)$ in the original post is incorrect in that we must take the negative branch of the Lambert $W$ function on $(-1,0)$; that is, $$k(x)=-W_{-1}(-e^{-x})-x-\ln x\tag{1}.$$ with $W_{-1}(-e^{-x})$ being the negative branch on $(1,\infty)$.
We want to find the maximum value of $f_{\infty}(x)-\ln x$. Differentiating $(1)$, we get $$k'(x)=-\frac{W_{-1}(-e^{-x})}{-e^{-x}(1+W_{-1}(-e^{-x}))}\cdot e^{-x}-1-\frac1x=-\frac1{1+W_{-1}(-e^{-x})}-\frac1x=0$$ for critical points. Therefore, we have $$x+1+W_{-1}(-e^{-x})=0\tag{2}$$ provided that $x\ne0$ and $W_{-1}(-e^{-x})\ne-1\implies x\ne1$ which is not a problem.
From $(2)$ we get $W_{-1}(-e^{-x})=-(x+1)$; the defining relation $We^{W}=-e^{-x}$ then gives $(x+1)e^{-(x+1)}=e^{-x}$, i.e. $x+1=e$, so $x=e-1$. Hence $$k=-W_{-1}(-e^{-(e-1)})-(e-1)-\ln(e-1)=-(-e)-(e-1)-\ln(e-1)=\boxed{1-\ln(e-1)}$$
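A numerical sanity check (a sketch; SciPy's lambertw with k=-1 selects the $W_{-1}$ branch):

```python
import numpy as np
from scipy.special import lambertw

x = np.e - 1
k = -lambertw(-np.exp(-x), k=-1).real - x - np.log(x)
print(k, 1 - np.log(np.e - 1))  # both ~0.45867514...
```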
Second countable normed vector space | Second countability is equivalent to separability. Given countable dense subsets $A$ of $Y$ and $B$ of $X/Y$ we choose a countable set $C$ of $X$ such that $B=q(C)$, where $q:X\to X/Y$ is the quotient map. Then $A+C=\lbrace a+c: a\in A, c\in C\rbrace$ is countable and dense in $X$.
Indeed, given $x\in X$ and $ \varepsilon>0$ there is $b=q(c)\in B$ such that the quotient norm $$\|q(x)-b\|_{X/Y} = \inf\lbrace \|v\|_X: q(v)=q(x)-b\rbrace <\varepsilon/2.$$
Take such a $v$ with $\|v\|_X <\varepsilon/2$.
Then $x-c-v \in \ker(q)=Y$, so there is $a\in A$ with
$\|x-c-v-a\|_X <\varepsilon/2$. This gives
$$\|x-(a+c)\|_X \le \|x-c-v-a\|_X+\|v\|_X < \varepsilon.$$ |
Expressing a polynomial in terms of Chebyshev series | This is first the part of the procedure for Economization of Power Series, see e.g. Abramowitz/Stegun 22.20. In Press et al., Numerical Recipes in C, 2nd ed, Ch. 5.11, you can find the function pccheb which computes the Chebyshev coefficients $b_k$ from the $a_k$. The main idea is a double loop based on the relation
$$
x^k = \frac{1}{2^{k-1}}\left(T_k(x) + {k \choose 1}T_{k-2}(x) + {k \choose 2}T_{k-4}(x) \cdots \right)
$$ |
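NumPy implements this monomial-to-Chebyshev conversion as numpy.polynomial.chebyshev.poly2cheb; a small sketch (the cubic is an arbitrary example):

```python
from numpy.polynomial import chebyshev as C

a = [0.0, 0.0, 0.0, 1.0]   # monomial coefficients of x^3, lowest degree first
b = C.poly2cheb(a)         # Chebyshev coefficients b_k
print(b)                   # [0.  0.75 0.  0.25]: x^3 = (3 T_1(x) + T_3(x)) / 4
```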
Least squares fitting of vectors | The method of least squares works for any dimension (assuming invertibility of $A^TA$).
$$
Ax = b\\
A^TA\hat{x} = A^Tb\\
\hat{x} = (A^TA)^{-1}A^Tb
$$
For your problem you might do the following:
$$
Q_x = \begin{bmatrix}Q_{1x} & Q_{2x} & \dots &Q_{nx}\end{bmatrix}_{1\times n}\\
Q_y = \begin{bmatrix}Q_{1y} & Q_{2y} & \dots &Q_{ny}\end{bmatrix}_{1\times n}\\
Q_z = \begin{bmatrix}Q_{1z} & Q_{2z} & \dots &Q_{nz}\end{bmatrix}_{1\times n}\\
P_x = \begin{bmatrix}P_{1x} & P_{2x} & \dots &P_{nx}\end{bmatrix}_{1\times n}\\
P_y = \begin{bmatrix}P_{1y} & P_{2y} & \dots &P_{ny}\end{bmatrix}_{1\times n}\\
P_z = \begin{bmatrix}P_{1z} & P_{2z} & \dots &P_{nz}\end{bmatrix}_{1\times n}\\
q = \begin{bmatrix}Q_x & Q_y & Q_z\end{bmatrix}^T\ _{3n\times1} \\
p = \begin{bmatrix}P_x & P_y & P_z\end{bmatrix}^T\ _{3n\times1}
$$
With such matrices it's easy to solve the problem. We want:
$$
q\lambda = p
$$
Such a $\lambda$ generally doesn't exist exactly. Using least squares we get the $\lambda$ that minimizes the squared error:
$$
\hat{\lambda} = (q^Tq)^{-1}q^Tp = \frac{q\circ p}{q\circ q}
$$ |
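A sketch of this computation in Python; the vectors and the true $\lambda$ below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 3))                     # five hypothetical 3D vectors Q_i
P = 2.5 * Q + 0.01 * rng.standard_normal(Q.shape)   # P_i ~ lambda * Q_i, plus noise

q, p = Q.ravel(), P.ravel()   # stack all coordinates into long vectors, as above
lam = (q @ p) / (q @ q)       # (q^T q)^{-1} q^T p for a single-column design
print(lam)                    # ~2.5
```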
How do I show that u solves the homogenous equation? | In this case you are given a function $u$ and asked to verify that is indeed a solution. This means you just need to calculate $\frac {d^2u}{d{\phi}^2}+{u}$ and show that it is $0$. |
If $\phi$ is injective linear map $\mathbb{R}^r \rightarrow \mathbb{R}^s$ then $Im \phi$ is closed in $\mathbb{R}^s$ | If $V \subset \mathbb R^s$ is any subspace, consider the canonical map $\mathbb R^s \to \mathbb R^s/V$ with kernel $V$. A linear map between finite dimensional spaces is continuous, hence the kernel is closed as the pre-image of the closed subset $\{0\}$. |
Coordinatable continuous functions in terms of orthogonal systems $\{e^{in\theta}\}$ | There is the following theorem of de la Vallée-Poussin, which I quote from page 880 of J. Marshall Ash's Uniqueness of Representation by Trigonometric Series:
If $S=\sum_n t_n e^{inx}$ converges to $f$ at each $x$, and if $f$ is finite at each $x$ and if $\int_{\mathbb T} |f(x)| dx< \infty$, then $S$ is the Fourier series of $f$.
This theorem is to be found in Zygmund's Trigonometric Series, Vol I, page 326. The beginning of Ash's paper proves the special case for $f=0$, but it seems that the proof of the general case spans several pages, so I will leave it at that.
For your question: consider a continuous function $f$ constructed via the Uniform Boundedness Principle (see for example, the note of Paul Garrett linked in this question). Suppose that $S$ converged pointwise to $f$. By de la Vallée-Poussin's theorem, $S$ must be the Fourier series of $f$. But the Fourier series of $f$ does not converge at every point, which is a contradiction. |
Coding theory (combinatorics?) | Without loss of generality, we may assume the word looks like this:
$$ \overbrace{11\dots 1}^{\ell}\overbrace{00\dots 0}^{n-\ell}. $$
Assume we change $a$ of the $1$'s and $b$ of the $0$'s. The weight of the new word will be $\ell -a + b = h$. The distance between both words will be $s = a+b$. Under the given assumptions, these equations have a unique solution $(a,b)$ and the answer is then given by
$$ \binom{\ell}{a} \binom{n-\ell}{b} $$
Added: The solution is given by
$$\begin{align*} a &= \frac{\ell+s-h}{2} \\ b &= \frac{h+s-\ell}{2} \end{align*}$$
We know that $h+\ell-s$ is even; therefore $\ell+s-h$ and $h+s-\ell$ are even as well. (Parity ignores signs.) But the extra conditions that
$$ 0 \leq a \leq \ell \text{ and } 0\leq b \leq n-\ell $$
must be met as well. If these conditions are satisfied, the number of such words are indeed given by the formula above; if not then there are no such words. |
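A brute-force check of the counting formula (a Python sketch; the parameters $n,\ell,h,s$ below are made up):

```python
from math import comb

n, l = 6, 3                     # word length and weight of the fixed word
word = [1] * l + [0] * (n - l)
h, s = 4, 3                     # target weight and Hamming distance

count = 0
for bits in range(2 ** n):
    w = [(bits >> i) & 1 for i in range(n)]
    if sum(w) == h and sum(u != v for u, v in zip(w, word)) == s:
        count += 1

a, b = (l + s - h) // 2, (h + s - l) // 2
print(count, comb(l, a) * comb(n - l, b))  # 9 9
```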
Determine the centralizer in $G L_3 (\mathbb{R})$ of each matrix | Hint: For $A$ consider the diagonal matrices in $GL_{3}(\mathbb{R})$ i.e.
$\{D = (d_{ij}) : d_{ij}= 0 \text{ for } i \neq j \text{ and } d_{11} \cdot d_{22}\cdot d_{33} \neq 0\}$
and for $B$ consider the set of scalar matrices in $GL_{3}(\mathbb{R})$, i.e.
$\{ \lambda I_3 : \lambda \in \mathbb{R}\setminus \{0\}\}$.
Now, by taking elementary matrices (matrices $E_{ij}$ whose $ij$th entry is $1$ and whose remaining entries are $0$), show that these are precisely the centralizers of $A$ and $B$ respectively. |
Strategy in a game of drawing cards from a deck | It doesn't matter which approach you choose, assuming that if only one suit has been seen, you will name that suit.
Let $P_1$ be the probability that you win if two cards have been drawn and they are of different suits, and let $P_2$ be the probability that you win, if two cards have been drawn and they are both of your suit.
Without loss of generality, the first card turned up is a Club. If you name Clubs, then with probability $\frac{12}{25}$ the next card will be a Club, and with probability $\frac{13}{25}$ it will be a Heart, so by the law of total probability, your probability of winning is $$.48P_2+.52P_1$$
Now suppose instead, you elect to see a second card. With probability $.48$ it will be a Club, and you will name Clubs, giving you a probability of winning of $P_2$. With probability $.52$, it will be a Heart, and you will name one of the suits, and have probability $P_1$ of winning. By the law of total probability, your overall probability of winning is the same as before.
The simplest approach is simply to name the suit of the first card turned up.
As to the question about a full $52$-card deck, the probabilities are exactly the same, assuming that when you name a suit before any card of another suit has been turned up, your opponent's suit will be the first new suit drawn. The cards in the other two suits (Diamonds and Spades in the original example,) have no effect on the outcome, so we may simply ignore them.
There's a substantial advantage to going first in this game. I calculate the probability of a first player win at approximately $0.613272$. |
Proof: Sum of the combination of the these numbers are not equal. | Let us assume there are two different subsets which have the same sum. Now, if there are two subsets which have the same sum, it also means that there are two disjoint subsets which have the same sum. (We get the disjoint subsets by deleting the common elements.)
Neither of these sets can contain $1$ or $5$, because if they did, the sum's last digit would be $1$, $5$ or $6$ which can't be reached by the remaining three numbers.
Now, we must divide the remaining three numbers into two subsets which have the same sum. Clearly, if $100$ is in either set, that set's sum will be greater than the other's.
Now, we only have two unequal elements, $10$ and $50$, to be put into two subsets with equal sums. This would only be possible if $10=50$, which is false. |
Lebesgue Measure of $A=\left \{ (x,0) : x \in [0,1]\right \} \subset \mathbb{R}^2$ | The formula $m^*(A \times B) \leq m^*(A) m^*(B)$ is easy to prove in general, where $m^*$ is the outer measure. Applied here, $m^*\big([0,1]\times\{0\}\big) \le m^*([0,1])\, m^*(\{0\}) = 1 \cdot 0 = 0$, so the segment has Lebesgue measure $0$. |
Statements about elements in $\mathbb{C}$ | It seems right to me. For the second dot note that if $z=au+bi$, then $z^{t}=au-bi$ and since $b\in\mathbb{R}$ we have that $-b\in\mathbb{R}$ and $z^{t}\in\mathbb{C}$. Then use the first dot to calculate: $zz^{t}=(ac-bd)u+(ad+bc)i=(a^2+b^2)u+(-ab+ab)i=(a^2+b^2)u$ (I have used that $z^{t}=cu+di=au-bi$, so: $c=a, d=-b$). Try to do the others on your own, then don't hesitate to ask for help if you need.
P.S. If you are interested, it's not a coincidence that it is named $\mathbb{C}$; in fact this is a way to represent the complex numbers. https://en.wikipedia.org/wiki/Complex_number#Matrix_representation_of_complex_numbers |
Question about Strong Law of Large Numbers in Breiman | I have Breiman's proof in front of me, so I can attempt to explain it more clearly.
The assertion that $C=\hat C$ is an assertion about sequences of zeros and ones, so we can suppress the $\omega$ by focusing on just one sequence.
The inclusion $C\subset \hat C$ is trivial: if a sequence $x_n$ converges to zero as $n\to\infty$, then it also converges along the sequence $n_k:=k^2$. So the hard direction is proving that if $S_{m^2}/m^2\to\frac12$, then so does $S_n/n$.
Suppose $S_{m^2}/m^2\to\frac12$. Let $\epsilon>0$. To show $S_n/n\to\frac12$, the obvious first step is to write, using the triangle inequality,
$$
\left|\frac{S_n}n-\frac12\right|\le\left|\frac{S_n}n-\frac{S_{m^2}}{m^2}\right|+
\left|\frac{S_{m^2}}{m^2}-\frac12\right|.\tag1
$$
But the second term on the RHS of (1) tends to $0$ as $m\to\infty$. So for all large $m$, say $m\ge M$, we have
$$
\left|\frac{S_{m^2}}{m^2}-\frac12\right|<\epsilon.\tag2
$$
Now pick $n\ge M^2$. For this $n$, find an $m\ge M$ such that $m^2\le n<(m+1)^2$. Confirm that this implies that $|n-m^2|\le 2m$. By the triangle inequality, we can bound the first term on the RHS of (1):
$$
\left|\frac{S_n}n-\frac{S_{m^2}}{m^2}\right|\le
\left|\frac{S_n}n-\frac{S_n}{m^2}\right|+
\left|\frac{S_n}{m^2}-\frac{S_{m^2}}{m^2}\right|.\tag3
$$
Since $|S_n|\le n$, bound the first term on the RHS of (3) by $n\left|\frac 1n-\frac1{m^2}\right|\le\frac2m$. Bound the second term by $\frac1{m^2}|n-m^2|$, since the distance between $S_n$ and $S_{m^2}$ is at most the number of steps from time $m^2$ to time $n$. Adding these, it follows that the RHS of (3) is at most $\frac4m\le\frac4M$. Putting (2) and (3) together, we have
$$
\left|\frac{S_n}n-\frac12\right|\le\frac4M+\epsilon
$$
whenever $n\ge M^2$. This implies that (1) is less than $2\epsilon$ for all large $n$, and we're done. |
Prove that for homomorphism $\phi:G\to H$ if $G_{1}\leq G$ is abelian, then $\phi(G_{1})$ is abelian. | Yes, it is correct. I suggest that you start with two elements $x,y\in\phi(G_1)$. You want to prove that $xy=yx$. But, since, $x,y\in\phi(G_1)$, there are $a,b\in G_1$ such that $\phi(a)=x$ and $\phi(b)=y$. Then\begin{align}xy&=\phi(a)\phi(b)\\&=\phi(ab)\\&=\phi(ba)\\&=\phi(b)\phi(a)\\&=yx.\end{align} |
Please help correct my answer for find the moment generating function | The moment generating function of a random variable $X$ is defined by
$$ M_X(t) = E(e^{tX}) =
\begin{cases}
\sum_i e^{tx_i}p_X(x_i), & \text{(discrete case)} \\
\\
\int_{-\infty}^{\infty} e^{tx}f_X(x)dx, & \text{(continuous case)}
\end{cases}
$$
You have the discrete case, and your calculation is correct.
If we expand $e^{tX}$ as a formal power series and take expectations term by term,
$$M_X(t) = E(e^{tX}) = 1 + tE(X) + \frac{t^2}{2!}E(X^2)+\ldots+
\frac{t^k}{k!}E(X^k)+\ldots$$
then the $k$th moment of $X$ is given by
$$E(X^k) = M_X^{(k)}(0) \:\:\:\:\:\:k = 1, 2\ldots$$
$$M_X^{(k)}(0) = \frac{d^k}{dt^k} M_X(t) |_{t=0}$$
Hence
$E[X] = M'(0)$
$E[X^2] = M''(0)$ |
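As a sketch in SymPy with a made-up discrete distribution (the p.m.f. below is hypothetical), one can check $E[X^k] = M_X^{(k)}(0)$:

```python
import sympy as sp

t = sp.symbols('t')
# hypothetical p.m.f.: P(X=0)=1/2, P(X=1)=1/3, P(X=2)=1/6
M = sp.Rational(1, 2) + sp.Rational(1, 3) * sp.exp(t) + sp.Rational(1, 6) * sp.exp(2 * t)

EX = sp.diff(M, t).subs(t, 0)      # M'(0)  = E[X]   = 2/3
EX2 = sp.diff(M, t, 2).subs(t, 0)  # M''(0) = E[X^2] = 1
print(EX, EX2)
```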
How to find the probability of the number of data | A favorable outcome here is when we picked one person who borrowed $6$ books and one person who borrowed some other number of books. By the multiplication principle, the number of favorable outcomes is equal to the product of: [the number of ways to pick one person from the people who borrowed $6$ books] times [the number of ways to pick one person from the people who borrowed other than $6$ books]. |
Calculating the probability of winning at least $128$ dollars in a lottery St. Petersburg Paradox | To win at least 128 dollars, you need a sequence of 7 tails, so the probability is $ \frac{1}{2^7}=\frac{1}{128}$ |