title | upvoted_answer
---|---
Find all $x$ such that $\sin x = \frac{4}{5}$ and $\cos x = \frac{3}{5}$. | Your original (set of) equations implies $\tan x=\frac{4}{3}$ but not the other way around. When you solve $\tan x=\frac{4}{3}$ you get the solutions of your original equation and the solutions $\sin x=\frac{-4}{5}$, $\cos x=\frac{-3}{5}$. |
Given $\{A_{n}\}_{n=1}^{\infty}=\{\{0\},\{0,1\},...,\{0,1,2,…\}\}$, is $\bigcap_{j \geq 1} A_j$ equal to $\{0\}$ or $\{\{0\}\}$? | We have $$\bigcap _{j\ge 1} A_j =\{0\}$$ because the only member common to all $A_j$ is $0$.
Note that $\{0\}$ is not a member of any $A_j$, so it is not a member of the intersection. |
Numbers that are the sum of the squares of their prime factors | Giorgos Kalogeropoulos has found 3 such numbers, each having more than 100 digits.
You can find these numbers if you follow the links in the comments of OEIS A339062 &
A338093
or here https://www.primepuzzles.net/puzzles/puzz_1019.htm
So, such numbers exist! It is an open question if there are infinitely many of them... |
question regarding Fourier restriction estimates | As user90090 suggested, consider a decomposition $\hat\psi=\sum_{k=0}^{\infty}\psi_k$ in which $\psi_k(x)$, $k\geq1$, has support $k<|x|<k+1$. Notice that this decomposition is slightly different from the original dyadic decomposition. There are mainly two reasons, at different levels, for this. First, by the uncertainty principle, if $f$ lives in a ball $B(0,R)$, then its Fourier transform should "live" in frequency bands of width $1/R$, so it is a little more natural to break up the frequency variable into $1/R$ annular bands. Second, at the technical level, we need to repeatedly use the following estimate
$$ ||f||_{L^q(N_{1/R}(S))}\lesssim R^{\alpha-1/q}||f||_{L^p(B(0,R))}\hspace{2cm} (2)$$
Simply speaking, what we need to do is break up the $(k/R, (k+1)/R)$ annular region into a collection of small $1/R$ balls with finite overlap. Notice that there are at most $O(k^n)$ such balls, so their number can be controlled by the $k^{-N}$ decay coming from the fact that $\psi$ is rapidly decaying. Then we shift these small balls back into $N_{1/R}(S)$ with an exponential multiple of $f$, which does not affect its support. So now we can safely apply $(2)$ to get what we want. |
Setting up this polar equation? | Area = $$\frac{1}{2}\int_0^{2\pi}\big( 4\sin^2 \theta - \sin^2 \theta\big)\, d\theta = \frac{3}{2} \int_0^{2\pi} \sin^2 \theta\, d\theta$$.
Use the trig identity $\sin^2 \theta = \frac{1}{2}(1-\cos 2\theta)$ to help evaluate the integral. It helps to graph the two polar equations; this may give you a better view of your bounds of integration. |
Basic encoding with math formula | You could write the encryption as a matrix product. The matrix has ones on the main diagonal, and one diagonal either side. To decrypt it, multiply by the inverse of the matrix.
$$\left[\begin{array}{ccccc}1&1&0&0&0\\1&1&1&0&0\\0&1&1&1&0\\0&0&1&1&1\\0&0&0&1&1\end{array}\right]\left[\begin{array}{c}a\\b\\c\\d\\e\end{array}\right]=\left[\begin{array}{c}2\\2\\1\\1\\1\end{array}\right]$$
EDIT: This matrix is not invertible, so there is no unique solution. You could add any multiple of $[1,-1,0,1,-1]$ to one solution to get another solution.
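A quick numeric check of this in base R (a minimal sketch):
M = rbind(c(1,1,0,0,0),
          c(1,1,1,0,0),
          c(0,1,1,1,0),
          c(0,0,1,1,1),
          c(0,0,0,1,1))
det(M)                # (numerically) 0: M is singular
M %*% c(1,-1,0,1,-1)  # the zero vector, so this is a null vector of M |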
Understanding a contradiction regarding least upper bounds | $f(x)\geq0$ if $x\geq c$ because $c$ is an upper bound.
$f(x)\geq0$ if $c-\delta<x<c$ because $f(c)>0$.
So $f(x)\geq0$ if $c-\delta<x$.
So $c-\delta$ is also an upper bound for $S$.
$c-\delta<c$, so $c$ is not the least upper bound. |
ideals with fixed norm in a Dedekind domain | One has the following general result:
Theorem (Gilmer--Heinzer): Let $R$ be a Noetherian ring. Then, for any natural number $n$, there exist only
finitely many ideals $I$ of $R$ such that $|R/I|\leqslant n$.
For a proof one can see [1]. For a more leisurely discussion see [2, Pg. 15].
[1] Gilmer, R. and Heinzer, W., 1992. Products of commutative rings and zero-dimensionality. Transactions of the American Mathematical Society, 331(2), pp.663-680.
[2] Anderson, D.F. and Dobbs, D. eds., 1995. Zero-dimensional commutative rings (Vol. 171). CRC Press. |
Equivalence Relation with dividing x and y integers | The relation $x \sim y \iff 5 | (x-y)$ is an equivalence relation since it is:
Reflexive: $x \sim x$ since $5 | (x -x)$, that is $ 5 | 0$, which is trivially true.
Symmetric: If $x \sim y$ then $(x-y) = 5m$ for some integer $m$. Then $y \sim x$ is true since $5 | (y-x)$ because $y-x = -5m$ for some integer $m$.
Transitive: If we have that $x \sim y$ and $y \sim z$ then we can write those as $x-y = 5m$ and $y-z=5n$ for integers $m$ and $n$. Then $x - 5n - z = 5m$ and hence $x-z = 5m + 5n$ so $5 | (x-z)$ and hence $x \sim z$. |
If $X \sim N(0,1)$ and $Y \sim N(0,1)$ are two random variables that may or may not be independent, what is $E(XY)$? | Since $\mathbb{E}[X]=\mathbb{E}[Y]=0$, $\mathbb{E}[XY]$ is the covariance of $X$ and $Y$, which in this case could be any real number in $[-1,1]$. You can't say anything more based on the information you've given. |
Given two unit vectors, find a vector perpendicular with additional constraint | Given two unit vectors $\hat{u}$ and $\hat{v}$, we can construct a vector perpendicular to both by their cross product:
$$\vec{n}=\hat{u}\times\hat{v}.$$
To obtain a perpendicular vector of unit length, just normalize $\vec{n}$:
$$\hat{n}=\frac{\vec{n}}{\|\vec{n}\|}=\frac{\hat{u}\times\hat{v}}{\|\hat{u}\times\hat{v}\|}.$$
Normalizing $\vec{n}$ requires the computation of $\|\hat{u}\times\hat{v}\|$. Since the norm of a vector is defined as the square root of the dot product of the vector with itself, it is impossible to normalize a vector without using square roots.
However, there is a way to look like you're avoiding square roots. If you can find the angle $\theta$ between the unit vectors $\hat{u}$ and $\hat{v}$ geometrically, you can employ the theorem that gives the norm of their cross product as:
$$\|\hat{u}\times\hat{v}\|=\sin{\theta}\\
\implies \hat{n}=\csc{\theta}\,(\hat{u}\times\hat{v}).$$
Of course, this method doesn't truly avoid square roots, since $\sin\theta$ is defined as a square root:
$$|\sin\theta| = \sqrt{1-\cos^2{\theta}}=\sqrt{1-(\hat{u}\cdot\hat{v})^2}.$$ |
Expressing the trace of the adjugate matrix in terms of eigenvalues | If $A$ is invertible, $A^* = \det(A) A^{-1}$, so $$\text{trace}(A^*) = \det(A) \text{trace}(A^{-1}) = \left(\prod_j \lambda_j \right) \sum_j 1/\lambda_j
= \sum_j \prod_{i \ne j} \lambda_i$$
(where the eigenvalues of $A$ are $\lambda_j$, counted by algebraic multiplicity).
By continuity, $\text{trace}(A^*) = \sum_j \prod_{i\ne j} \lambda_i$ for all square matrices.
Thus for $n=4$ it's $\lambda_2 \lambda_3 \lambda_4 + \lambda_1 \lambda_3 \lambda_4 + \lambda_1 \lambda_2 \lambda_4 + \lambda_1 \lambda_2 \lambda_3$.
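As a quick numerical sanity check of this identity in R (a sketch with a random $4\times4$ matrix; the eigenvalues may be complex, so we take the real part of the sum at the end):
set.seed(1)
A = matrix(rnorm(16), 4, 4)
lam = eigen(A)$values
sum(diag(det(A) * solve(A)))                     # trace of the adjugate
Re(sum(sapply(1:4, function(j) prod(lam[-j]))))  # agrees up to rounding |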
Area between $r=4\sin(\theta)$ and $r=2$ | You are just intersecting two circles with the same radius, going through the center of each other.
The area of a circle sector with radius $R=2$ and amplitude $60^\circ$ is $\frac{1}{6}\pi R^2=\frac{2\pi}{3}$, while the area of an equilateral triangle with side length $2$ is given by $\sqrt{3}$, hence the area of the circle segment by the difference of these objects is $\frac{2\pi}{3}-\sqrt{3}$.
These results are enough to solve your question without integrals:
$$\color{red}{\mathcal{A}}=2\sqrt{3}+4\left(\frac{2\pi}{3}-\sqrt{3}\right)=\color{red}{\frac{8\pi}{3}-2\sqrt{3}}.$$
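As a sanity check, a quick Monte Carlo estimate of the overlap area in R (a sketch; the box $[-2,2]\times[-1,3]$ of area $16$ covers the lens):
set.seed(1)
x = runif(1e6, -2, 2); y = runif(1e6, -1, 3)
16 * mean(x^2 + y^2 <= 4 & x^2 + (y-2)^2 <= 4)  # about 4.913
8*pi/3 - 2*sqrt(3)                              # 4.91348... |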
dyadic rational points of the unit sphere in $D$ dimensions | This is really about finding integer solutions of
$$x_1^2+\cdots+x_n^2=4^m.\tag1$$
This corresponds to the dyadic point $(x_1/2^m,\ldots,x_n/2^m)$.
To avoid trivialities and degeneracies, stick to solutions with $m\ge1$
and not all $x_i$ even, so at least one is odd. Also let's avoid cases
where some $x_i=0$.
There are no such solutions when $n\in\{1,2,3\}$: with some $x_i$ odd, $(1)$ is impossible
modulo $4$. For $n=4$ and $m=1$ we have $(\pm1,\pm1,\pm1,\pm1)$,
but when $m\ge2$ then $(1)$ is impossible modulo $8$.
Non-trivial solutions start multiplying for $n\ge5$. There are no
longer obstructions modulo powers of $2$. We have $(3,2,1,1,1)$
for $n=5$ etc. I'm not sure about finiteness any more... |
Geometric Problem. Find an angle. | Let, $AB=AC=a$ in isosceles $\triangle ABC$
Using Sine rule in isosceles $\triangle ABC$ $$\frac{\sin \angle BAC}{BC}=\frac{\sin \angle ABC}{AC}$$ $$\frac{\sin 36^\circ}{BC}=\frac{\sin 72^\circ}{a}\implies BC=2a\sin 18^\circ$$ Similarly applying Sine rule in $\triangle BCD$
$$\frac{\sin \angle DCB}{BD}=\frac{\sin \angle BDC}{BC}$$ $$\frac{\sin 54^\circ}{BD}=\frac{\sin 84^\circ}{2a\sin 18^\circ}$$ $$\implies BD=\frac{2a\sin 18^\circ\sin 54^\circ}{\cos 6^\circ}$$
Let, $\angle BAD=\alpha$
Applying Sine rule in $\triangle ABD$
$$\frac{\sin \angle BAD}{BD}=\frac{\sin \angle ADB}{AB}$$ $$\frac{\sin \alpha}{\frac{2a\sin 18^\circ\sin 54^\circ}{\cos 6^\circ}}=\frac{\sin (180^\circ-(\alpha+30^\circ))}{a}$$ $$\cos 6^\circ\sin \alpha=2\sin 18^\circ\sin 54^\circ\sin (\alpha+30^\circ)$$ $$\frac{\sin(\alpha+30^\circ)}{\sin \alpha}=\frac{\cos 6^\circ}{2\sin 18^\circ\sin 54^\circ}$$ $$\frac{\sin \alpha\cos 30^\circ+\cos \alpha\sin 30^\circ}{\sin \alpha}=\frac{\cos 6^\circ}{2\sin 18^\circ\sin 54^\circ}$$
$$\frac{\sqrt 3}{2}+\frac{1}{2}\cot \alpha=\frac{\cos 6^\circ}{2\sin 18^\circ\sin 54^\circ}$$
$$\frac{1}{2}\cot \alpha=\frac{\cos 6^\circ}{2\sin 18^\circ\sin 54^\circ}-\frac{\sqrt 3}{2}$$
$$\cot \alpha=\frac{\cos 6^\circ-\sqrt 3\sin 18^\circ\sin 54^\circ}{\sin 18^\circ\sin 54^\circ}$$
$$\tan \alpha=\frac{\sin 18^\circ\sin 54^\circ}{\cos 6^\circ-\sqrt 3\sin 18^\circ\sin 54^\circ}$$
$$\bbox[5px, border:2px solid #C0A000]{\color{red}{\alpha=\tan^{-1}\left(\frac{\sin 18^\circ\sin 54^\circ}{\cos 6^\circ-\sqrt 3\sin 18^\circ\sin 54^\circ}\right)=24^\circ}}$$
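A quick numerical check of the boxed value in R (a sketch, converting degrees to radians):
d = pi/180
atan(sin(18*d)*sin(54*d)/(cos(6*d) - sqrt(3)*sin(18*d)*sin(54*d)))/d  # 24 |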
How do I determine the radius of convergence of the power series? | Since $z=1+i$ is the only pole, the radius of convergence is simply the distance from $z=0$ to $z=1+i$, which is $\sqrt{2}$. |
Can the Fourier transform witness complete normality of the dual group? | Edit: The answer is no in general, since $\{f>0\}$ must be an $F_\sigma$. If $S$ is an $F_\sigma$ the answer is yes.
Note In fact there's no significant use of the fact that $G$ is compact below; if $G$ is any LCA group and $\Gamma=\hat G$ then (of course writing $L^1(\Gamma)$ in place of $\ell_1$) the answer is yes if and only if $S$ is a countable union of compact sets.
Hint to maybe get started: In this context the basic way one gets $f\in\ell_1$ such that $\hat f$ does something is to note that Plancherel shows that $$L^2*L^2=\widehat{L^1}.$$
More explicitly:
Lemma Suppose $V$ is a neighborhood of the origin in $G$. There exists $f\in\ell^1(\Gamma)$ such that $\hat f\ge0$, $\hat f(0)>0$, and $\hat f$ vanishes on $G\setminus V$.
Proof: Let $W$ be an open set with $0\in W$ and $W+W\subset V$. Let $$\phi_V=\chi_W*\chi_W.$$ Then $\phi_V\ge0$, $\phi_V(0)>0$, and $\phi_V$ vanishes on $G\setminus V$.
And $\hat\phi_V=(\hat\chi_W)^2$, so $$||\hat\phi_V||_1\le||\chi_W||_2^2=m(W)<\infty.$$Qed.
It's easy to use the lemma to prove this:
Theorem. If $U\subset G$ is open and $K\subset U$ is compact there exists $f\in\ell_1(\Gamma)$ with $\hat f\ge0$, $\hat f>0$ on $K$, and $\hat f=0$ on $G\setminus U$.
Hint: Say the function obtained in the lemma is $f_V$. Say $S(V)$ is the set where $f_V>0$. You can concoct things so $x_j+V_j\subset U$, $K\subset\bigcup_j(x_j+S(V_j))$.
(And then $f=\sum c_jf_j$ gives $\{f>0\}=U$ if $U$ is an $F_\sigma$...) |
True or false? If $\mathbb{E}(Y^2)=\infty$, then $Y^21_{|Y|\leqslant n}\overset{as}{\rightarrow}Y^2.$ | The example $Y(\omega)=\infty$ for all $\omega$ shows that this is not true. In this case $Y^{2}1_{|Y| \leq n}=0$ for all $n$. |
Real Gaussian integral with complex poles | Dawson's function would be for a real pole. In your case, this should give rise to Faddeeva functions, or more precisely their real part or Voigt functions, if the $a_j$'s come in opposite pairs. The integral value is $2 \pi \sum_{a_j>0} \left[\frac{1}{a_j} \frac{1}{\prod_{k=1,k\neq j}^N{(1-\frac{a_k}{a_j})}} V\left(0,\frac{1}{\sqrt{2}a_j}\right)\right]$ when all $a_j$'s are distinct and come in opposite pairs as explained below.
Strategy
Partial fraction decomposition leads to the functions' integral representation. If all $a_j$'s are distinct, that appears directly, while if some identical $a_j$'s correspond to poles with higher multiplicity, integration by parts, possibly applied recursively, will progressively reduce the order of the polynomials at the denominator of each of the fractions.
(In the latter case, a further trick to use, when integrating by parts, is that $\frac{d e^{-t^2}}{dt} = - 2 t e^{-t^2} = 2 (b_j - t) e^{-t^2} - 2 b_j e^{-t^2}$ where $b_j = -i/(\sqrt{2}a_j)$ is the pole and $t=T/{\sqrt{2}}$ .)
This thus leaves to evaluate
$\int_{-\infty}^{+\infty}\frac{e^{-{t^2}}}{(1-i \sqrt{2} a_j t)}dt=\frac{1}{i \sqrt{2} a_j } f(-i/(\sqrt{2}a_j))$
for each of the $a_j$, with $f(z)$ defined as
$f(z)=\int_{-\infty}^{+\infty}\frac{e^{-t^2}}{(z- t)}dt$.
This function $f(z)$ is seen in different fields, with various names and in various forms. Here, the most important points are that it is analytic only within each of the two half-planes $Im(z)>0$ and $Im(z)<0$, and that it only exists in a Cauchy principal value sense, as the Dawson function, if $z$ is real (which is not the case, here). There is discontinuity between the values in the two half-spaces and, also, between those and the ('principal') values on the real axis.
For $Im(z)>0$, $f(z)=-i \pi w(z)$, where $w(z)=e^{-z^2}erfc(-i z)$ is the Faddeeva function and $erfc(y)=\frac{2}{\sqrt{\pi}}\int_{y}^{\infty} e^{-t^2} dt$ is the complementary
error function. While Dawson function is also defined for complex arguments/poles, it wouldn't give the correct values in the current case.
If the poles have negative imaginary parts, you may want to use the symmetry property $f({z})=\overline{f}(\overline{z})$ to come back to that half-plane, otherwise the Faddeeva function differs from the desired integral by an exponential term (as it is continuous in the whole complex plane, contrary to $f(.)$).
Summary and steps:
Change of variable $T = \sqrt{2} t$, $dT = \sqrt{2} dt$
$I = \frac{\sqrt{2}}{\prod_{j=1}^N(i \sqrt{2} a_j)}\int_{-\infty}^{\infty} \frac{e^{-t^2}}{\prod_{j=1}^N{(b_j - t )}} dt$ with $b_j := -i/(\sqrt{2}a_j)$
Partial fraction decomposition (PFD)
$I = \frac{\sqrt{2}}{\prod_{j=1}^N(i \sqrt{2} a_j)}\sum_{j=1}^N\left[c_j \int_{-\infty}^{\infty} \frac{e^{-t^2}}{{(b_j - t )}} dt +d_j\right]$
where $c_j$ and $d_j$ come from the PFD (and the indicated tricks of integrating by parts, if some $a_j$'s are identical). If all $a_j$'s are distinct, $d_j=0$ and $c_j = (-1)^{N+1}/(\prod_{k=1;k\neq j}^{N}(b_j-b_k))$.
Evaluation of the Faddeeva functions
$I = \frac{-i \sqrt{2} \pi}{\prod_{j=1}^N(i \sqrt{2} a_j)}\left\{\sum_{a_j<0} \left[ c_j w(b_j) + \frac{i}{\pi}d_j\right] +\sum_{a_j>0} \left[ - c_j \overline{w}(- b_j) + \frac{i}{\pi} d_j\right] \right\}$
where $\sum_{a_j<0}[.]$ indicates the sum of terms in brackets restricted to all indices, '$j$', such that $a_j<0$.
If all $a_j$ are distinct,
$I=I_{dis} = \frac{-i \sqrt{2} \pi}{\prod_{j=1}^N(i \sqrt{2} a_j)}\left\{\sum_{a_j<0} [ c_j w(b_j) ] +\sum_{a_j>0} [ - c_j \overline{w}(- b_j)] \right\}$
or, after simplification (it might be worth double-checking the signs, etc),
$I=I_{dis} = \pi \left\{ \sum_{a_j>0} \left[\frac{\overline{w}(- b_j)}{a_j} \frac{1}{\prod_{k=1,k\neq j}^N{(1-\frac{a_k}{a_j})}}\right] - \sum_{a_j<0} \left[\frac{w(b_j)}{a_j} \frac{1}{\prod_{k=1,k\neq j}^N{(1-\frac{a_k}{a_j})}} \right]\right\}$
Simplification for opposite pairs
If the $a_j$'s come in opposite pairs, sums as the two in bracket simplify and only the real part of the Faddeeva function, i.e. the Voigt function $V(x,y)=\Re\{w(x+ i y)\}$, appears. Still for the case of distinct poles, that would give
$I=I_{dis} = 2 \pi \sum_{a_j>0} \left[\frac{1}{a_j} \frac{1}{\prod_{k=1,k\neq j}^N{(1-\frac{a_k}{a_j})}} V\left(0,\frac{1}{\sqrt{2}a_j}\right)\right]$.
References:
Gautschi, 1970, 'Efficient computation of the complex error function', for definitions and properties (pg 187-188)
Lecomte, 2013, 'Exact statistics of systems with uncertainties: An analytical
theory of rank-one stochastic dynamic systems', for further discussion and references (pg 2757) and the tricks of integrating by parts for multiple poles (Eqs (44) and (55-57)).
Notes:
'Faddeeva' function is also called 'Kramp' function.
Although available in numerical packages, it is recommended to double-check the definition, domain of application, and precision of routines to evaluate them. |
Find the singularities of $f(z) =\frac{1}{(2\sin z - 1)^2}$. | This $f$ has a singularity wherever \begin{align*}
2 \sin z - 1 &= 0 \text{,} \\
\sin z &= 1/2 \text{.}
\end{align*}
This has infinitely many solutions in the reals,
\begin{align*}
z \in &\{ \arcsin(1/2) + 2 \pi k : k \in \Bbb{Z} \} \cup \\
\quad &\{\pi - \arcsin(1/2) + 2 \pi k : k \in \Bbb{Z}\} \\
= & \{ \pi/6 + 2 \pi k : k \in \Bbb{Z} \} \cup \\
\quad &\{5\pi/6 + 2 \pi k : k \in \Bbb{Z}\} \text{.}
\end{align*}
It turns out these are the only two families of solutions, but you have done nothing to show that there are not more solutions. To do so, start with, for $x,y\in\Bbb{R}$,
$$ \sin(x+\mathrm{i} \, y) = \sin(x) \cosh(y) + \mathrm{i} \cos(x) \sinh(y) \text{.} $$
Then talk about the zeroes of the real-valued cosine and hyperbolic sine (since $1/2$ has imaginary part $0$), which will force $y=0$, so that $\cosh y = 1$ and the real solutions are the only solutions.
If you have shown that the limits are infinity approaching each of these infinitely many singularities, then you have shown that they are not removable. I'm not convinced you have understood the definition/nature of essential singularities. |
Are the eigenvalues of the sum of two positive definite matrices increased? | For symmetric matrices you have the Courant-Fischer min-max Theorem:
$$\lambda_k(A) = \min \{ \max \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=k \}$$
with
$$R_A(x) = \frac{(Ax, x)}{(x,x)}.$$
Now, your assertion follows easily, since $R_{A+B}(x) > \max\{R_A(x), R_B(x)\}$.
This theorem is also helpful to prove other nice properties of the eigenvalues of symmetric matrices. For example:
\begin{equation*}
\lambda_k(A) + \lambda_1(B) \le \lambda_k(A+B) \le \lambda_k(A) + \lambda_n(B)
\end{equation*}
This shows the continuous dependence of the eigenvalues on the entries of the matrix, and also your assertion. |
What short exact sequence induces the Bockstein for $H^*(G,k)$? | No such short exact sequence exists. Note that your "splicing" operation coincides with the cup product on $H^*(G,k)$, so if such a short exact sequence existed, then there would be a class $b\in H^1(G,k)$ such that $\delta'(x)=bx$ for all $x\in H^*(G,k)$. There usually does not exist any such class. For instance, let $G$ be cyclic of order $p$; then if $p$ is odd, $H^*(G,k)=k[x,y]/(y^2)$ with $|y|=1$ and $|x|=2$, and $\delta'$ is given by $\delta'(x^n)=0$ and $\delta'(x^ny)=x^{n+1}$. Clearly there is no element $b\in H^1(G,k)$ such that $\delta'$ is always multiplication by $b$. For $p=2$, we instead have $H^*(G,k)=k[y]$ with $|y|=1$ and $\delta'(y^{2n})=0$, $\delta'(y^{2n+1})=y^{2n+2}$. Again, $\delta'$ does not coincide with multiplication by any class in $H^1$.
(Note that when you replace $k$ by $\mathbb{F}_p$, $\delta$ is not induced by splicing with a short exact sequence of $kG$-modules, because $\mathbb{Z}/p^2\mathbb{Z}$ is not even a $k$-module! So there isn't really any particular reason to expect $\delta$ or $\delta'$ to come from splicing with a short exact sequence.) |
How do I invert this 3×3 matrix? | Note that $A$ is diagonal. Convince yourself that merely taking reciprocals of the diagonal entries yields the inverse:
$$A^{-1}=\begin{bmatrix}
\frac12&0&0\\
0&\frac15&0\\
0&0&1\end{bmatrix}$$ |
Prime number that are recursively made up of other prime number -- what is this called | They're called truncatable primes.
For a list of them on OEIS, see A024785, A020994, A055521 |
Find a sequence of real sequences which converges in one space but not in another (related space) | Make the $n$-th term of $L$ the sequence whose first $n$ terms are $\left(\frac1n\right)^{1/3}$ and whose remaining terms are all $0$. The only possible limit is the all-zero sequence, but each term of $L$ is at distance $1$ from it in $\mathcal{G}^3$. In $\mathcal{G}^4$, however, the distance from the $n$-th sequence to the all-zero sequence is
$$\left(n\left(\frac1n\right)^{4/3}\right)^{1/4}=\frac{n^{1/4}}{n^{1/3}}=\frac1{n^{1/12}}\;.$$
There are two ideas in play here. One is to use sequences that are eventually $0$ if possible, since they’re especially easy to work with. The other is that for positive reals less than $1$, cube roots are bigger than fourth roots. If I can find a sequence of sequences whose norms in $\mathcal{G}^3$ just barely fail to converge to $0$, in $\mathcal{G}^4$ they’ve a good chance of doing so. |
Cauchy-Riemann equations for $z=x+iy$ and $f(z)=R(x,y)e^{i\theta(x,y)}$ | It is actually a good exercise to deduce all the variants from the classical CR equations:
$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$, $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$.
So for example $u = R\cos(\theta)$, so $\frac{\partial u}{\partial x} = \frac{\partial R}{\partial x}\cos(\theta) - R\sin(\theta)\frac{\partial \theta}{\partial x}$ and all the others, so using the classical CR and bringing terms together you get:
$(\frac{\partial R}{\partial x} - R\frac{\partial \theta}{\partial y})\cos(\theta)$ = $(\frac{\partial R}{\partial y} + R\frac{\partial \theta}{\partial x})\sin(\theta)$,
$(\frac{\partial R}{\partial x} - R\frac{\partial \theta}{\partial y})\sin(\theta)$ = -$(\frac{\partial R}{\partial y} + R\frac{\partial \theta}{\partial x})\cos(\theta)$,
so you conclude with indeed the relations $\frac{\partial R}{\partial x} = R\frac{\partial \theta}{\partial y}, \frac{\partial R}{\partial y} = -R\frac{\partial \theta}{\partial x}$ |
Set of all real values of $a$ for which the equation $(a-4)\sec^4x+(a-3)\sec^2x+1=0, (a\ne4)$ has real solutions | I think you overdid it. It is quite neat:
$$\sec^2x=\frac{-(a-3)\pm\sqrt{(a-3)^2-4(a-4)}}{2(a-4)}=-1,\ \frac{1}{4-a}$$
Obviously $\sec^2x\geq 1$, so the root $-1$ is impossible and we need
$$\frac{1}{4-a}\geq 1$$
which gives $\frac{a-3}{a-4}\leq 0$
Thus $a\in[3,4)$
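A quick check of the two roots in R (a sketch for the sample value $a=3.5$; polyroot takes coefficients in increasing degree):
a = 3.5
polyroot(c(1, a-3, a-4))  # roots -1 and 1/(4-a) = 2 |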
Word Problems (sats practice booklet) | For every 3 boys there are 2 girls. So for every 6 boys there are 4 girls. 42 boys make $14$ groups of $3$ boys, each matched with $2$ girls, therefore there are $14\cdot2=28$ girls, as user1709828 said. |
Relation between the Divisor Class Groups and Chow Groups | If $X$ is irreducible, $A^1(X)$ is actually the divisor class group of $X$. This is Proposition 1.9 in Fulton's book on intersection theory. If you compare both definitions it's more or less clear that in both cases this is the quotient of the free abelian group generated by the divisors by the subgroup generated by $f^{-1}(0) - f^{-1}(\infty)$ for $f \in k(X)$.
In fact, the motivation of the Chow ring is to generalise the construction of the divisor class group to cycles of higher codimension. |
Australian Maths Competition: An Expected-Value Problem | Say that my probability to win is $p$, and I want to find the value of $p$.
If I get a head on the first flip ($0.5$ chance), my friend can win with probability $p$, so I win with probability $(1-p)$; this contributes $\frac{1-p}{2}$.
If I get a tail on the first flip ($0.5$ chance), then my friend and I keep getting heads until eventually he gets a tail; the game resets to the start and I win with probability $p$, contributing $\sum_{i=1}^{\infty}{\frac{p}{4^{i}}}$.
$$
\begin{align}
p&=\frac{1-p}{2}+\sum_{i=1}^{\infty}{\frac{p}{4^{i}}}\\
&=\frac{1-p}{2}+\frac{p}{3}
\end{align}
$$
Solve for $p$ and you will get $p=\frac{6}{14}=\frac{3}{7}$ |
solution for ODE problem | Interpretation 1: Perhaps we can write your problem as
$$ \frac{dv}{dt} - \frac{s}{|v|}v=a $$
Provided $s$ is in fact a scalar. Take the dot-product with $v$ to obtain:
$$ v \cdot \frac{dv}{dt} - \frac{s}{|v|}v\cdot v= v \cdot a $$
Hence,
$$ \frac{1}{2}\frac{d}{dt} (v \cdot v )-s|v| = v \cdot a $$
One silly solution is $v(t)=v_o$ where $v_o$ is taken to fit the condition $v_o \cdot a=-s|v_o|$. I assume $s,a$ are given constants.
Interpretation 2: Another solution, suppose we seek a constant speed solution then $|v|= s_o$. We face
$$ \frac{dv}{dt} - \frac{s}{s_o}v=a $$
Suppose $s$ is a scalar function of $t$. We can use the obvious generalization of the usual integrating factor technique (note $v$ is a vector in contrast to the usual context where the typical DEqns student faces $\frac{dy}{dt}+py=q$). Construct $\mu = exp( -\int \frac{s}{s_o} dt)$. Multiply by this integrating factor,
$$ \mu\frac{dv}{dt} - \frac{s}{s_o}\mu v= \mu a $$
By the product rule for a scalar function multiplying a vector,
$$ \frac{d}{dt}\biggl[ \mu v \biggr] = \mu a $$
Integrating, we reduce the problem to quadrature:
$$ v = \frac{1}{\mu} \int \mu a \, dt $$
where $\mu = exp( -\int \frac{s}{s_o} dt)$ and we choose constants of integration such that $|v|=s_o$ (if that's even possible...) |
is the largest number $X_n$ shown up to the nth roll a Markov chain? | The Markov chain in question has the following states:
$$\{1,2,3,4,5,6\}.$$
Let $p_{i,j}^n$ denote the probability that the chain goes to state $j$ assuming that it is in state $i$ before the $(n+1)^{th}$ roll:
$$p_{i,j}^n=P(X_{n+1}=j|X_n=i).$$
This conditional probability is $0$ if $i>j$ since the state changes only if the next roll results in a larger number than the maximum achieved so far.
$i=j$ means that the chain remains in state $i$. This event takes place if the result of the next roll is less than or equal to $i$. There are $i=\#\{1,2,...,i\}$ such possibilities, each of probability $\frac16$. This is why
$$p_{i,j}^n=P(X_{n+1}=j|X_n=i)=\frac i6, \text{ if } i=j.$$
There is only one possibility of probabilty $\frac16$ to get to the state $j>i$ from state $i$, namely the result of the roll has to be exactly $j$. This is why
$$p_{i,j}^n=P(X_{n+1}=j|X_n=i)=\frac 16, \text{ if } i<j.$$
I am sorry but I don't understand what $p_{s,i}(n)$ is. |
Intermediate Value Theorem guarantee | Since $f(-3)=-2<0<3=f(6)$, we can guarantee that the function has a zero in the interval $[-3,6]$. We cannot conclude it has only one, though (there may be many zeros).
EDIT: As has already been pointed out elsewhere, the IVT guarantees the existence of at least one $x\in[-3,6]$ such that $f(x)=c$ for any $c\in[-2,3]$. Note that the fact that there is a zero may be important (for example, you couldn't define a rational function over this domain with this particular function in the denominator), or you may be more interested in the fact that it attains the value $y=1$ for some $x\in(-3,6)$. I hope this helps make the solution a little bit more clear. |
sample variance divided by variance is Chi Square | Simulation in R: Population mean estimated by sample mean.
set.seed(2021)
n = 5; mu = 100; sg = 15
q4 = replicate(n^5, 4*var(rnorm(n,mu,sg))/sg^2)
mean(q4)
[1] 4.000032 # aprx mean of CHISQ(4) = 4.
In the figure below the density function of $\mathsf{Chisq}(\nu=4)$ fits the simulated values, while the
density function of $\mathsf{Chisq}(\nu=5)$ does not.
hdr = "Mean estimated: CHISQ(4)"
hist(q4, prob=T, br=20, ylim=c(0,.25), col="skyblue2", main=hdr)
curve(dchisq(x,4), add=T, lwd=2)
curve(dchisq(x,5), add=T, lwd=2, col="red", lty="dotted")
Population mean known:
set.seed(320)
n = 5; mu = 100; sg = 15
q5 = replicate(n^5, sum((rnorm(n,mu,sg)-mu)^2)/sg^2)
mean(q5)
[1] 5.109602 # aprx mean of CHISQ(5) = 5
hdr = "Mean known: CHISQ(5)"
hist(q5, prob=T, br=20, ylim=c(0,.20), col="skyblue2", main=hdr)
curve(dchisq(x,5), add=T, lwd=2, col="red", lty="dotted") |
Maximize $(h_1i_1+h_2i_2...h_ni_n)*(p_1i_1+p_2i_2...p_ni_n)$ subject to $c_1i_1+c_2i_2...c_ni_n \le B $ | The problem can be summarized as $\max\{ x^TQx : a^Tx \leq b\}$, where $Q$ is not necessarily negative semidefinite. An optimization problem with a quadratic objective and just one linear constraint is easy, since it satisfies strong duality. That can be proven via the S-lemma.
For more information, see these lecture notes or Appendix B.1 of the (free) book Convex Optimization by Boyd and Vandenberghe.
Update: As a computer scientist you may be interested in multi-objective swarm optimization (or variants thereof). Your objectives are maximizing $h_1i_1+h_2i_2...h_ni_n$ and $p_1i_1+p_2i_2...p_ni_n$. Among the Pareto optimal solutions, you multiply both quantities and select the solution with the highest outcome. |
find the equation to the circle circumscribing the quadrilateral formed by the straight lines | First it would be wise to draw the lines, just to get a visualization. Note that because none of the lines are parallel, they'll all intersect each other, so when you draw the lines you can determine the 4 vertices of the quadrilateral. Now once you've done it, find the coordinates of those points. Let their coordinates be: $(x_i,y_i); i = \overline{1,4}$
All these points are vertices, so they all lie on the circumcircle. We know that every circle is defined by $3$ points and we know that the equation of the circle is:
$$(x-a)^2 + (y-b)^2 = r^2$$
Where $(a,b)$ are the coordinates of the circle's center. Now because all vertices lie on the circle, just substitute and solve this system of four equations with three variables:
$$
\left\{\begin{aligned}
&(x_1-a)^2 + (y_1-b)^2 = r^2\\
&(x_2-a)^2 + (y_2-b)^2 = r^2\\
&(x_3-a)^2 + (y_3-b)^2 = r^2\\
&(x_4-a)^2 + (y_4-b)^2 = r^2
\end{aligned}
\right.$$
And the rest should be easy. |
Correlation between probability of events | Suppose there are two events $A$ and $B$ and that $\mathsf P(A\mid A\cup B)~\mathsf P(B\mid A\cup B) ~=~ \mathsf P(A\cap B \mid A \cup B)$. Then I am asked to find if $A$ and $B$ are independent, positively or negatively correlated.
What you have is conditional independence given the union of events. (Think about what that says†.)
My intuition is that $A$ and $B$ are independent. If they are, $\mathsf P(A\mid A\cap B)~\mathsf P(B\mid A\cup B)$ would yield only $\mathsf P(A)~\mathsf P(B)$, and the right side would be equal. But I am not sure if this is a rigorous proof.
Independence requires $\mathsf P(A\cap B)=\mathsf P(A)~\mathsf P(B)$ That is all; nothing else.
Total Probability says: $$\begin{align}\mathsf P(A\cap B)~=& ~\mathsf P(A\cap B\mid A\cup B)~\mathsf P(A\cup B) + \mathsf P(A\cap B\mid (A\cup B)^\complement)~\mathsf P((A\cup B)^\complement) \\[1ex] = & ~ \mathsf P(A\cap B\mid A\cup B)~\mathsf P(A\cup B) + 0
\end{align}$$
With what we are given, that means:
$$\begin{align} \mathsf P(A\cap B) ~ = & ~ \mathsf P(A \mid A\cup B)~\mathsf P(B\mid A\cup B)~\mathsf P(A\cup B) \\[1ex] = & ~ \mathsf P(A\mid A\cup B)~\mathsf P(B)
\\[1ex] = & ~ \frac{\mathsf P(A)}{\mathsf P(A\cup B)}~\mathsf P(B) \end{align}$$
So your given sets will only be independent if: $\mathsf P(A\cup B)=1$, otherwise $\mathsf P(A\cap B)> \mathsf P(A)~\mathsf P(B)$
† Also consider that $\mathsf P(A\cup B\mid A\cup B) = \mathsf P(A\mid A\cup B)+\mathsf P(B\mid A\cup B)-\mathsf P(A\cap B\mid A\cup B)$
So $$\begin{align}1 ~= ~& \mathsf P(A\mid A\cup B)+\mathsf P(B\mid A\cup B)-\mathsf P(A\mid A\cup B)~\mathsf P(B\mid A\cup B) \\[3ex] \therefore \big(1-\mathsf P(A\mid A\cup B)\big)&\big(1-\mathsf P(B\mid A\cup B)\big) ~=~ 0\end{align}$$ so at least one of $\mathsf P(A\mid A\cup B)$ and $\mathsf P(B\mid A\cup B)$ equals $1$. |
Elliptic integrals and $\zeta(5)$. | Let $x= K'(k)/K(k)$, then $\frac{dx}{dk} = -\frac{\pi}{2kk'^{2}K^{2}}$. Let $\tau = ix$, then $$k = \frac{\vartheta_2^2(\tau)}{\vartheta_3^2(\tau)}\qquad k' = \frac{\vartheta_4^2(\tau)}{\vartheta_3^2(\tau)}\qquad K=\frac{\pi}{2}\vartheta_3^2(\tau)\qquad iK'=\frac{\pi}{2}\tau\vartheta_3^2(\tau)$$
where $\vartheta_i$ are Jacobi theta functions. So $$I = \int_0^1 \frac{K'(k)^4}{K(k)^2} k dk = \frac{\pi^3}{8}\int_0^\infty x^4 \vartheta_2^4(\tau) \vartheta_4^4(\tau) dx = 4\pi^3 \int_0^\infty x^4 f(ix) dx =\frac{3}{\pi^2}L(5,f)$$ where $f(z) = \vartheta_2^4(2z) \vartheta_4^4(2z)$ is a weight-$4$ modular form of $\Gamma_1(4)$. There is no cusp form in $M_4(\Gamma_1(4))$, so we can immediately conclude $I$ can be expressed in terms of Dirichlet $L$-functions (because Fourier coefficients of Eisenstein series are given by divisor-sum functions, and their $L$-series are products of degree $1$ $L$-functions).
This answer explicitly computes $L(s,f)$:
$$L(s,f) = 4^{2-s} (2^s-16)(2^s-1) \zeta (s-3) \zeta (s)$$
so $I = 31\zeta(5)/8$ as desired. |
how to calculate P(Z>X+Y) where X,Y,Z∼U(0,1) | You can consider the probability mass uniformly distributed across the unit cube. The plane $X+Y=Z$ passes through its vertices $(1,0,1),(0,1,1),(0,0,0)$ and thus cuts off a tetrahedron with height 1 and base area $1/2$. Checking the direction of the inequality, this tetrahedron represents the probability you want, which is therefore $\frac13\cdot\frac12\cdot1=\frac16$.
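A quick Monte Carlo confirmation in R (a sketch):
set.seed(1)
mean(runif(1e6) > runif(1e6) + runif(1e6))  # about 0.1667 = 1/6 |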
Express each of these Boolean functions using the operators ·and ¬. | Looks good.
For your information, the other absorption law is $$x(x+w) = x,$$ so your simplification result is right by letting
$$w = \bar y + \bar z.$$ |
what is the value of $9^{ \frac{a}{b} } + 16^{ \frac{b}{a} }$ if $3^{a} = 4^{b}$ | You were close. However, you simplified $\frac{2b}b$ into $b$ and $\frac{2a}a$ into $a$ in your final step, which is not right. Correct that mistake, and you should have your solution.
On a less serious note, I would do an extra step here, for clarity:
$$
3^{2a/b} + 4^{2b/a} = (3^a)^{2/b} + (4^b)^{2/a} = 4^{2b/b} + 3^{2a/a} = 4^2 + 3^2 = 25
$$
Also I prefer to not write fractions in exponents unless necessary. I think they're somewhat ugly. |
A function f(x) satisfies $f(x) = \sin x +\int^x_0 f'(t)(2\sin t-\sin^2t)dt$, then find f(x) | You applied the Newton-Leibniz rule correctly.
$$f'(x) = \frac{\cos x}{1-2\sin x+ \sin^2 x}=\frac{\cos x}{(\sin x-1)^2}$$ and then integrate.
$$f(x)=\frac{-1}{\sin x-1}+c$$
From the given initial equation, $f(0)=0$,
which gives $c=-1$.
Hence, $$f(x)=\frac{-1}{\sin x-1}-1$$
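As a numerical check in R, this $f$ does satisfy the original integral equation (a sketch at the sample point $x=0.7$):
f = function(x) -1/(sin(x) - 1) - 1
fp = function(t) cos(t)/(sin(t) - 1)^2
x = 0.7
f(x)                                                                     # 1.8107...
sin(x) + integrate(function(t) fp(t)*(2*sin(t) - sin(t)^2), 0, x)$value  # the same |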
compute $\text{Hom}(\mathbb Z_p, \mathbb Z)$ (p-adic integers) | Not sure this a fully satisfactory answer, but I want to share the following construction proving, at least when assuming a suitable set theory, the existence of very many homomorphisms of additive groups from $\Bbb{Z}_p$ to $\Bbb{Q}$, and also to provide a certain kind of census of them.
Consider the field of $p$-adic numbers $\Bbb{Q}_p$. It is an extension of the field $\Bbb{Q}$, so it is a vector space over $\Bbb{Q}$. If we accept the Axiom of Choice this implies that there exists a $\Bbb{Q}$-basis $\mathcal{B}$ of $\Bbb{Q}_p$. Furthermore, as $\Bbb{Q}_p$ is uncountable, so is $\mathcal{B}$.
We thus have a large supply of $\Bbb{Q}$-linear mappings $f:\Bbb{Q}_p\to\Bbb{Q}$. Namely, any function $\tilde{f}:\mathcal{B}\to\Bbb{Q}$ gives rise to a unique linear transformation $f$ in the usual way.
Composing any such $f$ with the inclusion mapping $i:\Bbb{Z}_p\to\Bbb{Q}_p$ gives a homomorphism $F:=f\circ i:\Bbb{Z}_p\to\Bbb{Q}$.
A few remarks are due:
Any homomorphism $f$ of additive groups from $\Bbb{Q}_p\to\Bbb{Q}$ is actually a linear transformation. If $q=m/n\in\Bbb{Q}$ and $z\in\Bbb{Q}_p$ are arbitrary then $y:=f(qz)$ must satisfy the equation $$ny=nf(qz)=f(nqz)=f(mz)=mf(z).$$ This equation has a unique solution $y=qf(z)$ in $\Bbb{Q}$ proving linearity.
Every element $z\in\mathcal{B}$ is of the form $z=x/p^{n}$ for some $x\in\Bbb{Z}_p$ and $n\in\Bbb{N}$. So without loss of generality we can assume that $\mathcal{B}\subset\Bbb{Z}_p$. Consequently different choices of $f$ give rise to different homomorphisms $F$.
Because $\Bbb{Q}$ is divisible, it is an injective object in the category of abelian groups. Therefore any homomorphism $F:\Bbb{Z}_p\to\Bbb{Q}$ can be lifted to a corresponding homomorphism $f:\Bbb{Q}_p\to\Bbb{Q}$ such that $F=f\circ i$. In other words, we have accounted for all the homomorphisms of additive groups in $\operatorname{Hom}(\Bbb{Z}_p,\Bbb{Q})$.
The homomorphisms of additive groups $\Bbb{Z}_p\to\Bbb{Q}$ are the restrictions of $\Bbb{Q}$-linear transformations $\Bbb{Q}_p\to\Bbb{Q}$. Distinct transformations have distinct restrictions, so in this sense
$\text{Hom}(\mathbb Z_p, \mathbb Q)$ is the dual space $\widehat{\Bbb{Q}_p}.$ |
Feller Probability - putting r balls in n cells | This amounts to the number of functions $f:A\to B$ from a set $A$ with $3$ elements (the three balls) $a_1$, $a_2$ and $a_3$, say, to a set $B$ with $4$ elements (the four cells). Indeed, you must assign one of the four cells to each of the three balls. Let's count:
Make a choice for $f(a_1)$. You have $4$ options.
Make a choice for $f(a_2)$. You have $4$ options.
Make a choice for $f(a_3)$. You have $4$ options.
All in all, the Rule of product leads to $4\cdot4\cdot4=4^3=64$ points, so you are correct and the book is wrong.
Your (equivalent) approach is also correct. You are calculating $|X_1\times X_2\times X_3|$ where $|X_i|=4$ ($i=1,2,3)$. The first component gives the cell of one (fixed) ball, the second component gives the cell of one of the remaining two balls and the third component gives the cell you assign to the last (fixed) ball. |
Big Omega number theory representation | The portion highlighted in red looks correct. The correction you suggest gives a different definition of the $\Omega$ notation. In fact, just see the line below the portion you highlighted. This second version is a stronger requirement since $f(n)$ must be larger than $kg(n)$ for all sufficiently large $n$ (i.e. for all $n$ starting from some value $n_0$) rather than just for infinitely many values of $n$. |
For a transitive permutation group $G,$ show that there is some $\sigma \in G$ such that $\sigma(a) \neq a$ for all $a \in A.$ | For $a \in A$ let $Ga:=\{ g \cdot x \mid g \in G \}$ denote the $G$-orbit of $a$ and $A / G$ denote the $G$-orbit space of $A$, i.e., the set of $G$-orbits of $A$. From the first part of the problem and the orbit-stabalizer theorem we can deduce Burnside's lemma:
$$
\lvert A / G \rvert = \sum_{a \in A} \frac{1}{|Ga|} = \sum_{a \in A} \frac{|G_a|}{|G|} = \frac{1}{|G|} \sum_{g \in G} |A^g|.
$$
If $G$ acts transitively on $A$, then the left-hand side of the displayed equation is $1$. Since the identity element $e \in G$ satisfies $|A^e| = |A| \geq 2$, there must be some $g \in G$ such that $A^g = \varnothing$, as otherwise the right-most sum is strictly greater than $|G|$. |
Matrix exponential in the time propagation by the free Dirac equation | Taking $a=-cpt/\hbar$ and $b=-mc^2t/\hbar$ for convenience, note that $\exp(ia\alpha+ib\beta)=\sum\limits_{n=0}^{\infty}\frac{1}{n!}(ia\alpha+ib\beta)^n$. But from the fact that $\alpha^2=\beta^2$ is the identity, and $\alpha\beta+\beta\alpha=0$, we have that $(ia\alpha+ib\beta)^{2k}=(-1)^k(a^2+b^2)^k I$ (where $I$ is the identity matrix), and $(ia\alpha+ib\beta)^{2k+1}=(-1)^k i(a^2+b^2)^k(a\alpha+b\beta)$. So, gathering together the even terms yields: $\cosh(\sqrt{-a^2-b^2})I$ and gathering together the odd terms yields: $\frac{i\sinh(\sqrt{-a^2-b^2})}{\sqrt{-a^2-b^2}}(a\alpha+b\beta)$. Note that $a$ and $b$ are real, so we can pull a factor of $i$ from each square root. Recalling that $\cosh(ix)=\cos(x)$ and $\sinh(ix)=i\sin(x)$ and adding the two pieces together then yields the result. |
Find the remainder when $f(x)$ is divided by $(x-2)(x+1)^2$. | Hint. $-1$ is a double root, so try with the derivative. We have that $f'(x)=2(x+1)q_2(x)+(x+1)^2q'_2(x)+2$, therefore we are able to find also $f'(-1)$. |
Inverse of an integral function | You need to find $(f^{-1})'(0)$, and you know from the formula you mentioned that $$(f^{-1})'(0) = \frac{1}{f'(f^{-1}(0))}.$$ Now, given that $$f(x) =\int_0^x \sin(t)\,dt,$$ you have that $f(0) = 0$, and so $f^{-1}(0) = 0$ as well. That simplifies the question to finding $$(f^{-1})'(0) = \frac{1}{f'(0)}.$$ You have also said that $f'(x) = \sin(x)$, and since $\sin(0) = 0$ you have that $(f^{-1})'(0)$ is undefined, and in fact $(f^{-1})'(x) \to \infty$ as $x\to 0^{+}$.
Alternatively, you can compute directly that $$f(x) = \int_{0}^{x}\sin(t)\,dt = -\cos(t)\bigg|_{0}^{x} = 1 - \cos(x)$$ and so $$f^{-1}(x) = \arccos(1-x).$$ Differentiating, then, gives us $$(f^{-1})'(x) = \frac{1}{\sqrt{1-(1-x)^{2}}} = \frac{1}{\sqrt{2x-x^{2}}}.$$ Again, we have that $(f^{-1})'(0)$ is undefined, and $(f^{-1})'(x) \to \infty$ as $x\to 0^{+}$. |
Can this statistics question be solved as it stands? | This is a simple binomial hypothesis test.
You should have the null hypothesis that the underlying probability ($\pi$) is exactly 25%.
(Sometimes this might be stated as less than or equal to 25%, but during the test we'll assume that this null hypothesis is true by using the 25% value.)
The alternative hypothesis is that the underlying probability is greater than 25% (matching the attorney's claim)
Now you want to work out the probability of the (right-hand) tail, based on the observed result. That is: $P(X\geq 63)$. This is called the p-value, as you require. Here, you work out the p-value according to a Binomial distribution matching the null hypothesis [$X\sim B(200,0.25)$].
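In R, for example, this tail probability can be computed directly (a sketch):
1 - pbinom(62, size = 200, prob = 0.25)  # P(X >= 63) under the null; compare with 0.05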
The point of the p-value is to decide whether this result could reasonably be generated by the null hypothesis.
If this resulting p-value is less than or equal to $\alpha=5\%$ (i.e. the result is at least as unlikely), then the result is "significant" - the result does not appear to be compatible with the null hypothesis, so we'd accept the alternative hypothesis. (So, there is evidence the attorney's claim is correct.)
If the resulting p-value is bigger than 5%, the result is "not significant" - the result appears to be compatible with the null hypothesis, so we accept the null hypothesis. (So, there is no evidence the attorney's claim is correct.)
Edited To respond to your idea: a z-score would only apply to a test if we use a Normal distribution. Since this test is using a Binomial distribution, we don't really have any concept of a z-score here.
This idea is quite subtle so feel free to ask questions if you're not sure. |
Minimizing the sum of linear ratios | General problems of the type you give are NP-hard. However, structured classes can be solved. Here is an argument that shows hardness in general:
Consider the (hard) integer program of finding a vector $x \in \mathbb{R}^n$ to satisfy the following constraints:
\begin{align}
&Ax \leq b \\
&\sum_{i=1}^n x_i = k\\
&x_i \in \{0, 1\} \quad, \forall i \in \{1, ..., n\}
\end{align}
Assume the integer program is feasible (so there exists an integer solution that satisfies the constraints). Let's define a new (non-integer) problem of the general type you pose, then show it could be used to solve the above (hard) problem. The new problem is:
\begin{align}
\mbox{Minimize:} \quad & \sum_{i=1}^n \frac{x_i}{1+x_i} \\
\mbox{Subject to:} \quad & A x\leq b\\
&\sum_{i=1}^n x_i = k \\
&x_i \in [0,1] \quad , \forall i \in \{1, ..., n\}
\end{align}
Claim: If the integer program is feasible, then the new problem is also feasible and its solution satisfies the constraints of the integer program.
Proof: Let $y^*$ be a solution to the integer problem. Then $y_i^* \in \{0,1\}$ and $\sum_{i=1}^n y_i^*=k$. Notice that the function $x/(1+x)$ satisfies
$$ (Fact 1): \quad \frac{x}{1+x} \geq \frac{x}{2} \quad , \forall x \in [0,1], \mbox{with equality iff $x\in \{0,1\}$} $$
Hence
$$ \sum_{i=1}^n \frac{y_i^*}{1+y_i^*} = \sum_{i=1}^n \frac{y_i^*}{2} = k/2 $$
Notice that $y^*$ satisfies the constraints of the new problem and achieves an objective value of $k/2$. So the new problem is feasible, and its optimal objective value is less than or equal to $k/2$.
Now let $x^*$ be an optimal solution to the new problem. It suffices to show $x^*$ is a binary vector. Since the optimal objective value is no more than $k/2$, we have:
$$ \sum_{i=1}^n \frac{x_i^*}{1+x_i^*} \leq k/2 $$
On the other hand:
$$ k/2 \geq \sum_{i=1}^n \frac{x_i^*}{1+x_i^*} \overset{(a)}{\geq} \sum_{i=1}^n \frac{x_i^*}{2} \overset{(b)}{=} k/2 $$
where (a) holds by Fact 1 and (b) holds because $x^*$ satisfies the constraint $\sum_{i=1}^n x_i^*=k$. It follows that
$$ \sum_{i=1}^n\frac{x_i^*}{1+x_i^*} = \sum_{i=1}^n \frac{x_i^*}{2}$$
But for each $i \in \{1, ..., n\}$, we know by Fact 1 that
$$\frac{x_i^*}{1+x_i^*} \geq \frac{x_i^*}{2}$$
So equality must hold in each term $i$, that is, $\frac{x_i^*}{2} = \frac{x_i^*}{1+x_i^*}$, which implies $x_i^* \in \{0,1\}$ by Fact 1. $\Box$
The "structured classes" of problems I mentioned above are generalizations of linear fractional problems where denominator terms can be grouped over separate variables, such as
\begin{align}
\mbox{Minimize:} \quad & \sum_{i=1}^G \frac{g_i(x)}{T_i(x)} \\
\mbox{Subject to:} \quad & \sum_{i=1}^G \frac{h_{i,k}(x)}{T_i(x)} \leq c_k, \quad \forall k \in \{1, ..., K\} \\
\end{align}
where $G$ is the number of groups, and
where for each group $i \in \{1, ..,G\}$, the functions $g_i(x)$, $T_i(x)$, and $h_{i,k}(x)$ depend on a disjoint set of components of the $x$ vector. For example, group 1 uses only components $x_1, x_2$. Group 2 uses components $x_3, x_4$, and so on. I've done some work in more complicated stochastic scenarios that use this structure, for example Theorem 1 here (which is unfortunately written for a more complicated system; a simplified version could be written for this problem):
http://ee.usc.edu/stochastic-nets/docs/asynch-markov-v8.pdf |
Proving fixed points of a continuous function | It means that the domain of the function is $[0,1]$ and that the codomain of the function is $[0,1]$. An example of such a function would be $x^2$, for example, or maybe $\sin(x)$ or an infinitely many other functions.
To solve your problem, here's a couple guidelines:
Take a look at the function $g(x)=f(x)-x$.
What is the sign of $g(0)$?
What is the sign of $g(1)$?
What does the intermediate value theorem tell you? |
find domain of given question. $f(x,y)=\arcsin\left(\frac{\sqrt x-\sqrt y}{\sqrt x+\sqrt y}\right)$ | $\{(x,y)\mid x\geq 0,\ y\geq 0\}\setminus\{(0,0)\}.$ |
Limits and double integral? | We have that $R$ corresponds to the strip between the lines
$y=x-2$
$y=x-1$
and the hyperbolas
$y=2/x$
$y=1/x$
One way to proceed is to find the intersection points for this region and then integrate accordingly for each part. |
Theorem 20.4 in Munkres' TOPOLOGY, 2nd edition: How are these three topologies different on an infinite Cartesian product of $\mathbb{R}$ with itself? | if the index is infinite then the box topology has open sets with have arbitrarily small projections. e.g. for a countable index set, consider the set with projections:
$$
U_j = B(0,\frac1{n})
$$
this is open in the box topology, but contains no open ball of the uniform topology. |
A problem on calculating a limit as $n \to \infty$ | Hint:
$\log(a_1a_2 \cdots a_n)=\sum_1^n\log a_k$ |
Quotient Topology, equivalence relation. I need help to prove if X is homeomorphic to X/~ | Hint: Consider the continuous map $[0,3] \to [0,2]$
$$x\mapsto \left\{\matrix{x\ \text{ if }x<1 \\ 1\ \text{ if } x\in [1,2]\\ x-1\ \text{ if } x>2}\right.$$ |
open sets of infinite product: $[0,1]^{\omega _1}$ | $[0,1]^{\omega_1}$ does not have that property. Every open set $O$ does contain a countable union of basic open sets that is dense in $O$; this follows from ccc, essentially. |
Can basis of kernel be extended to a Jordan basis? | This should always be possible.
Given a basis $\mathcal{B}$ of $\ker A$ you can check whether or not it already is a basis of $\mathbb{C}^n$. If it is not a basis you proceed as in the proof of the existence of the Jordan normal form and add vectors from $\ker A^m \backslash \ker A^{m-1}$ to $\mathcal{B}$ until you get a basis of $\mathbb{C}^n$. As you are working over $\mathbb{C}$ we know that a Jordan basis exists, so this procedure yields a basis of $\mathbb{C}^n$ after finitely many steps.
As you can start this procedure with any basis of $\ker A$, this means that any basis of $\ker A$ can be extended to a Jordan basis of $A$. |
Recurrence Equation and Markov Chain: How to get the base case | The stationary distribution of a Markov chain is found by solving the matrix equation
$$v(T-I)=0$$
Where $T$ is the transition matrix and $I$ is the identity matrix with 1 on the diagonal and 0 elsewhere. The equation can be solved by Gaussian elimination. You will get an infinite family of solutions, and you want to normalize it so the sum of the components of the vector is 1.
To calculate the mean time to get from $i$ to $i$ again, you compute first the mean time to get from any state to $i$. You do this by solving
$$k_i=0$$
$$k_j=1+\frac{1}{6}\sum{k_p}$$
Where $j$ is a non stopping state.
Then the mean time to get from $i$ back to $i$ again is
$1+ \frac{1}{6}\sum{k_p}$
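As a minimal R sketch of the first step (with a hypothetical 2-state transition matrix, called P below since T is R's shorthand for TRUE), the stationary distribution is the normalized left eigenvector of the transition matrix for eigenvalue 1:
P = rbind(c(0.9, 0.1),
          c(0.5, 0.5))
e = eigen(t(P))
v = Re(e$vectors[, which.min(abs(e$values - 1))])
v = v/sum(v)  # normalize so the components sum to 1
v %*% P       # returns v again, so v is stationary |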
How many not-equal sign is needed for representing distinct value? | I think you are asking for the length of the shortest path in the complete graph $K_n$ that traverses every edge.
When $n$ is odd, the graph is Eulerian, and there are paths with ${n\choose 2}$ distinct edges.
When $n$ is even, pick two vertices $P$ and $Q$ and add $(n-2)/2$ extra edges, necessarily parallel to edges already present, to make the degree
of all vertices other than $P$ and $Q$ equal to $n$. In this new multigraph
there is an Eulerian path from $P$ to $Q$. This path has ${n\choose 2}+\frac12(n-2)$ edges, which is the best possible. |
Is a tangent vector field on $S^n$ always continuous? | I think it's often taken to be part of the definition of "vector field" that it's continuous. The fact that you can't comb the hair on a billiard ball, i.e. there is no non-vanishing field of tangent vectors on $S^2$, would obviously not be true if you didn't take continuity to be part of the definition of "vector field". |
Finding the Jordan form of a $3\times 3$ matrix | Using the theorem: Let $ T : V \rightarrow V$ be a linear map admitting a JFC, $J$. Let $i < 0 $ and $\lambda \in \mathbb{C}$. Then the number of Jordan blocks with eigenvalue $\lambda $ and length at least $i$ is equal to nullity$((T-\lambda I_n)^{i}) -$nullity$((T-\lambda I_n)^{i-1})$.$$ $$
So, in your case, the number of $J$-Blocks with $\lambda =4$ and length $2$ is nullity$((T-\lambda I_n)^{2}) -$nullity$((T-\lambda I_n)^{1}) = 2-1 =1$. So there is one block of at least length $2$. However, nullity$((T-\lambda I_n)^{3}) -$nullity$((T-\lambda I_n)^{2})= 3-2=1$. So there is one block of length 3, implying that the Jordan normal form is $$J = \begin{pmatrix} 4 & 1 & 0 \\ 0 & 4 & 1 \\ 0 &0 &4\end{pmatrix}$$
The fact that nullity$(A-4I)=1$ does not imply that there is one Jordan block. |
Is there an intuitive way of thinking why a rearranged conditionally convergent series yields different results? | That the series is not absolutely convergent means that "there's always enough left to go as far as you want". That it's also convergent means that "there's always enough left to go as far as you want in either direction" (because otherwise, if you could go as far as you want in one direction but not the other, the series would go off in that direction and not converge). Now, if there's always enough left to go as far as you want in either direction, you can pick an arbitrary target and then keep going past it to either side by using the appropriate terms of the series. On the other hand, since the convergence implies that the terms go to zero, you need to overstep the target less and less, allowing the rearranged series to converge to the target. That the terms go to zero also ensures that you can use all of them eventually (otherwise you might get a rearranged subsequence but not a rearrangement of the entire series). |
How i can prove Var(XY)? | Begin with the definition for Variance: $$\mathsf {Var}(XY)=\mathsf E(X^2Y^2)-\mathsf E(XY)^2$$
Now, you say that $X,Y$ are independent. What does that mean for those expectations?
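A quick R simulation to check your eventual answer against (a sketch, assuming $X,Y$ are independent standard normals):
set.seed(1)
x = rnorm(1e6); y = rnorm(1e6)
var(x*y)  # compare with E(X^2)E(Y^2) - (E(X)E(Y))^2 |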
What is a valid unit of measurement? | If I've understood your system correctly, we do this all the time with temperatures: absolute temperature is measured in kelvins but degrees Celsius are effectively "kelvins beyond 273.16". (Loosely, the freezing point of water.) We routinely use them all the time to compare temperaures, but things go awry if we want to say that one thing is twice as hot as another.
Note that 30°C isn't 20°C plus 10°C unless we use $x$°C to mean two different things, namely a temperature of $x$ temperature steps above freezing, and an increment of $x$ temperature steps.
You might argue that we do something similar (but inconsistently) with weights when we treat them like mass: we're neglecting the buoyancy effect in air so there's a slight offset. This actually has to be taken into account when dealing with very precise mass measurenents.
And decibels are in fact meaningless on their own as a unit: a decibel is a power ratio of $\sqrt[10]{10}$ so the actual measurement unit is dBA, ie decibels above the threshold of hearing. On its own, a decibel is simply the number $\sqrt[10]{10}$ treated as a multiplication factor. |
If $f(x)$ has $k$ critical points and has degree $n$, show that $n-k$ is odd | Let $P$ be the polynomial, $x_1,...,x_k$ the critical points, you can write $P'(x)=(x-x_1)..(x-x_k)q(x)$, the polynomial $q$ does not have a zero so its degree is even. By integrating, you obtain the result. |
proving equation including the zeta-function | You need to use $$\frac{\pi^2}{\sin^2 (\pi z)} = \sum_n \frac1{(z+n)^2}= z^{-2}+\sum_{k\ge 1} 2\zeta(2k)z^{2k-2} $$ Then multiply both sides by $\sin^2(\pi z)$'s Taylor series |
On average, how many times will I need to roll a six-sided die before I see ten ONES in total? | If you roll a 6-sided die 6 times, on average you'd get each number one time. 12 times (6*2) and you'd get each number 2 times. Roll it 60 times (6*10), and you'd get each number 10 times on average.
This is just intuition. Making the "on average" mathematically rigorous takes some work.
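A quick simulation in R (a sketch; the number of rolls to the tenth one is $10$ plus a negative binomial count of failures):
set.seed(1)
mean(10 + rnbinom(1e5, size = 10, prob = 1/6))  # about 60 |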
Project e^x using the monomial and legendre basis | Let, $$ A_{nm} = \int_{-1}^1 x^n x^m dx, $$
write,
$$ e^x = \sum_{n=0}^2 c_n x^n + (\text{ higher order terms}) $$
Our goal is to use the inner product formulas to determine the $c_n$'s. Multiply both sides by $x^p$ and integrate over $[-1,1]$ (this is the inner product with $x^p$).
$$\int_{-1}^1 x^p e^x dx= \int_{-1}^1 x^p \sum_{n=0}^2 c_n x^n dx + (\text{ higher order terms})$$
$$\int_{-1}^1 x^p e^x dx= \sum_{n=0}^2 \int_{-1}^1 x^p x^n dx c_n + (\text{ higher order terms})$$
$$\int_{-1}^1 x^p e^x dx= \sum_{n=0}^2 A_{pn} c_n + (\text{ higher order terms})$$
Define $E_p = \int_{-1}^1 x^p e^x \, dx$ and we have,
$$ E_p = \sum_{n=0}^2 A_{pn} c_n + (\text{ higher order terms}),$$
If we neglect the higher order terms we then have the following equation,
$$ E_p = \sum_{n=0}^2 A_{pn} c_n,$$
which can be written in matrix form,
$$ \boxed{E = A c}$$
The vector $E$ and the matrix $A$ are computed from integrals. The vector $c$ contains the unknown coefficients. You can solve for $c$ using ordinary matrix methods (I would do a Cholesky decomposition).
Once you know your function in terms of the monomials you can convert this to a representation in terms of Legendre polynomials using the following procedure.
Suppose we have,
$$ f(x) = 1 + 2 x + 3 x^2 $$
we want to write $f$ as a linear combination of Legendre polynomials. We start with the highest order term ($x^2$) and write it in terms of $P_2$.
$$P_2 = \frac12 ( 3 x^2 -1 ) \Leftrightarrow x^2 = \frac13(2P_2+1)$$
$$ f(x) = 1 + 2 x + 3 \frac13(2P_2+1) $$
$$ f(x) = 2 + 2 x + 2P_2 $$
$$ f(x) = 2 P_0 + 2 P_1 + 2P_2 $$
Then the projection of $f(x)$ onto $P_2$ is the coefficient $2$.
For part $(b)$ the equation will be
$$e^x = \sum_{n=0}^2 c_n P_n(x) + (\text{higher order terms})$$
Multiply both sides by $P_m(x)$ and integrate over the interval $[-1,1]$.
$$e^x P_m(x) = \sum_{n=0}^2 c_n P_n(x) P_m(x) + (\text{higher order terms})$$
$$\int_{-1}^1 e^x P_m(x) \mathrm{d}x = \int_{-1}^1 \sum_{n=0}^2 c_n P_n(x) P_m(x) \mathrm{d}x + (\text{higher order terms})$$
$$\color{blue}{\int_{-1}^1 e^x P_m(x) \mathrm{d}x} = \sum_{n=0}^2 c_n \color{blue}{\int_{-1}^1 P_n(x) P_m(x) \mathrm{d}x} + (\text{higher order terms})$$
We will label the integral on the left as $E_m$ and the integral on the right as $A_{nm}$.
$$\color{blue}{E_m} = \sum_{n=0}^2 c_n \color{blue}{A_{nm}} + (\text{higher order terms})$$
Let's neglect the higher order terms.
$$E_m = \sum_{n=0}^2 c_n A_{nm} $$
This is equivalent to the matrix equation,
$$ E = A c$$
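A minimal R sketch of this computation for the monomial case: $A$ has entries $\int_{-1}^1 x^{n+m}\,dx$ (zero when $n+m$ is odd), and $E$ is obtained by numerical quadrature:
A = outer(0:2, 0:2, function(n, m) (1 + (-1)^(n+m))/(n + m + 1))
E = sapply(0:2, function(p) integrate(function(x) x^p * exp(x), -1, 1)$value)
solve(A, E)  # the coefficients c_0, c_1, c_2 |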
Problems about exact sequence in Vick's homology theory. | Consider the space $X=D^n\cup_f Y$ and let's define $U=X\setminus \{p\}$ where $p$ is the center of the ball $D^n$, and let $V=B_{\epsilon}(p)$ be a small open ball around the point $p$. Note that $U\cap V$ is within the interior of the ball $D^n$ and so it is just a copy of $S^{n-1}\times I$ or just $S^{n-1}$ up to homotopy. We also note that $U$ deformation retracts onto an embedded copy of $Y$ at the 'boundary' of the ball, so let us write the Mayer-Vietoris sequence associated to this decomposition of $X$.
$$\cdots\to H_m(U\cap V)\to H_m(U)\oplus H_m(V) \to H_m(X) \to H_{m-1}(U\cap V)\to\cdots$$
which from the above reasoning (for large enough $m$ such that $H_m(V)=0$) becomes
$$\cdots\to H_m(S^{n-1})\to H_m(Y) \to H_m(D^n\cup_f Y) \to H_{m-1}(S^{n-1})\to\cdots.$$
The maps in this sequence are exactly the maps appearing in yours as they are induced by homotopic maps (for instance your map $i$ is the inclusion of $Y$ into the 'boundary' of the $n$-ball $D^n$.) |
Prove irrationality of $\sqrt{2+\sqrt{2}}$ and $\sqrt{2}+\sqrt{3}$ | Let $x=\sqrt{2}+\sqrt{3}$. Then
$$(x-\sqrt{2})^2=3=x^2-2\sqrt{2}x+2$$
$$\sqrt{2}=\frac{x^2-1}{2x}$$
Then if $x$ is rational, so is $\sqrt{2}$, contradiction.
If $x=\sqrt{2+\sqrt{2}}$, then $x^2-2=\sqrt{2}$, same idea then. |
Verifying that a map to $L^2_{\text{loc}}$ is continuous. | It turns out that this follows by Cauchy-Schwarz and invariance of the measure on $M$. Using the fact that we are integrating over the compact subset $H\subseteq G$, we have for any compact subset $K\subseteq M$,
$$\int_K |f(\mu)(x)|^2\,dx = \int_K\int_H\int_H c(gx)\mu(gx)c(g'x)\mu(g'x)\,dg\,dg'dx$$
$$\leq\int_H\int_H\langle f_g(\mu),f_{g'}(\mu)\rangle_{L^2(K)}\,dg\,dg'$$
$$\leq\int_H\int_H \|f_g(\mu)\|_{L^2(K)}\|f_{g'}(\mu)\|_{L^2(K)}\,dg\,dg'.$$
where $f_g(\mu)(x):=f(\mu)(gx)$ and $f_{g'}(\mu)(x):=f(\mu)(g'x)$. This is finite since $H$ has finite volume, and claim 1 is verified.
The proof of claim 2 is similar, using the equivalent criterion that $f$ is continuous if and only if for any $K$ there exists a constant $C_K$ such that $||f(\mu)||_{L^2(K)}\leq C_K ||\mu||_{L^2(M)}$. The constant $C_K$ can be chosen to be $(\text{vol}(H))^2$. |
Fourier Transform of quadratic Volterra series | I was working on a similar issue and knowing what the result should have been has helped me find the solution. I’m only going to work on the second order term and I’ll use uppercase letters to denote the Fourier transformed functions for clarity (and, out of laziness, I’m going to omit the bounds of integration).
$$Y_2(\omega) = \iiint h_2(\tau_1,\tau_2)x(t-\tau_1)x(t-\tau_2) e^{-i\omega t} dt d\tau_1 d\tau_2$$
The issue here is that both copies of $x$ depend upon $t$, so I'm plopping in a $\delta$ to deal with that.
$$Y_2(\omega) = \iiiint h_2(\tau_1,\tau_2)x(\theta-\tau_1)x(t-\tau_2)\delta(\theta-t) e^{-i\omega t} d\theta dt d\tau_1 d\tau_2$$
Then I’m going to write the $\delta$ out in its integral form $\delta(\theta - t) = \frac{1}{2\pi} \int e^{-i \omega_1 (\theta-t)} d\omega_1$.
$$Y_2(\omega) = \frac{1}{2\pi} \int\!\!\!\iiiint h_2(\tau_1,\tau_2)x(\theta-\tau_1)x(t-\tau_2) e^{-i \omega_1 (\theta-t)} e^{-i\omega t} d\theta dt d\tau_1 d\tau_2 d\omega_1$$
$$Y_2(\omega) = \frac{1}{2\pi} \int\!\!\!\iiiint h_2(\tau_1,\tau_2) e^{-i\omega_1\tau_1} x(\theta-\tau_1) e^{-i \omega_1 (\theta-\tau_1)} x(t-\tau_2) e^{-i (\omega-\omega_1) t} d\theta dt d\tau_1 d\tau_2 d\omega_1$$
With the substitution $\zeta_1 = \theta - \tau_1$ we can finally start getting rid of some of the mess.
$$Y_2(\omega) = \frac{1}{2\pi} \int\!\!\!\iiiint h_2(\tau_1,\tau_2) e^{-i\omega_1\tau_1} x(\zeta_1) e^{-i \omega_1 \zeta_1} x(t-\tau_2) e^{-i (\omega-\omega_1) t} d\zeta_1 dt d\tau_1 d\tau_2 d\omega_1$$
$$Y_2(\omega) = \frac{1}{2\pi} \iiiint h_2(\tau_1,\tau_2) e^{-i\omega_1\tau_1} X(\omega_1) x(t-\tau_2) e^{-i (\omega-\omega_1) t} dt d\tau_1 d\tau_2 d\omega_1$$
Similarly I can transform the other $x$ by taking $\zeta_2 = t - \tau_2$.
$$Y_2(\omega) = \frac{1}{2\pi} \iiiint h_2(\tau_1,\tau_2) e^{-i\omega_1\tau_1} e^{-i (\omega-\omega_1) \tau_2}X(\omega_1) x(t-\tau_2) e^{-i (\omega-\omega_1) (t-\tau_2)} dt d\tau_1 d\tau_2 d\omega_1$$
$$Y_2(\omega) = \frac{1}{2\pi} \iiiint h_2(\tau_1,\tau_2) e^{-i\omega_1\tau_1} e^{-i (\omega-\omega_1) \tau_2}X(\omega_1) x(\zeta_2) e^{-i (\omega-\omega_1) \zeta_2} d\zeta_2 d\tau_1 d\tau_2 d\omega_1$$
$$Y_2(\omega) = \frac{1}{2\pi} \iiint h_2(\tau_1,\tau_2) e^{-i\omega_1\tau_1} e^{-i (\omega-\omega_1) \tau_2}X(\omega_1) X(\omega-\omega_1) d\tau_1 d\tau_2 d\omega_1$$
And finally we just have the Fourier transform of $h_2$.
$$Y_2(\omega) = \frac{1}{2\pi} \int H_2(\omega_1,\omega-\omega_1) X(\omega_1) X(\omega-\omega_1) d\omega_1$$
As it has already been commented, the final term with the $\delta(\omega)$ should instead come from the zeroth order term which you omitted: $y_0(t) = h_0$, and the Fourier transform of a constant is indeed just a delta. No idea why it would be denoted as $\overline{y_2}$, but it could be a mistake in the slides you have. |
Two Homotopy Colimit Questions | One way I've thought about this, if the diagram category is a Reedy category: $\mathrm{hocolim}_iF(i)$ is $\mathrm{colim}G(i)$ where $G$ is a cofibrant replacement for $F$. Then $TG$ is also a cofibrant replacement for $(\mathbb{L}T)F$ (as $T$ sends cofibrations to cofibrations and pushouts to pushouts).
So
$$\mathrm{hocolim}_i (\mathbb{L}T)F(i) = \mathrm{colim}_i TG(i) = T \mathrm{colim}_i G(i) = \mathbb{L}T \mathrm{hocolim}_i F(i).$$
(Edited to make everything homotopical.) |
$U_{(1)}\overset{p}{\to} 0$ from a $U(0,1)$ population | Looks good to me, except I would also write $$\Pr[|U_{(1)} - 0| \ge \epsilon] = \Pr[|U_{(1)}| \ge \epsilon] = \ldots,$$ hence $$\lim_{n \to \infty} \Pr[|U_{(1)} - 0| \ge \epsilon] \le \lim_{n \to \infty} \frac{1}{(n+1)\epsilon} = \frac{1}{\epsilon} \cdot 0 = 0$$ for all $\epsilon > 0$, and since the probability is by definition bounded below by $0$, the limit must equal $0$ for all $\epsilon > 0$. |
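A quick simulation illustrating the convergence (a sketch, assuming NumPy; note $\Pr[U_{(1)}\ge\epsilon]=(1-\epsilon)^n$ exactly):

```python
import numpy as np

rng = np.random.default_rng(1)
eps, reps = 0.05, 5000
for n in [10, 100, 1000]:
    u_min = rng.random((reps, n)).min(axis=1)      # U_(1) from n Uniform(0,1) draws
    print(n, np.mean(u_min >= eps), (1 - eps)**n)  # empirical vs exact (1-eps)^n
```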
Proving that if $(a_n)$ is a bounded sequence and that if every convergent subsequence of $(a_n)$ converges to $a $, then $(a_n)$ converges to $a$. | No, you got the rationale of the proof wrong.
Firstly, one assumes that $(a_n)$ does not converge to $a$. This yields the existence of some $\epsilon$ such that $\forall N\in \mathbb N$, there is some $n\geq N$ such that $a_n \notin V_{\epsilon}(a)$. This allows you to build a subsequence $a_{n_k}$ such that $\forall k, a_{n_k}\notin V_{\epsilon}(a)$.
Now, since $a_{n_k}$ is bounded, it has a convergent subsequence, say $b_n$, and $b_n$ is therefore a convergent subsequence of $a_{n_k}$, but also a convergent subsequence of $a_n$. Hence $b_n$ converges to $a$. But that is a contradiction: every term of $b_n$ lies outside $V_{\epsilon}(a)$, since $b_n$ is a subsequence of $a_{n_k}$, so $b_n$ cannot converge to $a$. |
Convolution with delta function | It's called the sifting property:
$$
\int_{-\infty}^\infty f(x)\delta(x-a)\,dx=f(a).
$$
Now, if
$$
f(t)*g(t):=\int_0^t f(t-s)g(s)\,ds,
$$
we want to compute
$$
f(t)*\delta(t-a)=\int_0^t f(t-s)\delta(s-a)\,ds.
$$
With an eye on the sifting property above (which requires that we integrate "across the spike" of the Dirac delta, which occurs at $a$), we consider two cases.
If $t<a$, then $\delta(s-a)\equiv 0$ since $0\le s\le t<a$. Therefore $\int_0^t f(t-s)\delta(s-a)\,ds\equiv 0$.
If $t\geq a$, then by the sifting property, $\int_0^t f(t-s)\delta(s-a)\,ds=f(t-s)\Big|_{s=a}=f(t-a)$.
Thus,
\begin{align}
f(t)*\delta(t-a)=\int_0^t f(t-s)\delta(s-a)\,ds&=\begin{cases} 0, &t<a,\\ f(t-a) &t\ge a,\end{cases}=f(t-a)u(t-a),
\end{align}
where $u(t)$ denotes the unit step function. |
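One can see this numerically by approximating the Dirac delta with a narrow Gaussian (a sketch, assuming NumPy):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
a, sigma = 3.0, 0.01

f = np.sin(t)
# narrow Gaussian standing in for delta(s - a)
d = np.exp(-(t - a)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

conv = np.convolve(f, d)[:len(t)] * dt         # \int_0^t f(t-s) d(s) ds
pred = np.where(t >= a, np.sin(t - a), 0.0)    # f(t-a) u(t-a)

print(np.max(np.abs(conv - pred)))             # small, up to the smoothing width
```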
interchanging integral and limit | You can use the Dominated Convergence Theorem: your function is dominated by $f(t)$. |
Proving that a minimal set satisfying 'tree' properties is countable. | Following bof's suggestions...
For any $Y \subset X$ we can form $\Phi (Y)$,
$$\tag 1 \Phi (Y) = \bigcup_{y \in Y} F(y)$$
If $Y$ is countable, so is $\Phi (Y)$.
Let $Y_0 = K$ and for $n \ge 0$ define
$$\tag 2 Y_{n+1} = \Phi (Y_n)$$
By induction, each $Y_n$ is countable, so
$$\tag 3 \hat K = \bigcup_{n \ge 0} Y_n$$
is also countable.
Claim: $\hat K$ is an inductive accommodation of $K$ and $F$:
Clearly $K \subset \hat K$.
If $y \in \hat K$ then there is a $k$ with $y \in Y_k$, and since
$$\tag 4 F(y) \subset \Phi (Y_k) = Y_{k+1} \subset \hat K$$
the claim is established.
It can be demonstrated by induction that $\hat K$ is the minimal inductive accommodation.
Note that we need the axiom of choice in this setting. |
How to solve the equation $2z^{2}+i=-2$? | Hint: With $$z=x+iy$$ we get $$2(x^2-y^2+1)+i(4xy+1)=0$$
It must be $$x^2-y^2+1=0$$ and $$4xy+1=0$$ |
Does there exist prime number of the form $0^0+1^1+2^2+3^3+4^4+...$ after the trivial one $2$? | Checking up to $n = 500$, I find that the expression is prime for $n = 1, 52, 124, 431$.
There is no reason to believe that there are only finitely many.
Edit: Fixed the typo, $n = 1$, not $n = 2$. Thanks to those who pointed this out. |
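The search is easy to reproduce (a sketch, assuming sympy; the convention $0^0=1$ is used, and the primality tests on these huge sums take a little while):

```python
from sympy import isprime

s, hits = 1, []          # s = 0^0 = 1
for n in range(1, 501):
    s += n**n            # s = 0^0 + 1^1 + ... + n^n
    if isprime(s):
        hits.append(n)

print(hits)              # [1, 52, 124, 431]
```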
Computing ideal from its radical | Well, since $\sqrt{\tilde{P_i}^n}=P_i$, I fear that the only case in which you can make the above assumption is when both ideals are trivial. |
Can I use order of operations (BIDMAS) on any question? | Short answer: Yes.
Long answer: Yes, and the reason it is always done in that order is so that everyone knows how to read and write equations consistently. If those rules weren't followed, then simple math could become impossible to decipher.
Example:
$1-2-3=(1-2)-3=-1-3=-4$
But that could also mean
$1-2-3=1-(2-3)=1-(-1)=1+1=2$
The standard order of operations is what keeps our math consistent and understandable. |
Euler angle From one position to another. | Denote the first rotation by $^0R_A$ and the second one by $^0R_B$.
Now you have ${^0}R_A{{^A}R_B}={^0}R_B$, which is the notation for rotating from $0$ to $A$ and then from $A$ to $B$, obtaining this way the rotation from $0$ to $B$.
From this you can calculate rotation from $A$ to $B$ as ${{^A}R_B}=({^0}R_A^{-1})({^0}R_B)$.
In your case $({^0}R_A)^{-1}=(R_1(\phi_1)R_2(\theta_1)R_3(\psi_1))^{-1} = R_3(-\psi_1)R_2(-\theta_1)R_1(-\phi_1)$ and ${^0}R_B=R_1(\phi_2)R_2(\theta_2)R_3(\psi_2)$. |
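A sketch of this computation with SciPy (assuming the $R_i$ are intrinsic rotations about the $x$, $y$, $z$ axes, which is the 'XYZ' convention of scipy.spatial.transform; the angles are made-up examples):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

phi1, th1, psi1 = 0.3, -0.5, 1.1    # example Euler angles for ^0R_A
phi2, th2, psi2 = 0.7, 0.2, -0.4    # example Euler angles for ^0R_B

R_0A = R.from_euler('XYZ', [phi1, th1, psi1])   # R_1(phi) R_2(theta) R_3(psi)
R_0B = R.from_euler('XYZ', [phi2, th2, psi2])

R_AB = R_0A.inv() * R_0B                        # ^A R_B = (^0R_A)^{-1} (^0R_B)
print(R_AB.as_euler('XYZ'))                     # Euler angles taking A to B
print(np.allclose((R_0A * R_AB).as_matrix(), R_0B.as_matrix()))  # True
```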
Primitive polynomial | If $\alpha$ is a root of your polynomial, since the polynomial is irreducible, we know that ${\mathbb F}_2 (\alpha)$ is actually the finite field having $2^5$ elements, and hence we know $\alpha^{31} = 1$ becuase $31$ is the order of the multiplicative group of ${\mathbb F}_{2^5}$.
Now, what can the multiplicative order of $\alpha$ be if $\alpha^{31} = 1$? 31 is a very special number.
Here is the definition of primitive polynomial I am using.
EDIT: To make matters clear, assume $p(x) = x^5 + x^2 + 1$ divides $x^n + 1 = x^n - 1$ for some $n\in\{0,\ldots,30\}$, then necessarily $n\geq 5$ as $\deg p = 5$. Now, if $\alpha$ were a root of $p(x)$, then $\alpha^n - 1 = 0$ because $p(x) | (x^n - 1)$. Hence the multiplicative order of $\alpha$ must divide $n$.
However, as we saw above, the multiplicative order of $\alpha$ is either 1 or 31 since 31 is prime, if the order were $1$ then $\alpha=1$ which is impossible as $p(1)=1\neq 0$, and so the order of $\alpha$ is 31. So it is impossible that the order divide $n$ for any $n<31$, a contradiction to the previous paragraph. |
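This can be checked by brute force over $\mathbb F_2$, encoding polynomials as bit masks (a self-contained sketch):

```python
P = 0b100101                  # x^5 + x^2 + 1 over GF(2)

def mulmod(a, b, p=P):
    """Carry-less product of a and b, reduced modulo p."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << 5):      # reduce as soon as the degree reaches 5
            a ^= p
        b >>= 1
    return r

acc, order = 0b10, 1          # acc = x
while acc != 1:
    acc = mulmod(acc, 0b10)   # multiply by x
    order += 1
print(order)                  # 31 = 2^5 - 1, so x generates and p(x) is primitive
```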
Is the maximum of a probability distribution function of a Binomial distribution always the expected value? | If $p=1/3$ and $n=40$ then $np= 40/3 = 13.33333\ldots,$ and the mode must be an integer.
What do you multiply $\Pr(X=x)$ by to get $\Pr(X=x+1)$?
\begin{align}
\frac{\Pr(X=x+1)}{\Pr(X=x)} = \frac{\dbinom n {x+1} p^{x+1} (1-p)^{n-x-1}}{\dbinom n x p^x (1-p)^{n-x}} = \frac{(n-x)p}{(x+1)(1-p)}
\end{align}
This is $>1$ if $x<(n+1)p-1$ and is $<1$ if $x>(n+1)p-1.$
Thus $\Pr(X=x+1)>\Pr(X=x)$ when $x<(n+1)p-1$
and $\Pr(X=x+1)<\Pr(X=x)$ when $x>(n+1)p-1.$
When $x=(n+1)p-1$ is an integer, then $\Pr(X=x+1)=\Pr(X=x)$ and there are two modes differing from each other by $1.$
In the concrete example above, $(n+1)p-1 = 12.66666\ldots,$ so $\Pr(X=12) < \Pr(X=13) > \Pr(X=14),$ and the mode is $13.$ |
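A quick check with SciPy (when $(n+1)p$ is not an integer, the mode has the closed form $\lfloor (n+1)p\rfloor$):

```python
import numpy as np
from scipy.stats import binom

n, p = 40, 1/3
pmf = binom.pmf(np.arange(n + 1), n, p)
print(np.argmax(pmf))               # 13
print(int(np.floor((n + 1) * p)))   # 13, the closed form for the mode
```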
Evaluation of $\lim _{ x\rightarrow 0 }{ \frac { \sin ^{ -1 }{ x } -\tan ^{ -1 }{ x } }{ { x }^{ 3 } } } \quad $ without using L'Hospital rule | We have
$$
\sin(\arcsin(x)-\arctan(x))=\frac{x(1-\sqrt{1-x^2})}{\sqrt{1+x^2}}
$$
and
$$
\lim_{x\to 0}\left(\frac{\arcsin(x)-\arctan(x)}{x^3}\right)\equiv \lim_{x\to 0}\left(\frac{\sin(\arcsin(x)-\arctan(x))}{\sin(x^3)}\right)
$$
then
$$
\frac{\sin(\arcsin(x)-\arctan(x))}{\sin(x^3)} = \frac{x^3(1-\sqrt{1-x^2})}{x^2\sqrt{1+x^2}\sin(x^3)} = \left(\frac{x^3}{\sin(x^3)}\right)\frac{x^2}{x^2(1+\sqrt{1-x^2})\sqrt{1+x^2}}
$$
then
$$
\lim_{x\to 0}\left(\frac{\sin(\arcsin(x)-\arctan(x))}{\sin(x^3)}\right)\equiv\lim_{x\to 0}\left(\frac{x^3}{\sin(x^3)}\right)\frac{1}{(1+\sqrt{1-x^2})\sqrt{1+x^2}} = \frac{1}{2}
$$ |
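The value can be confirmed symbolically (a one-liner, assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((sp.asin(x) - sp.atan(x)) / x**3, x, 0))  # 1/2
```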
When should I shift $a$ and $b$ in $\cfrac{x^2}{a^2}+\cfrac{y^2}{b^2}=1$? | The foci are $(0,6)$ and $(0,-6)$. Hence, they are located on the y-axis. So the major axis is on the y-axis and $b=17$.
As a consequence, $c^2=b^2-a^2$, but not $c^2=a^2-b^2$ as you did.
With $a^2=b^2-c^2$ you will obtain the expected result. |
Calculating distribution of variable $T$ if $T/N=n $ is distributed $exp(an+b) $ | You don't need to exchange the integral and the sum. Just calculate the integral:
$$(an+b)\int_c^de^{-(an+b)t}\,dt=-\left.e^{-(an+b)t}\right\vert_c^d=e^{-(an+b)c}-e^{-(an+b)d}.$$
Then the sum becomes:
\begin{align}
\sum\limits_{n=0}^{\infty} \frac{\lambda^n e^{-\lambda}}{n!}(e^{-(an+b)c}-e^{-(an+b)d})&=\sum\limits_{n=0}^{\infty} \frac{\lambda^n e^{-\lambda}}{n!}e^{-(an+b)c}-\sum\limits_{n=0}^{\infty} \frac{\lambda^n e^{-\lambda}}{n!}e^{-(an+b)d}\\
&=e^{-(\lambda+bc)}\sum\limits_{n=0}^{\infty} \frac{(\lambda e^{-ac})^n}{n!}-e^{-(\lambda+bd)}\sum\limits_{n=0}^{\infty} \frac{(\lambda e^{-ad})^n}{n!}\\
&=e^{-(\lambda+bc)}e^{\lambda e^{-ac}}-e^{-(\lambda+bd)}e^{\lambda e^{-ad}}.
\end{align}
In the last step we're using the series expansion $e^x=\sum\limits_{n=0}^\infty\frac{x^n}{n!}$.
Also, to get the distribution, it's enough to calculate $P(T<d)$ instead of introducing another variable $c$. Then you get $$P(T < d)= \sum_{n=0}^{\infty} P(N=n) P(T <d | N=n)= \sum_{n=0}^{\infty} \frac{\lambda^n e^{-\lambda}}{n!}(an+b) \int_{0}^d e^{-(an+b)t}\,dt, $$
which simplifies the above calculation by setting $c=0$ and you get that the sum becomes
$$1-e^{-(\lambda+bd)}e^{\lambda e^{-ad}}=1-e^{-\lambda(1-e^{-ad})-bd}.$$
I'm not sure if this distribution has a name. |
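A Monte Carlo check of this distribution function (a sketch, assuming NumPy; the values of $\lambda$, $a$, $b$, $d$ are made up, with $b>0$ so the rate is positive even when $N=0$):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, a, b, d = 2.0, 0.5, 1.0, 1.5

N = rng.poisson(lam, size=1_000_000)
T = rng.exponential(1.0 / (a * N + b))   # rate a*n + b, so the scale is its inverse

empirical = np.mean(T < d)
formula = 1 - np.exp(-lam * (1 - np.exp(-a * d)) - b * d)
print(empirical, formula)                # the two agree to a few decimals
```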
Prove that term of upper central series of a group $G$ is a characteristic subgroup of $G.$ | Let $\alpha \in {\rm Aut}(G)$. By inductive hypothesis, $\alpha(Z_{n-1}(G)) = Z_{n-1}(G)$. So $\alpha$ induces an automorphism $\overline{\alpha}$ of $G/Z_{n-1}(G)$ defined by $\overline{\alpha}(gZ_{n-1}(G)) = \alpha(g)Z_{n-1}(G)$.
Since $Z(G/Z_{n-1}(G))$ is characteristic in $G/Z_{n-1}(G)$, $\overline{\alpha}$ fixes $Z(G/Z_{n-1}(G))$. But, by definition, $Z(G/Z_{n-1}(G)) = Z_n(G)/Z_{n-1}(G)$ so, for any $g \in Z_n(G)$, $\overline{\alpha}(gZ_{n-1}(G)) = \alpha(g)Z_{n-1}(G)\in Z_n(G)/Z_{n-1}(G)$, and hence $\alpha(g) \in Z_n(G)$.
We have proved that $\alpha(Z_n(G)) \le Z_n(G)$. Similarly $\alpha^{-1}(Z_n(G)) \le Z_n(G)$, and applying $\alpha$ to this gives $Z_n(G) \le \alpha(Z_n(G))$. Hence $Z_n(G)$ is characteristic in $G$. |
Category theoretic description of evaluation of polynomials | Fix a base field $k$. Morphisms
$$\varphi : k[x_1, \dots x_n] \to A$$
from $k[x_1, \dots x_n]$ to any other $k$-algebra $A$ correspond to $n$-tuples of elements of $A$, as follows: if $(a_1, \dots a_n)$ is an $n$-tuple, the corresponding morphism is the evaluation morphism
$$\varphi_{(a_1, \dots a_n)} : k[x_1, \dots x_n] \ni f(x_1, \dots x_n) \mapsto f(a_1, \dots a_n) \in A.$$
So from this point of view, evaluation is the way to make explicit the universal property of the polynomial algebra. There are various other ways to say these things as well. |
Relationship between covariance matrix and its determinant | Hint: Try computing $\text{Var}(a_1X_1+a_2X_2+\dots+a_nX_n)$. What does it mean if a random variable has zero variance? |
Lipschitz constant of limit of functions part 2 | Let $\epsilon >0$. Then $Lip(f_n) <1+\epsilon$ for $n$ sufficiently large. Hence $d_Y(f_n(x),f_n(y)) \leq (1+\epsilon ) d_X(x,y)$ for all $x,y$, for $n$ sufficiently large. Letting $n \to \infty$ we get $d_Y(f(x),f(y)) \leq (1+\epsilon ) d_X(x,y)$ for all $x,y$. Letting $\epsilon \to 0$ we conclude that $Lip(f) \leq 1$. |
$f '' - (f ')^2 + f=0$; what is known about solutions? | The most insightful way to study this equation is to write it as a two-dimensional dynamical system, like
\begin{align}
f' &= g, \\
g' &= g^2 -f. \tag{1}
\end{align}
Questions about the shape of solutions can best be answered by looking at the orbits of solutions in the $(f,g)$ phase plane. If you consider such an orbit as a graph $g(f)$, then these obey the differential equation
\begin{equation}
\frac{\text{d} g}{\text{d} f} = \frac{g'}{f'} = g - \frac{f}{g}. \tag{2}
\end{equation}
The solutions of $(2)$ are given by
\begin{equation}
g(f) = \pm \sqrt{\frac{1}{2}+f + c_0 e^{2 f}}. \tag{3}
\end{equation}
As you can see, these functions $g(f)$ exist as long as $\frac{1}{2}+f + c_0 e^{2 f} \geq 0$. That is, the boundaries of the domain of $g(f)$ are given by the solutions to the equation $\frac{1}{2}+f + c_0 e^{2 f} = 0$, which yields
\begin{equation}
f = -\frac{1}{2}\left(1+W(2 c_0/e)\right), \tag{4}
\end{equation}
where $W$ is the Lambert W function. This function has two branches, which coincide at the left point of its domain of definition, which is at $-\frac{1}{e}$. This coincides with the observation that $\frac{1}{2}+f + c_0 e^{2 f} < 0$ for all $f$ if $c_0 < -\frac{1}{2}$. Also, the second branch of the Lambert $W$ function only exists as long as its argument lies in between $-\frac{1}{e}$ and $0$. Therefore, we can identify two limit values of $c_0$ (being $-\frac{1}{2}$ and $0$), separating two types of solutions.
For $-\frac{1}{2} < c_0 < 0$, the two graphs $(3)$ form a closed orbit. As $c_0 \to -\frac{1}{2}$, these orbits shrink to the point $(0,0)$ in the $(f,g)$ phase plane, which is an equilibrium of the system $(1)$. As $c_0 \to 0$, these periodic orbits grow larger and larger (in the phase plane), until they become unbounded. For $c_0 = 0$, the graph is given by the curve
\begin{equation}
g(f) = \pm \sqrt{\frac{1}{2} + f}. \tag{5}
\end{equation}
Remembering that $g = f'$, this yields a differential equation
\begin{equation}
f'^2 = \frac{1}{2} + f,
\end{equation}
which for $f(0) = 0$ yields
\begin{equation}
f(t) = \frac{t^2}{4} \pm \frac{t}{\sqrt{2}}. \tag{6}
\end{equation}
For all values $c_0>0$, the graph $(3)$ is unbounded, and hence the orbit on that graph is unbounded as well. In the phase plane, the curves $(5)$, which are traced out by the orbit of $(6)$, form a separatrix between the region in phase space where closed orbits exist ('within' the parabola), and where orbits are unbounded.
To calculate the period $T(c_0)$ of a periodic orbit for a certain $-\frac{1}{2} < c_0 < 0$, the graph of which exists between the two solutions $(4)$ (let's denote these by $f_\text{left}$ and $f_\text{right}$), you have to evaluate the integral
\begin{equation}
T(c_0) = 2\int_{f_\text{left}}^{f_\text{right}} \frac{1}{\sqrt{\frac{1}{2}+f + c_0 e^{2f}}}\,\text{d} f,
\end{equation}
which does not have a closed form expression.
For $c_0 > 0$, the large-$f$ behaviour of $(3)$ can be approximated as $g(f) \leadsto \pm\sqrt{c_0} e^f$ as $f \to \infty$. Solving $f' = \pm\sqrt{c_0}\, e^f$ shows that along these unbounded orbits, $f$ blows up in finite time: for large $f$, we can write
\begin{equation}
f(t) \leadsto - \log \left(\sqrt{c_0}\, |t - t^*|\right) \quad \text{as}\quad t \to t^*,
\end{equation}
where $t^*$ denotes the (forward or backward) blow-up time.
I strongly advise you to draw the phase plane associated to $(1)$, to get more insight into the behaviour of the system. You can use PPlane for that. |
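For a numerical look at the orbits, one can integrate $(1)$ and check that the quantity $c_0=(g^2-f-\tfrac12)e^{-2f}$ read off from $(3)$ is conserved (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    f, g = u
    return [g, g**2 - f]                       # the system (1)

def c0(f, g):
    return (g**2 - f - 0.5) * np.exp(-2 * f)   # invariant from (3)

# start inside the region of closed orbits: c0(0.5, 0) = -1/e lies in (-1/2, 0)
sol = solve_ivp(rhs, [0, 20], [0.5, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0, 20, 400)
f, g = sol.sol(t)
print(np.ptp(c0(f, g)))                        # essentially zero along the orbit
```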
Two Divergent series such that their sum is convergent. | How about
$$
1 + 2 + 3 + \ldots
$$
and
$$
-1 + (-2) + (-3) + \ldots
$$ |
sum of the eigenvalues = trace($A$)? | Yes. Just look at the characteristic polynomial (say of degree $n$). The trace equals minus the coefficient of $x^{n-1}$, which is also the sum of the roots of the characteristic polynomial (for any monic polynomial of degree $n$, the coefficient of $x^{n-1}$ is minus the sum of its roots). |
why do we need to discuss $E|g(X)|$, rather than $Eg(X)$ directly, if expectation does not exist? | Define $X^+=\max \{ X,0 \},X^-=\max \{ -X,0 \}$. There are really three cases:
$E[X^+]$ and $E[X^-]$ are both finite; this is the normal situation. In this case you can just define $E[X]=E[X^+]-E[X^-]$ and there's no problem.
Exactly one of $E[X^+]$ and $E[X^-]$ is infinite. In this situation it makes sense to interpret $E[X]=E[X^+]-E[X^-]=\pm \infty$. However many theorems actually need finite expectation anyway.
Both $E[X^+]$ and $E[X^-]$ are infinite. In this situation it turns out that there is no real sense in defining the expectation. You might intuitively expect that some symmetric truncation procedure along the lines of the Cauchy principal value would help...but it really doesn't. See Why do we ask for *absolute* convergence of a series to define the mean of a discrete random variable? for more. |
How to write combinations using double summation | The double-sum is equivalent to $$\underbrace{\left[\binom{\binom{n}{1}}{1}+\binom{\binom{n}{1}}{2}+\ldots+\binom{\binom{n}{1}}{n}\right]}_{i=1}+\underbrace{\left[\ldots\right]}_{i=2}+\ldots+\underbrace{\left[\binom{\binom{n}{n}}{1}+\binom{\binom{n}{n}}{2}+\ldots+\binom{\binom{n}{n}}{n}\right]}_{i=n}$$
So for $n=2$, it would have been
$$\underbrace{\left[\binom{\binom{2}{1}}{1}+\binom{\binom{2}{1}}{2}\right]}_{i=1}+\underbrace{\left[\binom{\binom{2}{2}}{1}+\binom{\binom{2}{2}}{2}\right]}_{i=2}=4$$ |
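The sum is easy to evaluate programmatically for small $n$ (a sketch using Python's math.comb):

```python
from math import comb

def double_sum(n):
    return sum(comb(comb(n, i), j)
               for i in range(1, n + 1) for j in range(1, n + 1))

print(double_sum(2))   # 4, matching the expansion above
```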
Proving if function is one to one? | We begin by noting that the sets $P_i = \{ x| \frac{i(i-1)}{2} < x \le \frac{i(i+1)}{2} \}$ are disjoint.
Now note that $\frac{(x+y)(x+y+1)}{2}+y$ is in the set $P_{x+y+1}$.
Similarly, we note that $\frac{(x'+y')(x'+y'+1)}{2}+y'$ is in the set $P_{x'+y'+1}$.
Therefore, we must have $x+y=x'+y'$. It follows that $x=x'$, $y=y'$. |