How to compute a derived tensor product? | In the case that $R_{\bullet}$, $A_{\bullet}$, and $B_{\bullet}$ are constant simplicial commutative rings, one can use the bar construction. Fix a monad $T:\mathcal{C}\to\mathcal{C}$ for which the $T$-algebras are $R$-algebras. For an $R$-algebra $S$, the bar construction $B(T,S)_{n}$ is $T^{n+1}S$ with face/degeneracy maps induced by the action of $T$.
The bar construction, when $T$ is a "free" functor, creates a simplicial resolution $B(T,S)_{\bullet}\to S$, hence a cofibrant replacement in the model category structure which can be used to compute
$S\otimes^{L}T_{\bullet}:=S^{c}\otimes T_{\bullet}$, where $S$ is a constant simplicial algebra and $S^{c}$ is any cofibrant replacement of $S$.
In general, the bar construction is rather unwieldy (the standard is the polynomial resolution where $T(S)=R[S]$). That shouldn't be too surprising though, for the same reason constructing projective resolutions of modules is unwieldy in the general case, even with finiteness tools like Hilbert's Syzygy Theorem. Fortunately, there are some pet examples where things are manageable, like $T:R\mbox{-}\mathbf{alg}\to R\mbox{-}\mathbf{alg}$, $T(S)=S\otimes_{R}R[y]$, where the $T$-algebras are $R[y]$-algebras.
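For concreteness, the manageable example can be carried out with a short Koszul-type resolution instead of the full bar construction: since $y$ is a nonzerodivisor, $R = R[y]/(y)$ has the free resolution $0\to R[y]\xrightarrow{\;\cdot y\;} R[y]\to R\to 0$ over $R[y]$, and applying $-\otimes_{R[y]}R$ kills the map $\cdot y$, giving
$$\pi_n\left(R\otimes^{L}_{R[y]}R\right)\cong\operatorname{Tor}_n^{R[y]}(R,R)\cong\begin{cases}R,&n=0,1,\\0,&n\ge2,\end{cases}$$
in agreement with the computation in the references below.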
See http://math.uchicago.edu/~amathew/SCR.pdf, section 4, for some bar constructions and a great example computing $\pi_{n}(R\otimes_{R[y]}^{L}R)$ using the manageable bar construction mentioned in the paragraph above. See https://arxiv.org/pdf/math/0609151.pdf, section 4, for the same exercise in a slightly different flavor, referenced by Mathew's paper. |
How to project the position vector of an ellipse onto a plane? | You just need to find a unit vector $\hat n$ giving the direction of the planet, and then multiply it by $r(\nu)$ given by your formula.
Suppose then your planet is at $(0,1,0)$: to carry it to its actual position $\hat n$ you need to perform first a rotation about $z$-axis by an angle $\omega+\nu$, then a rotation about $y$-axis by an angle $i$ and finally a rotation about $z$-axis by an angle $\Omega$:
$$
\hat n=
\pmatrix{\cos\Omega &-\sin\Omega & 0\\ \sin\Omega &\cos\Omega & 0\\ 0 & 0 & 1}\cdot
\pmatrix{\cos i & 0 &\sin i\\ 0 & 1 & 0 \\ -\sin i & 0 &\cos i}\cdot
\pmatrix{\cos(\omega+\nu) &-\sin(\omega+\nu) & 0\\ \sin(\omega+\nu) &\cos(\omega+\nu) & 0\\ 0 & 0 & 1}\cdot
\pmatrix{0\\ 1\\0},
$$
giving the result:
$$
\hat n=
\pmatrix{-\cos i \cos\Omega \sin(\omega+\nu)-\sin\Omega \cos(\omega+\nu)\\
\cos\Omega \cos(\omega+\nu)-\cos i \sin \Omega\sin(\omega+\nu)\\
\sin i \sin(\omega+\nu)}.
$$
Multiply that by
$$
r(\nu)=\frac{a(1-e^2)}{1+e\cos(\nu)}
$$
and you'll have the coordinates of the planet.
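For anyone who wants numbers, here is a minimal sketch of this recipe (the function name and sample parameter values are arbitrary; angles in radians):

    import numpy as np

    def planet_position(a, e, i, Omega, omega, nu):
        r = a * (1 - e**2) / (1 + e * np.cos(nu))        # r(nu) above
        c, s = np.cos(Omega), np.sin(Omega)
        Rz_Omega = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        c, s = np.cos(i), np.sin(i)
        Ry_i = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        c, s = np.cos(omega + nu), np.sin(omega + nu)
        Rz_w = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        n_hat = Rz_Omega @ Ry_i @ Rz_w @ np.array([0.0, 1.0, 0.0])
        return r * n_hat

    print(planet_position(a=1.0, e=0.1, i=0.2, Omega=0.5, omega=0.3, nu=1.0))
|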
How to evaluate $\int_0^{2\pi} e^{e^{i\theta}}{d\theta}$? | With $z=e^{i\theta}, dz=izd\theta$, so
$\int_0^{2\pi} e^{e^{i\theta}}{d\theta}=\frac{1}{i}\int_{|z|=1} \frac{e^z}{z}{dz}=2\pi$ by the residue theorem.
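A quick numerical sanity check: with $z=e^{i\theta}$ the integrand is $e^{\cos\theta}\bigl(\cos(\sin\theta)+i\sin(\sin\theta)\bigr)$, and indeed

    import numpy as np
    from scipy.integrate import quad

    re, _ = quad(lambda t: np.exp(np.cos(t)) * np.cos(np.sin(t)), 0, 2*np.pi)
    im, _ = quad(lambda t: np.exp(np.cos(t)) * np.sin(np.sin(t)), 0, 2*np.pi)
    print(re, im)   # 6.283185... (= 2*pi) and ~0
|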
intersection of three chords | Hint: use the butterfly theorem. |
Predicate Logic Question: Implications/Operations on the Empty Set | $% Predefined typography
\newcommand{\and} {~\text{and}~}
\newcommand{\setcomp}[2]{\left\{~{#1}~~\middle \vert~~ {#2}~\right\}}
$
Writing the statements symbolically:
$$\forall (s, p) ~:~ (s \in T \and p \in \text{Primes} \and p\mid s) \rightarrow 2 = p \tag{C1}$$
$$T = \setcomp{x}{x \in \mathbb N \text{ and } x^2 + x + 1 = 0} \tag{C2}$$
From (C2), we know that $T$ is the empty set. So (C1) is:
$$\forall (s, p) ~:~ (s \in \emptyset \and p \in \text{Primes} \and p\mid s) \rightarrow 2 = p$$
$$\forall (s, p) ~:~ (\text{false} \and p \in \text{Primes} \and p\mid s) \rightarrow 2 = p$$
$$\forall (s, p) ~:~ \text{false} \rightarrow 2 = p$$
$$\forall (s, p) ~:~ \text{true}$$
$$\text{true}$$
So (C2) implies (C1).
On the other hand, if we assume (C1) (assuming T is a subset of the natural numbers so that divisibility is defined):
$$\forall (s, p) ~:~ (s \in T \and p \in \text{Primes} \and p\mid s) \rightarrow 2 = p$$
$$T \subseteq \{1,~ 2,~ 4,~ 8,~ 16,~ 32,~ \dots\}$$
Does $T \subseteq \{1,~ 2,~ 4,~ 8,~ 16,~ 32,~ \dots\}$ imply that $T = \emptyset$? Sometimes yes, sometimes no. For example:
$$\begin{array} {c|c|c} \hline
\text{Value of T} & T \subseteq \{1,~ 2,~ 4,~ 8,~ 16,~ 32,~ \dots\} \rightarrow T = \emptyset & \text{..which is:} \\
\hline
T = \{3\} & \text{false} \rightarrow \text{false} & \text{true} \\
T = \{4, 8\} & \text{true} \rightarrow \text{false} & \text{false} \\
T = \{\} & \text{true} \rightarrow \text{true} & \text{true} \\
\end{array}$$
So $(C1) \rightarrow (C2)$ does not hold in general.
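A quick mechanical check of the table, encoding the implication $P\rightarrow Q$ as $\lnot P \lor Q$ (a small sketch; the set is truncated since the examples only involve small numbers):

    powers_of_two = {1, 2, 4, 8, 16, 32}   # truncated version of {1, 2, 4, 8, ...}
    for T in [{3}, {4, 8}, set()]:
        p = T <= powers_of_two   # T is a subset of the powers of two
        q = (T == set())         # T is the empty set
        print(T, (not p) or q)   # True, False, True, matching the table
|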
Is there a solution for parker's square? | This is an open problem. No example is known; nor is a proof that there is none.
Open Problem Garden says:
This question was first asked in 1984 by Martin LaBar and popularized in 1996 by Martin Gardner, who offered \$100 to the first person to construct such a square. In 2005 Christian Boyer offered €1,000 and a bottle of champagne for a solution to a somewhat easier problem. For a review of the history of research, see […]. For basic facts about the anticipated $ 3\times 3 $ magic square of squares, see […].
Some discussion is given on mathpages.com. |
Solve for $A$ if $\sin 2x-\cos2x= \sqrt{2}\sin(2x+A\pi)$; is then $0<A<2$? | Perhaps you can take a look at this: it is a small document on someone encountering almost the exact same problem as you. However, it does use complex numbers, so it requires you to be familiar with those.
However, as you are only interested in the value of $A$, and perhaps not in why the identity is true, you could use the following tactic: if the left-hand side is equal to the right-hand side, then they should be equal for $x = 0$ as well. What can you conclude from this? Does this already uniquely determine $A$ (up to the period of the sine function)? |
Why $f$ may not be continuous despite $\lim_{(x,y)\to (0,0)}f(0,0)=0$? | You are missing that $f(0,0)$ may be different from $0$. |
How do you evaluate $x = (5^2 \bmod 6)^4 \bmod 15?$ | $$\begin{eqnarray}
&& (5^2 \bmod 6)^4 \bmod 15 \\
&=& (25 \bmod 6)^4 \bmod 15 \\
&=& (19 \bmod 6)^4 \bmod 15 \\
&=& (13 \bmod 6)^4 \bmod 15 \\
&=& (7 \bmod 6)^4 \bmod 15 \\
&=& (1 \bmod 6)^4 \bmod 15 \\
&=& 1^4 \bmod 15 \\
&=& 1 \bmod 15 \\
&=& 1
\end{eqnarray}$$
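For the skeptical, a one-line check:

    print((5**2 % 6)**4 % 15)   # 1
|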
Is there a way to write the Cauchy-Riemann equation $\partial f / \partial \overline{z}=0$ without appealing to multivariable calculus? | After poking and prodding, I found that if I define the derivative as being able to find two complex numbers $L$ and $M$ such that:
$$\lim_{h\to 0} \frac{f(z+h)-f(z)-Lh-M\overline{h}}{h}=0$$
then this is exactly equivalent to treating $f$ as a function from $\mathbb{R}^2$ to $\mathbb{R}^2$ and finding a linear transformation $\mathbf{J}$ such that
$$\lim_{h\to 0} \frac{\| f(z+h)-f(z)-\mathbf{J}h\|}{\| h\|}=0$$
In this case, by analogy, it makes sense to write $L=\dfrac{\partial f}{\partial z}$ and $M=\dfrac{\partial f}{\partial \overline{z}}$, even when the function isn't complex differentiable in the usual sense.
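(Concretely, these are the standard Wirtinger derivatives, not spelled out in the post above: if $z=x+iy$, then
$$\frac{\partial f}{\partial z}=\frac12\left(\frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y}\right),\qquad \frac{\partial f}{\partial \overline{z}}=\frac12\left(\frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y}\right),$$
and the Cauchy-Riemann equations are exactly the single complex equation $M=\partial f/\partial\overline{z}=0$.)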
This has to do with the basis matrices in the linked post:
$$
\left\{\begin{bmatrix}1&0\\0&1\end{bmatrix},\begin{bmatrix}0&1\\-1&0\end{bmatrix}\right\}$$
(linear combinations of which are linear transformations representing multiplication by a complex number)
and:
$$\left\{\begin{bmatrix}1&0\\0&-1\end{bmatrix},\begin{bmatrix}0&1\\1&0\end{bmatrix}\right\}$$
(linear combinations of which are linear transformations representing first complex conjugation then multiplication by a complex number)
The four of these are general enough that linear combinations of them can represent any $L$ and $M$ acting on $h$ in the equation, or any $2\times 2$ matrix $\mathbf{J}$.
The best part is that this notation allows one to write $f(z+\Delta z)\approx f(z)+\frac{\partial f}{\partial z} \Delta z+\frac{\partial f}{\partial \overline{z}} \overline{\Delta z}$
However, the fact that no complex number exists that allows one to write $\frac{df}{dz}\Delta z=\frac{\partial f}{\partial z} \Delta z+\frac{\partial f}{\partial \overline{z}} \overline{\Delta z}$ reaffirms that it's probably a bad idea to abuse this notation,
and instead it's best to either stick to $\mathbb{R}^2\to \mathbb{R}^2$ and linear transformations, or functions for which $\frac{df}{dz}$ exists, and avoid switching between the two. |
Sampling with replacement events vs. fraction coverage of a specified set | Let $X_i$ take the value $1$ if card $i$ has been selected, $0$ otherwise. Then $$E(X_i)=P(X_i=1)=1-\left(1 -\frac{1}{N}\right)^m$$
Assume that the first $k$ cards are marked. Then, the expected number of marked cards that have been selected (and hence unmarked) is
$$E[\sum_{i=1}^k X_i]= k \left(1-\left(1 -\frac{1}{N}\right)^m\right) $$
So, the "expected coverage" $ E[\sum_{i=1}^k X_i]/k$ is the same as for the total number of cards. |
The complement of spanning trees is covered by a union of cycles | For each $e \in E\smallsetminus E(T)$, let $C_e$ be the unique cycle in $T+e$. If you take the "sum" modulo $2$ of all cycle $C_e$ for all $e \in E\smallsetminus E(T)$, you get a subgraph $H$ of $G$ all whose components are Eulerian, hence $H$ is the disjoint union of cycles. Also, $E\smallsetminus E(T) \subset E(H)$ since every $e \in E\smallsetminus E(T)$ is in exactly one cycle $C_e$ therefore it must be in "sum" modulo 2 (i.e $H$) |
Given a transformation matrix, how do I find out what it does? | It’s none of the above. You’ve already eliminated rotation and reflection by determining that the matrix is singular. A projection would satisfy $B^2=B$, so its only eigenvalues are $0$ and $1$, but $\operatorname{tr}B = 4-3\sqrt2$, so there’s at least one eigenvalue of $B$ that’s neither. |
Prove that $\lim\limits_{x\to 0^+}\sum\limits_{n=1}^\infty\frac{(-1)^n}{n^x}=-\frac12$ | We can write this sum in terms of the Dirichlet eta function or the Riemann zeta function:
$$
\begin{align}
\sum_{n=1}^\infty\frac{(-1)^n}{n^z}
&=-\eta(z)\\
&=-\left(1-2^{1-z}\right)\zeta(z)\tag1
\end{align}
$$
In this answer it is shown that $\zeta(0)=-\frac12$. Therefore,
$$
\bbox[5px,border:2px solid #C0A000]{\lim_{z\to0}\sum_{n=1}^\infty\frac{(-1)^n}{n^z}=-\frac12}\tag2
$$
Computing $\boldsymbol{\eta'(0)}$
Using the formula $\eta(z)\Gamma(z)=\int_0^\infty\frac{\,t^{z-1}}{e^t+1}\,\mathrm{d}t$, we get
$$
\begin{align}
\eta(z)\Gamma(z+1)
&=\int_0^\infty\frac{z\,t^{z-1}}{e^t+1}\,\mathrm{d}t\\
&=\int_0^\infty\frac1{e^t+1}\,\mathrm{d}t^z\\
&=\int_0^\infty\frac{t^z\,e^t}{\left(e^t+1\right)^2}\,\mathrm{d}t\tag3
\end{align}
$$
As shown in this answer,
$$
\int_0^\infty\log(t)\,e^{-t}\,\mathrm{d}t=-\gamma\tag4
$$
Using Gautschi's Inequality, $\lim\limits_{n\to\infty}\frac{\Gamma(n+1)}{\Gamma\left(n+\frac12\right)}\frac{\Gamma(n+1)}{\Gamma\left(n+\frac32\right)}=1$; therefore,
$$
\begin{align}
\prod\limits_{k=1}^\infty\frac{2k}{2k-1}\frac{2k}{2k+1}
&=\lim_{n\to\infty}\prod\limits_{k=1}^n\frac{k}{k-\frac12}\frac{k}{k+\frac12}\\
&=\lim_{n\to\infty}\frac{\Gamma(n+1)\,\color{#090}{\Gamma\left(\frac12\right)}}{\Gamma\left(n+\frac12\right)}
\frac{\Gamma(n+1)\,\color{#090}{\Gamma\left(\frac32\right)}}{\Gamma\left(n+\frac32\right)}\\[6pt]
&=\color{#090}{\frac\pi2}\tag5
\end{align}
$$
Taking the derivative of $(3)$ and evaluating at $z=0$, we have
$$
\begin{align}
\eta'(0)-\frac12\gamma
&=\int_0^\infty\log(t)\frac{e^{-t}}{(1+e^{-t})^2}\,\mathrm{d}t\tag{6a}\\
\eta'(0)
&=-\int_0^\infty\log(t)\,\mathrm{d}\left(\frac{e^{-t}}{1+e^{-t}}-\frac{e^{-t}}2\right)\tag{6b}\\
&=-\frac12\int_0^\infty\log(t)\,\mathrm{d}\frac{e^{-t}(1-e^{-t})}{1+e^{-t}}\tag{6c}\\
&=\frac12\int_0^\infty\frac{e^{-t}(1-e^{-t})}{t\left(1+e^{-t}\right)}\,\mathrm{d}t\tag{6d}\\
&=\frac12\sum_{k=1}^\infty(-1)^{k-1}\int_0^\infty\frac{e^{-kt}-e^{-(k+1)t}}t\,\mathrm{d}t\tag{6e}\\
&=\frac12\sum_{k=1}^\infty(-1)^{k-1}\log\left(\frac{k+1}k\right)\tag{6f}\\
&=\frac12\sum_{k=1}^\infty\log\left(\frac{2k}{2k-1}\frac{2k}{2k+1}\right)\tag{6g}\\
&=\frac12\log\left(\frac\pi2\right)\tag{6h}
\end{align}
$$
Explanation:
$\text{(6a)}$: taking the derivative of $(3)$ and evaluating at $z=0$
$\text{(6b)}$: prepare to integrate by parts and subtract half of $(4)$ from both sides
$\text{(6c)}$: combine fractions
$\text{(6d)}$: integrate by parts
$\text{(6e)}$: write $\frac{e^{-t}}{1+e^{-t}}$ as a series in $e^{-t}$
$\text{(6f)}$: apply Frullani's Integral
$\text{(6g)}$: combine adjacent positive and negative terms
$\text{(6h)}$: apply $(5)$
Equation $(6)$ justifies Claude Leibovici's comment on the question that for $z$ slightly above $0$,
$$
\sum_{n=1}^\infty\frac{(-1)^n}{n^z}\sim-\frac12-\frac z2\,\log\left(\frac\pi2\right)\tag7
$$
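As a numerical sanity check, mpmath's altzeta is the Dirichlet eta function, so both $\eta'(0)$ and the limit $(2)$ can be verified directly:

    from mpmath import mp, altzeta, diff, log, pi

    mp.dps = 30
    print(diff(altzeta, 0))   # 0.2257913...
    print(log(pi/2) / 2)      # the same value, as in (6)
    print(-altzeta(0))        # -0.5, the limit in (2)
|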
weak-weak convergence of continuous operator | If $F$ is linear you can use the adjoint $F^*$ of $F$:
$$\langle F(x_k),y\rangle=\langle x_k,F^*(y)\rangle\rightarrow \langle x,F^*(y)\rangle=\langle F(x),y\rangle.$$
If $F$ is nonlinear, I am not sure, if the statement is still correct. The counterexample I have in mind looks as follows, but I did not check every detail, since I am a bit short on time right now:
Put
$$H=l^2:=\{(a_k)_{k\in\mathbb{N}}\subset \mathbb{R}|\ \sum_{k=1}^\infty(a_k)^2<\infty\}.$$
and consider the sequence of elements $x_k=(\delta_{ki})_{i\in\mathbb{N}}\in l^2$ for $k\in\mathbb{N}$.
$x_k$ converges weakly to zero. Now put
$$F(x)=(\|x\|_{l^2},0,0,\ldots).$$
Here
$$\|a\|_{l^2}^2=\sum_{k=1}^\infty(a_k)^2$$
is the norm of $l^2$.
Hence
$$F(x_k)=(1,0,\ldots),$$
which does not converge weakly to zero. |
Solution to Mordell's Equation $y^2=x^3+4$ | If you are looking for integer points, then the LMFDB curve 108.a2 (Cremona label 108a1) $\,y^2=x^3+4,\,$
has the information that
$\,x=0, y=\pm2 \,$ are the only integer solutions. There is more information in the entry, such as the Mordell-Weil group structure is
$\,\mathbb{Z}/3\mathbb{Z}.$
Continuing from your attempt: with $\,x=2x_1,\ y=2y_1\,$, your last equation gives
$\,(y_1-1)(y_1+1) = 2x_1^3. \,$ If $\,y_1\,$ is even we get a contradiction,
thus $\,y_1\,$ is odd. But now, if $\,x_1\,$ is odd we get a contradiction,
thus $\,x_1\,$ is even. Thus, let $\,x_1 = 2x_2, y_1 = 2y_2 + 1.\,$
The equation is now $\,2y_2(2y_2+2) = 8y_2(y_2+1)/2 = 16x_2^3.\,$
This reduces to $\,y_2(y_2+1)/2 = 2x_2^3\,$ where the left side is a
triangular number which is twice a cubic number (this is close to but
not the same as a
cubic triangular number). The obvious solutions
are $\,x_2=0,y_2=0\,$ and $\,x_2=0,y_2=-1.$ |
How do you solve this step by step? Exponential and logarithmic function | Familiarize yourself with the basic properties of logarithms.
$$
\ln\left(\frac{A}{B}\right) = \ln(A) - \ln(B)
$$
$$
\ln(A^x) = x\cdot \ln(A)
$$
The above should help.
Make sure you understand what a logarithm to the base $e$ is if you do not already and the answer should follow immediately. |
Chi-square with $n$ degrees of freedom,Normal distribution | Starting with your last questions first: yes, the $\chi^2$ distribution with $k$ degrees of freedom is normally defiined as being the sum of the squares of $k$ independent $N(0,1)$ distributions.
For the moment generating function, note that since the MGF of a sum of independent variables is the product of the MGFs, if $M_k$ denotes the moment generating function of $\chi^2(k)$ then
$$M_k(s) = M_1(s)^k.$$
Then to derive $M_1(s)$, writing $X$ for a $N(0,1)$ variable and $f$ for its pdf,
\begin{align*}
M_1(s) & = \mathbf E \left[ e^{sX^2} \right] \\
& = \int_{-\infty}^\infty \exp(sx^2) f(x) dx \\
& = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty \exp\left(sx^2\right) \exp\left(-x^2/2\right) dx\\
& = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty \exp\left((s-1/2)x^2\right) dx\\
& = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty \exp\left(-\frac{x^2}{2 (1-2s)^{-1}}\right) dx\\
& = \frac{\sqrt{2 \pi (1-2s)^{-1}}}{\sqrt{2 \pi} }\\
& = \left(1 - 2s\right)^{-\frac12}
\end{align*}
where in the second to last line we used the fact that the integrand is the un-normalized pdf of a $N\left(0, (1-2s)^{-1} \right)$ distribution.
From the above we then get
$$ M_k(s) = M_1(s)^k = (1 - 2s)^{-k/2}.$$
Why is the MGF of a sum of independent variables the product of their MGFs?
This follows from the fact that given independent random variables $X,Y$ and functions $U,V$ then
$$ \mathbf E [ U(X) V(Y)] = \mathbf E[U(X)] \mathbf E[V(Y)]$$
In the special case that $U_s(x) = V_s(x) = \exp(sx)$ then
$$U_s(X)V_s(Y) = \exp(sX)\exp(sY) = \exp(s(X+Y)),$$
so the result about the MGFs follows.
To justify the general claim, we note that the joint distribution function of two independent variables satisfies $f_{X,Y}(x,y) = f_X(x)f_Y(y)$ and then
\begin{align*}
E [ U(X) V(Y)] &= \int_{-\infty}^\infty \int_{-\infty}^\infty U(x) V(y) f_{X,Y}(x,y) dx dy \\
& = \int_{-\infty}^\infty \int_{-\infty}^\infty U(x) f_X(x) V(y)f_Y(y) dx dy \\
& = \int_{-\infty}^\infty U(x)f_X(x) \left(\int_{-\infty}^\infty V(y)f_Y(y)dy \right) dx \\
& = \int_{-\infty}^\infty U(x)f_X(x) \mathbf E[V(Y)] dx\\
& = \mathbf{E}[V(Y)] \int_{-\infty}^\infty U(x)f_X(x) dx\\
& = \mathbf E[V(Y)] \mathbf E[U(X)]\\
\end{align*}
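A Monte Carlo sanity check of the closed form (a small sketch with arbitrary $k$ and $s<\tfrac12$):

    import numpy as np

    rng = np.random.default_rng(0)
    k, s, n = 3, 0.2, 2_000_000
    chi2_samples = (rng.standard_normal((n, k))**2).sum(axis=1)  # chi-square(k)
    print(np.exp(s * chi2_samples).mean())   # empirical MGF at s
    print((1 - 2*s)**(-k/2))                 # closed form, ~2.1517
|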
Exercise on Boolean closure of elementary sets | For the intersection, this is what I have.
$$E \cap F = \left(\bigcup\limits_{i=1}^mA_i \right) \cap \left(\bigcup\limits_{j=1}^nB_j \right) = \bigcup_{\substack{1\leq i \leq m\\ 1\leq j \leq n}} \left( A_i \cap B_j \right).$$ We turned the intersection of unions into a union of intersections. |
Are these two curves birationally equivalent? | In PARI/GP:
? ellinit(ellfromeqn(2*x^4+17*x^2+12-y^2)).j
%1 = 384200066/111747
? ellinit(ellfromeqn(x^4-17-2*y^2)).j
%2 = 1728
The curves are not birationally equivalent because they have different $j$-invariants. |
Resultant of homogeneous polynomials and composition | We have a solution at long last.
First a very general comment: we can freely move back and forth between the worlds of "homogeneous degree $d$ polynomials in $X,Y$" and "polynomials in $z$ of degree $d$" by homogenizing and de-homogenizing. In fact what we have is a one-to-one correspondence which preserves all the nice resultant statements we work with. Strictly speaking, the question only defines the resultant for homogeneous polynomials in $X$ and $Y$, but I bet you can easily guess the correct definition for the "inhomogeneous resultant."
Here are the facts about resultants I used to solve the problem: let $f(z), g(z) \in K[z]$ be polynomials of degree $d$ and $e$ respectively, and suppose that we can factor
$$
f(z) = a_0 \prod_{i=1}^d (z - \alpha_i), \ \ g(z) = b_0 \prod_{j=1} ^e (z - \beta_j)
$$
Then we have:
(1) $\mathrm{Res}(f , g) = a_0^e b_0 ^d \prod_{i=1}^d \prod_{j=1}^e (\alpha_i - \beta_j)$;
(2) $\mathrm{Res}(f_1 f_2 , g) = \mathrm{Res}(f_1 , g) \cdot \mathrm{Res}(f_2 , g)$ for all $f_1 , f_2 \in K[z]-\{0\}$;
(3) if $e=d$, then $\mathrm{Res}(f-\alpha g , f-\beta g) = (\alpha - \beta)^d \cdot \mathrm{Res}(f , g)$, and also $\mathrm{Res}(\lambda f , g) = \lambda^d \mathrm{Res}(f , g)$ and $\mathrm{Res}(f , \mu g) = \mu ^d \mathrm{Res}(f , g)$ for all $\lambda$ and $\mu$.
These statements have equally valid counterparts for homogeneous resultants according to our "homogenize/de-homogenize" machinery.
Suppose that $F$ and $G$ factor as
$$
F(X,Y) = a_0 \prod_{i=1}^D (X-\alpha_i Y), \ \ G(X,Y) = b_0 \prod_{j=1}^D (X- \beta_j Y)
$$
Then by definition
$$
A(X,Y) = a_0 \prod_{i=1}^D (f-\alpha_i g), \ \ B(X,Y) = b_0 \prod_{j=1}^D (f- \beta_j g)
$$
Using the above and the properties (1),(2), and (3), we see that
\begin{align*}
\mathrm{Res}(A , B) &= \mathrm{Res}\left(a_0 \prod_{i=1}^D (f - \alpha_i g) , b_0 \prod_{j=1}^D (f - \beta_j g)\right) \\[0.5em]
&= (a_0 b_0)^{dD} \mathrm{Res}\left( \prod_{i=1}^D (f - \alpha_i g) , \prod_{j=1}^D (f - \beta_j g)\right) \\[0.5em]
&= (a_0 b_0)^{dD} \prod_{i=1}^D \prod_{j=1}^D \left( (\alpha_i - \beta_j)^d \mathrm{Res}(f , g) \right) \\[0.5em]
&= (a_0 b_0)^{dD} \mathrm{Res}(f , g)^{D^2} \prod_{i=1}^D \prod_{j=1}^D ((\alpha_i - \beta_j)^d) \\[0.5em]
&= \mathrm{Res}(f , g)^{D^2} \left((a_0 b_0)^D \prod_{i=1}^D \prod_{j=1}^D (\alpha_i - \beta_j)\right)^d \\[0.5em]
&= \mathrm{Res}(f , g)^{D^2} \cdot \mathrm{Res}(F , G)^d
\end{align*}
as desired.
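A numerical spot-check of the identity with SymPy, in the de-homogenized setting (a sketch with arbitrary small data; the leading coefficients are chosen so that no factor $f-\alpha_i g$ drops degree):

    from sympy import symbols, resultant, expand, prod

    z = symbols('z')
    f, g = z**2 + 1, 2*z**2 + z          # d = 2
    alphas, betas = [1, 2], [3, 4]       # roots of F and G, so D = 2
    F = expand(prod(z - a for a in alphas))
    G = expand(prod(z - b for b in betas))
    A = expand(prod(f - a*g for a in alphas))
    B = expand(prod(f - b*g for b in betas))

    d, D = 2, 2
    lhs = resultant(A, B, z)
    rhs = resultant(f, g, z)**(D**2) * resultant(F, G, z)**d
    print(lhs, rhs, lhs == rhs)          # both 90000, True
|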
Norm closure of diagonalizable operator on Hilbert space | Yes, that's how it works. You need to check that a diagonalizable operator $T=\sum_j\lambda_jP_j$ is normal, which is easy to do.
And, as you say, normal operators are limits of diagonalizable operators.
The two aforementioned facts, together, tell you that the normal operators are precisely the closure of the set of diagonalizable operators. |
On continuous functions | The point of the exercise is this:
The set $\{x \in (0,1] : f(x)=0 \}$ obviously has an infimum, because it is non-empty and bounded below. But is this infimum necessarily a minimum? That is, does the infimum itself belong to the set? This is what you have to prove.
It seems to me that you missed this. It also seems that you must have misunderstood your professor -- what he allegedly said makes no sense to me.
(Also, I presume you mean "archimedean proof". But what do you mean by this?) |
Closed Graph of a function and the continuous inverse implies the image is closed. | Suppose $y_n$ is in $B$ convergent to some $y$. It follows that $y_n$ is a Cauchy sequence, since $T_{-1}$ is continuous it is Lipschitz continuous you have that $x_n:=T_{-1}(y_n)$ is also Cauchy, as such it admits a limit $x\in X$.
So $x_n\in A$ is a convergent sequence in $X$ with $T(x_n)=y_n$ also convergent. Since $T$ has closed graph, the limit pair satisfies $x\in A$ and $y=T(x)\in B$. |
Is it good to learn measure theory before topology or the other way around? | I think learning topology makes analysis easier to understand: Say, in Euclidean space, a convergent sequence cannot have two distinct limits. In classical mathematical analysis one used to pick $\epsilon_{0}=\dfrac{|a-b|}{2}$ to deduce the contradiction, where $a,b$ are the limit values. But if we switch into a topological point of view, convergent sequence means eventually its elements lie into a neighbourhood of $a$, and also in a neighbourhood of $b$. Since in Euclidean we can make those two neighbourhoods to be disjoint, the largest radius to the disjoint neighbourhoods would be $\dfrac{|a-b|}{2}$, that is the reason why we can get a contradiction in assuming the eventually elements lying in those two disjoint neighbourhoods.
It is the Hausdorff property which explains why needs to pick $\dfrac{|a-b|}{2}$ (or smaller) to be a candidate, this is hard to see if we merely do the usual $\epsilon$-argument along with triangle inequalities, it is simply a mess. But with the aid of topology, everything is clear, the strategy makes sense. |
Proving solutions of a system of differential equations satisfy certain equation | Hint: calculate the derivative of $u$ with respect to $t$. |
Calculate sum with binomial coefficients: $\sum_{k=0}^{n} \frac{1}{k+1} \binom nk x^{k+1}$ | Hint
$$\sum \limits_{k=0}^{n} \frac{1}{k+1}{n\choose k}x^{k+1}=\int_0^x \sum \limits_{k=0}^{n} {n\choose k}t^{k}dt=\int_0^x(1+t)^ndt$$
Can you take it from here? |
three dimension and maxima and minima | Put
$$\alpha:={\pi\over2}+x,\quad \beta:={\pi\over2}+y,\quad \gamma:={\pi\over2}+z\ .$$
Then maximizing/minimizing $\alpha+\beta+\gamma$ under the constraints
$$\cos^2\alpha+\cos^2\beta+\cos^2\gamma=1,\qquad\alpha,\ \beta,\ \gamma\in[0,\pi]$$ is the same as maximizing $$s(x,y,z):=x+y+z$$
under the constraints
$$g(x,y,z):=\sin^2 x+\sin^2 y+\sin^2 z=1,\qquad x,\ y,\ z\in\left[-{\pi\over2},{\pi\over2}\right]\ .\tag{1}$$
The equation $g(x,y,z)=1$ defines a surface $S\subset{\mathbb R}^3$ with the symmetries of an octahedron. In fact $S$ contains the $12$ edges of an octahedron with diameter $\pi$; see the following figure.
Inspecting this figure we come to the tentative conclusion that $s$ is maximal when $x=y=z>0$, which leads to $\sin^2 x={1\over3}$, or $x=\arcsin{1\over\sqrt{3}}$. Therefore we conjecture
$$s_{\rm max}=3\arcsin{1\over\sqrt{3}}\doteq1.84644>{\pi\over2}\ ,\tag{2}$$
and because of the inherent symmetry we would have $s_{\rm min}=-s_{\max}$. The two extremal values of $\alpha+\beta+\gamma$ would then be ${3\pi\over2}$ smaller than the corresponding extremal values of $s$.
In order to prove $(2)$ we note
$$\nabla g(x,y,z)=\bigl(\sin(2x),\sin(2y),\sin(2z)\bigr)\ .$$
It follows that the only points ${\bf p}\in S$ where $\nabla g({\bf p})={\bf 0}$ are the six vertices of the octahedron. The values of $s$ in these points are $\pm{\pi\over2}$, i.e., not exceeding $(2)$. All other points of $S$ are regular points of $g$, and Lagrange's method can be used to bring the conditionally stationary points of $s$ to the fore. One computes
$$\nabla s-\lambda\nabla g=\bigl(1-\lambda\sin(2x),1-\lambda\sin(2y),1-\lambda\sin(2z)\bigr)\ .$$
It follows that at a conditionally stationary point one has
$$\sin(2x)=\sin(2y)=\sin(2z)\ne0\ .\tag{3}$$
This implies that there is a $u\in\bigl[0,{\pi\over4}\bigr]$ with
$$x={\pi\over 4}\pm u,\quad y={\pi\over 4}\pm u, \quad z={\pi\over 4}\pm u$$
or the opposite of these. A priori the $\pm$-signs can be chosen independently. But two or more $+$-signs immediately lead to a violation of $(1)$, and $x={\pi\over 4}+ u$, $\> y={\pi\over 4}- u$ implies $\sin^2 x+\sin^2 y=1$, whence $\sin z=0$, which is forbidden by $(3)$. This allows us to conclude that in fact
$$x=y=z={\pi\over 4}- u\ ,$$
and using $(1)$ we arrive at the conjectured solution $(2)$. |
Does this set with this operation define a group? | The problem here is associativity. Indeed, you have $$a_i=a_i*(a_j*a_k)\ne (a_i*a_j)*a_k=a_k$$ unless $a_i=a_k$ for every $i,k$, i.e., there is at most one non trivial element $a$. So $A=\{e,a\}$ which is of course isomorphic to $\Bbb{Z}/2$.
Edit: As whacka pointed out, I assumed that you meant $k*e=e*k=k$, otherwise it is obvious that $A=\{e\}$. |
Decidable predicates | We are talking natural numbers, and for simplicity we'll ignore the case where $x$ or $y$ is zero as waste cases to be dealt with separately.
Then $x$ is a multiple of $y$ just in case, for some $k \leq x$, $x = ky$. The obvious program structure to test whether this is so, for input $x$ and $y$, is:
for $k = 1$ to $x$,
compute $ky$
if $ky = x$ print "yes" and exit
else loop
print "no".
And there you are!
And yes, it is decidable whether $x$ is prime (by deciding whether it is a multiple of any smaller number other than 1).
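A direct transcription into Python (a sketch; the function names are mine):

    def is_multiple(x: int, y: int) -> bool:
        # x, y positive: search k = 1, ..., x exactly as in the loop above
        return any(k * y == x for k in range(1, x + 1))

    def is_prime(x: int) -> bool:
        # decide primality by testing multiples of smaller numbers other than 1
        return x > 1 and not any(is_multiple(x, y) for y in range(2, x))

    print(is_multiple(12, 3), is_prime(13), is_prime(12))   # True True False
|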
Circular logic in the concept of Godel numbers | If you have programming background, you should be able to understand this computability-based explanation of the incompleteness theorems, at least until the section titled "Explicitly independent sentence". It will take a significant amount of time and mental effort to work through it, but I can guarantee it is much easier to grasp than a rigorous explanation using the conventional approach (i.e. via the fixed-point lemma).
I cannot really make sense of your doubt about circularity, and I suspect (as you also did) that it is due to your current lack of a rigorous proof of the incompleteness theorem. So perhaps after you understand the proof you will either have no more doubt or you will be able to make precise your inquiry. In the meantime, it might be worth keeping in mind that the incompleteness theorems are themselves theorems of some formal system MS, which is often called the meta-system. MS does not need to assume much; it more or less just needs to support basic reasoning about finite strings, so that you can reason about programs and program execution, which are used (as per the linked post) to define general formal systems, and so that you can reason about formal systems that can reason about programs.
I also want to note that Godel numbering is not actually the core of the incompleteness theorems. It is necessary if you want to prove that theories of arithmetic like PA or PA− or Q are incomplete, but the incompleteness phenomenon is not due to the ability to encode finite sequences of natural numbers as a single natural number and decode it via arithmetical formula. I say a bit more here.
But note that a formal system that is able to reason about programs of course can reason about its own proof verifier, at least to verify that itself proves a theorem if it really does. That is not circular in any sense; an analogy is that you can write a program in any decent programming language L that expects an input (P,X,k) where P is a program in L and X is an input for P and k is a natural number, and outputs "yes" if P on input X halts within k steps, but outputs "no" otherwise. This program is written in L and verifies halting execution of programs written in L. No circularity! |
Find real-valued sequences $x(n)$ for which $c^{x(n)} = o(1/n )$ | Edited
Since $\log(c)<0$ we have $x=1/o(1/\log(n))$
For a fixed constant $c$, assuming $x$ is an integer, this sum is $\frac{c^x-c^{n}}{1-c}$. Since $c^n=o(1/n)$, the condition on $x$ remains the same. |
Find the remainder of $49!$ modulo $53$ | Hint: Find the multiplicative inverse, modulo $53$, of $51\cdot 50 \equiv (-2)(-3)=6$. |
Unit circle method for converting complex number into polar coordinate | For the unit circle, each time you reach another axis the sine and cosine of an angle change from negative to positive (and vice versa). Finding $\arg (a+bi)$ boils down to which quadrant you're in to get the correct angle.
Thus, using definition of $\arg (a+bi) = \arctan \frac {b}{a}$, and that $k \in \mathbb Z$...
(a) $-3i \rightarrow 0-3i$, so in this case $a=0$ and $b=-3$. Thus $\arg (0-3i) \rightarrow \arctan \frac {-3}{0}$, which only occurs when $\theta = \frac {3\pi}{2}+2 \pi k$. (Here $\tan \theta$ is undefined at $\frac {(2k+1)\pi}{2}$; we use $3\pi /2$ since $\sin \theta=-1$ and $\cos \theta=0$ on the negative imaginary axis.) Since $|0-3i| = 3$, we can then write $-3i$ as $3e^{3i\pi/2}$.
(b) ${-\sqrt 3 +i}$ yields $a=-\sqrt{3}$ and $b=1$, so $\arg ({-\sqrt 3 +i}) = \arctan (-\frac {\sqrt{3}}{3})$, which only occurs when $\theta = \frac {5\pi}{6}+2 \pi k$. ($\tan \theta$ is negative in the second and fourth quadrants.) Since $|{-\sqrt 3 +i}| = 2$, we can then write ${-\sqrt 3 +i}$ as $2e^{5i\pi/6}$. |
Calculate $\lim_{n \to \infty}\frac{a_{n+1}}{a_n}$ while using MVT. | You have $a_{n+1} = g(a_n)$ with $g(x) = 2^x - 1$. Then
$$
\frac{a_{n+1}}{a_n} = \frac{g(a_n)-g(0)}{a_n - 0}
$$
It has already been shown that $(a_n)$ converges to zero. It follows that
$$
\lim_{n\to \infty }\frac{a_{n+1}}{a_n} = \lim_{n\to \infty }\frac{g(a_n)-g(0)}{a_n - 0}
= \lim_{x\to 0 } \frac{g(x)-g(0)}{x - 0}= g'(0) = \ln 2.
$$
simply from the definition of the derivative. |
Independence of Function of Random Vectors | Your thinking is right. For any given i.i.d sequence $(X_1(k): k \geq 0)$ we can construct an i.i.d sequence $(X_2(k): k \geq 0)$ such that $X_1(0)X_1(1)=X_2(3)$. In this case then first component of $Y$ is same as the second component of $Z$ so $Y$ and $Z$ are not independent as long as $X_1(0)X_1(1)$ is not almost surely constant. |
Is every finitely generated simple group $2$-generated? | I haven't read it thoroughly enough to understand the construction, but this 1986 paper by Guba constructs a finitely generated simple group all of whose $2$-generated subgroups are free, and which is therefore not itself $2$-generated. From the introduction it seems that the question you asked was open until then. |
Ramanujan series Type $\sum _{k=1}^{\infty } \frac{\sinh (2 \pi k)}{2 \sqrt{2} \pi ^9 k^{11} (1-\cosh (2 \pi k))}$ | Both of your sums can be brought under same context. Let $\zeta \notin \mathbb{R}$, integrate $$f(z)=\frac{{\cot \pi \zeta z\cot \pi z}}{{{z^n}}}$$ around a big circle centered at origin. When $n\geq 2$, the integral around big circle $\to 0$. $f(z)$ has poles at $z=k, k\zeta^{-1}$ for $k\in \mathbb{Z}$, if $n$ is moreover odd, then
$$\tag{*}\sum_{k = 1}^\infty {\left( {\frac{{\cot \pi \zeta k}}{{{k^n}}} + {\zeta ^{n - 1}}\frac{{\cot \pi {\zeta ^{ - 1}}k}}{{{k^n}}}} \right)} = - \frac{\pi }{2}{\mathop{\rm Res}\nolimits} [\frac{{\cot \pi \zeta z\cot \pi z}}{{{z^n}}},z = 0]$$
This is essentially the functional equation here.
When $\zeta = i$, $n\equiv 1\pmod{4}$, LHS of $(*)$ becomes
$$-2i\sum_{k = 1}^\infty \frac{\coth \pi k}{k^n}$$ so this sum
can be explicitly calculated. As pointed out by a comment, when $n=11$, this is your first sum.
When $\zeta = e^{\pi i /4}$, $n\equiv 5\pmod{8}$, LHS of $(*)$ becomes $$2i\sum\limits_{k = 1}^\infty {\frac{{\sinh \sqrt 2 \pi k}}{{\cos (\sqrt 2 \pi k) - \cosh (\sqrt 2 \pi k)}}\frac{1}{{{k^n}}}}$$ this is your second sum when $n=13$.
When $\zeta = e^{\pi i /4}$, $n\equiv 1\pmod{8}$, LHS of $(*)$ becomes $$-2\sum\limits_{k = 1}^\infty {\frac{{\sin \sqrt 2 \pi k}}{{\cos (\sqrt 2 \pi k) - \cosh (\sqrt 2 \pi k)}}\frac{1}{{{k^n}}}}$$ for example when $n=17$, this equals $-\frac{41 \pi ^{17}}{181976169375 \sqrt{2}}$.
When $\zeta=e^{\pi i/3}$, $\cot \pi {\zeta ^{ \pm 1}}k = \frac{{ \pm i\sinh \sqrt 3 \pi k}}{{{{( - 1)}^k} - \cosh \sqrt 3 \pi k}}$, so LHS of $(*)$ becomes $$i(1 - {\zeta ^{n - 1}})\sum\limits_{k = 1}^\infty {\frac{{\sinh \sqrt 3 \pi k}}{{{{( - 1)}^k} - \cosh \sqrt 3 \pi k}}\frac{1}{{{k^n}}}}$$ For instance, when $n=11$, we have $$\sum\limits_{k = 1}^\infty {\frac{{\sinh \sqrt 3 \pi k}}{{{{( - 1)}^k} - \cosh \sqrt 3 \pi k}}\frac{1}{{{k^{11}}}}} = -\frac{{7457{\pi ^{11}}}}{{1277025750\sqrt 3 }}$$
There are also formulas of comparable simplicity when $\zeta = e^{\pi i/6}$.
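The $n=11$ evaluation above is easy to confirm numerically:

    from mpmath import mp, nsum, sinh, cosh, sqrt, pi, inf

    mp.dps = 30
    s = nsum(lambda k: sinh(sqrt(3)*pi*k)
             / (((-1)**int(k) - cosh(sqrt(3)*pi*k)) * k**11), [1, inf])
    print(s)                                        # -0.9918...
    print(-7457 * pi**11 / (1277025750 * sqrt(3)))  # the same value
|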
Isomorphisms, & Automorphism of groupoids | Take a category with a single object $X$ and let $\operatorname{Mor}(X,X)=G$ with the group operation as composition.
This answers both parts. |
What is the equation of the line defined by $P(-1,1)$ and $Q(0,2)$? | Your solution is good, except for a small arithmetic mistake: the equation $1=1(-1)+k$ implies $k=2$ since $$1=1(-1)+k\implies 1=-1+k\implies 1-(-1)=k\implies k=2.$$ |
Conditions for unimodality of a sum increasing and decreasing functions | A simple sufficient condition for $h(x,y)$ to have a unique maximum is that it is strictly concave and its value at some point is greater than its maximum on some closed curve surrounding that point. If $f$ is concave and $g$ is strictly concave (or vice versa), then $h$ is strictly concave. |
How can one manipulate an equation within variance? | $$V(\mathcal F)=V(\alpha\hat\theta_1+(1-\alpha)\hat\theta_2)=V(\alpha\hat\theta_1+\hat\theta_2-\alpha\hat\theta_2)\color{red} \neq\alpha^2V(\hat\theta_1)+V(\hat\theta_2)+\alpha^2V(\hat\theta_2)$$
Could you see the reason? $\hat\theta_2$ and $\alpha\hat\theta_2$ are not uncorrelated.
Please see Basic properties of Variance. In particular, check the formula for $$\text{Var} \left(\sum_{i=1}^N a_i X_i \right)$$ |
Prove/Disprove asymptotics claims | Note that
$$2^{n^3}=2^{n^2\cdot n}=2^{2n^2}\cdot2^{n^2(n-2)}=4^{n^2}\cdot2^{n^2(n-2)}.$$
Can you finish? |
Prove that for every $x$ in a group $G$ there is a $y$ such that $y^n=x$. | Outline: Let $m$ be the order of the group. Since $m$ and $n$ are relatively prime, there exist integers $s$ and $t$ such that $sm+tn=1$. Then
$$x=x^1=x^{sm+tn}=(x^t)^n (x^m)^s=(x^t)^n.$$ |
How can $D_{2•4} $ be not a free group? | The point is that any set of generators of $K$ will have some relations between them, equations that are true of them but which do not need to be true in an arbitrary group. Then if you take $G$ to be a group with elements that do not satisfy those relations, you will get a counterexample to the universal property.
In particular, let $s\in S$ be any element. Since $K$ is finite, $s$ has finite order, so there is some $n>0$ such that $s^n=1$. But this is not true of every element in every group; in particular, it is not true of any nonzero element of $\mathbb{Z}$. So, let's take $f:S\to\mathbb{Z}$ to be any map such that $f(s)=1$. If we could extend $f$ to a group-homomorphism $g:K\to\mathbb{Z}$, we would have $g(s^n)=n\cdot g(s)=n$, which is a contradiction since $s^n=1$ and so we must have $g(s^n)=0$.
The proof that $K$ is not free is not quite complete yet, because I assumed that the set $S$ had an element! We must still rule out the possibility that $K$ is free on the empty set. In this case, actually, any map $f:\emptyset\to G$ can always be extended to a homomorphism $g:K\to G$, since we can just take the trivial homomorphism. However, the extension will not always be unique (for instance, if $G=K$, $g$ could be either the trivial homomorphism or the identity map).
This argument more generally shows that any nontrivial finite group is not free. (Nontriviality is required for the last part--in fact, the trivial group is free on the empty set.) |
Suppose $V$ is finite-dimensional with dim $V \gt 0$, and $W$ is infinite dimensional. Prove that $\mathcal{L}(V,W)$ is infinite dimensional. | Apart from the choice of words (a basis of $\mathcal{L}(V,W)$, not the basis), the proof is not good.
The idea is good, though. Suppose $\mathcal{L}(V,W)$ is finite dimensional. We want to show that also $W$ is. To this end, choose a basis $\{T_1,\dots,T_n\}$ of $\mathcal{L}(V,W)$. Set
$$
W_0=\operatorname{span}\{T_i(v_j)\mid 1\le i\le n, 1\le j\le m\},
$$
where $\{v_1,\dots,v_m\}$ is a basis of $V$. We want to show that $W_0=W$. If not, take $w\in W$, $w\notin W_0$. Then we can define a linear map $T\colon V\to W$ by
$$
T(v_j)=w,\qquad j=1,\dots,m
$$
Since $\{T_1,\dots,T_n\}$ is supposed to be a basis of $\mathcal{L}(V,W)$, we have
$$
T=a_1T_1+a_2T_2+\dots+a_nT_n
$$
which easily leads to a contradiction. |
Showing that for $S^1 \subset \mathbb C$, the induced homomorphism of $f_n = z^n$ corresponds to multiplying by n | Your definition of $g_n$ doesn't seem right. If $z\in S^1$, i.e., $|z|=1$, then $nz\notin S^1$.
To show that $(f_n)_*=\bullet \cdot n$, take your favorite generator $[\lambda]$ of $\pi_1(S^1)\cong \Bbb Z$. So $\lambda$ is a path that maps to $1$ in the fundamental group. Then $(f_n)_*$ is determined by its value on $[\lambda]$. But $$(f_n)_*([\lambda])=[f_n\circ \lambda].$$ So you should show that $[f_n\circ \lambda]$ is $n$ in $\pi_1(S^1)\cong \Bbb Z$. |
Use De Moivre–Laplace to approximate $1 - \sum_{k=0}^{n} {n \choose k} p^{k}(1-p)^{n-k} \log\left(1+\left(\frac{p}{1-p}\right)^{n-2k}\right)$ | $$\color{brown}{\textbf{Transformations}}$$
Let WLOG the inequality
$$q=\dfrac p{1-p}\in(0,1)\tag1$$
is valid. Otherwise, swap the roles of $p$ and $1-p$, i.e., pass to the complementary events.
This allows us to present the expression in question in the form
\begin{align}
&S(n,p)=1 - (1-p)^n\sum_{k=0}^{n} {n \choose k} q^k\log\left(1+q^{n-2k}\right),\tag2\\[4pt]
\end{align}
or
\begin{align}
&=1 - (1-p)^n\sum_{k=0}^{n} {n \choose k}q^kq^{n-2k} - (1-p)^n\sum_{k=0}^{n} {n \choose k}q^k\left(\log\left(1+q^{n-2k}\right)-q^{n-2k}\right)\\[4pt]
&=1 - (1-p)^n(1+q)^n - (1-p)^n\sum_{k=0}^{n} {n \choose k}q^k\left(\log\left(1+q^{n-2k}\right)-q^{n-2k}\right)\\[4pt]
&S(n,p)= - (1-p)^n\sum_{k=0}^{n} {n \choose k}q^k\left(\log\left(1+q^{n-2k}\right)-q^{n-2k}\right).\tag3\\[4pt]
\end{align}
Formula $(3)$ can simplify the calculations, because it does not contain the difference of the closed values.
$$\color{brown}{\textbf{How to calculate this.}}$$
Note that the sum of $(3)$ contains both the positive and the negative degrees of $q.$ This means that in the case $n\to \infty$ the sum contains the terms of the different scale.
The calculations in the formula $(3)$ can be divided into two parts.
$\color{green}{\textbf{The Maclaurin series.}}$
The Maclaurin series for the logarithmic part converges when the term $\mathbf{\color{blue}{q^{n-2k} < 1}}.$ This corresponds with the values $k<\frac n2$ in the case $\mathbf{q<1}$ and with the values $k>\frac n2$ in the case $\mathbf{q>1}.$ Then the Maclaurin series in the form of
$$\log(1+q^{n-2k}) = \sum_{i=1}^\infty\frac{(-1)^{i+1}}{i}q^{(n-2k)i}\tag4$$
can be used.
If $\mathbf{\color{blue}{q^{n-2k} > 1}},$ then
$$\log(1+q^{n-2k}) = \log\left(q^{n-2k}(1+q^{2k-n})\right) = (n-2k)\log q + \log(1+q^{2k-n}).\tag5$$
If $\mathbf{\color{blue}{q^{n-2k} = 1}},$ then the left-hand side of $(4)$ equals $\log2.$
If $\mathbf{\color{blue}{q^{n-2k} \lesssim 1}},$ then
$$\log(1+q^{n-2k}) = \log\frac{1+r}{1-r} = 2r\sum_{i=0}^\infty\frac{r^{2i}}{2i+1},\quad \text{ where } r=\frac{q^{n-2k}}{2+q^{n-2k}}\approx\frac{q^{n-2k}}3,\tag6$$
and a few terms of the series can be used.
$\color{green}{\textbf{The double summations.}}$
After substituting $(4)$ or $(5)$ into $(3)$, the sums can be rearranged. For example,
$$\sum_{k=0}^{L}{n \choose k}q^k\sum_{i=1}^\infty\frac{(-1)^{i+1}}{i}q^{(n-2k)i}= \sum_{i=1}^\infty\frac{(-1)^{i+1}}{i}\sum_{k=0}^{L}{n \choose k}q^kq^{(n-2k)i}$$
$$= \sum_{i=1}^\infty\frac{(-1)^{i+1}}{i}\,q^{ni}\sum_{k=0}^{L}{n \choose k}\left(q^{1-2i}\right)^{k},$$
wherein the order of the summation can be chosen taking into account the given data.
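Formulas $(2)$ and $(3)$ are easy to check against each other numerically (a small sketch; the parameter values are arbitrary):

    import math

    def S2(n, p):   # formula (2)
        q = p / (1 - p)
        return 1 - (1-p)**n * sum(math.comb(n, k) * q**k
                                  * math.log1p(q**(n - 2*k)) for k in range(n+1))

    def S3(n, p):   # formula (3)
        q = p / (1 - p)
        return -(1-p)**n * sum(math.comb(n, k) * q**k
                               * (math.log1p(q**(n - 2*k)) - q**(n - 2*k))
                               for k in range(n+1))

    print(S2(20, 0.3), S3(20, 0.3))   # the two values agree
|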
Confusion regarding ($d = rt$) vs ($x_1 = x_0 + v_o t + 0.5 at^2$) usage. | This is because you have to distinguish two different cases: (1) when you move at constant speed ($s=v t$) in a straight direction and when you have acceleration ($x_1=x_0+V_0 t+\frac{1}{2} a t^2$), i.e. a change of speed versus time. |
algebra, equivalence relation regarding associates | Well you need to show 3 things :
Reflexivity : Take $u=1$
Symmetry : If $u$ works in one direction, then $u^{-1}$ works in the other.
Transitivity : If $f(x) = ug(x)$ and $g(x) = vh(x)$, then $f(x) = (uv)h(x)$, and $uv$ is a unit if $u$ and $v$ are. |
Probability of getting at most 4 and at least one 4 in 5 consecutive die rolls | Hints:
How many equally probable ways are there of rolling five (or ten) dice?
How many equally probable ways are there of rolling five (or ten) dice so that each is no more than $4$?
How many equally probable ways are there of rolling five (or ten) dice so that each is no more than $3$?
How many equally probable ways are there of rolling five (or ten) dice so that the largest value shown is exactly $4$? |
Probability that m samples without replacement cover the entire set | With the help of @lulu I found this closed form solution:
Fix $r$ elements in the set $S$. The probability that one player does not sample any of those $r$ elements is $\binom{n-r}{k}/\binom{n}{k}$. The probability that all $m$ players do not sample those $r$ elements is $(\binom{n-r}{k}/\binom{n}{k})^m$.
There are $\binom{n}{r}$ such ways to choose those $r$ elements. Therefore the probability that all the players do not sample any arbitrary set of $r$ elements is $(\binom{n-r}{k}/\binom{n}{k})^m \binom{n}{r}$. Let $P(r) = (\binom{n-r}{k}/\binom{n}{k})^m \binom{n}{r}$
Due to the principle of inclusion/exclusion, the probability that all the players can cover the entire set $S$ is:
$P^* = 1 - P(1) + P(2) - ... + (-1)^{n-k}P(n-k)$
$ = \sum_{i=0}^{n-k} (-1)^i \left(\binom{n-i}{k}/\binom{n}{k}\right)^m \binom{n}{i}$
Note that we don't consider the probability of missing more than $n-k$ items, since those probabilities are $0$.
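The closed form is easy to validate by simulation (a small sketch; the parameter values are arbitrary):

    import math, random

    def coverage_prob(n, k, m):
        return sum((-1)**i * (math.comb(n-i, k) / math.comb(n, k))**m * math.comb(n, i)
                   for i in range(n - k + 1))

    def coverage_sim(n, k, m, trials=100_000):
        hits = 0
        for _ in range(trials):
            covered = set()
            for _ in range(m):
                covered.update(random.sample(range(n), k))
            hits += (len(covered) == n)
        return hits / trials

    print(coverage_prob(6, 3, 4), coverage_sim(6, 3, 4))
|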
Deduce expected value from conditional probability | You can apply the law of total expectation (or "tower rule"), which says $E(Y) = E(E(Y\mid X))$
In your case, you know that $E(Y\mid X)= a X +b$. Hence $E(Y)=E(a X+b) = a \mu_x +b$ |
Markov chain exercise 1 | $p_{i,j}^{(n)}$ can be found from the $n^{th}$ power of the transition matrix computed with spectral decomposition of $P=V\Lambda V^{-1}$ s.t., $P^n=V\Lambda^nV^{-1}$,
Here we have
$P
= \begin{pmatrix}
1 & 0 & 0\\
0 & 0 & 1\\
\frac14 & \frac14 & \frac12
\end{pmatrix}
$
$V = \begin{pmatrix}
0.5773503 & 0 & 0\\
0.5773503 & 0.7774375 & 0.9554226\\
0.5773503 & 0.6289602 & -0.2952418
\end{pmatrix}
$
$\Lambda = \begin{pmatrix}
1 & 0 & 0\\
0 & 0.809017 & 0\\
0 & 0 & -0.309017
\end{pmatrix}
$
The value of $p_{2,2}^{(n)}$ can be obtained from $P^n(2,2)$, so that, for $n = 2$,
$ P^2 = \begin{pmatrix}
1.000 & 0.000 & 0.0 \\
0.250 & 0.250 & 0.5 \\
0.375 & 0.125 & 0.5
\end{pmatrix}
$
so that $p_{2,2}^{(2)}=\frac1 4$ and $p_{2,3}^{(2)}=\frac1 2$.
Note that 1 being the absorbing state of the Markov chain, eventually $p_{i,1}^{(n)}=1, \forall{i}$ and $p_{i,j}^{(n)}=0, \forall{j \neq 1}$ when $n \to \infty$
The following animation shows how $p_{i,j}^{(n)}$ changes with increasing $n$.
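The matrix powers are quick to reproduce:

    import numpy as np

    P = np.array([[1, 0, 0],
                  [0, 0, 1],
                  [1/4, 1/4, 1/2]])

    print(np.linalg.matrix_power(P, 2))           # row 2: [0.25, 0.25, 0.5]
    print(np.linalg.matrix_power(P, 100)[:, 0])   # first column -> all ~1 (absorption)
|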
Inductive proof using modular Fibonacci numbers | Consider the sequence $f_n=F_n\pmod{3}$, given by
$$ \color{red}{0, 1},1,2,\color{purple}{0},2,2,1,\color{red}{0,1},1,2,\color{purple}{0},\ldots $$
and still fulfilling $f_{n+2}=f_{n+1}+f_n\pmod{3}$. Since $f_0=f_8$ and $f_1=f_9$, by induction it follows that
$$ f_{m} = f_{m\!\pmod{8}} $$
hence $f_m=0$ iff $m$ is a multiple of four. |
The convex hull of every open set is open | Let $A$ be an open subset of $X$.
If $\,x\,$ is an element of the convex hull of $A$, then there exist $\,x_1,\dots,x_n$ in $A$ and $\lambda_k \ge 0$ with $\sum_{k=1}^n \lambda_k=1\,$ and $\,x=\sum_{k=1}^n \lambda_k x_k$.
At least one $\lambda_k$ does not vanish: say $\lambda_1$.
The function $f: X \rightarrow X$, defined by $f(z)=\lambda_1^{-1}(z-\sum_{k=2}^n \lambda_k x_k)$, is continuous (it is a homothety), so $f^{-1}(A)=\lambda_1 A+\sum_{k=2}^n \lambda_k x_k\,$ is open.
Now $x \in f^{-1}(A)$ and $f^{-1}(A)$ is a subset of the convex hull of $A$, so the hull is open. |
Smallest $n$ such that $U(n)$ contains a subgroup isomorphic to $\mathbb Z_5 \oplus \mathbb Z_5$ | By CRT we have $\mathbb Z_{p_1^{\alpha_1}p_2^{\alpha_2}\dots p_r^{\alpha_r}}\cong \mathbb Z_{p_1^{\alpha_1}}\times \mathbb Z_{p_2^{\alpha_2}}\times\dots\times \mathbb Z_{p_r^{\alpha_r}}$.
So $\mathbb Z_{p_1^{\alpha_1}p_2^{\alpha_2}\dots p_r^{\alpha_r}}^*\cong \mathbb Z_{p_1^{\alpha_1}}^*\times \mathbb Z_{p_2^{\alpha_2}}^*\times\dots\times \mathbb Z_{p_r^{\alpha_r}}^*$.
Now use the fact that $\mathbb Z_{p^{\alpha}}^*\cong \mathbb Z_{\varphi(p^\alpha)}$ for odd primes and $\mathbb Z_{2^\alpha}^*\cong \mathbb Z_2\times \mathbb Z_{2^{\alpha-2}}$ for $\alpha\ge 2$.
From here we obtain that $\mathbb Z_{p^{\alpha}}^*$ has at most one subgroup isomorphic to $\mathbb Z_5$, and it contains one if and only if $5\mid\varphi(p^\alpha)$.
So $n$ has at least two coprime prime power factors $p^a$ and $q^b$ so that $\varphi(p^a)$ and $\varphi(q^b)$ are multiples of $5$.
It is easy to see by inspection that the smallest two such values are $11$ and $25$. (this is because if $5|\varphi(p^a)$ then either $p\equiv 1\bmod 5$ or $p=5$ and $a>1$).
So the smallest value of $n$ so that $\mathbb Z_n^*$ contains a subgroup isomorphic to $\mathbb Z_5\times \mathbb Z_5$ is $25\cdot11=275$. |
How many distinct trees with N nodes? | I think the answer to your problem according to Cayley-Sylvester theorem is $n$ in the power of $n-2$. |
A Subtle Point in Rudin's Proof on $e$ | So observe that
$t_n=\sum_{k=0}^{n}\frac{1}{k!}\prod_{i=0}^{k-1}(1-\frac{i}{n})$
If $n\geq m$, since each terms are positive,
$t_n=\sum_{k=0}^{n}\frac{1}{k!}\prod_{i=0}^{k-1}(1-\frac{i}{n})\geq \sum_{k=0}^{m}\frac{1}{k!}\prod_{i=0}^{k-1}(1-\frac{i}{n})$
Basically, in the inequality, the right side has fewer positive terms than $t_n$ when $n\geq m$. |
Limit of measures on set intersection | Under your monotonicity assumption ($A_{n+1} \subset A_n$ for every $n$) and the assumption that $\mu(A_N) < +\infty$ for some $N$ (and hence for every $n\geq N$), you always have that
$$
\mu\left(\bigcap_n A_n\right) = \lim_n \mu(A_n).
$$
(See for example here for the analogous result for increasing sequences of sets.) |
Finding $x + y$ from two equations given | Note that\begin{align}(x+y)^2&=x^2+2xy+y^2\\&=132+xy\\&=132+\sqrt{xy}^2\\&=132+\bigl(33-(x+y)\bigr)^2\end{align}and that the only solution of the equation $z^2=132-(33-z)^2$ is $\frac{37}2$. |
Prove that A is a subset of B. | If you are asking to prove that $A \subset B$, then we may proceed as follows:
Let $p \in A$, then $p=15b+a^2$ for some $a,b \in \mathbb{Z}$.
If $a^2 \equiv 0 $ (mod $5$), then $5$ divides $a^2$, and so $5$ divides $a^2+15 b$, i.e. $5$ divides $p$, but $p$ is considered prime, hence this does not hold. So we may consider that $a^2$ will never be congruent to zero (mod $5$).
Hence $a^2 \equiv \pm 1 \pmod 5$ (easy to check), and so $a^2=5s+1$ or $a^2=5t-1$ with $t,s \in \mathbb{Z}$. Hence $p=5(3b)+ 5s+1$ or $p=5(3b)+ 5t-1$, i.e. $p=5(3b+s)+1$ or $p=5(3b+ t)-1$ with $t,s \in \mathbb{Z}$. Therefore $p \in B$, and so $A \subset B$. |
Give an expression that describe the language generated by this grammar | You have the right idea, but you’ve lost the relationship between the part to the left of the $\$$ and the part to the right of it: you cannot express it in terms of Kleene stars. For instance, when you apply the production $T\to WTba$ $n$ times, you get the same number of $W$s as you do $ba$s, so you don’t get $W^*T(ba)^*$.
First, it’s clear that the language produced by $W$ is $abb^*$: the only possible derivations starting with $W$ have the form
$$W\Rightarrow^nWb^n\Rightarrow abb^n$$
for $n\ge 0$. Now let’s look at a derivation that starts by using the production $S\to WTba$. Since we know what $W$ does, we should concentrate on $T$: we can apply the production $T\to WTba$ $n$ times for any $n\ge 0$, but eventually we must apply $T\to\$$ in order to get rid of the $T$. Thus, we get derivations of the form
$$S\Rightarrow WTba\Rightarrow^n WW^nT(ba)^nba\Rightarrow WW^n\$(ba)^nba$$
for each $n\ge 0$. This will evidently generate the language described by the expression
$$(abb^*)^n\$(ba)^n\,,\tag{1}$$
where $n$ can be any positive integer.
The other half of the language can be analyzed similarly. $V$ generates everything of the form $baa^*$, so we end up with everything of the form
$$(ab)^n\$(baa^*)^n\tag{2}$$
for $n\ge 1$. Let $L_1$ be the language described by $(1)$ and $L_2$ the language described by $(2)$, so that we want the language $L=L_1\cup L_2$. $L$ can be described in just this way, but the description can perhaps be made a little nicer if we notice that $L_2$ can be obtained from $L_1$ (and vice versa) in a fairly simple way. Let
$$f:\{a,b\}\to\{a,b\}:x\mapsto\begin{cases}
a,&\text{if }x=b\\
b,&\text{if }x=a\,,
\end{cases}$$
and let $\hat f$ be the natural extension of $f$ to $\{a,b\}^*$. (E.g., $\hat f(abbbab)=baaaba$.) Then
$$L_2=\{\hat f(v)\$\hat f(u):u\$v\in L_1\}\,,$$
and
$$L_1=\{\hat f(v)\$\hat f(u):u\$v\in L_2\}\,.$$
Thus, $L$ can be described simply in terms of $(1)$ and $\hat f$. |
Vector space span over real numbers | Your answer is correct, but perhaps incomplete depending on the context you're working in. Certainly any $\mathbb{R}$-linear combination of vectors in $S$ would be of the form
$$
a_1(w_1,\overline{w_1},z_1) + ... + a_n(w_n,\overline{w_n},z_n) = (a_1w_1+...+a_nw_n,\overline{a_1w_1+...+a_nw_n},a_1z_1+...+a_nz_n) = (w,\overline{w},z)
$$
For some $a_1,...,a_n \in \mathbb{R}$, $w_1,...,w_n,z_1,...,z_n \in \mathbb{C}$.
So we have that
$$
\text{span}(S) = \left\{ (w,\overline{w},z)\mid w,z \in \mathbb{C}\right\}
$$
Which is as you say, although perhaps put a little more explicitly. In order to give a fuller description of the span of $S$ as a vector space over $\mathbb{R}$, it might be helpful/more descriptive to also find a basis. |
How can we measure the accuracy of prediction algorithm? | You have given nowhere near enough information about your
data for me to give you a definitive answer. So I will fill
in the missing blanks with what I suppose is a plausible
scenario, and show you a way to handle that. If the result
is useless, at least you will have a clue how you need
to edit your question so someone can try to give a more
useful answer.
I suppose you have very many tickets, perhaps $n = 1000.$
Also, that your predictions as to likelihood of confirmation
either are given to the nearest ten percent (0.1), or can
be rounded to the nearest 10 percent without terrible loss
of information.
Then suppose you have ten categories with predictions: $p_i = .1, .2, \dots, 1.0$, for $ i = 1, 2, \dots, 10.$ Then the
respective 'expected' numbers of confirmations by
category are $E_i = np_i/\sum_{i=1}^{10} p_i$, where the $E_i$ add to $n$.
Also, you should be able to determine the actual numbers
of confirmations $X_i$ in each category. For example,
of those predicted to be in your 70% category, the predicted
number of confirmations is $E_7$ and the actual number is $X_7.$
For each of the 10 categories, compute $C_i = (E_i - X_i)^2/E_i.$
Then find $Q = C_1 + C_2 + \cdots + C_{10}.$
In the circumstances described, if your algorithm is working
very well, then $Q \sim Chisq(df = 9)$, approximately.
The bigger $Q$ the worse your algorithm is. On average,
if the algorithm is working well then $Q \approx 9$.
If $Q > 16.92$, then you would say your algorithm isn't
giving predictions that fit with actual observations on
confirmations. (The number 16.92 is obtained from tables
of the chi-squared distribution: row 9 and column headed '.05',
for a typical printed table.)
This process works OK provided most of the $E_i > 5$ and
all $E_i > 3.$ If that is not the case, merge a couple
of neighboring categories (add the $E_i$s and also add
the $X_i$s). Then instead of $df = 9$ for the chi-squared
distribution you have $df$ equal to one less than the number
of categories after merging. If $n$ is a thousand or so
and the categories are all represented reasonably often,
you should not have to do any mergers.
If you don't get a good fit, you can get some idea for
which percentage prediction categories your algorithm
is working most badly. Those would be the categories
with the largest $C_i$'s. In any case, pay particular
attention to any category $i$ with a $C_i$ greater than 3 or 4.
This method is called a 'Chi-squared Goodness-of-Fit' procedure,
and maybe you want to read about it in a statistics book or online.
Here is an example with data for 1000 tickets simulated according to an
algorithm that is working well; $Q = 10.4 < 16.9,$ so
the test verifies that the algorithm is OK.
i p E X C
1 0.1 18.1818 20 0.181818
2 0.2 36.3636 28 1.923636
3 0.3 54.5455 45 1.670455
4 0.4 72.7273 79 0.541023
5 0.5 90.9091 82 0.873091
6 0.6 109.0909 125 2.320076
7 0.7 127.2727 127 0.000584
8 0.8 145.4546 159 1.261420
9 0.9 163.6364 148 1.494141
10 1.0 181.8182 187 0.147682
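The worked example can be reproduced with SciPy; the observed counts are the $X$ column of the table, and the expected counts are $E_i=np_i/\sum_i p_i$:

    from scipy.stats import chisquare, chi2

    X = [20, 28, 45, 79, 82, 125, 127, 159, 148, 187]
    E = [1000 * 0.1*i / 5.5 for i in range(1, 11)]

    Q, pval = chisquare(f_obs=X, f_exp=E)   # df = 10 - 1 = 9 by default
    print(Q)                                # ~10.41, matching the table
    print(chi2.ppf(0.95, df=9))             # critical value ~16.92
|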
Prize Money Distribution | You want to give $65$ nonnegative awards, $a_1, \ldots, a_{65}$, satisfying $$a_1+a_2+\cdots+a_{65}=(\text{total prize})$$
and each of the conditions below; $$a_1-a_2\le a_2-a_3,~~ a_2-a_3\le a_3-a_4,~~\ldots,~~ a_{63}-a_{64}\le a_{64}-a_{65}$$ |
Probability of objects are being sorted | There is only 2 possible ways to get the desired arrangement. Mint in box A, rest in box B or Orange in box A and mint in box B.
The total number of ways you can randomly distribute them is $\frac {20!} {10!\, 10!}$ or $\binom{20}{10}$.
So the probability is $\frac 2 {\binom{20}{10}}$ or $\frac {2\cdot 10!\cdot 10!} {20!}$ |
Show that expected number of years till first observation is exceeded for first time is infinite given infinite observations | The probability that $N\ge n$ is the probability that $X_1$ is the greatest of the first $n$ values, which is $\frac1n$. Thus
$$
\mathsf E[N]=\sum_{n=1}^\infty\mathsf P(N\ge n)=\sum_{n=1}^\infty\frac1n=\lim_{k\to\infty}H_k=\infty\;.
$$ |
A step in a proof of Neumann's coset coverings | As pointed out, we simply have to prove that $K$ is of finite index in each $L_i$. Also, it only has to be proven in the case $n=2$, other cases being immediate through induction.
It can be written $G=a_1L_1\cup...\cup a_pL_1$ and therefore:
$$L_2=(a_1L_1\cap L_2)\cup...\cup(a_pL_1\cap L_2)$$
Eliminating empty sets, one can pick an element $b_i\in a_iL_1\cap L_2$ and therefore write:
$$L_2=b_1(L_1\cap L_2)\cup...\cup b_r(L_1\cap L_2)$$
Therefore proving $K=L_1\cap L_2$ is of finite index in $L_2$ (and therefore in $L_1$). |
A simple inequality involving expectations | You cannot multiply inequalities by negative numbers so your argument is not valid. ($Y \leq 1$ does not imply $XY \leq X$). In fact $EXY$ need not be $0$. Take for example $Y=\frac 1 2(1+\frac X {1+|X|})$. Then $0 \leq Y \leq 1$ and $EXY >0$. |
Using Cauchy Integral, solve: $\oint_C \frac{1}{(2z+1)^2}dz$ without use residue theorem | Unlike the logarithm, $-\tfrac{1/2}{2z+1}$ doesn't have branches, so an integral of $\tfrac{1}{(2z+1)^2}$ around a closed loop is $0$. |
if $A$ and $B$ are row equivalent and rows of $A$ are linearly independent then each row of $B$ expressible as a linear combination of $A$? | Here is a short conceptual argument for "column equivalent" matrices, that can be adapted by taking transposes, or just dualizing:
Column operations correspond to multiplication on the right by invertible elementary matrices.
In particular, given column-equivalent linear maps $T,S:V \to W$, there is an invertible matrix $F:V \to V$ so that $T \circ F=S$. In particular, if we take the columns of $T$ as a spanning set for its image, then every column of $S$ is of the form $T(Fe_i)\in \mathrm{Im}(T)$, and in particular, it is a linear combination of the columns of $T$.
Prove that $\lim_{x \to 2} 5x^2 = 20$ using $\epsilon - \delta$ definition. | As we look for the limit as $x$ goes to $2$, we can assume that $x$ is not far from $2$, say
$$|x-2|<a,\quad\text{i.e.}\quad 2-a<x<2+a,$$
hence
$$-4-a<4-a<x+2<4+a,$$
so
$$|x+2|<4+a,$$
with $a>0$.
With this additional condition, given $\epsilon>0$, we look for $\delta>0$ such that
$$|x-2|<a \;\text{ and }\; |x-2|<\delta \implies 5|x-2||x+2|<\epsilon,$$
but
$$5|x-2||x+2|<5(a+4)|x-2|$$
So, we just need to satisfy the condition
$$|x-2|<\frac{\epsilon}{5(a+4)}$$
Thus, you just need that
$$\delta=\min(a,\frac{\epsilon}{5(a+4)}).$$
The author preferred to choose $a=1$.
If you decide to take $ a=\frac 13$, you will choose
$$\delta=\min\left(\frac 13,\frac{\epsilon}{5(\frac 13+4)}\right)=\min\left(\frac 13,\frac{3\epsilon}{65}\right)$$
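A quick numerical spot-check of the resulting $\delta$ (a sketch only: it tests sampled points, it does not replace the proof):

    import numpy as np

    rng = np.random.default_rng(0)
    for a in (1.0, 1/3):
        for eps in (1.0, 0.1, 1e-3):
            delta = min(a, eps / (5 * (a + 4)))
            x = 2 + rng.uniform(-delta, delta, 100_000)
            assert np.all(np.abs(5 * x**2 - 20) < eps)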
Calculating the Lagrangian map for a solenoidal vector field | In the Lagrangian specification of the flow field you track the position of fluid particles through time as they follow pathlines.
Let $\mathbb{X}(t,\mathbb{x}_0)$ denote the position at time $t$ of a fluid particle that occupied the initial position $\mathbb{x}_0$ at time $t=0$. As in basic mechanics, the time derivative of the position vector of a particle is the velocity.
For a given velocity field $\mathbb{u}(\mathbb{x},t)$ the map $\mathbb{x}_0 \mapsto\mathbb{X}(t,\mathbb{x}_0)$ is obtained as the solution to the initial value problem
$$\frac{\partial}{\partial t}\mathbb{X}(t,\mathbb{x}_0)= \mathbb{u}(\mathbb{X}(t,\mathbb{x}_0),t), \\ \mathbb{X}(0,\mathbb{x}_0)= \mathbb{x}_0.$$
This is a general result regardless of whether the flow is compressible or incompressible (solenoidal).
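In practice, the map is computed by numerically integrating this pathline ODE. A minimal sketch (the steady solenoidal field $u(x,y)=(-y,x)$, rigid rotation, is a hypothetical choice):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Pathline ODE: dX/dt = u(X, t) with X(0) = x0, for u(x, y) = (-y, x).
    def u(t, x):
        return [-x[1], x[0]]

    x0 = [1.0, 0.0]
    sol = solve_ivp(u, t_span=(0.0, np.pi / 2), y0=x0, rtol=1e-10, atol=1e-12)
    print(sol.y[:, -1])   # ~ (0, 1): the particle has rotated a quarter turn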
Why is this useful?
It would be an impractical and pointless exercise to track fluid particles in this way as they are moving through a continuum. Furthermore, the Eulerian velocity field $\mathbb{u}$ is typically the function that must be determined in solving a fluid flow problem.
However, the theoretical construct of the Lagrangian map is useful for proving theorems that allow us to derive the equations of fluid motion. An important example is the Reynolds Transport Theorem. This pertains to the time derivative of the integral of a scalar-, vector- or tensor-valued function over a region $V(t)$ that is moving and deforming through time with the flow. An application of this theorem to density $\rho$ results in
$$\frac{d}{dt}\int_{V(t)} \rho(\xi,t) \, d\xi = \int_{V(t)} \left[\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbb{u}) \right] \, d\xi. $$
Since mass is conserved, we have
$$\frac{d}{dt}\int_{V(t)} \rho(\xi,t) \, d\xi = 0.$$
Using the transformation $\xi = \mathbb{X}(t, \mathbb{x}_0)$ we obtain
$$\frac{d}{dt}\int_{V(t)} \rho(\xi,t) \, d\xi = \frac{d}{dt}\int_{V(0)} \rho(\mathbb{X}(t, \mathbb{x}_0),t) \frac{\partial \xi}{\partial \mathbb{x}_0} \, d\mathbb{x}_0, $$
where $\frac{\partial \xi}{\partial \mathbb{x}_0}$ is the Jacobian determinant.
By transforming to a region that is not changing in time we can interchange the derivative and the integral and ultimately obtain
$$\int_{V(t)} \left[\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbb{u}) \right] \, d\xi = 0,$$
eventually leading to the continuity equation. |
Exclude one function from another | The function exists, but I guess that what you want is an expression.
$$g(n)=3\left\lfloor\frac {n-1}2\right\rfloor+2-2\left(\frac n2-\left\lfloor\frac n2\right\rfloor\right)$$ |
$\int_C\frac{z}{z^2+1}\,dz$ where $C:\ \{z:\ |z-0|=2\}$ | First of all, you meant $t=-\pi$ and $t=\pi$, not $t=-2\pi$, etc. When $t$ goes through a period of $2\pi$, as you can check $u$ wraps around $0$ twice, so $\arg u$ changes by $4\pi$. Note that $\int \frac 1 u~du$ is not single valued; in general it is given by $\ln |u|+i\arg u$, so by the above remark $$\frac 12\int \frac 1u~du=\frac 12\cdot 4\pi i=2\pi i.$$ |
Give an unbiased estimators for the third central moment of the underlying distribution. | Yes it is unbiased and so correct. To check:
$$\mathbb E[\hat{\mu_{3}}]\\=\mathbb E[\hat{\theta}_{3}]-9\mathbb E[\hat{\theta}_{2}]+54 \\ = \mathbb E[X^3]-3\cdot3\cdot\mathbb E[X^2]+3 \cdot3^2\cdot\mathbb E[X] - 3^3 \\ = \mathbb E[(X-3)^3]$$
It is not the only unbiased estimator. |
Products of independent random variables. | Let $T\gt S$; then $P(Y_T=1\mid Y_S)=\frac{1}{2}=P(Y_T=1)$. The point being that the product of the $X_i$ for $i\in T$ and $i\notin S$ is independent of $Y_S$.
Proof that $\lfloor x + x\sqrt3 \rfloor = x + \lfloor x\sqrt3 \rfloor$ Where $x \in N$ | We don't need induction. Note $\lfloor n + u \rfloor = \lfloor u \rfloor + n$ for natural $n$. This follows since $\lfloor u \rfloor = u - \epsilon$ for $\epsilon \in [0,1)$.
So we get
$$\lfloor n + u \rfloor = \lfloor n + \epsilon + \lfloor u \rfloor \rfloor = n + \lfloor u \rfloor,$$
since $\lfloor u \rfloor + n \le n + \epsilon + \lfloor u \rfloor < \lfloor u \rfloor + n + 1$.
What is the quickest way to find Nash equilibria in two player bimatrix game? | Computing Nash equilibria is not an easy business in general (if it were, then the world would be a weird place...). You may want to note the following facts:
Nash equilibria may not exist in pure strategies. They're only guaranteed to exist in mixed strategies.
In general, the problem of computing equilibria for bimatrix games is believed to be intractable: finding one equilibrium is PPAD-complete (PPAD is a subclass of TFNP), and deciding whether equilibria with prescribed properties exist is NP-hard. (The $2\times2$ case, by contrast, is easy; see the sketch below.)
For further reading, see this paper. |
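As promised above, for a $2\times2$ bimatrix game with a fully mixed equilibrium the indifference conditions give a closed form (the payoff matrices here are a hypothetical example, matching pennies; the formulas assume nonzero denominators and probabilities landing in $[0,1]$):

    import numpy as np

    A = np.array([[1., -1.], [-1., 1.]])   # row player's payoffs
    B = -A                                 # column player's payoffs (zero-sum here)

    # Column mix (q, 1-q) making the row player indifferent between her rows:
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    # Row mix (p, 1-p) making the column player indifferent between his columns:
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
    print(p, q)   # 0.5 0.5 for matching pennies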
Given an Odd Function, Which of the following must be necessarily be equal to $f'(-x_o)$? | No, it's incorrect. Your approach has two flaws. First of all $\cos$ is an even function. Second you should not draw conclusions from only one example.
To find the solution what you use is the definition of an odd function, that is $f(x) = -f(-x)$. Then you can for example use the chain rule:
$$f'(x) = {d\over dx}\left[-f(-x)\right] = -f'(-x)\cdot (-1) = f'(-x)$$
which means alternative (a). If you can't use the chain rule you can use the definition of the derivate directly:
$$f'(-x) = \lim_{h\to 0} {f(-x+h)-f(-x)\over h} = \lim_{h\to 0} {f(x) - f(x-h)\over h} \\=\lim_{h\to 0}{f(x)-f(x+h)\over -h} = \lim_{h\to 0}{f(x+h)-f(x)\over h} = f'(x)$$
Finding a third coordinate on a sphere that is equidistant from two known coordinates | This may just be my lack of classical education speaking, but to me the sane and simple thing would be first to convert your spherical coordinates to 3D rectangular coordinates (for two points on the unit sphere), then solve the problem in 3D, and finally convert the resulting point back to spherical coordinates.
In rectangular coordinates, the point you're looking for is at the intersection of three spheres, namely the unit sphere where everything takes place, and the spheres centered on each of your two points containing the other one.
Write down the equations for these three spheres, and subtract two of them. This makes the $x^2$, $y^2$, $z^2$ terms cancel out, and you're left with the equation for a plane that contains your target point. Do the same thing for a different pair of spheres, giving you a different plane. The target point must lie on the line where those two planes intersect; finding a parametric equation for that line is routine. Then you can find the intersection of that line with one of the spheres, which is just a quadratic equation in the parameter. In the general case there will be two solutions, corresponding to putting your equilateral spherical triangle to the left or to the right of the line segment you start out with.
Note that there may be no solution at all; for example two antipodal points are not the corners of any spherical equilateral triangle. |
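Here is a sketch of that construction in code (it assumes `p1`, `p2` are unit vectors already converted to rectangular coordinates, neither equal nor antipodal, and exploits the symmetry of the two plane equations rather than parametrizing the line explicitly):

    import numpy as np

    def third_corner(p1, p2):
        """Both unit vectors p3 with |p3 - p1| = |p3 - p2| = |p1 - p2|."""
        r2 = np.dot(p1 - p2, p1 - p2)   # squared side length (a chord, not an arc)
        d = (2.0 - r2) / 2.0            # subtracting sphere equations: p1.x = p2.x = d
        c = np.dot(p1, p2)
        a = d / (1.0 + c)               # by symmetry, x = a*(p1 + p2) + g*n
        base = a * (p1 + p2)
        n = np.cross(p1, p2)
        n /= np.linalg.norm(n)
        g2 = 1.0 - np.dot(base, base)   # put x back on the unit sphere
        if g2 < 0:                      # e.g. near-antipodal inputs: no triangle
            raise ValueError("no spherical equilateral triangle exists")
        g = np.sqrt(g2)
        return base + g * n, base - g * n

    print(third_corner(np.array([1., 0., 0.]), np.array([0., 1., 0.])))
    # -> (array([0., 0., 1.]), array([0., 0., -1.]))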
Show that $\overline{A}=\bigcap\limits_{n=1}^\infty U_n$ | Consider $U_n=\bigcup _{x\in \overline A}B(x,{\frac{1}{n}})=\{y\in \overline A:d(x,y)<\frac{1}{n}\}$
Since each $U_n$ is a union of open balls centered at points $x\in \overline A$, each $U_n$ is open and $\overline A\subseteq \bigcap_{n=1}^\infty U_n$.
Conversely, if $x\in U_n$ for all $n$, then for each $n$ there exists $x_n\in \overline A$
such that $x\in B\left(x_n,\frac{1}{n}\right)$, so $(x_n)_n$ converges to $x$, and hence $x$ lies in the closure of $\overline A$.
But $\overline A$ is closed, so $x\in \overline A$, which gives $\bigcap U_n\subseteq \overline A$.
DONE. |
Topology question with closed sets. | Since the $\inf$ is $0$ you find a sequence $(x_n)_{n\in\mathbb N}$ in $K$ and a sequence $(y_n)_{n\in\mathbb N}$ in $E$ such that
$$\lim_{n\to\infty} d(x_n,y_n)=0.$$
Since $K$, being compact, is bounded and $d(x_n,y_n)\to 0$, the sequence $(y_n)$ is bounded as well and hence lies in a compact subset of $E$. This implies that both sequences have convergent subsequences (along a common index set). The limits of these subsequences must be the same point, by the continuity of the distance function. Since $K$ and $E$ are closed, this limit is an element of $K\cap E$, hence the intersection is non-empty.
In your proof you assume there is a shortest vector connecting two points of $K$ and $E$. You can't simply assume that: first you have to show the existence of such a shortest vector. For example, if you only know that both sets are closed, then the statement is not true. Can you find a counterexample? Your proof doesn't use the compactness of $K$, but that is necessary, as your counterexample will show.
I can't understand a how the podium is built in my problem | Since the podium projects $\frac{1}{2}$ inch on each of the four sides, going from the top layer of $2 \times 2$ inches to the next layer would increase the size by $\frac{1}{2} + \frac{1}{2} = 1$ inch in its width, and the same in its height, to give that each dimension is now $2 + 1 = 3$ inches, i.e., the second layer dimensions are $3$ inches by $3$ inches. This results in $3 \times 3 = 9$ blocks for the second layer, for a total of $4 + 9 = 13$ blocks for the $2$ layers combined, just as the equation for the sum gives.
Note also, regarding your statement 'for layer 2 13 blocks', that the equation gives the sum of the blocks up to layer $n$, not just the number of blocks in the $n$'th layer.
To confirm, have $f(n)$ be the total # of blocks up to layer $n$ to get
$$\begin{equation}\begin{aligned}
f(n) & = \frac{1}{3}n^3 + \frac{3}{2}n^2 + \frac{13}{6}n \\
& = \frac{2n^3 + 9n^2 + 13n}{6}
\end{aligned}\end{equation}\tag{1}\label{eq1A}$$
Thus, $f(1) = \frac{2 + 9 + 13}{6} = \frac{24}{6} = 4$, and $f(2) = \frac{16 + 36 + 26}{6} = \frac{78}{6} = 13$. |
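A brute-force check of the closed form against direct counting, using the fact derived above that layer $k$ contributes $(k+1)^2$ blocks:

    def total_blocks(n):
        return sum((k + 1) ** 2 for k in range(1, n + 1))   # layer k is (k+1) x (k+1)

    assert total_blocks(1) == 4 and total_blocks(2) == 13
    assert all(total_blocks(n) == (2 * n**3 + 9 * n**2 + 13 * n) // 6
               for n in range(1, 100))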
$\| f g\|_\infty \leq \|f\|_\infty \|g\|_\infty$ . Is it true? | A simple proof: almost everywhere,
$$
|f(x) g(x)| = |f(x)|\times |g(x)| \le \|f\|_\infty \times \|g\|_\infty
\\\implies \|fg\|_\infty\le \|f\|_\infty \times \|g\|_\infty
$$ |
How am I to interpret induction/recursion in type theory? | As Mike Shulman says, 3 is the best interpretation. 2, by itself, is true but is arguably not really induction (without 3). I want to go into more detail about this, but first I want to elaborate on Mike's comment with regards to identity types.
To use your wording what the elimination rule for identity types says is that for $M : A$, $\langle M,\mathbf{refl}_M \rangle$ is the only value of $\Sigma x:A.(M =_A x)$ that we'll ever see or, to use Mike's wording, that exists. The rule says no such thing about $(M =_A N)$ for given or fixed $M, N : A$. Of course, there may be inhabitants of $\Sigma x:A.(M =_A x)$ that are only propositionally but not definitionally equal to $\langle M,\mathbf{refl}_M \rangle$. But ultimately they would be like different representatives from the same equivalence class, e.g. $\frac{2}{3} = \frac{4}{6}$ even though they look different. (Though generally I think it is better to think in terms of widening what we consider equal rather than forming equivalence classes, something much easier done in type theories than in [classical] set theories with a global notion of equality.) Notice that equality varies with type and this makes a crucial difference in this case. For example, you may retort "$\langle \mathbf{base}, \mathbf{loop} \rangle$ does not equal $\langle \mathbf{base}, \mathbf{refl_{base}} \rangle$ for the circle $\mathbb{S}^1$". To which I would reply, "at what type?" At the type $\mathbb{S}^1\times(\mathbf{base} =_{\mathbb{S}^1} \mathbf{base})$, i.e. $\Sigma x:\mathbb{S}^1.(\mathbf{base} =_{\mathbb{S}^1} \mathbf{base})$, you would be right. But at the type $\Sigma x:\mathbb{S}^1.(\mathbf{base} =_{\mathbb{S}^1} x)$, you would be wrong. (If we looked at the groupoid model, what's happening is that in this case these types as groupoids (may) have the same objects but have different arrows. In particular, the latter type has more isomorphisms.)
That was a bit longer than I was expecting. Moving to induction, it may be useful to look at an example where a nominal induction rule failed to achieve this property. The main example of this is the failure of the first-order induction schema in Peano arithmetic to rule out non-standard models. (Note, Peano's original formulation used a second-order induction rule, so didn't have this problem.) The first-order induction schema says $$\frac{P(0) \qquad \forall n.(P(n) \implies P(S(n)))}{\forall n.P(n)}$$
for each $P$ that you can write down. You could view it as a macro that looks at the syntax of the property $P$ you want to check, and then expands out into a tailor-made axiom for that property. The problem, of course, is not every property in the model need be syntactically expressible. On the other hand, the second-order characterization, i.e. where we quantify over $P$, essentially says that all properties in the model obey this rule. You can think of this as saying the induction rule is "uniform" or "continuous".
The "standard" set-theoretic model for Peano arithmetic is the minimal one. The second-order rule is interpreted as saying that we are looking for the set that is the intersection of all sets $X$ for which $0 \in X$ and $n \in X \implies S(n) \in X$. Categorically, this is stated by defining the natural numbers as the initial algebra of a functor. It's unique up to unique isomorphism. In type theory, if we have a universe, we can directly state what we want to achieve, namely any other type that "behaves" like the natural numbers (i.e. provides the same constructors and induction rule) is isomorphic to the natural numbers. Below is the Agda code. It's easy to change it to have the assumed induction rule on N target a type of propositions which better fits the preceding scenarios. Making that change means the recursor, rec, must now be assumed rather than defined from ind, and we also then need to assume that N is a (h-)set, i.e. that the identity type Id n m is a proposition for any two values n and m of N. Once that is done, Nat-is-unique barely changes and we don't necessarily need to be working in a type theory with a universe. This illustrates what's going on though. Among the properties being quantified over there is, for each potential model of the naturals, the property for which the conclusion of the induction rule is that the naturals embed in this alternate model and the premises of that instance of induction are satisfiable.
For inductive families like $=_A$, but also $\leq_{\mathbb{N}}$ and the rules defining type systems and operational semantics and many other things, we are inductively defining a subset of a (product) type. For $=_A$ we are saying the subset $\{ x:A\ |\ M =_A x \}$ is inductively defined. Or equivalently, the subset $\{ \langle x,y \rangle:A\times A\ |\ x =_A y \}$ is inductively defined. In extensional type theories, this is basically exactly what is happening. In intensional type theories like HoTT, our types are not necessarily "sets" (h-sets), and so "subset" becomes a much richer notion.
(I want to point out that there is still a lot of nuance here. In particular, it's critical to keep the distinction between internal and external views of a logic/type theory. Andrej Bauer has a good article showing that (externally) inductive types can look quite different from what we expect. But also see Peter Lumsdaine's comment therein.)
data Nat : Set where
Zero : Nat
Succ : Nat → Nat
data Id {A : Set} (x : A) : A → Set where
Refl : Id x x
_trans_ : {A : Set}{x y z : A} → Id x y → Id y z → Id x z
Refl trans q = q
cong : {A B : Set}{x y : A}(f : A → B) → Id x y → Id (f x) (f y)
cong f Refl = Refl
record Iso (X Y : Set) : Set where
field
to : X → Y
from : Y → X
lInv : (x : X) → Id x (from (to x))
rInv : (y : Y) → Id y (to (from y))
module _ (N : Set)
(zero : N)
(succ : N → N)
(ind : (P : N → Set) → P zero → ({n : N} → P n → P (succ n)) → (n : N) → P n) where
rec : (A : Set) → A → (A → A) → N → A
rec A z s = ind (λ _ → A) z (λ {_} → s)
module _ (recZ : (A : Set) → (z : A) → (s : A → A)
→ Id z (rec A z s zero))
(recS : (A : Set) → (z : A) → (s : A → A) → (n : N)
→ Id (s (rec A z s n)) (rec A z s (succ n))) where
Nat-is-unique : Iso N Nat
Nat-is-unique = record { to = toNat; from = fromNat; lInv = invN; rInv = invNat }
where toNat : N → Nat
toNat = rec Nat Zero Succ
fromNat : Nat → N
fromNat Zero = zero
fromNat (Succ n) = succ (fromNat n)
invN : (n : N) → Id n (fromNat (toNat n))
invN = ind (λ n → Id n (fromNat (toNat n))) (cong fromNat (recZ Nat Zero Succ))
(λ {n} p → cong succ p trans cong fromNat (recS Nat Zero Succ n))
invNat : (n : Nat) → Id n (toNat (fromNat n))
invNat Zero = recZ Nat Zero Succ
invNat (Succ n) = cong Succ (invNat n) trans recS Nat Zero Succ (fromNat n) |
why are conservative vector fields curl-free? | The important idea is that if $f$ is of class $C^2$ (meaning it is at least twice differentiable, and those derivatives are continuous), then mixed partials are equal (which is called Clairaut's theorem), and therefore a quick calculation shows that the curl of a gradient is zero.
For what it's worth, you should be aware that the requirement that $M_y=N_x$ is only a necessary and not a sufficient condition for $F$ to be a conservative vector field. There are topological obstructions which prevent curl-free vector fields from being conservative, a fact which marks the beginning of de Rham cohomology. |
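A classic illustration of such an obstruction: on the punctured plane $\mathbb{R}^2\setminus\{(0,0)\}$, the field
$$F=\left(\frac{-y}{x^2+y^2},\ \frac{x}{x^2+y^2}\right)$$
satisfies $M_y=N_x$ wherever it is defined, yet $\oint F\cdot d\mathbf{r}=2\pi\neq 0$ around the unit circle, so $F$ is not conservative there.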
Series defined by recurrence relation | That $a_2>a_1$ is clear by squaring both sides. So assume $a_n>a_{n-1}$; we want $a_{n+1}>a_n$. This is equivalent to $\sqrt{2+a_n}> \sqrt{2+a_{n-1}}$, which is true by squaring both sides and using the induction hypothesis.
Now for $0<a_n<3$. It is true for $n=1$.
Assume true for $n$. Now $a_{n+1}=\sqrt{2+a_n}<\sqrt{2+3}<3$.
Now for the limit. The limit $a$ exists by monotonicity and boundedness. From $a_{n+1}=\sqrt{2+a_n}$, upon taking $n\to\infty$ we see that $a=\sqrt{2+a}$, so $a^2-a-2=0$, and so $a=2$: the other root $a=-1$ is excluded since $a_1>1$ and the sequence is increasing.
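Iterating the recurrence numerically illustrates the convergence (assuming $a_1=\sqrt2$, a typical starting value; the argument above only uses $a_1>1$):

    a = 2 ** 0.5          # a_1 = sqrt(2), an assumed starting value
    for n in range(2, 12):
        a = (2 + a) ** 0.5
        print(n, a)       # increases toward the limit 2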
What is the definition of convexity of a discrete function defined on $\mathbb N^k$? | One can define convex functions on an arbitrary set $E\subset \mathbb{R}^k$ as follows: $f:E\to\mathbb{R}$ is convex if for every $x_0\in E$ there exists a vector $v\in\mathbb{R}^k$ such that
$$
f(x) - f(x_0) \ge v\cdot (x-x_0) \quad \forall x\in E
$$
In words, this means that at each point of the domain, $f$ admits an affine minorant that agrees with $f$ at that point.
When specialized to $E=\mathbb{R}^k$ or $E=\mathbb{N}$, this definition agrees with those quoted above. |
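For instance, $f(x)=\lVert x\rVert^2$ is convex in this sense on any $E\subset\mathbb{R}^k$: with $v=2x_0$ one has $f(x)-f(x_0)=\lVert x-x_0\rVert^2+2x_0\cdot(x-x_0)\ge v\cdot(x-x_0)$ for all $x\in E$.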
Linear combination of vectors in orthogonal set | The definition of linear independence says you can't make 0 out of a linear combination. It says nothing about not being able to make any other vector out of linear combinations.
(1,0) and (0,1) are independent since you cannot write (0,0) = c(1,0) + d(0,1) without c=d=0. But you can write every other vector as a nontrivial linear combination of these. (2,3) = 2(1,0)+3(0,1) for example. Spend some time making sense of the definitions with some concrete examples like this one and it will make sense eventually.
If you call your orthogonal set $\{v_1, v_2, \dots, v_n\}$, you can trivially write any vector in your set as a linear combination (take all coefficients $0$ except the coefficient of $v_k$ which is $1$).
$v_k = 0\cdot v_1+0\cdot v_2+\dots+0\cdot v_{k-1}+1\cdot v_k+0\cdot v_{k+1}+\dots+0\cdot v_n$
This is true of any set, whether it is orthogonal or not.
Moreover, any vector in the span of $\{v_1, v_2, \dots, v_n\}$ can be written as a linear combination of these vectors. This is again true of any set, whether orthogonal or not. |
Solving $\sin^2 x+ \sin^2 2x-\sin^2 3x-\sin^2 4x=0$ | First linearise with:
$$\sin^2 u=\frac{1-\cos 2u}2.$$
Simplifying yields the equation
$$\cos 2x+\cos 4x=\cos6x+\cos 8x.$$
Then the factorisation formula:
$$\cos p+\cos q=2\cos\frac{p+q}2\cos\frac{p-q}2 $$
lets you rewrite the equation, after some simplification, as
$$\cos x\cos 3x=\cos x\cos 7x\iff \begin{cases}\cos x=0\quad\text{or}\\\cos 3x=\cos 7x.\end{cases}$$
Solutions to the first equation: $\qquad x\equiv \dfrac\pi2\mod\pi$.
Solutions to the second equation:
$$7x\equiv \pm 3x\mod 2\pi\iff\begin{cases} 4x\equiv 0\mod 2\pi\\10x\equiv 0\mod2\pi\end{cases}\iff\begin{cases} x\equiv 0\mod \dfrac\pi2\\[1ex]x\equiv 0\mod\dfrac\pi5\end{cases} $$
The solutions in the second series are redundant either with the first or the third series. The set of solutions can ultimately be described as:
$$x\equiv \dfrac\pi2\mod\pi,\quad x\equiv 0\mod\dfrac\pi5.
$$ |
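A numerical spot-check that both families are indeed roots (sampled over one period; this supplements, not replaces, the derivation):

    import numpy as np

    f = lambda x: np.sin(x)**2 + np.sin(2*x)**2 - np.sin(3*x)**2 - np.sin(4*x)**2

    # the two solution families, sampled over [0, 2*pi)
    sols = [np.pi/2, 3*np.pi/2] + [k * np.pi / 5 for k in range(10)]
    assert all(abs(f(s)) < 1e-12 for s in sols)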
What should be the value of a for it to be a singular matrix | A matrix is singular if it has a $0$ determinant. In your case, the determinant is $2a \cdot 3 - (-1)(-8) = 6a - 8$ so the determinant is $0$ precisely when $6a - 8 =0 \iff a = \frac{4}{3}$. |
what is the maximum value of $x(x+y)^3$ given that $x^2+y^2/d=1$? | Thought this pictorial view for $d=2$ and $d=3$ might provide some insight.
How can Falting's theorem be used to decide whether infinite many rational solutions exist? | Faltings' theorem applies when the genus of $C$ is $>1$. If $p(x)$
is a square-free polynomial of degree $\ge5$ then $g\ge2$ for the
curve $y^2=p(x)$ and this has just finitely many rational points.
If $p(x)$ is squarefree of degree $3$ or $4$ then $y^2=p(x)$
is an elliptic curve, which may or may not have infinitely many rational
points.
Proving that $3^{n+2}$ does not divide $2^{3^n} +1$ for any positive integer $n$ | By induction. Let $S_n=2^{3^n}+1$. We claim that $v_3(S_n)=n+1$. This is easily seen for small $n$.
Notation: here, as is fairly standard, we are writing $v_p(n)$ for prime $p$ and natural number $n$ to indicate the maximal power of $p$ which divides $n$. Thus, with $n=1$ we have $$S_1=2^3+1=9=3^2\implies v_3(S_1)=2=1+1$$ as claimed.
Similarly,$$S_2=2^9+1=513=3^3\times 19 \implies v_3(S_2)=3=2+1$$ as claimed.
And again, $$S_3=2^{27}+1=134217729=3^4\times 19\times 87211\implies v_3(S_3)=4=3+1$$ as claimed.
Proof: Suppose it is true for $n-1$. That is, assume that $$v_3(S_{n-1})=(n-1)+1=n$$ Then we write $$S_n=(2^{3^{n-1}})^3+1=\left(S_{n-1}-1\right)^3+1=S_{n-1}^3-3S_{n-1}^2+3S_{n-1}$$ It is clear that $3^{n+2}$ divides each of the first two terms, but the third is divisible by $3^{n+1}$ and not by $3^{n+2}$, so $v_3(S_n)=n+1$ and we are done.
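A direct check of the claim $v_3(S_n)=n+1$ for small $n$, in exact integer arithmetic:

    def v3(m):
        k = 0
        while m % 3 == 0:
            m //= 3
            k += 1
        return k

    assert all(v3(2 ** 3 ** n + 1) == n + 1 for n in range(1, 8))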
Hilbert function and series. | Let $R=K[X_1,\dots,X_t]$, and $I\subset R$ be a homogeneous ideal. Then, $$H(R/I,n)=\dim_KR_n-\dim_KI_n,$$ where $R_n$, respectively $I_n$ is the $n$th homogeneous component of $R$, respectively $I$.
It's well known that $\dim_KR_n={{t+n-1}\choose{n}}$. On the other side, in your case $I=(f)$ with $f$ homogeneous of degree $d$, and then $\dim_KI_n={{t+n-d-1}\choose{n-d}}$. By summing up we get the Hilbert series $$H_{R/I}(z)=\frac{1}{(1-z)^t}-\frac{z^d}{(1-z)^t},$$ that is, $(1-z^d)H_R(z)$.
But you can also get this from the exact sequence $$0\to R(-d)\stackrel{f\cdot}\to R\to R/I\to0,$$ so there is no need to compute the Hilbert function of $R/I$ in order to find out its Hilbert series. |
isomorphic localization of finitely presented modules | As per the OP's request, here is a reference that shows the result, a result often associated with the words "local freeness" for its applications in algebraic geometry:
Pete L. Clark's CA Notes, Page 136, Theorem 7.21 |
Inverse of a particular operator | You haven't specified the domain and range of this function. If you mean the operator over the $C^\infty$ functions, note that it will not be invertible. Taking for example $f(x) = \sin(x)$, we have
$$
\left(I + \frac{\partial^2}{\partial x^2}\right) f(x) = 0
$$
so your function is certainly not injective.
Suppose we are considering the operator over the space of functions with period $2$. We define the inner product
$$
\langle f,g\rangle = \int_0^2 f(x) \overline{g(x)}\,dx
$$
Then the family of functions
$$
a(x) = 1\\
b_n(x) = \sin(n \pi x)\\
c_n(x) = \cos(n \pi x)
$$
forms an orthogonal (Schauder) basis of this space. Relative to the basis $\{a,b_1,c_1,b_2,c_2,\dots\}$, we can represent the operator $(I + \beta\frac{\partial^2}{\partial x^2})$ as
$$
\pmatrix{1\\
&1-\pi^2\beta\\
&&1-\pi^2\beta\\
&&&1 - 2^2 \pi^2 \beta\\
&&&&1 - 2^2 \pi^2 \beta\\
&&&&&1 - 3^2 \pi^2 \beta\\
&&&&&& \ddots}
$$
We can find the inverse of this operator as we would find the inverse of any other diagonal operator. |
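Concretely, here is a sketch that inverts the operator numerically in the Fourier basis, where it is diagonal (it assumes $\beta \neq \frac{1}{n^2\pi^2}$ for every integer $n$, so that no eigenvalue vanishes; the grid and right-hand side are hypothetical):

    import numpy as np

    beta = 0.1
    N = 256
    x = np.linspace(0, 2, N, endpoint=False)             # one period of length 2
    g = np.cos(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)  # sample right-hand side

    k = np.fft.fftfreq(N, d=2 / N) * 2   # integer wavenumbers: mode k is exp(i*pi*k*x)
    eig = 1 - beta * (np.pi * k) ** 2    # eigenvalue of (I + beta d^2/dx^2) on mode k
    u = np.fft.ifft(np.fft.fft(g) / eig).real   # apply the inverse: divide mode-wise

    # check: applying the operator spectrally recovers g
    assert np.allclose(np.fft.ifft(np.fft.fft(u) * eig).real, g)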
Linear combinations of three-dimensional vectors | Hint
Let $v=\begin{pmatrix}a\\b\\c\end{pmatrix}$. You need to see if there exist scalars $x,y,z$ such that
$$v=xv_1+yv_2+zv_3.$$
In other words you need to see if the following system is consistent for all $a,b,c$.
$$
\left[\begin{array}{ccc|l}
1 & -1 & 11 &a\\
1 & 1 & 1 & b\\
1 & 1 & -14 &c\\
\end{array}
\right]
$$ |
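A quick numeric way to settle the question (a sketch, assuming only `numpy`): the system is consistent for every $(a,b,c)$ exactly when the coefficient matrix is invertible.

    import numpy as np

    M = np.array([[1., -1., 11.],
                  [1.,  1.,  1.],
                  [1.,  1., -14.]])
    print(np.linalg.det(M))   # -30.0, nonzero, so every (a, b, c) is reachable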