title | upvoted_answer |
---|---|
If f is uniformly differentiable on $(a,b)$ then $f'$ is continuous on $(a,b)$? | Given $x \in (a,b)$
\begin{align*}
|f'(y)-f'(x)| & \leq \left|f'(x) - \frac{f(x)-f(y)}{x-y}\right| + \left|\frac{f(x)-f(y)}{x-y}-f'(y)\right|
\end{align*}
Given that $f$ is differentiable at $x$, the first term of the sum is small for $y$ sufficiently close to $x$. The second term is small for $y$ sufficiently close to $x$, because $f$ is uniformly differentiable on $(a,b)$. |
Is this question related to Poisson process? | For (a) the answer is obviously $0$, since after 10 minutes $A$ and $B$ have failed and $C$ has just started
For (b) the answer is $\dfrac{1}{27}$ as the only pattern leaving $A$ being in service after $C$ has failed is that $B$ has lifetime $1$, $C$ has lifetime $1$ and $A$ has lifetime $3$.
For (c), you could regard this as a Poisson process, but there is an easier approach. $A$ and $B$ are iid with exponential distributions so the probability that $B$ fails before $A$ is $\dfrac{1}{2}$; $C$ then starts, and since the exponential distribution is memoryless, the probability that $C$ then fails before $A$ is also $\dfrac{1}{2}$. So the answer is $\dfrac{1}{2} \times \dfrac{1}{2}=\dfrac{1}{4}.$ |
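If it helps, a quick Monte Carlo check of part (c) is easy to write; the sketch below is mine (in Python, with an assumed common failure rate, which cancels out of the probability) and estimates the chance that $B$ fails before $A$ and that $C$ then also fails before $A$.

```python
import random

# Estimate P(B fails before A, then C fails before A) for i.i.d.
# exponential lifetimes; the rate cancels, so any value works.
def trial(rate=1.0):
    a = random.expovariate(rate)   # lifetime of A
    b = random.expovariate(rate)   # lifetime of B
    if b >= a:                     # B must fail first for C to even start
        return False
    c = random.expovariate(rate)   # lifetime of C, started when B fails
    return b + c < a               # C fails while A is still in service

trials = 200_000
print(sum(trial() for _ in range(trials)) / trials)  # ~ 0.25
```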
How to correctly formulate this statement | This should be enough:
$[\ldots]$ We get $x = \frac{-b}{2a}$. Since $b$ is negative and $a$ is positive, $x$ is also positive. $~~~~\square$ |
Why probability of unordered samples is not equally likely? | You can see what is happening by choosing a very simple case: The population has only two people, $A$ and $B$, and you will sample two people (with replacement). Thus $n=k=2$.
Then looking at the ordered samples:
$$
P(s_1 = A \wedge s_2 = A) = \frac12 \cdot \frac12 = \frac14 \\
P(s_1 = A \wedge s_2 = B) = \frac12 \cdot \frac12 = \frac14 \\
P(s_1 = B \wedge s_2 = A) = \frac12 \cdot \frac12 = \frac14 \\
P(s_1 = B \wedge s_2 = B) = \frac12 \cdot \frac12 = \frac14 \\
$$
But looking at "unordered samples" where all we care about is $n_A$ and $n_B$, the number of selected $A$'s and $B$'s, we have:
$$
P(n_A = 2, n_B = 0) = P(s_1 = A \wedge s_2 = A) = \frac14 \\
P(n_A = 1, n_B = 1) = P(s_1 = A \wedge s_2 = B)+P(s_1 = B \wedge s_2 = A) = \frac14 + \frac14 = \frac12\\
P(n_A = 0, n_B = 2) = P(s_1 = B \wedge s_2 = B) = \frac14
$$
We see that one of the unordered possibilities has a different probability than the others.
What has happened is that some of the unordered possibilities comprise just one ordered possibility, but some of them (just one in this case, but most of them in other cases) comprise multiple ordered possibilities. |
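For a case this small you can also simply enumerate the four equally likely ordered samples; a minimal Python sketch:

```python
from itertools import product
from collections import Counter

# Tally the unordered outcomes (n_A, n_B) over the four equally likely
# ordered samples of size 2 from {A, B}.
counts = Counter()
for s1, s2 in product("AB", repeat=2):
    n_a = (s1 == "A") + (s2 == "A")
    counts[(n_a, 2 - n_a)] += 1

for outcome, k in sorted(counts.items()):
    print(outcome, k / 4)  # (0, 2): 0.25, (1, 1): 0.5, (2, 0): 0.25
```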
Is power set functor determined by its image on objects? | There exists at least one other endofunctor of $\mathbf{Set}$ that sends every set to its powerset. This endofunctor sends a function $f:X\to Y$ to
$$\widehat{f} :P(X)\to P(Y):U\mapsto \widehat{f}(U)=\{y\in Y\mid f^{-1}(\{y\})\subset U\}$$
(where $f^{-1}$ is the inverse image).
One can check directly that $\widehat{f\circ g}=\widehat{f}\circ \widehat{g}$ and $\widehat{id_X}=id_{P(X)}$, or use the following fact (which explains the origin of that definition): for every set $X$, the powerset $P(X)$ is a poset (ordered by inclusion), and for any given $f$, $P(f), f^{-1}$ and $\widehat{f}$ are all monotone functions and we have two adjunctions $P(f)\dashv f^{-1}\dashv \widehat{f}$. Then, for any $g$ we have a chain of adjunctions
$$P(f\circ g)\dashv (f\circ g)^{-1}\dashv \widehat{f\circ g}$$
and since adjunctions can be composed, we also have
$$P(f)\circ P( g)\dashv g^{-1} \circ f^{-1}\dashv \widehat{f}\circ \widehat{g}$$
Since $P$ is a functor, the first terms of the two chains coincide. By uniqueness of adjoint functors the other terms also coincide, thus $\widehat{f\circ g}=\widehat{f}\circ \widehat{g}$. You can use a similar argument for the identities. |
Representation of real or complex numbers as vector of coefficients of polynomials. | Using Thom codes: you can distinguish a root $x_0$ of $p$ from the other roots of $p$ by the signs of the numbers $p'(x_0),p''(x_0),\dots,p^{(N)}(x_0)$.
Relevant link: Manipulation of real roots of polynomials: Isolating Intervals or Thom's Codes. |
Divisors and Picard Group | If $X$ is a variety open in a variety $Y$, there is always a map from the group of Weil divisors on $Y$ to that of $X$, and likewise for the Picard groups. The first map is onto; the second map may not be. |
Fibonacci Numbers Proof by Induction (Looking for Feedback) | The proof is technically valid and well-written; however, you never use the inductive hypothesis in your inductive step. Thus invoking induction was unnecessary. |
Intuition behind a Discrete and In-discrete Topology and Topologies in between | A topology $\tau$ on a set $X$ (a topological space $(X,\tau)$) is a collection of subsets of $X$ such that the empty set, $X$, the union of any subcollection, and the intersection of any finite subcollection are in $\tau$.
Discrete topology:
Okay, so we are now equipped with the definition of a topology, and we start thinking about what kinds of topologies we can give to a set. So let's look at the definition again. It says the union and intersection of any subcollection must be in $\tau$. Naturally, one candidate for this is the collection of all subsets of the set $X$. You can see that this satisfies all the conditions for a topology, and hence we have successfully defined a topology. The intuition behind this is that we first want to see how the set $X$ and its subsets fit the definition of a topology on a set.
Also, when we define a $\sigma$-algebra on a set, a successful candidate is again the set of all subsets of that set.
Indiscrete topology:
Now in the indiscrete topology we consider the topology given by $\{X,\emptyset\}$. Here again this is the most natural thing to think of: we want something such that the union of a subcollection is in the topology, and so we think of the set $X$ itself. Now you can see that the issue here is that we don't have $\emptyset$ in it, so we have to include that too. |
The effect of attaching the Möbius strip to the torus | It does wrap around the boundary circle only once. The problem is that the boundary circle is not the generator of $H_1(M)$; it is twice the generator. Think of the deformation retract onto the middle circle to see this. |
how to find the CI of a 2 dimensional normal distribution | You have an elliptical confidence region of the form
$$
C_{\mu} = \{\mu \in \mathbb{R}^2 | ( \bar{X} - \mu)^TS^{-1}(\bar{X}-\mu) \le \frac{2}{n-2} F_{1-\alpha; 2, n-2} \},
$$
where
$$
\bar{X} = (\hat{\mu}_1, \hat{\mu}_2)^T,
$$
and
$$
S = \frac{1}{n}\sum_{i=1}^n (x_i -\bar{X})(x_i -\bar{X})^T.
$$ |
Rotation between two circles | Use the law of cosines. If $\rho_i$ denotes the distance of $p_i$ from the origin, and you want the distance between the centres to be $r_1+r_2$, you need the lines from the origin to the centres to form an angle $\theta$ that satisfies
$$
(r_1+r_2)^2=\rho_1^2+\rho_2^2-2\rho_1\rho_2\cos\theta\;,
$$
and thus
$$
\theta=\arccos\frac{\rho_1^2+\rho_2^2-(r_1+r_2)^2}{2\rho_1\rho_2}\;.
$$
Then you just need to calculate the angle originally formed by these lines and rotate by the difference of the two angles. |
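A sketch of the whole computation in Python (the function names are mine, and I assume the target is external tangency, i.e. centres at distance $r_1+r_2$, as above):

```python
import math

def tangency_angle(p1, p2, r1, r2):
    # Angle between the lines from the origin to the centres that puts
    # the centres exactly r1 + r2 apart (law of cosines, solved for theta).
    rho1, rho2 = math.hypot(*p1), math.hypot(*p2)
    cos_theta = (rho1**2 + rho2**2 - (r1 + r2)**2) / (2 * rho1 * rho2)
    return math.acos(cos_theta)  # raises ValueError if no such angle exists

def rotation_needed(p1, p2, r1, r2):
    # Rotate p2 about the origin by the difference between the target
    # angle and the angle the two lines currently form.
    current = math.atan2(p2[1], p2[0]) - math.atan2(p1[1], p1[0])
    return tangency_angle(p1, p2, r1, r2) - current
```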
$C(\overline\Omega)$ and $C_0(\Omega)$ | Note that a function $f$ in $C_0(\Omega)$ is defined on $\Omega$ only, so it does not really make sense (yet) to talk about $f|_{\partial\Omega}$.
Usually, $C_0(\Omega)$ is defined as the completion of $C_c^\infty(\Omega)$ in $C(\Omega)$ with respect to the uniform norm. In this case, this set is exactly the set of functions that vanish at the boundary, that is, $f(x)\to0$ when $x\to\partial\Omega$. In particular, it is possible to extend $f$ to a continuous function $\overline f$ on the closed set $\overline\Omega$ such that $\overline f|_\Omega =f$, and this extension is unique. Thus, the map
$$
\begin{array}{ccl}
c:&C_0(\Omega)&\to C(\overline\Omega)\\
& f &\mapsto \overline f\\
\end{array}
$$
being injective, we can identify $C_0(\Omega)$ with its image $c(C_0(\Omega))$, which is a subset of $C(\overline\Omega)$.
Likewise, using
$$
\begin{array}{ccl}
r:&C(\overline\Omega)&\to C(\Omega)\\
& f &\mapsto f|_\Omega\\
\end{array}
$$
which is also injective, we can identify $C(\overline\Omega)$ with a subset of $C(\Omega)$. In that sense, we can say that
$$
C_0(\Omega)\subset C(\overline\Omega)\subset C(\Omega)
$$
all of these inclusions being strict, since the constant function $1\in C(\overline\Omega)\backslash C_0(\Omega)$, and, as you noted, any continuous function that diverges near the boundary (like $x\mapsto 1/x$), cannot be extended to a continuous function on the closure of $\Omega$, and thus is in $C(\Omega)\backslash C(\overline\Omega)$. |
Is it true, that the probability for both events is always equal? If yes how to prove it, if not, why not? | Let $Y_n =\min(X_1,X_2 \cdots X_n)$, and let $y_n=\sum_{i=1}^n[X_i=Y_n]$ count the number of elements that attain that minimum.
Analogously, let $Z_n$ and $z_n$ be the maximum and maximum-count.
Then, by symmetry, $P( X_{n} = Y_n \wedge y_n=1)=P(y_n=1)\, P(X_n=Y_n \mid y_n=1)=\frac{1}{n} P(y_n=1)$
Then, essentially you are asking if $P(y_n=1)=P(z_n=1)$, that is, if the probability of having a single maximum equals the probability of having a single minimum. This is not true in general.
It's true for a continuous variable (continuous CDF) because in that case the probability of having a single extremum equals $1$. It's also true for a random variable that is symmetric around its median. I'm not sure if there's a simple characterization of the CDFs for which it's true in general.
Added:
Let $F(x) = P(X \le x)$ be the CDF, and let $p(x)= F(x) - F(x^-)$.
Then the probability of having a single minimum in $n+1$ realizations equals
$$A=P(y_{n+1}=1)= \int \left(\frac{1-F(x)}{1-F(x^-)}\right)^n dF(x)=
\int \left(1-\frac{p(x)}{1+p(x)-F(x)}\right)^n dF(x) \tag{1}$$
Similarly, for the maximum:
$$B=P(z_{n+1}=1)= \int \left(\frac{F(x^-)}{F(x)}\right)^n dF(x) =
\int \left(1- \frac{p(x)}{F(x)}\right)^n dF(x) \tag{2}$$
If $F(x)$ has finitely many discontinuities at $x_i$, $i=1,2\cdots k$ (perhaps the result is also valid in more general settings), we can write $F(x)=F_c(x) + \sum_i p(x_i)u(x-x_i)$ where $F_c(x)$ is continuous and $u(\cdot)$ is the unit-step function. Then
$$\begin{align}
A &=\sum_i p(x_i) \left(1-\frac{p(x_i)}{1+p(x_i)-F(x_i)}\right)^n +F_c(+\infty)\\
&=1- \sum_i p(x_i)\left[1- \left(1-\frac{p(x_i)}{1+p(x_i)-F(x_i)}\right)^n \right]\tag{3}
\end{align}
$$
$$\begin{align}
B&=\sum_i p(x_i) \left(1- \frac{p(x_i)}{F(x_i)}\right)^n +F_c(+\infty)\\
&=1- \sum_i p(x_i)\left[1- \left(1- \frac{p(x_i)}{F(x_i)}\right)^n
\right] \tag{4}
\end{align}$$
Of course, $A=B=1$ if $F(x)$ is continuous. Also, $A=B$ if the probability (both the continuous and the discrete parts!) is symmetric. There's not much more to say in general, I think... |
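To see the asymmetry concretely, here is a small simulation sketch of mine; the discrete distribution $P(X{=}0)=0.5$, $P(X{=}1)=0.3$, $P(X{=}2)=0.2$ is an arbitrary asymmetric example.

```python
import random
from collections import Counter

vals, weights, n, trials = [0, 1, 2], [0.5, 0.3, 0.2], 5, 100_000
single_min = single_max = 0
for _ in range(trials):
    xs = random.choices(vals, weights=weights, k=n)
    c = Counter(xs)
    single_min += c[min(xs)] == 1   # exactly one element attains the min
    single_max += c[max(xs)] == 1   # exactly one element attains the max
print(single_min / trials, single_max / trials)  # noticeably different
```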
Matlab wrong cube root | In the following I'll assume that $n$ is odd, $n>2$, and $x<0$.
When asked for $\sqrt[n]{x}$, MATLAB will prefer to give you the principal root in the complex plane, which in this context will be a complex number in the first quadrant. MATLAB does this basically because the principal root is the most convenient one for finding all of the other complex roots.
You can dodge this issue entirely by taking $-\sqrt[n]{-x}$.
If you want to salvage what you have, then you'll find that the root you want on the negative real axis is $|z| \left ( \frac{z}{|z|} \right )^n$. Basically, this trick is finding a complex number with the same modulus as the root $z$ (since all the roots have the same modulus), but $n$ times the argument. This "undoes" the change in argument from taking the principal root. |
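Python's power operator behaves the same way as MATLAB here, which makes the point easy to demonstrate (a sketch with $x=-8$, $n=3$):

```python
x, n = -8.0, 3

z = x ** (1 / n)
print(z)                    # ~ (1+1.732j): the principal cube root, not -2
print(-((-x) ** (1 / n)))   # -2.0: the dodge, -(-x)^(1/n)
w = abs(z) * (z / abs(z)) ** n
print(w.real)               # ~ -2.0: the salvage trick |z| (z/|z|)^n
```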
Map from unit ball in quaternions to $\text{SU}(2)$ a group homomorphism? | Yes. Let $L_a$ denote the left multiplication map on $\mathbb{H}$ given by $L_a(x)=ax$. Since quaternions are associative, we have $L_a(x\lambda)=L_a(x)\lambda$ for any other quaternion $\lambda$, in particular for complex numbers (here we are identifying $\mathbb{C}$ with a subspace of $\mathbb{H}$ in the usual way), so it is a linear transformation of $\mathbb{H}$ as a complex vector space. With $\{1,\mathbf{j}\}$ as a right $\mathbb{C}$ vector space basis, we get an identification $\mathbb{H}\cong\mathbb{C}^2$ allowing us to interpret $L_a$ as a matrix in $M_2(\mathbb{C})$.
Clearly $L:\mathbb{H}\to M_2(\mathbb{C})$ (sending $a$ to $L_a$) is an $\mathbb{R}$-linear map.
That it is an $\mathbb{R}$-algebra homomorphism means:
$L_a\circ L_b=L_{ab}$ for all $a,b$,
i.e. $L_a(L_b(c))=L_{ab}(c)$ for all $a,b,c$,
i.e. $a(bc)=(ab)c$ for all $a,b,c$.
The last interpretation is just the associative property of the quaternions.
It follows that $\mathbb{H}^\times\to\mathrm{GL}_2(\mathbb{C})$ is a group homomorphism, in particular it is a group homomorphism when restricted to $\mathrm{Sp}(1)$ (the group of unit quaternions).
To check that the image of $\mathrm{Sp}(1)$ is $\mathrm{SU}(2)\subseteq\mathrm{GL}(2,\mathbb{C})$, some linear algebra is in order. First we're using $\{1,\mathbf{j}\}$ as a right $\mathbb{C}$-basis, since $\mathbb{H}=\mathbb{C}\oplus\mathbf{j}\mathbb{C}$. Since $w\mathbf{j}=\mathbf{j}\overline{w}$, left multiplication by $w$ is
$$ L_w \quad \longleftrightarrow \quad \begin{pmatrix} w & 0 \\ 0 & \overline{w} \end{pmatrix}. $$
Left multiplication by $\mathbf{j}$ has the effect of $L_{\mathbf{j}}(1)=0+\mathbf{j}1$ and $L_{\mathbf{j}}(\mathbf{j})=-1+\mathbf{j}0$, so
$$ L_{\mathbf{j}} \quad \longleftrightarrow \quad \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. $$
Since $\mathbf{q}\mapsto L_{\mathbf{q}}$ is an algebra homomorphism, we may write $\mathbf{q}=a+b\mathbf{j}$ ($a,b\in\mathbb{C}$) and get
$$ L_{a+b\mathbf{j}} \quad \longleftrightarrow \quad
\begin{pmatrix} a & 0 \\ 0 & \overline{a} \end{pmatrix}
+ \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} b & 0 \\ 0 & \overline{b} \end{pmatrix}
= \begin{pmatrix} a & -b \\ \overline{b} & ~\overline{a} \end{pmatrix}.$$
When $\mathbf{q}\in\mathrm{Sp}(1)$ is a unit quaternion, the condition $|\mathbf{q}|^2=1$ is equivalent to $|a|^2+|b|^2=1$, and the matrices above with $|a|^2+|b|^2=1$ are precisely the elements of $\mathrm{SU}(2)$. |
The ratio between Central Eulerian Numbers and the sum of Eulerian Numbers at a fixed level converges to zero | Using the asymptotic provided on the OEIS page
$$C(n)\sim\frac{2^{2n}n^{2n-1}\sqrt3}{e^{2n}}$$
and comparing it with the asymptotic of $(2n-1)!$
$$(2n-1)!\sim\sqrt{2\pi(2n-1)}\left(\frac{2n-1}e\right)^{2n-1}$$
we get
$$\lim_{n\to\infty}\frac{C(n)}{(2n-1)!}$$
$$=\lim_{n\to\infty}\frac{2^{2n}n^{2n-1}\sqrt3}{e^{2n}\sqrt{2\pi(2n-1)}\left(\frac{2n-1}e\right)^{2n-1}}$$
$$=\lim_{n\to\infty}\frac{2^{2n}n^{2n-1}\sqrt3}{e\sqrt{2\pi(2n-1)}(2n-1)^{2n-1}}$$
$$=\lim_{n\to\infty}\left(\frac{2n}{2n-1}\right)^{2n-1}\frac{2\sqrt3}{e\sqrt{2\pi(2n-1)}}$$
$$=\lim_{n\to\infty}\frac{2\sqrt3}{\sqrt{2\pi(2n-1)}}=0$$ |
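The decay is also visible numerically. The sketch below is mine and assumes $C(n)$ is the middle entry $A(2n-1,n-1)$ of the Eulerian triangle, built from the standard recurrence $A(n,k)=(k+1)A(n-1,k)+(n-k)A(n-1,k-1)$; by the limit above the printed ratios decrease toward $0$ like $\sqrt{3/(\pi n)}$.

```python
from fractions import Fraction
from math import factorial

def eulerian_row(n):
    # Row n of the Eulerian triangle via the standard recurrence.
    row = [1]
    for m in range(1, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k > 0 else 0)
               for k in range(m)]
    return row

for n in (2, 5, 10, 20):
    c = eulerian_row(2 * n - 1)[n - 1]                  # central entry
    print(n, float(Fraction(c, factorial(2 * n - 1))))  # slowly -> 0
```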
$E$ is a certain subspace of $\mathbb{R}[x]$. Is the set $\{x − 2, (x − 2)^2, (x − 2)^3\}$ a basis of $E$? | The first part has already been answered in the comment by user84413.
As for the second part: calculate the polynomials $(x-2)^2$ and $(x-2)^3$. Then it should be easy for you to prove that $x-2, (x-2)^2$ and $(x-2)^3$ are linearly independent. |
How to compute hitting probability for this Markov chain? | You missed using the fact that $h_6 = 1$ (we will reach 6 for sure as we are starting at 6). Once we use that, we get $h_0 = \frac{1}{4}, h_4 = h_5 = h_6 = 1$. |
Checking the solution of the problem about metric function | Yes, as you noted above, you can't apply Minkowski's inequality directly. But you can proceed as below:
Note that for any $n\in \Bbb N $,$$\sqrt[p]{\sum\limits_{i = 1}^{n} |x_i - y_i|^p} \le
\sqrt[p]{\sum\limits_{i = 1}^{n} |x_i - z_i|^p} + \sqrt[p]{\sum\limits_{i = 1}^{n} |z_i - y_i|^p}\leq \sqrt[p]{\sum\limits_{i = 1}^{\infty} |x_i - z_i|^p} + \sqrt[p]{\sum\limits_{i = 1}^{\infty} |z_i - y_i|^p}. $$
Since this is true for any $n\in \Bbb N $, taking the limits will give you the required inequality.
Let $S_n=\sqrt[p]{\sum\limits_{i = 1}^{n} |x_i - y_i|^p}$. Note that the sequence $\{S_n\}$ is an increasing sequence of non-negative real numbers and is bounded above by $$\sqrt[p]{\sum\limits_{i = 1}^{\infty} |x_i - z_i|^p} + \sqrt[p]{\sum\limits_{i = 1}^{\infty} |z_i - y_i|^p}.$$ So $\{S_n\}$ converges to $$\lim_{n\to\infty}S_n=\sqrt[p]{\sum\limits_{i = 1}^{\infty} |x_i - y_i|^p}$$ and the desired inequality holds (if you are still struggling, see what happens if $$\sqrt[p]{\sum\limits_{i = 1}^{\infty} |x_i - z_i|^p} + \sqrt[p]{\sum\limits_{i = 1}^{\infty} |z_i - y_i|^p}\lt\lim_{n\to\infty}S_n).$$ |
Averaged convergence of $a_n^p$ | Can do this quite neatly for $p\geq1$ using generating functions:
$a_n$ is convergent $\iff\sum_na_nx^n=O(\frac{1}{1-x})$ as $x\to1^-$
Calling $c_p(n):=\frac{a_1^p+a_2^p+...+a_n^p}{n}$, the condition tells us that $\sum_nc_1(n)x^n=O(\frac{1}{1-x})$ as $x\to1^-$
Noting that $0\leq a_n\leq1$, one can easily show that $c_p(n)\leq c_1(n)$ for $p>1$
We thus obtain, virtually for free, that $\sum_nc_p(n)x^n=O(\frac{1}{1-x})$ as $x\to1^-$, and thus that $c_p(n)$ converges also.
Will continue to think about the $0<p<1$ case.
[EDIT: On reflection, I can only comfortably prove that $a_n=O(1)\iff\sum_na_nx^n=O(\frac{1}{1-x})$ as $x\to1^-$. Still think there might be some legs in this approach so going to work on it.] |
Proof of $2^n > n$ by Induction | Technically, the statement $$2^{k+1} > k+k>k+1$$ is wrong, precisely because $k$ could be $1$. You can, however, write
$$2^{k+1} > k+k \geq k+1$$
and from that, you can still conclude that $2^{k+1}>k+1$. No special case needed, since $a>b$ and $b\geq c$ always implies $a>c$. |
Does a real linear function $l : \mathbb{C}^n\to\mathbb{R}$ satisfy $l(iz) = l(z)$? | The desired property does not hold. Consider the function $f : \mathbb{C} \to \mathbb{R}$ given by $f(x + iy) = x$. This is real linear, but $f(i(x + iy)) = f(-y + ix) = -y$ which is not equal to $x$ in general.
In fact, the property only holds for the zero function. If $l : \mathbb{C}^n \to \mathbb{R}$ is real linear and $l(iz) = l(z)$, then
$$l(z) = l(iz) = l(i(iz)) = l(-z) = -l(z),$$
so $l(z) = 0$. |
Number of roots of a Cubic Polynomial | Let $q\geq0$.
Thus, $$f\left(\sqrt{\frac{-p}{3}}\right)=\sqrt{\frac{-p^3}{27}}+p\sqrt{\frac{-p}{3}}+q=$$
$$=\sqrt{\frac{-p^3}{27}}-3\sqrt{\frac{-p^3}{27}}+q=2\left(\frac{q}{2}-\sqrt{\frac{-p^3}{27}}\right)=$$
$$=\frac{2\left(\left(\frac{q}{2}\right)^2+\left(\frac{p}{3}\right)^3\right)}{\frac{q}{2}+\sqrt{\frac{-p^3}{27}}}>0,$$
which says that our equation has a unique real root.
The case $q<0$ is similar, but we need to work with the maximum point. |
Proof that $f(x)=x^4+5x^2-9=0$ has at least two real roots and another question? | Let's walk through the proof, shall we?
We know that $f(x)$ is continuous for all real numbers since it's a sum of continuous functions and a constant.
Small detail: a constant is also a continuous function, but what you said isn't wrong.
So, let $x=0$. Then $f(0)=-9$. Let $x=2$. Then $f(2)=27$. So, by the intermediate value theorem, there $\exists x_0$ such that $f(x_0)=0$. So, this proves that there is at least one real root.
Perfect.
Since the function's variables are all even powered, it follows that $x^4=(-x)^4$ and $5x^2=5(-x)^2$. So, $x^4+5x^2-9=(-x)^4+5(-x)^2-9$. Hence, let $x_n=-x_0$. Then, this would also be a root.
Why you would call one root $x_0$ and the other $x_n$ is a mystery to me, but nevertheless, it's not incorrect. The function being even indeed means the roots are mirrored over $0$, so if $x_0$ is a root, so is $-x_0$.
So, there $\exists x_0$ and $x_n$, which are two real roots. This proves there are at least two real roots.
Again, a small detail; we could have $x_0=x_n$, but this is easily disproved by showing $x_0\neq 0$.
Should I have explained more about the even powered variables? Some of my friends did it by contradictions.
No, this is perfectly fine. It of course depends on the context, but your proof is very clear. You should be okay.
As far as the bonus goes:
$f(x)$ is a continuous function where $f(x_0) \gt 0$. Prove there $\exists x$ such that $x \in (x_0-\epsilon, x_0+ \epsilon)$ for some $\epsilon \gt 0$.
I don't think you meant that; sure, take $x=x_0$, and $x \in (x_0-\epsilon, x_0+ \epsilon)$. |
G is a simple undirected, connected graph with $\kappa(G) < \frac{n}{2}$. G has a simple path of length $2\kappa(G)$. | It looks similar to this lemma:
"If $G$ is a connected graph on $n$ vertices with minimum degree $\delta(G)<\frac{n}{2}$ then there exists a path of length $2\delta(G)$."
The proof of the lemma proceeds by considering a path $P$ of maximal length and uses the fact that the end vertices of $P$ have both degree at least $\delta(G)$. It is very similar to the proof of Dirac's theorem about hamiltonian graphs, maybe you've seen it. For a full proof, top of page 4: https://homepages.inf.ed.ac.uk/hguo/files/16.fall-adv.comb/Lecture_2_03.10.2016.pdf
For your problem, I think you can still consider a path of maximal length and prove that its end vertices must have degree at least $\kappa(G)$ (why?), then the exact same proof should work. |
Polynomial with roots modulo all primes $p \equiv 3 \pmod 4$ | Let $$f(x)= x^3-3x+4$$
It is irreducible and $$Disc(f) = 4(3)^3-27(-4)^2=-18^2$$
Let $k$ be the splitting field of $f\bmod p$. Factorize $$f(x)=\prod_{j=1}^3 (x-a_j)\in k[x]$$ Note that $$Disc(f)^{1/2}=(a_1-a_2)(a_1-a_3)(a_2-a_3)\in k, \qquad Disc(f)=\prod_{i\ne j} (a_i-a_j)$$
Because of the Frobenius automorphism, if $f\bmod p$ is irreducible then $k=\Bbb{F}_p[x]/(f(x))$, i.e. $[k:\Bbb{F}_p]=3$, which implies that $k$ doesn't contain any quadratic subfield, i.e. $Disc(f)^{1/2}\in \Bbb{F}_p$.
And (excluding the case $p=3$, where $f=(x+1)^3$ is reducible), since we know that $Disc(f)^{1/2}\in \Bbb{F}_p$ iff $p\not\equiv 3\bmod 4$, we get that $f$ is never irreducible when $p\equiv 3\bmod 4$, i.e. $f\bmod p$ has a root. |
Tangent line equation to $1/x$ | Hints: First, note your difference quotient has a typo. It should be
$\frac{\frac1{a+h}-\frac1a}{h}$, which can be rewritten as $\frac{-1}{a(a+h)}$.
Second, you must take the limit of the difference quotient as $h\to0$ to find the slope of the tangent line. If you don't take the limit, you don't have the tangent line at $(a,f(a))$, but only the approximating secant line between the points $(a,f(a))$ and $(a+h,f(a+h))$. So take the limit to get the slope.
Once you have the slope of the tangent line and the point $(a,f(a))$ of tangency on the graph, you can write an equation of the line tangent to the graph at that point by using point-slope form (recall it is $y-y_0 = m\cdot(x-x_0)$ where $m$ is the slope and $(x_0,y_0)$ is the point).
Note that everything will be in terms of the unspecified value $a$. You don't know what $a$ is, only that it is a value of $x$ for which there is a tangent to the graph. |
Extremum in $f:R^2\to R$ via partial differential? | The correct approach is the following: the necessary condition for a point $(x_1,x_2)$ to be an extremum of a $C^1$ (i.e. continuously differentiable) function $f$ is:
$$
\nabla f(x_1,x_2) = 0
$$
i.e. both derivatives are zero: $\partial_1f(x_1,x_2) = 0$ and $\partial_2 f(x_1,x_2) = 0$. Due to your notation I guess that you have misunderstood the formula
$$
\left(\frac{\partial f}{\partial x_1},\frac{\partial f}{\partial x_2}\right)(x_1,x_2) = 0
$$
as a product while it is an evaluation at a point $(x_1,x_2)$ instead. |
Is the splitting $\pi_{k}(X,A)\simeq\pi_{k}(X)\times \pi_{k-1}(A)$ a $\pi_1(A)$-modules isomorphism? | Let's recall a few things. When $F\to X\to Y$ is a fibration sequence, you have an induced long fibration sequence $$\cdots\to\Omega F\to\Omega X\to\Omega Y\to F\to X\to Y$$
meaning that each run of three consecutive spaces forms a fibration sequence; this is called the "exact Puppe sequence".
For a fibration $f:E\to B$ with fiber $F$ you have $\pi_k(E,F,*)\cong\pi_k(B,*)$ so that if $i:A\to X$ is an inclusion with homotopy fiber $F$, there is an isomorphism
$$\pi_k(\Omega X,\Omega A)\cong\pi_k(F)$$
induced from the fibration sequence $\Omega A\to\Omega X\to F$. Of course you also have
$$\pi_k(\Omega X,\Omega A)=\pi_{k+1}(X,A)$$
because the adjunction $(\Sigma,\Omega)$ induces an adjunction on the homotopy category of pairs.
In short, for any pair $(X,A)$, there is an isomorphism $\pi_{k+1}(X,A)\cong \pi_k(F)$ where $F$ is the homotopy fiber of the inclusion.
Unraveling the different isomorphisms, one can make the latter explicit: take a representative $f\colon (D^{k+1},S^k,*)\to (X,A,*)$ of an element of $\pi_{k+1}(X,A,*)$, then the associated element of $\pi_k(F)$ is represented by the map $f^\prime\colon (S^k,*)\to (F,*)$
$$f^\prime(z)=(a(z),\gamma(z))$$
with $a(z)=f(z)$ and $\gamma(z)(t)=f(z,t)$ under the identification $D^{k+1}=CS^k$.
We should make sure that it is $\pi_1(A)$-equivariant; I will just accept it, Tyrone claimed it in his comment. The action of $\pi_1(A)$ on $\pi_k(F)$ is the action of the fundamental group of the total space on the homotopy groups of the fiber: there is a monodromy action $\Omega X\times F\to F$ which is not basepoint preserving, but precomposing with $A\to X$ gives a well defined pointed map
$$\Omega A\times F\to F$$
inducing an action on homotopy groups.
Now if the inclusion $A\subseteq X$ is nullhomotopic, i.e. if it is homotopic to the trivial map $*\colon A\to X$, then the homotopy fibers of the inclusion and of the trivial map are weakly equivalent (by naturality of the long exact sequence of homotopy groups of a fibration, and the five lemma). But the homotopy fiber of the trivial map is $$A\times \Omega X$$ Also, the long exact sequences are $\pi_1(A,*)$-equivariant, as well as the morphism between them. For a trivial map $*\colon A\to X$, the additional piece of data we have is that the fiber sequence
$$\Omega X\to F\to A$$
is such that $\Omega X\to F$ has a retract. So, the splitting is equivariant in this case, and hence is always equivariant.
Little check: we used that a commutative square of fibrations induces a map of long exact sequences, which is true in the strict setting. It is also true in the case of a diagram which only commutes up to homotopy: if $f\colon A\to B$ and $g\colon C\to D$ are fibrations and we have maps $k\colon A\to C$, $l\colon B\to D$ making the square commute up to homotopy, take a homotopy
$$H:A\times[0,1]\to D$$
and its dual
$$K:A\to D^{[0,1]}$$
You can form the following two strictly commutative squares:
$$
\newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex}
\newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex}
\newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}}
%
\begin{array}{ccc}
A & \ra{f} & B \\
\da{a\mapsto (a,0)} & & \da{l}\\
A\times[0,1] & \ra{H} & D
\end{array}
$$
and
$$
\newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex}
\newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex}
\newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}}
%
\begin{array}{ccc}
A & \ra{K} & D^{[0,1]} \\
\da{k} & & \da{\mathrm{ev}_1}\\
C & \ra{g} & D
\end{array}
$$
They induce long exact sequences in homotopy, and the lower long sequence of the first diagram is exactly the same as the upper long sequence of the second one (namely the long exact sequence of the homotopic maps $l\circ f\simeq g\circ k$), so they can be glued, forming the exact sequence we were looking for. |
Calculating the control energy of a non-optimal prespecified trajectory of an LTI | One can only find the control signal if the provided trajectory is feasible. Namely in order to satisfy the differential equation the terms which are not a function of $u(t)$ still need to lie inside the span of $B$. Since
$$
\dot{\lambda}(t) - A\,\lambda(t) = B\,u(t)
$$
so in order to solve for $u(t)$ the left hand side needs to be a multiple of $B$.
For example it can be shown that linear interpolation of the example system is not feasible
$$
\lambda(t) =
\begin{bmatrix}1 \\ 0\end{bmatrix} \left(1 - \frac{t}{T}\right) +
\begin{bmatrix}0 \\ 1\end{bmatrix} \frac{t}{T}
$$
$$
\dot{\lambda}(t) =
\begin{bmatrix}-1 \\ 1\end{bmatrix} \frac{1}{T}
$$
$$
\dot{\lambda}(t) - A\,\lambda(t) =
\begin{bmatrix}
1 - \frac{1}{T} \\
\frac{1}{T} - \frac{3}{10}
\end{bmatrix} +
\begin{bmatrix}
- \frac{3}{2} \\
\frac{13}{10}
\end{bmatrix} \frac{t}{T}
$$
namely the constant term only lies in the span of $B$ for one specific value of $T$ (namely $T=\frac{20}{13}$), however the time varying term never lies in the span of $B$ except at $t=0$. So on the interval $0<t<T$ the expression $\dot{\lambda}(t) - A\,\lambda(t)$ does not always lie in the span of $B$ and is therefore not feasible.
If $\lambda(t)$ is feasible you could use the left inverse of $B$ to find $u(t)$ when $B$ does not have rank $n$ (where $A\in\mathbb{R}^{n\times n}$) but does have full column rank
$$
u(t) = \left(B^\top B\right)^{-1} B^\top \left(\dot{\lambda}(t) - A\,\lambda(t)\right).
$$
If $B$ does have rank $n$ then the normal inverse can be used, and also any $\lambda(t)$ should be feasible, since $B$ would then span the whole of $\mathbb{R}^n$
$$
u(t) = B^{-1} \left(\dot{\lambda}(t) - A\,\lambda(t)\right).
$$ |
Continuous and differentiable sum of series | Since the partial sums of $\cos(nx)$ are bounded, the series:
$$\sum_{n\geq 1}\frac{\cos(nx)}{n^{\alpha}}$$
is pointwise convergent on $(0,\pi/2)$ for any $\alpha>0$ by Dirichlet's test.
I think you can get it from there. |
How should we calculate difference between two numbers? | Traditionally, the “difference” between two numbers refers to the distance on a number line between the points corresponding to each of the two numbers, a.k.a. the absolute value of their difference. Analogously, if you asked “What is the distance from Toronto to Vancouver?” or “What is the distance from Vancouver to Toronto?”, you would expect the same answer: the [positive] distance separating the two cities, regardless of the direction of travel.
On the other hand, if asked “What is the result when you subtract 3 from 5?”, you should give a different answer (2) than if you were asked “What is the result if you subtract 5 from 3?” (-2).
As for calculating on the number line:
If the two numbers are on the same side of $0$ (e.g., $-2$ and $-6$), the difference is the result when you subtract the smaller absolute value from the larger absolute value (e.g., $\lvert -6 \rvert - \lvert -2 \rvert = 6-2 = 4$);
If the two numbers are on opposite sides of $0$ (e.g., $-5$ and $2$), then you add the absolute values (e.g., $\lvert -5 \rvert + \lvert 2 \rvert = 5+2 = 7$), or alternatively subtract the negative number from the positive one which effects a sign change (e.g., $2-(-5)=2+5=7$). |
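In code the distinction is just `abs` versus bare subtraction; a trivial Python illustration:

```python
print(abs(-6 - (-2)))   # 4: same side of 0, difference of absolute values
print(abs(-5 - 2))      # 7: opposite sides of 0, sum of absolute values
print(5 - 3, 3 - 5)     # 2 -2: directed subtraction depends on the order
```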
Finding $\frac{S_{\triangle BKC}}{S_{\triangle ABC}}$ | Since you let me have the pleasure of solving the problem (by not sharing your solution or approach), it is only fair that I reciprocate and let you enjoy the exercise of at least some parts of the proof. Here we go:
Lemma 1 - If in $\triangle ABC$ shown below, $AX$ bisects $\widehat A$ then $\frac{BX}{CX}=\frac{AB}{AC}$ .
Proof is left as an exercise for the interested reader.
Lemma 2- As in the figure below, extend $IK$ to $J$, such that $\widehat{JEK}=\widehat{IEK}$ . Then $\triangle EJI$ and $\triangle ABC$ are similar and $\frac{JI}{BC}=\frac{EJ}{AB}=\frac{EI}{AC}$.
Proof is somewhat beautiful, therefore it is left as an exercise for the interested reader.
Now for calculations, let us refer to the radius of the in-circle as $r$. Let us also write the lengths of $BC$ , $CA$ and $AB$ as $a$ , $b$ and $c$ , respectively. We have:
$$\frac{S_{\triangle BKC}}{S_{\triangle ABC}} = \frac{S_{\triangle BIC} + S_{\triangle BIK} + S_{\triangle CIK}}{S_{\triangle ABC}} $$
$$= \frac{a}{a+b+c} + \frac{a\cdot IK}{(a+b+c)r} \qquad (1)$$
To find $IK$ , note that in $\triangle EJI$ , $EK$ bisects $\widehat{JEI}$ . From Lemma 1 we have:
$$\frac{IK}{JK} = \frac{EI}{EJ} $$
$$\therefore \frac{IK}{IJ} = \frac{EI}{EI+EJ} \qquad (2)$$
From Lemma 2 we have:
$$\frac{EI}{EI+EJ} = \frac{b}{b+c} \qquad (3)$$
and
$$\frac{IJ}{EI} = \frac{a}{b} $$
$$\therefore IJ = \frac{a\cdot r}{b} \qquad (4)$$
From (2) , (3) and (4) we can calculate $IK$ :
$$IK = \frac{a\cdot r}{b+c} \qquad (5)$$
Now from (5) and (1) we can calculate the desired ratio of areas:
$$\frac{S_{\triangle BKC}}{S_{\triangle ABC}} = \frac{a}{a+b+c} (1 + \frac{a}{b+c}) = \frac{a}{b+c}$$
For the particular values of $a=10$ , $b=8$ and $c=7$ we have:
$$\frac{S_{\triangle BKC}}{S_{\triangle ABC}} = \frac{10}{8+7} = \frac23$$ |
What is the difference between Riemann sums and series? | No, convergence criteria for series do not apply here. For each partition $P$, $U(f,P)$ and $L(f,P)$ are finite sums. And, if you have partitions $P_1,P_2,P_3,\ldots$, each of which is a refinement of the previous one, the sequences $\bigl(U(f,P_n)\bigr)_{n\in\mathbb N}$ and $\bigl(L(f,P_n)\bigr)_{n\in\mathbb N}$ are not partial sums of a fixed series. |
What does it mean to show that a bijection betwen two hom-sets is natural? | It should mean (I don't have access to the book at the moment) that, say, if you have a morphism $A\longrightarrow A'$, the diagram
$$\DeclareMathOperator{\Cat}{\bf Cat}
\begin{matrix}
\Cat(A'\times B,C)&\!\longrightarrow&\Cat(A',C^B)\\
\downarrow&&\downarrow\\%
\Cat(A\times B,C)&\!\longrightarrow&\Cat(A,C^B)
\end{matrix}
$$
is commutative. |
If a function $f(x,y,z) = F(r)$ depends on distance from the origin, $r = \|\vec{r}\| = \sqrt{x^2 + y^2 + z^2}$, then $\nabla f = F'(r)e_{\vec{r}}$ | Start with the definition and then use the chain rule on each term:
$$\nabla f=\frac{\partial f}{\partial x}\vec i+\frac{\partial f}{\partial y}\vec j+\frac{\partial f}{\partial z}\vec k=\frac{\partial f}{\partial r}\frac{\partial r}{\partial x}\vec i+\frac{\partial f}{\partial r}\frac{\partial r}{\partial y}\vec j+\frac{\partial f}{\partial r}\frac{\partial r}{\partial z}\vec k=\frac{\partial f}{\partial r}\left ( \frac{x}{r}\vec i+\frac{y}{r}\vec j+\frac{z}{r}\vec k \right )=\frac{\partial f}{\partial r}\vec e_r$$ |
Finding First Integrals in the case $2xy u_x - (x^2+y^2) u_y =0$ | You have $\frac{dy}{dx} = \frac{y'}{x'} = \frac12\left(-\frac xy-\frac yx\right)$. Define $\frac xy = z$; then $y=\frac xz$, $\frac{dy}{dx} = \frac{z-x\,dz/dx}{z^2}$, so we have the ODE $z-x\frac{dz}{dx} = \frac{z^2}{2}\left(-z-\frac1z\right) = -\frac12\left(z^3+z\right)$, i.e.
$$ x\frac{dz}{dx} = \frac32 z+\frac12 z^3$$
From there you get: $$\frac{dz}{\frac32 z +\frac12 z^3} = \frac{dx}{x}$$
which you can integrate and solve. |
Is there a bijection between $ N $ and $ N^2 $ in which $ f(a, b) < p(a, b) $ | The bijection provided by Cantor's Pairing Function is polynomially bounded.
Indeed it is a polynomial itself -- $\pi : \mathbb{N}^2 \to \mathbb{N}$ is given by
$$\pi(x,y) \triangleq \frac{(x+y)(x+y+1)}{2} + y$$
I hope this helps ^_^ |
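A direct transcription in Python (the function name is mine):

```python
def cantor_pair(x: int, y: int) -> int:
    # pi(x, y) = (x+y)(x+y+1)/2 + y, a polynomial bijection N^2 -> N.
    return (x + y) * (x + y + 1) // 2 + y

# The values walk the anti-diagonals of N^2:
print([cantor_pair(x, y) for x in range(3) for y in range(3)])
# [0, 2, 5, 1, 4, 8, 3, 7, 12]
```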
Solve $x,y,z, z^x=x, z^y=y, y^y=x$ | $z=-\sqrt{2}$ is not permissible.
This is because $\log(-\sqrt2)$, which is used in your derivation, is undefined. This spurious solution comes about because $y$ happens to be 2.
Edit:
As pointed out below in the comments, the function $z^y$ is defined on the domain $z>0$, which is implicit. To see why, note for example that $(-1)^{1/2} \ne (-1)^{2/4}$, indicating that $z^y$ could have different values at $z=-1$ and so cannot be viewed as a function for negative $z$. If any negative value is allowed for the solution, it needs to be explicitly stated in the question. |
Point of intersection between ln and exponential functions | Unfortunately, we cannot express the solutions of all equations with the usual functions. And you know that already: for example, $$x^2=2$$ does not have a solution in $\mathbb Q$, so we have to add a symbol to {integers,*,/,+,-} in order to manipulate the solutions of $x^2=2$ (namely, we add $\sqrt{\cdot}$).
We have the same kind of issue with your equation; you can feel it from all your unsuccessful tries at solving it (we can prove that the solutions of this equation can't be expressed with the usual functions, but it's a bit tricky). What we can do:
study the function $f:x\mapsto \ln(x)-1+1.2^x$ in order to show that $f$ is strictly increasing on $\mathbb R^+$, $\lim_{x\to 0} f(x) = - \infty$, and $\lim_{x\to +\infty}f(x)=+\infty$.
Hence by the Intermediate value theorem, $f$ is bijective from $\mathbb R^{+*}$ to $\mathbb R$, so $f$ has an inverse $f^{-1}$.
Now, the solution of your equation is $f^{-1}(0)$. We've added the symbol "$f^{-1}$" to our usual language in order to express this solution. |
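Numerically, $f^{-1}(0)$ is then easy to pin down with any bracketing root finder; a sketch using SciPy (the bracket $[10^{-9},2]$ is my choice, valid since $f$ changes sign there):

```python
from math import log
from scipy.optimize import brentq

f = lambda x: log(x) - 1 + 1.2**x   # strictly increasing on (0, oo)
root = brentq(f, 1e-9, 2.0)         # f(1e-9) < 0 < f(2)
print(root)                          # ~ 0.85
```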
Hilbert style: $\vdash\lnot\forall xB\to\exists x \lnot B$ | In order to prove something with Mendelson's system, you have to re-use previously proved results.
You need Lemma 1.11 (a) [ page 31 ]: $\vdash \lnot \lnot \mathcal B \to \mathcal B$, as well as Lemma 1.11 (e) : $\vdash (\mathcal B \to \mathcal C) \to (\lnot \mathcal C \to \lnot \mathcal B)$.
Proof sketch:
1) $\vdash \lnot \lnot \mathcal B \to \mathcal B$ --- Lemma 1.11 (a)
2) $\vdash \forall x \ (\lnot \lnot \mathcal B \to \mathcal B)$ --- from 1) by Gen
3) $\vdash \forall x \lnot \lnot \mathcal B \to \forall x \mathcal B$ --- from 2) by Ex.2.27 (a) [ page 73 ]: $\vdash \forall x \ (\mathcal B \to \mathcal C) \to (\forall x \mathcal B \to \forall x \mathcal C)$ and MP
4) $\vdash \lnot \forall x \mathcal B \to \lnot \forall x \lnot \lnot \mathcal B$ --- from 3) and Lemma 1.11 (e) by MP
5) $\vdash \lnot \forall x \mathcal B \to \exists x \lnot \mathcal B$ --- from 4) and abbreviation. |
System of n equations | If $x_1>0,x_1\neq2$ then $x_2=2x_1-3+\frac{4}{x_1^2}>x_1$, so the cycle can't repeat at $x_n$.
If $x_1<-1$ then $x_2<x_1$.
If $-1<x_1<0$ then $x_1<x_2$. |
Prove that the set $\Big\{ 1/(n+1): n \in \mathbb{N} \Big\} \cup \big\{ 0 \big\} $ is closed. | Let $A = \{ \frac{1}{n+1}: n\in\mathbb{N} \} \cup \{0\}$. Then
$$A^{c} = \left(\bigcup_{n=1}^{\infty}\left(\frac{1}{n+1},\frac{1}{n}\right)\right) \cup (-\infty,0) \cup (1,\infty)$$
is a countable union of open intervals (which are open sets), hence it is open. Therefore $A$ is closed.
$\textbf{Edit}$: my answer assumed that $\mathbb{N}$ includes $0$. If your convention for $\mathbb{N}$ does not include $0$, then we would have
$$A^{c} = \left(\bigcup_{n=2}^{\infty}\left(\frac{1}{n+1},\frac{1}{n}\right)\right) \cup (-\infty,0) \cup \left(\frac{1}{2},\infty\right)$$
and the conclusion is the same. |
proving Power Set Difference $P(A-B) \neq P(A)-P(B)$ | $P(A-\emptyset)=P(A)\neq P(A)- P(\emptyset)$ |
Do path homotopy classes of concatenated paths have a middle fixed point? | No they don't have to contain $p$. That is, $[a\ast b]$ is the homotopy class of paths relative to $\{0, 1\}$, not relative to $\{0, \frac{1}{2}, 1\}$. It is very important that this is the case.
In defining the fundamental group, the inverse of $[a]$ is $[\bar{a}]$ where $\bar{a}$ is the loop $a$ traversed in reverse, i.e $\bar{a}(t) = a(1-t)$. If $[a\ast\bar{a}]$ was the set of homotopy classes relative to $\{0, \frac{1}{2}, 1\}$, then $[a\ast\bar{a}]$ would not in general be the homotopy class of the constant loop; that is, $a\ast\bar{a}$ is not in general homotopic to the constant loop relative to $\{0, \frac{1}{2}, 1\}$ (see the gif below from Wolfram MathWorld to see why).
(animation of the homotopy, from Wolfram MathWorld, omitted)
As the homotopy class of the constant loop is the identity of the fundamental group, we see that $[\bar{a}]$ would not be the inverse of $[a]$. |
Inequality for the Laplace transform of a density function | Since $\hat{f}(s)=\int_0^{\infty}e^{-sx}f(x)dx=s\int_0^{\infty}e^{-sx}F(x)dx$ we get $$\int_0^{\infty}e^{-sx}(1-F(x))dx=\frac{1-\hat{f}(s)}{s}$$ and therefore $$a\left(\frac{1-\hat{f}(s)}{s}\right)=a\left(\int_0^{\infty}e^{-sx}(1-F(x))dx\right)\leq a\int_0^{\infty}1-F(x)dx=a\mathbb{E}X<1.$$ |
References for minimal norm of a linear map | Here is a trick for square matrices. If $A$ fails to be invertible, then of course the infimum is zero. In the case that $A$ is invertible, we have
$$
\begin{align}
\inf_{\|x\| = 1} \|Ax\| &= \inf_{x \neq 0} \frac{\|Ax\|}{\|x\|} =
\inf_{y\neq 0}\frac{\|A(A^{-1}y)\|}{\|A^{-1}y\|}
\\ & = \inf_{y \neq 0}\frac{\|y\|}{\|A^{-1}y\|} = \inf_{\|y\| = 1}\frac{1}{\|A^{-1}y\|} =
\left[\sup_{\|y\| = 1} \|A^{-1}y\|\right]^{-1}.
\end{align}
$$
A lower bound in the case that $A$ is rectangular. If $A$ fails to have full column rank, then the infimum must be zero. If $A$ does have full column rank, let $B$ be such that $BA = I$. We have
$$
\begin{align}
\inf_{\|x\| = 1} \|Ax\| &= \inf_{x \neq 0} \frac{\|Ax\|}{\|x\|} =
\inf_{x \neq 0} \frac{\|(Ax)\|}{\|B(Ax)\|}
\\ & \geq
\inf_{y\neq 0}\frac{\|y\|}{\|By\|} = \inf_{\|y\| = 1}\frac{1}{\|By\|} =
\left[\sup_{\|y\| = 1} \|By\|\right]^{-1}.
\end{align}
$$ |
characteristic function of a clopen set continuous? | If $A$ is any subset of $\mathbb{C}$ then
$(1_U)^{-1}[A] = U$ iff $1 \in A, 0 \notin A$,
$(1_U)^{-1}[A] = X\setminus U$ iff $0 \in A, 1 \notin A$,
$(1_U)^{-1}[A] = X$ iff $\{0,1\} \subseteq A$, and
$(1_U)^{-1}[A] = \emptyset$ iff $\{0,1\} \cap A = \emptyset$
So for all (open) $A$ the inverse image is open (as $U$ is clopen).
So $1_U$ is continuous. |
How do I show that $\mathbb{Z}$ is not isomorphic to $\mathbb{Z} \times \mathbb{Z}_3$? | In this case, it's easy: $\mathbb{Z}\times\mathbb{Z}_3$ has an element whose order is $3$, whereas $\mathbb Z$ has no such element.
In general, thinking about the orders of the elements is a good approach. |
Determining a number is perfect square | $88k+41\equiv8\pmod{11}$
Now for any integer $a,$
$$a^2\equiv0,1,4, 9,5,3\pmod{11}\not\equiv8$$ |
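Both facts are a one-line check in Python (a sketch; $k=7$ is an arbitrary choice, since $88\equiv 0\pmod{11}$):

```python
print(sorted({a * a % 11 for a in range(11)}))  # [0, 1, 3, 4, 5, 9]: no 8
print((88 * 7 + 41) % 11)                       # 8, for any k
```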
How to change the system in 4x4 system of first-order equations? | Introducing a whole new set of $x_i$ variables is somewhat confusing. Simpler to just introduce two new variables $y_3$ and $y_4$ where $y_3=y_1'$ and $y_4=y_2'$. So you now have:
$y_1' = y_3$
$y_2' = y_4$
$y_3' = y_1'' = \dots$
$y_4'=y_2''= \dots$ |
I need help with a proof showing $\|u\|^2 = \|\operatorname{proj}_v u\|^2 + \|u - \operatorname{proj}_v u\|^2 $ | How about this?
\begin{align}
\| u \|^2 &= \| u - \operatorname{proj}_v u + \operatorname{proj}_v u\|^2 \\
&= \left< u - \operatorname{proj}_v u + \operatorname{proj}_v u, u - \operatorname{proj}_v u + \operatorname{proj}_v u\right > \\
&= \|u - \operatorname{proj}_v u \|^2 + \|\operatorname{proj}_v u \|^2 + \underbrace{\left < u - \operatorname{proj}_v u ,\operatorname{proj}_v u \right >}_{\text{0}} + \underbrace{\left < \operatorname{proj}_v u, u - \operatorname{proj}_v u \right >}_{\text{0}}\\
&=\|u - \operatorname{proj}_v u \|^2 + \|\operatorname{proj}_v u \|^2
\end{align} |
Why does $1^{-i}$ equal 1? | Since $a^b = e^{b \log a}$, we have
$$1^{-i} = e^{-i \log 1} = e^{-i \cdot 2k\pi i} = e^{2k\pi}$$
Note that in the complex numbers $\log^\mathbb C z = \log^\mathbb R |z| + (2k \pi + \arg z)i$, so there are infinitely many choices for its value.
One of the values that $1^{-i}$ takes is $1$, but it is not always $1$.
If we set $k = 0$, the resulting number is called the principal value (http://en.wikipedia.org/wiki/Principal_value), and that is what Wolfram reports.
With $k=0$ you get $e^0 = 1$ |
How to find a confidence interval for a Maximum Likelihood Estimate | I'll use the framework of the library book problem. Let $K$ be the total sample size, $N$ be the number of different items observed, $N_1$ be the number of items seen once, $N_2$ be the number of items seen twice, $A=N_1(1-{N_1 \over K})+2N_2,$ and $\hat Q = {N_1 \over K}.$
Then an approximate 95% confidence interval on the total population size $M$ is given by
$$\hat M_{Lower}={1 \over {1-\hat Q+{1.96 \sqrt{A} \over K} }} $$
$$\hat M_{Upper}={1 \over {1-\hat Q-{1.96 \sqrt{A} \over K} }} $$
As noted in the discussion of the library problem, at times the upper bound will be infinite, especially for small samples. Similarly, the lower bound may need to be capped at zero.
This approach is due to Good and Turing. A reference with the confidence interval is Esty, The Annals of Statistics, 1983. |
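For convenience, here is a direct transcription in Python (the function name and the guards for the degenerate cases mentioned above are my additions):

```python
from math import sqrt

def population_ci(K, N1, N2, z=1.96):
    # K: sample size, N1: items seen once, N2: items seen twice.
    Q_hat = N1 / K
    A = N1 * (1 - N1 / K) + 2 * N2
    half = z * sqrt(A) / K
    lower = max(1 / (1 - Q_hat + half), 0)           # cap at zero
    denom = 1 - Q_hat - half
    upper = 1 / denom if denom > 0 else float("inf")  # may be infinite
    return lower, upper
```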
Prove that $\mathrm{rank}(A)=\dim(\mathbb{Q}\otimes_{\mathbb{Z}}A)$ | A priori, the general element of $\Bbb Q\otimes A$ is a rational linear combination of elements of the form $q\otimes a$ with $q\in\Bbb Q$ and $a\in A$. As $$q\otimes a=q\cdot 1\otimes a$$ and $$\frac nm\cdot 1\otimes a+\frac rs\cdot 1\otimes b=\frac 1{ms}\cdot 1\otimes(nsa+rmb)$$
we can write each element of $\Bbb Q\otimes A$ more specifically in the form $q\cdot 1\otimes a$ with $q\in \Bbb Q$ and $a\in A$.
If $\{x_i\}_{i\in I}$ are independent in $A$, then $\{1\otimes x_i\}_{i\in I}$ are linearly independent in $\Bbb Q\otimes A$. Indeed, if $\sum q_i\cdot 1\otimes x_i=0$ (with almost all $q_i=0$), then with $N$ as common denominator of all $q_i$, we have $n_i:=Nq_i\in\Bbb Z$ and $$\begin{align}0&=N\sum (q_i\cdot 1\otimes x_i)\\&=\sum(n_i\cdot 1\otimes x_i)\\&=\sum 1\otimes n_ix_i\\&=1\otimes\sum n_ix_i\end{align}$$
and conclude that $M\cdot \sum n_ix_i=0$ holds in $A$ for some non-zero integer $M$. Then with $m_i:=Mn_i$, $\sum m_ix_i=0$ and so all $m_i$ are $=0$ and also all $q_i=0$, as was to be shown.
Conversely, let $\{\alpha_i\}_{i\in I}$ with $\alpha_i\in\Bbb Q\otimes A$ be linearly independent. As seen above, we can write $\alpha_i=q_i\cdot 1\otimes a_i$ with $a_i\in A$ and $q_i\in \Bbb Q$. Of course, $q_i\ne 0$ for our linearly independent family. If we multiply each $\alpha_i$ by a non-zero rational, we still have a linearly independent family. Hence we may assume wlog $\alpha_i=1\otimes a_i$.
Then the $\{a_i\}_{i\in I}$ are independent. Indeed, if $\sum m_ia_i=0$ (with almost all $m_i=0$), then
$$\begin{align}0&=1\otimes 0
\\&=1\otimes \sum m_ia_i\\
&=\sum 1\otimes m_ia_i\\
&=\sum m_i\cdot 1\otimes a_i\\&=\sum m_i\alpha_i \end{align}$$
and hence all $m_i=0$, as was to be shown.
Specifically, if $a\in A$ is a torsion element and $ma=0$ for some non-zero integer $m$, then
$$1\otimes a=\frac 1m\otimes ma=\frac1m\otimes 0=0. $$ |
Taylor polynomials expansion with substitution | It's generally a good policy to "always expand around zero". This means that you want to have variables that go to zero at your point of interest.
In your case, you want to have $h$ go to zero for $x$ and $k$ go to zero for $y$. For this to happen at $x=3$ and $y=1$, you want to use $x=3+h$ and $y = 1+k$.
One reason that you want to see what happens when $h \to 0$ is that $h^2$ (and higher powers) are small compared to $h$, so they can be disregarded when you are seeing what happens. If $h$ does not tend to zero, then $h^2$ and higher powers cannot be disregarded and, in fact, may dominate. |
How can I find a recursive relation for the following words? | Construct these kinds of words from the start or the end. Let's do this from the end.
If we place character a at the end of the word (n-th place), then we can place all the words of length $n-1$ from alphabet {a, b, c} that do not contain abc before this a to construct a word that we want. So in this case we have $d(n-1)$ ways to do this.
Same thing is true if we place character b at the end of our word. So again we have $d(n-1)$ ways to do this.
But if we place character c at the end of our word, we must make sure that we don't have ab before this c. We have $d(n-1)$ ways to construct words of length $n-1$ that don't contain abc but we must exclude words of length $n-1$ that end with ab. For calculating the number of words of length $n-1$ that end with ab and don't contain abc, we place ab at the end and construct the prefix with d(n-3) possible words. So we have $d(n-1) - d(n-3)$ possible words for this case.
For initial values we have $d(1)=3, d(2)=9, d(3)=3^3-1=26$.
So: $d(n) = 3d(n-1) - d(n-3)$ for $n\geq 4$. |
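The recurrence and the initial values are easy to sanity-check against brute-force counting for small $n$ (a sketch):

```python
from itertools import product

def brute(n):
    # Count words over {a, b, c} of length n avoiding the substring "abc".
    return sum("abc" not in "".join(w) for w in product("abc", repeat=n))

d = {1: 3, 2: 9, 3: 26}
for n in range(4, 10):
    d[n] = 3 * d[n - 1] - d[n - 3]
    assert d[n] == brute(n)
print([d[n] for n in sorted(d)])
# [3, 9, 26, 75, 216, 622, 1791, 5157, 14849]
```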
need a solution to the canonical form | I guess $\ln(\xi+\eta)$ qualifies to be a solution. However I don't find any arbitrary function associated. I understand this is just $\ln r$, which is a solution to the Laplacian except at the origin, and the time derivatives get killed due to the absence of $t$. However this seems a wayward way to look at it, because this seems to be a particular class of solutions, namely the harmonic ones. Can you help me? |
Degree of a map and Suspension | Your conjecture is false because space filling curves exist. Here is an "explicit example": take a surjective map $f:S^1\rightarrow S^2$ and suspend it to $Sf:S^2\rightarrow S^3$. Then postcomposing with the Hopf fibration $\nu:S^3\rightarrow S^2$ will give you a map with the desired properties: $\nu^{-1}(y)$ will be a circle for every point and, as the map $Sf$ is surjective, $Sf^{-1}\nu^{-1}(y)$ will have an infinite number of points for all $y$.
I think it is a good idea to homotope your map to a smooth one: you can homotope to a smooth curve and then suspend. The map might not be smooth at the north and south poles, but everywhere else it will be smooth. By a small perturbation (leaving the equator untouched) you can make the map smooth. The degree can be calculated at a regular value on the equator, which you can relate to the degree of the unsuspended map. |
Uniform convergence of $(f_n)_{n \in \Bbb{N}}$ on $D_1 \cup D_2$ | Let me suggest one possible way to prove the desired result. We know that $f_n \to f$ uniformly on $A$ iff $\lim\limits_{n \to \infty}\sup\limits_{x\in A}|f_n(x)-f(x)|=0$.
So, we can consider two sequences $x_n=\sup\limits_{x\in A}|f_n(x)-f(x)|$ and $y_n=\sup\limits_{x\in B}|f_n(x)-f(x)|$. Knowing that $x_n \to 0$ and that $y_n \to 0$, we need $z_n=\max(x_n,y_n)\to 0$. Now this is easy to achieve, because for every $\varepsilon \gt 0$ we simply take $N=\max(N_1,N_2)$ from the limit definition for $x_n$ and $y_n$. |
Completeness of derivatives of Hilbert basis with respect to a parameter | I assume you mean you have a family $(e_k(t))_{k\in\mathbb N}$ of orthonormal bases of a Hilbert space $\mathcal H$, depending differentiably on a parameter $t\in\mathbb R$. Let us try to decide whether $(e_k'(t_0))_{k\in\mathbb N}$ is complete in $\mathcal H$ for some fixed $t_0\in\mathbb R$. For this, we assume that there is a differentiable function $U : \mathbb R\to L(\mathcal H)$ such that $e_k(t) = U(t)e_k$, where $U(t_0) = I$ (the identity operator). This should be the case in your harmonic oscillator example. Obviously, $U(t)$ is a unitary operator for each $t$. Now, if $(e_k'(t))_{k\in\mathbb N}$ is not complete, there exists a non-zero vector $x\in\mathcal H$ such that $\langle x,e_k'(t_0)\rangle = 0$ for all $k\in\mathbb N$. Hence, $0 = \langle x,U'(t_0)e_k\rangle = \langle U'(t_0)^*x,e_k\rangle$ for all $k$, meaning that $U'(t_0)^*x = 0$. Actually, you can remove the star here, since $UU^* = I$ implies $U'U^* + U(U')^* = 0$ and thus $(U')^* = -U^*U'U^*$, which, in $t = t_0$, is $U'(t_0)^* = -U'(t_0)$ (in other words, $U'(t_0)$ is skew symmetric). So, the system $(e_k'(t_0))_{k\in\mathbb N}$ is not complete if and only if there exists a non-zero vector in the kernel of $U'(t_0)$. |
Prove that $\prod_{d|n} d = n^{\frac{τ(n)}{2}}$ | $d$ is a divisor of $n$ if and only if $\frac{n}{d}$ is a divisor of $n$.
Then
$$\prod_{d|n} d =\prod_{d |n} \frac{n}{d}$$
Thus,
$$ \left( \prod_{d|n} d \right)^2= \prod_{d|n} d \cdot \prod_{d |n} \frac{n}{d}= \prod_{d|n} n$$
You don't need to split it into cases; that split was probably suggested by someone who expected you to group the $d$ and $\frac{n}{d}$ terms in $\prod_{d|n} d$... In that case, if $n=k^2$, you cannot group $k$ with $\frac{n}{k}=k$.... |
Distribution of probability with bits | $n$ is the number of packets sent, $p$ is the probability that a packet is received okay, $X$ is the count of packets received okay.
Use $n=4, p=0.9$ so you have:
$$\mathsf P(X\,{=}\,x)=\binom 4x~0.9^x~0.1^{4-x}~\quad\big[x\in\{0,1,2,3,4\}\big]$$
Then $$\mathsf P(X\,{\geqslant}\, 2)= 1-\mathsf P(X\,{=}\,1)-\mathsf P(X\,{=}\,0)\\[2ex]\mathsf P(X\,{\leqslant}\,2)=1-\mathsf P(X\,{=}\,4)-\mathsf P(X\,{=}\,3)$$ |
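A sketch of the numerical evaluation:

```python
from math import comb

def pmf(x, n=4, p=0.9):
    # Binomial P(X = x): x packets received okay out of n.
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(1 - pmf(1) - pmf(0))   # P(X >= 2) ~ 0.9963
print(1 - pmf(4) - pmf(3))   # P(X <= 2) ~ 0.0523
```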
Defining $\{\mathcal{P}^n(\omega) | n\in\omega\}$ using Replacement | This is similar to how we treat finite sequences of natural numbers in arithmetic (e.g. in Gödel's incompleteness theorems): for $n\ge 1$, for simplicity, we say that $y=\mathcal{P}^n(x)$ iff there is a sequence $(a_i)_{0\le i\le n}$ such that $a_0=x, a_n=y$, and $a_{i+1}=\mathcal{P}(a_i)$ for $i<n$. So we've replaced a non-first-order definition with a "local" approach.
(Note that this trick is basically how the set-theoretic recursion theorem is proved.) |
Calculate the volume $y= x^2+2$ and $y=3$ | So the radius of each disk is the distance between $y=3$ and $y=x^2+2$, or $1-x^2$.
For the radius, it is just the distance between the curves, so subtract :)
You are integrating between intersections, or $-1$ and $1$.
So, you have $\displaystyle π\int_{-1}^{1}(1-x^2)^2\,dx$.
Remember simply, that you are stacking circles on each other, you just need to know the radius, and apply $πr^2$. And as you stack each of the circles, you get an integral :)
P.S., this is not required, but if you notice that the solid formed will be symmetrical if divided by the plane $x=0$, you can rewrite the integral to $\displaystyle 2π\int_{0}^{1}(1-x^2)^2\,dx$.
This will make subtraction simpler if you need to do this by hand. |
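If you want to check the arithmetic symbolically, SymPy evaluates both forms of the integral (a sketch):

```python
from sympy import symbols, integrate, pi

x = symbols('x')
print(pi * integrate((1 - x**2)**2, (x, -1, 1)))      # 16*pi/15
print(2 * pi * integrate((1 - x**2)**2, (x, 0, 1)))   # 16*pi/15, by symmetry
```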
General 2nd order ODE with non-constant coefficient | $$y''(t)+p(t)y'(t)+q(t)y(t)=0$$
Analytical solving of this general linear second-order ODE is a much too wide question, even if $p(t)$ and $q(t)$ are not arbitrary functions but polynomial fractions.
For a general approach see :
http://mathworld.wolfram.com/Second-OrderOrdinaryDifferentialEquation.html
If you don't want a closed form solution, you can try to find a solution in the form of an infinite series.
Often the closed form solution requires some special functions which were defined and standardised especially to solve a particular kind of ODE.
For example in case of $p(t)=\frac{1}{t}$ and $q(t)=\frac{t^2-n^2}{t^2}$ the analytic solution is
$$y(t)=c_1J_n(t)+c_2Y_n(t)$$
$J_n(t)$ and $Y_n(t)$ are the Bessel functions of first and second kind respectively.
A more complicated example: in the case of $p(t)=\frac{c-(a+b+1)t}{t^2-t}$ and $q(t)=\frac{ab}{t(t-1)}$ the analytic solution is
$$y(t)=c_1\:\:_2F_1(a,b;c;t)+c_2\:(-t)^{1-c}\:_2F_1(a-c+1,b-c+1;2-c;t)$$
$\:_2F_1(a,b;c;t)$ is the Gauss hypergeometric function.
There are a lot of examples of such ODEs whose solutions are expressed with convenient special functions. But in the general case of arbitrary $p(t)$ and $q(t)$, convenient special functions were not always standardised.
In the case of your ODE with $p(t)=\frac{t^2+t+1}{t^3+2}$ and $q(t)=\frac{t^4}{t+2}$, as far as I know no convenient special function is available. Maybe a generalized hypergeometric function? Sorry, I have not enough available time to check it, and I doubt the extra effort is worth it. As usual in such a situation, one commonly uses numerical methods for solving. |
Where am I going wrong in my tensor notation? | I already addressed this in a comment below my answer to your other question: $$e_i:e_j=e_i^Te_j=\delta_{ij},\,e_ie_j:e_ne_m=(e_ie_j)^Te_ne_m=e_j^Te_i^Te_ne_m=e_j^T\delta_{in}e_m=\delta_{in}\delta_{jm},$$ in accordance with @Svyatoslav's answer, which uses a different, equivalent method. |
Norm of a projection onto a subspace | Not necessarily; what you will need is a vector $v$ perpendicular to $u$, which is only guaranteed if the dimension is $2$ or larger (you also need $\alpha\ge0$). Then you can construct a $w$ (as a linear combination of $u$ and $v$) such that the norm of $u$ projected on the subspace $L_w$ spanned by $w$ is $\alpha$.
Let's say we have orthogonal unit vectors $e_u$ (parallel with $u$) and $e_v$, and let $e_w = e_u\cos\varphi + e_v\sin\varphi$; then we have that the norm of $u$'s projection on the space spanned by $e_w$ is
$$u\cdot e_w = u\cdot e_u\cos\varphi + u\cdot e_v\sin\varphi = |u|\cos\varphi$$
Just choose $\varphi$ such that $0\le|u|\cos\varphi = \alpha < |u|$ |
Sum of bitwise OR of all possible subarrays | One idea is to compute how much is contributed by each bit in the sum. If the array is length $n$ there are $\frac 12n(n+1)$ subarrays because you choose two positions with replacement, with the earlier being the start and the later being the end. For each bit position the OR of a subarray will be $1$ unless all the numbers in it have a $0$ in that location, so we want to count the runs of $0$s. A run of $k\ 0$'s will generate $\frac 12k(k+1)$ subarrays that have a zero in that bit. Add up the number of subarrays that have a $0$ in the position, subtract from the total number of subarrays, and you have the number of subarrays that have a $1$ in that position.
As an example I will use the array $[1,2,3,4,5,6,7,8,9,10]$. There are $\frac 12\cdot 10 \cdot 11=55$ subarrays.
In the ones bit there are five runs of $0$'s, each of length $1$. Each of those contributes one subarray with a $0$ in the ones bit, so there are $50$ subarrays with a $1$ in the ones place, starting our sum with $50$. In the twos bit there is a run of $1$ from $1$, a run of $2$ from $4,5$, and a run of $2$ from $8,9$. These give $1+3+3=7$ subarrays that have a $0$ in the twos bit, so there are $48$ that have a $1$, contributing $96$ to the sum. In the fours bit there is a run of three $0$s to start off and a run of three at the end, so there are $12$ subarrays with a $0$ in the fours place, giving $43$ with a $1$ and contributing $172$ to the sum. Finally for the eights there is a run of $7$ numbers at the start with a $0$ which gives $28$ subarrays with a $0$ in the eights place, so $27$ with a $1$, contributing $216$ to the sum. The sum is then $50+96+172+216=534$
This approach is linear in the length of the array. The operations count also grows with the number of bits in the largest number. If you had a short array with large numbers it would be faster to compute each subarray directly, which is actually an $n^3$ process: there are about $\frac 12n^2$ subarrays of average length $\frac n2$. If you had a thousand numbers of billions of bits each, and you can do a bitwise OR as a single operation, it would be more efficient to just compute each subarray. |
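For concreteness, here is the zero-run counting method as a short Python function (my own sketch; the function name is mine). It is $O(n\cdot b)$ for $n$ numbers of at most $b$ bits:

```python
def sum_subarray_ors(arr):
    """Sum of bitwise ORs over all contiguous subarrays, by counting zero-runs per bit."""
    n = len(arr)
    total = n * (n + 1) // 2                 # number of subarrays
    result = 0
    for b in range(max(arr).bit_length()):
        zero_subarrays = 0
        run = 0                              # current run of numbers with a 0 in bit b
        for x in arr:
            run = 0 if (x >> b) & 1 else run + 1
            zero_subarrays += run            # a run of k zeros contributes 1+2+...+k
        result += (total - zero_subarrays) << b
    return result

print(sum_subarray_ors(list(range(1, 11))))  # 534, matching the worked example above
```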
$\lim_{n\to\infty} 1^n = 1$? | Is it really true that
$$\lim_{\color{red}{x}\to\infty} 1^n = 1$$
You probably mean:
$$\lim_{\color{blue}{n}\to\infty} 1^n = 1$$
and yes, this is true because $1^n = 1$ for all $n$.
The expression $"1^{+\infty}"$ is indeterminate and the limit above doesn't contradict that.
Perhaps you know the following well-known limit too:
$$\lim_{n\to\infty}\left(\color{blue}{1+\frac{1}{n}}\right)^\color{red}{n} = e \ne1$$where you also have $\color{blue}{1+\frac{1}{n}}\to 1$ and $\color{red}{n}\to+\infty$.
Combining both limits shows that you can have sequences $c_n = \left(a_n\right)^{b_n}$ where $a_n \to 1$ and $b_n\to+\infty$ but with different limits for $c_n$; which is why we call $"1^{+\infty}"$ indeterminate. |
Find the solution of the recurrence relation $a_n=4a_{n-1}-3a_{n-2}+2^n+n+3$? | First solve the homogeneous equation $$a_n=4a_{n-1}-3a_{n-2}$$ using the ansatz $$a_n=q^n$$ |
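Carrying the hint further (my own continuation, worth checking): the ansatz gives the characteristic equation
$$q^2=4q-3\iff(q-1)(q-3)=0,$$
so the homogeneous solution is $a_n^{(h)}=c_1+c_2\,3^n$. For a particular solution, try $A\,2^n$ for the $2^n$ term (which yields $A=-4$) and, since $q=1$ is a characteristic root, $n(Bn+C)$ for the $n+3$ term (which yields $B=-\frac14$, $C=-\frac52$), giving
$$a_n=c_1+c_2\,3^n-2^{n+2}-\frac{n^2}{4}-\frac{5n}{2}.$$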
Does the concept of "cograph of a function" have natural generalisations / extensions? | You can get arbitrary relations by generalising the notion of the graph of a function, but you have to do it slightly differently.
Suppose that, instead of generalising the notion of "graph of a function $A \to B$" to "subset of $A \times B$", we generalise it to: triple $(A', B', \Gamma)$, where $\Gamma \subseteq A' \times B'$ and the projection maps $\Gamma \to A'$ and $\Gamma \to B'$ are surjective.
The corresponding 'cograph' would be the quotient of $A' \sqcup B'$ by the least equivalence relation identifying $a \in \iota_1(A')$ and $b \in \iota_2(B')$ if $\langle a,b \rangle \in \Gamma$, where $\iota_1$ and $\iota_2$ are the inclusion maps.
In this way we obtain a three-way equivalence, between:
1. Relations from $A$ to $B$;
2. Triples $(A',B',\Gamma)$, where $A' \subseteq A$ and $B' \subseteq B$ and $\Gamma \subseteq A' \times B'$ with surjective projection maps;
3. Triples $(A',B',E)$, where $A' \subseteq A$ and $B' \subseteq B$ and $E$ is an equivalence relation on $A' \sqcup B'$.
Indeed:
($1 \to 2$) send $R$ to $(\mathrm{dom}(R), \mathrm{im}(R), \mathrm{graph}(R))$.
($2 \to 1$) given $(A',B',\Gamma)$, declare $a\; R\; b$ if and only if $\langle a,b \rangle \in \Gamma$.
($1 \to 3$) send $R$ to $(\mathrm{dom}(R), \mathrm{im}(R), E_R)$, where $E_R$ is the least equivalence relation containing $\langle \iota_1(a), \iota_2(b) \rangle$ for all $a,b$ with $a\; R\; b$, where $\iota_1,\iota_2$ are the inclusion maps.
($3 \to 1$) given $(A',B',E)$, declare $a\; R\; b$ if and only if $\iota_1(a)$ and $\iota_2(b)$ lie in the same $E$-equivalence class.
Here are the generalisations of the notions you mention in your question:
1. Functions
Functions $f : A \to B$;
Subsets $\Gamma \subseteq A \times B'$ such that $B' \subseteq B$, the projection map $\Gamma \to B'$ is surjective and, for all $a \in A$, there exists a unique $b \in B'$ such that $\langle a,b \rangle \in \Gamma$.
Equivalence relations on $A \sqcup B'$, such that $B' \subseteq B$ and each equivalence class contains exactly one element of $B'$ and at least one element of $A$.
2. Partial functions
Partial functions $f : A \to B$;
Subsets $\Gamma \subseteq A' \times B'$ such that $A' \subseteq A$, $B' \subseteq B$, the projection maps $A' \leftarrow \Gamma \to B'$ are surjective and, for all $a \in A'$, there exists a unique $b \in B'$ such that $\langle a,b \rangle \in \Gamma$.
Equivalence relations on $A' \sqcup B'$, such that $A' \subseteq A$, $B' \subseteq B$, and each equivalence class contains exactly one element of $B'$ and at least one element of $A'$.
3. Multi-valued functions
Multi-valued functions from $A$ to $B$;
Subsets $\Gamma \subseteq A' \times B'$ such that $A' \subseteq A$, $B' \subseteq B$, the projection maps $A' \leftarrow \Gamma \to B'$ are surjective and, for all $a \in A'$, there exists at least one $b \in B'$ with $\langle a,b \rangle \in \Gamma$.
Equivalence relations on $A' \sqcup B'$ such that $A' \subseteq A$, $B' \subseteq B$, and each equivalence class contains at least one element of $A'$ and one element of $B'$. |
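A quick illustration (my own, not part of the original answer): take $A=B=\{1,2\}$ and the relation $R=\{(1,1),(1,2)\}$. Under ($1 \to 2$) this becomes the triple $(\{1\},\{1,2\},R)$, whose projection maps are surjective; under ($1 \to 3$) the cograph is the equivalence relation on $\{1\}\sqcup\{1,2\}$ with the single class $\{\iota_1(1),\iota_2(1),\iota_2(2)\}$; and applying ($3 \to 1$) recovers exactly $1\;R\;1$ and $1\;R\;2$.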
Is Choice an assumption or determined by category? | Whether or not choice holds may affect the properties a particular category has. For a simple example, in the category $Set$ of sets and functions every epimorphism admits a section if, and only if, the axiom of choice holds.
Also whether or not one can do certain constructions depends on whether or not choice is available. For instance, collecting all universal solutions for a given functor into a single left adjoint requires (often a very strong variant of) the axiom of choice.
There is also a question of chicken and egg: what comes first, the category one studies or the objects and arrows in it. Looking at it this way choice might be dictated by either wanting the category to have certain properties or by wanting the objects to have certain properties. |
Trigonometric equation and how to solve them in general | Notation: $\sin(nx)=s_n$, and similarly for cos and tan. Also let $t=\tan(x/2)=t_{1/2}$.
Then $$s_1=\frac{2t}{1+t^2}, c_1=\frac{1-t^2}{1+t^2}\tag1$$
Then each factor in your fractions can be reduced to being a function of just $c_1,s_1$, e.g:
$$s_4=2s_2c_2=4s_1c_1(2c_1^2-1)$$
$$1+c_4=2c_2^2=2(1-s_2^2)=2(1-4s_1^2c_1^2)$$
Using the above together with equations $(1)$ should get you to the answer.
Motivation behind $(1)$: these are the standard tangent half-angle (Weierstrass) substitution formulas. |
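The derivation of $(1)$ is standard (added here for completeness): with $t=\tan\frac x2$ and $\cos^2\frac x2=\frac1{1+t^2}$,
$$s_1=2\sin\tfrac x2\cos\tfrac x2=2\tan\tfrac x2\,\cos^2\tfrac x2=\frac{2t}{1+t^2},\qquad c_1=\cos^2\tfrac x2-\sin^2\tfrac x2=\cos^2\tfrac x2\,(1-t^2)=\frac{1-t^2}{1+t^2}.$$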
Power series identity | Your identity is a special case of Gauss's hypergeometric identity (see here for a proof),
$$\begin{align*}
\sum_{n=0}^\infty\frac{\prod_{j=0}^{n-1} (x+j)}{\prod_{j=0}^{n-1} (1+(j+1)t)}t^n&=\sum_{n=0}^\infty\frac{\prod_{j=0}^{n-1} (x+j)}{\prod_{j=0}^{n-1} \left(1+\frac1{t}+j\right)}\\
&=\sum_{n=0}^\infty \frac{(x)_n}{\left(1+\frac1{t}\right)_n}=\sum_{n=0}^\infty \frac{(x)_n (1)_n}{\left(1+\frac1{t}\right)_n}\frac1{n!}\\
&={}_2 F_1\left({{1,x}\atop{1+\frac1{t}}}\mid 1\right)=\frac{\Gamma\left(1+\frac1{t}\right)\Gamma\left(\frac1{t}-x\right)}{\Gamma\left(\frac1{t}\right)\Gamma \left(1+\frac1{t}-x\right)}
\end{align*}$$
where ${}_2 F_1\left({{a,b}\atop{c}}\mid z\right)$ is the Gaussian hypergeometric function, and since $\Gamma(1+z)=z\Gamma(z)$,
$$\require{cancel} {}_2 F_1\left({{1,x}\atop{1+\frac1{t}}}\mid 1\right)=\frac{\cancel{\Gamma\left(\frac1{t}\right)}\cancel{\Gamma\left(\frac1{t}-x\right)}}{t\left(\frac1{t}-x\right)\cancel{\Gamma\left(\frac1{t}\right)}\cancel{\Gamma \left(\frac1{t}-x\right)}}=\frac1{t\left(\frac1{t}-x\right)}=\frac1{1-xt}$$ |
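A quick numerical spot-check of the closed form (my own sketch with mpmath; the series converges at $z=1$ when $\frac1t-x>0$):

```python
import mpmath as mp

# Check 2F1(1, x; 1 + 1/t; 1) == 1/(1 - x t) for test values with 1/t - x > 0
x, t = mp.mpf('0.3'), mp.mpf('0.2')
lhs = mp.hyp2f1(1, x, 1 + 1/t, 1)
rhs = 1 / (1 - x * t)
print(lhs, rhs)   # both ~1.063829787..., agreeing to working precision
```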
Applicability of different formulas for Fisher's information | You just have a mistake in the second derivative
$$
\frac{\partial}{\partial \lambda } \left( - \frac{1}{\lambda} + \frac{X}{\lambda^2}\right) =
\frac{1}{\lambda ^ 2} - \frac{2X}{\lambda^3},
$$
hence,
$$
I_X = -E\left[\frac{1}{\lambda ^ 2} - \frac{2X}{\lambda^3}\right] = -\frac{1}{\lambda ^ 2} + \frac{2\lambda}{\lambda^3} = \frac{1}{\lambda^2}.
$$
The two definitions are equivalent under appropriate (technical) regularity conditions. See, e.g., Appendix E here for the list of the conditions.
This relates to the Cramer-Rao lower bound for estimator $T$ variance. If the ratio between the variance of the estimator $Var(T)$ and the bound is greater than $1$, the estimator $T$ is inefficient (in the sense of "risk" or variance). See here for further details: https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Rao_bound |
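A symbolic check of both definitions for this exponential model (my own sketch with sympy):

```python
import sympy as sp

lam = sp.Symbol('lambda', positive=True)
x = sp.Symbol('x', positive=True)

# Exponential density with mean lambda, matching the score -1/lambda + x/lambda^2
pdf = sp.exp(-x / lam) / lam
score = sp.diff(sp.log(pdf), lam)

I1 = sp.integrate(score**2 * pdf, (x, 0, sp.oo))              # E[(d log f / d lambda)^2]
I2 = -sp.integrate(sp.diff(score, lam) * pdf, (x, 0, sp.oo))  # -E[d^2 log f / d lambda^2]
print(sp.simplify(I1), sp.simplify(I2))                       # both equal 1/lambda**2
```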
Fréchet derivative of $\|Au-f\|^2$ | You have:
$$\|A(u+h)-f\|^2-\|Au-f\|^2= \langle Au-f,Ah \rangle + \langle Ah,Au-f \rangle + \|Ah\|^2\\
=\langle Ah,2(Au-f)\rangle + \|Ah\|^2 \text{ by symmetry of the (real) inner product}\\
= \langle h,2A^*(Au-f)\rangle + \|Ah\|^2 \text{ by the property of the adjoint}$$
Since $A$ is bounded, $\|Ah\|^2\le\|A\|^2\|h\|^2=o(\|h\|)$, so you have the result:
$$\|A(u+h)-f\|^2-\|Au-f\|^2=\langle df,h\rangle+o(\|h\|) $$
Hence you get your derivative:
$$ 2A^*(Au-f)$$ |
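A finite-dimensional sanity check (my own addition; in $\mathbb{R}^n$ the adjoint $A^*$ is just the transpose):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 4))
f = rng.normal(size=5)
u = rng.normal(size=4)

def F(u):
    r = A @ u - f
    return r @ r                    # ||Au - f||^2

grad = 2 * A.T @ (A @ u - f)        # the derived Frechet derivative (gradient)

# Compare with central finite differences
eps = 1e-6
fd = np.array([(F(u + eps * e) - F(u - eps * e)) / (2 * eps)
               for e in np.eye(4)])
print(np.max(np.abs(grad - fd)))    # tiny, e.g. ~1e-8
```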
Proving $\left|\frac{w-z}{1-\bar{w}z}\right|$ < 1 | Hint:
$$\left|\frac{w-z}{1-\overline wz}\right|<1\iff|w-z|^2<|1-\overline wz|^2\iff$$
$$\iff (w-z)(\overline w-\overline z)<(1-\overline wz)(1-w\overline z)\iff$$
$$\iff |w|^2+|z|^2-2\,\text{Re}\,(w\overline z)<1-2\,\text{Re}\,(w\overline z)+|w|^2|z|^2\iff$$
$$\ldots\text{last purely algebraic step yielding an obvious inequality...}$$ |
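For completeness (assuming, as is standard for this inequality, that $|w|<1$ and $|z|<1$), the last step is
$$|w|^2+|z|^2<1+|w|^2|z|^2\iff 0<(1-|w|^2)(1-|z|^2),$$
which is obvious since $|w|<1$ and $|z|<1$.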
If $L$ contains a $n$-th root of $a\in K$, why does $K$ already contain a $n$-th root? | Hint: You have shown that $a^m$ has an $n$th root in $K$. Also $a^n$ has an $n$th root in $K$.
Is the set of integers $\ell$ such that $a^\ell$ has an $n$th root in $K$ closed under ...?
Spoiler solution (for readers other than Sam, who solved his problem before this was added):
$G=\{a^\ell\mid \ell\in\mathbf{Z}\}\cap {K^\times}^n$ is a multiplicative group. As a subgroup of a cyclic group it has to be cyclic itself, i.e. generated by some $a^t,t>0$. We saw that $a^m,a^n\in G$, so $t\mid m$, $t\mid n$ and, since $\gcd(m,n)=1$, the only possibility is $t=1$.
Proof that stochastic process on infinite graph ends in finite step. | If you direct the edges then the problem becomes trivial.
Assume inductively that at time $i$ the only nodes that might be infected are $u_i$ and $v_i$.
Then at time $i+1$ nodes $u_{i+1}$ and $v_{i+1}$ may become infected by $u_i$ and $v_i$, but $u_i$ and $v_i$ become cured, and there is no way of reinfecting them from below.
Thus your argument is correct, and the probability that the process is alive at time $i$ is at most $q^i$.
Now, for any natural number $n$, if the process does not terminate after a finite number of levels, then the process must reach level $n$.
Therefore the probability that the process does not terminate is less than or equal to the probability that the process reaches level $n$.
So if $A$ is the event that the process survives indefinitely we have
$$0\leq\mathbb P(A) \leq q^n$$for every $n\in\mathbb N$. Therefore we must have $\mathbb P(A) = 0$. |
What is the distribution of X+Y when (X,Y) has bivariate normal distribution. | Multivariate normal distribution has the following property:
If $X:p\times 1\sim N_{p}(\mu,\Sigma)$, then for any vector $\alpha:p\times 1$ of constants, $\alpha'X\sim N_{1}(\alpha'\mu, \alpha'\Sigma\alpha)$.
So, the distribution is univariate normal, with $$E(X+Y)=\alpha'\mu$$ and $$Var(X+Y)=\alpha'\Sigma\alpha$$ where $\alpha'=(1,1)$. |
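Spelling this out for $\alpha'=(1,1)$ (added for concreteness): writing $\mu=(\mu_X,\mu_Y)$ and $\Sigma=\begin{pmatrix}\sigma_X^2&\sigma_{XY}\\\sigma_{XY}&\sigma_Y^2\end{pmatrix}$,
$$X+Y\sim N\!\left(\mu_X+\mu_Y,\;\sigma_X^2+\sigma_Y^2+2\sigma_{XY}\right).$$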
Asymptotic Behavior with continuity and sum | Without loss of generality we may assume that $f(x)\ge 0$ on $(1,\infty )$.
Note that in that case $f(x)$ is decreasing on $ (1,\infty)$ and $$ f(2)+f(3)+...+ f(n) \le \int _1 ^n f(x) dx \le f(1)+f(2) + f(3) +...+f(n-1)$$
As $n\to \infty $ we get $$\displaystyle \sum_{k=2} ^{\infty} f(k)\le \int _1 ^{\infty} f(x) dx \le\displaystyle \sum_{k=1} ^{\infty} f(k)$$
Thus the infinite sum and the integral have the same behaviour. |
Two problems about rings. | $a+a=(a+a)^2=a^2+a^2+a^2+a^2=a+a+a+a$, so $a+a=0$.
$a+b=(a+b)^2=a^2+ab+ba+b^2=a+ab+ba+b$, so $ab+ba=0$. Using the previous fact, $ab-ba=0$ so $ab=ba$. |
Proof of Lowenheim-Skolem theorem | Let’s take the second question first. The important thing is that $F(A)$ contains at most one element for each $\varphi\in\sigma$ and each finite tuple in $\bigcup_{n\in\omega}A^n$. Assuming that $A$ is infinite, $$\left|\bigcup_{n\in\omega}A^n\right|=|A|\;,$$
so $|F(A)|\le|\sigma|\cdot|A|=\max\{|\sigma|,|A|\}=|\sigma|+|A|$. The extra $\aleph_0$ term covers the case in which $\sigma$ and $A$ are both finite.
To see why $F^\omega$ is a closure operator, note first that $A\subseteq F(A)\subseteq F^2(A)\subseteq\ldots$. Now suppose that $b\in F\big(F^\omega(A)\big)$; then there are a $\varphi\in\sigma$ and $a_1,\dots,a_n\in F^\omega(A)$ such that $b=f_\varphi(a_1,\dots,a_n)$.
For $k=1,\dots,n$ there is an $F^{m_k}(A)$ such that $a_k\in F^{m_k}(A)$; let $m=\max\{m_1,\dots,m_n\}$. Then $\{a_1,\dots,a_n\}\subseteq F^m(A)$, so $b\in F\big(F^m(A)\big)=F^{m+1}(A)\subseteq F^\omega(A)$. Thus, $$F\big(F^\omega(A)\big)\subseteq F^\omega(A)\subseteq F\big(F^\omega(A)\big)\;,$$ and hence $F\big(F^\omega(A)\big)=F^\omega(A)$, i.e., $F^\omega$ is a closure operator. |
Proving that the only glide reflection which maps more than one line to itself on a sphere is an antipodal map | If a symmetry of the sphere fixes 2 lines, then it must either fix or interchange their 2 points of intersection. That should narrow down the possibilities. |
Determine whether the lines $L_1$ and $L_2$ are parallel, skew, or intersecting. If they intersect, find the point of intersection. | Equate both lines' equations to parameters $t,u$:
$$\frac{x-2}1=\frac{y-3}{-2}=\frac{z-1}{-3}=t$$
$$\frac{x-3}1=\frac{y+4}3=\frac{z-2}{-7}=u$$
Thus
$$L_1=(2,3,1)+t(1,-2,-3)$$
$$L_2=(3,-4,2)+u(1,3,-7)$$
Since the direction vector $(1,-2,-3)$ is not a scalar multiple of $(1,3,-7)$, the lines are not parallel. To see if they intersect, equate the first two coordinates and solve for $t,u$:
$$t+2=u+3\implies t-u=1$$
$$-2t+3=3u-4\implies 2t+3u=7$$
from which we get $t=2,u=1$. Now substitute into the equation for the third coordinate:
$$-3(2)+1=-5=-7(1)+2$$
So the two lines intersect, and their point of intersection is $(4,-1,-5)$. |
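A quick symbolic check (my own addition, using sympy):

```python
import sympy as sp

t, u = sp.symbols('t u')
L1 = sp.Matrix([2, 3, 1]) + t * sp.Matrix([1, -2, -3])
L2 = sp.Matrix([3, -4, 2]) + u * sp.Matrix([1, 3, -7])

# Solve all three coordinate equations simultaneously
sol = sp.solve(list(L1 - L2), [t, u], dict=True)
print(sol)                  # [{t: 2, u: 1}]
print(L1.subs(sol[0]).T)    # Matrix([[4, -1, -5]])
```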
Proving an identity for complete homogenous symmetric polynomials | I don't know if there is a name for this expansion, but this is the complete homogenous symmetric polynomial.
Edit: Here's a proof that uses partial fraction decomposition.
First note $$h_k = \sum_{k_1,k_2,\dots,k_n\ge0}^{\sum_{i=1}^n k_i=k}\prod_i a_i^{k_i}$$ has the generating function $$\sum_{k=0}^\infty h_k t^k = \prod_{i=1}^n \frac{1}{1 - a_it}.$$ So we need to find a way to extract the coefficient of $t^k$ from the right-hand side.
Assuming none of the $a_i$'s are equal, this is a rational function in $t$ with $n$ distinct roots, so we can apply a partial fraction decomposition:
$$\prod_{i=1}^n \frac{1}{1 - a_it} = \sum_{i=1}^n \frac{c_i}{1 - a_it}$$
for some $c_i$ to be determined. Multiplying through by the denominator of the left gives
$$1 = \sum_{i=1}^n c_i\prod_{j \neq i} (1 - a_jt).$$
To find the $c_i$, set $t = 1/a_i$. Then each term vanishes except for the one with $c_i$. This gives
$$1 = c_i\prod_{j \neq i} (1 - a_j/a_i)$$
$$ c_i = \frac{1}{\prod_{j \neq i} (1 - a_j/a_i)} = \frac{a_i^{n-1}}{\prod_{j \neq i} (a_i - a_j)}$$
Then $$\sum_{k=0}^\infty h_k t^k = \sum_{i=1}^n \frac{c_i}{1 - a_it} = \sum_{i=1}^n \sum_{k=0}^\infty c_i a_i^k t^k$$
and so the coefficient of $t^k$ is
$$h_k = \sum_{i=1}^n c_i a_i^k = \sum_{i=1}^n \frac{a_i^{n+k-1}}{\prod_{j \neq i} (a_i - a_j)}$$
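A quick symbolic check of the identity for small $n$ and $k$ (my own sketch with sympy):

```python
import itertools
import sympy as sp

a = sp.symbols('a1:4')      # three distinct indeterminates a1, a2, a3
n, k = len(a), 4

# h_k: sum of all monomials of total degree k in the a_i
h_k = sum(sp.prod(c) for c in itertools.combinations_with_replacement(a, k))
rhs = sum(ai**(n + k - 1) / sp.prod(ai - aj for aj in a if aj != ai)
          for ai in a)
print(sp.simplify(h_k - rhs))   # 0
```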
Nice work finding this pretty identity, btw! I am sure it is 'well-known' but I had not seen it. |
About $C^{0}$ being topological manifold | No, $C^0$ denotes the ring of continuous functions, as opposed to $C^k$, $k\ge 1$, which denotes functions with up to $k$th order derivatives continuous. |
high-water mark distribution | Consider the cycle representation of a permutation (including the trivial one-element cycles). In each cycle, bring the greatest element to the front. Then order the cycles by these greatest elements, smallest first. If you now write down this cycle representation, ignore the parentheses and consider the resulting string as representing a permutation, precisely the greatest elements of the cycles are high-water marks in this permutation.
This establishes a bijection of the symmetric group with itself that maps between numbers of cycles and numbers of high-water marks. Thus the distribution of the number of high-water marks is the distribution of the number of cycles. This is given by the unsigned Stirling numbers of the first kind. |
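A brute-force check of the distributional claim for small $n$ (my own sketch, taking "high-water mark" to mean a left-to-right maximum):

```python
import itertools
from collections import Counter

def high_water_marks(perm):
    """Count left-to-right maxima (the 'high-water marks') of a permutation."""
    best, count = 0, 0
    for x in perm:
        if x > best:
            best, count = x, count + 1
    return count

def num_cycles(perm):
    """Number of cycles of a permutation given in one-line notation on 1..n."""
    seen, count = set(), 0
    for start in range(1, len(perm) + 1):
        if start not in seen:
            count += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j - 1]
    return count

n = 5
perms = list(itertools.permutations(range(1, n + 1)))
print(Counter(map(high_water_marks, perms)) == Counter(map(num_cycles, perms)))  # True
```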
Prove that $\exists X_0\in \mathbb{R}$ such that $\bigcap_{n=1}^{\infty}I_{n}=\{X_{0}\}$ | $I_{n+1}\subset I_n$ means $[a_{n+1},b_{n+1}]\subset[a_n,b_n]$ and thus $a_{n+1}\geqslant a_n$ and $b_{n+1}\leqslant b_n$ for all $n\geqslant 1$. $(a_n)_{n\geqslant 1}$ is an increasing sequence, $(b_n)_{n\geqslant 1}$ is a decreasing sequence, thus $(b_n-a_n)_{n\geqslant 1}$ is a decreasing sequence, and its limit is $\inf_{n\geqslant 1}(b_n-a_n)=0$.
Thus the sequences $(a_n)$ and $(b_n)$ are adjacent and converge towards the same limit $\ell$, and $\ell$ is such that $a_n\leqslant\ell\leqslant b_n$ for all $n\geqslant 1$; this means that $\displaystyle\ell\in\bigcap_{n\geqslant 1}I_n$ and thus the intersection is not empty.
Now if $\displaystyle\ell,\ell'\in\bigcap_{n\geqslant 1}I_n$, then $\ell,\ell'\in I_n$ for all $n\geqslant 1$ and thus $|\ell-\ell'|\leqslant b_n-a_n$ for all $n\geqslant 1$. Letting $n\rightarrow +\infty$ gives $\ell=\ell'$. |
Minimal boolean matrix coverage | You can solve this as a set partitioning (also known as exact cover) problem as follows. For each pseudo-rectangle $R$ that does not contain a $0$ cell ($M_{i,j}=0$), let binary decision variable $x_R$ indicate whether $R$ is selected. The problem is to minimize $\sum_R x_R$ subject to
$$\sum_{R: (i,j) \in R} x_R = 1 \quad \text{for all $(i,j)$ where $M_{i,j}=1$}$$ |
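For a concrete sketch (my own, not from the answer), the model can be written with the PuLP library. Here I assume, purely for illustration, that a "pseudo-rectangle" means an axis-aligned all-ones rectangle; the enumeration step would be adapted to the actual definition:

```python
from itertools import product
import pulp

M = [[1, 1, 0],
     [1, 1, 1]]
rows, cols = len(M), len(M[0])
ones = {(i, j) for i, j in product(range(rows), range(cols)) if M[i][j] == 1}

# Enumerate candidate rectangles containing only 1-cells
rects = []
for i1, j1 in product(range(rows), range(cols)):
    for i2, j2 in product(range(i1, rows), range(j1, cols)):
        cells = frozenset(product(range(i1, i2 + 1), range(j1, j2 + 1)))
        if cells <= ones:
            rects.append(cells)

prob = pulp.LpProblem("min_exact_cover", pulp.LpMinimize)
x = {r: pulp.LpVariable(f"x{k}", cat="Binary") for k, r in enumerate(rects)}
prob += pulp.lpSum(x.values())                                # minimize rectangle count
for cell in ones:
    prob += pulp.lpSum(x[r] for r in rects if cell in r) == 1  # exact cover of each 1-cell
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(int(pulp.value(prob.objective)))                        # 2 for this toy matrix
```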
Why characters are continuous | A character is a linear functional with some additional properties ($\varphi(a\cdot b) = \varphi(a)\varphi(b)$ and $\varphi(1) = 1$).
Then the fact that a linear functional $\lambda \colon E \to \mathbb{K}$ on a topological vector space $E$ over $\mathbb{K}\in \{\mathbb{C},\mathbb{R}\}$ is continuous if and only if its kernel is closed yields the continuity of characters (assuming the closedness of maximal ideals to have been established beforehand).
Since the singleton set $\{0\}$ is closed in $\mathbb{K}$, the closedness of $\ker\lambda = \lambda^{-1}(\{0\})$ is evidently necessary for the continuity of $\lambda$.
Conversely, if $\ker\lambda$ is closed, either $\lambda \equiv 0$, and $\lambda$ is continuous, or - since $\dim_{\mathbb{K}} \mathbb{K} = 1$, $\lambda$ is surjective or identically $0$ - there is an $a \in \lambda^{-1}(\{1\})$. Then $\lambda^{-1}(\{1\}) = a + \ker\lambda$ is closed (translations are homeomorphisms), and thus there is a neighbourhood $U$ of $0$ with $U \cap \lambda^{-1}(\{1\}) = \varnothing$. In a topological vector space, the balanced neighbourhoods of $0$ - the neighbourhoods $V$ of $0$ with $t\cdot V \subset V$ for all $t\in \mathbb{K}$ with $\lvert t\rvert \leqslant 1$ - form a neighbourhood basis of $0$, hence we can without loss of generality assume that $U$ is balanced. Then it follows that $\lambda(U) \subset D_1(0) = \{t \in \mathbb{K} : \lvert t\rvert < 1\}$, for if we had $\lvert\lambda(u)\rvert \geqslant 1$ for some $u\in U$, then $\lvert \lambda(u)^{-1}\rvert \leqslant 1$, and hence $\lambda(u)^{-1}\cdot u \in U$ by balancedness, and $\lambda(u)^{-1}\cdot u \in U \cap \lambda^{-1}(\{1\})$ is a contradiction.
This shows that $\lambda$ is continuous at $0$, and since it is linear, continuous everywhere.
The equivalence of having a closed kernel and being continuous does not carry over to general (ring) homomorphisms $f \colon R \to S$ where $R,S$ are topological rings (modules, groups ...). On the one hand, $S$ need not be Hausdorff, and then $\{0\}$ is not closed, whence $\ker f$ has no reason to be closed if $f$ is continuous (the identity certainly is continuous if we take the same topology on domain and codomain, and its kernel is closed if and only if the ring, module, group ... is Hausdorff). On the other hand, if we have two different Hausdorff topologies $\tau_1,\tau_2$ (compatible with the algebraic structure) on a ring, module, group ... $R$, then $\operatorname{id} \colon (R,\tau_1) \to (R,\tau_2)$ is not continuous unless $\tau_1$ is strictly finer than $\tau_2$, in which case $\operatorname{id} \colon (R,\tau_2)\to (R,\tau_1)$ is not continuous. Yet its kernel is $\{0\}$, which is closed by the Hausdorff property.
The equivalence of being continuous and having a closed kernel is particular to linear maps of $\mathbb{K}$ vector spaces with a finite-dimensional range (that carries a Hausdorff vector space topology). (There may be some other situations where the closedness of the kernel alone is enough to deduce continuity, but generally it is not.) |
How does the surface of a sphere break the parallel postulate? | One thing to keep in mind is that line segments on the surface of a sphere do not work the same way as the Euclidean metric in space. For instance, the shortest distance along the surface from Singapore to San Francisco passes through Tokyo, which is not what you would expect at all from looking at the Pacific Ocean on a rectangular map.
So lines on a sphere are "great circles", which represent circles in space whose center is the center of the sphere. Thinking of the globe again, all longitudinal lines are great circles, but the Equator is the only latitudinal line that is a great circle. Based on that definition of lines in the sphere, it is not hard to convince yourself that any two distinct lines on a sphere must intersect in two places. |
Check the validity of the characterization of irreflexive kernel of $\mathcal R$ | Since $\mathcal R \setminus \text{id}_A$ is an irreflexive relation contained in $\mathcal R$, and $\mathcal R^{\ne}$ is
the largest irreflexive relation contained in $\mathcal R$, we have $\mathcal R \setminus \text{id}_A\subset \mathcal R^{\ne}$.
On the other hand, since $\mathcal R^{\ne}$ is irreflexive, $\mathcal R^{\ne}\subset (A\times A)\setminus \text{id}_A$.
Since $\mathcal R^{\ne}\subset \mathcal R$, it follows $\mathcal R^{\ne}\subset \mathcal R\setminus \text{id}_A$.
We conclude that $\mathcal R^{\ne}=\mathcal R \setminus \text{id}_A $. |
Isomorphism onto $\mathbb{F}_p^{\ast}$, p-adic integers | Any homomorphism of rings $f : R \to S$ gives rise to a homomorphism $R^{\times} \to S^{\times}$ between their unit groups. In particular, the quotient map $f : \mathbb{Z}_p \to \mathbb{F}_p$ (with kernel $p \mathbb{Z}_p$) gives rise to a homomorphism
$$\mathbb{Z}_p^{\times} \to \mathbb{F}_p^{\times}.$$
This homomorphism takes a unit $a_0 + a_1 p + \dots \in \mathbb{Z}_p^{\times}$ and sends it to $a_0 \bmod p$. Now the problem reduces to showing that
this homomorphism is surjective, and
its kernel is $1 + p \mathbb{Z}_p$.
A nice way to show that this homomorphism is surjective is to show that in fact it has a right inverse sending an element in $\mathbb{F}_p^{\times}$ to its Teichmüller representative, but you don't need to do it this way. Computing the kernel is pretty straightforward. |
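For a computational illustration (my own sketch): the Teichmüller representative of $a \bmod p$ can be computed modulo $p^k$ by iterating $x \mapsto x^p$, which gains one $p$-adic digit of accuracy per step:

```python
def teichmuller(a, p, k):
    """Teichmuller representative of a (mod p), computed mod p**k via x -> x**p."""
    m = p ** k
    x = a % m
    for _ in range(k):
        x = pow(x, p, m)
    return x

p, k = 7, 6
for a in range(1, p):
    w = teichmuller(a, p, k)
    # w reduces to a mod p and is a (p-1)-st root of unity mod p**k
    assert w % p == a and pow(w, p - 1, p ** k) == 1
print([teichmuller(a, p, k) for a in range(1, p)])
```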
Decomposition of ideal $I=(x^{2}-yz,xz-x)$ | The first ideal is the same as $(yz, x)$. Thinking about the corresponding geometric set of points (scheme, variety, whatever), this is $x=0$ and $yz=0$. That is, the union of two lines in the $x=0$ plane. |
Idempotent Elements and Isomorphisms | For a commutative ring $R$ to be isomorphic to a product of rings, there has to exist a non-trivial idempotent $e \in R$ and then one has an isomorphism $R \cong Re \times R (1 - e) $ given by $r \mapsto (r e, r (1-e) )$.
So let us have a look at the idempotents of $R = \mathbb{Z}/6\mathbb{Z}$. The nontrivial ones are given by the cosets $e_1 = 3 + 6\mathbb{Z}$ (as $3^2 = 9 \equiv 3$ mod $6$ ) and $e_2 = 4 + 6\mathbb{Z}$ (as $4^2 = 16 \equiv 4$ mod $6$). Also $e_1 + e_2 = 1$ in $\mathbb{Z}/6\mathbb{Z}$.
We thus have an isomorphism $R \cong R e_1 \times R e_2$.
Note that we have $$R e_1 = \{0 + 6 \mathbb{Z}, 3 + 6\mathbb{Z}\} \cong \mathbb{Z}/2 \mathbb{Z}$$ and $$R e_2 = \{0 + 6\mathbb{Z}, 2 + 6 \mathbb{Z}, 4 + 6\mathbb{Z}\} \cong \mathbb{Z}/3 \mathbb{Z}.$$ |
Find derivative of $\ln{\frac{\sqrt{1-\sin x}}{\sqrt{1 + \sin x}}}$ | $f(x) = \log{\frac{\sqrt{1-\sin x}}{\sqrt{1 + \sin x}}} = \frac 12 [ \log{(1 - \sin x)} - \log{(1 + \sin x)} ]$
Differentiating the RHS with respect to $x$, we get
$f'(x) = \dfrac{1}{2} \left(\dfrac{-\cos(x)}{1 - \sin(x)} - \dfrac{\cos(x)}{1 + \sin(x)}\right)$ |
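This can be simplified further (added step):
$$f'(x)=-\frac{\cos x}{2}\left(\frac{1}{1-\sin x}+\frac{1}{1+\sin x}\right)=-\frac{\cos x}{2}\cdot\frac{2}{1-\sin^2 x}=-\frac{1}{\cos x}=-\sec x.$$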
I need to prove that the process $X_{t} = \exp(\lambda W_{t} - \frac{1}{2}\lambda^{2}t)$ is a martingale | As suggested in a comment above, note that for $s<t,$
$$\begin{align*}
E(X_t|F_s)&=E\left[\exp\left(\lambda(W_t-W_s)+\lambda W_s-\frac{1}{2}\lambda^2t\right)|F_s\right]\\
&=\exp(\lambda W_s-\frac{1}{2}\lambda^2t)E\left[\exp(\lambda(W_t-W_s))|F_s\right].
\end{align*}$$
Now note that $W_t-W_s\sim N(0, t-s)\sim \sqrt{t-s}N(0, 1)$ and is independent of $F_s.$
Therefore, $E\left[\exp(\lambda(W_t-W_s))|F_s\right]=E\left[\exp(\lambda\sqrt{t-s}\,N(0, 1))\right].$
The problem reduces to showing that the exponential moment
$$E[\exp(\mu N(0, 1))]=\exp\left(\frac{1}{2}\mu^2\right).$$ |
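Granting this (the standard Gaussian moment generating function), with $\mu=\lambda\sqrt{t-s}$ one gets, for completeness,
$$E(X_t|F_s)=\exp\left(\lambda W_s-\frac{1}{2}\lambda^2t\right)\exp\left(\frac{1}{2}\lambda^2(t-s)\right)=\exp\left(\lambda W_s-\frac{1}{2}\lambda^2s\right)=X_s.$$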