Expressing a Vector as a Sum | Judging by the "$:=$" symbol, this is a definition of a new notation that they've just made up for use later in the text. So they're saying that from now on $\beta_1^{\rho}$ denotes the point defined by the right-hand side. And the expression on the right, when taken over all $\rho\in[0,1]$, represents the segment from $\beta_1$ to $\beta_2$. |
Show that if $T$ satisfies $T^*=-T$ then any eigenvalue of $T$ satisfies $\lambda^*=-\lambda.$ | Note that $\langle x, Tx \rangle = \langle T^*x, x \rangle = -\langle Tx, x \rangle =-\overline{\langle x, Tx \rangle}$, hence
$\langle x, Tx \rangle$ is purely imaginary for any $x$. In particular, if $Tx=\lambda x$ with $x\neq 0$, then $\langle x,Tx\rangle=\lambda\|x\|^2$ is purely imaginary, so $\lambda^*=-\lambda$. |
Construct dependent random variables which converge to standard normal | Consider $(X_1,X_1,X_2,X_3,\ldots)$ where $\{X_n\}$ is i.i.d. with standard normal distribution. This sequence is not independent. [See my comment above.] |
Cyclic Subgroup of Order 2 | The easiest one is in $\mathbb{Z}_4$. The element 2 generates the subgroup $\{2,0\}$, and it's the only one, since all the other elements have order 4 or 1.
More generally, every $\mathbb{Z}_{2d}$ has this property, since the element $d$ generates $\{0,d\}$.
If $G$ is a finite group of odd order, then $\mathbb{Z}_{2}\times G$ also works.
Switching to theory, we have that $a$ must be the only element in $G$ with order $2$, so it is invariant under conjugation. It means that $H=\{e,a\}$ is a normal subgroup of $G$.
In fact, if $H$ is moreover a $2$-Sylow subgroup of $G$ (i.e. $|G|=2m$ with $m$ odd), then $G=H\times G'$ for some subgroup $G'$ of odd order. |
Finding the residues of the following poles | One may write, as $z \to n$, $n \in \mathbb{Z}$,
$$
\begin{align}
\frac1{\sin(\pi z)}&=\frac1{(-1)^n\pi(z-n)+O((z-n)^3)}
\\\\&=\frac{(-1)^n}{\pi(z-n)}\frac1{1+O((z-n)^2)}
\\\\&=\frac{(-1)^n}{\pi(z-n)}+O(z-n),
\end{align}
$$ giving
$$
\begin{align}
\frac{z^2-4z+4}{\sin(\pi z)}&=(z^2-4z+4)\left(\frac{(-1)^n}{\pi(z-n)}+O(z-n) \right)
\\\\&=((z-n)^2-(2n-4)(z-n)+n^2-4n+4)\left(\frac{(-1)^n}{\pi(z-n)}+O(z-n) \right)
\\\\&=\frac{(-1)^n(n^2-4n+4)}{\pi(z-n)}+O(1),
\end{align}
$$ thus the residue is
$$
\text{Res}(\phi;n)=\frac{(-1)^n(n^2-4n+4)}{\pi}.
$$ |
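For those who want to double-check the formula, here is a small sympy sketch (not part of the original answer) comparing sympy's residue computation with the closed form at a few integer poles:

import sympy as sp

z = sp.symbols('z')
phi = (z**2 - 4*z + 4) / sp.sin(sp.pi * z)

# Compare the computed residue with (-1)^n (n^2 - 4n + 4) / pi at a few poles
for n in range(-2, 3):
    computed = sp.residue(phi, z, n)
    formula = (-1)**n * (n**2 - 4*n + 4) / sp.pi
    print(n, computed, sp.simplify(computed - formula))  # difference should be 0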
Show that $y =e-x^2e^x$ has an $x$-intercept at $x = 1$ | You are making this harder than it needs to be. You don't need to solve $y =e-x^2e^x$ for $y=0$. You have been given that the x-intercept is 1, so you simply need to verify that statement. In other words, just plug $x=1$ into the equation and verify that it gives $y=0$.
$$y = e-x^2e^x$$
Let $x=1$
$$y = e-1^2e^1$$
$$y = e-e = 0$$
So 1 is the x-intercept.
Here's some more info about this equation for those who are interested.
From the plot of the equation it appears that there's only one x-intercept. We can get some more info about the behaviour of the function by using a little calculus and taking its derivative.
$$\begin{align}
y & = e-x^2e^x\\
y' & = -x^2e^x -2xe^x\\
& = -(x^2 + 2x)e^x\\
y' & = -x(x + 2)e^x\\
\end{align}$$
We get stationary points (minima, maxima, or points of inflexion) when $y'$ is zero. $e^x$ is not zero for finite $x$, so the only solutions for $y'=0$ are $x=0$ or $x=-2$. With some further analysis, it can be shown that there's a minimum at $x=-2$, with $y=e-4e^{-2}$, and a maximum at $x=0$, with $y=e$.
It is actually possible to invert $y =e-x^2e^x$. It's not possible using elementary functions, but it can be done using the Lambert W function (aka the omega function).
This function is defined to be the inverse function of $f(x)=xe^x$. In other words, if $y = xe^x$, then $x = W(y)$. As the Wikipedia article mentions, Lambert W is defined over the complex plane, and it's actually a family of functions because there is generally not a single inverse.
Here's how we use Lambert W to invert the given function.
$$\begin{align}
y & = e - x^2 e^x\\
e - y & = x^2 e^x\\
\sqrt{e - y} & = x e^{\frac{x}{2}}\\
\frac{\sqrt{e - y}}{2} & = \frac{x}{2} e^{\frac{x}{2}}\\
W\left(\frac{\sqrt{e - y}}{2}\right) & = \frac{x}{2}\\
x & = 2 W\left(\frac{\sqrt{e - y}}{2}\right)
\end{align}$$
When using this equation we need to choose which branch of $W$ we want, and we also need to specify whether we want the positive or negative square root.
Many advanced mathematics libraries provide the Lambert W. Here's a short Python 3 demo using the mpmath library.
from mpmath import mp

# Use 50 decimal digits of precision
mp.dps = 50
# Print with 10 digits of precision
out_prec = 10

def func(x):
    return mp.e - x * x * mp.exp(x)

def inv_func(y, sign, k):
    return 2 * mp.lambertw(sign * mp.sqrt(mp.e - y) / 2, k=k)

r = mp.mpf('.1')
for i in range(-25, 15):
    x = i * r
    y = func(x)
    # Determine which square root and branch of the Lambert W
    # function we need to get back the x we started with
    if x < -2:
        sign, k = -1, -1
    elif -2 <= x < 0:
        sign, k = -1, 0
    else:
        sign, k = 1, 0
    xx = inv_func(y, sign, k)
    print(x, mp.nstr(y, n=out_prec), mp.nstr(xx, n=out_prec))
output
-2.5 2.205250587 -2.5
-2.4 2.195746418 -2.4
-2.3 2.187912545 -2.3
-2.2 2.181994542 -2.2
-2.1 2.17824898 -2.1
-2.0 2.176940696 -2.0
-1.9 2.178339113 -1.9
-1.8 2.182713431 -1.8
-1.7 2.190326444 -1.7
-1.6 2.201426742 -1.6
-1.5 2.216238968 -1.5
-1.4 2.234951779 -1.4
-1.3 2.257703098 -1.3
-1.2 2.284562163 -1.2
-1.1 2.315507817 -1.1
-1.0 2.350402387 -1.0
-0.9 2.388960404 -0.9
-0.8 2.430711291 -0.8
-0.7 2.47495503 -0.7
-0.6 2.520709639 -0.6
-0.5 2.566649164 -0.5
-0.4 2.611030621 -0.4
-0.3 2.651608189 -0.3
-0.2 2.685532598 -0.2
-0.1 2.709233454 -0.1
0.0 2.718281828 0.0
0.1 2.707230119 0.1
0.2 2.669425718 0.2
0.3 2.596794536 0.3
0.4 2.479589877 0.4
0.5 2.306101511 0.5
0.6 2.06231906 0.6
0.7 1.731543002 0.7
0.8 1.293935634 0.8
0.9 0.7260033084 0.9
1.0 0.0 1.0
1.1 -0.9167590605 1.1
1.2 -2.06268654 1.2
1.3 -3.48282954 1.3
1.4 -5.229910107 1.4 |
What is the algebra to rearrange the constant acceleration formula? | You know that
$$d = \frac{1}{2}gt^2$$
Solve for $t$.
\begin{align*}
d & = \frac{1}{2}gt^2\\
2d & = gt^2\\
\frac{2d}{g} & = t^2\\
\sqrt{\frac{2d}{g}} & = t
\end{align*}
Substitute for $d$ and $g$ to obtain
$$t = \sqrt{\frac{2 \cdot 4~\text{ft}}{32~\frac{\text{ft}}{\text{s}^2}}} = \sqrt{8~\text{ft} \cdot \frac{1}{32}~\frac{\text{s}^2}{\text{ft}}} = \sqrt{\frac{1}{4}~\text{s}^2} = \frac{1}{2}~\text{s}$$ |
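If you want to check the algebra with a computer algebra system, here is a tiny sympy sketch (not part of the original answer); the positivity assumptions pick out the physical root:

import sympy as sp

t, d, g = sp.symbols('t d g', positive=True)

# Solve d = (1/2) g t^2 for t; positivity discards the negative root
sol = sp.solve(sp.Eq(d, sp.Rational(1, 2) * g * t**2), t)[0]
print(sol)                      # equivalent to sqrt(2*d/g)
print(sol.subs({d: 4, g: 32}))  # 1/2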
Calculating in closed form $\int _0^1\int _0^1\frac{1}{1+x y (x+y)} \ dx \ dy$ | By symmetry (i.e. by exploiting the fact that our integral is twice the integral over the sub-region $0\leq y\leq x\leq 1$) we just have to compute:
$$ I=2 \int_{0}^{1}\int_{0}^{1}\frac{x}{1+x^3 y(1+y)}\,dy\,dx = 2 \int_{0}^{1}\int_{0}^{2}\frac{x}{(1+x^3y)\sqrt{1+4y}}\,dy\,dx$$
Integrating with respect to $x$ first,
$$\begin{eqnarray*} I &=& 2\int_{0}^{2}\frac{2 \sqrt{3}\arctan\left(\frac{\sqrt{3}}{1-2y^{1/3}}\right)-2\log\left(1+y^{1/3}\right)+\log\left(1-y^{1/3}+y^{2/3}\right)}{6 y^{2/3}\sqrt{1+4y}}\,dy\\&=&\int_{0}^{2^{1/3}}\frac{2\sqrt{3}\arctan\left(\frac{\sqrt{3}}{1-2y}\right)-3\log(1+y)+\log(1+y^3)}{\sqrt{1+4y^3}}\,dy\end{eqnarray*}$$
but the resulting integrals in just one variable do not look so appealing.
Am I missing some crucial simplification that follows from replacing $y$ with a Jacobi elliptic function (maybe $\text{dn}$) or with the Weierstrass elliptic function $\wp(z)$ (corresponding to $g_2=0,g_3=-1$) then exploiting some weird/mystical product formulas? |
If two continuous maps of an interval commute, then they agree at some point | This answer was inspired by the one of @PaulSinclair.
Let us assume that the claim is false, so that $f\left(x\right)\neq g\left(x\right)$
for all $x\in I:=\left[0,1\right]$. By swapping $f,g$, we can assume
$f\left(0\right)<g\left(0\right)$. If we had $f\left(x\right)\geq g\left(x\right)$
for some $x\in I$, the intermediate value theorem (applied to $f-g$)
would yield the claim, in contradiction to our assumption. Hence,
$f\left(x\right)<g\left(x\right)$ for all $x\in I$.
By continuity and compactness of $I$, there is $\varepsilon>0$ with $\varepsilon+f\left(x\right)\leq g\left(x\right)$
for all $x\in I$. In particular $g\left(x\right)\geq\varepsilon$
for all $x\in I$.
By induction, we show $g^{n}\left(I\right)\subset\left[n\varepsilon,\infty\right)$
for all $n\in\mathbb{N}$. For $n=1$, we just showed that this holds.
Now, assume $g^{n}\left(I\right)\subset\left[n\varepsilon,\infty\right)$
and let $x\in I$ be arbitrary. We have
\begin{eqnarray*}
g^{n+1}\left(x\right) & = & g\left(g^{n}\left(x\right)\right)\\
& \geq & \varepsilon+f\left(g^{n}\left(x\right)\right)\\
& \overset{f\circ g=g\circ f}{=} & \varepsilon+g^{n}\left(f\left(x\right)\right)\\
& \overset{\text{induction}}{\geq} & \varepsilon+n\varepsilon\\
& = & \left(n+1\right)\varepsilon
\end{eqnarray*}
and thus $g^{n+1}(I) \subset [(n+1)\varepsilon, \infty)$.
For $n$ large enough, this contradicts $g\left(I\right)\subset I$.
Hence, the claim must hold. |
Axiom of choice and automorphisms of vector spaces | Nov. 6th, 2011 After several long months a post on MathOverflow pushed me to reconsider this math, and I have found a mistake. The claim is still true, as shown by Läuchli $\small[1]$; however, despite trying to do my best to understand the argument for this specific claim, it eluded me for several days. I then proceeded to construct my own proof, this time error-free - or so I hope. While at it, I am revising the writing style.
Jul. 21st, 2012 While reviewing this proof again it was apparent that its most prominent use in generating such space over the field of two elements fails, as the third lemma implicitly assumed $x+x\neq x$. Now this has been corrected and the proof is truly complete.
$\newcommand{\sym}{\operatorname{sym}}
\newcommand{\fix}{\operatorname{fix}}
\newcommand{\span}{\operatorname{span}}
\newcommand{\im}{\operatorname{Im}}
\newcommand{\Id}{\operatorname{Id}}
$
I got it! The answer is that you can construct such vector space.
I will assume that you are familiar with ZFA and the construction of permutation models; references can be found in Jech's Set Theory $\small[2, \text{Ch}. 15]$ as well as in The Axiom of Choice $\small{[3]}$. Any questions are welcome.
Some notation: for $x\in V$, which is assumed to be a model of ZFC+Atoms:
$\sym(x) =\{\pi\in\mathscr{G} \mid \pi x = x\}$, and
$\fix(x) = \{\pi\in\mathscr{G} \mid \forall y\in x:\ \pi y = y\}$
Definition: Suppose $G$ is a group, $\mathcal{F}\subseteq\mathcal{P}(G)$ is a normal subgroups filter if:
$G\in\mathcal{F}$;
If $H,K$ are subgroups of $G$ such that $H\subseteq K$, then $H\in\mathcal{F}$ implies $K\in\mathcal{F}$;
If $H,K$ are subgroups of $G$ such that $H,K\in\mathcal{F}$, then $H\cap K\in\mathcal{F}$;
$\{1\}\notin\mathcal{F}$ (non-triviality);
For every $H\in\mathcal{F}$ and $g\in G$, $g^{-1}Hg\in\mathcal{F}$ (normality).
Now consider the normal subgroups-filter $\mathcal{F}$ to be generated by the subgroups $\fix(E)$ for $E\in I$, where $I$ is an ideal of sets of atoms (closed under finite unions, intersections and subsets).
Basics of permutation models:
A permutation model is a transitive subclass of the universe $V$ such that for every ordinal $\alpha$, we have $x\in\mathfrak{U}\cap V_{\alpha+1}$ if and only if $x\subseteq\mathfrak{U}\cap V_\alpha$ and $\sym(x)\in\mathcal{F}$.
The latter property is known as being symmetric (with respect to $\mathcal{F}$) and $x$ being in the permutation model means that $x$ is hereditarily symmetric. (Of course at limit stages take limits, and start with the empty set)
If $\mathcal{F}$ was generated by some ideal of sets $I$, then if $x$ is symmetric with respect to $\mathcal{F}$ it means that for some $E\in I$ we have $\fix(E)\subseteq\sym(x)$. In this case we say that $E$ is a support of $x$.
Note that if $E$ is a support of $x$ and $E\subseteq E'$ then $E'$ is also a support of $x$, since $\fix(E')\subseteq\fix(E)$.
Lastly, if $f$ is a function in $\mathfrak{U}$ and $\pi$ is a permutation in $\mathscr{G}$ then $\pi(f(x)) = (\pi f)(\pi x)$.
Start with $V$ a model of ZFC+Atoms, assuming there are infinitely (countably should be enough) many atoms. $A$ is the set of atoms, endow it with operations that make it a vector space over a field $\mathbb{F}$ (If we only assume countably many atoms, we should assume the field is countable too. Since we are interested in $\mathbb F_2$ this assertion is not a big hassle). Now consider $\mathscr{G}$ the group of all linear automorphisms of $A$, each can be extended uniquely to an automorphism of $V$.
Now consider the normal subgroups-filter $\mathcal{F}$ generated by the subgroups $\fix(E)$ for $E\in I$, where $I$ is the ideal of finite sets of atoms. Note that since all the permutations are linear, they extend uniquely to $\span(E)$. In the case where $\mathbb F$, our field, is finite, so is this span.
Let $\mathfrak{U}$ be the permutation model generated by $\mathscr{G}$ and $\mathcal{F}$.
Lemma I: Suppose $E$ is a finite set, and $u,v$ are two vectors such that $v\notin\span(E\cup\{u\})$ and $u\notin\span(E\cup\{v\})$ (in which case we say that $u$ and $v$ are linearly independent over $E$), then there is a permutation which fixes $E$ and permutes $u$ with $v$.
Proof: Without loss of generality we can assume that $E$ is linearly independent, otherwise take a subset of $E$ which is. Since $E\cup\{u,v\}$ is linearly independent we can (in $V$) extend it to a base of $A$, and define a permutation of this base which fixes $E$, permutes $u$ and $v$. This extends uniquely to a linear permutation $\pi\in\fix(E)$ as needed. $\square$
Lemma II: In $\mathfrak{U}$, $A$ is a vector space over $\mathbb F$, and if $W\in\mathfrak{U}$ is a linear proper subspace then $W$ has a finite dimension.
Proof: Suppose $W$ is as above, let $E$ be a support of $W$. If $W\subseteq\span(E)$ then we are done. Otherwise take $u\notin W\cup \span(E)$ and $v\in W\setminus \span(E)$ and permute $u$ and $v$ while fixing $E$, denote the linear permutation with $\pi$. It is clear that $\pi\in\fix(E)$ but $\pi(W)\neq W$, in contradiction. $\square$
Lemma III: If $T\in\mathfrak{U}$ is a linear endomorphism of $A$, and $E$ is a support of $T$ then $x\in\span(E)\Leftrightarrow Tx\in\span(E)$, or $Tx=0$.
Proof: First, for $x\in \span(E)$: if $Tx\notin\span(E)$, choose $u\notin\span(E)$ with $u\neq Tx$, and let $\pi$ be a linear automorphism of $A$ which fixes $E$ and $\pi(Tx)=u$. We have, if so:
$$u=\pi(Tx)=(\pi T)(\pi x) = Tx\neq u$$
On the other hand, suppose $x\notin\span(E)$ and $Tx\in\span(E)$. If $Tx=Tu$ for some $u\notin\span(E)$ with $u\neq x$ (in which case we have that $x+u\neq x$), set $\pi$ an automorphism which fixes $E$ and $\pi(x)=x+u$; now we have: $$Tx = \pi(Tx) = (\pi T)(\pi x) = T(x+u) = Tx+Tu$$ Therefore $Tx=0$.
Otherwise for all $u\neq x$ we have $Tu\neq Tx$. Let $\pi$ be an automorphism fixing $E$ such that $\pi(x)=u$ for some $u\notin\span(E)$, and we have: $$Tx=\pi(Tx)=(\pi T)(\pi x) = Tu$$ this is a contradiction, so this case is impossible. $\square$
Theorem: if $T\in\mathfrak{U}$ is an endomorphism of $A$ then for some $\lambda\in\mathbb F$ we have $Tx=\lambda x$ for all $x\in A$.
Proof:
Assume that $T\neq 0$, so it has a nontrivial image. Let $E$ be a support of $T$. If $\ker(T)$ is nontrivial then it is a proper subspace, thus for a finite set of atoms $B$ we have $\span(B)=\ker(T)$. Without loss of generality, $B\subseteq E$, otherwise $E\cup B$ is also a support of $T$.
For every $v\notin\span(E)$ we have $Tv\notin\span(E)$. However, $E_v = E\cup\{v\}$ is also a support of $T$. Therefore restricting $T$ to $E_v$ yields that $Tv=\lambda v$ for some $\lambda\in\mathbb F$.
Let $v,u\notin\span(E)$ linearly independent over $\span(E)$. We have that: $Tu=\alpha u, Tv=\mu v$, and $v+u\notin\span(E)$ so $T(v+u)=\lambda(v+u)$, for $\lambda\in\mathbb F$.
$$\begin{align}
0&=T(0) \\ &= T(u+v-u-v)\\
&=T(u+v)-Tu-Tv \\ &=\lambda(u+v)-\alpha u-\mu v=(\lambda-\alpha)u+(\lambda-\mu)v
\end{align}$$ Since $u,v$ are linearly independent we have $\alpha=\lambda=\mu$. Due to the fact that for every $u,v\notin\span(E)$ we can find $x$ which is linearly independent over $\span(E)$ from both $u$ and $v$, we can conclude that for $x\notin\span(E)$ we have $Tx=\lambda x$.
For $v\in\span(E)$ let $x\notin\span(E)$; we have that $v+x\notin\span(E)$ and therefore:
$$\begin{align}
Tv &= T(v+x - x)\\
&=T(v+x)-T(x)\\
&=\lambda(v+x)-\lambda x = \lambda v
\end{align}$$
We have concluded, if so, that $Tx=\lambda x$ for all $x$, for some fixed $\lambda\in\mathbb F$. $\square$
Set $\mathbb F=\mathbb F_2$ the field with two elements and we have created ourselves a vector space without any nontrivial automorphisms. However, one last problem remains. This construction was carried out in ZF+Atoms, while we want to have it without atoms. For this simply use the Jech-Sochor embedding theorem $\small[3, \text{Th}. 6.1, \text p. 85]$, and by setting $\alpha>4$ it should be that any endomorphism is transferred to the model of ZF created by this theorem.
(Many thanks to t.b. who helped me translate parts of the original paper of Läuchli.
Additional thanks to Uri Abraham for noting that an operator need not be injective in order to be surjective, resulting in a shorter proof.)
Bibliography
Läuchli, H. Auswahlaxiom in der Algebra. Commentarii Mathematici Helvetici, vol 37, pp. 1-19.
Jech, T. Set Theory, 3rd millennium ed., Springer (2003).
Jech, T. The Axiom of Choice. North-Holland (1973). |
Calculating correlation matrix from covariance matrix - r>1 | Your covariance matrix is not a covariance matrix.
It should be positive (semi) definite so both eigenvalues should be $\ge 0$, but $\det(\Sigma) < 0$ so the eigenvalues have opposite signs. |
If $Y = X \backslash Z(f)$ for some affine variety $X$ with $p \in Y$, then $T_p Y \cong T_p X$ | The variety $X$ has a local ring $\mathcal O_{X,p}$ at $p$.
If $\mathfrak m \subset\mathcal O_{X,p}$ is its maximal ideal we have $T^*_p(X)=\mathfrak m/\mathfrak m^2$, a $k$-vector space, and the dual of that vector space is the tangent space of $X$ at $p$ : $T_p(X)=(\mathfrak m/\mathfrak m^2)^*$.
Since the local ring $\mathcal O_{X,p}$ does not change if you replace $X$ by an open neighbourhood $U$ of $p$, you have $T_p(X)=T_p(U)$.
Apply this to $U=X\setminus Z(f)=Y$.
The tangent space I defined is called the Zariski tangent space and is easily seen to be isomorphic to your tangent space defined with derivations.
The basic idea is that given your derivation $D$, you can extend it to $\mathcal O_{X,p}$, then restrict it to $\mathfrak m$ and since this linear form is zero on $\frak m^2$ you finally obtain a linear form on $\mathfrak m/\mathfrak m^2$, a tangent vector in the Zariski sense. |
probability - 2 cards with same rank | The "first" card doesn't matter, as only the second card has to have the same rank. After removing one card, there are 51 cards left in the deck. 3 of them have the same rank as the card that was removed. Hence, the probability of getting dealt a pair is 3/51 = 1/17. |
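A quick Monte Carlo sketch (not part of the original answer) that supports the $1/17$ figure:

import random

# Model a deck as 0..51 with rank = card // 4 (four suits per rank)
trials, hits = 100_000, 0
for _ in range(trials):
    a, b = random.sample(range(52), 2)  # deal two distinct cards
    if a // 4 == b // 4:                # same rank, i.e. a pair
        hits += 1
print(hits / trials, 1 / 17)            # both should be about 0.0588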
Show that all lines are Borel sets and have Lebesgue measure zero | A line segment of length $1$ can be covered by closed cubes of sidelength
$1/N$ centred at the points $x_0,\ldots,x_N$ equally spaced between the endpoints $x_0$ and $x_N$. So it's contained in a set of outer measure
$\le (N+1)/N^k$ for any positive integer $N$, where $k\ge 2$ is the dimension of the ambient space $\mathbb{R}^k$. That's enough to show the segment has measure zero. |
Recognising that $\sum_{n=0}^\infty \frac{a^2-b^2(2n+1)^2}{(a^2+b^2(2n+1)^2)^2}=-\frac{\pi^2\mathrm{sech}^2\left(\frac{a\pi}{2b}\right)}{8b^2}$ | As Semiclassical noted, you look at
$$ \sum_{n=0}^\infty \dfrac{r^2 - (2n+1)^2}{(r^2 + (2n+1)^2)^2} $$
Expand the summand in partial fractions:
$$ \dfrac{r^2 - (2n+1)^2}{(r^2 + (2n+1)^2)^2} ={\frac {2 {r}^{2}}{ \left( r^2 + (2n+1)^2 \right) ^{2}}} - \dfrac{1}{ r^2 + (2n+1)^2 }$$
First deal with the term on the right:
$$ F(r) = \sum_{n=0}^\infty \dfrac{1}{r^2 + (2n+1)^2}$$
Let $$G(r) = \sum_{k=1}^\infty \dfrac{1}{r^2 + k^2} $$
so that $F(r)$ consists of the terms of $G(r)$ for odd $k$. But since
$$\dfrac{1}{r^2 + (2n)^2} = \dfrac{1}{4} \dfrac{1}{(r/2)^2 + n^2}$$
we have
$$G(r) = F(r) + \dfrac{1}{4} G(r/2)$$
Now it is a "well-known" identity that
$$ \pi \cot(\pi z) = \dfrac{1}{z} + \sum_{n=1}^\infty \dfrac{2z}{z^2 - n^2}
= \dfrac{1}{z} - 2 z G(iz) $$
and this leads to
$$ F(r) = \dfrac{\pi \coth(\pi r)}{2r} - \dfrac{\pi \coth(\pi r/2)}{4r} $$
which can be simplified to
$$ F(r) = \dfrac{\pi \tanh(\pi r/2)}{4r}$$
Now note that $$\dfrac{d}{dr} \dfrac{1}{r^2 + (2n+1)^2} = \dfrac{-2r}{\left(r^2+(2n+1)^2\right)^2}$$
so your sum is
$$ - \dfrac{\pi \tanh(\pi r/2)}{4r} - r \dfrac{d}{dr} \dfrac{\pi \tanh(\pi r/2)}{4r}$$
which should simplify to
$$ - \dfrac{\pi^2 \text{sech}^2(\pi r/2)}{8} $$ |
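A quick numerical sanity check (not part of the original answer) of the closed form, using mpmath:

from mpmath import mp, nsum, sech, pi, inf, mpf

mp.dps = 30
r = mpf('1.3')  # an arbitrary test value

lhs = nsum(lambda n: (r**2 - (2*n + 1)**2) / (r**2 + (2*n + 1)**2)**2, [0, inf])
rhs = -pi**2 * sech(pi * r / 2)**2 / 8
print(lhs, rhs)  # the two values should agree to working precision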
Riemann (darboux?) integrating $f: [2,3] \to \mathbb{R} \quad f(x)=\frac{1}{x^2}$? | Let $I \subset\mathbb{R}$ be a closed interval and $f:I\to\mathbb{R}$ be a bounded function. Let
\begin{eqnarray}
\mathrm{L}f := \sup_{P \text{ is a partition of }I}L_{f,P}\\
\mathrm{U}f := \inf_{P \text{ is a partition of }I}U_{f,P}.
\end{eqnarray}
and $|P| > 0$ be the maximum length among the subintervals in $P$, where $P$ is a partition of $I$.
Then there is a theorem that
for all $\varepsilon > 0$ there exists $\delta > 0$ s.t. if a partition $P$ of $I$ satisfies $|P| < \delta$ then
\begin{eqnarray}
\left| L_{f,P}-Lf \right| < \varepsilon\\
\left| U_{f,P}-Uf \right| < \varepsilon.
\end{eqnarray}
Hence you can calculate the lower and upper Riemann integrals with partitions whose subintervals are of length $\frac{1}{n}$.
In order to calculate the lower one, calculate
\begin{eqnarray}
\frac{1}{n}\sum_{i=1}^n\frac{1}{\left( 2 + \frac{i}{n} \right)\left( 2 + \frac{i-1}{n} \right)}
\end{eqnarray}
instead of
\begin{eqnarray}
\frac{1}{n}\sum_{i=1}^n\frac{1}{\left( 2 + \frac{i}{n} \right)^2}
\end{eqnarray}
and then estimate the difference to each other. |
Show that if $2^t \le (t+1)^n, n\ge 5$, then $ t \le n^2-1$ | HINT: $t\ln 2\leq n\cdot\ln(t+1)$. |
Volume of a convex hull of 4 points in 3 dimension | You are looking for the volume of a tetrahedron. If the $4$ points are $P_i = (x_i, y_i, z_i)$, $i=1,\dots,4$, then the volume is $\frac{1}{6}$ of the absolute value of the determinant
$$ \left | \matrix{ 1 & x_1 & y_1 & z_1\\1 & x_2 & y_2 & z_2 \\1 & x_3 & y_3 & z_3 \\ 1 & x_4 & y_4 & z_4 } \right | $$ |
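A short numpy sketch (not part of the original answer) of this determinant formula:

import numpy as np

def tet_volume(p1, p2, p3, p4):
    # Rows are (1, x_i, y_i, z_i); the volume is |det| / 6
    m = np.column_stack([np.ones(4), np.array([p1, p2, p3, p4], dtype=float)])
    return abs(np.linalg.det(m)) / 6.0

# Corner tetrahedron with vertices at the origin and the unit points: volume 1/6
print(tet_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))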
heat equation in polar coordinates | Hint: Let $u(r,t) = \frac{U(r,t)}{r}$. Then $$\frac{\partial u}{\partial t} =\frac{1}{r}\frac{\partial U}{\partial t} $$ $$\frac{\partial u}{\partial r} = \frac{1}{r}\frac{\partial U}{\partial r} - \frac{U}{r^2}$$ and $$\frac{\partial^2 u}{\partial r^2} = -\frac{1}{r^2}\frac{\partial U}{\partial r} + \frac{1}{r} \frac{\partial^2 U}{\partial r^2} + \frac{2 U}{r^3} -\frac{1}{r^2} \frac{\partial U}{\partial r} = \frac{1}{r} \frac{\partial^2 U}{\partial r^2} - \frac{2}{r^2} \frac{\partial U}{\partial r} + \frac{2 U}{r^3}$$ So the heat equation becomes
$$\begin{align}\frac{1}{r}\frac{\partial U}{\partial t} &= h^2\left(\frac{1}{r} \frac{\partial^2 U}{\partial r^2} - \frac{2}{r^2} \frac{\partial U}{\partial r} + \frac{2 U}{r^3} + \frac{2}{r}\left(\frac{1}{r}\frac{\partial U}{\partial r} - \frac{U}{r^2}\right)\right)\\& =h^2\left(\frac{1}{r} \frac{\partial^2 U}{\partial r^2} - \frac{2}{r^2} \frac{\partial U}{\partial r} + \frac{2 U}{r^3} + \frac{2}{r^2}\frac{\partial U}{\partial r} - \frac{2U}{r^3}\right)\\& = \frac{h^2}{r} \frac{\partial^2 U}{\partial r^2}\end{align}$$ which implies $$\frac{\partial U}{\partial t} = h^2 \frac{\partial^2 U}{\partial r^2}$$ Do you think you can take it from here? |
Maybe Implicit Function Theorem?! | You have a function $y: \Bbb{R} \to \Bbb{R}$ (or open subsets thereof) such that for all $x$, $f(x,y(x)) = 0$. Part of the conclusion of the implicit function theorem is that the function $y(\cdot)$ will be twice continuously differentiable (because $f$ is). Now, differentiate both sides with respect to $x$:
\begin{align}
\dfrac{d}{dx} \bigg|_x f(x,y(x)) &= 0
\end{align}
Now, by the chain rule (which can be applied since, by the implicit function theorem, all the functions involved are at least $C^2$),
\begin{align}
\dfrac{\partial f}{\partial x} \bigg|_{(x,y(x))} + \dfrac{\partial f}{\partial y} \bigg|_{(x,y(x))} \cdot y'(x) &= 0
\end{align}
Now, since this equation is once again true for all $x \in \Bbb{R}$, we can differentiate again. Use the chain rule on the first term, and product (and chain) rule on second term.
If we differentiate the first term, we get:
\begin{align}
\dfrac{d}{dx} \bigg|_{x} \left( \dfrac{\partial f}{\partial x} \bigg|_{(x,y(x))} \right) &= \dfrac{\partial^2 f}{\partial x^2} \bigg|_{(x,y(x))} + \dfrac{\partial^2 f}{\partial y\partial x} \bigg|_{(x,y(x))} \cdot y'(x).
\end{align}
I leave it to you to differentiate carefully the second term. You should get a term involving $y''(x)$. Then, simply move everything over to the other side to solve for $y''(x)$ in terms of $y'(x)$, and the various partial derivatives of $f$ (evaluated of course at appropriate points). |
Let $\alpha$, $\beta$, and $\gamma$ be acute angles such that $\alpha$ + $\beta$ = $\gamma$. Show that | Assume without loss of generality that $\alpha\geq\beta$.
Since $\cos$ is a decreasing function on $\left(0,\frac{\pi}{2}\right)$, $\frac{\alpha+\beta}{2}<\frac{\pi}{4}$ and $\frac{\alpha-\beta}{2}<\frac{\pi}{4}$, we obtain:
$$\cos\alpha+\cos\beta+\cos(\alpha+\beta)-1=2\cos\frac{\alpha+\beta}{2}\cos\frac{\alpha-\beta}{2}+2\cos^2\frac{\alpha+\beta}{2}-2>$$
$$>2\cdot\frac{1}{\sqrt2}\cdot\frac{1}{\sqrt2}+2\cdot\left(\frac{1}{\sqrt2}\right)^2-2=0$$
Let $\cos\alpha=x$, $\cos\beta=y$ and $\cos(\alpha+\beta)=z$.
Hence, we need to prove that $$(x+y-1+z)^2\geq4xyz$$ or
$$z^2+2(x+y-1-2xy)z+(x+y-1)^2\geq0.$$
$$\frac{\Delta_{z}}{4}=(x+y-1-2xy)^2-(x+y-1)^2=$$
$$=(x+y-1-2xy-(x+y-1))(x+y-1-2xy+x+y-1)=$$
$$=4xy(1-x)(1-y).$$
Thus, it remains to prove that
$$z\leq-(x+y-1-2xy)-\sqrt{4xy(1-x)(1-y)}$$ or
$$z\leq(1-x)(1-y)+xy-\sqrt{4xy(1-x)(1-y)}$$ or
$$\cos(\alpha+\beta)\leq\cos\alpha\cos\beta+(1-\cos\alpha)(1-\cos\beta)-2\sqrt{\cos\alpha\cos\beta(1-\cos\alpha)(1-\cos\beta)}$$ or
$$-\sin\alpha\sin\beta\leq4\sin^2\frac{\alpha}{2}\sin^2\frac{\beta}{2}-4\sin\frac{\alpha}{2}\sin\frac{\beta}{2}\sqrt{\cos\alpha\cos\beta}$$ or
$$-\cos\frac{\alpha}{2}\cos\frac{\beta}{2}\leq\sin\frac{\alpha}{2}\sin\frac{\beta}{2}-\sqrt{\cos\alpha\cos\beta}$$ or
$$\cos\frac{\alpha-\beta}{2}\geq\sqrt{\cos\alpha\cos\beta}$$ or
$$1+\cos(\alpha-\beta)\geq2\cos\alpha\cos\beta$$ or
$$1\geq\cos(\alpha+\beta)$$
Done! |
Prove or disprove that AB=AC $\implies$ B=C | What if $A$ is zero? |
If $\sin \theta+\cos\theta+\tan\theta+\cot\theta+\sec\theta+\csc\theta=7$, then $\sin 2\theta$ is a root of $x^2 -44x +36=0$ My own bonafide attempt. | $$\sin(\theta)+\cos(\theta)+\tan(\theta)+\cot(\theta)+\sec(\theta)+\csc(\theta)=7$$
$$\sin(\theta)+\cos(\theta)+\frac{\sin(\theta)}{\cos(\theta)}+\frac{\cos(\theta)}{\sin(\theta)}+\frac{1}{\cos(\theta)}+\frac{1}{\sin(\theta)}=7$$
$$\sin^2\theta\cos\theta+\sin\theta\cos^2\theta+\sin^2\theta+\cos^2\theta+\sin\theta+\cos\theta=7\sin\theta\cos\theta$$
Let $\sin\theta+\cos\theta=u; \sin\theta\cos\theta=v$
$$uv+1+u=7v$$
$$u(1+v)=7v-1$$
$$u=\frac{7v-1}{v+1}$$
$$u^2=\left(\frac{7v-1}{v+1}\right)^2$$
$u^2=(\sin\theta+\cos\theta)^2=1+2\sin\theta\cos\theta=1+2v$
$$1+2v=\left(\frac{7v-1}{v+1}\right)^2$$
where $v=\sin\theta\cos\theta=\frac12 \sin2\theta$
Let $\sin2\theta=x$. Then $$1+x=\left(\frac{\frac72x-1}{\frac12x+1}\right)^2$$
$$1+x=\left(\frac{7x-2}{x+2}\right)^2$$
Expanding $(1+x)(x+2)^2=(7x-2)^2$ gives $x^3-44x^2+36x=0$; the root $x=0$ is impossible here (it would force $\sin\theta\cos\theta=0$, making some of the six functions undefined), and dividing by $x$ leaves
$$\color{red}{x^2-44x+36=0}$$ |
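A one-line sympy check (not part of the original post) of that expansion:

import sympy as sp

x = sp.symbols('x')
# (1+x)(x+2)^2 - (7x-2)^2 should factor as x*(x^2 - 44*x + 36)
print(sp.factor((1 + x) * (x + 2)**2 - (7*x - 2)**2))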
Formal argument for maximization of a sum of convex functions | You can use the following theorem: if the maximum of a convex function on a convex set is attained, then it is attained at an extreme point.
In your case, the convex set is a simplex, which is a compact set. The function $F(x) = \sum_{i=1}^n f_i(x_i)$ is convex, and thus continuous. Therefore, the maximum is attained by Weierstrass' theorem. The constraints define a convex polytope, whose extreme points are its vertices, which are $(X, 0, \dots, 0),~(0, X, 0, \dots, 0), \dots, (0, \dots, 0, X)$. The result you wish follows from the fact that the maximum is attained at one of those vertices. |
Calculate probability/ percentage | Note that P(2.1 < x < 3.7) can be written as P(x < 3.7) - P(x < 2.1). Recall that the equation for a z-score given information about a single sample is:
z = (x – x̄) / s
Since we are given that the distribution is normal and that our sample mean is 2.9 hours, we let x̄ = 2.9 hours. Since our sample standard deviation is given, we can let s = 0.4. With this information we can find the probabilities corresponding to the respective z-score values of P(x < 3.7) and P(x < 2.1). |
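A short scipy sketch (not part of the original answer) of the same computation:

from scipy.stats import norm

# P(2.1 < x < 3.7) for a normal distribution with mean 2.9 and sd 0.4;
# the endpoints sit at z = -2 and z = +2
p = norm.cdf(3.7, loc=2.9, scale=0.4) - norm.cdf(2.1, loc=2.9, scale=0.4)
print(p)  # about 0.9545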
Confusion related to the volume of a solid of revolution | Your course seems to have an unusual definition of fourth quadrant.
But let’s consider a concrete example. Let $x=-1.$ This is midway between your $a$ and $b$ so it’s certainly relevant.
For the curve $y=x^3,$ at $x=-1$ you have $y=(-1)^3=-1.$
So you are correct when you say you want to measure the inner radius by going from $8$ to $0$ and then the additional distance from $0$ to $x^3,$ because $x^3=-1.$
In this particular case the radius is $9.$
OK, so let’s try $8+x^3$ as the “additional distance” intuition might suggest.
Since $x^3=-1,$ we find that $8+x^3=8+(-1)=7.$
But we already determined graphically that the correct radius is $9.$ So “additional distance $\implies$ use addition” is a faulty intuition.
On the other hand, $8-x^3=8-(-1)=9.$
So subtraction gives the correct answer after all, even when the $y$ values are on opposite sides of the $x$ axis.
The key takeaway for me is: Subtraction always gives the distance.
That’s because $p-q$ is precisely how much we have to add to $q$ in order to arrive at $p.$
To be a little more rigorous we should say $p-q$ always gives the distance or the negative of the distance between $p$ and $q,$ because whether you get a positive or negative number depends on whether you list the greater number first.
But if you’re only going to use the square of the distance then the positive/negative distinction is erased by the squaring. |
Is it true that taking expectation twice over the same distribution equals to 0? | I think you misread the statement. If the gradient is not affected by the baseline, the following derivative with respect to $b$ should be zero.
But it is difficult to say from this snippet, so treat my answer with care.
Anyway, taking the expectation value twice yields the same result as taking it only once, since the expectation value is already stripped of its dependence on the variable that was used in the expectation calculation. Example: the expectation value of a normal 6-sided die is 3.5, and taking the expectation value of the number 3.5 yields 3.5 again. |
convolution using Laplace transform | Since you have applied the convolution property, I assume you are familiar with shifting theorems. The $p$-shifting theorem states that $$\mathcal{L}\{e^{-at} f(t)\} = F(p+a).$$
The Laplace transform of a polynomial is $$\mathcal{L}\{t^n\} = \frac{n!}{p^{n+1}}.$$ Then, a straightforward application of the shifting theorem (in the $p$-variable) implies that $$\mathcal{L}^{-1}\bigg\{ \frac{1}{(p+a)^{n+1}} \bigg\} = \frac{t^n e^{-at}}{n!}.$$ For instance, one of your terms is $$\mathcal{L}^{-1}\bigg\{ \frac{2}{(p+1)^{3}} \bigg\} = \frac{2 t^2 e^{-t}}{2!} = t^2 e^{-t}.$$ |
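A quick sympy check (not from the original answer) of that last inverse transform:

import sympy as sp

t, p = sp.symbols('t p', positive=True)
# Expect t^2 * exp(-t) (times a Heaviside factor in sympy's convention)
print(sp.inverse_laplace_transform(2 / (p + 1)**3, p, t))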
Any comprehensive books on global smooth optimization? | There aren't very many (reasonably-priced) books on the subject in my opinion.
A nice survey article is here:
@article{BOUKOUVALA2016701,
title = "Global optimization advances in Mixed-Integer Nonlinear Programming, MINLP, and Constrained Derivative-Free Optimization, CDFO",
journal = "European Journal of Operational Research",
volume = "252",
number = "3",
pages = "701 - 727",
year = "2016",
issn = "0377-2217",
doi = "https://doi.org/10.1016/j.ejor.2015.12.018",
url = "http://www.sciencedirect.com/science/article/pii/S037722171501142X",
author = "Fani Boukouvala and Ruth Misener and Christodoulos A. Floudas",
}
It explains the main modern methods in the state of the art solvers like BARON, Antigone, and SCIP.
Shamelessly self-promoting, you might also check out Chapter 5 of our Acta Numerica article.
https://www.cambridge.org/core/journals/acta-numerica/article/mixedinteger-nonlinear-optimization/2D0CE8CDA53363A31ADE8689565517BD
Hope this helps! |
Intuition on norm of quotient space | (1) $p$ is always a seminorm on the quotient and if it is not a norm there is $y=q(x)$ (where $q$ is the quotient map) such that $p(y)=0$ but $y\neq 0$. The latter condition measns that $x$ isn't in $U$ but the further means that one can approximate $x$ arbitrarily well by elements of $U$.
(2) Completeness of quotients is much deeper than (1). One can deduce it from a version of the open mapping theorem (a linear continuous map between Banach space is surjective and open if the closure of the image of the unit ball of the domain contains some ball in the range, see, e.g., Rudin's book on functional analysis) applied to $j\circ q$ where $j$ is the embedding of $X/U$ into its completion. |
Prove $xf(x) - \int_{0}^{x} f(t) \,dt = \int_{f(0)}^{f(x)} g(t) \,dt, g(x) = f^{-1}(x), \forall f(x)$? | You can use the Fundamental Theorem of Calculus (Leibniz's Rule) to show that the derivatives of the LHS and RHS wrt $x$ are identical ($=xf'(x)$).
Which means that LHS = RHS $+ c$.
All you need to do now is to show $c=0$ and that's trivial by taking $x=0$. |
Example of a unbounded projection | For example: let $H = \ell^2$. Define the transformation
$$
(x_1,x_2,\dots) \mapsto
\left(\sum_{k=1}^\infty kx_k, 0,0,\dots \right)
$$
Note, however, that this operator is not defined over all of $\ell^2$. |
Prove $det(F'(x)) = 0$ | If $F^T(x) F'(x) = 0$ then $F'(x)$ must be singular, hence $\det F'(x) = 0$. |
How many ways can we distribute $7$ different color pencils to $10$ drawers so there are at least $2$ pencils in the $10$th drawer? | There are $10^7$ ways of distributing seven pencils into ten drawers. Of course, many of those ways don't have at least two pencils in the tenth drawer. So let's subtract those out.
How many ways are there to have no pencils in the tenth drawer? That's equal to the number of ways to distribute seven pencils into the first nine drawers, which is $9^7$. So we subtract that out.
Now, how many ways are there to have exactly one pencil in the tenth drawer? Well, let's take it slowly: How many ways are there to have only the first pencil in the tenth drawer? That means that the remaining six pencils have to all be distributed into the first nine drawers, which is $9^6$. So we have to subtract that out. But that's just the first pencil. Each of the other six pencils can be the unique pencil in the tenth drawer, too. So that's six more subtractions of $9^6$.
Altogether, then, we have
$$
10^7-9^7-(7 \times 9^6)
$$
which is, indeed, quite a ways from $66430$.
The reason why you didn't get the right answer is that you didn't account for all the different combinations of pencils that could be in the tenth drawer. For instance, you write, for $4$ pencils in the tenth drawer, a total count of $729$. But that only takes care of the ways to put the other $3$ pencils in the other $9$ drawers. You forgot that there are many ways to select the $4$ pencils that will go into the tenth drawer—$\binom{7}{4} = 35$, in fact.
If you take that into account in your approach, you will have
$$
1+\binom76\times9+\binom75\times9^2+\binom74\times9^3+\binom73\times9^4+\binom72\times9^5
$$
and you will end up with the same answer as above. |
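A quick brute-force check (not part of the original answer) that the two counts agree:

from math import comb

direct = 10**7 - 9**7 - 7 * 9**6
# Sum over j = 2..7 pencils placed in the tenth drawer
summed = sum(comb(7, j) * 9**(7 - j) for j in range(2, 8))
print(direct, summed)  # both 1496944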
Is it possible to solve following integral with partial-fraction decomposition? | First divide numerator by denominator, then apply partial fraction decomposition to the resulting fraction from the remainder divided by the denominator (and integrate the quotient part separately). |
Product of $n$ consecutive positive integers is not an $n$th power? | I think I found a simple solution; it will need someone to verify.
Suppose $m$ exists such that $k(k+1)\cdots(k+n-1)=m^n$.
Then $k < m < k + n - 1$, so $m+1$ is one of the factors on the LHS.
Since $\gcd(m,m+1) = 1$,
$m+1$ divides the LHS but doesn't divide the RHS $m^n$. |
Find $ \lim\limits_{x \to 0^{+}} x^x $ | Hint: Try $x^x=e^{\ln x^x}= e^{x \ln x}$. |
Prove that $\left(1 - \frac{1}{n}\right)^{n-1} \ge e^{-1}$ | First, we know $1+x\le e^x\,\,\,\forall x\in \mathbb{R}$. Then, if $n\neq 1$
$$1+\frac{1}{n-1}\le e^\frac{1}{n-1}$$
$$\iff \left(1+\frac{1}{n-1}\right)^{n-1}\le e$$
$$\iff \left(1-\frac{1}{n}\right)^{n-1}\ge e^{-1}$$ since $1+\frac{1}{n-1}=\left(1-\frac{1}{n}\right)^{-1}$.
Note: I just explicitly used the inequality medicu took for granted in his comment. |
Projective bundle associated to sum of trivial line bundles | The total space of a trivial complex line bundle over $X$ is effectively $X \times \mathbf{C}$; "effectively" because a trivial line bundle has no "natural" trivialization, while $X \times \mathbf{C}$ has the distinguished section whose value is $1$ at each point.
In the same sense, the total space of the direct sum of two trivial complex line bundles is $X \times (\mathbf{C} \oplus \mathbf{C}) \simeq X \times \mathbf{C}^2$. Projectivizing obviously gives a global product $X \times \mathbf{CP}^1$ because the overlap maps of the rank-two vector bundle can be taken to be constant (with value the identity matrix). |
Prime ideals $\mathfrak{p} \supset \mathfrak{a}$ are finite in one-dimensional Noetherian domain | The quotient ring $A/\mathfrak a$ is zero-dimensional and noetherian, so it is artinian. But artinian rings have finitely many maximal ideals. |
Maximum Principle for Positive Laplacian | A subharmonic function satisfies an averaging inequality: If $x\in D$ and if $B\subset D$ is a ball centered at $x$, then $|B|^{-1}\int_B u\,dx \ge u(x)$. (Here $|B|$ denotes the area of $B$.) [Given your interest in complex function theory, Ransford's book Potential Theory in the Complex Plane will be of interest.) |
Iterates of $f_b(x) = x - \log_b(x) $ - for $\log(b) \approx 0.399$: convergence to accumulation points or chaos? | One important point in the system is $b=\sqrt{e}$, as was pointed out by Gottfried. For $b>\sqrt{e}$, there is a stable attracting fixed point for $f(x)=x-\log_b(x)$, and that fixed point is x=1. For $1.518120456732599974768513856<b<\sqrt{e}$, the fixed point in the neighborhood of one is repelling, and iterates starting in the neighborhood of one (but not equal to one) settle into a two cycle orbit. This value, $\approx1.5181$ is the next critical point in iterating f(x), and for bases less than that value, the system once again bifurcates.
Two cycle boundary base, where f(f(z0+delta))=z0-delta
b=1.518120456732599974768513856, with two-cycle fixed point values
If b is any smaller, the fixed points settle into a four cycle orbit
z0 0.3467994474160251099023657636
f(z0) 2.883510938240876275139117018
f(f(z0)) 0.3467994474160251099023657636
For $1.499042192220287185464351750<b<1.518120456732599974768513856$, the fixed point starting in the neighborhood of one settles into a four cycle orbit. Below that point, the system would settle into an eight cycle orbit. I didn't calculate the next bifurcation point, where the eight cycle orbits become sixteen cycle orbits. As I understand complex dynamics, there is an infinite sequence of these power of two bifurcations, with each region smaller than the previous bifurcation region. This is Mandelbrot like behavior, though I wouldn't presume to know how to make a "Mandelbrot" like complex graph for Gottfried's function.
Four cycle boundary base, where f(f(f(f(z0+delta))))=z0-delta
b=1.499042192220287185464351750, with four-cycle fixed point values
If b is any smaller, the fixed points settle into an eight cycle orbit
z0 0.4802675808315708379198177213
f(z0) 2.291937799458985333070087006
f(f(z0)) 0.2431639774409501127511516471
f(f(f(z0))) 3.736067056308005100926684146
f^4(z0) 0.4802675808315708379198177213
Gottfried looked at $b=\exp(0.4)$, which has a twelve cycle fixed point. This is a region of stability, that is actually past the infinite sequence of bifurcations, akin to a mini-Mandelbrot in the Mandelbrot set. Before getting to the mini-Mandelbrot region, you have to get past the infinite sequence of bifurcations, where chaos occurs, which seems to be near $b=\exp(0.4015293)$. For example, $b=\exp(0.4015295)$ is in the 512-cycle region, which is very close to the chaotic boundary.
edited with images updated. See this answer, How to figure out the starting point for this Mandelbrot? which has the ideal $z_0=1/\log(b)$ as the starting point for iterating f(x). Here is the main Mandelbrot "bug" generated from Gottfried's iterated function. Gridlines for the Mandelbrot image are 1/10, with the function varying from 1.425 to 1.725. You can see the main bifurcation line at $\exp(0.5)\approx1.65$. It looks like stackexchange resized the image, but if you right click, you can view the original at 750x500.
The algorithm I used works pretty well, with the $z_0$ starting point. Unfortunately, it appears that stackexchange edited out the exif comments from the .jpg files. Here is a zoom in, from 1.491 to 1.519, with grid lines of 1/100, showing the 8x bifurcation region. On the left, you can just make out Gottfried's region at 1.4918.
Here is the tip, from 1.44 to 1.50, with grid lines of 1/100. You can see several of the mini-mandelbrots.
Here is the biggest mini-mandelbrot, from 1.452 to 1.4544, with grid lines of 1/1000. This mini-mandelbrot has a main bulb with a 3-cycle, as compared with the 12-cycle from the mini-mandelbrot in Gottfried's question.
Finally, here is the amazing wide view, showing the infinite spiral, of radius $\exp(0.5)$, with the real values ranging from +/-1.66, and the imaginary maximum at 1.66 as well. Everywhere outside of this infinite circular spiral, the fixed attracting point is 1. The very larger black region in the center of the spiral between -0.6 and +1 is a computation artifact, where the base is close to zero. This is because the function escapes to +infinity instead of -infinity, so the algorithm incorrectly regards this as a stable fixed point region.
Here is a link to the pari-gp code. http://www.sheltx.com/pari/gottfried.gp |
Is this Stochastic Process bounded from above? | This might not be good news for you.
I don't think your setting will work generally. The reason why your simulation seems to confirm your hunch might be your limited simulated time interval.
The main reason I came to that conclusion is the Markov nature of your SDE (if there is no $t$ in $f$).
Now, let us together examine your setting to see if I have made any mistake.
Some assumptions I made on $f$:
(i) $f$ is time stationary, that is $ f(t,x)=f(x)$
(ii) $f(0)= 0$
(iii)$f(x) \ge 0$ if $x \le 0$ and $ f(x) \le 0$ in the other case.
(iv) $f$ is bounded below by a value $b<0$, that is, $f(x) \ge b$
Let $\tau_1, \gamma_M$ denote:
$ \tau_1 := \inf \{ t \ge 1: X_t=0\}$ , the first time after $1$ that $X$ revisits $0$.
$ \gamma_M:= \inf \{ t \ge 0: X_t= M\}$ for some $M>0$
So the desired conclusion is that: $$\lim_{M \rightarrow +\infty} \mathbb{P}\left( \gamma_M < \infty \right) =0$$
What I will show is that, under these assumptions:
$$ \mathbb{P}\left( \gamma_M < \infty \right) =1 \space \forall M>0$$
Heuristically, under my third assumption (iii), $f$ acts as a repulsive force that drags $X$ back to $0$, and I don't think it is hard to prove that:
$$ \mathbb{P}( \tau_1 < \infty)=1 $$
Then, we have:
$$\mathbb{P}\left( \gamma_M< \infty \right) =\mathbb{P}\left( \gamma_M< \tau_1 \right)+\mathbb{P}\left( \tau_1 \le \gamma_M<\infty \right)$$
$$ \underbrace{=}_{\text{strong Markov property}} \mathbb{P}\left( \gamma_M< \tau_1 \right)+\mathbb{P}\left( \tau_1 \le \gamma_M \right)\mathbb{P}\left( \gamma_M <\infty \right)$$
Which is equivalent to:
$$\left[ \mathbb{P}\left( \gamma_M< \infty \right) -1 \right]\mathbb{P}\left( \gamma_M< \tau_1 \right)=0$$
$$\Leftrightarrow \mathbb{P}\left( \gamma_M< \infty \right)=1$$
Because the fourth assumption gives us a clear reason for which $\mathbb{P}\left( \gamma_M< \tau_1 \right) >0$. Indeed, we have:
$$\mathbb{P}\left( \gamma_M< \tau_1 \right) \ge \mathbb{P}\left( \gamma_M< 1 \right) \ge \mathbb{P}\left( X_1 > M \right) \ge \mathbb{P}\left( W_1+b > M \right) >0$$
**QED**
*Discussion*: So more than just control over the negativity of $f$ is needed in order for your result to hold. |
Finding range of rational function with absolute value without use of calculus | Let $r\geq 0$; then see whether the equation $f(x)=r$ has at least one solution $x\not=-3$.
Now we have that
$$\left| \frac{x-2}{x+3}\right|=r\Leftrightarrow x-2=\pm r(x+3)
\Leftrightarrow x=\frac{\pm 3r+2}{1\mp r}.$$
What may we conclude? |
Reference on spectral theory for selfadjoint non-compact operators | As soon as you lose compactness, the spectral theorem doesn't just talk about eigenvalues, but the entire spectrum, which may now contain a continuous part. So there is a leap from the simple compact operator spectral theorem to the spectral theorem for bounded operators.
I personally like the treatment in say M. Reed & B. Simon "Methods of modern mathematical physics - functional analysis" (pg. ~225). Another good book is W. Rudin "Functional Analysis" (see pg. 321).
Also have a look here for the spectral theorem for bounded operators.
Addition:
Seems like you are interested in eigenvalues. A word of caution must be given here. A bounded self adjoint operator may have no eigenvalues. Consider for instance $M \colon L^2([0,1]) \to L^2([0,1])$ given by $(Mf)(x) = xf(x)$, which has no eigenvalues.
If you are interested in Schrödinger type operators I suggest (as FreeziiS.) that you look at volume IV of Reed & Simon's book (it is presented as an area known as perturbation theory). Also T. Kato's book "Perturbation Theory" is worth looking at. |
Is there any working strategy for finding the distance between a point and a function/curve in high dimensions? | The general form of a curve, surface, etc. in a finite-dimensional space is given by a map $f : U \to \Bbb R^n$, where $U \subseteq \Bbb R^m$ is a region (connected non-empty open set). The "hypersurface" is the image of $U$ under $f$, that is, the set $f(U)$.
Assuming $f$ is differentiable, only a small variation of the technique in the other thread is needed. If $p$ is the point whose distance from the hypersurface is desired, one forms the squared distance function $$D(x) = \|f(x) - p\|^2$$
where $x = (x_1, x_2, ..., x_m)$. $D$ is differentiable because $f$ is, and at the minimum, the derivative with respect to each of the coordinates must be $0$. So you solve the system of equations
$$\frac {\partial D}{\partial x_1} = 0\\\frac {\partial D}{\partial x_2} = 0\\\vdots\\\frac {\partial D}{\partial x_m} = 0$$
This gives you a system of $m$ equations in the $m$ unknowns $x_1, ..., x_m$, which you then solve to minimize $D$.
While this might be impractical to do by hand for a 1000-dimensional problem, it is not impossible. |
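As a concrete illustration (not from the original answer), here is a numerical version of the same idea with scipy, for an assumed example surface $f(u,v)=(u,v,u^2+v^2)$ and point $p=(0,0,1)$; the exact distance in this example is $\sqrt{3}/2\approx 0.866$:

import numpy as np
from scipy.optimize import minimize

def f(x):
    u, v = x
    return np.array([u, v, u**2 + v**2])  # a paraboloid in R^3

p = np.array([0.0, 0.0, 1.0])

# Minimize the squared distance D(x) = ||f(x) - p||^2
res = minimize(lambda x: np.sum((f(x) - p)**2), x0=np.array([0.5, 0.5]))
print(res.x, np.linalg.norm(f(res.x) - p))  # distance ~ 0.866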
Why $P^\mu[X_{S+t} \in \Gamma \mid \mathcal{F}_{S+}]=P^\mu[X_{S+t} \in \Gamma \mid X_{S}]=0 \text{ ,} P^\mu\text{-a.s. on } \{S=\infty\}$ | I think your reasoning of $P^\mu[X_{S+t}\in\Gamma|\mathcal F_{S+}]=0$ on $\{S=\infty\}$ is correct.
According to the definition of $\sigma(X_S)$ given in Problem 1.1.17, $\{S=\infty\}\in\sigma(X_S)$. So I think you can apply the same reasoning as above to verify that $P^\mu[X_{S+t}\in\Gamma|X_S]=0$ on $\{S=\infty\}$.
To me it seems, therefore, that in asserting
$$
P^\mu[X_{S+t}\in\Gamma|\mathcal F_{S+}]
=P^\mu[X_{S+t}\in\Gamma|X_S]
=0\text{ on }\{S=\infty\},
$$
we don't need the progressive measurability of $X$. I guess that the authors say "for any progressively measurable process $X$" because immediately preceding the remark, they've defined Markov processes as progressively measurable processes satisfying the Markov property (Definitions 2.6.2 and 2.6.3). That is, "for any progressively measurable process $X$," it seems to me, is to say "for any progressively measurable process $X$ (whether it's a Markov process or not)."
$\quad\quad$Markov processes are defined among progressively measurable processes probably in view of Proposition 1.2.18, i.e., to make sure $X_S\in\mathcal F_{S+}$ (the Markov property is an assertion that a conditional expectation given $\mathcal F_{S+}$ be equal to that given $X_S$, so among other things $X_S$ must be measurable with respect to $\mathcal F_{S+}$). |
Prove that if $A$ is an infinite set then $A \times 2$ is equipotent to $A$ | Consider the collection $S$ of pairs $(X,f)$ where $X\subseteq A$ is infinite and $f:X\times 2\hookrightarrow X$. Then $S$ is nonempty: $A$, being infinite, contains a copy of $\Bbb N$, and it is known that $\Bbb N\times 2\simeq \Bbb N$. Under the ordering by extension, every chain $\{(X_i,f_i)\}_{i\in I}$ in $S$ has an upper bound $(X,f)\in S$, with $X=\bigcup X_i$ and $f=\bigcup f_i$. It follows by Zorn's lemma that there is a maximal element $(A',g)$ in $S$. If we show that $A'$ has the same cardinality as $A$, we're done. Now it is clear $\# A'\leqslant \# A$ since $A'$ is a subset of $A$. So assume $\#A'<\#A$. Then $A\setminus A'$ must be infinite, since otherwise $A=A'\sqcup (A\setminus A')$ would give equality (recall that if $\mathfrak a$ is infinite and $\mathfrak b$ finite, then $\mathfrak a+\mathfrak b=\mathfrak a$), so there is some $A''\subset A$ disjoint from $A'$ with cardinality $\aleph_0$. But then by patching things together, we would be able to extend the injection $A'\times 2\to A'$ to a larger $(A'\sqcup A'')\times 2\to (A'\sqcup A'')$, contradicting maximality. |
Transform Fresnel integrals into each other | I think I can explain it. Start with
$$
\Gamma(z) = \int_{0}^{\infty}y^{z-1}e^{-y}\,dy
$$Using Euler's formula, we have
$$
C + i S = \int_{0}^{\infty} e^{i t^2}\,dt
$$
$$
=\frac{e^{\pi i/4}}{2}\int _{0}^{\infty}s^{1/2-1} e^{-s}\,ds
$$
$$
= \frac{e^{\pi i/4}}{2}\Gamma(1/2)= e^{\pi i/4}\,\Gamma(3/2)
$$ This recovers $\displaystyle{S=C=\sqrt{\frac{\pi}{8}}}$. In fact, this is a general result: with $T\in \{\sin,\cos\}$ and $\Re(n)>1$, we have
$$
\Gamma(1+1/n)T(\pi/(2n))= \int _{0}^{\infty} T(t^n)\,dt
$$ |
Expected Time Until Absorption and Variance of Time Until Absorption for absorbing transition matrix P, but with a Probability Vector u | Regarding your first question, given the expected time $t(x)$ to absorb starting from each state $x$ and the initial probability distribution $u(x)$, the expected time to absorb with this initial probability distribution is $\sum_x u(x) t(x)$. Normally we write distributions as row vectors and functions as column vectors; in this case this can be written as just $ut$. In terms of the fundamental matrix, $t=N \mathbf{1}$ so this is $uN\mathbf{1}$.
Regarding your question about the variance, here is a way to do it from scratch. You have the recursion:
$$E[\tau^2 \mid X_0=x]=\sum_y E[(\tau+1)^2 \mid X_0=y] P[X_1=y \mid X_0=x]$$
which expands to
$$E[\tau^2 \mid X_0=x]=\sum_y E[\tau^2 \mid X_0=y] P[X_1=y \mid X_0=x] + 2 \sum_y E[\tau \mid X_0=y] P[X_1=y \mid X_0=x] + 1.$$
Thus the difference of moments can be obtained as
$$E[\tau^2 \mid X_0=x]-E[\tau \mid X_0=x]=\sum_y E[\tau^2 \mid X_0=y] P[X_1=y \mid X_0=x] + \sum_y E[\tau \mid X_0=y] P[X_1=y \mid X_0=x]$$
for each non-absorbing state $x$. This follows by looking at the recursion for the expectation:
$$E[\tau \mid X_0=x]=1 + \sum_y E[\tau \mid X_0=y] P[X_1=y \mid X_0=x]$$
and subtracting on both sides. In matrix notation, writing $v(x)=E[\tau^2 \mid X_0=x]-E[\tau \mid X_0=x]$, this reads
$$v(x)=(P(v+2t))(x)$$
for each non-absorbing state $x$, with $v(x)=0$ for the absorbing states. The vector of second moments is then $v+t$, so for an initial probability distribution $u$ the variance of the absorption time is $u(v+t)-(ut)^2$.
Regarding your question about computing the variance using the fundamental matrix, I looked into this, and it can be done. But you need to be careful about the grouping. The vector of second moments is $(2N-I)t$, so the variance started from each state is given by the column vector
$$(2N-I)t-(t \circ t)$$
where $\circ$ is the Hadamard product. For an initial probability distribution $u$, the variance of the absorption time is $u\,(2N-I)t-(ut)^2$; note this is not the same as multiplying the column vector above by $u$ on the left, since the conditional means $t(x)$ themselves vary across starting states. You do not replace $t$ by $ut$ everywhere in the identity above. |
From the equation $\sigma(x^{\sigma(y)-1})=\frac{1}{\varphi(x)}(x^{y+1}-1)$ involving arithmetic functions to a characterization of Mersenne exponents | The conjecture is true.
Proof :
$x=1$ does not satisfy $(3)$, and $x=2$ satisfies $(3)$. In the following, $x\ge 3$.
As you already noticed, $x$ has to be a square-free integer.
Then, letting $x=\displaystyle\prod_{k=1}^{n}p_k$ with $n=\omega(x)$, we get
$$\prod_{k=1}^{n}\bigg({p_k}^{\sigma(2^{1+\varphi(x)}-1)}-1\bigg)=-1+\prod_{k=1}^{n}{p_k}^{2^{1+\varphi(x)}}$$
Suppose here that $2^{1+\varphi(x)}-1$ is a composite number.
Here, using the following facts :
If $N$ is a composite number, then $\sigma(N)\ge 1+\sqrt N+N$.
If $N\ge 3$, then $\varphi(N)\ge 2$.
If $m\ge 2$ and $y\gt 0$, then $m^{2+y}-1\ge m^{1+y}$.
we get
$$\begin{align}-1&=\prod_{k=1}^{n}\bigg({p_k}^{\sigma(2^{1+\varphi(x)}-1)}-1\bigg)-\prod_{k=1}^{n}{p_k}^{2^{1+\varphi(x)}}
\\\\&\ge \prod_{k=1}^{n}\bigg({p_k}^{1+\sqrt{2^{1+\varphi(x)}-1}+2^{1+\varphi(x)}-1}-1\bigg)-\prod_{k=1}^{n}{p_k}^{2^{1+\varphi(x)}}
\\\\&=\prod_{k=1}^{n}\bigg({p_k}^{\sqrt{2^{1+\varphi(x)}-1}+2^{1+\varphi(x)}}-1\bigg)-\prod_{k=1}^{n}{p_k}^{2^{1+\varphi(x)}}
\\\\&\ge\prod_{k=1}^{n}\bigg({p_k}^{\sqrt{2^{1+2}-1}+2^{1+\varphi(x)}}-1\bigg)-\prod_{k=1}^{n}{p_k}^{2^{1+\varphi(x)}}
\\\\&\ge\prod_{k=1}^{n}\bigg({p_k}^{2+2^{1+\varphi(x)}}-1\bigg)-\prod_{k=1}^{n}{p_k}^{2^{1+\varphi(x)}}
\\\\&\ge\prod_{k=1}^{n}\bigg({p_k}^{1+2^{1+\varphi(x)}}\bigg)-\prod_{k=1}^{n}{p_k}^{2^{1+\varphi(x)}}\end{align}$$
from which we have
$$-1\ge \prod_{k=1}^{n}\bigg({p_k}^{1+2^{1+\varphi(x)}}\bigg)-\prod_{k=1}^{n}{p_k}^{2^{1+\varphi(x)}}$$
which is impossible since the RHS is positive.
So, we see that $2^{1+\varphi(x)}-1$ is a prime number.
Then, we get
$$\prod_{k=1}^{n}\bigg({p_k}^{2^{1+\varphi(x)}}-1\bigg)=-1+\prod_{k=1}^{n}{p_k}^{2^{1+\varphi(x)}}$$
Suppose here that $n\ge 2$. Then, we get
$$\begin{align}1&=\prod_{k=1}^{n}\bigg({p_k}^{2^{1+\varphi(x)}}\bigg)-\prod_{k=1}^{n}\bigg({p_k}^{2^{1+\varphi(x)}}-1\bigg)
\\\\&\ge {p_n}^{2^{1+\varphi(x)}}\prod_{k=1}^{n-1}\bigg({p_k}^{2^{1+\varphi(x)}}\bigg)-\bigg({p_n}^{2^{1+\varphi(x)}}-1\bigg)\prod_{k=1}^{n-1}\bigg({p_k}^{2^{1+\varphi(x)}}\bigg)
\\\\&=\prod_{k=1}^{n-1}\bigg({p_k}^{2^{1+\varphi(x)}}\bigg)\end{align}$$
from which we have
$$1\ge \prod_{k=1}^{n-1}\bigg({p_k}^{2^{1+\varphi(x)}}\bigg)$$
which is impossible since the RHS is larger than $1$.
So, we have to have $\omega(x)=n=1$, and $x$ has to be a prime number.
Therefore, $x$ has to be a Mersenne exponent. $\quad\blacksquare$ |
Can $1+kq$ divide $p^3$ for distinct primes $p$ and $q$ with $p^3\not\equiv 1\pmod{q}$ | NO:
If $p=5, q=3, k=8$, then $3$ does not divide $p^3-1=124$, but $1+kq=25$ that divides $125$. |
Prove that a non-zero acceleration is perpendicular to a constant speed | You can apply the product rule when differentiating $\vec v\cdot\vec v =$constant.
The intuitive idea is that $\vec v(t)$ traces out a curve on a sphere centered at the origin (i.e., picture $\vec v(t)$ as a moving radius vector), while $\vec v'(t)=\vec a(t)$ is tangent to the sphere, and hence perpendicular to the radius at the point of tangency, namely $\vec v(t)$.
There is also a geometric description of what this says in terms of the original curve, say $\vec x(t)$, of which $\vec v(t)$ gives the velocity. Since $\vec v(t)$ gives a vector tangent to the trajectory of $\vec x(t)$, and in your case $\vec a(t)$ is perpendicular to $\vec v(t)$, $\vec a(t)$ is perpendicular to the trajectory of the original curve $\vec x(t)$.
Acceleration can occur for 2 reasons. In general, the component of $\vec a(t)$ in the direction of the trajectory (or opposite this direction) tells you how the speed is changing, while the component of $\vec a(t)$ perpendicular to the trajectory tells you how the direction is changing. So, again, in case the speed is constant, $\vec a(t)$ is perpendicular to the curve. Its direction tells you which direction $\vec x(t)$ is turning, while its magnitude tells you how quickly $\vec x(t)$ is changing direction. This last quantity is a constant multiple of the curvature of $\vec x(t)$. |
Limits without L'Hopitals Rule ( as I calculate it?) | Let $\displaystyle z = \frac{1}{y}$, so when $z\rightarrow \infty$, $y\rightarrow 0$.
So the limit becomes $$\displaystyle \lim_{y\rightarrow 0}\frac{1+\sqrt{1+2y}-2\sqrt{1+y}}{y^2} = \lim_{y\rightarrow 0}\frac{1+(1+2y)^{\frac{1}{2}}-2(1+y)^{\frac{1}{2}}}{y^2}$$
Now using $$\displaystyle \bullet (1+t)^n = 1+nt+\frac{n(n-1)t^2}{2}+\frac{n(n-1)(n-2)}{6}t^3+\ldots$$
we get $$\displaystyle \lim_{y\rightarrow 0}\frac{1+\left(1+ y-\frac{y^2}{2}+O(y^3)\right)-2\left(1+\frac{y}{2}-\frac{y^2}{8}+O(y^3)\right)}{y^2}$$
The numerator collapses to $-\frac{y^2}{4}+O(y^3)$, so the limit is $$\displaystyle -\frac{1}{4}$$ |
Box topology and connectedness property | The problem with your proof is that $A$ and $B$ do not have to have the form $A = \prod_{\alpha\in J} A_\alpha$ and $B = \prod_{\alpha\in J} B_\alpha$. Remember, sets of this form are not the entire box topology, but only a basis for the box topology. So instead, $A$ and $B$ just must be unions of sets of this form. |
Constructing a counterexample | There are three variables here; each can take two different values (true or false). That means there are only eight different situations to consider. That's small enough to do the straightforward thing: check 'em all, and see if any are counterexamples.
For example, if $p$, $m$, and $t$ are all "true", then $p \implies m$ is true, $m \implies t$ is true, and $m$ is true; so the premises are true. $m \implies p$ is also true, so this is not a counterexample.
For another example, if $p$ and $t$ are true but $m$ is false, then $p \implies m$ is false, so the premises are not all true; that means that this isn't a counterexample either.
There are six more cases to check - I'll leave those to you. |
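If you'd rather let a machine grind through all eight cases, here is a sketch; the premises $p \implies m$, $m \implies t$, $m$ and the conclusion $m \implies p$ are assumed from the worked cases above:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Assumed from the worked cases above: premises p->m, m->t, m; conclusion m->p.
for p, m, t in product([True, False], repeat=3):
    premises = implies(p, m) and implies(m, t) and m
    conclusion = implies(m, p)
    if premises and not conclusion:
        print("counterexample:", {"p": p, "m": m, "t": t})
```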
Question related to Kolmogorov equations | That is now a question about regularity of solutions of parabolic differential equations.
In a nutshell and somewhat simplified, if $b \in C^{k+1,\alpha}, \, \sigma \in C^{k,\alpha}$, then $u \in C^{k+2,\alpha}$ for $t > 0$. Here $C^{k,\alpha}$ means $k$ times differentiable with $k$-th derivatives being $\alpha$ Holder continuous. If also $f \in C^{k+2,\alpha}$, then the solution is in the same space all the way to $t = 0$. Clearly one cannot do better, so this is called "maximal regularity".
Classical references are the books by Friedman from 1969 or so and by Ladyzhenskaya from around that time. |
Is $x^3+x^2+x+1$ divisible by $x^3+5x^2+x$ in $\mathbb{Z}_5\left[x\right]$? | I'm not sure where your reasoning went astray, but:
$$x^3+x^2+x+1=(x+1)(x^2+1)$$
and this much is true in the integers $\;\Bbb Z\;$ , so it is also true in $\;\Bbb Z_5\;$ . It comes from noticing that $\;-1=4\pmod5\;$ is a root of the leftmost polynomial. Observe further that since $\;5=1\pmod4\;,\;\;-1\;$ is a square in this last field, so in fact
$$x^2+1=(x-2)(x+2)\pmod5= (x+3)(x+2)\pmod5$$
so in the end
$$x^3+x^2+x+1=(x+1)(x^2+1)=(x+1)(x+2)(x+3)\pmod5$$ |
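A quick check of both factorizations with sympy:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**3 + x**2 + x + 1, modulus=5))   # three linear factors
print(sp.factor(x**3 + 5*x**2 + x, modulus=5))     # reduces to x*(x^2 + 1) mod 5
# Note: sympy prints residues in the symmetric range, e.g. x - 2 in place of x + 3.
```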
What is the longest sequence of consecutive twin primes known to exist? | The longest consecutive twin-prime sequence is $3,5,7$.
These three numbers are actually the only three of the form $n, n+2, n+4$ in which all three are prime: for any $n$, one of $n, n+2, n+4$ is divisible by $3$. If $n>3$, that member exceeds $3$ and is therefore composite; the cases $n\le 3$ are checked directly.
Why I cannot get the same answer if I do substitution with $x=a \cos\theta$ for $\int \sqrt{a^2-x^2}dx$ compared with substitution $x=a \sin\theta$? | Method$\#1:$
If $\theta=\arccos\dfrac xa=\dfrac\pi2-\arcsin\dfrac xa,$
$\cos\theta=\dfrac xa,0\le\theta\le\pi$
$$dx=-a\sin\theta\ d\theta\text{ and }\sqrt{a^2-x^2}=a\sin\theta$$
$$\int\sqrt{a^2-x^2}\ dx=-\int a^2\sin^2\theta\ d\theta=\dfrac{a^2}2\int(\cos2\theta-1)\ d\theta=\dfrac{a^2\sin2\theta}4-\dfrac{a^2\theta}2$$
Now $\sin2\theta=2\sin\theta\cos\theta=?$
Method$\#2:$
If $\theta=\arcsin\dfrac xa,$
$\sin\theta=\dfrac xa,-\dfrac\pi2\le\theta\le\dfrac\pi2$
$$dx=a\cos\theta\ d\theta\text{ and }\sqrt{a^2-x^2}=a\cos\theta$$
$$\int\sqrt{a^2-x^2}\ dx=\int a^2\cos^2\theta\ d\theta=\dfrac{a^2}2\int(\cos2\theta+1)\ d\theta=\dfrac{a^2\sin2\theta}4+\dfrac{a^2\theta}2$$
Now $\sin2\theta=2\sin\theta\cos\theta=?$ |
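To confirm that the two methods agree up to a constant, back-substitute each result in terms of $x$ via $\sin2\theta=2\sin\theta\cos\theta$; a sympy sketch, assuming $a>0$:

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)

# Back-substituted antiderivatives: theta = acos(x/a) (Method 1)
# and theta = asin(x/a) (Method 2), using sin(2θ) = 2 sinθ cosθ.
F1 = x*sp.sqrt(a**2 - x**2)/2 - a**2/2 * sp.acos(x/a)
F2 = x*sp.sqrt(a**2 - x**2)/2 + a**2/2 * sp.asin(x/a)

print(sp.simplify(sp.diff(F2 - F1, x)))  # 0, so the two differ by a constant
print((F2 - F1).subs(x, 0))              # pi*a**2/4
```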
If G is finite then $\Phi(G)$ is nilpotent. | This is a standard textbook result. It is enough to prove that any Sylow subgroup $P$ of $\Phi(G)$ is normal in $\Phi(G)$ (that is an equivalent condition to nilpotency for finite groups). In fact more is true: $P$ is normal in $G$.
By the Frattini Argument, we have $G = \Phi(G)N_G(P)$. But now if $N_G(P) \ne G$, then $N_G(P) \le M$ for some maximal subgroup $M$ of $G$. But $\Phi(G) \le M$, so we get $G=\Phi(G) N_G(P) \le M$, contradiction. |
A graphical interpretation of the failure of $x^2$ to be uniformly continuous | Let's first work through the formal definitions and such.
A function $f:\mathbb{R}\rightarrow \mathbb{R}$ is uniformly continuous if $$\forall \varepsilon >0:\exists \delta > 0: \forall x,y\in \mathbb{R}: |x-y|<\delta\Rightarrow |f(x)-f(y)|<\varepsilon.$$
Now fix any $\varepsilon>0$ and suppose there was such a $\delta$. Let $x$ be a variable for the moment but define $y=x+\frac{\delta}{2}$. Then clearly $|x-y|<\delta$, thus by assumption $$|f(x)-f(y)|=\left|x^2-x^2-\delta x-\frac{\delta^2}{4}\right|=\delta \left|x+\frac{\delta}{4}\right|<\varepsilon.$$
Obviously the latter statement is false since $x$ could be chosen arbitrarily large.
Clearly we have shown that the function is not uniformly continuous. The problem is that we still had a choice in $x$ and this resulted in $f(x)$ lying arbitrarily far away from $f(y)$ even though $y$ was close to $x$. Uniform continuity expresses that if $x$ and $y$ are at a distance less than $\delta$, then the distance between $f(x)$ and $f(y)$ is less than $\varepsilon$. You can visualize this as follows:
Consider an arbitrary interval of the form $(x-\delta,x+\delta)$ in the domain (you can move this around by varying $x$). Uniform continuity says that for any value $y$ that you choose in this interval, we have that $f(y)\in (f(x)-\varepsilon, f(x)+\varepsilon).$ Hence the largest distance in the set $f((x-\delta,x+\delta))$ is less than $2\varepsilon$. In particular, this distance is independent of $x$, however this is clearly false for $x\mapsto x^2$. |
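A small numerical illustration of the failure: with $\delta$ fixed, $y=x+\delta/2$ stays within $\delta$ of $x$, yet $|f(x)-f(y)|$ grows without bound:

```python
f = lambda x: x**2
delta = 0.1

# y = x + delta/2 is always within delta of x, but
# |f(x) - f(y)| = delta*|x + delta/4| grows without bound as x does:
for x in [1, 10, 100, 1000]:
    y = x + delta/2
    print(x, abs(f(x) - f(y)))   # roughly 0.1, 1.0, 10.0, 100.0
```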
How to factor $x^4 +3x -2$? | Hints:
If it factors, you know the form will be $(x^2 + bx \pm 1)(x^2 + cx \mp 2)$. You need the $x$-coefficient to come out to $3$ and the cubic term to cancel out.
Now, can you use that and figure out the factors and find $b$ and $c$?
Result: $(x^2-x+2) (x^2+x-1)$ |
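A quick sympy check of the factorization:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.expand((x**2 - x + 2)*(x**2 + x - 1)))   # x**4 + 3*x - 2
print(sp.factor(x**4 + 3*x - 2))                  # (x**2 - x + 2)*(x**2 + x - 1)
```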
State space setup of a system | If we consider a controllable and observable state space model
$$\dot{\boldsymbol{x}}(t)=\boldsymbol{Ax}(t)+\boldsymbol{Bu}(t)$$
$$\boldsymbol{y}=\boldsymbol{Cx}(t)+\boldsymbol{Du}(t)$$
then it can be expressed in the frequency domain by a transfer function matrix $\boldsymbol{G}(s)$.
The representation in the state space formulation is not unique because you can always use a regular similarity transformation $\boldsymbol{z}=\boldsymbol{Tx}$ on the states such that the corresponding state space representation has changed while $\boldsymbol{G}(s)$ and the actual dynamics do not change. The dynamics are just described in a different coordinate system. |
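A minimal numerical sketch of this invariance: evaluate $\boldsymbol{G}(s)=\boldsymbol{C}(s\boldsymbol{I}-\boldsymbol{A})^{-1}\boldsymbol{B}+\boldsymbol{D}$ at a sample point before and after a similarity transformation (the matrices here are random draws, with $\boldsymbol{T}$ assumed invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 3, 1, 1
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, m))

T = rng.standard_normal((n, n))          # assumed invertible (generic draw)
Ti = np.linalg.inv(T)
A2, B2, C2, D2 = T @ A @ Ti, T @ B, C @ Ti, D   # states transformed by z = Tx

def G(s, A, B, C, D):
    """Transfer function G(s) = C (sI - A)^{-1} B + D evaluated at s."""
    return C @ np.linalg.solve(s*np.eye(len(A)) - A, B) + D

s = 2.0 + 1.5j   # any point that is not an eigenvalue of A
print(np.allclose(G(s, A, B, C, D), G(s, A2, B2, C2, D2)))   # True
```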
Characteristic Equation | In the real numbers (and complex numbers, and integers, and rationals), if a product is equal to $0$, then at least one factor is equal to zero.
So for $(x-2)(x+1)$ to be equal to zero, either $x-2=0$, or $x+1=0$.
However, for $x-2$ to equal $0$ you don't need $x$ to be equal to $-2$, you need it to be equal to $2$: $x-2=0$ is equivalent to $x=2$. And for $x+1=0$ to be true, you need $x=-1$. So that's why you go from "$x$ minus $2$" to $x=2$, and from "$x$ plus $1$" to $x=-1$.
(In general, $x$ equals $a$ if and only if $x-a=0$. And $x+1 = x-(-1)$). |
Find $a\in\mathbb{R}$ such that $x_{1,2,3}\in\mathbb{Z}$ | The turning-points of the cubic $y=x^3-x$ are at $\left(\pm \frac{1}{\sqrt{3}},\mp\frac{2}{3\sqrt{3}}\right)$. So, by the horizontal line test, a value can be attained more than once only if it lies in $\left[-\frac{2}{3\sqrt{3}},\frac{2}{3\sqrt{3}}\right]$; since $|b^3-b|\ge 6$ for any integer $b\ne-1,0,1$, it follows that if $b^3-b=c^3-c$ for integers $b,c\ne-1,0,1$, then $b=c$. But then $(x-b)(x-c)(x-d)=(x-b)^3=x^3-3bx^2+3b^2x-b^3$, and the coefficient of $x^2$ shows that $b=0$.
Probability confusion regarding different size of probability space | A sample space with not all outcomes equally likely is fine, as long as we take into account the relevant probabilities. However, if we can arrange to make all outcomes equally likely, the analysis is likely to be easier, with a diminished probability of error.
In the first sample space, the three outcomes described are not all equally likely. The probability of $A$ is $\frac{1}{2}$, while the probability of each of $BB$ and $BA$ is $\frac{1}{4}$. The outcomes $A$ and $BA$ have player A winning, so her probability of winning is $\frac{3}{4}$.
In the second analysis, the outcomes listed are all equally likely, so the fact that the probability is $\frac{3}{4}$ is very clear. |
How to find $|\{A \in \mathbb F _7^{5 \times 5}|A \text{ is invertible}\}|$? | The columns must form a basis of $\Bbb F_7^5$.
For the $k$th column, you can pick any of $7^5$ vectors, except those $7^{k-1}$ that are in the $(k-1)$-dimensional subspace generated by the preceding $k-1$ columns.
So the count is
$$(7^5-7^0)(7^5-7^1)(7^5-7^2)(7^5-7^3)(7^5-7^4) $$ |
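Computing the product:

```python
count = 1
for k in range(5):
    count *= 7**5 - 7**k
print(count)   # the number of invertible 5x5 matrices over F_7
```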
How to calculate the integral $\int x d(4x) $? | $$\int xd(4x)=\frac14\int (4x)d(4x)=\frac14\frac{(4x)^2}{2}+C=2x^2+C$$ |
Differentiation Operator a Contraction Mapping | It's not a contraction.
Let $f(x)=\sin nx$, $g(x)=\cos nx$ ($n$ is a number we need to find). Then $f,g\in C^{\infty}[a,b]$. And
\begin{align*}
\left\|f-g\right\|=&\underset{[a,b]}{\max}|\sin nx-\cos nx|=\sqrt{2}\underset{[a,b]}{\max}|\sin(nx-\frac{\pi}{4})|
\\
\left\|\frac{df}{dx}-\frac{dg}{dx}\right\|=&\left\|n\cos nx+n\sin nx\right\|=n\underset{[a,b]}{\max}|\cos nx+\sin nx|=\sqrt{2}n\underset{[a,b]}{\max}|\sin(nx+\frac{\pi}{4})|
\end{align*}
If we take $n$ large enough that $n\ge\frac{2\pi}{b-a}$, then
\begin{align*}
\left\|f-g\right\|=\sqrt{2},\qquad\left\|\frac{df}{dx}-\frac{dg}{dx}\right\|=\sqrt{2}n.
\end{align*}
If we also make $n\ge1$,
$$\left\|\frac{df}{dx}-\frac{dg}{dx}\right\|\ge\left\|f-g\right\|.$$
Therefore the operator $d/dx$ is not a contraction.
Any simpler form for $ \frac{\sum_{k=2}^{n-2}{k\left(\sum_{i=0}^{k}\frac{(-1)^i}{i!}\right)}}{n\sum_{i=0}^{n}\frac{(-1)^i}{i!}}$ | Changing the order of summation and noting that the terms for $i=0, i=1$ cancel:
$$\sum^{n-2}_{k=2} \sum_{i=0}^k k \frac{(-1)^i}{i!}=\sum^{n-2}_{i=2} \sum_{k=i}^{n-2} k \frac{(-1)^i}{i!}$$
We have
$$\sum^{n-2}_{k=i} k = \frac{(n-2)(n-1)}{2} - \frac{(i-1)i}{2}$$
Plugging this into the previous expression we get
$$\frac{(n-2)(n-1)}{2}\sum_{i=2}^{n-2} \frac{(-1)^i}{i!} - \sum_{i=2}^{n-2} \frac{(i-1)i}{2}\frac{(-1)^i}{i!}$$
The second of these terms equals
$$\frac12\sum_{i=2}^{n-2} \frac{(-1)^i}{(i-2)!} = \frac12 \sum_{i=0}^{n-4} \frac{(-1)^i}{i!}$$
Now let us abbreviate $a_n = \sum_{i=0}^n \frac{(-1)^i}{i!}$. Note, $\frac13\le a_n\le \frac12$ for $n\ge 2$ and $a_n\to e^{-1}$ as $n\to\infty$. Then altogether we have simplified your expression to:
$$\frac{\sum^{n-2}_{k=2} \sum_{i=0}^k k \frac{(-1)^i}{i!}}{n \sum_{i=0}^n \frac{(-1)^i}{i!}} = \frac{1}{2 n a_n}\Big((n-2)(n-1)a_{n-2} - a_{n-4}\Big) =\frac{(n-2)(n-1)-1}{2n}+R_n$$
where $R_n$ is an error term given by
$$R_n=\frac{(n-2)(n-1)-1}{2n}\Big(\frac{a_{n-4}}{a_n}-1\Big)+ (-1)^{n-3} \frac{(n-2)(n-1)}{2n a_n} \Big(\frac1{(n-3)!}-\frac1{(n-2)!}\Big)$$
This tends to $0$ very quickly as $n\to\infty$. Estimate the first term using
$$\Big|\frac{a_{n-4}}{a_n}-1\Big|= \frac{|a_n-a_{n-4}|}{a_n} \le 3\,|a_n-a_{n-4}| \le \frac{3}{(n-3)!}\le \frac4{(n-3)!}$$
(here $a_n\ge\frac13$, and $|a_n-a_{n-4}|\le\frac1{(n-3)!}$ by the alternating series estimate)
and the second term similarly so that we get
$$|R_n| < 10\frac{n^2}{(n-3)!}$$
For instance, $|R_{100}|< 10^{-147}$, which is comfortably beyond machine precision of any ordinary scientific calculator.
So for $n=100$ your number is equal to
$$\frac{98\cdot 99-1}{200}+R_{100} = 48.505+R_{100}$$ |
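An exact check of the simplification with Python's `fractions` (the identity before the error term is exact):

```python
from fractions import Fraction
from math import factorial

def a(n):
    """Exact partial sum a_n = sum_{i=0}^n (-1)^i / i!."""
    return sum(Fraction((-1)**i, factorial(i)) for i in range(n + 1))

n = 10
original = sum(k * a(k) for k in range(2, n - 1)) / (n * a(n))
closed = ((n - 2)*(n - 1)*a(n - 2) - a(n - 4)) / (2 * n * a(n))
print(original == closed)                              # True: the identity is exact
print(float(original), ((n - 2)*(n - 1) - 1) / (2*n))  # ~3.55 vs 3.55
```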
Find the $A \cap B$ for given sets | Do you agree that:
$A=\{(x,e^x) :x\in \mathbb{R}\}$ and $B=\{(x,x) :x\in \mathbb{R}\}$
Now, what does an element of their intersection look like?
What does $(x,x)=(x,e^x)$ tell us about $x$?
Maximum flow in graph whose vertices are all subsets of set $\{1,..,k\}$ | To prove that more than $2^k$ is impossible, it suffices to find a cut with that capacity. Let $S$ be the set of vertices whose subsets do not contain $1$, and let $T$ be the set of vertices whose subsets do contain $1$. There are $2^{k-1}$ edges from $S$ to $T$, each with capacity $2^1$, so the capacity is $2^k$.
To prove that $2^k$ is possible, it suffices to find a flow. Here is the basic idea. You have $2^k$ units of stuff at the source, $\varnothing$. Send half of that to $\{k\}$. Now there is $2^{k-1}$ at $\varnothing$ and $2^{k-1}$ at $\{k\}$. Next, recursively send the $2^{k-1}$ at $\varnothing$ to $\{1,2,\dots,k-1\}$, and recursively send the $2^{k-1}$ at $\{k\}$ to $\{1,2,\dots,k-1,k\}$. Finally, reunite the two halves by sending the $2^{k-1}$ at $\{1,2,\dots,k-1\}$ to $\{1,2,\dots,k\}$. |
$x$ and $y$ parameterised in terms of the variable $t$. | Yes that correct, as an alternative we can also take for example
$x(t)=3t$
$y(t)=t+\frac{11}3$
for $x\in(-2/3,1/3)$. |
How can we bound $ch(G\cup H)$, the choice number, in terms of $ch(G)$ and $ch(H)$? | Here are the fruits of further research.
It's an open problem whether $\operatorname{ch}(G \cup H) \le \operatorname{ch}(G) \cdot \operatorname{ch}(H)$, but it would follow from the $(a:b)$-choosability conjecture of Erdős, Rubin, and Taylor. To elaborate: an $(a:b)$-choosable graph is a graph in which, for any assignment of lists of size $a$ to the vertices, we can choose $b$ elements from each vertex's list so that the elements chosen for adjacent vertices are disjoint. A $k$-choosable graph is clearly $(k:1)$-choosable; the conjecture says that an $(a:b)$-choosable graph is also $(am:bm)$-choosable for any $m>1$.
Lemma. If $G$ is $(kl:l)$-choosable and $H$ is $l$-choosable, then $G \cup H$ is $kl$-choosable.
Proof. Given any assignment of lists of size $kl$ to the vertices of $G\cup H$, choose sublists of size $l$ such that, for vertices adjacent in $G$, their sublists are disjoint. Use these to list-color $H$. The colors given to vertices adjacent in $H$ must be different by definition; the colors given to vertices adjacent in $G$ are different because they came from disjoint sublists. Therefore we've list-colored $G \cup H$. $\square$
If the $(a:b)$-choosability conjecture holds, then knowing that $G$ is $k$-choosable would imply that it's $(kl:l)$-choosable, so this lemma would be a proof that $\operatorname{ch}(G \cup H) \le \operatorname{ch}(G) \cdot \operatorname{ch}(H)$.
Tuza and Voigt (1996) show that every $2$-choosable graph is $(2m:m)$-choosable, and therefore this list-coloring bound holds whenever either $G$ or $H$ has list chromatic number $2$.
On the other hand, according to Graph Coloring Problems by Jensen and Toft (which you might have access to from the publisher here), we can write down some function $f$ such that $\operatorname{ch}(G \cup H)$ is bounded by $f(\operatorname{ch}(G), \operatorname{ch}(H))$.
But I'm not sure why. Jensen and Toft cite the argument as [Alon, personal communication in 1994], and claim that it follows from Theorem 5.1 in Restricted colorings of graphs (Alon, 1993). This theorem 5.1 says that if a graph is $k$-choosable, then its average degree $d$ satisfies $$d \le 4 \binom{k^4}{k} \log \left(2 \binom{k^4}{k}\right).$$
I can't quite complete the argument from here. If $G$ is $k$-choosable and $H$ is $l$-choosable, we have upper bounds on the average degree of $G$ and $H$, and therefore an upper bound on the average degree of $G \cup H$. But it's certainly possible to have graphs with fixed average degree $d$ and arbitrarily large list chromatic number: just pick your favorite graph that has the list chromatic number you want, and tack on lots of low-degree vertices to bring the average degree down to $d$. So what do we do here? |
Determining that the Inverse of a Matrix is Equal to | Hint: Show that $A^{-1} A = I$ where $A^{-1}$ is as given in the question.
$A^{-1} A = ((A^\top A)^{-1} A^\top) A = (A^\top A)^{-1} (A^\top A) = I$. |
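A quick numerical check with numpy (the random draw is assumed invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))           # assumed invertible (generic random draw)

A_inv = np.linalg.inv(A.T @ A) @ A.T      # (A^T A)^{-1} A^T
print(np.allclose(A_inv @ A, np.eye(4)))  # True
```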
Time (only) dependence with respect to the inner product of a wave function in $L^2(\mathbb{R})$ | There are at least two plausible interpretations for the equation
$$
\frac{d}{dt} \langle\psi(t), A\psi(t)\rangle
= \left\langle\frac{d\psi}{dt}, A\psi\right \rangle
+ \left\langle\psi, A\frac{d\psi}{dt}\right\rangle;
\tag{1}
$$
not sure which, if either, the author intended.
The time-dependent wave function $\psi$ is taken to be $L^{2}$-valued, not complex-valued; the time derivatives are also $L^{2}$-valued, and (1) is a formal consequence of bilinearity of the inner product.
Equation (1) should be taken as holding at each position $x$, and the $L^{2}$ inner product found by integrating each side.
Technically, an element of $L^{2}$ isn't a complex-valued function, but an equivalence class of functions with any two identically equal except on a set of measure zero. Consequently, any equation involving pointwise (spatial) values of $\psi$ has to be interpreted carefully. It's possible this explains your author's notation. |
Why exclusive-or does not include $\wedge \sim (p \wedge q)$? | If $p \wedge q$, then $p$ and $q$ both hold true, so $\sim p$ and $\sim q$ are both false. Thus, $p \wedge \sim q$ is false and $\sim p \wedge q$ is false, meaning $(p \wedge \sim q) \vee (\sim p \wedge q)$ is also false. Thus, $p \wedge q \implies \sim (p \bigoplus q)$ and by contraposition, this means $p \bigoplus q \implies \sim (p \wedge q)$, so there is no need to include $\sim (p \wedge q)$ in the definition of $\bigoplus$ as that is implied by the definition already.
To make this more clear, let's make a truth table where $p$ and $q$ change and we look at the result of $\bigoplus$ and $\wedge$:
$p$ is false, $q$ is false $\rightarrow$ $p \bigoplus q$ is false, $p \wedge q$ is false
$p$ is true, $q$ is false $\rightarrow$ $p \bigoplus q$ is true, $p \wedge q$ is false
$p$ is false, $q$ is true $\rightarrow$ $p \bigoplus q$ is true, $p \wedge q$ is false
$p$ is true, $q$ is true $\rightarrow$ $p \bigoplus q$ is false, $p \wedge q$ is true
As you can see, $p \bigoplus q$ and $p \wedge q$ are never true at the same time. This shows us that if we added $\sim (p \wedge q)$ to the definition of $\bigoplus$, it would just be redundant since there is no case where $p \bigoplus q$ will be true when $p \wedge q$ is true. |
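A brute-force confirmation over all four cases:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    xor = (p and not q) or (not p and q)
    # p XOR q and p AND q are never true together, so conjoining ~(p AND q)
    # to the definition would add nothing:
    assert not (xor and (p and q))
print("p XOR q implies ~(p AND q) in all four cases")
```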
How do I compute this integral without Γ? | Observe that the integral is
$$\int_0^2e^{x^3}\Bigg[\int_0^{\sqrt{x}}y^3 dy\Bigg]dx=\frac{1}{4}\int_0^2 x^2 e^{x^3}dx$$
which is an immediate integral: $$\frac{1}{4}\int_0^2 x^2 e^{x^3}dx=\frac{1}{12}e^{x^3}\Big|_0^2=\frac{e^8-1}{12}$$
Observing the complex sine function | One way to proceed is to note that
$$ \sin(z) = \frac{e^{iz} - e^{-iz}}{2i}.$$
So
$$\sin(iy) = \frac{e^{-y} - e^y}{2i}.$$
The function $f(y) = e^{-y} - e^y$ takes $\mathbb{R}$ to $\mathbb{R}$.
The derivative
$$f'(y) = -e^{-y} - e^y = -e^{-y}(1 + e^{2y})$$
is always negative, and so $f(y)$ is injective. As $\lim_{y \to \pm \infty} f(y) = \mp \infty$, we see that $f$ is surjective.
Thus $\sin(iy) = -\frac{i}{2} f(y)$ takes every value on the imaginary axis exactly once.
I do not know how to respond to you asking about the definition of $\sin(z)$. So I will use the exponential definition given above, or equivalently its series definition. |
Euler's Totient function $\forall n\ge3$, if $(n-\phi(n)) = \sqrt{n}\ $,$\ $then $(n-\phi(n)) \in \Bbb P$ | Your statement needs to be more precise.
The implication $\Rightarrow$ is obvious: if $n=p^2$ then it follows immediately from the definition or formula for $\phi(n)$ that $n-\phi(n)=p$.
For the converse, you need to be more precise. The way in which is stated, is correct but trivial: you are saying $n=p^2$ if and only if $n=p^2$ and $n=(n-\phi(n))^2$.
If you mean $n=p^2 \Leftrightarrow n-\phi(n)$ is prime, this is not true. $n=15$ fails the converse.
Added
If $p=n-\phi(n)=\sqrt{n}$ it follows that $n=p^2$.
Now, it is easy to see that for all integers $m$ we have $\phi(m^2)=m \phi(m)$. Therefore
$$p=p^2-p\phi(p) \Rightarrow \phi(p) = p-1 $$
This implies $p \in P$. |
Are larger natural numbers less interesting? | Of course not; large primes are what keep our communication channels secure. We don't talk about them much because they are too large for our brains (on the order of 200 digits) and we have to use computers to deal with them. Some may not even have a significant real-world application (the monster group). Set theorists and logicians study the largest possible numbers (countably infinite and beyond) for an entire lifetime. Even moderately large numbers have lots of mathematicians pursuing them for various reasons ($1729$, Ramanujan).
Simplify the trig function $\frac{2\cos{2x}}{\sin{2x}}$ | $$\frac{2\cos(2x)}{\sin(2x)}=\frac{2(\cos^2x-\sin^2x)}{2\sin x\cos x}=\cot x-\tan x$$ |
Let $W$ be an invertible matrix. Show that the map, $\|x\|_W = \|Wx\|_2$ is a norm on $\mathbb{R}^m$. | If $W$ is injective then $\|x\|_W = \|Wx\|_2$ is a norm.
$\|tx\|_W = \|W (tx)\|_2 = |t| \|Wx\|_2 = |t| \|x\|_W$.
$\|x+y\|_W = \|W(x+y)\|_2 \le \|Wx\|_2 + \|Wy\|_2 = \|x\|_W + \|y\|_W$.
Suppose $\|x\|_W = 0$, then $\|Wx\|_2 = 0$ and hence $Wx = 0$. Since $W$ is presumed injective, we have $x=0$.
Note, it does not have to be the Euclidean norm $\| \cdot \|_2$, any norm
will do. |
What's the connection between forms and vector fields? | You might also think about this: vector fields are in the dual space of forms. I'll exemplify this in $\mathbb{R}^3$.
A vector field is a function over a region of three dimensional space that gives at each point a vector. For a point $P=(x,y,z)$, let the vector field $V$ associate to $P$ the vector
$$
V(P) = v_x(P)i_x+v_y(P)i_y+v_z(P)i_z\in\mathbb{R}^3\text{, which is seen here as a vector space}
$$
The corresponding dual element for $V$ is a function over the region of three dimensional space which gives a linear functional at each point; i.e.,
$$
w(P) = w_x(P)dx+w_y(P)dy+w_z(P)dz
$$
This is called a 1-form, a special case of a differential form. Thus a 1-form is a field of linear functionals. |
If $u+u^{-1}$ is analytic then $u$ is constant | 1. For the first one, take the Laplacian $\Delta$ and use the fact that if $f$ is holomorphic, $\Delta|f|^2 = 4 |f'|^2$. Therefore, as the sum $\sum|f_i|^2$ is constant, you'll have
$$0 = \Delta\left( |f_1|^2 + ... + |f_n|^2\right) = 4 \left(|f'_1|^2 + ... + |f'_n|^2 \right)$$
It is now clear that $|f'_i|^2 = 0$ for every $i$, showing that each of them is a constant.
2. (Someone please check this one.) It is a general fact that if $f$ is holomorphic and if $f = f_1 + if_2$, and if
$$J = \begin{pmatrix} \frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} \\ \frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y} \end{pmatrix}$$
denotes its jacobian matrix, then its determinant $|J|$ is equal to $|f'|^2$. This is a consequence of the Cauchy-Riemann equations.
If $f = \frac{1}{u} + iu$ is holomorphic, then writing its jacobian matrix yields
$$J =
\begin{pmatrix} \frac{\partial u^{-1}}{\partial x} & \frac{\partial u^{-1}}{\partial y} \\ \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} \end{pmatrix} =
\begin{pmatrix} -\frac{\partial u}{\partial x}\frac{1}{u^2} & -\frac{\partial u}{\partial y}\frac{1}{u^2}\\ \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} \end{pmatrix}$$
Its determinant is clearly zero (the rows are proportional), so we must have $|f'|=0$ and $f$ must be constant, forcing its imaginary part $u$ to be a constant too.
3. The constant function equal to zero everywhere is differentiable everywhere.
Calculating the "edge" distance between two points | This is often called the Manhattan distance between two points, because cars in Manhattan can only drive on vertical and horizontal roads. The Manhattan distance between $(x_1,y_1)$ and $(x_2,y_2)$ is $|x_1-x_2|+|y_1-y_2|$. |
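A minimal implementation:

```python
def manhattan(p1, p2):
    """Edge ("taxicab") distance between points (x1, y1) and (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))   # 7
```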
About the implicit funtion in a holomorphic situation. | If you first forget the complex structure, then this is a standard situation for the implicit function theorem with $F:\Bbb R^4\to \Bbb R^2$. The corollary of the theorem tells you that if $F(x,y)$ is $k$ times continuously differentiable, then so is $Y(x)$.
Here you only need the first derivative. Now reinserting back the complex structure, one finds that the derivative of $Y(x)$ inherits the complex structure from $F$ and its partial derivatives, so that $Y(x)$ satisfies the Cauchy-Riemann partial differential equations and is thus holomorphic.
Find the angle between the vectors | a)
That $\mathbb u + \mathbb v + \mathbb w = 0$ means that if we place the vectors head to tail to head to tail, they form a triangle.
By the law of cosines
$15^2 = 5^2 + 12^2 - 2(5)(12)\cos \theta\\
\frac {225 - 144 - 25}{120} = -\cos\theta\\
\frac {56}{120} = -\cos\theta\\
\theta = \arccos\left(-\frac {7}{15}\right)$
But that is the angle between $u,v$ when they are head to tail. When they are tail to tail you will get the supplement of that angle.
$\arccos \frac 7{15}$
You could also do this with $\|\mathbb u + \mathbb v\| = \|\mathbb w\|$
b)
$(\mathbb u+\mathbb v)\cdot \mathbb u = 0\\
\mathbb u\cdot \mathbb u + \mathbb u\cdot \mathbb v = 0\\
\|\mathbb u\|^2 + \|\mathbb u\|\|\mathbb v\|\cos\theta = 0\\
9 + 15\cos\theta = 0\\
\theta = \arccos-\frac 35$ |
Question about the probability of a number | It's a bit fuzzy (because just saying "if we pick a random integer" is fraught with trouble), but I would assume that he means something like this:
Pick a positive integer N, and let us draw an integer with uniform probability from the interval $[0, pN - 1]$. Then the numbers that are divisible by $p$ are $0, p, 2p, \ldots, p(N-1)$, and there are $N$ of them. So the fraction of values that are divisible by $p$ is $\frac{N}{pN} = \frac{1}{p}$, so the probability that our chosen number is divisible by $p$ is $\frac{1}{p}$. This doesn't depend on $N$, so as $N \rightarrow \infty$ the probability remains $\frac{1}{p}$.
(Note that you could just pick from the interval $[0, N - 1]$ and note that the probability fluctuates around $\frac{1}{p}$ with a decreasing error term, but this makes the math a little easier to do.)
Alternatively, you can just say that the density of the multiples of $p$ among the integers is $\frac{1}{p}$ which is proven the same way but which doesn't have the pitfalls of trying to work with probabilities on an infinite list. |
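A small empirical sketch of this, for $p=7$:

```python
import random

p, N = 7, 10**5
# Exact count of multiples of p in [0, pN - 1]:
exact = sum(1 for n in range(p * N) if n % p == 0) / (p * N)
# Monte Carlo estimate from uniform draws on the same interval:
mc = sum(random.randrange(p * N) % p == 0 for _ in range(10**5)) / 10**5
print(exact, mc, 1 / p)   # both close to 1/7 = 0.142857...
```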
a dense subspace of inner product space whose codimension is 1 | As Kavi Rama Murthy says in the comments, "Every infinite dimensional inner product space has a discontinuous linear functional f and the kernel of such a functional is dense and has codimension 1".
I would like to add a proof of the fact that every infinite dimensional normed space (in general) has an unbounded linear functional.
The kernel of a linear functional on a Banach space is either closed or dense: the kernel is closed if-f the functional is bounded. The kernel is dense if-f the functional is unbounded, see this post.
So let $X$ be a normed space with infinite dimension. Take $\{x_n\}$ a linearly independent subset of $X$ (which is possible, since $X$ has infinite dimension) and extend it to a Hamel basis of $X$, say $\{x_n\}_{n=1}^\infty\cup Z$, where $Z\subset X\setminus(\{x_n\}_{n=1}^\infty)$. We define a functional on the basis and we extend it linearly on all $X$. We simply set $f(x_n)=n\cdot\|x_n\|$ and $f(z)=0$ for all $z\in Z$. Note that $f$ cannot be bounded: if $|f(x)|\leq c\|x\|$ for all $x\in X$ for some $c>0$, then $n\|x_n\|\leq c\|x_n\|$ for all $n$, which is impossible since $\mathbb{N}$ has no upper bound. |
What's the relationship between the rank and eigenvalues of symmetric positive semidefinite matrix (real domain)? | What's there to prove? You're correct.
Let $P$ be the (invertible!) matrix you describe. We have
$$
\operatorname{rank}(A) =
\operatorname{rank}(P^TAP) =
\operatorname{rank}(\operatorname{diag}(\lambda_1,\dots,\lambda_n))
$$
the rank of $\operatorname{diag}(\lambda_1,\dots,\lambda_n)$ is the number of $\lambda_i$ such that $\lambda_i \neq 0$. Your conclusion follows. |
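A quick numerical illustration (the factor $B$ is assumed to have full column rank, so $A=BB^T$ has rank $3$):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 3))
A = B @ B.T                      # symmetric PSD, rank 3 (assuming B has full column rank)

eigvals = np.linalg.eigvalsh(A)
nonzero = np.sum(eigvals > 1e-10)
print(np.linalg.matrix_rank(A), nonzero)   # 3 3
```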
Expected number of buckets with at least one ball | First off, you've mixed up $N$ and $K$ between the given answer and your method. For the rest of this answer, I'll assume you meant
I had a different way of calculating this probability: counting the number of ways the $K$ balls can be put into $N−1$ buckets (all except bucket $i$), and dividing that by the number of ways $K$ balls can be put into $N$ buckets. I can use stars and bars to calculate each. Why does this method not work in this case? I'm a bit confused.
For this method of computing the probability to make sense, all the outcomes you're counting need to be equally likely. Is this true? Well, let's just take $N=2$ and $K=2$ for example. Using your logic, the outcomes are
Two balls in bucket 1
One ball in bucket 1, one ball in bucket 2
Two balls in bucket 2
Are these equally likely to occur? No. Outcomes 1 and 3 each occur with probability $1/4,$ but outcome 2 occurs with probability $1/2.$ |
Proof of the Isomorphism between: $SL(2,\mathbb R) \times SL(2, \mathbb R) \cong SO^+(2,2)$ | Let us consider the usual sporadic isogenies related to the one you ask. They are associated with quadratic forms of different signatures.
The case of the Lorentz group and the $2:1$ isogeny from its universal cover:
$$SL_2(\mathbb{C})\twoheadrightarrow SO^{\uparrow}_+(1,3)$$
The case of the quaternion algebra, the double cover $SU(2)\to SO(3)$ and the total group of proper isometries of the Euclidean quaternion algebra (with its quaternionic norm as the associated quadratic form):
$$\Psi:SU(2)\times SU(2)/\{\pm 1\}\cong SO(\mathbb{H})$$
given by $\Psi(A,B)(X)=AXB^{\dagger}$, where all quaternions are written in their standard complex $2$-dimensional representation (i.e. general unitary matrices when nonzero).
In the case of signature $(2,2)$ we have a nice way of presenting this quadratic space which suits us very well. That is, just write a $2 \times 2$-matrix and compute its determinant; this indeed corresponds to a neutral quadratic form on $\mathbb{R}^4$.
Now, it suffices to write down the action of $SL_2(\mathbb{R})^2$ on $M_2(\mathbb{R})$ similarly to the one above, that is:
$\Psi(A,B)(X):=AXB^{-1}$ defines a left action on $M_2(\mathbb{R})$ via proper isometries with respect to the quadratic form $A\mapsto \det A$, which in turn furnishes the double cover you were looking for:
$$\Psi:SL_2(\mathbb{R})\times SL_2(\mathbb{R})\to SO^+(2,2).$$
Note that both double covers have isomorphic complexifications. Hope this helps.
Problem in calculating Fisher information of some distribution | $E(I_1I_2)=E[A_1^TV^{-1}(Z-A\theta)(Z-A\theta)^TV^{-1}A_2]$
Now note that $E[(Z-A\theta)(Z-A\theta)^T]=V$; therefore $E(I_1I_2)=A_1^TV^{-1}A_2$, which is $0$
Verify the inf of a subset S of $\mathbb R$ | Use the definition of the infimum: $x=\inf S$ means that $x$ is a lower bound of $S$ and that for every $\epsilon>0$ there is some $y\in S$ with $y < x+\epsilon$. Our guess for the infimum is $x=3$. Since $a_n=\frac{3n+1}{n}=3+\frac{1}{n}>3$ for all $n$, $3$ is indeed a lower bound. Now fix $\epsilon>0$; we need to find $n\in \mathbb{N}$ such that $a_n=\frac{3n+1}{n}<3+\epsilon$, i.e. $\frac{1}{n}<\epsilon$. Choosing any $n>\frac{1}{\epsilon}$ gives $a_n<3+\epsilon$, and thus $\inf S = 3$.
Non-negative uniformly integrable local martingale | Found the answer to my question in a previous answer to a different question I got:
Uniformly integrable local martingale
The answer is no. |
how often do we find $p^m - q^n= \pm2$ for primes $p,q$ and $m,n > 1$ | The Diophantine equation $p^m-q^n=c$ for given integers $p,q,c$ is called Pillai's equation.
Here one searches for positive integer solutions $m,n$.
There are many conjectures (and some results) on this equation. For example, Pillai conjectured that $3^m-2^n=c$ has at most one solution in positive integers for $|c|>13$. This was proved by Stroecker and Tijdeman in 1982. For more details see http://www.math.ubc.ca/~bennett/B-Pillai.pdf.
What does this mean for our question, if $m,n>1$ are given ? As far as I can see, there are not many primes $p,q$ (I suppose that you want distinct primes) satisfying Pillai's equation with given exponents. |
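For concreteness, a brute-force search over small primes and exponents (a sketch; the bounds are arbitrary) turns up the classical solution $3^3-5^2=2$:

```python
from sympy import primerange

solutions = []
for p in primerange(2, 100):
    for q in primerange(2, 100):
        if p == q:
            continue
        for m in range(2, 11):
            for n in range(2, 11):
                if abs(p**m - q**n) == 2:
                    solutions.append((p, m, q, n, p**m - q**n))
print(solutions)   # e.g. (3, 3, 5, 2, 2) from 3^3 - 5^2 = 2
```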
Cardinality of countable subsets of the continuum | Let $A$ be an element of the set of all the countable subsets of $\mathbb{R}$, which we shall denote simply by $\mathcal{P}_{\le\omega}(\mathbb{R})$
Assertion: $\mathcal{P}_{\le\omega}(\mathbb{R})\preccurlyeq\,^\omega\mathbb{R}$
We will identify $A$ with an $\omega$-sequence of elements of $\mathbb{R}$, that is, an element of $^\omega\mathbb{R}$, in the following way:
If $A$ is infinite, let $\{a_n|\;n\in\omega\}$ be an enumeration of $A$. Then it is clear that the assignment $A\longmapsto(a_n)_{n\in\omega}$ is injective.
If $A$ is finite, say $A=\{a_0,\dots,a_n\}$, then we define the $\omega$-sequence $(b_k)_{k\in\omega}$ defined by:
$$b_k=\begin{cases}
a_k\qquad\qquad\text{if }k\le n \\
a_n+1\qquad\text{ if }k>n
\end{cases}$$
And in this case, it is also clear that the correspondence $A\longmapsto(b_k)_{k\in\omega}$ is injective.
In any case, $\mathcal{P}_{\le\omega}(\mathbb{R})\preccurlyeq\,^\omega\mathbb{R}$
Now, on the one hand we have that $\mathbb{R}\preccurlyeq\mathcal{P}_{\le\omega}(\mathbb{R})$, because the function $r\in\mathbb{R}\longmapsto\{r\}\in\mathcal{P}_{\le\omega}(\mathbb{R})$ is obviously injective.
On the other hand, $\mathcal{P}_{\le\omega}(\mathbb{R})\preccurlyeq\mathbb{R}$, since $\;\mathcal{P}_{\le\omega}(\mathbb{R})\preccurlyeq\,^\omega\mathbb{R}\;$ and $\;^\omega\mathbb{R}\preccurlyeq\mathbb{R}$: in fact, $|^\omega\mathbb{R}|=\big(2^{\aleph_0}\big)^{\aleph_0}=2^{\aleph_0\times\aleph_0}=2^{\aleph_0}=|\mathbb{R}|$
From the Cantor-Bernstein theorem, we obtain that $|\mathcal{P}_{\le\omega}(\mathbb{R})|=|\mathbb{R}|=2^{\aleph_0}$ |
Infinitely differentiable approximations for an indicator function | Let $h:\Bbb{R} \to \Bbb{R}$ be defined by
\begin{align}
h(x) :=
\begin{cases}
e^{-1/x} & \text{if $x>0$}\\
0 & \text{if $x \leq 0$}
\end{cases}
\end{align}
Verify for yourself (and draw a picture) that $h$ is $C^{\infty}$, $0 \leq h(\cdot) < 1$, and $h(x) > 0$ if and only if $x>0$.
For any $a,b \in \Bbb{R}$ with $a<b$, define $H_{a,b}: \Bbb{R} \to \Bbb{R}$ by
\begin{align}
H_{a,b}(x) := h(x-a) \cdot h(b-x)
\end{align}
Then, $H_{a,b}$ is $C^{\infty}$, $0 \leq H_{a,b}(\cdot) < 1$, and $H_{a,b}(x) > 0$ if and only if $a<x<b$ (draw a picture).
For any $a,b \in \Bbb{R}$ with $a<b$, define $G_{a,b}: \Bbb{R} \to \Bbb{R}$ by
\begin{align}
G_{a,b}(x) := \dfrac{\int_x^b H_{a,b}}{\int_a^b H_{a,b}}
\end{align}
Then, by the Fundamental Theorem of Calculus, we have $G_{a,b}' = -\dfrac{H_{a,b}}{\int_{a}^bH_{a,b}} \leq 0$; this shows $G_{a,b}$ is $C^{\infty}$ and (weakly) decreasing. Also, a direct calculation shows that if $x\leq a$ then $G_{a,b}(x)=1$ while if $x\geq b$ then $G_{a,b}(x)=0$. Also, by the derivative formula above, it follows that $G_{a,b}'(a)=G_{a,b}'(b)=0$ (but this is also obvious, because $G_{a,b}$ is smooth, and to the left of $a$, it is constant $1$, so the left-derivative is zero, but since $G_{a,b}$ is actually differentiable at $a$, the derivative is actually zero. Similar reasoning holds for $b$).
So, the function $g= G_{a,a+\delta}$ satisfies all the properties you're looking for.
Anyway, don't try to memorize all these functions. Try to understand them: the first function $h$ is very famous for being smooth (it's probably the only one you have to "memorize"), and it (or something very similar) is one of the building blocks for constructing smooth functions with such-and-such properties (like taking certain values, and decaying quick enough etc).
Next, $H_{a,b}$ is simply obtained from $h$ by translating, reflecting, and multiplication (to make it zero outside $(a,b)$). Finally, getting $G_{a,b}$ from $H_{a,b}$ is simply a matter of "normalization". Note that if you wanted a function which increases from $0$ to $1$ (as opposed to decreasing from $1$ to $0$ like $G_{a,b}$) all you have to do is consider the modified function $x\mapsto \dfrac{\int_a^x H_{a,b}}{\int_a^b H_{a,b}}$. |
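For completeness, here is a numerical sketch of $h$, $H_{a,b}$, and $G_{a,b}$, using `scipy.integrate.quad` for the normalizing integrals:

```python
import numpy as np
from scipy.integrate import quad

def h(x):
    # Inner where guards the division so the x <= 0 branch never divides by zero.
    return np.where(x > 0, np.exp(-1/np.where(x > 0, x, 1)), 0.0)

def H(x, a, b):
    return h(x - a) * h(b - x)   # smooth, positive exactly on (a, b)

def G(x, a, b):
    num, _ = quad(H, x, b, args=(a, b))
    den, _ = quad(H, a, b, args=(a, b))
    return num / den

a, b = 0.0, 1.0
print(G(-0.5, a, b), G(0.5, a, b), G(1.5, a, b))   # ~1.0, ~0.5, ~0.0
```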