convergence of sum $\sum_{j=1}^\infty\frac{1}{2}2^{-j/2}\sqrt{j+1}$ | Let's prove by induction that $j+1 \le 2^{\frac j 2}$ for $j \ge 6$. If $j=6$ then indeed $7 \le 8$. The induction step:
$$j+2 = (j+1) + 1 \le 2^{\frac j 2} + 1 \le 2^{\frac j 2} + 2^{\frac 6 2} (\sqrt 2 -1) \le 2^{\frac j 2} + 2^{\frac j 2} (\sqrt 2 -1) = 2^{\frac {j+1} 2} ,$$
where the last inequality uses $j \ge 6$.
This means that for $J \ge 6$ we may write
$$\sum _{j=J} ^\infty \frac 1 2 2^{-\frac j 2} \sqrt {j+1} \le \sum _{j=J} ^\infty \frac 1 2 2^{-\frac j 2} \sqrt {2^{\frac j 2}} = \frac 1 2 \sum _{j=J} ^\infty \frac 1 {2 ^{\frac j 2 - \frac j 4}} = \frac 1 2 \sum _{j=J} ^\infty \frac 1 {\sqrt[4] 2 ^j} = \frac 1 {\sqrt[4] 2 ^J} \frac 1 {2 - \frac 2 {\sqrt[4] 2}} .$$
Now, a sufficient condition for what you want is to impose that
$$\frac 1 {\sqrt[4] 2 ^J} \frac 1 {2 - \frac 2 {\sqrt[4] 2}} < \epsilon ,$$
which means that
$$ J > \log _{\sqrt[4] 2} \frac 1 {\left( 2 - \frac 2 {\sqrt[4] 2} \right) \epsilon} = - \log _{\sqrt[4] 2} \left( \left( 2 - \frac 2 {\sqrt[4] 2} \right) \epsilon \right) .$$
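As a quick numerical sanity check of this bound (not part of the proof; the truncation at 2000 terms and the test values of $J$ are my own choices), one can compare the tail sum with the closed form above:

```python
import math

def tail(J, terms=2000):
    # partial tail sum of (1/2) * 2^(-j/2) * sqrt(j+1), starting at j = J
    return sum(0.5 * 2 ** (-j / 2) * math.sqrt(j + 1) for j in range(J, J + terms))

def bound(J):
    # (1 / (2^(1/4))^J) * 1 / (2 - 2 / 2^(1/4))
    return 2 ** (-J / 4) / (2 - 2 / 2 ** 0.25)

for J in (6, 10, 20, 40):
    print(J, tail(J), bound(J))
```
Each printed tail sits comfortably below the corresponding bound. |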
CDF on Standard uniform gives the same distribution | We have
$$F_Y(y)=\Pr(Y\le y)=\Pr(F_X^{-1}(U)\le y).$$
Because $F_X$ is strictly increasing,
$$\Pr(F_X^{-1}(U)\le y)=\Pr(U \le F_X(y))=F_X(y).$$
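A small simulation sketch of this inverse-CDF argument (my own illustration, taking $F_X$ to be the Exponential$(1)$ CDF so that $F_X^{-1}(u)=-\ln(1-u)$):

```python
import math
import random

def sample_via_inverse_cdf(n):
    # Y = F_X^{-1}(U) with U ~ Uniform(0,1) and F_X(x) = 1 - e^{-x}
    return [-math.log(1 - random.random()) for _ in range(n)]

random.seed(0)
ys = sample_via_inverse_cdf(100_000)
# empirical P(Y <= 1) should be close to F_X(1) = 1 - e^{-1} ≈ 0.632
print(sum(y <= 1 for y in ys) / len(ys))
```
The empirical CDF matches $F_X$, just as the computation above predicts. |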
probability density of the maximum of samples from a uniform distribution | \begin{align}
P(Y\leq x) &= P(\max(X_1,X_2 ,\cdots,X_n)\leq x)\\
&= P(X_1\leq x,X_2\leq x,\cdots,X_n\leq x)\\
&\stackrel{ind}{=} \prod_{i=1}^nP(X_i\leq x )\\
&= \prod_{i=1}^n\dfrac{x}{\theta}\\
&= \left(\dfrac{x}{\theta}\right)^n
\end{align}
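A quick Monte Carlo check of this formula (my own choices of $\theta$, $n$ and $x$):

```python
import random

theta, n, x, trials = 2.0, 5, 1.5, 100_000
random.seed(0)
hits = sum(max(random.uniform(0, theta) for _ in range(n)) <= x
           for _ in range(trials))
print(hits / trials, (x / theta) ** n)  # both ≈ 0.237
```
Differentiating $(x/\theta)^n$ then gives the density $n x^{n-1}/\theta^n$ asked about in the title. |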
Is the following conclusion for a nonnegative stochastic matrix $P$ true? | I have got the answer.
Given any vector $x \in {\mathbb R^n}$, we decompose it as $x = y + c1$, where $c$ is a scalar and $y$ has at least one zero entry and all other entries nonnegative. ($c$ equals the minimum of the entries of $x$.)
Letting $x(k + 1) = Px(k)$ and writing $x(k) = y(k) + c(k)1$, we have
\begin{align}\label{p_lem1.0}
\begin{split}
x(k + 1)&= P(y(k) + c(k)1)\\
&= Py(k)-h(k)1 + (c(k) + h(k))1,
\end{split}
\end{align}
where $h(k) = \min (Py(k))$ is a scalar. Then, we can easily obtain that $y(k + 1) = Py(k) - h(k)1$ and $c(k + 1)=c(k) + h(k)$.
Note that $\mathop {\max }\limits_{j = 1}^N \mathop {\min }\limits_{i = 1}^N {P_{ij}} > 0$ if and only if there is at least one column of $P$ whose entries are all positive.
We suppose that all the entries of the $m$th column of $P$ are positive, and set $\lambda = \min_{i} P_{im} > 0$. Then, for all $i = 1,2, \cdots ,N$, we get
\begin{equation}
{y_i}(k + 1) = \sum\limits_{j = 1}^N {{p_{ij}}{y_j}(k)} - h(k) \le \lambda {y_m}(k) + (1 - \lambda ){\left\| {y(k)} \right\|_\infty } - h(k).
\end{equation}
Since $y(k+1)$ and $y(k)$ each have at least one zero entry, we get
\begin{equation}
- h(k) = - \mathop {\min }\limits_{i = 1,2, \cdots ,N} \left( {\sum\limits_{j = 1}^N {{p_{ij}}{y_j}(k)} } \right) \le - \lambda {y_m}(k)
\end{equation}
By the above two inequalities, for all $i = 1,2, \cdots ,N$, we get ${y_i}(k + 1) \le (1 - \lambda ){\left\| {y(k)} \right\|_\infty }$.
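A small numerical illustration of this contraction (my own example: a random row-stochastic $P$ with all entries positive, so any column can play the role of the $m$th):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((4, 4)) + 0.1          # strictly positive entries
P /= P.sum(axis=1, keepdims=True)     # rows sum to 1
lam = P[:, 0].min()                   # lambda = min_i P_{im} with m = 0

x = rng.random(4) * 10
for k in range(6):
    print(k, x.max() - x.min())       # spread = ||y(k)||_inf
    x = P @ x
```
Each printed spread is at most $(1-\lambda)$ times the previous one, so $x(k)$ converges to a multiple of $1$. |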
$KC$-spaces and $US$-spaces. | A space $X$ is sequential if every sequentially closed set in $X$ is closed, where $A\subseteq X$ is sequentially closed iff $A$ contains the limit of every sequence in $A$ that converges in $X$. $X=[0,\omega_1]$ is not sequential because the subset $A=[0,\omega_1)$ is sequentially closed but not closed. It’s clearly not closed, since $\omega_1$ is a limit point of $A$. To see that $A$ is sequentially closed, let $\langle\xi_n:n\in\omega\rangle$ be a sequence in $A$ converging to some $\xi\in X$, and let $\eta=\sup_{n\in\omega}\xi_n$. Then $\eta<\omega_1$, so $(\eta,\omega_1]$ is an open nbhd of $\omega_1$ disjoint from $\{\xi_n:n\in\omega\}$, so $\omega_1\ne\xi$, and therefore $\xi\in A$.
Now let $p$ be any point not in $X$, and let $Y=X\cup\{p\}$. For each $\alpha<\omega_1$ let $(\alpha,\omega_1)\cup\{p\}$ be a basic open nbhd of $p$. In other words, $[0,\omega_1)\cup\{p\}$ is homeomorphic to $X$: to form $Y$ from $X$ I’ve simply split $\omega_1$ into two points, $\omega_1$ and $p$. You can easily check that $Y$ is compact, assuming that you know why $X$ is compact. $X$ is a compact subset of $Y$ that is not closed in $Y$, so $Y$ is not $KC$. Finally, if a sequence $\langle y_n:n\in\omega\rangle$ in $Y$ converges to $\omega_1$, it is eventually constant at $\omega_1$ (meaning that there is an $m\in\omega$ such that $y_n=\omega_1$ for all $n\ge m$), and if it converges to $p$, it is eventually constant at $p$. This implies that a sequence that converges to $\omega_1$ or to $p$ does not converge to any other point of $Y$. Finally, $Y\setminus\{\omega_1,p\}$ is Hausdorff, and sequences that converge to points of $Y\setminus\{\omega_1,p\}$ are eventually in $Y\setminus\{\omega_1,p\}$, so $Y$ is $US$.
Added: It is not true that a hereditarily Lindelöf $KC$ space is Katětov $KC$ iff it admits a weaker $US$ topology, even if weaker is understood to mean strictly weaker. Example $3.4$ of Chiara Baldovino & Camillo Costantini, ‘On some questions about $KC$ and related spaces’ is a countable $KC$ space $\langle X,\sigma\rangle$ that is not Katětov $KC$. Plainly $\langle X,\sigma\rangle$ is hereditarily Lindelöf, and it’s not hard to show that $\langle X,\sigma\rangle$ has a strictly weaker $US$ topology.
Specifically, in the notation of the paper fix any $\varphi:\Bbb N\to\Bbb N$, and replace the original fundamental system of open nbhds of $p_2$ by the smaller system $\{V_{2,\varphi,n}:n\in\Bbb N\}$. If $\sigma'$ is the resulting topology, the only sequences in $X$ that are convergent in $\sigma'$ but not in $\sigma$ are sequences that converge to $p_2$ in $\sigma'$, and they converge only to $p_2$ in $\sigma'$. Thus, $\langle X,\sigma'\rangle$ is $US$. |
Affine algebraic varieties defined over a [not necessarily algebraically closed] field | After posting some nonconstructive comments I decided to post a constructive answer.
Set $A=k[X_1, \dots, X_n]$ and $B=\bar k[X_1, \dots, X_n]$. The ring extension $A\subset B$ is flat (why?). Let $\mathfrak p$ be a prime ideal of $A$. The irreducible components of $V$, the algebraic set in $\bar k^n$ defined by $\mathfrak p$, are defined by the prime ideals $P_1,\dots,P_r$ of $B$ which are minimal over $\mathfrak pB$. Since the extension $A\subset B$ is flat it follows that $P_i\cap A=\mathfrak p$ for all $i$. Moreover, the ring extension $A_{\mathfrak p}\subset B_{P_i}$ is also flat and now we can apply the dimension formula and get $\dim B_{P_i}=\dim A_{\mathfrak p}+\dim B_{P_i}/\mathfrak pB_{P_i}$. But $\dim B_{P_i}/\mathfrak pB_{P_i}=0$ (why?) and thus we get $\dim B_{P_i}=\dim A_{\mathfrak p}$, that is, $\mbox{ht}(P_i)=\mbox{ht}(\mathfrak p)$ and this is enough to show that $\dim V_i = \dim k[X_1, \dots, X_n]/\mathfrak p$.
The extension $A\subset B$ is an integral extension of integrally closed domains. Furthermore, the field extension $K\subset L$ is normal, where $K$ and $L$ are the fields of fractions of $A$, respectively $B$, so we can apply Theorem 5(vi), page 33, from Matsumura, CA, that says the following: any two prime ideals $P_1$ and $P_2$ of $B$ lying over the same prime ideal $\mathfrak p$ of $A$ are conjugate, i.e. there is $\bar\sigma\in G(L/K)$ such that $\bar\sigma(P_2)=P_1$. Now take $\sigma=\bar\sigma_{\mid \bar k}\in G$. If I'm not wrong, $\sigma(V_2)=V_1$.
For the converse, let $P$ be a prime ideal of $B$ that defines $W$ and $\mathfrak p=P\cap A$. It remains to prove that $W$ is the algebraic set of $\bar k^n$ defined by $\mathfrak p$. This can be reformulated as follows: the prime ideals of $B$ lying over $\mathfrak p$ are the minimal elements of the set of the prime ideals of $B$ containing $\mathfrak p$, and this is easy to prove by using the well-known properties of integral extensions. |
Integral of $\sin^n(x)$, recurrence relation, some properties | With your induction relation, you can write $I(2n)=\frac{(2n-1)(2n-3)...}{2n(2n-2)...}I(0)$.
You can simplify this to $I(2n)=\frac{(2n)!}{4^n(n!)^2}I(0)$.
The same kind of relation exists for $I(2n+1)$.
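A numerical sanity check of the closed form (my own code; I take $I(n)=\int_0^{\pi/2}\sin^n x\,dx$, for which the same recurrence holds):

```python
import math

def I(n, steps=100_000):
    # midpoint rule for the integral of sin(x)^n over [0, pi/2]
    h = (math.pi / 2) / steps
    return sum(math.sin((k + 0.5) * h) ** n for k in range(steps)) * h

for n in (1, 2, 3, 4):
    closed = math.factorial(2 * n) / (4 ** n * math.factorial(n) ** 2) * I(0)
    print(I(2 * n), closed)
```
The two columns agree to the accuracy of the quadrature. |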
another definition of Lebesgue functions | If $\phi_n\to f$ almost everywhere, this means that $\mathbf{1}_{A}\phi_n\to \mathbf{1}_{A}f$ everywhere for some measurable set $A\subset \mathbb{R}$ such that $\mu(A^\complement )=0$ (here $\mu$ is the measure defined on $[a,b]$; it can be the Lebesgue measure or any other). Now note that $\mathbf{1}_{A}f=f$ almost everywhere. |
posterior probability of bag given ball (evidence) | Your answer to the first question is correct. (ETA: Or was, before you edited $\frac{5}{8}$ to $\frac{15}{16}$. $<$g$>$)
Your answer to the second question is incorrect, because $P(\text{red}, \text{Bag $C$}) = 10/30$, not $10/45$. (That is, out of the $30$ balls, each equally likely to be selected, exactly $10$ of them are both red and in Bag $C$.) You then obtain the correct answer of $10/13$.
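Spelled out as arithmetic (the count of $13$ red balls in total is taken from the problem as I read it):

```python
p_red_and_C = 10 / 30   # 10 of the 30 equally likely balls are red and in Bag C
p_red = 13 / 30         # 13 of the 30 balls are red in total
print(p_red_and_C / p_red)   # P(Bag C | red) = 10/13 ≈ 0.769
```
which is the $10/13$ above. |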
Minimum number of edges to change to obtain new graph structure | This is a hopeless problem. It is currently unknown whether one can find out in polynomial time if two graphs are isomorphic, which means there probably isn't an efficient algorithm even to find out if you can do what you want by changing $0$ edges!
https://en.wikipedia.org/wiki/Graph_isomorphism_problem |
is my radius of convergence correct? | Yes, you are correct. Note that the difference in adjacent coefficients is just $\frac 1 {n+1}$, which goes to $0$ as $n\to \infty$, so your ratio of coefficients is pretty clearly going to 1. |
If $\int_{0}^{x}f(t)dt\rightarrow \infty$ as $|x|\rightarrow \infty\;,$ Then every line $y=mx$ Intersect $ y^2+\int_{0}^{x}f(t)dt=2$ | Since $g: x \mapsto \int_{0}^{x} 2m^{2}t + f(t) dt - 2$ on $\mathbb{R}$ is continuous, and since $g(x) > 0$ for large $x$ and $< 0$ for $x = 0$, by Bolzano's theorem we have $g(c) = 0$ for some $c$ between $0$ and the point at which $g > 0$. |
If given $\overline {x^2}=0$, is $\bar x=0$? | $\int_V x^2(v)dv = 0 \, \Leftrightarrow \, x(v)=0 \, \forall \, v\in V$ since $x^2(v)\geq0 \, \forall \, v$.
Therefore $\int_V x(v)dv = 0$ as well.
Of course, I assumed $x$ real here. |
Is $B = \{x:x \notin B\}$ a valid paradox in Naive Set Theory? | The basic problem here is that even in naive set theory you have no right to assume that
$$ B = \{ x\mid x\notin B\} $$
works as the definition of $B$. Since the letter $B$ appears in the expression that supposedly defines its meaning, what you have is not a definition, but merely an equation that you want $B$ to be a solution of. And there's nothing paradoxical about the fact that this equation happens not to have any solution, any more than it is paradoxical that
$$ x=x+1 $$
fails to define a number $x$.
Even if we look at the non-contradictory case,
$$ C = \{ x \mid x \in C \} $$
that too fails to define anything because it just says that $C$ equals itself, which does not single out any particular set among all the other sets that also equal themselves. Again, this is not mysterious, but the same as the fact that
$$ y = 1\cdot y $$
fails to define a number $y$. |
Radon Nikodym derivative and density | Does the following hold?
$$
f(x)=\lim_{\mu(E)\to 0, x\in E \in M} \frac{F(E)}{\mu(E)} \text{ (a.e.)}
$$
In general, no. In most cases that limit fails to exist.
When something like it does hold, it is known as a derivation theorem. The classical reference on derivation theorems is:
Hayes, C. A.; Pauc, C. Y., Derivation and martingales, Ergebnisse der Mathematik und ihrer Grenzgebiete. 49. Berlin-Heidelberg-New York: Springer-Verlag. VII, 203 p. (1970). ZBL0192.40604.
A simple example is where $\mu$ is Lebesgue measure on the real line. A condition which does work in that case:
replace "$x \in E \in M$" with "$E$ is an interval with midpoint $x$". |
Showing $\mathbb Z[x]/(5,x^3+x+1)$ is a field | The first step is to show that
$$
\frac{{\Bbb Z}[x]}{(5,x^3+x+1)}\simeq\frac{{\Bbb F}_5[x]}{(x^3+x+1)}
$$
For this you can just define the obvious morphism and show that it is injective and surjective ($\Bbb F_q$ denotes the finite field with $q$ elements).
Next, you need to show that $x^3+x+1$ is irreducible in ${\Bbb F}_5[x]$. For, note that any nontrivial decomposition $x^3+x+1=f(x)g(x)$ would give rise to a factor of degree $1$, and polynomials of degree $1$ have roots. Thus, to show irreducibility it is enough to show that $x^3+x+1$ has no roots in $\Bbb F_5$.
I leave the details to you.
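One of those details can even be checked by brute force; this snippet (mine) confirms that $x^3+x+1$ has no roots in $\Bbb F_5$:

```python
print([x for x in range(5) if (x**3 + x + 1) % 5 == 0])   # prints []
```
Since a cubic with no roots in $\Bbb F_5$ has no linear factor, it is irreducible there. |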
Coin game - applying Kelly criterion | On (1) the optimal strategy to maximise the expectation of your final net worth is to bet everything you have all the time. With probability $\frac{1}{2^{100}} \approx 8 \times 10^{-31}$ you will end up with $100 \times 3^{100}\approx 5 \times 10^{49}$; otherwise you end up with nothing. So your expected final net worth with this all-or-nothing strategy is $100 \times \left(\frac32\right)^{100} \approx 4 \times 10^{19}$ despite the overwhelming likelihood that you will end up with nothing; whatever happens, the final outcome will not be close to the expected final outcome.
If instead you bet $\frac14$ of your net worth at each stage as a Kelly Strategy, your expected final net worth is $100 \times \left(\frac98\right)^{100} \approx 1.3 \times 10^{7}$, much less than the all-or-nothing strategy, even though with the Kelly Strategy you are very likely to end up ahead and quite likely to end up with something large.
On (2) the position is reversed. The all-or-nothing strategy gives an expected outcome for $\ln(100+N)$ of $\frac{2^{100}-1}{2^{100}} \times \ln(100)+\frac1{2^{100}}\times \ln(100+100\times 3^{100}) \approx 4.6$, rather less than the $\ln(100+100)\approx 5.3$ expectation if you were not to bet anything ever.
But betting $\frac14$ of your net worth at each stage would give an expected outcome for $\ln(N)$ of $\ln(100) + 100 \times \frac12\left(\ln\left(\frac32\right)+\ln\left(\frac34\right)\right) \approx 10.5$ and then expected outcome for $\ln(100+N)$ would be very close to this. This Kelly Strategy is much better for this expectation than either an all-or-nothing strategy or a never-bet strategy: note that $e^{4.6}-100 \approx 0$ while $e^{5.3}-100 \approx 100$ while $e^{10.5}-100 \approx 36000$.
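A Monte Carlo sketch of the comparison in (2) (my own simulation: fair coin, $2{:}1$ payout, $100$ rounds, starting bankroll $100$):

```python
import math
import random
import statistics

def final_wealth(fraction, rounds=100, start=100.0):
    w = start
    for _ in range(rounds):
        stake = fraction * w
        w += 2 * stake if random.random() < 0.5 else -stake
    return w

random.seed(1)
runs = [final_wealth(0.25) for _ in range(20_000)]
print(statistics.mean(math.log(100 + w) for w in runs))   # ≈ 10.5
```
The sample mean of $\ln(100+N)$ lands near the $10.5$ computed above, while the all-or-nothing strategy almost surely ends at $\ln(100)\approx 4.6$. |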
Triple integral $\iiint_{R} z \ \mathrm{d}V$ in spherical coordinates | This problem is easy in spherical coordinates if you know the trick. The upper bound gives us $R = \sqrt{2}$. Solving the lower bound gives
$$R\sin^2\phi = \cos\phi \implies R\cos^2\phi + \cos \phi -R = 0 $$
then by the quadratic equation we have that
$$\phi = \cos^{-1}\left(\frac{-1+\sqrt{1+4R^2}}{2R}\right)$$
where we took the positive root since we are above the $xy$ plane where $\cos\phi > 0$. Next we set up the integral with the $\phi$ integral first:
$$I = \int_0^{2\pi} \int_0^{\sqrt{2}} \int_0^{\cos^{-1}\left(\frac{-1+\sqrt{1+4R^2}}{2R}\right)}R^3\cos\phi\sin\phi \:d\phi \: dR \:d\theta$$
$$ = \frac{\pi}{2} \int_0^{\sqrt{2}}R\sqrt{1+4R^2}-R\:dR = \frac{\pi}{2}\left[\frac{1}{12}(1+4R^2)^{\frac{3}{2}}-\frac{1}{2}R^2\right]_0^{\sqrt{2}} = \frac{7\pi}{12}$$
Although not as clean as the polynomial that appears in the cylindrical coordinates version, this trick is useful to remember for similar integrals, since the integrand will determine whether this method or cylindrical coordinates is the optimal route to take. For other integrals, such as $\iiint_R dV$, this version will be easier than cylindrical coordinates.
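A Monte Carlo check of the value (my reconstruction of the region from the bounds above: inside the sphere $x^2+y^2+z^2=2$ and above the paraboloid $z=x^2+y^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
x = rng.uniform(-1, 1, n)
y = rng.uniform(-1, 1, n)
z = rng.uniform(0, np.sqrt(2), n)
inside = (x**2 + y**2 + z**2 <= 2) & (z >= x**2 + y**2)
box_volume = 2 * 2 * np.sqrt(2)
print(box_volume * np.mean(z * inside), 7 * np.pi / 12)
```
The estimate agrees with $7\pi/12\approx 1.8326$ to within Monte Carlo error. |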
Applying Fermat’s Little Theorem to $g^\frac{p-1}{2}$ (mod $p$). | Let $a = 2k+1$.
$$(g^a)^{\frac{p-1}{2}} = g^{a\frac{p-1}{2}} = g^{(p-1)\frac{2k+1}{2}} = g^{(p-1)(k+\frac{1}{2})} = g^{(p-1)k}\cdot g^{\frac{p-1}{2}} \stackrel{*}{=} g^{\frac{p-1}{2}} \pmod{p}$$
where Fermat's Little Theorem is used at $*$. |
Automorphism of the unit disk that fixes a point | Assume $b \ne 0$, as otherwise the problem is trivial.
Take $\psi_1(z)=\frac{z+b}{1+\bar b z}$. Clearly $\psi_1(0)=b$. Now take $\psi_2(z)=\alpha\frac{z-b}{1-\bar b z}, |\alpha|=1, \alpha \ne 1$. Clearly $\psi_2(b)=0$ so $\phi=\psi_1 \circ \psi_2$ satisfies $\phi(b)=b$
Since $\psi_2(0)=-\alpha b \ne -b$ and $\psi_1$ is injective with $\psi_1(-b)=0$, we get $\phi(0) \ne 0$, so $\phi$ is not the identity. |
Image of closed ball under degenerate integral operator is a closed set | [Edit: This answer applies to an earlier version of the question, asking if an operator of the form $Kf(x) = \int_0^1 k(x,y) f(y)\,dy$ , where $k(x,y) = \sum_{i=1}^n a_i(x) b_i(y)$ with $a_i, b_i \in C([0,1])$, necessarily maps the unit ball of $C([0,1])$ to a closed subset of $C([0,1])$.]
I believe this is not true.
Take $k(x,y) = \cos(\pi y)$. Then $Kf$ is a constant function identically equal to $\int_0^1 f(y) \cos(\pi y)\,dy$. For $\|f\| \le 1$, this constant can be any number in $(-2/\pi, 2/\pi)$. (To get close to $2/\pi$, take a piecewise linear function equal to $1$ on $[0, \frac{1}{2}-\epsilon]$ and equal to $-1$ on $[\frac{1}{2}+\epsilon, 1]$). But the constant cannot equal $2/\pi$, since by continuity of $f$, either $f$ is strictly less than 1 on some interval in $[0,1/2]$, or strictly greater than -1 on some interval in $[1/2,1]$. So in fact the image of the unit ball under $K$ is not closed.
As mentioned by others, the Exercise 5.6 you cite does not apply here, since $C([0,1])$ is not a Hilbert space.
So if the special case addressed in the revised question ($k(x,y) = x^2 + 2xy+y^2$) is in fact true, it will need to use something special about this particular function $k$. |
Finding $\frac{dy}{dx}$ given $y= \frac{ \sin x + x^2 }{ \cot 2x}$ | Hint: the Quotient Rule is $\displaystyle\frac{\mathrm{d}}{\mathrm{d}x}\frac uv=\frac{u'v-uv'}{v^2}$
Using the Quotient Rule and $\cot(x)=\frac1{\tan(x)}$ along with the usual derivatives of the trig functions should help. |
Best integer solution to overdetermined linear system with full column rank | Problem statement
Find the least squares solutions for the following linear system
$$
\begin{array}{cccc}
\mathbf{A} & x & = & b \\
\begin{bmatrix}
2 & 1 \\
1 & 1 \\
3 & 1 \\
\tfrac{1}{3} & 1
\end{bmatrix}
&
\begin{bmatrix}
N \\ M
\end{bmatrix}
&=&
\begin{bmatrix}
8 \\
6 \\
5 \\
4
\end{bmatrix}
&
\end{array}
$$
The system has $m=4$ rows, $n=2$ columns. The columns are linearly independent, therefore the matrix rank $\rho=2$.
Conclusions: This is an overdetermined system $(m>n)$ with full column rank $(\rho=n)$. The solution will be unique.
Solution
For the overdetermined problem, form the new linear system by premultiplying both sides of the original system by the adjoint matrix $\mathbf{A}^{*}$.
Normal equations
$$\begin{array}{cccc}
\phantom{\tfrac{1}{9}}\mathbf{A}^{*}\mathbf{A} & x & = &
\phantom{\tfrac{1}{3}}\mathbf{A}^{*} b\\
\tfrac{1}{9}
\begin{bmatrix}
127 & 57 \\
57 & 36
\end{bmatrix}
&
\begin{bmatrix}
N \\ M
\end{bmatrix}
& =
&
\tfrac{1}{3}
\begin{bmatrix}
115 \\
69
\end{bmatrix}
\end{array}$$
Least squares solution
$$\begin{array}{cccccc}
x_{LS} & =
& \phantom{\tfrac{1}{147}}\left( \mathbf{A}^{*}\mathbf{A} \right)^{-1}
& \phantom{\tfrac{1}{3}}\mathbf{A}^{*}b \\
\begin{bmatrix}
N \\ M
\end{bmatrix}
& = & \tfrac{1}{147}
\begin{bmatrix}
36 & -57 \\
-57 & 127
\end{bmatrix}
& \tfrac{1}{3}
\begin{bmatrix}
115 \\
69
\end{bmatrix}
& =
& \tfrac{1}{147}
\begin{bmatrix}
69 \\
736
\end{bmatrix}
& \approx
& \begin{bmatrix}
0.469388 \\
5.0068
\end{bmatrix}
\end{array}$$
Error
The minimum error for this problem is
$$
\lVert \mathbf{A}x_{LS}-b \rVert_{2}^{2} =
\frac{1154}{147}
\approx 7.85034
$$
The uncertainty in the fit parameters is
$$\sigma^{2}=\frac
{\lVert \mathbf{A}x_{LS}-b \rVert_{2}^{2}}
{m-n}
\text{diagonal }
\left( \mathbf{A}^{*}\mathbf{A} \right)^{-1}
= \tfrac{1}{21\,609}
\begin{bmatrix}
20\,772 \\
73\,279
\end{bmatrix}
\approx
\begin{bmatrix}
0.961266 \\
3.39113
\end{bmatrix}
$$
The final answer, rounded to significant digits, is
$$
\boxed{
x_{LS} =
\begin{bmatrix}
0.46 \\
5.0
\end{bmatrix}
\pm
\begin{bmatrix}
0.98 \\
1.8
\end{bmatrix}}
$$
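As a cross-check, numpy reproduces both the solution and the minimum error (the last row of $\mathbf{A}$ encodes the equation $\tfrac13 N + M = 4$):

```python
import numpy as np

A = np.array([[2, 1], [1, 1], [3, 1], [1/3, 1]])
b = np.array([8, 6, 5, 4])
x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x)         # [0.46938776 5.00680272] = (69/147, 736/147)
print(residual)  # [7.85034014]            = 1154/147
```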
Visualizations
Merit function
The merit function minimized by the least squares process is
$$
\chi^{2}\left( N, M\right)=
\Bigg\Vert
\mathbf{A}
\begin{bmatrix}
N \\
M
\end{bmatrix}
- b
\Bigg\rVert_{2}^{2},
$$
and is plotted below. The gray, elliptical lines are level sets. The blue circles are 1, 2, and 3 $\sigma$ curves.
Data vector
The graph below offers the perspective of signal to noise. The data vector $b$ is resolved into $\color{blue}{range}$ (signal) and $\color{red}{null}$ space (noise) components. The vector formula is
$$
b = \color{blue}{b_{R(A)}} + \color{red}{b_{N(A^{*})}}
$$
The Pythagorean equality is
$$
\begin{array}{ccccc}
\lVert b \rVert^{2} &= &\color{blue}{\lVert b_{R(A)} \rVert^{2}} &+ &\color{red}{\lVert b_{N(A^{*})} \rVert^{2}}\\
141 &= & 133.15 & + & 7.85
\end{array}
$$
Integer solution
For convenience, a set of points marking integer locations is superimposed on the merit function. This provides a feel for how quickly the merit function changes as $N$ and $M$ experience unit changes.
The results are
$$
\begin{array}{ccc}
N & M & \chi^{2}\left( N, M \right) \\\hline
0 & 4 & 21 \\
0 & 5 & 11 \\
0 & 6 & 9 \\
1 & 4 & 9.11111 \\
1 & 5 & 11.7778 \\
1 & 6 & 22.4444 \\
\end{array}
$$
The best integer answer is $(0,6)$ where the error is
$$
\chi^{2}\left( 0, 6\right)= 9.
$$
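The table is easy to reproduce by brute force over the candidate grid:

```python
import numpy as np

A = np.array([[2, 1], [1, 1], [3, 1], [1/3, 1]])
b = np.array([8, 6, 5, 4])
for N in (0, 1):
    for M in (4, 5, 6):
        chi2 = np.sum((A @ np.array([N, M]) - b) ** 2)
        print(N, M, round(chi2, 5))
```
The minimum over these candidates is indeed $\chi^{2}(0,6)=9$. |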
the number of permutations of all the letters of the word STATISTICIAN | If it begins with $C$ and ends with $N$, then the remaining letters are:
STATISTIIA
That is $\{S\cdot 2, T\cdot 3, A\cdot 2, I\cdot 3\}$
So, we are looking for the number of permutations of this multiset, which is:
$$\dfrac{10!}{2!3!2!3!} = 25200$$
Your answer in the comments is correct. Your problem the first time you did it is that you fixed the S and the N, rather than the C and the N.
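The multiset permutation count, computed directly:

```python
from math import factorial

print(factorial(10) // (factorial(2) * factorial(3) * factorial(2) * factorial(3)))
# 25200
```
which matches the $25200$ above. |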
Complex analysis exercises | The function $g(z)= \frac{p(\frac{1}{z})}{q(\frac{1}{z})}$ has a removable singularity at $0$, since $\lim_{z \rightarrow \infty }\frac{p(z)}{q(z)}$ exists. Furthermore, since $p(z)$ and $q(z)$ have all their zeros inside $|z|=1$, the functions $g(z)$ and $\frac{1}{g(z)}$ are analytic in $|z|<1+\epsilon$ for some $\epsilon >0$. The condition $|p(z)|=|q(z)|$ for $|z|=1$ means $|g(z)|=1$ on $|z|=1$, so by the maximum principle both functions $\left(|g(z)|, \left|\frac{1}{g(z)}\right|\right)$ are bounded by $1$ for $|z| \leq 1$; hence $|g(z)|=1$ for $|z| \leq 1$ and you are done. |
Showing that $\overline{\mathbb{Z}}/n\overline{\mathbb{Z}}$ is infinite | I don't know about the idea behind the hint (it may be something interesting), but I would choose a simpler set, namely
$$S=\{\sqrt{p}\mid p\text{ is a prime natural number}\}.$$
First of all, each element of $S$ lies inside the integral closure of $\mathbb{Z}$ in $\mathbb{C}$, also called the ring of algebraic integers, because it satisfies a monic polynomial with integer coefficients, $f_p(x)=x^2-p\in\mathbb{Z}[x]$. Now I show that for each pair of distinct prime numbers $p$ and $q$, the classes of $\sqrt{p}$ and $\sqrt{q}$ are distinct in the ring $\frac{\bar{\mathbb{Z}}}{n\bar{\mathbb{Z}}}$ when $n$ is an integer not equal to $\pm 1$ or $0$. To show that two classes are different, you should show that the difference of their representatives doesn't belong to the ideal. If $\sqrt{p}-\sqrt{q}\in n\bar{\mathbb{Z}}$, then there should exist an algebraic integer $a$ such that $na=\sqrt{p}-\sqrt{q}$. Now the trick is to play with the minimal polynomials of the two sides. First let's play with the right side.
$$\begin{array}{lll}
x=\sqrt{p}-\sqrt{q} & \Longleftrightarrow & x^2=p+q-2\sqrt{pq}\\
& \Longleftrightarrow & x^2-p-q=-2\sqrt{pq}\\
& \Longleftrightarrow & x^4-2(p+q)x^2+(p-q)^2=0
\end{array}$$
Note that $x^4-2(p+q)x^2+(p-q)^2\in\mathbb{Z}[x]$ and it has exactly four complex roots (which are also reals), $\pm(\sqrt{p}+\sqrt{q})$ and $\pm(\sqrt{p}-\sqrt{q})$.
Now for the left side. If the minimal polynomial of $a$ is $f(x)$ (which is in $\mathbb{Z}[x]$ and is monic), then for an integer $n$, $na$ satisfies $n^{\deg(f)}f(\frac{1}{n}x)$, and this new polynomial is in $\mathbb{Z}[x]$ and monic (check for yourself why; we use $n\neq 0$ here). Besides, because $na=\sqrt{p}-\sqrt{q}$, $na$ satisfies $x^4-2(p+q)x^2+(p-q)^2=0$, which implies that the minimal polynomial of $na$ divides this polynomial.
$$n^{deg(f)}f(\frac{1}{n}x)\mid x^4-2(p+q)x^2+(p-q)^2.$$
This tells us that the degree of the left polynomial above is $1$, $2$, $3$ or $4$. Note that the degree of the left polynomial is the same as the degree of $f$. If $\deg(f)=1$, we get a contradiction with the fact that the roots of the right side polynomial are not integers. If $\deg(f)=2$, we get a contradiction with the fact that $\sqrt{pq}$ is not an integer. If $\deg(f)=3$, then the right side polynomial has to have a linear factor over $\mathbb{Z}$, which again contradicts the fact that none of its roots are integers. If $\deg(f)=4$, then the right side polynomial should be an integer multiple of the polynomial on the left; however, both are monic, thus they should be equal. This implies that $na$ should be one of the four roots of the right side polynomial. Calling the right side polynomial $g(x)$, we can compute $f(x)$ (the minimal polynomial of $a$):
$$f(x)=\frac{1}{n^4}g(nx)=x^4+\frac{p+q}{n^2}x^2+\frac{(p-q)^2}{n^4}$$
This is in $\mathbb{Z}[x]$ only for $n=\pm 1$, which contradicts our assumption that $n\neq \pm 1$. (Note that if $n^2$ divides both $p+q$ and $p-q$, then $n^2$ divides $2p$ and $2q$ as well. This implies $n$ divides $p$ and $q$, and hence their g.c.d., which is $1$.)
Therefore there is no such algebraic integer $a$, and we have proved that for $n\in\mathbb{Z}-\{-1,0,1\}$ the elements of $S$ give distinct classes in the residue ring $\frac{\bar{\mathbb{Z}}}{n\bar{\mathbb{Z}}}$; since $S$ is an infinite set (because there are infinitely many primes), the main claim of your question is proved.
There are three cases we didn't cover, namely $n=-1$, $n=0$ and $n=1$. For $n=0$ we have $n\bar{\mathbb{Z}}=\{0\}$, so the residue ring is the ring of algebraic integers itself, which is clearly infinite (for example, it contains $\mathbb{Z}$). For $n=\pm 1$, the ideal $n\bar{\mathbb{Z}}$ is the whole ring, and thus the residue ring is a singleton $\{0\}$ with the obvious structure. This case is clearly not infinite, so it has to be excluded from the statement of your question. |
Find $a$ if $\lim_{x\to a} \frac{a^x-x^a}{x^x-a^a}=-1$ | You're almost there. Now, just substitute $x\to a$ and solve
$$\frac{a^a\ln a-a^a}{a^a(\ln a+1)}=-1\\ \implies a=1$$ |
What does $\mathcal{S}_{++}$ mean? | $\mathcal S^p_{++}$ is the set of all real $p \times p$ symmetric positive definite matrices. $\mathcal S^p_{+}$ is the set of all real $p \times p$ symmetric positive semidefinite matrices. |
Classify all alphabets into homeomorphism classes $\{ M N B H\}$, | I'll describe a general approach here, using for illustration purposes the serif font that appears in the question. We are given compact connected Hausdorff spaces, also known as continua. A point $x$ of continuum $C$ is a cut point if $C\setminus\{x\}$ is not connected. The number of components of $C\setminus\{x\}$ is the order of cut point $x$. The cardinality of the set of cut points of order $n$ is a topological invariant of the space.
All of our continua have infinitely many cut points of order 2, so that does not help much (although we could look at the structure of the set of all such cut points, e.g., the number of its components). Let's count the cut points of order 3: $M$ has 4, $N$ has $3$, $B$ has none and $H$ has $6$. Hence, none of them are homeomorphic in the serif version.
In addition, $B$ is distinguished from the rest by having infinitely many non-cut points. It's a very neat theorem of general topology that every nondegenerate continuum has at least two non-cut points. |
Determine maximum value of direction derivative | You know that the directional derivative achieves its maximum in the direction of the gradient. So you know that it will be in the direction of $2i+3j+8k$. You want to scale that up by 3 to get the gradient, since the second component of the gradient is $f_y =9$. Thus the gradient is $6i+9j+24k$. The maximum of the dot product of the gradient with a unit vector is the magnitude of the gradient, which is $\sqrt{6^2+9^2+24^2}=\sqrt{693}$. |
Proof of coset/subgroup property | They're trying to find cosets in which the identity from $H$ will not be present.
As a subgroup of $G$, "the identity of $H$" is the identity of $G$, namely $e$. Remember, every subgroup is a group in its own right, but also contains the identity of the larger group (as its own identity element).
The second line is based (sic)off of the first line.
Indeed, it is. I'll explain both lines, starting from the first.
Note that $e \in H$, so by property $1$ we have $eH = H$. Next, if $L$ is any other coset, then by property $2$, we have $L \cap H = \emptyset$. In other words, $e \notin L$. This is what the second line is saying in words : " no coset of $H$ besides $H$ contains $e$, the unique identity of $G$".
"It seems line they are magically jumping to it".
As I mentioned earlier, every subgroup of $G$ contains the identity element of $G$, so every coset other than $H$ doesn't contain the identity and therefore cannot be a subgroup!
This leaves $H$ as the only option, but we know that it is a subgroup, so the conclusion is that it is the only coset that is a subgroup as well. |
How to find $\operatorname{Im}(T)$ of $T:M_{2x2} \to\mathbb R^3$ | Yes, you are doing the right thing. You can say more. If, for example, three of the vectors in your span are linearly independent, then they will span all of $\mathbb{R}^3$. (By $\mathbb{R}^3$ I mean what I think you have denoted $\mathbb{R}_3$.)
Are, for example,
$$
(1,1,1)\quad\text{and}\quad (1,0,-1)\quad\text{and}\quad (-1,0,-1)
$$
linearly independent? |
$AA^t=BB^t \implies A=B$ | Warning and Notes: I wrote the answer for real matrices and transpose $(\cdot)^t$. One can simply adapt it for complex matrices and complex conjugate transpose. However, complex matrices with only the transpose are a completely different story...
In the question it is not clear, if the condition is on $(A,B)$, that is some logical formula $P(A,B)$, or the same condition is on $A$ and $B$, that is some logical formula $P(A) \land P(B)$. For the second case, there is no "solution" for $n>1$ (cf @ErickWong's comment to the question).
Now the results...
Claim 1: The condition "$A,B$ are symmetric and definite" is sufficient to make
$$ AA^t = BB^t \implies A=B \lor A=-B$$
true.
Proof: First assume $A$ is positive definite, and let $R=A^{1/2}$ be its (symmetric) positive definite square root. Then, $ AA^t = BB^t $ implies $I = A^{-1}B (A^{-1}B)^t$, that is, $M = A^{-1}B$ is orthogonal. But $M$ is similar to $RMR^{-1} = R^{-1}BR^{-1}$, which is definite. That is, $M$ is either $I$ (in case $B$ is positive definite), or $-I$ (in case $B$ is negative definite). But that means $A = B$ or $A = -B$.
Now, assume $A$ is negative definite. Then, $\tilde A = -A$ is positive definite. Thus, by previous paragraph, we have $-A = B$ or $-A = -B$.
Claim 2: Let
$ M_0 = \{ A\in\mathbb R^{n\times n} : |\det A| = 1 \}, $
$ M_1 = \{ A\in M_0 : A \text{ symm. and definite} \}, $
and
$M_2 \subseteq M_0$ containing $M_1$ with the property
$\forall A,B\in M_2: [AA^t = BB^t \implies A = B \lor A=-B]$.
Then, $M_2 = M_1$.
Proof: Let $A\in M_2$. From $\tilde A = (AA^t)^{1/2}\in M_1 \subseteq M_2$ follows $A = \tilde A$ or $A = -\tilde A$. That is, $A$ is already in $M_1$. (thanks to @ErickWong) |
Integration of $\displaystyle \int\frac{1}{1+x^8}\,dx$ | Why not split it up into partial fractions until you have first degree polynomials in the denominators?
$$\frac{1}{1+x^8}=\frac{A}{x-e^{i\pi/8}}+\frac{B}{x-e^{-i\pi/8}}+\frac{C}{x-e^{i3\pi/8}}+\frac{D}{x-e^{-i3\pi/8}}+\frac{E}{x-e^{i5\pi/8}}+\frac{F}{x-e^{-i5\pi/8}}+\frac{G}{x-e^{i7\pi/8}}+\frac{H}{x-e^{-i7\pi/8}}$$
or if you prefer without the complex numbers
$$\frac{1}{1+x^8}=\frac{ax+b}{x^2-2\cos(\pi/8)x+1}+\frac{cx+d}{x^2-2\cos(3\pi/8)x+1}+\frac{ex+f}{x^2-2\cos(5\pi/8)x+1}+\frac{gx+h}{x^2-2\cos(7\pi/8)x+1} \; .$$
With the complex formula, you can find the coefficients easily as follows
$$A=\lim_{x \to e^{i\pi/8}}\frac{x-e^{i\pi/8}}{1+x^8}\overset{\text{H}}{=}\lim_{x \to e^{i\pi/8}}\frac{1}{8x^7}=\frac{e^{-i7\pi/8}}{8}$$
where I used de l'Hôpital's rule. |
Weakly open sets are unbounded | The proposed map can't be an injection because we can't inject an infinite-dimensional space into a finite-dimensional space. In particular, injective maps preserve linear independence, so mapping $n + 1$ linearly independent vectors from $E$ (which is possible to find, since $E$ is infinite-dimensional) will produce $n + 1$ vectors in $\Bbb{R}^n$, which cannot be linearly independent, contradicting injectivity.
Now, every weakly open set contains a basic weakly open set, i.e. one of the form:
$$\mathcal{U} = \{x \in E : |f_i(x) - \alpha_i| < \varepsilon_i \text { for } i = 1, \ldots, n\}.$$
Let $K$ be the previously considered kernel. Note that, if $y \in K$, and $x \in \mathcal{U}$, then $x + y \in \mathcal{U}$. But $K$ is a non-trivial subspace, and hence is unbounded, and so we can make $x + y$ as large as we like, thus proving $\mathcal{U}$ (and hence the weakly open set) is unbounded too. |
How can a manifold be Hausdorff and have an atlas where coordinate charts intersect? | Hausdorff tells you that you can always find non-intersecting open sets under certain conditions (namely, around two distinct points). It doesn't say that all open sets are non-intersecting. |
Can a Free product with amalgamation of $\mathbb{Z}*\mathbb{Z}$ be isomorphic to $\mathbb{Z}\times \mathbb{Z}$? | Is there a free product with amalgamation of $\Bbb Z*\Bbb Z$ isomorphic to the direct product of $ \mathbb{Z} $ and $ \mathbb{Z} $?
No.
One way to look at $\Bbb Z\ast \Bbb Z$ is as the group given by the presentation
$$\langle a,b\mid \varnothing \rangle.\tag{1}$$ Similarly, $\Bbb Z\times\Bbb Z$ is given by
$$\langle a,b\mid ab=ba\rangle.\tag{2}$$
In order to go from $(1)$ to $(2)$ via a free product with amalgamation, we must be able to write
$$\Bbb Z\ast_{H=K}\Bbb Z\cong\langle a\mid \varnothing \rangle \ast_{w(a)=\widetilde{w}(b)}\langle b\mid \varnothing \rangle$$
in the form of $(2)$,
where $w(a)$ is some word over the alphabet $\{a\}$ (so, a power of $a$) that generates some subgroup $H\le \Bbb Z\cong\langle a\mid\varnothing \rangle $; similarly for $\widetilde{w}(b)$, $\{b\}$, and $K$.${}^\dagger$
But subgroups of cyclic groups are themselves cyclic.
Also, notice that $ab=ba$ in $(2)$ cannot be written as $a^h=b^k$ for any integers $h,k$.
For more on the topic, see Magnus et al.'s, "Combinatorial Group Theory: [. . .]".
I hope this helps :)
$\dagger$: As pointed out in the discussion in the Group Theory chat room, if I have understood the OP's question sufficiently, the case when the subgroup $H=K$ one amalgamates with is trivial needs covering. In that case, $$\Bbb Z\ast_{\{e\}}\Bbb Z=\Bbb Z\ast \Bbb Z.$$ |
Mathematical notation - cummulative summation | Here is how Accumulate works: Accumulate[{a, b, c, d}] produces
{a, a + b, a + b + c, a + b + c + d}
Thus, the mth entry of the list Accumulate[Table[MoebiusMu[k], {k, 1, n}]] is
$$M(m)=\sum_{k=1}^m\mu(k)$$
so that Mean[Accumulate[Table[MoebiusMu[k], {k, 1, n}]]], the average of the n entries on this list, is just the average of the first $n$ values of the Mertens function $M(k)$:
$$\frac{1}{n}\left(\sum_{k=1}^nM(k)\right)\qquad$$
and then the outer Table just makes a list of these values from n=1 to n=x.
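For readers without Mathematica, here is a self-contained Python translation of the same pipeline (my own code; the Möbius function is hand-rolled so nothing external is needed):

```python
from itertools import accumulate

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # a squared prime factor gives mu(n) = 0
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

n = 20
M = list(accumulate(mobius(k) for k in range(1, n + 1)))  # Mertens values M(1..n)
print(sum(M) / n)  # Mean[Accumulate[Table[MoebiusMu[k], {k, 1, n}]]]
```
For a given n this prints the same value the Mathematica expression gives. |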
Why is the eigenvector matrix $S$ invertible? | If an $n×n$ matrix has rank $n$, it is invertible.
We are assuming we have a basis consisting of eigenvectors. Hence they are linearly independent and the matrix of eigenvectors has rank $n$. |
Meaning of fundamental interval in Fourier series | Usually the fundamental interval is the smallest period of the given function, so in this case $[0,\pi]$. |
Can formal grammar (language) serve as the model for some logic and express the semantics of this logic? | Can the formal grammar serve as the model for some logic? E.g. is it possible to express both the syntax of logic with one grammar and semantics for the same logic with other grammar?
If I understand you correctly, then yes, a formal grammar can serve as the model for some logic. Usually a logic's syntax is defined up front using a formal grammar in BNF form, such as this pseudo formal grammar for propositional logic:
\begin{align}
Sentence &\to AtomicSentence \mid ComplexSentence\\
AtomicSentence &\to True \mid False \mid P \mid Q \mid R \mid ...\\
ComplexSentence &\to (Sentence) \mid Sentence\ Connective\ Sentence \mid \neg Sentence\\
Connective &\to\ \land\ \mid\ \lor\ \mid\ \Rightarrow\ \mid\ \Leftrightarrow
\end{align}
If you want to specify the semantics of the logic, you can try using Operational Semantics or Matching Logic.
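As a toy illustration of a grammar used generatively (my own sketch, with a depth cap so the recursion terminates), one can sample well-formed sentences from exactly this BNF:

```python
import random

GRAMMAR = {
    "Sentence": [["AtomicSentence"], ["ComplexSentence"]],
    "AtomicSentence": [["True"], ["False"], ["P"], ["Q"], ["R"]],
    "ComplexSentence": [["(", "Sentence", ")"],
                        ["Sentence", "Connective", "Sentence"],
                        ["¬", "Sentence"]],
    "Connective": [["∧"], ["∨"], ["⇒"], ["⇔"]],
}

def generate(symbol, depth=0):
    if symbol not in GRAMMAR:              # terminal symbol
        return symbol
    rules = GRAMMAR[symbol]
    if depth > 4:                          # cap the depth to force termination
        rules = rules[:1]
    return " ".join(generate(s, depth + 1) for s in random.choice(rules))

random.seed(3)
print(generate("Sentence"))
```
Semantics would then be a separate map from these generated sentences to truth values. |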
Given $\phi$ a mapping. Prove that for each $\mathit{i}$, $\sum_{j=1}^n \partial_{x_j}(\mathbf{cof} \mathit{D} \phi)_{ji} \equiv 0$ | 1st Proof. By rearranging the components of $\phi = (\phi_1, \cdots, \phi_n)$ if needed, we may assume that $i = 1$. Let $S_n$ denote the set of all permutations on $\{1,\cdots,n\}$. In light of the cofactor expansion of the determinant,
$$ \sum_{j=1}^{n} \partial_{x_j}(\mathbf{cof}D\phi)_{j1} = \sum_{j=1}^{n} (-1)^{1+j} \partial_{x_j} M_{1j} $$
(where $M_{1j}$ is the minor of $D\phi$ obtained by deleting the first row and the $j$th column) is equal to the formal determinant
$$ \det(\nabla, \nabla \phi_{2}, \dots, \nabla \phi_n) = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \partial_{x_{\sigma(1)}} \prod_{k : k \neq 1} \partial_{x_{\sigma(k)}} \phi_k. $$
By the product rule,
$$ = \sum_{j=2}^{n} \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) (\partial_{x_{\sigma(1)},x_{\sigma(j)}}\phi_j) \prod_{k : k \neq 1, j} \partial_{x_{\sigma(k)}} \phi_k. $$
Now by grouping the terms according to the value of the list $(\sigma(k))_{k\neq 1,j}$, it is not hard to see that each group contains exactly two terms which cancel each other out. Therefore the overall sum is also zero.
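For $n=2$ the whole computation can be checked symbolically (my own verification code; writing $C_{ij}$ for the cofactor of $(D\phi)_{ij}=\partial\phi_i/\partial x_j$, the identity reads $\sum_j \partial_{x_j} C_{ij}=0$ and reduces to equality of mixed partials):

```python
import sympy as sp

x, y = sp.symbols('x y')
phi1 = sp.Function('phi1')(x, y)
phi2 = sp.Function('phi2')(x, y)

# cofactors of the Jacobian [[phi1_x, phi1_y], [phi2_x, phi2_y]]
C = sp.Matrix([[sp.diff(phi2, y), -sp.diff(phi2, x)],
               [-sp.diff(phi1, y), sp.diff(phi1, x)]])
for i in range(2):
    print(sp.simplify(sp.diff(C[i, 0], x) + sp.diff(C[i, 1], y)))  # 0 and 0
```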
2nd Proof. Here is a solution using Stokes' Theorem:
The identity is equivalent to
$$ \forall \varphi \in C_c^{\infty}(\mathbb{R}^n) \ : \qquad \int_{\mathbb{R}^n} \varphi \sum_{j=1}^{n} \partial_{x_j}(\mathbf{cof}D\phi)_{ji} = 0. $$
Writing $I$ for the integral on the left-hand side, integration by parts shows that
$$ I = -\int_{\mathbb{R}^n} \sum_{j=1}^{n} (\partial_{x_j}\varphi)(\mathbf{cof}D\phi)_{ji}
= -\int_{\mathbb{R}^n} \det(D\tilde{\phi}), $$
where $\tilde{\phi}$ is the function whose $i$-th component is replaced by $\varphi$ and the last line follows from the cofactor expansion of the determinant along the $i$-th row. Assuming WLOG that $i = 1$, this means that
$$\tilde{\phi} = (\varphi, \phi_2, \phi_3, \dots, \phi_n).$$
Now fix a closed ball $\overline{B}$ large enough so that the support of $\varphi$ lies in the interior of $\overline{B}$. Then
$$ I = -\int_{\overline{B}} d\varphi \wedge d\phi_2 \wedge \dots \wedge d\phi_n = -\int_{\overline{B}} d (\varphi \, d\phi_2 \wedge \dots \wedge d\phi_n). $$
So, by Stokes' Theorem,
$$ I = -\int_{\partial \overline{B}} \varphi \, d\phi_2 \wedge \dots \wedge d\phi_n = 0 $$
since $\varphi \equiv 0$ on $\partial\overline{B}$. This proves the desired claim. |
Give counterexample for $\forall x (P(x) \rightarrow Q(x)), \exists x(P(x)) \vdash \forall xQ(x)$ | 1) $\forall x ((x > 0) \rightarrow (x > 0))$ --- is the generalization of a tautology; thus, it is valid
2) $\exists x (x > 0)$ --- it is true in $\mathbb N$
3) $\forall x(x > 0)$ --- it is false in $\mathbb N$, because not $0 > 0$.
Of course, we can "shrink" it to a model $\mathcal M$ with $|\mathcal M |= \{ 0,1 \}$. |
Convert SOCP from quadratic form to generalized inequality form | You can find such notation and how it relates to the one you are used to in the classic book
Ben-Tal, Aharon, and Arkadi Nemirovski. Lectures on modern convex optimization: analysis, algorithms, and engineering applications. Vol. 2. Siam, 2001.
or more in brief in
http://docs.mosek.com/generic/modeling-a4.pdf
You basically introduce variables so that
$$ ||Ax - b|| \leq Cx +d $$
becomes
$$|| y || \leq z$$
$$z= Cx +d, y= Ax -b$$
and then you get $0\leq_K (y,z)$.
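In a modeling tool this introduction of variables happens behind the scenes; a minimal sketch in cvxpy (my own toy data; `cp.SOC(t, u)` encodes $\lVert u\rVert_2 \le t$, i.e. exactly $0 \leq_K (y,z)$):

```python
import cvxpy as cp
import numpy as np

A, b = np.eye(3), np.zeros(3)
C, d = np.ones(3), 1.0
x = cp.Variable(3)
constraints = [cp.SOC(C @ x + d, A @ x - b)]   # ||Ax - b|| <= Cx + d
prob = cp.Problem(cp.Minimize(cp.sum(x)), constraints)
prob.solve()
print(prob.value)
```
Here the solver receives the constraint directly in the generalized-inequality form. |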
Shape operator and principal curvature | The shape operator $S$ defines a quadratic form on the tangent space $T_p(M)$, the second fundamental form $\langle Sv, v \rangle$. If $e_1, e_2$ are unit eigenvectors of $S$ associated to the eigenvalues $\kappa_1, \kappa_2$, then write $v = v_1 e_1 + v_2 e_2$. Then it's not hard to see that $\langle Sv, v \rangle = \kappa_1 v_1^2 + \kappa_2 v_2^2$. This implies that $\kappa_1$ is the minimum value of the second fundamental form as $v$ ranges over all unit vectors and $\kappa_2$ is the maximum value.
On the other hand, the normal curvature of a curve $\alpha : I \to M$ parameterized by arc-length at $\alpha(0) = p$ is precisely $\langle S \dot{\alpha}(0), \dot{\alpha}(0) \rangle$. Once you know that associated to every unit vector of the tangent space is a geodesic whose tangent vector is that vector, you're done. |
Stitching together piece of flat space | Perhaps your description of what you wanted to do doesn't match with what you intended. If we go backwards, i.e., start with a right circular cone, and make some squiggly cut in it, from base to apex, and then flatten that out we will get a disk with a wedge with squiggly sides cut out. The two sguiggly sides match up.
Assume that if we did the cut with a straight line, then we'd get a wedge described by the lines $\pm \tilde{\theta}$. Now, we can imagine replacing those straight lines by the squiggly one described by your function. To keep from making a mess, we'd need constraints on the function, corresponding to your squiggly line on the cone not wrapping around and intersecting itself. Say $f'(0) = 0$, and $|f(r)| < \pi r$, but we might have to think about that more. Now we have a wedge cut out by curves described by $\pm \tilde{\theta} + f(r)$. In other words, we're removing the piece according to
$$
-\tilde{\theta} + f(r) \leq \theta \leq \tilde{\theta} + f(r),
$$
which is not the same as you have in your question. If we get it right, and we have a seam with no curvature, then it will be represented by a curve on the cone we get, so it makes sense to go backwards like this. |
Series involving factorial | By the inequalities
$$n!\le 1!+2!+\cdots+n!\le (n-2)(n-2)!+(n-1)!+n!$$
and using the squeeze theorem we see easily that
$$1!+2!+\cdots+n!\sim_\infty n!$$
hence
$$f(n,k)\sim_\infty \frac{1}{(n+k)\cdots(n+1)}$$
hence
if $k=1$ then $f(n,1)\sim\frac1{n+1}$ and then the series is divergent
if $k\ge2$ then $f(n,k)=O\left(\frac1{n^2}\right)$ and then the series is convergent. |
Exercise 2.3 (Prove Approximation lemma by Halmos ) Probability for Statistician by Galen R. Shorack | Whether the proof is "complete" or not depends on what the readers are assumed to already know.
The measure is $\sigma$-finite, but the result only applies to sets $A \in \mathcal{A}$ that has finite measure.
Yes, this lemma means that the "new" sets that we can now measure with $\mu$ can be approximated arbitrarily closely (in the measure sense) by sets that were already in $\mathcal{C}$.
Well, to make the proof easier to understand, a couple of sentences and details can be added, but the sketch is essentially complete. Let me know if you want me to detail the sketch.
Per your request, here is a very detailed proof. We will split the result in two: one for $\sigma[ \mathcal{C}]$ and the other for $ \hat{ \mathcal{A}_\mu}$.
The first result:
Let the $\sigma$-finite measure $\mu$ on the field $ \mathcal{C}$ be extended to $ \mathcal{A}=\sigma[ \mathcal{C}]$, and also refer to the extension as $\mu$. Then for each $A \in \mathcal{A}$ such that $\mu(A)<\infty$, and for each $\epsilon>0$, we have $$\mu(A\triangle C)<\epsilon\text{ for some set } C\in \mathcal{C}.$$
Proof: (to help keep all details visible, we are going to denote the extension of $\mu$ to $\sigma[ \mathcal{C}]$ by $\overline{\mu}$).
Since $\mu$ is a $\sigma$-finite measure, we know that there is a unique extension $\overline{\mu}$ of $\mu$ to $\sigma[ \mathcal{C}]$. So, as a consequence of Carathéodory's theorem, this extension coincides with the restriction of the outer measure $\mu^*$ to $\sigma[ \mathcal{C}]$. So we have, for all $A \in \mathcal{A}$,
$$ \overline{\mu}(A)= \mu^*(A) = \inf \left \{ \sum^\infty_{n=1}\mu(A_n) : \textrm{for all } n , A_n \in \mathcal{C} \textrm{ and } A \subseteq \bigcup^\infty_{n=1}A_n \right \} $$
Now, given any $A \in \mathcal{A}$ and $ \overline{\mu}(A)<\infty$ and given $\epsilon>0$, there is $\{ A_n\}_n$ such that, for all $n$ , $A_n \in \mathcal{C}$, $A \subseteq \bigcup^\infty_{n=1}A_n$ and
$$\overline{\mu}(A) \leqslant \sum^\infty_{n=1}\mu(A_n) < \overline{\mu}(A) + \frac{\epsilon}{2}$$
Take $N_0$ such that $\sum^\infty_{n=N_0+1}\mu(A_n)<\epsilon/2$. Define $C= \bigcup^{N_0}_{n=1}A_n$.
Since $\mathcal{C}$ is a field, it is clear that $C \in \mathcal{C}$ and we have
\begin{equation*}
\begin{split}
\overline{\mu} (A\triangle C) & = \overline{\mu}(A\setminus C)+\overline{\mu}(C\setminus A) \\
& \leqslant \overline{\mu}\left (\bigcup_n A_n\setminus C \right )+\overline{\mu}\left (\bigcup_n A_n\setminus A \right)\\
& =\mu\left (\bigcup_n A_n\setminus C \right )+\overline{\mu}\left (\bigcup_n A_n\setminus A \right)\\
& =\mu\left (\bigcup_n A_n\setminus C \right )+\overline{\mu}\left (\bigcup_n A_n\right) - \overline{\mu}(A) \\
& \leqslant\sum^\infty_{n=N_0+1}\mu(A_n)+ \sum^\infty_{n=1}\overline{\mu}(A_n) - \overline{\mu}(A) \\
& =\sum^\infty_{n=N_0+1}\mu(A_n)+ \sum^\infty_{n=1}\mu(A_n) - \overline{\mu}(A) \\
& =\sum^\infty_{n=N_0+1}\mu(A_n)+ \left ( \sum^\infty_{n=1}\mu(A_n) - \overline{\mu}(A) \right )\\
& < \frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.
\end{split}
\end{equation*}
Now the second result:
Let the $\sigma$-finite measure $\mu$ on the field $ \mathcal{C}$ be extended to $ \hat{ \mathcal{A}_\mu}$ where $ \mathcal{A}=\sigma[ \mathcal{C}]$, and also refer to the extension as $\mu$. Then for each $A \in \hat{ \mathcal{A}_\mu}$ such that $\mu(A)<\infty$, and for each $\epsilon>0$, we have $$\mu(A\triangle C)<\epsilon\text{ for some set } C\in \mathcal{C}.$$
Proof: (to help keep all details visible, we are going to denote the extension of $\mu$ to $\hat{ \mathcal{A}_\mu}$ by $\overline{\mu}$).
Since $\mu$ is a $\sigma$-finite measure, we know that there is a unique extension of $\mu$ to $ \mathcal{A}=\sigma[ \mathcal{C}]$ and so a unique extension $\overline{\mu}$ of $\mu$ to $\hat{ \mathcal{A}_\mu}$. So, as a consequence of Carathéodory's theorem, this extension coincides with the restriction of the outer measure $\mu^*$ to $\hat{ \mathcal{A}_\mu}$.
The rest of the proof is identical to the previous proof, just replacing $\mathcal{A}$ by $\hat{ \mathcal{A}_\mu}$. |
Is there any solution to this equation $1-2x\cos(\theta)+x^2=0$ | You may easily figure out the complex solutions of
$$ (x^n-e^{ni\theta})(x^n-e^{-ni\theta}) = 0.$$ |
Determine whether $f$ is continuous or discontinuous from the right or left at $x_0=0$ of $f(x)= 1$ if $x=0$, $ x \sin \frac{1}{x}$, otherwise | We know that the value of $\sin x$ is at most $1$ and at least $-1$:
$$-1 \leq \sin x \leq 1$$
So the same applies to
$$-1 \leq \sin \frac{1}{x} \leq 1$$
Now let's multiply the inequality by $|x|$ (using $|x|$ keeps the inequalities in the right order for both signs of $x$),
$$-|x| \leq x\sin \frac{1}{x} \leq |x|$$
Taking the limit for all three quantities!
$$\lim_{x \rightarrow 0}-|x| \leq \lim_{x \rightarrow 0} x\sin \frac{1}{x} \leq \lim_{x \rightarrow 0} |x| $$
Know that
$$\lim_{x \rightarrow 0}-|x| =0 $$
$$\lim_{x \rightarrow 0} |x|=0$$
So the limit for $x \sin \frac{1}{x}$ is
$$\lim_{x \rightarrow 0} x\sin \frac{1}{x}=0$$
Now let's check if the function is continuous
By definition of continuity we must have
$$\lim_{\text x \rightarrow 0} x\sin \frac{1}{x}= f(0)$$
Obviously the above is false since
$$\lim_{\text x \rightarrow 0} x\sin \frac{1}{x}=0 \neq f(0)=1$$
Addendum
The squeeze theorem states the following, for functions defined on an interval containing the point $x_0$:
if
$$a(x) \leq b(x) \leq c(x) $$
$$\lim_{x \rightarrow x_0}a(x) =L $$
$$\lim_{x \rightarrow x_0}c(x)= L$$
then we can conclude that
$$\lim_{x \rightarrow x_0}b(x) =L $$ |
proving a graph with no cycles and $|V| = |E| + 1$ is a tree | Start with all edges removed. That would give you a graph with $|V|$ connected components. Now add the edges one by one. Each edge you add will reduce the number of connected components by $1$: since the graph has no cycles, each edge must connect two previously unconnected components. By the time you get to $|V| - 1$ edges, there is exactly one connected component.
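The argument translates directly into code (my own illustration, using union-find to track components as edges are added):

```python
def count_components(num_vertices, edges):
    parent = list(range(num_vertices))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    components = num_vertices               # all edges removed: |V| components
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                        # acyclic edge joins two components
            parent[ru] = rv
            components -= 1
    return components

# a tree on 5 vertices: |E| = |V| - 1 edges, one component at the end
print(count_components(5, [(0, 1), (1, 2), (1, 3), (3, 4)]))   # 1
```
With $|V|-1$ acyclic edges the count drops from $|V|$ to exactly $1$, i.e. the graph is connected, hence a tree. |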
Does there exist a maximum $m\in\mathbb{N}$ such that for some prime $p$, $p_r=p +\sum_{n=1}^r 2^n$ is prime for $r=1,2,\ldots,m$? | A conjecture that the answer to your question is positive is very strong.
Indeed, remark that $p_r=p + \sum\limits_{n=1}^r 2^n=p+2^{r+1}-2$ for each $r=1,2,\ldots,m$.
$\exists p\in\mathbb{P} :\;p_r=p + \sum_{n=1}^r 2^n\in\mathbb{P} \; \; \forall r\in\mathbb{N}\tag{1}$
If $\Bbb P$ is the set of prime numbers then (1) is false. Namely, $p_{p-1}=p+2^p-2$ is divisible by $p$, by Fermat's little theorem.
Thus the only way to have a positive answer is to have infinitely many distinct prime numbers $p$ such that both $p$ and $p+2$ are prime, that is, they are twin primes.
So far, I have concluded that $p$ must necessarily have $7$ as its last digit, that is, $p\equiv 7 \pmod{10}$.
Given $m$, we can obtain a family of similar restrictions on $p=p_0$ as follows. Let $\Bbb P_2=\{3, 5, 11, 13,\dots \}$ be the set of prime numbers $q$ such that $2$ has order $q-1$ in the (cyclic) multiplicative group $\Bbb Z^*_q$ of the finite field $\Bbb Z_q=\Bbb Z/q\Bbb Z$. Let $q\le m+2$ and $q<p$ be any element of $\Bbb P_2$. Then among the residues of $p_0,\dots, p_m$ we meet all residues modulo $q$ except $p-2$. Since all $p_r$ are prime, this is possible only if the only missed residue is zero, that is, if $p-2$ is divisible by $q$. That is, $p-2$ is divisible by the product $\prod \{q\in\Bbb P_2: q\le m+2,\, q<p\}$.
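The Fermat observation above is easy to confirm numerically:

```python
for p in (3, 5, 7, 11, 13, 17, 19):
    assert (p + pow(2, p, p) - 2) % p == 0   # p divides p + 2^p - 2
print("p_(p-1) is divisible by p for all tested primes")
```
so no single prime $p$ can work for all $r$, exactly as claimed. |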
Exact short sequence | For a sequence $A \overset{f}{\rightarrow} B \overset{g}{\rightarrow}C$, being exact in $B$ means that $\operatorname{Ker}(g)=\operatorname{Im}(f)$. So you have to prove that your sequence is exact in $V_1$, $V$ and $V/V_1$, i.e. that the map $i$ is injective, the projection $\pi_{V/V_1}$ is surjective, and that the kernel of the latter is equal to the image of the former. It is quite obvious, but if you have any doubt, just prove it! |
Exterior derivative calculation in Chern and Hamilton paper | Is this on the sphere
$$x^2+y^2+z^2+w^2=1?$$
If so then
$$x\,dx+y\,dy+z\,dz+w\,dw=0$$
there.
Try using these relations in addition to your calculations. |
Identify quotient group on $\mathbb{Z} \times \mathbb{Z}_n$ | Consider the homomorphism $G : \mathbb{Z}\times \mathbb{Z}/n\mathbb{Z}\to \mathbb{Z}/mn\mathbb{Z}$ given by:
$$
(x,y+n\mathbb{Z})\mapsto x+my+mn\mathbb{Z}
$$
Left to the reader: Show that this is well defined, that it is a group homomorphism, and that $\mathrm{Im}(F)\subseteq\mathrm{Ker}(G)$.
To show the other inclusion:
Suppose $G(x,y+n\mathbb{Z})=mn\mathbb{Z}$; then $x+my\in mn\mathbb{Z}$, so there is some $k$ such that $x+my=mnk$, so $x=mnk-my=m(nk-y)$, and $y\equiv -(nk-y) \mod n$, that is $$(x,y+n\mathbb{Z})=(m(nk-y),-(nk-y)+n\mathbb{Z})=F(nk-y).$$
Now we have $\mathrm{Im}(F)=\mathrm{Ker}(G)$, so by the first isomorphism theorem we get
$$(\mathbb{Z}\times \mathbb{Z}/n\mathbb{Z})/\mathrm{Im}(F)\cong \mathrm{Im}(G)$$
Now $\mathrm{Im}(G)=\mathbb{Z}/mn\mathbb{Z}$ because for all $x\in\mathbb{Z}$, $G(x,n\mathbb{Z})=x+mn\mathbb{Z}$
Note:
To get to the homomorphism above, I checked what $(x_1,y_1+n\mathbb{Z})+\mathrm{Im}(F)=(x_2,y_2+n\mathbb{Z})+\mathrm{Im}(F)$ in the quotient group means, which turned out to be (after some working out)
$$
\left\{\begin{aligned}
&x_1\equiv x_2&&\mod m\\
&x_1+my_1\equiv x_2+my_2&&\mod mn
\end{aligned}\right.
$$
And it turned out that the first congruence follows from the second, and the second congruence gives the homomorphism defined above. |
Solution to the Dirichlet problem is smooth up to the boundary if the boundary data is smooth? | You may want to read Kellogg's book on Potential Theory...
From Gilbarg and Trudinger, page 66,
Corollary 4.14. Let $\varphi \in C^{2,\alpha}(\bar{B}), \; \; f \in C^{\alpha}(\bar{B}) .$ Then the Dirichlet problem, $\Delta u = f$ in $B, \; \; u = \varphi$ on $\partial B,$ is uniquely solvable for a function $u\in C^{2,\alpha}(\bar{B}).$
In your case $f = 0.$ The Hölder spaces are defined on page 51.
A stronger version of this is indeed called Kellogg's Theorem, and the reference is Foundations of Potential Theory by O. D. Kellogg, a Dover reprint.
A more recent book, though no more elementary, is Axler et al. |
Integral of $\int_{-\infty}^\infty\frac{1}{t}e^{-\frac{t^2}{4}}\int_0^{\frac{t}{2}}e^{u^2}dudt$ | With the substitution $u\to\frac{t}2x$, the inner integral becomes
$$\int_0^{\frac{t}{2}}e^{u^2}\,du=\frac{t}2\int^1_0e^{\frac{t^2}{4}x^2}\,dx,$$
implying
$$\int_{-\infty}^\infty\frac{1}{t}e^{-\frac{t^2}{4}}\int_0^{\frac{t}{2}}e^{u^2}\,du\,dt=\frac12\int_{-\infty}^\infty\int^1_0 e^{-\frac{t^2}{4}(1-x^2)}\,dx\,dt=\int_{-\infty}^\infty\int^1_0 e^{-s^2(1-x^2)}\,dx\,ds.$$
Since the integrand is positive, Mr. Fubini encourages us to change the order of integration, and
$$\int_{-\infty}^\infty e^{-s^2(1-x^2)}\,ds=\frac{\sqrt{\pi}}{\sqrt{1-x^2}}.$$ Now clearly $$\int^1_0\frac{dx}{\sqrt{1-x^2}}=\frac{\pi}2,$$
so we arrive at $$\int_{-\infty}^\infty\frac{1}{t}e^{-\frac{t^2}{4}}\int_0^{\frac{t}{2}}e^{u^2}\,du\,dt=\frac{\pi^{3/2}}2.$$
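A numerical confirmation (my own check, expressing the inner integral through the imaginary error function: $\int_0^{t/2}e^{u^2}\,du=\tfrac{\sqrt\pi}2\,\mathrm{erfi}(t/2)$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfi

f = lambda t: (1 / t) * np.exp(-t**2 / 4) * (np.sqrt(np.pi) / 2) * erfi(t / 2)
value, _ = quad(f, -np.inf, np.inf)
print(value, np.pi ** 1.5 / 2)   # both ≈ 2.7842
```
Both numbers agree, as expected. |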
Do endomorphisms of infinite-dimensional vector spaces over algebraically closed fields always have eigenvalues? | The answer is no. Consider the space of infinite real sequences and let $T(x)=(0,x_1,x_2,\dots)$. Suppose $T(x)=\lambda x$.
If $\lambda=0$, then $x$ has to be zero, so $0$ is not an eigenvalue.
If $\lambda \neq 0$, then $0=\lambda x_1$, so $x_1=0$, but then $x_1=0=\lambda x_2$. Continuing by induction we again conclude that $x=0$. So $\lambda$ is not an eigenvalue either.
So $T$ has no eigenvalues.
Note that there is no contradiction with the finite dimensional case, because the finite dimensional analogue of this "forward shift" operator must necessarily "forget" about $x_n$, which means that $(0,0,\dots,1)$ will be an eigenvector with eigenvalue zero. |
Weaker Conditions for Integral Test (for series) | If $f$ is non-increasing and nonnegative then it is Riemann integrable and we have
$$\sum_{k=1}^{n-1} f(k) \geqslant\int_{1}^nf(x) \, dx \geqslant \sum_{k=2}^n f(k)$$
Note that $f(k)$ can always be defined since a monotone function has right and left limits at every point. This is enough to prove the integral test and it does not require $f$ to be continuous. This is easily generalized if $f$ is eventually non-increasing.
There are different /weaker conditions such that the integral test holds, where we do not even require $f$ to be monotone. See here.
I see redundant conditions for theorems often in elementary textbooks and, especially, online documents that may be subject to more relaxed standards. I can only speculate that it is an easy way to state a theorem correctly without delving too deeply in the subtleties of different combinations of weaker conditions. |
How to find taylor series of $ \sqrt{x+1} $ | The derivatives of $(1+x)^\alpha$ are relatively easy to find:
$$(1+x)^\alpha\to\alpha(1+x)^{\alpha-1}\to\alpha(\alpha-1)(1+x)^{\alpha-2}\to\alpha(\alpha-1)(\alpha-2)(1+x)^{\alpha-3}\to\cdots$$
and evaluate at $x=0$ as the falling factorials $(\alpha)_k$.
Then the Lagrange remainder reads
$$\frac{(\alpha)_{n+1}}{(n+1)!}(1+x^*)^{\alpha-n-1}x^{n+1}=\frac{(\alpha)_{n+1}}{(n+1)!}(1+x^*)^{\alpha}\left(\frac x{1+x^*}\right)^{n+1},$$ where $0\le|x^*|<|x|.$ Then for $-\frac12<x<1$, the last factor ensures an exponential decay (the others are bounded).
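A quick numerical check of the series with these falling-factorial coefficients, for $\alpha=\tfrac12$ (my own snippet):

```python
import math

def sqrt1p_series(x, terms=30):
    total, falling = 0.0, 1.0        # falling = (1/2)_k, starting at k = 0
    for k in range(terms):
        total += falling / math.factorial(k) * x ** k
        falling *= 0.5 - k           # build (1/2)_{k+1} from (1/2)_k
    return total

print(sqrt1p_series(0.3), math.sqrt(1.3))   # both ≈ 1.140175
```
Inside the radius of convergence the partial sums indeed converge to $\sqrt{1+x}$. |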
Smooth function which is not real analytic | We need only worry about $x=0$.
First note that all the derivatives of $f$ for $x>0$ are sums of terms of the form $e^{-\frac{1}{x}} \frac{1}{x^n}$ (use induction if you want an explicit expression).
Second note that $e^{-\frac{1}{x}} \frac{1}{x^n} = \frac{\frac{1}{x^n}}{1+\frac{1}{x}+\frac{1}{2!x^2}+\cdots} < \frac{\frac{1}{x^n}}{\frac{1}{(n+1)!x^{n+1}}} = (n+1)!x$. It follows that all derivatives converge to $0$ as $x \downarrow 0$. |
How to read out numbers with scientific notation? | Say "six point six three four times ten to the fifteenth".
If this is the amount of some quantity with a metric name you can say
"six point six three four petawhatevers" |
Why are we allowed to ignore coefficients in Big-O notation? | I'll answer 2 first: No, there is no sufficiently large constant $\kappa$ such that $O(\kappa n) \neq O(n)$.
Now, I'll answer 1: Why do we ignore constants in asymptotic analysis?
Different computers with different architectures have different constant factors. A faster computer might be able to access memory faster than a slower computer, so faster computers will have a lower constant factor for memory access than slower computers. However, we're just interested in the algorithm, not the hardware, when doing asymptotic analysis, so we ignore such constant factors.
Also, slightly different algorithms with the same basic idea and computational complexity might have slightly different constants. For example, if one does $a(b-c)$ inside a loop while the other does $ab-ac$ inside a loop, if multiplication is slower than subtraction, then maybe the former will have a lower constant than the latter. However, often we're not interested in such factors, so we ignore the small constant-factor differences coming from the relative costs of operations like subtraction and multiplication.
As $n \to \infty$, constant factors aren't really a big deal. Constant factors are very small in the face of arbitrarily large $n$.
This doesn't mean we always ignore constant factors: Randomized quicksort, which is worst case $O(n^2)$ but average $O(n\log n)$, is often used over the version that has worst case $O(n\log n)$ because the latter has a high constant factor. However, in asymptotic analysis, we ignore constant factors because it removes a lot of the noise from hardware and small changes in algorithms that really aren't important for analysis, making algorithms easier to compare. |
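A tiny Python sketch (my own, for illustration) of why constant factors wash out as $n \to \infty$: an $n^2$ cost overtakes a $1000n$ cost as soon as $n > 1000$, no matter how large the constant.
for n in (10, 100, 1000, 10**4, 10**6):
    print(n, 1000 * n, n * n)   # n*n dominates 1000*n once n > 1000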
Trigonometric integral with cosine | Have you met the $\sec$ function? That will make things a lot easier here.
We have that $\sec y =\frac{1}{\cos y}$, therefore your integral becomes:
$$\int{\sec (5x) dx}$$Standard integration, while remembering to divide by the $5$, gives us $$\frac15\ln|\sec(5x)+\tan(5x)|+C$$
I believe your mistake is that $$y=\sin(s)\to\frac{dy}{ds}=\cos(s) \text{ rather than } (-\cos(s))$$ |
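If you want to double-check the antiderivative, here is a hedged sympy sketch (assuming sympy is available; not part of the original answer):
import sympy as sp
x = sp.symbols('x')
F = sp.log(sp.sec(5*x) + sp.tan(5*x)) / 5   # candidate antiderivative of sec(5x)
print(sp.simplify(sp.diff(F, x) - sp.sec(5*x)))   # should print 0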
Triple integral in different coordinate systems. | Neither of your integrals is correct. You are integrating over the intersections of two balls. Spheres are the boundary of the balls. You need to find the $z$ coordinate where the spheres intersect, call it $z'$, then integrate over the top ball up to that $z$ position and integrate over the bottom ball above it. The $\varphi$ integral should be $0$ to $2\pi$ as you have it. The lowest point of the top ball is at $z=3-\sqrt {13}$ and the highest point of the bottom ball is at $z=2$. This gives $$\int_0^{2\pi}\int_{3-\sqrt{13}}^{z'}\int_0^{\sqrt{13-(z-3)^2}}rdr\ dz\ d\varphi$$ plus another term from $z'$ to $2$ in $z$ |
What is the form of this matrix? | $ad-bc \ge 0$ and $ac+db=0 \Rightarrow d=-\frac{ac}{b}$ thus
$$a\left( -\frac{ac}{b} \right)-bc\ge0 \Rightarrow (a^2+b^2)\frac{-c}{b}\ge0$$
which results in the fact that $b$ and $c$ have different signs such that $-\frac{c}{b}\ge0$. The same reasoning for $b=-\frac{ac}{d}$ gives $(d^2+c^2)\frac{a}{d}\ge0$ and thus $\frac{a}{d}\ge0$, which means $a$ and $d$ have the same sign. Therefore, in general the statement is false; only the signs need to obey the above rules, and $a=d$, $b=-c$ is just a special case.
We conclude that matrix is of the form
$$
\begin{bmatrix}
a & c \\
-cm & \frac{a}{m}\\
\end{bmatrix}
$$
where $m \in \mathbb{R}^+$ and $a,c \in \mathbb{R}$ |
Distribution whose random vectors have distributions of the same class after linear transformations | You seem to be asking for more than the "Stable distribution" scenario; you are thinking about transformations of a multivariate variable:
$${\bf y} = {\bf A x} + {\bf b}$$
with ${\bf A}, {\bf b}$ arbitrary (${\bf A}$ square), so that the "family" (multivariate density) is preserved. I see several problems in giving this a clear-cut answer, or even a meaning. Not only is the "family" concept rather vague, but so is the notion of a "multivariate" density family: for example, we have a definition for a multivariate Gaussian, but we don't have (in general) a definition of (say) a multivariate Cauchy. Hence, it's difficult to give a useful characterization of families of multivariate distributions.
One rather formal way of attacking it would be with characteristic functions. Let $\Phi_X({\bf \omega}) = E[\exp(i {\bf \omega^t X })]$ be the (multidimensional) c.f. of $X$ and $H_X({\bf \omega}) = \log(\Phi_X({\bf \omega}))$. Then, we have
$$H_Y({\bf \omega}) = H_X({\bf A^t} {\bf \omega}) + i {\bf \omega}^t {\bf b}$$
Thus, we are seeking families of complex functions $H(\omega)$ (with $H(0)=0$) that are closed under the above transformation.
One can see immediately (what one already knew) that the Gaussian family fits, because in that case $H(\omega)$ is a homogeneous quadratic, and the transformation preserves the property. But I doubt one can say much more.
How to find range of a Quadratic/Quadratic function easily without plotting its graph? | For $A\ne 0$ we have $Ax^2+Bx+C=Ag(x)+D$ where $g(x)=(x+\frac {B}{2A})^2$ and $D=C-\frac {B^2}{4A}.$ The range of $g(x)$ is $[0,\infty).$
If $A>0$ the range of $Ag(x)$ is $\{Ay: y\in [0,\infty)\}=[0,\infty).$ So the range of $Ax^2+Bx+C$ is $\{z+D: z\in [0,\infty)\}=[D,\infty).$
If $A<0$ the range of $Ag(x)$ is $\{Ay:y\in [0,\infty)\}=(-\infty,0].$ So the range of $Ax^2+Bx+C$ is $\{z+D: z\in (-\infty,0]\}=(-\infty,D].$
Find even sum from a given set | Remember: $$even+even=even$$
$$odd+odd=even$$
$$odd+even=odd$$
If you want to ensure the sum is even, first find the maximum number of picks that can still leave the sum odd.
$$odd+even+even+even+even=odd$$ is the largest such selection: five picks can still produce an odd sum.
So the minimum number of picks that ensures an even sum is six.
Function $ f $ defined on unit disk $ D $ has real values on $ \partial D $, why is $ f $ real valued? | Note: this answers a different question (assuming $f$ is holomorphic). I leave it here for now because it may have some value anyway.
I assume that $f$ is continuous on the closed disc. One approach uses Schwarz' reflection principle: The function $f$ can be extended to an entire function by defining
$$ f(z) = \overline{f(1/\overline{z})} $$
for $|z| > 1$. (The hardish part of the reflection principle is that the resulting function is indeed holomorphic also for points on the unit circle.) Now $f$ is extended to a bounded entire function (since it is bounded on the compact closed unit disc) so it must be constant.
Another approach uses the open mapping theorem. Suppose $f$ is not constant. Then the image of the open unit disc is open and bounded (since $f$ is continuous on the closed unit disc). In particular it must have a non-real boundary point. This boundary point must be attained by $f$ on the compact closed unit disc. But $f$ is an open mapping on the open unit disc so this non-real boundary point must be attained on the unit circle. Contradiction. |
Deriving Trapezoid Rule via Newton-Cotes formula | For $n=1$, we have that $l_0$ and $l_1$ are linear functions. Moreover, $l_0(a) = l_1(b) = 1$ and $l_0(b) = l_1(a) = 0$. It follows that $$A_i = \int_a^b l_i(x) dx = \frac{1}{2}(b-a)$$ for $i=0,1$.
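A quick symbolic check of this computation (my own sketch, assuming sympy):
import sympy as sp
x, a, b = sp.symbols('x a b')
l0 = (x - b) / (a - b)   # linear, equals 1 at x=a and 0 at x=b
l1 = (x - a) / (b - a)   # linear, equals 1 at x=b and 0 at x=a
print(sp.simplify(sp.integrate(l0, (x, a, b))))   # equals (b - a)/2
print(sp.simplify(sp.integrate(l1, (x, a, b))))   # equals (b - a)/2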
Substitution rule in complex analysis | This is hardly ever correct. For example, let $f(z) = iz,$ $g(z) = e^{-|z|}.$ Then the left side equals
$$i\int_{\mathbb C} e^{-|z|} \, d\lambda(z),$$
the right side equals
$$\int_{\mathbb C} e^{-|w|} \, d\lambda(w).$$
Note two things: i) The only entire diffeomorphisms of $\mathbb C$ into $\mathbb C$ are of the form $f(z) = az+b.$ ii) If $f$ is a holomorphic injective map from $U$ onto $V,$ then the correct change of variables is
$$ \int_{U} g(f(z))|f'(z)|^2 \, d\lambda(z) = \int_{V} g(w) \, d\lambda(w).$$
Just compute the real Jacobian of the map $z\to f(z)$ to see this. |
Equivalence relation defined by a group action | Your idea is right, but all your actions should be on the same side. Here's the correct version of the argument you were trying to construct:
If $gx=x^\prime$, then $x=ex=(g^{-1}g)x=g^{-1}(gx)=g^{-1}x^\prime$ |
Definition of post-composing for homotopy classes of maps | You are correct, $\alpha_*[[f]] = [[\alpha\circ f]]$.
Let $\mathcal{C}(X, Y)$ denote the set of continuous functions $X \to Y$. There is a quotient map $\pi : \mathcal{C}(X, Y) \to [[X, Y]]$ given by $f \mapsto [[f]]$.
If $\alpha : Y \to Z$ is a continuous map, there is an induced map $\alpha_* : \mathcal{C}(X, Y) \to \mathcal{C}(X, Z)$ given by $f\mapsto \alpha\circ f$. If $f$ and $g$ are homotopic, then so are $\alpha\circ f$ and $\alpha\circ g$, i.e. if $[[f]] = [[g]]$, then $[[\alpha\circ f]] = [[\alpha\circ g]]$. Therefore the map $\alpha_*$ descends to a well-defined map $[[X, Y]] \to [[X, Z]]$, again denoted by $\alpha_*$, given by $[[f]] \mapsto [[\alpha\circ f]]$. This is represented in the commutative diagram below
$$\require{AMScd}
\begin{CD}
\mathcal{C}(X, Y) @>{\alpha_*}>> \mathcal{C}(X, Z)\\
@V{\pi}VV @VV{\pi}V \\
[[X, Y]] @>{\alpha_*}>> [[X,Z]].
\end{CD}$$ |
Find the remainder of the polynomial $f(x)$ divided by $(x-b)(x-a)$ given its remainder when divided by $x-a$ and $x-b$ | Here is a simpler approach:
$$
f(x)=(x-a)(x-b)q(x)+ux+v
$$
Then
$$
A = f(a) = ua + v,
\quad
B = f(b) = ub + v
$$
gives a linear system for $u,v$ whose solution is
$$
u = \frac{A-B}{a-b},
\quad
v = \frac{a B -A b}{a-b}
$$ |
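Here is a hedged sympy sketch (my own choice of $f$, $a$, $b$, purely for illustration) verifying the formulas for $u$ and $v$:
import sympy as sp
x = sp.symbols('x')
a, b = 2, 5
f = x**5 - 3*x**2 + 7
A, B = f.subs(x, a), f.subs(x, b)      # A = f(a), B = f(b)
u = (A - B) / (a - b)
v = (a*B - A*b) / (a - b)
print(sp.rem(f, (x - a)*(x - b), x))   # 1010*x - 1993
print(sp.expand(u*x + v))              # 1010*x - 1993, matching the remainder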
Any 'odd fraction' can be represented as the finite sum of different 'odd unit fractions'? | I've just been able to prove that any odd fraction can be represented as the finite sum of different odd unit fractions.
First, note that it is sufficient to prove the following $(\star)$ :
$(\star)\ \ $ Any natural number $N$ can be represented as the finite sum of different odd unit fractions.
(If we get $(\star)$, then for an odd fraction $p/q$ we write $p$ as such a sum and divide each unit fraction by the odd denominator $q$; the resulting fractions are still distinct odd unit fractions.)
Now, let $p_i$ be the $i$-th odd prime number arranged in ascending order. ($p_1=3, p_2=5, p_3=7, \cdots$) Here, we use the following two famous facts :
$(1) \ \ p_{i+1}\lt 2p_i$ (by Chebyshev)
$(2) \ \frac1{p_1}+\frac1{p_2}+\cdots$ diverges. (by Euler)
By $(2)$, we can take $n\ge 5$ such that
$$\frac1{p_1}+\frac1{p_2}+\cdots+\frac1{p_n}\gt N+1.$$
Let's fix one of such $n$. Then, letting $P=p_1p_2\cdots p_n$, we know that $P\gt 3$ and that
$$\left(1+\frac1{p_1}\right)\cdots\left(1+\frac1{p_n}\right)\gt 1+\left(\frac1{p_1}+\cdots+\frac1{p_n}\right)\gt 1+(N+1)\gt N+1+\frac 3P.$$
Now, let $q_j$ be the $j$-th positive divisor of $P$ arranged in ascending order: ($q_1, q_2,\cdots,q_m$ where $m=2^n$)
$$q_1=1, q_2=3,q_3=5,q_4=7, q_5=11, q_6=13, q_7=15, \cdots, q_m=P.$$
Lemma 1 : $q_{i+1}\lt 2q_i$ when $i=2,\cdots,m-2$.
(Note that this lacks $i=1,m-1$ because $\frac{q_2}{q_1}=\frac{q_m}{q_{m-1}}=3\gt 2$.)
Lemma 2 : Each of $3,4,\cdots, q_1+q_3+q_4+\cdots+q_i$ can be represented as the sum of some different elements in $q_1,q_2,\cdots,q_i$ when $i=3,\cdots,m-1$.
(For example, when $i=3$, we get $3=q_2, 4=q_1+q_2, 5=q_3, 6=q_1+q_3$. When $i=4$, we get $7=q_4, 8=q_1+q_4,9=q_1+q_2+q_3, 10=q_2+q_4, 11=q_1+q_2+q_4, 12=q_3+q_4, 13=q_1+q_3+q_4$.)
(I'll prove these lemmas at the last.)
Let's prove that $(\star)$ follows from these lemmas. To do this, let $R=q_1+q_3+q_4+\cdots+q_{m-1}$. (Note that this lacks both $q_2=3$ and $q_m=P$.) Then, we get
$$R=(q_1+q_2+\cdots+q_m)-(3+P)=(p_1+1)\cdots(p_n+1)-(3+P).$$
Hence, since we get
$$\frac RP=\frac{(p_1+1)\cdots (p_n+1)}{P}-\left(\frac 3P+1\right)=\left(1+\frac1{p_1}\right)\cdots\left(1+\frac1{p_n}\right)-\left(\frac 3P+1\right)\gt\left(N+1+\frac 3P\right)-\left(\frac 3P+1\right)=N,$$
we know that $3\le NP\lt R$. By lemma 2, we can represent $NP$ as
$$NP=q_{a_1}+q_{a_2}+\cdots+q_{a_k} \ \ (1\le a_1\lt a_2\lt \cdots\lt a_k\le m-1).$$
Hence, we get
$$N=\frac{q_{a_1}}{P}+\cdots+\frac{q_{a_k}}{P}.$$
Here, since each of $q_{a_i}$ is a positive divisor of $P$, each of $\frac{q_{a_i}}{P}$ is an odd unit fraction. Now the proof for $(\star)$ is completed.
In the following, we are going to prove the above lemmas.
Proof for lemma 1 : Let $_kr_i$ be the $i$-th number in ascending order which can be represented as the product of $k\ (k=0,\cdots,n)$ different elements in $p_1,\cdots,p_n$.
For example, $$_kr_1=p_1p_2\cdots p_k,\ _kr_{\binom{n}{k}}=p_np_{n-1}\cdots p_{n-k+1},\ _0r_1=1, \ _nr_1=P.$$
Then, by the fact $(1)$, we know that $_kr_{j+1}\lt 2\,{}_kr_j$.
Now, let's prove that $q_{i+1}\lt 2q_i$ for $i=2,\cdots, m-2$. Note that $q_i=_kr_j$ for some $k,j$ where $1\le k\le n-1$. First, if $j\lt \binom nk$, then we know that
$$q_{i+1}\le _kr_{j+1}\lt 2_kr_j=2q_i.$$
Next, if $j=\binom nk$, then we get
$$q_i=p_n\times p_{n-1}\times \cdots\times p_{n-k+1}$$
where $i\le m-2$ implies $k\le n-2$ and $n-k+1\ge 3$. In the $n=5$ case, by $p_n=p_5=13$ and $p_1\times p_2=3\times 5=15$, we know
$$q_{i+1}\le p_1\times p_2\times p_{n-k+1}\times p_{n-k+2}\times\cdots\times p_{n-1}=\frac{15}{13}p_{n-k+1}\times p_{n-k+2}\cdots\times p_{n-1}\times p_n=\frac{15}{13}q_i\lt 2q_i.$$
In the $n\ge 6$ cases, by $p_n\ge p_6=17\gt 15=p_1\times p_2$, we get
$$_{k+1}r_1\lt _kr_{\binom nk}\lt _{k+1}r_{\binom{n}{k+1}}.$$
Hence, since we can take $l$ such that
$$_{k+1}r_{l}\lt _kr_{\binom nk}\lt _{k+1}r_{l+1},$$
we know that
$$q_{i+1}\le _{k+1}r_{l+1}\lt 2 _{k+1}r_{l}\lt 2q_i$$
for $q_i= _kr_{\binom nk}.$ Now the proof for lemma 1 is completed.
Proof for lemma 2 : The $i=3$ case is obvious. We treat $i\ge 4$ in the following. First, by induction on $i$, we are going to prove
$$(\star\star)\ \ 3+q_{i+1}\le q_1+q_3+q_4+\cdots+q_i+1$$
when $4\le i\le m-2$.
In the $i=4$ case, we get
$$LHS=3+q_5=3+11=14, RHS=q_1+q_3+q_4+1=1+5+7+1=14.$$
Now, supposing that $(\star\star)$ is true when $i=k\ (4\le k\le m-3)$, in the $i=k+1$ case, by lemma 1 and the inductive supposition, $LHS=3+q_{k+2}\lt 3+2q_{k+1}=(3+q_{k+1})+q_{k+1}\lt (q_1+q_3+q_4+\cdots+q_k+1)+q_{k+1}=RHS.$
Hence, the proof for $(\star\star)$ is completed.
Now, let's prove lemma 2 by induction on $i$. The $i=4$ case has been already proved, so let's suppose that lemma 2 is true when $i=k\ (4\le k\le m-2)$. We know that the numbers from $3+q_{k+1}$ to $q_1+q_3+q_4+\cdots+q_{k+1}$ can be represented as the sum of some different elements in $q_1,q_2, \cdots, q_{k+1}.$ By the way, by $(\star\star)$, since
$$3+q_{k+1}\le q_1+q_3+q_4+\cdots+q_k+1,$$
as a result we know that the numbers from $3$ to $q_1+q_3+q_4+\cdots+q_{k+1}$ can be represented as the sum of some different elements in $q_1,q_2,\cdots,q_{k+1}$. So lemma 2 is true when $i=k+1$. Hence we know that lemma 2 is true for $4\le i\le m-1$. Now the proof for lemma 2 is completed. |
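For the skeptical reader, here is a small Python sketch (my own, assuming sympy for the divisor list) that brute-force verifies the key subset-sum claim for $n=5$, i.e. that every integer in $[3,R]$ is a sum of distinct divisors $q_1,\dots,q_{m-1}$ of $P=3\cdot5\cdot7\cdot11\cdot13$:
from sympy import divisors
P = 3 * 5 * 7 * 11 * 13            # 15015
q = divisors(P)                    # q_1=1, q_2=3, ..., q_m=P (m=32)
qs = [d for d in q if d != P]      # q_1, ..., q_{m-1}
R = sum(qs) - 3                    # q_1 + q_3 + q_4 + ... + q_{m-1}
reachable = 1                      # bitset of achievable subset sums; bit s means s is reachable
for d in qs:
    reachable |= reachable << d
print(all((reachable >> s) & 1 for s in range(3, R + 1)))   # True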
What does $1+\frac{1}{8}+\frac{1}{27}+\frac{1}{64}+\frac{1}{125}+(\frac{1}{n})^3$ equal to? | $\zeta(3)$ is a constant known as Apéry's constant. Its value is approximately $1.2021\ldots$; no simple closed form is known, though Apéry proved that it is irrational.
Some further reading:
https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s_constant
http://mathworld.wolfram.com/AperysConstant.html |
Rank of linear map and its dual: Understand the bijectivity in the proof. | Let $g\in (Imf)^*, g:Imf\rightarrow K$. Then $f_1^*(g)=g\circ f_1=g'\circ f$, where $g':W\rightarrow K$ extends $g$. This shows that $f_1^*(g)\in Imf^*$. Since $f_1^*$ is injective, we just need to show that it is surjective onto $Imf^*$.
Let $h\in Imf^*, h=f^*(g)=g\circ f, g\in W^*$, let $g_1=g_{\mid Imf}$, $g\circ f=g_1\circ f_1=f_1^*(g_1)$. |
Proving that for each prime number $p$, the number $\sqrt{p}$ is irrational | If $\sqrt p=\frac{a}{b}$ where $(a,b)=1$ and $a>1$ (since $p>1$),
then $p=\frac{a^2}{b^2}$.
Now as $p$ is an integer, $b^2$ must divide $a^2$, which is impossible unless $b=1$, since $(a,b)=1$.
If $b=1$, then $p=a^2$, which cannot be prime as $a>1$.
How to solve a system of equations | Yes, since $x$ is nonzero (because of the second equation), we can eliminate
$y=(60-2x^2)/(7x)$ and substitute this in the first equation, which gives
$$
25(x + 4)(x + 3)(x - 3)(x - 4)=0.
$$ |
Let $a,b,c>0$ such that $a+b+c=3$, find max/min of $\frac1{a^2+1}+\frac1{b^2+1}+\frac1{c^2+1}$ | Let $f(x) = \frac{1}{1+x^2}$ and $\mu = \frac{1}{\sqrt{3}}$.
WLOG, we will assume $a \le b \le c$.
Notice for any $x \ge 0$,
$$f(x) + \frac12(x-2) = \frac{x(x-1)^2}{2(x^2+1)} \ge 0$$
Substituting $a, b, c$ for $x$ and summing, we obtain
$$S(a,b,c) \stackrel{def}{=} f(a)+f(b)+f(c) \ge \frac12(6-(a+b+c)) = \frac32$$
Since this lower bound $\frac32$ is achieved at $a = b = c = 1$, this is the minimum we seek.
For maximum, we use the fact
$$f''(x) = 2\frac{(3x^2-1)}{(x^2+1)^3}
\quad\text{ is }\quad
\begin{cases}
< 0, & x < \mu\\
= 0, & x = \mu\\
> 0, & x > \mu
\end{cases}$$
to conclude $f(x)$ is strictly concave over $[0,\mu)$ and strictly convex over $(\mu,\infty)$.
As a result of this, when $(a,b,c)$ is a configuration which maximizes $S(a,b,c)$, we have
At most one of $a,b,c$ lies inside $(\mu,\infty)$.
Assume the contrary, let's say $\mu < b \le c$.
When $b < c$,
$f''(x) > 0$ over $(\mu,\infty)$ implies $f'(x)$ is increasing there.
This means $f'(b) < f'(c)$. For small enough $\epsilon > 0$, we have
$$S(a, b-\epsilon,c+\epsilon) = S(a,b,c) + (f'(c)-f'(b))\epsilon + O(\epsilon^2) > S(a,b,c)$$
This contradicts our choice of $a,b,c$ to maximize $S(a,b,c)$.
When $b = c$, $f(x)$ is strictly convex over $(\mu,\infty)$ and Jensen's inequality tells us for small enough $\epsilon > 0$,
$$S(a, b-\epsilon, b+\epsilon) > S(a,b,b)$$
This again contradicts our choice of $a, b, c$.
For those $a,b,c$ lies inside $[0,\mu]$, we can assume they share a common value.
This is because $f(x)$ is concave over $[0,\mu]$: Jensen's inequality
tells us that if we replace those of $a,b,c$ inside $[0,\mu]$ by their average, it can only increase $S(a,b,c)$.
This leaves us with two possible configurations for $(a,b,c)$. Either
$a = b = c \le \mu$
$a = b \le \mu < c$
It is easy to see the first scenario gives us $a = b = c = 1$. This gives us the minimum instead of maximum. For the second scenario, we can look at the parametrization $(a,b,c) = (t,t,3-2t)$.
We find
$$S(a,b,c) = S(t,t,3-2t) = \frac{2}{1+t^2} + \frac{1}{1+(3-2t)^2}$$
Differentiate RHS gives us
$$\frac{4(3-2t)}{(1+(3-2t)^2)^2}-\frac{4t}{(1+t^2)^2}
= -\frac{3(t-1)(6t^4-27t^3+49t^2-33t+1)}{(t^2+1)^2 (2t^2-6t+5)^2}$$
A plot of it shows that it is mostly negative over $[0,1)$.
It is positive over $[0,\lambda)$ where $\lambda
\approx 0.031776261136412972$ is the smallest positive real root of the quartic polynomial:
$$6t^4-27t^3+49t^2-33t+1$$
$S(t,t,3-2t)$ is increasing on $[0,\lambda]$ and decreasing on $[\lambda,\mu]$.
The maximum of $S(a,b,c)$ is achieved at
$$(\lambda,\lambda,3-2\lambda) \approx (0.031776261136412972,0.031776261136412972,2.936447477727174)$$
with value $\approx 2.101903255548146$. |
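A quick numerical cross-check of the maximum (my own sketch): a grid search over the parametrization $(t,t,3-2t)$.
def S(t):
    return 2.0 / (1.0 + t*t) + 1.0 / (1.0 + (3.0 - 2.0*t)**2)
best_val, best_t = max((S(i * 1e-6), i * 1e-6) for i in range(100000))  # t in [0, 0.1)
print(best_t, best_val)   # approximately 0.031776 and 2.101903, as claimed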
European Call Option - Expectation of Normal CDF | The only way I can make any sense of this is that $d_1$ is a random variable taking values in $\mathbb{R}$ with PDF $f$ and CDF
$$Q(d_1 \leqslant x) = \int_{-\infty}^x f(t) \, dt,$$
so
$$E\left(I_{d_1 > 0}\int_0^{d_1} e^{-s^2/2} \, ds \right) = \int_0^\infty f(x)\left(\int_0^xe^{-s^2/2} \, ds\right) \, dx$$
Integrating by parts, we get
$$E\left(I_{d_1 > 0}\int_0^{d_1} e^{-s^2/2} \, ds \right) = \left.\int_0^xe^{-s^2/2} \, ds \int_{-\infty}^xf(t) \, dt\right|_0^\infty - \int_0^\infty \left(\int_{-\infty}^xf(t) \, dt\right) e^{-x^2/2}\, dx$$
Since $\int_{-\infty}^\infty f(t) \, dt = 1$ we have
$$E\left(I_{d_1 > 0}\int_0^{d_1} e^{-s^2/2} \, ds \right) = \int_0^\infty e^{-s^2/2} \, ds - \int_0^\infty \left(\int_{-\infty}^xf(t) \, dt\right) e^{-x^2/2}\, dx \\ = \int_0^\infty \underbrace{\left(\int_x^\infty f(t) \, dt\right)}_{Q(x < d_1)}e^{-x^2/2} \, dx$$
The second term can be handled in a similar way. |
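A Monte Carlo sanity check of the identity (my own sketch; I assume $d_1\sim N(0,1)$ purely for the test, and use $\int_0^{d}e^{-s^2/2}\,ds=\sqrt{2\pi}\,(\Phi(d)-\tfrac12)$):
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm
rng = np.random.default_rng(0)
d1 = rng.standard_normal(10**6)
lhs = np.mean((d1 > 0) * np.sqrt(2*np.pi) * (norm.cdf(d1) - 0.5))
rhs = quad(lambda x: (1 - norm.cdf(x)) * np.exp(-x**2 / 2), 0, np.inf)[0]
print(lhs, rhs)   # both approximately sqrt(2*pi)/8 = 0.3133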
Can we treat logic mathematically without using logic? | We use logic to study logic, not to create logic. Our study is usually not intended to justify some logic but rather to understand how it works. For example, we might try to prove that, whenever a conclusion $c$ follows from an infinite set $H$ of hypotheses then $c$ already follows from a finite subset of $H$. Many logical systems have this finiteness property; many others do not. And that's quite independent of the logic that we use in studying this property and trying to prove or disprove it for one or another logical system.
Here's an analogy: Suppose a biologist is writing a paper about the origin of trees. He could use a wooden pencil to write the paper. That pencil was made using wood from trees, so its existence presupposes that the origin of trees actually happened. Nevertheless, there is nothing circular here. The pencil that is being used probably consists of wood quite different from that in prehistoric trees. And even if it wasn't different, there's no problem with using the pencil to describe those ancient trees.
Similarly, there's no problem using ordinary reasoning, also called logic, to describe and analyze the process of reasoning. |
Defining $\mathbb{C}$ in $\mathbb{C}(X)$ | Here's a more elementary way of showing that $\mathbb{C}$ is definable in $\mathbb{C}(X)$. I found this solution in the book Model Theoretic Algebra by Jensen and Lenzing, which is a great reference for questions about definability in rings, fields, and modules. It's Proposition 3.3 on p. 34 of that book, and the method applies more generally to define $K$ in $K(X)$ whenever $K$ is a Pythagorean field (a field in which any sum of squares is a square) of characteristic $\neq 2$.
Consider the formula $\varphi(x)$: $$\exists y\, (1 + x^4 = y^2).$$
If $a\in \mathbb{C}$, then $1 + a^4\in \mathbb{C}$ is a square in $\mathbb{C}$, and hence also in $\mathbb{C}(X)$, so $\mathbb{C}(X)\models \varphi(a)$.
Conversely, suppose $a\in \mathbb{C}(X)$ and $\mathbb{C}(X)\models \varphi(a)$. Then there is some $b\in \mathbb{C}(X)$ such that $1 + a^4 = b^2$. Writing $a = p/q$ and $b = r/s$ in lowest terms, with $p,q,r,s\in \mathbb{C}[X]$, we have $1 + p^4/q^4 = r^2 / s^2$. Clearing denominators, $(q^4 + p^4)s^2 = r^2q^4$. Since $b = r/s$ is written in lowest terms, $s$ and $r$ are relatively prime. Thus $s^2 | q^4$, and hence $s | q^2$. Writing $q^2 = st$ for some $t\in \mathbb{C}[X]$, we have $q^4 = s^2t^2$. So $(q^4 + p^4)s^2 = r^2q^4 = r^2s^2t^2$, and $q^4 + p^4 = u^2$, where $u = rt$. Further, $p$, $q$, and $u$ are pairwise relatively prime, since a common irreducible factor of any two of the three would also divide the third, contradicting the assumption that $a = p/q$ is written in lowest terms.
Now it suffices to prove the following claim, since then $a = p/q\in \mathbb{C}$.
Claim: Suppose $p,q,u\in \mathbb{C}[X]$ satisfy $p^4 + q^4 = u^2$ and are pairwise relatively prime. Then $p,q,u\in \mathbb{C}$.
Proof: By induction on $\max(\deg(p),\deg(q))$. If $\max(\deg(p),\deg(q))\leq 0$, then $p,q\in \mathbb{C}$, so $u\in \mathbb{C}$ as well.
Now assume $\max(\deg(p),\deg(q))> 0$. By symmetry, we may assume $\deg(p) \leq \deg(q)$. Note that $2\deg(u) = \deg(u^2) \leq \max(\deg(p^4),\deg(q^4)) = 4\deg(q)$, so $\deg(u) \leq 2\deg(q) = \deg(q^2)$.
Rewriting $u^2 - q^4 = p^4$, we have $(u+q^2)(u-q^2) = p^4$. Now $(u+q^2)$ and $(u-q^2)$ are relatively prime, since a common irreducible factor would divide both $(u+q^2) + (u-q^2) = 2u$ and $(u+q^2) - (u-q^2) = 2q^2$, and hence would divide both $u$ and $q$. So each irreducible factor of $p$ divides exactly one of $(u+q^2)$ or $(u-q^2)$.
Since also any unit is a $4^\text{th}$ power in $\mathbb{C}$, it follows that $p$ factors as $p = \hat{p}\hat{q}$, where $(u+q^2) = \hat{p}^4$ and $(u - q^2) = \hat{q}^4$, and $\hat{p}$ and $\hat{q}$ are relatively prime. Then $2q^2 = (u + q^2)-(u-q^2) = \hat{p}^4 - \hat{q}^4$, so $\hat{u}^2 = \hat{p}^4 + (\zeta \hat{q})^4$, where $\hat{u} = \sqrt{2}q$ and $\zeta$ is a primitive $8^{\text{th}}$ root of unity.
We have $4\deg(\hat{p}) = \deg(\hat{p}^4) = \deg(u+q^2) \leq \deg(q^2) = 2\deg(q)$, where the inequality follows from the observation $\deg(u) \leq \deg(q^2)$ above. So $\deg(\hat{p}) \leq \deg(q)/2$. Similarly, $\deg(\hat{q}) \leq \deg(q)/2$. So $\max(\deg(\hat{p}),\deg(\zeta\hat{q})) < \deg(q) = \max(\deg(p),\deg(q))$.
Also, $\hat{p}$ and $\zeta\hat{q}$ are relatively prime, and hence $\hat{p}$, $\zeta\hat{q}$, and $\hat{u}$ are pairwise relatively prime, since a common irreducible factor of any two of the three would also divide the third.
By induction, $\hat{p}$, $\zeta\hat{q}$, and $\hat{u}$ are in $\mathbb{C}$. But then $\sqrt{2}q = \hat{u}$ and $p = \hat{p}\hat{q}$ implies $p,q\in \mathbb{C}$, and thus also $u\in \mathbb{C}$. |
Under what conditions is $A^T \Sigma A$ positive (semi-)definite for $\Sigma$ p.s.d? | If $A$ is not square, you cannot talk of its eigenvalues / vectors. Even if it is, you cannot assume that the eigenvectors are orthogonal w.r.t. to the scalar product that you give. Think for instance $\Sigma = I$; then you would like the eigenvectors to be orthogonal, which means that $A$ itself is orthogonal.
In general, the easiest way to prove p.s.d.-ness is from the definition: for $x \in \mathbb{R}^m$, compute
$$
x^T (A^T \Sigma A) x = (A x)^T \Sigma (Ax),
$$
and this is nonnegative since $\Sigma$ is p.s.d.
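A small numpy sketch (my own) of the definition-based argument: build a random p.s.d. $\Sigma$ and a rectangular $A$, and check the spectrum of $A^T \Sigma A$.
import numpy as np
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))          # A need not be square
S = rng.standard_normal((4, 4))
Sigma = S @ S.T                          # p.s.d. by construction
M = A.T @ Sigma @ A
print(np.linalg.eigvalsh(M))             # all eigenvalues >= 0 (up to rounding)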
How do you calculate the change in thickness of a cylinder, if you shave off a flat section? | If I understand correctly, the problem is straightforward:
You want to compute $d$ (or $r-d$, which amounts to the same thing).
Since $d^2+ ({w \over 2})^2 = r^2$, we get
$d = \sqrt{r^2 - ({w \over 2})^2}$.
In this example, $w = 12$mm, $r = 100$mm and so
$ d \approx 99.82$ mm, so the difference ($r - d \approx 0.18$ mm) seems negligible.
(To be expected since $\sqrt{1-x^2} \approx 1-{x^2 \over 2}$.) |
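In code, the same arithmetic (a trivial Python sketch of my own):
import math
r, w = 100.0, 12.0                 # mm
d = math.sqrt(r**2 - (w / 2)**2)
print(d, r - d)                    # about 99.82 and 0.18 mm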
How to prove a graph is a manifold? | You've got the right parameterization. It's certainly a 1-1 map (why?), and certainly onto (why?), and certainly continuous (why?), and indeed, $C^k$ (why? What's the derivative matrix at $(x_1, \ldots, x_n)$ look like?).
Hint:
The only possible remaining part is finding an inverse and proving that the inverse is nice as well. So: here's the inverse. Let $K$ denote the graph of $f$.
$$
p: K \to \Bbb R^n : (x_1, \ldots, x_n, u_{n+1}, \ldots, u_k) \mapsto (x_1, \ldots, x_n).
$$
Can you explain why $p$ is continuous, 1-1 (on $K$), and $C^k$? |
finding point on a line where distance to point e (r) is equal to b-e | From your diagram, the circle should touch the lines tangentially rather than be cut by them as a secant. Draw two auxiliary lines parallel to $ab$ and $cd$, each at distance $r$ from them, so that their inside intersection point is $e$; then draw the touching circle of radius $r$ centered at $e$.
Relating geometric and Algebraic Definitions of the dot product | Consider this:
Both algebraic and geometric are commutative.
The algebraic dot product is linear: easily seen from the definition.
The geometric product is linear: scalar multiplication is easily checked. To check additivity, align $A,B$ on a coordinate system so that they are on the xy-plane with $B$ along the y-axis; then $|A|\cos\theta$ is just the $y$ coordinate of $A$, and the rest follows.
The definition of both match each other at all pair of standard basis $e_{i}$: $e_{i}\cdot e_{j}=\delta_{i,j}$ in both case.
Hence they must match each other completely. Basically, it's essentially a uniqueness theorem: you want some sort of "dot product" satisfying certain conditions, and there is only one that comes out. Any definition that matches those conditions must produce the same product.
And these conditions are very clearly motivated by physical considerations: commutativity because you can rotate the world around and the physics should not change; linearity because it makes sense that the total energy spent by 2 people pushing something should be the sum of the energy spent by each; orthogonality because it makes sense that no energy is spent if you don't move the object at all; and normalization is just an arbitrary choice of unit.
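A quick numerical sketch (my own) comparing the two definitions in the plane, computing the angle via atan2 rather than via the dot product itself:
import math
ax, ay = 3.0, 4.0
bx, by = -2.0, 5.0
algebraic = ax*bx + ay*by                                   # coordinate definition
theta = math.atan2(ay, ax) - math.atan2(by, bx)             # angle between A and B
geometric = math.hypot(ax, ay) * math.hypot(bx, by) * math.cos(theta)
print(algebraic, geometric)                                 # equal up to rounding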
If $G/H$ is a group under set multiplication, must $H \lhd G$? | The product is actually an operation (that is, a function from $(G/H)\times (G/H)$ to $G/H$) if and only if $H$ is normal.
This is closely related to an old answer of mine. That one shows, among other things, that if you want to define the multiplication of cosets as $(aH)(bH) = (ab)H$, then this is well-defined if and only if $H$ is normal.
Basically, if $H$ is not normal, then not every set-theoretic product of left cosets is a left coset. So $*$ is not actually an operation on the set $G/H$ of left cosets. A similar argument holds for the set of right cosets.
Indeed, suppose $H$ is not normal. Then there exists $a\in G$ such that $aHa^{-1}\neq H$. In particular, there exists $h\in H$ such that $aha^{-1}\notin H$.
Now consider the product of $aH$ and $a^{-1}H$. The element $aha^{-1}e\in (aH)(a^{-1}H)$. As it does not lie in $H$, that means that if this product is to be a left coset of $H$, then it must be the left coset $aha^{-1}H$. That is, we must have $(aH)(a^{-1}H) = aha^{-1}H$.
On the other hand, the element $aea^{-1}e = e$ lies in $(aH)(a^{-1}H)$. Thus, $e$ must lie in $(aH)(a^{-1}H)$. However, the only left coset of $H$ that includes $e$ is $H$ itself. Thus, we must have $(aH)(a^{-1}H) = eH$. But this requires $aha^{-1}H = eH$, which in turn requires $aha^{-1}\in H$ . . . which is not true by assumption.
Thus, $(aH)(a^{-1}H)$ cannot be a left coset of $H$, proving that this product is not an operation on $G/H$ when $H$ is not normal.
Do take a look at that other answer for a pretty extensive discussion of this and related topics.
Let’s dig a bit more conceptually here about what is happening when you multiply cosets.
Lemma. Let $G$ be a group, $H$ a subgroup of $G$, and $a\in G$. If $(aH)(a^{-1}H)$ is a left coset of $H$, then it is $H$; similarly, if $(a^{-1}H)(aH)$ is a left coset of $H$, then it is $H$; and if both hold, then $a\in N_G(H)$. Conversely, if $a\in N_G(H)$, then $(aH)(a^{-1}H) =(a^{-1}H)(aH)= H$.
Proof. If $a\in N_G(H)$, then $aHa^{-1}=H$. Therefore, $(aH)(a^{-1}H) = (aHa^{-1})H = HH = H$; and since $N_G(H)$ is a subgroup, $a^{-1}\in N_G(H)$ and the same argument proves that $(a^{-1}H)(aH) = H$.
Conversely, assume that $(aH)(a^{-1}H)$ is a left coset of $H$. Since $aea^{-1}e = e\in (aH)(a^{-1}H)$, this coset must be equal to $H$. Thus, $(aH)(a^{-1}H) = H$. Therefore, for each $h\in H$ we have $aha^{-1} = aha^{-1}e \in aHa^{-1}H = H$, so $aHa^{-1}\subseteq H$. The same argument, mutatis mutandis, shows that if $(a^{-1}H)(aH)$ is a left coset, then it is $H$.
Now, if $(aH)(a^{-1}H) = H$, then for each $h\in H$ we have $aha^{-1} = aha^{-1}e \in (aH)(a^{-1}H) = H$. Thus, $aHa^{-1}\subseteq H$. Symmetrically, $(a^{-1}H)(aH) = H$ implies $a^{-1}Ha\subseteq H$, and multiplying on the left by $a$ and on the right by $a^{-1}$ we get $H\subseteq aHa^{-1}$, giving equality, so $a\in N_G(H)$. $\Box$
In particular, if $H$ is such that the product of two left cosets is always a left coset, then in particular for all $g\in G$ we have that $(gH)(g^{-1}H)$ is a left coset and $(g^{-1}H)(gH)$ is a left coset, and thus that $g\in N_G(H)$. Thus, $G\subseteq N_G(H)$, hence $H\triangleleft G$.
(Comment: I would like to weaken the hypothesis that both $(aH)(a^{-1}H)$ and $(a^{-1}H)(aH)$ are cosets to just one of them, but I’m not sure right now how to do it. The argument certainly shows that if $(aH)(a^{-1}H)$ is a coset, then it is $H$ and $aHa^{-1}\subseteq H$. Ideally of course, we should be able to say something about when the product $(aH)(bH)$ is a left coset, in terms of $a$, $b$, and the normalizer, but I haven’t fully worked that out yet . . . )
Note that of course one can define other operations on the set $G/H$ that have nothing to do with the multiplication of $G$ to make the set $G/H$ into a group. In greatest generality, the fact that any set can be made into a group is equivalent to the Axiom of Choice. But if you want the operation on $G/H$ to be somehow “induced” by the operation in $G$, you are pretty much forced into normal subgroups and the standard operation on the quotient. |
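To see the failure concretely, here is a short Python sketch (my own) in $S_3$ with the non-normal subgroup $H=\{e,(0\,1)\}$: the set product $(aH)(a^{-1}H)$ for $a=(0\,2)$ has four elements, so it cannot be a left coset (which would have two).
from itertools import permutations
def compose(p, q):                       # (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))
G = list(permutations(range(3)))
H = {(0, 1, 2), (1, 0, 2)}               # identity and the transposition (0 1)
a = (2, 1, 0)                            # the transposition (0 2); a is its own inverse
aH  = {compose(a, h) for h in H}
aiH = {compose(a, h) for h in H}         # a^{-1} = a here
prod = {compose(x, y) for x in aH for y in aiH}
cosets = [frozenset(compose(g, h) for h in H) for g in G]
print(len(prod), any(set(c) == prod for c in cosets))   # 4 False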
Rank of matrix with parameters | If the rank of the matrix is $2$, then every minor of size greater than $2$ (and the full determinant, if the matrix is square) must be zero. So compute each determinant of size greater than $2$ separately and set it to zero. Also don't forget that there must be at least one $2\times 2$ submatrix with nonzero determinant.
Let $\phi:\Bbb{Z}_{20} \to \Bbb{Z}_{20}$ be an automorphism and $\phi(5)=5$. What are the possibilities of $\phi(x)$? | It's a lot easier to see exactly how flexible the automorphisms are (and count them) if you use $\Bbb Z_{20}\cong \Bbb Z_4\times\Bbb Z_5$, with $5\mapsto (1, 0)$
This is known as the Chinese remainder theorem: If $m, n$ are coprime integers, then $\Bbb Z_{mn}\cong \Bbb Z_m\times \Bbb Z_n$. The most natural isomorphism is $1\mapsto (1, 1)$, so that's the one I'll use here. This gives the $5\mapsto(1, 0)$ above.
An automorphism of $\Bbb Z_4\times \Bbb Z_5$ must send $(1, 0)$ to an element of order $4$ (i.e. $(1, 0)$ or $(3, 0)$) and it must send $(0,1)$ to an element of order $5$ (i.e. $(0,i)$ for $1\leq i\leq 4$), and any automorphism is uniquely determined by where it sends these two elements.
We want all automorphisms which fix $5\in \Bbb Z_{20}$. That corresponds to automorphisms which fix $(1, 0)$ in $\Bbb Z_4\times\Bbb Z_5$. That means the only leeway we have is where $(0,1)$ is sent. We have four options, and they all work. Thus the four maps we are after all map $(1,0)$ to $(1,0)$, and then map $(0,1)$ to either one of the four order-$5$ elements $(0,i)$.
In order to translate back to $\Bbb Z_{20}$ I think it's easiest to see what happens to $(1, 1)$: It is sent to some element $(1, i)$ where $1\leq i\leq 4$. Any such choice gives a valid automorphism. Taking our designated isomorphism back to $\Bbb Z_{20}$, we get the following correspondences between automorphisms:
$$
\begin{array}{|c|c|}
\hline \text{image of (1, 1)}&\text{image of $1$}\\
\hline
(1, 1) & 1\\
(1, 2) & 17\\
(1, 3) & 13\\
(1, 4) & 9\\\hline
\end{array}
$$ |
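One can confirm this list with a short Python check (my own sketch): the automorphisms of $\Bbb Z_{20}$ are $x\mapsto kx$ with $\gcd(k,20)=1$, and we keep those with $5k\equiv 5\pmod{20}$.
from math import gcd
print([k for k in range(1, 20) if gcd(k, 20) == 1 and 5 * k % 20 == 5])   # [1, 9, 13, 17]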
$x^{x}$ as a composite function | We can write $x^x = e^{x\ln(x)}$ for $x>0$. In this way, it is a composite function. |
common point on the three lines...... | Here is the command in Maxima, you can use other CAS if you want:
s: (a+b+c)/2;
M: matrix(
[(b-c)*(2*s^3-2*s^2*a+s*(a^2-2*b*c)+a*b*c),
-a*(s-c)*(2*s^2-2*s*b+b^2),
a*(s-b)*(2*s^2-2*s*c+c^2)],
[b*(s-c)*(2*s^2-2*s*a+a^2),
(c-a)*(2*s^3-2*s^2*b+s*(b^2-2*c*a)+a*b*c),
-b*(s-a)*(2*s^2-2*s*c+c^2)],
[-c*(s-b)*(2*s^2-2*s*a+a^2),
c*(s-a)*(2*s^2-2*s*b+b^2),
(a-b)*(2*s^3-2*s^2*c+s*(c^2-2*a*b)+a*b*c)]
);
ratexpand(determinant(M));
(or use factor instead of ratexpand), which indeed shows the determinant is $0$.
You can also use eigenvalues(M); to see that there is indeed an eigenvalue $0$ of multiplicity $1$. To find the eigenspace, use eigenvectors(M);, which we are only interested in the last part of the output (the full output is [[[eigenvalues],[multiplicities]],[eigenvectors]])
$$
\begin{bmatrix}
1\\[4pt]
\dfrac{bc^3+\left(ab-b^2\right)c^2+\left(-b^3+4ab^2+a^2b\right)c+b^4-ab^3-a^2b^2+a^3b}{ac^3+\left(ab-a^2\right)c^2+\left(ab^2+4a^2b-a^3\right)c+ab^3-a^2b^2-a^3b+a^4}\\[4pt]
\dfrac{c^4+\left(-b-a\right)c^3+\left(-b^2+4ab-a^2\right)c^2+\left(b^3+ab^2+a^2b+a^3\right)c}{ac^3+\left(ab-a^2\right)c^2+\left(ab^2+4a^2b-a^3\right)c+ab^3-a^2b^2-a^3b+a^4}
\end{bmatrix}
$$
(OK, maxima doesn't really output like this, but you get the idea). So, clearing the denominator, $$x=a\left(\sum_{cyc}a^2(a-b-c)+2bc(b+c)+4abc\right)$$ and cyclically permuting for $y,z$:
factor(M . columnvector([
a*(c^3+b*c^2-a*c^2+b^2*c+4*a*b*c-a^2*c+b^3-a*b^2-a^2*b+a^3),
b*(c^3-b*c^2+a*c^2-b^2*c+4*a*b*c+a^2*c+b^3-a*b^2-a^2*b+a^3),
c*(c^3-b*c^2-a*c^2-b^2*c+4*a*b*c-a^2*c+b^3+a*b^2+a^2*b+a^3)
]));
gives the output $[0,0,0]$.
Edit: corrected a factor of $2$; the expression is now much uglier.
A advanced geometry question | I will present a slightly unusual solution to this problem...
Assume an ellipse with $B,C$ as foci and $A$ as a point on the ellipse.
Then what the question asks us to show is that the length $DM$ is half the length of the major axis of this ellipse.
For this, simply observe that the external angle bisector is actually the tangent to this ellipse at $A$.
And it is again well known that the foot of the perpendicular from a focus to a tangent lies on the auxiliary circle.
Hence $M$ lies on the auxiliary circle, which is exactly what was intended.
Help with this exercise: $\lim_{x \to \ 0} \frac{\cos2x}{\sin3x} $ | You will have a problem with the existence of this limit. We have $$\lim_{x\to 0} \cos(2x) =1,$$ $$\lim_{x\to 0^+} \sin(3x) =0^+, \quad \lim_{x\to 0^-} \sin(3x) =0^-.$$ Hence $$\lim_{x \to 0^+} \frac{\cos(2x)}{\sin(3x)}= +\infty, \quad \lim_{x \to 0^-} \frac{\cos(2x)}{\sin(3x)}= -\infty.$$ Since the right and the left limits are different, the limit at $0$ doesn't exist. |
Counter-example Aubin-Lions p=1 | Notice that the space you are considering is precisely the space of absolutely continuous functions. Thus, the question reduces to
Can we find a bounded sequence of absolutely continuous functions which does not converge uniformly to a continuous function?
In particular, notice that it would be sufficient to take a sequence $u_n$ of absolutely continuous functions which converges pointwise to a discontinuous function. There are many examples, for instance
$$u_n(x) = \frac{x^n}{T^{n+1}}$$
It seems worth pointing out that in the setting you are considering, the Aubin-Lions lemma is the same as the humble Rellich-Kondrakov theorem. In particular, the space
$$A = \{u \in L^\infty((0,T); \mathbb{R}) | u' \in L^1(0,T;\mathbb{R})\}$$
is equivalent (as a Banach space) to $W^{1,1}((0,T))$, since on one hand, $L^1((0,T)) \supseteq L^\infty((0,T))$ by Hölder's inequality, while on the other hand $L^\infty((0,T)) \supseteq W^{1,1}((0,T))$ by Poincaré's inequality.
Not every submodule has a complement submodule | Finding a direct-sum complement can fail in at least two ways:
$S$ could be an essential submodule (this means that for another submodule $T$, $S\cap T=\{0\}$ implies $T=\{0\}$)
$S$ could be a superfluous submodule (this means that for another submodule $T$, $S+T=M$ implies $T=M$.)
A good place to look for counterexamples is in rings which only have trivial idempotents, since idempotents correspond to module decompositions of the ring as a module over itself.
If you consider $\Bbb Z/4\Bbb Z$ as a module over itself, this module has exactly three submodules arranged in a line. The ring has no nontrivial idempotents. It's easy to see that the nontrivial submodule $2\Bbb Z/4\Bbb Z$ is both essential and superfluous at the same time, and so it (doubly) can't have a direct-summand complement.
Semisimple rings are exactly the class of rings for which you can always find direct-summand complements.
It seems that there are many differences between vector spaces and modules.
Of course. You might be interested in this question: Pathologies in module theory |
Intuition: Convexity of Multivariate Functions and Positive Semidefiniteness of the Hessian | Let $\Omega \subset \mathbb{R}^n$ be open and convex. A $C^1$ function $f \colon \Omega \to \mathbb{R}$ is convex if and only if the tangent plane to the graph of $f$ at any point $x \in \mathbb{R}^n$ lies below the graph of $f$. This means that at any $x \in \Omega$, for any $y \in \Omega$,
$$f(y) \geq f(x) + Df(x)(y - x).$$
In order to use the Hessian, let's assume $f$ is $C^2$. Since $f$ is $C^2$ on a convex open set we can apply Taylor's theorem:
$$f(y) = f(x) + Df(x)(y - x) + \frac{1}{2}(y - x) \cdot H(x + \theta(y - x))(y - x),$$
where $\theta \in (0, 1)$. Positive semidefiniteness of the hessian $H$ at every point ensures that
$$(y - x) \cdot H(x + \theta(y - x))(y - x) \geq 0,$$
and therefore that $f$ is convex. Conversely, if $H(x)$ is not positive semidefinite at some point $x \in \Omega$, then there exists $v \in \mathbb{R}^n$ such that $v \cdot H(x)v < 0$. By continuity of $H$, $v \cdot H(y)v < 0$ for $y$ near $x$. Consequently, for $h \in \mathbb{R}$ small,
\begin{align}
f(x + hv) &= f(x) + Df(x)hv + \frac{1}{2}hv \cdot H(x + \theta hv)hv \\
&= f(x) + Df(x)hv + \frac{1}{2}h^2v \cdot H(x + \theta hv)v \\
&< f(x) + Df(x)hv,
\end{align}
which implies that $f$ is not convex. |
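For a concrete instance, here is a hedged sympy sketch (my own choice of $f$, not from the original answer) computing a Hessian and checking positive semidefiniteness via the principal minors:
import sympy as sp
x, y = sp.symbols('x y', real=True)
f = x**4 + x**2 + x*y + y**2
H = sp.hessian(f, (x, y))
print(H)                        # Matrix([[12*x**2 + 2, 1], [1, 2]])
print(sp.expand(H.det()))       # 24*x**2 + 3 > 0, and H[0,0] > 0, so f is convex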
What is the probability that a random K-bit odd-number is prime? | Your observation is in the right direction.
One often laxly says that the probability that $n$ is prime equals $\frac 1{\ln n}$. The more precise statement is the prime number theorem: There are approximately $\frac n{\ln n}$ primes below $n$ (and of course the really precise statement tells us what "approximately" means in this context).
Now in your setup, a $K$ bit number is a number $<2^K$, hence has "probability" $\frac1{\ln 2^K}=\frac1{K\ln 2}$. If we already exclude the even numbers, we improve this probability to $\frac2{K\ln 2}$. Numerically, we have $\frac2{\ln 2}=2.885\ldots$, which is not the same as $e=2.718\ldots$ but at least not too far off. So you only guessed the constant wrong. |
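An empirical check of this estimate (my own sketch, assuming sympy for primality testing):
import math, random
from sympy import isprime
K, trials = 64, 20000
hits = sum(isprime(random.randrange(1, 1 << K, 2)) for _ in range(trials))   # random odd numbers below 2^K
print(hits / trials, 2 / (K * math.log(2)))   # both should be roughly 0.045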