title | upvoted_answer
---|---|
Prove that $ \lfloor 2x \rfloor \leq 2\lfloor x \rfloor + 1$ | Write $$x = n + p$$ where $n = \lfloor x \rfloor$ is an integer and $0 \leq p < 1$ is the fractional part.
First case $0\leq p < 0.5$:
$$\lfloor 2x \rfloor = 2n \leq 2\lfloor x\rfloor + 1 = 2n + 1$$
Second case $0.5 \leq p < 1$:
$$\lfloor 2x \rfloor = 2n + 1 \leq 2\lfloor x\rfloor + 1 = 2n + 1$$
This completes the proof. |
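A quick numerical sanity check of the inequality, covering both cases of the fractional part (a Python sketch; the grid of test values is an arbitrary choice):

```python
import math

# Check floor(2x) <= 2*floor(x) + 1 on a grid of sample points;
# step 0.01 hits both cases 0 <= p < 0.5 and 0.5 <= p < 1.
for i in range(-2000, 2000):
    x = i / 100.0
    assert math.floor(2 * x) <= 2 * math.floor(x) + 1, x
print("inequality holds at all sampled points")
```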
Angle issue here. Can't make a sensible diagram out of this. | Start with a line segment $RQ$, and make $\angle QRP=48^o$ and $\angle RQP=55^o$. This determines point $P$, and that $\angle RPQ=77^o$. Draw $PS$ such that $\angle RPS=135^o$. So $PS$ is the line from which the bearing of $R$ from $P$ is taken.
To find the bearing of $P$ from $R$ draw $RT$ parallel to $PS$. Hence $\angle TRP$, the supplement to $\angle RPS$, $=45^o$, and the bearing of $P$ from $R$, taken counter-clockwise, is $360^o-45^o=315^o$.
The bearing of $Q$ from $R$, taken from $RT$, is $48^o-45^o=3^o$.
Finally, to get the bearing of $P$ from $Q$, draw $QU$ parallel to $PS$. Since $\angle RPS=135^o$, and $\angle RPQ=77^o$, then $\angle SPQ=135^o-77^o=58^o$, and its supplement $\angle PQU=122^o$. Therefore, the bearing of $P$ from $Q$ is $360^o-122^o=238^o$. However, this last does not agree with the answer key. What has gone wrong here? |
Determine the cumulative distribution function of.. | You should notice that there is something strange in your solution since they ask for a function but what you have obtained is a number.
The fact that they say "as soon as one job is completely processed" is just, in my opinion, another way to say that you are considering a minimum. Now the minimum of independent exponentially distributed random variables is just another exponentially distributed random variable with parameter equal to the sum of the other parameters; in formula:
$$\lambda= \sum_{i=1}^{n}\lambda_i$$
so the distribution function is just:
$$F_Y(t)=P(Y \le t)=\int_{0}^{t}\lambda e^{-\lambda x}\,dx,\quad t\ge0.$$
I think you can compute $E(Y)$ on your own now, just apply the definition. |
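A small simulation supports the claim about the minimum (a NumPy sketch; the rates $\lambda_i$ and the test point $t$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([0.5, 1.0, 2.5])   # arbitrary lambda_i
lam = rates.sum()                   # rate of the minimum
n = 100_000

# Y = min of independent exponentials with the given rates
samples = rng.exponential(1 / rates, size=(n, rates.size)).min(axis=1)

t = 0.3
print(samples.mean(), 1 / lam)                      # E[Y] = 1/lam
print((samples <= t).mean(), 1 - np.exp(-lam * t))  # F_Y(t) = 1 - e^{-lam*t}
```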
Pythagorean Triplets & Prime Factors | Your formula does yield "new" triples, but they are not primitive triples, just multiples of the triples that come from the first formula. Specifically, if $p$ is a factor of $a$, then using the original formula for $p$ yields the triple $(p,\frac{p^2-1}2,\frac{p^2+1}2)$. Multiplying the triple by $\frac a p$ gives $(a,b,c)$ with $b=\frac{pa-\frac{a}{p}}2$. This is the same as your formula.
Note that the original formula doesn't even necessarily give you a primitive triple when $a$ is even. For example, when $a=6$, $b=8.$ In general, the formula only gives you a primitive triple for even $a$ when $4|a$.
There is a relationship between prime factorizations of $a$ and the set of primitive triples that contain $a>1$. Specifically, let $z(n)$ be the number of distinct prime divisors of $n$. Then, for fixed $a>1$, the number of primitive solutions to $a^2+b^2=c^2$ with $b,c>0$ is $2^{z(a)-1}$ if $a$ is odd or $4|a$, and $0$ otherwise.
This can be seen as a result of the fact that primitive triples can be written as $(u^2-v^2,2uv,u^2+v^2)$ where $(u,v)=1$ and $u,v$ are not both odd.
In any event, the point of the formula you found on Wikipedia was not to be exhaustive, but rather to quickly show that any $a$ can be found in some non-trivial triple $(a,b,c)$. |
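The count $2^{z(a)-1}$ can be checked by brute force (a Python sketch; the sample values of $a$ are arbitrary, and the search over $b$ is finite because $b \le (a^2-1)/2$ for a leg $a$):

```python
from math import gcd, isqrt

def z(n):
    """Number of distinct prime divisors of n."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def count_primitive(a):
    """Count primitive triples a^2 + b^2 = c^2 with b, c > 0."""
    count = 0
    for b in range(1, a * a // 2 + 2):
        c = isqrt(a * a + b * b)
        if c * c == a * a + b * b and gcd(a, gcd(b, c)) == 1:
            count += 1
    return count

for a in [3, 6, 12, 15, 21]:
    expected = 2 ** (z(a) - 1) if a % 2 == 1 or a % 4 == 0 else 0
    print(a, count_primitive(a), expected)   # the two columns match
```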
Proving $\int_{0}^\infty \left(\frac{1}{(1+ix)^b}-\frac{1}{(1-ix)^b}\right)\sin(ax)\mathrm{d}x =\frac{-ia^{b-1}e^{-a}\pi}{\Gamma[b]} $ | $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
$\ds{\bbox[10px,#ffd]{\left.\int_{0}^{\infty}\bracks{{1 \over \pars{1 + \ic x}^{b}} -
{1 \over \pars{1 - \ic x}^{b}}}\sin\pars{ax}\,\dd x
\,\right\vert_{\ a, b\ >\ 0} =
-\,{a^{b - 1}\expo{-a} \over \Gamma\pars{b}}\,\pi\ic}:\ {\Large ?}}$.
\begin{align}
&\bbox[10px,#ffd]{\left.\int_{0}^{\infty}
\bracks{{1 \over \pars{1 + \ic x}^{b}} -
{1 \over \pars{1 - \ic x}^{b}}}\sin\pars{ax}\,\dd x\,\right\vert_{\ a,b\ >\ 0}}
\\[5mm] = &\
\ic\,\Im\int_{-\infty}^{\infty}{\sin\pars{ax} \over
\pars{1 + \ic x}^{b}}\,\dd x
\\[2mm] = &\
\ic\,\Im\int_{-\infty}^{\infty}\sin\pars{ax}\
\overbrace{\bracks{{1 \over \Gamma\pars{b}}\int_{0}^{\infty}t^{b - 1}
\expo{-\pars{1 + \ic x}t}\dd t}}
^{\ds{=\ {1 \over \pars{1 + \ic x}^{b}}}}\,\dd x
\\[5mm] = &\
{\ic \over \Gamma\pars{b}}\,\Im\int_{0}^{\infty}t^{b - 1}\expo{-t}
\int_{-\infty}^{\infty}\bracks{\expo{-\pars{t - a}\ic x} -
\expo{-\pars{t + a}\ic x}\over 2\ic}\dd x\,\dd t
\\[5mm] = &\
-\,{\ic\pi \over \Gamma\pars{b}}\int_{0}^{\infty}t^{b - 1}\expo{-t}
\bracks{\delta\pars{t - a} - \delta\pars{t + a}}\,\dd t
\\[5mm] = &\
\bbx{-\,{a^{b - 1}\expo{-a} \over \Gamma\pars{b}}\,\pi\ic}
\qquad\qquad a, b > 0
\\[5mm] &\ \mbox{}
\end{align} |
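A numerical spot-check of the boxed result (an mpmath sketch; the values $a=2$, $b=3/2$ are arbitrary test choices, and quadosc is used for the oscillatory tail):

```python
import mpmath as mp

a, b = mp.mpf(2), mp.mpf('1.5')   # arbitrary test values with a, b > 0

# For real x, (1-ix)^{-b} is the conjugate of (1+ix)^{-b}, so the bracket
# equals 2i*Im((1+ix)^{-b}); integrate the real-valued part and restore 2i.
g = lambda x: mp.im((1 + 1j * x) ** (-b)) * mp.sin(a * x)
lhs = 2j * mp.quadosc(g, [0, mp.inf], period=2 * mp.pi / a)

rhs = -1j * a ** (b - 1) * mp.exp(-a) * mp.pi / mp.gamma(b)
print(lhs, rhs)   # the two values should agree
```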
How do we approximate sum of random variables? | If you know the distribution of each variable, try computing the characteristic function of the partial sums $S_n=\sum_{k=1}^n X_k$, and testing out various normalizations $S_n/c_n$.
For any random variable $Y$, integrable or not, the characteristic function (CF) of $Y$ is
$$\varphi_Y(t):=E\exp(itY)\;.$$
If $S_n$ is the sum of iid variables $X_1+\cdots+X_n$, the CF of $S_n/c_n$ factors into a product of $n$ terms:
$$\varphi_{S_n/c_n}(t)=E\exp(it\sum_{k=1}^n X_k/c_n)=\prod_{k=1}^nE\exp(itX_k/c_n)=[\varphi_X(t/c_n)]^n\;.$$
With luck the CF of the normalized sum will converge to a non-trivial limit (which with luck you'll recognize as the distribution of some random variable $Z$). This implies that you can approximate the distribution of $S_n/c_n$ as $Z$ when $n$ is sufficiently large.
For example, the CF approach can be used to show that the average of $n$ iid Cauchy random variables has the same Cauchy distribution as a single variable. |
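A numerical illustration of the Cauchy example (a NumPy sketch; the sample sizes and $t$ values are arbitrary): the empirical CF of $S_n/n$ matches the standard Cauchy CF $e^{-|t|}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 50_000                 # n terms per sum, m independent replications
S = rng.standard_cauchy(size=(m, n)).sum(axis=1) / n   # the average S_n / n

for t in [0.5, 1.0, 2.0]:
    emp_cf = np.exp(1j * t * S).mean()      # empirical E[exp(it S_n/n)]
    print(t, emp_cf.real, np.exp(-abs(t)))  # standard Cauchy CF: e^{-|t|}
```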
A mapping is "independent of the representation of the set $A$" means...? | $A\in\mathcal{A}$ means that there exists $B_1,\dots,B_k\in\mathcal{C}$ such that
$$A=\bigcup_{i=1}^kB_i$$
This is a representation of $A$. But it's not necessarily unique. You could also have another representation
$$A=\bigcup_{i=1}^lC_i$$
What is said here is that the result that you get for $\bar{\mu}(A)$ does not depend on the representation that you choose for $A$ but only on $A$, that is:
$$\sum_{i=1}^k\mu(B_i)=\sum_{i=1}^l\mu(C_i)$$
This is necessary in order for $\bar{\mu}$ to be well defined. |
commuting algebra of an irreducible representation | As is implicit in the question, the fact that the centralizer is a division algebra is automatic, by the usual Schur's Lemma argument.
Call this division algebra $D$; then $V$ is isomorphic to $D^n$ for some $n$, and by the double centralizer theorem, the image of the group ring of $G$ is $M_n(D)$ (acting on the right, so really I should write $M_n(D^{op})$). But this image is a commutative ring (by the assumption that the rep'n is abelian), so $n = 1$ and $D$ actually is a commutative field. |
Let $T \colon \mathbb R^5 \rightarrow \mathbb R^5 $ be a linear transformation. | Try finding some simple counterexamples for 1, 3 and 4 (Hint: take a diagonal matrix). To see that 2 is true, have a look at the degree of the characteristic polynomial of $T$. |
Orthogonal and orthonormal basis in the vector space of polynomials | Yes, everything is well and good. You followed the Gram-Schmidt process and got a right result. Well done!
Edit: for the orthogonal part. It remains to get integer coefficients (multiply by the pertinent square roots to do away with them) and to re-normalise to get norm-one vectors. |
Pulling Operator Inside Integral | It depends on how you define $Y$ valued integrals.
Such integrals usually satisfy (or are defined by) the condition that
$\phi(\int g(x) d \mu (x)) = \int \phi(g(x)) d \mu (x)$ for all $\phi \in Y^*$.
If this is the case, then we have
\begin{eqnarray}
\phi(\int_{\mathbb{R}}{L_{x}L_{y}f(y)dy} ) &=& \int_{\mathbb{R}}{\phi(L_{x}L_{y}f(y))dy} \\
&=& \int_{\mathbb{R}}{(\phi \circ L_{x})(L_{y}f(y))dy} \\
&=& (\phi \circ L_x) (\int_{\mathbb{R}}{L_{y}f(y)dy} ) \\
&=& \phi(L_x (\int_{\mathbb{R}}{L_{y}f(y)dy} ))
\end{eqnarray}
where we have used the fact that $\phi \circ L_x \in Y^*$.
In other words,
$\int_{\mathbb{R}}{L_{x}L_{y}f(y)dy} = L_x (\int_{\mathbb{R}}{L_{y}f(y)dy} )$. |
Proof that rescaled eigenvector shares same eigenvalue | Since $v$ is an eigenvector of $A$ with eigenvalue $\lambda$ you know that $Av = \lambda v$.
And $A$ is linear. So what is $A(sv)$ ? |
Proving positive definiteness of matrix $a_{ij}=\frac{2x_ix_j}{x_i + x_j}$ | (Update: Some fixes have been added because I solved the problem with $a_{ij}=\frac{2x_ix_j}{x_i+x_j}$ instead of $a_{ij}=\frac{2x_ix_j}{x_i^2+x_j^2}$. Thanks to Paata Ivanisvili for his comment.)
The trick is writing the expression $\frac{1}{x^2+y^2}$ as an integral.
For every nonzero vector $(u_1,\ldots,u_n)$ of reals,
$$
(u_1,\ldots,u_n) \begin{pmatrix}
a_{11} & \dots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{n1} & \dots & a_{nn} \\
\end{pmatrix}
\begin{pmatrix} u_1\\ \vdots \\ u_n\end{pmatrix} =
\sum_{i=1}^n \sum_{j=1}^n a_{ij} u_i u_j =
$$
$$
=2 \sum_{i=1}^n \sum_{j=1}^n u_i u_j x_i x_j \frac1{x_i^2+x_j^2} =
2\sum_{i=1}^n \sum_{j=1}^n u_i u_j x_i x_j \int_{t=0}^\infty
\exp \Big(-(x_i^2+x_j^2)t\Big) \mathrm{d}t =
$$
$$
=2\int_{t=0}^\infty
\left(\sum_{i=1}^n u_i x_i \exp\big(-{x_i^2}t\big)\right)
\left(\sum_{j=1}^n u_j x_j \exp\big(-{x_j^2}t\big)\right)
\mathrm{d}t =
$$
$$
=2\int_{t=0}^\infty
\left(\sum_{i=1}^n u_i x_i \exp\big(-{x_i^2t}\big)\right)^2
\mathrm{d}t \ge 0.
$$
If $|x_1|,\ldots,|x_n|$ are distinct and nonzero then the last integrand cannot be constant zero: for large $t$ the minimal $|x_i|$ with $u_i\ne0$ determines the magnitude order. So the integral is strictly positive.
If there are equal values among $|x_1|,\ldots,|x_n|$ then the integral may vanish. Accordingly, the corresponding rows of the matrix are equal or the negative of each other, so the matrix is only positive semidefinite.
You can find many variants of this inequality. See
problem A.477. of the KöMaL magazine.
The entry $a_{ij}$ can be replaced by $\left(\frac{2x_ix_j}{x_i+x_j}\right)^c$ with an arbitrary fixed positive $c$; see
problem A.493. of KöMaL. |
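A quick eigenvalue check of both cases (a NumPy sketch; the sample vectors are arbitrary):

```python
import numpy as np

x = np.array([0.3, 1.0, 2.7, 4.1, 5.9])   # distinct nonzero |x_i|
A = 2 * np.outer(x, x) / (x[:, None] ** 2 + x[None, :] ** 2)
print(np.linalg.eigvalsh(A))              # all eigenvalues strictly positive

y = np.array([1.0, 1.0, 2.0])             # repeated values
B = 2 * np.outer(y, y) / (y[:, None] ** 2 + y[None, :] ** 2)
print(np.linalg.eigvalsh(B))              # a zero eigenvalue: only semidefinite
```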
Why $\{2,3\}$ isn’t a subset of B? | For a set $C$ to be a subset of another set $B$, every element of $C$ must be an element of $B$. In your case, the set $\{1\}$ has the element $1$ which is also an element of $B$. But the set $\{2, 3\}$ has elements $2$ and $3$, neither of which are elements of the original set $B$. |
$m^*(A \cup B) = m^*(A) + m^*(B)$ | By general properties of outer measures, $m^\ast (A \cup B) \leq m^\ast (A) + m^\ast (B)$, so that we only have to prove the reverse estimate.
In the case $m^\ast(A\cup B) =\infty$, this is trivial, so assume $m^\ast(A \cup B) < \infty$. Let $\varepsilon > 0$ be arbitrary and choose a covering $(I_k)_k$ of $A \cup B$ such that
$$
\sum_k \ell(I_k) \leq m^\ast(A \cup B) + \varepsilon. \qquad (\dagger)
$$
Note that we can assume $I_k \cap (A \cup B) \neq \emptyset$ for each $k$ (why?).
Also, we can replace each interval $I_k = [a_k, b_k]$ by a suitable finite union
$$
I_k = \bigcup_{m=1}^{m_k} [a_{k,m}, b_{k,m}]
$$
with $b_{k,m} - a_{k,m} \leq \frac{\alpha}{2}$ and $\sum\limits_{m=1}^{m_k} (b_{k,m} - a_{k,m}) = \ell(I_k)$, without changing $(\dagger)$, i.e. we can assume $\ell(I_k) \leq \frac{\alpha}{2}$ for all $k$.
Now, let
$$
I_A := \{k \in \Bbb{N} \mid I_k \cap A \neq \emptyset\}, \\
I_B := \{k \in \Bbb{N} \mid I_k \cap B \neq \emptyset\}.
$$
Because of $I_k \cap (A\cup B) \neq \emptyset$ for each $k$, we see $\Bbb{N} = I_A \cup I_B$. Furthermore, $I_A \cap I_B = \emptyset$ follows from your assumption that $|x-y| \geq \alpha$ for all $x \in A$, $y \in B$ together with $\ell(I_k) \leq \alpha/2$ (why?).
Hence,
$$
m^\ast(A) + m^\ast(B) \leq \sum_{k \in I_A} \ell(I_k) + \sum_{k \in I_B} \ell(I_k) = \sum_k \ell(I_k) \leq m^\ast(A \cup B) + \varepsilon.
$$
As $\varepsilon > 0$ was arbitrary, this completes the proof. |
Game theory formula explanation | It means that the value when all players cooperate is at least as large as the value obtained when only some of the players cooperate.
The $\subseteq$ symbol indicates a subset.
How to factor in the 'immediately prior to the administration of the next dose' statement in this question? | You should note the interpretation of the parameters $k$ and $d$. In this case, $k$ is the degradation rate of the drug in the blood stream. And $d$ is the amount of drug being added with each drug administration (an amount, not a rate).
As @Did commented, the amount just before the dose (or prior to $a_{n+1}$) is equal to the amount from previous dose ($a_n$) subtracts total amount of drug being used up by the body ($a_n k$). Thus your condition is:
$$a_n - k a_n > \frac{1}{2}$$
Now, at equilibrium, $a_n = a_\infty = d/k$, so the condition becomes:
$$\frac{d}{k} - k \frac{d}{k} > \frac{1}{2}$$
Note that since $k$ is a rate, $k < 1$, so the above expression can be simplified to:
$$ d > \frac{1}{2} \frac{k}{1 - k}$$ |
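A short simulation of the implied recurrence $a_{n+1}=(1-k)a_n+d$ illustrates the approach to the equilibrium $d/k$ (a Python sketch; the values of $k$, $d$, and the starting amount are arbitrary assumptions):

```python
k, d = 0.25, 0.2        # assumed degradation rate and dose size
a = d                   # amount right after the first dose (an assumption)
for n in range(60):
    a = a - k * a + d   # decay over one interval, then add the next dose
print(a, d / k)         # a_n settles at the equilibrium d/k
print(a - k * a > 0.5)  # the 'just before the next dose' condition
```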
How can I prove this basic implication of gcd? | Hint. We have to show that $xy$, $z$ and $y$, $z$ have exactly the same common divisors. It is clear that if $d$ is a common divisor of $y$, $z$ then $d\mid y$ implies $d\mid xy$.
It remains to show that if $d\mid xy$ and $d\mid z$ then $d\mid y$. Note that $\gcd(x,z)=1$ implies that there are $a,b\in\mathbb{Z}$ such that $ax+bz=1$, and therefore $axy+bzy=y$. |
In the expression $a+it$ for a complex number, is the "$+$" just there to join the real and imaginary parts? or do the rules of arithmetic apply? | The original reason for the $x + iy$ notation probably dates from a day of less formal and more intuitive notation.
A more contemporary reason is to define the complex numbers formally as ordered pairs $(x, y)$ but then observe that the subset $(x, 0)$ is isomorphic to the real numbers. We now redefine the reals to be this subset and use just $x$ as a shorthand for $(x,0)$. Next we define $i$ to be $(0,1)$. We then find that $(x,y) = x(1,0) + y(0,1) = x +iy$. So, $x + iy$ is just a convenient and familiar way to write $(x,y)$.
Similar steps happen when the integers are defined from the natural numbers, the rational numbers from the integers, and the reals from the rationals. Strictly speaking the complex number $1$ is not the same as the real number $1$ and that is not the same as the rational $1$, the integer $1$, or the natural number $1$. Each time we extend, we redefine the old smaller system as a subset of the new isomorphic to it. Most of the time this is not confusing and makes life and notation simpler. Occasionally, you need to remember this. |
Exact functor and relationship between Ext functors | What your argument gives is a homomorphism $\operatorname{Ext}^r_\mathcal{A}(X,Y)\to\operatorname{Ext}^r_{\mathcal{B}}(F(X),F(Y))$ (to see that it is a homomorphism, just note that applying $F$ to everything preserves the Baer sum operation since $F$ is exact). However, this homomorphism may not be injective, so there is not necessarily any reasonable way you can consider $\operatorname{Ext}^r_\mathcal{A}(X,Y)$ as a subset of $\operatorname{Ext}^r_{\mathcal{B}}(F(X),F(Y))$. For instance, $F$ might be the zero functor in which case $\operatorname{Ext}^r_{\mathcal{B}}(F(X),F(Y))$ is always trivial, regardless of whether $\operatorname{Ext}^r_\mathcal{A}(X,Y)$ was trivial. |
Given sequence $(a_n)$ such that $a_{n + 2} = 4a_{n + 1} - a_n$. Prove that $\exists \frac{a_i^2 + 2}{a_j}, \frac{a_j^2 + 2}{a_i} \in \mathbb N$. | The condition holds true for all pairs $a_i, a_{i+1}$.
In fact, the relation $a_i^2+2=a_{i-1}a_{i+1}$ holds for all $i$, from which the above result follows.
This can be proven by induction. Assume $a_i^2+2=a_{i-1}a_{i+1}$ is true for $1<i<n$. That makes
$$
\begin{split}
a_{n-1}a_{n+1}
& = a_{n-1}\cdot(4a_n-a_{n-1})
= 4a_{n-1}a_n-a_{n-1}^2 \\
& = 4a_{n-1}a_n-(a_{n-2}a_n-2)
= (4a_{n-1}-a_{n-2})\cdot a_n + 2 \\
& = a_n^2 + 2
\end{split}
$$
for $n>2$, and you just have to verify that the relation holds for $i=2$, which is the first case not covered by the induction step. |
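The invariant is easy to check numerically (a Python sketch; the starting values $a_1=1$, $a_2=3$ are an assumption chosen so that the relation holds at $i=2$):

```python
# Verify a_i^2 + 2 = a_{i-1} * a_{i+1} along a_{n+2} = 4*a_{n+1} - a_n.
# Starting values a_1 = 1, a_2 = 3 are assumed; then a_2^2 + 2 = 11 = a_1 * a_3.
a = [1, 3]
for _ in range(20):
    a.append(4 * a[-1] - a[-2])
assert all(a[i] ** 2 + 2 == a[i - 1] * a[i + 1] for i in range(1, len(a) - 1))
print(a[:6])   # [1, 3, 11, 41, 153, 571]
```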
Calculating the expectation of $\frac{1}{|W \cap W_{tU}|}$ where W is a square with side $a$. | See comments under the question. Introducing $b=t/a$ in the first equality, $$\int_{-\pi}^{\pi}\frac{1}{(a-t|\cos(\alpha)|)(a-t|\sin(\alpha)|)} \mathrm{d}\alpha = \frac{1}{a^2} \int_{-\pi}^{\pi}\frac{1}{(1-b|\cos(\alpha)|)(1-b|\sin(\alpha)|)} \mathrm{d}\alpha = \\ = \frac{1}{a^2}4\int_0^{\pi/2}\frac{\mathrm{d}\alpha}{(1-b\cos\alpha)(1-b\sin\alpha)}.$$
Solving the last integral in Mathematica yielded the following for $b\in(0,1)$:
$$\frac{4}{a^2}\frac{2 \left(\left(b^2-1\right) \log (1-b)+\sqrt{1-b^2} \left(-\tan ^{-1}\left(\frac{b-1}{\sqrt{1-b^2}}\right)+\tan ^{-1}\left(\frac{b+1}{\sqrt{1-b^2}}\right)+\sin ^{-1}(b)\right)\right)}{b^4-3 b^2+2}.$$ |
Unusual result to the addition | Hint: the square of the number made of $n$ digits of $6$ is equal to
$$
\left(6\left(\frac{10^n-1}{9}\right)\right)^2 = \frac{4}{9}(10^n-1)^2
= \frac{4}{9}(10^{2n}-2\cdot 10^n+1).$$
Apply a similar transformation to the term with $8$'s, then see if you can simplify the sum. |
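A quick check of the repdigit identity (a Python sketch):

```python
# The number with n digits of 6 is 6*(10^n - 1)/9; its square matches
# the expanded form (4/9)*(10^(2n) - 2*10^n + 1).
for n in range(1, 8):
    six = 6 * (10 ** n - 1) // 9   # 6, 66, 666, ...
    assert six ** 2 == 4 * (10 ** (2 * n) - 2 * 10 ** n + 1) // 9
print("identity verified for n = 1..7")
```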
Definition - filter basis | By any definition I know of, a filter $\mathscr{F}$ is non-empty (we can say for sure that $E \in \mathscr{F}$), so $BF_2$ forces $\mathscr{B}$ to be non-empty, in order to contain a subset of $E$. |
How can I prove that $f$ is continuous at $0$? | I claim that $\exists M \geq 0$ such that
$$
|f(x)| \leq M \quad\forall x\in E \text{ such that } \|x\| \leq 1 \qquad(\ast)
$$
If not, then $\exists x_n \in E$ such that $\|x_n\| \leq 1$ and
$$
|f(x_n)| > n^2
$$
Hence, take
$$
y_n = x_n/n
$$
Then, $y_n \to 0$ and $\{f(y_n)\}$ is unbounded.
Now can you prove that $f$ is continuous at $0$ from $(\ast)$? |
Calculate integral of Gaussian Geometry | Local Gauss-Bonnet Theorem. Let $R$ be a simple region of a surface $S$ and let $\alpha\colon I \to S$
be the boundary of the region. Assume $\alpha$ is positively oriented, parametrised by arc length $s$ and let $\theta_1, \ldots, \theta_k$ be the external angles at the vertices of $\alpha$. Then
$$ \int_{R} K\, dA + \sum_{i=1}^k \int_{a_i}^{b_i} k_g(s)\,ds + \sum_{i=1}^k \theta_i= 2\pi,$$
where $K$ is the Gauss curvature and $k_g$ stands for the geodesic curvature of the regular arcs of $\alpha$. (Reference: Differential Geometry of Curves and Surfaces by do Carmo.)
So in order to calculate the integral, we need to calculate two things: (1) the total geodesic curvature along all arcs, and (2) the external angles at the vertices.
Step 0: Parametrizing the arcs
A parametrisation of $\Sigma_r$ is
$$
x\colon \left[0,\frac{\pi}{2}\right]\times[0,r]: (u,v)\mapsto \left(v\cos u, v \sin u, \cos v\right).
$$
This surface is bounded by the arcs
$$
\begin{align*}
\alpha_1(t)&= x(0,t)
= (t,0,\cos t), \qquad \text{$t\in[0,r]$} \\
\alpha_2(t)&= x(t,r)
= (r\cos t, r \sin t, \cos r), \qquad\text{$t\in[0,\frac{\pi}{2}]$} \\
\alpha_3(t)&= x\left(\frac{\pi}{2},t\right)
= (0,r-t,\cos(r-t)), \qquad \text{$t\in[0,r]$.} \\
\end{align*}
$$
Step 1: Calculating the geodesic curvature
Big hint: the meridians of a surface of revolution are geodesics. Use this fact, it will save you 66% of the work. If you didn't see this fact in your course, prove it or look it up.
Step 2: Calculating the external angles
Note that the arcs are coordinate lines. It is easily seen that $x$ is an orthogonal parametrisation of $\Sigma_r$, so the external angle at every vertex is $\frac{\pi}{2}$.
I hope this answer is helpful for you. |
Deriving a Formula for the determinant of a block matrix. | You're applying Laplace's formula as if it applied to blocks, but it applies to elements. (This is also reflected in the notation; usually lowercase letters are used for matrix entries and uppercase letters for matrices.)
If Laplace's formula held for blocks (with the determinant of the block where you're using the block itself), then what you're trying to prove would be just a special case of it. However, while it works in this special case, for arbitrary blocks as in
$$
\pmatrix{A&B\\C&D}
$$
with all four blocks non-zero, it doesn't work, because there are cross-terms involving entries in all four blocks whereas the expansion would contain only terms with entries from two blocks at a time. |
Upper bound of an operation | Here is a heat kernel approach: Consider the heat semigroup $(e^{t\Delta})$. It is well known (and can be seen quite easily using the Fourier transform) that
$$
e^{t\Delta}f(x)=(4\pi t)^{-n/2}\int_{\mathbb{R}^n}e^{-\frac{|x-y|^2}{4t}}f(y)\,dy.
$$
Direct computation shows that $\|e^{t\Delta}f\|_p\leq \|f\|_p$. In fact, it suffices to show this for $p=2$ and $p=\infty$; the other cases follow by interpolation and duality.
The resolvent is the Laplace transform of the heat semigroup, that is,
$$
(-\Delta+\lambda)^{-1}f=\int_0^\infty e^{-t\lambda}e^{t\Delta}f\,dt.
$$
Thus
$$
\|(-\Delta+\lambda)^{-1}f\|_p\leq \int_0^\infty e^{-t\lambda}\|e^{t\Delta}f\|_p\,dt\leq \|f\|_p\int_0^\infty e^{-t\lambda}\,dt=\frac 1{\lambda}\|f\|_p.
$$
In your particular case you get $\|L^{-1}\|_{p\to p}\leq 1$. |
$\max _{w \in \mathbb{R}^{m}: w \neq 0} \frac{ |w A|}{|w|} \leq \max _{v \in \mathbb{R}^{n}: v \neq 0} \frac{ |A v|}{|v|}$ | I think the inequality is actually an equality; the reasons are as follows:
Do the singular value decomposition: $A=USV$, where $U\in O(m),V\in O(n) $ and $S$ is a diagonal matrix with the diagonal elements being the singular values of $A$. Then you can see
$$ \max _{w \in \mathbb{R}^{m}: w \neq 0} \frac{ |w A|}{|w|} = \max _{w \in \mathbb{R}^{m}: w \neq 0} \frac{ |wUS|}{|wU|} = \max _{w \in \mathbb{R}^{m}: w \neq 0} \frac{ |wS|}{|w|}$$
and
$$ \max _{v \in \mathbb{R}^{n}: v \neq 0} \frac{ | Av|}{|v|} = \max _{v \in \mathbb{R}^{n}: v \neq 0} \frac{ |SVv|}{|Vv|}=\max _{v \in \mathbb{R}^{n}: v \neq 0} \frac{ |S v|}{|v|} ,$$
both of which are equal to the largest singular value of $A$. |
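This equality is easy to confirm numerically (a NumPy sketch; the random matrix is arbitrary): the operator norms of $v \mapsto Av$ and $w \mapsto wA$ both equal the largest singular value.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 6))                     # arbitrary m x n matrix

sigma_max = np.linalg.svd(A, compute_uv=False)[0]
print(np.linalg.norm(A, 2), sigma_max)          # max |Av|/|v|
print(np.linalg.norm(A.T, 2), sigma_max)        # max |wA|/|w|, since |wA| = |A^T w^T|
```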
why use complex numbers when representing periodic signals? | It's not a matter of intuition*, it's a matter of expedience.
My professor in Electromagnetism and Waves used to like to say,
"Now let's complexify that."
What he meant was, "At this point I'm going to rewrite this using $e^{ix}$-like formulas
so that we can complete this lesson this morning rather than having to stretch it over
two or three class periods."
You don't use a calculator (or computer) to compute square roots because it gives you a
better intuition about square roots. You use the calculator because it takes so much
longer to extract a square root with pencil and paper.
The difference is not quite that dramatic (perhaps) when you change real sinusoids
to complex exponentials, but it was dramatic enough to make that professor's
phrase memorable to me decades later. It was kind of like dropping some sinusoidal
functions into the hopper of a big mathematical machine, then you turn the crank and
answers come dropping out the bottom.
The examples you linked to don't really seem to do anything particularly useful with the
complex exponentials. They seem to just be introducing the idea that you can make a
correspondence between sinusoids and complex exponentials. The wonderful mathematical
machine is presumably waiting to be brought out in a later lesson.
*Or maybe it is a matter of intuition. As noted in a comment below, if you make certain associations between features of sinusoids and features of complex numbers, you can develop a good intuition for dealing with phase and amplitude. |
convergence in probability of mean of sequence $X_{k+1}=\beta X_k+\epsilon_k$ | The final version is based on the helpful (and constructive) comments (see below) that pointed out mistakes and flaws of the previous efforts to answer the question.
By substituting iteratively you find that $$\begin{align*}X_{k+1}&=βX_{k}+\epsilon_k=β(βX_{k-1}+\epsilon_{k-1})+\epsilon_k=β^2X_{k-1}+β\epsilon_{k-1}+\epsilon_k=\ldots\\\\&=β^kX_1+\left(\sum_{l=0}^{k-1}β^l\epsilon_{k-l}\right)\\&=\sum_{l=0}^{k-1}β^l\epsilon_{k-l}\end{align*}$$ since $X_1=0$. Thus \begin{array}{rcrcrr}X_2&=&\epsilon_1\\X_3&=&β\epsilon_1&+&\epsilon_2\\X_4&=&β^2\epsilon_1&+&β\epsilon_2&+&\epsilon_3&\\\ldots&&\ldots&&\ldots&&\ldots\\X_{n+1}&=&β^{n-1}\epsilon_1&+&β^{n-2}\epsilon_2&+&β^{n-3}\epsilon_3&+\ldots+β\epsilon_{n-1}+\epsilon_n\end{array}
Summing up both sides, we obtain $$\begin{align*}\sum_{k=1}^{n}X_{k+1}&=\frac{1-β^n}{1-β}\epsilon_1+\frac{1-β^{n-1}}{1-β}\epsilon_2+\ldots+\frac{1-β^2}{1-β}\epsilon_{n-1}+\frac{1-β}{1-β}\epsilon_n\\&=\sum_{k=1}^{n}\frac{1-β^{n+1-k}}{1-β}\epsilon_k=\ldots\\&=\frac{1}{1-β}\left(\sum_{k=1}^{n}\epsilon_k-β^{n+1}\sum_{k=1}^{n}\frac{1}{β^k}\epsilon_k\right)\end{align*}$$ and finally $$\bar{X}_n=\frac{1}{n(1-β)}\left(\sum_{k=1}^{n}\epsilon_k-β^{n+1}\sum_{k=1}^{n}\frac{1}{β^k}\epsilon_k\right)$$ From this form we can calculate $E[\bar{X}_n]$ and (especially) $Var(\bar{X}_n)$ since the $\epsilon_k$ are independent. Indeed $$E[\bar{X}_n]=\frac{1}{n(1-β)}\left(\sum_{k=1}^{n}μ-β^{n+1}\sum_{k=1}^{n}\frac{1}{β^k}μ\right)=\frac{μ}{1-β}\left(1-\frac{β^{n+1}-1}{n(1-β)}\right) \longrightarrow \frac{μ}{1-β}$$ as $n \to \infty$ and similarly $$\begin{align*}Var[\bar{X}_n]&=\frac{1}{n^2(1-β)^2}\left(\sum_{k=1}^{n}σ^2-(β^2)^{n+1}\sum_{k=1}^{n}\frac{1}{(β^2)^k}σ^2\right)\\\\&=\frac{σ^2}{(1-β)^2}\left(\frac{1}{n}-\frac{(β^2)^{n+1}-1}{n(1-β^2)}\right) \, \longrightarrow \, 0\end{align*}$$ as $n \to \infty$. Hence $$E[\bar{X}_n^2]=Var(\bar{X}_n)+E[\bar{X}_n]^2 \longrightarrow 0+\frac{μ^2}{(1-β)^2}$$ which is enough to obtain $\mathcal L^2$ convergence $$E\left[\left(\bar{X}_n-\frac{μ}{1-β}\right)^2\right]=E\left[\bar{X}_n^2-2\bar{X}_n\frac{μ}{1-β}+\frac{μ^2}{(1-β)^2}\right]\to \frac{2μ^2}{(1-β)^2}-\frac{2μ^2}{(1-β)^2}=0$$ from which you can conclude that $$\overline{X}_n=\frac{1}{n}\sum_{k=1}^{n}X_k\overset{p}\rightarrow \frac{μ}{1-β}$$ since convergence in $\mathcal L^r$ for $1\le r$ implies convergence in probability (see here).
Summing up both sides of the given relation $$\sum_{k=1}^{n}X_{k+1}=β\sum_{k=1}^{n}X_k+\sum_{k=1}^{n}\epsilon_k$$ gives you after simple calculations that $$\bar{X}_n=\frac{1}{n(1-β)}\left(\sum_{k=1}^{n}\epsilon_k-X_{n+1}\right)$$ and thus obtaining a closed form for $X_{n+1}$ (as above) is enough to obtain the result with less calculations than above. |
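A simulation of the recursion confirms the limit (a NumPy sketch; $\beta$, $\mu$, $\sigma$ are arbitrary choices with $|\beta|<1$):

```python
import numpy as np

rng = np.random.default_rng(3)
beta, mu, sigma = 0.6, 1.0, 2.0      # arbitrary parameters with |beta| < 1
n = 200_000

eps = rng.normal(mu, sigma, size=n)  # iid noise with mean mu
X = np.empty(n)
X[0] = 0.0                           # X_1 = 0, as in the derivation
for k in range(n - 1):
    X[k + 1] = beta * X[k] + eps[k]

print(X.mean(), mu / (1 - beta))     # sample mean approaches mu/(1-beta)
```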
Calculating an ugly integral | When you convert it to the iterated integral, $y$ should range from $0$ to $\frac1x$, not from $1$ to $\frac1x$. Thus, your first few steps should be as follows:
$$\begin{align*}
\int_A\left(\frac1x\right)y^{1/2}dA&=\int_1^\infty\int_0^{1/x}\left(\frac1x\right)y^{1/2}dydx\\
&=\frac23\int_1^\infty\left(\frac1x\right)[y^{3/2}]_0^{1/x}dx\\
&=\frac23\int_1^\infty\left(\frac1x\right)\left(\frac1x\right)^{3/2}dx
\end{align*}$$
(where I’ve pulled out the constant factor to get it out of the way). In other words, you shouldn’t have $\frac23\left(\frac1x\right)^{3/2}-\frac23$, but just $\frac23\left(\frac1x\right)^{3/2}$. Now simplify to
$$\frac23\int_1^\infty\left(\frac1x\right)\left(\frac1x\right)^{3/2}dx=\frac23\int_1^\infty x^{-5/2}dx$$
and continue from there. Be a little careful: this is an improper integral, so you should be rewriting it as
$$\frac23\lim_{a\to\infty}\int_1^a x^{-5/2}dx\;.$$
You should now find that it does indeed exist. |
Find the volume of the region inside both the sphere $x^2+y^2+z^2=4$ and the cylinder $x^2+y^2=1$ | Note that

  * the intersection of the cylinder and the sphere is at $z=\pm \sqrt 3$, that is $\varphi=\frac{\pi}6$ and $\varphi=\frac{5\pi}6$;
  * the first integral is twice the "ice cream" upper part of the sphere (in spherical coordinates);
  * the second integral is the residual part inside the cylinder (in spherical coordinates);
  * in the second integral $r(\varphi)=\frac1{\sin \varphi}$. |
Prove that, for any polygon, taking all pair of adjacent angles, subtracting 180 from their sum, and adding all the results together equals $180(n-4)$ | For an $n$-gon, consider a line rotating around it: it rotates through $360$. This implies the standard result that the sum of the exterior angles in the direction of the rotation is $360$, so the sum of the interior angles is $n\cdot 180-360 = 180(n-2)$. Therefore the sum of the exterior angles on both sides of each vertex is $720$. From your first diagram, each angle of the extended polygon is $180$ minus the sum of the adjacent exterior angles, so their sum is $180n-720 = 180(n-4)$. |
The cardinality of the power set with $N$ elements is equal to $2^N$ | Your proof is "correct", but contains a massive omission. Why should $|\mathcal{P}(X_{N+1})|$ be twice the size of $|\mathcal{P}(X_{N})|$? That's the main part of the proof right there; everything else is the usual induction setup (which you did correctly).
The usual way to complete this gap is to take a set of size $N+1$, and distinguish some arbitrary element; let's paint it blue. Subsets of this set either do or do not contain the blue element; there are $|\mathcal{P}(X_N)|$ of each type, which by induction equals $2^N$. Hence we calculate $2^N+2^N=2^{N+1}$. |
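The blue-element split is easy to see concretely (a Python sketch with a small example set):

```python
from itertools import combinations

def subsets(xs):
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

X = {1, 2, 3, 4}
blue = 4                            # the distinguished ("blue") element
with_blue = [S for S in subsets(X) if blue in S]
without = [S for S in subsets(X) if blue not in S]

# Each type has 2^N members, so |P(X_{N+1})| = 2^N + 2^N = 2^{N+1}
print(len(with_blue), len(without), len(subsets(X)))   # 8 8 16
```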
When defining ordered pairs, are there any important distinctions between $\{\{a\},\{a,b\}\}$ and $\{a,\{b\}\}$? | Unfortunately, with the new definition, both $\langle\{0\},1\rangle$ and $\langle\{1\},0\rangle$ equal $\{\{0\},\{1\}\}$. Thus this definition is not suitable for ordered pairs. |
Positive integers $(a, b, c)$ are a primitive Pythagorean triple | Line up your ducks. And then shoot them.
Does
$(m^2 - n^2)^2 + (2mn)^2 \overset{?}{=} (m^2+n^2)^2$
$m^4 - 2m^2n^2 + n^4 + 4m^2n^2 \overset{?}{=} m^4 + 2m^2n^2 + n^4$
$m^4 + 2m^2n^2 + n^4 \overset{?}{=} m^4 + 2m^2n^2 + n^4$?
The answer is... yes, it does.
So $m^2-n^2, 2mn, m^2 + n^2$ are a Pythagorean triple.
====
But are they a primitive triple? That is:
are $m^2 - n^2$ and $2mn$ relatively prime if $m,n$ are relatively prime and not both odd?
If $p$ is a prime divisor that divides $2mn$ then either
$p|2$ so $p=2$.
But $m,n$ are relatively prime, so they are not both even, and by assumption they are not both odd, so $m^2 -n^2$ is odd and hence $p\not \mid m^2 - n^2$.
$p|m$
But $m,n$ are relatively prime, so $p\not \mid n$. So $p|m^2$ but not $n^2$, hence $p \not \mid m^2 -n^2$.
$p|n$
Same argument. $p\not \mid m$, so $p|n^2$ but not $m^2$, and therefore $p\not \mid m^2 - n^2$.
So no prime factor of $2mn$ is a factor of $m^2 - n^2$, hence $m^2-n^2$ and $2mn$ are relatively prime.
So $m^2-n^2, 2mn, m^2+n^2$ is a primitive Pythagorean triple. |
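The coprimality argument can be spot-checked exhaustively for small parameters (a Python sketch; the bound is arbitrary):

```python
from math import gcd

# For coprime m > n >= 1 of opposite parity, gcd(m^2 - n^2, 2mn) should be 1.
for m in range(2, 60):
    for n in range(1, m):
        if gcd(m, n) == 1 and (m - n) % 2 == 1:   # opposite parity
            assert gcd(m * m - n * n, 2 * m * n) == 1
print("all tested (m, n) give a primitive triple")
```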
Slope interpretation of a log model? | The change in $y$ should be
\begin{align}\frac{e^{\beta_1+\beta_2 (x+1)}-e^{\beta_1+\beta_2 (x)}}{e^{\beta_1+\beta_2 (x)}} \times 100 \% &=(e^{\beta_2}-1)\times 100\%\\
&=(\beta_2 + \frac{\beta_2^2}2+\ldots )\times 100 \%\end{align}
The approximation requires $\beta_2$ to be small. |
Can anyone prove this formula? | For $n>2$ the prime $p_n$ is greater than $3$, so it must be of the form $6k+1$ or $6k+5$. If $p_n=6k+1$, then $$6\left\lfloor \frac{p_n}6+\frac12\right\rfloor= 6\left\lfloor k+\frac16+\frac12\right\rfloor=6k\;,$$ and $$\left\lfloor\frac{p_n}3\right\rfloor=\left\lfloor\frac{6k+1}3\right\rfloor=2k\;,$$ so $$6\left\lfloor\frac{p_n}6+\frac12\right\rfloor+(-1)^{\left\lfloor\frac{p_n}3\right\rfloor}=6k+(-1)^{2k}=6k+1=p_n\;.$$
If $p_n=6k+5$, then $$6\left\lfloor \frac{p_n}6+\frac12\right\rfloor= 6\left\lfloor k+\frac56+\frac12\right\rfloor=6(k+1)\;,$$ and $$\left\lfloor\frac{p_n}3\right\rfloor=\left\lfloor\frac{6k+5}3\right\rfloor=2k+1\;,$$ so $$6\left\lfloor\frac{p_n}6+\frac12\right\rfloor+(-1)^{\left\lfloor\frac{p_n}3\right\rfloor}=6(k+1)+(-1)^{2k+1}=6(k+1)-1=6k+5=p_n\;.$$ |
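Both cases can be verified mechanically for all small primes (a Python sketch; the bound is arbitrary):

```python
# Check p = 6*floor(p/6 + 1/2) + (-1)^floor(p/3) for every prime p > 3.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

for p in primes_up_to(10_000):
    if p > 3:
        # floor(p/6 + 1/2) = (p + 3) // 6 for integer p
        assert 6 * ((p + 3) // 6) + (-1) ** (p // 3) == p
print("formula verified for all primes up to 10000")
```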
How can {rational x where x > C} set be substantially different from {rational x where x < C} set? | The problem is in the sentence that begins "As far as I understand it, it also means $\dots$"; it does not mean that. The set of rational numbers $q$ such that $-q<\Omega$ is computably enumerable. Just run a subroutine that computably enumerates the $q$'s with $q<\Omega$ (according to the first bullet item in the question) and, whenever a number appears in that enumeration, reverse its sign and output the result. |
What does $V^*\otimes_\Bbbk V$ look like inside $End_\Bbbk V$ for infinite-dimensional $V$? | Yes. If $\sum_{j=1}^n f_j\otimes e_j$ is an element of $V^*\otimes_k V$, you can assume WLOG that $\{e_1,\ldots,e_n\}$ is linearly independent. If $\sum_{j=1}^n f_j(v)e_j=0$ for all $v\in V$, then for each $j$, $f_j(v)=0$ for all $v\in V$, so that $f_1=f_2=\cdots=f_n=0$.
They are the linear transformations with finite dimensional range. If the range of $\phi\in \mathrm{End}(V)$ has basis $e_1,\ldots,e_n$, then you can write $\phi(v)=\sum_{j=1}^n c_j(v)e_j$, and each coefficient map $c_j:V\to k$ is linear. Thus $\phi$ comes from $\sum_{j=1}^n c_j\otimes e_j$. |
Density of points in 2D projection of a sphere | It's the opposite:

  1. The points are denser near the edge of the circle.
  2. There is no (statistical) difference in looking at one hemisphere.
To see geometrically why 1. is true, project a circle (or the top half of a circle) in a plane to the horizontal axis. An element of arc length is "squashed" according to its "steepness". A fixed element of arc length maps to a shorter interval near the left and right edges, so after projection, a horizontal interval of given length contains more points if it lies near the left or right end.
Alternatively, graph $y = \sqrt{1 - x^{2}}$, divide the interval $[-1, 1]$ into $n$ pieces of equal length, and plot the corresponding points on the graph. The intervals near the ends subtend a larger arc on the circle. |
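A Monte Carlo check of point 1 (a NumPy sketch; the strip widths are arbitrary): compare the density per unit area near the center and near the rim of the projected disk.

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.normal(size=(400_000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # uniform points on the unit sphere

r = np.hypot(pts[:, 0], pts[:, 1])   # radius after projecting onto a plane

# Point density per unit area of the disk, near the center vs. near the rim
inner = np.mean(r < 0.1) / (np.pi * 0.1 ** 2)
outer = np.mean(r > 0.995) / (np.pi * (1 - 0.995 ** 2))
print(inner, outer)   # the density near the rim is far higher
```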
The differential and smooth sections | I know this feeling pretty well, it's really confusing, until it starts to make sense! I am not an expert, but maybe my student-like point of view can help you a bit.
Notation:
I use $T$ to denote the tangential (or differential) of a map, i.e. for $f: M \to N$, we get the tangential as $Tf: TM \to TN$. The restriction to a point $x$ is then given by $T_x f: T_xM \to T_{f(x)}N$.
For tangent spaces, I use the directional derivatives as the definition.
The Jacobian of a real valued function at point $x$ is denoted by $Df(x)$.
About the smoothness:
Let me define $U = (0,1)$ to avoid some confusion.
My approach would be to write $\dot{c}$ as a composition of smooth maps $U \rightarrow TU \rightarrow TM$, where the first map is given by $t \mapsto (t,1)$ and the second map is given by the tangential of $c$.
Smoothness of the first map:
The first map is smooth. To check this, we compute the chart representation. If we take $\mathrm{id}_U: U \to (0,1)$ as a chart, then the chart for $TU$ is given by $T \mathrm{id}_U: TU \to (0,1) \times \mathbb{R}$.
With respect to these charts the map is then given by
$$ (0,1) \to (0,1) \times \mathbb{R} : t \mapsto (t,1),$$
which is a smooth map.
Smoothness of the second map:
If $c : U \rightarrow M$ is smooth, then its tangential
$Tc : TU \rightarrow TM$ is also a smooth map between the two manifolds $TU$ and $TM$. (Is this already known at the point of this exercise?)
What is $\xi$ exactly?
Let us fix $t$, then $\dot{c}(t) =: (c(t),v(t)) \in T_{c(t)}M$ is the tangent vector we are interested in.
To get some coordinates of $v(t)$, we first need a basis for the vector space $T_{c(t)}M$. Like in your question, we take a chart around $c(t)$, say
$$\varphi: U \to \mathbb{R}^n: p \mapsto (x^1(p), \dots, x^n(p)),$$ where $U$ is an open set containing $c(t)$.
Now each individual component $x^i$ of the chart is related to a tangent vector, denoted by $\frac{\partial}{\partial x^i}\big|_p$. Its defining property is being the corresponding directional derivative after applying the chart:
$$\frac{\partial}{\partial x^i}\big|_p(f) = (\partial_i (f \circ \varphi^{-1}) ) ( \varphi(p) ),\quad \text{for all } f\in C^{\infty}(M).$$
In this way the chart $\tilde{\varphi}$ is constructed.
An explicit formula for the coefficients is then given by
$$\xi^i(t) = D( x^i \circ c ) ( t ) = \frac{\partial (x^i \circ c)}{\partial t}\big|_t.$$
Hence you are right with the intuition, that the coefficients are smooth in $t$, as being partial derivatives
$$\dot{c}(t) = \sum_i \frac{\partial (x^i \circ c)}{\partial t}\big|_t \frac{\partial}{\partial x^i} \big|_{c(t)}$$
The more general situation is summarized in this diagram, where $h$ and $k$ are some charts and $\eta$, $\xi$ are charts for the associated tangent spaces. (The domains of these maps may need to be restricted, but I don't want to make it more complicated. So I assume that $h,k$ are global charts and I don't restrict the map $k \circ f \circ h^{-1}$ onto its real domain.)
$$
\newcommand{\ra}[1]{\!\!\!\!\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!\!\!\!}
\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
%
\begin{array}{lll}
TM & \ra{Tf} & TN \\
\da{h \times \eta} & & \da{k \times \xi}\\
\mathbb{R}^m \times \mathbb{R}^m & \ra{k \circ f \circ h^{-1} \times D(k \circ f \circ h^{-1})} & \mathbb{R}^n \times \mathbb{R}^n \\
(x,\eta) & \mapsto ((k \circ f \circ h^{-1}) (x), D(k \circ f \circ h^{-1})(x)[\eta]) & = (y,\xi)
\end{array}
$$
For $x = h(p)$ and $v = \sum_i \eta^i(v) \frac{\partial}{\partial x^i} \big|_p \in T_p M$ we get the formula for the coefficients of
$$T f( v ) = \sum_j \xi^j( Tf(v) ) \frac{\partial}{\partial y^j} \big|_{f(x)},$$
given by
$$ \xi( Tf(v) ) = D(k \circ f \circ h^{-1})(x)[\eta(v)] = \left( \sum_{j=1}^m \frac{\partial (k^i \circ f \circ h^{-1})}{\partial x^j}\big|_{x} \cdot \eta^j(v) \right)_{i=1,\dots,n} \in \mathbb{R}^n$$
Like you see, this is not really an equation you want to use often,
but on the other hand it is just the normal derivative and some chain rules.
In my experience, you can usually take advantage of the special kind of manifold you are working on. Since $T_x f$ is a linear map, you 'just' need to find out how it acts on a basis of $T_x M$.
For example if you know how it acts on the vectors of a special charts, say polar coordinates, then you can find the coefficients of $T_x f$ directly, without the need to compute the derivative.
Anyway, quite often the questions are of general nature and the numeric values are of less interest. And if you are lucky, you maybe can use some theorems and the calculus related to differential forms, Lie derivatives, Riemannian metrics, Stokes etc... I guess that is the point where all these abstract notations really show off their power.
Is there a general approach?
This question is beyond my knowledge. Quite often you can
use diagrams to prove certain statements in a nice way. For example the tangential $T$ acts like a functor on the category of smooth manifolds
and there exists many 'canonical' chart independent constructions and objects, which can be used.
Proving smoothness is often not really needed nor interesting, since almost all objects are smooth by construction and you just combine them. But in the beginning you spend a lot of time proving all these basics, which will later be used without any notice. |
Galois group of $(X^4 - 2)(X^2 + 2)$ | You are wrong that there are only $4$ automorphisms of $\mathbb{K}:=\mathbb{Q}(\sqrt[4]{2},\text{i})$ that fix $\mathbb{Q}$. The automorphism group $G:=\text{Aut}_\mathbb{Q}(\mathbb{K})$ is isomorphic to the unique nonabelian semidirect product $C_4\rtimes C_2\cong D_4$, where $C_k$ is the cyclic group of order $k$ and $D_k$ is the dihedral group of order $2k$. The group $G$ is generated by $\tau$ and $\sigma$ where $$\tau(\sqrt[4]{2})=\text{i}\sqrt[4]{2}\,,\,\,\tau(\text{i})=\text{i}\,,\,\,\sigma(\sqrt[4]{2})=\sqrt[4]{2}\,,\text{ and }\sigma(\text{i})=-\text{i}\,.$$
Note that $\langle \tau\rangle\cap\langle \sigma\rangle=\text{Id}$, with $\langle \tau\rangle\cong C_4$ and $\langle \sigma\rangle\cong C_2$. In addition,
$$(\tau\circ\sigma\circ\tau)(\sqrt[4]{2})=(\tau\circ \sigma)(\text{i}\sqrt[4]{2})=\tau(-\text{i}\sqrt[4]{2})=\sqrt[4]{2}=\sigma(\sqrt[4]{2})$$
and
$$(\tau\circ\sigma\circ\tau)(\text{i})=(\tau\circ \sigma)(\text{i})=\tau(-\text{i})=-\text{i}=\sigma(\text{i})\,.$$
Hence, $\tau\circ\sigma\circ\tau=\sigma$, or $\tau\circ\sigma=\sigma\circ\tau^{-1}$. That is,
$$G=\big\langle \tau,\sigma\,\big|\,\tau^{\circ 4}=\text{id}\,,\,\,\sigma^{\circ 2}=\text{id}\,,\text{ and }\tau\circ\sigma=\sigma\circ\tau^{-1}\rangle\cong D_4\,.$$ |
a Unique Minimum Euclidean distance between point and hyperplane? | Your conjecture is correct. I'll give details below, and hope that they're at more or less the appropriate level of detail for you. If not, you need to go read up on compactness, convexity, the Heine-Borel Theorem, and the Bolzano-Weierstrass theorem. I know that's a lot, but it's all good stuff!
Renaming: I'm going to call your domain $Q$ (to avoid confusion with $d$, which I use for distance) and call the test-point $p$, reserving upper-case for sets. I'm assuming henceforth that your "rectangular" constraints don't contradict each other (i.e., you're not saying $x_1 < 3$ and $x_1 > 5$, for instance) so that $Q$ is nonempty.
1. $Q$ is closed and bounded; this is (by the Heine-Borel Theorem) the same as "compact." And compact sets have many nice properties, which books in topology or introductions to real analysis will tell you about.
2. Let $S = \{ d(p, q) \mid q \in Q\}$. Then $S$ is a nonempty set of nonnegative real numbers, hence has a lower bound. (For instance, every element of $S$ is greater than or equal to $0$.) It therefore has a greatest lower bound, $b$. So far all we know is that $b$ is at least as big as $0$.
3. Because we're assuming that $p \notin Q$, I can now show that $b > 0$. Suppose not, i.e., suppose that $b = 0$. Then for every integer $n$, there must be a point $q_n$ with $d(p, q_n) < 1/n$. [Why? Well, suppose that for $n = 20$, there's no such point. That means that the distance from $p$ to every point of $Q$ is at least $1/20$, hence $b \ge 1/20$.] The sequence $q_1, q_2, \ldots$ converges to $p$. But a wonderful property of closed sets is that if you have a convergent sequence in a closed set (like $Q$), then the limit of that sequence is also in $Q$. So this would imply that $p$, being the limit, is in $Q$, which is a contradiction. So the assumption that $b = 0$ is false, and we know that $b > 0$.
4. There's at least one point $c \in Q$ with $d(c, p) = b$. Reason: for each integer $n$, pick a point $r_n \in Q$ with $d(p, r_n) < b + \frac{1}{n}$ (by the same sort of reasoning as in step 3). We don't know that the sequence $r_1, r_2, \ldots$ has a limit, alas. But a nice property of compact sets is that they're bounded, so the sequence $\{r_i\}$ is bounded, and hence has a convergent subsequence (that's the Bolzano-Weierstrass theorem). Let's call the limit of that subsequence $c$. Then it's clear (because "distance" is a continuous function of its arguments) that $d(p, c) = b$.
5. Suppose that there are two distinct points $c, c'$ in $Q$ with $d(c, p) = d(c', p) = b$. Then because $Q$ is not only closed and bounded, but convex, every point on the line segment from $c$ to $c'$ is also in $Q$. That line segment $L$ is just
$$L = \{(1-t)c + tc' \mid 0 \le t \le 1 \}.$$ And we can compute the (squared) distance from points along that segment to the point $p$:
\begin{align}
d( (1-t)c + tc', p)^2
& = [(1-t)c + tc' - p ] \cdot [(1-t)c + tc' - p ] \\
& = [(c - p) + t(c'- c) ] \cdot [(c - p) + t(c'- c) ]
\end{align}
Let $h = c - p \ne 0$, and $v = c' - c \ne 0$. (The vector $v$ is nonzero because the points are distinct. Can you say why $h$ is nonzero?) Then we can continue:
\begin{align}
d( (1-t)c + tc', p)^2
& = [h + tv ] \cdot [h + tv ] \\
& = h\cdot h + 2t (h \cdot v) + t^2 (v \cdot v).
\end{align}
This is a quadratic in $t$ with positive coefficient for $t^2$, hence its graph is an upwards-facing parabola. We know the values at $ t= 0$ and $t = 1$ are the same, and hence the value at $t = 0.5$ must be lower than the value at either end! But that means that the point $\frac{c + c'}{2}$ is closer to $p$ than either $c$ or $c'$, which contradicts the claim that $c$ and $c'$ are as close to $p$ as possible. (Note that $\frac{c + c'}{2} \in Q$ because $Q$ is convex.) This contradiction means that our initial assumption --- that there were TWO points of $Q$ that both minimized the distance to $p$ --- must have been false, hence there's a unique point whose distance to $p$ is the minimum value $b$. |
Approximate summation of flooring function | With a given fixed value $c$, we have
$$\sum_{k=0}^x\left\lfloor\frac kc\right\rfloor=\sum_{k=c}^x\left\lfloor\frac kc\right\rfloor$$
and we can take this as a cue to the full result. In particular, let $q=\lfloor\frac xc\rfloor$; then we have the result
$$\sum_{k=0}^x\left\lfloor\frac kc\right\rfloor= c\binom q2+q(x+1-qc)$$
Plugging in e.g. $x=c-1,c,c+1$ shows the correct outcomes.
EDIT:
Considering that there is interest in computational complexity, it is worthwhile to consider how the result can be manipulated to create a quantity that has the fewest possible computations, or how it can be adjusted to reduce the computation expense. In that vein, note that
$$c\binom q2-cq^2=c\frac{q^2-q}2-c\frac{2q^2}2=-c{q(q+1)\over 2}=-c\binom{q+1}2$$
so our modified result can be computed as
$$\sum_{k=0}^x\left\lfloor\frac kc\right\rfloor=q(x+1)-\frac{cq(q+1)}2$$
I have left the sum pieces undistributed (e.g., $q(x+1)$), but as the sums only involve increment by $1$, the "add first then multiply" efficiency is essentially non-existent. |
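A brute-force comparison against the closed form (a Python sketch; the test ranges are arbitrary):

```python
def floor_sum(x, c):
    """Closed form for sum_{k=0}^{x} floor(k/c)."""
    q = x // c
    return q * (x + 1) - c * q * (q + 1) // 2

for c in range(1, 12):
    for x in range(0, 200):
        assert floor_sum(x, c) == sum(k // c for k in range(x + 1))
print("closed form matches the brute-force sum")
```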
Suppose $B = M_{V,V}(L_A)$, where $L_A$ is a linear transformation. Show $B$ is similar to $[[\lambda, 0]^T,[1,\lambda]^T]$ | We have $Av = \lambda v$. What we want is to find a vector $w$ with $Aw = v + \lambda w$. Then with respect to the basis $[v,w]$, $A$ would have the desired form. We know that $Au = \beta v + \lambda u$, by reading the second column of the matrix for $A$.
Let's use the hint: $U = [v,v + \beta^{-1}u]$, i.e. $w = v + \beta^{-1}u$.
Then:
\begin{align*} Aw &= A(v + \beta^{-1}u) \\
&= Av + \beta^{-1}Au \\
&= \lambda v + \beta^{-1}(\beta v + \lambda u) \\
&= \lambda v + v + \beta^{-1}\lambda u \\
& = v + \lambda(v + \beta^{-1} u) \\
& = v + \lambda w\end{align*} |
Graphs that suffice being a bipartite and also complement of paths? | If you're asking "Which bipartite graphs are the complements of paths?" then the answer is certainly not "All bipartite graphs." For example, $K_{3,3}$ is a bipartite graph, but it's not the complement of a path, because its complement (the disjoint union of two triangles) is a graph with two cycles.
In general, if either side of the bipartition is too large, the complement of the graph will contain a cycle. So you only have to check bipartite graphs in which both sides of the bipartition are small, and there will only be a few cases that work. Your answer will be a finite (and small) set of graphs.
(And a more promising direction to go in is actually posing the problem differently: "Which complements of paths are bipartite?" There are a lot fewer different graphs that are complements of paths to check, than there are bipartite graphs.) |
How to calculate the percentage between two numbers? | Hint:
$25\%$ means $\frac{25}{100}$, so if
$$
100,25 \rightarrow \frac{25}{100} \rightarrow 25\%
$$
we have also
$$
50,25 \rightarrow \frac{25}{50}=\frac{50}{100} \rightarrow 50\%
$$ |
Ordered triple condition | On investigation numerically, there are no positive integer triples $(a,b,c)$ for which the function $ f(a,b,c) := a2^b + b2^c + c2^a$ fulfils the condition
$$ f(a,b,c) = 2\cdot 10^k $$
for any value of $k\in \mathbb N$ up to $600$.
Allowing $a,b,c$ to also be zero (but forbidding any of $2^a, 2^b, 2^c$ to be greater than the target value) gives us effectively one solution in that space, $f(1,4,0) = 20$, so also $(4,0,1)$ and $(0,1,4)$.
The closest approach to the original problem (where $k=3$) is $f(5,8,6)=1984$
I might have become a little obsessed with the form $a2^b + b2^c + c2^a$ ...
Anyway the original format above has no solutions out to $2\cdot 10^{10000}$.
I also had a quick look at variations in the form $a2^b + b2^c + c2^a = N^k$ (just for fun) and
$4\cdot 2^5 +5\cdot 2^3 +3\cdot 2^4 = 6^3$ (quite pretty, all low numbers)
$13\cdot 2^{17}+ 17\cdot 2^{14} + 14\cdot 2^{13} = 8^7$, weirdly
$3\cdot 2^9 + 9\cdot 2^7 + 7\cdot 2^3 = 14^3$ and
$2\cdot 2^7 + 7\cdot 2^3 + 3\cdot 2^2 = 18^2$ |
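The searches above are straightforward to reproduce (a Python sketch; this is an independent brute force, not the original search code):

```python
def search(target):
    """All (a, b, c) with a*2^b + b*2^c + c*2^a = target, where a, b, c >= 0
    and no power 2^a, 2^b, 2^c exceeds the target (as described above)."""
    m = target.bit_length()   # 2^(m-1) <= target < 2^m, so exponents stay below m
    return [(a, b, c)
            for a in range(m) for b in range(m) for c in range(m)
            if a * 2 ** b + b * 2 ** c + c * 2 ** a == target]

print(search(20))     # [(0, 1, 4), (1, 4, 0), (4, 0, 1)]
print(search(2000))   # [] -- no solutions for k = 3
```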
Prove that $T^2+bT+cI$ is invertible. | No, this is wrong. At no point did you use the fact that $T$ is self-adjoint. So, if your proof were correct, you would have proved that $T^2+bT+cI$ is always invertible when $b^2-4c<0$. This is false; just take $T(x,y)=(-y,x)$, $b=0$ and $c=1$.
You proved correctly that if $\lambda$ is an eigenvalue of $T$, then $\lambda^2+b\lambda+c$ is an eigenvalue of $T^2+bT+cI$. But how do you know that every eigenvalue of $T^2+bT+cI$ can be obtained by this process? |
Find a confidence interval using as pivotal quantity a function of the MLE | Assuming you want a confidence interval for $\theta$, you may use the CLT or a known chi-squared distribution for variances and some integrals to get an interval for the parameter. First, we have that
$$
\hat{\theta}=-\frac{n}{\sum \ln x_i}.
$$
For the expected value we have the integral
$$
E[X] = \int_0^1 \theta x^{\theta-1}\,x\ \text{d}x = \frac{\theta}{1+\theta}
$$
And for the second moment
$$
\mu_2 = \int_0^1 \theta x^{\theta-1}x^2\ \text{d}x = \frac{\theta}{2+\theta}
$$
So the variance is
$$
Var(X) = \frac{\theta}{2+\theta} - \left( \frac{\theta}{1+\theta} \right)^2 = \\
= \frac{\theta}{(\theta + 1)^2(\theta+2)}.
$$
Now then, we know that the following quotient has a chi-squared distribution, so in this case
$$
C = \frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}
\text{ and } C = \frac{(n-1)S^2}{\frac{\theta}{(\theta + 1)^2(\theta+2)}}
$$
where $S^2 = \frac{1}{n-1}\sum(X_i-\overline{X})^2$. Finally we can work with the random variable $C$ distributed as a chi squared. We want a confidence interval of $(1-\alpha)$. Then
$$
P \left( \chi^2_{n-1}(\alpha/2) < C < \chi^2_{n-1}(1-\alpha/2)\right) = 1-\alpha
$$
denoting that $\chi^2_{n-1}(\beta)$ is the $\beta$ percentile of the $\chi^2$ which may be found in reference tables.
From here we work with the interval, only substituting the percentiles with $\frac{1}{D_l}$ and $\frac{1}{D_s}$, lower and upper.
$$
\frac{1}{D_l} < C < \frac{1}{D_s} \implies
\frac{1}{D_l} < \frac{(n-1)S^2}{\frac{\theta}{(\theta + 1)^2(\theta+2)}} < \frac{1}{D_s} \\\ \\\ \\\
D_l > \frac{\frac{\theta}{(\theta + 1)^2(\theta+2)}}{(n-1)S^2} > D_s \\\ \\\ \\\
D_l((n-1)S^2) > \frac{\theta}{(\theta + 1)^2(\theta+2)} > D_s((n-1)S^2) \\\ \\\ \\\
(\hat{\theta} + 1)^2(\hat{\theta}+2)\left( D_l((n-1)S^2) \right)
> \theta >
(\hat{\theta}+ 1)^2(\hat{\theta}+2)\left( D_s((n-1)S^2) \right) \\\ \\\ \\\
$$
where, from the definition and derivation above, $\hat{\theta} = -\frac{n}{\sum \ln x_i}$ and $S^2 = \frac{1}{n-1}\sum(X_i-\overline{X})^2$. When we sent the $\theta$ to the other side, we simply used its estimate to work around the problem. (However a mathematician should confirm this is valid!) Either way, if you find a better known distribution that may include $\theta$, then try it and tell us if it is simpler!
Hope it helps!
Note: made some edits due to an error :/ |
Evaluate $\oint_{C}(y-\sin x)dx+\cos x dy$ | Given
$$
\oint_C \Big( ( y - \sin(x) ) d x + \cos(x) d y \Big).
$$
Using Green's theorem, you should use the region enclosed by $C$, so you get
$$
\oint_C \Big( ( y - \sin(x) ) d x + \cos(x) d y \Big)
= \oint_C \Big( P d x + Q d y \Big)
= \int_0^{2\pi} d x \int_0^{2x/\pi} d y
\left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right)\\
= - \int_0^{2\pi} d x \int_0^{2x/\pi} d y \Big( \sin(x) + 1 \Big).
$$
Work this out
$$
- \int_0^{2\pi} d x \int_0^{2x/\pi} d y\Big( \sin(x) + 1 \Big)
= - \frac{2}{\pi} \int_0^{2\pi} d x \Big( x \sin(x) + x \Big)\\
= \frac{2}{\pi} \left[ x \cos(x) - \sin(x) - \frac{1}{2} x^2 \right]_0^{2\pi}
= 4 - 4\pi
$$ |
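The area integral can be double-checked symbolically (a SymPy sketch; the region is taken from the integration limits above):

```python
import sympy as sp

x, y = sp.symbols('x y')
P, Q = y - sp.sin(x), sp.cos(x)
integrand = sp.diff(Q, x) - sp.diff(P, y)   # -sin(x) - 1
val = sp.integrate(integrand, (y, 0, 2 * x / sp.pi), (x, 0, 2 * sp.pi))
print(sp.simplify(val))                     # 4 - 4*pi
```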
A doubt regarding proof of values of trigonometric functions at allied angles | If the basis of your proofs are from Euler's identity $e^{i \theta} = \cos \theta + i \sin \theta$ then the quadrant becomes irrelevant.
Also consider the identities $\cos \theta = \dfrac {e^{i \theta} + e^{-i \theta} } 2$ and $\sin \theta = \dfrac {e^{i \theta} - e^{-i \theta} } {2i}$. |
Help with Boolean algebra | I think it is a parallel system, not a series system, because the probability for each unit is independent of the other units.
So the problem changes into this simple question: for a parallel system with $n$ units, find the probability that the system will not fail (with the variables $x$ and $p$ given the same characteristics as mentioned above).
Then $f=x_1+x_2+x_3+\dots \pmod 2$. Assuming that $+$ in this equation is the OR operation: if at least one unit can do work, its $x$ value becomes $1$, and $f$ has the value $1$, which means the system is working fine.
A probability of $1$ means the event holds under any condition. So we subtract from $1$ the probability that all of the units fail, which results in this polynomial expression in the $p_i$: $F=1 - (1-p_1)\cdot(1-p_2)\cdot(1-p_3)\cdots$ And that would be the answer for this question if it is a parallel system. |
Need to compute $\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)(2n+2)3^{n+1}}$. Is my solution correct? | Your series equals
$$ \frac{1}{3}\sum_{n\geq 0}\frac{(-1)^n}{(2n+1)3^n}-\sum_{n\geq 0}\frac{(-1)^n}{(2n+2)3^{n+1}} $$
or
$$ \frac{1}{\sqrt{3}}\arctan\frac{1}{\sqrt{3}}+\frac{1}{2}\sum_{n\geq 1}\frac{(-1)^n}{n 3^n}=\frac{\pi}{6\sqrt{3}}-\frac{1}{2}\log\left(1+\frac{1}{3}\right). $$
No integrals, just partial fraction decomposition, reindexing and the Maclaurin series of $\arctan(x)$ and $\log(1+x)$. |
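A numeric confirmation of the closed form (an mpmath sketch):

```python
from mpmath import mp, nsum, mpf, inf, pi, log, sqrt

mp.dps = 30
s = nsum(lambda n: (-1) ** int(n) / ((2 * n + 1) * (2 * n + 2) * mpf(3) ** (n + 1)),
         [0, inf])
closed = pi / (6 * sqrt(3)) - log(mpf(4) / 3) / 2
print(s, closed)   # both are 0.158459...
```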
Basic Combinatorics and Counting | You are basically assigning 5 X to 8 Y. Regardless of whether X and Y are chairs and people, or people and chairs, the math is the same. Such a beautifully universal language! |
Given $\frac{1}{1^4}+\frac{1}{2^4}...\infty=\frac{\pi^4}{90}$ | hint: For the even terms: $\dfrac{1}{(2n)^4} = \dfrac{1}{16}\cdot \dfrac{1}{n^4}$, so the even terms sum to $\dfrac{1}{16}\cdot\dfrac{\pi^4}{90}$, and you get to solve $S + \dfrac{1}{16}\cdot\dfrac{\pi^4}{90} = \dfrac{\pi^4}{90}$, where $S$ is the desired sum over the odd terms. |
Why is $\sum_{n=1}^{\infty}\frac{\sin(nx^{2})}{1+n^{4}}$ continuously differentiable in $\mathbb{R}$? | Apparently what I was missing is this:
Let $x_{0}\in\mathbb{R}$ ,
We need to show that $\sum_{n=1}^{\infty}\frac{\sin(nx^{2})}{1+n^{4}}$
is continuously differentiable at $x_{0}$. That is to show that $\sum_{n=1}^{\infty}\frac{\sin(nx^{2})}{1+n^{4}}$
is differentiable at $x_{0}$ and that $\left(\sum_{n=1}^{\infty}\frac{\sin(nx^{2})}{1+n^{4}}\right)'$
is continuous at $x_{0}$.
Choose $M=\left|x_{0}\right|+1$ so $x_{0}\in[-M,M]$ and
from the argument above we get what we need. |
How to prove that arc-length of x cos(1/x) is divergent? | Making the change of variable $t = \frac 1 x$, the length of the graph on $(\alpha, 1)$ is given by
$$\int \limits _\alpha ^1 \sqrt {1 + \left( \cos \frac 1 x + \frac 1 x \sin \frac 1 x \right)^2} \ \Bbb d x = \int \limits _1 ^{\frac 1 \alpha} \sqrt {1 + (\cos t + t \sin t)^2} \frac 1 {t^2} \ \Bbb d t .$$
For $\alpha \ne 0$ the integrand is a bounded continuous function on $[1 , \frac 1 \alpha]$, so its integral is finite.
On the other hand,
$$\int \limits _0 ^1 \sqrt {1 + \left( \cos \frac 1 x + \frac 1 x \sin \frac 1 x \right)^2} \ \Bbb d x = \int \limits _1 ^\infty \sqrt {1 + (\cos t + t \sin t)^2} \frac 1 {t^2} \ \Bbb d t \ge \int \limits _1 ^\infty \frac {|\cos t + t \sin t|} {t^2} \ \Bbb d t \ge \\
\sum _{k = 1} ^\infty \int \limits _{2k \pi + \frac \pi 4} ^{2k\pi + \frac \pi 2} \frac {|\cos t + t \sin t|} {t^2} \ \Bbb d t = \sum _{k = 1} ^\infty \int \limits _{2k \pi + \frac \pi 4} ^{2k\pi + \frac \pi 2} \frac {\cos t + t \sin t} {t^2} \ \Bbb d t \ge \sum _{k = 1} ^\infty \int \limits _{2k \pi + \frac \pi 4} ^{2k\pi + \frac \pi 2} \frac {t \sin t} {t^2} \ \Bbb d t = \\
\sum _{k = 1} ^\infty \int \limits _{2k \pi + \frac \pi 4} ^{2k\pi + \frac \pi 2} \frac {\sin t} t \ \Bbb d t \ge \sum _{k = 1} ^\infty \int \limits _{2k \pi + \frac \pi 4} ^{2k\pi + \frac \pi 2} \frac {\sin^2 t} t \ \Bbb d t = \sum _{k = 1} ^\infty \int \limits _{2k \pi + \frac \pi 4} ^{2k\pi + \frac \pi 2} \frac {1 - \cos 2t} {2t} \ \Bbb d t \ge \sum _{k = 1} ^\infty \int \limits _{2k \pi + \frac \pi 4} ^{2k\pi + \frac \pi 2} \frac 1 {2t} \ \Bbb d t = \\
\frac 1 2 \sum _{k = 1} ^\infty \ln \frac {2k\pi + \frac \pi 2} {2k\pi + \frac \pi 4} = \frac 1 2 \sum _{k = 1} ^\infty \ln \frac {k + \frac 1 4} {k + \frac 1 8} = \frac 1 2 \sum _{k = 1} ^\infty \ln \left( 1 + \frac {\frac 1 8} {k + \frac 1 8} \right) \sim \frac 1 2 \sum _{k = 1} ^\infty \frac {\frac 1 8} {k + \frac 1 8} \sim \sum _{k = 1} ^\infty \frac 1 k = \infty ,$$
where the tilde between two series means that those series have the same behaviour, i.e. either simultaneously convergent, or simultaneously divergent. |
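As a numerical sanity check (my addition), the truncated length integral really does grow like a multiple of $\log T$:

```python
import math
from scipy.integrate import quad

f = lambda t: math.sqrt(1 + (math.cos(t) + t * math.sin(t))**2) / t**2

total, prev = 0.0, 1.0
for T in [10, 100, 1000, 10000]:
    part, _ = quad(f, prev, T, limit=10000)
    total, prev = total + part, T
    print(T, total, total / math.log(T))  # the ratio settles near a constant (roughly 2/pi)
```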
Why a vector is a (1,0) tensor? | For a finite-dimensional vector space $V$, $V$ is isomorphic to its double dual $V^{**},$ through the natural isomorphism $\varphi:V\rightarrow V^{**}$ given by $\varphi(x)(v)=v(x),$ where $x\in V$ and $v\in V^*.$ Hence, an element of $V$ can be seen as a linear functional that acts on covectors. |
The mean value property and local maximum | The following argument works under the assumption that $u$ is continuous in $\Omega$.
Assume your function $u$ has a local maximum in $x$, i.e. $M:=u(x)\geq u(y)$ for each $y$ in some open neighbourhood $U$ of $x$ contained in $\Omega$.
Let $R_x\leq a(x)$ be the radius of the largest ball centered at $x$ which is contained in $U$; then from the mean value property on balls you get:
$$\forall 0<r<R_x,\quad M=\frac{1}{\omega_N\ r^N} \int_{B(x;r)} u(y)\ \text{d} y\; .$$
Equality $M=\frac{1}{\omega_N\ r^N} \int_{B(x;r)} u(y)\ \text{d} y$ holds if and only if $u\equiv M$ in $B(x;r)$: in fact, if by contradiction you assume that there exists $\bar{y} \in B(x;r)$ s.t. $u(\bar{y})<M$, then there exists a nonempty open set $V\subseteq B(x;r)$ s.t.:
$$\forall y\in V,\quad u(y)<M\; ;$$
hence:
$$M=\frac{1}{\omega_N r^N}\int_{B(x;r)} u(y)\ \text{d} y = \frac{1}{\omega_N r^N}\left( \int_{V} u(y)\ \text{d} y + \int_{B(x;r)\setminus V} u(y)\ \text{d} y\right) <M$$
which is a contradiction!
Therefore $u(y)=M$ for all $y\in B(x;r)$ and $r<R_x$, that is $u\equiv M$ in $B(x;R_x)$.
The previous argument proves that the set $\Omega_M:=\{y\in \Omega :\ u(y)=M\}$ is both nonempty (because it contains $B(x;R_x)$) and open (for, if $y\in \Omega_M$, then one can repeat the previous argument to show that there exists an open ball $B(y;R_y)$ which is contained in $\Omega_M$); on the other hand, $\Omega_M$ is relatively closed in $\Omega$, because $u$ is continuous.
Thus you found a subset $\Omega_M$ which is nonempty, open and relatively closed in $\Omega$; but $\Omega$ is connected hence its only nonempty, open and relatively closed subset is $\Omega$ itself, therefore $\Omega_M =\Omega$ and $u\equiv M$ everywhere.
On the other hand, previous proof works also if your function $u$ has the mean value property on spheres, that is if:
$$u(x)=\frac{1}{N\omega_N r^{N-1}}\int_{\partial B(x;r)} u(y)\ \text{d} \sigma (y)$$
for all $0<r<a(x)$.
In fact, equality $M=\frac{1}{N\omega_N\ r^{N-1}} \int_{\partial B(x;r)} u(y)\ \text{d} \sigma (y)$ holds if and only if $u\equiv M$ in $\partial B(x;r)$, hence $u\equiv M$ in $B(x;R_x)=\{x\}\cup \left(\bigcup_{0<r<R_x} \partial B(x;r)\right)$ and you can conclude the argument as before. |
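A quick numerical illustration of the mean value property on spheres (my addition), using the harmonic function $u(x,y)=x^2-y^2$ in the plane:

```python
import numpy as np

u = lambda x, y: x**2 - y**2       # harmonic in the plane
x0, y0, r = 0.3, -0.7, 0.5
t = np.linspace(0, 2 * np.pi, 100000, endpoint=False)
avg = np.mean(u(x0 + r * np.cos(t), y0 + r * np.sin(t)))
print(avg, u(x0, y0))  # both -0.40: the spherical average recovers the value at the center
```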
Is this number encoding algorithm useful? | Can this algorithm be useful in cryptography?
I am afraid this question is a bit too broad to be answered negatively with any degree of certainty. However, if we consider instead the following question then the answer is probably not:
Can this algorithm be used as a cryptographic encryption scheme?
In cryptography, a common assumption is that security cannot rely on the adversary not knowing the scheme (otherwise it's known as security through obscurity https://en.wikipedia.org/wiki/Security_through_obscurity). Since there is no secret key used to decode, an adversary therefore has as much information as the intended recipient.
If you were to use this scheme outside of crypto, where security through obscurity is acceptable (say if you were designing a puzzle or wanted to exchange messages with a friend), then you would run into frequency analysis attacks (https://en.wikipedia.org/wiki/Frequency_analysis), since each letter, once encoded into a number, would always be 'encoded' by the same number.
Can this algorithm be useful outside of cryptography?
I would recommend looking at other esoteric bijections with easy encoding and decoding (such as the quater-imaginary base $2i$, or base $-1\pm i$: https://en.wikipedia.org/wiki/Complex-base_system#Base_%E2%88%921_%C2%B1_i), and seeing whether they have been used.
Find all integer triples $(a,b,c)$ such that the equation $2a^2 + b^2 = 5c^2$ holds. Is $(0, 0, 0)$ the only solution to this equation? | For any $a$, we know $a^2 \equiv 0,1,$ or $4 \mod 5$. Then $2a^2 \equiv 0,2,$ or $3 \mod 5$ and $b^2 \equiv 0,1,$ or $4 \mod 5$. When could their sum equal $0 \mod 5$ as you pointed out it must? |
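A brute-force check of the residue argument (my addition): the only way to get $2a^2+b^2\equiv 0 \pmod 5$ is $a\equiv b\equiv 0\pmod 5$, which sets up the usual infinite descent.

```python
pairs = [(a, b) for a in range(5) for b in range(5) if (2 * a * a + b * b) % 5 == 0]
print(pairs)  # [(0, 0)]: both a and b must be divisible by 5
```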
Evaluating $\lim_{x\to-\infty} \frac{ 8x + 6}{\sqrt{x^2+8x} - x}$ | $\sqrt{x^2+8x}+x=|x|\sqrt{1+{8\over x}}+x$, if $x<0$ we obtain $\sqrt{x^2+8x}+x=-x\sqrt{1+{8\over x}}+x=-x(\sqrt{1+{8\over x}}-1)$ |
Finding the covariance of a mixed pair of r.v.'s given one's distribution and a conditional distribution | You can in fact evaluate the sum directly; it is just a bit tedious compared to using iterated expectations.
$$\begin{align}
\operatorname{E}[XY]
&= \sum_{y=0}^n \int_{x=0}^1 xy \binom{n}{y} x^y (1-x)^{n-y} \, dx \\
&= \sum_{y=0}^n y \binom{n}{y} \int_{x=0}^1 x^{y+1} (1-x)^{n-y} \, dx \\
&= \sum_{y=0}^n y \binom{n}{y} \frac{\Gamma(y+2)\Gamma(n-y+1)}{\Gamma(n+3)} \\
&= \sum_{y=0}^n y \frac{n!}{y!(n-y)!} \frac{(y+1)!(n-y)!}{(n+2)!} \\
&= \sum_{y=0}^n \frac{y(y+1)}{(n+1)(n+2)}. \\
\end{align}$$
At this point, we can evaluate the sum directly, using the familiar formulas $\sum y = n(n+1)/2$ and $\sum y^2 = n(n+1)(2n+1)/6$, or we can use the hockey stick identity
$$\sum_{y=0}^n \frac{y(y+1)}{(n+1)(n+2)} = \frac{2}{(n+1)(n+2)} \sum_{y=1}^n \binom{y+1}{2} = \frac{2}{(n+1)(n+2)} \binom{n+2}{3} = \frac{n}{3}.$$
Combining this with the other terms in the covariance expression, we get $$\operatorname{Cov}[X,Y] = \frac{n}{3} - \frac{n}{4} = \frac{n}{12}$$ as claimed.
As an exercise, can you evaluate $\operatorname{E}[XY]$ with the summation and integration order exchanged; i.e., how would we proceed with $$\int_{x=0}^1 \sum_{y=0}^n xy \binom{n}{y} x^y (1-x)^{n-y} \, dx?$$ In fact, is this a more natural way to evaluate this? Does it mimic the iterated expectation evaluation $$\operatorname{E}[XY] = \operatorname{E}[\operatorname{E}[XY \mid X]]?$$ |
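A Monte Carlo check (my addition), reading the densities above as $X\sim\mathrm{Uniform}(0,1)$ and $Y\mid X \sim \mathrm{Binomial}(n, X)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 10, 10**6
x = rng.uniform(size=N)
y = rng.binomial(n, x)
print(np.cov(x, y)[0, 1], n / 12)  # both ~ 0.833
```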
How prove this $(abc)^4+abc(a^3c^2+b^3a^2+c^3b^2)\le 4$ | It is possible to solve this by $pqr$ method. (That is, write everything in terms of $p = a+b+c$, $q=ab+bc+ca$, $r = abc$) together with $uvw$ method. This is an ugly method, but it works. The approach is sketched here.
Although the inequality is not symmetric, one can write
$$\begin{array}
&&2(a^3c^2 + b^3a^2 + c^3b^2) \\
&= a^3(b^2+c^2) + b^3(a^2+c^2) + c^3(a^2+b^2) + (a^3c^2+b^3a^2 + c^3b^2 - a^3b^2 - b^3c^2 - c^3a^2) \\
&= a^3(b^2+c^2) + b^3(a^2+c^2) + c^3(a^2+b^2) + (b-a)(c-a)(c-b)(ab+bc+ca)
\end{array}$$
This allows one to rewrite the inequality as
$$2(abc)^4 + abc(\sum_{sym} a^3b^2) + abc(ab+bc+ca)\sqrt{(a-b)^2(b-c)^2(c-a)^2} \leq 8$$
I used Sage to express the expression in terms of $p,q,r$, which hopefully someone can verify:
$$2r^4 + r(pq^2 - 2p^2r - qr) + qr \sqrt{\frac{4(p^2-3q)^3 - (2p^3-9pq+27r)^2}{27}} \leq 8$$
The main problem here, would be to efficiently remove the square root. For this, we use a lemma by Vasile Cirtoaje and Vo Quoc Bao Can, lemma 2.1 in here. In the lemma we take
$$\alpha = \frac{1}{\sqrt{27}}, a = 2(9-3q)^{3/2}, b = 54-27q+27r, \beta = \frac{14+q}{27q}$$
(The choice of $\alpha,a,b$ comes from the shape of the square root, while the choice of $\beta$ comes from cancelling the "linear term" of $r$ in $pq^2-2p^2r-qr = 3q^2 - (18+q)r$; remember that $p = 3$ in this question)
Therefore LHS is bounded above by
$$2r^4 + 3rq^2 + (27q\frac{14+q}{27q} - (18+q))r^2 + r(q\frac{14+q}{27q} (54-27q) + 2(9-3q)^{3/2} \sqrt{\frac{q^2}{27} + \left(\frac{14+q}{27}\right)^2}) ...(*)$$
and it suffices to show that $(*) \leq 8$.
For this, we use the uvw method. Fix $r$. With the help of a computer, one can check that $(*)$ is convex in $q$. Thus to maximize $(*)$, $q$ should take the maximum/minimum of its allowable range, which forces two of $a,b,c$ to be equal according to the $uvw$ method.
Assume WLOG that $a=c$. Then $2a+b = 3$, and $a^2b = r$. Solving these gives
$$b = 3-2a \, \text{ and } \, r = a^2(3-2a) \, \text{ and } \, q = -3a^2+6a$$
The only constraints we have on $a$ are $a \in [0,3/2]$ (coming from $2a+b=3$ and $a,b \ge 0$) and $a^2(3-2a) = r \leq 1$ (since $p = 3$). The latter inequality is equivalent to $a \ge -1/2$, which means the only effective constraint on $a$ is $a \in [0,3/2]$.
Therefore we are reduced to showing that for $a \in [0,3/2]$, the substituted expression $(**) \leq 8$, where $(**)$ comes from substituting $q,r$ into $(*)$ in terms of $a$. I find that $(**)$ is the following, and please verify this if (quite unlikely) you are reading up to this line.
$$2 (a^2 (3-2 a))^4+3a^2(3-2a)(-3a^2+6a)^2 - 4(a^2(3-2a))^2 + a^2(3-2a)(14 - 3a^2 + 6a)(2 - (-3a^2+6a))+2a^2 (3-2 a) (9-3 (-3 a^2+6 a))^{3/2} \sqrt{\frac{(-3a^2+6a)^2}{27}+\frac{(14-3 a^2+6 a)^2}{27^2}}..(**)$$
This is a single-variable calculus problem that is ugly but should be tractable. I trust WolframAlpha to do the job for me, and it does give that the maximum is 8.
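Given how error-prone this chain of reductions is, a random search over the simplex $a+b+c=3$, $a,b,c\ge 0$ (my addition; the constraint is the $p=3$ assumed throughout) is a reassuring check of the original inequality:

```python
import random

best = 0.0
for _ in range(10**6):
    u, v = sorted(random.uniform(0, 3) for _ in range(2))
    a, b, c = u, v - u, 3 - v  # uniform random point on the simplex a + b + c = 3
    val = (a*b*c)**4 + a*b*c * (a**3*c**2 + b**3*a**2 + c**3*b**2)
    best = max(best, val)
print(best)  # stays below 4; equality holds at a = b = c = 1
```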
Why is the Rational Rotation Algebra not a Matrix Algebra? | As you can read in the first paragraph of Chapter VI in Davidson, you can realize unitaries $u,v$ with $uv=e^{2\pi i \theta}vu$ by taking $u,v\in B(L^2(\mathbb T))$ where $u$ is multiplication by $z$, i.e. the bilateral shift.
The C$^*$-algebra generated by $u$ is $C^*(u)=C(\mathbb T)$, so by restricting your $\pi$ to $C^*(u)$ you get a $*$-homomorphism $$\pi:C(\mathbb T)\to M_n(\mathbb C).$$ If $\pi$ were injective, you would have an infinite-dimensional subalgebra of $M_n(\mathbb C)$. |
$u_n = \int_0^\pi (x-1)\cdot \sin(nx)\,dx$ converges | By the Riemann-Lebesgue lemma we instantly have $\lim_{n\to +\infty} u_n = 0$.
As an alternative (working on $(0,1)$ for simplicity; the same argument applies on $(0,\pi)$),
$$ I_n = \int_{0}^{1}(x-1)\sin(nx)\,dx = \frac{1}{n}\int_{0}^{n}\left(\frac{x}{n}-1\right)\sin(x)\,dx $$
and $\sin(x)$ has a bounded primitive, hence it is enough to show that
$$ \frac{1}{n^2}\int_{0}^{n}x\sin(x)\,dx \stackrel{IBP}{=} \frac{1}{n^2}\left[n(1-\cos n)-\int_{0}^{n}(1-\cos x)\,dx\right] $$
is convergent to zero, but that is trivial. |
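A quick check of the decay (my addition), for the integral over $(0,\pi)$ from the title:

```python
import math
from scipy.integrate import quad

for n in [1, 10, 100, 1000]:
    val, _ = quad(lambda x: (x - 1) * math.sin(n * x), 0, math.pi, limit=2000)
    print(n, val)  # decays like O(1/n), consistent with Riemann-Lebesgue
```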
Is $k[x,xy] \subseteq k[x,y]$ a flat ring extension? | Set $A=k[x,xy]$ and $B=k[x,y]$. (Notice that $A$ is isomorphic to a polynomial ring over $k$ in two variables.)
We have an exact sequence $0\to A/xA\stackrel{xy\cdot}\to A/xA$.
Assuming that $B$ is $A$-flat and tensoring by this, we get that $0\to B/xB\stackrel{xy\cdot}\to B/xB$ is exact. But multiplication by $xy$ is the zero map on $B/xB$ (since $xy\in xB$), so $B/xB=0$, i.e. $B=xB$, a contradiction.
If $a^2 + b^2 = c^2$, then $a + b = c$? | The answer is no.
Take $a=3, b=4, c=5$.
$a^2+b^2=c^2$, since $9+16=25$ but $a+b\neq c$, since $3+4\neq 5$.
The reason is that $\sqrt{a^2+b^2}\neq \sqrt{a^2}+\sqrt{b^2}$. |
Prove that if integer $a > 0$ is not a square , then $ a \neq \frac{b^2}{c^2} $ for non-zero integers b,c | Almost correct, but don't start your proof with
$$ac^2 = b^2$$
instead, start with:
assume $$a = \frac{b^2}{c^2},$$ from which we get $$ac^2 = b^2.$$
Find the posterior density and compute the posterior mean. | $\def\d{\mathrm{d}}$Your expression for $E({\mit Θ} \mid X)$ is correct except for normalization, but the integration goes wrong. In fact, $I_{(0, θ)}(x)$ is neglected when integrating with respect to $θ$, which requires that $θ > x$.
Full answer: First, because $0 < X < {\mit Θ} < c$, we have\begin{align*}
f_X(x) &= \int_{\mathbb{R}} f_{X \mid {\mit Θ}}(x \mid θ) f_{\mit Θ}(θ) \,\d θ = \int_{\mathbb{R}} \frac{2x}{θ^2} I_{(0, θ)}(x) \cdot \frac{1}{c} I_{(0, c)}(θ) \,\d θ\\
&= \int_{\mathbb{R}} \frac{2x}{θ^2} I_{(x, +\infty)}(θ) \cdot \frac{1}{c} I_{(0, c)}(θ) \,\d θ = \frac{2x}{c} \int_x^c \frac{\d θ}{θ^2}\\
&= \frac{2x}{c} \left( \frac{1}{x} - \frac{1}{c} \right) = \frac{2}{c^2} (c - x).
\end{align*}
Thus,\begin{align*}
E({\mit Θ} \mid X) &= \int_{\mathbb{R}} θ f_{{\mit Θ} \mid X}(θ \mid x) \,\d θ = \frac{1}{f_X(x)} \int_{\mathbb{R}} θ f_{X \mid {\mit Θ}}(x \mid θ) f_{\mit Θ}(θ) \,\d θ\\
&= \frac{c^2}{2(c - x)} \int_{\mathbb{R}} θ \cdot \frac{2x}{θ^2} I_{(0, θ)}(x) \cdot \frac{1}{c} I_{(0, c)}(θ) \,\d θ\\
&= \frac{c^2}{2(c - x)} \int_{\mathbb{R}} θ \cdot \frac{2x}{θ^2} I_{(x, +\infty)}(θ) \cdot \frac{1}{c} I_{(0, c)}(θ) \,\d θ\\
&= \frac{cx}{c - x} \int_x^c \frac{\d θ}{θ} = \frac{cx}{c - x} (\ln c - \ln x).
\end{align*} |
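A numerical cross-check of both computations (my addition), taking $c=2$ and $x=0.5$:

```python
import math
from scipy.integrate import quad

c, x = 2.0, 0.5
fX, _ = quad(lambda t: (2 * x / t**2) * (1 / c), x, c)       # marginal density of X at x
num, _ = quad(lambda t: t * (2 * x / t**2) * (1 / c), x, c)  # numerator of E[Theta | X = x]
print(fX, 2 * (c - x) / c**2)                                # both 0.75
print(num / fX, c * x / (c - x) * math.log(c / x))           # both ~ 0.9242
```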
Is there any sequence that is in the range of $[0,1]$ that converges to something outside of this range? | Let $x_n$ tend to a limit $x$. Then, for sufficiently large $n$, $|x_n-x|<0.5$.
Then all these $x_n$ lie in a single one of the sets $[2k,2k+1]$ (any two points in different such sets are at distance at least $1$ apart, while $|x_n-x_m|<1$), and this set is closed. Therefore $x$ is also in this set.
Rules of inference: The Rules of Disjunctive Syllogism and Double Negation | 1. $\lnot (r \land t)\lor u$ (premise)
2. $r\land t$ (premise)
3. $\lnot \lnot (r \land t)$, from (2) (double negation)
4. $\therefore\; u$, from (1) and (3) (disjunctive syllogism)
Showing that $f : (0,2)\to \mathbb{S}^1$ defined by $f(x) := e^{2\pi i x}$ is not a covering map | As pointed out above by k.stm, observe that $\left\{ 1\right\} = f^{-1}(1)\not\cong f^{-1}(-1) = \left\{ 1/2 , 3/2\right\} $ in which $\not\cong $ means that there is no bijection.
You can follow other ideas as well. For instance, if $g:(0,2)\to \mathbb{S} ^1 $ is a covering map, then it is a universal cover, since $(0,2)$ is simply connected. In particular, it is isomorphic to $\mathbb{R}\to\mathbb{S} ^1 $, $t\mapsto e ^{2\pi i t} $. Thereby the fibres of $g$ would need to have infinitely many elements.
That is to say, every covering map $g:(0,2)\to \mathbb{S} ^1 $ needs to be such that $g^{-1}(x) $ has infinitely many elements (of course, for every $x\in \mathbb{S} ^1$).
Question about a Sobolev norm | Yes, you can integrate (the first integral) with respect to $x$ and $y$ in any order you want. This is a consequence of Fubini's Theorem.
Unfortunately, what you suggest doing next is not a good idea, as the difference quotient in the first integral has a much better chance of being integrable than the pieces you get by expanding $|f(x)-g(x)|^2$.
Moment generating function constant term | A moment generating function is $1$ at $t=0$. So if $C\ne 0$ and $F(t)$ is an mgf, then $F(t)+C$ cannot be an mgf. |
Smart way to solve (complex) polynomial equation | No, you don't guess another root. Since $-1+2i$ is a root and since the coefficients are real, you know that $-1-2i$ is another root.
Are algebraic structures required to satisfy axioms? | Every structure does satisfy some statements and fails to satisfy the negations of the statements that it satisfies.
Every set of statements defines some class of structures: those that satisfy the statements in the set.
Some sets of axioms and some classes of structures are worth more attention than others.
Not all algebraic structures satisfy the field axioms. If by "required" you mean: must some particular axioms be satisfied in order for an algebraic structure to be an algebraic structure? Then the answer is no, unless you count having an underlying set or some operations as satisfying "axioms". But that is no reason why one cannot study the class of all algebraic structures that do satisfy some specified axioms.
Show that a subset of $C(\mathbb{R})$ is compact with norm $\lVert u\rVert = \sup\lvert\frac{u(x)}{x^2 +1} \rvert$ | The two key steps here are a diagonalizing argument and an estimation of the norm given in terms of the sup norm, $|| \cdot ||_{\infty}$, for functions in $X$.
First observe that by the definition of $C$, if $f \in C$, then for all $x \in \mathbb{R}$, $|f(x)| \leq |x|$. For $f \in C([-n, n]),$ define $$ ||f||_n = \sup_{x \in [-n, n]} \left|\frac{f(x)}{ x^2 + 1}\right|.$$ Then $||\cdot ||_n$ is a norm on $C([-n, n])$, for any $f \in C([-n, n])$, $||f||_n \leq ||f||_\infty$, and by our estimate on elements of $C$, for any $f \in C$, $$ 0 \leq ||f|| - ||f|_{[-n, n]} ||_n \leq \frac{1}{n}.$$
Inductively form a sequence of subsequences as follows. Let $f_{0,k} = f_k$. Suppose $f_{n-1, k}$ is defined. Since $[-n, n]$ is compact and, as you've shown, the sequence $f_k$ is equicontinuous and uniformly bounded on $[-n, n]$, Arzelà-Ascoli gives a convergent subsequence $f_{n, k}$ of $f_{n-1, k}$ with respect to the sup norm on $[-n, n]$, hence with respect to $||\cdot ||_{n}$ on $[-n, n]$.
Define now our subsequence of interest as $g_{k} = f_{k, k}$. Our construction guarantees that $g_k$ has a continuous limit $g$ so that $g_k$ converges to $g$ uniformly on every subset of the form $[-n, n]$ hence uniformly on compact subsets of $\mathbb{R}$ and that $\lim_{n \to \infty} ||g_k - g||_n = 0$ for all $n$. So let $\epsilon > 0$, take $n > 2/\epsilon$, and then take $k$ so that for all $k' > k$, $||g_{k'} - g_k||_n < \epsilon/2$. Using our estimate above, we see that for all $k' > k$, $$||g_{k'} - g_k|| \leq \frac{1}{n} + ||g_{k'} - g_k||_n < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.$$ |
Is my proof correct? $2^n=x^2+23$ has an infinite number of (integer) solutions. | There are only finitely many solutions. Pillai's conjecture is overkill because it allows for all possible exponents simultaneously, whereas the solutions of $2^n = x^2 + 23$ must lie on one of the three curves $y^3 = x^2+23$, $2y^3 = x^2+23$, $4y^3= x^2+23$, each of which has finitely many integer points. |
Algebraic integers and characteristic polynomial | Let $f(X) = \det(XI-\beta)$ as in the OP. (Here $X$ is an indeterminate.)
I will write $!K$ for the vector space over $\Bbb Q$ obtained from $K$ by applying the forgetful functor $!$, so $T:=(\beta I-\theta_\beta):!K\to !K$ is the zero morphism, so $\det T=0$. (We do not need Cayley-Hamilton. And moreover...)
This shows that, assuming $f\in\Bbb Z[X]$: because also $f(\beta)=0$, the minimal polynomial of $\beta$ is a divisor of $f$, so it lies in $\Bbb Z[X]$ by Gauss's Lemma, and thus $\beta$ is an algebraic integer.
The converse, and the question.
Let us assume now that $\beta $ is an algebraic integer.
Let $L$ be the subfield of $K$ generated by $\beta$ over $\Bbb Q$. Then $1,\beta,\dots,\beta^{k-1}$ is a basis of $!L$ over $\Bbb Q$, $k$ being the degree of $\beta$ over $\Bbb Q$.
Fix now some $x\in \Bbb Q$, and let us write the matrix of $$T(x):=(xI-\theta_\beta)\ ,\qquad x\in\Bbb Q\ ,$$ seen first as a morphism $!L\to {!L}$, (later as $!K\to {!K}$,) w.r.t. this basis.
This matrix is $xI$ minus a companion matrix, and the companion matrix has only integer entries, since $\beta$ is an algebraic integer.
Now consider a basis of $K:L$, and for each $\gamma$ in this basis the system $\gamma,\gamma\beta,\dots,\gamma\beta^{k-1}$. The matrix of $T(x)$, restricted to the subspace generated by this system, is again the same matrix: $xI$ minus an integral companion matrix.
Pasting these systems together, we get a basis of $K:\Bbb Q$, and the matrix of $T(x)$ is a diagonal repetition of the same block. Now $f(x) = \det T(x)$ holds for every $x\in\Bbb Q$, thus $f(X) = \det T(X)\in\Bbb Z[X]$.
Sum of two co-prime integers | Just to provide an answer synthesized out of the comments already posted, your best (read as easiest) approach to this kind of problem is to toy around with general patterns until something either clicks and you can write a clever proof or until you accidentally exhaust all possible cases.
In this particular problem, we can break down cases into the residue classes $\bmod 4$ in order to hunt for patterns:
1) If $n=2k+1$ then the decomposition $n=(k)+(k+1)$ satisfies our criterion since consecutive numbers are always coprime and $k\geq 3$.
2) If $n=4k$ then consider the decomposition $n=(2k-1)+(2k+1)$. Are these numbers coprime? We can no longer rely upon the general fact that consecutive numbers are coprime, since these are not consecutive. However, if two numbers differ by exactly $2$, what is the only prime factor that they can share? In general, if two numbers differ by $m$, what prime factors can they share? Finally, are we sure that these numbers are both greater than $1$?
I have basically given away the entire answer, but I didn't know how to discuss this phenomenon in any other way, so I leave the final details of the second case, and the entirety of the third case, to you. |
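A brute-force confirmation (my addition, assuming the problem asks for two coprime parts each greater than $1$):

```python
from math import gcd

def split(n):
    # First decomposition n = a + (n - a) with a, n - a > 1 and gcd(a, n - a) = 1
    return next((a, n - a) for a in range(2, n - 1) if gcd(a, n - a) == 1)

assert all(split(n) for n in range(7, 10001))
print(split(7), split(12), split(102))  # 102 = 4*25 + 2 covers the case left to the reader
```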
Can a K1 graph be a maximal clique of a graph? | Yes, you are correct as you've written your question, but let's be careful with terminology.
A maximal clique means a clique that isn't contained in any larger clique in $G$. So isolated vertices in $G$ (which are $K_1$'s) are certainly maximal cliques.
On the other hand, a maximum clique is a clique in $G$ of largest possible size. So if $G$ has at least one edge, then an isolated vertex would not be a maximum clique since somewhere in $G$ there would at least be a $K_2$. |
Find the determinant of the following matrix | To expand on darij grinberg's comment, let
$$
X=A-I_n = \begin{bmatrix}
x_1^2 &x_1x_2 & ... & x_1x_n \\
x_2x_1&x_2^2 &... & x_2x_n\\
...& ... & ... &... \\
x_nx_1& x_nx_2 &... & x_n^2
\end{bmatrix}=(x_ix_j)_{1\leq i,j\leq n}
$$
Then all the rows of $X$ are multiples of $(x_1,x_2,\ldots,x_n)$; so
${\textsf{rank}}(X)\leq 1$. The eigenvalues of $X$ (counted with multiplicity)
are therefore $0,0,\ldots,0$ ($n-1$ times), plus some $\lambda\in{\mathbb R}$.
Since the trace of $X$ equals the sum of its eigenvalues, we must have $\lambda={\textsf{trace}}(X)=\sum_{i=1}^n x_i^2$. Then
$X$ is similar to a triangular matrix with diagonal ${\textsf{diag}}(0,0,0,\ldots, 0,\sum_{i=1}^n x_i^2)$, so that
$A$ is similar to a triangular matrix with diagonal ${\textsf{diag}}(1,1,1,\ldots, 1,1+\sum_{i=1}^n x_i^2)$, whence
$$
{\textsf{det}}(A)=1+\sum_{i=1}^n x_i^2
$$ |
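A one-line numerical confirmation (my addition):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(6)
A = np.eye(6) + np.outer(x, x)
print(np.linalg.det(A), 1 + np.sum(x**2))  # agree up to rounding error
```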
What is the equation of the tangent drawn to the function $f(x)$ at the point $ x = 1 $ | Your solution is totally fine. Apart from it being a bit short on words, I have nothing to say about it. Your textbook is incorrect, as the correct answer is not among the options.
More "efficient" system than powers of 2? | Yes! There are more efficient answers. Here is one, from https://oeis.org/A096858: $$\{285,433,510,550,570,581,587,590,592,593,594\}$$
This set of $11$ positive integers has the property that all $2^{11}$ of its subset sums are unique.
You can find out more by researching sets with unique subset sums and the Conway-Guy sequence. OEIS has a couple good links to start with. |
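The claim is easy to verify by exhaustion (my addition):

```python
from itertools import combinations

s = [285, 433, 510, 550, 570, 581, 587, 590, 592, 593, 594]
sums = {sum(c) for r in range(len(s) + 1) for c in combinations(s, r)}
print(len(sums) == 2**11)  # True: all 2048 subset sums are distinct
```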
$\sum _{n=1}^{\infty} \frac 1 {n^2} =\frac {\pi ^2}{6}$ and $ S_i =\sum _{n=1}^{\infty} \frac{i} {(36n^2-1)^i}$ . Find $S_1 + S_2 $ | \begin{align*}
S_1+S_2&=\sum_{n\geq 1}\frac{1}{36n^2-1}+\frac{2}{(36n^2-1)^2}\\
&=\sum_{n\geq 1}\frac{1}{2}\frac{1}{(6n-1)^2}+\frac{1}{2}\frac{1}{(6n+1)^2}\\
&=\frac{1}{2}\sum_{\substack{n\geq 5\\n\equiv \pm1\,\!\!\!\mod 6}}\frac{1}{n^2}\\
&=\frac{1}{2}\left[\sum_{n\geq 1}\frac{1}{n^2}-\sum_{\substack{n\geq 1\\n\equiv 0\!\!\!\mod 2}}\frac{1}{n^2}-\sum_{\substack{n\geq 1\\n\equiv 0\!\!\!\mod 3}}\frac{1}{n^2}+\sum_{\substack{n\geq 1\\n\equiv 0\!\!\!\mod 6}}\frac{1}{n^2}-1\right]\\
&=\frac{1}{2}\left[\frac{\pi^2}{6}-\frac{1}{4}\frac{\pi^2}{6}-\frac{1}{9}\frac{\pi^2}{6}+\frac{1}{36}\frac{\pi^2}{6}-1\right]\\
&=\frac{\pi^2}{18}-\frac{1}{2}.
\end{align*} |
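A numerical check (my addition):

```python
import math

s = sum(1 / (36*n*n - 1) + 2 / (36*n*n - 1)**2 for n in range(1, 10**6))
print(s, math.pi**2 / 18 - 0.5)  # both ~ 0.0483114
```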
For every prime of the form 6x-1 are there comparable number of primes of the form 6x+1 | Among the first 10000 primes, there are $4988$ of the form $6k + 1$ and $5010$ of the form $6k-1$. I got these numbers using a dumb computer search.
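The same dumb search, reproduced (my addition; note $4988 + 5010 = 9998$, the remaining two of the first 10000 primes being $2$ and $3$):

```python
from itertools import takewhile

def first_primes(n):
    primes, k = [], 2
    while len(primes) < n:
        if all(k % p for p in takewhile(lambda p: p * p <= k, primes)):
            primes.append(k)
        k += 1
    return primes

ps = first_primes(10000)
print(sum(p % 6 == 1 for p in ps), sum(p % 6 == 5 for p in ps))  # 4988 5010
```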
Invariance of mutual information under invertible mapping | This is an almost trivial observation. Due to the invertible mappings considered, the random variables $X$, $X'$, $(X, X')$ are equivalent in terms of entropy, as well as in the information they provide about $Y$ (or, equivalently, $Y'$, $(Y,Y')$).
A bit more formally, it holds
$$
\begin{align}
I(X;Y) &= I(X, X'; Y, Y')\\
&=H(X,X')-H(X,X'|Y,Y')\\
&=H(X')-H(X'|Y')\\
&=I(X';Y')
\end{align}
$$
The steps are intuitive, but you may want to explicitly prove them by computing the standard entropy expressions using the (joint) probability distributions of the quantities involved. |
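A tiny discrete illustration (my addition): relabelling the outcomes by bijections, here reversing the rows and columns of a joint table, leaves the mutual information unchanged:

```python
import numpy as np

def mi(p):
    # Mutual information (in nats) of a joint pmf given as a matrix
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    m = p > 0
    return float((p[m] * np.log(p[m] / (px * py)[m])).sum())

p = np.array([[0.20, 0.10], [0.05, 0.65]])
print(mi(p), mi(p[::-1, ::-1]))  # equal: an invertible relabelling preserves I(X;Y)
```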
Find the area bounded by the curve $\left(\frac{x}a+\frac{y}b\right)^5=\frac{x^2y^2}{c^4}$ | It's probably better to use the substitution
$$\begin{cases} x = ar\cos^2\theta \\ y = br\sin^2\theta \\ \end{cases} $$
This has a Jacobian of $abr\sin(2\theta)$. From looking at the equation, it only encloses a loop in the first quadrant. Then plugging in we get that
$$ r^5 = \frac{a^2b^2}{c^4}r^4\sin^4\theta\cos^4\theta \implies r = \frac{a^2b^2}{16c^4}\sin^42\theta$$
Now we can set up our integral
$$\int_0^{\frac{\pi}{2}} \int_0^{\frac{a^2b^2}{16c^4}\sin^42\theta} abr\sin 2\theta \:dr \: d\theta = \frac{a^5b^5}{512c^8} \int_0^{\frac{\pi}{2}} \sin^92\theta \: d\theta = \frac{a^5b^5}{1024c^8} \int_{-1}^1 (1-x^2)^4dx$$
The last integral evaluates to $\frac{256}{315}$, making the value of the area
$$A = \iint_{\text{Loop}} 1 \: dA = \frac{a^5b^5}{1260c^8}$$ |
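One can confirm the result by integrating out $r$ first, $A=\frac{ab}{2}\int_0^{\pi/2} R(\theta)^2\sin 2\theta\, d\theta$ with $R(\theta)=\frac{a^2b^2}{16c^4}\sin^4 2\theta$ (my addition):

```python
import math
from scipy.integrate import quad

a, b, c = 1.3, 0.7, 1.1
R = lambda t: a**2 * b**2 / (16 * c**4) * math.sin(2 * t)**4
area, _ = quad(lambda t: 0.5 * a * b * R(t)**2 * math.sin(2 * t), 0, math.pi / 2)
print(area, a**5 * b**5 / (1260 * c**8))  # the two values agree
```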
Criteria to be in weak $L^{p}$ space | EDIT: the answer below is actually wrong, as noticed by Xiang, since $p'$ as defined here is negative and $(p,p')$ can't be used to apply Hölder's inequality. I misread the bounds on $p$. Please consider his answer below instead.
It looks like a good candidate for Hölder inequality, with $\frac{1}{p}$ and $p'$ such that $p + \frac{1}{p'}=1$, so $p'=\frac{1}{1-p}$
Let us first assume that $f$ is positive and real-valued:
$\Gamma(E) = \int_X |f|^p 1_E d\mu \leq (\int_X(|f|^p)^{\frac{1}{p}} d\mu)^p(\int_X 1_E^{p'} d\mu)^{\frac{1}{p'}} = (\int_X|f| d\mu)^p(\int_X 1_E)^{1-p} $
With the hypothesis, and since $f$ is positive so $| \int \ldots | = \int \ldots $, we can derive :
$\Gamma(E) \leq C^p\mu(E)^{p-1}\mu(E)^{1-p}=C^p$
So we just found that $\Gamma(E)$, the integral of $|f|^p$ over a finite-measure set $E$, is bounded independently of $E$. Since $X$ is $\sigma$-finite, you can find an increasing sequence of such $E$s that converges to $X$, and applying the monotone convergence theorem to the positive functions $|f|^p 1_E$ shows that $f$ is $L^p$ and $||f||_p \leq C$.
The previous proof obviously works for $f$ real and negative, or for $f=ig$ imaginary-valued with $g$ positive or negative (because, in each case, $| \int \ldots |$ can be simply expressed in terms of $\int \ldots$).
For the general case, I think you can decompose $f$ as $f=f1_{\{f \in \mathbb{R^+}\}} + f1_{\{f \in \mathbb{R^-}\}} + f1_{\{f \in i\mathbb{R^+}\}} + f1_{\{f \in i\mathbb{R^-}\}}$, see that the inequality hypothesis is still true for each of these pieces, apply the previous proof, and thus deduce that $f$ is $L^p$ as a sum of $L^p$ elements.
If $\det A^{2k+1}=-\det(A^{2k})=-1$ for all $k\in \Bbb N_0$ and $\mathrm{tr}(A)=0$ then $A\in \mathbf{O}(n)$? | No. As pointed out by another user in the comment, your conditions amount to $\det(A)=-1$ and $\operatorname{tr}(A)=0$. It is easy to cook up some counterexamples, such as $A=\pmatrix{1&1\\ 0&-1}$ or $A=\pmatrix{C&I\\ 0&P}$ where $C=\pmatrix{0&1&0\\ 0&0&1\\ -1&0&0}$ and $P=\pmatrix{0&1&0\\ 0&0&1\\ 1&0&0}$. |
What is the role of "X" symbol in the box-and-whisker plot? | Boxplot notation isn't very well standardized, but it's likely that the "X" here indicates the mean of the data being plotted. |
What does it mean to be a polynomial in the entries of a matrix? | $p_A(x)$ is presumably some polynomial function of $x$, with the form
$p_A(x) = a_0 + a_1x + a_2x^2 + \dots + a_nx^n$
where the $\{a_i\}$ are its coefficients. If the coefficients are "polynomial in the entries of $A$", then this just means that each $a_i$ is a sum of products of powers of the entries of $A$.
For example, if
$A=\left( \begin{matrix} a&b \\c&d \end{matrix} \right)$
then
$\det (A^2-xI) = \left|\left( \begin{matrix} a^2+bc-x&b(a+d) \\c(a+d)&d^2+bc-x \end{matrix} \right)\right| \\= x^2 -(a^2+2bc+d^2)x +(a^2d^2 + b^2c^2 -2abcd)$
has coefficients that are polynomials in $a,b,c,d$. |
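A symbolic check (my addition):

```python
from sympy import symbols, Matrix, eye, simplify

a, b, c, d, x = symbols('a b c d x')
A = Matrix([[a, b], [c, d]])
p = (A**2 - x * eye(2)).det().expand()
target = x**2 - (a**2 + 2*b*c + d**2)*x + (a*d - b*c)**2
print(simplify(p - target))  # 0: the coefficients are polynomials in a, b, c, d
```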
Graph with chromatic index 1 billion more than max degree? | Add an additional $999,999,999$ edges in parallel to each of the $3$ edges of $K_3$, and call this graph $G$. Then you can check $\chi'(G) = 3,000,000,000$ and $\Delta(G) = 2,000,000,000$.
upper bound to product of square roots as function of their arguments | We have $$x\geq 0$$ and $$y\geq 0$$, thus by AM-GM we get:
$$\sqrt{x}\sqrt{y}=\sqrt{xy}\le \frac{1}{2}(x+y)$$
Why are the partial derivatives a basis of the tangent space? | To distinguish between points and tangent vectors, let $p=(p_1,...,p_n)\in \mathbb{R}^n$ a point of $\mathbb{R}^n$ and $v=(v_1,...,v_n)\in T_p(\mathbb{R}^n)$ a point of the tangent space $\mathbb{R}^n$.
The line through $p=(p_1,...,p_n)\in \mathbb{R}^n$
with direction $v=(v_1,...,v_n)\in T_p(\mathbb{R}^n)$ has parametrization $a(t)=(p_1+tv_1,...,p_n+tv_n)$.
If f is $C^\infty$ in a neighborhood of $p\in \mathbb{R}^n$ and $v$ is a tangent vector at $p$, define the directional derivative of $f$ in the direction of $v$ at $p$ as
$$D_vf=\lim\limits_{t \to 0} \frac{f(a(t))-f(p)}{t}.$$
By the chain rule, we have $$D_vf=\sum_{i=1}^{n} \frac{da^i}{dt}(0)\frac{\partial f}{\partial x^i}(p)=\sum_{i=1}^{n} v_i\frac{\partial f}{\partial x^i}(p).$$
Of course, in the above notation $D_vf$, the partial derivatives are evaluated at $p$, since $v$ is a vector at $p$. Now, we can define a map $D_v$ (which assigns to every function $f$ that is $C^\infty$ the real number $D_v(f)$) in the natural way:
$$D_v=\sum_{i=1}^{n} v_i\frac{\partial }{\partial x^i}\Bigr\rvert_{p}.$$
This map $D_v\in \mathcal{D}_p(\mathbb{R}^n)$ is in fact a derivation at $p$.
Finally, you can show that the map
\begin{align}
\phi :T_p(\mathbb{R}^n) &\to \mathcal{D}_p(\mathbb{R}^n) \\
v &\to D_v
\end{align}
is a linear isomorphism of vector spaces (for surjectivity, you can use Taylor's theorem).
So the answers to your questions are:
3) Yes, you can see every tangent vector $v \in T_p(\mathbb{R}^n)$ as a derivation $D_v\in \mathcal{D}_p(\mathbb{R}^n)$ using the isomorphism $\phi$.
2) Since $e_1,...,e_n$ is the canonical basis of $T_p(\mathbb{R}^n)$ and $\phi$ is an isomorphism, $\phi(e_1),...,\phi(e_n)$ is a basis of $\mathcal{D}_p(\mathbb{R}^n)$. But $\phi(e_i)=\frac{\partial }{\partial x^i}\Bigr\rvert_{p}$, hence $\{\frac{\partial }{\partial x^i}\Bigr\rvert_{p}\}_{i=1}^n$
is a basis of the tangent space
$\mathcal{D}_p(\mathbb{R}^n)\simeq T_p(\mathbb{R}^n)$.
1) As a result, you can say that the basis of $T_p(\mathbb{R}^n)$ is $\{\frac{\partial }{\partial x^i}\Bigr\rvert_{p}\}_{i=1}^n$. You can say all that because of $\phi$. |
Prove the following improper integral converges | The integrand $f$ is bounded on $(0,1]$, so the integral is convergent.
Let $\epsilon>0$.
$f $ is continuous on $[\frac {\epsilon}{4},1] $, thus it is integrable there. There exists a subdivision $\sigma $ such that
$$U (f,\sigma)-L (f,\sigma)<\frac {\epsilon}{2} $$
Put $\sigma'=\sigma \cup \{0\} $,
then
$$U (f,\sigma')-L (f,\sigma')\leq U(f,\sigma)-L (f,\sigma)+2\cdot\frac{\epsilon}{4}<\epsilon, $$
since the new subinterval $[0,\frac{\epsilon}{4}]$ contributes at most $(\sup f-\inf f)\cdot\frac{\epsilon}{4}\leq 2\cdot\frac{\epsilon}{4}$, because $|f|\leq 1$.
Hence $f $ is integrable on $[0,1] $.
(We can set $f (0)=1$, for example; the value at a single point does not affect integrability.)
We can also observe that
$$|\sin (x+\tfrac {1}{x})| = | \sin (x)\cos (\tfrac {1}{x})+\cos (x) \sin (\tfrac {1}{x} )|\leq \sin (x)+\cos (x) \quad \text{for } x\in(0,1],$$
since $\sin(x),\cos(x)\geq 0$ there,
so your integral is absolutely convergent. |