title | upvoted_answer
---|---|
Prove the equality of circle's areas | This is known as the Pizza theorem (or, at least, a special case of the pizza theorem). There is a standard proof without words, using the following image from Wikipedia: |
How to generate a equally distant point grid on a sphere? | Generating an equidistant point grid on a sphere, where the points are allowed to be arbitrarily close together, is impossible. This is the same problem as the map projection problem.
If we could generate a point grid of arbitrary fineness, we could convert the points to coordinates on the plane, creating a map projection with no distortion. It is known that no map projection exists that doesn't distort distances.
That said, probably your best option is to pass to a projection (one that minimizes area distortions for your region of interest), plot a grid of points, and convert those points back to coordinates on a sphere. |
Joint normal distributions | Your error is in asserting that $X_1 = \sqrt2 Z_1 -1$ and $X_2 = \sqrt3 Z_2 +1$. If this were true then $X_1$ and $X_2$ would be independent (assuming $Z_1$ and $Z_2$ are independent), but we know they are correlated.
Rather than express $X:=(X_1,X_2)^T$ in terms of $Z_1$ and $Z_2$, express $Y$ directly in terms of $X$. Use the following fact:
If $B$ is a $2\times 2$ matrix of constants, and $X:=(X_1,X_2)^T$ is a random vector, then the expectation of $BX$ is $BE(X)$ and the covariance matrix of $BX$ is
$$\operatorname{Cov}(BX)=B\operatorname{Cov}(X) B^T.$$
Here $B=\begin{pmatrix}2&1\\-1&2\end{pmatrix}$, so plug in $\operatorname{Cov}(X)=\begin{pmatrix}2&-1\\-1&3\end{pmatrix}$ to obtain the mean vector and covariance matrix for $Y:=BX$. |
Are "FG-module characters" sometimes used, too? | I doubt this approach simplifies anything for the following reason: As a vector space $FG$ is free on the set $G$ so there is a canonical and natural bijection between linear maps $FG \to F$ and set maps $G \to F$. So there is a technical sense in which there is no additional information present by considering the linear map induced by a character. |
finding "exp(1)" in the p-adic numbers | The following is essentially a quote of A Course in $p$-adic Analysis, section 5.4.1.
The problem is that the radius of convergence of the $\exp_p$ series satisfies
$$r_2=\frac12,\qquad\qquad \frac1p<r_p<1,\qquad (p>2).$$
A possible solution is taking
$$\exp_p(1)=\exp_p(n/n)=\root n\of{\exp_p(n)}$$
for $n$ ($p$-adically) small enough. Namely, for $p>3$ we have $|p|_p=1/p<r_p$, the series of $\exp_p(p)$ converges, and we can define
$$\exp_p(1)=\root p\of{\exp_p(p)}.$$
(some root, no canonical choice)
Similarly, $\exp_2(1)=\root 4\of{\exp_2(4)}$ works.
If you are interested in concrete cases, PARI/GP can do calculations with $p$-adic numbers: https://www.math.lsu.edu/~verrill/teaching/math7280/padics_in_pari.pdf. |
Proof that $ \sum_{i=1}^\infty a_n$ is converges almost surely. | I presume $a_n$ are random variables. Label the events A := "$\sum_n a_n$ converges" and $B_N$ := "$\sum_n a_n > N$". We know that $\mathbb{P}[B_N] \leq C / N$ for all $N \geq 1$ for some $C \geq 0$. Note that $B_{N+1} \subset B_N$. Now, $\mathbb{P}[A^c] = \mathbb{P}[\bigcap_{N \geq M} B_N] \leq \mathbb{P}[B_M] \leq C / M$ for any $M \geq 1$. Taking $M \to \infty$ shows $\mathbb{P}[A^c] = 0$. Therefore $\mathbb{P}[A] = 1$. In other words, "$\sum_n a_n$ converges" almost surely. |
Why are there no continuous one-to-one functions from (0, 1) onto [0, 1]? | Suppose that such a function $g$ exists. Take $x_0\in(0,1)$ such that $g(x_0)=1$ and take $x_1<x_0$. Since $g$ is one-to-one and $g(x_0)=1$, $g(x_1)<1$. Now, take $x_2>x_0$. Again, since $g$ is one-to-one and $g(x_0)=1$, $g(x_2)<1$. And, since $g$ is one-to-one, $g(x_1)\neq g(x_2)$. There are then two possibilities:
$g(x_1)<g(x_2)$. Then, by the intermediate value theorem, there is a $y\in(x_1,x_0)$ such that $g(y)=g(x_2)$.
$g(x_1)>g(x_2)$. Then, by the intermediate value theorem, there is a $y\in(x_0,x_2)$ such that $g(y)=g(x_1)$.
In both cases, this contradicts that $g$ is one-to-one. |
Convergence of $\int_{1}^{\infty} \frac{\sin x}{x^{\alpha}}dx$ | $\frac{1}{x^\alpha}$ is a continuous decreasing function on $[1,+\infty)$ converging to zero and $\sin x$ is a function with a bounded primitive, hence for any $\alpha >0$ the integral
$$ \int_{1}^{+\infty}\frac{\sin x}{x^\alpha}\,dx$$
is converging due to the integral version of Dirichlet's test. Integration by parts leads to:
$$\int_{1}^{M}\frac{\sin x}{x^\alpha}\,dx = \left.\frac{-\cos x}{x^\alpha}\right|_{1}^{M}-\int_{1}^{M}\frac{\alpha\cos x}{x^{\alpha+1}}\,dx=\cos 1+O\left(\frac{1}{M^\alpha}\right)+O(1).$$ |
Advanced calculus advice | Perfect proof. You get full marks. |
If $\text{Ext}^1(\mathbb{Q}/ \mathbb{Z}, D ) = 0$ then $D$ is divisible | Following the suggestion of Jeremy Rickard:
I consider the exact sequence $$0 \to \mathbb{Z} \to \mathbb{Q} \to \mathbb{Q} / \mathbb{Z} \to 0 $$
Then we obtain the long exact sequence $$0 \to \hom(\mathbb{Q} /\mathbb{Z},D) \to \hom(\mathbb{Q}, D) \to \hom( \mathbb{Z}, D) \to \text{Ext}^1(\mathbb{Q}/ \mathbb{Z}, D ) \to \ldots $$ i.e. by the hypothesis the following sequence is exact $$0 \to \hom(\mathbb{Q} /\mathbb{Z},D) \to \hom(\mathbb{Q}, D) \to \hom( \mathbb{Z}, D) \to 0.$$ Thus let $d \in D$, $n \in \mathbb{N}$, and consider $$\phi \in \hom( \mathbb{Z}, D), \qquad \phi(1) = d.$$ Then, by the exactness of the previous sequence, we can extend $\phi$ to $\mathbb{Q}$, and so $$d = \phi(1) = \phi(n \cdot \tfrac{1}{n})= n \cdot \phi(\tfrac{1}{n}) = ng$$ with $g =\phi(\frac{1}{n})$.
This implies that $D$ is divisible. |
Finding Type I error | Under $H_0$, $\bar{x}$ follows $N(0,\frac{\sigma^2}{n})=N(0,1)$. So $$P(\bar{x}>2)=1-\Phi(2)$$ where $\Phi$ is the cdf of a standard normal. There is no closed-form way to compute $\Phi(2)$, so you have to look it up in a table. |
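As a quick numerical check of the value above (a minimal sketch; it assumes Python with `scipy` available, but any normal-CDF table or package gives the same number):

```python
from scipy.stats import norm

# P(Type I error) = P(xbar > 2 | H0) = 1 - Phi(2) for a standard normal
alpha = 1 - norm.cdf(2)
print(alpha)  # approximately 0.0228
```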
Finding two numbers when having their sum and product | Your first two equations are correct. One way to continue is to solve one of them for a variable. For example, you could find
$$
a = 41 - b
$$
and substitute into the other equation to get
$$
(41 - b)b = 238.
$$
The equation in $b$ can be rewritten $b^2 - 41b + 238 = 0$, which has solutions $7$ and $34$ (you can use the quadratic formula to find these).
If we use this information in $a + b = 41$, we also get solutions of $7$ and $34$ for $a$. As far as we are concerned in this problem, there is no difference between the solution $a = 7$ and $b = 34$ and the solution $a = 34$ and $b = 7$. We may simply say: "The two numbers with sum $41$ and product $238$ are $7$ and $34$." |
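A minimal sketch checking the arithmetic above with `sympy` (an assumed tool choice; pencil and the quadratic formula work just as well):

```python
import sympy as sp

b = sp.symbols('b')
# Solve b^2 - 41b + 238 = 0, which came from a + b = 41 and ab = 238
roots = sp.solve(sp.Eq(b**2 - 41*b + 238, 0), b)
print(roots)                            # [7, 34]
print(sum(roots), roots[0] * roots[1])  # 41 238, as required
```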
Groups of order $8n$ have at least five distinct conjugacy classes | Proof in the case $n$ is odd
By Sylow's 1st theorem there is a subgroup $H \vartriangleleft G$ of order $2 \times 2 \times 2 = 8$ (since $n$ is odd, this is the full $2$-part of $|G|$). This could be one of five possibilities:
$\mathbb{Z}_8, \mathbb{Z}_4 \times \mathbb{Z}_2, \mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$ and additionally
$ D_8 \simeq \mathbb{Z}_4 \rtimes \mathbb{Z}_2$, the dihedral group, and $Q_8$, the quaternion group.
In the Abelian case, it is clear there are eight conjugacy classes - 1 for each element. Since $H$ is normal, these extend to conjugacy classes of $G$.
In the dihedral case, the rotations form a normal subgroup: $\mathbb{Z}_4 \vartriangleleft D_8 $. This leads to 3 conjugacy classes among the rotations plus 2 for the reflections, so 5 in total. These are the symmetries of a square (See List all subgroups of the symmetry group of $n$-gon).
The quaternion group $\langle i,j,k \mid i^2 = j^2 = k^2 = ijk=-1\rangle$ has 5 conjugacy classes.
Representation theory makes it as simple as the sumset identity ($7$ is missing):
$$ \{ 0^2,1^2,2^2\} + \{ 0^2,1^2,2^2\} + \{ 0^2,1^2,2^2\} \subset \{0, 1,2,3,4,5,6,7\}$$
Sylow theorem guaranteed us a large enough normal subgroup; this is clearly the way to go.
The problem is reduced to a detailed study of the partitions and permutations involving the number 8 and related coincidences. |
Simple proof of the existence of lines in the hyperbolic space | The set of fixed points of an isometry is totally geodesic. Now, if $(S,N)$ are two antipodal points in the unit sphere $S$, there exists an isometry $\sigma $ of $S$ which fixes exactly these two points (the restriction of a reflection to $S$). By the definition of the metric, the map $T(r,u)=(r, \sigma(u))$ is an isometry whose fixed point set is $\bf H^1$, i.e. a line. The same argument proves that $\bf H^k$ is totally geodesic in $\bf H^n$. |
What is a good book to understand probability statistics for Econometrics at a very basic level? | Are you interested in the mathematical background? Because then you should probably look for a book/script on "introduction to probability theory". I don't know your mathematical background, but if you get stuck with those kinds of books you might want to look for books on "measure theory", and if the convergence arguments startle you, you would have to look for books on calculus/analysis.
There, the expected value is defined as a Lebesgue integral of the random variable, and $\operatorname{Cov}(X,Y):=\mathbb{E}[(X-\mathbb{E}[X])(Y-\mathbb{E}[Y])]$.
Understanding Lebesgue integrals is really worth it for probability theory. You understand what independence and correlation means on a more fundamental level. It starts to get really interesting once you get to conditional expected values and stochastic processes.
The proofs for the Law of large numbers and the central limit theorem are also quite nice to have seen at some point. |
Formalize "naturality" of a functor? | A "Natural" functor is typically a functor that arises from a natural mathematical situation,
e.g. $U:\textbf{Ring} \rightarrow \textbf{Grp}$ described as earlier, or $F:\textbf{Ring}\rightarrow \textbf{Ab}$, which sends a ring to its underlying abelian group, and morphisms to abelian group homomorphisms.
Various constructions can be described in terms of functors, which was the motivation for Category Theory to begin with. |
Find all $x \in\mathbb Z$ such that $16x\equiv 26\pmod{42}$ | $$16x\equiv26\pmod{42}$$
Dividing each term by $(16,26,42)=2,$
$$\iff8x\equiv13\equiv-8\pmod{21}$$
As $(21,8)=1,$
$$\iff x\equiv-1\equiv20\pmod{21}$$ |
Integrate square root of 4th grad polynomials | I ended up with this formula
$$\sum\left(u\left(15 c_i^2 \sqrt{\frac{\sqrt{- a_i^2 b_i^2 (c_i^2 -1)}-6a_i^2 b_i^2 u^{3/2} - a_i b_i}{\sqrt{-a_i^2 b_i^2(c_i^2-1)}-a_i b_i}}
\sqrt{\frac{\sqrt{-a_i^2 b_i^2(c_i^2-1)}+6 a_i^2 b_i^2 u^{3/2}+a_i b_i}{\sqrt{-a_i^2 b_i^2(c_i^2 -1)}+a_i b_i}}\\
F_1\left(\frac{2}{3};\frac{1}{2};\frac{1}{2};\frac{5}{3};-\frac{6 a_i^2 b_i^2 u^{3/2}}{a_i b_i + \sqrt{- a_i^2 b_i^2(c_i^2 -1)}},-\frac{6 a_i^2 b_i^2 u^{3/2}}{a_i b_i - \sqrt{-a_i^2 b_i^2 - (c_i^2 -1)}}\right)+36 a_i b_i u^{3/2}\sqrt{\frac{\sqrt{- a_i^2 b_i^2(c_i^2 -1)}-6 a_i^2 b_i^2 u^{3/2}- a_i b_i}{\sqrt{-a_i^2 b_i^2(c_i^2 -1)}-a_i b_i}}\sqrt{\frac{\sqrt{- a_i^2 b_i^2(c_i^2 -1)}+6 a_i^2 b_i^2 u^{3/2}+a_i b_i}{\sqrt{-a_i^2 b_i^2(c_i^2 -1)}-a_i b_i}}\\
F_1\left(\frac{5}{3};\frac{1}{2};\frac{1}{2};\frac{8}{3};-\frac{6 a_i^2 b_i^2 u^{3/2}}{a_i b_i+\sqrt{- a_i^2 b_i^2(c_i^2-1)}},-\frac{6 a_i^2 b_i^2 u^{3/2}}{a_i b_i -\sqrt{- a_i^2 b_i^2(c_i^2-1)}}\right)\\
+10(12 a_i b_i u^{3/2}(3 a_i b_i u^{3/2}+1)+c_i^2)\right)\right)
/\left(25 \sqrt{12 a_i b_i u^{3/2}(3 a_i b_i u^{3/2}+1)+c_i^2}\right)
(+ constant)$$
Where $F_1$ is the Appell hypergeometric function and $u=t^2$.
Could that be right or are there any errors (probably...)? |
Axiomatization of the naturals starting with the primes | The structure $(\mathbb{N}; \times)$ is much less "contentful" than $(\mathbb{N}; +)$. In particular, $(\mathbb{N}; +)$ is rigid - it has no nontrivial automorphisms. By contrast, $(\mathbb{N};\times)$ has lots of automorphisms, namely one for every permutation of the primes. As a consequence of this, addition is not definable from multiplication alone, so the answer to your question is negative in a strong sense.
Adding constants naming each individual prime doesn't help either: addition is definable in a structure iff it is definable in the structure using only finitely many symbols (since definitions are finite), but fixing finitely many primes leaves lots of nontrivial permutations of the primes which don't respect addition.
Strictly speaking, the second paragraph refers specifically to first-order definability, while the first paragraph applies to any notion of logical definability which is "isomorphism-respecting." (And see this old answer of mine for a discussion of definability; there the focus is on first-order, but the setup is generally applicable.) For example, addition is definable in the natural numbers with multiplication and a constant symbol for each prime using an $\mathcal{L}_{\omega_1,\omega}$ formula.
First-order logic is in many ways quite limited - in particular, it doesn't generally allow "definition by recursion," and we have:
Addition is not first-order definable in the naturals with successor.
Multiplication is not first-order definable in the naturals with successor and addition.
Each of these facts is special to first-order logic - e.g. Peano's original formulation of arithmetic used second-order logic and in that context successor is enough to get addition and multiplication.
In some special situations definition by recursion may be first-order - e.g. the bulk of the proof of Godel's incompleteness theorem amounts to showing that definition by recursion is first-order once we have addition and multiplication - but it shouldn't be taken for granted. (See e.g. the middle part of this answer of mine.) |
$\lim\limits_{x\to\infty}\frac{(x+7)^2\sqrt{x+2}}{7x^2\sqrt{x}-2x\sqrt{x}}$ | Hint: A standard thing to do is to divide top and bottom by $x^2\sqrt{x}$.
The new top is
$$\left(1+\frac{7}{x}\right)^2 \sqrt{1+\frac{2}{x}},$$
and its behaviour for large $x$ is clear.
The new bottom is $7$ plus something tiny.
Remark: In the solution given in the OP, $\sqrt{x^2+2x}$ was replaced by $x+1$. True, this is fine, the change is indeed small. But if we are doing things formally, the replacement leaves a gap in the argument. |
Fourier Series of $e^x$ | For the Fourier series of the function $f(x)=e^x$:
$$f(x)=\frac{\text{a}_0}{2}+\sum_{n=1}^\infty\text{a}_\text{n}\cos\left(\text{n}x\right)+\sum_{n=1}^\infty\text{b}_\text{n}\sin\left(\text{n}x\right)$$
For $\text{a}_0$ we get:
$$\text{a}_0=\frac{1}{\pi}\int_{-\pi}^\pi e^x\space\text{d}x=\frac{1}{\pi}\left[e^x\right]_{-\pi}^\pi=\frac{e^\pi-e^{-\pi}}{\pi}=\frac{2\sinh(\pi)}{\pi}$$
For $\text{a}_\text{n}$ we get (using two times integration by parts):
$$\text{a}_\text{n}=\frac{1}{\pi}\int_{-\pi}^\pi e^x\cos(\text{n}x)\space\text{d}x=\frac{2\cos(\text{n}\pi)\sinh(\pi)}{\pi(1+\text{n}^2)}$$
For $\text{b}_\text{n}$ we get (using two times integration by parts):
$$\text{b}_\text{n}=\frac{1}{\pi}\int_{-\pi}^\pi e^x\sin(\text{n}x)\space\text{d}x=\frac{-2\text{n}\cos(\text{n}\pi)\sinh(\pi)}{\pi(1+\text{n}^2)}$$ |
Proof of the formula for Euler's totient function | By definition, $\phi(30)$ is the count of numbers less than $30$ that are coprime to it. Also, $\phi(abc) = \phi(a)\times \phi(b)\times \phi(c)$ whenever $a$, $b$, $c$ are pairwise coprime. Note that $\phi(p)$ for a prime $p$ is always $p - 1$, because there are $p - 1$ numbers less than $p$ and all of them are coprime to it.
This means $\phi(30) = \phi(2\cdot 3 \cdot 5) = \phi(2) \cdot \phi(3) \cdot \phi(5) = (2-1)(3-1)(5-1) = 8$. But $\phi(60) = 16 \ne (2 - 1)(3 - 1)(5 - 1)$, so what you said is not always true. It is true only when every prime divisor appears to the first power, i.e. when the number is squarefree. |
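A quick check of these values, sketched with `sympy`'s `totient` (assumed available):

```python
import sympy as sp

# phi is multiplicative on pairwise coprime factors, and phi(p) = p - 1 for primes.
print(sp.totient(30), sp.totient(2) * sp.totient(3) * sp.totient(5))  # 8 8
# 60 = 2^2 * 3 * 5 is not squarefree, so (2-1)(3-1)(5-1) undercounts:
print(sp.totient(60), (2 - 1) * (3 - 1) * (5 - 1))                    # 16 8
```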
The closure of $C^1$ in the functions of bounded variation | I think the space is $W^{1,1}[0, 1]$. We clearly have that the closure (say $B$) is in $W^{1, 1}$. Furthermore, $W^{1, 1}$ is a proper subset of $\text{BV}$.
So, take a function $f$ in $W^{1, 1}$ and take an approximating sequence $f_n$ consisting of $C^\infty$ functions in the $W^{1, 1}$ norm.
So, we have $\|f_n - f\|_{\text{BV}} \lesssim \|f_n - f\|_{W^{1, 1}} \to 0$.
As $f_n$ are all in $C^1$ we also have that $f$ is in $B$.
Now we have
$$f(x) = f(0) + \int_0^x f'(t) \, \textrm{d}t.$$
So, $\|f\|_{W^{1, 1}} = \|f\|_{L^1} + \|f'\|_{L^1}$.
And $$\|f\|_{L^1} \leqslant |f(0)| + \int_0^1 \left | \int_0^x f'(t) \, \textrm{d}t \right | \textrm{d}x \leqslant |f(0)| + \int_0^1 |f'(t)| \, \textrm{d}t.$$ |
What is the support of a vector bundle complex? | The support of a complex of vector bundles usually refers to the set of points at which it is not exact. That is, the support is the set of $x\in M$ such that the complex of fibers $$0\to (V_1)_x\to (V_2)_x\to \dots\to (V_n)_x\to 0$$ is not exact. |
Help Implicit funtions theorem question | Let
$$
F=F(x,t,u)=u-h(x-tu)
$$
Then $F: \mathbb R^2\times\mathbb R^1\to\mathbb R^1$ is $C^2$, and $F_u\in C^1$ and
$$
F_u=1+th'(x-tu)>0
$$
for $t$ near $0$ and $x$ near $x_0$. Hence, the IFT provides the existence of $u(x,t)\in C^1$
in a neighborhood of $(x_0,0)$.
Note. In fact, this is Burgers' equation, and if $h'>0$, then this IVP possesses a unique global solution ($t\ge 0$). But if $h'<0$, then it develops shocks in finite time. |
Proving a function $\frac{1}{2y}\int_{x-y}^{x+y} f(t) dt=f(x)$ is a linear polynomial | Rewriting the equation as
$$\int_{x-y}^{x+y}f(t)dt=2yf(x)$$
and differentiating the above equation twice with respect to y one obtains that:
$$f'(x+y)=f'(x-y)$$ from which one can conclude that, after setting respectively $x\rightarrow x-y$ and $x\rightarrow x+y$:
$$f'(x-2y)=f'(x)=f'(x+2y) ~~\forall y>0$$
which circumvents the restriction that $y$ is positive and forces all values that $f'$ takes on the real axis to be equal (if $y$ were allowed to be any real number we could conclude this directly from the 2nd equation written). Thus $f'$ is constant and we finally find that
$$f(x)=Cx+D$$
for C,D real numbers. One can check with backsubstitution that there are no other constraints on the two parameters.
In light of this proof, I don't have a good answer as to why $f$ needs to be twice differentiable. |
Bernoulli Random Variables and Variance | The moment generating function idea is in this case a good one.
Let $T_1$ be the smallest $n$ such that $S_n=1$. More informally, $T_1$ is the waiting time until the first "success." Let $T_2$ be the waiting time from the first success to the second, and let $T_3$ be the waiting time from the second success to the third.
Then the $T_i$ are independent and identically distributed, and $T=T_1+T_2+T_3$. Thus the moment generating function of $T$ is the cube of the mgf of $T_1$.
We proceed to find the mgf of $T_1$, that is, $E(e^{tT_1})$. Note that $T_1=k$ with probability $\frac{1}{2^k}$. So for the moment generating function of $T_1$ we want
$$\sum_{k=1}^\infty \frac{1}{2^k}e^{tk}.$$
This is an infinite geometric progression with first term $\frac{e^t}{2}$ and common ratio $\frac{e^t}{2}$. Thus the moment generating function of $T_1$ is
$$\frac{e^t}{2(1-\frac{e^t}{2})}.$$
Cube this to get the mgf of $T$, and use that mgf to find $E(T)$ and $E(T^2)$.
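A sketch of that last step in `sympy` (an assumed tool; differentiating the mgf by hand gives the same numbers):

```python
import sympy as sp

t = sp.symbols('t')
# mgf of T1 (geometric waiting time with success probability 1/2), cubed for T = T1 + T2 + T3
mgf_T1 = (sp.exp(t) / 2) / (1 - sp.exp(t) / 2)
mgf_T = mgf_T1**3

ET = sp.simplify(sp.diff(mgf_T, t, 1).subs(t, 0))
ET2 = sp.simplify(sp.diff(mgf_T, t, 2).subs(t, 0))
print(ET, ET2, ET2 - ET**2)  # E(T) = 6, E(T^2) = 42, Var(T) = 6
```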
Remark: The fact that the probabilities were $\frac{1}{2}$ was not of great importance. And neither was the fact that we are interested in the waiting time until the third success.
Our $T$ has distribution which is a special case of the negative binomial. The method we used adapts readily to find the mgf of a general negative binomial. |
Polynomial - problem in dots | You've already found that this polynomial has roots $\pm 1$, hence it is divisible by $(x-1)(x+1)$. By a direct computation we can show that this polynomial does not fit our equation, hence we will consider the polynomial $(x-1)(x+1)(x+k)$ and find this constant $k$.
We write
$$f(x)(x-2)=(x-1)(x+1)(x+k)(x-2) = (x+1)f(x-1) = (x+1)(x-2)x(x+k-1).$$
After simplifications, we obtain
$$ (x-1) (x+k) = x(x+k-1).$$ Clearly, $k=0$ fits.
The answer is $f(x)=x(x-1)(x+1)$. Now proving that this polynomial is the unique (up to a multiplicative constant) solution of this equation is another question. |
what is the difference between $\mathrm{SL}(2)$ and $\mathfrak{sl}(2)$? | The two are very different. The first one is the group of all non-singular two by two matrices with determinant 1 and entries in some field. The second one is the set of two by two matrices with vanishing trace, since taking the differential of $\det(m)=1$ at $m$ gives $$\det(m)Tr(m^{-1}dm)=0$$
A way of visualizing $SL(2,\mathbb{R})$ can be found here. |
Where is "Well" needed in the Transfinite Recursion Theorem? | The condition for $R$ being well-founded is not that $R^-[x]$ is always a set (if that was all, every relation on a set would be well-founded), but that every non-empty subset of $U$ has an $R$-minimal element.
As an example of a relation that doesn't have this property, consider the usual ordering $<$ on the closed unit interval $[0,1]$. The recursion theorem would fail for this set -- for example, consider
$$ \digamma(x,h) = \begin{cases} 0 & \text{if }[0,1]\cap\operatorname{Rng}h=\varnothing \\
\sup ([0,1]\cap \operatorname{Rng} h) & \text{otherwise} \end{cases} $$
This ought to satisfy your concept of a "$<$-recursive rule" on $[0,1]$.
However, this rule does not give rise to a unique $f$ -- in fact, every continuous non-decreasing $f:[0,1]\to[0,1]$ with $f(0)=0$ will satisfy the condition
$$ f(x)=\digamma\left(x,f\restriction R^{-}[x]\right) $$
so the recursion rule did not succeed in picking out a particular one among them.
The proof goes wrong in this case because $\bigcup H$ is not necessarily a function. In order to prove that it is, one needs the "well-founded" condition on $R$. |
Can anyone explain one step of derivation in a branching process example? | The goal is to find a formula for the $n$-fold composition of $f$ with itself. This is difficult to do directly from the definition, so the author uses a clever trick.
The idea is that $f$ is a fractional linear transformation, and for every fractional linear transformation there is a corresponding matrix. Explicitly, if $T(z)=\frac{az+b}{cz+d}$ then the corresponding matrix is $\begin{bmatrix}a&b\\c&d\end{bmatrix}$. For $f$ we have $a=0,b=p,c=-q$, and $d=1$, so the matrix corresponding to $f$ is $\begin{bmatrix}0&p\\-q&1\end{bmatrix}$.
Composing fractional linear transformations corresponds to multiplying matrices, so a formula for $f\circ\cdots\circ f$ can be found by finding a formula for the powers of the matrix corresponding to $f$. This can be done by diagonalizing the matrix. |
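A small sketch of this correspondence with `sympy`, using an assumed example value $p=1/3$, $q=2/3$; the helper `f` below is just $f(z)=p/(1-qz)$, the fractional linear transformation whose matrix is $\begin{bmatrix}0&p\\-q&1\end{bmatrix}$:

```python
import sympy as sp

z = sp.symbols('z')
p = sp.Rational(1, 3)
q = 1 - p

f = lambda w: p / (1 - q * w)          # f(z) = p / (1 - q z)
M = sp.Matrix([[0, p], [-q, 1]])       # matrix of the fractional linear transformation

n = 4
Mn = M**n                              # n-th matrix power <-> n-fold composition
a, b, c, d = Mn[0, 0], Mn[0, 1], Mn[1, 0], Mn[1, 1]
via_matrix = (a * z + b) / (c * z + d)

via_composition = z
for _ in range(n):
    via_composition = f(via_composition)

print(sp.simplify(via_matrix - via_composition))  # should print 0
```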
Prove $A^TA$ is similar to $AA^T$ | In case you are allowed to use singular value decomposition (SVD), Let $A=U\Sigma V^T$ be the SVD. Then $AA^T = U\Sigma^2 U^T$ and $A^TA = V\Sigma^2 V^T$. Since $U$ and $V$ are orthogonal, the similarity follows. |
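A quick numerical illustration (a sketch assuming a random square real matrix; similar matrices share eigenvalues, so the sorted spectra should agree):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# A^T A and A A^T are symmetric, so eigvalsh applies; similar matrices share spectra.
eig1 = np.linalg.eigvalsh(A.T @ A)
eig2 = np.linalg.eigvalsh(A @ A.T)
print(np.allclose(eig1, eig2))  # True
```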
How to calculate conditional expectation? | Since the probability space is the unit interval with Lebesgue measure, the integral $dP$ is just normal calculus integration: $$\int_{[\frac{y-1}{4},\frac{y}{4})} x^2 dP=\int_{(y-1)/4}^{y/4}x^2dx.$$
Looking at it again, your derivation is flawed. The equation $E[X|Y=y] = \sum_{i} x_i P(X=x_i,Y=y)/P(Y=y)$ is only valid when $X$ is a discrete random variable. This does not apply here, since $X$ is continuous.
What you instead want is
$$
E[X|Y=y] = \frac{E[X1_{\{Y=y\}}]}{P(Y=y)}=4\int X1_{\{Y=y\}}\,dP=4\int_0^1 x^21_{[(y-1)/4,y/4)}\,dx=4\int_{(y-1)/4}^{y/4}x^2\,dx
$$
The first equality is the definition of the conditional expectation of a (general) random variable on an event. The third equality follows from the substitutions $X=x^2,\{Y=y\}=[(y-1)/4,y/4)$, and $dP=dx$. |
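A minimal `sympy` check of that final integral (an assumed tool choice):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# E[X | Y = y] = 4 * integral of x^2 over [(y-1)/4, y/4)
EX_given_y = 4 * sp.integrate(x**2, (x, (y - 1) / 4, y / 4))
print(sp.expand(EX_given_y))  # y**2/16 - y/16 + 1/48, i.e. (3y^2 - 3y + 1)/48
```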
$A^2-B$ is positive-definite if and only if $A^{-1}BA^{-1}$ has all eigenvalues $<1$ | If $P$ is invertible and $H$ is Hermitian, the matrix product $P^\ast HP$ is said to be congruent to $H$. Positive or negative (semi)definiteness is preserved under congruence. In fact, since $P$ is invertible, $v=Pu$ is a one-to-one correspondence between two vectors $u$ and $v$. Therefore $v^\ast Hv>0$ if and only if $u^\ast P^\ast HPu>0$, and likewise when $>$ is replaced by $\ge,<$ or $\le$.
In your case, let $P=(A^2)^{1/2}$. Then $A^2-B=P^2-B=P(I-P^{-1}BP^{-1})P$ is positive definite if and only if $I-P^{-1}BP^{-1}$ is positive definite, and the latter occurs if and only if all eigenvalues of $P^{-1}BP^{-1}$ are less than $1$.
Now, in general, if $X$ and $Y$ are two square matrices of the same sizes, then $XY$ and $YX$ have the same characteristic polynomials and hence also the same spectra. Therefore $P^{-1}BP^{-1},\,P^{-2}B=A^{-2}B$ and $A^{-1}BA^{-1}$ have the same eigenvalues. Hence the result follows. |
Non elementary Integral problem | Following @Andrei's idea, we see that
\begin{align}
f(x)=\int^x_0 e^{-5t^2}\ dt= x+\frac{f^{(3)}(\xi(x))}{3!}x^3
\end{align}
which means
\begin{align}
|x-f(x)| = \left|\frac{f^{(3)}(\xi(x))}{3!}x^3 \right| < \frac{10}{3!}\frac{1}{5^3}=\frac{1}{75}.
\end{align} |
Compositions of homotopic maps are homotopic | There's no strong reason not to use $G(F(x,t),t)$. One reason the author may not have is that they may have had in mind the general principle that you can concatenate homotopies, and that this often lets you build homotopies that "do multiple things" by doing each one separately and concatenating the pieces along the time coordinate. In this particular example, it is actually possible to do both of them at once as you have pointed out, but it isn't always (see for instance my answer to this question). |
Integrating products of Hermite polynomials | Let's call the three Hermite polynomials $A,B,C=\Phi_k$. Then, as the first $J+1$ Hermite polynomials form a basis of the polynomials of degree at most $J$, we can express
$$ AB = \sum_{j=0}^{\deg(AB)}a_j \Phi_j$$
By orthogonality, the answer is then
$$\int_{\mathbb R} W(\xi) ABC \ d\xi = \langle AB, \Phi_k\rangle = a_k\|\Phi_k\|^2 $$
where the inner product is
$$ \langle f,g\rangle = \int_\mathbb R f(\xi)g(\xi) W(\xi) d\xi$$
and the norm is the natural one induced by the inner product. Depending on your convention of what the Hermite polynomial is, i.e. what $W$ is, $\Phi$ may not have unit norm so you will have to check that. I don't know what the optimal choice of $C$ is, maybe you should take $C$ to be the largest degree polynomial? A nice special case is if $\deg C > \deg(AB)$ then the answer is automatically $0$. |
Spivak's Calculus on Manifolds - Proof of Inverse Function Theorem | We have shown that for every $y \in W$ there is a unique $x \in \operatorname{interior} U$ such that $f(x) = y$. But, it could very well be the case that there is also some $x' \in \mathbb{R}^n \setminus U$ such that $f(x') = y$. To account for this case, we take $V = (\operatorname{interior} U) \cap f^{-1}(W)$. |
Optimization of Rectangle | Let $C(x)$ be the cost of covering the garden with side $x$. Since $y=24/x$ (by area), $C(x)=50x+10x+2(10\frac{24}{x})=60x+\frac{480}{x}$. Since $0<x<240$, our restriction will be $x\in(0,240)$. To find the critical number/s,
$$C'(x)=60-\frac{480}{x^2}=0\ (or\ undefined)$$
$$x^2=\frac{480}{60}=8$$
$$\therefore x=2\sqrt{2}\ or\ -2\sqrt{2}$$but $-2\sqrt{2}\notin(0,240)$, so $x=2\sqrt{2}$. To test if it is a relative extremum:
$$C''(x)=\frac{960}{x^3}$$
$$C''(2\sqrt{2})=\frac{960}{(2\sqrt{2})^3}>0\ (rel. min.)$$
Since there is only one relative extremum in $(0,240)$, it will be the absolute extremum, too.
Therefore, $x$ (the side with bricks) is $2\sqrt{2}\,ft.$, and the other side is $6\sqrt{2}\,ft.$ Sorry for a not so good solution, but I hope it helps. Sorry for my previous answer :( |
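A sketch of the same computation in `sympy` (an assumed tool; it just re-does the calculus above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C = 60 * x + 480 / x                           # cost as a function of the brick side x

crit = sp.solve(sp.diff(C, x), x)              # critical points
print(crit)                                    # [2*sqrt(2)]
print(sp.diff(C, x, 2).subs(x, crit[0]) > 0)   # True, so a relative (hence absolute) minimum
print(sp.simplify(24 / crit[0]))               # 6*sqrt(2), the other side
```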
Rational numbers modulo $p$ for some prime $p$ | Hint.- Simply $\dfrac ab=\dfrac {ab^{-1}}{bb^{-1}}=ab^{-1}$ and $\dfrac ab\equiv0\iff \dfrac ab=Mp$ |
Interpretation of Power Spectral Density (DTFT of Covariance function) | $\mathbb{E}x[n+\tau]y[n] := R_{xy}[\tau]$ is the expected value of the cross-correlation at lag $\tau$. This is the average over the signal of the degree to which the signal $y$ "now" can be used to estimate the signal $x$ at "now + $\tau$". Since these signals are assumed stochastic (but stationary enough for these expectations to be stationary) we cannot say what the real cross-correlation is, only what the average is. This expectation tells us about the predictability of one signal from the other on average.
$R_{xx}$ is the autocorrelation equivalent to the above. This expectation tells us about the predictability of a signal at various lags given its current value, on average.
The $S_{xy}$ and $S_{xx}$ are the power spectra of these expectations. If you can predict over long times, the correlations in the expectations will have their first zero-crossing(s) at large lags and will be "large" out to these large lags. After a few zero-crossings, we expect the correlations to meander around zero indicating that we have left predictability. The spectra will then have narrow bandwidth (around DC). Narrow bandwidth means that the peak near DC pokes up out of the noise floor and there is some consistent, predictable behaviour in the signal(s). Equivalently, the system has a low slew rate.
If you can only predict over short times, the correlations in the expectations will have their first zero-crossing(s) at small lags and will quickly become random noise centered at zero (so indicate unpredictability). The spectra will then have wide bandwidth (around DC). Wide bandwidth means that the smeared out lump around DC does not rise much out of the noise floor and the signal(s) are not strong predictors of their later behaviour. Equivalently, the system has a fast slew rate.
Note that the signals may be better predictors at certain frequencies than at others (for example, if the source system is resonant at one of these frequencies, or the source system injects narrowband noise). Consequently, the correlations may show short or long durations of predictability (i.e. with the entire spectrum muddled together) but the transforms may show that the predictability at certain frequencies is vastly longer or vastly shorter than what one might estimate from zero-crossings or correlation decay rates.
$R_{xx}[0] - R_{xx}[r] = \mathbb{E}x[n+0]x[n] - \mathbb{E}x[n+r]x[n]$, which is proportional to expected power "now" minus power at "now + $r$", so is proportional to the power difference, as you mention. Note that this is another "on average" claim -- at any particular time, this is probably false, but the mean power difference averaged over long times will tend to this value. |
I ran $\frac{d^n}{dx^n}[(x!)!]$ through the calculator but don’t understand the $R^{(0,1)}$ and $R(n,x)$ in the output | The superscript notation means (I think) the derivative: $$R^{(0,1)}(-1+n,x)=\frac{\partial R(n,x)}{\partial x}\Bigr|_{(-1+n,x)}$$
$R(n,x)$ is so ugly, it seems, that it cannot be expressed in some closed form but only as a recursion in $n$:
$$R(n,x)=\frac{d \ln[\Gamma(x)]}{dx} \,R(-1+n,x)+\frac{\partial R(n,x)}{\partial x}\Bigr|_{(-1+n,x)}$$ |
find the points on the graph according to the value of a tangent line | Well, considering that the slope of the line tangent to $f(x)$ at any real value of $x$ is $f'(x)$, you want to differentiate your equation, set the derivative equal to each of your values, and solve for the $x$ values of each.
I'll do the last one for you to show you how:
$f'(x)=x^3-x^2-2x=10x$
$x^3-x^2-12x=0=x(x+3)(x-4)$
And finally, $x(x+3)(x-4)=0$ is true when $x=0, -3, 4$ |
How does this expression follow from Moore-Penrose inverse definition? | In general, if $D$ is a diagonal matrix and $C=D^+$, then $C$ is also a diagonal matrix with
$$
c_{ii}=\begin{cases}
d_{ii}^{-1}&\text{ when }d_{ii}\ne0,\\
0&\text{ when }d_{ii}=0.
\end{cases}
$$
It is easy to verify that the $D^+$ defined this way satisfies the four defining properties of Moore-Penrose pseudo-inverse. If you permute the standard basis vectors of $\mathbb R^n$ (or $\mathbb C^n$), so that $D=D_1\oplus0$, then the result above is just saying that $D^+=D_1^{-1}\oplus0$. Essentially, it means the Moore-Penrose pseudo-inverse of an invertible matrix $D_1$ is just the usual inverse $D_1^{-1}$, and the M-P pseudo-inverse of the zero matrix is the zero matrix itself. |
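A minimal numerical illustration with `numpy` (an assumed tool choice):

```python
import numpy as np

D = np.diag([2.0, 0.0, 5.0])
D_plus = np.linalg.pinv(D)
print(np.round(D_plus, 10))
# The pseudo-inverse is again diagonal: nonzero entries are inverted (1/2, 1/5),
# and the zero diagonal entry stays zero.
```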
Calculate the quotient groups and classify $\mathbb{Z^3}/(1, 1, 1)$ - Fraleigh p. 151 15.8 | I'm not sure why you want to calculate the order of the quotient first, but for infinite groups you cannot just divide the orders. For example $\left|\mathbb Z\big/2\mathbb Z\right|=2$ while $\left|\mathbb Z\right|=\left|2\mathbb Z\right|=\infty$.
Now let's look at $\mathbb Z^3\big/ H$ where $H=\langle\left(1,1,1\right)\rangle$. The idea is that you can always make the first coordinate $0$ by adding or subtracting $(1,1,1)$ multiple times. To be precise:
$$
(x,y,z) - x\cdot(1,1,1) = (x,y,z) - (x,x,x) = (0, y-x, z-x).
$$
So for all cosets $[(x,y,z)]\in \mathbb Z^3\big/ H$ you have $[(x,y,z)] = [(0, y-x, z-x)]$ in the quotient.
Every coset has a representative with $0$ in the first coordinate.
Now can a coset have multiple representatives of this form? Assume $[(0,y,z)]=[(0,y',z')]$, then
$$
(0,y,z) - (0,y',z') = (0, y-y', z-z') \in H = \langle(1,1,1)\rangle.
$$
All elements in $H$ are of the form $(k,k,k)$ for some $k\in \mathbb Z$, so from $(0, y-y', z-z') \in H$ it follows that $y-y'=0$ and $z-z'=0$ since the first coordinate is $0$. Thus $y=y'$ and $z=z'$ and the representatives are equal.
To summarize what we know so far:
Every coset in $\mathbb Z^3\big/ H$ has exactly one representative of the form $(0,y,z)$.
This gives a bijective map
\begin{align*}
\Phi : \mathbb Z^2 &\longrightarrow \mathbb Z^3\big/ H, \\
(y,z) &\longmapsto [(0,y,z)].
\end{align*}
Now you can easily show this map is a homomorphism, so indeed $\mathbb Z^3\big/ H \cong \mathbb Z^2$.
Regarding your questions:
(3.) Can you please flesh out this idea? Why do we want the first coordinate 0?
This looks like the hinge. I'm perplexed where this loomed from.
This is indeed the key observation. While $\mathbb Z^3$ is generated by the 3 generators $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$, we can generate $\mathbb Z^3\big/H$ by the 2 generators $[(0,1,0)]$ and $[(0,0,1)]$, since (as I showed) every coset has a representative of the form $(0,y,z)$. So we can drop a generator, which suggests that our quotient is isomorphic to $\mathbb Z^2$ (or a quotient of that).
(4.) What do your square brackets mean? Are they my curly ones {?
No, the curly ones are always sets. The square brackets denote cosets:
$$
[(x,y,z)] := (x,y,z) + H.
$$
(5.) Can you please unfold where this bijective function magically sprang from?
It's the whole idea of the proof. We showed that we only need the two generators $[(0,1,0)], [(0,0,1)]$. So let's take $\mathbb Z^2$, map its two generators $(1,0)$, $(0,1)$ to the two of $\mathbb Z^3\big/H$, and we get an isomorphism!
Indeed if we have any abelian group $G$ generated by 2 elements $G=\langle a,b\rangle$ we get an epimorphism by
\begin{align}
\Phi: \mathbb Z^2 &\longrightarrow G,\\
(x,y) &\longmapsto x\cdot a + y\cdot b.
\end{align}
Try to prove this! Then, by the homomorphism theorem, $\mathbb Z^2/\ker\Phi\cong G$. In our case $\Phi$ is injective so $\ker\Phi=\{(0,0)\}$ and thus $\mathbb Z^2\cong G$.
Update (3.) I understand this proves every coset has a representative of the form (0,y,z).
Hence you're authorized to drop the generator (1,0,0)∈ℤ3.
But before you proved this, how do you envisage and envision this hinge/'key observation'? This feels magical. What induced you to make the first coordinate 0 by subtracting (1,1,1) x times?
It's an idea, and that's very different from magic. And as you see, this idea turns out just fine. Maybe start with $\mathbb Z^3\big/\langle(1,0,0)\rangle$, which gives you $[(x,y,z)]=[(0,y,z)]$ immediately, since we just modded out a generator. For $\mathbb Z^3\big/ H$ we are in a similar situation: We can generate $\mathbb Z^3$ by $(1,1,1)$, $(0,1,0)$ and $(0,0,1)$ (check this!), and modding out one of these leaves us with the other two.
(2.) What's the intuition, by means of the last paragraph here?
When you look at the quotient $\mathbb Z\times\mathbb Z_6\big/\langle(1,2)\rangle$ you get the relation $[(1,2)]=[(0,0)]$, which you could also write as $[(1,0)]=[(0,-2)]=[(0,4)]$. Since $(0,4)$ has order $3$, the order of $[(1,0)]$ in the quotient must divide $3$ as well. |
Existence and uniqueness of the cube root | Your proof is nearly perfect, just notation errors.
There are similar ideas as your previous question: Proving the supremum of a set in the general case
Existence: After fixing some $y>0$, your set $S$ should be $S=\{x\in\mathbb{R}:x^3<y\}$, you do not need $x\ge0$. But your ideas are the same. Using your set $S$:
For $\beta^3<x$, it is clearer to write 'there exists $z\in\mathbb{R}$ s.t. $\beta^3<z^3<x$' instead of $\beta<z$ and $z^3<x$, and likewise for the other case.
For $\beta^3>x$, $z$ is an upper bound of $S$! The contradiction should be 'So $\sup S\neq\beta$.'
Using my set $S$, you should conclude $\beta^3=y$.
Uniqueness: See Ross Millikan's comment. |
Something weaker than the Riesz basis | The term you're looking for is probably Schauder basis, although for that you also need some independence properties (as usual with bases). |
Abel means of function $f$ at a jump discontinuity | This question is from Stein–Shakarchi and they give the following hint:
Hint: Explain why $\frac{1}{2\pi} \int_{-\pi}^0 P_r(\theta)\,d\theta = \frac{1}{2\pi}\int_0^{\pi} P_r(\theta)\,d\theta = \frac{1}{2}$, then modify the proof given in the text.
The proof they're referring to is that of Theorem 4.1 in chapter 2:
Theorem 4.1. Let $\{K_n\}_{n=1}^{\infty}$ be a family of good kernels,
and $f$ an integrable function on the circle.
Then $$\lim_{n \to \infty} (f * K_n)(x) = f(x)$$ whenever
$f$ is continuous at $x$. If $f$ is continuous everywhere,
then the above limit is uniform.
In particular, since $P_r(\theta)$ is even and nonnegative with $\frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta) = 1$, we have the identity given in the hint. Then
\begin{align}
A_r(f)(\theta) - \frac{f(\theta^+) + f(\theta^-)}{2}
&= \frac{1}{2\pi}\int_{-\pi}^{\pi} f(\theta-t)P_r(t)\,dt
- \frac{f(\theta^+) + f(\theta^-)}{2} \\
&= \frac{1}{2\pi}\int_{-\pi}^0 f(\theta - t)P_r(t)\,dt
+ \frac{1}{2\pi}\int_0^{\pi} f(\theta - t)P_r(t)\,dt
- \frac{f(\theta^+) + f(\theta^-)}{2} \\
&= \frac{1}{2\pi}\int_0^{\pi} f(\theta + t)P_r(t)\,dt
+ \frac{1}{2\pi}\int_0^{\pi} f(\theta - t)P_r(t)\,dt
- \frac{f(\theta^+) + f(\theta^-)}{2} \\
&= \frac{1}{2\pi}\int_0^{\pi}[f(\theta + t) - f(\theta^+)]P_r(t)\,dt
+ \frac{1}{2\pi}\int_0^{\pi}[f(\theta-t)-f(\theta^-)]P_r(t)\,dt.
\end{align}
From here you can continue similarly to the proof in the text,
using the left- and right-continuity of $f$ at $\theta$ to
bound the integrals near $0$ and using the properties of
good kernels to bound them away from $0$. |
There are $n^{n-3}$ numbers of trees with named edges - how to proof? | HINT: There are $n$ ways to pick one vertex to be the root of the tree. Once you’ve done that, you can define a direction on each edge by considering its relationship to the root. Then use the direction of each edge together with its label to label the vertices other than the root. Then apply Cayley’s formula. |
Find all homomorphisms from a quotient polynomial ring $\mathbb{Z}[X] /(15X^2+10X-2)$ to $\mathbb{Z}_7$ | Hint: a ring homomorphism $\Bbb Z[X]/(\cdots)\to\Bbb Z/7\Bbb Z$ will be determined by where $X$ is sent. It can't be sent just anywhere; it still has to satisfy $15X^2+10X-2=0$ (does this have roots in $\Bbb Z/7\Bbb Z$?). |
Euler totient function and unramified extension of $\mathbb{Q}_p$. A clarification. | The document says to add a "primitive root of order $p^n-1$" not a "primitive $n$-th root of one". So it must be an irreducible factor of $x^{p^n-1}-1$ that has degree $n$, but not $x^n-1$. |
if every sequence $(x_n)$ of $A$ contains a Cauchy subsequence, then $A$ is totally bounded | Your proof is fine. (You omitted 'totally' in the definition though.) To be very, very pedantic, your proof assumes that $A$ is non-empty. But it is trivial that the empty set is totally bounded. |
How to differentiate $F(x,y)=\int_x^y \sqrt{e^{tx}+3y}dt$ | Hint
Just to make it more general, consider the more complex case where$$G(x,y)=\int_{a(x,y)}^{b(x,y)} F[x,y,t] \, dt$$ Then, the fundamental theorem of calculus leads to $$\frac{dG(x,y)}{dx}=\int_{a(x,y)}^{b(x,y)} \frac{dF[x,y,t]}{dx} \, dt+F[x,y,b(x,y)]\frac{db(x,y)}{dx}-F[x,y,a(x,y)]\frac{da(x,y)}{dx}$$ $$\frac{dG(x,y)}{dy}=\int_{a(x,y)}^{b(x,y)} \frac{dF[x,y,t]}{dy} \, dt+F[x,y,b(x,y)]\frac{db(x,y)}{dy}-F[x,y,a(x,y)]\frac{da(x,y)}{dy}$$
The case of the post is much simpler since $$a(x,y)=x~,~ \frac{da(x,y)}{dx}=1~,~\frac{da(x,y)}{dy}=0$$ $$b(x,y)=y~,~\frac{db(x,y)}{dx}=0~,~\frac{db(x,y)}{dy}=1$$ and so $$\frac{dG(x,y)}{dx}=-F[x,y,x]+\int_{x}^{y} \frac{dF[x,y,t]}{dx} \, dt$$ $$\frac{dG(x,y)}{dy}=F[x,y,y]+\int_{x}^{y} \frac{dF[x,y,t]}{dy} \, dt$$ |
Which of the following are not class equations? | Using this answer for instance, one can prove that $D_{10}$ is the only nonabelian group of order 10. Thus all you need to do is compute its class equation. |
Let $R$ be a commutative ring with 1. If $R$ is a PID, then every prime ideal is either zero or maximal. | You used it right in the first sentence:
Let $I=(p)$ be a non-zero prime ideal of $R$.
If $R$ were not a PID, you very well could have a non-zero prime ideal that was not generated by any single element $p$. (Similarly, you used it when you assumed that some other ideal $J$ would be equal to $(m)$ for some $m$.)
As an example, if $R=\mathbb{C}[x,y]$, then the ideal $I=(x,y)$ is not equal to $(f)$ for any $f\in R$ (that is, it's not a principal ideal). |
How do I test the Convergence/ divergence of this series? | Your idea is correct, but you computed the limit of the inverse of the quotient in the ratio test.
This is the real relation:
$$\frac{U_{n+1}}{U_n} \rightarrow 0<1$$ thus the series converges. |
Measurements Numbers in Compressed Sensing | There is a wonderful book called "A Mathematical Introduction to Compressive Sensing" by Dr. Simon Foucart and Dr. Holger Rauhut. You can find the answer in Chapter 6 of this book.
I cannot give you the exact answer here, since there are lots of mathematical concepts that should be introduced first, but you can find the answer yourself in this book.
By the way, the error really depends on which method you use to construct $x'$, for example $\ell_1$ minimization, iterative hard thresholding or orthogonal matching pursuit... There are lots of methods and the errors are different.
Also, we have some constraints on the number of measurements $m$, but the sparsity $k$ is more important than $m$ in the error analysis (if I remember correctly).
In this book, they consider the noisy case, which means $y=Ax+e$. But it doesn't matter: you can simply remove the error term and the results still hold. |
The reasoning behind $\sup\{\alpha\beta+\alpha\zeta\mid\zeta<\gamma\}=\alpha\beta+\sup\{\alpha\zeta\mid\zeta<\gamma\}$ | This comes from a more general fact:
Let $A$ be a set of ordinals and $\alpha$ be an ordinal. Then $$\alpha+\sup A=\sup\{\alpha+\gamma\mid \gamma\in A\}.\tag{1}$$
Note that by definition $\sup A=\bigcup A$. Let $\beta:=\sup A$.
If $\beta=0$ then $(1)$ holds trivially.
If $\beta=\delta+1$, you can show that necessarily $\beta\in A$, i.e., $\beta=\max A$. Therefore $(1)$ holds.
If $\beta$ is a limit ordinal, then by the very definition of ordinal addition, $\alpha+\beta=\sup\{\alpha+\gamma\mid\gamma\in\beta\}$, and you can easily verify that $$\sup\{\alpha+\gamma\mid\gamma\in\beta\}=\sup\{\alpha+\gamma\mid\gamma\in A\}$$ by looking at the cases where $\beta\in A$ and $\beta\notin A$. |
What's the problem with $-2=(-8)^{\frac{1}{3}}=(-8)^{\frac{2}{6}}=\sqrt[6]{(-8)^{2}}=2$? | The sixth root is an even root, and for any even root of a positive number there are two real candidates, one positive and one negative. You pick whichever fits (or both); in this case it is $-2$, as shown by your starting point.
Another issue is by stating that -2=2, you imply that 0=4 or 0=1, which is a tell-tale sign something is wrong. |
Total Unimodularity in Integer QP | Let me assume that the feasible region is full dimensional. There are two issues:
None of the corner points of the feasible region might be optimal (since the objective function is not linear).
A QP is typically solved with a different algorithm (interior point vs [simplex method or interior point followed by the simplex method]). Therefore, if an entire face of the feasible region is optimal (which is possible when the objective function is not strictly convex), the optimization algorithm will find a non-corner point solution. |
Integration by partial fraction decomposition | The idea is that you want to have a 3 in the numerator as it makes the integral a bit easier to calculate.
Observe that $\int \frac{f'(x)}{f(x)}dx=\log|f(x)|+C$.
Therefore, $\int \frac{3}{2+3x}dx=\log|2+3x|+C$.
Similarly for the other term. |
Question on $T_1$-topological spaces with compatible uniformities having countable bases | Once you have $\mathcal{U}(d) =\mathcal{U}'$, the topology induced by that metric
is the same as the topology induced by $\mathcal{U}'$, so $\tau'$.
This is because the intermediate step is superfluous:
"the metric $d$ induces a uniformity, and the uniformity induces a topology"
is the same as "the metric induces that topology in one go, using the open balls".
This is clear when you look at the definitions of how we induce these topologies resp. uniformities:
Recall that a base for $\mathcal{U}(d)$ (as entourages) is all sets $\{(x,y) \in X^2: d(x,y) < \varepsilon\}$ and when we have a base for the entourage uniformity
$\mathcal{U}$ its induced topology on $X$ has as a base all sets $B[x]$ with $B$ in that base and $x \in X$, and for the standard $d$-uniformity base these $B[x]$ are just the open balls around $x$ with radius $\varepsilon$.
Also, in the proof of the uniform metrisability theorem (Birkhoff-Kakutani?) we explicitly construct a compatible metric for the uniformity with a countable base, so in any scenario the answer is yes, you can. |
About the weight of product | No, you have to use in an essential way that the base elements have a finite support, and count them more accurately, as I did in this recent answer.
Each $X_s$ has a base of size $\le \mathbf{m}$, and the index set has size $\le \mathbf{m}$ too, so it has $\le \mathbf{m}$ many subsets of size $n \in \omega$ and for each such set we have $\mathbf{m}^n = \mathbf{m}$ many base element choices, so still $\le \mathbf{m}$ choices for basic sets that depend on $n$ coordinates. This holds for each $n$ so in total the base size does not exceed $\mathbf{m}$ too, as $\aleph_0 \cdot \mathbf{m}=\mathbf{m}$. (We're only concerned with upper bounds here).
In your formula $$\prod_{s \in S} w(X_s)$$ could well be equal to $$\mathbf{m}^\mathbf{m} = 2^{\mathbf{m}} > \mathbf{m}$$
so naive products won't work (then you get the weight of box products instead of the usual product topology). |
How do I show $\lim_{x\to\infty}f(x) = \lim_{x\to\infty} f '(x)=0$ if $\lim_{x\to\infty}f '(x)^2 + f(x)^3 = 0$? | Suppose that : $\lim_{x\to\infty}f '(x)^2 + f(x)^3 = 0.$
Show $\limsup_{x\to\infty} f(x) = 0:$ Showing $\,\limsup_{x\to\infty} f(x) \le 0$ is easy. If $\limsup_{x\to\infty} f(x) < 0,$ then $f(x) < -\epsilon$ for large $x$ for some $\epsilon > 0.$ Show this leads to a contradiction.
If $\lim_{x\to\infty} f(x) = 0$ fails, then from 1., $\liminf_{x\to\infty} f(x) < 0.$ This shows there is $\epsilon > 0$ and sequences $x_1<y_1<x_2<y_2 < \cdots \to \infty $ such that $f(x_n)>-\epsilon /2,f(y_n) <-\epsilon $ for all $n$. Then for each $n$ $\min_{[x_n,x_{n+1}]} f = f(c_n)<-\epsilon $ for some $c_n\in (x_n,x_{n+1}).$ We then have $f'(c_n)=0$ and thus $f'(c_n)^2+f(c_n)^3$ does not approach $0,$ contradiction |
Showing or refuting that two monic irreducible polynomials are coprimes. | If $d$ is not a unit, then it divides $p$, hence $d = p$. Similarly, it divides $q$, hence $d = q$ which is a contradiction. Therefore, $d$ is a unit and applying the division algorithm there are polynomials $r, s$ such that $rp+sq = d$. Divide by $d$ to get $1$ as a linear combination of $p, q$. |
$ \mathcal{F}_n = \sigma(\{0\},\{1\},\{2\}, \dots , \{n\})$ showing $\cup_{n\geq0} \mathcal{F}_n $ is not a sigma algebra, do not understand. | $$\mathcal F =\bigcup_{n\ge 0} \mathcal F_n$$ isn’t closed under countable union.
Namely all the $\{n\}$ belong to $\mathcal F$. However $A=\{0,1, 2, \dots\} \notin \mathcal F$ as $A$ belongs to none of the $\mathcal F_n$.
$$\mathcal F \neq \sigma(\{0\}, \{1\}, \dots )$$ |
Any submodule U of V such that the module V/U is completely reducible must contain the radical? | Let $M$ run over all maximal submodules of $V$, so $\displaystyle \bigcap_M M= Rad (V)$. Then $(M+U)/U$ is maximal in $V/U$. It is easy to prove that
$$
(\bigcap_M M +U)/U \subseteq \bigcap_M \left( (M +U)/U \right).
$$
The left-hand side is $(Rad (V)+U)/U$, the right-hand side is $0$ since $Rad (V/U)=0$. Hence $Rad (V)\subseteq U$. |
Is every $k$-form $\omega$ on $V\oplus W$ of the form $\omega=\sum_{i+j=k} \pi_V^*\alpha_i\wedge \pi_W^* \beta_j$? | Yes. This is due to the isomorphism
$$\bigwedge(V\oplus W)\cong
\left(\bigwedge V\right)\hat\otimes\left(\bigwedge W\right)
$$
where $\hat\otimes$ is the skew-symmetric graded product of algebras.
You can prove this by abstract nonsense: prove the RHS is universal
for maps $\phi$ from $V\oplus W$ into algebras with the property that
$\phi(v+w)\phi(v+w)=0$, $v\in V$, $w\in W$. |
Question Regarding solution of this Problem $ \lim\limits_{x \rightarrow \infty }f(x) + f'(x) =L $ | You just need to check that you are allowed to use L'Hopital's rule: you need to verify that indeed $\lim_{x\to+\infty}g(x) = +\infty$.
As to why they used $e^x$, I think the answer lies in the differentiation of $g$ itself: you can see for yourself that it's indeed a fairly nice trick.
Hint to another way to solve this: what would happen if $\lim_{x\to+\infty}f(x)$ (provided it exists) were a number $R \not= L$ (or $\pm\infty$)? What could you say about the derivative? |
What do I need boundedness for in proving $g(\partial U) = \partial g(U)$? | Let $W$ be an infinite horizontal band and let $g$ squeeze $W$ horizontally to a square. What happens when $U$ is a smaller infinite horizontal band? |
Proving that the sum of elements of two bases is a basis | You don't need the fact that the $u_i$ are orthonormal, just the fact that $u_i$ lies in the span of $e_1,e_2,\ldots,e_i$ for every$~i$, and its expression involves $e_i$ with a positive coefficient. In other words the matrix $A$ which expresses the vectors $u_i$ in coordinates on the basis $[e_1,\ldots,e_n]$ is upper triangular with positive diagonal coefficients. Then $tI+(1-t)A$ is easily seen to have the same property for all $t\in[0,1]$, and this is the matrix expressing your convex combination in coordinates on the basis $[e_1,\ldots,e_n]$. Since this implies the matrix is invertible, you've got a basis of the vector space. |
$(f_1\,{\preceq}\,f_2)\,{\land}\,0_F\,{\preceq}\,f_3{\implies}f_1f_3\,{\preceq}\,f_2f_3$ | Let me recall from Wikipedia that:
A field $(F, +, \cdot)$ together with a total order $\preceq$ on $F$ is an ordered field if the order satisfies the following properties for all $a, b$ and $c$ in $F$:
i) if $a \preceq b$ then $a + c \preceq b + c$, and
ii) if $0_F \preceq a$ and $0_F \preceq b$ then $0_F \preceq a \cdot b$.
Suppose that $f_1\preceq f_2$ and $0_F\preceq f_3$.
By i) we have $$0_F=f_1-f_1 \preceq f_2-f_1$$
By ii) we have $$0_F=0_F\cdot f_3 \preceq (f_2-f_1)\cdot f_3=f_2\cdot f_3- f_1\cdot f_3$$
By i) we have $$f_1\cdot f_3 =0_F+f_1\cdot f_3 \preceq f_2\cdot f_3- f_1\cdot f_3+f_1\cdot f_3 =f_2\cdot f_3$$ |
How to find numbers that are both triangular and tetrahedral? | If a number is triangular, it can be represented by $\binom {n+1} 2$ for some $n$.
If a number is tetrahedral, it can be represented by $\binom {m+2} 3$ for some $m$.
Therefore, if a number is both triangular and tetrahedral, it is $\binom {n+1}2=\binom{m+2}3$ for some $n$ and $m$. |
Proof by induction: $n^2<4^n$ $n \in \mathbb N$ | we assume that $$4^k>k^2$$ and we want to prove that $$4^{k+1}>(k+1)^2$$ multiplying the first inequality by $4$ we get
$$4^{k+1}>4k^2$$ and now we have $$4k^2\geq (k+1)^2=k^2+2k+1$$ and this is true if $$3k^2-2k-1\geq 0$$ for $k\geq 1$ |
Trouble understanding sum and product of probability distributions | Assuming $X$ and $Y$ are independent discrete random variables:
a) Sum of Discrete Random Variable:
Let $Z=X+Y$ and further $X=k$ and $Z=z$. You can only have $Z=z$ when $Y=z-k$. So,
$P(Z=z)=\sum_{k=-\infty}^{k=+\infty} P(X=k)P(Y=z-k)$. This is a convolution operation.
Example: Let $X$ and $Y$ be independent Bernoulli random variables:
So,
$P(Z = 0) = P(X + Y = 0) = P(X = 0)P(Y = 0) = (1-p)^2$
$P(Z = 1) = P(X = 0)P(Y = 1) + P(X = 1)P(Y = 0)= (1 -p)p + p(1 -p) = 2p(1 -p)$
$P(Z = 2) = P(X = 1)P(Y = 1) = p^2$
$Z$ has a binomial distribution with $n=2$
$P(Z=z)=\binom{2}{z}p^z(1-p)^{2-z}$
b) For product, you can try it yourself based on above. |
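A small numerical sketch of part a), with an assumed example value $p=0.3$:

```python
import numpy as np

p = 0.3
pmf_X = np.array([1 - p, p])       # P(X=0), P(X=1)
pmf_Y = np.array([1 - p, p])

pmf_Z = np.convolve(pmf_X, pmf_Y)  # distribution of Z = X + Y via discrete convolution
print(pmf_Z)                       # [(1-p)^2, 2p(1-p), p^2] = [0.49, 0.42, 0.09]
print(pmf_Z.sum())                 # 1.0
```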
A question about similar matrices | Hint: similar matrices have the same rank. |
Fourier transformation on $L^2(G)$. | If $G$ is a compact group and $\mu$ is the left-invariant Haar measure on $G$, then $\mu(G)$ is finite, so $L^2(G)\subset L^1(G)$ by Holder's inequality and we can directly define the Fourier transform on $L^2(G)$ since all the integrals in question are defined.
However $L^2(\mathbb{R})$ is not a subset of $L^1(\mathbb{R})$, which is why it is necessary to extend the Fourier transform to $L^2(\mathbb{R})$ using some dense subset. |
Possible textbook error | Correct, because
$$
\frac1x -\frac1{x+20} = \frac{20}{x(x+20)}
$$
after finding a common denominator for the two fractions on the LHS. |
What is Fourier Transform of $\phi(x,y) = 2x $ | Hint: Note that $\phi(x,y)=0$ if $(x,y)\not\in[-1,1]\times[-1,1]$. Thus,
$$\psi(u,v) = \int_{-\infty}^\infty \int_{-\infty}^\infty\phi(x,y)e^{-2\pi i(ux+vy)}dxdy = \int_{-1}^1\int_{-1}^1\phi(x,y)e^{-2\pi i(ux+vy)}dxdy.$$
I guess you are able to continue from here. |
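For reference, a `sympy` sketch of the remaining computation (an assumption: the symbols are declared nonzero so the integrals avoid the $u=0$, $v=0$ special cases; otherwise sympy may return a Piecewise):

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v', real=True, nonzero=True)

# phi(x, y) = 2x on [-1,1]^2 separates, so the double integral factors:
Ix = sp.integrate(2 * x * sp.exp(-2 * sp.pi * sp.I * u * x), (x, -1, 1))
Iy = sp.integrate(sp.exp(-2 * sp.pi * sp.I * v * y), (y, -1, 1))
psi = sp.simplify(Ix * Iy)
print(psi)
```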
If $X=\prod_{i \in I}X_i$ is regular, then $X_i$ is regular for every $i\in I$ | The system is unhappy with the number of comments we have been adding so I will add a (hopefully final) comment as an answer.
It seems that we have reduced the problem of whether or not your $\pi_{i}(U)$ and $\pi_{i}(V)$ are the requisite open sets to the question of whether or not every nonempty open set in $\prod X_{j}$ is of the form $\prod U_{j}$ where $U_{j}=X_{j}$ for all but finitely many $j$. These are certainly basic open sets.
Then let $O\subseteq X=\prod X_{j}$ be an open set. Then $O=\bigcup_{\alpha\in J} B_{\alpha}$ where the $B_{\alpha}$ are any basic open sets. We claim that $O$ itself is a basic open set. To see this we note that if $B_{\alpha}$ and $B_{\beta}$ are given with $\alpha\neq\beta$ then there is a finite set $I_{\alpha}=\{j_{\alpha_{1}},\ldots, j_{\alpha_{n}}\}$ such that $\pi_{j_{\alpha_{k}}}(U)\neq X_{j_{\alpha_{k}}}$ (please forgive this hurricane of indices). Likewise there is a finite set $I_{\beta}=\{j_{\beta_{1}},\ldots, j_{\beta_{m}}\}$ with similar characteristics. Then we consider $B_{\alpha}\cup B_{\beta}$. If $I_{\alpha}\cap I_{\beta}=\emptyset$ then $B_{\alpha}\cup B_{\beta}=X$. Otherwise $B_{\alpha}\cup B_{\beta}$ is a basic open set $B_{\gamma}$ where $I_{\gamma}=I_{\alpha}\cap I_{\beta}$.
In general, if $\pi_{j}(O)\neq X_{j}$ for infinitely many $j$ then define $J_{O}$ to be the set of indices $j$ for which $\pi_{j}(O)\neq X_{j}$. We then have that, for all $\alpha\in J$, $\pi_{j}(B_{\alpha})\neq X_{j}$ for all $j\in J_{O}$. However, this implies that infinitely many components of $B_{\alpha}$ are not equal to an $X_{j}$, which is a contradiction. Therefore $O$ must be a basic open set. In particular $O$ is the basic open set whose $j^{th}$ component is $\bigcup_{\alpha\in J}\pi_{j}(B_{\alpha})$.
${\bf Note:}$ This was not terribly rigorous. If you wanted to make it so you could use transfinite induction on the basic open sets that make up the open set $O$. |
Antisymmetric Relation: How can I use the formal definition? | Here is a different, but equivalent definition for antisymmetry: For a relation to be antisymmetric, we need that for any element $(x,y)$ in the relation where $x \neq y$, the element $(y,x)$ must not be in the relation.
Now look at your example. Are there any $(x,y) \in R$ with $x \neq y$? Yes, all three, $(a,b), (b,c)$ and $(a,c)$. Do we have $(y,x) \in R$ for any of these? No, $(b,a) \notin R$, $(c,b) \notin R$ and $(c,a) \notin R$, so the relation is antisymmetric. |
How to come up with a function? | Hint:
You can define a function similar to the half-life decay law, which tends to $0$:
$$N(t)=N_0\left(\frac{1}{2}\right)^{t/t_{1/2}},$$
where $t_{1/2}$ is the half-life. This might be helpful. |
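A tiny numerical illustration (the numbers here are made up): with $N_0=100$ and half-life $t_{1/2}=5$, the values shrink toward $0$.

```python
N0, t_half = 100.0, 5.0  # hypothetical starting amount and half-life

def N(t):
    """Half-life style decay: the value halves every t_half time units."""
    return N0 * 0.5 ** (t / t_half)

for t in (0, 5, 10, 20, 50):
    print(t, N(t))  # 100.0, 50.0, 25.0, 6.25, ~0.098
```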
Probability that a geyser erupts | Your calculation would be fine if the geyser erupted exactly every $75$ minutes and the probability came only from the interval chosen. The probability over an interval of length $t$ would be $\begin {cases} \frac t{75}& t \lt 75\\1 & t \ge 75 \end {cases}$
You are probably expected to assume that the probability of eruption in a short interval of time $dt$ is $p\ dt$ and that the eruptions are independent. Then the chance of no eruption over an interval of length $t$ is $e^{-pt}$. Use the data you are given to find $p$, then evaluate $e^{-20p}$. |
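A minimal numeric sketch of that recipe; the input probability below is made up for illustration, so substitute whatever the problem actually gives you.

```python
import math

# Hypothetical datum: P(at least one eruption within 75 minutes) = 0.6
p_within_75 = 0.6

# P(no eruption in time t) = exp(-p t)  =>  exp(-75 p) = 1 - 0.6
p = -math.log(1 - p_within_75) / 75

print(math.exp(-20 * p))      # P(no eruption in the next 20 minutes)
print(1 - math.exp(-20 * p))  # P(at least one eruption in the next 20 minutes)
```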
Is my proof of first Kepler's Law correct? | To prove Kepler's first law, take the total energy in polar coordinates $(r,\varphi)$, $$E_\text{total}=\frac12m\dot r^2+\frac{|\vec L|^2}{2mr^2}-G\frac{mM}{r},$$ where $L=|\vec L|$ is the angular momentum, and solve this for $\dot r$:
$$\dot r=\left(\frac2m\left(E_\text{total}-\frac{L^2}{2mr^2}+G\frac{mM}{r}\right)\right)^{1/2}$$ and use $\dot\varphi=\frac{L}{mr^2}$ to get $$\frac{\mathrm d\varphi}{\mathrm dr}=\frac{\mathrm d\varphi}{\mathrm dt}\frac{\mathrm dt}{\mathrm dr}=\frac{L}{mr^2}\left(\frac2m\left(E_\text{total}-\frac{L^2}{2mr^2}+G\frac{mM}{r}\right)\right)^{-1/2}.$$
Now integrate this to get $\varphi(r)$ and solve this for $r$ to finally obtain $$r(\varphi)=\frac{a(1-b^2)}{1+b\cos\varphi}\qquad\text{for}\qquad a=-G\frac{mM}{2E_\text{total}}\quad\text{and}\quad b=\left(1+\frac{2E_\text{total}L^2}{G^2m^3M^2}\right)^{1/2}.$$
This is the equation of the conic section in polar coordinates. For $E_\text{total}<0$, this becomes the equation of an ellipse. |
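For the integration step, here is a sketch of one standard route (conventions and signs as in the formulas above), using the substitution $u=1/r$:
$$\varphi(r)=\int\frac{L\,\mathrm dr}{r^2\sqrt{2mE_\text{total}+\frac{2Gm^2M}{r}-\frac{L^2}{r^2}}}=-\int\frac{L\,\mathrm du}{\sqrt{2mE_\text{total}+2Gm^2Mu-L^2u^2}}.$$
Completing the square under the root and integrating gives
$$\varphi(r)=\arccos\left(\frac{1}{b}\left(\frac{L^2}{Gm^2M}\,\frac1r-1\right)\right)+\text{const},$$
and solving for $r$ (absorbing the constant into the choice of the axis $\varphi=0$) yields exactly the $r(\varphi)$ above, since $\frac{L^2}{Gm^2M}=a(1-b^2)$.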
Show that all of the roots of $f(z)= cz^n-e^z$ are from multiplicity of 1 | Note that $f'(z)=cnz^{n-1}-e^z$ and that if $f(z)=f'(z)=0$, then $cz^n=cnz^{n-1}$ and therefore $z=n$ or $z=0$. But neither $n$ nor $0$ is a root of $f$. |
Is the space R^N convex? | Sure, this is sufficient. In fact by the same argument, any set that is also a vector space is convex: this includes $\mathbb{R}^n$ and also linear subspaces of $\mathbb{R}^n$. |
Element from a formal power series that is algebraic over a field | The way I read the question we are free to choose the field $K$. Just in case I misunderstood, I proffer an example for all fields. To that end it suffices to cover all the prime fields $\Bbb{Q}, \Bbb{F}_p$, $p$ a prime number.
Recall the binomial series of $\sqrt{1+T}$ (=its Taylor series centered at the origin) from calculus. Its coefficients are rational, so this works for any field $K$ of characteristic zero.
If $K$ has characteristic $p>2$, then you can first show that the coefficients of the series of $\sqrt{1+4T}$ are all integers, and thus make sense modulo $p$.
If $K$ has characteristic $p=2$, then apply freshman's dream to the series
$$
a=T+T^2+T^4+T^8+\cdots=\sum_{k=0}^\infty T^{2^k}
$$
to show that $a+a^2=T$. Mind you, this series has a counterpart for all $p$, so you can skip the integrality property in the previous suggestion, and use that instead. |
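A quick sanity check of $a+a^2=T$ over $\Bbb{F}_2$, truncating at degree $16$ (a Python sketch working with coefficient lists mod $2$):

```python
DEG = 16  # truncation degree; terms of degree > DEG are ignored

# Coefficients of a = T + T^2 + T^4 + T^8 + T^16 (truncated), over F_2
a = [0] * (DEG + 1)
for k in (1, 2, 4, 8, 16):
    a[k] = 1

# a^2 mod 2, truncated at degree DEG (naive convolution)
a2 = [0] * (DEG + 1)
for i in range(DEG + 1):
    for j in range(DEG + 1 - i):
        a2[i + j] = (a2[i + j] + a[i] * a[j]) % 2

print([(a[i] + a2[i]) % 2 for i in range(DEG + 1)])
# [0, 1, 0, ..., 0]: up to degree 16 the sum a + a^2 is exactly T
```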
Proof of Cauchy Integral Formula for annulus | The integral around the drawn curve vanishes. You may then let the two vertical segments approach each other; the integrals along them will cancel. The integral then becomes (with $\gamma$ being the boundary of the annulus, oriented in the opposite direction to your drawing):
$$ 0= \oint_\gamma \frac{f(w)-f(z)}{w-z} \frac{dw}{2\pi i} = \oint_\gamma \frac{f(w)}{w-z} \frac{dw}{2\pi i} - f(z)$$ |
What does this dead end mean in a system of equations? | (1), (2) and (3) can indeed be seen as a $3\times3$ system with the parameter $E$, which you can solve for $A,B,C$ in terms of $E$. Then plugging in (4) and (5), you obtain some $E=f(E).$
If you made no substitution mistake and you do reach
$$E=\beta E,$$
where $\beta\ne1$, the solution is $E=0$ (which is not a dead end). If on the other hand $\beta=1$, the system is indeterminate. |
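To see the mechanics on a toy system (the coefficients below are made up purely for illustration and are not the ones from the question), SymPy makes the $E=\beta E$ outcome visible directly:

```python
import sympy as sp

A, B, C, E = sp.symbols('A B C E')

# Toy stand-ins for (1)-(3): solve for A, B, C in terms of the parameter E
sol = sp.solve([A + B - E, B + C - 2 * E, A + C - 3 * E], [A, B, C], dict=True)[0]

# Toy stand-in for the remaining equations: substituting back yields E = f(E)
eq = sp.Eq(E, sol[A] + sol[C])  # here f(E) = 3*E, i.e. beta = 3 != 1
print(sp.solve(eq, E))          # [0] -- the only solution is E = 0, not a dead end
```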
Is there any simpler way to find a remainder in multiple divisions? | A possible way is as follows:
$f(x) = (x^2-x+1)q(x) + 3x-5$
$q(x) = (x-1)r(x) + c$
Hence,
$$f(x) = (x^2-x+1)((x-1)r(x) + c) + 3x -5$$ $$= (x^2-x+1)(x-1)r(x) + c(x^2-x+1) + 3x-5$$
Now you can find $c$:
$$f(1) =c -2 \Leftrightarrow c=2+f(1)$$
So, you get
$$f(x) = (x^2-x+1)(x-1)r(x) + \color{blue}{(2+f(1))(x^2-x+1) + 3x-5}$$
I leave it up to you to collect like terms of the remainder as it suits you. |
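A quick check of the boxed remainder with a concrete (hypothetical) choice of $q$, and hence of $f$, using SymPy:

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical f satisfying the hypothesis: remainder 3x - 5 on division by x^2 - x + 1
q = x**2 + 1
f = sp.expand((x**2 - x + 1) * q + 3 * x - 5)

# Remainder of f on division by (x^2 - x + 1)(x - 1)
_, rem = sp.div(f, sp.expand((x**2 - x + 1) * (x - 1)), x)

# The formula from the answer: (2 + f(1))(x^2 - x + 1) + 3x - 5
predicted = sp.expand((2 + f.subs(x, 1)) * (x**2 - x + 1) + 3 * x - 5)
print(sp.expand(rem - predicted))  # 0
```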
Is this a demonstration or a definition? | Depending on which equations are taken to define $n!$, there are multiple possible answers:
If $n! := n (n-1)!, 0!=1$ is used, it's a plain definition.
If $1! = 1$ is used as the starting point, it can be proved that $0!:=1$ is a consistent extension. (this is what you did)
If $n! := \prod_{i\in \mathbb N, i \le n} i$, it follows from the definition of the empty product, $\prod_{k\in\emptyset} f(k) := 1$.
If $n! := \Gamma(n+1), n\in\mathbb N$, it follows from the definition because $\Gamma(1) = 1$.
If $n! := |S_n| = |\{f: A\to A \text{ bijective}\}|$ with $|A| = n$, note that there is only one set with $0$ elements, namely $\emptyset$, and only one function $f: \emptyset \to\emptyset$ (the empty function), which is bijective, so $0!=1$. |
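Two of these conventions can be seen directly in Python: the empty product is $1$, and there is exactly one bijection $\emptyset\to\emptyset$ (the empty permutation).

```python
import math
from itertools import permutations

print(math.prod([]))                # 1: the empty product
print(len(list(permutations([]))))  # 1: the only permutation of the empty set
print(math.factorial(0))            # 1
```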
Flux Through a Closed Curve - Orientation | I hope you were using separate unit normals for each segment!
For the segment between $(-1,0)$ and $(0,1)$, traversed anticlockwise (i.e. from $(0,1)$ to $(-1,0)$) with outer unit normal $\dfrac1{\sqrt{2}}\langle-1,1\rangle$: $x=-t,y=1-t,t:0\rightarrow1, ds=\sqrt{\left(\dfrac{dx}{dt}\right)^2+\left(\dfrac{dy}{dt}\right)^2}~dt=\sqrt{2}~dt$,
$\displaystyle\int_C\mathbf{F}\cdot\mathbf{n}=\int_0^1\dfrac1{\sqrt2}(-x+y^2)~ds=\dfrac{\sqrt2}{\sqrt2}\int_0^1t+(1-t)^2~dt=\dfrac56$.
If you keep the same direction and change to the inner unit normal $\dfrac1{\sqrt{2}}\langle1,-1\rangle$, you get $\displaystyle\int_0^1\dfrac1{\sqrt2}(x-y^2)~ds=\int_0^1-t-(1-t)^2~dt=\dfrac{-5}{6}$, i.e. the sign changes.
If you switch direction to clockwise but keep the outer unit normal $\dfrac1{\sqrt{2}}\langle-1,1\rangle$, you get
$x=t-1,y=t,t:0\rightarrow1,ds=\sqrt{2}~dt$, $\displaystyle\int_0^1\dfrac1{\sqrt2}(-x+y^2)~ds=\int_0^11-t+t^2~dt=\dfrac5{6}$
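These three values can be verified directly; a small SymPy sketch evaluating the same three parametrized integrands over $t\in[0,1]$:

```python
import sympy as sp

t = sp.symbols('t')

# The three integrands computed above, each over t in [0, 1]
print(sp.integrate(t + (1 - t)**2, (t, 0, 1)))    # 5/6  (outer normal, anticlockwise)
print(sp.integrate(-t - (1 - t)**2, (t, 0, 1)))   # -5/6 (inner normal, same direction)
print(sp.integrate(1 - t + t**2, (t, 0, 1)))      # 5/6  (outer normal, clockwise)
```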
For this kind of "flux across curve" line integral, $\displaystyle\int_C\mathbf{F}\cdot\mathbf{n}~ds$, it does matter which unit normal you use - switching gives you a sign change. It does not matter in which direction the curve is traversed. Why not? We're summing $\mathbf{F}\cdot\mathbf{n}$ along the curve, and this quantity does not depend on the direction of traversal (but does depend on the choice of normal). Further, $ds$, the 'piece of curve', is positive regardless of the direction of traversal. For the same reasons, the general line integral $\displaystyle\int_C\phi(x,y,z)~ds$ doesn't depend on the direction of traversal.
You can contrast this with the "work along curve" line integral,
$\displaystyle\int_C\mathbf{F}\cdot d\mathbf{r}$ where $d\mathbf{r}$ does depend on the direction of traversal. It's $d\mathbf{r}$, the instantaneous vector in the direction of traversal, that changes sign with a change in direction of traversal.
In more physical terms:
the flux integral is calculating the amount of flow perpendicular to the curve. Adding this up doesn't depend on which way you traverse the curve, but does depend on your idea of inside/outside (the choice of normal).
the work integral is calculating the amount of work needed to get from A to B. It matters on a given part of the curve whether you're moving with the current/wind or against it, and this depends on the direction of travel. |
Tricky trigonometric equation | The question has been answered; this is simply a comment that I can accept to close the question. |
Prove that a seminorm is a norm | Some assumptions are missing in your statement. If $(p_i)$ is just some family of semi-norms then the statement is obviously false. (Take a single semi-norm which is not a norm). So you have to assume that the $p_i$'s generate the topology of $V$. This means that sets of the form $\{x:p_{i_1}(x) <r_1,p_{i_2}(x) <r_2,..,p_{i_N}(x) <r_N\}$ ( $N$, $i_j$'s arbitrary and $r_j$'s $>0$) form a base for the neighborhoods of $0$. Under this assumption it follows that $p_i(x)=0$ for all $i$ implies that $x$ belongs to every neighborhood of $0$. Assuming that the space is Hausdorff, this implies $x=0$. |
Diagonalization over rings and the dimension of the cokernel of an endomorphism | You may assume that $\phi$ is diagonal using the Smith Normal Form (which works since $\mathcal{O}$ is a PID). I assume instead of $\dim$ you use $\mathrm{length}_{\mathcal{O}}$. |
Solving an initial value problem on a suitable interval | From
$$u'(t)=f(u(t)), u_1(0)=2, u_2(0)=\frac{19}{9}, u_k(0)=3^{-k}$$
one has the following system of differential equations
\begin{eqnarray}
&&u_1'(t)=\sum_{k=1}^\infty \frac{2^{-k}}{1+3^{-k}-u_k},\quad u_1(0)=2,\tag{1}\\
&&u_2'(t)=0,\quad u_2(0)=\frac{19}{9},\tag{2}\\
&&u_k'(t)=0,\quad u_k(0)=3^{-k},\quad k=3,4,5,\cdots.\tag{3}
\end{eqnarray}
From (2), $u_2(t)=\frac{19}{9}$ and from (3), $u_k(t)=3^{-k}$ for $k=3,4,5,\cdots$. So (1) becomes
$$ u_1'(t)=\frac{2^{-1}}{1+3^{-1}-u_1}+\frac{2^{-2}}{1+3^{-2}-\frac{19}{9}}+\sum_{k=3}^\infty 2^{-k}, u_1(0)=2$$
or
$$ u_1'(t)=\frac{3}{8-6u_1(t)},u_1(0)=2$$
whose solution is
$$ u_1(t)=\frac{1}{3}(4+\sqrt{4-9t}).$$
Thus
$$ u(t)=\{\frac{1}{3}(4+\sqrt{4-9t}), \frac{19}{9},3^{-3},3^{-4},\cdots\}.$$ |
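A SymPy sketch checking that this $u_1$ really solves the reduced scalar problem:

```python
import sympy as sp

t = sp.symbols('t')
u1 = (4 + sp.sqrt(4 - 9 * t)) / 3

# Check the ODE u1' = 3 / (8 - 6 u1) and the initial condition u1(0) = 2
lhs = sp.diff(u1, t)
rhs = 3 / (8 - 6 * u1)
print(sp.simplify(lhs - rhs))  # 0  (valid for t < 4/9)
print(u1.subs(t, 0))           # 2
```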
written set of functions as a union of Borel measurable set | Let $B_n = \{ f \in \mathcal{H} : \sup_{\mathbb{R}} |f| \le n\}$ be the sup-norm ball of radius $n$ centered at 0. Note that $B_n$ is closed. Then $\mathcal{H} = \bigcup_{n=1}^\infty B_n$. |
Proving equivalence of operators imply equivalence of measures | If absolute continuity is not given then the statement is false. As an example, take finite measures $\mu$ concentrated in $\{0\}$ and $\nu$ concentrated in $\{1\}.$ The $L^2$ spaces are one-dimensional and a unitary operator is easily constructed. |
Geometric proof about rational numbers | The easiest proof I can think of follows your intuition, via trigonometry: Let the line make an angle of $\theta$ with the $x$-axis. Then it intersects the circle in two points: $(1, 0)$ itself, and $(-\cos 2\theta, \sin 2\theta)$. We will show that both coordinates are rational. (Don't read any further if you don't want more than that hint—I couldn't tell if you don't want to be spoiled.)
First, we see that $\tan \theta = b$. Then
$$
\cos \theta = \frac{1}{\sqrt{1+b^2}}
$$
and
$$
\sin \theta = \frac{b}{\sqrt{1+b^2}}
$$
Since $\cos 2\theta = 2\cos^2 \theta-1$ and $\sin 2\theta = 2 \sin \theta \cos \theta$, we have
$$
x = -\cos 2\theta = -\frac{1-b^2}{1+b^2}
$$
and
$$
y = \sin 2\theta = \frac{2b}{1+b^2}
$$
The above image depicts $0 < b < 1$, though that needn't be the case. |
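A quick Python sketch checking that these formulas give rational points on the unit circle whenever $b$ is rational:

```python
from fractions import Fraction

def second_point(b):
    """The point (-cos 2*theta, sin 2*theta) written in terms of b, as above."""
    x = -(1 - b * b) / (1 + b * b)
    y = 2 * b / (1 + b * b)
    return x, y

for b in (Fraction(1, 2), Fraction(3, 7), Fraction(-5, 4)):
    x, y = second_point(b)
    print(x, y, x * x + y * y == 1)  # rational coordinates, and always on the circle
```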
How to deduce the Weyl group of type D? | Given a root system $R$ for a simple Lie group $G$ with maximal torus $T$, the Weyl group $W$ is always isomorphic to $Norm_G(T)/T$. Hence, with $R$ given as in Humphreys' book, for type $D_n$ we obtain that $W$ consists of all permutations and an even number of sign changes in $n$ coordinates.
Hence we have $W\cong (\mathbb{Z}/2)^{n-1}\ltimes S_n$. For more details see here. |