title | upvoted_answer
---|---|
Does $S^1 \times S^3$ admit a complex structure? | Which almost complex structure do you have in mind? In fact these spaces are better than parallelizable ($S^1$ and $S^3$): their tangent bundles are trivialized by fundamental vector fields of Lie group actions, so checking the Nijenhuis tensor is very easy (it is a linear problem) for whatever structure given by that parallelization you have in mind. Even better, since both $S^1$ and $S^3$ are compact Lie groups, you can find a left-invariant complex structure on $S^1 \times S^3$, so yes, it admits a complex structure.
EDIT:
The structure you are indicating is not well defined, since you have to choose an identification of the tangent space with $\mathbb{R}^4$. You would like to write something like this:
$\bullet$ $TS^1$ is parallelizable by a single vector field, say $\xi$;
$\bullet$ $TS^3$ is parallelizable by vector fields $e_1, e_2, e_3$.
These vector fields lift to vector fields on $S^1 \times S^3$ and give a parallelization of $T(S^1 \times S^3)$, where the Lie brackets between them are:
$\bullet$ $[\xi, e_i]=0, [e_1,e_2]=e_3, [e_2,e_3]=e_1, [e_3,e_1]=e_2.$
Now you can define an almost complex structure for example by:
$$e_2 \mapsto \xi \mapsto - e_2$$
$$e_1 \mapsto e_3 \mapsto -e_1.$$
That is an almost complex structure, yet you have to check whether it is integrable, i.e. whether $$[J,J](X,Y)=0$$ for all $X,Y \in \{e_1,e_2, e_3, \xi\}$. |
The existence of a simply-connected neighborhood of a contractible loop | There exist surjective maps $S^1 \to S^1$ of degree zero, for example the concatenation (as loops) of the identity and its reverse. They are zero in the fundamental group, but the only open set that contains them is $S^1$ which isn't simply connected. |
Statistic Missing Value | $$
82=\text{mean score}=\frac{\text{sum of all scores}}{30}=\frac{\text{missing score}+\text{sum of all others}}{30}.
$$
Therefore
$$
30\cdot82 = \text{missing score}+\text{sum of all others}.
$$
$$
30\cdot82=\text{missing score}+\left(29\cdot\text{mean of all others}\right)= \text{missing score}+(29\cdot84).
$$
So
$$
30\cdot82=\text{missing score}+(29\cdot84).
$$
Can you find the missing score given that? |
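The arithmetic above can be checked in one line; note that running this reveals the answer to the puzzle:

```python
# Rearranging the last displayed equation: missing score = 30*82 - 29*84.
missing = 30 * 82 - 29 * 84
print(missing)  # → 24
```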
Are path components necessarily open? | A simple counterexample is $\mathbb{Q}$. The (path-)components are single points, which are not open. |
Problem related to $L^p$ space | Use (Riesz-Thorin) interpolation. Show that $\|K\|_{L^1(\Bbb R^d) \to L^1(\Bbb R^d)} \le c$ using $\sup_y \int |k(x,y)|\, dx \le c$, and show that $\|K\|_{L^\infty(\Bbb R^d)\to L^\infty(\Bbb R^d)} \le c$ using $\sup_x \int |k(x,y)|\, dy \le c$. These imply $\|K\|_{L^p(\Bbb R^d) \to L^p(\Bbb R^d)} \le c$ for all $1 < p < \infty$ by interpolation. This also gives well-definedness. For all $f,g\in L^p(\Bbb R^d)$, $$\|K(f) - K(g)\|_{L^p(\Bbb R^d)} = \|K(f-g)\|_{L^p(\Bbb R^d)} \le c\|f - g\|_{L^p(\Bbb R^d)}$$ So $K$ is uniformly continuous. |
How can I rotate a coordinate around a circle? | If your central point is $(c_x,c_y)$ and you want to rotate counter-clockwise about this point by an angle of $\theta$ (in radians) you would shift the center to the origin (and the rest of the points of the plane with it), then rotate, then shift back. You can use:
$x_{\text{rot}}=\cos(\theta)\cdot(x-c_x)-\sin(\theta)\cdot(y-c_y)+c_x$
$y_{\text{rot}}=\sin(\theta)\cdot(x-c_x)+\cos(\theta)\cdot(y-c_y)+c_y$
$(x,y)$ are your initial coordinates and $(x_\text{rot},y_{\text{rot}})$ are the new coordinates after rotation by $\theta$ about $(c_x,c_y)$
Example: If you want to rotate the point $(3,0)$ by $90^{\circ}$=$\frac{\pi}{2}$ radians about the point $(3,2)$ the formula should give $(5,2)$. Computing to check:
$x_{\text{rot}}=\cos(\frac{\pi}{2})\cdot(3-3)-\sin(\frac{\pi}{2})\cdot(0-2)+3=5$
$y_{\text{rot}}=\sin(\frac{\pi}{2})\cdot(3-3)+\cos(\frac{\pi}{2})\cdot(0-2)+2=2$ |
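For readers who want to use this in a program, here is a small Python sketch of the same formulas; the function name `rotate_about` is mine, not from any library:

```python
import math

def rotate_about(x, y, cx, cy, theta):
    """Rotate (x, y) counter-clockwise by theta radians about (cx, cy)."""
    dx, dy = x - cx, y - cy
    x_rot = math.cos(theta) * dx - math.sin(theta) * dy + cx
    y_rot = math.sin(theta) * dx + math.cos(theta) * dy + cy
    return x_rot, y_rot

# The worked example: rotating (3, 0) by 90° about (3, 2) gives (5, 2),
# up to floating-point rounding.
xr, yr = rotate_about(3, 0, 3, 2, math.pi / 2)
assert abs(xr - 5) < 1e-9 and abs(yr - 2) < 1e-9
```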
Outer Measure of a Set with a Single Point Removed | Your proof is correct.
A shorter argument for $m(A)\leq m(A \setminus \{a\})$:
$$m(A) = m\bigl((A \setminus \{a\}) \cup \{a\}\bigr) \le m(A \setminus \{a\}) + m(\{a\}) = m(A \setminus \{a\})$$ by the subadditivity of the outer measure, and the fact that $m(\{a\}) = 0$, since $\{a\}\subseteq \langle a - \varepsilon, a + \varepsilon\rangle$ for any $\varepsilon > 0$. (Or you can say $m(\{a\}) = \lambda(\{a\}) = 0$, where $\lambda$ is the Lebesgue measure on $\mathbb{R}$.) |
Dual Vector Space of linear functionals | The dual space $V^*$ of a vector space $V$ is the space of linear functionals on $V$, so it is a different space from $V$; you cannot define an inner product between two different spaces, and you cannot define the notion of orthogonality between elements of these different spaces. And note that the dual basis in $V^*$ is such that (using your notation) $w_i(e_j)=\delta_{ij}$, so:
$$
w_1(e_1)=1 \qquad w_2(e_2)=1 \qquad w_3(e_3)=1
$$ |
Question about a sum | These sums are classical and can be proved by induction:
$$\sum_{k=1}^n k=\frac{n(n+1)}{2}$$
and
$$\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}{6}$$
so
$$\frac{3}{2}\sum_{k=1}^n(3k^2+k)=\frac{3n(n+1)^2}{2}$$
and on the other hand
$$\sum_{k=1}^n(n+k+1)(n+1)=(n+1)\sum_{k=1}^n(n+k+1)\\=(n+1)\left(n(n+1)+\sum_{k=1}^nk\right)=\frac{3n(n+1)^2}{2}$$
so the equality is proved. |
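Both closed forms and the resulting identity are easy to sanity-check numerically; a quick Python check over small $n$, doubling both sides to stay in integers:

```python
# Verify the two classical sums and the identity (3/2)·Σ(3k²+k) = 3n(n+1)²/2
# for small n, working with twice each side to avoid fractions.
for n in range(1, 100):
    ks = range(1, n + 1)
    assert sum(ks) == n * (n + 1) // 2
    assert sum(k * k for k in ks) == n * (n + 1) * (2 * n + 1) // 6
    lhs2 = 3 * sum(3 * k * k + k for k in ks)   # 2 · (3/2)·Σ(3k²+k)
    rhs2 = 3 * n * (n + 1) ** 2                 # 2 · 3n(n+1)²/2
    assert lhs2 == rhs2
    assert 2 * sum((n + k + 1) * (n + 1) for k in ks) == rhs2
print("all checks passed")
```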
Is the Inclusion functor always exact | Related to the suggestion given in the comments, here is a more-or-less universal example. If $0\to A\to B\to C\to 0$ is short exact in an abelian category $\mathscr A$, then the image of $0\to A\to B\to C$ under the Yoneda embedding is exact in the category of additive functors $[\mathscr A^\mathrm{op},\mathrm{Ab}]$, but $B\to C\to 0$ is never exact except in trivial cases. The reason is that cokernels in $[\mathscr A^\mathrm{op},\mathrm{Ab}]$ are constructed levelwise: the cokernel of a natural transformation $F\to G$ is the functor $H$ with $H(X)=\mathrm{cok}(F(X)\to G(X))$, the latter cokernel being computed in abelian groups. So if $C$ is to map to the cokernel of $A\to B$ under the Yoneda embedding, then for every $X$ we must have $$\mathrm{Hom}_\mathscr{A}(X,C)=\mathrm{cok}\bigl(\mathrm{Hom}_\mathscr{A}(X,A)\to \mathrm{Hom}_\mathscr{A}(X,B)\bigr).$$
In other words, any map $X\to C$ factors through $B$, uniquely up to a factorization through $A$. In particular the identity $C\to C$ factors through $B$, so $B\to C$ is a split epimorphism and the original short exact sequence was split. It's clear that the converse holds as well: exactly the split short exact sequences remain exact under the Yoneda embedding.
Now, the Yoneda lemma says that $\mathscr A$ appears as a full subcategory of $[\mathscr A^\mathrm{op},\mathrm{Ab}]$. So, to summarize, the Yoneda embedding gives an example of a full abelian subcategory for which the embedding is exact if and only if every short exact sequence in $\mathscr A$ is split. |
Does the following function have a root in $[0,\pi/2]$? | The answer is No.
In my opinion, it does have a root, as $f(0) = 1$ and $f(\frac \pi 2) \to -1$. Therefore, $f(\frac \pi 2)\cdot f(0) < 0$ and since $f(x)$ is a continuous function, it has a root in the given interval.
As you know, you can apply this argument only when the function is continuous, but this function isn't.
If $0\leq x <1,$ we have $\lim_{n\to\infty}x^n = 0$
So we know $f(x)=2^x $ when $x\in[0, 1)$.
When $x=1,$ just put $x=1,$ we get $f(1)=\frac{2-\sin1}{2}.$
If $x>1,$ we know $\lim_{n\to\infty}x^n = \infty$, so we divide the numerator and denominator by $x^n$.
Then, evaluating the limit, we have $f(x)=-\sin x$ when $x\in\left(1,\frac{\pi}{2}\right].$
Can you end this now? |
How do I find the area of an inscribed triangle with a vertex at the intersection of an ellipse and a hyperbola with the same foci? | We know that the apex of the triangle must be one of the foci. If it were $P$, then an intersection of the two conics would lie on the ellipse’s minor axis, which is not possible with confocal conics with the given vertices. Wlog take the apex to be at $F_1$.
By Heron’s formula, the area of the triangle is $$A=\sqrt{s\,(s-|F_1P|)\,(s-|F_2P|)\,(s-|F_1F_2|)}$$ where $s$ is the semi-perimeter $(|F_1P|+|F_2P|+|F_1F_2|)/2$. Setting $|F_1F_2|=|F_1P|=2f$, $|F_2P|=2d$ and simplifying gives $A=d\sqrt{4f^2-d^2}$.
From the properties of the two conic sections we know that $$|F_1P|+|F_2P|=2f+2d=|QT|=2a_e \\ |F_1P|-|F_2P|=2f-2d=|RS|=2a_h.$$ Solving these equations for $f$ and $d$ yields $$\begin{align}f&=\frac12(a_e+a_h)\\d&=\frac12(a_e-a_h).\end{align}$$ Plug in the values that you’ve computed for the semi-axis lengths, and you’re done. |
Deduce the conclusion from the premise. | $\neg p\lor q\to r$ - Premise
$s\lor \neg q$ - Premise
$\neg t$ - Premise
$p\to t$ - Premise
$\neg p\land r\to \neg s$ - Premise
$\neg p$ - Modus Tollens, from $p\to t$ and $\neg t$
$\neg p\lor q$ - "or" introduction, from $\neg p$
$r$ - Modus Ponens, from $\neg p\lor q\to r$ and $\neg p\lor q$
$\neg p\land r$ - "and" introduction, from $\neg p$ and $r$
$\neg s$ - Modus Ponens, from $\neg p\land r\to \neg s$ and $\neg p\land r$
$\neg q$ - Disjunctive Syllogism, from $s\lor \neg q$ and $\neg s$ |
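Since there are only five propositional variables, the entailment can also be verified semantically by brute force over all $2^5$ truth assignments; a small Python check (mine, not part of the original derivation):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Look for a model of all five premises in which q is true.
counterexample = None
for p, q, r, s, t in product([False, True], repeat=5):
    premises = [
        implies((not p) or q, r),        # ¬p ∨ q → r
        s or (not q),                    # s ∨ ¬q
        not t,                           # ¬t
        implies(p, t),                   # p → t
        implies((not p) and r, not s),   # ¬p ∧ r → ¬s
    ]
    if all(premises) and q:
        counterexample = (p, q, r, s, t)
print(counterexample)  # → None: the premises entail ¬q
```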
$F, G \in k[X_1, \dots , X_n]$ homogeneous of degrees $r$ and $r+1$ $\implies$ $F+G$ is irreducible | After you homogenize, the result is a linear polynomial in $X_{n+1}$ with coefficients in some field, which just happens to be $k(X_1, \dots, X_n)$. |
How to show that $g(y):=\frac{1}{2}\sum_{i=1}^n\left(\left(\sqrt{\lambda_i}y_i -c_i\right)^2-c_i^2\right)$ is strictly convex? | You can show the (strict) convexity of $g(y)$ without any Hessian by noting that $g(y)$ is (up to the positive factor $\frac{1}{2}$) the sum of functions of the form $h(t) = (\sqrt{\lambda}t-c)^2- c^2 = \lambda t^2 - 2ct$ with $\lambda > 0$ for each coordinate $y_i, \;(i=1, \ldots , n)$.
So, if you can show the (strict) convexity of $h$ you are basically done.
Since $-2ct$ is linear and $\lambda > 0$, the only thing to show is that $t^2$ is (strictly) convex. At this point one would usually just point out that $h''(t) = 2\lambda > 0$ (the 1-dimensional "Hessian").
But it is also very easy to show (strict) convexity of $t^2$ directly by noting that if $p \in [0,1]$ and $u,v \in \mathbb{R}$, you get by rearranging
$$(pu+(1-p)v)^2 \leq pu^2+(1-p)v^2\Leftrightarrow p(1-p)(u-v)^2\geq 0$$
The inequality on the RHS is always true and equality holds if and only if $p=0$ or $p=1$ or $u=v$. This is exactly strict convexity. |
If $I = \int_{-\infty}^\infty(xu - 3tu^2)\mathrm{d}x$, show that $\frac{\mathrm{d}I}{\mathrm{d}t} = 0$. | Starting from
$$ \int_{-\infty}^{\infty} (xu_t - 3u^2-6tuu_t) \, dx, $$
the second term can be integrated by parts:
$$ \int_{-\infty}^{\infty} -3u^2 \, dx = [-3xu^2]_{-\infty}^{\infty} + \int_{-\infty}^{\infty} 6xuu_x \, dx = \int_{-\infty}^{\infty} 6xuu_x \, dx, $$
since the boundary terms must vanish for $\int xu$ to be finite. We can apply KdV to replace the first two terms of the whole integral:
$$ \int_{-\infty}^{\infty} (xu_t +6xuu_x -6tuu_t) \, dx = \int_{-\infty}^{\infty} (-xu_{xxx}-6tuu_t) \, dx = \int_{-\infty}^{\infty} (u_{xx}-6tuu_t) \, dx , $$
where for the last equality we have integrated by parts again. The first term is the $x$-derivative of $u_x$, so vanishes when integrated, leaving us with
$$ -6t\int_{-\infty}^{\infty} uu_t \, dx $$
Lastly, $uu_t = -uu_{xxx}-3u^2u_x$. The second term of this is obviously the derivative of $-u^3$, so vanishes when integrated. The other term can be integrated by parts,
$$ \int -uu_{xxx} \, dx = 0 + \int u_x u_{xx} \, dx = \int \frac{1}{2}(u_x^2)_x \, dx, $$
and so is also a derivative. Hence all the terms vanish.
Alternatively, we can say:
$$ \int_{-\infty}^{\infty} (xu_t - 3u^2-6tuu_t) \, dx = \int_{-\infty}^{\infty} ((x-6tu)(-u_{xxx}-6uu_x) - 3u^2 ) \, dx \\
= \int_{-\infty}^{\infty} (-(x-6tu)u_{xxx}-6xuu_x + 36tu^2u_x - 3u^2 ) \, dx \\
= \int_{-\infty}^{\infty} ( (1-6tu_x)u_{xx}-((x-6tu)u_{xx})_x - 3(xu^2)_x + 3u^2+ 12t(u^3)_x - 3u^2 ) \, dx \\
= \int_{-\infty}^{\infty} (u_x-3tu_x^2 -(x-6tu)u_{xx} - 3xu^2 +12tu^3 )_x \, dx \\
= [u_x-3tu_x^2 -(x-6tu)u_{xx} - 3xu^2 +12tu^3 ]_{-\infty}^{\infty} = 0, $$
where the first equality uses KdV, the second regroups terms, the third uses the product rule in the form $f'g=(fg)'-fg'$ on the first three terms, the fourth writes everything possible as derivatives, the fifth from integrating and the final one comes from the decay at $\infty$. |
Questions about orthogonal matrices. | Here's one way to show this:
1) Use the Gram-Schmidt procedure to find orthonormal bases of the spaces spanned by $\{a_1,a_2\}$ and $\{b_1,b_2\}$. Since the inner product of $a_1$ with $a_2$ is the same as the inner product of $b_1$ with $b_2$, when you write $a_i$ in terms of the orthonormal basis of the space spanned by $\{a_1,a_2\}$, the coefficients will be same as when you write $b_i$ in terms of the orthonormal basis of the space spanned by $\{b_1,b_2\}$ (for $i=1,2$).
2) Complete your orthonormal sets to orthonormal bases of $V$.
3) There is an orthogonal matrix $A$ that takes the first basis to the second (one characterization of an orthogonal matrix is that it takes an orthonormal basis to an orthonormal basis).
4) Because of the properties mentioned in part 1), this matrix $A$ takes $a_1$ to $b_1$ and $a_2$ to $b_2$. |
does $\frac 1{x^2} = \frac 1{x+1}$ ? and if so how? | Your expression is this:
$$\frac{x-2}{x^2-x-2}$$
Factoring the denominator:
$$=\frac{x-2}{(x-2)(x+1)}$$
Cancelling like factors:
$$=\frac{1}{x+1}$$
The expression should not become $1/x^2$ when you cancel the common factor; cancelling merely removes the removable discontinuity at $x=2$. |
Lower bound on convergence in probability | It suffices to prove that$\def\eps{\varepsilon}$
$$ \{X \le t -\eps\} \subseteq \{X_n \le t\} \cup \{|X_n - X| \ge \eps \} $$
Suppose that $X(\omega) \le t- \eps$. If $X_n(\omega) \le t$, we are done; in the case $X_n(\omega) > t$, we have
$$ |X_n(\omega) - X(\omega)| = X_n(\omega) - X(\omega) > t - (t- \eps) = \eps $$
Now use subadditivity of $\mathbf P$. |
Under which operations are singular values invariant? | Multiplying $A$ by an orthogonal matrix will necessarily preserve its singular values.
Here's an argument that proves this for square matrices. The singular values of $A$ are equal to the square roots of the eigenvalues of $A^TA$, which are equal to the square roots of the eigenvalues of $AA^T$. If we multiply $A$ on the left by an orthogonal matrix $U$, then we find that
$$
(UA)^T(UA) = A^TU^TUA = A^TA.
$$
Thus, the singular values of $UA$ are the square roots of the eigenvalues of $A^TA$, which are the singular values of $A$. Similarly, if we multiply $A$ from the right by an orthogonal matrix $U$, we have
$$
(AU)(AU)^T = AUU^TA^T = AA^T.
$$
So, the singular values of $AU$ are the square roots of the eigenvalues of $AA^T$, which are the singular values of $A$.
If $U$ is a permutation matrix, then $AU$ is a matrix constructed by permuting the columns of $A$ and $UA$ is a matrix constructed by permuting the rows of $A$. |
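The key identity $(UA)^T(UA) = A^TA$ is easy to confirm numerically; a minimal pure-Python sketch using a $2\times 2$ rotation matrix as the orthogonal $U$ (the helper functions and test matrices are my own illustrative choices):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

t = 0.7                                   # any angle: rotations are orthogonal
U = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
A = [[1.0, 2.0],
     [3.0, 4.0]]

UA = matmul(U, A)
lhs = matmul(transpose(UA), UA)           # (UA)^T (UA)
rhs = matmul(transpose(A), A)             # A^T A
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("identity holds")
```

Since the singular values are determined by $A^TA$, this is exactly why they are unchanged.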
Conditional expectation of a Poisson random variable: confusing sums | The conditional distribution of $N$ given that $K = k$ is a Poisson distribution with parameter $q\lambda$ but displaced $k$ to the right; that is, (conditionally) $N$ is of the form $M+k$ where $M$ is Poisson$(q\lambda)$ and so $$E[N\mid K = k] = E[M+k] = E[M]+k = q\lambda + k.$$
Or, without spending time thinking about the matter, write $m = n-k$ and the sum as
$$\begin{align}E[N\mid K = k] &= \sum_{n\geq k}n \frac{(q\lambda)^{n-k}}{(n-k)!}e^{-q\lambda}\\
&= \sum_{m \geq 0} (m+k)\frac{(q\lambda)^{m}}{m!}e^{-q\lambda}\\
&= \sum_{m \geq 0}m\frac{(q\lambda)^{m}}{m!}e^{-q\lambda}
+ k \sum_{m \geq 0}\frac{(q\lambda)^{m}}{m!}e^{-q\lambda}\\
& = q\lambda + k
\end{align}$$
where I will leave how the very last line follows from
the previous one as a puzzle for you to work out. |
When rotating a sphere $90^{\circ}$ away from you and then $90^{\circ}$ counter-clockwise, what point is either fixed or sent to its antipode? | It is true that every point will move during this process, but two points will ultimately end up where they started.
In particular, for this combination of transformations, assuming you have the unit sphere centered at the origin, with the positive $x$-axis to the right, the positive $y$-axis away from you and the positive $z$-axis upwards, the point at $\left(-\frac{\sqrt3}3, -\frac{\sqrt3}3, \frac{\sqrt3}3\right)$ first gets sent to $\left(-\frac{\sqrt3}3, \frac{\sqrt3}3, \frac{\sqrt3}3\right)$ by the "away" rotation, then back to $\left(-\frac{\sqrt3}3, -\frac{\sqrt3}3, \frac{\sqrt3}3\right)$ by the counterclockwise rotation. Its antipodal point goes through a similar transformation. |
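The claim can be checked directly; in the sketch below the two quarter-turns are hard-coded as coordinate permutations ("away" as $90^\circ$ about the $x$-axis with the top moving toward $+y$, "counter-clockwise" as $90^\circ$ about the $z$-axis seen from above); these conventions are my reading of the answer's setup:

```python
import math

a = math.sqrt(3) / 3
p = (-a, -a, a)

def away(v):
    # 90° rotation about the x-axis, top of the sphere moving to +y: (x, y, z) -> (x, z, -y)
    x, y, z = v
    return (x, z, -y)

def ccw(v):
    # 90° counter-clockwise rotation about the z-axis, seen from above: (x, y, z) -> (-y, x, z)
    x, y, z = v
    return (-y, x, z)

mid = away(p)
assert all(abs(mid[i] - (-a, a, a)[i]) < 1e-12 for i in range(3))  # the stated intermediate point
q = ccw(mid)
assert all(abs(q[i] - p[i]) < 1e-12 for i in range(3))  # back where it started
assert mid != p                                          # but it did move along the way
print("fixed overall")
```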
Product of varieties | What you write is correct but it's even simpler than that: projective space needn't be invoked, nor the product topology.
To say that $X$ (resp. $X'$) is a rational variety means that some non-empty open subset $U\subset X$ (resp. $U'\subset X'$) is isomorphic to some open subset $V\subset \mathbb A^n$ (resp. $V'\subset \mathbb A^{n'}$).
But then the open subset $U\times U'\subset X\times X'$ is isomorphic to the open subset $V\times V'\subset \mathbb A^{n}\times \mathbb A^{n'}=\mathbb A^{n+n'}$, proving that $X\times X'$ is rational. |
probability - 4 random digits, 2 different ones (clarification) | The short answer is that in case i) you have a very clear and well-defined way to tell which number is used for which purpose. Regardless of the order of the four digits, you can tell at sight which digit "appears once" and which digit "appears three times." There is absolutely no ambiguity there.
Compare this to the situation where you are in case ii) where both appear twice. Which is the "first" digit that was selected to appear two times and which is the "second" digit that was selected to appear two times is ambiguous.
Rather than dividing by symmetry, I suggest instead to count the numerator for the second case in the following way:
Pick what the furthest left digit is: $10$ options.
Among the remaining three positions, exactly one will match what our left digit is. Pick which of these that is: $3$ options.
Pick what digit is used for the remaining two positions: $9$ options.
This gives a numerator of $10\times 3\times 9$ which you should see is equal to the same answer as given in Andre's answer before, just written differently. $\binom{10}{2}\binom{4}{2} = 10\times 3\times 9 = 10\times 9\times \binom{4}{2} / 2 = \dots$ |
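The count is small enough to verify by brute force over all $10^4$ ordered strings of four digits; a quick Python check confirming $\binom{10}{2}\binom{4}{2} = 10\times 3\times 9 = 270$:

```python
from itertools import product
from collections import Counter
from math import comb

# Count ordered strings of 4 digits whose digits form a multiset {x, x, y, y}, x != y.
count = sum(1 for s in product(range(10), repeat=4)
            if sorted(Counter(s).values()) == [2, 2])
assert count == 10 * 3 * 9              # the counting scheme above
assert count == comb(10, 2) * comb(4, 2)
print(count)  # → 270
```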
A proof involving nested integrals and induction | The integral under consideration is
\begin{align*}
I &= \int_0^x dx_1 \int_0^{x_1}dx_2 \cdots \int_0^{x_{n-1}}f(x_n) \, dx_n & \text{} \\
&= \int_0^x \left( \ \underset{ x_n \leq x_{n-1} \leq \ldots \leq x_1 \leq x}{\idotsint} \ dx_1 \cdots dx_{n-1} \right)f(x_n) \ dx_n & \\
&= \int_0^x \left( \ \underset{ t \leq y_1 \leq \ldots \leq y_{n-1} \leq x}{\idotsint} \ dy_1 \cdots dy_{n-1} \right)f(t) \ dt & \text{ renaming variables.}
\end{align*}
So, we will be done if we can show that
$$\underset{ t \leq y_1 \leq \ldots \leq y_{n-1} \leq x}{\idotsint} \ dy_1 \cdots dy_{n-1} = \frac{1}{(n-1)!} (x-t)^{n-1}.$$
The above integral is the volume of the $(n-1)$-simplex
$$ \{ (y_1,\ldots,y_{n-1}) : t \leq y_1 \leq \ldots y_{n-1} \leq x\} \subset [t,x]^{n-1}.$$
There is a nice trick for computing this volume geometrically, if you like that sort of thing. Since the volume of the containing cube $[t,x]^{n-1}$ is $(x-t)^{n-1}$, we might ask why the volume of the simplex is the fraction $\frac{1}{(n-1)!}$ of that. The point here is that any permutation of the variables $y_1,\ldots,y_{n-1}$ in the inequality $t \leq y_1 \leq \ldots \leq y_{n-1} \leq x$ yields another simplex with the same volume. Moreover:
$\bullet$ the volume of the overlap of any two of these simplices is zero (they can only touch on the zero-volume subset of $[t,x]^{n-1}$ where at least two of the coordinates are equal), and
$\bullet$ their union is the whole cube $[t,x]^{n-1}$ (any point $(y_1,\ldots,y_{n-1}) \in [t,x]^{n-1}$ must satisfy at least one of the possible chains of inequalities).
Since there are $(n-1)!$ simplices in total, each has volume $\frac{1}{(n-1)!} (x-t)^{n-1}$, as desired. |
If $H$ and $G/H$ are $p$-groups then $G$ is a $p$-group. | The assumption that $G$ is finite is redundant: a $p$-group is a group in which every element has order a (finite) power of $p$ or, equivalently, in which for every element $x$ we have $x^{p^a}=1$ for some $a\ge 0$.
Let $x\in G$; then $(xH)^{p^a}=H$, for some $a$, because $G/H$ is a $p$-group. This means that $x^{p^a}\in H$, but then $(x^{p^a})^{p^b}=1$, for some $b$, because $H$ is a $p$-group. |
Question concerning a faithful module over an Artinian ring | "$\Leftarrow$" If $a\in A$ annihilates $M$, then $a$ annihilates $M^r$. In particular, $a$ annihilates $A$ and hence $a=0$.
"$\Rightarrow$" If $(x_i)_{i\in I}$ is a generating set for $M$, then there is an injective homomorphism $A\to\prod_{i\in I} Ax_i$ given by $a\mapsto(ax_i)_{i\in I}$, and thus we have an exact sequence $$0\to A\stackrel{f}\to M^I.$$ If $|I|<\infty$ we are done. Otherwise, let $p_i:M^I\to M$ be the canonical projections, and $q_i=p_if$. Since $\cap_{i\in I}\ker p_i=0$ we also have $\cap_{i\in I}\ker q_i=0$. Furthermore, since $A$ is artinian there is a finite subset $J\subset I$ such that $\cap_{j\in J}\ker q_j=0$. Thus the map $\oplus_{j\in J} q_j:A\to M^J$ is an embedding.
Edit. This is an alternative proof which works for non-commutative rings as well. For each $x\in M$ let $\phi_x:A\to M$ be the homomorphism defined by $\phi_x(a)=ax$. Since $M$ is faithful we have $\cap_{x\in M}\ker\phi_x=0$, and since $A$ is Artinian there is a finite subset $\{x_1,\dots,x_n\}$ of $M$ such that $\cap_{i=1}^n\ker\phi_{x_i}=0$. Then the map $\oplus_{i=1}^n\phi_{x_i}:A\to M^n$ is an embedding. |
Backus Normal Form and Logic | Let us first write the BNF grammar of all atomic propositional formulas (denoted by $\mathcal{A}$):
\begin{align}
\mathcal{A} ::= A \mid B \mid C \mid \dots
\end{align}
where $A$, $B$, $C$, $\dots$ are defined in the original post. Note that this definition does not require any form of induction, apart from the one used in definition of integer in the original post.
Then, the BNF grammar of all propositional formulas (denoted by $\mathcal{F}, \mathcal{G}$, $\dots$) is the following:
\begin{align}
\mathcal{F}, \mathcal{G} ::= \mathcal{A} \mid \lnot \mathcal{F} \mid (\mathcal{F} \land \mathcal{G}) \mid (\mathcal{F} \lor \mathcal{G}) \mid (\mathcal{F} \to \mathcal{G})
\end{align}
Note that, according this definition, $P \to Q$ is not a propositional formula, because parentheses are missing. The correct propositional formula in this case is $(P \to Q)$.
This massive use of parentheses is required to prevent ambiguous expressions such as $P \land Q \lor R$ from being considered propositional formulas. Indeed, in $P \land Q \lor R$ it is not clear what the principal connective is. Correct propositional formulas are $((P \land Q) \lor R)$ and $(P \land (Q \lor R))$, where no ambiguity arises. |
How do I prove that for every positive integer $n$, there exist $n$ consecutive positive integers, each of which is composite? | To expand on the other solution already given,
Proof.
Assume $m$ and $n$ are positive integers with $2 \le m \le n$. Then
$$
n!=1\cdot2\cdot3\cdot\dotsb\cdot m\cdot\dotsb\cdot n,
$$
which is to say that $m$ is a factor of $n!$. So,
\begin{align}
m+n! &= m+(1\cdot2\cdot\dotsb\cdot m\cdot\dotsb\cdot n) \\
&= m\left(1+\frac{n!}{m}\right).
\end{align}
Remember that $m$ is a factor of $n!$ and so $n!/m$ is still an integer.
So, since $m$ is an integer with $2 \le m \le n$, $m$ divides $m+n!$, and $m+n! > m$, so $m+n!$ is composite. This makes each of the $n-1$ consecutive integers $n!+2, n!+3, \dots, n!+n$ composite. Applying the same argument with $n+1$ in place of $n$ gives the $n$ consecutive composite integers $(n+1)!+2, \dots, (n+1)!+(n+1)$. |
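The construction is concrete enough to run; a short Python sketch producing $n$ consecutive composites as $(n+1)!+2,\dots,(n+1)!+(n+1)$ (using $n+1$ so that exactly $n$ integers are produced):

```python
import math

def consecutive_composites(n):
    # (n+1)! + m is divisible by m for each 2 <= m <= n+1, hence composite.
    f = math.factorial(n + 1)
    return [f + m for m in range(2, n + 2)]

def is_composite(k):
    return k > 1 and any(k % d == 0 for d in range(2, math.isqrt(k) + 1))

run = consecutive_composites(5)   # 5 consecutive composite integers
assert len(run) == 5 and all(is_composite(k) for k in run)
print(run)  # → [722, 723, 724, 725, 726]
```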
Representation of a game with simultaneous movements | An extensive-form game can represent simultaneous moves via the use of imperfect information, that is, the fact that two players move at the same time is captured by one of them (either of them) moving first and subsequently the other player moves without learning what the first player to move did. For example, the following bimatrix game:
can be represented by the following imperfect-information extensive-form game:
In this extensive-form game, player 1 moves and then player 2 moves without knowing which move player 1 chose. Note, we could as well have just had player 2 move first in the extensive-form game, and then have player 1 move second, uninformed about player 2's move. Finally, note that if we dissolve the non-singleton information set, then we have a commitment (aka Stackelberg) game, as follows, where now the order of the players moving does matter.
The pictures in this post were generated with Game Theory Explorer (http://gametheoryexplorer.org). |
Can I place the Rubik's cube pieces in the distorted position I intend to get? | Yes, it's possible. Here's one algorithm (found here): U F' D' R' B L F L2 F' U F2 U' R2 B2 D2 L2 U' B2 D F2 B2. |
First derivative of scalar product | $F'(x)$ is defined as the vector $[\partial F/\partial x_i]$. Take the element-wise partial derivatives and you will see the answer. |
General form for a $3\times 3$ orthogonal projection | Orthogonal projections are unitarily diagonalisable and their eigenvalues are either $1$ or $0$. Therefore, when $n=3$, $P$ must assume one of the following four forms:
$$
0,\ uu^\ast,\ I-uu^\ast,\ I
$$
where $u$ denotes a unit vector. The matrix traces (and also the ranks) of these four forms are $0, 1, 2, 3$ respectively. So, if $P\ne0,\,I$, you may first calculate the trace of $P$ to determine whether $P=uu^\ast$ or $P=I-uu^\ast$. |
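A quick numerical illustration of the rank-one case $P = uu^\ast$ (real vector, so $u^\ast = u^T$); the specific unit vector below is just an example of mine:

```python
# Rank-one orthogonal projection P = u u^T for a unit vector u in R³:
# it is symmetric, idempotent (P² = P), and has trace 1.
u = [1 / 3, 2 / 3, 2 / 3]                     # unit vector: (1² + 2² + 2²) / 9 = 1
P = [[u[i] * u[j] for j in range(3)] for i in range(3)]
P2 = [[sum(P[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
assert all(abs(P2[i][j] - P[i][j]) < 1e-12 for i in range(3) for j in range(3))
assert all(abs(P[i][j] - P[j][i]) < 1e-12 for i in range(3) for j in range(3))
trace = sum(P[i][i] for i in range(3))
print(round(trace, 10))  # → 1.0
```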
proof of the Krull-Akizuki theorem (Matsumura) | Q1. Think about the extreme case $C=\mathbb{Z}^2$, $A=\mathbb{Z}$. In this case, you won't have such a finite sequence of modules. The existence of such a $t$ means $C$ is a torsion module, and being a torsion module ensures the existence of such a finite sequence.
Q2. Finiteness of $B$, $J$ cannot be assumed. I think this is just because $a_0 B \subset J$. |
Area of equilateral triangle inscribed in right triangle | Rotate $\Delta Bps$ clockwise $60^\circ$ about $p$ as shown, to form $\Delta B'pq$.
Using $\Delta B'Cp$, find that $B'C=2\sqrt3~\mathrm{cm}$.
Using Sine Law on $\Delta B'Cq$, find that $\dfrac{B'q}{\sin 15^\circ}=\dfrac{B'C}{\sin 105^\circ}$, so $B'q=(4\sqrt3-6)~\mathrm{cm}$.
Use Pythagoras' theorem on $\Delta B'pq$ to find that $pq=2\sqrt{22-12\sqrt3}~\mathrm{cm}$.
Therefore the area is $\dfrac12a^2\sin60^\circ=(22\sqrt3-36)~\mathrm{cm}^2$. |
basis is minimal spanning set | Let $S$ be your basis. If it were not a minimal spanning set, there would be an $s\in S$ such that $\bigl\langle S\setminus\{s\}\bigr\rangle$ is the whole space. Then $s$ can be written as a linear combination $\alpha_1s_1+\cdots+\alpha_ns_n$ of elements of $S\setminus\{s\}$. That is, we have$$1\cdot s-\alpha_1s_1-\cdots-\alpha_ns_n=0,$$ which is impossible, since $S$ is linearly independent. |
Find number of triangles with integral sides and side lengths ≤ 2n | The number of triangles having at least one side equal to $2n$ is the number of couples $(a,b)$ with $a\ge b$ and $a+b > 2n$. That is,
$$\begin{aligned}
\sum^{2n}_{b=1}\sum^{2n}_{a = \max(b,2n + 1-b)} 1 &= \sum^{2n}_{b=1} (2n+1 - \max(b,2n+1-b))\\
&=\sum^{2n}_{b=n+1} ((2n+1) - b) + \sum^{n}_{b=1} (2n+1 - (2n+1-b))\\
&=\sum^{n}_{c=1}c + \sum^{n}_{b=1} b\hspace{5em}(\text{where } c = 2n+1-b)\\
&= n(n+1)
\end{aligned}$$
In order to argue recursively, you would need to compute $A_{2n-1}$ in similar fashion as well. |
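The closed form $n(n+1)$ is easy to confirm by brute force, counting pairs $(a,b)$ with $1\le b\le a\le 2n$ and $a+b>2n$:

```python
# Triangles with sides (a, b, 2n), a >= b, all sides <= 2n: need a + b > 2n.
# (The other triangle inequalities hold automatically since a, b <= 2n.)
for n in range(1, 40):
    m = 2 * n
    count = sum(1 for b in range(1, m + 1)
                  for a in range(b, m + 1)
                  if a + b > m)
    assert count == n * (n + 1)
print("count is n(n+1) for n = 1..39")
```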
Can this limit be solved with Riemann sum? | $$L=\lim_{n\rightarrow \infty} \lim_{x\rightarrow 0} (\cos x \cos 2x \cos 3x...\cos nx)^{\frac{1}{n^3x^2}}$$
As per @marty cohen, let us use $-y+y^2/2 \le \ln(1-y)\le -y~$ and $(1-t^2/2) \le \cos t \le (1-t^2/2+t^4/24)~$ when $~t~$ and $~y~$ are very small. We get
$$\ln L =\lim_{n\rightarrow \infty} \lim_{x\rightarrow 0} \frac{1}{n^3 x^2} \sum_{r=1}^{n} (\ln \cos rx)=\lim_{n\rightarrow \infty} \lim_{x\rightarrow 0}\frac{1}{n^3 x^2} \sum_{r=1}^n \ln(1-r^2x^2/2).$$ $$\Rightarrow \ln L = \lim_{n\rightarrow \infty} \lim_{x\rightarrow 0} \frac{1}{n^3 x^2} \sum_{r=1}^n (-r^2x^2/2)=\lim_{n\rightarrow \infty} \frac{1}{n}\sum_{r=1}^n -r^2/(2n^2)=
\int_{0}^{1}(-z^2/2) dz=\frac{-1}{6}.$$ So $L=e^{-1/6}.$ |
Other than $\setminus$ and $-$, are there any other notations for the set-theoretic difference of sets? | There's a notation that I've seen in a point-set topology book: $\mathrm{C}_S \,(\mathrm{T})$ is notation for the complement of $\mathrm{T}$ relative to $\mathrm{S}$, or $\mathrm{S} \setminus \mathrm{T}$. See https://proofwiki.org/wiki/Definition:Relative_Complement. |
Meaning of $ \Delta u \in L_{loc}^1(\Omega)$ in the sense of distribution | In distribution theory, any distribution can be differentiated by moving the derivative onto the test function.
So if $u$ is a distribution and $f$ a test function,
$$
\langle D_i u, f \rangle = -\langle u, D_i f\rangle.
$$
This is the definition of $D_i u$. Any locally integrable function $g$ can be interpreted as a distribution via
$$
\langle g, f\rangle = \int g(x) f(x) \, dx
$$
for test functions $f$.
If we compute $\Delta u$ using the distributional derivative, and the resulting distribution agrees with the one coming from an $L^{1}_{loc}$ function, then we can say it is one. |
40% are men, 30% drink only beer, 42% are women and drink not only beer | In all cases we have that since 40% of festival goers are men, we have that 60% of festival goers are women. And since 42% of festival goers are women who don't drink only beer, you end up with 60-12= 18% of festival goers who are women who only drink beer. Note that this gives P(R|W)=30%
But as pointed out in the comments, there are 3 interpretations of the 30% claim:
A. 30% of the men drink only beer.
B. 30% of the festival goers are men who only drink beer
C. 30% of the festival goers drink only beer
So ... let's just work these out individually:
A. 40% of festival goers are men, 30% of whom drink only beer. Therefore, 12% of festival goers are men who drink only beer, and therefore 28% of festival goers are men who don't drink only beer. Note that $P(R|W) = P(R|W')$, so $R$ and $W$ are independent. Here is a picture:
B. 40% of festival goers are men, and 30% of the festival goers are men who only drink beer. That leaves 10% of festival goers who are men who don't drink only beer. Note that $P(R|W')=.75$, i.e. far more of the men drink only beer than the women do. So in this interpretation $R$ and $W$ are not independent. Picture:
C. 30% of the festival goers drink only beer. Since we already saw that 30% of the women drink only beer, that means that also 30% of the men only drink beer, and hence this scenario works out to be the same as scenario A (and thus $R$ are $W$ are independent again): |
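Interpretation A can be tabulated in a few lines; the numbers below are fractions of all festival goers, and the independence claim is $P(R\mid W)=P(R\mid W')$:

```python
men, women = 0.40, 0.60
women_not_only_beer = 0.42
women_only_beer = women - women_not_only_beer       # 18% of festival goers
p_beer_given_woman = women_only_beer / women        # P(R | W) = 0.30

# Interpretation A: 30% *of the men* drink only beer.
men_only_beer = 0.30 * men                          # 12% of festival goers
p_beer_given_man = men_only_beer / men              # P(R | W') = 0.30

assert abs(women_only_beer - 0.18) < 1e-12
assert abs(p_beer_given_woman - p_beer_given_man) < 1e-12   # R, W independent
print(round(men_only_beer + women_only_beer, 10))   # → 0.3: 30% drink only beer overall
```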
Find $\lim\limits_{n\rightarrow\infty}\frac{x_{n}}{n}$ | Here's what I have at the moment:
$x_n \to \infty$
We have $x_{n+1}-x_n = \int_0^{x_n}(f(t)-1)\,dt > (e-2)(x_n-1)$, since $f>e-1$ for $x>1$.
As a result, $x_{n+1}-x_n \to \infty$, and we can apply the Stolz–Cesàro lemma to conclude that $\frac{x_n}{n} \to \infty$
Complex partial fraction | I think:
$$
(n+1)^2 \frac{t^{2n}}{(t^{n+1}-1)(t^{n+1}+1)}
$$
Substitute
$$
u=t^n
$$
to get
$$
(n+1)^2 \frac{u^2}{(u^2-1)^2(u^2+1)^2}
$$
Now first decompose this, and then return to the original variable $t$ and decompose the remaining terms again; or change the variable again as below:
$$
z=u^2
$$
and then do the aforementioned sequential decomposition! |
Finding conditional expected value $E[y|x]$ | Simply integrate the $y$-term from $0$ to $1$.
$$\mathbb{E}[Y \mid X] = \int_{0}^{1}yf_{Y \mid X}(y \mid x)\text{ d}y = \int_{0}^{1}y\left(\dfrac{3y^2}{x^3}\right)\text{ d}y = \dfrac{3}{x^3}\int_{0}^{1}y^3\text{ d}y\text{.}$$ |
Proving $\mathbb{C}[x,y]/(x+2y^2)$ is isomorphic to a subring of $\mathbb{C}[s,t,u]/(st + u^2)$ | Is this a legitimate proof and course of action
Yes, it is legitimate, but I would say it this way: It is a property of polynomial algebras that this grants you a homomorphism $f:\mathbb C[x,y]\to R_2$ given the choices you made.
The first homomorphism theorem says that $\mathbb C[x,y]/\ker(f)\cong \mathrm{Im}(f)\subseteq R_2$.
The only thing left for you to decide is whether or not $\ker(f)=(x+2y^2)$. Certainly by your choice, you have already guaranteed that $(x+2y^2)\subseteq \ker(f)$, but proving the reverse containment would be necessary for complete success. |
Find the asymptotes of $y=2x-\arccos(1/x)$ | As $x$ approaches $\pm\infty$, $\arccos(1/x)$ approaches $\arccos 0 = \pi/2$, so $y$ approaches the line $y=2x-\frac{\pi}{2}$. This suggests that $y=2x-\frac{\pi}{2}$ is an asymptote.
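To check numerically: since $\arccos(1/x)\to\arccos 0=\pi/2$ as $x\to\pm\infty$, the gap between $y$ and the line $2x-\pi/2$ should vanish:

```python
import math

# Numeric check of the slant asymptote: as x -> ±infinity, arccos(1/x)
# tends to arccos(0) = pi/2, so y = 2x - arccos(1/x) approaches 2x - pi/2.
def gap(x):
    return (2 * x - math.acos(1 / x)) - (2 * x - math.pi / 2)

for x in (1e3, 1e6, -1e3, -1e6):
    print(x, gap(x))
```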
Möbius band as line bundle over $S^1$, starting from the cocycles | I take it that $S^1$, for you, is the unit interval with an identification of the two ends, so that $0 \sim 1$.
The intersection of $U_1$ and $U_2$ is then messy, because it has two components, namely $0 < x < 1/2$ and $1/2 < x < 1$. You want to define $g_{12}$ on both parts, and I suggests that you define it on the first part by
$$
g_{12} (x)(y) = y
$$
while on $1/2 < x < 1$ you define it by
$$
g_{12} (x)(y) = 1-y.
$$
The result should THEN be a mobius band. What you've got at this point is a cylinder (as you've verified).
I'm going to slightly edit your definition, and say that I want
$g_{12}(x)(y) = -y$ for $1/2 < x < 1$ (and similarly for the Mobius band), because this leaves a "centerline" at $y = 0$, while in your definition, the centerline is at $y = 1/2$; if you were defining a finite mobius band, using $[0, 1] \times [0, 1]$, that might make sense (although I might argue for $[0, 1] \times [-1, 1]$ instead!). But since you're using all of $\mathbb R$ as a fiber, it'd be nice if the transition maps were, fiberwise, vector-space isomorphisms, which requires sending $0$ to $0$.
Now my space $X$ consists of equivalence classes that look like one of four things:
$$
L_{a, s} = \{(a, s, 1), (a, s, 2) \}
$$
(where I've added an integer "identifier" to each patch), and where $s$ is any real number, or
$$
R_{a, s} = \{(a, s, 1), (a, -s, 2) \}
$$
The first is for $0 < a < 1/2$, and the second for $ 1/2 < a < 1$. Then there are two other kinds of equivalence classes:
$$
Z_s = \{(0, s, 1), (1, -s, 2) \}
$$
and
$$
H_s = \{ (\frac{1}{2}, s, 1), (\frac{1}{2}, -s, 2) \}.
$$
(The "Z" is mnemonic for "zero or one" and the "H" is for "halfway".)
In the Mobius band $M$, there are two kinds of equivalence classes:
$$
U_{a, s} = \{(a, s)\}
$$
(the "U" is for "usual") and
$$
P_{s} = \{(0, s), (1, -s)\}
$$
where the "P" is for "excePtional". :)
Now all I have to do is tell you how my homeomorphism $F$ will send each class in $X$ to a class in $M$. In the following, each line of the definition holds for all $s \in \mathbb R$. So:
\begin{align}
F(L_{a,s}) &= U_{a, s} & 0 < a < \frac{1}{2} \\
F(R_{a,s}) &= U_{a, -s} & \frac{1}{2} < a < 1 \\
F(Z_{s}) &= P_s \\
F(H_{s}) &= U_{ \frac{1}{2}, s}.
\end{align}
It's not hard to see that $F$ is bijective, and I leave that to you; the only question is continuity.
Continuity except at $a = 0, \frac{1}{2}, 1$ seems pretty clear, so I won't discuss that. Let's check continuity at $a = \frac{1}{2}$. To do so, I'm going to look at a "vertical" curve in $X$ and a "horizontal" one, and check that each maps to a nice continuous curve in $M$. That's not really a proof, but I'm hoping it'll suffice to convince you.
The vertical curve is
$$
\gamma: I \to X : t \mapsto H_t.
$$
Under the map $F$, this becomes
$$
\alpha = F\circ \gamma : I \to M : t \mapsto F(H_t) = U_{ \frac{1}{2}, t}
$$
which looks perfectly nice (as it should: $F$ is a vector-space isomorphism on the fiber!).
What about a "horizontal" curve in $X$? Let's look at one at "height" $1/3$:
$$
\beta: [\frac{1}{4}, \frac{3}{4}] \to X : t \mapsto \begin{cases}
L_{t, \frac{1}{3}} & t < \frac{1}{2} \\
H_\frac{1}{3} & t = \frac{1}{2} \\
R_{t, \frac{1}{3}} & t > \frac{1}{2}
\end{cases}.
$$
It's pretty clear that $\beta$ is a continuous curve, as is $F\circ \beta$. So that part works out OK as well.
Finally, there's continuity at $0$. That's the tricky one. Once again, I'll use $\gamma$ and $\beta$ to denote the vertical and horizontal curves. In fact, I won't even bother with $\gamma$ -- you can give that a shot yourself. But $\beta$ is more interesting. I'll define it for $-1 < t < 1$, to make life a little easier for myself:
$$
\beta: [-1, 1] \to X: t \mapsto
\begin{cases}
Z_\frac{1}{3} & t = 0 \\
L_{\frac{t}{2}, \frac{1}{3}} & t > 0\\
R_{1+\frac{t}{2}, -\frac{1}{3}} & t < 0
\end{cases}
$$
If you look at $F\circ \beta$, you'll see that it's a curve that runs along at height $1/3$ in the mobius band to the right of $x = 0$, and at height $-1/3$ to the left of $x = 1$, which makes it continuous in the Mobius band. Of course, you also need to check that it's continuous in $X$, but I think that's not TOO difficult.
I hope (a) that this explicit description is of some help to you, and (b) that you never have to do such a thing explicitly again. :) |
Analytic solution of the equation $c\int_0^t x^{a-1}e^{-x}dx + (c+e^t)e^{-t}t^{a-1} = 0$ | I think I got it.
Hint: if you differentiate the left-hand side, the derivative is a lot simpler. Then you can probably reintegrate, and fit the integration constant, thus getting a simpler equivalent equation to solve.
Let $n$ be an integer, prove that $\lfloor n/2 \rfloor \geq (n-1)/2$ | When $n$ is even, $\lfloor n/2 \rfloor = n/2$.
When $n$ is odd, $n/2 = (n-1)/2 +1/2$.
Since $n-1$ is even, $(n-1)/2$ is an integer, so $\lfloor n/2 \rfloor = (n-1)/2$. In both cases $\lfloor n/2 \rfloor \geq (n-1)/2$.
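Python's floor division `//` makes this easy to check exhaustively on a range of integers:

```python
# Exhaustive check of ⌊n/2⌋ ≥ (n-1)/2 on a range of integers.
# Python's // is floor division, so n // 2 == ⌊n/2⌋ even for negative n.
for n in range(-1000, 1001):
    assert n // 2 >= (n - 1) / 2
print("verified for n in [-1000, 1000]")
```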
Initial value problem general form | You write "eigenvalue" in the singular, but you (correctly) give two eigenvalues; then you give only a single eigenvector. The two different eigenvalues have different eigenvectors. You can only satisfy the initial conditions by using both eigenvectors. |
Values of $a_1$ and $a$ for which $a_{n+1}=a_n^2+(1+2a)a_n+a^2$ is convergent | It makes the proof neater to first simplify the algebraic forms.
Let $x_n=a-a_n$, then $x_{n+1}=x_n(1-x_n)$. If $x_n$ converges, to $ X$ say, then $X=X(1-X)$ and so $X=0$.
If $x_n<0$
Then $x_{n+1}=x_n(1-x_n)<x_n$ and so $x_n$ does not converge if $x_1<0$.
If $x_n>1$
Then $x_{n+1}=x_n(1-x_n)<0$ and we are in the above case. So $x_n$ does not converge if $x_1>1$.
If $1\ge x_n\ge 0$
Then $x_{n+1}=x_n(1-x_n)$ and so $x_n\ge x_{n+1}\ge 0$. Thus we have a non-increasing sequence of terms bounded below. Hence the sequence converges. |
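A quick numeric illustration of the three cases (the starting values below are illustrative):

```python
# Iterate x_{n+1} = x_n (1 - x_n), illustrating the cases above.
def iterate(x, steps):
    for _ in range(steps):
        x = x * (1 - x)
    return x

# x_1 in [0, 1]: non-increasing and bounded below, so convergent (to 0).
print(iterate(0.5, 100_000))   # small and non-negative

# x_1 < 0: strictly decreasing, so no convergence.
print(iterate(-0.1, 20))       # already far below -1
```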
Continuous function is constant $\mu$-almost everywhere, then it has to be constant on the topological support of $\mu$ | Assume that $x\in\text{supp}(\mu)$ with $\phi(x)>1$.
Then - because $\phi$ is continuous - some $U\in\tau$ will exist with $x\in U$ and $\phi(y)>1$ for every $y\in U$.
But $\mu(U)>0$ because $x\in\text{supp}(\mu)$ and a contradiction is found with $\phi=1$ $\mu$-a.e..
This also works if $\phi(x)<1$ and the conclusion $\phi(x)=1$ is justified. |
Cauchy with terms of differing signs after some term $\implies$ convergence | Since $\{x_n\}$ is Cauchy it is also convergent.
You can then use the fact that convergent subsequences of a convergent sequence converge to the same limit and consider the subsequence of all positive or all negative terms. |
Extension Theorem of twice continuously differentiable functions? | I think your condition is not enough. Initially you only require the function to have a bounded second derivative on a compact subset, and since differentiability doesn't imply continuous differentiability, the function may not be continuously differentiable on the compact set, so it can't be extended to $\mathbb{R}^3$.
An example is
$$f(x) = x^{4} \cdot \sin\left(\frac{1}{x}\right)$$
$$f(0) = 0$$
The $2$nd derivative of $f(x)$ exists for each value of $x$ and is bounded in a neighborhood of $0$, but it is not continuous at $x = 0$.
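A numeric illustration of the discontinuity; the formula for $f''$ below (for $x \neq 0$) is computed by hand, and $f''(0)=0$ from the difference quotient:

```python
import math

# f(x) = x^4 sin(1/x), f(0) = 0.  Differentiating twice by hand, for x != 0:
#   f''(x) = 12 x^2 sin(1/x) - 6 x cos(1/x) - sin(1/x),    and f''(0) = 0.
def f2(x):
    return 12*x*x*math.sin(1/x) - 6*x*math.cos(1/x) - math.sin(1/x)

# Along x_k = 1/((2k + 1/2)·pi) we have sin(1/x_k) = 1, so f''(x_k) ≈ -1
# even though x_k -> 0 and f''(0) = 0: the second derivative is bounded
# near 0 but not continuous there.
for k in (10, 100, 1000):
    x = 1 / ((2*k + 0.5) * math.pi)
    print(x, f2(x))
```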
Any integer can be written as $x^2+4y^2$ | Well, the most elementary way is to use the fact that if all prime divisors of $n$ are of the form $4k+1$ then one can represent $n=a^2+b^2,$ which is a direct consequence of Fermat's theorem on two squares. Since $n$ is odd, one has that either $a$ or $b$ is even. Hence, $n=a^2+4b_1^2.$
So we are left to show that all prime divisors of $n$ are of the form $4k+1.$
Indeed, since there is an $x$ such that $x^2 \equiv -4 \pmod p$, we can raise both sides to the power $(p-1)/2$ and apply Fermat's little theorem to get $(-1)^{(p-1)/2} \equiv 1 \pmod p$. Thus, $(p-1)/2$ has to be even and the result follows.
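A brute-force check of the conclusion for small primes (illustrative only):

```python
# Every prime p ≡ 1 (mod 4) below 500 is of the form x^2 + 4y^2:
# take a representation p = a^2 + b^2 and note one of a, b must be even.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_x2_plus_4y2(p):
    return any(x * x + 4 * y * y == p
               for x in range(1, int(p ** 0.5) + 1)
               for y in range(1, int(p ** 0.5) + 1))

checked = [p for p in range(5, 500) if is_prime(p) and p % 4 == 1]
assert all(is_x2_plus_4y2(p) for p in checked)
print(f"verified for {len(checked)} primes")
```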
Complex Fourier Series of $|x|$ | Hint: Integration by parts $$
\int_a^b \underbrace{x}_{=u}\, \underbrace{e^{\lambda x}}_{=v'} \,dx
= \bigg(\underbrace{x}_{=u}\,\underbrace{\frac{e^{\lambda x}}{\lambda}}_{=v}\bigg)\Bigg|_a^b - \int_a^b \underbrace{1}_{u'}\,\underbrace{\frac{e^{\lambda x}}{\lambda}}_{=v} \,dx
$$ |
Proposition 24 of Euclid's first book | We have already established $\angle DGF = \angle DFG$.
Note that $\angle DGF = \angle DGE + \angle EGF$. Therefore $\angle DGF > \angle EGF$ since $\angle DGE > 0$, and using the equality, $\angle DFG > \angle EGF$. |
For finite dimensional vector spaces show that $(V \times W)' \cong V' \times W'$ | The claim follows immediately by counting dimensions, because $$\dim (V \times W)'=\dim(V \times W)=\dim V + \dim W=\dim(V' \times W').$$ Of course, you would like a 'natural' isomorphism. The hint already gives one, namely $T:V'\times W' \rightarrow (V \times W)'$. It is easily checked that $T$ is linear. Now we only need to check injectivity, because then the image of $T$ has the dimension of its domain, which is the same as the dimension of its codomain (here we use the fact that the dimensions of $V$ and $W$ are finite!). So, suppose $T(f,g)=0$ for some linear functionals $f \in V'$, $g \in W'$. Then $f(x)+g(y)=0$ for all choices of $(x,y) \in V \times W$. So in particular $f(x)=0$ for all $x \in V$, so $f=0$. Similarly, $g=0$, so $(f,g)$ is the zero vector in $V' \times W'$, and $\ker T$ is trivial, so $T$ is injective.
One-point compactification of manifold | For 1, yes you have an $E$ around $\infty$ with $E$ homeomorphic to a ball, but you don't know anything about $\overline{E}$. In fact, $\overline{E}$ is potentially all of $M^\ast$!
On the other hand, inside of $E$ is another open set $E'$, also homeomorphic to a ball, but whose closure is homeomorphic to $\overline{\mathbb{B}^n}$. Use $U = M^\ast \setminus \overline{E'}$.
For 2, since $M\setminus U$ is homeomorphic to $\mathbb{R}^n\setminus \mathbb{B}^n$, it follows that $M\setminus\overline{U}$ is homeomorphic to $\mathbb{R}^n\setminus \overline{\mathbb{B}^n}$. Call such a homeomorphism $g$.
By using the inversion map $f$, one sees that $\mathbb{R}^n\setminus\overline{\mathbb{B}^n}$ is homeomorphic to $\mathbb{B}^n \setminus\{\vec{0}\}$.
Composing $g$ and $f$, we have a homeomorphism between $M\setminus\overline{U}$ and $\mathbb{B}^n\setminus \{\vec{0}\}$. Try to prove that we can use these to find a homeomorphism between $M^\ast \setminus \overline{U}$ and $\mathbb{B}^n$. |
Solving systems of linear equations when matrix of coefficients is singular | Even when the system of equations is singular, you can find a least-squares solution by solving the system $A^{T}Ax=A^{T}b$. |
Finding the point at which two objects will collide (analogous to aiming an arrow at a moving target) | Let your target be at position $(x_{0t}+v_{xt}t,y_{0t}+v_{yt}t)$ and the arrow be $(x_{0a}+v_{xa}t,y_{0a}+v_{ya}t)=(x_{0a}+v_{a}\cos\theta t,y_{0a}+v_{a}\sin \theta t)$ To meet, they need to be at the same place, giving two equations in two unknowns $(t,\theta)$: $$x_{0t}+v_{xt}t=x_{0a}+v_{a}\cos\theta t \\ y_{0t}+v_{yt}t=y_{0a}+v_{a}\sin \theta t$$ |
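Squaring and adding the two equations to eliminate $\theta$ gives $|d+v_t t|^2 = v_a^2 t^2$, a quadratic in $t$, where $d$ is the target's initial offset from the arrow; once $t$ is known, $\theta$ follows from the direction of the interception point. A Python sketch (names and test values are illustrative):

```python
import math

# Solve |d + v_target * t|^2 = v_a^2 * t^2 for t, then recover theta.
def intercept(x0t, y0t, vxt, vyt, x0a, y0a, va):
    dx, dy = x0t - x0a, y0t - y0a
    a = vxt ** 2 + vyt ** 2 - va ** 2
    b = 2 * (dx * vxt + dy * vyt)
    c = dx ** 2 + dy ** 2
    if abs(a) < 1e-12:                      # arrow and target equally fast
        ts = [-c / b] if b < 0 else []
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                     # arrow too slow to intercept
        r = math.sqrt(disc)
        ts = [t for t in ((-b - r) / (2 * a), (-b + r) / (2 * a)) if t > 0]
    if not ts:
        return None
    t = min(ts)                             # earliest interception time
    theta = math.atan2(dy + vyt * t, dx + vxt * t)
    return t, theta

# Target at (10, 0) moving straight up at speed 1; arrow from origin, speed 2.
t, theta = intercept(10, 0, 0, 1, 0, 0, 2)
print(t, theta)
```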
Implicit declaration of function 'exp' | This is presumably no longer needed. Your differentiation was largely right, with a little slip. We start from
$$8x^5 e^{3y} + 11 y^4 e^{2x} = 17$$
and differentiate, using the Product Rule and the Chain Rule. We get
$$24x^5e^{3y}\frac{dy}{dx}+40x^4e^{3y}+22y^4e^{2x}+44y^3e^{2x}\frac{dy}{dx}=0.$$
Rewrite as
$$\left(24x^5e^{3y}+44y^3e^{2x}\right)\frac{dy}{dx}=-\left(40x^4e^{3y}+22y^4e^{2x}\right).$$
Solve for $\frac{dy}{dx}$. We get
$$\frac{dy}{dx}=-\frac{40x^4e^{3y}+22y^4e^{2x}}{24x^5e^{3y}+44y^3e^{2x}}.$$
We could divide top and bottom by $2$. There is no other obvious simplification. |
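As a sanity check, one can solve the curve equation numerically for $y$ near a convenient point and compare a finite-difference slope with the formula. This is a sketch; the bisection bracket $[0, 0.3]$ was found by inspection at $x=1$.

```python
import math

# Numeric check of dy/dx on the curve 8x^5 e^{3y} + 11 y^4 e^{2x} = 17.
F = lambda x, y: 8 * x**5 * math.exp(3 * y) + 11 * y**4 * math.exp(2 * x)

def solve_y(x, lo=0.0, hi=0.3):
    # Bisection: F(x, .) is increasing on [0, 0.3], with F(x, 0) < 17 and
    # F(x, 0.3) > 17 for x near 1.
    for _ in range(80):
        mid = (lo + hi) / 2
        if F(x, mid) < 17:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = 1.0
y = solve_y(x)
formula = -(40 * x**4 * math.exp(3*y) + 22 * y**4 * math.exp(2*x)) \
          / (24 * x**5 * math.exp(3*y) + 44 * y**3 * math.exp(2*x))
h = 1e-5
finite_diff = (solve_y(x + h) - solve_y(x - h)) / (2 * h)
print(formula, finite_diff)
```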
How to show that the third power of an ideal in a Dedekind domain is principal | The ring $\mathbb Z[\alpha]$ has a norm given by
$$N(x+y\alpha) = (x+y\alpha)(x+y\overline\alpha) = x^2+xy+6y^2.$$
Using the fact that $I=(2,\alpha)$ is a prime ideal lying above $2$, we find that the ideal norm $N(I) = 2$, so $N(I^3) = 8$. Hence, if $I^3$ is to be principal, it should be generated by an element of norm $8$. An obvious candidate for this is $(2-\alpha)$.
With the number theoretic motivation over with, we can now work purely manually. We have
$$(2,\alpha)^3 = (4,2\alpha,\alpha^2)(2,\alpha) = (8,4\alpha,2\alpha^2,\alpha^3).$$
Using the fact that $\alpha^2-\alpha + 6 = 0$, observe that
$$2-\alpha = \alpha^3+4\alpha + 8,$$
so $(2-\alpha)\subset(2,\alpha)^3$.
We can deduce equality either by an argument using norms, or by observing that
$$8 = -(\alpha^2-\alpha -2) = (\alpha+1)(2-\alpha)\\4\alpha = 8 + 4(\alpha-2) = (\alpha-3)(2-\alpha)\\
2\alpha^2=2\alpha-12=2(\alpha-2)-8=(-\alpha-3)(2-\alpha)\\
\alpha^3 = \alpha^2-6\alpha=\alpha(\alpha-2)-4\alpha=(3-2\alpha)(2-\alpha)
$$
so $(2,\alpha)^3\subset (2-\alpha)$. |
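These identities are easy to verify mechanically. Here is a small Python sketch that represents elements of $\mathbb Z[\alpha]$ as pairs $(a,b)$ meaning $a+b\alpha$ and multiplies using $\alpha^2=\alpha-6$:

```python
# Verify the displayed identities in Z[alpha], where alpha^2 = alpha - 6.
def mul(p, q):
    (a, b), (c, d) = p, q
    # (a + b·alpha)(c + d·alpha) = ac + (ad + bc)·alpha + bd·(alpha - 6)
    return (a * c - 6 * b * d, a * d + b * c + b * d)

alpha = (0, 1)
two_minus_alpha = (2, -1)
alpha3 = mul(alpha, mul(alpha, alpha))

assert mul((1, 1), two_minus_alpha) == (8, 0)        # (alpha+1)(2-alpha) = 8
assert mul((-3, 1), two_minus_alpha) == (0, 4)       # (alpha-3)(2-alpha) = 4 alpha
assert mul((-3, -1), two_minus_alpha) == (-12, 2)    # (-alpha-3)(2-alpha) = 2 alpha^2
assert mul((3, -2), two_minus_alpha) == alpha3       # (3-2alpha)(2-alpha) = alpha^3
# and 2 - alpha = alpha^3 + 4 alpha + 8:
assert (alpha3[0] + 8, alpha3[1] + 4) == two_minus_alpha
print("all identities verified")
```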
base change of an equivalence relation of fppf sheaves | Suppose given a parallel pair $X \rightrightarrows Y$ in $\mathbf{Sh}$. Let $Y \to \tilde{Z}$ be the coequaliser in $\mathbf{Psh}$ and let $Z$ be the sheaf associated with $\tilde{Z}$; then $Y \to Z$ is the coequaliser in $\mathbf{Sh}$. Now, consider a morphism $Z' \to Z$ and define $X' = Z' \times_Z X$, $Y' = Z' \times_Z Y$, and $\tilde{Z}' = Z' \times_Z \tilde{Z}$; note that we get a parallel pair $X' \rightrightarrows Y'$ in $\mathbf{Sh}$. Since coequalisers in $\mathbf{Set}$ are preserved by base change, $\tilde{Z}'$ is the coequaliser of $X' \rightrightarrows Y'$. Moreover, the associated sheaf functor preserves finite limits and all colimits, so $Z'$ is the sheaf associated with $\tilde{Z}'$ and is the coequaliser of $X' \rightrightarrows Y'$. Hence coequalisers are preserved by base change in $\mathbf{Sh}$.
(It may be helpful to draw some diagrams.) |
Divisible abelian $q$-group of finite rank | The Prüfer rank of an abelian group is the cardinality of a maximal linearly independent subset over $\mathbb{Z}$.
So if you got your Prüfer group $\mathbb{Z}(p^\infty)$, the maximum size of a linearly independent subset is going to be $1$. Using the representation of a Prüfer $p$-group as $\mathbb{Z}(p^\infty)=\{e^{\frac{2\pi i m}{p^n}}:m,n\in \mathbb{Z}\}$, we take the set $\{e^{2 \pi i a p^{-n}},e^{2 \pi i b p^{-m}}\}$ and try to solve $$\left(e^{2 \pi i a p^{-n}}\right)^{x_1}\left(e^{2 \pi i b p^{-m}}\right)^{x_2}=e^{2 i \pi \left(a p^{-n}x_1+b p^{-m} x_2\right)}=1$$
So we need to pick nontrivial $x_1$ and $x_2$ to satisfy $a p^{-n}x_1+b p^{-m} x_2=0$, which we can certainly do. Thus $\mathbb{Z}(p^\infty)$ has rank $1$. If you take a finite direct product of Prüfer groups, you'll still have finite rank because rank is additive; each component will just contribute $1$ to the total rank of the group. |
Why do Mathematica and Wolfram|Alpha say $\Gamma(-\infty)=0$? | If $x$ is a continuous variable, then $\lim_{x \to -\infty} \Gamma(x) $ does not exist.
But if $-\infty$ is approached in a discrete manner, then the limit might exist.
For example, let's replace $x$ with $N+ \frac{1}{2}$, where $N$ is an integer.
The reflection formula for the gamma function states that $$\Gamma(x) = \frac{\pi \csc(\pi x)}{\Gamma(1-x)} $$ for all $x \notin \mathbb{Z}. $
Therefore, $$ \lim_{N \to - \infty}\Gamma \left(N + \frac{1}{2} \right) = \lim_{N \to -\infty} \frac{\pi \csc \left(\pi \left(N + \frac{1}{2} \right) \right)}{\Gamma \left(\frac{1}{2}-N \right)} = \lim_{N \to -\infty} \frac{\pi \sec \left(\pi N \right)}{\Gamma \left(\frac{1}{2}-N \right)} =0 $$ since $\sec (\pi N)$ is just bouncing between $1$ and $-1$, while $\Gamma \left(\frac{1}{2}-N \right)$ is going to infinity.
EDIT:
In fact, as long as you stay greater than some fixed distance away from the negative integers, the limit will seemingly be zero. |
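For instance, using Python's `math.gamma` (which accepts negative non-integer arguments), the half-integer values shrink rapidly toward zero:

```python
import math

# Gamma(N + 1/2) for integers N -> -infinity tends to 0: by the reflection
# formula, |Gamma(N + 1/2)| = pi / Gamma(1/2 - N), and the denominator blows up.
for N in (-5, -10, -20):
    print(N, math.gamma(N + 0.5))
```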
Question about generating function in an article | I'll take the following problem as an example of how to use generating functions in recurrences:
Let $p_{n+1}=3p_{n}-p_{n-1}$ with $p_0=5,p_1=7$. What is the general form of $p_n$?
Define the following generating function:
$$P(x)=\sum_{n=0}^\infty p_nx^n= p_0+p_1x+p_2x^2+p_3x^3+\cdots.$$
Notice that
$$3P(x)-xP(x)=3p_0+(3p_1-p_0)x+(3p_2-p_1)x^2+(3p_3-p_2)x^3+\cdots.$$
We can simplify this using the recurrence to the following:
$$(3-x)P(x)=3p_0+\color{Blue}{p_2x+p_3x^2+p_4x^3+\cdots}.$$
Now the blue part looks familiar. In fact, it's just the usual expansion for $P(x)$, but with the first two terms cut off and then the powers of $x$ reduced one, so this is:
$$(3-x)P(x)=3p_0+\frac{P(x)-(p_0+p_1x)}{x}$$
$$\implies [x(3-x)-1]P(x)=(3p_0-p_1)x-p_0.$$
Remark: It's unclear to me whether the AoPS article forgot an initial term or wanted us to subsume the initial $3p_0$ into the fraction in order to form the remainder polynomial $R(x)$. The important fact to take away is that we use the recurrence to simplify the right side involving only $P(x)$ and other basic operations with $x$, specifically as something plus $P(x)$ minus a cut-away divided by a power of $x$. Plug in the initial values for $p_0,p_1$ and divide for
$$P(x)=\frac{8x-5}{-x^2+3x-1}.$$
Now we attempt partial fraction decomposition. The quadratic formula tells us that the roots of the denominator polynomial $x^2-3x+1$ are $u=(3+\sqrt{5})/2$ and $v=(3-\sqrt{5})/2$. We then write:
$$-P(x)=\frac{8x-5}{x^2-3x+1}=\frac{A}{x-u}+\frac{B}{x-v}.$$
Combine the fractions on the right side and focus on numerators to get:
$$8x-5=(A+B)x-(vA+uB)$$
$$\implies \begin{pmatrix}1&1\\v&u\end{pmatrix}\begin{pmatrix}A\\B\end{pmatrix}=\begin{pmatrix}8\\5\end{pmatrix}$$
$$\implies\begin{pmatrix}A\\B\end{pmatrix}=\frac{1}{u-v}\begin{pmatrix}u&-1\\-v&1\end{pmatrix}\begin{pmatrix}8\\5\end{pmatrix}$$
$$=\frac{1}{\sqrt{5}}\begin{pmatrix}+7+4\sqrt{5}\\-7+4\sqrt{5}\end{pmatrix}.$$
Finally, we can expand $P(x)$ using the geometric series formula to get:
$$P(x)=-\frac{A}{x-u}-\frac{B}{x-v}$$
$$=\frac{A/u}{1-x/u}+\frac{B/v}{1-x/v}$$
$$=\sum_{n=0}^\infty \left(Au^{-1-n}+Bv^{-1-n}\right)x^n$$
$$=\sum_{n=0}^\infty\left[ (4+7/\sqrt{5})\left(\frac{3-\sqrt{5}}{2}\right)^{n+1}+(4-7/\sqrt{5})\left(\frac{3+\sqrt{5}}{2}\right)^{n+1}\right]x^n.$$
Note that we used $u^{-1}=v$ and $v^{-1}=u$. The expression inside the square brackets is therefore the formula of $p_n$. Also keep in mind this example is a bit more complicated than usual. Finally, remember that this is an example of a linear recurrence derived using the method described on the AoPS article; it does not apply to the original non$\text{}$linear problem you posted. |
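If you want to check the closed form, here is a quick comparison against the recurrence (a sketch; the surd expressions are the ones derived above):

```python
# Check the closed form against the recurrence p_{n+1} = 3 p_n - p_{n-1},
# with p_0 = 5, p_1 = 7.
s5 = 5 ** 0.5

def closed(n):
    return ((4 + 7 / s5) * ((3 - s5) / 2) ** (n + 1)
            + (4 - 7 / s5) * ((3 + s5) / 2) ** (n + 1))

p = [5, 7]
for n in range(1, 25):
    p.append(3 * p[n] - p[n - 1])

for n in range(25):
    assert abs(closed(n) - p[n]) < 1e-6 * max(1, p[n])
print(p[:6])
```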
Is the following described $p(x), q(x)$ the same interpolation polynomial? | Your intuition is correct. In general, the interpolating polynomial of degree at most $n$ is unique.
Suppose that $p_1$ and $p_2$ are polynomials of degree at most $n$ which interpolate a function $f$ at the same $n+1$ distinct nodes. Then consider $d = p_1 - p_2$. This is a polynomial of degree at most $n$ which has at least $n+1$ zeros. By the fundamental theorem of algebra $d \equiv 0$ and $p_1 = p_2$.
Now in your special case you have established that $p_n$ and $q_n$ of degree at most $n$ both interpolate $f$ at $n+1$ nodes. It follows that they are both identical and that $p_n$ is an even function, i.e.
\begin{equation}
p_n(x) = p_n(-x).
\end{equation}
Even functions are said to have even parity, while odd functions are said to have odd parity. Differentiation switches the parity of a function. Ex. if $g$ is differentiable and odd, i.e.
\begin{equation}
g(-x) = - g(x)
\end{equation}
then the derivative is even. This is seen by considering the appropriate difference quotients. We have
\begin{equation}
\frac{g(x+h) - g(x)}{h} \rightarrow g'(x), \quad h \rightarrow 0, \quad h \not = 0,
\end{equation}
because $g$ is differentiable at $x$. Simultaneously, we have
\begin{multline}
\frac{g(x+h) - g(x)}{h} = \frac{-g(-x-h) + g(-x)}{h} \\ = \frac{g(-x+(-h)) - g(-x)}{(-h)} \rightarrow g'(-x), \quad (-h) \rightarrow 0, \quad (-h) \not = 0,
\end{multline}
because $g$ is odd and differentiable at $-x$. It follows that
\begin{equation}
g'(x) = g'(-x)
\end{equation}
or equivalently that $g'$ is even.
Now suppose the polynomial
\begin{equation}
p(x) = \sum_{k=0}^n a_k x^k
\end{equation}
is an even function. We claim that all the odd numbered coefficients, i.e. $a_{2j+1}$ are zero. Since $p$ is even, $p'$ is odd, hence $p'(0) = 0$. However, $p'$ is simply
\begin{equation}
p'(x) = \sum_{k=1}^n k a_k x^{k-1} = a_1 + 2 a_2 x + 3 a_3 x^2 + \dotsb + n a_n x^{n-1}.
\end{equation}
In particular, $p'(0) = a_1 = 0$. Since $p'$ is odd, $p''$ is again even and we are all set to continue an inductive process. |
Family of Graphs, Planarity. | It is important that you understand what it really means for any two $n$-tuples to be adjacent in this graph. Once this becomes clear, each part, especially iii) becomes clear. What does it mean for $\sum_{i=1}^n (u_i-v_i)$ to be divisible by $2$? Notice that if $u_i = v_i$ ($u$ and $v$ are the same in the $i$th position), then $u_i-v_i = 0$, that is the $i$th position contributes $0$ to the sum. So the only contributions made to the sum are from the $i$th positions that are different between $u$ and $v$. If this sum is divisible by $2$, in other words even, then we can say that two vertices are adjacent if and only if they differ in an even number of positions.
As you've commented, $|V_n| = 2^n$, as the vertex set is composed of $n$-tuples whose entries are $0$ or $1$, and there are $2^n$ many such tuples ($2$ choices per entry).
Here are the graphs $G_1 = 2K_1, G_2 = 2K_2, G_3 = 2K_4$ from left to right. Small examples should lead you to the light.
From 2. can you see what is going on here? Especially focus on $G_3$. What do you notice about the tuples in each copy of $K_4$. If you still don't understand why the edges connect the way they do, nothing beats good old brute force. That is, list your tuples and compute the sum $\sum_{i=1}^n (u_i-v_i)$ for each $u,v$.
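In case the brute force helps, here it is in Python for $n=3$, using the adjacency rule above:

```python
from itertools import product

# Brute force for n = 3: vertices are 0/1 triples, and u ~ v iff
# sum_i (u_i - v_i) is even, i.e. u and v differ in an even number
# of positions.
n = 3
V = list(product((0, 1), repeat=n))
E = [(u, v) for u in V for v in V
     if u < v and sum(a - b for a, b in zip(u, v)) % 2 == 0]

even = [v for v in V if sum(v) % 2 == 0]   # even number of 1's
odd  = [v for v in V if sum(v) % 2 == 1]
# Two components, each a clique on 2^(n-1) = 4 vertices:
print(len(even), len(odd), len(E))         # 4 4 12 = 2 * C(4,2)
```

Every edge joins two tuples of the same parity, so the graph splits into the two cliques described in the spoiler below.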
I will include guidance in a spoiler for the rest.
We refer to $n$-tuples (vertices) as even (odd) if they contain an even (odd) number of $1$'s.
Show that any two vertices $x$ and $y$ are adjacent if and only if they have the same parity.
Conclude from the above that $G_n$ has two cliques, each of order $2^{n-1}$. The vertex set of one is composed of even $n$-tuples and the other is composed of odd $n$-tuples.
To see that $G_n$ is not planar for $n \ge 4$, we can either use Kuratowski's Theorem or an edge counting argument (via Euler's formula, remember the $3n-6$ upper bound.) Essentially since the components are two cliques of order $2^{n-1}$, for $n\ge 4$ we will obtain both a $K_5$ and $K_{3,3}$ subdivision (trivially, since we have them as subgraphs.) So, instead if you independently prove that $K_5$ or $K_{3,3}$ are not planar, then that is sufficient as opposed to Kuratowski's Theorem.
If you require clarification feel free to comment. |
What does $φ(a) = a$ mean in this statement? | $F[x]$ is just the ring of polynomials with coefficients from the field $F$. So if $\phi(a)=a$ for all $a \in F$, it is just saying that all the constants in $F[x]$ get mapped to themselves under the mapping $\phi$ i.e. polynomials of the form $f(x)=a$ for $a \in F$ get mapped to themselves.
To prove the statement, first suppose that $f(x)$ is irreducible in $F[x]$. Then suppose $\phi(f(x)) = g(x)h(x)$ for some $g(x),h(x) \in F[x]$. Now,
$$\phi(f(x)) = g(x)h(x) \implies f(x)= \phi^{-1}(g(x)h(x)) = \phi^{-1}(g(x))\phi^{-1}(h(x))$$
But $f(x)$ is irreducible so either $\phi^{-1}(g(x))=a$ or $\phi^{-1}(h(x))=a$ for some nonzero $a \in F$ and hence either $g(x)$ or $h(x)$ is a unit, so $\phi(f(x))$ is irreducible.
For the other direction, suppose $\phi(f(x))$ is irreducible. Then if $f(x)$ were reducible, we could write $f(x)=g(x)h(x)$ for some non-constant polynomials in $g(x),h(x) \in F[x]$. But then we would have $$\phi(f(x))=\phi(g(x)h(x))= \phi(g(x)) \phi(h(x))$$
Using the hint you are given, $\phi(g(x))$ and $\phi(h(x))$ are non-constant since $g(x)$ and $h(x)$ are assumed to be non-constant and hence $\phi(f(x))$ is reducible which is a contradiction. Thus $f(x)$ must be irreducible. |
If $f(x) \leq M$ for all $x \in\mathbb{R}$ and $\lim_{x\to c} f(x) = L$, show that $L \leq M$. | Assume towards a contradiction $L > M$. $f(x)\rightarrow L$, therefore $\forall \epsilon>0$, $\exists \delta$ s.t. $|x-c|<\delta\implies |f(x)-L|<\epsilon$.
Specifically, choose $\epsilon=(L-M)/2$. Therefore $\exists \delta$ s.t. $|x-c|<\delta\implies |f(x)-L|<(L-M)/2\implies f(x)>\frac{L+M}{2}>M$, which is a contradiction. Therefore we must have $L\leq M$.
Question on operator norm $\vert \vert \cdot \vert \vert$ | Elaborating the hint of Reveillark: For $x \in X$ with $\Vert x \Vert = 1$ you clearly have $$\Vert Tx \Vert \leq \sup_{\Vert x \Vert \leq 1} \Vert Tx \Vert = \Vert T \Vert = \Vert T \Vert \Vert x \Vert. $$
For $x = 0$ the inequality holds trivially. So let $x \neq 0$. Then by the above inequality you have for $y = x / \Vert x \Vert$ that
$$\frac{1}{\Vert x \Vert} \Vert T x \Vert = \Vert T y \Vert \leq \Vert T \Vert \Vert y \Vert = \Vert T \Vert \frac{\Vert x \Vert}{\Vert x \Vert} = \Vert T \Vert$$
since $\Vert y \Vert = 1$. Multiplying the inequality with $\Vert x \Vert$ yields that $\Vert T x \Vert \leq \Vert T \Vert \Vert x \Vert$ for all $x \in X$.
Now suppose that $(T_n)_{n \in \mathbb N}$ is a Cauchy sequence in $\mathcal L(X, Y)$. Then for all $\varepsilon > 0$ there is $N \in \mathbb N$ such that $\Vert T_n - T_m \Vert < \varepsilon$ for all $n, m \geq N$. Let $x \in X$. Then the above inequality yields
$$\Vert T_n x - T_m x\Vert = \Vert (T_n - T_m)x \Vert \leq \Vert T_n - T_m \Vert \Vert x \Vert < \varepsilon \Vert x \Vert$$
for all $n, m \geq N$ which shows that $(T_n x)_{n \in \mathbb N}$ is indeed a Cauchy sequence in $Y$ for any $x \in X$. I hope the arguments got clear :)
Introduction to Set Theory, Hrbacek and Jech exercises 3.5.7 and 3.5.8 | You have the right idea for $3.5.7$ but either your notation or your reasoning isn't quite right. I wouldn't use $a_i$ to denote an arbitrary element of $R$ because elements of $R$ are $n$-tuples and we've been using $a_i$ to denote a component of such an $n$-tuple. Here's how I'd write the proof:
For each $0 \leq i \leq n-1$, define:
$$A_i=\{a_i|~\exists x_0, x_1, \ldots x_{i-1}, x_{i+1}, \ldots x_{n-2}, x_{n-1}~(\langle x_0, x_1, \ldots x_{i-1}, a_i, x_{i+1}, \ldots x_{n-2}, x_{n-1} \rangle \in R) \}.$$
Define
$$A=\bigcup_{i=0}^{n-1} A_i.$$
Then $R \subseteq A^n$.
For $3.5.8$, clearly the relation defined by $\langle a_0, a_1, \ldots a_{n-1}, a_n \rangle \in R \iff F(a_0, a_1, \ldots a_{n-1})=a_n $ is an $n$-ary relation on $A$ that satisfies the required condition, so all that's left is to show that it's unique; in other words, that any other $n$-ary relation $S$ on $A$ that satisfies this condition is in fact equal to $R$.
But if $\langle a_0, a_1, \ldots a_{n-1}, a_n \rangle \in S$, then the $\Rightarrow$ direction of our implication on $S$ tells us $F(a_0, \ldots a_{n-1}) = a_n$, so $\langle a_0, a_1, \ldots a_{n-1}, a_n \rangle \in R$ by the definition of $R$, and $S \subseteq R$.
Conversely, if $\langle a_0, a_1, \ldots a_{n-1}, a_n \rangle \in R,$ then by the definition of $R$, $F(a_0, \ldots a_{n-1}) = a_n$ and the $\Leftarrow$ implication of our condition on $S$ then tells us $\langle a_0, a_1, \ldots a_{n-1}, a_n \rangle \in S,$ so $R \subseteq S$, equality follows, and we have proved $R$ is unique.
Which comes first?, the distribution $P_X$ or the measure $\text{Prob}$? | In practice the underlying probability space is abstract, and has the features that we need to define the random variables that we would like to define. It is a separate mathematical task, which most users of probability theory do not concern themselves with, to show that a suitable probability space for a given situation actually exists. For example, it's not immediately obvious how to construct a probability space on which a sequence of independent continuously distributed variables can be defined, but it can be done, and in practice most people take this fact for granted.
Only very occasionally do we need to grapple with the possibility that no such space exists (for example, this derails any attempt to classically define white noise). |
CF grammar for a Language. | The following context-free grammar appears to generate $L$:
$$\begin{align*}
S&\to AASB\mid AAAc\\
A&\to bA\mid cA\mid Ab\mid Ac\mid a\\
B&\to aB\mid cB\mid Ba\mid Bc\mid b
\end{align*}$$
$A$ and $B$ generate the regular languages corresponding to the regular expressions $(b+c)^*a(b+c)^*$ and $(a+c)^*b(a+c)^*$, respectively. Any derivation is going to begin
$$S\Rightarrow^n A^{2n}SB^n\Rightarrow A^{2n+3}cB^n\;;$$
each $A$ will generate a single $a$, possibly surrounded by a ‘halo’ of $b$s and $c$s, and each $B$ will generate a single $b$, possibly surrounded by a ‘halo’ of $a$s and $c$s. |
Prove the following relation between side lengths | Note that $\tan \angle AEB=\frac{x_1+x_2}{x_5}$
and $\angle AEB = \angle AED +\color{blue}{\angle BED}=\angle AED +\color{blue}{\angle EBF}$ given that lines $DE$ and $BF$ are parallel.
Using the sum of angles formula for the tangent function we have
$$\tan \angle AEB=\frac{\tan\angle AED +\tan\angle EBF}{1-\tan\angle AED\tan\angle EBF}$$
where $\tan \angle AED=\frac{x_1}{x_5}$ and $\tan \angle EBF=\frac{x_4}{x_3}$.
Substituting results in
$$\frac{x_1+x_2}{x_5}=\frac{\frac{x_1}{x_5}+\frac{x_4}{x_3}}{1-\frac{x_1}{x_5}\frac{x_4}{x_3}}=\frac{x_1x_3+x_4x_5}{x_3x_5-x_1x_4}$$ |
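A quick numeric check of the angle-addition step, with arbitrarily chosen positive lengths (the values below are illustrative, not taken from the figure):

```python
import math

# Verify tan(atan(x1/x5) + atan(x4/x3)) = (x1 x3 + x4 x5)/(x3 x5 - x1 x4).
x1, x3, x4, x5 = 1.0, 3.0, 1.0, 2.0
lhs = math.tan(math.atan(x1 / x5) + math.atan(x4 / x3))
rhs = (x1 * x3 + x4 * x5) / (x3 * x5 - x1 * x4)
print(lhs, rhs)
```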
Region of convergence of complex series $\sum \frac{(-1)^n}{z+n}$. | Couple together adjacent terms:
$$-\frac1{z+2m-1}+\frac1{z+2m}=-\frac1{(z+2m-1)(z+2m)}.$$
We get a new series that converges absolutely, everywhere, save of
course where the denominators vanish. |
Question about Fixed Point Theorem Hypotheses | The function $X \to X : x \mapsto \frac{1}{1+\varepsilon} f(x)$ maps $B_1$ to a compact subset of $B_1$. So by the standard fixed point theorem, there exists an $a \in B_1$ such that $\frac{1}{1 + \varepsilon}f(a) = a$, or $f(a) = (1+ \varepsilon)a$. Hence,
$\| f(a) - a \| = \| \varepsilon a \| \leqslant \varepsilon$, since $\| a \| \leqslant 1$. $\quad {\small \square} $
In fact, this result is tight, in the sense that there are functions $f$ satisfying $\|f(x) - x\| \geqslant \varepsilon$ for all $x \in B_1$. [E.g., consider the constant function mapping all points to a single point of norm $1 + \varepsilon$.] In particular, $f$ is not guaranteed to have a fixed point inside $B_1$.
When do the most people come and go inside the stadium? | Hint: On a graph where the y-axis is the rate ($f'(t)$) and the x-axis is time ($t$), the area under the curve represents the accumulation.
In your example, the area under the curve on any interval $[t_1,t_2]$ represents the number of people that enter the stadium during that interval. |
Galois group solvable but $f$ not solvable. | Suppose $f$ is solvable by radicals. Then we should have $$F\subset F(\alpha)\subset K\subset \overline{F}$$ where $K=F(\alpha_1,\cdots,\alpha_m)$ and $\alpha_1^{n_1}\in F, \alpha_i^{n_i}\in F(\alpha_1,\cdots,\alpha_{i-1})$.
First, we may assume $p\nmid n_i$ unless $p = n_i$. For if $n_i = pr$ with $r>1$, then we can first adjoin an $r$-th root and then a $p$-th root.
Secondly, if $p\nmid n_i, p\nmid \{F(\alpha_1,\cdots,\alpha_{i-1},\alpha_i):F(\alpha_1,\cdots,\alpha_{i-1})\}$. If $p = n_i$, then $X^p-\alpha_i^p$ is irreducible provided that $\alpha_i\notin F(\alpha_1,\cdots,\alpha_{i-1})$ and $X^p-\alpha_i^p = (X-\alpha_i)^p$. Thus $1 = \{F(\alpha_1,\cdots,\alpha_{i-1},\alpha_i):F(\alpha_1,\cdots,\alpha_{i-1})\}$.
Therefore, $\{K:F\}=\prod_{i=1}^{m}\{F(\alpha_1,\cdots,\alpha_{i-1},\alpha_i):F(\alpha_1,\cdots,\alpha_{i-1})\}$ is not divisible by $p$. But we have $p =|G(E\ /\ F)|=\{E:F\}\mid \{K:F\}$, which is a contradiction. |
Prove that $\bigcup_{n \in \Bbb{N}} [1/n,1] = (0,1]$. | Suppose $x$ lies in the big union; then we can select a positive integer $k$ so that $1/k \leq x \leq 1$. Since $\frac{1}{k} > 0$, it follows that $x \in (0,1]$. On the other hand, if $x \in (0,1]$, then the archimedean property yields a positive integer $k$ with $1/k \leq x$, so $x \in [1/k,1]$. |
Fundamental group of $\mathbb{R}^2 \setminus (\mathbb{Z} \times \{0\})$ | Here is a solution using the notion of the fundamental groupoid on a set $C$ of base points, which I introduced in 1967, and is discussed at this mathoverflow question. This method is developed in the book now available as Topology and Groupoids (T&G) (first edition published as "Elements of Modern Topology", 1968).
Let $X$ be the space in the question. We can write $X$ as the union of two open sets $U,V$ where $U$ is the top half of $X$ plus a little bit and $V$ is the bottom half plus a little bit, so that $U,V$ are contractible and $W= U \cap V$ is a countable union of contractible pieces. Let $C= \{(n+1/2,0): n \in \mathbb Z\}$. By 6.7.2 of T&G, the following square is a pushout of groupoids.
$$ \begin{matrix}
\pi_1(W,C) & \to & \pi_1(V,C)\\
\downarrow&& \downarrow\\
\pi_1(U,C) & \to & \pi_1(X,C).
\end{matrix}$$
It is worth noting that if you use the proof by verifying the universal property, then you do not need to know in advance that pushouts of groupoids exist, nor how to compute them.
We now use the general groupoid methods developed in Philip Higgins' Categories and Groupoids and in T&G Chapter 9 to say that as $\pi_1(W,C)$ is a discrete groupoid, and $\pi_1(U,C),\pi_1(V,C)$ are connected groupoids with trivial vertex groups, then $\pi_1(X,C)$ is a connected, free groupoid with vertex groups as required.
An advantage of this method is it applies to other analogous situations. For example, let $Y$ be the (non Hausdorff) space obtained from $\mathbb R \times \{1,-1\}$ by identifying all points $(x,1), (x,-1)$ for $ x \in \mathbb R$ except for $x \in \mathbb Z$. Then we get the same kind of conclusion. Here is a picture of part of this identification: the red and blue lines are identified, apart from certain points, to give the black line:
The use of groupoids adds a spatial component to group theory, which has proved powerful. It allows for example the construction of a new groupoid say $U_f(G)$ for each groupoid $G$ and each function $f: Ob(G) \to Y$ to a set $Y$ such that the following diagram, in which $D(Y)$ is the discrete groupoid on the set $Y$, is a pushout of groupoids:
$$\begin{matrix}
D(Ob(G)) & \xrightarrow{f} & D(Y) \\
\downarrow& & \downarrow \\
G & \to & U_f(G).
\end{matrix}$$
This construction includes that of free groups, free groupoids, and free products of groups.
One also needs to develop some general theorems, for example that if $G$ is a free groupoid, so also is $U_f(G)$, and that the vertex groups of a free groupoid are free groups.
Further, colimits of groupoids are constructed by using colimits of sets. This process is described in general in Appendix B of the book partially titled Nonabelian Algebraic Topology. |
Inequality for sides and height of right angle triangle | I think we get $$a+b<c+h.$$
Since both sides are positive, this is equivalent to what we obtain by squaring:
$$a^2+b^2+2ab<c^2+h^2+2hc.$$
Since $a^2+b^2=c^2$ (Pythagoras), this reduces to
$$2ab<h^2+2hc,$$
and because $ab=ch$ (two expressions for twice the area), this is the same as
$$2hc<h^2+2hc,$$
i.e. $$h^2>0,$$ which is true. |
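All the steps are reversible, so a quick random spot-check of the final inequality is easy; this script (purely illustrative, my own construction) verifies $a+b<c+h$ for a thousand random right triangles:

```python
import math
import random

random.seed(0)
for _ in range(1_000):
    # random legs of a right triangle
    a = random.uniform(0.1, 10)
    b = random.uniform(0.1, 10)
    c = math.hypot(a, b)   # hypotenuse: a^2 + b^2 = c^2
    h = a * b / c          # altitude to the hypotenuse, from ab = ch
    assert a + b < c + h
print("a + b < c + h held in every trial")
```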
Norm of an operator with $L^2$ | You can make all your inequalities into equalities if you take $x=\frac1{\sqrt2}\,(1_{[-1,0]}-1_{[0,1]})$. As $\|x\|=1$ and $|f(x)|=\sqrt2$, you get that $\|f\|\geq\sqrt2$. And you have already proven the reverse inequality.
As this $x$ is not continuous, we need to approximate. Namely, we need
$$
x_n(t)=\begin{cases}
1,&\ -1\leq t\leq -1/n\\ \ \\
-nt,&\ -1/n\leq t\leq 1/n\\ \ \\
-1,&\ 1/n\leq t\leq 1
\end{cases}
$$ |
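Numerically one can watch the quotient $|f(x_n)|/\|x_n\|$ creep up to $\sqrt2$. Here I assume the functional is $f(x)=\int_{-1}^0 x(t)\,dt-\int_0^1 x(t)\,dt$ (this is consistent with $|f(x)|=\sqrt2$ for the step function above, but it is an assumption, since the chunk does not define $f$); the midpoint Riemann sums are only for illustration:

```python
import math

def x_n(t, n):
    # the continuous approximation defined above
    if t <= -1 / n:
        return 1.0
    if t >= 1 / n:
        return -1.0
    return -n * t

def quotient(n, steps=20_000):
    # midpoint Riemann sums for f(x_n) and the L^2 norm of x_n on [-1, 1]
    h = 2.0 / steps
    fv = nrm2 = 0.0
    for i in range(steps):
        t = -1.0 + (i + 0.5) * h
        v = x_n(t, n)
        fv += (v if t < 0 else -v) * h   # assumed f: integral over [-1,0] minus [0,1]
        nrm2 += v * v * h
    return fv / math.sqrt(nrm2)

print(quotient(5), quotient(50))  # increasing towards sqrt(2) ≈ 1.41421
```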
What is the formal way to denote a fraction of a fraction? | The "of" can be symbolically replaced by multiplication.
$\dfrac{x}{y} $ of $\dfrac{1}{z} $ , like half of one third is one sixth. |
Number of times we have to compose a permutation in order to have exactly k fixed points | You need to look at the cycle lengths and find a set of them that adds up to the number of fixed points required. Here you have cycles of $2,3,5$, so you need the $2$ and $5$ to be back at start and the $3$ not to be. The first time the $2$ and $5$ will be back at start together is at $\operatorname{lcm} (2,5)=10$. As $3$ does not divide into $10$, it will not be back at start, so $10$ works. Any multiple of $10$ that is not a multiple of $3$ will also work. |
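The cycle-length criterion is easy to check by machine: a point in a cycle of length $L$ is fixed by the $t$-th power exactly when $L \mid t$. A small sketch (the example permutation with cycle lengths $2,3,5$ and target $k=7$ is assumed from the answer):

```python
from math import lcm

def fixed_after(cycles, t):
    # a cycle of length L contributes L fixed points exactly when L divides t
    return sum(L for L in cycles if t % L == 0)

cycles = [2, 3, 5]   # a permutation of 10 points: one 2-, one 3- and one 5-cycle
t = next(t for t in range(1, 100) if fixed_after(cycles, t) == 7)
print(t)             # 10 = lcm(2, 5); 3 does not divide 10, so exactly 7 points are fixed
```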
Are generators of elliptic curves unique? | Group of $\mathbb{Q}$-rational points on your elliptic curve $E$ is abstractly isomorphic with $A = \mathbb{Z}\oplus \mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z}$. Your question really concerns minimal generating sets of $A$ and has nothing to do with elliptic curves in this formulation. There are many minimal generating sets of $\mathbb{Z}\oplus \mathbb{Z}$ and hence many minimal generating sets of $A$. |
Developing $\int_0^1 \frac{1+x}{1-x^3} \ln\left(\frac{1}{x}\right)dx$ into series | Make a substitution $x=e^{-y}$:
$$\begin{align}\int_0^1 dx\: \frac{1+x}{1-x^3} \ln\left(\frac{1}{x}\right) &= \int_0^{\infty} dy \: y \,e^{-y} \frac{1+e^{-y}}{1-e^{-3 y}}\\ &= \int_0^{\infty} dy \: y (e^{-y}+e^{-2 y}) \sum_{k=0}^{\infty} e^{-3 k y}\\&=\sum_{k=0}^{\infty} \int_0^{\infty} dy \: y (e^{-(3 k+1)y}+e^{-(3 k+2) y})\\&= \sum_{k=0}^{\infty} \left [ \frac{1}{(3 k+1)^2}+\frac{1}{(3 k+2)^2}\right]\\ &=\sum_{k=0}^{\infty} \left [ \frac{1}{( k+1)^2}-\frac{1}{(3 k+3)^2}\right]\\&=\frac{\pi^2}{6} - \frac{1}{9}\frac{\pi^2}{6}\\&=\frac{4 \pi^2}{27}\end{align}$$
In general,
$$\int_0^1 dx \frac{1-x^{n-1}}{(1-x)(1-x^n)} \left[\ln{\left(\frac{1}{x}\right)}\right]^{m-1} = \left(1-\frac{1}{n^m}\right) \Gamma(m) \zeta(m)$$ |
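A quick numerical check of the value $4\pi^2/27$ via the partial sums of the series derived above (the truncation point is arbitrary; the tail of $\sum 1/n^2$ decays like $1/n$):

```python
import math

# partial sum of sum_k [1/(3k+1)^2 + 1/(3k+2)^2]
K = 200_000
s = sum(1 / (3 * k + 1) ** 2 + 1 / (3 * k + 2) ** 2 for k in range(K))

target = 4 * math.pi ** 2 / 27
print(s, target)  # agree to several decimal places at this truncation
```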
For how many functions $f$ is $f(x)^{2}=x^{2}$? | You are correct. In fact there are $2^{2^\omega}=2^\mathfrak{c}$ such functions if they need not be continuous. For each $A\subseteq \mathbb{R}$ define $f_A$ as follows: $$f_A(x)=\begin{cases}x,&x\in A\\-x,&x\notin A\end{cases}$$ Then $f_A(x)^2=x^2$ for all $x\in\mathbb{R}$. It’s easy to see that if $A\setminus\{0\}\ne B\setminus\{0\}$, then $f_A\ne f_B$, so we have one such function for every subset of $\mathbb{R}\setminus\{0\}$. Finally, every function with the desired property is such a function: if $f(x)^2=x^2$ for all $x\in \mathbb{R}$, then $f=f_A$, where $A =$ $\{x\in\mathbb{R}:f(x)=x\}$.
If you limit yourself to piecewise continuous functions, there are only $2^\omega=\mathfrak{c}$ of them: there are $(2^\omega)^\omega=2^\omega$ ways to choose the partition points between the pieces, $2^\omega$ ways to choose $x$ or $-x$ on each interval, and $2^\omega$ ways to choose the values at the endpoints, for a total of $(2^\omega)^3=2^\omega$ functions. |
Difference between two integrals with PDFs | The integrals have the same boundaries so we can write them as one integral:
$$\int\limits_{E'}^{\bar{v}}(v-c)g(v)\mathrm{d}v-\int\limits_{E'}^{\bar{v}}(v-E')g(v)\mathrm{d}v
=\int\limits_{E'}^{\bar{v}}(v-c)g(v)-(v-E')g(v)\,\mathrm{d}v$$
Factoring $g$ out yields
$$\int\limits_{E'}^{\bar{v}}\Big[(v-c)-(v-E')\Big]g(v)\,\mathrm{d}v=\int\limits_{E'}^{\bar{v}}\Big[E'-c\Big]g(v)\,\mathrm{d}v.$$ |
Differentiability versus analyticity domains for complex functions | Maybe it helps to know the original meanings of the words analytic and differentiable. Let $U\subseteq\mathbb C$ be open. A function $f:U\to\mathbb C$ is called complex differentiable in $z_0\in U$ if the limit of the difference quotient at $z_0$ exists (so the classical idea behind differentiability). It is called analytic in $z_0$ if there exists an open neighborhood of $z_0$ on which $f(z)$ is identical to a power series centered at $z_0$. That is,
$$f(z)=\sum_{k=0}^\infty a_k(z-z_0)^k$$
for all $z$ in said open neighborhood. Now it turns out that if $f$ is analytic in $z_0$ according to this definition, then it is automatically analytic on the entire neighborhood in which it agrees with the power series. So if $f$ is analytic in a point, we can always find an open set on which it is also analytic. So in practice, we're always interested in analyticity on open sets.
You may ask what this definition of analyticity has to do with the one you were provided. It turns out that if a complex function is complex differentiable on an open set, it automatically has a power series representation. And any function which has a power series representation is automatically complex differentiable. So analyticity (the power series version) on an open set is equivalent to complex differentiability on that set. And many authors now use analytic to denote complex differentiability on an open set, knowing that it's equivalent to the original meaning.
So to answer your specific questions: Your example function is in fact analytic on $\mathbb C-\{\mathrm i,-\mathrm i\}$. But I could imagine a function which is only complex differentiable in a single point, like you mentioned yourself. For instance, $z\mapsto\vert z\vert^2$ is only complex differentiable on $\{0\}$, so it's not analytic. |
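The failure of complex differentiability away from the origin is easy to see numerically: the difference quotient of $z\mapsto\vert z\vert^2$ has different limits along the real and imaginary directions (a small illustrative sketch, not part of the answer):

```python
def dq(z0, h):
    # difference quotient of f(z) = |z|^2 at z0 with complex increment h
    f = lambda z: abs(z) ** 2
    return (f(z0 + h) - f(z0)) / h

# at z0 = 0 the quotient tends to 0 from every direction...
print(dq(0j, 1e-6), dq(0j, 1e-6j))
# ...but at z0 = 1 + i the real and imaginary directions disagree (2 vs -2i)
print(dq(1 + 1j, 1e-6), dq(1 + 1j, 1e-6j))
```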
A question related to the Minkowski sum | A) I answer first to your final question "what is the intuition behind Mikowski sum ?"
The shapes I will consider are convex polygons, with notations
$$ A = A_1A_2\cdots A_m \ \ \ \text{and} \ \ \ \ B= B_1B_2\cdots B_n$$
(we assume a cyclic numbering convention $A_{m+1}=A_1$, etc.)
I am aware of 3 different ways to consider Minkowski sum $ A \oplus B$ :
1) (fig. 1 : left) by considering the sum of all vectors
$$\vec{OC_{p,q}}:=\vec{OA_p}+\vec{OB_q}$$ for a certain origin point $O$ and then taking the convex hull (featured in green) of all points $C_{p,q}$ represented by little stars (the convex hull of a set of points is the smallest convex set containing all these points).
Remark The choice of origin point $O$ is unimportant : changing it into $O'$ results in a simple translation by vector $\vec{OO'}$ meaning that we have the same result ($A \oplus B$ is defined up to a translation).
Fig. 1 : Two ways to build the Minkowski sum.
2) (Fig. 1 : right) By defining it as the (green) shape "halfway between $A$ and $B$" (up to a proper normalization) in an operation called "morphing between shapes" with notation $\tfrac12(A+B)$.
3) (Fig. 2) still another way, by working on borders : consider the two families of vectors $A_{p+1}-A_p=\vec{A_pA_{p+1}}$ and $\vec{B_qB_{q+1}}$, merge them into a big bunch of vectors, call them $\vec{v_1}, \vec{v_2}, \cdots \vec{v_{m+n}}$, then do the inverse operation by constructing the shape whose "borders" are
$$\vec{v_1}, \vec{v_1}+\vec{v_{2}}, \vec{v_1}+\vec{v_2}+\vec{v_{3}}, \cdots$$
You get in this way the borderline of Minkowski sum of $A$ and $B$.
Remark : the first operation is akin to a derivation, the second one, its inverse, to an integration.
Fig. 2 : Constructing $A \oplus B$ in a third way.
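This third construction is essentially the classical edge-merge algorithm for convex polygons. A sketch in Python (the vertex lists are assumed to be in counter-clockwise order, and the function name is my own):

```python
import math

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons (CCW vertex lists) by merging
    the two bunches of edge vectors by polar angle and 'integrating' back."""
    def edges(V):
        n = len(V)
        return [(V[(i + 1) % n][0] - V[i][0], V[(i + 1) % n][1] - V[i][1])
                for i in range(n)]

    # merge the two bunches of edge vectors, sorted by angle in [0, 2*pi)
    es = sorted(edges(P) + edges(Q),
                key=lambda e: math.atan2(e[1], e[0]) % (2 * math.pi))

    # start from the sum of the two bottom-most (then left-most) vertices
    px, py = min(P, key=lambda p: (p[1], p[0]))
    qx, qy = min(Q, key=lambda p: (p[1], p[0]))
    x, y = px + qx, py + qy

    pts = []
    for dx, dy in es:          # partial sums v1, v1+v2, ... trace the border
        pts.append((x, y))
        x, y = x + dx, y + dy
    return pts

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(minkowski_sum(square, square))  # a 2x2 square (edge midpoints retained)
```

Collinear edge vectors are kept as separate border points here; a production version would merge them.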
B) (Fig. 3) Now, for the issue about the Minkowski sum of a polygon and a disk $rB$ with radius $r$, where $B$ is the unit disk (a disk can be considered as the limit of regular polygons with $n$ sides when $n \to \infty$) and the formula of its area :
$$area( A \oplus rB)=area(A)+Lr+\pi r^2\tag{1}$$
($A \oplus rB$ is called the dilated set in the domain of "mathematical morphology"/computational geometry). The visualization of the decomposition into elementary areas presented in the following form is a model of a graphical proof... I hope this figure helps you fully understand the rounded parts.
Fig. 3 : A visual proof for formula (1)
Remark : for more, here is an excellent source. |
Non Surjective Transformation with dense range | The range contains $-e_1, e_1-e_2,\ldots$ and, since it is a subspace, it contains each $e_n$. Hence the range is dense. Now consider $(\frac 1 n)$. Note that $(B-I)(a_1,a_2,\ldots)=(a_2-a_1,a_3-a_2,\ldots)$. Suppose $(B-I)(a_n)=(\frac 1 n)$. From the equations $a_{k+1}-a_k=\frac 1 k$ we get $a_n =a_1+1+\frac 1 2 +\cdots+\frac 1 {n-1}$. Clearly this is a contradiction, because the LHS $\to 0$ while the RHS $\to \infty$. Hence $(\frac 1 n)$ is not in the range of $B-I$. |
How many bacteria will there be after $24$ hours if the population doubles after every passing minute? | "After" 24 hours would mean after the last minute that makes up the 24 hours, so I think that you are correct.
I suspect the "others" might be using a geometric sequence to model the growth and think that they are looking for the value of the $1440$th term, which would be $2^{1439}$, but that wouldn't be correct. If we do want to use a geometric sequence to model the growth and we want term 1 to map to the population after minute 1, term 2, after minute 2, etc, then notice that the first term is no longer one bacterium, but two (i.e. the population after one minute). In that case, the term associated with the population after the last minute would be $2\cdot2^{1439}$, which is the same as your answer. |
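The two ways of counting can be reconciled directly with exact integer arithmetic (a throwaway check of my own):

```python
# doubling every minute, starting from a single bacterium
pop = 1
for _ in range(1440):          # 24 hours = 1440 minutes
    pop *= 2
assert pop == 2 ** 1440

# the geometric-sequence view: term k is the population after minute k,
# so the first term is 2 and the 1440th term is 2 * 2**1439
assert 2 * 2 ** 1439 == 2 ** 1440
print(pop.bit_length())        # 1441 bits: the number has about 434 decimal digits
```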
Absolute Continuity, Lipschitz | To expand on my comment:
One approach is just to invoke the Weierstrass approximation theorem. This works even if $f$ is merely continuous, and it gives a $g$ which is a polynomial, which is drastically stronger than just being Lipschitz or even $C^\infty$.
You could also give a more direct proof. An absolutely continuous function has a derivative which is $L^1$; a Lipschitz function has a derivative which is bounded. You can approximate $L^1$ functions by bounded functions. Now, to get from the derivative back to the function, what could you do...?
Indeed, integrate. So find a bounded measurable function $h$ which is close to $f'$ in $L^1$ norm. What can you say about the difference between the integrals (from $a$ to $x$) of $f'$ and $h$?
So we can get $\int_a^x f'(t) dt - \int_a^x h(t)dt$ to be small, right? Or in other words, we can get $f(x) - f(a) - \int_a^x h(t)dt$ to be small. So what if we set $g(x) = f(a) + \int_a^x h(t)dt$? |
If $a \in X$ we can say "$X$ contains $a$". Is there a corresponding verb for the relation $A \subset X$? | $A \subset X$ translates to A is subset of X. You can say that A is included in B, as @Mauro said in the comment.
Moreover you can say, every element in set A is present in set X. This is one of the most mathematically true statement you can say for $A \subset X$.
Mathematically representing, $\forall x $ | $ x \in A \implies x \in X $ |
Blow up in holomorphic dynamics | Blow-ups are used in the construction of parabolic curves as described in Marco Abate's paper on diagonalisation of non-diagonalisable discrete holomorphic dynamical systems.
Brochero Martínez, Cano and López-Hernanz's paper gives a concise proof of existence in the $\mathbb C^2$-case. |
Nth term of the series where sign toggles after a triangular number | Using the formula for the triangular numbers we note that if
$m \in I = [2n^2+n+1,2n^2+3n+1]$ for some $n=0,1,2,\ldots$ then $f(m)=m,$ otherwise
$f(m)=-m.$
The only possible choice of $n$ is $N = \lfloor \sqrt{m/2} \rfloor,$ since if we write
$l(n) = 2n^2+n+1$ and $u(n) = 2n^2+3n+1$, and $\sqrt{m/2} = N + r,$ where $N$ is an integer and $0 \le r < 1$ (so that $m = 2N^2+4Nr+2r^2$), we have
$$u(N - 1) = 2N^2 - N < 2N^2+4Nr+2r^2 = m,$$
so $m$ lies above the interval for every $n \le N-1.$ Similarly
$$l(N + 1) = 2N^2+5N+3 > m,$$
so $m$ lies below the interval for every $n \ge N+1.$ Hence we have
$$f(m) = m \textrm{ when } m \in [2t^2+t+1,2t^2+3t+1] \textrm{ for }
t = \lfloor \sqrt{m/2} \rfloor,$$
otherwise $f(m)=-m.$ |
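A direct cross-check of this closed form against the toggle-after-each-triangular-number construction (for a positive integer $m$, $\lfloor\sqrt{m/2}\rfloor = \operatorname{isqrt}(\lfloor m/2\rfloor)$, since no integer square lies in $(\lfloor m/2\rfloor,\, m/2]$):

```python
from math import isqrt

def f(m):
    t = isqrt(m // 2)                 # t = floor(sqrt(m/2)) for integer m
    return m if 2*t*t + t + 1 <= m <= 2*t*t + 3*t + 1 else -m

# direct construction: toggle the sign after every triangular number 1, 3, 6, 10, ...
sign, nxt, j = 1, 1, 1
for m in range(1, 5000):
    assert f(m) == sign * m
    if m == nxt:                      # m is triangular: toggle for the next term
        sign = -sign
        j += 1
        nxt += j                      # T_j = T_{j-1} + j
print([f(m) for m in range(1, 11)])   # [1, -2, -3, 4, 5, 6, -7, -8, -9, -10]
```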
Integration by parts (fluids question) | Let $dv=\phi\phi' \; dx$ and use $\frac{d}{dx}(\phi^2)=2\phi\phi'$. |
Proof involving primes of the form $5k+1$ and $5k+4$ | Using Legendre Symbol and Quadratic Reciprocity Theorem,
$p\mid(n^2-5)\iff n^2\equiv5\pmod p\implies \left(\frac5p\right)=1$
$$\left(\frac p5\right)\left(\frac5p\right)=(-1)^{\frac{p-1}2\frac{5-1}2}=1$$ for odd prime $p$
So, $\left(\frac5p\right)=\left(\frac p5\right)$
Now, $(5k\pm1)^2\equiv1\pmod 5, (5k\pm2)^2\equiv4\pmod 5$
So, $\left(\frac p5\right)=1\iff p\equiv1,4\pmod 5$ |
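The conclusion is easy to verify by brute force for small primes (a throwaway script of my own; $p=2$ and $p=5$ are excluded, since the argument assumes an odd prime $p\neq5$):

```python
def is_prime(k):
    # trial division, fine for this small range
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

for p in range(3, 500):
    if not is_prime(p) or p == 5:
        continue
    # does p divide n^2 - 5 for some n, i.e. is 5 a square mod p?
    has_root = any((n * n - 5) % p == 0 for n in range(p))
    assert has_root == (p % 5 in (1, 4))
print("checked all odd primes p != 5 below 500")
```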