Finding $\sum_{n=1}^∞ \frac{1}{n(n+1)e^n}$ | $\frac{1}{n(n+1)}=\frac{1}{n}-\frac{1}{n+1}$. So you want:
$$\sum_{n=1}^{\infty} \frac{z^n}{n(n+1)} = \sum_{n} \frac{z^n}{n} - \frac{1}{z}\sum \frac{z^{n+1}}{n+1}.$$
Knowing $$f(z)=-\log(1-z)=\sum_{n=1}^{\infty} \frac{z^n}{n},$$ then $$\sum_{n=1}^{\infty} \frac{z^n}{n(n+1)} = f(z)-\frac{1}{z}(f(z)-z) = \left(1-\frac{1}{z}\right)f(z) + 1$$
For $z=e^{-1}$, you then get: $(e-1)\log(1-1/e) + 1.$ |
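As a quick numerical sanity check (my own addition, not part of the original answer), the partial sums can be compared against the closed form:

```python
import math

# Partial sum of sum_{n>=1} 1/(n(n+1)e^n) versus the closed form
# (e - 1)*log(1 - 1/e) + 1 derived above.
partial = sum(1.0 / (n * (n + 1) * math.e**n) for n in range(1, 60))
closed = (math.e - 1) * math.log(1 - 1 / math.e) + 1
print(partial, closed)  # the two agree to machine precision
```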
Planar curves and number of intersection | Why not 1) a regular $n$-gon with smoothed corners (of vanishing radii of curvature) and 2) a circle of radius equal to the average radius of the $n$-gon, and let $n \to \infty$? You can avoid the "loop" restriction by cutting one leg from the $n$-gon and a vanishingly small segment from the circle.
Given imgur will not let me load a picture, here is the Mathematica code that generates it:
Graphics[{Line[CirclePoints[10]],
Red, Circle[{0, 0}, .98, {-π/2 + .05, 3 π/2 - .05}]}]
If the OP demands strictly convex (i.e., no straight line segments), then each straight line segment in the above candidate solution can be replaced by a section of a circle of arbitrarily large radius of curvature.
A "degenerate" solution would be two equal segments of any strictly convex curve that overlap along a finite segment (and hence contain infinitely many points in common). |
If $M$ is connected and $\int_M \omega = 0,$ then $\omega = 0$? | Let's go back to basics, where $M = [0,1]$.
I'm sure you can construct a non-zero function $f: M \to \Bbb R$ such that $\int_M f(x) \ \mathrm dx = 0$. |
What is $\mbox{Tr}^2(A)-\mbox{Tr}(A^2)$ in terms of the eigenvalues of $A$? | Yes. We have
$$\operatorname{Tr}^2(A)-\operatorname{Tr}(A^2)=(\lambda_1+\lambda_2+\lambda_3)^2-(\lambda_1^2+\lambda_2^2+\lambda_3^2)=2(\lambda_1\lambda_2+\lambda_1\lambda_3+\lambda_2\lambda_3)$$ |
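A quick numerical illustration (my own addition): an upper-triangular matrix has its eigenvalues on the diagonal, so the identity can be checked directly.

```python
# Check Tr(A)^2 - Tr(A^2) = 2(l1 l2 + l1 l3 + l2 l3) on an
# upper-triangular matrix, whose eigenvalues are its diagonal entries.
A = [[1, 5, 7],
     [0, 2, 9],
     [0, 0, 3]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(X):
    return sum(X[i][i] for i in range(len(X)))

l1, l2, l3 = 1, 2, 3
lhs = tr(A) ** 2 - tr(matmul(A, A))
rhs = 2 * (l1 * l2 + l1 * l3 + l2 * l3)
print(lhs, rhs)  # 22 22
```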
The triangle formed by the $x$-axis, the tangent to $y=1/x$ at a 1st-quadrant point, and the line from the origin to that point is isosceles | It looks good to me. A geometric perspective to consider:
When we find the point $A =(2a, 0)$, notice that $P$ lies directly above the midpoint of $OA$. (This midpoint is $C = (a, 0)$.)
The triangles $OCP$ and $ACP$ are congruent, right-angled triangles that are mirror images of each other ($CP$ as a common edge).
Hence $OPA$ is an isosceles triangle.
Moreover if we rotate $ACP$ around $P$ until it forms a rectangle with $OCP$ (which occurs when $A$ moves to $O$), we get a rectangle with base $a$ and height $1/a$ hence it has area $1$. |
How to prove that $f(x)=|x|$ is not surjective? | You are making things complicated. Look at the definition of "surjective" again:
A function $f:\mathbb{R}\to\mathbb{R}$ is surjective if (and only if) for every $y\in\mathbb{R}$, there exists $x\in\mathbb{R}$ such that $f(x)=y$.
There is no $x\in\mathbb{R}$ such that $f(x)=-1$, since $|x|\geq 0$ for all $x\in\mathbb{R}$. |
Calculating arc length based on certain parameters | You can describe this curve parametrically as $(x(t), y(t))$; the length of the curve is then $$\int \sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2}\,dt$$ |
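As a sketch (my own illustration), the arc-length integral can be approximated numerically for any parametrization; the unit circle is used here because its length, $2\pi$, is known.

```python
import math

# Approximate the integral of sqrt(x'(t)^2 + y'(t)^2) dt by summing
# chord lengths of a fine polygonal approximation of the curve.
def arc_length(x, y, t0, t1, steps=100_000):
    total = 0.0
    h = (t1 - t0) / steps
    for i in range(steps):
        dx = x(t0 + (i + 1) * h) - x(t0 + i * h)
        dy = y(t0 + (i + 1) * h) - y(t0 + i * h)
        total += math.hypot(dx, dy)
    return total

L = arc_length(math.cos, math.sin, 0.0, 2 * math.pi)
print(L)  # ~6.2831853 (= 2*pi)
```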
Prove that there exists $x \in \mathbb{R}^n$ such that $(Ax, x) > 0$ and $(Bx, x) < 0$ | For those who are interested, there turns out to be a pretty elegant solution to this problem that involves probabilistic methods.
As was said in the post, we can choose a basis in which $A$ is diagonal, say with diagonal entries $d_1,\ldots,d_n$. Let $X$ be a random vector: its $i$-th coordinate $X_i$ equals $1$ with probability $\dfrac{1}{2}$ and $-1$ with probability $\dfrac{1}{2}$ (all coordinates are independent). Then, for every realization of $X$,
$$(AX, X) = \sum\limits_{i=1}^n d_iX_i^2 = \sum\limits_{i=1}^n d_i = \operatorname{tr}A > 0$$
But
$$\mathbb{E}(BX, X) = \mathbb{E}\left(\sum\limits_{ij} B_{ij}X_iX_j\right) = \sum\limits_{ij} B_{ij}\mathbb{E}(X_iX_j) = \sum\limits_{i} B_{ii}\mathbb{E}(X_i^2) = \operatorname{tr}(B) < 0$$
It means that there exists a realization of $X$ with $(BX, X) < 0$ (if there were none, the expectation would be non-negative). |
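A small brute-force illustration of the argument (the matrix below is my own example): averaging $(Bx,x)$ over all sign vectors gives exactly $\operatorname{tr}B$, so a negative trace forces some sign vector with $(Bx,x)<0$.

```python
from itertools import product

# B is an arbitrary symmetric matrix with tr(B) = -1 < 0.
B = [[-2, 3, 1],
     [ 3, 0, 4],
     [ 1, 4, 1]]

def quad(B, x):
    n = len(B)
    return sum(B[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

vals = [quad(B, x) for x in product((-1, 1), repeat=len(B))]
avg = sum(vals) / len(vals)  # equals tr(B): the cross terms cancel
print(avg, min(vals))        # -1.0 and some negative value
```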
About sphere with equatorial opposite identification | You must have misread. Having a homeomorphism $\Bbb S^n \cong \Bbb RP^n$ or even a homotopy equivalence $\Bbb S^n \cong \Bbb RP^n$ breaks down from $n=2$ onwards: they have different singular homology say, so they must be distinct. (If you don’t know about homology yet this is not really a problem. My point is that one can prove that what you want to prove is impossible to hold true)
In the case $n=1$ one indeed has a homeomorphism $\Bbb S^1 \cong \Bbb RP^1$. One always has a quotient map $\Bbb S^n \rightarrow \Bbb S^n/\pm 1 =: \Bbb RP^n$, which one often takes as the definition of $\Bbb RP^n$. In the case $n=1$ we note that we can represent $\Bbb S^1/\pm 1$ by $[0,\pi]/\{0,\pi\}$, which itself is homeomorphic to $\Bbb S^1$. |
Calculate the area that the following graphs form | You have to break up an integral involving $|x|$ into two integrals, one where $x\geq 0$ and use $x$ for $|x|$, and another where $x\leq0$ and use $-x$ for $|x|$.
If you call your functions $f(x) = |x|/2$, $g(x) = 1/(x^2+1)$, and $h(x) = 1/4$, then you want
$\int_{-1}^{-1/2}(g(x)-f(x))dx + \int_{-1/2}^{0}(g(x)-h(x))dx + \int_{0}^{1/2}(g(x)-h(x))dx + \int_{1/2}^{1}(g(x)-f(x))dx$
using the appropriate replacements for $f(x)$ on the two outer intervals as I mentioned ($-x/2$ on $[-1,-1/2]$ and $x/2$ on $[1/2,1]$). |
Proving statements by its contrapositive | Let p be the boolean value for the statement "$n^3+2^n+1$ is odd", q be the boolean value for "n is even".
$$p \rightarrow q$$
$$1 \rightarrow 1$$ $$0 \rightarrow 1$$ $$0 \rightarrow 0$$
(as you can see, only $1 \rightarrow 0$ will indicate $p \rightarrow q$ is false)
The contrapositive of $p \rightarrow q$ is $\lnot q \rightarrow \lnot p$, which is "if n is not even, $n^3+2^n+1$ is not odd."
The statement "if $n^3+2n+1$ is even then n is odd" is implying $\lnot p \rightarrow \lnot q$ and it is not equivalent to $p → q$.
$$\lnot p \rightarrow \lnot q$$
$$\lnot 0 \rightarrow \lnot 0$$ $$\lnot 1 \rightarrow \lnot 0$$ $$\lnot 1 \rightarrow \lnot 1$$
($\lnot p \rightarrow \lnot q$ is still true when $p=1$ and $q=0$, which is not the case for $p \rightarrow q$)
If it works for the contrapositive, your statement definitely holds.
The statements $p \rightarrow q$ and $\lnot q \rightarrow \lnot p$ are logically equivalent. |
Show that $f(x)$ satisfy the differential equation | Firstly, we consider the behavior of the function on the new coordinate. It is obvious that $F(0)=0$, also $F'(0)=0$ since the $x$ coordinate is just the tangent line. So to compute $F'''(x)$, we need $\lim_{x\to 0}3\cdot\frac{F(x)-F(-x)}{x^3}$, this can be verified via Taylor expansion on new coordinate.$\newcommand{\e}{\epsilon}$
Secondly, we move the point $(a,f(a))$ to $(0,0)$, so the point $(a+\e,f(a+\e))$ is translated to $(\e,f(a+\e)-f(a))$. Consider the distance to the line $y=f'(a)x$ in the translated coordinates; this distance is assigned a sign according to the $y$ axis of the new coordinate system (see the picture):
$$y_1=\frac{f(a+\e)-f(a)-f'(a)\e}{\sqrt{1+f'(a)^2}},\\x_1=\frac{f'(a)(f(a+\e)-f(a))+\e}{\sqrt{1+f'(a)^2}}=\sqrt{1+(f'(a))^2}\e+\frac{f''(a)f'(a)}{2\sqrt{1+(f'(a))^2}}\e^2+o(\e^2)$$
Thus when $x_1\to -x_1$, the $\e\to \e_1=-\e-\frac{f''(a)f'(a)}{1+f'(a)^2}\e^2+o(\e^2)$
$$\lim_{x_1\to 0}\frac{y_1(\e)-y_1(-\e-\frac{f''(a)f'(a)}{1+f'(a)^2}\e^2)}{x_1^3}=0\Rightarrow \frac{f''(a)\e^2/2!+f'''(a)\e^3/3!-f''(a)(\e_1)^2/2!-f'''(a)(\e_1)^3/3!}{\left(\sqrt{1+(f'(a))^2}\e\right)^3}=0$$
Expanding and discarding the higher-order terms, we get
$$\frac{1}{3}f'''(a)-\frac{f''(a)^2f'(a)}{1+f'(a)^2}=0,$$
which is exactly $$(1+(f'(x))^2)f'''(x)=3f'(x)f''(x)^2$$
Note that this implies $$\left(\frac{1+(f'(x))^2}{f''(x)}\right)'=-f'(x)\Rightarrow \frac{1+(f'(x))^2}{f''(x)}= -f(x)+C\Rightarrow (f(x)f'(x))'=C f''(x)-1.$$
Hence $f(x)f'(x)=C_1 f'(x)-x+C_2$ and $((f(x)-C_1)^2)'=-2x+2C_2$. The final answer is
$$(f(x)-C_1)^2=-x^2+2C_2 x+C_3$$ |
Is that true that each non-initial object has elements under this definition of ETCS? | The definitions are equivalent. Here is the first argument I could come up with. There may well be a simpler one using the full power of the topos assumption; the below works in any well-pointed category with a strict initial object in which every epimorphism is effective.
In a well-pointed topos, if an object $X$ has no points, then we know that $X$ has at most one morphism to any object $Y$. In other words, the unique morphism $q:0\to X$ from the initial object is an epimorphism. The dual property, of $X$ being subterminal, is widely used, but I'm not sure this one has a standard name. "Quot-initial"?
Anyway, in any topos, every epimorphism is effective. The linked nLab article cites Mac Lane-Moerdijk, IV.7.8, for this result. Thus $q$ is the quotient map for some equivalence relation $R\rightarrowtail 0\times 0$ on the initial object $0$. But since toposes have strict initial objects, we find both $0\times 0\cong 0$ and (thus) $R\cong 0$. That is, there is a unique equivalence relation on $0$ and thus $q$ is isomorphic to the identity map in the slice category under $0$. |
How to prove $\sum_{ d \mid n} \mu(d)f(d)=\prod_{i=1}^r (1-f(p_i))$? | Please see Theorem 2.18 on page $37$ in Tom Apostol's Introduction to analytic number theory book.
The proof goes as follows:
Define $$ g(n) = \sum\limits_{d \mid n} \mu(d) \cdot f(d)$$
Since $f$ is multiplicative, so is $g$; hence to determine $g(n)$ it suffices to compute $g(p^a)$ at prime powers. But note that $$g(p^a) = \sum\limits_{d \mid p^{a}} \mu(d) \cdot f(d) = \mu(1)\cdot f(1) + \mu(p)\cdot f(p) = 1-f(p)$$ since $\mu(p^2)=\cdots=\mu(p^a)=0$, and the product formula follows. |
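A brute-force check of the identity (my own addition), using the completely multiplicative choice $f(d)=d$:

```python
import math

# Verify sum_{d | n} mu(d) f(d) = prod_{p | n} (1 - f(p)) for f(d) = d.
def mobius(n):
    m, res, p = n, 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:   # square factor => mu = 0
                return 0
            res = -res
        p += 1
    if m > 1:
        res = -res
    return res

def prime_factors(n):
    ps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def lhs(n, f):
    return sum(mobius(d) * f(d) for d in range(1, n + 1) if n % d == 0)

def rhs(n, f):
    return math.prod(1 - f(p) for p in prime_factors(n))

print(lhs(12, lambda d: d), rhs(12, lambda d: d))  # 2 2
```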
Prove $a_n = \frac{(-1)^{n+3}}{n}$ converges. | Keep the part
Given $\epsilon>0$, $\exists N>0$ such that if $n>N$ then $|a_n-0|<\epsilon$
$$n>N \implies |a_n-0|<\epsilon$$
$$\implies |a_n|<\epsilon$$
$$\implies \left|\frac{(-1)^{n+3}}{n}\right|<\epsilon$$
$$\implies \left|\frac{1}{n}\right|<\epsilon \implies \frac{1}{n}<\epsilon \implies n>\frac{1}{\epsilon}$$
but change (almost) every $\implies$ to $\Leftarrow$:
We want: given $\epsilon>0$, $\exists N>0$ such that
$$n>N \implies |a_n-0|<\epsilon$$
$$\Leftarrow \left|\frac{1}{n}\right|<\epsilon \Leftarrow \frac{1}{n}<\epsilon \Leftarrow n>\frac{1}{\epsilon}$$ |
Connection between eigenvalues of a real matrix A and its norm. | Let
$$A = \begin{pmatrix}1/2 & 1 \\ 0 & 1/2\end{pmatrix}$$
Then $1/2$ is the only eigenvalue of $A$. If $v = \begin{pmatrix}1 \\ 1\end{pmatrix}$, then $Av = \begin{pmatrix} 3/2 \\ 1/2\end{pmatrix}$, hence $\|Av\| = \sqrt{10}/2 > \sqrt{2} = \|v\|$. |
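Numerically (my own check of the example above):

```python
# The only eigenvalue of A is 1/2, yet A increases the length of (1, 1):
A = [[0.5, 1.0],
     [0.0, 0.5]]
v = (1.0, 1.0)
Av = (A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1])
norm = lambda w: (w[0] ** 2 + w[1] ** 2) ** 0.5
print(Av, norm(Av), norm(v))  # (1.5, 0.5), ~1.581, ~1.414
```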
Prove Weierstrass approximation theorem for 0.2(3) | $x_1=0.2$
$x_n=x_{n-1}+3\cdot 10^{-n},n>1$
means that the sequence is
$\{0.2,0.23,0.233,0.2333,\ldots\}$
that is $$\lim_{n\to\infty}x_n=0.2+3 \sum _{n=2}^{\infty } \left(\dfrac{1}{10}\right)^n=0.2+\dfrac{1}{30}=\dfrac{7}{30}=0.2\bar 3$$
and this is the limit because the geometric series with ratio $\dfrac{1}{10}$ converges |
Proof by resolution and IFF | You can write it both as a CNF and as a DNF:
$$(a \leftrightarrow b) \Leftrightarrow (a \land b) \lor (\lnot a \land \lnot b) \Leftrightarrow (a \lor \lnot b) \land (\lnot a \lor b).$$
For resolution you might want the negation:
$$\lnot (a \leftrightarrow b) \Leftrightarrow (a \land \lnot b) \lor (\lnot a \land b) \Leftrightarrow (a \lor b) \land (\lnot a \lor \lnot b).$$
In general, every statement can be converted to a CNF or a DNF, so resolution can be used to prove (or rather, refute) every possible statement. |
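A mechanical truth-table check of both rewritings (my own addition):

```python
from itertools import product

# Verify the CNF/DNF forms of a <-> b and of its negation on all rows.
rows_ok = []
for a, b in product((False, True), repeat=2):
    iff = (a == b)
    dnf = (a and b) or (not a and not b)
    cnf = (a or not b) and (not a or b)
    neg_dnf = (a and not b) or (not a and b)
    neg_cnf = (a or b) and (not a or not b)
    rows_ok.append(iff == dnf == cnf and (not iff) == neg_dnf == neg_cnf)

print(all(rows_ok))  # True
```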
Linear Differential Equation but with piece wise function ( Cant Solve ) | First equation
(Update: fixed, thanks to RodrigodeAzevedo.)
Consider $0 \leq x \leq 1$, then:
$$\begin{cases}
y' + y = 1 \\
y(0) = 1
\end{cases} \Rightarrow y(x) = 1 \Rightarrow y(1) = 1.$$
Now, solve the differential equation for $x > 1$:
$$\begin{cases}
y' + y = -1 \\
y(1) = 1
\end{cases} \Rightarrow y(x) = 2e^{-x+1} - 1.$$
Finally:
$$y(x) = \begin{cases}1 & 0 \leq x \leq 1 \\
2e^{-x+1} - 1 & x > 1.
\end{cases}$$
Notice that for $x=1$, this function is continuous.
Second equation
This can be solved by using the very standard method of Separation of variables.
Consider $0 \leq x \leq 3$, then:
$$\begin{cases}
y' + 2xy = x \\
y(0) = 2
\end{cases} \Rightarrow \\
\frac{dy}{dx} = x(1-2y) \Rightarrow \\
\int_{y(0)}^{y(x)}\frac{dy}{1-2y} = \int_0^x s ds \Rightarrow \\
\int_{2}^{y(x)}\frac{dy}{1-2y} = \int_0^x s ds \Rightarrow \\
-\frac{1}{2}\left[\log\left(y(x)-\frac{1}{2}\right) - \log\left(\frac{3}{2}\right)\right] = \frac{1}{2}x^2 \Rightarrow \\
y(x) = \frac{3e^{-x^2}+1}{2} \Rightarrow
y(3) = \frac{3e^{-9}+1}{2}.$$
Now consider $x > 3$, then:
$$\begin{cases}
y' + 2xy = 0 \\
y(3) = \frac{3e^{-9}+1}{2}
\end{cases} \Rightarrow \\
\frac{dy}{dx} = -2xy \Rightarrow \\
\int_{y(3)}^{y(x)}\frac{dy}{y} = -2\int_3^x s ds \Rightarrow \\
\int_{\frac{3e^{-9}+1}{2}}^{y(x)}\frac{dy}{y} = -2\int_3^x s ds \Rightarrow \\
\log\left(y(x)\right) - \log\left(\frac{3e^{-9}+1}{2}\right) = -(x^2-9) \Rightarrow \\
y(x) = \frac{3e^{-9}+1}{2}e^{-(x^2-9)}$$
Again, this is continuous for $x=3$. |
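A numerical spot check of both closed forms (my own addition), using central differences for $y'$:

```python
import math

def y1(x):   # claimed solution of y' + y = -1, y(1) = 1, for x > 1
    return 2 * math.exp(-x + 1) - 1

def y2(x):   # claimed solution of y' + 2xy = x, y(0) = 2, for 0 <= x <= 3
    return (3 * math.exp(-x ** 2) + 1) / 2

def d(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

r1 = d(y1, 2.0) + y1(2.0) + 1               # residual of y' + y = -1
r2 = d(y2, 1.5) + 2 * 1.5 * y2(1.5) - 1.5   # residual of y' + 2xy = x
print(r1, r2, y1(1), y2(0))  # ~0, ~0, 1.0, 2.0
```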
In a graph $|V|=40, |E|=80$, prove there's an anti-clique of size $\geq 8$ | Let's form an independent set of vertices by doing the following. First, randomly permute the vertices $v_1,\ldots,v_n$. Initialize $S = \{\}$. Then for $k = 1,2,3,\ldots,n$ if $v_k$ is still in the graph, add it to $S$ and delete $v_i$ and each of its neighbors from the graph. Then, $S$ is an independent set.
Define a random variable $I_v = 1$ if $v \in S$ and $I_v = 0$ if $v_k \not\in S$. Then, $|S| = \displaystyle\sum_{v \in V}I_v$.
For each vertex, $v \in S$ if $v$ has a lower index than each of its neighbors. If $v$ has $d(v)$ neighbors, then the probability of $v$ having the lowest index among itself and its neighbors is $\dfrac{1}{d(v)+1}$.
Hence, $\mathbb{E}[I_v] = \Pr[I_v = 1] = \dfrac{1}{d(v)+1}$. So, by linearity $\mathbb{E}[|S|] = \displaystyle\sum_{v \in V}\dfrac{1}{d(v)+1}$.
Therefore there must be at least one permutation of the vertices for which the above algorithm gives us an independent set of size at least $\displaystyle\sum_{v \in V}\dfrac{1}{d(v)+1}$. Here $\sum_{v}d(v) = 2|E| = 160$, so by the AM-HM inequality $\displaystyle\sum_{v \in V}\dfrac{1}{d(v)+1} \geq \dfrac{|V|^2}{\sum_v (d(v)+1)} = \dfrac{1600}{200} = 8$. |
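A Monte-Carlo sketch of the argument (my own illustration, on a hypothetical random graph with 40 vertices and 80 edges):

```python
import random
from itertools import combinations

rng = random.Random(0)
edges = set()
while len(edges) < 80:
    u, v = rng.randrange(40), rng.randrange(40)
    if u != v:
        edges.add((min(u, v), max(u, v)))

adj = {v: set() for v in range(40)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# sum_v 1/(d(v)+1) >= |V|^2 / sum_v (d(v)+1) = 1600/200 = 8 by AM-HM.
bound = sum(1 / (len(adj[v]) + 1) for v in range(40))

# Random-permutation greedy independent set from the answer.
order = list(range(40))
rng.shuffle(order)
alive, S = set(range(40)), []
for v in order:
    if v in alive:
        S.append(v)
        alive -= adj[v] | {v}

is_independent = all((min(u, v), max(u, v)) not in edges
                     for u, v in combinations(S, 2))
print(len(S), bound, is_independent)
```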
Average amount of money received from pulling three random coins | Each coin is pulled 30% of the time. So on average you will get 30% of 64 cents=19.2 cents |
Showing $ \int_0^\pi{f(x)}dx=\sum_{k=0}^\infty{\frac{2}{(2k+1)^4}} $ where $ f(x)=\sum_{n=0}^\infty{\frac{\sin(n\cdot x)}{n^3}} $ | Hint:
$$\int_0^\pi \frac{\sin(nx)}{n^3}dx=\Bigl[\frac{-\cos(nx)}{n^4}\Bigr]_0^\pi$$
$$=\frac{1-(-1)^n}{n^4}$$
$$=\frac{2}{(2k+1)^4} \text{ if } n=2k+1, \text{ and } 0 \text{ if } n \text{ is even}$$ |
determine the frontier | The elements of $\mathbb{R}\setminus\mathbb{Q}$ are the irrational numbers; the elements of the frontier $\partial(\mathbb{R}\setminus\mathbb{Q})$ (which is the same as $\partial\mathbb{Q}$) are the real numbers $x\in\mathbb{R}$ with the following property (assuming you are considering the standard topology on $\mathbb{R}$): for all $\epsilon>0$, $(x-\epsilon,x+\epsilon)$ intersects both $\mathbb{Q}$ and $\mathbb{R}\setminus\mathbb{Q}$. But since $\mathbb{Q}$ and $\mathbb{R}\setminus\mathbb{Q}$ are dense in $\mathbb{R}$, any $x\in\mathbb{R}$ has this property. So $\partial(\mathbb{R}\setminus\mathbb{Q})=\mathbb{R}$. |
How to show that $\forall x\in \mathbb{R},\space \exists!n\in \mathbb{Z}$ such that $n\leq x < n+1$ | Assuming that you agree with the existence of the largest element in $A$, the part of the proof in question follows as an easy consequence.
Let $n_0$ be the largest element in the set $A$. Then it satisfies the following properties:
1. $n_0 \leq x$, since $n_0$ is an element of $A$.
2. If $n < n_0$, then $n+1 \leq x$. This is clear from the inequality $n+1 \leq n_0 \leq x$.
3. If $n > n_0$, then $n > x$. Indeed, assume $n > n_0$. Then $n$ is not a member of $A$, since otherwise the maximality of $n_0$ would be contradicted. Since $n \notin A$, we have $n > x$.
Since $n_0 + 1 > n_0$, property 3 gives $x < n_0 + 1$, so $n_0 \leq x < n_0 + 1$, which proves the existence. Also, properties 2 and 3 together exclude the possibility that $n \leq x < n+1$ is satisfied for some $n$ other than $n_0$. This proves the uniqueness.
If you are questioning the existence of the largest element of $A$, you may exploit the well-ordering principle of $\Bbb{N}$ with a slight modification. |
Grid coordinates $\Longleftrightarrow$ Triangle numbering | One thing you immediately notice is that $$nr(1,y)={y+1\choose 2}$$
Then you also know $$nr(x+1,y-1)=nr(x,y)-1$$
Therefore the result is
$$nr(x,y)={x+y\choose 2}-(x-1)$$
Edit:
For the other way around, first find the closest bigger triangular number.
Suppose you have $n=nr(x,y)$ and you know $n$.
Then $$x+y=\lceil{-1+\sqrt{8n+1}\over 2}\rceil+1$$
(This comes from solving ${u^2+u\over 2}=n$ where $u=x+y$)
Once you find $x+y$ you calculate $$x={x+y\choose 2}-n+1$$ and $$y=(x+y)-x$$.
To summarize in one formula, you have
$$nr2i(n)={\lceil{-1+\sqrt{8n+1}\over 2}\rceil+1\choose 2} - n + 1$$
$$nr2j(n)=(\lceil{-1+\sqrt{8n+1}\over 2}\rceil + 1) - ({\lceil{-1+\sqrt{8n+1}\over 2}\rceil+1\choose 2} - n + 1)$$ |
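The formulas above can be written as a round-trip pair of functions (my own sketch; the names `nr` and `nr_inv` are mine):

```python
import math

# nr maps grid coordinates (x, y), with x, y >= 1, to the triangle
# numbering; nr_inv inverts it via the closest triangular number.
def nr(x, y):
    s = x + y
    return s * (s - 1) // 2 - (x - 1)

def nr_inv(n):
    s = math.ceil((-1 + math.sqrt(8 * n + 1)) / 2) + 1
    x = s * (s - 1) // 2 - n + 1
    return x, s - x

print(nr(1, 1), nr_inv(1))  # 1 (1, 1)
```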
Canonical line bundle over a projective bundle | (Small caution: Algebraic geometers call the bundle $H^*$ the tautological bundle; the canonical bundle refers to the top exterior power of the cotangent bundle. When I was a student, topologists were coming around to this convention as well.)
Presumably Atiyah means that to understand the tautological bundle of a projective bundle $\mathbf{P}(E)$, it's enough (locally) to understand the tautological line bundle over a projective space (a.k.a., Grassmannian of lines). |
I don't get the same graph, before and after solving for Y? | The "missing information" comes from improperly taking the square root. The actual equation B, solved for y, is: $$ y \ =\ \pm\ \sqrt{1 \ +\ 2x \ ^2\ } $$ This will give you the second part of the graph that you were previously missing. |
How to prove that the Frobenius endomorphism is surjective? | All fields are domains; being a domain is not some extra property that a field may or may not have.
The Frobenius homomorphism is often called the Frobenius endomorphism, since "endomorphism" is a more specific term meaning "map from something to itself". However, I'll use "homomorphism" here to avoid confusion.
Let $K=\mathbb{F}_p(T)$, the rational functions in the variable $T$ over the field $\mathbb{F}_p=\mathbb{Z}/p\mathbb{Z}$. That is,
$$K=\mathbb{F}_p(T)=\left\{\,\frac{f}{g}\,\Bigg|\,\,\,f,g\in\mathbb{F}_p[T], g\neq0\right\}.$$
Then the Frobenius homomorphism $\phi:K\to K$ is not surjective, because (for instance) the element $T$ is not in the image of $\phi$. This is because if we had $\phi(\frac{f}{g})=\frac{f^{\;p}}{g^{\;p}}=T$ for some $\frac{f}{g}\in K$, then we have $p(\deg(f)-\deg(g))=\deg(T)=1$, which is impossible as $\deg(f)-\deg(g)$ is an integer and $1$ is not divisible by $p$.
Now let $K=\mathbb{F}_p$. Then the Frobenius homomorphism $\phi:K\to K$ is surjective, because it is in fact equal to the identity map, i.e. $\phi(x)=x$ for all $x\in K$. More generally, if $K$ is any finite field of characteristic $p$, then the Frobenius homomorphism is surjective, because (as I will show below) it is always injective, and an injective function from a finite set to itself must be surjective.
However, it is not necessary that $K$ be finite in order for the Frobenius homomorphism to be surjective. For example, now let $K=\mathbb{F}_p(T^{\;1/p^\infty})$. That is,
$$K=\mathbb{F}_p(T^{\;1/p^\infty})=\mathbb{F}_p(T,\sqrt[p]{T\;},\sqrt[p^2]{T\;},\ldots).$$ This is certainly an infinite field. The Frobenius homomorphism $\phi:K\to K$ is surjective. For example, the element $\alpha\in K$,
$$\alpha=\frac{\sqrt[p]{T\;}+2\cdot(\sqrt[p^2]{T\;})^3}{T^{\;2}-5T^{\;p}}$$
is in the image of $\phi$, because we can replace every occurrence of a $T^{\;p^d}$ with a one-lower-power-of-$p$ exponent, i.e.
$$\phi(\beta)=\alpha,\text{ where }\quad\beta=\frac{\sqrt[p^2]{T\;}+2\cdot(\sqrt[p^3]{T\;})^3}{(\sqrt[p]{T\;})^{\;2}-5T}$$
Similarly with any other element of $K$. (Note that $2$ or $5$ could very well be equal to $p$, and hence equal to $0$.)
Incidentally, the field $\mathbb{F}_p(T^{\;1/p^\infty})$ is called the perfection of the field $\mathbb{F}_p(T)$. It is a theorem that a field $K$ of characteristic $p$ is perfect if and only if the Frobenius homomorphism $\phi:K \to K$ is surjective, and by adding in $p^n$-th roots of every element of $\mathbb{F}_p(T)$, we have made a field for which it must be surjective, hence the name.
Now, some books (for example, McCarthy's Algebraic Extensions of Fields) use the term "isomorphism" to mean what we now call "monomorphism" (injective homomorphism), and when they want to express what we now call "isomorphism" (bijective homomorphism), they would say "onto isomorphism" (since onto is a synonym for surjective, and bijective = injective + surjective).
If for some reason you are using this older terminology, then the Frobenius homomorphism for a field is always an "isomorphism" (i.e. injective). This is because, if $K$ is any field of characteristic $p$, and $\phi:K\to K$ is the Frobenius homomorphism, then
$$\begin{align*}\phi(\alpha)=\alpha^p=\beta^p=\phi(\beta)&\implies\alpha^p-\beta^p=0\\ &\implies(\alpha-\beta)^p=0\\ & \implies\alpha-\beta=0\\ & \implies\alpha=\beta.\end{align*}$$
In fact, any homomorphism from a field $K$ to a non-zero ring $R$ must be injective (what could its kernel be, as an ideal of the field $K$?) Also note that, by the First Isomorphism Theorem for Rings, we therefore have that any homomorphism $f:K\to R$ from a field $K$ to a non-zero ring $R$ is an isomorphism (in the modern sense) onto the subring of $R$ that is the image of $f$, i.e. if we threw out the rest of $R$ and only looked at the subring $f(K)$, then $f:K\to f(K)$ is an isomorphism (in the modern sense). This is what Jyrki Lahtonen conjectured that you meant above.
If we look at things that are not domains (and so in particular are not fields), then the Frobenius homomorphism need not be injective. For example, let $R=\mathbb{F}_p[x]/(x^p)$. That is,
$$R=\mathbb{F}_p[x]/(x^p)=\{a_0+\cdots+a_{p-1}x^{p-1}+(x^p)\mid a_i\in\mathbb{F}_p\}.$$
Then if $\phi:R\to R$ is the Frobenius homomorphism, we have for example $x+(x^p)\neq0+(x^p)$, but $$\phi(x+(x^p))=x^p+(x^p)=0+(x^p)=0^p+(x^p)=\phi(0+(x^p)).$$
On a final note, I just want to emphasize that the concept of a Frobenius homomorphism is only applicable if the field is of characteristic $p$ (as all the examples above were). For example, $\mathbb{Q}$ is a field of characteristic $0$, and for any prime $p$, the function $f:\mathbb{Q}\to\mathbb{Q}$ defined by $f(a)=a^p$ is not a homomorphism (compare $f(2)$ with $2\cdot f(1)$). |
Finding the biggest decrease in a list of integers (Python) | Store the pair of values $(d, m)$, where $d$ is the largest decrease found so far, and $m$ is the largest value found in the array so far. Read through the array in linear time, updating the values by $(d', m') = (\max\{d, m - x\}, \max\{m, x\})$, where $x$ is the current value in the array. |
Spectrum $\sigma(T)$ of $T:l^1 \to l^1$ given by $T((a_j))=\left( \sum_{j=2}^{\infty} a_j \right) e_1 + \sum_{j=2}^{\infty} a_{j-1} e_j$ | I think you can find the spectrum directly. First,
In components, $$T((a_j))_1=\sum_{j=2}^{\infty}a_j, \qquad T((a_j))_i=a_{i-1} \text{ for } i\geq 2$$
Then
$$(T-\lambda I)((a_j))_1=\sum_{j=2}^{\infty}a_j -\lambda a_1,$$
$$(T-\lambda I)((a_j))_i=a_{i-1}-\lambda a_i$$
So if $\lambda$ is an eigenvalue, the second equation implies $a_i=\frac{1}{\lambda}a_{i-1}$, hence $a_j=\left(\frac{1}{\lambda}\right)^{j-1}a_1$. The first implies that
\begin{eqnarray}
\lambda &=& \frac{1}{a_1}\sum_{j=2}^{\infty}\left(\frac{1}{\lambda}\right)^{j-1}a_1
\\
&=&\sum_{k=1}^{\infty}\left(\frac{1}{\lambda}\right)^{k}
\\
&=&\frac{1}{\lambda-1}
\end{eqnarray}
Check the values $\lambda=0$ and $\lambda=1$ separately. For all other values of $\lambda$, this is the quadratic equation $\lambda^2-\lambda-1=0$; since an eigenvector must lie in $l^1$, only roots with $|\lambda|>1$ are admissible, leaving $\lambda=\frac{1+\sqrt{5}}{2}$.
To get the rest of the spectrum, we first calculate the eigenvalues of the adjoint
$$T'((b_j))=b_2 e_1+ \sum_{j=2}^{\infty} (b_1+b_{j+1})e_j.$$
We have
$$(T^{\prime}-\lambda I)((b_j))_1=b_2-\lambda b_1$$
$$(T^{\prime}-\lambda I)((b_j))_i=b_1+b_{i+1}-\lambda b_i.$$
The last equation implies $b_{i+1}=\lambda b_i-b_1$. This and the first equation imply
$$b_i=\lambda^{i-2}b_2-\sum_{j=0}^{i-3}\lambda^{j}b_1=b_1\left(\lambda^{i-1}-\sum_{j=0}^{i-3}\lambda^{j}\right)=b_1\left(\lambda^{i-1}-(1-\lambda^{i-2})/(1-\lambda)\right)=b_1\left(\frac{\lambda^{i-2}(\lambda-\lambda^2+1)-1}{1-\lambda}\right)$$
The question is: when is this sequence bounded? It is bounded when $\vert \lambda\vert\leq 1$ and when $\lambda$ is a root of $\lambda-\lambda^2+1$. It remains to check the case $\lambda=1$.
To finish, we use the fact that for an operator with closed range, $\ker(T^{\prime})=\operatorname{Ran}(T)^{\perp}$. Hence, $T^{\prime}-\bar{\lambda}I$ has nontrivial kernel iff $T-\lambda I$ is not surjective. But these are exactly the eigenvalues of $T^{\prime}$. |
Floor function summation[difficult] | Here is one way: use Pick's theorem on the triangle with vertices $(0,0), (503,0), (503,305)$.
Or you can do it more algebraically by noting
$$
\left\lfloor\frac{305 r}{503}\right\rfloor + \left\lfloor\frac{305(503-r)}{503}\right\rfloor = 305-1=304
$$
for all $1\leq r\leq 502$ (since $503$ is prime). |
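Assuming the sum in question is $\sum_{r=1}^{502}\lfloor 305r/503\rfloor$, the pairing identity can be brute-force checked, and the pairing gives $251 \cdot 304 = 76304$ (my own addition):

```python
# Pair r with 503 - r: since 503 is prime, 305 r / 503 is never an
# integer for 1 <= r <= 502, so each pair of floors sums to 304.
total = sum(305 * r // 503 for r in range(1, 503))
pairs_ok = all(305 * r // 503 + 305 * (503 - r) // 503 == 304
               for r in range(1, 503))
print(total, pairs_ok)  # 76304 True
```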
Number of primes in $]x,2x]$ compared to in $[0,x]$ for $x > 0$ | Use the prime number theorem in the form
$$\pi(y) = \int_2^y \frac{1}{\log t}dt + O_A(y (\log y)^{-A}).$$
We have
$$2\pi(x) - \pi(2x) = \int_2^x \frac{1}{\log t}dt - \int_x^{2x}\frac{1}{\log t}dt + O_A(x (\log x)^{-A}),$$
and we easily see that
$$\int_x^{2x}\frac{1}{\log t}dt \leq \frac{x}{\log x}.$$
If we now integrate by parts once, we get
$$\int_2^x \frac{1}{\log t}dt = O(1) + \frac{x}{\log x} + \int_2^x \frac{1}{\log^2 t}dt,$$
so
$$2\pi(x) - \pi(2x) \geq (1+o(1))\int_2^x \frac{1}{\log^2 t}dt \gg \frac{x}{\log^2 x}$$
as $x$ tends to infinity. |
Show me an example between topology and sigma-algebra | They have different axioms, though with a similar flavour. Topologies are closed under arbitrary (not just countable) unions, but are only closed under finite (again, not countable) intersections. Topologies also are not closed under complements, unlike $\sigma$-algebras.
As an example, the Borel sets form a $\sigma$-algebra, which contains the topology (i.e. all open sets). However, unlike the topology (in general) it will also contain all closed sets, and many other sets. As a more specific example, the Borel sets on $\mathbb{R}$ will include the half-open interval $[0, 1)$, which is not in the topology. |
Category of natural numbers with divisbility? | Every preorder $(P,\leq)$ can be regarded as a category. The object set is $P$, and a morphism $x \to y$ exists (and is unique) iff $x \leq y$. These are precisely those categories in which every diagram commutes. But still, it is interesting to apply category theory to preorders. A limit is just an infimum, a colimit is a supremum. In particular, initial (terminal) objects are least (largest) elements. Monotonic maps are just functors. Galois connections are just adjunctions between preorders.
We can define ideals and prime ideals in $P$. If $p \in P$ is an element, one says that $p$ is a meet prime iff the generated ideal ${\downarrow}p$ is a prime ideal, i.e. iff $p$ is not the largest element and $\inf(x,y) \leq p$ implies $x \leq p$ or $y \leq p$.
If $P = \mathbb{N}^+$ and $\leq$ is the relation of divisibility reversed(!), then prime elements are precisely the usual prime numbers. You can omit this reversion by looking at join prime elements.
More generally, if $P$ is the preorder of ideals of a ring $R$, ordered by inclusion, then prime elements are precisely the prime ideals of $R$ in the sense of ring theory.
One can generalize many notions of number theory / ring theory to preorders resp. lattices (see Wikipedia for a start; I have also found many papers with a quick Google search). In order to answer your interesting question "That is, can one explain (or at least motivate) deeper ideas in elementary number theory in terms of categorical language?" I would like to see an explicit "deeper idea in elementary number theory" first; then we may try to explain it in category-theoretic terms (although I doubt that we will gain anything from that). |
differential equation deriving power series | Let $y(x) = \sum_{n=0}^{\infty} a_n x^n$ be such a solution
You have
$$x^2y'' + xy' + (x-2)y = 0$$
$$x^2\left(\sum_{n=0}^{\infty} a_n x^n\right)'' + x\left(\sum_{n=0}^{\infty} a_n x^n\right)' + (x-2)\left(\sum_{n=0}^{\infty} a_n x^n\right) = 0$$
Then
$$x^2\sum_{n=2}^{\infty} n(n-1)a_{n} x^{n-2} + x\sum_{n=1}^{\infty} na_{n} x^{n-1} + (x-2)\sum_{n=0}^{\infty} a_n x^n = 0$$
$$\sum_{n=2}^{\infty} n(n-1)a_{n} x^n + \sum_{n=1}^{\infty} na_{n} x^n + \sum_{n=0}^{\infty} a_n x^{n+1} - \sum_{n=0}^{\infty} 2a_n x^{n} = 0$$
$$\sum_{n=2}^{\infty} n(n-1)a_{n} x^n + a_1x + \sum_{n=2}^{\infty} na_{n} x^n + a_0x+\sum_{n=2}^{\infty} a_{n-1} x^n - 2a_0 - 2a_1 x - \sum_{n=2}^{\infty} 2a_n x^{n} = 0$$
$$-2a_0 + \left( a_0- a_1 \right) x + \sum_{n=2}^{\infty} \left( n(n-1)a_{n}+na_n+a_{n-1}-2a_n \right) x^n = 0$$
Then you get the following system :
$$\left\lbrace \begin{array}{rcl} a_0 & = & 0 \\
a_0-a_1 &= &0 \\
n(n-1)a_{n}+na_n+a_{n-1}-2a_n & =& 0\end{array}\right.$$
$$\left\lbrace \begin{array}{rcl} a_0 & = & 0 \\
a_1 &= &0 \\
a_{n}& =& -\dfrac{a_{n-1}}{n(n-1)+n-2}\end{array}\right.$$
So it seems you're right, and the only solution that has a power series expansion is the null solution. |
example of Diffeomorphism | Why not $(x,y)\mapsto (e^x, y)$? |
Check the series for convergence | The series is the Taylor series for the sine function evaluated at $x=2$. Therefore it converges, and its sum equals $\sin(2)$. |
What is the record for Collatz Conjecture Steps | A small caveat first: what counts as an 'iteration' of the Collatz process varies a bit from source to source. Because what happens at even numbers is 'boring', the most standard convention seems to be looking at the process as it relates to odd numbers specifically, taking $n\mapsto \dfrac{3n+1}{2^i}$, where $2^i$ is the largest power of $2$ that divides $3n+1$.
Now, we can think about how the Mersenne numbers map under this process. Suppose we start with $n_0=2^k-1$. Then we look at $3n_0+1$ and see that this is $3\cdot2^k-2$; dividing by $2$ once gives $n_1=3\cdot 2^{k-1}-1$. Unless $k=1$, this is also odd, so we have no more factors of $2$ to strip out.
And this repeats: $3n_1+1$ is $3^2\cdot2^{k-1}-2$, dividing by $2$ once gives $n_2=3^2\cdot 2^{k-2}-1$, and unless $k=2$ this will also be odd.
It should be clear how this process iterates; after $i$ steps we'll have $n_i=3^i\cdot2^{k-i}-1$, and this continues until $i=k-1$ and we get $n_{k-1}=3^{k-1}\cdot 2-1$. Then $3n_{k-1}+1=3^k\cdot2-2$, and dividing by $2$ gives $3^k-1$ — but note that now we can do at least one more division by $2$, so the expression for $n_k$ isn't quite so neat.
In general the iteration will get much messier from here, and there generally won't be any more 'clean' expressions for the results. But we can explicitly describe the first $k-1$ steps of the process this way, so producing a long chain of iterations is just a question of how much computing power you want to throw at it. |
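The derivation above can be checked directly (a sketch of mine, using the same odd-to-odd convention):

```python
# Odd-to-odd Collatz step n -> (3n+1)/2^i; for n = 2^k - 1 the first
# k-1 steps give 3^i * 2^(k-i) - 1, as derived above.
def step(n):
    n = 3 * n + 1
    while n % 2 == 0:
        n //= 2
    return n

k = 10
n = 2 ** k - 1
for i in range(1, k):
    n = step(n)
    assert n == 3 ** i * 2 ** (k - i) - 1

print(n)  # 3^9 * 2 - 1 = 39365
```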
How to put an expression in terms of a sum or a "productory" | You can try using something like
$$\sum_{1\leq i\leq j\leq 3} \varepsilon_{ij}x_ix_j = \varepsilon_{11}x_1x_1 + \varepsilon_{12}x_1x_2 + \varepsilon_{13}x_1x_3 + \varepsilon_{22}x_2x_2 + \varepsilon_{23}x_2x_3 + \varepsilon_{33}x_3x_3$$
with
$$\varepsilon_{ij} = \left\{ \begin{align}
\varepsilon_{i+j+1} \ \textrm{ if } i\neq j \\
\varepsilon_{i} \ \textrm{ if } i = j
\end{align}
\right.$$ |
Homework help finding pdf's of y given pdf's of x - stuck | One strategy is to find the cumulative distribution function $F_Y(y)$ of $Y$, and then differentiate. It is clear that $F_Y(y)=0$ if $y\le 1$.
We are told that $Y=X^2$ if $X\lt 2$. So if $1\lt y\lt 4$, we have
$$F_Y(y)=\Pr(Y\le y)=\Pr(X^2\le y)=\Pr(X\le \sqrt{y})=\int_1^{\sqrt{y}} \frac{1}{x^2}\,dx.$$ If we just want to find the density, we can use the Fundamental Theorem of Calculus to find the derivative without calculating the integral. However, in this case the integration is easy.
If $y\ge 4$, the calculation is similar. We get $F_Y(y)=\Pr(Y\le y)=\Pr(X\le y/2)=\int_1^{y/2} \frac{1}{x^2}\,dx$. |
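Differentiating the two CDFs gives the densities $f_Y(y)=\tfrac12 y^{-3/2}$ on $(1,4)$ and $f_Y(y)=2y^{-2}$ on $(4,\infty)$. A quick numerical sketch (assuming those derived densities; the quadrature helper is a hypothetical implementation) confirms that they integrate to $1$:

```python
# Densities obtained by differentiating the CDFs above.
f1 = lambda y: 0.5 * y**-1.5    # on (1, 4)
f2 = lambda y: 2.0 * y**-2      # on (4, infinity)

def simpson(f, a, b, m=10_000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

p1 = simpson(f1, 1, 4)                    # should be 1/2
p2 = simpson(f2, 4, 10_000, 200_000)      # tail beyond 10^4 contributes only 2/10^4
assert abs(p1 - 0.5) < 1e-6
assert abs(p1 + p2 - 1.0) < 1e-3
```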
transformation $y = u v$ transforms the differential equation | take $y=uv$ we have
$$y'=u'v+v'u$$
and
$$y''=u''v+2u'v'+uv''$$
Substituting into the equation, we obtain
$$(f(x)u''-4f'(x)u')v+f(x)uv''+(2f(x)u'-4f'(x)u)v'=0.$$
Then, if we want to obtain the equation
$$v''+h(x)v=0$$
we can assume that
$$2f(x)u'-4f'(x)u=0.$$
Solving this simple first-order equation for $u$, we get
$u=e^{2\ln f}=f^2$,
and we get the desired equation
$$v''+h(x)v=0$$,
where $h(x)=\frac{f(x)u''-4f'(x)u'}{f(x)u}$ |
When is a space Lindelöf? | Closed subspace of Lindelöf space is Lindelöf. Two product spaces one of them compact and others Lindelöf is Lindelöf. If the space has countable basis is Lindelöf. |
Binomial expansion for $(x+a)^n$ for non-integer n | The Binomial theorem for any index $n\in\mathbb{R}$ with $|x|<1,$ is
$(1+x)^n=1+nx+\frac{n(n-1)}{2!}x^2+\frac{n(n-1)(n-2)}{3!}x^3+\ldots$
For $(x+a)^\pi$ one could take $x$ or $a$ common according as $|a|<|x|$ or $|x|<|a|$ and use the binomial theorem for any index, i.e., $x^\pi(1+a/x)^\pi$ in case $|a|<|x|.$
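A rough numerical sketch of the series for a non-integer exponent (the choice $n=\pi$, $t=1/2$ is just an illustration):

```python
import math

def binom_series(n, t, terms=80):
    """Partial sum of the generalized binomial series for (1+t)^n, valid for |t| < 1."""
    total, coeff = 0.0, 1.0
    for k in range(terms):
        total += coeff * t**k
        coeff *= (n - k) / (k + 1)   # next generalized binomial coefficient
    return total

n, t = math.pi, 0.5
approx = binom_series(n, t)
assert abs(approx - (1 + t)**n) < 1e-10
```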
Mean of Dirac Measures invariance Characterization | Let $\delta^j:=\delta_{f^j(p)}$, for $j=0,1,\dots,k$. If $\delta^j(f^{-1}(E))=1$, then
$$
f^j(p)\in f^{-1}(E)\implies f^{j+1}(p)\in E \implies \delta^{j+1}(E)=1.
$$
On the other hand, if $\delta^j(f^{-1}(E))=0$, then
$$
f^j(p)\notin f^{-1}(E)\implies f^{j+1}(p)\notin E \implies \delta^{j+1}(E)=0.
$$
Thus, $\delta^j(f^{-1}(E))=\delta^{j+1}(E)$. This implies that
$$
\delta_{p,j}(f^{-1}(E))=\frac1 k(\delta^1(E)+\dots+\delta^k(E)).
$$
Now, if $\delta_{p,j}$ is invariant, then $\delta_p(E)=\delta_{f^k(p)}(E)$ for all $E\in\mathcal{F}$ and we conclude that $f^k(p)=p$ by choosing a suitable $E$. Here we make the hypothesis that $\mathcal{F}$ separates points, but I guess this is implicit in the problem, because it is false for the trivial $\sigma$-algebra, for example. The converse is immediate.
Many thanks to @hamidkamali for the idea. |
If $a\neq 0 $ and $ab = ac \implies b=c$ | Note that from the line $$a(y+b)=a(y+c)$$ that makes you conclude that $y+b=y+c$.
If you could have performed that trick from the beginning, you would have solved the problem directly.
Proposal:
From $ab=ac$, why not perform $y(ab)=y(ac)$. |
Boundedness and Riemann-integrability implies Continuity at some point (Analysis) | Define the oscillation, $\omega_f(U)$ for an open set $U$ as $\omega_f(U)=\sup_{x\in U} f(x)-\inf_{x\in U}f(x)$. Then if $x\in \newcommand{\RR}{\mathbb{R}}\RR$, define $$\omega_f(x) = \lim_{\epsilon\to 0} \omega_f(B_\epsilon(x)).$$
It's easy to see that $f$ is continuous at $x$ if and only if $\omega_f(x)=0$. Now consider $A_n:=\omega_f\newcommand{\inv}{^{-1}}\inv([0,1/n))$ for $n > 0$. I claim that $A_n$ is open and dense for all $n$, then we'll have that the points of continuity, $\bigcap_{n\in \Bbb{N}_+} A_n$ will be dense by the Baire category theorem.
Thus we just need to show that $A_n$ is open and dense.
Suppose $x\in A_n$. Then $x$ has a neighborhood $V$ such that $\omega_f(V)< 1/n$. Hence for all $y\in V$, $\omega_f(y)\le \omega_f(V)<1/n$. Thus $V$ is an open neighborhood of $x$ contained in $A_n$, so $A_n$ is open. As for density, here we need to use integrability of $f$. Suppose $(r,s)$ is a nonempty interval contained in the complement of $A_n$. Then $\omega_f(x) \ge 1/n$ for all $x\in (r,s)$, hence $\omega_f((r,s))\ge 1/n$. Now take any partition $P$ containing $r$ and $s$. Then $U(P,f,\alpha)- L(P,f,\alpha) \ge \frac1n(\alpha(s)-\alpha(r))$ for any such $P$, so the Riemann integral doesn't converge. (Assuming $\alpha$ is strictly increasing; if $\alpha$ weren't strict, bad stuff could happen where it remained constant.) Therefore $A_n$ is open and dense, so by the Baire category theorem the points of continuity are dense.
Propositional Identities which are both equal to XOR | $(a \lor b) \land \neg(a \land b) =$ (DeMorgan)
$(a \lor b) \land (\neg a \lor \neg b) =$ (Distribution) (like FOIL)
$(a \land \neg a) \lor (a \land \neg b) \lor (b \land \neg a) \lor (b \land \neg b)=$ (Complement)
$\bot \lor (a \land \neg b) \lor (b \land \neg a) \lor \bot=$ (Identity) ($\bot$ is identity element for $\lor$, just as 0 is for +)
$(a \land \neg b) \lor (b \land \neg a)$ |
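The chain of identities can be checked mechanically with a truth table; a minimal sketch:

```python
from itertools import product

for a, b in product([False, True], repeat=2):
    lhs = (a or b) and not (a and b)        # starting expression
    rhs = (a and not b) or (b and not a)    # final expression
    assert lhs == rhs == (a != b)           # both equal XOR
```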
The nature of quotient spaces | For the $S^2 \cong D^2/\partial D^2$, you can start by noting that $D^1/\partial D^1 \cong S^1$, which is easier to see: you take a strand and glue the edges together. The higher dimensional analogue works the same way, but you take the entire boundary and bring it to a point, kind of like a knapsack.
As for your question about these "laborious maps," I think there is a fundamental misunderstanding about the quotient space. One should first think of it as an equivalence relation on a set, $(X, \sim)$, and a canonical map $x \mapsto [x]$, that sends an element to its equivalent class. This map can always be defined set theoretically. The Quotient space is a space $X/{\sim}$ along with a map $\phi:X \to X/{\sim}$ that is continuous. To make it continuous, we endow $X/{\sim}$ with the coarsest topology that accomplishes this goal.
Hence, one should think of it as a topological space and a map that makes all of this happen.
The property of "coarsest" can be summarized by saying that whenever you have a map $f:X \to Y$ so that $x \sim y \implies f(x)=f(y)$, there exists a map $$\tilde{f}: X/{\sim} \longrightarrow Y$$ so that $\tilde{f} \circ \phi=f$.
The construction of a "laborious map" usually involves finding some $f:X \to Y$ so that $\tilde{f}$ is a homeomorphism, which would show that the quotient is homeomorphic to $Y$, or that some topological space can be identified with a quotient of $X$.
Here is an example:
$D^1/\partial D^1 \cong S^1$
Consider the surjective map $f: [0,2\pi] \to S^1$ given by $x \mapsto e^{ix}$. Clearly, the identification $x \sim y \iff x,y \in \{0,2\pi\}$ implies that $f(x)=f(y)$, so $f$ factors through the quotient space, inducing $\tilde{f}$. However, this is the only place where $f$ was noninjective, so the induced map is injective. Thus $\tilde{f}$ is a continuous bijection whose domain is compact and codomain Hausdorff, and so it is a homeomorphism.
To find tangents to given circle from a point outside it | Sorry for the lack of a picture. You can show that the angle between the line from the origin to $(x_1,y_1)$ and a radial line at a point of tangency is
$$\tan{\theta} = \frac{\sqrt{x_1^2+y_1^2-a^2}}{a}$$
where $a$ is the radius of the circle centered at the origin. Let $\theta_1$ be the angle between the line from the origin to $(x_1,y_1)$ and the positive $x$ axis. Then the tangent points on the circle are given by
$$(a \cos{(\theta_1 \pm \theta)},a \sin{(\theta_1 \pm \theta)})$$
The combined equation of the tangent lines is then
$$y-y_1 = m_{\pm} (x-x_1)$$
where
$$m_{\pm} = \frac{y_1-a \sin{(\theta_1 \pm \theta)}}{x_1-a \cos{(\theta_1 \pm \theta)}}$$
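A small numerical sketch (the radius $a=1$ and external point $(3,4)$ are hypothetical choices) checking that the lines built this way really are tangent, i.e. lie at distance $a$ from the center:

```python
import math

a = 1.0                       # circle radius, centered at the origin
x1, y1 = 3.0, 4.0             # external point
d = math.hypot(x1, y1)
theta = math.atan2(math.sqrt(d * d - a * a), a)   # the angle from the formula above
theta1 = math.atan2(y1, x1)

for sign in (+1, -1):
    px = a * math.cos(theta1 + sign * theta)      # tangent point
    py = a * math.sin(theta1 + sign * theta)
    m = (y1 - py) / (x1 - px)                     # slope of the tangent line
    # distance from the origin to the line y - y1 = m (x - x1) must equal a
    dist = abs(m * x1 - y1) / math.hypot(m, 1.0)
    assert abs(dist - a) < 1e-9
```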
Enciphered Message with linear enciphering function. | The inverse of $11$ modulo $26$ is the integer $n\in\{0,\ldots,25\}$ such that $11n\equiv 1\pmod{26}$, or, if you prefer, $11n\bmod{26}=1$. There are theoretically more elegant ways to find $n$, but these numbers are small enough that brute force works fine. We want a multiple of $11$ that’s $1$ more than a multiple of $26$. The numbers that are $1$ more than the first few multiples of $26$ are $27,53,79,105,131,157,183,209$, and $209=11\cdot19$. Thus, the inverse of $11$ modulo $26$ is $19$.
Alternatively, you might note that $$11\cdot7=77=3\cdot26-1\;,$$ so $11(-7)=-77=-3\cdot26+1$, i.e., $11(-7)\bmod{26}=1$. Finally, $-7\bmod{26}=19$.
Now suppose that we have a ciphertext number $y$. It comes from a plaintext number $x$ by the transformation $f_E$: $f_E(x)=y$. In this case that means that $y=(11x+4)\bmod{26}$. In other words, $y\equiv 11x+4\pmod{26}$, and $x,y\in\{0,\ldots,25\}$. Given $y$, you want to solve for $x$. Clearly $y-4\equiv 11x\pmod{26}$. Now multiply both sides by the inverse of $11$:
$$19(y-4)\equiv 19(11x)\equiv x\!\!\!\pmod{26}\;.$$
Thus, $$x=(19y-76)\bmod{26}=(19y+2)\bmod{26}\;,$$ since $-76\equiv 2\pmod{26}$.
The ciphertext AEF corresponds to the sequence $0,4,5$. Applying the transformation
$$f_D:\Bbb Z_{26}\to\Bbb Z_{26}:x\mapsto 19x+2\;,$$
we get the sequence $2,(19\cdot4+2)\bmod{26}=78\bmod{26}=0,(19\cdot5+2)\bmod{26}=19$, which corresponds to the plaintext CAT. |
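A short sketch of the decryption (the letter encoding $A=0,\dots,Z=25$ is as in the problem; three-argument `pow` with a negative exponent needs Python 3.8+):

```python
inv = pow(11, -1, 26)           # modular inverse of 11 mod 26
assert inv == 19

f_D = lambda y: (19 * y + 2) % 26
cipher = [0, 4, 5]              # A, E, F
plain = [f_D(y) for y in cipher]
assert plain == [2, 0, 19]      # C, A, T
word = "".join(chr(ord("A") + x) for x in plain)
assert word == "CAT"
```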
Prove that $ \left(a_{n}\right)_{n=1}^{\infty} $ converges when $|a_{n+1}-a_{n}|<q|a_{n}-a_{n-1}|$ for $ 0<q<1 $ | Hint Try to understand and complete the proof that $(a_n)$ is a Cauchy sequence.
For $n\ge m$ we have
$$|a_n-a_m|=\left|\sum_{k=m}^{n-1} a_{k+1}-a_k\right|\le \sum_{k=m}^{n-1}| a_{k+1}-a_k|\le|a_1-a_0|\sum_{k=m}^{n-1}q^k\le|a_1-a_0|\frac{q^m}{1-q}\to0$$ |
Vectorial Analysis , convert from Cartesian (x,y,z) to Cylindrical (ρ,θ,z) | We have that
$x=5t^2\sqrt{1-t^2}$
$y=5t^3$
$z=3t^2$
then
$\rho=\sqrt{x^2+y^2}=\sqrt{25t^4(1-t^2)+25t^6}=5t^2$
since $x$ is always positive
$\theta = \arctan \frac{y}{x}=\arctan \frac{t}{\sqrt{1-t^2}}$ |
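A quick numerical check of the conversion; it uses the observation that $\arctan\frac{t}{\sqrt{1-t^2}}=\arcsin t$ for $0<t<1$ (the sample values of $t$ are arbitrary):

```python
import math

for t in (0.1, 0.3, 0.7):
    x = 5 * t**2 * math.sqrt(1 - t**2)
    y = 5 * t**3
    rho = math.hypot(x, y)
    theta = math.atan2(y, x)
    assert abs(rho - 5 * t**2) < 1e-12
    assert abs(theta - math.asin(t)) < 1e-12
```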
Convergence of continuous function | For part (ii):
Note that $s_n = H_{2n} - H_n + \frac1{n},$ where
$$H_n = \sum_{k=1}^n \frac1{k},$$
and
$$\frac1{k+1} < \int_k^{k+1} \frac{dt}{t} =\log(k+1) - \log k < \frac1{k}.$$
Summing from $k = 1$ to $k = n-1$ we get
$$H_n -1 < \log n < H_n - \frac1{n}.$$
Hence,
$$\log n + \frac1{n} < H_n < \log n +1, \\ \log 2 + \log n + \frac1{2n} < H_{2n} < \log 2 + \log n +1, \\ \log 2 - 1 + \frac1{2n} < H_{2n} - H_n < \log 2 + 1 - \frac1{n}.$$
Thus, $s_n$ is bounded as
$$\log 2 -1 < \log 2 - 1 + \frac1{2n} + \frac1{n} < s_n = H_{2n} - H_n + \frac1{n} < \log 2 + 1.$$
Now show $s_n$ is decreasing and, therefore, convergent by the monotone convergence theorem.
Note that
$$s_{n+1} - s_n = \frac1{2n+2} + \frac1{2n+1} - \frac1{n} = \frac{-3n - 2}{n(2n+1)(2n+2)} < 0$$ |
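A small numerical illustration that $s_n$ stays within the bounds, decreases, and approaches its limit $\log 2$:

```python
import math

def s(n):
    """s_n = H_{2n} - H_n + 1/n, i.e. the sum of 1/k for k from n to 2n."""
    return sum(1.0 / k for k in range(n, 2 * n + 1))

vals = [s(n) for n in range(1, 2001)]
assert all(a > b for a, b in zip(vals, vals[1:]))   # strictly decreasing
assert math.log(2) < vals[-1] < math.log(2) + 1     # within the bounds above
assert abs(vals[-1] - math.log(2)) < 1e-3           # approaching log 2
```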
Expressing the Hilbert polynomial of a complete intersection as the difference of two Hilbert polynomials | Given the definition $P_{d_1,\ldots, d_r}(v) = \sum_{I \subseteq \{1,\ldots, r\}} (-1)^{|I|} \binom{n+v-d_I}{n}$,
\begin{align} P_{d_1,\ldots, d_r}(v) - P_{d_1,\ldots, d_{r-1}}(v) &= \sum_{I \ni r} (-1)^{|I|}\binom{n+v-d_I}{n} \\&= \sum_{J \subseteq \{1,\ldots, r-1\}} (-1)^{|J|+1} \binom{n + v - d_J - d_r}{n} = - P_{d_1,\ldots, d_{r-1}}(v-d_r).\end{align}
Presumably, this step is used in an inductive proof to calculate the Hilbert polynomial of $\Gamma$. A more conceptual way of understanding this formula is via the Koszul complex: if $S = k[x_0,\ldots, x_n]$ is the coordinate ring of projective space, then for $(F_1,\ldots, F_r)$ a complete intersection where $F_i$ has degree $d_i$, let $E = \oplus_{i=1}^r S(-d_i)$. Then we have an exact sequence
$$ 0 \leftarrow S/(F_1,\ldots, F_r) \leftarrow S \leftarrow E \leftarrow \wedge^2 E \leftarrow \wedge^3 E \leftarrow \cdots\leftarrow \wedge^r E \leftarrow 0$$
where the wedge power is over $S$.
Concretely, as an $S$-module $\wedge^k E = \bigoplus_{|I| = k} S(-d_I)$, which gives your formula.
The second order Taylor polynomial of $f$ about $a$ is:
$$f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2$$
We are evaluating this at $a=0$ so the polynomial becomes the Maclaurin series:
$$f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2$$
Differentiate both sides of the equation you supplied to find $f'$:
\begin{alignat}{3}
&& \frac{d}{dx} (f(x)+\ln(1+f(x)) & = \frac{d}{dx} (x) \\
\implies && \frac{df}{dx} + \frac{\frac{df}{dx}}{1+f(x)} & = 1 \\
\implies && \frac{df}{dx}\left(1 + \frac{1}{1+f(x)}\right) & = 1 \\
\implies && \frac{df}{dx} &= \frac{1+f(x)}{2+f(x)}
\end{alignat}
Evaluate $f'$ at $x=0$:
$$\frac{df}{dx}(0) = \frac{1+f(0)}{2+f(0)} = \frac{1}{2}.$$
Use the quotient rule to find $f''$:
$$ \frac{d}{dx}(\frac{df}{dx}) = \frac{d}{dx}\left(\frac{1+f(x)}{2+f(x)}\right)$$
$$ \implies \frac{d^2f}{dx^2} = \frac{ (2+f(x))\frac{df}{dx} - (1+f(x))\frac{df}{dx}}{(2+f(x))^2} = \frac{\frac{df}{dx}}{(2+f(x))^2} = \frac{1+f(x)}{(2+f(x))^3}.$$
Evaluate $f''$ at $x=0$:
$$ \frac{d^2f}{dx^2}(0) = \frac{1+f(0)}{(2+f(0))^3} = \frac{1}{8}.$$
Therefore, the Taylor polynomial can be expressed as follows:
$$f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 = \frac{1}{2}x + \frac{1}{16}x^2.$$ |
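A numerical sketch: solving $f+\ln(1+f)=x$ by Newton's method (an implementation choice, not part of the answer) and checking that the quadratic polynomial matches to within $O(x^3)$:

```python
import math

def f(x, iters=60):
    """Solve t + log(1+t) = x for t by Newton's method, starting from t = 0."""
    t = 0.0
    for _ in range(iters):
        g = t + math.log(1 + t) - x
        dg = 1 + 1 / (1 + t)
        t -= g / dg
    return t

for x in (0.1, 0.05, 0.02):
    approx = 0.5 * x + x**2 / 16        # the Taylor polynomial above
    assert abs(f(x) - approx) < x**3    # error is O(x^3)
```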
Definition of Lie Groups | Here's a set-theoretic family of examples of smooth manifolds that are groups with smooth inversion but not multiplication. Let $(G,e,*)$ be a group of exponent $2$, i.e. $g *g=e$ for every $g\in G$ with the cardinality $\mathfrak{c}$ of $\mathbb{R}$. For instance, $G$ could be a direct product of $\mathfrak{c}$ $\mathbb{Z}_2$s. Let $\phi:\mathbb{R}\to G$ be any bijection and define a group structure on $\mathbb{R}$ by $x\star y=\phi^{-1}(\phi(x)*\phi(y))$. This kind of construction always yields a group. Now since we picked our $G$ to have an inversion map preserved under bijection, inversion is guaranteed to be smooth: for $x\in\mathbb{R}, x^{-1}_\star=\phi^{-1}(\phi(x)^{-1}_*)=\phi^{-1}\phi(x)=x$, i.e. the inversion map $\mathbb{R}\to\mathbb{R}$ is just the identity.
But for the vast majority of choices of $\phi$ the multiplication will not be smooth. For since $\mathbb{R}$ has $\mathfrak{c}^\mathfrak{c}$ self-bijections, we've exhibited $\mathfrak{c}^\mathfrak{c}$ distinct group structures on $\mathbb{R}$ all with smooth inversion. On the other hand, a continuous (in particular, smooth) group structure on $\mathbb{R}$ is specified by the maps $x,y\mapsto x\star y$ for $x,y\in \mathbb{Q}$, i.e. by an element of $\mathbb{R}^{\mathbb{Q}\times\mathbb{Q}},$ which has cardinality only $\mathfrak{c}$!
Edit Incidentally, smoothness of multiplication actually implies smoothness of inversion if inversion is continuous. I don't know if there are groups with smooth multiplication and discontinuous inversion; my guess is there are. Re-edit But as Jack Lee's comment below shows, in fact smooth multiplication does imply smooth inversion.
sub-family of cover with compact closure is still a cover? | All we have to show is that every point of $M$ is contained in some $Y_i\in \mathcal Y$. So let $x\in M$ be arbitrary. Because $M$ is locally Euclidean, there is some open subset $U\subseteq M$ containing $x$ and a homeomorphism $\phi\colon U\to \widehat U$, where $\widehat U$ is an open subset of $\mathbb R^n$. Let $\widehat x = \phi(x)$, and choose $r$ small enough that $B_{2r}(\widehat x)\subseteq \widehat U$. Then $\overline B_r(\widehat x)$ is compact and contained in $\widehat U$. Define
\begin{align*}
K & = \phi^{-1}\big(\overline B_r(\widehat x)\big),\\
W & = \phi^{-1}\big(B_r(\widehat x)\big).
\end{align*}
Since $\phi$ is a homeomorphism, $W$ is an open neighborhood of $x$ contained in the compact set $K$.
Now there is a set $Y$ in our original basis such that $x\in Y \subseteq W$. Because $\overline Y$ is a closed subset of the compact set $K$, it is compact. Thus $Y$ is one of the sets in $\mathcal Y$. |
Existence of a convergent sub-series | The intuition here is that, even though the sequence $(a_k)$ might be decreasing very slowly, since it goes to zero, you can always find a term that is as small as you want by picking the index $k$ to be sufficiently large. For instance, you can find some indices $s_1,s_2,\dots$ so that $a_{s_1} < 3^{-1}$, $a_{s_2}< 3^{-2}$, and in general $a_{s_k}<3^{-k}$. If you chose the subsequence $(a_{s_k})$, then there would be an inequality $3^ka_{s_k} < 3^k\cdot 3^{-k}=1$. This is not enough to guarantee that the series $\sum_{k=1}^\infty 3^k a_{s_k}$ converges, since all we know is that the terms are less than $1$; but we can choose larger indices to make the terms of the subsequence even smaller.
Do you see what to do now? |
Powers of a group element only generate other group elements | By definition, ${\circ}\colon G\times G\to G$ is a map with codomain $G$, i.e., it maps everything to some element of $G$.
Since $G$ is finite, $S=\{x^n : n\in\mathbb Z\}$ is also finite, since it's a subset of $G$. Thus some $x^n$ are repeated; otherwise we could set up a bijection between $\mathbb Z$ and $S$, which would prove that $G$ is infinite.
Your proof is basically correct — you're showing that repeatedly applying $\circ$ to things in $G$ keeps you in $G$ — but this is quite obvious. |
Combining two standard normal distributions | Let $X$ denote the time driving to work and $Y$ the time going back home.
Then $Z:=X+Y$ is the total, and has normal distribution with expectation: $$\mathbb EZ=\mathbb E(X+Y)=\mathbb EX+\mathbb EY=27+31.5$$
If moreover $X$ and $Y$ are independent then:$$\mathsf{Var}(Z)=\mathsf{Var}(X)+\mathsf{Var}(Y)=2.5^2+2.5^2$$
The distribution of $Z$ is determined by expectation and variance, so this together enables you to find $P(Z>61.5)$. |
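A sketch of the final computation using the error function (the independence assumption is the one noted above):

```python
import math

mu = 27 + 31.5                 # mean of Z
var = 2.5**2 + 2.5**2          # variance, assuming X and Y independent
sd = math.sqrt(var)

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF
p = 1 - Phi((61.5 - mu) / sd)
assert abs(p - 0.1981) < 1e-3
```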
Let $\sum_{k=0}^{\infty}a_kx^k$ a power series with radius of convergence $R>0$ | Hints: 1. I assume you meant $[0,R)$ in the first problem. Suppose $\sum a_nx^n$ converges uniformly on $[0,R).$ Then the partial sums of $\sum a_nx^n$ are uniformly Cauchy on $[0,R).$ Show that this implies the partial sums of $\sum a_nR^n$ are Cauchy, hence $\sum a_nR^n$ converges, contradiction.
Suppose $\sum a_nR^n$ converges. Using summation by parts, show that $\sum a_nx^n$ is uniformly Cauchy on $[0,R].$ See Abel's theorem for inspiration. |
maximize the values of given variables. | I'm not sure I have understood correctly, but can't you just look at all values of the A-column in your data, find the one that is largest, and then read off the x,y values next to it? |
what is the relation between $f(x+1)$ and $f(x)$? | One can view $f$ as a machine that takes an input (usually called $x$) and outputs something based on that input.
It is usually written in the general case as something like $f(x)=3x-5$ which says whatever our input $x$ happens to be, we output something that is (in this case) $3$ times that input and then subtract five.
In the above example it would be something like $f(\color{fuchsia}{1})=3\cdot \color{fuchsia}{1}-5$, or $f(\color{fuchsia}{2})=3\cdot \color{fuchsia}{2}-5$, or in general $f(\color{fuchsia}{x})=3\cdot \color{fuchsia}{x}-5$
In the case that the input happens to itself be written in a strange way, or even as a different function, that is okay. It is still the input, and we manipulate it in exactly the same way.
$f(\color{fuchsia}{\frac{1}{x}})=3\cdot \color{fuchsia}{\frac{1}{x}}-5$ and $f(\color{fuchsia}{x+1})=3\cdot (\color{fuchsia}{x+1})-5$ for examples.
We may choose to reorganize things afterwards at our convenience.
For example: $f(\color{fuchsia}{2x+1})=3\cdot (\color{fuchsia}{2x+1})-5=6x+3-5=6x-2$ |
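The same substitutions in code form, with the example $f(x)=3x-5$:

```python
f = lambda x: 3 * x - 5

x = 4
assert f(x + 1) == 3 * (x + 1) - 5 == 10     # substitute the whole input
assert f(2 * x + 1) == 6 * x - 2 == 22       # then simplify if convenient
```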
Find the derivative of the function $(1 + 5x)e^{-5x}$ | Hint:
Use the Product Rule: $$y=f(x)g(x), \space y' = f(x)g'(x) + g(x)f'(x)$$
Where $$f(x) = 1+5x, \space g(x) = e^{-5x}$$
$$\implies y' = (1+5x) \cdot (e^{-5x})' + e^{-5x} \cdot (1+5x)'$$
For the differential of $g(x) = e^{-5x}$, use Chain Rule:
$$((f\circ g)(x))' = f'(g(x))\cdot g'(x)$$
When differentiating the exponential function $e^x$, you have to differentiate the function in the power as well. The derivative of $e^x$ is actually $e^x \cdot(x)'$, but since the derivative of $x$ is $1$, $\frac{d}{dx} e^x = e^x$. For $e^{-5x}$, however, the function in the power, $-5x$, has to be differentiated as well. Therefore, using the Chain Rule, $f'(g(x)) = e^{-5x}$ and $g'(x) = (-5x)' = -5$.
$$\therefore (e^{-5x})' = e^{-5x} \cdot (-5x)' = e^{-5x} \cdot (-5) = -5e^{-5x}$$
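A quick finite-difference sketch confirming the product-rule expression (the sample points are arbitrary):

```python
import math

y = lambda x: (1 + 5 * x) * math.exp(-5 * x)
# product rule: f(x) g'(x) + g(x) f'(x), with f = 1 + 5x and g = e^{-5x}
dy = lambda x: (1 + 5 * x) * (-5 * math.exp(-5 * x)) + math.exp(-5 * x) * 5

h = 1e-6
for x in (-0.5, 0.0, 0.3, 1.0):
    central = (y(x + h) - y(x - h)) / (2 * h)   # numerical derivative
    assert abs(central - dy(x)) < 1e-6
```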
Limit of a sequence given by $x_n = \sqrt{3 + x_{n-1}}$ as n approaches infinity | Hint: prove that your sequence is increasing and bounded, hence converging to its supremum, then notice that there is only one real number $r$ such that $r=\sqrt{r+3}$. |
What is the rank of the differentiation operator on Pn? What is the kernel? | Consider $\dim P_n=n+1$ and clearly $\dim\ker(A)=1$ hence $\dim\operatorname{im}(A)=n$ |
Determine the number of different walks of length 4. | You could use the following interesting result . Let $A$ denote the adjacency matrix of your graph. Then, the $(i,j)$ entry of the matrix $A^n$ will denote the number of walks of length $n$ from vertex $i$ to vertex $j$. |
Is this proof of divergence of an alternating series correct? | Noting that
$$\begin{align}
b_r&=\sqrt{r+1}-\sqrt{r}\\\\
&=\frac{1}{\sqrt{r+1}+\sqrt{r}}
\end{align}$$
it is easy to see that $b_{r+1}\le b_r$ and $\lim_{r\to \infty}b_r=0$. That is, $b_r$ monotonically decreases to $0$.
Therefore, the Leibniz's Test for convergence of alternating series guarantees that the alternating series $S$ given by
$$S= \sum_{r=1}^\infty (-1)^{r-1}b_r$$
converges. |
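A numerical illustration of the alternating-series mechanism: the even and odd partial sums squeeze the limit between them:

```python
import math

b = lambda r: math.sqrt(r + 1) - math.sqrt(r)

partial = []
s = 0.0
for r in range(1, 101):
    s += (-1) ** (r - 1) * b(r)
    partial.append(s)

evens = partial[1::2]   # S_2, S_4, ...
odds = partial[0::2]    # S_1, S_3, ...
assert all(x < y for x, y in zip(evens, evens[1:]))   # even partial sums increase
assert all(x > y for x, y in zip(odds, odds[1:]))     # odd partial sums decrease
assert max(evens) < min(odds)                         # the limit is trapped between them
```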
A finite module over a Noetherian ring is torsionless if and only if it is a submodule of a finite free module | By definition, $M$ is torsion-less if the $R$-dual $M^\vee=\mathrm{Hom}_R(M,R)$ separates points, i.e., if the map $M\to\prod_{\varphi\in M^\vee}R$ given by $m\mapsto(\varphi(m))_{\varphi\in M^\vee}$ is injective. But $M^\vee$ is a finitely generated $R$-module. Indeed, we can choose $R^n\to M$ surjective, and then $\mathrm{Hom}_R(M,R)\to\mathrm{Hom}_R(R^n,R)=R^n$ is injective, and since $R$ is Noetherian, submodules of finite $R$-modules are finite, so $M^\vee$ is finitely generated. Let $\varphi_1,\ldots,\varphi_n$ be generators. Then I claim that the map $M\to\bigoplus_{i=1}^n R$ given by $m\mapsto(\varphi_i(m))$ is injective. Suppose that $m$ lies in the kernel, so $\varphi_i(m)=0$ for all $i$. Take any element $\varphi\in M^\vee$ and write $\varphi=\sum_i r_i\varphi_i$. Then $\varphi(m)=\sum_i r_i\varphi(m)=0$. So $M^\vee$ annihilates $m$, and thus $m=0$ by the assumption that $M$ is torsion-less. |
Peano's Theorem Examples | Here are some examples.
A bounded discontinuous function $F$ that has no solution to some initial value problem
Take $$\begin{array}{l|rcl}
F : & [-1,+1] \times [-1,+1] & \longrightarrow & \mathbb R \\
& (x,u) & \longmapsto & 1 \text{ if } x \in \mathbb Q \\
& (x,u) & \longmapsto & 0 \text{ otherwise} \end{array}$$
According to Darboux's theorem, the derivative $u^\prime$ of a solution $u$ would have the intermediate value property, hence would take all values in the segment $[0,1]$; but $u^\prime = F$ takes only the values $0$ and $1$, a contradiction.
An example of a function $F$ that's continuous on a square $S$ such that $F$ has a solution on a subset of the square but does not have a solution on the entire square?
$$\begin{array}{l|rcl}
F : & [-1,+1] \times [-1,+1] & \longrightarrow & \mathbb R \\
& (x,u) & \longmapsto & u^2 \end{array}$$
This has the solution $u(x)=\frac{2}{1-2x}$ to the IVP $u(0)=2$, and the solution is not defined for $x \ge \frac{1}{2}$.
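A finite-difference sketch (the sample points below $\frac12$ are arbitrary) that this $u$ does solve $u'=u^2$ with $u(0)=2$:

```python
u = lambda x: 2 / (1 - 2 * x)      # candidate solution on [0, 1/2)

h = 1e-6
for x in (0.0, 0.2, 0.4):
    du = (u(x + h) - u(x - h)) / (2 * h)   # numerical derivative
    assert abs(du - u(x) ** 2) < 1e-3      # satisfies u' = u^2
assert u(0) == 2                           # initial condition
```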
A function $F$ that is continuous everywhere that doesn't have a global solution to some initial value problem.
Have a look here. You need to look at infinite dimensional spaces to find a counterexample. |
probability of subset contained in dice rolls of custom dice with duplicate symbols | The problem can be solved by using a multivariate generating function, using Mathematica.
As a first step, let's encode the unusual symbols on the dice so they will easier to work with on a computer.
OP symbol + - x ^ / √
Encoded as 10 11 12 13 14 15
With this change, the four colors of dice become
red 0 1 2 3 10 11
blue 0 1 2 3 12 13
green 4 5 6 11 12 14
black 7 8 9 10 13 15
If we use the variable $x_i$ to track the number of occurrences of face $i$, the generating function for a red die is $x_0+x_1+x_2+x_3+x_{10}+x_{11}$. But notice that we don't really care about faces 0, 2, or 11 since they aren't in $D$, so we might as well lump their associated variables into a single "don't care" variable, $y$. With that change, the GF for a red die is
$$f_{red} = x_1 + x_3 + x_{10} + 3 y$$
Similarly, the GFs for the other colors of dice are
$$\begin{align}
f_{blue} &= x_1+x_3+x_{12}+3y \\
f_{green} &= x_4+x_{12}+4y \\
f_{black} &= x_{10}+5y
\end{align}$$
The GF for the roll of six dice of each color is
$$g=(f_{red} \cdot f_{blue} \cdot f_{green} \cdot f_{black})^6$$
What this means, for example, is that the number of ways to roll one $1$, two $3$'s, four $12$s and 17 "don't cares" in six rolls of the four dice is the coefficient of $x_1 x_3^2 x_{12}^4 y^{17}$ when $g$ is expanded. (Note that the exponents must sum to $24$.)
There are $6^{24}$ possible outcomes, all of which we assume are equally likely. Rather than counting the cases with at least two $1$s and at least one each of $3$, $4$, $10$ and $12$, it seems that it might be more efficient to count the cases which do not qualify and subtract from the total. I.e., we want to sum the coefficients of
$$x_1^{i_1} x_3^{i_3} x_4^{i_4} x_{10}^{i_{10}} x_{12}^{i_{12}} y^j$$
in $g$ where $i_1 < 2$ or at least one of $i_3, i_4, i_{10}$ or $i_{12}$ is less than $1$, where $j = 24-i_1-i_3-i_4-i_{10}-i_{12}$. When we do this in Mathematica, we find the sum is
$$n = 3467290195987632218 \approx 3.46729 \times 10^{18}$$
So the probability asked for in the OP, that $D$ is contained in a set of 6 rolls each of the four colors of dice, is
$$\frac{6^{24}-n}{6^{24}} \approx \boxed{0.268254}$$ |
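The same count can be reproduced exactly without generating functions by a dynamic program over capped counts of the needed faces. This sketch (an independent re-derivation in Python, not the original Mathematica code) reads the requirement off the complement described above, namely at least two $1$s and at least one each of $3,4,10,12$:

```python
from collections import defaultdict

# per die: how many of the 6 faces show each tracked symbol (None = don't care)
dice = {
    "red":   {1: 1, 3: 1, 10: 1, None: 3},
    "blue":  {1: 1, 3: 1, 12: 1, None: 3},
    "green": {4: 1, 12: 1, None: 4},
    "black": {10: 1, None: 5},
}
NEED = {1: 2, 3: 1, 4: 1, 10: 1, 12: 1}
order = sorted(NEED)

# exact DP over capped counts of the needed faces
states = {tuple(0 for _ in order): 1}
for color in dice:
    for _ in range(6):                      # six dice of each color
        nxt = defaultdict(int)
        for state, ways in states.items():
            for face, w in dice[color].items():
                s = list(state)
                if face in NEED:
                    i = order.index(face)
                    s[i] = min(s[i] + 1, NEED[face])
                nxt[tuple(s)] += ways * w
        states = nxt

total = 6 ** 24
assert sum(states.values()) == total
good = states[tuple(NEED[f] for f in order)]    # all requirements met
assert total - good == 3467290195987632218      # matches the GF computation
assert abs(good / total - 0.268254) < 1e-4
```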
A combinatorial identity: $ \sum_{k=m}^n \frac{\binom{1/2}{k-m}}{k \binom{-1/2}{k}}=\frac{\binom{-1/2}{n-m}}{m \binom{-1/2}{n}} $ | You have a summand that does not depend on $n$ and you have a conjecture for general $n$ (which is clearly true for $m=n$), so it just remains to check the induction step:
$$\frac{\binom{-1/2}{n-m}}{m \binom{-1/2}{n}}+\frac{\binom{1/2}{n-m+1}}{(n+1) \binom{-1/2}{n+1}}=\frac{\binom{-1/2}{n-m+1}}{m \binom{-1/2}{n+1}},$$
which reduces to a polynomial identity after you cancel all common factors. |
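A sketch verifying the identity exactly with rational arithmetic for small $m\le n$ (the generalized binomial coefficient is computed from its defining product):

```python
from fractions import Fraction
from math import factorial

def binom(a, k):
    """Generalized binomial coefficient with a possibly non-integer top argument."""
    num = Fraction(1)
    for i in range(k):
        num *= a - i
    return num / factorial(k)

half, mhalf = Fraction(1, 2), Fraction(-1, 2)
for m in range(1, 5):
    for n in range(m, 8):
        lhs = sum(binom(half, k - m) / (k * binom(mhalf, k)) for k in range(m, n + 1))
        rhs = binom(mhalf, n - m) / (m * binom(mhalf, n))
        assert lhs == rhs
```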
Intuition for Geometric Transformations | This might help,
Lets first look at what the matrix does to vectors. A useful way of looking at a matrix is to think of each column as being the result of applying the matrix to one of the basis vectors.
$$
\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]
\left[ \begin{array}{cc} 1 \\ 0 \end{array} \right]
=\left[ \begin{array}{cc} a \\ c \end{array} \right]
\qquad
\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]
\left[ \begin{array}{cc} 0 \\ 1 \end{array} \right]
=\left[ \begin{array}{cc} b \\ d \end{array} \right]
$$
When you have more than one nonzero component for a column vector the matrix is applied to each piece and the results are added. In other words multiplying a matrix by a column vector adds columns of the matrix weighted by the components of the vector.
$$
\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]
\left[ \begin{array}{cc} 3 \\ 4 \end{array} \right]
=\left[ \begin{array}{cc} 3a + 4b\\ 3c + 4d \end{array} \right]
$$
Now let's use this to see what your matrices do to the $(x,y)$ ordered pairs,
$$
\left[ \begin{array}{cc} 1 & 0 \\ 1 & 1 \end{array} \right]
\left[ \begin{array}{cc} x \\ y \end{array} \right]
=\left[ \begin{array}{cc} x \\ x+y \end{array} \right]
\qquad
\left[ \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right]
\left[ \begin{array}{cc} x \\ y \end{array} \right]
=\left[ \begin{array}{cc} x+y \\ y \end{array} \right].
$$
So you can see that when the first transformation is applied to a point it leaves the $x$ coordinate alone and then makes a new $y$ coordinate by adding the old $x$ and $y$ values. You can see this in your figure the shape is deformed vertically but not horizontally.
The second transformation does the same thing but with the roles of $x$ and $y$ reversed. Looking at it you can see that the $y$ coordinates do not change as a result of the transformation.
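The column and shear behavior described above can be sketched directly:

```python
M1 = [[1, 0], [1, 1]]          # first transformation
M2 = [[1, 1], [0, 1]]          # second transformation

apply = lambda M, v: (M[0][0] * v[0] + M[0][1] * v[1],
                      M[1][0] * v[0] + M[1][1] * v[1])

# the columns are the images of the basis vectors
assert apply(M1, (1, 0)) == (1, 1) and apply(M1, (0, 1)) == (0, 1)

x, y = 3, 4
assert apply(M1, (x, y)) == (x, x + y)    # vertical shear: x unchanged
assert apply(M2, (x, y)) == (x + y, y)    # horizontal shear: y unchanged
```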
I hope this helps your intuition a bit. I'm not sure of your background, but here are some general guidelines:
Your matrix will always treat $(0,0)$ as a special point. So when you are looking at what the transformation does you should keep in mind that it won't treat all squares equally.
When visualizing geometric transformations like this it is helpful to apply it to every point in the xy-grid. This gives you a new deformed grid which gives you an idea of what it does where. Think of this as stretching/compressing the xy plane.
If your transformation is diagonalizable you should find its eigenvalues and eigenvectors. These provide invaluable information about the transformation. |
Pointwise limit of the indicator function: $\lim_{c \to \infty}1_{\{X<kc\}}$ | Given $t$ in the domain of $X$, take $M$ so that $M>\frac{X(t)}{k}$. This is possible because $X$ is finite and $k$ is fixed. Then, for all $c\geq M$, we have
$1_{X<kc}(t) = 1$ because $X(t)<Mk\leq ck$. Therefore $\lim_{c\to\infty}1_{X<kc}(t)=1$ for each $t$, which means the pointwise limit $\lim_{c\to\infty}1_{\{X<kc\}}$ is $1$. |
Randomly generated permutations | A simple example is to consider arrays of the form $(m+1, m+2, \ldots n, 1, 2,\ldots, m)$. There are $n$ such arrays with $0 \le m \lt n$, and if they are equally likely then $A[i]=j$ when $m \equiv i-j \bmod n$ which happens with probability $\frac1n$
But there are $n!$ possible randomly generated permutations of $A$, and so in this example they are not equally likely: $n!-n$ of them have probability $0$ while $n$ have positive probability
Added as your comment suggests you think the $n$ cyclic permutations have the same probabilities as each other
An alternative algorithm:
Generate a random number $m$ uniformly from $\{0,1,\ldots,n-1\}$ and another $k$ from $\{0,1,2\}$
If $k=0$ or $k=2$ then choose the permutation $(m+1, m+2, \ldots n, 1, 2,\ldots, m)$
If $k=1$ then choose the reverse of that, i.e. permutation $(m,m-1,\ldots, 1, n, n-1, \ldots m+1)$
So there are now $2n$ possible permutations, $n$ having probability $\frac2{3n}$ of appearing and the other $n$ having probability $\frac1{3n}$ of appearing. For given $i$ and $j$ you still have the required $\mathbb P(A[i]=j)=\frac1n$.
Incidentally, in both this and the previous counterexample, and indeed any counterexample, you need $n \gt 2$ |
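An exact sketch of the second counterexample for $n=5$: enumerating the $2n$ permutations with their weights shows $\mathbb P(A[i]=j)=\frac1n$ for every pair:

```python
from fractions import Fraction

n = 5
prob = {}                                  # prob[(i, j)] = P(A[i] = j), positions 1-indexed
for m in range(n):                         # m uniform over {0, ..., n-1}
    cyc = list(range(m + 1, n + 1)) + list(range(1, m + 1))
    for k in range(3):                     # k uniform over {0, 1, 2}
        A = cyc if k != 1 else cyc[::-1]   # k = 1 picks the reversed permutation
        for i, j in enumerate(A, start=1):
            prob[(i, j)] = prob.get((i, j), Fraction(0)) + Fraction(1, 3 * n)

assert all(p == Fraction(1, n) for p in prob.values())
```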
Implicit Differentiation: $(x/y)+(y/x) =1$ | if you going to do implicit differentiation you might as well multiply by $xy$ to get $x^2 + y^2 = xy$ before differencing. now differencing gives you $2xdx + 2ydy = xdy + ydx.$ this can also be written as
$$\dfrac{dy}{dx} = \dfrac{y - 2x}{2y-x} = \dfrac{y(y-2x)}{y(2y-x)}=
\dfrac{y(y-2x)}{2y^2 - xy} = \dfrac{y(y-2x)}{2xy - 2x^2 - xy} = \dfrac{y(y-2x)}{x(y-2x)} = \dfrac{y}{x} \tag 1$$
In implicit differentiation you don't have a unique answer: you always have to carry the constraint $\dfrac{x}{y} + \dfrac{y}{x} = 1$, or an equivalent one like $x^2 + y^2 = xy$, along with the solution. In the equality (1) any one of the expressions could be an answer.
Question about disconnected metric spaces | The metric on the subspace is just the restriction of the metric from the original space. So $d(2,5)$ in the space you've written is $3$. There is not much you can say that is special about this situation from the metric point of view. One thing you can say is that if $A,B$ are as above and $d(a,b)$ is bounded below for $a \in A$ and $b \in B$, then $X$ is disconnected. But the converse is not true; $(0,1) \cup (1,2)$ is disconnected. |
Prove that $|C|=243$ given $C=\{(A,B):A,B\subseteq S,\; A\cup B=S\}$ and $S=\{1,2,3,4,5\}$. | For each element $x$ of $S$ there are $3$ possibilities: $x \in A$ or $x \in B$ or $x$ is in both ... so $\mid C \mid =3^{\mid S \mid}$. |
Prove that $\oint_{\partial S} \psi \; d\ell = \iint_S (\hat{\mathbf{n}} \times \nabla \psi) \; dS$ | Hint: Pick an arbitrary constant vector $e$, and take the inner product with it on both sides. Then use Stokes theorem to prove that
$$ \int_{\partial S}\psi\,(e,dl) = \int_S(e,dS\times \nabla \psi). $$ |
Prove $\int_{-\infty}^\infty\frac{dx}{(1+x^2/a)^n}$ converges | Hint. One may integrate by parts,
$$
\begin{align}
I_{n,a}&=\int_{-\infty}^\infty\frac{dx}{(1+x^2/a)^{n}}\qquad (n\ge1)
\\\\&=\int_{-\infty}^\infty 1 \cdot\frac{1}{(1+x^2/a)^{n}}\:dx
\\\\&=\left[ x\cdot \frac{1}{(1+x^2/a)^{n}}\right]_{-\infty}^\infty -\int_{-\infty}^\infty x\cdot \frac{-n\cdot\frac{2x}{a}}{(1+x^2/a)^{n+1}}\; dx
\\\\&=\color{red}{0}+2n\int_{-\infty}^\infty \frac{\frac{x^2}{a}}{(1+x^2/a)^{n+1}}\; dx
\\\\&=2n\int_{-\infty}^\infty \frac{1+\frac{x^2}{a}-1}{(1+x^2/a)^{n+1}}\; dx
\\\\&=2n\int_{-\infty}^\infty \frac{1}{(1+x^2/a)^{n}}\; dx-2n\int_{-\infty}^\infty \frac{1}{(1+x^2/a)^{n+1}}\; dx
\\\\&=2nI_{n,a}-2n I_{n+1,a}
\end{align}
$$ giving $(a)$.
One may apply the dominated convergence theorem to obtain $(b)$. |
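For part $(a)$, the relation rearranges to $I_{n+1,a}=\frac{2n-1}{2n}I_{n,a}$. A crude numerical check (midpoint rule over a large window; the window, step count, and tolerance here are ad-hoc choices):

```python
def I(n, a, L=100.0, steps=200_000):
    # midpoint rule on [-L, L]; the tails are negligible for n >= 2
    h = 2 * L / steps
    return h * sum(1.0 / (1 + (-L + (k + 0.5) * h) ** 2 / a) ** n
                   for k in range(steps))

a = 2.0
for n in (2, 3, 4):
    # I_{n,a} = 2n I_{n,a} - 2n I_{n+1,a}  =>  I_{n+1,a} = (2n-1)/(2n) I_{n,a}
    assert abs(I(n + 1, a) - (2 * n - 1) / (2 * n) * I(n, a)) < 1e-3
```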
Characteristic function of Levy process with stopping time | It is not difficult to show that
$$T_j(\omega) := k 2^{-j} \qquad \text{for} \, \, \omega \in \{(k-1) 2^{-j} \leq T < k 2^{-j}\}, k \geq 1,$$
defines a sequence of discrete-valued stopping times such that $T_j \downarrow T$. By the right-continuity of the sample paths of $(X_t)_{t \geq 0}$, this implies by the dominated convergence theorem
$$\mathbb{E}e^{i u X_{T+t}} = \lim_{j \to \infty} \mathbb{E}e^{iu X_{T_j+t}} = \lim_{j \to \infty}\sum_{k \geq 1} \mathbb{E} \left( e^{iu X_{k 2^{-j}+t}} 1_{\{T_j = k 2^{-j}\}} \right). $$
Writing
$$X_{k 2^{-j}+t} = \big( X_{k2^{-j}+t}-X_{k2^{-j}+s} \big) +X_{k 2^{-j}+s}$$
and conditioning on $\mathcal{F}_{k2^{-j}+s}$ we get by the independence and stationarity of the increments
$$\mathbb{E}e^{iu X_{T+t}} = \mathbb{E}e^{iu X_{t-s}} \lim_{j \to \infty} \underbrace{\sum_{k \geq 1} \mathbb{E}(e^{iu X_{k 2^{-j}+s}} 1_{\{T_j = k 2^{-j}\}})}_{\mathbb{E}e^{iu X_{T_j+s}}}.$$
Using again the dominated convergence theorem, we conclude
$$\mathbb{E}e^{iu X_{T+t}} = \mathbb{E}e^{iu X_{t-s}} \mathbb{E}e^{iu X_{T+s}}.$$ |
Independent random variables $X_1$, $X_2$, $X_3$, ... defined on $(\Omega, \mathcal{F}, \mathbb{P})$, each normal mean zero, variance $1$. | Because of the monotonicity and continuity of $\Phi$, we have
$$\{x; \Phi^{-1}(x) \leq c\} = \{y; y \leq \Phi(c)\}$$
for any constant $c$. This implies
$$\mathbb{P}(X_j \leq c) = \mathbb{P}(\Phi^{-1}(U_j) \leq c) = \mathbb{P}(U_j \leq \Phi(c)).$$
Now use that $U_j$ is uniformly distributed on $[0,1]$ to finish the proof. |
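To see the conclusion numerically (Python's `statistics.NormalDist` provides $\Phi$ and $\Phi^{-1}$; the sample size and threshold below are arbitrary):

```python
import random
from statistics import NormalDist

random.seed(0)
nd = NormalDist()   # standard normal: Phi = nd.cdf, Phi^{-1} = nd.inv_cdf
c, n = 0.7, 200_000
xs = [nd.inv_cdf(random.random()) for _ in range(n)]   # X_j = Phi^{-1}(U_j)
empirical = sum(x <= c for x in xs) / n
assert abs(empirical - nd.cdf(c)) < 0.01               # P(X_j <= c) = Phi(c)
```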
Is there a name for this type of word problems? | I would simply call it calculus of interest (with compound interests), but instead of calculation with money, you calculate with different items here.
From a business perspective, you could also view this problem as a cost-benefit calculation. |
Sketch of 1-norm and ∞-norm | It's called ball because for the Euclidean norm $||\cdot||_2$ it is a disk in $\mathbb{R}^2$ and a sphere in $\mathbb{R}^3$. The name is used by extension to other norms. |
Probability of Two random variables being equal. | Yes, indeed it is so. In series form:
$$\begin{align}\mathsf P(X=Y)&=\sum_k \mathsf P(X=k)\mathsf P(Y=k)\\&=\sum_{k=0}^4 \binom 4k^2 2^{-8} \\&= 2^{-8}\sum_{k=0}^4\binom 4k\binom 4{4-k} \\&= 2^{-8}\binom 84,\end{align}$$ the last step via Vandermonde's convolution $\sum_{k=0}^r \binom mk\binom n{r-k}=\binom{m+n}r$.
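A one-line check of the Vandermonde step with `math.comb`:

```python
from math import comb

# sum_k C(4,k)^2 = C(8,4) = 70, so P(X = Y) = 70/256
assert sum(comb(4, k) ** 2 for k in range(5)) == comb(8, 4) == 70
p = comb(8, 4) / 2 ** 8
assert abs(p - 70 / 256) < 1e-15
```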
What is the kernel of $\varphi : R[u,v]\to R[x,1/x]$ defined by $\varphi(p(u,v))=p(x,1/x)$? | Clearly $I:=(1-uv) \subset \ker(\phi)$.
Now, look at the quotient ring $R[u,v] / I$. We want to show that any polynomial $P \in \ker(\phi)$ is $0$ modulo $I$, that is : $P \equiv 0 \pmod I$.
The trick is to notice that any polynomial $P(u,v)$ can be written as $P_1(u) + P_2(v)$ when working modulo $I$. Here is an example, first, say with $P(u,v) = 2+u+uv^2+3v^3-4u^2v^2+7u^5v$. Recall that $uv \equiv 1$ in $R[u,v]/I$ ! So we get:
$$P(u,v) \equiv
2+u+v+3v^3-4+7u^4 \pmod I$$
Let $P_1(u) = 7u^4+u-2,\ P_2(v)=3v^3+v$; then we have $P(u,v) \equiv P_1(u) + P_2(v) \pmod I$.
I leave to you the fun task of writing this down in the general case
$$P(u,v) = \sum_{0 \leq i \leq n\\ 0 \leq j \leq m} a_{i,j} u^i v^j$$
But well, it works : let us write $P \in \ker(\phi)$ as $P_1(u) + P_2(v)$, modulo $I$. If $P_2$ has degree $m$, then
$X^m(P_1(X)+P_2(1/X))$ is the zero polynomial in $R[X]$, so you easily get that the coefficients of $P_1$ and of $P_2$ are zeros. Therefore $P \equiv P_1+P_2 = 0 \pmod I$, which precisely means $P \in I$ as desired.
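A numerical spot-check of the example reduction modulo $I$ (collecting terms gives $P_2(v)=3v^3+v$): on the locus $v=1/u$, $P$ must agree with $P_1(u)+P_2(1/u)$.

```python
def P(u, v):   # the example polynomial from the answer
    return 2 + u + u * v**2 + 3 * v**3 - 4 * u**2 * v**2 + 7 * u**5 * v

def P1(u):
    return 7 * u**4 + u - 2

def P2(v):
    return 3 * v**3 + v

# on v = 1/u the ideal (1 - uv) vanishes, so P and P1 + P2 must agree there
for x in (0.5, 1.25, -2.0, 3.0):
    assert abs(P(x, 1 / x) - (P1(x) + P2(1 / x))) < 1e-9
```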
From this, it is easy to show the following ring isomorphism:
$$R[u,v]/(uv-1) \cong R\left[x,\dfrac{1}{x}\right]$$
See also here : if $S$ is a commutative ring (not necessarily a domain) and $a \in S$, then $S_a \cong S[X]/(aX-1)$.
An isomorphism is given by $g : [P]_{(aX-1)} \mapsto P(1/a) \in S_a$.
The kernel of $f : S[X] \to S[X]/(aX-1) \cong S_a$ is $(aX-1)$.
In your case, $S=R[u],a=u$. The map $f$ is $$f : R[u][v] \to R[u][v]/(uv-1) \cong S_u = R[u,u^{-1}]$$
given by $f(P(v)) = P(1/u)$, i.e.
$$f : P(u,v) \in R[u,v] \;\longmapsto\; P(u,1/u)$$
We prove that
$$g : [P]_{(aX-1)} \mapsto P(1/a) \in S_a$$ provides an isomorphism $S[X]/(aX-1) \to S_a$.
well-defined : OK
$S$-algebras homomorphism : OK
surjective : take $\alpha X^n$...
injective : let $P(X) = \sum_{i=0}^m a_iX^i \in S[X]$ such that $P(1/a)=0$. Then $$a^mP(1/a)=a_m+a_{m-1}a+\dots+a_0a^m=0,$$
and
$$0 = a^mP(1/a) \cdot X^m = a_mX^m+a_{m-1}aX^m + \dots + a_0a^mX^m \equiv a_mX^m + a_{m-1}X^{m-1} + \dots + a_0 = P(X) \pmod{aX-1},$$
which precisely means that $P \in (aX-1)$.
$\tag*{$\blacksquare$}$ |
Adding and Multiplication of Pure Tensors | The multiplication is often defined that way for associative algebras by extending linearly, and it works, but the addition is not correct. In general not every element of $V\otimes W$ can be written as $v\otimes w$ for $v\in V$ and $w\in W$. Instead we have the rules
$$(v_1+v_2)\otimes w=v_1\otimes w+v_2\otimes w$$
and similarly for $v\otimes (w_1+w_2)$. It may very well be in a given case that $v_1\otimes w_1+v_2\otimes w_2$ can get no simpler than that, and of course more than two terms may be required in general. |
Discrepancy in counting the number of poles in complex function when refactoring | $$\cos z = 1-\frac {z^2} 2 + \frac {z^4}{4!} - \dots$$
$$1-\cos z = \frac{z^2}2 - \frac{z^4}{4!} + \dots$$
This has a zero of order $2$ at $z=0$, and thus $$\frac{1}{1-\cos z}$$ has a pole of order $2$ there. |
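Numerically, $z^2/(1-\cos z)\to 2$ as $z\to 0$, consistent with a pole of order exactly $2$:

```python
import cmath

# 1 - cos z = z^2/2 - z^4/24 + ..., so z^2 / (1 - cos z) -> 2 as z -> 0
for r in (1e-1, 1e-2, 1e-3):
    z = complex(r, r)
    val = z ** 2 / (1 - cmath.cos(z))
    # the next-order correction is O(z^2), roughly |z|^2 / 6
    assert abs(val - 2) < abs(z) ** 2 + 1e-9
```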
Generalised Triangle Inequality | Rewrite $x$ to $x=x+\sum y_i-\sum y_i$, then $|x|\leq |x+\sum y_i|+|\sum y_i|\leq |x+\sum y_i|+\sum| y_i|$ and you obtain your inequality. |
Odd Vector Product Question | Using the linearity of the cross product and the cross products of the canonical basis vectors, one can simply compute
$$
2{\bf i}×({\bf i}+{\bf j})=2{\bf i}×{\bf i}+2{\bf i}×{\bf j}=0+2{\bf k}
$$
For any two vectors $v$, $w$ of equal length, $v+w$ (or in general $\|w\|\,v+\|v\|\,w$, if not zero) is a vector in the direction of the angle bisector between $v$ and $w$.
Since the angle between the unit vectors ${\bf i}$ and ${\bf j}$ is $90°$, the bisected angle is $45°$ between ${\bf i}$ and ${\bf i}+{\bf j}$ resp. ${\bf i}+{\bf j}$ and ${\bf j}$. |
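The first computation, checked with a small cross-product helper:

```python
def cross(a, b):
    # cross product of two 3-vectors given as tuples
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)
two_i = (2, 0, 0)
i_plus_j = (1, 1, 0)
assert cross(two_i, i_plus_j) == (0, 0, 2)   # = 2k
assert cross(i, i) == (0, 0, 0)              # i x i = 0
```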
How can I find the maximum print size in centimeters based on width and height of a jpeg? | If you have 3000 pixels and print at 200 pixels per inch, you have
$$ \frac{3000 \,\text{pixels}}{200 \,\frac{\text{pixels}}{\text{inch}}} = 15 \,\text{inches} $$
of image.
Two ways to get centimeters:
From inches:
$$ 15 \,\text{inches} \cdot \frac{2.54 \,\text{cm}}{1 \,\text{inch}} = 38.1 \,\text{cm} \text{.} $$
Altering the units in the formula:
$$ \frac{3000 \,\text{pixels}}{200 \,\frac{\text{pixels}}{\text{inch}} \cdot \frac{1 \,\text{inch}}{2.54 \,\text{cm}}} = 38.1 \,\text{cm} $$ |
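The same arithmetic as a tiny helper function (the 3000-pixel and 200-ppi figures are from the question's setup):

```python
def print_size_cm(pixels: int, ppi: float) -> float:
    """Maximum print size in cm for a given pixel count and print density."""
    inches = pixels / ppi      # pixels / (pixels per inch)
    return inches * 2.54       # 1 inch = 2.54 cm

assert print_size_cm(3000, 200) == 15 * 2.54   # 38.1 cm
```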
Solving intersection of translated spiric curve | The implicit equation of the torus (major radius $R$, minor radius $r$) is
$$(x^2+y^2+z^2+R^2-r^2)^2=4R^2(x^2+y^2)$$
and the parametric equation of the shifted one is
$$\begin{cases}x=(R+r\cos\theta)\cos\phi+p\\y=(R+r\cos\theta)\sin\phi\\z=r\sin\theta+q.\end{cases}$$
The sections are given by
$$y=Y$$ and, eliminating $\theta$,
$$R+r\cos\theta=Y\csc\phi,\qquad x=Y\cot\phi+p,\qquad z=q\pm\sqrt{r^2-\left(Y\csc\phi-R\right)^2}.$$
Then plugging one into the other,
$$\left((Y\cot\phi+p)^2+Y^2+\left(q\pm\sqrt{r^2-\left(Y\csc\phi-R\right)^2}\right)^2+R^2-r^2\right)^2=4R^2((Y\cot\phi+p)^2+Y^2).$$
We can rationalize with $\csc\phi=\dfrac{1+t^2}{2t}$ and $\cot\phi=\dfrac{1-t^2}{2t}$, where $t=\tan(\phi/2)$, giving
$$\left(\left(Y\dfrac{1-t^2}{2t}+p\right)^2+Y^2+\left(q\pm\sqrt{r^2-\left(Y\dfrac{1+t^2}{2t}-R\right)^2}\right)^2+R^2-r^2\right)^2=4R^2\left(\left(Y\dfrac{1-t^2}{2t}+p\right)^2+Y^2\right),$$
or, multiplying by $16t^4$,
$$\left(\left(Y(1-t^2)+2pt\right)^2+4(Y^2+R^2-r^2)t^2+\left(2qt\pm\sqrt{4r^2t^2-\left(Y(1+t^2)-2Rt\right)^2}\right)^2\right)^2=16R^2\left(\left(Y(1-t^2)+2pt\right)^2+4Y^2t^2\right)t^2.$$
It is possible to turn this equation into a polynomial one after lengthy computation to eliminate the square root. |
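A numerical check of the rationalization step, using the standard torus equation with $4R^2$ on the right-hand side. The parameters below are sample assumptions, chosen so the square roots are real; the point is that the $t$-form must equal $16t^4$ times the $\phi$-form identically.

```python
import math

R, r, p, q, Y = 3.0, 1.0, 0.4, 0.2, 0.8   # sample parameters (assumptions)

def residual_phi(phi, sign):
    # plug the section point (x, Y, z) of the shifted torus into the
    # implicit equation of the unshifted torus
    x = Y / math.tan(phi) + p
    z = q + sign * math.sqrt(r**2 - (Y / math.sin(phi) - R)**2)
    return (x**2 + Y**2 + z**2 + R**2 - r**2)**2 - 4 * R**2 * (x**2 + Y**2)

def residual_t(t, sign):
    # the same expression after substituting csc(phi), cot(phi) in terms of
    # t = tan(phi/2) and multiplying through by 16 t^4
    A = Y * (1 - t**2) + 2 * p * t
    root = math.sqrt(4 * r**2 * t**2 - (Y * (1 + t**2) - 2 * R * t)**2)
    lhs = (A**2 + 4 * (Y**2 + R**2 - r**2) * t**2 + (2 * q * t + sign * root)**2)**2
    return lhs - 16 * R**2 * (A**2 + 4 * Y**2 * t**2) * t**2

for phi in (0.25, 0.30, 0.35):   # values keeping Y*csc(phi) within [R-r, R+r]
    t = math.tan(phi / 2)
    for sign in (+1, -1):
        assert abs(residual_t(t, sign) - 16 * t**4 * residual_phi(phi, sign)) < 1e-9
```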
Proving that a module is not projective by definition | It's right. Here's an alternative way.
The submodules of the regular module $\mathbb{Z}_{12}$ (that is, over itself) are just the additive subgroups and there is a unique subgroup for each divisor of $12$, so the list is $1\mathbb{Z}_{12}$ (order $12$), $2\mathbb{Z}_{12}$ (order $6$), $3\mathbb{Z}_{12}$ (order $4$), $4\mathbb{Z}_{12}$ (order $3$), $6\mathbb{Z}_{12}$ (order $2$), $12\mathbb{Z}_{12}$ (order $1$). (You can use hats, if you prefer.)
The only subgroup of order $2$ is $6\mathbb{Z}_{12}$ which has zero intersection only with the subgroup $4\mathbb{Z}_{12}$ (apart from the zero subgroup). If $6\mathbb{Z}_{12}$ were projective, being a homomorphic image of the regular module it would be a direct summand, but it isn't. |
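The failure to be a direct summand can be verified by brute force over the six submodules:

```python
n = 12
# submodules of Z_12 correspond to divisors of 12
submodules = {d: {(d * k) % n for k in range(n // d)} for d in (1, 2, 3, 4, 6, 12)}
A = submodules[6]   # the order-2 submodule {0, 6}

def is_summand(A):
    # A is a direct summand iff some submodule H has A ∩ H = {0} and A + H = Z_12
    return any(A & H == {0} and
               {(a + h) % n for a in A for h in H} == set(range(n))
               for H in submodules.values())

assert not is_summand(A)   # so 6*Z_12 is not projective
```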
How can we find the distribution function of an Uniform Random variable with Random variable bounds? | Hint:
$P(Y \le y \mid X=x)= \dfrac{y-x}{1-x}$ provided that $x \le y$
So $\displaystyle P(Y \le y) = \int_{x=0}^{x=y} \frac{y-x}{1-x} \,dx$
Then differentiate with respect to $y$ to get the density |
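Assuming the setup behind the hint ($X\sim U(0,1)$ and, given $X=x$, $Y\sim U(x,1)$), the integral evaluates to $P(Y\le y)=y+(1-y)\ln(1-y)$, which a quick simulation confirms (sample size and test point are arbitrary choices):

```python
import random, math

random.seed(1)
y, n = 0.6, 200_000
hits = 0
for _ in range(n):
    x = random.random()                  # X ~ U(0, 1)
    yy = x + (1 - x) * random.random()   # given X = x, Y ~ U(x, 1)
    hits += yy <= y
closed_form = y + (1 - y) * math.log(1 - y)   # = ∫_0^y (y - x)/(1 - x) dx
assert abs(hits / n - closed_form) < 0.01
```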
Finding ordered pairs for trigonometric Equation in two variables | So, as Calvin Lin wrote in the comments, we know that $-1\le \cos x\le 1$ and $-1\le \sin y\le 1$. So their product can be $1$ only if both are $1$ or both are $-1$. Now in $(0,3\pi)$ we get six such ordered pairs (notice this from the graphs of sine and cosine). |
Could you check my results for entropies? | Some sanity checks that we can do without getting into the full calculation:
$H(X+Y,X)$ and $H(X+Y,X-Y)$ should both be at most $2$, because $H(X,Y)$ is $2$, and the pairs $(X+Y,X)$ and $(X+Y,X-Y)$ shouldn't have more entropy than $(X,Y)$: after all, we can compute them from $(X,Y)$!
In fact, knowing $X+Y$ and $X$, we can compute $Y$, and knowing $X+Y$ and $X-Y$, we can compute $X$ and $Y$. So we should have $H(X+Y,X) = H(X+Y,X-Y) = H(X,Y) = 2$.
$I(X+Y;X)$ should be positive because some values of $X$ definitely convey information about $X+Y$.
$I(X+Y;X-Y)$ should be positive because some values of $X-Y$ definitely convey information about $X+Y$. (If $X-Y=0$, we know that $X+Y$ is either $0$ or $2$; if $X-Y=\pm1$, we know that $X+Y=1$.)
For computing $H(X+Y)$ we do have to look at the formula. ($H(X-Y)$ will be the same). $X+Y$ takes on values $0, 1, 2$ with probabilities $\frac14, \frac12, \frac14$, so the surprisal of $X+Y$ takes on values $2, 1, 2$ with probabilities $\frac14, \frac12, \frac14$, and we get $H(X+Y) = 2 \cdot \frac14 + 1 \cdot \frac12 + 2 \cdot \frac14 = 1.5$. So your computation checks out.
Now, we get $$I(X+Y;X) = H(X+Y) + H(X) - H(X+Y,X) = 1.5 + 1 - 2 = 0.5$$ and $$I(X+Y;X-Y) = H(X+Y) + H(X-Y) - H(X+Y,X-Y) = 1.5 + 1.5 - 2 = 1.$$ |
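All of these values can be checked by direct enumeration over the four equally likely outcomes of $(X,Y)$:

```python
from math import log2
from collections import Counter
from itertools import product

def H(values):
    # entropy (in bits) of an equally weighted list of outcomes
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

outcomes = list(product([0, 1], repeat=2))   # fair independent bits X, Y

H_sum  = H([x + y for x, y in outcomes])            # H(X+Y)
H_diff = H([x - y for x, y in outcomes])            # H(X-Y)
H_sx   = H([(x + y, x) for x, y in outcomes])       # H(X+Y, X)
H_sd   = H([(x + y, x - y) for x, y in outcomes])   # H(X+Y, X-Y)

assert (H_sum, H_diff, H_sx, H_sd) == (1.5, 1.5, 2.0, 2.0)
assert H_sum + 1 - H_sx == 0.5        # I(X+Y; X)
assert H_sum + H_diff - H_sd == 1.0   # I(X+Y; X-Y)
```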
Is there a short name for this type of word problems? | Typically we refer to an alternating series that converges, but not absolutely, as "conditionally convergent". Of course if the series converges absolutely then it also converges; naturally, there are convergent series that aren't absolutely convergent.
Telescoping refers to when the partial sum terms all cancel except for perhaps a couple of the "end-terms". For example $\sum(\frac{1}{n}-\frac{1}{n+1})$ is telescoping. |
Showing continuity of $f(x,y) = x^y$, for $x\in[0,1]$ and $y\in[a,b]$, where $0<a<b$ | First of all, $f(x,y)=x^y=e^{y\ln x}$. Write $g(x)=e^x$ and $h(x,y)=y\ln x$, so that $f=g\circ h$. Now use the facts that
a composition of continuous functions is continuous
a product of continuous functions is continuous
the functions $(x,y)\mapsto y$ and $(x,y)\mapsto \ln x$ are continuous |
Hall and Knight question | Hint:
Consider $(3+\sqrt7)^n+(3-\sqrt7)^n$ |
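Behind the hint: $3\pm\sqrt7$ are the roots of $x^2-6x+2=0$, so $s_n=(3+\sqrt7)^n+(3-\sqrt7)^n$ satisfies $s_n=6s_{n-1}-2s_{n-2}$ with $s_0=2$, $s_1=6$, and hence every $s_n$ is an (even) integer. A quick check:

```python
import math

# s_n = (3+√7)^n + (3-√7)^n; since 3±√7 are the roots of x^2 - 6x + 2 = 0,
# s_n = 6 s_{n-1} - 2 s_{n-2} with s_0 = 2, s_1 = 6.
s = [2, 6]
for _ in range(10):
    s.append(6 * s[-1] - 2 * s[-2])

a, b = 3 + math.sqrt(7), 3 - math.sqrt(7)
for n, sn in enumerate(s):
    assert abs(a**n + b**n - sn) < 1e-6 * max(1.0, sn)
assert all(x % 2 == 0 for x in s)   # every s_n is even
```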
Derivative of a function along a path | If the path parametrized by $\vec{r}(t)$ happens to be a contour of $f$, then (as you say) $\nabla f$ will be perpendicular to the curve and therefore perpendicular to $\vec{r'}(t)$. Therefore, the derivative will be zero. Another way to see this is that $f$ is (by definition) constant along a contour, thus (again) the derivative will be zero.
More generally, however, $\vec{r}$ may parametrize some arbitrary path, not necessarily a contour. That is when the formula is relevant. |