Find a polynomial of degree > 0 in $\mathbb Z_4[X]$ that is a unit. | All the polynomials of the form
$$1+2p(x)$$
are units. This is because $2p(x)$ is nilpotent, and elements of the form $1+n$, $n$ nilpotent, are units in any ring.
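A quick sanity check in Python (the choice $p(x) = 3x + x^2$ is just an arbitrary example; note $(1+2p)^2 = 1+4p+4p^2 \equiv 1 \pmod 4$, so such an element is its own inverse):
def polymul_mod4(f, g):
    # multiply two polynomials given as coefficient lists, reducing mod 4
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % 4
    return h
u = [1, 2, 2]                # 1 + 2*(3x + x^2) reduces to 1 + 2x + 2x^2 in Z_4[x]
print(polymul_mod4(u, u))    # [1, 0, 0, 0, 0], i.e. u*u = 1 in Z_4[x]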
These are all the units of $\mathbf{Z}_4[x]$. This follows from the fact that if $u(x)$ is a unit, then it must remain a unit after being reduced modulo two. But the only unit of $\mathbf{Z}_2[x]$ is the constant $1$. |
Problem about infinitary combinatorics | I think the problem should say: Construct $\mathcal{B} \subseteq \mathcal{A}$, such that ...
The hint is saying that letting $\mathcal{B} = \{b_{\alpha} : \alpha < \omega_1\}$, $\mathcal{A} - \mathcal{B} = \{a_{\alpha} : \alpha < \omega_1\}$ should work, where $\mathcal{A}$ is an almost disjoint family and for all $\alpha \neq \beta$, $a_{\alpha} \cap b_{\alpha} = \emptyset$, $a_{\alpha} \cap b_{\beta} \neq \emptyset$. To see this, suppose $d$ is infinite and almost disjoint from each $b_{\alpha}$. By deleting finitely many elements from $d$, we can assume that it is disjoint from $b_{\alpha}$ for all $\alpha \in X$, where $X$ is an uncountable subset of $\omega_1$. Suppose $|a_{\alpha} - d| < \omega$ for every $\alpha$. Then for some finite set $F$ and an uncountable $Y \subseteq X$, we have $a_{\alpha} \subseteq d \cup F$ for every $\alpha \in Y$. Now choose $\alpha < \beta \in Y$ such that $a_{\alpha} \cap F = a_{\beta} \cap F$. Then, $a_{\alpha} \cap b_{\beta} = a_{\alpha} \cap b_{\beta} \cap F = a_{\beta} \cap b_{\beta} \cap F \subseteq a_{\beta} \cap b_{\beta} = \emptyset$, which is impossible. |
Prove that the antipodal mapping is an isometry on $S^n$. Help understanding the proof. | As user 7530 suggests, the antipodal map on the sphere is the restriction to the sphere of the $-id$ map on $\mathbb{R}^{n+1}$. Note that $-id$ is linear, and hence, its derivative at a point is also equal to $-id$. More explicitly, for a point $p\in\mathbb{R}^{n+1}$, identify the tangent space $T_p\mathbb{R}^{n+1}$ with $\mathbb{R}^{n+1}$. Then for any $v\in T_p\mathbb{R}^{n+1}$ we have$$d(-id)_p(v)=-v\in T_{-p}\mathbb{R}^{n+1}.$$Consequently, for any $u,v\in T_p\mathbb{R}^{n+1}$ we have$$\langle d(-id)_p(u),d(-id)_p(v)\rangle=\langle-u,-v\rangle=\langle u,v\rangle,$$and so $-id$ is an isometry.
Edit: We show $T_pS^n=T_{-p}S^n$. The argument is essentially the one used in Lee's answer, but spelled out differently. Let $\alpha:(-\epsilon,\epsilon)\to S^n$ satisfy $\alpha(0)=p$. We have$$\langle\alpha(t),\alpha(t)\rangle=const,$$and differentiating at $t=0$, using the Leibniz rule, we obtain$$\langle\dot{\alpha}(0),p\rangle=0.$$Hence,$$T_pS^n\subset p^\bot.$$It now follows from dimension consideration that in fact$$T_pS^n=p^\bot.$$Since $p^\bot=(-p)^\bot$, we are done.
Remark: As commented below, the fact that $T_pS^n=T_{-p}S^n$ may be interesting in general, but is not crucial for this question. Every $M\in O_{n+1}(\mathbb{R})$ restricts to an isometry of $S^n$, and the antipodal map is just a particular case. |
On "the Hessian is the Jacobian of the gradient" | The relationship between gradients and Jacobians is the transpose. Suppose $f:\mathbb{R}^n\to\mathbb{R}$ is some $C^1$ function.
$J_f$ takes a point of $\mathbb{R}^n$ and produces a $1\times n$ matrix that is able to calculate directional derivatives. That is to say, if $\vec{v}\in\mathbb{R}^n$ is a vector and $p\in\mathbb{R}^n$ is a point, then $J_f(p)\vec{v}$ is a $1\times 1$ matrix whose entry is the directional derivative at $p$ in the $\vec{v}$ direction.
$\nabla f$ takes a point of $\mathbb{R}^n$ and produces a vector in $\mathbb{R}^n$ that can be used to calculate directional derivatives using the dot product. That is, with the same $p$ and $\vec{v}$ above, $\vec{v}\cdot\nabla f(p)$ is the directional derivative at $p$ in the $\vec{v}$ direction.
Recall that if $\vec{v},\vec{w}\in\mathbb{R}^n$ are vectors, then $\vec{v}\cdot\vec{w}=\vec{v}^T\vec{w}$, where we pretend that $1\times 1$ matrices are the same as scalars for this equation to make sense. Then, we have that $\vec{v}\cdot\nabla f(p) = (\nabla f(p))^T\vec{v}$, where the latter is commonly written as $(\nabla^Tf(p))\vec{v}$. Since this represents the directional derivative, too, and it holds for all $\vec{v}$ and $p$, then $\nabla^Tf=J_f$.
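As a quick check of this identity (a minimal sympy sketch; the function $f = xy^2 + \sin x$ is just an arbitrary example):
import sympy as sp
x, y = sp.symbols('x y')
f = x*y**2 + sp.sin(x)
J = sp.Matrix([f]).jacobian([x, y])        # 1 x 2 Jacobian of f
grad = sp.Matrix([f.diff(x), f.diff(y)])   # 2 x 1 gradient (column vector)
print(sp.simplify(J - grad.T))             # Matrix([[0, 0]]), i.e. J_f equals grad^T f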
Let's take a look at the equation $H_f=J_{\nabla f}$ for the Hessian, where $f:\mathbb{R}^n\to\mathbb{R}$ is a $C^2$ function. We have that $\nabla f$ is a function $\mathbb{R}^n\to\mathbb{R}^n$, taking points to column vectors, and so $J_{\nabla f}$ is going to be an $n\times n$ matrix. Using that $(\nabla f(p))_i = \tfrac{\partial f}{\partial x_i}(p)$, then
$$(H_f(p))_{ij} = (J_{\nabla f}(p))_{ij} = \frac{\partial}{\partial x_j}(\nabla f(p))_i = \tfrac{\partial^2f}{\partial x_j\partial x_i}(p),$$
as expected. |
Predator–Prey Equation and Liapunov's Stability | By differentiating the two components of $V$ separately, you can see that $g(y) = y^a\cdot e^{-b\cdot y}$ has a global maximum at $y = \frac{a}{b}$ and that $h(x)=x^c\cdot e^{-d\cdot x}$ has a global maximum at $x = \frac{c}{d}$. Thus $V$ has a global maximum at $\left(\frac{c}{d},\,\frac{a}{b}\right)$. |
Solve the given differential equation $p²-py+x=0$ | $$p^2-py+x=0$$
This is D'Alembert's differential equation :
$$y=xf(p)+g(p)$$
$$y=\dfrac x {p}+p$$
Differentiate both sides:
$$p=\dfrac {p-xp'}{p^2}+p'$$
$$p^3-p=p'(p^2-x)$$
$$p(p^2-1)\dfrac {dx}{dp}=(p^2-x)$$
It's a first order linear DE.
$$p(p^2-1)x'+x=p^2$$ |
Merging overlapping axis-aligned rectangles | Represent rectangle $R^i$ as $[x_m^i,x_M^i]\times[y_m^i,y_M^i]$,
then $R^i$ and $R^j$ overlap iff their $x$-range and $y$-range overlap.
From there, maintain a sorted list of the $x_{m/M}$ and $y_{m/M}$ values, and loop over your collection of rectangles.
When you process $R^i$, find every other rectangle that overlaps its $x$ or $y$ range. If you find an actual overlap with rectangle $R^j$, merge it on the spot
$R^i \gets \text{merge}(R^i,R^j)$
and update the sorted list with the new values of $x/y^i_{m/M}$. Continue until $R^i$ does not overlap any of the remaining rectangles. As $R^i$ grows through merges, $x^i_m$ can only decrease so in the worst case you have to compare it to other $x$ values $2N$ times. Likewise for the other bounds. "Processing $R^i$" is overall a linear procedure in $N$ and gives you a rectangle $R^i$ that does not overlap any of the remaining rectangles.
Repeat this "processing" on every rectangle that has not been "processed" yet.
If at some point, an overlap with an "already-processed" $R^i$ is created, it should be detected and handled at the creation of that overlap, so in the end there are no overlaps remaining. Overall we obtain an $O(N^2)$ complexity, which is hopefully better than "probably $O(N^3)$".
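For reference, here is a minimal Python sketch of the repeated-merge idea (without the sorted-list bookkeeping described above, so it is closer to the naive $O(N^3)$ worst case; rectangles are tuples $(x_m, x_M, y_m, y_M)$):
def overlaps(r, s):
    # axis-aligned rectangles overlap iff both their x- and y-ranges overlap
    return r[0] <= s[1] and s[0] <= r[1] and r[2] <= s[3] and s[2] <= r[3]
def merge(r, s):
    return (min(r[0], s[0]), max(r[1], s[1]), min(r[2], s[2]), max(r[3], s[3]))
def merge_overlapping(rects):
    rects = list(rects)
    merged = True
    while merged:            # repeat until a full pass creates no new overlap
        merged = False
        out = []
        for r in rects:
            for i, s in enumerate(out):
                if overlaps(r, s):
                    out[i] = merge(r, s)   # merge on the spot
                    merged = True
                    break
            else:
                out.append(r)
        rects = out
    return rects
print(merge_overlapping([(0, 2, 0, 2), (1, 3, 1, 3), (10, 11, 10, 11)]))
# [(0, 3, 0, 3), (10, 11, 10, 11)]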
Off-topic.
I legitimately don't know what the actual complexity of your initial procedure is.
Do you know if the merge order matters for the end rectangles, and do you have a proof of it available? |
Set of points at which sequence of measurable functions converge (another approach) | $(f_n(x))_n$ is Cauchy means that for any positive integer $p$, there exists an integer $k$ such that for all $m > k$ and $n > k$ we have $|f_n(x) - f_m(x)| < \frac{1}{p}$.
Since it's for any positive integer p, we should have something like $\cap_{p=1}^{\infty}$.
And "there exists a $k$ such that blabla" means $\cup_{k=1}^{\infty}$ blabla..., i.e. for some $k$ in $\{1,2, \cdots\}$, blabla is ok
For all $m,n$ greater than $k$ is $\cap_{m > k, n>k}$.
So finally, $(f_n(x))_n$ is Cauchy means that $x$ is in the set
$\cap_{p=1}^{\infty} \cup_{k=1}^{\infty}\cap_{m > k, n>k}\{x: |f_m(x) - f_n(x)| < \frac{1}{p}\}$
To summarize: when there is "for any, for all", use intersection; when there is "exists", use union. |
Expected number of loops | Suppose you start with $n$ ropes. You pick two free ends and tie them together:
If you happened to pick two ends of the same rope, you've added one additional loop (which you can set aside, since you'll never pick it from now on), and have $n-1$ ropes
If you happened to pick ends of different ropes, you've added no loop, and effectively replaced the two ropes with a longer rope, so you have $n-1$ ropes in this case too.
Of the $\binom{2n}{2}$ ways of choosing two ends, $n$ of them result in the first case, so the first case has probability $\frac{n}{2n(2n-1)/2} = 1/(2n-1)$. So the expected number of loops you add in the first step, when you start with $n$ ropes, is $$\left(\frac{1}{2n-1}\right)1 + \left(1-\frac{1}{2n-1}\right)0 = \frac{1}{2n-1}.$$
After this, you start over with $n-1$ ropes. Since what happens in the first step and later are independent (and expectation is linear anyway), the expected number of loops is $$ \frac{1}{2n-1} + \frac{1}{2n-3} + \dots + \frac{1}{3} + 1 = H_{2n} - \frac{H_n}{2}$$
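A quick Monte Carlo check of this in Python (a sketch; ends $2i$ and $2i+1$ are taken to belong to rope $i$, and we tie all the ends together in random pairs and count the resulting loops):
import random
def count_loops(n):
    ends = list(range(2 * n))
    random.shuffle(ends)
    tied = {}
    for i in range(0, 2 * n, 2):   # tie the shuffled ends together in pairs
        a, b = ends[i], ends[i + 1]
        tied[a], tied[b] = b, a
    seen, loops = set(), 0
    for start in range(2 * n):     # trace each loop: rope-partner, then tie-partner
        if start in seen:
            continue
        loops += 1
        e = start
        while e not in seen:
            seen.add(e)
            partner = e ^ 1        # the other end of the same rope
            seen.add(partner)
            e = tied[partner]
    return loops
n, trials = 100, 20000
print(sum(count_loops(n) for _ in range(trials)) / trials)   # ~ 3.28
print(sum(1 / (2 * k - 1) for k in range(1, n + 1)))         # 3.2843...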
In particular, for $n=100$, the answer is roughly $3.28$, which come to think of it seems surprisingly small for the number of loops! |
Are all sets of zero measure (not necessarily Lebesgue measure) measurable? | Well, $\mu$ is a map $\mathfrak X\longrightarrow[0,\infty]$, so the fact that $\mu(E)$ is defined in the first place makes $E$ an element of $\mathfrak X$, so measurable.
If you're asking about outer measures (which measure spaces don't need!), then your question makes more sense and the answer is still yes, sets of measure $0$ are measurable. Let $\mu^\ast:\mathcal P(X)\longrightarrow [0,\infty]$ be an outer measure, $E\subseteq X$ such that $\mu^\ast(E)=0$. $E$ is measurable by definition if for all $A\subseteq X$ we have $\mu^\ast(A)=\mu^\ast(E\cap A)+\mu^\ast(E^c\cap A)$. Since outer measures are $\sigma$-subadditive this is equivalent to $\mu^\ast(A)\geq \mu^\ast(E\cap A)+\mu^\ast(E^c\cap A)$. Because of monotonicity we have $\mu^\ast(E\cap A)\leq\mu^\ast(E)=0$, so $\mu^\ast(E\cap A)=0$. So we only need to show $\mu^\ast(A)\geq \mu^\ast(E^c\cap A)$. But that's just monotonicity of the outer measure.
So $E$ is measurable. |
Injective function from set of all functions $f: \mathbb{R} \to \mathbb{R}$ to $\mathcal{P}(\mathbb{R})$ | Hint. Note that $\mathbb R \cong \mathcal P(\mathbb N)$, and that every function $\mathbb R \to \mathcal P(\mathbb N)$ corresponds uniquely (in a natural way) to a subset of $\mathbb R \times \mathbb N$.
Now you only need $\mathcal P(\mathbb R\times \mathbb N)\cong \mathcal P(\mathbb R)$ ... |
What is the meaning of the $0<x, y<\infty$ bound in this joint PDF? | "If $0<x,y<\infty$" is equivalent to
"if $0<x$ and $0<y$ and $x<\infty$ and $y<\infty$".
For $y\le0$, $y$ is indeed strictly less than $\infty$, but $0<y$ does not hold. Thus the condition is not satisfied and, by definition, we have $f_{X,Y}(x,y)=0$. |
How to apply a transformation to a conic | I find it a bit easier to work with homogeneous coordinates and matrices for this. In the following, I use lower-case bold letters to represent homogeneous vectors and a tilde to indicate their corresponding inhomogeneous coordinate tuples.
Your general conic equation can be written as $$\mathbf x^TC\mathbf x = \begin{bmatrix}x&y&1\end{bmatrix} \begin{bmatrix} a&\frac b2&\frac d2 \\ \frac b2 & c & \frac e2 \\ \frac d2&\frac e2&f \end{bmatrix} \begin{bmatrix}x\\y\\1\end{bmatrix} = 0.$$ If you have an invertible point transformation $\mathbf x' = M\mathbf x$, then $$\mathbf x^TC\mathbf x = (M^{-1}\mathbf x')^TC(M^{-1}\mathbf x') = \mathbf x'^T(M^{-T}CM^{-1})\mathbf x',$$ so the conic’s matrix transforms as $C' = M^{-T}CM^{-1}$.
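As a quick numeric sanity check of this rule (a numpy sketch; the unit circle and the particular rotation plus translation are arbitrary choices):
import numpy as np
C = np.diag([1.0, 1.0, -1.0])            # unit circle x^2 + y^2 - 1 = 0
theta, tx, ty = 0.3, 2.0, -1.0
M = np.array([[np.cos(theta), -np.sin(theta), tx],      # point map x' = Mx
              [np.sin(theta),  np.cos(theta), ty],
              [0.0,            0.0,           1.0]])
Minv = np.linalg.inv(M)
Cp = Minv.T @ C @ Minv                   # transformed conic C' = M^{-T} C M^{-1}
p = M @ np.array([1.0, 0.0, 1.0])        # image of the point (1, 0) on the circle
print(p @ Cp @ p)                        # ~0: the image satisfies x'^T C' x' = 0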
The invertible affine transformation $\tilde{\mathbf x} = A\tilde{\mathbf x}'+\tilde{\mathbf t}$ (note that I’m inverting your transformation for simplicity) can be represented by the $3\times 3$ matrix $$M^{-1} = \left[\begin{array}{c|c} A & \tilde{\mathbf t} \\ \hline \mathbf 0^T & 1\end{array}\right]$$ so that $\mathbf x = M^{-1}\mathbf x'$. Writing $C$ in block form as $$C = \left[\begin{array}{c|c} Q & \tilde{\mathbf b} \\ \hline \tilde{\mathbf b}^T & f \end{array}\right],$$ which corresponds to the form $\tilde{\mathbf x}^TQ\tilde{\mathbf x}+2\tilde{\mathbf b}^T\tilde{\mathbf x}+f = 0$ of the general conic equation, we then have $$M^{-T}CM^{-1} = \left[\begin{array}{c|c} A^T & \mathbf 0 \\ \hline \tilde{\mathbf t}^T & 1\end{array}\right] \left[\begin{array}{c|c} Q & \tilde{\mathbf b} \\ \hline \tilde{\mathbf b}^T & f \end{array}\right] \left[\begin{array}{c|c} A & \tilde{\mathbf t} \\ \hline \mathbf 0^T & 1\end{array}\right]
= \left[\begin{array}{c|c} A^TQA & A^T(Q\tilde{\mathbf t}+\tilde{\mathbf b}) \\ \hline (Q\tilde{\mathbf t}+\tilde{\mathbf b})^T A & \mathbf t^TC\mathbf t \end{array}\right].$$ There are several things to note here: the quadratic part of the equation represented by $Q$ is only affected by the linear part of the affine transformation; the new constant term is the left-hand side of the untransformed equation evaluated at $\mathbf t$, the translation part of the transformation; and the signs of $\det C$ and $\det Q$ are preserved by affine transformations. If you examine the signs of these determinants for a canonical example of each of the types of conics that you’ve listed, you’ll find that the combination of signs determines the type of conic: opposite signs for an ellipse, $\det Q=0$ and $\det C\ne0$ for a parabola, $\det C=0$ and $\det Q\lt0$ for a pair of intersecting lines, and so on. In fact, $\det Q$ is a multiple of the discriminant of the conic equation. It’s a bit tedious, but not very difficult, to work out appropriate transformations to bring an arbitrary conic into one of these canonical forms. (You might need to use the fact that a real symmetric matrix is orthogonally diagonalizable to prove existence of $A$ in all cases.) For example, $\det Q\ne0$ for any central conic, so the linear terms in the equation of a central conic can be eliminated by choosing $\tilde{\mathbf t} = -Q^{-1}\tilde{\mathbf b}$. |
Method of spherical means/average | In the integral
$$
\overline{u}(r,t) = \frac{1}{4\pi r^2} \int\int_{||x||=r} u(x,t) dS
$$
the area element in spherical polar coordinates is $dS = r^2\sin(\theta)\,d\theta\, d\phi$, where $0\le \theta \le \pi$ and $0 \le \phi \le 2\pi$. |
Computation of permanents of general matrices | The Monte Carlo method is actually a FPRAS, at least according to my definition of FPRAS, so that is an example. Barvinok's algorithm is probably too hard to explain without a long answer and some higher level background math, but I do know that Barvinok came up with a (still fairly complicated) way to count the number of integer points exactly in a convex polytope (bounded solution region to a system of linear inequalities) in polynomial time when the dimension of the polytope is fixed. So I'm assuming he is doing something related for the permanent, and that basically if the rank of the matrix is fixed this is like having "fixed true dimension" for a convex polytope, even if the polytope is embedded in a higher dimensional space. Barvinok is good at coming up with polynomial time algorithms when something like the "dimension" is fixed. |
How can I compute the discriminant of the field $\mathbb{Q}(\sqrt[3]{28})$? | (1.1) It is ok, the confirmation using computer algebra support, sage is as follows:
sage: R.<X> = PolynomialRing(QQ)
sage: K.<t> = NumberField( X^3-28 )
sage: K
Number Field in t with defining polynomial X^3 - 28
sage: t.minpoly()
x^3 - 28
sage: K(1).trace(), K(t).trace(), K(t^2).trace()
(3, 0, 0)
Here and in the sequel:
$$ t=\sqrt[3]{28}\ .
$$
(1.2) The norm of an element of the shape
$$ \xi = a+bt+ct^2 $$
is the product
$$ N\xi=(a+bt+ct^2)(a+b\epsilon t+c\epsilon^2 t^2)(a+b\epsilon^2 t+c\epsilon t^2)\ ,$$
where $\epsilon$ is a primitive root of unit of order three. Explicitly:
$$
\begin{aligned}
N\xi
&= a^3+b^3t^3 +c^3t^6\color{red}{-} 3abct^3 \\
&= a^3 + 28 b^3+ 28^2 c^3 \color{red}{-} 3\cdot 28abc\ .
\end{aligned}
$$
(Edited sign...)
For instance:
sage: (1+t+2018*t^2).norm()
6442872498805
sage: 1^3 + 28 + 2018^3*28^2 - 3*28*2018*1*1
6442872498805
So I use the basis $\{1,t,t^2\}$ above, which is not the integral basis:
sage: K.integral_basis()
[1/3*t^2 + 1/3*t + 1/3, t, 1/2*t^2]
(2)
It is enough to compute the norm of the element:
$$
\begin{aligned}
N\beta
&=
N\left(\frac 13(1+7t+t^2)\right)
\\
&=
\frac 1{3^3}(1+28\cdot 7^3+28^2\cdot 1-3\cdot 28\cdot 7)
\\
&=\frac 1{3^3}(1+9604+784-588)=\frac 1{3^3}\cdot 9801\\
&=3\cdot 11^2\ .
\end{aligned}
$$
So $\beta$ is integral. (The norm is in $\Bbb Z$.)
We can also ask the computer for this (and its minimal polynomial):
sage: beta = ( 1 + 7*t + t^2 ) / 3
sage: beta.norm().factor()
3 * 11^2
sage: beta.minpoly()
x^3 - x^2 - 65*x - 363
Yes, the computation in (2), as posted is ok.
(3) Let us ask first for the result:
sage: K.discriminant().factor()
-1 * 2^2 * 3 * 7^2
Using the given basis,
$$
\frac 13(1+t+t^2)\ ,\
t\ ,\
\frac 12 t^2\ ,
$$
The field has three complex embeddings into $\Bbb C$,
defined on generator by
$t\to t=\sqrt[3]{28}$,
respectively by
$t\to \epsilon t$,
respectively by
$t\to \epsilon^2 t$.
So, by definition, we have to compute the square of the determinant:
$$
\begin{aligned}
D
&=
\begin{vmatrix}
\frac 13(1+t+t^2) & t & \frac 12t^2\\
\frac 13(1+\epsilon t+\epsilon^2 t^2) & \epsilon t & \frac 12\epsilon^2 t^2\\
\frac 13(1+\epsilon^2 t+\epsilon t^2) & \epsilon^2 t & \frac 12\epsilon t^2
\end{vmatrix}
\\[2mm]
&=
\frac 12\cdot \frac 13
\begin{vmatrix}
1+t+t^2 & t & t^2\\
1+\epsilon t+\epsilon^2 t^2 & \epsilon t & \epsilon^2 t^2\\
1+\epsilon^2 t+\epsilon t^2 & \epsilon^2 t & \epsilon t^2
\end{vmatrix}
\\[2mm]
&=
\frac 12\cdot \frac 13
\begin{vmatrix}
1 & t & t^2\\
1 & \epsilon t & \epsilon^2 t^2\\
1 & \epsilon^2 t & \epsilon t^2
\end{vmatrix}
\\[2mm]
&=
\frac 12\cdot \frac 13\cdot t^3
\begin{vmatrix}
1 & 1 & 1\\
1 & \epsilon & \epsilon^2\\
1 & \epsilon^2 & \epsilon
\end{vmatrix}
\\[2mm]
&=
\frac 12\cdot \frac 13\cdot 28
\cdot\underbrace{(1-\epsilon)^3}_{-3i\sqrt 3}
\cdot\underbrace{(\epsilon+\epsilon^2)}_{=-1}
\\[2mm]
&=
2\cdot 7\cdot i\sqrt 3
\ .
\end{aligned}
$$
The square is the discriminant.
$\square$ |
what is the geometry picture of Riemann tensor identity $R(X\wedge Y,V\wedge W) = R(V\wedge W, X\wedge Y)$ | Skew-symmetry in last two indices. By definition the Levi-Civita connection $\nabla$ of a Riemannian metric $g$ preserves that metric, so that $\nabla g = 0$. Thus, if we view the curvature as a section $R \in \bigwedge^2 T^*M \otimes \operatorname{End}(TM)$, the induced action of $R$ on $g$ is $R \# g = 0$. Expanding gives
$$0 = (R\#g)(Z, W) = -g(R\#W, Z) - g(R\#Z, W) .$$
But rearranging gives exactly that $g(R\#W, Z)$ is skew in $W, Z$ as claimed.
Equivalently, since $\nabla$ preserves $g$, so does $R$, in the sense that it takes values in $\bigwedge^2 T^*M \otimes \mathfrak{o}(g)$, and lowering an index with $g$ gives an isomorphism $\mathfrak{o}(g) \cong \bigwedge^2 T^*M$. In particular, this does not hold for general linear connections (which generically do not preserve metrics).
Pair-swapping symmetry. The pair-swapping identity for a Levi-Civita connection $\nabla$ is generated by the two transposition symmetries and the Bianchi identity, $$\mathfrak{S}[R(X, Y) W] = 0 ,$$ where $\mathfrak{S}[\cdots]$ denotes the sum over cyclic permutations of $X, Y, W$. But the Bianchi identity can be proved by writing the expression $\mathfrak{S}[R(X, Y) W]$ using the definition of curvature, rewriting everything in terms of Lie brackets (which in particular requires torsion-freeness of $\nabla$), and applying the Jacobi identity. |
Entropy of geometric random variable with parameter $1/2$ | Here is a correct version of the question:
What is the entropy of $X$ such that $P(X=i)=1/2^i$ for every positive integer $i$?
You need to know the value of the series $t(x)=\sum\limits_{i\geqslant0}ix^i$. You probably already know the value of the series $s(x)=\sum\limits_{i\geqslant0}x^i$ since $s(x)$ is the classical geometric series: $s(x)=\frac1{1-x}$.
But $t(x)=xs'(x)$ hence $t(x)=\frac{x}{(1-x)^2}$.
In your case, you start at $i=1$ instead of $i=0$ but this does not change anything since the $0$th term of $t(x)$ is $0$, and you choose $x=\frac12$. Hence $t\left(\frac12\right)=\frac{\frac12}{(1-\frac12)^2}=2$, as desired. |
Is there an analytic way of solving univariate polynomial equations in general? | More a long comment than an answer. The solution to the problem of finding an analytic expression for the roots of the general polynomial equation of order $n$
$$
a_n z^n + a_{n-1} z^{n-1} + \ldots + a_1 z + a_0=0 \label{1}\tag{1}
$$
was, according to Giuseppe Belardinelli ([1] pp. 3-4), triggered by the discovery of hypergeometric functions of $n$ variables. Precisely, by using such classes of functions, two analytic formulas, one by Hjalmar Mellin and one by Giuseppe Belardinelli himself, were found in 1921: a brief description of their approaches is given below.
Mellin (see [2] and also [1], §20, where the transformation of \eqref{1} in \eqref{1a} is explicitly described) starts by considering an equivalent form of equation \eqref{1}, precisely the following one
$$
z^n+b_1 z^{n_1}+\ldots+b_mz^{n_m} - 1=0,\label{1a}\tag{1a}
$$
where $n_i<n$ and $b_j\neq 0$ for each $j=1,\ldots,m<n-1$. This form has the property that if one of its solutions is known, say $z_p=z_p(b_1,\ldots,b_m)$, it is possible to find all the remaining ones by the following formula
$$
z_i(b_1,\ldots,b_m)=\zeta_iz_p(\zeta_i^{n_1}b_1,\ldots,\zeta_i^{n_m}b_m)\quad i=1,\ldots, n\label{2}\tag{2}
$$
where $\zeta_i$, $i=1,\ldots, n$ are the solutions of the cyclotomic equation
$$
\zeta^n-1=0
$$
Then Mellin succeeds in finding a particular hypergeometric function $z_p$, which he calls solution principale, such that $z_i(b_1,\ldots,b_m)$ is a root of \eqref{1a}, and thus by \eqref{2} he obtains a closed form analytic expression for the roots of \eqref{1}.
The solution of Belardinelli is similar in that it relies again on hypergeometric functions, but it is conceptually different: following the footsteps of Alfredo Capelli, he seeks the $n$ roots $z_i$, $i=1,\ldots, n$ of \eqref{1} in the form of convergent power series: and he succeeds in this task by defining a multiple Lagrange series of $n$-th order Pochhammer hypergeometric functions that gives exactly the roots of \eqref{1} as a function of the coefficients $a_0,\ldots,a_n$ for each finite $n$ (see [1] for the details).
Additional note
The only English language reference I was able to find on the same topic is the short (translated from the Russian) paper [3] of E. N. Mikhalkin, which builds on the result of Mellin. He gives a very simple form for $z_p(b)$ (theorem 1, formula 4 at p. 302 of [3]): however, his formula is in all respects a simplification of the one given by Mellin in reference [2], and he neither explains nor describes how Mellin obtains his result, nor provides any historical survey.
Bibliography
[1] Giuseppe Belardinelli (1960), Fonctions hypergéométriques de plusieurs variables et résolution analytique des équations algébriques générales (French), Mémorial des sciences mathématiques, no. 145 (1960), 80 p., MR121518, Zbl 0097.05901.
[2] Hjalmar Mellin (1921), Résolution de l’équation algébrique générale à l’aide de la fonction gamma (French), Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, Paris, 172, pp. 658-661, JFM 48.1238.02.
[3] Evgenii Nikolaevich Mikhalkin, On solving general algebraic equations by integrals of elementary functions, Siberian Mathematical Journal 47, No. 2, 301-306 (2006). MR2227983, ZBL1115.33001. |
For $A=\mathbb{Z}_m$, show $A_n=\mathbb{Z}_d$ where $d=gcd(m,n)$. | $\,\ na=0$ in $\,Z_m\!\!\iff m\mid na\iff m\mid na,ma\iff m\mid (na,ma) = (n,m)a = da\iff m/d \mid a$
$\,\Bbb Z_m\,$ contains $d\,$ multiples of $\,c = m/d,\,$ viz. $\, 0,\, c,\, 2c,\,\ldots, (d\!-\!1)c,\,$ and it is easy to check that they form a subgroup isomorphic to $\,\Bbb Z_d.\,$ |
Where am I going wrong with the integral $\int\frac{1}{\sqrt{1-x^2}}dx$? | The error is when you say "Since $\frac{1}{-2x}$ is a constant at $du$, we can send it outside the integral." The fact is, $\frac{1}{-2x}$ is absolutely not constant with respect to $u$, since $u$ is a function of $x$.
(This would be even more clear if it were a definite integral. Instead of getting a number as you should, you'd somehow get a function of the dummy variable $x$!) |
Prove that $S^3\setminus S^1$ is connected | Using the stereographic projection, you can identify $S^3-\{point\}$ (the point not in $S^1$) with $\mathbb{R}^3$. The image of the circle under the stereographic projection is a circle if the center of the stereographic projection is not on the circle. If you remove a circle from $\mathbb{R}^3$ it is still connected.
This implies that $S^3-(\{point\}\cup S^1)$ is connected and $S^3-S^1$ is connected. |
How to solve this equation by hand? $4.68-4.50\cos\alpha-1.23\alpha=0$ | I don't know how the calculator got its value, $0.226$, or where the value $4.483$ comes from. Plotting
$$ f(\alpha) = 4.68 - 4.50 \cos \alpha - 1.23 \alpha $$
gives a graph of $f$ (figure omitted).
We expect this graph to be a cosine with midline given by $4.68 - 1.23 \alpha$, so as soon as a local minimum is above zero, we need not proceed further in that direction (to the left), and as soon as a local maximum is below zero, we need not proceed further in that direction (to the right). The graph above indicates those features, so all possible zeroes are in the interval shown, $\alpha \in [-6,10]$.
Zooming in on the potential zero(es) near $\alpha = 0$, we see that there is a local minimum above the $\alpha$ axis, so there is/are no actual zero(es) there.
(This plot is an example of why it is useful to include "round" numbers on axes when your interval of interest is very close to that round number. If we only zoom in to show the minimum, we have to actually look at the tick labels to determine whether there is a zero; with a round value marked on the axis, whether the plot shows a zero is immediately clear. Figures omitted.)
Going back to the overview plot, there should be a zero of $f$ in the interval $[4,5]$. So let's see if we can find it. (Spoiler: It's at $\alpha = 4.516\,602\,526\,340\,883\,204\,3\dots$.)
Let's separate the polynomial (actually, linear in this problem) and non-polynomial parts and plot them separately. We want
$$ 4.68 - 1.23 \alpha = 4.50 \cos \alpha $$
Since $-1 \leq \cos \alpha \leq 1$, the right-hand side is in the interval $[-4.50, 4.50]$. The left-hand side is a line and we can solve for the interval of $\alpha$s where the height of the line is in $[-4.50,4.50]$. \begin{align*}
-4.50 &\leq 4.68 - 1.23 \alpha \\
-9.18 &\leq -1.23 \alpha \\
7.463\,414\,634\,146\,341\,463\,4\dots =\frac{9.18}{1.23} &\geq \alpha \\
\end{align*} and \begin{align*}
4.68 - 1.23 \alpha &\leq 4.50 \\
-1.23 \alpha &\leq -0.18 \\
\alpha &\geq \frac{0.18}{1.23} = 0.146\,341\,463\,414\,634\,146\,34\dots
\end{align*}
(Notice that this possible interval for a zero of $f$ is smaller than the one we got above from looking at the graph.) Plotting the two sides over this interval gives a graph of the line and the cosine (figure omitted).
We know from the above that there is/are no zero(es) of $f$ (intersections of the line from the left-hand side and the cosine from the right-hand side of the above) near $\alpha = 0$, but there is clearly one near $\alpha = 4.5$.
Binary Search
So we can proceed by binary search. We know that $f$ is positive on one side of the zero and negative on the other, so we can cut the interval containing the zero in half for each evaluation of $f$. You seem to be using thousandths as precision, so we need only evaluate about ten times. We indicate the calculation of $f$ at the midpoint of the current interval containing the zero, then write down the new (smaller) interval which we may infer contains the zero. \begin{align*}
f(4) &= 2.7013\dots \\
f(5) &= -2.7464\dots & (4,5) \\
f(4.5) &= 0.09358\dots & (4.5,5) \\
f(4.75) &= -1.3317\dots & (4.5, 4.75) \\
f(4.6) &= -0.4733\dots & (4.5, 4.6) \\
f(4.55) &= -0.1889\dots & (4.5, 4.55) \\
f(4.52) &= -0.01918\dots & (4.5, 4.52) \\
f(4.51) &= 0.03724\dots & (4.51, 4.52) \\
f(4.515) &= 0.009043\dots & (4.515, 4.52) \\
f(4.517) &= -0.002243\dots & (4.515, 4.517) \\
f(4.516) &= 0.003400\dots & (4.516, 4.517)
\end{align*}
The decimal expansion of every point in that last interval begins "$4.516$", agreeing to three decimal places, so the zero is about $\alpha = 4.516$.
(Note that all we really care about is the sign of these values of $f$. If we know the sign, we know which endpoint to replace with the trial $\alpha$ in that step, so we do not have to evaluate these very carefully (especially in the first few steps). Also, we didn't use the exact midpoints. If we had, the interval containing the zero would be shorter than the interval we found for the same number of evaluations of $f$. It would also need a lot of calculator button pushing since the sequence of midpoints is $4.5$, $4.75$, $4.625$, $4.5625$, $4.53125$, $4.515625$, $4.5234375$, $4.51953125$, $4.517578125$, $4.5166015625$, $4.51708984375$, and $4.516845703125$ to get both endpoints to agree to $3$ decimals.)
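The same binary search is only a few lines in Python, for instance:
from math import cos
def f(a):
    return 4.68 - 4.50 * cos(a) - 1.23 * a
lo, hi = 4.0, 5.0            # f(lo) > 0 > f(hi)
while hi - lo > 1e-6:
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid             # the zero is to the right of mid
    else:
        hi = mid
print((lo + hi) / 2)         # 4.516602...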
Taylor Series
There are other methods we could try. We could replace the cosine with leading segments of its Taylor expansion. Then we are seeking roots of the resulting polynomial. This works if we can center the series close to the root. The Taylor series of cosine centered at $0$ is
$$ \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots + \frac{(-1)^k x^{2k}}{(2k)!} + \cdots \text{,} $$
but we want to find a zero near $4.5$, so we should center close to $4.5$. Picking from the standard angles near $4.5$: $\frac{4}{3}\pi = 4.188\dots$ and $\frac{3}{2}\pi = 4.712\dots$, so $\frac{3}{2}\pi$ is closer to $4.5$. The Taylor series of cosine centered at $\frac{3}{2} \pi$ is
$$ \cos x = \left( x - \frac{3}{2}\pi \right) - \frac{1}{3!}\left( x - \frac{3}{2}\pi \right)^3 + \frac{1}{5!}\left( x - \frac{3}{2}\pi \right)^5 - \cdots + \frac{(-1)^k}{(2k+1)!}\left( x - \frac{3}{2}\pi \right)^{2k+1} + \cdots \text{.} $$
We want to use low degrees since polynomials of high degree can be hard to factor. Let's go up to degree $5$ (by $2$s since this series for cosine only has odd degree terms). We list the degree of the initial part of the cosine series we are keeping in that row, the resulting approximation for $f$, and the real roots of that polynomial. \begin{align*}
1& :& -5.73 \alpha + 25.885\dots &= 0 :& \{& 4.517\dots \} \\
3& :& 0.75 \alpha ^3-10.602\dots \alpha ^2+44.234\dots \alpha -52.598\dots &= 0 :& \{& 2.051\dots, 4.516\dots, 7.569\dots \} \\
5& :& -0.0375 \alpha ^5+0.883\dots \alpha ^4-7.577\dots \alpha ^3+28.639\dots \alpha ^2-48.227\dots \alpha +34.545 &= 0 :& \{& 4.516\dots\}
\end{align*}
We get our zero to three decimals almost immediately.
Newton's method
The last method I'll show is Newton's Method. We have prior information that $f$ has a zero near $\alpha = 4$, so we use a linear approximation to $f$ at that point, find that approximation's $\alpha$-intercept, and report that as an improved location of the zero. We need to know that
$$ \alpha - \frac{f(\alpha)}{f'(\alpha)} = \frac{150 \alpha \sin \alpha + 150 \cos \alpha - 156}{150 \sin \alpha - 41} \text{.} $$
\begin{align*}
4& :& 4&.5827\dots \\
4&.5827 :& 4&.5168\dots \\
4&.5168 :& 4&.5166\dots \\
4&.5166 :& 4&.5166\dots
\end{align*}
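In Python, the iteration is (a minimal sketch):
from math import sin, cos
def newton_step(a):
    # a - f(a)/f'(a), simplified as above
    return (150 * a * sin(a) + 150 * cos(a) - 156) / (150 * sin(a) - 41)
a = 4.0
for _ in range(5):
    a = newton_step(a)
print(a)   # 4.51660252634088...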
Since we have found an approximate fixed point of the iteration for Newton's method, and we have graphical evidence that there is a simple zero of $f$ near this last $\alpha$, we have found that $\alpha = 4.516$ is an approximate zero of $f$. |
Strengthening Poincaré Recurrence | Hint: If the set was not syndetic, then the sequence
$$
\frac1n\sum_{k=0}^{n-1}\mu(B\cap T^{-k}B)
$$
would have zero as an accumulation point (take larger and larger gaps). But
$$
\lim_{n\to\infty}\frac1n\sum_{k=0}^{n-1}\mu(B\cap T^{-k}B)=\lim_{n\to\infty}\frac1n\sum_{k=0}^{n-1}\int_B(\chi_B\circ T^k)\,d\mu=\int_B\lim_{n\to\infty}\frac1n\sum_{k=0}^{n-1}(\chi_B\circ T^k)\,d\mu,
$$
using the dominated convergence theorem. |
Limit belongs to the sum! | A hint about the numerator: if each term is divided by $(n+1)^a$, the numerator becomes a Riemann sum for the integral of $x^a$ for $x \in [0,1].$ [There's a missing term but that doesn't matter.] So that part of the expression approaches $1/(a+1)$ as $n \to \infty.$ Maybe in combination with the rest you have found, this can give a closed form which can then be set to $1/60$ to solve for $a.$
Added note: one also has to pull out a factor of $1/n$ for the width of the rectangles, which is incorporated into the Riemann sum part. That will make a difference in how the remaining section looks before taking its limit for $n \to \infty.$ [If it's convenient, one could use a factor of $1/(n+1)$ for this, since $(1/n)/(1/(n+1)) \to 1.$] |
Two vector bundles over same base manifold $X$ | Let $S^1 \times S^1=X$ and let $V_1$ be the product bundle which is the Mobius bundle on the first factor (the unique non-orientable real line bundle over $S^1$) and the trivial bundle on the second factor and $V_2$ be the product bundle which is the Mobius bundle on the second factor and trivial on the first.
$w_1(V_1)\ne w_1(V_2)$ so these are not isomorphic as vector bundles over $X$. |
Finding the shortest/"most negative" closed directed trail in a weighted digraph with negative weights | The problem can be reduced to the minimum cost circulation problem (with positive and negative costs), which in turn can be solved using linear programming. Consider a directed graph $G=(V,E)$. Assign each edge in $G$ capacity one. Then every “closed directed trail” corresponds to a flow circulation in $G$ (which satisfies capacity constraints). Conversely, suppose that we are given an integral circulation (the amount of flow on every edge is either 0 or 1). Consider the set of edges $E'\subset E$ on which the amount of flow is 1. Then the graph $(V,E')$ is a directed Eulerian graph (for every vertex, the number of incoming edges is equal to the number of outgoing edges). Therefore, there is a “closed directed trail” (an Eulerian cycle in $(V,E')$) that consists of edges from $E'$: each edge $e\in E'$ appears on the trail exactly once, each edge on the trail is in $E'$.
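As a sketch of this reduction in code (assuming scipy; the toy digraph and its costs are arbitrary):
import numpy as np
from scipy.optimize import linprog
edges = [(0, 1, -2.0), (1, 2, -3.0), (2, 0, 4.0), (1, 0, 1.0)]  # (u, v, cost)
n_nodes = 3
A_eq = np.zeros((n_nodes, len(edges)))   # flow conservation at every node
for j, (u, v, _) in enumerate(edges):
    A_eq[u, j] += 1.0                    # edge j leaves u
    A_eq[v, j] -= 1.0                    # edge j enters v
costs = [c for (_, _, c) in edges]
res = linprog(costs, A_eq=A_eq, b_eq=np.zeros(n_nodes),
              bounds=[(0.0, 1.0)] * len(edges))
print(res.x, res.fun)   # a 0/1 circulation; edges with flow 1 form the trail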
We have a one-to-one correspondence between “closed directed trails” and integral circulations in $G$. Accordingly, the minimum cost trail corresponds to the minimum cost (integral) circulation. We can find the minimum cost circulation efficiently (see Wikipedia) e.g. using Linear Programming (note that the circulation polytope is integral and thus linear programming gives an integral solution, in which the amount of flow on every edge is either 0 or 1). |
Bijection between $ \bigcup_{i \in [0, 1]} X_i \ \ \ \text{and} \ \ \ [0, 1] \times [0, 1] $ | Yes, there is a bijection:
Each $X_i$ has the same cardinality as $[0,1]$. So let $f_i:[0,1]\rightarrow X_i$ be a bijection for all $i\in[0,1]$.
Let $f:[0,1]\times [0,1]\rightarrow \bigcup_{j\in[0,1]} X_j$ be the map $$f(j,x) = f_j(x)$$
for all $j\in [0,1]$.
This map is well defined. It is one-to-one because the $X_j$ are pairwise disjoint and each $f_j$ is one-to-one, and it is onto because each of the $f_j$ is onto. |
Simple Proof as part of Merkle-Hellman | If $\gcd(r,q)=1$, then $$xr \not\equiv yr \pmod q$$ occurs if and only if $$x \not\equiv y \pmod q.$$
Proof: If $x \equiv y \pmod q$, then $xr \equiv yr \pmod q$. Now assume $x \not\equiv y \pmod q$. Since $\gcd(r,q)=1$, we know $r$ is invertible in $\mathbb{Z}_q$ (i.e., $r$ is a unit); its inverse can be found using the Extended Euclidean Algorithm. Hence, if $xr \equiv yr \pmod q$ then $x \equiv xrr^{-1} \equiv yrr^{-1} \equiv y \pmod q,$ giving a contradiction.
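For instance, in Python 3.8+ (where pow(r, -1, q) computes the modular inverse via the extended Euclidean algorithm internally; the numbers below are arbitrary):
q, r = 97, 31                          # gcd(r, q) = 1
r_inv = pow(r, -1, q)
x, y = 10, 45                          # x != y (mod q)
assert (x * r) % q != (y * r) % q      # multiplication by a unit is injective
assert (x * r * r_inv) % q == x % q    # r_inv undoes multiplication by r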
This means you need to ensure that $x \not\equiv y \pmod q$. |
Real Analysis: Show the sequence $a_n=10n$ is not convergent | In the examples you refer to, an $\epsilon$ is chosen because
Basically any $\epsilon$ would work
You only need one to work
You do need one to work
Given a number $\lambda$, the definition of "$a_n$ converges to $\lambda$" is
For any $\epsilon>0$, there is an $N\in \Bbb N$ such that for any $n>N$ we have $$|a_n-\lambda|<\epsilon$$
Negating this, the definition of "$a_n$ doesn't converge to $\lambda$" becomes
There is an $\epsilon>0$ such that for any $N\in \Bbb N$ there is an $n>N$ such that $$|a_n-\lambda|\geq \epsilon$$
So as you can see, as long as they can prove that a single such $\epsilon$ exists, they will have proven that $\lambda$ is not the limit of the sequence. One common way to prove that an $\epsilon$ exists is to pick a specific number and show that that works.
You need to pick an $\epsilon$ which is small enough. If $1$ works, most people would probably pick that. Failing that, usually one would go for $\epsilon = \frac1k$ for some natural number $k$. And if one $\epsilon$ works, then any smaller $\epsilon$ also automatically works.
Sometimes, I like to pick a smaller-than-strictly-necessary $\epsilon$ in order to ensure that we get $|a_n-\lambda|>\epsilon$ rather than $|a_n-\lambda|\geq \epsilon$, because I like that better (for instance, for $b_n = (-1)^n, \lambda = 0$, then $\epsilon = \frac12$ is good enough but I prefer to use $\frac13$). But you don't need to do that. |
Could we compute integral of $z_i\bar{z_j}/\sum |z_k|^2$ on $\mathbb{C}P^n$ with the Fubini-Study metric? | Let $0\leq i\neq j\leq n$, and let $\varphi:\mathbb{C}P^n\to\mathbb{C}P^n$ be the "reflection" given by $$[z_0:\ldots:z_n]\mapsto[z_0:\ldots:z_{i-1}:-z_i:z_{i+1}:\ldots:z_n].$$
The Fubini-Study form is preserved by $\varphi$, while your function changes sign under $\varphi$. It follows that the average value is $0$. |
What does it mean for a matrix to be orthogonally diagonalizable? | I assume that by $A$ being orthogonally diagonalizable, you mean that there's an orthogonal matrix $U$ and a diagonal matrix $D$ such that
$$A = UDU^{-1} = UDU^T \text{.}$$
$A$ must then be symmetric, since (note that since $D$ is diagonal, $D^T = D$!)
$$A^T = \left(UDU^T\right)^T = \left(DU^T\right)^TU^T = UD^TU^T = UDU^T = A \text{.}$$ |
Given $\log_b5=a$ and $\log_b2.5=c$, find $x$ in terms of $a$ and $c$ if $5^x=2.5$ | The second condition gives $x\log_b5=c$ and from here $x=\frac{c}{a}$.
Because
$$c=\log_b2.5=\log_b5^x=x\log_b5=xa$$ |
Gauss curvature of points that are covered by a certain parametrization | Given a smooth regular and injective curve $\alpha(u)=(f(u),0,g(u))$, with $f(u)>0$ always, the revolution parametrization $$X(u,v) =(f(u)\cos v, f(u)\sin v,g(u)) $$is also regular, injective, and covers the revolution surface, excluding a meridian. If $\alpha$ has unit speed, the induced metric (first fundamental form) is then ${\rm d}s^2 = {\rm d}u^2+f(u)^2 {\rm d}v^2$. In other words, $E=1$, $F=0$ and $G(u,v)=f(u)^2$. With a little more patience one can compute the coefficients of the second fundamental form of $X$ ($(e,f,g)$, $(L,M,N)$, $(h_{ij})$, or whatever notation you've been taught) as $L= \langle X_{uu},N\rangle$, etc., where $$N(u,v) = \frac{X_u \times X_v}{\|X_u \times X_v\|}.$$ Using that $$K = \frac{LM-N^2}{EG-F^2}, $$ one can show that we'll have $$K = -\frac{f''}{f} $$ for this particular $X$. For $f(u)=(\cos u +2)$, we conclude that the Gaussian curvature of the torus is given by $$K(u,v)=\frac{\cos u}{2+\cos u}. $$
See, for example, p. $100$ in John Oprea's Differential Geometry and Its Applications (first edition, I think). |
How to prove this is true? | $$0 \le \log n! = \log \prod\limits_{k=1}^n k = \sum\limits_{k=1}^n \log k\le \sum\limits_{k=1}^n \log n = n\log n$$ |
Arithmetic regarding area under curve | You are missing some context. It is probably written somewhere that $x(t)=v_0t+\frac a2 t^2$ ($x$ is a function of $t$). |
Integrate $\int e^{-x} \cos x \,\mathrm{d}x$ | hint:$$\large{\int e^{-x} \cos x dx=\int e^{-x}\left(\frac{e^{ix}+e^{-ix}}{2} \right)}dx$$ |
proving product topology induced by Discrete Topology and Usual topology equal to this topology | $T$ has a base $\mathscr{B}$ consisting of all sets of the form $B(a,b,c)=\{a\}\times(b,c)$, where $a,b,c\in\Bbb R$, and $b<c$. Each of these sets belongs to $U$, since
$$B(a,b,c)=\left\{\langle x,y\rangle\in\Bbb R^2:\langle a,b\rangle<\langle x,y\rangle<\langle a,c\rangle\right\}$$
is an open interval in the order $\le$. Thus, $T\subseteq U$.
One way to show that $U\subseteq T$ is to show that the sets $B(a,b,c)$ are also a base for $U$. You can do it by showing that every open interval in the order $\le$ is a union of such sets. Consider the open interval $V$ with endpoints $\langle a,b\rangle$ and $\langle c,d\rangle$, with $\langle a,b\rangle\le\langle c,d\rangle$. There are two possibilities.
If $a=c$, check that $V=B(a,b,d)$.
If $a<c$, verify that $$V=\big(\{a\}\times(b,\to)\big)\cup\bigcup_{a<x<c}\big(\{x\}\times\Bbb R\big)\cup\big(\{c\}\times(\leftarrow,d)\big)\,,$$ and show how to write this set as a union of members of $\mathscr{B}$.
Note: The notations $(b,\to)$ and $(\leftarrow,d)$ are a less common but standard notation, equivalent to the more familiar $(b,\infty)$ and $(-\infty,d)$. |
If X ∼ N(µ, σ2 ) find the pdf of Y = e ^ X. | An approach (outline): you can for instance go through the cumulative distribution function, then differentiate it to get the pdf ($f_Y(x) = F^\prime_Y(x)$).
For $x \in\mathbb{R}$,
$$F_Y(x) = \mathbb{P}\{ Y \leq x \} = \mathbb{P}\{ e^X \leq x \}.$$
If $ x \leq 0$, this probability is $0$ as $e^X > 0$ a.s. For $x > 0$, you get
$$F_Y(x) = \mathbb{P}\{ e^X \leq x \}= \mathbb{P}\{ X \leq \ln x \} = F_X(\ln x)$$
which you know explicitly (as $X$ is Gaussian with known parameters). |
Derivative of $f(a)=\int_{0}^{1}\sin(t\cos(a)) \log(t)dt$ | I think you are done but if you really want to integrate it...
$$f'(a)=\int_{0}^{1}-t\sin(a)\cos(t\cos(a))\log(t) \,\mathrm{d}t$$
I will use the following for convenience.
$$I=\int\! t\cos(bt)\log(t) \,\mathrm{d}t$$
$$J =\int\! \cos(bt)\log(t) \,\mathrm{d}t$$
Integration by parts:
$$I = Jt -\int\! J\,\mathrm{d}t$$
Find $J$ by parts (omitted)
$$J =\int\! \cos(bt)\log(t) \,\mathrm{d}t = \dfrac{\log(t)\sin(bt)-\operatorname{Si}(bt)}{b}$$
Integrate $J$ by parts (omitted) and using $\int \operatorname{Si}(t) \,\mathrm{d}t = t\operatorname{Si}(t) + \cos(t)$.
$$\int J \,\mathrm{d}t = \dfrac{\operatorname{Ci}(bt)-bt\operatorname{Si}(bt)-(\log(t)+1)\cos(bt)}{b^2}$$
$$I = \left(\dfrac{bt\log(t)\sin(bt)-bt\operatorname{Si}(bt)}{b^2}\right) - \left(\dfrac{\operatorname{Ci}(bt)-bt\operatorname{Si}(bt)-(\log(t)+1)\cos(bt)}{b^2}\right)$$
$$I = \dfrac{bt\log(t)\sin(bt)+(\log(t)+1)\cos(bt)-\operatorname{Ci}(bt)}{b^2}$$
Now just sub $b = \cos(a)$ and find $\sin(a)\cdot(I(0)-I(1))$. |
Compute $\lim_{x\to0}{1 - e^{-x}\over e^x - 1}$ | Here's three ways of doing it:
Taylor
You can use Taylor, i.e. $e^{x} \sim 1 + x$, so
$$\frac{1 - e^{-x}}{e^{x} - 1} \sim \frac{1 - (1-x)}{1+x-1} = \frac{x}{x} =1$$
Factorizing
Another (but equivalent) way is factor $e^{x}$ from the denominator
$$\displaystyle \lim_{x\to0}{1 - e^{-x}\over e^x - 1} = \displaystyle \lim_{x\to0}{1 - e^{-x}\over e^{x}(1 - e^{-x})}= \lim_{x\to0}\frac{1}{e^{x}} = \frac{1}{e^0} = 1 $$
L'Hopital
You can solve by L'Hopital, i.e.
$$\displaystyle \lim_{x\to0}{1 - e^{-x}\over e^x - 1} = \displaystyle \lim_{x\to0} \frac{e^{-x}}{e^{x}} = \frac{e^{0}}{e^{0}} = 1$$
You could use $(\epsilon,\delta)$ definition as well |
Suppose $V$ is a vector space generated by a subset $S$, and $U$ is a proper subspace of $V$. Might one also say that $S$ generates $U$? | Most certainly not. For example, take $S=\{(1,0,0), (0,1,0)\}$, and $U=\{(x, 0, 0)|x\in\mathbb R\}$. Then, clearly, $U$ is the subspace of $V=\{(x,y,0)|x,y\in\mathbb R\}$, however, $S$ does not generate $U$.
In fact, it is impossible for $U$ to be generated by $S$. By definition, the vector space generated by $S$ is the smallest vector space that contains all vectors contained in $S$. Therefore, by definition, there does not exist any smaller vector space that also contains all of $S$.
In other words, if there exists some proper vector subspace $U$ such that $S\subset U$, then $V$ is, by definition, not generated by $S$. |
integral of a k-form over an oriented compact manifold | $$ \text{With }\omega = d\eta, \qquad \int_M \omega = \int_M d\eta = \int_{\partial M} \eta = 0 \quad \text{since }M \text{ has no boundary.}$$ |
Number of zeroes of an analytic function inside unit disc | Note: $|f(z)-z|<|z|\leq |z|+|f(z)|$. By Rouche's Theorem, $f$ has the same number of zeros as $z$ in $B(0,1)$. Hence, it has $1$ zero in $B(0,1)$. |
Discrete mathematics bit strings | No. There are $6$ illegal strings. Just write them out. Unless 'three in a row' means exactly three in a row ... but that's not how I would interpret that question ... |
Is this an invalid way to compute $\int_{2\pi}^\infty \frac{\sin(x)}{x} \, dx$? | $\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\color{#f00}{\int_{2\pi}^{\infty}{\sin\pars{x} \over x}\,\dd x}\ &\ =\
\overbrace{\int_{0}^{\infty}{\sin\pars{x} \over x}\,\dd x}^{\ds{\pi \over 2}}\ -\
\overbrace{\int_{0}^{2\pi}{\sin\pars{x} \over x}\,\dd x}^{\ds{\mrm{Si}\pars{2\pi}}}\ =\
\color{#f00}{{\pi \over 2} - \mrm{Si}\pars{2\pi}}
\end{align}
$\ds{\mrm{Si}}$ is the
Sine Integral Function.
The $\,\mrm{Si}\pars{z}$ series expansion is given by:
$$
\mrm{Si}\pars{z} = \sum_{n = 0}^{\infty}\pars{-1}^{n}\,
{z^{2n + 1} \over \pars{2n + 1}!\pars{2n + 1}}
$$ |
What is your idea about this conjecture? | The conjecture is false. This problem is directly related to finding large gaps between primes, and the methods of Erdos, Rankin and others.
Define the Jacobsthal function $j(q)$ to be the largest gap between consecutive reduced residues modulo $q$, that is the largest gap between elements that are relatively prime to $q$. Note that your conjecture is equivalent to asking if $$j\left(\prod_{p\leq n/2} p\right)\leq n$$ holds for all $n$. To see why, consider any sequence of $n$ consecutive numbers modulo $M=\prod_{p\leq n/2}p$. Then each of them will be divisible by some $p\leq n/2$ if and only if $j\left(\prod_{p\leq n/2}p\right)\geq n$.
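For intuition, $j$ is easy to compute by brute force for the first few primorials (a Python sketch; the printed values are what the scan finds, and they grow slowly at this scale):
from math import gcd
def jacobsthal(m):
    # largest gap between consecutive integers coprime to m (scan one full period)
    coprime = [i for i in range(1, 2 * m + 2) if gcd(i, m) == 1]
    return max(b - a for a, b in zip(coprime, coprime[1:]))
print([jacobsthal(m) for m in (2, 6, 30, 210, 2310)])   # [2, 4, 6, 10, 14]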
This function $j$ is directly related to best lower bounds for prime gaps. Indeed, if $$j\left(\prod_{p\leq X}p\right)\geq f(X)$$ infinitely often, (where $f$ is a nice function, strictly increasing etc.) then $$\max_{p_{n+1}\leq x} p_{n+1}-p_n \geq f(\log x).$$ In a recent paper of Kevin Ford, Ben Green, Sergei Konyagin, James Maynard, Terence Tao, they proved that $$j\left(\prod_{p\leq x} p\right)\gg \frac{x\log x \log \log \log x}{\log \log x},$$ and hence
$$\max_{p_n\leq X} p_{n+1}-p_n\gg \frac{\log X\log \log X \log \log \log \log X}{\log \log \log X}.$$
We remark that Erdos had put a \$10000 prize on this result, the largest amount he set for any problem.
While this result does disprove your conjecture, we do not need to use such powerful theorems. We need only use Lemma 7.13 of Montgomery and Vaughan, which states that $$\lim_{n\rightarrow \infty} \frac{j\left(\prod_{p\leq n} p\right)}{n}=\infty.$$ This is proven using an elementary sieving argument, and the result was originally given by Westzynthius. |
Find the locus of midpoints dropped from hyperbola to asymptotes | Denote our asymptotes here as $L_{+}: x - y = 0$ (the diagonal line) and $L_{-}: x + y = 0$ the off-diagonal line. We have
\begin{align}
x^2 - y^2 &= a^2 \\
\implies \frac{x + y}{\sqrt{2}} \frac{x - y}{\sqrt{2}} &= \frac{ a^2 }2 \\
\implies D(P, L_{-}) \cdot D(P, L_{+}) &= \frac{ a^2 }2 \tag*{Eq.(1)}
\end{align}
where per the setting $P$ is a point on the hyperbola, and $D(\text{point, line})$ is the perpendicular distance between the point and the line.
This is one of the basic properties of any hyperbola: the product of the distances to the asymptotes is constant.
Denote the midpoint of $PN$ as $M$, then by definition $|MN| = \frac12 |PN|$.
Consider the situation where $N$ is on $L_{+}$, which means
$$D(M, L_{+})=|MN|= \frac12 |PN| = \frac12 D(P, L_{+})\tag*{Eq.(2)} $$
Now, for our rectangular hyperbola, the distance to the other asymptote remains the same
$$D(M, L_{-}) = D(P, L_{-}) \tag*{Eq.(3)}$$
because the asymptotes are perpendicular $L_{+} \perp L_{-}$.
Denote the coordinates of $M$ as $(x_1, y_1)$ and use them in the last step of rewriting Eq.(1):
\begin{align}
\text{take Eq.(2) into Eq.(1)}& & \implies && D(P, L_{-}) \cdot 2 D(M, L_{+}) &= \frac{ a^2 }2 \\
\text{from Eq.(3)}& & \implies && D(M, L_{-}) \cdot D(M, L_{+}) &= \frac{ a^2 }4 \\
&& \implies && \frac{ x_1 + y_1}{ \sqrt{2} } \cdot \frac{ x_1 - y_1}{ \sqrt{2} } &= \frac{ a^2 }4 \\
\end{align}
This gives us the hyperbola $\displaystyle (x_1)^2 - (y_1)^2 = \frac{a^2}2$ with the same asymptotes just with a different "constant". |
How to prove $f(x)=5\sqrt{x^4+1}$ is a transcedental function | $f$ satisfies the equation
$$
f(x)^2 - 25x^4 - 25 = 0
$$
which makes it algebraic (not transcendental) by the standard definition given
here.
Perhaps you are thinking of a different function.
Edit:
$f(x) = \sqrt[5]{x^4 + 1}$ is also algebraic. In this case,
$$
f(x)^5 - x^4 - 1 = 0
$$ |
How to identify which terms are infinite sequences? | To indicate that a sequence goes on forever, you can use ellipses, as in {$\cdots-2, -1, 0, 1, 2\cdots$}, like you did. An infinite sequence can also be identified when something says "all integers", "all even numbers greater than $-5$", "all numbers divisible by $10$", etc. The word "all" signals that each of those sequences goes on forever, i.e. that it does not contain just finitely many numbers. |
Prove $ \exists c \in [0,1]: \int_{0}^{1}sin(x^3)dx = \int_{0}^{c}sin(x^2)dx $ | Intermediate value theorem would be the way to go. Define the antiderivative starting at $0$ by
$$F(t) \equiv \int_0^t\sin(x^2)\:dx$$
We have that $F(0) = 0$ and from your reasoning above that
$$F(1) = \int_0^1 \sin(x^2)\:dx \geq \int_0^1 \sin(x^3)\:dx$$
Establish the continuity of $F(t)$. Can you take it from here? |
approximation for formula -- relation to exp function | We have
$$\log f(n) = n\log (1-n^{-1/4}) = - n \sum_{k=1}^\infty \frac{1}{kn^{k/4}}.$$
In particular, ignoring all but the first term of the series,
$$\log f(n) < -n \frac{1}{n^{1/4}} = - n^{3/4},$$
whence
$$f(n) < \exp \left(-n^{3/4}\right).$$
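A quick numeric check of this bound, working with logarithms to avoid underflow (assuming Python):
import math
for n in (10**2, 10**4, 10**6):
    log_f = n * math.log(1 - n**-0.25)   # log f(n)
    print(log_f < -n**0.75)              # True: f(n) < exp(-n^{3/4})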
Using more terms of the Taylor series of the logarithm, we get tighter bounds, e.g.
$$\log f(n) < - n^{3/4} - \frac{1}{2}\sqrt{n} \Rightarrow f(n) < \exp \left(- \left(n^{3/4} + \frac{1}{2}\sqrt{n}\right)\right).$$ |
Euclidean distances of polar coordinates? | The expression $(\cos(a)r-\cos(b)r)^2-(\sin(a)r-\sin(b)r)^2$ inside the square root sign is not correct. It should be $(\cos(a)r-\cos(b)r)^2+(\sin(a)r-\sin(b)r)^2$
Expanding the squares and using the fact that $\cos^{2} x+\sin ^{2}x=1$ we get $d=r\sqrt {2-2\cos (a-b)}$ since $\cos (a-b)=\cos a \cos b+\sin a \sin b$. You can further simplify this using the formula $1-\cos t=2\sin^{2}(\frac t 2)$. You can now finish the job. |
A question related to discrete and subspace topology | Hint: Either $A$ has a least positive element or it doesn't. The former gives you a discrete set and the latter a dense set (provided $A \neq \{0\}$).
EDIT: Adding details for the case where $A$ has a least positive element.
Let $\epsilon \in A$ be this least positive element.
Claim. $|x - y| > \epsilon/2$ for all $x, y \in A$ with $x\neq y$.
Proof. Suppose not. Let $x, y \in A$ be distinct such that $|x - y| \le \epsilon/2$.
Set $d = x - y$. Note that $d \in A$ and $-d \in A$. Thus, we may assume that $d > 0$.
Thus, we have $0 < d \le \epsilon/2 < \epsilon$. This contradicts that $\epsilon$ is the smallest positive element of $A$.
Thus, the claim is proven and it follows that $A$ is discrete. |
general topology, subsets and limit points | The proof is not correct. What you should do is to take an element $x\in A'$ and then to prove that $x\in B'$ (which is easy). |
What is the probability that the first head will appear on the even numbered tosses | Alternative approach - to see the first head on an even numbered toss we need to have:
Toss 1 is a tail (probability 1-R)
Renumbering Toss 2 as Toss 1, Toss 3 as Toss 2 etc. we now want to see the first head on an odd numbered toss
So
P(first head on even toss) = P(first head on odd toss) x (1-R)
but if we denote P(first head on even toss) by p then P(first head on odd toss) = 1-p, so
$p = (1-p)(1-R)$
$\Rightarrow p + p(1-R) = 1-R$
$\Rightarrow p = \frac{1-R}{2-R}$
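A quick simulation agrees (a Python sketch, with $R$ the probability of heads on each toss):
import random
def first_head_even(R, trials=10**5):
    even = 0
    for _ in range(trials):
        toss = 1
        while random.random() >= R:   # tails: keep tossing
            toss += 1
        even += (toss % 2 == 0)
    return even / trials
R = 0.3
print(first_head_even(R), (1 - R) / (2 - R))   # both ~ 0.4118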
Sanity checks: (i) p=0 when R=1 (ii) p approaches 1/2 as R approaches 0. |
when to sum two random variables | Both typists independently make errors within the publication at a known average rate, with each error independent of the placement of other errors (Poisson distribution). Thus, other than which half of the publication they occur in, the errors made within the publication will be independent of the placement of other errors and be made at an average rate (per publication) that is the sum of the two (again, a Poisson distribution).
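A quick simulation of this additivity (a numpy sketch; the rates $2$ and $3$ are arbitrary):
import numpy as np
rng = np.random.default_rng(0)
x = rng.poisson(2.0, 10**6) + rng.poisson(3.0, 10**6)   # errors from each half
print(x.mean(), x.var())   # both ~ 5.0, as for a Poisson(2 + 3) variable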
To be clear. Suppose the publication is $2n$ pages long, and each writes $n$ pages. Then if the first makes an average of $na$ errors per publication, and the second makes an average of $nb$ errors per publication, then the total errors per the publication is Poisson distributed with mean of $n(a+b)$ errors. |
black and white balls in the box | First, the number of balls in the box decreases by one at each step. Suppose $B(t)$ and $W(t)$ are the number of black and white balls present after $t$ steps. Since we start with $B(0)+W(0)=2731$ and $B(t+1)+W(t+1)=B(t)+W(t)-1$, we have that $B(2730)+W(2730)=1$. At this point we can no longer continue the process. So at the end, there is either a white ball or a black ball left.
Notice that the number of white balls present at any time is even. For instance, suppose we have just done step $t$. If we choose two black balls, then $B(t+1)=B(t)-1$ and $W(t+1)=W(t)$. If we choose a white ball and a black ball, then $B(t+1)=B(t)-1$ and $W(t+1)=W(t)$. If we choose two white balls, then $B(t+1)=B(t)+1$ and $W(t+1)=W(t)-2$. Necessarily, since $W(0)$ is even, it stays even throughout the process.
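The parity argument is easy to test by simulation (a Python sketch using the transitions above, with small arbitrary starting counts; $w$ must be even):
import random
def final_ball(b, w):
    while b + w > 1:
        balls = ['B'] * b + ['W'] * w
        x, y = random.sample(balls, 2)     # draw two balls at random
        if x == 'W' and y == 'W':
            b, w = b + 1, w - 2            # two whites removed, a black added
        else:
            b, w = b - 1, w                # otherwise a black is lost
    return 'B' if b == 1 else 'W'
print({final_ball(3, 4) for _ in range(100)})   # {'B'}: always a black ball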
Hence $W(2730)=0$ and $B(2730)=1$. Even though what happens throughout the process is random, by parity, the process must end with only one black ball left. |
Trignometric Indentities | $$
4\sin^2x+7\cos^2x=4\sin^2x+4\cos^2x+3\cos^2x=4+3\cos^2x
$$ |
Find the value of $x$ that satisfies the equation $\log_{10} \left(\frac{x^{\frac{1}{x}}}{x^{\frac{1}{x+1}}}\right) = 1/5050$ . | This is equivalent to $$\log(x^{\frac{1}{x(1+x)}})=\frac{1}{5050}$$
Then:
$$x^{\frac{1}{x(1+x)}}=10^{\frac{2}{100\cdot 101}}$$
Or
$$x^{\frac{1}{x(1+x)}}=100^{\frac{1}{100\cdot 101}}$$
Thus $x=100$. |
Are these two distinct series representations for $\frac{1}{1-x^2}$ correct? | $|x^2|<1\iff-1<x^2<1\iff 0\leqslant x^2<1\text{ (because $x^2\geqslant0$)}\iff x\in(-1,1)$ |
Convergence of an improper integral $\int_3^\infty \frac{\sin(x)}{x+2\cos(x)}dx$ | \begin{align*}
\int_{3}^{M}\dfrac{\sin x}{x+2\cos x}dx&=-\dfrac{\cos x}{x+2\cos x}\bigg|_{x=3}^{x=M}-\int_{3}^{M}\dfrac{-(2\sin(x)-1)\cos x}{-(x+2\cos x)^{2}}dx\\
&=-\dfrac{\cos M}{M+2\cos M}+\dfrac{\cos 3}{3+2\cos 3}-\int_{3}^{M}\dfrac{-\sin(2x)+\cos(x)}{(x+2\cos x)^{2}}dx.
\end{align*}
Now each of the integrals
\begin{align*}
\int_{3}^{\infty}\dfrac{-\sin(2x)}{(x+2\cos x)^{2}}dx.
\end{align*}
\begin{align*}
\int_{3}^{\infty}\dfrac{\cos(x)}{(x+2\cos x)^{2}}dx.
\end{align*}
absolutely converges, since
\begin{align*}
\int_{3}^{M}\dfrac{1}{(x+2\cos x)^{2}}dx\leq\int_{3}^{M}\dfrac{1}{(x-2)^{2}}dx.
\end{align*}
So the improper integral converges. |
Affine space over algebraically closed field cannot be written as a finite union of lines. | The hypothesis $k$ algebraically closed is redundant. $k$ infinite is enough to get the result.
We can even prove the stronger result that an affine space $E$ over an infinite field $k$ can’t be the finite union of proper affine subspaces $E_1, \dots E_n$. Considering the underlying linear subspaces, it is enough to prove the similar result for linear subspaces.
For the proof we denote $F_i= E_1 \cup \dots \cup E_i$ and proceed by induction. The result is clear for $n=1$ as $E_1$ is supposed to be a proper linear subspace.
Suppose that $n \ge 2$ and assume by contradiction that $E=F_n$. According to the induction hypothesis, $E\neq F_{n-1}$. So there exists $x \in E \setminus F_{n-1}$. Let also $y$ be in $E \setminus E_n$.
The map $u : k \to E$ defined by $u(\lambda)= \lambda x +y$ is taking values in $F_{n-1}$ as $\lambda x +y \in E_n$ implies $y \in E_n$. As $k$ is supposed to be infinite, it exists $\alpha \neq \beta$ and $j \le n-1$ such that $u(\alpha),u(\beta) \in E_j$. This leads to the contradiction
$$x =(\alpha-\beta)^{-1}(u(\alpha)-u(\beta)) \in E_j.$$ |
what is the meaning of this symbol $ ∨?$ | The most common use of this symbol is as logical operator "or", which connects two statements.
So for two statements $A$ and $B$ the expression $A\vee B$ would read "A or B".
Like many other symbols, this one has other uses too, so it depends on the context. You linked a set-theory related topic.
The other symbol "$\wedge$" is the logical "and".
Edit: Also note that $\cup$ is a set-theoretic operation, while $\vee$ connects statements, so the two belong to different settings.
You define the set theoretic $\cup$ of two sets $X$ and $Y$ as the set
$X\cup Y:=\{x: x\in X\vee x\in Y\}$
It is a preference of style if you use those symbols, or just write "or", "and" instead.
Outside of the field of mathematical logic, most people avoid these symbols, as they are somewhat unpleasant to read. |
Expected Number of Events over Multiple Poisson Intervals | The gaps between events are exponentially distributed with rate $50000/s$, so the probability a particular gap is less than $5ns$ is $1-e^{-50000\times 5\times 10^{-9}}\approx 0.00024997$
With about $50000$ gaps in a second, you would then expect about $50000 \times 0.00024997 \approx 12.498$ of them to be shorter than $5ns$
This approach gives a good approximation when the expected number of gaps is large, but it is not so good when the expected number is small. One reason is that in the large-count case you get to consider the gap after the second event as well; another is that in the small-count case, seeing a big gap reduces the total number of other gaps that can occur.
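A simulation sketch (numpy; the seed and variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
rate, window = 50_000, 5e-9                   # 50000 events/s, 5 ns window

gaps = rng.exponential(1 / rate, size=rate)   # roughly one second worth of gaps
print(rate * (1 - np.exp(-rate * window)))    # expected count, ~ 12.498
print((gaps < window).sum())                  # simulated count, typically 12-13
```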
Random processes: Repair time | Let $\lambda_R$ = rate of printer repair, $\lambda_B$ = rate of printer breakdown. Note that these are Poisson processes (which are known to have exponential wait times and are "memoryless"). A Poisson process is one in which events occur with a constant probability per unit time; on average, $\lambda$ events happen in a given interval.
Let's start by looking at one printer. It follows the following propensity functions:
Broken $\xrightarrow{\lambda_R}$ Working, Working$\xrightarrow{\lambda_B}$ Broken
Written as a markov process,
$\frac{dP(X=0)}{dt}=\lambda_B P(X=1)-\lambda_R P(X=0)$,
where X is the number of working printers (and we're only looking at 1 printer). Since we're looking at the stationary distribution, we set the derivative equal to 0. Then,
$0=\lambda_B P(X=1)-\lambda_R P(X=0)$
Writing $P(1)$ for $P(X=1)$ and $P(0)$ for $P(X=0)$, we find that
$\frac{P(1)}{P(0)}=\frac{\lambda_R}{\lambda_B}$. Then, noting by conservation of probability that $P(0)+P(1)=1$ and substituting $P(1)=\frac{\lambda_R}{\lambda_B}P(0)$, we find
$P(0)=\frac{1}{1+\frac{\lambda_R}{\lambda_B}}$
Therefore,
$P(1)=\frac{\frac{\lambda_R}{\lambda_B}}{1+\frac{\lambda_R}{\lambda_B}}$
Testing our intuition: if the repair rate were very large and the breaking rate small, you can see that $P(1)\rightarrow1$.
Note that the solution can be thought of as a Bernoulli process with distribution Bernoulli($\frac{\frac{\lambda_R}{\lambda_B}}{1+\frac{\lambda_R}{\lambda_B}}$). What is the distribution of the sum of four of these distributions? Let $B_P$ be a bernoulli variable with "success" parameter $p_w=P(1)=\frac{\frac{\lambda_R}{\lambda_B}}{1+\frac{\lambda_R}{\lambda_B}}$
$Z=B_p+B_p^\prime+B_p^{\prime\prime}+B_p^{\prime\prime\prime}$
Where primes denote independent Bernoulli variables. Remember that sums of independent random variables correspond to their convolution. Fortunately, convolutions of independent Bernoulli variables have been worked out (for example, see case of sum of 2 Bernoulli RVs). Namely,
$\sum\limits_{i=1}^N \mathrm{Bernoulli}(p_w) = \mathrm{Binomial}(N,p_w)$, where $N$ is the total number of printers here, and $p_w$ is the success probability as defined above for this problem.
Then the question of what is the probability at any given time of 0 printers functioning is plug and chug into the Binomial PDF.
$P(X=0)=\binom{4}{0} p_w^0 (1-p_w)^4 = (1-p_w)^4.$ So plugging in $\lambda_B=\frac{1}{50}$, $\lambda_R=\frac{1}{20}$ (so $p_w=\frac57$), I get that all printers are broken about $0.666\%$ of the time. Let me know if you have questions.
Also, note that you can solve this by writing out the full transition probabilities between the states 0 printers through 4 printers functioning, and you should get the same answer. Computing P(0) in that case is slightly more involved but the approach is the same. |
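The final computation as a sketch in code (the rates are the ones given; only their ratio matters):

```python
lam_B = 1 / 50          # breakdown rate
lam_R = 1 / 20          # repair rate

r = lam_R / lam_B       # = 2.5
p_w = r / (1 + r)       # P(a given printer works) = 5/7
print(p_w)              # 0.714285...
print((1 - p_w)**4)     # P(all four broken) = (2/7)^4, about 0.666%
```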
the set of limit points is equal to the closure (?) | This is false for at least one definition of limit point.
If we define (as is very common) $x$ is a limit point of $A \subseteq X$ in the space $(X,\mathcal{T})$ iff for every $O \in \mathcal{T}$: if $x \in O$, then $O \cap (A \setminus \{x\}) \neq \emptyset$.
Another definition (which I learnt as adherent point): $x$ is an adherent point of $A \subseteq X$ in the space $(X,\mathcal{T})$ iff for every $O \in \mathcal{T}$: if $x \in O$, then $O \cap A \neq \emptyset$.
Then we have that $$\overline{A} = \{x: x \text{ adherent point of } A \}\text{.}$$
And $$\{x: x \text{ limit point of } A \} \subseteq \overline{A}\text{,}$$ without there being equality. The set $\{x: x \text{ limit point of } A \} $ is also denoted $A'$, the so-called derived set of $A$. $A$ could have isolated points, i.e. points $x \in A$ such that $\exists O \in \mathcal{T}: O \cap A = \{x\}$, and then for all $A$: $\overline{A} = A' \cup \{x: x \text{ isolated in } A\}$.
E.g. $A =\mathbb{Z}$ in the reals in the usual topology has $A' =\emptyset$, i.e. no limit points, but it equals its set of adherence points, so is closed. |
Besides $2^4$ and $4^2$, are there any other numbers that, with the base and exponent flipped, will equal the same value? | The numbers would have to share all prime factors. In other words, if one was divisible by 7, the other would have to be as well. Only the power of each prime factor could vary.
The numbers need to be close, or the larger exponential would ruin things. Big exponentials grow fast.
The pair $2$ and $4$ works because both are powers of $2$ and close together.
A close call would be $12$ and $18$ (with $12=2^2\cdot3$ and $18=2\cdot3^2$), but no cigar; it is close because $2^2$ is almost equal to $3$. Since $2^4$ is about $17$, the pair $544=2^5\cdot17$ and $578=2\cdot17^2$ looks like a candidate as well. But no dice.
Assume both numbers have the same two prime factors $a$ and $b$, with exponents $k$ and $l$ for the first number and $m$ and $n$ for the second, so the numbers are $a^kb^l$ and $a^mb^n$. Since $a$ and $b$ are coprime, we can ignore $b$ and look for cases where the exponents of $a$ on both sides are equal.
Raising each number to the other, the two exponents of $a$ are $k\,a^mb^n$ and $m\,a^kb^l$. We can ignore $b$ again, because $a$ and $b$ are coprime. We need a case where $k\,a^m=m\,a^k$. This means $k$ and $m$ are both powers of $a$. By the same token, $l$ and $n$ are powers of $b$.
EDIT: Similar arguments apply to larger numbers of primes. |
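A brute-force confirmation over a modest range (my own sketch; Python's exact integer arithmetic makes the comparison reliable):

```python
# search for integer pairs x < y with x**y == y**x
hits = [(x, y) for x in range(2, 50) for y in range(x + 1, 500)
        if x**y == y**x]
print(hits)  # [(2, 4)] and nothing else in this range
```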
Preservation of Krull dimension under inverse limit | No. Let $p$ be a prime number. The ring of $p$-adic integers $\mathbb Z_p$ can be constructed as an inverse limit of the rings $\mathbb Z/ p^n\mathbb Z$, with the maps $\mathbb Z/p^{n+1}\mathbb Z \rightarrow \mathbb Z/p^n\mathbb Z$ being the canonical surjections. All the rings $\mathbb Z/p^n\mathbb Z$ have Krull dimension zero (being finite), but $\mathbb Z_p$ has Krull dimension at least one, since it is an integral domain containing a nonzero prime ideal $p \mathbb Z_p$.
See also http://mathworld.wolfram.com/p-adicInteger.html |
Example of the topology which has a dense closed point and has non-empty proper closed subset which does not contain any closed point | Let $X=H\cup Z$ where $H$ is infinite, $|Z|\ge2$, and $H\cap Z=\emptyset$, with the topology $\tau=\{\emptyset\}\cup\{H\setminus F:F\subseteq H,\ F\text{ finite}\}\cup\{X\setminus F:F\subseteq H,\ F\text{ finite}\}$.
$H$ is a dense subset of $X$, and every finite subset of $H$ is closed; $Z$ is closed but no nonempty proper subset of $Z$ is closed.
P.S. Here is an example where $X$ is a $T_0$ space.
Let $X=H\cup Z$ where $H$ and $Z$ are disjoint infinite sets, and let $\lt$ be a total ordering of $Z$ with no greatest element. Let $\tau$ be the collection of all sets $U\subseteq X$ satisfying the conditions:
(1) if $U\ne\emptyset$ then $H\setminus U$ is finite;
(2) if $a,b\in Z$, $a\lt b$, and $b\in U$, then $a\in U$.
Then $\tau$ is a $T_0$ topology on $X$; a singleton set $\{x\}$ is closed in $X$ (a "closed point") if and only if $x\in H$; the set $H$ of all closed points is an open dense subset of $X$; and $Z$ is a nonempty closed set containing no closed points. |
Eisenstein Series and cusps of $\Gamma_{1}(N)$ | The question has some implicit hypotheses, possibly not clear to the questioner, and this implicitness and the ambiguities about it complicate matters. First, the more natural descriptions of Eisenstein series for GL(2) of weights k>2 are not of the form in the question, but are $\sum_{c,d} 1/(cz+d)^k$. This does not converge for $k=2$, so $k=2$ has to be approached more delicately (via an analytic continuation, Hecke summation, producing something like the expression in the question). However, at level one, that is, for $SL_2(\mathbb Z)$, there is no truly holomorphic Eisenstein series of weight $2$. The analytic continuation has an extra term, which one may discard, but then destroying the literal automorphy condition. Maybe that doesn't matter, but one should be careful about "understandings".
Thus, depending on what one means, wants, or needs: at higher levels the meromorphic continuation can produce holomorphic modular forms of weight two. (This positive outcome always occurs for Hilbert modular forms, that is, for totally real ground fields other than $\mathbb Q$.) First, whatever description one chooses for "Eisenstein series" (attached to cusps?), a suitable weighted average of the level-7 (for example) such should be level-one. A literal notion of holomorphy tells us there is no level-one, weight-two such. Thus, there are (at most) six linearly independent Eisenstein series at that level, so it is not quite possible to "attach" one to each cusp. It is not hard to say more.
Then there is the further issue of expressing various Eisenstein series in terms of each other, by the group action. At square-free level, the underlying (!) representation theory is simpler (Iwahori-fixed vectors in principal series are well understood, at least up to a very useful point.)
But, at this point, without knowing more precisely what the questioner wants, or may discover is wanted, there are too many things that can be said to know which to choose to say. :) |
Is it true that $\delta$ is a Lebesgue number for the given cover? | Yes, it is.
First of all, you must prove $f$ is continuous. This follows from the triangle inequality, since for all $x,y \in X$ we have
$$
d\left(x,U^c_i\right) \le d(x,y)+ d\left(y,U^c_i\right).
$$
Now you must check that your $\delta$ is not zero. But, since the $U_i$'s are open and cover $X$, every $x\in X$ belongs to some $U_i$. Then there exists $\epsilon>0$ such that the ball $B(x,\epsilon) \subset U_i$, which implies $f(x) \ge d(x, U_i^c) \ge \epsilon.$
This proves that $\delta >0$.
Now, let $B$ be a subset of $X$ of diameter less than $\delta$ and $x\in B$. Suppose $f(x) = d(x,U_m^c)$ for some $m$. Since $f(x)\ge\delta$, we have $d(x,z) \ge \delta$ for all $z\in U_m^c$. Since $\operatorname{diam}(B) < \delta$, we have $d(x,y) < \delta$ for all $y \in B$, so no $y\in B$ can lie in $U_m^c$; that is, $B\subset U_m$, which concludes the proof.
Metric over a Lie algebra $\mathfrak{u}(n)$ | Let $A$ and $B$ be complex $n \times n$ matrices with respective $(j, k)$ entries $A_{jk}$ and $B_{jk}$, and note that $B^*$ (the conjugate transpose) has $(j, k)$ entry $\bar{B}_{kj}$. By definition,
$$
\langle A, B\rangle
= \Re\bigl(\text{Tr}(AB^*)\bigr)
= \Re \sum_{j,k=1}^n A_{jk} \bar{B}_{jk},$$
which is precisely the Euclidean inner product of $A$ and $B$ if these matrices are identified with complex vectors in $\mathbf{C}^{n^2}$. The resulting pairing on $\mathfrak{u}(n)$ is the restriction of this inner product.
Generally, if $G$ is a Lie group and $g \in G$, then the left multiplication map $\ell_g:G \to G$ is a diffeomorphism sending $e$ to $g$, so the push-forward $(\ell_g)_*:\mathfrak{g} \to T_gG$ is an isomorphism of vector spaces. An inner product on $\mathfrak{g}$ thereby determines an inner product on each tangent space $T_gG$, and since multiplication is smooth (as a function of $g$) these inner products constitute a Riemannian metric on $G$.
(In case it matters, this left-invariant metric is only "unique" in the sense that it is completely determined by the choice of inner product on $\mathfrak{g}$.) |
How do we call a pair of sets between which there is a bijection that need not have additional property? | You can call them equinumerous. |
PDE changing Boundary Conditions | Non-uniqueness is possible when the boundary condition is given along a characteristic curve. In your example the characteristic curves are $x-y=C$ and the BC is given on $y=0$, which is not a characteristic, so in this case there is a unique solution $u(x,y)=\sin(x-y)$.
Consider now the same equation with BC $u(x,x)=0$. Then $u(x,y)=\phi(x-y)$ is a solution for any differentiable function $\phi$ such that $\phi(0)=0$. |
Non-discrete valuation rings of Krull dimension 1 | Exercise $2$ in Bourbaki, Commutative Algebra, ch. 6, Valuations, §3, explains the construction of a valuation ring associated to a given totally ordered group $\Gamma$.
As height $1$ ordered groups are just subgroups of the ordered group $\mathbf R$, you just have to take a non-discrete subgroup of $\mathbf R$ for this construction, viz. $\mathbf Q$. |
Interesting a Fibonacci quesiton. Need help. | There is a simple formula for $F_n$, but it has nothing to do with powers of e.
Define
$$\phi := \frac{1+\sqrt{5}}{2}$$
Then, the number
$$\frac{\phi^n}{\sqrt{5}}$$
is very close to the $n$th Fibonacci number $F_n$.
In fact, rounding this number to the nearest integer gives exactly $F_n$.
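A sketch of the rounding formula in code; note that in double precision the rounding is only reliable up to roughly $n=70$, after which floating-point error exceeds $1/2$:

```python
from math import sqrt

def fib_binet(n):
    """Round phi**n / sqrt(5) to the nearest integer."""
    phi = (1 + sqrt(5)) / 2
    return round(phi**n / sqrt(5))

def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib_binet(n) == fib_exact(n) for n in range(1, 71))
```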
Is there really no way to integrate $e^{-x^2}$? | That function is integrable. As a matter of fact, any continuous function (on a compact interval) is Riemann integrable (it doesn't even actually have to be continuous, but continuity is enough to guarantee integrability on a compact interval). The antiderivative of $e^{-x^2}$ (up to a constant factor) is called the error function, and can't be written in terms of the simple functions you know from calculus, but that is all. |
Spaces with different homotopy type | By the Künneth formula, we have
$$
H^*(S^1 \times S^1; \Bbb Z) \cong \Lambda_\Bbb Z[\alpha, \beta],
$$
the exterior algebra on two variables, with $|\alpha| = |\beta| = 1$. This can also be shown by direct computation using simplicial homology.
By the formula for the cohomology ring of a wedge sum, we have
$$
\tilde H^*(S^1 \vee S^1 \vee S^2; \Bbb Z) \cong \tilde H^*(S^1; \Bbb Z) \oplus \tilde H^*(S^1; \Bbb Z) \oplus \tilde H^*(S^2; \Bbb Z).
$$
Now we can see that the cup product of the two generators of $H^1(S^1 \times S^1; \Bbb Z)$ is nontrivial, whereas the cup product of the two generators of $H^1(S^1 \vee S^1 \vee S^2; \Bbb Z)$ is trivial. It follows that the two spaces are not homotopy equivalent. |
Prove that $A^{10}$ is equal to linear combination of $A^k, k = 1,...,9$ and identity matrix. | Using the Cayley–Hamilton theorem, there is a polynomial $\rho$ such that $\rho(A)=0$, where it is understood that the right-hand side is the $3\times 3$ zero matrix. This polynomial is:
$$
\rho(t) = \det (A-t\cdot I) = (1-t)(t^2-3t+1)=-t^3+4t^2-4t+1
$$
This gives that $$\rho(A) =-A^3+4A^2-4A+I = 0$$
In particular,
$$
A^3 = 4A^2-4A+I \; \; \; \; (1)
$$
Thus, multiplying $(1)$ by $A^{7}$,
$$
A^{10}=4A^9-4A^8+A^7
$$
In fact, we can write $A^{10}$ as a linear combination of $A$, $A^2$, and $I$ by using equation $(1)$ repeatedly.
For example, Equation $(1)$ gives:
$$
A^4 = 4A^3-4A^2+A = 4(4A^2-4A+I)-4A^2+A = 12A^2-15A+4I
$$ |
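The repeated substitution can be automated; here is a sketch (the reduction rule $A^3=4A^2-4A+I$ is equation $(1)$; the function name is mine):

```python
def power_as_quadratic(n):
    """Return (c2, c1, c0) such that A**n = c2*A^2 + c1*A + c0*I,
    reducing with A^3 = 4A^2 - 4A + I at every step."""
    c2, c1, c0 = 0, 1, 0                        # start from A^1
    for _ in range(n - 1):
        # multiplying by A turns c2*A^2 into c2*A^3 = c2*(4A^2 - 4A + I)
        c2, c1, c0 = c1 + 4 * c2, c0 - 4 * c2, c2
    return c2, c1, c0

print(power_as_quadratic(3))   # (4, -4, 1), i.e. equation (1)
print(power_as_quadratic(10))  # (4180, -5775, 1596)
```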
Proving the inequality $\frac{1}{k!}+\frac{1}{(k + 1)!}+\frac{ 1}{ (k + 2)! }+...\leq {(\frac{e}{k})}^k$ | $$\sum_{n\geq k}\frac{1}{n!}=\frac{1}{k!}\left(1+\frac{1}{k+1}+\frac{1}{(k+1)(k+2)}+\ldots\right)\leq \frac{1}{k!}\sum_{j\geq 0}\frac{1}{(k+1)^j}=\frac{k+1}{k\cdot k!}$$
hence we just need to prove that:
$$ \left(1+\frac{1}{k}\right)\frac{k^k}{k!}\leq e^k. \tag{1} $$
The inequality trivially holds for $k=1$; moreover, by setting $A_k = \left(1+\frac{1}{k}\right)\frac{k^k}{k!}$ we have:
$$ \frac{A_{k+1}}{A_k}=\frac{k(k+2)}{(k+1)^2}\left(1+\frac{1}{k}\right)^k \leq \left(1+\frac{1}{k}\right)^k \leq e\tag{2},$$
hence $(1)$ holds for every $k\geq 1$ by induction. |
Nonlinear partial differential equation of first order | You have
$$u_t + g(u)u_x = 0$$
Using the method of characteristics, we find
$$\frac{dt}{ds} = 1 \implies dt = ds \ \ (1)\\
\frac{dx}{ds} = g(u) \implies \frac{dx}{dt} = g(u) \ \ (2) \\
\frac{du}{ds} = 0 \implies \frac{du}{dt} = 0 \ \ (3)$$
Now, from $(2)$, after integrating we see that
$$x(t) = g(u)t + x_0 \ \ (4)$$
where $x(0) = x_0$ is your constant of integration.
Now, from $(3)$, we see that $u$ is constant in $t$ i.e.
$$u(x, t) = f(x_0) \ \ (5)$$
where we have a function of integration and not a constant (why?).
So substituting $(4)$ into $(5)$
$$u(x, t) = f(x - g(u)t)$$
Applying the initial condition
$$\begin{align}
u(x, 0) &= f(x) \\
&= \phi(x) \\
\end{align}$$
Hence,
$$u(x, t) = \phi(x - g(u)t)$$ |
The bisector of the exterior angle at vertex C of triangle ABC intersects the circumscribed circle at point D. Prove that AD=BD | Note that since $CD$ is the external bisector of $\angle ACB$, we have $\angle BCA = 180^\circ-2\angle DCB$, hence $\angle ACD= 180^\circ-\angle DCB$, and therefore $\angle ABD = \angle BCD$ (using the fact that opposite angles of a cyclic quadrilateral sum to $180^\circ$).
Again by cyclic quads, we get that $\angle BCD= \angle DAB$.
Hence we have $\angle DAB=\angle BCD=\angle ABD$ and the result follows. |
Closed form for a binomial series | Let
$$
g(n,m)=\frac1{\binom{2n}{n}}\sum_{j=1}^n\frac{(-1)^j}{j^m}\binom{2n}{n+j}\tag{1}
$$
then it can be shown that
$$
g(n,m)=g(n-1,m)+\frac{g(n,m-2)}{n^2}\tag{2}
$$
The proof of $(2)$ is below.
Since
$$
\sum_{j=0}^k(-1)^j\binom{n}{j}=(-1)^k\binom{n-1}{k}\tag{3}
$$
we get
$$
g(n,0)=-\frac12\tag{4}
$$
then, using $(2)$ and $(4)$, we get
$$
g(n,2)=-\frac12H_n^{(2)}\tag{5}
$$
Therefore,
$$
\bbox[5px,border:2px solid #C0A000]{\sum_{j=1}^n\frac{(-1)^j}{j^2}\binom{2n}{n+j}=-\frac12H_n^{(2)}\binom{2n}{n}}\tag{6}
$$
where
$$
H_n^{(k)}=\sum_{j=1}^n\frac1{j^k}\tag{7}
$$
Using $(2)$ and $(5)$, we get that
$$
g(n,4)=-\frac14\left(\left(H_n^{(2)}\right)^2+H_n^{(4)}\right)\tag{8}
$$
therefore,
$$
\bbox[5px,border:2px solid #C0A000]{\sum_{j=1}^n\frac{(-1)^j}{j^4}\binom{2n}{n+j}=-\frac14\left(\left(H_n^{(2)}\right)^2+H_n^{(4)}\right)\binom{2n}{n}}\tag{9}
$$
Proof of $\boldsymbol{(2)}$:
Note that
$$
\begin{align}
g(n,m)
&=\sum_{j=1}^n\frac{(-1)^j}{j^m}\frac{n!}{(n-j)!}\frac{n!}{(n+j)!}\\
&=\frac{(-1)^n}{n^m\binom{2n}{n}}+\sum_{j=1}^{n-1}\frac{(-1)^j}{j^m}\frac{(n-1)!}{(n-j-1)!}\frac{(n-1)!}{(n+j-1)!}\frac{n^2}{n^2-j^2}\\
g(n-1,m)
&=\hphantom{\frac{(-1)^n}{n^m\binom{2n}{n}}+}\sum_{j=1}^{n-1}\frac{(-1)^j}{j^m}\frac{(n-1)!}{(n-j-1)!}\frac{(n-1)!}{(n+j-1)!}\\
g(n,m)-g(n-1,m)
&=\frac{(-1)^n}{n^m\binom{2n}{n}}+\sum_{j=1}^{n-1}\frac{(-1)^j}{j^m}\frac{(n-1)!}{(n-j-1)!}\frac{(n-1)!}{(n+j-1)!}\frac{j^2}{n^2-j^2}\\
&=\frac{(-1)^n}{n^m\binom{2n}{n}}+\frac1{n^2}\sum_{j=1}^{n-1}\frac{(-1)^j}{j^{m-2}}\frac{n!}{(n-j)!}\frac{n!}{(n+j)!}\\
&=\frac1{n^2}\sum_{j=1}^n\frac{(-1)^j}{j^{m-2}}\frac{n!}{(n-j)!}\frac{n!}{(n+j)!}\\
&=\frac{g(n,m-2)}{n^2}
\end{align}
$$ |
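The closed forms $(6)$ and $(9)$ are easy to verify for small $n$ with exact rational arithmetic (a sketch using Python's fractions module):

```python
from fractions import Fraction
from math import comb

def lhs(n, m):
    return sum(Fraction((-1)**j, j**m) * comb(2*n, n + j)
               for j in range(1, n + 1))

def H(n, k):  # generalised harmonic number H_n^(k)
    return sum(Fraction(1, j**k) for j in range(1, n + 1))

for n in range(1, 9):
    assert lhs(n, 2) == Fraction(-1, 2) * H(n, 2) * comb(2*n, n)
    assert lhs(n, 4) == Fraction(-1, 4) * (H(n, 2)**2 + H(n, 4)) * comb(2*n, n)
print("identities (6) and (9) verified for n = 1..8")
```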
What does a hyperreal version of the Cantor Set look like? | This construction just doesn't work. The set $\bigcap_n X_n$ does not consist of intervals, at least not intervals that have well-defined middle thirds. For example, one of these "intervals" is the set of all nonnegative infinitesimal hyperreals. This "interval" has a lower endpoint $0$, but no upper endpoint (there is no largest infinitesimal, or smallest non-infinitesimal), and there is no meaningful way to define a "middle third" of it.
Note that the natural "hyperreal version" of the Cantor set would be defined by internal recursion on the hypernatural numbers, not by transfinite recursion on the ordinals. By transfer, this will behave just like the ordinary Cantor set, at least as far as the relevant first-order logic is concerned. It can be described, for instance, as the set of hyperreals which have a (hypernatural-indexed) ternary expansion consisting of $0$s and $2$s. If you're using an ultrapower to define the hyperreals, then this set would just be the ultrapower of the Cantor set. |
Show dim(Ker$(T)) = $dim(Ker$(QTP))$ | We know that $Q$ is an isomorphism, which implies that $\ker(Q) = \{0\}$. Now, let $v \in \ker(QTP)$. This means $Q(T(P(v))) = 0$; but since the kernel of $Q$ is trivial, this implies that $T(P(v)) = 0$, i.e. $v \in \ker(TP).$ We therefore have $\ker(QTP) \subseteq \ker(TP)$, and the reverse inclusion is trivial, so we have $\ker(QTP) = \ker(TP).$
Now, note that $P^{-1} \left(\ker(T) \right) = \ker(TP)$. Since $P$ is an isomorphism, $P^{-1}$ is as well, so $\dim(\ker(T)) = \dim(P^{-1}(\ker(T))) = \dim(\ker(TP))$. |
2D boolean matrix number of unique combinations without mirrored/rotated ones | I have no idea if there is a proper name in the literature, but this sounds like a job for... Burnside's lemma! Super handy for these kinds of counting problems.
Let $X$ be the set of all $n\times n$ binary matrices, and let $G$ be the group acting on $X$ with the four elements: do nothing (identity element), flip horizontal, flip vertical, and flip horizontal then flip vertical (note that the last is the same as flip vertical then flip horizontal); I'll henceforth call these elements $\{e, h, v, hv\}$ respectively.
Burnside's lemma says that the number of distinct matrices up to the action of this group G (written as $|X/G|$) will be:
$$ \frac{1}{|G|}\sum_{g \in G}|X^g| $$
where here $X^g$ is the set of elements left unchanged by $g \in G$. So in our case
$$|X/G| = (|X^e| + |X^h| + |X^v| + |X^{hv}|)/4$$
$e$ is the identity, so $|X^e|=|X|=2^{n^2}$, and from a quick look at the other possibilities I think it follows that if $n$ is even, then $n = 2k$ and
$$|X/G| = (2^{n^2} + 2^{nk} + 2^{nk} + 2^{nk})/4$$
And if $n$ is odd, then $n = 2k+1$, and
$$|X/G| = (2^{n^2} + 2^{n(k+1)} + 2^{n(k+1)} + 2^{(k+1)^2+k^2})/4$$ |
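A sketch implementing the formulas together with a brute-force orbit count to cross-check them (the function names are mine):

```python
from itertools import product

def burnside_count(n):
    """Number of n-by-n binary matrices up to horizontal/vertical flips."""
    k = n // 2
    if n % 2 == 0:
        return (2**(n*n) + 3 * 2**(n*k)) // 4
    return (2**(n*n) + 2 * 2**(n*(k+1)) + 2**((k+1)**2 + k**2)) // 4

def orbit_count(n):
    seen, orbits = set(), 0
    for bits in product((0, 1), repeat=n*n):
        M = tuple(bits[i*n:(i+1)*n] for i in range(n))
        if M in seen:
            continue
        orbits += 1
        h = tuple(row[::-1] for row in M)         # horizontal flip
        v = M[::-1]                               # vertical flip
        hv = tuple(row[::-1] for row in M[::-1])  # both flips
        seen.update({M, h, v, hv})
    return orbits

assert all(burnside_count(n) == orbit_count(n) for n in (1, 2, 3))
```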
Binary Strings Question | I don't see anything ambiguous about the expression. It's not standard mathematical notation, but insofar as it's meant to denote the set of all strings that can be formed by concatenating any finite number of occurrences of 101, 1101 and 1011, it seems clear enough. My guess is that you want to show that there are strings that can be formed in more than one way from these substrings. An example of such a string is 1011101, which is both 1011 101 and 101 1101.
A combinatorics problem I can't quite understand | Mary has $777 \cdot 3=2331$ choices for the first day and $2330$ choices each day after that because this day has to be different from the last. This would give $2331 \cdot 2330^{14} \approx 3.24\cdot 10^{50}$ choices. The point of the other pizzeria is to say that she didn't eat here today and has the $2331$ choices tomorrow. |
Proof of the associative property of segments (axiomatic geometry) | What do you mean by the sum of segments? There may be two answers:
Sum of measures.
In this case answer is very simple, because it is implied by associative property of real numbers.
Sum of free segments (I'm not sure about the translation). I define a free segment as an equivalence class of the congruence relation. We say that a free segment $\mathfrak{c}=[ab]$ is a sum of free segments $\mathfrak{a}$ and $\mathfrak{b}$ (we write $\mathfrak{c}=\mathfrak{a}+\mathfrak{b}$) if there exists a point $p$ such that $B(apb)$ and $\mathfrak{a}=[ap], \mathfrak{b}=[pb]$. It can be proved that this definition doesn't depend on the choice of the segment $ab$ and that $+$ is a function.
Lemma: $\left(B(abc) \wedge B(acd)\right)\implies \left(B(abd)\wedge B(bcd)\right)$
Proof of the associative property:
Let $\mathfrak{a}+\mathfrak{b}=[p_{1}p_3]$
So for some point $p_2$ such that $B(p_1p_2p_3)$ we have $\mathfrak{a}=[p_1p_2], \mathfrak{b}=[p_2p_3]$.
Now we find point $p_4$ such that $B(p_1p_3p_4)$ and $\mathfrak{c}=[p_3p_4]$. (We can do this by one of the axioms)
We have $(\mathfrak{a}+\mathfrak{b})+\mathfrak{c}=[p_1p_4]$
By lemma we have $B(p_2p_3p_4)$ which gives $\mathfrak{b}+\mathfrak{c}=[p_2p_4]$.
Again by lemma we have $B(p_1p_2p_4)$ which gives $\mathfrak{a}+(\mathfrak{b}+\mathfrak{c})=[p_1p_4]$
Here $B(abc)$ denotes that $b$ lies between $a$ and $c$ on the same line.
Rewrite $u_{tt}-u_{xx}+u_t+au = 0$ as a $2 \times 2$ first order system | One way is to introduce the variables $v=u_t$ and $w=u_x$ to write
$$
\begin{split}
u_t&=v,\\
v_t&=w_x-au-v,\\
w_t&=v_x.
\end{split}
$$
It is of the form
$$
U_t=AU_x+BU,
$$
with
$U=(u,v,w)^T=(u,u_t,u_x)^T$, and constant matrices $A$ and $B$. |
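Explicitly, reading the matrices off the system above (a routine bookkeeping step, not spelled out in the original):
$$
A=\begin{pmatrix}0&0&0\\0&0&1\\0&1&0\end{pmatrix},
\qquad
B=\begin{pmatrix}0&1&0\\-a&-1&0\\0&0&0\end{pmatrix}.
$$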
Show that a piecewise function of two solutions to an ODE is a solution to the ODE | We have $u'(t)=f(t,u(t))$ and $v'(t)=f(t,v(t))$.
Hence, for $t<t_0$ we have
$$w'(t)=u'(t)=f(t,u(t))=f(t,w(t)),$$
while for $t>t_0$,
$$w'(t)=v'(t)=f(t,v(t))=f(t,w(t)).$$
Moreover, since $u(t_0)=v(t_0)$, the function $w$ is continuous and $w$ is a (weak) solution of the differential equation.
Added:
To be a weak solution means that $w(t)=c+\int_{t_0}^t f(s,w(s))\,ds$ for all $t$ in some open neighborhood of $t_0$, where $c=u(t_0)=v(t_0)$. This means that in general we look for solutions that are merely continuous at $t_0$. In other words, and without further hypotheses, one cannot expect $w$ to be differentiable at $t_0$.
But, for example, if $f$ is locally Lipschitz in $x$, then the solutions are unique and of course $w$ will be differentiable at $t_0$ with $w'(t_0)=u'(t_0)=v'(t_0)$. However, in that case the problem is not very interesting, since then we have in fact $u(t)=v(t)$ in some open neighborhood of $t_0$ (and so also $w(t)=u(t)=v(t)$ in some open neighborhood of $t_0$).
Calculating integral using Cauchy's integral formula | Use $z=e^{i\theta}$, and convert the integral over $\theta$ to $\oint dz$, where the contour is the unit circle around the origin in the complex $z$-plane; this can then be evaluated by Cauchy's method.
After a few simplifications you get:
$$\int_{0}^{2\pi} d \theta \frac{1}{2+\cos\theta}=\oint_{|z|=1}^{} dz \frac{-2i}{z^2+4z+1}$$ which can be solved by Cauchy method as :
$$\int_{0}^{2\pi} d \theta \frac{1}{2+\cos\theta}=\oint_{|z|=1}^{} dz \frac{\frac{-2i}{z+2+\sqrt{3}}}{ z+2-\sqrt{3}} $$
Only the simple pole at $z=-2+\sqrt{3}$ lies inside the unit circle, so Cauchy's integral formula gives $2\pi i\cdot\frac{-2i}{2\sqrt{3}}=\frac{2\pi}{\sqrt{3}}$.
Set of the form $m + n \alpha $ is bounded by $0$? | Assume there is an irrational $\beta\in A$ with $\beta<1$. Then we have $\beta':=1-\lfloor\frac1\beta\rfloor \beta\in A$ and $\beta''=\lceil\frac1\beta\rceil \beta-1\in A$ and both $\beta'$ and $\beta''$ are irrational. As $\beta'+\beta''=\beta$, one of them is $\le \frac12\beta$.
From $\lceil\alpha\rceil-\alpha\in A$ we conclude that indeed some irrational $<1$ is in $A$. By repeatedly doing the construction as in the first paragraph, starting with $\beta_0=\lceil\alpha\rceil-\alpha$ and recursively letting $\beta_{n+1}$ be the smaller of the numbers $\beta',\beta''$ constructed from $\beta=\beta_n$, we obtain a sequence with $\beta_n\le 2^{-n}\beta_0$. We conclude $\inf A\le \inf_n \beta_n\le 0$. |
Convolution of 2 uniform random variables | The density of $S$ is given by the convolution of the densities of $X$ and $Y$:
$$f_S(s) = \int_{\mathbb R}f_X(s-y)f_Y(y)\ \mathsf dy. $$
Now
$$
f_X(s-y) = \begin{cases}
\frac12,& 0\leqslant s-y\leqslant2\\
0,&\text{otherwise}
\end{cases}
$$
and
$$
f_Y(y) = \begin{cases}
\frac13,& 0\leqslant y\leqslant3\\
0,&\text{otherwise.}
\end{cases}
$$
So the integrand is $\frac16$ when $s-2\leqslant y\leqslant s$ and $0\leqslant y\leqslant3$, and zero otherwise. There are three cases (drawing a picture helps to determine this); when $0\leqslant s<2$ then
$$f_S(s)=\int_0^s\frac16\mathsf dy = \frac16s. $$
When $2\leqslant s< 3$ then
$$f_S(s)=\int_{s-2}^s\frac16\ \mathsf dy = \frac16(s-(s-2))= \frac13. $$
When $3\leqslant s\leqslant 5$ then
$$f_S(s)=\int_{s-2}^3\frac16\ \mathsf dy = \frac16(3 - (s-2)) = \frac56 - \frac16s. $$
Therefore the density of $S$ is given by
$$
f_S(s) =
\begin{cases}
\frac16s,& 0\leqslant s<2\\
\frac13,& 2\leqslant s<3\\
\frac56-\frac16s,& 3\leqslant s<5\\
0,&\text{otherwise.}
\end{cases}
$$
The distribution function of $S$ is obtained by integrating the density, i.e.
$$F_S(s)=\mathbb P(S\leqslant s)=\int_{-\infty}^s f_S(t)\ \mathsf dt. $$ |
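A quick Monte Carlo check of this density (a sketch; the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(0, 2, 10**6) + rng.uniform(0, 3, 10**6)

def f_S(x):
    return np.where(x < 2, x / 6,
           np.where(x < 3, 1 / 3,
           np.where(x <= 5, 5 / 6 - x / 6, 0.0)))

hist, edges = np.histogram(s, bins=50, range=(0, 5), density=True)
mid = (edges[:-1] + edges[1:]) / 2
print(np.abs(hist - f_S(mid)).max())  # small: statistical noise only
```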
Finding $\text{Res}(f,0)$ where $f(z)=\frac{1}{z^2\sin(z)}$ | $$\sin z=z-\dfrac{1}{3!}z^3+\dfrac{1}{5!}z^5-\dfrac{1}{7!}z^7+\cdots$$
then
\begin{align}
\dfrac{1}{z^2\sin z}
&= \dfrac{1}{z^3\left(1-\dfrac{1}{3!}z^2+\dfrac{1}{5!}z^4-\dfrac{1}{7!}z^6+\cdots\right)} \\
&= \dfrac{1}{z^3}\left(1+\dfrac{1}{6}z^2+\left(\dfrac{1}{36}-\dfrac{1}{120}\right)z^4+\cdots\right) \\
&= \dfrac{1}{z^3}+\dfrac{1}{6z}+\dfrac{7z}{360}+\cdots
\end{align}
so $a_{-1}=\dfrac16$.
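The expansion can be double-checked symbolically (a sketch using sympy, which computes Laurent series about the pole directly):

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / (z**2 * sp.sin(z))
print(sp.series(f, z, 0, 2))  # expect z**(-3) + 1/(6*z) + 7*z/360 + O(z**2)
print(sp.residue(f, z, 0))    # expect 1/6
```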
Writing change of co-ordinates in tensor notation | Pavel uses the symbols $Z^i=Z^i(Z')$
to denote dependence. That means he is considering the coordinates $Z^i$ as depending on the coordinate system $Z'$.
On the other hand, he uses $Z^i \equiv Z^i(Z'(Z))$
to mean that $Z$ and $Z'$ are inverse functions of each other.
You can see that from sections 4.6 and 4.7 (page 42 onwards). |
Generating, or counting the sides of, "square-like" polygons (with all congruent sides, and all angles either $90^\circ$ or $270^\circ$) | Edit: I've added a proof that the $8$-sided polygon can't exist.
Number of edges
Not an actual formula for the number of edges, but a proof that:
The number of edges is always a multiple of $4$.
Travelling anticlockwise around the perimeter of the polygon, assign to each edge one of {up, down, left, right} according to the direction of travel along that edge.
It's clear that vertical and horizontal edges must alternate, since two vertical (or two horizontal) edges in succession either make a longer edge or place one edge on top of the other, neither of which is allowed.
Let $U, D, L, R$ be the number of edges with the directions up, down, left and right respectively.
In order to return to our starting point both vertically and horizontally, we require $$U=D$$ and $$L=R.$$
There are $U+D$ vertical edges and $L+R$ horizontal edges. But every horizontal edge is preceded by a vertical adge and vice versa, so $$U+D=L+R.$$
Together, these requirements imply
$$U=D=L=R=k.$$
The total number of edges is
$$U+D+L+R=4k,$$
and therefore a multiple of $4$.
Constructing the polygons
This is now looking like a combinatorics problem. For a given integer $k$, we must generate sequences of the four edge directions such that:
each direction occurs $k$ times.
vertical and horizontal directions alternate.
no proper section of the path (considering the whole path as a loop) may contain equally many up edges as down edges and equally many left edges as right edges.
The last of these is to prevent the path from returning to a previously visited point before it's complete.
I think these requirements guarantee that every permutation generates a valid polygon, and that all valid polygons are generated (with a lot of repeats).
Number of vertices of each type
Consider the exterior angle at each vertex as we follow an anticlockwise path. This is $90°$ for a vertex where we turn left, and $-90°$ for one where we turn right. We need to have made one full rotation when we return to the starting point: that is, the sum of all the exterior angles must be $360°$.
There must therefore be four more vertices where we turn left than ones where we turn right: or in terms of the internal angles, $4$ more $90°$ vertices than $270°$ ones.
For the case where there are $4k$ vertices, let $n$ be the number of $270°$ vertices. Then there are $n+4$ $90°$ vertices, giving $2n+4$ vertices altogether. So we have
$$2n+4=4k,$$
from which
$$n=2(k-1)$$
Therefore
A $4k$-sided polygon of this type has $2(k-1)$ $270°$ vertices and $2(k+1)$ $90°$ vertices.
Impossibility of the 8-sided polygon
Suppose one of these polygons has $8$ sides. From the above, it has six $90°$ vertices and two $270°$ vertices. How are these arranged as we travel round the polygon?
First note that three vertices of the same type in succession—corresponding to three left or right turns—takes us round a small square. This is only OK if the small square is the entire polygon. So for $k>1$, no more than two vertices of the same type may occur in succession.
So each of the two $270°$ vertices can be followed by at most two $90°$ ones. But this only allows the polygon to have four $90°$ vertices, and it needs to have six.
Therefore
The 8-sided polygon cannot exist.
A similar argument shows that
The only $12$-sided polygon is the cross shape.
For this one we have $k=3$, giving $8$ vertices of one type and $4$ of the other. Denoting the two types as $L$ for a left turn ($90°$) and $R$ for a right turn ($270°$), the only possible sequence is $RLLRLLRLLRLL$, which generates the cross shape. Every $R$ has to be followed by $LL$, to avoid having three $L$'s in succession. |
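A brute-force enumeration along these lines (my own sketch; it checks closure, alternation and simplicity directly rather than via the conditions above, so it is only feasible for small $k$):

```python
from itertools import product

STEP = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}

def simple_closed(seq):
    """The path must end at the origin and never revisit a vertex en route."""
    pos, seen = (0, 0), {(0, 0)}
    for i, d in enumerate(seq):
        dx, dy = STEP[d]
        pos = (pos[0] + dx, pos[1] + dy)
        if i < len(seq) - 1 and pos in seen:
            return False
        seen.add(pos)
    return pos == (0, 0)

def count_sequences(k):
    """Direction sequences of length 4k tracing valid polygons;
    each polygon is counted once per starting edge and orientation."""
    vert = {'U', 'D'}
    count = 0
    for seq in product('UDLR', repeat=4 * k):
        if not (seq.count('U') == seq.count('D') == seq.count('L') == k):
            continue
        # vertical and horizontal edges must alternate, also around the loop
        if all((a in vert) != (b in vert) for a, b in zip(seq, seq[1:] + seq[:1])) \
                and simple_closed(seq):
            count += 1
    return count

print(count_sequences(1))  # 8: the square, from 4 starting edges x 2 orientations
print(count_sequences(2))  # 0: no 8-sided polygon, matching the argument above
```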
description of the function whose graph corresponds to Figure | Let's label some relevant $x$ values:
Now $F(x)=\int_0^x f(s)\,ds\implies F'(x)=f(x) \text{ and } F''(x)=f'(x)$ by the Fundamental Theorem of Calculus. Thus, from the graph,
$F'(x)=f(x)>0$ on $(-\infty,0)\cup(a,b)$ and so $F$ is increasing here.
$F'(x)=f(x)<0$ on $(0,a)\cup(b,\infty)$ and so $F$ is decreasing here.
Thus, $F$ has a local max at $x=0,b$ and local min at $x=a$.
$F''(x)=f'(x)>0$ on $(c,d)$ and so $F$ is concave up here.
$F''(x)=f'(x)<0$ on $(-\infty,c)\cup(d,\infty)$ and so $F$ is concave down here.
Thus, $F$ has inflection points at $x=c,d$.
Hopefully you can sketch a curve that satisfies all those conditions. Here's a (very rough!) candidate: |
Prove a.s. convergence of $(X_n)_n$ satisfying $E(X_{n+1} \mid F_n) \leq X_n+Y_n$ for $\sum_n Y_n<\infty$ | It follows from your assumption on the conditional expectation that
$$Z_n = X_n - \sum_{i=0}^{n-1} Y_i$$
is a supermartingale. If we define
$$T_k := \inf\left\{n \in \mathbb{N}; \sum_{i=0}^n Y_i \geq k\right\}$$
for fixed $k \in \mathbb{N}$, then $T_k$ is an $F_n$-stopping time. By the optional stopping theorem, the stopped process $(Z_{n \wedge T_k})_{n \in \mathbb{N}}$ is also a supermartingale. Moreover, the definition of $T_k$ and the non-negativity of $X_n$ and $Y_n$ entail that $Z_{n \wedge T_k} \geq -k$ for all $n \in \mathbb{N}$, and therefore
$$\sup_{n \in \mathbb{N}} \mathbb{E}(Z_{n \wedge T_k}^-) < \infty.$$
Applying the standard convergence theorem for supermartingales, we conclude that the limit $\lim_{n \to \infty} Z_{n \wedge T_k}$ exists almost surely, and so
$$\Omega_0 := \bigcap_{k \geq 1} \{\omega \in \Omega; \lim_{n \to \infty} Z_{n \wedge T_k}(\omega) \, \, \text{exists}\}$$
has probability $1$. Now if $\omega \in \Omega_0$, then $\sum_{n} Y_n(\omega)<\infty$ implies that we can choose $k \in \mathbb{N}$ large enough such that $T_k(\omega)=\infty$. As $\omega \in \Omega_0$ we thus know that
$$\lim_{n \to \infty} Z_{n \wedge T_k}(\omega) = \lim_{n \to \infty} Z_n(\omega) = \lim_{n \to \infty} \left( X_n(\omega)- \sum_{i=0}^{n-1} Y_i(\omega) \right)$$
exists. Using once more that $\sum_{i} Y_i(\omega)<\infty$, this shows that $\lim_n X_n(\omega)$ exists. |