title | upvoted_answer
---|---
Straight Line on a Sphere? | Straight isn't a good description. However, the shortest path between any two points on a sphere lies on a great circle, just as in flat space the shortest path between any two points lies on a straight line. |
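A small numerical illustration (a sketch, not part of the original answer; it assumes points given as position vectors in $\mathbb{R}^3$): on the unit sphere, the great-circle distance is the central angle between the two position vectors.

```python
# A sketch: great-circle (geodesic) distance on the unit sphere is the
# central angle between the two position vectors.
import numpy as np

def great_circle_distance(u, v):
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.arccos(np.clip(u @ v, -1.0, 1.0))   # arc length on a unit sphere

print(great_circle_distance(np.array([1., 0., 0.]),
                            np.array([0., 1., 0.])))   # pi/2 ~ 1.5708
```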
how to determine whether a dynamical system is uniformly exponentially stable at an equilibrium point | First you should know that the solution of the linear dynamic system
$$ x'=Ax , \quad x \in\mathbb{R}^n $$
can be obtained by calculating the following value:
$$x(t)=e^{At}x_0=\mathcal{L}^{-1}\left\{(sI-A)^{-1}\right\}x_0$$
in which $\mathcal{L}$ denotes the Laplace transform. If you do that for $a(t)=0$ you have:
$$x(t)=\begin{bmatrix}
1 & 1-e^{-t}\\
0 & e^{-t}
\end{bmatrix}x_0 $$
As you see, the solution is not asymptotically stable, so it cannot be exponentially stable! (Take care about the $(1,1)$ entry, which is constant.) For the second part, calculate the solution in the same way. Try to find the constants $\lambda$ and $C$ according to the definition; $\lambda$ is not the matrix eigenvalue. For finding them use the following inequality:
$$ \left\|[a\ b]^T\right\| \le 2 \max(|a|,|b|)$$ |
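A quick numerical check of the worked example (a sketch; the system matrix $A=\begin{bmatrix}0&1\\0&-1\end{bmatrix}$ is assumed, since it reproduces the transition matrix shown above):

```python
# A sketch (assumed A = [[0, 1], [0, -1]], which yields the matrix above):
# the (1,1) entry of e^{At} stays 1 for all t, so there is no exponential decay.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, -1.0]])
for t in (0.5, 1.0, 5.0):
    Phi = expm(A * t)
    expected = np.array([[1.0, 1.0 - np.exp(-t)],
                         [0.0, np.exp(-t)]])
    assert np.allclose(Phi, expected)
    print(t, Phi[0, 0])   # always 1.0: the state does not decay to 0
```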
Infinitesimal generator/intensity matrix | They are exponentiating. Recall that $$e^x = 1 + \frac x{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + ...$$
You can define the same power series for matrices. Given a square matrix $M$, you can define
$$e^M := I + \frac 1{1!}M + \frac{1}{2!}M^2 + \frac{1}{3!}M^3 + ...$$
Just as in the real case, this series always converges, and it solves similar multivariate differential equations. In particular, $\mathbf P_t = e^{t\mathbf A}$.
Since $\mathbf A= \mathbf {QDQ}^{-1}$, $$\mathbf A^n = \mathbf {QDQ}^{-1}\mathbf {QDQ}^{-1}\mathbf {QDQ}^{-1}...\mathbf {QDQ}^{-1} = \mathbf {QD}^n\mathbf Q^{-1}$$
From which it follows that $e^{t\mathbf A} = \mathbf Q\ e^{t\mathbf D}\mathbf Q^{-1}$
Now, it isn't hard to work out that if $\mathbf D = \text{diag}(d_1, ..., d_4)$ then $e^{\mathbf D} = \text{diag}(e^{d_1}, ..., e^{d_4})$.
So $$\mathbf P_t = e^{t\mathbf A} = \mathbf Q\ e^{t\mathbf D}\mathbf Q^{-1}=\mathbf Q\begin{bmatrix}e^0&0&0&0\\0&e^{-t}&0&0\\0&0&e^{-3t}&0\\0&0&0&e^{-4t}\end{bmatrix}\mathbf Q^{-1}$$
which when you expand out using the $\mathbf Q$ matrices gives the result $$\frac 1{12}\begin{bmatrix}
3+8e^{-t}+e^{-4t}&3-3e^{-4t}&3-4e^{-t}+e^{-4t}&3-4e^{-t}+e^{-4t}\\
3-3e^{-4t}&3+9e^{-4t}&3-3e^{-4t}&3-3e^{-4t}\\
3-4e^{-t}+e^{-4t}&3-3e^{-4t}&3+2e^{-t}+6e^{-3t}+e^{-4t}&3+2e^{-t}-6e^{-3t}+e^{-4t}\\
3-4e^{-t}+e^{-4t}&3-3e^{-4t}&3+2e^{-t}-6e^{-3t}+e^{-4t}&3+2e^{-t}+6e^{-3t}+e^{-4t}
\end{bmatrix}$$
Which they have then broken apart into the matrices of coefficients of the exponential expressions. |
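As a sketch in code: the generator $\mathbf A$ below is reconstructed from the limiting matrix above (it is not quoted in the question), and the eigendecomposition route $\mathbf Q e^{t\mathbf D}\mathbf Q^{-1}$ is checked against a direct matrix exponential.

```python
# A sketch (A reconstructed from the answer's final matrix, an assumption):
# P_t = Q e^{tD} Q^{-1} agrees with expm(t*A).
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.,  1.,  0.,  0.],
              [ 1., -3.,  1.,  1.],
              [ 0.,  1., -2.,  1.],
              [ 0.,  1.,  1., -2.]])   # each row sums to 0

d, Q = np.linalg.eig(A)                # A = Q diag(d) Q^{-1}
t = 0.7
P_t = Q @ np.diag(np.exp(t * d)) @ np.linalg.inv(Q)
assert np.allclose(P_t, expm(t * A))
print(np.sort(np.round(d.real, 6)))    # -> [-4. -3. -1.  0.]
```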
Natural isomorphism $\tilde H_i(X) \xrightarrow{\cong} \tilde H_{i+1}(\Sigma X)$ where $\Sigma X$ is the suspension of $X$. | All the constructions that you used to define the isomorphism are natural/functorial:
Given a map $X \to Y$, you have a natural map that respects inclusions, which gives a starting point for all the applications of naturality to come:
$$(\Sigma X, C_+ X, C_- X, X \times \{0\}) \to (\Sigma Y, C_+ Y, C_- Y, Y \times \{0\});$$
The long exact sequence of a pair is natural, hence by using the natural map $(\Sigma X, C_+ X) \to (\Sigma Y, C_+ Y)$, the isomorphism $\tilde{H}_i(\Sigma X) \to \tilde{H}_i(\Sigma X, C_+ X)$ is natural in $X$;
Excision is natural, hence the excision isomorphism $\tilde{H}_i(C_- X, X) \to \tilde{H}_i(\Sigma X, C_+ X)$ is natural in $X$;
Finally the long exact sequence is still natural, hence the isomorphism $\tilde{H}_i(C_- X, X) \to \tilde{H}_{i-1}(X)$ is natural in $X$.
In conclusion, each subsquare in the following diagram is commutative, hence the "outer" rectangle is commutative (after inverting the horizontal arrows that go the wrong way, the composite of the whole thing is the suspension isomorphism):
$$\require{AMScd}
\begin{CD}
\tilde{H}_i(\Sigma X) @>{\cong}>> \tilde{H}_i(\Sigma X, C_+ X) @<{\cong}<< \tilde{H}_i(C_- X, X) @>{\cong}>> \tilde{H}_{i-1}(X) \\
@VVV @VVV @VVV @VVV \\
\tilde{H}_i(\Sigma Y) @>{\cong}>> \tilde{H}_i(\Sigma Y, C_+ Y) @<{\cong}<< \tilde{H}_i(C_- Y, Y) @>{\cong}>> \tilde{H}_{i-1}(Y)
\end{CD}$$
tl;dr The composition of two functors is a functor. |
Volume of $E=\{ (x,y,z)\in \mathbb R^3 | x^2+y^2\le4, x^2+y^2+z^2\le 16 \}$ | Your domain of integration is given by $\{(\theta, r): \theta \in [0, 2\pi], r\in [0,2]\}$. Now parametrize the top of the sphere as $z = \sqrt{16-x^2-y^2} = \sqrt{16-r^2}$. Hence the volume is given by
$$2\int_{0}^{2\pi} \int_{0}^2 r \int_{0}^{\sqrt{16-r^2}} 1\ dz dr d\theta$$ |
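A quick symbolic check of this integral (a sketch, not part of the original answer):

```python
# A sketch: evaluating the cylindrical-coordinates integral with SymPy.
import sympy as sp

r, theta, z = sp.symbols('r theta z', nonnegative=True)
V = 2 * sp.integrate(r, (z, 0, sp.sqrt(16 - r**2)), (r, 0, 2),
                     (theta, 0, 2 * sp.pi))
print(V, sp.N(V))   # 4*pi*(64 - 24*sqrt(3))/3, about 93.96
```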
Optimization algorithms for Distribution and Logistics scenario | What you are describing is a subset of automated planning. You would typically describe your problem in an action language like STRIPS and then run a planner to determine whether the goal state or states can be reached. Determining whether a successful plan exists for a given problem is PSPACE-complete, but constraints such as limiting the number of steps to reach a goal can make the problem merely NP-complete and amenable to attack by SAT solvers. |
Partitioning the edges of $K_n$ into $\lfloor \frac n6 \rfloor$ planar subgraphs | Hint: Show that at least one of the $\lfloor \frac{n}{6}\rfloor$ subgraphs must contain more than $3n - 6$ edges. |
Mapping cone of $BF:B\mathcal C\to B\mathcal D$ is a classifying space? | Yes, the mapping cone can be realized as the classifying space of a category via Thomason's homotopy colimit theorem. Recall that the mapping cone of a map $ f \colon A \to X$ is the homotopy colimit of the diagram
$$ \require{AMScd}
\begin{CD}
A @>f>> X \\
@VVV \\
* \\
\end{CD}
$$
where $*$ is the space with only one point.
Thomason's homotopy colimit theorem says that if $J$ is a small category and $G \colon J \to \text{Cat}$ is a covariant functor to the category of small categories, then
$$ \underset{J}{\text{hocolim}} \, BG \simeq B(\text{Gr}_J G) $$
where $\text{Gr}_J G$ is the Grothendieck construction of $G$. This is a category whose objects are pairs $(x,a)$ with $x$ an object of $J$ and $a$ an object of $G(x)$. And a morphism from $(x,a)$ to $(y,b)$ is a pair $(\alpha,f)$ where $\alpha \colon x \to y$ is a morphism in $J$ and $f \colon G(\alpha)(a) \to b$ is a morphism in $G(y)$. Composition is not hard to figure out.
Now use this with the category $J$ given by
$$ \require{AMScd}
\begin{CD}
a @>f>> b \\
@VgVV \\
c \\
\end{CD}
$$
and the functor $G \colon J \to \text{Cat}$ that sends $a$ to $C$, $b$ to $D$, $c$ to $*$ (the category with only one object and one morphism), $f$ to $F$ and $g$ to the constant functor. Then you have
$$ B(\text{Gr}_J G) \simeq \underset{J}{\text{hocolim}} \, BG \simeq C(BF) $$
So you just need to work out what $\text{Gr}_J G$ is in this case.
There is an important case where things are a bit easier (and cool): when you have the constant functor $C \to *$. In this case the mapping cone is the cone of $BC$ and can be realized as follows. Take the category $M$ whose objects are the objects of $C$ together with one additional object, let us call it $\infty$. The morphisms between two objects of $C$ are the same as in the category $C$. There is a unique morphism from any object to $\infty$, and there are no more morphisms. Composition between morphisms in $C$ is still the same, and composition of a morphism in $C$ with a morphism to $\infty$ is the corresponding unique morphism to $\infty$. Then $BM \simeq C(BC)$. This construction can be found in "Topological, simplicial and categorical joins" by R. Fritsch and M. Golasinski. |
How to prove this function is actually constant? | The solution is the exact same as the one you linked to. I will provide one extra step to aid in convincing you of this.
$f(x)=f(\frac{x}{2})=f(\frac{x}{4})=f(\frac{x}{8})=...=f(\frac{x}{2^n})$.
Can you use this fact to reconcile the answers provided in your linked solution? |
Asymptotic for $\sum_{n \le x}\frac{d(n)\log n}{n}$ | Using the hyperbola method
$$D(k) = \sum_{n=1}^k d(n) = k(C+\log k+\mathcal{O}(k^{-1/2}))$$
Summing by parts
$$H(k)=\sum_{n =1}^k d(n) \frac{\log n}{n} = \frac{\log k}{k}D(k)+ \sum_{n =1}^{k-1} D(n) (\frac{\log n}{n}-\frac{\log (n+1)}{(n+1)}) $$
Where $$\frac{\log n}{n}-\frac{\log (n+1)}{(n+1)} =\int_n^{n+1} \frac{\log x-1}{x^2} dx = \frac{\log n-1}{n^2}+\mathcal{O}(\frac{\log n}{n^{3}})$$
Thus
$$H(k)= \frac{\log k}{k} k(C+\log k+\mathcal{O}(k^{-1/2}))\! +\!\!\sum_{n =1}^{k-1}n(C+\log n+\mathcal{O}(n^{-1/2})) (\frac{\log n-1}{n^2}+\mathcal{O}(\frac{\log n}{n^{3}}))$$ $$= P(\log k)+B+O(k^{-1/2}\log k)$$
with $P$ a polynomial of degree $3$ (due to the term $\sum_{n=1}^k \frac{\log^2 n }{n} \sim \tfrac13\log^3 k$) and $B$ the limit of the remaining convergent sums.
Note $\sum_{n=1}^\infty d(n) n^{-s-1}\log n = - 2\zeta'(s+1)\zeta(s+1)$ thus $P(\log x) +B= \text{Res}(- 2\zeta'(s+1)\zeta(s+1)\frac{x^s}{s},0)$ |
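A rough numerical check (a sketch, assuming the leading coefficient $1/3$ coming from $\sum \log^2 n/n$; convergence is slow because of the lower-order terms):

```python
# A sketch: H(k) = sum_{n<=k} d(n) log(n)/n should grow like (1/3) log^3 k.
import numpy as np

N = 200_000
d = np.zeros(N + 1)
for i in range(1, N + 1):      # divisor-count sieve: d[m] = number of divisors
    d[i::i] += 1

n = np.arange(1, N + 1)
H = np.cumsum(d[1:] * np.log(n) / n)
for k in (10**3, 10**4, 10**5):
    print(k, H[k - 1] / np.log(k)**3)   # drifts (slowly) toward 1/3
```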
Question about residually torsion-free nilpotent | With the way you have worded the question, I think the answer is no. Let $G$ be the group defined by the presentation $\langle x,y,z \mid xz=zx, yz=zy, [x,y] = z^2 \rangle$. Then $G$ itself is torsion-free nilpotent. Choose your $x$ to be the same as my $x$. Then we could take $N = \langle z \rangle$ and $n=2$. (I am using the notation $\gamma_1(G)=G$, $\gamma_2(G)=[G,G]$.) But in the group $G/\gamma_2(G)$ the image of $z$ has order 2, so it is not torsion-free.
Of course I could have chosen $N=1$ and/or $n=3$, and then the answer would have been yes. |
Maximal ideal of ring of continuous real-valued functions on $[0, 1]$ is not finitely generated. | Suppose $I_c = \langle f_1, \ldots, f_n\rangle$. Define $f = \sum_{i} |f_i|$. Clearly, $\sqrt f \in I_c$. Thus, $\sqrt f = \sum_i h_i f_i$ for some $h_i \in R$. Define $h = \sum_i |h_i|$. We have
\begin{align*}
\sqrt f &= \sum_i h_i f_i \\
&\le \sum_i |h_i| \cdot |f_i| \\
&\le \sum_i |h_i| \sum_i |f_i| \\
&= hf.
\end{align*}
The last inequality holds because all the cross terms in the expanded product $hf = \sum_i |h_i| \sum_i |f_i|$ are nonnegative.
Note that for $x \ne c$, we must have at least one $i$ with $f_i(x) \ne 0$ since $f_i$ generate $I_c$. Thus, $f(x) \gt 0$ for $x \ne c$. It follows that $h \ge 1/\sqrt{f}$ for all $x \ne c$. Since $f(c) = 0$, this implies that $h$ is unbounded, but this contradicts the compactness of $[0, 1]$.
(I take no credit for this proof. I found it in my notes. I think I first saw it as an exercise in Atiyah-Macdonald or Dummit & Foote. I don't remember.) |
Solving an equation containing 4th power of variable. | Yes, quite correct, except that the outer square-root has a separate $\pm$. So $x=\pm\sqrt{\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}}$ |
Contradiction or Contrapositive divides proof? | The proof by contrapositive would be the following:
Prove $4 \mid n^{2}-3 \implies n \notin \mathbb{Z}$.
Suppose $4 \mid n^{2}-3$. Then $n^{2}-3 = 4k \implies n^{2}-3 \equiv0 \pmod 4 \implies n^{2} \equiv 3\pmod 4 $.
But this has no solution as $n^{2} \equiv 0,1 \pmod 4$ for all $n$.
This means that the congruence has no solution in the integers: we have proved that if $4 \mid n^{2}-3$ for some $n$, then $n$ must be a non-integer.
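A one-line check of the key fact (a sketch):

```python
# A sketch: squares modulo 4 only take the values 0 and 1, never 3.
print({(n * n) % 4 for n in range(1000)})   # -> {0, 1}
```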
Does a convex function with a Lipschitz continuous gradient always have a strong convex conjugate? | If $Hy = 0$, then for any $x$ not perpendicular to $y$, you can see that $F^*(x) = \infty$. So you need $H$ to be strictly positive definite if you want to avoid $\infty$ as a possible answer.
The issue is related to sub-Riemannian manifolds, and whether every geodesic necessarily satisfies the Euler-Lagrange equations. |
Finding lie algebra of a group by using exp map and tangent space | There are several ways.
I'll do one for $SO(3)$ and I guess you can figure out the general way of proceeding.
OUTLINE: We will start from a general form for an element of $SO(3)$, then we'll consider all the curves passing through the identity and we will derive effectively the tangent space at the identity. We'll find a basis, and that will be the Lie algebra. If you exponentiate a tangent vector in the tangent space at the identity, you'll recover an element of the Lie group $SO(3)$.
Let's suppose that you know that every rotation can be seen as a rotation of an angle $\theta$ around a vector $\mathbf{v}=x\mathbf{e_x}+y\mathbf{e_y}+z\mathbf{e_z}$.
Then every element of $SO(3)$ can be parametrized as
$$R_{\mathbf{v}}\left(\theta\right)=\mathbb{1}+\sin\theta X+\left(1-\cos\theta\right)X^{2},$$ where $X$ is this matrix
$$X=\left(\begin{array}{ccc}
0 & -z & y\\
z & 0 & -x\\
-y & x & 0
\end{array}\right).$$
If you're smart you already understood that I'm cheating (because this parametrization of $SO(3)$ is already a sort of exponential)... but let's continue...
So given the parametrization, a general rotation matrix is given by
$$R\left(\theta,\,x,\,y,\,z\right)=\left(\begin{array}{ccc}
x^{2}+\cos\theta\left(y^{2}+z^{2}\right) & \left(1-\cos\theta\right)xy-z\sin\theta & \left(1-\cos\theta\right)xz+y\sin\theta\\
\left(1-\cos\theta\right)xy+z\sin\theta & y^{2}+\cos\theta\left(x^{2}+z^{2}\right) & \left(1-\cos\theta\right)yz-x\sin\theta\\
\left(1-\cos\theta\right)xz-y\sin\theta & \left(1-\cos\theta\right)yz+x\sin\theta & z^{2}+\cos\theta\left(y^{2}+x^{2}\right)
\end{array}\right),$$
Now take a regular curve $\theta\left(t\right)$ such that $\theta\left(0\right)=0$ and let $\mathbf{v}$ be a vector $\mathbb{R}^{3}$ with coordinates $\left(x,\,y,\,z\right)$. The tangent vector of the curve is then given by
$$\frac{d}{dt}R_{\mathbf{v}}\left(\theta\right)=\left(\begin{array}{ccc}
-\left(y^{2}+z^{2}\right)\sin\theta & xy\sin\theta-z\cos\theta & xz\sin\theta+y\cos\theta\\
xy\sin\theta+z\cos\theta & -\left(x^{2}+z^{2}\right)\sin\theta & yz\sin\theta-x\cos\theta\\
xz\sin\theta-y\cos\theta & yz\sin\theta+x\cos\theta & -\left(y^{2}+x^{2}\right)\sin\theta
\end{array}\right)\cdot\dot{\theta}\left(t\right)$$.
Then at $t=0$, since $\theta(0)=0$,
$$\frac{d}{dt}R_{\mathbf{v}}\left(\theta\right)|_{t=0}=\left(\begin{array}{ccc}
0 & -z & y\\
z & 0 & -x\\
-y & x & 0
\end{array}\right)\cdot\dot{\theta}\left(0\right).$$
So let's now consider the axis $\mathbf{x}=\left(1,\,0,\,0\right)$,
$\mathbf{y}=\left(0,\,1,\,0\right)$ and $\mathbf{z}=\left(0,\,0,\,1\right)$
Then you have
$$ E_{1}=\frac{d}{dt}R_{\mathbf{x}}\left(\theta\right)|_{t=0} =\left(\begin{array}{ccc}
0 & 0 & 0\\
0 & 0 & -1\\
0 & 1 & 0
\end{array}\right)\cdot\dot{\theta}\left(0\right),$$
$$E_{2}=\frac{d}{dt}R_{\mathbf{y}}\left(\theta\right)|_{t=0} =\left(\begin{array}{ccc}
0 & 0 & 1\\
0 & 0 & 0\\
-1 & 0 & 0
\end{array}\right)\cdot\dot{\theta}\left(0\right),$$
$$E_{3}=\frac{d}{dt}R_{\mathbf{z}}\left(\theta\right)|_{t=0} =\left(\begin{array}{ccc}
0 & -1 & 0\\
1 & 0 & 0\\
0 & 0 & 0
\end{array}\right)\cdot\dot{\theta}\left(0\right).$$
Then you have a basis for the Lie algebra of $SO(3)$, and a general vector $X$ in the tangent space of $SO(3)$ at the identity can be written as
$$X=xE_{1}+yE_{2}+zE_{3},$$
If you exponentiate the general vector of the Lie Algebra you get the general rotation in the exponential form
$$R_{\mathbf{v}}\left(\theta\right)=\exp\left[\theta X\right],$$ with
$$X=\left(\begin{array}{ccc}
0 & -z & y\\
z & 0 & -x\\
-y & x & 0
\end{array}\right)$$
which in fact was the parametrization we started from. I hope this gave you some hints on how to proceed. The other classic and easier way to proceed is just to do a Taylor expansion, but I think this one is really explicit. |
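A small numerical check of the parametrization (a sketch, not part of the original answer): exponentiating $\theta X$ for a unit axis reproduces the Rodrigues-type formula above and lands in $SO(3)$.

```python
# A sketch: exp(theta * X) equals I + sin(theta) X + (1 - cos(theta)) X^2
# for a unit axis (x, y, z), and the result is a rotation matrix.
import numpy as np
from scipy.linalg import expm

x = y = z = 1 / np.sqrt(3)                 # unit rotation axis
X = np.array([[0., -z,  y],
              [ z,  0., -x],
              [-y,  x,  0.]])
theta = 0.9
R = np.eye(3) + np.sin(theta) * X + (1 - np.cos(theta)) * (X @ X)
assert np.allclose(expm(theta * X), R)
assert np.allclose(R @ R.T, np.eye(3))     # R is orthogonal
print(np.linalg.det(R))                    # -> 1.0, so R is in SO(3)
```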
Nature of the common root | Let's say $a$ is the common root of the first two equations. By multiplying the first equation with $q$ and the second with $p$ we get:
$$2(q^2-pr)a=p^2-rq$$
This equation cannot reduce to $0=0$ for positive $p$ and $q$: otherwise we would have $q^2-pr=p^2-rq=0$, so $r=\frac{p^2}{q}=\frac{q^2}{p}$, hence $p=q=r$, contradicting the hypothesis that $p,q,r$ are not all equal.
So the equation has exactly one solution, which implies $a=\frac{p^2-rq}{2(q^2-pr)}$ is real.
$a$ cannot be $\ge 0$ as it would imply $pa^2+2qa+r>0$ by the positivity of $p,q,r$
So $a$ has to be negative. |
Is this function "$h$" symmetric of the plane $x=y$? | Yes.
Suppose that $h$ is a function defined as
$$
h(x,y) = \begin{cases} f(x, y), & x < y \\ f(y, x), & x \ge y. \end{cases}
$$
Notice that $h$ only "sees" the half-plane in $(x, y)$-coordinates where $x < y$. For example, $h(2, 3) = f(2, 3)$ and $h(3, 2) = f(2, 3)$ as well. The value $f(3, 2)$ is never called upon, for instance.
This easily generalizes to a proof that $h$ has symmetry about the line $y = x$. If $x > y$, then $h(x, y) = f(y, x)$. In this case, $y < x$, so $h(y, x) = f(y, x)$. In other words, we can conclude that
$$
h(y, x) = h(x, y)
$$
for any $(x, y)$. |
Eigenvectors and invariant sub space | If I'm reading your question correctly, it sounds like you want to find the change of basis matrix $P$ and its inverse $P^{-1}$.
Luckily you've done the majority of the work.
The change of basis matrix $P$ is the matrix obtained from putting the eigenvectors of $A$ into the columns of $P$
$$
P=
\left[\begin{array}{rrr}
1 & 1 & 0 \\
-i & i & 0 \\
0 & 0 & 1
\end{array}\right]
$$
Note that the columns of $P$ are all mutually orthogonal (with respect to the Hermitian inner product), and $PP^\top$ is a diagonal matrix; indeed
$$
PP^\top
=
\left[\begin{array}{rrr}
2 & 0 & 0 \\
0 & -2 & 0 \\
0 & 0 & 1
\end{array}\right]
$$
Now, to force our change of basis matrix to be an orthogonal matrix, we need to "normalize" its columns, as you observe. This is easily done by dividing each column by its length. In our case we have
\begin{align*}
\left\lVert
\langle 1,-i,0\rangle
\right\rVert &= \sqrt{2} &
\left\lVert
\langle 1,i,0\rangle
\right\rVert &= \sqrt{2} &
\left\lVert
\langle 0,0,1\rangle
\right\rVert &= 1
\end{align*}
Do you see how to get an orthogonal basis from here? |
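To make this concrete, here is a sketch in code. The matrix $A$ below is assumed for illustration (the excerpt does not show the original $A$); it has the eigenvectors listed above, and normalizing the columns of $P$ produces a unitary matrix that diagonalizes it.

```python
# A sketch (A is assumed, not taken from the question): dividing each column
# of P by its length gives a unitary U with U* A U diagonal.
import numpy as np

A = np.array([[0, 1, 0],
              [-1, 0, 0],
              [0, 0, 1]], dtype=complex)        # eigenvalues -i, i, 1
P = np.array([[1, 1, 0],
              [-1j, 1j, 0],
              [0, 0, 1]], dtype=complex)
U = P / np.linalg.norm(P, axis=0)               # normalize each column
assert np.allclose(U.conj().T @ U, np.eye(3))   # U is unitary
print(np.round(U.conj().T @ A @ U, 12))         # diag(-i, i, 1)
```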
Is this question a pigeon hole question? | There are exactly $15$ remainders modulo $15$ and they are $0, 1, 2, \dots, 14$.
This is slightly different (easier) than the birthday problem because we want to guarantee that some pair has the same remainder, and don't have to worry about the probabilities. |
Is the ideal $I = \langle x^2,x^3 \rangle $ in $\mathbb{R}[X]$ equal with $ \langle x^3 \rangle $? | Indeed, you cannot take $1/x^2$ as this is not a polynomial. Yet that $x$ can be $0$ is not the problem. You could not take, say, $1/(1+x^2)$ either. The elements of $\mathbb{R}[X]$ are the polynomials, that is expressions of the form $a_0 + a_1x + a_2 x^2 + \dots a_nx^n$.
If you restrict to polynomials, you can show that $x^2$ cannot be in the ideal generated by $x^3$. (For example, consider the lowest degree of a nonzero term: every nonzero element of $\langle x^3\rangle$ has lowest degree at least $3$.) Thus the two ideals are not equal.
If you allowed expressions such as $1/x^2$ as elements of your ring, that is, if you considered rational functions, then indeed the ideals would be equal. Yet in the ring of rational functions $\mathbb{R}(X)$ – note the parentheses instead of brackets – almost all ideals are equal, as it is a field and the only ideals are $0$ and the ring itself. |
Evaluate the limit of the following sequence . | $$\int_0^1{\mathrm{dx}\over(1+x^2)^n}\geq\int_0^{1/\sqrt{n}}{\mathrm{dx}\over(1+x^2)^n}\geq\int_0^{1/\sqrt{n}}{\mathrm{dx}\over\left(1+\frac1n\right)^n}\geq{1\over\sqrt{n}}{1\over2e}$$
for large $n$. Therefore, the limit you seek is $\infty.$ |
Simplify the probability of events | $P(Z\cap(X \cup Y))=P((Z\cap X)\cup (Z \cap Y)) = P(Z \cap X) + P(Z \cap Y)$ because $X$ and $Y$ are mutually exclusive.
$P(Z \cap X) + P(Z \cap Y) = P(Z)P(X) + P(Z \cap Y) = m^2 + P(Z \cap Y)$ as $X$ and $Z$ are independent.
$P(Z \cap Y)=P(Z \cap \bar{X})$, as $X$, $Y$, and $Z$ are exhaustive. But $P(Z \cap \bar{X})=P(Z)P(\bar{X})=m(1-m)$ as $Z$ and $X$ (and therefore $\bar{X}$) are independent.
So, $P(Z|X \cup Y) = \frac{m^2 + m(1-m)}{m + n}=\frac{m}{m+n}$. |
How can we draw a line parallel to $l_1$ and $l_2$ that passes through $p$ | As you called those segments lines, I think you are working in non-Euclidean (hyperbolic) geometry and you are using the Klein model. In this model, especially when you consider $\mathbb R^2$, the points are represented by the usual points in the interior of a circle (the unit disk) and lines are represented by the chords, straight line segments with endpoints on the boundary of the disk.
Here, given the two lines you drew and the position of the point $P$, we cannot draw a chord as the problem desires. I made some lines for you:
Note that, as @newzad remarked correctly, if you don't indicate which geometry you are working in, then your claim, homework, or problem will be ill-posed. |
Quadratic iterative system | The short answer is no. The dynamics of linear maps is very easy to understand, as you mention, but the dynamics of nonlinear maps usually is very complicated, and there is no easy way to describe the iteration or "asymptotics".
Recall that the dynamics of the logistic map
$$ x\mapsto \lambda x(1-x)$$
can be very complicated ("chaotic"), and can depend sensitively both on the starting value and on the parameter $\lambda$.
In your setting, we can simulate this map by studying
$$(x,y)\mapsto (-\lambda x^2 + \lambda xy , y^2),$$
and using a starting value with $y=1$.
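A tiny simulation of this embedding (a sketch):

```python
# A sketch: starting on the line y = 1, the two-variable map reproduces
# the logistic map lam * x * (1 - x) in the first coordinate.
lam = 3.9
x, y = 0.2, 1.0
for _ in range(5):
    x, y = -lam * x**2 + lam * x * y, y**2
    print(round(x, 6), y)   # y stays 1; x follows the logistic map
```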
However, in the two-variable case, it may be interesting to note that we can project $\mathbb{R}^2$ to projective space (since your polynomial is homogeneous), and the iteration is semiconjugate to a one-dimensional map. More precisely, if we set $p := x/y$, then your map is semiconjugate to the quadratic rational map
$$ R(p) = \frac{ap^2+bp+c}{dp^2+ep+f}.$$
I mention this because dynamics in one variable is much, much better understood than dynamics in several variables. For example, the Hénon family, in two variables, still poses many mysteries, while the real quadratic family (in one variable) is by now rather well-understood. (Although it takes a lot of deep mathematics!) |
The idea of the proof of theorem 2.1 in Stein and Shakarchi' Fourier Analysis book | I do not understand exactly why he choose $\delta \leq \frac{\pi}{2}$ and why this leads to $f(θ) > \frac{f(0)}{2}$ and what is the importance of that $f(θ) > \frac{f(0)}{2}$ ?
They choose $\delta \leq \frac{\pi}{2}$ to make sure that $p(\theta)$ is non-negative whenever $|\theta|<\delta$.
All they want is a neighborhood of $0$ on which $f>0$. The value $\frac{f(0)}{2}$ is just a simple choice.
That they can choose such a $\delta$ is a consequence of the continuity of $f$ at $0$ :
$$
\exists\, \delta > 0, \forall\, \theta : |\theta| < \delta \implies f(0)-\frac{f(0)}{2} =\frac{f(0)}{2} < f(\theta)
$$
Also why he choose the trigonometric polynomial in that way and what is the importance of choosing $\eta < \delta$, so that $p(\theta) \geq 1 + \frac{\epsilon}{2}$ ?
They would like to choose $p$ such that $p>1$ on $(-\delta,\delta)$. But that's impossible because $p$ is continuous and they chose $\epsilon$ such that $|p(\theta)| < 1$ whenever $\delta \leq |\theta| \leq \pi$. That's why they need to restrict to $(-\eta,\eta) \subset (-\delta,\delta)$.
The value $1+\frac{\epsilon}{2}$ is unimportant as long as it is $> 1$. They could not simply choose $1+\epsilon$ because $p(0) = 1+\epsilon$.
why he let $p_k(\theta) = [p(\theta)]^k$
They saw that for any trigonometric polynomial $p$, $p^k$ is also a trigonometric polynomial. So they really started with $p_k := p^k$ in mind and chose $p$ so that $p^k$ peaks at $0$. To do so they used the fact that $a^k$ tends to $\infty$ if $a > 1$ and to $0$ if $|a| < 1$.
Could anyone explain this for me please?
While the proof is a bit technical, the idea is simple enough:
If $f(0) > 0$ then by continuity of $f$ at $0$ there is a neighborhood of $0$ on which $f > 0$.
Then if $p_k$ peak at $0$ we should have
$$
\int_{-\pi}^{\pi} p_k(\theta) f(\theta) \,d\theta \to \infty \tag{$\star$}
$$
In the proof they take care of the details on how to construct $p_k$ precisely and how to show rigorously $(\star)$ happens. |
How do you solve this probability problem? | Encode the possible choices as $\{0,1\}$-words of length $10$ containing three ones. There are ${10\choose3}$ such words, all of them equiprobable. In order to count the admissible words choose any word of length $8$ containing three ones, and insert a zero after the first and the second one. It follows that the probability $p$ we are after amounts to
$$p={{8\choose3}\over{10\choose3}}={8\cdot7\cdot 6\over 10\cdot 9\cdot 8}={7\over15}\ .$$ |
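A brute-force confirmation (a sketch; it assumes "admissible" means no two ones are adjacent, which is what the insert-a-zero construction encodes):

```python
# A sketch: count length-10 words with three ones and no two ones adjacent.
from itertools import combinations

words = list(combinations(range(10), 3))               # positions of the ones
good = [w for w in words if all(b - a >= 2 for a, b in zip(w, w[1:]))]
print(len(good), len(words), len(good) / len(words))   # 56 120 0.4666... = 7/15
```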
Prove $\left(\frac{1+\cos2\theta+i\sin2\theta}{1+\cos2\theta-i\sin2\theta}\right)^n = \cos2n\theta+i\sin2n\theta$ | We use Euler's formula : $e^{i\theta}=\cos\theta+i\sin\theta$
$$\left(\frac{1+\cos2\theta+i\sin2\theta}{1+\cos2\theta-i\sin2\theta}\right)^n $$
$$=\left(\frac{2\cos^2\theta+i(2\sin\theta\cos\theta)}{2\cos^2\theta-i(2\sin\theta\cos\theta)}\right)^n$$ $$=\left(\frac{2\cos\theta+2i\sin\theta}{2\cos\theta-2i\sin\theta}\right)^n $$ $$=\left(\frac{e^{i\theta}}{e^{-i\theta}}\right)^n $$ $$=e^{2in\theta}= \cos2n\theta+i\sin2n\theta$$. |
How to calculate minimum number of games in round robin with more than two players per games | There are two cases where I know how to do it, when the number of players $m$ is of the form $m=4^n$, and when $m$ is of the form $m={3^n-1\over2}$. These correspond to two types of finite geometries with $4$ points on a line. In the first case, we can consider the players as points in affine $n$ space over $\mathbb{F}_4$, the field with four elements. In the second case, we consider them as points in projective $n-1$ space over $\mathbb{F}_3.$ In either case, two points determine a line, so if we take all the lines as the games, each pair of players will meet exactly once, which is clearly the best possible. In these cases then, the minimum number of games is $$\left\lceil{m(m-1)\over12}\right\rceil.$$
I suspect these are the only cases in which this minimum can actually be achieved.
Unfortunately, these cases don't seem to cast any light on the remaining ones. In those cases, we have to have some pairs meet more than once, which eliminates any possibility of considering the games as lines.
We also need a good way of determining the minimum number in specific cases, so we can try to guess the pattern (if there is one). For example, the best I've done so far with $8$ players is $6$ games, but is it the minimum? How to check?
P.S.
I realize of course that one can check with a computer program; I meant, "Is there an easy way to check by hand?"
EDIT
This appears to be A011976, the covering number $C(n,4,2)$. Since OEIS only lists values up to $n=20$, computation is probably hard. Apparently, it is an open problem to find a formula, but I don't know how up-to-date this site is. This paper discusses upper and lower bounds, and construction of coverings in some cases. A much more extensive table is cited by @Henry in the comments. |
Confusion between these two combinatorial problems | For the first problem, imagine that each of the phones have unique serial numbers.
So, we have for example the phones Black0001, Black0002, Black0003, ..., Black0009, Black0010, Red0001, Red0002, ..., Red0020.
Now... maybe to a normal person, they might not think to open up the phone's case and look inside at the serial number of the phone and so to them, the only things they recognize about the phone is that it is black or it is red and otherwise can't tell the phones apart... but to you and to me, we are aware of the fact that these serial numbers exist and that the phones occupy different positions in time and space to one another and are in fact distinct physical objects comprised of different atoms.
Now, for the action of selecting nine phones at random, we can describe the outcomes by the set of which phones we selected, and under any reasonable interpretation of the problem each subset of nine phones will be equally likely to occur. There are $\binom{30}{9}$ different such subsets. Note here the fact that we are using $\binom{30}{9}$, the number of subsets of the set where the thirty phones are all considered distinct. Yes... to a lay person who isn't aware of the existence of the serial numbers and treats the phones as appearing identical, we might have said there are $10$ different possible outcomes, being that we got zero black, one black, two black phones total, etc... however, one should be aware that those ten outcomes are not equally likely to occur. It is more likely to have gotten, say, $3$ black phones and $6$ red phones than it is to have gotten $9$ black phones, and we always try to use sample spaces where each outcome is equally likely to occur to help facilitate calculations.
So, among our $\binom{30}{9}$ possibilities, we include as possibilities having drawn the phones with serial numbers Black0001,Black0002,Black0003,Black0004,Red0001,Red0002,Red0003,Red0004,Red0005 and this is considered a different outcome than having drawn the phones Black0005,Black0006,Black0007,Black0008,Red0015,Red0016,Red0017,Red0018,Red0019 and so on...
Your claim that everything is determined once you have drawn the black phones is incorrect. Having drawn only the two black phones Black0001,Black0002 will indeed determine the number of black phones and red phones drawn, but it will not be enough information to determine which of the red phones it was that were drawn to complete the total collection. It is thus ambiguous whether Black0001,Black0002 continues with Red0001,Red0002,Red0003,... or if it continues with Red0001,Red0005,Red0010,... or if it continues some other way...
We count the number of valid cases by choosing which black phones it was and which red phones it was, not having stopped too early in the process of calculating. There are $\binom{10}{4}$ ways to select which black phones they were and $\binom{20}{5}$ ways to select which red phones they were. Multiplying gives us the number of ways of having selected four black phones. Dividing that result by the number of ways of having selected nine phones regardless gives us the probability of having selected four black phones as being $\dfrac{\binom{10}{4}\binom{20}{5}}{\binom{30}{9}}$
It is worth pointing out as well that these calculations so far have been where the order in which the phones were drawn does not matter. The reason is that it is convenient to have done so. That is not to say that the order should not matter, or cannot matter... just that it is convenient to have treated it as though it does not matter. You could have run the same calculations where order of selection mattered as well and have arrived at the same answer... effectively making the probability $\dfrac{\binom{10}{4}\binom{20}{5}\cdot 9!}{30\cdot 29\cdot 28\cdots 22}$ which one can see equals what we had before.
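A quick numerical evaluation of this hypergeometric probability (a sketch):

```python
# A sketch: the probability of exactly four black phones among nine drawn.
from math import comb

p = comb(10, 4) * comb(20, 5) / comb(30, 9)
print(p)   # about 0.2276
```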
Now... on to the second problem of counting the number of lattice paths. You do have a mistake there... the number of paths from $(0,0)$ to $(20,20)$ will have been $\binom{40}{20}$, not $\binom{20}{10}$.
That being said, here we are not choosing distinct rights from a collection of distinct rights, and choosing distinct ups from a collection of distinct ups. All rights are identical. There are no hidden serial numbers here to look at.
The only thing that matters here is whether the first step in the path is to the right or is up, whether the second step in the path is to the right or up, and so on...
Could you have looked at a similar related problem where we want to count lattice paths from $(0,0)$ to $(20,20)$ and we color each individual step in the process with a particular differently colored crayon on our paper and we treat two lattice paths as different if the shapes of the path or the colors at each step of the path don't match exactly? Sure! And that is an interesting problem to explore, but here it is safe to assume that this is not the problem being discussed. One path is treated the same as the other path iff the shapes of the path are the same.
Now, to count this, we notice that regardless what shape the path takes, there will be $40$ steps total taken in the path. Among these steps, exactly twenty of them will be up and the remaining twenty of them are to the right. Now... we select from the available collection of positions in the sequence, which of those positions will be filled by rights and the remainder will be ups.
Again, I must emphasize, we are not picking which rights out of the available collection of rights they are like how we picked which black phones they were out of the available collection of black phones... here all rights are the same as all other rights.
There are $40$ available positions, and we picked $20$ of them to be rights (note, simultaneously, since the order of selection does not matter. we are counting subsets of positions, not sequences of positions). As such, there are $\binom{40}{20}$ such lattice paths. |
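For completeness, the corresponding count in the lattice-path problem (a sketch):

```python
# A sketch: monotone lattice paths from (0,0) to (20,20).
from math import comb
print(comb(40, 20))   # 137846528820
```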
Why is the oldform map injective? | I don't know much about the adelic perspective, but I am familiar with an argument using the Atkin-Lehner operator and $q$-expansions. This argument is adapted from Proposition 5.1 of https://arxiv.org/abs/1808.04588.
Let $a,b\in\mathbb{Z}$ be such that $pb-aN=1$. The Atkin-Lehner operator $w_p$ on $S_k(\Gamma_0(Np))$ is defined by
$$w_p(f)=f|_k\begin{pmatrix} p & a \\ Np & pb \end{pmatrix}.$$
Note that $w_p$ is an involution, i.e. $w_p^2(f)=f$. Moreover, if $f\in S_k(\Gamma_0(N))\subseteq S_k(\Gamma_0(Np))$, then $w_p(f)=f|_k\pi$. Let me know if you want me to elaborate on either of these points.
Now as you've stated, if $(f_1,f_2)$ is in the kernel of the oldform map, then $f_1=-f_2|_k\pi=-w_p(f_2)$. So, in this case
$$f_2=w_p^2(f_2)=-w_p(f_1)=-f_1|_k\pi$$
and thus
\begin{align} f_1=-w_p(f_2)=-f_2|_k\pi=f_1|_k\pi^2.\tag{1}\end{align}
If $f_1=\sum_{n=1}^\infty a_nq^n$ then (1) becomes
\begin{equation}\sum_{n=1}^\infty a_nq^n=p^k\sum_{n=1}^\infty a_nq^{np^2}.\tag{2}\end{equation}
It suffices to show that $f_1=0$. So, suppose for a contradiction that $f_1$ is nonzero. Let $n>0$ be the least integer with $a_n\neq 0$. Then the left-hand side of (2) has a $q^n$ term whereas the right-hand side of (2) does not. This is impossible so $f_1$ must be zero as required. |
Norm in Sobolev Spaces. | Yes, it is true that $H_0^1(\Omega) \subset C^0(\Omega)$ when $\Omega \subset \mathbb{R}$. This follows from the (second part of the) Sobolev embedding theorem as written here. In fact, listed there is the stronger result that
$$H_0^1(\Omega) \subseteq C^{0,\frac12}(\Omega)$$
(the space of $1/2$-Holder continuous functions) with the embedding being continuous which gives the desired inequality. |
Twin Primes by an amateur mathematician | Incomplete answers
The first identity $G(n)−G_2(n)=\pi(n)−\pi_2(n)+c$ is true. Basically
a number is either composite or prime, except $1$: $$G(n) + \pi(n)=n-1 \tag{1}$$
a pair $(m,m+2)$ is either twin composites, twin primes or one prime one composite (let's say $k$-pairs): $$G_2(n)+\pi_2(n)+k=n-2$$
and subtracting the second equation from $(1)$ gives $$G(n) - G_2(n) + \pi(n) - \pi_2(n)=k+1 \tag{2}$$
Now, let's show $$\pi(n)=\pi_2(n)+\frac{k+1-c}{2} \tag{3}$$
$$\begin{array}{|c|c|c|}
\hline
\pi(n) \text{ primes}& \pi_2(n) \text{ twin primes pairs} & k \text{ one prime one composite pairs} \\ \hline
2 & & (2,4)\\ \hline
3 & (3,5) & (1,3) \\ \hline
5 & (3,5), (5,7) & \\ \hline
7 & (5,7) & (7,9)\\ \hline
11 & (11,13) & (9,11)\\ \hline
13 & (11,13) & (13,15)\\ \hline
... & ... & ... \\ \hline
p_{i-1} & (p_{i-1},p_i) & (p_{i-1}-2,p_{i-1})\\ \hline
p_i & (p_{i-1},p_i), (p_i,p_{i+1}) & \\ \hline
p_{i+1} & (p_i,p_{i+1}) & (p_{i+1},p_{i+1}+2)\\ \hline
... & ... & ... \\ \hline
p_j & & (p_{j}-2,p_j),(p_j,p_{j}+2)\\ \hline
p_{j+1} & & (p_{j+1}-2,p_{j+1}),(p_{j+1},p_{j+1}+2)\\ \hline
... & ... & ... \\ \hline
p_{\pi(n)} & ... & ...\\ \hline
\end{array}$$
In this table:
every row contains exactly $2$ pairs: either $2$ in the "$\pi_2(n)$ column", $2$ in the "$k$ column", or $1$ in each. Altogether $2\pi(n)$ pairs.
for the "$\pi_2(n)$ column", if $(p_i,p_{i+1})$ is in the "row $p_i$" then $(p_i,p_{i+1})$ is also in the "row $p_{i+1}$". Thus the "$\pi_2(n)$ column" always contains $2\pi_2(n)$ pairs.
for the "$k$ column" there are no repeating pairs.
As a result $$2\pi(n)=2\pi_2(n) + k + 1-c$$
where $c$ is either $0$ or $1$ to "balance parity of $k$".
Note: The original question asks for $c$ to be either $0$ or $-1$, but this is only because I considered $p_1=2$; the original question ignores $2$.
Now, injecting $(3)$ in $(2)$
$$G(n) - G_2(n) + \pi(n) - \pi_2(n)=2\pi(n)-2\pi_2(n)+c \Rightarrow \\ G(n)−G_2(n)=\pi(n)−\pi_2(n)+c \tag{4}$$
There is a link between 1st and 2nd inequalities. From
$$(1) \Rightarrow G(G(n))+\pi(G(n))=G(n)-1 \Rightarrow G(G(n))=G(n)-\pi(G(n))-1$$
$$(4) \Rightarrow G_2(n)=G(n)-\pi(n)+\pi_2(n)-c$$
Subtracting one from another
$$G_2(n)-G(G(n))=\pi_2(n)+\pi(G(n))-\pi(n)+1-c$$
As a result
$$\pi_2(n)>\pi(n)-\pi(G(n)) \Rightarrow G_2(n)>G(G(n))$$
and
$$G_2(n)>G(G(n)) \Rightarrow \pi_2(n)\geq \pi(n)-\pi(G(n))$$ |
Prove that between any two irrational numbers x<y there is a rational number | Let $z=y-x$. Since $x<y$, we know that $0<z$. Therefore, there is some positive integer $n$ such that $\frac1n<z$. (For instance, some $n$ such that $n>1$ and $n>\frac1z$.)
Therefore, multiplying both sides of the inequality by $n$ (which is positive), we get $$1<nz=n(y-x)=ny-nx$$ Since the gap between $nx$ and $ny$ is greater than 1, there must be some integer $m$ satisfying $$nx<m<ny$$ Finally, dividing everything by $n$ (again, positive), we get $$x<\frac mn<y$$ as desired, a rational number between $x$ and $y$. |
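The proof is constructive; here is a sketch running it for $x=\sqrt2$, $y=\sqrt3$ (these specific numbers are assumed for illustration):

```python
# A sketch: constructing a rational strictly between x = sqrt(2) and y = sqrt(3).
from math import sqrt, floor

x, y = sqrt(2), sqrt(3)
n = floor(1 / (y - x)) + 1     # guarantees 1/n < y - x
m = floor(n * x) + 1           # then n*x < m < n*y
print(x, m / n, y)             # 1.4142... < 1.5 < 1.7320...
```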
Interesting characteristics of the derivative operator, | The most important: the derivative gives the locally best linear approximation:
$$f(x) \approx f'(c)(x-c) + f(c).$$
Also interesting: a geometric interpretation of properties deduced from the $\lim$ definition. Your first two examples are ideal for this.
About the $\sin$ example: use it as an illustration of differential equations ($\sin$ is one solution of $y'' = -y$). Without going deeper, tell them about the importance of differential equations in physics. |
Partially tiling a square with parallelograms | Assume the top left corner is filled without any gaps. The only way to do this is shown below. In fact, the solution you've given is equivalent to filling the top right corner - this can be shown with a reflection. I've constructed each parallelogram from 2 right triangles.
We could then tile each parallelogram on the outer "shell" without leaving any gaps, as in the following figure. Then either the tiling continues until it reaches a corner, as in the bottom left, and we have 1 missing triangle. Or at some point the tiling reverses and another corner (the top right) is filled out, in which case, we would have 2 missing triangles in the top row but get one back in the next row. NB: we could have the tiling reverse and not fill out the top right corner but this would lose us 3 triangles of area. The crux of this is that tiling a shell loses us at least 2 triangles (1 unit) of space. Alternatively, if the top left corner of a shell weren't filled, it would lose us at least 1.5 units of space.
So if the same is true for each subsequent shell, we'd have $(n-1)+1$ units of lost space, counting over $n-1$ shells and an extra unit in the bottom right. Then our total filled area is at most $n(n-1)$. |
Tight asymptotic upper bound of $\sum\limits_{k=2}^N \frac{k}{\log k}$ | If we apply summation by parts we get
$$ \sum_{k=2}^{N}\frac{k}{\log k} = \frac{N^2+N-2}{2\log N} +\sum_{k=2}^{N-1}\frac{k^2+k-2}{2}\left(\frac{1}{\log k}-\frac{1}{\log(k+1)}\right)$$
from which:
$$ \sum_{k=2}^{N}\frac{k}{\log k} \leq \frac{N^2+N-2}{2\log N} + \sum_{k=2}^{N-1}\frac{k+1}{2\log^2 k} $$
and your sum is indeed $O\left(\frac{N^2}{\log N}\right)$; moreover, this bound is sharp. |
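A numerical look at the sharpness (a sketch):

```python
# A sketch: the ratio of the partial sum to N^2/(2 log N) approaches 1 slowly.
import numpy as np

for N in (10**3, 10**4, 10**5):
    k = np.arange(2, N + 1)
    s = np.sum(k / np.log(k))
    print(N, s / (N**2 / (2 * np.log(N))))   # -> 1 with an O(1/log N) excess
```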
Approximation of Sum Containing a Product | Term $n$ is
$$T_n = \sum_{j=0}^\infty \prod_{r=0}^j \frac{1}{n+rk} = k^{n/k-1} e^{1/k} \left( \Gamma\left(\frac{n}{k}\right) - \Gamma\left(\frac{n}{k},\frac{1}{k}\right)\right)$$
If you want an integral, you can represent this as
$$ T_n = \frac{e^{1/k}}{k} \int_0^{1} e^{-t/k} t^{n/k-1}\; dt $$
For large $k$, you can approximate $e^{(1-t)/k} \approx 1 + (1-t)/k$
making this
$$ T_n \approx \frac{k+n+1}{n(k+n)}$$
and then your expression
$$ \sum_{n=1}^k T_n \approx \ln(k)+ \gamma $$ |
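A quick check of the integral representation against the raw series (a sketch, using mpmath; the test values $n=3$, $k=7$ are assumed for illustration):

```python
# A sketch: the series for T_n agrees with the integral representation above.
import mpmath as mp

def T_series(n, k, terms=80):
    total, prod = mp.mpf(0), mp.mpf(1)
    for j in range(terms):
        prod /= (n + j * k)        # running product 1/(n(n+k)...(n+jk))
        total += prod
    return total

def T_integral(n, k):
    return mp.e**(mp.mpf(1)/k) / k * mp.quad(
        lambda t: mp.e**(-t/k) * t**(mp.mpf(n)/k - 1), [0, 1])

print(T_series(3, 7), T_integral(3, 7))   # agree to high precision
```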
Checking convergence of $\sum_{n=2}^{\infty} \frac{1}{(n\log(n))^p}$ | I'm a bit rusty on the Cauchy Condensation test. But here is an alternative solution, perhaps this will be of some use.
Let $a_k=\bigg( \frac{1}{k\ln k} \bigg)^p$
First, if $p\leq 0$, clearly $a_k \not\rightarrow 0$ as $k\rightarrow \infty$ and thus $\sum_{k=2}^{\infty} a_k$ diverges.
Now choose $b_k=\frac{1}{k(\ln k)^p}$, so that $\sum_{k=2}^{\infty} b_k$ is convergent for $p>1$ and divergent for $p \in (0,1]$. This follows from the integral test since $b_k$ is positive and decreasing for all $k\ge 2$.
Note that in general if $|\frac{a_k}{b_k}| \longrightarrow 0, k\longrightarrow \infty$ it means that $\forall \epsilon>0$ $ \exists$ $M>0$ such that $|a_k| < \epsilon|b_k|$ when $k>M$ and if $\sum a_k$ and $\sum b_k$ are both positive series, we therefore know that $\sum b_k$ convergent $\implies$ $\sum a_k$ convergent by the comparison test.
In the same manner, we have that if $|\frac{a_k}{b_k}| \longrightarrow \infty, k\longrightarrow \infty$ $\implies$ $\forall N>0$ $\exists$ $M>0$ such that $|\frac{a_k}{b_k}| > N $ if $k>M$ and therefore $\sum b_k$ divergent $\implies \sum a_k$ diverges.
In our case, fix $p\in(0,1)$; then $|\frac{a_k}{b_k}| = \frac{1}{k^{p-1}} \longrightarrow \infty$ as $k \longrightarrow \infty$, and because $\sum b_k$ diverges for $p\in(0,1)$, so does $\sum a_k$ (since both $a_k,b_k$ are non-negative). If $p=1$ we use the integral test as above.
Now fix $p>1$ we have $|\frac{a_k}{b_k}| = \frac{1}{k^{p-1}} \longrightarrow 0, k \longrightarrow \infty$ and since $\sum b_k$ converges for these $p$, so does $\sum a_k$.
Thus, $\sum_{k=2}^{\infty}\bigg( \frac{1}{k\ln k} \bigg)^p$ converges $\forall$ $p>1$. |
A problem on connectedness | A subset $A$ of a topological space $X$ is connected if and only if every continuous map $f\colon A\to \{0,1\}$ (the codomain is endowed with the discrete topology) is constant.
This is independent of what space $A$ is embedded in.
It is similar for the case when $A$ is path-connected: $A$ is path-connected if and only if, for every $a,b\in A$, there exists a continuous map $f\colon[0,1]\to A$ such that $f(0)=a$ and $f(1)=b$. Again, this is independent of what topological space $A$ is embedded in. |
Could you help me show the corresponse in details?Thanks pretty much in advance! | Define $\hat{f}:\mathcal{K}\to \mathbb{C}, z\mapsto \sum_{n\in \mathbb{Z}} f(n)z^{n}$ and show that $F:f\mapsto\hat{f}$ is a unital isomorphism.
The following map gives a one-to-one correspondence between $\mathcal{K}$ and $\mathcal{M}$:
$$\Phi:\mathcal{K}\to \mathcal{M}, z\mapsto \ker ev_z,$$
where $ev_z:\mathcal{A}\to \mathbb{C}, f\mapsto \hat{f}(z).$
We will show that $\Phi$ is a surjection.
Let $M$ be a maximal ideal of ${\mathcal{A}}$ and denote the set of zeros in $\mathcal K$ of $\hat{f}\in \hat{\mathcal{A}}:=F(\mathcal{A})$ by $Z(\hat{f})$.
For any finitely many functions $f_i\in M, i=1,2,\cdots, n$, $$\sum_{i=1}^n\bar{\hat{f_i}}\hat{f_i}\in \hat{M}:=F(M).$$ If $Z(\sum_{i=1}^n\bar{\hat{f_i}}\hat{f_i})=\emptyset$, then $\frac{1}{\sum_{i=1}^n\bar{\hat{f_i}}\hat{f_i}}\in\hat{\mathcal A}$ and thus $1\in \hat{M}$, a contradiction with $\hat{M}$ being a proper ideal of $\hat{\mathcal{A}}$. Hence $$\bigcap_{i=1}^n Z(\hat{f_i})\supset Z(\sum_{i=1}^n\bar{\hat{f_i}}\hat{f_i})\neq\emptyset.$$
Therefore $$\bigcap_{f\in M}Z(\hat{f})\neq\emptyset.$$ Note that $Z(\hat f)$ is closed for every continuous function $\hat f$, that $\mathcal{K}$ is compact, and that a family of closed subsets of a compact space with the finite intersection property has nonempty intersection.
Pick some $z\in \bigcap_{f\in M}Z(\hat f)$; then $\ker ev_z=\{f\in \mathcal{A}\mid\hat f(z)=0\}\supset M$, so $\ker ev_z=M$ by the maximality of $M$.
Therefore, $\Phi$ is a surjection.
See here as a reference. |
How many solutions are there to the equation $x_1 + x_2 + x_3 + x_4 + x_5 = 21$, | see this answer for more explanations on the method
The answer is the coefficient of $x^{21}$ in
$$(1+x+x^2+x^3)(x+x^2+x^3)(1+x+x^2+\ldots)^3$$ which is the same as the coefficient of $x^{21}$ in
$$x(1-x^4)(1-x^3)(1-x)^{-5}= x(1-x^3-x^4 +x^7)\sum_{k=0}^\infty \binom{4+k}{k}x^k$$
which can be read off as
$$\binom{24}{20} -\binom{21}{17} - \binom{20}{16} + \binom{17}{13}$$ |
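A brute-force check (a sketch; the constraints $0\le x_1\le 3$, $1\le x_2\le 3$, $x_3,x_4,x_5\ge 0$ are read off the generating function above):

```python
# A sketch: enumerate solutions with 0 <= x1 <= 3, 1 <= x2 <= 3, x3,x4,x5 >= 0.
count = sum(1
            for x1 in range(4)
            for x2 in range(1, 4)
            for x3 in range(22)
            for x4 in range(22)
            if x1 + x2 + x3 + x4 <= 21)   # x5 = 21 - (x1+x2+x3+x4) >= 0
print(count)   # 2176 = C(24,20) - C(21,17) - C(20,16) + C(17,13)
```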
Does every invertible $\mathbb{C}$-linear operator have an eigenvalue? | Take any bounded operator $S$ with no eigenvalue and choose $N$ such that $\|S\| <N$. Let $T=I+\frac S N$. Then $T$ is invertible but it has no eigenvalue. |
A basic question about group representation | The answer to this is no in general but it depends. Take for example the sign and the standard representations of $S_3$. Their characters are respectively $$\chi_{sgn} : e \mapsto 1, (12) \mapsto -1, (123) \mapsto 1$$ while $$\chi_{std} : e \mapsto 2, (12) \mapsto 0, (123) \mapsto -1.$$ So $\chi_{sgn} \cdot \chi_{std} = \chi_{std}$
and this is a case where the answer is yes.
Now for $S_4$ there is again the sign representation $$\chi_{sgn} : e \mapsto 1, (12) \mapsto -1, (12)(34) \mapsto 1, (123) \mapsto 1, (1234) \mapsto -1$$ but there is also another irreducible representation with character $$\rho : e \mapsto 3, (12) \mapsto 1, (12)(34) \mapsto -1, (123) \mapsto 0, (1234) \mapsto -1.$$ So in this case $\chi_{sgn} \cdot \rho$ is a new irreducible representation, so the answer is no.
EDIT: To answer Sean Ballentine's comment below, multiplying and comparing characters here is enough to answer the question. Indeed, if $\rho_1 : G \to GL_n(\mathbb{C})$ is an irreducible representation (with $n > 1$) and $\rho_0 : G \to GL_1(\mathbb{C})$ is a 1-dimensional representation, then $\rho_1 \cdot \rho_0 : G \to GL_n(\mathbb{C})$, defined using ordinary multiplication as $(\rho_1 \cdot \rho_0)(g) = \rho_1(g) \cdot \rho_0(g)$, is an irreducible representation with character $\chi_{\rho_1 \cdot \rho_0} = \chi_{\rho_1} \cdot \chi_{\rho_0}$. Since two irreducible representations are "the same" iff they have the same character, my examples are pertinent. This is a standard trick when you want to reconstruct an incomplete character table for some group : multiply each new character you find by the known 1-dimensional characters and you might find new characters (new irreducible representations), as in the case of my second example. |
Finding $x$ in the domain of the function where the tangent line is horizontal | That's absolutely the way to start. Now, the tangent line being horizontal means that the slope--which is $y'$--should be...what? That gives you the equation that you need to solve. |
Gradient as normal vector. | No.
First of all, the concept of inward and outward normal makes sense only if the surface is closed.
But even for closed surfaces there is no relation. The level curves $f(x,y,z)=C$ and $-f(x,y,z)=-C$ are the same, but in one case the gradient is an outward normal and in the other an inward normal. |
Syndrome decoding algorithm and standard form | If you used the matrix $P$ to get $GP=G'$, then you would apply the algorithm to the codeword $yP$.
At that point you'd be decoding $yP$ with $H'$ from the code $G'$. It sounds like you understand how to apply it so... not sure what is left to say. |
A beautiful game of gold and silver coins | The state of the game can be described by
$$
(g,s,G,S),
$$
where $g$ is the number of golden coins on the table, $s$ is the number of silver coins on the table, $G$ is the sum of the numbers in the first paper, and $S$ is the sum of the numbers in the second paper. The initial state is $(0,n,0,0)$, and we want to show that if the state of the game is $(g,0,G,S)$, then $G=S$.
If we are at $(g_i,s_i,G_i,S_i)$ and add a golden coin, the state changes to
$$
(g_{i+1},s_{i+1},G_{i+1},S_{i+1}) = (g_i+1,s_i,G_i,S_i+s_i),
$$
and if we remove a silver coin, the state changes to
$$
(g_{i+1},s_{i+1},G_{i+1},S_{i+1}) = (g_i,s_i-1,G_i+g_i,S_i).
$$
One plan to solve the problem is to find an invariant, for example, a function from $(g,s,G,S)$ to integers, such that these transformations do not change the value of that function. Looking at the equations for a while suggests something with $gs$ because that's how we would get changes of size $g$ and $s$. A bit more looking gives us
$$
f(g,s,G,S) = gs+G-S.
$$
Once we have found the above formula, it is easy to verify that a step does not affect the value of $gs+G-S$.
Thus if we start from $f(0,n,0,0)=0$ and end with $f(g,0,G,S) = G-S$, we can see that $G=S$. |
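A simulation of random play (a sketch) confirming the invariant:

```python
# A sketch: random play preserves f(g, s, G, S) = g*s + G - S, which is 0
# at the start, so G = S whenever the silver coins run out.
import random

def play(n):
    g, s, G, S = 0, n, 0, 0
    while s > 0:
        if random.random() < 0.5:
            g, S = g + 1, S + s      # add a golden coin
        else:
            s, G = s - 1, G + g      # remove a silver coin
        assert g * s + G - S == 0    # the invariant never changes
    return G, S

print(play(10))   # the two sums are always equal
```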
Union of connected subsets $X$, $Y$ is connected if $\overline{X}\cap Y \neq \varnothing$ | Let $f:X\cup Y\rightarrow \{0,1\}$ be a continuous function. The restriction of $f$ to $X$ is constant since $X$ is connected. Suppose that it is $1$. $f^{-1}(1)$ is closed and contains $X$, so it contains $\bar X$. We deduce that the restriction of $f$ to $\bar X\cap Y$ is $1$. Since $Y$ is connected, the restriction of $f$ to $Y$ is also constant. It is equal to the restriction of $f$ to $\bar X\cap Y$, which is $1$. Thus $f$ is constant and $X\cup Y$ is connected. |
Which operators commute with curl? | I don't claim to produce an exhaustive list, but some notable ones that are missing:
'scalar derivative operators' (I don't know a better way to say it), i.e. differential operators that don't see the vector field structure. For example $\partial_1Cf=C\partial_1f$. Of course this extends to any $P(\nabla)$ where $\nabla = (\partial_1,\partial_2,\partial_3)$ and $P$ is any scalar valued polynomial of three variables.
Convolutions. If $f$ is integrable, scalar valued, and compactly supported, then for any $g\in X$, the function $f*g$ defined by
$$ f*g(x):=\int_{\operatorname{supp} f} f(y) g(x-y)dy$$
is (1) well-defined (2) an element of $X$, and (3) commutes with $C$. Point (3) follows because of the standard fact that convolutions commute with any derivative:
$$ \partial_i (f*g) = f*( \partial_i g) $$
(this fact is of course used to prove (2).)
If you include enough decay at infinity, you also get lots of others like (scalar) Fourier multipliers and the projection to the gradient or curl parts of the Helmholtz decomposition. |
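A symbolic confirmation of the first item above (a sketch using plain SymPy):

```python
# A sketch: d/dx commutes with curl for a generic smooth vector field.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([sp.Function(n)(x, y, z) for n in ('f1', 'f2', 'f3')])

def curl(V):
    return sp.Matrix([V[2].diff(y) - V[1].diff(z),
                      V[0].diff(z) - V[2].diff(x),
                      V[1].diff(x) - V[0].diff(y)])

print(sp.simplify(curl(F.diff(x)) - curl(F).diff(x)))   # zero vector
```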
Intuition for Brownian motion time-inversion formula | This explanation is intuitive but very far from stringent. I think one issue with building an intuition for time inversion scaling based on previously obtained intuition for the regular scaling formula $c W_{t/c^2}$ is that the regular scaling formula is a kind of "uniform global scaling" whereas the time inversion scaling can be seen as more of a "local scaling". Assume that we have built an intuition for how $c W_{t/c^2}$ is BM. I wanted to show how one can "build a bridge" to time inversion scaling.
Consider the time interval $t_1-t_0$, where we will later set $t_0=0$. If we now take a large step back so we can see a big chunk, the interval $t-t_0$, of the random walk for which the interval $t_1-t_0$ is relatively tiny, $t_1\ll t$, then we can consider the BM to be fairly constant (in some sense) in the $t_1-t_0$ interval compared to the much larger $t-t_0$ interval.
Let's now use the regular scaling formula on the little interval, with $c=t_1-t_0$:
$$c W_{t/c^2}=(t_1-t_0) W_{t/(t_1-t_0)^2}$$
Here we need $c=t_1-t_0$ and $W_{t_1-t_0}$ to be "small" compared to the expected variation of $W_t$.
Set $t_0=0$ to get:
$$c W_{t/c^2}=t_1 W_{t/t_1^2}$$
If we let $t_1$ grow and catch up with $t$ (or $t$ shrink down to $t_1$) we get
$$t_1 W_{t/t_1^2} \rightarrow t W_{t/t^2}=t W_{1/t}$$
and this provides a link between the regular scaling formula and the time inversion formula. Apologies for the utter and complete lack of stringency.
Several things are problematic here. For example, what do we mean with the BM to be relatively (fairly) constant? How can we be allowed to let $t_1$ approach $t$? However, I suspect making it stringent would be a larger undertaking and not give the wanted intuition. The key, I think, is to realize that any subinterval $t_1-t_0$ will look like a regularly scaled BM when compared to a much larger interval. For stringent proofs, see:
Prove the time inversion formula is brownian motion
or
http://www.math.uchicago.edu/~may/VIGRE/VIGRE2011/REUPapers/Stolarski.pdf
It is very difficult to get a nice computer plot that shows time inversion. This can be understood, for example, as you needing an extremely large number of terms in a, say, Fourier series or Karhunen–Loève representation (https://en.wikipedia.org/wiki/Wiener_process). And if you try this in the obvious naïve way by just summing terms you will inevitably get a BM that is highly resolved close to $t=0$ but poorly resolved for larger value of $t$. In a way you get penalized for not being able to represent this as a true infinite series. |
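One thing that is easy to check numerically is the marginal law (a sketch):

```python
# A sketch: X_t = t * W_{1/t} has variance t, matching Brownian motion.
import numpy as np

rng = np.random.default_rng(0)
t = 2.5
W_inv_t = rng.normal(0.0, np.sqrt(1 / t), size=200_000)   # W_{1/t} ~ N(0, 1/t)
X_t = t * W_inv_t
print(X_t.var(), t)   # both about 2.5
```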
Variational formulations in group theory? | There are several strong theorems about random variables on groups, that might be something you were looking for:
Generation Formula:
Suppose $G$ is a finite group, $\{X_i\}_{i = 1}^{n}$ are i.i.d uniformly distributed random elements of $G$. Then $P(\langle \{X_i\}_{i = 1}^{n} \rangle = G) = \sum_{H \leq G} \mu(G, H) {\left(\frac{|H|}{|G|}\right)}^n$, where $\mu$ is the Moebius function for subgroup lattice of $G$.
Generalized Erdos-Turan theorem:
Suppose $G$ is a finite group and $\{X_i\}_{i = 0}^{n}$ are i.i.d. uniformly distributed random elements of $G$. Then $G$ is nilpotent of class $n$ iff $P([ \dots [[X_0, X_1], X_2]\dots, X_n] = e) > 1 - \frac{3}{2^{n + 2}}$
Law of large numbers for groups:
Suppose $G$ is a finitely generated group, $A$ is a finite set of generators of $G$. $d: G\times G \to \mathbb{N}$, is the metric on $G$ induced by the Cayley Graph $Cay(G, A)$. Suppose $\{X_i\}_{i = 1}^\infty$ is a sequence of i.i.d. random elements of $G$, such that $E(d(X_i, e)) < \infty$, and $Z_n = \Pi_{i=1}^n X_i$. Then $\exists \alpha \in \mathbb{R}, P(\lim_{n \to \infty} \frac{d(Z_n, e)}{n} = \alpha) = 1$. |
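As an illustration of the first theorem, here is a Monte Carlo sketch for $G=S_3$ and $n=2$; the exact probability in this case is $1/2$ (which can be checked by inclusion-exclusion over the maximal subgroups).

```python
# A sketch: estimate P(<X1, X2> = S_3) for uniform random X1, X2;
# the exact value from the Moebius-sum formula is 1/2.
import random
from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

S3 = list(SymmetricGroup(3).generate())
trials = 20_000
hits = sum(PermutationGroup([random.choice(S3), random.choice(S3)]).order() == 6
           for _ in range(trials))
print(hits / trials)   # about 0.5
```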
Using logical truth to find the guilty and lying persons | Eric: "Joseph and Don are guilty, but Lea is innocent."
$E\equiv (j\wedge d \wedge\neg \ell)$
Don: "If Eric is guilty, then Lea is guilty and Jenny is innocent."
$D\equiv e\to (\ell \wedge\neg y)\equiv \neg e\vee(\ell\wedge \neg y)$
Lea: "I am innocent, but at least one of the others is guilty."
$L\equiv \neg \ell\wedge(e\vee j\vee d\vee y)$
1) I can't think of such a model, because it's not a tautology, and if all of the evidence were true, it would've meant that one of them lied.
No. First of all, just because it is not a tautology does not mean it has no model. A simple atomic statement $P$ is not a tautology, but it does have a model: $P$ could be true
Second, you can make Eric's and Lea's assertions true by making Joseph and Don guilty and Lea innocent. But you can make Don's assertion true as well: just make sure Eric is innocent: if the 'if' part is false, then the whole statement is true (at least, that's what we assume in classical logic). So, there is a model: Joseph and Don are guilty, while Lea and Eric are innocent, but Jenny can be either.
$$E,D,L\vdash j\wedge d\wedge\neg\ell\wedge\neg e$$
just a theoretical wondering - if they all were telling the truth, would it have meant that the only guilty persons are Lea and Eric?
No. See above. If they are all true, then Joseph and Don are the only certainly guilty ones. Lea and Eric are innocent if all statements are true. Jenny's status is not established under that model.
2) It is quite evident that the third claim is a consequence of the first one.
Correct!
3) If everyone is innocent, it would've meant that the second testimony, Don's, is a lie, since it contradicts the first and the last claims.
No. If everyone is innocent, that means Eric and Lea both lied (for they both claimed that there are guilty people), but in fact Don is speaking the truth, again exactly because Eric is not guilty. |
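An exhaustive truth-table check of these conclusions (a sketch; `j, d, l, e, y` encode the guilt of Joseph, Don, Lea, Eric, and Jenny):

```python
# A sketch: enumerate all guilt assignments and keep those satisfying E, D, L.
from itertools import product

models = [(j, d, l, e, y)
          for j, d, l, e, y in product([False, True], repeat=5)
          if (j and d and not l)                  # Eric's statement E
          and ((not e) or (l and not y))          # Don's statement D
          and ((not l) and (e or j or d or y))]   # Lea's statement L
print(models)   # j, d True and l, e False in both models; y is free
```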
Approaching to a number and limit | No, in order to find that some real $l$ is the limit
$$
\lim_{x\to3^-}f(x)
$$
you don't compute $f(2.9)$, $f(2.99)$ and so on. Nor do you compute $f(2.(9))$ (periodic $9$), because no assumption is made that $f$ is defined at $3$, nor is the possible value of $f$ at $3$ relevant for the existence of the limit.
Saying that
$$
\lim_{x\to3^-}f(x)=l
$$
means
for every $\varepsilon>0$ there exists $\delta>0$ such that, for $3-\delta<x<3$, it holds $|f(x)-l|<\varepsilon$.
You can compute $f(2.9...9)$, if you wish; it may give you an idea of what $l$ could be, but in general it won't. |
Riemann integral substitution with $e^x$ | Well, the general approach for proving the integral substitution rule is an approximation argument. If $f$ is a piecewise-constant function, the identity clearly holds, hence the problem boils down to proving that any Riemann-integrable function can be approximated by a piecewise-constant function in an effective way. There are some natural candidates:
$$ f_n(x) = f\left(\frac{\lfloor nx\rfloor}{n}\right), $$
for instance. Can you fill in the missing details? |
Level set's matrix derivative. | So, $S$ consists of all points $(x,y,z)$ where
$$
\left \{
\begin{array}{lcc}
xy-yz & = & 0 \\
x^2+y^2+z^2 & = & 1
\end{array}
\right .
$$
The first equation factors as $y(x-z)=0$, so its solutions are
$$
\left [
\begin{array}{lcc}
y & = & 0 \\
x & = & z
\end{array}
\right .
$$
which is the union of the two planes $y=0$ and $x=z$ (figure omitted).
The second equation gives the points on the unit sphere
$$
x^2+y^2+z^2 = 1
$$
So the points that satisfy both of these equations lie on the intersection of these three surfaces,
which consists of the vertical circle
$$
\left \{ \begin{array}{lcc}
x & = & \cos \phi \\
y & = & 0 \\
z & = & \sin \phi
\end{array}\right .
$$
and the inclined circle (which projects onto the $xy$-plane as an ellipse)
$$
\left \{ \begin{array}{lcc}
x & = & \frac 1{\sqrt 2}\cos \phi \\
y & = & \sin \phi \\
z & = & \frac 1{\sqrt 2}\cos \phi
\end{array}\right .
$$
In both cases $\phi \in \left [0, 2\pi \right )$; both curves are obtained by substituting the conditions $y=0$ and $x=z$, respectively, into the equation of the sphere and parametrizing with polar coordinates. (Figure of the level set omitted.)
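A quick numerical sketch in Python (my own check) that both parametrized curves satisfy the two defining equations:

```python
import numpy as np

phi = np.linspace(0, 2 * np.pi, 200, endpoint=False)

# Vertical circle (y = 0) and inclined circle (x = z).
curves = [
    (np.cos(phi), np.zeros_like(phi), np.sin(phi)),
    (np.cos(phi) / np.sqrt(2), np.sin(phi), np.cos(phi) / np.sqrt(2)),
]
for x, y, z in curves:
    assert np.allclose(x * y - y * z, 0)        # first equation
    assert np.allclose(x**2 + y**2 + z**2, 1)   # on the unit sphere
```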
Martingales: Stopping Time | I have a method, but it assumes T and S are bounded stopping times. So let T and S be bounded stopping times. In Jacod and Protter, the theory is for discrete-time martingales, and so T and S take on non-negative integer values. (Note: all random-variable equalities and inequalities are almost sure ones.)
Let $M_t = E[Y|F_t]$
Then we need to show
$$ E[M_T|F_S] = M_{S\wedge T}$$
Let
$$A= \{\omega : S(\omega) \leq T(\omega)\}$$
$$ T_1 = T\vee S$$
$$ T_2 = T \wedge S $$
Note that $T_1$ and $T_2$ are stopping times. And $A \in F_S \cap F_T$. These are all exercises in Jacod and Protter.
$\therefore M_{S \wedge T} = M_S1_A + M_T1_{A^C}$
Now $E[M_T|F_S] = E[M_T 1_A + M_T1_{A^C}|F_S]$.
$E[M_T 1_A|F_S] = E[M_{T_1}1_A|F_S] = E[M_{T_1}|F_S]1_A$
But $T_1\geq S$. By Doob's Optional Sampling Theorem, and since $T, S, T_1$ and $T_2$ are bounded stopping times,
$E[M_{T_1}|F_S]1_A = M_S1_A \quad (1)$
Similarly $E[M_T1_{A^C}|F_S]=E[M_{T_2}1_{A^C}|F_S]=E[M_{T_2}|F_S]1_{A^C}$
But $T_2 \leq S$. Therefore $F_{T_2} \subseteq F_S$.
$\Rightarrow E[M_T1_{A^C}|F_S] = M_{T_2}1_{A^C} = M_T1_{A^C} \quad (2)$
From (1) and (2)
$E[M_T|F_S] = M_S1_A + M_T1_{A^C} = M_{S \wedge T} $
You can interchange T and S and the proof goes through with the corresponding changes. As for the unbounded case, a limiting argument would be required. See if you can get it using the above, replacing T and S with $T\wedge N$ and $S \wedge N$, letting N go to $\infty$. Be careful though.
Also in "Stochastic Processes" by Richard Bass, the above is true even in the continuous time case.
Edit: Finally proved it for the unbounded case. But had to resort to Doob's Martingale convergence and Levy's zero one law, both available on Wiki.
Let T and S be two stopping times not necessarily bounded. Consider
$$ T_n = T\wedge n $$
$$S_n = S\wedge n$$
Now $T_n$ and $S_n$ are bounded stopping times. Moreover $T_n \uparrow T$ and $S_n \uparrow S$. Consider the following:
$$E[M_{T_n}|F_{S_m}] = M_{T_n \wedge S_m} \quad (3)$$
which is what we proved before. Now let us fix m. Let $T_n\wedge S_m = U_n$ and $T\wedge S_m = U$. Note $U_n \uparrow U$.
Claim: $M_{U_n}$ is a discrete martingale wrt $F_{U_n}$. And it satisfies the condition for Doob's martingale convergence.
Proof: Since $E[M_{U_n}] = E[M_0] = E[Y] < \infty$, $E[M_{U_k}|F_{U_m}] = M_{U_m}$ for $m<k$ by Doob's OST, and $\sup_{n \geq 1}E[|M_{U_n}|] \leq E[|Y|] <\infty$, the claim is proved.
Therefore $$\lim_{n \rightarrow \infty} M_{U_n} = M_U$$
Applying this to (3), we get $$\lim_{n \rightarrow \infty} E[M_{T_n}|F_{S_m}] = \lim_{n \rightarrow \infty}M_{T_n \wedge S_m} = M_{T\wedge S_m}\quad (4)$$
Now by the same logic, $M_{T_n} \rightarrow M_T$ and $\sup_{n \geq 1} E[M_{T_n}] = E[Y] < \infty $. By Conditional DCT, $$\lim_{n \rightarrow \infty} E[M_{T_n}|F_{S_m}] = E[M_T|F_{S_m}] \quad (5) $$
By (4) and (5)
$$E[M_T|F_{S_m}] = M_{T\wedge S_m}$$
Now let $m \rightarrow \infty$. The RHS goes to $M_{T\wedge S}$ by martingale convergence. Now Lévy's conditional convergence theorem (a.k.a. his zero-one law) states that if $X \in L^1$ and $F_m$ is a filtration, then $$E[X|F_m] \rightarrow E[X|F_\infty] = X.$$ Using this, we get
$$ E[M_T|F_{S_m}] \rightarrow E[M_T|F_{S_\infty}] = E[M_T|F_S]$$
QED
Note: Lévy's zero-one law is a corollary of Doob's martingale convergence, so in effect we used only that and the conditional DCT.
Basis and Linear Independence definitions | If I understood your confusion correctly, this should resolve it. You have a slight misunderstanding of the definitions. I will assume that we are dealing with a finite dimensional vector space $V$, for simplicity. If a list of vectors $\{v_1,\ldots,v_n\}$ is a basis for the space, then $\{v_1,\ldots,v_n\}$ satisfies two conditions:
$1)$ Linear Independence:
$$ \sum_{i=1}^na_iv_i=0\implies a_i=0\:\forall 1\le i\le n,$$
where the $a_i\in \mathbb{F}$, the base field for the vector space. Equivalently, we can reformulate this as saying no basis vector $v_i$ can be represented as
$$ \sum_{j=1,j\ne i}^na_jv_j=v_i.$$
Next, we have the second condition.
$2)$ Spanning: If $v\in V$, then
$$ v=\sum_{i=1}^n a_iv_i$$
where each $v_i$ is one of the basis vectors, and the $a_i$ are appropriately chosen scalars.
Now, these definitions are not contradictory precisely because each basis vector can be represented as $1$ times itself. That is
$$ v_i=1\cdot v_i.$$
The contrast here is that it is not the case that a basis vector can be written as a linear combination of the other basis vectors, only as a trivial linear combination of itself. |
Real Analysis Methodologies to show $\gamma =2\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx$ | For
$$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$
using
$$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$
we have
$$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$
Hence, let $x=1$ we get
$$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$
let $s=t^k,~k>0$, we get
$$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$
So,let $k=a,b$
$$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$
hence
$$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-t^a}-e^{-t^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$
then
$$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$
let $t\rightarrow \dfrac{1}{t}$,we get
$$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$
So
$$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$
and
$$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$
Lemma:
$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a>0$$
Proof:
Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$
so
$$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$
hence
$$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$
It's easy to see that $f\left ( \infty \right )=0$,so
$$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$
Substituting $t = x^a$ and then renaming $x$ back to $t$, we get
$$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$
Now using $(1)$ and $(2)$, we get
$$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$ |
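As a numerical sanity check of the representation of $\gamma$ derived at the start, here is a small mpmath sketch (my own):

```python
from mpmath import mp, quad, exp, euler

mp.dps = 25
# gamma = int_0^inf (1/(s+1) - e^{-s}) ds / s
val = quad(lambda s: (1 / (s + 1) - exp(-s)) / s, [0, mp.inf])
print(val)    # 0.5772156649...
print(euler)  # the Euler-Mascheroni constant, for comparison
```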
Showing that complex functions with the same derivative on the unit disc differ by a constant | If $U \subseteq \mathbb{C}$ is a region (i.e., open and connected), $f :U \rightarrow \mathbb{C}$ is a holomorphic function, and $f^\prime(z) = 0$ for all $z \in U$, then $f$ is constant. For if we write $$f = u+iv,$$ where $u$ and $v$ are the real and imaginary parts of $f$, respectively, then all the partial derivatives $$u_x,u_y, v_x, v_y$$ vanish in $U$. Thus $f$ will be constant along any segment parallel to either coordinate axis. Since $U$ is a region, each pair of points of $U$ can be connected by a polygonal path lying entirely within $U$ which is composed of such segments.
If $A=\sum_{n=1}^{3027}\sin(n\pi/2018)$ and $B=\sum_{n=1}^{3027}\cos(n\pi/2018)$, evaluate $A(1-\cos(\pi/2018))+B\sin(\pi/2018)$ | I'll generalize the problem a bit, writing $m$ for $3027$ and $2\theta$ for $\pi/2018$.
Adapting the identities proven here, we can write
$$\begin{align}
A &:= \sum_{n=1}^{m}\sin 2n\theta = \frac{\sin(m+1)\theta}{\sin\theta}\;\sin m\theta \\[4pt]
B &:= \sum_{n=1}^{m}\cos 2n\theta = \frac{\sin(m+1)\theta}{\sin\theta}\;\cos m\theta - 1
\end{align}$$
where we subtract $\cos 0=1$ from the cosine sum (and $\sin 0=0$ from the sine sum!) because our summations start at index $1$ instead of $0$. Then we have
$$\begin{align}
A \left(1-\cos 2\theta\right) + B\sin 2\theta
&= \phantom{+}A\cdot 2 \sin^2\theta + B\cdot 2\sin\theta\cos\theta \tag{1}\\[4pt]
&= \phantom{+}2\sin\theta\;\sin(m+1)\theta\;\sin m\theta \\
&\phantom{=}+ 2 \cos\theta\;\sin(m+1)\theta\;\cos m\theta - 2\sin\theta\cos\theta \tag{2}\\[4pt]
&= \phantom{+}2\sin(m+1)\theta\;\left(\sin\theta\sin m\theta+\cos\theta\cos m\theta\right)-\sin 2\theta \tag{3}\\[4pt]
&= \phantom{+}2\sin(m+1)\theta\;\cos(m-1)\theta -\sin 2\theta \tag{4} \\[4pt]
&= \phantom{+}\left( \sin 2m\theta + \sin 2\theta \right) - \sin 2\theta \tag{5} \\[4pt]
&= \phantom{+} \sin 2m\theta \tag{6}
\end{align}$$
where $(1)$ uses the double-angle identities, $(4)$ uses the angle-difference identity for cosine, and $(5)$ invokes a somewhat-lesser-known product-to-sum identity.
Restoring the specific values for the problem at hand,
$$2\theta = \frac{\pi}{2018} \qquad m = 3027 \qquad\to\qquad 2m\theta = \frac{3\pi}{2} \qquad\to\qquad \sin 2m\theta = -1 \tag{$\star$}$$
It's a bit of an identity slog, but there it is. $\square$ |
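A direct numerical check of $(\star)$ (a small Python sketch of my own):

```python
import math

m, t = 3027, math.pi / 2018        # here t plays the role of 2*theta
A = sum(math.sin(n * t) for n in range(1, m + 1))
B = sum(math.cos(n * t) for n in range(1, m + 1))
print(A * (1 - math.cos(t)) + B * math.sin(t))   # -1.0 up to rounding
```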
Gradient descent using Taylor Series | $o()$ refers to the Landau notation.
$$ f ( x_\alpha ) = f ( x ) + { \nabla f } ( x )'( x_\alpha - x ) + o ( || x_\alpha - x || )$$
Plugging in the definition $x_\alpha = x - \alpha \nabla f ( x )$,
$$= f ( x ) + \nabla f ( x )'( x - \alpha \nabla f ( x ) - x ) + o ( || x - \alpha { \nabla f } ( x ) - x || ) $$
$$ = f ( x ) + \nabla f ( x )'( - \alpha \nabla f ( x ) ) + o ( ||- \alpha { \nabla f } ( x ) || ) \qquad \qquad \quad$$
And by definition of $o$ and $||\cdot||$
$$ = f ( x ) -\alpha \underbrace{ \nabla f ( x )' \nabla f ( x )}_{||\nabla f(x)||^2} + \underbrace{o ( \alpha || { \nabla f } ( x ) || )}_{o(\alpha)} \qquad \qquad \qquad \quad \: \: \:$$ |
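To see the $o(\alpha)$ behaviour concretely, here is a small numerical sketch; the test function $f$ is my own choice, not from the question:

```python
import numpy as np

def f(x):
    return np.sum(x**4 + x**2)

def grad_f(x):
    return 4 * x**3 + 2 * x

x = np.array([1.0, -0.5])
g = grad_f(x)
for alpha in [1e-2, 1e-3, 1e-4]:
    gap = f(x - alpha * g) - (f(x) - alpha * g @ g)
    print(alpha, gap)   # the gap shrinks like alpha^2, i.e. o(alpha)
```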
About $\sum_{i\geq 1}\frac{1}{(n+i)_{n+1}}$ and $\sum_{i\geq 1}\frac{1}{i^2-i-1}$ | $$\frac{1}{i(i+1)\cdot\ldots\cdot(i+n)} = \frac{1}{n}\cdot\left(\frac{1}{i(i+1)\cdot\ldots\cdot(i+n-1)}-\frac{1}{(i+1)\cdot\ldots\cdot(i+n)}\right)$$
hence the first series is a telescopic series. About the second one:
$$ \sum_{n\geq 0}\frac{1}{(n+a)(n+b)}=\frac{\psi(a)-\psi(b)}{a-b} $$
and
$$ \psi(z)-\psi(1-z) = -\pi\cot(\pi z) $$
rapidly prove your claim. |
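For the second series, here is a quick mpmath sketch (my own): with $\varphi$ the golden ratio, $i^2-i-1=(i-\varphi)(i+\varphi-1)$, so the shift $i=n+1$ puts it in the $(n+a)(n+b)$ form with $a=1-\varphi$, $b=\varphi$:

```python
from mpmath import mp, digamma, sqrt, nsum, inf

mp.dps = 25
phi = (1 + sqrt(5)) / 2
a, b = 1 - phi, phi
closed = (digamma(a) - digamma(b)) / (a - b)
direct = nsum(lambda i: 1 / (i**2 - i - 1), [1, inf])
print(closed, direct)   # the two values agree (~0.546)
```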
Does every net have a countable subnet? | No. The net on $\omega_1$ into $\omega_2$, sending each countable ordinal to itself, is a counterexample.
If a net converges to a point $p$ with a countable local base, then yes, there is a countable subnet. Let $\{U_1, U_2, \ldots\}$ be the countable base and use the descending local base $\{U_1, U_1 \cap U_2, \ldots\}$ to construct a countable subnet.
Eigenvalue eigenvector (basis) | The characteristic polynomial $p_A(X)=\det(A-XI_3)$ of your matrix is
$$
p_A(X)=72X+6X^2-X^3
$$
which has roots $-6$, $0$ and $12$.
So the matrix is diagonalizable, hence it has a basis consisting of eigenvectors. (The fact that it is diagonalizable also directly follows from the fact that $A$ is symmetric.)
Just find an eigenvector relative to each eigenvalue and you have found a basis as required. |
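For instance, a one-line numerical check of the roots (a Python sketch; the coefficients are read off from $p_A$ above):

```python
import numpy as np

# p_A(X) = -X^3 + 6X^2 + 72X, highest-degree coefficient first.
print(np.roots([-1, 6, 72, 0]))   # 12, -6 and 0, in some order
```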
Discrete Math - Determine truth value of each proposition | $F \to F = T$. The full truth table for $\to$ is:
$F \to T = T$, $F \to F = T$, $T \to F = F$, $T \to T = T$.
Can adding one column to a matrix increase its rank by more than one | Answering the title question: No.
The rank of a matrix equals its column rank, which by definition is the number of linearly independent columns. Adding one column will increase the column rank by one if it is independent from the others and will leave the column rank unchanged if it is dependent on the others. |
Solution for a matrix commutation | I think you can't. Here is what I tried and why I thought that it is impossible.
I assume $\gamma, \mu$ to be $1\times n$ matrices, $1$ to be an $n\times 1$ matrix whose entries are identically 1, and $\Sigma^{−1}$ to be an invertible matrix.
Now we can manipulate given formula as this :
$\mu\Sigma^{−1}1 - \gamma\Sigma^{−1}1=λ$
$\Rightarrow \gamma\Sigma^{−1}1 = \mu\Sigma^{−1}1 - λ$
And if I could find a 1$\times$n matrix $\alpha$ such that $\Sigma^{−1}1 \times \alpha = I$(here I put $\times$ to emphasize that it is a matrix multiplication, and $I$ represents n$\times$n identity matrix), then by right multiplying it both sides, we get
$\gamma = (\mu\Sigma^{−1}1 - λ)\alpha$.
But this is impossible.
Suppose there is such $\alpha$, that is, $\Sigma^{−1}1 \alpha = I$. Then since $\Sigma^{−1}$ is an invertible matrix, $1 \alpha = \Sigma I = \Sigma$.
Hence $1 \alpha$ has to be an invertible matrix. But $\operatorname{rank}(1 \alpha) = 1$ or $0$ for any $1\times n$ matrix $\alpha$. This leads to a contradiction unless you are dealing with $1\times 1$ matrices, which is just ordinary real-number arithmetic.
Factoring the sum or difference of two cubes | (STRONG) HINT: If you multiply $(A+B)(A^2-AB+B^2)$ you get $A^3+B^3$; similarly $A^3-B^3=(A-B)(A^2+AB+B^2)$. So you have $A=x$ and $B=3$. Substitute into the above formula and you will get the factored form.
Finding the number of Hamiltonian cycles for a cubical graph | Choose one point, arbitrarily (since the graph is symmetric). You can choose the Hamiltonian path through that vertex in $3$ ways, effectively choosing which of its incident edges will not be on the path. The adjacent vertex that will not now be adjacent on the path has also had its local path chosen, since we have removed the option of one edge.
To complete the path from these two short initial sections, one end is linked directly and the other end must be connected through the vertices not yet included on the path - $2$ choices. You can demonstrate this by choosing the non-connecting option at one end of a short section, then noticing that the other end of the same section must now directly connect to the other initial section of path to avoid a short cycle.
So there are a total of $3\times 2 = 6$ undirected Hamiltonian cycles for the cube, as you found. |
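The count is small enough to confirm by brute force. Here is a Python sketch of my own, labelling the cube's vertices by $3$-bit integers with edges between integers differing in one bit:

```python
from itertools import permutations

def adjacent(u, v):
    return bin(u ^ v).count("1") == 1   # differ in exactly one bit

cycles = set()
for perm in permutations(range(8)):
    if perm[0] != 0:                    # fix the start to kill rotations
        continue
    if all(adjacent(perm[i], perm[(i + 1) % 8]) for i in range(8)):
        # identify the two directions of traversal of the same cycle
        cycles.add(min(perm, (perm[0],) + perm[:0:-1]))
print(len(cycles))                      # 6
```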
Binomial Distribution Excel Function Probability Problem | In Excel, BINOMDIST$(k;200;0.24;FALSE)$ gives the probability that in $200$ trials exactly $k$ are successful, given that the probability of a success is $0.24$.
I've got:
$$\sum_{k=56}^{200} \text{ BINOMDIST}(k;200;0.24;FALSE)=0.10843$$
for the probability that in $200$ cases $56$ or more are binge drinkers.
Also, BINOMDIST$(k;200;0.24;TRUE)=P(\text{number of successes }\le k).$
So,
$$P(\text{number of successes }\ge 56)=1-\text{BINOMDIST}(55;200;0.24;TRUE)=0.10843.$$ |
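The same computation in Python (a sketch; scipy's cdf plays the role of BINOMDIST with TRUE):

```python
from scipy.stats import binom

print(1 - binom.cdf(55, 200, 0.24))   # 0.10843...
print(binom.sf(55, 200, 0.24))        # same, via the survival function
```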
Let $f(x,y)=2(x+y)$ for $0<x<y<1$. Find $E[Y]$ | I get $\frac{3}{4}$ for A.
We are integrating $2xy+2y^2$ with respect to $y$. We get antiderivative $xy^2+\frac{2}{3}y^3$. Plugging in the bounds we get $x+\frac{2}{3}-\frac{5}{3}x^3$.
Integrating with respect to $x$ gives $\frac{x^2}{2}+\frac{2}{3}x-\frac{5}{12}x^4$.
Insert the bounds. We get $\frac{1}{2}+\frac{2}{3}-\frac{5}{12}$. Simplify.
For B we also get $\frac{3}{4}$. The calculation is shorter.
Remark: One can view the second way as finding the density function of $Y$ first. I prefer to think of the expectation as a double integral, evaluated in two ways as an iterated integral.
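The double integral is easy to confirm symbolically; a short SymPy sketch (my own):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
# E[Y] over the region 0 < x < y < 1, inner integral in y first.
EY = sp.integrate(2 * y * (x + y), (y, x, 1), (x, 0, 1))
print(EY)   # 3/4
```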
If $rS_n=S_n+(ar^{n+1}-a)$ , how $S_n={ar^{n+1}-a\over r-1 }$? | What is next to
$$rS_n=S_n+(ar^{n+1}-a)?$$ Well, we get (by adding $-S_n$ to both sides, or simply transposing $S_n$ to the left-hand side)
$$rS_n-S_n=(ar^{n+1}-a)$$ and so (applying factoring at the left-hand side) we get
$$S_n(r-1)=(ar^{n+1}-a)$$ then dividing both sides by $r-1$ (if $r\neq 1$), we are done. |
Finding the area of a set | Write $x^2 + y^2 = r^2(\theta)$ on the left-hand side and $x = r^3(\theta) \sin (\theta)$ and likewise for $y$ on the right-hand side. Cancel an $r(\theta)$ to get a cubic. The solutions are:
$$\left\{\sqrt[3]{\sin ^3(\theta )+\cos ^3(\theta )},-\sqrt[3]{-1} \sqrt[3]{\sin ^3(\theta )+\cos ^3(\theta )},(-1)^{2/3}
\sqrt[3]{\sin ^3(\theta )+\cos ^3(\theta )}\right\}$$
Now integrate in radial coordinates, being sure to use the proper solutions. |
Quick way of writing equation of a tangent | You are almost there.
Since $(x_1,y_1)$ is a point on the curve, it satisfies that
$x_1^2-3y_1^2=4y_1$.
Substitute to your last equation:
$xx_1-x_1^2-3yy_1-2y+3y_1^2+2y_1=xx_1-3yy_1-2y-2y_1-(x_1^2-3y_1^2-4y_1)=0$
to get answer:
$xx_1-3yy_1=2y+2y_1$ |
Proving a subset is a subspace of $\mathbb{R}^3$ & Writing basis/dimensions | It's better to write the subset as:
$$W=\left\{\begin{bmatrix}
3s - 2t \\
s + 2t \\
2s + 3t \\
\end{bmatrix}:s,t \in \mathbb{R}\right\}.$$
I know that the conditions for a subset being a subspace are that it must include the zero vector, and be closed under addition and scalar multiplication. However, I do not know how to do this.
To show the zero vector belongs to the set, we find values of $s \in \mathbb{R}$ and $t \in \mathbb{R}$ such that
$$\begin{bmatrix}
3s - 2t \\
s + 2t \\
2s + 3t \\
\end{bmatrix}=\begin{bmatrix}
0 \\
0 \\
0 \\
\end{bmatrix}.$$
To show that it's closed under addition, we take two arbitrary elements from $W$, add them together, and show that the result is in $W$. I.e. show
$$
\begin{bmatrix}
3s - 2t \\
s + 2t \\
2s + 3t \\
\end{bmatrix}
+
\begin{bmatrix}
3s' - 2t' \\
s' + 2t' \\
2s' + 3t' \\
\end{bmatrix}
=
\begin{bmatrix}
3(s+s') - 2(t+t') \\
(s+s') + 2(t+t') \\
2(s+s') + 3(t+t') \\
\end{bmatrix}
$$
belongs to $W$.
To show that it's closed under scalar multiplication, we take an arbitrary element from $W$, and multiply it by an arbitrary scalar $\alpha \in \mathbb{R}$, and show that the result is in $W$. I.e., we show
$$
\alpha
\begin{bmatrix}
3s - 2t \\
s + 2t \\
2s + 3t \\
\end{bmatrix}
=
\begin{bmatrix}
3\alpha s - 2\alpha t \\
\alpha s + 2\alpha t \\
2\alpha s + 3\alpha t \\
\end{bmatrix}
$$
is in $W$.
To identify a basis (a spanning, linearly independent subset), the coefficients of $s$ and the coefficients of $t$ give a hint. We can rewrite $W$ as:
$$W=\left\{s\begin{bmatrix}
3 \\
1 \\
2 \\
\end{bmatrix}+t\begin{bmatrix}
- 2 \\
2 \\
3 \\
\end{bmatrix}:s,t \in \mathbb{R}\right\}.$$ Note: we need to check that the basis vectors are indeed linearly independent.
The dimension of $W$ is the size of any of its bases, by the Dimension Theorem, so once you've found a basis, the answer to this part is "what's its size?".
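That independence check is quick to do numerically; here is a small sketch (my own) using the rank of the matrix whose columns are the two candidate basis vectors:

```python
import numpy as np

B = np.array([[3, -2],
              [1,  2],
              [2,  3]])
print(np.linalg.matrix_rank(B))   # 2: the columns are independent,
                                  # so they form a basis and dim W = 2
```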
Probability problem with cards | There are several different ways to do this; no guarantee that this is the simplest. In the following paragraphs, a hand is defined as a collection of $13$ cards from the pool of available cards.
One way is to name the three other players, say Alice, Bob, and Charlie. Next, calculate the chance $p_{1}$ that Alice has a void in clubs. This is
$$
p_{1} = \frac{\texttt{# hands without clubs from 39 cards}}{\texttt{total # hands from 39 cards}}.
$$
Multiply by 3, and you get the chance that Alice or Bob or Charlie has a void...
EXCEPT we double-counted the cases where two of the three have a void. Let $p_{2}$ be the probability that Alice and Bob both have a void. This happens if and only if Charlie has all 5 remaining clubs, so
$$
p_{2} = \frac{\texttt{# hands with all 5 clubs from 39 cards}}{\texttt{total # hands from 39 cards}}.
$$
Your final answer would be $3p_{1}-3p_{2}$ (Normally, you would add $p_{3}$, the chance that all three have a void, but this is obviously equal to $0$. This is a case of the inclusion-exclusion principle, which is particularly useful when thinking through probability problems.)
Hopefully, this gives you a good launching point. Sometimes, the hardest part of probability questions is not double-counting and making sure you counted everything. |
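To make this concrete, here is a short Python sketch, assuming (as the $p_{2}$ step above implies) that exactly $5$ clubs remain among the $39$ unseen cards:

```python
from math import comb

total = comb(39, 13)
p1 = comb(34, 13) / total   # a fixed opponent holds no clubs
p2 = comb(34, 8) / total    # a fixed opponent holds all 5 clubs
print(3 * p1 - 3 * p2)      # P(at least one opponent has a club void)
```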
Any element of $\mathbf{Z}[\xi]$ is congruent to an integer modulo $(1-\xi)^2$ if multiplied by a suitable power of $\xi$ | This is false - $ 1 - \xi $ can never be multiplied by a power of $ \xi $ to be an integer modulo $ (1 - \xi)^2 $. Let $ \mathfrak p = (1 - \xi) $ throughout the post.
To see this, note that $ p \equiv 0 \pmod{\mathfrak p^2} $ since the ideal $ (p) $ totally ramifies as $ (p) = \mathfrak p^{p-1} $, so if $ \xi^k (1 - \xi) $ is an integer modulo $ \mathfrak p^2 $, without loss of generality it can be taken to be in the set $ \{ 0, 1, \ldots, p-1 \} $. It cannot be $ 0 $, because that would imply $ 1 - \xi \in \mathfrak p^2 $ and $ \mathfrak p = \mathfrak p^2 $, but $ 1 - \xi $ is not a unit: it has norm $ \pm p $. It cannot be anything else in this set, because $ \mathbf Z \cap \mathfrak p = p\mathbf Z $, therefore all of the nonzero integers in the set are invertible modulo $ \mathfrak p $, and thus modulo $ \mathfrak p^2 $. Thus, such a congruence would imply that $ 1 - \xi $ is invertible modulo $ \mathfrak p^2 $, which is absurd, since the ideals $ (1 - \xi) = \mathfrak p $ and $ \mathfrak p^2 $ are not coprime. |
Derived Set of a given subset of Real Line. | $A$ is an additive subgroup of $\Bbb R$, which is not discrete since $\pi$ is irrational. A known result about additive subgroups of $\Bbb R$ shows that $A$ is dense in $\Bbb R$ for the usual topology. Each point which is not in $A$ is a limit point.
For the points which are in $A$, we use irrationality criteria (proposition 4 in the link). |
Prove chromatic polynomial of n-cycle | You made a mistake at the beginning. The reduction should be
$$P(C_n,x)=P(P_n,x)-P(C_{n-1},x)$$
which holds for $n\ge2$. I don't know what $P_0$ means. We have:
$$P(C_1,x)=0$$
$$P(C_2,x)=P(P_2,x)-P(C_1,x)=P(P_2,x)-0=P(P_2,x)$$
$$P(C_3,x)=P(P_3,x)-P(C_2,x)=P(P_3,x)-P(P_2,x)$$
$$P(C_4,x)=P(P_4,x)-P(P_3,x)+P(P_2,x)$$
$$P(C_5,x)=P(P_5,x)-P(P_4,x)+P(P_3,x)-P(P_2,x)$$
and in general (for $n\ge2)$
$$P(C_n,x)=P(P_n,x)-P(P_{n-1},x)+\cdots+(-1)^nP(P_2,x)$$
$$=x(x-1)^{n-1}-x(x-1)^{n-2}+\cdots+(-1)^nx(x-1)$$
$$=\frac{x(x-1)^{n-1}+(-1)^nx}{1+(x-1)^{-1}}$$
$$=(x-1)^n+(-1)^n(x-1).$$ |
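The closed form is easy to sanity-check by brute force; a small Python sketch (my own) that counts proper colorings of $C_n$ directly:

```python
from itertools import product

def colorings(n, x):
    """Number of proper x-colorings of the cycle C_n."""
    return sum(
        all(c[i] != c[(i + 1) % n] for i in range(n))
        for c in product(range(x), repeat=n)
    )

for n in range(2, 7):
    for x in range(1, 5):
        assert colorings(n, x) == (x - 1)**n + (-1)**n * (x - 1)
```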
General form for powers of tridiagonal matrices | To see that $A^k$ has "one diagonal more" than $A^{k-1}$, write the column $j$ of the equation $A^k=A^{k-1}\cdot A$, which gives
$$
(A^k)_{:,j}=A^{k-1}\cdot (A)_{:,j}=a_{j-1,j}(A^{k-1})_{:,j-1}+a_{j,j}(A^{k-1})_{:,j}+a_{j+1,j}(A^{k-1})_{:,j+1}.
$$
So the column $(A^k)_{:,j}$ of $A^k$ is combined from the corresponding column of $A^{k-1}$ and the columns $j-1$ and $j+1$. Since $(A^{k-1})_{:,j-1}$ has one more nonzero above the first nonzero position of $(A^{k-1})_{:,j}$ and $(A^{k-1})_{:,j+1}$ has one more nonzero below the last nonzero position of $(A^{k-1})_{:,j}$, the nonzero pattern of the column $(A^k)_{:,j}$ grows by one both up and down.
Formally, you can use the induction in the same spirit. |
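Here is a short numerical illustration (my own sketch) of the bandwidth growing by one with each power of a random tridiagonal matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = (np.diag(rng.random(n))
     + np.diag(rng.random(n - 1), 1)
     + np.diag(rng.random(n - 1), -1))

def bandwidth(M, tol=1e-12):
    i, j = np.nonzero(np.abs(M) > tol)
    return int(np.max(np.abs(i - j)))

P = A.copy()
for k in range(1, 5):
    print(k, bandwidth(P))   # prints 1, 2, 3, 4
    P = P @ A
```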
Is "induced subgraph" necessary to define an $H$-free graph? | Why do we not just say that $G$ is $H$-free if it does not contain $H$ as a subgraph?
In my experience, this is generally what being $H$-free means. If we want to say that $G$ does not contain $H$ as an induced subgraph, we say that $G$ is induced $H$-free.
Certainly, discussion of $H$-free graphs is very common when discussing the forbidden subgraph problem. This problem is generally only considered with the ordinary notion of subgraphs, not the induced kind. (If you're asking for the largest number of edges in an $n$-vertex induced $H$-free graph, then unless $H$ is a clique, you will get a boring answer: the best graph is $K_n$ with $\binom n2$ edges. This is why we don't talk about induced subgraphs for this problem.)
Both notions are useful, and there is terminology for both. Sometimes people are lazy about this terminology, and don't specify that $G$ is induced $H$-free when they should. Sometimes, it's hard to tell which convention is being used, and sometimes it doesn't matter. (For example, if we're counting $C_5$'s in a triangle-free graph, the answer doesn't change if we add the word "induced" in both places.) |
homeomorphism of the interior of an annulus on the plane and the whole plane without one point | This is easier in polar coordinates. Map $(r,\theta)$ to $(r-a,\theta)$ for $a < r < b$, and then map $(r,\theta)$ to $\left(\tan\dfrac{\pi r}{2(b-a)},\,\theta\right)$ for $0 < r < b-a$.
Question about Morse inequality | Since all the coefficients of $(1+t)Q(t)$ are non-negative integers, the initial equation says that $M_k(a,b)\ge\beta_k(a,b)$ for all $k$, and for all but a finite set of $k$, $M_k(a,b)=\beta_k(a,b)$. This implies 1).
If we set $t=-1$, the initial equation is just 2). |
Convergence of sequence of real numbers and (sub)subsequence | By contrapositive: Suppose that $(x_n)\not\rightarrow a.$ This means that $\exists\delta>0$ such that $\forall N\in\mathbb N$ there is some natural $n>N$ such that $|x_n-a|\geqslant\delta.$ Now construct a subsequence $(x_{n_k})$ of $(x_n)$ as follows. Let $n_0>0$ be such that $|x_{n_0}-a|\geqslant\delta.$ Having chosen $n_0,\ldots,n_k,$ let $n_{k+1}$ be such that $n_{k+1}>n_k$ and $|x_{n_{k+1}}-a|\geqslant\delta.$ Clearly, the subsequence $(x_{n_k})$ does not have a subsequence which converges to $a.$ Consequently, if $(x_n)\not\rightarrow a,$ you will always be able to find a subsequence $(x_{n_k})$ of $(x_n)$ that is far from $a.$ Therefore, if every subsequence of $(x_n)$ has a subsequence which converges to $a$ then $(x_n)$ converges to $a.$
Prove that if $X_{n}\cap X_{n+1}\neq\emptyset$, $X=\bigcup X_{n}$ is connected | As stated in the comments, each $X_n$ must be connected. For example, if $M=X_1$ is the discrete space of two points, then $X=X_1$ is not connected.
Suppose that $X$ is not connected. Then there exist disjoint nonempty open subsets $U$ and $V$ of $X$ which cover $X$. Since each $X_n$ is connected, we have, for each $n$,
$$\mbox{($X_n\subseteq U$ and $X_n\cap V=\emptyset$) or ($X_n\subseteq V$ and $X_n\cap U=\emptyset$)}.$$
Furthermore, since $U$ and $V$ are both nonempty, we have $m>1$. Now use the assumption $$\mbox{$X_n\cap X_{n+1}\ne\emptyset$ for all $n$}$$
to contradict our choice of $U$ and $V$. |
Can someone explain this interval? | The definition of $|a|$ is $$|a|=\begin{cases}a,& a\geq0\\-a,&a<0\end{cases}$$
Therefore $|a|<b$ means that either
$$0\leq a<b$$
or
$$a<0, -a<b$$
which is the same as $-b<a<0$.
Since there is an or all valid cases are the union of both cases.
The union of $-b<a<0$ and $0\leq a<b$ is $$-b<a<b$$ |
Looking for orthogonal basis of eigenvectors using Gram Schmidt process | The Gram-Schmidt process does not change the span. Since the span of the two eigenvectors associated to $\lambda=1$ is precisely the eigenspace corresponding to $\lambda=1$, if you apply Gram-Schmidt to those two vectors you will obtain a pair of vectors that are orthonormal, and that span the eigenspace; in particular, they will also be eigenvectors associated to $\lambda=1$.
You can also see that the eigenvector corresponding to $\lambda=10$ is orthogonal to the other two eigenvectors, hence to the entire eigenspace they span. So the third eigenvector will already be orthogonal to the orthonormal basis you find for $E_{1}$. You'll just need to normalize it.
Note. You can always find an orthonormal basis for each eigenspace by using Gram-Schmidt on an arbitrary basis for the eigenspace (or for any subspace, for that matter). In general (that is, for arbitrary matrices that are diagonalizable) this will not produce an orthonormal basis of eigenvectors for the entire space; but since your matrix is symmetric, the eigenspaces are mutually orthogonal so the process will definitely work. In fact, a real matrix is orthogonally diagonalizable if and only if it is symmetric. |
How can one write $z^{-1}$ as a Stieltjes function? | A friend of mine solved this for me. As he does not own an account, I will post his answer here.
Quite simply, $\mu(t) = H(t)$, where $H$ is the Heaviside function. The proof is immediate, since $\mathrm{d}\mu(t) = \delta(t)\,\mathrm{d}t$ gives
$$
\int_0^{\infty}\frac{\mathrm{d}\mu(t)}{t+z} = \int_0^{\infty}\frac{\delta(t)\,\mathrm{d}t}{t+z} = \frac{1}{z}.
$$ |
Can we express $p_n$ in terms of $p_0, p_1$ and $n$? | You have a linear three-term recurrence. Rearrange as
$$p_{n+1}-bp_n+p_{n-1}=0$$
and you obtain the characteristic polynomial $x^2-bx+1=0$. This means that $p_n$ can be expressed as
$$p_n=cx_1^n+dx_2^n$$
where $x_1,x_2$ are the roots of the quadratic. You can determine $c$ and $d$ in terms of $a$ and $b$ by using the initial conditions and solving the resulting linear equations... |
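Carrying this out symbolically, a short SymPy sketch (my own; $p_0$ and $p_1$ stand for the initial values through which the question's data enter):

```python
import sympy as sp

b, c, d, p0, p1, x = sp.symbols('b c d p0 p1 x')
x1, x2 = sp.solve(x**2 - b * x + 1, x)          # characteristic roots
sol = sp.solve([c + d - p0, c * x1 + d * x2 - p1], [c, d])
print(sp.simplify(sol[c]), sp.simplify(sol[d]))
```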
What are (algebraic) equations, really? | The equation $y=x$ is a predicate about the variables $y,x$. It may be true or false. For each possible pair $(x,y)\in\mathbb{R}^2$, some pairs satisfy the equation and some do not. Hence the equation partitions $\mathbb{R}^2$ into those pairs that satisfy the equation, and the rest. It is possible that all pairs satisfy the equation, or that none do.
It is common to draw $\mathbb{R}^2$ as a pair of perpendicular axes, and indicate the satisfying pairs for a given equation as dots. This is called the graph of the equation.
As for your second question, in order to meaningfully talk about $\mathbb{R}$, much less $\mathbb{R}^2$, we need to have a mathematical definition for it. Part of this definition includes the field axioms. Unfortunately, the definition is pretty complicated, and students are taught to use $\mathbb{R}$ before they really understand that definition. |
Mathematical measure of whether something's on schedule or not? | A standard deviation refers to a probability distribution, but of course we don't actually know the probability distribution for the $\text{km}$ you'll jog on a given day.
We can naively estimate this distribution as follows.
Let $N$ be your sample size, meaning your log has $N$ entries (or days). Here, the $i$-th entry is the number $x_i$ of $\text{km}$ you ran on day $i$, including days when you ran no $\text{km}$ at all $($in this case, $x_i=0)$. To each value $v$ on your log we'll associate the probability
$$p_v=\frac1N\cdot\#\{\text{entries with value $v$ in your log}\}.$$
The expected value of the number of $\text{km}$ ran on a given day will then be
$$\mu=\sum_v\,v\cdot p_v$$
and the standard deviation will be
$$\sigma=\sqrt{\sum_v^{}\,p_v\cdot {(v-\mu)}^2}$$
Example: Suppose your $2$-month log has $61$ days, with the following distribution:
$9$ days with $0$ $\text{km}$ jogs
$38$ days with $1$ $\text{km}$ jogs
$9$ days with $2$ $\text{km}$ jogs
$5$ days with $3$ $\text{km}$ jogs
Then our set of values $v$ is $\{0,1,2,3\}$ and we have
$$
\begin{array}{cc}
p_0=\frac{9}{61}&&p_1=\frac{38}{61}&&
p_2=\frac{9}{61}&&p_3=\frac{5}{61}
\end{array}
$$
Our mean jog has a length of
$$\mu=0\cdot\frac{9}{61}+1\cdot\frac{38}{61}+2\cdot\frac{9}{61}+3\cdot\frac{5}{61}=\frac{71}{61}\simeq 1.164$$
kilometres, and our standard deviation will hence be
\begin{align}
\sigma
&=\sqrt{
\frac{9}{61}\cdot{\left(0-\frac{71}{61}\right)}^2+
\frac{38}{61}\cdot{\left(1-\frac{71}{61}\right)}^2+
\frac{9}{61}\cdot{\left(2-\frac{71}{61}\right)}^2+
\frac{5}{61}\cdot{\left(3-\frac{71}{61}\right)}^2}\\
&=\sqrt{
\frac{9\cdot 71^2}{61^3}+
\frac{38\cdot 10^2}{61^3}+
\frac{9\cdot 51^2}{61^3}+
\frac{5\cdot 112^2}{61^3}}\\
&=\sqrt{
\frac{45369+3800+23409+62720}{61^3}
}\\
&=\sqrt{\frac{135298}{61^3}}\simeq 0.772
\end{align}
kilometres.
The standard deviation is a measure of how closely you follow your schedule in the following sense:
the closer your values $v$ are to the mean $\mu$, the smaller the standard deviation $\sigma$.
For instance, if you had run $1$ $\text{km}$ every day, then $\sigma=0$.
The thing is, the standard deviation in principle does not really care that your schedule is $1$ $\text{km}$ per day...
If you had run $0$ $\text{km}$ every day, or $3$ $\text{km}$ day, then you'd also have $\sigma = 0$.
With this in mind, what you can do is take a page from least squares and calculate how far from your scheduled $1$ $\text{km}$ your jogs are, on average.
In symbols, we'll be looking at
$$\epsilon=\sqrt{\frac{1}{N}\sum_{i=1}^N\,{\left(x_i-1\right)}^2}.$$
Grouping the $x_i$ by their values $v$, it turns out that
$$\epsilon=\sqrt{\sum_v^{}\,p_v\cdot {(v-1)}^2}.$$
Looks familiar, huh?
It's like the calculation for $\sigma$, except here we're forcing $\mu=1$ (our scheduled value).
Using the numbers for our previous example, we'd get
\begin{align}
\epsilon
&=\sqrt{
\frac{9}{61}\cdot{\left(0-1\right)}^2+
\frac{38}{61}\cdot{\left(1-1\right)}^2+
\frac{9}{61}\cdot{\left(2-1\right)}^2+
\frac{5}{61}\cdot{\left(3-1\right)}^2}\\
&=\sqrt{
\frac{9\cdot 1}{61}+
\frac{38\cdot 0}{61}+
\frac{9\cdot 1}{61}+
\frac{5\cdot 4}{61}}\\
&=\sqrt{
\frac{38}{61}} \simeq 0.789
\end{align}
Now, $\epsilon$ measures how closely you follow your schedule in the following sense:
the closer your values $v$ are to your intended schedule value $($ $1$ in this case$)$, the smaller the error $\epsilon$.
If you had run $1$ $\text{km}$ every day, you'd still get $\epsilon=0$.
Moreover, we don't run into the same problems as we did with $\sigma$ before.
If you had run $n$ $\text{km}$ every day, then you'd have $\epsilon=|n-1|$ -- you can't get better than this for these cases! |
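All of the numbers above can be reproduced with a few lines of Python (a sketch of my own):

```python
import math

log = {0: 9, 1: 38, 2: 9, 3: 5}    # km value -> number of days
N = sum(log.values())              # 61 days

mu = sum(v * c for v, c in log.items()) / N
sigma = math.sqrt(sum(c * (v - mu)**2 for v, c in log.items()) / N)
eps = math.sqrt(sum(c * (v - 1)**2 for v, c in log.items()) / N)
print(mu, sigma, eps)              # ~1.164, ~0.772, ~0.789
```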
Is the sum always bigger than $n^2$? | This is true. If $\nu_p(n)$ is maximal such that $p^{\nu_p(n)}\mid n$, then
$$
s(n)\ge (\frac32)^{\nu_2(n)} (\frac43)^{\nu_3(n)}n.
$$
Therefore, for all $i=1$, $\dots$, $2^5 3^3$,
$$
s(2^5 3^3n+i)\ge (\frac32)^{\min(\nu_2(i),5)} (\frac43)^{\min(\nu_3(i),3)} (2^5 3^3 n+i).
$$
Summing over $i=1$, $\dots$, $2^5 3^3$ gives
\begin{eqnarray*}
\sum_{1\le i\le 2^5 3^3} s(2^5 3^3n+i)&\ge&1555910n+ 785732\\
&\ge& 2.08 \sum_{1\le i\le 2^5 3^3}( 2^5 3^3n + i).
\end{eqnarray*}
This proves that
$$
\sum_{1\le n\le N} s(n)\ge 2.08 \frac{N(N+1)}{2}>N^2
$$
whenever $N$ is a multiple of $2^5 3^3=864$. If $N$ is not a multiple of $864$, write $N=N'+N''$, where $N'$ is a multiple of $864$ and $1\le N''\le 863$. Then, since $s(n)\ge n$ for all $n$,
$$
\sum_{1\le n\le N} s(n)\ge 2.08 \frac{N'(N'+1)}{2} + N' N'' + \frac{N''(N''+1)}{2}
$$
which will be bigger than
$$
N^2 = N'^2 + 2 N' N'' + N''^2
$$
whenever $N'\ge 22464$. Also, computation shows that
$$
\sum_{1\le n\le N} s(n)> N^2
$$
whenever $24\le N\le 22463$. So, we can take $M=23$.
Asymptotically, $s(n)/n$ should behave similarly to the random variable
$$
W:=\prod_p (1+\frac{1}{p})^{Z_p-1},
$$
where the $Z_p$'s are independent geometrically distributed random variables with success probability $1-\frac{1}{p}$, and
$$
{\Bbb E}(W) = \prod_p \frac{p(p-1)}{p^2-p-1} \approx 2.67411.
$$ |
Riemann Manifold equipped with Euclidean metric | Let $(x^1, \dots, x^n)$ be a local coordinate system and let
$$X_p = \sum_{i=1}^n \mathcal{X}^i_p \frac{\partial}{\partial x^i}\Biggr|_p,$$
$$Y_p = \sum_{i=1}^n \mathcal{Y}^i_p \frac{\partial}{\partial x^i}\Biggr|_p,$$
$$Z_p = \sum_{i=1}^n \mathcal{Z}^i_p \frac{\partial}{\partial x^i}\Biggr|_p$$ be vector fields.
Moreover, for a Riemann metric $g$ we have
$$g(Y_p,Z_p) = \sum_{i,j} \mathcal{Y}^i_p\mathcal{Z}^j_p\, g\left(\frac{\partial}{\partial x^i}\Biggr|_p, \frac{\partial}{\partial x^j}\Biggr|_p\right).$$
In addition, for a vector field $Y_p$ the Euclidean connection $\bar{\nabla}_{X_p}Y_p$ is given by
$$\bar{\nabla}_{X_p}Y_p = \sum_j X_p(\mathcal{Y}^j)\frac{\partial}{\partial x^j}\Biggr|_p.$$
Now apply $X_p$ as a derivation on $g(Y_p,Z_p)$; since the coefficients $g\left(\frac{\partial}{\partial x^i}\bigr|_p, \frac{\partial}{\partial x^j}\bigr|_p\right)$ of the Euclidean metric are constant, the product rule gives
$$X_p \cdot g(Y_p,Z_p) = \sum_{i,j}X_p(\mathcal{Y}^i\mathcal{Z}^j)\,g\left(\frac{\partial}{\partial x^i}\Biggr|_p, \frac{\partial}{\partial x^j}\Biggr|_p\right) = \sum_{i,j}\left[(X_p\mathcal{Y}^i)\mathcal{Z}^j +\mathcal{Y}^i (X_p\mathcal{Z}^j) \right]g\left(\frac{\partial}{\partial x^i}\Biggr|_p, \frac{\partial}{\partial x^j}\Biggr|_p\right) = g(\bar{\nabla}_{X_p}Y_p,Z_p) + g(Y_p,\bar{\nabla}_{X_p} Z_p).$$
Thanks to @Arctic Char for the useful comments.
Some kind of a converse of Leibniz-Newton | If we were lucky enough to have $f$ continuous, we could do this: Define
$$h(x) = G(x)-G(a) -\int_a^x f(t)\,dt.$$
Now $G'=g$ is given. And by the FTC, the derivative of the integral function is $f(x).$ Thus
$$h'(x) =g(x)-f(x)\ge 0.$$
Thus $h$ is nondecreasing on $[a,b].$ But $h(a)=h(b)=0.$ A nondecreasing function that is $0$ at the endpoints must be identically $0.$ Therefore $g\equiv f$ and $g$ is Riemann integrable. |
Reduction map on torsion points of an elliptic curve and their valuation | The points $[x:y:z] \in \ker(\pi)$ are those mapped to $[0:1:0]$, so they must satisfy $v(x),v(z)>0$ and $v(y)=0$.
Gluing Lemma when A and B aren't both closed or open. | The relevant point here is that $A\cap B$ can be empty and yet the functions may still need to agree where the boundary of one set is a limit point of the other. Let $A=[0,1]$ and $B=(1,2)$. Then $A\cup B=[0,2)$. Let $f(x)=x$ and $g(x)=-x$. Then $h(x)$ is discontinuous at $1$. But the criterion that they agree on the intersection is true, since the intersection is empty. Compare this with what happens with similar endpoints if both $A$ and $B$ are open (or both closed).
For an example with a nonempty intersection, consider $S^1$, the circle in $\mathbb{C}$. I will write $[\alpha,\beta]$ to mean the closed arc starting at $\alpha$ and going counterclockwise to $\beta$, and $(\alpha,\beta)$ to denote the open arc.
Let $$A=(0,i),\quad B=[i,e^{\frac{i\pi}{4}}]$$
So $A$ is an open quarter circle and $B$ is a closed seven-eighths of a circle such that they share an endpoint. Notice that the two arcs overlap. This however does not force continuity because we still have the same issue at $i$ that we had at $1$ in the previous problem.
Arithmetic progression in a set or its complement | Szemerédi's theorem applies here. It says that for all $0 < d < 1$ and every positive integer $k$, there exists $N$ such that every subset of $\{1, \dotsc, N \}$ with cardinality at least $dN$ contains an arithmetic progression of length $k$.
Take $d = 1/2$ and $k = 4$, and let $N = N(1/2, 4)$. Either $[1, N]\cap f^{-1}(1)$ has cardinality at least $N/2$, or $[1, N] \cap f^{-1}(0)$ does. So there is an arithmetic progression of length $4$ in one of them. Qed.
Edit
It appears that this easy corollary of Szemerédi's theorem is in fact a special case ($2$ colors) of the easier van der Waerden's theorem (1927).
On the approximation $\pi\approx 31^\frac{1}{3}$ | Such approximation is a consequence of a well-known identity:
$$ \frac{\pi^3}{32} = \sum_{n\geq 0}\frac{(-1)^{n}}{(2n+1)^3} \tag{1}$$
The RHS of $(1)$ is a fast-convergent series and $32\sum_{n=0}^5 \frac{(-1)^{n}}{(2n+1)^3}\approx 31$, hence $\color{red}{\pi^3\approx 31}$.
In a similar fashion $\frac{5\pi^5}{1536}=\sum_{n\geq 0}\frac{(-1)^n}{(2n+1)^5}$ and $1536\sum_{n=0}^{6}\frac{(-1)^n}{(2n+1)^5}\approx 1530+\frac{1}{10}$ lead to
$$ \pi^5 \approx \frac{15301}{50} \tag{2} $$
but I guess that $\pi\approx\left(\frac{15301}{50}\right)^{\frac{1}{5}}$ is a less fascinating approximation than $\pi\approx 31^{1/3}$. |
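A quick check of the truncated sums behind both approximations (a Python/mpmath sketch of my own):

```python
from mpmath import mp, mpf, pi

mp.dps = 20
s3 = 32 * sum(mpf((-1)**n) / (2 * n + 1)**3 for n in range(6))
s5 = 1536 * sum(mpf((-1)**n) / (2 * n + 1)**5 for n in range(7))
print(s3, pi**3)        # ~30.997 vs 31.0063
print(s5 / 5, pi**5)    # ~306.02 vs 306.0197
```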
Finding values for $a >0$ where $y=x$ intersects $y=a^x$ | The case $a \leq 1$ is easy, so let us suppose $a > 1$. Now look at the function $f: (0, \infty) \rightarrow \mathbb{R}$
$$
f(x) = \frac{a^x}{x}.
$$
Our goal is to find all the minima of $f$. We have
$$
f'(x) = \frac{a^x(x \ln(a) - 1)}{x^2},
$$
so $f$ has a local minimum at $x = \frac{1}{\ln(a)}$. One can check that this is in fact a global minimum.
Now note that $a^x$ and $x$ intersect for some $x > 0$ if and only if $f(x) \leq 1$ for some $x > 0$ if and only if $f\left(\frac{1}{\ln(a)}\right) \leq 1$. But we have
$$
f\left(\frac{1}{\ln(a)}\right) = \ln(a) \cdot a^{\frac{1}{\ln(a)}} = \ln(a) \cdot e,
$$
so we want $\ln(a) \leq \frac{1}{e}$, i.e. $a \leq e^{\frac{1}{e}}$. |
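Numerically, the threshold behaviour at $a = e^{1/e}$ is easy to see (a small sketch of my own):

```python
import math

a_star = math.e**(1 / math.e)
for a in [a_star - 0.01, a_star, a_star + 0.01]:
    x0 = 1 / math.log(a)     # the minimizer of a^x / x
    print(a, a**x0 / x0)     # < 1, = 1 (up to rounding), > 1
```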