Transition Probability | Is this not similar to the Wald–Wolfowitz runs test?
http://en.wikipedia.org/wiki/Wald%E2%80%93Wolfowitz_runs_test
You might check whether it's the same or a similar problem. |
Closedness of the subset [0,1] in $\Bbb{Q}$ | The density of $\mathbb Q$ plays no role here. If $A$ and $B$ are subsets of $\mathbb R$ and $A$ is a closed subset of $\mathbb R$, then $A\cap B$ is a closed subset of $B$. That's so because the complement of $A\cap B$ in $B$ is equal to $A^\complement\cap B$. Now, since $A$ is a closed subset of $\mathbb R$, $A^\complement$ is an open subset of $\mathbb R$ and therefore $A^\complement\cap B$ is an open subset of $B$. |
Does the "bi" in bilinear and biorthogonal mean different things? | Bi essentially just means two.
In bilinear it means linear in two arguments. In biorthogonal it means two families of vectors are orthogonal with respect to one another (but neither must be orthogonal with respect to itself). In binomial it means that there are two summands within the power, i.e. $(a+b)^n$. In binary it means that there are two possible digits, $0$ and $1$. |
Identity element as product of transposition. | Some reminders about permutations:
A permutation is by definition a bijective function from a set to itself. The set of all permutations of $\{1,2,3,\dots,n\}$ (along with the operation of function composition) is referred to as the symmetric group $S_n$.
For example with $n=4$ we could have as an example of a permutation: $f=\{(1,2),(2,1),(3,3),(4,4)\}$, in other words $f(1)=2, f(2)=1, f(3)=3, f(4)=4$.
To simplify notation, we can refer to this in two-line format: $f=\begin{pmatrix} 1&2&3&4\\f(1)&f(2)&f(3)&f(4)\end{pmatrix} = \begin{pmatrix}1&2&3&4\\2&1&3&4\end{pmatrix}$
One can also write this in cycle notation by "following the bouncing ball," keeping track of where one element gets mapped under repeated applications of the permutation until arriving back where it started. In the above example, $f=(2~1)$.
The identity permutation is the identity function $e(x)=x$. In the context of $S_4$, that would be $e=\{(1,1),(2,2),(3,3),(4,4)\}$ or equivalently as $\begin{pmatrix}1&2&3&4\\1&2&3&4\end{pmatrix}$ or equivalently as $(1)$ or $(~)$ depending on your preferred notation.
That the identity permutation is indeed the identity for the group $S_n$ is immediate from how it is defined since:
$(e\circ f)(x)=e(f(x))=f(x)$ for all $x$, so $e\circ f = f$. Also $(f\circ e)(x)=f(e(x))=f(x)$ for all $x$, so $f\circ e = f$.
That the identity permutation can be written as the product of two equal transpositions follows from the fact that transpositions are self-inverses.
Using the above example of $f=(2~1)$ again, we have:
$(f\circ f)(x)=f(f(x))=\begin{cases} f(f(1))&\text{if}~x=1\\ f(f(2))&\text{if}~x=2\\ f(f(x))&\text{for all other}~x\end{cases}=\begin{cases} f(2)&\text{if}~x=1\\ f(1)&\text{if}~x=2\\ f(x)&\text{for all other}~x\end{cases}=\begin{cases} 1&\text{if}~x=1\\ 2&\text{if}~x=2\\ x&\text{for all other}~x\end{cases}=x$ for all $x$.
Thus $f\circ f=(2~1)(2~1)=e$ |
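The composition argument above can be checked mechanically; here is a quick Python sketch (the dict representation of permutations is my own choice, not part of the answer):

```python
# Represent a permutation of {1,2,3,4} as a dict mapping each element to its image.
f = {1: 2, 2: 1, 3: 3, 4: 4}        # the transposition f = (2 1)
e = {x: x for x in range(1, 5)}      # the identity permutation

def compose(g, h):
    """Return the composition g∘h, i.e. the permutation x -> g(h(x))."""
    return {x: g[h[x]] for x in h}

# A transposition is its own inverse, so f∘f is the identity:
print(compose(f, f) == e)   # True
```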
Book suggestion for linear algebra "2" | I have to suggest the somewhat underrated Matrix Analysis by Horn and Johnson (the first edition was used for my ALA class at NCF.) They take a wonderfully concrete approach to most topics encountered in a second linear algebra course (Schur Decomposition, Spectral Theorem for Normal Operators, Jordan Canonical Form, Singular Value Decomposition) while adding a lot of other nice things into the mix. The fourth chapter on Hermitian Matrices talks about the Rayleigh Ritz Theorem and variational characterization of eigenvalues, which I imagine come up a lot in serious study of classical mechanics. Chapter five discusses finite dimensional inner product / normed / pre-normed spaces in terms of algebraic, analytic, and geometric properties. They include a discussion of completeness and the $l^p$ norms, which I guess could be seen as a preview of Hilbert Space Theory. There are also nice sections on the Gersgorin circle theorem and numerically solving linear systems.
I think it's a wonderful choice for any student, but especially a non-mathematician. The proofs are rigorous and sometimes tedious but always understandable. Typically, things are proved in an algorithmic fashion rather than through diagram chasing or algebraic artifice (nary a mention of finitely generated modules over a principal ideal domain.) My only complaint is that there are a fair number of results assumed regarding matrix algebra and determinants which wouldn't typically appear in a linear algebra course - references for these are typically not too hard to find though. |
Why does the limit $\lim_{x \to 0} \frac{1-\cos^3 x}{x\sin 2x}$ exist? | The separation of limits into the form
$$\lim_{x\to a}\bigl(f(x)-g(x)\bigr)=\lim_{x\to a}f(x)-\lim_{x\to a}g(x)$$
only holds when the limits exist. In this case, they don't, so we simply aren't allowed to do that, else we would always have
$$\lim_{x\to a}f(x)=\lim_{x\to a}f(x)-\frac1{x-a}+\frac1{x-a}=\underbrace{\lim_{x\to a}f(x)-\frac1{x-a}+\lim_{x\to a}\frac1{x-a}}_{\text{undefined}}$$ |
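To see this concretely for the limit in the title: numerically, $(1-\cos^3 x)/(x\sin 2x)$ approaches $3/4$, while each piece of a naive split blows up (a quick sketch; the sample points are arbitrary):

```python
import math

def g(x):
    """The whole expression (1 - cos(x)^3) / (x * sin(2x))."""
    return (1 - math.cos(x)**3) / (x * math.sin(2*x))

for x in [0.1, 0.01, 0.001]:
    print(g(x))                      # tends to 0.75 = 3/4

# Splitting into 1/(x*sin(2x)) - cos(x)^3/(x*sin(2x)) is not allowed:
x = 0.001
print(1 / (x * math.sin(2*x)))       # each separated piece diverges as x -> 0
```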
Trouble finding an estimator from a discrete RV | I assume you meant $(1 + \alpha)/6$ and $(1 - \alpha)/6$, where
$0 < \alpha < 1,$ for the
respective probabilities of faces 1 and 6. Also that we
are rolling the die $n$ times.
Intuitively, and immediately obvious from the likelihood, one should
ignore counts for outcomes other than 1 and 6 as irrelevant. Let $X_1$ and
$X_6$ be the respective counts of 1 and 6 in $n$ rolls.
Upon finding the derivative of the log-likelihood function, etc.,
it seems the estimator is $(X_1 - X_6)/(X_1 + X_6).$
I'll leave the details of that, and the discussion of unbiasedness and consistency, to you.
I tried simulations with 10 million rolls, for $\alpha = .1$ and $.3$,
and got three place accuracy. |
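A simulation along the lines described above can be sketched as follows (my own code, using the counts of faces 1 and 6; $10^5$ rolls here rather than 10 million to keep it fast):

```python
import random

def mle_estimate(n, alpha, seed=0):
    """Roll a die n times with P(1) = (1+alpha)/6, P(6) = (1-alpha)/6,
    and the remaining faces 1/6 each; return (X1 - X6)/(X1 + X6)."""
    rng = random.Random(seed)
    weights = [(1 + alpha) / 6, 1/6, 1/6, 1/6, 1/6, (1 - alpha) / 6]
    faces = rng.choices(range(1, 7), weights=weights, k=n)
    x1, x6 = faces.count(1), faces.count(6)
    return (x1 - x6) / (x1 + x6)

print(mle_estimate(10**5, 0.3))   # close to 0.3
```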
Does weak convergence in $L^2(0,T;V)$ imply weak convergence in $V$ a.e. $t$? | We have $L^2(0,T;V)' \cong L^2(0,T;V')$ isometrically, with dual pairing $\langle u \vert v' \rangle = \int_0^T \langle u(t) \vert v'(t) \rangle dt$, where the brackets in the integrand denote the dual pairing between $V$ and $V'$. (This works because $V$ is reflexive, see here.)
If $u_n \rightarrow 0$ weakly in $L^2(0,T;V)$, then $\langle u_n \vert v' \rangle = \int_0^T \langle u_n(t) \vert v'(t) \rangle dt \rightarrow 0$ for all $v' \in L^2(0,T;V') $. Now by the scalar theory you conclude that (after passing to a subsequence) $\langle u_{n}(t) \vert v'(t)\rangle \rightarrow 0$ for almost all $0<t<T$. For every such $t$ we have $\langle u_n(t) \vert v'_0 \rangle \rightarrow 0$ for any $v_0'\in V'$ (just extend it to a constant function $v'$ with
value $v'(t) = v'_0$), but that's the desired result: $u_n(t) \rightarrow 0$
weakly in $V$ for almost every $0<t<T$ (after passing to a subsequence if necessary).
Edit (31st Aug.'18) The objection raised in the comments is valid: For a given $v'$ we can extract an a.e. convergent subsequence, but in the end we need a single subsequence that works for all $v'$.
Here is a workaround, assuming that $V$ (or equivalently $V'$) is separable: Let $(v'_j)_j\subset V'$ be a countable dense subset and
view the $v_j'$ as constant functions in $L^2(0,T;V')$.
We inductively construct a sequence $(\varphi_m:\mathbb{N}\rightarrow \mathbb{N})_m$ of strictly increasing maps and a nested sequence $N_1\subset N_2 \subset \dots \subset (0,T)$ of null sets such that
$$
\forall m\in \mathbb{N}:\quad~ j\le m, ~ t \in (0,T)\backslash N_m\quad \Longrightarrow\quad \lim_{n\rightarrow\infty}\langle u_{\varphi_m(n)}(t) \vert v_j' \rangle = 0.$$
For $m=1$ it is clear how to obtain $\varphi_1$ and $N_1$. Supposing that maps and null sets up to the index $m$ are given, we construct $\varphi_{m+1}$ and $N_{m+1}$: By assumption $u_{\varphi_m(n)} \rightarrow 0$ in $L^2(0,T;V)$, hence $\langle u_{\varphi_m(n)} \vert v_{m+1}'\rangle \rightarrow 0 $ in $L^1(0,T)$ and consequently there is a null set $S$ such that for a subsequence we have $$\lim_{k\rightarrow \infty }\langle u_{\varphi_m(n_k)}(t)\vert v_{m+1}'(t)\rangle= 0$$ for all $t\in S^c$. Put $\varphi_{m+1}(k) := \varphi_{m}(n_k)$ and $N_{m+1}:=N_m \cup S$.
We proceed by taking the diagonal sequence: Define $\varphi(n):=\varphi_n(n)$ and $N = \bigcup_{k\ge 1 }N_k$. Then
$$
\forall j \in \mathbb{N}: \quad t\in (0,T) \backslash N \quad \Longrightarrow \quad \lim_{n\rightarrow\infty}\langle u_{\varphi(n)}(t) \vert v_j' \rangle = 0.
$$
Since $(v_j')_j$ was dense it follows that
$$ \forall v' \in V': t\in(0,T) \backslash N \quad \Longrightarrow \quad \lim_{n\rightarrow\infty}\langle u_{\varphi(n)}(t) \vert v' \rangle = 0$$
and that is what you want. |
Left adjoint of a strange functor | First of all, the result you're trying to prove isn't true as stated. It is only true if you restrict $\mathcal{C}$ to be the category of $k$-linear colimit-preserving functors. Second, you don't need a natural map $M\otimes_k G(R)\to M$; you need a natural map $M\otimes_k G(R)\to G(M)$.
To construct such a map, first consider the case $M=R$. In that case, note that $R$ acts on itself (as a module) by multiplication, and so since $G$ is a functor, $R$ acts on $G(R)$ as well. Furthermore, since $G$ is $k$-linear, this action is compatible with the $k$-vector space structure on $G(R)$ and the $k$-algebra structure on $R$ (that is, elements of $R$ which are in $k$ act by the scalar multiplication on the vector space $G(R)$). This gives us a $k$-bilinear map $R\times G(R)\to G(R)$, and hence a $k$-linear map $R\otimes_k G(R)\to G(R)$.
Now let $M$ be an arbitrary module. Note that $M$ can be canonically constructed from $R$ by colimits: the canonical presentation of $M$ as an $R$-module expresses $M$ as a cokernel of a map between coproducts of copies of $R$. Note that both $M\mapsto M\otimes_k G(R)$ and $M\mapsto G(M)$ preserve colimits, and so applying them to these particular colimits which construct $M$ from $R$, we see that our map $R\otimes_k G(R)\to G(R)$ induces a map $M\otimes_k G(R)\to G(M)$. This is functorial in $M$ because our expression of $M$ as a colimit is functorial in $M$ (if we choose the canonical presentation).
A bit more explicitly, there are functors $U,V:R\text{-mod}\to Set$ and an exact sequence $$\bigoplus_{V(M)} R\to \bigoplus_{U(M)} R\to M\to 0$$ which is natural in $M$ (here $U(M)$ is just the underlying set of $M$, and $V(M)$ is the underlying set of the kernel of the natural map $\bigoplus_{U(M)} R\to M$). Since $M\mapsto M\otimes_k G(R)$ and $M\mapsto G(M)$ preserve colimits, there are exact sequences $$\bigoplus_{V(M)} R\otimes_k G(R)\to \bigoplus_{U(M)} R\otimes_k G(R)\to M\otimes_k G(R)\to 0$$ and $$\bigoplus_{V(M)} G(R)\to \bigoplus_{U(M)} G(R)\to G(M)\to 0,$$ natural in $M$. Our map $R\otimes_k G(R)\to G(R)$ gives natural maps between the first two terms of these sequences which make the diagram
$$\require{AMScd}
\begin{CD}
\bigoplus_{V(M)} R\otimes_k G(R) @>>> \bigoplus_{U(M)} R\otimes_k G(R) @>>> M\otimes_k G(R) @>>> 0\\
@V{}VV @V{}VV \\
\bigoplus_{V(M)} G(R) @>>> \bigoplus_{U(M)} G(R) @>>> G(M) @>>> 0
\end{CD}$$
commute (the fact that the diagram commutes comes from the fact that the map $R\otimes_k G(R)\to G(R)$ is not just $k$-linear but $R$-linear, where $R$ acts on the first coordinate of $R\otimes_k G(R)$ and on $G(R)$ as described above). By exactness of the rows, there is then a unique map $M\otimes_k G(R)\to G(M)$ which makes the whole diagram commute. |
Sequence of convex non increasing sets convergence | Take care that the intersection of nested convex subsets of a Banach space might be empty. Example here. |
Number of ways a handshaking can take place | The graph will be a union of cycles.
The sizes of these cycles add up to $9$.
As every cycle has size at least $3$, there are only four possibilities:
One cycle of length $9$.
Two cycles of lengths $3,6$.
Two cycles of lengths $4,5$.
Three cycles of lengths $3,3,3$.
Each of these four cases leads to a simple combinatorial problem. I'll show you the first and the last cases; you can finish the rest.
Preliminary calculation: Given $k\geq 3$ people, there are $(k-1)!/2$ ways to form a $k$-cycle out of them. Indeed, there are $(k-1)!$ cyclic orders of them, and each cyclic order defines the same cycle graph as the reverse cyclic order, and there are no other coincidences.
First case: the graph is a $9$-cycle. There are $(9-1)!/2=20160$ ways to make them form a $9$-cycle.
Last case: the graph is a union of three $3$-cycles. There are $\frac{9!}{(3!)^3\cdot 3!}=280$ ways to partition the $9$ people into three groups of $3$, and this uniquely determines a solution, as three people form a unique $3$-cycle.
Have fun with the rest of the proof, no new idea is needed. |
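For completeness, the four cases above can be totaled mechanically (a sketch; the case breakdown and the $(k-1)!/2$ formula are exactly those in the answer):

```python
from math import comb, factorial

def k_cycles(k):
    """Number of distinct k-cycles on k labelled people: (k-1)!/2."""
    return factorial(k - 1) // 2

partitions_3_3_3 = comb(9, 3) * comb(6, 3) // factorial(3)   # = 280

total = (k_cycles(9)                                   # one 9-cycle: 20160
         + comb(9, 3) * k_cycles(3) * k_cycles(6)      # cycles of sizes 3 and 6
         + comb(9, 4) * k_cycles(4) * k_cycles(5)      # cycles of sizes 4 and 5
         + partitions_3_3_3 * k_cycles(3) ** 3)        # three 3-cycles: 280
print(total)   # 30016
```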
What is the value of $a$ so that this condition holds? | You should have gotten $\displaystyle\int_{0}^{1-a}[(1-a)x-x^2]\,dx = \left[\dfrac{1-a}{2}x^2-\dfrac{1}{3}x^3\right]_{0}^{1-a} = \dfrac{(1-a)^3}{6}$.
Then, you have $\dfrac{(1-a)^3}{6} = \dfrac{9}{2} \leadsto (1-a)^3 = 27$ which should be easy to solve. |
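A numeric check of the computation (a sketch using a simple midpoint rule; the closed form $(1-a)^3/6$ and the solution $a=-2$ follow from the answer):

```python
def integral(a, steps=100_000):
    """Midpoint-rule approximation of the integral of (1-a)x - x^2 from 0 to 1-a."""
    h = (1 - a) / steps
    return sum(((1 - a) * ((i + 0.5) * h) - ((i + 0.5) * h) ** 2)
               for i in range(steps)) * h

a = -2                                     # from (1 - a)^3 = 27
print(abs(integral(a) - (1 - a)**3 / 6))   # ~0: matches the closed form
print(abs(integral(a) - 9/2))              # ~0: the required value
```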
Map $\{x+iy \mid x^2+y^2<1 \text{ and } x^2 + (y-1)^2<2\}$ conformally to UHP | The region is the intersection of the unit disk (centered at the origin) and a disk of radius $\sqrt2$ centered at $(0,1)$. The points on the boundaries of both are $\pm1$, and we want to send one of these to zero, the other to infinity. If you take
$$f_1(z)=\frac{z-1}{i(z+1)}\,,$$ then the upper arc (unit circle) goes to the positive real axis, and the lower arc goes from $0$ to $\infty$ along the ray at angle $3\pi/4$ (through $e^{3\pi i/4}$). The interior point $0$ of the domain goes to $i$, so you see that the image is the wedge of angle $3\pi/4$ at the origin. Just take the $4/3$-power now to expand it to the UHP. |
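A numeric sanity check of where $f_1$ sends a few landmark points (the chosen points are mine; $\arg$ values confirm the wedge of angle $3\pi/4$):

```python
import cmath

f1 = lambda z: (z - 1) / (1j * (z + 1))

print(f1(0))                      # i: the interior point 0 maps to i
print(f1(1j))                     # 1: a point of the upper arc lands on the positive reals
low = 1j * (1 - 2**0.5)           # lowest point of the lower boundary arc
print(cmath.phase(f1(low)))       # ~2.356 = 3*pi/4: the image ray of the lower arc
print(cmath.phase(f1(0)**(4/3)))  # ~2.094: the 4/3-power pushes i into the opened wedge
```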
A formula in complete lattice: $\bigwedge\{ a\vee b : a\in A , b\in B\}\leq (\bigwedge_{a\in A}a)\vee(\bigwedge_{b\in B}b) $ | As stated, the property is false in general: for example, let $\Gamma$ be the diamond lattice $M_3$ with top element $1$, bottom element $0$, and middle elements $x,y,z$. Put $A=\{x,y\}$ and $B=\{z\}$. Then the LHS evaluates to $1$, but the RHS to $z$. EDIT: In response to the comments, we can slightly modify this example by inserting in the lattice an extra point $w$ such that $x,y<w<1$ (thus, $w=x\lor y$). The inequality still fails, while the additional hypotheses $\bigl(\bigvee A\bigr)\land\bigl(\bigvee B\bigr)=0$ and $\bigvee(A\cup B)=1$ hold. (Notice also that like $M_3$, the resulting lattice remains modular.)
On the other hand, the inequality is true if $\Gamma$ is distributive. First, if $A$ or $B$ is empty, then both sides evaluate to $1$, hence we may assume $A\ne\varnothing\ne B$. Fix $a_0\in A$, and $b_0\in B$. Let
$$c=\bigwedge_{\substack{a\in A\\b\in B}}(a\lor b)$$
denote the left-hand side. Using distributivity and the assumptions on $A$ and $B$, we have
$$\begin{align}
a_0\land c&=\bigwedge_{\substack{a\in A\\b\in B}}(a_0\land(a\lor b))\\
&=\bigwedge_{\substack{a\in A\\b\in B}}((a_0\land a)\lor(a_0\land b))\\
&=\bigwedge_{\substack{a\in A\\b\in B}}(a_0\land a)\\
&=\bigwedge_{a\in A}a,
\end{align}$$
and symmetrically,
$$b_0\land c=\bigwedge_{b\in B}b.$$
Thus, using distributivity again, we have
$$c=c\land(a_0\lor b_0)=(c\land a_0)\lor(c\land b_0)=\bigwedge_{a\in A}a\lor\bigwedge_{b\in B}b.$$
On the third hand, the property is weaker than distributivity. For example, let $\Gamma_1$ be an arbitrary complete lattice (in particular, non-distributive), and let $\Gamma$ be $\Gamma_1$ with a new least element $0$ attached. Then $\Gamma$ satisfies the property for the simple reason that the assumption $a\land b=0$ for all $a\in A$ and $b\in B$ holds only if one of the sets $A$, $B$ is $\varnothing$ or $\{0\}$, and the inequality is easy to verify in these cases. |
Limit of the derivative of a function | By the Mean Value Theorem we have the following equation $$f(x + 1) - f(x) = f'(\xi)$$ where $x < \xi < x + 1$. The LHS of the above equation tends to $1 - 1 = 0$ as $x \to \infty$ and the RHS tends to $s$ as $x \to \infty$. Hence $s$ must be $0$. |
Rate distortion function with infinite distortion | The definition of $D$ is the expected value of the distortion measure: $E[d(X, \hat{X})] = p(1|0) \cdot 1 = p(1|0) = D$. Now, because $X$ is Bernoulli with probability $1/2$, we have that $p(0) = 1/2 = p(0|0) + p(0|1) = p(0|0)$. Finally $p(1) = 1/2 = p(1|0) + p(1|1) = D + p(1|1)$, so $p(1|1) = 1/2 - D$.
This should take care of all the entries. |
Find $ \lim\limits_{{n \to \infty}} \frac1{2^n} \sum\limits_{k=1}^n \frac1{\sqrt{k}} \binom nk$ | Let $a_k = k^{-1/2}$. Notice that $(a_k)$ decreases to $0$. Then for each fixed $m \geq 1$ and for all $n \geq m$,
$$ \frac{1}{2^n} \sum_{k=1}^{n} \binom{n}{k} a_k \leq \frac{1}{2^n} \underbrace{\sum_{k=1}^{m} \binom{n}{k} (a_k - a_m)}_{= \mathcal{O}(n^m)} + \frac{1}{2^n} \underbrace{\sum_{k=1}^{n} \binom{n}{k} a_m}_{=(2^n - 1)a_m}. $$
Taking limsup as $n\to\infty$, it follows that
$$ \limsup_{n\to\infty} \frac{1}{2^n} \sum_{k=1}^{n} \binom{n}{k} a_k \leq a_m. $$
Since $a_m \to 0$ as $m\to\infty$, this proves
$$ \lim_{n\to\infty} \frac{1}{2^n} \sum_{k=1}^{n} \binom{n}{k} a_k = 0. $$
Addendum. I just saw that OP is a high-school student. Here is a little tweak of the argument above that does not use any fancy analysis machinery.
Let $m_n = \lfloor \log n \rfloor$. Then for $n \geq 3$, we always have $1 \leq m_n \leq n$. Then
\begin{align*}
0
\leq \frac{1}{2^n} \sum_{k=1}^{n} \binom{n}{k} \frac{1}{\sqrt{k}}
&= \frac{1}{2^n} \sum_{k=1}^{m_n} \binom{n}{k} \frac{1}{\sqrt{k}}
+ \frac{1}{2^n} \sum_{k=m_n + 1}^{n} \binom{n}{k} \frac{1}{\sqrt{k}} \\
&\leq \frac{1}{2^n} \sum_{k=1}^{m_n} n^k
+ \frac{1}{2^n} \sum_{k=m_n + 1}^{n} \binom{n}{k} \frac{1}{\sqrt{m_n}} \tag{1} \\
&\leq \frac{n^{1+m_n}}{2^n}
+ \frac{1}{\sqrt{m_n}}. \tag{2}
\end{align*}
For $\text{(1)}$ I utilized the fact that $\binom{n}{k} \leq n^k$ and $\frac{1}{\sqrt{k}}$ is decreasing. For $\text{(2)}$ I utilized the geometric sum formula and the identity $\sum_{k=0}^{n} \binom{n}{k} = 2^n$.
Now taking $n\to\infty$ and applying the squeezing lemma proves the claim. |
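Numerically, the normalized sum does go to zero, though slowly (a quick sketch):

```python
from math import comb, sqrt

def s(n):
    """Compute (1/2^n) * sum_{k=1}^n C(n,k)/sqrt(k)."""
    return sum(comb(n, k) / sqrt(k) for k in range(1, n + 1)) / 2**n

for n in [10, 100, 1000]:
    print(n, s(n))    # decreases toward 0, roughly like 1/sqrt(n/2)
```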
Morphism of $\mathcal{O}_{X}$-modules on an affine scheme. | All you need is compatibility with restriction. Remember that the sections of a sheaf (with values in a sufficiently nice category) are determined by their sections on the open subsets of a basis for the topology: if $U$ is an open set with a covering by $\{U_i\}_{i\in I}$, then $\mathcal{F}(U)$ is the equalizer of the diagram $$\prod_{i\in I} \mathcal{F}(U_i)\rightrightarrows\prod_{i,j\in I}\mathcal{F}(U_i\cap U_j).$$
If the $U_i$ are elements of a base, then $U_i\cap U_j$ are too, and thus if we have compatible restriction morphisms $\mathcal{F}(U_i)\to\mathcal{G}(U_i)$ for all elements of a basis for the topology, we get an induced morphism on the equalizers of the above diagram for any $U$, and these maps $\mathcal{F}(U)\to\mathcal{G}(U)$ are compatible by the assumption that the maps on the basis sets were. So since the distinguished affine opens $D(f)$ form a basis for the topology on an affine scheme, we get a morphism of sheaves assuming our maps are compatible. |
Example of a locally compact connected Abelian group with non-$\sigma$-finite measure | I should make my comments an answer.
An open subgroup $H$ of a topological group is closed: Its complement is the open set $\bigcup\limits_{g \notin H} gH$.
For every neighborhood $U$ of the identity element of a topological group the subgroup $H = \bigcup_{n \in \mathbb{Z}} U^{n}$ is open (hence closed by 1.): for every $h \in H$ the set $hU$ is contained in $H$ and a neighborhood of $h$.
If $G$ is connected and $U$ is a neighborhood of the identity the subgroup $H$ in 2 is open and closed, hence the entire group $G$.
This implies that a connected locally compact group $G$ is $\sigma$-compact (since we may take a compact neighborhood $U$ of the identity in the above), hence every Radon measure is $\sigma$-finite and thus there are no examples as you're asking for: if $S \subset G$ is any measurable subset then $\{0\} \times S$ is a measurable rectangle, hence measurable in $\mathbf{R} \times G$ and by Fubini-Tonelli it must have measure zero.
If connectedness is dropped then this Fubini-argument fails. The standard example is $\{0\} \times \mathbf{R}_d$ in $\mathbf{R} \times \mathbf{R}_d$ where $\mathbf{R}_d$ is the additive group of the reals equipped with the discrete topology and $\mathbf{R}$ carries the standard topology. The set $\{0\} \times \mathbf{R}_d$ is locally null, but not null (hence it has infinite measure). This is an exercise all mathematicians interested in analysis should do once in their lives, so I won't spell it out. In case of emergency consult (11.33) on p.127 in Hewitt-Ross, Abstract Harmonic Analysis, I. |
Can't figure out a step in the proof of $\cosh^{-1}x=\ln(x+\sqrt{x^2-1}), \forall x\ge1$? | Note that we have $$\cosh y=\frac{e^y+e^{-y}}{2}=x$$
So we have by multiplying $2e^{y}$ on each side, $$e^{2y}-2xe^{y}+1=0$$
Setting $e^{y}=t$ and applying the quadratic formula, we have that $$e^{y}=x \pm \sqrt{x^2-1}$$
Note that $\cosh^{-1} x$ is a function that goes from the real numbers which are greater than $1$ to the non-negative reals. So we have that $y \ge 0$. Since $e^{y}$ is an increasing function, $$e^{y}\ge e^{0}=1$$
However,$$x-\sqrt{x^2-1}=\frac{1}{x+\sqrt{x^2-1}} \le \frac{1}{1+\sqrt{1^2-1}}=1$$
So $e^{y} \neq x-\sqrt{x^2-1}$ given that $x \neq 1$. In the case when $x=1$, we have that $x-\sqrt{x^2-1}=x+\sqrt{x^2-1}$.
Thus, $e^{y}=x+\sqrt{x^2-1}$. |
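The identity is easy to spot-check against the library implementation (a sketch; `math.acosh` is Python's built-in inverse hyperbolic cosine):

```python
import math

# Check arccosh(x) = ln(x + sqrt(x^2 - 1)) for several x >= 1.
for x in [1.0, 1.5, 2.0, 10.0, 100.0]:
    lhs = math.acosh(x)
    rhs = math.log(x + math.sqrt(x * x - 1))
    print(x, lhs, abs(lhs - rhs) < 1e-12)   # True for each x
```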
Non linear Differential equation, sketching nullclines | Honestly, your phase portrait looks pretty close. For reference, I generated a phase portrait in Mathematica; the code used to produce it is given at the end of this answer.
The key qualitative difference between your portrait and the solution / my computer picture is that linearization only gives local information, i.e., finding the eigenvalues and eigenvectors of the Jacobian at $(0,0)$ only determines the dynamics on an infinitesimal neighborhood of the origin. While this can have global impacts, the eigenvector lines tell you less and less about the situation the further away you go from the equilibrium point at which you linearized.
We can most clearly see this where you drew flow lines asymptotically approaching the $\pm x$-axis, whereas the flow lines should actually cross the $x$-axis once sufficiently far from the origin. I strongly encourage you to only draw the eigenvector lines / (un)stable linearization manifolds in a small neighborhood of where you linearized (as I have done with my computer plot) in order to avoid treating them as asymptotics on a global scale.
As for how to capture the global dynamics of the phase lines beyond drawing nullclines, you can also look at asymptotic behavior of solutions and also see if the coordinate axes can be used in a nullcline-esque manner. For example,
If $x > y \gg 0$, then $\dot x, \dot y \gg 0$. Therefore "phase lines in the top right should keep going up and right." If we wanted to be rigorous, we could even determine some asymptotic growth functions.
If $y=0$ and $x \gg 0$, then $\dot y = x^3$ and $\dot x \approx 1$. Therefore, once we get sufficiently far out along the positive $x$-axis, flow lines should go upwards across the $x$-axis (becoming more and more vertical the further we go).
Similarly, if $y=0$ and $x \ll 0$, then $(\dot x, \dot y) \approx (e^{-x}, x^3)$, so flow lines should head down and left across the negative $x$-axis. Since $\frac{e^{-x}}{x^3} \to \infty$ as $x \to - \infty$, they should cross the negative $x$-axis with shallower and shallower slopes as we move further along the $x$-axis.
Edit
For the curious, the phase portrait was built in Mathematica using the following code:
sp = StreamPlot[{1 + y - Exp[-x], x^3 - y}, {x, -2, 2}, {y, -2, 2},
  StreamStyle -> Green];
nullclines = ContourPlot[{1 + y - Exp[-x] == 0, x^3 - y == 0},
  {x, -2, 2}, {y, -2, 2}, ContourStyle -> Magenta];
ev = ParametricPlot[{{t, 0}, {t, -2 t}}, {t, -.25, .25}, PlotStyle -> Blue];
Show[sp, nullclines, ev] |
Extinction of the population - Branching process with separate generations | The offspring distribution has generating function
$$P(s) = \frac18 + \frac38 s + \frac38 s^2 + \frac18 s^3 = \frac18(1+s)^3, $$
and mean
$$\mu = P'(1) = \frac32>1, $$
so if the initial generation had one individual, the probability of extinction is the unique solution to $P(\pi)=\pi$ with $0<\pi<1$. Here $(1+\pi)^3=8\pi$ factors as $(\pi-1)(\pi^2+4\pi-1)=0$, giving $\pi=\sqrt5-2$. Now, since the initial generation has two individuals, we can treat these as two separate processes, and extinction occurs exactly when both processes expire. Hence the extinction probability is
$$(\sqrt 5-2)^2=9-4\sqrt5\approx 0.0557. $$ |
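A quick check of the fixed-point computation: $(1+s)^3=8s$ factors as $(s-1)(s^2+4s-1)=0$, so the root in $(0,1)$ is $\sqrt5-2$, and with two ancestors the extinction probability is its square.

```python
P = lambda s: (1 + s)**3 / 8      # offspring probability generating function

pi = 5**0.5 - 2                   # fixed point of P in (0, 1)
print(abs(P(pi) - pi) < 1e-12)    # True: P(pi) = pi
print(pi**2)                      # ~0.0557: extinction probability with two ancestors
```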
Showing that $J(w) = \| S w \| - \frac{1}{2} w^T S w$ is maximized only when $\|w\| = 1$ | The optimality condition is
$$
\frac1{\|Sw\|}S^2w - Sw=0.
$$
If $S$ is injective (or equivalently $X^T$ is injective) then this implies
$$
\frac1{\|Sw\|}Sw - w=0,
$$
which is the claim. |
Combinations Among Sets | The last three are true for all sets $A$ and $B$.
Take the third, for instance: if $S\in\binom{A}2\cup\binom{B}2$, then certainly $S\in\binom{A\cup B}2$, so $\binom{A\cup B}2\supseteq\binom{A}2\cup\binom{B}2$. Equality is possible: if $A\subseteq B$, for instance, then $\binom{A}2\subseteq\binom{B}2$, and $A\cup B=B$, so $$\binom{A}2\cup\binom{B}2=\binom{B}2=\binom{A\cup B}2\;.$$ On the other hand, if there are elements $a\in A\setminus B$ and $b\in B\setminus A$, then $$\{a,b\}\in\binom{A\cup B}2\setminus\left(\binom{A}2\cup\binom{B}2\right)\;,$$ and $$\binom{A\cup B}2\supsetneqq\binom{A}2\cup\binom{B}2\;.$$
The fact that the second is true does not mean that the fourth isn’t true; in fact, the second implies the fourth: if $X$ and $Y$ are sets and we know that $X=Y$, then we certainly know that $X\subseteq Y$. This is analogous to the situation with real numbers: if $x=y$, then it’s certainly true that $x\le y$.
The only one that is not always true is the first, and even it is true for some choices of $A$ and $B$. In fact it’s true precisely when $A\subseteq B$ or $B\subseteq A$; in all other cases we can find elements $a\in A\setminus B$ and $b\in B\setminus A$, and we saw above that in that case the first assertion is false. |
Completion of Borel Algebra with point mass measure at 0. | The completion should be a $\sigma$-algebra (call it $\Sigma$) containing $\mathcal{P}(\mathbb{R} \setminus \{ 0 \}) \cup \mathcal{B}(\mathbb{R})$. But now, one can easily see that this $\sigma$-algebra must be the whole $\mathcal{P}(\mathbb{R})$. In fact, for all $S \in \mathcal{P}(\mathbb{R})$:
If $0 \notin S$, then $S \in \mathcal{P}(\mathbb{R} \setminus \{ 0 \} ) \subseteq \Sigma$
If $0 \in S$, then $S = (S \setminus \{ 0 \}) \cup \{ 0 \} \in \Sigma$ because $(S \setminus \{ 0 \}) \in \mathcal{P}(\mathbb{R} \setminus \{ 0 \} ) \subseteq \Sigma$, and $\{ 0 \} \in \mathcal{B}(\mathbb{R}) \subseteq \Sigma$
Either way, you get $\mathcal{P}(\mathbb{R}) \subseteq \Sigma$. |
abstract algebra question with a cyclic group | $G$ is generated by some $g\in G$. Since $d\mid n$, we can find $k$ such that $dk=n$. Now let $y=g^k$. Check that $y^j,0\leq j\leq d-1$ are pairwise distinct, so there are at least $d$ solutions to $x^d=e$. If $x\neq g^{kj}$ for all integers $0\leq j\leq d-1$, then $x=g^p$ for some $p$. We write $p=kq+r$, where $1\leq r\leq k-1$. Then $x^d=g^{pd}=g^{nq}g^{rd}=g^{rd}\neq e$, since $0<rd<kd=n$.
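A concrete check in the additive cyclic group $\mathbb{Z}_n$, where $x^d=e$ becomes $dx\equiv 0 \pmod n$ (a small sketch; $n=12$ is an arbitrary choice):

```python
def solutions(n, d):
    """Solutions of d*x ≡ 0 (mod n) in the cyclic group Z_n."""
    return [x for x in range(n) if (d * x) % n == 0]

n = 12
for d in [1, 2, 3, 4, 6, 12]:       # the divisors of 12
    print(d, len(solutions(n, d)))  # exactly d solutions for each divisor d
```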
Show this function is Riemann-Stieltjes Integrable (RS-I) | $\alpha$ increasing is a very usual case, but being of bounded variation (difference between two monotone functions) is enough. Writing $\alpha$ as difference between two monotone functions is very easy in this case.
As $\alpha(x) = x^2$ is $C^1$, another property is applicable:
$$\int_a^b fd\alpha = \int_a^b f\alpha'$$
where the RHS is simply a Riemann integral. In this case, the RHS exists because $f$ has a finite number of discontinuities.
See Riemann–Stieltjes integral |
Why are separation results important in analysis? | You say in your presentation that you are an autodidact. For this reason I give here a HINT (not a full answer) that may be of interest to you. Professor P.J. Laurent of the University of Grenoble gave in his very good lessons these four consequences of the Hahn-Banach theorem.
Let $E$ be a Banach space,
► If $x_0\in E$ then there is a bounded linear functional f defined on $E$ such that
(1º) $||f||=1$;
(2º) $f(x_0)=||x_0||$
► Let $G$ be a linear variety of $E$ and $y_0\notin G$ with $d=\inf_{x\in G}||y_0 -x||\gt 0$. There is a bounded linear functional $f$ on $E$ such that
(1º) $f(x)=0$ for $x\in G$;
(2º) $f(y_0)=1$;
(3º) $||f||=\frac 1d$
► Same hypotheses as in the previous item.
There is a bounded linear functional $u$, orthogonal to $G$, such that $||u||=1$ and $d=u(y_0)$. Moreover, $d=\sup u(y_0)$, where the supremum is taken over the elements $u$ orthogonal to $G$ with $||u||=1$.
► Let $G$ be a linear variety of $E$ with generators {$x_1,x_2,x_3,...$} and $y_0\notin G$. The element $y_0$ can be approximated arbitrarily well by elements of $G$ (i.e., $y_0$ is adherent to $G$) if and only if every bounded linear functional $f$ on $E$ that satisfies $f(x_i)=0$ for all indices $i$ also satisfies $f(y_0)=0$.
You can see here properties of separation though maybe it is not what you are looking for. If you are interested in the proof write me to [email protected] |
Sum of the series of real numbers. | $$\begin{eqnarray*}\sum_{n\geq 2}\frac{n}{(n+1)!} &=& \sum_{n\geq 2}\frac{n+1}{(n+1)!}-\sum_{n\geq 2}\frac{1}{(n+1)!}\\&=&\sum_{n\geq 2}\frac{1}{n!}-\sum_{n\geq 3}\frac{1}{n!}=\color{red}{\frac{1}{2}}.\end{eqnarray*}$$ |
What is the MLE $\theta^*$ of $\theta$? | The likelihood you wrote is wrong: you did not consider that the support of $X$ depends on $\theta$.
The correct likelihood is
$$L(\theta)\propto \frac{1}{\theta^{2n}}\cdot\mathbb{1}_{[x_{(n)};\infty)}(\theta)$$
Now, looking at this likelihood, it is evident that it is strictly decreasing in $\theta$, thus
$$\hat{\theta}_{ML}=X_{(n)}$$
where $X_{(n)}$ is the max of the observations
Distribution of Max
Set $Z=\max(X_i)$
$$F_Z(z)=P(Z\leq z)=P(X_1\leq z,\dots X_n\leq z)=P(X_1\leq z)\dots P(X_n \leq z)=$$
$$=[P(X_1\leq z)]^ n$$
To find the distribution of $\hat{\theta}_{ML}=X_{(n)}$, recall that the CDF of the max is the product of the individual CDFs. Each $X_i$ has CDF
$$F_X(x)=\int_{-\infty}^x f(t)\, dt=\frac{1}{\theta^2}\int_0^x 2t\, dt=\frac{x^2}{\theta^2}$$
Thus, setting $Z=\hat{\theta}$ you get
$$F_Z(z)=\frac{z^{2n}}{\theta^{2n}}$$
Differentiating, you get the density, and then you calculate the expectation with the usual definition:
$$f_Z(z)=\frac{d}{dz}F_Z=\frac{2n z^{2n-1}}{\theta^{2n}}$$
$$E(Z)=\int_0^\theta z f_Z(z)dz$$ |
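To finish the expectation numerically: by inversion, $X=\theta\sqrt U$ with $U\sim\mathrm{Unif}(0,1)$ samples from $F_X(x)=x^2/\theta^2$, and the integral above evaluates to $E(Z)=\frac{2n}{2n+1}\theta$ (my evaluation, worth double-checking). A simulation sketch:

```python
import random

rng = random.Random(0)
theta, n, reps = 2.0, 5, 20_000

# Each sample of Z is the max of n draws X = theta*sqrt(U) (inverse-CDF sampling).
sim_mean = sum(max(theta * rng.random() ** 0.5 for _ in range(n))
               for _ in range(reps)) / reps

print(sim_mean)                       # close to the closed form
print(2 * n / (2 * n + 1) * theta)    # E(Z) = 2n*theta/(2n+1)
```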
Stuck with this limit of a sum: $\lim _{n \to \infty} \left(\frac{a^{n}-b^{n}}{a^{n}+b^{n}}\right)$. | If $b > a > 0$, divide both the numerator and denominator by $b^n$ to get:
$$\lim_{n \to \infty} \frac{\frac{a^n}{b^n}-1}{\frac{a^n}{b^n}+1}=\frac{-1}{1}=-1$$ |
Prove that the rank of a block diagonal matrix equals the sum of the ranks of the matrices that are the main diagonal blocks. | By the matrix equivalent result, we can find two invertible matrices $P$ and $Q$ such that:
$$J_r=Q^{-1}AP,$$
where $J_r=\mathrm{diag}(1,\ldots,1,0,\ldots,0)$ with $r=\mathrm{rank}(A)$ ($r$ is the number of $1$).
Similarly for $B$, there's $P'$ and $Q'$ such that:
$$J_{r'}=Q'^{-1}BP',$$
where $r'=\mathrm{rank}(B)$.
Now with the invertible block matrices $S=\mathrm{diag}(Q,Q')$ and $T=\mathrm{diag}(P,P')$ we have:
$$J=S^{-1}XT,$$
where $J=\mathrm{diag}(J_r,J_{r'})$ and it's clear that:
$$\mathrm{rank}(X)=\mathrm{rank}(J)=\mathrm{rank}(J_r)+\mathrm{rank}(J_{r'})=\mathrm{rank}(A)+\mathrm{rank}(B).$$ |
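A quick random check of the statement with NumPy (the block sizes are arbitrary; `np.block` assembles $X=\mathrm{diag}(A,B)$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # rank 3 (a.s.)
B = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 6))   # rank 2 (a.s.)
X = np.block([[A, np.zeros((4, 6))],
              [np.zeros((3, 5)), B]])

rA, rB, rX = (np.linalg.matrix_rank(M) for M in (A, B, X))
print(rA, rB, rX)   # rank(X) = rank(A) + rank(B)
```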
What is a Borel set? | There are several different ways to define Borel sets, you named one of them.
What the definitions don’t tell you is what Borel sets mean, and why we care about them.
When measure theory was developed, it was intended as a set theoretic formalization of measuring “things”.
Before that, geometry gave those answers (such as splitting some 2 dimensional form into many triangles and then summing up their area).
But with the advent of analytic geometry, geometry got reinterpreted as the study of subsets of $\mathbb{R}^n$. So what people wanted was a nice measure theory that doesn’t contradict prior intuition about how measuring areas works (e.g. moving something around doesn’t change its size) and that should be able to measure any subset of $\mathbb{R}^n$.
But shortly after, this was proven impossible, so people settled for the next best thing, which is not measuring all subsets of $\mathbb{R}^n$.
So you can view the Borel sets (or their completion, the Lebesgue sets), as subsets of $\mathbb{R}^n$ that you can measure, without your math breaking (i.e. without getting contradictions). See also: Vitali sets
That also means there are events that you cannot assign probability to!
(Albeit these are very pathological sets)
I recommend looking into Terence Tao’s Introduction to Measure Theory for a more detailed introduction. |
Parametrization of $x^2+y^2-ay=0$ | Hint:
$$x^2 + y^2 - ay = x^2 + (y-\frac a2)^2 - \frac{a^2}{4}$$
Meaning that $\Gamma$ is the circle defined by the equation
$$x^2 + (y-\frac a2)^2 = \left(\frac{a}{2}\right)^2$$ |
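The completed square suggests the usual parametrization $x=\frac a2\cos t$, $y=\frac a2+\frac a2\sin t$; a SymPy verification (parameter name mine):

```python
import sympy as sp

a, t = sp.symbols('a t', real=True)

# Candidate parametrization of the circle of radius a/2 centred at (0, a/2).
x = (a / 2) * sp.cos(t)
y = a / 2 + (a / 2) * sp.sin(t)

# It satisfies the original equation identically.
assert sp.simplify(x**2 + y**2 - a*y) == 0
```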
What does consistency of propositional logic means? | "Here is my question; what axiomatic system did we use to prove the consistency of propositional logic, and how do we know that that axiomatic system is consistent?"
A formal axiomatic system probably wasn't used; more likely, informal reasoning was. I don't know of any guarantee that such informal reasoning is consistent.
"How can we be sure that propositional logic is actually consistent?"
I don't think that we can be absolutely sure of this. However, no mistakes are known in any of the consistency meta-proofs of propositional logic. Also, were propositional logic not consistent, it would be inconsistent: it would have both $\vdash$A and $\vdash$$\lnot$A for some well-formed formula A. But propositional logic is sound, so A would be true; by the same soundness, $\lnot$A would be true as well, so A would be false. Thus A would be both true and false, and propositional logic would be unsound. But propositional logic is sound, so it is not inconsistent, and thus propositional logic is consistent.
Question about cardinals in ZF | Lets say you have a bijection $\mathfrak{b}\cup \mathfrak{a}\rightarrow \mathfrak{a}\times\{0,1\}$ where we assume that $\mathfrak{b}$ and $\mathfrak{a}$ are disjoint.
Consider the sequence
$$b\rightarrow (a_1,0)$$
$$a_1\rightarrow (a_2,0)$$
$$a_2\rightarrow (a_3,0)$$
$$\vdots$$
$$a_{n-1}\rightarrow (a_n,1)$$
Let us define $a_1, \ldots a_{n-1}$ to be the sequence associated with $b$ and $a_n$, the terminal element of $b$.
Note that the sequences associated to distinct $b$'s are disjoint, and the terminal elements are also unique, which can be seen just by backtracking. However, the terminal element of one $b$ can belong to the associated sequence of another.
Further there may be some $b$ with an infinite ($\omega$ type) associated sequence and no terminal element.
Let $b$ be such an element with infinite associated sequence $a_1, a_2 \ldots$
Assume that each $a_i$ is the terminal element of $b_i$ (the case where some $a_i$ is not a terminal element I leave to you). Now define $f(b)=a_1$ and $f(b_i)=a_{i+1}$. So for each $b$ with an infinite sequence we make this definition. Since the associated sequences are disjoint, this is well defined.
There may be some elements of $\mathfrak{b}$ left over we then send them to their terminal elements. One can see that this is a well defined injection.
To summarize, an element $b$ is mapped to its terminal element, except if this terminal element belongs to the infinite associated sequence of another element, in which case it is mapped to the next element in the sequence. |
How can $\pi$ be irrational if it can be calculated using an infinite series of rational numbers? | A sequence of rational numbers can converge to an irrational. Yes, all the partial sums are rational but the limit need not be. There is nothing special about $\pi$ here, it is true of all irrationals. In fact, the most common construction of the real numbers from the rationals is through Dedekind cuts, which separate the rationals into the sets of those above and below the irrational. You can then find a sequence of rationals in either set that converges to the irrational. |
Comparing the maximum error between Lagrange, Hermite, and Spline Interpolation Methods | The number of data points for spline interpolation is $n+1$. If you want to interpolate the function $f$ on the grid $x_0,x_1,\ldots,x_n$, the only data you use is the value of $f$ on the grid points, that is, you require the spline function $s$ to satisfy
$$
s(x_i) = f(x_i), \qquad \text{for } i=0,1,\ldots,n.
$$
Additionally, the spline function is required to be $C^2$ (twice continuously differentiable). That is, if $s_i$ denotes the restriction of $s$ to the interval $[x_i,x_{i+1}]$, then
$$
s_{i-1}(x_i) = s_i(x_i) \quad\text{and}\quad
s'_{i-1}(x_i) = s'_i(x_i) \quad\text{and}\quad
s''_{i-1}(x_i) = s''_i(x_i) \qquad\text{for } i=1,2,\ldots,n-1.
$$
This gives $3(n-1)$ additional conditions, which together with the $n+1$ interpolation conditions yield the total of $4n-2$ quoted by the original poster.
However, the only data used in the $4n-2$ conditions is $f(x_i)$ for $i=0,1,\ldots,n$ and this is $n+1$ pieces of data. The continuity conditions do not use any data from the function $f$. |
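As a sketch, the counting above can be turned into code: one cubic per interval, the interpolation and smoothness conditions, plus two "natural" boundary conditions $s''(x_0)=s''(x_n)=0$ to close a square $4n\times 4n$ system (all names mine; value-continuity is enforced here by making each piece interpolate at both of its endpoints, an equivalent bookkeeping of the same conditions):

```python
import numpy as np

def natural_cubic_spline_coeffs(x, y):
    # One cubic a + b*t + c*t^2 + d*t^3 per interval, unknowns stacked as
    # [a_0, b_0, c_0, d_0, a_1, ...]; 4n unknowns, 4n equations.
    n = len(x) - 1
    A = np.zeros((4 * n, 4 * n))
    rhs = np.zeros(4 * n)
    val = lambda t: np.array([1.0, t, t * t, t ** 3])
    der = lambda t: np.array([0.0, 1.0, 2 * t, 3 * t * t])
    der2 = lambda t: np.array([0.0, 0.0, 2.0, 6 * t])
    row = 0
    for i in range(n):                        # each piece matches f at both endpoints
        A[row, 4*i:4*i+4] = val(x[i]);   rhs[row] = y[i];   row += 1
        A[row, 4*i:4*i+4] = val(x[i+1]); rhs[row] = y[i+1]; row += 1
    for i in range(1, n):                     # C1 and C2 continuity at interior knots
        A[row, 4*(i-1):4*i] = der(x[i]);  A[row, 4*i:4*i+4] = -der(x[i]);  row += 1
        A[row, 4*(i-1):4*i] = der2(x[i]); A[row, 4*i:4*i+4] = -der2(x[i]); row += 1
    A[row, 0:4] = der2(x[0])                  # natural boundary: s''(x_0) = 0
    A[row + 1, 4*(n-1):4*n] = der2(x[-1])     # natural boundary: s''(x_n) = 0
    return np.linalg.solve(A, rhs).reshape(n, 4)
```

Note that the right-hand side only ever uses the $n+1$ values $y_i=f(x_i)$, exactly as stated above.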
Nilpotent Transformations and Invariant Subspaces | $0$ is the only eigenvalue of $S $, which form has the char. polynomial of $S $ ?
Now do not forget Cayley -Hamilton. |
2 Definitions of Holomorphic functions on Riemann surfaces | First notice that in your first definition of Riemann surface you shouldn't start with the hypothesis that $X$ is a 1-dimensional complex manifold, since this means that you are already considering, implicitely, a fixed maximal analytical atlas on it. In other words, Riemann surface and (connected) 1-dimensional complex manifold are synonyms. Just assume that $X$ is a Hausdorff second-countable topological space (or, if you prefer, a topological or smooth 1-manifold).
So let $X$ be a connected Hausdorff second-countable topological space.
If we have an analytical atlas, then, as you wrote, we also have a notion of holomorphic functions $\varphi:U\subset X\rightarrow\mathbb{C}$. Defining
\begin{equation}
O_X(U):=\{\varphi:U\rightarrow\mathbb{C}\ \ holomorphic\}
\end{equation}
for each $U\subset X$ open, this gives a geometric space $(X,O_X)$, just because constant functions are holomorphic and being holomorphic is a local property.
Vice versa, assume that you have a geometric space $(X,O_X)$. We will construct from it an analytic atlas on $X$.
First you need to notice that you have a model geometric structure $O_\mathbb{C}$ on $\mathbb{C}$ made by holomorphic functions, and that if $V_1,V_2\subset\mathbb{C}$ are open sets, then a map $f:V_1\rightarrow V_2$ is holomorphic if and only if it induces a morphism $f:(V_1,O_\mathbb{C}|_{V_1})\rightarrow(V_2,O_\mathbb{C}|_{V_2})$ of geometric structures.
For each $x\in X$, take an open neighbourhood $U_x\subset X$ of $x$ and an isomorphism $\phi^{(x)}:(U_x,O_X|_{U_x})\rightarrow(\phi^{(x)}(U_x),O_\mathbb{C}|_{\phi^{(x)}(U_x)})$, with $\phi^{(x)}(U_x)\subset\mathbb{C}$ open. In particular, $\phi^{(x)}:U_x\rightarrow \phi^{(x)}(U_x)$ is a chart on $X$.
When two open sets $U_x$ and $U_{x'}$ overlap, you get an isomorphism
\begin{equation}
(\phi^{(x)}(U_x\cap U_{x'}),O_\mathbb{C}|_{\phi^{(x)}(U_x\cap U_{x'})})\rightarrow(U_x\cap U_{x'},O_X|_{U_x\cap U_{x'}})\rightarrow(\phi^{(x')}(U_x\cap U_{x'}),O_\mathbb{C}|_{\phi^{(x')}(U_x\cap U_{x'})})\,,
\end{equation}
which means that the transition map $\phi^{(x)}(U_x\cap U_{x'})\rightarrow \phi^{(x')}(U_x\cap U_{x'})$ between the two charts is biholomorphic. Thus $\{(U_x,\phi^{(x)})\}$ is an analytic atlas on $X$.
You can easily see that these two constructions are inverse to each other. Finally, at this point it is clear that a map $f:X_1\rightarrow X_2$ between two Riemann surfaces is holomorphic according to the first definition (i.e. it is represented by holomorphic maps in analytic charts) if and only if it is holomorphic according to the second definition (i.e. it pulls back holomorphic functions on $X_2$ to holomorphic functions on $X_1$).
question from Sirovich's "Introduction to applied mathematics" | Let's just go by the given equation:
$$\frac{d^2x}{dt^2}=\omega^2 x$$
If $\omega \in \mathbb{R}$ then this is not exactly a harmonic oscillator equation. It has the general solution in terms of hyperbolic functions, or real exponentials:
$$x(t)=A e^{\omega t}+Be^{-\omega t}$$
Writing down the suggested function explicitly:
$$w=A e^{\omega t}+Be^{-\omega t}+i \omega^2 \left(A e^{\omega t}-Be^{-\omega t} \right)=A(1+i \omega^2)e^{\omega t}+B(1-i \omega^2)e^{-\omega t}$$
Which, I believe is not a solution to any first order ODE.
Let's try a harmonic oscillator equation instead:
$$\frac{d^2x}{dt^2}=-\omega^2 x$$
$$x(t)=A e^{i\omega t}+Be^{-i\omega t}$$
$$w=A e^{i\omega t}+Be^{-i\omega t}- \omega^2 \left(A e^{i\omega t}-Be^{-i\omega t} \right)=A(1- \omega^2)e^{i\omega t}+B(1+ \omega^2)e^{-i\omega t}$$
That still is not a solution to a first order ODE.
Now let's try another function instead:
$$u=x+\frac{i}{\omega} \frac{dx}{dt}$$
Now for the HO equation we have:
$$u=A e^{i\omega t}+Be^{-i\omega t}- \left(A e^{i\omega t}-Be^{-i\omega t} \right)=2Be^{-i\omega t}$$
Which is now a solution to a first order equation:
$$\frac{du}{dt}=-i \omega u$$
My only conclusion: there must be a typo in the book (or several). |
Evaluate the integral p.v. $\int_{-\infty}^{\infty}\frac{e^{-2ix}}{x^2+1}\mathrm dx$ using residues | The poles should only be at x = i, x = - i. Recalling that $e^0 = 1$
Form the usual semicircle, here closed in the lower half-plane so that $e^{-2iz}=e^{-2ix}e^{2y}$ decays, and then isolate the enclosed pole in the usual way. Then use the residue theorem.
Algorithms - Finding Clique of size n in a Graph | Let's call your formula CNF.
Let's call the clauses $C_1, C_2, C_3, C_4$
Let $v_i$ be a literal in $C_i$
By your conversion, every $(v_i, v_j)$ for $i \neq j$ and $v_i \neq \bar v_j$ is an edge in G.
Now we have to prove: CNF is satisfiable $\iff$ G(V, E) has a clique of size 4
Suppose CNF is satisfiable
$\quad$This means every clause has at least one literal that is true.
$\quad$From each clause $C_i$, pick any one $v_i$ that is true.
$\quad$Now we have a set of 4 vertices $\{v_1, v_2, v_3, v_4 \}$ in $G(V, E)$
$\quad$Since all the $v_i$ are true, no two of them are complements of each other.
$\quad$So, there must be an edge between every pair of vertices in the set.
$\quad$So, the set of vertices forms a clique of size 4 in G(V, E).
Suppose $G(V, E)$ has a clique of size 4
$\quad$There are 4 vertices in the clique $v_1, v_2, v_3, v_4$
$\quad$Since each pair of vertices is connected by an edge and our conversion does not connect 2 literals
$\quad$from the same clause, each vertex must be in a different clause.
$\quad$So, each clause has a literal/vertex in the Clique.
$\quad$So, assigning true to each of the vertices would satisfy CNF. |
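The two directions above can be sketched in code for a small example formula (the formula and all names are mine, not from the question):

```python
from itertools import product, combinations

# A literal is a signed int (-v means NOT v); 4 clauses, as in the question.
cnf = [[1, 2], [-1, 3], [-2, -3], [1, 3]]

# Vertex = (clause index, literal); edge iff different clauses and not complementary.
def edge(u, v):
    return u[0] != v[0] and u[1] != -v[1]

# A 4-clique must take one vertex per clause (there are no intra-clause edges),
# so it suffices to search over one literal from each clause.
cliques = [c for c in product(*([(i, l) for l in cl] for i, cl in enumerate(cnf)))
           if all(edge(u, v) for u, v in combinations(c, 2))]
assert cliques, "a satisfiable formula yields a 4-clique"

# Turn the first clique into a satisfying assignment: set each chosen literal true.
assign = {abs(l): l > 0 for _, l in cliques[0]}
def lit_true(l):
    return assign.get(abs(l), False) == (l > 0)
assert all(any(lit_true(l) for l in clause) for clause in cnf)
```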
The meaning of "let" in theorem statements | like this
let there exist some positive number more then $5$,for instance $6$,then there exist such number which makes this $6$ nullified ,or $-6$.it is for some |
show $\sin(x)$ and $\tan(x)$ are increasing | Hint: $ \sin x - \sin y = 2\cos(\frac{x+y}{2})\sin(\frac{x-y}{2}) > 0 $ when $x-y>0$. |
Friedberg Linear Algebra problems | For the first exercise, this is simply the fact that a linear transformation is completely determined by its effect on a basis. One proves that there is exactly one by using linearity.
The second problem is a direct application of that fact. It basically says that dual space is isomorphic to the space of linear functions on $V$ into $\mathcal F$. These are also known as linear functionals. Just let $W=\mathcal F$, the one dimensional vector space over $\mathcal F$, in the first part.
It might have been better if Friedberg had written $\mathcal L(\color{blue}{V},\mathcal F)$, in the second question, I think. Since a basis is not a vector space.
Using sigma notation to find the function | This is a Riemann sum for
$$ f(x) = e^{-x}\sqrt{1+\log{x}}: $$
such a sum looks like
$$ \sum_{k=1}^n h f(a+kh), $$
where $h$ is the width of the rectangles, and $f$ is sampled at the right endpoint of each rectangle, $a+kh$. This sum uses $n$ rectangles of width $h=5/n$, and the sample points are at $3+kh=3+5k/n$.
(Of course, actually doing this integral looks pretty grim, but thankfully you didn't ask that!) |
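Even though the integral is grim symbolically, the Riemann sum itself is easy to evaluate numerically (function names mine); the sums settle down as $n$ grows, as they should:

```python
import math

# Right-endpoint Riemann sum for f(x) = exp(-x) * sqrt(1 + log x) on [3, 8].
def f(x):
    return math.exp(-x) * math.sqrt(1 + math.log(x))

def riemann(n, a=3.0, width=5.0):
    h = width / n
    return sum(h * f(a + k * h) for k in range(1, n + 1))

print(riemann(10), riemann(1000), riemann(8000))
```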
Cycles "converging" to an infinite cycle? | It's not really a good argument, because permutation groups have no well-behaved notion of convergence in general.
Hmm, scratch that. On further thought it seems perfectly well-behaved to say that $\sigma_1, \sigma_2, \sigma_3,\ldots$ converges iff for every $x$ there is an $N$ such that $\sigma_N(x)=\sigma_{N+1}(x)=\sigma_{N+2}(x)=\cdots$.
With this definition limits distribute over composition, but I don't think this is a standard notion, so you may have to include definitions and lemmas in your argument.
It's more of a problem that you seem to assume that limits commute with taking the order of a permutation. Consider the sequence $(\sigma_n)_n$ where $\sigma_n$ is a cyclic permutation of the elements $n$ through $2n-1$. Then $\sigma_n$ has order $n$, but the limit of the sequence is the identity permutation, which has order $1$! So the limit of a sequence of permutations of high order doesn't necessarily have high order itself.
It's better to look at $\tau\sigma$ as a finished whole and consider what it does to individual elements: Every even number $n$ is mapped to $n+2$, and every odd number $n$ is mapped to $n-2$, except that $1$ maps to $0$.
Since no number of iterations of $\tau\sigma$ can make, say, $0$ return to its original position, the permutation has infinite order.
How exactly should I approach this problem? | If $85n_1-5n_2-14n_3=0$, then $5(17n_1-n_2)=14n_3$ so $n_3$ must be divisible by $5$. So, $n_3$ is either $0$ or $5$. If we set $n_3=0$, we would have $85n_1-5n_2=0$. However, since the lowest value of $n_1$ is $1$ and the highest value of $n_2$ is $9$, we must have $85n_1-5n_2\geq40$ so $n_3$ can't be $0$. Hence, $n_3=5$ and the problem becomes $85n_1-5n_2=70$, which leads us to the solution. |
Calculate Length of Time It Takes a car to lose speed from different starting speeds | Force exerted by a fluid on an object is directly proportional to its instantaneous velocity.
$$F\,{\propto}\,v$$
$$F=-kv$$
$$ma=-kv$$
$${\int}m{dv\over dt}={\int}-k{dx\over dt}$$
$$mv=-kx$$
$$m{dx\over dt}=-kx$$
$${\int}m{dx\over x}={\int}-kdt$$
$${\log}x={-kt\over m}$$
$$x=e^{-kt\over m}$$
$$v={dx\over dt}={-k\over m}e^{-kt\over m}$$
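As a check (SymPy, symbols mine, with an arbitrary initial amplitude $x_0$), $x(t)=x_0e^{-kt/m}$ indeed satisfies $m\,\ddot x=-k\,\dot x$, i.e. $ma=-kv$:

```python
import sympy as sp

t, m, k, x0 = sp.symbols('t m k x0', positive=True)
x = x0 * sp.exp(-k * t / m)   # candidate solution

# m * x'' + k * x' should vanish identically (this is m*a = -k*v).
residual = m * sp.diff(x, t, 2) + k * sp.diff(x, t)
assert sp.simplify(residual) == 0
```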
Probability on roll of dice | We want that among the $n$ die rolled, none at most one of them are four; two and six don't appear simultaneously; and likewise that three and five don't appear simultaneously.
[[Edit: André Nicolas points out a modification needs to be made.]]
Let $X_i$ be the event that number $i$ never shows; $Y_i$ that it shows once. To save typespace, let $X_i X_j$ be the conjunction of events. (It is crowded even with that.) What follows is as easy as PIE. (That is the Principle of Inclusion and Exclusion.)
$$\begin{align}\mathsf P(({X_4}\cup Y_4)({X_2}\cup{X_6})({X_3}\cup{X_5})) ~=~& {\mathsf P({X_2}{X_3}{X_4})+\mathsf P({X_3}{X_4}{X_6})+\mathsf P({X_2}{X_4}{X_5})+\mathsf P({X_4}{X_5}{X_6})\\-\mathsf P({X_2}{X_3}{X_4}{X_6})-\mathsf P({X_2}{X_3}{X_4}{X_5}) +\mathsf P({X_2}{X_3}{X_4}{X_5}{X_6})\\+\mathsf P({X_2}{X_3}{Y_4})+\mathsf P({X_3}{Y_4}{X_6})+\mathsf P({X_2}{Y_4}{X_5})+\mathsf P({Y_4}{X_5}{X_6})\\-\mathsf P({X_2}{X_3}{Y_4}{X_6})-\mathsf P({X_2}{X_3}{Y_4}{X_5}) +\mathsf P({X_2}{X_3}{Y_4}{X_5}{X_6})}
\end{align}$$
Note $\mathsf P({X_2}{X_3}{X_4})$ and so forth are each the probabilities that three specified numbers never show; that is $(\tfrac {3}{6})^n$. Similarly, $\mathsf P(X_2X_3Y_4) = \tfrac n 6(\tfrac 36)^{n-1}$, and so on.
Put it together. |
Derivative of a logarithm from first principles | You should know that
$$\lim_{k \to 0} \frac{\log(1+k)}{k}=1$$
Then, calling $k= \frac hx$, you get
$$\lim_{h \to 0} \frac{\log(1+\frac hx)}{h}= \lim_{h \to 0} \frac{\log(1+\frac hx)}{x\frac hx} = 1 \cdot \frac{1}{x}= \frac{1}{x}$$ |
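A quick SymPy confirmation of both the key limit and the resulting derivative:

```python
import sympy as sp

h, x = sp.symbols('h x', positive=True)

key_limit = sp.limit(sp.log(1 + h / x) / h, h, 0)
assert key_limit == 1 / x                 # the standard limit used above
assert sp.diff(sp.log(x), x) == 1 / x     # hence (log x)' = 1/x
```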
Evaluate $\int_0^\pi \frac{1}{\sin^\beta\left(\frac{\theta}{2}\right) + 1} d\theta$ | This appears to be intractable even with advanced techniques. Formally (and not hard to justify), you can expand the integrand into a geometric series, assuming $\beta > 0$:
$$
\frac{1}{(\sin(\theta/2))^\beta + 1} = \sum_{n = 0}^\infty (-1)^n (\sin (\theta / 2))^{n \beta}
$$
Each term can be integrated,
$$
\int_0^\pi (\sin \theta / 2)^{n \beta} d \theta =\frac{\sqrt{\pi } \Gamma
\left(\frac{1}{2} (\beta n+1)\right)}{\Gamma \left(\frac{\beta
n}{2}+1\right)}
$$
Now for even $\beta = 2k$ the sum is
$$
\sum_{n = 0}^\infty (-1)^n \frac{\sqrt{\pi } \Gamma
\left(\frac{1}{2} (\beta n+1)\right)}{\Gamma \left(\frac{\beta
n}{2}+1\right)} = \pi \cdot \, _k F_{k-1}(\frac{1}{\beta}, \frac{3}{\beta},\dots, \frac{\beta-1}{\beta}, \frac{2}{\beta}, \frac{4}{\beta}, \dots, \frac{\beta - 2}{\beta}; -1)
$$
that is a generalized hypergeometric function with a number of parameters that depends on $\beta$. It's unclear how to write this for general $\beta$. |
Calculate the sum of first 45 numbers | Hint:
An arithmetic sequence is linear, so that the average of the terms is equal to the average of the extreme terms. The sum easily follows. |
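Concretely, for the first 45 positive integers the average of the extreme terms is $(1+45)/2=23$, so the sum is $45\cdot 23=1035$; a one-line check:

```python
# Sum of the first 45 positive integers: count times the average of the extremes.
n = 45
assert sum(range(1, n + 1)) == n * (1 + n) // 2 == 1035
```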
Proof with Euler's Totient Function | Just sum up the geometric sequence.
You get $2 + 2^2 + \cdots + 2^{\Phi(m)} = 2^{\Phi(m) + 1} - 2$. At this point you can use Euler's theorem. |
Convergence and topology , Intuition | The elements of the topological structure are regarded as the open sets. A closed sets is defined as the complement of an open set. (Thus these are closed under finite union and arbitrary intersection.)
In a topological space a sequence $x_n$ is said to converge to the ('limit') point $x$, if every open neighborhood $U$ of $x$ (i.e. open set containing $x$) contains all but finite elements of $\{x_n\}$, that is, there is an $N$ such that $x_n\in U$ if $n>N$.
We can prove that any subsequence of a convergent sequence converges to the same limit point, and that if a closed set contains a convergent sequence, it also contains its limit point.
If we are given a set $X$ and certain sequences of $X$ together with their 'wannabe-limit-points', then this data induces a topology on $X$: namely, we define a set $S\subseteq X$ to be closed if whenever $S$ contains an infinite subsequence of a given sequence, it also contains its 'wannabe-limit-point'.
Now in this topology, any given sequence $x_n$ indeed converges to its given limit point $x$, as if $x\in U$ with an open $U$, we can't have infinitely many $x_n$'s in the closed $X\setminus U$, since then it would define a subsequence of $x_n$ and $x\in X\setminus U$ would follow.
Different convergence data, of course, usually induce different topologies. |
Proof of a bijection to the set of subsets? | You are confusing between $f^{-1}(a)=\{x\mid f(x)=a\}$ and $f^{-1}(a)=\{x\mid f(x)\in a\}$. The latter is sometimes written as $f^{-1}[a]$ to avoid this sort of confusion when $f(x)$ is a set itself.
So $f^{-1}(\varnothing)=\{1\}$ and $f^{-1}[\varnothing]=\varnothing$. |
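The distinction can be made concrete with a toy function whose value is itself a set (encoding and names mine):

```python
# Toy function with f(1) = emptyset, encoding sets as frozensets.
f = {1: frozenset()}

def preimage_point(f, a):   # {x : f(x) = a}
    return {x for x, y in f.items() if y == a}

def preimage_set(f, A):     # {x : f(x) in A}
    return {x for x, y in f.items() if y in A}

assert preimage_point(f, frozenset()) == {1}   # f^{-1}(emptyset) = {1}
assert preimage_set(f, frozenset()) == set()   # f^{-1}[emptyset] = emptyset
```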
Real and Complex Analysis Rudin thm 1.41 | Thus lemma is also known as the Borel–Cantelli lemma. So we want to show that $\mu(A) = 0$. First notice that by definition $x \in A$ if and only if $x \in E_k$ for infinitely many $k$. Let $x \in A$. Then as $g(x) = \sum_{k=1}^{\infty} \chi_{E_k}(x)$, we see $g(x) = \infty$ (since $x$ is in infinitely many $E_k$). Now the idea is to recall that if $\int |g(x)| < \infty$, then $g(x)$ is finite valued almost everywhere. We omit the absolute values since $g \geq 0$
So we want to integrate $g$ and use the monotone convergence theorem (our $g$ and the soon-to-be-defined $g_n$ are non-negative and measurable, so we can apply MCT) with $g_n(x) := \sum_{k=1}^{n} \chi_{E_k}(x)$, to interchange the order of integration and summation.
So we see $\int_{\mathbb{R}^d} g(x)$ = $\int_{\mathbb{R^d}} \sum_{k=1}^{\infty} \chi_{E_k}(x)$ = $\sum_{k=1}^{\infty} \int _{\mathbb{R}^d}\chi_{E_k}(x)$ = $\sum_{k=1}^{\infty} \int_{E_k} 1$ = $\sum_{k=1}^{\infty} \mu(E_k) < \infty$ by assumption. Hence, we have $g(x)$ is finite valued almost everywhere. Therefore, as $A$ is the set where $g(x)$ is infinity, we see $\mu(A) = 0$ as desired. |
Does this equation have a rational point? (Elliptic curve?) | There may be faster ways, but this is what I have found.
Assume $x=p/q$ with $p,q$ coprime integers and $q$ positive. Multiply the curve equation by $q^4$ to make its left-hand side an integer:
$$F(p,q)=-3362 p^4 + 2009 p^3 q -1104 p^2 q^2 + 3017 p q^3 + 852 q^4=(q^2 y)^2$$
Therefore $q^2 y$ must be an integer to solve the equation, and the right-hand side must be a square in $\mathbb{Z}$.
Assume $5\mid q$, then $F(p,q)\equiv-2 p^4\pmod{5}$, which is not a square unless $5\mid p$, but the latter would contradict the lowest-terms requirement. So $5\not\mid q$, hence
any solution's $x$ can be represented in $\mathbb{Z}_5$ (the $5$-adic integers), and $F(x,1)$ must be a square in $\mathbb{Z}_5$.
Using Pari/GP, I verified that $F(x,1)$ is never a square modulo $5^4$, so the original equation has no rational solutions. |
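For reproducibility, here is the same modular check rendered in a few lines of Python (I sketch the procedure only; variable names mine):

```python
# x values mod 5^4 for which F(x,1) reduces to a square mod 5^4;
# an empty list reproduces the claim above.
M = 5**4
squares_mod_M = {(i * i) % M for i in range(M)}

def F(p, q):
    return -3362*p**4 + 2009*p**3*q - 1104*p**2*q**2 + 3017*p*q**3 + 852*q**4

hits = [x for x in range(M) if F(x, 1) % M in squares_mod_M]
print(len(hits))
```

(Checking mod $5$ alone is not enough: $F(x,1)\equiv 0 \pmod 5$ for $x\equiv 2,4 \pmod 5$, which is why the higher power $5^4$ is needed.)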
Why is the Einstein Static Universe an infinite cylinder? | The "vertical" axis of the infinite cylinder is designated by $t$, which goes from $-\infty$ to $+\infty$. The variable $\chi$, in contrast, is one component of spherical coordinates on $\mathbb S^3$. If we designate standard spherical coordinates on $\mathbb S^2$ by $(\phi,\theta)$ and those on $\mathbb S^3$ by $(\chi,\phi,\theta)$, then the standard round metric on $\mathbb S^2$ is
$$
d\Omega^2 = d\phi^2 + \sin(\phi)^2 d\theta^2,
$$
and the standard round metric on $\mathbb S^3$ is
$$
d\chi^2 + \sin(\chi)^2 d\phi^2 + \sin(\chi)^2\sin(\phi)^2 d\theta^2
= d\chi^2 + \sin(\chi)^2 d\Omega^2.
$$
The variable $\chi$ only goes from $0$ to $\pi$ because it represents the angle downward from the "north pole" of $\mathbb S^3$. |
Linear algebra - projection matrix - inverse matrix | If we search the inverse on the form $aI+bA$ we get
$$(aI+bA)(I+cA)=I\iff aI+(ac+b+bc)A=I\iff (a=1)\land(c+b+bc=0)\\\iff(a=1)\land (b=-\frac{c}{1+c}), c\ne-1$$ |
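A quick numerical check with a concrete idempotent $A$ (example matrix mine), confirming $(I+cA)^{-1}=I-\frac{c}{1+c}A$ when $A^2=A$:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])   # A @ A == A: a (non-orthogonal) projection
assert np.allclose(A @ A, A)

I = np.eye(2)
c = 2.0
inv_candidate = I - (c / (1 + c)) * A
assert np.allclose((I + c * A) @ inv_candidate, I)
```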
Groups where all elements are order 3 | The standard example is the Heisenberg group. Consider the group of all matrices of the form
$$\left(\begin{array}{ccc}
1 & x & y\\
0 & 1 & z\\
0 & 0 & 1
\end{array}\right),$$
where $x,y,z\in\mathbb{Z}/3\mathbb{Z}$. It is not hard to verify that this is a group, that every one of its 27 elements is of exponent $3$, and that it is not abelian. Replacing $\mathbb{Z}/3\mathbb{Z}$ with $\mathbb{Z}/p\mathbb{Z}$ for odd prime $p$ shows that a similar result cannot hold for any prime other than $p=2$.
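These claims are small enough to verify by brute force (pure Python, mod-3 matrix arithmetic; names mine):

```python
from itertools import product

# Upper unitriangular 3x3 matrices over Z/3 (the mod-3 Heisenberg group).
def mat(x, y, z):
    return ((1, x, y), (0, 1, z), (0, 0, 1))

def mul(A, B, p=3):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % p
                       for j in range(3)) for i in range(3))

G = [mat(x, y, z) for x, y, z in product(range(3), repeat=3)]
E = mat(0, 0, 0)

assert len(G) == 27
assert all(mul(mul(g, g), g) == E for g in G)              # every g satisfies g^3 = e
assert any(mul(g, h) != mul(h, g) for g in G for h in G)   # and G is not abelian
```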
This is an example of smallest possible order: a finite group in which every element is of exponent $3$ must have order $3^n$ for some $n$ (a consequence of Cauchy's Theorem), and every group of order $3^2$ is abelian.
There is another nonabelian group of order $27$, but in that group there is an element of order $9$:
$$\langle a,b\mid a^9 = b^3 = 1, ba = a^4b\rangle.$$ |
Definition of permutation | To call them "the same" is a bit of an abuse of language (albeit a common one that you get used to quite quickly). I would prefer to describe them as "equivalent".
In fact, because $A$ is here an arbitrary set, it seems even more dangerous than usual to call them "the same", because to get from the ordered list to the bijection requires having some fixed ordering on $A$, which isn't part of the data. If $A=\{1,\dotsc,n\}$, then there is at least a "default" ordering that you can use.
In conclusion, I think you are right to be nervous about calling these two definitions the same, but if you fix some ordering on $A$, then you get a bijection between the two types of permutation; you convert an ordered list into the bijection mapping the first element of $A$ (under the fixed ordering) to the first element of the ordered list, and so on, and convert a bijection $\varphi\colon A\to A$ into the ordered list $\varphi(a_1),\varphi(a_2),\dotsc$, where $a_i$ is the $i$-th element of $A$ under the fixed ordering. |
Geometry behind $\int_{0}^{2π}\frac{e^{ix}}{e^{ix}-z}~dx=2\pi(|z|<1)$ | My attempt at providing a "geometric" interpretation will repeat @robjohn's points, before looking at it from a physical perspective.
With $w:=e^{ix}$, we can rewrite this as a contour integral, $\oint_{|w|=1}\frac{dw}{w-z}=2i\pi[|z|<1]$. It always helps to think of $\Bbb C$ as a Euclidean plane. Placing a $1/(w-z)$ factor in the loop contributes $2\pi$ to the integral, provided the pole $z$ is also in the loop. This is analogous to Ampère's law, in which current passing through a loop generates a magnetic field. Or if you consider a 2D closed surface in 3D space instead, Gauss's law says an electric field is generated by an enclosed charge. Physics metaphors aside, we're quantifying what is enclosed in a set boundary; it's all effectively Stokes's theorem (see also here). |
Differential equation substitution $z=\frac y x\implies z'=xz'+z$ | if $z = \frac{y}{x}$ then $y(x) = x\cdot z(x)$ - the important part here is that $x$ is your independent variable and both $y$ and $z$ depend on it. Hence when we take $\frac{\mathrm{d}y}{\mathrm{d}x} = \frac{\mathrm{d}}{\mathrm{d}x} (x\cdot z(x))$ we have to use the product rule on the right hand side)
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}x} (x\cdot z(x)) = x\cdot \frac{\mathrm{d}z}{\mathrm{d}x} + \frac{\mathrm{d}x}{\mathrm{d}x}\cdot z(x) = x\cdot \frac{\mathrm{d}z}{\mathrm{d}x} + z(x).
\end{equation} |
$\mathbb{R}_{\le3}[X]$ is not a subspace of $\mathbb{R}_{\le4}[X]$ (polynomials in linear algebra) | If, as lisyarus asked, $\mathbb R_{\le n}[x]$ represents polynomials of at most degree $n$ then I somewhat disagree with that solution (and why subscripts as powers??). But here is why I think they might claim it. As vector spaces,
$$\mathbb R_{\le 4}[x] \simeq \mathbb R^5$$
Meaning they are isomorphic, or essentially the same space. The polynomial (as a vector) $ax^4+bx^3+cx^2+dx+e$ is "essentially the same" as the vector $(a,b,c,d,e)\in \mathbb R^5$. Since there is no mentioned coefficient of $x^4$ I assume they mean it is a fundamentally different thing, just as $(a,b,c) \notin \{(a,b,c,d) \ | a,b,c,d\in\mathbb{R}\}$
However, I interpret $\mathbb{R}_{\le 3}[x]$ as a subspace of $\mathbb{R}_{\le 4}[x]$ in the following sense $$\{ax^4+bx^3+cx^2+dx+e \ |\ a=0, b,c,d,e\in\mathbb R\}$$
And similarly for $\mathbb R_{\le n}[x]$ for any $n\ge 3$.
Second, since
$$\{ax^3+bx^2+cx+d \ |\ b=0\}$$
is certainly a subspace of $\mathbb R_{\le 3}[x]$, I would also say it's a subset of $\mathbb R_{\le 4}[x]$; however, my guess at their interpretation from before still stands. But without your explicit definitions and lecture materials I cannot guess any better.
Lower bound for $|\cos z|$ for $|z|=1$ | $|\cos z|^2=(e^{2y}+e^{-2y}+2\cos(2x))/4$ and we want to prove it is less than $\frac{25}{9}$, or equivalently $e^{2y}+e^{-2y}+2\cos(2x)<100/9$. Since $x^2+y^2=1$ both $x$ and $y$ are forced in the range $[-1,1]$. The maximum of $e^{2y}+e^{-2y}$ is achieved when $y=1$ and the maximum of $\cos(2x)$ is achieved in $x=0$, hence LHS$\leq e^2+1/e^2+2<(2.8)^2+1/4+2=10.09<11<100/9$. So $|\cos z|<5/3$ for $|z|=1$.
For the other inequality we have to prove $e^{2y}+e^{-2y}+2\cos(2x)>4/9$. The minimum of $e^{2y}+e^{-2y}$ is achieved for $y=0$ and the minimum of $2\cos(2x)$ is achieved when $x=1$, so we have to prove $2+2\cos(2)>4/9$. Since $\cos(\alpha+\pi/2)=-\sin(\alpha)$, and in our case $0<\alpha<\pi/4$ because $\pi/2+\pi/4>2>\pi/2$, we have to prove $2(1-\sin(\alpha))>4/9$, i.e. $\sin(\alpha)<7/9$. But on that range $\sin$ is increasing, so $\sin(\alpha)<\sin(\pi/4)=\frac{\sqrt2}{2}<7/9$, hence we are done.
1. What is the physical significance an inflexion point might have? | Let $c(t)$ represent the distance travelled by a car at time $t$ on a straight road. Suppose $c(t)$ is strictly monotonic on some interval $[t_0-s,t_0+s]$ with $s>0$ and that $c''(t_0)=0.$ Then as $t$ increased from $t_0-s$ to $t_0$, the velocity, which is positive for $t\in [t_0-s,t_0),$ decreased to the local minimum velocity $c'(t_0)$, and as $t$ increased from $t_0$ to $t_0+s$ the velocity increased. In particular if $c'(t_0)=0$ then the car slowed to a stop and then accelerated in the same direction. Example :$c(t)=t^3,$ with $t_0=0.$ |
How can two horizontal forces acting at one point be at right angles? | The question is set in 3D. Wlog one axis is vertical and the other two are horizontal. |
Find angles in a triangle, with two similar triangles with scale factor $\sqrt{3}$ | Let $\Delta ABD\sim\Delta ACD$.
Hence, $\frac{AB}{AC}=\frac{AD}{AD}=1\neq\sqrt3$, which is impossible.
Let $\Delta ABD\sim\Delta ADC$.
Hence, $\measuredangle ABD=\measuredangle ADC,$ which is impossible.
Let $\Delta ABD\sim\Delta CAD$.
Hence, $\measuredangle ADB=\measuredangle CDA=90^{\circ}$ and since $\frac{AD}{CD}=\sqrt3$, we have $\measuredangle C=60^{\circ}$
and from here $\measuredangle B=30^{\circ}$ and $\measuredangle BAC=90^{\circ}$.
Let $\Delta ABD\sim\Delta CDA$.
Hence, $\measuredangle ABD=\measuredangle CDA$, which is impossible.
Let $\Delta ABD\sim\Delta DAC$.
Hence, $\measuredangle BAD=\measuredangle ADC$, which says $AB\parallel DC$ and hence $AB\parallel BC$, which is impossible.
Let $\Delta ABD\sim\Delta DCA$, which gives $AB||BC$ again.
Done! |
Big O confusion about upper bound of an algorithm | Presumably you compute $a^k$ once, then search for it in the array. You should not multiply the number of operations for each part, you should add them. Using binary search assumes that the array is sorted, but we will accept that. You are right that you will do $\log_2 N$ comparisons. To compute $a^k$ you can express $k$ in binary, square $a$ enough times, then multiply the values that correspond to the one bits in $k$. You should be able to express this in terms of the number of bits in $a$, which is $\log_2 a$ and a function of $k$. |
Problem understanding Integration question | No it is not the same. First of all the area cannot be negative. Second of all, you have to divide it into two integrals. First one is the ,,positive" part where the graph of the function is above x axis and and the ,,negative" part where the graph is below x axis. The negative part has to be taken with absolute value. You have to count the integrals
$$Area=\int\limits_0^{\pi}x\sin xdx+|\int\limits_{\pi}^{2\pi}x\sin xdx| $$
Look at the graph of $x\sin x$.
http://www.wolframalpha.com/input/?i=xsinx%3D0 |
How to solve this first order non-linear ODE | Hint
You could notice that the equation is separable and that you can write it as
$$\frac{dv}{du}=\frac{1}{\sqrt{a u^2+b u^3}}$$ and the integration leads to $$v=-\frac{2 \tanh ^{-1}\left(\sqrt{\frac{a+b u}{a}}\right)}{\sqrt{a}}+c$$ I am sure that you can take from here.
Added later to this answer
All of that means that if the following change of variable is made $$u=\frac{a \left(z^2-1\right)}{b}$$ after simplification, we have $$\frac{dv}{dz}=-\frac{2}{\sqrt{a} \left(z^2-1\right)}$$ and then $$v=\frac{2 \tanh ^{-1}(z)}{\sqrt{a}}+c$$ |
Please explain about Chebyshev's inequality? | One cannot in general turn the Chebyshev Inequality into a correct one-sided inequality by simply dividing the tail probability by $2$.
However, there are one-sided inequalities, for example the Cantelli Inequality. This says that $$\Pr(X-\mu\ge \epsilon)\le \frac{\sigma^2}{\sigma^2+\epsilon^2}.$$
In your case, the Cantelli Inequality yields a probability bound of $\frac{625}{625+625}$, exactly the bound mentioned by your teacher. |
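For concreteness, the Cantelli bound as a one-liner (the helper name is mine):

```python
def cantelli_bound(sigma_sq, eps):
    # One-sided Chebyshev (Cantelli): P(X - mu >= eps) <= sigma^2 / (sigma^2 + eps^2)
    return sigma_sq / (sigma_sq + eps * eps)
```

With $\sigma^2 = 625$ and $\epsilon = 25$ this gives $625/1250 = 1/2$, the teacher's bound.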
vector decomposition using eigenvectors | First, determine the eigenvectors $u_i$ for each eigenvalue $\lambda_i$ ($i \in \{1, 2, 3\}$).
Then, solve the linear equation system:
$$\begin{pmatrix} \vert & \vert & \vert \\ u_1 & u_2 & u_3 \\ \vert & \vert & \vert \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix} = \begin{pmatrix} 50\\ 0 \\ 0 \end{pmatrix}
$$
for the unknown coefficients $\alpha, \beta, \gamma \in \mathbb{R}$; note that the eigenvectors form the *columns* of the matrix, so that the product is exactly $\alpha u_1 + \beta u_2 + \gamma u_3$.
Then your decomposition is:
$$\alpha u_1 + \beta u_2 + \gamma u_3 = (50, 0, 0)^T$$ |
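A sketch of the linear solve in Python, using a hypothetical set of eigenvectors (the vectors $u_1=(1,1,0)$, $u_2=(1,-1,0)$, $u_3=(0,0,1)$ are my own example, not from the question); the eigenvectors go in as columns:

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system A x = b.
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical eigenvectors, placed as *columns* of the coefficient matrix:
u1, u2, u3 = [1, 1, 0], [1, -1, 0], [0, 0, 1]
A = [[u1[i], u2[i], u3[i]] for i in range(3)]
alpha, beta, gamma = solve3(A, [50, 0, 0])
```

Here the solve returns $\alpha=\beta=25$, $\gamma=0$, and indeed $25u_1+25u_2 = (50,0,0)^T$.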
The number of p-elements in a finite group | As mentioned in the comment, Gerry Myerson's example does not quite answer the question, but it is in the right spirit. Choose $\varepsilon >0,$ and for a given prime $p,$ choose an integer $n >0$ so that $p^{-n} < \varepsilon.$ Choose any prime $q$ such that $q \equiv 1$ (mod $p^n$). There is a Frobenius group $G$ of order $qp^n$ with kernel $K$ of order $q$ and complement $H$ of order $p^n.$ Then $G \backslash K$ is the set of non-identity $p$-elements of $G.$ The proportion of elements of $G$ which are non-identity $p$-elements is thus $1 - p^{-n}.$ Allowing for the identity, which makes a positive contribution to the proportion of $p$-elements, the proportion of elements of $G$ which are $p$-elements is greater than $1 - \varepsilon.$ |
Nonnullhomotopic degree 0 map | $$S^1 \times S^1 \xrightarrow{\pi_1} S^1 \cong S^1 \times \{1\} \hookrightarrow S^1 \times S^1$$ |
Why is the space of all connection on a vector bundle an affine space? | There is a useful lemma, that is often called "fundamental lemma of differential geometry" or "tensoriality lemma" that says the following:
If $E$ and $F$ are two vector bundles over $X$, and $A$ is a linear map from $\Gamma(E)$ to $\Gamma(F)$ (hence defined on global sections) then, $A$ is $C^\infty(M)$- linear iff there exists a vector bundle map $\alpha: E \rightarrow F$, such that $A(f)(x) = \alpha(f(x))$, where $f$ is a section of $E$ (hence, $A$ is defined pointwise).
This is proved by using smooth bump functions with small compact support contained in a neighborhood of $x$ and equal to $1$ on a smaller neighborhood of $x$.
Given this lemma, you have $A:= \nabla_1 - \nabla_2: \Gamma(E) \rightarrow \Gamma(\Omega^1(E))$. Compute $(\nabla_1 - \nabla_2)(fs)$ where $f$ is in $C^\infty(M)$ and $s$ is in $\Gamma(E)$. You find:
$$
(\nabla_1 - \nabla_2) (fs) = df \otimes s + f \nabla_1 s - df \otimes s - f \nabla_2 s = f(\nabla_1 - \nabla_2)s.
$$
This proves that $A$ is canonically associated to a vector bundle map $E \rightarrow \Omega^1(E)$, that is a tensor in $\Omega^1(End(E))$. |
Possible periods of periodic sequences of reals obeying $x_{n+2} = 1+x_{n+1} x_n.$ | Multiplying by $x_{n-1}$ we get
$$x_{n-1}x_{n+2}=x_{n-1}+x_{n-1}x_{n}x_{n+1}.$$
Multiplying by $x_{n+2}$ we get
$$x_{n+2}^2=x_{n+2}+x_{n}x_{n+1}x_{n+2}.$$
If we sum equalities of both types over all $n=1,2,\dots T$, we get that
$$\sum x_{n}x_{n+3} = \sum x_{n}^2, $$
or
$$\sum (x_{n}-x_{n+3})^2=0.$$
Hence $T|3$, but we know that $T\not = 1$, so $T=3$. |
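A concrete check in Python (the starting values $-1, 2$ are my own example of a period-3 orbit, not from the question):

```python
def orbit(x1, x2, n):
    # Iterate x_{k+2} = 1 + x_{k+1} * x_k and return the first n terms.
    xs = [x1, x2]
    while len(xs) < n:
        xs.append(1 + xs[-1] * xs[-2])
    return xs

xs = orbit(-1, 2, 12)   # -1, 2, -1, -1, 2, -1, ... repeats with period 3
```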
What to do with a hanging $1$ in a Karnaugh map? | Karnaugh maps require a particular ordering of the variables different from a normal truth table. Your K-map is ordered like a truth table: bc b'c bc' b'c' (or 11 01 10 00) whereas it has to be ordered such that only one variable changes going from one column (or row) to the next, and it is usually written with 00 on the left, namely 00 01 11 10 (or b'c' b'c bc bc').
When the K-map is arranged like this, any adjacent pair (or 4 or 8) allows the elimination of one (or 2 or 3) variables. Take as an example the two adjacent red "ones" in your table above (we can use the vertical axis of your table since the "a" variable is ordered correctly). The red "ones" mean (ab'c + a'b'c) which reduces to b'c(a+a'). a+a' is always true, ie equals 1, thus the expression reduces to b'c. So putting a ring around two adjacent "ones" eliminates the variable which has both its true and complement present.
To proceed, we have to rewrite your table:
\begin{array}{ccccc}
& 00 & 01 & 11 & 10\\
\hline
0 & 1 & 1 & 1 & 0\\
1 & 0 & 1 & 0 & 1\\
\end{array}
We see here three adjacent ones on the a' (a=$0$) line and two adjacent ones in the b'c column ($01$). We can make two pairs horizontally and one pair vertically. The centre "one" is shared by all three pairs. These three pairs represent a'b' + b'c + a'c. The remaining, single "one" (bottom right) represents abc'. Thus the minimised function is a'b' + b'c + a'c + abc'. |
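A brute-force check that the minimised expression agrees with the original truth table on all eight inputs (the minterms below are read off the K-map above; the code itself is my own sketch):

```python
# Minterms (a, b, c) where the K-map above has a 1.
minterms = {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)}

def original(a, b, c):
    return (a, b, c) in minterms

def minimized(a, b, c):
    # a'b' + b'c + a'c + abc'
    return ((not a and not b) or (not b and c)
            or (not a and c) or (a and b and not c))
```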
If $f$ is bijective, then $f^{- 1}$ is bijective. | A function $h: A \to B$ is bijective if and only if there is a function $g: B \to A$ such that $g \circ h =1_A$ and $h \circ g = 1_B$.
Now, apply this with $h= f^{-1}$ and $g=f$ to conclude that $f^{-1}$ is bijective. |
Isomorphism between quotient space $X/M$ and the orthogonal complement of $M$. | Elements in the space $X/M$ are cosets of the form $x+M$, with $x\in X$. For simplicity denote $[x]=x+M$.
Also denote the canonical basis of $X$ by $\{e_1,\dots,e_n\}$, where for each $e_j$ its $j-$th coordinate is $1$ and all other coordinates are $0$. Then it's not difficult to see that $\{[e_{k+1}],[e_{k+2}],\dots,[e_n]\}$ form a basis of $X/M$.
On the other hand, define $T_j\in X^*$ to be the $j$-th coordinate functional, $T_j(w)=w_j$; equivalently, on the basis vectors,
$$T_j(e_i)=\begin{cases} 1, & i=j,\\
0, & i\neq j.\end{cases}$$
Then it's not hard to see that $\{T_{k+1},T_{k+2},\dots,T_n\}$ form a basis of $M^\perp$. Hence
$$\text{dim}(X/M)=\text{dim}(M^\perp)=n-k.$$
This proves $X/M\cong M^\perp$. |
Show $\int^{\pi/2}_0 \cos^{\mu}(x)\sin^{v}(x)dx= \frac{1}{2}B(\frac{1+\mu}{2},\frac{1+v}{2})$ | You went wrong when you wrote the integral as the difference of two other integrals; you appear to have mistaken $(1-t)^{(\nu-1)/2}$ with $1-t^{(\nu-1)/2}$. As @Gary noted, $t=\sin^2x$ finishes the job, viz. $dt=2\sin x\cos xdx$. |
${\rm End}(\bigoplus V)=\bigoplus({\rm End}(V))$ | Use additivity of Hom, and note the sums go over different index sets
$$\begin{align}
{\rm End}\left(\bigoplus_{i=1}^n V\right) &= {\rm Hom}\left(\bigoplus_{i=1}^n V, \bigoplus_{i=1}^n V\right) \\
&= \bigoplus_{i=1}^n \bigoplus_{j=1}^n {\rm End}(V) \\
&= \bigoplus_{1\leq i,j\leq n} {\rm End}(V).
\end{align}$$ |
Compute the set of points (x,y) for a circle of arbitrary radius, with a 1 degree step, without using any trigonometric function. | Use the parametric equations
$$x=\frac{1-t^2}{1+t^2}\ ,\quad y=\frac{2t}{1+t^2}$$
for a circle of radius $1$ centred at the origin. You can then easily scale to any radius you like, and translate to be centred at any point you like.
This will not conveniently give you regularly spaced points around the circle - for that I suspect you can't avoid trigonometric functions. But of course you can still plot as many points as you like by taking suitable $t$ values.
There will also be some difficulties near $x=-1$ as this requires the $t$ values to approach $\pm\infty$. However I should think (depending on what programming resources you have available) that you could avoid this by plotting the right half of the circle; then plot the left half by using the same formulae and plotting $(-x,y)$ rather than $(x,y)$. |
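A trig-free sketch of the plotting routine (function and parameter names are mine), including the mirroring trick for the left half:

```python
def circle_points(radius, cx, cy, t_values):
    # Rational parametrisation: no trig needed. Each finite t in [-1, 1]
    # gives a point on the right half of the unit circle; mirroring x
    # covers the left half.
    pts = []
    for t in t_values:
        d = 1.0 + t * t
        x, y = (1.0 - t * t) / d, 2.0 * t / d
        pts.append((cx + radius * x, cy + radius * y))   # right half
        pts.append((cx - radius * x, cy + radius * y))   # mirrored left half
    return pts

pts = circle_points(2.0, 1.0, -1.0, [i / 10 - 1 for i in range(21)])
```

As noted in the answer, the $t$ values here are not equally spaced in angle; they just land on the circle exactly.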
Number Theory Problem Involving RSA | $527 = M = pq = 31 \cdot 17$, as you say. So $\phi(M) = (p-1)(q-1) = 30 \cdot 16 = 480$.
Now $d$ is usually defined as the number $d < \phi(M)$ such that $ed = 1 \bmod \phi(M)$. This will ensure that $(m^e)^d = m^{ed} = m \bmod M$ for all $m < M$. So $d$ is the required secret exponent ($e = 37$ being the public one).
As $e = 37$ is prime and does not divide $480$, they have gcd equal to $1$ and we can, using the extended Euclidean algorithm, find $x,y \in \mathbb{Z}$ such that $37x + 480y = 1$. Taking this equation modulo $480$, we see that the $x$ in this equation (modulo $\phi(M) = 480$) is the number $d$ you're looking for.
BTW, $d = 13$, from the comments, is indeed correct and can be checked as follows: $13 \cdot 37 = 481 = 1 \bmod 480$. In this case the numbers are so small that one can easily find them by trial and error or a simple minded computer program. For large numbers with known factorisations we would implement the Euclidean algorithm instead. |
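The extended Euclidean computation can be sketched as follows (helper names are mine); it recovers $d = 13$ and verifies the RSA round trip modulo $527$:

```python
def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(e, phi):
    g, x, _ = egcd(e, phi)
    if g != 1:
        raise ValueError("e is not invertible modulo phi")
    return x % phi

d = modinv(37, 480)   # the secret exponent
```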
Generated $\sigma$-algebras identity (unions) | You'll need the fact that, if $\mathcal A$ is a $\sigma$-algebra, then the $\sigma$-algebra it generates is itself: $\sigma(\mathcal A)=\mathcal A$. We'll also need monotonicity: If $\mathcal C\subseteq\mathcal D$ then $\sigma(\mathcal C)\subseteq\sigma(\mathcal D)$. Armed with these observations, we can proceed as follows.
Since $\mathcal C_1\subseteq\mathcal C_1\cup\mathcal C_2$, we have $\sigma(\mathcal C_1)\subseteq\sigma(\mathcal C_1\cup\mathcal C_2)$, and similarly $\sigma(\mathcal C_2)\subseteq\sigma(\mathcal C_1\cup\mathcal C_2)$. So $\sigma(\mathcal C_1)\cup\sigma(\mathcal C_2)\subseteq\sigma(\mathcal C_1\cup\mathcal C_2)$. By monotonicity, we get $\sigma(\sigma(\mathcal C_1)\cup\sigma(\mathcal C_2))\subseteq\sigma(\sigma(\mathcal C_1\cup\mathcal C_2))$. Finally, our initial observation simplifies the right side of this to just $\sigma(\mathcal C_1\cup\mathcal C_2)$, as required. |
A question about the category of sets with endomorphisms | Let me try to give some insight into how to think about objects of this category. First, forgetting about the endomorphisms, what is a set? It's just some collection of points. So, you should envision this as just a big bag of points with no structure relating them.
Now, if you have such a set $A$, what additional structure does an endomorphism $\alpha:A\to A$ give you? Well, for each element $a\in A$, it links $a$ to some other element $\alpha(a)$. You could visualize this by imagining that you draw an arrow from $a$ to $\alpha(a)$, for each point in your set. So, instead of just a bag of points, you now have a diagram of points where there are some arrows between them, with exactly one arrow starting at each point. (More formally, a mathematician would call this a special type of "directed graph".)
Let's look at some simple examples. First, consider $A=\mathbb{N}$ with $\alpha(n)=n+1$. This looks like an infinite "ray" of points going off to the right, with an arrow from each point to the next one. Another example is $B=\mathbb{N}$ with $\beta(n)=n+2$. This has the same set of points, but now the arrows are different so they form two separate "rays". One ray consists of all the even numbers, and the other ray consists of all the odd numbers. We can thus see that $(A,\alpha)$ and $(B,\beta)$ are not isomorphic, since $(A,\alpha)$ has just a single connected ray while $(B,\beta)$ consists of two separate rays which are not connected to each other at all via the map $\beta$. (This isn't a completely rigorous argument, but it can be made into one.) Another example is $C=\mathbb{Z}$ with $\gamma(n)=n+1$. This one has just one connected piece like $(A,\alpha)$, but now instead of a ray it is a line that continues infinitely both forward and backward.
Your examples are quite a bit more complicated, but can be analyzed in a similar spirit. So, here are some things I encourage you to contemplate on your own. For $(\mathbb{Z},\alpha)$ with $\alpha(n)=2n$, what "connected pieces" can you break the picture into? What shape are the pieces? Are they rays? Lines? Something else? And, can you do the same analysis for $(\mathbb{Z},\beta)$ with $\beta(n)=3n$? Does it have the same number of pieces with the same shape (so it would be isomorphic), or not?
Brief answers to these questions are hidden below.
For $(\mathbb{Z},\alpha)$, there are infinitely many pieces. Namely, for any odd integer $n$, the set of integers of the form $2^kn$ for $k\geq 0$ forms a "ray", starting at $2^0n=n$ and then with $\alpha$ taking each $2^kn$ to the next point $2^{k+1}n$. Besides these rays, there is one other piece, consisting of just $0$. This is a "loop" with one point, since $\alpha$ maps $0$ to itself. So, to sum up, $(\mathbb{Z},\alpha)$ consists of (countably) infinitely many rays together with one single-point loop.
What about $(\mathbb{Z},\beta)$? It's almost the same! We again have infinitely many "rays", except now they start at integers that are not multiples of $3$, rather than at odd integers. And besides these rays, we have a one point loop at $0$.
So, since $(\mathbb{Z},\alpha)$ and $(\mathbb{Z},\beta)$ each consist of countably infinitely many rays and one single point loop, they should be isomorphic. I leave it to you to try to prove this more formally. |
Asymptotic of the heat kernel | You just need to check that
$$ \int_M H_k(x, y) \mathrm{d}y = 1 + O(t^{k+1})$$
for all $x \in M$ and for all $k$. For example, this follows directly from the method of stationary phase (just take a geodesic chart around $x$ that is so large that the support of $\eta$ is in its domain, translate to $\mathbb{R}^n$, and use the method of stationary phase there).
Therefore, your first equation in fact reduces to
$$(4 \pi t)^{n/2} \int_0^tC \tau^{k-n/2}(1 + O(\tau^{k+1}))\mathrm{d} \tau \leq C_1 t^{k+1}$$
which is obviously true.
So you just estimated too roughly when you took the sup-norm. Estimating the $L^1$-norm does the job.
Every simply connected space is contractible | All the homotopy groups of a contractible space are trivial (because all the homotopy groups of the point you're contracting to are trivial). Consequently, if any $\pi_i \not\cong \{0\}$, your space is not contractible. For all $n > 0$, $\pi_n(S^n) \cong \mathbb{Z}$. This is a standard fact, and you can read more here. (This is proved using degree theory.) In fact, that $\pi_i(S^n)$ is frequently nontrivial for $i > n > 1$ is a wide area of study.
If you are familiar with homology, you could use the standard fact $H_0(S^n) \cong H_n(S^n) \cong \mathbb{Z}$ and the remaining homology groups are trivial. This is normally shown applying the Hurewicz theorem to the above results on homotopy groups and the fact that $S^n$ is $(n-1)$-connected. |
Number of spanning trees of a graph (behind the formula) | I'm assuming this means you have a graph $G$ on $n$ vertices with adjacency matrix $A$, and you want to count the number of spanning trees of $G$.
Now, $G$ is a subgraph of $K_n$, and so any spanning tree of $G$ is certainly a spanning tree of $K_n$. But a spanning tree $T$ of $K_n$ is not necessarily a spanning tree of $G$. In particular, $T$ is not a spanning tree of $G$ if and only if $T$ includes an edge $(i,j)$ of $K_n$ that is not an edge of $G$. In this case, $A_{i,j}=0$. So a spanning tree $T$ of $K_n$ is a spanning tree of $G$ if and only if $\prod_{(i,j)\in T}A_{i,j}=1$.
So the number of spanning trees of $G$ is the sum over all trees $T$ of $K_n$ in which $\prod_{(i,j)\in T}A_{i,j}=1$, which is your formula. |
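A brute-force illustration of the formula's logic (code and example graphs are my own): enumerate all $(n-1)$-edge subsets of $K_n$, keep the acyclic ones — on $n$ vertices, $n-1$ acyclic edges form exactly a spanning tree of $K_n$ — and count those whose edges all satisfy $A_{i,j}=1$.

```python
from itertools import combinations

def count_spanning_trees(adj):
    n = len(adj)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    count = 0
    for subset in combinations(edges, n - 1):
        # Union-find cycle check: n-1 edges with no cycle => spanning tree of K_n.
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for i, j in subset:
            ri, rj = find(i), find(j)
            if ri == rj:
                acyclic = False
                break
            parent[ri] = rj
        # The product of A_ij over the tree's edges is 1 iff every edge lies in G.
        if acyclic and all(adj[i][j] == 1 for i, j in subset):
            count += 1
    return count

C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]  # 4-cycle
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]  # complete graph
```

The 4-cycle has 4 spanning trees (remove any one edge), and $K_4$ has $4^{4-2}=16$ by Cayley's formula.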
Representation which have no unique decomposition into irreducible | You have to be careful about the difference between irreducibles and indecomposables. The representation $\mathbb{Z}\to GL_2$ defined by $1 \mapsto \left( \begin{array}{cc} 1& 1\\ 0&1\end{array}\right)$ is the basic example of a representation that is reducible but doesn't decompose into irreducibles.
On the other hand if you're asking when things split up into indecomposables, there is the Krull-Schmidt theorem which says that if a module satisfies both the ascending chain condition and the descending chain condition, then it is a direct sum of indecomposables. So in particular this would include any finite dimensional representation $G\to GL_n(\mathbb C)$.
For counterexamples, maybe you could look at this book, I just now found it by google searching 'krull-schmidt counterexample'... the first example they give is that for $R= \mathbb{Z}[\sqrt{-5}]$ you have $\langle 3,2+\sqrt{-5}\rangle \oplus \langle 3,2-\sqrt{-5}\rangle \cong R\oplus \langle 3\rangle$, so the decompositions aren't unique in this case in the sense that the summands on the LHS and RHS aren't isomorphic as $R$-modules. But this is a statement about rings rather than group representations, so it might not be quite what you're after... |
Sum of digits of n^n | A little estimating shows that $16^{16}$ has $20$ digits. Therefore $A≤180$. Therefore $B≤27$.
Now, if you kept on iterating the digit sum, you'd get to $7$ since $16^{16}\equiv 7 \pmod 9$. Thus $B\equiv 7 \pmod 9$. So $B\in \{7,16,25\}$.
It follows that the sum of the digits of $B$ is $7$.
Worth noting: a simple search through the possible values for $A$ shows that $25$ is not possible, but both $7$ and $16$ are. I don't see an easy way to eliminate either of these. In truth, $B=16$ but I don't see how to get there without heavier computation. |
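The heavier computation is a few lines in Python, confirming that $A=88$, $B=16$, and the digit sum of $B$ is $7$:

```python
def digit_sum(n):
    return sum(int(ch) for ch in str(n))

A = digit_sum(16 ** 16)   # 16^16 = 18446744073709551616
B = digit_sum(A)
```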
Problem understanding Master theorem | $$a^{\log_bn}=a^{\log n/\log b}=\exp\left(\log a\cdot\frac{\log n}{\log b}\right)=n^{\log a/\log b}=n^{\log_ba}$$ |
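A quick numerical check of the identity $a^{\log_b n}=n^{\log_b a}$ (the helper name is mine):

```python
import math

def both_sides(a, b, n):
    # a^(log_b n) and n^(log_b a) -- equal by the identity above.
    return a ** math.log(n, b), n ** math.log(a, b)
```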
If $f$ is has period $\omega$ and a pole at $z_0$, prove that $f$ has a pole at $z_0+\omega$. | Assuming $\epsilon>0$ is small enough to contain only one pole of $f$, we follow Cauchy...
$$\oint_{|z_0+\omega-z|=\epsilon} \frac{f(z)}{z-(z_0 + \omega)}\mathrm{d}z = \oint_{|z_0-y|=\epsilon} \frac{f(y+\omega)}{y-z_0}\mathrm{d}y = \oint_{|z_0-y|=\epsilon} \frac{f(y)}{y-z_0}\mathrm{d}y$$ where we have used the substitution $y = z-\omega$. This last integral should look pretty familiar. |
Find the cubic equation of roots $α, β, γ$. | Take the equation $$ax^3+bx^2+cx+d=0$$ which has solutions $\alpha$, $\beta$, $\gamma$. Group the odd and even powers on each side and factor to obtain $$x(ax^2+c) = -(bx^2+d).$$
Now squaring both sides gives
$$x^2(ax^2+c)^2 = (bx^2+d)^2.$$
This equation has solutions $\alpha$, $\beta$, $\gamma$ but only contains even powers of $x$ so we can make the substitution $y=x^2$ to obtain the equation $$y(ay+c)^2 = (by+d)^2$$
which has solutions $\alpha^2$, $\beta^2$, $\gamma^2$. You can replace $y$ by $x$ to obtain the answer in the back. |
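A quick check on a cubic with known roots $1, 2, 3$ (the example polynomial is my own): the transformed equation should vanish exactly at the squared roots $1, 4, 9$.

```python
# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6, i.e. a, b, c, d = 1, -6, 11, -6.
a, b, c, d = 1, -6, 11, -6

def transformed(y):
    # y*(a*y + c)^2 - (b*y + d)^2 vanishes precisely at the squared roots.
    return y * (a * y + c) ** 2 - (b * y + d) ** 2
```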
How to obtain coordinates of turning points without differentiating? | The first equation shows that $y\ge -\frac{1}{2}$ for all x, with $y=-\frac{1}{2}$ when $x=\frac{1}{3}$,
and the second equation shows that $y\le\frac{9}{2}$ for all x, with $y=\frac{9}{2}$ when $x=-3$. |
Prove or disprove: if $f(n)=O(g^2(n))$ then $2^{f(n)}=O(8^{g(n)})$ | You neither proved nor disproved it.
You can see it on a simple counterexample. I suppose it is meant for $n\to\infty$. Consider $g(n)=n$ and $f(n)=n^2$. We get $f\in O(g^2)$ since $f=g^2$. Now we disprove $2^f\in O(8^g)$.
Consider
$$
2^f\in O(8^g)\Leftrightarrow \limsup_{n\to\infty}\left|\frac{2^{f(n)}}{8^{g(n)}}\right|<\infty.
$$
Now we compute
$$
\limsup_{n\to\infty}\frac{2^{n^2}}{8^n}=\limsup_{n\to\infty}2^{n^2-3n}=\infty.
$$
This proves $2^f\notin O(8^g)$. |
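Numerically, a small illustration of the same computation:

```python
# f(n) = n^2, g(n) = n: the ratio 2^f(n) / 8^g(n) = 2^(n^2 - 3n) is unbounded.
ratios = [2.0 ** (n * n - 3 * n) for n in range(1, 9)]
```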
What domain maps to $[-1,1]\times[-1,1]$ under $e^z$? | We are given the square $Q:=[{-1},1]^2$ in the $(u,v)$-plane and have to find the $z=x+iy\in{\mathbb C}$ with $e^z\in Q$. Since $e^z=e^{x+iy}=e^x(\cos y+i\sin y)$ we have to describe the set $S$ of points $(x,y)$ satisfying
$$|e^x\cos y|\leq1,\quad |e^x\sin y|\leq 1\ ,$$or
$$\max\{|\cos y|,|\sin y|\}\leq e^{-x}\ .$$
Given $y\in{\mathbb R}$ this amounts to
$$x\leq-\log\max\{|\cos y|,|\sin y|\}=:\psi(y)\ .$$
The following figure shows a graph of the function $\psi$.
The set $S$ in question can then be written as
$$S=\bigl\{z=x+iy\>\big|\>-\infty< y<\infty,\ x\leq\psi(y)\bigr\}\ ,$$
whereby a vertical copy of the above curve, with the bumps pointing to the left, serves as the right boundary of $S$.
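A small numerical check of the boundary curve (function names are mine): points just left of $x=\psi(y)$ map into the square, points just right of it do not.

```python
import cmath
import math

def psi(y):
    # Right boundary of S: the largest x with e^(x+iy) still inside the square.
    return -math.log(max(abs(math.cos(y)), abs(math.sin(y))))

def maps_into_square(x, y):
    w = cmath.exp(complex(x, y))
    return abs(w.real) <= 1 and abs(w.imag) <= 1
```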
Sheaves and complex analysis | As a complement to Matt's very interesting answer, let me add a few words on the historical context of Leray's discoveries.
Leray was an officer in the French army and after France's defeat in 1940, he was sent to Oflag XVII in Edelsbach, Austria (Oflag = Offizierslager = POW camp).
The prisoners founded a university in captivity, of which Leray was the recteur (dean).
Leray was a brilliant specialist in fluid dynamics (he joked that he was un mécanicien, a mechanic!), but he feared that if the Germans learned that he gave a course on that subject, they would force him to work for them and help them in their war machine (planes, submarines,...).
So he decided to teach a harmless subject: algebraic topology!
So doing he recreated the basics on a subject in which he was a neophyte and invented sheaves, sheaf cohomology and spectral sequences.
After the war his work was examined, clarified and amplified by Henri Cartan (who introduced the definition of sheaves in terms of étalé spaces) and his student Koszul.
Serre (another Cartan student) and Cartan then dazzled the world with the overwhelming power of these new tools applied to algebraic topology, complex analysis in several variables and algebraic geometry.
I find it quite interesting and moving that the patriotism of one courageous man (officers had the option to be freed if they agreed to work for the Nazis) changed the course of 20th century mathematics.
Here, finally, is Haynes Miller's fascinating article on Leray's contributions. |