Understanding the ideal generated by a polynomial | Put in simple words:
In any ring $R$ (with unit!), the principal ideal $(a)$ generated by $a\in R$ consists of nothing but the multiples of $a$.
I give here just a heuristic:
I like to think of the quotient ring $R/(a)$ as having the same elements as $R$, but with equality modified so that all elements of $(a)$ are equal to zero in the quotient. Observe that it is enough to require $a=0$, because $ras=0$ and $\sum r_ias_i=0$ already follow from the properties of equality.
In the case of a polynomial ring, a quotient like $R[t]/(f)$ with, say, $f=t^n-a_{n-1}t^{n-1}-\dots-a_1t-a_0$, will always be represented by the set of polynomials of degree $<n$, because in the quotient ring we have
$$t^n=a_{n-1}t^{n-1}+\dots+a_1t+a_0$$
So, any time a $t^n$ appears, it can be replaced by a polynomial of degree $<n$ in $R[t]/(f)$. |
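A small computational illustration of this reduction (a sketch in sympy; the modulus $f$ below is an arbitrary choice of mine, not from the question): reducing modulo $f$ is just taking the remainder on division by $f$.

```python
from sympy import symbols, rem

t = symbols('t')
f = t**3 - 2*t - 5            # so t^3 = 2t + 5 in the quotient
p = t**5 + t**3 + 1           # an arbitrary polynomial to reduce
print(rem(p, f, t))           # representative of degree < 3: 5*t**2 + 6*t + 16
# by hand: t^5 = t^2*t^3 = t^2*(2t+5) = 2t^3 + 5t^2 = 2(2t+5) + 5t^2
#        = 5t^2 + 4t + 10, so p = (5t^2+4t+10) + (2t+5) + 1 = 5t^2 + 6t + 16.
```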
Substring of a regular language | HINT: Start with a DFA $M$ for $L$; we’ll modify $M$ to get an NFA $M'$ for $\operatorname{Substring}(L)$. The first step is to find all of the states of $M$ from which an acceptor state can be reached and make them the acceptor states of $M'$. The second and final step is to add $\epsilon$-transitions from the initial state of $M$ to every other state of $M$. I’ll leave the formal details to you, along with the task of explaining why $M'$ does in fact recognize $\operatorname{Substring}(L)$. |
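A minimal Python sketch of the hinted construction (the DFA representation and all names here are my own choices, not from the answer):

```python
# DFA as (states, alphabet, delta, start, accept) with delta[(q, a)] = q'.
# The NFA for Substring(L) keeps delta, makes every state that can reach
# an acceptor accepting, and adds epsilon-moves from the start to all states.

def substring_nfa(states, alphabet, delta, start, accept):
    can_accept = set(accept)                 # backward reachability
    changed = True
    while changed:
        changed = False
        for q in states:
            if q not in can_accept and any(delta[q, a] in can_accept
                                           for a in alphabet):
                can_accept.add(q)
                changed = True
    eps = {start: set(states)}               # epsilon-transitions of M'
    return states, alphabet, delta, start, can_accept, eps
```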
Counting how many natural numbers satisfy a given condition. | Hint: Note that as long as you are in the squares, you have the same number of terms. The term $p^2$ is the $p^{\text{th}}$ term. If you are in the first powers, $q$ is the $q^{\text{th}}$ term. When are you in each regime? |
What has happened in this step in the computation of DFT | \begin{align}&\frac{1}{N} e^{j k(2 \pi / N) N_{1}}\left(\frac{1-e^{-j k 2 \pi\left(2 N_{1}+1\right) / N}}{1-e^{-j k(2 \pi / N)}}\right)\qquad(\text{From Here})\\
&=\frac{1}{N} e^{j k(2 \pi / N) N_{1}}\left( \frac{e^{-jk2\pi(2N_1+1)/2N}}{e^{-jk(2\pi/2N)}}\right)\left(\frac{e^{j k 2 \pi\left(2 N_{1}+1\right) / 2N}-e^{-j k 2 \pi\left(2 N_{1}+1\right) / 2N}}{e^{j k(2 \pi / 2N)}-e^{-j k(2 \pi / 2N)}}\right)\\
&=\frac{1}{N} e^{j k(2 \pi / N) N_{1}}\left( \frac{e^{-jk2\pi(N_1+1/2)/N}}{e^{-jk(2\pi/2N)}}\right)\left(\frac{e^{j k 2 \pi\left(2 N_{1}+1\right) / 2N}-e^{-j k 2 \pi\left(2 N_{1}+1\right) / 2N}}{e^{j k(2 \pi / 2N)}-e^{-j k(2 \pi / 2N)}}\right)\\
&=\frac{1}{N} \left( \frac{e^{-jk2\pi(1/2)/N}}{e^{-jk(2\pi/2N)}}\right)\left(\frac{e^{j k 2 \pi\left(2 N_{1}+1\right) / 2N}-e^{-j k 2 \pi\left(2 N_{1}+1\right) / 2N}}{e^{j k(2 \pi / 2N)}-e^{-j k(2 \pi / 2N)}}\right)\\
&=\frac{1}{N} \frac{e^{-j k(2 \pi / 2 N)}\left[e^{j k 2 \pi\left(2 N_{1}+1\right) / 2 N}-e^{-j k 2 \pi\left(2 N_{1}+1\right) / 2 N}\right]}{e^{-j k(2 \pi / 2 N)}\left[e^{j k(2 \pi / 2 N)}-e^{-j k(2 \pi / 2 N)}\right]}\qquad(\text{To Here})\\ \end{align}
We just pull out the common factor of $e^{-jk(2\pi/2N)}$ from the denominator and $e^{-jk2\pi(2N_1+1)/2N}$ from the numerator. We then multiply the numerator factor by $e^{j k(2 \pi / N) N_{1}}$ to simplify further. The motivation for factoring this way is to express everything as differences of conjugates, which give us sine terms. |
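As a sanity check, the starting expression and the resulting Dirichlet-kernel form can be compared numerically (a numpy sketch; the values of $N$ and $N_1$ are arbitrary):

```python
# Verify (1/N) e^{jk(2pi/N)N1} (1 - e^{-jk2pi(2N1+1)/N})/(1 - e^{-jk2pi/N})
# equals (1/N) sin(pi k (2N1+1)/N) / sin(pi k / N) for k = 1..N-1.
import numpy as np

N, N1 = 16, 3
for k in range(1, N):                        # k = 0 needs a limit, skip it
    start = (1/N) * np.exp(1j*k*(2*np.pi/N)*N1) \
        * (1 - np.exp(-1j*k*2*np.pi*(2*N1+1)/N)) / (1 - np.exp(-1j*k*2*np.pi/N))
    end = (1/N) * np.sin(np.pi*k*(2*N1+1)/N) / np.sin(np.pi*k/N)
    assert np.isclose(start, end), (k, start, end)
print("expressions agree for k = 1..N-1")
```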
A doubt regarding the need for lemma 52.3 in Munkres' "Topology". | The fundamental group concerns loops, i.e. paths for which the final point and the initial one coincide. If you have two paths which are not loops, but such that they share the same initial and final points, that claim is true but it needs a proof, since it does not follow "directly" from the triviality of $\pi_1$. |
If $a+\frac b2 + \frac c3+\frac d4+\frac e5=0$ , $ a+bx+cx^2+dx^3+ex^4=0$ has at least one real zero. | Hint: Consider $F(x)=ax+\dfrac{b}{2}x^2+\dfrac{c}{3}x^3+\dfrac{d}{4}x^4+\dfrac{e}{5}x^5$ and $f(x)=a+bx+cx^2+dx^3+ex^4$. |
Finding a basis and dimension of a subspace | I imagine finding specific conditions on the polynomials in $\mathcal{P}_4(\mathbf{R})$ that satisfy $p(1) = p'(1) = 0$ would be quite cumbersome. My humble suggestion to you is that you observe that a possible ordered basis for $U$ is as follows:
$$\mathcal{K} = \{(x-1)^2,(x-1)^3,(x-1)^4\}$$ |
Probability a natural number of the form $m^2 - n^2$ can be exactly factored as the product of $2$ primes? | I'm interpreting the question as: "For a fixed $m$, how many $0\le n<m$ have the property that $m-n$ and $m+n$ are both prime?" (Note that this is a little bit wrong for $n=m-1$, but never mind.)
This is exactly the same as asking how many representations $2m$ has as the sum of two primes. In other words, this is the Goldbach conjecture in disguise.
So we would love to prove (but currently can't) that this number of representations is positive for all $m\ge2$.
There is also a conjectured asymptotic formula, as a function of $m$, for the number of such representations. |
Sentence (closed formulas) vs formulas with free variables. Valuation. | Some comments about the comments.
Regarding :
a formula with a free variable generally cannot be assigned a truth value that is consistent with all possible valuations.
I would like to rephrase it as follows :
a formula with a free variable may have a different truth value for different variable assignments.
Consider the formula $(x=0)$ of the first-order language of arithmetic, and interpret it in the domain $\mathbb N$ of natural numbers.
With the variable assignment function $s$ such that $s(x)=0$, the formula is evaluated to $\mathsf T$ while with the function $s'$ such that $s'(x)=1$ the formula is evaluated to $\mathsf F$.
For the existentially quantified formula $\exists x \ (x=0)$, the semantic specification is:
$s$ satisfies $\exists x \ (x=0)$ in the said interpretation (i.e. $\mathbb N \vDash \exists x \ (x=0)[s]$) iff for some $n \in \mathbb N$, we have that $s(x|n)$ satisfies $(x=0)$, where $s(x|n)$ is the function which is exactly like $s$ except that it assigns the value $n$ to the variable $x$.
With this specification, obviously the above $s$ satisfies $\exists x \ (x=0)$, but so does $s'$, because $s'(x|0)$ satisfies it.
Thus, the "trick" of the specification is simply to formalize the fact that, in order to satisfy $\exists x \varphi$ in a certain interpretation, it is enough to find an "object" in the domain such that $\varphi$ holds of it [irrespective of the fact that we have a name for it or not : see Computability & Logic].
We can easily prove that, for a closed formula $\varphi$ (a sentence), an interpretation $\mathfrak A$ satisfies $\varphi$ with every function $s$ from $\text {Var}$ into $|\mathfrak A|$, or $\mathfrak A$ does not satisfy $\varphi$ with any such function.
This is the meaning of :
if all of the variables in a formula are bounded so that the formula is a sentence, then it has only one truth value for all possible valuations. |
Probability of birthday in a group of N people | Your second answer is right. What if there were more than 365 people in the group? Then your first answer would produce a number larger than 1.
You can only add probabilities for "or" when the events are disjoint. In this case, it is possible that more than one person has their birthday on that day, so you can't just use the addition rule without subtracting out some overlap. |
The assumption of a Cauchy sequence in proving completeness of a set. | It is because, unless our set $S$ is empty, then we can find an element $a$ of $S$ and define $(\forall n\in\mathbb N):a_n=a$ (which, by the way, will be a Cauchy sequence). And also because, by definition, $S$ is complete if every Cauchy sequence of elements of $S$ converges to an element of $S$. |
Condition for the existence of the subgroup | The converse of Lagrange's theorem holds for finite abelian groups. Here is an outline of a proof:
Every finite abelian $p$-group is a product of cyclic subgroups, and so the converse of Lagrange's theorem holds for finite abelian $p$-groups.
Every finite abelian group is the product of its Sylow subgroups. By the previous result, the converse of Lagrange's theorem holds for all finite abelian groups. |
Powers of $2$ starting with $123$...Does a pattern exist? | If a power of 2 starts with 123, then it must be between $1.23\times 10^n$ and $1.24\times 10^n$ for some $n$.
So you want $k$ and $n$ for which $$1.23\times 10^n\leq 2^k < 1.24\times 10^n$$.
This is easier to deal with if we take logs (base 10). Then you want
$$\log(1.23) + n\leq k\log(2) < \log(1.24)+n$$
That is, the fractional part $\{k\log(2)\}$ satisfies
$$\log(1.23)\leq \{k\log(2)\} < \log(1.24),$$
that is, writing these as decimals,
$$0.0899051143939792\leq \{0.3010299956639811\, k\} < 0.09342168516223505.$$
Note that $0.0899051143939792$ and $0.09342168516223505$ don't differ by much. If you've found $k_1$ and $k_2$ that satisfy the above inequalities, then $0.3010299956639811 (k_1-k_2)$ is pretty close to an integer.
We could find example differences $\Delta k$ by looking for rational approximations of $\log(2)$, which we can do using the continued fraction expansion of $\log(2)$:
$$\log(2)=\frac{1}{3+\frac{1}{3+\frac1{9+...}}}$$
The numbers in that expression are $3,3,9,2,2,4,6,2,1,\dots$ with no discernible pattern. If we cut the fraction off at various places we get "best rational approximations" of $\log(2)$. The first few are:
$$\log(2)\approx\frac{1}{3}$$
$$\log(2)\approx\frac{1}{3+\frac{1}{3}}=\frac{3}{10}$$
$$\log(2)\approx\frac{1}{3+\frac{1}{3+\frac1{9}}}=\frac{28}{93}$$
$$\log(2)\approx\frac{59}{196}$$
The errors in these are, respectively, $0.0323033...$, $0.001029996...$, $0.00045273..$ and $0.0000095875...$.
If you have a fraction $\frac{p}{q}$ which is within $\epsilon$ of $\log(2)$, then $q$ will sometimes work as $\Delta k$, because multiplying $2^k$ by $2^q$ changes the fractional part $\{k\log(2)\}$ by only about $q\epsilon$. If $2^k$ starts with 123, then as long as $q\epsilon$ is less than $\log(1.24) - \log(1.23)$, we have a chance that $2^{k+q}$ also will.
$\log(1.24)-\log(1.23)=0.0035167...$. Multiplying the errors above by the denominators, we get
$\frac13$ obviously won't work: $0.0323033\times3=0.0969...$ which is way bigger than $0.0035167$
$\frac3{10}$ won't work: $0.001029996\times10=0.0102999..$, which is also bigger than $0.0035167$
$\frac{28}{93}$ won't work, but only just: $0.00045273\times93=0.0042104...$. You probably found some "near misses" 93 apart.
$\frac{59}{196}$ works! $0.0000095875\times 196 = 0.0018791$, which is less than $0.0035167$. So it's possible to have $2^k$ and $2^{k+196}$ both starting with 123 - but not all of $2^k$, $2^{k+196}$ and $2^{k+2\times196}$, since $2\times 0.0018791=0.003758$, which is (just) too big.
The next continued fractions for $\log(2)$ are $\frac{146}{485}$ (recognise anything there?), $\frac{643}{2136}$ and $\frac{4004}{13301}$.
You'll notice this method missed your 289. That's because the continued fraction method gives the best rational approximations. It's true that $\log(2)\approx\frac{87}{289}$, but that's not as good an approximation as $\frac{59}{196}$.
In general, you're looking for numbers $q$ for which $q\log(2)$ is within $0.0035167$ of an integer. Finding those will be easier than finding powers of 2, perhaps :)
The property of 196 and 485 that makes them give rise to patterns in the powers of 2 that start with 123 is just "$q\times\log(2)$ is nearly an integer". That's got nothing much to do with the specific prefix you chose. If you look for powers of 2 starting with, say, 234, you'll probably see some of the exact same numbers popping up - but not 196, alas, since $\log(2.35/2.34)=0.00185...$ is a tighter requirement, and $196\times0.0000095875=0.00187$ is now too big. 485 will still work, for any starting three digits (though only just for 999), as will 2136, 13301, etc. |
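A brute-force check of all this with exact integer arithmetic (a quick Python sketch; the search bound is arbitrary):

```python
# Exponents k with 2^k starting "123", and the gaps between consecutive hits;
# compare the gaps with the continued-fraction denominators 196, 485, 2136, ...
ks = [k for k in range(1, 20000) if str(1 << k).startswith('123')]
gaps = sorted(set(b - a for a, b in zip(ks, ks[1:])))
print(ks[:6])
print(gaps)
```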
How to calculate the eigenvalues of nonhomogeneous system of LDE? | Your "$\frac{dx}{dt}= -2x+ 4$"should be $\frac{dx}{dt}= -2x+ 4y$. And if $\lambda$ is an eigenvalue, we must have $\frac{dx}{dt}= -2x+ 4y= \lambda x$ and $\frac{dy}{dt}= 4x- 8y= \lambda y$. The two equations, $-2x+ 4y= \lambda x$ and $4x- 8y= \lambda y$ are equivalent to $(-2- \lambda)x+ 4y= 0$ and $4x- (8+ \lambda)y= 0$. Those equations have the obvious trivial solution $x= y= 0$. An "eigenvalue" for this problem is a value of $\lambda$ such that this equation has other, non-trivial solutions.
This set of equations can also be written as the matrix equation, $$\begin{bmatrix}-2- \lambda & 4 \\ 4 & -8- \lambda \end{bmatrix}\begin{bmatrix}x \\ y \end{bmatrix}= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$ IF that matrix were invertible we could easily solve by multiplying both sides by the inverse, getting only the trivial solution $$\begin{bmatrix}x \\ y \end{bmatrix}= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$ There will be a non-trivial solution only if that matrix is NOT invertible, which means that its determinant must be 0.
That is, $\lambda$ is an eigenvalue only if $$\left|\begin{array}{cc}-2- \lambda & 4 \\ 4 & -8- \lambda \end{array}\right|= (-2- \lambda)(-8- \lambda)- 16= \lambda^2+ 10\lambda= \lambda(\lambda+ 10)= 0.$$ That is, the eigenvalues are 0 and -10. |
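A quick numerical confirmation (numpy):

```python
import numpy as np

A = np.array([[-2.0, 4.0],
              [ 4.0, -8.0]])
print(np.linalg.eigvals(A))   # approximately [0., -10.]
```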
Showing how the Jacobian connects volumes for change of coordinates | If we have vector-valued variables $\mathbf{u},\,\mathbf{v}$ of the same dimension $n$ (in your case $n=2$), the chain rule relating their infinitesimals is $\text{d}u_i=\sum_{j}J_{ij}\text{d}v_j$ with $J_{ij}:=\frac{\partial u_i}{\partial v_j}$, the entries of the Jacobian matrix $J$. In vector notation, we can just write $\text{d}\mathbf{u}=J\text{d}\mathbf{v}$. Some function $f(J)>0$ should exist with $\text{d}^n\mathbf{u}=f(J)\text{d}^n\mathbf{v}$. If we transform twice, say $\text{d}\mathbf{u}=J_1\text{d}\mathbf{v},\,\text{d}\mathbf{v}=J_2\text{d}\mathbf{w}$, it becomes clear we need $f(J_1 J_2)=f(J_1) f(J_2)$. This, combined with the multilinearity of $f$ (because if the $u_i$ are multiplied by constants we'd expect to absorb these factors into $f$), proves $f=|\det J|$. Indeed, if $f$ were allowed a sign in such a way it changes sign when swapping two variables, we'd have enough properties to prove $f=\det J$. |
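A numerical illustration of $\text{d}^n\mathbf{u}=|\det J|\,\text{d}^n\mathbf{v}$ for $n=2$ (a sketch; the map $u$ and the base point are arbitrary choices of mine): the image of a tiny coordinate square is, to first order, a parallelogram of area $|\det J|\,h^2$.

```python
import numpy as np

def u(v):                                  # an arbitrary smooth map R^2 -> R^2
    return np.array([v[0]**2 - v[1], v[0] * v[1]])

v0, h = np.array([1.5, 0.5]), 1e-5
J = np.array([[2 * v0[0], -1.0],
              [v0[1],      v0[0]]])        # Jacobian of u at v0
e1 = u(v0 + np.array([h, 0])) - u(v0)      # images of the square's edges
e2 = u(v0 + np.array([0, h])) - u(v0)
area_image = abs(e1[0] * e2[1] - e1[1] * e2[0])   # parallelogram area
print(area_image / h**2, abs(np.linalg.det(J)))   # nearly equal (both ~5)
```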
Twisted Cech cohomology | Local coefficient homology is a particular case of sheaf cohomology (cohomology of a locally constant sheaf). So, even if I'm not sure I understood precisely your wishes, I think it is possible that what you're trying to do (twisting sheaf cohomology) really amounts to consider sheaf cohomology for something like a tensor product $\mathcal F \otimes \mathcal L$, where $\mathcal L$ is the locally constant sheaf corresponding to your local coefficients. If that's true, you may be happy with the genuine Čech cohomology of that particular sheaf.
Two classical references for sheaf cohomology for topologists are Iversen's Cohomology of Sheaves and Dimca's Sheaves in Topology. The latter is considerably terser than the former, but easier to find. In particular, those books deal with a general expression of Poincaré duality in sheaf cohomology (Poincaré-Verdier duality) which requires twisting (by the orientation local coefficient system $\mathcal L_{\textrm{or}}$) so they will spend some time explaining things quite close to the things you seem to dream about.
A quite ancient reference from the Séminaire Cartan by Frenkel alludes to a construction which seems related, but I must say it seems quite opaque to me.
(Even if it's only distantly related to your question, I'd like to take this opportunity to quote two books which break the omertà on local coefficients: G.W. Whitehead's Elements of homotopy theory and Davis & Kirk's Lectures on Algebraic Topology. Neither of their account on this topic is exhaustive or perfect, but at least, they do not content themselves with a two-line remark.) |
Finding $\left\lfloor\frac{a}{2} \right\rfloor \mod p$ knowing $a \mod p$ and $a \mod 2$? | By knowing $a\!\!\mod p$ and $a\!\!\mod 2$, we know $a\!\!\mod 2p$ by the Chinese Remainder Theorem.
As such, let $a=2pq+r$, where $q$ and $r$ are the quotient and remainder.
$$\left\lfloor\frac{a}{2}\right\rfloor=\left\lfloor\frac{2pq+r}{2}\right\rfloor=pq+\left\lfloor\frac{r}{2}\right\rfloor\equiv\left\lfloor\frac{r}{2}\right\rfloor\mod p$$ |
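A sketch of the recipe in Python (the function name and test values are mine; $p$ is assumed to be an odd prime, so $a \bmod p$ and $a \bmod p + p$ have opposite parities):

```python
def floor_half_mod_p(a_mod_p, a_mod_2, p):
    # CRT for the coprime moduli p (odd) and 2: the representative r = a mod 2p
    # is whichever of a_mod_p, a_mod_p + p has the right parity.
    r = a_mod_p if a_mod_p % 2 == a_mod_2 else a_mod_p + p
    return (r // 2) % p    # floor(a/2) = p*q + floor(r/2), so = floor(r/2) mod p

p, a = 11, 12345
assert floor_half_mod_p(a % p, a % 2, p) == (a // 2) % p
```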
Representation of matrix A = BC | Let,
$A=\begin{bmatrix}
A_1&A_2&A_3&\dots&A_n
\end{bmatrix}$
Here $A_i(1\le i\le n) $ are the columns of $A$.
Let $A_{i_{k}}(1\le k\le r)$ be the first r independent columns of A.
Let $B=\begin{bmatrix}
A_{i_1}&A_{i_2}&A_{i_3}&\dots&A_{i_r}
\end{bmatrix}$
Let $C=\begin{bmatrix}
C_{1}&C_{2}&C_{3}&\dots&C_{n}
\end{bmatrix}$ ($C_i$ are the columns of C)
$A_1=A_{i_1}$ (as we can always select a set of $r$ independent vectors, from a set of $n$ vectors spanning an $r$-dimensional space, such that a particular nonzero vector $v$ is among them).
Now I will show that if $BC$ equals $A$, then $C$ must be in row reduced echelon form.
We must have $C_1=\begin{bmatrix}
1&0&0&\dots&0
\end{bmatrix}^t$
$A_2$ is either $kA_1$ for some $k\in F$, or $A_2$ and $A_1$ are linearly independent, in which case $A_2=A_{i_2}$ (because $A_{i_2} \in$ the set of first $r$ independent columns of $A$).
With a similar argument one can establish that $A_{p}=\sum_{j=1}^{k}a_jA_{i_j}$ (with some $a_j\ne 0$) or $A_p=A_{i_{k+1}}$. Here $\{A_{i_1},A_{i_2}\dots A_{i_k}\}$ is the minimum spanning subset of $\{A_1,A_2,\dots , A_{p-1} \}$ (by a minimum spanning subset $S$ of $T$ I mean that $T\subseteq \text{span} (S)$, $S\subseteq \{A_{i_1},A_{i_2}\dots A_{i_r}\}$ and there is no $W\subset S$ such that $T\subseteq \text{span} (W)$).
From this it easily follows that $C$ is in row reduced echelon form. |
Show that the only solutions to the congruence $x^2 \equiv 1\pmod {p^n}$ are $x \equiv \pm 1 \pmod {p^n}$ | Hint: as $p$ is odd, if $p$ divides $x + 1$, then $p$ does not divide $x - 1$ (and vice versa). We are given that $p^n$ divides $x^2 -1 = (x + 1)(x - 1)$ and so $p$ divides one of $x + 1$ or $x - 1$. But then $p^n$ must divide whichever of $x + 1$ and $x - 1$ that $p$ divides, i.e., $x \equiv \pm 1 \pmod {p^n}$. |
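A brute-force check of the claim for a few small odd prime powers (Python):

```python
for p, n in [(3, 2), (5, 3), (7, 2), (11, 1)]:
    m = p ** n
    sols = [x for x in range(m) if x * x % m == 1]
    assert sols == [1, m - 1], (p, n, sols)
print("only x = +-1 mod p^n, as claimed")
```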
What is the dimension for the following basis? | If all vectors of $W$ are solutions of the given equation, then you can find that $x_2=2x_1+2x_3+4x_4$, thus the form of solution is $$\begin{pmatrix}x_1\\2x_1+2x_3+4x_4\\x_3\\x_4\end{pmatrix}=x_1\begin{pmatrix}1\\2\\0\\0\end{pmatrix}+x_3\begin{pmatrix}0\\2\\1\\0\end{pmatrix}+x_4\begin{pmatrix}0\\4\\0\\1\end{pmatrix}$$i.e $\displaystyle W=\text{Sp}\left\{\begin{pmatrix}1\\2\\0\\0\end{pmatrix},\begin{pmatrix}0\\2\\1\\0\end{pmatrix},\begin{pmatrix}0\\4\\0\\1\end{pmatrix}\right\}$, thus $\dim(W)=3$. |
Can this monstrous expression be simplified? | It's often much better to simplify as soon as possible. In this case, the parameterized $x$ and $y$ values at the specified value $\gamma_0 := -4(C-X)\csc(2\alpha)/Z$ reduces fairly nicely:
$$\begin{align}
x &=\tfrac12 mZ \left(\;\cos\gamma_0 + \gamma_0 \cos\alpha \sin(\alpha+\gamma_0)\;\right) \tag1\\[4pt]
y &=\tfrac12 mZ \left(\;\sin\gamma_0 - \gamma_0 \cos\alpha \cos(\alpha+\gamma_0)\;\right) \tag2
\end{align}$$
From there, we easily get
$$x^2+y^2 = \tfrac14m^2Z^2\left(\;1 + \gamma_0 \sin 2 \alpha + \gamma_0^2\cos^2\alpha\;\right) \tag3$$
(Conveniently, there are no $\gamma$s inside the trig functions.)
If you like, you can expand $1=\sin^2\alpha+\cos^2\alpha$ and $\sin2\alpha=2\sin\alpha\cos\alpha$, regroup, and write
$$x^2+y^2 =
\tfrac14m^2Z^2\left(\;\left(\gamma_0\cos\alpha+\sin\alpha\right)^2+\cos^2\alpha\;\right) \tag4$$
At this point, expanding $\gamma_0$ explicitly to $-4(C-X)\csc(2\alpha)/Z$ doesn't seem to give anything particularly pretty, so I'll leave that to the reader. $\square$
As a bit of a prequel, just substituting $r_d\to r_p+mX-mC$and $r_p\to mZ/2$ into OP's parametric equations gives the simplification
$$\begin{align}
x &= \tfrac12 mZ \left(\;
\cos\gamma + \gamma \sin\gamma +\gamma_0 \sin\alpha\cos(\alpha+\gamma)
\;\right) \tag{0.1}\\[4pt]
y &= \tfrac12 mZ \left(\;
\sin\gamma - \gamma \cos\gamma +\gamma_0 \sin\alpha \sin(\alpha+\gamma)
\;\right) \tag{0.2}
\end{align}$$
with $\gamma_0$ as above. From these, we get
$$x^2 + y^2 = \tfrac14 m^2Z^2 \left(\;
1 + \gamma_0 \sin2\alpha + \gamma^2\cos^2\alpha + (\gamma-\gamma_0)^2 \sin^2\alpha \;\right) \tag{0.3}$$
When $\gamma=\gamma_0$, we have that $(0.1)$, $(0.2)$, $(0.3)$ reduce to $(1)$, $(2)$, $(3)$. |
Evaluate Flux of Field through Open Cylinder | The integrand is the normal vector of the curved surface of the cylinder, and is orthogonal to the normal vector of the flat surface; thus the integral is simply the surface area of the curved surface. |
What is the difference between confidence interval and the width of confidence interval? | Is the difference between the upper limit and the lower limit of the interval supposed to be less than 0.5?
Yes, that is exactly what they are asking for. Or, if you read carefully, it should be only a little bit bigger than $0.5$, not necessarily smaller. Whatever they mean by that.
The confidence interval is an interval, with a lower and upper bound. The width of the confidence interval is a single real number, and the difference between the two bounds. |
Calculating the Net Present Value (NPV) | Hint:
$\frac {200} {1.1^i}$ is the general term of a geometric sequence. |
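A sketch in Python of where the hint leads (the horizon $n=10$ and the absence of an initial outlay are my illustrative assumptions, not from the problem): sum the geometric series directly and via its closed form.

```python
n = 10                                     # assumed horizon, for illustration
direct = sum(200 / 1.1**i for i in range(1, n + 1))
r = 1 / 1.1                                # common ratio of the geometric series
closed = 200 * r * (1 - r**n) / (1 - r)    # closed form of sum_{i=1}^n 200 r^i
print(direct, closed)                      # both ~1228.91
```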
Does pointwise convergence against a non continuous function imply non uniform convergence | Yes. It's a theorem that if a sequence of continuous functions converges uniformly, then the limiting function is continuous. As the contrapositive, if the limiting function is not continuous, the convergence cannot be uniform.
If the functions $f_n$ are not continuous, then they can certainly converge uniformly to a non-continuous function. (For example, all the $f_n$'s could be the same non-continuous function.) |
Solve $2^x+2^{-x} = 2$ | Elucidate the problem by using the substitution $u = 2^x$, then you have $$u + \frac{1}{u} = 2$$
Multiply throughout by $u \neq 0$ to get $$u^2 +1 = 2u \iff u^2 - 2u + 1 = 0$$
This is an easy quadratic to solve, you should get $u = 1$ and hence you need only solve $2^x = 1 \iff x = 0$. |
Finding the number of elements of a quotient group. | Easiest to do it in short steps.
You probably know that $\mathscr O^\times/(1+\pi\mathscr O)$ is a group of order $q-1$, and that this group may also be seen as $\left(\mathscr O/\pi\mathscr O\right)^\times$. That’s the first step, which you show by establishing an isomorphism between this group and $\kappa^\times$, where $\kappa$ is the residue field.
Second step is what you use in an induction, namely that the multiplicative group $(1+\pi^m\mathscr O)/(1+\pi^{m+1}\mathscr O)$ is isomorphic to the additive group $\kappa^+$. Do you see the isomorphism? It is to take $1+\pi^mz$ and send it to $z\pmod{\pi\mathscr O}$. Of course you have to verify that the map really is a homomorphism, with kernel $1+\pi^{m+1}\mathscr O$. I’ll leave the details to you, but get back in a comment if you have trouble. |
Let A be a nonempty subset of R. Define -A = {-a: a ∈ A} and prove bounding relations between A and -A | Hint: use $x \ge M \implies -x \le -M$. It's simple. |
Absolute convergence of a random variable in a countable space | Quirky. This should be starting like this:
$$\sum_{\omega}|X(\omega)|p_{\omega} = \sum_{|X(\omega)| < 1}\color{red}{|}X(\omega)\color{red}{|}p_{\omega} + \sum_{|X(\omega)| \geq 1}\color{red}{|}X(\omega)\color{red}{|}p_{\omega} \leq \ldots$$
(with the rest as is). The splitting is deliberate - it just allows us to upper-bound each sum as is done.
Namely, $$\sum_{|X(\omega)| < 1}|X(\omega)|p_{\omega} \leq \sum_{|X(\omega)| < 1}p_{\omega} \leq \sum_{\omega}p_{\omega}$$ and, as $|x| \leq x^2$ when $|x| \geq 1$,
$$\sum_{|X(\omega)| \geq 1}|X(\omega)|p_{\omega} \leq \sum_{|X(\omega)| \geq 1}(X(\omega))^2 p_{\omega} \leq \sum_{\omega}(X(\omega))^2 p_{\omega}.$$ |
Quantification over the empty set | When you see a quantification like ‘$\forall \phi x : \psi x$’, this is shorthand for ‘$\forall x : \phi x \to \psi x$’. Since ‘$x \in \emptyset$’ is false for all ‘$x$’, the antecedent of ‘$x \in \emptyset \to \phi x$’ will always be false, thus the entire conditional statement is always true.
Edit:
In the case of existential quantification, the statement ‘$\exists \phi x : \psi x$’ is shorthand for ‘$\exists x : \phi x \land \psi x$’, so in this case, the conjunction ‘$x \in \emptyset \land \phi x$’ will always be false. |
Find the value of f(343, 56)? | A start: Note that $343=7\cdot 49$, and $56=7\cdot 8$. First find $f(49,8)$.
More: If $a$ and $b$ are relatively prime, draw an $a\times b$ chessboard, and think of the chessboard squares you travel through as you go from the beginning to the end. |
Apollonius’ Identity inner product space | In a Hilbert space you can use the Parallelogram law:
$$\|a+b\|^2 + \|a-b\|^2 = 2(\|a\|^2 + \|b\|^2)$$
So:
$$\begin{align}
&\|z - \frac{x+y}{2} \|^2\\
&= \|\frac{z}{2} - \frac{x}{2} + \frac{z}{2} - \frac{y}{2}\|^2\\
&= 2\left(\|\frac{z}{2} - \frac{x}{2}\|^2 + \|\frac{z}{2} - \frac{y}{2}\|^2\right) - \|\frac{y}{2} - \frac{x}{2}\|^2\\
&= \frac12\left(\|z - x\|^2 + \|z - y\|^2\right) - \frac{1}{4}\|y - x\|^2\\
&\square\end{align}$$ |
Proving that $\sin(x)$ is continuous at $0$ | Your proof is correct, but it is based on the inequality $|\sin x| \leq |x|$ for $|x|$ "small". If you are allowed to use this fact (which is not trivial, indeed), then your proof is rigorous.
Finally, since $\lim_{x \to 0} x =0$ is of course evident, you can deduce that $\lim_{x \to 0} x^2 = \lim_{x \to 0} x \cdot x = 0 \cdot 0 = 0$. |
let G be a finite group and let R be a field, show that RG is not an integral domain | (Before we begin, let's note that $G$ has to be nontrivial, otherwise the group ring is isomorphic to the field, and is an integral domain.)
One good utility to have when working with group rings is this observation:
If $H$ is a finite subgroup of order $n$ in a group $G$, and we write $s(H)=(\sum_{h\in H} h)$, then $s(H)^2=n\cdot s(H)$ in any group algebra $R[G]$.
Now, the integer $n$ is either a unit of $R$ or a zero divisor depending on its relationship to the characteristic of $R$. This leads to two cases:
If the integer $n$ is a unit in $R$, then you can divide both sides by $n^2$ to get $(\frac{s(H)}{n})^2=\frac{s(H)}{n}$. That is, $\frac{s(H)}{n}$ is an idempotent.
Otherwise, there is some integer $m$ such that $m\cdot s(H)^2=m\cdot n\cdot s(H)=0$.
You will see the utility of the first bullet once you prove this: an idempotent element of an integral domain must be $0$ or $1$.
The second one I think you will understand how to use right away.
Now, take a nontrivial subgroup $H$ of your $G$ and apply what lies above. Just make sure you convince yourself why the interesting quantities above are not zero or the identity, and you have a complete proof.
Conveniently, this also extends the conclusion slightly to possibly infinite groups which have a nontrivial finite subgroup. |
Mathematics to compare points | I would have to think a little longer about how to determine which side of a general curve some point lies, but with the line it is very straightforward.
The following approach only holds if the given set of points is entirely collinear. Pick any two distinct points $(x_1,y_1)$ and $(x_2,y_2)$ such that $x_1 \neq x_2$; then the line is $f(x) = \left( \dfrac{y_2- y_1}{x_2 - x_1}\right)(x-x_1) + y_1$, where the fraction is the slope. Once you have the line, consider the point $(x_3,y_3)$ which you are to determine lies on the left or the right of $f(x)$.
Find the value of $a = f(x_3)$. If $a > y_3$, your point is "below" $f(x)$ which I would interpret as to the right of $f(x)$ if you were riding the line $f(x)$ from the negative $x$-axis to the positive $x$-axis.
If $a = y_3$, your point is on the line.
If $a < y_3$, your point is "above" $f(x)$ and I would consider it to be to the left of $f(x)$ for the same reasons mentioned in the twice-previous paragraph.
The only exception, as hinted with finding distinct points $(x_1,y_1)$ and $(x_2,y_2)$, is when $x_1 = x_2$. If the points given to you are collinear but they all possess the same $x$-value, this means you have a line of the form $x = c$ which has no defined slope (among other properties). But this would be the easiest case because then you just compare your $(x_3,y_3)$'s first coordinate with the value of $c$ and you can find which side of $x=c$ it lies on.
This reminds me of Project Euler's Problem 102. I would consider using an implementation of the notion of a cross product for a more general case involving some arbitrary curve. Just a guess at an approach: isolate the two nearest points to your mystery point. Take vector 1 to be the line segment connecting the 2 points, and take vector 2 to be the line connecting the first given point and your mystery point. Analyzing the sign of the cross (vector) product involving those points (assuming you adhere to some convention like the Right-Hand-Rule) will tell you which side of the curve your mystery point lies on. [Disclaimer: I haven't tried this yet but it's an approach worth considering/revising/improving] |
Cross product with two orthogonal vectors being zero implying a null vector | Do a change of basis. Consider the basis $B=\{e_1,e_2,e_3\}$, with $e_1=\vec B$, $e_2=\vec C$, and $e_3=\vec B\times\vec C$. Then you know that $\vec A\times e_1=\vec A\times e_2=0$. If $\vec A=ae_1+be_2+ce_3$, then this means that $a=b=c=0$, since $\vec A\times e_1=(0,c,-b)_B$ and $\vec A\times e_2=(-c,0,a)_B$. |
How to find the point of intersection of four parametric equations | Maybe you could see the first parametric equation as:
$$x^2+y^2=1$$
And the second one as:
$$(x-2)^2+(y-3)^2=16$$ |
sech(x) inverse for x< 0 | Draw graphs.
$\operatorname{sech}(x)$ is always $>0$ and $\le 1$.
Interchange the $x,y$ axes for the inverse function.
$\operatorname{sech}^{-1}(x)$ requires $x>0$ as a real function. It does not exist as a real function for $x<0$. |
How to prove this result about connectedness? | HINT: suppose $C$ and $D$ are open in $X$ and form a separation of $Y \cup A$. Then, since $Y$ is connected, it must lie entirely in one of $C$ or $D$, suppose it is $C$. But then $D \cap A$ and $C \cup B$ form a separation of $X$.
What's left to prove is that this works even when $B$ and $A$ are not open in $X$. |
Count of 6-digit numbers divisible by 6 but not by 9 | It's a trick question. The sum of digits of every occurring number is $1+2+3+7+8+9=30$, which is not divisible by $9$. Thus, the condition of not being divisible by $9$ is vacuous (recall that a number is divisible by $9$ if and only if its sum of digits is divisible by $9$). Also, since $30$ is divisible by $3$, every occurring number is divisible by $3$. The only restriction on the number is that $2$ or $8$ has to be the last digit to fulfill the divisible-by-$2$ condition. There are $2\cdot 5!=240$ possibilities for this. |
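A brute-force confirmation over all permutations of the digits (Python):

```python
from itertools import permutations

count = sum(1 for p in permutations('123789')
            if int(''.join(p)) % 6 == 0 and int(''.join(p)) % 9 != 0)
print(count)   # 240
```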
Modular functions and elliptic functions | I don't really understand what this means; modular functions and elliptic functions don't even have the same domain, although they are closely related.
A modular function is a meromorphic function on the upper half-plane $\mathbb{H}$ which is invariant under the action of the modular group $\Gamma = \text{SL}_2(\mathbb{Z})$. The corresponding quotient can be thought of as the moduli space of elliptic curves in a certain way. In other words, a modular function is something like an invariant of elliptic curves.
An elliptic function, on the other hand, is a meromorphic function on the complex plane $\mathbb{C}$ which is invariant under the action of a lattice $\Lambda = \mathbb{Z} \omega_1 \oplus \mathbb{Z} \omega_2$. The corresponding quotient can be thought of as a particular elliptic curve. In other words, an elliptic function is simply a function on a particular elliptic curve.
Your confusion probably stems from the fact that there are elliptic functions, such as the Weierstrass $\wp$ function, which can be uniformly defined for different elliptic curves, and hence which also allow us to define modular functions. This is because the Weierstrass function has more structure than just being an elliptic function; it is also a Jacobi form.
The analogy to trigonometric functions seems to fall short: elliptic functions are analogous to trigonometric functions, but there isn't a good notion of "modular function" here since the moduli space of circles is a point. |
How many topologies can be defined on a finite set with $n$ element? | This problem seems nontrivial. Any papers I've found on the topic suggest that this is still an open problem. See the following question on MathOverflow:
You might also see the following paper:
The Number of Topologies on a Finite Set
Or this paper:
Enumeration of Finite Topologies |
find $\frac{\partial f(u(x(t),y(t)),v(x(t),y(t)))}{\partial t} $ | My advice for these types of questions is that you learn the multivariable chain rule using the derivative matrix. This way, as long as you follow the matrix multiplication, you'll get the right answer.
In this case, we have three nested functions. Define $g(x,y)=\langle u(x,y),v(x,y) \rangle$ and $h(t) =\langle x(t),y(t)\rangle$, and we can write the function as
$$
z=f(u(x(t),y(t)),v(x(t),y(t))) = f\circ g\circ h (t)
$$
Now, the matrix derivative chain-rule gives us the following:
$$
\begin{align}
\frac{d f}{d t} &= D (f\circ g\circ h)(t)\\
&= (Df)(g\circ h(t))\,D (g\circ h)(t) \\
& =(Df)(g\circ h(t))\,(Dg)(h(t))\,(Dh)(t)\\
& = \nabla f(g\circ h(t)) \cdot
\left(\frac{\partial u,v}{\partial x,y}
\begin{bmatrix}
x'(t)\\
y'(t)
\end{bmatrix}
\right)
\end{align}
$$
Where $\nabla$ is the gradient, $\cdot$ is the dot product, and $\frac{\partial u,v}{\partial x,y}$ is the Jacobian matrix, which is matrix-multiplied by the column vector on its right.
I hope that helps.
When you expand that product, it looks like this:
$$
\frac{d f}{d t} =\frac{\partial f}{\partial u}
\left(
\frac{\partial u}{\partial x}\frac{dx}{dt} +
\frac{\partial u}{\partial y}\frac{dy}{dt}
\right) +
\frac{\partial f}{\partial v}
\left(
\frac{\partial v}{\partial x}\frac{dx}{dt} +
\frac{\partial v}{\partial y}\frac{dy}{dt}
\right)
$$ |
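A sympy sanity check of that expanded formula, with illustrative choices of $f$, $u$, $v$, $x$, $y$ (any smooth functions would do):

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.cos(t), t**2                        # x(t), y(t)
u, v = x * y, x + y                           # u(x,y) = xy, v(x,y) = x + y
f = u**2 + sp.sin(v)                          # f(u,v)
direct = sp.diff(f, t)                        # differentiate the composite

# rebuild df/dt from the chain-rule expansion, using placeholder symbols
U, V, X, Y = sp.symbols('U V X Y')
F, u_xy, v_xy = U**2 + sp.sin(V), X * Y, X + Y
chain = (sp.diff(F, U) * (sp.diff(u_xy, X) * sp.diff(x, t)
                          + sp.diff(u_xy, Y) * sp.diff(y, t))
         + sp.diff(F, V) * (sp.diff(v_xy, X) * sp.diff(x, t)
                            + sp.diff(v_xy, Y) * sp.diff(y, t)))
chain = chain.subs({U: u, V: v, X: x, Y: y})
assert sp.simplify(direct - chain) == 0
```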
How to solve for the function $f(x)$ such that the area underneath is always equal to the arclength? | So you got to
$$
y=\sqrt{1+y'^2}\implies 1=y^2-y'^2
$$
This is the equation for a hyperbola, so use hyperbolic coordinates $y=\cosh(u(t))$, $y'=\sinh(u(t))$. The derivative of the first equation combined with the second then implies for $y'\ne0$ that $u'(t)=1$, so $y(t)=\cosh(t+C)$.
The other case $y'(t)=\sinh(u(t))=0$ gives $y=1$, which is also a solution. |
Finding a linear map. | It is just a matter of convention. When we say "give a matrix wrt to a pair of bases" we mean, a matrix $M$ so that $w = Mv$ whenever $v$ and $w$ are column vectors of some vector $x$ and the corresponding vector $\phi(x)$ in the bases given. There's nothing actually wrong with your reasoning, you are just using row vectors instead of the column vectors. |
Image of Borel set under continuous and injective map | We want to show that $$\mathscr{B} (\mathbb{R}^n) \subset \mathscr{A}:= \{A \subset \mathbb{R}^n : f(A) \in \mathscr{B}(\mathbb{R}^n) \}$$
By the definition of Borel $\sigma$-algebra it is enough to show that $\mathscr{A}$ is a $\sigma$-algebra and it contains all open sets.
Claim 1: $\mathscr{A}$ is a $\sigma$-algebra.
Proof: Indeed $f(\emptyset)=\emptyset$ and so $\emptyset\in\mathscr{A}$. Also write $\mathbb{R}^n = \bigcup_{m=1}^{\infty} I_m$ where $I_m$ is the $n$-dimensional cube with edge length $m$ centered at the origin; each $I_m$ is a compact subset, and so
$f(\mathbb{R}^n) = \bigcup_{m=1}^\infty f(I_m)$. Now $f(I_m)$ is compact, hence closed, and so Borel measurable; we conclude that the union is Borel measurable and so $\mathbb{R}^n\in\mathscr{A}$.
Now if $A\in\mathscr{A}$ then $f(\mathbb{R}^n\backslash A) = f(\mathbb{R}^n)\backslash f(A)$ (by injectivity) and so $\mathbb{R}^n\backslash A \in \mathscr{A}$.
Finally if $A_1,A_2,...\in\mathscr{A}$ then $f(\bigcup A_i ) =\bigcup_i f(A_i)$ and so $\bigcup A_i \in \mathscr{A}$. We conclude that $\mathscr{A}$ is a $\sigma$-algebra.
Claim 2: The image of any open set $U$ is Borel measurable.
Proof: This is not that hard; see here: Every open set in $\mathbb{R}^n$ is the increasing union of compact sets.
Every open set is a countable union of compact sets and so you can argue as we did with $\mathbb{R}^n$.
Thus by Claims 1 and 2 we have that $\mathscr{A}$ is a $\sigma$-algebra and it contains all open sets. Since $\mathscr{B}(\mathbb{R}^n)$ is the minimal $\sigma$-algebra with that property, we have the desired inclusion. |
For any group $\mathrm{ord}( a_1\circ a_2\circ\dots\circ a_{n-1}\circ a_n)=\mathrm{ord}(a_2\circ a_3\cdots\circ a_{n-1}\circ a_n\circ a_1)$ | Let's prove something more general: $ord(xy) = ord(yx)$.
Suppose $ord(xy)=n$. Consider $(yx)^{n+1}=y(xy)^nx = yx$. This means that $(yx)^n=1$ and so $ord(yx)\le n=ord(xy)$. By symmetry, $ord(yx)\ge ord(xy)$ and so $ord(xy) = ord(yx)$.
Now apply this to $x=a_1$ and $y=a_2 \circ \cdots \circ a_n$.
Edit:
A simpler proof that $ord(xy) = ord(yx)$ is to note that $xy$ and $yx$ are actually conjugate: $yx = z^{-1} (xy) z$, for $z=x$. And conjugate elements have the same order (the proof being along the same lines as above). |
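A quick check of $ord(xy)=ord(yx)$ in a concrete group, using sympy's permutation groups (the two permutations are arbitrary picks in $S_4$):

```python
from sympy.combinatorics import Permutation

x = Permutation([1, 2, 0, 3])        # a 3-cycle in S_4
y = Permutation([1, 0, 3, 2])        # a product of two transpositions
assert (x * y).order() == (y * x).order()
print((x * y).order(), (y * x).order())
```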
Laplace's Method (Integration) | Hint:
Where does $h(t)$ achieve its maximum? |
How is this defined? | In ring theory we usually define $\Bbb Z[c]$ to be the set of all
polynomial expressions in $c$ with integer coefficients, that is
all $a_0+a_1c+a_2c^2+\cdots+a_rc^r$ with $a_0,\ldots,a_r\in\Bbb Z$. Here since $(\sqrt 5)^2=5\in\Bbb Z$ we have
$$\Bbb Z[\sqrt5]=\{a+b\sqrt 5:a,b\in\Bbb Z\}.$$ |
A container is 4/5 full. After 3 liters of its contents are poured out | $\frac{4}{5}x-3=\frac{3}{4}x$
Hence $\frac{1}{20}x=3$, then $x=60$. So you have to refill with $\frac{60}{4}=15$. |
Is primitive recursion necessary for recursive functions? | All the initial functions $0, x+1$ and $I^n_k(x_1, \dots, x_n)$ essentially depend (at most) on one variable. Assume that we have $f(x_1, \dots, x_n), g_1(x_1, \dots, x_m), \dots, g_n(x_1, \dots, x_m)$ and they depend essentially on $x_{i_0}$ and $x_{i_1}, \dots, x_{i_n}$ respectively. Then their composition
$$f(g_1(x_1, \dots, x_m), \dots, g_n(x_1, \dots, x_m))$$
depends essentially only on the essential variable of $g_{i_0}$, that is $x_{i_{i_0}}$.
Assume $g(x_1, \dots, x_n, y)$ depends essentially only on one variable and let $$f(x_1, \dots, x_n) = \mu y[g(x_1, \dots, x_n, y) = 0].$$
It is clear that in this case $f$ also depends on the same variable (or doesn't depend on its variables at all, if $g$ depends only on $y$).
The above shows that any function generated by using composition and minimalization from the initial functions depends essentially only on one variable. Hence, for instance, $x + y$ can't be generated this way. |
Hitting time of a Markov chain. | For any sequence $S=(s_1,\dots,s_k)$ of intermediate states, $0<s_1<\dots<s_k<m$, the probability that you travel from 0 to $m$ via $S$ is
$$2^{-s_1} 2^{-(s_2-s_1)}\cdots 2^{-(s_k-s_{k-1})} 2^{-(m-1-s_k)} = 2^{-(m-1)}.$$
Therefore, since there are ${m-1}\choose{k}$ ways of choosing $S$ with length $k$, the probability that it takes time $k+1$ to travel from 0 to $m$ is $2^{-(m-1)} {m-1\choose k}$, and the expected time is
$$
\sum_{0\le k\le m-1} 2^{-(m-1)} (k+1) {m-1\choose k}
$$
$$=
\sum_{0\le k\le m-1} 2^{-(m-1)} {m-1\choose k}+
\sum_{1\le k\le m-1} (m-1) 2^{-(m-1)} {m-2\choose k-1}
$$
$$=
\frac{m+1}{2}.
$$ |
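A Monte-Carlo check of the $(m+1)/2$ answer (Python). Reconstructing the chain from the path probabilities above: from state $i$ the chain jumps to $i+G$ where $P(G=g)=2^{-g}$, capped at $m$, which gives $P(i\to m)=2^{-(m-1-i)}$ as used in the product.

```python
import random

def hitting_time(m):
    i, steps = 0, 0
    while i < m:
        g = 1
        while random.random() < 0.5:   # geometric jump: P(G = g) = 2^-g
            g += 1
        i = min(i + g, m)              # cap at the absorbing state m
        steps += 1
    return steps

m, trials = 10, 200_000
avg = sum(hitting_time(m) for _ in range(trials)) / trials
print(avg, (m + 1) / 2)               # should be close
```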
If $A$ is path connected, then $\bar A$ is path connected? | If $(x_1,\sin(1/x_1))$ and $(x_2,\sin(1/x_2))$ are two points of $A$, then they can be connected by the path
$$\gamma(t)=\left(t,\sin\left(\frac1t\right)\right),\;t\in[x_1,x_2]$$ |
Does there exist such holomorphic bijection? | Hint: Try a constant multiple of an appropriate branch of $\log$. |
How to prove that a function is class $C^{\infty}(\mathbb{R}).$ | Suppose $a=0$ and $b=1$, because the same reasoning applies in general. So we have $f(x)=\exp\left(\frac{1}{x(x-1)}\right)$ on $(0,1)$. Compositions of $C^\infty$ functions are $C^\infty$, so the only possible problems are at $0$ and $1$. Since $f(x)=f(1-x)$, it is enough to handle $0$.
Let's start by showing that $f'(0)=0$. The left-hand difference quotients at $0$ are $0$, so it is enough to show that $\lim\limits_{x\searrow 0}\frac{f(x)}{x}=0$. Note that $\frac{1}{x(x-1)}<-\frac{1}{x}$ for $x\in(0,1)$, so it is enough to show that $\lim\limits_{x\searrow 0}\frac{e^{-1/x}}{x}=0$. This is probably easiest to see by a change of variables, $t=1/x$, to yield $\lim\limits_{x\searrow 0}\frac{e^{-1/x}}{x}=\lim\limits_{t\to\infty}\frac{e^{-t}}{1/t}=\lim\limits_{t\to\infty}\frac{t}{e^t}=0$.
So far we know that $f'$ exists everywhere. Now as Davide Giraudo indicates in a comment, you can show by induction that in $(0,1)$, $f^{(k)}(x)=R_k(x)f(x)$ for some rational function $R_k$ having poles only at $0$ and $1$. I'll omit proof of this. Suppose that for some $k$ we know that $f^{(k)}(0)=0$. To show that $f^{(k+1)}(0)=0$, we need to show that $\lim\limits_{x\searrow 0}\frac{R_k(x)f(x)}{x}=0$. Note that $R_k(x)=\frac{g(x)}{x^{n}}$ for some integer $n$ and a function $g$ that is continuous at $0$. Again, since $f(x)<e^{-1/x}$, it suffices to show that $\frac{e^{-1/x}}{x^{n+1}}\to 0$ as $x\searrow 0$, and this is straightforward from the same change of variables $t=1/x$. (The base case $k=1$ wasn't really necessary to show since we are already given the base case $k=0$, but I thought it might be helpful to start with the simplest case and make it more explicit.)
Again, since $f(x)=f(1-x)$, this also gives $f^{(k)}(1)=0$ for all $k$, and this shows (in sketch form) why $f$ is infinitely differentiable. |
Which natural number predicates can be defined in Robinson arithmetic? | Any explicit formula for $E(x,y,z)$ [in the language of PA] is ridiculously complicated.
Well, actually it's not that complicated! It is a tedious but easy exercise, once you've grasped how to use Gödel's beta-function (which itself can be written in primitive notation in half a line or so) to write down a candidate $E$ in primitive notation in a few lines.
Exponentiation cannot be defined in Robinson arithmetic!
Well, it depends what you mean by defined! Different authors mean different things by "define" (one of the mildly annoying things in this area is that there is little consistency across textbooks here).
Certainly, the following holds for $Q$ (Robinson Arithmetic): there is a formula $E(x, y, z)$ such that
if $m^n = k$ then $Q \vdash E(\overline{m}, \overline{n}, \overline{k})$ and
for every $m, n$, $Q \vdash \exists!zE(\overline{m}, \overline{n}, z)$
where $\overline{m}$ is $Q$'s formal numeral for $m$. And plenty of authors will call that defining (even "strongly defining") exponentiation. Indeed, in this sense, $Q$ can (initially surprisingly) define all the primitive recursive functions.
But yes, $Q$ is absolutely lousy at proving generalizations, and in particular (as I think you are pointing out)
$Q \nvdash \forall x\forall y\exists!zE(x, y, z)$
and so, $Q$ can't show that exponentiation is (as they say) a provably total function. Is this what you mean by defining?
Probably so: still, before proceeding further with your questions, we perhaps need an explicit statement of what exactly you mean by "definition" here. |
Uniform convergence where speed of convergence varies with x? | The most extreme example I can construct is the sequence of functions $f_n:\mathbb{R}\to\mathbb{R}$ defined by
$$f_n(x)=\begin{cases}\tfrac{1}{n} & \text{ if }x=0,\\\\
0 & \text{ if }x\neq 0
\end{cases}$$
which converges uniformly to the zero function but where $x=0$ is the only laggard. |
Laplace Transform with Multiplied Terms | $F(s) = \mathcal{L}\{te^{3t}\cos3t\} $
$F(s) = \mathcal{L}\{t\cos3t\}_{s\to s-3}$
$F(s) = -\frac{d}{ds}\mathcal{L}\{\cos 3t\}_{s\to s-3}$
$F(s) = -\frac{d}{ds}\{\frac{s}{s^2+9}\}_{s\to s-3}$
$F(s) = -\bigg[\frac{1\cdot(s^2+9) - s\cdot(2s)}{(s^2+9)^2}\bigg]_{s\to s-3}$
$F(s) = -\bigg[\frac{-s^2 + 9}{(s^2+9)^2}\bigg]_{s\to s-3} = \frac{(s-3)^2-9}{((s-3)^2 + 9)^2}$
$$F(s) = \frac{s^2 - 6s}{(s^2-6s+18)^2}$$ |
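A sympy confirmation of the final transform:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
F = sp.laplace_transform(t * sp.exp(3*t) * sp.cos(3*t), t, s, noconds=True)
target = (s**2 - 6*s) / (s**2 - 6*s + 18)**2
print(sp.simplify(F - target))   # 0
```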
Solving a difference equation (Gambler's ruin) | The standard method to solve recurrence equations is to set as trial solution
$$a_i=x^i \implies x^i = \frac13x^{i+1} + \frac23 x^{i-1}\implies x =\frac13 x^2+\frac23\implies x^2-3x+2=0$$
then find $x_1$ and $x_2$; the general solution is
$$a_i = c_1x_1^i+c_2x_2^i$$
with $c_1$ and $c_2$ determined by the boundary conditions.
Solving, $x_1=1, x_2=2$.
Substituting, $a_i = c_1 + 2^ic_2$.
Using the boundary conditions and solving through, $c_1 = \frac{-1}{2^{i+2}-1}$ and $c_2 = \frac{1}{2^{i+2}-1}$.
Plugging these back into the general solution and rearranging:
$a_i = \frac{2^{i}-1}{2^{i+2}-1}$ |
Quadratic variation and stopping time. | For continuous local martingales $X_t$, it is known (theorem 3) that $X_t$ and $[X]_t$ have the same intervals of constancy almost surely; by restricting to the interval $[S,T]$ you get the result in that case.
Nevertheless, and contrary to what I said before editing the answer, I have found a simple counterexample for general local martingales (i.e. with jumps).
Let's take the (local) martingale $X_t= N_t-t$ (where $N_t$ is a Poisson process of intensity 1), and take $S=S_1$ the first jump time of $X$ and $T=S+1$ (here $S_i$ is the $i$-th jump time of $N_t$). Then the event $A=\{\omega\in\Omega \text{ s.t. } T(\omega)<S_2(\omega)\}$ has strictly positive probability equal to $e^{-1}$ (it is widely known that inter-arrival times for a Poisson process of intensity $\lambda$ follow an exponential law of parameter $\lambda$).
So on $A$, we have $[X]_S=1=[X]_T$, but $X_S=1-S$ and for $t\in[S,T]$ we have $X_t-X_S=(1-t)-(1-S)=S-t$ which is not constant but continuous of finite variation, and so $X_t$ isn't constant on the interval $[S,T]$ almost surely as $A$ has positive probability.
This shows that the conclusion is wrong in general case.
So I think that you omitted to mention that your problem was set in a continuous-process setting, as that is the only place where it is true.
Edit:
I felt that my preceding wrong proof was fishy; sorry for that.
Edit 2:
Thanks to Didier Piau for pointing out a clarification of the argument.
Best Regards
NB:
Notice that the result (in continuous process setting) is more general as it works for all continuous local martingales and not only for continuous strict local martingales. |
If $P(A) = P(B) = P(A\cup B)$, prove that $P((A \cap B^{c}) \cup (B\cap A^{c})) = 0$ | Lemma
Given $X$ and $Y$ events of the sample space $\Omega$, we have that
\begin{align*}
\textbf{P}(X) & = \textbf{P}(X\cap Y) + \textbf{P}(X\cap Y^{c})\\\\
\end{align*}
Proof
It suffices to note that $X = X\cap\Omega = X\cap(Y\cup Y^{c}) = (X\cap Y)\cup(X\cap Y^{c})$.
Solution
Based on the given assumption, we have that
\begin{align*}
\textbf{P}(A\cup B) & = \textbf{P}(A) + \textbf{P}(B) - \textbf{P}(A\cap B)\\\\
& = 2\textbf{P}(A) - \textbf{P}(A\cap B) = \textbf{P}(A)\\\\
& \Rightarrow \textbf{P}(A) = \textbf{P}(B) = \textbf{P}(A\cap B)
\end{align*}
Since the events $A\cap B^{c}$ and $B\cap A^{c}$ are mutually exclusive, one has that
\begin{align*}
\textbf{P}((A\cap B^{c})\cup(B\cap A^{c})) & = \textbf{P}(A\cap B^{c}) + \textbf{P}(B\cap A^{c})\\\\
& = \textbf{P}(A) - \textbf{P}(A\cap B) + \textbf{P}(B) - \textbf{P}(A\cap B) =0
\end{align*}
and we are done. |
Find the coordinates of T (point where tangent touches the circle). | Note that the given point $(6,-6)$, the center $(-5,4)$ and the tangent point $T(p,q)$ form a right triangle, which via Pythagorean theorem leads to
$$(p-6)^2+(q+6)^2+25= 11^2+10^2$$
Also, $(p,q)$ is on the circle
$$(p+5)^2+(q-4)^2=25$$
Solve the joint equations to obtain the two tangent points
$$(p,q)=(-\frac{90}{13},-\frac8{13}),\> (-\frac{10}{17},\frac{108}{17})$$ |
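A sympy check of the joint equations:

```python
import sympy as sp

p, q = sp.symbols('p q', real=True)
eqs = [(p - 6)**2 + (q + 6)**2 + 25 - (11**2 + 10**2),   # Pythagorean relation
       (p + 5)**2 + (q - 4)**2 - 25]                     # point on the circle
print(sp.solve(eqs, [p, q]))
# [(-90/13, -8/13), (-10/17, 108/17)]
```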
Proving that a function is square integrable | For the second function, split the integral into two integrals for $-\infty<x<-\delta$ and $\delta<x<\infty$, which integrates to $\frac{2}{3\delta^3}$, which is unbounded as $\delta\rightarrow0$, so this function is not square-integrable over the real line.
For the first function, for a fixed value of $\delta$, the integral of both tails is bounded above by the integral over the same range of the second function, which is finite. The integral over $-\delta<x<\delta$ is bounded above by $2\delta$, because $|f(x)|^2\le1$, so the integral over the real line of the squared modulus of the second function is finite.
For the third function, use similar reasoning to the second function (can someone make this rigorous?) |
Behaviour of a continous function $f \text{ : } \lim_{n \to \infty} f(\frac{1}{2n}) = 1$ and $\lim_{n \to \infty} f(\frac{1}{2n+1}) = 2$ | “Please help me by providing an answer” is not really how things work around here. But you have a conjecture worth exploring:
intuitively it seems to me that $\nexists \lambda \in (1,2) \text{ such that } \{c \in (0,1) | f(c) = \lambda\}$ is finite
To unravel the negations and quantifiers, this is equivalent to: For all $\lambda\in(1,2)$, the set $\{c \in (0,1) \mid f(c) = \lambda\}$ is infinite. Focusing in on zero, it's sufficient to show: For all $\lambda \in (1,2)$ and $\epsilon > 0$, there exists $c$ such that $0 < c < \epsilon$ and $f(c) = \lambda$.
And I think you can prove that with your observations (i)–(iii). |
polar coordinates of Gaussian Distribution with non zero mean | If you mean a polar coordinate system with respect to the origin, then the result is a complicated mess. However, expressed in a polar coordinate system with respect to the point $(\mu_x,\mu_y)$, your third expression is again equal to your second expression, where $r$ now stands for the distance from the point $(\mu_x,\mu_y)$.
P.S.: I suggest taking more care to use terms precisely. These expressions are neither distributions, nor coordinates, nor equations; they're normalization integrals over distributions expressed in certain coordinates. |
Show that in every tree, there is a vertex $v$ $d(v) \geq 2$ that is adjacent to $k \geq d(v) -1$ leaves | Consider a path $x_0 - x_1 - x_2 - x_3$, with $x_0, x_3$ as leaves. You're looking at $x_1$. Sure enough, you find a node adjacent to $x_1$ which is not a leaf, that is $x_2$. But you can't take this, 'cause then your new "path" would no longer be a path.
I guess that makes your proof incorrect.
But you can continue like this. You take $x_1$. That is certainly adjacent to a non-leaf node. We aim to prove that there is exactly one, that is $u = x_2$. That would prove the result. But suppose there were two, $u, u'$. Now since this is a tree, you can delete node $x_1$, creating two new trees $T, T'$, with $u \in T$ and $u' \in T'$ in different trees. Take the longest path containing $u'$ in $T'$, and "join" that with $x_1$ and the remainder of the longest path you found, $u = x_2, \dots$. This is surely longer than the earlier longest path, creating a contradiction. |
Understanding an Example of a *Pushout* in $\mathbf{Set}$ | This indeed is confusing. First of all, the pushout is not the disjoint union, it is the disjoint union with the images of $a$ identified.
Here's a specific example, designed to hit some "conceptual edge cases." Suppose
$$a = \{1,2\}$$
$$b = \{0,1,2,3\}$$
$$c = \{0,1,2,3,4\}$$
and for $f:a\to b$ and $g:a \to c$,
$$f(x) = x+1$$
$$g(x) = x.$$
Let's call the pushout $r$, as in the first picture.
We'll keep elements separate by adding little subscripts for whether they came from set $b$ or set $c$.
Thus, as described, $r$ contains $0_b,1_b,2_b,3_b$ and $0_c,1_c,2_c,3_c,4_c$, but these are not all distinct.
In particular, we must identify $f(x)$ and $g(x)$ for each $x\in a$.
This means in $r$ we set $f(1)=2_b=1_c=g(1)$ and $f(2)=3_b=2_c=g(2)$.
So the elements in $r$ are:
$$\{ 0_b,$$
$$0_c,$$
$$1_b,$$
$$2_b=1_c,$$
$$3_b=2_c,$$
$$3_c,$$
$$4_c\}$$ |
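The gluing can be carried out mechanically; here is a small Python sketch (all names mine) that computes this pushout as a disjoint union modulo the identifications $f(x)\sim g(x)$, using a naive union-find:

```python
def pushout(a, b, c, f, g):
    elems = [('b', x) for x in b] + [('c', x) for x in c]   # disjoint union
    parent = {e: e for e in elems}

    def find(e):
        while parent[e] != e:
            e = parent[e]
        return e

    for x in a:                                  # identify f(x) ~ g(x)
        parent[find(('b', f(x)))] = find(('c', g(x)))

    classes = {}
    for e in elems:
        classes.setdefault(find(e), []).append(e)
    return list(classes.values())

r = pushout({1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4},
            lambda x: x + 1, lambda x: x)
print(len(r), r)    # 7 classes; 2_b~1_c and 3_b~2_c are merged
```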
Showing that any of n balls drawn without replacement has the same probability of being a particular colour | I believe there may be a number of elegant proofs, perhaps relying on the linearity of expectation of random variables and indicator variables (expectation = probability). Joseph K. Blitzstein has a similar problem explained here, which would be paraphrased as follows with regard to the symmetry insight:
This is true by symmetry. The first ball is equally likely to be any of the $b + w$ balls, so the probability of it being white is $\frac{w}{w +b}.$ But the second ball is also equally likely to be any of the $b + w$ balls (there aren’t certain balls that enjoy being chosen second and others that have an aversion to being chosen second); once we know whether the first ball is $W$ we have information that affects our uncertainty about the second ball, but before we have this information, the second ball is equally likely to be any of
the balls. Alternatively, intuitively it shouldn’t matter if we pick one ball at a time, or take one ball with the left hand and one with the right hand at the same time. By symmetry, the probabilities for the ball drawn with the left hand should be the same as those for the ball drawn with the right hand.
Every possible result is a single-cycle arrangement of the $w$ and $b$ balls, considered distinguishable by their order of extraction, and within any single result each ball could equally well have been extracted one position later. Sliding every ball forward one position, with period $w+b$, matches each single result with $w+b$ results in which the relative order doesn't change and each ball occupies every possible extraction position.
In your calculation you get to $E(X_1)=\frac{w}{w+b}$ and $E(X_2)=\frac{w}{w+b}\left(\large \Box \right),$ where happily, $\large \Box =1,$ and hence, $E(X_1)=E(X_2).$ So what we want is for this pattern to hold for all $X_i,$ such that $E(X_i)=\frac{w}{w+b}\left(\color{red}{\large \Box} \right)$ with $\color{red}{\large \Box}=1$ for all $i$'s.
And this pattern can possibly be teased out by just seeing what happens next, in the case of $X_3$:
$$ E(X_3) =\Tiny \left(\frac{w}{w+b}\right) \left(\frac{w-1}{w+b-1}\right)\left(\frac{w-2}{w+b-2}\right)
+2\left(\frac{b}{w+b}\right) \left(\frac{w}{w+b-1}\right)\left(\frac{w-1}{w+b-2}\right) + \left(\frac{b}{w+b}\right) \left(\frac{b-1}{w+b-1}\right)\left(\frac{w}{w+b-2}\right)$$
Clearly we'll always be able to extract the $E(X_1)=\frac{w}{w+b}$ as a factor in front of the sum since each $w$ and $w+b$ appear in each term in the numerator and denominator, respectively. What remains to be proven is that the sum multiplied by $E(X_1)$ is always equal to $1:$
$$\begin{align}
1=\Tiny{ \left(\frac{w-1}{w+b-1}\right)\left(\frac{w-2}{w+b-2}\right)
+2\left(\frac{b}{1}\right) \left(\frac{1}{w+b-1}\right)\left(\frac{w-1}{w+b-2}\right) + \left(\frac{b}{1}\right) \left(\frac{b-1}{w+b-1}\right)\left(\frac{1}{w+b-2}\right)}\implies \\[3ex]
\small{(w+b-1)(w+b-2)=(w-1)\;(w-2) + 2\;b\;(w-1) + b\;(b-1)\\
={2 \choose 0}(w-1)\;(w-2) +{2 \choose 1} b\; (w-1) + {2\choose 2} b\;(b-1)}
\end{align}$$
But the LHS is the 2-permutations of $(w - 1) + b$, while the RHS is the binomial expansion considering $w$ and $b$ to denote the number of elements in the set of class $\text W$hite and $\text{B}$lack, respectively.
This pattern will hold for any $X_i,$
$$\begin{align}
\left((w-1) + b\right)\left((w-2) + b\right)\cdots\left((w-(i-1)) +b\right)&\\[3ex]
=\small{ {i-1 \choose 0} (w-1)(w-2)\cdots (w-(i-1))\\+\cdots + {i-1 \choose j} (w-1)\cdots (w-(i-1-j))\,b\,(b-1)\cdots(b-(j-1))\\+\cdots+{i-1 \choose i-1} b\,(b-1)\cdots (b-(i-2))}
\end{align}$$ |
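A quick simulation backing up the symmetry claim (Python; the urn sizes are arbitrary): the estimated probability that the $i$-th draw is white is $\frac{w}{w+b}$ for every position $i$.

```python
import random

w, b, trials = 5, 7, 200_000
counts = [0] * (w + b)
for _ in range(trials):
    urn = ['W'] * w + ['B'] * b
    random.shuffle(urn)                    # a uniformly random extraction order
    for i, ball in enumerate(urn):
        counts[i] += (ball == 'W')
print([round(cnt / trials, 3) for cnt in counts])   # all close to 5/12 = 0.417
```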
Proving existence of an interval that satisfies these properties. | I'll expand on the hints I gave below.
The Lebesgue differentiation theorem implies there exists $x_0\in S$ and $a_0>0$ such that
$$\frac{m([x_0,x_0+a]\cap S)}{a} >\frac{1}{2},$$
for $0<a<a_0.$ We can write the above as $m([x_0,x_0+a]\cap S) >\dfrac{a}{2}.$
Fix any such $a.$ Define $f_a:[x_0,\infty)\to [0,\infty)$ by setting $f_a(x)=$ $m([x,x+a]\cap S)$ for $x\ge x_0.$ Then $f_a$ is continuous and $f_a(x_0)>a/2.$ Since $f_a(x)\to 0$ as $x\to \infty,$ there exists $x$ such that $f_a(x)=a/2$ by the IVT. For this $x$ we have $m([x,x+a]\cap S)=a/2.$ Thus $I=[x,x+a]$ is the desired interval for this $a.$
Earlier post, hints: Your idea about the Lebesgue differentiation theorem might come in handy. Note also i) For $a>0,$ $f_a(x)=m([x,x+a]\cap S)$ is a continuous function of $x.$ ii) Since $m(S)<\infty,$ $\lim_{x\to \infty}f_a(x)=0.$ |
Existence of almost disjoint families under MA($\kappa$) | You cannot pick $\mathcal{A}=\mathcal{C}$ in the first result: We pick any $y \in \mathcal{C}$ and define $\mathcal{F}=\{y\} \subseteq \mathcal{A}$ and see that $|y\setminus \bigcup \mathcal{F}|=|\emptyset|=0$ and not $\omega$. So the conditions are never met when $\mathcal{A}$ equals $\mathcal{C}$: these families should be sort of "independent": you cannot almost cover a set from one with finitely many from the other. |
Predict number of Birthdays for 1000 person of same class in next 365 Days | A person in the class in January has chance $1$ of having a birthday because he is in the class all year. That gets you $1000$ birthdays. A person who arrives in February has $\frac{365-31}{365}$ chance of having his birthday in class, so that group will on average get you $915$ birthdays. Keep going through the months and you will have your answer.
Slightly less accurately, because it ignores the differing numbers of days per month, you have on average $6500$ people in class, so you should have about $6500$ birthdays during the year.
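A minimal sketch of the month-by-month sum, assuming (as the figures above suggest) that a cohort of $1000$ joins at the start of each month of a $365$-day year:

```python
# days in each month of a non-leap year
month_days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

expected = 0.0
days_elapsed = 0
for days in month_days:
    # a cohort of 1000 joining now stays for the remaining days of the year
    expected += 1000 * (365 - days_elapsed) / 365
    days_elapsed += days

print(round(expected))  # 6526 -- close to the rough estimate of 6500
```

(The January cohort contributes $1000$ birthdays, the February cohort about $915,$ and so on.)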
I'm not sure this is at all what you are asking. Is it? |
Evaluate Integral of $\sin(\ln(x))dx$ | Or more concisely, $$\int\sin\ln x\,dx=\Im\int x^i\,dx=\Im\left(\frac{1-i}{2}x^{1+i}\right)+C=\frac12x(\sin\ln x-\cos\ln x)+C.$$
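A quick symbolic check of that closed form (a SymPy sketch; $x$ is taken positive so that $\ln x$ is real):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
antiderivative = x * (sp.sin(sp.log(x)) - sp.cos(sp.log(x))) / 2

# differentiating should recover the integrand sin(ln x)
assert sp.simplify(sp.diff(antiderivative, x) - sp.sin(sp.log(x))) == 0
print("verified")
```

Checking by differentiation keeps the verification independent of SymPy's own integration routine. |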
Operations on pullbacks of vector bundles. | Your thoughts concerning topologizing the bundles are correct. After the proof of Proposition 1.5 Hatcher gets explicit about local trivializations of the pullback bundle $f^*(E)$, and that is all you need. For each $x \in B$ choose an open neighborhood $U$ such that both $E_1, E_2$ (and hence also $E_1 \otimes E_2$) are trivial over $U$. This canonically induces trivializations of $f^*(E_1),f^*(E_2), f^*(E_1 \otimes E_2)$ over $f^{-1}(U)$. With respect to these trivializations the fiberwise isomorphism $f^*(E_1) \otimes f^*(E_1) \to E_1 \otimes E_2$ corresponds on $f^{-1}(U)$ to $f \times id : f^{-1}(U) \times (\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}) \to U \times (\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2})$ which is obviously is continuous. Hence $f^*(E_1) \otimes f^*(E_1) \to E_1 \otimes E_2$ is locally continuous which suffices to prove the desired result.
Another way to see it is to consider the obvious fiberwise isomorphism $f^*(E_1) \otimes f^*(E_1) \to f^*(E_1 \otimes E_2)$ of bundles over $A$. On $f^{-1}(U)$ it corresponds to the identity on $f^{-1}(U) \times (\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2})$. Hence it is locally a bundle isomorphism which again suffices to prove the result.
By the way, perhaps you should consult also other books. Hatcher is a little short when dealing with the basic material. For example, he does not mention the relationship of bundles with the transition maps $U_\alpha \cap U_\beta \to GL_n(\mathbb{R})$ associated to a bundle atlas (see Andres Mejia's comment).
Here are some books:
Steenrod, Norman Earl. The topology of fibre bundles. Vol. 14. Princeton University Press, 1999
Husemoller, Dale. Fibre bundles. Vol. 5. New York: McGraw-Hill, 1966. https://www.maths.ed.ac.uk/~v1ranick/papers/husemoller
Cohen, Ralph. The Topology of Fiber Bundles. http://math.stanford.edu/~ralph/fiber.pdf
Atiyah, Michael. K-theory. CRC Press, 2018. https://www.maths.ed.ac.uk/~v1ranick/papers/atiyahk.pdf |
$G$-structure defined by a tensor | How may we view $T=u_∗T_0$ as a section of $T(M)$, when T is only a tensor above $π(u)$?
We can't. However, by letting $u$ vary in $L(M)$ we can. In the case of a $G$-structure where $G$ is a group of transformations leaving $T_0$ untouched, we will get a well-defined tensor on $M$. This leads to the answer to the second question.
Now let $G<GL(n,R)$ be the largest Lie subgroup that leaves $T_0$ invariant. How can we use invariance of $T_0$ to define a section of the associated bundle $L(M)/G$?
Namely, we have a correspondence between $G$-structures and tensor fields $0$-reducible to $T_0$. Given a $G$-structure $P \subset L(M)$ we define $$T_x={u_x}_* T_0$$ for $u_x \in P_x$; the definition is correct since any two $u_x$'s in $P_x$ differ by an element of $G$. Given $T$, we simply define $$ P_x = \left\{ u_x \in L(M)_x :{u_x}_*T_0=T_x \right\} .$$ |
Finding the symbol of an differential operator. | Looks good. Since we have the definition $D = i\partial$, $D^2 = -\partial^2$, or $\partial^2 = -D^2$, the symbol of $\partial_1^2 + \partial_2^2$ is just $-\xi_1^2 -\xi_2^2$, as you noted. |
Problems with algebraically finding the range of a square root of a quadratic function. | Just looking at the sheer length of your argument, I think you're overcomplicating matters. Consider:
$\sqrt x \ge 0$ for all $x \ge 0$ and is continuous
$x^2 \ge 0$ for all real $x$ and is continuous
Therefore, $-x^2 \le 0$, then $2 - x^2 \le 2$, and therefore $\sqrt{2-x^2} \le \sqrt 2$
What this argument shows is that all values in the interval $[0,\sqrt 2]$ must be achieved, because of continuity and the function involved. We show that $f(x)$ is bounded above by $\sqrt 2$ and below by $0$. You can verify these values are achieved for $x = 0$ and $x=\sqrt 2$ respectively. Continuity gives us that every value in-between is also achieved. |
variance and generating function - probability hat problem | Variance is $E[(X-\bar{X})^2]$. Here $\bar{X}=1.2$, so we have
$$Var = 0.2\,(0-1.2)^2 + 0.6\,(1-1.2)^2 + 0.2\,(3-1.2)^2$$
So $$Var = 0.96 = \frac{24}{25}$$
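A one-line numeric check (a sketch; the distribution $P(X{=}0)=0.2,$ $P(X{=}1)=0.6,$ $P(X{=}3)=0.2$ is read off from the computation above):

```python
values = [0, 1, 3]
probs = [0.2, 0.6, 0.2]

mean = sum(p * v for p, v in zip(probs, values))
var = sum(p * (v - mean) ** 2 for p, v in zip(probs, values))
print(round(mean, 10), round(var, 10))  # 1.2 0.96
```

which matches the hand computation. |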
What is the probability for a couple and their 2 kids to be born on the same day of the week? | There are seven days a week. If we assume that each is just as likely to be a birthday, then the probability of all four births being on a Wednesday was: $$\dfrac 1 {7^4} \approx 0.0004164\small{9\ldots}$$
If you just wanted to know the probability that all four births were on the same weekday, it was: $$\dfrac 1 {7^3} \approx 0.00291\small{5\ldots}$$
Remark: However not all days may be equally likely. In fact, it turns out that Tuesday is the most popular weekday for giving birth in the USA, for unspecified reasons. So...
Remark 2 This also assumes that the day of birth is independent for each person, and not inherited in some manner (biorhythms?). |
Borel/ Lebesgue on $\chi_D$ for $D = \{(x,x):x \in E \}$ | Your answers are not correct.
For fixed $x \in E$ we have $f_x(y)=1$ if $y=x$ and $0$ otherwise.
For fixed $x \notin E$ we have $f_x(y)=0$ for all $y$.
This makes $f_x$ Borel, hence also Lebesgue measurable always. [No condition on $E$].
Similarly, $f^{y}$ is Borel, hence also Lebesgue measurable always.
Now you can write down the answers to all parts of the question. |
series: $S(x)=\sum_{n=0}^\infty x^n(1-x)^2$ is NOT uniformly convergent in $[0,1]$ | The series is uniformly convergent on $[0,1]$.
The maximum of $f_n(x) = (1-x)^2x^n$ can be found by setting the derivative to zero and is attained at $x = n/(2+n)$. Thus, for $x \in [0,1]$
$$0 \leqslant f_n(x) \leqslant \left(\frac{2}{2+n}\right)^2 \left(\frac{n}{2+n}\right)^n< \frac{4}{(2+n)^2} < \frac{4}{n^2}$$
Uniform convergence follows from the Weierstrass $M$-test.
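A quick numerical check of the bound (a NumPy sketch; the grid resolution and range of $n$ are arbitrary):

```python
import numpy as np

x = np.linspace(0, 1, 100_001)
for n in range(1, 11):
    sup_fn = np.max((1 - x) ** 2 * x ** n)            # sup of f_n on [0,1]
    bound = (2 / (2 + n)) ** 2 * (n / (2 + n)) ** n   # value at x = n/(n+2)
    assert sup_fn <= bound + 1e-12 and bound < 4 / n ** 2
print("bounds verified")
```

The summable majorant $4/n^2$ is exactly what the Weierstrass test needs. |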
Fixing orientation of connected smooth manifold in $\mathbb{R}^n$ by a single chart | Let $M$ be your $k$-dimensional surface, oriented with respect to the charts $\{ \varphi_i\}_i$, $\varphi_i : \mathbb R^k\rightarrow U_i \subset_{open } M $. There exists $\omega\in \Omega^k(M)$ such that $\omega$ is non-vanishing at every point; this is possible since $M$ is orientable. $\varphi_i^*\omega=g_i \lambda$ where $\lambda=dx_1\wedge dx_2\wedge \dots \wedge dx_k$ and $g_i:\mathbb R^k \rightarrow \mathbb R$ is a non-vanishing smooth function. Since the charts are consistent, either all $g_i$'s are positive or all are negative. Assume that all the $g_i$'s are positive.
Now you have the charts $\{ \varphi_1, \varphi_j'\}_j .$ As before we get $\varphi^*_1 \omega =g_1\lambda$ and ${\varphi'}_j^*\omega=h_j \lambda$. By the same logic as above, either all of $\{g_1, h_j \}_j$ are positive functions or all are negative. But since $g_1$ is positive, all the $h_j$'s are positive. Thus you get the same orientation. |
Solving a complex equation | HINT:
Now $-1-i=\sqrt2e^{i(\pi+\pi/4)}$
and $-1+i=\sqrt2e^{i(\pi-\pi/4)}$
Now use this and How to prove Euler's formula: $e^{it}=\cos t +i\sin t$? |
Find the solutions to the $w''-z^2w=3z^2-z^4$ as Taylor series where $w(0)=0$ and $w'(0)=1$ | Write $w(z)=\sum_{n\ge 0}c_nz^n.$ Now insert into the equation and compare the coefficients of equal powers of $z$
$$
z^n:~~~~(n+2)(n+1)c_{n+2}-c_{n-2}=3\delta_{n,2}-\delta_{n,4}
$$
with $c_n=0$ for $n<0$, and $c_0=0$, $c_1=1$ from the initial conditions. This then allows you to compute the coefficients step by step.
This gives equations
$$
2c_2=0\\
6c_3=0\\
12c_4-c_0=3\\
20c_5-c_1=0\\
30c_6-c_2=-1\\
42c_7-c_3=0\\
...
$$
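A minimal sketch of the step-by-step computation in exact arithmetic ($c_0=0$ and $c_1=1$ come from the initial conditions $w(0)=0$, $w'(0)=1$):

```python
from fractions import Fraction

N = 10
c = {0: Fraction(0), 1: Fraction(1)}        # w(0)=0, w'(0)=1

for n in range(N):
    rhs = 3 * (n == 2) - (n == 4)           # 3*delta_{n,2} - delta_{n,4}
    c[n + 2] = (c.get(n - 2, Fraction(0)) + rhs) / ((n + 2) * (n + 1))

print({k: str(v) for k, v in sorted(c.items())})
# c_2 = c_3 = 0, c_4 = 1/4, c_5 = 1/20, c_6 = -1/30, c_7 = 0, ...
```

The computed coefficients reproduce the equations listed above. |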
Why is the intersection of this indexed set a closed interval? | Hint: the interval $[-1,1]$ is contained in each $B_i$, hence it should be contained in their intersection. But for any other point $x \not \in [-1,1]$, can't you find some $B_i$ such that $x \not \in B_i$? Maybe doing some drawings can help to get a mental picture of the phenomenon. |
How considering $f(x_0) > c$ and $f(x_0) \le c$ together leads to $f(x_0) = c$? | There is a small mistake in your argument: when you go to limits, strict inequalities are not preserved.
If $f(b_{n+1}) >c$ you can only conclude that $\lim f(b_{n+1}) \geq c$.
This can easily be proven with the $\epsilon-\delta$ definition of the limit.
To understand why we cannot have strict inequalities, consider that $\frac{1}{n} >0$ but
$$\lim_n \frac{1}{n} =0$$
Added To prove that if $x_n > a$ and $\lim_n x_n =b$ then $b \geq a$, assume by contradiction that $b <a$. Choose $\epsilon$ such that $b < a-\epsilon$ (for example $\epsilon = \frac{a-b}{2}$) and write the definition of $\lim_n x_n=b$ for this $\epsilon$. You'll get a contradiction. |
Find count of equivalence relations if you know number of pairs | Yes, it is correct: the number of pairs of the form $(x,y)$ with $x\neq y$ is always even inside any finite equivalence relation, so it cannot be equal to $5$. Very nice solution! |
Induction Inequality with recurrence relation | Firstly, we see that $u_n>0$.
Now, $u_1>4$, and if $u_n>4$ then by the induction hypothesis $$u_{n+1}-4=\frac{(u_n-4)(3u_n-4)}{4u_n}>0,$$ so $u_n>4$ for all $n$.
Also, $$u_{n+1}-u_n=\frac{(4-u_n)(4+u_n)}{4u_n}<0,$$ so the sequence is decreasing.
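A quick symbolic check of both identities (a SymPy sketch; the recurrence $u_{n+1}=\frac{3u_n^2+16}{4u_n}$ is inferred here from the two displayed factorizations, since the original recurrence is not quoted):

```python
import sympy as sp

u = sp.symbols('u', positive=True)
u_next = (3 * u**2 + 16) / (4 * u)   # assumed recurrence

assert sp.simplify(u_next - 4 - (u - 4) * (3*u - 4) / (4*u)) == 0
assert sp.simplify(u_next - u - (4 - u) * (4 + u) / (4*u)) == 0
print("identities verified")
```

Together they give $u_n>4$ and $u_{n+1}<u_n$, so the sequence decreases and is bounded below. |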
A tricky integration over the unit sphere | I am going to write $\theta_0$ where you write $\theta$ so that I can later use $\theta$ in the usual way as part of spherical coordinates.
Define vectors $\mathbf u_1 = (1, 0, 0)^T$ and
$\mathbf u_2 = (\cos\theta_0, \sin\theta_0, 0)^T$.
For any point $(x,y,z)$, if we view that point as a vector $\mathbf v = (x,y,z)^T$
then $x = \mathbf v^T\mathbf u_1$
and $x\cos\theta_0 + y\sin\theta_0 = \mathbf v^T\mathbf u_2.$
The angle between the vectors $\mathbf u_1$ and $\mathbf u_2$ is $\theta_0.$
Let $P$ be the plane through the $z$ axis bisecting the angle between $\mathbf u_1$ and $\mathbf u_2$; then
$\max\{0,x,x\cos\theta_0+y\sin\theta_0\} = x$ when $(x,y,z)$ is on the same side of the $y,z$ plane as $\mathbf u_1$ (or the positive $x$ axis) and also on the same side of the plane $P$ as $\mathbf u_1$.
That is, the part of the integral where we are just integrating $x$ is a segment of the sphere generated by placing a semicircle with endpoints at $(0,0,\pm 1)$ and rotating it from the $y,z$ plane to the $x,z$ plane and an angle $\frac{\theta_0}2$ beyond that. That is, the angle of the segment is $\frac\pi2 + \frac{\theta_0}2.$
On another segment of the sphere we integrate $x\cos\theta_0+y\sin\theta_0.$
That segment is a mirror image of the first segment in the plane $P,$ and its contribution to the integral is the same.
On the remainder of the sphere the integral is zero.
So we just have to integrate $x$ over the segment on which the integrand is $x,$ and then multiply the result by $2$ in order to count both of the
segments where the integral is non-zero.
Depending on which way $\mathbf u_2$ is pointing, the segment where we integrate $x$ might be mostly on the positive $y$ side of the $x,z$ plane or mostly on the negative $y$ side. Either way we get the same integral by reflection through the $x,z$ plane, so we can get the correct answer by assuming the segment is mostly on the positive $y$ side.
So you just need to compute this integral for $r = 1$ in spherical coordinates
(where $\phi = 0$ on the positive $z$ axis and $\theta = 0$ on the positive $x$ axis):
$$ 2 \int_{-\theta_0/2}^{\pi/2} \int_0^\pi x \sin\phi\, \mathrm d\phi\,\mathrm d\theta
= 2 \int_{-\theta_0/2}^{\pi/2} \int_0^\pi \cos\theta \sin^2\phi \,\mathrm d\phi\,\mathrm d\theta
= \pi \left(1 + \sin \frac{\theta_0}2 \right),$$
which is exactly equal to your numeric result.
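A Monte Carlo cross-check of this closed form (a NumPy sketch; $\theta_0$, the seed, and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 0.7

# uniform points on the unit sphere via normalized Gaussian vectors
v = rng.standard_normal((1_000_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
x, y = v[:, 0], v[:, 1]

integrand = np.maximum(0, np.maximum(x, x * np.cos(theta0) + y * np.sin(theta0)))
estimate = 4 * np.pi * integrand.mean()   # sphere area times the average value

print(estimate, np.pi * (1 + np.sin(theta0 / 2)))  # agree to a few decimals
```

Both numbers land on $\pi\left(1+\sin\frac{\theta_0}2\right)$ up to sampling noise. |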
Looking for a comprehensive reference for vector identities | In order to prove such vector identities, all you need is the following. Let $\delta_{ij}$ and $\epsilon_{ijk}$ ($i,j,k=1,2,3$) denote the Kronecker delta and the Levi-Civita tensor. The cross-product of two vectors $\mathbf{A}$ and $\mathbf{B}$ can be written as $(\mathbf{A}\times \mathbf{B})_i = \sum_{j,k}\epsilon_{ijk} A_j B_k$. These are the only isotropic tensors, i.e., invariant under rotations. Any invariant tensor that you can construct can be written using these two tensors.
With this background, there is only ONE identity worth remembering.
$$
\sum_{i=1}^3\epsilon_{ijk}\epsilon_{ilm} = \left(\delta_{jl} \delta_{km} - \delta_{jm} \delta_{kl}\right)
$$
That is the reason books rarely go beyond listing some identities that appear often. Try to prove the identity that you mention using this method.
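A brute-force numerical check of that identity (a NumPy sketch; the Levi-Civita tensor is written out from its even permutations):

```python
import numpy as np

eps = np.zeros((3, 3, 3))                 # Levi-Civita tensor eps[i, j, k]
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0                    # even permutations
    eps[i, k, j] = -1.0                   # odd permutations

delta = np.eye(3)
lhs = np.einsum('ijk,ilm->jklm', eps, eps)
rhs = np.einsum('jl,km->jklm', delta, delta) - np.einsum('jm,kl->jklm', delta, delta)

assert np.array_equal(lhs, rhs)
print("identity verified")
```

The same `einsum` contractions come in handy when proving the triple-product and curl-of-curl identities by this method. |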
Calculating a gps coordinate using satellites position | Yes, you have enough information with four satellites. You are trying to determine four numbers, your $x,y,z$ position and the time. You have four pieces of data, which are the times the satellites sent out each signal. If you are willing to assume sea level, you are only trying to measure three numbers, so three satellites would suffice. Having a fourth can improve your accuracy through a least squares fit. Now that your edit has come in and the data is readable, it looks like you have data from five satellites.
What you need to do is to translate the Lat/Lon/altitude to $x,y,z$ coordinates for each satellite. For each satellite you write an equation based on distance to satellite=(time now-time signal was sent)*speed of light. The distance to satellite uses the satellite coordinates you computed. This gives you five equations in four unknowns. As you have assumed your altitude is zero, you really have only three unknowns-your Lat/Lon and the time of observation. Normally you would compute the error in each of the observations as a function of your variables, then use a minimizer to find the best fit values of your variables. You don't solve for the time first, you solve for time and your position simultaneously.
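A minimal sketch of that simultaneous fit (all numbers below are hypothetical placeholders invented for illustration, so the printed fit is only illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0                          # speed of light, m/s

# hypothetical satellite positions (m) and signal emission times (s)
sat_xyz = np.array([[15e6, 5e6, 20e6], [-10e6, 12e6, 18e6], [8e6, -14e6, 19e6],
                    [-5e6, -9e6, 22e6], [2e6, 16e6, 17e6]])
t_sent = np.array([0.0712, 0.0698, 0.0705, 0.0718, 0.0701])

def residuals(p):
    x, y, z, t = p
    dist = np.linalg.norm(sat_xyz - np.array([x, y, z]), axis=1)
    return dist - C * (t - t_sent)         # range minus light-travel distance

# five equations, four unknowns, solved simultaneously
sol = least_squares(residuals, x0=[0.0, 0.0, 6.371e6, 0.1])
print(sol.x)                               # best-fit x, y, z and observation time
```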
Your assumptions and data are quite coarse. Reporting satellite Lat/Lon to only two decimals of a degree gives a position error of the order of a mile. The earth radius varies by $13$ miles depending on longitude. This will not make the measurement fail, it will just render it inaccurate. |
Almost everywhere convergent subsequence in a Sobolev space | First of all we need to understand what weak convergence in this space means. Observe that we have an isometric embedding $\mathcal{D}^{1,p}(\mathbb R^n) \hookrightarrow L^p(\mathbb R^n,\mathbb R^n)$ given by $u \mapsto \nabla u.$ By standard results about dual spaces of subspaces, we get that
$$ \mathcal{D}^{1,p}(\mathbb R^n)' \cong L^{p'}(\mathbb R^n,\mathbb R^n) / Y, $$
where
$$ Y = \left\{g \in L^{p'}(\mathbb R^n,\mathbb R^n) \middle| \int_{\mathbb R^n} g.\nabla u = 0 ,\ \forall u \in D^{1,p}(\mathbb R^n) \right\}.$$
(Note I do not claim that $\mathcal{D}^{1,p}(\mathbb R^n)$ is complete in this argument $(\dagger)$.) From this it is easy to see that $v_m \rightharpoonup v$ weakly in $D^{1,p}(\mathbb R^n)$ if and only if for all $g \in L^{p'}(\mathbb R^n, \mathbb R^n),$
$$ \int_{\mathbb R^n} g.\nabla v_m \rightarrow \int_{\mathbb R^n} g.\nabla v. $$
Now the idea is that a.e. convergence is a local property, so it suffices to restrict to a bounded domain where we get compact embeddings into $L^p.$ For this fix $\Omega \subset \mathbb R^n$ a bounded domain, then extending by zero we get for any $g \in L^{p'}(\Omega,\mathbb R^n),$
$$ \int_{\Omega} g.\nabla v_m \rightarrow \int_{\Omega} g.\nabla v. $$
Now the restriction $(v_n|_{\Omega})$ defines a bounded sequence in $W^{1,p}(\Omega),$ where we have used Hölder and the uniform bound of $(v_n)$ in $L^{p^*}(\mathbb R^n).$ We know by the Rellich-Kondrachov compactness theorem that there exists $u \in W^{1,p}(\Omega)$ and a subsequence $(v_{n_k}|_{\Omega})$ such that $v_{n_k} \rightharpoonup u$ weakly in $W^{1,p}(\Omega)$ and $v_{n_k} \rightarrow u$ a.e. also. Then for any $g \in L^{p'}(\Omega,\mathbb R^n)$ we have,
$$ 0 = \lim_{k\rightarrow \infty} \int_{\mathbb R^n} g.\nabla v_{n_k} - g.\nabla v = \int_{\mathbb R^n} g.\nabla(u-v). $$
Hence $\nabla(u-v) = 0$ a.e. in $\Omega,$ and so there is $\lambda_{\Omega} \in \mathbb R$ such that $v_{n_k} \rightarrow (v + \lambda_{\Omega})$ a.e. in $\Omega.$
Now take a compact exhaustion $\mathbb R^n = \bigcup_j \Omega_j$ and iteratively choose subsequences $v_{k,j}$ such that $v_{k,j} \rightarrow v + \lambda_i$ a.e. as $k \rightarrow \infty$ in $\Omega_i$ for all $i \leq j.$ Then observe by Fatou's lemma that,
$$ |\lambda_j|^{p^*} |\Omega_j| \leq \liminf_{k \rightarrow \infty} \int_{\Omega_j} |v_{k,j} - v|^{p^*} \leq \sup_{m} \int_{\mathbb R^n} 2^{p^*}(|v_m|^{p^*} + |v|^{p^*}) < \infty, $$
so $\lambda_j \rightarrow 0$ as $j \rightarrow \infty.$ Since the limits on the nested $\Omega_i$ must agree, the $\lambda_j$ are all equal, hence all zero. Thus the diagonal sequence $v_{n_k} = v_{k,k}$ satisfies $v_{n_k} \rightarrow v$ a.e. in $\mathbb R^n.$
Now as the limit is unique, we see that every subsequence of $(v_m)$ has a further subsequence which converges a.e. to $v.$ Hence the entire sequence converges a.e. to $v.$
Added later $(\dagger)$: Writing $X = \mathcal{D}^{1,p}(\mathbb R^n)$ and $\lVert \cdot \rVert = \lVert \nabla \cdot \rVert_{L^p(\mathbb R^n,\mathbb R^n)},$ it is not clear whether $(X,\lVert\cdot\rVert)$ is complete (which is equivalent to its image being closed in $L^p(\mathbb R^n,\mathbb R^n)$). However I claim this is not necessary for the argument to work.
Let $\overline X$ be the completion of $X$ with respect to this norm, which can be identified with the closure of $X$ in $L^p(\mathbb R^n,\mathbb R^n)$ identified via this isometric embedding. Then the restriction map $\overline X' \ni f \mapsto f|_X$ defines an isometric isomorphism $X' \cong \overline X'.$ Also we have,
\begin{align*}
Y &= \left\{ g \in L^{p'}(\mathbb R^n,\mathbb R^n) \middle| \int_{\mathbb R^n} g.\nabla u = 0, \ \forall u \in X \right\} \\
&= \left\{ g \in L^{p'}(\mathbb R^n,\mathbb R^n) \middle| \int_{\mathbb R^n} g.\nabla u = 0, \ \forall u \in \overline X \right\}
\end{align*}
Identifying $L^{p'}(\mathbb R^n,\mathbb R^n) \cong L^p(\mathbb R^n,\mathbb R^n)'$, we have $Y = X^{\perp} = \overline X^{\perp}.$ Now the result about duality of subspaces is generally stated for closed subspaces (see e.g. proposition 3.67 in section 3.16 of Introduction to Banach Spaces and Algebras by Allan & Dales), but in light of the above we have,
$$ X' \cong \overline X' \cong L^p(\mathbb R^n,\mathbb R^n)' / \overline{X}^{\perp} \cong L^{p'}(\mathbb R^n,\mathbb R^n) / Y, $$
so it also holds even if $X$ is not complete. |
Finding a minimal polynomial of a root of unity over a field extension | Let $\zeta=\exp(2\pi i/7)$
The Galois group of $K=\Bbb Q(\zeta)$ over $\Bbb Q$ is cyclic of order $6$. It is generated
by $\sigma_3$ which takes $\zeta$ to $\zeta^3$. So $K$ has a quadratic subfield; the fixed
field of $\left<\sigma^2_3\right>$. This includes $\zeta+\sigma_3^2(\zeta)+\sigma_3^4(\zeta)=\zeta+\zeta^2+\zeta^4$ which turns out to be $\frac12(-1+i\sqrt7)$.
Therefore $K\supseteq L=\Bbb Q(i\sqrt7)$.
The Galois group of $K/L$ is of order $3$ and is generated by $\sigma^2_3$. The conjugates
of $\zeta$ over $L$ are $\zeta^2$ and $\zeta^4$, and so its minimal polynomial is
\begin{align}
(X-\zeta)(X-\zeta^2)(X-\zeta^4)&=X^3-(\zeta+\zeta^2+\zeta^4)X^2+
(\zeta^3+\zeta^5+\zeta^6)X-\zeta^7\\
&=X^3-\frac12(-1+i\sqrt7)X^2+\frac12(-1-i\sqrt7)X-1.
\end{align}
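A quick numerical check of the cubic (a sketch using Python's `cmath`; all three conjugates $\zeta,\zeta^2,\zeta^4$ should be roots):

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 7)
s = (-1 + 1j * 7 ** 0.5) / 2          # zeta + zeta^2 + zeta^4

for k in (1, 2, 4):
    X = zeta ** k
    value = X**3 - s * X**2 + s.conjugate() * X - 1
    assert abs(value) < 1e-12
print("cubic verified")
```

The coefficient of $X$ is $\bar s=\frac12(-1-i\sqrt7),$ matching the computation above. |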
Given integral $\iint_D (e^{x^2 + y^2}) \,dx \,dy$ in the domain $D = \{(x, y) : x^2 + y^2 \le 2, 0 \le y \le x\}.$ Move to polar coordinates. | Draw a picture of $D$: The first condition means that $D$ lies within the disk around $0$ of radius $\sqrt 2$, and the second condition means that $D$ lies below the line $y = x$, but above the $x$-axis. That is, it's a slice of pie lying in the first quadrant. This leads us to the bounds
$$0 \le r \le \sqrt 2$$ and $$0 \le \theta \le \frac{\pi}{4}$$
Now make the change of variables.
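After the change of variables the integrand is $e^{r^2}$ with Jacobian $r$; a quick symbolic evaluation (a SymPy sketch):

```python
import sympy as sp

r, theta = sp.symbols('r theta', nonnegative=True)

integral = sp.integrate(sp.exp(r**2) * r,
                        (r, 0, sp.sqrt(2)), (theta, 0, sp.pi / 4))
print(sp.simplify(integral))   # pi*(exp(2) - 1)/8, possibly in an equivalent form
```

The inner integral is elementary precisely because of the extra factor $r$ from the Jacobian. |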
Showing Holder's inequality holds for $p=\infty$ and $q=1$ | The proof is not completely correct because the $\|\cdot\|_{\infty}$ norm is not the maximum of the absolute value of the sequence, but the supremum. For example, if $a_n=1-\frac{1}{n}$, then $\|a\|_{\infty}=1$, but there is no maximum.
To prove the inequality note that for every $n\geq 1$, $$|a_nx_n|=|a_n|\cdot |x_n|\leq\left(\sup_k|a_k|\right)\cdot |x_n|=\|a\|_{\infty}|x_n| $$
Therefore, for every $N$,
$$\sum_{n=1}^N|a_nx_n|\leq\|a\|_{\infty}\sum_{n=1}^N|x_n|\leq\|a\|_{\infty}\|x\|_1$$
Letting $N\to\infty$ the desired result follows. |
What is the average? | Your attempt is excellent, but it will not lead you to success.
Actually, the minimum of
$$f_1(x) := \sum_{i=1}^{n}|x-a_i|$$
is known to be achieved by the median, another estimator of central tendency.
(For odd $n$ the median is the value of rank $(n+1)/2$; for even $n$ it is the average of the two values of rank $n/2$ and $n/2+1$, and in that case every value between the two middle ones, including the median, achieves the minimum.)
Instead, the average achieves the minimum of
$$f_2(x) := \sum_{i=1}^{n}(x-a_i)^2,$$ as one easily shows by setting the derivative to zero:
$$\frac12\,\dfrac{df_2(x)}{dx} = \sum_{i=1}^{n}(x-a_i)=nx-\sum_{i=1}^{n}a_i=0.$$
So the average is the solution of the equations $x=a_i$ in the least-squares sense. It is more sensitive than the median to far-away values, because of the squared weighting.
For the sake of the comparison, the plot shows $f_1(x)/4$ and $\sqrt{f_2(x)/4}$ for the points $1, 3, 4, 7$.
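A small numerical illustration with those four points (a sketch; the grid search is crude but shows both minimizers):

```python
import numpy as np

a = np.array([1.0, 3.0, 4.0, 7.0])
x = np.linspace(0, 8, 80_001)

f1 = np.abs(x[:, None] - a).sum(axis=1)      # sum of absolute deviations
f2 = ((x[:, None] - a) ** 2).sum(axis=1)     # sum of squared deviations

print(x[f1.argmin()], np.median(a))          # 3.0 and 3.5: f1 is flat on [3, 4]
print(x[f2.argmin()], a.mean())              # 3.75 and 3.75: unique minimum
```

For an even number of points, any value between the two middle ones minimizes $f_1$, and `argmin` merely returns the first grid point of that flat stretch. |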
Showing a set of limit points of a sequence of measurable functions is measurable. | Hint: Show that $\liminf f_n$ is measurable. The proof for $\limsup f_n$ is similar, and
$$
\{ x \in X \, | \, \lim_{n \to \infty} f_n \text{ exists} \} = \{ x \in X \, | \, \liminf_{n\to \infty} f_n = \limsup_{n\to \infty} f_n\}.
$$
Hope that helps, |
Show that a limit of a sequence is zero | Just observe that $$a_{n+1}\le \frac{n}{n+1}a_n\le \frac{n}{n+1}\cdot\frac{n-1}{n}a_{n-1}=\frac{n-1}{n+1}a_{n-1}\le\cdots\le \frac{1}{n}a_1\\a_{n+1}\ge \frac{n-1}{n}a_n\ge \frac{n-1}{n}\cdot\frac{n-2}{n-1}a_{n-1}=\frac{n-2}{n}a_{n-1}\ge\cdots\ge \frac{1}{n}a_2\\\implies \limsup_n a_n\le 0\le \liminf_n a_n\\\implies \lim_n a_n=0$$ Actually the lower bound on $a_n/a_{n-1}$ is redundant, as $a_n$ is a positive sequence. |
A real differentiable function is convex if and only if its derivative is monotonically increasing | Your friend is right.
From the previously solved exercise, you can show that for arbitrary $s<t$ in $(a,b)$, $\lim\limits_{u\to s+}\frac{f(u)-f(s)}{u-s}\leq\lim\limits_{v\to t-}\frac{f(t)-f(v)}{t-v}$, so $f'(s)\leq f'(t)$. |
Shortest distance from set to point, in $\mathbb{R}^n$ | Consider, as you indicated, the function $h(z)=|\hat{x}-z|^2$. This is a convex function and what you are attempting is to minimize this function on the closed (I assume that your $f$ is also continuous here) convex set ${\mathcal S}$. There is a huge field of research called convex optimization which deals with exactly this problem and which is very important for many applied math problems. If $f$ is also smooth, you can use Lagrange multipliers to find critical points and, then, minima. In general, take a look at the list of methods and references at the wikipedia page above or just google "convex optimization". |
surjective, but not injective linear transformation | The surjective part is easy. Given a polynomial $a_{0} + a_{1}t + a_{2}t^{2} + \cdots + a_{n}t^{n}$, which polynomial is being sent to this polynomial under the differentiation transformation $T$? Well, any polynomial of the form $C + a_{0}t + a_{1}\frac{t^{2}}{2} + \cdots + a_{n-1}\frac{t^{n}}{n} + a_{n}\frac{t^{n + 1}}{n + 1}$, where $C$ is any constant. Why? Differentiate $C + a_{0}t + a_{1}\frac{t^{2}}{2} + \cdots + a_{n-1}\frac{t^{n}}{n} + a_{n}\frac{t^{n + 1}}{n + 1}$ and you will get exactly $a_{0} + a_{1}t + a_{2}t^{2} + \cdots + a_{n}t^{n}$.
Now, I said $C$ can be any constant, and that polynomial will still be sent to $a_{0} + a_{1}t + a_{2}t^{2} + \cdots + a_{n}t^{n}$. In particular, if $C_{1}$ and $C_{2}$ are two distinct constants, both $C_{1} + a_{0}t + a_{1}\frac{t^{2}}{2} + \cdots + a_{n-1}\frac{t^{n}}{n} + a_{n}\frac{t^{n + 1}}{n + 1}$ and $C_{2} + a_{0}t + a_{1}\frac{t^{2}}{2} + \cdots + a_{n-1}\frac{t^{n}}{n} + a_{n}\frac{t^{n + 1}}{n + 1}$ are being sent to the same polynomial, even though they are distinct polynomials (they differ by the constants $C_{1}$ and $C_{2}$). If you don't believe they are sent to the same polynomial under $T$, differentiate both of them and check to see that you get the same polynomial as an output of $T$. So this is an example of $T(a) = T(b)$ but $a \neq b$, which means $T$ is not injective.
Try using the ideas I showed above and apply them to the integral linear operator. Do you see why it is injective but not surjective?
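A tiny SymPy illustration of both claims (a sketch; the sample polynomial and the constants are arbitrary):

```python
import sympy as sp

t = sp.symbols('t')
p = 1 + 2*t + 3*t**2

# surjective: for ANY constant C, an antiderivative maps onto p
for C in (0, 5):
    antiderivative = C + sp.integrate(p, t)
    assert sp.expand(sp.diff(antiderivative, t)) == sp.expand(p)

# not injective: two distinct polynomials with the same image under T
assert sp.diff(7 + t, t) == sp.diff(3 + t, t)
print("T is surjective but not injective")
```

The constant of integration is exactly the kernel of $T$, which is why injectivity fails. |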