Showing that projections $\mathbb{R}^2 \to \mathbb{R}$ are not closed | No bounded set will work, because a closed and bounded set in ${\bf R}^2$ is compact, and the image of a compact set under any continuous map is compact (so closed in any Hausdorff space, in particular in $\bf R$).
On the other hand, the graph of any function with a vertical asymptote will work, for instance that of $1/x$.
In fact, it is not hard to show that any open set in $\bf R$ can be obtained as the projection of a closed set in ${\bf R}^2$ in such a way that the projection is injective (no two points in the closed set project onto the same point in $\bf R$), by a similar technique. This is related to the classical fact that any $G_\delta$ (countable intersection of open sets) in ${\bf R}$ (or any other Polish space, that is, separable and completely metrizable, if you're familiar with the concepts) can be embedded as a closed set in $\bf R^N$ (product of countably infinitely many copies of $\bf R$). |
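To make the counterexample concrete, here is a tiny Python sketch (my own illustration, not part of the original answer): points of the closed graph of $1/x$ project to $x$-values accumulating at $0$, yet $0$ itself is never attained, so the image is not closed.

```python
# G = {(x, 1/x) : x != 0} is closed in R^2, but its projection onto the
# x-axis is R \ {0}, which is not closed: 0 is a limit point of the
# image that is never attained.
graph_points = [(10.0 ** -k, 10.0 ** k) for k in range(1, 8)]  # on the graph of 1/x
image = [p[0] for p in graph_points]
print(image)           # x-values tending to 0
print(0.0 in image)    # False: 0 is not in the projection
```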
Is the statement true? | Hint: Use induction in $n$. $ $ |
discrete mathematics relations question 2 | $R$ is not reflexive, but your reasoning is off. For any non-negative integer, we have that $x \not\gt 2x$. Hence, in those cases, $(x, x) \notin R$.
$R$ is not symmetric, since because $3 \gt 2\cdot 1 = 2$, $(3, 1) \in R$, but since $1 \not\gt 2\cdot 3 = 6, \;(1, 3) \notin R$.
However, $R$ is transitive. If $x \gt 2y$ and $y \gt 2z$, then $x \gt 2y \gt 4z$, and since $z \ge 0$ we have $4z \ge 2z$, so $x \gt 2z$. |
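If you want to double-check these three conclusions, here is a brute-force sketch (my own code, assuming the relation is $x\,R\,y \iff x > 2y$ on the non-negative integers, as in the reasoning above):

```python
from itertools import product

N = 30  # check over a finite range of non-negative integers
R = lambda x, y: x > 2 * y

reflexive = all(R(x, x) for x in range(N))
symmetric = all(R(y, x) for x, y in product(range(N), repeat=2) if R(x, y))
transitive = all(R(x, z)
                 for x, y, z in product(range(N), repeat=3)
                 if R(x, y) and R(y, z))

print(reflexive, symmetric, transitive)  # False False True
```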
Any explanation why obtained sum gives "Near Integer" result? | There is nothing magical about this sum. Remember that $e^{-\pi\sqrt{163}}\approx 4\cdot10^{-18}$ is a small positive number. When you substitute $k=0$, you get the main term $=94$. The other terms are all tiny.
If you don't believe this, try the following. Compute the same sums with $164, 165,\ldots$ instead of $163$. |
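To see concretely how small the non-main terms are, here is a tiny mpmath sketch (my own, following the suggestion above):

```python
from mpmath import mp, exp, pi, sqrt

mp.dps = 30  # 30 digits of working precision
for N in (163, 164, 165):
    # the size of the correction terms beyond the k = 0 main term
    print(N, exp(-pi * sqrt(N)))
# 163 gives approximately 3.8e-18, and 164, 165 are of similar magnitude
```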
Complex sequence $(z_i)$ such that $\sum_i z_i^k$ converges and $\sum_i \vert z_i \vert^k$ diverges for all $k$ | Let $\alpha=(1+\sqrt{5})/2$. It has the Diophantine property $|e^{i k\alpha}-1| \geq \frac{c}{k^2}$ for some constant $c>0$ and all integers $k\neq 0$. Then
$$ a_n = r_n e^{i\alpha n}= \frac{1}{\log(n+1)} e^{i\alpha n}$$
will have the desired property. First,
$$ \sum_n |a_n|^k \geq \sum_n r_n^k= \sum_n \frac{1}{(\log(n+1))^k} = + \infty$$
Second, consider $$S_n=\sum_{p=1}^{n} e^{ipk\alpha}= e^{ik\alpha}\frac{e^{ink\alpha} -1}{e^{ik\alpha}-1} $$
for which $|S_n|\leq 2 k^2/c$. Then by an Abel partial summation
$$ \sum_{n= 1}^{N} a_n^k = \sum_{n=1}^{N} (S_{n}-S_{n-1}) r_n^k = \sum_{n=1}^{N} S_{n} (r_{n}^k -r_{n+1}^k)+S_{N} r_{N+1}^k $$
and $$\sum_n|S_n(r_{n}^k-r_{n+1}^k) | \leq \frac{2k^2}{c} \sum_n (r^k_{n}-r^k_{n+1}) = \frac{2k^2}{c} r_1^k $$
It follows that $\sum_n a_n^k$ is convergent.
As is evident from the proof you may use any irrational rotation number $\alpha$ and any decreasing sequence $r_n$ going to zero sufficiently slowly (so that $\sum_n r_n^k=+\infty$ for all $k$). |
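A numerical illustration of this behaviour (my own sketch, not part of the proof): the partial sums of $a_n^k$ stabilise while $\sum_n |a_n|^k$ keeps growing.

```python
import cmath, math

alpha = (1 + math.sqrt(5)) / 2  # the golden ratio, as above
k = 2

partial, abs_sum = 0j, 0.0
for n in range(1, 200001):
    a_n = cmath.exp(1j * alpha * n) / math.log(n + 1)
    partial += a_n ** k
    abs_sum += abs(a_n) ** k
    if n % 50000 == 0:
        # |partial| settles down; abs_sum diverges (very slowly)
        print(n, abs(partial), abs_sum)
```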
If $x \in \operatorname{cl}(A)$, where $A$ is a connected subspace of a topological space $X$, then $A \cup \{x\}$ is connected. | The proof you gave is fine, I think. One can prove something slightly more general:
Let $A$ be a connected subspace of $X$ and suppose $A \subseteq D \subseteq \overline{A}$. Then $D$ is also connected.
My preferred proof uses the following well-known characterisation of connectedness:
$X$ is connected iff every continuous function from $X$ to $\mathbf{2}$ (the discrete space $\{0,1\}$) is constant.
Let $f: D \to \mathbf{2}$ be continuous. Then we know that $f|A$ is constant with value $i_A \in \{0,1\}$. So $f$ and the constant function with value $i_A$ agree on $A$, both are continuous, and as $A$ is dense in $D$ (from $D \subseteq \overline{A}$) and $\mathbf{2}$ is Hausdorff, another classic theorem says that $f$ agrees with the constant map with value $i_A$ on the whole of $D$.
So, $f$ is constant, and $D$ is connected.
In particular $\overline{A}$ is connected. |
Why does $\det (A)>0$ in this question of the section 8.1 (Hoffman and Kunze linear algebra book) | Taking $(X_1, X_2)=(- A_{22}, A_{12})$ works. |
Homotopy of certain maps induced homotopies | $\alpha_\gamma(t)$ just represents the angle made by the vector drawn from $\gamma(t)$ to $\psi(\gamma(t))$, which you could write down an explicit formula for if you wanted to.
Although it is true that $\alpha_\gamma(t)\in[0,2\pi)$, if you define it this way, it won't be continuous since it will in general jump from $2\pi$ back to $0$. You could let $\alpha_\gamma(t)$ take values in $\mathbb R$ to get around this, although all paths are homotopic in $\mathbb R$ since it is contractible. (If you want to also fix the endpoints, then you need to assume that $\alpha_{\gamma_0}$ has the same endpoints as $\alpha_{\gamma_1}$, which it won't in general.)
I think the most natural thing is simply to think of $\rho_{\gamma}(t)$ as an element of the circle. Then it makes sense to ask whether $\rho_{\gamma_0}$ is homotopic to $\rho_{\gamma_1}$ for any pair of initial arcs. |
Sequence of continuous functions with unitary norm | Let $f_n(x)=|\sin(2^n\pi x)|$. For $m>n$, you have $f_n(2^{-(n+1)})=|\sin(\pi/2)|=1$ but $f_m(2^{-(n+1)})=|\sin(2^{m-n-1}\pi)|=0$ since $2^{m-n-1}\pi$ is an integer multiple of $\pi$. |
How to compute $\lim_{x \to \infty} \sqrt{x}(e^{-1/x}-1)$? | Let $\dfrac1x=h$
$$\lim_{h\to0^+}\frac{e^{-h}-1}{\sqrt h}$$
$$=\lim_{h\to0^+}\frac{e^h-1}h\cdot\frac{\sqrt h}{-e^h}=1\cdot\frac{0}{-1}=0$$ |
Area of triangle and determinant | Let $A, B, C$, $P, Q, R$ be the $6$ column vectors
$$
\begin{cases}
A^T = (1, \sqrt{2}a, a^2),\\
B^T = (1, \sqrt{2}b, b^2),\\
C^T = (1, \sqrt{2}c, c^2)
\end{cases}
\quad\text{ and }\quad
\begin{cases}
P^T = (1, \sqrt{2}p, p^2),\\
Q^T = (1, \sqrt{2}q, q^2),\\
R^T = (1, \sqrt{2}r, r^2)
\end{cases}
$$
Using identities of the form
$$(1+ap)^2 = 1 + 2ap + a^2p^2 = 1\cdot 1 + \sqrt{2}a\cdot\sqrt{2}p + a^2\cdot
p^2 = A\cdot P$$
We can rewrite the determinant at hand as
$$\Delta \stackrel{def}{=}\begin{vmatrix}
(1+ap)^2 & (1+bp)^2 & (1+cp)^2 \\
(1+aq)^2 & (1+bq)^2 & (1+cq)^2 \\
(1+ar)^2 & (1+br)^2 & (1+cr)^2 \\
\end{vmatrix}
= \begin{vmatrix}
A\cdot P & B\cdot P & C\cdot P \\
A\cdot Q & B\cdot Q & C\cdot Q \\
A\cdot R & B \cdot R & C\cdot R \\
\end{vmatrix}
$$
Notice that the matrix of the rightmost determinant is a product of two $3 \times 3$ matrices
$$
\begin{bmatrix}
A\cdot P & B\cdot P & C\cdot P \\
A\cdot Q & B\cdot Q & C\cdot Q \\
A\cdot R & B \cdot R & C\cdot R \\
\end{bmatrix}
= \left[ P, Q, R\right]^T \left[A, B, C\right]
$$
This leads to (up to a sign),
$$\Delta = \begin{vmatrix}
1 & \sqrt{2}p & p^2 \\
1 & \sqrt{2}q & q^2 \\
1 & \sqrt{2}r & r^2 \\
\end{vmatrix}
\begin{vmatrix}
1 & 1 & 1\\
\sqrt{2}a & \sqrt{2}b & \sqrt{2}c \\
a^2 & b^2 & c^2 \\
\end{vmatrix}
= 2
\begin{vmatrix}
1 & p & p^2 \\
1 & q & q^2 \\
1 & r & r^2 \\
\end{vmatrix}
\begin{vmatrix}
1 & a & a^2 \\
1 & b & b^2 \\
1 & c & c^2 \\
\end{vmatrix}
= 2(2\times 3)(2\times\frac14) = 6
$$ |
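This factorisation is easy to confirm symbolically. A minimal sympy sketch (my own check, using generic symbols rather than the specific triangle data):

```python
import sympy as sp

a, b, c, p, q, r = sp.symbols('a b c p q r')

# the determinant at hand: rows indexed by p, q, r; columns by a, b, c
Delta = sp.Matrix(3, 3, lambda i, j:
                  (1 + [a, b, c][j] * [p, q, r][i]) ** 2).det()

# plain 3x3 Vandermonde determinant
V = lambda x, y, z: sp.Matrix([[1, x, x**2],
                               [1, y, y**2],
                               [1, z, z**2]]).det()

print(sp.simplify(Delta - 2 * V(p, q, r) * V(a, b, c)))  # 0
```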
How to solve recursive proof? | You must assume that the hypothesis is correct for $1,2,3,\cdots ,k$. Then all you need to show is that if $$a_k=2^k+(-1)^k$$ and $$a_{k-1}=2^{k-1}+(-1)^{k-1}$$
then $$a_{k+1}=a_{k}+2a_{k-1}=2^k+(-1)^k+2[2^{k-1}+(-1)^{k-1}]=2^{k+1}+(-1)^{k+1}$$ |
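A quick script (my own sketch, seeding the recurrence with the values $a_0=2$, $a_1=1$ that the closed form gives) confirms both the recurrence and the formula:

```python
# verify a_{k+1} = a_k + 2*a_{k-1} matches a_k = 2^k + (-1)^k
closed = lambda k: 2 ** k + (-1) ** k

a = [closed(0), closed(1)]           # a_0 = 2, a_1 = 1
for k in range(1, 20):
    a.append(a[k] + 2 * a[k - 1])    # the recurrence

print(all(a[k] == closed(k) for k in range(len(a))))  # True
```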
Prove a conjecture, balls in boxes, n steps | Alright! It's possible for all $n \geq 5.$ The underlying trick is actually pretty neat and simple, but we have to apply small tweaks for $4$ cases, depending on the remainder of $n/4.$ Because of this, I'm going to give a short sketch first.
The big idea is that we only actually need $2$ bins until the very last move. (it's easy to see the last move must always go from $(n, 0, 2n)$ or $(2n, 0, n)$ to $(n,n,n).$ Make sure you see why!) Once we've reduced it to two bins, the $i$th move must either take $i$ balls from one bin and put them in the other, or the opposite. It also means that if I tell you how many balls are in one bin, you automatically know how many are in the other. In my solutions/examples, I'll always pretend the two bins are the first (A) and the last (C). Now, let's look at what happens to $C$ over many steps. Say $T_i(n)$ is the number of balls in bin $C$ after $i$ steps. Then we have the following
$$T_0(n) = 0$$
$$T_i(n) = T_{i-1}(n) \pm i$$
I.e., solutions correspond to sums of $1, 2, \ldots , (n-1)$ with a choice of signs! $$T_{n-1}(n) = \pm 1 \pm 2 \pm 3 \pm\cdots \pm (n-1).$$
We have additional constraints, like $T_i(n)$ (the $i$th partial sum of the above) must always be positive, and always less than $3n.$ We've got a solution if $T_{n-1}(n)$ is either $n$ or $2n.$ Otherwise, we can always convert from this running total of the number of balls in bin C to an actual solution.
So here's the recipe:
Start with a special sum $S$ of the form $\pm 1 \pm 2 \ldots$ that
ignores the magnitude constraints.
Change the first few terms of $S$ so that it meets the magnitude constraints, but now sums to something too big.
Change some intermediate term, and the final term of $S$ so that we return to the right value.
After this, we just have to check we're respecting the upper and lower bounds required. Then convert back into a solution to the original balls/bins problem. First, I'll introduce the special sum $S$ in the case where $n$ is odd. We'll deal with the even $n$ case later.
Assume for the moment that $n = 2k+1,$ i.e. $n$ is odd.
Let $$S(n) = 1 + 2 - 3 + 4 - 5 + 6 - \ldots + (n-6) - (n-5) + (n-4) - (n-3) + (n-2) + (n-1),$$ i.e. the sum of $1, \cdots, n-1$ with the following signs:
$a$ and $n-a$ have the same sign
for even $ a < n/2,$ $a$ has a positive sign
for odd $1 < a < n/2,$ $a$ has a negative sign
$1$ has a positive sign.
This gives a series with $n-1$ terms, $S(n).$ Note that since $a$ and $n-a$ have the same sign, we can combine them to get $S(n) = n + n - n + n - n + \ldots$ where we have a total of $k$ terms. If $k$ is odd, this totals to $n,$ while if $k$ is even, this totals to $2n.$ Let $S_i(n)$ denote the partial sums of $S(n),$ so that $S_2(n) = 1 + 2, S_4(n) = 1 + 2 - 3 + 4,$ and so on.
Fact 1: $S_i(n) > -n.$ This follows from rewriting $S_i(n)$ as $1 + (2-3) + (4-5) + \ldots,$ in the first half of the series, with the pattern flipping at the midpoint. So our minimum is achieved at either $i=k-1$ or $i=k+1$ (recall $k = (n-1)/2$), depending on whether the sign on $k$ is positive or negative. We get slightly more than $-k/2$ over the first half, and an additional $-(k+1)$ in the worst case, which is all greater than $-n.$
Fact 2: $S_i(n) \leq 2n.$ This follows very similarly. We bracket $S_i(n)$ as $1 + 2 + (-3 + 4) + (-5+6) \ldots,$ and note that in the first half we're gaining at most $3 + k/2.$ At the midpoint we may spike up to $k + 4 + k/2,$ after which we descend until the very last few steps $(n-2) + (n-1),$ where we go from $3$ to $2n$ or something much smaller to $n,$ depending on if $k$ is even or odd.
We'll also need the fact that these bounds are sharper on the first half of the sequence ($i < k$), where we'll have an approximately $k/2$ bound either way. Depending on if $k$ is even or odd, this may be tight. Further, our upper bound is actually $3k/2$ except at these final steps.
Now, let's begin with the odd cases.
ODD CASES
$$n= 4m + 3$$
(While I realize the order I'm doing cases in is a bit unorthodox, I promise the cases are roughly in order of difficulty.)
In this case, $k=2m+1$ is odd, and hence $S(n) = n.$ We alter $S(n)$ so that the partial sums are all positive without changing the final total. Call $E(j) = 3 + 5 + 7 + 9 + \ldots + j$ the error of $j,$ because this is the amount $S_i(n)$ will change by if we set the signs of all numbers up to $j$ to positive. Find the smallest odd $j$ such that $E(j) \geq k$ and $E(j)$ is odd.
If $n$ is large enough (see appendix for a sketch of how large), then we can modify $S(n)$ as follows:
we change all signs of numbers $\leq j$ to positive.
we change the sign of $n-1-E(j)$ from negative to positive. (we're assuming $n-1-E(j) < k$)
we change the sign of $n-1$ from positive to negative.
Call this $T(n),$ with partial sums $T_i(n)$ defined just like for $S_i(n).$ Note that $T_i(n)$ is always positive, because we are only adding values until step $j,$ after which we have $T_i(n) - S_i(n) > 2E(j) > n-1$ for $i \in [j, n-2],$ which means $T_i(n)$ is greater than $n-1 + S_i(n) \geq 0$ (we're using our lower bound on $S(n)$). Further, since $k$ is odd, $S_i(n) \leq n,$ which implies $T(n)$ is never larger than $n + 2 (n-1) < 3n.$ So $T(n)$ is valid, and we've done it! $T(n)$ gets converted, as we discussed in the intro, into a solution for distributing the $3n$ balls!
EXAMPLE
Since we've abstracted things pretty far from the original setting, let's try it with $n=31.$ We compute:
$E(3) = 3, E(5) = 8, E(7) = 15$
so $j=7$ in this case, which has the happy accident that $E(7) = k.$ Since $n-1 = 30,$ we will also need to put a plus sign on $30-E(7) = 15.$ Finally, we'll put a minus sign on $30$ itself. This gives our answer series as:
$1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 - 9 + 10 - 11 + 12 - 13 + 14 +15 - 16 + 17 - 18 + 19 - 20 + \ldots + 27 - 28 + 29 - 30$
which in turn corresponds to the solution
$(93, 0, 0) \rightarrow_{1-8} (57,0,36) \rightarrow_{9-14} (54,0,39) \rightarrow_{15} (39,0,54) \rightarrow_{16-29} (32,0,61) \rightarrow_{30} (62, 0, 31) \rightarrow_{31} (31,31,31)$
(subscripts indicate what steps are happening during each arrow, I've grouped away the repetitive steps, much like the bracketing used for the upper and lower bounds).
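To be safe, here's a tiny script (my own verification, not part of the argument) that re-checks this $n=31$ sign sequence: all partial sums stay strictly between $0$ and $3n = 93$, and the total is $n = 31$, so the final move can finish at $(31,31,31)$:

```python
n = 31
signs = ([+1] * 8                                    # +1 ... +8
         + [(-1) ** (i + 1) for i in range(6)]       # -9 +10 ... -13 +14
         + [+1]                                      # +15
         + [(-1) ** (i + 1) for i in range(14)]      # -16 +17 ... -28 +29
         + [-1])                                     # -30

total, ok = 0, True
for step, s in enumerate(signs, start=1):
    total += s * step                 # balls in bin C after this step
    ok = ok and 0 < total < 3 * n     # magnitude constraints
print(ok, total, total in (n, 2 * n))  # True 31 True
```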
$$n = 1 + 4m$$
We'll use the same notation as before, but now $k=2m$ is even and so $S(n) = 2n.$ Note that for $i \in [k, n-2]$ the sign of $i$ is positive if $i$ is odd, and otherwise negative. This time we'll choose the smallest $j$ so that $E(j) > m$ and $E(j)$ is odd.
We modify $S(n)$ exactly as before to make $T(n).$ This time, $S_i(n)$ is always at least $-k/2,$ so the lower bound $T_i(n) > 0$ is trivial. For the upper bound, because $n$ is large enough we have
$$T_i(n) \leq S_i(n) + 2 E(j) \leq 3m + 1 + 8m \leq 12m < 3n$$ for $i < n-2.$ We know a lot about the last few terms (most things in $S(n)$ have canceled at this point), so a little arithmetic tells us
$$T_{n-2}(n) = T_{n-3}(n) + n-2 = S_{n-3}(n) + 2(n-1) + n-2= 3n-1 < 3n$$ and
$$T_{n-1} = 3n-1 - (n-1) = 2n$$ as desired! So we've got a valid sequence for all odd $n$ now!
What about even $n$?
It turns out (see the Appendix at the bottom of the post) that we can't quite use this same strategy for all even $n,$ but a very small tweak fixes things. The key observation is the following set of moves:
$$(3n, 0, 0) \rightarrow_1 (3n-1, 1, 0) \rightarrow_2 (3n-3, 3, 0) \rightarrow_3 (3n-3, 0, 3) $$
it's as though we skipped the third move! We've gone from a bin with 3n-3 balls in it and a bin with 3 balls in it to exactly the same setup, but now our next move will move $4$ balls. The same trick can be used for any $c = a + b,$ we can move $a$ and $b$ into the middle bin on their respective moves, then move the $c$ balls in the middle bin to wherever $a,b$ should have gone. For us, this means that whenever $a,b$ both have the same sign, we can use the trick to remove the $\pm c$ term from the sum.
And there's one step in particular that we'd really like to remove: $\frac n 2$ (which we'll call $k$ in this section). When $n$ is even, $n-1$ is odd, and we have broken the nice pairing symmetry (i.e., $a$ and $n-a$) we used to make the sequence $S(n),$ because there's nothing left to pair with $k.$ We'll have a little extra casework depending on if $k$ is even or odd.
With this trick in mind, we'll define $S'(n)$ and $S'_i(n)$ as with $S(n),$ except that $S'(n)$ will omit $k$ from the sum. For sanity reasons, we'll say $S'_k(n) = S'_{k-1}(n)$ (so that $S'_i(n)$ still denotes the total number of balls in the second bin after $i$ steps). Note that all our bounds from before still work just as well, and we have
$S'(n) = n$ if
$$n-1= 1 + 2(2m-1), \text{ i.e. } n = 4m$$
($n-1$ terms, arranged with $1$ middle guy we're skipping, and an odd number on each side so that everything but $1 + (n-1)$ cancels) while $S'(n) = 2n$ if $$n-1 = 1 + 2(2m), \text{ i.e. } n = 4m+2.$$
$$n = 4m$$
First, let's deal with our new trick. In this case, $k = 2m$ is even, so we can write $k= (m+1) + (m-1),$ and rest assured these two have the same sign. Perform our trick so that we do not have a $k$th summand. Now, define $E(j) = 3 + 5 + 7 + \ldots + j$ as before, and choose the smallest $j$ so that $E(j) > 3m$ and $E(j)$ is odd.
We get $T(n)$ from $S'(n)$ by performing the (now familiar) alternations:
Take a positive sign for all $i < j.$
Change the sign of $n-1-E(j)$ from negative to positive.
Change the sign of $n-1$ from positive to negative.
For our lower bound, again $T_i(n)$ is $2E(j)$ larger than $S'_i(n),$ until $i > m-1.$ Then, since we're storing $m-1$ and $m+1$ for later, we may be as much as $2m=k$ smaller than expected. Hence our choice of $E(j) > 3m,$ so that we still have $$T_i(n) - S'_i(n) \geq 4m - S'_i(n) > 0.$$ For the upper bound, since $S'_i(n) \leq 4m$ ($k$ even case way above) and $$T_i(n) - S'_i(n) \leq 8m$$ (our total gain over $S'$ is at most $2(n-1)$), we have $$T_i(n) \leq 12m = 3n,$$ as desired. So this case works too!
Final Case
$$n = 4m + 2$$
Alright, we've made it to the final case. This time, $k=2m+1$ is odd, and we're forced to take exactly this decomposition. I.e., we'll be moving $2m$ and $1$ into the middle bin, both of which have positive signs in $S'(n),$ and then move them into the third bin on the $k$th step.
This time, we're picking the minimum $j$ so that $E(j) > m$ and $E(j)$ is odd. We get $T(n)$ exactly as before.
It remains to check the upper and lower bounds. The lower bound is nice in this case, since $S'_i(n) \geq -m$ and our decomposition doesn't change much (we have $1$ ball in the middle bin until the $k-1$st step, where we get $k$ balls in the middle bin and then immediately evict them).
For the upper bound, our two local maxima for $S'_i(n)$ are at $i = k+1$ and $i > n-2,$ with both $S'$ and $T$ decreasing between the two. The $n-2$ case is identical to before, while $S'_{k+1}(n) = 2m + 1 + m + 3$ and $T_i(n) - S'_i(n) \leq 8m-2,$ so $T_i(n) \leq 8m-2 + 3m + 4 < 12m.$
This does it!
Example 2
I'll conclude with another example when $n$ is even, for added clarity. Take $n=50,$ so we'll need $j= 11$ (hence $E(j) = 35$ and $n-1-E(j) = 15$). The sequence $T(n)$ will be:
$$1 + 2 + 3 + \ldots + 9 + 10 + 11 + 12 - 13 + 14 + 15+ 16 -17 + 18 - 19 + \ldots + 24 + 0 + 26 - 27 + \ldots - 47 + 48 -49$$
which corresponds to the sequence of balls in bins:
$$(150, 0, 0) \rightarrow_1 (149, 1, 0) \rightarrow_{2-12} (72, 1, 77) \rightarrow_{13-14} (71, 1, 78) \rightarrow_{15} (56, 1, 93) \rightarrow_{16-24} (39, 25, 87)\rightarrow_{25} ( 39, 0, 111) \rightarrow_{26-47} (49, 0, 101) \rightarrow_{48} (1, 0, 149) \rightarrow_{49} (50, 0, 100)$$
Appendix
Here I'm collecting miscellaneous results, and some annoying inequalities, that I didn't want to include above.
A short proof that $n=2 + 4m$ cannot be done with only $2$ bins until the last step:
Consider the bins mod $2.$ Since $n$ is even, after $n-1$ steps we must end up with all bins having an even number of balls in them. Since we're only using $2$ of the bins, every move changes the number of balls in both bins by $\pm i.$ Modulo $2,$ the sign does not matter. So, we must have
$$0 \equiv \sum_{i=1}^{n-1} i \equiv \sum_{\substack{1\le i\le 4m+1 \\ i\ \mathrm{odd}}} 1 \equiv 2m+1 \equiv 1 \pmod 2\,,$$
a contradiction. So we're forced to do something with the third bin in this case.
(Here I'm putting the precise meaning of '$n$ large enough.' It's not terribly insightful in my opinion, but I'm including it for completeness.)
For the n = 3 mod 4 case:
Our transformation needs the following inequality in order to be well defined:
$j < n-1-E(j) $
Since $E(j)$ grows quadratically with $j,$ eventually $j < \epsilon E(j)$ for any $\epsilon > 0.$ Since $j$ is also how much $E(j)$ differs from $E(j-2),$ this means that for $n$ large enough, $E(j)$ is a good approximation for $k$; it can only differ by $j \approx \epsilon E(j).$ Taking $\epsilon \approx \frac 1 8$ suffices, and a short computation shows that this is reached for $n > 50.$ As the examples above show, smaller $n$ often work. A computation confirms that it's always possible for $n<50,$ though I do not want to copy over valid sequences for each...
One might also worry that $E(j)$ could end up greater than $n-1,$ but the same argument shows this can't happen for $n > 14.$
Finally, I note that an extremely similar series actually works for $n>10,$ but we must occasionally take $j$ smaller, and make sure that $n-1-E(j)$ has a negative sign. I chose to take the less general algorithm for simplicity of exposition. |
Relations of $S^2 V$ and highest weight representations of Lie algebras. | $S^2(V)$ contains a highest weight vector of weight $2\omega_1$, and this weight space is $1$-dimensional. Also, $\dim(S^2(V))=\frac{n(n+1)}{2}$
and $\dim(V(2\omega_1))=\prod_{j=2}^{n}\frac{a_1+j-1}{j-1}=\frac{n(n+1)}{2}$, where $a_1=2$ is the coefficient of $\omega_1$. Since $S^2(V)$ contains a highest weight vector of weight $2\omega_1$, $V(2\omega_1)$ is a subrepresentation of $S^2(V)$.
As a result, $S^2(V)\cong V(2\omega_1)$. |
Why is it when we restrict the homotopy to $t=0$ and $t=1$, we get $\phi_{0*}([f])=\beta_h(\phi_{1*}([f]))$? | Note that $h_0$ is a loop which is constant at $\varphi_0(x_0)$. Hence setting $t = 0$ in $h_t \cdot (\varphi_t f) \cdot \overline{h_t}$ gives something homotopic to $\varphi_0 f$. For $t = 1$ we have $h_1 = h$ and thus $[h_1 \cdot (\varphi_1 f) \cdot \overline{h_1}] = \beta_h([\varphi_1 f])$ by definition of $\beta_h$.
Therefore, via the homotopy $h_t \cdot (\varphi_t f) \cdot \overline{h_t}$ we have $[\varphi_0 f] = \beta_h([\varphi_1 f])$ or (by definition of $\varphi_{i,*}$) $$\varphi_{0*}([f])=\beta_h(\varphi_{1*}([f])).$$ |
Determine the convergence of $ \sum_{n=2}^ \infty \frac{(1)^n}{n(n-1)}$ and $ \sum_{n=2}^ \infty \frac{(-1)^n}{n(n-1)}$ | If you use partial fraction decomposition on $\frac{1}{n(n-1)}$, you get $\frac{1}{n-1} - \frac{1}{n}$. The series telescopes, eliminating all terms except $\frac{1}{2-1}=1$; thus the first sum equals $1$. Since $\sum \frac{1}{n(n-1)}$ converges, the second series converges absolutely, hence it converges as well. |
What is the value of $\cot70+4\cos70?$ | Just using sum to product formulas:
$$\begin{aligned}
\cot70^\circ+4\cos 70^\circ &=\dfrac{\cos70^\circ+4\cos70^\circ\sin70^\circ}{\sin70^\circ}\\ &= \dfrac{\sin20^\circ+2\sin140^\circ}{\cos20^\circ}\\
&= \dfrac{\sin20^\circ+2\sin 40^\circ}{\cos20^\circ}\\
&= \dfrac{\sin20^\circ+\sin40^\circ+\cos50^\circ}{\cos20^\circ}\\
&= \dfrac{2\sin30^\circ\cos10^\circ+\cos50^\circ}{\cos20^\circ}\\
&= \dfrac{\cos10^\circ+\cos50^\circ}{\cos20^\circ}\\
&= \dfrac{2\cos20^\circ\cos30^\circ}{\cos20^\circ}\\
&=\sqrt{3}\\
\end{aligned}
$$ |
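A one-line numerical confirmation (my own check):

```python
import math
d = math.radians
print(1 / math.tan(d(70)) + 4 * math.cos(d(70)), math.sqrt(3))
# both print 1.7320508075688772...
```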
Probability of independent random variables X, Y and 1 form a triangle | To find the probabilities $\ p(Y\leq 1-X)\ $, $\ p(Y\leq X-1)\ $, and $\ p(Y\geq X+1)\ $, you have to integrate the joint density function $\ \frac{x}{2}\cdot\frac{1}{3}=\frac{x}{6}\ $ of $\ X\ $ and $\ Y\ $ over the regions in the $x$-$y$ plane representing these inequalities. These are shown in blue, green and red, respectively, in the diagram below. Call these regions $\ B\ $, $\ G\ $, and $\ R\ $:
$$\ B=\left\{\,\left(x,y\right)\vert\,x\ge 0, y\ge 0, y\le 1-x\,\right\}\ ,$$
$$\ G=\left\{\,\left(x,y\right)\vert\,0\le x\le 2, y\ge 0, y\le x-1\,\right\}\ ,$$
$$\ R=\left\{\,\left(x,y\right)\vert\,x\ge 0, x+1\le y\le 3\,\right\}\ .$$
Then
\begin{eqnarray}
p(Y\leq 1-X) &=& \iint_B \frac{x}{6}dydx\\
&=& \int_0^1\hspace{-0.6em}\int_0^{1-x}\frac{x}{6}dydx\\
&=& \int_0^1 \frac{x\left(1-x\right)}{6}dx\\
&=& \frac{1}{36}
\end{eqnarray}
\begin{eqnarray}
p(Y\leq X-1) &=& \iint_G \frac{x}{6}dydx\\
&=& \int_1^2\hspace{-0.6em}\int_0^{x-1}\frac{x}{6}dydx\\
&=& \int_1^2 \frac{x\left(x-1\right)}{6}dx\\
&=& \frac{5}{36}
\end{eqnarray}
\begin{eqnarray}
p(Y\geq X+1) &=& \iint_R \frac{x}{6}dydx\\
&=& \int_0^2\hspace{-0.6em}\int_{x+1}^3\frac{x}{6}dydx\\
&=& \int_0^2 \frac{x\left(2-x\right)}{6}dx\\
&=& \frac{2}{9}
\end{eqnarray}
So $\ p\left(\Delta\right) = 1 - \frac{1}{36}-\frac{5}{36}-\frac{2}{9} =\frac{11}{18}.$
As an alternative, it's a little easier to calculate $\ p\left(\Delta\right) = p\left(\,\{X+Y>1\}\, \&\, \{X+1>Y\}\,\&\, \{Y+1>X\}\ \right) $ directly, by integrating the density function over the region coloured grey in the diagram (call it $\ D\ $):
\begin{eqnarray}
p\left(\Delta\right) &=& \iint_D \frac{x}{6}dydx\\
&=& \int_0^1\frac{x}{6}\int_{1-x}^{x+1} dydx+\int_1^2\frac{x}{6}\int_{x-1}^{x+1} dydx\\
&=& \int_0^1 \frac{x^2}{3}\,dx+\int_1^2\frac{x}{3}\,dx\\
&=& \frac{11}{18}\ ,
\end{eqnarray}
as before. |
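As a sanity check, here is a small Monte Carlo sketch (my own code; it samples $X$ with density $x/2$ on $[0,2]$ by inverse transform, $X=2\sqrt U$, and $Y$ uniform on $[0,3]$):

```python
import random

trials, hits = 10 ** 6, 0
for _ in range(trials):
    x = 2 * random.random() ** 0.5   # inverse-CDF sample: F(x) = x^2/4
    y = 3 * random.random()          # uniform on [0, 3]
    # X, Y, 1 form a triangle iff all strict triangle inequalities hold
    if x + y > 1 and x + 1 > y and y + 1 > x:
        hits += 1
print(hits / trials, 11 / 18)        # both close to 0.6111
```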
Can the time mean over a dense orbit equal the space mean for arbitrary functions? | If you can prove that
$$\lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1}f(\phi^n(p)) = \int_M f \, d\mu$$ for all $p$ and all integrable $f$,
then you are proving more than ergodicity: you are proving that the system is uniquely ergodic. Your example is not uniquely ergodic, so I think it would be hard to prove ergodicity the way you want. Instead, you should be able to select the points where the equality above fails, and check that the set of such points has measure $0$. |
Find the values of p that makes the series converge | If you need to find $\lim_{n\to\infty} {n\biggl(\Bigl(\frac{2n+2}{2n+1}\Bigl)^p - 1}\biggl)$ you can put
$\dfrac{2n+2}{2n+1}=1+\dfrac{1}{2n+1}=1+t\iff n=\dfrac{1-t}{2t}$ then your expression becomes
$$\lim_{t\to0}\left(\dfrac{1-t}{2t}\right)((1+t)^p-1)=\lim_{t\to0} \dfrac{(1+t)^p-1}{\frac{2t}{1-t}}$$
You can now apply l'Hôpital's Rule, so you find
$$\lim_{n\to\infty} {n\biggl(\Bigl(\frac{2n+2}{2n+1}\Bigl)^p - 1}\biggl)=\dfrac p2$$ |
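A quick numerical check of this limit (my own sketch), say with $p=3$:

```python
p = 3
for n in (10 ** 3, 10 ** 5, 10 ** 7):
    print(n * (((2 * n + 2) / (2 * n + 1)) ** p - 1))
# the values approach p/2 = 1.5
```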
Additive and Bijective function on the real line | While there are uncountably many wild additive bijections on $ \mathbb R $ (as @MarcoVergamini has indicated in a comment above), the original equation you encountered has only one injective/surjective solution: the identity function. It's straightforward to check that the identity function is a bijective solution. To see that it's the only injective/surjective solution, let $ x = y = 0 $ in
$$ f \big( f ( x - y ) \big) = f ( x ) - f ( y ) \tag { * } \label 0 $$
to get $ f \big( f ( 0 ) \big) = 0 $. Then, put $ x = f ( 0 ) $ and $ y = 0 $ in \eqref{0} to see that $ f ( 0 ) = 0 $. By setting $ y = 0 $ in \eqref{0} you have $ f \big( f ( x ) \big) = f ( x ) $. This means $ f ( x ) = x $ for every $ x $ in the range of $ f $. So, if $ f $ is supposed to be surjective, it follows that $ f $ must be the identity function. Also, if $ f $ is supposed to be injective, $ f \big( f ( x ) \big) = f ( x ) $ directly implies $ f ( x ) = x $ for all $ x $.
But there seems to be a problem with your claim that solutions of \eqref{0} must be bijective. Just similar to the construction of wild additive bijections, you can use a Hamel basis $ \mathcal H $ of $ \mathbb R $ over $ \mathbb Q $ to construct wild solutions of \eqref{0}: choose a nonempty subset $ \mathcal I $ of $ \mathcal H $, map every member of $ \mathcal I $ to itself and every member of $ \mathcal H \setminus \mathcal I $ to an arbitrary member of $ \mathcal I $, and finally extend this mapping linearly (over $ \mathbb Q $) to a function $ f : \mathbb R \to \mathbb R $. It's straightforward to check that this $ f $ will satisfy \eqref{0}. Therefore, unless any further restriction on $ f $ is given, it cannot be concluded solely from \eqref{0} that $ f $ must be injective/surjective.
But additivity of $ f $ is secured by \eqref{0}. $ f \big( f ( x ) \big) = f ( x ) $ can be used to rewrite \eqref{0} as $ f ( x - y ) = f ( x ) - f ( y ) $. Substituting $ x + y $ for $ x $ and rearranging the terms, that equation is transformed to the Cauchy functional equation. So, this part of your claim is definitely true. |
Relationship between proper orthochronous Lorentz group $SO^+(1,3)$ and $SU(2)\times SU(2)$, or their Lie algebras | $SO^+(3,1)$ is the so-called restricted Lorentz group, which is the identity component of the Lorentz group $SO(3,1)$. It is a six-dimensional real Lie group, which is not simply connected.
Since $SO^+(1,3)$ is not compact, but $SU(2)\times SU(2)$ is compact, the groups cannot be isomorphic as real Lie groups.
We have $SO^+(3,1)\simeq SL(2,\mathbb{C})/\mathbb{Z}_2\simeq SU(2)_{\mathbb{C}}/\mathbb{Z}_2$, i.e., the complexification of the restricted Lorentz group satisfies $$SO^+(3,1)_{\mathbb{C}}\simeq (SU(2)_{\mathbb{C}}\times SU(2)_{\mathbb{C}})/\mathbb{Z}_2.$$
In the same way, $\mathfrak{so}^+(3,1)_{\mathbb{C}}\simeq \mathfrak{su}(2)_{\mathbb{C}}\oplus \mathfrak{su}(2)_{\mathbb{C}}\simeq \mathfrak{sl}_2(\mathbb{C})\oplus \mathfrak{sl}_2(\mathbb{C})$. |
Proof involving integer squares and parity | Assume by way of contradiction $a$ and $b$ are odd, as you say we get that $$4(k^2+k+l^2+l)+2=c^2.$$ We have that $c^2$ is even thus $c$ is even, so $c=2m$ for some $m$. We get $$4(k^2+k+l^2+l)+2=4m^2.$$
Divide both sides by two:
$$2(k^2+k+l^2+l)+1=2m^2$$
This is a contradiction since LHS is odd and RHS is even. |
Question on Correspondences | $\varphi$ is neither upper-- nor lower hemicontinuous.
If $\varphi$ were lower hemicontinuous, it would have a continuous selection by the Michael selection theorem, which is not the case.
It holds that $\varphi(0)=\{0\}\subseteq (-\frac{1}{2},\frac{1}{2})$, while there exists no neighborhood $U$ of zero such that $z\in U$ implies $\varphi(z)\subseteq (-\frac{1}{2},\frac{1}{2})$. So $\varphi$ is not upper hemicontinuous. |
Finding the number of solutions of linear system | Whilst expanding the answer, I decided to entirely rewrite it.
There are the following possibilities for a linear system of equations $Ax=b$ with an $m\times n$-matrix $A$:
1) $A$ is injective, i.e. $Ax=0$ implies $x=0$. In this case, a solution does not always exist but if it exists, it has to be unique. Equivalently, the rank of $A$ equals $n$.
2) $A$ is surjective, i.e. for all $b$ there is some $x$ such that $Ax=b$. In this case, a solution always exists but it may not be unique. Equivalently, the rank of $A$ equals $m$.
3) $A$ is bijective, i.e. satisfies 1) and 2). Thus there always exists precisely one solution. Equivalently, the rank of $A$ equals $m=n$.
4) $A$ is neither injective, nor surjective. There might be a solution or not and if you find one, it may be unique or not. In this case you are basically unable to make any general statement about the solutions.
Your question is settled in case 1). Thus, in general, there may be solutions or not. When does a solution exist? Precisely when $b$ is in the column space of $A$, i.e. the vector space spanned by the columns of $A$. This is just the fact that the column space is the image (or range) of $A$. But whenever a solution exists, it has to be unique. Therefore, there can be none or one solution.
If you additionally know that $m=n$, then there is always precisely one solution.
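In computations, these cases are distinguished by the standard rank test (Rouché–Capelli): a solution exists iff $\operatorname{rank}(A)=\operatorname{rank}([A\mid b])$, and it is then unique iff that common rank equals $n$. A small numpy sketch of my own:

```python
import numpy as np

def solution_count(A, b):
    """Return 'none', 'unique', or 'infinite' for the real system Ax = b."""
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA < rAb:
        return 'none'          # b is not in the column space of A
    return 'unique' if rA == A.shape[1] else 'infinite'

A = np.array([[1., 2.], [2., 4.]])
print(solution_count(A, np.array([1., 2.])))  # 'infinite'
print(solution_count(A, np.array([1., 3.])))  # 'none'
```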
If something remained unclear, don't hesitate to ask. |
Searching radioactive balls | It does seem it's better to think about this as identifying the cold (non-radioactive) balls rather than hot balls, though it's logically equivalent. I think I found a way to solve the $n=14,m=3$ case with $9$ checks. Again, I assume we can simulate a worst case by having all tests come back positive, provided a negative result would tell us even more.
Try 1-4, we get positive.
Try 5-8, positive.
Try 9-11, positive. So we know 12-14 are cold.
For each of the first three groups, two binary checks will narrow down which ball is hot, adding six more for a total of 9 checks.
Clearly better than the $12$ I suggested originally. In general, the approach here is to initially divide the balls into $m+1$ groups, where we test all but one of them. One or more of those groups will be shown to be negative, and we drill down with binary search on the remaining groups as required.
I believe there's a good chance this approach is optimal. In particular, it naturally handles the $m=n-1$ case and the $m=1$ case, as well as a handful of other cases I could verify. This can be represented as
$$m+(n\bmod (m+1))\left\lceil\log_2\left\lceil\frac{n}{m+1}\right\rceil\right\rceil+\bigl(m-(n\bmod (m+1))\bigr)\left\lceil\log_2\left\lfloor\frac{n}{m+1}\right\rfloor\right\rceil\ ,$$
though there's probably a cleaner way to write that. The issue here which makes it uglier is that e.g. with $n=14,m=3$, we want to split into groups of $\{4,4,3,3\}$, and make sure we account for the required moves to binary-search all but the last entry, which we assume has been implied to be cold. Note that in this example, the first two $4$s don't take an extra check to binary-search compared to the $3$, but it won't always work out that way.
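Here is a minimal sketch of that count in Python (my own code; it ignores small-$n$ edge cases such as degenerate groups):

```python
from math import ceil, log2

def checks(n, m):
    g = m + 1                      # number of initial groups
    big = n % g                    # groups of size ceil(n/g)
    return (m                      # tests of all but one group
            + big * ceil(log2(ceil(n / g)))
            + (m - big) * ceil(log2(n // g)))

print(checks(14, 3))  # 9, matching the example above
```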
Some results using this approach: |
Finding the number of divisors of a number? | For any $n \in \mathbb{N}$, we have
\begin{align}
\left(n^2+3n+1\right)^2 &= n^4+6n^3+11n^2+6n+1\\\\
&=n(n+1)(n+2)(n+3)+1
\end{align}
In particular,
$$2011\cdot2012\cdot 2013\cdot 2014+1=(2011^2+3\cdot 2011 + 1)^2 = 4050155^2$$
We can factor $4050155$ by first dividing by $5$, and then realizing with difficulty that $191$ is a factor, to get:
$$4050155=5 \cdot 191 \cdot 4241$$
This means that the number you're interested in has the form $$p^2q^2r^2$$ for primes $p,q,r$. This means the number of divisors is $$3^3=\boxed{27}$$
The number of divisors function is usually denoted $\sigma_0$, and you can read about how to calculate it given a prime factorization here. $$\sigma_0\left(p_1^{a_1}\cdot p_2^{a_2} \cdots\right) = (a_1+1)(a_2+1)\cdots$$ |
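All of this is quick to confirm with sympy (my own check):

```python
import sympy as sp

N = 2011 * 2012 * 2013 * 2014 + 1
print(sp.sqrt(N))                    # 4050155, so N is a perfect square
print(sp.factorint(4050155))         # {5: 1, 191: 1, 4241: 1}
print(sp.divisor_count(N))           # 27
```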
Greatest value a such that the eigenvectors are linearly independent | There is no such $a$.
The characteristic polynomial of your matrix is $P(\lambda)=-\lambda^3-6\lambda^2-11\lambda-6+2a$. The discriminant of this polynomial is $4-108a^2$. The polynomial $P(\lambda)$ has only real roots if and only if $4-108a^2\geqslant0$, which means that $a\in\left[-\frac1{3\sqrt3},\frac1{3\sqrt3}\right]$. If $a\in\left(-\frac1{3\sqrt3},\frac1{3\sqrt3}\right)$, then there are three distinct real roots and therefore three linearly independent real eigenvectors. So, either $a=\frac1{3\sqrt3}$ is a solution of the problem or there is no solution.
However, a direct computation shows that, for this $a$, there are only two (up to multiplication by a non-zero scalar) real eigenvectors: $\left(3+\sqrt3,-3,1\right)$ (eigenvalue: $-\frac13\left(6+\sqrt3\right)$) and $\left(-6+4\sqrt3,6,1\right)$ (eigenvalue: $\frac23\left(-3+\sqrt3\right)$). |
Why doesn't u-Substitution work for $\int \ln({e^{6x-5}})\,dx$? | I bet the graphs are exactly the same except one is shifted up or down from the other. The only difference in the two answers is the $+C.$ The $C$'s aren't the same in both answers, but if you call one of them $D$, you can figure out how they're related.
See that
$$\frac{1}{12}(6x-5)^2+C = 3x^2 - 5x +\frac{25}{12} +C$$.
If $C$ is an arbitrary constant, then so is $\frac{25}{12}+C.$ |
Integral computation of $\int_0^\pi \mathrm d t \sin(a\cos t/2) \mathrm{sinh}(b\sin t/2)$ | One way to get an expression for the integral, although not in closed form, is to use the Jacobi-Anger expansion:
$${{\rm e}^{iz\cos \left( \theta \right) }}=\sum _{n=-\infty }^{\infty }
{i}^{n}{{\rm J}_n\left(z\right)}{{\rm e}^{in\theta}} \tag{1}$$
to expand the integrand in terms of Bessel functions of the first kind ($J_n$). In so doing, it can be shown that:
$$\sinh \left( {\frac {t\sin \left( \frac{\theta}{2} \right) }{\tau}}
\right) \sin \left( {\frac {y\cos \left( \frac{\theta}{2} \right) }{2u\tau}}
\right) =\sum_{n=-\infty}^{\infty} \left( \sum _{k=-\infty}^{\infty} i\left( -1 \right)
^{k}{{\rm J}_{2k+1}\left({\frac {y}{2u\tau}}\right)}
{{\rm J}_{2n+1}\left({\frac {-it}{\tau}}\right)}\sin \left( \frac{\,
\left( 2\,n+1 \right) \theta}{2} \right) \cos \left( \frac{\,
\left( 2\,k+1 \right) \theta}{2} \right) \right) \tag{2}$$
which can be written:
$$\frac{i}{2}\sum _{n=-\infty}^{\infty} \left( \sum _{k=-\infty}^{\infty}\left( -1
\right) ^{k}{{\rm J}_{2k+1}\left({\frac {y}{2u\tau}}\right)} \left(
{{\rm J}_{-2k-1+2n}\left({\frac {-it}{\tau}}\right)}+
{{\rm J}_{2k+1+2n}\left({\frac {-it}{\tau}}\right)} \right) \sin
\left( n\theta \right) \right) \tag{3}$$
and thus:
$$\int _{0}^{\pi }\!\sinh \left( {\frac {t\sin \left( 1/2\,\theta
\right) }{\tau}} \right) \sin \left( {\frac {y\cos \left( 1/2\,\theta
\right) }{2u\tau}} \right) {d\theta}=\sum _{n=-\infty}^{\infty} \left( \sum _{k=-\infty}^{\infty}\frac{i\left( -1
\right) ^{k}{{\rm J}_{2k+1}\left({\frac {y}{2u\tau}}\right)} \left(
{{\rm J}_{-2k+1+4n}\left({\frac {-it}{\tau}}\right)}+
{{\rm J}_{2k+3+4n}\left({\frac {-it}{\tau}}\right)} \right) }{2n+1} \right) \tag{4}$$
It may not look too pretty but the series often converges very rapidly and it may simplify further. Convolutions of Bessel functions are sometimes known as generalised Bessel functions although this may be a stretch in terms of analytic functions.
Update
This way is probably neater and simpler. Note that:
$$\sinh \left( {\frac {t\sin \left( \frac{\theta}{2} \right) }{\tau}}
\right) \sin \left( {\frac {y\cos \left( \frac{\theta}{2} \right) }{2u
\tau}} \right) =-\frac{i}{2} \left( \cos \left( x\sin \left( \frac{\theta}{2}+
\phi \right) \right) -\cos \left( x\sin \left( \frac{\theta}{2}-\phi
\right) \right) \right) \tag{i}$$
$$\phi=i\mathrm{arctanh} \left( {\frac {y}{2tu}} \right) \in \mathbb{C},\quad x=\frac{it}{2\tau}{\sqrt{4-{\frac {{y}^{2}}{{t}^{2}{u}^{2}}}}} \in \mathbb{C} \tag{ii} $$
where standard product-to-sum trig identities were used and then the two trig functions inside, each with the same argument, are written as one scaled and shifted trig function.
Then starting from $(1)$ you can prove:
$$-\frac{i}{2}\left[\cos \left( x\sin \left( \frac{\theta}{2}+
\phi \right) \right) -\cos \left( x\sin \left( \frac{\theta}{2}-\phi
\right) \right)\right] =\\
i\sum _{n=-\infty }^{\infty }{{\rm J}_n\left(x\right)}\sin \left( n\frac{\theta}{2}\right) \sin \left( n\phi \right) \tag{iii}$$
integrate and simplify to get:
$$\int _{0}^{\pi }\!-\frac{i}{2}\left[\cos \left( x\sin \left( \frac{\theta}{2}+
\phi \right) \right) -\cos \left( x\sin \left( \frac{\theta}{2}-\phi
\right) \right)\right] {d\theta}=\\4\,i\sum _{n=1}^{\infty }{\frac {
{{\rm J}_{4n-2}\left(x\right)}\sin \left( \left( 4n-2 \right)\,
\phi \right) }{2\,n-1}} \tag{iv}$$
which checks out for all numerical values I have tried so far. Note that the integrals on the left individually look like Bessel functions $(J_0)$ but for the $\phi$ shift. |
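For what it's worth, $(\mathrm{iv})$ is easy to spot-check with mpmath, which handles Bessel functions of complex argument; here is a minimal sketch of my own, using sample real values of $x$ and $\phi$ (the identity is analytic in both):

```python
from mpmath import mp, mpf, quad, cos, sin, pi, besselj, nsum, inf

mp.dps = 25
x, phi = mpf('1.3'), mpf('0.7')   # arbitrary sample values

lhs = quad(lambda th: -0.5j * (cos(x * sin(th / 2 + phi))
                               - cos(x * sin(th / 2 - phi))), [0, pi])
rhs = 4j * nsum(lambda n: besselj(4 * n - 2, x)
                * sin((4 * n - 2) * phi) / (2 * n - 1), [1, inf])
print(lhs - rhs)   # ~ 0 to working precision
```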
Question on the Axis-Angle Representation | This does deserve a little explanation: the comment is deceptive in that it must hide some conventions. I don't see anything in the article about this, but the following discussion seems necessary to justify it.
If you think about a unit vector of $\mathbb R^3$ called $(x,y,?)$, then strictly speaking there are two options for $?$, namely $\pm\sqrt{1-x^2-y^2}$. This is the constraint that the comment refers to.
So why doesn't it mention this wrinkle about two choices? Well, the axis of a transformation doesn't really depend on the direction of the unit vector $(x,y,z)$, just the line it lies in. So $(-x,-y,-z)$ would be as good as $(x,y,z)$, if we are representing our transformation. That being the case, we can always get the last coordinate to be positive if we wish.
From the beginning then, suppose we have an axis given by $(x,y,z)$ and angle of rotation. If $z$ is negative, we first replace this vector by $(-x,-y,-z)$ so that $z$ becomes positive. Then after normalizing the whole vector you have a unit vector pointing along the axis of rotation with positive $z$ coordinate. This can be our representation of the axis, and we probably choose the sign of our angle to maintain the right-hand rule of rotation using this vector.
Conversely, given any line through the origin, there's a unit vector with positive $z$ coordinate lying within the line. Now you can see that there is a $1-1$ correspondence of rotations with unit vectors in the upper-half-space $\{(x,y,z)\mid x,y\in \mathbb R, z\in \mathbb R^+\}$. This space can obviously be parameterized this way: $\{(x,y,\sqrt{1-x^2-y^2})\mid x^2+y^2<1\}$, with two parameters per element. |
Find a gradient without going into components. | To start, write
$$
\nabla \frac{1}{|\vec{x}-\vec{a}|}=\nabla[(\vec{x}-\vec{a})\cdot(\vec{x}-\vec{a})]^{-1/2}
=-\frac{1}{2}[(\vec{x}-\vec{a})\cdot(\vec{x}-\vec{a})]^{-3/2}\nabla[(\vec{x}-\vec{a})\cdot(\vec{x}-\vec{a})]
$$
Next, use the (ugly!) gradient-of-scalar-product identity
$$
\nabla(\vec{u}\cdot\vec{v})=(\vec{u}\cdot\nabla)\vec{v}+(\vec{v}\cdot\nabla)\vec{u}+\vec{u}\times(\nabla\times\vec{v})+\vec{v}\times(\nabla\times\vec{u}),
$$
which you can find in Wikipedia's "Del" article, to compute
$$
\begin{align}
\nabla[(\vec{x}-\vec{a})\cdot(\vec{x}-\vec{a})]&=2[(\vec{x}-\vec{a})\cdot\nabla](\vec{x}-\vec{a})+2(\vec{x}-\vec{a})\times(\nabla\times(\vec{x}-\vec{a}))\\&=2[(\vec{x}-\vec{a})\cdot\nabla]\vec{x}+2(\vec{x}-\vec{a})\times(\nabla\times\vec{x})
\end{align}
$$
Note that in the last line we dropped terms involving $\nabla$ acting on $\vec{a}$ because they are zero.
Now, note that $[(\vec{x}-\vec{a})\cdot\nabla]\vec{x}=\vec{x}-\vec{a}$ and $\nabla\times\vec{x} = 0$. These identities are easily proved using index notation:
The first is $(x_j-a_j)\nabla_j x_i=(x_j-a_j)\delta_{ji}=x_i-a_i$ and the second is $\epsilon_{ijk}\nabla_j x_k=\epsilon_{ijk}\delta_{jk}=\epsilon_{ijj}=0$. Note that both need the obvious identity $\nabla_i x_j=\delta_{ij}$.
Put this all together to get
$$
\nabla \frac{1}{|\vec{x}-\vec{a}|}=-[(\vec{x}-\vec{a})\cdot(\vec{x}-\vec{a})]^{-3/2}(\vec{x}-\vec{a})=-\frac{\vec{x}-\vec{a}}{|\vec{x}-\vec{a}|^3}
$$
The whole thing can be more easily done using index notation throughout:
$$
\begin{align}
\nabla_i\frac{1}{|\vec{x}-\vec{a}|}&=\nabla_i[(x_j-a_j)(x_j-a_j)]^{-1/2}\\
&=-\frac{1}{2}[(x_j-a_j)(x_j-a_j)]^{-3/2}\nabla_i[(x_k-a_k)(x_k-a_k)]\\
&=-[(x_j-a_j)(x_j-a_j)]^{-3/2}(x_k-a_k)\nabla_i(x_k-a_k)\\&=-[(x_j-a_j)(x_j-a_j)]^{-3/2}(x_k-a_k)\delta_{ik}\\
&=-\frac{x_i-a_i}{|\vec{x}-\vec{a}|^3}
\end{align}
$$ |
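If you want to double-check the final identity without any index gymnastics, a small sympy sketch (my own, not part of the derivation) confirms it componentwise:

```python
import sympy as sp

x, y, z, a1, a2, a3 = sp.symbols('x y z a1 a2 a3', real=True)
r = sp.sqrt((x - a1)**2 + (y - a2)**2 + (z - a3)**2)  # |x - a|

grad = [sp.diff(1 / r, v) for v in (x, y, z)]
expected = [-(x - a1) / r**3, -(y - a2) / r**3, -(z - a3) / r**3]
print([sp.simplify(g - e) for g, e in zip(grad, expected)])  # [0, 0, 0]
```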
What are the laws and issues surrounding *quantifying over sentences?* | Quantifying over sentences of a logic within that logic is (normally) sacrilege. However, quantifying over sentences in some meta-mathematics is (normally) fine.
So, for instance, it makes sense to say "there is no sentence $\phi$ such that $\mathrm{ZFC} \vdash \phi \land \neg \phi$". However, it is not correct to write something like "$\forall \phi(\neg(\phi \land \neg \phi))$" as a sentence in a logical language.
Of course, in a theory as strong as ZFC, we can formalize logic itself. That is, you choose an encoding of strings of characters like $v,0,1,(,),\neg,\forall,\land,\in$ into natural numbers within ZFC, you write a definition for when such a string represents a well formed formula, and you collect these elements of $\mathbb N$ into a subset $\mathrm{Form}$. Then, you can write $\mathrm{ZFC} \vdash \forall \phi \in \mathrm{Form}(...)$ -- however, what happens at the $...$ cannot be $\neg(\phi \land \neg \phi)$, because now the letter $\phi$ is a variable in the theory, and you cannot apply logical connectives to variables.
In fact, you might wonder if you could define a formula $\psi$ in the language of ZFC so that $\psi(\phi)$ corresponds to what we usually think of as "the truth of $\phi$" -- then you could cheat the above in by letting ... be $\neg(\psi(\phi) \land \neg\psi(\phi))$. As it turns out, this is impossible, for reasons closely related to the incompleteness theorem. |
Why is my solution incorrect for solving these quadratic equations? | You have to pay attention to your domain. In the first equation you get a positive and a negative value for $y$, while $\sqrt{x}$, which is your substitution, can only be positive. |
Calculate the operator norm on L2 | Hint
$A$ has rank 2 here, $V := \text{Im}(A)$ being spanned by the two orthogonal functions $u(x) = \cos x$ and $v(x) = \sin x$. Observe that
$A$ vanishes on $V^\bot$ and conclude that the maximum of $\|Af\|$ when $\|f\| = 1$ is reached when $f\in V$. You are left with the problem of computing the norm of an operator in a 2-dimensional space. Write $f = \lambda u + \mu v$ with $\lambda^2 + \mu^2 = C$, compute and maximize $\|A f\|$ |
What can we say if $A\twoheadrightarrow B$ and $A \rightarrowtail B$? | In general you can say nothing about $A$ and $B$.
Consider, for example, in the category of unital rings, the inclusion $\mathbb Z\to\mathbb Q$: it's monic and epic, but it isn't bijective. |
How to calculate P(X|W,Z) in a Bayesian network? | The definition of conditional probability is $P(A\mid B)=\frac{P(A\cap B)}{P(B)}$.
So, $P(X\mid W,Z)=\frac{P(X,W,Z)}{P(W,Z)}$. The denominator is easy to calculate. For the numerator, the event $X\cap W\cap Z$ is stating that $X,W$, and $Z$ all occur. There are 2 cases: Either $Y$ happens or it does not, and you can compute the probability of $X\cap W\cap Z$ both under the presence of $Y$ and in the absence of $Y$.
Can you take it from here? |
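As a generic illustration (entirely my own sketch, with a made-up uniform joint table since the network's numbers aren't given here), marginalising out $Y$ looks like this in code:

```python
from itertools import product

# hypothetical joint probabilities P(W, X, Y, Z) as a dict; in a Bayesian
# network these would come from multiplying the CPTs along the graph
joint = {wxyz: 1 / 16 for wxyz in product((0, 1), repeat=4)}

def p(pred):
    return sum(v for k, v in joint.items() if pred(*k))

# P(X=1 | W=1, Z=1) = P(X=1, W=1, Z=1) / P(W=1, Z=1), summing over Y
num = p(lambda w, x, y, z: w == 1 and x == 1 and z == 1)
den = p(lambda w, x, y, z: w == 1 and z == 1)
print(num / den)  # 0.5 for this uniform toy table
```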
Proving deduction theorem in Predicate Logic | See Goldrei's comment on the role of the thinning rule at page 223.
I think that there is no issue with it in the proof of the DT for predicate logic.
We have a derivation $Γ, \phi \vdash \psi$; by the definition on page 221, in it there is no use of Gen that quantifies a variable $x$ that is free in $\Gamma$ or $\phi$.
Thus, mimicking the propositional proof, in order to manage the step regarding the Gen rule we have to suppose that there is some $j < i$ such that $\psi_i$ is $∀x\psi_j$.
By the inductive hypothesis, $\Gamma \vdash \phi \to \psi_j$, and we know that $x$ is not a free variable of $\phi$; hence, by axiom (A5), we have $(∀x(\phi → ψ_j) → (\phi → ∀xψ_j))$.
Since $Γ ⊢ \phi \to \psi_j$, we have, by Gen, $Γ ⊢ ∀x(\phi \to \psi_j)$,
and so, by MP, $Γ ⊢ \phi \to ∀x\psi_j$; that is, $Γ ⊢ \phi \to \psi_i$. |
Show that the larger $c$ is the faster ${\rm d}U_t^c=\frac c2h'(U_t^c){\rm d}t+\sqrt c{\rm d}W_t$ converges to its stationary distribution | This is not an answer to your exact question and I am ignoring quite some details, but maybe it helps anyway. The generator of your process can be written as
$$ \mathcal L_c f=\frac{c}{2}(f'' +h'f'), \quad f\in C_c^2(\Bbb R),$$
and with $\mu=e^h\lambda$ you can check that
$$ \mathcal E_c(f):=-\int f \mathcal L_c fd\mu=\frac{c}{2}\int (f')^2d\mu.$$
If $h$ is concave and $\int fd\mu=0$, a classical result by Bobkov yields
$$ \int f^2d\mu \le C_1 \int (f')^2d\mu$$
for some $C_1>0$. Together, we have the Poincaré inequality
$$ \|f\|^2_{L^2(\mu)}\le\frac{2C_1}{c}\mathcal E_c(f) ,$$
which implies (by taking derivatives and using Gronwall's Lemma)
$$ \|\kappa^{(c)}_t(\,\cdot\,, f)\|_{L^2(\mu)}\le e^{- C_2 ct} \|f\|_{L^2(\mu)}.$$
This $L^2$-rate for the decay of the semi-group is indeed faster, if one increases $c$. |
an identity related to moment map | This is just an application of the chain rule which is obscured by the notation. First, note that $\langle \mu(m), X \rangle$ isn't an inner product or anything, it is just the natural pairing between $\mathfrak{g}^\ast$ and $\mathfrak{g}$. In other words,
$$\langle \mu(m), X \rangle = \mu(m)(X).$$
The derivation of the identity becomes more transparent when we write the left-hand side in the latter notation:
$$\left. \frac{d}{dt}\right|_{t = 0} \langle \mu(\exp(tY)m), X \rangle \quad \leftrightarrow \quad \left. \frac{d}{dt}\right|_{t = 0} \mu(\exp(tY)m)(X).$$
We see that we are taking the derivative of the composite of the following two functions:
$$\exp(\cdot Y)m: \Bbb R \longrightarrow M,\, t \mapsto \exp(tY)m, \quad \mu: M \longrightarrow \mathfrak{g}^\ast, \,m\mapsto \mu(m).$$
Hence by the chain rule we have
\begin{align*}
\left. \frac{d}{dt}\right|_{t = 0} \mu(\exp(tY)m) & = \left. d\mu_m \circ \left(\frac{d}{dt} \exp(tY)m\right) \right|_{t = 0} \\
& = d\mu_m(Y^\diamond(m)).
\end{align*}
So the identity is
$$\left. \frac{d}{dt}\right|_{t = 0} \mu(\exp(tY)m)(X) = (\iota_{Y^\diamond} d\mu)_m(X).$$ |
For $\bigoplus_{n \leq 0}\mathbb{Z}_{p^n}$, the classes $\operatorname{Gen}(\bigoplus_{n \leq 0}\mathbb{Z}_{p^n})$ and $p$ torsion groups are equal. | Your intuition is right on track, I think, although I am confused by your approach to writing $G$ as a sum in the end of the proof, so my answer might be approaching that differently than you had intended.
$\operatorname{Gen}(M) \subseteq T_p$
Suppose that $N$ is a $\mathbb{Z}$-module such that there is an index set $X$ and a surjection $f: M^{(X)} \twoheadrightarrow N$. Note that $\mathbb{Z}/p^n\mathbb{Z}$ is $p$-torsion, and that a direct sum of modules is $p$-torsion if (and only if) its summands are $p$-torsion, and thus conclude that $M$ is $p$-torsion and furthermore $M^{(X)}$ is $p$-torsion. Next, note that homomorphic images of $p$-torsion modules are $p$-torsion, and thus conclude that $N$ is $p$-torsion.
$T_p \subseteq \operatorname{Gen}(M)$
Suppose that $G$ is a $p$-torsion group. Thinking of $G$ as a $\mathbb{Z}$-module, this just means that for all $g \in G$, there exists $n \in \mathbb{N}$ such that $p^n g = 0$. We can view $G$ as a colimit of its finitely generated submodules. Each of these f.g. submodules decomposes as $\bigoplus_{i=1}^k \mathbb{Z}/d_i\mathbb{Z}$ where $d_i \mid d_{i+1}$ (here we are appealing to, say, the structure theorem for f.g. modules over PIDs). The fact that each element of this f.g. submodule is annihilated by $p^n$ for some $n$ implies that each $d_i$ is a power of $p$. Thus $G$ is a colimit of finite direct sums of the form $\bigoplus_{i=1}^{k} \mathbb{Z}/p^{n_i}\mathbb{Z}$. A colimit of modules is by definition a special quotient of their direct sum, so this gives us a representation of $G$ as the quotient of a module of the form $H = \bigoplus_{\alpha \in I} \mathbb{Z}/p^{n_\alpha}\mathbb{Z}$ where $I$ is some index set. It is now straightforward to find an index set $X$ such that $M$ surjects onto $H$, and hence onto $G$ (because $G$ is a quotient of $H$). For example, you could take $X = I$ and the map $f: M^{(I)} \twoheadrightarrow H$ defined componentwise as follows: the $f_\alpha$ component of $f$ maps the $\mathbb{Z}/p^{n_\alpha}\mathbb{Z}$ component of $M$ identically to the $\alpha$ component of $H$ (which is also $\mathbb{Z}/p^{n_\alpha}\mathbb{Z}$) and vanishes on the other components of $M$. |
Consider the parametric curve: $x=6\cos^3(t), y=6\sin^3(t)$, write it in cartesian form. | Solving for $t$, you get $$\cos t = \left(\frac x6\right)^{1/3} , \, \sin t = \left(\frac y6\right)^{1/3} $$ now use the fact $$\sin^2 t + \cos ^2 t = 1 \to \left(\frac x6\right)^{2/3} + \left(\frac y6\right)^{2/3} = 1$$ |
The Size Of A Vector | If $W$ is a bounded subspace of $V$ then $W=\{0\}$. Otherwise there
is some $w\in W$ with $||w||=r$ and for $\alpha\in\mathbb{R}$ the
element $\alpha w\in W$ satisfies
$$
||\alpha w||=|\alpha|||w||=|\alpha|r
$$
which is arbitrarily large since $r$ is fixed; hence $W$ is not bounded |
prove that $s_k>0$ for infintely many $k$ and $s_k<0$ for infintely many $k$ | The result is false as stated. For $n\in\Bbb Z^+$ let $a_{2n-1}=\frac1n$ and $a_{2n}=-\frac1n$, so that
$$\sum_{n\ge 1}a_n=1-1+\frac12-\frac12+\frac13-\frac13+\ldots\;.$$
Then for each $n\in\Bbb Z^+$ we have $s_{2n-1}=a_{2n-1}=\frac1n$ and $s_{2n}=0$, so $\sum_{n\ge 1}a_n=0$. Thus, the series is convergent but not absolutely convergent, yet there are no negative partial sums. |
Total Second Derivative in Leibniz Notation - Issues and Questions | In short, in the composition $y(x(z))$ where you intend to set $z=y$ in the end, the derivatives are per chain rule
$$
\frac{dy}{dz}=\frac{dy}{dx}\,\frac{dx}{dz}
$$
and
$$
\frac{d^2y}{dz^2}=\frac{d^2y}{dx^2}\left(\frac{dx}{dz}\right)^2+\frac{dy}{dx}\,\frac{d^2x}{dz^2}
$$
so that
$$
0=\frac{d^2y}{dy^2}=\frac{d^2y}{dx^2}\left(\frac{dx}{dy}\right)^2+\frac{dy}{dx}\,\frac{d^2x}{dy^2}
$$
connects the second derivative of $y$ to the second derivative of the inverse function. |
Find R for two arcs joined tangentially | It's not clear to me how this "nozzle" shape is defined, but from the diagram and the description* of your "failed" attempt, it seems that the basic Pythagorean Theorem for right triangles suffices.
Namely, solve for $R$ from
$$(0.15+R)^2 = (0.25)^2 + (0.025+R)^2$$
Does this suit your needs?
*The fact that the hypotenuse (of length $0.15+R$) is one smooth straightline comes from the requirement that the two circular arcs share a tangent at the connecting point. |
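For completeness, a quick check of that equation (my own sketch):

```python
import sympy as sp

R = sp.symbols('R', positive=True)
# the R^2 terms cancel, so the equation is linear with a single root
sol = sp.solve(sp.Eq((0.15 + R)**2, 0.25**2 + (0.025 + R)**2), R)
print(sol)  # [0.1625]
```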
Curvature of curves on the hyperbolic plane | The natural notion to use is Geodesic curvature which makes sense for curves on any Riemannian manifolds. The name comes from the fact that geodesics have zero curvature.
For example, on the hyperbolic plane with Gaussian curvature $-1$, horocycles have geodesic curvature $1$. Indeed, let's consider Poincaré half-plane model with metric $(dx^2+dy^2)/y^2$. The line $y=1$ is a horocycle that is naturally parametrized by arclength, $\alpha(t) = (t, 1)$. The tangent vector is the unit vector that points to the right. This makes it appear as if the horocycle doesn't curve but we should use parallel transport to judge whether two vectors are parallel.
Take two points $A=(t, 1)$ and $B=(t+h, 1)$ on the horocycle and draw a geodesic between them: it's an arc of a circle with Euclidean radius $\sqrt{1+(h/2)^2}$. The tangent vector to $\alpha$ makes the angle $\sin^{-1}(h/2)$ with the geodesic at both $A$ and $B$, but it's in opposite directions. So, transporting the vector $\alpha'(t)$ from $A$ to $B$ along the geodesic, we see that at the point $B$ it makes the angle $2\sin^{-1}(h/2)$ with $\alpha'(t+h)$. Since the unit tangent rotates by $2\sin^{-1}(h/2)$ over the distance $h$, the geodesic curvature is
$$
\lim_{h\to 0} \frac{2\sin^{-1}(h/2)}{h} = 1
$$
Disk model
On second thought, it's easier to use the disk model; the metric will be $4ds^2/(1-x^2-y^2)^2$ so the curvature is still $-1$. The diameter $(-1,1)\times \{0\}$ is a geodesic, and near the center $(0,0)$ its arclength parameterization moves approximately as $t\mapsto (t/2,0)$ when $t\approx 0$. So the parallel transport along this geodesic for small distances near center will be Euclidean, which implies that the geodesic curvature of any curve tangent to this geodesic at $(0,0)$ will be just $1/2$ of its Euclidean curvature at that point. (Here $1/2$ comes from the aforementioned speed of parameterization).
Summary: to compute geodesic curvature in the hyperbolic disk model, move the point of interest to the center by a Möbius transformation, and take $1/2$ of Euclidean curvature there. Examples:
Horocycles have geodesic curvature $1$, as shown by the horocycle $x^2+(y-1/2)^2=1/4$ that passes through $(0,0)$ and has Euclidean curvature $2$.
A hyperbolic circle of hyperbolic radius $R$ has geodesic curvature $1/\tanh R \in (1,\infty)$. Indeed, when such a circle passes through $(0,0)$, its point furthest from $(0,0)$ is at hyperbolic distance $2R$ from $(0,0)$. Solving $2\tanh^{-1} d = 2R$ yields $d=\tanh R$ for the Euclidean diameter of this circle, so its Euclidean curvature is $2/\tanh R$ and geodesic curvature is $1/\tanh R$ |
What is the integral of $(\ln(4-2x))^2$? | First make the substitution $u = 4 - 2x$. This is just a linear change of variables.
Now use the formula $$\int \ln^2 x dx = x \bigg(\ln^2 x - 2 \ln x +2 \bigg) +C $$
This formula can be obtained by integration by parts with $u = \ln^2 x$ and $dv = dx$, if I recall correctly. |
If $S \subset T$ prove $\overline{S} \subset \overline{T}$ and $\text{int}(S) \subset \text{int}(T)$ | To avoid confusion I am using the notation $\subseteq$ instead of $\subset$ preassuming that 'your' $S\subset T$ does not exclude that $S=T$.
Let $\mathcal K$ denote the collection of closed sets containing $S$.
As an intersection of closed sets the set $\overline{T}$ is closed, and this with $S\subseteq T\subseteq\overline{T}$.
That means that $\overline{T}\in \mathcal K$ so that $\overline{S}=\cap \mathcal K\subseteq\overline{T}$.
Let $\mathcal V$ denote the collection of open sets contained in $T$.
As a union of open sets the set $\operatorname{int}(S)$ is open, and this with $\operatorname{int}(S)\subseteq S\subseteq T$.
That means that $\operatorname{int}(S)\in \mathcal V$ so that $\operatorname{int}(S)\subseteq\cup \mathcal V=\operatorname{int}(T)$. |
Prove that there exists no polynomial in $\mathbb{Q}[x]$ of degree 1 or 2 which divides $x^3-2$. | Hint. A famous irreducibility criterion is applicable here. |
Polynomial approximation (Weierstrass' Theorem) with equality at the endpoints | Let's do it first for the case $f(a) = f(b)=0.$ Find polynomials $P_n \to f$ uniformly on $[a,b].$ Then verify that the polynomials
$$Q_n(x)=P_n(x)- P_n(a) -\frac{P_n(b)-P_n(a)}{b-a}(x-a)$$
converge to $f$ uniformly on $[a,b],$ with $Q_n(a) = Q_n(b) = 0$ for all $n.$
In the general case, let $l(x)$ be the linear function connecting $(a,f(a))$ to $(b,f(b)).$ Then $f(x) - l(x)$ equals $0$ at the endpoints. From the above we can find polynomials $Q_n \to f-l$ uniformly on $[a,b],$ with $Q_n(a)= Q_n(b) = 0.$ Now check that $Q_n + l$ satisfies the requirements. |
Every $K$-algebra homomorphism of affine algebras is a comorphism | To show $\phi(X)\subseteq Y$, we have to show that for any $x\in X$ and any $g\in\mathscr{I}(Y)$, $g(\phi(x))=0$. But since $g\in\mathscr{I}(Y)$, we know that the equivalence class of $g$ in $K[Y]$ is $0$, and in particular its image under $\theta$ remains $0$. That means that when we replace each variable $t_i$ of $g$ with $\psi_i$ (i.e., apply $\theta$ to $g$), we get $0$. That is, $g(\psi_1,\dots,\psi_n)=0$. Now $$g(\phi(x))=g(\psi_1(x),\dots,\psi_n(x))$$ is just $g(\psi_1,\dots,\psi_n)$ evaluated at $x\in X$, so it is $0$, as desired. |
What is the sum of the coefficients of $P(Q(X))$? | I figured it out. Since $b_0+b_1+b_2+...+b_m=0$ and we want to get the sum of coefficients of $a_0+a_1Q(X)+...+a_nQ(X)^n$, all that must be done is replacing $Q(X)$ with its actual value.
So we have:
$a_0+a_1(b_0+b_1X+....+b_mX^m)+...+a_n(b_0+b_1X+...b_mX^m)^n$
And we want the sum of the coefficients, so we set $X=1$
So $a_0+a_1Q(1)+...+a_nQ(1)^n$, and as mentioned in the body of the question, $Q(1)=0$. So the answer is $a_0$. |
Proving $g(x)$ is not a rational function | $g(x)^2 (1-4x) = f(x)^2$ tells us that $deg(f^2) = 1+ deg(g^2)$. However, both degrees are even, since they are squares, and we reach a contradiction. |
How to show that a function is greater or equal to zero? | The sine function $\sin(\theta)$ is defined as the $y$-coordinate of the point corresponding to the angle $\theta$ on the unit circle. Since the angle from $0$ to $\pi$ sweeps out the upper half of the circle (on or above the $x$-axis), $\sin(\theta)$ is nonnegative for every $\theta$ in the interval $[0,\pi]$.
Since $\frac{1}{2}$ is a positive constant, it will not change the sign of $\sin(\theta)$, and our previous answer holds. |
Hilbert And Banach dual spaces | A vector space (a fortiori, Hilbert space, Banach space) is isomorphic to its dual if, and only if, it is finite dimensional. For any infinite dimensional vector space, the dual space is always "larger" than the predual, in the precise technical sense that there exists an injection from the predual into the dual, but never a bijection.
Additionally, the isomorphism between a finite dimensional vector space and its dual is dependent on a choice of vector basis. It is not a "natural" isomorphism, in the language of category theory. However, the isomorphism between a vector space and its double dual (the dual of the dual) IS natural, in that it can be defined without a choice of basis.
Studying this phenomenon is anecdotally what pushed Saunders MacLane and Samuel Eilenberg to invent category theory, by the way.
Edit: I was mistaken in the above, due to an ambiguity on the notion of "dimension" as used by an algebraist, and as used by an analyst; and have provided in the comments below a document which clarifies my point, and explains my mistake in your specific context. |
Proving a certain set is inductive | If $P \subseteq \Pi$ is a chain, then $\cup P$ is an upper bound. |
Showing topological equivalence between two topologies | Note first that you're asked to give a definition of what topologically equivalent could mean. You don't give one on that sheet. So minus points for that.
All your examples are infinite sets; there is no finite space among them. (You do have a finite topology, but I think the question wants you to consider finite spaces.)
The examples do work for the correct definition of topologically equivalent, so props for that. As you don't have a definition, you cannot actually argue whether they're equivalent or not either way.
Equivalent: $[0,1), [0,\infty)$ and $(0,1]$ as subspaces of $\Bbb R$ will do. Non-equivalent easier ones: discrete vs cofinite vs indiscrete/trivial is a thing to consider. |
Can I find a series where an abelian series of smallest possible length is different from derived series? | If you are OK with tying for the shortest length, you can. For example, consider the dihedral group of order 8. Its derived group has order 2 (and the next term in the derived series has order 1), but there is an abelian series that goes from the whole group to its cyclic subgroup of order 4 to the identity. This has the same length as the derived series.
You can never do better than tie for the length of the derived series, though.
In any abelian series $G=G_0\vartriangleright G_1\vartriangleright....\vartriangleright G_n\vartriangleright G_{n+1}=1$, you need $G_1\ge G^{\prime}$, since otherwise $G/G_1$ would not be abelian. Using induction, we assume $G_i\ge G^{(i)}$ and we see that in order for $G_i/G_{i+1}$ to be abelian, we need its subgroup $G^{(i)}/G_{i+1}$ to be abelian, and this requires $G_{i+1}\ge G^{(i)\prime}=G^{(i+1)}$, so for any abelian series each term contains the corresponding term of the derived series. So an abelian series of length less than the derived length would still contain the last non-trivial term of the derived series in its final term, and hence could not end at $1$.
This last paragraph also implies the equivalence of the two different definition of derived length that you reference. |
Prove that $Tv = \alpha v$ for some $\alpha \in \Bbb R$ | In this circumstance, the characteristic polynomial of $T$ is $p(\lambda)=\lambda^2+\det T$. So, in particular, $T^2=(-\det T)I$.
You can work that out explicitly - if $T=\begin{pmatrix}a&b\\c&-a\end{pmatrix}$ then:
$$T^2=\begin{pmatrix}a^2+bc&ab-ba\\ca-ac&bc+(-a)^2\end{pmatrix}=(a^2+bc)I$$
and $\det T=-(a^2+bc)$.
Let $\alpha = \sqrt{-\det T}$. For any $v$, let $Pv=(T-\alpha I)v$ and $Qv=(T+\alpha I)v$.
Then show that $QP=0$.
Now, pick any $v\neq 0$.
If $Pv=0$ then $Tv=\alpha v$ and you are done.
If $Pv\neq 0$ then let $v'=Pv\neq 0$. Then $Qv'=QPv=0$ so $Tv'=(-\alpha)v'$, and you are again done. |
Probability of at least three winning lottery tickets in a month with $20$ ticket purchases given probability of a winning ticket is $0.1$ | For the average number of winning tickets you are expected to use the linearity of expectation. What is the chance that one ticket wins? The average number of winning tickets is $20$ times this, i.e. $20 \times 0.1 = 2$. Linearity of expectation is the critical concept here.
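For the "at least three" part of the title, the number of winning tickets is Binomial$(20,\,0.1)$ if tickets are independent; here is a quick sketch under that independence assumption:

```python
from math import comb

n, p = 20, 0.1
# X ~ Binomial(20, 0.1); P(X >= 3) = 1 - P(0) - P(1) - P(2)
p_at_most_2 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
print(1 - p_at_most_2)   # ~0.323
print(n * p)             # expected number of winning tickets: 2.0
```

So there is roughly a $32\%$ chance of at least three winning tickets. |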
Given some ultrametric space $X$, is its completed metric $\hat{X}$ necessarily an ultrametric? | Suppose that $x=\langle x_n:n\in\Bbb N\rangle$, $y=\langle y_n:n\in\Bbb N\rangle$, and $z=\langle z_n:n\in\Bbb N\rangle$ are Cauchy sequences in $X$. Let $\hat x,\hat y$, and $\hat z$ be the equivalence classes of these sequences, interpreted as point of $\widehat X$. We’d like to show that $\hat d(\hat x,\hat y)\le\max\{\hat d(\hat x,\hat z),\hat d(\hat z,\hat y)\}$. For each $n\in\Bbb N$ we know that $$d(x_n,y_n)\le\max\{d(x_n,z_n),d(z_n,y_n)\}\;,$$ so
$$\hat d(\hat x,\hat y)=\lim_{n\to\infty}d(x_n,y_n)\le\lim_{n\to\infty}\max\{d(x_n,z_n),d(z_n,y_n)\}\;.$$
If $\hat d(\hat x,\hat z)<\hat d(\hat z,\hat y)$, then there is an $m\in\Bbb N$ such that $d(x_n,z_n)<d(z_n,y_n)$ for all $n\ge m$, in which case
$$\hat d(\hat x,\hat y)\le\lim_{n\to\infty}\max\{d(x_n,z_n),d(z_n,y_n)\}=\lim_{n\to\infty}d(z_n,y_n)=\hat d(\hat z,\hat y)\;.$$
Similarly, $\hat d(\hat x,\hat y)\le\hat d(\hat x,\hat z)$ if $\hat d(\hat z,\hat y)<\hat d(\hat x,\hat z)$. The only remaining possibility is that $\hat d(\hat x,\hat z)=\hat d(\hat z,\hat y)$, in which case
$$\lim_{n\to\infty}\max\{d(x_n,z_n),d(z_n,y_n)\}=\hat d(\hat x,\hat z)=\hat d(\hat z,\hat y)\;.$$
In all cases, therefore, $$\hat d(\hat x,\hat y)\le\max\{\hat d(\hat x,\hat z),\hat d(\hat z,\hat y)\}\;,$$ and $\hat d$ is an ultrametric. |
Differentiate $xx^T$ wrt to $x \in \mathbb{R}^n$ | Define a new matrix variable
$$\eqalign{
Y &= xx^T-A \cr
dY &= dx\,x^T + x\,dx^T \cr
}$$
Then write the function in terms of this new variable, and find its differential and gradient
$$\eqalign{
f &= \frac{1}{2}Y:Y \cr
df &=Y:dY = Y:(dx\,x^T + x\,dx^T) \cr
&= (Y+Y^T):dx\,x^T \cr
&= 2Yx:dx \cr
g = \frac{\partial f}{\partial x} &= 2Yx \cr
}$$
Now find the differential and gradient of the gradient (i.e the Hessian)
$$\eqalign{
dg
&= 2\,dY\,x + 2Y\,dx \cr
&= 2\,(dx\,x^T + x\,dx^T)\,x + 2Y\,dx \cr
&= 2\Big((x^Tx)I + xx^T +Y\Big)\,dx \cr
H = \frac{\partial g}{\partial x}
&= 2\Big((x^Tx)I + xx^T +Y\Big) \cr\cr
}$$
In some of these steps, a colon is used to denote the trace/Frobenius product
$$A:B = {\rm tr}(A^TB)$$
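A numerical sanity check of the gradient formula (a NumPy sketch; it assumes the objective was $f(x)=\frac{1}{2}\Vert xx^T-A\Vert_F^2$ with symmetric $A$, which is what makes $Y$ symmetric above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2    # symmetric A (assumption)
x = rng.standard_normal(n)

f = lambda v: 0.5 * np.sum((np.outer(v, v) - A) ** 2)
Y = np.outer(x, x) - A
g = 2 * Y @ x                                         # gradient from above
H = 2 * ((x @ x) * np.eye(n) + np.outer(x, x) + Y)    # Hessian from above

# central finite differences for the gradient
eps, I = 1e-6, np.eye(n)
g_fd = np.array([(f(x + eps*I[i]) - f(x - eps*I[i])) / (2*eps) for i in range(n)])
print(np.allclose(g, g_fd, atol=1e-4))                # True
```

The same finite-difference idea, applied to $g$, checks the Hessian $H$. |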
Practical applications of eigenvalues/eigenvectors in computer science | Eigenvectors and eigenvalues are important for understanding the properties of expander graphs, which I understand to have several applications in computer science (such as derandomizing random algorithms). They also give rise to a graph partitioning algorithm.
Perhaps the most famous application, however, is to Google's PageRank algorithm.
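To make the PageRank connection concrete, here is a minimal power-iteration sketch on a hypothetical three-page link graph; the ranking vector is the eigenvector of the damped link matrix with eigenvalue $1$:

```python
import numpy as np

M = np.array([[0.0, 0.0, 1.0],    # column-stochastic link matrix:
              [0.5, 0.0, 0.0],    # column j spreads page j's score
              [0.5, 1.0, 0.0]])   # over the pages it links to
d, n = 0.85, 3
G = d * M + (1 - d) / n * np.ones((n, n))   # damped "Google matrix"

r = np.ones(n) / n
for _ in range(100):     # power iteration: r converges to the
    r = G @ r            # principal eigenvector (eigenvalue 1)
print(r)                 # the PageRank scores
```

Real implementations work with sparse web-scale matrices, but the underlying eigenvector picture is exactly this one. |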
Function is Lipschitz: $f \in C_L^{1,1}$ - what does it mean? | It would help if you provided context (where you found this), but without any further information, I would say it's the space of once-differentiably Hölder continuous functions that are Lipschitz. It is normal to specify the domain as well: $C^{1,1}_L(\Omega)$
So: if $f\in C^{1,1}_L(\Omega)$ then the function $f$ is continuous and Hölder continuous, as is its first derivative. Hölder continuity means $|f(x)-f(y)| \leq C|x-y|^\alpha \ \forall x,y \in \Omega$ where $0\leq \alpha \leq 1$ ($1$ in your case; if you had $C^{1,2}_L$ then $0\leq \alpha \leq 2$, for example).
If you'd like some further reference, check local Hölder continuity on page 2 of http://math.ucdenver.edu/~jmandel/classes/7760f05/spaces.pdf |
Solving modular equations with a variable in the modulus divisor | This equation can be rewritten as
$$cx + d \mid a-b.$$
Now let $g := \gcd(c,d)$. Surely, $g$ divides all terms of the form $cx + d$, hence if $g$ does not divide $a-b$, you will have no solutions. We can hence assume that $g$ divides $a-b$ and thus divide everything by $g$.
So from now on assume w.l.o.g. that $\gcd(c,d) = 1$. Furthermore, we set $e := a-b$ for simplicity.
Now let $r$ be any divisor of $e$. Then we want to know if the equation
$$cx + d = r$$
has a solution in the integers (of course it has exactly one solution, the only question is if the solution will be an integer).
Thus, we need to check if
$$\frac{r - d}{c}$$
is an integer.
This is equivalent to
$$r \equiv d \mod{c}.$$
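A brute-force sketch of this procedure (assuming $e = a-b \neq 0$, $c > 0$, and that $\gcd(c,d)=1$ has already been arranged as above):

```python
def solve(a, b, c, d):
    e = a - b
    sols = set()
    for r in range(1, abs(e) + 1):
        if e % r:                    # r must divide e
            continue
        for s in (r, -r):            # consider both signs of the divisor
            if (s - d) % c == 0:     # s ≡ d (mod c), so x is an integer
                sols.add((s - d) // c)
    return sorted(sols)

print(solve(17, 2, 3, 1))   # e = 15, moduli 3x + 1 dividing 15 -> [-2, 0]
```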
Depending on $c$ and $e$, there is much or not so much to check here and there are still some tricks you can use to not have to check all $r$, but I'm sure you can figure that out. :) |
Find an example of $X, Y, \mathcal{A}$, and $f$ to show that $f$ is not necessarily continuous | Hint: the rationals $\Bbb Q$ can be seen as the countable union of its singletons (a very non-locally finite union of closed sets). $f\restriction_{\{q\}}$ is trivially continuous for any $f$... |
Find Combined Probability of one Die and two Coins Tossed | Yes, you have outcomes with probability $P[(D,C_1,C_2)=(d,c_1,c_2)]=1/24$ for $1\leq d\leq 6$ and $(c_1,c_2)\in \{H,T\}^2.$ |
Prove that if f $\in L^1$, then $\forall t \in \mathbb{R}: \int_{\mathbb{R}} f(x)dx = \int_{\mathbb{R}}f(x+t)dx$ | Rather than using Dominated Convergence, can I recommend that you work directly from the definition of the integral?
You could first deal with the case where $f$ is positive. Here, the definition of the integral is
$$ \int f(x) dx = \sup_{0 \leq \varphi \leq f, \\ \varphi {\rm \ simple}} \int \varphi(x) dx.$$
But if $\varphi(x)$ is a positive simple function bounded above by $f(x)$, then $\varphi(x + t)$ is a positive simple function bounded above by $f(x + t)$... |
Why will repeated betting on a fair coin toss lose you money in most cases? And how can you calculate it? | In fact, after $n$ rounds your starting capital is multiplied by $(1/2)^i(3/2)^{n-i}$ with probability $\binom{n}{i}/2^n$, where $i$ is the number of losses,
disregarding the order of losses and wins.
Since this factor can never equal $1$, you can never return to any of your previous positions, including "quit with no loss or profit".
Suppose $j = \underset{(1/2)^i(3/2)^{n-i}>1}{\max}(i)$; then you profit with probability $p_j = \sum_{i=0}^j \binom{n}{i}/2^n$ and lose with probability $1-p_j = \sum_{i=j+1}^n \binom{n}{i}/2^n$.
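To see how quickly "most cases" becomes "almost all cases", one can evaluate this profit probability exactly (a sketch; as above, a win multiplies the capital by $3/2$ and a loss by $1/2$):

```python
from math import comb

def p_profit(n):
    # P that (1/2)^i (3/2)^(n-i) > 1, i.e. that you end with a profit
    favorable = sum(comb(n, i) for i in range(n + 1)
                    if 0.5**i * 1.5**(n - i) > 1)
    return favorable / 2**n

for n in (10, 100, 1000):
    print(n, p_profit(n))   # ~0.17, ~0.004, ~1e-16: profit becomes rare
```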
Also, you can use the $\min$ function similarly, if you like:
Suppose $k = \underset{(1/2)^i(3/2)^{n-i}<1}{\min}(i)$; then you lose with probability $p_k = \sum_{i=k}^n \binom{n}{i}/2^n$ and profit with probability $1-p_k = \sum_{i=0}^{k-1} \binom{n}{i}/2^n$. |
Difference between dot product and cross product | The only thing I think they have in common is that they take two vectors as input and have the word "product" in their name. They don't work in the same dimensions, produce the same types of values or have a meaningful interpretation in terms of the other. |
Sum of elements of order $p$ in $(\mathbb{Z}/2p^2\mathbb{Z})^\times$? | One can explicitly find an element of order $p$ in this group, namely $b = 2p + 1$. Then it is not hard to compute
$$
A = 1 + b + \ldots + b^{p - 1}
$$
modulo $2p^2$.
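A quick numerical experiment (a sketch, shown for $p=7$) suggests what this computation yields:

```python
p = 7
m = 2 * p * p
b = 2 * p + 1
assert pow(b, p, m) == 1 and b % m != 1          # b has order p mod 2p^2
A = sum(pow(b, k, m) for k in range(p)) % m
print(A, A == p)                                 # 7 True
```

So it seems that $A \equiv p \pmod{2p^2}$, and the same appears to hold for other odd primes $p$. |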
Parametric characterization for $x^2 + y^2 = 2z^2$ | Existence of such integers $a,b$ is equivalent to saying that $x,y$ have the same parity (do you see why?). If $x,y$ had opposite parity, then $x^2+y^2$ would be odd, while $2z^2$ is even.
Now if we have such $a,b$ then we can substitute to get $(a+b)^2+(a-b)^2=2z^2$. Simple algebra gives us that this is equivalent to $a^2+b^2=z^2$. I suspect that now you know the parametrization of solutions to this equation (you also have to show that $a,b,z$ are relatively prime; I leave it to you).
As you have asked for this, here is how to continue: because $(a,b,z)$ is a primitive Pythagorean triple, we have, for some integers $m,n$, that $a=m^2-n^2,b=2mn,z=m^2+n^2$. Now we have $x=a+b=m^2+2mn-n^2,y=a-b=m^2-2mn-n^2$. So the complete parametrization is:
$$(m^2+2mn-n^2,m^2-2mn-n^2,m^2+n^2)$$
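A quick brute-force check of the identity behind this parametrization (a sketch):

```python
ok = all((m*m + 2*m*n - n*n)**2 + (m*m - 2*m*n - n*n)**2 == 2*(m*m + n*n)**2
         for m in range(-20, 21) for n in range(-20, 21))
print(ok)   # True
```

So every triple of this form does satisfy $x^2+y^2=2z^2$. |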
Finding the general solution of a non-homogeneous differential equation when three of its solutions are given | Your approach is solid. I'm afraid the expressions you obtain aren't very nice, but that's just the nature of the problem. I would advise, though, obtaining two equations for $p(x)$ and $q(x)$ by substituting $y_4$ resp. $y_5$, and then solving for $p$ and $q$. For example, assuming my calculations are correct, I obtain
\begin{equation}
q(x) = \frac{2 e^{x^2}(2 x^2 -1)}{e^{x^2} -4x(x-1)-2},
\end{equation}
which isn't very nice, but seems correct. |
maximum value of $|\text{Re}[E e^{i \phi}]|$ for a complex 3-vector E | Write $E=E_r+E_ii$. You have the vector valued function
$$f(t):={\rm Re}(Ee^{it})={\rm Re}((E_r+E_ii)(\cos(t)+i\sin(t)))=E_r\cos(t)-E_i\sin(t).$$
Note also that
$$
f^{\prime}(t)=-(E_r\sin(t)+E_i\cos(t))\;.
$$
Setting the derivative of the dot product $\vert f(t) \vert^2=f(t)\cdot f(t)$ equal to zero, you get
$$2f^{\prime} (t)\cdot f(t)=0\Leftrightarrow f^{\prime}(t) \perp f(t)\;.$$
Bilinearity of the dot product gives
$$0=f^{\prime}(t)\cdot f(t)=E_i\cdot E_r (\sin^2(t)-\cos^2(t))+\sin(t)\cos(t)(\vert E_i \vert^2-\vert E_r \vert^2)$$
so
$$
E_i\cdot E_r (\sin^2(t)-\cos^2(t))=\sin(t)\cos(t)(\vert E_r \vert^2-\vert E_i \vert^2)
$$
Using the double angle formula, $\sin(t)\cos(t)=\frac{1}{2}\sin(2t)$ and $\sin^2(t)-\cos^2(t)=-\cos(2t)$, you have
$$
-2E_i\cdot E_r \cos(2t)=\sin(2t)(\vert E_r \vert^2-\vert E_i \vert^2)
$$
Notice that if $E_r$ and $E_i$ are orthogonal with $\vert E_r\vert=\vert E_i\vert$, then $f^{\prime}(t)$ and $f(t)$ are perpendicular for all $t$, hence $\vert f(t) \vert$ is constant. Otherwise, we solve
$$
\frac{2E_i\cdot E_r }{\vert E_i \vert^2-\vert E_r \vert^2}=\tan(2t)\Rightarrow t=\frac{1}{2}\tan^{-1}\Big(\frac{2E_i\cdot E_r }{\vert E_i \vert^2-\vert E_r \vert^2}\Big)+\frac{n\pi}{2}
$$
Substituting this value of $t$ into
$$\vert f(t)\vert^2=\vert E_r\vert^2\cos(t)^2-2E_r\cdot E_i\cos(t)\sin(t)+\vert E_i\vert^2\sin^2(t)$$
will give you the result (you have to check which values of $n$ correspond to maxima).
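A numerical check of the critical-point formula (a NumPy sketch; `np.arctan2` picks a valid branch of the arctangent):

```python
import numpy as np

rng = np.random.default_rng(1)
Er, Ei = rng.standard_normal(3), rng.standard_normal(3)
a, b, c = Er @ Er, Er @ Ei, Ei @ Ei      # |E_r|^2, E_r.E_i, |E_i|^2

f2 = lambda t: a*np.cos(t)**2 - 2*b*np.sin(t)*np.cos(t) + c*np.sin(t)**2
t0 = 0.5 * np.arctan2(2*b, c - a)        # tan(2t) = 2 E_i.E_r / (|E_i|^2 - |E_r|^2)
cands = t0 + np.arange(4) * np.pi / 2    # the critical points
grid = np.linspace(0, np.pi, 200001)
print(f2(cands).max(), f2(grid).max())
```

The two printed values agree, as they should. |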
Gravitational force inside a uniform solid ball - evaluation of the integral in spherical coordinates - mistake | By symmetry,
$$
\int_{0}^{\pi} \sqrt{a^{2}\cos^{2} \phi + R^{2} - a^{2}} \cos\phi \sin\phi\, d\phi = 0.
$$
Loosely, the radical and $\sin\phi$ are "even-symmetric" on $[0, \pi]$, while $\cos\phi$ is "odd-symmetric". That is, substituting $\psi = \phi - \frac{\pi}{2}$ gives
$$
\int_{0}^{\pi} \sqrt{a^{2}\cos^{2} \phi + R^{2} - a^{2}} \cos\phi \sin\phi\, d\phi
= -\int_{-\pi/2}^{\pi/2} \sqrt{a^{2}\sin^{2} \psi + R^{2} - a^{2}} \cos\psi \sin\psi\, d\psi,
$$
the integral of an odd function over a symmetric interval.
The rest of your calculation is correct; that is,
$$
2\pi G\delta m \int_{0}^{\pi} a\cos^{2}\phi \sin\phi\, d\phi
= 2\pi G\delta m \cdot \frac{2}{3} a
= Gm \frac{4}{3} \pi a^{3} \delta \cdot \frac{1}{a^{2}},
$$
as desired. |
Jech's Set Theory notation | A chain in a partially ordered set is a totally ordered subset. For any set $X$, the $\subseteq$ relation forms a partial order on that set, so a $\subseteq$-chain in $X$ is a subset $Y\subseteq X$ such that $\subseteq$ is a total order on $Y.$ In other words, for any $x,y \in Y,$ either $x\subseteq y$ or $y\subseteq x.$ |
Solutions to second order ODEs | You still have to check that your solutions are linearly independent. For this, you can use the Wronskian
$$W(y_1, y_2) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix}.$$
Show that $W(y_1, y_2) \neq 0$. |
Proof that if $0 \lt a \lt 1$ and $r \lt s$, then $a^{r} \gt a^{s}$. $r, s \in \mathbb{Q}$ | Notice if $s=r+t$ we have:
$$a^r-a^{r+t}>0\to a^r(1-a^t)>0$$
$$\to a^r(1-a)\sum_{n=0}^{t-1}a^n>0$$
For $0<a<1$, it should be fairly straightforward to show that each element in the last expression is positive.
In case of confusion:
$$(1-a^t)=(1-a)(1+a+a^2+...+a^{t-1})=(1-a)\sum_{n=0}^{t-1}a^n$$ |
quotient homeomorphic to $S^2 \times S^1$? | If you see this as a quotient by the action of $\mathbb{Z}$ on $\mathbb{R}^3 \setminus \{0\}$, you can prove that the set $D=\{ 1 \le x^2+y^2+z^2 \le 2 \}$ is a fundamental domain. Then the quotient is homeomorphic to $D$ quotiented by the same action. But $D$ can also be seen as $S^2 \times [1, 2]$ and the quotient simply identifies $S^2 \times \{1\}$ with $S^2 \times \{2\}$, so the original quotient is in fact homeomorphic to $S^2 \times \big([1, 2]/(1 \sim 2)\big)$, which is indeed homeomorphic to $S^2 \times S^1$. |
Help understanding how to determine a quotient group | Let's look at a slightly more "in-depth" example: Consider the subgroup $7\Bbb Z$ of $\Bbb Z$ which consists of all (positive, negative and $0$) multiples of $7$. Let's "expand" this a bit:
$7\Bbb Z = \{\dots,-21,-14,-7,0,7,14,21,28,\dots\}$
What set do we get if we add $1$ to everything in this set?
$1 + 7\Bbb Z = \{\dots,-20,-13,-6,1,8,15,22,29,\dots\}$
If we shift "over $2$" we get:
$2 + 7\Bbb Z = \{\dots,-19,-12,-5,2,9,16,23,30,\dots\}$
Now here is a curious thing: If we take something in, say $3 + 7\Bbb Z$, and add it to something in $2 + 7\Bbb Z$, we'll get something in $5 + 7\Bbb Z $. For example:
$10 + 9 = 19$ and $19$ is $5$ more than $14$.
This process "chops up" $\Bbb Z$ into $7$ pieces (each of which is "infinite") and in every piece, the numbers are all "$7$ apart".
So the "$k$" in $k + 7\Bbb Z$ measures how far to the right (right = counting up) we are from a multiple of $7$.
If you imagine the integers as an infinite string of beads, we are "wrapping the beads into a circle" so that the $7$th bead winds up in the same place as the "$0$-bead". In fact, any two beads that are a multiple of $7$ apart on our original string, wind up in the same place on our "$7$ position circle".
You've seen this before: on a clock (there it's "modulo $12$" instead of $7$, or "modulo $24$" if you use military time).
This has the effect of "setting all multiples of $7$ equal to $0$". Since we know that $14$ (for example) isn't actually $0$, we don't say:
$14 = 0$, but rather, $14$ is equivalent to $0$ since it is a multiple of $7$ away from $0$ (if $6$ was "as high as we could count before starting over", $14$ would actually BE $\ 0$).
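In code, this wrapping is just reduction mod $7$: cosets add by adding any representatives and reducing (a tiny sketch):

```python
coset = lambda k: k % 7      # representative of k + 7Z in {0, ..., 6}

# (3 + 7Z) + (2 + 7Z) = 5 + 7Z, whichever representatives we pick:
print(coset(10 + 9), coset(3 + 2))   # 5 5
```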
It's somewhat "magical" that all this actually WORKS, in that we don't get any contradictions or confusion this way. Part of this has to do with the fact that addition is commutative: that is, that $k + m = m+k$ for any two integers $k,m$. With a group in general, we need a more special condition on the subgroup to make this all work out (it has to be normal).
It turns out that with a general group $G$ and a subgroup $H$, that the set:
$(aH)(bH) = \{(ah)(bh'): h,h' \in H\}$ doesn't always equal some other coset $kH$, so subgroups for which this DOES happen are "special".
If $G$ is abelian, of course, then $(ah)(bh') = a(hb)h' = a(bh)h' = (ab)(hh')$ and this DOES happen. So quotient groups of abelian groups are "easier to understand".
In this special case of $G = \Bbb Z$ and $H = ${even integers}, this process is called "the arithmetic of parity" (reasoning by evens and odds). This "trick" (reducing mod $2$) turns an "infinite number of cases" to just TWO cases, and often that is all we need. |
Application of uniform boundedness principle | Try $T_n\colon c_0\to c_0$ defined by $$T_n(x)=(a_1x_1,\ldots,a_nx_n,0,0,\ldots).$$ |
collection $\mathcal{B}$ of subsets $V = \{ x + yk : k \in \mathbb{Z} \} $ for $x,y \in \mathbb{Z}$ form a basis for some topology of $\mathbb{Z}$ | I’m going to assume that the definition of $\mathscr{B}$ given in the question is slightly incorrect, and that $\mathscr{B}$ is actually the collection of all sets of the form $a+b\Bbb Z=\{a+bk:k\in\Bbb Z\}$ such that $a,b\in\Bbb Z$ and $b\ne 0$; without that last restriction $\{a\}\in\mathscr{B}$ for each $a\in\Bbb Z$, and $\mathscr{B}$ is trivially a base for the discrete topology on $\Bbb Z$, which of course is not compact.
Your argument that $\mathscr{B}$ covers $\Bbb Z$ is correct but unnecessarily complicated, since $\Bbb Z\in\mathscr{B}$. The next bit, however, does not make sense: $\{kn\}$ is a set containing one integer, $kn$, and is not in $\mathscr{B}$. If you really meant $\{kn:k\in\Bbb Z\}$, which is in $\mathscr{B}$, then $N_1$ and $N_2$ are the same set. Moreover, there are members of $\mathscr{B}$ containing $n$ that are not of this form. You need to start with completely arbitrary $N_1,N_2\in\mathscr{B}$ with $n\in N_1\cap N_2$. Let $N_1=a_1+b_1\Bbb Z$ and $N_2=a_2+b_2\Bbb Z$; then there are $k_1,k_2\in\Bbb Z$ such that $n=a_1+b_1k_1=a_2+b_2k_2$. Now you need to find $a,b\in\Bbb Z$ with $b\ne 0$ such that $n\in a+b\Bbb Z\subseteq\big((a_1+b_1\Bbb Z)\cap(a_2+b_2\Bbb Z)\big)$.
This actually takes a little work. Perhaps the most straightforward approach is to begin by showing that if $n\in r+s\Bbb Z$, then $r+s\Bbb Z=n+s\Bbb Z$. Thus, if $n\in N_1\cap N_2$, then $$(a_1+b_1\Bbb Z)\cap(a_2+b_2\Bbb Z)=(n+b_1\Bbb Z)\cap(n+b_2\Bbb Z)\;,$$
and you need only show that $(n+b_1\Bbb Z)\cap(n+b_2\Bbb Z)=n+m\Bbb Z$, where $m=\mbox{lcm}(b_1,b_2)$.
$\Bbb Z$ with this topology is not compact: the set of primes is an infinite, closed, discrete subset. (If $p$ is prime, find an open nbhd of $p$ that contains only multiples of $p$; if $n$ is composite, find an open nbhd of $n$ that contains only composite numbers.)
Added: To show that the space is Hausdorff, let $m,n\in\Bbb Z$ with $m\ne n$. Let $p$ be any positive integer that does not divide $n-m$, e.g., a prime larger than $|n-m|$; then $m+p\Bbb Z$ and $n+p\Bbb Z$ are disjoint open nbhds of $m$ and $n$, respectively. They are clearly open nbhds of $m$ and $n$. If $k\in(m+p\Bbb Z)\cap(n+p\Bbb Z)$, then there are integers $r$ and $s$ such that $k=m+pr=n+ps$. But then $n-m=pr-ps=p(r-s)$, where $r-s$ is an integer, so $p$ divides $n-m$, contradicting the choice of $p$.
Let $P$ be the set of positive primes. If $p\in P$, then $p\Bbb Z=0+p\Bbb Z$ is an open nbhd of $p$ that does not contain any other prime: if $q$ is a prime different from $p$, then $p$ is not a divisor of $q$, so $q\notin p\Bbb Z$. Thus, $(p\Bbb Z)\cap P=\{p\}$, each $p\in P$ is an isolated point of $P$, and $P$ is therefore discrete. Now suppose that $n\in\Bbb Z\setminus P$. If $n$ is composite, then every multiple of $n$ is either composite or $0$, so $n\Bbb Z$ is an open nbhd of $n$ disjoint from $P$. If $n=0$, then $4\Bbb Z$ is an open nbhd of $n$ disjoint from $P$, since every element of $4\Bbb Z$ is $0$ or a multiple of $4$ and therefore not prime. The only remaining possibility is that $n=-p$ for some $p\in P$. In that case $-p+3p\Bbb Z$ is an open nbhd of $n$ disjoint from $P$: every element of $-p+3p\Bbb Z$ is a multiple of $p$, so $p$ is the only prime that could possibly belong to $-p+3p\Bbb Z$, and it doesn’t, since $\frac23$ isn’t an integer. |
Srinivasa Ramanujan conjectures | Could it have been "Ramanujan's Lost Notebook" by George Andrews and Bruce C. Berndt? This is a work of which four volumes have been published. Quoting from the preface:
"This is the fourth of five volumes that the authors are writing in their exam ination of all the claims made by S. Ramanujan in The Lost Notebook and Other Unpublished Papers. Published by Narosa in 1988, the treatise contains the “Lost Notebook,” which was discovered by the first author in the spring of 1976 at the library of Trinity College, Cambridge. Also included in this publication are partial manuscripts, fragments, and letters that Ramanujan wrote to G.H. Hardy from nursing homes during 1917–1919. Although some of the claims examined in our fourth volume are found in the original lost notebook, most of the claims examined here are from the partial manuscripts and fragments. Classical analysis and classical analytic number theory are featured." |
Diffeq question: method of undetermined coefficients | Since $\cos(2t)$ and $\sin(2t)$ already solve the homogeneous equation, you guess a particular solution of the form
$$y_p(t)=At\cos(2t)+Bt\sin(2t)$$
determine the coefficients, and by linearity of the differential operator, form the solution of the inhomogeneous problem as
$$y(t)=y_c(t)+y_p(t).$$ |
Solve $(y')^2=(y/c)^2-1$ | $$\dfrac{dy}{dx} = \sqrt{\left(\frac{y}{c}\right)^2-1} \implies \int{\dfrac{dy}{\sqrt{\left(\dfrac{y}{c}\right)^2-1}}} =\int1dx$$
we let $u=\dfrac{y}{c}$, so $du = \dfrac{dy}{c}$
$$c\int{\dfrac{du}{\sqrt{u^2-1}}} =x+k$$
With $k$ a constant. Now, we know this is a "classic" integral: arccosh
$$c\int{\dfrac{du}{\sqrt{u^2-1}}} =x+k \implies \cosh^{-1}u=\dfrac{x+k}{c}\implies u = \cosh\dfrac{x+k}{c} \implies$$
$$\dfrac{y}{c} = \cosh\dfrac{x+k}{c} \implies y = c\cosh\dfrac{x+k}{c} $$
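As a quick check: $y'=\sinh\dfrac{x+k}{c}$, so $(y')^2=\cosh^2\dfrac{x+k}{c}-1=\left(\dfrac{y}{c}\right)^2-1$, as required. |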
Riemann integrability and Riemann sums | If you'd like to see a specific example, here's one. For an arbitrary partition of $[0,1]$, let $[0,x_1]$ be its leftmost subinterval, where $0<x_1<1$ (of course). Then for any $N>1$, the point $x_1^{*}=\frac{x_1^2}{N^2}$ is in $[0,x_1]$, and the first term in the corresponding Riemann sum will be
$$f(x_1^{*})\Delta x_1=\frac{1}{\sqrt{x_1^2/N^2}}\cdot x_1=N,$$
which can be arbitrarily large if we let $N\to+\infty$. Since the rest of the terms are nonnegative, we see that Riemann sums are unbounded and don't have a limit. |
What can be said about an ideal contracted to a maximal ideal? | Hint: Consider $k \subset k[t]$ and some proper ideal $J \subset k[t]$. Then $J$ cannot contain any non-zero element of $k$ so that the intersection with $k$ is the zero ideal and hence a maximal ideal.
Edit: For an example that is not a field, consider $k[t] \subset k[t,s]$ and the (non-prime) ideal $J = (t,s^2) \subset k[t,s]$. Then $J \cap k[t] = (t) \subset k[t]$ is maximal. |
Area above oblique conical section | I'll assume you want the volume above the plane on which the cone sits and below the inverted cone. I'll also assume that the circle bounding the vertical cylinder is viewed as drawn in the $x,y$ plane with its center at $(a,0)$ and a radius of $r_2$, in such a way that the entire bounding circle of the cylinder lies inside the big circle bounding the bottom of the cone.
Now a parametrization of the region inside the circle bounding the vertical cylinder is given by the set of points $(a+t \cos \theta,t \sin \theta)$ where $0 \le t \le r_2$ and $0 \le \theta \le 2\pi.$ Setting up polar coordinates on this circle to do the integral for volume means we have to use the polar volume element $t dt d\theta$ in the integral.
What we need now is the height. If $k=d/2$ and $h=(k/2)\tan \theta$ [NOTE I'm using this $\theta$ only temporarily, and the later use for the smaller circle is unrelated] then the height $z$ of the cone at a point at distance $w$ from the origin is $z=h-wh/k$ (a linear function going down from $h$ when $w=0$ to $0$ when $w=k$, since $k$ is the radius of the big circle).
You should now be integrating the height up to the cone multiplied by the polar area element, over the region inside the smaller circle. Except for the issue of keeping notation straight, this integral is easy to set up and I'll leave that to you.
When I plugged the double integral into Maple the closed form results were extremely messy, and one of the two iterations involved a nonelementary function. So maybe the best thing is to just set up the integral, and use numerical methods to approximate it in particular cases.
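For instance, here is a crude numerical approximation of the integral in polar coordinates (a sketch with hypothetical dimensions: $k$ the big-circle radius, $h$ the cone height, $a$ the offset of the small circle, $r_2$ its radius):

```python
import numpy as np

k, h, a, r2 = 4.0, 3.0, 1.5, 1.0              # hypothetical dimensions
t  = np.linspace(0, r2, 1000)                 # radial coordinate on small disk
th = np.linspace(0, 2 * np.pi, 1000)
T, TH = np.meshgrid(t, th)
X, Y = a + T * np.cos(TH), T * np.sin(TH)
W = np.hypot(X, Y)                            # distance from the cone's axis
Z = h * (1 - W / k)                           # height of the cone above (x, y)
# average of the integrand z*t times the area of the (t, theta) rectangle
print((Z * T).mean() * r2 * 2 * np.pi)
```

Refining the grid, or using a proper quadrature routine, improves the estimate. |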
Why is this function on the Riemann surface holomorphic? | I will follow Forster's notation. It suffices to show that the restriction $f|[U,g]$ is holomorphic for every $g\in\mathcal{O}(U)$. Without loss of generality, we may assume that $(U,\varphi)$ is a chart. Since $p|[U,g]\colon[U,g]\to U$ is a homeomorphism, $([U,g],\varphi\circ p|[U,g])$ is a chart on $|\mathcal{O}|$. It follows that
$$\begin{align}
f(\varphi\circ p|[U,g])^{-1}(z)
&=f(p|[U,g])^{-1}\varphi^{-1}(z)\\
&=f(\rho_{\varphi^{-1}(z)}(g))\\
&=\rho_{\varphi^{-1}(z)}(g)(\varphi^{-1}(z))\\
&=g\varphi^{-1}(z)
\end{align}$$
for every $z\in\varphi(U)$. Hence, $f$ is holomorphic in $[U,g]$. |
Proving something is NOT a tensor | Where you went wrong is that you didn't write the derivative correctly. What you must do is this
$$
\frac{\partial v^{j'}}{\partial x^{i'}} = \frac{\partial}{\partial x^{i'}} v^{j'} = \underbrace{\frac{\partial x^i}{\partial x^{i'}} \frac{\partial}{\partial x^i}}_\text{by chain rule} \underbrace{\Bigg( \frac{\partial x^{j'}}{\partial x^j} v^j \Bigg) }_\text{by transf.rule for $v^{j'}$} = \dots (\text{proceed})
$$
which is by the chain rule and the tensor transformation rule $\textbf{for}$ $v^{j'}$. Proceed by yourself and find out why it's not a tensor. |
Universal covering group and fundamental group of $SO(n)$ | Yes it is. Let $X$ be a topological space with universal cover $\widetilde{X}$ and covering map $p : \widetilde{X} \to X$. A homeomorphism $f : \widetilde{X} \to \widetilde{X}$ is called a Deck transformation of $p$ if $p\circ f = p$; that is, $f$ preserves the fibres of $p$ so if $y \in p^{-1}(x)$, $f(y) \in p^{-1}(x)$. The set of all Deck transformations of $p$ is denoted $\operatorname{Deck}(p)$ and forms a group under composition. The quotient of $\widetilde{X}$ by $\operatorname{Deck}(p)$ is $X$. Moreover, $\operatorname{Deck}(p) \cong \pi_1(X)$.
A good reference for this material is Hatcher's Algebraic Topology. |
If $X$ is uniformly bounded and $\lim_{n \to \infty} X_n=X$ a.s, then $X_n$ is uniformly bounded | This is false. On the space $(0,1)$ with Lebesgue measure let $X=0$ and $X_n=nI_{(0,1/n)}$. Then $X_n \to X$ at every point, $X$ is bounded but $\{X_n\}$ is not uniformly bounded. |
How to draw a graph that represent this problem? Using a $5$-liter bottle and a $3$-liter bottle to arrive at exactly $4$ liters of water. | Here is the graph representing the states of the puzzle and the possible ways we can get from one state to another.
A node labeled $(a,b)$ corresponds to a state where the $5$-liter bottle contains $a$ liters and the $3$-liter bottle contains $b$ liters. There are two kinds of edges:
Edges represented by solid lines describe two-way actions, where we can go from either state to the other. (For example, we can fill an empty bottle from the sink, then pour it out, returning to the original state.)
Edges represented by dashed lines describe one-way actions, which we cannot undo in one step. (For example, once we solve the puzzle, we can pour out the bottle with $4$ liters of water, and now we have to solve the puzzle all over again.)
This distinction is just to make the graph easier to read, avoiding too many arrows.
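For completeness, a small breadth-first search over this graph finds a shortest solution (a sketch; states are pairs $(a,b)$ as above):

```python
from collections import deque

def moves(a, b):                            # neighbors of state (a, b)
    yield 5, b                              # fill the 5-liter bottle
    yield a, 3                              # fill the 3-liter bottle
    yield 0, b                              # empty the 5-liter bottle
    yield a, 0                              # empty the 3-liter bottle
    t = min(a, 3 - b); yield a - t, b + t   # pour 5L -> 3L
    t = min(b, 5 - a); yield a + t, b - t   # pour 3L -> 5L

prev = {(0, 0): None}
q = deque([(0, 0)])
while q:                                    # breadth-first search
    s = q.popleft()
    for nxt in moves(*s):
        if nxt not in prev:
            prev[nxt] = s
            q.append(nxt)

goal = next(s for s in prev if 4 in s)      # first 4-liter state found
path = []
while goal:
    path.append(goal)
    goal = prev[goal]
print(path[::-1])   # [(0,0), (5,0), (2,3), (2,0), (0,2), (5,2), (4,3)]
```

The printed path is a shortest walk from $(0,0)$ to a state holding $4$ liters. |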
What is $\sum_{r=0}^n \frac{(-1)^r}{\binom{n}{r}}$? | Brute force with the Beta function:
$$
\begin{align}
\sum_{k=0}^n\frac{(-1)^k}{\binom{n}{k}}
&=\sum_{k=0}^n(-1)^k\frac{\Gamma(k+1)\Gamma(n-k+1)}{\Gamma(n+1)}\tag{1a}\\
&=(n+1)\sum_{k=0}^n(-1)^k\frac{\Gamma(k+1)\Gamma(n-k+1)}{\Gamma(n+2)}\tag{1b}\\
&=(n+1)\sum_{k=0}^n(-1)^k\operatorname{B}(k+1,n-k+1)\tag{1c}\\
&=(n+1)\int_0^1\sum_{k=0}^n(-1)^kt^k(1-t)^{n-k}\,\mathrm{d}t\tag{1d}\\
&=(n+1)\int_0^1(1-t)^n\frac{1+\left(\frac t{1-t}\right)^{n+1}}{1+\frac{t}{1-t}}\,\mathrm{d}t\tag{1e}\\
&=(n+1)\int_0^1\left[(1-t)^{n+1}+t^{n+1}\right]\,\mathrm{d}t\tag{1f}\\
&=\bbox[5px,border:2px solid #C0A000]{2\frac{n+1}{n+2}}\tag{1g}
\end{align}
$$
Explanation:
$\text{(1a)}$: $\binom{n}{k}=\frac{n!}{k!\,(n-k)!}$ and $n!=\Gamma(n+1)$
$\text{(1b)}$: $\Gamma(n+2)=(n+1)\Gamma(n+1)$
$\text{(1c)}$: definition of the Beta function in terms of the Gamma function
$\text{(1d)}$: definition of the Beta function in terms of an integral
$\text{(1e)}$: $\frac{1+x^{n+1}}{1+x}=1-x+x^2-\dots+x^n$ when $n$ is even
$\text{(1f)}$: simplify the integrand
$\text{(1g)}$: integrate
A more elementary approach:
For $k\gt0$, partial fractions gives
$$
\frac1{\binom{n}{k}}=\sum_{j=1}^k(-1)^{k-j}\binom{k}{j}\frac j{n-j+1}\tag{2}
$$
Therefore,
$$
\begin{align}
\sum_{k=0}^n\frac{(-1)^k}{\binom{n}{k}}
&=1+\sum_{k=1}^n\frac{(-1)^k}{\binom{n}{k}}\tag{3a}\\
&=1+\sum_{k=1}^n\sum_{j=1}^k(-1)^j\binom{k}{j}\frac j{n-j+1}\tag{3b}\\
&=1+\sum_{j=1}^n\sum_{k=j}^n(-1)^j\binom{k}{j}\frac j{n-j+1}\tag{3c}\\
&=1+\sum_{j=0}^n(-1)^j\binom{n+1}{j+1}\frac j{n-j+1}\tag{3d}\\
&=1+\sum_{j=0}^n(-1)^j\binom{n}{j}\frac{(n+1)\,j}{(j+1)(n-j+1)}\tag{3e}\\
&=1+\frac{n+1}{n+2}\sum_{j=0}^n(-1)^j\binom{n}{j}\left(\frac j{j+1}+\frac j{n-j+1}\right)\tag{3f}\\
&=1+\frac{n+1}{n+2}\sum_{j=0}^n(-1)^j\binom{n}{j}\frac n{j+1}\tag{3g}\\
&=1+\frac{n+1}{n+2}\sum_{j=0}^n(-1)^j\binom{n+1}{j+1}\frac n{n+1}\tag{3h}\\
&=1+\frac n{n+2}\tag{3i}\\
&=\bbox[5px,border:2px solid #C0A000]{2\frac{n+1}{n+2}}\tag{3j}
\end{align}
$$
Explanation:
$\text{(3a)}$: separate the $k=0$ term
$\text{(3b)}$: apply $(2)$
$\text{(3c)}$: change order of summation
$\text{(3d)}$: $\sum\limits_{k=j}^n\binom{k}{j}=\binom{n+1}{j+1}$
$\text{(3e)}$: $\binom{n+1}{j+1}=\binom{n}{j}\frac{n+1}{j+1}$
$\text{(3f)}$: partial fractions
$\text{(3g)}$: substitute $j\mapsto n-j$ in the right summand (using that $n$ is even)
$\text{(3h)}$: $\binom{n}{j}\frac1{j+1}=\binom{n+1}{j+1}\frac1{n+1}$
$\text{(3i)}$: $\sum\limits_{j=0}^n(-1)^j\binom{n+1}{j+1}=1$
$\text{(3j)}$: simplify
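A quick exact check of the boxed value for small even $n$ (a sketch):

```python
from fractions import Fraction
from math import comb

for n in (2, 4, 6, 10):
    s = sum(Fraction((-1)**k, comb(n, k)) for k in range(n + 1))
    print(n, s, Fraction(2 * (n + 1), n + 2))   # the two values agree
```

For odd $n$ the terms cancel in pairs and the sum is $0$ instead. |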
Some Games Give you extremely high rates of growth that taper down as you play. How do you calculate the inverse of exponential growth? | You are looking for logarithmic growth. The inverse function of $2^t$ is $\log_2(t)$. There isn't one particular formula based on the two variables you have given, since just like with exponential growth, many different functions can pass through the same point, and your criteria only determine a single point $(t, y)$. |
Solving for the center of mass of a Semi Circle (without integration) | You can rewrite the expression as $$\sin(a) =a - \frac{\pi}{2}$$ so that each side of the equality is a function of $a$, each of which can be graphed (on the same graph!).
The LHS can be graphed as $$f(a) = \sin(a)$$ the graph of which you should know well,
and the RHS can be graphed as $$g(a) = a - \frac{\pi}{2}$$ Clearly, the graph of $g(a)$ is simply a line, with $y$-intercept $\left(0, -\frac {\pi}{2}\right)$ and slope $= 1$.
You can use your graphs, then, to approximate any potential solutions; that is, find any point(s) of intersection. From that, you can probably "trouble shoot" with your calculator to make this approximation more precise.
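For instance, a bisection sketch pins down the intersection numerically:

```python
from math import sin, pi

# root of h(a) = sin(a) - (a - pi/2) on [pi/2, pi]; h changes sign there
lo, hi = pi / 2, pi
for _ in range(60):
    mid = (lo + hi) / 2
    if sin(mid) - (mid - pi / 2) > 0:
        lo = mid
    else:
        hi = mid
print(lo)   # ~2.3099
```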
I'll include graphs below: |
Compositions of functions: How to assign functions $f(x), g(x)$ so that $f(g(x))=\tan(x^3) $ | You had the right choice of which functions to use, but you named them incorrectly. In other words,
We want $$\color{green}{f}(\color{blue}{g(x)}) = \color{green}{\tan}(\color{blue}{x^3})$$
So let's name $\color{blue}{g(x) = x^3}$, and so $\color{green}{f(x) = \tan(x)}$ |
Vector subspace and linearly independence | You seem to be confusing two notions:
linear independence of a set of vectors $ S= \{u_i\}_{i\in I}$:
$S$ is a linearly independent set if the only linear relation between the vectors of this set:
$$\sum_{i\in I}\lambda_i u_i=0\qquad (\lambda_i\in K),$$
where $K$ is the base field, and the family $(\lambda_i)_{i\in I}$ has finite support, is the trivial relation, i.e. $\;\lambda_i=0\;\forall i\in I$.
subvector space $W$ of a vector space $V$: $W$ is a subspace of $V$ if
– $W$ is non-empty,
– if $w\in W$ and $\lambda\in K$, then $\lambda w\in W$ (stability w.r.t. scalar multiplication),
– if $w, w'\in W$, then $w+w'\in W$ (stability w.r.t. vector addition).
These conditions imply that the null vector of $V$ belongs to $W$. |
Definition of uniformity in different contexts | "Uniform" usually means roughly "the same everywhere". In the case of uniform continuity, given some $\epsilon$, the same $\delta$ works everywhere (or, at least, there is some $\delta$ that works everywhere). It's the continuity that says $\epsilon$-$\delta$ is involved, not the uniformity.
Similarly, a (usually infinite) collection of sequences (including sequences of real functions) is said to converge uniformly to some limit if for any $\epsilon$, you can find an $N$ that works for all sequences simultaneously.
As another example, uniform probability means any possible outcome is equally probable. I'm sure there are others. |