How many times must I flip a coin if I want a 50% chance that I'll flip heads twice in a row? What about 3 times in a row, 4, etc.? | For the case of two consecutive heads, you can express the general term in terms of the $n$th Fibonacci number, by slightly varying the reasoning in this answer (I do this below). The idea is basically to consider how you would build a valid sequence of tosses of length $n$ from one of length $n-1$ or length $n-2$ (and that's how Fibonacci pops up).
The probability of at least 2 successive heads after $n$ throws is $$1-\frac{F_{n+2}}{2^n},$$
and indeed, with $n=4$, we have $1-\frac{F_{4+2}}{2^4} = 1 - \frac{8}{16}=\frac12$.
The corresponding sequence is available on OEIS here.
The reasoning for the general case of a run of more than 2 heads generalises similarly. If you want $k$ successive heads, the probability is basically
$$1-\frac{F_{k,\,n+k}}{2^n}$$
where $F_{3,n}$ denotes the $n$th Tribonacci number, $F_{4,n}$ denotes the $n$th Tetranacci number, etc, using the generalised Fibonacci numbers. See the OEIS sequences for 3 and 4 coins here and here.
Computationally, you can also see that it is not always possible to have probability precisely equal to $\frac12$: e.g., for $k=3$ heads, the probability is $0.46$ when $n=9$, but $0.51$ when $n=10$. Similarly, when $k=4$, the probability is $0.497$ when $n=21$ and $0.515$ when $n=22$.
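This is easy to check numerically. Here is a minimal Python sketch (the function name and the use of exact rationals are my own choices) that computes the probability via the generalised Fibonacci recurrence and reproduces the numbers above:

```python
from fractions import Fraction

def prob_k_heads(k, n):
    """Probability of a run of at least k heads in n fair tosses, via
    F(m) = number of length-m H/T strings containing no run of k heads:
    F(m) = 2^m for m < k, and F(m) = F(m-1) + ... + F(m-k) otherwise."""
    F = [2**m for m in range(k)]      # base cases F(0), ..., F(k-1)
    for m in range(k, n + 1):
        F.append(sum(F[-k:]))         # generalised Fibonacci step
    return 1 - Fraction(F[n], 2**n)

assert prob_k_heads(2, 4) == Fraction(1, 2)
print(float(prob_k_heads(3, 9)), float(prob_k_heads(3, 10)))    # ~0.46, ~0.51
print(float(prob_k_heads(4, 21)), float(prob_k_heads(4, 22)))   # ~0.497, ~0.515
```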
Proof of the formula for the case where $k=2$
Let $F(n)$ denote the number of sequences of the letters in $\{H,T\}$ of length $n$, having no two successive $H$'s. Clearly we have $F(1) = 2$ and $F(2) = 3$.
Now, to build a sequence of length $n$ with no successive $H$'s, you can start from one of length $n-1$ and put a $T$ at the front, or else start from one of length $n-2$ and put an $HT$ at the front (a valid sequence either starts with $T$, or starts with $H$, which must then be followed by a $T$). It follows that $$F(n) = F(n-1) + F(n-2),$$
and since $F(1) = 2 = F_3$ and $F(2) = 3 = F_4$, it follows that $F(n) = F_{n+2}$.
Now we are interested in the sequences which do contain at least one pair of successive $H$'s; these are simply the remaining ones, so there are $2^n - F_{n+2}$ of them. $\qquad\square$
Generalising the proof for $k\geqslant 3$
The idea is essentially the same, now splitting according to the initial run of heads. Indeed, we can build a sequence of length $n$ with no run of $k$ successive heads by:
prepending a $T$ to one of length $n-1$
prepending an $HT$ to one of length $n-2$
prepending an $HHT$ to one of length $n-3$
$\qquad\vdots$
prepending an $\underbrace{H\cdots H}_{k-1}T$ to one of length $n-k$.
It follows that
$$F(n)=F(n-1) + F(n-2) + \cdots + F(n-k),$$
and, after checking directly that $F(n) = F_{k,\,n+k}$ holds for $1\leqslant n \leqslant k$, induction on $n$ establishes it for all $n$.
Thus we get that the number of sequences of length $n$ with at least $k$ successive heads somewhere is $2^n - F_{k,\,n+k}$. |
Is $2\binom{d}{k} \le \binom{2d}{k}$ true? | Since this turned out to be quite trivial, I will just make Wojowu's comment into an answer:
When choosing $k$ out of $2d$ elements, we can choose $k$ among the first $d$ or $k$ among the last $d$ (or some mix). This gives $2\binom{d}{k}$ as a lower bound. |
Homo and Isomorphism for Sets | Theorem B appears to be a more specific version of Theorem A, specialized to the case where the equivalence relation required by Theorem A is the one defined between the two. |
How to calculate $(a \cdot b\cdot c \cdot \ldots) \pmod{x}$ | Yes, this works because of the following simple property:
If $$a \equiv b \pmod{n}$$ and $$c \equiv d \pmod{n}$$ then $$ac \equiv bd \pmod{n}$$
From the definition of the modulo operation:
$$x_1 \cdot x_2 \equiv (x_1 \cdot x_2 \pmod{c} ) \pmod{c}$$ and obviously :
$$x_3 \equiv x_3 \pmod{c}$$
Now using the property :
$$x_1 \cdot x_2 \cdot x_3 \equiv (x_1 \cdot x_2 \pmod{c} ) \cdot x_3 \pmod{c}$$
which is exactly what you wrote.
Basically, you can reduce any number you want modulo $c$ inside such an expression, and the value of the expression will be unchanged modulo $c$.
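As a small illustrative sketch (the numbers are arbitrary), reducing after every multiplication gives the same result as reducing once at the end:

```python
def prod_mod(xs, c):
    """Product of the numbers in xs modulo c, reducing at every step
    so that intermediate values always stay below c."""
    acc = 1
    for x in xs:
        acc = (acc * x) % c
    return acc

xs, c = [123456, 789012, 345678], 97
assert prod_mod(xs, c) == (123456 * 789012 * 345678) % c
```
Reducing early is what keeps the intermediate products small. |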
Finding limit for infinite quantities. | Yes, you are right, the limit is not zero. This is my hint: for $x>0$,
$$\int_{x}^{\infty}\frac{dt}{1+t^4}=\sum_{k=1}^{\infty}\int_{kx}^{(k+1)x}\frac{dt}{1+t^4}\leq x\sum_{k=1}^{\infty}\frac{1}{1+(kx)^4}\leq \sum_{k=1}^{\infty}\int_{(k-1)x}^{kx}\frac{dt}{1+t^4}=\int_{0}^{\infty}\frac{dt}{1+t^4}.$$
Then take a look at How can I compute the integral $\int_{0}^{\infty} \frac{dt}{1+t^4}$? |
Combinatorics Problem with Coins | A large pile of coins consists of pennies, nickels, dimes, and quarters. How many different collections of coins can be formed if there are at least $30$ of each type of coin?
If we let $p$, $n$, $d$, and $q$ denote, respectively, the number of pennies, nickels, dimes, and quarters contained in the collection, then
$$p + n + d + q = 30 \tag{1}$$
A particular solution of equation 1 corresponds to the placement of three addition signs in a row of $30$ ones. For instance,
$$+ 1 1 1 1 1 + 1 1 1 1 1 1 1 1 1 1 + 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1$$
corresponds to the solution $p = 0$, $n = 5$, $d = 10$, and $q = 15$. The number of such solutions is the number of ways we can place three addition signs in a row of thirty ones, which is
$$\binom{30 + 4 - 1}{4 - 1} = \binom{33}{3} = \binom{33}{30} = \frac{33!}{30!3!}$$
since we must choose which three of the thirty-three positions required for thirty ones and three addition signs will be filled with addition signs or, equivalently, which thirty of the thirty-three positions required for thirty ones and three addition signs will be filled with ones.
This appears to be what you had in mind. However, you did not use parentheses correctly in your answer.
$$\binom{30 + 4 - 1}{30} = \frac{(30 + 4 - 1)!}{30!(4 - 1)!} = \frac{33!}{30!3!}$$
If the pile contains only $15$ quarters but at least $30$ of each of the other types of coins, how many collections of $30$ coins can be chosen?
We must subtract those collections which include at least $16$ quarters from the total. Suppose $q \geq 16$. Then $q' = q - 16$ is a nonnegative integer. Substituting $q' + 16$ for $q$ in equation 1 yields
\begin{align*}
p + n + d + q' + 16 & = 30\\
p + n + d + q' & = 14 \tag{2}
\end{align*}
Equation 2 is an equation in the nonnegative integers with
$$\binom{14 + 4 - 1}{4 - 1} = \binom{17}{3}$$
solutions.
Hence, the number of collections of $30$ coins that can be formed with at most $15$ quarters is
$$\binom{33}{3} - \binom{17}{3}$$
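A brute-force count (a quick sketch, not part of the original solution) confirms the answer:

```python
from math import comb

# enumerate (p, n, d, q) with p + n + d + q = 30 and at most 15 quarters
count = sum(1 for p in range(31) for n in range(31) for d in range(31)
            for q in range(16) if p + n + d + q == 30)
assert count == comb(33, 3) - comb(17, 3)   # 5456 - 680 = 4776
```

The enumeration agrees with the inclusion-exclusion count above. |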
For a block matrix of full rank that contains only positive elements, is the determinant of any square partition non-zero? | Consider the example
$$\begin{pmatrix}
0&0&1&0\\
0&0&0&1\\
1&0&0&0\\
0&1&0&0
\end{pmatrix}
$$ where the matrix $A$ is the zero matrix. |
Tricky trig question from GRE | Once you get to this stage:
$\cos^2 (\arccos \frac{\pi}{12})$,
Use the facts that $\cos(\arccos x) = x$ and $\cos^2 y = (\cos y)^2$ to get to the final answer.
In this case, that's simply $(\frac{\pi}{12})^2 = \frac{\pi^2}{144}$ |
Let $T : \mathbb R^m \to \mathbb R^n$ be a linear transformation, prove that... | Hint: take $v_1,\dots,v_k$ to be a set of linearly independent vectors. Suppose that $c_1,\dots,c_k$ are scalars such that
$$
c_1T(v_1) +\cdots+c_k T(v_k)= 0
$$
If $T$ is injective, show that each $c_i$ must be zero.
If $T$ is not injective, show that we can find a non-zero $v$ so that $T(v)=0$. The set $\{v\}$ is linearly independent. |
Two dimensional taylor expansion of arbitrary function | For $f=f(x,y)$ we have
$$f(a+h_a,b+h_b)=f(a,b) + f_x(a,b)h_a + f_y(a,b)h_b + O(h_a^2+h_b^2)$$
In your case $h_a=n_t$ and $h_b=n_{t-1}$ and $a=b=N^*$, therefore
$$f(n_t+N^*, n_{t-1}+N^*) = f(N^*, N^*) + n_t f_x(N^*, N^*) + n_{t-1}f_{y}(N^*, N^*) + \mathcal{O}(n_t^2+n_{t-1}^2).
$$
$f_x(N^*, N^*)$ is the partial derivative with respect to the first variable evaluated at $(x,y)=(N^*, N^*)$. Similarly about $f_y$.
Since the above is a little bit confusing, you can try to write it in a more compact way :)
$$f(a+h_a,b+h_b)=f(a,b) + (\nabla f)(a,b)\cdot (h_a,h_b) + O(\|(h_a,h_b)\|^2)$$
Here the $\cdot$ is a dot product between two vectors: the gradient $\nabla f$ (evaluated at $(a,b)$) and the vector $\vec{\delta_h}=(h_a,h_b)$. |
Is $\sum_{n=0}^\infty \frac{(-1)^n}{n!(x-n)!}$ equal to $0$? | If $x=0$ the result is false since the sum is equal to $1$ (the term with $n=0$ remains). So assume that $x>0$. We have, by the Newton binomial theorem, that $$\frac{1}{x!}\sum_{n\geq0}\dbinom{x}{n}\left(-1\right)^{n}=\frac{\left(1-1\right)^{x}}{x!}=0,$$ which proves the claim. |
Generic elementary group theory problems. | I find that, for elementary group theory problems, one of the best places to start is by asking yourself if (and how) group actions could be used to express the problem. Then you have some very powerful (while still elementary) theorems that you can use.
For your examples, I point you to User-33433's answer.
For another example which was found in a homework problem in the abstract algebra class that I just finished:
Show that any group (including infinite) that contains a proper subgroup of finite index also contains a proper normal subgroup of finite index.
One way to prove it is to go through all of the motions of proving that the intersection of two finite-index subgroups is, again, of finite index, extending this by induction, looking at $N=\bigcap_{g\in G}gHg^{-1}$, proving that this is normal and is actually a finite intersection by the Orbit-Stabilizer Theorem when $G$ acts on the conjugates of $H$ by conjugation, and finally using the result on finite intersections of finite-index subgroups that you would have proved.
The easier, and more powerful, way to show this result (and even more, as we'll see) is the following, which comes from considering the action of $G$ on the cosets of $H$ as the important feature, rather than just a means to show that the previous set $N$ is a finite intersection.
Let $H\leq G$ be of finite index $n$. Then $G$ acts on the cosets of $H$ by left multiplication, and this induces a map $\varphi:G\to\text{Sym}\left(G/H\right)\cong S_{n}$. The kernel of this map is normal, and $G/\ker\varphi$ is isomorphic to a subgroup of $S_{n}$, so $|G:\ker\varphi|\leq n!$, as required.
This proves that, not only do we have a proper normal subgroup of finite index, but we have one of index $\leq n!$. So, group actions are very powerful and allow you to go straight to the core of many elementary group theory problems, rather than having you flounder around for a bunch of other results just to scratch the surface. |
Examples of sufficient statistics for non-exponential family distributions? | Your statement of the Pitman-Koopman-Darmois theorem is off; there is an additional assumption that the support $\mathcal X$ of $X_1$ does not change as $\theta$ changes, where $\theta$ parameterizes the family. A quick counterexample to the statement of the theorem as given in the OP is the family $\{\mbox{Uniform}(0, \theta): \theta > 0\}$, for which $\max\{X_j, 1 \le j \le n\}$ is sufficient and does not vary with the sample size $n$.
More in the spirit of your question, the answer is yes, there do exist distributions whose sufficient statistics are "lossy" even when the conditions of the PKD theorem are satisfied. Consider $X_1, X_2, ...$ iid from a Gamma distribution with shape parameter $\alpha$ (known) and mean $\mu$, and $Z_1, Z_2, ...$ iid Bernoulli with success probability $p$ also known. Then take $Y_i = Z_i X_i - (1 - Z_i)$, and our sample becomes $Y_1, Y_2, ...$. We only get information about $\mu$ when $Y_i \ne -1$, so our sufficient statistic is $\sum_{i: Y_i \ne -1} Y_i$, which grows like $pn$ on average.
Sublinear growth is possible, proceeding along the train of thought suggested above, i.e. using mixtures of distributions, and indeed this is getting at something that is useful in practice. Take $(X_1, Z_1), (X_2, Z_2), ...$ to be iid distributed according to an infinite mixture of normals $f(x, z \mid \pi, \mu) = \prod_{k = 1}^{\infty} [\pi_k N(x \mid \mu_k, 1)]^{I(z = k)}$, with $Z_i$ an indicator of which cluster $X_i$ is in (I'm not sure if the representation of it via a density that I wrote is valid but you should get the general idea); the dimension of the sufficient statistics should increase only when new clusters are discovered, and the rate of the appearance of new clusters can be controlled by taking $\{\pi_k\}_{k = 1} ^ \infty$ to be known and carefully choosing them; my hunch is that it should be easy to make it grow at a rate of $\log(1 + Cn)$ since I think this is how fast the number of clusters grows in the Dirichlet process. |
Understanding group presentation as a quotient | Here is an explanation of shortrevlex Knuth-Bendix rewriting for your group. It is long, but if you can understand how I make $R_4$, then the rest of it is just more of the same.
Let me change your group just a tiny bit, $e=rf$ and $r=ef$ so the group is now generated by $T=\{e,f\}$ $$\langle e,f \mid (ef)^4=f^2=e^2=1\rangle$$
For each element of $F_T/R^{F_T}$ I want to find the shortest way of writing it down, and amongst shortest ways, I'll choose the one that comes last alphabetically. If I use a relation to replace a long way with a shorter way (for the same element), I'll call that simplifying.
To keep things easy, I'll start out by only using relations the obvious way:
\begin{array}{ll}
R_1: & ee & \mapsto 1 \\
R_2: & ff & \mapsto 1 \\
R_3: & efefefef & \mapsto 1 \\
\end{array}
Now I want to make sure I don't miss any simplifications. Notice that $eefefefef$ can be viewed as both $(ee)fefefef \stackrel{R_1}{\mapsto} fefefef$ and as $e(efefefef) \stackrel{R_3}{\mapsto} e$. Clearly $e$ is simpler, but $fefefef$ doesn't match any of my rules. I'll go ahead and add the rule that says $fefefef$ and $e$ are the same but $e$ is simpler:
\begin{array}{ll}
R_1: & ee & \mapsto 1 \\
R_2: & ff & \mapsto 1 \\
R_3: & efefefef & \mapsto 1 \\
R_4: & fefefef & \mapsto e \\
\end{array}
Notice the third rule is now redundant: the left hand sides probably should be as simple as possible considering the other rules, but the left hand side of $R_3$ simplifies to $e(fefefef) \stackrel{R_4}{\mapsto} ee$ and so simplified $R_3$ is the same as $R_1$.
I look again for double-rule opportunities: $ffefefef$ is both $(ff)efefef \stackrel{R_2}{\mapsto} efefef$ and $f(fefefef) \stackrel{R_4}{\mapsto} fe$, but again none of the rules do that directly, so I add a rule that says $efefef$ and $fe$ are the same, but $fe$ is simpler:
\begin{array}{ll}
R_1: & ee & \mapsto 1 \\
R_2: & ff & \mapsto 1 \\
R_4: & fefefef & \mapsto e \\
R_5: & efefef & \mapsto fe \\
\end{array}
This time $R_4$ is redundant, the left hand side is $f(efefef) \stackrel{R_5}{\mapsto} f(fe) = (ff)e \stackrel{R_2}{\mapsto} e$, so we can skip it too.
Again $eefefef$ is both $(ee)fefef \stackrel{R_1}{\mapsto} fefef$ and $e(efefef) \stackrel{R_5}{\mapsto} e(fe)$ so we add rule 6, and notice rule 5 is redundant:
\begin{array}{ll}
R_1: & ee & \mapsto 1 \\
R_2: & ff & \mapsto 1 \\
R_6: & fefef & \mapsto efe \\
\end{array}
This continues one more time until we get:
\begin{array}{ll}
R_1: & ee & \mapsto 1 \\
R_2: & ff & \mapsto 1 \\
R_7: & efef & \mapsto fefe \\
\end{array}
Now at this point we have two double rule opportunities $eefef$ and $efeff$. Both work about the same, so I'll show the first: $(ee)fef \stackrel{R_1}{\mapsto} fef$ is the same as $e(efef) \stackrel{R_7}{\mapsto} e(fefe) = (efef)e \stackrel{R_7}{\mapsto} (fefe)e = fef(ee) \stackrel{R_1}{\mapsto} fef$. Well, that's clear, $fef=fef$. The other double rule opportunity is similar.
It is a theorem that because of this lack of exciting double rule opportunities, there are NO exciting opportunities for using rules in different ways. If you apply the rules mechanically until they cannot be applied anymore, then the resulting answer is always the same, no matter which order you choose to use the rules in, or if they match in multiple places, which place you decide to use the rule on first.
This also tells us all of the different elements: we just start with the identity and multiply by $e$ and $f$, discarding any word to which a rule applies, since the rule would give a shorter answer that we have already considered.
Doing this we get $\hat G = \{1,e,f,ef,fe,efe,fef,fefe\} \subset F_T$ and that is it. We get that if $u,v \in \hat G$ and $u \neq v$ then $uR^{F_T} \neq vR^{F_T}$ (because no rule applies) and $F_T/R^{F_T} = \{ u R^{F_T} : u \in \hat G \}$ (because $\hat G$ is “closed” under multiplication, after applying rules, and contains the generators $e$ and $f$).
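A small Python sketch (my own illustrative code, not part of the original answer) that applies the final rules mechanically and enumerates the normal forms exactly as described:

```python
# The final confluent system: ee -> 1, ff -> 1, efef -> fefe
RULES = [("ee", ""), ("ff", ""), ("efef", "fefe")]

def normalize(w):
    """Apply rules until none matches; by confluence the order does not matter."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in w:
                w = w.replace(lhs, rhs, 1)
                changed = True
    return w

# Start from the identity and multiply by e and f until nothing new appears.
elements, frontier = {""}, {""}
while frontier:
    frontier = {normalize(w + g) for w in frontier for g in "ef"} - elements
    elements |= frontier

print(sorted(elements, key=lambda w: (len(w), w)))
# ['', 'e', 'f', 'ef', 'fe', 'efe', 'fef', 'fefe'] -- the 8 elements above
```
This recovers the eight elements of $\hat G$. |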
solving inhomogeneous recurrence relation | It never hurts to gather some computational data first; sometimes it leads to a quick and easy guess at the answer, which can then be proved rigorously.
$$\begin{array}{r|c}
n:&1&2&3&4&5&6&7&8&9&10\\
f(n):&2&4&6&10&14&22&30&46&62&94\\
\text{Increase}:&&2&2&4&4&8&8&16&16&32
\end{array}$$
A look at that last line, showing the amount of increase from one term to the next, makes it pretty clear that $$f(2n+1)=2+2\left(2^1+2^2+\ldots+2^n\right)=2+2\sum_{k=1}^n2^k=2\sum_{k=0}^n2^k$$ and $$f(2n+2)=f(2n+1)+2^{n+1}=2\sum_{k=0}^n2^k+2^{n+1}$$ for $n\ge 0$.
From the formula for the sum of a geometric progression we know that $\sum_{k=0}^n2^k=2^{n+1}-1$, so conjecture that
$$\begin{align*}
f(n)&=\begin{cases}
2\left(2^{(n+1)/2}-1\right),&\text{if }n\text{ is odd}\\\\
2\left(2^{n/2}-1\right)+2^{n/2},&\text{if }n\text{ is even}
\end{cases}\\\\
&=\begin{cases}
2^{(n+3)/2}-2,&\text{if }n\text{ is odd}\\\\
3\cdot2^{n/2}-2,&\text{if }n\text{ is even}\;.
\end{cases}
\end{align*}$$
Now that we’ve discovered what the correct closed form almost certainly is, we can prove it by induction on $n$. I’ll leave that part to you.
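As a sanity check (a throwaway sketch, not a substitute for the induction), the conjectured closed form reproduces the table above:

```python
def closed_form(n):
    # 2^((n+3)/2) - 2 for odd n, and 3 * 2^(n/2) - 2 for even n
    return 2**((n + 3) // 2) - 2 if n % 2 else 3 * 2**(n // 2) - 2

assert [closed_form(n) for n in range(1, 11)] == [2, 4, 6, 10, 14, 22, 30, 46, 62, 94]
```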
There are more systematic approaches. In fact, after going through this argument you might well discover on your own that in general if $f(n)=f(n-1)+g(n-1)$, then
$$\begin{align*}
f(n)&=f(n-1)+g(n-1)\\
&=\Big(f(n-2)+g(n-2)\Big)+g(n-1)\\
&=\Big(f(n-3)+g(n-3)\Big)+g(n-2)+g(n-1)\\
&\;\vdots\\
&=f(1)+g(1)+g(2)+\ldots+g(n-2)+g(n-1)\\
&=f(1)+\sum_{k=1}^{n-1}g(k)\;.
\end{align*}$$
(This can be properly proved by induction.) Thus, if $g$ is any function for which you can find a closed form for $\sum_{k=1}^{n-1}g(k)$, you’re home free. In the specific problem we simply got a geometric series. |
Inequality proof involving series representation of ${ e }^{ x }$ | $$\begin{align}\left(1 + \frac xn\right)^n - (1+x) &= \left(\sum_{j=0}^n {n \choose j}\frac {x^j}{n^j}\right) - (1+x) \\&= \sum_{j=2}^n {n \choose j}\frac {x^j}{n^j} \\&= \sum_{j=2}^n \frac{n!}{(n-j)!n^j}\frac{x^j}{j!} \end{align}$$
Hence
$$\begin{align}\left| \left(1+\frac xn\right)^n-(1+x)-\sum_{j=2}^m\frac { x^j}{j!} \right| &= \left|\sum_{j=2}^m \left(\frac{n!}{(n-j)!n^j}-1\right)\frac{x^j}{j!} + \sum_{j=m+1}^n \frac{n!}{(n-j)!n^j}\frac{x^j}{j!}\right|\\&\le \sum_{j=2}^m\left|\frac{n!}{(n-j)!n^j}-1\right|\frac{\left|x^j\right|}{j!}+\sum_{j=m+1}^n \frac{n!}{(n-j)!n^j}\frac{\left|x^j\right|}{j!}\end{align}$$
Now, $$\frac{n!}{(n-j)!n^j} = \frac {\overbrace{n(n-1)(n-2)\dots(n-j+1)}^{j\text{ factors}}}{n^j} = 1\left(1-\frac 1n\right)\left(1-\frac 2n\right)\dots\left(1-\frac{j-1}n\right)$$
Since $j > 1$, we get $$\frac{n!}{(n-j)!n^j} < 1$$ But also since $\left(1-\frac{k}n\right) \ge \left(1-\frac{j-1}n\right)$ for $0 \le k \le j- 1 < n$,
$$\frac{n!}{(n-j)!n^j} =\left(1-\frac 1n\right)\left(1-\frac 2n\right)\dots\left(1-\frac{j-1}n\right) \ge \left(1-\frac{j-1}n\right)^{j-1}$$
and so $$0 < 1 - \frac{n!}{(n-j)!n^j} \le 1 - \left(1-\frac{j-1}n\right)^{j-1}$$
Therefore
$$\sum_{j=2}^m\left|\frac{n!}{(n-j)!n^j}-1\right|\frac{\left|x^j\right|}{j!} = \sum_{j=2}^m\frac{\left|x^j\right|}{j!}\left(1-\frac{n!}{(n-j)!n^j}\right)\le \sum_{j=2}^m\frac{\left|x^j\right|}{j!}\left(1 - \left(1-\frac{j-1}n\right)^{j-1}\right)$$
and
$$\sum_{j=m+1}^n \frac{n!}{(n-j)!n^j}\frac{\left|x^j\right|}{j!} \le \sum_{j=m+1}^n \frac{\left|x^j\right|}{j!} \le \sum_{j=m+1}^\infty \frac{\left|x^j\right|}{j!}$$
and your inequality follows. |
Inequality $\frac{1}{1^3}+\frac{1}{2^3}+\frac{1}{3^3}+\cdots+\frac{1}{n^3}<\frac{3}{2}$ | You can use for $n\geq 2$, $$\dfrac{1}{n^3} \leq \dfrac{1}{n(n+1)} = \dfrac{1}{n} - \dfrac{1}{n+1}$$ |
Change of Basis Confusion | The "change of basis matrix from $\beta$ to $\gamma$" or "change of coordinates matrix from $\beta$-coordinates to $\gamma$-coordinates" is the matrix $A$ with the property that for every vector $v\in V$,
$$A[v]_{\beta} = [v]_{\gamma},$$
where $[x]_{\alpha}$ is the coordinate vector of $x$ relative to $\alpha$. This matrix $A$ is obtained by considering the coordinate matrix of the identity linear transformation, from $V$-with-basis-$\beta$ to $V$-with-basis-$\gamma$; i.e., $[\mathrm{I}_V]_{\beta}^{\gamma}$.
Now, you say you want to take $T\colon V\to V$ that sends $v_i$ to $w_i$, and consider "the matrix of this linear transformation". Which matrix? With respect to what basis? The matrix of $T$ relative to $\beta$ and $\gamma$, $[T]_{\beta}^{\gamma}$, is just the identity matrix. So not that one.
Now, if you take $[T]_{\beta}^{\beta}$; i.e., you express the vectors $w_i$ in terms of the vectors $v_i$, what do you get? You get the matrix that takes $[x]_{\gamma}$ and gives you $[x]_{\beta}$; that is, you get the change-of-coordinates matrix from $\gamma$ to $\beta$. To see this, note, for example, that $[w_1]_{\gamma} = (1,0,0,\ldots)^t$, so $[T]_{\beta}^{\beta}[w_1]_\gamma$ is the first column of $[T]_{\beta}^{\beta}$, which is how you express $w_1$ in terms of $\beta$.
Which is why it would be the "change of basis matrix from $\gamma$ to $\beta$". Because, as Qiaochu mentions in the answer I linked to, the "translation" of coordinate vectors achieved by this matrix goes "the other way": it translates from $\gamma$-coordinates to $\beta$-coordinates, even though you "defined" $T$ as "going" from $\beta$ to $\gamma$. |
A semiring isomorphic to a cartesian product of two semirings | We can't know it because it is false!
Note that $S$ satisfies the following property: for any $x,y\in S$, $x+y$ is either $x$ or $y$.
But this is not true in $R\times T$: $(1,0)+(0,2)=(1,2)$.
So they cannot be isomorphic. |
Is the category of f.d. vector spaces coherent? | You are missing the fact that pulling back subspaces does not preserve the sum (which is the join of subobjects in the category of finite-dimensional vector spaces over $k$). I've given a counterexample here. |
Proof from "An isoperimetric inequality with applications to curve shortening" by Gage | The term 'bisects' very likely means that the region bounded by the convex curve is divided into two regions of equal area by such a line. The existence is (e.g.) a consequence of the mean value theorem applied to a family of lines passing through $X(s)$ and $X(s+t)$ for $t$ in the range $(0,\ell)$, $l$ being the length of the curve. You kind of swipe out the area of the region bounded by the curve using a family of lines through one single point. Since that region is convex such a line divides the region bounded by the curve into two subregions with continuously varying area (continuity is what you have to verify and what allows you to apply the mean value theorem).
It is not clear to me what exactly you are asking about the behaviour of $f$, maybe you can make that more precise.
(Edit) Note: Gage extends the geometric picture to Euclidean three-space. There, the cross product of two vectors is normal to these vectors and its length is the area of the parallelogram spanned by the two vectors. This (the area of said parallelogram) is what he gets by multiplying with the normal. Note that this function measures, in some sense, how close the curve is to a circle. On a circle it vanishes identically, and (without checking) I'm sure the converse is true as well. On an ellipse which is not a circle it vanishes only in four points.
I looked into the papers of Gage about some 15 years ago (I do assume that's what you are looking at), it's interesting to see this is still an area of interest. |
The probability that A hits a target is $\frac14$ and that of B is $\frac13$. If they fire at once and one hits the target, find $P(\text{A hits})$ | $$ \begin{align}
P(\mbox{target is hit once}) &= P(\mbox{A hitting}) \cdot P(\mbox{B not hitting}) + P(\mbox{A not hitting}) \cdot P(\mbox{B hitting}) \\
&= \frac{1}{4}\cdot\frac{2}{3} + \frac{3}{4}\cdot\frac{1}{3} \\
&= \frac{5}{12}
\end{align}
$$
So, $$P(\mbox{A hitting | target is hit once}) = \frac{P(\mbox{A hitting}) \cdot P(\mbox{B not hitting})}{P(\mbox{target is hit once})} = \dfrac{\frac{1}{6}}{\frac{5}{12}} = \frac{2}{5}.$$ |
Integral using trigonometric substitution | Note also that it is $\tan^{-1}(\sqrt 3t)$, not $\tan^{-1}\left(\frac{t}{\sqrt 3}\right)$.
Setting $t=\frac{1}{\sqrt 3}\tan\theta$ gives you
$$\begin{align}\frac 23\int\frac{1}{\left(\frac{1}{\sqrt 3}\right)^2+t^2}dt&=\frac 23\int\frac{1}{\frac 13+\frac 13\tan^2\theta}\cdot\frac{d\theta}{\sqrt 3\cos^2\theta}\\&=\frac 23\cdot\frac{1}{\frac 13\cdot \sqrt 3}\int d\theta\\&=\frac{2}{\sqrt 3}\tan^{-1}(\sqrt 3 t)+C\\&=\frac{2}{\sqrt 3}\tan^{-1}\left(\sqrt 3\tan\frac x2\right)+C\end{align}$$ |
Additive form of a spectral decomposition? | These two are the same. Let $E_1, \cdots, E_n$ be the eigensubspaces and $P_i$ be the projection onto $E_i$. Let $x$ be a vector; then $x = x_1+ \cdots + x_n$ for some $x_i \in E_i$. Hence $Ax = Ax_1 + \cdots +Ax_n = \sum \lambda_i x_i= \sum \lambda_i P_i (x)$.
On the other hand, if you choose a unitary basis $\{v_1, \cdots, v_n\}$ such that $v_i \in E_i$, then the matrix $S$ with column vectors $v_i$ satisfies $A = S\Lambda S^*$.
It is more convenient to use projections when one needs to deal with infinite-dimensional spaces, where one cannot use matrices to describe linear maps. In QM, for example, the state space is infinite dimensional and the Schrödinger operator is an operator on the state space. |
An introduction to Lagrangian and Hamiltonian mechanics well suited for people familiar with convex optimization | It may be worth a shot to have a look at Gauss's principle of least constraint (GPoLC). There is very little material available about it, but I think it is worth the effort to get a hold of it. It basically says that the acceleration a system experiences is such that the Euclidean norm of the difference between the acceleration it actually experiences and the acceleration it would experience without any constraints is minimised. Consider for example the link of a robot arm. The acceleration "without constraints" would be the acceleration it would experience if you were to cut it loose from any attached links and consider only gravity, drag, etc., but not the reaction forces occurring in the joints.
I recently wanted to implement a rigid-body simulation of a bipedal robot with contact constraints. I was very confused by the many materials I found about Lagrangian mechanics, each deriving it in a slightly different way, and it was GPoLC that came to the rescue.
I think the power of GPoLC is that it turns the problem of finding the equations of motion of a mechanical system into an optimisation problem with quadratic objective. If you add for example time-invariant holonomic constraints to your problem, it is not so hard to see that the feasible set will be convex and hence the problem itself will be convex which allows you to use the full power of convex optimisation results. Constraint forces for example will turn out to be Lagrange multipliers in your problem. |
Can an infinite set of primes be a regular language or CFG? | Assume $P$ is your infinite set of primes, and we are interested in the language of these primes written in base $B$. We prove the language isn't context-free by contradicting the respective pumping lemma. We will use variables to indicate numerical values and the strings in base $B$ representing them interchangeably for conciseness. In particular, $\lvert a \rvert$ denotes the length of the base $B$ representation of $a$.
Assume the language is context-free, so it satisfies the respective pumping lemma. Let $N$ be the constant of the lemma, and $p \in P$ such that $\lvert p \rvert \ge 2 N$; $P$ being infinite, such a $p$ certainly exists.
By the pumping lemma, we can write (as strings):
$$
p = u v w x y
$$
with $\lvert v \rvert + \lvert w \rvert + \lvert x \rvert \le N$, $\lvert v \rvert + \lvert x \rvert \ne 0$ such that for all $k \in \mathbb{N}_0$ we have (as strings) $u v^k w x^k y$ is in the language, in particular, it is a prime.
Translate into numbers now (and forget the strings). Calling $\alpha = \lvert v \rvert$ and $\beta = \lvert x \rvert$ the string pumped $k \ge 1$ times represents:
$$
p^{(k)}
= u \cdot B^{ (\alpha + \beta) k + a}
+ v \cdot B^{\beta k + b} \cdot \frac{B^{\alpha k} - 1}{B^\alpha - 1}
+ w \cdot B^{\alpha k + c}
+ x \cdot B^d \cdot \frac{B^{\beta k} - 1}{B^\beta - 1}
+ y
$$
for some $a$, $b$, $c$, and $d$. If $v$ or $x$ is empty, the respective term is missing from now on.
By Fermat's little theorem, $A^p \equiv A \pmod{p}$ for all $A$. Also $B^\alpha$ and $B^\beta$ are less than $p$, and so relatively prime to it. Now:
$$
p^{(p)}
\equiv u \cdot B^{ \alpha + \beta + a}
+ v \cdot B^{\beta + b} \cdot \frac{B^\alpha - 1}{B^\alpha - 1}
+ w \cdot B^{\alpha + c}
+ x \cdot B^d \cdot \frac{B^\beta - 1}{B^\beta - 1}
+ y
\equiv p
\equiv 0
\pmod{p}
$$
Contradiction, since $p^{(p)}$ is then a multiple of $p$ greater than $p$, hence not prime. |
Problem solving strategy? | Persistently thinking about the problem, recording every little detail of the thought process, is the only way to get to a point where you can solve IMO 3s and 6s. The tendency is to ask people how to become a master problem solver, but the answer is to patiently climb the steep mountain of solving tough IMO problems, even if it means that you have to think about one for weeks at a time.
Just as a matter of information, I used to ask the exact same questions. I then stopped asking, and started thinking about IMO 3s and 6s. I could very gradually solve some and it took almost 4 years to get to this point (I am 35 years old now). |
Tell whether the relation is reflexive, symmetric, asymmetric, antisymmetric or transitive. | You should ask yourself:
(1) Is it true for every person $x$ that $x$ was born in the same year as $x$ him- or her-self? (Reflexive).
(2) Is it true for all people $x,y$, if $x$ was born in the same year as $y$ does it necessarily follow that $y$ was born in the same year as $x$? (Symmetric).
See if you can take it from there and figure out transitive and the others. |
Continuous function on compact topological space | Let $B_{\frac{1}{n}}(f(p))$ be the ball of radius $\frac{1}{n}$ around $f(p)$ in $\mathbb{C}$, $n\in\mathbb{N}$. For every $n\in\mathbb{N}$, since $f$ is continuous, $f^{-1}(B_{\frac{1}{n}}(f(p)))$ is open in $\mathcal{O}$ and contains $p$; therefore its complement, say $C_n$, has finitely many elements. Moreover, $f^{-1}(f(p))=\bigcap_{n\in\mathbb{N}} f^{-1}(B_{\frac{1}{n}}(f(p)))$. Since $X^{*}\setminus f^{-1}(f(p)) = \bigcup_{n\in\mathbb{N}}C_n$, it has at most countably many elements and we are done. |
AC-3 algorithm (short for Arc Consistency Algorithm) | Two possible cases:
(1) Imagine there is another constraint involving $z$ and some other variable $t$ that has yet to be checked. Reducing the domain of $z$ to match the updated domain of $x$ could result in a different outcome for arc-reduce $(z, t)$, since the domain of $t$ might cover the properly updated domain of $z$ but not the un-updated (and thus obsolete and incorrect) version of $D(z)$.
(2) Another scenario would be arc-reduce $(t, z)$: the domain of $z$ might appear to cover more values of $t$ than it would have, had it been properly reduced. This might cause your algorithm to return true instead of false if, for example, no values of the trimmed $z$ actually fit the $z$-$t$ constraint anymore.
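For concreteness, here is a minimal AC-3 sketch (the variable names and data layout are my own, not from your question) showing where arcs into $x$ are re-enqueued after $D(x)$ shrinks:

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Drop values of x with no supporting value in D(y); return True if D(x) shrank."""
    removed = False
    for vx in list(domains[x]):
        if not any(constraints[(x, y)](vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """domains: var -> set of values; constraints: (x, y) -> predicate(vx, vy)."""
    queue = deque(constraints)          # all directed arcs (x, y)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False            # a domain was wiped out: inconsistent
            # D(x) changed, so every arc (z, x) must be rechecked --
            # exactly the situations described in (1) and (2) above.
            for (z, w) in constraints:
                if w == x and z != y:
                    queue.append((z, x))
    return True
```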
Hope this helps. |
problem of a positive definite matrix | The answer is positive.
Note that a symmetric matrix $A$ is positive definite if and only if its eigenvalues are all strictly greater than $0$. If we use $\lambda(A)$ to denote an eigenvalue of a matrix $A$, it can be easily verified that
$$\lambda(p(A)) = p(\lambda(A)),$$
where $p$ is any polynomial.
Therefore, $\lambda(k_1 H - k_2 I) = k_1 \lambda(H) - k_2$. In order to have them strictly greater than $0$, it is sufficient to have
$$k_1 > \frac{k_2}{\min\{\lambda(H)\}}.$$
The denominator is always positive since $H$ is assumed to be positive definite. |
On an Elementary Problem in Additive Number Theory | No, it's not true. For example, try $\{1, 2, 14\}$ and $\{8, 9\}$. |
Can you learn Algebra and Calculus at the same time? | I would say "No." If you don't already know algebra, you can't possibly learn calculus. And algebra itself isn't really enough. You really should have exposure to limits and analytic geometry, which typically are introduced in a precalculus course.
So I would buckle down and concentrate on mastering algebra first. Hit calculus after that (or after precalculus following algebra) -- you will understand it more deeply and more quickly if you have the proper tools in your toolkit. |
Trouble applying gram-schmidt to two vectors | You just made a silly mistake: you wrote $q_2=v_2-(v_2\cdot q_1)q_1$ but you accidentally substituted $v_1$ for the first $v_2$. |
Exam-Problem Functional analysis/sobolev spaces | For any $b>0$
$$
\langle u,v\rangle=\int_0^1 (u''v'' + b\,u'v'+u\,v)\,dx
$$
is an inner product on $W^{2,2}$ equivalent to the usual one. On the other hand
$$
\phi\mapsto\int_0^1 f\,\phi\,dx
$$
is a bounded linear functional on $W^{2,2}$. The result now follows from the Riesz representation theorem.
Another possibility would be to use the Lax-Milgram theorem on the bilinear form $\langle u,v\rangle$. |
Using clopen definition of connectedness to prove $M$ is not connected | $A= M\cap \{(x,y)\in \mathbb{R}^2 : x<0\}=M\cap \{(x,y)\in \mathbb{R}^2 : x\leq 0\}$, so $A$ is clopen in the topology of $M$. Recall that a subset of $M$ is open if it is the intersection of $M$ with an open subset of $\mathbb{R}^2$, and a subset of $M$ is closed if it is the intersection of $M$ with a closed subset of $\mathbb{R}^2$. |
Proof that the Lascoux-Schützenberger involutions satisfies the braid-relations | This is one of those cases where the margin is too small for everyone. Another proof is found in Marc A. A. van Leeuwen, Double crystals of binary and integral matrices, Electron. J. Combin., 13(1):Research Paper 86, 93, 2006. He uses different notations and works in a somewhat different setting; for details about how to derive your claim from van Leeuwen's work, see Remark 6.6 in Erik Aas, Darij Grinberg, Travis Scrimshaw, Multiline queues with spectral parameters, arXiv:1810.08157v1.
(I also have a different proof, and no time to write it up... It's on my tenure-track to-do list.) |
Logarithm of the determinant of a positive definite matrix | Hint 1: $\det(C)=\det(LL^T)=\det(L)\det(L^T)=\det(L)^2$, so $\log\det(C)=2\log\det(L)$. Denote by $\lambda_i$ the eigenvalues of $L$ and continue in the same way as you tried.
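In code, Hint 1 is the standard numerically stable recipe: since $L$ is triangular, $\det(L)$ is just the product of its diagonal entries. A small numpy sketch (the example matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
C = A @ A.T + 5 * np.eye(5)                # a symmetric positive definite matrix

L = np.linalg.cholesky(C)                  # C = L L^T with L lower triangular
logdet = 2 * np.sum(np.log(np.diag(L)))    # log det(C) = 2 log det(L)
assert np.isclose(logdet, np.log(np.linalg.det(C)))
```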
Hint 2: For the Jordan normal form $L=SJS^{-1}$ it holds
$\log(L)=S\log(J)S^{-1}$, so
$$
\operatorname{trace}\log(L)=\operatorname{trace}(S\log(J)S^{-1})=\operatorname{trace}\log(J).
$$ |
Can chain rule be used in first step | Whenever you need to compute the derivative of an expression which only contains products, quotients and powers, logarithmic differentiation makes life very simple.
Consider for example $$y=\frac{f(x)^a g(x)^b}{ h(x)^{c}}$$ where $a,b,c$ are constants. Take logarithms $$\log(y)=a \log(f(x))+b \log(g(x))-c \log(h(x))$$ Now $$\frac{y'}y=a \frac{f'}f+b\frac{g'}g-c\frac{h'}h$$ and then $y'$.
In a case similar to yours, $$y=f(x)^{g(x)}$$ $$\log(y)=g(x) \log(f(x))$$ and apply the product rule; so $$\frac{y'}y=g' \log(f)+g \frac {f'}f$$ and then $y'$.
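A quick symbolic check of the last identity (the concrete $f$ and $g$ below are arbitrary illustrative choices):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f, g = sp.sin(x) + 2, sp.cos(x)     # any smooth f > 0 and any smooth g will do
y = f**g

lhs = sp.diff(y, x) / y                                   # y'/y
rhs = sp.diff(g, x) * sp.log(f) + g * sp.diff(f, x) / f   # g' log(f) + g f'/f
assert sp.simplify(lhs - rhs) == 0
```
The same pattern verifies the product/quotient example as well. |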
conditional probability of two dice when none lands on 6 | You are correct; the answer is $\frac23$. What you calculated is $P(\neg A\mid B)$, and it is easy to show that $P(\neg A\mid B)+P(A\mid B)=1$. |
Example of functions satisfying certain little-oh condition | $f(n)=\log n, n \ge 2$ and $g(n)=(\log n)^{-1/2}=o(1), n \ge 2$; $n^{g(n)}=e^{(\log n)^{1/2}}$ and we have $(\log n)^{1/2} \ge \log \log n$ so $n^{g(n)} \ge e^{\log \log n}=\log n$
(if needed of course we can use a small constant and take $f(n)=C\log n$ to take care of small cases but I think the inequality $(\log n)^{1/2} \ge \log \log n$ holds for all $n \ge 2$ not only for large $n$ where it is trivial) |
Find minimal polynomial over $\mathbb{Q}[x]$. | As achille hui points out, by Gauss's "content lemma" irreducibility of the monic integer polynomial $ x^{105} - 9 $ over the rationals $\mathbb{Q}$ is equivalent to its irreducibility over the integers $\mathbb{Z}$.
A simple condition that often suffices is Eisenstein's criterion:
Suppose we have an integer polynomial $$f(x) \equiv a_n x^n + a_{n-1} x^{n-1} + \ldots + a_0 $$ and a prime integer $p$ such that:
$p$ does not divide $a_n$,
$p$ does divide each $a_i$ for $i \lt n$, and
$p^2$ does not divide $a_0$.
Then the polynomial $f(x)$ is irreducible over the integers.
Unfortunately here the only prime $p$ that divides all the coefficients other than the leading coefficient of $x^{105} - 9$ is $3$, and the last part of Eisenstein's criterion fails since $3^2 = 9$ does divide the constant term.
If only we were instead dealing with $x^{105} - 3$, all the parts would work! Thus $x^{105} - 3$ is irreducible. We can use this observation to prove, with a little extra work, that $x^{105} - 9$ is also irreducible.
Let $r = 3^{1/105}$ be the positive real root of $x^{105} - 3 = 0$, and note that $s = 9^{1/105}$ (which we said is the root of $x^{105} - 9 = 0$) satisfies $s = r^2 = 3^{2/105}$.
Now the degree of the irreducible polynomial $x^{105} - 3$ is the same as the dimension of the field extension $\mathbb{Q}(r)$ as a vector space over $\mathbb{Q}$. Readers not familiar with this observation should reflect on the spanning set $\{1,r,r^2,\ldots,r^{104}\}$, and consider that irreducibility of the polynomial $x^{105} - 3$ implies the linear independence of these powers of $r$ less than $105$.
Since $s = r^2$, it is obvious that the field extension $\mathbb{Q}(s)$ is contained in $\mathbb{Q}(r)$. However it is also true that:
$$ r = \frac{1}{3} 3^{106/105} = \frac{1}{3} s^{53} $$
This proves $r\in \mathbb{Q}(s)$, showing $\mathbb{Q}(r) = \mathbb{Q}(s)$.
Because these are equal as field extensions, their dimensions as vector spaces over $\mathbb{Q}$ are equal. We deduce that the minimal polynomial of $s = 9^{1/105}$ over the rationals will have degree $105$. Now the monic integer polynomial $x^{105} - 9$ is irreducible, and thus it is the minimal polynomial for $s$ over the rationals. |
Bob has no less than 50 $2 coins in a bag. After organizing it he finds there are no remainders whether it's 6 or 8 coins per row. | If it can be arranged in a row of 6 and 8 without a remained, then 6 and 8 is a factor of the number of coins. The lowest common multiple of 6 and 8 is 24, however 24 is lower than the minimum of 50 coins required to be in a bag. Through trial and error you can find that the smallest number with both 6 and 8 as a factor over 50 is 72. If you have 72 2 dollar coins, then the total value is $144, your answer. |
Prove that if $m^p+n^p\equiv0\pmod p$ then $m^p+n^p\equiv0\pmod {p^2}$ where $p$ is an odd prime number. | You are aware that $ m \equiv - n \pmod {p}$. Let $m = -n + pk$, then
$$m^p + n^p = (pk - n)^p + n^p = \\
(pk)^p + {p\choose 1 } (pk)^{p-1}(- n)^1 + \ldots + { p \choose p-1 } (pk)^{1} (-n)^{p-1} + {p \choose p } (pk)^ 0 (-n)^p + n^p. $$
When $p$ is odd, notice that the last 2 terms cancel out.
Can you show that the rest of the terms are multiples of $p^2$?
Use your observation that ${ p \choose k } $ is also a multiple of $p$. |
Does the distributional derivative of the Dirac Comb have the same properties as a single Dirac Delta? | In fact, if we stop reasoning in terms of integrals and start reasoning in terms of distributions and distributional derivatives, then this result is quite easy to show. Let $\phi$ denote an arbitrary test function (i.e., compactly supported and $C^\infty$).
We start with a delta-distribution $\delta_p$ given by
$$\langle \delta_p,\phi\rangle = \phi(p)$$
and its derivative $\delta_p'$, for which the definition of distributional derivative (yes, it has common roots with integration by parts) gives
$$\langle \delta_p' ,\phi\rangle=-\langle \delta_p,\phi'\rangle = -\phi'(p).$$
Now the Dirac comb $B = \sum_{k\in\Bbb Z} \delta_{kT}$ is a well-defined distribution, because $$\langle B,\phi\rangle = \sum_{k\in\Bbb Z} \phi(kT),$$ and the latter series converges because $\phi$ is compactly supported. I will omit the continuity part of the definition, it is an easy exercise.
Since $B$ is a distribution, it has a distributional derivative - it is one of the major reasons to use distributions. Moreover, we have an explicit definition:
$$\langle B',\phi\rangle=-\langle B,\phi'\rangle = -\sum_{k\in\Bbb Z} \phi'(kT),$$
and the latter series converges for the exact same reasons as we mentioned earlier.
Finally, by testing against all possible test functions $\phi$ you can conclude that $B'$ has the form $ \sum_{k\in\Bbb Z} \delta'_{kT}$.
===
Another approach would be to show that the distributions $B_n= \sum_{k\in\Bbb Z,\,|k|\le n} \delta_{kT}$ converge to $B$ in the distributional sense (easy to show by definition of this convergence). It implies - another great property of distributional derivatives - that $B'_n$ converges to $B'$, and the explicit formula for $B'_n$ is quite obvious. |
Compact subset of $\mathbb{R}^1$ with countable limit points | We can generalize:
Take an increasing sequence of positive reals $a_1,a_2,\dots $ converging to $l$.
For each $i>1$ we let $(b^i)$ be an increasing sequence of reals converging to $a_i$ such that $b^i_j>a_{i-1}$.
Your same arguments prove that $(a) \cup \{l\} \cup \left( \bigcup\limits_{i=2}^\infty (b^i)\right)$ is closed and compact and has limit points $a_2,a_3,\dots$ and $l$. |
Axes of rotation in 4D | To clarify, a rotation $P$ in 4-D is defined to be an orthogonal matrix with positive determinant, that is, $P^TP=I$, $\det P=+1$.
For every such rotation, one can find two perpendicular rotation planes meeting only at the origin, so that in a suitable orthonormal basis $P$ has the matrix $$\pmatrix{\cos\theta&-\sin\theta&0&0\\\sin\theta&\cos\theta&0&0\\0&0&\cos\phi&-\sin\phi\\0&0&\sin\phi&\cos\phi}$$
If by axis you mean non-zero fixed vectors $Px=x$, then there are no axes in general for 4-D rotations, just as there are no axes in 2-D.
So none of the suggestions 1-3 hold: there are two orthogonal planes, each rotating independently, and causing the volume of vectors in between them to rotate with them. The projections of a body onto the planes rotate with the planes.
Add: A rotating 3-D sphere does have an axis, and there is only one way to extend the rotation to 4-D, namely by fixing the fourth dimension; this is the case $\phi=0$ in the above matrix.
If you are rotating with the sphere and look up at the sky, and you have 4-D vision, then the sky would not look like a dome but a volume (a 3-sphere). There would be a whole plane (appearing like a great circle) of fixed 'polar' stars, and all other stars rotate about this plane along with the 'equator'.
If you find it hard to imagine stars rotating about a plane, think of the flatman A. Square: all his stars rotate about his circular Earth and it might be inconceivable for him to imagine a fixed polar star.
Add2: Represent the 3-sphere using three angles $(\theta,\phi,\psi)$. Here is a plot of three random stars as they move in this space:
In general they are seen as moving in a direction (the one you would have moved if you weren't stationary) but rotating about that direction. The two rotations could of course have different rates. |
Proving that $\mathbb{N}$ isn't bounded from above using Bolzano-Weierstrass | If you want to prove that $\mathbb{N}$ isn't bounded using BW, here is a way that uses it more directly.
Define $a_n = n$.
Let's assume $\mathbb{N}$ is bounded. Then $a_n$ is bounded, so according to BW there exists a convergent subsequence $a_{n_k}$ such that $a_{n_k} \rightarrow L$, where $L$ is a real number. So $a_{n_k}$ is a Cauchy sequence, which gives us a contradiction, since $|a_{n_{k+1}}-a_{n_{k}}|>\frac{1}{2}$ for all $k$. So $\mathbb{N}$ isn't bounded. |
formula for summation notation involving variable powers | This probably doesn't have a nice closed form, since the simpler sum of $k^k$ (called the "hypertriangular function of $n$") doesn't.
There is no OEIS entry for the sums as a sequence with increasing $n$. The OEIS entry for the related sum of $k^k$ lists a result for the ratio of consecutive terms.
If $$a_n = \sum\limits_{k=1}^{n} k^k$$
Then
$$\lim_{n\to\infty}\left(\frac{1}{n}\cdot \frac{a_{n+1}}{a_n}\right)=e$$
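A quick numerical check of this limit (a throwaway sketch using Python's exact integer arithmetic):

```python
def a(n):
    return sum(k**k for k in range(1, n + 1))

for n in (10, 50, 250):
    print(n, a(n + 1) / a(n) / n)   # approaches e = 2.71828...
```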
At best, you can hope for a similar asymptotic result for your sum. |
Discrete mathematics poset | One direction is easy. If $(X,\leq)$ is a linear order then it only has one linear extension (show that if $(X,\leq')$ is a linear extension then it adds nothing new).
In the other direction, if $(X,\leq)$ is not a linear order then there are $a,b\in X$ such that $a\nleq b$ and $b\nleq a$. Show that you can extend $\leq$ by deciding $a\leq'b$ or you can extend it in the other direction. This must give you at least two different extensions for $(X,\leq)$. |
How to compute $P(Z_2 \le Z_3 \le Z_4| Z_2=Z_1 )$? where we have an i.i.d. sequence of standard normal $Z_1,Z_2,Z_3,Z_4$. | Partial answer that is too long for a comment:
Consider the joint distribution of $(Z_1-Z_2, Z_2)$. This is a bivariate normal distribution with means $0$ and $0$, variances $2$ and $1$, and correlation $-\frac{1}{\sqrt{2}}$. Thus it has the same distribution as $(\sqrt{2}\, U, -\frac{1}{\sqrt{2}} U +\frac{1}{\sqrt{2}} V)$ where $U$ and $V$ are i.i.d. standard normal. (You can verify this by checking that the means, variances, and correlation are the same.)
So, the conditional distribution of $Z_2$ given $Z_1=Z_2$ is the same as the conditional distribution of $-\frac{1}{\sqrt{2}} U + \frac{1}{\sqrt{2}} V$ given $\sqrt{2}\, U=0$. By plugging in $U=0$, we immediately see that the conditional distribution is $N(0, 1/2)$, as Brian Tung mentioned in the comments.
Thus you can rewrite your original probability as
$$P(\frac{1}{\sqrt{2}}V \le Z_3 \le Z_4)$$
where $V, Z_3, Z_4$ are i.i.d. standard normal. Some sort of volume/symmetry argument might allow you to compute this, but I'm not sure.
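Failing a closed form, a Monte Carlo estimate is easy to get (an illustrative sketch, not part of the argument above):

```python
import numpy as np

rng = np.random.default_rng(0)
V, Z3, Z4 = rng.standard_normal((3, 10**7))
print(np.mean((V / np.sqrt(2) <= Z3) & (Z3 <= Z4)))   # estimate of the probability
```
This at least gives a numerical target for any exact computation. |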
Limit point of the set $G= \{1/n: n \in \mathbb N\}$? | I do understand 0 is a limit point but shouldn't it be the case that all the points in [0,1] are limit points by the definition?
Definitely not.
Consider a point $a\in G$; then there exists $n\in\mathbb N$ such that $a=\frac1n$. Choose $\delta<\frac1n-\frac1{n+1}$ and you get $G\cap (a-\delta,a+\delta)=\{a\}$, so by definition $a$ is not a limit point of $G$.
If you consider $a\in (0,1]\setminus G$ then there exists $n\in\mathbb N$ such that $\frac1{n+1}<a<\frac1n$. Choose $\delta<\min\{\frac1n-a,a-\frac1{n+1}\}$ and you get $G\cap (a-\delta,a+\delta)=\emptyset$. By definition $a$ is not a limit point of $G$. |
Find upper bound for stepsize h in Runge Kutta method | I think you should not use the triangle inequality. In fact, for $\lambda=3+3i$,
$$ |1+\lambda h|=|1+(3+3i)h|=|(1+3h)+3hi|=\sqrt{(1+3h)^2+(3h)^2}>1$$
for $h>0$. It means that the method is unstable for $\lambda=3+3i$. For $\lambda=-3+3i$,
$$ |1+\lambda h|=|1+(-3+3i)h|=|(1-3h)+3hi|=\sqrt{(1-3h)^2+(3h)^2}<1$$
gives
$$ 18h^2-6h<0 $$
whose solution is $0<h<\frac13$. Thus if $0<h<\frac13$, the method is stable for $\lambda=-3+3i$.
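A two-line numerical check of the bound (the step sizes are arbitrary choices around $\frac13$):

```python
lam = complex(-3, 3)
for h in (0.1, 0.3, 0.34):
    print(h, abs(1 + lam * h) < 1)   # True, True, False: stable only for h < 1/3
```
The amplification factor crosses $1$ exactly at $h=\frac13$. |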
Eigenvectors of this matrix | They are the same though. Since $\lambda_1 = e^{i\frac \pi 3} = \frac 12 + i\frac {\sqrt{3}}{2}$, we have
$$\frac 1{1 - \lambda_1} = \frac 1{\frac 12 - i\frac {\sqrt{3}}{2}} = \frac 1{\frac 12 - i\frac {\sqrt{3}}{2}} \frac{\frac 12 + i\frac {\sqrt{3}}{2}}{\frac 12 + i\frac {\sqrt{3}}{2}} = \lambda_1.$$
As a side note, be careful with the method of taking $x_1 = 1$, it may fail when the first coordinates of the eigenvectors are 0. It is safer to solve the linear system resulting from the equation for the eigenvectors:
$$\begin{cases}
x_2 &= \lambda x_1,\\
-x_1 + x_2 &= \lambda x_2.
\end{cases}$$ |
Laplace's Method for Integral asymptotes when g(c) = 0 | Note that you have an endpoint maximum, and hence the integral should only be one-sided; this is the sort of case where Watson's lemma applies. |
A trivial question about continuity | You're right that continuity is a property of each point of the domain. Usually when we say a function $f$ is continuous, we just want to say it is continuous on all its points.
In the $\epsilon-\delta$ definition of continuity you mentioned, the domain and range of a function are taken into account. A function $f : \{1,2,3\} \rightarrow \mathbb{R}$ will always be continuous under that definition. For instance, to show its continuity in $f(1)$, just take $\delta = \frac{1}{2}$ for any $\epsilon > 0$ and all points within distance $\frac{1}{2}$ of 1 in the domain (which is just 1) will fall within distance $\epsilon$ in the range.
The example above shows that yes, you can remove enough parts of the domain to make a function continuous. This does not necessarily imply the original function is continuous on the bigger domain. |
Study the convergence $\sum_{n=1}^\infty n^2/\exp(n)$ | Alternative way: $e^n\geq \frac{n^4}{4!}$, hence $e^{-n}\leq \frac{24}{n^4}$ and $0\leq \frac{n^2}{e^n}\leq \frac{24}{n^2}$, and we get the convergence since $\sum_{n\geq 1}\frac 1{n^2}$ is convergent. |
Find circle given diameter and string function | Check that distance of $(a,2a)$ from $y=x+3$ should be $\sqrt 2$
Now your turn. |
How well-behaved are $C^{\infty}$ functions? | The counterexample given here was suggested by @Nate Eldredge in the comments; I'm just elaborating on its properties :)
The function $f: \Bbb{R} \to \Bbb{R}$ defined by
\begin{align}
f(x) =
\begin{cases}
e^{-1/x^2} \sin\left( \frac{1}{x}\right) & \text{if $x \neq 0$} \\
0 & \text{if $x=0$}
\end{cases}
\end{align}
is easily seen to be $C^{\infty}$ away from the origin, and at the origin, one can show that all the derivatives vanish. The most straightforward proof I know is by direct verification (a fully rigorous proof follows by induction on the form of the derivative).
The rapid oscillatory behaviour of $f$ near the origin shows that there is no $\varepsilon > 0$ for which those conditions you stated hold. (I suggest you use wolfram alpha to plot this function to see just how quickly things approach $0$ at the origin, and how fast the function is oscillating).
Here's a rough idea of the proof of $C^{\infty}$ at the origin. Let's first show that $f'(0)$ exists and equals $0$. For $x\neq 0$, we have that
\begin{align}
\left | \dfrac{f(0 + x) - f(0)}{x} \right| &= \left| \dfrac{e^{-1/x^2} \sin (1/x)}{x} \right| \\
& \leq \left| \dfrac{e^{-1/x^2}}{x} \right| \cdot 1
\end{align}
And, you should know from somewhere that "exponentials dominate polynomials", in the sense that the numerator goes to $0$ much faster than the denominator goes to $\pm\infty$, so as $x \to 0$, the RHS tends to $0$ as well.
In general, you can show that for $x \neq 0$, the $k^{th}$ derivative looks like an exponential term multiplied by trigonometric term multiplied by a polynomial in $\dfrac{1}{x}$. I.e There exist polynomials $P,Q$ such that
\begin{align}
f^{(k)}(x) = e^{-1/x^2} \left( P\left(\dfrac{1}{x} \right) \cdot \sin\left(\dfrac{1}{x} \right) + Q\left(\dfrac{1}{x} \right) \cdot \cos \left(\dfrac{1}{x} \right)\right)
\end{align}
And as $x \to 0$, the limit will be $0$, because "exponentials dominate polynomials" (the trigonometric terms are bounded by $1$; so they don't really matter). |
Closeness of infinite union of closed sets | The set (call it A) is not closed, because $\{0\} \in \text{cl}(A) \wedge \{0\} \not\in A$. In general, the infinite union of closed sets is not necessarily closed.
Remember, every set is a union of closed sets (the one-point sets that contain each of its points; $A = \cup_{p \in A} \{p\}$), so if infinite unions of closed sets were necessarily closed, then all sets would be closed. |
Determine $p,r \in\Bbb R$ s. t. $\sum_{n=2}^{\infty} \frac{(\log{n})^{r}}{n^{p}}$ is convergent | Hint for b: You've already solved the $p=1$ case, good. For other cases you want to start thinking this way: Any positive power of $\ln n$ is whimpy small compared to any positive power of $n$ when $n$ gets large. More precisely, if $a,b > 0$ then $(\ln n)^a/n^b \to 0$ as $n\to \infty,$ no matter how big $a$ is and how small $b$ is. You should learn this at a gut level if you haven't already.
Hint for c: $(\ln n)^x\le (\ln n)^r.$ |
Let $\alpha: [a,b] \rightarrow \mathbb{R}$ be monotonically increasing and let $f \in \mathscr{R}(\alpha)$. Prove that $|f| \in \mathscr{R}(\alpha)$. | For any subinterval $I$ of a partition, we have, using $||a|-|b|| \leqslant |a-b|$,
$$\sup_{x \in I}|f(x)| - \inf_{x \in I}|f(x)| = \sup_{x,y \in I}||f(x)| - |f(y)|| \leqslant \sup_{x,y \in I}|f(x) - f(y)| = \sup_{x \in I}f(x) - \inf_{x \in I}f(x) $$
Forming Riemann-Stieltjes sums over partition intervals we get
$$U(|f|,\alpha,P) - L(|f|,\alpha,P) \leqslant U(f,\alpha,P) - L(f,\alpha,P).$$
Now you can invoke the Riemann criterion for $f$ and apply it to $|f|.$ |
Proof that the canonical mapping of $V$ into $V^{**}$ is not bijective | This is just because $F$ itself is $0$. Indeed, if $x\in F$, then in particular $\langle x,a_\lambda^*\rangle=0$ for each $\lambda$, which means every coordinate of $x$ is $0$ so $x$ is $0$. |
What is the polar set of $A= \{(1,\,1) \in \mathbb{R^2}\}$? | In my opinion
$$
A^* = \{(x,y) \in \mathbb{R}^2 : x+y \leq 1 \}
$$
is a good way to describe the solution.
$A^* = \{x \in \mathbb{R} : y \leq 1 -x\}$
This is wrong, since $A^*$ is a subset of $\mathbb R^2$ and not $\mathbb R$.
or we can write that $A^*$ is the half-space $\langle x,\bar{n}\rangle \leq 1$ where $x \in \mathbb{R^2}$ and $\bar{n}$ can be defined as $\bar{n} = (1/\sqrt{2}, 1/\sqrt{2})$.
This is also wrong, because you changed the left-hand side by multiplying with
$1/\sqrt{2}$, but not the right-hand side.
It would be better to use $\bar{n}=(1,1)$. |
Number of a certain type of permutations | If $N(n)$ is the number of permutations of $n$ objects satisfying $\pi(i+1)\leq\pi(i)+1$, we can show that $N(n)=2^{n-1}$. If $\pi(1)=n$, you can append any of the $N(n-1)=2^{n-2}$ valid permutations of the remaining numbers and have a legal one. If $\pi(1)=1$, the only legal permutation is the identity. If $\pi(1)=k \in (1, n)$, you have to append all the numbers from $k+1$ through $n$ in increasing order, then you can have any valid permutation of the numbers $1$ through $k-1$, of which there are $2^{k-2}$. So $N(n)=1 + \sum_{i=2}^n2^{i-2}=2^{n-1}$
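A brute-force check of $N(n)=2^{n-1}$ for small $n$ (a quick sketch):

```python
from itertools import permutations

def N(n):
    return sum(all(p[i + 1] <= p[i] + 1 for i in range(n - 1))
               for p in permutations(range(1, n + 1)))

assert [N(n) for n in range(1, 8)] == [2**(n - 1) for n in range(1, 8)]
```
The counts match for $n\le 7$. |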
Probability of being first, second, or third in a contest | The probability of being second/third is not $\frac 1{99}$ or $\frac 1{98}$. Otherwise, would the probability of coming $99$th be $\frac{1}{100-99} = 1$? Clearly not, right?
Choosing a position for Michelle does not depend upon what position it is. The probability that she comes first is equal to the probability that she comes second, which is equal to the probability that she comes third, because in each case we have to choose one place out of a hundred: whether that is the first place, or second place, or third place, or seventy-seventh place, it is still one place out of a hundred. Therefore, the probability of each of these events is $\frac 1{100}$, and the fact that coming first, second and third are mutually exclusive events gives the answer $\frac 3{100}$. |
Extensions of a field by a root of a monic irreducible polynomial | Let $a$ be a root of $f$. Since $f$ is irreducible, it is the minimal polynomial of $a$ (the polynomial of minimum degree with coefficients in $K$ having $a$ as a root). So no nonzero linear combination of $1, a, a^2, \dots a^{n-1}$ can equal $0$, so $K(a)$ has $K$-basis $1, a, a^2, \dots a^{n-1}$. Thus $[K(a) : K] = n$.
Similarly $K[X]/(f(X))$ is a vector space with basis $1, X, \dots X^{n-1}$ over $K$. The natural map $K[X]/(f(X)) \to K(a)$ sending $X^i \to a^i$ is therefore an isomorphism. |
Analytic solution of the heat equation with a source term | Hint:
the $\sin x$ source term is time independent: what happens if you add $t \sin x$ to your solution?
Does it satisfy the PDE? And the boundary conditions? |
Redefining almost monotonic functions on a set of measure $0$ | Note that the choice for $\tilde f(x)$ is not uniquely determined for $x\in M$
(for instance, consider a function with a jump).
Here is one way to do it (w.l.o.g. let $f$ be monotonic non-decreasing):
For $x\in [a,b]$, we define
$$\tilde f(x) := \sup \{ f(y) | y \in [a,b]\setminus M, y \leq x\}.$$
Then it can be shown that $\tilde f(x)=f(x)$ for $x\not\in M$ and that
$\tilde f$ is non-decreasing (those things are not too hard to show, you can probably do that by yourself). |
Proof: Uniqueness of the Dot Product Definition of the Matrix Transpose | If you let $A^\top$ denote the result of "exchanging the rows and columns" of $A$, then you can directly check that $(Ax)^\top = x^\top A^\top$ for any $x$. Thus $(Ax)^\top y = x^\top (A^\top y)$ holds for any $x,y$.
Conversely, suppose $(Ax)^\top y = x^\top (Ty)$ for all $x,y$. By considering $x=e_i$ and $y=e_j$ to be standard basis vectors, you immediately have $A_{ji} = T_{ij}$ for each $i,j$. |
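A numerical illustration of the identity $(Ax)^\top y = x^\top(A^\top y)$ (a sketch using NumPy with random data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
y = rng.standard_normal(4)

# (Ax)^T y should equal x^T (A^T y) for every x and y.
print(np.allclose((A @ x) @ y, x @ (A.T @ y)))  # True
```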
Characterization of loops on a cone that are ellipses? | If you mean by "overall curvature" the total curvature of a loop $\gamma$, i.e., the integral $\oint_\gamma \kappa_g\>ds$, then this total curvature is the same for all loops around the apex, by the Gauss–Bonnet theorem, and can be expressed by the total Gaussian curvature of the cone, which is concentrated at the apex. It seems therefore that all loops are "equators" according to your definition.
Group of order $255$ is cyclic | Yes, I think it is correct. Since the order of every element must divide the order of the group and $\gcd(15,17)=1$, you can prove that the whole group is generated by a single element (in the same way you prove that a group of order $15$ is cyclic).
Diagonal squares on a chessboard | The bad cases are not so difficult to find for a $n\times n$ board:
$$2\binom{n}{4}+4\sum_{k=4}^{n-1}\binom{k}{4}=2\binom{n}{4}+4\binom{n}{5}$$
where we used the Hockey-stick identity. |
How many pair-wise touching "shapes" are there in an $n\times n\times n$ grid? | Let $f(n)$ be the maximal number of pair-wise touching "shapes" in an $n\times n\times n$ grid.
We have $f(n)\ge n$, as can be seen in an image in the German Wikipedia article on the "Four color theorem".
For $n\ge 4$ we can stack two such constructs with completely different colors to find $f(n)\ge 2n$. This might be best possible, but I do not know. The remaining cases are as follows:
$f(1)=1$, obviously.
$f(2)=4$, and this is best possible.
Proof.
If more than four shapes are used, then one of the shapes must consist of a single cube. This cube must be placed in one of the corners of the grid. But then there are only three remaining faces with which it can touch other shapes. So no fifth shape can touch this cube.
$f(3)\ge 5$, and I am not sure whether $6$ is possible. But I do not think so.
By the way, I think your question can be asked in purely graph theoretical terms:
The $n\times n\times n$ grid graph $G$ is given by $V(G)=\{1,...,n\}^3$ and
$$E(G)=\left\{\{v,w\}\in {V\choose 2}\;\middle|\; |v-w|_1=1\right\}.$$
What is the biggest $k$ so that the complete graph $K_k$ is a minor of $G$? |
Range of a trigonometric function | $$a^2 \sin^2 x + b \sin x \cos x + c\cos^2 x$$
$$=\frac{2a^2 \sin^2 x + 2b \sin x \cos x + 2c\cos^2 x}2$$
$$=\frac{a^2(1-\cos2x)+b\sin2x+c(1+\cos2x)}2$$
$$=\frac{a^2+c}2+\frac12 \{\cos2x(c-a^2)+b\sin2x\}$$
Now, write $A\cos y+B\sin y=C\sin(y+\theta)$ (say), where $C\ge 0$.
Expanding and comparing the coefficients, we get $A=C\sin\theta$, $B=C\cos\theta$;
squaring and adding, we get $C^2=A^2+B^2$, so $C=\sqrt{A^2+B^2}$.
As $-1\le\sin(y+\theta)\le 1, -\sqrt{A^2+B^2}\le C\sin(y+\theta)\le \sqrt{A^2+B^2}$
Here $y=2x$, $A=c-a^2$, $B=b$, so the range of the original expression is $$\left[\frac{a^2+c}2-\frac{\sqrt{(c-a^2)^2+b^2}}2,\ \frac{a^2+c}2+\frac{\sqrt{(c-a^2)^2+b^2}}2\right].$$
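A numerical sanity check of this range (a sketch with the arbitrary values $a=2$, $b=3$, $c=1$):

```python
import numpy as np

a, b, c = 2.0, 3.0, 1.0
x = np.linspace(0.0, 2 * np.pi, 200_001)
vals = a**2 * np.sin(x)**2 + b * np.sin(x) * np.cos(x) + c * np.cos(x)**2

center = (a**2 + c) / 2
radius = 0.5 * np.hypot(c - a**2, b)
print(vals.min(), center - radius)  # both ≈ 0.37868
print(vals.max(), center + radius)  # both ≈ 4.62132
```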
Limits of functions of two variables | Well, to me it is just a matter of performing a change of variables, setting
$$n = \alpha\cdot r$$
so that one has
$$\lim_{r\to +\infty} r e^{-\alpha\beta r} = 0$$
Considering that
$\alpha > 0$
$\beta > 0$
Both $n$ and $r$ are integers
Actually it seems this is not a correct answer... |
Ulm and Frattini Subgroups | Proposition
$\cap\{(pqA)\,|\,p, q\in\mathbb{P}\}=\cap\{q(\cap\{pA\, |\,p\in \mathbb{P}\})\, |\, q\in \mathbb{P}\}$
Proof
Assume for a contradiction that
$\cap\{q(\cap\{pA\, |\,p\in \mathbb{P}\})\, |\, q\in \mathbb{P}\}<\cap\{(pqA)\,|\,p, q\in\mathbb{P}\}$
and let $b\in\cap\{(pqA)\,|\,p, q\in\mathbb{P}\}$ such that $b\notin \cap\{q(\cap\{pA\, |\,p\in \mathbb{P}\})\, |\, q\in \mathbb{P}\}$.
There are $r,q\in \mathbb{P}$ such that $(b=rqc\, \, (c\in A) \implies qc\notin\Phi(G))$
or in other words $(b=rqc \implies \exists s \in \mathbb{P} : qc\notin sA)$.
Obviously $q\neq s$, and by hypothesis we can write $b=rsx=rqc\,\, (x,c\in A)$.
Then $r(qc-sx)=0$; therefore, since $qc\notin sA$, we have $(qc-sx)\in A[r]\setminus\{ 0\}$.
If $r\neq s$, $A[r]=sA[r]$ and there is a $z\in A[r]$ such that $qc-sx=sz$. So $qc\in sA$, a contradiction.
So $r=s$, and the following implication holds:
$(*)$ $b=qrc\,\, (c\in A) \implies qc\notin rA$
Now, since $r\neq q$, $A[r]=qA[r]$, so there exists an $a\in A[r]$ such that $qc-rx=qa$ (keep in mind that $(qc-sx)\in A[r]\setminus\{0\}$).
So we know that $q(c-a)=rx\in rA$ and $b=r^2 x=rq(c-a)$ and this contradicts $(*)$.
$\square$
Theorem
For all $n\in \mathbb{N}$ we have $\Phi^n(G)=U^n(G)$, where
$U^n:= \cap\{(p_1p_2...p_n)A\, |\, (p_1,...,p_n)\in \mathbb{P}^n\}$ $(n\geq1)$
Proof
We argue by induction on $n$. The above proposition proves the case when $n=2$ (it works fine also if we assume $n=1$ as starting point).
Let $n\geq 3$
So we have $\Phi^{n-1}(G)=U^{n-1}(G)$. Then $\Phi^n(G)=\Phi(U^{n-1}(G))$.
Assume now that $\Phi^n(G)$ is strictly contained in $U^n(G)$. Hence there exists $b\in U^n(G)$ with $b\notin \Phi^n(G)$.
In other words, there exists a prime $q$ such that the following implication holds:
$b=qc\, \, (c\in G) \implies c\notin U^{n-1}(G)$ ($\exists q_1, q_2, ...,q_{n-1}: c\notin (q_1...q_{n-1})G$).
However $b$ is in $U^n(G)$ and so we can write $b=q(q_1...q_{n-1}x)$, a contradiction. $\square$
Corollary
$\Phi^\omega(G) = U(G)$ |
A question about the convergence of partial products of zeta of one. | Some of your formulas are not quite correct. It is a result of Mertens that $$\prod_{p\leq x} \left(1-\frac{1}{p}\right)^{-1} \sim e^\gamma \log x,$$ (see Mertens' Third Theorem on this page) and we are taking $x=p_n \sim n\log n$ so it follows that
$$\lim_{n\rightarrow \infty} \frac{n}{\phi(n)} \frac{\pi(n)}{n}=e^\gamma.$$ Consequently
$$\lim_{n\rightarrow \infty} \left[ \sum_{i=1}^n \log\left(\frac{p_i}{p_i-1}\right)-\log \log n\right]=\gamma.$$
Now, using the fact that $$\sum_{p\leq x}\log\left(\frac{p}{p-1}\right)\sim\log\log x+\gamma$$ and $p_n\sim n\log n$ we find that $$\sum_{n<p\leq p_{n}}\log\left(\frac{p}{p-1}\right)\sim 0.$$ |
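A slow-but-direct numerical check of Mertens' third theorem (a sketch; `primerange` is SymPy's prime generator, and the convergence in $x$ is only logarithmic):

```python
from math import exp, log
from sympy import primerange

x = 10**6
prod = 1.0
for p in primerange(2, x + 1):
    prod *= p / (p - 1)          # partial product of (1 - 1/p)^(-1)

print(prod / log(x))             # slowly approaches e^gamma
print(exp(0.5772156649015329))   # e^gamma ≈ 1.781072
```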
Show that $ x\in E: f(x)>g(x)$ is open. | Hint
Let $h=f-g$, which is continuous. So what can we say about
$$h^{-1}(\,]0,+\infty[\,)?$$
Limit without applying l'Hôpital's rule, $\lim_{x \rightarrow-\infty} \frac{|2x+5|}{2x+5}$. | You should first recall the definition of the modulus function.
$$|x| = \left\{\begin{matrix}
x & x > 0\\
0 & x = 0 \\
-x & x<0
\end{matrix}\right.$$
For this question,
$$|2x+5| = \left\{\begin{matrix}
2x+5 & 2x+5 > 0\\
0 & 2x+5 = 0 \\
-(2x+5) & 2x+5<0
\end{matrix}\right.$$
Here, since $x \to -\infty$, we have $2x + 5 < 0$, so $|2x+5| = -(2x+5)$ and the limit is $-1$.
To answer your specific question about the limit when $x$ tends to $-9/2$ or $-1/2$: here is the difference.
$x \to -1/2 \implies 2x \to -1 \implies 2x + 5 \to 4$ ($2x+5 > 0$)
So, $\lim_{x \to -1/2} \cfrac{|2x+5|}{2x + 5} = 1 $
While for $x\to -9/2 \implies 2x \to -9 \implies 2x + 5\to -4 $ ($2x+5 <0$)
So, $\lim_{x \to -9/2} \cfrac{|2x+5|}{2x +5} = - 1$ |
Discussion of Poincaré-Bendixson Theorem | For 1) Clearly $\mathbb{R}^2$ is forward invariant, but need not contain a periodic orbit (think of $\dot x=1, \dot y=0$).
For 2) consider the flow $\dot x=-x, \dot y=-y$, and $K$ a ball around the origin.
If you are wondering about the closedness of $K$ in condition 1), you can take a flow as in 2) and consider a ball minus the origin. This is forward invariant, not closed, and does not contain a periodic orbit.
Calculating: $\lim_{n\to \infty}\int_0^\sqrt{n} {(1-\frac{x^2}{n})^n}dx$ | We can apply the dominated convergence theorem, since $e^{-x}\geq 1-x$ for each $x\geq 0$ and $e^{-x^2}$ is integrable on $(0,+\infty)$. Define $g(x)=e^{-x}-(1-x)$. Then $g'(x)=-e^{-x}+1\geq 0$ for $x\geq 0$, hence $g(x)\geq g(0)=0$. Now we have, for $0\leq x\leq \sqrt n$,
$$0\leq \left(1-\frac{x^2}n\right)^n\leq \left(e^{-\frac{x^2}n}\right)^n=e^{-x^2}.$$ |
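By dominated convergence, the limit is then $\int_0^\infty e^{-x^2}\,dx=\frac{\sqrt\pi}{2}$. A quick numerical check (a sketch using SciPy's `quad`):

```python
from math import pi, sqrt
from scipy.integrate import quad

for n in (10, 100, 10_000):
    # Integrate (1 - x^2/n)^n over [0, sqrt(n)].
    val, _ = quad(lambda x, n=n: (1 - x**2 / n)**n, 0, sqrt(n))
    print(n, val)

print("limit:", sqrt(pi) / 2)  # ≈ 0.8862269
```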
The proof of the Lagrange's Rational Function Theorem | Consider the field $K = \mathbb{Q}(x_1,x_2,\ldots, x_n)$, and consider $K_g \subset K$ to be the subfield generated by $g$. Define
$$
H_g = Aut(K/K_g) = \{\sigma \in Aut(K) : \sigma(\alpha) = \alpha \quad\forall \alpha\in K_g\}
$$
Similarly, define $K_f$ and $H_f$. Then you want to show that
$$
K_f\subset K_g \Leftrightarrow H_g \subset H_f
$$
If all the hypotheses are satisfied, which I think they are, this is merely the Fundamental Theorem of Galois Theory.
Determine the equations needed to solve a problem | Let $x_{ij} = 1$ if you put product $i$ in category $j$, $0$ otherwise. You need
$\sum_i x_{ij} \ge m_j$ for each $j$, where $m_j$ is the minimum for category $j$,
and $\sum_j x_{ij} = 1$ for each $i$, and each $x_{ij} \in \{0,1\}$. The last requirement takes it out of the realm of linear algebra. However, look up "Transportation problem". |
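A minimal sketch of the model as an integer program, with hypothetical data `products`, `categories`, and minimum sizes `m` (assuming the PuLP library and its bundled solver are available; any MIP solver would do):

```python
from pulp import LpProblem, LpVariable, LpBinary, LpMinimize, lpSum

products = range(6)      # hypothetical: 6 products
categories = range(2)    # hypothetical: 2 categories
m = {0: 2, 1: 3}         # hypothetical minimum sizes per category

prob = LpProblem("categorize", LpMinimize)
x = LpVariable.dicts("x", (products, categories), cat=LpBinary)

prob += 0  # dummy objective: this is a pure feasibility problem
for j in categories:     # each category gets at least its minimum
    prob += lpSum(x[i][j] for i in products) >= m[j]
for i in products:       # each product goes in exactly one category
    prob += lpSum(x[i][j] for j in categories) == 1

prob.solve()
print([(i, j) for i in products for j in categories if x[i][j].value() == 1])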
Asymptotic notation basics | Yes. There's nothing preventing functions being in all three at the same time, and it's really easy to verify that your function indeed is. |
Prison problem: locking or unlocking every $n$th door for $ n=1,2,3,...$ | This video provides a really wonderful visual explanation for the exact problem you are describing. The video only goes up to 90, but provides all the information you need to generalise it to 100.
http://www.youtube.com/watch?v=LejoPGtliTs |
Probability $Pr(W<R)$ of two Normal Random Variables $W$ and $R$. | Hint: Instead compute
$$P(W-R < 0)$$ |
Calculating Volume in different coordinate systems | a) Cylindrical coordinates
$$\int_0^{2\pi}d\phi\int_0^{\frac{\sqrt3}{2}R}\rho d\rho\int_{\frac R2}^{\sqrt{R^2-\rho^2}}dz$$
b) Spherical coordinates
$$\int_0^{2\pi}d\phi\int_0^{\frac\pi3}\sin\theta d\theta \int_{\frac R{2\cos\theta}}^{R}r^2dr$$
c) Cartesian coordinates
$$\int_{-\frac{\sqrt3}{2}R}^{\frac{\sqrt3}{2}R}dx\int_{-\sqrt{ \frac{3}{4}R^2 -x^2}}^{\sqrt{ \frac{3}{4}R^2 -x^2}}dy \int_{\frac R2}^{\sqrt{ R^2 -x^2-y^2}}dz$$ |
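A quick symbolic check that (a) and (b) agree (a sketch in SymPy; if the integrals are set up as above, both should simplify to the spherical-cap volume $\frac{5\pi R^3}{24}$):

```python
import sympy as sp

R = sp.symbols('R', positive=True)
rho, phi, z, theta, r = sp.symbols('rho phi z theta r', positive=True)

# (a) cylindrical: innermost limits come first
V_cyl = sp.integrate(rho,
                     (z, R/2, sp.sqrt(R**2 - rho**2)),
                     (rho, 0, sp.sqrt(3)*R/2),
                     (phi, 0, 2*sp.pi))

# (b) spherical
V_sph = sp.integrate(r**2 * sp.sin(theta),
                     (r, R/(2*sp.cos(theta)), R),
                     (theta, 0, sp.pi/3),
                     (phi, 0, 2*sp.pi))

print(sp.simplify(V_cyl), sp.simplify(V_sph))  # both 5*pi*R**3/24
```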
The inertia group for the archimedean places | If $L$ is the local field corresponding to $w$ (either $\mathbb R$ or $\mathbb C$) then $T(w/v)$ is the group of continuous automorphisms $\phi$ of $L$ such that $\phi(K)=K$ and $\phi(a)=a$ for all $a\in k$. But the group of all continuous automorphisms of $L$ has at most 2 elements.
Vectors Question - Direction and Speed | Hint:
Firstly, the question says that the skiff must move directly towards the house. For this to happen, the current would need to be flowing due west: the westbound current would then offset the eastward component of the skiff's velocity so that it travels in a straight line. So confirmation is needed on whether the direction of the current is due east or due west.
Otherwise we have: |
Is $f(a)\leq (1-\delta) f(\frac{1}{\sqrt{C}}) \Rightarrow a\leq \frac{1- \epsilon}{\sqrt{C}} $ for $f(x)=\frac{1}{2} x- \frac{C}{6}x^{3}$? | We have $\frac{a}{2}-\frac{Ca^3}{6}\leq \frac{a}{2}\leq \frac{1-\delta}{3\sqrt{C}}$.
With $1-\epsilon:=\frac{2(1-\delta)}{3}$ you get your inequality for $a$.
How many 5 digit numbers can be formed out of {1,2,3...,9} where a digit can repeat at most twice? | Your cases approach will work. For no repetitions, the number is clearly $9\cdot 8\cdot 7\cdot 6\cdot 5$.
For a single repetition, the repeated digit can be chosen in $9$ ways. For each way, its locations can be chosen in $\binom{5}{2}$ ways, and for every such way the empty spots can be filled in $8\cdot 7\cdot 6$ ways.
Double repetition is a little trickier. The two fortunate digits can be chosen in $\binom{9}{2}$ ways. For each such way, the locations of the larger digit can be chosen in $\binom{5}{2}$ ways, and then the locations of the smaller one can be chosen in $\binom{3}{2}$ ways. The remaining empty spot can be filled in $7$ ways.
Remark: We can alternately count the complement. This avoids the trickiness of the double repetition count, where it is all too easy to overcount by a factor of $2$. There are $9$ sequences with all entries the same. For $4$ the same and $1$ different, we have $9\cdot \binom{5}{4}\cdot 8$ choices. For $3$ the same and $2$ different, we have $9\cdot \binom{5}{3}\cdot 8\cdot 7$. And finally for $3$ the same and $2$ the same we have $9\cdot \binom{5}{3}\cdot 8$. |
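A brute-force check of the three-case count (a sketch; enumerating all $9^5$ tuples is cheap):

```python
from collections import Counter
from itertools import product
from math import comb

# Brute force: strings over digits 1..9 of length 5, each digit at most twice.
brute = sum(1 for s in product(range(1, 10), repeat=5)
            if max(Counter(s).values()) <= 2)

no_rep   = 9 * 8 * 7 * 6 * 5                         # all digits distinct
one_rep  = 9 * comb(5, 2) * 8 * 7 * 6                # exactly one repeated digit
two_reps = comb(9, 2) * comb(5, 2) * comb(3, 2) * 7  # two repeated digits

assert brute == no_rep + one_rep + two_reps == 52920
```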
How to express the series of $[\tan^{-1}(x)][\tanh^{-1}(x)]$ as $x^2+\left(1-\frac{1}{3}+\frac{1}{5}\right)\frac{x^6}{3}...$ | Hint:
$f(x)=[\tan^{-1}(x)][\tanh^{-1}(x)]$
$f'(x)=\dfrac{ \tan^{-1}(x)}{1-x^2}+\dfrac{ \tanh^{-1}(x)}{1+x^2} $
If $|x|<1$:
$\displaystyle f'(x)=\left(\sum_{n=0}^{+\infty} \dfrac{(-1)^n}{2n+1} x^{2n+1} \right) \left(\sum_{n=0}^{+\infty} x^{2n} \right) + \left(\sum_{n=0}^{+\infty} \dfrac{1}{2n+1} x^{2n+1} \right) \left(\sum_{n=0}^{+\infty} (-1)^n x^{2n} \right) $
Edit to show the idea
$f′(x)=\left(x−\dfrac{x^3}{3}+\dfrac{x^5}{5}−\dfrac{x^7}{7}+\dfrac{x^9}{9}...\right)(1+x^2+x^4+x^6+x^8...)+\left(x+\dfrac{x^3}{3}+\dfrac{x^5}{5}+\dfrac{x^7}{7}+\dfrac{x^9}{9}...\right)(1−x^2+x^4−x^6+x^8...) $
By using Cauchy product:
$f′(x)=2x+ 2\left(1−\dfrac{1}{3}+\dfrac{1}{5}\right)x^5+2\left(1 −\dfrac{1}{3}+\dfrac{1}{5}−\dfrac{1}{7}+\dfrac{1}{9}\right)x^9+...$
Then we integrate:
$f(x)=x^2+\left(1−\dfrac{1}{3}+\dfrac{1}{5}\right)\dfrac{x^6}{3} +\left(1 −\dfrac{1}{3}+\dfrac{1}{5}−\dfrac{1}{7}+\dfrac{1}{9}\right)\dfrac{x^{10}}{5} +... $ |
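A quick check with SymPy's series expansion (a sketch; the coefficients $\frac{13}{45}=\left(1-\frac13+\frac15\right)\frac13$ and $\frac{263}{1575}=\left(1-\frac13+\frac15-\frac17+\frac19\right)\frac15$ should appear):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.atan(x) * sp.atanh(x)
print(sp.series(f, x, 0, 12))
# x**2 + 13*x**6/45 + 263*x**10/1575 + O(x**12)
```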
Best way to solve $X^3-X^2-X-1=0$ | Here's a solution that finds all three roots of the function
$$x^3-x^2-x-1=0$$
Substitute $y=x-\frac{1}{3}$:
$$-\frac{4}{3}-y-(y+\frac{1}{3})^2+(y+\frac{1}{3})^3=0$$
Expanding this gives:
$$y^3-\frac{4}{3}y-\frac{38}{27}=0$$
If $y=\frac{\lambda}{z}+z$ then $z=\frac{1}{2}\left(y+\sqrt{y^2-4\lambda}\right)$
$$-\frac{38}{27}-\frac{4}{3}\left(z+\frac{\lambda}{z}\right)+\left(z+\frac{\lambda}{z}\right)^3=0$$
Multiply both sides by $z^3$
$$z^6+z^4\left(3\lambda-\frac{4}{3}\right)-\frac{38z^3}{27}+z^2\left(3\lambda^2-\frac{4\lambda}{3}\right)+\lambda^3=0$$
Substitute $\lambda=\frac{4}{9}$ and $u=z^3$
$$u^2-\frac{38}{27}u+\frac{64}{729}=0$$
Choose one of the solutions
$$u=\frac{1}{27}(19+3\sqrt{33})$$
Substitute back for $u=z^3$
$$z^3=\frac{1}{27}(19+3\sqrt{33})$$
Solving for $z$
$$z=\frac{1}{3}\sqrt[3]{19+3\sqrt{33}}$$
$$z=-\frac{1}{3}\sqrt[3]{-19-3\sqrt{33}}$$
$$z=\frac{1}{3}(-1)^{2/3}\sqrt[3]{19+3\sqrt{33}}$$
Starting with the first solution, we substitute back for $z=\frac{y}{2}+\frac{1}{2}\sqrt{y^2-\frac{16}{9}}$
Solving this for $y$ gives
$$y=\frac{4}{3\sqrt[3]{19+3\sqrt{33}}}+\frac{1}{3}\sqrt[3]{19+3\sqrt{33}}$$
Substitute back for $y=x-\frac{1}{3}$
$$x=\frac{1}{3}+\frac{4}{3\sqrt[3]{19+3\sqrt{33}}}+\frac{1}{3}\sqrt[3]{19+3\sqrt{33}}$$
This is the first solution for $x$, going on to the second solution
$$z=-\frac{1}{3}\sqrt[3]{-19-3\sqrt{33}}$$
Substitute back for $z=\frac{y}{2}+\frac{1}{2}\sqrt{y^2-\frac{16}{9}}$
$$\frac{y}{2}+\frac{1}{2}\sqrt{y^2-\frac{16}{9}}=-\frac{1}{3}\sqrt[3]{-19-3\sqrt{33}}$$
Solve for $y$
$$y=\frac{4(-1)^{2/3}}{3\sqrt[3]{19+3\sqrt{33}}}-\frac{1}{3}\sqrt[3]{-19-3\sqrt{33}}$$
Substitute back for $y=x-\frac{1}{3}$
$$x=\frac{1}{3}+\frac{4(-1)^{2/3}}{3\sqrt[3]{19+3\sqrt{33}}}-\frac{1}{3}\sqrt[3]{-19-3\sqrt{33}}$$
Here's the third solution
$$z=\frac{1}{3}(-1)^{2/3}\sqrt[3]{19+3\sqrt{33}}$$
For the last solution we substitute back for $z=\frac{y}{2}+\frac{1}{2}\sqrt{y^2-\frac{16}{9}}$
$$\frac{y}{2}+\frac{1}{2}\sqrt{y^2-\frac{16}{9}}=\frac{1}{3}(-1)^{2/3}\sqrt[3]{19+3\sqrt{33}}$$
Solve the equation for $y$
$$y=\frac{(-1)^{2/3}}{3}\sqrt[3]{19+3\sqrt{33}}-\frac{4}{3}\sqrt[3]{\frac{-1}{19+3\sqrt{33}}}$$
Substitute back for $y=x-\frac{1}{3}$
$$x=\frac{1}{3}+\frac{(-1)^{2/3}}{3}\sqrt[3]{19+3\sqrt{33}}-\frac{4}{3}\sqrt[3]{\frac{-1}{19+3\sqrt{33}}}$$
Here are the three solutions found above, collected together.
$$x=\frac{1}{3}+\frac{4}{3\sqrt[3]{19+3\sqrt{33}}}+\frac{1}{3}\sqrt[3]{19+3\sqrt{33}}\approx1.83928675521416113255$$
$$x=\frac{1}{3}+\frac{4(-1)^{2/3}}{3\sqrt[3]{19+3\sqrt{33}}}-\frac{1}{3}\sqrt[3]{-19-3\sqrt{33}}\approx-0.41964-0.60629i$$
$$x=\frac{1}{3}+\frac{(-1)^{2/3}}{3}\sqrt[3]{19+3\sqrt{33}}-\frac{4}{3}\sqrt[3]{\frac{-1}{19+3\sqrt{33}}}\approx-0.41964+0.60629i$$ |
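A numerical cross-check of the three roots (a sketch using NumPy's polynomial root finder):

```python
import numpy as np

# Coefficients of x^3 - x^2 - x - 1, highest degree first.
roots = np.roots([1, -1, -1, -1])
print(sorted(roots, key=lambda r: r.real))
# ≈ [-0.41964 - 0.60629j, -0.41964 + 0.60629j, 1.83928675521416]
```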
Number of NxN grids with only one "spot" allowed | Hint
First, count the number of grids with a single "horizontal spot," i.e. a blue square with a red square to its left. At the end, multiply by two.
Suppose the red square is in row $r$ and column $c$. An example when $n=7, r=3, c=5$ is below. Note that placing the spot at row $r$ and column $c$ forces a lot of other entries to be red and blue in order to prevent any further spots. However, there are still two subgrids left to be filled, which are smaller instances of the zero-spot problem. This lets you solve the problem for any particular $r,c$, and then you can sum over all possible values of $r, c$. The summations will simplify nicely if you group the terms together correctly.
v
B B B B B B .
B B B B B B .
. . . . R B . <-
. . . . R R R
. . . . R R R
. . . . R R R
. . . . R R R |
Let $G$ be a finite group of automorphisms of $E$ and set $F=Fix(G)$, Then why is $E:F$ always separable? | Let $\alpha\in E$ and let $S$ be the orbit of $\alpha$ under the action of $G$. Then the polynomial $$f(x)=\prod_{s\in S}(x-s)$$ has coefficients in $F$: each coefficient is a symmetric function in the elements of $S$, and any element of $G$ permutes the elements of $S$ and hence fixes that symmetric function. But by definition, the roots of $f$ are all distinct. So $f$ is a polynomial with coefficients in $F$ with distinct roots and $\alpha$ as a root. Since $\alpha\in E$ is arbitrary, this means $E$ is separable over $F$. |
If $G$ is a bipartite Euler and Hamiltonian graph, prove that the complement $\bar G$ of $G$ is not Eulerian. | Your line of reasoning is correct. There is a simpler way to prove/disprove this though: If $G$ is bipartite Eulerian with both sides having the same number $l$ of vertices, then the number $n_G$ of vertices in $G$ is $2l$, which is even. This implies that every vertex in the complement $\bar{G}$ of $G$ will have odd degree [indeed, $d_{\bar{G}}(v) = n_G-1-d_G(v) =2l-1-d_G(v)$, where $d_G(v)$ is the degree of vertex $v$ in $G$; make sure you see why this is so and why it is odd]. So $\bar{G}$ is not Eulerian.
[However, $\bar{G}$ can still be Hamiltonian itself. In fact, if $G$ is a cycle on (say) 16 vertices then $\bar{G}$ is Hamiltonian.] |
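A small sanity check with NetworkX on the 16-cycle example (a sketch; the complement has all degrees $16-1-2=13$, which is odd):

```python
import networkx as nx

G = nx.cycle_graph(16)   # bipartite, Eulerian and Hamiltonian
H = nx.complement(G)

print(nx.is_bipartite(G), nx.is_eulerian(G))  # True True
print(nx.is_eulerian(H))                      # False
print({d for _, d in H.degree()})             # {13}: every degree is odd
```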
Separation of variables for partial differential equations | There is an extremely beautiful Lie-theoretic approach to separation of variables,
e.g. see Willard Miller's book [1] (freely downloadable). I quote from his introduction:
This book is concerned with the relationship between symmetries of a linear second-order partial differential equation of mathematical physics, the coordinate systems in which the equation admits solutions via separation of variables, and the properties of the special functions that arise in this manner. It is an introduction intended for anyone with experience in partial differential equations, special functions, or Lie group theory, such as group theorists, applied mathematicians, theoretical physicists and chemists, and electrical engineers. We will exhibit some modern group-theoretic twists in the ancient method of separation of variables that can be used to provide a foundation for much of special function theory. In particular, we will show explicitly that all special functions that arise via separation of variables in the equations of mathematical physics can be studied using group theory. These include the functions of Lamé, Ince, Mathieu, and others, as well as those of hypergeometric type.
This is a very critical time in the history of group-theoretic methods in special function theory. The basic relations between Lie groups, special functions, and the method of separation of variables have recently been clarified. One can now construct a group-theoretic machine that, when applied to a given differential equation of mathematical physics, describes in a rational manner the possible coordinate systems in which the equation admits solutions via separation of variables and the various expansion theorems relating the separable (special function) solutions in distinct coordinate systems. Indeed for the most important linear equations, the separated solutions are characterized as common eigenfunctions of sets of second-order commuting elements in the universal enveloping algebra of the Lie symmetry algebra corresponding to the equation. The problem of expanding one set of separable solutions in terms of another reduces to a problem in the representation theory of the Lie symmetry algebra.
[1] Willard Miller, Symmetry and Separation of Variables, Addison-Wesley, Reading, Massachusetts, 1977 (out of print).
Solution to a Nonlinear System of ODEs | After eliminating $z(t)$, you get a second order Liouville equation in $x(t)$:
$$ x''(t) = \frac{3}{2} \cot(x(t)) x'(t)^2 $$
Integrating once (from $x''/x' = \tfrac{3}{2}\cot(x)\,x'$ we get $\ln x' = \tfrac{3}{2}\ln \sin(x) + \text{const}$, i.e. $x' = c\,\sin(x)^{3/2}$), this gives the implicit solution
$$ \int^{x(t)} \dfrac{ds}{\sin(s)^{3/2}} = c_1 t + c_2 $$
The elliptic integral comes from that integration. |
Stating that a function is injective using quantifiers | The formula
$$\forall a\forall b\Big(f(a)=f(b)\rightarrow a=b\Big)$$
states that the function $f$ is injective: meaning that, for any two elements of its domain, if $f(a)=f(b)$, then you actually have that the elements you started with are one and the same, $a=b$.
But your formula is not actually wrong! It just has a redundant part, that is,
$$\forall a \forall b\Big(f(a)=f(b)\leftarrow a=b\Big):$$
what is a function? A function $f:X\rightarrow Y$ is a relation on $X\times Y$ (that is, a subset of $X\times Y$) such that, if both $(a,y_{1})$ and $(a, y_{2})$ are in this relation, then $y_{1}=y_{2}$. But we usually write, to mean that $(x,y)$ is in a function $f$, $f(x)=y$, so this means that a function is a relation satisfying, for any $a$ and $b$ in its domain, that
$$a=b\quad\text{implies}\quad f(a)=f(b),$$
which is exactly the redundant part of your formula. The extra implication you wrote down is already built into the definition of a function, and is therefore not necessary, although not wrong either.
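For intuition, the first formula translates directly into a finite-domain test (a sketch in Python; `f` is any function and `domain` any finite iterable):

```python
def is_injective(f, domain):
    """Check: for all a, b in domain, f(a) == f(b) implies a == b."""
    seen = {}
    for a in domain:
        y = f(a)
        if y in seen and seen[y] != a:
            return False  # found a != b with f(a) == f(b)
        seen[y] = a
    return True

print(is_injective(lambda n: n * n, range(5)))      # True on 0..4
print(is_injective(lambda n: n * n, range(-2, 3)))  # False: f(-1) == f(1)
```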