title | upvoted_answer |
---|---|
$F\big( g(t) \big) - F\big( g(t + h) \big) \leq h$ implies that $g$ is right-continuous? | Choose $t\in[0,1)$ and assume, for any chosen $t$, $\lim\limits_{h\downarrow 0}g(t+h)$ exists. By the boundedness of $g$, these limits lie between $0$ and $1$.
Now, using the continuity of $F$ on $[0,1]$, note that,
$$
\lim\limits_{h\downarrow 0}F\big(g(t)\big)-\lim\limits_{h\downarrow 0}F\big(g(t+h)\big) = F\big(g(t)\big)-F\big(\lim\limits_{h\downarrow 0}g(t+h)\big) \leqslant \lim\limits_{h\downarrow 0}h = 0\,.
$$
Therefore,
$$
F\big(g(t)\big) \leqslant F\big(\lim\limits_{h\downarrow 0}g(t+h)\big)\,.\tag{1}
$$
Alternatively, since (for $\epsilon>0$)
$$
P\big( hf(X) > \epsilon \big) = \mathbb{E}\left[{\bf 1}_{\{hf(X)>\epsilon\}}\right] \leqslant \mathbb{E}\left[\frac{h}{\epsilon}f(X){\bf 1}_{\{hf(X)>\epsilon\}}\right] \leqslant \frac{h}{\epsilon}\mathbb{E}\left[f(X)\right],
$$
we have,
$$
\lim\limits_{h\downarrow 0}F\big( g(t) \big) - \lim\limits_{h\downarrow 0}F\big( g(t+h) - \epsilon \big) \geqslant \lim\limits_{h\downarrow 0}h - \lim\limits_{h\downarrow 0}P\big( hf(X) > \epsilon \big) \geqslant \lim\limits_{h\downarrow 0}h - \lim\limits_{h\downarrow 0}\frac{h}{\epsilon}\mathbb{E}\left[f(X)\right] = 0\,.
$$
So, for each $\epsilon > 0$ small enough,
$$
F\big(g(t)\big) \geqslant \lim\limits_{h\downarrow 0}F\big( g(t+h) - \epsilon \big) = F\big( \lim\limits_{h\downarrow 0}g(t+h) - \lim\limits_{h\downarrow 0}\epsilon \big) = F\big( \lim\limits_{h\downarrow 0}g(t+h) - \epsilon \big)\,.
$$
Therefore,
$$
F\big(g(t)\big) \geqslant F\big( \lim\limits_{h\downarrow 0}g(t+h) \big)\,.\tag{2}
$$
Together, $(1)$ and $(2)$ imply
$$
F\big(g(t)\big) = F\big( \lim\limits_{h\downarrow 0}g(t+h) \big)\,.
$$
But, over $[0,1]$, $F$ is one-to-one, hence
$$
g(t) = \lim\limits_{h\downarrow 0}g(t+h)\,,\quad\forall\ t\in[0,1)\,.
$$ |
Prove the gradient of a function is bounded | Let's compute $Dg(x)$ using the chain rule: $g$ is just the composition $x \stackrel{f}{\longmapsto} f(x) \stackrel{h}{\longmapsto} {\bf u} \cdot f(x)$, and $h$ is linear. Thus $$Dg(x)({\bf v}) = D(h\circ f)(x)({\bf v}) = Dh(f(x))\circ Df(x)({\bf v}) = h(Df(x)({\bf v})) = {\bf u}\cdot Df(x)({\bf v}).$$So $|Dg(x)({\bf v})| = |{\bf u} \cdot Df(x)({\bf v})| \leq \|{\bf u}\| \|Df(x)({\bf v})\|$, by Cauchy-Schwarz. Now, ${\bf u}$ is a unit vector, so we can keep on bounding that using the definition of the operator norm $\|Df(x)\|$: $$|Dg(x)({\bf v})| \leq \|Df(x)\|\|{\bf v}\| \leq M \|{\bf v}\|.$$ Let ${\bf v}$ range over the unit sphere and take the supremum on the left side to get $\|Dg(x)\| \leq M$ as well. |
Let $G$ be group and $ N \triangleleft G$ , $H < G$. Prove $NH < G$. | $\forall n_1,n_2 \in N$, $\forall h_1,h_2 \in H$, we want to show that $ n_{1}h_{1}( n_{2}h_{2} )^{-1} \in NH$.
It's simpler to prove this directly:
$$
n_1h_{1}(n_2h_2)^{-1} = n_1h_1(h_2^{-1}n_2^{-1}) = \left(n_{1}\underbrace{(h_{1} h_{2}^{-1})n_{2}^{-1}(h_{1} h_{2}^{-1})^{-1}}_{\in N}\right)(h_{1} h_{2}^{-1}) \in NH
$$ |
T distribution problem | Yes, you can go online for $t$-test $p$-value calculators as well as $z$-table calculators to verify that for around $40$ degrees of freedom or more, the two tests behave almost exactly the same. If you really want to use the $t$-test, though, you can use $t$-test $p$-value calculators online that can handle larger numbers of degrees of freedom. Your question says "Justify ANY procedure you use". You can say that if the number of degrees of freedom in estimating the standard deviation is 30 or 40 or more, then the empirical estimate of the standard deviation is stable enough to justify $z$-scores instead of the $t$-test. All the $t$-test does is take into account the bias in the standard deviation calculation. |
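For a quick numerical check, here is a minimal Python sketch using `scipy` (the test statistic and degrees of freedom below are made up for illustration; a two-sided test is assumed):

```python
from scipy.stats import norm, t

stat, df = 2.1, 40  # hypothetical test statistic and degrees of freedom

p_z = 2 * norm.sf(abs(stat))   # two-sided z-test p-value
p_t = 2 * t.sf(abs(stat), df)  # two-sided t-test p-value

print(p_z, p_t)  # roughly 0.036 vs 0.042 -- already close at 40 degrees of freedom
```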
Algebraic Extension help! | Let $a,b$ be any roots of $f(x),g(x)$ respectively taken from an extension field.
Subexercise One: explain why $[F(a,b):F]=(\deg f)\cdot(\deg g)$ suffices to show that $g(x)$ is irreducible over $F(a)$ (and, symmetrically, $f(x)$ is irreducible over $F(b)$). Hint: consider the extension $F(a,b)$ over $F(a)$ and $b$'s minimal polynomial over $F(a)$ versus $g(x)$. Also be sure to use the transitive property $L/M/N\Rightarrow [L:N]=[L:M][M:N]$.
Subexercise Two: actually show $[F(a,b):F]=(\deg f)(\deg g)$. Hint: show each degree on the right divides the index on the left and invoke coprimality, then argue the left hand side is bounded above by the right-hand side. |
Game semantics for first-order logic | You can treat $\lor$ as an existential quantifier over $2$ options: the left or right argument. So the devil gets to choose the left or right argument of the $\lor$ and the game continues.
Similarly, you can treat $\land$ as a universal quantifier over $2$ options. There you get to choose the left or right argument.
Finally, $\lnot$ switches the roles of the players.
The fact that one of the players has a winning strategy is just a particular instance of the fact that every finite deterministic two-player win/loss/draw game with perfect information has a winning or drawing strategy for one of the players. |
Loaded Dice Conditional Probability | Let's denote the events
LD - picked up the loaded dice;
S - shows 6 in the 1st throw and not 6 on the second throw;
We are asked to find
$$
P(LD|S)
$$
Use Bayes' theorem
$$
P(LD|S) = \frac{P(S|LD)P(LD)}{P(S|LD)P(LD) + P(S|\overline{LD})P(\overline{LD})}
$$
where
$$P(S|LD) = \frac{1}{2}\frac{1}{2},~P(LD)=P(\overline{LD})=\frac{1}{2},~P(S|\overline{LD}) = \frac{1}{6}\frac{5}{6}
$$
Plugging everything back into the fraction, we get
$$
\frac{9}{14}
$$ |
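A quick sanity check of the arithmetic with exact fractions (a minimal Python sketch, using the probabilities above):

```python
from fractions import Fraction as F

p_S_given_LD  = F(1, 2) * F(1, 2)  # loaded die: 6, then not 6
p_S_given_not = F(1, 6) * F(5, 6)  # fair die: 6, then not 6
p_LD = p_not = F(1, 2)             # each die is picked with probability 1/2

posterior = (p_S_given_LD * p_LD) / (p_S_given_LD * p_LD + p_S_given_not * p_not)
print(posterior)  # 9/14
```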
Is there a simple way to find all the solutions of $x_1 + x_2 + \dots + x_k + \dots + x_K = N$ when $x_k$s and $N$ are all non-negative integers? | Several integer linear programming or constraint programming solvers will find all feasible solutions upon request, without you having to write a specialized algorithm. Gurobi and Cplex both support this functionality and have MATLAB APIs. |
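If $K$ and $N$ are small you do not even need a solver: the solutions are exactly the weak compositions of $N$ into $K$ parts, and a minimal Python sketch (function name hypothetical) enumerates them by stars and bars:

```python
from itertools import combinations

def weak_compositions(N, K):
    """Yield all non-negative integer solutions of x_1 + ... + x_K = N."""
    # Stars and bars: choose K - 1 bar positions among N + K - 1 slots.
    for bars in combinations(range(N + K - 1), K - 1):
        starts = [0] + [b + 1 for b in bars]
        ends = list(bars) + [N + K - 1]
        yield tuple(e - s for s, e in zip(starts, ends))

print(len(list(weak_compositions(5, 3))))  # 21, which is C(7, 2)
```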
What is the limit of $\lim_{x\to1}\left(\frac{1}{\ln x}-\frac{1}{x-1}\right)$? | HINT: consider $$\frac{x-1-\log(x)}{(x-1)\log(x)}$$ and use L'Hospital |
Examples of proofs by induction over the set of prime numbers | Theorem ('Euclid's Lemma')
Let $p$ be a prime number and $a, b \in \mathbb{N}$ such that $p\mid ab$. Then $p \mid a$ or $p \mid b$.
Proof:
We proceed by induction on $p$. The basis of the induction is the case $p = 2$. Here the statement boils down (if we define 'even' as divisible by 2 and 'odd' as not even) to the fact that the product of two odd numbers is again odd. While non-trivial, I am confident that all readers have convinced themselves of this fact at a much earlier point in their lives and I won't dwell on it here.
Now for the induction step, assume that the claim has been proven for all primes $q < p$. We will proceed by Fermat descent. That is: we start from an hypothetical counterexample: a pair $(a, b)$ such that $p|ab$ but $p\not|a$ and $p\not|b$ and construct from it a pair $(a', b')$ with the same properties, but such that either $a' < a$ or $b' < b$. Since we then can repeat this procedure for as long as we want, we will eventually hit a pair $(a, b)$ in which one of the members equals zero; this is a contradiction since zero is clearly divisible by $p$. It follows that the original pair $(a, b)$ was not a counterexample to begin with.
To get started, we first notice that the conditions on $a$ and $b$ (that is: $ab \equiv 0 \pmod p$, $a,b \not\equiv 0 \pmod p$) only depend on their congruence classes modulo $p$ and hence we may assume without loss of generality that $0 < a, b < p$. It follows that $ab < p^2$ and hence $c := ab/p < p$.
Let $q$ be a prime divisor of $c$. Then $q < p$ as well. ($c$ does indeed have a prime divisor, since if $c = 1$ we would have $ab = p$, which by primality of $p$ means that $a = p$ or $b = p$, a contradiction). Now $q$ divides $ab$ and since $q < p$ the induction hypothesis states that $q|a$ or $q|b$. Let's assume without loss of generality that $q|a$ and that $a = qa'$. Writing $c = qc'$ we find that $qa'b = qc'p$ and hence that $a'b = c'p$. In other words we have that $p \mid (a'b)$, that $p \nmid b$ and moreover that $p \nmid a'$ since $a' \mid a$.
It follows that $(a', b)$ is a pair with the same properties as $(a, b)$ but with $a' < a$. Repeating this procedure we eventually run into a contradiction. |
How to rigorously prove that these two sets have different order types? | You’ve really answered your own question without quite realizing it. Suppose that there were an order-isomorphism $f:[0,1)\times\Bbb Z^+\to\Bbb Z^+\times[0,1)$. Then there are $m\in\Bbb Z^+$ and $x\in[0,1)$ such that $$f(\langle 0,1\rangle)=\langle m,x\rangle\;,$$ and there are $n\in\Bbb Z^+$ and $y\in[0,1)$ such that $$f(\langle 0,2\rangle)=\langle n,y\rangle\;.$$
Let $\langle k,z\rangle\in\Bbb Z^+\times[0,1)$ be such that $\langle m,x\rangle<_L\langle k,z\rangle<_L\langle n,y\rangle$; since $\langle 0,2\rangle$ is the immediate successor of $\langle 0,1\rangle$ in $[0,1)\times\Bbb Z^+$, $\langle k,z\rangle$ cannot belong to the range of $f$, and $f$ cannot be an order-isomorphism between $[0,1)\times\Bbb Z^+$ and $\Bbb Z^+\times[0,1)$ after all. |
Quotient Modules of a polynomial ring | If they were isomorphic, they would have the same annihilator. Unfortunately
$$\operatorname{Ann}_R R/I_a=I_a, \enspace\text{whereas}\quad\operatorname{Ann}_R R/I_b=I_b.$$
Note, however, they're isomorphic as $k$-vector spaces. |
Linear Functional: Continuous? | Suppose that $\mathcal B$ contains a sequence $(b_k)_{k\geqslant 1}$ such that $\lVert b_k\rVert=1$ for each $k$. Define $$L_n(x)=\sum_{k=1}^nk\cdot \delta_{b_k}(x).$$
Then for each $x$, $\sup_{n\geqslant 1}|L_n(x)|$ is finite. Since $\lVert L_n\rVert\geqslant n$, the principle of uniform boundedness implies a contradiction. |
Linear Algebra Challenge in Physics. | For (a) and (b), you can use this useful fact: If $p(t)$ is a polynomial (or a power series), then $Bp(B^\dagger) - p(B^\dagger)B = p'(B^\dagger)$. (You can show it for $p(t) = t^n$ by induction on $n$, and then extend linearly.)
So for (a), you can repeatedly use $B(B^\dagger)^n = (B^\dagger)^nB + n(B^\dagger)^{n-1}$ to find the result.
For (b), consider $p(t) = e^{zt}$.
For (c), I assume you mean $y_1$ and $y_2$ are eigenvectors with different eigenvalues? Then the result is a general fact about Hermitian operators. |
Z-score for binomial distribution does not have $\sqrt{n}$ term | Since the binomial random variable $W$ is the sum of $n$ independent Bernoulli variables, the expression from CLT which is close to the standard normal distribution can be formulated for just one $W$:
$$
\frac{W-np}{\sqrt{np(1-p)}}=\frac{W/n-p}{\sqrt{p(1-p)/n}}.
$$
Here you can interpret $W=X_1+\dots+X_n$, where the $X_i$ are i.i.d. Bernoulli variables. For the sequence of $X_i$ the expression becomes the standard one:
$$
\frac{X_1+\dots+X_n-np}{\sqrt{np(1-p)}}=\frac{\overline X-p}{\sqrt{p(1-p)/n}}.
$$ |
How to interpret objects and symbols in written mathematics? | Most notation is contextual. Here is a rough standard:
$\require{amsmath}$
\begin{align}
&i,j,k,l,m,n,p\ldots&\text{integers}\\
&p,q,r,s,t\ldots&\text{real numbers}\\
&\mathbb{C},\mathbb{R},\mathbb{Q},\mathbb{Z},\mathbb{N}\ldots&\text{sets of numbers}\\
&X,Y,W,Z,\Omega,\Gamma,\Lambda\ldots&\text{main sets---e.g., linear spaces}\\
&A,B,C,U,V,S,T\ldots&\text{subsets of main sets}\\
&a,b,x,y,z,\alpha,\beta,\lambda,\mu,\omega\ldots&\text{elements of sets}\\
&\mathcal{A},\mathcal{B},\mathcal{C}\ldots&\text{sets of sets---e.g., filters, topologies}\\
&f,g,h,p,q,\alpha,\beta,\lambda,\mu,\pi\ldots&\text{functions}\\
&\Gamma,\Delta,\Phi,\Psi\ldots&\text{sets of functions}
\end{align}
As for your question, think about how you would go about teaching the very thing you are reading. Then you will see the utility of notation. Furthermore, you will have a better understanding of why the author of the written material chose to write things in a certain way (maybe not the best way, in your opinion). It is up to you to rewrite things you would like to memorize in your own language with the caveat that likely others will dislike your own language and that some translation may be required when speaking to others.
I want to conclude that it is very important that mathematicians be allowed to have their own unique perspective and visualization. It is fine to ask how others visualize, but feel free to attempt to create your own visualization and run with it. |
Compactifications of limit ordinals | If $\alpha>\omega$, then the subspace $\omega+1$ of $\alpha$ is compact. It is therefore a compact subset of $\beta\alpha$ and hence a closed subset. But every infinite closed subset of $\beta\omega$ contains a copy of $\beta\omega$, so $\beta\omega$ contains no set homeomorphic to $\omega+1$. Thus, $\beta\alpha$ cannot be homeomorphic to $\beta\omega$.
If $\alpha=\beta+\omega$ for some limit ordinal $\beta$, then $\alpha$ is homeomorphic to the disjoint union of $\beta+1$ and $\omega$, and since $\beta+1$ is compact, $\beta\alpha$ is homeomorphic to $(\beta+1)\sqcup\beta\omega$.
It’s not clear to me just what happens when $\alpha$ is more complicated, even for $\omega^2$. |
Word for equivalence preserving transformations of equations | It is funny but it doesn't seem like English mathematical language has a generally used term for this because the concept is definitely very obvious.
I suggest that 'equivalence preserving transformation' is sufficient. It doesn't sound strange, even though usually in such circumstances where this would be used, 'transformation' by itself would be the usual term. |
Expectation of a continuous random variable on a discrete random variable | $$\mathsf E[X\mid Y]=\frac Y 2$$
$$\mathsf E (X)=\mathsf E(\mathsf E[X\mid Y])=\mathsf E\left(\frac Y 2\right)=\frac{1}{6}\left(\frac 1 2+\frac 2 2+\frac 3 2+\frac 4 2+\frac 5 2+\frac 6 2\right)=\frac {21} {12}=\frac 7 4$$ |
How can the topological characteristics of ergodic Markov chains be used to implement an algorithm to generate such Markov chains? | Removing cycles is not actually at all what you want: the more edges you have, the better it is, both for irreducibility and for aperiodicity.
The answer to your question depends a lot on the distribution you want your directed graph to have. To consider a common model, suppose that we take an $n$-vertex directed graph and include each of the $n^2$ possible edges with probability $p$.
Then, provided $np - \log n \to \infty$ as $n \to \infty$, the probability that the graph is strongly connected approaches $1$ as $n \to \infty$. (This can be found, for example, as Theorem 12.9 in Frieze and Karonski's Introduction to Random Graphs.) A Markov chain is irreducible iff the associated directed graph is strongly connected.
The reason for this threshold is that it is the point at which every vertex has in-degree and out-degree at least $1$ (which is definitely a prerequisite for strong connectivity, and turns out to be the last obstacle.) The expected out-degree of a vertex is $np$, so a vertex has out-degree $0$ with probability about $e^{-np}$ by a Poisson approximation. Then the expected number of vertices with out-degree $0$ is about $n e^{-np}$, so the probability that there are no such vertices is about $e^{-ne^{-np}}$ by a second Poisson approximation. The same estimate holds for in-degree.
By the time $p$ is large enough that the directed graph is strongly connected, it is already very likely to be aperiodic. If you allow loops, then we are very certain to have $\Omega(\log n)$ loops at this point, and even one loop guarantees aperiodicity. Even if you don't allow loops, we also expect many cycles of every length that's a small constant. In a strongly connected graph, just having two cycles whose lengths are relatively prime makes you aperiodic. |
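To see the threshold in action, here is a minimal sketch with `networkx` (assuming the $G(n,p)$ directed model described above, without loops):

```python
import math
import networkx as nx

n, trials = 200, 100
p = 2 * math.log(n) / n  # comfortably above the log(n)/n threshold

hits = 0
for _ in range(trials):
    G = nx.gnp_random_graph(n, p, directed=True)
    # Irreducible iff strongly connected; check aperiodicity only then.
    if nx.is_strongly_connected(G) and nx.is_aperiodic(G):
        hits += 1
print(hits / trials)  # should be close to 1
```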
To find Z-transform of given sequence | Hints:
i)
$$\sin( bn )=\frac{1}{2i}(e^{ibn}-e^{-ibn}).$$
ii) Z-Transform of $a^ne^{ibn}$ is given by
$$F(z) = \sum_{n=0}^{\infty} a^n (e^{ib})^nz^{-n}.$$
iii) The following is the geometric series, which you need in order to find a closed form for $F(z)$:
$$ \sum_{n=0}^{\infty} t^n = \frac{1}{1-t}. $$
I think you can finish it now. |
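For the record, combining the hints (and assuming $|ae^{\pm ib}z^{-1}|<1$ so the geometric series converges), the closed form works out to
$$
\mathcal{Z}\{a^n\sin(bn)\} = \frac{1}{2i}\left(\frac{1}{1-ae^{ib}z^{-1}}-\frac{1}{1-ae^{-ib}z^{-1}}\right) = \frac{az\sin b}{z^2-2az\cos b+a^2}.
$$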
Formula for the Variance of a sum of random variables | Please think about the RHS term $\displaystyle \sum_{i=1}^{n} \sum_{j=1}^{n} Cov(X_i, X_j)$
What happens when $i = j$?
$Cov(X_i, X_j)$ becomes $Var(X_i)$
When $i \neq j$, there are two terms viz., $Cov(X_i, X_j)$ and $Cov(X_j, X_i)$ in the summation. They are equal. Hence it is enough to replace the sum of these two terms by $2 Cov(X_i, X_j)$ when $i < j$. That is why you have a factor of $2$ before the single summation.
Hence
$\displaystyle \sum_{i=1}^{n} \sum_{j=1}^{n} Cov(X_i, X_j)$
$ = \displaystyle \sum_{i=1}^{n} Var (X_i) + 2 \displaystyle \sum_{i < j} Cov(X_i, X_j)$ |
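A quick numerical illustration (a minimal `numpy` sketch with made-up correlated data): the variance of the sum equals the sum of all entries of the covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 100_000))
X[1] += 0.5 * X[0]        # introduce some correlation

C = np.cov(X)             # 3x3 sample covariance matrix
total = X.sum(axis=0)     # X_1 + X_2 + X_3

print(total.var(ddof=1))  # Var(X_1 + X_2 + X_3)
print(C.sum())            # sum of all Cov(X_i, X_j) -- the two agree
```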
how to solve second order differential equation | Hint It is Euler-Cauchy type, so try$$y = x^m$$ |
In how many different ways can 50 children be distributed in 5 identical classrooms | The binomial coefficient isn't used because the classrooms are identical, thus having different classrooms be empty doesn't result in distributions that are considered different. Furthermore, if the classrooms were distinguishable, your binomials would not be correct. You are choosing $k$ classrooms to be empty, and there are $5$ classrooms, not $50$. So it should be $5\choose k$, not $50\choose k$. |
4th order pde rotating bar | I don't think there is a neat general formula like for the 1D wave equation. Separation of variables is the way to go, but boundary conditions are needed to determine the eigenfunctions $\phi_n$. From there, $$u(x,t) = \sum_n (A_n\cos \omega_n t+B_n\sin \omega_n t) \phi_n(x)$$
is the general form of $u$ with given boundary conditions. |
Improper Integral $\int_{1}^{\infty} \sin \left( \sin\left( \frac {1}{\sqrt{x}+1} \right) \right) dx$ | You don't need an asymptotic equivalence. Since for any $y\in[0,\pi/2]$
$$\sin y\geq\frac{2y}{\pi}$$
holds by convexity,
$$\int_{N}^{+\infty}\sin\sin\frac{1}{\sqrt{x}+1}\,dx \geq \frac{4}{\pi^2}\int_{N}^{+\infty}\frac{dx}{\sqrt{x}+1}$$
holds for any $N$ big enough, hence the starting integral is divergent. |
Find $\mathbb{P}(A)$ given that $\mathbb{P}(A\cup B)$ and $\mathbb{P}(A\cup B')$ | HINT Outcomes in $A$ are either also in $B$, or also not in $B$. |
How do I factor $x^8-x$ over $\mathbb{Z}_2$? | Well, it's a general (and obvious) fact that if $X-a$ divides a polynomial $P(X)\in k[X]$ for $a\in k$ ($k$ a field), then $a$ is a root of the polynomial. Since neither of your polynomials have $0$ or $1$ as a root, it must be that these cubic polynomials do not have linear factors in $k[X]$. But then this means that one cannot reduce them non-trivially anyway, so the reduction you have done so far is sufficient to complete the problem. |
How do you solve a system of linear equations in modulus arithmetic? | For a 2 by 2,
just use Cramer's rule:
https://en.wikipedia.org/wiki/Cramer%27s_rule |
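For instance, here is a minimal Python sketch (function name hypothetical) that applies Cramer's rule to $ax+by\equiv e$, $cx+dy\equiv f \pmod m$, assuming the determinant is invertible mod $m$:

```python
def solve_2x2_mod(a, b, c, d, e, f, m):
    """Solve a*x + b*y = e, c*x + d*y = f (mod m) by Cramer's rule."""
    det = (a * d - b * c) % m
    det_inv = pow(det, -1, m)  # raises ValueError if gcd(det, m) != 1
    x = (e * d - b * f) * det_inv % m
    y = (a * f - e * c) * det_inv % m
    return x, y

print(solve_2x2_mod(3, 4, 5, 3, 1, 6, 7))  # (0, 2): 3*0+4*2=8=1 and 5*0+3*2=6 (mod 7)
```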
Find the limit of a sequence involving products of $\left(1 − \frac{1}{\sqrt{ n+1}}\right)$ | Note that\begin{align}\lim_{n\to\infty}\log\left(\left(1-\frac1{\sqrt2}\right)\times\cdots\times\left(1-\frac1{\sqrt{n+1}}\right)\right)&=\lim_{n\to\infty}\log\left(1-\frac1{\sqrt2}\right)+\cdots+\log\left(1-\frac1{\sqrt{n+1}}\right)\\&=\sum_{k=1}^\infty\log\left(1-\frac1{\sqrt{k+1}}\right).\end{align}The sum of this series is $-\infty$, because$$\sum_{k=1}^\infty\log\left(1-\frac1{\sqrt{k+1}}\right)=-\sum_{k=1}^\infty-\log\left(1-\frac1{\sqrt{k+1}}\right)\tag1$$and you can apply the comparison test to the right hand side of $(1)$, comparing it with $\displaystyle\sum_{k=1}^\infty\frac1{\sqrt{k+1}}$. So, the limit of your sequence is $0$. |
How to compute Shannon information? | Is there an algorithm to compute it without calculating the probabilities $p_i$ first?
I doubt it, but I don't have a proof.
Having calculated the entropy $H_n$ of the first $n$ symbols can I find the entropy $H_{n+m}$ of the $n+m$ symbols (knowing about the first $n$ symbols only $H_n$)?
No. Suppose $H_n = 0$ and the final $m$ symbols are $b\ldots b$. You don't know whether $H_{n+m} = 0$ or $$H_{n+m} = -\sum_{i\in\{n,m\}} \frac{i}{n+m} \log \frac{i}{n+m}$$ |
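For reference, the direct computation is short; a minimal Python sketch (which makes it clear that the counts, i.e. the $p_i$, are what you fundamentally need):

```python
from collections import Counter
from math import log2

def entropy(s):
    """Shannon entropy (bits per symbol) of the empirical distribution of s."""
    counts = Counter(s)
    n = len(s)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

print(entropy("aaaa"))      # 0.0
print(entropy("aaaabbbb"))  # 1.0
```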
Probability Problem Trouble | Each round of the game is independent of the last one. This means you can use linearity of expectation to work out the mean and standard deviation.
For two variables $X$ and $Y$, it's always true that $E[X+Y] = E[X] + E[Y]$. If $X$ and $Y$ are independent then it is also true that $E[XY] = E[X]E[Y]$. These are powerful results.
For the mean, it's fairly straightforward. The expectation over $N$ rounds is just the expected value of your total winnings from $N$ rounds, which is equal to the sum of the expected winnings from each of the rounds.
Since each round you have the same expected value, this leads to an expected value of $.799N$ over $N$ rounds.
The standard deviation is just the square root of the variance, which is given by $\mathrm{Var}(X) = E[X^{2}] - (E[X])^{2}$. If you play another round and score $Y$, your new variance is $\mathrm{Var}(X+Y) = E[(X+Y)^{2}] - (E[X+Y])^{2}$. Can you expand that out and see what you get?
You should get $E[X^{2}]+2E[XY]+E[Y^{2}] - (E[X]^{2} + 2E[X]E[Y] + E[Y]^{2})$, expanding the sums outside the expectation operator.
Since the rounds are each independent, $X$ and $Y$ are independent so we can use $E[XY] = E[X]E[Y]$.
Cancelling the terms should leave you with $E[X^{2}] - (E[X]^{2}) + E[Y^{2}] - (E[Y]^{2}) = \mathrm{Var}(X) + \mathrm{Var}(Y)$ - the variances just add!
This means your variance also scales with $N$ so your standard deviation scales with $\sqrt{N}$. For $N$ rounds, your standard deviation is $6.353\sqrt{N}$. So as $N$ gets larger, the standard deviation shrinks relative to the mean.
As to whether you'd want to play this game, even though the expected value of each round is positive, it's offset by the alarmingly large standard deviation (a consequence of the small chance of losing hard). However, the ratio between the mean and standard deviation improves with the number of rounds you play.
If you play enough rounds, the central limit theorem means you can approximate the distribution of your score as a normal distribution - so if your expectation is $4\sigma$ for example, you have a $0.003\%$ chance of a net loss. As lulu's shown, this takes a long time to build to though, and you could easily run out of money before reaching this point.
So how many rounds you play would probably depend on a number of factors - how much money you have as a safety net, how much time you have to play, how much of a risk-taker you are, whether you have a job that pays more than $\$.799\times 60 = \$47.94$ an hour... |
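To make the last point concrete, a minimal Python sketch of the normal approximation for the chance of a net loss after $N$ rounds, using the per-round mean $0.799$ and standard deviation $6.353$ from above:

```python
from math import erf, sqrt

def p_net_loss(N, mu=0.799, sigma=6.353):
    """Normal approximation to P(total winnings < 0) after N rounds."""
    z = -mu * N / (sigma * sqrt(N))
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

for N in (10, 100, 1000):
    print(N, p_net_loss(N))  # about 0.35, 0.10, 0.00003
```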
Convergence criteria of series/sequences | For the complex case, the limit should be taken with absolute values; otherwise, saying $>0$ does not make sense.
Assume $L=\lim_{k}|a_{k}|/|b_{k}|>0$, then there exists some $N$, for all $k\geq N$, $\left|\dfrac{|a_{k}|}{|b_{k}|}-L\right|<1$, then $\dfrac{|a_{k}|}{|b_{k}|}<L+1$, so $|a_{k}|<(L+1)|b_{k}|$, and if $\displaystyle\sum_{k}|b_{k}|<\infty$ then $\displaystyle\sum_{k}|a_{k}|\leq(L+1)\sum_{k}|b_{k}|<\infty$. |
Equivalent conditions between two ideals and a nilpotent ideal in a ring. | Suppose $IJ=(0)$ and $(0)$ is the only nilpotent ideal.
Clearly $I \cap J \subseteq I$ and $I \cap J \subseteq J$, so $$(I\cap J)^2 \subseteq IJ = (0) \Rightarrow I \cap J \ \text{is nilpotent} \Rightarrow I \cap J = (0)$$
This shows that $2 \Rightarrow 1$. |
Comparing Binomial Probability to Poisson Random Variable Probability | Let $n=6000$, $p=\frac1{1000}$, and $X\sim\mathrm{Pois}(np)$. Then
$$
\mathbb P(X=0) = e^{-np} = e^{-6} \approx 0.002478752.
$$
Now let $Y\sim\mathrm{Binom}(n,p)$. Then
$$
\mathbb P(Y=0) = (1-p)^n \approx 0.002471322.
$$
Since the difference is $-7.43006\times10^{-6}$, this is negligible. |
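A one-line check in Python:

```python
from math import exp

n, p = 6000, 1 / 1000
print(exp(-n * p))   # 0.00247875... (Poisson)
print((1 - p) ** n)  # 0.00247132... (binomial)
```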
Finding the volume of a solid s using cross sections | The radius $r$ refers to the distance between the $y$-axis and the right half of the parabola. You can see this in the image given to you. So to find the radius, you will want to solve the equation for $x$, which gives $x = \pm 2\sqrt{2-y}$. Hence $r = x = 2\sqrt{2-y}$. The area of the quarter circle, $\frac{\pi}{4}r^2$, then simplifies to $\frac{\pi}{4}\,4(2-y)$. Finally, $\int_{0}^{2}\pi(2-y)\,dy$ will give you the volume of the solid. |
Let $M$ be a $G$-module, and let $f \in Z^2(G,M)$. Show that $f(1,1)=f(1,\sigma)=f(\sigma,1),\forall \sigma \in G$. | I don't think that the claim holds for all 2-cocycles (I guess you need to do some normalizing for it to work out, but I'm too rusty to recall the details).
Recall that any function $\phi:G\to M$ gives rise to a 2-coboundary $f$ via the recipe
$$
f(\sigma,\tau):=\sigma\cdot \phi(\tau)-\phi(\sigma\tau)+\phi(\sigma).
$$
It is an easy standard exercise to check that a 2-coboundary is always a 2-cocycle.
Consider the following example. Let $G=\langle\sigma\rangle\simeq C_2$ be a cyclic group of order two, acting on $M=\Bbb{Z}_3$ via $1\cdot x=x$, $\sigma\cdot x=-x$ (so $M$
becomes a $G$-module). Let us define $\phi:G\to M$ by declaring $\phi(1)=1$,
$\phi(\sigma)=2=-1$. Then
$$
\begin{aligned}
f(1,1)&=1\cdot\phi(1)-\phi(1)+\phi(1)=1,\\
f(1,\sigma)&=1\cdot\phi(\sigma)-\phi(\sigma)+\phi(1)=1,\\
f(\sigma,1)&=\sigma\cdot\phi(1)-\phi(\sigma)+\phi(\sigma)=-1,\\
f(\sigma,\sigma)&=\sigma\cdot\phi(\sigma)-\phi(1)+\phi(\sigma)=-1.
\end{aligned}
$$
You see that $f(1,g)=f(1,1)$ for all $g\in G$, but $f(\sigma,1)\neq f(1,1)$.
So the claim does not hold for all 2-coboundaries, and hence cannot hold for all 2-cocycles either. |
Probability of a typing monkey backspacing all letters. | Short easy answer
Edited after @Hurkyl's observation.
[Corrected] It is a known result that an (unconstrained) simple 1D random walk visits every position infinitely many times, except of course if the probability $p$ of up-steps is $0$ or $1$, in which cases the walk is not random.
Therefore, if in your example both $k$ and $n$ are strictly positive, then you may consider the problem as a simple 1D random walk with $\frac{n}{n+k}>0$ probability of up-steps and $\frac{k}{n+k}>0$ probability of down-steps, starting at position $x$, and ending when position $0$ (clear screen) is reached (this is the constrained version). Anyhow, position $0$ is clearly reached.
Long answer (not even complete)
Starting at position $s(0)=x$ (characters), at each unit of time either you go up one step with probability $\frac{n}{n+k}$ or you go down with probability $\frac{k}{n+k}$. If you reach position $0$ you stop. If you count up-steps $u$ and down-steps $d$, in $m$ time units the possible distance run is, say, $\Delta$ (i.e. the difference $\Delta$ between the final and the initial number of characters after $m$ types). Then $u$ and $d$ satisfy
$$
\begin{cases}
u-d~=&\Delta \\
u+d~=&m\\
u,\,d\in&\mathbb N
\end{cases}
\quad\rightarrow\quad
\begin{cases}
u~=&\frac{m+\Delta}2 \\
d~=&\frac{m-\Delta}2\\
u,\,d\in&\mathbb N
\end{cases}
$$
Then, as expected, in order for $u,d$ to be nonnegative integers,
$$
-m\leq\Delta\leq m
\quad\text{and}\quad
m+\Delta~\text{even}
$$
You want $\Delta=-x$ for some $m\geq x$, corresponding to $u=\frac{m-x}2$ up-steps and $d=\frac{m+x}2$ down-steps.
Let us first assume $x=2y$ even; then $m=2h$ must be even as well. The probability that in exactly $m\geq x$ steps (types) you reach $0$ following a specific sequence of $u$ up-steps $U$ and $d$ down-steps $D$, is then
$$
\left(\frac{n}{n+k}\right)^u
\left(\frac{k}{n+k}\right)^d
~=~
\left(\frac{n}{n+k}\right)^{\frac{2h-2y}{2}}
\left(\frac{k}{n+k}\right)^{\frac{2h+2y}{2}}
~=~
\frac{n^{h-y}k^{h+y}}{(n+k)^{2h}}
$$
Note that not every sequence of $U$ and $D$ goes well for our purposes, i.e.
$$
\delta_1\,\delta_2\,\delta_3\,\ldots\,\delta_m
\qquad\delta_i\in\{U,D\}
$$
must satisfy that for every $j=1\ldots m$, the subsequence $\delta_1\,\delta_2\,\delta_3\,\ldots\,\delta_j$ does not reach $0$. Clearly it cannot go below $0$, and in order not to overcount the events it cannot reach $0$ earlier either, in which case we would have stopped at time $j$.
Then, let $u_j$ and $d_j$ be the number of up-steps and down-steps in the sequence up to time $j$. Then
$$
u_m=u,~d_m=d
\qquad\text{and}\qquad
x+u_j-d_j > 0
\quad\forall\, j=1\ldots m
$$
Then it is a matter of determining how many such sequences exist and summing over $m=x,x+2,x+4,\ldots$ The case $x$ odd is similar. |
Optimal $p$ for biased coin? | Let $E_k$ be the expected win if we start with a history of $k$ tails.
We want to maximize $E_0$.
We have
$$E_3 = p\cdot(-1+E_0) + (1-p)\cdot 2$$
as we either lose a dollar and start from $k=0$ again, or win two dollars and stop.
Similarly
$$E_2 = p\cdot(-1+E_0) + (1-p)\cdot (2+E_3)$$
$$E_1 = p\cdot(-1+E_0) + (1-p)\cdot (2+E_2)$$
$$E_0 = p\cdot(-1+E_0) + (1-p)\cdot (2+E_1).$$
Combining these equations, we find
$$E_0= \frac{3p^4-14p^3+26p^2-24p+8}{(p-1)^4}.$$
The derivative of this is
$$E_0'(p) =\frac{p^3-5p^2+10p-4}{(p-1)^5} $$
and has a single root in the interval $[0,1]$ at $p\approx 0.522$, which corresponds to a local maximum value of $\approx15.09$. |
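A quick numeric scan over $p$ confirms this optimum (a minimal Python sketch using the closed form above):

```python
def E0(p):
    return (3*p**4 - 14*p**3 + 26*p**2 - 24*p + 8) / (p - 1)**4

best = max((k / 10000 for k in range(1, 10000)), key=E0)
print(best, E0(best))  # approximately 0.522 and 15.09
```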
Find as many non-isomorphic self-complementary graphs as possible (with up to $7$ vertices) | Your two examples are only one example, since they are isomorphic.
It will help your search to notice that $K_6$ and $K_7$ both have an odd number of edges, so there is no self-complementary graph on $6$ or $7$ vertices. Having $2$ or $3$ vertices is taken care of by the same argument, and you have already found the single example (up to isomorphism) on $4$ vertices.
What is left for you is then only to enumerate graphs with $5$ vertices and $5$ edges, and check which of them are self-complementary. |
Finding a second linearly independent solution to a differential equation | There's a method for doing this, called reduction of order. I have linked to the Wikipedia article, which should get you started. |
How do I compute Gaussian curvature in cylindrical coordinates? | Your cylindrical coordinate surface $r(\theta,z)$ in Cartesian coordinates is
$$\begin{align*}x&=r(\theta,z)\cos\;\theta\\y&=r(\theta,z)\sin\;\theta\\z&=z\end{align*}$$
which now allows you to apply the usual Gaussian curvature formula. In particular, you should get the expression
$$K=-\frac{r^3\frac{\partial^2 r}{\partial z^2}+r^2\left(\left(\frac{\partial^2 r}{\partial \theta\partial z}\right)^2-\frac{\partial^2 r}{\partial z^2}\frac{\partial^2 r}{\partial \theta^2}\right)+2r\frac{\partial r}{\partial \theta}\left(\frac{\partial^2 r}{\partial z^2}\frac{\partial r}{\partial \theta}-\frac{\partial r}{\partial z}\frac{\partial^2 r}{\partial \theta\partial z}\right)+\left(\frac{\partial r}{\partial \theta}\frac{\partial r}{\partial z}\right)^2}{\left(r^2+\left(r\frac{\partial r}{\partial z}\right)^2+\left(\frac{\partial r}{\partial \theta}\right)^2\right)^2}$$
For completeness, if you have $z$ as a function of $r$ and $\theta$, your Cartesian parametrization is
$$\begin{align*}x&=r\cos\;\theta\\y&=r\sin\;\theta\\z&=z(r,\theta)\end{align*}$$
and the corresponding Gaussian curvature expression is
$$K=\frac{r^2\frac{\partial^2 z}{\partial r^2}\left(\frac{\partial^2 z}{\partial \theta^2}+r\frac{\partial z}{\partial r}\right)-\left(\frac{\partial z}{\partial \theta}-r\frac{\partial^2 z}{\partial r\partial \theta}\right)^2}{\left(r^2\left(\left(\frac{\partial z}{\partial r}\right)^2+1\right)+\left(\frac{\partial z}{\partial \theta}\right)^2\right)^2}$$
I will leave the derivation of the Gaussian curvature expression for
$$\begin{align*}r&=f(u,v)\\\theta&=g(u,v)\\z&=h(u,v)\end{align*}$$
to the interested reader. |
Homogeneous Littlewood-Paley decomposition | You need to show that the series converges in the sense of distributions, i.e. for arbitrary Schwartz function $\phi\in \mathcal S$ you have (modulo polynomial distribution)
$$\lim _{n\to+\infty}\sum_{k=-n}^n \langle P_k u,\phi\rangle=\langle u,\phi\rangle = \langle F[u],F^{-1}[\phi]\rangle.$$
Given that the Fourier transform is an isomorphism of tempered distributions and of the space of Schwartz functions, we can write by definition
$$\langle P_k u,\phi\rangle = \langle F[P_k u],F^{-1}[\phi]\rangle = \langle \psi_k F[u],F^{-1}[\phi]\rangle=\langle F[u],\psi_k F^{-1}[\phi]\rangle,$$
therefore, we need to consider the sum
$$\sum_{k\in\Bbb Z}\psi_k F^{-1}[\phi].$$
By the hypothesis on $\psi_k$ this sum converges to $F^{-1}[\phi]$ modulo the value at $\xi=0$. Therefore, the resulting sum of Fourier transforms of projections differs from $F[u]$ at most on the singleton $\{\xi=0\}$ (in other words, by a finite sum of the Dirac delta and its derivatives). By taking the inverse Fourier transform we obtain that the sum of projections differs from the initial distribution by the inverse Fourier transform of that finite sum, which yields the polynomial. |
How do I factorise this polynomial? | You can factorize it this way:
$$x^3-3x^2-9x-5=x^3-3x^2-4x-5x-5=x(x-4)(x+1)-5(x+1)=(x+1)(x^2-4x-5)$$
then continue from there. |
How to determine two angles formed when a perpendicular is dropped | Dropping the perpendicular splits your original triangle into two 'new' triangles, both right-angled triangles: triangle $XOY$ and triangle $ZOY$ (draw a picture!). Since $YO$ is perpendicular to $XZ$, you know the values of $\angle XOY$ and $\angle ZOY$, and you know that the sum of the angles in each of the 'new' triangles is $180^{\circ}$, so... (I expect you can finish the rest for yourself). |
Do these vectors form a basis? | The vectors $(a,a,a,0,0,0)$ are all multiples of $(1,1,1,0,0,0)$; the vectors $(b,b,b,b,b,b)$ are all multiples of $(1,1,1,1,1,1)$. The vectors $(0,d,2d,0,0,0)$ are all multiples of $(0,1,2,0,0,0)$. The span of all these vectors give you only a 3-dimensional subspace.
So the key lies in the vectors $(1,c,c^2,1,c,c^2)$; all these vectors span at most a $3$-dimensional subspace, since they all lie in the subspace of vectors $(x_1,x_2,x_3,x_4,x_5,x_6)$ with $x_1=x_4$, $x_2=x_5$, and $x_3=x_6$, which is $3$-dimensional. But they include $(1,1,1,1,1,1)$ (obtained with $c=1$); so you will get at best a 5-dimensional subspace from taking all these vectors together with the previously considered ones; you cannot get all of $\mathbb{R}^6$.
In fact, you get exactly a $5$-dimensional subspace: the vectors
$$\begin{align*}
&(1,c,c^2,1,c,c^2)\\
&(1,d,d^2,1,d,d^2)\\
&(1,\ell,\ell^2,1,\ell,\ell^2)
\end{align*}$$
are linearly independent if and only if the vectors $(1,c,c^2)$, $(1,d,d^2)$, and $(1,\ell,\ell^2)$ are linearly independent. This occurs if and only if
$$\left|\begin{array}{ccc}
1 & c & c^2\\
1 & d & d^2\\
1 & \ell & \ell^2
\end{array}\right| = (d-c)(\ell-c)(\ell-d)$$
is nonzero (this is a Vandermonde matrix); so distinct values of $c$, $d$, and $\ell$ will give you three linearly independent vectors, which therefore span
$$\mathbf{W} = \bigl\{(x_1,x_2,x_3,x_4,x_5,x_6)\in\mathbb{R}^6\mid x_1=x_4, x_2=x_5, x_3=x_6\bigr\}.$$
So your vectors span exactly a five-dimensional subspace of $\mathbb{R}^6$, and not all of $\mathbb{R}^6$.
In fact, $(0,0,1,0,0,0)$ will be one of the vectors that does not lie in the span: if it lay in the span, then so would $(0,1,0,0,0,0) = (0,1,2,0,0,0)-2(0,0,1,0,0,0)$; hence also $(1,0,0,0,0,0)=(1,1,1,0,0,0)-(0,1,0,0,0,0)-(0,0,1,0,0,0)$. Since you can get any vector of the form $(x,y,z,x,y,z)$, this would also allow you to obtain the other three standard basis vectors, and you would have a span equal to the entire space, which is not the case.
So, no, you cannot obtain $(0,0,1,0,0,0)$ with the vectors described. |
Compute $2i\Re(z)\Im(z) = \bar{z} + 3 + i$ | $$2iab=a-bi+3+i$$
$$2iab=a+3+i(1-b)$$
How would you find a and b now? |
two Lie algebras: Isomorphic or not | The first Lie algebra $L_1$ is the cross product.
https://en.wikipedia.org/wiki/Cross_product#Coordinate_notation
You can write the second like so:
$[2y,x+z]=[2y,x]+[2y,z]=-2(x+z)$. $[2y,x-z]=-2z+2x=2(x-z)$. $[x-z,x+z]=2[x,z]=2y$.
This is the presentation of $\mathfrak{sl}_2$:
https://en.wikipedia.org/wiki/Special_linear_Lie_algebra
Over the field of real numbers they are not isomorphic.
https://mathoverflow.net/questions/165656/how-many-three-dimensional-real-lie-algebras-are-there |
Let $\Lambda(x)=(\lambda_1x_1,\lambda_2x_2,...)$ be an operator $l_2 \to l_2$. Show its range is closed iff $\inf_{\lambda_k\not=0} |\lambda_k|>0$ | Hints: Let $M$ be the closed subspace generated by $\{e_i: \lambda_i \neq 0\}$ where $(e_i)$ is the standard orthonormal basis. (Note that $M$ consists of the sums $\sum_{\lambda_i \neq 0} a_ie_i$ with $ \sum |a_i|^{2} <\infty$). Let $\Lambda_1=\Lambda |M$. Show that the range of $\Lambda_1$ is also closed and that $\Lambda_1$ is also injective. By the open mapping theorem its inverse is continuous, so there exists a constant $C$ such that $\sum_{\lambda_i \neq 0} |\lambda_i x_i|^{2} \geq C \sum |x_i|^{2}$ for all $(x_i)$. From this the conclusion follows immediately. |
Need help with Application of Leibniz's Integral Rule | You had the calculation correct up to
$$\frac{d}{d \beta} \int_0^\beta b N F(b)^{N-1} f(b) db$$
At this point you need to apply the fundamental theorem of calculus, not Leibniz's rule.
$$\frac{d}{d y} \int_0^y f(x)dx = f(y)$$
For your problem it yields
$$\frac{d}{d \beta} \int_0^\beta b N F(b)^{N-1} f(b) db = \beta N F(\beta)^{N-1} f(\beta)$$ |
Prove if $(a,p)=1$, then $\{a,2a,3a,...,pa\}$ is a complete residue system modulo $p$. | $$k_1a \equiv k_2a \pmod p \iff p \ | \ (k_1 - k_2)a \iff p \ | \ (k_1 - k_2)$$
Since in our case $k_1, k_2 \in A = \{1, 2, ..., p\}$ we have that $\left| { k_1 - k_2} \right| \lt p $, so $p$ will not divide $(k_1 - k_2)$ unless $k_1 = k_2$. So for any two distinct elements $r, q \in A$, $ra \not \equiv qa \pmod p $, which says that no two elements of $\{a,2a,3a,...,pa\}$ are congruent to each other modulo $p$. Hence they are pairwise distinct modulo $p$. Since we have $p$ elements in the set, it forms a complete set of residues modulo $p$. |
On the proof of $\lim\limits_{x\to 0}\frac{\sin x}{x}=1$ | While a lot about this relies on fairly fuzzy intuition, that part at least is fairly clear. If $E$ is a point on the segment $BD$, then $AE =\sqrt{AB^2+BE^2}>AB$ by the Pythagorean theorem, and the intersection $F$ of line $AE$ with the circle lies inside the segment $AE$. Then $AF\subset AE$ (as sets of points), and every point inside the circular wedge in that direction from $A$ is inside the triangle. Take the union over all possible $E$, and the circular wedge is a subset of the triangle. |
combinatorics - request for checking modified solution | The method is perfectly correct - the only problem is that you have the wrong value for $\binom 93$ which should be $84$ not $168$ (this makes the total $791$). |
Is this proof involving quantifiers valid? | Your proof is exactly correct! Well done.
And you used existential introduction in both cases exactly because you were trying to prove an existential: $\exists y (a \leq y \land \neg I(y))$
It's only after that proof by cases that you introduce a universal ... which is why you introduced $a$ as an arbitrary object.
Yes, it may feel weird that in case $2$ you end up witnessing the existential with the same object that you use for the universal at the end ... this is certainly unusual for a proof ... but it is all perfectly valid!
Indeed, when the statement says: $\forall x \exists y ...$, then of course for some of the $x$'s the $y$ could be the same as the $x$ ... and that's exactly what is going on here. |
How to solve this logarithm system? | Use the fact that $$\log_b(a) = \dfrac1{\log_a(b)}$$
Hence, if we denote $\log_9(x) = a$ and $\log_y(8) = b$, we get that
\begin{align}
a+b & = 2\\
\dfrac1a + \dfrac1b & = \dfrac83 \implies \dfrac{a+b}{ab} = \dfrac83 \implies ab = \dfrac34
\end{align}
Now solve for $a$ and $b$ and hence $x$ and $y$. |
Equivalence of geometric and algebraic definition of dot product | At author's suggestion, my comment, as an answer:
From $B=\sum B_ie_i$, we get $$A\cdot B=A\cdot\sum B_ie_i=\sum A\cdot(B_ie_i)=\sum B_i(A\cdot e_i)$$ |
Orbit space of a continuous group action | Suppose $O \subseteq S$ is open. As $X{/}G$ has the quotient topology w.r.t. $\pi$, $\pi[O]$ will be open iff $\pi^{-1}[\pi[O]]$ (the saturation of $O$ under the equivalence relation) is open in $S$.
Now note that
$$\pi^{-1}[\pi[O]] = \{x \in S: \exists x' \in O: \exists g \in G: gx'=x\}$$
which equals $$\bigcup_{g \in G} Og= \bigcup_{g \in G} T_g[O]$$
where by $Og$ I mean all points $\{x \cdot g, x \in O\}$ where $\cdot$ denotes the group action, and $T_g: S \to S, T_g(x)=x \cdot g$ is a homeomorphism (continuous as the action is, and with (also continuous) inverse $T_{g^{-1}}$). So $\pi^{-1}[\pi[O]]$ is a union of open sets (images of $O$ under the homeomorphisms $T_g$, which are open maps) hence open, and so $\pi$ is an open map too. |
A set in a $\sigma$-algebra that can't be "reached" with countable set-theoretical operations | I think the answer is negative. You are conflating two things here: that an element of the $\sigma$-algebra may not be constructed explicitly, and that an element cannot be formed by countably many applications of the 3 elementary set operations. In general there is no reason to expect a $\sigma$-algebra to be generated by some set (of subsets). All we know is that we have a set in $P(\mathcal{X})$ that satisfies the 3 axioms. Whether there exists such a generating set (of subsets) is not clear. And in particular, if you assume you have such a generating set (of subsets), then every element by definition must be generated by countably many such set operations. |
Does not algebraic multiplicity = geometric multiplicity $\Rightarrow$ the matrix is diagonalizable? | Indeed they are not similar. But $A$ is in fact not diagonalizable.
It has a size 3 Jordan block:
$(A+3E)^2 \neq 0$ (whereas $(B+3E)^2=0$). This is also what you got using the $m_\lambda$ formulation. |
A problem in the proof of the completeness of $L^1$ space in stein | The series converges absolutely in the norm of $L^1$ hence it is well defined as an integrable function. |
conditional probability P(X=m+n|X>=m) | As Did said, use the Geometric Series to simplify.
$$\begin{align}\mathsf P(X\geq m)~&=~\sum_{k=0}^\infty \mathsf P(X=m+k)\\[1ex]&=~\sum_{k=0}^\infty p(1-p)^{m+k}\end{align} $$
Then as you appear to know: $$\mathsf P(X=m+n\mid X\geq m) = \dfrac{\mathsf P(X=m+n)}{\mathsf P(X\geq m)}$$
Once you have found that, the comparison to $\mathsf P(X=n)$ is rather obvious.
As to commenting, ask: What can you interpret the $p$ to represent? |
1 Coordinate Geometry | Hint:
You may use the fact that,
if the bisectors of the angles $\hat A$ and $\hat C$ intersect at a point $I$ on $BD$, then the distance from $I$ to $AB$ equals the distance from $I$ to $AD$, and the distance from $I$ to $CD$ equals the distance from $I$ to $CB$ |
How many ways are there to add up odd integers to 20? | To sum up to $20$, we could:
first write a $1$ and count all the ways to add up to the remainder, $19$, or
first write a $3$ and count all the ways to add up to the remainder, $17$, or
first write a $5$ and count all the ways to add up to the remainder, $15$, or …
The answer will be the sum of all these counts.
Let’s denote the number of lists of odd positive integers that sum to $n$ as $a_n$.
We can easily see that $a_1 = 1$ (just $1$) and $a_2 = 1$ (just $1+1$).
We can generalize our observation and say that, for $n \geq 1$,
$$a_n = a_{n-1} + a_{n-3} + a_{n-5} + \dots \tag{i}$$
ending at either $a_1$ (if $n$ is even) or $a_0$ (if $n$ is odd), where $a_0=1$ counts the empty list.
But then also for $n \geq 3$,
$$ a_{n-2} = a_{n-3} + a_{n-5} + \dots \tag{ii}$$
so combining $\text{(i)}$ and $\text{(ii)}$ we find
$$ a_n = a_{n-1} + a_{n-2}, $$
meaning $a_n$ are just the Fibonacci numbers ($1, 1, 2, 3, 5, \dots$).
Thus $a_{20}$ is the twentieth Fibonacci number, $6765$. |
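A brute-force check (a minimal Python sketch) that counts the lists directly and recovers the recurrence:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    """Number of ordered lists of odd positive integers summing to n."""
    if n == 0:
        return 1  # the empty list
    return sum(a(n - k) for k in range(1, n + 1, 2))

print([a(n) for n in range(1, 11)])  # 1, 1, 2, 3, 5, 8, ... (Fibonacci)
print(a(20))                         # 6765
```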
Method to solve this indefinite integral $ \int \frac{x^{4}}{\sqrt{x^{2}+9}}dx $ | hint
We know that
$$(\forall t\in \Bbb R)\;\; \sqrt{1+\sinh^2(t)}=\cosh(t)$$
and
$$\sqrt{9+(3\sinh(t))^2}=3\cosh(t)$$
So, with $t=3\sinh(t) $,
the integral becomes
$$81\int \sinh^4(t)dt$$
now, use linearization by expanding
$$\left(\frac{e^t-e^{-t}}{2}\right)^4$$ |
Suppose that $\hat{f}(n) = -\hat{f}(-n)\geq 0$ holds for all $n \in \mathbb{Z}$. Prove that $\sum^{\infty}_{n=1} \frac{\hat{f} (n)}{n} < \infty.$ | Why don't we just use the Cauchy-Schwarz inequality and Parseval's Theorem, as follows?
$$\sum_{n=1}^\infty \frac{|\widehat{f}(n)|}{n} \le \Big( \sum_{k=1}^\infty \frac{1}{k^2}\Big)^{1/2} \Big( \sum_{n=1}^\infty |\widehat{f}(n)|^2 \Big)^{1/2} \le \frac{\pi}{\sqrt{6}} \Big( \int_0^1 |f(x)|^2 \, \mathrm{d} x \Big)^{1/2} $$
We don't even need that $\widehat{f}(n) = - \widehat{f}(-n) \ge 0$: the series is absolutely convergent. |
How to find this limit $\lim_{x\to+\infty}f(x)=0$ | Since $\int |f'| < \infty$ and $f(t) = f(t_0) + \int_{t_0}^t f'(\tau) d \tau$ we have that $f_\infty = \lim_{t \to \infty} f(t) $ exists. Since $\int|f| < \infty$, we see that we must have $f_\infty =0$. |
How can we determine if a boolean expression is trivial or non trivial | The first one is trivial and is equivalent to $$x_1+x_2+x_3\ge 3$$
The second one is not trivial. Its equivalent form is$$x_1+x_2\ge 2\\\text{or}\\x_3+x_4\ge 2$$
Remark
Note that $x_1+x_2+x_3+x_4\ge 4$ leads to extra cases like$$x_1+x_3\ge 2\\\text{or}\\x_2+x_4\ge 2$$ |
Did Zariski really define the Zariski topology on the prime spectrum of a ring? | Johnstone's Stone spaces contains the following historical note at the end of Chapter V:
The Zariski spectrum is really a misnomer; it would be better to call it the Jacobson-Zariski-Grothendieck spectrum, except that the latter is too much of a mouthful. As explained in the Introduction, it was Zariski [1952] who introduced a topology on an arbitrary algebraic variety, by taking its algebraic subsets as closed sets; then Grothendieck [1960], exploiting the correspondence between points of an affine variety and maximal ideals of its coordinate ring, transferred Zariski's topology to the set of prime ideals of an arbitrary commutative ring. However, essentially the same topology had been introduced much earlier by Jacobson [1945] under the name 'hull-kernel topology', and extensively studied by various authors (e.g. [Arens and Kaplansky 1948], [Kaplansky 1950], [Gillman 1957], [Kohls 1957]) in a line of develoment which remained separate from the Zariski-Grothendieck one, even for some years after the publication of [Grothendieck and Dieudonné 1960] and [Bourbaki 1961a].
The relevant part of the introduction is also an interesting read:
The other area [besides category theory] where one searches in vain for the influence of Stone's Theorem is in algebraic geometry, with the rise of the 'Zariski topology'. It was sometime in the late forties (see [Zariski 1952]) that O. Zariski realized how one might define a topology on any abstract algebraic variety, by taking its algebraic subsets as closed sets; the precise date is difficult to determine, since Zariski himself does not seem to have attached much importance to the idea. (There is no mention of the Zariski topology in the first edition of Weil's book [1946] on algebraic geometry, although it plays a central role in the second edition [1962].) It was not until the work of Serre [1955] that the Zariski topology became an important tool in the application of topological methods (in this case, sheaf cohomology) to abstract algebraic geometry. There is an obvious similarity between the topologies introduced by Zariski and Stone, and indeed Dieudonné [1974] asserts that Zariski was influenced by Stone's work; but there seems to be no acknowledgement of this influence in Zariski's own papers.
The refoundation of algebraic geometry using schemes in place of varieties, begun by Grothendieck [1959, 1960] in the late fifties, brought the Zariski and Stone topologies even closer together; indeed, the latter is just the special case of the former applied to the spectrum of a Boolean ring. But again, one will not find any reference to Stone in the work of Grothendieck, even though his use of the word 'spectrum' is an obvious echo of [Stone 1940], and Grothendieck, with his background in functional analysis, must have been familiar with Stone's work in that field. Again, when the Zariski topology made its first appearance in a book on commutative algebra, as opposed to algebraic geometry, [Bourbaki 1961a], there was no mention of Stone's name. (The Zariski topology does not occur in [Zariski and Samuel 1958].)
I don't quite have the time to add in links to all those references... anyway, in short, it seems that Zariski only considered the maximal spectrum of varieties.
Zariski 1952: Zariski, Oscar. The fundamental ideas of abstract algebraic geometry. Proceedings of the International Congress of Mathematicians, Cambridge, Mass., 1950, vol. 2, pp. 77–89. Amer. Math. Soc., Providence, R.I., 1952. MR0045412
Grothendieck 1960: Grothendieck, Alexander. The cohomology theory of abstract algebraic varieties. Proc. Internat. Congress Math. (Edinburgh, 1958), pp. 103–118. Cambridge Univ. Press, New York, 1960. MR0130879
Jacobson 1945: Jacobson, N. A topology for the set of primitive ideals in an arbitrary ring. Proc. Nat. Acad. Sci. U.S.A. 31 (1945), 333–338. MR0013138
Arens and Kaplansky 1948: Arens, Richard F.; Kaplansky, Irving. Topological representation of algebras. Trans. Amer. Math. Soc. 63 (1948), 457–481. MR0025453
Kaplansky 1950: Kaplansky, Irving. Topological representation of algebras. II. Trans. Amer. Math. Soc. 68 (1950), 62–75. MR0032612
Gillman 1957: Gillman, L. Rings with Hausdorff structure space. Fund. Math. 45 (1957), 1–16. MR0092773
Kohls 1957: Kohls, C. W. The space of prime ideals of a ring. Fund. Math. 45 (1957), 17–27.
Grothendieck and Dieudonné 1960: Grothendieck, A. Éléments de géométrie algébrique. I. Le langage des schémas. Inst. Hautes Études Sci. Publ. Math. No. 4 (1960), 228 pp. MR0163908
Bourbaki 1961a: Bourbaki, N. Éléments de mathématique. Algèbre commutative. Hermann, Paris, 1961. |
Let $K \subset \mathbb{R^1}$ consist of $0$ and the numbers $\frac{1}{n}$ for $n = 1,2,3,...$. Prove that $K$ is compact. | Hint: Here's a quick proof.
Let $\{U_\alpha\}$ be an open cover of $K$. In particular, there exists an open set, say $U_1$, such that $0 \in U_1$. Since $0$ is a limit point of $K$, $U_1$ contains all but finitely many of the $1/n$. Now I will let the reader finish the proof. |
Showing that the elements $6$ and $2+2\sqrt{5}$ in $\mathbb{Z}[\sqrt{5}]$ have no gcd | It is not true in general that if $\gcd(x,y)=d$ then $(x)+(y)=(d)$. In particular, if $R=\mathbb{Z}[x]$ (the ring of polynomials over $\mathbb{Z}$ in the indeterminate $x$), then $\gcd(2,x)=1$ (since any divisor of $2$ in $R$ must be an integer, and the only integers that divide $x$ in $R$ are $\pm 1$), however $(2,x)=(2)+(x)\subsetneq R$.
More generally, integral domains for which gcds can be written as a linear combination are known as Bézout domains--domains where every finitely generated ideal is principal.
As Bill mentions in his answer, GCD domains (ie integral domains in which every pair of nonzero elements has a gcd) satisfy the property that every irreducible is prime. In particular, if $R$ is an integral domain in which every nonzero nonunit can factor into irreducibles, then $R$ is a GCD domain if and only if $R$ is a UFD.
As for showing that $6$ and $2+2\sqrt{-5}$ have no gcd, this can also be done easily enough using the norm function $N:\mathbb{Z}[\sqrt{-5}]\rightarrow\mathbb{N}\cup\{0\}$ defined by $N(a+b\sqrt{-5})=a^2+5b^2$. In particular for all $\alpha,\beta\in R$, $N(\alpha\beta)=N(\alpha)N(\beta)$, $N(\alpha)=0$ if and only if $\alpha=0$, and $N(\alpha)=1$ if and only if $\alpha\in U(R)$ (proving this is a canonical abstract algebra exercise). So, if there is a gcd of 6 and $2+2\sqrt{-5}$, then there must be a common divisor of 3 and $1+\sqrt{-5}$ (since 2 divides both 6 and $2+2\sqrt{-5}$ in $R$).
Thus, there exists $\alpha\in R$ such that $\alpha \mid 3$ and $\alpha \mid 1+\sqrt{-5}$. Taking the norm (and using the norm properties above), we have $N(\alpha)\mid 9$ and $N(\alpha) \mid 6$. Therefore, it follows that either $N(\alpha)=1$ or $N(\alpha)=3$. However, the fact that there are no integer solutions to the equation $a^2+5b^2=3$ (why?) implies that there is no element in $R$ having a norm of 3. Therefore $N(\alpha)=1$ and $\gcd(3,1+\sqrt{-5})=1$.
Now suppose that $6$ and $2+2\sqrt{-5}$ have a gcd. Then we have $2=2\gcd(3,1+\sqrt{-5})=\gcd(6,2+2\sqrt{-5})$. However, we reach a contradiction as $1+\sqrt{-5}$ is a common divisor of both 6 and $2+2\sqrt{-5}$, but $1+\sqrt{-5}$ does not divide 2 (Hint: suppose it does and use the norm). Therefore $\gcd(6,2+2\sqrt{-5})$ does not exist. |
A question regarding complete metric spaces | One and only one: the same of $T^n$. Let $x\in M$ be the unique fixed point of $T^n.$
Now, we have that $T^n(T(x))=T(T^n(x))=T(x),$ and this implies that $T(x)$ is a fixed point of $T^n,$ so $T(x)=x$ by uniqueness of fixed point. This proves that x is a fixed point of $T.$
On the other hand, if $T(y)=y,$ iterating T we have that also $T^n(y)=y,$ and by uniqueness of the fixed point we conclude that $y=x,$ so it is unique. |
Elementary Matrices Row Operation | Here you have to add $3$ times the 2nd row to the 3rd row to make the $(3,2)$ entry zero. When you do this you automatically get the $(3,3)$ entry of $B$.
So the elementary matrix will be
$\begin{pmatrix}
1&0&0\\
0&1&0\\
0&3&1
\end {pmatrix}$
Here $\begin{pmatrix}1&0&0\end{pmatrix}$ in the first row denotes that the 1st row of the resultant matrix $B$ is $1$ times the 1st row plus $0$ times the 2nd row plus $0$ times the 3rd row of $A$. Similarly for the others: $\begin{pmatrix}0&1&0\end{pmatrix}$ in the 2nd row denotes that the 2nd row of $B$ is just $1$ times the 2nd row of $A$, and $\begin{pmatrix}0&3&1\end{pmatrix}$ in the last row denotes that the last row of $B$ is $3$ times the 2nd row plus $1$ times the last row of $A$. |
Antonyms for SUBSPACE | The antonym could be "ambient space". However, in the case of concatenation, you might be interested in looking at the "(external) direct sum of vector spaces".
You also ask the difference between "external direct sum" and "Cartesian product". The Cartesian product is actually called the "direct product". For finitely many spaces, the direct product and direct sum are the same.
However, if you have an infinite collection of vector spaces, then the two are not generally the same.
More precisely: given a collection $\{V_i\}_{i \in I}$ of vector spaces, the direct product is the vector space which is equal to the cartesian product $\prod_{i \in I} V_i$ as a set and the operations are coordinate-wise.
However, the direct sum $\bigoplus_{i \in I} V_i$ is a subspace of the direct product consisting of those elements which have all but finitely many coordinates equal to $0$.
If $I$ is finite, then the two are the same. |
How does one create a rotation about a given axis in $R^{3}$ from rotations about the other axes? | In order to rotate by $\theta$ around some axis denoted by $\hat{u}=[u_{x},u_{y},u_{z}]$ you can construct the following $3\times3$ matrix, where $c_{\theta}=\cos{\theta}$ and $s_{\theta}=\sin{\theta}$:
$$\mathbf{R}_{\hat{u}}\triangleq\begin{bmatrix}c_{\theta}+u_{x}^{2}(1-c_{\theta}) & u_{x}u_{y}(1-c_{\theta})+u_{z}s_{\theta} & u_{x}u_{z}(1-c_{\theta})-u_{y}s_{\theta} \\ u_{x}u_{y}(1-c_{\theta})-u_{z}s_{\theta} & c_{\theta}+u_{y}^{2}(1-c_{\theta}) & u_{y}u_{z}(1-c_{\theta})+u_{x}s_{\theta} \\ u_{x}u_{z}(1-c_{\theta})+u_{y}s_{\theta} & u_{y}u_{z}(1-c_{\theta})-u_{x}s_{\theta} & c_{\theta}+u_{z}^{2}(1-c_{\theta})\end{bmatrix}$$ |
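A minimal `numpy` sketch that builds this matrix exactly as written and sanity-checks it (it should be orthogonal and leave the axis $\hat u$ fixed):

```python
import numpy as np

def rotation_matrix(u, theta):
    """Rotation about the unit axis u by theta, per the formula above."""
    u = np.asarray(u, dtype=float)
    c, s = np.cos(theta), np.sin(theta)
    cross = np.array([[0, -u[2], u[1]],
                      [u[2], 0, -u[0]],
                      [-u[1], u[0], 0]])
    # c*I + (1-c)*u u^T gives the diagonal/symmetric part; s*cross.T the rest.
    return c * np.eye(3) + (1 - c) * np.outer(u, u) + s * cross.T

R = rotation_matrix([0, 0, 1], np.pi / 2)
print(np.allclose(R @ R.T, np.eye(3)))                                       # True
print(np.allclose(rotation_matrix([1, 0, 0], 0.7) @ [1, 0, 0], [1, 0, 0]))   # True
```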
What are the names in English for Alterando, Invertendo, Componendo and Dividendo? | I think those rules are called by their latin names, if one ever would give it names! Something like 'Corollaries of cross multiplication' could help. |
Question based of orthocenter distance from angular points | The area $S$ of triangle $ABC$ is the sum of the areas of $ABH$, $BCH$ and $CAH$, so that:
$$
S={1\over2}AB\cdot HF+{1\over2}BC\cdot HD+{1\over2}CA\cdot HE.
$$
By dividing both sides by $S$ we get:
$$
1={AB\over2S} HF+{BC\over2S} HD+{CA\over2S} HE.
$$
Observe now that $AB/2S=1/CF$, $BC/2S=1/AD$ and $CA/2S=1/BE$, so we may rewrite the above equality as:
$$
1={HF\over CF}+{HD\over AD} +{HE\over BE}.
$$
Now plug in the obvious equalities $HF=CF-CH$, $HD=AD-AH$ and $HE=BE-BH$ to get:
$$
1={CF-CH\over CF}+{AD-AH\over AD} +{BE-BH\over BE},
$$
that is:
$$
1=1-{CH\over CF}+1-{AH\over AD} +1-{BH\over BE}
$$
and finally:
$$
{CH\over CF}+{AH\over AD}+{BH\over BE}=2,
$$
which is the sought-after result.
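A numerical spot-check of the identity (my addition; any acute triangle will do, so that $H$ lies inside):

```python
import numpy as np

# Vertices of an acute triangle
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.5, 3.0])

def foot(P, Q, R):
    # Foot of the perpendicular from P onto the line QR
    d = R - Q
    return Q + d * np.dot(P - Q, d) / np.dot(d, d)

D, E, F = foot(A, B, C), foot(B, C, A), foot(C, A, B)  # feet of the altitudes

# Orthocenter H: intersect altitudes AD and BE, i.e. solve A + t(D-A) = B + s(E-B)
M = np.column_stack([D - A, B - E])
t, s = np.linalg.solve(M, B - A)
H = A + t * (D - A)

total = (np.linalg.norm(C - H) / np.linalg.norm(C - F)
         + np.linalg.norm(A - H) / np.linalg.norm(A - D)
         + np.linalg.norm(B - H) / np.linalg.norm(B - E))
print(total)  # ~2.0
```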
GOOD NEWS:
This result does not depend on the measures of the angles and holds for any point $H$ inside the triangle, provided $HD$, $HE$ and $HF$ are perpendicular to the sides of $ABC$. It is in fact a consequence of the generalized Viviani theorem.
BAD NEWS:
This result does not hold if $H$ is outside of the triangle (obtuse triangle). In that case however ${CH\over CF}+{AH\over AD}+{BH\over BE}$ does not have a fixed value, so the question cannot be answered. |
Cardinal arithmetic in inner models | The second question admits an easy negative answer: in models of $\sf AD$ every $\omega_n$ is singular for finite $n>2$.
The result is due to Kleinberg, Corollary 2.2 of the following paper:
E. M. Kleinberg, ${\rm AD}\vdash $ “the $\aleph _{n}$ are Jonsson cardinals and $\aleph _{\omega }$ is a Rowbottom cardinal”, Ann. Math. Logic 12 (1977), no. 3, 229--248. MR 469769. |
Proof that ${{2n}\choose{n}} > 2^n$ and ${{2n + 1}\choose{n}} > 2^{n+1}$, with $n > 1$ | The first one is easy:
$\displaystyle
{{2n}\choose{n}}
= \frac{2n}{n} \frac{2n-1}{n-1} \cdots \frac{n+1}{1}
> 2 \cdot 2 \cdots 2 = 2^n,
$
since each of the $n$ factors $\frac{n+k}{k}$ is at least $2$ and, for $n>1$, the factor $\frac{2n-1}{n-1}=2+\frac1{n-1}$ exceeds $2$, so the inequality is strict.
The second one needs Pascal's relation and induction:
$\displaystyle
{{2n+1}\choose{n}}
= {{2n}\choose{n}} + {{2n}\choose{n-1}}
$
$\displaystyle\qquad\qquad
= {{2n}\choose{n}} + {{2n-1}\choose{n-1}} + {{2n-1}\choose{n-2}}
$
$\displaystyle\qquad\qquad
> {{2n}\choose{n}} + {{2n-1}\choose{n-1}}
$
$\displaystyle\qquad\qquad
> 2^n + 2^n = 2^{n+1},
$
where the final step uses the first inequality for $\binom{2n}{n}$ and the induction hypothesis $\binom{2(n-1)+1}{n-1} > 2^{n}$ for $\binom{2n-1}{n-1}$ (base case $n=2$: $\binom{5}{2}=10>2^3$). |
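Both inequalities are easy to confirm by brute force for small $n$ (my addition):

```python
from math import comb

for n in range(2, 50):
    assert comb(2 * n, n) > 2 ** n
    assert comb(2 * n + 1, n) > 2 ** (n + 1)
print("both inequalities hold for 2 <= n < 50")
```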
Solution Set of $\sqrt{x+1}+\sqrt{x-1}=1$ | No solutions. For $\sqrt{x-1}$ to be well-defined, we must have $x \ge 1$, so $\sqrt{x+1} \ge \sqrt 2$. The left side of your equation is bigger than the right whenever both sides are well-defined. |
Totally real submanifold of 2 dimensional complex manifold | It depends on $X$, but all $3$ of those surfaces can be embedded as totally real submanifolds in some complex manifold of dimension $2$.
In all cases we can use the fact that a Lagrangian submanifold of a Kähler manifold is totally real (which is proven in the answer to this question: What is the relation between totally real submanifold and Lagrangian submanifold?).
$S^{1} \times S^{1}$ is Lagrangian in $\mathbb{CP}^{2}$, since $\mathbb{CP}^{2}$ is a toric manifold and the pre-image of a regular value of the toric moment map is a Lagrangian torus.
$\mathbb{RP}^{2}$ is Lagrangian in $\mathbb{CP}^{2}$, embedded by just taking points whose homogeneous coordinates are real numbers.
To see $S^{2}$ as a Lagrangian, just take a smooth rational curve with self-intersection $-2$ in a del Pezzo surface, which can be achieved by blowing up a point and then a point on its strict transform. |
If $\phi$ is an injective homomorphism $G \rightarrow G' $ is it true that $G \cong \phi (G)$ | Yes, by the first isomorphism theorem: $(G/\ker\phi)\cong \phi(G)$. But if $\phi$ is injective, $\ker\phi=\{e\}$, so $G\cong\phi(G)$. |
$K[[x]]$ is not a Jacobson ring | To resolve this question: $\{0\}$ is what you're looking for.
Actually it's the only other prime ideal in the entire ring, since the nonzero proper ideals all look like $(x^n)$, and $(x^n)$ is prime only for $n=1$.
Thinking about this a little, you can see that a local ring is Jacobson if and only if its primes are maximal. In particular, local rings with Krull dimension of $1$ or more can't be Jacobson.
You can also generalize this away from this local principal ideal domain example to Dedekind domains. Since nonzero primes are maximal in Dedekind domains, the only question that remains to be answered is "is the zero ideal an intersection of maximal ideals or not?"
If a Dedekind domain has finitely many maximal ideals, the intersection of all of these ideals is nonzero, and so no collection of maximal ideals can intersect to zero. In this case, it is not Jacobson.
On the other hand, suppose we have a Dedekind domain with infinitely many prime ideals. If the intersection of all primes is nonzero, then it has a unique factorization in terms of finitely many powers of prime ideals. But since "divides mean contains," and all prime ideals contain (divide) this ideal, all prime ideals would have to appear in its factorization (but there are too many.) Thus the intersection of all prime ideals would have had to have been zero in the first place, and the ring is Jacobson. |
Making a Piecewise Function a Single Function | You can rewrite $f(x)$ as $$f(x)=\chi_{[a,b)}(x)g(x)+\chi_{[c,d)}(x)h(x),$$ where $\chi_{[a,b)}(x)$ is $1$ if $x\in[a,b)$ and $0$ otherwise. |
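A small numpy sketch of this trick (my addition; the pieces `g` and `h` and the intervals are hypothetical placeholders):

```python
import numpy as np

def chi(lo, hi, x):
    # Indicator function of the half-open interval [lo, hi)
    return ((lo <= x) & (x < hi)).astype(float)

g = lambda x: x ** 2   # hypothetical piece on [0, 1)
h = lambda x: 3 - x    # hypothetical piece on [2, 3)

x = np.linspace(-1.0, 4.0, 11)
f = chi(0, 1, x) * g(x) + chi(2, 3, x) * h(x)  # single-formula piecewise f
print(f)
```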
Finding number of relations using counting | Hint:
Think of every relation as a binary square matrix of size $4$ whose rows and columns are indexed by the characters $a,b,c,d$. If the entry in row $a$ and column $c$ is $1$, then $(a,c) \in R$; otherwise not. Try to observe the properties such a matrix must have if $R$ is reflexive, symmetric or antisymmetric. |
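To make the matrix picture concrete (my addition), a brute-force count over all $2^{16}$ matrices reproduces the closed-form answers $2^{n^2-n}$ for reflexive and $2^{n(n+1)/2}$ for symmetric relations:

```python
from itertools import product

n = 4
cells = [(i, j) for i in range(n) for j in range(n)]

reflexive = symmetric = 0
for bits in product([0, 1], repeat=n * n):
    M = dict(zip(cells, bits))  # the 0/1 matrix of the relation R
    if all(M[(i, i)] for i in range(n)):
        reflexive += 1
    if all(M[(i, j)] == M[(j, i)] for i in range(n) for j in range(i)):
        symmetric += 1

print(reflexive, 2 ** (n * n - n))         # 4096 4096
print(symmetric, 2 ** (n * (n + 1) // 2))  # 1024 1024
```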
Flipping heads 10 times in a row | Here is my attempt: So denote by $P_i$ the probability of having $10$ heads in a row in $i$ tosses. So $P_{10}=1/2^{10}$.
For $i=10,...,20$, we can calculate the probability straight forward by saying: Probability that I get $10$ heads row and my heads start at $1$ plus probability that I get $10$ heads in a row and I start at $2$ plus, and so on. For instance, for $i=16$ we have $1/2^{10}+6/2^{11}=0.390625$ precisely what you got.
When $i>20$, then the counting becomes a little more messy, but I think its doable using recursion. Note that $P_{i+1}=P_i+$Probability that my $10$ heads in a row start at $i-9$ (let me denote this probability $Q_i$). So you know that $P_i$ is for the first couple of cases, note that $Q_i$ for $i\leq 20$ is just $1/2^{11}$ because we need a $T$ to go on the $i-10$ position and followed by $10$ heads, however, that does not work for say $i=21$ because we want more, we want the $11$th position to be a $T$, the following to be 10 $H$s, and the previouse ones not to be a string of $10$ heads because we are counting the strings whose starting point is at $12$. Thus, $Q_{21}=(1/2^{11})(1-P_{10})$. Hence, $P_{21}=P_{20}+Q_{21}$ to exactly what you have.
Similarly, $P_{22}=P_{21}+Q_{22}$ where $Q_{22}=(1/2^{11})(1-P_{11})$; I just checked, and indeed I get the same result.
In general, you can build them recursively using $Q_i=(1/2^{11})(1-P_{i-11})$.
To state it more clearly (the above was me thinking as I typed, so it turned out somewhat messy): set $P_0=\dots=P_9=0$, $P_{10}=1/2^{10}$, and $P_n=P_{n-1}+(1/2^{11})(1-P_{n-11})$. |
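The recursion is immediate to implement (my addition), and reproduces the values discussed above:

```python
def prob_run_of_10_heads(n):
    # P_n: probability of 10 consecutive heads somewhere within n fair tosses
    P = [0.0] * max(n + 1, 11)
    P[10] = 1 / 2 ** 10
    for i in range(11, n + 1):
        P[i] = P[i - 1] + (1 / 2 ** 11) * (1 - P[i - 11])
    return P[n]

print(prob_run_of_10_heads(16))   # 0.00390625, the i = 16 case above
print(prob_run_of_10_heads(100))  # probability within 100 tosses
```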
Why can a variable's dependence be neglected in a function? | $\newcommand{\by}{\mathbf{y}}$
This is slightly wrong, as there are indeed terms missing in the Taylor expansion. These terms follow the pattern of the $\by$-derivatives, so that no information is lost, but ...
They should have just said that, to derive the order conditions (resp. the method of the highest possible order for a given number of stages), they only consider autonomous systems $\by'=f(\by)$. One has to take care that $\by$ is a vector and $f$ is vector-valued, so that $f'(\by)$ is a matrix, the Jacobi matrix. Higher derivatives then are tensors, vector-valued symmetric multilinear forms.
This is general enough, as you can transform any non-autonomous system into an artificially autonomous one as
$$
\frac{d}{dt}\pmatrix{t\\\by}=F((t,\by))=\pmatrix{1\\f(t,\by)}
$$
where now the state vector has the time $t$ as additional component. |
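A sketch of this trick in code (my addition): wrap a non-autonomous right-hand side $f(t,\mathbf{y})$ into an autonomous one by adjoining $t$ to the state vector.

```python
import numpy as np

def autonomize(f):
    # Turn y' = f(t, y) into z' = F(z) with augmented state z = (t, y)
    def F(z):
        t, y = z[0], z[1:]
        return np.concatenate(([1.0], f(t, y)))
    return F

f = lambda t, y: t * y        # example non-autonomous scalar ODE y' = t*y
F = autonomize(f)

z = np.array([0.0, 1.0])      # state (t, y) = (0, 1)
h = 0.1
z = z + h * F(z)              # one explicit Euler step on the augmented system
print(z)                      # [0.1, 1.0]
```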
Equality of subgroups in groups locally finite | Lemma. A finite group $G$ is nilpotent if and only if, forall $a,b \in G$, $(|a|,|b|)=1 \Rightarrow [a,b]=1$.
Proof. If $G$ is nilpotent, then it is the direct product $P_1 \times \cdots \times P_n$ of its Sylow subgroups. So if $(|a|,|b|)=1$, then $a$ and $b$ lie in the direct products of the Sylow subgroups for the primes dividing their respective orders; these two sets of primes are disjoint, so the corresponding direct factors commute elementwise, and hence $[a,b]=1$. Conversely, if the condition holds, then Sylow subgroups of distinct primes commute, so $G$ is nilpotent.
Now let $G$ be a locally finite group. The condition in the lemma holds in $G/K$, so $G/K$ is locally nilpotent, and hence $N \le K$. Conversely, for any $N$ such that $G/N$ is locally nilpotent, the condition $(|a|,|b|)=1 \Rightarrow [a,b]=1$ holds in $G/N$, so $K \le N$. |
Show that $ABC$ is equilateral if and only if $a+b+c=0$ | Note that we need that $|a|=|b|=|c|$ as extra condition otherwise we can find solutions which do not represent equilateral triangle (e.g. $a=1, b=c=-\frac12$).
Assuming $|a|=|b|=|c|$, we have that for an equilateral triangle
$a=re^{i\theta}$
$b=re^{i\theta+i\frac {2\pi} 3}$
$c=re^{i\theta+i\frac {4\pi} 3}$
therefore
$$a+b+c= re^{i\theta}(1+e^{i\frac {2\pi} 3}+e^{i\frac {4\pi} 3})=re^{i\theta}\cdot 0=0$$
For the other direction, assume wlog (up to scaling and rotation) that $a=1$, so that $b+c=-1$. Since $b+c$ is real, $\operatorname{Im} b=-\operatorname{Im} c$, and $|b|=|c|$ then forces $\operatorname{Re} b=\operatorname{Re} c=-\frac12$; that is
$a=1$
$b=-\frac12+iy$
$c=-\frac12-iy$
and since $|b|=|c|=1$ we obtain
$$\sqrt{\frac14+y^2}=1 \implies y^2=\frac34 \implies y=\pm\frac{\sqrt 3}2,$$ and either sign gives the same equilateral triangle. |
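A quick numerical confirmation of the key computation (my addition):

```python
import cmath

r, theta = 2.0, 0.4  # arbitrary radius and rotation
a, b, c = (r * cmath.exp(1j * (theta + 2 * cmath.pi * k / 3)) for k in range(3))

print(abs(a + b + c))                        # ~0, as the proof shows
print([abs(a - b), abs(b - c), abs(c - a)])  # equal side lengths: equilateral
```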
Isomorphisms of the Lorentz group and algebra | The (vector space) tensor product of two Lie algebras isn't naturally a Lie algebra; the obvious choice should fail to satisfy the Jacobi identity. The sources you've been reading probably mean the direct product $\mathfrak{su}(2) \times \mathfrak{su}(2)$, which is just the direct sum.
(Also, the statement is slightly incorrect. $\mathfrak{su}(2)$ should be replaced with its complexified form $\mathfrak{su}(2) \otimes \mathbb{C} \cong \mathfrak{sl}_2(\mathbb{C})$.)
Lie groups also don't have a notion of tensor product. The correct construction in the Lie group case is the direct product again, although the correct statement is complicated; there isn't obviously a notion of the complexification of a Lie group in the same way that there is a notion of the complexification of a Lie algebra. |
Linear span of polynomials of degree at most 8 | Here is one approach. Your vector space $\mathcal{P}_8$ maps to $\mathbb{R}^9$ by mapping the polynomial $a_0 + a_1x + \ldots a_8x^8 \mapsto (a_0, a_1, \ldots, a_8)^T$, and then you need to map the basis $\mathcal{F}$ and needed vectors, and do the usual linear algebra approach to this problem.
Exactly the same constants in the linear combination produce the same results in both $\mathcal{P}_8$ and $\mathbb{R}^9$. |
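A hedged sketch of this approach (my addition; the family below is a hypothetical stand-in for whatever $\mathcal{F}$ the exercise gives):

```python
import numpy as np

# Each polynomial a_0 + a_1 x + ... + a_8 x^8 becomes a vector in R^9.
F = np.zeros((9, 3))     # hypothetical family {1, x, x^2 + x^3}, column-wise
F[0, 0] = 1              # 1
F[1, 1] = 1              # x
F[2, 2] = F[3, 2] = 1    # x^2 + x^3

p = np.zeros(9)
p[0], p[1] = 2, -5       # test polynomial p(x) = 2 - 5x

# p lies in span(F) iff the least-squares solution reproduces p exactly
coeffs, *_ = np.linalg.lstsq(F, p, rcond=None)
print(np.allclose(F @ coeffs, p))  # True here
```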
The significance of partial derivative notation | Mainly historical; see Earliest Uses of Symbols of Calculus : Partial Derivative :
The "curly $\mathrm{d}$" was used in 1770 by Antoine-Nicolas Caritat, Marquis de Condorcet (1743-1794) in "Memoire sur les Equations aux différence partielles", which was published in Histoire de L'Academie Royale des Sciences, pp. 151-178, Annee M. DCCLXXIII (1773). On page 152, Condorcet says:
Dans toute la suite de ce Memoire, $\mathrm{d} z$ & $\partial z$ désigneront ou deux differences partielles de $z$, dont une par rapport a $x$, l'autre par rapport a $y$, ou bien $\mathrm{d} z$ sera une différentielle totale, & $\partial z$ une difference partielle. [Throughout this paper, both $\mathrm{d} z$ & $\partial z$ will either denote two partial differences of $z$, where one of them is with respect to $x$, and the other, with respect to $y$, or $\mathrm{d} z$ and $\partial z$ will be employed as symbols of total differential, and of partial difference, respectively.]
However, the "curly $\mathrm{d}$" was first used in the form $\dfrac{\partial u}{\partial x}$ by Adrien Marie Legendre in 1786 in his "Memoire sur la manière de distinguer les maxima des minima dans le Calcul des Variations", Histoire de l'Academie Royale des Sciences, Annee M. DCCLXXXVI (1786), pp. 7-37, Paris, M. DCCXXXVIII (1788).
On page 8, it reads:
Pour éviter toute ambiguïté, je représenterai par $\dfrac{\partial u}{\partial x}$ le coefficient de $x$ dans la différence de $u$, & par $\dfrac{\mathrm{d} u}{\mathrm{d} x}$ la différence complète de $u$ divisée par $\mathrm{d} x$. [To avoid any ambiguity, I will represent by $\frac{\partial u}{\partial x}$ the coefficient of $x$ in the difference of $u$, and by $\frac{\mathrm{d} u}{\mathrm{d} x}$ the complete difference of $u$ divided by $\mathrm{d} x$.]
Legendre abandoned the symbol and it was re-introduced by Carl Gustav Jacob Jacobi in 1841. |
Probability of getting an Ace or a Spade (in any order) when cards are drawn with replacement? | The question looks fuzzy, but my take is as follows: Three cases.
$P_1=\frac{1}{52}\times\frac{16}{52}$ - first card is the ace of spades; the second can then be any ace or spade ($13$ spades $+$ $3$ other aces $=16$ cards).
$P_2=\frac{3}{52}\times\frac{13}{52}$ - first card is one of the $3$ other aces; the second must be one of the $13$ spades.
$P_3=\frac{12}{52}\times\frac{4}{52}$ - first card is one of the $12$ other (non-ace) spades; the second must be one of the $4$ aces.
Total $P_1+P_2+P_3=\frac{103}{2704}$ |
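A Monte Carlo check of $103/2704 \approx 0.0381$ (my addition), taking the event to be "one of the two cards is an ace and the other is a spade", with the ace of spades allowed to play either role:

```python
import random

N = 1_000_000
hits = 0
for _ in range(N):
    s1, r1 = random.randrange(4), random.randrange(13)  # suit 0 = spades, rank 0 = ace
    s2, r2 = random.randrange(4), random.randrange(13)
    if (r1 == 0 and s2 == 0) or (s1 == 0 and r2 == 0):
        hits += 1

print(hits / N, 103 / 2704)  # both ~0.0381
```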
Some identity with a complex function | Proof is helped by the identity $|a+b|^2-|a-b|^2 = 4\operatorname{Re}(a\bar b)$:
$$\begin{split}
I_f & = \frac14 |f_x-if_y|^2 - \frac14 |f_x+if_y|^2 \\
& = \operatorname{Re}( i f_x \overline{f_y}) \\
&= \operatorname{Re}\left(i(u_x+iv_x)\overline{(u_y+iv_y)}\right) \\
& = u_xv_y-u_y v_x
\end{split}$$ |
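A numerical sanity check of the identity (my addition) with $f(z)=z^2$, i.e. $u=x^2-y^2$, $v=2xy$, whose partial derivatives are known exactly:

```python
x, y = 1.3, -0.7          # sample point
ux, uy = 2 * x, -2 * y    # partials of u = x^2 - y^2
vx, vy = 2 * y, 2 * x     # partials of v = 2xy

fx = complex(ux, vx)      # f_x = u_x + i v_x
fy = complex(uy, vy)      # f_y = u_y + i v_y

lhs = (abs(fx - 1j * fy) ** 2 - abs(fx + 1j * fy) ** 2) / 4
rhs = ux * vy - uy * vx
print(lhs, rhs)           # both 8.72
```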
Misconception of infinite prime numbers proof by contradiction? | The page is wrong in attributing this proof by contradiction to Euclid.
However, let's look at that proof rather than Euclid's.
The conclusion that $p_1 \cdots p_n + 1$ is prime was reached under the initial assumption, which was ultimately proved false. Therefore, it was not validly proved.
Euclid's actual proof was not by contradiction, although a small part of his argument was. He did not assume that $p_1,\ldots,p_n$ are the only primes that exist, nor that they are the smallest primes. They could, for example, be $5, 23, 97$, and $191$. He proved that either $p_1\cdots p_n+1$ is prime or it is divisible only by primes not among the original list $p_1,\ldots,p_n$. For example $(5\times23\times97\times191)+1$ either is prime or is divisible only by other primes not included among $5,23,97,191$. Therefore the original list can always be extended to a larger list of primes.
(The part that Euclid did prove by contradiction was that two consecutive numbers cannot share a prime factor in common.)
Dirichlet's posthumous book on number theory published in 1861 falsely attributed the proof by contradiction to Euclid, and many eminent mathematicians have stupidly or ignorantly made the same mistake since then. Catherine Woodgold and I published a joint paper demolishing the error and showing that Euclid's actual proof is better than this proof by contradiction falsely attributed to him. It's in the Fall 2009 volume of The Mathematical Intelligencer.
BTW, it is an error to write "infinite prime numbers" if you mean "infinitely many prime numbers". The phrase "infinite prime numbers" means "numbers, each one of which, by itself, is infinite." Thus if $A$ is an infinite prime number (what is that?) and so is $B$, then $A$ and $B$ are infinite prime numbers, but they are not infinitely many prime numbers, since there are only two of them.
PS: I quote from our paper:
Only the premise that a set contains all prime numbers could make one conclude that if a number is not divisible by any primes in that set, then it is not divisible by any primes.
Only the statement that $p_1\cdots p_n + 1$ is not divisible by any primes makes anyone conclude that that number ‘‘is therefore itself prime’’, to quote no less a number-theorist than G. H. Hardy in [55], where he actually attributed that conclusion to Euclid! (Euclid’s statement ‘‘Certainly [that number] is prime, or not’’, to be examined at greater length below, clearly shows that Euclid’s reasoning did not follow that path.)
The mistake of thinking that $p_1\cdots p_n + 1$ has been proved to be prime is made all the more tempting by the very obvious fact that that would entail the result to be proved.
The proposal to prove the twin prime conjecture by saying that $p_1\cdots p_n + 1$ and $p_1\cdots p_n - 1$ are both prime came from a student with whom one of us (M. Hardy) spoke, who a few months later entered a graduate program in mathematics.
The seemingly trivial rearrangement of the proof into a reductio ad absurdum has thus led us quickly into territory that the straightforward proof could not have hinted at, wherein lie substantial mathematical errors that would not have been approached otherwise.
In any proof by contradiction, once the contradiction is reached, one can wonder which of the statements asserted to have been proved along the way can really be proved in just the manner given (since the argument supporting them does not rely on the initial assumption later proved false), which ones are correct but must be proved in some other way (since the argument supporting them does rely on the initial assumption), and which ones are false. It is easy to neglect that task. One's consequent ignorance of the answers to those questions can lead to confusion: after all, when one remembers reading a proof of a proposition, might one not think the proposition has been proved and is therefore known to be true? G. H. Hardy probably was aware that because the conclusion that $p_1\cdots p_n+1$ ‘‘is therefore itself prime’’ was contingent on a hypothesis later proved false, it could not be taken to be proved. But he did not say that explicitly. It seems hard to justify a similar confidence that all of his readers avoided the error into which he inadvertently invited them. Euclid's proof as presented by Øystein Ore [above] spares us that task by limiting the use of proof by contradiction to the narrowest possible scope.
To this I would add that making it into a proof by contradiction adds an extra complication that serves no purpose and only makes the proof appear more complicated than it really is. |
How do you write "for every three integers $a,b,c$, exactly two of the integers $ab$, $ac$, and $bc$ cannot be odd" in predicate logic? | I interpret "exactly two cannot be odd" as "it cannot be the case that exactly two are odd". In that case, we can just start with "exactly two are odd" and negate that.
We can write "exactly two are odd" as the $\lor$ of three cases, depending on which of the two products are odd. For example, $O(ab) \land O(ac) \land \neg O(bc)$ would be one of the cases. |
Prove there do not exists such distribution. | Let $\eta\in\mathcal{D}(\mathbb{R})$ be such that $\eta(x)=1$ for $x\in(0,1)$. Let $\varphi\in\mathcal{D}(\mathbb{R})$ be defined by $\varphi(x)=\eta(x)\,e^x$. If $v\in\mathcal{D'}(\mathbb{R})$ is an extension of $u$, what would $\langle v,\varphi\rangle$ be? |
Phase portraits for a degenerate critical point | Each example will contain the matrix for $x' = A x, \delta$ and $\tau$, solution for $x_1(t), x_2(t)$, statement of stability and phase portrait.
Cases for $\det A \ne 0$:
Saddle Point
$$A =
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix} \implies \delta = -1 \lt 0 \implies ~\mbox{Unstable Saddle}$$
The solution for the system is $x_1(t) = c_1e^t, x_2(t) = c_2 e^{-t}$.
Stable Node
$$A =
\begin{bmatrix}
-1 & 0 \\
0 & -4 \\
\end{bmatrix} \implies \tau = -5 \lt 0, \delta = 4, \tau^2 - 4 \delta \ge 0 \implies ~\mbox{Stable Node}$$
The solution for the system is $x_1(t) = c_1e^{-t}, x_2(t) = c_2 e^{-4t}$.
Unstable Node
$$A =
\begin{bmatrix}
1 & 0 \\
0 & 4 \\
\end{bmatrix} \implies \tau = 5 \gt 0, \delta = 4, \tau^2 - 4 \delta \ge 0 \implies ~\mbox{Unstable Node}$$
The solution for the system is $x_1(t) = c_1e^{t}, x_2(t) = c_2 e^{4t}$.
Unstable Focus
$$A =
\begin{bmatrix}
1 & 2 \\
-2 & 1 \\
\end{bmatrix} \implies \tau = 2 \gt 0, \tau \ne 0, \delta = 5, \tau^2 - 4 \delta \lt 0 \implies ~\mbox{Unstable Focus}$$
The solution for the system is $x_1(t) = e^t(c_1 \cos 2t + c_2 \sin 2t), x_2(t) = e^t(-c_1 \sin 2t + c_2 \cos 2t)$.
Stable Focus
$$A =
\begin{bmatrix}
-1 & 2 \\
-2 & -1 \\
\end{bmatrix} \implies \tau = -2 \lt 0, \tau \ne 0, \delta = 5, \tau^2 - 4 \delta \lt 0 \implies ~\mbox{Stable Focus}$$
The solution for the system is $x_1(t) = e^{-t}(c_1 \cos 2t + c_2 \sin 2t), x_2(t) = e^{-t}(-c_1 \sin 2t + c_2 \cos 2t)$.
Center
$$A =
\begin{bmatrix}
0 & 1 \\
-1 & 0 \\
\end{bmatrix} \implies \tau = 0, \delta = 1 \gt 0 \implies ~\mbox{Marginally/Neutrally Stable Center}$$
The solution for the system is: $x_1(t) = c_1 \cos t + c_2 \sin t, x_2(t) = -c_1 \sin t + c_2 \cos t$
Cases for $\det A = 0$:
Case $\lambda \gt 0$ (take $\lambda = 1$):
$$A =
\begin{bmatrix}
1 & 0 \\
0 & 0 \\
\end{bmatrix}$$
The solution for the system is (you can plot solution curves) $x_1(t) = c_1 e^t, x_2(t) = c_2$. The phase portrait is a line of equilibria along the $x_2$-axis, with horizontal trajectories moving away from it.
Case $\lambda \lt 0$ (take $\lambda = -1$): Note: we could easily collapse this case with the previous one; just look at the direction arrows (that is why the author says eight).
$$A =
\begin{bmatrix}
-1 & 0 \\
0 & 0 \\
\end{bmatrix}$$
The solution for the system is (you can plot solution curves) $x_1(t) = c_1 e^{-t}, x_2(t) = c_2$. The phase portrait is a line of equilibria along the $x_2$-axis, with horizontal trajectories moving toward it.
Case Linear Solution
$$A =
\begin{bmatrix}
0 & 1 \\
0 & 0 \\
\end{bmatrix}$$
The solution for the system is (you can plot solution curves) $x_1(t) = c_1 + c_2 t, x_2(t) = c_2$. The phase portrait is a line of equilibria along the $x_1$-axis; the other trajectories are horizontal lines, moving right for $x_2 > 0$ and left for $x_2 < 0$.
Case Degenerate: All Zero Matrix:
$$A =
\begin{bmatrix}
0 & 0 \\
0 & 0 \\
\end{bmatrix}$$
The solution for the system is (you can plot solution curves) $x_1(t) = c_1, x_2(t) = c_2$.
There is no phase portrait to draw for this case because every point is an equilibrium (a constant solution), but the system is neutrally stable.
Aside:
If you want to find eigenvalues and eigenvectors, some of the cases require a Jordan form and some do not, because for the latter you can find two linearly independent eigenvectors. For example:
$$A =
\begin{bmatrix}
\lambda & 0 \\
0 & 0 \\
\end{bmatrix} \implies \lambda_1 = 0, \lambda_2 = \lambda, v_1 = (0, 1), v_2 = (1, 0)$$
$$A =
\begin{bmatrix}
0 & 1 \\
0 & 0 \\
\end{bmatrix} \implies \lambda_{1,2} = 0, v_1 = (1, 0), v_2 = (0,1)$$
In the second example, $v_2$ is a generalized eigenvector, and the matrix has the Jordan form (trivially, with $P = I$, since $A$ is already a single Jordan block):
$$A = PJP^{-1} = \begin{bmatrix}
1 & 0 \\
0 & 1\\
\end{bmatrix}\begin{bmatrix}
0 & 1 \\
0 & 0 \\
\end{bmatrix}\begin{bmatrix}
1 & 0 \\
0 & 1 \\
\end{bmatrix}$$ |
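All of these portraits can be drawn with a few lines of matplotlib (my addition); substituting any matrix from the list above reproduces the corresponding picture:

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[1.0, 2.0],
              [-2.0, 1.0]])  # unstable focus; swap in any matrix from above

x1, x2 = np.meshgrid(np.linspace(-2, 2, 25), np.linspace(-2, 2, 25))
u = A[0, 0] * x1 + A[0, 1] * x2   # x1' component of the vector field
v = A[1, 0] * x1 + A[1, 1] * x2   # x2' component

plt.streamplot(x1, x2, u, v, density=1.2)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title("Phase portrait of $x' = Ax$")
plt.show()
```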
How to solve the logarithmic equation $\log_2 (1 + \sqrt{x}) = \log_3 x$ analytically | Substitute $1 + \sqrt x=z$ so we have $x=(z-1)^2$
The equation becomes
$\log_{2}{z}=\log_3(z-1)^2$
$\log_{2}{z}=2\log_3(z-1)$
Converting to natural logarithms:
$\dfrac{\log 3 \log z}{2 \log 2}=\log (z-1)$
$\dfrac{\log 3}{\log 4}=\dfrac{\log (z-1)}{\log z}$
$z=4$ is the unique solution of this equation since $f(z)=\dfrac{\log (z-1)}{\log z}$ is increasing for any $z>1$.
Indeed $f'(z)=\dfrac{-z \log (z-1)+\log (z-1)+z \log z}{(z-1)\, z \log ^2 z}$ is positive when the numerator $-z \log (z-1)+\log (z-1)+z \log z>0$, that means
$z> \dfrac{\log (z-1)}{\log (z-1)-\log (z)}$ which is verified for any $z>1$ as the RHS is less than $1$
Therefore
$z=4\to x=(z-1)^2=(4-1)^2\to x=9$
which is the only solution of the equation
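A one-line numeric confirmation (my addition):

```python
import math

x = 9
print(math.log2(1 + math.sqrt(x)), math.log(x, 3))  # both equal 2.0
```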
Hope this helps |
Degree of a field extension of a minimal polynomial | First $\mathbf Q[α]=\{p(α)\mid \deg p<n\}$.
Indeed,we can divide any polynomial $p(x)$ by $f(x)$ by Euclidean division:
\begin{align}
p(x)&=q(x)f(x)+r(x),\quad& r&=0\;\text{or}\;\deg r<\deg f=n.
\end{align}
Thus $p(α)=r(α)$, and $\mathbf Q[α]$ is a finite-dimensional $\bf Q$-vector space, of dimension $n$ since $f$ is the minimal polynomial of $α$.
Next, note that $\mathbf Q(α)=\mathbf Q[α]$.
Indeed, $\mathbf Q[α]$ is a finite-dimensional $\mathbf Q$-vector space, so multiplication by a non-zero element of $\mathbf Q[α]$, which is injective, is also surjective; in particular $1$ is attained. In other words, this non-zero element has an inverse in $\mathbf Q[α]$, which is therefore a field. |
Why is the inverse of an orthogonal matrix its transpose? | Let $C_i$ be the $i^{\text{th}}$ column of the orthogonal matrix $O$; then we have
$$\langle C_i,C_j\rangle=\delta_{ij}$$
and we have
$$O^T=(C_1\;\cdots\; C_n)^T=(C_1^T\;\cdots\; C_n^T)$$
so we get
$$O^TO=(\langle C_i,C_j\rangle)_{1\le i,j\le n}=I_n$$ |
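A numerical illustration (my addition): build a random orthogonal matrix via QR factorization and check $O^TO=I_n$ and $O^{-1}=O^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
O, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # Q-factor is orthogonal

assert np.allclose(O.T @ O, np.eye(4))     # columns are orthonormal
assert np.allclose(np.linalg.inv(O), O.T)  # inverse equals transpose
print("verified: the inverse of an orthogonal matrix is its transpose")
```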