Probability of picking the only two same cards out of 25 cards if I can only pick twice | Your first steps are correct. Let $A$ be the event in which we pick one of the two cards in the first turn, and $B$ the event in which we pick the other card in the second turn. Using the definition of conditional probability, we have:
$$P(A, B) = P(A) P(B | A) = \frac{2}{25} \frac{1}{24} = \frac{1}{300}$$
The answer by cognitive uses another approach, but unfortunately it is wrong. We can instead use combinatorics, by considering the number of ways we can draw two cards out of 25. Since each combination is equally likely to be drawn, the probability of ending up with the two matching cards equals:
$$P(A, B) = \frac{1}{25 \choose 2} = \frac{1}{\frac{25!}{23!2!}} = \frac{1}{\frac{25 \cdot 24}{2}} = \frac{1}{300}$$
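As a sanity check, here is a small Monte Carlo sketch (my own illustration, not part of either argument above):

import random

def estimate(trials=200000):
    # Deck of 25 cards; cards 0 and 1 are the unique matching pair.
    deck = list(range(25))
    hits = 0
    for _ in range(trials):
        a, b = random.sample(deck, 2)   # draw two cards without replacement
        if {a, b} == {0, 1}:
            hits += 1
    return hits / trials

print(estimate(), 1 / 300)   # the estimate should be close to 0.00333

The empirical frequency should hover around $1/300 \approx 0.0033$. |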
Deriving Bachet's duplication formula | Your reasoning is correct. Substitute $Y = \dfrac{3x^2}{2y} X - \dfrac{x^2 - 2c}{2y}$ into $Y^2 = X^3 + c$ to obtain a cubic for $X$. Just don't brute force it. The key point is to observe that two of its roots are known, and equal to $x$ (do you see why?). Use a sum of roots theorem to get the third one.
BTW, to use the sum of roots, you don't need the entire cubic, but just the coefficient at $X^2$, which is $\dfrac{9x^4}{4y^2}$. Now the root you are interested in is indeed $$\dfrac{9x^4}{4y^2} - 2x = \dfrac {9x^4 - 8xy^2}{4y^2} = \dfrac{9x^4 - 8x(x^3 + c)}{4y^2} = \dfrac{x^4 - 8cx}{4y^2}$$
Proceed for $y$. |
Arithmetic Sequences in Finite Set | There exists a set $S$ with the properties you want for every value of $l$. Here are a few examples for the smaller values of $l$:
For $l=2$: $|S \setminus \bar{S}|=1 < |\bar{S}|=2$
$$ S= \{1,2,4\}$$
$$ \bar{S}= \{2,4\}$$
For $l=3$: $|S \setminus \bar{S}|=3 < |\bar{S}|=4$
$$ S= \{1, 2, 3, 5, 6, 8, 9\}$$
$$ \bar{S}= \{3, 5, 8, 9\}$$
For $l=4$: $|S \setminus \bar{S}|=6 < |\bar{S}|=7$
$$ S= \{1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 13, 14, 16\}$$
$$ \bar{S}= \{4, 8, 9, 12, 13, 14, 16\}$$
For $l=5$: $|S \setminus \bar{S}|=16 < |\bar{S}|=17$
$$ S= \{1, 2, 3, 4, 5, 7, 8, 9, 10, 12, 13, 14, 15, 17, 18, 19, 20, 22, 23, 24, 25, 26, 33, 34, 35, 36, 37, 39, 43, 44, 45, 46, 47\}$$
$$ \bar{S}= \{5, 9, 13, 17, 22, 23, 24, 25, 26, 33, 34, 37, 43, 44, 45, 46, 47\}$$
For $l=6$: $|S \setminus \bar{S}|=15 < |\bar{S}|=16$
$$ S= \{1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20, 22, 23, 24, 25, 26, 27, 29, 30, 31, 32, 33, 34, 36\}$$
$$ \bar{S}= \{6, 12, 13, 18, 19, 20, 24, 25, 26, 27, 30, 31, 32, 33, 34, 36\}$$
For $l=7$: $|S \setminus \bar{S}|=78 < |\bar{S}|=79$ , $\max(S)=239$
For $l=8$: $|S \setminus \bar{S}|=109 < |\bar{S}|=110$ , $\max(S)=335$
For $l=9$: $|S \setminus \bar{S}|=213 < |\bar{S}|=214$ , $\max(S)=633$
For $l=10$: $|S \setminus \bar{S}|=45 < |\bar{S}|=46$ , $\max(S)=100$
As the sets become rather large I left out the last few examples. These are the first sets I encountered with the following procedure.
We start for the chosen value of $l$ with the set $S=\{1,2,\dots,l-1\}$ which does not contain an arithmetic sequence of length $l$. (Optional $\bar{S}=\{l-1\}$). We also use an index $n=l$. Now we can use the following algorithm:
1. Add $n$ to the set $S$.
2. Check whether there is any arithmetic sequence of length $l$ in $S$ that ends in $n$. If that is the case, we remove $n$ from $S$; if not, we have a new set that does not contain any arithmetic sequences of length $l$ or longer.
3. Optional: If $n$ was accepted, we can check whether there is any arithmetic sequence of length $l-1$ in $S$ that ends in $n$. If that is so, we add $n$ to $\bar{S}$.
4. Increase $n$ by one and go back to step 1.
After that we simply repeat until the number of elements in $\bar{S}$ exceeds half the size of $S$, i.e. until $|S \setminus \bar{S}| < |\bar{S}|$. A Python transcription of these steps is sketched below.
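Here is a literal sketch of the procedure (my own transcription of steps 1-4; I read "arithmetic sequence of length $m$" as one with $m$ terms, which may differ from the convention used to generate the example sets above):

def ends_ap(S, n, m):
    # True if some m-term arithmetic progression inside S ends at n.
    if m <= 1:
        return True
    d = 1
    while n - (m - 1) * d >= 1:
        if all(n - k * d in S for k in range(1, m)):
            return True
        d += 1
    return False

def search(l, limit=10000):
    S = set(range(1, l))            # {1, ..., l-1}: contains no l-term AP
    Sbar = {l - 1}
    for n in range(l, limit):
        S.add(n)                    # step 1
        if ends_ap(S, n, l):        # step 2: adding n created an l-term AP
            S.discard(n)
        elif ends_ap(S, n, l - 1):  # step 3: n ends an (l-1)-term AP
            Sbar.add(n)
        if len(S) - len(Sbar) < len(Sbar):
            return sorted(S), sorted(Sbar)
    return None

print(search(3))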
I have not tried to prove the existence of such a set for any value of $l$, but I see no obvious reason why they would cease to exist for larger values of $l$. It should also be noted that there might very well be sets with the required properties that are smaller in size and of which the largest element in $S$ is less than the algorithm above gives.
It is, however, possible to construct a set for any value of $l$ in a systematic fashion. To this end we start with the following sets:
$$S_0 = \{1\}$$
$$\bar{S}_0 = \emptyset$$
and apply the following iterative scheme:
$$S_{n+1} = \cup_{k=0}^{l-1} \left[ S_n + k \Delta_n \right]$$
$$\bar{S}_{n+1} = \cup_{k=0}^{l-2} \left[ \bar{S}_n + k \Delta_n \right] \cup \left[S_n + (l-1) \Delta_n\right]$$
where $\Delta_n$ is not unique but should be large enough to avoid creating any additional arithmetic sequences. It is sufficient to choose $\Delta_n \geq 2 \max(S_{n})-1$.
We find that
$$|S_n| = l^n$$
$$|\bar{S}_n| = l^n - (l-1)^n$$
and if we choose
$$\Delta_n = (2l-1)^{n}$$
the largest element in $S_n$ is given by
$$\max(S_n) = \frac{(2 l-1)^n +1}{2}$$.
From this it follows that for $|S_n\setminus \bar{S}_n| < |\bar{S}_n|$ we require $(l-1)^n < l^n - (l-1)^n$, which means
$$n>- \frac{\ln 2}{\ln(1- \frac{1}{l})}$$.
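A short sketch of this scheme in code (my own illustration; the assertions check the three counting formulas above, with $\Delta_n = (2l-1)^n$):

def build(l, steps):
    # Iterate S_{n+1} = union over k of (S_n + k*Delta_n), and similarly
    # for S-bar, with Delta_n = (2l-1)^n.
    S, Sbar = {1}, set()
    for n in range(steps):
        delta = (2 * l - 1) ** n
        S_new = {s + k * delta for k in range(l) for s in S}
        Sbar_new = ({s + k * delta for k in range(l - 1) for s in Sbar}
                    | {s + (l - 1) * delta for s in S})
        S, Sbar = S_new, Sbar_new
    assert len(S) == l ** steps
    assert len(Sbar) == l ** steps - (l - 1) ** steps
    assert max(S) == ((2 * l - 1) ** steps + 1) // 2
    return sorted(S), sorted(Sbar)

print(build(3, 2)[0])   # [1, 2, 3, 6, 7, 8, 11, 12, 13] -- the l=3 set below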
For the case $l=3$ we would obtain the set
$$S=\{1,2,3,~~6,7,8,~~11,12,13\}$$
and for $l=4$ we get
$$S=\left\{\begin{array}{llll}
1,2,3,4, & 8,9,10,11, & 15,16,17,18, & 22,23,24,25,\\
50,51,52,53, & 57,58,59,60,& 64,65,66,67, & 71,72,73,74,\\
99,100,101,102,&106,107,108,109,&113,114,115,116,& 120,121,122,123,\\
148,149,150,151,&155,156,157,158,&162,163,164,165,& 169,170,171,172
\end{array} \right\}$$.
And finally, we get that
$$\frac{|\bar{S}_n|}{|S_n|} = 1 - \left( 1 - \frac{1}{l}\right)^n$$
and hence
$$\lim_{n\rightarrow \infty} \frac{|\bar{S}_n|}{|S_n|} = 1.$$ |
How to achieve a perfect square | Another solution, hopefully faster:
Find the greatest integer $m$ such that
$m^2\mid n$.
You only have to test $m\le\sqrt n$. Note that the finite differences between consecutive squares are just the sequence of odd integers $>1$, so this should be comparatively fast.
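A minimal sketch of this search in Python (the function name and the example $n=360$ are my own):

def greatest_square_root_divisor(n):
    # Largest m with m*m dividing n; successive squares are generated
    # incrementally via the odd-number differences mentioned above.
    best, m, sq, odd = 1, 1, 1, 1
    while sq <= n:
        if n % sq == 0:
            best = m
        m += 1
        odd += 2
        sq += odd          # m^2 = (m-1)^2 + (2m - 1)
    return best

n = 360
m = greatest_square_root_divisor(n)   # 6, since 6^2 = 36 divides 360
print(m, n // (m * m))                # k = n / m^2 = 10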
Then set $\;k=\dfrac n{m^2}$. |
if there exists a sequence $(a_n)^{\infty}_{n=0}$, consisting entirely of elements in $X$, which converges to $x$. | $x$ is an adherent point of $X$, if it is in the closure of $X.$
Every open ball containing $x$ contains a point in $X.$
Prove it forward.
Suppose there is a sequence $\{x_n\}$ that converges to $x$ such that every $x_n \in X$.
$\forall \epsilon>0,\exists N>0$ such that $n>N \implies |x_n-x| < \epsilon$
For any neighborhood of $x$, there is an $x_n\in X$ that is also in that neighborhood.
$x$ is an adherent point of $X.$
Now prove it the other way.
Suppose $x$ is an adherent point of $X.$
Then every neighborhood of $x$ meets $X$: for each $\epsilon > 0$ there exists $y \in X$ such that $|x-y|<\epsilon$.
Taking $\epsilon = \frac1n$, choose $y_n \in X$ with $|x-y_n|<\frac1n$; this gives a sequence of points of $X$ that converges to $x.$
Note: $x$ could be an isolated point of $X$, in which case it would still be an adherent point, and there would exist a sequence $\{x_n\}$ such that $n>N \implies x_n = x$. |
Dividing $8$ Children into $4$ teams of $2$ players each | Here, the order of the teams themselves does not matter. If we choose, say, these teams:
$\{A,B\}, \{C,D\}, \{E,F\}, \{G,H\}$
that's exactly the same set of teams as these:
$\{C,D\}, \{A,B\}, \{G,H\}, \{E,F\}$
Because the order of the teams themselves does not matter, we must divide by 4! = 24, the number of different orders we can put the four teams in, because all 24 different orders are in fact the same set of teams.
The answer is thus $$ \frac{\binom{8}{2} \times \binom{6}{2} \times \binom{4}{2} \times \binom{2}{2}}{4!}$$
$$ = \frac{2520}{4!}$$
$$ = 105 $$ |
Simultaneous Equations.... | $4s=-8$
$s=-2$
Substitute this value into equation (1) or (2):
$t=10$ |
Help with difference/recursion equation change of variable | Do you have any conditions on $b$ or $y(0)$, say for example that they are positive or at least non-zero? Assuming that $y(k) \neq 0$ (which would be satisfied if $y(0),b > 0$ for instance), then try $z(k) := \frac{b}{y(k)}$. The right side is then rewritten as $(1+z(k))^{-1}$ and the left side is $\frac{b}{z(k+1)}$, so rearranging gives a linear difference equation. |
(solution verification) the series $\sum z^{n!}$ has the unit circle as a natural boundary | The solution is correct, but you could save some labor.
1. When $z$ is on the radius $(0,1)$, for every $N$ we have $$ f(z)\ge \sum_{n=0}^N z^{n!}\to N+1,\quad z\to 1^-$$ hence $f(z)\to +\infty$ as $z\to 1^{-}$.
2. For every rational number $a/b$, the function $f(e^{2\pi i a/b}z)-f(z)$ is a polynomial, because $(e^{2\pi i a/b})^{n!} =1$ when $n\ge b$.
From 1 and 2, $|f(e^{2\pi i a/b}z)|\to \infty$ as $z\to 1^-$, which is the same as saying that $|f|\to\infty$ along the radius from $0$ to $e^{2\pi ia/b}$. |
Converting map coordinates of a rotated grid. | Rotation matrices is basicaly what you want.
For rotating a vector
$\begin{pmatrix} x \\ y \end{pmatrix} $
by the angle $\alpha$, just multiply it with the matrix
$\begin{pmatrix} \cos(\alpha) &-\sin(\alpha) \\ \sin(\alpha) &\cos(\alpha) \end{pmatrix}$.
\begin{align}
\begin{pmatrix} x_{new} \\ y_{new} \end{pmatrix} &= \begin{pmatrix} \cos(\alpha) &-\sin(\alpha) \\ \sin(\alpha) &\cos(\alpha) \end{pmatrix} \cdot \begin{pmatrix} x_{old} \\ y_{old} \end{pmatrix} \\
&= \begin{pmatrix} x_{old} \cdot \cos(\alpha) - y_{old} \cdot \sin(\alpha) \\ x_{old} \cdot \sin(\alpha) + y_{old} \cdot \cos(\alpha) \end{pmatrix}
\end{align}
In case you are not familiar with vectors, this means:
\begin{align}
x_{new} &= x_{old} \cdot \cos(\alpha) - y_{old} \cdot \sin(\alpha) \\
y_{new} &= x_{old} \cdot \sin(\alpha) + y_{old} \cdot \cos(\alpha)
\end{align}
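In code this looks as follows (a minimal sketch; the sample point and angle are my own):

import math

def rotate(x, y, alpha):
    # Rotate (x, y) counterclockwise by alpha radians about the origin.
    c, s = math.cos(alpha), math.sin(alpha)
    return x * c - y * s, x * s + y * c

print(rotate(1.0, 0.0, math.pi / 2))   # approximately (0.0, 1.0)

To convert coordinates back from the rotated grid, rotate by $-\alpha$. |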
Find $\sigma_1$ that minimizes $(e^{x^2/\sigma_1^2}e^{x^2/\sigma_2^2}-e^{x^2/\sigma_3^2})^2$ | Hint.
If
$$
e^{x^2/\sigma_1^2}e^{x^2/\sigma_2^2}=e^{x^2/\sigma_3^2}
$$
then
$$
e^{x^2/\sigma_1^2} = e^{x^2/\sigma_3^2-x^2/\sigma_2^2}
$$
or
$$
\frac{1}{\sigma_1^2} = \frac{1}{\sigma_3^2}-\frac{1}{\sigma_2^2}
$$
provided, of course, that $\sigma_3 < \sigma_2$. |
How to compute intersection multiplicity? | Your second equality is okay, because you even have $\mathcal{O}_{\mathbb{P}^2_{k},P}/(X^2Z-Y^3,X) \cong \mathcal{O}_{\mathbb{A}^2_{k},(0,0)}/(x^2-y^3,x)$.
The moral is that the local ring at a point $P$ is essentially a "local" object, i.e. you only have to look at an affine neighborhood (or "local coordinate" as Hulek puts it) of $P$ to know it (in the above situation, we are passing to the affine neighborhood $\{[a:b:c] \in \mathbb{P}^2_k, c \neq 0\} \cong \mathbb{A}^{2}$ of $P=[0:0:1]$).
Your last equality can be deduced from the following:
since $x^2-y^3$ and $x$ intersect only at the origin $(0,0)$ you have
$$
\mathcal{O}_{\mathbb{A}^2_k,(0,0)}/(x^2-y^3,x) \cong k[x,y]/(x^2-y^3,x)
$$
(cf. Fulton Algebraic Curves, Prop.6 in Section 2.9)
Now, as you already noted, $k[x,y]/(x^2-y^3,x)=k[x,y]/(x,y^3) \cong k[y]/(y^3)$ and the latter has $1,y,y^2$ as a basis over $k$. |
Is supremum over a compact domain of separately continuous function continuous? | Here is a counterexample.
Take $f\colon [0,1]\times[0,1]\to \mathbb R$ defined by:
$$
f(x,y) = \frac{2xy}{x^2+y^2},\quad f(0,0)=0.
$$
Notice that $2xy = x^2 + y^2 - (x-y)^2 \le x^2+y^2$, hence $f(x,y)\le 1$, while $f(y,y) = 1$ for $y\neq 0$.
Of course $f$ is separately continuous, but for $y>0$ one has:
$$
\sup_x f(x,y) = f(y,y) = 1
$$
while $f(x,0)=0$ hence $\sup_x f(x,0) = 0$. |
continuity of a piece wise defined function | Hint:
Assuming you want $f$ to be continuous, first we need to make sure that at $x = a$ the left-hand and the right-hand limits agree; that is, we need $a = a^2$. From here you should be able to get a possible list of $a$ that work. Next ensure that $f(2) = 4$ for each of these $a$ and you have your answer. |
Making first order logic statements | In the first sentence $Y$ and $Z$ must be understood as referring to specific individuals, while $X$ is a dummy name that does not refer to any specific individual. In more technical terms, $Y$ and $Z$ are constants, and $X$ is a variable. If $\mbox{Likes}(x,y)$ is a two-place predicate meaning ‘$x$ likes $y$’, then you want
$$\forall X\Big(X\ne Y\to\mbox{Likes}(X,Z)\Big)\;.\tag{1}$$
Read fairly literally, that’s for all X, if X is not Y, then X likes Z, which is pretty clearly equivalent to the desired statement.
There is another possible interpretation of the original sentence. It certainly says that everyone who is not $Y$ likes $Z$. In everyday usage everyone except A does X generally also implies that $A$ does not do $X$, though it doesn’t explicitly say so. If you want to include that implication, then add $$\neg\mbox{Likes}(Y,Z)\land$$ at the front of $(1)$.
In the second sentence $X$ is a constant: it apparently refers to a particular individual, as does $Z$; this time it’s $Y$ that’s the variable ranging over all possible people. This statement doesn’t say anything one way or the other about whether $X$ likes $Z$, so we have
$$\forall Y\Big(Y\ne Z\to\mbox{Likes}(X,Y)\Big)\;:$$
for every Y, if Y is not Z, then X likes Y. |
Smallest subring of $\mathbb{C}$ containing arbitrary element $\alpha$ | Your guesses are correct. $\mathbb{Z}[x]$ is the ring of all polynomials in the variable $x$ with coefficients in $\mathbb{Z}$. The proof that $\{f(\alpha):f(x)\in\mathbb{Z}[x]\}$ is a subring of $\mathbb{C}$ is very straightforward. For instance, if you have two elements $f(\alpha)$ and $g(\alpha)$ in this set, then their sum is just $f(\alpha)+g(\alpha)=(f+g)(\alpha)$, which is another element of this set (since $f+g$ is another polynomial in $\mathbb{Z}[x]$).
Once you've shown that this set is a subring, that proves that it contains $\mathbb{Z}[\alpha]$. For the reverse inclusion, the idea is to prove that for any $f(x)\in\mathbb{Z}[x]$, $f(\alpha)$ is forced to be an element of $S$ if $S$ is a subring containing $\alpha$. For example, if $f(x)=x^2+3x+4$, then $$f(\alpha)=\alpha^2+3\alpha+4=\alpha\cdot\alpha+(1+1+1)\cdot\alpha+1+1+1+1$$ must be in any such subring $S$ because $S$ contains $\alpha$ and $1$ and is closed under addition and multiplication.
(The case of Gaussian integers is a special case--it turns out that for $\alpha=i$, every element of $\mathbb{Z}[i]$ can be written in the form $a+ib$ for $a,b\in\mathbb{Z}$. In other words, you only need to use linear polynomials $f(x)$ in order to get all the elements of $\mathbb{Z}[i]$. But this is not true for general $\alpha$, and is not the definition of $\mathbb{Z}[\alpha]$ in general.) |
Why is $\frac{e^{x}(x-1) }{ 1-x} = -e^{x}$? | $$\frac{x-1}{1-x}=-1$$
Do you see it now? |
Lies, Damn Lies and,... Gradients? | The function $$
f(x, y) = [(x + y)(x - y)]^2 = (x^2 - y^2)^2
$$
has the property you've described: it's $U$-shaped when viewed in the $y = 0$ or $x = 0$ plane, but along the line $y = x$, it's horizontal, so there's a way to escape. It's also differentiable.
The real point is that "local min" implies "gradient is zero", but the other direction is not true at all (as $f(x, y) = x^2 - y^2 $ shows).
On the other hand, at places where the gradient is nonzero, it does tell you the direction of steepest descent...but only locally, and only to first order. If you stand on the slope of Bunker Hill in Boston, you can hardly expect the upward slope direction to point towards Mt. Everest, or even Mt. Washington. |
Necessity of uniform integrability in martingale convergence theorem | A very elementary example is the following: consider the probability space $(0,1)$ with Lebesgue measure. Let $X_n=nI_{(0,\frac 1 n)}$. A straightforward argument shows that $\{X_n\}$ is a martingale. It converges almost surely to $0$. Obviously, $X_n=E(0|X_1,X_2,...,X_n)$ is not true. Hint for proving martingale property: $\sigma \{X_1,X_2,...,X_n\}=\sigma \{(0,1),(0,\frac 1 2),... ,(0,\frac 1 n)\}=\sigma\{(0,\frac 1 n),[\frac 1 n, \frac 1 {n-1}),...,[\frac 1 2 ,1)\}$ and this last sigma algebra consists precisely of unions of the intervals $(0,\frac 1 n),[\frac 1 n, \frac 1 {n-1}),...,[\frac 1 2 ,1)$. Hence it is enough to show that $EX_{n+1} I_A=EX_nI_A$ for each one of these intervals. This is easy.
PS: this martingale is very useful in providing counter-examples. Unfortunately it is not found in texts. |
Standard compactness argument | For me the canonical compactness argument is the argument that shows that a compact Hausdorff space is regular, in fact showing a common maxim: "compact sets behave like points (often)"
Suppose $x\in X$, $C \subseteq X$ compact and $x \notin C$, all in a Hausdorff space $X$. Then $x$ and $C$ have disjoint open neighbourhoods (just as two distinct points already have).
For every $p \in C$ pick $U_p$ and $V_p$ open in $X$ such that $x \in U_p, p \in V_p , U_p \cap V_p = \emptyset$, which can be done as $x \neq p$ and $X$ is Hausdorff. The $V_p$ cover the compact set $C$, so finitely many cover them, say $C \subseteq V:=V_{p_1} \cup \ldots \cup V_{p_n}$, for finitely many $p_1,\ldots, p_n \in C$. But then $U = \cap_{i=1}^n U_{p_i}$ is also open (a finite intersection!) and contains $x$ and
$$U \cap V = \cup_{i=1}^n (V_{p_i} \cap U) \subseteq \cup_{i=1}^n (V_{p_i} \cap U_{p_i}) = \emptyset$$ showing that $U$ and $V$ are the required open neighbourhoods of $x$ and $C$.
A totally similar argument, using the above as a lemma, shows that $C$ and $D$ compact disjoint subsets of a Hausdorff space $X$ have disjoint open neighbourhoods as well.
The compactness allows one to fix things for points, and then using a finite subcover, to fix them for the whole compact set. Many compactness arguments use this idea. |
Are the two connected components of orthogonal flag bundles isomorphic? | The Stein factorization for the map $OG_X(n,V) \to X$ takes the form
$$
OG_X(n,V) \to \tilde{X} \to X,
$$
where the second arrow is an étale double covering, which in general is not trivial. So, in general there is only one component (even over a field, if the field is not algebraically closed and the determinant of the quadratic form is not a square). |
Effective divisors $D_1\leq D_2$ such that $h^0(D_1)=h^0(D_2)$, then $D_1=D_2?$ | The answer is no. For an example, let $X$ be a blow up of a smooth surface at a point and let $E$ be the exceptional divisor. Let $D$ be any curve not meeting $E$, for example, a curve in the original surface not passing through the blown up point, pulled back to $X$. Then, $H^0(D)=H^0(D+E)$, $D\leq D+E$, but $D$ is not linearly equivalent to $D+E$. |
Atiyah-Macdonald p.108 | If $B$ is an $A$-algebra then the tensor product $B \otimes_A M$ is what we mean when we say extend scalars to $B$. It is a $B$-module where the action is given by multiplying into the left factor, so $b \cdot (b' \otimes m) = (bb') \otimes m$.
For the homomorphisms, the first one in the last line is easy. It's just $\mathrm{id} \otimes g$ where $g$ is the natural map $g\colon M \to \hat M$. So $a \otimes m \mapsto a \otimes g(m)$.
For the second homomorphism let $B$ be an $A$-algebra and assume $M$ and $N$ are $B$-modules. Then they are also $A$-modules so both tensor products $M \otimes_A N$ and $M \otimes_B N$ are well defined. If you go back to the definition of the tensor product you'll see that $M \otimes_B N$ is the same thing as $M \otimes_A N$ except we now allow elements of $B$ to cross the tensor symbol, not just elements of $A$. In other words, the relations that we quotient out by in forming $M \otimes_A N$ are properly contained in the relations that we quotient out by to form $M \otimes_B N$. This means $M \otimes_B N$ is a quotient of $M \otimes_A N$ and therefore there's a natural map
$$M \otimes_A N \to M \otimes_B N$$
If all that was a little confusing here's another way to get that map. It's defined by $m \otimes n \mapsto m \otimes n$ so just check that this satisfies the bilinearity relations and hence gives a well defined map. Then it's obvious that it's surjective because it hits all the simple tensors. |
Inequality for finite harmonic sum | Hint. We have that for $k>1$,
$$\frac{1}{2}=2^{k-1}\cdot\frac{1}{2^{k}}<\sum_{j=2^{k-1}}^{2^{k}-1}\frac{1}{j}< 2^{k-1}\cdot \frac{1}{2^{k-1}}=1.$$
Now, note that
$$A_n=\sum_{k=1}^n\left(\sum_{j=2^{k-1}}^{2^{k}-1}\frac{1}{j}\right)$$
Can you take it from here? |
A thief is stealing a password. He knows three constraints; at most how many attempts must he make before he guesses the password? | First, count the number of $4$-digit combinations containing "8":
The number of $4$-digit combinations containing $\color\red1$ occurrence of "8" is $\binom{4}{\color\red1}\cdot9^{4-\color\red1}=2916$
The number of $4$-digit combinations containing $\color\red2$ occurrences of "8" is $\binom{4}{\color\red2}\cdot9^{4-\color\red2}=486$
The number of $4$-digit combinations containing $\color\red3$ occurrences of "8" is $\binom{4}{\color\red3}\cdot9^{4-\color\red3}=36$
The number of $4$-digit combinations containing $\color\red4$ occurrences of "8" is $\binom{4}{\color\red4}\cdot9^{4-\color\red4}=1$
The result is $2916+486+36+1=3439$.
Then, add any digit from "1" to "9" at the beginning of each combination.
This gives you $3439\cdot9=30951$ combinations of $5$ digits containing "8" and not starting with "0".
Additionally, you have $9^4=6561$ combinations starting with "8" and not containing any other "8".
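These counts can be double-checked with a tiny brute force (my own sketch, not part of the derivation):

count = 0
for n in range(10000, 100000):   # all 5-digit numbers, none starting with "0"
    if '8' in str(n):
        count += 1
print(count)                     # agrees with the total computed next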
Hence the total number of combinations is $30951+6561=37512$. |
Integral of normal distribution curve | $$\int_{0.30}^{\infty}P(x)\,dx=\int_{-\infty}^{\infty}P(x)\,dx-\int_{-\infty}^{0.30}P(x)\,dx$$
The first integral is equal to $1$ since $P(x)$ is a probability density function. The second one is not possible to evaluate with elementary functions.
However using the function
$$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x\exp(-t^2)\,dt,$$ the cumulative normal distribution function can be written as
$$\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^x\exp\left(-\frac{(t-\mu)^2}{2\sigma^2}\right)\,dt=\frac12\left(1+\operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right).$$
This makes \begin{align}
\int_{0.30}^{\infty}P(x)\,dx&=1-\frac12\left(1+\operatorname{erf}\left(\frac{0.30}{1.40\cdot\sqrt{2}}\right)\right)\\
&=\frac12-\frac12\operatorname{erf}\left(\frac{0.30}{1.40\cdot\sqrt{2}}\right).\end{align}
So it is easiest to just use a look-up table for the standard normal distribution: the cumulative distribution function at $2.78$ gives $0.9973$, so the desired integral is $1-0.9973=0.0027$. |
Embedding a torus in $\mathbb{R}^{n+1}$ | If we have an embedding $T^k\subset\mathbb{R}^{k+1}$ then the subspace inclusion $\mathbb{R}^{k+1}\subset\mathbb{R}^{k+2}$ allows us to view $T^k$ as embedded in $\mathbb{R}^{k+2}$. The boundary of the tubular neighborhood of $T^k$ in $\mathbb{R}^{k+2}$ will then be a torus $T^{k+1}$ embedded in $\mathbb{R}^{k+2}$, and we proceed inductively. |
How to solve a non-homogeneous boundary value problem | As with ordinary linear autonomous differential equations, you can always add any solution of the homogeneous differential equation to a solution of the inhomogeneous differential equation, often called a particular solution. The same can be said for a linear partial differential equation.
So if you have found a particular solution, let's call it $u_Q(x,t)$, then you can subtract it from the boundary conditions $A(t)$ and $B(t)$, such that the remaining equations can be used as new boundary conditions for a homogeneous equation, which can thus be solved by the method of eigenfunction expansion.
Writing this in terms of equation; if you define the solution as $u(x,t) = u_Q(x,t) + u_h(x,t)$, such that
$$
\frac{\partial^2 u_Q(x,t)}{\partial t^2} = \frac{\partial^2 u_Q(x,t)}{\partial x^2} + Q(x,t),
$$
$$
\frac{\partial^2 u_h(x,t)}{\partial t^2} = \frac{\partial^2 u_h(x,t)}{\partial x^2},
$$
then $u(0,t) = u_Q(0,t) + u_h(0,t) = A(t)$ and $u(L,t) = u_Q(L,t) + u_h(L,t) = B(t)$. This can be rewritten as constraints for $u_h(x,t)$, namely
$$
u_h(0,t) = A(t) - u_Q(0,t),
$$
$$
u_h(L,t) = B(t) - u_Q(L,t).
$$
Finding $u_h(x,t)$ should be straightforward with the method you know; finding a $u_Q(x,t)$ might be harder in general (it is only determined up to solutions of the homogeneous equation), but since you have not defined $Q(x,t)$ I cannot say anything more specific about that.
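For a concrete (hypothetical) illustration: if $Q(x,t)=\sin(\pi x)$, then a time-independent particular solution is $$u_Q(x) = \frac{\sin(\pi x)}{\pi^2},$$ since $\frac{\partial^2 u_Q}{\partial t^2}=0$ and $\frac{\partial^2 u_Q}{\partial x^2}+Q = -\sin(\pi x)+\sin(\pi x)=0$; the new boundary conditions for $u_h$ are then $u_h(0,t)=A(t)$ and $u_h(L,t)=B(t)-\frac{\sin(\pi L)}{\pi^2}$. |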
Probability of having more Heads than Tails tossing a biased coin n times | The compact way to write your sums is $$\sum_{k=0}^\ldots{n\choose k}p^{n-k}(1-p)^k$$ where here $k$ represents the number of tails. So the maximum value of $k$ is when $$k\leq n-k\qquad\text{and}\qquad k+1>n-(k+1)$$ meaning $$k\leq \frac n2\qquad\text{and}\qquad k+1>\frac n2$$ so that $k_{max}=\left\lfloor\frac n2\right\rfloor$:
$$\sum_{k=0}^{\left\lfloor\frac n2\right\rfloor}{n\choose k}p^{n-k}(1-p)^k$$ |
Let $d(x,y)=|x-y|$, when does $|x-y|^p$ define a metric? | The only thing we need to check is the triangle inequality, i.e. for what $p$, do we have
\begin{align}
|x-y|^p \leq |x-z|^p+|z-y|^p.
\end{align}
It's clear that for $p>1$ the above inequality fails because
\begin{align}
2^p=|1-(-1)|^p> |1-0|^p+|0-(-1)|^p = 2.
\end{align}
Suppose $0<p\leq 1$; then we see the triangle inequality does hold. Let us prove it. Observe
\begin{align}
|x+y| \leq (|x|+|y|)= (|x|^{p/p}+|y|^{p/p}) \leq (|x|^p+|y|^p)^{1/p}
\end{align}
where the last inequality holds because $t\mapsto t^{1/p}$ is superadditive on $[0,\infty)$ for $0<p\le 1$; this means
\begin{align}
|x+y|^p \leq |x|^p+|y|^p.
\end{align}
Lastly, for $p<0$, we see that $1/|x-y|^{|p|}$ doesn't satisfy the condition $d(x, x) = 0$. |
On equalizers in Top | Since the underlying-set functor $\hom(1, -): \mathbf{Top} \to \mathbf{Set}$ preserves all limits (being a representable functor), it preserves equalizers in particular. So we know that the underlying set of the equalizer must be the equalizer as computed in $\mathbf{Set}$.
The only question then is what is the correct topology on the equalizer $i: E \to A$ (of a pair of arrows $f, g$ from $A$ to $B$ say). We know that $i$ must be continuous, and this means that $i^{-1}(U)$ must be open for every open $U \subseteq A$; that is, thinking of $i$ as an inclusion, we must have $U \cap E$ open in $E$. So at least the correct topology must contain the subspace topology. On the other hand, if we consider the inclusion map $j: E_{sub} \to A$ where $E_{sub}$ is the underlying set equipped with the subspace topology, then surely $f j = g j$, so this would have to factor through the correct topology, meaning the correct topology must be contained in the subspace topology. So it must be the subspace topology. |
Find the real parts of a complex number equation | If we denote by $z$ the complex number, and we suppose that all coefficients $\lambda_1,\psi_1,\omega,t$ are real numbers, we have:$$z=\frac{\lambda_1\psi_1}{\lambda_1^2+\omega^2}(e^{-\lambda_1 t}-e^{i\omega t})(\lambda_1-i\omega)=\frac{\lambda_1\psi_1}{\lambda_1^2+\omega^2}(e^{-\lambda_1 t}-\cos(\omega t)- i\sin (\omega t))(\lambda_1-i\omega)$$ Then: $$\operatorname{Re}(z)=\frac{\lambda_1\psi_1}{\lambda_1^2+\omega^2}(\lambda_1(e^{-\lambda_1 t}-\cos(\omega t))-\omega \sin(\omega t))$$ |
Is the linear combination of vectors in a vector space subject to the rules of addition/multiplication of that vector space? | The map $\log : \mathbb R^+ \to \mathbb R$ is a vector space isomorphism since you have
$$\log(x^{\lambda}y^{\mu}) = \lambda \log(x)+\mu\log(y)$$
and the map is bijective. So your vector space is one dimensional and every $x\in\mathbb R^+\setminus\{1\}$ is a possible basis. In particular, $x=2$ is a basis. |
For any $a \in \mathbb R$ and any $n \in \mathbb N^+$ there exists $q \in \mathbb Q$ such that $|a-q|< \frac{1}{n}$. | As pointed out in the comments, the theorem says "there exists", which means that providing one counterexample doesn't mean that no such $q$ exists.
To solve the problem, consider $k=\lfloor na\rfloor$ and let $q=\frac{k}{n}$. Then
$$
|a-\frac{k}{n}|=|\frac{na-\lfloor na\rfloor}{n}|<\frac{1}{n}
$$ |
Find number of integers satisfying $x^{y^z} \times y^{z^x} \times z^{x^y}=5xyz$ | Finding all of the solutions may be tricky, but counting how many there are is very easy! Notice that if one variable is $0$ and the other variables are positive, then both sides are obviously equal to $0$. So this gives infinitely many different solutions, since the other two variables can be any positive integers. (More precisely, there are $\aleph_0$ solutions.) |
Integrate $\dfrac{\tan{3x}}{\cos{3x}}\mathrm{d}x$ | $$\dfrac{\tan3x}{\cos3x}=\dfrac{\sin3x}{(\cos3x)^2}=-\dfrac{(\cos3x)'}{3(\cos3x)^2}$$ |
statistics range rule of thumb confidence interval | OK, now we can apply the Central Limit Theorem if $n$ is large enough. The rule of thumb is $n>30$; we will see whether that is the case. The width of the confidence interval is
$$\overline X+ z_{1-\frac{\alpha}{2}}\cdot \frac{s}{\sqrt n}-\left(\overline X- z_{1-\frac{\alpha}{2}}\cdot \frac{s}{\sqrt n}\right)=2\cdot z_{1-\frac{\alpha}{2}}\cdot \frac{s}{\sqrt n}$$
$\alpha$ is the significance level. Here it is $1-0.98=0.02$
$2\cdot z_{0.99}\cdot \frac{s}{\sqrt n}=0.4$
$2\cdot 2.33\cdot \frac{2}{\sqrt n}=0.4$
$\sqrt n=2\cdot 2.33\cdot \frac{2}{0.4}$
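Carrying the arithmetic through (my own completion of the last step): $\sqrt n = 23.3$, so $n = 23.3^2 = 542.89$, and rounding up gives $n = 543$. This is comfortably above $30$, so the normal approximation was justified. |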
Why isn't $f_{tt}(\vec{x},t)g(\vec{x},t) - f(\vec{x},t)g_{tt}(\vec{x},t)$ always equal to zero? | The problem is with the equation after "I can write".
If you calculate the Fourier transform more carefully, you will notice that the $\omega^2$ does not come out of the convolution, but you need to convolve the functions $\omega^2\hat f(x,\omega)$ and $\hat g(x,\omega)$ (and the other way around for the other term).
Write down the convolution and you will see that you can't pull the $\omega^2$ outside the integral. |
Show that $A \cup B = (A$ \ $B ) \cup (A \cap B) \cup (B$ \ $A)$ | On one hand
$$ A\setminus B, A \cap B \subseteq A \subseteq A \cup B, \qquad B \setminus A \subseteq B \subseteq A \cup B $$
hence "$\supseteq$" holds.
Now let $x \in A \cup B$. Then $x \in A$ or $x \in B$. If $x \in A$, then either $x \in B$ and hence $x \in A \cap B$ or $x \not\in B$ hence $x \in A \setminus B$. If $x \not\in A$, then as $x \in A \cup B$ we must have $x \in B$ and hence $x \in B \setminus A$. So in all cases $x \in (B \setminus A) \cup (A \cap B) \cup (A \setminus B)$. This proves "$\subseteq$". |
Conditions for the existence of a limit | $\lim\limits_{x \to b} g(x)$ does not necessarily exist if $\lim\limits_{x \to a} f(x)=0$: for example, $f(x)=\sin{x}$, $a=0$, $g(x)=\dfrac{1}{\sqrt{x}}$. |
Ordinal with cofinality $\Omega_\omega$? | Because the cofinality of $\aleph_{\omega}$ itself is $\omega$, so if there is an unbounded subset $A \subseteq \alpha$ of size $\aleph_{\omega}$, then there is also an unbounded subset of size $\omega$. |
Showing lack of continuity of a function | I propose $\lim_{x_\to0}=0$
since $0\leq |\sin(\frac{1}{x})|\leq1$ for $\epsilon>0$ take $0<\delta<\epsilon$.
Then $ \epsilon>|\delta|=|\delta*1||\geq\delta|\sin(\frac{1}{x})||$ |
Dimension of image of Lie bracket | In general, the linear operator ${\rm ad}(x)$, defined by ${\rm ad}(x)(y)=[x,y]$, has
a non-trivial kernel $\ker(\operatorname{ad}(x))$, since we always have $[x,x]=0$, and an image $\operatorname{im}(\operatorname{ad}(x))$. As you know, we have
$$
\dim \ker(\operatorname{ad}(x))+\dim \operatorname{im}(\operatorname{ad}(x))=\dim (L)
$$
for all $x\in L$.
The dimensions $\dim \operatorname{im}(\operatorname{ad}(x))$ are not always the same, e.g., take $x=0$, and take an $x\neq 0$ with non-trivial adjoint operator. |
If $A \subseteq I_m$ such that $m \in A$, and if $n \in A \Rightarrow n+1 \in A$, then $A=I_m$. | If $I_m \neq A$, let $m_0 = \min(I_m\setminus A)$ (any non-empty subset of $\Bbb Z$ with a lower bound has a minimum).
Then $m \neq m_0$ (as $m \in A$ is given), and it follows that $m_0 -1 \in I_m$ as well (by definition of $I_m$); it has to be in $A$ by minimality of $m_0$, but then $m_0 = (m_0 -1) +1 \in A$ by the assumption on $A$, and we have a contradiction. |
$L^{1}$ norm of a horizontally shifted measurable function | For fixed $x \in \mathbb{R}$, define a mapping
$$\tau_x: \mathbb{R} \to \mathbb{R}, y \mapsto y-x.$$
Then $\tau_x$ is continuous (hence measurable) and $\tau_x^{-1} = \tau_{-x}$. By definition,
$$\int_{\mathbb{R}} |g(y-x)| \, m(dy) = \int_{\mathbb{R}} |g \circ \tau_x(y)| \, m(dy).$$
We can rewrite the right-hand side using image measures:
$$ \int_{\mathbb{R}} |g \circ \tau_x(y)| \, m(dy) = \int_{\mathbb{R}} |g(z)| (\tau_x m)(dz) \tag{1}$$
where
$$(\tau_x m)(B) := m(\tau_x^{-1}(B)) = m(\tau_{-x}(B)) = m(B+x)$$
denotes the image measure of $\tau_x$ with respect to the Lebesgue measure $m$. Since the Lebesgue measure is translation invariant, i.e.
$$m(B+x) = m(B)$$
for any Borel set $B \in \mathcal{B}(\mathbb{R})$, we conclude $\tau_x m = m$. Hence, by $(1)$,
$$ \int_{\mathbb{R}} |g(y-x)| \, m(dy) = \int_{\mathbb{R}} |g(z)| \, m(dz)$$ |
Countable / Uncountable / Perfect Sets Question(s) | If you start with one point, this set has no limit points, so we cannot add points one by one that are limit points of the previous ones. If we have any finite set this is closed, so has no limit points, and there is nothing to add. So concretely: what point do you add to $\{0\}$ such that $0$ is a limit point of the new set? You cannot.
If you start with a set like $\Bbb Q$, all of its points are limit points (it has no isolated points) but there are still uncountably many points left that are also a limit point of it but not yet in it, and you cannot reach them all in countably many steps. In $\Bbb R$ this is unavoidable: a set $A$ that is equal to its set of limit points $A'=A$, or a perfect set, must be uncountable (typical examples are $[0,1]$, $\Bbb R$ itself, or the Cantor set).
A countable process as you imagine cannot start with any finite set, because at that stage there are no limit points to add. If you want to add points that are limit points, you can add a whole sequence of distinct points at a time, with the new limit point as its (sequential) limit, but then the points of that sequence you added are not yet limit points of the set at that stage and you have to add infinitely many new points to fix that, etc. It's not going to be a simple matter of adding a point at a time. That way your set will remain countable and never be a perfect set. |
Why does $\sum\limits_{n=0}^{\infty}\frac{nz^{n-1}}{n!} = \sum\limits_{n=0}^{\infty}\frac{z^n}{n!}$ | We have
$$\begin{align}
\sum_{n=0}^\infty\frac{nz^{n-1}}{n!}&=\sum_{n=1}^\infty\frac{nz^{n-1}}{n!}\\\\
&=\sum_{n=1}^\infty\frac{z^{n-1}}{(n-1)!}\\\\
&=\sum_{n=0}^\infty\frac{z^{n}}{n!}
\end{align}$$
And we are done! |
A question on Euler Phi Function | The formula for Euler's function gives most of the answers:
$$n=p^aq^br^c...$$ $$\phi(n)=p^{a-1}(p-1)q^{b-1}(q-1)r^{c-1}(r-1)=n(1-\frac1 p)(1-\frac1 q)(1-\frac1 r)$$
If $p \mid \phi(n)$ then we must have:
(a) $p$ must be one of the prime factors of $n$, and
(b) the index $a$ of $p$ in $n=p^aq^br^c\ldots$ must be greater than $1$. |
Accumulation Values of a sequence | The accumulation points of a sequence are exactly its subsequential limits.
Whenever a sequence converges, it has only one accumulation point.
As in your example, the only accumulation value is $e$. |
Relationship between incenter and circumcenter | Here is a "synthetic" solution with metric arguments:
We will show both conditions are equivalent with $\cos B +\cos C=\cos A$.
Condition 1: $O$ lies on $MN$. Let $\delta (P, XY)$ be the distance from point $P$ to line $XY$. Notice that, since $M$ is the foot of the angle bisector from $B$ (so $M$ lies on $AC$ and is equidistant from $AB$ and $BC$), we have $\delta (M, AB) + \delta (M, AC) = \delta (M, AB) = \delta (M, BC)$.
Meanwhile, $\delta (N, AB)+\delta (N,AC) = \delta (N,AC)=\delta (N,BC)$.
Since distances from points to lines form linear functions, the function $\delta (P, AB)+\delta (P, AC)-\delta (P, BC)$ must be zero for any point $P\in MN$, because it is zero for $P=M,N$.
Then if $O$ lies on the line, $\delta (O, BC)= R\cos A$, $\delta (O, AC)=R \cos B$, $\delta (O, AB)=R\cos C$, which implies $\cos B+\cos C=\cos A$.
Condition 2: $I$ lies on $EF$.
Remark that quadrilateral $BCEF$ is cyclic. Let $X$ be a point on segment $EF$ with $BF=XF$. Note that $\angle BXF = 0.5( \angle BXF + \angle FBX) = 0.5(\angle AFX) = 0.5 \angle ACB = \angle ICB$. This implies that quadrilateral $BXIC$ is cyclic.
As a result, since $\angle IBC=0.5\angle B$, we deduce $\angle IXC = 0.5\angle B$. Then since $\angle XEC = 180-\angle AEF = 180-\angle B$, we deduce that $\triangle XEC$ is isosceles so $XE=EC$.
Then $EF = EX+FX = EC+BF$. But $EF=a \cos A, EC = a \cos C, BF = a \cos B$ so we deduce $\cos A =\cos B + \cos C$ as desired.
Thus the two conditions are equivalent. |
Prove the sum is a Lipschitz function | $|{1 \over k+|x|} - {1 \over k+|y|}| = |{|y|-|x| \over (k+|x|)(k+|y|)}| \le {1 \over k^2}||y|-|x|| \le {1 \over k^2}|y-x| $, and ${1 \over k^2}$ is summable. |
Solve for x when $(\log_x (5x))(\log_7 x)=2$ | Hint:
$$\log_x(5x)=\frac{2}{\log_7 x} = \frac{\log_7 49}{\log_7 x}$$
I hope that you have tested the conditions for the logarithms to be defined. |
SIR model exact solution | What I think you mean is that, intuitively, the share of infected people is not monotone (increasing/decreasing), unlike the setup of SIx models, which predicts, roughly speaking, the average of the rate of decay, without accounting for minor variations. They can predict, given the hyperparameters, whether the virus will consume the whole population or die out eventually, but not the size of deviations from the trajectory. This is because the hyperparameters are fixed. For something more involved you have to look into dynamic models, where the rates of recovery/infection are functions of time, i.e. $\mu, \lambda=\mu(t), \lambda(t)$. |
Solving an ODE that is piecewise defined using the Dirac Delta function | I am going to look at the solution to a general problem and then approach your problem.
$$ L(u) = f(x) \tag{1} $$
defined on $ a < x < b$ subject to homogeneous boundary conditions where $L$ is a Sturm Liouville operator
$$ L(u) \equiv \frac{\rm d }{\rm dx}\bigg( p \frac{\rm d }{\rm d x} \bigg) + q \tag{2} $$
Now in the simple case we have $ p= 1$ and $ q=0$, and our operator is $ L= \frac{\rm d^{2} }{\rm dx^{2}}$; this can be solved by variation of parameters
$$ u = u_{1} v_{1} + u_{2}v_{2} \tag{3}$$
Now consider this problem
$$ \frac{\rm d^{2} u}{\rm dx^{2}} = f(x) \tag{4}$$
with boundary conditions
$$ BC1: u(0) = 0 \\BC2: u(L) = 0 \tag{5} $$
the homogeneous solutions correspond to
$$ u_{1}(x) = x \\ u_{2}(x) = L - x \tag{6}$$
when you integrate
$$ v_{1}(x) = \frac{1}{L} \int_{0}^{x} f(\xi) (L-\xi) d \xi \tag{7} $$
$$ v_{2}(x) = -\frac{1}{L} \int_{0}^{x} f(\xi) \xi d \xi + c_{2} \tag{8} $$
then the solution to the non-homogeneous boundary problem becomes
$$ u(x) = \frac{-x}{L} \int_{x}^{L} f(\xi)(L-\xi) d \xi -\frac{L-x}{L}\int_{0}^{x}f(\xi) \xi d\xi \tag{9} $$
we transform this into our equation
$$ u(x) =\int_{0}^{L}f(\xi) G(x|\xi) d \xi \tag{10}$$
where our green's function is
$$ G(x|\xi) =\begin{align}\begin{cases} c_{1}+c_{2}x & x < \xi \\ \\ d_{1}+d_{2}x & x > \xi \end{cases} \end{align} \tag{11}$$
applying the boundary conditions $G(0|\xi) =G(1|\xi) = 0$ (taking $L=1$ from here on)
$$ G(x|\xi) =\begin{align}\begin{cases} cx & x < \xi \\ \\ d(x-1) & x > \xi \end{cases} \end{align} \tag{12}$$
In order for the Green's function to be continuous, we need
$$ c\xi = d(\xi-1) \implies d = c\frac{\xi}{\xi -1} \tag{13}$$
Next, the jump condition (the derivative of $G$ must jump by $1$ across $x=\xi$) requires
$$ \frac{\rm d}{\rm dx} c \frac{\xi }{\xi -1}(x-1) \Big|_{x=\xi} - \frac{\rm d}{\rm dx}cx \Big|_{x =\xi} = 1 \tag{14}$$
$$c \frac{\xi }{\xi -1} - c = 1 \\ c= \xi-1\tag{15}$$
then the Green's function is
$$ G(x|\xi) =\begin{align}\begin{cases} (\xi-1)x & 0 \leq x \leq \xi \\ \\ \xi(x-1) & \xi \leq x \leq 1 \end{cases} \end{align} \tag{16}$$
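A quick numerical sanity check of this Green's function (my own sketch; the test case $f \equiv 1$, whose exact solution of $u''=1$, $u(0)=u(1)=0$ is $u(x)=x(x-1)/2$, is my choice):

import numpy as np

def G(x, xi):
    # The Green's function from (16).
    return np.where(x <= xi, (xi - 1) * x, xi * (x - 1))

xi = np.linspace(0.0, 1.0, 20001)
for x in (0.25, 0.5, 0.75):
    w = G(x, xi) * 1.0                  # integrand G(x|xi) f(xi) with f = 1
    u = float(((w[:-1] + w[1:]) / 2 * np.diff(xi)).sum())   # trapezoid rule
    print(u, x * (x - 1) / 2)           # the two columns should agree

Splitting the integral in $(10)$ at $\xi = x$ gives the explicit form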
$$ u(x) = (x-1)\int_{0}^{x} \xi f(\xi) d\xi + x \int_{x}^{1} (\xi-1) f(\xi) d\xi \tag{17}$$ |
Upper bound on number of models of a theory of a given cardinality | Assuming $T$ is a theory over a language with $\kappa$ symbols, a model of $T$ with an underlying set $S$ of size $\lambda$ consists of $\kappa$ subsets of $S^n$ for certain integers $n$. There are $2^\lambda$ choices for each of these subsets, so there are at most $(2^\lambda)^\kappa=2^{\lambda\kappa}$ such models with underlying set $S$. If the language is countable so $\kappa\leq\aleph_0$ (or more generally if $\kappa\leq \lambda$), $2^{\lambda\kappa}=2^\lambda$ so there are at most $2^\lambda$ such models.
(The bound on $\kappa$ is necessary. For instance, if you have $\kappa$ unary relation symbols there are at least $2^\kappa$ non-isomorphic models of any nonzero cardinality since each relation can be either satisfied by every element or not satisfied by every element.) |
Need help understanding why my proof that [0,1] is compact is wrong? | Hint: if $G$ is an open cover for $[0,1]$, and let $A=\{a \in [0,1]: [0,a] \text{ has a finite subcover from } G\}$. Trivially $0 \in A$, as $0$ is covered by some element of $G$. So $a_0 = \sup A$ exists. (lub property of $\Bbb R$). Try to reason why $a_0 < 1$ cannot happen. |
Find base for which $a+b = S$ | For each of $a, b, S$ you can expand to $\sum_{i=1}^{n} a_{i}x^{i-1}$, with $n$ being the number of digits and $a_{i}$ being each digit from right to left. Solving for $x$ yields the base. For example: $a=3,b=7,S=10$ would give $$(3(1)) + (7(1)) = (1x + 0(1))$$
with the solution $x=10$
Another example would be $a=110, b=10, S=1000$
$$(1x^{2}+1x+0(1))+(1x+0(1))=(1x^{3}+0x^{2}+0x+0(1))$$
$$x^{2}+2x=x^{3}$$
$$x+2=x^{2}$$
So the original equation must be in base $2$. A direct numerical check is sketched below.
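Here is a small sketch (function names mine) that tests candidate bases directly instead of solving the polynomial:

def value(digits, x):
    # Interpret a digit string in base x.
    v = 0
    for d in digits:
        v = v * x + int(d)
    return v

def find_base(a, b, S, max_base=37):
    lo = max(int(d) for d in a + b + S) + 1   # the base must exceed every digit
    return [x for x in range(max(lo, 2), max_base)
            if value(a, x) + value(b, x) == value(S, x)]

print(find_base("110", "10", "1000"))   # [2]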
(Note: this idea can be extended to non-integer variables quite easily) |
Distance to the kernel of a functional in Hilbert space | Let $\pi : H\rightarrow L$ be the orthogonal projection onto $L$ (this exists because $H$ is a Hilbert space). By a basic theorem, $\operatorname{codim}(L) = 1$, and therefore for any $z\not\in L$:
$$L^\perp=\operatorname{span}\{z-\pi(z)\}$$
Then because $f$ achieves its maximum on $L^\perp$ (this should be easy, ask otherwise) it follows that $|f(z-\pi(z))|=||f||\cdot ||z-\pi (z)||$.
Finally, I think you know that $d(z,L)=\|z-\pi (z)\|$; if you do, you should be able to calculate:
$$|f(z)|=|f(z-\pi(z))|=||f||\cdot ||z-\pi (z)||=||f||\cdot d(z,L)$$ |
Is the function $f\colon \mathbb{R} \to \mathbb{R}$ defined by $f(x)=x+\sin(x)$ uniformly continuous on $\mathbb{R}$ | $|f'(x)|=|1+\cos x|\leq 2$, and thus $f$ is Lipschitz continuous and hence uniformly continuous. |
Find liminf of $(-1-\frac{2}{n})^n$ | $(-1-\frac{2}{n})^n=(-1)^n\left[(1+\frac{2}{n})^{n/2}\right]^2$, and $(1+\frac{2}{n})^{n/2}\to e$. Thus the subsequence of odd $n$ tends to $-e^2$ and that of even $n$ to $e^2$, so the liminf is $-e^2$. |
Twist map as a solution of the Quantum Yang-Baxter Equation (QYBE) | Let $m_1\otimes m_2\otimes m_3\in M\otimes M\otimes M$, then
$$
\begin{align}
R_{(1,2)}R_{(1,3)}R_{(2,3)}(m_1\otimes m_2\otimes m_3)&=
(\tau\otimes 1_M)(1_M\otimes\tau)(\tau\otimes 1_M)(1_M\otimes\tau)(1_M\otimes\tau)(m_1\otimes m_2\otimes m_3)\\
&=(\tau\otimes 1_M)(1_M\otimes\tau)(\tau\otimes 1_M)(1_M\otimes\tau)(m_1\otimes m_3\otimes m_2)\\
&=(\tau\otimes 1_M)(1_M\otimes\tau)(\tau\otimes 1_M)(m_1\otimes m_2\otimes m_3)\\
&=(\tau\otimes 1_M)(1_M\otimes\tau)(m_2\otimes m_1\otimes m_3)\\
&=(\tau\otimes 1_M)(m_2\otimes m_3\otimes m_1)\\
&=m_3\otimes m_2\otimes m_1\\
\end{align}
$$
$$
\begin{align}
R_{(2,3)}R_{(1,3)}R_{(1,2)}(m_1\otimes m_2\otimes m_3)&=
(1_M \otimes \tau)(1_M\otimes\tau)(\tau\otimes 1_M)(1_M\otimes\tau)(\tau\otimes 1_M)(m_1\otimes m_2\otimes m_3)\\
&=(1_M \otimes \tau)(1_M\otimes\tau)(\tau\otimes 1_M)(1_M\otimes\tau)(m_2\otimes m_1\otimes m_3)\\
&=(1_M \otimes \tau)(1_M\otimes\tau)(\tau\otimes 1_M)(m_2\otimes m_3\otimes m_1)\\
&=(1_M \otimes \tau)(1_M\otimes\tau)(m_3\otimes m_2\otimes m_1)\\
&=(1_M \otimes \tau)(m_3\otimes m_1\otimes m_2)\\
&=m_3\otimes m_2\otimes m_1\\
\end{align}
$$
so we conclude
$$
R_{(1,2)}R_{(1,3)}R_{(2,3)}(m_1\otimes m_2\otimes m_3)=R_{(2,3)}R_{(1,3)}R_{(1,2)}(m_1\otimes m_2\otimes m_3)\tag{1}
$$
Now take an arbitrary $u\in M\otimes M\otimes M$; then we can write
$$
u=\sum\limits_{i=1}^n m_1^{(i)}\otimes m_2^{(i)}\otimes m_3^{(i)}
$$
Hence using $(1)$ we get
$$
\begin{align}
R_{(1,2)}R_{(1,3)}R_{(2,3)}(u)
&=R_{(1,2)}R_{(1,3)}R_{(2,3)}\left(\sum\limits_{i=1}^n m_1^{(i)}\otimes m_2^{(i)}\otimes m_3^{(i)}\right)\\
&=\sum\limits_{i=1}^n R_{(1,2)}R_{(1,3)}R_{(2,3)}(m_1^{(i)}\otimes m_2^{(i)}\otimes m_3^{(i)})\\
&=\sum\limits_{i=1}^n R_{(2,3)}R_{(1,3)}R_{(1,2)}(m_1^{(i)}\otimes m_2^{(i)}\otimes m_3^{(i)})\\
&=R_{(2,3)}R_{(1,3)}R_{(1,2)}\left(\sum\limits_{i=1}^n m_1^{(i)}\otimes m_2^{(i)}\otimes m_3^{(i)}\right)\\
&=R_{(2,3)}R_{(1,3)}R_{(1,2)}(u)\\
\end{align}
$$
Since $u\in M\otimes M\otimes M$ is arbitrary we conclude
$$
R_{(1,2)}R_{(1,3)}R_{(2,3)}=R_{(2,3)}R_{(1,3)}R_{(1,2)}
$$ |
Finding a closed form expression for $\sum_{k=0}^n(k^2+3k+2)$ using generating functions | Yes, this approach of splitting into three sums is valid, but you forgot the $\sum_{k=0}^n$. Here's a correct solution. Let $a_n=\sum_{k=0}^n(k^2+3k+2)$. Then
\begin{align}
\sum_{n=0}^\infty a_n x^n &= \sum_{n=0}^\infty \left(\sum_{k=0}^n(k^2+3k+2)\right) x^n\\
&= \sum_{k=0}^\infty (k^2+3k+2) \sum_{n=k}^\infty x^n\\
&= \sum_{k=0}^\infty (k^2+3k+2) \frac{x^k}{1-x}\\
&= \frac{1}{1-x}\left(\sum_{k=0}^\infty k^2 x^k+3\sum_{k=0}^\infty kx^k+2\sum_{k=0}^\infty x^k\right)\\
&= \frac{1}{1-x}\left(\frac{x(x+1)}{(1-x)^3}+\frac{3x}{(1-x)^2}+\frac{2}{1-x}\right)\\
&= \frac{1}{1-x}\cdot\frac{2}{(1-x)^3}\\
&= \frac{2}{(1-x)^4}\\
&= 2\sum_{n=0}^\infty \binom{n+3}{3} x^n,
\end{align}
so $$a_n = 2 \binom{n+3}{3} = \frac{(n+3)(n+2)(n+1)}{3}.$$
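A quick check of the closed form (my own sketch):

from math import comb

def a(n):
    # Direct partial sum of k^2 + 3k + 2.
    return sum(k * k + 3 * k + 2 for k in range(n + 1))

for n in range(8):
    assert a(n) == 2 * comb(n + 3, 3)    # matches the closed form above
print([a(n) for n in range(8)])          # [2, 8, 20, 40, 70, 112, 168, 240]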
More generally, note that if $B(x)=\sum_{n=0}^\infty b_n x^n$, then the generating function for the partial sums $\sum_{k=0}^n b_k$ is $B(x)/(1-x)$. |
Issue with Given Solution for Problem - SOA FM Exam Practice | $\frac{dP}{dt} = .02P + 15000$
$\frac{50\cdot dP}{P + 750000} = dt$
$50\cdot \ln|P + 750000| = t + C$
$\ln|P + 750000| = .02(t + C)$
$P + 750000 = e^{.02C}\cdot e^{.02t}$
When $t = 0, P = 0$
$e^{.02C} = 750000$
$P = 750000e^{.02t} - 750000$ .....is the specific solution
When $P = 1000000$, $t= ?$
$1000000 = 750000e^{.02t} - 750000$
$.02t = \ln (\frac{1750000}{750000}) = \ln (2.3333333)$
$t = 42.36489$ quarters
$ = 42.36489\cdot 3 = 127.09$ months |
When is this matrix positive definite? | Given a matrix $M_{n\times n}$ defined as proposed, its eigenvalues are
$$
\{1-p,\cdots,1-p,1+(n-1)p\}
$$
so the conditions are
$$
\cases{
1-p > 0\\
1+(n-1)p > 0
}
$$
or
$$
-\frac{1}{n-1}< p < 1
$$
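A quick numerical check of the eigenvalue claim (a sketch; numpy is assumed, and $n=5$, $p=0.3$ are my example values):

import numpy as np

n, p = 5, 0.3
M = (1 - p) * np.eye(n) + p * np.ones((n, n))   # 1 on the diagonal, p elsewhere
print(np.linalg.eigvalsh(M))   # four eigenvalues 1 - p = 0.7, one 1 + (n-1)p = 2.2

Sweeping $p$ over a grid confirms positive definiteness exactly on the interval above. |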
Extending a quotient map to a covering map on $\mathbb{RP}^2$ | Recall that the map $f:S^2\to \mathbb{RP}^2$ which identifies antipodal points is a covering, and that $S^2$ is simply-connected, hence $S^2$ is the universal cover of $\mathbb{RP}^2$. If we had a covering $p:\mathbb R^2\to \mathbb{RP}^2$ then since $\mathbb R^2$ is also simply connected it would also be a universal cover of $\mathbb{RP}^2$. Since universal covers are unique up to homeomorphism, this implies that $S^2$ is homeomorphic to $\mathbb R^2$, which is false (for example, because $S^2$ is compact while $\mathbb R^2$ is not). Thus no such covering $p:\mathbb R^2\to \mathbb{RP}^2$ can exist. |
Linear algebra problem and What can we learn from tr(A) = 0? | Hints:
For (1):
If $\lambda\ne\mu$ are two distinct eigenvalues of $A$ with eigenvectors $v$ and $w$, then $v+w$ is not an eigenvector.
If all vectors are eigenvectors, then $A=\lambda I$.
For (2): let $v$ and $Av$ be the columns of $P$, i.e. we write the matrix of $x\mapsto Ax$ in the basis $(v,Av)$.
(We even get a representation $P^{-1}AP=\pmatrix{0&a\\1&0}$.) |
Topological Group $G$ totally disconnected $\Rightarrow$ $G$ hausdorff? | It's true. If $G$ is a totally disconnected topological group, then the connected component of the identity $e$ is $\{e\}$ (because components of a totally disconnected space are just points). Since connected components of any topological space are closed, $\{e\}$ is closed. This implies that $G$ is Hausdorff: the diagonal $\Delta\subseteq G\times G$ is the inverse image of $\{e\}$ under the continuous map $(g,h)\mapsto gh^{-1}$, and is therefore closed. |
solution to a differential equation | If you differentiate $y'$, you have:
$$y'' = -y$$
which has the solutions:
$$y=C_1 \cos(t) + C_2 \sin(t).$$ |
Is this a natural transformation? | Putting it another way, a choice of a nonzero vector $w \in W$ is the same as a choice of isomorphism $K \rightarrow W$ ($1\in K$ gets mapped to $w$), and $V$ is naturally isomorphic to $V \otimes K$. So, yes, you are right saying that there exists a natural isomorphism for every nonzero $w \in W$.
In fact, we can prove that you've found all natural isomorphisms. Since the identity functor is represented by $K$, that is, there is an isomorphism $\operatorname{Id}_{Vect_K} \cong \operatorname{Hom}(K, -)$, we can use Yoneda lemma to get all natural transformations $\operatorname{Id}_{Vect_K} \rightarrow -\otimes W$:
$$\operatorname{Nat}(\operatorname{Id}_{Vect_K}, -\otimes W)=\operatorname{Nat}(\operatorname{Hom}(K, -), -\otimes W)=K\otimes W \cong W$$
So, each element of $W$ gives a natural transformation; for nonzero $w$ these are isomorphisms.
I suppose the author means that there is no single preferred natural isomorphism - there are many of them, each as good as the others. |
Proof of show transitivity between 3 variables with exponents | You can write $b = a^5x_1$ and $c = b^5x_2$ for $x_1,x_2 \in \mathbb{Z}$. Therefore, $c = b^5x_2 = (a^5x_1)^5 x_2 = a^{25} {x_1}^5 x_2$. We see that $a^{25}$ divides $c$, so in particular, $a^{20}$ divides $c$. |
Definition of submanifolds by regular values | The set $U = \{x\in M : Df|_x\text{ is surjective}\}$ is open in $M$. Thus it is a manifold. If you restrict $f$ to $U$, then
$$K = (f|_U)^{-1}(q)$$
Thus $K$ is a manifold and is a submanifold in $U$. |
How to solve for matrix $A$? | Treat it as an equation
$$AB=I+2A\iff AB-2A=I\iff A(B-2I)=I\iff A=(B-2I)^{-1}$$ |
Pass a logic function using only NAND | You can apply DeMorgan's Law without distributing your NOTs.
For example:
$ab+c \longleftrightarrow \overline {\overline{a b} \overline c} $ |
Poisson distribution 3 | Assume $1$ month $= 30$ days. Let $X_i$ be the number of accidents on the $i$th day in shop $A$ and $Y_i$ the number of accidents on the $i$th day in shop $B$. So $X_i \sim P(\frac{2}{90})$ and $Y_i \sim P(\frac{3}{120})$. From 11 Jan to 1 Mar we assume a total of $21+28+31=80$ days.
Note that $\sum_{i=1}^{n}X_i\sim P(\frac{2n}{90})$ and $\sum_{i=1}^{n}Y_i \sim P(\frac{3n}{120})$
So the probability that you can report 'no accidents so far' is $$Pr[\sum_{i=1}^{80}Y_i=0]=\exp{(-\frac{3\cdot 80}{120})}=0.1353$$
Let the total number of days in March be $31$.
Again the probability that there will be no accidents in shop A in the next month and one or more accidents in shop B is =$$\begin{align}P[\sum_{i=1}^{31}X_i=0,\sum_{i=1}^{31}Y_i \geq 1]&= P[\sum_{i=1}^{31}X_i=0]P[\sum_{i=1}^{31}Y_i \geq 1] \\ &=\exp{(-\frac{2\cdot 31}{90})}[1-\exp{(-\frac{3\cdot 31}{120})}]=0.2708\end{align}$$ |
Prove $P(X<2<4X)=P(\frac{1}{2}<X<2)$ | $$X < 2 < 4X \to \left\{ \begin{array}{l}X < 2\\2 < 4X \to X > \frac{1}{2}\end{array} \right.$$ |
Partial Differential Equation with Travelling Wave Solutions | We first set $\gamma=x-\alpha t$. Then we obtain the ordinary differential
equation
$$
-\alpha \frac{d \phi}{d \gamma}+\frac{d}{d \gamma}f(\phi(\gamma))+\frac{d^{3} \phi}{d \gamma^{3}}=0.
$$
Integrating once we arrive at
$$
-\alpha \phi+f(\phi(\gamma))+\frac{d^{2} \phi}{d \gamma^{2}}=C_{1}
$$
where $C_{1}$ is a constant of integration. One more integration yields
$$
\frac{1}{2}\left(\frac{d \phi}{d \gamma}\right)^{2}=C_{2}+C_{1} \phi+\frac{\alpha}{2} \phi^{2}-F(\phi(\gamma)), \quad F(\phi)=\int_{0}^{\phi} f(y) d y
$$
Where $C_{2}$ is another constant of integration. |
Trigonometry, some true or false tasks about cosine-rule and sine-rule | Michael Hardy has the correct answer for B. There is a specific side to find, but if the angle is not between the known sides there could be two solutions. For C, it is saying that it is false that there can be two answers. You have argued that there is only one solution to the equation, which agrees with the book. In D you are right-sometimes there are two answers, but not all the time. It is also the case if your given angle is greater than 90 degrees. |
How to write $K$ as sum of $N$ integers? | Do you know what "smoothing" is?
Hint: Show that if $a \geq b+2$ then $(a-1)^2 + (b+1)^2 < a^2 + b^2$.
This proves that when the variance is minimal, each pair of integers must differ by at most 1 (otherwise we can reduce the variance further).
Hence, your claim follows. |
Find basis of vector space | This vector space is $1$-dimensional. Take $v = e$; then for every $u \in (0,\infty)$ there exists $\lambda \in \mathbb{R}$ with $u = v^{\lambda} = e^{\lambda}$.
You can gain some intuition about this vector space by considering logarithms: if $u,v \in V$, then $\log u + \log v = \log (u + v)$, where $u+v$ denotes the vector-space addition (which is ordinary multiplication of positive reals). If $\lambda$ is a scalar, $\log (\lambda v) = \lambda \log v$. |
Verifying a solution to a Differential Equation 2 | Hint: You need to differentiate and plug in. Use the Fundamental Theorem of Calculus.
Note that the derivative of
$$\int_0^t e^{-s^2}\,ds$$
with respect to $t$ is $e^{-t^2}$. |
Show that there aren't negative eigenvalues. | What you want to be able to show is that, if $y$ is a solution of the equation,
$$
0 \le \int_{0}^{1}(-y''(t)-ty(t))y(t)dt = \lambda \int_{0}^{1}y(t)^2dt.
$$
After integrating by parts, assuming $y(0)=y(1)=0$, you must show
$$
\int_{0}^{1}ty(t)^2dt \le \int_{0}^{1}y'(t)^2dt.
$$
This inequality holds more generally for a $C^2$ function with $y(0)=y(1)=0$ because the Cauchy-Schwarz inequality gives
\begin{align}
\int_{0}^{1}ty(t)^2dt & = \int_{0}^{1}t\left(\int_{t}^{1}y'(s)ds\right)^{2}dt \\
& \le \int_{0}^{1}t\int_{t}^{1}y'(s)^2ds\int_{t}^{1}1^2ds\,dt \\
& = \int_{0}^{1}t(1-t)\int_{t}^{1}y'(s)^2ds \,dt \\
& = \left.\left(\frac{t^2}{2}-\frac{t^3}{3}\right)\int_{t}^{1}y'(s)^2ds\right|_{t=0}^{1}+\int_{0}^{1}\left(\frac{t^2}{2}-\frac{t^3}{3}\right)y'(t)^2dt \\
& = \int_{0}^{1}\left(\frac{t^2}{2}-\frac{t^3}{3}\right)y'(t)^2dt.
\end{align}
The function $f(t)=\frac{t^2}{2}-\frac{t^3}{3}$ satisfies $f(0)=0$ and $f'(t) > 0$ for $0 < t < 1$. Therefore, $f(t) \le f(1)=\frac{1}{6}$ for $0 \le t \le 1$. Hence,
$$
\int_{0}^{1}ty(t)^2dt \le \frac{1}{6}\int_{0}^{1}y'(t)^2dt \\
-\int_{0}^{1}ty(t)^2dt \ge -\frac{1}{6}\int_{0}^{1}y'(t)^2dt.
$$
Because of this, any real solution $y$ of the eigenvalue equation with eigenvalue $\lambda$ must satisfy
\begin{align}
\lambda\int_{0}^{1}y(t)^2dt & = \int_{0}^{1}\left(-y''(t)y(t)-ty(t)^2\right)dt \\
& = \left.-y'(t)y(t)\right|_{t=0}^{1}+\int_{0}^{1}y'(t)^2dt - \int_{0}^{1}ty(t)^2dt \\
& = \int_{0}^{1}y'(t)^2dt - \int_{0}^{1}ty(t)^2dt \\
& \ge \int_{0}^{1}y'(t)^2dt - \frac{1}{6}\int_{0}^{1}y'(t)^2dt \\
& =\frac{5}{6}\int_{0}^{1}y'(t)^2dt \ge 0.
\end{align}
Therefore $\lambda \ge 0$, and $\lambda=0$ iff $y'\equiv 0$, which forces $y=0$ because $y(0)=0$ for such a solution. Hence, all eigenvalues are strictly positive. |
Left-hand limit and Right-hand limit of a function | You have $\frac{\sin(x)}{x}\to 1$ when $x\to 0$.
Here, however, you're trying to use it when $x$ is $\frac{1}{-h}$, which goes to $-\infty$ rather than (as $h$ itself does) to $0$.
What your calculation does show is that
$$ \lim_{x\to \pm\infty} f(x) =1 $$
But that was not the limit you set out to find. |
If $K[Q_8]\cong K[D_8]$, char $K=p$ odd, $p=?$ | This will be true as long as $K$ is a splitting field for $G$, i.e. as long as all irreducible representations of $G$ over $\bar{K}$ can be defined over $K$. That's because these representations will be more or less "the same" as the complex ones. Concretely, every complex representation of $Q_8$ can be realised over $\mathbb{Z}[i]$. Pick a prime ideal $\mathfrak{p}$ in $\mathbb{Z}[i]$ and reduce all the matrix entries modulo $\mathfrak{p}$. You will get an irreducible representation over the residue field $K$ of $\mathfrak{p}$. If the prime ideal $\mathfrak{p}$ is split in $\mathbb{Z}[i]$, then the characteristic of $K$ is $p\equiv 1\pmod{4}$ and $K=\mathbb{F}_p$. If the prime ideal is inert, then the characteristic of $K$ is $p\equiv 3\pmod{4}$ and $K=\mathbb{F}_{p^2}$.
Intuitively, you can think of it like this: if $p\equiv 1\pmod 4$, then $\mathbb{F}_p$ contains the fourth roots of unity, so there is nothing stopping you from realising the usual complex representations over $\mathbb{F}_p$, by simply replacing $\pm i$ with these fourth roots of unity. If $p\equiv 3\pmod 4$, then to realise your representations, you need to pass to the quadratic extension to acquire the fourth roots of unity.
In summary, for any odd $p$, there is a finite field $K$ of characteristic $p$ such that $K[Q_8]\cong K[D_8]$. But depending on $p$, this $K$ might have to be of order $p^2$, rather than $p$.
One can also express all of this in terms of Wedderburn and division rings. The upshot is that as soon as you pass to the splitting field, your division rings are all equal to this base field. |
In the ring $\mathbb{Z}[\sqrt{3}]=\{a+b\sqrt{3}\mid a,b \in \mathbb{Z}\}$, show the following. | You say that $(1-2\sqrt3)\nmid(8-5\sqrt3)$. Let's test this out.
Consider
$$\frac{8-5\sqrt3}{1-2\sqrt3}=\frac{(8-5\sqrt3)(1+2\sqrt3)}{(1-2\sqrt3)(1+2\sqrt3)}=\frac{-22+11\sqrt3}{-11}=2-\sqrt3.$$
I reckon that actually $(1-2\sqrt3)\mid(8-5\sqrt3)$.
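(A hedged aside: if you want to check such products without rationalising by hand, here is a tiny sketch that represents $a+b\sqrt3$ as the integer pair $(a,b)$ and multiplies exactly.)

def mul(p, q):
    # (a + b*sqrt(3)) * (c + d*sqrt(3)) = (a*c + 3*b*d) + (a*d + b*c)*sqrt(3)
    a, b = p
    c, d = q
    return (a * c + 3 * b * d, a * d + b * c)

print(mul((1, -2), (2, -1)))  # (8, -5), i.e. (1-2*sqrt(3))*(2-sqrt(3)) = 8-5*sqrt(3)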
But does $(8-5\sqrt3)\mid(1-2\sqrt3)$? Over to you! |
Localization at finitely many minimal prime ideals | 1) Let $\phi : A \to \prod_{i=1}^n A_{p_i}$ be the natural map (i.e. product of localization maps). Every element of $S$ is sent to a unit under $\phi$, so there is an induced map $\varphi : S^{-1}A \to \prod_{i=1}^n A_{p_i}$. But $\varphi$ is locally an isomorphism (at every maximal ideal $p_i$ of $S^{-1}A$, $\varphi_{p_i} : (S^{-1}A)_{p_i} \cong A_{p_i} \to (\prod_{i=1}^n A_{p_i})_{p_i} \cong A_{p_i}$), so $\varphi$ is globally an isomorphism.
2) For a reduced ring, the set of nonzerodivisors is precisely the complement of the union of the minimal primes (this is easy to see in the Noetherian case, but also holds in general). Thus in this case the two notions of total ring of fractions coincide. |
How to get $6$ from the numbers $\{6, 7, 8, 9\}$ using only addition, subtraction, division, and multiplication. | The following disgraceful python code performs an exhaustive search and gives no solutions for 6. The closest hit is $6+7/(8*9)\approx 6.097222$ (or $6-7/(8*9)$). You can change the target in the line if K==6: from 6 to any other number to find other solutions. For instance, it finds correctly that $6/(7-9)+8=5$ and $(6+8)/(9-7)=7$. You can also remove the break statements and the found_six_flag bookkeeping to make it spit out all possible answers. For instance, I believe all solutions for 7 are (with some dupes)
(6+8)/(9-7)=7.0
(8+6)/(9-7)=7.0
9-(6+8)/7=7.0
9-((6+8)/7)=7.0
9-(8+6)/7=7.0
9-((8+6)/7)=7.0
I didn't bother checking whether two choices of bracketing result in the same expression. In particular, it checks $4!\times 4^3\times 11=16896$ expressions (if the code doesn't terminate first). I hard-coded all possible bracketings because I don't know yet how to code better, but these are all of them because of How many ways are there to put parentheses between $n$ numbers? . It seems like I can learn from this website, which solves a very similar problem. Anyway, the code:
import itertools
numbers = "6789"
functions = "+-*/"
b = "("
B = ")"
found_six_flag = False
for n in itertools.permutations(numbers):
    for f in itertools.product(functions, repeat=3):
        results = []
        results.append(n[0]+f[0]+n[1]+f[1]+n[2]+f[2]+n[3])          #1 a+b+c+d
        results.append(b+n[0]+f[0]+n[1]+B+f[1]+n[2]+f[2]+n[3])      #2 (a+b)+c+d
        results.append(b+n[0]+f[0]+n[1]+f[1]+n[2]+B+f[2]+n[3])      #3 (a+b+c)+d
        results.append(n[0]+f[0]+b+n[1]+f[1]+n[2]+B+f[2]+n[3])      #4 a+(b+c)+d
        results.append(n[0]+f[0]+b+n[1]+f[1]+n[2]+f[2]+n[3]+B)      #5 a+(b+c+d)
        results.append(n[0]+f[0]+n[1]+f[1]+b+n[2]+f[2]+n[3]+B)      #6 a+b+(c+d)
        results.append(b+n[0]+f[0]+n[1]+B+f[1]+b+n[2]+f[2]+n[3]+B)  #7 (a+b)+(c+d)
        results.append(b+b+n[0]+f[0]+n[1]+B+f[1]+n[2]+B+f[2]+n[3])  #8 ((a+b)+c)+d
        results.append(b+n[0]+f[0]+b+n[1]+f[1]+n[2]+B+B+f[2]+n[3])  #9 (a+(b+c))+d
        results.append(n[0]+f[0]+b+b+n[1]+f[1]+n[2]+B+f[2]+n[3]+B)  #10 a+((b+c)+d)
        results.append(n[0]+f[0]+b+n[1]+f[1]+b+n[2]+f[2]+n[3]+B+B)  #11 a+(b+(c+d))
        for result in results:
            K = eval(result)
            if K == 6:
                found_six_flag = True
                print(result + "=" + str(K))
                break
        if found_six_flag:
            break
    if found_six_flag:  # also stop the outermost loop after the first hit
        break
You can run this code on this website.
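For completeness, here is a shorter alternative sketch (my own rewrite, not the code above): it enumerates every bracketing recursively by splitting the list of numbers at each position, and uses exact Fraction arithmetic so there are no floating-point near-misses. It also finds no way to make 6.

from fractions import Fraction
from itertools import permutations

def values(nums):
    # Every value obtainable from nums (in this order) with +, -, *, /.
    if len(nums) == 1:
        yield Fraction(nums[0])
        return
    for i in range(1, len(nums)):       # split covers all bracketings
        for a in values(nums[:i]):
            for c in values(nums[i:]):
                yield a + c
                yield a - c
                yield a * c
                if c != 0:
                    yield a / c

target = Fraction(6)
hits = [p for p in permutations([6, 7, 8, 9]) if target in set(values(p))]
print(hits)  # [] -- no expression evaluates to 6 |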
on sequences of functions converging uniformly and their sequences of derivatives | You may have seen user126154's answer. They demonstrate that if $f_n \to 0$ pointwise, $f_n : [0,1] \to \mathbb{R}$ is differentiable, and $f'_n$ is continuous on $[0,1]$, then $X = \{x \in [0,1] : g(x) = 0 \}$ is dense in $[0,1]$, where $f'_n \to g$ pointwise.
That means if a counterexample does exist, we are limited to discontinuous, unbounded sequences $(f'_n)$.
However, this could be impossible. To show that such a sequence cannot exist, here's an idea:
According to Theorem 2 in Haskell Curry's answer, the set $E_n$ of discontinuities of $f'_n$ is a meagre set.
Then, $E = \bigcup_{n=1}^\infty E_n$ is also a meagre set, with the property that every $f'_n$ is continuous on $D = \mathbb{R} - E$. By definition, $D$ is comeagre, and since $\mathbb{R}$ is a Baire space, $D$ is dense in $\mathbb{R}$.
Maybe we could use the same argument in user126154's answer, by replacing $[0,1]$ with $D$? If so, it would prove that $X = \{x \in D : g(x) = 0 \}$ is dense in $D$, and hence, $X$ is dense in $\mathbb{R}$.
Consequently, not only does $g$ vanish somewhere, but $g$ vanishes on a dense subset of $\mathbb{R}$. |
Mean and Variance of Number of Heads | In Law of Total Variance, $Y$ is a random variable. So for simplicity the definition should be like
$$ Y = \begin{cases} 1 & \text{if Coin 1 is chosen} \\
0 & \text{if Coin 2 is chosen} \end{cases}$$
And thus
$$ Var[X \mid Y] = \begin{cases}
\displaystyle n \times \frac {1} {3} \times \frac {2} {3} = \frac {2n} {9}
& \text{if} & Y = 1 \\
\displaystyle n \times \frac {1} {4} \times \frac {3} {4} = \frac {3n} {16}
& \text{if} & Y = 0
\end{cases}$$
You may write this compactly as
$$ Var[X \mid Y] = \frac {2n} {9} Y + \frac {3n} {16}(1 - Y)$$
The expectation will be
$$ E[Var[X \mid Y]] = \frac {2n} {9} \times \frac {1} {2} + \frac {3n} {16} \times \frac {1} {2} = \frac {59n} {288} $$
as the question assumes both coins are equally likely to be chosen.
Similarly,
$$ E[X \mid Y] = \frac {n} {3} Y + \frac {n} {4} (1 - Y) = \frac {n} {4} + \frac {n} {12} Y$$
$$ Var[E[X \mid Y]] = \frac {n^2} {12^2} \times \frac {1} {4} = \frac {n^2} {576} $$
Therefore,
$$ Var[X] = \frac {59n} {288} + \frac {n^2} {576} $$
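If you want to sanity-check this numerically, here is a small Monte Carlo sketch (assuming, as above, head probabilities $1/3$ and $1/4$ and a fair choice between the coins; $n=12$ is made up). The empirical variance should land near $\frac{59n}{288}+\frac{n^2}{576}$.

import random

n, trials = 12, 200_000
xs = []
for _ in range(trials):
    p = 1/3 if random.random() < 0.5 else 1/4   # pick a coin uniformly
    xs.append(sum(random.random() < p for _ in range(n)))
mean = sum(xs) / trials
var = sum((x - mean)**2 for x in xs) / trials
print(var, 59*n/288 + n**2/576)   # the two numbers should be close |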
Can a polynomial have an isolated local minimum at a transcendental point? | The answer is no; the coordinates of any isolated local minimum must be algebraic.
There's a nice argument I'd like to be able to use involving quotienting by the Jacobian ideal which I think is what Tabes suggests in the comments, but the problem is that the critical locus might be positive-dimensional over $\mathbb{C}$. Instead we can argue as follows. There are a collection of fields called real closed fields that can be defined in several equivalent ways, and we need that
the real algebraic numbers $\mathbb{R} \cap \overline{\mathbb{Q}}$ are a real closed field, and that
every real closed field satisfies the same first-order sentences in the language of fields as $\mathbb{R}$.
The latter might not seem so impressive until you know that inequalities such as $x \le y$ are expressible as first-order sentences: over a real closed field this condition is equivalent to $\exists z : y - x = z^2$. We can also express $x < y$ as the conjunction of $x \le y$ and $x \neq y$.
In particular, it follows that the claim that there exists a point $x = (x_1, \dots x_n)$ which is an isolated local minimum of $f$ can be expressed in the first-order language of fields! (Here we crucially need that the coefficients of $f$ are rational so we can write them all down in the first-order language of fields.) Namely, it's equivalent to the existence of $\epsilon > 0$ such that for all points $y = (y_1, \dots y_n)$ such that $\sum (x_i - y_i)^2 < \epsilon$ we have that either $f(y) > f(x)$ or $y = x$.
Hence if this sentence is true over $\mathbb{R}$ it's true over any real closed field and so true over the real algebraic numbers. But we can say more: if $f$ has an isolated local minimum $a = (a_1, \dots a_n)$ then we can express the claim that $f$ has an isolated local minimum near $a$ as a first-order sentence, by picking a sufficiently small $\delta > 0$ and finding rational lower and upper bounds $r_i \in (a_i - \delta, a_i), s_i \in (a_i, a_i + \delta)$ and adding to the above sentence the conditions that $r_i \le x_i \le s_i$. If we pick $\delta$ small enough so that, over $\mathbb{R}$, $a$ is the only isolated local minimum satisfying these bounds, then the existence of $a$ over $\mathbb{R}$ implies the existence of an isolated local minimum near $a$ over any real closed field and in particular over the real algebraic numbers, which must be $a$ itself. So $a$ has only algebraic coordinates.
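As a small, hedged illustration (a made-up one-variable example, not part of the argument above): asking a CAS for the critical points of an $f \in \mathbb{Q}[x]$ with an irrational isolated minimum shows the coordinates coming out algebraic, as the theorem predicts.

import sympy as sp

x = sp.symbols('x')
f = (x**2 - 2)**2                  # isolated local minima at x = +/- sqrt(2)
crit = sp.solve(sp.diff(f, x), x)  # critical points of f
print(crit)                        # [0, -sqrt(2), sqrt(2)]
print([c.is_algebraic for c in crit])   # [True, True, True] |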
properties of least square estimators in regression | One has
$$
\hat\beta_1 = \frac{\sum_{i=1}^n (y_i-\bar y)(x_i-\bar x)}{\sum_{i=1}^n (x_i - \bar x)^2}
$$
where $\bar y = (y_1+\cdots+y_n)/n$ and $\bar x = (x_1+\cdots+x_n)/n$.
This is nonlinear as a function of $x_1,\ldots,x_n$ since there is division by a function of the $x$s and there is squaring. But it is linear as a function of $y_1,\ldots,y_n$. To see that, first observe that the denominator does not depend on $y_1,\ldots,y_n$, so we need only look at the numerator.
So look at
$$
y_i-\bar y = y_i - \frac{y_1 + \cdots + y_i + \cdots + y_n}{n} = \frac{-y_1 - y_2 - \cdots+(n-1)y_i-\cdots - y_n}{n} $$
This is linear in $y_1,\ldots,y_n$. Since the quantities $x_i-\bar x$, $i=1,\ldots,n$ do not depend on $y_1,\ldots,y_n$, the expression
$$
\sum_{i=1}^n (y_i-\bar y)(x_i-\bar x)
$$
is a linear combination of expressions each of which we just said is linear in $y_1,\ldots,y_n$. It is therefore itself a linear combination of $y_1,\ldots,y_n$.
Next, we have $\bar y = \hat\beta_0 + \hat\beta_1 \bar x$, so $\hat\beta_0 = \bar y - \hat\beta_1\bar x$. Since $\bar y$ is a linear combination of $y_1,\ldots,y_n$ and we just got done showing that $\hat\beta_1$ is a linear combination of $y_1,\ldots,y_n$, and $\bar x$ does not depend on $y_1,\ldots,y_n$, it follows that $\hat\beta_0$ is a linear combination of $y_1,\ldots,y_n$.
MATRIX VERSION:
$$
\begin{bmatrix} Y_1 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} 1 & X_1 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{bmatrix}
$$
$$
Y = M\beta + \varepsilon
$$
$$
\varepsilon \sim N_n( 0_n, \sigma^2 I_n)
$$
where $0_n\in\mathbb R^{n\times 1}$ and $I_n\in\mathbb R^{n\times n}$ is the identity matrix. Consequently
$$
Y\sim N_n(M\beta,\sigma^2 I_n).
$$
One can show (and I show further down below) that
$$
\hat\beta = (M^\top M)^{-1}M^\top Y. \tag 1
$$
Therefore
$$
\hat\beta \sim N_2(\Big((M^\top M)^{-1}M^\top\Big) M\beta,\quad (M^\top M)^{-1}M^\top\Big(\sigma^2 I_n\Big)M(M^\top M)^{-1})
$$
$$
= N_2( \beta,\quad \sigma^2 (M^\top M)^{-1}).
$$
Here I have used the fact that when one multiplies a normally distributed column vector on the left by a constant (i.e. non-random) matrix, the expected value gets multiplied by the same matrix on the left and the variance gets multiplied on the left by that matrix and on the right by its transpose.
So how does one prove $(1)$?
"Least squares" means the vector $\hat Y$ of fitted values is the orthogonal projection of $Y$ onto the column space of $M$. That projection is
$$
\hat Y = M(M^\top M)^{-1}M^\top Y. \tag 2
$$
To see that this is the orthogonal projection, consider two things. First, suppose $Y$ were orthogonal to the column space of $M$. Then the product $(2)$ must be $0$, since the product of the last two factors, $M^\top Y$, would be $0$. Next, suppose $Y$ is actually in the column space of $M$. Then $Y=M\gamma$ for some $\gamma\in \mathbb R^{2\times 1}$. Put $M\gamma$ into $(2)$ and simplify, and the product will be $M\gamma=Y$, so that vectors in the column space are mapped to themselves.
Now we have
$$
M\hat\beta=\hat Y = M(M^\top M)^{-1} M^\top Y. \tag 3
$$
If we could multiply both sides of $(3)$ on the left by an inverse of $M$, we'd get $(1)$. But $M$ is not a square matrix and so has no inverse. But $M$ is a matrix with linearly independent columns and therefore has a left inverse, and that does the job. Its left inverse is
$$
(M^\top M)^{-1}M^\top.
$$
The left inverse is not unique, but this is the one that people use in this context.
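To close, a quick numerical sketch (with hypothetical data, numpy only) confirming that the closed form $(1)$ matches a library least-squares solver:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=50)    # true beta0 = 2, beta1 = 0.5

M = np.column_stack([np.ones_like(x), x])        # design matrix
beta_hat = np.linalg.solve(M.T @ M, M.T @ y)     # (M^T M)^{-1} M^T Y
beta_ls, *_ = np.linalg.lstsq(M, y, rcond=None)  # library least squares
print(beta_hat, beta_ls)                         # the two estimates agree |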
How can I isolate for the $z$ exponent? | Hint: $P= \frac{e^z+1-1}{e^z+1}=1-\frac{1}{e^z+1} $ |
For which values of $x$ is the following series convergent: $\sum_0^\infty \frac{1}{n^x}\arctan\Bigl(\bigl(\frac{x-4}{x-1}\bigr)^n\Bigr)$ | If $x>1$ the series converges absolutely because $\arctan$ is a bounded function:
$$
\Biggl|\frac{1}{n^x}\,\arctan\Bigl(\Bigl(\frac{x-4}{x-1}\Bigr)^{n}\Bigr)\Biggr|\le\frac{\pi}{2\,n^x}.
$$
If $x<1$ then $(x-4)/(x-1)>1$ and
$$
\lim_{n\to\infty}\arctan\Bigl(\Bigl(\frac{x-4}{x-1}\Bigr)^{n}\Bigr)=\frac{\pi}{2}.
$$
It follows that the series diverges in this case, since its terms are eventually comparable to $\frac{\pi}{2}\,n^{-x}$ and $\sum n^{-x}$ diverges for $x<1$.
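A quick numerical illustration of the dichotomy (a sketch; note that $r^n$ overflows to $\pm\infty$ for large $n$, which is harmless here since $\arctan(\pm\infty)=\pm\pi/2$):

import numpy as np

def partial_sum(x, N):
    n = np.arange(1, N + 1)
    r = (x - 4) / (x - 1)
    with np.errstate(over='ignore'):    # r**n -> +/-inf is fine inside arctan
        return np.sum(np.arctan(r**n) / n**x)

print(partial_sum(2.0, 10**4), partial_sum(2.0, 10**5))   # stabilises: convergence
print(partial_sum(0.5, 10**4), partial_sum(0.5, 10**5))   # keeps growing: divergence |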
Derived functor vs. spectral sequence | The spectral sequence is the Leray spectral sequence
$$E_2^{i,j} := R^if_* R^jg_* \mathcal F\implies R^{i+j}(f\circ g)_*\mathcal F.$$ |
Why is this a useful way to prove the characterisation of bases? | The idea is that a basis is sort of a "perfect" generating set.
If a generating (spanning) set $E$ is not linearly independent, it means we have some "redundancy" between the elements: we have more than we need to generate the space $V$. For example, if $v_3 = v_1 + v_2$, then we do not need all $3$ of $v_1,v_2$ and $v_3$, we can do without $v_3$ if we have $v_1$ and $v_2$. So, basically, we keep taking out elements of $E$ until it becomes linearly independent.
On the other hand, a linearly independent set is, de facto, a basis for the set it spans. However, this span may not be "all of $V$", so we need to "add to it". Adding anything that is already in $\text{span}(L)$ doesn't help our cause, we need to add something linearly independent to "get more".
It turns out that linear independence and spanning are sort of "dual" concepts, just like injective/surjective are for mappings. In fact, they correspond:
A linear map $T: V \to W$ is injective if it preserves linear independence.
A linear map $T:V \to W$ is surjective if it maps a generating set to a generating set.
So, given a set $S \subset V$, we want to know $2$ things:
$1$) Does it span $V$?
$2$) Is it linearly independent?
The neat thing is, if we only know $1$ out of the $2$, we can deduce the other based on the size of $S$. This is a real time-saving feature. |
An example where the supremum of Riesz's Lemma is not achieved | So if you have $f\in X$ with $\|f\|=1$, you need to find some $h\in L$ with $\|f-h\|<1$.
There is some open subinterval $(a,b)\subset[0,1]$ on which $|f|<\frac12$.
Let $g\in X$ vanish outside $(a,b)$ and satisfy $\int_0^1g=\int_0^1f$, so that $f-g\in L$.
Now put $h=\varepsilon\cdot(f-g)$ with $\varepsilon>0$, and note that $\|f-h\|<1$ if $\varepsilon$ is small enough:
For $x\notin(a,b)$, $$|f(x)-h(x)|=|(1-\varepsilon)f(x)|\le 1-\varepsilon<1.$$
And for $x\in(a,b)$,
$$|f(x)-h(x)|\le\tfrac12+\varepsilon(\tfrac12+\|g\|)<1 $$
if $\varepsilon$ is small enough. |
Rouché for Polynomial $p(z)=z^7+z(z-3)^3+1$ around non-centered annulus. | You get
$$
|z^7+1|\ge |z|^7-1\ge 2^7-1=127
$$
and
$$
|z(z-3)^3|\le 4
$$
on the circle $|z-3|=1$. Thus $p$ has the same number of roots inside the circle as $z^7+1$. Since every root of $z^7+1$ has modulus $1$ and $|z-3|\ge 2$ whenever $|z|=1$, the polynomial $z^7+1$ has no roots inside this circle, and hence neither does $p$.
Figure: roots of $z^7+\alpha z(z-3)^3+1$ for $\alpha\in[0,1]$, red for $\alpha=1$.
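One can corroborate this numerically. Expanding gives $p(z)=z^7+z^4-9z^3+27z^2-27z+1$; a quick sketch:

import numpy as np

# Count numerically computed roots of p inside the circle |z - 3| < 1.
coeffs = [1, 0, 0, 1, -9, 27, -27, 1]       # z^7 + z^4 - 9z^3 + 27z^2 - 27z + 1
roots = np.roots(coeffs)
print(sum(abs(r - 3) < 1 for r in roots))   # 0, as Rouché predicts |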
a question about summation of series, how to prove $\int_0^\infty e^{-x}S(x)$=$\sum_{i=0}^\infty a_nn!$ | This follows from the fact that:
$$
\int_0^\infty e^{-x} x^n \, dx = n!
$$
which you can derive by repeatedly integrating by parts. Then apply this formula term-by-term in the series; interchanging the sum and the integral requires justification, e.g. by monotone convergence when the $a_n$ are nonnegative.
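A quick symbolic check of the key identity for small $n$, as a sketch:

import sympy as sp
from math import factorial

x = sp.symbols('x')
for n in range(6):
    assert sp.integrate(sp.exp(-x) * x**n, (x, 0, sp.oo)) == factorial(n)
print("verified for n = 0, ..., 5")   # each integral equals n! |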
A $\Bbb Z$-module that is not a direct sum of cyclic modules | Your argument is correct.
Note that a $\mathbb{Z}$-module is an abelian group, and by the fundamental theorem of finitely generated abelian groups any finitely generated abelian group is a direct sum of cyclic $\mathbb{Z}$-modules, so in particular we are looking for an abelian group that is not finitely generated. This naturally suggests looking at $\mathbb{Q}$. |
How to think of $R[x,y]$-modules? | (I'm implicitly assuming that $R$ is a commutative ring.)
An $R[x_1,\ldots,x_n]$-module is the same thing as an $R$-module equipped with $n$ mutually commuting endomorphisms.
One way of putting it is that a left $S$-module $M$, where $S$ is a (not necessarily commutative) $R$-algebra, is the same thing as an $R$-module $M$ together with an $R$-algebra map from $S$ to the endomorphisms of $M$ (acting on the left on $M$). Now $R$-algebra maps $R[x_1,\ldots,x_n] \to T$ (where $T$ is a not necessarily commutative $R$-algebra) are exactly the same thing as $n$-tuples of pairwise commuting elements of $T$: in fancy speak, this says that $R[x_1,\ldots,x_n]$ represents the functor taking a $R$-algebra $T$ to the set of its $n$-tuples of pairwise commuting elements (when $T$ is commutative, of course, we can forget the "pairwise commuting" constraint).
To put it differently: once you know that an $R[x]$-module is an $R$-module with an endomorphism $\hat x$, you should remember that a homomorphism of $R[x]$-modules $\varphi\colon M\to M'$ is a homomorphism of $R$-modules commuting with the $R[x]$-structure, i.e., such that $\hat x'\circ \varphi = \varphi\circ \hat x$. Then an $R[x,y]$-module is, as you say, an $R[x]$-module with an endomorphism $\hat y$, but by what has just been said, $\hat x$ and $\hat y$ must commute and that is the only condition. This carries over to $n$ variables.
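To make this concrete, here is a tiny sketch (with made-up matrices): a $\mathbb{Z}[x,y]$-module structure on $\mathbb{Z}^2$ amounts to choosing two commuting integer matrices as the actions of $x$ and $y$.

import numpy as np

X = np.array([[1, 1], [0, 1]])   # action of x
Y = np.array([[1, 2], [0, 1]])   # action of y, chosen so that XY = YX
assert np.array_equal(X @ Y, Y @ X)   # the one and only compatibility condition
v = np.array([1, 0])
print(X @ (Y @ v), Y @ (X @ v))       # (xy)v == (yx)v on a sample vector |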
Confusion about division in Clifford Algebra | Not every element of a Clifford algebra has the form $v$ for $v$ a vector. In general an element of a Clifford algebra is a sum of products of such things, and most of these aren't invertible.
Explicitly, let $\text{Cliff}(p, q)$ be the Clifford algebra over $\mathbb{R}$ generated by $p$ generators $e_1, \dots e_p$ satisfying $e_i^2 = -1$ and $q$ generators $f_1, \dots f_q$ satisfying $f_i^2 = 1$, all of which anticommute. If $q \neq 0$ then $f_i^2 = 1$ implies $(f_i + 1)(f_i - 1) = 0$, so $f_i \pm 1$ fail to be invertible.
So now let $q = 0$; the corresponding Clifford algebra is often denoted $\text{Cliff}(p)$. When $p = 1$ we get $\mathbb{C}$ and when $p = 2$ we get $\mathbb{H}$; these are division algebras. It turns out that for $p = 3$ we have
$$\text{Cliff}(3) \cong \mathbb{H} \times \mathbb{H}$$
which is not a division algebra. Explicitly, once you have three anticommuting square roots of $-1$, namely $e_1, e_2, e_3$, you can now write down
$$(e_1 e_2 e_3)^2 = 1$$
and so $e_1 e_2 e_3 \pm 1$ fail to be invertible.
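As a concrete sanity check, here is a sketch using the Pauli matrices (setting $e_k = i\sigma_k$ realises one of the two quaternionic factors, which is enough to verify the relations): three anticommuting square roots of $-1$ whose product squares to $1$.

import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
e = [1j * s for s in (s1, s2, s3)]                        # e_k = i * sigma_k
assert all(np.allclose(ek @ ek, -np.eye(2)) for ek in e)  # e_k^2 = -1
assert np.allclose(e[0] @ e[1], -(e[1] @ e[0]))           # anticommutation
w = e[0] @ e[1] @ e[2]
print(np.allclose(w @ w, np.eye(2)))                      # True: (e1 e2 e3)^2 = 1 |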