title | upvoted_answer
---|---|
Finding probability of other child also being a boy | This is not the same question as the one mentioned in the comment by Martin Sleziak, and your "correct ans" is wrong; the probability is indeed $\frac12$ (under the assumption, not quite true in reality but reasonable for this question, that the probability of a random person being male is $\frac12$, and that it is independent of the gender of any other fixed person).
The question is equivalent to the following one: you pick a random person $p$ and ask about $p$'s siblings; it turns out $p$ has exactly one sibling, a brother. What is the probability that $p$ is male? You could also say, "pick a random man $m$ with one sibling; what is the chance that his sibling is a brother?" (which more closely resembles the setup of your question); it is just another equally random way to make the selection (take $m$ to be the brother of $p$).
There are all kinds of ways to see the answer is $\frac12$ here, basically because the gender of one sibling is independent of the other. If you like detail: there are $4$ possibilities for genders in $2$-child families, in oldest-youngest order $FF,FM,MF,MM$, all equally likely. Person $p$ could be oldest or youngest, treat the cases separately (they give equal probabilities in the end anyway). Supposing $p$ is oldest, the fact that the younger sibling is a brother eliminates two possibilities leaving $FM$ and $MM$; this leaves equal probabilities for $p$ being female or male. If $p$ is youngest, then $MF$ and $MM$ are left, again leaving equal probabilities for $p$ being female or male.
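If in doubt, here is a quick Monte Carlo sketch (an added illustration, not part of the original argument) of the "random person whose only sibling is a brother" formulation:
import random

trials = 10**6
conditioned = males = 0
for _ in range(trials):
    kids = [random.choice("MF"), random.choice("MF")]  # genders, oldest first
    p = random.randrange(2)                            # meet a random child of the family
    if kids[1 - p] == "M":                             # condition: p's sibling is a brother
        conditioned += 1
        males += (kids[p] == "M")
print(males / conditioned)  # approximately 0.5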
The essential point that distinguishes this question from the one linked to is that you are not given that "one of the siblings is a boy", which gives a different kind of information (eliminating only one of four possibilities). Here a specific sibling is found to be a boy. |
Categories of $\mathbf T$-algebras. | Let us consider polynomial functors, which are functors that we can build up from the following operations:
constant functors
cartesian product $\times$
disjoint sum $+$
For example, such a functor might be
$$T(X) = C \times X + X \times X \times (D + X \times X)$$
where $C$ and $D$ are fixed sets. Take your functor and use distributivity to write it out as a "polynomial", i.e., a disjoint sum of products. Let us also write $X^n$ for the product $X \times X \times \cdots \times X$ of $n$ copies of $X$. The above functor $T$ would be
$$T(X) = C \times X + D \times X^2 + X^4.$$
An algebra for $T$ is a set $A$ together with a structure map
$$a : T(A) \to A$$
Because we expressed $T(A)$ as a sum of powers, such an $a$ is equivalent to having several maps. For example,
$$a : C \times A + D \times A^2 + A^4 \to A$$
is equivalent to three maps
$$\begin{align*}
a_1 &: C \times A \to A \\
a_2 &: D \times A^2 \to A \\
a_3 &: A^4 \to A
\end{align*}
$$
But these are precisely the operations for our algebra! This procedure always works. Here are some examples.
Take $T(X) = \emptyset$. Then a $T$-algebra is a set $A$ with a map $a : \emptyset \to A$. But there is exactly one such map which is not doing anything, so the answer is that a $T$-algebra is just a set.
Take $T(X) = 1$. Then a $T$-algebra is a set $A$ with a map $a : 1 \to A$. Such a map is the same thing as one element of $A$, so the answer is that a $T$-algebra is a set $A$ together with one element. This is known as a pointed set.
Take $T(X) = X \times X$. A $T$-algebra is a set $A$ with a map $a : A \times A \to A$, i.e., a set with a binary operation. This is known as a magma.
Just for fun, let us try one more. Take $T(X) = 1 + X + X^2$. A $T$-algebra is a map $a : 1 + A + A^2 \to A$, which is equivalent to having
a map $a_1 : 1 \to A$, which is the same as having one element of $A$,
a map $a_2 : A \to A$, which is just a unary operation on $A$, and
a map $a_3 : A^2 \to A$, which is a binary operation.
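As an illustrative sketch (the class and names here are ad hoc, not from the original text), such an algebra can be packaged in Python as a carrier type equipped with the three structure maps:
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

A = TypeVar("A")

@dataclass
class TAlgebra(Generic[A]):  # an algebra for T(X) = 1 + X + X^2
    point: A                      # a_1 : 1 -> A, i.e. a chosen element
    unary: Callable[[A], A]       # a_2 : A -> A
    binary: Callable[[A, A], A]   # a_3 : A^2 -> A

# for example, the integers with zero, negation and addition form such an algebra
ints = TAlgebra(point=0, unary=lambda n: -n, binary=lambda m, n: m + n)
print(ints.binary(ints.unary(3), ints.point))  # -3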
Thus, a $T$-algebra is a set $A$ together with one element, one unary operation, and one binary operation. |
Proving that $H$ subgroup is equal to $S_n$ | Okay: since you have a transposition (let's call it $\tau=(i,j)$) in $H$, and for any $a,b\in\{1,\dots,n\}$ there exists a permutation $\beta_{a,b}\in H$ such that $\beta_{a,b}(a)=b$, it follows that for any $a\in \{1,\dots,n\}$ we have $\beta_{i,a}\tau \beta_{i,a}^{-1}=(a,j)$. In this way we generate all transpositions and thus all permutations. |
Expressing a Vector in a new Basis | We will first deal with expressing the vector as a linear combination of $\sqrt{14}\hat a, \sqrt 5\hat b,$ and $\sqrt{70}\hat c.$
We need numbers $x, y, z$ so that $$x(i+2j+3k)+y(2i-j)+z(3i+6j-5k)=5i+3j+2k \tag 1 $$
in matrix form, we have $$\pmatrix{1&2&3&|&5\\2&-1&6&|&3\\3&0&-5&|&2}\to
\pmatrix{1&0&0&|&1.21428\\0&1&0&|&1.40000\\0&0&1&|&0.32857}$$
The solution $\pmatrix{x\\y\\z}$ is the last column of the second matrix. Now, multiplying $x$, $y$, $z$ by $\sqrt{14}$, $\sqrt{5}$, $\sqrt{70}$ respectively gives you the answer.
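As a quick numerical check, here is a minimal NumPy sketch (an added illustration; the matrix and right-hand side are copied from the system above) solving the same linear system:
import numpy as np

M = np.array([[1, 2, 3], [2, -1, 6], [3, 0, -5]], dtype=float)  # basis vectors are the columns
rhs = np.array([5, 3, 2], dtype=float)
x, y, z = np.linalg.solve(M, rhs)
print(x, y, z)  # approximately 1.21429, 1.40000, 0.32857
This reproduces the last column of the reduced matrix above. |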
proving that a certain function defined using integrals is continuous | Hint:
$$
\int\limits_0^\pi\frac{\sin(xt)}{t}dt
=\int\limits_0^\pi\frac{\sin(xt)}{xt}d(xt)
=\int\limits_0^{\pi x}\frac{\sin(s)}{s}ds
$$ |
On the proof of the simplicial decomposition theorem | In your example where $L$ is a single 2-d simplex, for each of the vertices $v \in L$, it is not accurate to refer to the set $St'(v)$ as a "quarter" of a triangle. But you can instead think of $St'(v)$ as a "third" of a triangle. If we denote the three vertices of $L$ as $v_1,v_2,v_3$ then, as seen here, each triangle of the first barycentric subdivision $L'$ contains exactly one of the $v_1,v_2,v_3$. Also, each vertex $v_i$ is contained in exactly two of the triangles of $L'$, and the union of the two triangles of $L'$ containing $v_i$ is exactly $St'(v)$. Thus we have
$$L = St'(v_1) \cup St'(v_2) \cup St'(v_3)
$$
In general, for any simplicial complex $L$, as $v$ varies over the vertices of $L$ we have
$$L = \bigcup_v St'(v)
$$
So for any vertex $w'$ of $K'$, since $f(w') \in L = \bigcup_v St'(v)$, it follows that there exists $v$ a vertex of $L$ such that $f(w') \in St'(v)$.
Perhaps you were thinking of $L''$, the second barycentric subdivision, when you wrote "the area in the middle of the triangle is not covered by any of $L$'s 3 vertices' stars in $L'$". |
Isomorphism of hom sets implies objects are isomorphic | The answer is no for both questions.
For the first one, define a category $\mathcal{C}$ with two objects $x_0,x_1$, the set of morphisms $x\to y$ is $\{0,1,\dots\}$ if $x=y$ and $\{1,2,\dots\}$ otherwise, and composition is just addition of natural numbers. Then for all $y$ there is a bijection between $\operatorname{Hom}(x_0,y)$ and $\operatorname{Hom}(x_1,y)$ (since both are infinite and countable), but there is no isomorphism $x_0\to x_1$.
For a less artificial counterexample, take the category of finite-dimensional vector spaces over $\mathbb{R}$; then $\operatorname{Hom}(x,y)$ is in bijection with $\mathbb{R}^{\dim(y)\times \dim(x)}$, so it has the same cardinality as $\mathbb{R}$ as long as neither $x$ nor $y$ is the zero vector space. In particular, for all non-zero vector spaces $x_0,x_1$, there is a bijection between $\operatorname{Hom}(x_0,y)$ and $\operatorname{Hom}(x_1,y)$ for all $y$.
For the second question, take any counterexample to the first one, and take $\mathcal{D}=\mathbf{Set}$ and $F,G$ the functors represented by $x_0$ and $x_1$ respectively. Then there exist morphisms $x_0\to x_1$ and $x_1\to x_0$, and thus natural transformations $G\Rightarrow F$ and $F\Rightarrow G$, but $F$ and $G$ are not isomorphic since $x_0$ and $x_1$ aren't. Note that even if $F$ and $G$ were isomorphic, there could still be natural transformations between them that are not isomorphisms.
You can also find plenty of counterexamples to your second question in this MO thread. |
Smallest and largest values of a vector length | First off, the smallest value can't be $-1$ because we're talking about $|u-v|$, which is the vector's magnitude, a quantity which is always nonnegative. Have you learned about vector addition as a "tip-to-tail" composition? That's how I suggest thinking of this. What it would mean is that $|u+v|$ is maximal when the vectors are in the same direction and minimal when they're in opposite directions. ($|u+v|=|u-(-v)|$, and since $|v|=|-v|=3$ we can just as well look at it as a problem about $|u+v|$.) If you take three steps forward and then two more, you've taken five steps. If you take three forward and two back, you're one step away from the start. The triangle inequality does the same thing - just don't forget that you're still talking about magnitude. |
Prove that $Q_8 \cong \langle a, b \mid a^4, a^2b^{-2}, aba^{-1}b \rangle$ | The second relation gives $b^2 = a^2$. Together with the first relation ($a^4=1$) this gives $b^4=1$, and the elements $a^n b^m$ with $m\ge 2$ can be reduced to $a^{r}b^{s}$ with $r\in\{0,1,2,3\}$, $s \in \{0,1\}$
The third relation: $aba^{-1}b = 1 \implies ab = b^{-1}a \implies bab =a$.
Moreover $bab = a \implies ba = ab^{-1} = ab^3 = a(b^2)b = a(a^2)b =a^3b$, i.e., $ba=a^3b$. So elements with mixed letters like $ba$, $bab$, etc. can also be expressed as $a^rb^s$ with $r\in\{0,1,2,3\}$, $s\in\{0,1\}$
This means every element of $G$ is one of the following (at most) 8: $$1, a, a^2, a^3$$ $$b, ab, a^2b, a^3b$$
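For concreteness, here is a small Python sketch (an added illustration; quaternions written as 4-tuples $(w,x,y,z)$ under the Hamilton product) checking that $i,j\in Q_8$ satisfy the three relations:
def qmul(p, q):
    # Hamilton product of quaternions written as (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(q):  # the inverse of a unit quaternion is its conjugate
    w, x, y, z = q
    return (w, -x, -y, -z)

one = (1, 0, 0, 0)
a = (0, 1, 0, 0)  # i
b = (0, 0, 1, 0)  # j
a2 = qmul(a, a)
print(qmul(a2, a2) == one)                         # a^4 = 1
print(qmul(a2, qconj(qmul(b, b))) == one)          # a^2 b^{-2} = 1
print(qmul(qmul(qmul(a, b), qconj(a)), b) == one)  # a b a^{-1} b = 1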
Since $i,j \in Q_8$ satisfy the relations in $G$, Von Dyck's Theorem gives a surjective homomorphism $G \to Q_8$, $a \mapsto i$, $b\mapsto j$
We have the following:
$|G| = 8$ (at most $8$ elements by the list above, and at least $8$ by the surjection onto $Q_8$)
The homomorphism is also an injection (same group order + surjectivity)
The homomorphism is now a bijection, hence an isomorphism. |
Relationship between vector's length and scalar multiplication | By "scalar multiplication" this question means "dot product," as in "the multiplication of two vectors which produces a scalar". You can then use the formula $\|u\| = \sqrt{(u,u)}$ to get the length of a vector $u$. |
Why almost sure convergence holds if $X_n = n$ w.p. $1/n$? | Surely there is a typo in the textbook. The first case is, for example, $X_n=\frac{1}{n}$ w.p. $\frac{1}{n}$ and zero otherwise.
This sequence converges to $0$ almost surely (indeed surely, since $0\le X_n\le \frac1n$ for every $n$). |
Analytic function on a circumference | Just write down everything in polar coordinates, but remember that you are integrating on $|z|=R$, so it would be better to write $\zeta=Re^{i\phi}$ for the integration variable, because $r$ and $\theta$ are already in use for the fixed point $z=re^{i\theta}$.
Cauchy formula is
$$f(re^{i\theta})=\frac{1}{2\pi i}\int_{0}^{2\pi}\frac{f(Re^{i\phi})}{Re^{i\phi}-re^{i\theta}}d(Re^{i\phi})$$
Now, $d(Re^{i\phi})=iRe^{i\phi}d\phi$, because $R$ is fixed, so
$$\frac{d(Re^{i\phi})}{Re^{i\phi}-re^{i\theta}}=iRe^{i\phi}\frac{d\phi}{e^{i\phi}(R-re^{i(\theta-\phi)})}=iR\frac{d\phi}{R-re^{i(\theta-\phi)}}=iR(R-re^{i(\phi-\theta)})\frac{d\phi}{R^2+r^2-2Rr\cos(\theta-\phi)}$$
On the other hand
$$\frac{r^2-Rre^{i(\phi-\theta)}}{R^2+r^2-2Rr\cos(\theta-\phi)}=\frac{re^{-i\theta}(re^{i\theta}-Re^{i\phi})}{(Re^{i\phi}-re^{i\theta})(Re^{-i\phi}-re^{-i\theta})}=\frac{re^{-i\theta}}{re^{-i\theta}-Re^{-i\phi}}=\frac{Rre^{i(\phi-\theta)}}{Rre^{i(\phi-\theta)}-R^2}=\frac{\zeta\bar{z}}{\zeta\bar{z}-R^2}$$
is a holomorphic function of $\zeta$, therefore
$$\frac{1}{2\pi}\int_{0}^{2\pi}\frac{r^2-Rre^{i(\phi-\theta)}}{R^2+r^2-2Rr\cos(\theta-\phi)}f(Re^{i\phi})d\phi=\frac{1}{2i\pi}\int_{|\zeta|=R}\frac{\zeta\bar{z}}{\zeta\bar{z}-R^2}\frac{f(\zeta)}{\zeta}d\zeta$$
and, by the Cauchy formula, the last integral is just the value of the function
$$\frac{\zeta\bar{z}}{\zeta\bar{z}-R^2}f(\zeta)$$
for $\zeta=0$, i.e. $0$.
Therefore
$$\frac{1}{2\pi}\int_{0}^{2\pi}\frac{-Rre^{i(\phi-\theta)}}{R^2+r^2-2Rr\cos(\theta-\phi)}f(Re^{i\phi})d\phi=\frac{1}{2\pi}\int_{0}^{2\pi}\frac{-r^2}{R^2+r^2-2Rr\cos(\theta-\phi)}f(Re^{i\phi})d\phi$$
Now, we just put everything together:
$$f(re^{i\theta})=\frac{1}{2\pi i}\int\limits_{0}^{2\pi}\frac{f(Re^{i\phi})}{Re^{i\phi}-re^{i\theta}}d(Re^{i\phi})=\frac{1}{2\pi}\int\limits_{0}^{2\pi}\frac{R(R-re^{i(\phi-\theta)})f(Re^{i\phi})}{R^2+r^2-2Rr\cos(\theta-\phi)}d\phi={\frac{1}{2\pi}\int\limits_{0}^{2\pi}\frac{(R^2-r^2)f(Re^{i\phi})}{R^2+r^2-2Rr\cos(\theta-\phi)}d\phi}$$
which is the representation formula given by the Poisson kernel for harmonic functions.
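As a quick sanity check, here is a short NumPy sketch (an added illustration, with an arbitrary test function) comparing the right-hand side of the formula with $f(re^{i\theta})$:
import numpy as np

R, r, theta = 2.0, 0.7, 0.9
f = lambda z: z**2 + 1          # any test function holomorphic on |z| <= R
phi = np.linspace(0, 2*np.pi, 4096, endpoint=False)
kernel = (R**2 - r**2) / (R**2 + r**2 - 2*R*r*np.cos(theta - phi))
lhs = np.mean(kernel * f(R*np.exp(1j*phi)))  # mean over phi = (1/2pi) * integral
print(lhs, f(r*np.exp(1j*theta)))            # the two values agree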
Note: there are shorter derivations of this result, which start by searching for a real reproducing kernel, so you basically take
$$\frac{d\zeta}{\zeta- z}$$
and try to correct
$$\frac{d\zeta}{\zeta- z}+\frac{d\bar{\zeta}}{\bar{\zeta}-\bar{z}}$$
in order to obtain another reproducing kernel,
by showing that
$$\frac{1}{2\pi i}\int_{|\zeta|=1}\frac{f(\zeta)d\bar{\zeta}}{\bar{\zeta}-\bar{z}}=-f(0)$$ |
Evaluate $\int_{0}^{\pi} \frac{x\cdot dx}{1+\cos(\alpha)\cdot \sin(x)}$ | $\sec(\alpha)$ is just a constant, and for any $A>1$ we have
$$ \int_{0}^{\pi/2}\frac{dx}{A+\sin(x)}=\int_{0}^{\pi/2}\frac{dx}{A+\cos(x)} = 2\int_{0}^{\pi/4}\frac{dx}{(A-1)+2\cos^2(x)}$$
where the substitution $x=\arctan t$ turns the last integral into
$$ 2\int_{0}^{1}\frac{dt}{(A-1)t^2+(A+1)}=\frac{2}{\sqrt{A^2-1}}\,\arctan\sqrt{\frac{A-1}{A+1}}. $$ |
What's the maximum number of minterms in the minimal form of all Boolean functions of $n$ variables? | The $n$-ary Boolean function with the largest minimal DNF representation is the modulo-2 sum of all inputs (which can also be viewed as the iterated exclusive or of all inputs).
There are $2^{n-1}$ ones in the truth table, and no term can cover more than a single one of them, so you need $2^{n-1}$ terms.
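A quick check of both facts for $n=4$ (an added illustration):
from itertools import product

n = 4
ones = [p for p in product((0, 1), repeat=n) if sum(p) % 2 == 1]
print(len(ones) == 2**(n - 1))  # True: exactly half the rows of the truth table
# any two satisfying assignments differ in at least 2 bits,
# so no single product term can cover two of them
print(min(sum(u_i != v_i for u_i, v_i in zip(u, v))
          for u in ones for v in ones if u != v))  # 2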
If you want minterms (as defined in your link), specifically, then the constant function $1$ will need even more, namely $2^n$ minterms. Since a minterm must, by definition, use all variables, the representations you list for two-variable functions are not all minterm-based. |
Find the value of $P(Y|X)$ | Suppose that $A$ and $B$ are any two events. If $A\subset B$, then $P(A)\le P(B)$. We have that $X\cap Y\subset X$, but $1/2=P(X\cap Y)>P(X)=1/4$. Clearly, there is a mistake somewhere.
I guess that the real statement of the exercise is as follows. If $P(X)=1/4$, $P(Y)=1/3$ and $P(X\cup Y)=1/2$ ($\cup$ denotes the union ("or") and $\cap$ denotes the intersection ("and")), find $P(Y\mid X)$. We have that
$$
P(Y\mid X)=\frac{P(X\cap Y)}{P(X)}=\frac{P(X)+P(Y)-P(X\cup Y)}{P(X)}=\frac{1/4+1/3-1/2}{1/4}=\frac13.
$$ |
Relevant to Gaussian Integers | Let $\pi_1=2+i\sqrt 5,\pi_2=-2+i\sqrt 5,\pi_3=3,\pi_4=-3,\pi_5=-2-i\sqrt 5,\pi_6= 2-i\sqrt 5$. Note $$\pi_1\pi_2=\pi_3\pi_4=\pi_5\pi_6=-9$$
The map $\eta :R\to\Bbb N\cup\{0\}$ defined by $(x+iy)\mapsto x^2+y^2$ is multiplicative; i.e. $\eta(zz')=\eta(z)\eta(z')$, with $\eta(a+(b\sqrt5)i)=1\iff a^2+5b^2=1\iff a=\pm1\land b=0$. Moreover, note $\eta(z)=3$ has no solutions in $R$, as we'd certainly need $\text{Im}(z)=0$ but $3$ is not a perfect square. Therefore, if $\eta(z)=9$ then $z$ must be irreducible, because $z=z'z''$ would then imply $\eta(z')\eta(z'')=9$; since $\eta=3$ is impossible, one of $\eta(z'),\eta(z'')$ must be $1$, i.e. one of $z',z''$ is a unit. |
Need help with Lagrange Multipliers in Support Vector Machine | if we know all the support vectors, geometrically we can create hyperplanes on both sides
Strictly speaking, support vectors do not necessarily determine hyperplanes uniquely. For example, if you only have two data points (one in each class), then the support vectors are obvious, but the hyperplane still needs to be found. But this is a minor point. The real answer is: you need Lagrange multipliers (or some form of convex minimization) because you don't know yet which vectors are going to be the support vectors. |
If $f:\mathbb{C}\to\mathbb{C}$ is a continuous function that is analytic off $[−1, 1]$, then $f$ is entire | I think you're missing the case where the "part" of $I$ in $T$ is the entirety of $I$:
This proof of Andreas Kleefeld also allows degenerate triangles in his decomposition of $T$ which I do not find helpful (i.e., he means at most 5 triangles) |
criterions for holomorphic functions | The extended function
$$g(z) = \begin{cases} f(z) &, \lvert z\rvert \leqslant 1\\ 1/\overline{f(1/\overline{z})} &, \lvert z \rvert \geqslant 1\end{cases},$$
where $f \colon \overline{\mathbb{D}} \to \mathbb{C}$ is continuous and holomorphic in $\mathbb{D}$ with $\lvert f(z) \rvert = 1$ for $\lvert z\rvert = 1$, is always a meromorphic function on the entire plane $\mathbb{C}$, with poles in the points obtained from reflection of the zeros of $f$ in the unit circle. That is a special case of the Schwarz reflection principle.
In the particular case that $f$ has no zeros in the unit disk, $f$ is constant by the minimum modulus principle, and the holomorphy (constantness) of $g$ is readily verified. |
Feelings of inadequacy as a PhD mathematician | That sounds like a typical impostor syndrome. Without knowing you personnally it is of course impossible to claim so with 100% certainty, but if your advisor is happy with your work so far this is a very good sign and you should try to carry on and not worry excessively.
I think having a period of stress/depression/feelings of inadequacy at some point during the PhD, especially at the end of the first year, is extremely common. Probably most of your peers feel them too or will at some point. |
Eigenvalues and eigenvectors of antidiagonal block matrix | $\det(\lambda I-C)=\det\pmatrix{\lambda I&-A\\ -B&\lambda I}$. Since all square subblocks have the same sizes and the two subblocks at bottom commute, the determinant is equal to $\det(\lambda^2 I - AB)$. Therefore, the eigenvalues of $C$ are the square roots of eigenvalues of $AB$. That is, for each eigenvalue $t$ of $AB$, the two roots of $\lambda^2-t=0$ are eigenvalues of $C$.
As pointed out in a comment, we have $\det(C)=\det(-AB)$ and hence there is some relation between the product of the eigenvalues of $C$ and the products of the eigenvalues of $A$ and $B$, but besides that, very few about the spectrum or the eigenvectors of $AB$ can be said even if the spectra and eigenvectors of $A$ and $B$ are fully known. When both $A$ and $B$ are positive definite, we do have some bounds for the eigenvalues of $AB$. See "Evaluating eigenvalues of a product of two positive definite matrices" on this site or "Eigenvalues of product of two symmetric matrices" on MO. |
If a space $X$ has no isolated points, then nor does any dense subset of $X$. | The claim is true assuming the $T_0$ (Kolmogorov) axiom.
Let $S\subseteq X$ be dense. And suppose $x_0\in S$ is isolated. Then there exists an open set $X_0\ni x_0$ such that $X_0 \cap S = \{x_0\}$. Let $x_1\in X_0$ and $N$ be an open neighbourhood of $x_1$. Suppose $x_0 \not\in N$, then $N \cap X_0 \subset X_0\setminus \{x_0\}$ is open, contradicting density of $S$. Therefore $x_0$ must be in every neighbourhood of $x_1$.
Therefore, assuming $T_0$, $x_0$ is isolated in $X$.
(The OP may want to check whether $T_0$ is a standing assumption in the book. It is even more reasonable to assume that than assuming Hausdorff.)
Editor's note: The above proof actually assumes $T_1$, see YCor's comment below for what went wrong and a counterexample. (I'm leaving this note as the answerer is currently unregistered) |
Simple proof of "maximum number of right angles in a convex $n$-gon is 3 for $n\geq 5$" for a 8th grade student? | Sum of exterior angles of a convex n-gon is $360^{\circ} = 4\cdot 90^{\circ}$. We conclude for $n > 4$, at most $3$ right angles are allowed. In that case, sum of rest $n-3$ exterior angles is $90^{\circ}$. |
Can identically distributed random variables $X$ and $Y$ have $P(X < Y) \geq p$? | Partial answer: if $X$ and $Y$ have the same distribution we cannot have $P\{X<Y\}=1$. Proof: this is easy if the common distribution has finite mean: $X<Y$ with probability $1$ and then $E(Y-X)=EY-EX=0$, which is a contradiction since a strictly positive random variable cannot have mean $0$. To handle the general case, observe that $\tan ^{-1} Y-\tan ^{-1} X$ is a strictly positive, bounded random variable with mean $0$.
If $X$ and $Y$ are i.i.d. with a continuous distribution then $P\{X<Y\}=\frac 1 2$. |
Separation of variables in algebraic formula | If $F(x,y,z,t)$ could be written as $F(x,y,z,t)=f(x)g(y)h(z)k(t)$, then $$\frac{F(x,0,2y,x-2y)}{F(x,y,y,x-2y)}=\frac{g(0)h(2y)}{g(y)h(2y)}$$ in particular, this would not depend on $x$. You can write out the fraction but the $x$ do not cancel out. Hence you cannot find such function $f,g,h,k$. |
Showing a sequence converges in product topology. | Yes, those approaches are correct: In fact $(y_n)_n$ converges in the product topology on $\Bbb R^{\Bbb N}$ iff for each coordinate $k$ the real sequence $(y_{n,k})_n$ converges in $\Bbb R$.
Or here directly: if $\prod_{i} U_i$ is a basic product open neighbourhood of $(0,0,0,\ldots)$, then $\exists N$ so that $0 \in U_i, i < N$ and $U_i = \Bbb R$ for $i \ge N$. Then for each $i < N$ we find $N_i$ so that $$\forall n > N_i: y_{n,i}=\frac{1}{n} \in U_i$$ using convergence of $\frac{1}{n} \to 0$ finitely many times. But then for $n > \max(N_i\mid i < N)$ all these conditions apply and so for those $n$: $$y_n \in \prod_i U_i$$ as required.
For the box topology we only have to observe that $$(0,0,0,0,\ldots) \in U=\prod_{n \in \Bbb N} (-\frac1n, \frac1n)$$ contains no point of this sequence at all, as the $n$-th coordinate of $y_n$ fails to be in $U_n$, for each $n$.
Indeed $(0,0,0,\ldots)$ is the only candidate for a limit in the box topology as it is already the unique limit in a coarser topology, so there is no convergence at all. |
Convolution a Schwartz function and a rapidly decreasing function | What maybe really isn't that obvious is that in
$$\left|g(y) \frac{\partial \varphi(x -y + c(h_n)e_j)}{\partial x_j}\right| \leq M |g(y)|$$
the constant $M$ is really independent of $h_n$, which you would need to use dominated convergence! If you use the Lipschitz property you get it easier just by
$$
\left|\frac{\varphi(x - y -h_n e_j) - \varphi(x -y)}{h_n} \right| \leq \text{Lip}(\varphi)\,\frac{\left|(x-y - h_n e_j) - (x -y)\right|}{|h_n|} = \text{Lip}(\varphi),
$$
where $\text{Lip}(\varphi)$ is the Lipschitz-constant of $\varphi$.
(Also notice that in your last estimate you are missing a constant in the second integral, which is not at all important.) |
Prove $\mathbb{Q}(\zeta)$ is a field when $\zeta$ is algebraic | There is another very short way, using linear algebra:
Let's consider a non-zero element $p(\zeta)\in\mathbf Q[\zeta]$. Multiplication by $p(\zeta)$ in this $\mathbf Q$-vector space is injective since it is an integral domain (contained in $\mathbf C$). Now $\mathbf Q[\zeta]$ is a finite-dimensional vector space (of dimension $\deg P$, where $P$ is the minimal polynomial of $\zeta$), and an injective endomorphism of a finite-dimensional vector space is surjective. In particular $1$ is attained, i.e. there exists a polynomial $q$ such that $p(\zeta)q(\zeta)=1$ – in other words, $p(\zeta)$ is invertible in $\mathbf Q[\zeta]$. |
Inequality $\vert x^{*}y\vert\le \Vert x\Vert_1\Vert y\Vert_{\infty}.$ | $x$ is an $n \times 1$ column vector. $x^*$ is a $1 \times n$ row vector, the conjugate transpose of $x$. When you multiply the $1 \times n$ matrix $x^*$ by the $n \times 1$ matrix $y$, you see that
\begin{equation}
x^* y = \sum_{k=1}^n \bar{x}_k y_k.
\end{equation}
(Here $\bar{x}_k$ is the conjugate of $x_k$.) |
Digits sums and sum of power | In any base $b$, we can see that a number is of the form $a=\sum d_ib^i$ for digits $d_i$. We also know that $b^i \equiv 1 \pmod{b-1}$. Thus, we can conclude that in base $b$, the sum of the digits of a number $a$ leaves the same remainder as $a$, when divided by $b-1$. In notation-
$$a \equiv D(b,a) \pmod{b-1}$$
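A quick numerical sketch (added here) checking this congruence:
def digit_sum(b, a):
    # sum of the digits of a in base b
    s = 0
    while a:
        s += a % b
        a //= b
    return s

for b in (2, 7, 10, 16):
    for a in (5, 123, 99999, 2**20):
        assert a % (b - 1) == digit_sum(b, a) % (b - 1)
print("ok")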
If we have $g=\gcd(n_0,n_1,\ldots,n_u,a)$, then we can see that $g \mid n_i^{m_i}$. We also know $$D(a+1,n_i^{m_i}) \equiv n_i^{m_i} \pmod{a} \implies g \mid D(a+1,n_i^{m_i})$$
because $g$ divides both $n_i^{m_i}$ and $a$. Since $g$ divides all of $D(a+1,n_i^{m_i})$, it divides their greatest common divisor. This concludes the proof. |
Sylow's Theorems And Normal Subgroups of prime order | $ q $ cannot be a divisor of $ pq - 1 $ since we know it is a divisor of $ pq $. Or to phrase it differently,
$$
pq - 1 \equiv -1\ (\mathrm{mod}\ q)
$$ |
Solving for the growth rate given a finite sum. | Consider that you look for the zero of the function $$f(g)=(1+g)^n- b\,g-1$$ for which
$$f'(g)=n(1+g)^{n-1}-b \qquad \text{and} \qquad f''(g)=n(n-1)(1+g)^{n-2}$$
The first derivative cancels at
$$g_*=\left(\frac{b}{n}\right)^{\frac{1}{n-1}}-1$$ So, for a first approximation, develop $f(g)$ as a Taylor series around $g_*$ to get
$$f(g)=f(g_*)+\frac 12 f''(g_*) (g-g_*)^2+O((g-g_*)^3)$$
So the first approximation could be
$$g_0=g_*+\sqrt{-2\frac{f(g_*)}{f''(g_*)}}$$ Using it for $n=20$ and $b=123456789$, this would give $g_0=1.805$ while the exact solution is $1.599$. Starting from this guess, Newton method will converge very fast as shown below.
$$\left(
\begin{array}{cc}
n & g_n \\
0 & 1.80522 \\
1 & 1.69723 \\
2 & 1.62762 \\
3 & 1.60177 \\
4 & 1.59879 \\
5 & 1.59875
\end{array}
\right)$$ We could make a better approximations using
$$g_0=g_*+3 \frac{f''(g_*)}{f'''(g_*)}\qquad g_0=g_*+\frac{4 f(g_*) f'''(g_*)}{f(g_*) f''''(g_*)-6 f''(g_*)^2}\qquad \cdots$$ For the worked example, these would respectively give $g_0=1.657$ and $g_0=1.587$.
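For the worked example, here is a short Python sketch (an added illustration) that reproduces the table of iterations above:
n, b = 20, 123456789
f = lambda g: (1 + g)**n - b*g - 1
fp = lambda g: n*(1 + g)**(n - 1) - b          # f'
g_star = (b / n)**(1 / (n - 1)) - 1            # where f' vanishes
fpp_star = n*(n - 1)*(1 + g_star)**(n - 2)     # f'' at g_*
g = g_star + (-2*f(g_star) / fpp_star)**0.5    # first approximation g_0
for k in range(6):
    print(k, round(g, 5))                      # 1.80522, 1.69723, ..., 1.59875
    g = g - f(g) / fp(g)                       # Newton step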
Specific case where $g \ll 1$
This case corresponds to problems in finance. Consider now the original equation
$$b=\frac{(1+g)^n-1}{g}$$ and develop the rhs as a Taylor series built around $g=0$
$$b=n+\frac{n(n-1)}{2} g +\frac{n(n-1)(n-2)}{6} g^2 +\frac{n(n-1)(n-2)(n-3)}{24} g^3+\frac{n(n-1)(n-2)(n-3)(n-4)}{120} g^4 +\frac{n(n-1)(n-2)(n-3)(n-4)(n-5)}{720} g^5 +O\left(g^6\right)$$
Using series reversion, this will give as an approximation
$$g=t-\frac{(n-2)}{3} t^2+\frac{(n-2) (5 n-7)}{36} t^3-\frac{(n-2) \left(17 n^2-44 n+29\right)}{270} t^4+\frac{(n-2) \left(193 n^3-708 n^2+885 n-374\right)}{6480}t^5+O\left(t^6\right)$$ where $t=\frac{2 (b-n)}{n(n-1)}$.
Let us try for $n=20$ and $b=25$. The above formula will give $g=0.022863$ while the exact solution is $0.022854$.
For sure, we could take more terms and have a better accuracy. |
Injectivity between two localizations | No. For instance, let $k$ be a field and $R=k[x,y]/(xy)$, $\mathfrak{p}=(x)$, and $\mathfrak{q}=(x,y)$. Then $x$ is nonzero in $R_\mathfrak{q}$ (its annihilator is $(y)\subset\mathfrak{q}$), but it is zero in $R_\mathfrak{p}$ since it is annihilated by $y\in R\setminus\mathfrak{p}$. |
Prove that $c_{m} \in[a, b],$ for all $m \geq 1, \lim _{m \rightarrow \infty} c_{m}$ exists and find its value. | The estimates $a \leq c_n \leq b$ follow from
$$
a^m = \int_0^1 a^m\, dx \leq \int_0^1 f(x)^m\, dx \leq \int_0^1 b^m\, dx = b^m.
$$
Let us prove that $(c_m)$ converges to $b$.
If $b=0$ then also $a=0$, so that the claim follows from the first part.
Assume that $b>0$, let $\varepsilon \in (0, b)$, and let $I\subset [0,1]$ be a set where $f \geq b-\varepsilon$ (since we are assuming $f$ continuous, we can take a suitably small interval containing the maximum point of $f$).
Denoting by $L>0$ the length of the interval $I$, we have that
$$
\int_0^1 f(x)^m\, dx \geq \int_I (b-\varepsilon)^m\, dx = (b-\varepsilon)^m L,
$$
so that
$$
c_m \geq (b-\varepsilon) L^{1/m}.
$$
Since $L^{1/m} \to 1$ as $m\to +\infty$, we have that
$$
\liminf_m c_m \geq b-\varepsilon.
$$
Since $c_m\leq b$ for every $m$, the claim follows by the arbitrariness of $\varepsilon$. |
Meteor hitting Poisson process question | It doesn't break iff it's hit by no meteors, by one non-giant meteor, or by two non-giant meteors:
$$\Pr(N(2)=0)+.9\Pr(N(2)=1)+.81\Pr(N(2)=2)$$
so the probability that it breaks is $1$ minus the above.
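Numerically, a minimal sketch (an added illustration; the arrival rate of the Poisson process is not restated in this answer, so lam is a placeholder, and each meteor is taken to be non-giant with probability $0.9$ independently):
from math import exp, factorial

def break_prob(lam, t=2.0):
    mu = lam * t                                     # N(t) ~ Poisson(lam * t)
    pmf = lambda k: exp(-mu) * mu**k / factorial(k)
    survive = pmf(0) + 0.9*pmf(1) + 0.81*pmf(2)      # the sum displayed above
    return 1 - survive

print(break_prob(lam=1.0))  # example value for a unit arrival rate
Here break_prob directly implements the complement described above. |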
Linear Algebra Basics | $C^\infty(a,b)$ usually denotes the functions on the open interval $(a,b)$ which are infinitely many times differentiable.
$V\times V\to V$ is a map, let's call it $f$. For a map you need to know where an element is mapped to. The notation $(u,v)\mapsto u+v$ means that $f((u,v))=u+v$. Here $(u,v)$ is an element of $V\times V$ whereas $u+v$ is an element of $V$.
An additive unit (of $V$) is an element $o\in V$ such that $v+o=v$ for all $v\in V$.
For the proof: You have to distinguish between the assumptions and the claim. The assumptions are that $W_1$ and $W_2$ are already vector spaces. The claim is that the intersection $W_1\cap W_2$ is a vector space. So in the proof of (2), since $W_1$ is already a subspace (by assumption) and $u,v\in W_1$, you also have $u+v\in W_1$. Similarly $u+v\in W_2$ since $W_2$ is a subspace. Hence $u+v\in W_1\cap W_2$.
(3) then follows similarly: As above $W_1$ and $W_2$ are already subspaces, hence $\alpha u\in W_1$ and $\alpha u\in W_2$. Hence $\alpha u\in W_1\cap W_2$. |
Chance of having score of 63 | As @JMoravitz says, this is a Markov chain problem. But a $69\times 69$ matrix is unwieldy.
A good approximation is obtained by saying that, with every score $1,2,3,4,6$ equally likely, on average you will get $5$ scores in every total of $16\; (=1+2+3+4+6)$. E.g. if your first five scores were $1,2,3,4,6$ then you have gotten totals of $1,3,6,10,16$. Thus,
\begin{eqnarray*}
P(63) &\approx & \dfrac{5}{1+2+3+4+6} = \dfrac{5}{16} = 0.3125
\end{eqnarray*}
The reason I think this is a good approximation is that the further away from $0$ your target score is, the more even the probabilities become. That is, $P(1), P(2), P(3)$ might be quite varied, but $P(61), P(62), P(63)$ become almost the same and they approach the limiting value above.
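In fact the exact value is easy to compute by recursion over the running total (an added sketch for the equally-likely model):
target = 63
p = [1.0] + [0.0]*target       # p[k] = probability the running total ever equals k
for k in range(1, target + 1):
    p[k] = sum(p[k - s] for s in (1, 2, 3, 4, 6) if s <= k) / 5
print(p[target])               # about 0.3125 = 5/16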
It's not hard to use that method in a more realistic model where scores $1,2,3,4,6$ are not equally likely. Let's say, we estimate their individual probabilities as:
\begin{eqnarray*}
P(1) &=& 0.4 \\
P(2) &=& 0.2 \\
P(3) &=& 0.1 \\
P(4) &=& 0.25 \\
P(6) &=& 0.05
\end{eqnarray*}
Then our estimate would be:
\begin{eqnarray*}
P(63) &\approx & \dfrac{1}{ 1\times 0.4 + 2\times 0.2 + 3\times 0.1 + 4\times 0.25 + 6\times 0.05} = \dfrac{1}{2.4} \approx 0.4167
\end{eqnarray*}
The denominator there is the expected score on any one shot. |
Find the volume of solid generated by rotating this sector? | Referring to the diagram below the exercise is to find the volume of the solid of revolution when revolving the blue region about the $x$-axis.
If one varies $x$ between the values of $-\sqrt{2}$ and $\sqrt{2}$ and uses the annulus method then
\begin{eqnarray}
V&=&\int_{-\sqrt{2}}^{\sqrt{2}}\pi(R^2-r^2)\,dx\\
&=&2\pi\int_0^{\sqrt{2}}\left(R^2-r^2\right)dx\tag{1}
\end{eqnarray}
where $R=\sqrt{4-x^2}$ and $r=x$. This gives a value
\begin{equation}
V=\frac{16\sqrt{2}\pi}{3}
\end{equation}
As a check we can also find the volume of the solid of revolution formed by revolving the red region about the $x$-axis. The blue plus red volumes should sum to the volume $V=\dfrac{32\pi}{3}$.
The "red volume" can be found by the cylindrical shell method. We will use the sector on the right and double the result so that
\begin{equation}
V=2\int_0^\sqrt{2}2\pi rh\,dy
\end{equation}
where $r=y$ and $h=\sqrt{4-y^2}-y$. So
\begin{equation}
V=4\pi\int_0^\sqrt{2}\left(y\sqrt{4-y^2}-y^2\right)dy
\end{equation}
which has a value of
\begin{equation}
V=\frac{32\pi}{3}-\frac{16\sqrt{2}\pi}{3}
\end{equation}
And we see that the "blue volume" and "red volume" sum correctly to the volume of the sphere.
Addendum: Note that if one integrates equation $(1)$ over the range $[-2,2]$ that will give the volume of the solid of revolution formed by revolving both the green and the blue regions (with the green volume subtracted from the blue volume), which is not the volume asked for in the question. |
Question about generating function coefficients | Try substituting $x^3$ with, say $y$. Then can you do it? |
mean and standard deviation of students taking a test | For a size $n$ large enough, a binomial population with $p=0.94$ and $q=0.06$ (so that $p+q=1$) is approximately normal, with mean $np=450(0.94)=423$ in this case, and variance $np(1-p)= (450)(0.94)(0.06)=25.38$, so standard deviation $\sigma= \sqrt{25.38}\approx 5.04$. |
Why the function $u(x)=\sum_{k=1}^{\infty}\frac{1}{2^k}|x-r_k|^{-\alpha}$ is in $W^{1, p}(\Omega)$? | To show $u\in W^{1,p}$ consider the partial sums
$$u_N(x)=\sum_{k=1}^N \frac{1}{2^k}|x-r_k|^{-\alpha}$$
and show they form a Cauchy sequence in $W^{1,p}$. Each term in the sum is unbounded at the rational number $r_k$, so $u(x)$ is unbounded on the rationals, which are dense in the reals (every open subset contains a rational $r_k$). |
Using the Weibull Distribution, derive $E(X^k)$ | You can calculate $\int_0^{\infty}x^k \frac{\beta}{\theta^{\beta}}x^{\beta -1}e^{-({x}/{\theta})^{\beta}}dx$ using integration by parts $k$ times; if you do it, you finally get $\int_{0}^{\infty} \frac{\beta}{\theta^{\beta}}x^{\beta -1}e^{-({x}/{\theta})^{\beta}} dx$, and you know that it equals $1$. |
Existence of a maximum $\max_{\mathbb{R}^{2n}\times(0,\infty)^2} \Phi(x,y,t,s)$ (related to viscosity solutions in Evans's PDE) | I'll assume from the notation that $s$ is a fixed value, if it's not, this is not bounded (as a function of $s$).
1) $\Phi(x,y,t,s)$ is bounded from above.
We have that both $u(x,t)$ and $v(y,s)$ are bounded from above.
The term $-\lambda(t+s) - \frac{1}{\epsilon} (t-s)^2$, as a function of $t$, is also bounded from above, as it is a quadratic whose leading coefficient is negative.
The other terms are negative, therefore bounded by $0$ from above.
2) $\Phi$ attains its maximum.
This follows from a general statement. If you have any function $f(x)$ bounded from above that is continuous, then $H(x)=f(x) - c |x|^2$ attains its maximum for any $c>0$.
To see this, you consider the set $A_d=\{ x \in \mathbb{R}^n: H(x)\geq d \}$ for $d < \sup H(x)$. Since $f(x)$ is bounded from above, this set is bounded. Since $H(x)$ is continuous, this set is closed. Since we are in $\mathbb{R}^n$, this set is compact. Therefore, given that $H(x)$ is continuous, it attains its maximum on any compact set.
Now consider $f(x,y,t) = u(x,t)-v(y,s)-\lambda t - \frac{1}{\epsilon} (|x-y|^2 + (t-s)^2) + \epsilon t^2 $; this will be bounded from above as long as $\epsilon <1$ (if this is not the case, you can find a similar way to formulate this, the idea is the same).
By our proposition, we know that
$$ f(x,y,t) - \epsilon( |(x,y,t)|^2) = f(x,y,t) - \epsilon( |x|^2 + |y|^2 + t^2) = \Phi (x,y,t,s)$$
attains its maximum, that is what we wanted.
About the reasoning behind the penalization, I don't know; I'm not familiar with what the author is trying to achieve. |
Is there a geometric analogy for separable space? | A separable space is one which has a countable dense subset. $\mathbb{R}$ is a simple geometric example, having the property that every neighborhood of an element of $\mathbb{R}$ has nonempty intersection with $\mathbb{Q}$. Every separable space must have this property with respect to some countable subset. |
How is the magnitude of a direction vector equal to speed? | The position vector is described parametrically with parameter $t$ as
$$\vec r(t)=\vec r_0+\vec v\,t$$
Note that if we wish to measure the rate of change of position with respect to the parameter $t$, we have
$$\frac{d\vec r(t)}{dt}=\vec v$$
The magnitude of that rate of change is
$$\left|\frac{d\vec r(t)}{dt}\right|=|\vec v|$$
Therefore, we can write
$$\begin{align}
\vec r(t)&=\vec r_0+\left(|\vec v|\,t\right)\,\left(\frac{\vec v}{|\vec v|}\right)\\\\
&=\vec r_0 +t\,\left|\frac{d\vec r(t)}{dt}\right|\,\left(\frac{\vec v}{|\vec v|}\right)\tag 1
\end{align}$$
Note that in $(1)$, the term $\frac{\vec v}{|\vec v|}$ is a unit vector that points in the direction of $\vec v=\frac{d\vec r(t)}{dt}$ and $\left|\frac{d\vec r(t)}{dt}\right|$ is the magnitude of $\vec v$, which is the magnitude of the rate of change of the position vector with respect to the parameter $t$.
Now, interpreting $t$ as time and $\frac{d\vec r(t)}{dt}$ as velocity, we see that the position vector is given in terms of its initial position, $\vec r_0$, plus a vector that points in the direction of the velocity with magnitude equal to the product of the speed, $|\vec v|$, and time, $t$. |
combinatorics select committee different jobs | Your answers to questions A, B and D are correct. As for C, this problem can be solved as follows. First select one doctor, then select three other people. Finally, assign each of the positions to one person. The number of possible assignments thus equals:
$${3 \choose 1}{7 \choose 3}4! = 3 \cdot 35 \cdot 24 = 2520$$
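As a sanity check, a brute-force count (an added sketch; the three doctors are labeled D1, D2, D3):
from itertools import permutations

people = ["D1", "D2", "D3"] + [f"P{i}" for i in range(1, 8)]  # 3 doctors, 7 others
count = sum(1 for jobs in permutations(people, 4)             # ordered: the 4 jobs are distinct
            if sum(p.startswith("D") for p in jobs) == 1)     # exactly one doctor
print(count)  # 2520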
This approach can also be used to calculate the result for question D, first selecting one, then two and finally three doctors:
$$\left({3 \choose 1}{7 \choose 3} + {3 \choose 2}{7 \choose 2} + {3 \choose 3}{7 \choose 1}\right) 4! = \left(3 \cdot 35 + 3 \cdot 21 + 1 \cdot 7\right) \cdot 24 = 175 \cdot 24 = 4200$$ |
Are Hilbert-Schmidt integral operators on separable compact Hausdorff spaces in the Hilbert-Schmidt class? | This is true and, in fact, the converse is also true. Every H-S operator on $L^{2}(\mu)$ is of this type for some $K$. Reference: Theorem VI.23, p. 210, Functional Analysis, Vol 1 by Reed and Simon. |
Is the projection of a closed set always a Borel set? | This question was modified after I posted this answer. It has changed from a deep question to a trivial one. In the earlier version no space was specified and I thought the OP was considering a closed subset of a product of two arbitrary topological spaces. Please see the revised answer at the end. Any analytic set in $\mathbb R$ is the projection of a closed set in $\mathbb R \times X$ for some Polish space $X$. (We can take $X$ to be $\mathbb N ^{\mathbb N}$, for example.) Ref. Theorem 4.1.1 of "A Course On Borel Sets" by S. M. Srivastava. There exist analytic sets in $\mathbb R$ which are not Borel. Hence there exist closed sets whose projections are not Borel. If you insist on getting a closed set in $\mathbb R^{2}$ I don't have an immediate reference, but I believe such sets exist in this case also.
Answer to the modified question: Let $C$ be closed in $\mathbb R \times \mathbb R$. We can write $C$ as $\cup_{n=1}^{\infty }K_n$ with each $K_n$ compact. If $p$ denotes the projection under consideration then $p(C)=\cup_{n=1}^{\infty }p(K_n)$ which is a countable union of compact sets, hence Borel. |
Probability on cards involving two conditions . | $$\frac{\binom{13}{1}\times \binom{4}{2}\times \binom{12}{2}\times \binom{4}{1}\times \binom{4}{1}}{\binom{52}{4}}$$
$\checkmark$ Select a rank and two suits for it, and select two other ranks, and a suit for each.
$$\frac{\binom{2}{1}\times \binom{13}{1}\times \binom{2}{2}\times \binom{12}{1}\times \binom{2}{1}\times \binom{2}{1}}{\binom{52}{4}}$$
$\checkmark$ Select a colour, a rank, and two suits of that colour for that rank, and select another rank, and a suit of each colour. |
Inference Rules, Not(P) Implies Not(Q) / Q Implies P | The second "rule" isn't a deduction rule because it doesn't preserve truth. "If I'm Napoleon then you're not Napoleon" is trivially true, as I'm not Napoleon. But from that, you can't conclude that "If I'm not Napoleon then you're Napoleon": because, again, I'm not him, we could infer that you are in fact Napoleon — which I assume is false ;) |
Failure of the Vitali Covering Lemma for open coverings | You cannot take a compact set such as $E=[0,2]$ as a counterexample, since every open cover of it has a finite subcover.
Here is a counter-example:
Let $E=(0,1)$ and let $I_x$ be an open interval centered at $x$ with radius $r=\frac{\min\{x,1-x\}}{2}$. Then $\mathcal{F}=\{I_x\}_{x\in E}$ is an open covering of $E$, but we cannot choose finitely many intervals such that $m^*\left(E\setminus \bigcup_{k=1}^n I_{x_k}\right)<\epsilon$. |
A simple inequality about the $p$ norm | I'll give you a proof using Minkowski inequality.
First notice that using the change of variable $t=sx$ we can rewrite
$$\frac{1}{x}\int_0^x f(t)dt=\int_0^1 f(sx)ds.$$
Let's start the computation: by definition
$$\|F\|_p=\Big(\int_0^\infty \Big( \frac{1}{x}\int_0^x f(t)dt \Big)^pdx\Big)^{1/p}=\Big( \int_0^\infty \Big( \int_0^1 f(sx)ds \Big)^pdx\Big)^{1/p}$$
now by Minkowski integral inequality
$$\Big( \int_0^\infty \Big( \int_0^1 f(sx)ds \Big)^pdx\Big)^{1/p}\le\int_0^1\Big( \int_0^\infty f(sx)^p dx \Big)^{1/p}ds.$$
Hence, setting again $sx=t$,
$$\|F\|_p\le\int_0^1\Big( \int_0^\infty \frac{f(t)^p}{s} dt \Big)^{1/p}ds=\Big(\int_0^1 \frac{1}{s^{1/p}}ds \Big)\Big( \int_0^\infty f(t)^p dt\Big)^{1/p}=\frac{p}{p-1}\|f\|_p.$$ |
Equation for calculating azimuth between two points | I’m sure that there are quicker and dirtier ways of getting a very accurate but approximate answer, especially for a relatively short flight like the one you specify. But if we were going, say, from Princeton to Paris on a great circle route, then the heading would certainly vary greatly as we flew.
The problem is simple spherical trigonometry. You have a triangle with vertex at the North Pole, I’ll label this $C$, and the two towns, Princeton labeled $A$ and Boston labeled $B$. Then the side of the triangle opposite Boston we label $b$, in arc-length, that’s the complement of the latitude of Princeton, namely $49.6429^\circ$, and the other leg is labeled $a$, it’s the complement of the latitude of Boston, namely $47.6419^\circ$, if I‘ve done my mental subtractions right. Now the angle at the vertex is $C=3.6066^\circ$, the difference of the two longitudes.
Notice that you have a SAS situation, side-angle-side, and just as in plane trigonometry, you use the Law of Cosines to get the length of the side opposite $C$, and then the Law of Sines to get the two angles at $A$ and $B$. the Law of Cosines says:
$$
\cos c=\cos a\cos b+\sin a\sin b\cos C\,,
$$
and yes, if you recall the corresponding formula in plane trigonometry, that is a plus sign rather than a minus sign. Anyway, you now have the length of $c$, and you use the Law of Sines to get the angles at $A$ and $B$:
$$
\frac{\sin A}{\sin a}=\frac{\sin B}{\sin b}=\frac{\sin C}{\sin c}\,.
$$
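Here is a direct transcription in Python (an added sketch using the numbers above; note that asin returns the acute solution, so for obtuse angles the supplementary value must be taken):
from math import radians, degrees, sin, cos, acos, asin

# colatitudes (complements of latitude) and the longitude difference
b, a, C = radians(49.6429), radians(47.6419), radians(3.6066)

c = acos(cos(a)*cos(b) + sin(a)*sin(b)*cos(C))  # law of cosines for side c
A = asin(sin(a)*sin(C) / sin(c))                # law of sines for the angles
B = asin(sin(b)*sin(C) / sin(c))
print(degrees(c), degrees(A), degrees(B))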
I hope that you’ve been following this description with a diagram that you’ve drawn. If so, you see that the heading from Princeton to Boston is just $\angle A$, and the heading from Boston to Princeton is $180^\circ-\angle B$. Remember that the sum of the three angles of a spherical triangle is always greater than $180^\circ$. That’s why you can’t just subtract $A+C$ from $180^\circ$ to get $B$, though for a thin triangle like this the two numbers are not at all far apart. If you’re doing a serious navigation problem and want to know your heading at all times, the way to get your answer is going to depend on what your givens are. All can be solved using Sines and Cosines, though the Law of Cosines has a variant (“polar”) formulation for the ASA situation, namely
$$
\cos C=-\cos A\cos B+\sin A\sin B\cos c\,.
$$ |
Is value in the matrix notation | What is a matrix, exactly?
A good formal definition of a matrix is a function
$$A : [m] \times [n] \rightarrow \mathbb{R}$$
where $[m] = \{1,2,\cdots,m\}$ and $[n] = \{1,2,\cdots,n\}$. The codomain could be any set, but let's use $\mathbb{R}$ for sake of example.
Thus, when you see an entry $a_{i,j}$, you can essentially think of it as the output $A(i,j) \in \mathbb{R}$. In this case, you can easily see that the notation you're looking for is simply the range $ran(A)$ of $A$:
$$ ran(A) = \{A(i,j) | (i,j) \in [m]\times [n]\} = \{a_{i,j} | (i,j) \in [m] \times [n]\}$$ |
Tangent plane and normals in $\mathbb{R}^2$ | By definition the normal to a surface (in $\mathbb{R}^3$) is the (affine) line orthogonal to the tangent (affine) plane to the surface.
Now it's easy to conclude:
Find the generic equation (in function of $x$ and $y$) of the affine plane $TS_{(x,y,f(x,y))}$ tangent to the surface in $(x,y,f(x,y))$
Impose that the line through the points $(0,0,f(0,0))$ and $(1,1,f(1,1))$ is contained in $TS_{(x,y,f(x,y))}$
Solve the system of equations and find the appropriate values of $x$, $y$ and $z$ |
Martingales composed with stopping times | Let $(\Omega, {\cal F}, ({\cal F}_t), P)$ be a filtered probability space.
In order to show that $M_\tau$ is a random variable we usually assume that $M$ is a measurable process,
meaning that the map $(t,\omega)\mapsto M(t,\omega)$ is (jointly) measurable from
$[0,\infty)\times \Omega$ to $\mathbb{R}.$
Here $\mathbb{R}$ is equipped with its Borel $\sigma$-field ${\cal B}(\mathbb{R})$ and
$[0,\infty)\times \Omega$ with the product $\sigma$-field
${\cal B}([0,\infty))\times {\cal F}.$
This condition can be achieved, for example, by assuming that $(M_t)$
has right continuous sample paths, in addition to being $({\cal F}_t)$-adapted.
If $M$ is a measurable process and $\tau$ a finite random time, then
the composition
$$\begin{array}{ccccc}
\omega &\to& (\tau(\omega),\omega)&\to& M(\tau(\omega),\omega)\\[5pt]
{\cal F}&&{\cal B}([0,\infty))\times{\cal F} &&{\cal B}(\mathbb{R})
\end{array}
$$
is measurable.
In general,
$M_\tau 1_{\{\tau<\infty\}}
=\lim_{t\to\infty} M_{\tau\wedge t} 1_{\{\tau<\infty\}} $ is ${\cal F}$-measurable. The random time $\tau$ does not need to be an optional or stopping
time for this argument to work, just measurable. |
The "Hartshornian" sheafification of a sheaf | $\theta \circ \psi\colon \mathscr{F}^+ \to \mathscr{F}^+$ is a morphism such that $(\theta \circ \psi) \circ \theta = \theta \circ (\psi \circ \theta) = \theta \circ \operatorname{id}_\mathscr{F} = \theta$. $\operatorname{id}_{\mathscr{F}'}$ is another such morphism and you can apply uniqueness. |
The minimal polynomial of a vector is a factor of the minimal polynomial of a linear transform | (i) Suppose $p$ is the minimal polynomial for $\xi$. Let $m$ be another polynomial such that $m(A)\xi=0$, for example the minimal polynomial of $A$. Then dividing $m$ by $p$, $$ m=qp+r, \qquad \deg r < \deg p$$ $$r(A)\xi=m(A)\xi-q(A)p(A)\xi=0$$ Since $p$ is the minimal polynomial it follows that $r=0$ and $p$ is a factor of $m$.
(ii) Let $m$ be the lcm of $p_i$, which are the min polys of the basis vectors $e_i$. Then for any vector $\xi=\sum_i\alpha_ie_i$ $$m(A)\xi=\sum_i\alpha_im(A)e_i=0$$ since $m(A)e_i=q_i(A)p_i(A)e_i=0$. This means that $m(A)=0$. Secondly, suppose $p$ is some polynomial such that $p(A)=0$. Then each $p_i$ divides $p$ by part (i), and so their lcm $m$ does too. These two parts, that $m(A)=0$ and that $m$ divides any other polynomial $p(A)=0$ is the definition of minimal polynomial of $A$. |
Baloney detection kit for Math | Behold, the Aaronson crank detector kit:
http://www.scottaaronson.com/blog/?p=304
This is directed at complexity theory (in CS) but that doesn't make a difference. |
Evaluating the product containing the reciprocals of all primes $\prod_{k=1}^\infty\left(1+\frac{(-1)^k}{p_k}\right)$ | Just to give a few numbers.
Let us consider the partial products
$$P_n=\prod _{k=1}^{10^n} \left(1+\frac{(-1)^k}{p_k}\right)$$ Computed exactly, here are the results (in decimal values)
$$\left(
\begin{array}{cc}
n & P_n \\
1 & 0.5849897151924156997471733 \\
2 & 0.5738797348557373537954953 \\
3 & 0.5733032174061992771374417 \\
4 & 0.5732660657801225326295239 \\
5 & 0.5732633085315702175401998 \\
6 & 0.5732630922693932644226413
\end{array}
\right)$$ |
Show that $n^4-20n^2+4$ is composite when $n$ is any integer. | Given the expression $n^4-20n^2+4$
You write
"I found out that this expression could be factorised into".. $$(n^2−4n−2)(n^2+4n−2).$$
Is this related to the question?"
YES!
If neither one nor the other of the factors you found evaluates to $1, 0, -1,$ under any integer value of $n$, you will have shown that the expression can be factored into two non-zero, non-unit factors, and hence is composite.
Suppose we first try to check whether either factor might evaluate to zero for some integer $n$:
$(1)\;\;n^2-4n - 2 = 0$
$(2)\;\; n^2 + 4n - 2 = 0$
By using the quadratic equation, you'll find that there are no integer roots for either. So neither factor is zero.
Similarly, you can use the quadratic equation to test out whether either factor can equal $\pm 1$
$(3)\;\;n^2-4n-2 = 1 \iff n^2-4n -3 = 0$
$(4)\;\;n^2 - 4n-2 = -1 \iff n^2 - 4n -1 = 0$.
Similarly, we see there are no integer roots $n$ when solving
$(5)\;\; n^2+4n-3 = 0, \;\;\;(6)\;\;n^2 + 4n -1 = 0$
Now, if we use the common definition of a composite number as being strictly a positive integer $k$ with at least three factors (that is, at least one factor other than $1$ and itself, with $k\geq 2$), then the statement is not true.
Counter example under this stricter definition:
When $n=3$, the expression evaluates to: $3^4-20\times 3^2+4 = 81-180 + 4 = -95$. This is not a composite number, strictly defined, because it is not a positive integer. However, since $-95 = (-1)(1)(5)(19)$, for the purposes of this assignment, we can see that $-95$ is a negative composite number. |
In how many ways can I write $0$ as a sum of $n\; 0s, 1s \;\text{and}\; -1s?$ | $s(n) = \sum_{k=0}^{\lfloor n/2\rfloor} \binom{n}{n-2k} \cdot \binom{2k}{k}$ The first factor is the number of ways to place the zeroes, and the second factor chooses which $k$ of the remaining $2k$ positions get a $+1$ (the rest get $-1$). Now, we can simplify further:
$s(n) = \sum_{k=0}^{\lfloor n/2\rfloor} \frac{n!}{(n-2k)!\,(2k)!} \cdot \frac{(2k)!}{k!\,k!} = \sum_{k=0}^{\lfloor n/2\rfloor} \frac{n!}{(n-2k)!\,(k!)(k!)}$
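A quick numerical check (an added sketch) that $s(n)$ matches the coefficient of $x^n$ in $(1+x+x^2)^n$:
from math import comb

def s(n):
    return sum(comb(n, 2*k) * comb(2*k, k) for k in range(n//2 + 1))

def central_trinomial(n):
    coeffs = [1]                       # coefficients of (1 + x + x^2)^0
    for _ in range(n):
        new = [0] * (len(coeffs) + 2)  # multiply by (1 + x + x^2)
        for i, c in enumerate(coeffs):
            for j in range(3):
                new[i + j] += c
        coeffs = new
    return coeffs[n]

print([s(n) for n in range(7)])                  # [1, 1, 3, 7, 19, 51, 141]
print([central_trinomial(n) for n in range(7)])  # the same values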
Edit: put a few values into the OEIS and came across trinomial coefficients. In particular, $s(n)$ is the $n$-th central trinomial coefficient, which has several closed forms, which you can find in the second link. |
How does uncertainty of dataset propagates through numerical integration? | I found an answer to my question which in my opinion is correct (or at least has a point). Nevertheless it would be nice to have a validation from someone that might know more on the topic.
Since the average is calculated with the trapezoidal rule (which in reality is a summation), the standard deviation can also be calculated by the uncertainty propagation algebra for addition.
Based on Wikipedia for trapezoidal rule:
$$ \int_c^d f(x) dx \approx (d-c) \left[ \frac{f(c)+f(d)}{2} \right] = \left( \frac{d-c}{2} \right) f(c) + \left( \frac{d-c}{2} \right) f(d)$$
which will follow the 2nd uncertainty propagation rule from Wikipedia:
$$ f=aA+bB$$
with:
$ a = b = \frac{d-c}{2}$
$ A = f(c) $
$ B = f(d) $
In terms of Python3 code that translates to:
import numpy as np
# FUNCTIONS
def uncAddition(A=1., B=1., sigmaA=0., sigmaB=0., a=1., b=1., sigmaAB=0.):
    # propagate uncertainty through f = a*A + b*B (sigmaAB is the covariance term)
    f = (a * A) + (b * B)
    sigmaf = np.sqrt((a * sigmaA)**2
                     + (b * sigmaB)**2
                     + (2 * a * b * sigmaAB))
    return f, sigmaf
# MAIN
# data: an nx2 array that contains n values of f
#       and their absolute standard deviations
# x: the independent variable of the f values
# Obviously x and data should have the same number of rows
integralAve = 0.
integralStd = 0.
for i in range(1, len(data[:, 0])):
    trapzPartAve, trapzPartStd = uncAddition(A=data[i-1, 0],
                                             sigmaA=data[i-1, 1],
                                             a=(x[i] - x[i-1]) / 2.0,
                                             B=data[i, 0],
                                             sigmaB=data[i, 1],
                                             b=(x[i] - x[i-1]) / 2.0)
    integralAve += trapzPartAve
    integralStd += trapzPartStd
print(integralAve, integralStd) |
Find point of intersection of tangent and circle given 3 points | Your assumption that the tangent at $(0,-6)$ is $y=-6$ is incorrect, as this line doesn't touch the circle, and therefore there is no point in the real plane that satisfies both the circle and the line.
You will have to find the equation of tangent using its property that the perpendicular distance of line from centre is equal to the radius. |
Show that for all $ a \in \mathbb{R} $ applies: $ \int \limits_{-r}^{r} \frac{f(x)}{1+\mathrm{e}^{a x}} d x=\int \limits_{0}^{r} f(x) d x$ | First of all, because $f$ is even, we have:
$$\int_{-r}^rf(x)dx = \int_{-r}^0f(x)dx+\int_{0}^rf(x)dx=\int_{r}^0f(-y)(-dy)+\int_{0}^rf(x)dx $$
$$= \int_{0}^rf(-y)dy+\int_{0}^rf(x)dx=\int_{0}^rf(y)dy+\int_{0}^rf(x)dx=2\int_{0}^rf(x)dx$$
So, using your hint, we get
$$\int_{-r}^r\frac{f(x)}{1+e^{ax}}dx+\int_{-r}^r\frac{f(x)}{1+e^{-ax}}dx=\int_{-r}^rf(x)dx=2\int_{0}^rf(x)dx \ \ \ \ \ \ \ (*)$$
Now substituting $y=-x$, we have:
$$\int_{-r}^r\frac{f(x)}{1+e^{-ax}}dx = \int_{r}^{-r}\frac{f(-y)}{1+e^{ay}}(-dy)=\int_{-r}^{r}\frac{f(-y)}{1+e^{ay}}dy$$
$$=\int_{-r}^{r}\frac{f(y)}{1+e^{ay}}dy = \int_{-r}^{r}\frac{f(x)}{1+e^{ax}}dx$$
Substitute this back into $(*)$ to get:
$$
2\int_{-r}^r\frac{f(x)}{1+e^{ax}}dx=2\int_{0}^rf(x)dx
$$
and divide by $2$ to complete the proof. |
Eigenvalue decomposition for a very huge matrix of medical images (such as the pixel physical coordinates of CT images) | A Krylov-based method is to be recommended, as you can get away with matrix-vector operations.
Any method that tries to store and manipulate actual matrices will require too much memory.
You can read more about Krylov subspace methods for example at Wikipedia |
Differential forms and simplification | It's the negative of your answer. To get $d\alpha\wedge \lambda$, for example, you need to switch the order in $d\lambda\wedge dx$, etc. |
When to use Zorn's Lemma | As Dylan Moreland hints at in a comment above, one way to think about your specific question (on nilradicals), which is very commutative algebraic in spirit, is to first localize your ring $A$ at the non-nilpotent element $f$. The problem then amounts to proving that the non-zero ring $A_f$ admits a prime ideal, and this follows from Zorn's lemma: any non-zero ring (with identity) has a maximal (and hence prime) ideal.
This result on the existence of maximal ideals is the standard use of Zorn's lemma in commutative algebra, akin to the existence of bases in linear algebra.
If you would like to strengthen your commutative algebra, the solution is perhaps not so much to find a wider range of situations in which to apply Zorn's lemma, but rather to practice applying standard tricks such as localization, so as to find ways to put yourself into situations where this standard application of Zorn's lemma can be used. |
Angle between two vectors, when you don't know one constant | From the formulae for the movement of both airplanes you can read their velocities (up to a scaling factor, but the scale won't matter for the angle):
$$ \vec v_1 = -2\vec i + 3\vec j+ \vec k$$
$$ \vec v_2 = -\vec i + 2\vec j +a \vec k$$
You can calculate
$$ |\vec v_1| = \sqrt{(-2)^2+3^2+1^2} = \sqrt{14}$$
$$ |\vec v_2| = \sqrt{(-1)^2+2^2+a^2} = \sqrt{5 +a^2}$$
$$ \vec v_1\cdot \vec v_2 = (-2)\cdot (-1)+3\cdot 2 + 1\cdot a = 8+a$$
You have then an equation
$$ 8+a = \sqrt{14} \sqrt{5 +a^2} \cos(40^\circ)$$
From this you are able to find the value of $a$. |
How to prove facts regarding sentential logic | About:
But can we use a circular argument like this for proving?
consider this example: do you think it is possible to write a book regarding the grammar of a natural language, like e.g. English, without using language ?
If you want to avoid confusion (if any) we can write the English grammar in Latin.
The same for a mathematical logic textbook: the meta-language is the "plain" language used to describe e.g. the sentential calculus.
Of course, natural language uses "natural" logic: it cannot be otherwise... |
If the roots of the quadratic equation $2kx^{2}+(4k-1)x+2k-3=0$ are rational and k is an integer, how many values can k take which are less than 50? | So far so good. Notice that the equation $16k=p^2-1$ shows that $p^2$ is congruent to $1$ modulo $16$. You can then show that $p$ has to be congruent to $1,7,9,$ or $15$ modulo $16$. This gives a list of possible $p$'s: $1,7,9,15,17,23,25,31,\ldots$. You'll find that $p=31$ gives $k=60$, but everything smaller on the list should work. It follows that there are $7$ such values of $k$ (if you include $k=0$). |
Does vanishing of wronskian of solutions at point $\implies$ solutions are linearly dependent? | Hint:
The theorem is:
If the Wronskian $W(u,v)(x_0)$ is nonzero for some $x_0 \in [a,b]$ then $u$ and $v$ are linearly independent on $[a,b]$.
The contrapositive is:
If $u$ and $v$ are linearly dependent then the Wronskian is zero for all $x \in [a,b]$. |
Prove/Disprove that if two sets have the same power set then they are the same set | Suppose $A \neq B$. Without loss of generality, there exists an $x \in A$ such that $x \notin B$. Then $\{x\} \in \mathscr{P}(A)$ whereas $\{x\} \notin \mathscr{P}(B)$. Thus $\mathscr{P}(A) \neq \mathscr{P}(B)$.
Conversely, if $\mathscr{P}(A) = \mathscr{P}(B)$, then they contain the same singletons; since $\{x\} \in \mathscr{P}(A)$ iff $x \in A$, it follows that $A = B$.
$A = B$ if and only if $\mathscr{P}(A) = \mathscr{P}(B)$. |
Fake proof of differentiability | The very first step is wrong. You are only given that partial derivatives exist and are continuous, not that $f_j$ is a differentiable function on $\mathbb R^{n}$. |
Inverse that isn't Sequentially Compact | You can get even simpler than this; when in doubt, look at the extreme cases. Define $f(x)=0$ to be the constant function. Then $\{0\}$ is sequentially compact, but $f^{-1} (\{ 0\})=\mathbb R$, which isn't. |
Expected number of cards drawn before drawing a $4$ or $5$ | Let's call the 4s and 5s "special" cards. Add a joker to the deck and pretend it's an additional special card, so that there are now $9$ special cards in a deck of $53$ cards. Now shuffle all the cards up and then deal them out, face down, in one big circle. If you think about it, the average distance between consecutive special cards is $53/9$. Now locate the joker and think of it as identifying the "top" of the deck. The average distance to the next special card (which is now either a 4 or a 5) is still $53/9$. |
Birthday Paradox: 4 people. What is the probability that two (or more) of them have the same birthday? | Your calculations are all correct except the percentage is wrong (your multiplication by $100$ is off). Also, the complement of the event $A$ should read
$A'=$ "None of the people in the room share the same birthday".
Fix these two small issues and it looks good. |
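For reference, the corrected numbers (a sketch, assuming $365$ equally likely birthdays and ignoring leap years):

```python
from math import prod

# P(at least two of 4 people share a birthday) = 1 - P(all four distinct)
p_distinct = prod((365 - i) / 365 for i in range(4))
print(1 - p_distinct)                    # approx. 0.016356
print(f"{(1 - p_distinct) * 100:.2f}%")  # approx. 1.64%
```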
About the proof of Proposition 6.45 on Ziller's notes | Since $K$ is the identity component of $L$, $K$ is normal in $L$. Thus, $g$ normalizes $K$.
This implies that $\exp(X)$ also normalizes $K$. Indeed, if $k\in K$, then $ \exp(X) k \exp(X)^{-1} = \exp(X)y (y^{-1} k y) (\exp(X) y)^{-1} = g(y^{-1} k y)g^{-1}$. Since both $y,k\in K$, and $g$ normalizes $K$, the result follows.
The normalizing condition can equivalently be stated by saying that for any $k\in K$, there is a $k'\in K$ with $\exp(X) k = k' \exp(X)$. In other words, we can pass $\exp(X)$ across elements of $K$ as long as we're willing to change the element of $K$. Applying this to $g^n = (\exp (X)y)(\exp(X)y)\cdots(\exp(X)y)$, we can rewrite it as $\exp(X)^n y' = \exp(nX) y'$ for some $y'\in K$.
Understanding a proof related to continuity | Take $\delta = \frac{f(y)}{2}$. Then $(\delta, \infty)$ is an open set. By the definition of continuity (for a general topological space), $U = f^{-1}((\delta, \infty))$ is open. And clearly by definition, $y \in U$ since $f(y) > f(y) / 2 = \delta$. And for all $x \in U$, we have $f(x) > \delta$ and thus $f(x) \geq \delta$. |
Does this result for inverses of linear transformations hold in the infinite-dimensional case? | In general the statement is not true. The problem is that $S_1T$ is the identity only on the image of $T$, which could be smaller than $V$. Think, for example, of the polynomial ring and the linear map 'multiplication by $x$': it has an inverse on its image (divide by $x$ by shifting all the coefficients down), but it is clearly not invertible on the whole space (a nonzero polynomial of degree zero has no preimage).
How are graphs constructed? | Questions from algebraic graph theory are a natural source of such graph constructions. Namely, when you start asking questions about automorphism groups of graphs, you run into these kinds of examples or, more often, counterexamples.
Some examples can be found here: https://en.wikipedia.org/wiki/Graph_automorphism#Graph_families_defined_by_their_automorphisms.
I also recommend reading at least the first few chapters of "Algebraic Graph Theory" by Chris Godsil and Gordon Royle to get a feel for how these famous graphs (again, often constructed as counter-examples) or even whole graph families were constructed by investigating automorphism groups of graphs. |
Why $V_{\mathbb{C}} = V_{1,0}\oplus V_{0,1}$? | You've already computed the minimal polynomial. It then suffices to note that a linear transformation is diagonalizable whenever its minimal polynomial factors completely into distinct linear factors. |
If G contains a normal subgroup $N \cong \mathbb{Z}_2$ and $G/N \cong \mathbb{Z}$, then $G\cong \mathbb{Z}\times \mathbb{Z_2}$. | First, we should show that $G$ is abelian:
$N=\{e,n\}$ is a normal subgroup with two elements; in particular, $n$ is a central element (for any $g\in G$, $gng^{-1}$ lies in $N$ and is not $e$, so $gng^{-1}=n$).
$G/N$ is cyclic; let $g \in G$ be such that $\overline g$ is a generator. Then any $x \in G$ is of the form $g^j m$ with $j \in \mathbb Z$ and $m \in N$. Since $n$ is central, two elements of this form certainly commute with each other.
Now we are given a short exact sequence of abelian groups
$$0 \to C_2 \to G \to \mathbb Z \to 0$$
Since $\mathbb Z$ is free, the sequence splits, thus $G \cong C_2 \times \mathbb Z$. |
Show a,b,c congruences when $a^2+b^2=c^2$ | (a)
(i) If $3\nmid ab$, then $a\equiv\pm1\pmod 3$ and $b\equiv\pm1\pmod 3$, so $a^2+b^2\equiv1+1\equiv-1\pmod 3$,
but $c\equiv0,\pm1\pmod 3\implies c^2\equiv0,1\pmod 3$, a contradiction; hence $3\mid ab$.
(ii) Alternatively, using this parametrization, $a=k(m^2-n^2)$, $b=k(2mn)$, $c=k(m^2+n^2)$ generates all Pythagorean triples uniquely, where $m$, $n$, and $k$ are positive integers with $m > n$, $m - n$ odd, and $m$ and $n$ coprime.
So, $ab=k(2mn)\cdot k(m^2-n^2)$.
If $3\mid mn$, then $3\mid b$ and we are done; otherwise $(mn,3)=1$.
Using Fermat's Little Theorem,
$m^2\equiv1\pmod 3$ and $n^2\equiv1\implies 3\mid(m^2-n^2)\implies 3\mid a$
(b)
(i) If $5\nmid ab$, then $a\equiv\pm1,\pm2\pmod 5 \implies a^2\equiv1,4\pmod 5$
Similarly, $b^2\equiv1,4\pmod 5$
Observe that $c\equiv0,\pm1,\pm2\pmod5\implies c^2\equiv0,1,4\pmod 5$
If $a\equiv\pm1,b\equiv\pm1, a^2+b^2\equiv2\pmod5$ which is not congruent to any square $\pmod 5$
If $a\equiv\pm2,b\equiv\pm2, a^2+b^2\equiv8\equiv3\pmod5$ which is not congruent to any square $\pmod 5$
So, either $a\equiv\pm2,b\equiv\pm1$ or $a\equiv\pm1,b\equiv\pm2$
(ii) Using the same parametrization, $a=k(m^2-n^2)$, $b=k(2mn)$, $c=k(m^2+n^2)$.
So, $abc=k^3\,2mn(m^4-n^4)$.
If $5\mid m$ or $5\mid n$ we are done; otherwise $(mn,5)=1$.
Using Fermat's Little Theorem,
$m^4\equiv1\pmod 5$ and $n^4\equiv1\implies 5\mid(m^4-n^4)\implies 5\mid(m^2-n^2)$ or $5\mid(m^2+n^2)$
(c)$a=k(m^2-n^2),b=k(2mn),c=k(m^2+n^2)$
As $m-n$ is odd, $2\mid mn\implies 4\mid b$
The last problem: observe that the square of any number is $\equiv0,1,4\pmod 8$.
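All three divisibility claims are easy to sanity-check by brute force (a sketch over every triple with $c \le 100$):

```python
from math import isqrt

# For each Pythagorean triple a^2 + b^2 = c^2 with c <= 100, check that
# 3 | ab, 4 | ab (the even leg is divisible by 4), and 5 | abc.
for a in range(1, 100):
    for b in range(a, 100):
        s = a * a + b * b
        c = isqrt(s)
        if c * c != s or c > 100:
            continue
        assert a * b % 3 == 0 and a * b % 4 == 0 and a * b * c % 5 == 0
print("all triples with c <= 100 pass")
```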
Labeling derivatives of functions from a graph | Hint:
A, B, C, and D look like the graphs of linear, quadratic, cubic, and quartic polynomials respectively. Think about this and about what happens to the degree of a polynomial when you take its derivative. |
function continuous at a limit point (Rudin) theorem question | For isolated point $p$ both statements are true and hence, the implications are valid.
Edit: The above answer is wrong. Rudin does only define the limit of functions for limit point (p. 83). Therefore, one of the statements is not defined. |
which continued fraction is bigger? $[1,1,a,1,1,1,1]$ or $[1,1,1,b,1,1,1]$ | Hints:
$a<[a,{\cal S}]<a+1$ for any natural $a$ and any nonempty sequence $\cal S$ of naturals following it.
$[{\cal S}_1]>[{\cal S}_2]\implies [u,{\cal S}_1]<[v,{\cal S}_2]$ for any $u\le v$ and sequences ${\cal S}_{1,2}$ of naturals.
In fact, to determine which of two (each possibly finite, possibly infinite) continued fractions is bigger, it suffices to look for the first number in which they differ. In an odd position, the larger digit is attached to the larger number, and in an even position the opposite holds. Another way of saying this is with Arthur's comment above. This can also be proved with the above two hints. |
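The rule is easy to verify experimentally with exact rational arithmetic (a sketch):

```python
from fractions import Fraction

def cf(seq):
    # evaluate a finite continued fraction [a0; a1, a2, ...] exactly
    val = Fraction(seq[-1])
    for term in reversed(seq[:-1]):
        val = term + 1 / val
    return val

# The two fractions first differ at the third (odd) position, so
# [1,1,a,...] should be the larger one for any a, b > 1.
for a in (2, 3, 100):
    for b in (2, 3, 100):
        print(a, b, cf([1, 1, a, 1, 1, 1, 1]) > cf([1, 1, 1, b, 1, 1, 1]))
```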
How do you calculate a rolling average speed? | If you want the average speed over the last $5$ minutes you can just add up the last $300,000$ speed values and divide by $300,000$. Once you get the sum started you can just update it by adding the new value and subtracting the old one. Storing the speed data in a circular buffer will make this easy. You might want to add the speed data in blocks of $1000$ to get a speed for the second first. |
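A minimal sketch in code (assuming one speed sample per millisecond, so a $5$-minute window holds $300{,}000$ samples; `deque` plays the role of the circular buffer):

```python
from collections import deque

class RollingAverage:
    # O(1)-per-sample rolling average over a fixed-size window
    def __init__(self, window_size):
        self.buffer = deque(maxlen=window_size)
        self.total = 0.0

    def add(self, speed):
        if len(self.buffer) == self.buffer.maxlen:
            self.total -= self.buffer[0]  # subtract the value falling out
        self.buffer.append(speed)         # deque drops the oldest itself
        self.total += speed
        return self.total / len(self.buffer)

window = RollingAverage(300_000)  # 5 minutes at 1000 samples/second
```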
What is the best way to solve a puzzle involving sets of information which seem in disorder? | Well, as mentioned, a logic grid helps some people to organize this kind of problem.
The basic facts given can be laid out in a grid as follows:
Then we can use the exclusive nature of each attribute to exclude other possible earnings at Blue and other places paying $\$3300$:
Then the salaries at each company can be inferred from the lack of other options:
Then a slightly less obvious inference - the salary of $\$3700$ at Electric allows us to know that Robert doesn't work there. On the grid, the checkmark at Electric/$\$3700$ "sees" the cross at Robert/$\$3700$ around the corner and can echo it along the other direction.
And finally the lack of other options shows that Audrey works at Electric and again the checkmark can be propagated to salary from the round-the-corner alignment, giving the answer. Clearly all the other checkmarks can be completed also at this stage if desired. |
Deduce function f from integral | There is no such function $f$, since letting $x=0$ on both sides of the given equation yields $0=-1$.
(Notice that using the Fundamental Theorem of Calculus to solve for $f$ yields
$f(x)=\frac{e^{3x}(2x-1)}{e^x-1}$, and both integrals diverge for this function.)
How do I split this into partial sums? | You may write, for the partial sum:
$$
\begin{align}
\sum_{n=0}^N(s_n-s_{n+1})&=\sum_{n=0}^Ns_n-\sum_{n=0}^Ns_{n+1}\\\\
&=\sum_{n=0}^Ns_n-\sum_{n=1}^{N+1}s_{n}\\\\
&=s_0+\sum_{n=1}^Ns_n-\sum_{n=1}^{N}s_{n}-s_{N+1}\\\\
&=s_0-s_{N+1}
\end{align}
$$
This is what is called 'telescoping'.
Proof of Euler's formula that doesn't use differentiation? | For all real $z$, we have:
$$e^z=\lim_{N\to\infty}\left(1+\frac zN\right)^N$$
so it seems like a good definition for complex $z$ as well. Letting $z=ix$ and using De Moivre and L'Hôpital gets you $\cos x+i\sin x$. |
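A quick numerical illustration of that limit (a sketch):

```python
import cmath

x = 1.3
for N in (10, 1_000, 100_000):
    approx = (1 + 1j * x / N) ** N
    print(N, approx, cmath.exp(1j * x))  # exp(ix) = cos x + i sin x
```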
Motivation for supermanifolds | In the early 1980s, Witten, Alvarez-Gaume, and Getzler gave short proofs of (a special case of) the Atiyah-Singer index theorem using supermanifold tools. This has applications in topology beyond physics. |
How can we prove that a (locally bounded) semigroup is strongly continuous on the closure of its generator? | You are right that it suffices to show strong continuity at $0$ (by the semigroup property), but it is not true that it is enough to check strong continuity at $0$ for $x\in D(A)$. You would need some uniform bound here. On the other hand, right-differentiability at $0$ automatically implies right-continuity at $0$: If $T_t x-x$ does not tend to zero, then there is no chance for the limit $\frac 1 t(T_t x-x)$ to exist.
In your situation, strong continuity on $\overline{D(A)}$ is equivalent to local boundedness on $\overline{D(A)}$. One implication follows directly from the uniform boundedness principle and the semigroup property. For the other implication (the one you ask about), let $x\in\overline{D(A)}$ and $(x_n)$ a sequence in $D(A)$ such that $x_n\to x$. Then
$$
\|T_t x-x\|\leq \sup_{s\in[0,T]}\|T_s\|_{\mathcal{L}(\overline{D(A)})}\,\|x-x_n\|+\|T_t x_n-x_n\|+\|x-x_n\|.
$$
Letting first $t\to 0$ and then $n\to\infty$ yields the desired convergence. |
Similar to PIE, but works exactly in this case; why? | If the statements "$n$ is coprime to $2$" and "$n$ is coprime to $3$" were independent, we wouldn't have to worry about inclusion-exclusion: some fraction of the numbers would be coprime to $2$, and among those, the fraction coprime to $3$ would be the same as among all numbers, which would make the multiplication exact. As you say, that works any time $n$ is a multiple of $6=2 \cdot 3$. With your larger set of primes, the computation assuming independence will be exact when $n$ is a multiple of $2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13=30030$.
We can look at how close it is for the primes $2,3,5$. Multiplication predicts that $\frac {1 \cdot 2 \cdot 4}{2 \cdot 3 \cdot 5}=\frac 4{15}$ of the numbers will be coprime to all of these. We can see how close it is:
$$\begin{array}{rrrr}
n & \text{coprime up to } n & \text{prediction} & \text{error}\\
1&1&0.266667&0.733333\\
2&1&0.533333&0.466667\\
3&1&0.8&0.2\\
4&1&1.066667&-0.06667\\
5&1&1.333333&-0.33333\\
6&1&1.6&-0.6\\
7&2&1.866667&0.133333\\
8&2&2.133333&-0.13333\\
9&2&2.4&-0.4\\
10&2&2.666667&-0.66667\\
11&3&2.933333&0.066667\\
12&3&3.2&-0.2\\
13&4&3.466667&0.533333\\
14&4&3.733333&0.266667\\
15&4&4&0\\
16&4&4.266667&-0.26667\\
17&5&4.533333&0.466667\\
18&5&4.8&0.2\\
19&6&5.066667&0.933333\\
20&6&5.333333&0.666667\\
21&6&5.6&0.4\\
22&6&5.866667&0.133333\\
23&7&6.133333&0.866667\\
24&7&6.4&0.6\\
25&7&6.666667&0.333333\\
26&7&6.933333&0.066667\\
27&7&7.2&-0.2\\
28&7&7.466667&-0.46667\\
29&8&7.733333&0.266667\\
30&8&8&0
\end{array}$$
If we had started at $0$ all the predictions would have been $\frac 4{15}$ higher. |
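And a direct check of the exactness claim at $n = 30030$ (a sketch):

```python
from math import gcd

n = 30030  # 2 * 3 * 5 * 7 * 11 * 13
count = sum(1 for i in range(1, n + 1) if gcd(i, n) == 1)
prediction = n
for p in (2, 3, 5, 7, 11, 13):
    prediction = prediction * (p - 1) // p
print(count, prediction)  # both 5760: the independence prediction is exact
```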
Does every $x\in (0,1)$ have a base-3 expansion containing a 1? | Suppose $x=.202020202\dots = .a_1 a_2 a_3\dots$ Could $a_1\le 1?$ No because then the most $.a_1 a_2 a_3\dots$ could be is $.1222222\dots = .20000000\dots < x.$ So $a_1=2.$
Thus $x = .2 a_2 a_3\dots.$ Could $a_2>0?$ No because then the smallest $.2 a_2 a_3\dots$ could be is $.210000\dots = .20222222\dots > x.$
So we have $x = .2 0 a_3\dots.$ Now multiply by $3^2$ to get $20.2020202\dots = 20.a_3\dots.$ Subtract $20$ from both sides to get $.2020202\dots = .a_3 a_4\dots.$ By what we did above, $a_3=2,a_4=0.$ Etc. |
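In particular $.202020\ldots_3 = \frac34$, so $\frac34$ is an explicit point of $(0,1)$ whose base-3 expansion contains no $1$, and the argument above shows this expansion is forced. A quick digit-generation check (a sketch):

```python
from fractions import Fraction

x = Fraction(3, 4)
digits = []
for _ in range(12):
    x *= 3
    d = int(x)       # the next base-3 digit is the integer part
    digits.append(d)
    x -= d
print(digits)  # [2, 0, 2, 0, ...]: no 1 appears
```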
Strong or weak convergence of $f_n(x) = \frac{1}{n}$ if $x \in [0,n]$ in $L^1(\mathbb{R})$ | If $g \in C_c (\Bbb {R}) $ is arbitrary, then
$\int f_n g dx \to 0$.
Hence, the only possible (weak) limit of any subsequence is ???
But
if we take $g \equiv 1$, then $\int f_n g \, dx = 1$ for all $n$.
Hence, there is no weakly convergent subsequence. In particular, the sequence does not converge strongly. |
converging sequence, intersection of a closed set with an open set | No. Take $A=\{0\}$ and $B=\mathbb{R}^n$, $x_k=(1/k,0,\ldots,0)$.
More generally, let $x$ be any non-interior point of $A$ and let $B$ be any open set containing $x$. For each $n$, pick $x_n$ with $|x_n-x|<1/n$, $x_n\notin A$, and $x_n\in B$ (the latter is automatic when $n$ is large enough).
Fibonacci Proof: Prove that $\frac{F_n-F_{n+16}}{7}$ is always an odd integer. | I suspect that, instead, you are merely supposed to show that $$\frac{F_n-F_{n+16}}7$$ is an integer for all $n$. As discussed in the comments above, the integer needn't be odd.
I leave the two base cases to you (we need two of them for an induction proof).
Now, let's suppose that $$\frac{F_n-F_{n+16}}7$$ and $$\frac{F_{n+1}-F_{(n+1)+16}}7$$ are both integers for some $n$. Note, then, that
$$\begin{align*}
\frac{F_{n+2}-F_{(n+2)+16}}7&=\frac{(F_n+F_{n+1})-(F_{n+16}+F_{(n+1)+16})}7\\\\
&=\frac{F_n-F_{n+16}}7+\frac{F_{n+1}-F_{(n+1)+16}}7\;.
\end{align*}$$ |
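Both claims are easy to confirm numerically (a sketch, with the convention $F_1 = F_2 = 1$):

```python
fib = [0, 1]
while len(fib) < 40:
    fib.append(fib[-1] + fib[-2])

for n in range(1, 15):
    d = fib[n] - fib[n + 16]
    assert d % 7 == 0  # always divisible by 7 ...
    print(n, d // 7, "odd" if (d // 7) % 2 else "even")  # ... but not always odd
```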
Extending a group action to a quotient group | My understanding of the phrase "$G$ acts on $A$" is something along the lines of:
elements of $G$ are behaving like functions $A \to A$ (edit: and the identity element behaves like the identity function),
composition of these functions behaves like multiplication in $G$,
each function respects the structure on $A$.
That is, if $A$ is a set, then the third stipulation is meaningless, but if $A$ is a group, then I expect these functions to be group homomorphisms. Likewise if $A$ is a topological space, I expect these functions to be continuous maps. A shorter way of saying all this: an "action" of $G$ on $A$ is simply a group homomorphism $G \to \mathrm{Aut}(A)$, whatever an 'automorphism' of $A$ means.
(If $G$ is acting on $A$ purely as a set, I don't think this can be true. Let $G = \{1, g\}$, $A = \{1, a, \ldots, a^5\}$, $B =\{ 1, a^3\}$, all cyclic groups. Now suppose $g$ acts on $A$ by swapping $a$ and $a^2$, but leaving the other four points fixed. Then $aB = a^4B$, but $g(a)B \neq g(a^4)B$.)
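A concrete check of that counterexample (a sketch, writing the cyclic group of order $6$ additively as $\{0,\ldots,5\}$, so $a^k$ becomes $k$ and $B=\{0,3\}$):

```python
def g(x):
    # g swaps a and a^2 (here 1 and 2) and fixes the other four points
    return {1: 2, 2: 1}.get(x, x)

B = {0, 3}

def coset(x):
    return {(x + b) % 6 for b in B}

print(coset(1) == coset(4))        # True:  aB == a^4 B
print(coset(g(1)) == coset(g(4)))  # False: g(a)B != g(a^4)B
```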