Set of idempotent elements closed under addition: prove $a+a = 0, ab=ba$ | As $1$ is idempotent, so is $2:=1+1$. Hence $4=2^2=2$ and so $2=0$.
Consequently
$$ x+x=2\cdot x=0\cdot x=0$$
for all $x\in R$.
If $x,y$ are idempotent, then so is their sum, hence
$$ x+y=(x+y)^2=x^2+xy+yx+y^2=x+y+xy+yx$$
and using the above, $$xy=-yx= yx.$$
Then also $(xy)^2=xyxy=xxyy=x^2y^2$, i.e., we see that the idempotents are not only closed under addition (and, as $-1=1$, under subtraction), but also under multiplication, i.e., they form a subring $S$ of $R$. Also, $S$ is abelian. Can we show $S=R$ from this? |
Cardinality of a closed uncountable set. | This is too long for a comment, but: we could also ask about extending the "closed" side of the question. That is, in a "reasonable" metric space, what sort of sets do we know have the perfect set property? If this is interesting to you, you might be interested in Descriptive Set Theory!
For example, one result in descriptive set theory is that in any complete separable metric space (the underlying topological space of such a space is called Polish, by the way), every analytic set has the perfect set property. This is proved in Kechris' book, in the chapter "Games People Play" if I recall correctly. The analytic sets are a broad class of sets, properly extending the Borel sets.
Meanwhile, it is consistent with ZFC that coanalytic sets need not have this property! And in fact statements of the form "every set of type $\Gamma$ has the perfect set property" are closely related to large cardinals - roughly speaking, if we assume that the combinatorial structure of the set-theoretic universe is sufficiently rich, then every projective set has the perfect set property! (The projective sets are what you get by starting with the Borel sets and closing off under continuous image and complementation - that they extend the Borel sets properly was proved by Souslin, correcting an error of Lebesgue.) There are a pair of themes here:
Regularity properties for sets of reals follow naturally from the determinacy of certain kinds of infinite games.
Determinacy principles follow from, and in a certain sense imply (we have to be careful here - what I really mean is that they imply the existence of an inner model with large cardinals, or more coarsely the consistency of large cardinals with ZFC), large cardinals - which I'm not going to even try to sketch here, but see e.g. http://projecteuclid.org/download/pdfview_1/euclid.ndjfl/1427202981.
Descriptive set theory beyond the projective is studied, but I know very little about it. |
Does the conductor of an elliptic curve always divide the minimal discriminant? | The answer to the question in the title is yes. You are looking for Ogg's formula. See Silverman's "Advanced topics in the arithmetic of elliptic curves", Sections 9, 10, and 11, and in particular Ogg's formula in 11.1:
Let $K/\mathbb{Q}_p$ be a local field and let $E/K$ be an elliptic curve, and let
$\nu_K(\mathcal{D}_{E/K})$ = the valuation of the minimal discriminant of $E/K$,
$f(E/K)=$ the exponent of the conductor of $E/K$,
$m(E/K)=$ the number of components on the special fiber of $E/K$ (thus $m(E/K)\geq 1$).
Then:
$$\nu_K(\mathcal{D}_{E/K})=f(E/K)+m(E/K)-1.$$
In particular, $\nu_K(\mathcal{D}_{E/K})\geq f(E/K)$. |
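A quick numeric sanity check of Ogg's formula, sketched in Python. The curve $y^2+y=x^3-x^2-10x-20$ (the classical conductor-$11$ curve, label 11a1) is assumed here to have conductor exponent $f=1$ and $m=5$ components (Kodaira type $I_5$) at $p=11$; those two invariants are taken from standard tables, and only the discriminant is computed below.

```python
# Check Ogg's formula v_p(D) = f + m - 1 on the curve 11a1 (assumed data:
# f = 1, m = 5 at p = 11; the discriminant is computed from scratch).
a1, a2, a3, a4, a6 = 0, -1, 1, -10, -20

b2 = a1**2 + 4*a2                      # standard b-invariants
b4 = 2*a4 + a1*a3
b6 = a3**2 + 4*a6
b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2
disc = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6

p, v, d = 11, 0, abs(disc)
while d % p == 0:                      # v = v_p(minimal discriminant)
    d //= p
    v += 1

f, m = 1, 5                            # assumed invariants (see above)
assert disc == -11**5 and v == f + m - 1   # Ogg: 5 = 1 + 5 - 1
print("v_11(D) =", v, "= f + m - 1")
```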
Showing orthogonality of Legendre polynomials using Rodrigues' formula | The proof works by mathematical induction, which is a basic mathematical technique.
Start with the integral you gave, and integrate by parts once:
\begin{align}
I_l
& =
\frac{1}{2^{2l}(l!)^2} \int_{-1}^1 \left[\frac{d^l(x^2-1)^l}{dx^l}\right]\left[\frac{d^l(x^2-1)^l}{dx^l}\right]dx
\\ & =
\frac{1}{2^{2l}(l!)^2} \left[
\left. \left[\frac{d^{l-1}(x^2-1)^l}{dx^{l-1}}\right]\left[\frac{d^{l}(x^2-1)^l}{dx^{l}}\right] \right|_{-1}^1
-
\int_{-1}^1 \left[\frac{d^{l-1}(x^2-1)^l}{dx^{l-1}}\right]\left[\frac{d^{l+1}(x^2-1)^l}{dx^{l+1}}\right]dx
\right]
\\ & =
\frac{(-1)}{2^{2l}(l!)^2}
\int_{-1}^1 \left[\frac{d^{l-1}(x^2-1)^l}{dx^{l-1}}\right]\left[\frac{d^{l+1}(x^2-1)^l}{dx^{l+1}}\right]dx
,
\end{align}
where the boundary terms vanish, because $(x^2-1)^l = (x+1)^l (x-1)^l$ has $l$-fold zeros at both endpoints, and the $(l-1)$-fold derivative leaves one zero on each end.
This suggests the intermediate result for the inductive step: the claim that
\begin{align}
I_l
& =
\frac{(-1)^k}{2^{2l}(l!)^2}
\int_{-1}^1 \left[\frac{d^{l-k}(x^2-1)^l}{dx^{l-k}}\right]\left[\frac{d^{l+k}(x^2-1)^l}{dx^{l+k}}\right]dx
,
\end{align}
for all $k=0,1,\ldots, l$. To prove this claim via induction, we assume that it's true for some $k$, and then we integrate by parts again:
\begin{align}
I_l
& =
\frac{(-1)^k}{2^{2l}(l!)^2}
\int_{-1}^1 \left[\frac{d^{l-k}(x^2-1)^l}{dx^{l-k}}\right]\left[\frac{d^{l+k}(x^2-1)^l}{dx^{l+k}}\right]dx
\\ & =
\frac{(-1)^{k}}{2^{2l}(l!)^2} \left[
\left. \left[\frac{d^{l-k-1}(x^2-1)^l}{dx^{l-k-1}}\right]\left[\frac{d^{l+k}(x^2-1)^l}{dx^{l+k}}\right] \right|_{-1}^1
-
\int_{-1}^1 \left[\frac{d^{l-k-1}(x^2-1)^l}{dx^{l-k-1}}\right]\left[\frac{d^{l+k+1}(x^2-1)^l}{dx^{l+k+1}}\right]dx
\right]
\\ & =
\frac{(-1)^{k+1}}{2^{2l}(l!)^2}
\int_{-1}^1 \left[\frac{d^{l-k-1}(x^2-1)^l}{dx^{l-k-1}}\right]\left[\frac{d^{l+k+1}(x^2-1)^l}{dx^{l+k+1}}\right]dx
,
\end{align}
where the boundary terms vanish for the same reason as above. This is exactly the same claim with $k$ replaced by $k+1$, which completes the proof by induction.
The desired result,
$$I_l = \frac{(-1)^l}{2^{2l}(l!)^2} \int_{-1}^1 (x^2-1)^l \frac{d^{2l}(x^2-1)^l}{dx^{2l}}dx,$$
then follows as the special case $k=l$. Finally, to get to
$$
I_l = \frac{(2l)!}{2^{2l}(l!)^2} \int_{-1}^1 (1 - x^2)^l dx
$$
you simply need to realize that $(x^2-1)^l$ is a polynomial of degree $2l$ with leading coefficient $1$, and that under a $2l$-fold derivative the only thing that will survive is $(2l)!$ times that leading coefficient. |
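A short symbolic cross-check, assuming sympy is available: for small $l$ the final Rodrigues-formula expression agrees with the known normalization $\int_{-1}^1 P_l(x)^2\,dx = \frac{2}{2l+1}$.

```python
# Verify that (2l)!/(2^{2l}(l!)^2) * int (1-x^2)^l dx equals int P_l^2 dx
# and the standard value 2/(2l+1), for l = 0..4.
from sympy import symbols, integrate, legendre, factorial, Rational, simplify

x = symbols('x')
for l in range(5):
    lhs = integrate(legendre(l, x)**2, (x, -1, 1))
    rhs = factorial(2*l) / (2**(2*l) * factorial(l)**2) \
          * integrate((1 - x**2)**l, (x, -1, 1))
    assert simplify(lhs - rhs) == 0 and lhs == Rational(2, 2*l + 1)
print("normalization verified for l = 0..4")
```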
Possibly unbounded operator which is closed and has a lower bound has closed range | C) is quite easy: let $S\phi_n \to \psi$. We have to show that $\psi$ belongs to the range of $S$. We have $\|S\phi_n-S\phi_m\| \to 0$ and hence $\|\phi_n- \phi_m\| \to 0$ by 4). Since $H$ is complete, there exists $\phi$ such that $\phi_n \to \phi$. But $\phi_n \to \phi$ and $S(\phi_n) \to \psi$ imply, since $S$ is closed, that $\psi =S(\phi)$, which finishes the proof. |
Show $|a\sqrt{b}-\sqrt{c}|$ equal to zero or larger than $\frac{1}{2}10^{-3}$ when $a, b$ and $c$ are natural numbers strictly less than 100 | If $a\sqrt{b} - \sqrt{c} \neq 0$, then $|a^2b - c| \geq 1$. We see that:
$$
a\sqrt{b} + \sqrt{c} < 100(10) + 10 = 1010
$$
Therefore:
$$
|a\sqrt{b} - \sqrt{c}| = \frac{|a^2b - c|}{a\sqrt{b} + \sqrt{c}} > \frac{1}{1010} > \frac{1}{2000}
$$ |
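For a belt-and-braces check, a brute force over $1\le a,b,c\le 99$ (reading "natural numbers strictly less than 100" as this range) confirms that the smallest nonzero value of $|a\sqrt b-\sqrt c|$ indeed exceeds $\frac{1}{2000}$:

```python
# Exhaustive check: the minimum nonzero |a*sqrt(b) - sqrt(c)| over the
# range exceeds 1/2000, as the 1/1010 bound above predicts.
from math import sqrt

smallest = float('inf')
for a in range(1, 100):
    for b in range(1, 100):
        u = a * sqrt(b)
        for c in range(1, 100):
            if a*a*b == c:                 # exact zero: a*sqrt(b) = sqrt(c)
                continue
            smallest = min(smallest, abs(u - sqrt(c)))
print(smallest, smallest > 1/2000)          # True
```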
Prove that A rectangle is a square **if and only if** its diagonals are perpendicular. | Hint: an approach to proving "$A$ if and only if $B$" that sometimes works out nicely is to prove that each of $A$ and $B$ is equivalent to some third assertion $C$. In this case, consider rotating the rectangle, $R$ say, through $90^o$ to get a new rectangle, say $R'$. Then the following are equivalent:
$R$ is a square,
$R = R'$ (as sets of points in the plane),
the diagonals of $R$ and $R'$ are perpendicular. |
knot invariants from the symmetric group | The only information the symmetric group quotient keeps about an element of a braid group is how strings connect vertices. The only information about the corresponding link you can determine from that is the total number of components (equal to the number of cycles of the corresponding permutation). All other information is destroyed by the relations $s_i^2 = 1$.
However (and you probably know this, but it's worth saying) instead of mapping into the symmetric group, you can map into the Hecke algebra of $S_n$ (so instead quotient by a relation like $s_i^2 = (q - 1) s_i + 1$). The representation theory of the Hecke algebra is (most of the time) just like the representation theory of $S_n$, but you can actually get interesting invariants this way. See, for example, Kassel and Turaev's Braid groups. |
Is the function $f(x)= {\sin x \over x}$ uniformly continuous over $\mathbb{R}$? | One way to see that $f$ is uniformly continuous (assuming the removable singularity in $0$ removed) is to note that it is differentiable, and its derivative is bounded. Then use the fact that any differentiable function with bounded derivative is uniformly continuous (even Lipschitz continuous), since by the mean value theorem we have
$$\left\lvert \frac{f(y) - f(x)}{y-x}\right\rvert = \lvert f'(\xi)\rvert \Rightarrow \lvert f(y) - f(x)\rvert \leqslant M\cdot\lvert y-x\rvert,$$
where $M$ is a bound for the derivative.
Another way to see it is to use the fact that $\lim\limits_{\lvert x\rvert\to\infty} f(x) = 0$, and that every continuous function on a compact interval is uniformly continuous. For a given $\varepsilon > 0$, choose a $K > 0$ with $\lvert x\rvert \geqslant K \Rightarrow \lvert f(x)\rvert < \varepsilon/3$. By the uniform continuity of $f$ restricted to the compact interval $[-K-1,K+1]$, there is a $\delta > 0$ such that $\lvert x\rvert, \lvert y\rvert \leqslant K+1, \lvert y-x\rvert < \delta \Rightarrow \lvert f(y)-f(x)\rvert < \varepsilon$. If $\delta$ is chosen $< 1$, that $\delta$ works then on all of $\mathbb{R}$. |
How do you formally prove that rotation is a linear transformation? | Maybe my first answer was not enough "formal", since I didn't say explicitely what I was assuming a "rotation" is -nor "angle". (Exercise: (1) find in which places I was using those unsaid definitions of "rotation" and "angle" and (2) which definitions are they. :-) )
Well, it was a sort of "high-school proof". The problem about the definition of "rotation", as some answers have already pointed out, or implied, is that, if you really want to be formal, you end saying that a rotation is some kind of linear transformation by definition. Full stop.
Nevertheless, I think I can produce an "undergraduate" proof of this fact, avoiding circular arguments. It has the merit that is valid in any dimension. For which I need the following
Convention. Whatever "rotation" means,
It is a map of some vector space $V$,
Which has a way of measuring "lengths" and "angles" of its vectors, and
"Rotations" preserve those "lengths" and "angles".
Now, a fancy way to have "lengths" and "angles" in a vector space is to have a dot product in it. So let's assume that $V$ is a Euclidean vector space; for instance, a real vector space such as $\mathbb{R}^n$ with the standard dot product.
Then, in such a $V$, "length" means "norm", which is $\| v \| = +\sqrt{v\cdot v}$.
We have to be more careful with "angles" because the standard definition already involves rotations. To avoid a circular argument, we define the (non-oriented) angle determined by two vectors $v, w$ as the unique real number $\theta = \widehat{vw} \in [0, \pi ]$ such that
$$
\cos \theta = \frac{v\cdot w}{\|v\| \|w\|} \ .
$$
Notice that this definition makes sense since, because of the Cauchy-Schwarz inequality, we always have $-1 \leq \frac{v\cdot w}{\|v\| \|w\|} \leq 1$ and $\cos : [0, \pi] \longrightarrow [-1,1]$ is bijective.
So, "rotation" is some kind of map $f: V \longrightarrow V$ which preserves norms and angles. Since the dot product can be expressed in terms of norms and (cosines of) angles,
$$
v\cdot w = \|v\| \|w\| \cos\widehat{vw}
$$
rotations preserve dot products:
$$
f(v) \cdot f(w) = v\cdot w \ .
$$
Now, let's show that a map that preserves the dot product is necessarily linear:
$$
\begin{align}
\| f(v+w) - f(v) -f(w) \|^2 &= \left( f(v+w) -f(v) -f(w) \right) \cdot \left( f(v+w) -f(v) -f(w) \right) \\
&= \left( f(v+w)\cdot f(v+w) - f(v+w) \cdot f(v) - f(v+w)\cdot f(w) \right) \\
&{} \qquad + \left( -f(v)\cdot f(v+w) + f(v) \cdot f(v) + f(v)\cdot f(w) \right) \\
&{} \qquad + \left( -f(w)\cdot f(v+w) + f(w)\cdot f(v) + f(w)\cdot f(w) \right) \\
&= \left( (v+w)\cdot (v+w) - (v+w) \cdot v - (v+w)\cdot w \right) \\
&{} \qquad + \left( -v\cdot (v+w) + v \cdot v + v\cdot w \right) \\
&{} \qquad + \left( -w\cdot (v+w) + w\cdot v + w\cdot w \right) \\
&= \| v+w - v -w \|^2 = 0 \ .
\end{align}
$$
So
$$
f(v+w) = f(v) + f(w) \ .
$$
A similar computation shows that
$$
\|f(\lambda v ) - \lambda f(v) \|^2 = \|\lambda v - \lambda v\|^2 = 0 \ .
$$
Thus
$$
f(\lambda v) = \lambda f(v) \ .
$$
Hence, our rotation $f$ is a linear transformation. |
Properties of a $B^\ast$-algebra | B*-algebra is just an old-fashioned name for an abstract C*-algebra. Can you use the Gelfand–Naimark theorem?
If so, then the first clause follows directly from it as in this case your algebra is isometrically *-isomorphic to the space of continuous functions vanishing at infinity $C_0(X)$ defined on some locally compact Hausdorff space $X$. You can use it also to derive the second clause. Indeed, all non-zero multiplicative linear functionals on $C_0(X)$ are just point evaluations for which this is clear. |
Proving that $\lim\limits_{n\to \infty } \frac{x^n}{n!}=0$ | Notice that to get the inequality "$<$": $$N + 1 > N \Rightarrow \frac{1}{N+1} < \frac{1}{N} \Rightarrow \frac{|x|}{N+1} < \frac{|x|}{N}\ \text{and} \ \ n > N \Rightarrow \frac{1}{n} < \frac{1}{N}$$
As for the second question, you are taking $n > N$, so
$$1 , 2 , \ldots , N, N + 1, \ldots , n$$
Edit:
One more thing $$\frac{|x|}{1}\frac{|x|}{2}\frac{|x|}{3}\cdots\frac{|x|}{N-1} = \frac{|x|^{N-1}}{(N-1)!}$$ |
How to break the solution set and join the solution sets and notations associated with them | It is fine to write your solution as a union of two sets
$$
\left\{\frac{(2n+1)\pi}{2} \mid n \in \Bbb{Z} \right\} \cup \left\{\frac{(2n+1)\pi}{4} \mid n \in \Bbb{Z} \right\}
$$
but if you want to avoid the union, you could write this set as follows
$$
\left\{\frac{n\pi}{4} \mid n \in \Bbb{Z} \text{ and } n \not\equiv 0 \bmod 4 \right\}
$$ |
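A quick mechanical check of the equivalence, working with the integer multiples $n$ in $n\pi/4$ so no floating point is needed; the window restriction merely avoids edge effects at the ends of the finite ranges:

```python
# Writing every solution as n*pi/4: the union gives {4n + 2} u {odd n},
# which should coincide with {n : n % 4 != 0} on a common window.
N = 1000
union = {(2*n + 1) * 2 for n in range(-N, N)} | {2*n + 1 for n in range(-N, N)}
single = {n for n in range(-2*N + 1, 2*N) if n % 4 != 0}
assert {n for n in union if abs(n) < 2*N} == single
print("the two descriptions agree")
```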
Evaluate $\int \frac{\sec(11 x) \tan(11 x)}{\sqrt{\sec(11 x)}} \, dx $ | Your $du$ is wrong. It should have an extra 11 in it. :) |
Prove $f:N\rightarrow N$, $f(x,y)=2^{x-1}(2y-1)$ is surjective | Note that any positive integer can be written (uniquely) as $2^n k$ where $n\geq 0$ and $k$ is odd. For fun: the uniqueness shows that $f$ is also injective.
Also, I will say that a product of surjective functions need not be surjective. Indeed, consider $h_1,h_2:\mathbb R\to\mathbb R$ with $h_1(x):=h_2(x):=x$, which are clearly surjective, but $h_1h_2(x)=x^2$ is not surjective. |
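A brute-force check of both surjectivity and injectivity on an initial segment, assuming $\mathbb N$ starts at $1$:

```python
# f(x, y) = 2**(x-1) * (2y - 1) should hit every positive integer up to
# the bound exactly once.
bound = 10_000
seen = {}
for x in range(1, 15):                      # 2**14 > bound
    for y in range(1, bound // 2 + 2):
        v = 2**(x - 1) * (2*y - 1)
        if v <= bound:
            assert v not in seen            # injectivity
            seen[v] = (x, y)
assert set(seen) == set(range(1, bound + 1))   # surjectivity
print("f is a bijection onto {1, ..., %d}" % bound)
```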
Function field of genus zero with a prime of degree 2 | I don't know if it's still relevant, but I think I have a solution. I hope I did not make any mistake, and that it will be of any help to you.
Let's assume towards contradiction that $F(x)=F(x,y)=F(y)$ (by symmetry, if one of the equalities fails, we finish by your argument). By the properties of the $p$-valuation $v_p$, we know that $\{1,x,y,x^2,y^2,xy\}\subset L(2p)$. By Riemann-Roch the dimension of $L(2p)$ is 5. So we have a linear dependency,
$$a_0+a_1x+a_2y+a_3x^2+a_4y^2+a_5xy=0$$
The ring $F[T,U]$ is a UFD. So if we denote $f(T,U)=a_0+a_1T+a_2U+a_3T^2 +a_4U^2+a_5TU$, and if $f(T,U)$ is reducible, it must factor into linear factors. But then by the previous equation we get a non-trivial linear dependency on $1,x,y$ (over $F$), which is impossible. So $f$ is irreducible.
Now look at the polynomial $a_0+a_1x+a_3x^2+(a_2+a_5x)T+a_4T^2\in F(x)[T]$. If $a_4\neq 0$, it is irreducible, since it is a primitive, irreducible polynomial over $F[x][T]$. If $a_4=0$, we may assume WLOG that $a_2+a_5x$ does not divide $a_0+a_1x+a_3x^2$. Otherwise, we could divide the first equation to get an equation of the form $b_0+b_1x+y=0$, which is impossible by the linear independence of $1,x,y$. Hence the polynomial above is still primitive and so is the minimal polynomial of $y$ over $F(x)$.
We split into two cases. If $a_3=0$, we may write:
$$a_0+a_1x+a_2y+a_5xy=0$$
As $a_5\neq 0$ (by the linear independence of $1,x,y$), this gives us
$$y=\frac{-a_1x-a_0}{a_2+a_5x}\Rightarrow v_p(y)=v_p(-a_1x-a_0)-v_p(a_2+a_5x)$$
As the $p$-valuations of $1$ and $x$ are different, we know that $$v_p(\alpha+\beta x)=\min(v_p(\alpha),v_p(\beta x))=\min(0,-1)=-1$$ if $\alpha\neq 0$. So we get $v_p(y)\geq -1+1=0$ - a contradiction.
So we may assume $a_3\neq0$. In this case, by the symmetry between $x$ and $y$ - we finish.
[Edit: I now noticed that I am using a different notation than you, taking $F$ to be the ground field.] |
An example of uniform convergence on compact sets but not uniform convergence? | $x^n$ on $[0,1)$: the compact subsets are contained in $[0,r]$ for some $r<1$.
Other examples are power series with infinite radius of convergence. |
Why does $ \mathrm{Tor}_0^R(M,N)\cong M\otimes_R N $ | Start with a projective resolution
$$
\cdots \to P_2 \to P_1 \to M \to 0.
$$
Tensor with $N$ and remember that $\_ \otimes_R N$ is right exact. This gives the following exact sequence
$$
P_2 \otimes_R N \xrightarrow{\varphi_2} P_1 \otimes_R N \xrightarrow{\varphi_1} M \otimes_R N \to 0
$$
By exactness, $\operatorname{im} \varphi_2 = \ker \varphi_1$. Thus, $\operatorname{Tor}_0^R(M, N) = (P_1 \otimes_R N) / \ker \varphi_1$, and this isomorphic to $M \otimes_R N$ by the first isomorphism theorem. |
Formal definition of homogenous coordinates | The homogeneous coordinates of a single point are the whole equivalence class, i.e. the set of all multiples, excluding the null vector. For finite points this means
$$\{(kx,ky,k)\mid k\in\mathbb R\setminus\{0\}\}$$
but you can also get homogeneous coordinates for points at infinity, so more generally you'd have
$$\{(kx,ky,kz)\mid k\in\mathbb R\setminus\{0\}\}\qquad
\text{with}\quad (x,y,z)\neq(0,0,0)$$
i.e. all the multiples of any three-element vector, again excluding the null vector. For $z=0,(x,y)\neq(0,0)$ you'd get what is typically referred to as a point at infinity.
A single vector $(x,y,z)$ from one of these equivalence classes is called a representative or representant. Often computations can be performed at the level of these representatives, so that sometimes the distinction between representative vectors and the equivalence classes they describe is not explicit in the notation, but merely implicit from the context.
The space of all homogeneous coordinates is often denoted as
$$\frac{\mathbb R^3\setminus\{(0,0,0)\}}{\mathbb R\setminus\{0\}}$$
This quotient construction means you take the set of all three-element vectors, excluding the null vector, and then form equivalence classes using scalar multiples, again excluding zero.
(Writing this, I realized that there is some problem here with singular and plural: a single coordinate as in one element from the vector doesn't make any sense for homogeneous coordinates, and the plural coordinates makes it a bit harder to distinguish a single equivalence class from multiple such classes.) |
How to prove $e^{A \oplus B} = e^A \otimes e^B$ where $A$ and $B$ are matrices? (Kronecker operations) | What is to be proved is the following: $$ e^{A \otimes I_b +I_a \otimes B} = e^A \otimes e^B~$$ where $I_a,A \in M_n$ , $ I_b, B \in M_m$
This is true because $$ A \otimes I_b~~~~\text{and}~~~~ I_a \otimes B$$ commute, which can be shown by using the so-called mixed-product property of the Kronecker product, i.e. $$ (A \otimes B)\cdot (C \otimes D) = (A\cdot C) \otimes (B\cdot D)~$$ Here, $\cdot$ represents the ordinary matrix product.
One can also show that for an arbitrary matrix function $f$, $$f(A\otimes I_b) = f(A)\otimes I_b~~~~\text{and}~~~ f(I_b \otimes A) = I_b \otimes f(A)~.$$ Together with the commutative property mentioned above, you can prove your result. |
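A numeric spot check of the identity, assuming numpy and scipy are available:

```python
# Verify e^{A (+) B} = e^A (x) e^B for the Kronecker sum
# A (+) B = A (x) I_m + I_n (x) B, on random matrices.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

kron_sum = np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)
lhs = expm(kron_sum)
rhs = np.kron(expm(A), expm(B))
print(np.allclose(lhs, rhs))   # True
```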
An elegant proof for a claim about orthogonal and positive-definite matrices | The following proof does diagonalize $P$, but not in matrix form.
Let $\{v_1,\ldots,v_n\}$ be an orthonormal eigenbasis of $P$ and $Pv_i=\lambda_iv_i$ for each $i$. Then
$$
\langle P,I\rangle
= \langle P,O\rangle = \sum_i\langle Pv_i,Ov_i\rangle
\le \sum_i\|Pv_i\|\|Ov_i\|
= \sum_i\lambda_i
= \operatorname{tr}(P)
= \langle P,I\rangle.
$$
Therefore $\langle Pv_i,Ov_i\rangle=\|Pv_i\|\|Ov_i\|$ for each $i$. Since $Pv_i=\lambda_i v_i$ and $Ov_i$ is a unit vector, we have $\langle v_i,Ov_i\rangle=1$ and in turn $Ov_i=v_i$ whenever $\lambda_i>0$. Hence $OP$ and $P$ agree on $\{v_1,\ldots,v_n\}$, meaning that $OP=P$. |
Is there a nice way to find this integral $\int_0^1\frac{ \arcsin x}{x} \mathrm{d}x$? | $$I = \int_0^1 \frac{\arcsin x}{x}\,dx = \int_{0}^{\pi/2}\frac{x\cos x}{\sin x}\,dx = x\log\sin x \bigg|_0^{\pi/2}-\int_0^{\pi/2} \log\sin x\,dx$$
This last integral is well known to equal $-\frac{\pi}{2}\log 2$. Thus
$$I = \frac{\pi}{2}\log 2$$ |
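A numeric confirmation with scipy (assumed available); both values should agree to quadrature accuracy:

```python
# Compare the integral against (pi/2) * log(2) ~ 1.08879.
from math import asin, pi, log
from scipy.integrate import quad

I, _ = quad(lambda x: asin(x)/x, 0, 1)   # integrand extends continuously to x = 0
print(I, pi/2 * log(2))
```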
SVD: Calculate singular values of sub-matrix from SVD of full matrix? | In general, the answer is no. When you select $k$ rows of $A$ arbitrarily, you are effectively multiplying $A$ by a $k \times n$ matrix $P$ whose rows are rows of the $n \times n$ identity matrix. Thus, $A_{[n]}=PA = PUSV^T$ |
Rank of a Matrix Sum | The rank does not behave well under sums.
Example: $0=A+(-A)$ for any $A$.
On the other hand, the rank is subadditive:
$\operatorname{rank}(A+B)\leq\operatorname{rank}(A)+\operatorname{rank}(B)$
Proof. I denote by "span of a set" the vector space generated by that set.
The rank of a matrix is the dimension of the span of the set of its columns. The span of the columns of $A+B$ is contained in the span of {columns of $A$ and columns of $B$}.
Edit. From a comment:
Let $C_A$ be the span of the columns of $A$ and $C_B$ the span of the columns of $B$. Let $c=\dim(C_A\cap C_B)$. The span of the columns of $A+B$ is contained in the span of $C_A\cup C_B$.
Then $$\operatorname{rank}(A+B)\leq \dim(C_A)+\dim(C_B)-\dim(C_A\cap C_B)=\operatorname{rank}(A)+\operatorname{rank}(B)-c.$$
Now, let $R_A$ be the span of the rows of $A$ and $R_B$ the span of the rows of $B$. Let $d=\dim(R_A\cap R_B)$.
Since the rank by columns equals the rank by row we have
$$\operatorname{rank}(A+B)\leq \dim(R_A)+\dim(R_B)-\dim(R_A\cap R_B)=\operatorname{rank}(A)+\operatorname{rank}(B)-d.$$
In conclusion
$$\operatorname{rank}(A+B)\leq \operatorname{rank}(A)+\operatorname{rank}(B)-\max\{c,d\}.$$ |
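A numeric illustration of the refined bound, assuming numpy; the intersection dimensions $c$ and $d$ are computed from the rank identity $\dim(C_A\cap C_B)=\operatorname{rank}(A)+\operatorname{rank}(B)-\operatorname{rank}([A\ B])$:

```python
# rank(A + B) <= rank(A) + rank(B) - max(c, d) on random low-rank matrices.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 8))   # rank 2
B = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 8))   # rank 3

rank = np.linalg.matrix_rank
# C_A + C_B is the column span of [A B]; R_A + R_B is the row span of [A; B].
c = rank(A) + rank(B) - rank(np.hstack([A, B]))
d = rank(A) + rank(B) - rank(np.vstack([A, B]))
assert rank(A + B) <= rank(A) + rank(B) - max(c, d)
print(rank(A + B), rank(A) + rank(B) - max(c, d))
```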
Covariance of increments of fractional Gaussian noise | Hint:
$\mathbb{E}\left[\left(B_{t}^{H}-B_{s}^{H}\right)\left(B_{u}^{H}-B_{v}^{H}\right)\right]=\frac{1}{2} \mathbb{E}\left[\left(B_{t}^{H}-B_{v}^{H}\right)^{2}+\left(B_{s}^{H}-B_{u}^{H}\right)^{2}-\left(B_{t}^{H}-B_{u}^{H}\right)^{2}-\left(B_{s}^{H}-B_{v}^{H}\right)^{2}\right]=\frac{1}{2}\left(|t-v|^{2 H}+|s-u|^{2 H}-|t-u|^{2 H}-|s-v|^{2 H}\right)$ |
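The hint is a pure polarization identity, so it can be sanity-checked directly from the standard fBm covariance $C(t,s)=\frac12\left(t^{2H}+s^{2H}-|t-s|^{2H}\right)$ (assumed here):

```python
# Expand E[(B_t - B_s)(B_u - B_v)] bilinearly via the fBm covariance and
# compare with the closed form from the hint.
H = 0.7
C = lambda t, s: 0.5 * (t**(2*H) + s**(2*H) - abs(t - s)**(2*H))

t, s, u, v = 2.0, 1.3, 0.9, 0.4
lhs = C(t, u) - C(t, v) - C(s, u) + C(s, v)
rhs = 0.5 * (abs(t - v)**(2*H) + abs(s - u)**(2*H)
             - abs(t - u)**(2*H) - abs(s - v)**(2*H))
print(abs(lhs - rhs) < 1e-12)   # True
```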
Separation of a convex set and a vector by a hyperplane | EDIT: Here's a picture of the situation in the "scholarly paper".
The convex set $C$ is in red, and the hyperplane $H$ is blue. The vector $v$ is $(1,0)$, so $H$ is a vertical line passing through $x$. The inequality
$v \cdot (y - x) \le 0$ for $y \in C$ says that $C$ is to the left of this hyperplane (possibly with some points on the hyperplane). $x+v$ is to the right of the hyperplane. |
Vector geometry involving parallel vectors | Two vectors $\mathbf u$ and $\mathbf v$ are parallel if there is a scalar $c$, so $\mathbf u =c\mathbf v$. So if $\vec{MX}$ is parallel to $\vec{OB}$, then there should be $c$:
$$
\left(k-\frac32\right)\mathbf a+\left(\frac12-k\right)\mathbf b=c\mathbf b
$$
But since $\mathbf a$ isn't parallel to $\mathbf b$, this can't hold unless $k-3/2=0$. |
Is $^nC_r$ defined when $r>n$? | By convention, $_{n}C_{r}$ is defined to be zero if $r>n$ or if $r$ is negative. This is so that the rule for generating Pascal's triangle would still hold with this extended definition. |
Rolling 6 dice and 3 on the same side | I apologize for the error in my first answer, which I have since deleted—I was intent on proving one thing right and another wrong, and failed to notice that the truth was contrary to my expectation.
Your web link, with this justification:
Three of a kind can be formed either as two three of a kinds or as three of a kind and three other dice from one of the other five values that aren't all three the same value.
is correct, as is the value 14700.
What is wrong with doing it in the way given by the other question you linked? Suppose we choose a number (6 options) and a group of three for that number ($6 \choose 3$ options) and then three additional values distinct from it ($5^3$ options). Then, we are double-counting combinations such as this: (1, 1, 1, 2, 2, 2)! Notice that we count this once, by choosing the roll $1$, the combination $\{1, 2, 3\}$, and the other numbers $(2, 2, 2)$. Then we count it again, by choosing the roll $2$, the combination $\{4, 5, 6\}$, and the other numbers $(1, 1, 1)$.
So the method that works for exactly four of a kind does not work for three of a kind, because it double-counts certain rolls.
To avoid this double-counting we shall consider two cases:
Case 1: There are two three-of-a-kinds combined in a single roll.
In this case, there are ${6 \choose 2}{6 \choose 3}$ options, as correctly mentioned in your post. The $6 \choose 2$ represents the choice of the two numbers that are rolled, each three times. The $6 \choose 3$ represents the number of ways to distribute those two numbers.
Case 2: There is only one three-of-a-kind in a single roll.
We solve this case similarly to the linked answer. First we choose a number to be repeated thrice ($6$ options), then a group of three rolls for that number to appear ($6 \choose 3$ options), then finally the values for the other three. We must however be careful not to include cases where the other three values are identical. Supposing this restriction did not exist, we would have $5^3$ choices, but because it does, we must remove the $5$ cases where all three other values are identical. Hence, there are $5^3 - 5$ options for the other cases.
Multiplication of our independent terms gives $6 {6 \choose 3} (5^3 - 5)$.
Total
Combining our two cases gives your equation:
$${6 \choose 2} {6 \choose 3} + 6 {6 \choose 3} (5^3 - 5) = 14700$$ |
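The count is small enough to confirm by exhaustive enumeration of all $6^6=46656$ ordered rolls (counting those in which some value appears exactly three times):

```python
# Brute-force confirmation of the 14700 count.
from itertools import product
from collections import Counter

count = sum(
    1
    for roll in product(range(1, 7), repeat=6)
    if 3 in Counter(roll).values()
)
print(count)   # 14700
```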
Help with particular solution to solving $z^4 - 2z^3 + 9z^2 - 14z + 14 = 0$. | Right as far as it goes. Just that you made a mistake: The remainder is $(2 - 2 y^2) z + (y^4 - 7 y^2 + 6)$.
Now, for the proposed factor really to be a factor, this remainder must be the zero polynomial, that is:
$$
\begin{align*}
2 - 2 y^2 &= 0 \\
y^4 - 7 y^2 + 6 &= 0
\end{align*}
$$
From the first one you get $y = 1$, and that checks with the second one, so your roots are $1 \pm i$, and the factor is $z^2 - 2 z + 2$. The other one is $z^2 + 7$, with respective roots $\pm i \sqrt{7}$. |
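A numeric check of the resulting factorization, assuming numpy:

```python
# Roots of z^4 - 2z^3 + 9z^2 - 14z + 14: expected 1 +/- i and +/- i*sqrt(7).
import numpy as np

roots = np.roots([1, -2, 9, -14, 14])
expected = [1+1j, 1-1j, 1j*np.sqrt(7), -1j*np.sqrt(7)]
assert all(min(abs(r - e) for r in roots) < 1e-9 for e in expected)
print(np.round(roots, 6))
```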
Asymptotic behavior of $\sum_{k=1}^{n}\left(1-p^{k}\right)^{n-k}$ | Note that $f(k,n) = (1-p^k)^{n-k}$ is an increasing function of $k$ for fixed $n$.
For $k = \frac{\log n}{\log(1/p)} + t$ Maple says we have, as $n \to \infty$,
$f(k,n) \approx e^{-p^t} + O(1/n)$. This goes exponentially to $1$ as $t \to +\infty$ and
very rapidly to $0$ as $t \to -\infty$, so I suspect the sum will be $n - \frac{\log n}{\log(1/p)} + O(1)$ as $n \to \infty$ (i.e. the error made by approximating $f(k,n)$ by $0$ for $k < \frac{\log n}{\log(1/p)}$ and $1$ for $k > \frac{\log n}{\log(1/p)}$ will be bounded as $n \to \infty$). |
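A quick numeric experiment supporting the suspicion (the printed differences should remain bounded as $n$ grows; $p=1/2$ is an arbitrary choice):

```python
# Compare the sum against n - log(n)/log(1/p) for increasing n.
from math import log

p = 0.5
for n in (100, 1000, 10_000, 100_000):
    s = sum((1 - p**k)**(n - k) for k in range(1, n + 1))
    print(n, s - (n - log(n)/log(1/p)))
```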
Proving a Fibonacci identity: $F_{2n} = F_n (F_{n+1} + F_{n-1})$ | This follows from the matrix formulation, which is well worth knowing and easily proved:
$$
\begin{pmatrix}1&1\\1&0\end{pmatrix}^n=
\begin{pmatrix}F_{n+1}&F_n\\F_n&F_{n-1}\end{pmatrix}
$$
Just compare
$$
\begin{pmatrix}1&1\\1&0\end{pmatrix}^{2n}=
\begin{pmatrix}*&F_{2n}\\*&*\end{pmatrix}
$$
with
$$
\begin{pmatrix}F_{n+1}&F_n\\F_n&F_{n-1}\end{pmatrix}^2=
\begin{pmatrix}*&\cdots\\*&*\end{pmatrix}
$$ |
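A direct check of the identity for small $n$:

```python
# Verify F_{2n} = F_n (F_{n+1} + F_{n-1}) by iteration.
def fib(n):
    a, b = 0, 1          # F_0, F_1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 30):
    assert fib(2*n) == fib(n) * (fib(n+1) + fib(n-1))
print("identity holds for n = 1..29")
```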
Equivalent ways to study perturbed abstract Cauchy problem | Unfortunately this is not so simple/immediate and does depend on the type of equation and perturbation.
I suggest that before anything else you have a look at the Engel-Nagel book, more precisely at Chapter 3 (such as their Theorem 3.14). They also include discussions of some specific types of equations, although I don't recall elliptic ones among them. |
distortion of Riemann Integrable function at finite number of points makes it Riemann Integrable again | I think your proof is fine.
Alternately, note that you could make things very simple:
Prove that the functions $\{1_{\{c\}} : c \in [a,b]\}$, where $1_{\{c\}}$ is the indicator function of $\{c\}$, are Riemann integrable and have Riemann integral zero. This is much easier to do, because you can choose your partitions and compute the upper and lower sums explicitly.
From here, obviously by scaling, functions of the form $L 1_{\{c\}}$ are integrable for any number $L$, and their integral is zero.
Suppose $\bar f$ differs from $f$ at finitely many points. Then $\bar f - f$ is a finite sum $\sum_{i=1}^N L_i1_{\{x_i\}}$, which is Riemann integrable with integral zero. It follows that $\bar f$ is integrable with the same integral as $f$. |
Find $\displaystyle\int_0^\pi f_{(x)}dx$ when $f_{(x)}=\displaystyle\int_0^x\frac{\sin t}{t}dt$ using $f_{(\pi)}=\beta$ | Integration by parts tells you that
$$
\int_{0}^{\pi} f(x)\,dx=
\Bigl[xf(x)\Bigr]_0^{\pi}
-\int_0^\pi xf'(x)\,dx
$$
Now compute $f'(x)$ using the fundamental theorem of calculus. |
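Carrying the hint to its conclusion: $xf'(x)=\sin x$, so $\int_0^\pi f(x)\,dx=\pi\beta-\int_0^\pi\sin x\,dx=\pi\beta-2$. A numeric confirmation using scipy's sine integral (assumed available; `sici(x)` returns the pair $(\mathrm{Si}(x),\mathrm{Ci}(x))$):

```python
# Here f(x) = Si(x), so int_0^pi f = pi*beta - 2 with beta = Si(pi).
from math import pi
from scipy.integrate import quad
from scipy.special import sici

Si = lambda x: sici(x)[0]
lhs, _ = quad(Si, 0, pi)
beta = Si(pi)
print(lhs, pi*beta - 2)   # both ~3.818
```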
Find the minimum value of $4(a^3 + b^3 + c^3) + 15abc$ with $a + b + c = 2$. | I do not know if you had to use AM-GM but the problem is quite simple using pure algebra.
Considering
$$ P = 4(a^3 + b^3 + c^3) + 15abc \qquad \text{with} \qquad a+b+c=2$$ eliminate $c$ from the constraint to get
$$P=3 a^2 (8-9 b)-3 a (b-2) (9 b-8)+8 (3 (b-2) b+4)$$ Now
$$\frac{\partial P}{\partial a}=6 a (8-9 b)-3 (b-2) (9 b-8)=0 \implies a=\frac{2-b} 2$$
Reusing the constraint, this gives $c=a$ and then $a=b=c=\frac 23$.
Plug in $P$ and get the result. |
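A crude grid search corroborating this, assuming (as the problem presumably intends) $a,b,c\ge 0$; without that restriction $P$ is unbounded below. The minimum value found is $8$, attained at $a=b=c=\frac23$ and also at permutations of $(1,1,0)$:

```python
# Grid search over the simplex a, b, c >= 0 with a + b + c = 2.
best = min(
    (4*(a**3 + b**3 + (2 - a - b)**3) + 15*a*b*(2 - a - b), a, b)
    for a in (i/300 for i in range(601))
    for b in (j/300 for j in range(601))
    if a + b <= 2
)
print(best)   # minimum value ~ 8, e.g. at a = b = c = 2/3 or (a, b, c) = (1, 1, 0)
```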
Proof of Eckart-Young-Mirsky theorem | One needs to show that if $\mathrm{rank}(B)=k$, then $\|A-B\|_2\geq\|A-A_k\|_2$. This can be done as follows.
Since $\mathrm{rank}(B)=k$, $\dim\mathcal{N}(B)=n-k$ and from $$\dim\mathcal{N}(B)+\dim\mathcal{R}(V_{k+1})=n-k+k+1=n+1$$ (where $V_{k+1}=[v_1,\ldots,v_{k+1}]$ is the matrix of right singular vectors associated with the first $k+1$ singular values in the descending order), we have that there exists an
$$x\in\mathcal{N}(B)\cap\mathcal{R}(V_{k+1}), \quad \|x\|_2=1.$$
Hence
$$
\|A-B\|_2^2\geq\|(A-B)x\|_2^2=\|Ax\|_2^2=\sum_{i=1}^{k+1}\sigma_i^2|v_i^*x|^2\geq\sigma_{k+1}^2\sum_{i=1}^{k+1}|v_i^*x|^2=\sigma_{k+1}^2.
$$
From $\|A-A_k\|_2=\sigma_{k+1}$, one hence gets $\|A-B\|_2\geq\|A-A_k\|_2$. No contradiction required; Quite Easily Done.
EDIT An alternative proof, which works for both the spectral and Frobenius norms, is based on Weyl's theorem for eigenvalues (or more precisely, its analogue for singular values): if $X$ and $Y$ are $m\times n$ ($m\geq n$) and (as above) the singular values are ordered in decreasing order, we have
$$\tag{1}
\sigma_{i+j-1}(X+Y)\leq\sigma_i(X)+\sigma_j(Y) \quad\text{for $1\leq i,j\leq n, \; i+j-1\leq n$}
$$
(this follows from the variational characterization of eigen/singular values; see, e.g., Theorem 3.3.16 here).
If $B$ has rank $k$, $\sigma_{k+1}(B)=0$.
Setting $j=k+1$, $Y:=B$, and $X:=A-B$, in (1) gives
$$
\sigma_{i+k}(A)\leq\sigma_i(A-B) \quad \text{for $1\leq i\leq n-k$}.
$$
For the spectral norm, it is sufficient to take $i=1$. For the Frobenius norm, this gives
$$
\|A-B\|_F^2\geq\sum_{i=1}^{n-k}\sigma_i^2(A-B)\geq\sum_{i=k+1}^n\sigma_i^2(A)
$$
with the equality attained, again, by $B=A_k$. |
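A numeric illustration, assuming numpy: random rank-$k$ matrices never beat $\sigma_{k+1}(A)$ in spectral norm:

```python
# ||A - B||_2 >= sigma_{k+1}(A) for every rank-k matrix B.
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 8, 6, 2
A = rng.standard_normal((m, n))
s = np.linalg.svd(A, compute_uv=False)
best = s[k]                                   # sigma_{k+1}, the optimal error

for _ in range(1000):
    B = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank k
    assert np.linalg.norm(A - B, 2) >= best - 1e-9
print("no rank-%d B beat sigma_{k+1} = %.4f" % (k, best))
```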
A quick question about a logical negation | No. You've only negated one part of the statement. Negating each part:
There exists a set $A$ so that for all well-ordered sets $V$ there exists a surjection $\pi:A\to V$.
I think I've negated what you're actually going for here. |
Is it true that $f(X)$ is countable implies $f$ is measurable? | Take a non measurable subset $A$ of $X$ and consider its indicator function $\chi_A$. |
Proving integrability given local Lp integrability | Extensive revision: I have decided to rewrite my answer entirely, since the question is interesting and deserves more than a few sketchy (and sometimes buggy) hints. Let's start again with a little notation: define $\mu(E)$ as the Lebesgue measure of the set $E\subset\mathbb{R}^2$.
$f\in L^{3/2}_\mathrm{loc}(\mathbb{R}^2)$ implies $f\in L^1_\mathrm{loc}(\mathbb{R}^2)$: as a matter of fact, by Hölder's inequality
$$
\Vert f\Vert_{L^1(K)}\leq \mu(K)^{1/3}\Vert f\Vert_{L^{3/2}(K)} \quad \text{for all compact sets }K\subset\mathbb{R}^2\tag{1}\label{1}
$$
See the Wikipedia article on Locally integrable functions for a comprehensive explanation.
After having obtained inequality \eqref{1}, the next step is to find a countable covering of $\mathbb{R}^2$ made of pairwise disjoint sets $\{S_n\}_{n\in\mathbb{N}}$ built by using the family $\{E_r\}_{r\geq1}$. If we succeed in this search, this will enable us to use \eqref{1} on each set of the covering and find a condition for the finiteness of $\Vert f\Vert_{L^1(\mathbb{R}^2)}$ since formally
$$
\Vert f\Vert_{L^1(\mathbb{R}^2)}=\int_{\mathbb{R}^2}|f|\mathrm{d}\mu=\sum_{n=1}^\infty \int_{S_n}|f|\mathrm{d}\mu=\sum_{n=1}^\infty\Vert f\Vert_{L^1(S_n)}\tag{2}\label{2}
$$
Claim: the sought-for covering $\{S_n\}_{n\in\mathbb{N}}$ of $\mathbb{R}^2$ can be built from the sets
$$
S_n\triangleq E_{n+1}\setminus E_n=\{(x_1,x_2)\in\mathbb{R}^2:2n\le x_1^4+x_2^2\le 2(n+1)\}\quad \forall n\in\mathbb{N}
$$
or the sets
$$
S_n\triangleq E_n\setminus E_{n+1}=\{(x_1,x_2)\in\mathbb{R}^2:n\le x_1^4+x_2^2\le n+1\}\quad \forall n\in\mathbb{N}
$$
(It is up to you to see which would be the better choice.)
After choosing the proper covering $\{S_n\}_{n\in\mathbb{N}}$, the next step is to calculate the area $\mu(S_n)$ for all $n\in\mathbb{N}$: in this way, by using \eqref{1} and the estimate
$$
\Vert f\Vert_{L^{3/2}(E_r)}=\left|\int_{E_r}|f|^{3/2}\mathrm{d}\mu\right|^{2/3}\le r^{-\frac{2a}{3}}
$$
we can obtain an estimate
$$
\Vert f\Vert_{L^1(S_n)}\leq \mu(S_n)^{1/3}\Vert f\Vert_{L^{3/2}(S_n)}\le n^{\,p(a)} \quad \forall n\in\mathbb{N}\tag{3}\label{3}
$$
where $p(a)$ is a function which surely depends on $a$ and arises from the estimation of $\Vert f\Vert_{L^{3/2}(S_n)}$ and $\mu(S_n)$ as a function of $n$.
Hint on the calculation of $\mu(E_r)$. The estimation of $\mu(E_r)$ is a key step in the quantitative evaluation of \eqref{3}, since estimates of $\mu(S_n)$ and of $\Vert f\Vert_{L^{3/2}(S_n)}$ depend on it: in order to do this, the first thing to note is that $E_r$ is homeomorphic to a ring domain through the mapping $(\rho,\theta)\mapsto(x_1,x_2)$ defined as
$$
x_1=
\begin{cases}
\rho^\frac{1}{2}|\sin\theta|^\frac{1}{2} & 0\leq\theta<\pi\\
-\rho^\frac{1}{2}|\sin\theta|^\frac{1}{2} & \pi\leq\theta<2\pi
\end{cases}\quad
x_2=\rho\cos\theta
$$
where $\rho\in[r,2r]$ and $\theta\in[0,2\pi]$. Then, by calculating the modulus of the Jacobian determinant of the mapping, we have that
$$
\mu(E_r)=\int_{E_r}\mathrm{d}x_1\mathrm{d}x_2=\int\limits_r^{2r}\mathrm{d}\rho\int\limits_0^{2\pi}|J(\rho,\theta)|\mathrm{d}\theta
$$
The integral in the last term is not easy to calculate: however, it is easily estimated as a function of $r$ by the Fubini-Tonelli theorem, and this leads to the sought-for estimate \eqref{3}.
By using estimate \eqref{3} we have that
$$
\Vert f\Vert_{L^1(\mathbb{R}^2)}=\sum_{n=1}^\infty\Vert f\Vert_{L^1(S_n)}\le\sum_{n=1}^\infty\mu(S_n)^{1/3}\Vert f\Vert_{L^{3/2}(S_n)}\le \sum_{n=1}^\infty n^{\,p(a)}\tag{4}\label{4}
$$
Therefore, from this inequality we have that
$$
p(a)<-1\implies f\in L^1(\mathbb{R}^2)\tag{5}\label{5}
$$
Now, after those hints, we are able to give a direct answer to the OP's questions.
Answer to question 1. The exponents $3/2$ and $3/8$ are related by the condition $p(a)<-1$ of \eqref{5} for the convergence of the series $\sum_n \Vert f\Vert_{L^1(S_n)}$.
Answer to question 2. The funky domains $E_r$ should be used to construct a partition of $\mathbb{R}^2$ as a family of disjoint sets, such as one of the families $\{S_n\}_{n\in\mathbb{N}}$.
Answer to question 3. No. Intuitively, since the area of the "funky" region $E_r$ grows with $r>0$, its measure cannot vanish. More precisely, following the hint for the calculation of $\mu(E_r)$ above, we have that
$$
|J(\rho,\theta)|=\frac{1}{2}\rho^\frac{1}{2}\left||\sin\theta|^\frac{1}{2}\sin\theta+|\sin\theta|^{-\frac{1}{2}}\cos^2\theta\right|.\\
$$
This implies that
$$
\begin{align}
\mu(E_r)=&\int\limits_r^{2r}\mathrm{d}\rho\int\limits_0^{2\pi}|J(\rho,\theta)|\mathrm{d}\theta\\
=&\frac{1}{2}\int\limits_r^{2r}\rho^\frac{1}{2} \mathrm{d}\rho\int\limits_0^{2\pi}\left||\sin\theta|^\frac{1}{2}\sin\theta+|\sin\theta|^{-\frac{1}{2}}\cos^2\theta\right|\mathrm{d}\theta\\
=&\frac{C_\theta}{2}\int\limits_r^{2r}\rho^\frac{1}{2} \mathrm{d}\rho
= \frac{C_\theta}{3}\left[\rho^\frac{3}{2}\right]_r^{2r}=\frac{C_\theta}{3}\left[(2r)^\frac{3}{2}-r^\frac{3}{2}\right]\\
=&C_\theta\frac{2\sqrt{2}-1}{3}r^\frac{3}{2}
\end{align}
$$
where $C_\theta$ is the (constant) value of the trigonometric integral above, i.e.
$$
C_\theta=\int\limits_0^{2\pi}\left||\sin\theta|^\frac{1}{2}\sin\theta+|\sin\theta|^{-\frac{1}{2}}\cos^2\theta\right|\mathrm{d}\theta
$$
This implies $\mu(E_r)=O\left(r^\frac{3}{2}\right)$ as $r\to\infty$. |
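Since the answer hinges on $\mu(E_r)=O\left(r^{3/2}\right)$, here is a Monte Carlo cross-check. Note the parametrization above satisfies $x_1^4+x_2^2=\rho^2$, so $\rho\in[r,2r]$ corresponds to the region $\{r^2\le x_1^4+x_2^2\le 4r^2\}$; that is the definition of $E_r$ assumed below:

```python
# Monte Carlo estimate of mu(E_r), with (assumption)
# E_r = {(x1, x2) : r^2 <= x1^4 + x2^2 <= 4 r^2}.
import random

def area(r, samples=400_000):
    X, Y = (2*r)**0.5, 2*r                  # bounding box of E_r
    hits = sum(
        r**2 <= random.uniform(-X, X)**4 + random.uniform(-Y, Y)**2 <= 4*r**2
        for _ in range(samples)
    )
    return hits / samples * (2*X) * (2*Y)

a1, a16 = area(1.0), area(16.0)
print(a16 / a1)   # close to 16**1.5 = 64, matching mu(E_r) = O(r^{3/2})
```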
Form groups of people using combinatorics | You are overcounting.
Let the girls be $G_1,G_2,...,G_7$ and the boys be $B_1,B_2,B_3,B_4$. Then you are choosing $2$ girls in the beginning; let's say they are $G_1$ and $G_2$. Then from the remaining $9$, you are choosing $3$ people. Here, suppose you choose $G_3,B_1,B_2$. But this is the same as choosing $G_2,G_3$ initially and then choosing $G_1,B_1,B_2$, and also the same as choosing $G_1,G_3$ initially and then $G_2,B_1,B_2$. In this case, you counted this specific group $3$ times, for instance.
As an additional note, when you have $4$ girls in the group, you will be counting each of those groups $6$ times, and when you have $5$ girls in the group, it becomes $10$. So I cannot find a way to proceed from where you started. If the overcounting amount were the same for all the groups you form, we could have divided the result by that amount to get rid of the overcounting, but this is not the case here.
You can use case distinction in number of girls in order to solve this problem. As a small hint, since there are only $4$ boys, you have only one case where group contains less than $2$ girls. So you can find the number of complementary cases easily. |
The solutions of $x^2+ax+b=0\pmod n$ in $\mathbb Z_n$ | Hint: $(x-u)(x-v)\equiv 0 \pmod n$ can also hold for $x-u=k$ and $x-v=l$ if $kl=n$, not only for $x-u=0$ or $x-v=0$. |
How many flags, with 3 horizontal stripes, can be made if two stripes are one color and the third is a different color? | If there are only 2 colours to choose from, then there are only 6 ways. There are $\binom{3}{1}=3$ ways to choose which stripe will be the single one. You then have two choices for the colour of that single stripe, and there is then no choice for the colour of the other two stripes as they must be the other colour. This gives $\binom{3}{1}\times 2=6$ flags.
Another way of counting it is: There are two choices of colour for each stripe, for a total of $2^3=8$ possible flags. However, this includes the $2$ flags where all the stripes are the same colour. That leaves $2^3-2=6$ flags which use both colours (2 stripes of one colour, 1 of the other). |
Matrix Square Root Taylor Series | Let $A$ be a positive definite symmetric matrix (for positive semidefinite matrices there is a technical problem - see below). Then $A = UDU^*$ with a unitary matrix $U$ and a diagonal matrix $D$ which has the (positive) eigenvalues of $A$ on its diagonal. Now, let $f(x) = \sum_n a_n(x-x_0)^n$ be an expansion of the square root whose convergence interval contains all those eigenvalues $d_i$. Then
\begin{align*}
f(A)&= f(UDU^*) = \sum_n a_n(UDU^*-x_0)^n = \sum_n a_n(U(D-x_0)U^*)^n\\
&= U\left(\sum_n a_n(D-x_0)^n\right)U^* = U\left(\sum_n a_n\left(\operatorname{diag}_i(d_i-x_0)^n\right)\right)U^*\\
&= U\operatorname{diag}_i\left(\sum_n a_n(d_i-x_0)^n\right)U^* = U\operatorname{diag}_i\sqrt{d_i}U^* = A^{1/2}.
\end{align*}
If the series also converges in zero (which I don't know) the calculation goes through as above also for positive semidefinite matrices.
With the above approach you can also define a square root for diagonalizable matrices having their eigenvalues in the convergence disk. You can extend this definition even to all matrices having their eigenvalues in $D = \mathbb C\setminus (-\infty,0]$. For this, draw a closed path $\gamma$ in $D$ around all the eigenvalues of $A$ and define
$$
A^{1/2} := -\frac 1 {2\pi i}\int_\gamma \sqrt{\lambda}(A - \lambda I)^{-1}\,d\lambda.
$$
This is the Riesz-Dunford functional calculus. |
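A numeric check, assuming numpy/scipy: summing the Taylor series of $\sqrt x$ around $x_0=1$ termwise in a symmetric positive definite matrix whose spectrum lies in $(0,2)$ reproduces scipy's matrix square root:

```python
# sqrt(x) = sum_k binom(1/2, k) (x - 1)^k for |x - 1| < 1, applied to an
# SPD matrix with eigenvalues inside (0, 2).
import numpy as np
from scipy.linalg import sqrtm
from scipy.special import binom

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([0.3, 0.8, 1.2, 1.7]) @ Q.T      # SPD, spectrum in (0, 2)

S = np.zeros_like(A)
P = np.eye(4)                                    # (A - I)^k, starting at k = 0
for k in range(200):
    S += binom(0.5, k) * P
    P = P @ (A - np.eye(4))

print(np.allclose(S, sqrtm(A)))                  # True
```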
Fastest way to compare angles | Since $\cos\theta$ is monotonically decreasing for $\theta\in[0,\pi]$, and the dot product of two vectors ${\bf p}$ and ${\bf q}$ is ${\bf p}\cdot{\bf q}=pq\cos\theta_{pq}$, you can simply compare the appropriately normalized dot products. Specifically, you can look at the sign of
$$({\bf p}_1 - {\bf p}_0)\cdot\left[\frac{{\bf q} - {\bf p}_{0}}{||{\bf q} - {\bf p}_{0}||} - \frac{{\bf r} - {\bf p}_{0}}{||{\bf r} - {\bf p}_{0}||}\right].$$ |
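A minimal sketch of this comparison, assuming numpy (the function name and test points are made up for illustration); no trigonometric calls are needed:

```python
# Positive return value  <=>  the angle at p0 toward q is smaller than the
# angle toward r, both measured against the direction p1 - p0.
import numpy as np

def angle_cmp(p0, p1, q, r):
    d = p1 - p0
    uq = (q - p0) / np.linalg.norm(q - p0)
    ur = (r - p0) / np.linalg.norm(r - p0)
    return d @ (uq - ur)

p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
q, r = np.array([1.0, 1.0]), np.array([0.0, 1.0])   # 45 vs 90 degrees
print(angle_cmp(p0, p1, q, r))   # positive: q makes the smaller angle
```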
Show convergence of sequence | I assume that you are speaking abot real analysis.
For any $\epsilon>0$, there exists $n_0\in\Bbb N$ such that $1/2^{n_0-1}<\epsilon$.
For any natural numbers $q>p\geq n_0$ you have
$$|x_q-x_p|\leq\sum_{k=p}^{q-1}|x_{k+1}-x_k|\leq \sum_{k=p}^{q-1}\frac 1{2^k}\leq\frac1{2^{p-1}}<\epsilon$$
Therefore, the sequence is a Cauchy sequence and hence converges. |
Explicit Kähler forms and Kähler cone of one-point blowup of $\mathbb{CP}^2$ | It is quite standard in complex geometry to confuse cycles and their Poincaré duals. (Of course, intersection of cycles corresponds to cup products of their Poincaré duals.) The key one to work out is that the Kähler form $\omega$ on $\Bbb P^n$ (which we normalize to generate $H^2(\Bbb P^n,\Bbb Z)$) corresponds to a hyperplane $\Bbb P^{n-1}$, linearly embedded. This follows because the integral of $\omega$ over any line ($\Bbb P^1\subset\Bbb P^n$) is $1$.
The other crucial ingredient is that any divisor $D$ (integral linear combination of hypersurfaces) in a compact complex manifold $M$ corresponds to a line bundle $L$, and $c_1(L)\in H^2(M,\Bbb Z)$ is Poincaré dual to $D$. This can be worked out explicitly with differential forms (or currents); see pp. 141-143 of Griffiths-Harris.
When we blow up a point in a compact $n$-dimensional complex manifold $M$, obtaining $\tilde M$, we add a generator to $H^2(M,\Bbb Z)$ (thinking of the exceptional divisor $E\in H_{2n-2}(\tilde M,\Bbb Z)$). In general, the normal bundle $N(E,\tilde M)$ is the tautological line bundle on $E\cong \Bbb P^{n-1}$.
For the case of a surface $M$, then $E\cdot E = -1$, because the self-intersection of $E$ is given by $\displaystyle\int_E c_1(N(E))=-\int_{\Bbb P^1}\omega=-1$ (where $\omega$ is the Kähler form of $\Bbb P^1$).
Now, back to your explicit problem. Thinking of $\Sigma_1$ as $\tilde{\Bbb P^2}$, we can think of $H^2$ as being generated by (the Poincaré duals of) a generic line in $\Bbb P^2$ and the exceptional divisor $E$. There is a natural projection $\pi\colon \tilde{\Bbb P^2}\to\Bbb P^2$ and we can consider $\omega_1 = \pi^*\omega$, the pullback of the Kähler form on $\Bbb P^2$. Thinking of $E$ as a divisor in $\Sigma_1$, we get a line bundle and a corresponding Chern form $\phi\in H^2_{dR}(\Sigma_1)$, and it is natural to define $\omega_1-\phi$ as a Kähler form on $\Sigma_1$.
On the other hand, we can also think of $\Sigma_1$, as you suggested, as a $\Bbb P^1$-bundle over a generic line $L\subset\Bbb P^2$ (e.g., if we are blowing up $[1,0,0]$, the origin in $\Bbb C^2\subset\Bbb P^2$, we can take $L=\{z_0=0\}$ to be the "line at infinity"). Then we naturally get (see, e.g., Bott-Tu) two generators for the cohomology: the pullback of the Kähler form on $L$ and the Kähler form on a fiber $F$. The exceptional divisor $E\subset\tilde{\Bbb P^2}$ is often then interpreted as the "infinity section" of this $\Bbb P^1$-bundle and we can see that (thinking in homology or cohomology) $E = L - F$: Since $E\cdot F = 1$, $L\cdot L = L\cdot F = 1$, and $F\cdot F=0$, writing $E = aL+bF$ and solving, we get $a=1=-b$. (See pp. 514-520 of Griffiths-Harris for way more on this.) |
Does finite expectation imply bounded random variable? | If not $|X|<\infty$ a.s., then there is a set of positive measure where $|X|=\infty$. The integral
$$ \int_\Omega X\,\mathrm dP= \int_\Omega \max\{0,X\}\,\mathrm dP-\int_\Omega \max\{0,-X\}\,\mathrm dP$$
is defined only when at most one of the summands is infinite. If $X=+\infty$ on a set of positive measure, then $X=-\infty$ only on a zero-set, and then $\mathbb E(X)=+\infty$. Similarly, if $X=-\infty$ on a set of positive measure, then $\mathbb E(X)=-\infty$. Thus we have the following possibilities:
$|\mathbb E(X)|<\infty$ and $|X|<\infty$ a.s.
$\mathbb E(X)=+\infty$ and $X>-\infty$ a.s.
$\mathbb E(X)=-\infty$ and $X<\infty$ a.s.
$X$ is not integrable ($\mathbb E(X)$ does not exist)
(So specifically, as you wrote $\mathbb E(X)<\infty$ without absolute value, it is possible to have $|X|=\infty$ with positive probability) |
Apply Banach's fixed point theorem | Very easy!
$$|\frac 2 5\int_{0}^{1}(x^2+t^2)f(t)dt-\frac 2 5\int_{0}^{1}(x^2+t^2)g(t)dt|\leq|\frac 2 5\int_{0}^{1}(x^2+t^2)|f(t)-g(t)|dt| \\\leq \max|f(t)-g(t)|\frac 2 5\int_{0}^{1}(x^2+t^2)|dt|\leq\max|f(t)-g(t)|\frac 8 {15}$$ |
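The fixed-point equation itself isn't quoted here, but the contraction estimate can be checked numerically on a discretization: for the map $(Th)(x)=\frac25\int_0^1(x^2+t^2)h(t)\,dt$ the Lipschitz ratio in the sup norm stays below $\frac8{15}$ (random test functions won't attain the supremum, so this only illustrates the bound):

```python
# Estimate max|Tf - Tg| / max|f - g| for random f, g on a grid.
import numpy as np

n = 2000
t = (np.arange(n) + 0.5) / n          # midpoint-rule nodes on [0, 1]
xs = np.linspace(0, 1, 21)

def T(h):
    # (Th)(x) = (2/5) * integral_0^1 (x^2 + t^2) h(t) dt, midpoint rule
    return np.array([0.4 * np.mean((xi**2 + t**2) * h) for xi in xs])

rng = np.random.default_rng(4)
worst = 0.0
for _ in range(200):
    f, g = rng.standard_normal(n), rng.standard_normal(n)
    worst = max(worst, np.max(np.abs(T(f) - T(g))) / np.max(np.abs(f - g)))
print(worst, 8/15)   # observed ratio stays below 8/15 ~ 0.533
```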
Given $N$ slots and $S$ objects to fill those slots, how many ways are there to fill the slots such that no two objects are adjacent. | Method 1: Let's work with your bit string idea. Notice that each $1$ except the last must immediately be followed by a $0$. For your example of three objects in seven slots, we would then have to count arrangements of $10, 10, 1, 0, 0$ in which the solitary $1$ must follow both $10$s. Doing so would force us to do casework. We can avoid that by appending an extra $0$ to the string, so we have to arrange $10, 10, 10, 0, 0$. Notice that no matter how we arrange the five objects, the final digit will be a $0$. Thus, the number of strings of length $8$ ending with $0$ in which no two of the three $1$s are consecutive is equal to the number of bit strings of length $7$ in which no two of the three $1$s are consecutive since there is only one way to fill the final slot. Treating each $10$ as a single object gives us five positions to fill. Choosing which three of them will be filled with $10$s completely determines the string. For instance, if we fill the first three slots with $10$s, we obtain
$$10101000$$
which is equivalent to the string $1010100$, while if we fill the second, fourth, and fifth slots with $10$s, we obtain
$$01001010$$
which is equivalent to the string $0100101$. The number of such strings is $\binom{5}{3}$ since we must select which three of the five positions will be filled with $10$s.
More generally, if we have $k$ objects to place in $n$ slots, we add an extra $0$ so that we can form a bit string of length $n + 1$ consisting of $k$ $10$s and $n + 1 - 2k$ $0$s. Then no two of the $1$s will be consecutive. The number of such bit strings is
$$\binom{n + 1 - 2k + k}{k} = \binom{n - k + 1}{k}$$
since we must choose which $k$ of the $n - k + 1$ positions required for $k$ $10$s and $n + 1 - 2k$ $0$s will be filled with $10$s.
Method 2: Let's consider your example of three $1$s and four $0$s again. Place the four $0$s in a row. This creates five spaces, three between successive $0$s and two at the ends of the row.
$$\square 0 \square 0 \square 0 \square 0 \square$$
To separate the ones, we must choose three of these five spaces in which to place the ones. If we choose the first three spaces, we obtain
$$1010100$$
If we instead choose the second, fourth, and fifth spaces, we obtain
$$0100101$$
The number of such choices is $\binom{5}{3}$.
More generally, if we have $k$ objects to place in $n$ slots, we form a bit string with $k$ $1$s and $n - k$ $0$s. We place the $n - k$ $0$s in a row, which creates $n - k + 1$ spaces in which we can insert the $1$s, $n - k - 1$ spaces between successive zeros and two at the ends of the row. To separate the $1$s, we must choose $k$ of these $n - k + 1$ spaces in which to place a $1$, which yields
$$\binom{n - k + 1}{k}$$
which agrees with the answer we obtained above. |
there is no bounded linear functional on $ H$ | Hint: Try to find a sequence of functions $\{f_n\}$ in $C^1$, such that $f_n$ is bounded in $H$ (i.e., the function values should not be large), but $f_n'$ unbounded in $F$ (i.e., the derivative is large). |
$\ell^p$ as a direct summand of $L^p$ | Let $(A_n)_{n\in\mathbb{N}}$ be a sequence of pairwise disjoint measurable subsets of $\Omega$, each of positive measure. Then consider the bounded linear operator
$$
P:L_p(\Omega,\mu)\to L_p(\Omega,\mu):f\mapsto \sum_{n=1}^\infty\left(\mu(A_n)^{-1}\int_{A_n}f(\omega)d\mu(\omega)\right)\chi_{A_n}
$$
One can show that $\operatorname{Im}(P)\underset{{\mathbf{Ban}_1}}{\cong}\ell_p(\mathbb{N})$ and $P$ is a norm $1$ projector. Therefore $V=\operatorname{Ker}(P)$. |
inverse of $\arcsin (\frac{x}{x-1})$ | $$ \sin y = \dfrac{x}{x-1}$$
$$x\sin y-\sin y=x$$
$$x(\sin y-1)=\sin y$$
$$x=\frac{\sin y}{\sin y-1}$$
For the second
$$-2y + \frac{1}{2}= e^{-x}$$
$$0.5-2y=e^{-x}$$
take the $\log$
$$\log(0.5-2y)=-x$$
$$x=-\log(0.5-2y)$$
$$x=\log(\frac{1}{0.5-2y})$$ |
Function which is never its own ($n^{th}$) derivative? | $f(x)=\frac{1}{1+x^2}$ should do the trick. |
Prove that $10\mid A000793(n\ge16)$ | I will first prove the claim by @Gerry Myerson in the comments: For every $m$, there exists $n_0$ such that $n>n_0$ implies $m \mid g(n)$. At the end, I show that $10 \nmid g(n) \Rightarrow n<1550$.
First, a trivial lemma:
Lemma 1: If $a_1, a_2, \ldots , a_k \geq 2$ are positive integers, then $a_1a_2 \ldots a_k \geq a_1+a_2+ \ldots +a_k$.
Proof: We proceed by induction on $k$. This is clearly true when $k=1$. When $k=2$, since $(a_1-1)(a_2-1) \geq 1$, we easily get $a_1a_2 \geq a_1+a_2$. Suppose that it holds for $k=i$. Then $a_1a_2 \ldots a_ia_{i+1} \geq (a_1+a_2+ \ldots +a_{i-1})+a_ia_{i+1}$ by the induction hypothesis. By the base case where $k=2$, we have $a_ia_{i+1} \geq a_i+a_{i+1}$, so we are done by induction.
Now, suppose that we have $n=x_1+x_2+ \ldots +x_t$, $lcm(x_1, x_2, \ldots, x_t)=g(n)$. If $x_i$ is neither $1$ nor a prime power for some $i$, then we may write $x_i=a_1a_2 \ldots a_k, k \geq 2$, where each $a_j$ is a prime power. By lemma $1$, we may then replace $x_i$ by $k+1$ terms: $a_1, a_2, \ldots , a_k$, and $x_i-(a_1+a_2+ \ldots +a_k)$. (The last term is non-existent if equality holds) This will not decrease the lcm of the numbers, so we may safely assume that all $x_i$ are prime powers (or 1).
For a prime $p$, define $f_p(n)=v_p(g(n))$. Clearly if $f_p(n) \geq 1$, then $p^{f_p(n)}$ must be one of the terms in the partition of $n$ with lcm $g(n)$. (We have already assumed that all terms are $1$, or prime powers)
We want to show that for any positive integer $m$, there are finitely many $n$ s.t. $m \nmid g(n)$. It clearly suffices to show this for $m$ a prime power. ($m=1$ is trivial)
Let $m=p^a$. Suppose that $m \nmid g(n)$. Then $f_p(n) \leq a-1$.
Consider any prime $q \not =p$. If $q \mid g(n)$, then $q^{f_q(n)}$ is a term in the partition. Define $b_q=\lceil \frac{\log{q}}{\log{p}} \rceil$. Then $p^{b_q}>q$.
If $p^{a-1+b_q}+q^{f_q(n)-1} \leq q^{f_q(n)}$, we may replace $q^{f_q(n)}$ by the $3$ terms $p^{a-1+b_q}$, $q^{f_q(n)-1}$, and $q^{f_q(n)}-(p^{a-1+b_q}+q^{f_q(n)-1})$. (The third term is non-existent if equality holds.) The valuation $v_p(lcm)$ is now equal to $a-1+b_q \geq a$, and $v_q(lcm)$ is at least $f_q(n)-1$. Since $p^{a-1+b_q}q^{f_q(n)-1}>p^{a-1}q^{f_q(n)} \geq p^{f_p(n)}q^{f_q(n)}$, the lcm of the partition has increased, so $g(n)$ is not the largest lcm, a contradiction.
Therefore $p^{a-1+b_q}+q^{f_q(n)-1}>q^{f_q(n)}$. Thus $p^aq \geq p^{a-1+b_q}>q^{f_q(n)}-q^{f_q(n)-1}=(q-1)q^{f_q(n)-1}$, so $f_q(n)<1+\frac{\log{(\frac{q}{q-1}p^a)}}{\log{q}}$.
If $q \nmid g(n)$, then $f_q(n)=0<1+\frac{\log{(\frac{q}{q-1}p^a)}}{\log{q}}$.
Thus if $p^a \nmid g(n)$, then $f_q(n)<1+\frac{\log{(\frac{q}{q-1}p^a)}}{\log{q}}$ for all primes $q \not =p$.
Note that $f_p(n)+1 \leq a<1+\frac{\log{(\frac{p}{p-1}p^a)}}{\log{p}}$.
We proceed to show that if $q$ is sufficiently large (in relation to $p, a$), then $p^a \nmid g(n) \Rightarrow q \nmid g(n)$.
We first prove $2$ lemmas:
Lemma 2: Let $S$ be a finite set of primes, and $q$ be a prime s.t. $q^2 \nmid g(n)$. If $\sum_{s \in S}{s^{f_s(n)+1}} \leq q<\prod_{s \in S}{s}$, then $q \nmid g(n)$.
Proof: If $q \mid g(n)$, then $f_q(n)=1$, so $q$ appears in the partition. We may clearly replace $q$ by the terms $s^{f_s(n)+1}, s \in S$ and $q-\sum_{s \in S}{s^{f_s(n)+1}}$. Since $\prod_{s \in S}{s^{f_s(n)+1}}>\prod_{s \in S}{s^{f_s(n)}}q$, the lcm of the partition has increased as a result, so we get a contradiction. Therefore $q \nmid g(n)$.
Lemma 3: Let $p_i$ denote the $i$th prime. Then for $i \geq 3$, we have $p_{i+1}^2+p_{i+2}^2+p_{i+3}^2<p_{i}p_{i+1}p_{i+2}$.
Proof: By Bertrand's postulate, $p_{i+3} \leq 2p_{i+2} \leq 4p_{i+1}$, so $p_{i+1}^2+p_{i+2}^2+p_{i+3}^2<p_{i+1}p_{i+2}+2p_{i+1}p_{i+2}+8p_{i+1}p_{i+2} \leq p_ip_{i+1}p_{i+2}$ for $i \geq 4$. When $i=3$, we clearly have $7^2+11^2+13^2=339<385=5(7)(11)$.
Now, note that for prime $q>p^a$, we have $q-1 \geq p^a$, so $f_q(n)<1+\frac{\log{(\frac{q}{q-1}p^a)}}{\log{q}} \leq 2$. Thus $f_q(n) \leq 1$. Let $c=\max(\pi(p^a), 2) \geq 2$, and consider $q \geq p_{c+1}^2+p_{c+2}^2+p_{c+3}^2$. Let $d$ be the largest positive integer such that $q \geq p_d^2+p_{d+1}^2+p_{d+2}^2$. Clearly $d \geq c+1 \geq 3$, so by the maximality of $d$ and lemma $3$ we have $q<p_{d+1}^2+p_{d+2}^2+p_{d+3}^2<p_dp_{d+1}p_{d+2}$.
Since $q>p_{d+2}>p_{d+1}>p_d \geq p_{c+1}>p^a$, $f_{p_{d+2}}(n), f_{p_{d+1}}(n), f_{p_d}(n), f_q(n) \leq 1$. Therefore $p_d^{f_{p_d}(n)+1}+p_{d+1}^{f_{p_{d+1}}(n)+1}+p_{d+2}^{f_{p_{d+2}}(n)+1} \leq p_d^2+p_{d+1}^2+p_{d+2}^2 \leq q<p_dp_{d+1}p_{d+2}$, so by lemma $2$ we have $q \nmid g(n)$.
Therefore, $p^a \nmid g(n) \Rightarrow q \nmid g(n)$ for $q \geq p_{c+1}^2+p_{c+2}^2+p_{c+3}^2$.
Combining this with the previous bounds on $f_q(n)$, we get:
$$p^a \nmid g(n) \Rightarrow pg(n) \mid \prod_{q \text{prime} \atop q< p_{c+1}^2+p_{c+2}^2+p_{c+3}^2}{q^{\left \lfloor 1+\frac{\log{(\frac{q}{q-1}p^a)}}{\log{q}} \right \rfloor}}$$
This gives
$$g(n)<pg(n) \leq \prod_{q \text{prime} \atop q<p_{c+1}^2+p_{c+2}^2+p_{c+3}^2}{q^{\left \lfloor 1+\frac{\log{(\frac{q}{q-1}p^a)}}{\log{q}} \right \rfloor}}=\prod_{q \text{prime} \atop q \leq p^a}{q^{\left \lfloor 1+\frac{\log{(\frac{q}{q-1}p^a)}}{\log{q}} \right \rfloor}}\prod_{q \text{prime} \atop p^a<q<p_{c+1}^2+p_{c+2}^2+p_{c+3}^2}{q}$$
This clearly implies that
\begin{align}
n<\sum_{q \text{prime} \atop q \leq p^a}{q^{\left \lfloor 1+\frac{\log{(\frac{q}{q-1}p^a)}}{\log{q}} \right \rfloor}}+\sum_{q \text{prime} \atop p^a<q<p_{c+1}^2+p_{c+2}^2+p_{c+3}^2}{q} & \leq \sum_{q \text{prime} \atop q \leq p^a}{(\frac{q^2}{q-1}p^a)}+\sum_{q \text{prime} \atop p^a<q<p_{c+1}^2+p_{c+2}^2+p_{c+3}^2}{q} \\
& \leq p^a\sum_{q \text{prime} \atop q \leq p^a}{(q+2)}+\sum_{q \text{prime} \atop p^a<q<p_{c+1}^2+p_{c+2}^2+p_{c+3}^2}{q}
\end{align}
We have thus shown that for any positive integer $m$, there exists $n_0$ s.t. $n>n_0$ implies $m \mid g(n)$. In fact, the above bound is easily seen to be $O(m^4)$.
Application to $m=10$: We have that $$2 \nmid g(n) \Rightarrow n<2\sum_{q \text{prime} \atop q \leq 2}{(q+2)}+\sum_{q \text{prime} \atop 2<q<5^2+7^2+11^2}{q}=3837$$
$$5 \nmid g(n) \Rightarrow n<5\sum_{q \text{prime} \atop q \leq 5}{(q+2)}+\sum_{q \text{prime} \atop 5<q<7^2+11^2+13^2}{q}=10261$$
Thus $10 \nmid g(n) \Rightarrow n<10261$.
In fact, we can do much better with a simple refinement. Note that by the above results, $2 \nmid g(n) \Rightarrow q \nmid g(n)$ for $q \geq 5^2+7^2+11^2=195$. Observe that $f_2(n)=0$ and $f_p(n) \leq 1$ for $p \geq 3$. Consider $85 \leq q<195$, then since $$2^{f_2(n)+1}+3^{f_3(n)+1}+5^{f_5(n)+1}+7^{f_7(n)+1} \leq 2^1+3^2+5^2+7^2=85 \leq q<195<2(3)(5)(7)$$, we have by lemma $2$ ($f_q(n) \leq 1$) that $q \nmid g(n)$.
The same argument then gives $$2 \nmid g(n) \Rightarrow n<2\sum_{q \text{prime} \atop q \leq 2}{(q+2)}+\sum_{q \text{prime} \atop 2<q<85}{q}=880$$
Similarly, $5 \nmid g(n) \Rightarrow q \nmid g(n)$ for $q \geq 7^2+11^2+13^2=339$. Observe that $f_5(n)=0$ and $f_p(n) \leq 1$ for $p \geq 5$. Consider $175 \leq q<339$, then since $$5^{f_5(n)+1}+7^{f_7(n)+1}+11^{f_{11}(n)+1} \leq 5^1+7^2+11^2=175 \leq q<339<(5)(7)(11)$$, we have by lemma $2$ ($f_q(n) \leq 1$) that $q \nmid g(n)$.
Now $f_2(n)<1+\frac{\log(2\cdot 5)}{\log 2}<5$ and $f_3(n)<1+\frac{\log(\frac{3}{2}\cdot 5)}{\log 3}<3$. Thus $f_2(n) \leq 4, f_3(n) \leq 2$. Consider $113 \leq q<175$, then since $$2^{f_2(n)+1}+3^{f_3(n)+1}+5^{f_5(n)+1}+7^{f_7(n)+1} \leq 2^5+3^3+5+7^2=113 \leq q<175<2(3)(5)(7)$$, we have by lemma $2$ ($f_q(n) \leq 1$) that $q \nmid g(n)$.
The same argument then gives $$5 \nmid g(n) \Rightarrow n<5\sum_{q \text{ prime} \atop q \leq 5}{(q+2)}+\sum_{q \text{ prime} \atop 5<q<113}{q}=1550$$
Thus $10 \nmid g(n) \Rightarrow n<1550$. The remaining small cases can be easily checked with a computer. |
Is joint PDF always less than or equal to marginal PDF? | That link does not say it is the conditional probability which is infinite, but the conditional density. Even then it is stretching a point.
A conditional probability is a probability so is between $0$ and $1$. If $X=f(y)$ when $Y=y$ then $x$ is fixed by $y$ and you can say $\mathbb P(X=f(y) \mid Y=y)=1$ as a conditional probability. This is clearly finite.
But your quote is trying to say this in the language of densities, and so it says the conditional density is $p(x \mid y) = \delta(x-f(y))$ using a Dirac delta function, which despite the name is not a proper function $\mathbb R \to \mathbb R$. This $\delta(w)$ might be seen as zero when $w \not =0$ but with the property $\int\limits_{-\infty}^{\infty} \delta(w)\, dw = 1$, so some people might describe $\delta(0)$ as infinite in a special way. This same special way is what leads to mention of the infinite conditional density in your quote. |
Finite ergodic Markov chain interesting limit | Let $S$ denote the state space of the Markov chain and $S_x=S\setminus\{x\}$. Let $P=(p_{yz})_{(y,z)\in S\times S}$ denote the transition matrix of the Markov chain and $P_x=(p_{yz})_{(y,z)\in S_x\times S_x}$ the matrix obtained from $P$ omitting the $x$ line and the $x$ column. Let $U_x=(p_{xy})_{y\in S_x}$ denote the line vector describing the sub-distribution of $X_1$ restricted to $X_1\ne x$, conditionally on $X_0=x$. Finally, let $\mathbf 1_x=(1)_{y\in S_x}$ denote the column vector of ones indexed by $S_x$.
Then, for every $n\geqslant1$,
$$
\mathbb P_x(\tau_x\gt n)=U_xP_x^{n-1}\mathbf 1_x.\tag{$\ast$}
$$
Assume first that $p_{xy}\ne0$ for every $y\ne x$. Then $N_x:Q\mapsto N_x(Q)=\sum\limits_{(y,z)\in S_x\times S_x} p_{xy}|Q_{yz}|$ is a norm on the space of square matrices indexed by $S_x$ and $N_x(Q)=U_xQ\mathbf 1_x$ for every matrix $Q$ with nonnegative coefficients, hence $\mathbb P_x(\tau_x\gt n)=N_x(P_x^{n-1})$. A consequence is that
$$
\mathbb P_x(\tau_x\gt n)^{1/n}\to\varrho(P_x),
$$
where, for every matrix $Q$, $\varrho(Q)$ denotes the spectral radius of $Q$, which is also the Perron–Frobenius eigenvalue. In other words,
$$
\color{red}{\frac{\log\mathbb P_x(\tau_x\gt n)}n\to\log\varrho(P_x)}.
$$
More generally, assume that the Markov chain is irreducible and aperiodic. Then there exists $k\geqslant0$ such that every coefficient of $U_xP_x^k$ is positive, hence, considering $(U_xP_x^k)P_x^{n-k-1}\mathbf 1_x$, one sees that the same result applies.
Note that the result may really depend on $x$. For example, consider the Markov chain with states $0$ and $1$ such that $p_{01}=a$ and $p_{10}=b$. Then, for every $n\geqslant1$,
$$\mathbb P_0(\tau_0\gt n)=a(1-b)^{n-1},\qquad
\mathbb P_1(\tau_1\gt n)=b(1-a)^{n-1},
$$
hence the limits above are $\log(1-b)$ for $x=0$ and $\log(1-a)$ for $x=1$ (or, more directly, $P_0$ and $P_1$ are $1\times1$ matrices whose unique coefficients are $1-b$ and $1-a$ respectively).
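As a sanity check, here is a minimal numerical sketch (in Python with numpy; the values of $a$ and $b$ are arbitrary choices of mine) verifying both the formula $(\ast)$ and the limit for the two-state example:

```python
import numpy as np

a, b = 0.3, 0.6
P = np.array([[1 - a, a],
              [b, 1 - b]])            # two-state chain with p_01 = a, p_10 = b

x = 0                                  # distinguished state
S_x = [1]                              # S \ {x}
P_x = P[np.ix_(S_x, S_x)]              # P with row and column x removed
U_x = P[x, S_x]                        # sub-distribution of X_1 given X_0 = x
ones = np.ones(len(S_x))

for n in (10, 100, 1000):
    tail = U_x @ np.linalg.matrix_power(P_x, n - 1) @ ones   # P_x(tau_x > n)
    print(n, np.log(tail) / n)

rho = max(abs(np.linalg.eigvals(P_x)))                       # spectral radius
print("log rho(P_x) =", np.log(rho))   # equals log(1 - b), matching the limit
```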
Edit: To prove the key formula $(\ast)$, note that
$\mathbb P_x(\tau_x\gt n)$ is the sum of $p(c)=\prod\limits_{k=1}^np_{x_{k-1}x_k}$ over every path $c=(x_k)_{0\leqslant k\leqslant n}$ in $S$ starting from $x_0=x$ such that $x_k\ne x$ for every $k\geqslant1$. Hence,
$$
\mathbb P_x(\tau_x\gt n)=\sum\limits_{c}p(c)=\sum_{x_1\in S_x}p_{xx_1}\sum_{x_2\in S_x}p_{x_1x_2}\cdots\sum_{x_n\in S_x}p_{x_{n-1}x_n},
$$
that is,
$$
\mathbb P_x(\tau_x\gt n)=\sum_{x_1\in S_x}(U_x)_{x_1}\sum_{x_n\in S_x}(P_x)^{n-1}_{x_1x_n}=\sum_{x_n\in S_x}(U_xP_x^{n-1})_{x_n}=U_xP_x^{n-1}\mathbf 1_x.
$$ |
Maths recursive definition | By induction, all numbers of the form $-(2^x\cdot 3^y)$ for nonnegative integers $x$ and $y$ are in $X$.
Proof: $-(2^0\cdot 3^0)$ is in $X$.
Now suppose the number $-(2^a\cdot 3^b)$ is in $X$. Then the numbers $-(2^{a+1}\cdot 3^b)$ and $-(2^a\cdot 3^{b+1})$ are also in $X$.
Yes, you can easily see that only negative numbers are there and $10$ is not of the form $-(2^x\cdot 3^y)$ so it isn't there. While $-4=-(2^2\cdot 3^0)$ is. |
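For illustration, here is a short sketch (in Python; the base case $-1\in X$ and the closure rules $t\in X\Rightarrow 2t,3t\in X$ are my reading of the recursive definition) enumerating the elements of $X$ down to some bound:

```python
def elements(bound=1000):
    """Enumerate members of X with absolute value at most `bound`."""
    X, frontier = set(), [-1]          # base case: -(2^0 * 3^0) = -1
    while frontier:
        t = frontier.pop()
        if t in X or t < -bound:
            continue
        X.add(t)
        frontier += [2 * t, 3 * t]     # closure rules
    return X

X = elements()
print(10 in X)     # False: every element is negative
print(-4 in X)     # True: -4 = -(2^2 * 3^0)
```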
How to find an invariant of the knot puzzle game? | If $P$ is a knot projection (rather than a link of multiple components), then any starting configuration of lamps can be turned all off or all on. In Region crossing change is an unknotting operation, Ayaka Shimizu proves that for any fixed lamp, there is a sequence of region flips that can change the given lamp but leave all others fixed.
The basic idea of the proof follows. Think of the lamps as crossing information, so that the game board is now a knot diagram. Then flipping all the lamps in a region corresponds to changing all crossings around a region. The procedure to change one crossing is:
1. Orient the knot, and resolve the chosen crossing according to the orientation.
2. The resulting link diagram has two components. Apply a checkerboard coloring for one of the components (pretending the other component is not there).
3. Perform a region crossing change for each of the shaded regions (in the original knot diagram).
Here is an example of the algorithm in action (taken from an expository article by Shimizu):
Since it's possible to change a single crossing (or flip a single lamp in the original formulation) through a sequence of region flips, that means any configuration of on/off lamps (including all-on and all-off) is obtainable from any initial configuration. |
Probability, expected value | The probability you end up with less than you started with is the probability you lose the bet more than twice the number of times that you win. E.g. win 30 times and lose 70 times: you are in the hole. Win 35 times and lose 65 times: you're in the green.
Let $X$ be the number of times you win. Then $Y=100-X$ is the number of times you lose.
$$P(2X < Y) = P(X < 100/3) = P(X \le 33)$$
This is binomial, so
$$P(X \le 33)= \sum_{x=0}^{33} {100\choose x} 0.5^{100} = \dfrac{553785737846639752356280235}{1267650600228229401496703205376}$$
And
$$\left|\dfrac{1}{2300} - \dfrac{553785737846639752356280235}{1267650600228229401496703205376}\right| \approx 0.000002$$
so it appears the author used an approximation.
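For reference, here is a quick exact computation of the tail probability (a Python sketch using integer arithmetic, so there is no rounding):

```python
from fractions import Fraction
from math import comb

p = Fraction(sum(comb(100, x) for x in range(34)), 2**100)
print(p)                    # the exact fraction quoted above
print(float(p), 1 / 2300)   # ≈ 0.000437 vs ≈ 0.000435
```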
As for the expected value, you would expect to win 50 times because the probability of winning a single toss is $0.5$, and these being independent trials, $E(X)=0.5(100)=50$.
The amount of money you wind up with is $M=200X-100Y$. Thus
$$E(M)=E(200X-100Y) = 200E(X)-100E(Y) = 200(50)-100(50) = 5000$$ |
Power series expansion of ODE | Write $y=\sum_{n=0}^{\infty}a_nx^n$. Then $y′=\sum_{n=-1}^{\infty}a_{n+1}(n+1)x^n$
$y′'=\sum_{n=-2}^{\infty}a_{n+2}(n+2)(n+1)x^n$
$\lambda y = 2xy'-(1-x^2)y''$
$\lambda y = 2x[\sum_{n=-1}^{\infty}a_{n+1}(n+1)x^n]-(1-x^2)[\sum_{n=-2}^{\infty}a_{n+2}(n+2)(n+1)x^n]$
$\lambda y = [\sum_{n=-1}^{\infty}2a_{n+1}(n+1)x^{n+1}]-[\sum_{n=-2}^{\infty}a_{n+2}(n+2)(n+1)x^n]+[\sum_{n=-2}^{\infty}a_{n+2}(n+2)(n+1)x^{n+2}]$
$\lambda y = \sum_{n=0}^{\infty}[2a_{n}n-a_{n+2}(n+2)(n+1)+a_{n}n(n-1)]x^n$
$\lambda a_{n} = a_{n}n(n+1)-a_{n+2}(n+2)(n+1)$
$[\lambda-n(n+1)] a_{n} = -a_{n+2}(n+2)(n+1)$
$a_{n+2} = \frac{n(n+1)-\lambda}{(n+2)(n+1)}a_{n}$ |
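This is Legendre's equation, and for $\lambda=\ell(\ell+1)$ the recurrence terminates, giving a polynomial solution. Here is a small sketch (in Python; the choice of initial data $a_0, a_1$ is mine) iterating the recurrence:

```python
from fractions import Fraction

def coefficients(lam, a0, a1, n_max=8):
    """Iterate a_{n+2} = (n(n+1) - lam) a_n / ((n+2)(n+1))."""
    a = [Fraction(a0), Fraction(a1)]
    for n in range(n_max - 1):
        a.append(Fraction(n * (n + 1) - lam, (n + 2) * (n + 1)) * a[n])
    return a

# lambda = 3*4 = 12 with odd initial data: only a_1 and a_3 survive,
# giving x - (5/3) x^3, proportional to the Legendre polynomial P_3.
print(coefficients(lam=12, a0=0, a1=1))
```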
Finite Medvedev degree | You have the right idea, but some details seem a little off or at least unjustified. Where have you used that $\mathcal{A} \neq \mathbf{1}$? Why does $\Phi_{n+2}$ map $\mathcal{A}$ into $\mathcal{B}_n$? Why is $\Psi_n(\mathcal{A})$ different from $\mathcal{A}$ for cofinitely many $n$---is this perhaps where $\mathbf{1}$ or $\mathbf{0}$ comes in? |
definition of payoff matrix, how is it defined? | Yes, it is the expected payoff.
Pick a term in $x^T A y$, say $x_i a_{ij} y_j$: this is the payoff $a_{ij}$ for player 1 when 1 plays $i$ and 2 plays $j$, weighted by the probability of that happening, which is the product of the two independent probabilities $x_i$ (for 1 to play $i$) and $y_j$ (for 2 to play $j$).
Add these all up and you get an expected value for the payoff to 1 when 1 is using mixed strategy $x$ and 2 is using mixed strategy $y$. |
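A tiny numeric illustration (a Python sketch; the payoff matrix and strategies are made-up values of mine):

```python
import numpy as np

A = np.array([[3, 0],
              [1, 2]])           # payoff matrix for player 1
x = np.array([0.5, 0.5])         # player 1's mixed strategy
y = np.array([0.25, 0.75])       # player 2's mixed strategy

print(x @ A @ y)                 # sum_{i,j} x_i a_ij y_j = 1.25
```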
Prove that if $f$ is differentiable at $x$, then this limit equals $f'(x)$ | $$\lim_{h\rightarrow 0}{{f(x+h)-f(x)}\over h}=f'(x)\\
\lim_{h\rightarrow 0}{{f(x-h)-f(x)}\over {-h}}=\lim_{h\rightarrow 0}{{f(x)-f(x-h)}\over h}=f'(x)$$
Adding the two equalities, you have:
$$\lim_{h\rightarrow 0}{{f(x+h)-f(x-h)}\over h} =2f'(x),$$
that is, $\lim_{h\rightarrow 0}\dfrac{f(x+h)-f(x-h)}{2h}=f'(x)$.
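A quick numeric check of this limit (a Python sketch with $f=\sin$, my choice of example):

```python
from math import sin, cos

f, x = sin, 0.7
for h in (1e-2, 1e-4, 1e-6):
    print((f(x + h) - f(x - h)) / (2 * h))   # tends to f'(x)
print(cos(x))                                # f'(x) for comparison
```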
Question in proof that $\dim(U_1+U_2)=\dim U_1+\dim U_2-\dim(U_1 \cap U_2)$ | In very simple terms:
by construction every vector in $U_1$ is a linear combination of $(u_1,...,u_m,v_1,...,v_j)$ and every vector in $U_2$ is a linear combination of $(u_1,...,u_m,w_1,...,w_k)$.
By definition, every vector in $U_1+U_2$ is of the form $u+u^\prime$ with $u\in U_1$ and $u^\prime\in U_2$.
Putting things together every vector in $U_1+U_2$ can be written as a linear combination of $(u_1,...,u_m,v_1,...,v_j,w_1,...,w_k)$. |
Prove that a system of linear equations, will have infinitely many solutions whenever there is a specific value. | Taking user36196’s suggestion, compute the determinant of $$\begin{bmatrix}-1&\sin\alpha&-\cos\alpha \\ \beta & \sin\alpha & \cos\alpha \\ 1&\cos\alpha&\sin\alpha \end{bmatrix}.$$ For there to be an infinite number of solutions to the system, this determinant must vanish. After a bit of simplification, this condition results in the equation $$\beta = \cos2\alpha+\sin2\alpha.$$ Now, the right-hand side can be rewritten as $\sqrt2 \sin\left(2\alpha+\frac\pi4\right),$ so clearly $\beta\in\left[-\sqrt2,\sqrt2\right]$ for this to hold, but that condition by itself is not sufficient for there to be an infinite number of solutions to the system.
Calculate the integral with the help of Monte Carlo method and three given points. | The integral you have to evaluate is supposed to give the volume of a solid above $[-1,1]\times[-1,1]$ and below the surface determined by your function whose graph is given below.
It is obvious that over $[-1,1]\times[-1,1]$ the surface is below $z=1$ and above $0$.
Let's see what happens if we generate pairs of random variables $\{(X_i,Y_i)\}$ uniformly distributed over $[0,1]\times[0,1]$. Then the average
$$\lim_{N\to \infty}\frac1N\sum_{i=1}^N X_i^{2}\cos(Y_i)=E[X_1^{2}\cos(Y_1)]=\int_{0}^{1}\int_{0}^{1}x^{2}\cos(y)\,dx\,dy$$
with probability one (law of large numbers). So the original integral equals the average times $4$.
Your calculation is OK. The difference from the true value of the integral is a result of the small number of data points. I did an experiment with $1000$ data points and the result was $1.12796$. Working with data sets of three points I even got $0.17$ as an approximation of the integral.
I guess that the moral of the original example is partly that small data sets may give very bad approximations.
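For completeness, a minimal sketch of the experiment described above (in Python; the sample sizes are my choices):

```python
import math, random

def estimate(N):
    s = sum(random.random() ** 2 * math.cos(random.random()) for _ in range(N))
    return 4 * s / N                   # estimate of the integral over [-1,1]^2

exact = 4 * (1 / 3) * math.sin(1)      # 4 * int_0^1 x^2 dx * int_0^1 cos y dy
for N in (3, 1000, 100000):
    print(N, estimate(N), "exact:", exact)   # tiny N can be far off
```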
Axiom of pair follows from the weak axiom | Hint: Use the weak axiom to deduce that there is such $z$ and then find a formula, with parameters, such that $\{u\in z\mid \varphi(u,x,y)\}=\{x,y\}$. |
if $P \implies Q$ why does $\bar{Q} \implies \bar{P}$ | These two compound statements $P\implies Q$ and $\bar{Q}\implies \bar{P}$ are in fact equivalent. One easy way to see this is to notice that they have identical truth tables. Namely:
$P=\mbox{true}$, $Q=\mbox{true}$: both compound statements are true.
$P=\mbox{true}$, $Q=\mbox{false}$: both compound statements are false.
$P=\mbox{false}$, $Q=\mbox{true}$: both compound statements are true.
$P=\mbox{false}$, $Q=\mbox{false}$: both compound statements are true. |
Prove $\lim_{x\to\infty} \left( \sqrt{x+1} - \sqrt{x} \right) = 0$ | Try multiplying your expression by $\frac{\sqrt{x + 1} + \sqrt{x}}{\sqrt{x + 1} + \sqrt{x}}$ and then simplify the numerator and take the limit of this new expression as $x \to \infty$. What do you get?
Note that this is a common method for solving a problem like this. We say $\sqrt{x + 1} + \sqrt{x}$ is the conjugate of $\sqrt{x + 1} - \sqrt{x}$. The purpose of multiplying by this is that the numerator becomes the factored form of the difference of squares (recall: $(a + b)(a - b)= a^{2} - b^{2}$). This gets rid of the square roots in the numerator and allows us to cancel the $x$'s. |
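If you want to see the effect numerically, here is a one-glance check (a Python sketch):

```python
from math import sqrt

for x in (1e2, 1e6, 1e12):
    print(sqrt(x + 1) - sqrt(x), 1 / (sqrt(x + 1) + sqrt(x)))
# both columns agree and tend to 0; the conjugate form is also
# numerically stabler, since it avoids subtractive cancellation
```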
Solving Green's function with Dirichlet boundary Conditions | Since
$$ \forall A>0,\qquad \int_{-\infty}^{+\infty}\frac{dy'}{\left(A+(y-y')^2\right)^{3/2}}=\int_{-\infty}^{+\infty}\frac{dy'}{\left(A+y^2\right)^{3/2}}=\frac{2}{A}\tag{1} $$
the given integral equals
$$ 2\int_{0}^{+\infty}\frac{dx'}{(x-x')^2+z^2}=\frac{2}{|z|}\int_{-x/|z|}^{+\infty}\frac{dx'}{x'^2+1^2}=\frac{2}{|z|}\left(\frac{\pi}{2}+\arctan\frac{x}{|z|}\right).\tag{2}$$ |
Estimating remainder without modular principles | $5^3=1+124$
$\implies(5^3)^{41}=(1+124)^{41}$
$=1+\binom{41}1124^1+\binom{41}2124^2\cdots+\binom{41}{40}124^{40}+124^{41}\equiv1\pmod{124}$
$\implies5^{125}=5^2\cdot(5^3)^{41}\equiv25\cdot1\equiv25\pmod{124}$, so the remainder is $25$.
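A one-line sanity check (using Python's built-in modular exponentiation):

```python
print(pow(5, 125, 124))   # 25
```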
Describing convergence/divergence of a complex sequence | Note that
$$\left|a-a_n\right|<\varepsilon$$
Implies:
$$\left|a-a_n\right| \leq \varepsilon$$
On the other hand:
$$\left|a-a_n\right| \leq \varepsilon$$
Implies:
$$\left|a-a_n\right| < 2\varepsilon$$ |
Cayley Graph of the Symmetry Group of the Triangular Prism | Why do you have four generators? I would think three is natural: rotation of 120 degrees, reflection about a plane parallel to the triangle, reflection about a plane orthogonal to the triangle.
Now write down the 12 elements of the group that you have in some organized way and draw edges between them when multiplication with a generator from the right gets you from one element to the other. Label this edge with the corresponding generator. That's your Cayley graph. (Each element should have three outgoing edges and three incoming edges.)
A non-zero and non-invertible element in a noetherian integral domain has a decomposition into irreducible elements | I don't think we need to go far as to find prime ideals. Given a maximal element $(b)$ of $M$, observe that since $(b) \in M$, the element $b$ is not irreducible. So we can write it as a product of two non-units $b = b_1 b_2$. But then $(b_1) \supsetneq (b)$ and $(b_2) \supsetneq (b)$, so by the maximality of $(b)$ in $M$, we see that $b_1$ and $b_2$ can both be written as a product of irreducibles. So $b = b_1 b_2$ can be written as a product of irreducibles, contradicting $(b) \in M$. |
First Fundamental Theorem of Calculus very simple question | The first fundamental theorem tells you that, under reasonable assumptions on $f$ (say, continuity), for any point $t_0$ in the interval of definition of $f$, the function $$ G(x)=\int_{t_0}^x f(s)\,ds $$ is an antiderivative of $f$.
In the text in question, the author is simply taking $t_0=c$.
(Now, once you know $f$ has an antiderivative $G$, for any constant $\alpha$ the function $\alpha+G$ is another antiderivative of $f$, so if all one wanted was an antiderivative vanishing at $c$, choosing $\alpha$ carefully would take care of it. But note that the goal of the author is a bit more, namely, to prove the second fundamental theorem, so he needs the specific formula for $G$ that the first fundamental theorem gives us. The point $c$ that plays here the role of what I called $t_0$ is irrelevant, of course.) |
Showing that the groups $S^1, SO_2, G, \mathbb R/\mathbb Z$ are isomorphic | Define $\phi:S^1\to SO_2$ as
$\phi (e^{i\theta})=\begin{pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{pmatrix}$
and show that this is an isomorphism.
On a Geometric Interpretation of the Local Criterion for Flatness in Eisenbud's | This is a standard statement in the general principle of "true at a point implies true in a neighborhood" that can be applied to many situations involving f.g. modules over noetherian rings.
I'll show how to prove that free at a point implies free in a neighborhood for f.g. modules (see Hartshorne Algebraic Geometry, exercise II.5.7).
So suppose $A$ is a noetherian ring and $M$ is a finitely generated $A$ module free at some point $P \in Spec(A)$. Then there are elements $m_1, \dots, m_s \in M$ such that their localization at $P$ give a basis for $M_P$. Writing $N$ for the submodule of $M$ generated by these elements $m_i$, we have an exact sequence
$$
0 \to C \to A^s \to M \to M/N \to 0
$$
The middle map here is the surjection of the free module onto the submodule $N$. The modules $C$ and $M/N$ are both zero after localizing at $P$ and are finitely generated (here's the only place I used noetherian) and so we know there is some $f \not\in P$ such that $f$ annihilates both $C$ and $M/N$. The basic open set $D(f) = V(f)^c$ of $Spec(A)$ is now a neighborhood of $P$ on which $M$ is free.
I think you can combine this idea with some flatness criteria to get the statement at the end of your question.
Edit: I'm including one of my comments below to more completely answer the question of how to go from local flatness to the statement in Eisenbud.
First, recall that for finitely generated modules, flat and projective are equivalent (since both are equivalent to locally free). Therefore, if $f: X \to Y$ is a finite morphism between schemes, we're done. The hard part (and indeed one of the very useful aspects of this flatness criterion) is proving openness for finite type morphisms, and here I don't see how to get around re-proving some part of generic freeness. Here's a sketch, which is in the spirit of Eisenbud's proof but is more overtly geometric.
Suppose we have a morphism of varieties $\phi: X \to Y$. Being varieties, both $X$ and $Y$ are of finite type over a field $k$, and so in particular $\phi$ is of finite type and we could locally factor $\phi$ in a neighborhood of $P''$ and $P'$ as
$$ X \to \mathbb{A}^n_Y \to Y $$
where $X \to \mathbb{A}^n_Y$ is finite and $\mathbb{A}^n_Y \to Y$ is the projection. Here $n$ is the relative dimension of $A(X)_{P''}$ over $A(Y)_{P'}$.
Flatness is easy to pass back and forth for $\mathbb{A}^n_Y \to Y$ (e.g. using the local flatness criterion on the coordinates of affine space), and now we can reduce to the case $\phi$ is finite and use the proof mentioned previously. |
Taylor expansion of a function of two variables, one of which is an exponent | You might write your function as $ f(x,y) = \exp(y \log(1+x^3))$.
Now $\log(1+x^3) = O(x^3)$, so $$f(x,y) = \exp(O(x^3 y)) = 1 + O(x^3 y)$$
Thus the Taylor expansion to second order is just $1$. |
Conditional Expectation: $E(X|Y) \geq Y$ and $E(Y|X) \geq X$ implies $X=Y$ a.e. | Rewriting what you have in the question body, we have:
$\int_B (X - Y) \geq 0$ and $\int_A (X - Y) \leq 0$ for $B \in \sigma(Y)$ and $A \in \sigma(X)$. We need to show that these are actually equalities, so let us just do one. Let $B \in \sigma(Y)$ be arbitrary, and note that $\int_{B^c} (X - Y) \geq 0$ since $B^c$ is in $\sigma(Y)$ as well. We also have $\int_{B^c} + \int_B = \int_\Omega$. So, overall we have
$$0 \leq \int_B (X - Y) = \int_\Omega (X - Y) - \int_{B^c} (X - Y) \leq 0$$
since the first term is nonpositive ($\Omega \in \sigma(X)$) and the second one is nonnegative. Thus, for any $B \in \sigma(Y)$, $\int_B (X - Y) = 0$. You can run the same argument for all $A$ in $\sigma(X)$ as well, so $\int_A (X - Y) = 0$ for all sets $A \in \sigma(X) \cup \sigma(Y)$. Now, if you want the full sigma algebra $\sigma(X, Y)$, note that $1_{A \cap B} = 1_{A} + 1_{B} - 1_{A \cup B}$, so you can extend the identity to the algebra generated by the two sigma algebras. You can then use monotone convergence to pass to a monotone class containing this algebra, which gives closure under countable monotone unions and intersections. The monotone class theorem for sets then gives the overall result, and hence $X = Y$ a.e.
Lindeberg's condition vs Lyapunov's condition | Consider a random variable $X$ taking values in $\mathbb Z$ and $$P(X=i) = \frac{c}{|i|^3\log^2|i|}, i\in \mathbb Z \setminus \{0,1,-1\}
$$
where $c$ is the correct normalizing constant. Notice $E(X) = 0$ from symmetry, $\sigma^2 :=EX^2 < \infty$ and for any $\delta>0$, $E(|X|^{2+\delta}) = \infty$. Let $X,X_1,X_2, \ldots$ be i.i.d. Clearly Lyapunov's condition does not hold.
Now let's check the Lindeberg condition. $s_n^2 = \sum_{i=1}^n \sigma_i^2 = n\sigma^2$. Also for all $\epsilon >0$,
$$
\frac 1 {s_n^2}\sum_{k=1}^nE(|X_k|^2 1_{|X_k| > \epsilon s_n}) = \frac{n}{\sigma^2 n} \sum_{|i|>\epsilon \sigma \sqrt {n}}i^2 \frac{1}{|i|^3\log^2 |i|} < c \int_{c'\sqrt{n}}^\infty \frac{dz}{z\log^2z} < \int_{c''\log n}^{\infty}\frac{du}{u^2} =O\left(\frac {1}{\log n}\right)
$$
Hence the Lindeberg condition is satisfied.
Show that character $\chi_ϕ(g) ∈ \Bbb Z$ for any representation $ϕ$ of $G$. | You have identified that each $\chi(g^i)$ is in the field $\mathbb Q(\omega)$, and you have shown that $ \sum_{(i,n) = 1} \chi(g^i) $ is invariant under the action of any automorphism $\sigma \in Gal(\mathbb Q(\omega) : \mathbb Q)$. You then deduced that
$$ \sum_{(i,n) = 1} \chi(g^i) \in \mathbb Q. $$
This is most of the hard work!
Since $g$ is conjugate to $g^i$ when $(i,n) = 1$, you know that $\chi(g) = \chi(g^i)$ when $(i,n) = 1$. Therefore, $ \chi(g^i) \in \mathbb Q$
for any $i$ such that $(i,n) = 1$.
But characters of representations over $\mathbb C$ are always algebraic integers! (See here.)
The only algebraic integers in $\mathbb Q$ are the (genuine) integers! So you can conclude that $ \chi(g^i) \in \mathbb Z$ for any $i$ such that $(i,n) = 1$. |
Does there exist a $\mathbb{Q}$ vector space automorphism of $\mathbb{R}$ which is not Lebesgue measurable? | Every Lebesgue-measurable additive function $f \colon \mathbb{R} \to \mathbb{R}$ is continuous, so the Lebesgue-measurable $\mathbb{Q}$-automorphisms of $\mathbb{R}$ are precisely the $\mathbb{R}$-automorphisms of $\mathbb{R}$, i.e. the $x \mapsto cx$ for $c\in \mathbb{R}\setminus \{0\}$.
Since there are $2^{\mathfrak{c}} > \mathfrak{c}$ $\mathbb{Q}$-automorphisms of $\mathbb{R}$ (every permutation of a Hamel basis induces a $\mathbb{Q}$-automorphism, and there are $2^{\mathfrak{c}}$ of those; on the other hand there are only $2^{\mathfrak{c}}$ maps $\mathbb{R}\to \mathbb{R}$), most - in the cardinality sense - $\mathbb{Q}$-automorphisms of $\mathbb{R}$ are not Lebesgue-measurable. |
Solution to Square Triangle Area | The yellow area is the same as the area of the small square drawn with dashed lines. Why? Because if you continue the dotted lines, you divide the square in four parts. The yellow section outside the lower left square is the same as the white section inside that square. If it is not clear I'll try and make a drawing. |
Likelihood across multiple averages... Sorta | I assume the probabilities are independent, meaning that one school's decision has no influence on the result of another. This is probably false, but it's the only way a definitive answer can be given (unless you know exactly what the dependence is).
The probability of getting into at least one school is $1$ minus the probability of getting rejected from every school. Assuming independence, this is just the product of the probabilities of getting rejected from each school. In your example, we get
$$1-(1-.5)(1-.25)(1-.25)(1-.5)=.859375=85.9375\%$$
as the probability of getting accepted into at least one school. |
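The same computation as a short sketch (in Python, under the same independence assumption):

```python
p_reject = [1 - 0.5, 1 - 0.25, 1 - 0.25, 1 - 0.5]   # rejection probabilities

p_all_rejected = 1.0
for p in p_reject:
    p_all_rejected *= p

print(1 - p_all_rejected)   # 0.859375
```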
Evaluate the improper integral | Hint:
$$\arctan{a x}-\arctan{b x} = x \int_b^a \frac{dy}{1+x^2 y^2}$$
Show that you can reverse the order of integration. You then end up integrating
$$\int_0^{\infty} \frac{dx}{1+x^2 y^2} = \frac{\pi}{2 y}$$
I assume you can handle the rest. |
What is the difference in this question between $\log$ and $\lg$? | $\lg3$ is the answer :)
Because $\lg{\dfrac{15}{5}}=\lg{3}$ |
Evaluating $\int\frac{e^{\tan^{-1} x}}{(1+x^2)^{\frac32}}dx$ | Let $I=\int\dfrac{e^{\tan^{-1} x}}{(1+x^2)^{\tfrac32}}dx$.
Substitute $t=\tan ^{-1}x\implies dt=\frac 1{1+x^2}dx$. Now noting that $x=\tan t$, we have
$I=\int e^t \frac 1{\sqrt{\sec^2t}}dt=\int e^t\cos t dt$
Can you take it from here? |
What is $\frac{\partial \overline{z}}{\partial z}$? | Yes, $\frac{\partial\overline{z}}{\partial z} = 0$. You can compute that from the definitions,
$$\begin{align}
\frac{\partial\overline{z}}{\partial z} &= \frac{1}{2}\left(\frac{\partial}{\partial x} - i\frac{\partial}{\partial y}\right)(x-iy)\\
&= \frac{1}{2}\left(\frac{\partial x}{\partial x} + (-i)^2\frac{\partial y}{\partial y} - i\left(\frac{\partial y}{\partial x} + \frac{\partial x}{\partial y}\right)\right)\\
&= \frac{1}{2}\left(1+(-1)\cdot 1 - i(0+0)\right)\\
&= 0.
\end{align}$$
Then you can obtain $\frac{\partial x}{\partial z} = \frac{1}{2}$ for example from $x = \frac{1}{2}(z + \overline{z})$. Or from $\frac{\partial x}{\partial z} = \frac{1}{2}\left(\frac{\partial x}{\partial x} - i \frac{\partial x}{\partial y}\right)$ using the definition of the Wirtinger derivatives. |
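If it helps, the computation can be checked symbolically (a sketch using sympy):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
zbar = x - sp.I * y

# d/dz = (1/2)(d/dx - i d/dy), applied to zbar and to x
d_zbar = sp.Rational(1, 2) * (sp.diff(zbar, x) - sp.I * sp.diff(zbar, y))
d_x = sp.Rational(1, 2) * (sp.diff(x, x) - sp.I * sp.diff(x, y))

print(sp.simplify(d_zbar))   # 0
print(d_x)                   # 1/2
```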
Limits of integral: $\iiint_{D} \frac {\mathrm{d}x\mathrm{d}y\mathrm{d}z}{(x + y + z + 1)^3}$ , where $D =\{ x > 0 , y > 0 , z > 0 , x + y + z < 2\}$ | Method 1
A trivial rescaling of the original integral (substitute $x,y,z$ by $2x,2y,2z$)
can bring it to the form of a type 1 Dirichlet integral:
Let $f$ be a function defined on
$[0,1]$ and $\Delta \subset \mathbb{R}^d$ be the $d$-simplex $0 \le x_1, \ldots, x_d$, $x_1+\cdots+x_d \le 1$. Then we have
$$\int_{\Delta} f(\sum_{i=1}^d x_i) \prod_{i=1}^{d} x_i^{\alpha_i-1} d^d x
=\frac{\prod_{i=1}^d \Gamma(\alpha_i)}{\Gamma(\sum_{i=1}^d \alpha_i)} \int_0^1 f(x)\, x^{\left(\sum_{i=1}^d \alpha_i\right)-1}\, dx$$
In this case $d = 3, f(x) = \frac{1}{(2x+1)^3}$ and $\alpha_1 = \alpha_2 = \alpha_3 = 1$.
It is then clear the integral is equal to
$$\frac{2^3\Gamma(1)^3}{\Gamma(3)}\int_0^1 \frac{x^{(3-1)}}{(2x+1)^3} dx
= 4 \left[ \frac{8x + 2(2x+1)^2\log(2x+1)+3}{16(2x+1)^2}\right]_0^1
= \frac{\log 3}{2} - \frac{4}{9}
$$
Method 2
If one absolutely wants to evaluate this as a multiple integral, there is a useful
trick to turn the integral into one over the hypercube $[0,1]^3$. Perform the substitution $x, y, z$ by $2x, 2y, 2z$ as before and introduce variables $\lambda, \mu, \nu$:
$$\begin{cases}
\lambda & = x + y + z\\
\lambda \mu & = y + z\\
\lambda \mu \nu & = z
\end{cases}
\quad\iff\quad
\begin{cases}
x &= \lambda (1-\mu)\\
y &= \lambda \mu (1-\nu)\\
z &= \lambda \mu \nu
\end{cases}$$
We have $$\begin{align}
dx \wedge dy \wedge dz &= d(x+y+z) \wedge d(y+z) \wedge dz = d\lambda \wedge \lambda d\mu \wedge \lambda\mu d\nu = \lambda^2\mu\,d\lambda \wedge d\mu \wedge d\nu
\end{align}$$
and the integral we want becomes
$$2^3 \int_{[0,1]^3} \frac{\lambda^2 \mu}{(2\lambda + 1)^3} d\lambda d\mu d\nu
= 4 \int_0^1 \frac{\lambda^2}{(2\lambda + 1)^3} d\lambda$$
This is the same integral as the one we obtained with the Dirichlet integral approach.
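Both evaluations can be cross-checked numerically (a sketch assuming scipy is available; `tplquad` integrates over the simplex directly):

```python
from math import log
from scipy.integrate import tplquad

# tplquad integrates f(z, y, x) over x in [a,b], y in [g(x),h(x)],
# z in [q(x,y),r(x,y)]
val, err = tplquad(lambda z, y, x: (x + y + z + 1) ** -3,
                   0, 2,
                   lambda x: 0, lambda x: 2 - x,
                   lambda x, y: 0, lambda x, y: 2 - x - y)

print(val)                  # ≈ 0.104862
print(log(3) / 2 - 4 / 9)   # same value
```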
finite polynomials satisfy $|f(x)|\le 2^x$ | Solution credit goes to Rui Yao:
According to a property of finite differences, if we set $c_n\in \mathbb{Z}$ to be the leading coefficient of $f(x)$, which has degree $n$, then
$$n!c_n=\Delta ^n [f](x)=\sum_{i=0}^{n}\binom{n}{i} f(i)(-1)^{n-i}$$
Thus
$$|n!c_n|=|\sum_{i=0}^{n}\binom{n}{i} f(i)(-1)^{n-i}|\le \sum_{i=0}^{n}|\binom{n}{i} f(i)(-1)^{n-i}|\le \sum_{i=0}^{n}2^i\binom{n}{i}=3^n$$
Combined with $c_n\in \mathbb{Z}\setminus\{0\}$ (so $|c_n|\ge 1$, forcing $n! \le 3^n$), this leads to $n\le 6$.
Since we've shown that for any given degree $n$ there are only finitely many polynomials satisfying the condition, we can conclude that the set of polynomials satisfying the condition is finite.
How to use universal quotient property to construct homeomorphism | Yes, 2 is equivalent to the fibres $f^{-1}[\{y\}]$, $y \in Y$, being exactly the classes of the partition of $X$. This both shows injectivity of $\tilde{f}$ (if two classes map to the same $y$ under $\tilde{f}$, they were already in the same class) as well as well-definedness (if two points are from the same class, they have the same $f$ image).
In your $\Bbb S^2, \mathbb{ RP}^2$ example
the universal property of the topology implies that if $f$ is continuous, so is $g$ (as $g = f \circ q$), so the work is in showing $f$ continuous. $2$ is easy as two distinct points on the sphere define the same line iff they are antipodal. That fact, plus that every class is obtained, makes $g$ a bijection. You still have the show continuity of $f$ though.
Note that in this case the last two steps are unnecessary as a continuous bijection from a compact space to a Hausdorff space is already a homeomorphism. |
Prove the ratio of convergence implies $\frac{U_{h}(f)-U_{h/b^2}(f)}{U_{h/b^2}(f)-U_{h/b^4}(f)} \approx b^k$ | In reality, you do not have enough information to make any progress towards your goal. You need a stronger assumption. Specifically, there must exist an asymptotic error expansion of the form
$$I(f) - U_h(f) = \alpha h^k + O(h^q), \quad h \rightarrow 0, \quad h > 0$$
where $\alpha \not = 0$ is a constant independent of $h$ and $0 < k < q$. Now let $\rho>0$. By replacing $h$ with $\rho h$ we have $$I - U_{\rho h} = \alpha \rho^k h^k + O(h^q),\quad h \rightarrow 0, \quad h > 0.$$
It follows that
$$U_h - U_{\rho h} = \alpha(\rho^k - 1)h^k + O(h^q), \quad h\rightarrow 0, \quad h > 0.$$
By once again replacing $h$ with $\rho h$ we have
$$ U_{\rho h} - U_{\rho^2h} = \alpha (\rho^k-1) \rho^k h^k + O(h^q), \quad h\rightarrow 0, \quad h > 0.$$
It follows that
$$ \frac{U_h - U_{\rho h}}{U_{\rho h} - U_{\rho^2h}} = \frac{\alpha(\rho^k - 1)h^k + O(h^q)}{\alpha (\rho^k-1) \rho^k h^k + O(h^q)} = \frac{1 + O(h^{q-k})}{\rho^k + O(h^{q-k})} \approx \rho^{-k}$$
A common choice of $\rho$ is $\rho = \frac{1}{2}$ which corresponds to halving the length of each subinterval. This allows us to recycle the function values that have already been computed. In this case
$$ \frac{U_h - U_{h/2}}{U_{h/2} - U_{h/4}} \approx 2^k.$$
Your left-hand side corresponds to the choice of $\rho = b^{-2}$ and your right-hand side should have been $b^{2k}$ rather than $b^k$ but this is not a serious issue.
In the case of the trapezoidal rule and a sufficiently smooth integrand the existence of a suitable asymptotic error expansion follows from the Euler–Maclaurin summation formula and $$(k,q) = (2,4).$$ If the integrand is not smooth, then you may still have an asymptotic error expansion, but the exponents $k$ and $q$ are not necessarily integers. Be mindful of the fact that you will never see the numbers $U_h$, only the computed value $\hat{U}_h$ which is affected by rounding errors, and subtractive cancellation is an issue when computing $U_h - U_{\rho h}$ for small values of $h$. It is possible to say substantially more about the behavior of your fraction and the impact of rounding errors, but that is outside the scope of your question.
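Here is a small demonstration (a Python sketch) with the trapezoidal rule, where $k=2$ is known, using $\rho = 1/2$:

```python
import numpy as np

def trap(f, a, b, m):
    """Composite trapezoidal rule with m subintervals."""
    x = np.linspace(a, b, m + 1)
    y = f(x)
    h = (b - a) / m
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

U = [trap(np.sin, 0.0, 1.0, 8 * 2 ** j) for j in range(3)]  # steps h, h/2, h/4
ratio = (U[0] - U[1]) / (U[1] - U[2])
print(ratio, np.log2(ratio))    # ≈ 4 and ≈ 2, recovering k = 2
```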
Showing no point in the rationals (as a subspace of $\mathbb{R}$) has a precompact neighborhood | You have to take the closure in $\Bbb Q$, not in $\Bbb R$. And in the rationals the closure is $[-1,1] \cap \Bbb Q$ which is not compact, as there is a rational sequence in it (converging in $\Bbb R$ to $\frac{1}{\sqrt{2}}$, say) that has no convergence subsequence (in $\Bbb Q$ nor in $[-1,1] \cap \Bbb Q$). |
Cancellation law of morphisms? | One thing you can say is that Schanuel's lemma holds. Maybe that is what you want. |
Question about solving absolute values. | I'll answer by editing your solution slightly:
Depending on the sign of $x-3$:
$$\begin{align}
x-3=3-x&\text{ and }x-3\ge 0&\quad\text{ or }\quad&&-(x-3)=3-x\text{ and }&x-3<0
\\\\
x-3-3+x=0&\text{ and }x\ge 3&\quad\text{ or }\quad&& x-3=x-3\text{ and }&x<3
\\\\
2x-6=0&\text{ and }x\ge 3&\quad\text{ or }\quad&& x-3-x+3=0\text{ and }&x<3
\\\\
2x=6&\text{ and }x\ge 3&\quad\text{ or }\quad&& 0=0\text{ and }&x<3
\\\\
x=3&\text{ and }x\ge 3&\quad\text{ or }\quad&& \text{(true) and }&x<3
\\\\
&x=3&\quad\text{ or }\quad&& x<3&
\\\\
&&x\le 3&
\end{align}$$
edit: As a further explanation of the problem as a whole, consider the graph below, where $|x-3|$ is shown in blue and $3-x$ is shown in red.
The graphs coincide for $x\le 3$ and the blue graph is higher for $x>3$, so the original equation is true for $x\le 3$. |
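A quick spot check of the solution set (a Python sketch):

```python
for x in (-1.0, 0.0, 3.0, 4.0):
    print(x, abs(x - 3) == 3 - x)   # True exactly when x <= 3
```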
Determinants and uniqueness of a solution | It's clearly false. Take
$A = I; \tag 1$
then
$\det(A) = 1 \ne 0, \tag 2$
but
$\forall x, \; Ax = Ix = x; \tag 3$
the solution to
$Ax = x \tag 4$
is certainly not unique!
This example shows that the proposed hypothetical is not true, but why? I think the most concise reason I can give is that $\det(A) \ne 0$ does not preclude $\det (A - I) = 0$; if
$\det(A - I) \ne 0, \tag 5$
then
$(A - I)x = 0 \tag 6$
has the unique solution $0$; but when $\det(A - I)$ vanishes, in general we will have
$\ker (A - I) \ne \{0\}; \tag 7$
indeed, $\dim(\ker(A - I))$ may be as great as $\text{size}(A)$, as the above example indicates. |
Prove: If $b\in$ Range($A$), then $b \not\in$ Null($A^T$). | If $b=Ax $ and $A^Tb=0$, you have $A^TAx=0$. Then $$0=x^TA^TAx=(Ax)^TAx=b^Tb, $$ and so $b=0$. |
Expectation value of random variable conditionally defined | Express
$$ X = WY + (1 - W)Z $$
where $W \sim \text{Bernoulli}(p)$ and independent of $Y, Z$. Then taking expectation, and by independence, you immediately obtain the desired result. |
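A quick simulation confirming $E(X)=pE(Y)+(1-p)E(Z)$ (a sketch with illustrative choices $Y\sim N(1,1)$, $Z\sim N(5,1)$, $p=0.3$, all mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10**6, 0.3
W = rng.random(n) < p                  # W ~ Bernoulli(p), independent of Y, Z
Y = rng.normal(1.0, 1.0, n)
Z = rng.normal(5.0, 1.0, n)
X = np.where(W, Y, Z)                  # X = W*Y + (1 - W)*Z

print(X.mean(), p * 1.0 + (1 - p) * 5.0)   # both ≈ 3.8
```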
Exercise about inequalities | $$\begin{array}{ll}
|1+3x|\le 1 &\iff -1 \le 1 +3x \le 1\\
&\iff -2\le 3x \le 0\\
&\iff -\frac{2}{3}\le x \le 0\\
&\iff -\frac{2}{3}\le x \text{ and } x \le 0\\
&\implies -\frac{2}{3}\le x
\end{array}$$ |