Is it possible that the zeroes of a polynomial form an infinite field? | Proof by induction on $n$:
For $n=1$: a nonzero polynomial would let us divide off a linear factor $x_1-a$ for each $a\in F$, which is impossible since the degree is finite.
For each $a\in F$, the polynomial $p(x_1,\ldots,x_{n-1},a)\in K[x_1,\ldots,x_{n-1}]$ vanishes on $F$, hence by induction it is the zero polynomial. When we write $p$ as an element of $K[x_n][x_1,\ldots,x_{n-1}]$, this means that the coefficient of each monomial is a polynomial $\in K[x_n]$ that vanishes on $F$. As in the case $n=1$, this implies that all coefficients are zero. |
$\lim_{y \to 0}\frac{1}{\ln(y)}\int_{y}^1\frac{1-\cos(x)}{x^3}dx=-1/2$ | Numerator goes to $\infty$ and denominator to $-\infty$
L'Hôpital's rule:
$$\lim_{y\to 0}\frac{\int_y^1 \frac{1-\cos x}{x^3} \, dx}{\ln y}=\lim_{y\to 0}\frac{\frac{\cos y}{y^3}-\frac{1}{y^3}}{\frac{1}{y}}=\lim_{y\to 0}\frac{\cos y-1}{y^2}=-\frac12$$ |
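(A quick numerical sanity check of this value, sketched in Python; SciPy is assumed available and is not part of the original answer:)

```python
import numpy as np
from scipy.integrate import quad

def quotient(y):
    # (1 - cos x)/x^3 ~ 1/(2x) near 0, so the integral grows like -log(y)/2
    integral, _ = quad(lambda x: (1 - np.cos(x)) / x**3, y, 1)
    return integral / np.log(y)

for y in (1e-2, 1e-4, 1e-6):
    print(y, quotient(y))  # tends to -1/2
```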
Definition of normal field extension | $E$ is not the splitting field of $X-\alpha$. You are misunderstanding the definition of a splitting field. A splitting field of a polynomial $f(X)\in K[X]$ is NOT just a field where your polynomial $f(X)$ splits. It's a field extension $E/K$ such that $f(X)$ splits in $E$ and $E$ is generated over $K$ by the roots of $f(X)$. In this case $K[\alpha]=K$, so $E$ is not generated by roots of $X-\alpha$ unless $E=K$. |
Is there a matrix product which results in this relation? | This is the outer product $a b^{T}$ for $a,b$ as column vectors. |
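(A one-line NumPy illustration, not part of the original answer:)

```python
import numpy as np

a = np.array([1, 2, 3])   # column vector a
b = np.array([4, 5])      # column vector b
M = np.outer(a, b)        # M[i, j] = a[i] * b[j], i.e. a b^T
print(M)                  # [[ 4  5] [ 8 10] [12 15]]
```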
number of combinations/permutations | If order does matter, then there are just two choices for each drawer, so there are $2^n$ possibilities for $n$ drawers. In this case ($n=3$) that gives $2^3=8$.
If order does not matter, then you can think, if you have $n$ writing utensils, you could have $0$ pencils (and therefore $n$ pens), $1$ pencil (and therefore $n-1$ pens), 2 pencils, etc., up to $n$ pencils and no pens. This is $n+1$ options. |
Problem related to p-adic metric. | HINTS:
Is there a maximum possible value of $\|x\|_{(p)}$?
For any given positive integer $k$ let $R_k=\{0,1,2,\ldots,p^k-1\}$; given $x\in\Bbb Z$, can you find $y\in R_k$ such that $\rho_{(p)}(x,y)\le p^{-k}$?
Corrected and extended: Let $x_n=\sum_{k=0}^np^k$ for $n\in\Bbb N$. Show that if $k,\ell\ge n$, then $\rho_{(p)}(x_k,x_\ell)<p^{-n}$, so the sequence $\sigma=\langle x_n:n\in\Bbb N\rangle$ is Cauchy. Let $x\in\Bbb Z$ be arbitrary, and fix $n\in\Bbb N$ such that $|x|<p^n$. Show that if $k>n$, then $\rho_{(p)}(x_k,x)\ge p^{-(n+1)}$, so $\sigma$ does not converge to $x$. Conclude that the space is not complete.
A metric space is compact if and only if it is complete and totally bounded. |
May a monoid have two disjoint submonoids? | "May a monoid have a different neutral element than its submonoid?"
By the very definition of a submonoid, no.
But of course there are sub-semigroups which happen to be monoids w.r.t. a different neutral element, for example $(\{0\},*,0)$ inside $(\{0,1\},*,1)$. |
Show that there is a surjective homomorphism from $\pi_1(K,x_0)$ onto $D_6$ | a) A typical presentation of $\pi_1(K)$ is $\langle a,b \mid abab^{-1} = 1\rangle$. One particularly useful presentation of the dihedral group $D_6$ is $\langle x,y \mid x^6 = y^2 = xyxy^{-1} =1\rangle$. Then we can define a map $f:\pi_1(K) \rightarrow D_6$ by $f(a)=x,f(b)=y$ and easily check that $f$ is a surjective homomorphism.
b) We proceed by contradiction. Assume $p:K \rightarrow T$ where $T$ is the torus, is a covering space. It is a fundamental result that the induced map $p_*:\pi_1(K)\rightarrow \pi_1(T)$ is injective. Thus for the presentation
$$\pi_1(K) \approx \langle a,b \mid abab^{-1} = 1\rangle$$
As $p_*$ is a homomorphism and $\pi_1(T)$ is abelian,
$$p_*(abab^{-1}) = p_*(a)p_*(b)p_*(a)p_*(b)^{-1} = p_*(a)^2 = 1$$
As $\pi_1(T) \approx \mathbb Z \oplus \mathbb Z$ has no element of order $2$, we conclude that $p_*(a) =1$, which contradicts the injectivity of $p_*$, as $ a \neq 1$ in $\pi_1(K)$. Thus $p:K \rightarrow T$ is not a covering space. |
A Limit Question of 0/0 Uncertainty | Here is something to get you started:
To what is this limit evaluated:
$$\lim_{x\rightarrow 0} \frac{\sin x}{x}$$
You probably know that the above limit is $1$. Now, divide numerator and denominator by $x$. This should result in:
$$\lim_{x\rightarrow 0}\frac{3+ \frac{\sin^2 x}{x}}{\frac{\sin 2x}{x}- x^2}$$
Then, write $\frac{\sin^2 x}{x}$ as $\frac{\sin x \cdot \sin x}{x}$ and use the limit I presented you with above. In the denominator make the change of variable $u = 2x$ and make use of that limit again.
I am pretty sure you can take it from here. :) |
Is there a linear transformation from $\mathbb R^2\to\mathbb R^2$ with $\dim(\ker T)=\dim(\mathrm{im}T)=1$ and $\mathbb R^2=\ker T\oplus\mathrm{im}T$? | Sure, take $T\colon ℝ^2 → ℝ^2,~(x,y) ↦ (x,0)$. Now what is $\ker T$ and $\operatorname{img} T$?
An important class of examples of such transformations $T$ on a vector space $V$ with the property $V = \ker T \oplus \operatorname{img} T$ are so-called projectors, see wiki/Projector. These are endomorphisms $p$ on a vector space $V$ such that $p^2 = p$. It’s easy to see that, for these, $\ker p ∩ \operatorname{img} p = 0$ and $\ker p + \operatorname{img} p = V$. |
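(A minimal numerical sketch of these two facts, using the matrix of the $T$ above; NumPy assumed:)

```python
import numpy as np

P = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # T(x, y) = (x, 0)

assert np.allclose(P @ P, P)        # idempotent: P^2 = P
v = np.array([3.0, 7.0])
print(P @ v, v - P @ v)             # [3. 0.] [0. 7.]: the img-part and ker-part of v
```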
How can we bend a line by an angle? | Not a complete answer, but possibly helpful (if I understand the question correctly).
First, redraw the picture so that the circle is the unit circle in the plane and the line is the set of points $(1,y)$ along the vertical tangent to the circle at $(1,0)$.
Then the central angle, as seen from the origin, between $(1,0)$ and the point $(1,y)$ on the line is $\arctan y$. You want to end up at the point on the circle with central angle $c \arctan y$. That point has coordinates
$$
(\cos(c \arctan { y}), \sin(c \arctan y)) .
$$
The radius you drew suggests the answer above.
Note that this wrapping does not convert distances along the line to distances along the circle. If that is what you want, the map is in fact easier: the point $(1,y)$ maps to $(\cos cy, \sin cy)$.
These calculations are all in radians. Convert to degrees and move the coordinate system as you wish.
(Perhaps someone will edit this answer to include a picture.) |
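In lieu of a picture, here is a small Python sketch of both maps described above (the function names are mine):

```python
import numpy as np

def wrap_by_angle(y, c=1.0):
    """(1, y) on the tangent line -> circle point at central angle c*arctan(y)."""
    t = c * np.arctan(y)
    return np.cos(t), np.sin(t)

def wrap_by_arclength(y, c=1.0):
    """Distance-preserving variant: (1, y) -> circle point at angle c*y."""
    return np.cos(c * y), np.sin(c * y)

print(wrap_by_angle(1.0))      # angle pi/4 on the circle
print(wrap_by_arclength(1.0))  # angle 1 radian on the circle
```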
Is the Iterated Continued fraction from Convergents for Pi/2 exactly 3/2? | Suppose that the original (irrational) number has the continued fraction $[1;x,y,\ldots]$. After one iteration, the number is of the form
$$1+\frac{1}{1+1/x+\frac{1}{1+1/(x+1/y)+\epsilon}},$$
where $\epsilon < 1$. Its continued fraction starts $[1;z,w,\ldots]$, where $z$ is the floor of
$$1+1/x+\frac{1}{1+1/(x+1/y)+\epsilon}.$$
It's easy to see that this quantity is always less than $3$, so either $z=1$ or $z=2$. If $x=1$ then clearly $z=2$. Therefore after at most two iterations we reach a number with $z=2$. In this case $w$ is the floor of
$$
\begin{align*}
&\frac{1}{\frac{1}{1+1/(2+1/y)+\epsilon} - \frac{1}{2}} \\ =
&\frac{1}{\frac{2+1/y}{3+1/y+(2+1/y)\epsilon} - \frac{1}{2}} \\ =
&\frac{6+2/y+(4+2/y)\epsilon}{1+1/y-(2+1/y)\epsilon} \\ =
&\frac{6y+2+(4y+2)\epsilon}{y+1-(2y+1)\epsilon}.
\end{align*}
$$
Denote by $r = [1;2,y,\ldots]$ the original number. So
$$ r = [1;2,y,\ldots] \geq 1+\frac{1}{2+1/y} = \frac{3y+1}{2y+1}.$$
Denote the convergents of $r$ by $r_1=1,r_2=3/2,r_3,\ldots$. For $i \geq 3$ we have $$\frac{3y+1}{2y+1} \leq r_i \leq \frac{3}{2}.$$
Denote these by $r_l < r < r_h$. Therefore
$$
\begin{align*}
\epsilon &= [0;r_3,r_4,\ldots] \\ &\geq [0;r_h,r_l,r_h,r_l,\ldots] \\ &=
\frac{\sqrt{r_l^2 r_h^2 + 4r_l r_h} - r_lr_h}{2r_l} \geq \frac{1}{2} - \frac{1}{60y}.
\end{align*}
$$
The last inequality can be obtained through laborious Taylor expansion. Substituting this value in the lower bound for $w$, we get that it is at least
$$\frac{312y^2+126y}{32y+1} \geq \frac{312}{32} y. $$
Therefore the third coefficient of the continued fraction increases exponentially. We deduce that the iterates converge exponentially to $[1;2] = 3/2$. |
How to efficiently minimize factored single variable polynomial? | You are given $$f(x)=(x-a_1)(x-a_2)\cdots (x-a_n) $$
with $a_1\le a_2\le\ldots \le a_n$.
We may assume $n$ is even, or else there is no global minimum.
Local minima are between roots in intervals $(a_{2k-1},a_{2k})$, namely where $f$ is negative; we need to find these local minima and pick the lowest among them.
Exception: If all $(a_{2k-1},a_{2k})$ are empty (i.e., all roots are of even multiplicity), then $f(x)\ge 0$ for all $x$ and the roots are precisely the minimizers, and we are done.
We have
$$\tag1f'(x)=\sum_{i=1}^n\prod_{j=1\atop j\ne i}^n(x-a_j) $$
and look for roots of this in $(a_{2k-1},a_{2k})$. If one or both of $a_{2k-1},a_{2k}$ is of higher multiplicity $m_{2k-1}\ge2$ or $m_{2k}\ge2$, then it is also a root of $f'$ (of multiplicity $m_{2k-1}-1$ or $m_{2k}-1$). To avoid distraction, get rid of these and consider
$$g(x)=\frac{f'(x)}{\prod_{i=1}^n(x-a_i)^{m_i-1}} $$
which doesn't really involve divisions but rather dropping a few factors from $(1)$.
Now use any of the well-known methods to find zeros of $g$ in $(a_{2k-1},a_{2k})$. For example, the secant method works (because $g$ is non-zero at the interval ends) and avoids computation of $g'$. |
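A minimal sketch of the whole procedure for the case of simple roots (so $g=f'$); the helper names are illustrative and Python is assumed:

```python
import numpy as np

def f(x, roots):
    return np.prod([x - a for a in roots])

def fprime(x, roots):
    # formula (1): sum over i of prod_{j != i} (x - a_j)
    return sum(np.prod([x - a for j, a in enumerate(roots) if j != i])
               for i in range(len(roots)))

def secant(g, x0, x1, tol=1e-12, itmax=200):
    for _ in range(itmax):
        g0, g1 = g(x0), g(x1)
        if g1 == g0:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x1 - x0) < tol:
            break
    return x1

roots = [0.0, 1.0, 3.0, 4.0]          # sorted, simple roots, n even
g = lambda x: fprime(x, roots)
candidates = [secant(g, roots[2*k], roots[2*k + 1])
              for k in range(len(roots) // 2)]
best = min(candidates, key=lambda x: f(x, roots))
print(best, f(best, roots))           # global minimizer and minimum (-2.25 here)
```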
Inverse trigonometric expansion related question | Every reasonable function of two variables has a Taylor expansion, which is a sum of homogeneous polynomials of increasing degree.
Your first function makes it trivial as it is the sum of two one-variable functions, and the polynomials are of the form $c_k(x^{2k+1}+y^{2k+1})$.
The second function isn't much more complex, as you can write it as the sum of $c_k(x\pm y)^{2k+1}$. These can be developed using the binomial formula
$$c_k(x\pm y)^{2k+1}=c_k\sum_{j=0}^{2k+1}\binom {2k+1}j x^{j}(\pm y)^{2k+1-j}.$$ |
How do I calculate values for Gamma function with complex arguments? | You can start using the asymptotic development (Stirling series)
$$\log\big(\Gamma(x)\big)=x (\log (x)-1)+\frac{1}{2} \left(\log (2 \pi)-\log(x)\right)+\frac{1}{12 x}-\frac{1}{360 x^3}+\frac{1}{1260 x^5}-\frac{1}{1680 x^7}+\frac{1}{1188 x^9}-\frac{691}{360360 x^{11}}+\frac{1}{156 x^{13}}-\frac{3617}{122400 x^{15}}+\frac{43867}{244188 x^{17}}+O\left(\left(\frac{1}{x}\right)^{35/2}\right)$$
and use the fact that $x$ is a complex number.
For example, this expansion gives $$\log\big(\Gamma(3+2i)\big)=-0.03163905938061+2.02219319752573 i$$ while the "exact" value (as given by a CAS) would be $$\approx -0.03163905937396 + 2.02219319750133 i$$ So, the expansion would lead to $$\Gamma(3+2i)=-0.422637286329669 + 0.871814255680394 i$$ for an "exact" value (as given by a CAS) $$\approx -0.422637286311202 + 0.871814255696507 i$$ For sure, you could have more terms for the expansion. |
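As a cross-check (a sketch assuming the `mpmath` library, which handles complex arguments directly):

```python
import mpmath as mp

z = mp.mpc(3, 2)
# first few terms of the Stirling series quoted above
logG = (z*(mp.log(z) - 1) + (mp.log(2*mp.pi) - mp.log(z))/2
        + 1/(12*z) - 1/(360*z**3) + 1/(1260*z**5) - 1/(1680*z**7))
print(mp.exp(logG))   # ~ -0.4226372863 + 0.8718142557j
print(mp.gamma(z))    # reference value from the library
```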
Proof that a certain section of the orientation cover is continuous | I will use the notation $f$ instead of $\hat {\alpha}$ just for convenience. We have to show that the section $f:x \mapsto \alpha_x$ is continuous. Fix a point $x \in M$. It suffices to show that $f$ is continuous at $x$. Let $U:=U(\beta_B)$ be any basis open set of $M_\Bbb Z$ containing the point $f(x)=\alpha_x$. We have to show that $f^{-1}(U)$ contains a neighborhood of $x$ in $M$. (Here, the notation $U(\beta_B)$ is as in Hatcher's. Thus, $B$ is an open ball such that $x \in B \subset \Bbb R^n \subset M$ and $\beta_B \in H_n (M,M-B;R)$. Also, we are identifying a neighborhood of $x$ in $M$ that is homeomorphic to $\Bbb R^n$ with $\Bbb R^n$, again as in Hatcher's. I think you might be familiar with these because you were reading Hatcher's.).
We will in fact show that $f^{-1}(U)=B$. We obviously have the inclusion "$\subset$". Note that, since $\alpha_x \in U$, we have $\beta_B \mapsto \alpha_x$ under the map $H_n(M,M-B;R) \to H_n(M,M-x;R)$, by the definition of $U$. On the other hand, under the map $H_n(M;R) \to H_n(M,M-x;R)$ we have $\alpha \mapsto \alpha_x$. Since this map factors through $H_n(M,M-B;R)$, we must have $\alpha \mapsto \beta_B$ under the map $H_n(M;R) \to H_n(M,M-B;R)$ (because $H_n(M,M-B;R) \to H_n(M,M-x;R)$ is an isomorphism, and in particular injective).
Finally, to show that $B \subset f^{-1}(U)$, let $y \in B$. By hypothesis we have $\alpha \mapsto \alpha_y$ under the map $H_n(M;R) \to H_n(M,M-y;R)$, which factors through $H_n(M,M-B;R)$. Hence, by commutativity, we must have $\beta_B \mapsto \alpha_y$ under the map $H_n(M,M-B;R) \to H_n(M,M-y;R)$, which implies $f(y)=\alpha_y \in U$, or equivalently $y \in f^{-1}(U)$, as desired, and the proof is complete.
The proof seems long because I wrote all the details, but in fact there are no difficult ideas, i.e., this is a quite straightforward argument. |
Riemann-Roch Theorem and Ideals of a Ring | As suggested in the comments, I will try to write an answer (although I'm a little against this).
First of all, I may accidentally forget to write the word "compact" in "compact Riemann surface". So I'm assuming that all Riemann surfaces are compact. Actually everything fails if it's not compact. For instance, the correspondence between curves and Riemann surfaces (that will be explained later) fails for the disk (actually, the disk is not even a complex affine curve).
Let $X$ be a Riemann surface. This means that $X$ is a complex manifold of dimension $1$. One can consider the field of meromorphic functions (or rational functions) on $X$, usually denoted by $\mathbb{C}(X)$, i.e., the field consisting of the functions that have a power series expansion of the form $f(x) = \sum_{k = -n}^{\infty} c_k (x - p)^k$ around each point $p \in X$.
It turns out that if $X$ is compact we can reconstruct $X$ just from $\mathbb{C} (X)$. Actually, given a finite extension $K/\mathbb{C}(x)$ (the field of rational functions), we can consider the set of valuations of $K$, which we will call $Y_{K}$, and put on $Y_K$ the coarsest topology in which the sets $V(f) = \{v \in Y_K; v(f)>0 \}$ are closed. Intuitively, this means that for each meromorphic function $f$ of a hypothetical space $Y_K$ we consider the zeroes of $f$ to be a closed set.
For instance, if we let $K = \mathbb{C} (x) = \mathbb{C} (\mathbb{P}^1)$ (the meromorphic functions on the Riemann sphere), we get a point $v_p$ in $Y_{K}$ for each point of $\mathbb{P}^1$ together with a point at infinity $v_{\infty}$ (the archimedean valuation). More precisely, $v_{p} (f) = -n$ if $f(x) = \sum_{k = -n}^m c_k (x - p)^k$ near the point $p$ and $v_{\infty} (f) = -deg(f)$. So, in this case, $v (f) > 0$ for $v= v_p, v_{\infty}$ precisely when $f$ is zero at the given point represented by the valuation.
However, of course, the topology on $Y_{K}$ is coarser than the topology of $\mathbb{P}^1$. Nowadays, we know how to recover the finer topology entirely. This is done by using the GAGA correspondence, which gives as output the space $Y_{K}^{an}$ (the analytification of $Y_K$). Reciprocally, we can embed any Riemann surface $X$ in the projective space $\mathbb{P}^3(\mathbb{C})$ and use the Zariski topology to get back this coarse topology (defined earlier). So this is, in fact, a bijective correspondence. More precisely, I'm saying that complex smooth projective curves are exactly Riemann surfaces. However, these smooth complex projective curves always arise as some $Y_K$ for some $K$ (as above). Hence we get the correspondence between compact Riemann surfaces and extensions of $\mathbb{C}(x)$.
Actually, given any two (complex) function fields $K$ and $L$ (i. e, finite extensions of $\mathbb{C}(x)$) together with maps $L \rightarrow K$ we get a ramified covering $Y_K \rightarrow Y_L$. Furthermore, this covering is Galois iff $K/L$ is Galois. So we can see a lot of analogies between certain fields and Riemann surfaces.
It turns out that all this procedure using valuations can be carried over to any global field (finite extensions of $k(t)$ or finite extensions of $\mathbb{Q}$) or, more generally, to any Dedekind domain (a Noetherian domain such that the localizations at primes are discrete valuation rings). It should be noted that this procedure using valuations is a little problematic in higher dimensions (when the transcendence degree of $K$ over $\mathbb{C}$ is greater than one) and does not coincide with the usual modern scheme theory (this is because isomorphic fields of meromorphic functions, i.e., birationally equivalent varieties, do not yield isomorphic varieties in higher dimensions).
For instance, for the number field $\mathbb{Q}$, we have exactly one valuation for each prime $p$, given by $v_p\!\left(\frac{m}{p^{n}k}\right) = -n$ for $\frac{m}{k}$ coprime to $p$, and one valuation at infinity $v_{\infty}(a) = \log (|a|)$ where $|-|$ is the usual norm on the reals. In this case, the algebraic integers of $\mathbb{Q}$ are the ordinary integers $\mathbb{Z}$. Furthermore we can complete the field at some $v$ (this is analogous to picking germs of holomorphic functions around the point corresponding to $v$) and we get the usual fields $\mathbb{Q}_p$ for the valuations $v_p$ and $\mathbb{R}$ for the valuation $v_{\infty}$.
Again, as in the case of Riemann surfaces, extensions correspond to ramified coverings of the corresponding curves. More precisely, given two number fields $L$ and $K$ together with a map $L \rightarrow K$ we get a finite ramified covering $Y_K \rightarrow Y_L$, where $Y_K$ denotes $\text{Proj} (\mathcal{O}_K) = \text{Spec} (\mathcal{O_K}) \cup \{\text{valuations at } \infty \}$ (which amounts to exactly the same construction done for Riemann surfaces earlier). In this case, the non-archimedean valuations (the non-infinity ones) correspond exactly to the prime ideals in $\mathcal{O}_L$, and ramification above a given point $\mathfrak{p}$ (prime ideal) means exactly that over the extension $K$, $\mathfrak{B}^{e}\mid \mathfrak{p}$ for some prime ideal $\mathfrak{B}$ and $e > 1$.
So this correspondence indeed makes sense. Actually, this correspondence has developed in a more general one called the function field analogy, but I will not write about this (you can check for instance http://www-math.mit.edu/~poonen/papers/curves.pdf around page 32).
Now let's go to the line bundles. A line bundle $\pi: L \rightarrow X$ over a Riemann surface $X$ is a complex vector bundle of dimension one, i.e., it's a complex manifold $L$ that looks like $X \times \mathbb{C}$ locally. We can pick the sections of this line bundle over each open set $U \subset X$, i.e., the functions $s: U \rightarrow L$ such that $\pi \circ s = 1_U$. This yields a module over the ring of holomorphic functions for each open set $U$ (this is what's called a sheaf of $\mathscr{O}_X$-modules). For the case of number fields we can do the same using $\mathcal{O}_K$ instead of the ring of holomorphic functions (look at the notational similarity). For this we need to tell what the regular functions (an analogue of the holomorphic functions) are on each open set $D(f)= \text{Spec} (\mathcal{O}_K) \setminus V(f)$. This is done by localizing (inverting) $f$, and we denote this by $A_{f}$ (I may write $A$ instead of $\mathcal{O}_K$ if the procedure holds for any commutative ring with unity). It turns out that giving a line bundle over $\text{Spec} (A)$ is the same as giving an $A$-module $M$ that satisfies $M \otimes_A \text{Hom}_A (M, A) \cong A$. In the case of a Dedekind domain and, in particular, the case of $\mathcal{O}_K$, this is exactly the same as giving a fractional ideal $I \subset K$. Actually for any integral domain $A$, line bundles correspond to invertible fractional ideals of the field of fractions of $A$.
The Riemann Roch theorem says that for a line bundle $L$ and a special line bundle $K$ (the canonical bundle, given by the 1-forms on the curve) on a smooth projective curve of genus $g$, the equality $$h^0 (X, L) - h^0 (X, L^{-1} \otimes K) = deg (L) + 1 -g$$ holds. The degree of a line bundle in the formula is described by the degree of the Weil divisor associated to $L$. Roughly speaking, given a line bundle over a smooth projective curve (a Riemann surface or the spectrum of any Dedekind domain) we can find the points where the line bundle fails to give an isomorphism $L \cong X \times \mathbb{C}$ (or $L \cong \mathcal{O}_K$), together with the multiplicities of this failure at each point. The sum of these multiplicities is the degree. More precisely, there is a section, defined away from finitely many points, that fails to give the above trivialization because it is zero at some of these points and has poles at the undefined points. The sum of the multiplicities of the zeros minus the multiplicities of the poles is exactly the degree $deg (L)$.
I don't have time to write more now. Maybe I will add more things later. If something is not clear, just ask me (I don't know how much knowledge about commutative algebra and differential geometry you have). |
The set $5^{-\infty}\mathbb{Z}$ is a colimit | Hint: $5^{-n}\Bbb{Z}$ here means $\{5^{-n} x \mid x \in \Bbb{Z}\} \subseteq \Bbb{Q}$. The arrow $5^{-n}\Bbb{Z} \to 5^{-n-1}\Bbb{Z}$ is the inclusion function $x \mapsto x$ (a multiple of $1/5$ is also a multiple of $1/25$ etc.). The colimit construction gives that the elements of the colimit are represented by finite sequences $(x_1, \ldots x_n)$ of integers with $x_n \neq 0$ (or $n = 1$ and $x_n = 0$) under an equivalence relation under which each such sequence is identified with $x_n/5^n$. |
Divide a square into different parts | By request, I'm spinning my comment out into an answer. For $n$ even, say $n=2k$, subdivide the square into a $k\times k$ grid of squares. I'll show it for $k=5$, because I think it's easier to visualize when everything renders as true squares rather than with $\dots$:
$$\begin{array}{|c|c|c|c|c|}
\hline
\,\,&\,\,&\,\,&\,\,&\,\, \\
\hline
& & & & \\
\hline
& & & & \\
\hline
& & & & \\
\hline
& & & & \\
\hline
\end{array}
$$
Now divide it into the top row of squares, the left column of squares, and everything else:
$$\begin{array}{|c|c c c c|}
\hline
\,\,&\,\,\,\,\mid &\,\,\mid &\,\,\mid& \\
\hline
\underline{\,\,\,}& & & & \\
\underline{\,\,\,}& & & & \\
\underline{\,\,\,}& & & & \\
& & & & \\\hline
\end{array}
$$
(Sorry for the horrible latex hack, but \multicolumn didn't work.) So we have $2k-1$ little squares and one big one, for a total of $2k=n$.
For $n$ odd, split one of the squares into four. (This leaves the "corner case" $n=7$, which I leave as an exercise for the interested reader :) |
Proving Laplace expansion using exterior algebra | $\newcommand{\bw}{\bigwedge}
\newcommand{\w}{\wedge}$ There is a simple proof (although tedious if we write down all the computations, which is what I did), but for that we need to interpret the quantity $\det(A_{ij})$ in terms of maps.
My notations will be the following : $\iota_k : R^{n-1}\to R^n$ is the map that sends basis vectors to basis vectors in linear order, but not touching the $k$th one in $R^n$, $p_k : R^n\to R^{n-1}$ will be the map that forgets about the $k$th coordinate, $\rho_k = \iota_k\circ p_k : R^n\to R^n$ is the map that sets the $k$th coordinate to $0$, and finally $\pi_j : R^n \to R^n$ is projection onto the $j$th coordinate (that is, it's $id_{R^n} - \rho_j$)
Then, identifying matrices with linear maps, we have the following commutative square :
$\require{AMScd} \begin{CD}R^n @>A>> R^n \\
@A\iota_jAA @Vp_iVV\\
R^{n-1} @>A_{ij}>>R^{n-1}\end{CD}$
This will be important later, because it lets us interpret $\det(A_{ij})$ : indeed take $\bigwedge^{n-1}$ of this diagram and you get :
$\require{AMScd} \begin{CD}\bigwedge^{n-1}R^n @>\bigwedge^{n-1}A>> \bigwedge^{n-1}R^n \\
@A\bigwedge^{n-1}\iota_jAA @V\bigwedge^{n-1}p_iVV\\
\bigwedge^{n-1}R^{n-1} @>\det(A_{ij})>>\bigwedge^{n-1}R^{n-1}\end{CD}$
Right, now let $e_1,...,e_n$ denote the standard basis of $R^n$ (I will let $b_1,...,b_{n-1}$ denote the one of $R^{n-1}$), and take any $i$; we have
$\bigwedge^n\phi(e_1\wedge ... \wedge e_n) = (-1)^{i-1} \bw^n\phi (e_i \w e_1 \w... \w \hat{e_i} \w ... \w e_n) = (-1)^{i-1} \phi(e_i) \w \bw^{n-1}\phi(e_1 \w ... \w \hat{e_i} \w ... \w e_n)$
where as usual, $\hat{e_i}$ means we omit $e_i$ from the term. Moreover, $(e_1,..., \hat{e_i}, ..., e_n) = (\iota_i(b_1), ..., \iota_i(b_{n-1}))$, so that
$\bigwedge^n\phi(e_1\wedge ... \wedge e_n) = (-1)^{i-1} \phi(e_i) \w \bw^{n-1}(\phi\circ \iota_i)(b_1\w...\w b_{n-1})$
Now, note that $\phi(e_i) = \sum_k a_{ki} e_k$, so that
$\bigwedge^n\phi(e_1\wedge ... \wedge e_n) = \sum_{k} (-1)^{i-1} a_{ki}\, e_k\w \bw^{n-1}(\phi \circ \iota_i)(b_1\w...\w b_{n-1})$
Fix $k$ and decompose $\phi\circ\iota_i = \rho_k\circ\phi\circ\iota_i + \pi_k\circ\phi\circ\iota_i$ (any element of $R^n$ is the sum of these two projections). Expanding $\bw^{n-1}(\phi\circ\iota_i)(b_1\w...\w b_{n-1})$ multilinearly in this decomposition, every term containing at least one factor of the form $(\pi_k\circ\phi\circ\iota_i)(b_l)$ is collinear with $e_k$ in that slot, hence is killed when wedged with the leading $e_k$; the only surviving term is $\bw^{n-1}(\rho_k\circ\phi\circ\iota_i)(b_1\w...\w b_{n-1})$.
Therefore our sum simplifies to
$\bigwedge^n\phi(e_1\wedge ... \wedge e_n) = \sum_k (-1)^{i-1} a_{ki}e_k \w \bw^{n-1}(\rho_k\circ \phi \circ \iota_i) (b_1\w...\w b_{n-1})$
We're almost there : $\rho_k = \iota_k\circ p_k$ as we mentioned earlier, so that
$\bigwedge^n\phi(e_1\wedge ... \wedge e_n) = \sum_k (-1)^{i-1} a_{ki}e_k \w \bw^{n-1}\iota_k \circ \bw^{n-1}(p_k \circ \phi \circ \iota_i)(b_1\w...\w b_{n-1})$
Our interpretation above yields that $\bw^{n-1}(p_k \circ \phi \circ \iota_i)(b_1\w...\w b_{n-1}) = \det(A_{ki}) b_1\w...\w b_{n-1}$; and $\bw^{n-1}\iota_k (b_1\w...\w b_{n-1}) = e_1\w...\w \hat{e_k} \w ... \w e_n$ so that
$\bigwedge^n\phi(e_1\wedge ... \wedge e_n) = \sum_k (-1)^{i-1} a_{ki}\det(A_{ki}) e_k \w e_1\w...\w \hat{e_k} \w ... \w e_n = \sum_k (-1)^{i-1} a_{ki}\det(A_{ki}) (-1)^{k-1} e_1 \w... \w e_n$
All in all :
$$\bigwedge^n\phi(e_1\wedge ... \wedge e_n) = \sum_k (-1)^{i+k} a_{ki} \det(A_{ki}) e_1\w ... \w e_n$$
and in particular, $\det(A) = \sum_k (-1)^{i+k} a_{ki} \det(A_{ki})$
What worries me a tad is that I get $ji$ instead of your $ij$. Now of course this isn't fundamentally a problem, as $\det(A) = \det(A^T)$, so we do get your formula in the end, but that's not the one you get if you just follow through the given proof. Hopefully I didn't mix things up along the way
As a passing (not so useful) comment, you can see how this is an instance of categorification : we have a completely concrete formula in terms of elements of $R$, and we interpret it as saying something about various maps (if you look at it closely, all the equalities I wrote can be interpreted as saying something about maps) instead of elements, and then the computations become more straightforward - to get back the concrete thing you decategorify at the end. |
Laplace's equation for a semi infinite strip | You want homogeneous conditions $u(0,y)=0=u(4,y)$ instead, which can be achieved by subtracting $T$ from $u$ in order to obtain a new problem for $v=u-T$:
$$
v_{xx}+v_{yy} = 0, \;\; v(0,y)=0=v(4,y),\;\; v(x,0)=-T.
$$
Now when you separate variables $v(x,y)=X(x)Y(y)$, you obtain
$$
-\frac{X''}{X} = \lambda = \frac{Y''}{Y}.
$$
The negative can go in either place, but I know how it works out, so I chose the above so that $\lambda$ is positive instead of negative. Then $X$ must satisfy
$$
X''+\lambda X = 0,\;\; X(0)=X(4)=0.
$$
The solutions are
$$
\lambda_n=\frac{n^2\pi^2}{16},\;\; X_n(x)=\sin\left(\frac{n\pi}{4}x\right)
$$
The corresponding solutions in $Y$ are
$$
Y_n = A_n\exp\left(-\frac{n\pi}{4}y\right).
$$
(I've excluded solutions with positive exponent in order to have a bounded solution.) The constants $A_n$ must be chosen so that
$$
v(x,y) = \sum_{n=1}^{\infty}A_n\sin\left(\frac{n\pi}{4}x\right)\exp\left(-\frac{n\pi}{4}y\right),\;\;\; v(x,0)=-T. \\
\implies \sum_{n=1}^{\infty}A_n\sin\left(\frac{n\pi}{4}x\right)=-T \\
$$
Multiplying both sides by $\sin\left(\frac{n\pi}{4}x\right)$, integrating in $x$ over $[0,4]$, and using the orthogonality of the eigenfunctions determines the constants $A_n$ by the equations
$$
A_n\int_{0}^{4}\sin^{2}\left(\frac{n\pi}{4}x\right)dx=-T\int_{0}^{4}\sin\left(\frac{n\pi}{4}x\right)dx.
$$
The solution of the original problem is
$$
u(x,y) = T+v(x,y) = T+\sum_{n=1}^{\infty}A_n\sin\left(\frac{n\pi}{4}x\right)\exp\left(-\frac{n\pi}{4}y\right).
$$ |
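Carrying out the two integrals gives $A_n=-\frac{2T(1-\cos n\pi)}{n\pi}$, i.e. $A_n=-\frac{4T}{n\pi}$ for odd $n$ and $A_n=0$ for even $n$. A short numerical sketch of the resulting series (Python assumed, not part of the original answer):

```python
import numpy as np

def u(x, y, T=1.0, nmax=400):
    n = np.arange(1, nmax + 1)
    A = -2*T*(1 - np.cos(n*np.pi))/(n*np.pi)       # the A_n derived above
    return T + np.sum(A*np.sin(n*np.pi*x/4)*np.exp(-n*np.pi*y/4))

print(u(2.0, 0.0))   # ~ 0  (bottom edge, up to truncation/Gibbs effects)
print(u(0.0, 1.0))   # = T exactly on the side walls
print(u(2.0, 5.0))   # ~ T deep inside the strip
```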
Limit of Quotient of two functions changed by constant amount | Following this, we want to check that for unbounded functions with $f(x),g(x)\to \pm \infty$ as $x\to a$ it holds that
$$
\limsup_{x\to a} \frac{f(x)}{g(x)} = \limsup_{x\to a} \frac{f(x)}{g(x) + c}
$$
It seems that we need to assume further that both $f,g$ either go to plus infinity or both go to minus infinity at the same time, so that $\frac f g$ stays positive in a neighborhood of $a$. If $\limsup_{x\to a} \frac{f(x)}{g(x)}=\infty$, the reasoning which follows should still hold.
First we do a simple modification
$$
\limsup_{x\to a} \frac{f(x)}{g(x) + c}=\limsup_{x\to a}\left( \frac{f(x)}{g(x)}\frac 1 {1+\frac c {g(x)}}\right )
$$
which is certainly true since $g(x)\neq 0$ in a neighborhood of $a$ because of our unbounded assumption.
Then we conclude, using the following property of $\limsup$ (valid here since both factors are eventually positive) at a limit point $x_0$,
$$
\limsup_{x\to x_0}\left(f(x)g(x)\right )\leq \limsup_{x\to x_0}f(x)\limsup_{x\to x_0}g(x)
$$
that indeed it holds
\begin{align}
\limsup_{x\to a} \frac{f(x)}{g(x) + c}=\limsup_{x\to a}\left( \frac{f(x)}{g(x)}\frac 1 {1+\frac c {g(x)}}\right )&\leq\limsup_{x\to a} \frac{f(x)}{g(x)}\limsup_{x\to a}\frac 1 {1+\frac c {g(x)}}\tag 1 \\
&=\limsup_{x\to a}\frac{f(x)}{g(x)}
\end{align}
because
$$
\limsup_{x\to a}\frac 1 {1+\frac c {g(x)}}=\liminf_{x\to a}\frac 1 {1+\frac c {g(x)}}=\lim_{x\to a}\frac 1 {1+\frac c {g(x)}}=1
$$
The other direction also holds, since
\begin{align}
\limsup_{x\to a} \frac{f(x)}{g(x)}=\limsup_{x\to a} \frac{f(x)}{g(x)}\liminf_{x\to a}\frac 1 {1+\frac c {g(x)}}\leq&\limsup_{x\to a}\left( \frac{f(x)}{g(x)}\frac 1 {1+\frac c {g(x)}}\right ) \tag 2\\=&\limsup_{x\to a} \frac{f(x)}{g(x) + c}
\end{align}
Eventually because of the inequalities $(1)$ and $(2)$ the equality holds.
For further references one could check the linked references in the first given link from above. The case where $f/g$ is negative needs to get some further attention - it's possible that there exist counterexamples. |
contour integration $\int_0^\infty \frac{dx}{x^p(x^2+2x\cos{\phi}+1)}$ | The basic procedure is very simple. Suppose we are using the standard keyhole contour of radius $R$ and with the branch cut of the logarithm along the positive real axis and its argument between $0$ and $2\pi.$
Let $$f(x) = \frac{e^{-p \log x}}{x^2+ 2x\cos\phi+1}$$ and let $I$ be the integral we are looking for. Writing
$$ I = \int_0^\infty f(x) dx = \int_0^\infty \frac{e^{-p \log x}}{x^2+ 2x\cos\phi+1} dx$$
and integrating counterclockwise we see that the segment just above the real axis goes to $I,$ and the one below to $-I e^{-2\pi i p}.$
Now note that the only additional two poles are at
$$ \rho_{0,1} = -\cos\phi \pm \sqrt{\cos^2\phi -1} =
-\cos\phi \pm i\sin\phi = - e^{\mp i\phi} = e^{\pi i} e^{\mp i\phi}.$$
It follows by the Cauchy Residue Theorem that
$$ I \left(1- e^{-2\pi i p} \right) = 2\pi i
\left(\operatorname{Res}_{x=\rho_0} f(x) + \operatorname{Res}_{x=\rho_1} f(x)\right).$$
By definition we have
$$ \operatorname{Res}_{x=\rho_0} f(x) =
\lim_{x\to\rho_0} \frac{x^{-p}}{x-\rho_1} =
\frac{e^{-\pi i p} e^{p i\phi}}{2i\sin\phi} $$
and
$$ \operatorname{Res}_{x=\rho_1} f(x) =
\lim_{x\to\rho_1} \frac{x^{-p}}{x-\rho_0} =
-\frac{e^{-\pi i p} e^{-p i\phi}}{2i\sin\phi} $$
Putting it all together, we find
$$I \left(1 - e^{-2\pi i p} \right) = 2\pi i \,
e^{-\pi i p} \frac{e^{p i\phi} - e^{-p i\phi}}{2i\sin\phi} =
2\pi i \, e^{-\pi i p} \frac{\sin (p\phi)}{\sin\phi}$$
or
$$ I =
\frac{2\pi i \, e^{-\pi i p}}{1 - e^{-2\pi i p}} \frac{\sin (p\phi)}{\sin\phi} =
\frac{2\pi i}{e^{\pi i p} - e^{-\pi i p}} \frac{\sin (p\phi)}{\sin\phi} =
\frac{\pi}{\sin(\pi p)} \frac{\sin (p\phi)}{\sin\phi}.$$
It remains to verify that the integral along the outer circle of radius $R$ disappears as $R$ goes to infinity. But $f(x)$ is $O(1/R^{2+p})$ so the integral is $O(1/R^{1+p})$, which disappears as claimed. (Similarly, the contribution of the small circle around the origin is $O(\epsilon^{1-p})$, which vanishes as $\epsilon\to0$ since $p<1$.) |
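A quick numerical sanity check of the closed form (a sketch assuming SciPy; the parameters are chosen arbitrarily with $0<p<1$):

```python
import numpy as np
from scipy.integrate import quad

p, phi = 0.5, np.pi/3
f = lambda x: x**(-p) / (x**2 + 2*x*np.cos(phi) + 1)
numeric = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]  # split at 1 for the x^(-p) singularity
closed = np.pi/np.sin(np.pi*p) * np.sin(p*phi)/np.sin(phi)
print(numeric, closed)  # both ~ 1.8138
```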
Every ideal of the localization is an extended ideal | Given $\mathfrak{b}\lhd R_S$, you want to show that $\mathfrak{b}=\mathfrak{b}^{ce}$, where $\mathfrak{b}^c$ is the contraction of $\mathfrak{b}$ to $R$. Clearly $\mathfrak{b}^{ce}\subset\mathfrak{b}$. For the reverse inclusion
$$
\frac{r}{s}\in\mathfrak{b}\Rightarrow\frac{r}{1}=\frac{s}{1}\frac{r}{s}\in\mathfrak{b}\Rightarrow r\in\mathfrak{b}^c\Rightarrow\frac{r}{s}=\frac{1}{s}\frac{r}{1}\in(\mathfrak{b}^c)^e
$$ |
Analysis of $\sum_{k=1}^{\infty}\frac{x^2}{(1+x^2)^k}$ | Hints:
$$1.-\;\;\;\;\forall\;x\,,\;\;0<a\le |x|\implies\frac1{(1+x^2)^k}\le\frac1{(1+a^2)^k}\;\;\text{and apply the Weierstrass $M$-test}$$
$$2.-\;\;\;\;\text{Use what you say you proved and check what happens for}\;\;0\in[-a,a]\ldots\ldots$$ |
Order of Operations in Calculator | I imagine the problem comes from the fact that your calculator evaluates the following as so:
$$
(-3)^2 = 9 \\
-3^2 = -9
$$
Remember that $\sqrt{12 - 3x^2}$ is only going to have real values when $12 - 3x^2 \geq 0$, that is, for $x \in [-2, 2]$. |
Estimate the Vapnik Chervonenkis dimension | It sounds like I was missing some dumb details: the function $O$ refers to the big $O$ notation. |
How to either prove or disprove if it is possible to arrange a series of numbers such the sum of any two adjacent number adds up to a prime number | Not an answer but a comment on the problem:
I think the enumeration of such rearranged sequences is a graph theory question. Consider the undirected graph $G_n$ whose nodes are the numbers from $1$ to $n$. Draw an edge between $a$ and $b$ if $a+b$ is prime. Then by Bertrand's postulate you get a connected graph.
Arrangements of the numbers such that the sum of any two consecutive numbers is prime correspond to Hamiltonian paths of $G_n$. An example for $n=8$:
Note that the graph is bipartite. This gives you methods to find prime arrangements in $O(1.5^n)$ rather than $O(n!)$ as required for the brute force approach.
However I doubt that it really helps to prove for which numbers such an arrangement exists. |
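For what it's worth, here is a small backtracking sketch (Python with SymPy's `isprime` assumed) that searches for a Hamiltonian path of $G_n$ starting at $1$; it makes no use of bipartiteness, so it is the brute-force approach, fine for small $n$:

```python
from sympy import isprime

def prime_arrangement(n):
    def extend(path, rest):
        if not rest:
            return path
        for v in sorted(rest):
            if isprime(path[-1] + v):
                result = extend(path + [v], rest - {v})
                if result:
                    return result
        return None
    # only searches paths starting at 1, which already succeeds here
    return extend([1], set(range(2, n + 1)))

print(prime_arrangement(8))  # [1, 2, 3, 4, 7, 6, 5, 8]
```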
Evaluate limit $x^2$ and $\sqrt{1+2/x}$ | Expand the square root in a series as $t\rightarrow0$:
$$\sqrt{1+t} = 1 + \frac{1}{2}t - \frac{1}{8}t^2 + o(t^2)$$
where $o(t^2)$ collects insignificant terms that tend to $0$ faster than $t^2$.
In your case $\frac{1}{x} \rightarrow 0$ and also $\frac{2}{x} \rightarrow 0$ so
$$\lim_{x\to \infty}x^2\left(\sqrt{1+\frac 2x}+1-2\sqrt{1+\frac 1x}\right) = $$
$$\lim_{x\to \infty} x^2 \left( \left( 1 + \frac{1}{2}\frac{2}{x} - \frac{1}{8}\frac{4}{x^2} + o(x^{-2}) \right) + 1 - 2\left( 1 + \frac{1}{2}\frac{1}{x} - \frac{1}{8}\frac{1}{x^2} + o(x^{-2}) \right) \right) = $$
$$\lim_{x\to \infty} x^2 \left( -\frac{1}{4x^2} + o(x^{-2})\right) = -\frac{1}{4}$$
The idea of this method is that you approximate the original function near a point $x_0$ using only simple polynomials.
The same idea is used when you expand in Taylor series. |
Convergence in probability and expectation | This is not true. Consider $[-1,1]$ with the measure $P(A)=\frac {\lambda(A)} 2$ where $\lambda$ is Lebesgue measure. Let $X_n(x)=n^{2} xI_{(-\frac 1 n, \frac 1 n)}(x)$, and $X\equiv0$. Then $EX_n =EX=0$ for all $n$ and $X_n \to X$ almost surely, hence in probability. But $E|X_n|=\frac12$ for all $n$. |
Transversality: what is wrong with this counter example to persistence for small perturbations? | If an intersection of $M$ and $N$ is non-transversal, that means that you cannot be sure that the intersection persists under small perturbation. In your example, it's clear that the intersection will not disappear. However, if you would have chosen $N$ to be the graph of $g_\epsilon(x) = x^2 + \epsilon$, then it's clear that the intersection at $x=0$ will disappear for $\epsilon > 0$.
In the end, it's a matter of logic: $P$ (an intersection is transversal) $\to$ $Q$ (this intersection persists under small perturbations) is equivalent to the contrapositive $\neg Q \to \neg P$, but not equivalent to the inverse $\neg P \to \neg Q$. |
Proving the result of finding the number of ways in which $N$ can be resolved as a product of two factors | First, consider the case where $N$ is not a perfect square.
Given a divisor $p$ of the number $N$, the quotient ${N\over p}$ is also a divisor of $N$, and $N = p \times {N\over p}$.
Thus every divisor of $N$ corresponds to a way of expressing $N$ as a product of two numbers: if $p$ is a divisor of $N$, then $$p \mapsto (p, {N\over p}).$$ Also see that both $p$ and ${N\over p}$ correspond to the same way of expressing $N$ as a product of two factors; to be more precise, ${N\over p} \mapsto (p, {N\over p})$ as well.
Now, if $N$ is not a perfect square, then $p$ and ${N\over p}$ are never equal. Therefore every divisor corresponds to one way but every way corresponds to two divisors. Hence the factor of one half.
Also, if $N$ is a perfect square, then the product $(\sqrt{N}, \sqrt{N})$ corresponds to only one divisor. So to count the ways, first consider the ways other than $(\sqrt{N}, \sqrt{N})$: each of these corresponds to two divisors, so twice their number gives all divisors of $N$ other than $\sqrt{N}$. Now add one for the way $(\sqrt{N}, \sqrt{N})$. |
Image of Exact Sequence under surjective maps | No.
Let $R=\Bbb Z$, $A_1=A_2=\Bbb Z$ and $A_n= 0$ otherwise, $A_1\to A_2$ the identity.
Let $B_1=\Bbb Z$ and $B_n=0$ otherwise. Then we have obvious epimorphisms $A_n\to B_n$, but $B_\bullet $ is not exact. |
Prove that $H \leq Z(G)$ if and only if $[H,G] = \{1\}$. | Suppose $x\in G$ and $[x,G]=\{1\}$, in the sense that $[x,g]=1$ for every $g\in G$. Then $x\in Z(G)$, because $xgx^{-1}g^{-1}=1$ is equivalent to $xg=gx$.
Actually, this proves both directions at once:
$H\subseteq Z(G)$ if and only if, for every $x\in H$, $[x,G]=\{1\}$
There is no need to assume $H$ is a subgroup, but usually the notation $H\le K$ for subsets of a group implicitly assumes $H$ and $K$ are subgroups, even if it is not specified at the outset. |
Show that $\lim_{n\to\infty}\mathcal I\left(\exp\left(\frac{2\pi\cdot i}{\log_n(p_n\#)}\right)\right)=0$ | First, we can rewrite $f(n)$ as
$$f(n)=\sin\left(\frac{2\pi \log n}{\log p_n \#}\right).$$
We can use the fact that $p_n\# =e^{(1+o(1))n\log n}$ to get
$$f(n)=\sin\left(\frac{2\pi}{(1+o(1))n}\right),$$
which, of course, goes to zero as $n$ goes to infinity. |
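A quick numerical illustration (assuming SymPy's `primorial`, which returns the product of the first $n$ primes):

```python
from math import log, pi, sin
from sympy import primorial

for n in (10, 100, 1000):
    print(n, sin(2*pi*log(n)/log(primorial(n))))  # slowly tends to 0
```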
Shift of summation variable in sum over binomial coefficients. | The factor $$\binom{\lambda + \beta - \mu}{\sigma}$$ is $0$ for $\beta < \sigma + \mu - \lambda$, so the sum is the same, whether you start at $0$ or at $\sigma + \mu - \lambda$. Taking $0$ leads to simpler typography. |
Let $X \in [a,b]$ satisfy $\mathbb{E}[\exp(-X)]=1$. Prove $\mathbb{E}[X] \le \frac18(b-a)^2$. | Directly applying the cited lemma, we find that $$\mathbb E(\exp(-X))=1\le\exp\left(-\mathbb E(X)+\frac 1 8 (-1)^2(b-a)^2\right)$$
Take the log of both sides
$$0\le-\mathbb E(X)+\frac 18(b-a)^2$$
Add $\mathbb E(X)$ to both sides.
Where you combined the conjecture with Hoeffding's Lemma, I think you cannot conclude $\exp(t\mathbb E(X))\le\exp\left(t(\frac18 (b-a)^2)\right)$ because $t\in\mathbb R$, so it does not hold if $t<0$. |
Matrix, Null space, Col space | You are correct that the statement is false, but the counterexample you give does not work. Instead, consider the matrix
$$
A = \pmatrix{0&1\\0&0}.
$$ |
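Checking the counterexample directly (a SymPy sketch, not part of the original answer):

```python
from sympy import Matrix

A = Matrix([[0, 1],
            [0, 0]])
print(A.nullspace())    # [Matrix([[1], [0]])]
print(A.columnspace())  # [Matrix([[1], [0]])] -- the same line
```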
Fourier transform of Integro-differential equation | $$
is\hat{f}+\hat{f}=\frac{1}{\sqrt{2\pi}}+\sqrt{2\pi}\hat{f}(s)\hat{g}(s) \\
\hat{f}=\frac{1}{\sqrt{2\pi}}\frac{1}{1+is-\sqrt{2\pi}\hat{g}(s)}
$$ |
Hermitian positive definite matrix | Hint:
1) Consider continuous function
$$
f:S^{n-1}\to\mathbb{R}:x\mapsto (Ax)\cdot\overline{x}
$$
defined on the compact space $S^{n-1}$. It achieves its maximum at some vector $x_0\in S^{n-1}$.
2) Use the following identity
$$
\frac{(Ax)\cdot\overline{x}}{|x|^2}=f\left(\frac{x}{|x|}\right)
$$ |
Optimization of Linear Term Under $ {L}_{1} $, $ {L}_{2} $ and $ {L}_{\infty} $ Inequality Norm Constraints | You wish to minimize $f(x)=\sum_{i=1}^n c_ix_i$ under the condition that
$$\left\{\begin{array}{cccc}
\sum_{i=1}^n |x_i| \leq 1&&&p=1\\
\sum_{i=1}^n x_i^2 \leq 1&&&p=2\\
\max |x_i| \leq 1&&&p=\infty\\
\end{array}\right.
$$
Notice that $\nabla f(x) = c$ for all $x\in \mathbb R^n$, so a minimum must be on the boundary of the domain.
In other words, we need only consider equalities above.
The case $p=2$ is easily dealt with Lagrange multipliers, and we can see that $x=-c/{\lVert c \rVert}_2$.
Another way to see this is via good ol' Cauchy-Schwarz:
$$\left|c^Tx\right| = \left|\langle c, x \rangle\right| \leq {\lVert c \rVert}_2\,{\lVert x \rVert}_2$$
Hence, the best you could possibly do for $p=2$ is $c^Tx = -{\lVert c \rVert}_2\,{\lVert x \rVert}_2 = -{\lVert c \rVert}_2$, where the first equality is only possible if $x$ is a scalar multiple of $c$ and the second equality follows from ${\lVert x \rVert}_2 = 1$.
Therefore, $x=-c/{\lVert c \rVert}_2$.
For the other cases, these boundaries are no longer smooth: they have corners.
Nonetheless, a Lagrange multiplier mentality should still help.
Anytime we're at the boundary, if we're able to move along the boundary in some direction given by $-\nabla f = -c$ (ie, in some direction given by any of its components), we will be sensing a decrease in the value of $f$.
In this manner, we'll either arrive at some $k$-dimensional 'wall' $(0<k<n)$, or at a corner $($in a sense, a '$0$-dimensional wall'$)$ of the boundary.
However, if we arrive at a wall, that's because the wall is perpendicular to $-c$.
Hence, moving along the wall is okay: $f$ does not decrease along the wall, but it does not increase either.
Since every wall contains a corner, we may therefore assume without loss of generality that we have arrived at a corner.
It follows that for $p=1,\infty$ you need only check the corners of your domain.
For $p=1$, the corners are the vectors $\pm e_i$, where $e_i$ is the vector whose $i$-th coordinate is $1$ and all others are $0$.
In this case, $c^T(\pm e_i) = \pm c_i$, so $x$ is chosen so that $i$ maximizes $|c_i|$ and the sign is chosen so that the result is negative, i.e.
$$x = -\text{sgn}\left(c_{\text{argmax}_i\,|c_i|}\right)\cdot e_{\text{argmax}_i\,|c_i|}$$
For $p=\infty$, the corners are the vectors $(\pm 1, \pm 1, \dots, \pm 1)$.
In this case, it's easy to see that
$$x = \Big(-\text{sgn}(c_1), -\text{sgn}(c_2), \dots, -\text{sgn}(c_n)\Big)$$ |
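Collecting the three closed forms (a minimal NumPy sketch; the function name is mine):

```python
import numpy as np

def argmin_linear(c, p):
    """Minimizer of c @ x over the unit p-norm ball, per the cases above."""
    c = np.asarray(c, dtype=float)
    if p == 2:
        return -c / np.linalg.norm(c)
    if p == 1:
        i = np.argmax(np.abs(c))        # best corner is +-e_i for the largest |c_i|
        x = np.zeros_like(c)
        x[i] = -np.sign(c[i])
        return x
    if p == np.inf:
        return -np.sign(c)              # a corner of the cube
    raise ValueError("p must be 1, 2 or np.inf")

c = np.array([3.0, -1.0, 2.0])
for p in (1, 2, np.inf):
    x = argmin_linear(c, p)
    print(p, x, c @ x)
```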
Mean Value related problem. | $$
f(a+h)-f(a)=f'(a)h+\frac{1}{2}f''(a)h^2+o(h^2)
$$
and
$$
f'(a+h\theta) = f'(a)+f''(a)h\theta + o(h\theta).
$$
In the last equation I just used the definition of $f''(a)$.
Hence
$$
f'(a)h+\frac{1}{2}f''(a)h^2 + o(h^2)=f'(a)h+f''(a)\theta h^2 + o(h^2\theta).
$$
Dividing by $h^2$ we deduce
$$
\frac{1}{2}f''(a) = f''(a) \theta +o(1).
$$
If $f''(a) \neq 0$, then $\theta \to 1/2$. |
Is this simple proof that the Frobenius endomorphism of an elliptic curve defined over $\mathbb F_q$ is surjective valid? | What about the curve $X+Y=0$ defined over $k = \Bbb F_2(Z)$? The point $(Z,Z)$ of the curve is not in the image of $\phi$ even though the ideals of the curve and of its image by $\phi$ are the same. |
Why is $\mathbb{Q}$ not semisimple as a $\mathbb{Z}$ module? | Hint: It is easy to come up with nonzero elements $x \in \bigoplus\mathbb{Z}/p_i\mathbb{Z}$ such that
$$x+x+...+x=0$$ |
Are the following graph isomorphic? | You have a few problems. First, you haven't given the full map between the vertex sets.
Second, you've only argued that this map isn't an isomorphism. You haven't clearly ruled out the possibility that some other map is an isomorphism. I think you're trying to argue that the partial map you've shown is perfectly general, but you haven't stated anything to that effect.
Typically, showing that two graphs are not isomorphic involves showing some invariant differs between them. Common invariants are degree sequence, diameter, connectivity, chromatic number, and Hamiltonicity.
As a hint, notice that every edge in the left graph is contained in two otherwise disjoint 4-cycles. |
Rolle's Theorem in reverse | You are nearly there: $f(0)=0=f(1)$. So just apply Rolle to $f$. |
Show that a connected graph on $n$ vertices is a tree if and only if it has $n-1$ edges. | The proofs are correct. Here's an alternative proof that a connected graph with $n$ vertices and $n-1$ edges must be a tree, modified from yours but without having to rely on the first derivation:
Let $G$ be a connected graph on $n$ vertices with $n-1$ edges. Suppose $G$ is not a tree. Then there exists at least one cycle in $G$. Remove one of the edges within a cycle. This leaves a connected graph on $n$ vertices with $n-2$ edges, which is impossible, as a connected graph on $n$ vertices must have at least $n-1$ edges. |
Find limits of integration for the interior region of sphere with center $(a,0,0)$ and radius $a$ using spherical coordinates | You have a few choices.
rectangular:
$$(x-a)^2 + y^2 + z^2 = a^2\\
x^2 + y^2 + z^2 = 2ax$$
Spherical... since x is the "special one", I would suggest.
$$x = r \cos(\phi)\\
y = r \sin(\theta) \sin(\phi)\\
z = r \cos(\theta) \sin(\phi)$$
Plug these into your equation for the sphere and,
$r^2 = 2a\,r\cos\phi $
$r$ will range from $0$ to $2a\cos\phi$, $\theta$ from $0$ to $2\pi$, and $\phi$ from $0$ to $\pi/2$
If you went with the traditional.
$$x =r \cos(\theta) \sin(\phi)\\
y = r \sin(\theta) \sin(\phi)\\
z = r \cos(\phi)$$
Then $r$ will range from $0$ to $2a\cos\theta\sin\phi$, and $\theta$ from $-\pi/2$ to $\pi/2$
how about...Taking the traditional and translating it.
$$x =r \cos(\theta) \sin(\phi) + a\\
y = r \sin(\theta) \sin(\phi)\\
z = r \cos(\phi)$$
And $r$ goes from $0$ to $a.$
Cylindrical.
$$x = x+a\\
y = r \sin(\theta)\\
z = r \cos(\theta)$$
$x$ from $-\sqrt {a^2-r^2}$ to $\sqrt {a^2-r^2}$
or,
$$x = x\\
y = r \sin(\theta)\\
z = r \cos(\theta)$$
$x$ from $a-\sqrt {a^2-r^2}$ to $a+\sqrt {a^2-r^2}$
etc. |
Probability of passing an exam . | Your book's solution tells you that the probability of failing both exams is one minus the probability of passing both exams. No, that is the probability of failing in at least one exam.
You gave the correct answer for the probability of failing in both exams. Is this the question that was asked?
Remark: Of course, both answers assume that the events that the student passes each of the exams are independent of each other, contrary to the purpose of testing. |
Solving simultaneous equations involving a quadratic | $$x^2 + y^2 = 25 $$
$$2x - y = 5 \Leftrightarrow y=2x-5$$
$$ x^2 + (2x-5)^2 = 25 $$
$$ x^2 +4x^2 -20x +25 = 25 $$
$$ 5x^2 -20x=0 $$
$$ 5x(x-4)=0 $$
so $x=0$ or $x=4$, and correspondingly $y=2x-5$ gives $y=-5$ or $y=3$. |
Prove that $\measuredangle AQP+\measuredangle NAP=90^o$ | The following works only for the particular case when the circle $PHNQ$ passes through $A$ too.
$\alpha + \gamma = \beta + \gamma$ (angles in the same segment)
$= \delta$ (exterior angles cyclic quad)
$= 90^\circ$!
figure-1
For the general case, I have to give up after a long period of trial. However, I would like to share some of the interesting findings that might be useful for those who are interested.
(1) Some colored pairs of lines are parallel. XY should be parallel to PB’ too but that remains to be proven.
(2) PP’A’M’ is cyclic.
(3) If Point T is the perpendicular point of XTY and APTP’, then the question is solved. |
Inequality $\sum\limits_{k=1}^n \frac{1}{n+k} \le \frac{3}{4}$ | By C-S
$$\sum_{k=1}^n\frac{1}{n+k}=1-\sum_{k=1}^n\left(\frac{1}{n}-\frac{1}{n+k}\right)=1-\frac{1}{n}\sum_{k=1}^n\frac{k}{n+k}=$$
$$=1-\frac{1}{n}\sum_{k=1}^n\frac{k^2}{nk+k^2}\leq1-\frac{\left(\sum\limits_{k=1}^nk\right)^2}{n\sum\limits_{k=1}^n(nk+k^2)}=1-\frac{\frac{n(n+1)^2}{4}}{n\cdot\frac{n(n+1)}{2}+\frac{n(n+1)(2n+1)}{6}}=$$
$$=\frac{7n-1}{2(5n+1)}<0.7\leq\frac{3}{4}.$$
Done!
I think it's interesting that $\ln2=0.6931...$.
C-S forever!!! |
Proving a complex number is differentiable using the limit definition | Not like that. Instead:
$$
0\le\left|\frac{xy^3-0}{x+iy-0}\right|=\frac{|xy^3|}{\sqrt{x^2+y^2}}\le(x^2+y^2)^{3/2},
$$
since $|x|\le\sqrt{x^2+y^2}$ and $|y|\le\sqrt{x^2+y^2}$. The right-hand side $(x^2+y^2)^{3/2}$ converges to zero when $x+iy\to0$. So $f$ is differentiable at $0$ and $f'(0)=0$. |
Parameterization of a rhombus | If one uses the unit step function
\begin{equation}
u(t)=\begin{cases}0\text{ for }t<0\\1\text{ for }t\ge0\end{cases}
\end{equation}
then
\begin{aligned}
x(t) &= 1-t+2(t-2)u(t-2) \\
y(t) &= t+2(1-t)u(t-1)+2(t-3)u(t-3)
\end{aligned}
for $0 \leq t \leq 4$.
The only advantage with this version is constant speed.
Desmos animation of constant speed rhombus |
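A direct transcription (NumPy assumed; `np.heaviside(s, 1)` matches the convention $u(0)=1$ above):

```python
import numpy as np

def rhombus(t):
    u = lambda s: np.heaviside(s, 1.0)   # unit step with u(0) = 1
    x = 1 - t + 2*(t - 2)*u(t - 2)
    y = t + 2*(1 - t)*u(t - 1) + 2*(t - 3)*u(t - 3)
    return x, y

for t in np.linspace(0, 4, 9):
    print(t, rhombus(t))  # traces (1,0) -> (0,1) -> (-1,0) -> (0,-1) -> (1,0)
```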
Prove that $2^{n} s$ can be written as the sum of perfect squares, where $s$ is the sum of perfect squares | Hint: If the numbers $x$ and $y$ can each be written as the sum of 2 perfect squares, then so can the number $xy$. This is known as the Brahmagupta-Fibonacci Identity.
You can either prove this statement yourself, or click on the Wikipedia link. |
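For reference, the identity reads
$$(a^2+b^2)(c^2+d^2) = (ac-bd)^2 + (ad+bc)^2.$$
Applying it with $c=d=1$ (so $c^2+d^2=2$) shows that $2s$ is again a sum of two perfect squares, and induction on $n$ then handles $2^n s$.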
Find all limit points for the set $A=\lbrace (\frac{1}{n},\frac{1}{m}):n,m\geq 1\rbrace$. | As you've worked out, if you fix either $n$ or $m$ in $\{(1/n, 1/m):n,m≥1\}$, you get a limit point where the other variable is zero. As you already know the answer, I'll just give you a concise way of writing it:
\begin{align*}
\{(1/n, 0):n≥1\}\cup\{(0, 1/m):m≥1\}\cup\{(0,0)\}
\end{align*}
EDIT A bit more detail:
For a point $a$ to be a limit point of the set, given any ball of radius $\epsilon$ around the point, you need to be able to find another point $b$ so that it's not equal to $a$ but it's also inside the ball.
Consider any point of the form $(\frac{1}{p}, \frac{1}{q})$ where $p, q \in \mathbb{N}$. Then the "nearest points" are $(\frac{1}{p}, \frac{1}{q-1}), (\frac{1}{p}, \frac{1}{q+1}), (\frac{1}{p-1}, \frac{1}{q}),$ and $(\frac{1}{p+1}, \frac{1}{q})$. However, you now know that the distance between them is some constant in terms of $p$ and $q$. For example, the distance from $(\frac{1}{p}, \frac{1}{q})$ to $(\frac{1}{p}, \frac{1}{q-1})$ is $\frac{1}{q(q-1)}$. By choosing an $\epsilon$ less than the minimum of these four "closest values" it's clear that $(\frac{1}{p}, \frac{1}{q})$ can't be a limit point.
Now consider points of the form $(0, \frac{1}{q})$. Then given any $\epsilon > 0$, if you set $p = \lceil\frac{1}{\epsilon}\rceil+1$, you'll see that $(\frac{1}{p}, \frac{1}{q})$ is within epsilon of $(0, \frac{1}{q})$, and it's also not equal to $(0, \frac{1}{q})$. The same proof works for $(\frac{1}{p}, 0)$ and $(0, 0)$ |
Rewrite an Optimization problem for $\textrm {min } \:\textrm {max} \{f_1, \dots, f_N\}$ | Hint: Try to find a relation between the local optima of your problem and those of the following one:
$$\textrm {min } t $$$$\textrm{s.t. } \: h(x) = 0$$ $$ f_{i}(x) \leq t \quad i=1,2,...,N$$ |
$\sum_{n=1}^{p-1}{\frac{1}{n}} = \frac{A_p}{B_p}$ What is $A_p$ (mod $p^2$) where $\frac{A_p}{B_p}$ is a reduced form fraction? | I eventually found a solution. The trick is to first factor out a $p$, then to show that the remaining expression is still $0$ (mod $p$).
\begin{align*}
\sum_{n=1}^{p-1}{\frac{1}{n}} &=
\sum_{n=1}^{\frac{p-1}{2}}{(\frac{1}{n} + \frac{1}{p-n})} \\ &=
\sum_{n=1}^{\frac{p-1}{2}}{\frac{p}{n(p-n)}}
\end{align*}
After removing $p$ we obtain
\begin{align*}
\sum_{n=1}^{\frac{p-1}{2}}{\frac{1}{n(p - n)}} &\equiv
\sum_{n=1}^{\frac{p-1}{2}}{\frac{1}{n(0 - n)}} &(\text{mod } p) \\ &=
\sum_{n=1}^{\frac{p-1}{2}}{-n^{-2}} \\ &\equiv
-\frac{1}{2}\sum_{n=1}^{p-1}{n^{-2}} &(\text{mod } p)
\end{align*}
Since $(p-n)^{-2}\equiv n^{-2} \pmod p$, the half-range sum is half of the full sum $\sum_{n=1}^{p-1}{n^{-2}}$; and since $\sum_{n=1}^{p-1}{n^{-2}}$ is just a reordering of the terms of $\sum_{n=1}^{p-1}{n^{2}}$ (mod $p$), we can write
\begin{align*}
-\frac{1}{2}\sum_{n=1}^{p-1}{n^{-2}} &\equiv
-\frac{1}{2}\sum_{n=1}^{p-1}{n^{2}} &(\text{mod } p) \\ &=
-\frac{1}{2}\cdot\frac{(p-1)p(2(p-1) + 1)}{6} \\ &=
-\frac{(p-1)p(2p - 1)}{12} \\ &\equiv
0 & (\text{mod } p) \text{ if } p \ne 3
\end{align*} |
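A quick check of the resulting claim $A_p \equiv 0 \pmod{p^2}$ for small primes (a Python sketch; `Fraction` keeps $A_p/B_p$ in lowest terms):

```python
from fractions import Fraction

for p in (5, 7, 11, 13):
    s = sum(Fraction(1, n) for n in range(1, p))  # A_p / B_p in lowest terms
    print(p, s.numerator % p**2)                  # prints 0 for every prime p > 3
```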
Why is it so difficult to find beginner books in Algebraic Geometry? | 1) "Maybe because this area is very recent?"
No, algebraic geometry goes back at least to Descartes and Fermat and is essentially as old as calculus.
2) "Because there aren't many buyers to buy algebraic geometry books?"
No, that's irrelevant.
Authors don't care, since they know very well that any book beyond the level of linear algebra or calculus won't bring in any money.
More surprisingly publishers don't seem to care either: I have at times been called upon to write a book but it was pretty clear that the representatives of publishers weren't too interested in the content but just wanted to add titles to their catalogue.
3) "...he said that the chapter 7 is very hard to understand and has a lot hard calculations and he never fully understood this chapter"
The candidness of this professor is much to be admired and I quite believe that, as you say, he did good research in algebraic geometry.
He pinpointed the heart of the problem: algebraic geometry is indeed extremely difficult to penetrate, precisely because it is such an old subject which people have attacked with such a variety of unrelated tools. For details, look at this answer.
4) A challenge.
There always exists a conic passing through five points in the projective plane.
Given six points in such a plane, what condition must they satisfy in order that a conic passes through all of them?
I'd be interested in the thoughts of a non-professional algebraic geometer on this problem, which illustrates a point I'll make later.
Edit: Answer to the challenge
Consider the six points $p_1,\cdots,p_6$ and the hexagon they determine.
The three pairs of opposing sides intersect in three points $$A_1=(p_1p_2)\cap (p_4p_5), A_2=(p_2p_3)\cap (p_5p_6),A_3=(p_3p_4)\cap (p_6p_1) $$ There exists a conic passing through these six points $p_i$ if and only if the points $A_1,A_2,A_3$ are collinear.
This extremely pretty result is Pascal's theorem (+its converse), which he found in 1640 at the ripe age of 16.
My point is that one couldn't come up with such a solution even after having read and understood the more than 10000 pages of EGA+SGA+Stacks Project.
That algebraic geometry is so vast, ranging from this theorem by Pascal to the moduli stack of curves, explains why writing a comprehensive introductory book on the subject is an essentially impossible mission. |
Derivation of Dirac-Delta with complicated argument $\delta(f(x))$ | So, for $\delta\bigl(f(x)\bigr)$ to be defined, we need that $f$ is smooth on the domain $U$ in consideration. If all roots of $f$ are simple, one has for any $\phi \in \def\D{\mathcal D}\def\R{\mathbb R}\D(U)$:
$$ \def\<#1>{\left\langle#1\right\rangle}\<\phi, \delta \circ f> = \sum_{i: f(x_i) = 0} \frac{\phi(x_i)}{\def\abs#1{\left|#1\right|}\abs{f'(x_i)}}$$
We have
\begin{align*}
\<\phi, x\cdot \partial_x\delta(\sqrt{xy}-z)>
&= -\<\phi + x\phi', \delta(\sqrt{xy} - z)>
\end{align*}
Now $f(x) = \sqrt{xy} - z$ is zero at $x = \frac{z^2}y$ if $z > 0$, and nowhere if $z \le 0$ (here the quantity in question is zero). So suppose $z > 0$, we have
$$ f'(x) = \frac y{2\sqrt{xy}}, \quad f'\left(\frac{z^2}y\right) = \frac y{2z} $$
We continue
\begin{align*}
\<\phi, x\cdot \partial_x\delta(\sqrt{xy}-z)>
&= -\<\phi + x\phi', \delta(\sqrt{xy} - z)>\\
&= -\frac 1{\abs{\frac{y}{2z}}}\biggl(\phi\left(\frac {z^2}y\right) + \frac{z^2}y\cdot\phi' \left( \frac{z^2}y\right)\biggr)\\
&= -\abs{\frac{2z}y}\<\phi, \tau_{z^2/y}\delta> - \abs{\frac{2z}y}\frac{z^2}y\<\phi', \tau_{z^2/y}\delta>\\
&= -\abs{\frac{2z}y}\<\phi, \tau_{z^2/y}\delta> + \frac{2z^3}{y^2}\mathop{\rm sgn} y\,\<\phi, \tau_{z^2/y}\partial_x\delta>
\end{align*}
So
$$ B = -\abs{\frac{2z}y}\delta\left(x - \frac{z^2}y\right) + \frac{2z^3}{y^2}\mathop{\rm sgn}\,y \cdot \partial_x\delta\left(x - \frac{z^2}y\right) $$ |
Constant function and Argument Principle | Claim: $h(D_C)\subset h(C).$
Choose $z\in\mathbb{C}\setminus h(C).$ Note that $h(C)$ is compact and $z$ lies in the unbounded component of $\mathbb{C}\setminus h(C).$ Hence the winding number of $z$ with respect to $h(C)$, $n(h(C);z)=0.$ So from the formula $n(h(C);z)=\Sigma_1^nn(C;z_i),$ where $z_i$ are the preimages of $z$, we get that no $z\in\mathbb{C}\setminus h(C)$ has a preimage in $D_C.$ So $h(D_C)\subset h(C).$
By the claim, $h$ is constant by Open Mapping Theorem on $D_C$ and hence on $D.$ |
How do I prove that the metric in $\mathbb{C}_{\infty}$ satisfies the triangle inequality? | Just for reference, I completed the answer to my question provided by Alfonso Delfin
$d(z,z') \leq d(z,z'') + d(z'',z')$
Let $\sigma: \mathbb{C}\cup\{\infty\} \to S$ be the stereographic projection
$$\sigma(z)=\begin{cases}
\left(\displaystyle\frac{2\Re(z)}{|z|^2+1},\displaystyle\frac{2\Im(z)}{|z|^2+1},\displaystyle\frac{|z|^2 - 1}{|z|^2 + 1}\right), & \mbox{ if } z \in \mathbb{C}\\
(0,0,1), & \mbox{ if } z=\infty
\end{cases}$$
Notice that
$d(z,z')=||\sigma(z)-\sigma(z')||$,
where $||\cdot||$ is the usual norm in $\mathbb{R}^3$. Thus, the triangle inequality follows from the one for $||\cdot||.$
Let $\textbf{u}, \textbf{v} \in \mathbb{R}^n$, then
$||\textbf{u} + \textbf{v}||^2 = (\textbf{u} + \textbf{v})\cdot(\textbf{u} + \textbf{v}) = ||\textbf{u}||^2 + 2(\textbf{u}\cdot\textbf{v}) + ||\textbf{v}||^2$
By the Cauchy–Schwarz inequality we have
$
\textbf{u}\cdot\textbf{v} \leq ||\textbf{u}|| \mbox{ } ||\textbf{v}||
$
Hence,
$||\textbf{u} + \textbf{v}||^2 \leq ||\textbf{u}||^2 + 2||\textbf{u}|| \mbox{ } ||\textbf{v}|| + ||\textbf{v}||^2 = (||\textbf{u}|| + ||\textbf{v}||)^2
$
Taking square roots both sides yields
$||\textbf{u} + \textbf{v}|| \leq ||\textbf{u}|| + ||\textbf{v}||$
Now consider
$ d(z,z')= ||[\sigma(z)- \sigma(z'')] + [\sigma(z'') - \sigma(z')]||$
Let $\textbf{u}= \sigma(z) - \sigma(z''), \textbf{v} = \sigma(z'') - \sigma(z')$, then
$d(z,z')= ||\textbf{u} + \textbf{v}|| \leq ||\textbf{u}|| + ||\textbf{v}|| = d(z,z'') + d(z'',z') $ |
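A small numpy sketch of exactly this argument, checking the triangle inequality on random points of $\mathbb{C}_\infty$ (my own script):

import numpy as np

def sigma(z):
    # stereographic projection of C ∪ {∞} onto the unit sphere S ⊂ R^3
    if z == np.inf:
        return np.array([0.0, 0.0, 1.0])
    d = abs(z)**2 + 1
    return np.array([2*z.real/d, 2*z.imag/d, (abs(z)**2 - 1)/d])

def dist(z, w):
    return np.linalg.norm(sigma(z) - sigma(w))

rng = np.random.default_rng(0)
pts = [complex(u, v) for u, v in rng.normal(size=(20, 2))] + [np.inf]
print(all(dist(a, b) <= dist(a, c) + dist(c, b) + 1e-12
          for a in pts for b in pts for c in pts))   # True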
Why is the Killing form of $\mathfrak{g}$ restricted to a subalgebra $\mathfrak{a} \subset \mathfrak{g}$ not the Killing form of $\mathfrak{a}$? | As to the question "why":
The Killing form of $\mathfrak{a}$ is given by
$$k_1(x,y) := Tr (ad_{\color{red}{\mathfrak{a}}}(x) \circ ad_{\color{red}{\mathfrak{a}}}(y))$$
for $x,y \in \mathfrak{a}$, whereas the Killing form of $\mathfrak{g}$ restricted to $\mathfrak{a}$ is given by
$$k_2(x,y) := Tr (ad_{\color{red}{\mathfrak{g}}}(x) \circ ad_{\color{red}{\mathfrak{g}}}(y))$$
for $x,y \in \mathfrak{a}$. The second one involves "bigger matrices" a.k.a. more information, namely, how $x,y$ operate on the entire $\mathfrak{g}$, whereas for the first one, we forget everything about how they act outside of $\mathfrak{a}$. For examples how that can make a difference, see the other answer.
Now if $\mathfrak{a}$ happens to be an ideal, then the "bigger matrices" in the second case actually have a certain block form, so that the traces are completely determined by the blocks which correspond to $ad_{\color{red}{\mathfrak{a}}}$ of the respective elements, and that is why in that case, our two options agree. It is worthwhile to write down these "block forms" yourself to see what's happening. (It's done (Spoiler warning!) here.)
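As a concrete numerical illustration (a numpy sketch of mine; I take $\mathfrak g=\mathfrak{gl}_2(\mathbb R)$ and $\mathfrak a$ the diagonal matrices, an abelian subalgebra that is not an ideal): $k_1$ vanishes identically because $\mathfrak a$ is abelian, while $k_2$ does not.

import numpy as np
from itertools import product

# basis of gl(2): the elementary matrices E11, E12, E21, E22
basis_g = []
for i, j in product(range(2), repeat=2):
    E = np.zeros((2, 2)); E[i, j] = 1.0
    basis_g.append(E)
basis_a = [basis_g[0], basis_g[3]]          # a = diagonal matrices

def bracket(x, y):
    return x @ y - y @ x

def ad_matrix(x, basis):
    # matrix of ad(x) = [x, -] in the given basis (brackets stay in the span)
    B = np.array([b.flatten() for b in basis]).T
    cols = [np.linalg.lstsq(B, bracket(x, b).flatten(), rcond=None)[0] for b in basis]
    return np.array(cols).T

def killing(x, y, basis):
    return np.trace(ad_matrix(x, basis) @ ad_matrix(y, basis))

E11 = basis_g[0]
print(killing(E11, E11, basis_a))   # 0.0: k1, the Killing form of a itself
print(killing(E11, E11, basis_g))   # 2.0: k2, the Killing form of gl(2) restricted to a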
Convergence of the Picard sequence | Your $f$ is globally Lipschitz with constant $L=1$, the Picard iteration converges on every bounded interval uniformly, or globally on $\Bbb R$ in the norm $\|y\|_L=\sup e^{-2L|x|}|y(x)|$.
We know $|f(x,y)|\le 1$, thus $|y(x)-y(0)|\le |x|$.
We know that $(n+\frac12)\pi$, $n\in\Bbb Z$, are stationary points, thus the solution of the IVP $y(0)=0$ is confined to $-\frac\pi2<y(x)<\frac\pi2$.
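A small numeric illustration (Python; the concrete instance $y'=\cos y$, $y(0)=0$, consistent with the stationary points above, is my assumption about the problem at hand):

import numpy as np

x = np.linspace(0.0, 2.0, 2001)
y = np.zeros_like(x)                     # Picard iterate y_0 ≡ 0
h = x[1] - x[0]

for _ in range(30):
    g = np.cos(y)                        # f(x, y_k) = cos(y_k)
    # y_{k+1}(x) = 0 + ∫_0^x cos(y_k(t)) dt, via the cumulative trapezoid rule
    y = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2) * h))

# exact solution: the Gudermannian y(x) = arcsin(tanh x), which stays in (-π/2, π/2)
print(np.max(np.abs(y - np.arcsin(np.tanh(x)))))   # small residual: only discretization error remains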
A questionable proof that $Z(S_n)$ is trivial for $n\ge 3$. | Unfortunately, there are a number of problems with your "proof". First, $s^{-1}=s$ (and the notation is very regrettable), so your statement that $lcm(s,s^{-1})=1$ is not legitimate. However, the most serious mistake is that you have not used the fact that $n\geq 3$.
Try the following:
Suppose $\sigma\in Z(S_n)$ is not the identity. Then $\sigma(i)=j\neq i$ for some $i,j\in\{1,\ldots,n\}$. Now, since $n\geq 3$, there exists $k\in \{1,\ldots,n\}\backslash\{i,j\}$.
Claim: There exists $\tau\in S_n$ such that $\tau(j)=k$, but $\tau(i)=i$ (can you think of one?).
Given the claim, compute $\sigma\tau(i)$ and $\tau\sigma(i)$. Are they equal? If not, what can you conclude? |
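Once you have such a $\tau$, the contradiction follows. For small $n$ one can also confirm the conclusion by brute force (a quick Python check):

from itertools import permutations

def compose(p, q):                     # (p∘q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

for n in range(3, 6):
    G = list(permutations(range(n)))
    center = [s for s in G if all(compose(s, t) == compose(t, s) for t in G)]
    print(n, center)                   # only the identity permutation remains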
Probability distribution (pdf) | It is a probability distribution supported on $\{0,1,2,3,4,5\}$. You just have to give the 6 values of the probability mass function; assuming the fact that a given item is faulty is independent of the others, let $X$ be the number of defectives in the set. You get
\begin{align*}
\mathbb{P}\{X=0\} &= \left(1-\frac{20}{100}\right)^5 \tag{All are good} \\
\mathbb{P}\{X=1\} &= \binom{5}{1}\left(1-\frac{20}{100}\right)^4\left(\frac{20}{100}\right)^1 \tag{One is bad out of 5} \\
\vdots
\end{align*}
You can explicitly compute them. Note that this is actually a binomial distribution with $n=5$ trials and success probability $p=0.2$; can you see intuitively why?
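For instance, a minimal Python computation of the whole pmf:

from math import comb

n, p = 5, 0.2
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
print(pmf)        # [0.32768, 0.4096, 0.2048, 0.0512, 0.0064, 0.00032]
print(sum(pmf))   # 1.0 (up to rounding), as required of a probability distribution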
Greedy algorithm fails to give chromatic number | The standard example is the crown graph, on which greedy colouring uses $n$ colours instead of $2$ when the vertices are listed in a bad order.
Cardinality of subsets of $\mathbb{N}$ with fixed asymptotic density | For each $n\in\mathbb N$ choose one element from the set $\{2n-1,2n\}$. Each of the $2^{\aleph_0}$ sets you can get this way has asymptotic density $\frac12$.
P.S. More generally: If $S$ has density $\alpha$ and $T$ has density zero, then the symmetric difference $S\triangle T$ has density $\alpha$. Of course there are continuum many sets of density zero, e.g., the subsets of a given infinite set of density zero. Thus, by fixing $S$ and varying $T$, we get continuum many sets of density $\alpha$. |
Subgroups of automorphism groups | No, a characteristic subgroup of an automorphism group of a group need not be isomorphic to the automorphism group of any group.
Here are two very well known infinite families of examples:
The cyclic group of order p (p an odd prime) is not the automorphism group of any group (such a group is abelian since its inner automorphism group is cyclic, but an abelian group either has inversion as an automorphism of order 2, or is an elementary abelian 2-group, and an elementary abelian 2-group either has trivial automorphism group or has a coordinate swap as an order 2 automorphism). However, the cyclic group of order p is a characteristic Sylow p-subgroup of AGL(1,p), which is the automorphism group of the dihedral group of order 2p and of itself. AGL(1,p) is the normalizer of a Sylow p-subgroup of the symmetric group on p-points.
The alternating group of degree n (n ≥ 9) is a characteristic, index 2 subgroup of its automorphism group, but is not itself the automorphism group of any group by Robinson (1982, MR678545).
At least as far as I understand it, automorphism groups of groups tend to be big and "full", and so it should not be surprising that many of their subgroups are not themselves automorphism groups of groups since they are "missing" something. For instance, a simple group cannot be an automorphism group unless it is complete; M12 is incomplete. Odd order cyclic groups do not work, since one is missing inversion (and indeed, the rest of AGL1). |
A question about tangent lines | Hint: Use what you know: take $f'(x)$. You know that $f'(s)=f'(t)$ or the slopes don't match. Then you know that $f(t)-f(s)=f'(t)(t-s)$ because they are on the same line with that slope. Two equations, two unknowns. |
Kernel of matrix polynomials | .Use the fact that if $g,h$ are relatively prime polynomials then there exist polynomials $p(x),q(x)$ such that $p(x)g(x) + q(x)h(x) = 1$ for all $x$,(Bezout identity), so in particular, $p(A)g(A) + q(A)h(A) = I$.
Now, with this in mind, let $v \in \Bbb R^n$. Then, we may write $v = p(A)g(A)v + q(A)h(A)v$.
Now, it is clear(by commutativity of the polynomial operators in $A$) that :
$$
h(A)[p(A)g(A)v] = p(A)[g(A)h(A)v] = 0 \\
g(A)[q(A)h(A)v] = q(A)[g(A)h(A)v] = 0
$$
Therefore $v$ is a sum of two elements lying in the kernels of $h(A)$ and $g(A)$ respectively. Combining this with the fact that these kernels intersect trivially (if $g(A)v = h(A)v = 0$, then $v = p(A)g(A)v + q(A)h(A)v = 0$) gives the required conclusion.
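A concrete illustration (sympy; the matrix $A$ and the coprime factors $g(x)=(x-1)^2$, $h(x)=x-2$ are sample choices of mine):

import sympy as sp

A = sp.Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 2]])   # minimal polynomial (x-1)^2 (x-2)
gA = (A - sp.eye(3))**2                             # g(A) with g(x) = (x-1)^2
hA = A - 2*sp.eye(3)                                # h(A) with h(x) = x-2

print(gA * hA == sp.zeros(3, 3))                    # True: g(A) h(A) = 0
print(len(gA.nullspace()), len(hA.nullspace()))     # 2 and 1: the dimensions add up to 3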
Is $U(2)=SU(2) \times U(1)$? | Not quite. There is a natural short exact sequence
$$1 \to SU(2) \to U(2) \xrightarrow{\det} U(1) \to 1.$$
This sequence doesn't have a natural splitting, but it does have a splitting given by
$$U(1) \ni z \mapsto \left[ \begin{array}{cc} z & 0 \\ 0 & 1 \end{array} \right] \in U(2).$$
Such a splitting exhibits $U(2)$ as a semidirect product $SU(2) \rtimes U(1)$, where the action of $U(1)$ on $SU(2)$ is given by conjugation with respect to the above splitting. The image of the splitting does not, and cannot, commute with the image of $SU(2)$ (we'd like to send $z$ to the diagonal matrix with entries $\sqrt{z}, \sqrt{z}$, but $\sqrt{z}$ isn't well-defined on $U(1)$), so this semidirect product cannot be refined to a direct product. |
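A small numerical illustration of the last point (numpy; the particular elements are arbitrary):

import numpy as np

z = np.exp(0.7j)                          # a point of U(1)
s = np.array([[z, 0], [0, 1]])            # its image under the splitting
u = np.array([[0, 1], [-1, 0]])           # an element of SU(2)

print(np.isclose(np.linalg.det(s), z))    # True: det ∘ splitting = id on U(1)
print(np.allclose(s @ u, u @ s))          # False: the two images do not commute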
Show that $I$ is an ideal of $\mathbb{K}[x]$ | Looks like you got it! You can also notice that your set $I$ in question is equal to $I_1\cap I_2\cap I_3$ where $I_j = \{f \in K[x] : f(x_j) = 0\}$, and it's a little bit easier for the purpose of writing your proof to simply prove that any one of the $I_j$ is an ideal, and therefore they all are ideals. Then you can quote or prove the result that the intersection of ideals is an ideal to conclude $I$ is an ideal. |
How do I simplify this trigonometric expression? | $$\frac{\sin(x)}{1+\cos(x)}+\frac{1+\cos(x)}{\sin(x)}=\frac{\sin^{2}(x)+\cos^{2}(x)+2\cos(x)+1}{\sin(x)(1+\cos(x))}=\frac{2}{\sin(x)}$$ |
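A one-line sympy confirmation, in case you want to double-check the algebra:

import sympy as sp

x = sp.symbols('x')
lhs = sp.sin(x)/(1 + sp.cos(x)) + (1 + sp.cos(x))/sp.sin(x)
print(sp.simplify(lhs - 2/sp.sin(x)))   # 0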
If $R \subset S$ is a domain and $S \subseteq \mathrm{Quot}(R)$ then $\mathrm{ann}_{R}(S/R) \ne {0}$ | In $\mathrm{Quot}(R)$ there are $x_1,\dots,x_n$ such that $S=Rx_1 + \cdots + Rx_n$. But any $x_i$ can be written $x_i=\frac{a_i}{b_i}$ with $a_i$, $b_i \in R \setminus \{ 0 \}$. I advise you to consider the element $b=b_1 \cdots b_n$ for your problem |
Meaning of $^sB$, s an element, B a subgroup | $^sB=sBs^{-1}$ is defined at the beginning of the book (page xxi "General notation"). It seems that that notation for a group action is more common in Cohomology. |
algebraic Modulo question | It would appear that unknown1 is 5 and unknown2 is 7. If you reduce the coefficients of $6x^2y-5x^2-xy^3+7y^2+13$ modulo 5, you get $x^2y-xy^3+2y^2-2$, and similarly for the other example. |
Combinatorics vs probability with replcements | Your calculation $\frac3{10}\cdot\frac29$ in the first problem gives the probability of drawing a blueberry first and a chokeberry second; you need to double it to include the probability of drawing a chokeberry first and a blueberry second.
To solve the second problem by counting rather than by working directly with probabilities, you must look at the possible sequences of $4$ draws, not sets of $4$. There are $7^4$ possible sequences of draws, and $5^4$ of them include only green marbles, so the desired probability is
$$\frac{5^4}{7^4}=\left(\frac57\right)^4\;,$$
just as in the other calculation. |
Evaluate $\sum_{j=0}^n(-1)^{n+j}{n\choose j}{{n+j}\choose j}\frac{1}{(j+1)^2}$ | Use Kelenner's hint about
$$ \int_{0}^{1}x^j(-\log x)\,dx = \frac{1}{(1+j)^2}\tag{1}$$
and exploit the fact that $(-\log x)$ has a nice representation in terms of shifted Legendre polynomials:
$$ (-\log x) = 1+\sum_{j\geq 1}\frac{(-1)^j(2j+1)}{j(j+1)}Q_j(x)\tag{2} $$
since, for any $n\geq 1$,
$$ \int_{0}^{1}(-\log x)Q_n(x)\,dx = \color{red}{\frac{(-1)^n}{n(n+1)}}\tag{3} $$
can be proved through Rodrigues' formula and integration by parts (the derivative of $(-\log x)$ is simple to deal with). |
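Combining $(1)$-$(3)$ (the coefficients $(-1)^{n+j}\binom nj\binom{n+j}j$ are exactly those of the shifted Legendre polynomial), the sum evaluates to $\frac{(-1)^n}{n(n+1)}$ for $n\ge1$. A quick exact-arithmetic check of this closed form (my own script):

from fractions import Fraction
from math import comb

def lhs(n):
    return sum(Fraction((-1)**(n + j) * comb(n, j) * comb(n + j, j), (j + 1)**2)
               for j in range(n + 1))

for n in range(1, 7):
    print(n, lhs(n) == Fraction((-1)**n, n * (n + 1)))   # True for each n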
Fields of characteristics and affine conics | You are off to a good start with considering $\phi$ and noting that you can choose $e_3$ so that $\phi(e_1, e_3) = 1$.
Now let $c = \phi(e_3, e_3)$, which may or may not be $0$. But we can replace $e_3$ by $e'_3 = \frac{c}{2} e_1 - e_3$ and compute that $\phi(e'_3, e'_3) =
\frac{c^2}{4}\phi(e_1,e_1) - 2\frac{c}{2}\phi(e_1,e_3) + \phi(e_3,e_3) = 0$. So, replacing $e_3$ with $e'_3$, we may assume $\phi(e_1,e_1) = \phi(e_3,e_3) = 0$ while $\phi(e_1,e_3) = \phi(e_3,e_1) = 1$.
Let $e_2$ be a third vector to complete the basis. Set $b = \phi(e_1,e_2)$. Setting $e'_2 = e_2 - b e_3$, we have that $\phi(e_1,e'_2) = 0$, so replacing $e_2$ by $e'_2$, we may assume $\phi(e_1,e_2) = \phi(e_2, e_1) = 0$. Now let $c = \phi(e_2,e_3)$ and set $e'_2 = e_2 - c e_1$. Then $\phi(e'_2,e_3) = 0$ and $\phi(e'_2,e_1) = 0$, so replacing $e_2$ by $e'_2$ again, we get that the matrix for $\phi$ has the form
$$
\begin{pmatrix}
0 & 0 & 1 \\ 0 & a & 0 \\ 1 & 0 & 0
\end{pmatrix}
$$
for some $a$. Because $\phi$ is non-degenerate, $a \neq 0$. This yields the desired $Q$.
If $-a$ has a square root in $k$ (always the case if $k$ is algebraically closed), then replacing $e_2$ by $e'_2 = \frac{1}{\sqrt{-a}} e_2$, we may assume that $Q(e_2) = \phi(e_2,e_2) = -1$.
For the second part, let $C \subset \mathbb{P}^2_k$ be a non-degenerate conic defined by a non-degenerate quadratic form $Q$. Note that the existence of a vector $e_1 \neq 0$ such that $Q(e_1) = 0$ is equivalent to the fact that the conic $C$ is not empty.
Applying a projective equivalence corresponds to making a change of basis in $V$. And we just showed that we could change our basis so that $Q$ has the form
$$
\begin{pmatrix}
0 & 0 & 1 \\ 0 & -1 & 0 \\ 1 & 0 & 0
\end{pmatrix}.
$$
But this $Q$ defines the conic $C$ given by the equation $XZ - Y^2 = 0$, or $XZ = Y^2$. (Edit: Actually, depending on conventions, you are more likely to interpret this $Q$ as defining the conic $2 XZ = Y^2$, but see the remark below.)
Edit: As Georges Elencwajg pointed out in the comments, you can use $Z \mapsto Z/a$ to get $XZ = aY^2$ projectively equivalent to $XZ = Y^2$ for any $a \neq 0$. |
Unitary operator on dense set, Unique extension? | First of all, if a unitary extension exists, it has to be unique due to continuity and $V$ being dense.
For each $x \in H_1$ and any sequence $(a_n(x))_n$ in $V$ converging to $x$ define $\widetilde{U} x = \lim\limits_{n \rightarrow \infty} U a_{n}(x)$. We have to check that this is well-defined. In fact, $(U a_{n}(x))_n$ is a Cauchy-sequence as $U$ is unitary (it suffices to assume $U$ to be bounded) and convergent as $H_2$ is complete. Moreover, as $U$ is unitary the definition is independent of the choice of $(a_n(x))_n$ (Here again, boundedness is sufficient). Now it is easy to show that $\widetilde{U}$ is indeed a linear operator $H_1 \rightarrow H_2$ extending $U$.
Finally, we have to check that $\widetilde{U}$ is unitary. Note that the adjoint $\widetilde{U}^*$ of $\widetilde{U}$ extends the adjoint $U^*$ of $U$ and
the compositions $\widetilde{U}^* \widetilde{U}$ and $\widetilde{U}\widetilde{U}^*$ are identities when restricted to $V$ and $W$, respectively. So we are done by continuity and $V$ and $W$ being dense. |
$v\in\mathcal{L}(F,E)$ such that $u\circ v\circ u=u$ | I can do it for the infinite-dimensional case using the Axiom of Choice. I suspect it requires some form of this axiom.
Let $W$ be the null space of $u$, and extend a basis $\{e_\alpha\}_{\alpha \in A}$ of $W$ to a basis $\{e_\beta\}_{\beta \in B}$ of $E$. Then the range of $u$ is the linear span of $\{u e_\beta\}_{\beta \in B \backslash A}$, which are linearly independent. Extend this to a basis $\{w_\gamma\}_{\gamma \in \Gamma}$.
Then we define $v$ to map $u e_\beta$ to $e_\beta$ for $\beta \in B \backslash A$
and all other $w_\gamma$ to $0$. This satisfies both $u v u = u$ and $v u v = v$. |
Rotating two vectors to another plane | Generally if you know that for two not collinear vectors $a'=Ra$ and $b'=Rb$ you know also that $R(a \times b)= a' \times b'$ ( $\times$ here vector product and $R$ rotation matrix)
These six vectors are sufficient to construct from them two $3 \times 3$ matrices with vectors as columns, name them $C=[a \ \ b \ \ a\times b]$ and $C'=[a' \ \ b' \ \ a' \times b' ]$.
So we have $C'=RC$.
From this we can calculate rotation matrix $R$.
$R=C'C^{-1}$
$C^{-1}$ exists because $a, b, a\times b$ are not coplanar. |
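A quick numerical sketch of this recipe (numpy; the sample rotation and vectors are mine):

import numpy as np

th = 0.7   # sample rotation: angle th about the z-axis
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
a, b = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.5])   # not collinear
ap, bp = R_true @ a, R_true @ b

C  = np.column_stack([a,  b,  np.cross(a,  b)])
Cp = np.column_stack([ap, bp, np.cross(ap, bp)])
R  = Cp @ np.linalg.inv(C)
print(np.allclose(R, R_true))   # True: the rotation is recovered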
Multiple points on an elliptic curve | Just to say that by referring to $F$ as an elliptic curve, you are implicitly assuming that it's smooth as this forms part of the definition.
An easy way to see when this works is for medium Weierstrass equations. For $y^2=f(x)=x^3+a_2x^2+a_4x+a_6$, a point $P=(x,y)$ on the curve is singular if and only if $\partial F/\partial x=-f'(x)=0$ and $\partial F/\partial y=2y=0$. This happens if and only if $y=0$ and $f'(x)=0$, i.e. exactly when $f$ has a double root.
Try it with some obvious examples like $y^2=x^3$ and $y^2=x^3-3x+2$. You'll find a singular point at $(0,0)$ (resp. $(1,0)$), which is the unique singular point lying on the curve.
But yes! This gives a necessary and sufficient criterion for $F$ to have an affine singular point. |
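A quick sympy check of both examples (my own script):

import sympy as sp

x, y = sp.symbols('x y')
for f in [x**3, x**3 - 3*x + 2]:
    F = y**2 - f
    print(sp.solve([F, sp.diff(F, x), sp.diff(F, y)], [x, y], dict=True))
# [{x: 0, y: 0}] and [{x: 1, y: 0}] respectively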
If three vector such $|a|^2=a\cdot b=b\cdot c=1,a\cdot c=2$,show that $|a+b+c|\ge 4$ | We are given that, $a.b=1$ and $a.c=2$. Adding these two, we get,
$\vec{a}.(\vec{b}+\vec{c})=3$.
$\Rightarrow |\vec{a}||(\vec{b}+\vec{c})|cos\theta=3$, where $\theta$ is the angle between the vectors $\vec{a}$ and $(\vec{b}+\vec{c})$.
Since, $cos\theta\le1\Rightarrow|\vec{a}||(\vec{b}+\vec{c})|\ge3 $. Now, use he fact that, $|\vec{a}|^2=1\Rightarrow |\vec{a}|=1$ to get, $|\vec{b}+\vec{c}|\ge3$ |
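A numerical check (numpy, my own script): taking $\vec a=(1,0,0)$ without loss of generality, the constraints force $\vec b=(1,\beta,\gamma)$ and $\vec c=(2,c_2,c_3)$ with $\beta c_2+\gamma c_3=-1$; random sampling never produces a norm below $4$ (equality occurs e.g. at $\vec b=(1,1,0)$, $\vec c=(2,-1,0)$).

import numpy as np

rng = np.random.default_rng(0)
best = np.inf
for _ in range(10**5):
    beta, gamma, c2, c3 = rng.normal(size=4)
    s = beta*c2 + gamma*c3
    if abs(s) < 1e-12:
        continue
    c2, c3 = -c2/s, -c3/s                 # rescale so that b·c = 1
    v = np.array([1, 0, 0]) + np.array([1, beta, gamma]) + np.array([2, c2, c3])
    best = min(best, np.linalg.norm(v))
print(best)   # stays ≥ 4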
An inequality like Riemann sum involving $\sqrt{1-x^2}$ | Write the inequality as
$$\frac{\pi}{4} < \frac{1}{2n} + \frac{1}{n} \sum_{k=1}^{n-1} \sqrt{1-\left(\frac{k}{n}\right)^2} + \frac{1}{2n} \sqrt{\frac{1}{2n}}.$$
The left-hand side $\pi/4$ is the area of the part of the unit circle that
lies in the first quadrant (below the curve $y=f(x)=\sqrt{1-x^2}$).
We want to interpret the right-hand side as the area of a region $D$ which
covers that quarter circle.
Note that $f$ is concave, so that its graph lies below any tangent line.
Thus the trapezoid bounded by the lines $x=a-\epsilon$ and $x=a+\epsilon$
and by the $x$ axis and the tangent line through $(a,f(a))$ will cover the
corresponding part of the circle:
$$\int_{a-\epsilon}^{a+\epsilon} f(x) dx < 2\epsilon f(a).$$
Thus, taking $D$ to be the union of the following pieces does the trick:
A rectangle of height 1 between $x=0$ and $x=1/2n$.
Trapezoids as above, of width $\frac{1}{n}$ and centered at $x=k/n$ for $k=1,\ldots,n-1$.
A trapezoid as above, of width $\frac{1}{2n}$ and centered at $x=1-1/4n$. This last one has area
$$\frac{1}{2n} f(1-1/4n) = \frac{1}{2n} \sqrt{\frac{1}{2n} - \frac{1}{16n^2}} < \frac{1}{2n} \sqrt{\frac{1}{2n}}.$$ |
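A quick numerical check of the displayed inequality (a small Python script of mine):

import numpy as np

for n in [1, 5, 50, 500]:
    k = np.arange(1, n)
    rhs = 1/(2*n) + np.sum(np.sqrt(1 - (k/n)**2))/n + np.sqrt(1/(2*n))/(2*n)
    print(n, rhs > np.pi/4)   # True for every n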
anihilator closure and continuity | Since $N(f)$ is closed, $H=U\oplus N(f)$ where $U$ is the finite dimensional orthogonal complement of $N(f)$. Write $U=Vect(e_1,..,e_n)$ with $\|e_i\|=1$, and $f(x)=<x,e_i>$, $f_i$ is continuous. Let $a_i=f(e_i)$, consider $g=a_1f_1+...+a_nf_n$, for every $x\in H, x=u+v, u\in U, v\in N(f)$, write $u=u_1e_1+..u_ne_n$, $f(x)=f(u+v)=f(u)=f(u_1e_1+..+u_ne_n)=$
$=u_1a_1+..u_na_n=a_1<x,e_1>+..+a_n<x,e_n>=a_1f_1(x)+..+a_nf_n(x)=g(x)$ implies that $f=g$ and $f$ is continuous since $g$ is continuous. |
Find the sum of the coefficients of $x^{20}$ and $x^{21}$ in the power series expansion of $\frac 1{(1-x^3)^4}$. | Since
$$(1-x)^{-n}=\sum_{k=0}^{\infty} \binom{n+k-1}{n-1}x^k$$
(see https://en.wikipedia.org/wiki/Binomial_theorem#Newton.27s_generalized_binomial_theorem), we have
$$(1-x^m)^{-n}=\sum_{k=0}^{\infty} \binom{n+k-1}{n-1}x^{km}.$$
Therefore
$$(1-x^3)^{-4}=\sum_{k=0}^{\infty} \binom{k+3}{3}x^{3k}.$$
Since there is no term with $x^{20}$, and the term with $x^{21}$ corresponds to $k = 7$, the result is $\binom{10}{3}=120$.
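A sympy cross-check (my own):

import sympy as sp

x = sp.symbols('x')
s = sp.series(1/(1 - x**3)**4, x, 0, 22).removeO()
print(s.coeff(x, 20), s.coeff(x, 21))   # 0 120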
Interchange of limit and derivative | A very simple sufficient condition is to assume that each $g_n$ is Riemann-integrable and that the sequence $(g_n)$ converges uniformly to some continuous function. In this case a proof can be given using integration and the Fundamental Theorem of Calculus.
A slightly weaker assumption, which requires a more careful argument is the following theorem (see Baby Rudin's book, theorem 7.17): Suppose $f_n$ is a sequence of differentiable functions on $[a,b]$, and which converges at some point $x_0\in[a,b]$. If $g_n:=f_n'$ converges uniformly on $[a,b]$ then $f_n\to f$ uniformly for some $f$ which is differentiable and for all $x\in [a,b]$, $f'(x)=\lim\limits_{n\to\infty}g_n(x)$.
Notice that in (2), we no longer assume that $f_n\to f$; rather, that is part of the conclusion, and we only assume convergence at one point. Also, we do not assume Riemann-integrability of the derivatives $g_n:=f_n'$.
As a fun fact, we have the following wonderful theorem in complex analysis (I know your question is mainly focused on the real case, but I thought it would be nice to contrast with the nice behavior in the complex case)
Suppose $U\subset\Bbb{C}$ is open, and $\{f_n\}_{n=1}^{\infty}$ is a sequence of holomorphic functions on $U$ such that $f_n\to f$ for some function $f$ on $U$, such that the convergence is uniform on compact subsets of $U$. Then, $f$ is holomorphic and we also have that $f_n'\to f'$ uniformly on compact subsets of $U$. |
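For contrast, the standard cautionary example in the real case is $f_n(x)=\frac{\sin(nx)}{\sqrt n}$: it converges to $0$ uniformly, yet $f_n'(0)=\sqrt n\to\infty$ while $(\lim f_n)'(0)=0$. A tiny numeric illustration (mine):

import numpy as np

# sup|f_n| = 1/sqrt(n) -> 0 (uniform convergence), but f_n'(0) = sqrt(n) -> ∞
for n in [10, 100, 1000, 10000]:
    print(n, 1/np.sqrt(n), np.sqrt(n))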
Definite integral of partial fractions? | You neglected the chain rule:
$$
\int \frac{4}{2x+1} \, dx = 2\ln|2x+1|+C \ne 4\ln|2x+1|+C.
$$ |
Online source for counting primes in residue classes | I did not find one either, so I just did it using Python and put it online; please verify with some simple tests that it does what you want. It seems to do what you needed: here is the link (to repl.it, a very well known safe place to put online code for simple tests), and here is the Python code just in case you want to give it to your class, use it freely:
def isPrime(num):
    if num <= 1:
        return False
    for factor in range(2, int(num**0.5) + 1):  # trial division up to sqrt(num)
        if num % factor == 0:
            return False
    return True

N = 100   # upper bound of the search interval
a = 7     # target residue
q = 15    # modulus
total = 0
print("List of primes p in [1,"+str(N)+"] such that p mod "+str(q)+" equiv to "+str(a))
for n in range(1, N + 1):   # include N itself
    if isPrime(n) and n % q == a:
        total = total + 1
        print(n)
print("Total: " +str(total))
In the link, just modify the values of $N, a$ and $q$ on the left side and then click the run button; the results will appear on the right side. It should work in very basic browsers.
Please let me know if it does not work.
(*) I forgot to include the total in the initial version; it is included now.
Product of sums of all subsets mod $k$? | Here's a solution. I'll edit later with the explanation.
Input: a set $S = \{s_1, s_2, \ldots, s_N\}$, and an integer $k$.
Define matrices $P,Q$ of size $(N+1) \times k$ initializing the first row
$P_{0,i} = i$, $Q_{0,i} = 1$
and filling in successive rows by settting
$$P_{n,i} = P_{n-1,i} \cdot P_{n-1, i+s_n}$$
$$Q_{n,i} = P_{n-1,i} \cdot Q_{n-1, i+s_n} + Q_{n-1,i} \cdot P_{n-1, i+s_n}$$
while reducing everything (including indices) modulo $k$.
Output $Q_{N,0}$.
Filling in each entry of the matrix is just multiplication, addition, and reducing mod $k$, each of which I believe can be done in time $\log k$, so in total the algorithm is of order $Nk\log k$.
I implemented this in Python:
def aur(S, k):
    P = [list(range(k))]          # P[0][i] = P_empty(i) = i
    Q = [[1]*k]                   # Q[0][i] = P_empty'(i) = 1
    for n in range(1, len(S)+1):
        P.append([])
        Q.append([])
        for i in range(k):
            j = (i + S[n-1]) % k  # shifted index i + s_n (mod k)
            P[n].append((P[n-1][i] * P[n-1][j]) % k)
            # product rule: Q_n(x) = P_{n-1}(x) Q_{n-1}(x+s_n) + Q_{n-1}(x) P_{n-1}(x+s_n)
            Q[n].append((P[n-1][i] * Q[n-1][j] + Q[n-1][i] * P[n-1][j]) % k)
    return Q[len(S)][0]
You can check that e.g., aur$([1,2,3], 20) = 0$ and aur$([1,2,3], 3000) = 2160$ as in the OP's examples.
Edit: Here is the explanation.
Define a polynomial $$P_S(x) = \prod_{T \subseteq S} (x + \sum_{i \in T} i).$$ If we set $x=0$, we almost get what we want, except that the product also includes the empty set which (I'm assuming) should not be there. Nevertheless, it will be convenient to include it. From $P_S(x)$ we can still recover our answer: we first divide by $x$ to eliminate the empty set, and then set $x=0$. Equivalently, we take $P'(0)$.
The reason we define this polynomial is because it obeys a nice recurrence relation. If we set $S_n = \{s_1, \ldots, s_n\}$, then
$$P_{S_n}(x) = P_{S_{n-1}}(x) P_{S_{n-1}}(x+s_n).$$
This follows from the fact that a subset $T$ of $S_n$ is either a subset $T$ of $S_{n-1}$, or a union $T \cup \{s_n\}$ for $T \subseteq S_{n-1}$. Furthermore, we can start with the initial condition $P_{\emptyset}(x) = x$.
Computing the coefficients of $P_{S}(x)$ this way is enough to compute our output, but this polynomial will have degree $2^N$, so this is not feasible. Instead, we use the product rule, setting $Q_S(x) = P_S'(x).$ This gives
$$Q_{S_n}(x) = P_{S_{n-1}}(x) Q_{S_{n-1}}(x+s_n) + Q_{S_{n-1}}(x) P_{S_{n-1}}(x+s_n).$$
We can then output $Q_{S_N}(0)$ and reduce mod $k$. In this way we don't need the full list of exponentially many coefficients of these polynomials, but we only need to evaluate them for $x = 0, 1, \ldots, k-1$.
Are these two statements logically equivalent? XOR and If/then | No. The second formula can be satisfied by a population of couch potatoes, i.e., $\neg B(x) \land \neg S(x)$ is consistent with the second formula, but not with the first.
Linear operator such that $N(T)=R(T)$ | One example of a linear operator on an infinite-dimensional space, consider the map $T:\ell^\infty\to\ell^\infty$ ($\ell^\infty$ is the space of bounded real or complex sequences) given by
$$ T(x_1,x_2,x_3,x_4,\ldots)=(0,x_1,0,x_3,0,\ldots). $$
Then we have
$$N(T)=\{(x_i)_{i\in\mathbb{N}}\in\ell^\infty : x_k=0 \textrm{ for } k \textrm{ odd}\}=R(T). $$
To get a $2$-dimensional example, consider the restriction of this example to the first two coordinates. |
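In matrix form, the $2$-dimensional example reads as follows (a trivial numpy check):

import numpy as np

T = np.array([[0, 0], [1, 0]])     # T(x1, x2) = (0, x1)
print(np.allclose(T @ T, 0))       # True: T^2 = 0, so R(T) ⊆ N(T)
print(np.linalg.matrix_rank(T))    # 1, so dim N(T) = 1 = dim R(T), hence N(T) = R(T)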
Borel probability measure vs Probability measure | In 2) the sigma-algebra is not specified.
In 1) the sigma-algebra is the sigma-algebra of all Borel sets, the one generated by open sets. |
What are the conditions on a linear time invariant system for a PI controller to converge to a specified set point? | Hint.
As an LTI system is easily Laplace transformable, the problem can be stated as
$$
\cases{
\left(sI-A\right)Y= B U\\
U = \left(k_p I+\frac{k_i I}{s}\right)E\\
E = W_r - W
}
$$
and putting it all together
$$
E = W_r - C Y = W_r - \frac 1s C\left(sI-A\right)^{-1}B\left(s k_p I+ k_i I\right)E
$$
and calling
$$
G = I+\frac 1s C\left(sI-A\right)^{-1}B\left(s k_p I+ k_i I\right)
$$
we have
$$
E = G^{-1}W_r
$$
and the error dynamics depend on the zeros of $\det(G)$.
As an example, consider
$$
A = \left(
\begin{array}{cc}
1 & 2 \\
-3 & 4 \\
\end{array}
\right),\ \ B = \left(
\begin{array}{c}
1 \\
0 \\
\end{array}
\right), \ \ C = \left(
\begin{array}{cc}
1 & 1 \\
\end{array}
\right)
$$
we have
$$
\det(G) = \frac{k_i s-7 k_i+k_p s^2-7 k_p s+s^3-5 s^2+10 s}{s \left(s^2-5 s+10\right)}
$$
and the finite zeros are given by
$$
(s-7) k_i+(s^2-7 s) k_p+s^3-5 s^2+10 s = 0
$$
These zeros depend continuously on $k_p,k_i$ and should be located in the open left half-plane to attain stability. This can be handled using the Routh-Hurwitz criterion. Additionally, to have asymptotically null error we should also verify that, once stability holds, the error dynamics obey
$$
\lim_{s\to 0}s E =0
$$ |
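A sympy sketch reproducing the example's $\det(G)$ (my own verification script; symbol names are arbitrary):

import sympy as sp

s, kp, ki = sp.symbols('s k_p k_i')
A = sp.Matrix([[1, 2], [-3, 4]])
B = sp.Matrix([1, 0])
C = sp.Matrix([[1, 1]])

G = sp.eye(1) + (C * (s*sp.eye(2) - A).inv() * B) * (s*kp + ki) / s
num, den = sp.fraction(sp.together(sp.simplify(G.det())))
print(sp.expand(num))   # k_i*s - 7*k_i + k_p*s**2 - 7*k_p*s + s**3 - 5*s**2 + 10*s
print(sp.factor(den))   # s*(s**2 - 5*s + 10)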
The convergence of an integral | We have that
$$\int_0^1 \frac{x}{\ln x}dx=\int_0^\frac12 \frac{x}{\ln x}dx+\int_\frac12^1 \frac{x}{\ln x}dx$$
and since $\lim_{x\to 0^+} \frac{x}{\ln x}=0$, everything boils down to $\int_\frac12^1 \frac{x}{\ln x}dx$, and by $x=1-y$ we obtain
$$\int_\frac12^1 \frac{x}{\ln x}dx=\int_0^\frac12 \frac{1-y}{\ln (1-y)}dy$$
which diverges by the limit comparison test with $\int_0^\frac12 \frac1ydy$ (the integrand is negative, comparable to $-\frac1y$); indeed
$$\lim_{y\to 0^+} \frac{\frac{1-y}{\ln (1-y)}}{\frac1y}=\lim_{y\to 0^+} (1-y)\frac{y}{\ln (1-y)}=-1$$
In a similar way one can show that for $b<1$
$$\int_0^1 \frac {x} { |\ln x|^b }dx$$
converges, while for $b\ge 1$ it diverges (compare with $\int_0^{\frac12}\frac{dy}{y^b}$ near $x=1$).
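A numerical illustration of the divergence (scipy; the cutoffs are arbitrary choices of mine): the truncated integrals $\int_0^{1-\varepsilon}\frac{x}{\ln x}\,dx$ drift off roughly like $\ln\varepsilon$ instead of approaching a limit.

import numpy as np
from scipy.integrate import quad

f = lambda x: x / np.log(x)
for eps in [1e-2, 1e-4, 1e-6]:
    val, _ = quad(f, 0, 1 - eps)
    print(eps, val)   # increasingly negative, ≈ log(eps) + const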
$3$ independent events $ A, B, C $ what is the $P(AB\cup C)$? | You have to subtract $P(A \cap B \cap C)$ since the term need not be zero.
\begin{align}
P(AB \cup C)&=P(AB)+P(C)-P(ABC)\\&=P(A)P(B)+P(C)-P(A)P(B)P(C)
\end{align}
A particular example of events $A,B,C$ that are mutually independent but not mutually exclusive: whether each of $3$ independent fair coin tosses comes up heads.
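A brute-force confirmation on that example, enumerating all $8$ equally likely outcomes (a small Python check):

from itertools import product

outcomes = list(product([0, 1], repeat=3))           # three independent fair coins
P = lambda ev: sum(ev(w) for w in outcomes) / 8
A = lambda w: w[0] == 1
B = lambda w: w[1] == 1
C = lambda w: w[2] == 1

lhs = P(lambda w: (A(w) and B(w)) or C(w))
rhs = P(A)*P(B) + P(C) - P(A)*P(B)*P(C)
print(lhs, rhs)                                      # 0.625 0.625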
showing covariant derivative of skew-symmetric rank $2$ tensor vanish | The tensor $A_{ij,k}$ satisfies two identities, namely $A_{ij,k} = A_{ik,j}$ and $A_{ij,k} = -A_{ji,k}$. If we alternate between the two, we find that
$$A_{ij, k} = A_{ik,j} = -A_{ki,j} = -A_{kj,i} = A_{jk,i} = A_{ji,k} = -A_{ij,k}$$
so $A_{ij,k} = 0$. |
Monotonicity of $\int^x_0\frac{F(y)}{F(x)}dy$ | The answer is: No, your integral is not always increasing. Consider the following strictly increasing, continuous CDF:
\begin{align}
F:\Bbb R&\to[0,1] \\
x&\mapsto
\begin{cases}
0, &x\le 0 \\
\frac{49}{10}x, & 0\le x \le \frac1{10} \\
\frac{39}{80}+\frac{x}{40}, & \frac1{10}\le x\le \frac9{10} \\
\frac{49}{10}x-\frac{39}{10}, & \frac9{10}\le x\le 1 \\
1, & 1\le x
\end{cases}.
\end{align}
(Plot of $F$ not shown here.)
Then we have
\begin{gather}
\int_0^{\frac9{10}} F = \frac{849}{2000}, \\
\int_0^1 F = \frac12,
\end{gather}
and thus
\begin{equation}
\frac{\displaystyle\int_0^{\frac9{10}} F}{F\left(\frac9{10}\right)}=\frac{\frac{849}{2000}}{\frac{51}{100}}=\frac{849}{1020}>\frac12=\frac{\displaystyle\int_0^{1} F}{F\left(1\right)}.
\end{equation}
Hence, $x\mapsto \dfrac{\int_0^x F}{F(x)}$ is not increasing (in fact, it decreases strictly on $]\frac9{10},1[$).
Remark. Since you can approximate any continuous function on $[0,1]$ arbitrary well (w.r.t. $\|\cdot\|_\infty$-norm) by smooth functions, it is also possible to construct a smooth counter-example. |
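A numerical sanity check of these values (a small numpy script of mine):

import numpy as np

def F(t):
    t = np.asarray(t, dtype=float)
    return np.where(t <= 0.1, 4.9*t, np.where(t <= 0.9, 39/80 + t/40, 4.9*t - 3.9))

x = np.linspace(0.0, 1.0, 200001)
I = np.concatenate(([0.0], np.cumsum((F(x)[1:] + F(x)[:-1]) / 2) * (x[1] - x[0])))
with np.errstate(divide='ignore', invalid='ignore'):
    g = I / F(x)                      # g(x) = (∫_0^x F) / F(x)
i = np.argmin(np.abs(x - 0.9))
print(g[i], g[-1])                    # ≈ 0.8324 and 0.5: g is not increasing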