A characterization of commutative rings with Krull dimension zero | If $R/\sqrt{0}$ is von Neumann regular, and $P$ is a prime ideal in $R$, then $R/P$ is an integral domain which is a quotient of $R/\sqrt{0}$ and so is von Neumann regular. This implies easily that $R/P$ is a field, so $P$ is maximal. Thus, $\dim(R)=0$.
Conversely, suppose that $\dim(R)=0$, and put $R'=R/\sqrt{0}$, so $R'$ is $0$-dimensional and reduced. Given $a\in R'$, put $S=\{a^n(1-ab):n\in\mathbb{N},\;b\in R'\}$, and note that this is multiplicatively closed. If $0\notin S$ then we can find a prime ideal $P$ not meeting $S$. Now $\dim(R')=0$ so $P$ is maximal, so either $a\in P$ or $1-ab\in P$ for some $b\in R'$. Either possibility contradicts the fact that $P\cap S=\emptyset$. Thus, we must have $0\in S$ after all, so $a^n(1-ab)=0$ for some $n$ and $b$. This means that $a(1-ab)$ is nilpotent, but $R'$ is reduced, so $a(1-ab)=0$. It follows that $R'$ is von Neumann regular. |
How to get the angle in a trapezoid formed by two triangles? | Angle chasing gives $\angle BAD = 50^{\circ}, \angle BCD = 70^{\circ}$. So $\triangle BAD$ and $\triangle BDC$ are isosceles. Hence we have $BA=BD=BC$. Since $BA=BC$, $\triangle BAC$ is isosceles, so $$ x = \frac{180^{\circ}-(80+40)^{\circ}}{2}=30^{\circ}$$ |
Properties of ellipse x-y form | In the familiar case $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$, the axes of the ellipse are $x=0$ and $y=0$. In this case, they are $x-y=0$ and $x+y=0$.
What we have here is a "standard" ellipse, rotated through $45^\circ$. |
Question about the following implication: If $f: S \to T \subseteq Y$, then $f:S\to Y$. | The function $f : S \to T$ gives you the function $i \circ f : S \to Y$, where $i : T \to Y$ denotes the inclusion function $i(t) = t$. The functions $f$ and $i \circ f$ are not the same, but sometimes notation is abused by writing $f : S \to Y$ instead of $i \circ f$. It depends on the proof whether this is harmless or not.
Edited:
The question is based on Collection of all partial functions is a set. In my opinion the existing answers are not sufficient to prove Tao's claim.
Let us assume as in the accepted answer that it has been shown that for any two sets $S$ and $T$ the collection $T^S$ of all functions $S \to T$ is a set.
Then we can form the union
$$\bigcup_{(X',Y') \in P(X) \times P(Y)} (Y')^{X'}$$
which is again a set. This set consists of all partial functions from $X$ to $Y$. Note that the sets $(Y')^{X'}$ are pairwise disjoint.
However, if we identify partial functions $f : X' \to Y' \subset Y$ with functions $f : X' \to Y$, then certainly
$$\bigcup_{X' \in P(X)} Y^{X'} \subsetneqq \bigcup_{(X',Y') \in P(X) \times P(Y)} (Y')^{X'} .$$
Edited:
See Asaf Karagila's answer to Set of all partial functions exists for a proof not using the product $P(X) \times P(Y)$. We only need the power set axiom, Lemma 3.4.8 and twice the axiom of union.
But let me emphasize again that $Y^{X'}$ is not the same as $\bigcup_{Y' \in P(Y)} (Y')^{X'}$. |
Prove that for any polynomial p with p(x)>0 for all $x \in \mathbb{R}$, $\exists \alpha > 0$ s.t. $p(x)>\alpha \forall x \in \mathbb{R}$ | If $p$ is a constant this is obvious. Let $p(x)=\sum_{k=0}^{N} a_k x^{k}$, $a_N \neq 0$. Suppose $N$ is odd. Then $p(x) \to -\infty $ as $x \to \infty$ or as $x \to -\infty$ according as $a_N <0$ or $a_N>0$. Since $p$ is positive it follows that $N$ must be even, and likewise that $a_N>0$, so $p(x)\to+\infty$ as $|x|\to\infty$. Hence there exists $T$ such that $p(x) >1$ for all $x$ with $|x| >T$. On $[-T,T]$ the continuous function $p$ has a minimum (which must be positive). The rest should now be clear. |
$M$ orientable implies $H_{n-1}(M, \mathbb{Z})$ is free Abelian group. | An orientable manifold is $R$-orientable for any $R$. Now if $M$ is a closed connected $n$-dimensional $R$-orientable manifold, then $H_n(M;R)=R$.
The universal coefficient theorem for homology says that if $C$ is a chain complex of free abelian groups, then there is a natural short exact sequence $0\to H_m(C)\otimes G\to H_m(C;G)\to \operatorname{Tor}(H_{m-1}(C),G)\to 0$, which splits, for all $m$ and $G$.
And $\operatorname{Tor}$ has the following properties:
$\operatorname{Tor}(A,B)= \operatorname{Tor}(T(A),B)$ where $T(A)$ is the torsion subgroup of $A$, and $\operatorname{Tor}(\mathbb{Z}_n,B)= \ker(B\xrightarrow{\ n\ } B)$; in particular $\operatorname{Tor}(\mathbb{Z}_n,\mathbb{Z}_n)=\mathbb{Z}_n$.
Since the homology groups of $M$ are finitely generated, if $H_{n-1}(M;\mathbb{Z})$ contained torsion, then there would exist a prime $p$ such that, by the universal coefficient theorem, $H_n(M;\mathbb{Z}_p)$ would be larger than $\mathbb{Z}_p$, a contradiction. |
Finding out the trajectory of the particle in polar coordinates | Let's say that the position of particle 1 at some time $t$ is $(x_1,y_1)$, which we can write in polar coordinates as $(r_1\cos{\theta_1},r_1\sin{\theta_1})$. By symmetry, particle 2 is going to be at the same distance from the origin ($r_2=r_1=r$), but at an angle $\theta_2=\theta_1+\frac{2\pi}{n}$. The coordinates of particle 2 are $(x_2,y_2)=(r\cos{\theta_2},r\sin{\theta_2})$. The tangent to the trajectory at particle 1 location points towards particle 2, so
$$\frac{dy}{dx}=\frac{y_2-y_1}{x_2-x_1}=\frac{r(\sin\theta_2-\sin\theta_1)}{r(\cos\theta_2-\cos\theta_1)}=\frac{\sin(\theta+\frac{2\pi}{n})-\sin\theta}{\cos(\theta+\frac{2\pi}{n})-\cos\theta}$$ writing $\theta$ for $\theta_1$ from here on.
The next step is writing $\frac{dy}{dx}$ in terms of $r$ and $\theta$, and let's say that $r=f(\theta)$. It is relatively straightforward to get this value:
$$\frac{dy}{dx}=\frac{\frac{dy}{d\theta}}{\frac{dx}{d\theta}}=\frac{r\cos\theta+r'\sin\theta}{-r\sin\theta+r'\cos\theta}$$
where $r'=\frac{dr}{d\theta}$. Let's call for now $\frac{dy}{dx}=\alpha$. Then $$\alpha(-r\sin\theta+r'\cos\theta)=r\cos\theta+r'\sin\theta\\r'(\sin\theta-\alpha\cos\theta)=-r(\alpha\sin\theta+\cos\theta)$$
and therefore:
$$\frac{dr}{r}=-\frac{\alpha\sin\theta+\cos\theta}{\sin\theta-\alpha\cos\theta}d\theta$$
Plug in the value of $\alpha$ in terms of $\theta$ and $n$, and integrate both sides. For $n=4$, the last equation reduces to $$\frac{dr}{r}=-d\theta$$ which is easy to integrate, yielding the exponential (logarithmic) spiral $r=Ce^{-\theta}$. |
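For a numerical sanity check of the derivation (an illustrative sketch, assuming numpy/scipy are available; $n=4$ and the initial radius $r(0)=1$ are arbitrary choices), one can integrate the ODE directly and compare against $r=e^{-\theta}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 4                     # number of particles (illustrative)
d = 2 * np.pi / n         # angular offset between neighbouring particles

def rhs(theta, r):
    # dr/dtheta = -r (alpha sin(theta) + cos(theta)) / (sin(theta) - alpha cos(theta)),
    # with alpha's numerator/denominator kept separate to avoid spurious poles
    na = np.sin(theta + d) - np.sin(theta)
    da = np.cos(theta + d) - np.cos(theta)
    return -r * (na * np.sin(theta) + da * np.cos(theta)) / (da * np.sin(theta) - na * np.cos(theta))

sol = solve_ivp(rhs, (0.0, 3.0), [1.0], dense_output=True, rtol=1e-8)
theta = np.linspace(0.0, 3.0, 7)
print(np.max(np.abs(sol.sol(theta)[0] - np.exp(-theta))))  # tiny: matches r = e^{-theta}
```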
lamplighter group - Growth from Fibonacci tree | Lemma: The number of 0-1 strings of length $n$ avoiding $'00'$ is at most $C\varphi^n$.
Proof: Let $\Sigma_n$ be the set of such strings. Note $\Sigma_n = \{01c : c \in \Sigma_{n-2}\}\cup\{1c : c \in \Sigma_{n-1}\}$, so $|\Sigma_n| \le |\Sigma_{n-1}|+|\Sigma_{n-2}|$, yielding a Fibonacci recurrence. $\square$
Main Problem: Let $f$ correspond to flip, $R$ to right, and $L$ to left. Note all elements of $G_1$ that can be achieved in at most $n$ steps can be written as a sequence of $f$'s, $L$'s, and $R$'s such that (1) there are no two $f$'s in a row, and (2) there is no $L$ appearing after an $R$ (i.e., we can assume WLOG that we go left for a bit and then only go right or flip from then on). For $0 \le k \le n$, we may consider the set of such sequences that don't go left after the $k^{th}$ step. Since the first $k$ steps comprise $f$'s and $L$'s without two $f$'s in a row, there are, by the Lemma, at most $C\varphi^k$ possibilities for the first $k$ steps, and then since the last $n-k$ steps are comprised of only $R$'s and $f$'s with no two $f$'s in a row, there are at most $C\varphi^{n-k}$ possibilities for the last $n-k$ steps. This gives me $\sum_{k=0}^n C^2\varphi^k\varphi^{n-k} = C^2(n+1)\varphi^n$. Although the authors say $C\varphi^n$, my upper bound $C^2(n+1)\varphi^n$ still gives $\varphi$ as an upper bound for the growth rate. |
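The Lemma is easy to sanity-check by brute force (a small script, not part of the original argument):

```python
from itertools import product

def count_avoiding(n):
    """Count 0-1 strings of length n containing no '00'."""
    return sum("00" not in "".join(s) for s in product("01", repeat=n))

counts = [count_avoiding(n) for n in range(1, 15)]
print(counts)  # 2, 3, 5, 8, 13, ... -- shifted Fibonacci numbers, so O(phi^n)
assert all(counts[i] == counts[i-1] + counts[i-2] for i in range(2, len(counts)))
```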
Numbers on a circle: how many arc sums can be positive? | This isn't a complete answer (no upper bounds), but some more general constructions and remarks that make it too long for a comment.
As joriki pointed out, we can assume $n \ge 2k$.
The first construction is due to Aravind (see the comment above). Suppose $n = qk + r$, $q \ge 2$ and $1 \le r \le k-1$ (if $r = 0$, then $k | n$, and then we know there can be $n - k$ positive sums, and this is best possible). Now for $1 \le i \le n$, set
$ a_i = \begin{cases} k - 1 + \frac{r}{q} & \mbox{if } k | i, \\ -1 &\mbox{otherwise.}\end{cases}$ We then have $\sum_{i=1}^n a_i= 0$.
Any sum containing one of the $a_{jk}$ is positive, so the only negative sums are $S_{qk+1}, S_{qk+2}, ..., S_{qk+r}$. Thus there are $n-r$ positive sums. If $r = 1$, this is clearly optimal.
Note also that any placement of the $q$ positive terms, provided they are at least $k$ apart, works, since all $qk = n-r$ sums involving the positive terms will be positive.
When $r > \frac{k}{2}$, there is a construction that does better: it gives $n - k + r$ positive sums instead. If $r = k-1$, this is again best possible. For $1 \le i \le n-1$, set $a_i = \begin{cases} - \left(k-1- \frac{1}{2q} \right) &\mbox{if } k|i, \\ 1 &\mbox{otherwise.}\end{cases}$ Let $a_n = -\left(r - \frac12 \right)$. We again have $\sum_{i=1}^n a_i = 0$.
Any sum involving only one negative term is positive, hence the only negative sums have two negative terms. The only such sums have both $a_{qk}$ and $a_n$, and are $S_{n-k+1}, S_{n-k+2}, ..., S_{n-r}$ (recall that $qk = n-r$). Hence there are $k-r$ negative sums, and thus $n-k+r$ positive sums.
Finally, I'd like to point out that if the answer had indeed been $n-k$, this would have proven the Manickam-Miklos-Singhi conjecture. The MMS conjecture has been the focus of much recent research. It claims that if $n \ge 4k$, then for any collection of $n$ numbers summing to $0$, there are at least $\binom{n-1}{k-1}$ subsets of size $k$ with non-positive sum. This can be achieved, for example, by taking one number to be $-(n-1)$, and the others to be $1$. The difficulty is in proving that there cannot be fewer.
This is known to be true if $k | n$, or if $n \ge 10^{46} k$ (a result of Pokrovskiy). The condition $n \ge 4k$, or something similar, is necessary, since counterexamples are known for smaller $n$ (e.g. $n = 3k+1$).
Now suppose in this cyclic version, we could never have more than $n-k$ positive sums, so always had at least $k$ non-positive sums. If we take a random cyclic ordering of the $n$ elements, every cyclic $k$-set is a uniformly random $k$-set. Hence if at least $k$ have non-positive sum, that implies that at least a $\frac{k}{n}$-proportion of all $k$-sets, or $\binom{n-1}{k-1}$ $k$-sets, have non-positive sum, proving the conjecture.
Sadly, however, the above examples show that this cannot be used to prove the Manickam-Miklos-Singhi conjecture (at least not without additional work). |
probability of sequence of integers | I did it in Excel. The entries are the number of acceptable strings of length 1 through 4 ending in the number on the left. $$\begin {array} {c c c c c c}1&1&8&57&413\\2&1&7&50&362\\3&1&7&51&368\\4&1&7&51&367\\5&1&7&51&367\\6&1&7&51&367\\7&1&7&51&367\\8&1&7&51&368\\9&1&7&50&362\\10&1&8&57&413\\&&&&3754\end {array}$$ The total is $3754$ out of $10000$ for a chance of $0.3754$ |
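The spreadsheet computation is equally easy to reproduce as a dynamic program. Judging from the counts, the rule appears to be that consecutive entries differ by at least $2$ (this rule is inferred from the table, since the question is not quoted here):

```python
# DP: count strings over 1..10 whose consecutive entries differ by at least 2
counts = {d: 1 for d in range(1, 11)}   # length-1 strings
for _ in range(3):                      # grow to length 4
    counts = {d: sum(c for p, c in counts.items() if abs(p - d) >= 2)
              for d in range(1, 11)}
print(counts)                           # matches the last column: 413, 362, 368, ...
print(sum(counts.values()) / 10**4)     # 0.3754
```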
Product of all prime numbers on the interval [m+1, 2m] is $\le \left(\begin{matrix} 2m \\m\end{matrix}\right)$ | ...maybe somehow show that all the prime numbers on that interval are divisors of the combination number...
Yes, they are. How many times does such a prime appear in the numerator $(2m)!=1\cdot 2\cdot 3\cdots (2m-1)\cdot 2m$? How many times does it appear in the denominator $(m!)^2=1^2\cdot 2^2\cdot 3^2\cdots (m-1)^2\cdot m^2$? |
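A short computational check of the divisibility (illustrative; assumes sympy's `primerange` is available):

```python
from math import comb, prod
from sympy import primerange

for m in range(2, 50):
    P = prod(primerange(m + 1, 2 * m + 1))  # product of primes in (m, 2m]
    # each such prime divides (2m)! exactly once and (m!)^2 not at all
    assert comb(2 * m, m) % P == 0 and P <= comb(2 * m, m)
print("verified for m = 2..49")
```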
Is the property "being a derivative" preserved under multiplication and composition? | Let me address just one of your problems.
Problem. Suppose that $f$ and $g$ are both derivatives. Under
what conditions can we assert that the product $fg$ is
also a derivative?
The short answer is that this is not true in general. In fact even if we assume that $f$ is continuous and $g$ is a derivative the product need not
be a derivative. However if we strengthen that to assuming that $f$ is not merely continuous but also of bounded variation, then indeed the product with any derivative would be a derivative.
This is an interesting problem and leads to interesting ideas.
For the statements here and an in-depth look at the problem, here are some references:
Bruckner, A. M.; Mařík, J.; Weil, C. E. Some aspects of products of
derivatives. Amer. Math. Monthly 99 (1992), no. 2, 134–145.
Fleissner, Richard J. Distant bounded variation and products of derivatives. Fund. Math. 94 (1977), no. 1, 1–11.
Fleissner, Richard J. On the product of derivatives. Fund. Math. 88
(1975), no. 2, 173–178.
Fleissner, Richard J. Multiplication and the fundamental theorem of
calculus—a survey. Real Anal. Exchange 2 (1976/77), no. 1, 7–34.
Foran, James. On the product of derivatives. Fund. Math. 80 (1973), no. 3, 293–294.
I will edit in some links when I find them. Foran and Fleissner were close childhood friends who ended up pursuing their PhD at the same time in Milwaukee. Fleissner died in an automobile accident in 1983.
NOTE ADDED. Elementary students are not going to want to pursue this topic to quite this depth. But here is an exercise aimed at this level that they might find entertaining.
Exercise. Consider the function $$f(x)=\begin{cases} \cos \frac1x, & x\not=0 \\ 0 &x=0 \end{cases} $$ Show that the function $f$ is a
derivative but that its square $f^2$ is not. |
$\ker \varphi_p \subset (\ker \varphi)_p$ where $(\cdot)_p$ is taking the stalk of sheaves at the point $p$ (Diagram inside!). | Suppose $x \in F_p$ is in $\ker(\varphi_p)$. Then there exist a neighborhood $U$ of $p$ and $\bar x \in F(U)$ such that $(\bar x)_p = x$. Furthermore, since $(\bar x)_p \in \ker(\varphi_p)$, we have that $[\varphi_U(\bar x)]_p = 0 \in G_p$. Therefore, there exists some neighborhood $V \subseteq U$ of $p$ such that $[\varphi_U(\bar x)] |_V = 0$. But then $[\varphi_U(\bar x)]|_V = \varphi_V(\bar x |_V)$. So, $\bar x |_V \in [\ker(\varphi)](V)$, and $[\bar x |_V]_p = x$. |
Elementary congruence statement proof: $a \equiv b \mod m \Longrightarrow a^{k} \equiv b^{k} \mod m $ | you already said it yourself. =D $a \equiv b \mod m$ if and only if $m | a - b$. Now if $a \equiv b \mod m$, then $m | a - b$. Since $a - b | a^k - b^k$, then it follows that $m | a^k - b^k$. |
Proof-check: Using the definition to find the derivative of $f(x)=\sqrt{x}$. | When we try to find the derivative of a function at $x_o$, what we are basically trying to find is the slope of $f(x)$ at that point. So when you try to find the derivative of a function at a certain point, this $x_o$ must be in the domain of the function. Otherwise, you cannot find the derivative at a point outside the domain, because you cannot find the slope there.
The fact that $x$ approaches $x_o$ happens just for the reason I explained before: you want to find the slope. As you already know, you can find the slope of a straight line by knowing two points of this line. But what does it mean to find the slope at a certain point? It basically means to find the slope from $x$ to $x_o$, where $x$ is very close to $x_o$, but not equal to it. It is like thinking that the function on the interval $(x,x_o)$ is a straight line, because the interval is very small. If $x=x_o$, then you cannot find any slope.
The substitution of $x_o$ depends on what kind of limit you have. If the limit ends up being of the form $\frac{0}{0}$ then you must try other tricks.
I hope I helped you! |
Number of prefixes that match suffixes | The sequence $1,2,4,7,11,17,...$ seems to be
$$\sum_{i=0}^n\kappa(i)$$where $\kappa(i)$ is the kappa (auto)correlation function.
No formula for $\kappa(i)$ is known. It appears in OEIS as A005434 and was introduced in this article:
L. J. Guibas and A. M. Odlyzko, Periods in Strings, Journal of Combinatorial Theory A 30:1 (1980) 19-42 |
Generalization of Dirichlet's theorem | This is false as stated. Take, for example, $p(n) = n^2 + n$, which is always even. The correct condition is that $p$ is irreducible with positive leading coefficient and there exists no $d > 1$ such that $d \mid p(n)$ for all $n$. With this condition this is a big open problem, the Bunyakovsky conjecture, and it is open for any such polynomial $p$ of degree greater than $1$. |
Find the value of $A=\frac{3\sqrt{8+2\sqrt7}}{\sqrt{8-2\sqrt{7}}}-\frac{\sqrt{2\left(3+\sqrt7\right)}}{\sqrt{3-\sqrt7}}$ | $\begin{align}&A = \frac{3\sqrt{8+2\sqrt7}}{\sqrt{8-2\sqrt7}}\frac{\sqrt{8+2\sqrt7}}{\sqrt{8+2\sqrt7}}- \frac{\sqrt2\sqrt{3+\sqrt7}}{\sqrt{3-\sqrt7}}\frac{\sqrt{3+\sqrt7}}{\sqrt{3+\sqrt7}}\\\Rightarrow&A=\frac{3(8+2\sqrt7)}{6}-\sqrt2\left(\frac{3 +\sqrt7}{\sqrt2}\right) = 4+\sqrt7-3-\sqrt7=1\end{align}$ |
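A quick floating-point check of the simplification (not needed for the proof, but reassuring):

```python
from math import sqrt

A = 3 * sqrt(8 + 2 * sqrt(7)) / sqrt(8 - 2 * sqrt(7)) \
    - sqrt(2 * (3 + sqrt(7))) / sqrt(3 - sqrt(7))
print(A)  # approximately 1.0
```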
Finding minimum value of relation with quadratic coefficient | Wlog we can assume $a=1$. So the task becomes to minimize $F(b,c)=\frac{1+b+c}{b-1}$ with $b^2\leq4c$.
Let us start by taking the derivative with respect to $c$, $\frac{\partial F}{\partial c}=\frac{1}{b-1}=0$. We see that we have no extremal points. So we turn to the boundary $F(b,\tfrac{b^2}{4})$. Again looking for extremal points we differentiate this time with respect to $b$ and end up with $\frac{b^2-2b-8}{4(b-1)^2}=0$ which implies $b\in\{-2,4\}$. Inserting yields $F(-2,1)=0$ and $F(4,4)=3$. |
For which polynomials $p(x)$ is $p(p(x))+p(x)$=$x^4+3x^2+3$, for all $x \in \mathbb{R}$ | Identifying the coefficient of $x^4$ one gets $a=1$ and then the coefficient of $x^3$ you get $b=0$ hence $c=1$ or $c=-3$. Then check that $x^2+1$ is indeed a solution while $x^2-3$ is not. |
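Assuming, as in the question, that $p(x)=ax^2+bx+c$, the final check is a couple of lines of sympy:

```python
from sympy import symbols, expand

x = symbols('x')
for p in (x**2 + 1, x**2 - 3):
    print(p, '->', expand(p.subs(x, p) + p))
# x**2 + 1 -> x**4 + 3*x**2 + 3   (a solution)
# x**2 - 3 -> x**4 - 5*x**2 + 3   (not a solution)
```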
Prove the n-th power of a matrix is the null matrix | A non-constant entire function that has constant modulus on the unit circle must be a constant multiple of the power function. It follows that $\det(zI+AB^{-1})=z^n$ and $AB^{-1}$ is nilpotent. As $B$ is invertible and it commutes with $A$, $A$ must also be nilpotent.
When $A,B$ do not commute, the assertion does not necessarily hold. It is easy to construct a counterexample such that $AB^{-1}$ is nilpotent but $A$ isn't. E.g. set $AB^{-1}=\pmatrix{0&1\\ 0&0}$ and $B=\pmatrix{0&1\\ 1&0}$, so that $A=\pmatrix{1&0\\ 0&0}$. |
Find elements in $K[x,y]$ which generates it as an $A$-module. | $A=K[x,y]^{S_2}$ is the ring of symmetric polynomials in two variables, that is, $A=K[x+y,xy]$. Now I think you can find the desired element(s). |
Are there any homomorphisms from integers into finite rings other than modulo $n$? | The kernel of a homomorphism needs to be an ideal. All proper ideals of $\mathbb{Z}$ are principal, i.e., in one-to-one correspondence to elements $n=0,1,2, \dots$.
If you ask about group homomorphisms, the answer is the same.
Of course, I assume implicitly for $n=0$ that you mean $\mathbb{Z} / 0\;\mathbb{Z} :=\mathbb{Z}$. It seems that you forgot about the map $\mathbb{Z} \rightarrow \mathbb{Z}, n \mapsto n$. |
Proving Derivative of $e^x$ | Actually, the simplest way to find the derivative of $e^x$ is to first define $\ln(x)= \int_1^x \frac{1}{t}dt$. From that it is easy to prove the usual properties of $\ln(x)$, that $\ln(xy)= \ln(x)+ \ln(y)$ and $\ln(x^a)= a\ln(x)$. And, of course, that $\frac{d\ln(x)}{dx}= \frac{1}{x}$ follows from the fundamental theorem of calculus.
Then define $e^x$ to be the inverse function to $\ln(x)$. Then it is immediate that, with $y= e^x$, $\frac{dy}{dx}= \frac{1}{\frac{dx}{dy}}= \frac{1}{\frac{1}{y}}= y= e^x$. |
Derivation of the energy equation in fluid dynamics | The square of the norm of the velocity field is
$$|\mathbf{u}|^2=u_x^2+u_y^2+u_z^2.$$
Taking the partial derivative with respect to time we get
$$\frac{\partial}{\partial t}|\mathbf{u}|^2=2u_x\frac{\partial u_x}{\partial t}+2u_y\frac{\partial u_y}{\partial t}+2u_z\frac{\partial u_z}{\partial t}= 2\mathbf{u} \cdot\frac{\partial \mathbf{u}} {\partial t}.$$
Hence,
$$\mathbf{u} \cdot\frac{\partial \mathbf{u}} {\partial t}=\frac1{2}\frac{\partial}{\partial t}|\mathbf{u}|^2$$ |
Maclaurin expansion of order 5 of a composite function $\log \frac{\cos x+e}{x+1}$ | Near $0$, you have$$\cos(x)+e=1+e-\frac{x^2}2+\frac{x^4}{24}+O(x^6),$$and, near $1+e$, you have\begin{multline}\log(x)=\log (1+e)+\frac{x-e-1}{1+e}-\frac{(x-e-1)^2}{2(1+e)^2}+\frac{(x-e-1)^3}{3(1+e)^3}+\\-\frac{(x-e-1)^4}{4(1+e)^4}+\frac{(x-e-1)^5}{5(1+e)^5}+O\left((x-e-1)^6\right),\end{multline}and therefore, again near $0$, you have$$\log\bigl(\cos(x)+e\bigr)=\log(1+e)-\frac{x^2}{2(1+e)}+\frac{(-2+e)x^4}{24 (1+e)^2}+O(x^6).$$What I did to get this was to take the Taylor polynomial of order $5$ at $1+e$ of $\log(x)$, to replace $x$ by $1+e-\frac{x^2}2+\frac{x^4}{24}$ and to ignore all terms with degree greater than $5$.
On the other hand, we have, near $0$,$$\log(x+1)=x-\frac{x^2}2+\frac{x^3}3-\frac{x^4}4+\frac{x^5}5+O\left(x^6\right),$$and so\begin{align}\log\left(\frac{\cos(x)+e}{x+1}\right)&=\log\bigl(\cos(x)+e\bigr)-\log(x+1)\\&=\log(1+e)-x+\frac{ex^2}{2+2e}-\frac{x^3}3+\frac{\left(4+13e+6e^2\right)x^4}{24 (1+e)^2}-\frac{x^5}{5}+O\left(x^6\right).\end{align} |
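The whole computation can be double-checked with sympy's series expansion:

```python
from sympy import symbols, log, cos, E, series

x = symbols('x')
print(series(log((cos(x) + E) / (x + 1)), x, 0, 6))
# the coefficients agree with the expansion above, e.g. E*x**2/(2 + 2*E) at order 2
```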
Confusion about limits, specifically the definition of a limit | No, by definition, a limit never "hits" the target value of the argument. To evaluate a limit, you must not use the value of $f(x_0)$.
Intuitively, a limit is the value we should reach for the given value of the argument, by comparing to the values in the neighborhood. It is not the value we do reach by using the function value.
When the two values are defined and coincide (I mean $\lim_{x\to x_0}f(x)$ and $f(x_0)$), we say that the function is continuous. But they don't have to. |
Computing an iterative integer sequence modulo composite n, where the iteration function involves division by a factor of n. | "Were it not for that pesky division by three, we could do all the calculations in the ring $\Bbb Z/9\Bbb Z$."
The division by three is easily disposed of. Just define $h(3n)=3g(n)$, so that $h^j(3n)=3g^j(n)$. Then $$h(27k+3r) = a_r \times 4^k + b_r$$ and we can compute (the residues of) $g^j$ just by computing (them of) $h^j$.
The real problem is the $4^k$. I had the mistaken idea that if $k \equiv l \pmod m$ then $b^k \equiv b^l \pmod m$, but this isn't the case, so doing the calculations in the ring $\Bbb Z/27\Bbb Z$ won't work.
Still the above does answer the problem I identified, and since the rest of the question is irredeemably wrong-headed, I'm going to mark this one as answered. |
Question about two simple problems on covering spaces | on the first one: Take $z\in Z$ and let $U$ be an evenly covered neighborhood of $z$. Then $j^{-1}(U)=\cup_{i\leq k} V_i$ (disjoint union) and $j^{-1}(z)= \{b_1,...b_k\}$ where $b_i \in V_i$. Now choose $b_i \in X_i \subset V_i$ where now the $X_i$'s are evenly covered by $p$. Then let $Y= \cap_{i\leq k} j(X_i)$. $Y$ is open since this is a finite intersection. Now $Y$ is evenly covered by the composite.
on the second one: Suppose $p$ were not injective. Let $p(x_1)=p(x_2)=y$. Then take a path $\gamma$ from $x_1$ to $x_2$. $p\gamma$ is then a loop at $y$ homotopic to the trivial loop. So when the trivial loop and $p\gamma$ lift to paths starting at $x_1$ the two lifts must end at the same point. But the lift of the trivial loop ends at $x_1$ and the lift of $p\gamma$, which is $\gamma$, ends at $x_2$. |
Determine the value of $\lim_{n \rightarrow \infty} \sum_{i=1}^{n}\sum_{j=1}^{i} \frac{j}{n^3}$ | Use the formula for $\sum_{j=1}^i j$. |
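Spelling the hint out (using $\sum_{j=1}^i j=\frac{i(i+1)}{2}$ and the standard formulas for $\sum i$ and $\sum i^2$):
$$\sum_{i=1}^{n}\sum_{j=1}^{i} \frac{j}{n^3}=\frac{1}{n^3}\sum_{i=1}^{n}\frac{i(i+1)}{2}=\frac{1}{2n^3}\left(\frac{n(n+1)(2n+1)}{6}+\frac{n(n+1)}{2}\right)\xrightarrow[n\to\infty]{}\frac{1}{6}.$$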
Prove that $\lim_{n \to \infty} \frac 1n \sum_{j=1}^n f(j\gamma)=\int_0^1 f(x) dx $ | Look up equidistribution: https://en.wikipedia.org/wiki/Equidistributed_sequence
The sequence $\{j\gamma\}$, where $\{x\}$ is the fractional part of $x$, is equidistributed for any irrational $\gamma$. This implies that the average converges to the integral. The proof is non-trivial. |
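A numerical illustration of the statement (the choices $f(x)=x^2$ and $\gamma=\sqrt2$ are arbitrary):

```python
import numpy as np

gamma = np.sqrt(2.0)        # any irrational gamma works
f = lambda x: x**2          # integral of f over [0, 1] is 1/3
n = 10**6
avg = f((np.arange(1, n + 1) * gamma) % 1.0).mean()
print(avg)                  # approximately 0.3333
```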
Find limit without L'Hopital's rule? | Following Michael's suggestion, if L'Hospital's rule is not allowed, perhaps the derivative is? Because then we can rewrite his expression. First sub $2x=t$ so that the limit can be taken for $t$ to $2\pi$ on the expression $\frac{\ln(\tan 0.125t)}{t-2\pi}$. Now in that numerator we can "add" the term $\ln(\tan\frac{\pi}{4})$ which is really zero (where did I get this from?) and thus we can use the definition of the derivative, in this case of $\ln(\tan\frac{t}{8})$ at $t=2\pi$. I want you now to do that work and then plug in $t=2\pi$. Don't forget to exponentiate the result to get the actual limit ($e^{\frac{1}{4}}$) |
function with range ${\mathbb{C}\backslash (-\infty,0]}$ | As long as $f$ is holomorphic in the unit disc, and its complex derivative is non-zero in the unit disc, then the unit disc will be conformally equivalent to the range of $f$ iff the range of $f$ is simply connected, because of the Riemann mapping theorem. |
Size of fibers versus degrees of residue fields in a morphism of affine schemes | Write $f(x)=x^2+1$, and $R= \mathbb Z[x]/f(x)$. $R$ is of course (isomorphic to) $\mathbb Z[i]$, where one writes/identifies $$ i = x \pmod{f(x)}.$$
Write $\phi \colon\, \mathop{\rm spec} R \to \mathop {\rm spec } \mathbb Z$ for the morphism of schemes - and for the continuous map on the underlying topological spaces (abuse of notation?) - corresponding to the (finite) ring extension/inclusion $ {\mathbb Z} \hookrightarrow R$. For any prime $\frak p$ of $R$, by definition, $\phi (\frak p) = \frak p \cap \mathbb Z$.
One has of course that $\phi^{-1}(\, (0)\,) = \{(0)\}$.
Suppose from now on that $p$ is a non-zero prime of $\mathbb Z$. Then $A=R/pR= \mathbb F_p[x]/ f(x)$ is an algebra over $\mathbb F_p$ of dimension $2$, where we also consider $f(x)$ as a polynomial of $\mathbb F_p[x]$ (abuse of notation!).
Now, the (very small) lattice of ideals of $R$ containing $pR$ is isomorphic to the lattice of ideals of $A$. But the latter corresponds to the factorization of $f(x)=x^2+1$ over $\mathbb F_p$. In any case, the primes of $A$ (which are in fact maximal) are in bijection with the set $\phi^{-1} (\,(p)\,)$.
Now, since $ f'(x)=2x$, and $ f(0)\not=0 \in \mathbb F_p$, the polynomial has repeated roots (is NOT separable) if and only if $p=2.$
In fact, if $p=2$, then $ f(x) = (x+1)^2\in \mathbb F_2[x]$: we have that $(x+1)$ is the unique prime of $A$, and the unique prime of $R$ over $2$ is $\frak p= (i+1,2)$. On the other hand, $R$ is a unique factorization domain (in fact, a Euclidean ring), so $\frak p$ is principal: $\frak p = (i+1)$, and $2= -i(i+1)^2$. [To match up with your comment, if $\epsilon = x+1 \in A$, then $\epsilon \not =0$, but $\epsilon^2=0$, and $A =\mathbb F_2[\epsilon]$.] In any case, we see that $\phi^{-1}(\,(2)\,) = \{\frak p\}$, and ${R/\frak p} \simeq \mathbb F_2$.
Suppose now $p\not = 2$. Then, by degree considerations, the separable $f(x)$ is either irreducible, or factors into two (distinct - relatively prime) linear factors. The latter occurs iff $f(x)$ has a root in $ \mathbb F_p$ - i.e., if and only if there exists $u\in \mathbb F_p$ such that $u^2 = -1$.
In the inert case - where $f(x)$ is an irreducible poly of $\mathbb F _p[x]$, $A$ is a field (of degree $2$ over $\mathbb F_p$), and its unique (prime) ideal $(0)\subset A$ corresponds to the maximal ideal ${\frak p} = pR\subset R$ - the unique prime above $(p)\subset \mathbb Z$. Therefore, we see that $\phi^{-1}(p) = \{\frak p\}$, and $R/{\frak p} \simeq \mathbb F_{p^2}$.
In the other (split) case,
$ f(x) = (x-u)(x+u) \in \mathbb F_p[x],$ and, by the Chinese remainder theorem,
$$ A\simeq \mathbb F_p \times \mathbb F_p$$
as $\mathbb F_p$ algebras, (i.e., $A$ is isomorphic to a product (in the cat of $\mathbb F_p$ algebras) of fields of degree $1$) where the isomorphism is given by
$$ x \mapsto (u,-u).$$
The above corresponds - though it does not matter for your question, per se - to the factorization into maximal (prime!) ideals in $R$ of
$$ pR = {\frak p} \bar{\frak p}= (i-u, p)\,(i +u ,p),$$ for some $u$ (abuse of notation!) in $\mathbb Z$. [Edit - In fact, as $R$ is a UFD, there exist $a$ and $b \in \mathbb Z$ such that $\frak p =( a + bi)$ and $\bar {\frak p} = (a -ib)$; in fact, $p = a^2 + b^2$. ] In any event, in this case, one sees that there are $2$ primes above $(p)$: $\phi^{-1}(\,(p)\,) = \{\frak p,\bar{\frak p} \}$, and $R/{\frak p} \simeq R/\bar{\frak p} \simeq \mathbb F_p$.
To add a bit of number theory, where $p\not =2$: the multiplicative group $\mathbb F_p^*$ is cyclic of order $p-1$. As $(-1)^2=1\in \mathbb F_p$, $x^2=-1$ has a solution iff $4 | p-1$, or, equivalently, the prime $p$ splits in $R$ (and thus $\#\phi^{-1} (\,(p)\, ) =2 $) iff
$$p \equiv 1 \pmod 4.$$ |
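The split/inert dichotomy is easy to observe computationally (sympy's `sqrt_mod` returns a square root of $-1$ mod $p$ when one exists):

```python
from sympy import primerange
from sympy.ntheory import sqrt_mod

for p in primerange(3, 50):
    u = sqrt_mod(-1, p)                # None when x^2 + 1 is irreducible mod p
    assert (u is not None) == (p % 4 == 1)
    print(p, "splits" if u is not None else "is inert")
```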
Problem on finding the maxima of a function | For the ease of calculation, let $t=6-(2\sin A+3\cos A)$
$$\implies6-\sqrt{2^2+3^2}\le t\le6+\sqrt{2^2+3^2}$$
$$g(t)=(10-t)^2\cdot t^3$$
Using the AM-GM inequality for $10-t,\,t>0\iff0<t<10$:
$$\dfrac{10}5=\dfrac{2\cdot\dfrac{10-t}2+3\cdot\dfrac t3}{2+3}\ge\sqrt[2+3]{\left(\dfrac{10-t}2\right)^2\left(\dfrac t3\right)^3}$$
the maximum of the RHS will be attained if $$\dfrac{10-t}2=\dfrac t3\iff30-3t=2t\iff t=6$$
$$\iff2\sin A+3\cos A=0$$
$$\iff\dfrac{\sin A}3=\dfrac{\cos A}{-2}=\pm\sqrt{\dfrac{\sin^2A+\cos^2A}{(3)^2+(-2)^2}}$$
So, we need $\sin A=\dfrac{3b}{\sqrt{13}},\cos A=\dfrac{-2b}{\sqrt{13}}$ where $b=\pm1$
So, we only need opposite signs of $\sin A,\cos A$ |
$f(t)^2<1+2\int_{0}^{t}f(s)\:ds$ Then prove that $f(t)<1+t$ | Let $g(t)=\int_{0}^{t}f(s)ds$ and $h(t)=\sqrt{1+2g(t)}-t-1$; then $h$ is a continuously differentiable function.
I observe that
$$h'(t)=\frac{g'(t)}{\sqrt{1+2g(t)}}-1<0.$$
So, $$h(t)\leq h(0)=0\Rightarrow 1+2g(t)\leq (1+t)^2$$ and as a consequence
$$f(t)<1+t.$$ |
Help Splitting a group so that each meeting has limited overlap | I have 30 people who I need to meet over seven different meeting dates.
There are some simple calculations that help to sort out what is possible and what isn't, if your goal is "having each person meet each other" in a schedule where each group is of size $k$ and there are $q$ groups that meet each day/week of the schedule.
First $kq$ must be the number of people $v=30$ in your design. Second we see that on any given day, each person will meet $k-1$ other people who happen to be in the same group for that day.
In order for each person to meet all $v-1=29$ other people, we need a schedule of at least $\lceil (v-1)/(k-1) \rceil$ days (rounding up if that quotient isn't an exact integer).
I'd like to have groups of 6 groups of 5 meet each of the seven weeks.
The problem as originally written seemingly has $q=6$ groups of $k=5$ people each day. But then seven days (weeks) are not enough because $\lceil (v-1)/(k-1) \rceil = \lceil 29/4 \rceil = 8 \gt 7$.
Perhaps the wording is slightly garbled and the intent was $q=5$ groups of $k=6$ people each day. In that case seven days will be enough. Each day of meetings allows a person to meet $k-1=5$ other people, and $5\cdot 7 = 35 \gt 29$.
However some people will meet more than once, since $35$ is strictly greater than $29$. This excess (or "overlap" as the Question describes it) can be distributed in a kind of uniform way, though the total of duplicate meetings is not really minimized because it is known in advance.
This uniform way is to treat the $30$ people as $15$ pairs of people. We then assign these pairs of people, three at a time, to $q=5$ groups of $k=6$ people each day so that each pair meets each other pair exactly once during the seven days of the schedule. This amounts to substituting these pairs for the fifteen individuals in Kirkman's schoolgirl problem.
The result is that while partners in a pair will meet in all seven of their scheduled groups, otherwise two people will meet exactly once.
For convenience we list such seven days of five groups (six people in each group), where people are $0$ to $29$ and pairs are $(2n,2n+1)$ for $n=0,\ldots,14$:
Sunday
$ \begin{array}{lrrrrrrr}
(& 0, & 1, & 10, & 11, & 20, & 21 &) \\
(& 2, & 3, & 12, & 13, & 22, & 23 &) \\
(& 4, & 5, & 14, & 15, & 24, & 25 &) \\
(& 6, & 7, & 16, & 17, & 26, & 27 &) \\
(& 8, & 9, & 18, & 19, & 28, & 29 &) \end{array} $
Monday
$ \begin{array}{lrrrrrrr}
(& 0, & 1, & 2, & 3, & 8, & 9 &) \\
(& 4, & 5, & 6, & 7, & 12, &13 &) \\
(& 14, & 15, & 16, & 17, & 22, & 23 &) \\
(& 18, & 19, & 20, & 21, & 26, & 27 &) \\
(& 24, & 25, & 28, & 29, & 10, & 11 &) \end{array} $
Tuesday
$ \begin{array}{lrrrrrrr}
(& 2, & 3, & 4, & 5, & 10, & 11 &) \\
(& 6, & 7, & 8, & 9, & 14, & 15 &) \\
(& 16, & 17, & 18, & 19, & 24, &25 &) \\
(& 20, & 21, & 22, & 23, & 28, &29 &) \\
(& 26, & 27, & 0, & 1, & 12, & 13 &) \end{array} $
Wednesday
$ \begin{array}{lrrrrrrr}
(& 8, & 9, & 10, & 11, & 16, & 17 &) \\
(& 12, & 13, & 14, & 15, & 20, & 21 &) \\
(& 22, & 23, & 24, & 25, & 0, & 1 &) \\
(& 26, & 27, & 28, & 29, & 4, &5 &) \\
(& 2, & 3, & 6, & 7, & 18, &19 &) \end{array} $
Thursday
$ \begin{array}{lrrrrrrr}
(& 4, & 5, & 8, & 9, & 20, & 21 &) \\
(& 6, & 7, & 10, & 11, & 22, & 23 &) \\
(& 12, & 13, & 16, & 17, & 28, & 29 &) \\
(& 14, & 15, & 18, & 19, & 0, & 1 &) \\
(& 24, & 25, & 26, & 27, & 2, & 3 &) \end{array} $
Friday
$ \begin{array}{lrrrrrrr}
(& 8, & 9, & 12, & 13, & 24, & 25 &) \\
(& 10, & 11, & 14, & 15, & 26, & 27 &) \\
(& 16, & 17, & 20, & 21, & 2, & 3 &) \\
(& 18, & 19, & 22, & 23, & 4, & 5 &) \\
(& 28, & 29, & 0, & 1, & 6, & 7 &) \end{array} $
Saturday
$ \begin{array}{lrrrrrrr}
(& 20, & 21, & 24, & 25, & 6, & 7 &) \\
(& 22, & 23, & 26, & 27, & 8, & 9 &) \\
(& 28, & 29, & 2, & 3, & 14, & 15 &) \\
(& 0, & 1, & 4, & 5, & 16, & 17 &) \\
(& 10, & 11, & 12, & 13, & 18, & 19 &) \end{array} $ |
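A short script (purely a sanity check on the listing above) confirms the claimed properties: partners $(2n,2n+1)$ share a group on all seven days, and every other pair of people meets exactly once.

```python
from itertools import combinations
from collections import Counter

days = [
    [[0,1,10,11,20,21],[2,3,12,13,22,23],[4,5,14,15,24,25],
     [6,7,16,17,26,27],[8,9,18,19,28,29]],                      # Sunday
    [[0,1,2,3,8,9],[4,5,6,7,12,13],[14,15,16,17,22,23],
     [18,19,20,21,26,27],[24,25,28,29,10,11]],                  # Monday
    [[2,3,4,5,10,11],[6,7,8,9,14,15],[16,17,18,19,24,25],
     [20,21,22,23,28,29],[26,27,0,1,12,13]],                    # Tuesday
    [[8,9,10,11,16,17],[12,13,14,15,20,21],[22,23,24,25,0,1],
     [26,27,28,29,4,5],[2,3,6,7,18,19]],                        # Wednesday
    [[4,5,8,9,20,21],[6,7,10,11,22,23],[12,13,16,17,28,29],
     [14,15,18,19,0,1],[24,25,26,27,2,3]],                      # Thursday
    [[8,9,12,13,24,25],[10,11,14,15,26,27],[16,17,20,21,2,3],
     [18,19,22,23,4,5],[28,29,0,1,6,7]],                        # Friday
    [[20,21,24,25,6,7],[22,23,26,27,8,9],[28,29,2,3,14,15],
     [0,1,4,5,16,17],[10,11,12,13,18,19]],                      # Saturday
]

meets = Counter()
for day in days:
    for group in day:
        for pair in combinations(sorted(group), 2):
            meets[pair] += 1

partners = {(2*n, 2*n + 1) for n in range(15)}
assert all(meets[p] == 7 for p in partners)
assert all(meets[pair] == 1
           for pair in combinations(range(30), 2) if pair not in partners)
print("every non-partner pair meets exactly once")
```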
School-level problem on divisibility | $18189=9\times 2021$ is relatively prime to $10$.
So there exists $k$ such that $10^k\equiv 1\pmod{18189}$
$k=966=(2)(3)(7)(23)$ can be found by testing the divisors of the Euler totient $\phi(18189)=(2)^3(3)^2(7)(23)$.
Thus the repunit $\frac 19(10^k-1)$ is divisible by 2021. |
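In code (sympy's `n_order` computes the multiplicative order):

```python
from sympy.ntheory import n_order

k = n_order(10, 18189)       # multiplicative order of 10 modulo 18189
print(k)                     # 966
repunit = (10**k - 1) // 9   # the repunit with k ones
print(repunit % 2021)        # 0
```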
Is the number of maximal open sets in a Noetherian topological space finite? | No. For instance, $\Bbb A^n$ and $\Bbb P^n$ are T1, and therefore the maximal proper open sets are the complements of points. |
Matrix inverse property, show that $(I + uv^T)^{-1} = I - \frac{uv^T}{1+v^Tu}$ | Why not multiply the right side by $I+uv^t$ and see if you get the identity matrix? Note that $v^tu$ is a scalar. |
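Carrying the hint out, and using that $v^Tu$ is a scalar (so $uv^Tuv^T=(v^Tu)\,uv^T$):
$$(I+uv^T)\left(I-\frac{uv^T}{1+v^Tu}\right)=I+uv^T-\frac{uv^T+(v^Tu)\,uv^T}{1+v^Tu}=I+uv^T-\frac{(1+v^Tu)\,uv^T}{1+v^Tu}=I.$$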
Give an explicit isomorphism between two sets | I assume you are looking for bijections between the three solution sets of the equations (one for each equation).
Let $S_1,S_2,S_3$ be the three solution sets.
For all $i=1,2,3$ and each choice of $(w,x,y)\in \mathbb{R}^3$ there is a unique $z\in \mathbb{R}$ such that $(w,x,y,z)\in S_i$.
So let $f:S_1 \rightarrow S_2$ be defined by $f(w,x,y,z) = (w,x,y,\frac{-w+x-y}{5})$,
let $g:S_2 \rightarrow S_3$ be defined by $g(w,x,y,z) = (w,x,y,\frac{-2w+2x+y}{4})$,
and let $h:S_3 \rightarrow S_1$ be defined by $h(w,x,y,z) = (w,x,y,\frac{-w+x}{3})$. |
If $E(Y\mid X) = E(Y)$, do we have $X,Y$ independent? | No. Here is the joint distribution of a counterexample:
\begin{array}{cc|ccc}
&&&X&\\
&&-1&0&1\\\hline
&1& 1/6& 0&1/6 &\\
Y&0& 0& 1/3& 0&\\
&-1 & 1/6&0 &1/6
\end{array}
Note $E(Y|X=x)=0$ for all $x\in \{-1,0,1\}$. |
The pressure of a fluid as the change of internal energy. | Before getting to your question, note that $\rho$ is not a probability density. It is just density, i.e., it is a measure of concentration of mass per volume.
Now, in general, $\rho$ can have various values in various locations in the system. Therefore, properties such as pressure ($p$) and internal energy ($u$), which depend on $\rho$, also take different values in different locations in the system. So, to answer your question: yes, equation (1) gives you local pressure.
Also note that the internal energy of the whole system ($U$) is the sum of internal energies of infinitesimal elements that make the system, hence the integral that you mentioned. |
Definition of power set uses strict subset | Note that the author says:
"we write $\subset$ rather than $\subseteq$, so in our notation $X\subset X$ is a true statement."
They are using the symbol "$\subset$" as a synonym for "$\subseteq$." (This leaves the symbol "$\subsetneq$" for proper subsethood.) Munkres' topology textbook also follows this convention.
Personally I think this is a terrible choice since it clashes with "$<$ vs. $\le$," but they are being consistent and stating it explicitly. |
Variation of Nim-Game? | Yes, safely means that both players are playing optimally.
Start classifying numbers to winning and losing positions (a number $n$ corresponds to the game in which at the start there are $n$ stones).
$1,2$ are winning, thus $3$ is losing, thus $4,5$ are winning, thus $6$ is losing... |
Linear independence of $1, e^{it}, e^{2it}, \ldots, e^{nit}$ | For the proof, we will employ Euler's formula:
$$ e^{i\theta} = \cos{(\theta)} + i\sin{(\theta)}$$
We proceed by induction.
Base case:
The base case where $n = 1$ follows easily, for if
$$c_0 + c_1e^{it} = 0$$
for all $t \in [-\pi,\pi]$
then for $ t = 0 $ and $t = \pi$, we have the following two equations:
$$ c_0 + c_1 = 0$$
$$ c_0 - c_1 = 0$$
which implies that
$$ c_0 = c_1 = 0 $$
Inductive case:
For the inductive case, suppose there are scalars $c_0, c_1, \dots, c_n$ such that
$$ c_0 + c_1e^{it} + \cdots + c_ne^{nit} = 0$$
for all $t \in [-\pi,\pi]$.
Integrating both sides over $[-\pi,\pi]$, and using that $\int_{-\pi}^{\pi} e^{ikt}\,dt = 0$ for every integer $k \neq 0$, we have
$$2\pi c_0 = 0$$
so $$c_0 = 0$$
Thus,
$$ c_1e^{it} + c_2e^{2it} + \cdots + c_ne^{nit} = 0 $$
so we can factor out $e^{it}$ to get
$$ e^{it}(c_1 + c_2e^{it} + \cdots + c_ne^{(n-1)it}) = 0 $$
and since $e^{it} \ne 0$ for all $t$, this implies
$$c_1 + c_2e^{it} + \cdots + c_ne^{(n-1)it} = 0$$
in which case we employ the inductive hypothesis to get
$$ c_1 = c_2 = \cdots = c_n = 0 $$
and since $c_0 = 0$ as well, this ends the proof. |
Partial differential equation without substitution | The general solution of $u_t = u_{xx}$ for $t > 0$, $-\infty < x < \infty$ with initial data $u(0,x)=f(x)$ where $f$ is absolutely integrable is
$$
u(x,t)=\frac{1}{\sqrt{4\pi t}}\int_{-\infty}^{\infty}e^{-(x-y)^2/4t}f(y)dy.
$$
In your case $f(y)=y$ for $-1 < y < 1$ and is $0$ otherwise. You can first rewrite the above using $y=x+w$:
$$
u(x,t)=\frac{1}{\sqrt{4\pi t}}\int_{-1}^{1}e^{-(x-y)^2/4t}ydy \\
=\frac{1}{\sqrt{4\pi t}}\int_{-1-x}^{1-x}e^{-w^2/4t}(x+w)dw
$$
Then you can scale $v=w/\sqrt{4t}$:
$$
u(x,t)=\frac{1}{\sqrt{4\pi t}}\int_{\frac{-1-x}{\sqrt{4t}}}^{\frac{1-x}{\sqrt{4t}}}e^{-v^2}(x+\sqrt{4t}v)\sqrt{4t}dv
$$
Then you can integrate the term with $v$ in it because $\frac{d}{dv}e^{-v^2}=-2ve^{-v^{2}}$. |
Why can't we just multiply by the inverse of K in the equation of the eigenvalue problem? | First of all, in $$AK=\lambda K$$ $K$ is a vector, not a matrix, so we cannot form $K^{-1}$; we have to solve for a non-zero vector $K$ satisfying $$AK=\lambda K$$
Secondly, you mentioned that $$P(A)=P(\lambda)$$ and that is not the case, because $P(A)$ is a matrix while $P(\lambda)$ is a scalar. The Cayley-Hamilton theorem says that $P(A)=0$ for the characteristic polynomial $P$ of the matrix $A$; it does not assert the equality $P(A)=P(\lambda)$, which would equate objects of different kinds. |
Homotopy type of $GL^+(n)$, $SL(n)$ and $SO(n)$ | Your approach for the first question will work, but you need to note that the matrix $B$ you construct necessarily has positive eigenvalues: since $A^TA$ is symmetric and positive definite, it has positive eigenvalues, and thus so does its square root. (This is how you contract your space $P$ by the way - send the eigenvalues - which are positive reals - continuously to 1. The same argument does not work over the complex numbers, by the way!)
You should also be a little careful with matrix square roots and continuity. But I'll leave you to solve that problem.
For the second decomposition things are much simpler. Remember that $$\det(tA) = t^n\det(A)$$ for any $t \in \mathbb R$ and $A \in GL(n)$. So let's construct the maps $$f: GL^+(n) \to \mathbb R^+ \times SL(n), \qquad f(A) = \left(\sqrt[n]{\det A},\frac{1}{\sqrt[n]{\det A}}A\right)$$ and $$g: \mathbb R^+ \times SL(n) \to GL^+(n), \qquad g(t, B) = tB.$$ Then clearly $f$ and $g$ are inverses; they're both continuous and hence give the desired homeomorphisms. |
Continuity of function $f(x)=\lim_{n\to \infty} \frac{1}{1+x^n}$ | $$\lim_{n\to+\infty}\frac{1}{1+x^n}=\begin{cases}\frac{1}{2} & \text{if } x=1\\ 1 & \text{if } |x|<1\\ 0 & \text{if } |x|>1\\ \text{does not exist} & \text{if } x=-1\end{cases}$$
Thus $(f_n)_n$ has a pointwise limit on $(-\infty,-1)\cup (-1,+\infty)$ which is not continuous at $x=1$. |
How to find the cosets of the circle group T in the nonzero complex numbers C* | Let $\;z,w\in\Bbb C^*\;$ . Observe that
$$zT=wT\iff z^{-1}w\in T\iff\left|z^{-1}w\right|=1\iff|z|=|w|$$
Can you now see, both geometrically and algebraically, what the cosets in the quotient group $\;\Bbb C^*/T\;$ are? |
Exponential translation | To make the function pass through $\langle 0,4\rangle$, you need to choose $A$ and $B$ so that $f(0)=4$, i.e., so that $A2^0+B=4$. You already know what $B$ has to be in order to get the right horizontal asymptote, so there's only one choice for $A$; does it work, or does the resulting function fail to meet one of the criteria?
If you're allowed to choose $k$ as well as $A$ and $B$, note that it makes a difference whether $k$ is positive or negative. |
Continuity, Smash product, etc. | $\require{AMScd}$
You don't need the local compactness in that direction. You have $\{x\}\times A\subseteq g^{-1}(U)$ which is open as $g$ is continuous. Then you can apply the tube lemma, which gives you an open set $V\ni x$ such that $V\times A\subseteq g^{-1}(U)$.
For the other direction: One can show, using the local compactness of $K$, that the evaluation map $$\varepsilon:Y^K\times K\to Y\\(h,k)\mapsto h(k)$$
is continuous. This map factors through a map $\bar\varepsilon:Y^K\wedge K\to Y$, and the composition at the bottom row is the map whose continuity you want to show.
$$\begin{CD}
X\times K @>f\times 1>> Y^K\times K @>\varepsilon>> Y \\
@VVV @VVV @|\\
X\wedge K @>>> Y^K\wedge K @>>> Y
\end{CD}$$
Regarding your attempt for that direction: You cannot say that $V\times\{k\}$ is open; this is usually not an open set. |
Distance to a plane question? | Draw a perpendicular line (in both directions) of length 5 starting at your original plane. This describes the set of points at distance $5$ from your plane. Can you see it? |
Pointwise and Uniformly Convergent Series | Let $\mathbb{Q} = \{ r_1, r_2, \ldots\}$ be an enumeration of the rational numbers and take
$$f_n(x) = \begin{cases}1, &x = r_n \\ 0, & \text{otherwise} \end{cases}$$
We have pointwise convergence of the series
$$\sum_{n=1}^\infty f_n(x) = f(x) =\begin{cases}1, &x \in \mathbb{Q} \\ 0, & \text{otherwise} \end{cases}$$
The rational numbers in any interval $[a,b]$ can be enumerated by a subsequence $(r_{n_k})$. The series does not converge uniformly on the interval because for any $k \in \mathbb{N}$ we have $n_k \geqslant k$ such that
$$\left|\sum_{n=1}^{n_k} f_n(r_{n_{k+1}}) - f(r_{n_{k+1}})\right| = 1, $$
and it is not possible to find for any $0 < \epsilon < 1$ an integer $k$ such that for all $m >k$ and all $x \in [a,b],$
$$\left|\sum_{n=1}^{m} f_n(x) - f(x)\right| < \epsilon$$ |
How to prove that $v^TAw = w^TAv$ for symmetric $A$ | A quick and easy way to show this is to take the transpose of the left-hand side of your equation: since $v^TAw$ is a $1\times 1$ matrix, $v^TAw=(v^TAw)^T=w^TA^Tv=w^TAv$. |
Hausdorff condition: existence or choice? | This is one of the many cases where choice is used, but only to make the argument intuitive and making sense.
Use the common non-AC motto: If you can't choose, just take all of them!
So instead of choosing an open set for every pair, take all the open sets which witness a separation of a point in $C$ from a fixed point outside (i.e. they are disjoint from some nonempty open set which is a neighborhood of your fixed point), again this constitutes a cover, so there is a finite subcover which gives you a way to separate all points in $C$ from your fixed point. Note that the separation implies that the fixed point is outside the closure of the open sets, so it is not in the closed set which is the union of the closure of the open sets.
It follows that every point outside $C$ has an open neighborhood disjoint from $C$, so $C$ is a closed set. |
Did H. Lebesgue claim "1 is prime" in 1899? Source? | After further research, I suspect no source other than Derbyshire's text exists. If so, this would be just a very small slip in an excellent and well written text. I know I have made similar slips.
Here is my reasoning. First, Lebesgue published his first paper in 1898, three in 1899 and then two in 1900 (all most easily found in his collected works). None of these contain any references to the prime numbers:
Sur l'approximation des fonctions, Bull. Sci. Math. 22 (1898), 278--287.
Sur la définition de l'aire d'une surface, C. R. Math. Acad. Sci. Paris 129 (1899), 870--873.
Sur les fonctions de plusieurs variables, C. R. Math. Acad. Sci. Paris 128 (1899), 811--813.
Sur quelques surfaces non réglées applicables sur le plan, C. R. Math. Acad. Sci. Paris 128 (1899), 1502--1505.
Sur la définition de certaines intégrales de surface, C. R. Math. Acad. Sci. Paris 131 (1900), 867--870.
Sur le minimum de certaines intégrales, C. R. Math. Acad. Sci. Paris 131 (1900), 935--937.
So if Lebesgue stated that one was prime in 1899, it appears not to be in a published work of his. (This does not rule out lectures, interviews, works written about him by others...)
Second, all of the Internet references to this that we checked either cite Derbyshire, or were posted well after his work. For example, a friend of mine checked Wikipedia, and this statement about Lebesgue and unity appears in the English and Dutch entry for prime, but not the French, German, Spanish, Italian, Portuguese, Polish, Russian, Czech, or Swedish. In English it was added in 2006, after Derbyshire's 2003 text. In English only it is also found in the Wikipedia page for "Henri Lebesgue":
Is this proof that Derbyshire's text is the source? Absolutely not. Recall Derbyshire said that he cannot recall his source, but that there was one. So if you can find a source that predates his text, I'd like to know; otherwise I think it may just have been a small transcription error while writing this popular text. I have done the same myself.
As for the related question: who was the last mathematician of any importance who considered the number 1 to be a prime, my student and I settle on G. H. Hardy in our draft paper (http://arxiv.org/abs/1209.2007). Hardy's A Course of Pure Mathematics, 6th edition, 1933, presented Euclid's proof that there are infinitely many primes with a sequence of primes beginning with 1:
(This was changed in the next edition.) As discussed in our draft article, there is even a remnant of Hardy listing 1 as prime in the revised 10th edition of his text published recently.
We have a list of over 125 references pertinent to the question "is one prime" collected here: http://primes.utm.edu/notes/one.pdf |
Eigenspace of A Matrix | The equation $(A-I)x=0$ yields \begin{align}x_1+2x_2-x_3&=0\\ x_1+2x_2-x_3&=0\\-x_1-2x_2+x_3&=0,
\end{align}
and hence $$x_1+2x_2=x_3. $$ It follows that the eigenspace corresponding to $\lambda=1$ is the span of $$\begin{bmatrix}2\\-1\\0\end{bmatrix},\begin{bmatrix}1\\0\\1\end{bmatrix}. $$ Similarly, $(A-5I)x=0$ yields
\begin{align}
-3x_1+2x_2-x_3&=0\\
x_1-2x_2-x_3&=0\\
-x_1-2x_2-3x_3&=0,
\end{align}
and hence $$x_3=-x_1,\quad x_2=x_1. $$ It follows that the eigenspace corresponding to $\lambda=5$ is the span of $$\begin{bmatrix}1\\1\\-1\end{bmatrix}.$$ |
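The two systems above come from the matrix $A=\begin{bmatrix}2&2&-1\\1&3&-1\\-1&-2&2\end{bmatrix}$ (read off by adding $I$ and $5I$ back), so the eigenspaces are easy to verify numerically:

```python
import numpy as np

A = np.array([[2, 2, -1],
              [1, 3, -1],
              [-1, -2, 2]])

for lam, vecs in [(1, [[2, -1, 0], [1, 0, 1]]), (5, [[1, 1, -1]])]:
    for v in vecs:
        assert np.allclose(A @ np.array(v), lam * np.array(v))  # eigenvector check
print(np.linalg.eigvals(A))  # eigenvalues 1, 1, 5 (up to ordering/rounding)
```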
If $f:X \to Y $ is continuous then $f^{-1}(\emptyset)= \emptyset?$ | Continuity has nothing to do with either.
In the first case, no assumptions are needed except that $f$ be a mapping. If $f^{-1}(\varnothing) \neq \varnothing$, then there's $x \in f^{-1}(\varnothing)$ and this means that $f(x) \in \varnothing$ which is impossible.
In the second case, also only $f$ being a mapping is required for $1)$. However, for $2)$, we need injectivity.
Edit:
Let $f: X \to Y$ be a mapping. Then:
$f^{-1}(A \cap B) = f^{-1}(A) \cap f^{-1}(B)$
Where for any $M \subset Y$, $f^{-1}(M) := \{x \in X, f(x) \in M \}$.
Proof: For the first inclusion, let $x \in f^{-1}(A \cap B)$, then $f(x) \in A \cap B$, so $f(x) \in A$ and $f(x) \in B$, so that $x \in f^{-1}(A)$ and $x \in f^{-1}(B)$, i.e. $x \in f^{-1}(A) \cap f^{-1}(B)$. Now let $x \in f^{-1}(A) \cap f^{-1}(B)$, then $x \in f^{-1}(A)$ and $x \in f^{-1}(B)$, so $f(x) \in A$ and $f(x) \in B$, so that $f(x) \in A \cap B$, thus $x \in f^{-1}(A \cap B)$. |
Gambler's ruin: verifying Markov property | You can see that Markov property holds, using Durret's condition:
Let $X_n$ be the amount of money after $n$ plays. For any possible history of your wealth, $i_0, \ldots , i_{n-1},i,$ $$ P(X_{n+1}=i+1 | X_n=i , X_{n-1}=i_{n-1}, \ldots, X_0=i_0)=p $$
since to increase your wealth by $1$ unit you have to win the next bet.
Indeed, writing:
$$P(X_{n+1}= i+1 \mid X_n=i) = \frac{P(X_{n+1}= i+1 , X_n=i)}{P(X_n=i)} = \frac{\sum_{i_k,\, k \in \{0,\ldots,n-1\}}P(X_{n+1}=i+1 , X_n=i, X_{n-1}=i_{n-1}, \ldots, X_0=i_0)}{\sum_{i_k,\, k \in \{0,\ldots,n-1\}}P( X_n=i , X_{n-1}=i_{n-1}, \ldots, X_0=i_0)} = \frac{\sum_{i_k,\, k \in \{0,\ldots,n-1\}}p\cdot P( X_n=i, X_{n-1}=i_{n-1}, \ldots, X_0=i_0)}{\sum_{i_k,\, k \in \{0,\ldots,n-1\}}P( X_n=i , X_{n-1}=i_{n-1}, \ldots, X_0=i_0)} \\
=p\frac{\sum_{i_k,\, k \in \{0,\ldots,n-1\}} P( X_n=i, X_{n-1}=i_{n-1}, \ldots, X_0=i_0)}{\sum_{i_k,\, k \in \{0,\ldots,n-1\}}P( X_n=i , X_{n-1}=i_{n-1}, \ldots, X_0=i_0)} = p$$
where the sums run over all possible histories $(i_0,\ldots,i_{n-1})$, and we used that
$$ p = P(X_{n+1}=i+1 | X_n=i , X_{n-1}=i_{n-1}, \ldots, X_0=i_0) = \frac{P(X_{n+1}=i+1 , X_n=i , X_{n-1}=i_{n-1}, \ldots, X_0=i_0)}{P( X_n=i , X_{n-1}=i_{n-1}, \ldots, X_0=i_0)}$$
implies
$$ p\cdot P( X_n=i, X_{n-1}=i_{n-1}, \ldots, X_0=i_0) \\= P(X_{n+1}=i+1 , X_n=i , X_{n-1}=i_{n-1}, \ldots, X_0=i_0) $$ |
What are all $N$ that make $2^N + 1$ divisible by $3$? | You can easily prove by induction on $k$ that $2\times 4^k+1$ is divisible by $3$: it is for $k=0$, since $2\times 4^0+1=3$, and if it is for some $k$, then
$$2\times 4^{k+1}+1=2\times 4^k\times 4+1=4(2\times 4^k+1)-3$$
Therefore $2^{2m+1}+1$ is divisible by $3$ for every natural $m$. |
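A quick empirical scan agrees, and suggests the odd exponents are exactly the ones that work:

```python
print([N for N in range(1, 21) if (2**N + 1) % 3 == 0])
# [1, 3, 5, 7, 9, 11, 13, 15, 17, 19] -- the odd N
```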
Geometric random variables $X_1:G(p_1)$ $X_2:G(p_2)$ $X_3:G(p_3)$ are independent, prove the following : | Hint: condition on $X_2$: $$\begin{align*} \Pr[X_1 < X_2 < X_3] &= \sum_{k=0}^\infty \Pr[X_1 < k < X_3]\Pr[X_2 = k] \\ &\overset{\text{ind}}{=} \sum_{k=0}^\infty \Pr[X_1 < k]\Pr[k < X_3]\Pr[X_2 = k]. \end{align*}$$ |
Find $\int{\dfrac{x}{\sqrt{x+1}+\sqrt[3]{x+1}}dx}$ | In principle this is very straightforward. Multiply the polynomial directly. A binomial times a trinomial isn't too lengthy.
If you are looking for an elegant approach as opposed to a brute-force one, I'm not sure what would work. |
What's the derivative $\frac{dE[F]}{dF_i}$ of this function $E[F] = |\frac{dF}{dx}|^2$? | The notation leaves something to be desired. $\frac{dF}{dx}:=F_{i+1}-F_i$.
$$\frac{dE[F]}{dF_i}=\frac{d}{dF_i}|F_{i+1}-F_i|^2=\frac{d}{dF_i}\left(F_{i+1}^2-2F_{i+1}F_i+F_i^2\right)=-2F_{i+1}+2F_i$$
This doesn't seem right, so perhaps the absolute value really means: take the norm of the vector $\frac{dF}{dx}$ whose $i$'th component is $F_{i+1}-F_i$. Then
$$\left|\frac{dF}{dx}\right|^2=\sum_{i=1}^{n-1}(F_{i+1}-F_i)^2$$
and the set of terms involving $F_i$ is $(F_{i}-F_{i-1})^2+(F_{i+1}-F_i)^2$. Now take derivatives in $F_i$, to get $2(F_i-F_{i-1})-2(F_{i+1}-F_i)=4F_{i}-2F_{i-1}-2F_{i+1}$. |
Is every factorial totient? | Yes, look at https://artofproblemsolving.com/community/c6h140361. Basically, we can choose $$n=\frac{k!\cdot k\#}{\varphi( k\#) } ,$$
where $k\# = \prod_{p \leq k} p$ is the primorial. To see why this works, let
$$
k!=p_1^{e_1}p_2^{e_2}\cdots p_r^{e_r}
$$
be the unique prime factorization of $k!$; then $k\#=p_1p_2\cdots p_r$ and $\varphi(k\#)=(p_1-1)(p_2-1)\cdots (p_r-1)$. The $p_i-1$ are distinct integers, each at most $k$, so their product $\varphi(k\#)$ divides $k!$. Furthermore, write $\varphi(k\#)=p_1^{l_1}p_2^{l_2}\cdots p_r^{l_r}$; then $$n=p_1^{e_1+1-l_1}p_2^{e_2+1-l_2}\cdots p_r^{e_r+1-l_r},$$
and so it's easy to verify $\varphi(n)=p_1^{e_1}p_2^{e_2}\cdots p_r^{e_r}=k!.$ |
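For small $k$ the construction is quickly verified with sympy (`primorial(k, nth=False)` is the product of the primes up to $k$):

```python
from sympy import primorial, totient, factorial

def n_for(k):
    kprim = primorial(k, nth=False)          # k# = product of primes <= k
    return factorial(k) * kprim // totient(kprim)

for k in range(2, 8):
    n = n_for(k)
    assert totient(n) == factorial(k)        # phi(n) = k!
    print(k, n)
```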
Analytic continuation of Schwarz-Christoffel mappings? | My other answer is a little sloppy. I will probably delete it once this one is completed.
I'll look at a biholomorphic map from the upper half-plane to a polygon.
Let $a_1,\ldots,a_n \in (0,1)$, $\sum_m a_m < n-1$, and $b_1<\ldots < b_n\in \mathbb{R}$
$$f'(z) =\prod_{m=1}^n (z-b_m)^{a_m-1}, \qquad f(z) = \int_0^z f'(s)ds$$
In $\int_0^z f'(s)ds$ we need to choose a curve $0 \to z$. When integrating $f'$ we continue it analytically along the curve.
All the analytic continuations of $f'$ and $f$ are locally analytic away from $b_1,\ldots,b_n$.
As $|z| \to \infty$, $f'(z) = O(z^{-1-\varepsilon})$ thus $\lim_{R \to \infty} \int_0^t f'(R e^{it}) d(R e^{it}) = 0$ and $f$ is continuous at $\infty$.
Look at the upper semi-disks $D_R = \{ z, Im(z) > 0, |z|< R\}$ and the branch of $f$ analytic on $Im(z) >0$ and continuous on $Im(z) \ge 0$. Then $f(\partial D_R)$ is a closed curve and $\lim_{R \to \infty} f(\partial D_R) = f(\mathbb{R})$ is the boundary of the polygon $P$ with vertices $f(b_0),f(b_1),\ldots,f(b_n)$ where $f(b_0)= f(+\infty) = f(i\infty) = f(-\infty) $. Also the $\pi a_m$ are the angles at those vertices.
Since $f'$ doesn't vanish on $Im(z) > 0$ then $f$ maps biholomorphically $Im(z) > 0$ to the interior of $P$.
Rotation around a branch point : given a branch of $f$ analytic around $b_m+z$, continue analytically $f(b_m+ze^{2i\pi t})$ from $t=0$ to $t=1$ and set $f(b_m+ze^{2i\pi}) =\lim_{t \to 1} f(b_m+ze^{2i\pi t})$.
For $|z| < \min_{l \ne m} |b_l-b_m|$ then $f'(b_m+z e^{2i \pi}) = f'(b_m+z ) e^{2i \pi a_m}$ thus $$f(b_m+ze^{2i\pi}) = f(b_m+z)+\lim_{\epsilon \to 0} \int_z^{\epsilon z} f'(b_m+s)ds\qquad\qquad\qquad\qquad \\ \qquad\qquad\qquad\qquad + \int_0^{2\pi} f'(b_m+\epsilon z e^{it}) d(\epsilon e^{it})+\int_{\epsilon z}^{z} f'(b_m+se^{2\pi})ds$$
$$ = f(b_m+z)+ \int_z^0 f'(b_m+s)ds+\int_0^{z} f'(b_m+se^{2\pi})ds$$
$$ = f(b_m)+e^{2i \pi a_m} (f(b_m+z)-f(b_m))$$
Fix some branch $\tilde{f}$ of $f$ and let $G$ be the group generated by the affine transformations $g_m(f(z))= e^{2i \pi a_m} f(z) + (1-e^{2i \pi a_m})\tilde{f}(b_m)$.
Then the monodromy $M$ group of $f$ is the group generated by the affine transformations $\sigma_{m,s}(f(z)) = e^{2i \pi a_m} f(z) + (1-e^{2i \pi a_m})s(\tilde{f}(b_m))$ for each $m$ and each $s \in G$.
How do we get that for some choices of $a_m,b_m$ then $f^{-1}$ is periodic, or doubly-periodic, or is some sort of tilling of the plane ?
For example with $n=2,a_1 =a_2=1/2, b_1=1,b_2 = -1$ then $f(z) = \text{arcosh}(z)$ and $f^{-1}(z) = \cosh(z)$ is $2i\pi$ periodic because rotating around $z=1$ then rotating in the other direction around $z=-1$ yields $f(z) \mapsto -f(z)+2 f(1) \mapsto -(-f(z)+2 f(1))+2f(-1)= f(z)+C$ where $C=2f(-1)-2 f(1)=2i\pi$
so $f^{-1}(f(z)) = f^{-1}(f(z)+C) $ and $f^{-1}$ is $C= 2i\pi$ periodic. |
Seeking an analytic solution to a first order nonlinear ordinary differential equation | There are closed-form constant solutions (which may or may not be real, depending on the values of the parameters). Otherwise, you need to integrate
$$ \int \dfrac{dy}{\sqrt{B \sin^2(y) - C \sin(y) + D}} $$
which seems not to be elementary in general: Maple gives a rather complicated antiderivative
in terms of EllipticF, which is the incomplete elliptic integral of the first kind. |
Showing the ergodicity of a rotation on the unit sphere | Hint:
Show that the Fourier coefficients $c_k(\psi)$ of any $L^2$ function $\psi$ satisfy $c_k(\psi\circ R_\alpha)=\alpha^kc_k(\psi)$ for $k\in\mathbb Z$. After that use the fact that $\alpha^k\ne1$ (notice that if $\psi$ is invariant, the Fourier coefficients of $\psi$ and $\psi\circ R_\alpha$ are the same for all $k$). |
moving two x, y positions one unit length closer | It sounds like you want to keep $x_2$ and $y_2$ fixed and to find $x_1'$ and $y_1'$ so the direction is the same and the distance is reduced by $1$. So $d_{\text{old}}=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$ as you have, and $d_{\text{new}}=d_{\text{old}}-1$. If you let $d_{\text{ratio}}=\frac{d_{\text{new}}}{d_{\text{old}}}$, then $x_1'=x_2+d_{\text{ratio}}(x_1-x_2)$ and $y_1'=y_2+d_{\text{ratio}}(y_1-y_2)$ |
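In code (a small sketch; no guard is included for the degenerate case $d_{\text{old}}\le 1$):

```python
from math import hypot

def move_one_unit_closer(x1, y1, x2, y2):
    """Move (x1, y1) one unit toward the fixed point (x2, y2)."""
    d_old = hypot(x2 - x1, y2 - y1)
    ratio = (d_old - 1) / d_old
    return (x2 + ratio * (x1 - x2), y2 + ratio * (y1 - y2))

print(move_one_unit_closer(0.0, 0.0, 5.0, 0.0))  # (1.0, 0.0)
```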
How to show that $x_n = \sum_{k=1}^{n} \frac{\cos(k+1)x - \cos kx}{k}$ converges using Cauchy convergence theorem? | Write:
$$\frac{\cos(k+1)x-\cos(kx)}{k}=\frac{\cos((k+1)x)}{k+1}-\frac{\cos(kx)}{k}+\frac{\cos((k+1)x)}{k(k+1)}$$
Then if $n\geq m$
$$\sum_{k=m}^n \left(\frac{\cos((k+1)x)-\cos(kx)}{k}\right)=\frac{\cos((n+1)x)}{n+1}-\frac{\cos(mx)}{m}+\sum_{m}^n\frac{\cos((k+1)x)}{k(k+1)}$$
We have
$$\left|\frac{\cos((n+1)x)}{n+1}-\frac{\cos(mx)}{m}\right|\leq \frac{1}{n+1}+\frac{1}{m}$$
and
$$\left|\sum_m^n\frac{\cos((k+1)x)}{k(k+1)}\right|\leq \sum_{m}^{n}\left(\frac{1}{k}-\frac{1}{k+1}\right)=\frac{1}{m}-\frac{1}{n+1}$$
Hence
$$\left|\sum_{k=m}^n \left(\frac{\cos(k+1)x-\cos(kx)}{k}\right)\right|\leq \frac{2}{m}$$
and it is easy to finish. |
Conditional pdf of a random variable that is a function of other random variables | You need to say something about the relationship between $X$ and $Z$. Take $Y=g(X,Z)=Z$, then you don't generally have the pdf of $g(x,Z)=Z$ equaling the conditional pdf of $Z$ given $X=x$. |
A special type of linear regression | If your model is $$y=a_0+c_1x,\\z=b_0+c_1w$$ you can minimize
$$\sum(a_0+c_1x-y)^2+\sum(b_0+c_1w-z)^2,$$
giving the equations
$$\sum (a_0+c_1x-y)=0,\\\sum (b_0+c_1w-z)=0,\\\sum x(a_0+c_1x-y)+\sum w(b_0+c_1w-z)=0.$$
Now solve this $3\times3$ system for $a_0,b_0,c_1$.
The fit quality is still given by the ratio of the explained variance over the total variance. |
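Equivalently, the same $a_0, b_0, c_1$ come out of a single least-squares problem with a stacked design matrix: each $(x,y)$ point contributes a row $[1,0,x]$ and each $(w,z)$ point a row $[0,1,w]$, and the normal equations of that problem are exactly the $3\times3$ system above. A minimal NumPy sketch with made-up toy data:

```python
import numpy as np

# Toy data (made up): two lines that share the slope c1
x = np.array([0.0, 1.0, 2.0, 3.0]); y = np.array([1.1, 3.0, 4.9, 7.2])   # ~ 1 + 2x
w = np.array([0.0, 1.0, 2.0]);      z = np.array([-0.9, 1.0, 3.1])       # ~ -1 + 2w

# Unknowns (a0, b0, c1)
A = np.block([
    [np.ones((len(x), 1)), np.zeros((len(x), 1)), x[:, None]],
    [np.zeros((len(w), 1)), np.ones((len(w), 1)), w[:, None]],
])
rhs = np.concatenate([y, z])
(a0, b0, c1), *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(a0, b0, c1)   # roughly 1, -1, 2
```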
Mathematical notation: max of 5 values | Depending on context, there are a number of things you could write (and this list is not exhaustive):
\begin{align}
&\max\{t_1,\ldots,t_n\} \quad\text{ or }\quad \sup\big\{t_1,\ldots,t_n\big\}\\
&\max(t_1,\ldots,t_n)\\
&\max\big(\{t_1,\ldots,t_n\}\big)\\
&\max\big((t_1,\ldots,t_n)\big)\\
&\max\big(\langle t_1,\ldots,t_n\rangle\big)\\
&\max_{i = 1}^{n} t_i \quad\text{ or }\quad \max_{i = 1,\ldots,n} t_i\\
&\max\Big((t_i)_{i = 1,\ldots,n}\Big)\\
&\sup\big\{t_i \mid 1 \leq i \leq n\big\}.\\
\end{align}
If you want to complicate things and $t_i \geq 0$, there's also the $p$-norm:
$$\lim_{p \to \infty}\left(\sum_{i = 1}^{n}|t_i|^p\right)^{\frac{1}{p}}.$$
My personal preference is the first or the second from the list, depending on what is more readable in the context.
I hope this helps $\ddot\smile$
Edit:
Please note that the supremum operator $\sup$ is most often used in a context when we don't know or don't care if the maximum is attained by any element in the set, e.g. you cannot write $\max\left\{-\frac{1}{n} \mid n \geq 1\right\}$ because the set does not contain $0$, yet $\sup\left\{-\frac{1}{n} \mid n \geq 1\right\} = 0$. On the other hand, it is certainly not wrong to write $\sup A$ for a finite set $A$ or any other that contains its supremum, and authors often switch between them depending on whether they want to stress or deemphasize that property. |
Show that $\left| \frac{1}{2\pi}\int_s^t \frac{\sin Ku}{\sin u}du\right|\lt 1$ for $0<s<t\le\pi/2$ and $K>0$. | First, we may as well take $k> 1$ since you have already proved the conjecture for $0<k\leq 1$. Since the limit as $u$ approaches $0$
$$\lim_{u\to 0}\frac{\sin(ku)}{\sin(u)}=k$$
we know
$$\max_{u\in(0,\pi/2]}\left\{k,\frac{\sin(ku)}{\sin(u)}\right\}$$
exists and is well defined. Consider
$$u\in\left(0,\frac{\pi}{2k}\right]\subset \left(0,\frac{\pi}{2}\right)$$
Since $\frac{\pi}{2k}<\frac{\pi}{2}$ we know $\frac{\sin(ku)}{\sin(u)}$ is decreasing. This takes a second to prove, so start with the derivative:
$$\frac{d}{du}\left(\frac{\sin(ku)}{\sin(u)}\right)=\csc (u) (k \cos (k u)-\cot (u) \sin (k u))$$
If we assume this is zero for some $u$ in the interval, we get
$$0=\csc (u) (k \cos (k u)-\cot (u) \sin (k u))$$
Since $\csc(u)$ is positive for all $u\in(0,\pi/2)$, this implies
$$0=k \cos (k u)-\cot (u) \sin (k u)\Rightarrow k \cos (k u)=\cot (u) \sin (k u)$$
$$\Rightarrow 1=\frac{\cot (u) \tan (k u)}{k}$$
Differentiate both sides of this equation with respect to $k$ to get
$$0=\frac{\cot (u) \left(k u \sec ^2(k u)-\tan (k u)\right)}{k^2}$$
Again, $\cot(u)>0$ for $u\in (0,\pi/2)$. Therefore
$$0=k u \sec ^2(k u)-\tan (k u)$$
Set $y=ku$ (note that $y\in (0,\pi/2]$) to get
$$0=y \sec ^2(y)-\tan (y)$$
$$f(y)=\tan(y)\cos^2(y)-y=0$$
Note that $y=0$ solves this equation. Differentiating gives us
$$f'(y)=\cos^2(y)-\sin^2(y)-1=\cos(2y)-1$$
which is nonpositive and vanishes only at $y=\pi n$ for $n\in\mathbb{Z}$. Hence $f$ is strictly decreasing on $(0,\pi/2]$ (indeed $f(1)=-0.545351<0$), and since $f(0)=0$, we can conclude $f(y)\neq 0$ for $y\in (0,\pi/2]$. As this is a contradiction, we can conclude
$$0\neq \csc (u) (k \cos (k u)-\cot (u) \sin (k u))$$
for $u\in \left(0,\frac{\pi}{2k}\right]$. Since the derivative never vanishes on this interval, its sign is determined by its behavior for small $u$; setting $u=\epsilon$ for small $\epsilon>0$ and using $k>1$, we get
$$\csc (u) (k \cos (k u)-\cot (u) \sin (k u))=\frac{1}{3} \left(k-k^3\right) \epsilon+O\left(\epsilon ^3\right)$$
$$=-\frac{1}{3} \left(k^3-k\right) \epsilon+O\left(\epsilon ^3\right)<0$$
We conclude $\frac{\sin(ku)}{\sin(u)}$ is decreasing for $u\in \left(0,\frac{\pi}{2k}\right]$. For
$$u\in \left(\frac{\pi}{2k},\frac{\pi}{2}\right]$$
we use the fact that $\frac{2}{\pi}x\leq\sin(x)$ for $x\in[0,\pi/2]$ to get
$$\frac{\sin(ku)}{\sin(u)}\leq \frac{\pi}{2u}\leq k$$
Thus,
$$\max_{u\in(0,\pi/2]}\left\{k,\frac{\sin(ku)}{\sin(u)}\right\}=k$$
Now, define the zeros of the function $\sin(ku)$ as
$$a_n=\frac{n\pi}{k}$$
and define the integral
$$I_n=\int_{a_n}^{a_{n+1}}\frac{\sin(ku)}{\sin(u)}du$$
From the previous section of our proof, we have that
$$|I_n|=\left|\int_{a_n}^{a_{n+1}}\frac{\sin(ku)}{\sin(u)}du\right|\leq \int_{a_n}^{a_{n+1}}\left|\frac{\sin(ku)}{\sin(u)}\right|du \leq \int_{a_n}^{a_{n+1}}kdu=k(a_{n+1}-a_n)=\pi$$
Let us consider what happens when we add $I_n+I_{n+1}$ for odd $n$. Then
$$I_n+I_{n+1}=\int_{a_n}^{a_{n+1}}\frac{\sin(ku)}{\sin(u)}du+\int_{a_{n+1}}^{a_{n+2}}\frac{\sin(ku)}{\sin(u)}du$$
Note that $I_n$ is negative since $n$ odd corresponds to an odd zero of $\sin(ku)$ (implying that the function is decreasing). The same logic applies to say that $I_{n+1}$ is positive. Since $\sin(u)$ is increasing, we can bound $I_n$ from below with $a_{n}^{-1}$ and bound $I_{n+1}$ from above with $a_{n+1}^{-1}$. Thus
$$|I_n+I_{n+1}|=\left|\int_{a_n}^{a_{n+1}}\frac{\sin(ku)}{\sin(u)}du+\int_{a_{n+1}}^{a_{n+2}}\frac{\sin(ku)}{\sin(u)}du\right|$$
$$\leq \left|\frac{1}{a_n}\int_{a_n}^{a_{n+1}}\sin(ku)du+\frac{1}{a_{n+1}}\int_{a_{n+1}}^{a_{n+2}}\sin(ku)du\right|=\frac{2}{\pi n(n+1)}$$
Then we can bound any finite sum of these types of integrals by
$$\sum_{n\text{ odd}}^m|I_n+I_{n+1}|<\sum_{n\text{ odd}}^\infty \frac{2}{\pi n(n+1)}=\frac{2\ln 2}{\pi}<\frac{\pi}{2}$$
In much the same we, we can bound the evens by
$$\sum_{n\text{ even}}^m|I_n+I_{n+1}|<-2 + \frac{1}{\pi} + \pi<\frac{\pi}{2}$$
This one is slightly more difficult as $I_0$ and $I_1$ have to be counted separately due to division by zero otherwise. We now get to the finale of the proof. Obviously, there exist $i,j$ such that
$$a_i\leq s<a_{i+1}\text{ and }a_j<t\leq a_{j+1}$$
There are four cases: $i,j$ even, $i,j$ odd, $i$ even and $j$ odd, $i$ odd and $j$ even. We will present the first case as the rest follow in a similar manner. We have
$$\left|\frac{1}{2\pi}\int_{s}^{t}\frac{\sin(ku)}{\sin(u)}du\right|=\frac{1}{2\pi}\left|\int_{s}^{a_{i+1}}\frac{\sin(ku)}{\sin(u)}du+\sum_{m=i+1}^{j-2}\int_{a_m}^{a_{m+1}}\frac{\sin(ku)}{\sin(u)}du+\int_{a_{j-1}}^{t}\frac{\sin(ku)}{\sin(u)}du\right|$$
Now, note that
$$0<\int_{s}^{a_{i+1}}\frac{\sin(ku)}{\sin(u)}du\leq \pi$$
$$0>\int_{a_{j-1}}^{t}\frac{\sin(ku)}{\sin(u)}du>- \pi$$
This implies
$$-\pi\leq \int_{s}^{a_{i+1}}\frac{\sin(ku)}{\sin(u)}du+\int_{a_{j-1}}^{t}\frac{\sin(ku)}{\sin(u)}du\leq \pi$$
Thus
$$\left|\frac{1}{2\pi}\int_{s}^{t}\frac{\sin(ku)}{\sin(u)}du\right|=\frac{1}{2\pi}\left|\int_{s}^{a_{i+1}}\frac{\sin(ku)}{\sin(u)}du+\sum_{m=i+1}^{j-2}\int_{a_m}^{a_{m+1}}\frac{\sin(ku)}{\sin(u)}du+\int_{a_{j-1}}^{t}\frac{\sin(ku)}{\sin(u)}du\right|$$
$$\leq \frac{1}{2\pi}\left| \pi \right|+\frac{1}{2\pi}\cdot \frac{2\ln 2}{\pi} =\frac{1}{2}+\frac{\ln 2}{\pi ^2}\approx 0.570<1$$
From this, a proof strategy emerges: bound $s$ and $t$ between the zeros of $\sin(k u)$, bound the oscillating part between the zeros, show the ends are also bounded.
I would also like to add a clarification: there may be some bounds in this proof which are not as tight as possible, or I may have missed some details along the way. Unfortunately, these types of proofs are the most dull things to write, and I found my mind wandering for a significant portion of it. If anyone spots an error or a place that it could be tightened, please let me know. Of course, I am confident in the overall method, but have no wish to triple check all details. |
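Not part of the proof, but a quick Monte Carlo sanity check of the claimed bound is easy to run. A sketch with SciPy, sampling $0<s<t\le\pi/2$ and a range of $k$:

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)

worst = 0.0
for _ in range(500):
    k = rng.uniform(0.05, 80.0)
    s, t = np.sort(rng.uniform(1e-4, np.pi / 2, size=2))
    val, _ = quad(lambda u: np.sin(k * u) / np.sin(u), s, t, limit=800)
    worst = max(worst, abs(val) / (2 * np.pi))

print(worst)  # stays well below 1 in these samples
```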
Does a compact manifold require non zero Ricci curvature? | No. In fact compact Riemannian manifolds can have zero sectional curvature; take flat tori $\mathbb{R}^n/\Gamma$ where $\mathbb{R}^n$ has the usual Euclidean metric and $\Gamma$ is a lattice.
Apparently every compact Lie group is the isometry group of some compact Riemannian manifold, but I don't know how cavalier one can be about specifying its curvature. |
Doubt regarding polar coordinates | Some polar plots are symmetric across the $y$ axis because
the radius is the same for $\theta = \frac\pi2 + \alpha$
as it is for $\theta = \frac\pi2 - \alpha,$ that is, because
$r\left(\frac\pi2 + \alpha\right) = r\left(\frac\pi2 - \alpha\right).$
This one is not like that. Except for certain special values of $\theta,$
$$r\left(\frac\pi2 + \alpha\right) =
\cos\left(\frac\pi4 + \frac\alpha2\right) \neq
\cos\left(\frac\pi4 - \frac\alpha2\right) =
r\left(\frac\pi2 - \alpha\right).$$
Instead, the $y$-axis symmetry comes from a set of points with negative radius values that are symmetric with the points with positive radius values.
In fact for $0 \leq \theta \leq 2\pi$ you get only points on or above the $x$ axis:
$r \geq 0$ when $0 \leq \theta \leq \pi$;
and $r \leq 0$ when $\pi \leq \theta \leq 2\pi.$
What you might try is to compute $(x,y)$ for $\theta = \pi + \alpha$
and for $\theta = \pi - \alpha$
(angles measured symmetrically outward from the middle angle $\theta = \pi$):
\begin{align}
&& \theta &= \pi - \alpha & \theta &= \pi + \alpha \\
x &= r\cos\theta
& x &= \cos\left(\frac{\pi - \alpha}2\right)\cos(\pi - \alpha)
& x &= \cos\left(\frac{\pi + \alpha}2\right)\cos(\pi + \alpha) \\
y &= r\sin\theta
& y &= \cos\left(\frac{\pi - \alpha}2\right)\sin(\pi - \alpha)
& y &= \cos\left(\frac{\pi + \alpha}2\right)\sin(\pi + \alpha) \\
\end{align}
Alternatively, take angles measured symmetrically inward from the two ends of
the range $[0,2\pi]$, that is,
$\theta = \alpha$ and for $\theta = 2\pi - \alpha$:
\begin{align}
&& \theta &= \alpha & \theta &= 2\pi - \alpha \\
x &= r\cos\theta
& x &= \cos\left(\frac{ \alpha}2\right)\cos( \alpha)
& x &= \cos\left(\frac{ 2\pi - \alpha}2\right)\cos( 2\pi - \alpha) \\
y &= r\sin\theta
& y &= \cos\left(\frac{ \alpha}2\right)\sin( \alpha)
& y &= \cos\left(\frac{ 2\pi - \alpha}2\right)\sin( 2\pi - \alpha) \\
\end{align} |
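One can also check both claims numerically: every plotted point has $y\ge 0$, and replacing $\theta$ by $2\pi-\theta$ (which flips the sign of $r$) reflects each point across the $y$ axis. A small NumPy sketch (mine, not from the computation above):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 100001)
r = np.cos(theta / 2)
x, y = r * np.cos(theta), r * np.sin(theta)

# All points lie on or above the x axis ...
print(y.min() >= -1e-12)

# ... and the point set is symmetric across the y axis:
# theta -> 2*pi - theta sends r -> -r, hence (x, y) -> (-x, y).
theta_ref = 2 * np.pi - theta
r_ref = np.cos(theta_ref / 2)
x_ref, y_ref = r_ref * np.cos(theta_ref), r_ref * np.sin(theta_ref)
print(np.allclose(x_ref, -x), np.allclose(y_ref, y))
```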
I want to show that $f(x)=x.f(1)$ where $f:R\to R$ is additive. | HINTS:
Look at $0$ first: $f(0)=f(0+0)=f(0)+f(0)$, so $f(0)=0=0\cdot f(1)$.
Use induction to prove that $f(n)=nf(1)$ for every positive integer $n$, and use $f(0)=0$ to show that $f(n)=nf(1)$ for every negative integer as well.
$f(1)=f\left(\frac12+\frac12\right)=f\left(\frac13+\frac13+\frac13\right)=\dots\;$.
Once you’ve got it for $f\left(\frac1n\right)$, use the idea of (2) to get it for all rationals.
Then use continuity at a point. |
Measures which are absolutely continuous with respect to a Riemannian measure | As I already pointed out in the comments, one nice way of expressing the set $P$ of probability measures that are absolutely continuous with respect to some $\sigma$-finite measure $\mu$ is
$$
P = \bigg\{f \, d\mu \, \mid \, f\in L^1(\mu) \text{ with } f \geq 0\text{ and } \int f \, d\mu =1 \bigg\}.
$$
The inclusion "$\supset$" is immediate and the reverse inclusion is a consequence of the Radon Nikodym theorem.
Because manifolds are second countable and since the measure generated by the volume form is locally finite, the measure $m_g$ is $\sigma$-finite. Hence, the above identity also holds for $d\mu = dm_g$. |
Showing that $z^3 e^z = 1$ has infinitely many solutions | Can Picard's Big Theorem also be used to show that $z^3 e^z =1$ has infinitely many solutions?
Sure.
We can rewrite this equation as $e^z - z^{-3} =0$ so that we could let $f(z)= e^z - z^{-3}$ and see that it has an essential singularity at $\infty$, correct?
Yes, but it's more direct in the given form:
$$f(z) = z^3e^z$$
has only one zero, with multiplicity $3$. Hence $0$ is the exceptional value. |
Prove that $L(V,W)$ forms a vector space | In order to show that $(S + T) \in L(V,W)$, we need to show that $S+T$ satisfies the defining properties of $L(V,W)$. Clearly, $S + T$ takes a vector in $V$ and gives us a vector in $W$, so what we need to show is that $S + T$ is also linear.
That is, we need to show that for $v_1,v_2 \in V$ and $k \in F$, we have:
$$
(S + T)(v_1 + v_2) = (S + T)v_1 + (S + T)v_2\\
(S + T)(k v_1) = k\cdot (S + T)(v_1)
$$
In order to show that this is the case, use the definition of $S+T$ and the fact that both $S$ and $T$ are linear.
After that, do the same for $a T$. That is, show that
$$
(aT)(v_1 + v_2) = (aT)v_1 + (aT)v_2\\
(aT)(k v_1) = k\cdot (aT)(v_1)
$$ |
How to solve $x^6-x^5+x^4-x^3+x^2-x+1=0$? | You got the right expression. $x^7+1=0$
The roots of the equation will be like $$x= \cos (\frac{2k\pi}{7})+ i\sin (\frac{2k\pi}{7}) , 0\le k\le6$$
Note the stress on "be like": this formula will not give you the exact roots. You have to change it a bit to account for the roots of negative unity.
EDIT: Complete solution follows:
$$x^7+1=0$$ or,
$$(x+1)(x^6-x^5+x^4-x^3+x^2-x+1)=0$$
This shows that $x^7+1=0$ has exactly 1 real root, $x=-1$ (from the factor $x+1$), and 3 pairs of complex conjugate roots.
Hence now I can write
$$x^7+1=0$$ or,
$$x^7=-1$$ or,
$$x^7=\cos \pi + i\sin \pi = \cos (2k+1)\pi + i\sin (2k+1)\pi , 0 \le k\le 6 $$ or,
$$x=[\cos (2k+1)\pi + i\sin (2k+1)\pi]^{\frac{1}{7}} , 0 \le k\le 6$$ or,
$$x=\cos \frac{(2k+1)\pi}{7}+ i\sin \frac{(2k+1)\pi}{7} , 0 \le k\le 6$$
From the comment by Macavity: however, you have a spurious root included, $k=3$. By multiplying by $x+1$ you introduced this root, which is not a root of the original polynomial. So there are no real roots for the polynomial, only complex ones. Hence the final solutions are as follows: $$x=\cos \frac{(2k+1)\pi}{7}+ i\sin \frac{(2k+1)\pi}{7} , 0 \le k\le 6 \,\ \text{and} \,\ k \not = 3$$
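A quick numerical check of the final answer (my own addition): each listed root should satisfy the original sextic to machine precision.

```python
import cmath

roots = [cmath.exp(1j * (2 * k + 1) * cmath.pi / 7) for k in range(7) if k != 3]
for r in roots:
    # evaluate x^6 - x^5 + x^4 - x^3 + x^2 - x + 1 at each root
    value = r**6 - r**5 + r**4 - r**3 + r**2 - r + 1
    print(abs(value))  # ~ 1e-15 for all six roots
```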
Polynomial is prime when evaluated at prime numbers | We will prove that only these 2 cases are possible as you mentioned above.
First of all, if $f(x)$ is a constant polynomial with the above property,then obviously
$f(x)=p$ with $p$ being prime and we are ok.
Suppose now that $a_n\neq 0$ and $f(x)=a_nx^n+\cdots +a_0$ be such a polynomial you are asking.
We have two cases:
1) There is a prime $p$ which is a prime divisor of the polynomial at some value, and $p$ is not a divisor of $a_0$.
Suppose that $f(k)\equiv0 \pmod p$ for a suitable integer $k$.
$p$ does not divide $a_0$, so we can easily see that $\gcd(p,k)=1$.
From Dirichlet's theorem we know that there exist infinitely many primes of the form $q=k+n\cdot p$.
So $f(q)=f(k+n\cdot p)\equiv0 \pmod p$, which means that $f(q)=p$, because by assumption $f(q)$ must be prime.
But there are infinitely many primes $q$ of the above form, so $f$ must take the value $p$ infinitely often, which means that $f(x)=p$ for all $x$, which is a contradiction
(because as we assumed $f(x)$ is not constant)
2) Every prime divisor of $f(x)$ is a divisor of $a_0$
We know that every polynomial which is not constant has infinitely many prime divisors, so $a_0$ would have infinitely many prime divisors, which forces $a_0=0$.
This means that $x$ always divides $f(x)$. In particular, $q$ divides $f(q)$ for every prime $q$; since $f(q)$ must itself be prime, this forces $f(q)=q$.
Since $f(x)-x$ then has infinitely many roots, this proves that $f(x)=x$ for every $x$. |
Testing goodness of fit using Kolmogorov-Smirnov test | Let's review some basics on which you may be confused.
In a p-value test we have a hypothesis, called the null hypothesis, from which probabilities are computable, then use a p-value to quantify how well the hypothesis "fits" some data. The p-value is the probability, conditional on the null hypothesis, that the data would be at least as surprising, relative to the expectations of said hypothesis, as it in fact was. (When I say that, I'm glossing over the difference between 1- and 2-tailed tests; in 1-tailed tests, the p-value is the probability that the data would be at least this surprising, in the direction in which it is surprising.)
In this example, the null hypothesis is that the distributions are the same, so $p$ is already the p-value for that hypothesis. The only event that we know has probability $1-p$ is that the data would be less "surprising", again conditional on the null hypothesis. We certainly can't do another test in which the role of null hypothesis switches to the opposite of what it was before; "the distributions differ" doesn't allow us to calculate p-values.
I think that answers your second question. As for the first, the reason we talk about "failing to reject" the null hypothesis is because you can't prove it, only disprove it or be impressed it survived the effort. As for what you can do in this example, I suggest you double-check a p-value of 1. Such a p-value means the data is as consistent with the distributions being the same as it could possibly get. With data drawn from a continuous distribution, this is suspicious. |
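To make the discussion concrete, here is a minimal sketch with SciPy's two-sample Kolmogorov-Smirnov test (the data are simulated, purely for illustration):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 300)
b = rng.normal(0.0, 1.0, 300)   # drawn from the same distribution as a

stat, p = ks_2samp(a, b)
print(stat, p)        # large p: we fail to reject "the distributions are the same"

c = rng.normal(0.5, 1.0, 300)   # shifted distribution
print(ks_2samp(a, c))           # small p: evidence against the null hypothesis
```

Note the asymmetry discussed above: a large $p$ never proves the distributions are equal; it only means the test found no evidence that they differ.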
Condition number for polynomial interpolation matrix | Surely you cannot obtain any useful upper bound from $h$. If you translate all data points $x_i$ by the same amount $t$, then $h$ remains unchanged but the matrix becomes closer and closer to singular when $t\to\infty$.
For lower bounds, if you google "vandermonde matrix condition number", there are a handful of results that seem relevant. E.g., the first result, How Bad are Vandermonde Matrices by Victor Pan (2015, arXiv:1504.02118) looks quite interesting. To quote the author:
Our results … indicate that the condition number of an $n\times n$ Vandermonde matrix is exponential in $n$ unless its knots are more or less equally spaced on or about the unit circle $C(0,1)$.
He has derived several lower bounds for the condition number. You may see if they are useful or if you can relate them to your $h$. |
On miscellaneous questions about perfect numbers II | It might be helpful to rewrite your $(1)$ as
$$\frac{\varphi(\operatorname{rad}(n))}{\operatorname{rad}(n)}=\frac{\varphi(\sigma(n))}{\sigma(n)}.\tag{1}$$
OEIS sequence A027598 is the sequence of numbers satisfying
$$\operatorname{rad}(n)=\operatorname{rad}(\sigma(n)).\tag{2}$$
If a number $m$ is in this sequence we have
$$\frac{\varphi(\operatorname{rad}(m))}{\operatorname{rad}(m)}=\frac{\varphi(\operatorname{rad}(\sigma(m)))}{\operatorname{rad}(\sigma(m))}.$$
We can use the equation
$\frac{\varphi(\operatorname{rad}(n))}{\operatorname{rad}(n)}=\frac{\varphi(n)}{n}$
to obtain
$$\frac{\varphi(\operatorname{rad}(m))}{\operatorname{rad}(m)}=\frac{\varphi(\sigma(m))}{\sigma(m)}$$
which means $m$ satisfies $(1)$. Thus any number in the sequence satisfies $(1)$.
Suppose $m$ is a (different) number satisfying $(1)$. Then we have
$$\frac{\varphi(\operatorname{rad}(m))}{\operatorname{rad}(m)}=\frac{\varphi(\operatorname{rad}(\sigma(m)))}{\operatorname{rad}(\sigma(m))}.\tag{3}$$
Suppose also $(2)$ does not hold for $m$ so that
$$\frac{\operatorname{rad}(m)}{p_1 p_2 p_3...p_r}=\frac{\operatorname{rad}(\sigma(m))}{q_1 q_2 q_3...q_r}\tag{4}$$
where the $q_i$ are prime factors of $\sigma(m)$ but not of $m$, and the $p_i$ are prime factors of $m$ but not of $\sigma(m)$.
We can rewrite $(3)$ as
$$\frac{\operatorname{rad}(\sigma(m))}{\operatorname{rad}(m)}=\frac{\varphi(\operatorname{rad}(\sigma(m)))}{\varphi(\operatorname{rad}(m))}$$
and using $(4)$ we can write it as
$$\frac{q_1 q_2 q_3...q_r}{p_1p_2p_3...p_r}=\frac{\varphi(\operatorname{rad}(\sigma(m)))}{\varphi(\operatorname{rad}(m))}$$
where the fraction on the left is in lowest terms. But from $(4)$ we also have
$$\frac{\varphi(\operatorname{rad}(\sigma(m)))}{\varphi(\operatorname{rad}(m))}=\frac{(q_1-1)(q_2-1)(q_3-1)...(q_r-1)}{(p_1-1)(p_2-1)(p_3-1)...(p_r-1)}.$$
Combining these two we obtain
$$\frac{q_1 q_2 q_3...q_r}{p_1p_2p_3...p_r}=\frac{(q_1-1)(q_2-1)(q_3-1)...(q_r-1)}{(p_1-1)(p_2-1)(p_3-1)...(p_r-1)}.$$
This cannot be true because the fraction on the left is in lowest terms, and the fraction on the right has a smaller numerator and a smaller denominator. Thus our assumption was false and no such $p_i$ and $q_i$ exist. Therefore $(2)$ holds.
We have now shown that if $(1)$ holds for a number $m$ then $(2)$ holds i.e. $m$ is in A027598. We have also shown previously that if $m$ is in A027598 then $(1)$ holds. Thus the sequence of integers satisfying $(1)$ is the same sequence as A027598. |
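A small SymPy script (my own addition, with hypothetical helper names) that spot-checks the proved equivalence of $(1)$ and $(2)$ for small $n$:

```python
from sympy import primefactors, totient, divisor_sigma

def rad(n):
    """Radical of n: the product of its distinct prime factors."""
    r = 1
    for q in primefactors(n):
        r *= q
    return r

def sat1(n):
    # phi(rad(n)) / rad(n) == phi(sigma(n)) / sigma(n), cross-multiplied
    r, s = rad(n), divisor_sigma(n)
    return totient(r) * s == totient(s) * r

def sat2(n):
    return rad(n) == rad(divisor_sigma(n))

for n in range(1, 2000):
    if sat1(n) != sat2(n):
        print("mismatch at", n)
print("checked n < 2000")
```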
How many significant figures are needed in base 2? | Basically you want to estimate the $\delta$ such that $2^{x+\delta} - 2^x = 1$. This means $2^\delta = 1+\dfrac{1}{2^x}$, so that $\delta$ might be estimated as $\dfrac{1}{2^x \times \ln2}$, or, taking the upper bound for $x$, $\delta$ might be estimated as $\dfrac{1}{2^{501} \times \ln2}$.
The number of required significant digits (after the decimal point) is about $-\log_{10}\delta = \log_{10}(2^{501}) + \log_{10}\ln2$, which is about 151, plus-minus a digit. Or, if you're working in base 2, the number of required significant digits is about $-\log_{2}\delta = \log_{2}(2^{501}) + \log_{2}\ln2$, which is about 501.
Given such a number of digits after the decimal point, changing the remaining digits won't change $2^x$ by more than $1$, so that, if $2^x$ is an integer, you can say what integer it is.
However, it is impossible to tell for sure whether $2^x$ is an integer, given only its rounded value, independent of the accuracy, as adding a small value beyond the accuracy limits to $x$ will turn $2^x$ from integer to non-integer and vice versa. |
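The digit counts above are one-line computations; a short sketch:

```python
import math

x_max = 501
delta = 1.0 / (2**x_max * math.log(2))   # gap in x that changes 2^x by about 1

print(-math.log10(delta))  # ~ 150.6 decimal digits after the point
print(-math.log2(delta))   # ~ 500.5 binary digits after the point
```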
union of two disjoint topological spaces is a topological space? | Yes, if we have disjoint spaces $(X, \tau_X)$ and $(Y, \tau_Y)$ we can form $X \cup Y$ and give this new set the topology $$\tau:=\{U \cup V: U \in \tau_X, V \in \tau_Y\}$$
and it's rather easy to check that $\tau$ is a well-defined topology on $X \cup Y$ (and this space is often denoted $X \coprod Y$ or $X \oplus Y$, depending on your text book tradition) and gives us the so-called sum topology on $X \cup Y$. It further has the property that $X$ and $Y$ are open in $X \coprod Y$ and as a subspace of that sum (or co-product) $X$ and $Y$ keep their old topology (so $X \to X \coprod Y, x \to x$ and the similar map $Y \to X \coprod Y, y \to y$ are open embedding maps) and the sum topology is the largest topology such that those two above maps are continuous.
So the statement is right in the sense that there is a standard accepted topology on the union of two disjoint topological spaces.
Also, if both $X$ and $Y$ are both subspaces of some larger space $Z$, $X \cup Y$ is a valid subspace and thus a space in its own right too, regardless of disjointness, even. |
What is the upper bound for this 2-norm | The constraint $\|\mathbf x\|_2^2 \leq \alpha \|\mathbf y\|_2^2$ is inactive when $\alpha$ is large enough so that the solution to the unconstrained problem is the solution to the constrained problem.
Then, the unconstrained problem does not depend on $\alpha$. |
Probability of not guessing any number in lotto drawing 7/34 | Method 1: You select seven of the $34$ numbers. If you do not select any of the winning numbers, you must select seven of the other $34 - 7 = 27$ numbers.
The number of ways of selecting seven of the $34$ numbers is $$\binom{34}{7}$$ The number of ways of selecting none of the winning numbers is $$\binom{27}{7}$$ Thus, the probability of selecting none of the winning numbers is $$\frac{\dbinom{27}{7}}{\dbinom{34}{7}}$$
Method 2: Since there are $27$ non-winning numbers among the $34$, the probability that the first number you pick is not among the winning numbers is $27/34$. That leaves six more numbers to pick and $26$ non-winning numbers among the $33$ remaining. Hence, the probability that the second number you pick is also a non-winning number is $26/33$.
Continuing in this way, the probability that you pick none of the seven winning numbers is $$\frac{27}{34} \cdot \frac{26}{33} \cdot \frac{25}{32} \cdot \frac{24}{31} \cdot \frac{23}{30} \cdot \frac{22}{29} \cdot \frac{21}{28}$$ |
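Both methods give the same number, which is easy to confirm:

```python
from math import comb

p_method1 = comb(27, 7) / comb(34, 7)

p_method2 = 1.0
for i in range(7):
    p_method2 *= (27 - i) / (34 - i)

print(p_method1, p_method2)  # both ~ 0.165
```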
How to find the exact value of an upper bound for an exponential random variable | $f(x)=\lambda e^{-\lambda x}, $ $\lambda=5$
find $P(X>20)=\int_{20}^{\infty} \lambda e^{-\lambda x}\, dx = e^{-20\lambda}$ |
What is $SO(n+1)/O(n)$ as a topological space? | You are correct that the quotient can be identified with $\Bbb R \Bbb P^n$.
Hint The usual action of $SO(n + 1)$ on $\Bbb R^{n + 1}$ is linear, so it descends to an action on the space $\Bbb R \Bbb P^n$ of lines through the origin in $\Bbb R^{n + 1}$, and this action is transitive. Thus, if we fix an element $\ell \in \Bbb R \Bbb P^n$, we can identify $\Bbb R \Bbb P^n$ with $SO(n + 1) / H$, where $H < SO(n + 1)$ is the stabilizer in $SO(n + 1)$ of $\ell$.
If we take $\ell$ to be the span $[e_0]$ of the first standard basis element, $g \in SO(n + 1)$ fixes $\ell$ iff it has the form $$\pmatrix{\ast & \ast \\ 0 & \ast},$$ and the condition $g \in SO(n + 1)$ implies that any such $g$ has the form $$\pmatrix{\det A & 0 \\ 0 & A}, \qquad A \in O(n); $$ conversely, any matrix of that form is in $H$. |
Are $1$-tilting modules always faithful? | Indeed every tilting module is faithful.
Note that a general module $M$ is faithful if and only if there is an embedding of the algebra $A$ into $M^n$ for some $n$, see for example Lemma 5.5. in the book Frobenius algebras I by Skowronski and Yamagata. |
Level Curves of a plane | The family of planes $ax+by+cz=d$ define, implicitly, a family of functions $z = f(x,y) = \frac{1}{c}[d-ax-by]$, for $c \neq 0$. To find the level curves of this function, you fix $z=k$ for an arbitrary constant $k$. This leads to: \begin{eqnarray}
ck = d-ax-by \tag{1} \label{1}
\end{eqnarray}
Because $ck$ and $d$ are constants, we can write $ck-d = \alpha$, and (\ref{1}) becomes:
\begin{eqnarray}
\alpha = -ax-by \tag{2} \label{2}
\end{eqnarray}
This is a family of lines with slope $-a/b$, since we can rewrite (\ref{2}) as:
\begin{eqnarray}
y = -\frac{a}{b}x -\frac{\alpha}{b}
\end{eqnarray} |
Help with an Algebraic Proof that has Two Variables and a Square Root | Since $x$ and $y$ are positive, you can consider $a=\sqrt{x}$ and $b=\sqrt{y}$, so your inequality becomes
$$
a^2+b^2\ge ab
$$
which is equivalent to
$$
a^2-ab+b^2\ge0
$$
Since $a+b>0$, you can multiply by $a+b$, getting
$$
a^3+b^3\ge0
$$
which is certainly satisfied.
Alternatively, start from $(a-b)^2\ge0$, which becomes
$$
a^2+b^2\ge 2ab
$$
Since $2ab>ab$, you have $a^2+b^2>ab$ and so
$$
x+y>\sqrt{xy}
$$ |
Implicit Differentiation: Multiple Solutions? | You have already implicitly derived the expression for $\frac{dy}{dx}$. You can set that equal to 1 and simplify to get $x = -y$. Substitute that in the original equation and you have a quadratic that can be easily solved.
I got the solutions $x = \pm 8/3$ and $y = \mp 8/3$.
The book seems to be making a big deal out of a very simple question. |
What is the meaning of the quantity $S_n/\sqrt{n\log\log n}$ in the law of iterated logarithm? | The mean of a random walk is expected to be 0 and its standard deviation is expected to grow like $\sqrt{n}$. The same holds for sums of iid random variables with mean 0 and variance 1.
What happens if we just add them? By the law of the iterated logarithm, the peak of the sum should grow like
$$ \sup \left[ Y_1 + \dots + Y_n\right] \approx \sqrt{ 2 n \log \log n} $$
Intuitively $\log n$ is the number of "bits" or "digits" of $n$.
Another issue is the difference between "almost sure" and "in probability" convergence.
For almost sure convergence: for almost every sequence of "coin-flips", the sequence $X_1, X_2, \dots$ converges to $X$.
For convergence in probability, you measure each $X_n$ individually: $\mathbb{P}\big[|X_n - X | > \epsilon\big] \to 0$
According to Wikipedia, $\frac{1}{\sqrt{2n \log\log n}} \left[ Y_1 + \dots + Y_n\right]$ converges to $0$ in probability but not almost surely (the almost-sure $\limsup$ is $1$)... so although random walks are thought to grow like $\sqrt{n}$, the "peak value" grows slightly faster
Proof of the Bonferroni inequalities | I'll write my version of proving Bonferroni's inequality for both the odd and even cases (in the context of a probability space $(\Omega,\mathcal{A},\mathbb{P})$): if $A_i\in\mathcal{A}$ is a sequence of events, then
$$P\left(\bigcup_{i=1}^{n}A_i\right)\ge\sum_{i=1}^{n} P(A_i)-\sum_{i<j}P(A_i\cap A_j)$$
$$P\left(\bigcup_{i=1}^{n}A_i\right)\leq\sum_{i=1}^{n} P(A_i)-\sum_{i<j}P(A_i\cap A_j)+\sum_{i<j<k}P(A_i\cap A_j\cap A_k)$$
For any $x\in\Omega$, claim that $$\chi_{x\in\bigcup_{i=1}^{n}A_i}\ge\sum_{i}\chi_{x\in A_i}-\sum_{i<j}\chi_{x\in A_i\cap A_j}$$
Indeed, suppose $x$ lies in exactly $m$ of the sets $A_i$: for $m=0$ both sides vanish, and for $m\ge 1$ the inequality reads $1\ge m-\binom{m}{2}$, which holds (why?). Then integrate the above inequality w.r.t. the probability measure to get the result.
For the other case, stop the binomial expansion at third order, and use the same reasoning. For extension to higher order cases, it will be the same idea. Hope this helps. |
Help with a certain derivative of a log of sum -- problem rooted in Poisson distributions that I think may have an elegant solution | For a fixed integer value of $y \geq 1$, we have
$$\sum_{m=0}^\infty \frac{x^m m^y}{m!}=e^x x P_{y-1}(x)$$
$$ \log\left( \sum_{m=0}^\infty \frac{x^m m^y}{m!}\right)=x+\log(x)+\log\big(P_{y-1}(x)\big)$$
$$\frac d {dx}\log\left( \sum_{m=0}^\infty \frac{x^m m^y}{m!}\right)=1+\frac 1 x+\frac{P'_{y-1}(x) } {P_{y-1}(x) }$$
So, I think that the problem is "just" to identify what are the polynomials $P_{y-1}(x)$. Listing them
$$\left(
\begin{array}{cc}
1 & 1 \\
2 & x+1 \\
3 & x^2+3 x+1 \\
4 & x^3+6 x^2+7 x+1 \\
5 & x^4+10 x^3+25 x^2+15 x+1 \\
6 & x^5+15 x^4+65 x^3+90 x^2+31 x+1 \\
7 & x^6+21 x^5+140 x^4+350 x^3+301 x^2+63 x+1 \\
8 & x^7+28 x^6+266 x^5+1050 x^4+1701 x^3+966 x^2+127 x+1 \\
9 & x^8+36 x^7+462 x^6+2646 x^5+6951 x^4+7770 x^3+3025 x^2+255 x+1 \\
10 & x^9+45 x^8+750 x^7+5880 x^6+22827 x^5+42525 x^4+34105 x^3+9330 x^2+511 x+1
\end{array}
\right)$$
The coefficients seem to be the Stirling numbers of the second kind.
$$\mathcal{S}_{y+1}^{(y)}=\{1,3,6,10,15,21,28,36,45\}$$
$$\mathcal{S}_{y+2}^{(y)}=\{1,7,25,65,140,266,462,750\}$$
$$\mathcal{S}_{y+3}^{(y)}=\{1,15,90,350,1050,2646,5880\}$$
$$\mathcal{S}_{y+4}^{(y)}=\{1,31,301,1701,6951,22827\}$$
$$\mathcal{S}_{y+5}^{(y)}=\{1,63,966,7770,42525\}$$
I think that what are the polynomials is now clear. |
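A short SymPy check of this identification (the helper `P` is mine): build $P_{y-1}$ from Stirling numbers of the second kind, compare against the table, and spot-check the series identity numerically.

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')

def P(y):
    """Conjectured P_{y-1}(x) = sum_{k=1}^{y} S(y, k) * x^(k-1)."""
    return sp.expand(sum(stirling(y, k) * x**(k - 1) for k in range(1, y + 1)))

print(P(5))  # x**4 + 10*x**3 + 25*x**2 + 15*x + 1, matching the table

# Numerical spot check of  sum_m x^m m^y / m!  ==  e^x * x * P_{y-1}(x)
x0 = sp.Rational(17, 10)
lhs = sum(x0**m * m**5 / sp.factorial(m) for m in range(1, 200))
rhs = sp.exp(x0) * x0 * P(5).subs(x, x0)
print(sp.N(lhs - rhs))  # ~ 0
```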
Almost Disjoint Functions | Just construct $\{f_\xi:\xi<\kappa^+\}$ recursively. At each stage you have at most $\kappa$ functions already defined, so by your hypothesis you can add a new one that’s almost disjoint from the ones that you already have.
In other words, at stage $\eta<\kappa^+$ you have $\mathscr{F}_\eta=\{f_\xi:\xi<\eta\}$. $|\mathscr{F}_\eta|\le\kappa$, so by your hypothesis there is a function $f_\eta:\kappa\to\kappa$ that is almost disjoint from $f_\xi$ for each $\xi<\eta$. Now continue.
Added: For the sake of completeness I’m adding the construction of $f_\eta$. Since $|\mathscr{F}_\eta|\le\kappa$, we can re-index it as $\{g_\xi:\xi<\alpha\}$ for some $\alpha\le\kappa$. Now define
$$f_\eta:\kappa\to\kappa:\xi\mapsto\sup\{g_\zeta(\xi)+1:\zeta\le\xi,\ \zeta<\alpha\}\;;$$
then for each $\zeta<\alpha$ we have $f_\eta(\xi)>g_\zeta(\xi)$ whenever $\xi\ge\zeta$, and hence $f_\eta$ is almost disjoint from $g_\zeta$. |